Seeing the other answers above, I wanted to make a quick clarification. When you take the derivative or the integral of some function, you do it with respect to a specific variable.
Consider $f: \mathbb{R}^2 \to \mathbb{R}$ such that $f(x,y) = z$. There are two input variables, so there are two partial derivatives (with the subscript denoting the variable held constant) given by
\begin{equation}
{f_y}'(x) = \frac{\partial f}{\partial x} \mathrm{~~~and~~~} {f_x}'(y) = \frac{\partial f}{\partial y}
\end{equation}
The gradient is, by definition, a vector which yields the direction and rate of the greatest increase. Here, the gradient is a two-dimensional vector with components along the $x$-axis and the $y$-axis, given by
\begin{equation}
\nabla f(x,y) = \begin{bmatrix} \frac{\partial f}{\partial x} \\ \frac{\partial f}{\partial y} \end{bmatrix}
\end{equation}
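For a concrete computation, here is a short SymPy sketch that builds both partial derivatives and the gradient vector; it uses $f(x,y) = xy + y$, the example that appears later in this answer.

```python
# Symbolic partial derivatives and gradient for f(x, y) = x*y + y,
# the example function used later in this answer.
import sympy as sp

x, y = sp.symbols('x y')
f = x*y + y

df_dx = sp.diff(f, x)   # partial derivative with respect to x (y held constant)
df_dy = sp.diff(f, y)   # partial derivative with respect to y (x held constant)
grad_f = sp.Matrix([df_dx, df_dy])

print(df_dx)   # y
print(df_dy)   # x + 1
print(grad_f)  # Matrix([[y], [x + 1]])
```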
Consider the curve $\mathcal{C}$ parametrized over the closed interval $[a,b]$ by
\begin{equation}
\mathbf{r}(t) = \begin{bmatrix} x(t) \\ y(t) \end{bmatrix} \mathrm{~~~with~} t \in [a,b]
\end{equation}
We then evaluate the line integral along $\mathcal{C}$ with
\begin{align}
f(\mathbf{r}(b)) - f(\mathbf{r}(a)) & = \int_a^b \left[f(\mathbf{r}(t))\right]' \; dt \\
& = \int_a^b \left[{f_y}'(x(t)) \cdot x'(t) + {f_x}'(y(t)) \cdot y'(t)\right] \; dt \\
& = \int_a^b \left[\frac{\partial f}{\partial x} \cdot \frac{dx}{dt} + \frac{\partial f}{\partial y} \cdot \frac{dy}{dt}\right] \; dt \\
& = \int_a^b \nabla f(\mathbf{r}(t)) \bullet {\mathbf{r}'}(t) \; dt \\
& = \int_\mathcal{C} \nabla f(\mathbf{u}) \bullet \mathbf{du}
\end{align}
And with $\mathbf{p} = \mathbf{r}(a)$ and $\mathbf{q} = \mathbf{r}(b)$ the gradient theorem is given by
\begin{equation}
\int_\mathcal{C} \nabla f(\mathbf{u}) \bullet \mathbf{du} = f(\mathbf{q}) - f(\mathbf{p})
\end{equation}
The total differential of $f(x,y)$ is given by
\begin{equation}
df = \frac{\partial f}{\partial x} \; dx + \frac{\partial f}{\partial y} \; dy
\end{equation}
Note that integrating with respect to multiple variables at once has no meaning; consider the following
\begin{equation}
\int df = \int \frac{\partial f}{\partial x} \; dx + \int \frac{\partial f}{\partial y} \; dy
\end{equation}
This is invalid: the integral sign has no meaning on its own and must always be paired with a dummy integration variable.
Instead, you may want to integrate the gradient along a linear path in $\mathbb{R}^2$ as shown in the gradient theorem above.
Consider a linear path $\gamma$ from $(0,0)$ to $(x_0,y_0)$ given by
\begin{equation}
\gamma(t) = \begin{bmatrix} x(t) \\ y(t) \end{bmatrix} = \begin{bmatrix} tx_0 \\ ty_0 \end{bmatrix} \mathrm{~~~with~} t \in [0,1] \mathrm{~and~} (x_0,y_0) \in \mathbb{R}^2
\end{equation}
and clearly we have $f(\gamma(0)) = f(0,0)$ and $f(\gamma(1)) = f(x_0,y_0)$.
Thus per the gradient theorem above we have
\begin{equation}
f(x_0,y_0) - f(0,0) = \int_{\gamma} \nabla f(\mathbf{u}) \bullet \mathbf{du}
\end{equation}
The gradient theorem works in this case because we integrate the gradient along both the $x$-axis and the $y$-axis simultaneously, following a single straight path in $\mathbb{R}^2$ with the parametrization given above.
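This recovery of $f$ from its gradient can also be checked symbolically. The SymPy sketch below integrates $\nabla f$ along the straight path $\gamma(t) = (t x_0, t y_0)$ for the example $f(x,y) = xy + y$ (for which $f(0,0) = 0$) and recovers $f(x_0, y_0)$.

```python
# Recovering f(x0, y0) from its gradient along the straight path
# gamma(t) = (t*x0, t*y0), t in [0, 1], for the example f(x, y) = x*y + y.
import sympy as sp

t, x, y, x0, y0 = sp.symbols('t x y x0 y0')
f = x*y + y

gx, gy = t*x0, t*y0                            # gamma(t)
integrand = (sp.diff(f, x).subs({x: gx, y: gy}) * sp.diff(gx, t)
             + sp.diff(f, y).subs({x: gx, y: gy}) * sp.diff(gy, t))
recovered = sp.integrate(integrand, (t, 0, 1)) # equals f(x0, y0) - f(0, 0)

print(sp.expand(recovered))                    # x0*y0 + y0
```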
There is also another method which relies on solving for remaining parts after integrating the partial derivatives.
Let us take a concrete example with $f(x,y) = xy + y$ so that we have
\begin{equation}
{f_y}'(x) = \frac{\partial f}{\partial x} = y \mathrm{~~~and~~~} {f_x}'(y) = \frac{\partial f}{\partial y} = x + 1
\end{equation}
In fact, we easily see that
\begin{align}
f(x,y) & \neq \int \frac{\partial f}{\partial x} \; dx + \int \frac{\partial f}{\partial y} \; dy \\
& = \int y \; dx + \int (x + 1) \; dy \\
& = 2xy + y
\end{align}
Because we integrate along both the $x$-axis and the $y$-axis separately, the term $xy$, which is bound to both variables, is counted twice.
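A two-line SymPy check makes the double counting explicit for this example:

```python
# Why naively integrating both partials and adding them fails:
# for f(x, y) = x*y + y the sum gives 2*x*y + y, not f.
import sympy as sp

x, y = sp.symbols('x y')
f = x*y + y

naive = sp.integrate(sp.diff(f, x), x) + sp.integrate(sp.diff(f, y), y)
print(sp.expand(naive))        # 2*x*y + y
print(sp.expand(naive - f))    # x*y  (the doubly counted cross term)
```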
The correct approach is either to use the gradient theorem as shown above or to integrate with respect to one chosen variable instead, and then solve for the remaining parts.
When integrating with respect to $x$, we have
\begin{align}
f(x,y) & = \int \frac{\partial f}{\partial x} \; dx + \psi(y) + K_1 \\
& = xy + \psi(y) + K_1
\end{align}
where $\psi(y)$ is a function of $y$ only and $K_1$ is a constant.
Note carefully that the constant of integration here is any differentiable function of $y$ denoted by $\psi(y)$ since any such function would vanish upon partial differentiation with respect to $x$ (just as any pure constant $C$ would vanish upon ordinary differentiation).
On the other hand, when integrating with respect to $y$, we have
\begin{align}
f(x,y) & = \int \frac{\partial f}{\partial y} \; dy + \phi(x) + K_2 \\
& = xy + y + \phi(x) + K_2
\end{align}
where $\phi(x)$ is a function of $x$ only and $K_2$ is a constant.
Equating the two results we obtain
\begin{align}
xy + \psi(y) + K_1 & = xy + y + \phi(x) + K_2 \\
\psi(y) + K_1 & = y + \phi(x) + K_2 \\
\psi(y) & = y + \underbrace{\phi(x) + K_2 - K_1}_{\mathrm{constant}}
\end{align}
and because $\phi(x) + K_2 - K_1$ must be constant with respect to $y$ (the left-hand side does not depend on $x$), we see that $\psi(y) = y$ up to an additive constant that can be absorbed into $K_1$.
The same result is obtained by solving for $\psi(y)$ and differentiating with respect to $y$, that is
\begin{equation}
\psi'(y) = \frac{\partial}{\partial y} \left[f(x,y) - \int \frac{\partial f}{\partial x} \; dx - K_1\right]
\end{equation}
so that we have
\begin{align}
\psi'(y) & = \frac{\partial f}{\partial y} - \frac{\partial}{\partial y} \int \frac{\partial f}{\partial x} \; dx \\
& = (x + 1) - \frac{\partial}{\partial y} \left(xy\right) \\
& = 1
\end{align}
and integrating with respect to $y$ we obtain $\psi(y)$ with
\begin{align}
\int \psi'(y) \; dy & = \int dy \\
& = y + C
\end{align}
where the constant $C$ can be absorbed into $K_1$, recovering $f(x,y) = xy + y$ as expected.