Assume we have a normal distribution $q$ that is parameterized by $\theta$, specifically $q_{\theta}(x) = N(\theta,1)$. We want to solve the following problem
$$
\min_{\theta} \quad E_q[x^2]
$$
We want to understand how the reparameterization trick helps us calculate the gradient of this objective, $\nabla_{\theta} E_q[x^2]$.
One way to calculate $\nabla_{\theta} E_q[x^2]$ is the score-function (log-derivative) estimator
$$
\begin{aligned}
\nabla_{\theta} E_q[x^2] &= \nabla_{\theta} \int q_{\theta}(x)\, x^2\, dx \\
&= \int x^2\, \frac{\nabla_{\theta} q_{\theta}(x)}{q_{\theta}(x)}\, q_{\theta}(x)\, dx \\
&= \int q_{\theta}(x)\, \nabla_{\theta} \log q_{\theta}(x)\, x^2\, dx \\
&= E_q[x^2\, \nabla_{\theta} \log q_{\theta}(x)]
\end{aligned}
$$
For our example where $q_{\theta}(x) = N(\theta,1)$, we have $\nabla_{\theta} \log q_{\theta}(x) = x - \theta$, so this method gives
$$
\nabla_{\theta} E_q[x^2] = E_q[x^2 (x-\theta)]
$$
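As a sanity check, this estimator is straightforward to simulate with Monte Carlo sampling. Below is a minimal sketch in NumPy; the values of `theta` and `n_samples` are arbitrary illustrative choices, not part of the setup above. Since $E_q[x^2] = \theta^2 + 1$, the true gradient is $2\theta$.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 2.0          # illustrative value; the true gradient is 2 * theta = 4
n_samples = 100_000  # arbitrary Monte Carlo budget

# Score-function (log-derivative) estimator:
# grad ~= mean over samples of x^2 * grad_theta log q_theta(x) = x^2 * (x - theta)
x = rng.normal(loc=theta, scale=1.0, size=n_samples)
grad_score = np.mean(x**2 * (x - theta))

print(grad_score)  # close to 4, but with a fairly large Monte Carlo error
```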
The reparameterization trick is a way to rewrite the expectation so that the distribution with respect to which we take the expectation is independent of the parameter $\theta$. To achieve this, we need to make the stochastic element in $q$ independent of $\theta$. Hence, we write $x$ as
$$
x = \theta + \epsilon, \quad \epsilon \sim N(0,1)
$$
Then, we can write
$$
E_q[x^2] = E_p[(\theta+\epsilon)^2]
$$
where $p$ is the distribution of $\epsilon$, i.e., $N(0,1)$. Now we can write the gradient of $E_q[x^2]$ as follows
$$
\nabla_{\theta} E_q[x^2] = \nabla_{\theta} E_p[(\theta+\epsilon)^2] = E_p[2(\theta+\epsilon)]
$$
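Since $E_p[\epsilon] = 0$, this expectation is exactly $2\theta$, which matches differentiating $E_q[x^2] = \theta^2 + 1$ directly. The reparameterized estimator is just as easy to simulate; here is a minimal sketch in NumPy mirroring the one above (again, `theta` and `n_samples` are arbitrary illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 2.0          # same illustrative value as before
n_samples = 100_000  # same Monte Carlo budget as the score-function sketch

# Reparameterized estimator: x = theta + eps with eps ~ N(0, 1), so
# grad_theta (theta + eps)^2 = 2 * (theta + eps), averaged over samples
eps = rng.normal(loc=0.0, scale=1.0, size=n_samples)
grad_reparam = np.mean(2.0 * (theta + eps))

print(grad_reparam)  # close to 2 * theta = 4, with much smaller spread
```

With the same sample budget, the reparameterized estimate typically fluctuates far less around $2\theta$ than the score-function estimate, which is the usual practical reason for preferring the reparameterization trick when it is available.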