My answer only requires $X_2=X$ in distribution, not necessarily a.s.
Before giving the general answer, I first focus on the square-integrable case, for two reasons:
- It is very easy to understand
- I started there, and that is how I realised we do not need $X_2=X$ a.s.
First answer: when $X$ is square-integrable
In that case, the conditional expectation can be interpreted as an orthogonal projection in $L^2$. The information you speak of can be measured by the variance: the smaller the loss of information, the higher the remaining variance. If the variance stays the same, then there is no loss of information, and the projection is the identity. Writing this out, we have:
$$
\operatorname{Var}(X)=\mathbb E[(X-X_1)^2]+\operatorname{Var}(X_1)=\mathbb E[(X-X_1)^2]+\mathbb E[(X_1-X_2)^2]+\operatorname{Var}(X_2).
$$
But $X$ and $X_2$ have the same distribution, so $\operatorname{Var}(X_2)=\operatorname{Var}(X)$. The two nonnegative terms in the middle must then vanish; in particular $\mathbb E[(X_1-X_2)^2]=0$, or equivalently $X_1=X_2$ a.s.
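For completeness, here is a minimal check of the first equality, which is the Pythagorean identity for the orthogonal projection $\mathbb E[\cdot\vert\mathcal F]$ (the second equality is the same computation with $X_1$, $X_2$, $\mathcal G$ in place of $X$, $X_1$, $\mathcal F$):
$$
\begin{aligned}
\operatorname{Var}(X)&=\mathbb E\big[\big((X-X_1)+(X_1-\mathbb E[X])\big)^2\big]\\
&=\mathbb E[(X-X_1)^2]+2\,\mathbb E\big[(X_1-\mathbb E[X])\,\mathbb E[X-X_1\vert\mathcal F]\big]+\operatorname{Var}(X_1)\\
&=\mathbb E[(X-X_1)^2]+\operatorname{Var}(X_1),
\end{aligned}
$$
where we used $\mathbb E[X_1]=\mathbb E[X]$, conditioned the cross term on $\mathcal F$ (pulling out the $\mathcal F$-measurable factor $X_1-\mathbb E[X]$), and finally $\mathbb E[X-X_1\vert\mathcal F]=0$.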
Second answer: the general case
As $X_2=\mathbb E[X_1\vert\mathcal G]$, it suffices to show that $X_1$ is a.s. equal to a $\mathcal G$-measurable random variable. To show the latter, we can use the equality case of conditional Jensen's inequality. Indeed, if a strictly convex function $\varphi:\mathbb R\to\mathbb R_+$ satisfies, almost surely,
$$
\varphi(\mathbb E[X_1\vert\mathcal G])=\mathbb E[\varphi(X_1)\vert\mathcal G],
$$
then $X_1$ is a.s. equal to a $\mathcal G$-measurable random variable. So to conclude, it suffices to find a $\varphi$ that satisfies the equality above.
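In case it is useful, here is a minimal sketch of this equality case, under the extra assumption that $\varphi$ is differentiable with bounded derivative (the $\varphi$ chosen below has both properties); this is not the most general statement. Set
$$
Z:=\varphi(X_1)-\varphi(X_2)-\varphi'(X_2)(X_1-X_2).
$$
By strict convexity, the graph of $\varphi$ lies strictly above each of its tangent lines away from the tangency point, so $Z\ge0$, with $Z=0$ exactly where $X_1=X_2$. Since $\varphi'(X_2)$ is bounded and $\mathcal G$-measurable,
$$
\mathbb E[Z\vert\mathcal G]=\mathbb E[\varphi(X_1)\vert\mathcal G]-\varphi(X_2)-\varphi'(X_2)\,\mathbb E[X_1-X_2\vert\mathcal G]=\mathbb E[\varphi(X_1)\vert\mathcal G]-\varphi(\mathbb E[X_1\vert\mathcal G]),
$$
which vanishes a.s. under the equality hypothesis. Taking expectations, $\mathbb E[Z]=0$ with $Z\ge0$, hence $Z=0$ a.s., that is, $X_1=X_2$ a.s.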
Clearly we need that $\varphi(X_1)$ be integrable. We know that $\vert X_1\vert$ is, but $\vert\cdot\vert$ is not strictly convex. Let us then choose a strictly convex map dominated by $\vert\cdot\vert$ up to an additive constant, for instance $\varphi:x\mapsto\vert x\vert+\exp(-\vert x\vert)+1$, which satisfies $\varphi\le\vert\cdot\vert+2$. Now, conditional Jensen's inequality (applied twice, together with the monotonicity of $\mathbb E[\cdot\vert\mathcal G]$) gives, a.s.,
$$
\varphi(X_2)=\varphi(\mathbb E[X_1\vert\mathcal G])\le\mathbb E[\varphi(X_1)\vert\mathcal G]=\mathbb E[\varphi(\mathbb E[X\vert\mathcal F])\vert\mathcal G]\le\mathbb E[\mathbb E[\varphi(X)\vert\mathcal F]\vert\mathcal G].
$$
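Before exploiting this chain, a quick check that the chosen $\varphi$ indeed qualifies: it is differentiable, with
$$
\varphi'(x)=\operatorname{sign}(x)\big(1-\exp(-\vert x\vert)\big),\qquad\vert\varphi'\vert<1,
$$
and $\varphi'$ is continuous and strictly increasing, so $\varphi$ is strictly convex with bounded derivative; moreover $0<\varphi\le\vert\cdot\vert+2$, so $\varphi(X_1)$, $\varphi(X_2)$ and $\varphi(X)$ are all integrable.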
In this chain of inequalities, the far right-hand side has expectation $\mathbb E[\varphi(X)]$, by the tower property. And the far left-hand side has expectation $\mathbb E[\varphi(X_2)]$, which is also $\mathbb E[\varphi(X)]$, as $X$ and $X_2$ share the same distribution. Hence every term in the chain has expectation $\mathbb E[\varphi(X)]$.
In particular, the random variable $\varphi(X_2)=\varphi(\mathbb E[X_1\vert\mathcal G])$ is dominated by $\mathbb E[\varphi(X_1)\vert\mathcal G]$ while the two share the same expectation. Therefore they are a.s. equal; by the equality case above, $X_1$ is then a.s. equal to a $\mathcal G$-measurable random variable, hence $X_1=\mathbb E[X_1\vert\mathcal G]=X_2$ a.s., which proves the claim.
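The very last step uses the following standard fact: if $Y\le Z$ a.s. and $\mathbb E[Y]=\mathbb E[Z]<\infty$, then $Y=Z$ a.s., since
$$
Z-Y\ge0\ \text{a.s.}\quad\text{and}\quad\mathbb E[Z-Y]=0\quad\Longrightarrow\quad Z-Y=0\ \text{a.s.};
$$
here it is applied with $Y=\varphi(X_2)$ and $Z=\mathbb E[\varphi(X_1)\vert\mathcal G]$.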