I have a convex optimization problem of the form $$ \min_x \frac{1}{2} x^TAx-x^Tb \\ \text{s.t.}\ (I-P)x=0 $$ where $A$ is an $n \times n$ positive definite matrix and $P$ is an $n \times n$ orthogonal projection matrix (it has $p$ eigenvalues equal to zero and $n-p$ eigenvalues equal to one).
Intuitively, it seems like I can separate the two subspaces by rewriting it as:
$$
\min_x \frac{1}{2} x^TP^TAPx+\frac{\gamma}{2} x^T(I-P)^T(I-P)x-x^TP^Tb
$$
Since $P$ is an orthogonal projection matrix, it is symmetric ($P^T = P$) and idempotent ($P^2 = P$), so this simplifies to:
$$
\min_x \frac{1}{2} x^T(PAP+\gamma(I-P))x-x^TPb
$$
This yields an unconstrained optimization problem, whose minimizer is obtained by solving: $$(PAP+\gamma(I-P))x=Pb$$
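Here is a sketch of why I expect a solution of this system to satisfy the constraint automatically (assuming $P$ is an orthogonal projection). Split $x$ into its feasible and infeasible parts, $x = u + v$ with $u = Px$ and $v = (I-P)x$. Using $Pu = u$ and $Pv = 0$,
$$
(PAP+\gamma(I-P))x = PAu + \gamma v = Pb.
$$
Left-multiplying by $(I-P)$ and using $(I-P)P = 0$ annihilates both $PAu$ and $Pb$, leaving $\gamma v = 0$, hence $v = (I-P)x = 0$.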
This formulation is particularly handy because the matrix $(PAP+\gamma(I-P))$ remains positive definite for any $\gamma > 0$ ($\gamma$ is in fact the eigenvalue of this matrix on the constrained subspace, i.e. the null space of $P$), so the system can be solved numerically with the conjugate gradient method.
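As a numerical sanity check (not a proof), here is a small NumPy sketch: it builds a random symmetric positive definite $A$ and a random orthogonal projection $P = QQ^T$, solves the reformulated system with a hand-rolled conjugate gradient loop, and compares against the constrained solution obtained by parametrizing $x = Qy$ on the feasible subspace. The sizes `n`, `p`, the seed, and $\gamma = 1$ are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 3  # illustrative sizes: n-dim problem, p-dim null space of P

# Random symmetric positive definite A
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)

# Orthogonal projection P onto a random (n - p)-dimensional subspace
Q, _ = np.linalg.qr(rng.standard_normal((n, n - p)))
P = Q @ Q.T

b = rng.standard_normal(n)
gamma = 1.0

# System matrix and right-hand side of the unconstrained reformulation
K = P @ A @ P + gamma * (np.eye(n) - P)
rhs = P @ b

# Plain conjugate gradient (valid because K is symmetric positive definite)
x = np.zeros(n)
r = rhs - K @ x
d = r.copy()
for _ in range(2 * n):
    alpha = (r @ r) / (d @ K @ d)
    x = x + alpha * d
    r_new = r - alpha * (K @ d)
    if np.linalg.norm(r_new) < 1e-12:
        break
    d = r_new + ((r_new @ r_new) / (r @ r)) * d
    r = r_new

# Reference: solve the constrained problem directly via the basis Q of range(P)
x_ref = Q @ np.linalg.solve(Q.T @ A @ Q, Q.T @ b)

print(np.allclose(x, x_ref, atol=1e-8))            # same minimizer
print(np.linalg.norm((np.eye(n) - P) @ x) < 1e-8)  # constraint satisfied
```

Both checks pass for every seed I tried: the reformulated system recovers the constrained minimizer, and the computed $x$ lies in the range of $P$.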
In practice this works very well, but to be honest I don't know whether there is a flaw in my intuition, and I haven't found any resources on the subject.
Is there a known method I can use to demonstrate that my intuition is correct?