Unfortunately, this problem is not a convex optimization problem (despite the tag :)) due to the presence of nonlinear equality constraints. It is fortunate indeed that you can solve the problem analytically with one quadratic equality constraint; but with two, I am not convinced there is even a tractable exact numerical method.
There is a heuristic that is sometimes used in the convex optimization community that may be of use to you. Your problem is equivalent to:
$$\begin{array}{ll}
\text{minimize} & \mathop{\textrm{Tr}}(WR) \\
\text{subject to} & \mathop{\textrm{Tr}}(WR_a) = J_a \\
& \mathop{\textrm{Tr}}(WR_b) = J_b \\
& W = ww^H
\end{array}$$
This works because $$\mathop{\textrm{Tr}}(WR) =\mathop{\textrm{Tr}}(ww^HR) =\mathop{\textrm{Tr}}(w^HRw) = w^HRw.$$ Now the problem looks convex---except for the last equality constraint. What you do is relax the equality constraint to $W\succeq ww^H$, which means that $W-ww^H$ is positive semidefinite. This in turn can be written as follows:
$$W\succeq ww^H \quad\Longleftrightarrow\quad W-ww^H \succeq 0 \quad\Longleftrightarrow\quad
\begin{bmatrix} W & w \\ w^H & 1 \end{bmatrix} \succeq 0$$
The result is a semidefinite program in $W$ and $w$:
$$\begin{array}{ll}
\text{minimize} & \mathop{\textrm{Tr}}(WR) \\
\text{subject to} & \mathop{\textrm{Tr}}(WR_a) = J_a \\
& \mathop{\textrm{Tr}}(WR_b) = J_b \\
& \begin{bmatrix} W & w \\ w^H & 1 \end{bmatrix} ~ \text{p.s.d.}
\end{array}$$
This can be readily solved numerically. In fact, it should be relatively inexpensive to do so, because there are only two equality constraints! If you're a MATLAB jockey you can plug this into my toolbox CVX as follows:
cvx_begin sdp
    variable W(n,n) hermitian
    variable w(n) complex
    minimize(real(trace(W*R)))   % real in exact arithmetic; real() keeps CVX happy
    subject to
        trace(W*Ra) == Ja;
        trace(W*Rb) == Jb;
        % Schur complement form of W >= w*w'
        [ W, w ; w', 1 ] >= 0;
cvx_end
Again, this is a relaxation of your problem: its optimal value is a lower bound on the optimal value of the original problem, and the $w$ it produces is not even guaranteed to satisfy your original equality constraints. And yet, sometimes, that is exactly what happens: the relaxation actually produces $W=ww^H$. If that happens, $w$ is optimal for your original problem. By all means, try it and see if you get lucky.
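A quick way to test whether you got lucky is to measure how far $W$ is from $ww^H$. Here is a minimal sketch in MATLAB; the names `tol`, `gap`, and `w_cand` are mine, and `W`, `w` are the values CVX returned:

tol = 1e-6;   % tolerance of your choosing
gap = norm(W - w*w', 'fro') / max(norm(W, 'fro'), 1);
if gap < tol
    disp('Relaxation is tight: w is optimal for the original problem.');
else
    % Not rank one. The dominant eigenvector of W gives a candidate
    % point, but it is NOT guaranteed to satisfy the equality constraints.
    [u, s] = eigs(W, 1);
    w_cand = sqrt(s) * u;
end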
On the other hand, if $W\neq ww^H$, what do you do? Here's an idea. Let $J_0=\mathop{\textrm{Tr}}(WR)$ be the optimal value of the relaxation you just solved. Then try this "rank-reducing" model:
$$\begin{array}{ll}
\text{minimize} & \mathop{\textrm{Tr}}(W) \\
\text{subject to} & \mathop{\textrm{Tr}}(WR) \leq (1+\lambda) \cdot J_0 \\
& \mathop{\textrm{Tr}}(WR_a) = J_a \\
& \mathop{\textrm{Tr}}(WR_b) = J_b \\
& \begin{bmatrix} W & w \\ w^H & 1 \end{bmatrix} ~ \text{p.s.d.}
\end{array}$$
where $\lambda\geq 0$. Note that if $\lambda=0$, the first constraint pins $\mathop{\textrm{Tr}}(WR)$ at $J_0$, so you recover a solution of the relaxation above. But letting $\lambda$ be small but nonzero allows the value of $\mathop{\textrm{Tr}}(WR)$ to rise a bit, while reducing $\mathop{\textrm{Tr}}(W)$. Minimizing the trace is a common convex heuristic for reducing the rank of a PSD matrix. It doesn't always work either, but it just might. Try a couple of values of $\lambda>0$ and see what happens.
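In CVX, this second stage might look like the following sketch, where `J0` holds the optimal value saved from the first solve and `lambda` is your chosen slack (both names are mine):

cvx_begin sdp
    variable W(n,n) hermitian
    variable w(n) complex
    minimize(trace(W))
    subject to
        % allow the original objective to rise by a factor of (1+lambda)
        real(trace(W*R)) <= (1+lambda)*J0;
        trace(W*Ra) == Ja;
        trace(W*Rb) == Jb;
        [ W, w ; w', 1 ] >= 0;
cvx_end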
If all else fails, you might be able to use the values of $W$ and $w$ to construct a nearby point that satisfies your equality constraints but is not necessarily optimal.
EDIT: I just thought of a simplification if $R$ is positive definite. Define $v=R^{1/2}w$ and $V=R^{1/2}WR^{1/2}$, so that $\mathop{\textrm{Tr}}(V)=\mathop{\textrm{Tr}}(WR)$, and solve this alternative model:
$$\begin{array}{ll}
\text{minimize} & \mathop{\textrm{Tr}}(V) \\
\text{subject to} & \mathop{\textrm{Tr}}(VR^{-1/2}R_aR^{-1/2}) = J_a \\
& \mathop{\textrm{Tr}}(VR^{-1/2}R_bR^{-1/2}) = J_b \\
& \begin{bmatrix} V & v \\ v^H & 1 \end{bmatrix} ~ \text{p.s.d.}
\end{array}$$
Given the solution $(v,V)$ to this model, you can recover $W=R^{-1/2}VR^{-1/2}$ and $w=R^{-1/2}v$. What I like about this alternative is that it combines my two-stage approach above into a single stage: since $\mathop{\textrm{Tr}}(V)=\mathop{\textrm{Tr}}(WR)$, the objective is the original one and the rank-reducing heuristic $\mathop{\textrm{Tr}}(V)$ at the same time, right there in the model.
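Here is a sketch of this single-stage model in CVX, assuming $R$ is Hermitian positive definite so `sqrtm(R)` is invertible; the names `Rh`, `Rhi`, `Qa`, `Qb` are mine:

Rh  = sqrtm(R);                        % R^{1/2}; Hermitian since R is
Rhi = inv(Rh);                         % R^{-1/2}
Qa = Rhi*Ra*Rhi;  Qa = (Qa+Qa')/2;     % transformed data, symmetrized
Qb = Rhi*Rb*Rhi;  Qb = (Qb+Qb')/2;     %   to suppress roundoff
cvx_begin sdp
    variable V(n,n) hermitian
    variable v(n) complex
    minimize(trace(V))
    subject to
        trace(V*Qa) == Ja;
        trace(V*Qb) == Jb;
        [ V, v ; v', 1 ] >= 0;
cvx_end
W = Rhi*V*Rhi;                         % recover the original variables
w = Rhi*v;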