First I will try to briefly explain some of the theoretical background,
the way I think about it.
Everything takes place in Euclidean space $\mathbf{R}^n$,
so the Hamiltonian is
$H(\mathbf{q},\mathbf{p}) = \tfrac12 \sum_{i=1}^n p_i^2 + V(\mathbf{q})$.
A basic object in the theory
is something which has been rediscovered and renamed several times;
for example, Benenti calls it an inertia tensor, I called it an elliptic coordinates matrix in
my PhD thesis, and nowadays it's
most often called a special conformal Killing (SCK) tensor
(a name introduced by Crampin & Sarlet).
In the Euclidean case, this is simply a symmetric $n \times n$ matrix $J(\mathbf{q})$
of the form
$$
J_{ij}(\mathbf{q}) = \alpha q_i q_j + \beta_i q_j + \beta_j q_i + \gamma_{ij}
$$
for some real number $\alpha$, some real vector $\beta=(\beta_i)_{i=1}^n$,
and some real symmetric matrix $\gamma=(\gamma_{ij})_{i,j=1}^n$.
The Hamiltonian system is equivalent to the second-order Newton-type system
$\ddot{\mathbf{q}} = \mathbf{F}(\mathbf{q}) = -\nabla V(\mathbf{q})$
whose force field $\mathbf{F}(\mathbf{q})$ has the particular property of being the gradient
of a function;
this is (at least locally) equivalent to the condition that
$\partial F_i/\partial q_j = \partial F_j/\partial q_i$
for all $i$ and $j$.
Now suppose that there is some matrix $J$ of the form above
(but not simply a constant times the identity matrix),
such that the vector field
$$
\tilde{\mathbf{F}}(\mathbf{q})
= \operatorname{adj}\bigl(J(\mathbf{q}) \bigr) \, \mathbf{F}(\mathbf{q})
$$
is also the gradient of some function.
(Here “adj” denotes the adjoint matrix, or adjugate matrix, or cofactor matrix,
or whatever you want to call it.)
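Just to make this concrete, here is a minimal SymPy sketch of the closedness test (the code and all names in it are mine, not from the article):

```python
# Minimal sketch: build a 2D SCK tensor J and a generic potential V,
# and compute the closedness condition for adj(J) * F.
import sympy as sp

q1, q2, alpha = sp.symbols('q1 q2 alpha')
b1, b2 = sp.symbols('beta1 beta2')
g11, g12, g22 = sp.symbols('gamma11 gamma12 gamma22')

q = sp.Matrix([q1, q2])
beta = sp.Matrix([b1, b2])
gamma = sp.Matrix([[g11, g12], [g12, g22]])
# J_ij = alpha q_i q_j + beta_i q_j + beta_j q_i + gamma_ij
J = alpha*q*q.T + beta*q.T + q*beta.T + gamma

V = sp.Function('V')(q1, q2)                  # keep the potential generic
F = -sp.Matrix([sp.diff(V, q1), sp.diff(V, q2)])
Ft = J.adjugate() * F                         # \tilde F = adj(J) F

# \tilde F is (locally) a gradient iff its "curl" vanishes:
condition = sp.simplify(sp.diff(Ft[0], q2) - sp.diff(Ft[1], q1))
print(condition)   # a second-order PDE that V has to satisfy
```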
Then, and this is one of the long parts of the story which I can't explain here,
it turns out that a whole bunch of miracles happen:
- The Hamiltonian system is integrable;
it has $n$ Poisson commuting constants of motion which are all quadratic in the momenta $p_i$.
(And they can be easily computed from $V$ and $J$, but I don't think I'll go into that here.)
- And the system is not just integrable, it is separable,
meaning that it can be solved by the Hamilton–Jacobi method
(separation of variables in the Hamilton–Jacobi equation),
after changing to suitable “separation coordinates”.
- These separation coordinates are given by the eigenvalues of $J$;
since $J(\mathbf{q})$ is a symmetric matrix (for each $\mathbf{q}$),
it has real eigenvalues $u_1,\dots,u_n$ (which are functions of $\mathbf{q}$),
and the change of variables is just $u_1=u_1(\mathbf{q}),\dots,u_n=u_n(\mathbf{q})$.
This is (in the generic case) nothing but the change from Cartesian to elliptic coordinates.
(All this depends on some genericity conditions being satisfied,
so that the $n$ constants of motion are functionally independent,
and much of the complexity of the article by Waksjö and Rauch-Wojciechowski
comes from handling all the non-generic cases.)
The condition for the vector field $\tilde{\mathbf{F}}$ to be the gradient of something is that
$\partial \tilde F_i/\partial q_j = \partial \tilde F_j/\partial q_i$
for all $i$ and $j$.
When inserting
$\tilde{\mathbf{F}} = -\operatorname{adj}(J) \, \nabla V$
into this,
one gets a system of $\binom{n}{2}$ second-order PDEs for $V$,
which is equivalent (by taking linear combinations)
to the system given in Lemma 4.1 in the article:
$$
\sum_{k=1}^n ( J_{ik} \, \partial_{kj}V - J_{jk} \, \partial_{ki}V)
+ 3 N_i \, \partial_j V - 3 N_j \, \partial_i V
= 0
$$
for $1 \le i < j \le n$, where $N_i = \alpha q_i + \beta_i$.
This gives us the recipe:
- Given a potential $V$,
plug it into these formulas, and see if there is some choice of constants
$\alpha$, $\beta_i$ and $\gamma_{ij} = \gamma_{ji}$
such that all the left-hand sides become identically zero.
- This is a matter of setting the coefficients of all linearly independent terms
to zero separately, and solving the resulting homogeneous linear system
for the constants
$\alpha$, $\beta_i$, $\gamma_{ij}$
(see the worked example below, and the code sketch right after this list).
It's an over-determined system, so usually there is only the trivial solution
with all constants zero except that $\gamma_{11}=\dots=\gamma_{nn}=s$
is arbitrary, but if we are lucky, we find a non-trivial solution,
and then the miracles above happen!
- And once we know the matrix $J$, its eigenvalues tell us what the separation coordinates
are. (Although in practice this is not quite how you do it; there is more theory that you can
use to see it in a simpler way.)
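If you want to automate the recipe, here is a hedged SymPy sketch (the helper sck_candidates is my own invention, and it assumes a polynomial potential):

```python
# Sketch: assemble the equations of Lemma 4.1 for a polynomial potential V,
# set every monomial coefficient to zero, and solve the homogeneous linear
# system for alpha, beta_i, gamma_ij.
import sympy as sp

def sck_candidates(V, qs):
    n = len(qs)
    alpha = sp.Symbol('alpha')
    beta = list(sp.symbols(f'beta1:{n+1}'))
    gam = {(i, j): sp.Symbol(f'gamma{i+1}{j+1}')
           for i in range(n) for j in range(i, n)}
    G = lambda i, j: gam[min(i, j), max(i, j)]   # gamma is symmetric
    J = sp.Matrix(n, n, lambda i, j: alpha*qs[i]*qs[j]
                  + beta[i]*qs[j] + beta[j]*qs[i] + G(i, j))
    N = [alpha*qs[i] + beta[i] for i in range(n)]
    conditions = []
    for i in range(n):
        for j in range(i + 1, n):
            lhs = sum(J[i, k]*sp.diff(V, qs[k], qs[j])
                      - J[j, k]*sp.diff(V, qs[k], qs[i]) for k in range(n))
            lhs += 3*N[i]*sp.diff(V, qs[j]) - 3*N[j]*sp.diff(V, qs[i])
            # each coefficient of a monomial in the q's must vanish separately
            conditions += sp.Poly(sp.expand(lhs), *qs).coeffs()
    unknowns = [alpha] + beta + list(gam.values())
    return sp.solve(conditions, unknowns, dict=True), unknowns
```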
Now let's try a simple example in two dimensions, the Garnier potential
$$
V(q_1,q_2) = (q_1^2+q_2^2)^2 - (\lambda_1 q_1^2 + \lambda_2 q_2^2)
,
$$
with $\lambda_1 < \lambda_2$ (say).
(This is separable in elliptic coordinates with parameters $(\lambda_1,\lambda_2)$,
but let's pretend we don't know that, since it's supposed to come out of the calculations
automatically!)
Since $n=2$, the SCK tensor has the form
$$
J(q_1,q_2) =
\begin{pmatrix}
\alpha q_1^2 + 2 \beta_1 q_1 + \gamma_{11}
&
\alpha q_1 q_2 + \beta_1 q_2 + \beta_2 q_1 + \gamma_{12}
\\
\alpha q_1 q_2 + \beta_1 q_2 + \beta_2 q_1 + \gamma_{12}
&
\alpha q_2^2 + 2 \beta_2 q_2 + \gamma_{22}
\end{pmatrix}
,
$$
and we want to determine the values of the constants $\alpha$, $\beta_1$, $\beta_2$, $\gamma_{11}$, $\gamma_{12}$, $\gamma_{22}$
which make the following equation hold identically:
$$
(J_{11} - J_{22}) \, \partial_{12}V
+ J_{12} \, (\partial_{22} V - \partial_{11} V)
+ 3 N_1 \, \partial_2 V - 3 N_2 \, \partial_1 V
= 0
.
$$
(Since $n=2$, we only have this one condition, where $i=1$ and $j=2$;
this is the classical Bertrand–Darboux equation
which can be found in §152 of Whittaker's famous book
A Treatise on the Analytical Dynamics of Particles and Rigid Bodies.)
Inserting the expressions for $J(q_1,q_2)$, $N(q_1,q_2)$ and $V(q_1,q_2)$ yields
$$
\begin{aligned}
0
&
=
(\alpha (q_1^2 - q_2^2)
+ 2 \beta_1 q_1 - 2 \beta_2 q_2
+ \gamma_{11} - \gamma_{22})
\cdot 8 q_1 q_2
\\ &
+ (\alpha q_1 q_2 + \beta_1 q_2 + \beta_2 q_1 + \gamma_{12})
\cdot
( -8 (q_1^2 - q_2^2) + 2 (\lambda_1 - \lambda_2))
\\ &
+ 3 (\alpha q_1 + \beta_1)
\cdot
(4 (q_1^2 + q_2^2) - 2 \lambda_2) q_2
\\ &
- 3 (\alpha q_2 + \beta_2)
\cdot
(4 (q_1^2 + q_2^2) - 2 \lambda_1) q_1
.
\end{aligned}
$$
Multiplying everything out, we get
$$
\begin{aligned}
0
&
=
20 \beta_1 q_2^3
+ 20 \beta_1 q_1^2 q_2
- 20 \beta_2 q_1 q_2^2
- 20 \beta_2 q_1^3
\\ &
- 8 \gamma_{12} q_1^2
+ 8 \gamma_{12} q_2^2
\\ &
+ 8 ( (\gamma_{11} - \gamma_{22}) + \alpha (\lambda_1 - \lambda_2)) q_1 q_2
\\ &
- 2 \beta_2 (\lambda_2 - 4 \lambda_1) q_1
+ 2 \beta_1 (\lambda_1 - 4 \lambda_2) q_2
\\ &
+ 2 \gamma_{12} (\lambda_1 - \lambda_2)
,
\end{aligned}
$$
and here the coefficients of the various monomials $q_1^a q_2^b$ have to vanish separately.
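Here is a sketch of how to let SymPy confirm this expansion (again, all symbols are mine):

```python
# Sketch confirming the expansion above: expand the Bertrand-Darboux
# left-hand side for the Garnier potential and collect monomials in q.
import sympy as sp

q1, q2, l1, l2 = sp.symbols('q1 q2 lambda1 lambda2')
a, b1, b2 = sp.symbols('alpha beta1 beta2')
g11, g12, g22 = sp.symbols('gamma11 gamma12 gamma22')

V = (q1**2 + q2**2)**2 - (l1*q1**2 + l2*q2**2)
J11 = a*q1**2 + 2*b1*q1 + g11
J12 = a*q1*q2 + b1*q2 + b2*q1 + g12
J22 = a*q2**2 + 2*b2*q2 + g22
N1, N2 = a*q1 + b1, a*q2 + b2

bd = ((J11 - J22)*sp.diff(V, q1, q2)
      + J12*(sp.diff(V, q2, 2) - sp.diff(V, q1, 2))
      + 3*N1*sp.diff(V, q2) - 3*N2*sp.diff(V, q1))
for monom, coeff in sp.Poly(sp.expand(bd), q1, q2).terms():
    print(monom, sp.factor(coeff))
# e.g. the q2^3 term should print as (0, 3) 20*beta1, and so on
```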
It's immediate that this implies $\beta_1 = \beta_2 = \gamma_{12} = 0$,
and what remains then is
$$
(\gamma_{11} - \gamma_{22}) + \alpha (\lambda_1 - \lambda_2) = 0
,
$$
which admits a two-parameter solution:
$$
\alpha = -t
,\qquad
\gamma_{11} = t \lambda_1 + s
,\qquad
\gamma_{22} = t \lambda_2 + s
.
$$
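For what it's worth, the hypothetical sck_candidates sketch from earlier reproduces this hand computation in one go (assuming that helper is in scope):

```python
import sympy as sp

q1, q2, l1, l2 = sp.symbols('q1 q2 lambda1 lambda2')
V = (q1**2 + q2**2)**2 - (l1*q1**2 + l2*q2**2)    # the Garnier potential
solutions, unknowns = sck_candidates(V, [q1, q2])
print(solutions)
# expected: [{beta1: 0, beta2: 0, gamma12: 0,
#             gamma11: gamma22 - alpha*(lambda1 - lambda2)}]
```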
The contribution from $s$ is uninteresting since it's just a multiple of the identity matrix,
so we take $s=0$,
but we get a nontrivial $J$ by taking $t=1$:
$$
J(q_1,q_2) =
\begin{pmatrix}
- q_1^2 + \lambda_{1}
&
-q_1 q_2
\\
-q_1 q_2
&
- q_2^2 + \lambda_{2}
\end{pmatrix}
.
$$
The eigenvalues of this matrix, call them $u_1(q_1,q_2)$ and $u_2(q_1,q_2)$,
satisfy
$$
\begin{aligned}
(z-u_1)(z-u_2)
&
=
\det(zI-J)
\\ &
= \det
\begin{pmatrix}
(z-\lambda_{1}) + q_1^2
&
q_1 q_2
\\
q_1 q_2
&
(z-\lambda_{2}) + q_2^2
\end{pmatrix}
\\
&
=
(z-\lambda_{1})(z-\lambda_{2})
+ q_1^2 (z-\lambda_{2})
+ q_2^2 (z-\lambda_{1})
,
\end{aligned}
$$
i.e.,
$$
\frac{(z-u_1)(z-u_2)}{(z-\lambda_1)(z-\lambda_2)} = 1 + \frac{q_1^2}{z-\lambda_1} + \frac{q_2^2}{z-\lambda_2}
,
$$
which is nothing but the defining equation for elliptic coordinates $(u_1,u_2)$
with $u_1 < \lambda_1 < u_2 < \lambda_2$.
So the eigenvalues of the SCK tensor $J$ that we have found are indeed the separation
coordinates for the potential $V$, as promised!
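If you don't trust the determinant manipulation above, here is a quick symbolic confirmation (again just a sketch):

```python
# Check: det(zI - J) / ((z - l1)(z - l2)) equals the elliptic-coordinate
# generating function 1 + q1^2/(z - l1) + q2^2/(z - l2).
import sympy as sp

q1, q2, l1, l2, z = sp.symbols('q1 q2 lambda1 lambda2 z')
J = sp.Matrix([[-q1**2 + l1, -q1*q2],
               [-q1*q2, -q2**2 + l2]])
lhs = (z*sp.eye(2) - J).det() / ((z - l1)*(z - l2))
rhs = 1 + q1**2/(z - l1) + q2**2/(z - l2)
print(sp.simplify(lhs - rhs))   # 0
```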
This example is perhaps misleadingly simple,
partly since it's just two-dimensional, but even more
since the potential was given in a Cartesian coordinate system
whose origin and axes are perfectly aligned with the ellipses and hyperbolas
which form the coordinate curves in the elliptic coordinate system.
If we had been given the potential in a rotated coordinate system instead,
$$
V(q_1,q_2) = (q_1^2+q_2^2)^2 - (\lambda_1 (\cos\psi \, q_1 - \sin\psi \, q_2) ^2 + \lambda_2 (\sin\psi \, q_1 + \cos\psi \, q_2)^2)
$$
for some angle $\psi$,
then an application of the criterion would again have given us $\alpha=-1$, $\beta_1=\beta_2=0$,
but instead of a diagonal matrix $\gamma$ we would get a more complicated symmetric matrix $\gamma$, whose eigenvectors would tell us the angle $\psi$,
so that we can rotate the coordinate system back to the “good” axes used above before changing to elliptic coordinates.
And if the potential is given in a coordinate system which has been not only rotated but also translated,
then one obtains a nonzero vector $(\beta_1,\beta_2)$ from which one can deduce where the “good” origin is.
See Corollary 4.2 in the paper.
(If $\alpha=0$ but the vector $\beta$ is nonzero,
then the potential is separable in parabolic coordinates instead.)
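For the rotated case, here is a hedged sketch: with $\alpha=-1$, $\beta=0$ and $\gamma = R^{\mathsf T} \operatorname{diag}(\lambda_1,\lambda_2) R$ (my own parametrization, where $R$ is the rotation matrix), the Bertrand–Darboux equation for the rotated potential vanishes identically, and the eigenvectors of $\gamma$ point along the “good” axes:

```python
# Sketch: verify the Bertrand-Darboux equation for the rotated Garnier
# potential with gamma = R^T diag(lambda1, lambda2) R (assumed form).
import sympy as sp

q1, q2, l1, l2, psi = sp.symbols('q1 q2 lambda1 lambda2 psi', real=True)
R = sp.Matrix([[sp.cos(psi), -sp.sin(psi)],
               [sp.sin(psi),  sp.cos(psi)]])
q = sp.Matrix([q1, q2])
r = R * q                          # coordinates along the "good" axes
V = (q1**2 + q2**2)**2 - (l1*r[0]**2 + l2*r[1]**2)

gamma = R.T * sp.diag(l1, l2) * R
J = -q*q.T + gamma                 # alpha = -1, beta = 0
N1, N2 = -q1, -q2

bd = ((J[0, 0] - J[1, 1])*sp.diff(V, q1, q2)
      + J[0, 1]*(sp.diff(V, q2, 2) - sp.diff(V, q1, 2))
      + 3*N1*sp.diff(V, q2) - 3*N2*sp.diff(V, q1))
print(sp.simplify(sp.expand(bd)))  # 0
# The eigenvectors of gamma are the columns of R.T, i.e. the rotated axes,
# which is how one reads off the angle psi.
```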
If $\lambda_1=\lambda_2$, then the Garnier potential is separable in polar coordinates
instead of elliptic.
This is reflected in the matrix $\gamma$ having a double eigenvalue.
In general, a multiple eigenvalue indicates that the potential has some
rotational symmetry in the corresponding eigenspace,
so that a radial variable can be separated off from the angular part.
But the angular part is not necessarily separable (unless there is just one angular variable,
i.e., for a double eigenvalue);
this separability has to be tested separately using the “cyclic Bertrand–Darboux” equations.
(And this is where the complications begin, so I think I'll stop here.)