
Suppose we have two vectors $v_1,v_2$ that have been rotated into $v'_1,v'_2$ by the following operations:

$v'_1 = e^{θ\hat{n}/2}v_1e^{-θ\hat{n}/2}$

$v'_2 = e^{θ\hat{n}/2}v_2e^{-θ\hat{n}/2}$

where $\hat{n}$ is a unit vector representing the axis of rotation, and $θ$ is a scalar representing the rotation angle.

If we knew $v_1,v_2,v'_1,v'_2$, how would we solve for either the rotor $e^{θ\hat{n}/2}$ or, equivalently, its axis and angle $\hat{n},θ$?

$v_1,v_2$ can be assumed to be linearly independent and of course everything here is interpreted as a quaternion.

Theta n
  • This answer describes how to get axis and angle. The conversion into quaternions will then be trivial. – Kurt G. Aug 09 '23 at 18:44
  • @KurtG. I believe that technique only works for a single pair of vectors, you can for example find a matrix that maps v1 onto v1', but that same matrix will not necessarily map v2 onto v2', and vice versa if you started with v2 instead. The issue is there is an infinite number of rotations that map a vector onto another vector, but only 1(or 2?) which will simultaneously map v1 onto v1' and v2 onto v2', and the axis of this rotation may be neither v1 x v1' nor v2 x v2'. – Theta n Aug 09 '23 at 19:32

3 Answers

2

There's a straightforward formula when $\theta \ne \pi$. First get a third vector with the cross product: $$ v_3 = v_1\times v_2,\quad v_3' = v_1'\times v_2'. $$ Now we need to form the reciprocal basis of the primed basis. Let $$ V' = v_1'\wedge v_2'\wedge v_3',\quad |V'| = |v_1'\cdot(v_2'\times v_3')|. $$ Then $$ v^{\prime 1} = v_2'\wedge v_3'\,(V')^{-1} = \frac{v_2'\times v_3'}{|V'|}, $$$$ v^{\prime 2} = -v_1'\wedge v_3'\,(V')^{-1} = \frac{v_3'\times v_1'}{|V'|}, $$$$ v^{\prime 3} = v_1'\wedge v_2'\,(V')^{-1} = \frac{v_1'\times v_2'}{|V'|}. $$ The rotor $R$ we want is then $$ R \propto 1 + \sum_iv^{\prime i}v_i. $$
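As a quick numerical sanity check of the reciprocal-basis formulas (the sample vectors and the helper name `reciprocal_basis` are invented for illustration; the signed volume $v_1\cdot(v_2\times v_3)$ is used in place of $|V'|$, which agrees here because $v_3 = v_1\times v_2$ makes the frame right-handed):

```python
import numpy as np

def reciprocal_basis(a, b, c):
    """Reciprocal basis of {a, b, c} via cross products and the
    signed volume a . (b x c) (positive here, since c = a x b)."""
    vol = np.dot(a, np.cross(b, c))
    return np.cross(b, c) / vol, np.cross(c, a) / vol, np.cross(a, b) / vol

# Arbitrary linearly independent sample vectors.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 1.0, 0.0])
v3 = np.cross(v1, v2)

r1, r2, r3 = reciprocal_basis(v1, v2, v3)
# Duality check: r_i . v_j = delta_ij.
delta = np.array([[np.dot(r, v) for v in (v1, v2, v3)] for r in (r1, r2, r3)])
print(np.allclose(delta, np.eye(3)))  # True
```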


If $v_i$ and $v'_i$ are expressed as column vectors, then we can express all of this with neat matrix computations. Let $V = (v_1, v_2, v_3)$ and $W = (v_1', v_2', v_3')$ be the matrices with those columns. The reciprocal basis matrix is the inverse transpose $W^{-T}$, and the matrix of inner products $\Gamma = W^{-1}V$ has entries $\Gamma_{ij} = v^{\prime i}\cdot v_j$. The scalar part of the sum is $\mathrm{Tr}(\Gamma)$, but the bivector part has to be read off in the standard basis, from the matrix of outer products $\Gamma' = \sum_i v^{\prime i}v_i^{\mathsf T} = W^{-T}V^{\mathsf T} = WV^{-1}$ (which is precisely the rotation matrix, and satisfies $\mathrm{Tr}(\Gamma') = \mathrm{Tr}(\Gamma)$). Then $$ R \propto 1+\sum_iv^{\prime i}v_i = 1 + \mathrm{Tr}(\Gamma) + \sum_{i<j}(\Gamma'_{ij}-\Gamma'_{ji})e_ie_j $$ where $e_i$ is the standard basis.
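A numerical sketch of the matrix route (the data and the helper `axis_angle_matrix` are invented for illustration): recover $WV^{-1}$, which is the rotation matrix itself, and extract the quaternion by the classic trace/antisymmetric-entry formula; up to the sign conventions relating bivectors to quaternion units, this is the content of the rotor formula above.

```python
import numpy as np

def axis_angle_matrix(n, theta):
    """Rotation matrix about unit axis n by angle theta (Rodrigues)."""
    K = np.array([[0, -n[2], n[1]], [n[2], 0, -n[0]], [-n[1], n[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# Hypothetical ground truth to recover.
n, theta = np.array([1.0, 2.0, 2.0]) / 3.0, 1.2
R_mat = axis_angle_matrix(n, theta)

v1, v2 = np.array([1.0, 0.2, -0.3]), np.array([0.1, 1.0, 0.4])
V = np.column_stack([v1, v2, np.cross(v1, v2)])
W = R_mat @ V                   # primed vectors as columns

G = W @ np.linalg.inv(V)        # the rotation matrix, recovered
# Quaternion up to scale: w ~ 1 + tr(G), vector part from the
# antisymmetric entries (the classic matrix-to-quaternion extraction).
q = np.array([1 + np.trace(G),
              G[2, 1] - G[1, 2],
              G[0, 2] - G[2, 0],
              G[1, 0] - G[0, 1]])
q /= np.linalg.norm(q)

q_true = np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * n])
print(np.allclose(q, q_true) or np.allclose(q, -q_true))  # True
```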


Proof of formula:

We have $$ v'_i = Rv_i\widetilde R,\quad v^{\prime i} = Rv^i\widetilde R. $$ Now we astutely notice that $$ \sum_iv^{\prime i}v_i = \sum_iRv^i\widetilde Rv_i = R\dot\nabla\widetilde R\dot x. $$ In 3D our rotor is a sum $R = s + B$ of a scalar and a bivector; standard geometric calculus identities give us $$ \sum_iv^{\prime i}v_i = R(3s + (3-4)\widetilde B) = R(3s - \widetilde R + s) = 4sR - 1. $$ When $s \ne 0$ it follows that $$ R \propto 1 + \sum_iv^{\prime i}v_i. $$

The same idea will work in any number of dimensions so long as $R$ is a rotation in a single plane, yielding $$ R \propto 4-n + \sum_iv^{\prime i}v_i, $$ but not all rotors are of this form: consider that in 4D $$ e^{e_1e_2 + e_3e_4} = e^{e_1e_2}e^{e_3e_4} $$ has a pseudoscalar component. I don't know if there is a general formula; for 4D and so long as $\langle R\rangle_0 \ne 0$ I was able to work out $$ R \propto 4\sum_iv^{\prime i}v_i + \sum_{i,j}v^{\prime i}v^{\prime j}v_iv_j $$ using a method similar to the 3D case.

$s=0$ in 3D

When $s=0$, i.e. when the above rotor has no scalar part and is a bivector $B$, the above method fails. However, recovering $B$ is still straightforward.

The matrix $\Gamma' = WV^{-1}$ is precisely the representation of our rotation in the standard basis. Expressed in the $v'_i$ basis this is $$ W^{-1}\Gamma'W = V^{-1}W = \Gamma^{-1}. $$ So find the eigenvectors $u^+, u^-_1, u^-_2$ of $\Gamma$ corresponding to eigenvalues $1, -1, -1$ respectively. Then $B$ is proportional to the dual of $u^+$ via the pseudoscalar $I$, or is the wedge of $u^-_1$ and $u^-_2$ (after going back to the standard basis using $W$): $$ B\propto [Wu^+]I \quad\text{or}\quad B\propto [Wu^-_1]\wedge [Wu^-_2]. $$ Also notice the following for any eigenvector $u$ of $\Gamma$: $$ \lambda u = \Gamma u \iff \lambda Wu = Vu \iff (V - \lambda W)u = 0 $$ So it suffices to determine the nullspace of $V - W$ to get $u^+$ or the nullspace of $V+W$ to get $u^-_1, u^-_2$.
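A sketch of this branch with invented data (a rotation by $\pi$ about a unit axis $n$ is $2nn^{\mathsf T} - I$), recovering the axis from the nullspace of $V - W$:

```python
import numpy as np

# Rotation by pi about unit axis n: R = 2 n n^T - I (made-up test data).
n = np.array([2.0, -1.0, 2.0]) / 3.0
R = 2.0 * np.outer(n, n) - np.eye(3)

v1, v2 = np.array([1.0, 0.0, 0.5]), np.array([0.0, 1.0, -0.2])
V = np.column_stack([v1, v2, np.cross(v1, v2)])
W = R @ V

# (V - W) u = 0 is equivalent to Gamma u = u with Gamma = W^{-1} V,
# so the nullspace of V - W gives u^+; W u^+ is the axis in the
# standard basis.
_, _, vt = np.linalg.svd(V - W)
u_plus = vt[-1]                 # right-singular vector for sigma ~ 0
axis = W @ u_plus
axis /= np.linalg.norm(axis)
print(np.allclose(axis, n) or np.allclose(axis, -n))  # True
```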

  • Awesome answer! This seems to work perfectly, I'm building a little visualization of spinors which is why I asked for the rotor specifically. Tracking the local forward/up vectors of a rotating object and performing this calculation many times over small timesteps then composing the results, I do indeed see the expected behavior of a spinor, an inversion from R -> -R for each 2π turn regardless of the axis of rotation. I'll have to dig into geometric calculus to better understand line 2 and the jump to line 3 of your proof, but thanks for the answer. – Theta n Aug 10 '23 at 20:51
  • So after substituting the expressions for R, s, and B, and using some trigonometric identities, it seems that what this is really solving for is R^2 or e^θB / e^θn. So a square root is required to find R, which is straightforward enough for quaternions or equivalently the sum of a scalar and bivector, although in general maybe not the entire geometric algebra. It also makes sense as the problem as stated uniquely specifies an SO(3) rotation but does not uniquely specify an SU(2) rotation, so it seems the 1-to-2 correspondence shows up as the two possible square roots here. – Theta n Aug 10 '23 at 23:55
  • @GeorgeChiporikov I'm not sure what you're talking about. The expression I gave is definitely for $R$ and not $R^2$. The 1-to-2 issue is taken care of because I only derive an expression that $R$ is proportional to, leaving the sign ambiguous. Certainly you could square everything and get an exact expression for $R^2$, but I don't know why you would do that. – Nicholas Todoroff Aug 11 '23 at 00:27
  • I mean that the expression can be simplified further, we have s = cos(θ/2), and R = cos(θ/2) + Bsin(θ/2). Then 4sR-1 = 4cos(θ/2)cos(θ/2) + 4cos(θ/2)sin(θ/2)B - 1 = (with trigonometric identities) 1 + 2cos(θ) + 2Bsin(θ) = 1 + 2e^θB = 1 + 2R^2. So it seems more natural to say that this is a solution for R^2. I'm not saying there's anything wrong with it, I would say this is the expected result given the problem statement and it makes the 1-to-2 issue very explicit when expressed in this way. – Theta n Aug 11 '23 at 00:52
  • @GeorgeChiporikov I see. That is an interesting observation! Also explains neatly why the method fails for $s=0$, because in this case $R^2$ is a scalar and we lose all the bivector information. – Nicholas Todoroff Aug 11 '23 at 00:59
  • @GeorgeChiporikov I added a method for getting $R$ when $s=0$ in 3D. – Nicholas Todoroff Aug 14 '23 at 22:36
1

I think you can do this in a more "quaternionic" way if you want:

Quick review of quaternions and $\mathrm{SO}_3(\mathbb R)$:

We write $\mathbb H$ for the quaternions, and $h \mapsto \bar{h}$ for the quaternionic conjugation map, that is, if $h = h_0+h_1 \mathbf i + h_2\mathbf j+h_3\mathbf k\in\mathbb H$ then $\bar{h} = h_0-h_1 \mathbf i - h_2\mathbf j-h_3\mathbf k$. Let us also set $\Re(h) = h_0$ and $\Im(h) = h-h_0$. It is well-known that $\mathbb H$ is a normed division algebra over $\mathbb R$ with norm $$ N(h) = h.\bar{h} = h_0^2+h_1^2+h_2^2+h_3^2. $$ Thus in particular $\mathbb H$ is a 4-dimensional $\mathbb R$-vector space with inner product given by $\langle h,h' \rangle = \Re(h\overline{h'})$ for which the basis $\{1,\mathbf i,\mathbf j,\mathbf k\}$ is orthonormal. Let $\mathbb U = \{h \in \mathbb H: N(h)=h\bar{h}=1\}$ be the group of unit quaternions and let $\mathbb I = \{h \in \mathbb H: \bar{h}=-h\} = \text{span}_{\mathbb R}\{\mathbf i,\mathbf j,\mathbf k\}$ be the subspace of purely imaginary quaternions. We identify $\mathbb R^3$ (with the standard dot product) with $\mathbb I$.

It is easy to check that $\mathbb U$ acts on $\mathbb I$ by conjugation, that is, if $u \in \mathbb U, h\in \mathbb I$ then $uhu^{-1} = uh\bar{u} \in \mathbb I$ and that this yields a homomorphism $\pi\colon\mathbb U \to \mathrm{O}(\mathbb I)$. Since $\mathbb U$ is connected, $\pi(\mathbb U)\subseteq \mathrm{SO}(\mathbb I) \cong \mathrm{SO}_3(\mathbb R)$. Now if $u = \cos(\phi)+\sin(\phi)u_{im}$ where $u_{im}\in \mathbb U\cap \mathbb I$, then $\cos(\phi) \in Z(\mathbb H)$, hence $u_{im}$ and $u$ commute with each other, and hence $u.u_{im}u^{-1}=u_{im}$. Thus $\pi(u) \in \mathrm{SO}_3(\mathbb R)$ is a rotation which preserves the line $\mathbb R.u_{im}$. It is then easy to check that $\pi(u)$ is a rotation by $2\phi$ about $u_{im}$.

Finally, if $a,b \in \mathbb I$, then $$ a.b = \Re(a.b) + \Im(a.b) = -\langle a,b\rangle + \Im(a.b). $$ since $\langle h_1,h_2\rangle = \Re(h_1\overline{h_2}) = -\Re(h_1h_2)$ if $h_2 \in \mathbb I$. Moreover, $a^2 = -a(-a) = -a.\bar{a} = -N(a)$ is real, hence $\Im(a.a)=0$. Thus $\Im(a.b)$ is an alternating bilinear map $\mathbb I\times \mathbb I\to \mathbb I$, which becomes the vector cross product via the standard identification.
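These two identities are easy to confirm with a hand-rolled Hamilton product (`qmul` and the sample values are arbitrary choices for illustration; quaternions are stored as `(w, x, y, z)`):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 + y1*w2 + z1*x2 - x1*z2,
        w1*z2 + z1*w2 + x1*y2 - y1*x2,
    ])

a = np.array([0.0, 1.0, 2.0, 3.0])     # purely imaginary quaternion
b = np.array([0.0, -1.0, 0.5, 2.0])
ab = qmul(a, b)
print(np.isclose(ab[0], -np.dot(a[1:], b[1:])))     # Re(ab) = -<a,b>
print(np.allclose(ab[1:], np.cross(a[1:], b[1:])))  # Im(ab) = a x b
```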

Obtaining the rotator: Suppose we are given linearly independent vectors $v_1,v_2$ together with $v_1' = R(v_1)$ and $v_2' = R(v_2)$, where we view all of these as elements of $\mathbb I$.

Let $w_1= v_1-v_1'=(I-R)(v_1)$ and $w_2 = (I-R)(v_2)$. If $w_1,w_2$ are linearly dependent, say $w_2=\lambda w_1$, then $\lambda v_1-v_2 = R(\lambda v_1-v_2)$ and hence $\mathbb R.(\lambda v_1 -v_2)$ is the axis of the rotation $R$. Otherwise $\text{Im}(I-R) = \text{ker}(I-R)^{\perp}$ is spanned by $\{w_1,w_2\}$, and $\Im(w_1w_2) = a$ spans the axis of rotation of $R$.

Once we have a basis $\{a\}$ of the axis of rotation, it is straightforward to find the angle of rotation: set $u = a/\|a\|$ so that $\{u\}$ is an orthonormal basis of the axis of rotation and let $c_1 = \langle v_1 ,u\rangle$. Then if $p_1 = v_1 - c_1u$ and $p_1' = v_1' - c_1 u$ we have $p_1,p'_1=R(p_1) \in (\mathbb R. u)^{\perp}$ and if $$ \cos(\alpha) = \langle p_1,p_1'\rangle/\|p_1\|^2 $$ then $R$ is the rotation by $\alpha$ about $u$ (the sign of $\alpha$ can be fixed from the sign of $\langle p_1\times p_1',u\rangle$), and the quaternion rotator is $\cos(\alpha/2)+\sin(\alpha/2).u$.
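The whole procedure can be sketched numerically. Below, `rot` applies Rodrigues' formula as a stand-in for quaternion conjugation, and the sign of the angle is fixed with a cross product (an addition over the cosine formula, which leaves the sign ambiguous); all names and data are invented for illustration:

```python
import numpy as np

def rot(n, theta, v):
    """Rotate v about unit axis n by theta (Rodrigues' formula)."""
    return (v * np.cos(theta) + np.cross(n, v) * np.sin(theta)
            + n * np.dot(n, v) * (1 - np.cos(theta)))

# Hypothetical ground truth to recover.
n_true, th_true = np.array([0.0, 0.6, 0.8]), 2.0
v1, v2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 1.0])
v1p, v2p = rot(n_true, th_true, v1), rot(n_true, th_true, v2)

# Axis: w_i = (I - R)(v_i) lie in the plane orthogonal to the axis,
# so Im(w1 w2) = w1 x w2 (for pure quaternions) spans the axis.
w1, w2 = v1 - v1p, v2 - v2p
u = np.cross(w1, w2)
u /= np.linalg.norm(u)

# Angle: project v1 off the axis and compare with its image; the
# cross product supplies the sign of alpha.
c1 = np.dot(v1, u)
p1, p1p = v1 - c1 * u, v1p - c1 * u
alpha = np.arctan2(np.dot(np.cross(p1, p1p), u), np.dot(p1, p1p))

q = np.concatenate([[np.cos(alpha / 2)], np.sin(alpha / 2) * u])
q_true = np.concatenate([[np.cos(th_true / 2)], np.sin(th_true / 2) * n_true])
print(np.allclose(q, q_true) or np.allclose(q, -q_true))  # True
```

Note that flipping the sign of `u` also flips `alpha`, so the recovered rotator is well defined up to the usual overall $\pm$ sign.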

krm2233
  • Very nice answer, I was able to build from this and show that the quaternion $q = \sqrt{w_1w_2^{-1}}$ represents a rotation (and scaling) around $\hat{n}$ which transforms $v_{2\perp}$ into ${v_{1\perp}}$ like so: $v_{1\perp} = qv_{2\perp}q^*$, where $v_{1\perp}, v_{2\perp}$ are the components of $v_1,v_2$ orthogonal to $\hat{n}$ (in 3D). This allows us to construct vectors from each of the unprimed and primed bases which lie in the plane of the original rotation. These vectors are useful because the rotator $e^{θ\hat{n}/2}$ commutes with them by conjugation... – Theta n Aug 16 '23 at 18:38
    which can be used to extract the square of the rotator: $v'_\perp = e^{θ\hat{n}/2}v_\perp e^{-θ\hat{n}/2} = e^{θ\hat{n}/2}(e^{-θ\hat{n}/2})^{*}v_\perp = e^{θ\hat{n}/2}e^{θ\hat{n}/2}v_\perp = e^{θ\hat{n}}v_\perp \implies e^{θ\hat{n}} = v'_\perp v_\perp^{-1}$. This leads to an explicit expression for the square of the rotator: $e^{θ\hat{n}} = v'_1qv'_2q^{*}(v_1qv_2q^{*})^{-1}$. I need to polish this up and work out the edge cases, I'll add an update to my post soon. Thanks for the answer, drawing attention to the quantity $\mathfrak{I}(w_1w_2)$ was the key insight here. – Theta n Aug 16 '23 at 18:38
  • Correction: it should be $e^{θ\hat{n}} = (v'_1\times(qv'_2q^{*}))(v_1\times(qv_2q^{*}))^{-1}$, cross product, not quaternionic product, otherwise the vectors we construct have a real component. – Theta n Aug 16 '23 at 19:38
0

There is a unique solution to this problem, but it is easier to express in the language of rotation matrices and vectors; quaternions don't seem to provide a significant simplification here.

Note that for unit 3-vectors $v_1,v_1', v_2, v_2'$ related by:

$$v_1'=R v_1~~,~~ v_2'=Rv_2$$

it holds true that

$$v_1'\times v_2'=R(v_1\times v_2)$$

which can be proven using the properties of a rotation matrix and its determinant. The assumption that $v_1,v_2$ are linearly independent ensures that the set $B=\{v_1, v_2, v_1\times v_2\}$ forms a basis for $\mathbb{R}^3$, and since the action of the matrix $R$ on the basis is known, the matrix can be fully reconstructed. The reconstruction can be facilitated by constructing an orthonormal basis out of $B$ given by the vectors

$$q=v_1\\r=\frac{v_2-(v_1\cdot v_2)v_1}{\|v_1\times v_2\|}\\s=\frac{v_1\times v_2}{\|v_1\times v_2\|} $$
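A quick numerical check of these formulas (sample unit vectors chosen arbitrarily; the normalization of $r$ works because $\|v_2-(v_1\cdot v_2)v_1\| = \|v_1\times v_2\|$ for unit $v_1,v_2$):

```python
import numpy as np

# Arbitrary linearly independent unit sample vectors.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.6, 0.8, 0.0])

q = v1
r = (v2 - np.dot(v1, v2) * v1) / np.linalg.norm(np.cross(v1, v2))
s = np.cross(v1, v2) / np.linalg.norm(np.cross(v1, v2))

F = np.column_stack([q, r, s])
print(np.allclose(F.T @ F, np.eye(3)))  # True: {q, r, s} is orthonormal
```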

Then all one needs to do is express $q'=Rq,r'=Rr, s'=Rs$ in terms of $q,r,s$:

$$q'=aq+br+cs\\r'=dq+er+fs\\s'=gq+hr+ks$$

and the matrix of $R$ in the $qrs$ basis, whose columns are the coordinates $(a,b,c)$, $(d,e,f)$, $(g,h,k)$ of $q'$, $r'$, $s'$, is given by

$$R(qrs)=\begin{pmatrix}q'\cdot q&r'\cdot q&s'\cdot q\\q'\cdot r&r'\cdot r&s'\cdot r\\q'\cdot s&r'\cdot s&s'\cdot s\end{pmatrix}$$

Here you can calculate the angle from the trace:

$$1+2\cos\theta=q'\cdot q+r'\cdot r+s'\cdot s$$

The axis can be found as the eigenvector corresponding to the eigenvalue $1$ and then expressed in the original coordinates if necessary.
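Putting the whole recipe together as a numpy sketch (the rotation, vectors, and helper names are invented for illustration): build the frames, form the matrix in the $qrs$ basis with images in the columns, read the angle off the trace, and take the axis from the eigenvalue-$1$ eigenvector:

```python
import numpy as np

def rodrigues(n, theta):
    """Rotation matrix about unit axis n by angle theta."""
    K = np.array([[0, -n[2], n[1]], [n[2], 0, -n[0]], [-n[1], n[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def frame(a, b):
    """Orthonormal frame {q, r, s} from independent unit vectors a, b."""
    c = np.cross(a, b)
    return np.column_stack([a,
                            (b - np.dot(a, b) * a) / np.linalg.norm(c),
                            c / np.linalg.norm(c)])

# Hypothetical ground truth to recover.
n_true, th_true = np.array([2.0, 2.0, 1.0]) / 3.0, 0.9
R_true = rodrigues(n_true, th_true)

v1, v2 = np.array([1.0, 0.0, 0.0]), np.array([0.6, 0.8, 0.0])
F, Fp = frame(v1, v2), frame(R_true @ v1, R_true @ v2)

R_qrs = F.T @ Fp              # matrix of R in the qrs basis
theta = np.arccos((np.trace(R_qrs) - 1) / 2)   # 1 + 2 cos(theta) = trace

# Axis: eigenvector for eigenvalue 1, mapped back to standard coordinates.
vals, vecs = np.linalg.eig(R_qrs)
axis = F @ np.real(vecs[:, np.argmin(np.abs(vals - 1))])
axis /= np.linalg.norm(axis)

print(np.isclose(theta, th_true))  # True
print(np.allclose(axis, n_true) or np.allclose(axis, -n_true))  # True
```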