Just to introduce the background of this question: as many of you know, a rotation matrix can transform a point $^{B}\textrm{p}$ described in a rotated coordinate frame {B} into the point $^{A}\textrm{p}$ described in coordinate frame {A} by:

$^{A}\textrm{p}$=$^{A}\textrm{R}_B \ ^{B}\textrm{p}$

The columns of the rotation matrix $^{A}\textrm{R}_B$ are the unit vectors of {B}'s axes described in frame {A}.

Also, the rotation about a given axis can be written as $^{A}\textrm{R}_B = e^{[\hat{w}]_x\theta}$, where $[\hat{w}]_x$ is the skew-symmetric 3x3 matrix of the unit vector $\hat{w}$ (described in frame {A}) around which the frame is rotated, and $\theta$ is the rotation angle (a scalar).
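(As a sanity check, here is a minimal sketch of these two facts in Python/NumPy using `scipy.linalg.expm`; the axis, angle, and point below are arbitrary values chosen only for illustration.)

```python
import numpy as np
from scipy.linalg import expm

def skew(v):
    """Cross-product matrix [v]_x, so that skew(v) @ p == np.cross(v, p)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

w_hat = np.array([0.0, 0.0, 1.0])       # unit rotation axis (A's z-axis), arbitrary
theta = np.pi / 4                        # rotation angle, arbitrary
R_AB = expm(skew(w_hat) * theta)         # A_R_B = e^{[w_hat]_x * theta}

# A proper rotation: orthogonal with determinant +1
assert np.allclose(R_AB @ R_AB.T, np.eye(3)) and np.isclose(np.linalg.det(R_AB), 1.0)

# Columns of A_R_B are B's axes expressed in A coordinates
print(R_AB[:, 0])                        # B's x-axis seen from A: ~[0.707, 0.707, 0]

# Transform a point known in B into A coordinates: A_p = A_R_B @ B_p
p_B = np.array([1.0, 0.0, 0.0])
print(R_AB @ p_B)                        # ~[0.707, 0.707, 0]
```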

Now my question: almost every book and paper I found states that the time derivative of the rotation matrix is the following:

$\frac{d}{dt}(^{A}\textrm{R}_B)$= $[w]_x \ ^{A}\textrm{R}_B \qquad (1)$

Does the solution require that the direction of $\hat{w}$ remains constant at all times?

Because if we apply the chain rule to the matrix exponential as if it were a scalar exponential, we get:

$\frac{d}{dt}e^{[\hat{w}(t)]_x\theta(t)} = \frac{d}{dt}\big([\hat{w}(t)]_x\theta(t)\big)\, e^{[\hat{w}(t)]_x\theta(t)} = \frac{d}{dt}\big([\hat{w}(t)]_x\big)\,\theta(t)\, e^{[\hat{w}(t)]_x\theta(t)} + [\hat{w}(t)]_x\,\dot{\theta}(t)\, e^{[\hat{w}(t)]_x\theta(t)}$

which can be further simplified to:

$\frac{d}{dt}e^{[\hat{w}]_x\theta}$ = $(\frac{d}{dt}([\hat{w}(t)]_x)\theta(t) + [w(t)]_x)^{A}\textrm{R}_B$

where $[w(t)]_x = [\hat{w}(t)]_x\,\dot{\theta}(t)$.

Thus this is not equal to (1), and an additional term appears.

So which of these equations is true when the rotation is not about a constant axis? Or did I do something wrong?

Greetings,

1lc

  • I just noticed that the time derivative of a time-dependent matrix exponential is not trivial: in general $\frac{d}{dt}e^{A(t)} \neq \dot{A}(t)\,e^{A(t)}$. So my solution is wrong. But the solution in (1) doesn't seem right to me either. Any help? – 1lc Jul 05 '19 at 18:33

1 Answer


As you noticed in your comment, the time (single-parameter) derivative of the matrix exponential doesn't have a nice form analogous to the scalar exponential.

Your Equation 1 is essentially the definition of the "angular velocity" vector $\omega$. An orthogonal time-varying matrix $R(t)$ always satisfies, $$ R R^T = \mathbf{1} \ \ \ \ \forall t $$$$ \implies\ \ \dot{R} R^T + R \dot{R}^T = \mathbf{0}\ \ \ \ \forall t $$

We define the following (also time-varying) matrix, $$ \Omega := \dot{R} R^T\ \ \ \ \forall t $$

and note that, from the above, it is always skew-symmetric, $$ \dot{R} R^T = -R \dot{R}^T\ \ \implies\ \ \Omega = -\Omega^T\ \ \ \ \forall t $$

Thus $\Omega$ can be treated as the cross-product linear operator form of some time-varying vector $\omega(t)$. By simple rearrangement of the $\Omega$ definition, we have your Equation 1, \begin{equation} \dot{R} = [\omega]^{\times} R\ \ \ \ \forall t \end{equation}
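To make this concrete, here is a small numerical check, a sketch in Python/NumPy (the time-varying rotation below is just an arbitrary example trajectory): $\dot{R}R^T$ comes out skew-symmetric, and Equation 1 holds with the $\omega$ read off from it.

```python
import numpy as np
from scipy.linalg import expm

def skew(v):
    """Cross-product matrix [v]_x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def R(t):
    """An arbitrary time-varying rotation whose axis direction is NOT constant."""
    return expm(skew([0.0, 0.0, 1.0]) * t**2) @ expm(skew([0.0, 1.0, 0.0]) * np.sin(t))

t, h = 0.7, 1e-6
R_dot = (R(t + h) - R(t - h)) / (2 * h)            # central finite difference
Omega = R_dot @ R(t).T                             # Omega := R_dot R^T

print(np.max(np.abs(Omega + Omega.T)))             # ~0: Omega is skew-symmetric

# Read omega off Omega and verify Equation 1: R_dot = [omega]_x R
omega = np.array([Omega[2, 1], Omega[0, 2], Omega[1, 0]])
print(np.max(np.abs(skew(omega) @ R(t) - R_dot)))  # ~0
```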

If $R$ rotates some $B$-coordinates to $A$-coordinates then we call $\omega$ the angular velocity expressed in $A$-coordinates.

In conclusion, your Equation 1 holds even for time-varying $\omega$, essentially by definition.

It is a linear time-varying matrix ODE, but don't take the "matrix" part lightly. The solution is non-trivial unless the direction of $\omega$ is constant. This is due to the lack of commutativity of the Lie group $\mathbb{SO}(3)$. Hopefully that clears things up!
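To illustrate that last point, here is a rough sketch (Python/NumPy, with an arbitrarily chosen tumbling $\omega(t)$): integrating Equation 1 step by step with small exponential increments gives a different result than the naive closed form $e^{[\int \omega\, dt]^{\times}}$, which would be exact only for a fixed axis.

```python
import numpy as np
from scipy.linalg import expm

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def omega(t):
    """Angular velocity whose direction tumbles in the xy-plane (arbitrary example)."""
    return np.array([np.cos(t), np.sin(t), 0.0])

dt, T = 1e-3, 2.0
R = np.eye(3)
w_int = np.zeros(3)
for k in range(int(T / dt)):
    t = k * dt
    R = expm(skew(omega(t)) * dt) @ R     # step-wise integration of R_dot = [omega]_x R
    w_int += omega(t) * dt                # naively accumulated rotation vector

R_naive = expm(skew(w_int))               # exp([integral of omega]_x): fixed-axis formula

print(np.linalg.norm(R - R_naive))        # clearly nonzero: the two disagree
```

With a constant-direction $\omega$ the two would agree (up to integration error); with a tumbling $\omega$ they do not, because the incremental rotations no longer commute.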

jnez71
  • I understand the math you did there, but because I'm the type of guy who questions everything, I wrote a Matlab script that approximates the time derivative and compared it with the result that Equation (1) produces. It shows that for a constant (non-tumbling) w-vector the solutions are essentially the same for small timesteps. But when I chose a tumbling w-vector (for example, one where w rotates around the z-axis in the XY-plane) the solutions differ quite a bit. – 1lc Jul 05 '19 at 21:36
  • Matlab-Code <-- Link to Matlab Code, the skew function generates a skew-symmetric matrix out of a vector. – 1lc Jul 05 '19 at 21:43
  • @1lc The subtraction operation you used in your finite-difference is not defined for the group of rotation matrices. You may be interested in this paper for more insight. – jnez71 Jul 05 '19 at 23:50
  • Now I am confused. The book I am reading (Robotics, Vision and Control by Peter Corke) states that the derivative can be approximated by a finite difference. I thought this was true because it makes sense, and it also works in my Matlab script for the constant w-vector. Here is the link to the page I am referring to:

    Book-Page <- Link to Book-Page

    – 1lc Jul 06 '19 at 09:11
  • @1lc In the limit, the finite-difference will match the exact derivative like your book says, but it will require a much smaller timestep than a "boxminus"-based approximation of the derivative (see the paper I linked). In robotics, it is rare now to see people using vector operations on $\mathbb{SO}(3)$ and then reprojecting. I suspect your book will explain the modern Lie algebraic approach later. – jnez71 Jul 06 '19 at 17:10
  • @1lc I actually examined your code closely now. The timestep seems pretty small, so I don't think that is the problem (though my statement about avoiding vector subtraction of group members still holds). The problem with your code is where you write theta=w*t. That is erroneous if w is changing direction. What even is theta? An angle about what? The line theta=w*t is what integrating a rotation about a fixed axis would yield. You cannot integrate your Equation 1 that easily if the axis is changing. – jnez71 Jul 06 '19 at 17:18