
In Geometric Algebra, any bivector $B\in\Lambda^2\mathbb R^n$ is a sum of blades: $$B = B_1 + B_2 + \cdots$$ $$= \vec v_1\wedge\vec w_1 + \vec v_2\wedge\vec w_2 + \cdots$$ Each blade's component vectors $\vec v$ and $\vec w$, if they're not already orthogonal to each other, can easily be made so by the Gram-Schmidt process: $$B_1 = \vec v_1\wedge\vec w_1 = \vec v_1\wedge\left(\vec w_1-\Big(\frac{\vec w_1\cdot\vec v_1}{\vec v_1\cdot\vec v_1}\Big)\vec v_1\right) = \vec v_1\wedge\vec w_1'$$ $$\vec v_1\cdot\vec w_1' = 0$$ (This can even be generalized to pseudo-Euclidean space where $\vec v$ may square to zero: project $\vec v$ away from $\vec w$ instead of vice-versa, or if they both square to zero, take $\vec v'=\frac{\vec v+\vec w}{\sqrt2}$ , $\vec w'=\frac{\vec w-\vec v}{\sqrt2}$. Then $\vec v\wedge\vec w=\vec v'\wedge\vec w'$, and $\vec v'\cdot\vec w'=0$.)
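A minimal numpy sketch of that step, for the Euclidean case (the example vectors are arbitrary; the null-vector branches above would need extra code):

```python
import numpy as np

def orthogonalize_factors(v, w):
    """Gram-Schmidt step: return (v, w') with v . w' = 0 and v ^ w' = v ^ w."""
    return v, w - (np.dot(w, v) / np.dot(v, v)) * v

v = np.array([1.0, 0.0, 1.0, 0.0])   # e1 + e3
w = np.array([1.0, 2.0, 0.0, 0.0])   # e1 + 2 e2
v, w_prime = orthogonalize_factors(v, w)
print(np.dot(v, w_prime))            # 0.0
```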

But I don't know how to make each blade orthogonal to the other blades. Orthogonal means that their geometric product is their (grade 4) wedge product; all lower-grade parts are zero. $$B_1 + B_2 = B_1' + B_2'$$ $$B_1'B_2' = (B_1'\cdot B_2')+(B_1'\times B_2')+(B_1'\wedge B_2') = B_1'\wedge B_2'$$ $$B_1'\cdot B_2' = 0 = B_1'\times B_2'$$

From Wikipedia: In $\Lambda^2\mathbb R^4$,

"every bivector can be written as the sum of two simple bivectors. It is useful to choose two orthogonal bivectors for this, and this is always possible to do."


Here's a simple example, with $n = 4$: $$B_1 = e_1\wedge e_2 = e_1e_2$$ $$B_2 = (e_1 + e_3)\wedge e_4 = e_1e_4 + e_3e_4$$ $$B = B_1 + B_2 = e_1e_2 + e_1e_4 + e_3e_4$$ $$B_1B_2 = -e_2e_4 + e_1e_2e_3e_4$$ $$B_1\cdot B_2 = 0 \neq B_1\times B_2 = -e_2e_4$$
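(This example can be checked numerically; here is a sketch assuming the Python `clifford` package, with `|`, `^`, `*` as its inner, outer, and geometric products, and the commutator $\times$ computed by hand.)

```python
import clifford as cf

layout, blades = cf.Cl(4)            # Euclidean R^4
e1, e2, e3, e4 = blades['e1'], blades['e2'], blades['e3'], blades['e4']

B1 = e1 ^ e2
B2 = (e1 + e3) ^ e4

print(B1 | B2)                       # B1 . B2  = 0
print((B1*B2 - B2*B1) / 2)           # B1 x B2  = -(e2^e4), nonzero
print(B1 ^ B2)                       # B1 ^ B2  = e1^e2^e3^e4
```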

How can I rewrite $B = B_1' + B_2'$ with $B_1'\cdot B_2' = 0 = B_1'\times B_2'$ ?


EDIT1

After doing some algebra, I arrived at these equations:

$$B_1' = \frac{B+Q}{2}$$

$$B_2' = \frac{B-Q}{2}$$

$$Q^2 = B\cdot B - B\wedge B$$

$$B^2 = Q\cdot Q - Q\wedge Q$$

$$B\times Q = 0$$

$$B\wedge Q = 0$$

We only need to solve for $Q$ in terms of $B$. I was able to take a square root of the third equation, $Q^2 = B\cdot B - B\wedge B$ (by guessing that $Q = xe_1e_2+ye_3e_4$), but I didn't find the specific root that satisfies the other equations.
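For concreteness, here is how that square root can be found symbolically for the example bivector above (a sketch assuming sympy; the ansatz reduces the equation to two polynomial conditions on $x$ and $y$):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
# Ansatz Q = x e12 + y e34:  Q^2 = -(x^2 + y^2) + 2xy e1234,
# while B.B - B^B = -3 - 2 e1234, so match the scalar and e1234 parts.
sols = sp.solve([x**2 + y**2 - 3, 2*x*y + 2], [x, y])
print(sols)   # the four sign/order choices of ((1 + sqrt(5))/2, (1 - sqrt(5))/2)
```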


EDIT2

After doing some more algebra, I find that, if $Q$ is defined as the reflection of $B$ along some unknown vector $v\neq0$,

$$Q=v^{-1}Bv$$

$$B_1=\frac{v^{-1}vB+v^{-1}Bv}{2}=v^{-1}(v\wedge B)$$

$$B_2=\frac{v^{-1}vB-v^{-1}Bv}{2}=v^{-1}(v\cdot B)$$

then $B_1$ and $B_2$ are blades, and $B_1\cdot B_2=0$ regardless of $v$, and $B_1\times B_2=0$ if and only if $v\wedge\big((v\cdot B)\cdot B\big)=0$. This means that $(v\cdot B)\cdot B$ must be parallel to $v$; in other words, $v$ is an eigenvector of the operator $(B\,\cdot)^2$. It follows that $v\cdot B=w$ is also an eigenvector with the same eigenvalue, and $v\cdot w=0$.

Generalizing, it looks like we want to find an orthogonal set of eigenvectors $v_1,v_2,v_3,\cdots$ of $(B\,\cdot)^2$, so that

$$B_1=v_1^{-1}(v_1\cdot B),\quad B_2=v_2^{-1}(v_2\cdot B),\quad B_3=v_3^{-1}(v_3\cdot B),\quad\cdots$$

Of course, all vectors $v$ must also be orthogonal to all $w=v\cdot B$.
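Numerically this is an ordinary symmetric eigenvalue problem. Here is a numpy sketch for the example bivector $B=e_1e_2+e_1e_4+e_3e_4$: the antisymmetric matrix $A$ represents $v\mapsto v\cdot B$ in the basis $e_1,\dots,e_4$, so $A^2$ represents $(B\,\cdot)^2$ and `np.linalg.eigh` applies.

```python
import numpy as np

# Columns of A are the coordinates of e_j . B for B = e12 + e14 + e34:
#   e1.B = e2 + e4,   e2.B = -e1,   e3.B = e4,   e4.B = -e1 - e3
A = np.array([[ 0., -1.,  0., -1.],
              [ 1.,  0.,  0.,  0.],
              [ 0.,  0.,  0., -1.],
              [ 1.,  0.,  1.,  0.]])

eigvals, eigvecs = np.linalg.eigh(A @ A)       # A@A is symmetric
print(eigvals)   # -(3+sqrt(5))/2 twice, then -(3-sqrt(5))/2 twice
# Each eigenvector v gives a blade v^{-1}(v . B); the two eigenvectors
# sharing an eigenvalue span the plane of that blade.
```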


mr_e_man
  • I believe that this paper answers your question in full in section 6. In particular, for the orthogonal decomposition to exist it is sufficient that the polynomial $$ \sum_{i=0}^k\langle W_i^2\rangle(-\lambda)^{k-i},\quad W_i = \frac1{i!}\underbrace{B\wedge\cdots\wedge B}_{i\text{ times}},\quad k = \lfloor n/2\rfloor $$ have all real roots. I believe if it has a non-real root then the decomposition does not exist (but I am unsure). If it has all real roots but a root has multiplicity greater than one, I think things are more complicated. – Nicholas Todoroff Nov 30 '22 at 21:51
  • In that paper, they prove Cartan-Dieudonne only for Euclidean spaces, but then assume that it's true for arbitrary signatures. It's not true. With signature $+-0$, that is $e_1^2=1,\;e_2^2=-1,\;e_3^2=0$, the product of four unit vectors $$e_1(e_1+e_3)e_2(-e_2+e_3)=1+(e_1+e_2)e_3$$ cannot be written as a product of two vectors (or any number of independent vectors), because the dot product of two vectors in the $(e_1+e_2)\wedge e_3$ plane would be $0$, not $1$. Thus their notion of graded symmetries is broken. – mr_e_man Dec 01 '22 at 23:14
  • And in section 3.4 they claim that bireflections are bivector exponentials, but $e_1e_2$ (with signature $+-$) is not an exponential. (I'm still reading...) – mr_e_man Dec 01 '22 at 23:15
  • Cartan-Dieudonne is true for all non-degenerate spaces (I don't want to think too hard about the degenerate case right now). It's even the first sentence of the wiki article, and I just checked Grove's Classical Groups and Geometric Algebra. I don't understand your argument about the product of 4 vectors not reducing to 2 vectors; I assume what you mean is that it is a counterexample to their "Invariant Decomposition" theorem at the beginning, though I will have to think about it. – Nicholas Todoroff Dec 01 '22 at 23:33
  • In any case, what they do a bad job of making clear is that they implicitly accept solutions in the complexification, so with your point about $e_1e_2$ they would say $e_1e_2 = \exp i\tfrac\pi2 e_1e_2$ where $i$ is a scalar imaginary unit. Despite that, I think the ideas they present for getting the orthogonal bivector composition are worth looking at even if we aren't interested in the complexification. – Nicholas Todoroff Dec 01 '22 at 23:36
  • That exponential would be $ie_1e_2$, not $e_1e_2$. -- The product of four vectors (four reflections) is a counter-example to Cartan-Dieudonne, in a degenerate space. (In fact that product represents a translation along a null vector, using the PGA of the Lorentzian plane.) The definition of "graded symmetry" at the end of section 2.3 cannot be applied to this product. – mr_e_man Dec 02 '22 at 00:04
  • I am going to think more about Cartan-Dieudonne and your alleged counterexample before giving a proper response, but I think you are correct. As for $e_1e_2$, you are right and that is my mistake, but $e_1e_2 = -i\exp i\tfrac\pi2e_1e_2$, and $E = e_1e_2$ is not itself a bireflection, its conjugation $X \mapsto E^{-1}XE$ is. I think it is true that products of two vectors are a scalar unit (i.e. $\pm1, \pm i$) times a bivector exponential, and upon conjugation the scalar cancels and we see that the exponential does represent a bireflection... – Nicholas Todoroff Dec 02 '22 at 00:25
  • ...That's not necessarily what they meant though; honestly, I've never really read anything before section 6, I'm just interested in how they try to find the orthogonal bivector composition. I think that their definition of graded symmetry group still works, just not necessarily how they intend... – Nicholas Todoroff Dec 02 '22 at 00:29
  • Good point, about the scalars cancelling. Yes, if we complexify, then any product of two invertible vectors can be written as $uv=\exp(c+B)$, where $c$ is a (complex) scalar and $B$ is a bivector. – mr_e_man Dec 02 '22 at 00:59
  • The essence of Cartan-Dieudonne is that (1) every orthogonal transformation is a composition of reflections and (2) the number of reflections is bounded. Let $(+^p,-^q,0^r)$ be the signature with $m = p+q$ and $n = p+q+r$. If rather than the monoid $O(p,q,r)$ of isometries we look at the space $\Sigma(p,q,r)$ of bijective linear isometries, then Cartan-Dieudonne still holds but with the bound on the number of reflections being $2m$ rather than $n$. This is consistent with your counterexample... – Nicholas Todoroff Dec 05 '22 at 18:37
  • ...$\Sigma(p,q,r)$ is also the space representing the action of the Lipschitz group $\Gamma(p,q,r) \subseteq Cl_{p,q,r}$. We have to take care in defining $Pin(p,q,r)$ since the kernel of $\Gamma(p,q,r) \to \Sigma(p,q,r)$ is $\mathbb R \oplus Cl(\mathbb R^{n\perp})$ where $Cl(\mathbb R^{n\perp})$ is the subalgebra of $Cl_{p,q,r}$ generated by the radical $\mathbb R^{n\perp}$. Once we do take care of this, then $Pin(p,q,r)$ is still "graded" by reflections as usual. My source in figuring this out was Crumeyrolle's Orthogonal and Symplectic Clifford Algebras section 4.3. – Nicholas Todoroff Dec 05 '22 at 18:43
  • Correction: $\Sigma(p,q,r)$ should be the bijective linear isometries fixing $\mathbb R^{n\perp}$. – Nicholas Todoroff Dec 05 '22 at 18:45
  • Well, the paper uses the specific bound $n$, and says the reflections are linearly independent. But I guess we're not talking about the paper anymore, nor about decomposing bivectors.... -- I proved to myself that any (not necessarily bijective) linear isometry is the composition of (1) an arbitrary linear transformation on the radical and (2) at most $m+2r$ reflections (or $m+r=n$ if the non-degenerate subspace is "anisotropic" (has no null vectors)). But you say the bound is $2m$, not $m+2r$? Maybe they're both valid bounds. – mr_e_man Dec 05 '22 at 19:57
  • My point was really just that, while the authors are not correct, there is a sense in which the spirit of what they are saying is true. I agree this probably isn't relevant to decomposing bivectors, so if we're going to keep talking about this let's move to chat. – Nicholas Todoroff Dec 05 '22 at 20:05
  • (I've asked moderators to unfreeze the chat. Maybe this should be a new Question instead.) Do you know any bounds on the number of vectors in a product? This is related to Cartan-Dieudonne, but isometries aren't always generated by vector reflections, and conversely, vectors don't always generate isometries. Over a non-degenerate $n$-dimensional space, is a product of any number of vectors (including null vectors) a product of at most $n$ vectors? – mr_e_man Jan 24 '23 at 06:00

1 Answer


By using the exponential function to rewrite $B^2$ and take its square root (link, section 2.1.1), I found a formula for bivectors in 4D:

$$B_1 = \left(\frac{-B\cdot B+\sqrt{(B\cdot B)^2-(B\wedge B)^2}+B\wedge B}{2\sqrt{(B\cdot B)^2-(B\wedge B)^2}}\right)B$$

$$B_2 = \left(\frac{B\cdot B+\sqrt{(B\cdot B)^2-(B\wedge B)^2}-B\wedge B}{2\sqrt{(B\cdot B)^2-(B\wedge B)^2}}\right)B$$

Obviously, this is undefined if $|B\cdot B|=\lVert B\wedge B\rVert$. That corresponds to an isoclinic rotation, where the two planes of rotation are not unique. In this case, any vector $v\neq0$ is an eigenvector of $(B\,\cdot\,)^2$, so we can just take $B_1=e_1(e_1\cdot B)$.
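(For example, if $B=e_1e_2+e_3e_4$, then $B\cdot B=-2$ and $B\wedge B=2e_1e_2e_3e_4$, so $(B\cdot B)^2=(B\wedge B)^2=4$ and the formula breaks down; but $e_1\cdot B=e_2$ gives $B_1=e_1e_2$ and $B_2=B-B_1=e_3e_4$, which are orthogonal.)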


Applying this to the example problem,

$$B = e_1e_2+e_1e_4+e_3e_4$$

$$B^2 = -3 + 2e_1e_2e_3e_4;\quad B\cdot B = -3,\quad B\wedge B = 2e_1e_2e_3e_4$$

$$B_1 = \frac{(1+\sqrt5)(e_1e_2+e_3e_4)+(3+\sqrt5)e_1e_4-2e_2e_3}{2\sqrt5}$$

$$B_2 = \frac{(-1+\sqrt5)(e_1e_2+e_3e_4)+(-3+\sqrt5)e_1e_4+2e_2e_3}{2\sqrt5}$$

$$B_1\wedge B_1 = 0 = B_2\wedge B_2$$

(The vanishing wedge product means that they are actually blades, though we don't know what vectors they're made of.)

$$B_1\cdot B_2 = 0 = B_1\times B_2$$

$$B_1 + B_2 = B$$
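Here is a quick numerical check of these claims (a sketch assuming the Python `clifford` package; the scalar values $B\cdot B=-3$ and $(B\wedge B)^2=4$ are plugged in from above):

```python
import math
import clifford as cf

layout, blades = cf.Cl(4)
B = blades['e12'] + blades['e14'] + blades['e34']

BdB  = -3.0                          # B . B
BwB  = 2 * blades['e1234']           # B ^ B
root = math.sqrt(BdB**2 - 4.0)       # sqrt((B.B)^2 - (B^B)^2) = sqrt(5)

B1 = ((-BdB + root + BwB) / (2*root)) * B
B2 = (( BdB + root - BwB) / (2*root)) * B

print(B1 ^ B1, B2 ^ B2)              # 0 0   -> both are blades
print((B1*B2)(0), (B1*B2)(2))        # 0 0   -> B1.B2 = 0 = B1 x B2
print(B1 + B2 - B)                   # 0
```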


To prove that such a decomposition always exists in Euclidean spaces, note that $(B\,\cdot\,)^2$ is symmetric/self-adjoint:

$$\big((v\cdot B)\cdot B\big)\cdot w=(v\cdot B)\cdot(B\cdot w)=-(v\cdot B)\cdot(w\cdot B)$$

The last expression is symmetric under swapping $v$ and $w$, so the spectral theorem applies.


...In fact the result is false for pseudo-Euclidean spaces in general. Take an orthogonal basis $\{\sigma_1,\sigma_2,\tau_1,\tau_2\}$ with $\sigma_1\!^2=\sigma_2\!^2={^+}1,\;\tau_1\!^2=\tau_2\!^2={^-}1$, and consider the bivector

$$J=\sigma_1\frac{\sigma_2+\tau_2}{2}+\tau_1\frac{\sigma_2-\tau_2}{2}\quad=\frac{\sigma_1+\tau_1}{2}\sigma_2+\frac{\sigma_1-\tau_1}{2}\tau_2$$

(The $1/2$ is only to normalize $J^4=1$.)

$$(\sigma_1\cdot J)\cdot J=\frac{-\tau_1}{2},\quad(\tau_1\cdot J)\cdot J=\frac{\sigma_1}{2}$$

$$(\sigma_2\cdot J)\cdot J=\frac{-\tau_2}{2},\quad(\tau_2\cdot J)\cdot J=\frac{\sigma_2}{2}$$

It's easy to see that $(J\,\cdot\,)^2$ has no eigenvectors, so $J$ is not orthogonally decomposable.
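(A quick numpy check: in the basis $(\sigma_1,\sigma_2,\tau_1,\tau_2)$, the matrix of $(J\,\cdot\,)^2$ read off from the relations above has only the non-real eigenvalues $\pm i/2$.)

```python
import numpy as np

# Columns: images of sigma1, sigma2, tau1, tau2 under (J .)^2.
M = 0.5 * np.array([[ 0.,  0.,  1.,  0.],
                    [ 0.,  0.,  0.,  1.],
                    [-1.,  0.,  0.,  0.],
                    [ 0., -1.,  0.,  0.]])
print(np.linalg.eigvals(M))          # all eigenvalues are +-0.5j; none are real
```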

(If it were orthogonally decomposable as $J=v_1\wedge w_1+v_2\wedge w_2$, then $v_1$ would be an eigenvector, with eigenvalue $(v_1\wedge w_1)^2$.)

I suspect that the result is still true for Lorentzian spaces (where only one basis vector has a different signature from the others).

mr_e_man
  • This is claimed in Lundholm and Svensson's https://arxiv.org/pdf/0907.5356.pdf , Theorem 6.14, but I haven't followed the proof. – mr_e_man Aug 12 '19 at 03:29
  • The link is "Physical applications of Geometric Algebra" by Doran and Lasenby. I encourage people reading this to go read section 2.1.1. It looks like those two authors have published a great chunk of all the good material out there for learning Geometric Algebra! – god Sep 09 '23 at 12:17