
Using geometric algebra, one may define the multivector derivative $∂_X$ with respect to a general multivector $X$ as $$ ∂_X ≔ \sum_J \mathbf e^J (\mathbf e_J * ∂_X) $$ where each “component” $\mathbf e_J * ∂_X$ is defined by $$ (\mathbf e_J * ∂_X)f(X) ≔ \frac{\mathrm{d}}{\mathrm{d}\tau}f(X + \tau\mathbf e_J)\Big|_{\tau=0} .$$
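For orientation, here is one case where I can check the definition directly (a worked example of my own, using the completeness relation $\sum_J \mathbf e^J(\mathbf e_J * A) = A$ noted in the Notation section below): for the linear scalar-valued function $f(X) = X * A$ with $A$ a fixed multivector, $$ (\mathbf e_J * ∂_X)(X * A) = \frac{\mathrm{d}}{\mathrm{d}\tau}\,(X + \tau\mathbf e_J) * A\,\Big|_{\tau=0} = \mathbf e_J * A, \qquad\text{so}\qquad ∂_X (X * A) = \sum_J \mathbf e^J (\mathbf e_J * A) = A .$$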

Notation

  • $A * B \equiv ⟨AB⟩_0$ denotes the scalar product. For any multivector $A$, we have $\sum_J \mathbf e^J(\mathbf e_J * A) = A$. In the literature, parentheses are often dropped with the understanding that $A*B\,C ≡ (A*B)C$.
  • We employ multi-index notation, $\mathbf e_J = \mathbf e_{j_1}∧\cdots∧\mathbf e_{j_k}$. (If $k = 0$ then $\mathbf e_J = 1$.) Reciprocal basis blades are reversed, $\mathbf e^J = \mathbf e^{j_k}∧\cdots∧\mathbf e^{j_1}$, so that $\mathbf e^I * \mathbf e_J = δ^I{}_J$ is always satisfied. (A small worked example follows this list.)
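For concreteness, in $n = 2$ dimensions (my own example, assuming an orthonormal Euclidean basis so that $\mathbf e^i = \mathbf e_i$) the four basis blades and the reversed reciprocal convention look like $$ \{\mathbf e_J\} = \{1,\ \mathbf e_1,\ \mathbf e_2,\ \mathbf e_1 ∧ \mathbf e_2\}, \qquad \mathbf e^J * \mathbf e_J = ⟨(\mathbf e^2 ∧ \mathbf e^1)(\mathbf e_1 ∧ \mathbf e_2)⟩_0 = ⟨\mathbf e_2 \mathbf e_1 \mathbf e_1 \mathbf e_2⟩_0 = 1 \quad\text{for } J = (1,2). $$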

Problem

I’m having a humiliating time trying to sanity-check this definition by verifying, e.g., $∂_X X = n$, as stated in eq. (2.29) of [2] or eq. (7.8) of [3]. My computation begins as follows. $$ ∂_X X = \sum_J \mathbf e^J (\mathbf e_J * ∂_X)X = \sum_J \mathbf e^J \frac{\mathrm{d}}{\mathrm{d}\tau} (X + \tau\mathbf e_J)\Big|_{\tau=0} = \sum_J \mathbf e^J \mathbf e_J .$$ There seems to be no room for confusion here. But this is not $n$. Indeed, \begin{align} \sum_J \mathbf e^J \mathbf e_J &= \sum_{k=0}^n \sum_{j_1 < \cdots < j_k} \underbrace{\mathbf e^{j_1\cdots j_k}\mathbf e_{j_k\cdots j_1}}_{1} = \sum_{k=0}^n \binom{n}{k} = 2^n .\end{align} This contradicts Proof 46 of [3], which includes the step “$\sum_{J_d} δ^J{}_J = d$” (a sum over multi-indices in $d$ dimensions), which I can’t see to be true!
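To convince myself the $2^n$ count is not an algebra slip, here is a minimal numeric sketch (my own, not from the references), assuming a Euclidean orthonormal basis so that the reciprocal blade $\mathbf e^J$ is simply the reverse of $\mathbf e_J$:

```python
from itertools import combinations

def blade_product(a, b):
    """Geometric product of basis blades of an orthonormal Euclidean basis.

    Blades are tuples of increasing indices; returns (sign, blade).
    """
    idx = list(a) + list(b)
    sign = 1
    # Bubble sort into canonical order; each transposition of two
    # distinct anticommuting generators flips the sign.
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    # Contract adjacent repeated indices, since e_i e_i = 1.
    out = []
    for k in idx:
        if out and out[-1] == k:
            out.pop()
        else:
            out.append(k)
    return sign, tuple(out)

def reversion_sign(k):
    """Reversion of a grade-k blade contributes (-1)^(k(k-1)/2)."""
    return (-1) ** (k * (k - 1) // 2)

n = 4
total = 0
for k in range(n + 1):
    for J in combinations(range(1, n + 1), k):  # all 2^n multi-indices
        sign, rest = blade_product(J, J)        # e_J e_J, a pure scalar
        assert rest == ()
        total += reversion_sign(k) * sign       # scalar part of e^J e_J
print(total)  # prints 16 == 2**n, not n
```

For $n = 4$ this prints $16 = 2^4$, matching the binomial count above.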

My failure is easily generalised: in trying to show $∂_X X^2 = 2X$, we have \begin{align} ∂_X X^2 &= \sum_J \mathbf e^J(\mathbf e_J * ∂_X)X^2 = \sum_J \mathbf e^J \frac{\mathrm{d}}{\mathrm{d}\tau} (X + \tau\mathbf e_J)^2\Big|_{\tau=0} \\ &= \sum_J \mathbf e^J(\mathbf e_J X + X \mathbf e_J) = 2^n X + \sum_J \mathbf e^J X \mathbf e_J .\end{align}

Note that it is easy to verify these results with the less general vector derivative, $\vec ∂ ≔ \mathbf e^i ∂_i$ where $∂_i = \mathbf e_i * ∂_X$ in the notation above. Then, if $X = X^i \mathbf e_i$ is the position vector, we have $∂_i X = \mathbf e_i$ and thus $\vec ∂ X = \mathbf e^i ∂_i X = \mathbf e^i \mathbf e_i = n$, and $$ \vec ∂ X^2 = \mathbf e^i \frac{\mathrm{d}}{\mathrm{d}\tau} (X + \tau\mathbf e_i)^2\Big|_{\tau=0} = \mathbf e^i(\mathbf e_i X + X \mathbf e_i) = 2\mathbf e^i \, (\mathbf e_i * X) = 2X ,$$ using $\mathbf e_i X + X \mathbf e_i = 2\,\mathbf e_i \cdot X = 2\,(\mathbf e_i * X)$ for vectors. Clearly I am misinterpreting the way in which the vector derivative $\vec ∂ = \mathbf e^i(\mathbf e_i * ∂_X)$ is ‘generalised’ to have components at all grades, $∂_X = \sum_J \mathbf e^J (\mathbf e_J * ∂_X)$. Could someone with fresh eyes help me out?


References

  1. Lasenby and Doran, “Multivector Lagrangian Fields” – Ch. 1.
  2. Hestenes and Sobczyk, “Clifford Algebra to Geometric Calculus” – Ch. 2, §2.
  3. Hitzer, “Multivector Differential Calculus” – page 3.

1 Answer


Your computation is correct, but your understanding of “$d$-dimensional subspace” and of the function meant by $X$ in $∂_X X$ is not. To avoid confusion, let $f(X)$ be the function whose derivative you're trying to compute.

On page 57 of Hestenes and Sobczyk, just above the list of identities (2.28a)-(2.35), it says $X$ is the identity function on a linear subspace of dimension $d$. By this they mean the orthogonal projection onto a certain $d$-dimensional subspace of the algebra. (E.g., $X ↦ ⟨X⟩_1$.) Explicitly, if the subspace is denoted by $Y$, and we pick an orthonormal basis of multivectors $Y_1, \dots, Y_d$, then $X ↦ \sum_{k=1}^d (X * Y_k)\,Y_k$ is the projection function meant by $X$. Note that the $Y_i$ are basis multivectors of the algebra $G(V)$, not basis vectors $\mathbf e_i$ of a vector subspace of $V$.
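For instance (assuming an orthonormal Euclidean basis for concreteness), taking $Y$ to be the grade-$1$ subspace with $Y_k = \mathbf e_k$ for $k = 1, \dots, n$ (so $d = n$), the recipe reproduces exactly the grade projection mentioned above: $$ X ↦ \sum_{k=1}^n (X * \mathbf e_k)\,\mathbf e_k = \sum_{k=1}^n X^k \mathbf e_k = ⟨X⟩_1 .$$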

In your computation, you take $Y$ to be the whole geometric algebra (on an $n$-dimensional space), so your $X$ is simply the identity function $f(X)=X$, and $Y$ has dimension $d=2^n$. This is the sense in which your computation is correct.

For the more general $X$ corresponding to a projection operator, the computation reads $(\mathbf e_J * ∂_X)f(X) = \frac{\mathrm{d}}{\mathrm{d}\tau}f(X + \tau\mathbf e_J)\big|_{\tau=0} = f(\mathbf e_J)$ for linear $f$, which for the above-described projection is explicitly $f(\mathbf e_J) = \sum_{k=1}^d (\mathbf e_J * Y_k)\,Y_k$.


In summary, if $G(V)$ is a geometric algebra over a vector space of dimension $\dim V = n$, then $\dim G(V) = 2^n$ and the multivector derivative of the identity is indeed $$ ∂_X X = 2^n .$$

To make contact with the vector derivative, you must include the projection onto the grade-$1$ subspace: $$ ∂_X ⟨X⟩_1 = n .$$
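Spelled out with the definition above, only the single-index terms survive the grade projection: $$ ∂_X ⟨X⟩_1 = \sum_J \mathbf e^J ⟨\mathbf e_J⟩_1 = \sum_{i=1}^n \mathbf e^i \mathbf e_i = n .$$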

Written differently, the vector and multivector derivatives are related by $\vec ∂ ≔ ⟨∂⟩_1$: $$ \vec ∂ = ⟨∂⟩_1 = \Big\langle \sum_J \mathbf e^J (\mathbf e_J * ∂) \Big\rangle_1 = \sum_J ⟨\mathbf e^J⟩_1 (\mathbf e_J * ∂) = \mathbf e^i (\mathbf e_i * ∂) ,$$ where $J$ is a multi-index and $i$ is a single index. Then we have $\vec ∂_X X = n$.

Similarly, your ‘unexpected’ result $$ ∂_X X^2 = 2^n X + \sum_J \mathbf e^J X \mathbf e_J $$ is indeed correct. But by including a projection, you can easily verify the familiar results $$ \vec ∂_X X^2 = ⟨∂_X⟩_1 X^2 = 2X \quad\text{or}\quad ∂_X (⟨X⟩_1)^2 = ∂_X ⟨X^2⟩_0 = 2X $$ which you expected for the vector derivative. (In verifying these, note that $\sum_i \mathbf e^i X \mathbf e_i = (2 - n)X$ for a vector $X$.)
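The parenthetical identity follows, for a vector $X$, from $\mathbf e^i X = 2(\mathbf e^i \cdot X) - X \mathbf e^i$: $$ \sum_i \mathbf e^i X \mathbf e_i = \sum_i \big( 2(\mathbf e^i \cdot X)\,\mathbf e_i - X\,\mathbf e^i \mathbf e_i \big) = 2X - nX = (2-n)X .$$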

  • I agree that I’m misunderstanding something, but I don’t think this solves it. Even if we restrict the summation to basis blades over a $d$-dimensional subspace as you rightly point out, the result is still $\sum_J \mathbf e^J \mathbf e_J = 2^d$, since $J$ is a multi-index ranging over grades $0, 1, …, d$. – Jollywatt Mar 25 '22 at 05:35
  • The result is not $\sum_J\mathbf e^J\mathbf e_J$, it is $\sum_J\mathbf e^Jf(\mathbf e_J)$ where $f$ is the projection operator onto the $d$-dimensional subspace that $X$ is supposed to be the identity of. I've edited my answer to clarify this point. – Vladimir Sotirov Mar 25 '22 at 06:16
  • Aha. My confusion is that $d$ is the dimension of a linear subspace of the algebra, not of the underlying vector space. The difference is important; a $d$-dimensional vector subspace $Y$ of $V = \operatorname{span}\{\mathbf e_1, …, \mathbf e_n\}$ generates $2^d$ linearly independent basis blades of grades $0, …, d$. On the other hand, a $d$-dimensional subspace $Y$ of the algebra $G(V)$ contains $d$ linearly independent basis blades. – Jollywatt Mar 26 '22 at 00:38
  • I didn’t think “$d$-dimensional subspace” was meant in the latter sense, because that could include strange subspaces like $\operatorname{span}\{\mathbf e^1, \mathbf e^2, \mathbf e^3 \mathbf e^4\}$ which aren’t even algebraically closed… – Jollywatt Mar 26 '22 at 00:41