4

On p. 230 of Werner Greub's Linear Algebra, 4th ed., he gives this proof of the normal form for a skew transformation on a finite-dimensional real inner product space. (Note that Greub's convention for the matrix of a transformation is the transpose of the one normally used with left-hand notation.)

I believe this proof is incorrect because it is not true in general that the $a_n$ defined form an orthonormal basis of the space. For example in $\mathbb{R}^4$, if we define the transformation $\psi$ by $$e_1\mapsto e_2\qquad e_2\mapsto -e_1\qquad e_3\mapsto e_4\qquad e_4\mapsto -e_3$$ where $e_i$ is the $i$-th standard basis vector, then $\psi$ is skew and $\varphi=\psi^2=-\iota$ is diagonalized by the standard basis. If we follow the proof for this example, we get $a_1=e_1$, $a_2=\psi e_1=e_2$, $a_3=e_2$, and $a_4=\psi e_2=-e_1$, so the $a_n$ do not form a basis of $\mathbb{R}^4$.
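The failure is easy to check numerically. Here is a quick NumPy sketch (using the usual left-multiplication convention, not Greub's transpose convention) verifying that $\psi$ is skew, that $\varphi=\psi^2=-\iota$, and that the $a_n$ produced by the proof span only a plane:

```python
import numpy as np

# Column i of Psi is the image of e_i under psi (left-multiplication convention):
# e1 -> e2, e2 -> -e1, e3 -> e4, e4 -> -e3
Psi = np.array([[0., -1., 0.,  0.],
                [1.,  0., 0.,  0.],
                [0.,  0., 0., -1.],
                [0.,  0., 1.,  0.]])

assert np.array_equal(Psi.T, -Psi)             # psi is skew
assert np.array_equal(Psi @ Psi, -np.eye(4))   # phi = psi^2 = -iota

e1, e2 = np.eye(4)[:, 0], np.eye(4)[:, 1]
a1, a2 = e1, Psi @ e1    # a2 = e2
a3, a4 = e2, Psi @ e2    # a4 = -e1

A = np.column_stack([a1, a2, a3, a4])
print(np.linalg.matrix_rank(A))   # prints 2: the a_n span only a plane, not R^4
```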

Does anyone see a way to salvage this proof while still retaining its spirit (in particular, avoiding use of complex numbers)?

blargoner
  • 3,501
  • It is perhaps worth noting that the proof presented does work in the case where each eigenvalue of $\psi^2$ has the minimal multiplicity, namely $2$. – Ben Grossmann Oct 21 '19 at 03:40
  • Actually, even this is incorrect: Greub seems to be making the tacit assumption that consecutive $e_j$ are taken from the same eigenspace when possible. – Ben Grossmann Oct 21 '19 at 03:46
  • Having tried googling a quick answer, I came across a growing list of errata on github that is either yours or a different blargoner's. Assuming that's you, best of luck! I would recommend that you send an email to the publisher asking whether such a list of errata already exists, if you have not done so already. – Ben Grossmann Oct 21 '19 at 04:39
  • @Omnomnomnom Yep, those are mine. Thanks for your help with this proof. – blargoner Oct 21 '19 at 14:13

1 Answer

1

One fix is to be more explicit about how we handle each eigenspace for a negative eigenvalue, as follows.

Suppose that $\lambda_1,\dots,\lambda_d$ are the (distinct) negative eigenvalues of $\varphi = \psi^2$. Then by "the result of section 8.7" (presumably the spectral theorem for symmetric transformations), we can select eigenvectors $e_{j,k}$ such that $$ \varphi \,e_{j,k} = \lambda_j \,e_{j,k}, \quad k = 1,\dots,m_j. $$ That is, $m_j$ is the multiplicity of $\lambda_j$, and $e_{j,1},\dots,e_{j,m_j}$ is an orthonormal basis of the corresponding eigenspace.

For each $\lambda_j$, we produce a new basis $\mathcal B_j$ for the eigenspace via the following recursive process. Initially, we take $S = \operatorname{span}\{e_{j,1},\dots,e_{j,m_j}\}$. We then do the following to $S$:

  • Select an arbitrary unit vector $a_1 \in S$ and define $a_2 = \frac 1{\kappa_j}\psi a_1$, where $\kappa_j = \sqrt{|\lambda_j|}$.
  • Add $a_1,a_2$ to $\mathcal B_j$.
  • Let $S'$ denote the orthogonal complement of $\operatorname{span}\{a_1,a_2\}$ relative to $S$. If $S' = \{0\}$, then we are done. Otherwise, $S'$ is a smaller $\psi$-invariant subspace of the eigenspace associated with $\lambda_j$; in this case we apply the same process to $S'$.

In a proper writeup of the proof, we should verify that $a_2 = \frac 1{\kappa_j}\psi a_1$ (where $\kappa_j = \sqrt{|\lambda_j|}$) is necessarily a unit vector in the same eigenspace, and that $a_2$ is orthogonal to $a_1$ (a point Greub's text does not seem to mention); I will leave that to you.
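To illustrate, here is a numerical sketch of the recursive process (a hypothetical helper name, NumPy, left-multiplication convention; the rank trimming after each projection is done with an SVD, a detail not in the proof). On the $\mathbb{R}^4$ example from the question, it produces an orthonormal basis in which $\psi$ is block-diagonal:

```python
import numpy as np

def skew_normal_basis(Psi, tol=1e-8):
    """Orthonormal basis in which the skew map Psi is block-diagonal.

    Follows the recursive process above: for each negative eigenvalue
    lambda_j of Phi = Psi^2, repeatedly pick a unit vector a1 in the
    remaining part of the eigenspace, set a2 = Psi a1 / kappa_j, and
    recurse on the orthogonal complement of span{a1, a2}.
    """
    n = Psi.shape[0]
    Phi = Psi @ Psi                        # symmetric, negative semidefinite
    evals, evecs = np.linalg.eigh(Phi)     # eigenvalues in ascending order
    basis = []
    j = 0
    while j < n:
        k = j
        while k < n and abs(evals[k] - evals[j]) < tol:
            k += 1                         # group a repeated eigenvalue
        lam = evals[j]
        S = evecs[:, j:k]                  # orthonormal basis of the eigenspace
        if lam > -tol:                     # kernel of Psi: keep vectors as-is
            basis.extend(S.T)
        else:
            kappa = np.sqrt(-lam)
            while S.shape[1] > 0:
                a1 = S[:, 0]               # an arbitrary unit vector in S
                a2 = Psi @ a1 / kappa      # unit, same eigenspace, _|_ a1
                basis += [a1, a2]
                # Project S onto the orthogonal complement of span{a1, a2},
                # then re-orthonormalize via SVD and drop the rank lost.
                P = np.eye(n) - np.outer(a1, a1) - np.outer(a2, a2)
                U, s, _ = np.linalg.svd(P @ S, full_matrices=False)
                S = U[:, s > tol]
        j = k
    return np.column_stack(basis)

# The example from the question: psi is skew with psi^2 = -iota on R^4.
Psi = np.array([[0., -1., 0.,  0.],
                [1.,  0., 0.,  0.],
                [0.,  0., 0., -1.],
                [0.,  0., 1.,  0.]])
A = skew_normal_basis(Psi)
assert np.allclose(A.T @ A, np.eye(4))    # an orthonormal basis this time
B = A.T @ Psi @ A                         # matrix of psi in that basis
assert np.allclose(B[:2, 2:], 0)          # 2x2 blocks on the diagonal
print(np.round(B, 6))
```

Since $\psi a_1 = \kappa_j a_2$ and $\psi a_2 = \frac{1}{\kappa_j}\psi^2 a_1 = -\kappa_j a_1$, each pair contributes a $2\times 2$ block $\pmatrix{0 & -\kappa_j \\ \kappa_j & 0}$ in this convention (the transpose under Greub's).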

Ben Grossmann
  • 234,171
  • 12
  • 184
  • 355
  • 1
    Incidentally, we can write the non-zero part of the normal form as $$ \pmatrix{0 & \kappa_1 \\ -\kappa_1 & 0 \\ && \ddots \\ &&& 0 & \kappa_p \\ &&& -\kappa_p & 0} = \pmatrix{\kappa_1 \\ & \ddots \\ && \kappa_p} \otimes \pmatrix{0 & 1 \\ -1 & 0} $$ where $\otimes$ denotes a Kronecker product. – Ben Grossmann Oct 21 '19 at 04:21