3

As discussed in this other question, if $A$ and $B$ are matrices such that $A+B=I$, then they trivially commute (since $B=I-A$, we have $AB=A-A^2=BA$), and thus if they are both diagonalisable they are also mutually diagonalisable.

The same argument does not, however, apply when summing more than two such matrices. Suppose then that $$\sum_{i=1}^n A_i = I.$$ The case $A_i\ge0$ is the one I'm most interested in, but if positivity turns out to be irrelevant here, as may well be the case, feel free to weaken this constraint (to, say, Hermitian, normal, or just diagonalisable matrices).

If $\sum_i A_i=I$ then I can say, for example, that $[A_1,A_2+\cdots+A_n]=0$, and thus $A_1$ and $\sum_{i>1} A_i$ are mutually diagonalisable. But I cannot then iterate the argument by splitting $A_2$ from $A_3+\cdots+A_n$: in the common eigenbasis of $A_1$ and $\sum_{i>1}A_i$ these sum to a diagonal matrix, but not to the identity.

So does the result about mutual diagonalisability only work for $n=2$? A counterexample of three or more non-mutually-diagonalisable matrices summing to the identity would be a good answer.

glS
  • 7,963

3 Answers

4

Let $$ A_1 =\frac{1}{9} \begin{bmatrix} 3 & 2 & -1\\ 2 & 3 & -1\\ -1 & -1 & 3\\ \end{bmatrix}, \quad A_2 =\frac{1}{9} \begin{bmatrix} 3 & -1 & 2\\ -1 & 3 & -1\\ 2 & -1 & 3\\ \end{bmatrix}, \quad A_3 =\frac{1}{9} \begin{bmatrix} 3 & -1 & -1\\ -1 & 3 & 2\\ -1 & 2 & 3\\ \end{bmatrix}. $$ Then one can check that $\sum_i A_i = I$ and each $A_i\geq 0$, yet the $A_i$ do not commute.
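A quick numerical sanity check of these three properties (a sketch using NumPy; the variable names are mine):

```python
import numpy as np

A1 = np.array([[3, 2, -1], [2, 3, -1], [-1, -1, 3]]) / 9
A2 = np.array([[3, -1, 2], [-1, 3, -1], [2, -1, 3]]) / 9
A3 = np.array([[3, -1, -1], [-1, 3, 2], [-1, 2, 3]]) / 9

# They sum to the identity...
assert np.allclose(A1 + A2 + A3, np.eye(3))
# ...each is positive semidefinite (smallest eigenvalue >= 0, up to roundoff)...
assert all(np.linalg.eigvalsh(A).min() >= -1e-12 for A in (A1, A2, A3))
# ...yet A1 and A2 fail to commute.
assert not np.allclose(A1 @ A2, A2 @ A1)
```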

glS
  • 7,963
Anand
  • 1,246
2

It suffices to consider the case where $n=3$ if we put $0=A_4=A_5=\cdots$.

Let $A_1$ be any positive diagonal matrix whose diagonal entries are distinct and smaller than $1$. Let $B$ be any real symmetric matrix with a zero diagonal and nonzero off-diagonal entries. Then $A_2:=\frac12(I-A_1)+\epsilon B$ and $A_3:=\frac12(I-A_1)-\epsilon B$ are positive definite when $\epsilon$ is sufficiently small, but $A_1$ and $A_2$ are not simultaneously diagonalisable, because $A_1$ commutes only with diagonal matrices.
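One concrete instantiation of this construction can be checked numerically (a sketch; the specific choices of $A_1$, $B$, and $\epsilon$ below are mine, not forced by the argument):

```python
import numpy as np

# A diagonal matrix with distinct positive entries, all smaller than 1
A1 = np.diag([0.2, 0.5, 0.8])
# A real symmetric matrix with zero diagonal and nonzero off-diagonal entries
B = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
eps = 0.01  # small enough that A2, A3 below stay positive definite

A2 = (np.eye(3) - A1) / 2 + eps * B
A3 = (np.eye(3) - A1) / 2 - eps * B

assert np.allclose(A1 + A2 + A3, np.eye(3))
# A2 and A3 are positive definite for this eps
assert np.linalg.eigvalsh(A2).min() > 0
assert np.linalg.eigvalsh(A3).min() > 0
# A1 does not commute with A2 (A1 commutes only with diagonal matrices)
assert not np.allclose(A1 @ A2, A2 @ A1)
```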

user1551
  • 149,263
0

Here is another simple counterexample, which I built from the one suggested in the other answer (by making the eigenvalues "simple" and working in the eigenbasis of one of the matrices):

$$ A_1 = \begin{pmatrix}\frac13 & 0 & 0 \\ 0 & \frac23 & 0 \\ 0 & 0 & 0\end{pmatrix}, \quad A_2 = \begin{pmatrix}\frac13 & 0 & 0 \\ 0 & \frac16 & \frac{1}{2\sqrt3} \\ 0 & \frac{1}{2\sqrt3} & \frac12\end{pmatrix}, \quad A_3 = \begin{pmatrix}\frac13 & 0 & 0 \\ 0 & \frac16 & -\frac{1}{2\sqrt3} \\ 0 & -\frac{1}{2\sqrt3} & \frac12\end{pmatrix}. $$ The gist of it is that we can have two noncommuting $2\times2$ matrices $A,B\ge0$, such that $A+B\neq I$ is diagonal. By suitably embedding them in a larger space we get our counterexample. In this example, the $\frac1{2\sqrt3}$ factors can be replaced by any nonzero $c$ with $|c|\le\frac{1}{2\sqrt3}$ (for larger $|c|$ the matrices fail to be positive semidefinite, and for $c=0$ everything commutes).
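This counterexample can also be verified numerically (a sketch; the variable names are mine):

```python
import numpy as np
from math import sqrt

c = 1 / (2 * sqrt(3))
A1 = np.diag([1/3, 2/3, 0.])
A2 = np.array([[1/3, 0, 0], [0, 1/6,  c], [0,  c, 1/2]])
A3 = np.array([[1/3, 0, 0], [0, 1/6, -c], [0, -c, 1/2]])

assert np.allclose(A1 + A2 + A3, np.eye(3))
# All three are positive semidefinite (A2, A3 sit on the boundary: det of
# their lower-right block is 1/12 - c^2 = 0 for this maximal c)
assert all(np.linalg.eigvalsh(A).min() >= -1e-12 for A in (A1, A2, A3))
# A2 and A3 do not commute
assert not np.allclose(A2 @ A3, A3 @ A2)
```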

More generally, while $A,B\ge0$ with $A+B=I$ implies $[A,B]=0$, if $A+B=D$ is diagonal but not a multiple of the identity, then we can have $[A,B]\neq0$. For example, for $2\times2$ matrices $A,B\ge0$, if $$A+B =\begin{pmatrix}a&0\\0&b\end{pmatrix}, \qquad a,b \ge0,$$ then $$A=\begin{pmatrix}\alpha&\gamma\\\bar\gamma&\beta\end{pmatrix}, \qquad B=\begin{pmatrix}a-\alpha&-\gamma\\-\bar\gamma&b-\beta\end{pmatrix},$$ for any set of coefficients such that $0\le \alpha\le a$, $0\le \beta\le b$, and $$|\gamma| \le \min\left\{\sqrt{\alpha\beta}, \sqrt{(a-\alpha)(b-\beta)}\right\}.$$ The last condition means in particular that if either $\alpha=0$ or $\alpha=a$ then we must have $\gamma=0$, and similar conditions hold for $\beta$.
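One admissible choice of parameters illustrates the parametrisation (a sketch; the specific values of $a$, $b$, $\alpha$, $\beta$, $\gamma$ are mine and satisfy the constraints above):

```python
import numpy as np

a, b = 1.0, 2.0                       # diagonal of D, not a multiple of I
alpha, beta, gamma = 0.5, 1.0, 0.5    # |gamma| <= min(sqrt(0.5), sqrt(0.5))

A = np.array([[alpha, gamma], [gamma, beta]])
B = np.array([[a - alpha, -gamma], [-gamma, b - beta]])

# A + B is the diagonal matrix D
assert np.allclose(A + B, np.diag([a, b]))
# Both A and B are positive semidefinite
assert np.linalg.eigvalsh(A).min() >= 0
assert np.linalg.eigvalsh(B).min() >= 0
# ...and yet they do not commute
assert not np.allclose(A @ B, B @ A)
```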

glS
  • 7,963