4

Let $A,B$ be positive matrices on a finite-dimensional space, and suppose that $A+B=I$. In the special case in which $A,B$ are projectors, we know that this implies that they project onto mutually orthogonal subspaces, as shown for example here and in the links therein.

Can something be said about the more general case of $A,B\ge0$?

If $A,B$ have orthogonal supports, it is not hard to see that each must equal the identity on its own support. We can therefore, I think, restrict attention to the case $\operatorname{im}(A)=\operatorname{im}(B)$: on any subspace on which only one of the two operators acts, that operator equals the identity and the other acts like $0$.
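To spell out the orthogonal-support case in block form: the two supports together span the whole space (any vector orthogonal to both would be annihilated by $A+B=I$), so with respect to the decomposition $\operatorname{supp}(A)\oplus\operatorname{supp}(B)$ we can write
$$A=\begin{pmatrix}A_1&0\\0&0\end{pmatrix},\qquad B=\begin{pmatrix}0&0\\0&B_2\end{pmatrix},\qquad A+B=\begin{pmatrix}A_1&0\\0&B_2\end{pmatrix}=I\;\Longrightarrow\;A_1=I,\ B_2=I.$$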

glS
  • 7,963
  • What about $A=I/2+X$, $B=I/2-X$ for any symmetric $X$ with singular values less than 1/2. – Nick Alger Nov 30 '19 at 20:10
  • 1
    @NickAlger sure, but that's the easy case with $A,B$ diagonal, where you can just say that the condition amounts to each pair of eigenvalues summing to $1$. I was wondering more on the lines of the possible constraints on the relation between the eigenvectors of $A$ and $B$ – glS Nov 30 '19 at 20:15
  • Given your background I'm sure you're aware, but for others: This is a POVM on two operators. – Semiclassical Nov 30 '19 at 22:29
  • @Semiclassical indeed, that's one context in which this appears, but it's not the only one. For example, real and imaginary parts of a unitary $U=A+iB$ satisfy $A^T A+ B^T B=I$ (on top of another condition). Understanding the structure of positive mats summing to $I$ is also, I think, the main step to prove the CS decomposition (see e.g. this other question of mine) – glS Dec 01 '19 at 14:11

2 Answers

3

On second thought, this is actually trivial. If $A+B=I$ then $B=I-A$, thus $[A,B]=0$. Being commuting Hermitian matrices, $A$ and $B$ are therefore simultaneously diagonalisable, and each pair of eigenvalues corresponding to the same eigenvector sums to $1$.
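Explicitly, using only $B=I-A$:
$$[A,B]=A(I-A)-(I-A)A=A-A^2-A+A^2=0.$$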

Indeed, this is not even about positive matrices. The same kind of argument works for arbitrary diagonalisable matrices summing to the identity.

More generally, if $A$ is diagonalisable and $A+B=I$, then $B$ is also diagonalisable, and in some basis we have $A=\operatorname{diag}(\lambda_1,...,\lambda_n)$ and $B=\operatorname{diag}(1-\lambda_1,...,1-\lambda_n)$. If furthermore $A$ and $B$ are (Hermitian and) positive semidefinite, then there is an orthonormal basis with respect to which $A=\operatorname{diag}(\lambda_1,...,\lambda_n)$ with $0\le \lambda_i\le 1$ (the upper bound because $B=I-A\ge0$), and again $B=\operatorname{diag}(1-\lambda_1,...,1-\lambda_n)$.
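A concrete illustration: $A=\begin{pmatrix}3/4&1/4\\1/4&3/4\end{pmatrix}$ has eigenvalues $1$ and $1/2$ with orthonormal eigenvectors $(1,1)/\sqrt2$ and $(1,-1)/\sqrt2$, while $B=I-A=\begin{pmatrix}1/4&-1/4\\-1/4&1/4\end{pmatrix}$ has the same eigenvectors with eigenvalues $0$ and $1/2$, i.e. $1-\lambda_i$.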

glS
  • 7,963
1

Do you agree with the following (simple and rather obvious) conclusions?

If $A\in\mathbb{R}^{n\times n}$ is diagonalized by an orthogonal $V$, i.e. $V^{-1}=V^{T}$ and $V A V^{-1}=\Lambda_A$, then with $A+B=I$: $$ V A V^{-1} + V B V^{-1} = \Lambda_A + V B V^{-1} = V V^{-1} = I $$ Therefore:

  • $B$ is diagonalized by $V$ because $V B V^{-1}=I-\Lambda_A$ is diagonal.
  • $A$ and $B$ share the same set of eigenvectors.
  • The eigenvalues of $B$ are $\Lambda_B:=I-\Lambda_A$; moreover, each pair of corresponding eigenvalues sums to one, since $\Lambda_A+\Lambda_B=I$. (A quick numerical check is sketched below.)
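A minimal numerical sketch of these conclusions (illustrative only, using numpy): build a random real symmetric $A$ with spectrum in $[0,1]$, set $B=I-A$, and check that the same orthogonal $V$ diagonalises both, with eigenvalues $\lambda_i$ and $1-\lambda_i$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random orthogonal V (QR of a Gaussian matrix) and eigenvalues in [0, 1].
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = rng.uniform(0, 1, size=n)
A = V @ np.diag(lam) @ V.T   # symmetric, positive semidefinite
B = np.eye(n) - A

# A and B commute, hence are simultaneously diagonalisable.
print(np.allclose(A @ B, B @ A))                   # True

# The same V diagonalises B, with eigenvalues 1 - lam.
print(np.allclose(V.T @ B @ V, np.diag(1 - lam)))  # True
```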
glS
  • 7,963
Druidris
  • 322