If $A$ is an $n \times n$ matrix over $\mathbb C$ such that $A^2=A$, then is it true that $\operatorname{trace} A = \operatorname{rank} A$?
- Let $P$ be the linear transformation given by $P(x)=Ax$. If $\{e_1,\dots,e_k\}$ is a basis of $\text{Im}(P)$ and $\{e_{k+1},\dots,e_n\}$ is a basis of $\text{Ker}(P)$, we have $$[P]=\begin{pmatrix} \text{Id}_k & 0 \\ 0 & 0 \end{pmatrix}.$$ – nowhere dense Dec 19 '18 at 17:25
- See also: Proving: “The trace of an idempotent matrix equals the rank of the matrix” – Martin Sleziak Sep 07 '19 at 21:30
7 Answers
Easy to show (for example, from the Jordan normal form): the eigenvalues $\lambda_k$ of $A$ satisfy $\lambda_k^2 = \lambda_k$, i.e., $\lambda_k \in \{0, 1\}$. The trace is the sum of all eigenvalues, and since $A$ is diagonalizable (its minimal polynomial divides $x^2 - x$, which has simple roots), the rank is the number of non-zero eigenvalues, which in this case is the same thing.
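For readers who want to see this numerically, here is a small sketch in Python/NumPy (my own illustration, not part of the answer; the sizes $n = 6$, $r = 3$ are arbitrary choices). It builds an idempotent matrix as $B\,\operatorname{diag}(1,\dots,1,0,\dots,0)\,B^{-1}$ and checks that the eigenvalues are $0$ or $1$ and that the trace equals the rank:

```python
import numpy as np

# Sanity check, not a proof: build an idempotent matrix A = B D B^{-1}
# with D = diag(1,...,1,0,...,0), then compare its eigenvalues, trace and rank.
rng = np.random.default_rng(0)
n, r = 6, 3                        # arbitrary example sizes
B = rng.standard_normal((n, n))    # generically invertible
D = np.diag([1.0] * r + [0.0] * (n - r))
A = B @ D @ np.linalg.inv(B)       # A^2 = A by construction

assert np.allclose(A @ A, A)                       # idempotent
print(np.round(np.linalg.eigvals(A), 6))           # all entries close to 0 or 1
print(np.trace(A), np.linalg.matrix_rank(A))       # both equal r = 3
```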
Yes, it's true. Notice that $A$ is diagonalizable, since the polynomial $x^2-x$, which has simple roots, annihilates it, and the eigenvalues of $A$ belong to the set $\{0,1\}$. So $A$ is similar to $\operatorname{diag}(\underbrace{1,\ldots,1}_{r\;\text{times}},0,\ldots,0)$, hence we see that
$$\operatorname{rank}(A)=r=\operatorname{trace}(A).$$
First, we note that $\;A^2=A\iff A(A-I)=0\;$, so the matrix is a root of the polynomial $\;x(x-1)\;$.
Thus the minimal polynomial of $\;A\;$ divides the above polynomial, which means the only eigenvalues of $\;A\;$ are zero or one, and we already know the matrix is diagonalizable (why?).
If we now pass to the Jordan normal form $\;J_A\;$ of $\;A\;$ (i.e., in this case the diagonal form of the matrix), we see that we get as many $\;1$'s on the diagonal as the rank of the matrix, because $\operatorname{rank} J_A = \operatorname{rank} A$, and thus we have that $$\operatorname{Tr} A= \operatorname{Tr} J_A= \operatorname{rank} A$$
Going through the factorization of the minimal polynomial is valid, but seems overkill to me. Here’s a direct proof phrased in terms of operators rather than matrices:
Proposition. If the linear transformation $P: V\to V$ satisfies $P^2 = P$, then $V = \ker P\oplus\operatorname{im} P$. Furthermore, $P$ maps the first summand to zero and the second one identically to itself.
(Thus $P$ is usually called the projector onto $\operatorname{im} P$ along $\ker P$.)
Proof.
To see that $\ker P\cap\operatorname{im} P = 0$, let’s take any vector $v \in \ker P\cap\operatorname{im} P$. Then $Pv = 0$ because $v$ is in the kernel, and $v = Pw$ for some $w$ because it is in the image. Then $0 = Pv = P^2w = Pw = v$, so the intersection indeed contains only the zero vector.
To see that $\ker P +\operatorname{im} P = V$, we need to decompose an arbitrary vector $v\in V$ into a part in the kernel and a part in the image. Taking a peek at the result we want, let’s take $Pv\in\operatorname{im} P$ for the image part, leaving $v - Pv$ for the kernel part. But $P(v - Pv) = Pv - P^2v = Pv - Pv = 0$, so indeed $v - Pv\in\ker P$.
Any vector in the kernel is mapped to zero no matter what. A vector $v = Pw$ in the image maps to $Pv = P^2w = Pw = v$, that is to say to itself. ∎
Corollary. $\operatorname{tr} P = \dim\operatorname{im} P$.
Proof. Take a basis of $\ker P$ and a basis of $\operatorname{im} P$. Together they make a basis of $V$, because it’s a direct sum of those. The matrix of $P$ in that basis will consist of a zero block for the kernel summand and an identity block for the image summand, so its trace is the size of the latter block. ∎
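If it helps to see the corollary's argument concretely, here is a small sketch in Python/NumPy/SciPy (my own illustration, not part of the answer; the sizes $n = 5$, $r = 2$ and the oblique-projector construction are arbitrary choices). It builds an idempotent $P$, takes a basis of $\ker P$ followed by a basis of $\operatorname{im} P$, and checks that in that basis the matrix of $P$ is a zero block followed by an identity block:

```python
import numpy as np
from scipy.linalg import null_space, orth

# Illustration of the corollary: in a basis consisting of a basis of ker P
# followed by a basis of im P, the matrix of P is a zero block and an identity block.
rng = np.random.default_rng(1)
n, r = 5, 2                                  # arbitrary example sizes
X = rng.standard_normal((n, r))
Y = rng.standard_normal((n, r))
P = X @ np.linalg.inv(Y.T @ X) @ Y.T         # oblique projector: P^2 = P
assert np.allclose(P @ P, P)

K = null_space(P)                            # orthonormal basis of ker P, shape (n, n - r)
R = orth(P)                                  # orthonormal basis of im P,  shape (n, r)
S = np.hstack([K, R])                        # adapted basis: kernel part first
P_adapted = np.linalg.inv(S) @ P @ S
expected = np.diag([0.0] * (n - r) + [1.0] * r)

print(np.allclose(P_adapted, expected))                    # True
print(np.isclose(np.trace(P), np.linalg.matrix_rank(P)))   # True: trace = rank = 2
```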
- +1 because this proof works if $V$ is a finite free $R$-module over an arbitrary commutative ring $R$. – user2154420 Jun 22 '23 at 17:39
Yes, because any projection matrix $A$, i.e., any matrix with $A^2=A$, is conjugate to a block matrix with an identity block of size $r$ and a zero block. Hence $\operatorname{trace}(A)=r=\operatorname{rank}(A)$. See also here, section "canonical form": a projection matrix is diagonalizable, because its minimal polynomial divides $t^2-t$, which splits into distinct linear factors.
Here is a marginally different perspective:
If $Av = \lambda v$ for some nonzero $v$, then $Av = A^2v = A(\lambda v) = \lambda Av$, so we must have either $\lambda = 1$ or $\lambda = 0$. Since the trace is the sum of the eigenvalues counted with multiplicity, $\operatorname{tr} A$ is the algebraic multiplicity of the eigenvalue $1$ in the characteristic polynomial.
Note that $\ker A^m = \ker A$ for all $m \in \mathbb{N}$ (since $A^m = A$), so every Jordan block for the zero eigenvalue is $1 \times 1$, and the part of the Jordan form corresponding to the zero eigenvalue has total size $\nu = \dim \ker A$.
Hence $\dim {\cal R} A = n - \dim \ker A = n - \nu$, and from above, we see that the algebraic multiplicity of the eigenvalue $1$ must be $n -\nu$, hence $\operatorname{tr} A= \dim {\cal R} A$.
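As a quick numerical check of the two facts used here (my own sketch, not part of the answer; the sizes $n = 6$, $r = 4$ are arbitrary choices), one can verify with NumPy/SciPy that the kernel of $A^m$ does not grow with $m$ and that the algebraic multiplicity of the eigenvalue $1$ equals both $n - \nu$ and the trace:

```python
import numpy as np
from scipy.linalg import null_space

# Check of the two facts used above: ker(A^m) = ker(A) for an idempotent A,
# and the algebraic multiplicity of the eigenvalue 1 equals n - dim ker A = tr A.
rng = np.random.default_rng(2)
n, r = 6, 4                                  # arbitrary example sizes
B = rng.standard_normal((n, n))
A = B @ np.diag([1.0] * r + [0.0] * (n - r)) @ np.linalg.inv(B)   # A^2 = A

nu = null_space(A).shape[1]                  # nu = dim ker A
dims = [null_space(np.linalg.matrix_power(A, m)).shape[1] for m in (1, 2, 3)]
print(dims)                                  # [nu, nu, nu]: the kernel does not grow

mult_one = int(np.sum(np.isclose(np.linalg.eigvals(A), 1.0)))
print(mult_one, n - nu, round(float(np.trace(A))))                # all equal r = 4
```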
Yes, it is. There are several ways to prove this. One is the following:
Let $P(X) = X^2 - X$. As $\gcd(X,X-1) = 1$, you can apply the kernel lemma:
$$ \ker P(A) = \ker A \oplus \ker (A-I) $$ Using the hypothesis: $$ P(A) = A^2 - A = 0$$ so you get $$ E = \ker A \oplus \ker (A-I) $$ where $E = \mathbb{C}^n$ is the vector space on which $A$ acts.
Now choose a basis $e_1, e_2, \ldots, e_n$ of $E$ according to this decomposition: $$ Ae_k = e_k \text{ for } k\le p, \qquad Ae_k = 0 \text{ for } k > p $$ for a certain $p$. When you write the matrix, you realize that $p = \operatorname{tr} A$ and $p = \operatorname{rank} A$.