
If the characteristic polynomial $f_A(x)$ has repeated factors, for example $f_A(x)=(x+2)^2(x-1)$ where the factor $(x+2)$ occurs with multiplicity $2$, is there a condition on $A$ that guarantees $m_A(x)=(x+2)(x-1)$ or $(x-1)$ or $(x+2)$? I.e., no repeated factors.

shinzou
    Neither $(x-1)$ nor $(x+2)$ can be the minimal polynomial of your $A$, as it certainly has eigenvectors with eigenvalues $1$ and $-2$. All irreducible factors of $f_A$ must occur in $m_A$ as well, though possibly to a lower power – Hagen von Eitzen Feb 06 '16 at 21:03

2 Answers


(Assuming an algebraically closed field) the minimal polynomial is square-free iff $A$ is diagonalizable. This is most easily seen from the Jordan normal form, but can also be inferred directly by considering invariant subspaces per linear factor.
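
For a concrete illustration, consider the standard example $A=\begin{pmatrix}1&1\\0&1\end{pmatrix}$: here $f_A(x)=m_A(x)=(x-1)^2$ is not square-free, and $A$ is not diagonalizable, since its only eigenvectors are the multiples of $e_1$. By contrast, $A=\begin{pmatrix}1&0\\0&2\end{pmatrix}$ has the square-free minimal polynomial $(x-1)(x-2)$ and is diagonal.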

  • Just to make sure, by "square free" you also mean powers higher than $2$? – shinzou Feb 06 '16 at 21:12
  • @kuhaku that's correct. For example, $(x-1)(x-2)^3$ is not square-free because it has, as a factor, $(x-2)^2$. – Ben Grossmann Feb 06 '16 at 22:52
  • @kuhaku To be precise, a square-free polynomial is one that is not divisible by the square of any non-constant polynomial. (More generally, in a commutative ring, square-free means not divisible by the square of any non-invertible element.) For polynomials there is a practical test that does not require factoring: $P$ is square-free iff $\gcd(P,P')=1$, where $P'$ is the (formal) derivative of $P$. – Marc van Leeuwen Feb 08 '16 at 08:47
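
For readers who want to run that $\gcd(P,P')$ test, here is a short sketch in sympy, working over $\mathbb{Q}$ where the formal-derivative criterion applies directly (the helper name `is_square_free` is made up for this illustration):

```python
from sympy import symbols, gcd, Poly

x = symbols('x')

def is_square_free(P):
    # Test from the comment above: P is square-free iff gcd(P, P') is
    # constant, i.e. of degree 0 (sympy normalizes such a gcd to 1).
    return gcd(P, P.diff(x)).degree() == 0

# (x - 1)(x - 2)**3 is divisible by (x - 2)**2, so it is not square-free.
print(is_square_free(Poly((x - 1)*(x - 2)**3, x)))  # False
# (x - 1)(x - 2) has no repeated factor.
print(is_square_free(Poly((x - 1)*(x - 2), x)))     # True
```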

For any diagonalisable matrix $A$ whose distinct eigenvalues are $\lambda_1,\ldots,\lambda_k$ (meaning that this list contains each eigenvalue only once, regardless of the dimension of the corresponding eigenspace or of the multiplicity of each $\lambda_i$ as a root of the characteristic polynomial), a polynomial $P$ annihilates $A$ if and only if each $\lambda_i$ is a root of $P$. I repeat, it doesn't matter whether some multiplicity should be associated to $\lambda_i$; it suffices that $\lambda_i$ be a root (simple, or maybe multiple) of $P$. This is because $P[A]$ will multiply each eigenvector for $\lambda_i$ by the scalar $P[\lambda_i]$, and as long as that one scalar equals $0$, all eigenvectors of $A$ for $\lambda_i$ will be killed by $P[A]$; if this holds for $\lambda_1,\ldots,\lambda_k$ then all eigenvectors of $A$ are killed, and by the diagonalisability assumption this forces $P[A]=0$. Then of course the minimal polynomial of $A$ is just $(X-\lambda_1)(X-\lambda_2)\ldots(X-\lambda_k)$.
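
On the question's own example: take the diagonalisable matrix $A=\operatorname{diag}(-2,-2,1)$, so that $f_A(x)=(x+2)^2(x-1)$. For $P=(X+2)(X-1)$ one gets $P[A]=(A+2I)(A-I)=\operatorname{diag}(0,0,3)\,\operatorname{diag}(-3,-3,0)=0$, even though $-2$ is only a simple root of $P$; and since neither factor alone annihilates $A$, the minimal polynomial is exactly $m_A(X)=(X+2)(X-1)$.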

So $A$ being diagonalisable is a sufficient condition for the minimal polynomial of $A$ not to have any repeated factors; moreover the factors will all be of degree $1$ (of the form $X-\lambda_i$). The set of irreducible factors of the minimal polynomial is always equal to the set of irreducible factors of the characteristic polynomial, so if you know that the characteristic polynomial factors as a product of factors of degree $1$ (though possibly with some of them raised to a power) and also that $A$ is diagonalisable, then the minimal polynomial must be what you get from the characteristic polynomial by keeping every factor just once, i.e., removing any exponents in the factorisation. So "$A$ diagonalisable" is a condition that ensures what you are asking about.

It happens that being diagonalisable is also a necessary condition for having a minimal polynomial $(X-\lambda_1)(X-\lambda_2)\ldots(X-\lambda_k)$ with all the $\lambda_i$ distinct; it follows that "$A$ diagonalisable" is really the only condition that answers your question. This fact is not quite as easy to see as the previous one, since one needs to consider the possibility that $A$ is not diagonalisable. Nonetheless it can be shown in various ways, depending on the level of abstraction you are comfortable with (but certainly not requiring the theory of the Jordan normal form, as the other answer suggests). The most elementary argument I know is a dimension argument: the product $(A-\lambda_1I)(A-\lambda_2I)\ldots(A-\lambda_kI)$ certainly kills the eigenspaces for $\lambda_1,\ldots,\lambda_k$, and therefore their sum; this sum is always a direct sum, and therefore has as dimension the sum of the dimensions of the eigenspaces. But the dimension of the kernel of a product of matrices cannot exceed the sum of the dimensions of their individual kernels (which here are the eigenspaces), so if it is given that $(A-\lambda_1I)\ldots(A-\lambda_kI)=0$, then the sum of the eigenspaces must already be the whole space. This precisely means that $A$ is diagonalisable. See this answer for details.
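
As a concrete sanity check of both directions, here is a sympy sketch; the $3\times3$ matrix below is chosen to have the question's characteristic polynomial $(x+2)^2(x-1)$, with a Jordan block for $-2$:

```python
from sympy import Matrix, eye

# Non-diagonalisable example with f_A(x) = (x + 2)^2 (x - 1):
# a 2x2 Jordan block for -2, plus the eigenvalue 1.
A = Matrix([[-2,  1, 0],
            [ 0, -2, 0],
            [ 0,  0, 1]])

I3 = eye(3)
# The square-free product over the distinct eigenvalues does NOT annihilate A...
print((A + 2*I3) * (A - I3))   # nonzero matrix
# ...so m_A(x) = (x + 2)^2 (x - 1) has a repeated factor, and indeed:
print(A.is_diagonalizable())   # False

# The diagonal version behaves as described in the answer.
D = Matrix.diag(-2, -2, 1)
print((D + 2*I3) * (D - I3))   # zero matrix: m_D(x) = (x + 2)(x - 1)
```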