8

From Wikipedia: given any matrix $A$, we can sometimes decompose $A = LU$ using Gaussian elimination; other times, a permutation matrix is needed, giving $PA = LU$.

If $A$ is Hermitian positive-definite, I can show that IF no permutation matrix is needed, then Gaussian elimination gives $A=LU$, which I can eventually massage into the Cholesky decomposition $A=LL^*$. However, it seems that Hermitian positive-definite matrices are special in that no permutation matrix is ever needed, and hence the Cholesky decomposition always exists. Why?
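
For concreteness, here is a minimal numerical sketch of that massaging step; the no-pivot elimination helper `lu_no_pivot` and the random test matrix are my own illustration, not part of the question:

```python
import numpy as np

def lu_no_pivot(A):
    """Gaussian elimination with no row exchanges: A = L U,
    L unit lower triangular, U upper triangular (Doolittle)."""
    n = A.shape[0]
    L = np.eye(n, dtype=complex)
    U = A.astype(complex)
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # pivot U[k, k] is nonzero for HPD A
            U[i, :] -= L[i, k] * U[k, :]
    return L, U

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B @ B.conj().T + 4 * np.eye(4)        # Hermitian positive definite

L, U = lu_no_pivot(A)
# "Massage": for Hermitian A this factorization is A = L D L^* with
# D = diag(U) real and positive, so L @ sqrt(D) is the Cholesky factor.
d = np.diag(U).real
C = L @ np.diag(np.sqrt(d))
print(np.allclose(C @ C.conj().T, A))         # True
print(np.allclose(C, np.linalg.cholesky(A)))  # True
```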

suncup224
  • 2,859
  • It is in Fundamentals of Matrix Computations by David S. Watkins. It is a beautiful proof, but perhaps a little too much to write down here. – Stephen Montgomery-Smith Dec 29 '13 at 05:33
  • @Stephen: thanks for the reference. Perhaps a hint or a summary would help? I do not have access to any libraries that would have math books :( – suncup224 Dec 29 '13 at 05:40
  • It uses something called the Schur complement. I tried using Google to find online proofs. Maybe this would work? http://www.cis.upenn.edu/~jean/schur-comp.pdf – Stephen Montgomery-Smith Dec 29 '13 at 05:54
  • Oh actually, I think I figured out a proof. Suppose that you are performing elimination and at some stage, the $k^{th}$ pivot came out as zero. Look at the elimination matrix $E$ thus far, and let $x$ be the $k^{th}$ row of $E$. Then $xAx^* = 0$, contradicting the positive-definiteness of $A$ – suncup224 Dec 29 '13 at 06:04
  • I don't remember it being that simple. Also I don't understand your proof. – Stephen Montgomery-Smith Dec 29 '13 at 06:06
  • For example, why must the $(1,1)$ entry of $A$ be positive? Because we can take $x = (1, 0, ..., 0)$ and observe that $xAx^*$ must be positive. Now, perform elimination on the first column of $A$, and obtain $A'$. Why must entry $(2,2)$ of $A'$ be positive? Suppose otherwise, and suppose in the elimination step, we subtracted $n$ times row 1 from row 2. Then let $x = (-n, 1, 0, ..., 0)$. Then what is $xAx^*$? Well, $xA$ is the second row of $A'$, which by assumption is $(0, 0, \text{stuff})$. So $xAx^*$ is $0$. For subsequent rows, use what I described in the previous comment. – suncup224 Dec 29 '13 at 06:08
  • This is the easy part of the proof. You write it as a block matrix $\begin{pmatrix} A & B \\ B^T & C\end{pmatrix}$. You can show that $A$ is positive definite using the argument you showed. So you apply Cholesky on the matrix $A$ (the proof is by induction on the size of the matrix). But then you are left with a matrix $\begin{pmatrix} I & 0 \\ 0 & S\end{pmatrix}$, where $S$ is the Schur complement. And then you have to show that the Schur complement is positive definite to finish off the proof by induction. – Stephen Montgomery-Smith Dec 29 '13 at 06:12 (a code sketch of this recursion appears after these comments)
  • @StephenMontgomery-Smith: An additional question: if $A$ is Hermitian positive SEMI-definite, Wikipedia says that we can still find $A = LL^*$, although this decomposition is no longer unique and $L$ is allowed to have $0$s on the diagonal. Does the proof using the Schur complement work for this case too? Mine certainly doesn't. – suncup224 Dec 29 '13 at 06:33
  • I don't know the answer to the semi-definite case. – Stephen Montgomery-Smith Dec 29 '13 at 06:36
  • Silly me, Wikipedia itself has the proof >< – suncup224 Dec 29 '13 at 06:41
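
To make the Schur-complement induction from the comments concrete, here is a minimal recursive sketch; the function name `cholesky_schur` and the test matrix are illustrative, and the input is assumed Hermitian positive definite:

```python
import numpy as np

def cholesky_schur(A):
    """Recursive Cholesky mirroring the Schur-complement induction:
    peel off the first row/column, then recurse on S = C - b b^*/a,
    which is again Hermitian positive definite."""
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    if n == 1:
        return np.sqrt(A.real)            # a 1x1 HPD matrix is a positive real
    a = A[0, 0].real                      # positive: take x = (1, 0, ..., 0)
    b = A[1:, 0]
    S = A[1:, 1:] - np.outer(b, b.conj()) / a   # Schur complement
    L = np.zeros((n, n), dtype=complex)
    L[0, 0] = np.sqrt(a)
    L[1:, 0] = b / np.sqrt(a)
    L[1:, 1:] = cholesky_schur(S)
    return L

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = B @ B.conj().T + 5 * np.eye(5)
L = cholesky_schur(A)
print(np.allclose(L @ L.conj().T, A))     # True
```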

1 Answer

4

The diagonal entries of $U$ in $A=LU$ are quotients of successive leading principal minors of the matrix $A$: writing $A_k$ for the upper-left $k\times k$ submatrix (with $\det A_0 = 1$), the $k$-th diagonal entry is $\det A_k / \det A_{k-1}$. If $A$ is positive definite, these minors are all positive, so every pivot is positive and no permutation is needed. This characterization is sometimes called the Hurwitz criterion (also known as Sylvester's criterion).
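
As a quick numerical check of this fact (a sketch with an arbitrary random SPD matrix, using that the $k$-th quotient equals the $k$-th pivot $d_k = r_{kk}^2$, where $R$ is numpy's lower-triangular Cholesky factor):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)               # symmetric positive definite

# Quotients of successive leading principal minors det(A_k)/det(A_{k-1}).
minors = [np.linalg.det(A[:k, :k]) for k in range(1, 5)]
quotients = np.array([minors[0]] + [minors[k] / minors[k - 1] for k in range(1, 4)])

# In A = L D L^T these quotients are the pivots d_k; with A = R R^T
# (R = lower-triangular Cholesky factor), d_k = R[k, k]**2.
R = np.linalg.cholesky(A)
print(np.allclose(quotients, np.diag(R) ** 2))   # True
```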

Put the diagonal elements of $U$ into a diagonal matrix $D$; then $A=LU=LDL^*$, which again shows that the Cholesky decomposition works, since the critical numbers of the algorithm are exactly these positive diagonal entries.
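
Spelled out, with $D=\operatorname{diag}(u_{11},\dots,u_{nn})$ and every $u_{kk}>0$:

$$A = LDL^* = \bigl(LD^{1/2}\bigr)\bigl(LD^{1/2}\bigr)^*, \qquad D^{1/2} = \operatorname{diag}\bigl(\sqrt{u_{11}},\dots,\sqrt{u_{nn}}\bigr),$$

so $LD^{1/2}$ is precisely the Cholesky factor.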

Lutz Lehmann
  • 131,652