
I've been studying condition numbers for matrices. I found a past exam question that asks whether the notion of a condition number can be used for non-square full-rank matrices. Intuitively I thought it couldn't, because $\operatorname{cond}(A) = \|A\|\cdot\|A^{-1}\|$ and non-square matrices have no inverse. But MATLAB will calculate it. Why do condition numbers exist for non-square matrices, and do they have any meaning at all? How would one derive $\|A^{-1}\|$? Since $A$ is rectangular, I guess it has to do with the SVD and singular values.


1 Answer


The (relative) condition number is a measure of how an error in the input of an algorithm propagates to its output. Specifically, it is the largest possible ratio of the relative error in the output to the relative error in the input.

For the algorithm $x=A^{-1} b$ with input $b$ that has error $e$, it is $$\operatorname{cond}(A)=\sup_{b,e\ne 0} \frac{\frac{\|A^{-1}e\|}{\|A^{-1}b\|}}{\frac{\|e\|}{\|b\|}} = \|A\|\|A^{-1}\|.$$
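For the square case, this identity is easy to check numerically. Below is a minimal MATLAB sketch with an assumed small test matrix: it compares `cond(A)` (which uses the 2-norm by default) against $\|A\|\,\|A^{-1}\|$, and then perturbs $b$ to show that the observed error amplification stays below the condition number.

```matlab
% Minimal sketch (assumed 2x2 example) checking cond(A) = ||A||*||A^{-1}||
% in MATLAB's default 2-norm.
A = [3 1; 1 2];                          % hypothetical invertible test matrix
kappa_builtin = cond(A);                 % MATLAB's condition number
kappa_manual  = norm(A) * norm(inv(A));  % ||A|| * ||A^{-1}||
fprintf('cond(A) = %.6f, norm product = %.6f\n', kappa_builtin, kappa_manual);

% Error amplification: perturb b and compare relative errors in x = A\b.
b  = [1; 1];
e  = 1e-6 * randn(2,1);                  % small input error
x  = A \ b;
xe = A \ (b + e);
rel_out = norm(xe - x) / norm(x);
rel_in  = norm(e) / norm(b);
fprintf('output/input error ratio = %.3f (<= cond(A) = %.3f)\n', ...
        rel_out / rel_in, kappa_builtin);
```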

For the algorithm $x=A^{+} b$, where $A^+$ is the so-called (Moore–Penrose) pseudoinverse, it is $$\operatorname{cond}(A)=\|A\|\|A^{+}\|.$$ For the 2-norm this equals the largest singular value of $A$ divided by its smallest (nonzero) singular value. The singular values of $A$ are the square roots of the eigenvalues of $A^*A$, where $A^*$ is the conjugate transpose of $A$ (or just the transpose if $A$ is real).
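The same check works for a rectangular matrix. The sketch below uses a hypothetical tall full-rank $4\times 2$ matrix and compares what MATLAB's `cond` returns against the singular-value ratio and against $\|A\|\,\|A^{+}\|$; it also confirms that `svd(A)` matches the square roots of the eigenvalues of $A^*A$.

```matlab
% Minimal sketch (assumed 4x2 full-rank example) of the rectangular case:
% cond(A), sigma_max/sigma_min, and ||A||*||A^+|| should all agree.
A = [1 1; 1 2; 1 3; 1 4];                % hypothetical tall full-rank matrix

s = svd(A);                              % singular values, descending
kappa_svd    = s(1) / s(end);            % sigma_max / sigma_min
kappa_pinv   = norm(A) * norm(pinv(A));  % ||A|| * ||A^+||
kappa_matlab = cond(A);                  % what MATLAB returns
fprintf('%.6f  %.6f  %.6f\n', kappa_svd, kappa_pinv, kappa_matlab);

% Singular values are the square roots of the eigenvalues of A'*A.
disp(sort(sqrt(eig(A'*A)), 'descend') - s);   % approximately the zero vector
```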

In practice, rectangular matrices turn up in e.g. a least-squares model fit: we solve for a small set of model parameters so that we get the best possible fit to a large number of measurements. After all, that is the kind of thing MATLAB is used for. Note that MATLAB has A\b to find the solution of $Ax=b$, which will apply the equivalent of a pseudoinverse if $A$ is rectangular or singular.
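As a concrete illustration (with made-up data), here is a small least-squares line fit: the tall design matrix $A$ is rectangular, A\b and `pinv(A)*b` return the same fitted parameters for full-rank $A$, and `cond(A)` tells you how sensitive those parameters are to noise in the measurements $b$.

```matlab
% Minimal sketch (hypothetical data) of a least-squares line fit, where the
% rectangular A's condition number governs the sensitivity of the fit.
t = (0:0.5:5)';                    % measurement points (assumed)
b = 2*t + 1 + 0.1*randn(size(t));  % noisy measurements of the line 2t + 1
A = [t, ones(size(t))];            % tall design matrix: [slope, intercept]

x_backslash = A \ b;               % MATLAB's least-squares solve
x_pinv      = pinv(A) * b;         % same solution via the pseudoinverse

fprintf('slope %.3f, intercept %.3f (cond(A) = %.2f)\n', ...
        x_backslash(1), x_backslash(2), cond(A));
disp(norm(x_backslash - x_pinv));  % approximately 0 for full-rank A
```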