Due to the complaints below asking for more clarity, I've split my post into sections. Feel free to skip right to Definitions, Algorithm & Conjecture. If that still isn't clear enough, then I'm afraid I can't help it.
Story
I'm taking a course on linear algebra and recently we were covering congruence of matrices. By Sylvester's law of inertia, two real symmetric matrices are congruent if and only if they have the same signature. We were advised to calculate signatures by considering matrices as bilinear forms and finding their orthogonal bases, which from my point of view is extremely tedious and requires painstaking work, both in terms of memorization and computation. So I was looking for a better way to do this, and by Googling I discovered that simultaneous row and column transformations preserve the signature, which turns out to be quite simple to understand once you consider elementary operations as matrices: $$ \boldsymbol{A'} = \boldsymbol{EAE}^T $$ That is way easier! However, as an extremely lazy person I still wasn't satisfied, and here's where the fun part begins:
I began looking at the elementary operations and how they affect the outcome. Multiplying a row by a negative constant can trivially change the signature (just consider the identity matrix). After a while I found an example where interchanging rows also changes it. And adding a row multiplied by $-2$ to itself is equivalent to multiplying it by a negative scalar. Thus I was left with adding a different row multiplied by a constant to a row, and I couldn't find a counterexample for this one. More than that! Using only this operation I got through my previous assignment: by reducing each matrix to row echelon form I obtained the correct signature in every exercise. It also helped me spot a mistake in my simultaneous row and column operations on a recent test. By this method I calculated 11 correct signatures; it would be very odd if this were just an accident!
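To make the role of $\boldsymbol{A'} = \boldsymbol{EAE}^T$ concrete (a quick illustration of my own): the operation $r_2 \to r_2 + c\,r_1$ corresponds to $$ \boldsymbol{E} = \begin{bmatrix} 1 & 0\\ c & 1 \end{bmatrix}, \qquad \boldsymbol{EAE}^T = \begin{bmatrix} 1 & 0\\ c & 1 \end{bmatrix} \boldsymbol{A} \begin{bmatrix} 1 & c\\ 0 & 1 \end{bmatrix}, $$ so the simultaneous row-and-column operation is a congruence (take $\boldsymbol{P} = \boldsymbol{E}^T$ in the definition below) and therefore preserves the signature. The row-only operation multiplies by $\boldsymbol{E}$ alone, and that is exactly what my question is about.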
I know that the chances of me discovering something new in math are infinitesimal, but I couldn't resist the clickbaity title. I hope you'll forgive me. But I'm genuinely curious about this one. I tried talking with my professor about it, but he seemed uninterested, or maybe I did a poor job of explaining it. He just dismissed the entire problem by saying that reduction to row echelon form does not preserve the signature.
Did I stumble upon some already known algorithm? Why then would no one talk about it at uni? I tried thinking about how to prove this, but nothing comes to mind. Perhaps I'm missing some obvious counterexamples? If so, why did it work in all of the previous exercises?
Definitions
We use the following definitions of congruence and signature.
Congruence: We say that two square matrices $A$ and $B$ over some field are congruent if there exists an invertible matrix $P$ such that: $$\boldsymbol{A} = \boldsymbol{P}^T \boldsymbol{B P}$$
Signature: A real, nondegenerate $n\times n$ symmetric matrix $A$, with corresponding symmetric bilinear form $\boldsymbol{G}(v,u) = v^T \boldsymbol{A} u$, has signature $\boldsymbol{(p,q)}$ (or $\boldsymbol{p-q}$ in a different notation) if there is a nondegenerate matrix $C$ such that $\boldsymbol{CAC}^T$ is a diagonal matrix with $p$ entries equal to $1$ and $q$ entries equal to $-1$.
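A quick illustration of this definition (my own example): $\boldsymbol{A} = \begin{bmatrix} 2 & 0\\ 0 & -3 \end{bmatrix}$ has signature $(1,1)$, since taking $\boldsymbol{C} = \begin{bmatrix} 1/\sqrt{2} & 0\\ 0 & 1/\sqrt{3} \end{bmatrix}$ gives $\boldsymbol{CAC}^T = \begin{bmatrix} 1 & 0\\ 0 & -1 \end{bmatrix}$.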
Algorithm
- Using only this operation - adding a row multiplied by a constant to a different row - reduce the matrix to upper-triangular form.
- Let $p$ be the number of positive entries on the diagonal and $q$ the number of negative ones. The signature of the matrix is then $(p,q)$. (See the code sketch right after this list.)
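For anyone who wants to experiment, here is a minimal sketch of the procedure in Python/NumPy. It is only my illustration: the function name is mine, and when a zero pivot appears I add a lower row to the pivot row, which is still the one allowed operation.

```python
import numpy as np

def signature_by_row_reduction(A, tol=1e-12):
    """Conjectured signature (p, q) of a symmetric matrix, computed
    using only the operation r_i -> r_i + c * r_j with i != j."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    for j in range(n):
        if abs(U[j, j]) < tol:
            # Zero pivot: add a lower row that has a nonzero entry in
            # this column (still the single allowed operation).
            for i in range(j + 1, n):
                if abs(U[i, j]) > tol:
                    U[j] += U[i]
                    break
        if abs(U[j, j]) < tol:
            continue  # the whole column below is zero; move on
        for i in range(j + 1, n):
            # r_i -> r_i - (U[i, j] / U[j, j]) * r_j
            U[i] -= (U[i, j] / U[j, j]) * U[j]
    d = np.diag(U)
    return int(np.sum(d > tol)), int(np.sum(d < -tol))
```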
Conjecture
The Algorithm above yields the correct signature for every nondegenerate real symmetric matrix.
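To probe the Conjecture numerically, here is a quick randomized check (again only a sketch of mine, reusing `signature_by_row_reduction` from the Algorithm section). By Sylvester's law of inertia the signature equals the signs of the eigenvalues, so the two computations should agree.

```python
# Compare the row-reduction signature against the eigenvalue signs.
rng = np.random.default_rng(0)
for n in range(2, 7):
    for _ in range(1000):
        M = rng.standard_normal((n, n))
        A = M + M.T                       # random symmetric matrix
        if abs(np.linalg.det(A)) < 1e-6:  # skip (nearly) degenerate ones
            continue
        eig = np.linalg.eigvalsh(A)
        expected = (int(np.sum(eig > 0)), int(np.sum(eig < 0)))
        assert signature_by_row_reduction(A) == expected
```

On random matrices a zero pivot essentially never occurs, so this only exercises the generic case; a counterexample, if one exists, would presumably have to be constructed by hand.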
Further questions
Why would that be? How would one prove it? Any ideas for counterexamples? Does it hold for $3 \times 3$ matrices and below but fail for bigger matrices, as suggested by Ben Grossmann in the comments? Any counterexamples of this sort? In that case, why would it work for $n = 3$?
Examples
$$ \boldsymbol{A}= \begin{bmatrix} 8 & 8 & 5\\ 8 & 0 & 4\\ 5 & 4 & 3 \end{bmatrix} \overset{r_2 \to r_2-r_1}{\longrightarrow} \begin{bmatrix} 8 & 8 & 5\\ 0 & -8 & -1\\ 5 & 4 & 3 \end{bmatrix} \overset{r_3 \to r_3-\frac{5}{8}r_1}{\longrightarrow} \begin{bmatrix} 8 & 8 & 5\\ 0 & -8 & -1\\ 0 & -1 & -\frac{1}{8} \end{bmatrix} \overset{r_3 \to r_3-\frac{1}{8}r_2}{\longrightarrow} \begin{bmatrix} 8 & 8 & 5\\ 0 & -8 & -1\\ 0 & 0 & 0 \end{bmatrix} $$
And we already see that the signature is $(1,1)$.
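Feeding this matrix to the sketch from the Algorithm section reproduces the hand computation:

```python
A = np.array([[8, 8, 5],
              [8, 0, 4],
              [5, 4, 3]])
print(signature_by_row_reduction(A))  # (1, 1)
```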
Let $x \in \mathbb{R}$. For which values of $x$ does the signature of $B$ equal $2$?
$$ \boldsymbol{B} = \begin{bmatrix} 1 & 0 & 1\\ 0 & 2 & 3\\ 1 & 3 & x \end{bmatrix} \overset{r_3 \to r_3-r_1}{\longrightarrow} \begin{bmatrix} 1 & 0 & 1\\ 0 & 2 & 3\\ 0 & 3 & x-1 \end{bmatrix} \overset{r_3 \to r_3-\frac{3}{2}r_2}{\longrightarrow} \begin{bmatrix} 1 & 0 & 1\\ 0 & 2 & 3\\ 0 & 0 & x-\frac{11}{2} \end{bmatrix} $$
And the answer: for $x = \frac{11}{2}$ the signature is $(2,0)$, i.e. $2$.
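Sweeping a few values of $x$ through the sketch confirms the three cases around $\frac{11}{2}$:

```python
for x in (5.0, 5.5, 6.0):
    B = np.array([[1, 0, 1],
                  [0, 2, 3],
                  [1, 3, x]])
    print(x, signature_by_row_reduction(B))
# 5.0 -> (2, 1),  5.5 -> (2, 0),  6.0 -> (3, 0)
```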
Let $t, s \in \mathbb{R}$.
$$ \boldsymbol{C}= \begin{bmatrix} 0 & 0 & 0 & 0 & t^2\\ 0 & -1 & 0 & 1 & 0\\ 0 & 0 & 1 & s & 0\\ 0 & 1 & s & s^2-1 & 0\\ t^2 & 0 & 0 & 0 & 0\\ \end{bmatrix} \underset{r_5 \to r_5-r_1}{\overset{r_1 \to r_1+r_5}{\longrightarrow}} \begin{bmatrix} t^2 & 0 & 0 & 0 & t^2\\ 0 & -1 & 0 & 1 & 0\\ 0 & 0 & 1 & s & 0\\ 0 & 1 & s & s^2-1 & 0\\ 0 & 0 & 0 & 0 & -t^2\\ \end{bmatrix} \underset{r_4 \to r_4-sr_3}{\overset{r_4 \to r_4+r_2}{\longrightarrow}} \begin{bmatrix} t^2 & 0 & 0 & 0 & t^2\\ 0 & -1 & 0 & 1 & 0\\ 0 & 0 & 1 & s & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & -t^2\\ \end{bmatrix} $$
The signature is $(2,2)$ for $t \neq 0$ and $(1,1)$ for $t = 0$. I achieved the same result with simultaneous row and column operations; it took twice as long.
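A spot check of this example with the sketch, for a couple of parameter values of my choosing:

```python
for t, s in [(2.0, 3.0), (0.0, 3.0)]:
    C = np.array([[0, 0, 0, 0, t**2],
                  [0, -1, 0, 1, 0],
                  [0, 0, 1, s, 0],
                  [0, 1, s, s**2 - 1, 0],
                  [t**2, 0, 0, 0, 0]])
    print(t, s, signature_by_row_reduction(C))
# 2.0 3.0 -> (2, 2),  0.0 3.0 -> (1, 1)
```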
Replies to comments
- I would argue that it falls under the "subtracting a row from itself" case, just in a smarter way: there are two rows, but they're identical.
- This is a degenerate matrix, and as such it does not have a signature according to the definition I just added.