
Essentially just the title. This is supposed to be the introductory upper-division linear algebra class, so we haven't covered the determinant yet. I mention this because I am aware of the theorem stating that the conclusion would follow if the determinant were either $1$ or $-1$. We haven't proven that in class yet, so we can't use it. Anyway, if any of you have any ideas or hints, I would greatly appreciate it.

Thank you!

EDIT: Actually, that theorem does not apply in the way I stated. Either way, it cannot be used to solve this problem, as it wasn't covered in class.

EDIT 2: This question is different from the other question asking why the elements of the inverse of the Hilbert matrix are integers, because my solution cannot include much outside of row reduction (elementary matrices), the definition of matrix multiplication, and the basic properties of matrix inverses.

Daniel
  • That theorem only applies to matrices which have integer entries to begin with. In this case, what you would actually need would be that the determinant is $\frac{\pm 1}{N}$ for some integer $N$ where the denominator of every $(n-1) \times (n-1)$ minor determinant divides $N$. – Daniel Schepler Jan 22 '19 at 01:36
  • @DanielSchepler Thanks for the response! I must have misread it when I googled similar problems before. I corrected it in the body of the post. – Daniel Jan 22 '19 at 01:39
  • 1
  • For the invertibility part, the most efficient proof I know of is: if $x = (x_0, x_1, \ldots, x_{n-1})^T$ were a vector in the null space of $A$, then it turns out $0 = x^T A x = \int_0^1 (x_0 + x_1 t + x_2 t^2 + \cdots + x_{n-1} t^{n-1})^2 \, dt$, implying you must have $x_0 = x_1 = \cdots = x_{n-1} = 0$. – Daniel Schepler Jan 22 '19 at 01:41 (this identity is spelled out just after the comments)
  • 1
  • See here. For reference, this matrix is known as the Hilbert matrix; that thread was the first result in a search for "Hilbert matrix inverse". – jmerry Jan 22 '19 at 02:00
  • @jmerry Thanks for that! Knowing what it is is really helpful. Unfortunately, all the proofs of the description of such an inverse are far out of the reach of the class. – Daniel Jan 22 '19 at 02:24
  • @jmerry I don't think so. The main difference between the questions is that in mine, the proof should be much more elementary (ie, not using determinants). – Daniel Jan 22 '19 at 02:26
  • Well, I just posted a calculation of the inverse here in service of finding the sum of the elements of the inverse - and that calculation doesn't use any determinants. It's based on the idea of a Gramian matrix; we calculate the inverse by applying Gram-Schmidt to get an orthonormal set out of the vectors whose inner products are the matrix elements, and here we get $H^{-1}$ as a product of three integer matrices. – jmerry Jan 22 '19 at 02:32
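To spell out the identity in Daniel Schepler's invertibility comment above: since the $(i,j)$ entry of $A$ is $\int_0^1 t^{i+j}\,dt$,
$$x^T A x = \sum_{i,j=0}^{n-1} x_i x_j \int_0^1 t^{i+j}\,dt = \int_0^1 \Big(\sum_{i=0}^{n-1} x_i t^i\Big)^2\,dt,$$
and the integral of a nonnegative continuous function is zero only if the function vanishes identically, which forces every $x_i = 0$.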

1 Answer


What follows abbreviates my post here: a determinant-free proof that the inverse of the Hilbert matrix has integer entries.

Consider the inner product $\langle f,g\rangle =\int_0^1 fg$ on nice enough functions. The $n\times n$ Hilbert matrix $H$ has $(i,j)$ entry (with indices running from $0$ to $n-1$) equal to $\langle x^i,x^j\rangle = \frac{1}{i+j+1}$. This makes it a Gramian matrix: the vectors used are the standard basis $1,x,\ldots,x^{n-1}$ for the polynomials of degree $\le n-1$, and the inner product of any two such polynomials, written as coefficient vectors in that basis, is $\langle f,g\rangle = f^T H g$.
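For anyone who wants to experiment along the way, here is a minimal sketch (my own addition, not part of the thread; the function name `hilbert` is just illustrative) that builds $H$ exactly with Python's standard `fractions` module:

```python
from fractions import Fraction

def hilbert(n):
    """n x n Hilbert matrix: entry (i, j), for i, j = 0..n-1, is
    <x^i, x^j> = integral_0^1 x^(i+j) dx = 1/(i + j + 1), kept exact."""
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

print(hilbert(2))  # [[Fraction(1, 1), Fraction(1, 2)], [Fraction(1, 2), Fraction(1, 3)]]
```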

Let $P$ be an upper triangular matrix with its columns orthonormal with respect to this inner product - the columns are exactly what we get by applying Gram-Schmidt to the standard basis. Then $P^T H P = I$. Move some things around, and $H=(P^T)^{-1} P^{-1}$, so $H^{-1}=P\cdot P^T$.
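Here is a small exact-arithmetic sketch of this step (again my own, with made-up helper names). To stay in rational numbers it skips the normalization: plain Gram-Schmidt gives orthogonal columns $q_k$ with $\langle q_k,q_k\rangle = d_k$, and $H^{-1} = P\cdot P^T$ becomes $\sum_k q_k q_k^T / d_k$:

```python
from fractions import Fraction

def hilbert(n):
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def gram_schmidt(H):
    """Apply Gram-Schmidt to e_0, ..., e_{n-1} under <u, v> = u^T H v,
    skipping the normalization step so everything stays rational.
    Returns the orthogonal columns q_k and their norms d_k = <q_k, q_k>."""
    n = len(H)
    ip = lambda u, v: sum(u[i] * H[i][j] * v[j]
                          for i in range(n) for j in range(n))
    cols, norms = [], []
    for k in range(n):
        v = [Fraction(int(i == k)) for i in range(n)]   # start from e_k
        for q, d in zip(cols, norms):
            c = ip(v, q) / d                            # projection coefficient
            v = [vi - c * qi for vi, qi in zip(v, q)]
        cols.append(v)
        norms.append(ip(v, v))
    return cols, norms

# Since p_k = q_k / sqrt(d_k), H^{-1} = P P^T = sum_k q_k q_k^T / d_k:
n = 4
cols, norms = gram_schmidt(hilbert(n))
Hinv = [[sum(cols[k][i] * cols[k][j] / norms[k] for k in range(n))
         for j in range(n)] for i in range(n)]
assert all(e.denominator == 1 for row in Hinv for e in row)  # integer entries
```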

That's all very general stuff - but we started with something special. We already know a lot about polynomials - in particular, we have an explicit family of orthogonal polynomials with respect to this product, the Legendre polynomials. The usual inner product in the definition there uses the interval $[-1,1]$, but going to $[0,1]$ is just an affine substitution. Legendre polynomials in the standard scaling are normalized to values of $\pm 1$ at the endpoints - here, that would be $q_k(x) = \frac{1}{k!}\frac{d^k}{dx^k}\left((x-x^2)^k\right)$. We want orthonormal scaling, so we note that $$\langle q_k,q_k\rangle =\int_0^1 \frac{1}{(k!)^2}\frac{d^k}{dx^k}\left((x-x^2)^k\right)\cdot \frac{d^k}{dx^k}\left((x-x^2)^k\right)\,dx$$ Integrating by parts $k$ times (the boundary terms vanish because $(x-x^2)^k$ has zeros of order $k$ at both endpoints), we get $$\langle q_k,q_k\rangle =\frac{(-1)^k}{(k!)^2}\int_0^1 \left((x-x^2)^k\right)\cdot \frac{d^{2k}}{dx^{2k}}\left((x-x^2)^k\right)\,dx = \binom{2k}{k}\int_0^1 x^k(1-x)^k\,dx$$ since $\frac{d^{2k}}{dx^{2k}}\left((x-x^2)^k\right)$ is the constant $(-1)^k(2k)!$. Integrate that by parts another $k$ times, and it becomes $$\langle q_k,q_k\rangle = \binom{2k}{k}\int_0^1 \frac{k!\,x^{2k}}{(2k)!}\cdot (k!)\,dx = \int_0^1 x^{2k}\,dx=\frac1{2k+1}$$ To get our orthonormal scaling, we then divide $q_k$ by the square root of this number and get $p_k = \sqrt{2k+1}\,q_k$.
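As a sanity check on this computation (a sketch of mine; `legendre_coeffs` and `norm_sq` are illustrative names), one can expand $(x-x^2)^k$ directly to get the coefficients of $q_k$ and verify $\langle q_k,q_k\rangle = \frac{1}{2k+1}$ exactly:

```python
from fractions import Fraction
from math import comb

def legendre_coeffs(k):
    """Coefficients of q_k(x) = (1/k!) (d/dx)^k (x - x^2)^k, constant term
    first.  Expanding (x - x^2)^k = sum_j (-1)^j C(k,j) x^(k+j) and
    differentiating k times leaves coefficient (-1)^j C(k,j) C(k+j,k)
    on x^j -- an integer, as claimed in the next paragraph."""
    return [(-1) ** j * comb(k, j) * comb(k + j, k) for j in range(k + 1)]

def norm_sq(coeffs):
    """<q, q> = sum over i, j of c_i c_j / (i + j + 1), i.e. q^T H q."""
    return sum(Fraction(ci * cj, i + j + 1)
               for i, ci in enumerate(coeffs) for j, cj in enumerate(coeffs))

for k in range(8):
    assert norm_sq(legendre_coeffs(k)) == Fraction(1, 2 * k + 1)
```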

Now, we also note that the $q_k$ are all polynomials with integer coefficients; the operator $\frac{1}{k!}\frac{d^k}{dx^k}$ preserves integer polynomials. So then, writing $Q=\begin{pmatrix}q_0 & q_1 &\cdots & q_{n-1}\end{pmatrix}$ for the (upper triangular, integer) matrix whose columns are the coefficient vectors of the $q_k$, we can write $$P=Q \begin{pmatrix}1&0&\cdots&0\\0&\sqrt{3}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots & \sqrt{2n-1}\end{pmatrix}$$ $$P\cdot P^T = Q \begin{pmatrix}1&0&\cdots&0\\0&\sqrt{3}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots & \sqrt{2n-1}\end{pmatrix}^2 Q^T$$ $$H^{-1} = P\cdot P^T = Q\begin{pmatrix}1&0&\cdots&0\\0&3&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&2n-1\end{pmatrix}Q^T$$ All three factors in that last product are matrices with integer entries, so $H^{-1}$ must be an integer matrix. Of course, now that we have this explicit factorization for the inverse, we can calculate all sorts of things about it easily, such as the sum referenced in the linked thread.
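To close the loop, a final sketch (again mine, not the answerer's code, with illustrative names) assembles $Q$ and $\mathrm{diag}(1,3,\ldots,2n-1)$ and checks both conclusions exactly: every entry of the product is an integer, and the product really is $H^{-1}$:

```python
from fractions import Fraction
from math import comb

def hilbert(n):
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def q_column(k, n):
    """Integer coefficient vector of q_k, padded with zeros up to degree n-1."""
    c = [(-1) ** j * comb(k, j) * comb(k + j, k) for j in range(k + 1)]
    return c + [0] * (n - k - 1)

def hilbert_inverse(n):
    """H^{-1} = Q diag(1, 3, ..., 2n-1) Q^T with Q = (q_0 | ... | q_{n-1});
    every factor is an integer matrix, so the result is too."""
    Q = [q_column(k, n) for k in range(n)]      # Q[k] is the column q_k
    return [[sum(Q[k][i] * (2 * k + 1) * Q[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

n = 5
H, M = hilbert(n), hilbert_inverse(n)
assert all(isinstance(e, int) for row in M for e in row)  # integer entries
# ... and M really inverts H:
assert all(sum(M[i][k] * H[k][j] for k in range(n)) == (i == j)
           for i in range(n) for j in range(n))
```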

jmerry
  • 1
  • OP says, "my solution cannot include much outside of row reduction (elementary matrices), the definition of matrix multiplication, and the basic properties of matrix inverses." I don't know if this qualifies. – Gerry Myerson Jan 22 '19 at 03:38
  • I saw the clarification in the comments, and glanced over the version edited into the question. This is ... it's more than those specific tools, but is it any more advanced? The calculus is all understandable after the standard year of single-variable calculus. The concept of inner products? That probably hasn't come up yet in the linear algebra course, but the dot product in two or three dimensions has come up earlier, and inner products are the natural extension. Gram-Schmidt is a variation on row-reduction - instead of zeroing entries, we do it to inner products. And we didn't even actually use it. – jmerry Jan 22 '19 at 03:57