21

Find the maximum determinant of a $3 \times 3$ matrix whose entries are $1$ or $2$.

This problem is from an old algebra exam at my university. After some coding, I found the answer to be $5$, but I am not sure how to prove it on paper. In the case where the entries are $0$ or $1$, I know how to prove it.

I tried some casework, but it's too long and there are too many cases to consider. Maybe I didn't split the cases optimally.

Any help would be very appreciated. Thanks in advance.
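For reference, the brute-force search I mentioned can be reproduced with a short script like the following (a sketch in Python, not my original code):

```python
from itertools import product

def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Enumerate all 2^9 matrices with entries 1 or 2 and take the largest determinant
best = max(det3((v[0:3], v[3:6], v[6:9])) for v in product((1, 2), repeat=9))
print(best)  # 5
```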

sleeper161
  • 521
  • 8
  • 1
    Does your code tell you which matrices attain $5$? – Pranay Jan 12 '25 at 03:22
  • Here's a thought. Recall $|\det(u,v,w)| = |(u \times v) \cdot w| = |u||v||w| |\sin\theta_{u,v} \cos\theta_{u \times v, w}|$. Without the angles we then get an upper bound of $\sqrt{12}^3 \approx 41.6$ (the Hadamard bound). However, $u,v,w$ are nearly parallel here, so the two angle terms must be fairly small, leading to a lot of cancellation and a fairly small determinant. Unfortunately the magnitudes also change as you vary angles, and it's not at all clear what the global optimum would be from this perspective. – Joshua P. Swanson Jan 12 '25 at 04:17
  • 3
After a few trials, I realised that $\left(\begin{smallmatrix} 1&2&2\\ 2&1&2\\ 2&2&1\end{smallmatrix}\right)$ has determinant $5$. Not sure if it's the only one with det $5$ but the symmetry makes it seem likely. – Pranay Jan 12 '25 at 04:37
  • 1
    Might be worth using a computer to solve it for 2x2, 4x4, 5x5, to be sure it is an actual pattern and not a coincidence. – DanielV Jan 12 '25 at 05:35
  • 2
    I find the maximum in the $4 \times 4$ case to be $8$, obtained e.g. for $$ \pmatrix{1 & 1 & 2 & 2 \cr 1 & 2 & 1 & 2 \cr 2 & 1 & 1 & 2 \cr 2 & 2 & 2 & 1 \cr}$$ – Robert Israel Jan 12 '25 at 07:22
  • 1
    I gave an argument in an answer which is doable by hand, but it's not reasonable as an exam question. I would be quite interested to know a "quick" approach, if any exists. I suspect the problem is necessarily subtle and does not have some nice infinite family generalization. The related problem of the existence of Hadamard matrices remains open after many years. – Joshua P. Swanson Jan 12 '25 at 09:10
  • The problem is equivalent to maximising $|\det A|$. The maximiser must contain at least two $1$s that do not lie on the same row or the same column, or else it would have two repeated rows/columns of $2$s, which results in a zero determinant. Hence we may assume that $a_{22}=a_{33}=1$. The elements $a_{1j}$ on the first row are then solely determined by the sign of their respective minors $m_{1j}$. This reduces the number of cases to check to $16$. As Joshua P. Swanson wrote, this is doable by hand but not reasonable as an exam question. Are you sure that there are no missing constraints? – user1551 Jan 12 '25 at 09:48
  • My candidate for generalization is a matrix, which maximizes product of all Hamming distances between vectors and then maximizes number of $2$s. This is just a guess though. – Quý Nhân Jan 13 '25 at 12:30
  • 1
    here's every possible determinant. https://js.do/caffeinatedlogic/751145 – David P Jan 17 '25 at 19:17

4 Answers

8

Call your matrix $A$. Its maximum possible determinant is positive (because $\det I_3$ is already positive). Denote by $m_{ij}$ the minor obtained by deleting the $i$-th row and the $j$-th column of $A$. By Laplace expansion along row $i$, we have $$ \det A=\sum_ja_{ij}\left[(-1)^{i+j}m_{ij}\right]. $$ Given an optimal $A$, we must have $a_{ij}=2$ if $(-1)^{i+j}m_{ij}>0$, and $a_{ij}=1$ if $(-1)^{i+j}m_{ij}<0$. If $m_{ij}=0$, we can change the value of $a_{ij}$ to $2$ without altering the determinant, while increasing the number of $2$s in $A$. Since $A$ has only nine elements, if we keep making such modifications, we will reach, in finitely many steps, a modified optimal $A$ such that for all $(i,j)$, $$ (-1)^{i+j}m_{ij}\text{ is } \begin{cases} \ge0&\text{iff }a_{ij}=2,\\ <0&\text{iff }a_{ij}=1.\\ \end{cases} $$

Now look at this modified optimal $A$. Since all elements of $A$ are either $1$ or $2$, we have $|m_{ij}|\le3$. Sort any row of $A$ in ascending order. If some sorted row of $A$ is $(1,1,1)$ or $(1,1,2)$, then $\det A\le 1(-1)+1(-1)+2(3)=4$. The same holds for any sorted column of $A$.

If, instead, all sorted rows/columns of $A$ are either $(1,2,2)$ or $(2,2,2)$, since $A$ has no repeated rows/columns (because the optimal determinant is positive), $A$ must be equal to either $$ P\ \underbrace{\pmatrix{1&2&2\\ 2&1&2\\ 2&2&1}}_{\det=5}\ Q \quad\text{or}\quad P\ \underbrace{\pmatrix{1&2&2\\ 2&1&2\\ 2&2&2}}_{\det=2}\ Q $$ for some permutation matrices $P$ and $Q$. Hence the maximum possible value of $\det A$ is $5$ and every modified optimal $A$ is of the form $P(2ee^T-I_3)Q$ for some permutation matrices $P,Q$ such that $\det(PQ)=1$. Since all two-rowed minors of $2ee^T-I_3$ are nonzero, the unmodified optimal $A$s are also of the same form. That is, the optimal solution is unique up to appropriate permutations of rows and columns of $2ee^T-I_3$.
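For what it's worth, a brute force over all $2^9$ matrices confirms this classification; a Python sketch (not part of the argument above):

```python
from itertools import product

def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

mats = [(v[0:3], v[3:6], v[6:9]) for v in product((1, 2), repeat=9)]
winners = [m for m in mats if det3(m) == 5]
print(len(winners))  # 3 maximisers, as described above
for m in winners:
    print(m)
```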

P.S. There is no way I could cook up this proof during an examination.

user1551
  • 149,263
I think the argument is pretty similar to what my professor told us. He gave this question a few years ago and couldn't remember the solution, so when we asked him, he had to spend 30 minutes re-solving it lol. – sleeper161 Jan 13 '25 at 11:54
7

I asked my professor and I got this answer. We separate the proof into $3$ cases. Since we can swap rows (or columns) while only changing the sign of the determinant, we can collapse many symmetric configurations into these cases.

Case $1$: There is a row (column) that is a multiple of $\begin{pmatrix} 1 & 1 & 1\end{pmatrix}$, then we have:

$|A| = \lambda\begin{vmatrix}1 & 1 & 1 \\ a & b & c \\ d & e & f\end{vmatrix} = \lambda\begin{vmatrix}1 & 0 & 0 \\ a & b - a & c - a \\ d & e - d & f - d\end{vmatrix} \le 2\lambda \le 4$

Case $2$: There exists a row (column) that is a permutation of $\begin{pmatrix} 1 & 1 & 2\end{pmatrix}$, then we have:

$|A| = \begin{vmatrix}1 & 1 & 2 \\ a & b & c \\ d & e & f\end{vmatrix} = \begin{vmatrix}1 & 1 & 1 \\ a & b & c \\ d & e & f \end{vmatrix} + \begin{vmatrix}0 & 0 & 1 \\ a & b & c \\ d & e & f \end{vmatrix} \le 2 + 3 = 5$

Case 3: The rows are $\begin{pmatrix} 1 & 2 & 2 \end{pmatrix}, \begin{pmatrix} 2 & 1 & 2 \end{pmatrix}, \begin{pmatrix} 2 & 2 & 1 \end{pmatrix}$ in some order. The matrix then has the form:

$|A| = \begin{vmatrix}1 & 2 & 2 \\ 2 & 1 & 2 \\ 2 & 2 & 1\end{vmatrix} = 5$

The rows (and columns) must not repeat, since otherwise the determinant would be zero. Every other matrix in this case has determinant $5$ or $-5$, since it can be reached by swapping rows and columns.

With all cases considered, the maximum determinant is $5$.
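For reference, the three cases can also be checked numerically. A Python sketch (not part of the professor's argument) that computes the exact maximum determinant within each case:

```python
from itertools import product

def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

mats = [(v[0:3], v[3:6], v[6:9]) for v in product((1, 2), repeat=9)]

def lines(m):
    # All rows and all columns of m
    return m + tuple(zip(*m))

def case1(m):
    # Some row or column is a multiple of (1, 1, 1)
    return any(len(set(r)) == 1 for r in lines(m))

def case2(m):
    # Some row or column is a permutation of (1, 1, 2)
    return any(sorted(r) == [1, 1, 2] for r in lines(m))

max1 = max(det3(m) for m in mats if case1(m))
max2 = max(det3(m) for m in mats if case2(m))
max3 = max(det3(m) for m in mats if not case1(m) and not case2(m))
print(max1, max2, max3)  # 2 4 5
```

The exact per-case maxima $2$, $4$, $5$ sit comfortably below the bounds $4$, $5$, $5$ used above.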

DanielV
  • 24,386
sleeper161
  • 521
  • 8
Just to add some clarity. Case 1: $\lambda \in \{1, 2\}$, just checking whether all the values in some row are equal. And the minor is at most 2 because the elements of the submatrix are in $\{-1, 0, 1\}$. Case 2 is an upper bound given by adding case 1 to the solution for 2x2 matrices, det[[2, 1], [1, 2]] = 3. And case 3 is the only thing left over. – DanielV Jan 13 '25 at 12:49
  • 2
    This is clever, but I wonder how many students are able to come up with this answer in an exam. – user1551 Jan 13 '25 at 13:09
  • @JoshuaP.Swanson Are you sure? It looks fine to me – Milten Jan 14 '25 at 08:47
  • @JoshuaP.Swanson But from case 1 with $\lambda=1$, we immediately get that the determinant of the matrix with 1's in the first row is $\le 2$. We don't need to consider cofactors – Milten Jan 14 '25 at 13:52
5

Let $A$ be an arbitrary $3 \times 3$ matrix with entries $0,1$, so $B = A+\mathbf{1}\mathbf{1}^\top$ is an arbitrary $3 \times 3$ matrix with entries $1,2$. Here $\mathbf{1}$ refers to the length $3$ column vector with all $1$'s and $\mathbf{1} \mathbf{1}^\top$ is an outer product, namely the $3 \times 3$ matrix with all $1$'s.

By the matrix determinant lemma, we have $$\det(B) = \det(A) + \mathbf{1}^\top \mathrm{adj}(A) \mathbf{1}.$$

By this post, $\det(A)$ has a maximum of $2$, which can be obtained at $$A = \begin{pmatrix}1 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1\end{pmatrix} \qquad\Rightarrow\qquad B = \begin{pmatrix}2 & 1 & 2 \\ 2 & 2 & 1 \\ 1 & 2 & 2\end{pmatrix}.$$

This $B$ has determinant $5$ and is a solution to the original problem.

Of course, it's not clear that a maximum for $\det(A)$ will automatically maximize $\det(B)$. The correction term $\mathbf{1}^\top \mathrm{adj}(A) \mathbf{1}$ is the sum of all the entries of the adjugate matrix of $A$. It's easy to see this adjugate must be a $-1,0,1$-matrix, so the sum of its entries is at most $9$, giving $\det(B) \leq 2+9=11$.

The adjugates $\mathrm{adj}(A)$ arising here are quite constrained relative to all $3 \times 3$ matrices of $-1,0,1$'s. Brute force says the correction term has a maximum of $3$, obtained only at cyclic reorderings of the $A$ above or the identity. That would give $\det B \leq 5$, and the example above would be provably the maximum up to cyclic reordering.
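That brute force is small enough to spell out. A sketch over all $512$ zero-one matrices, with the cofactors written out by hand rather than taken from a library:

```python
from itertools import product

def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def adj_sum(m):
    # 1^T adj(A) 1: the sum of all nine cofactors of the 3x3 matrix m
    (a, b, c), (d, e, f), (g, h, i) = m
    return ((e * i - f * h) - (d * i - f * g) + (d * h - e * g)
            - (b * i - c * h) + (a * i - c * g) - (a * h - b * g)
            + (b * f - c * e) - (a * f - c * d) + (a * e - b * d))

mats = [(v[0:3], v[3:6], v[6:9]) for v in product((0, 1), repeat=9)]
max_corr = max(adj_sum(m) for m in mats)            # maximum correction term
max_detB = max(det3(m) + adj_sum(m) for m in mats)  # maximum of det(B)
print(max_corr, max_detB)  # 3 5
```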

Edit: Here's a way to show that the correction term is at most $3$, hence showing the $B$ above is maximal. It involves a little calculation, though it is in principle doable by hand.

Let $r_i$ be the $i$th row of $A$ and let $c_k$ be the $k$th column of $\mathrm{adj}(A)$. Let $s_k$ be the sum of the coordinates of $c_k$. We want to show $s_1 + s_2 + s_3 \leq 3$.

First we claim for each $i \neq k$ that $s_k$ is at most the number of zeros in $r_i$. To see this, recall the fundamental property of adjugates, $A \mathrm{adj}(A) = \det(A) I_3$, so $r_i c_k = \delta_{ik} \det(A) = 0$. Let $s_k'$ (respectively, $s_k''$) be the sum of the coordinates of $c_k$ for which the corresponding coordinate of $r_i$ is $0$ (respectively, $1$). Then $s_k = s_k' + s_k''$, but $s_k'' = r_i c_k = 0$, so $s_k = s_k'$. Since $c_k$ consists of $-1,0,1$'s, the claim follows.

Now suppose $A$ has a row of all $0$'s. This forces six complementary entries of $\mathrm{adj}(A)$ to be zero, so $s_1+s_2+s_3 \leq 3$.

Next suppose $A$ has two rows each with two $0$'s. The possible $A$'s are, up to reordering, $$A = \begin{pmatrix}0 & 0 & 1 \\ 0 & 0 & 1 \\ a & b & c\end{pmatrix} \qquad \Rightarrow \qquad \mathrm{adj}(A) = \begin{pmatrix}-b & b & 0 \\ a & -a & 0 \\ 0 & 0 & 0\end{pmatrix}$$ or $$A = \begin{pmatrix}0 & 1 & 0 \\ 0 & 0 & 1 \\ a & b & c\end{pmatrix} \qquad \Rightarrow \qquad \mathrm{adj}(A) = \begin{pmatrix}-b & -c & 1\\a & 0 & 0 \\ 0 & a & 0\end{pmatrix}.$$ In the first case, $s_1+s_2+s_3=0$. Reordering the rows or columns of $A$ does not change this. In the second case, $s_1+s_2+s_3=1+2a-b-c \leq 3$. Reordering preserves this inequality.

Now we may assume $A$ has at most one row with two $0$'s, with the others having at most one $0$. From the claim, we have $s_k \leq 1$ for each $k$, so $s_1+s_2+s_3 \leq 3$, completing the argument.

  • Nice! Would the same type of argument generalize to larger matrices and/or values in a fixed range of integers, say 1, ..., k? – Mathieu Rundström Jan 12 '25 at 22:06
  • 2
    The first part of the argument just goes from ${1,2,\ldots,m}$ to ${0,1,\ldots,m-1}$. The second part is very special and I doubt would generalize much beyond $3 \times 3$. I'm fairly dubious about the existence of an infinite family of maximizers extending this example. I'd say it would be quite interesting if one were found. – Joshua P. Swanson Jan 13 '25 at 00:39
0

This answer is based on Sarrus's rule for computing the determinant of a $3 \times 3$ matrix.

https://en.wikipedia.org/wiki/Rule_of_Sarrus

The rule of Sarrus is a diagonally oriented calculation, so in the considerations below the diagonals play the most important role.

The first phase: presenting the form of the matrix to consider.

In the desired matrix we know that there are $1$s, in fact more than one entry equal to $1$.

We should check at least two cases:

  • at least two entries equal to $1$, in different rows and columns,

  • at least three entries equal to $1$, in pairwise different rows and columns,

so let us place them on the main diagonal (if these $1$s are located at other positions, we can always move them there via permutations, which do not affect the absolute value of the determinant: a permutation matrix has determinant $1$ or $-1$, which is sufficient for this problem);

the other elements can be treated as unknowns.

We can start with three $1$s. Notice that we want only $3$ potentially diagonal elements (i.e., elements which can be moved to the diagonal via permutations) with value $1$.

$$A = \begin{pmatrix}1 & a & b \\ d & 1 & c \\ e & f & 1\end{pmatrix} $$

Presenting this form of the matrix ends the first phase of the considerations.

The second phase concentrates on the algebraic formula for the determinant.

For the matrix above, the determinant equals

$\det(A)=1+dfb+eac-ad-be-cf.$

From this form it is visible that we have to maximize $dfb$ and $eac$, so all of their factors have to equal $2$; the negative terms $ad$, $be$, $cf$ then each equal $4$.

This gives a determinant equal to $1+8+8-4-4-4=5$.

For clarification, in this case

$\det(A)=1+2 \times 2^3-3 \times 2^2,$

that is, $1$ plus two cubes minus three squares. The fact that we have cubes vs. squares here is crucial, so the problem is reduced to evaluating an algebraic expression with values from the set $\{1,2\}$.

The second case, with two $1$s on the main diagonal, can be checked in a similar way, and it gives a determinant below $5$.

$$A_2 = \begin{pmatrix}2 & a & b \\ d & 1 & c \\ e & f & 1\end{pmatrix} $$

$\det(A_2)=2+dfb+eac-ad-be-2cf=(1+dfb+eac-ad-be-cf)+(1-cf)$, where $1-cf\le 0$.

The case with only one potentially diagonal $1$ can be neglected, as it leads (as is easy to check) to a determinant equal to $0$.

Concluding: the method not only solves the problem, it also generalizes to $3 \times 3$ matrices with other pairs of values,

for example ($1$ and $10$): then the maximal $\det(A)= 1+2\times 10^3 - 3\times 10^2=1701$,

or ($3$ and $7$), or even ($e$ and $2\pi$).
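A quick brute force (a sketch, not part of the method above) confirms the $(1, 10)$ value:

```python
from itertools import product

def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Enumerate all matrices with entries 1 or 10
best = max(det3((v[0:3], v[3:6], v[6:9])) for v in product((1, 10), repeat=9))
print(best)  # 1701 = 1 + 2*10**3 - 3*10**2
```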

Widawensen
  • 8,517