
Is $$\det(I+ ABB^*A^*C^{-1})=\det(I+ B^*A^*ABC^{-1}),$$ where $I$ is the identity matrix and $A,B,C$ are complex-valued matrices? Here $C=I+X$, where $X$ is positive semidefinite (PSD).

I know that this makes $ABB^*A^*$ and $B^*A^*AB$ positive semidefinite. I also know that for square matrices $X,Y,Z$ we have $\det(XYZ)=\det(YXZ)$, so $\det(ABB^*A^*C^{-1})=\det(B^*A^*ABC^{-1})$, and from that I concluded $\det(I + ABB^*A^*C^{-1})=\det(I + B^*A^*ABC^{-1})$. Please comment if this is correct. I have a feeling I'm missing something, since I didn't use the properties of $C$.

Simulations seem to support it. At least with $A=\pmatrix{1&1\\ 0&0},\ B=I$ and $C=\pmatrix{2&0\\ 0&2}$, both sides gave $2$.

Thanks a lot in advance.

PS. Not homework.

triomphe
    Why don't you try a few instances of $A,B,C$ to see whether the hypothesis is true? For simplicity, you may set $B=I$. – user1551 Oct 06 '13 at 18:29

2 Answers


Edit: Try $A=\pmatrix{1&1\\ 0&0},\ B=I,\ C=\pmatrix{2&1\\ 1&2}$, so that $C^{-1}=\frac13\pmatrix{2&-1\\ -1&2}$. The two determinants are $\frac73$ and $\frac53$ respectively, which are not equal. For questions like this, you may first do a few numerical experiments with a computer to see if the hypothesis can be easily falsified.
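For reference, this counterexample is easy to verify numerically; here is a minimal sketch using NumPy:

```python
import numpy as np

# The counterexample above: A = [[1,1],[0,0]], B = I, C = [[2,1],[1,2]]
A = np.array([[1.0, 1.0], [0.0, 0.0]])
B = np.eye(2)
C = np.array([[2.0, 1.0], [1.0, 2.0]])

AB = A @ B
Cinv = np.linalg.inv(C)

lhs = np.linalg.det(np.eye(2) + AB @ AB.conj().T @ Cinv)  # det(I + ABB*A*C^{-1}) = 7/3
rhs = np.linalg.det(np.eye(2) + AB.conj().T @ AB @ Cinv)  # det(I + B*A*ABC^{-1}) = 5/3
print(lhs, rhs)
```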

user1551

Your test example is too specific: $B = I$, $C = 2I$, $C^{-1} = \frac12 I$. Of course everything here commutes.

I tried the following random example (in Python 3):

import numpy as np

A = np.array([[3,2,6],[4,1,5]])
B = np.array([[1,2],[3,4],[5,6]])
C = np.array([[2,1],[1,3]])

AB = np.dot(A, B)
Cinv = np.linalg.inv(C)

M1 = np.identity(2) + np.dot(np.dot(AB, AB.transpose()), Cinv)  # I + (AB)(AB)^T C^{-1}
M2 = np.identity(2) + np.dot(np.dot(AB.transpose(), AB), Cinv)  # I + (AB)^T (AB) C^{-1}
print("A =\n", A)
print("B =\n", B)
print("AB =\n", AB)
print("C =\n", C)
print("C^{-1} =\n", Cinv)
print("M1 =\n", M1)
print("M2 =\n", M2)
print("det M1 =", np.linalg.det(M1))
print("det M2 =", np.linalg.det(M2))

The output says that your statement is wrong:

A =
 [[3 2 6]
 [4 1 5]]
B =
 [[1 2]
 [3 4]
 [5 6]]
AB =
 [[39 50]
 [32 42]]
C =
 [[2 1]
 [1 3]]
C^{-1} =
 [[ 0.6 -0.2]
 [-0.2  0.4]]
M1 =
 [[ 1744.    535. ]
 [ 1451.2   446.6]]
M2 =
 [[  869.2   808.6]
 [ 1123.6  1047.8]]
det M1 = 2478.4
det M2 = 2204.8

Your proof is wrong because

$$\det\left( \prod_k M_k \right) = \det\left( \prod_k M_{p(k)} \right),$$

for all permutations $p$ (the determinant is multiplicative), but, in general,

$$\det\left( I + \prod_k M_k \right) \ne \det\left( I + \prod_k M_{p(k)} \right).$$
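To make this concrete: with two factors, $\det(I+XY)=\det(I+YX)$ always holds (Sylvester's determinant theorem), and cyclic shifts of longer products are still safe, but a non-cyclic reordering of three factors generically fails. A sketch with random matrices (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
X, Y, Z = (rng.standard_normal((n, n)) for _ in range(3))
I = np.eye(n)

# Two factors: det(I + XY) == det(I + YX)  (Sylvester)
d_xy = np.linalg.det(I + X @ Y)
d_yx = np.linalg.det(I + Y @ X)

# Three factors: a cyclic shift still works ...
d_xyz = np.linalg.det(I + X @ Y @ Z)
d_yzx = np.linalg.det(I + Y @ Z @ X)

# ... but a non-cyclic reordering generically does not:
d_yxz = np.linalg.det(I + Y @ X @ Z)

print(np.isclose(d_xy, d_yx))    # True
print(np.isclose(d_xyz, d_yzx))  # True (cyclic shift)
print(np.isclose(d_xyz, d_yxz))  # False for generic matrices
```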

Even if you restrict $A$ and $B$ to be square, the statement won't hold. For example, remove the third column of $A$ and the third row of $B$ above:

A = np.array([[3,2],[4,1]])
B = np.array([[1,2],[3,4]])
C = np.array([[2,1],[1,3]])

The result is:

det M1 = 172.0
det M2 = 151.0

As user1551 told you: try some random examples first. Not "nice" ones like your example above, but completely random ones. If a statement is wrong, chances are that a random test will sink it.

Vedran Šego
  • Thank you. Could you please explain why $$\det\left( I + \prod_k M_k \right) \ne \det\left( I + \prod_k M_{p(k)} \right)$$ in the positive semidefinite case? I know that $\det(I+AB)=\det(I+BA)$ when $A,B$ are PSD. – triomphe Oct 11 '13 at 18:45
  • @MLT This formula works for general (not just PSD) $A$ and $B$ of appropriate sizes (it's called Sylvester's determinant theorem). However, it doesn't hold for more than two matrices in general. What I wrote means that such an identity is not always correct: for some matrices $M_k$, however many of them, you will have equality there, but not for an arbitrary set of three or more matrices. – Vedran Šego Oct 11 '13 at 19:14
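As a footnote to the comment above: Sylvester's determinant theorem needs no PSD assumption and even allows rectangular factors, $\det(I_m+AB)=\det(I_n+BA)$ for $A\in\mathbb{C}^{m\times n}$, $B\in\mathbb{C}^{n\times m}$. A quick sketch with random complex matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 2, 4
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
B = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

lhs = np.linalg.det(np.eye(m) + A @ B)  # an m x m determinant
rhs = np.linalg.det(np.eye(n) + B @ A)  # an n x n determinant
print(np.isclose(lhs, rhs))  # True: det(I_m + AB) = det(I_n + BA)
```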