
Given matrices $\pmb{A}\in\mathbb{R}^{p\times n}$ and $\pmb{B}\in\mathbb{R}^{p\times n}$ with $p>n$, I need to solve the following linear system for a symmetric matrix $\pmb{X}\in\mathbb{R}^{p\times p}$: $$\pmb{X}\pmb{A}=\pmb{B}.$$ Based on this post, it seems that $\pmb{X}$ has an explicit formula.

My issue is that all diagonal elements of the matrix $\pmb{X}$ are known to be $-1$. How does one efficiently solve for $\pmb{X}$ under this constraint?

Additionally, what about the special case when $\pmb{B}=\pmb{0}$?

Hepdrey
  • There may not necessarily exist an $\mathbf X$ such that $\mathbf X \mathbf A = \mathbf B$ and all the diagonal elements of $\mathbf X$ are $-1$. Have you proven that such an $\mathbf X$ exists first? You could instead try to solve $\mathbf X \mathbf A = \mathbf B$ for $\mathbf X$ using the method given in the answer you linked to, and then check if all the diagonal elements of the solution are indeed $-1$. – Mahmoud Mar 15 '24 at 20:41
  • @mhdadk The answer I linked to treats an overdetermined system, while mine is underdetermined. Will that method still work in my case? – Hepdrey Mar 15 '24 at 21:33
  • I cross-posted here with some modifications. – Hepdrey Mar 16 '24 at 16:20

1 Answer


$ \def\k{\otimes} \def\h{\odot} \def\o{{\tt1}} \def\LR#1{\left(#1\right)} \def\op#1{\operatorname{#1}} \def\vc#1{\op{vec}\LR{#1}} \def\diag#1{\op{diag}\LR{#1}} \def\Diag#1{\op{Diag}\LR{#1}} \def\Unvec#1{\op{vec}^{-1}\LR{#1}} \def\qif{\quad\iff\quad} \def\qiq{\quad\implies\quad} \def\c#1{\color{red}{#1}} \def\CLR#1{\c{\LR{#1}}} \def\fracLR#1#2{\LR{\frac{#1}{#2}}} \def\gradLR#1#2{\LR{\grad{#1}{#2}}} $Matrix equations can be vectorized using the identity $$\eqalign{ \vc{ABC} = \LR{C^T\k A}\vc B \\ }$$ The commutation matrix $K$ is an orthogonal permutation matrix that transforms between the vectorizations of a matrix and its transpose, i.e. $$\eqalign{ a = \vc{A} = K^T\vc{A^T} \qif \vc{A^T} = K \vc{A} \\ }$$ Note that if $A=A^T$ then $\,\c{Ka=a}$.
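A quick numerical check of both facts (my own NumPy sketch, not part of the original answer; `vec` is column-major, which is what the identity assumes):

```python
import numpy as np

def vec(A):
    """Stack the columns of A into one long vector (column-major order)."""
    return A.reshape(-1, order="F")

def commutation_matrix(m, n):
    """Permutation matrix K with K @ vec(A) = vec(A.T) for any m-by-n A."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K[i * n + j, j * m + i] = 1.0
    return K

rng = np.random.default_rng(0)

# vec(A B C) = (C^T kron A) vec(B)
A1, B1, C1 = rng.normal(size=(4, 3)), rng.normal(size=(3, 5)), rng.normal(size=(5, 2))
assert np.allclose(vec(A1 @ B1 @ C1), np.kron(C1.T, A1) @ vec(B1))

# K vec(S) = vec(S^T), so a symmetric S satisfies K vec(S) = vec(S)
S = rng.normal(size=(4, 4))
S = S + S.T
K = commutation_matrix(4, 4)
assert np.allclose(K @ vec(S), vec(S.T)) and np.allclose(K @ vec(S), vec(S))
```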

Construct the $X$ matrix with the required properties using the all-ones matrix $J$, the identity matrix $I$, the Hadamard product $\h$, and an arbitrary unconstrained matrix $U$ $$\eqalign{ F &= J-I &\qiq &X = \LR{F\h U} - I \\ f &= \vc F &\qiq & \vc{F\h U} = f\h u \\ G &= \Diag f &\qiq & Gu = f\h u \\ }$$ In general, a lowercase letter denotes the vectorization of a matrix with the same (uppercase) name.
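The construction is easy to sanity-check numerically. The sketch below continues the one above (reusing `vec` and `rng`, still my own illustration): any choice of $U$ yields a matrix with $-1$ on the diagonal, and $Gu=\operatorname{vec}(F\odot U)$ as claimed.

```python
# Continues the sketch above (vec and rng already defined).
p = 4
J, I = np.ones((p, p)), np.eye(p)
F = J - I                     # zero diagonal, ones elsewhere
f = vec(F)
G = np.diag(f)                # G @ vec(U) = f * vec(U) = vec(F * U)

U = rng.normal(size=(p, p))   # arbitrary unconstrained matrix
X = F * U - I                 # '*' is the Hadamard product in NumPy
assert np.allclose(np.diag(X), -1.0)        # diagonal forced to -1
assert np.allclose(vec(F * U), G @ vec(U))  # Gu = f ⊙ u
```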

Vectorize the linear system and use the pseudoinverse to write a closed-form solution $$\eqalign{ &XA = B \\ &\LR{F\h U-I}A = B \\ &\LR{F\h U}A = \LR{B+A} \;\doteq\; C \\ &\LR{A^T\k I}\vc{F\h U} = \vc C \\ &\c{\LR{A^T\k I}G}u = \vc C \\ &\c M u = c \qquad \{ {\sf implicitly\ define}\;\c{M,c} \}\\ &u = M^+c + \CLR{I-M^+M}w \\ &u = {M^+c + \c Pw} \qquad \{ {\sf implicitly\ define}\;\c{P} \}\\ }$$ where $w$ is an arbitrary vector and $P$ is the orthogonal projector onto the nullspace of $M$.
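Continuing the sketch, the block below assembles $M$, $c$, $M^+$ and $P$ for a small instance. To guarantee that a solution exists (which this answer assumes), $B$ is manufactured from a hypothetical ground-truth matrix `X0`; that matrix is only part of my test setup, not part of the method.

```python
# Continues the sketch above.  X0 is a hypothetical ground truth used only to
# manufacture a consistent B, since the answer assumes a solution exists.
n = 2
A = rng.normal(size=(p, n))
X0 = rng.normal(size=(p, p))
X0 = X0 + X0.T
np.fill_diagonal(X0, -1.0)
B = X0 @ A                    # consistent right-hand side by construction

C = B + A
c = vec(C)
M = np.kron(A.T, np.eye(p)) @ G
Mp = np.linalg.pinv(M)
P = np.eye(p * p) - Mp @ M    # orthogonal projector onto the nullspace of M
# General solution of M u = c:  u = Mp @ c + P @ w  for arbitrary w
```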

Next, enforce the symmetry condition and solve for $w$ $$\eqalign{ &u = Ku \\ &\LR{M^+c + Pw} = K\LR{M^+c + Pw} \\ &\CLR{I-K}Pw = \LR{K-I}M^+c \\ &\c L Pw = -LM^+c \qquad \{ {\sf implicitly\ define}\;\c L \}\\ &w = -\LR{LP}^+ LM^+c \\ }$$ Substitute this back into the $u$ equation $$\eqalign{ \def\BR#1{\Big(#1\Big)} u &= M^+c - P\LR{LP}^+ LM^+c \\ &= \BR{\,I - P\LR{LP}^+ L\,}\,M^+c \\ }$$ Finally, undo the vec operation and construct $X$
$$\eqalign{ U &= \Unvec u \\ X &= \LR{F\h U} - I \\ }$$ For the final question, setting $B=0$ simply makes $C=A$; the rest of the construction is unchanged.
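Here is the tail end of the same sketch, which enforces $u=Ku$, rebuilds $X$, and checks the three requirements; for $B=0$ one would pass $C=A$ instead, provided such an $X$ exists.

```python
# Continues the sketch above: pick w so that u = K u, then rebuild X.
K = commutation_matrix(p, p)
L = np.eye(p * p) - K
w = -np.linalg.pinv(L @ P) @ L @ Mp @ c
u = Mp @ c + P @ w

U = u.reshape((p, p), order="F")       # undo the vec operation
X = F * U - np.eye(p)

assert np.allclose(X @ A, B)           # solves X A = B
assert np.allclose(X, X.T)             # symmetric
assert np.allclose(np.diag(X), -1.0)   # diagonal equals -1
```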

greg
  • Thank you very much! I'm not yet familiar with these operations and need to get up to speed on them. I'll go through your answer as soon as possible. – Hepdrey Mar 17 '24 at 02:52
  • @Hepdrey My post assumes that a solution exists for the given $(A,B)$ matrices. If that is not the case, then you should reformulate this as a least-squares problem $$\min_X\big\|XA-B\big\|_F^2$$ and use gradient descent to find an approximate solution. – greg Mar 17 '24 at 16:53
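For readers who want to try that fallback, a rough self-contained sketch of projected gradient descent (my adaptation of the suggestion above, not greg's code) re-imposes the symmetry and diagonal constraints after every step:

```python
# Projected gradient descent on ||X A - B||_F^2 over symmetric X with -1 diagonal.
import numpy as np

rng = np.random.default_rng(1)
p, n = 6, 2
A, B = rng.normal(size=(p, n)), rng.normal(size=(p, n))

X = -np.eye(p)                           # feasible starting point
step = 0.1 / np.linalg.norm(A, 2) ** 2   # crude step size from the spectral norm of A
for _ in range(10000):
    grad = 2.0 * (X @ A - B) @ A.T       # gradient of the Frobenius objective
    X = X - step * grad
    X = 0.5 * (X + X.T)                  # project back: symmetrize ...
    np.fill_diagonal(X, -1.0)            # ... and restore the -1 diagonal
print(np.linalg.norm(X @ A - B, "fro"))  # residual of the approximate solution
```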