
I have to solve the following nonlinear matrix equation, and I was wondering whether it fits into some larger class of equations, so that I can either:

  • Obtain a closed-form solution (ideal)
  • Use appropriate algorithms to solve it numerically.

Assume that $V\in\mathbb{R}^{1\times k}$, $B_i\in\mathbb{R}^{k\times 1}$, $T_i\in\mathbb{R}^{k \times k}$ with $i=1,\dots,k$, and finally $L\in\mathbb{R}^{1\times k}$. The equation reads:

$$V - \sum_{i=1}^{k} VB_iVT_i = L$$

And it has to be solved for $V$ (all the other matrices are assumed to be known). Any help or suggestion will be extremely appreciated! Thanks!!!
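For concreteness, the residual of the equation can be written in code as follows (a NumPy sketch following the dimensions above; the function and variable names are my own):

```python
import numpy as np

def residual(V, B, T, L):
    """Evaluate V - sum_i (V B_i)(V T_i) - L.

    V and L have shape (1, k); each B[i] has shape (k, 1);
    each T[i] has shape (k, k). A solution V makes this zero.
    """
    acc = np.zeros_like(V)
    for Bi, Ti in zip(B, T):
        # V @ Bi is 1x1 (a scalar); V @ Ti has shape (1, k),
        # so each summand is a 1 x k row vector, like V itself.
        acc += (V @ Bi) @ (V @ Ti)
    return V - acc - L
```

Note that each summand is quadratic in the entries of $V$, so this is a system of $k$ quadratic equations in the $k$ entries of $V$, not a linear system.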

  • And you solve for what, V? I think it is a very standard linear system if you take a basis containing the $B_i$ (extended to be orthogonal if the $B_i$ are not linearly independent). – Ian Sep 05 '17 at 18:13
  • Yes, it has to be solved for $V$. Thanks for pointing that out; I've edited the question to specify the unknown. Sorry, but can you expand a little on that comment? I haven't found a way to express this as a linear system. Thanks in advance!!! (And yes, the $B_i$ are linearly independent). – controllystuff Sep 05 '17 at 20:08
  • I think I've just correctly understood your comment: you suggest to consider the equation as a mapping, and then obtain the matrix of the transformation in that particular basis, am I correct ? I'll try to do that now! Thanks!! – controllystuff Sep 05 '17 at 20:15
  • If you change the basis of the mappings $T_i$ so that it contains the $B_i$ and is otherwise orthogonal, then $V B_i$ is just the coefficient of $B_i^T$ in the expansion of $V$, which makes the equation simpler than its current form. – Ian Sep 05 '17 at 20:45
  • Actually no, that idea may not really work if the $B_i$ are not themselves orthogonal. – Ian Sep 06 '17 at 13:39
  • Let's assume that they are. What would be the procedure? Is it possible for you to write an example in an answer? It would be extremely appreciated. Thanks!!!! – controllystuff Sep 06 '17 at 13:44
  • Moreover, you could assume that $B_i$ are indeed canonical vectors, i.e. $B_{i} = e_i = [0,\dots,\underbrace{1}_{i},\dots,0]$ – controllystuff Sep 06 '17 at 13:48
  • Actually now that I put it that way, I see that this is a "quadratic" problem, so it is more difficult than I realized. – Ian Sep 06 '17 at 14:22
  • Isn't this the same question as your earlier one? Please refrain from reposting questions, especially on the same day. – user1551 Sep 11 '17 at 11:35

1 Answer


We are dealing with a system of $k$ equations of degree two in the $k$ unknowns $V=(v_i)$. Using Maple's Gröbner-basis software, several trials with various values of $k$ seem to show the following:

If we choose the $(B_i),(T_i),L$ at random (the generic case), we obtain $2^k$ complex solutions, which, by Bézout's theorem, is the maximum number of isolated solutions for a system of $k$ polynomial equations of degree $2$ in $k$ unknowns. For example, when $k=5$ we obtain $32$ complex solutions; curiously, when (again for $k=5$) we consider the truncated system $V-\sum_{i=1}^{4} VB_iVT_i-L=0$, we obtain only $31$ solutions.
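These experiments can be reproduced with open-source tools as well; here is a sketch using SymPy instead of Maple, on an arbitrarily chosen $k=2$ instance with small integer entries (generically one expects at most $2^2=4$ complex solutions):

```python
import sympy as sp

def solve_system(Bs, Ts, L):
    """Solve V - sum_i (V B_i)(V T_i) = L symbolically for the row
    vector V = (v_1, ..., v_k); returns symbols, equations, solutions."""
    k = L.cols
    v = sp.symbols(f'v1:{k + 1}')
    V = sp.Matrix([list(v)])          # 1 x k row vector of unknowns
    acc = sp.zeros(1, k)
    for Bi, Ti in zip(Bs, Ts):
        acc += (V * Bi) * (V * Ti)    # each term is quadratic in the v_i
    eqs = list(V - acc - L)
    return v, eqs, sp.solve(eqs, v)

# A concrete k = 2 instance (data chosen arbitrarily for illustration):
Bs = [sp.Matrix([[1], [2]]), sp.Matrix([[0], [1]])]
Ts = [sp.Matrix([[1, 0], [1, 1]]), sp.Matrix([[2, 1], [0, 1]])]
L = sp.Matrix([[1, -1]])
v, eqs, sols = solve_system(Bs, Ts, L)
print(len(sols))  # Bezout bound: at most 2**2 = 4 complex solutions
```

Internally SymPy, like Maple, reduces the system via a lex Gröbner basis to a univariate eliminant, which is why the solution counts match the Bézout picture described above.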

There is no closed form for the solutions: for example, for $k=3$, Maple returns the result as a triangular decomposition $P_8(v_1)=0,\;v_2=Q_2(v_1),\;v_3=Q_3(v_1)$, where $P_8$ is a polynomial of degree $8$ and the $Q_i$ are polynomials of degree $7$. The Galois group of $P_8$ is $S_8$, which is not solvable, so the roots cannot be expressed in radicals. Note that, of the eight roots of $P_8$, only two are real.

The problem with the Gröbner method is that it already performs badly for moderately large values of $k$, such as $k=8$. On the other hand, pure Gröbner bases give all complex solutions; using methods specialized to the real case, one can obtain only the real solutions. Yet that method is even slower than the previous one.

So it is better to turn to numerical methods.
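For instance, a Newton-type iteration converges quickly from a reasonable starting point. Here is a sketch using SciPy's `fsolve`; the starting guess $V_0 = L$ is a natural default, since $V = L$ solves the equation exactly when the quadratic term vanishes:

```python
import numpy as np
from scipy.optimize import fsolve

def solve_numerically(B, T, L, V0=None):
    """Find a real solution of V - sum_i (V B_i)(V T_i) = L by a
    Newton-type iteration (scipy.optimize.fsolve), starting from V0
    (default: V0 = L, the solution of the linearized equation)."""
    L = np.atleast_2d(L)
    k = L.shape[1]

    def F(vec):
        V = vec.reshape(1, k)
        acc = sum((V @ Bi) @ (V @ Ti) for Bi, Ti in zip(B, T))
        return (V - acc - L).ravel()

    v0 = (L if V0 is None else np.atleast_2d(V0)).ravel()
    v, info, ier, msg = fsolve(F, v0, full_output=True)
    if ier != 1:
        raise RuntimeError(f"fsolve did not converge: {msg}")
    return v.reshape(1, k)
```

Since the system has up to $2^k$ solutions, such an iteration only returns the root in whose basin of attraction the starting point lies; restarting from several random initial guesses is the usual way to look for the other real solutions.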