
My question is about the equivalence of three different versions of the positive real lemma.

I would like to set up the question by first stating the definition of a positive real transfer function and one version of the positive real lemma. My reference for this definition and statement is Nonlinear Systems (3rd edition) by Khalil.

Definition of positive real transfer function (Definition 6.4, page 237 of Khalil): A $p\times p$ proper rational transfer function matrix $G(s)$ is called positive real if

  • poles of all elements of $G(s)$ are in $\Re(s) \le 0$ (i.e. no poles in the open right half plane)
  • for all $\omega \in \mathbb{R}$ for which $i\omega$ is not a pole of any element of $G(s)$, the matrix $G(i\omega) + G(-i\omega)^\top$ is positive semidefinite
  • any pure imaginary pole $i\omega$ of any element of $G(s)$ is a simple pole, and the residue matrix $\lim_{s\rightarrow i\omega} (s-i\omega)G(s)$ is positive semidefinite Hermitian

State space representation: Let a transfer function $G(s)$ have a minimal state space representation \begin{equation*} \begin{aligned} \dot{x} &= A x + B u \\ y &= C x + D u \end{aligned} \end{equation*} where $A$ is $n\times n$, $B$ is $n\times p$, $C$ is $p\times n$ and $D$ is $p\times p$, so that $G(s) = C(sI-A)^{-1}B + D$.
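As a quick numerical sanity check, the frequency-domain condition of Definition 6.4 can be evaluated directly from a realization. The sketch below uses numpy; the scalar system $G(s) = 1/(s+1)$ (i.e. $A=-1$, $B=C=1$, $D=0$) is a hypothetical example chosen only for illustration:

```python
import numpy as np

def G(s, A, B, C, D):
    """Evaluate the transfer matrix G(s) = C (sI - A)^{-1} B + D at a complex point s."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Hypothetical scalar example: G(s) = 1/(s+1), realized as A=-1, B=C=1, D=0.
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]); D = np.array([[0.0]])

# Check that G(iw) + G(-iw)^T is positive semidefinite on a frequency grid.
# For real (A, B, C, D), G(-iw)^T equals G(iw)^H, so this matrix is Hermitian.
for w in np.linspace(-10.0, 10.0, 41):
    H = G(1j * w, A, B, C, D) + G(-1j * w, A, B, C, D).T
    assert np.min(np.linalg.eigvalsh(H)) >= -1e-12
```

Here $G(i\omega) + G(-i\omega)^\top = 2/(1+\omega^2) > 0$, consistent with the check.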

Consider the following statement \begin{equation} G(s)\ \text{is positive real} \tag{FREQ} \label{eq:FrequencyDomain} \end{equation} and the statement \begin{equation} \begin{gathered} \text{There exists a symmetric positive definite matrix}\ P \\ \text{a}\ p\times n\ \text{matrix}\ L\\ \text{and a}\ p\times p\ \text{matrix}\ W\ \text{such that} \\ \begin{aligned} A^\top P + P A &= -L^\top L \\ P B - C^\top &= -L^\top W \\ D + D^\top &= W^\top W \end{aligned} \end{gathered} \tag{LYAP} \label{eq:Lyapunov} \end{equation}

The version of the Positive Real Lemma in Khalil (page 240, Lemma 6.2) states that \begin{equation*} \eqref{eq:FrequencyDomain} \iff \eqref{eq:Lyapunov} \end{equation*} This is proved in Appendix C.13 in Khalil, invoking the Spectral Factorization Theorem along the way, and I understand this.
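For a concrete instance of \eqref{eq:Lyapunov}, the same hypothetical example $G(s)=1/(s+1)$ admits the explicit solution $P=1$, $L=\sqrt{2}$, $W=0$, which the following sketch verifies:

```python
import numpy as np

# Hypothetical example: G(s) = 1/(s+1), minimal realization A=-1, B=C=1, D=0.
# One explicit solution of the (LYAP) equations:
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]); D = np.array([[0.0]])
P = np.array([[1.0]])           # symmetric positive definite
L = np.array([[np.sqrt(2.0)]])  # p x n
W = np.array([[0.0]])           # p x p

assert np.allclose(A.T @ P + P @ A, -L.T @ L)   # A^T P + P A = -L^T L
assert np.allclose(P @ B - C.T, -L.T @ W)       # P B - C^T  = -L^T W
assert np.allclose(D + D.T, W.T @ W)            # D + D^T    = W^T W
assert np.all(np.linalg.eigvalsh(P) > 0)        # P > 0
```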

There are other statements of the positive real lemma in the literature (For example Boyd et al, Linear Matrix Inequalities in System and Control Theory, page 35).

  • Matrix inequality \begin{equation} \begin{gathered} \text{There exists a symmetric positive definite matrix}\ P\ \text{such that} \\ M := \begin{bmatrix} A^\top P + P A & P B - C^\top \\ B^\top P - C & -(D+D^\top) \end{bmatrix} \le 0 \end{gathered} \tag{LMI} \label{eq:LMI} \end{equation}
  • Algebraic Riccati equation \begin{equation} \begin{gathered} \text{There exists a symmetric positive definite matrix}\ P\ \text{such that} \\ A^\top P + P A + (P B - C^\top)(D+D^\top)^{-1}(P B - C^\top)^\top = 0 \end{gathered} \tag{RIC} \label{eq:Riccati} \end{equation}
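The relationship between \eqref{eq:LMI} and \eqref{eq:Riccati} can also be probed numerically. The sketch below uses the hypothetical example $G(s) = (s+2)/(s+1) = 1 + 1/(s+1)$, so that $D+D^\top = 2$ is invertible; for it, \eqref{eq:Riccati} reduces to $P^2 - 6P + 1 = 0$, with roots $P = 3 \pm 2\sqrt{2}$:

```python
import numpy as np

# Hypothetical example: G(s) = (s+2)/(s+1), realized as A=-1, B=1, C=1, D=1.
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]); D = np.array([[1.0]])

def M_of(P):
    """Assemble the (LMI) block matrix for a given P."""
    return np.block([[A.T @ P + P @ A, P @ B - C.T],
                     [B.T @ P - C,     -(D + D.T)]])

def ric(P):
    """Left-hand side of (RIC)."""
    S = P @ B - C.T
    return A.T @ P + P @ A + S @ np.linalg.inv(D + D.T) @ S.T

# P = 1 satisfies (LMI) strictly but does not solve (RIC):
P1 = np.array([[1.0]])
assert np.max(np.linalg.eigvalsh(M_of(P1))) < 0
assert not np.allclose(ric(P1), 0)

# P = 3 - 2*sqrt(2) solves (RIC) exactly; M then has rank 1 = p:
P2 = np.array([[3.0 - 2.0 * np.sqrt(2.0)]])
assert np.allclose(ric(P2), 0)
assert np.linalg.matrix_rank(M_of(P2), tol=1e-9) == 1
```

Note that at $P=1$ the matrix $M$ is negative definite with full rank $n+p=2$, while at the (RIC) solution its rank drops to $p=1$; this distinction is exactly what the questions below are about.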

My question is, under what conditions are the following true?

  1. $\eqref{eq:Lyapunov} \iff \eqref{eq:LMI}$
  2. $\eqref{eq:Lyapunov} \iff \eqref{eq:Riccati}$

Here is my attempt:

  • $\eqref{eq:Lyapunov} \implies \eqref{eq:LMI}$ is straightforward. From \eqref{eq:Lyapunov}, $ M = -\begin{bmatrix} L & W \end{bmatrix}^\top \begin{bmatrix} L & W \end{bmatrix} $ and is therefore negative semidefinite.
  • $\eqref{eq:LMI} \implies \eqref{eq:Lyapunov}$: I am not sure how to show this, or under what conditions it is true. The matrix $M$ is symmetric negative semidefinite, so it has real non-positive eigenvalues and an orthonormal basis of eigenvectors. If $M$ had at most $p$ non-zero eigenvalues, then $M = \mathcal{Q}(-\Lambda)\mathcal{Q}^\top$, where $\mathcal{Q}$ is $(n+p)\times p$ and $\Lambda$ is $p\times p$ diagonal with non-negative entries, and setting $\begin{bmatrix} L & W \end{bmatrix} = \sqrt{\Lambda}\mathcal{Q}^\top$ would do it. But why should $M$, which is $(n+p)\times (n+p)$, have rank at most $p$? I have also not used the fact that $P$ is positive definite.
  • $\eqref{eq:Lyapunov} \implies \eqref{eq:Riccati}$ is straightforward under the additional condition that $D + D^\top$ is invertible. Direct substitution of the right hand sides of \eqref{eq:Lyapunov} into \eqref{eq:Riccati} gives the result.
  • For $\eqref{eq:Riccati} \implies \eqref{eq:Lyapunov}$, it seems that the additional condition that $D+D^\top$ be positive definite is needed. If so, then $D+D^\top$ has a decomposition $W^\top W$ with $W$ invertible (e.g. via a Cholesky factorization), and defining $L^\top = -(PB - C^\top)W^{-1}$ gives the result.
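The construction in the last bullet can be checked numerically. The sketch below uses the hypothetical realization $A=-1$, $B=C=D=1$ (so $D+D^\top = 2 \succ 0$), for which $P = 3-2\sqrt{2}$ solves \eqref{eq:Riccati}:

```python
import numpy as np

# Sketch of the (RIC) => (LYAP) construction when D + D^T > 0, on the
# hypothetical example A=-1, B=1, C=1, D=1 with the (RIC) solution P = 3 - 2*sqrt(2).
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]); D = np.array([[1.0]])
P = np.array([[3.0 - 2.0 * np.sqrt(2.0)]])

W = np.linalg.cholesky(D + D.T).T        # upper-triangular factor: W^T W = D + D^T
Lt = -(P @ B - C.T) @ np.linalg.inv(W)   # L^T = -(PB - C^T) W^{-1}
L = Lt.T

# All three (LYAP) equations hold:
assert np.allclose(A.T @ P + P @ A, -L.T @ L)
assert np.allclose(P @ B - C.T, -L.T @ W)
assert np.allclose(D + D.T, W.T @ W)
```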

Summary of my questions:

  • I am not able to show $\eqref{eq:LMI} \implies \eqref{eq:Lyapunov}$.
  • For $\eqref{eq:Lyapunov} \iff \eqref{eq:Riccati}$, is the additional condition that $D+D^\top$ be positive definite needed?
Siva
  • @user1551 To your first comment, $L$ and $W$ both have $p$ rows, so $X$ must have $p$ rows, which means the rank of $M$ is only $p$ although its size is $n+p$. Why is this so? Note: this comment mirrors my comment in response to KBS's answer below as well. – Siva Jan 01 '25 at 20:17
  • @user1551 To your second comment about the equivalency of (LMI) and (RIC), wouldn't that argument suggest (RIC) being satisfied as an inequality and not an equation? – Siva Jan 02 '25 at 04:45

1 Answer

Let us first consider the equivalence between (LYAP) and (RIC). As you stated it, (RIC) clearly requires the assumption that $D+D^T$ is invertible.

Assuming this is the case, note that a necessary condition for (LMI) to be satisfied is that $D+D^T$ be positive semidefinite; together with invertibility, this means $D+D^T$ must be positive definite. Taking the Schur complement of the block $-(D+D^T)$ in (LMI) then yields

$$ A^T P + P A + (P B - C^T)(D+D^T)^{-1}(P B - C^T)^T \preceq 0, $$ which is a Riccati inequality. It can be shown that every solution $P$ of the Riccati inequality obeys $P\preceq P_+$, where $P_+$ is the maximal solution of (RIC). In addition, the feasibility of the Riccati inequality is equivalent to that of (RIC).

When $D+D^T$ is singular but nonzero, the Riccati equation becomes

$$ A^T P + P A + (P B - C^T)(D+D^T)^{+}(P B - C^T)^T = 0 $$ where $(D+D^T)^+$ denotes the Moore–Penrose pseudoinverse of $D+D^T$. However, we also need the following equality condition

$$ (PB-C^T)=(PB-C^T)(D+D^T)^+(D+D^T) $$ which arises from the (non-strict) Schur complement. Note that when $D+D^T$ is invertible, this condition is automatically satisfied as $(D+D^T)^+=(D+D^T)^{-1}$.

Finally, when $D+D^T=0$, there is no Riccati equation, only the pair of conditions $A^TP+PA\preceq0$ and $PB-C^T=0$.
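This degenerate case can be seen on the hypothetical example $G(s)=1/(s+1)$ ($A=-1$, $B=C=1$, $D=0$): the condition $PB-C^T=0$ forces $P=1$, and the Lyapunov inequality then holds.

```python
import numpy as np

# Degenerate case D + D^T = 0, on the hypothetical example G(s) = 1/(s+1).
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]); D = np.array([[0.0]])

P = np.array([[1.0]])  # the unique P with P B - C^T = 0 here
assert np.allclose(P @ B - C.T, 0)
assert np.max(np.linalg.eigvalsh(A.T @ P + P @ A)) <= 0  # A^T P + P A <= 0
```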


Now consider the implication (LMI) $\implies$ (LYAP). It is immediate: if there exists a $P\succ0$ such that $M$ is negative semidefinite, then there exists a matrix $G\in\mathbb{R}^{m\times(n+p)}$ with $m=\textrm{rank}(M)$ such that $M=-G^TG$. Partition $G=[L\ \ W]$ with $L\in\mathbb{R}^{m\times n}$ and $W\in\mathbb{R}^{m\times p}$.

To show that $m$ can be set to $p$ without loss of generality, note that if $P$ is chosen such that (RIC) holds (with either the inverse or the Moore–Penrose pseudoinverse), then $M$ has at most $p$ nonzero eigenvalues. Indeed, from the Schur complement and Sylvester's law of inertia we obtain

$$ n_0(M)=n_0(D+D^T)+n_0(\text{RIC})=p-\textrm{rank}(D+D^T)+n $$ and $$ n_-(M)=n_-(-(D+D^T))+n_-(\text{RIC})=\textrm{rank}(D+D^T)\le p $$ where $n_0(\cdot)$ and $n_-(\cdot)$ denote the number of zero and negative eigenvalues, respectively, and $n_0(\text{RIC})$, $n_-(\text{RIC})$ are evaluated on the left-hand side of (RIC), which is the zero matrix by assumption.
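These inertia counts can be verified numerically. The sketch below uses the hypothetical example $A=-1$, $B=C=D=1$ (so $n=p=1$), with $P=3-2\sqrt{2}$ solving (RIC):

```python
import numpy as np

# Inertia check on the hypothetical example A=-1, B=1, C=1, D=1 (n = p = 1),
# with P solving (RIC).
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]); D = np.array([[1.0]])
P = np.array([[3.0 - 2.0 * np.sqrt(2.0)]])

M = np.block([[A.T @ P + P @ A, P @ B - C.T],
              [B.T @ P - C,     -(D + D.T)]])
eigs = np.linalg.eigvalsh(M)
n_minus = int(np.sum(eigs < -1e-9))
n_zero = int(np.sum(np.abs(eigs) <= 1e-9))

n, p = 1, 1
r = np.linalg.matrix_rank(D + D.T)
assert n_minus == r                # = rank(D + D^T) <= p
assert n_zero == p - r + n         # = p - rank(D + D^T) + n
```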

KBS
  • I understand the existence of an $(n+p)\times(n+p)$ matrix $G$ so that $M=-G^\top G$, but $G$ must be $p\times(n+p)$ for it to be partitioned as $[L\ \ W]$, implying that $M$ has rank at most $p$. This is where I am having a problem. This question applies to user1551's comment above as well. – Siva Jan 01 '25 at 19:44
  • No, the dimensions of $G$ are $m\times(n+p)$ where $m$ is at least the rank of $M$. – KBS Jan 01 '25 at 19:55
  • Why should such a lower than $n+p$ rank decomposition of $M$ exist? Shouldn't $m=p$, since both $L$ and $W$ have $p$ rows? – Siva Jan 01 '25 at 19:57
  • If $M$ is negative semidefinite and not negative definite, then it must have eigenvalues at zero, resulting in a rank drop. So, $m$ is the number of negative eigenvalues of $M$. – KBS Jan 01 '25 at 19:58
  • But why should $M$ have at least $n$ zero eigenvalues, so that $G$ can have $p$ rows? – Siva Jan 01 '25 at 20:00
  • By assumption $M$ has between zero and $n+p$ zero eigenvalues, so we have $0\le m\le n+p$. $G$ does not necessarily have $p$ rows; I do not know where you got that from. Assuming $G$ is full rank, its number of rows is just $m$, which is equal to the rank of $M$, and we have $L\in\mathbb{R}^{m\times n}$ and $W\in\mathbb{R}^{m\times p}$. – KBS Jan 01 '25 at 20:14
  • That makes sense. However, in the statement (LYAP), $L$ is $p\times n$ and $W$ is $p\times p$. While the number of rows in $L$ is not obvious, $W$ can have at most $p$ rows since it is the decomposition of a $p\times p$ matrix. For the implication (LMI) $\implies$ (LYAP) shouldn't it also be proved that $G$ has at most $p$ rows? – Siva Jan 01 '25 at 20:28
  • This is not necessarily the case. For instance, pick the matrix $M = \begin{bmatrix}1&2&0\\2&4&0\\0&0&1\end{bmatrix}=\begin{bmatrix}1&0\\2&0\\0&1\end{bmatrix}\begin{bmatrix}1&0\\2&0\\0&1\end{bmatrix}^T$. The rank of the lower-right block of $M$ is 1, yet the associated $W$ matrix is $\begin{bmatrix}0\\1\end{bmatrix}$. This shows that the number of rows of $W$ can be larger than its rank. – KBS Jan 01 '25 at 20:41
  • After thinking more about this, it should be possible to show that for certain choices for $P$ the rank of $M$ will be at most $p$. I will edit my answer tomorrow. – KBS Jan 01 '25 at 20:59
  • @Siva I have added some details. – KBS Jan 02 '25 at 09:33
  • Thank you! Two further questions: (1) Does (LMI) $\implies$ (LYAP) then require $D+D^\top \ne 0$? (2) What roles do the maximum solution to (RIC), $P_+$, in your description, and the argument that the feasibilities of the Riccati inequality and (RIC) are equivalent play in the proof? – Siva Jan 02 '25 at 15:32
  • (1) No, if $D+D^T=0$, then $W=0$; and (2) the question is not clear to me. The proof of what? – KBS Jan 02 '25 at 16:03
  • (2) was simply what is the role of $P_+$ and the equivalence of Riccati equation and inequality feasibility in showing $m$ can be set to $p$. – Siva Jan 02 '25 at 17:15
  • $P_+$ does not have a role there; I just mentioned it for the sake of completeness. But you need the equivalence (LMI) <--> (RIC inequality) <--> (RIC) for the argument to work. – KBS Jan 02 '25 at 19:10
  • Thank you. I have accepted the answer. – Siva Jan 02 '25 at 19:32