
Consider $x' = f(x)$ with $f$ Lipschitz and $x(t) \in \mathbb{R}^n$. Suppose $x = 0$ is an equilibrium ($f(0) = 0$), and let $A = f'(0)$ (an $n \times n$ matrix).

My goal is to prove (and understand the proof) that if there exists an eigenvalue $\lambda$ of $A$ with $\operatorname{Re}(\lambda) > 0$, then the equilibrium $x = 0$ of $x' = f(x)$ is unstable.
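Before anything else, a quick numerical illustration (not part of the argument; the example $f$ is made up): here $f(0) = 0$ and $A = f'(0) = \operatorname{diag}(1, -2)$ has an eigenvalue with positive real part, so the claimed theorem predicts that solutions escape every fixed neighborhood of $0$.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    # f(x) = A x + g(x) with A = diag(1, -2) and g(x) = (x_2^2, 0) = O(||x||^2)
    return [x[0] + x[1] ** 2, -2.0 * x[1]]

for delta in (1e-3, 1e-6, 1e-9):
    sol = solve_ivp(f, (0.0, 25.0), [delta, delta], rtol=1e-9)
    print(f"delta = {delta:.0e}:  ||x(25)|| = {np.linalg.norm(sol.y[:, -1]):.3e}")
# No matter how small delta is, x(t) eventually leaves the unit ball.
```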

I can prove the following:

  1. If there is a Lyapunov function ($V>0$, $\dot{V}<0$), then $0$ is an asymptotically stable equilibrium of $x'=f(x)$; if instead we have $V>0$, $\dot{V}\leq0$, then it is marginally stable.
  2. A function $V$ is a Lyapunov function for $x'=f(x)$ $\iff$ $V$ is also a Lyapunov function for the LTI system $x' = Ax$.
  3. $x' = Ax$ is asymptotically stable $\iff$ all eigenvalues of $A$ lie in the open left half plane.
  4. If all eigenvalues of $A$ lie in the open left half plane, then there exists a Lyapunov function of the form $\langle Px,x\rangle$ with $P$ real and positive definite; conversely, no such $P$ exists if $A$ has an eigenvalue in the closed right half plane. (A numerical sketch of items 3 and 4 follows this list.)
  5. It follows that if $x'=Ax$ is asymptotically stable, then so is $x' = f(x)$ (since we can construct a Lyapunov function that works for both).
  6. It follows that if $A$ has an eigenvalue $\lambda$ in the open right half plane, then no Lyapunov function can exist for $x'=f(x)$ (else $x' = Ax$ would have a Lyapunov function).
  7. It also follows that if $f$ is unstable at $x=0$, then there must exist some eigenvalue $\lambda$ of $A$ in the closed right half plane.
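As promised, a minimal numerical sketch of items 3 and 4, assuming SciPy (illustration only; the matrix is made up): for a Hurwitz $A$, solving the Lyapunov equation $A^T P + P A = -I$ produces a positive definite $P$, so $V(x) = \langle Px, x \rangle$ is a Lyapunov function for $x' = Ax$.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 3.0], [0.0, -2.0]])        # eigenvalues -1, -2 (open LHP)
P = solve_continuous_lyapunov(A.T, -np.eye(2))  # solves A^T P + P A = -I
print(np.linalg.eigvals(A))   # all in the open left half plane
print(np.linalg.eigvalsh(P))  # all positive, so P is positive definite
print(A.T @ P + P @ A)        # recovers -I (up to rounding)
```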

So we can use these in the proof if helpful.

Note that if we can prove the following:

  • if the system is stable at $x=0$, then there exists a Lyapunov function near $x=0$

then we are done: such a $V$ would also be a Lyapunov function for the LTI system (item 2), which by items 3 and 4 means all eigenvalues of $A$ are in the open left half plane, and the result follows by contraposition.

This seems to be true based on this Q/A, but no proof is given there.

  • Are you familiar with the Chetaev instability theorem? – Kwin van der Veen Jan 26 '25 at 21:44
  • Thanks. I was able to write a direct proof (answered below). I was not aware of Chetaev; I looked it up, and like Lyapunov functions it is a little unintuitive as to why it works, because of the auxiliary function. – travelingbones Feb 03 '25 at 05:18

1 Answer


Statement to prove: Let $\dot{x} = f(x)$ with $f \in C^1$, $f(0) = 0$, and $f'(0) = A$. If there exists $\lambda \in \operatorname{spec}(A)$ with $\operatorname{Re}(\lambda) > 0$, then $0$ is an unstable equilibrium of $\dot{x} = f(x)$.

Proof: WLOG assume $A$ is in Jordan form, i.e. $$ A = \begin{bmatrix} A_1 & 0 \\ 0 & A_2 \end{bmatrix}, $$ where $A_1$ is the Jordan block for $\lambda$, so that $e_1$ (the first standard basis vector) is an eigenvector: $A_1 e_1 = \lambda e_1$. Expanding $f$ in a Taylor series near $0$ gives \begin{align*} f(x) &= f(0) + A(x - 0) + g(x) = \begin{bmatrix} A_1 & 0 \\ 0 & A_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} g_1(x) \\ g_2(x) \end{bmatrix} = \begin{bmatrix} A_1 x_1 + g_1(x) \\ A_2 x_2 + g_2(x) \end{bmatrix} \end{align*} with $g(x) = \mathcal{O}(\|x\|^2)$.
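The WLOG step can be made concrete; here is a small sketch using SymPy on a made-up matrix (not part of the proof): `jordan_form` returns $T, J$ with $A = T J T^{-1}$, and working in the $J$-coordinates puts the block for $\lambda$ in the position assumed above.

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0], [0, 2, 0], [1, 0, -3]])
T, J = A.jordan_form()   # A = T * J * T**-1
sp.pprint(J)             # a 2x2 Jordan block for 2 and a 1x1 block for -3 (block order may vary)
```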

Let $x(t)$ be the solution of $\dot{x} = f(x)$ with $x(0) = \delta e_1$. By variation of constants, \begin{align*} x(t) &= e^{At} x(0) + \int_0^t e^{A(t - \tau)} g(x(\tau))\, d\tau \\ & = \begin{bmatrix} e^{A_1 t} & 0 \\ 0 & e^{A_2 t} \end{bmatrix} \begin{bmatrix} \delta e_1 \\ 0 \end{bmatrix} + \int_0^t \begin{bmatrix} e^{A_1 (t-\tau)} & 0 \\ 0 & e^{A_2 (t-\tau)} \end{bmatrix} \begin{bmatrix} g_1(x(\tau))\\ g_2(x(\tau)) \end{bmatrix} d\tau \\ & = \begin{bmatrix} e^{A_1 t} \delta e_1 + \int_0^t e^{A_1(t - \tau)} g_1(x(\tau))\, d\tau \\ \int_0^t e^{A_2(t - \tau)} g_2(x(\tau))\, d\tau \end{bmatrix} = \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}. \end{align*}
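As a sanity check (illustration only, reusing the made-up $f$ from the sketch near the top), the variation-of-constants formula can be verified numerically with SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp, quad_vec
from scipy.linalg import expm

A = np.diag([1.0, -2.0])
g = lambda x: np.array([x[1] ** 2, 0.0])   # g(x) = O(||x||^2)
f = lambda t, x: A @ x + g(x)

t_end, x0 = 2.0, np.array([0.1, 0.1])
sol = solve_ivp(f, (0.0, t_end), x0, dense_output=True, rtol=1e-10, atol=1e-12)
duhamel = expm(A * t_end) @ x0 + quad_vec(
    lambda s: expm(A * (t_end - s)) @ g(sol.sol(s)), 0.0, t_end)[0]
print(sol.y[:, -1])  # ODE solver
print(duhamel)       # integral formula; agrees to quadrature accuracy
```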

We make some observations. First, $\|e^{A_1 t} \delta e_1 \| = \delta e^{\operatorname{Re}(\lambda)t}$. To see this, write $$ A_1 = \lambda I + N, $$ where $N$ has $1$s on the superdiagonal and $N^k = 0$ ($k$ is the size of the Jordan block). Since $(\lambda I)N = N(\lambda I)$, we have $e^{A_1t} = e^{\lambda t I} e^{Nt}$. Now $Ne_1 = 0$, so $$e^{Nt} e_1 = \left[I + \frac{Nt}{1!} + \dots + \frac{(Nt)^{k-1}}{(k-1)!} \right] e_1 = e_1\ .$$ Similarly, we see $e^{\lambda t I} = e^{\lambda t} I$, since for any vector $v$ we have $$e^{\lambda t I} v = \sum_j \frac{(\lambda t)^j }{j!} I^j v = \left(\sum_j \frac{(\lambda t)^j}{j!}\right) v = e^{\lambda t} v\ .$$ This implies that $$ e^{A_1t} \delta e_1 = \delta e^{\lambda t} e^{Nt} e_1 = \delta e^{\lambda t} e_1\,,$$ whose norm is $\delta e^{\operatorname{Re}(\lambda)t}$ because $|e^{\lambda t}| = e^{\operatorname{Re}(\lambda)t}$. Moreover, since $\|N\| = 1$, \begin{equation*} \left\| e^{N\tau} \right\| = \left\| I + N\tau + \dots + \frac{(N\tau)^{k-1}}{(k-1)!} \right\| \leq 1+\tau + \dots + \frac{\tau^{k-1}}{(k-1)!} =: p_k(\tau). \end{equation*}
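Both displayed facts are easy to confirm numerically; in this sketch $\lambda$ and $k$ are arbitrary made-up values:

```python
from math import factorial
import numpy as np
from scipy.linalg import expm

lam, k, t = 0.5 + 1.0j, 3, 2.0
N = np.diag(np.ones(k - 1), 1)   # nilpotent part: 1s on the superdiagonal
A1 = lam * np.eye(k) + N         # Jordan block A_1 = lambda*I + N
e1 = np.eye(k)[:, 0]
print(np.allclose(expm(A1 * t) @ e1, np.exp(lam * t) * e1))  # True: e^{A_1 t} e_1 = e^{lam t} e_1
p_k = sum(t**j / factorial(j) for j in range(k))             # p_k(t)
print(np.linalg.norm(expm(N * t), 2) <= p_k)                 # True: ||e^{Nt}|| <= p_k(t)
```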

Next, let $c = \int_0^\infty e^{-\operatorname{Re}(\lambda)\tau} p_k(\tau)\, d\tau$, which is finite since $\operatorname{Re}(\lambda) > 0$.
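For concreteness, $c$ has a closed form: using $\int_0^\infty e^{-a\tau}\tau^j\,d\tau = j!/a^{j+1}$ for $a > 0$, with $a = \operatorname{Re}(\lambda)$ we get $$c = \int_0^\infty e^{-a\tau}\, p_k(\tau)\, d\tau = \sum_{j=0}^{k-1} \frac{1}{j!}\cdot\frac{j!}{a^{j+1}} = \sum_{j=0}^{k-1} \frac{1}{a^{j+1}}\,.$$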

By way of contradiction, suppose $0$ is a (marginally) stable equilibrium, so that trajectories starting sufficiently close to $0$ remain in any prescribed neighborhood of $0$. Since $\|g(x)\| = \mathcal{O}(\|x\|^2)$, we can then choose $\delta > 0$ small enough to ensure that $\|g(x(t))\| < \delta/(2c)$ and $\|x(t)\| < 1$ for all $t$.

But then we see \begin{align*} 1 & > \|x(t)\| \geq \|x_1(t)\|\\ & = \left\| e^{A_1t}\, \delta e_1 + \int_0^t e^{A_1(t-\tau)}g_1(x(\tau))\,d\tau\right\| \\ & \geq \left\| e^{A_1t}\, \delta e_1 \right\| - \left\| e^{A_1t} \int_0^t e^{-A_1\tau}g_1(x(\tau))\,d\tau \right\| \\ & \geq e^{\operatorname{Re}(\lambda) t}\, \delta - e^{\operatorname{Re}(\lambda)t} \int_0^t e^{-\operatorname{Re}(\lambda)\tau} p_k(\tau)\, d\tau \ \|g_1(x)\|_\infty \\ & \geq e^{\operatorname{Re}(\lambda) t}\left[\delta - \int_0^\infty e^{-\operatorname{Re}(\lambda)\tau} p_k(\tau)\, d\tau\, \frac{\delta}{2c} \right]\\ & = e^{\operatorname{Re}(\lambda) t}\, \frac{\delta}{2} \to \infty \quad \quad \text{(a contradiction).} \end{align*}