
Preliminary properties: Let the state vector $x(t)=[x_1(t),\dots,x_n(t)]^T\in\mathbb{R}^n$ evolve according to the dynamical system $$ \dot{x} = Ax + \begin{bmatrix} \phi_1(x_1) \\ \vdots \\ \phi_n(x_1) \end{bmatrix}, \ \ \ \ x(0) = x_0 $$ where $A$ is defined by $$ A = \begin{bmatrix} \lambda_1 & 1 & 0 &\cdots& 0\\ 0 & \lambda_2 & 1 &\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&0\\ 0&\cdots&0&\lambda_{n-1}& 1\\ 0&\cdots&0&0&\lambda_n \end{bmatrix} $$ with $\lambda_i>0$, and $\phi_i(x_1) = \beta_i |x_1|^{\alpha_i}\operatorname{sign}(x_1)$ with $\beta_i>0$ and $0<\alpha_i<1$.
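The setup is easy to explore numerically; here is a minimal simulation sketch (Python/SciPy; the parameter values are arbitrary illustrative choices, not part of the problem):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Arbitrary illustrative parameters for n = 3 (not part of the question).
lam = np.array([1.0, 0.8, 1.2])          # lambda_i > 0
beta = np.array([0.5, 0.3, 0.4])         # beta_i > 0
alpha = np.array([0.5, 0.7, 0.3])        # 0 < alpha_i < 1

# A: lambda_i on the diagonal, ones on the superdiagonal.
A = np.diag(lam) + np.diag(np.ones(len(lam) - 1), 1)

def f(t, x):
    # phi_i(x_1) = beta_i * |x_1|^alpha_i * sign(x_1)
    phi = beta * np.abs(x[0]) ** alpha * np.sign(x[0])
    return A @ x + phi

# In the positive orthant every term is positive, so the norm grows.
sol = solve_ivp(f, (0.0, 4.0), [1.0, 1.0, 1.0], rtol=1e-9, atol=1e-12)
```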

Question: Is it possible to show that for any initial condition $x_0\neq 0$, the solution $x(t)$ either converges to the origin or satisfies $ \lim_{t\to\infty}\|x(t)\| = +\infty $, but cannot remain on a bounded trajectory other than staying at the origin?

Concretely, what additional structure or conditions on the system or the initial condition are required to show this?

In case you find this useful, here are my attempts to understand/solve the problem.

Attempt 1: I was trying to use results such as the ones from here, which can conclude what I want but require finding a Lyapunov-like function (not necessarily positive definite) for which $\ddot{V}\neq 0$ when $x\neq 0$. However, I haven't been able to come up with a suitable function.

Attempt 2: The differential equation has an "explicit" solution (not truly explicit, but it can be expressed as) $$ x(t) = e^{At}x_0 + e^{At}\int_0^t e^{-As}\Phi(x_1(s))\,ds $$ where $\Phi(x_1) = [\phi_1(x_1),\dots,\phi_n(x_1)]^T$. So I wanted to proceed by contradiction: assume that there exist $b,B>0$ and $T>0$ such that $b\leq \|x(t)\|\leq B$ for all $t\geq T$. Hence, $$ b\leq \left\|e^{At}x_0 + e^{At}\int_0^t e^{-As}\Phi(x_1(s))\,ds\right\|\leq B, $$ and in this case there should exist $c,C>0$ such that $0<c\leq\|\Phi(x_1(t))\|\leq C$ for all $t\geq T$. The idea was then to derive a contradiction, for example by using $\|\Phi(x_1(t))\|\leq C$ to show that $\|x(t)\|>B$. But unfortunately I haven't obtained anything positive in this direction either.

Attempt 3: Can the Bendixson–Dulac criterion (see Theorem 11 here) be used to conclude something for this system, at least in the planar case $n=2$? It is easy to verify that if we write the system as $\dot{x} = f(x)$, then $\nabla\cdot f(x)>0$.

I know that neither my attempts nor my exposition here is perfect. However, I'm looking for suggestions, references, or any idea that might help me understand this problem better.

  • From your remarks it seems to follow that $\phi_i(x_1) = \beta_i\,\text{sign}(x_1)\,|x_1|^{\alpha_i}$ with $\beta_i>0$. Is this correct? – Kwin van der Veen Jan 13 '21 at 18:09
  • Hmm turns out that's what I'm using in my examples... is that the only possibility given the remarks? If so, maybe I will add that directly to the question. – FeedbackLooper Jan 13 '21 at 18:13
  • I believe so, unless your third point would involve an inequality instead of the equality $\phi_i(\eta x_1) = \eta^{\alpha_i}\phi_i(x_1)$. I do have one remark. Namely, I assume that your second remark stating $x_1\phi_i(x_1)>0$ would also require $\forall\ x_1\neq 0$? – Kwin van der Veen Jan 13 '21 at 18:17
  • Oh, that's true. I missed that. I will correct it. Thanks. – FeedbackLooper Jan 13 '21 at 18:19
  • I have attempted to solve this in reversed time (so that the initial time derivative gets negated) and apply the circle criterion to it, in the hope of being able to show global stability (global instability for the original system). However, I have been able to find counterexamples to this, so it doesn't seem to work in every case. – Kwin van der Veen Jan 15 '21 at 14:48
  • Can you post the counterexamples? Are those counterexamples to stability, or to the fact that trajectories either converge to the origin or go to infinity? Thanks anyway! – FeedbackLooper Jan 15 '21 at 15:01
  • My counterexample did not use the new band structure posed on $A$. For $n=1$ and $n=2$ I think that with the new band structure one might be able to show that it is true with the circle criterion. However, I am not sure how to generalize this to higher dimensions. I also have my doubts whether the circle criterion is applicable here, since it would require the same number of "inputs" as "outputs". – Kwin van der Veen Jan 24 '21 at 03:19
  • Also I assume that it should still hold that $0<\alpha_i<1$? – Kwin van der Veen Jan 24 '21 at 03:20
  • Yes, the condition on $\alpha_i$ still holds. Thank you for your suggestion on the circle criterion; I think it is helping me understand the problem, at least for $n\leq 2$. – FeedbackLooper Jan 24 '21 at 13:11

2 Answers


Inspired by the answer of open problem, one can say a bit more in general by considering the $\alpha_i=1$ cases. Admittedly, since it is stated that $0 < \alpha_i < 1$, these cases just barely violate the assumed range for each $\alpha_i$. In these cases the dynamics are linear and can be described by $\dot{x} = M\,x$, with

$$ M = \begin{bmatrix} \lambda_1 + \beta_1 & 1 & 0 & \cdots & 0 \\ \beta_2 & \lambda_2 & 1 & \ddots & \vdots \\ \vdots & 0 & \ddots & \ddots & 0 \\ \beta_{n-1} & \vdots & \ddots & \lambda_{n-1} & 1 \\ \beta_n & 0 & \dots & 0 & \lambda_n \end{bmatrix}. \tag{1} $$

Systems of this kind can have non-zero bounded trajectories if $M$ has at least one eigenvalue equal to zero. A necessary condition for this is $\det(M) = 0$, since the determinant of a matrix equals the product of its eigenvalues.

It can be shown that in general the determinant of $(1)$ is equal to

$$ \det(M) = \prod_{k=1}^n \lambda_k + \sum_{k=1}^n \left((-1)^{k+1} \beta_k \prod_{m = k+1}^n \lambda_m\right). \tag{2} $$

Even though $\lambda_i,\beta_i > 0$ for all $i = 1, \cdots, n$, the minus signs inside $(2)$ make it possible to have $\det(M) = 0$ for $n \ge 2$.
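As a sanity check, the determinant formula $(2)$ can be verified numerically against a direct determinant computation for random positive parameters (a sketch; the parameter ranges are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def det_formula(lam, beta):
    # Right-hand side of (2); np.prod of an empty slice is 1.
    n = len(lam)
    total = np.prod(lam)
    for k in range(1, n + 1):
        total += (-1) ** (k + 1) * beta[k - 1] * np.prod(lam[k:])
    return total

ok = []
for n in range(2, 7):
    lam = rng.uniform(0.1, 2.0, n)
    beta = rng.uniform(0.1, 2.0, n)
    # M from (1): A with beta_i added down the first column.
    M = np.diag(lam) + np.diag(np.ones(n - 1), 1)
    M[:, 0] += beta
    ok.append(np.isclose(np.linalg.det(M), det_formula(lam, beta)))
```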

For example, taking $n = 2$ with $\lambda_1 = \lambda_2 = \beta_1 = 1$ and $\beta_2 = 2$ yields

$$ M = \begin{bmatrix} 2 & 1 \\ 2 & 1 \end{bmatrix}, \tag{3} $$

which has eigenvalues $0$ and $3$, and can thus exhibit non-zero bounded trajectories if the initial condition $x(0)$ is chosen so that it does not excite the unstable mode associated with the eigenvalue $3$.
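A quick numerical confirmation of this example (a sketch; the eigendecomposition is standard NumPy):

```python
import numpy as np

# M for lambda_1 = lambda_2 = beta_1 = 1, beta_2 = 2, as in (3).
M = np.array([[2.0, 1.0],
              [2.0, 1.0]])

eigvals = np.linalg.eigvals(M)          # expected: 0 and 3

# A null vector of M gives a line of equilibria of xdot = M x,
# i.e. trajectories that stay bounded without converging to 0.
v = np.array([1.0, -2.0])               # solves 2*x1 + x2 = 0
```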


Another route to a counterexample is to find a system satisfying your description that has equilibria other than the origin. Note that in the linear cases the mode whose associated eigenvalue is zero gives a whole line of equilibria. For every choice of the $\alpha_i$, setting $x_1 = 1$ yields $\phi_i(x_1) = \beta_i$. Therefore, a non-zero equilibrium can be constructed by solving $\dot{x} = 0$. In order to separate the knowns from the unknowns, define $x' = \begin{bmatrix}x_2 & \cdots & x_n\end{bmatrix}^\top$, so that $\dot{x} = 0$ splits into $\dot{x}_1 = 0$ and $\dot{x}' = 0$. Substituting $x_1 = 1$ into these two expressions yields

$$ \lambda_1 + x_2 + \beta_1 = 0, \tag{4} $$

$$ A'\,x' + B = 0, \tag{5} $$

with $B = \begin{bmatrix}\beta_2 & \cdots & \beta_n\end{bmatrix}^\top$ and

$$ A' = \begin{bmatrix} \lambda_2 & 1 & 0 & \cdots & 0 \\ 0 & \lambda_3 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_{n-1} & 1 \\ 0 & \cdots & 0 & 0 & \lambda_n \end{bmatrix}. \tag{6} $$

Solving $(5)$ for $x'$ yields $x' = - A'^{-1} B$. Therefore $B$ can be chosen freely to ensure that $\beta_i > 0$ for $i=2,\cdots,n$. However, this doesn't ensure that $\beta_1 > 0$. Namely, solving $(4)$ for $\beta_1$ yields $\beta_1 = -\lambda_1 - x_2$, where $x_2$ is obtained from the solution for $x'$. Note that scaling $B$ by a positive scalar $\gamma$ scales $x'$ by the same factor. Therefore, if for some valid $B$ one obtains a negative value for $x_2$, one can always choose $\gamma$ large enough that, after scaling, $\beta_1$ becomes positive. The inverse of $A'$ from $(6)$ can be shown to be equal to

$$ A'^{-1}_{ij} = \left\{ \begin{array}{ll} \frac{(-1)^{j-i}}{\prod_{k=i+1}^{j+1} \lambda_k} & \text{if}\ j \geq i \\ 0 & \text{otherwise} \end{array} \right., \tag{7} $$

where $X_{ij}$ denotes the element of matrix $X$ in its $i$th row and $j$th column. Given that each element of $B$ is positive, the expression for $x_2$ is a sum of terms with alternating signs. Therefore, choosing some of the odd entries of $B$ sufficiently large guarantees that the associated solution for $x_2$ is negative, which ensures that $\beta_1$ can be made positive.
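As a sanity check, the closed form for $A'^{-1}$ can be verified numerically (a sketch; here $A'$ is indexed so that its diagonal is $\lambda_2,\dots,\lambda_n$, i.e. the product runs over $\lambda_{i+1},\dots,\lambda_{j+1}$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
lam = rng.uniform(0.5, 2.0, n)          # lambda_1, ..., lambda_n

# A' from (6): diagonal lambda_2..lambda_n, ones on the superdiagonal.
Ap = np.diag(lam[1:]) + np.diag(np.ones(n - 2), 1)

# Closed-form inverse: with A'_{ii} = lambda_{i+1}, entry (i, j) (1-based)
# equals (-1)^(j-i) / (lambda_{i+1} * ... * lambda_{j+1}) for j >= i, else 0.
inv = np.zeros((n - 1, n - 1))
for i in range(n - 1):                   # 0-based loop indices
    for j in range(i, n - 1):
        inv[i, j] = (-1) ** (j - i) / np.prod(lam[i + 1:j + 2])
```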

For example, $n = 2$ with $\lambda_1 = \lambda_2 = 1$ and $\beta_2 = 2$ (which by $(4)$ forces $\beta_1 = -\lambda_1 - x_2 = 1$) yields $x_{eq} = \begin{bmatrix}1 & -2\end{bmatrix}^\top$ as an equilibrium for every possible choice of the $\alpha_i$. Note that since the expression for $\dot{x}$ is odd in $x$, the point $-x_{eq}$ (that is, $\begin{bmatrix}-1 & 2\end{bmatrix}^\top$) is an equilibrium as well.
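This can be checked directly; a sketch (the values of the $\alpha_i$ below are arbitrary, since $|x_1| = 1$ at the equilibrium):

```python
import numpy as np

# n = 2 example: lambda_1 = lambda_2 = 1, beta_2 = 2; (4) forces beta_1 = 1.
lam = np.array([1.0, 1.0])
beta = np.array([1.0, 2.0])
A = np.diag(lam) + np.diag([1.0], 1)

def f(x, alpha):
    phi = beta * np.abs(x[0]) ** alpha * np.sign(x[0])
    return A @ x + phi

# Both x_eq and -x_eq are equilibria, independently of the alphas.
x_eq = np.array([1.0, -2.0])
checks = [np.allclose(f(s * x_eq, a), 0.0)
          for a in (np.array([0.3, 0.7]), np.array([0.5, 0.5]))
          for s in (1.0, -1.0)]
```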

However, I am not sure whether for $n \geq 2$ these systems always have multiple equilibrium points for every choice of $\lambda_i,\beta_i > 0$. But at least this shows that there exist systems satisfying your description that violate your postulated limits.

  • So you say that because with $\alpha_i=1$ we can have nonzero bounded trajectories, with $\alpha_i$ close to $1$ we would still have them? It's an interesting idea; however, I'm not sure how to formalize such a continuity argument. In the end, having marginally stable poles is a very degenerate case; I would expect that changing the parameters of the system (for $\lambda_i, \beta_i$ this is easy to see, but for $\alpha_i$ I'm not sure) would rule out the bounded trajectories. Thanks for this interesting insight anyway! – FeedbackLooper Jan 25 '21 at 10:23
  • By changing the parameters I mean "slightly changing". As when we go from $\alpha_i=1$ to $\alpha_i = 1 - \varepsilon$ with small $\varepsilon>0$. – FeedbackLooper Jan 25 '21 at 10:57
  • Hey Kwin van der Veen I'll give you the bounty. I don't think we have already solved the problem, but you have been very helpful. Thank you for sticking around. – FeedbackLooper Jan 25 '21 at 12:10
  • @FeedbackLooper I have updated my answer and now also includes counter examples for all nonlinear cases. – Kwin van der Veen Jan 25 '21 at 12:59
  • @FeedbackLooper for every $\alpha_i<1$ the linearization at the origin would yield an infinite slope, so I am not sure one could just approximate that using $\alpha_i=1$. Plus it would indeed be degenerate, so any slight nonlinearity would probably drive the state away from the stationary mode. However, my new section is free of such limitations; only now one has equilibrium points instead of lines, so the set of violating initial conditions is smaller. – Kwin van der Veen Jan 25 '21 at 13:32
  • These are very interesting ideas. I think this answers the question. Thanks sir – FeedbackLooper Jan 25 '21 at 13:57

Instead of the general case, let us focus on the case where $\alpha_{i} = 1$ for all $i$. In this special case $\operatorname{sign}(x_{1})|x_{1}|=x_{1}$, so the $\Phi(x_{1})$ term can be absorbed into $A$, yielding $A' = A + \begin{bmatrix}\beta & 0 & \cdots & 0\end{bmatrix}$, where the vector $\beta = [\beta_1,\dots,\beta_n]^{\top}$ is added to the first column of $A$. So in this special case the equation is linear and homogeneous.

This may seem like an oversimplification; however, there are several salient points. First, it is a good first place to look for counterexamples. Second, when $\alpha_i = 1$ the local structure near the zero equilibrium is governed exactly by $A'$ for small $\|x(t)\|$, so the behavior of at least a subset of solutions will always depend on $A'$. In that sense it is always worth looking at the structure of $A'$.

Let's look at the system of two equations, just to keep the notation at a reasonable level.

$$\frac{dx_{1}}{dt} = (\lambda_{1} + \beta_{1})x_{1}(t) +x_{2}(t)$$

$$\frac{dx_{2}}{dt} = \beta_{2}x_{1}(t) + \lambda_{2}x_{2}(t)$$

The solutions are given by:

$x(t) = e^{A't}x_{0}$

The same reasoning works for the full $n$-dimensional system when all $\alpha_{i}=1$.

Now consider the general case, still with two equations. When all components of $x(t)$ have the same sign (see the orthant assumption below), every term on the right-hand side shares that sign, so

$$\left|\frac{dx_{1}}{dt}\right| = \left|\lambda_{1}x_{1}(t) + \operatorname{sign}(x_{1}(t))\,\beta_{1}|x_{1}(t)|^{\alpha_{1}} + x_{2}(t)\right| = \lambda_{1}|x_{1}(t)| + \beta_{1}|x_{1}(t)|^{\alpha_{1}} + |x_{2}(t)|,$$

$$\left|\frac{dx_{2}}{dt}\right| = \left|\operatorname{sign}(x_{1}(t))\,\beta_{2}|x_{1}(t)|^{\alpha_{2}} + \lambda_{2}x_{2}(t)\right| = \beta_{2}|x_{1}(t)|^{\alpha_{2}} + \lambda_{2}|x_{2}(t)|.$$

So we get the inequalities:

$$\left|\frac{dx_{1}}{dt}\right| \geq \lambda_{1}|x_{1}(t)|+ |x_{2}(t)|,$$

$$\left|\frac{dx_{2}}{dt}\right| \geq \lambda_{2}|x_{2}(t)|.$$

Integrating these inequalities (a comparison argument) gives

$|x_{1}(t)| \geq |y_{1}(t)|$ and $|x_{2}(t)| \geq |y_{2}(t)|$,

where $y(t) = e^{At}x_{0}$.
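This componentwise domination can be checked numerically; a sketch (Python/SciPy, with arbitrary positive parameters; the linear comparison solution $y(t)$ is computed with the matrix exponential):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

# Arbitrary positive parameters for the two-equation case.
lam = np.array([1.0, 0.5])
beta = np.array([0.5, 0.5])
alpha = np.array([0.5, 0.5])
A = np.diag(lam) + np.diag([1.0], 1)

def f(t, x):
    return A @ x + beta * np.abs(x[0]) ** alpha * np.sign(x[0])

x0 = np.array([1.0, 1.0])               # positive orthant
ts = np.linspace(0.0, 3.0, 50)
sol = solve_ivp(f, (0.0, 3.0), x0, t_eval=ts, rtol=1e-10, atol=1e-12)

# y(t) = e^{At} x0, the solution of the linear comparison system.
Y = np.stack([expm(A * t) @ x0 for t in ts], axis=1)
dominates = np.all(np.abs(sol.y) >= np.abs(Y) * (1 - 1e-6))
```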

Note that the above inequalities make heavy use of the fact that the $\lambda_{i}$ and $\beta_{i}$ are all positive. If you want to generalize the problem to include negative $\lambda_{i}$ or $\beta_{i}$, the structure of $A'$ is the place to start looking for counterexamples.

For now, I will add the assumption that $x_{0}$ lies in either the positive or the negative orthant, until I update the argument. (Note that the positive orthant is forward invariant here, since all coefficients are positive; the same holds for the negative orthant by oddness.)

OK, so let's try this approach so that we can use the comparison theorem in your textbook. Let $U(t) = x_{1}(t) + x_{2}(t)$. Using the same-sign assumption again,

$$\left|\frac{d(x_{1}(t) + x_{2}(t))}{dt}\right| = \lambda_{1}|x_{1}(t)| + \beta_{1}|x_{1}(t)|^{\alpha_{1}} + |x_{2}(t)| + \beta_{2}|x_{1}(t)|^{\alpha_{2}} + \lambda_{2}|x_{2}(t)|$$

$$\geq \lambda_{1}|x_{1}(t)| + |x_{2}(t)| + \lambda_{2}|x_{2}(t)| \geq \min(\lambda_{1},1+\lambda_{2})\, |x_{1}(t)+x_{2}(t)|.$$

So $\left|\frac{dU}{dt}\right| \geq \min(\lambda_{1},1+\lambda_{2})\, |U(t)|$.

Then you can use the comparison theorem from your book to show that:

$|U(t)| \geq e^{\min(\lambda_{1},1+\lambda_{2})\,t}\,|U(0)|$.
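The exponential lower bound on $U(t)$ can likewise be checked numerically for a positive-orthant initial condition (a sketch with arbitrary positive parameters):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Arbitrary positive parameters for the two-equation case.
lam = np.array([1.0, 0.5])
beta = np.array([0.5, 0.5])
alpha = np.array([0.5, 0.5])
A = np.diag(lam) + np.diag([1.0], 1)

def f(t, x):
    return A @ x + beta * np.abs(x[0]) ** alpha * np.sign(x[0])

x0 = np.array([1.0, 1.0])               # positive orthant
sol = solve_ivp(f, (0.0, 3.0), x0, rtol=1e-10, atol=1e-12)

U = sol.y.sum(axis=0)                   # U(t) = x_1(t) + x_2(t)
c = min(lam[0], 1.0 + lam[1])           # rate min(lambda_1, 1 + lambda_2)
lower = np.exp(c * sol.t) * U[0]        # comparison bound e^{ct} U(0)
```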

You can do something similar in the general $n$th-order case.

Sorry folks, when I read the question I just assumed we were in the positive orthant. Here is a quick counterexample.

Choose the parameters so that $\beta_{1}+\lambda_{1}=1$, $\beta_{2} = 1$, and $\lambda_{2}=1$; the alphas can be free.

$$\frac{dx_{1}}{dt} = \lambda_{1}x_{1}(t) + \operatorname{sign}(x_{1}(t))\,\beta_{1}|x_{1}(t)|^{\alpha_{1}} +x_{2}(t)$$

$$\frac{dx_{2}}{dt} = \operatorname{sign}(x_{1}(t))\,|x_{1}(t)|^{\alpha_{2}} + x_{2}(t)$$

$x_{0}=(1,-1)$ is a non-zero equilibrium point.
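A quick check of this counterexample (a sketch; the concrete split $\lambda_1 = \beta_1 = 0.5$ is one arbitrary choice satisfying $\beta_1 + \lambda_1 = 1$):

```python
import numpy as np

# lambda_1 = beta_1 = 0.5 (so beta_1 + lambda_1 = 1), beta_2 = lambda_2 = 1.
lam1, beta1, beta2, lam2 = 0.5, 0.5, 1.0, 1.0

def f(x, a1, a2):
    s = np.sign(x[0])
    return np.array([
        lam1 * x[0] + s * beta1 * abs(x[0]) ** a1 + x[1],
        s * beta2 * abs(x[0]) ** a2 + lam2 * x[1],
    ])

# (1, -1) is an equilibrium for every choice of the alphas.
x_eq = np.array([1.0, -1.0])
eq_checks = [np.allclose(f(x_eq, a1, a2), 0.0)
             for (a1, a2) in [(0.3, 0.7), (0.5, 0.5), (0.9, 0.2)]]
```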

open problem
  • Hey, thanks! This is an interesting idea! Maybe I'm missing something, but how did you obtain the last inequalities? Are you using some vector version of the comparison lemma? – FeedbackLooper Jan 25 '21 at 09:27
  • By comparison lemma I mean Theorem 2.1 here: https://wwwf.imperial.ac.uk/~dturaev/lecture1a.pdf As far as I know it would only work for scalar systems. But It would be nice if it worked in this particular example – FeedbackLooper Jan 25 '21 at 09:31
  • Yeah, it is using a more generalized form of that comparison lemma. If this is homework, there may be a way to restructure the problem to use the lemma from your notes. – open problem Jan 25 '21 at 10:49
  • Thanks! In the references I have, I can only find the scalar version. Do you happen to have a reference for that generalized comparison lemma? I'm looking on my own but I haven't found anything. Maybe it is because I don't know how to search for it, since all google results are mostly for the scalar version. I'll keep looking anyway. Thanks again! – FeedbackLooper Jan 25 '21 at 10:54
  • Well considering that you might only be allowed to use the references in the homework I was going to suggest a change of variables U, V which is a linear combination that eliminates the $x_{2}$ variable from the top equation. Then you can use your lemma, to get the result. – open problem Jan 25 '21 at 10:57
  • It is similar to this concept of linearly bounded here, but with the inequalities reversed: http://math.ecnu.edu.cn/~zmwang/teaching/Advanced_ODE-2014-2015/Slids_of_Advanced_ODE-2014-2015-files/Advanced_ODE-Lecture_5.pdf – open problem Jan 25 '21 at 11:03
  • Thanks! just one more question before awarding you the bounty. In the last thing you wrote, how did you get $| \lambda_1 x_1 + |x_1|^{\alpha_1}\text{sign}(x_1) + x_2 +|x_1|^{\alpha_2}\text{sign}(x_1) + \lambda_2x_2| = | \lambda_1 x_1| + |x_1|^{\alpha_1}|\text{sign}(x_1)| + |x_2| + |x_1|^{\alpha_2}|\text{sign}(x_1)| + |\lambda_2x_2|$.

    In other words, how can you distribute the absolute value?

    – FeedbackLooper Jan 25 '21 at 11:53
  • The sign functions as well as all the lambdas being positive make it so that all the terms have the same sign. If sign(a) = sign(b) then |a+b|=|a| + |b| – open problem Jan 25 '21 at 11:58
  • are we assuming $x_1$ and $x_2$ have the same sign? – FeedbackLooper Jan 25 '21 at 11:58
  • Oh thanks for pointing that out, I will delete it while I fix the issue. – open problem Jan 25 '21 at 12:00