4

Consider a system $x' = A(t)x$ and suppose there are positive values $k, \beta$ such that a positive fundamental matrix $X(t)$ satisfies

$\|X(t)\| \leq k$ for $t \geq \beta$, and

$$ \liminf_{t \rightarrow \infty} \int^t_\beta \operatorname{tr}(A(s))\,ds > - \infty.$$

Show that:

a) $X^{-1}(t)$ is bounded on $[\beta,\infty)$.

b) No solution of the system approaches the zero solution as $t \rightarrow \infty$.

Croos
  • 1,889
  • Not sure what you mean by a "positive fundamental matrix". Care to explicate? – Robert Lewis May 08 '15 at 18:15
  • Maybe you mean to say $\det X(t) > 0$; is that right? Cheers! – Robert Lewis May 08 '15 at 18:17
  • Being a fundamental matrix means that the matrix is a solution and $\det X(t) \neq 0$ – Croos May 08 '15 at 19:37
  • Yes, I understand what "fundamental matrix" means; it's the "positive" I'm curious about! Cheers! – Robert Lewis May 08 '15 at 19:53
  • Usually ODEs like that have $x(t)$ a vector, not a matrix. Anyway, perhaps the problem wants you to solve the ODE using a matrix exponential, and then apply the det-matrix-trace formula here: http://en.wikipedia.org/wiki/Matrix_exponential – Michael May 11 '15 at 21:07
  • @Michael: but for time-dependent $A(t)$, we cannot in general write $A(t) = e^{B(t)}$. Cheers! – Robert Lewis May 11 '15 at 21:55
  • @RobertLewis , I believe the general solution for $t \geq \beta$ is $x(t) = e^{\int_{\beta}^t A(s)ds} x(\beta)$. – Michael May 12 '15 at 01:56
  • Note also that if $x(t)$ is a solution then $x(t)M$ is also a solution for any matrix $M$ of appropriate size. If $x(\beta)$ is invertible then we can get any desired initial condition $y(\beta)$ by using $M = x(\beta)^{-1}y(\beta)$. – Michael May 12 '15 at 02:11
  • @Michael: $x(t) = e^{\int_\beta^t A(s)ds} x(\beta)$ only works when $[A(t), \int_\beta^t A(s)ds] = 0$. For a detailed explanation, see my answer to http://math.stackexchange.com/questions/718613/proving-nonhomogeneous-ode-is-bounded/732147#732147 – Robert Lewis May 12 '15 at 05:21
  • @RobertLewis , well I never liked matrix exponentials and have never actually used them. I do not understand what you mean by $[A(t), \int_{\beta}^t A(s)ds] = 0$, to me that looks like both entries in a matrix must be 0, so $A(t)=0$. Nevertheless I differentiated by hand an $A(t)^2$ matrix and found indeed the result is not always $2A(t)A'(t)$ nor $2A'(t)A(t)$. So likely you are right that $e^{\int_{\beta}^t A(s)ds} x(\beta)$ does not always work. It does seem to work when $A(t) = g(t)A$ for some scalar function $g(t)$, which does not require $A(t)$ to be identically zero. – Michael May 12 '15 at 08:23
  • For example if one says that $[x,y] =0$ that would usually be interpreted as $x=0$ and $y=0$. I have no idea what else it could mean. – Michael May 12 '15 at 08:23
  • @Michael: by $[C, D]$ I mean the commutator or Lie bracket of the two matrices $C, D$: $[C, D] = CD - DC$! Saying $[C, D] = 0$ is another way of saying $C$ and $D$ commute, $CD = DC$! Cheeeeers! – Robert Lewis May 12 '15 at 17:47
  • @Michael If you’re interpreting that as a matrix, I believe it would be written without a comma, as $[x ; y]$. – Divide1918 Dec 07 '23 at 17:57

3 Answers

3

I'll present a solution to this problem in several steps, and try to give links/citations/references for any specialized outside results I may use.

I assume throughout that $A(t)$ is a continuous, real matrix function of $t$, and that $\text{size}(A) = n$.

Some Notation, Definitions, etc.: The following solution applies to real differential equations of the form

$x' = A(t)x; \tag{1}$

that is, we take $A(t) \in M_n(\Bbb R)$ for all $t$ in the domain of definition of $A(t)$, and correspondingly $x(t) \in \Bbb R^n$; we will also assume that the notation $\langle u, v \rangle$ refers to the standard inner product on $\Bbb R^n$; thus, $\langle u, v \rangle$ denotes the Euclidean inner product $\sum_1^n u_i v_i$; here the $u_i, v_i \in \Bbb R$ are as usual the components of $u, v$; we furthermore denote the standard basis on $\Bbb R^n$ by $\mathbf e_i$; thus

$\mathbf e_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \tag{2}$

$\mathbf e_2 = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \tag{3}$

and so forth, on down to

$\mathbf e_n = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}; \tag{4}$

note that

$\Vert \mathbf e_k \Vert = 1, 1 \le k \le n. \tag{5}$

Finally, we will take $\Vert T \Vert$ to be the standard operator norm for $T \in M_n(\Bbb R)$; thus, among other equivalent definitions, we have

$\Vert T \Vert = \sup_{\Vert u \Vert = 1} \Vert Tu \Vert, \tag{6}$

where of course

$\Vert u \Vert = \sqrt{\langle u, u \rangle} \tag{7}$

for $u \in \Bbb R^n$.

These preliminary remarks being made, we continue as follows:

Before proceeding with the analytics of the solution per se, we will develop a result which expresses a bound for $\Vert T^{-1} \Vert$, for any invertible $n \times n$ real matrix $T$, in terms of a bound for $T$ and a positive lower bound for $\vert \det(T) \vert$, assuming that $T$ is non-singular. The necessary result will be presented as a brief sequence of lemmas:

Lemma 1: Let $T \in M_n(\Bbb R)$; write

$T = [T_{ij}], 1 \le i, j \le n; \tag{8}$

then if $\Vert T \Vert \le M$, we have

$\vert T_{ij} \vert \le M \tag{9}$

for every entry $T_{ij}$ of $T$.

Proof: Suppose to the contrary that for some $i, j$ we have $\vert T_{ij} \vert > M$. Let $T_k$ denote the $k$-th column of $T$; then

$T \mathbf e_j = T_j = \begin{pmatrix} T_{1j} \\T_{2j} \\ \vdots \\ T_{ij} \\ \vdots \\ T_{nj} \end{pmatrix}; \tag{10}$

from (10) we have

$\Vert T \mathbf e_j \Vert^2 = \langle T\mathbf e_j, T\mathbf e_j \rangle = \sum_{l =1}^n \vert T_{lj} \vert^2 = \sum_{l =1, l \ne i}^n \vert T_{lj} \vert^2 + \vert T_{ij} \vert^2 > M^2, \tag{11}$

whence

$M < \Vert T\mathbf e_j \Vert; \tag{12}$

but then

$M < \Vert T \mathbf e_j \Vert \le M \Vert \mathbf e_j \Vert = M; \tag{13}$

this contradiction shows we must indeed have $\vert T_{ij} \vert \le M$, $1 \le i, j \le n$. QED.

Lemma 2: As in Lemma 1, let $T \in M_n(\Bbb R)$; setting $T = [T_{ij}]$, suppose $\vert T_{ij} \vert \le B$, $0 < B \in \Bbb R$, $1 \le i, j \le n$. Then $\Vert T \Vert \le n^{3/2}B$.

Proof: Pick $u \in \Bbb R^n$, $\Vert u \Vert = 1$; writing

$u = \sum_1^n u_i \mathbf e_i, \tag{14}$

we have

$\Vert Tu \Vert^2 = \vert \Vert Tu \Vert^2 \vert = \vert \langle Tu, Tu \rangle \vert = \vert \langle T(\sum_1^n u_i \mathbf e_i), T(\sum_1^n u_j \mathbf e_j) \rangle \vert$ $= \vert \langle \sum_1^n u_i T\mathbf e_i, \sum_1^n u_j T \mathbf e_j \rangle \vert = \vert \sum_{i, j = 1}^n u_i u_j \langle T\mathbf e_i, T\mathbf e_j \rangle \vert$ $\le \sum_{i, j = 1}^n \vert u_i \vert \vert u_j \vert \vert \langle T\mathbf e_i, T\mathbf e_j \rangle \vert; \tag{15}$

as in Lemma 1, we note that $T\mathbf e_k$ is the $k$-th column of $T$; thus

$\langle T\mathbf e_i, T\mathbf e_j \rangle = \sum_1^n T_{li} T_{lj}; \tag{16}$

also,

$\vert \sum_{l = 1}^n T_{li} T_{lj} \vert \le \sum_{l = 1}^n \vert T_{li} T_{lj} \vert \le \sum_{l = 1}^n B^2 = nB^2; \tag{17}$

combining (16) and (17) yields

$\vert \langle T \mathbf e_i, T\mathbf e_j \rangle \vert \le nB^2; \tag{18}$

thus (15) becomes

$\Vert Tu \Vert^2 \le \sum_{i,j = 1}^n \vert u_i \vert \vert u_j \vert \vert \langle T\mathbf e_i, T\mathbf e_j \rangle \vert \le \sum_{i, j = 1}^n nB^2 \vert u_i \vert \vert u_j \vert \le \sum_{i, j = 1}^n nB^2 = n^3 B^2, \tag{19}$

since $\vert u_i \vert, \vert u_j \vert \le 1$ by virtue of the fact that $\sum_1^n u_i^2 = 1$; so

$\Vert Tu \Vert \le n^{3/2} B; \tag{20}$

since (20) holds for any unit vector $u$, we see that

$\Vert T \Vert \le n^{3/2} B \tag{21}$

as well. QED.
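
As an aside that is not part of the formal argument, Lemmas 1 and 2 are easy to sanity-check numerically; the following is a minimal NumPy sketch of my own (the seed, loop count, and sizes are arbitrary), using the fact that `np.linalg.norm(T, ord=2)` computes exactly the operator norm (6):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n, B = 5, 3.0

for _ in range(1000):
    # a random T whose every entry is bounded by B, as in Lemma 2
    T = rng.uniform(-B, B, size=(n, n))
    opnorm = np.linalg.norm(T, ord=2)   # the operator norm (6)
    assert opnorm <= n**1.5 * B         # Lemma 2: ||T|| <= n^{3/2} B
    # and, in the direction of Lemma 1, entries are bounded by the operator norm
    assert np.abs(T).max() <= opnorm + 1e-12
```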

Lemma 3: Let $T = [T_{ij}]$ be as in Lemma 2, that is, with $\vert T_{ij} \vert \le B$. Then

$\vert \det(T) \vert \le n! B^n. \tag{22}$

Proof: We simply write out the determinant in fully expanded form,

$\det(T) = \sum_{\sigma \in S_n} \text{sign}(\sigma) \prod_1^n T_{i \sigma(i)}, \tag{23}$

where $\sigma \in S_n$, the symmetric group on $n$ letters, presented in the form

$\sigma = \begin{pmatrix} 1 & 2 & \ldots & n \\ \sigma(1) & \sigma(2) & \ldots & \sigma(n) \end{pmatrix}, \tag{24}$

and $\text{sign}(\sigma)$ is $\pm 1$ according to whether $\sigma$ is an even or odd permutation. Taking absolute values of (23) yields

$\vert \det(T) \vert = \vert \sum_{\sigma \in S_n} \text{sign}(\sigma) \prod_1^n T_{i \sigma(i)} \vert \le \sum_{\sigma \in S_n} \vert \prod_1^n T_{i \sigma(i)} \vert$ $= \sum_{\sigma \in S_n} \prod_1^n \vert T_{i \sigma(i)} \vert \le \sum_{\sigma \in S_n} B^n = n! B^n; \tag{25}$

thus,

$\vert \det(T) \vert \le n! B^n, \tag{26}$

as claimed. QED.
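
The determinant bound (26) admits the same kind of empirical check (again my own illustrative sketch, not part of the proof); incidentally, Hadamard's inequality gives the sharper bound $\vert \det(T) \vert \le n^{n/2}B^n$, but the cruder factorial bound suffices for everything below:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(seed=1)
n, B = 5, 3.0

for _ in range(1000):
    T = rng.uniform(-B, B, size=(n, n))                   # entries bounded by B
    assert abs(np.linalg.det(T)) <= factorial(n) * B**n   # Lemma 3: (26)
```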

We are working towards preparing an estimate of $\Vert T^{-1} \Vert$ based upon Cramer's rule. We recall that Cramer's rule expresses the inverse of $T$, when it exists, i.e. when $\det(T) \ne 0$, as

$T^{-1} = (\det (T))^{-1} \text{adj}(T), \tag{27}$

where $\text{adj}(T)$, the so-called adjugate matrix of $T$, is defined as the transpose of $\text{cof}(T)$, the cofactor matrix of $T$, which in turn is defined in terms of the minors of $T$ as follows: we recall that the $k, l$ minor of $T$, $m_{kl}(T)$, is the determinant of the submatrix of $T$ resulting from deletion of row $k$ and column $l$ from $T$, and that the $k, l$ cofactor is $(-1)^{k + l}m_{kl}(T)$; then

$\text{cof}(T) = [(-1)^{k + l}m_{kl}(T)]; \tag{28}$

thus

$\text{adj}(T) = (\text{cof}(T))^T. \tag{29}$

Based upon what we have done so far, it is a relatively straightforward matter to present a bound for $\text{adj}(T)$:

Lemma 4: Again, as in Lemma 3, assuming $T = [T_{ij}]$ with $\vert T_{ij} \vert \le B$,

$\Vert \text{adj}(T) \Vert \le n!\sqrt{n} B^{n - 1}. \tag{30}$

Proof: If we apply Lemma 3 to the $m_{kl}(T)$, it readily follows that

$\vert m_{kl}(T) \vert \le (n - 1)!B^{n - 1}; \tag{31}$

thus every entry $(\text{cof}(T))_{kl}$ of the cofactor matrix of $T$ also satisfies the same bound:

$\vert (\text{cof}(T))_{kl} \vert \le (n - 1)!B^{n -1}; \tag{32}$

by (29), the same is true for the entries of $\text{adj}(T)$; thus by Lemma 2 we have

$\Vert \text{adj}(T) \Vert \le n^{3/2}(n - 1)! B^{n - 1} = n(n - 1)! \sqrt{n} B^{n -1} = n!\sqrt{n} B^{n - 1}. \tag{33}$

QED.

Exploiting Lemmas 1-4, we are finally in a position to present an estimate for $\Vert T^{-1} \Vert$ based upon $\Vert T \Vert$ and $\det(T)$:

Proposition: Suppose $\Vert T \Vert \le M$ and $\vert \det(T) \vert \ge m > 0$; then

$\Vert T^{-1} \Vert \le \dfrac{n! \sqrt{n} M^{n - 1}}{m}. \tag{34}$

Proof: By Lemma 1, $\vert T_{ij} \vert \le M$ for $1 \le i, j \le n$; thus by Lemma 4,

$\Vert \text{adj}(T) \Vert \le n!\sqrt{n}M^{n - 1}; \tag{35}$

since

$\vert \det(T) \vert \ge m, \tag{36}$

we have

$\vert \det(T) \vert^{-1} \le \dfrac{1}{m}; \tag{37}$

it now follows from (27) that

$\Vert T^{-1} \Vert = \Vert (\det (T))^{-1} \text{adj}(T) \Vert = \vert \det(T) \vert^{-1} \Vert \text{adj}(T) \Vert$ $\le \dfrac{1}{m} n!\sqrt{n}M^{n - 1} = \dfrac{n!\sqrt{n} M^{n - 1}}{m}. \tag{38}$

QED.
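
One more illustrative aside of my own before returning to the main line: the Proposition (together with Lemma 4) can be verified empirically. For invertible $T$, (27) rearranges to $\text{adj}(T) = \det(T)\,T^{-1}$, which is how the sketch below computes the adjugate; taking $M = \Vert T \Vert$ exactly makes the hypothesis $\Vert T \Vert \le M$ hold with equality:

```python
import numpy as np
from math import factorial, sqrt

rng = np.random.default_rng(seed=2)
n = 4

for _ in range(1000):
    T = rng.uniform(-1.0, 1.0, size=(n, n))
    m = abs(np.linalg.det(T))
    if m < 1e-3:                               # skip nearly singular draws
        continue
    M = np.linalg.norm(T, ord=2)               # with this M, ||T|| <= M holds exactly
    adj = np.linalg.det(T) * np.linalg.inv(T)  # adj(T), rearranging (27)
    assert np.linalg.norm(adj, ord=2) <= factorial(n) * sqrt(n) * M**(n - 1)                   # (35)
    assert np.linalg.norm(np.linalg.inv(T), ord=2) <= factorial(n) * sqrt(n) * M**(n - 1) / m  # (34)
```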

Main Line: Return to Analytics: Having developed the bound (34) for $\Vert T^{-1} \Vert$, we return to analysis; we are now on the downhill slope: since the fundamental matrix $X(t)$ satisfies

$X'(t) = A(t)X(t), \tag{39}$

and we are given

$ \liminf_{t \rightarrow \infty} \int^t_\beta \text{tr}(A(s))ds > - \infty, \tag{40}$

we are in a position to exploit the well-known fact that

$\dfrac{d\det (X(t))}{dt} = \text{tr}(A(t)) (\det X(t)); \tag{41}$

(41) is widely used in the theory of linear systems of the form (1), (40) and so forth; see for example the book Ordinary Differential Equations, by Jack K. Hale, 2009 Dover Publications, Inc., ISBN-13 978-0-486-47211-6, chapter III, pp. 78-83; see also the linked pages cited by Michael, http://en.wikipedia.org/wiki/Matrix_exponential, and Aprilius, http://en.wikipedia.org/wiki/Liouville%27s_formula. The unique solution to (41) taking the value $\det(X(\beta))$ at $t = \beta$ is

$\det(X(t)) = \det(X(\beta))e^{\int_\beta^t \operatorname{tr}(A(s)) ds}, \tag{42}$

from which it follows that, since $e^{\int_\beta^t \operatorname{tr}(A(s)) ds} > 0$ for all $t \ge \beta$,

$\vert \det(X(t)) \vert = \vert \det(X(\beta)) \vert e^{\int_\beta^t \operatorname{tr}(A(s)) ds}. \tag{43}$
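
For readers who want to see (42) and (43) in action, here is a small numerical experiment of my own (the particular $A(t)$ below is an arbitrary choice, not taken from the problem): it integrates $X' = A(t)X$ with $X(\beta) = I$ and compares $\det X(t)$ against the closed form (42); the two printed values agree to solver tolerance.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

beta, t_end = 0.0, 5.0

def A(t):
    # an arbitrary continuous, time-dependent coefficient matrix (my choice)
    return np.array([[np.sin(t),  1.0],
                     [-1.0,       np.cos(t)]])

def rhs(t, x_flat):
    X = x_flat.reshape(2, 2)
    return (A(t) @ X).ravel()

# integrate X' = A(t) X with X(beta) = I, column by column
sol = solve_ivp(rhs, (beta, t_end), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
X_end = sol.y[:, -1].reshape(2, 2)

tr_int, _ = quad(lambda s: np.trace(A(s)), beta, t_end)  # int_beta^t tr(A(s)) ds
print(np.linalg.det(X_end))   # det X(t) from the integrated system
print(np.exp(tr_int))         # det X(beta) * exp(...), i.e. formula (42)
```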

Now consider the hypothesis (40); this may be re-stated (see this wikipedia entry on lim inf, etc.) as

$\lim_{t \to \infty} \inf \{\int_\beta^\tau \operatorname{tr}A(s) ds \ \mid \tau \ge t \} > - \infty; \tag{44}$

for $t_1 \le t_2$ we have the set inclusion

$\{\int_\beta^\tau \operatorname{tr}A(s) ds \ \mid \tau \ge t_2 \} \subset \{\int_\beta^\tau \operatorname{tr}A(s) ds \ \mid \tau \ge t_1 \}; \tag{45}$

define the function $L(t)$ by

$L(t) = \inf \{\int_\beta^\tau \operatorname{tr}A(s) ds \ \mid \tau \ge t \}; \tag{46}$

from (45), we see that $L(t)$ is monotonically increasing, that is

$L(t_1) \le L(t_2) \tag{47}$

for $t_1 \le t_2$. (This follows from a basic property of sets of real numbers: if $A \subset B \subset \Bbb R$, then $\inf(B) \le \inf(A)$.) From (40), (44), we have for some $t'$ sufficiently large $L(t') > - \infty$; that is, there exists $\lambda \in \Bbb R$ such that

$L(t') = \lambda; \tag{48}$

now (47) implies

$L(t) \ge \lambda \tag{49}$

for $t \ge t'$; for such $t$, by (46),

$\int_\beta^t \operatorname{tr}(A(s))ds \ge \lambda, \tag{50}$

whence

$e^{\int_\beta^t \operatorname{tr}(A(s))ds} \ge e^\lambda; \tag{51}$

then (43) yields

$\vert \det (X(t)) \vert \ge \vert \det (X(\beta)) \vert e^\lambda \tag{52}$

for $t \ge t'$; for $t \in [\beta, t']$, $\int_\beta^t \operatorname{tr}(A(s))ds$, being a continuous function on a compact set, is bounded below by some $\mu \in \Bbb R$, i.e. we have

$\int_\beta^t \operatorname{tr}(A(s))ds \ge \mu; \tag{53}$

thus

$e^{\int_\beta^t \operatorname{tr}(A(s))ds} \ge e^\mu \tag{54}$

on $[\beta, t']$, so that

$\vert \det(X(t)) \vert \ge \vert \det(X(\beta)) \vert e^\mu \tag{55}$

for $\beta \le t \le t'$. Combining (52) with (55) we may write

$\vert \det (X(t)) \vert \ge \vert \det (X(\beta)) \vert e^{\min(\lambda, \mu)} > 0 \tag{56}$

for $t \in [\beta, \infty)$.

We are now in a position to apply the above proposition with $\Vert X(t) \Vert \le k$ and $\vert \det(X(t)) \vert \ge \vert \det(X(\beta)) \vert e^{\min(\lambda, \mu)}$ on $[\beta, \infty)$. We thus immediately see that

$\Vert X^{-1}(t) \Vert \le \dfrac{n! \sqrt{n} k^{n - 1}}{\vert \det(X(\beta)) \vert e^{\min(\lambda, \mu)}}; \tag{57}$

we have now shown that $X^{-1}(t)$ is globally bounded on $[\beta, \infty)$, completing the present answer to part (a) of the question our OP Croos asked.

As for part (b), suppose there were a solution $x(t) \ne 0$ of (1) with $x(t) \to 0$ as $t \to \infty$. We may normalize the fundamental matrix so that $X(0) = I$; replacing $X(t)$ by $X(t)C$ for a constant invertible $C$ changes neither the boundedness of $X(t)$ nor that of $X^{-1}(t)$. Then since

$x(t) = X(t) x(0), \tag{58}$

we have

$x(0) = X^{-1}(t)x(t); \tag{59}$

since we have proved that $X^{-1}(t)$ is bounded on $[\beta, \infty)$, and it is clearly bounded on $[0, \beta]$, it is in fact globally bounded on $[0, \infty)$; that is, there is $0 < M \in \Bbb R$ with

$\Vert X^{-1}(t) \Vert \le M \tag{60}$

for $t \in [0, \infty)$. Then

$\Vert x(0) \Vert = \Vert X^{-1}(t) x(t) \Vert \le \Vert X^{-1}(t) \Vert \Vert x(t) \Vert \le M\Vert x(t) \Vert; \tag{61}$

since $x(t) \ne 0$, we must have $x(0) \ne 0$ (this follows from uniqueness of solutions); thus

$\Vert x(0) \Vert \ge \delta \tag{62}$

for some positive $\delta \in \Bbb R$; but taking $t$ sufficiently large, we have

$\Vert x(t) \Vert < M^{-1} \delta; \tag{63}$

then

$\delta \le \Vert x(0) \Vert \le M \Vert x(t) \Vert < M M^{-1} \delta = \delta; \tag{64}$

(64) is a contradiction which in turn precludes $x(t) \to 0$; we have thus established an affirmative result to part (b).
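
To make the conclusion of part (b) concrete, consider a toy example of my own: the planar rotation system with $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ has $\operatorname{tr}(A) = 0$, so hypothesis (40) holds trivially, and its fundamental matrix $X(t) = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}$ satisfies $\Vert X(t) \Vert = 1$. As the result predicts, every nonzero solution keeps a constant norm and never tends to $0$; a few lines of NumPy confirm it:

```python
import numpy as np

def X(t):
    # fundamental matrix of the rotation system, normalized so X(0) = I
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

x0 = np.array([0.3, -0.7])
for t in (0.0, 10.0, 100.0, 1000.0):
    # the norm is constant in t: this solution never approaches zero
    print(t, np.linalg.norm(X(t) @ x0))
```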

Robert Lewis
  • 72,871
2

Here is a partial answer. Assume $X(t)$ and $A(t)$ are $n \times n$ matrices. Define: $$ c = \liminf_{t\rightarrow\infty} \int_\beta^t \operatorname{tr}(A(s))\,ds $$ The problem tells us that $c > -\infty$.

1) Note that since $X(t)$ is a solution to $X'(t) = A(t)X(t)$, then for any $n\times n$ matrix $M$ we also get $Y(t) = X(t)M$ is another solution. Let $y_{\beta}$ be any desired initial condition for $Y(\beta)$. If $X(\beta)$ is invertible, then we can choose $M = X(\beta)^{-1}y_{\beta}$, so that $Y(t)=X(t)M$ is a solution with the desired initial condition $Y(\beta)=y_{\beta}$.

Thus, limiting behavior about $X(t)$ can often be translated to limiting behavior about other solutions $Y(t)$. If $Y(\beta)$ is invertible and $X(t)$ does not approach the zero solution, then $Y(t)$ also does not approach the zero solution. (If $Y(\beta)$ is not invertible then we can have $Y(t)=0$ for all $t$ as a valid solution).

2) Suppose $A(t)$ is a matrix of the form $A(t) = g(t)A$ for some (constant) matrix $A$ and some scalar-valued integrable function $g(t)$. Then a solution to the ODE is (for $t \geq \beta$):
$$ X(t) = e^{\int_{\beta}^t A(s)\,ds}\,X(\beta) \qquad \text{(Equation 1)} $$ There are other types of $A(t)$ matrices for which the above solution works, but not all (as pointed out by the helpful comments of Robert Lewis above).
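
A numerical sanity check of (Equation 1) in this commuting case, offered as my own illustrative sketch (the particular $g$ and $A$ below are arbitrary choices, and SciPy's `expm` supplies the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad

beta, t_end = 0.0, 4.0
A = np.array([[0.0, 1.0], [-2.0, -1.0]])   # arbitrary constant matrix
g = np.sin                                  # arbitrary scalar factor: A(t) = g(t) A

def rhs(t, x_flat):
    return (g(t) * A @ x_flat.reshape(2, 2)).ravel()

# integrate X' = g(t) A X with X(beta) = I
sol = solve_ivp(rhs, (beta, t_end), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)

G, _ = quad(g, beta, t_end)                 # int_beta^t g(s) ds
X_formula = expm(G * A)                     # (Equation 1) with X(beta) = I
print(np.allclose(sol.y[:, -1].reshape(2, 2), X_formula, atol=1e-6))  # expect True
```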

Now, the following link gives Jacobi's formula for any square matrix $B$: $\det(e^B) = e^{\operatorname{tr}(B)}$. http://en.wikipedia.org/wiki/Matrix_exponential

Applying this to (Equation 1) gives: $$ \det(X(t)) = e^{\int_\beta^t \operatorname{tr}(A(s))\,ds}\det(X(\beta))$$ Taking absolute values of both sides gives: $$ |\det(X(t))| = e^{\int_{\beta}^t \operatorname{tr}(A(s))\,ds} |\det(X(\beta))| $$ Taking a $\liminf$ of both sides (which passes through the exponential, since $e^x$ is continuous and increasing) gives: $$ \liminf_{t\rightarrow\infty} |\det(X(t))| = e^{c} |\det(X(\beta))| > 0 $$ where the final inequality holds because $c > -\infty$ and we are told that $\det(X(\beta))\neq 0$. Since the absolute value of the determinant is bounded away from $0$, and since the determinant is a polynomial function of the entries of the matrix, it follows that the sum of squares of the entries of $X(t)$ must be bounded away from zero.

Michael
  • 26,378
2

A quick comment on @Michael's answer: I guess we can apply Liouville's formula here, which only requires $\operatorname{tr}(A(t))$ to be a continuous function. Applying Liouville's formula gives

$$\det(X(t))=\det(X(\beta))\,e^{\int_{\beta}^t \operatorname{tr}(A(s))\,ds},$$

and the rest is given in @Michael's answer. :)

Aprilius
  • 246