In this link we find a way of computing the top Lyapunov exponent for a chain of stochastic matrices, where $$\lambda_1=\lim_{n\to\infty}\frac{1}{n}\log ||A_nA_{n-1}\cdots A_1||.$$ This is relatively easy to approximate, as one can plot the function $$f(n)=\frac{1}{n}\log ||A_nA_{n-1}\cdots A_1||$$ for increasing values of $n$. I was wondering whether a similar approach can be taken to determine the other Lyapunov exponents $\lambda_2,\lambda_3,\dots$ of smaller magnitude than $\lambda_1$. If not, what other approach should be taken? I am trying to implement this in MATLAB to gain some numerical insight, and so far I have computed $\lambda_1$ with the approach described above.
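For concreteness, a minimal MATLAB sketch of this approach for $\lambda_1$ might look like the following (the random row-stochastic matrices are only a stand-in for the actual chain):

```matlab
% Minimal sketch: estimate lambda_1 = lim (1/n) log ||A_n ... A_1||.
% Random row-stochastic matrices stand in for the actual chain.
d = 5;                    % dimension of the matrices
N = 2000;                 % number of factors in the product
P = eye(d);               % running product A_n * ... * A_1
f = zeros(N,1);           % f(n) = (1/n) log ||A_n ... A_1||
for n = 1:N
    A = rand(d);
    A = A ./ sum(A,2);    % normalize rows: A is row-stochastic
    P = A * P;            % left-multiply by the new factor
    f(n) = log(norm(P)) / n;
end
plot(1:N, f), xlabel('n'), ylabel('f(n)')
lambda1_estimate = f(N)
```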
-
The idea is to multiply a set of independent vectors by the matrices and to look at the lengths of the Gram-Schmidt orthogonalized vectors at each step. – user619894 Jul 16 '22 at 13:59
1 Answer
Following for example this, it is known that $$\lambda_1 +\cdots +\lambda_p = \lim_{N\to\infty} {1\over N} \log{\det_p(w^N_1\times\cdots\times w^N_p)\over \det_p(u_1\times\cdots\times u_p)}$$ for generic vectors $u_1,\dots,u_p$ and $w^N_k =\prod_{i=1}^N A_i\, u_k$, where $\det_p$ denotes the $p$-dimensional volume of the parallelepiped spanned by the vectors. However, this is numerically unstable, so the standard method is to replace this expression by a telescopic one, $$\log\left[{\det_p(w^N_1\times\cdots\times w^N_p) \over \det_p (w^{N-1}_1\times\cdots\times w^{N-1}_p)}\cdots {\det_p(w^1_1\times\cdots\times w^1_p) \over \det_p (u_1\times\cdots\times u_p)}\right],$$ and to replace each $w^M_k$ by its Gram-Schmidt orthogonalized version. This allows us to set up a stable iteration scheme: first compute $$x^{M+1}_k = A_{M+1}v^{M}_k,$$ then Gram-Schmidt orthogonalize the $x$'s, calling the results $g^{M+1}_k$, and orthonormalize $v^{M+1}_k={g^{M+1}_k\over||g^{M+1}_k||}$. Since the $v^M_k$ have unit length, the per-step growth of the $k$-th direction is $||g^{M+1}_k||$, and the average $${1\over N}\sum_{M=1}^N\log ||g_k^{M}||$$ converges to $\lambda_k$ as $N\to\infty$. For more details, consult this book.
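For what it's worth, a minimal MATLAB sketch of this iteration might look as follows; random row-stochastic matrices stand in for the chain $A_1, A_2, \dots$, and the built-in `qr` factorization performs the Gram-Schmidt step:

```matlab
% Sketch: estimate all Lyapunov exponents via the Gram-Schmidt / QR iteration.
% Random row-stochastic matrices stand in for the chain A_1, A_2, ...
d = 5;                     % dimension (number of exponents)
N = 5000;                  % number of iterations
V = eye(d);                % orthonormal start vectors v^0_k (columns)
S = zeros(d,1);            % running sums of log ||g_k^M||
for M = 1:N
    A = rand(d);
    A = A ./ sum(A,2);     % row-stochastic A_M
    X = A * V;             % x^M_k = A_M v^{M-1}_k
    [Q,R] = qr(X);         % QR factorization = Gram-Schmidt step
    s = sign(diag(R));     % fix signs so diag(R) > 0 (qr may flip them)
    s(s == 0) = 1;
    Q = Q .* s';           % Q * diag(s)
    R = s .* R;            % diag(s) * R
    S = S + log(diag(R));  % log ||g_k^M|| = log R(k,k)
    V = Q;                 % orthonormalized v^M_k
end
lyap = S / N               % estimates of lambda_1 >= lambda_2 >= ... >= lambda_d
```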