
Show that the recurrence relation $$x_n=2x_{n-1}+x_{n-2}$$ has a general solution of the form $$x_n=A\lambda^n+B\mu^n.$$ Is the recurrence relation a good way to compute $x_n$ from arbitrary initial values $x_0$ and $x_1$?

Proof: Suppose we are given the following recurrence relation $$x_n=2x_{n-1}+x_{n-2}.$$ We want to show that this relation has a general solution of the form $$x_n=A\lambda^n+B\mu^n.$$

The corresponding characteristic polynomial to the recurrence relation is $$p(\lambda)=\lambda^2-2\lambda-1=0 \iff p(\lambda)=\left(\lambda-(1+\sqrt{2})\right)\left(\lambda-(1-\sqrt{2})\right)=0.$$ Hence the roots of the characteristic polynomial are $\lambda=1+\sqrt{2}$ and $\lambda=1-\sqrt{2}$. Hence the general solution for the relation is $$x_n=A(1+\sqrt{2})^n+B(1-\sqrt{2})^n,$$ where $\lambda=1+\sqrt{2}$ and $\mu=1-\sqrt{2}$.
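As a quick sanity check of the derivation above, one can determine $A$ and $B$ from sample initial values (here $x_0 = x_1 = 1$, an arbitrary choice) by solving $A + B = x_0$ and $A\lambda + B\mu = x_1$, then confirm that the closed form reproduces the values generated by iterating the recurrence:

```python
import math

# Roots of the characteristic polynomial t^2 - 2t - 1 = 0.
lam, mu = 1 + math.sqrt(2), 1 - math.sqrt(2)

# Sample initial values (any choice works).
x0, x1 = 1, 1

# Solve A + B = x_0 and A*lam + B*mu = x_1 for the coefficients.
A = (x1 - mu * x0) / (lam - mu)
B = (x0 * lam - x1) / (lam - mu)

# Closed form, rounded to the nearest integer to absorb floating-point error.
closed = [round(A * lam**n + B * mu**n) for n in range(25)]

# Direct iteration of x_n = 2 x_{n-1} + x_{n-2}.
direct = [x0, x1]
for n in range(2, 25):
    direct.append(2 * direct[-1] + direct[-2])

# The two lists agree for these n.
```

(The range stops at $n = 25$ because for much larger $n$ the rounding step can no longer absorb the relative error of double-precision arithmetic.)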

Where I am stuck is determining whether or not this recurrence relation is a "good" way to compute $x_n$.

  • Probably has something to do with how long it will take. I'd say it isn't, in comparison to the closed form solution, because we can compute powers faster than we can iterate such a recursion. – Ian Sep 09 '17 at 16:42
  • Not sure I follow. If you have the closed formula, why would you want the recursion? Especially in a case such as your example with $1\pm \sqrt 2$ where one term goes to $0$ for large $n$ and can therefore be ignored (just taking the nearest integer to the first term). – lulu Sep 09 '17 at 16:42
  • After googling, I found an answer which states: "This recurrence relation is not a good way to compute $x_n$ since $1+\sqrt{2} > 1$." But I don't understand the logic behind it. – Username Unknown Sep 09 '17 at 16:46
  • That may have something to do with error accumulation in computer arithmetic, when applied to a recursion that is diverging. But that's a bit of a weird concern since this can be implemented in integer arithmetic, especially if $x_0,x_1$ are integers or don't have too many digits in their fractional parts. – Ian Sep 09 '17 at 16:50
  • I don't believe that I have covered that yet, or at least not by that name. – Username Unknown Sep 09 '17 at 16:52
  • Then that's probably not the answer to your question. I'd say it's more likely related to the time taken to run the recursion vs. the time taken to evaluate the closed form solution. – Ian Sep 09 '17 at 16:53
  • You might take a look at my solution to a similar problem: https://math.stackexchange.com/questions/2422976/can-the-recurrence-relation-provide-a-stable-means-for-computing-r-n-in-this-c/2423059#2423059 – awkward Sep 09 '17 at 20:11
  • When you ask if this is a good way to do the computation, you should specify what kind of computation you are doing. Mathematically it is a fine way to compute the result. In a computer, roundoff can kill you. The answers are different. – Ross Millikan Aug 30 '19 at 14:21

2 Answers


Here is what I believe is the best way to compute $x_n$. Note that $$ \begin{bmatrix}2&1\\1&0\end{bmatrix}\begin{bmatrix}x_n\\x_{n-1}\end{bmatrix}=\begin{bmatrix}x_{n+1}\\x_{n}\end{bmatrix} $$ Extending this, $$ \begin{bmatrix}2&1\\1&0\end{bmatrix}^{n-1}\begin{bmatrix}x_1\\x_{0}\end{bmatrix}=\begin{bmatrix}x_{n}\\x_{n-1}\end{bmatrix} $$ This shows that once we have computed $\begin{bmatrix}2&1\\1&0\end{bmatrix}^{n-1}$, then we can compute $x_n$ in constant time.

Fortunately, one can compute the $k^\text{th}$ power of a matrix $M$ using only $O(\log k)$ multiplications, using exponentiation by squaring:

$$M^{2k} = M^k\cdot M^k, \qquad M^{2k+1}=M\cdot M^k\cdot M^k$$

Summarizing, $x_n$ can be computed in $O(\log n)$ arithmetic operations. This is much faster than naive dynamic programming based on the recurrence relation, which takes $O(n)$ operations. Also, only integer arithmetic is involved, so there is no risk of rounding error, as there would be if you simply substituted $n$ into the closed-form solution. This method generalizes to any homogeneous integer linear recurrence of any order.
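The approach above can be sketched in a few lines; this is a minimal implementation of exponentiation by squaring for the $2\times 2$ matrix, using exact Python integers (function and variable names are illustrative, not from any particular library):

```python
def mat_mult(a, b):
    """Multiply two 2x2 matrices of Python ints (exact arithmetic)."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def mat_pow(m, k):
    """Compute m**k with O(log k) multiplications (exponentiation by squaring)."""
    result = [[1, 0], [0, 1]]  # 2x2 identity
    while k > 0:
        if k & 1:                        # odd exponent: fold in one factor of m
            result = mat_mult(result, m)
        m = mat_mult(m, m)               # square
        k >>= 1
    return result

def x(n, x0, x1):
    """Return x_n for x_n = 2 x_{n-1} + x_{n-2}, given x_0 and x_1."""
    if n == 0:
        return x0
    p = mat_pow([[2, 1], [1, 0]], n - 1)
    return p[0][0]*x1 + p[0][1]*x0
```

For example, with $x_0 = x_1 = 1$ this reproduces the sequence $1, 1, 3, 7, 17, 41, \dots$ obtained by iterating the recurrence directly.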

Mike Earnest
  • On the other hand, for large $n$ you need large integers, so the constant factors for proceeding with integer arithmetic can quickly start getting oppressive. If you're willing to accept some truncation error, using $a^b=\exp(b \ln(a))$ does not suffer these difficulties. – Ian Sep 10 '17 at 18:37

So your general solution is $x_n = c_1 (1 + \sqrt{2})^n + c_2 (1 - \sqrt{2})^n$. If you pick initial values so that $x_n = (1 - \sqrt{2})^n$ (i.e., $x_0 = 1$, $x_1 = 1 - \sqrt{2}$), you see that $x_n \to 0$ as $n \to \infty$. Now suppose your initial values are only slightly off (you can't represent $1 - \sqrt{2}$ exactly!). Then you are really computing a solution $p_n$ of the same recurrence $p_n = 2 p_{n - 1} + p_{n - 2}$ with perturbed initial values, and since the recurrence is linear, the errors $\epsilon_n = x_n - p_n$ satisfy the same relation $\epsilon_n = 2 \epsilon_{n - 1} + \epsilon_{n - 2}$, with initial values $\epsilon_0 = 0$ and $\epsilon_1$ given. Its general solution has the same form as above, but here $c_1 = \frac{\epsilon_1 \sqrt{2}}{4}$, which is nonzero whenever $\epsilon_1 \neq 0$. Thus $|\epsilon_n| \to \infty$ as $n \to \infty$: the error grows without bound, even though the quantity being computed tends to $0$. Rounding errors made along the way only make this worse.
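This instability is easy to observe numerically. The sketch below (plain floating point, no libraries beyond the standard `math` module) iterates the recurrence from $x_0 = 1$, $x_1 = 1 - \sqrt{2}$; the exact solution $(1 - \sqrt{2})^n$ tends to $0$, but the tiny rounding error in representing $1 - \sqrt{2}$ is amplified by a factor of roughly $(1 + \sqrt{2})^n$:

```python
import math

# Exact solution for these initial values is x_n = (1 - sqrt(2))^n -> 0,
# but x_1 cannot be represented exactly in floating point.
a, b = 1.0, 1.0 - math.sqrt(2.0)   # x_0, x_1

for n in range(2, 80):
    a, b = b, 2.0*b + a            # advance the recurrence: x_n = 2 x_{n-1} + x_{n-2}

exact = abs(1.0 - math.sqrt(2.0))**79   # astronomically small (~1e-31)
# |b| is enormous by comparison: the initial rounding error has been
# amplified by roughly (1 + sqrt(2))^79 and completely swamps the answer.
```

Running this shows the computed value is not even close to zero, confirming that the recurrence is numerically unusable for this decaying solution.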

vonbrand