
I happened to come up with an idea for accelerating the convergence of fixed-point iteration, based on Aitken's delta-squared acceleration method. What interests me is the case of $x=\sin(x)$, for which plain fixed-point iteration is known to give roughly $\mathcal O(n^{-1/2})$ error after $n$ iterations. When applying the method below to this problem, numerical testing suggests the convergence may actually improve to linear, i.e. of the form $\mathcal O(\lambda^n)$ for some $\lambda\in(0,1)$, but I'm unsure whether this is actually the case.

My question: Does the method below actually accelerate the fixed-point iteration of $x=\sin(x)$ to linear convergence, and if so, precisely how fast is it in this case?

Code:
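In Python, the loop looks roughly like this (a minimal sketch of the recurrence given under "The Acceleration Method" below; the function name and parameters are arbitrary):

```python
import math

def accelerated_fixed_point(f, x, max_iter=100):
    """Sketch of the relaxed Aitken-style iteration described below."""
    r = 1.0
    for _ in range(max_iter):
        xd = (1 - r) * x + r * f(x)      # relaxed step from x
        xdd = (1 - r) * xd + r * f(xd)   # relaxed step from xd
        denom = x - 2 * xd + xdd
        if denom == 0:                   # can no longer extrapolate
            break
        t = (x - xd) / denom
        x -= t * (x - xd)                # extrapolated estimate of the fixed point
        r *= t                           # carry the relaxation factor forward
    return x

print(accelerated_fixed_point(math.sin, 1.0))  # ~0, the fixed point of sin
```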

Interestingly, it seems to work significantly better than using Aitken's method here.

In this case, the iterations should be asymptotically equivalent to Aitken's method, but Aitken's method runs into division by zero much earlier: since $\dot x$ and $\ddot x$ converge slowly toward $x$, the denominator $x-2\dot x+\ddot x$ cancels to zero in floating point, which forces the Aitken acceleration step to be skipped. This starts at $x\approx1.5\times10^{-4}$. In contrast, the method below keeps $x$, $\dot x$, and $\ddot x$ spaced far enough apart that division by zero is avoided in every iteration until the last one, where $x=\sin(x)\approx9.3\times10^{-9}$.
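To illustrate the cancellation (a minimal check, not the full iteration; the exact breakdown point depends on rounding details):

```python
import math

# Near the reported breakdown scale, the true Aitken denominator is about
# x**5/12, which is below one ulp of x, so the computed value is rounding
# noise and can be exactly 0.
x = 1.5e-4
x1 = math.sin(x)               # one fixed-point step
x2 = math.sin(x1)              # two fixed-point steps
print(x - 2 * x1 + x2)         # either 0.0 or a few ulps of x
print(x**5 / 12, math.ulp(x))  # true size vs. double spacing near x
```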

The Acceleration Method:

The idea is that given a function $f$ with a fixed-point $x_\star=f(x_\star)$ and an initial estimate $x_0$, the following linear approximations may be made:

\begin{align}x_0&=x_\star+\epsilon\\\dot x_0&=f(x_0)\\&=f(x_\star+\epsilon)\\&\simeq f(x_\star)+f'(x_\star)\epsilon\\&=x_\star+C\epsilon\\\ddot x_0&=f(\dot x_0)\\&\simeq x_\star+C^2\epsilon\end{align}

where $C:=f'(x_\star)$.

Supposing these equations are exact, they give a solvable system of equations:

$$\begin{cases}x_0=x_\star+\epsilon\\\dot x_0=x_\star+C\epsilon\\\ddot x_0=x_\star+C^2\epsilon\end{cases}$$

Aitken's method is based on solving $x_\star$ from these equations, but $C$ may also be solved for. Once $C$ is known, all future iterations may be accelerated by solving for $x_\star$ from the system of equations:

$$\begin{cases}x_0=x_\star+\epsilon\\\dot x_0=x_\star+C\epsilon\end{cases}$$

which yields the improved estimate of the form $(1-r)x_0+rf(x_0)$. Solving for all variables leads to the algorithm:

\begin{align}r_0&=1\\\dot x_i&=(1-r_i)x_i+r_if(x_i)\\\ddot x_i&=(1-r_i)\dot x_i+r_if(\dot x_i)\\t_i&=\frac{x_i-\dot x_i}{x_i-2\dot x_i+\ddot x_i}\\x_{i+1}&=x_i-t_i(x_i-\dot x_i)\\r_{i+1}&=t_ir_i\end{align}
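To spell out where $t_i$ and the $r$-update come from: subtracting consecutive equations of the three-equation system gives $\dot x_0-x_0=(C-1)\epsilon$ and $\ddot x_0-\dot x_0=C(C-1)\epsilon$, hence

$$C=\frac{\ddot x_0-\dot x_0}{\dot x_0-x_0},\qquad t:=\frac{x_0-\dot x_0}{x_0-2\dot x_0+\ddot x_0}=\frac1{1-C}.$$

Since $x_i-\dot x_i=r_i\bigl(x_i-f(x_i)\bigr)$, the update step can be rewritten as $x_{i+1}=(1-t_ir_i)\,x_i+t_ir_i\,f(x_i)$, i.e. a relaxed step with parameter $t_ir_i$, which is why $r_{i+1}=t_ir_i$.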

I haven't done enough research to really know whether this method is already known. Wikipedia and some numerical analysis texts I've found suggest applying Aitken's method after every two iterations, which is equivalent to holding $r$ fixed at $r=1$.

1 Answer


Consider the simplified problem of iterating $f(x)=x-x^3/6$. Each iteration of the scheme can then be written out explicitly:

\begin{align}\dot x_n&=x_n-\frac{r_n}6x_n^3\\\ddot x_n&=\dot x_n-\frac{r_n}6\dot x_n^3\\t_n&=\frac{r_nx_n^3/6}{r_nx_n^3/6-r_n\dot x_n^3/6}\\&=\frac{x_n^3}{(x_n-\dot x_n)(x_n^2+x_n\dot x_n+\dot x_n^2)}\\&\stackrel?\simeq\frac{x_n^3}{r_nx_n^3(3x_n^2)/6}\tag?\\&=\frac2{r_nx_n^2}\\x_{n+1}&\simeq x_n-\frac2{r_nx_n^2}(x_n-\dot x_n)\\&=x_n-\frac2{r_nx_n^2}\frac{r_n}6x_n^3\\&=x_n-\frac13x_n\\&=\frac23x_n\end{align}

This seems to be correct empirically, but it's not immediately clear to me how to justify $(?)$ or the replacement of $\sin$ with $x-x^3/6$.
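As a quick empirical check (a minimal sketch using the recurrence from the question with $f(x)=x-x^3/6$; the printed ratios can be compared against the predicted $2/3$):

```python
# Iterate the scheme from the question with f(x) = x - x**3/6 and print
# the successive ratios x_{n+1}/x_n for comparison with the 2/3 prediction.
f = lambda x: x - x**3 / 6
x, r = 1.0, 1.0
for n in range(12):
    xd = (1 - r) * x + r * f(x)
    xdd = (1 - r) * xd + r * f(xd)
    denom = x - 2 * xd + xdd
    if denom == 0:
        break
    t = (x - xd) / denom
    x_new = x - t * (x - xd)
    print(n, x_new / x)   # successive ratio x_{n+1} / x_n
    x, r = x_new, t * r
```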