3

In an online textbook for MIT OCW 18.013a, Calculus with Applications, the author uses residue calculus to derive the well-known formula $$\sum_{n>0} n^{-2} = \frac{\pi^2}{6}$$ (See Some Special Tricks)

He then writes:

You can actually sum the first 128 (or 1024) terms of this sum on a spreadsheet and extrapolate by comparing the sum up to different powers of 2. If you extrapolate first forming $S_2(k) = S(2^k)-S(2^{k-1})$, then $S_3(k)=(4 S_2(k) - S_2(k-1))/3$ then $S_4(k) = (8 S_3(k) - S_3(k-1))/7$. etc. You can get this answer to enormous accuracy numerically and verify this conclusion.

Would someone please explain this method of extrapolation or provide a suitable reference?

awkward
  • 15,626
  • This is called Romberg's method. See more details at https://en.wikipedia.org/wiki/Romberg%27s_method. – Somos May 15 '17 at 23:58
  • @Somos I'm having difficulty making the connection between the original problem, which is about summing a series, and Romberg's method, which as I understand it is about numerical integration. Is the idea to view the series as an integral over a discrete measure, or what? – awkward May 16 '17 at 00:34
  • 1
    Sorry. Actually, the key is Richardson extrapolation which is what Romberg's method uses. The first thing that came to my mind was Romberg, but all you need is Richardson. – Somos May 16 '17 at 01:43
  • @somos +1. I believe it is indeed Richardson extrapolation (although it's a bit tricky to make the connection because Richardson extrapolation is usually described as applying to small step sizes as in numerical integration, rather than integer steps as here). If you would care to write up a solution I will accept it, or if not I will work out the details and post. – awkward May 16 '17 at 18:46

3 Answers

2

This follows from the Euler–Maclaurin formula; we have:

$$\sum_{n = N}^{\infty} f(n) = \int_{N}^{\infty}f(x) dx + \frac{1}{2}f(N) -\sum_{k=1}^{M}\frac{B_{2k}}{(2k)!}f^{(2k-1)}(N) + R_M$$

where the $B_{2k}$ are the Bernoulli numbers and $R_M$ is a remainder term. In case of $f(n) = \dfrac{1}{n^2}$, this yields:

$$\sum_{n = N}^{\infty} \frac{1}{n^2} = \frac{1}{N} + \frac{1}{2 N^2} +\sum_{k=1}^{M}\frac{B_{2 k}}{N^{2k+1}} + R_M$$

This means that you can extrapolate more efficiently: first eliminate the $\dfrac{1}{N}$ and $\dfrac{1}{2 N^2}$ terms, and from then on only the reciprocals of the odd powers of $N$ remain. Note that doubling $N$ means summing the terms up to that new $N$ minus one; otherwise, re-expanding $\dfrac{1}{(N+1)^{2k+1}}$ in powers of $\dfrac{1}{N}$ shows that both even and odd powers appear, and the extrapolation then becomes less efficient.
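To see the expansion numerically (a minimal sketch, not from the thread; the Bernoulli numbers $B_2 = 1/6$, $B_4 = -1/30$, $B_6 = 1/42$ are hard-coded), one can compare the true tail $\sum_{n=N}^{\infty} n^{-2}$ with the first few terms of the series above:

```python
from math import pi

N = 10
# True tail: pi^2/6 minus the partial sum up to N - 1
true_tail = pi**2 / 6 - sum(1.0 / n**2 for n in range(1, N))

# Tail approximation: 1/N + 1/(2N^2) + sum_k B_{2k} / N^(2k+1)
bernoulli = [1 / 6, -1 / 30, 1 / 42]  # B_2, B_4, B_6
approx_tail = 1 / N + 1 / (2 * N**2) + sum(
    b / N**(2 * k + 1) for k, b in enumerate(bernoulli, start=1))

print(true_tail, approx_tail)
```

Already with three Bernoulli terms the agreement is at the level of the next omitted term, $B_8/N^9 \approx 3 \cdot 10^{-11}$ for $N = 10$.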

Count Iblis
  • 10,716
2

This is an instance of a general method; Richardson extrapolation is one example. Define a sequence $$a(n) = \sum_{k=1}^n 1/k^2$$ and note that the first few values of $a(2^k)$ strongly suggest that $a(n) \sim c_0 + c_1/n$. In general, there will be other terms, so our ansatz is that $$s(n) := a(n) \sim c_0 + c_1/n + c_2/n^2 + \dots$$ asymptotically. We can improve the convergence by eliminating the $1/n$ term, which leads to $$s_1(n) := (2s(2n) - s(n))/(2-1).$$ The next step is to eliminate the $1/n^2$ term using $$s_2(n) := (4s_1(2n) - s_1(n))/(4-1).$$ We continue, eliminating one term at a time; each elimination improves the convergence. If the asymptotic expansion is different, similar steps still eliminate one term at a time.
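The elimination steps above can be coded directly (a quick sketch; the function names are mine, not from the thread):

```python
from math import pi

def partial_sum(n):
    """a(n) = sum_{k=1}^n 1/k^2."""
    return sum(1.0 / k**2 for k in range(1, n + 1))

def richardson(values):
    """Repeated Richardson extrapolation on s(n), s(2n), s(4n), ...
    Level j eliminates the 1/n^j term via (2^j * s(2n) - s(n)) / (2^j - 1)."""
    level = list(values)
    result = [level[-1]]  # level 0: the raw partial sums
    for j in range(1, len(values)):
        level = [(2**j * level[i + 1] - level[i]) / (2**j - 1)
                 for i in range(len(level) - 1)]
        result.append(level[-1])
    return result  # result[j] is the j-times-extrapolated estimate

s = [partial_sum(2**k) for k in range(8)]  # n = 1, 2, 4, ..., 128
est = richardson(s)
print(est[-1], pi**2 / 6)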

Somos
  • 37,457
  • 3
  • 35
  • 85
1

Somos has answered my question -- the method is Richardson extrapolation -- but I would like to add some details to demonstrate how the method works in practice for the summation of $\sum n^{-2}$.

But first, let me correct a typo in the original web page, now that we know the method:

You can actually sum the first 128 (or 1024) terms of this sum on a spreadsheet and extrapolate by comparing the sum up to different powers of 2. If you extrapolate first forming $S_2(k) = \color{red}{2}S(2^k)-S(2^{k-1})$, then $S_3(k)=(4 S_2(k) - S_2(k-1))/3$ then $S_4(k) = (8 S_3(k) - S_3(k-1))/7$. etc. You can get this answer to enormous accuracy numerically and verify this conclusion.

With this correction, how well does the extrapolation work in summing the original series? I started by computing $$S(k) = \sum_{n=1}^k n^{-2}$$ for $k = 1, 2, 3, \dots ,128$ using a spreadsheet. Using only the values of $S(k)$ for $k = 1, 2, 4, 8, \dots , 128$ and applying Richardson extrapolation as described above, the numerical results are as follows:

$$\begin{matrix} k &S(k) &S_2(k) &S_3(k) &S_4(k) &S_5(k) &S_6(k) &S_7(k) &S_8(k) \\ 1 &1.0000000000 & & & & & & & \\ 2 &1.2500000000 &1.5000000000 & & & & & & \\ 4 &1.4236111111 &1.5972222222 &1.6296296296 & & & & & \\ 8 &1.5274220522 &1.6312329932 &1.6425699169 &1.6444185293 & & & & \\ 16 &1.5843465334 &1.6412710147 &1.6446170219 &1.6449094655 &1.6449421946 & & & \\ 32 &1.6141672628 &1.6439879922 &1.6448936514 &1.6449331699 &1.6449347502 &1.6449345100 & & \\ 64 &1.6294305014 &1.6446937400 &1.6449289892 &1.6449340375 &1.6449340954 &1.6449340742 &1.6449340673 & \\ 128 &1.6371520050 &1.6448735085 &1.6449334313 &1.6449340659 &1.6449340678 &1.6449340669 &1.6449340668 &1.6449340668\\ \end{matrix}$$

So summing 128 terms of the series results in $1.6371520050$, with about 0.5% error compared to the true value of $\pi^2 / 6 \approx 1.6449340668$. On the other hand, Richardson extrapolation using the same data agrees with the "exact result" to at least 11 significant digits, which is indeed an impressive increase in accuracy.
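The tableau above can be reproduced with a short script (a sketch; the variable names are mine, and the recurrence is the corrected one, $S_{j+1}(k) = (2^j S_j(k) - S_j(k-1))/(2^j - 1)$):

```python
from math import pi

# Partial sums S(2^k) for k = 0..7, i.e. n = 1, 2, 4, ..., 128
S = [sum(1.0 / n**2 for n in range(1, 2**k + 1)) for k in range(8)]

# Build the extrapolation tableau column by column, as on a spreadsheet:
# column j holds S_{j+1}(k) = (2^j * S_j(k) - S_j(k-1)) / (2^j - 1)
tableau = [S]
for j in range(1, 8):
    prev = tableau[-1]
    tableau.append([(2**j * prev[i] - prev[i - 1]) / (2**j - 1)
                    for i in range(1, len(prev))])

print(tableau[0][-1])   # S(128), the raw 128-term sum
print(tableau[-1][0])   # S_8(128), the fully extrapolated value
print(pi**2 / 6)
```

The last column contains a single entry, matching the bottom-right corner of the tableau.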

awkward
  • 15,626