3

Consider the infinite sum:

$$\frac\pi4=\sum_{i=1}^\infty \frac{(-1)^{i+1}}{2i-1}$$

I plugged it into my calculator and had to calculate the first hundred terms to get about $3$ digits of accuracy.

Why does this series converge so slowly?
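
For reference, a small Python sketch of the same partial sums (in place of the calculator) shows the behaviour:

```python
# Partial sums of pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
import math

s = 0.0
for i in range(1, 101):          # first hundred terms
    s += (-1) ** (i + 1) / (2 * i - 1)

print(s)                         # ~0.78290
print(math.pi / 4)               # ~0.78540
print(abs(s - math.pi / 4))      # ~2.5e-3, so only the first couple of digits are settled
```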

pie
  • 8,483
  • It is because the terms decay slowly. Note that there is an asymptotic expansion for the remainder: $$ \frac{\pi}{4} - \sum_{k = 1}^N \frac{(-1)^{k-1}}{2k-1} \sim (-1)^N \left( \frac{1}{4N} - \frac{1}{16N^3} + \frac{5}{64N^5} - \ldots \right) $$ for large $N$ (a quick numerical check of this appears after these comments) – Gary Jan 31 '24 at 11:04
  • Could you provide more details? – Aarush Saharan Jan 31 '24 at 11:05
  • For intuition, it's related to the fact that the harmonic series diverges slowly, i.e. the terms "stay large" for a long time, so each term added or subtracted changes the value significantly. – Milten Jan 31 '24 at 11:05
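
A quick numerical check of the remainder expansion quoted in the first comment above (a sketch; only the three displayed correction terms are used):

```python
# Compare the true remainder pi/4 - S(N) with the asymptotic expansion above
import math

def S(N):
    return sum((-1) ** (k + 1) / (2 * k - 1) for k in range(1, N + 1))

N = 100
true_remainder = math.pi / 4 - S(N)
asymptotic = (-1) ** N * (1 / (4 * N) - 1 / (16 * N ** 3) + 5 / (64 * N ** 5))

print(true_remainder)   # ~0.0024999376
print(asymptotic)       # agrees with the true remainder to many digits (omitted terms are O(1/N^7))
```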

2 Answers

4

Essentially, because the terms of the series stay large. The hundredth term of the series is $-\frac{1}{199} \approx -0.005$. That means that each term you add at that point will muck about with the thousandths digit, or, with carries (which have about a $50\%$ chance of happening, just looking at that number), maybe even the hundredths digit.

Consider, as a comparison, the well-known series $$ e = \sum_{i = 0}^\infty \frac1{i!} $$ How large is the hundredth term? It's $\frac1{99!}\approx 10^{-156}$. At the point where you add the hundredth term, you're over 150 places away from the decimal point.
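
A quick check of these magnitudes (a sketch; Python's exact integer factorial makes the size of $1/99!$ easy to see):

```python
# How small is the hundredth term of the series for e?
import math

print(len(str(math.factorial(99))))   # 99! has 156 digits, so 1/99! is about 1e-156
print(1 / math.factorial(99))         # ~1.07e-156

s = sum(1 / math.factorial(i) for i in range(20))
print(s, math.e)                      # agree to ~15-16 significant digits after only 20 terms
```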

If you want to smooth out your series somewhat, you could group terms up two-by-two. Note that your series is alternating. Every other term adds quite a lot to the total, and the next term takes away almost all of it again. If you just look at every other partial sum instead of looking at every partial sum, you get $$ \frac\pi4 = \sum_{i = 1}^\infty\left(\frac1{4i-3} - \frac1{4i-1}\right)= \sum_{i = 1}^\infty \frac{2}{(4i-3)(4i-1)} $$ whose terms decay like $\frac{1}{8i^2}$ rather than $\frac{1}{2i}$. Instead of adding $-\frac1{199}$, you are, at the same stage of the series, adding $\frac1{197} - \frac1{199} \approx \frac{1}{20\,000}$. Be warned, though: since all the terms are now positive, the error after $N$ pairs is the whole remaining tail, roughly $\frac{1}{8N}$, which is exactly the error of the original series after $2N$ terms. So the grouping removes the back-and-forth oscillation, but it does not speed up the $\sim 1/N$ rate of convergence (see also my comment below).
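
A small numerical sketch of this (plain Python floats) makes the point concrete:

```python
# Pairwise-grouped Leibniz series: pi/4 = sum of 2/((4i-3)(4i-1))
import math

target = math.pi / 4

def leibniz(n):
    return sum((-1) ** (i + 1) / (2 * i - 1) for i in range(1, n + 1))

def grouped(n_pairs):
    return sum(2 / ((4 * i - 3) * (4 * i - 1)) for i in range(1, n_pairs + 1))

print(2 / (197 * 199))                 # the 50th pair is ~5.1e-5, i.e. roughly 1/20000
print(abs(leibniz(100) - target))      # ~2.5e-3
print(abs(grouped(100) - target))      # ~1.25e-3, the error of leibniz(200): grouped(N) and leibniz(2N) are the same sum
```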

Arthur
  • 204,511
  • I wrote this answer without actually looking at the series and its partial sums. Some preliminary analysis using WolframAlpha seems to indicate that the odd and even partial sums are both "equally bad" in either direction, and you seem to get a much better approximation if you take your original series, and after adding all the terms you want to add, then add half of the next term. In this case, $$\sum_{n = 1}^{100}\frac{(-1)^{n+1}}{2n-1} \approx 0.7829, \qquad \frac{1/2}{201} + \sum_{n = 1}^{100}\frac{(-1)^{n+1}}{2n-1}\approx 0.78539$$ compared to the actual goal $\frac\pi4 \approx 0.78540$. (A small numerical check of this trick appears below the comments.) – Arthur Jan 31 '24 at 11:25
  • This is a well-known phenomenon that results from many theorems. Firstly, it makes sense as a kind of average of over- and under-estimates (which is what a converging alternating series gives you), and it also follows from summation formulas due to Euler, Cauchy, etc. – mick Jan 31 '24 at 12:04
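
A quick check of the half-of-the-next-term trick from the comments above (a sketch in plain Python):

```python
# Averaging trick from the comments: add half of the (N+1)-th term to the partial sum S(N)
import math

def S(n):
    return sum((-1) ** (i + 1) / (2 * i - 1) for i in range(1, n + 1))

n = 100
half_next = 0.5 * (-1) ** (n + 2) / (2 * (n + 1) - 1)   # half of the 101st term, +1/402

print(S(n))                 # ~0.78290
print(S(n) + half_next)     # ~0.78539
print(math.pi / 4)          # ~0.78540
```
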
1

It's possible to greatly improve the accuracy of the estimate with only a little extra effort through the use of Richardson extrapolation, as in this question: Extrapolate a sum using partial sums at powers of two. By this method we can achieve six-digit accuracy with $64$ terms of the series.

Here's how it works. Define $$S(k) = \sum_{i=1}^k \frac{(-1)^{i+1}}{2i-1}$$ Compute the first $64$ terms of the series. (It's convenient to use a spreadsheet.) In the following we will only use the sums for $k = 1,2,4,8,16,32,64$, shown in the column labelled $S(k)$ below. The remaining columns were computed by Richardson extrapolation: $S_2(k) = 2S(k)-S(k/2)$, then $S_3(k)=(4 S_2(k) - S_2(k/2))/3$, then $S_4(k) = (8 S_3(k) - S_3(k/2))/7$, and so on, with the leading coefficient doubling at each stage and the divisor being one less than that coefficient.

$$\begin{matrix} k &S(k) &S_2(k) &S_3(k) &S_4(k) &S_5(k) &S_6(k) &S_7(k) \\ 1 &1.00000000 & & & & \\ 2 &0.66666667 &0.33333333 & & & \\ 4 &0.72380952 &0.78095238 &0.93015873 & & \\ 8 &0.75426795 &0.78472638 &0.78598439 &0.76538805 & \\ 16 &0.76978835 &0.78530874 &0.78550286 &0.78543407 &0.78677047 \\ 32 &0.77758757 &0.78538679 &0.78541280 &0.78539994 &0.78539766 &0.78535338 \\ 64 &0.78149215 &0.78539674 &0.78540005 &0.78539823 &0.78539811 &0.78539813 &0.78539884\\ \end{matrix}$$
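
Here is a short Python sketch (plain double-precision arithmetic) that reproduces this computation:

```python
# Repeated Richardson extrapolation on the partial sums S(1), S(2), S(4), ..., S(64)
import math

def S(n):
    """Partial sum of the Leibniz series with n terms."""
    return sum((-1) ** (i + 1) / (2 * i - 1) for i in range(1, n + 1))

col = [S(2 ** j) for j in range(7)]      # the S(k) column for k = 1, 2, 4, ..., 64

for m in range(1, 7):                    # produce the columns S_2, S_3, ..., S_7 in turn
    w = 2 ** m                           # 2, 4, 8, ... : leading coefficient at this stage
    col = [(w * col[j] - col[j - 1]) / (w - 1) for j in range(1, len(col))]

print(col[0])        # S_7(64) ~ 0.78539884
print(math.pi / 4)   # 0.78539816...
```

Keeping the intermediate lists instead of overwriting `col` gives the other columns of the table.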

The final result is $S_7(64) = 0.785398\color{red}{84}$, compared to $\pi/4 = 0.78539816$. Without Richardson extrapolation, we would have only $S(64) = 0.78\color{red}{149215}$.

If computing $64$ terms of the series seems like too much work, note that we can achieve $S_6(32) = 0.7853\color{red}{5338}$ with only $32$ terms.

awkward
  • 15,626