
The following reflects my (elementary) understanding of divergent series. We first define an infinite series as follows:

$L = \sum_{n=0}^{\infty}a_n \Leftrightarrow L = \lim_{k \rightarrow \infty} S_k.$

where $S_k$ is the partial sum of the infinite series from $a_0$ to $a_k$. A series for which this limit exists is said to be convergent; otherwise, it is called divergent.

By this definition, series like

$1-1+1-\cdots$ and $1+2+3+\cdots$ are divergent.
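For concreteness, the behavior of the partial sums in these examples can be checked numerically; a minimal Python sketch (the helper `partial_sums` is just for illustration):

```python
# Partial sums S_k = a_0 + a_1 + ... + a_k for three example series.
def partial_sums(term, k):
    """Return the list [S_0, S_1, ..., S_k] for the series with n-th term term(n)."""
    sums, total = [], 0.0
    for n in range(k + 1):
        total += term(n)
        sums.append(total)
    return sums

geometric = partial_sums(lambda n: 0.5**n, 50)   # S_k -> 2: convergent
grandi    = partial_sums(lambda n: (-1)**n, 50)  # S_k oscillates 1, 0, 1, 0, ...: divergent
naturals  = partial_sums(lambda n: n + 1, 50)    # S_k grows without bound: divergent
```

The geometric partial sums settle on a limit, while the other two never do, which is exactly what the limit definition detects.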

Then we have the notion of a regularized sum, where we look for a new definition of the infinite series that allows us to assign real values to some divergent series. The new definition must be consistent with the old one: every series that is convergent under the definition $L = \sum_{n=0}^{\infty}a_n \Leftrightarrow L = \lim_{k \rightarrow \infty} S_k$ must also be convergent under the new definition, and the two definitions must yield the exact same limit $L$. Although I'm not sure of the following, different summation methods seem to always assign the same value to a given divergent series (when they assign one at all), so that $1-1+1-\cdots=1/2$ under Cesàro summation, Abel summation, and any other method that assigns a value to this series.

In addition, there are series like $1+2+3+\cdots$ that are not Cesàro or Abel summable, but can be summed by other methods, like zeta-function regularization. This implies that a series that is not summable under one summation method (say Cesàro's) can still be summable under other summation methods (like zeta regularization).

This last fact leads me to my question:

-Can every divergent series be regularized? That is, for every series that is not summable under some given summation methods, can we find a new summation method that sums it?

-If the answer to the last question is yes, does there exist a single summation method that can sum (regularize) every divergent series?

Omar Nagib

6 Answers


In the most general sense, a summation is a partial function from the set of summand sequences to $\mathbb R$ (or $\mathbb C$). This sounds like we could assign more or less arbitrary values, and if we want, we really can. However, certain properties of summations are preferred to hold, such as

  • Regularity: the summation method should be an extension of the standard convergent-sequence-of-partial-sums method
  • Linearity: if we define $\sum a_n$ and $\sum b_n$, then we also define $\sum(ca_n+b_n)$ and have $\sum(ca_n+b_n)=c\sum a_n+\sum b_n$
  • Stability: $\sum a_n$ is defined if and only if $\sum a_{n+1}$ is defined, and we have $\sum a_n=a_0+\sum a_{n+1}$

To repeat: not all summation methods (not even all methods in practical use) obey all three criteria. But if we concentrate on methods obeying all three, then indeed we often get that certain (classically) divergent series are always assigned the same value under any such summation method. For example, $\sum x^n=\frac1{1-x}$ follows for every $x\ne 1$ for which the sum is defined, merely by playing around with stability and linearity.
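This agreement between methods can be illustrated numerically for $1-1+1-\cdots$; a rough sketch comparing Cesàro averaging with Abel's power-series limit (the truncation points are chosen arbitrarily):

```python
def cesaro(term, n):
    """(C,1) value: the average of the first n partial sums."""
    total, acc = 0.0, 0.0
    for k in range(n):
        total += term(k)   # partial sum S_{k+1}
        acc += total
    return acc / n

def abel(term, x, n):
    """Abel value: evaluate sum a_k x^k at x slightly below 1."""
    return sum(term(k) * x**k for k in range(n))

grandi = lambda k: (-1)**k
c = cesaro(grandi, 100_000)        # close to 1/2
a = abel(grandi, 0.9999, 200_000)  # close to 1/(1 + 0.9999), i.e. about 1/2
```

Both regular, linear, stable methods land on the same value $\tfrac12$, as the argument above predicts.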

So how far can we push this? We can use Zorn's lemma to find a maximal regular, linear, stable summation method. But will "maximal" imply "total", i.e., that all series become summable? And will the summation thus obtained be well-defined? Unfortunately, the answer to both is no. This can already be exemplified with $\sum 1$, which would have to be a solution of $\sum 1 = 1+\sum 1$ by stability. (Then again, you may have read that regularization can assign $1+1+1+\ldots =-\frac12$; apparently those methods are not linear or not stable ...)

  • First of all, thank you for your enlightening answer. In the stability condition, shouldn't it be $\sum a_n=a_0+\sum a_{n+1}$? Is $a_2$ a typo?

    You ask about a series being summable on one hand, and about its summation being well-defined on the other, in two separate questions (as if they were different notions); the two questions are equivalent, right?

    I don't understand what you mean by the last example of $\sum 1$; can you explain?

    – Omar Nagib Sep 27 '15 at 20:39
  • What happens if we get rid of the stability hypothesis? Can we get a total function? – AlienRem Jan 09 '17 at 00:16

A new method for summing series is available with the hyperreal numbers, but it depends on the assumption that $(-1)^\omega = 0$, where $\omega$ is the hyperreal unit infinity. Essentially, the steps are:

  1. Take the series representation, and derive a formula for the sum at the point of the $k$th term. This is done by using discrete calculus, or a standard summation method.
  2. Substitute the infinite unit $\omega$ for $k$
  3. Reduce the result using appropriate rules, including $(-1)^\omega = 0$.
  4. Optionally, round to whichever "level of infinity" you desire (i.e., take the standard part, or leave it with infinities/infinitesimals, etc.).

Example:

Let's say you wanted to know the sum of the natural numbers: $1 + 2 +3 + \ldots$. This is a simple arithmetic series. The formula for the $k$th partial sum of an arithmetic series is $$\sum_{n = 1}^k a + (n - 1)d = \frac{k^2d}{2} + \frac{k(2a - d)}{2}$$

So, since we are going to infinity (i.e., $\omega$), and $a = 1$ and $d = 1$, then the result will be $\frac{\omega^2}{2} +\frac{\omega}{2} \simeq \frac{\omega^2}{2}$ (the $\simeq$ indicates that the principal value of the result is $\frac{\omega^2}{2}$ - once you square an infinity, the next infinite power down plays essentially no role in the value).

Now, let's look at the series $1 + 3 + 5 + \ldots$. Here, $a = 1$ and $d = 2$, so the result will be simply $\omega^2$.

Using this method, you can do things like divide series by each other. So, we can say: $$\frac{(1 + 2 + 3 + \ldots)}{(1 + 3 + 5 + \ldots)} = \frac{\frac{\omega^2}{2}}{\omega^2} = \frac{1}{2}$$

Using this method, you have a lot of better-behaved divergent series. In this method, $(1 + 3 + 5 + \ldots)$ is not the same series as $(1 + 0 + 3 + 0 + 5 + \ldots)$, even though they look like they might be. The latter has a value of $\frac{\omega^2}{4}$.

This method winds up yielding consistent values over a wide range of divergent series.
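The ratio computation above agrees with the ordinary limit of the ratio of partial sums; a small sympy sketch of the same calculation (substituting $\omega$ for $k$ and keeping the principal part corresponds to taking the limit):

```python
import sympy as sp

k, n = sp.symbols('k n', positive=True, integer=True)

# k-th partial sums of 1 + 2 + 3 + ... and 1 + 3 + 5 + ...
S_nat = sp.summation(n, (n, 1, k))        # k*(k + 1)/2, i.e. omega^2/2 + omega/2 above
S_odd = sp.summation(2*n - 1, (n, 1, k))  # k**2, i.e. omega^2 above

ratio = sp.limit(S_nat / S_odd, k, sp.oo)  # 1/2, as in the answer
```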

You can see the introduction to the method, and an application of it to the question of Ramanujan summation here.

johnnyb
  • The standard value of $1+2+3+... = -\frac{1}{12}$, and the same family of techniques assigns $1+3+5+... = \frac{1}{3}$; it seems the ratio between these is $-\frac{1}{4}$, which yields a different interpretation than what you have here. So while I think this has a lot of merit (it's a very natural generalization of limits and singularities), I don't think it's consistent with results from complex analysis and Ramanujan summation – Sidharth Ghoshal Jul 26 '23 at 00:06
  • @SidharthGhoshal - Ramanujan summation is a normalization, not a sum in the traditional sense, and therefore would not work with this. For an analysis of the question of the sequence 1 + 2 + 3 + ..., you should see this paper of mine: https://www.academia.edu/37974912/Numberphile_s_Proof_for_the_Sum_1_2_3_ – johnnyb Jul 26 '23 at 14:09
  • In your paper you mention "However, the question still remains why physicists can use −1/12 as a stand-in for the sum of all natural numbers... While no conclusive reason for this has been established" (correct me if I'm wrong), but I believe we have VERY conclusive reasons why it works. As you noted in the next section, the cosine-smoothed sum of $1+2+3...$ has an asymptotic expansion given by $T(n)-1/12 + O(1/n)$ where $T$ is some growing function. In situations like the Casimir effect calculation, I believe the quantity being evaluated implicitly is $1+2+3... -B(n)$ where $B(n) \sim T(n)$ – Sidharth Ghoshal Jul 26 '23 at 15:44
  • And so the quantity being measured ends up being the difference of the constant-level asymptotic parts of both series, which ends up as $-\frac{1}{12} - 0$. Terence Tao's blog goes into this a bit: https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/ – Sidharth Ghoshal Jul 26 '23 at 15:45
  • Of course I should take my own advice and note that the ratio of the asymptotic sums of $1+3+5+7... $ and $1+2+3...$ would be $\frac{1}{2}$ if we only drop to the constant order term at the very end. So there is a nice way to combine your omega technique here WITH Zeta function regularization into a single cohesive framework. – Sidharth Ghoshal Jul 26 '23 at 15:48
  • Also $(-1)^\infty = 0$ follows from the fact that $\sum_{n=0}^{\infty} x^n = \frac{1-x^{\infty+1}}{1-x} = \frac{1}{1-x}$; see this post here: https://mathoverflow.net/questions/429326/how-to-correctly-renormalize-this-function-at-the-pole-x-1-evaluating-sum where the fact $e^{e^\infty} = 1$ is discussed. In general $e^{e^{\infty}} = e^{0} = 1$. I noticed this was ongoing work in your paper, in case it hasn't already been resolved – Sidharth Ghoshal Jul 26 '23 at 15:57

To be overly pedantic, the summation method $S_0$ defined as follows $$S_0 \left( \sum a_n \right) = \begin{cases} \sum a_n & \text{if it converges} \\ 0 & \text{otherwise} \end{cases} $$ sums every divergent series. Of course, $S_0$ is quite useless. But why is it wrong? To actually show that $S_0$ isn't a good divergent series summation, we should find a different divergent series summation method that we trust, and check whether $S_0$ agrees with that method.

There is one summation method that we generally trust. Assume we are also looking at a function, rather than a specific point. Then
$$S_1 \left( \sum a_n \right) = \begin{cases} \sum a_n & \text{if it converges} \\ \text{analytic continuation of } \sum a_n & \text{if it exists} \end{cases}$$ For example, if we are looking at the function $\sum n^s$, then $S_1(\sum n) = -\frac{1}{12}$, using the analytic continuation of the zeta function. Many divergent series methods you will run into are basically attempts to create something that behaves like $S_1$. However, most people don't realize that essentially every divergent series summation you run into ends up being incomparable to $S_1$; i.e., there tend to exist sums that the divergent series method can sum that cannot be obtained through analytic continuation. For example, consider the (very weak) summation method, Cesàro summation. Take $f(x) = \sum_{n=1}^\infty (-1)^n \frac{x^n}{1-x^n}$

$f(x)$ cannot be analytically continued to $|x|>1$, since it has a natural boundary due to the dense poles coming from $1-x^n = 0$. However, for sufficiently large $n$, this sum looks just like $\sum (-1)^n$, which can be summed using Cesàro summation. Generally, if we have a divergent series summation method that can sum $\sum a_n$, then (with some reasonable conditions on the properties of the summation method) $\sum a_n \frac{x^n}{1-x^n}$ is a function that cannot be analytically continued, but can be summed using divergent series summation.
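The $S_1$ values discussed here can be read off numerically from the continued zeta function; a minimal mpmath check (the value $-\frac12$ for $1+1+1+\cdots$ is included for comparison with another answer):

```python
from mpmath import mp, zeta

mp.dps = 30
sum_naturals = zeta(-1)  # continuation of sum n^{-s} at s = -1: -1/12
sum_ones     = zeta(0)   # continuation at s = 0: -1/2
```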

If you look at the problem more closely, you'll see that even ordinary summation is incomparable to analytic continuation. For example, $\sum e^{-x^n}$ has a natural boundary at $|x|=1$, but of course the sum still converges beyond this.

So, the problem isn't so much that we can't regularize divergent series, but that there isn't a framework to understand the right regularization for divergent series. When we restrict ourselves to regularizing series that have an associated analytic continuation, then we have the whole theory about consistency of analytic continuation, and various knowledge about how continuations interact with differential equations, composition, etc. We have specific knowledge of the type of coherence we obtain from analytic continuation, but there is still a lot of work to do to understand the coherence we would achieve from continuations by divergent series summation.

Caleb Briggs

We hope so in general, but as of today we don't have techniques for many divergent series. Consider something like $ \sum_{n=0}^{\infty} 2^{2^n}$. Having an agreed-upon regularization for this would amount to having defined the function $\sum_{n=0}^{\infty} z^{2^n}$ beyond its natural boundary, and unfortunately there is no such standard agreed-upon continuation to my knowledge.

In general, while many regularization techniques can be put forth, there is a "naturality" to some regularizations. In the case of the famous $1+2+3+4+... = -\frac{1}{12}$: for it NOT to "formally" equal $-\frac{1}{12}$ would make the Laurent expansion of $\frac{1}{1-e^x}$ difficult to explain, because the linear term there is $-\frac{1}{12}$, and there are formal manipulations that equate it with $1+2+3+...$; see here for why this should be the case.

The reason I bring up naturality is we don't have a "natural" continuation of that earlier power series to my knowledge so at this time we don't have a natural way to regularize the divergent series that come from it.


A common way to regularize a series is to use analytic continuation. The series $\sum_{k=0}^\infty a_k$ is then mapped to the function $f(x) = \sum_{k=0}^\infty a_k x^k.$

This function is then shown to converge on some interval $(-c, c)$ to some analytic function defined at $1.$

For example, $1+2+4+\cdots = \sum_{k=0}^\infty 2^k$ maps to $$f(x) = \sum_{k=0}^\infty 2^k x^k = \sum_{k=0}^\infty (2x)^k = \frac{1}{1-2x}.$$ Here the last equality is valid when $|2x|<1$ but the last expression is valid for all $x\neq \frac12.$ Taking $x=1$ gives $$1+2+4+\cdots = -1.$$
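A short sketch of this continuation with exact rationals (the function name `f` follows the answer's notation):

```python
from fractions import Fraction

def f(x):
    """Analytic continuation 1/(1 - 2x) of sum (2x)^k, valid for all x != 1/2."""
    return 1 / (1 - 2 * Fraction(x))

inside  = f(Fraction(1, 4))  # |2x| < 1: agrees with the convergent series, gives 2
outside = f(1)               # beyond the radius of convergence: gives -1
```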

But not all series can be handled this way. For example, $a_k=k!$ gives a series $\sum_{k=0}^\infty k!\,x^k$ that has radius of convergence $0$ and therefore doesn't give an analytic function.

md2perpe

0. A single formula for the sum of a wide class of series

I show here that series whose partial sum $S(n)$ satisfies the conditions of Carlson's theorem, allowing $S(n)$ to be analytically continued in a unique way, can be correctly summed using the formula:

$$S = \operatorname*{con}_{N}\left[\int_{N-1}^{N}S(x)dx\right]\tag{0.1} $$

where $\displaystyle \operatorname*{con}_{N}$ denotes the operation of extracting the constant term of the large-$N$ asymptotic expansion.

Before I present my argument, let's consider some examples of (0.1) applied to divergent series. In the case of the sum of the positive integers, we have $S(x) = \frac{1}{2} x (x+1)$, and:

$$\int_{N-1}^N S(x) dx = \frac{N^2}{2} - \frac{1}{12}\tag{0.2}$$

The sum is then the constant term here, $-\frac{1}{12}$.
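Formula (0.1) applied to this example can be verified with sympy (a check I added; the symbols follow the text):

```python
import sympy as sp

N, x = sp.symbols('N x', positive=True)

S = x * (x + 1) / 2  # partial sum of 1 + 2 + 3 + ..., continued to real arguments

val = sp.expand(sp.integrate(S, (x, N - 1, N)))  # N**2/2 - 1/12, as in (0.2)
constant_term = val.coeff(N, 0)                  # -1/12
```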

The alternating sum:

$$\sum_{k=1}^{\infty}(-1)^{k+1} k\tag{0.3}$$

can also be handled by (0.1). In this case we have:

$$S(x) = \frac{1-(2 x + 1)\exp(i\pi x)}{4}\tag{0.4}$$

Integrating from $N-1$ to $N$ yields:

$$\int_{N-1}^N S(x) dx =\frac{1}{4}+\frac{e^{i \pi N} (-1+i \pi N)}{\pi ^2}\tag{0.5}$$

The constant term is then $\frac{1}{4}$; this value of the sum is also what one finds using other methods, e.g. Cesàro summation.
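The integral (0.5) can be spot-checked with sympy; a sketch (evaluation at the arbitrary real value $N = 37/10$ is my choice):

```python
import sympy as sp

N, x = sp.symbols('N x')

S = (1 - (2*x + 1) * sp.exp(sp.I * sp.pi * x)) / 4  # partial sum (0.4)
claimed = sp.Rational(1, 4) + sp.exp(sp.I*sp.pi*N) * (-1 + sp.I*sp.pi*N) / sp.pi**2

computed = sp.integrate(S, (x, N - 1, N))

# numeric residual between the computed integral and (0.5) at an arbitrary real N
residual = sp.Abs((computed - claimed).subs(N, sp.Rational(37, 10)).evalf())
```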

The sum of the harmonic series can be computed as follows. The partial sum can be written as:

$$S(n) = \sum_{k=1}^n \frac{1}{k} = \sum_{k=1}^{\infty}\left(\frac{1}{k} - \frac{1}{k+n}\right)\tag{0.6}$$

The r.h.s. is then also defined for real values of $n$, which is the unique analytic continuation per Carlson's theorem. The integral from $N-1$ to $N$ can then be written as:

$$\int_{N-1}^N S(x) dx = \sum_{k=1}^{\infty}\left[\frac{1}{k}-\log(k+N)+\log(k+N-1)\right]\tag{0.7}$$

Cutting off the summation at $M$ terms and taking the limit of $M$ to infinity yields:

$$\int_{N-1}^N S(x) dx = \lim_{M\to\infty}\left[\sum_{k=1}^M\frac{1}{k} - \log(M+N)\right]+ \log(N)\tag{0.8}$$

The constant term in here is the value of the limit, which is Euler's constant $\gamma = 0.577215664901532\ldots$
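The limit in (0.8) is the classical definition of Euler's constant and is easy to estimate numerically (the cutoffs below are chosen arbitrarily):

```python
import math

N, M = 10, 10**6
H_M = math.fsum(1.0 / k for k in range(1, M + 1))  # partial sum of the harmonic series
gamma_est = H_M - math.log(M + N)                  # the limit in (0.8)
```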

To compute the sum of the logarithm of the positive integers, we can write the partial sum as the logarithm of the factorial and then use Stirling's formula for the asymptotic expansion. We have:

$$S(n) = \log(n!) = n\log(n) -n + \frac{1}{2}\log(n) + \frac{1}{2}\log(2\pi) +\mathcal{O}\left(\frac{1}{n}\right)\tag{0.9}$$

The integration of a term of order $1/x$ from $N-1$ to $N$ will obviously tend to zero in the large $N$ limit and will therefore not contribute to the constant term. For the term proportional to $\log(n)$, we have:

$$\int_{N-1}^N\log(x) dx = N \log(N) - (N-1)\log(N-1) - 1\tag{0.10}$$

For large $N$, the asymptotic expansion is:

$$\int_{N-1}^N\log(x) dx =\log(N) + \mathcal{O}\left(\frac{1}{N}\right) \tag{0.11}$$

So, the term proportional to $\log(n)$ in $S(n)$ does not contribute to the constant term.

For the term proportional to $n\log(n)$, we have:

$$\int_{N-1}^N x \log(x) dx = \frac{1}{2}\left[N^2 \log(N) - (N-1)^2\log(N-1) - N+\frac{1}{2}\right]\tag{0.12}$$

Expanding this for large $N$ yields:

$$\int_{N-1}^N x \log(x) dx = N\log(N)-\frac{1}{2}\log(N)-\frac{1}{2}+\mathcal{O}\left(\frac{1}{N}\right)\tag{0.13} $$

So, this term yields a contribution of $-\frac{1}{2}$.

The $-n$ term yields a contribution of $\frac{1}{2}$, because:

$$\int_{N-1}^Nx dx = \frac{1}{2}\left[N^2 -(N-1)^2\right] = N -\frac{1}{2}\tag{0.14}$$

We're then left with the constant term in $S(n)$ of $\frac{1}{2}\log(2\pi)$ as the value of the sum of the logarithms of the positive integers.
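This constant term can be confirmed numerically from Stirling's formula (0.9), using the log-gamma function for $\log(n!)$:

```python
import math

n = 10**6
# log(n!) minus the growing part of Stirling's formula leaves the constant term
const = math.lgamma(n + 1) - (n * math.log(n) - n + 0.5 * math.log(n))
```

For large $n$ this approaches $\frac{1}{2}\log(2\pi) \approx 0.9189$, the value obtained above.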

Let's then proceed with the derivation of the summation formula (0.1).

1. Introduction: What does the sum of a series really mean?

One can make progress toward getting to a more universal way of summing divergent series, by taking a step back and considering how series, convergent or divergent, actually arise in mathematical computations. So, let's temporarily put aside the standard definition of the value of a convergent series as being the limit of the partial sum, and consider what the math itself is telling us when we actually perform a calculation that yields the answer in the form of a series.

Whether your computation involves a Taylor expansion, a long division, or an asymptotic expansion, the rigorous result for the quantity $X$ you are computing is always of the form:

$$X = S(N) + R(N) \tag{1.1}$$

where $S(N)$ is the partial sum obtained by keeping the first $N$ terms of the series, and $R(N)$ is the remainder term. Of course, this doesn't look all that useful, because we won't know what the remainder term is, although the relevant theorems of the methods used to obtain the series will often allow you to bound the remainder term and thereby get approximations for $X$.

If the series converges, i.e. if $\lim_{N\to\infty} S(N)$ exists, then $\lim_{N\to\infty}R(N)$ exists as well. Assuming that $X$, as a function of the expansion parameter, is analytic then implies that $\lim_{N\to\infty}R(N) = 0$. In that case, we have:

$$X = \lim_{N\to\infty} S(N)\tag{1.2}$$

When only a series is specified and we want a notion of the sum of that series, what we really have in mind is some quantity $X$ that is maximally analytic, in the sense that any nonanalytic behavior is dictated by the specified series, and that, when expanded in a series, yields the specified series.

The value of the series is then given by (1.1), and in case the series is convergent, we can eliminate the unknown remainder term by taking the limit of the partial sum, which gives formula (1.2). This strongly suggests that the procedure of taking the limit of the partial sum is not fundamental: both divergent and convergent series are treated on an equal footing by the math that actually generates series. These series are then supposed to be truncated, and a remainder term is supposed to be added.

The problem of summing divergent series can then be reduced to that of eliminating the remainder term.

2. Analytic continuation and Ramanujan summation as a means to find the remainder term

One way to get a handle on the unknown remainder terms of divergent series is to perform an analytic continuation of the series. We add a new parameter, and if the series is convergent in some region of the parameter space, then the sum of the analytically continued series is the limit of the partial sums (1.2), and we then also have an expression for the remainder term as the difference between the value of the series and the partial sum.

If we then analytically continue this result for the remainder term back to the original value of the parameter which yields the original divergent series, then we obtain the remainder term for this series and thereby the sum of the series. But it's then easy to see that doing so amounts to just analytically continuing the series itself.

We can also use the Euler–Maclaurin formula to get an expression for the remainder term of a convergent series, and then argue on the basis of analytic continuation what the remainder term of a general divergent series should be. Proceeding this way results in a version of Ramanujan summation. For a convergent series $S = \sum_{k=0}^{\infty}f(k)$, the remainder term $R(N)$ is given by:

$$R(N) = \sum_{k = N+1}^{\infty}f(k)\tag{2.1}$$

We can then apply the Euler–Maclaurin formula to this. It's convenient to change the upper limit from infinity to $M$, so that we can later modify the expression for divergent series. We then have a modified remainder term $R(N,M)$:

$$R(N,M) = \sum_{k = N+1}^{M}f(k)\tag{2.2}$$

We then apply the Euler–Maclaurin formula in the following way:

$$\begin{split}&R(N,M) = \sum_{k = N+1}^{M}f(k) = \sum_{k = N}^{M}f(k) - f(N)\\& = \int_N^Mf(x) dx + \frac{1}{2}\left[f(M) - f(N)\right] +\sum_{k=1}^{\infty}\frac{B_{2k}}{(2k)!}\left[f^{(2k-1)}(M) - f^{(2k-1)}(N)\right]\\ &= F(M) - F(N)+ \frac{1}{2}\left[f(M) - f(N)\right] +\sum_{k=1}^{\infty}\frac{B_{2k}}{(2k)!}\left[f^{(2k-1)}(M) - f^{(2k-1)}(N)\right]\end{split}\tag{2.3}$$

Here $F(x)$ is the indefinite integral of $f(x)$ and the summation to infinity inside the Euler-Maclaurin formula is to be interpreted as a truncated summation plus the remainder term for that formula. Here and in the following, we assume that the conditions of Carlson's theorem are met and that summands, remainder terms and partial sums are uniquely defined for real and complex arguments.

In the case of a convergent summation, all the terms involving $f(M)$ and its derivatives tend to zero in the limit of $M$ to infinity. This means that if we modify a divergent series by adding a parameter so that it becomes convergent in some region of parameter space, and then analytically continue back to the point where we recover the original series, we end up with an expression for the remainder term containing only the terms involving $f(N)$ and its derivatives.

However, we need to be careful with $F(M) - F(N)$ here. In the case of a convergent series, we can replace this by $-F(N)$, with the prescription to choose the integration constant such that there is no constant term at infinity (the asymptotic expansion around infinity does not contain a constant term). This notion is then valid for the divergent case as well.

We are then led to the general expression for the remainder term that's valid for both divergent and convergent series:

$$R(N) = - F(N) - \frac{1}{2}f(N) -\sum_{k=1}^{\infty}\frac{B_{2k}}{(2k)!} f^{(2k-1)}(N)\tag{2.4}$$

Using this expression for the remainder term amounts to using a version of Ramanujan summation. The traditional version, involving the function, antiderivative, and derivatives at zero, follows in a straightforward way from (1.1) for $N = 0$, in which case $S(0) = f(0)$, and one then gets (2.4) with the sign of $\frac{1}{2}f(0)$ flipped to positive in the expression for the sum of the series.

3. Toward a universal formula for the sum of series

One can interpret formula (2.4) for the remainder term as a universal formula, but it's not always practical to use. There exists, however, a way to eliminate the need to deal with (2.4). According to (1.1), the value of the series is obtained by adding the remainder $R(N)$ to the partial sum $S(N)$; since this value is independent of $N$, adding $R(N)$ to $S(N)$ has to have the effect of eliminating all the terms in $S(N)$ that are not constant, with the sum of the series given by the sum of the constant term in $S(n)$ and the constant term in $R(n)$.

We can then find the constant term in $R(n)$ by using the fact that there is no constant term in $F(n)$ in (2.4), and expressing $F(n)$ in terms of $R(n)$. We can do this as follows. We start with the case of a convergent series, where we also cut off the summation at the upper limit $M$, as in formula (2.2). It follows that $R(x,M)$ satisfies the recursion:

$$R(x,M) - R(x-1,M) = -f(x)\tag{3.1}$$

Restoring the upper limit of $M$ in (2.4) changes $F(N)$ in there to $F(N,M)$:

$$F(N,M) = -\int_N^Mf(x)dx\tag{3.2}$$

Inserting (3.1) in here, yields:

$$\begin{split}F(N,M)& = \int_N^M\left[R(x,M) - R(x-1,M)\right]dx\\&= \int_{M-1}^M R(x,M) dx - \int_{N-1}^N R(x,M) dx\end{split}\tag{3.3}$$

This means that in the divergent case when we extract this expression via analytic continuation back from the convergent case after taking the limit of $M$ to infinity, we have:

$$F(N) = -\int_{N-1}^N R(x) dx\tag{3.4}$$

We can then evaluate the constant term in here by expanding this around infinity. Only the divergent and the unknown constant term in $R(x)$ will then contribute to this. Equating the constant term in $F(N)$ to zero, then yields the constant term in $R(x)$ as:

$$-\operatorname*{con}_{N}\int_{N-1}^N D(x)dx\tag{3.4}$$

where $\displaystyle \operatorname*{con}_{N}$ denotes extracting the constant term from the large-$N$ asymptotic expansion, and $D(x)$ denotes the divergent terms in $R(x)$. The divergent part $D(x)$ is equal to minus the divergent part of $S(x)$. If we put $D(x) = -S(x)$ in (3.4), we get the correct contribution to the constant term coming from $R(x)$; added to that is the constant term coming from $S(x)$, while the terms in $S(x)$ that tend to zero make no contribution. So, putting $D(x) = -S(x)$ in (3.4) yields the correct value of the infinite sum as:

$$\operatorname*{con}_{N}\int_{N-1}^N S(x) dx\tag{3.5}$$

One may rewrite this formula using the recursion $S(x) - S(x-1) = f(x)$ to obtain the expression:

$$\int_{N-1}^N S(x) dx= \int_{r-1}^r S(x) dx + \int_r^{N} f(x) dx\tag{3.6}$$

where $r$ is an arbitrary real number. One can then choose $r$ such that the integral doesn't have a constant term; the value of the summation is then given by the integral over $S(x)$. But this is only useful if one has an exact expression for $S(x)$, which requires being able to analytically continue the partial sum to real arguments.

In case of (3.5) one can resort to the asymptotic behavior of $S(N)$ for large values of $N$ to get to the analytic continuation to real arguments for the asymptotic behavior, by simply replacing $N$ by the continuous variable $x$.

4. Alternative formulation of the main result (3.5) that doesn't require integration

A major drawback of formulas (3.5) and (3.6) is that they require an analytic expression for the partial sums that can be analytically continued to the reals. If one has an analytic expression for the asymptotic behavior of the partial sums that captures the divergent part, then there is no problem: one can split the partial sums into the divergent part and the difference between the partial sums and the divergent part. One can then compute the limit to infinity of the latter part, or estimate it numerically, while the divergent part can be treated via formula (3.5).

In many cases where we only have numerical data, this method won't work well. We can then consider an alternative formulation of (3.5), obtained by evaluating the integral formally as follows.

We evaluate the integral in (3.5) by writing:

$$\int_{N-1}^N S(x) dx = \int_0^1 S(N-1+t) dt =\int_0^1 \left(1+\Delta\right)^t S(N-1)dt\tag{4.1}$$

where $\Delta$ is the forward difference operator:

$$\Delta g(x) = g(x+1) - g(x)\tag{4.2}$$

We then have:

$$\int_0^1 \left(1+\Delta\right)^t dt= \frac{\Delta}{\log\left(1+\Delta\right)}= 1+\frac{\Delta}{2} -\frac{\Delta^2}{12}+\frac{\Delta^3}{24}-\frac{19}{720}\Delta^4+\cdots\tag{4.3}$$

Using that $\Delta S(N-1) = f(N)$, we see that (4.1) can be written as:

$$\int_{N-1}^N S(x) dx = S(N-1) + \frac{f(N)}{2}-\frac{\Delta f(N)}{12}+\frac{\Delta^2 f(N)}{24} - \frac{19}{720}\Delta^3 f(N)+\cdots\tag{4.4}$$

This expansion in powers of the forward difference operator is most useful when, for some integer $r$, $\Delta^r f(N)$ tends to a constant in the limit of $N$ to infinity. In that case, we only need to consider a finite number of terms of the expansion to extract the constant part of the expression around infinity.

As an example, let's consider again the sum of the harmonic series, but now without making use of the explicit expression for the partial sum given by (0.6). We only need to use that the asymptotic form of $S(N-1)$ is given by $\log(N) + \gamma + {}$ terms that tend to zero at infinity, and that the summand also tends to zero at infinity, to see that the constant term at infinity of (4.4) yields Euler's constant $\gamma$.

Note that in this case, the subtraction method where we subtract the divergent logarithmic part and treat that term separately, also allows us to obtain this result without using the explicit expression (0.6).
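The truncated expansion (4.4) can be tested numerically on the harmonic series; with the summand $f(k)=1/k$, subtracting $\log(N)$ from (4.4) should leave Euler's constant (the cutoff $N = 1000$ is arbitrary):

```python
import math

N = 1000
f = lambda k: 1.0 / k                          # summand of the harmonic series
S_prev = math.fsum(f(k) for k in range(1, N))  # S(N - 1)

# forward differences of f at N, as used in (4.4)
d1 = f(N + 1) - f(N)
d2 = f(N + 2) - 2*f(N + 1) + f(N)
d3 = f(N + 3) - 3*f(N + 2) + 3*f(N + 1) - f(N)

integral = S_prev + f(N)/2 - d1/12 + d2/24 - 19*d3/720  # (4.4), truncated
gamma_est = integral - math.log(N)  # the integral behaves as log(N) + gamma
```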

5. Universal formulation of the smooth cutoff (regulator) method

A totally different approach to evaluating divergent series is to modify the summand using a parameter such that the series becomes convergent, and such that the desired summand, for which the summation is divergent, is recovered in the series expansion in powers of that parameter.

For example, to compute the sum of integers using this method, we may consider the function:

$$g(u) = \sum_{k=0}^{\infty}\exp(- u k)= \frac{1}{1-\exp(-u)}\tag{5.1}$$

One can then argue that the sum of the integers is minus the coefficient of $u$ in the expansion around $u = 0$, which indeed yields the correct value of $-\frac{1}{12}$. But there are some problems with this approach. Purely formally, the divergent summation over the positive integers would be given by minus the derivative at $u = 0$, but the expansion around zero is given by:

$$g(u) = \frac{1}{u} + \frac{1}{2}+\frac{u}{12} + \mathcal{O}\left(u^2\right)\tag{5.2}$$

One may argue that the divergent term should simply be disregarded, however, this does not yield a universal prescription, as can be seen by considering the function:

$$h(u) = g\left(u+ u^2\right)\tag{5.3}$$

The coefficient of $u$ should then yield the value of the sum over the positive integers, because the $u^2$ term shouldn't formally contribute, but that's not the case:

$$h(u) =\frac{1}{u} - \frac{1}{2}+\frac{13}{12}u + \mathcal{O}\left(u^2\right)\tag{5.4}$$

Things are going wrong here because the $u^2$ term makes a contribution to the term proportional to $u$ due to the presence of the $\frac{1}{u}$ term in $g(u)$.
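The two expansions (5.2) and (5.4) are easy to reproduce with sympy; here the coefficient of $u$ is extracted with limits after removing the $\frac1u$ pole:

```python
import sympy as sp

u = sp.symbols('u', positive=True)

g = 1 / (1 - sp.exp(-u))  # the regulated sum (5.1)
h = g.subs(u, u + u**2)   # the reparametrized version (5.3)

coeff_g = sp.limit(sp.diff(g - 1/u, u), u, 0)  # 1/12, as in (5.2)
coeff_h = sp.limit(sp.diff(h - 1/u, u), u, 0)  # 13/12, as in (5.4)
```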

To fix these problems, we can use the universal formula (3.5) to derive the correct formulation of the regulator method. We consider the case of a partial sum $S(N)$ for a convergent summation that contains a parameter, but we postpone taking the constant part around infinity and consider the full asymptotic expansion around infinity.

It is then convenient to use (4.4) and put

$$S(N-1) = S - \sum_{k=N}^{\infty}f(k)\tag{5.5}$$

where $S$ is the sum of the series in there, to obtain:

$$\int_{N-1}^N S(x) dx = S - \sum_{k=N}^{\infty}f(k)+ \frac{f(N)}{2}-\frac{\Delta f(N)}{12}+\frac{\Delta^2 f(N)}{24}+\cdots\tag{5.6}$$

Note that

$$\Delta \sum_{k=N}^{\infty}f(k) = -f(N)\tag{5.7}$$

therefore, we formally have:

$$\sum_{k=N}^{\infty}f(k) = -\frac{f(N)}{\Delta}\tag{5.8}$$

We can then write (5.6) as:

$$\int_{N-1}^N S(x) dx = S +\frac{f(N)}{\Delta}+ \frac{f(N)}{2}-\frac{\Delta f(N)}{12}+\frac{\Delta^2 f(N)}{24}+\cdots\tag{5.9}$$

We can then see from (4.3) that the sum of the entire expansion can be written as:

$$\int_{N-1}^N S(x) dx = S +\frac{f(N)}{\log\left(1+\Delta\right)}\tag{5.10}$$

We can then write the r.h.s. as an integral:

$$\frac{f(N)}{\log\left(1+\Delta\right)} = -\int_0^{\infty}\left(1+\Delta\right)^tf(N)dt = -\int_N^{\infty}f(x)dx\tag{5.11}$$

So, we get the result:

$$\int_{N-1}^N S(x) dx = S - \int_N^{\infty}f(x)dx\tag{5.12}$$

This tells us that, to obtain the correct value of a divergent summation via a regularized summation, one must subtract the integral of the regularized summand from $N$ to infinity, and then compute the constant term of the expansion around infinity in $N$. That expression yields the correct value of the divergent summation.
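Result (5.12) can be sanity-checked on a convergent example. For the geometric series with ratio $\tfrac{1}{2}$, the partial sum is $S(t) = 2\left(1 - \left(\tfrac{1}{2}\right)^{t+1}\right)$ and the full sum is $S = 2$. A SymPy sketch of my own (the concrete ratio $\tfrac12$ is assumed so both integrals converge):

```python
import sympy as sp

N, t = sp.symbols('N t', positive=True)
f = sp.Rational(1, 2)**t                          # summand (1/2)^k, continued to real t
S_partial = 2 * (1 - sp.Rational(1, 2)**(t + 1))  # partial sum S(t)
S = 2                                             # the full sum

lhs = sp.integrate(S_partial, (t, N - 1, N))      # left-hand side of (5.12)
rhs = S - sp.integrate(f, (t, N, sp.oo))          # right-hand side of (5.12)
print(sp.simplify(lhs - rhs))
```

Both sides equal $2 - \frac{2^{-N}}{\log 2}$, as (5.12) predicts.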

As an example, we consider again the regularized summation $h(u)$ given by (5.3):

$$h(u) = \sum_{k=0}^{\infty} \exp\left[-\left(u+u^2\right)k\right]=\frac{1}{1-\exp\left[-\left(u+u^2\right)\right]}\tag{5.13}$$

The integral of the summand from $N$ to infinity is then

$$\int_N^{\infty}\exp\left[-\left(u+u^2\right)x\right]dx= \frac{\exp\left[-\left(u+u^2\right)N\right]}{u+u^2}\tag{5.14}$$

Expanding this in powers of $u$ yields the expression:

$$\frac{1}{u} -1-N + \left(1+\frac{N^2}{2}\right) u +\mathcal{O}\left(u^2\right) \tag{5.15}$$

Subtracting from (5.4) the terms of this expansion that are constant in $N$ (i.e. $\frac{1}{u} - 1 + u$) yields:

$$\frac{1}{2} +\frac{u}{12} + \mathcal{O}\left(u^2\right)$$

And we see that minus the coefficient of $u$ again yields the correct value of the sum of the positive integers. We also see that the sum of $1$ from zero to infinity is now correctly represented by the constant term in $u$, and that the divergent $\frac{1}{u}$ term has vanished, so these results also follow from the derivatives w.r.t. $u$ at $u = 0$.
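The whole computation (5.13)–(5.15), including the subtraction step, can be reproduced symbolically. In this SymPy sketch of my own, the $u$-coefficients come out as polynomials in $N$, so the constant part of the large-$N$ expansion is obtained simply by setting $N = 0$:

```python
import sympy as sp

u, N = sp.symbols('u N')
v = u + u**2
h = 1 / (1 - sp.exp(-v))   # regularized sum, cf. (5.13)
tail = sp.exp(-v * N) / v  # tail integral, cf. (5.14)

diff = sp.expand(sp.series(h - tail, u, 0, 2).removeO())
# the u-coefficients of diff are polynomials in N, so con_N is just N -> 0 here
const_part = diff.subs(N, 0)
print(const_part)
```

Minus the coefficient of $u$ again gives $-\tfrac{1}{12}$, and the $\tfrac{1}{u}$ pole has cancelled.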

6. Discussion and conclusion

I've argued, based on the way series actually present themselves to us when we're doing computations, i.e. with a remainder term, that the correct way to generalize the definition of the sum $S$ of a convergent series,

$$S = \lim_{N\to\infty}S(N)\tag{6.1} $$

where $S(N)$ is the partial sum obtained by summing the first $N$ terms of the series, is to use the expression:

$$S = \operatorname*{con}_{N}\left[\int_{N-1}^{N}S(x)dx\right]\tag{6.2} $$

where $\displaystyle \operatorname*{con}_{N}$ denotes the operation of extracting the constant term of the large $N$ asymptotic expansion.

Here one needs to invoke analytic continuation of the partial sum to real arguments. The conditions required for Carlson's theorem are then sufficient, and likely necessary, in the case of the form (3.6). However, one could argue that (6.2) only requires an asymptotic expansion around the point at infinity; mapping that point to the origin would then yield an analytic continuation that is not unique if the conditions of Carlson's theorem are not met, but that can still make the integral in (6.2) uniquely determined for a wider class of cases.

An interesting corollary of (6.2) concerns the sum of the derivative of a function when the partial sum of the function is known in analytic form. If:

$$S(n) = \sum_{k=0}^{n}f(k)\tag{6.3}$$

and we have an analytic expression for $S(x)$ (assumed to be unique by Carlson's theorem), then we have:

$$\sum_{k=0}^{\infty}f'(k)= \operatorname*{con}_{N} f(N) - S'(-1)\tag{6.5}$$

To derive this, we can use that the derivative of the partial sum $S(x)$ satisfies the recursion relation for the partial sum of $f'(k)$, as can be seen by differentiating both sides of the recursion for the partial sum of $f(k)$:

$$S(x) - S(x-1) = f(x)\tag{6.6}$$

But $S'(x)$ does not necessarily satisfy the boundary condition for the partial sum at $x = -1$, i.e. that it vanishes there. This is remedied by subtracting $S'(-1)$ from $S'(x)$, so the correct partial sum of $f'(x)$ is $S'(x) - S'(-1)$. Applying (6.2) to this, the integral of the partial sum from $N-1$ to $N$ is given by:

$$\int_{N-1}^{N} \left[S'(x) - S'(-1)\right]dx = S(N) - S(N-1) - S'(-1)= f(N) - S'(-1)\tag{6.7}$$

From which (6.5) follows by extracting the constant term of the large $N$ expansion. A simple example is to compute again the sum of the positive integers by considering the formula for the partial sum of the squares of integers:

$$S(n) = \sum_{k=0}^{n} k^2 = \frac{1}{6} n (n+1)(2n+1)\tag{6.8}$$

We then have $f(x) = x^2$, which has no constant term in the expansion around infinity. We then see that twice the sum of the positive integers is given by $-S'(-1)$, which is $-\frac{1}{6}$; therefore the sum of the positive integers is, as expected, found to be $-\frac{1}{12}$.
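This example is easy to verify symbolically; a SymPy sketch of my own checking (6.5) against (6.8):

```python
import sympy as sp

x = sp.symbols('x')
S = x * (x + 1) * (2 * x + 1) / 6  # partial sum of k^2, cf. (6.8)

S_prime_at_minus_1 = sp.diff(S, x).subs(x, -1)
# f(N) = N^2 has no constant term in its large-N expansion, so con_N f(N) = 0;
# by (6.5), sum_{k>=0} 2k = -S'(-1), hence the regularized sum of the
# positive integers is -S'(-1)/2
regularized_sum = -S_prime_at_minus_1 / 2
print(regularized_sum)
```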

  • The problem here is that you cannot always interpolate the partial sum so as to take the integral. – Anixx Apr 06 '25 at 14:47
  • Newton series interpolation, for instance, will not always work. – Anixx Apr 06 '25 at 14:51
  • 1
    I prefer this formula written in the following way (so as to have a forward difference): $\int_{\omega }^{\omega +1} \left(\sum _{k=0}^{x-1} k\right) \, dx$. This way you will get an expression in countable surreals, corresponding to this series. – Anixx Apr 06 '25 at 14:53
  • @Anixx Yes, I agree that there are some hidden assumptions here that need to be made more explicit. – Saibal Mitra Apr 06 '25 at 16:07
  • If we had a formula to interpolate all sequences in natural way, we would be able to sum any series. – Anixx Apr 06 '25 at 16:28
  • @Anixx Carlson's theorem can be invoked, but that requires the conditions for that theorem to be met: https://en.wikipedia.org/wiki/Carlson%27s_theorem – Saibal Mitra Apr 06 '25 at 16:59
  • @Anixx I think the problem may not be as severe as it seems, because we do have formulas such as the Euler-Maclaurin formula and the Poisson summation formula, which express sums of a function over the integers in terms of integrals over these functions. So these are then invariant under a change of interpolation that leaves the values at the integers invariant. – Saibal Mitra Apr 06 '25 at 17:08
  • To use these formulas you need f(x) in smooth form already. – Anixx Apr 06 '25 at 17:11
  • For many sequences, there is NO analytic function that coincides with them on integers (yet, some of them still can be regularized). – Anixx Apr 06 '25 at 17:13
  • My opinion, if the sequence is definable or stochastic, it can be regularized (by more complicated methods). But if the sequence is undefinable even stochastically, it cannot be regularized. – Anixx Apr 06 '25 at 17:32
  • 1
    You may be interested in this paper: https://arxiv.org/abs/2411.00296 – Anixx Apr 06 '25 at 18:13
  • @Anixx Thanks, I've downloaded the paper! – Saibal Mitra Apr 07 '25 at 21:39