0. A single formula for the sum of a wide class of series
I show here that a series whose partial sum $S(n)$ satisfies the conditions of Carlson's theorem, allowing $S(n)$ to be analytically continued in a unique way, can be correctly summed using the formula:
$$S = \operatorname*{con}_{N}\left[\int_{N-1}^{N}S(x)dx\right]\tag{0.1} $$
where $\displaystyle \operatorname*{con}_{N}$ denotes the operation of extracting the constant term of the large $N$ asymptotic expansion.
Before I present my argument, let's consider some examples of (0.1) applied to divergent series. In case of the sum of the positive integers, we have $S(x) = \frac{1}{2} x (x+1)$, and:
$$\int_{N-1}^N S(x) dx = \frac{N^2}{2} - \frac{1}{12}\tag{0.2}$$
The sum is then the constant term of this expression, $-\frac{1}{12}$.
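As a quick check, the computation behind (0.2) can be reproduced with sympy (assuming it is available):

```python
# Verify (0.2): integrate the continued partial sum S(x) = x(x+1)/2 of
# 1 + 2 + 3 + ... over [N-1, N] and read off the constant term.
import sympy as sp

N, x = sp.symbols('N x', positive=True)
S = x * (x + 1) / 2                          # analytic continuation of the partial sum

integral = sp.expand(sp.integrate(S, (x, N - 1, N)))   # N**2/2 - 1/12
const_term = integral.subs(N, 0)             # valid here: only N**2 and a constant appear
```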
The alternating sum:
$$\sum_{k=1}^{\infty}(-1)^{k+1} k\tag{0.3}$$
can also be handled by (0.1). In this case we have:
$$S(x) = \frac{1-(2 x + 1)\exp(i\pi x)}{4}\tag{0.4}$$
Integrating from $N-1$ to $N$ yields:
$$\int_{N-1}^N S(x) dx =\frac{1}{4}+\frac{e^{i \pi N} (-1+i \pi N)}{\pi ^2}\tag{0.5}$$
The constant term is then $\frac{1}{4}$; this value of the sum is also what one finds using other methods, e.g. Abel summation or higher-order Cesàro summation.
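For readers who want to verify (0.4) and (0.5) directly, here is a small sympy sketch; a numerical comparison at a few values of $N$ stands in for a full symbolic check:

```python
# Verify (0.4)-(0.5) for 1 - 2 + 3 - 4 + ...: the continued partial sum at the
# integers, and the integral over [N-1, N] against the closed form (0.5).
import sympy as sp

N, x = sp.symbols('N x')
S = (1 - (2*x + 1) * sp.exp(sp.I * sp.pi * x)) / 4

# S(x) reproduces the partial sums 1, -1, 2, -2, ... at the integers:
partials = [sum((-1)**(k + 1) * k for k in range(1, n + 1)) for n in range(1, 7)]
continued = [sp.simplify(S.subs(x, n)) for n in range(1, 7)]

integral = sp.integrate(S, (x, N - 1, N))
formula = sp.Rational(1, 4) + sp.exp(sp.I * sp.pi * N) * (sp.I * sp.pi * N - 1) / sp.pi**2
```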
The sum of the harmonic series can be computed as follows. The partial sum can be written as:
$$S(n) = \sum_{k=1}^n \frac{1}{k} = \sum_{k=1}^{\infty}\left(\frac{1}{k} - \frac{1}{k+n}\right)\tag{0.6}$$
The r.h.s. is then also defined for real values of $n$, which is the unique analytic continuation per Carlson's theorem. The integral from $N-1$ to $N$ can then be written as:
$$\int_{N-1}^N S(x) dx = \sum_{k=1}^{\infty}\left[\frac{1}{k}-\log(k+N)+\log(k+N-1)\right]\tag{0.7}$$
Truncating the summation in here at $M$ and taking the limit of $M$ to infinity yields:
$$\int_{N-1}^N S(x) dx = \lim_{M\to\infty}\left[\sum_{k=1}^M\frac{1}{k} - \log(M+N)\right]+ \log(N)\tag{0.8}$$
The constant term in here is the value of the limit, which is Euler's constant $\gamma = 0.577215664901532\ldots$
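A plain numerical check of (0.8), assuming nothing beyond the truncated sum, confirms that the limit is independent of $N$ and equals $\gamma$:

```python
# Truncate the limit in (0.8): H_M - log(M + N) approaches Euler's constant
# for large M, for any fixed N.
import math

def gamma_estimate(N, M):
    H = sum(1.0 / k for k in range(1, M + 1))
    return H - math.log(M + N)

gamma_ref = 0.5772156649015329
est = gamma_estimate(N=5, M=10**6)
```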
To compute the sum of the logarithm of the positive integers, we can write the partial sum as the logarithm of the factorial and then use Stirling's formula for the asymptotic expansion. We have:
$$S(n) = \log(n!) = n\log(n) -n + \frac{1}{2}\log(n) + \frac{1}{2}\log(2\pi) +\mathcal{O}\left(\frac{1}{n}\right)\tag{0.9}$$
The integration of a term of order $1/x$ from $N-1$ to $N$ will obviously tend to zero in the large $N$ limit and will therefore not contribute to the constant term. For the term proportional to $\log(n)$, we have:
$$\int_{N-1}^N\log(x) dx = N \log(N) - (N-1)\log(N-1) - 1\tag{0.10}$$
For large $N$, the asymptotic expansion is:
$$\int_{N-1}^N\log(x) dx =\log(N) + \mathcal{O}\left(\frac{1}{N}\right) \tag{0.11}$$
So, the term proportional to $\log(n)$ in $S(n)$ does not contribute to the constant term.
For the term proportional to $n\log(n)$, we have:
$$\int_{N-1}^N x \log(x) dx = \frac{1}{2}\left[N^2 \log(N) - (N-1)^2\log(N-1) - N+\frac{1}{2}\right]\tag{0.12}$$
Expanding this for large $N$ yields:
$$\int_{N-1}^N x \log(x) dx = N\log(N)-\frac{1}{2}\log(N)-\frac{1}{2}+\mathcal{O}\left(\frac{1}{N}\right)\tag{0.13} $$
So, this term yields a contribution of $-\frac{1}{2}$.
The $-n$ term yields a contribution of $\frac{1}{2}$, because:
$$\int_{N-1}^Nx dx = \frac{1}{2}\left[N^2 -(N-1)^2\right] = N -\frac{1}{2}\tag{0.14}$$
We're then left with the constant term in $S(n)$ of $\frac{1}{2}\log(2\pi)$ as the value of the sum of the logarithms of the positive integers.
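This can be confirmed numerically with mpmath. In fact, by Raabe's integral formula, $\int_{N-1}^N \log\Gamma(x+1)dx = N\log N - N + \frac{1}{2}\log(2\pi)$ exactly, so subtracting the divergent part leaves the constant $\frac{1}{2}\log(2\pi)$ up to quadrature error:

```python
# Verify numerically: the integral of S(x) = log Gamma(x+1) over [N-1, N]
# minus its divergent part N log N - N equals log(2 pi)/2 (exactly, by
# Raabe's formula), up to quadrature error.
import mpmath as mp

mp.mp.dps = 30
N = 50
integral = mp.quad(lambda x: mp.loggamma(x + 1), [N - 1, N])
const_term = integral - (N * mp.log(N) - N)
target = mp.log(2 * mp.pi) / 2
```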
Let's then proceed with the derivation of the summation formula (0.1).
1. Introduction: What does the sum of a series really mean?
One can make progress toward getting to a more universal way of summing divergent series, by taking a step back and considering how series, convergent or divergent, actually arise in mathematical computations. So, let's temporarily put aside the standard definition of the value of a convergent series as being the limit of the partial sum, and consider what the math itself is telling us when we actually perform a calculation that yields the answer in the form of a series.
Whether your computation involves performing a Taylor expansion, a long division, or an asymptotic expansion, the rigorous result for the quantity $X$ you are computing is always of the form:
$$X = S(N) + R(N) \tag{1.1}$$
where $S(N)$ is the partial sum obtained by keeping the first $N$ terms of the series, and $R(N)$ is the remainder term. Of course, this doesn't look all that useful, because we won't know what the remainder term is, although the relevant theorems of the methods used to obtain the series will often allow you to bound the remainder term and thereby get to approximations for $X$.
If the series converges, i.e. if $\lim_{N\to\infty} S(N)$ exists, then $\lim_{N\to\infty}R(N)$ exists as well. Assuming that $X$ is analytic as a function of the expansion parameter then implies that $\lim_{N\to\infty}R(N) = 0$. In that case, we have:
$$X = \lim_{N\to\infty} S(N)\tag{1.2}$$
When only a series is specified and we want to have a notion of the sum of the series, what we really have in mind is some quantity $X$ that is maximally analytic, in the sense that any nonanalytic behavior is dictated by the specified series, and that yields the specified series when expanded.
The value of the series is then given by (1.1), and in case the series is convergent, we can eliminate the unknown remainder term by taking the limit of the partial sum, which gives formula (1.2). This strongly suggests that the procedure of taking the limit of the partial sum is not fundamental: both divergent and convergent series are treated on an equal footing by the math that's actually involved in generating series. These series are then supposed to be truncated, and a remainder term is supposed to be added.
The problem of summing divergent series can then be reduced to that of eliminating the remainder term.
2. Analytic continuation and Ramanujan summation as a means to find the remainder term
One way to get a handle on the unknown remainder terms of divergent series is to perform an analytic continuation of the series. We then add a new parameter, and if the series is convergent in some region of the parameter space, then the sum of the analytically continued series is the limit of the partial sum (1.2), and we then also have an expression for the remainder term as the difference between the value of the series and the partial sum.
If we then analytically continue this result for the remainder term back to the original value of the parameter which yields the original divergent series, then we obtain the remainder term for this series and thereby the sum of the series. But it's then easy to see that doing so amounts to just analytically continuing the series itself.
We can also use the Euler–Maclaurin formula to get to an expression for the remainder term of a convergent series and then argue on the basis of analytic continuation what the remainder term of a general divergent series should be. If we proceed this way, then this results in a version of Ramanujan summation. For a convergent series $S = \sum_{k=0}^{\infty}f(k)$, the remainder term $R(N)$ is given by:
$$R(N) = \sum_{k = N+1}^{\infty}f(k)\tag{2.1}$$
We can then apply the Euler–Maclaurin formula to this. It's convenient to change the upper limit from infinity to $M$, so that we can later modify the expression for divergent series. We then have a modified remainder term $R(N,M)$:
$$R(N,M) = \sum_{k = N+1}^{M}f(k)\tag{2.2}$$
We then apply the Euler–Maclaurin formula in the following way:
$$\begin{split}&R(N,M) = \sum_{k = N+1}^{M}f(k) = \sum_{k = N}^{M}f(k) - f(N)\\& = \int_N^Mf(x) dx + \frac{1}{2}\left[f(M) - f(N)\right] +\sum_{k=1}^{\infty}\frac{B_{2k}}{(2k)!}\left[f^{(2k-1)}(M) - f^{(2k-1)}(N)\right]\\ &= F(M) - F(N)+ \frac{1}{2}\left[f(M) - f(N)\right] +\sum_{k=1}^{\infty}\frac{B_{2k}}{(2k)!}\left[f^{(2k-1)}(M) - f^{(2k-1)}(N)\right]\end{split}\tag{2.3}$$
Here $F(x)$ is the indefinite integral of $f(x)$, and the summation to infinity inside the Euler–Maclaurin formula is to be interpreted as a truncated summation plus the remainder term for that formula. Here and in the following, we assume that the conditions of Carlson's theorem are met and that summands, remainder terms and partial sums are uniquely defined for real and complex arguments.
In the case of a convergent summation, all the terms involving $f(M)$ and its derivatives tend to zero in the limit of $M$ to infinity. This means that if we modify a divergent series by adding a parameter so that it becomes convergent in some region of parameter space, and then analytically continue back to the point where we recover the original series, we end up with an expression for the remainder term containing only the terms involving $f(N)$ and its derivatives.
However, we need to be careful with $F(M) - F(N)$ here. In case of a convergent series, we can replace this by $-F(N)$, with the prescription to choose the integration constant such that there is no constant term at infinity (the asymptotic expansion around infinity does not contain a constant term). This prescription is then valid for the divergent case as well.
We are then led to the general expression for the remainder term that's valid for both divergent and convergent series:
$$R(N) = - F(N) - \frac{1}{2}f(N) -\sum_{k=1}^{\infty}\frac{B_{2k}}{(2k)!} f^{(2k-1)}(N)\tag{2.4}$$
Using this expression for the remainder term amounts to using a version of Ramanujan summation. The traditional version, involving the function, antiderivative and derivatives at zero, follows in a straightforward way from (1.1) for $N = 0$, in which case $S(0) = f(0)$, and you then get (2.4) with the sign of $\frac{1}{2}f(0)$ flipped to positive for the expression for the sum of the series.
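As a sanity check of (2.4) in a convergent case, take $f(k) = 1/k^2$; truncating the Bernoulli sum after the $B_2$ term already matches the exact remainder to high accuracy (a sketch using mpmath):

```python
# Check (2.4) for f(k) = 1/k^2: R(N) = sum_{k>N} 1/k^2 versus
# -F(N) - f(N)/2 - (B_2/2!) f'(N), with F(x) = -1/x chosen so that it has
# no constant term at infinity.
import mpmath as mp

mp.mp.dps = 30
N = 20
exact_R = mp.nsum(lambda k: 1 / k**2, [N + 1, mp.inf])

F_N = -mp.mpf(1) / N                     # antiderivative of 1/x^2 at N
f_N = mp.mpf(1) / N**2
fp_N = -2 / mp.mpf(N)**3                 # f'(N)
B2 = mp.mpf(1) / 6
approx_R = -F_N - f_N / 2 - (B2 / 2) * fp_N   # 1/N - 1/(2 N^2) + 1/(6 N^3)
```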
3. Toward a universal formula for the sum of series
One can interpret formula (2.4) for the remainder term as a universal formula, but it's not always practical to use (2.4). There exists, however, a way to eliminate the need to deal with (2.4), by observing that according to (1.1), the value of the series, obtained by adding the remainder $R(N)$ to the partial sum $S(N)$, can also be obtained by extracting the constant term of the large $N$ expansion of $S(N) + R(N)$. Obviously, adding $R(N)$ to $S(N)$ has to have the effect of eliminating all the terms in $S(N)$ that are not constant, with the sum of the series given by the sum of the constant term in $S(n)$ and the constant term in $R(n)$.
We can then find the constant term in $R(n)$ by using the fact that there is no constant term in $F(n)$ in (2.4) and expressing $F(n)$ in terms of $R(n)$. We can do this as follows. We start by considering the case of a convergent series where we also cut off the summation at the upper limit $M$ as in formula (2.2). It follows from this that $R(x,M)$ satisfies the recursion:
$$R(x,M) - R(x-1,M) = -f(x)\tag{3.1}$$
Restoring the upper limit of $M$ in (2.4) changes $F(N)$ in there to $F(N,M)$:
$$F(N,M) = -\int_N^Mf(x)dx\tag{3.2}$$
Inserting (3.1) in here, yields:
$$\begin{split}F(N,M)& = \int_N^M\left[R(x,M) - R(x-1,M)\right]dx\\&= \int_{M-1}^M R(x,M) dx - \int_{N-1}^N R(x,M) dx\end{split}\tag{3.3}$$
This means that in the divergent case, when we obtain this expression via analytic continuation from the convergent case after taking the limit of $M$ to infinity, we have:
$$F(N) = -\int_{N-1}^N R(x) dx\tag{3.4}$$
We can then evaluate the constant term in here by expanding this around infinity. Only the divergent and the unknown constant term in $R(x)$ will then contribute to this. Equating the constant term in $F(N)$ to zero, then yields the constant term in $R(x)$ as:
$$-\operatorname*{con}_{N}\int_{N-1}^N D(x)dx\tag{3.4'}$$
where $\displaystyle \operatorname*{con}_{N}$ denotes extracting the constant term from the large $N$ asymptotic expansion, and $D(x)$ denotes the divergent terms in $R(x)$. The divergent part $D(x)$ is then equal to minus the divergent part of $S(x)$. If we then put $D(x) = -S(x)$ in (3.4'), then we get the correct contribution to the constant term coming from $R(x)$, and added to that is then the constant term coming from $S(x)$, while the terms in $S(x)$ that tend to zero don't make any contribution. So, we see that putting $D(x) = -S(x)$ in (3.4') will yield the correct value of the infinite sum as:
$$\operatorname*{con}_{N}\int_{N-1}^N S(x) dx\tag{3.5}$$
One may rewrite this formula using the recursion $S(x) - S(x-1) = f(x)$ to obtain the expression:
$$\int_{N-1}^N S(x) dx= \int_{r-1}^r S(x) dx + \int_r^{N} f(x) dx\tag{3.6}$$
where $r$ is an arbitrary real number. One can then choose $r$ such that $\int_r^N f(x) dx$ doesn't have a constant term, and the value of the summation is then given by the integral of $S(x)$ over $[r-1, r]$. But this is only useful if one has an exact expression for $S(x)$, which requires being able to analytically continue the partial sum to real arguments.
In case of (3.5) one can resort to the asymptotic behavior of $S(N)$ for large values of $N$ to get to the analytic continuation to real arguments for the asymptotic behavior, by simply replacing $N$ by the continuous variable $x$.
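As an illustration of (3.6), take the sum of the positive integers: with $f(x) = x$ the integral $\int_r^N f(x)dx = (N^2 - r^2)/2$ has no constant term for $r = 0$, so the sum reduces to $\int_{-1}^0 S(x)dx$. A short sympy check confirms the value:

```python
# Verify the r = 0 case of (3.6) for 1 + 2 + 3 + ...:
# the sum equals the integral of S(x) = x(x+1)/2 over [r-1, r] = [-1, 0].
import sympy as sp

x = sp.symbols('x')
S = x * (x + 1) / 2
value = sp.integrate(S, (x, -1, 0))
```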
4. Alternative formulation of the main result (3.5) that doesn't require integration
A major drawback of the formulas (3.5) and (3.6) is that they require one to have an analytic expression for the partial sums that can be analytically continued to the reals. If one has an analytic expression for the asymptotic behavior of the partial sums that captures the divergent part, then there is no problem, because in that case one can split the partial sums up into the divergent part and the difference between the partial sums and the divergent part. One can then compute the limit at infinity of the latter part, or estimate it numerically, while the divergent part can be treated via formula (3.5).
In many cases where we only have numerical data, this method won't work well, and we can then consider an alternative formulation of (3.5) obtained by evaluating the integral formally as follows.
We evaluate the integral in (3.5) by writing:
$$\int_{N-1}^N S(x) dx = \int_0^1 S(N-1+t) dt =\int_0^1 \left(1+\Delta\right)^t S(N-1)dt\tag{4.1}$$
where $\Delta$ is the forward difference operator:
$$\Delta g(x) = g(x+1) - g(x)\tag{4.2}$$
We then have:
$$\int_0^1 \left(1+\Delta\right)^t dt= \frac{\Delta}{\log\left(1+\Delta\right)}= 1+\frac{\Delta}{2} -\frac{\Delta^2}{12}+\frac{\Delta^3}{24}-\frac{19}{720}\Delta^4+\cdots\tag{4.3}$$
Using that $\Delta S(N-1) = f(N)$, we see that (4.1) can be written as:
$$\int_{N-1}^N S(x) dx = S(N-1) + \frac{f(N)}{2}-\frac{\Delta f(N)}{12}+\frac{\Delta^2 f(N)}{24} - \frac{19}{720}\Delta^3 f(N)+\cdots\tag{4.4}$$
This expansion in powers of the forward difference operator is then most useful, when for some integer $r$ we have that $\Delta^r f(N)$ tends to a constant in the limit of $N$ to infinity. In that case, we only need to consider a finite number of terms of the expansion to extract the constant part of the expression around infinity.
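The coefficients in (4.3) are easy to confirm with sympy:

```python
# Verify the expansion (4.3): Taylor coefficients of t/log(1+t).
import sympy as sp

t = sp.symbols('t')
expansion = sp.series(t / sp.log(1 + t), t, 0, 5).removeO()
coeffs = [expansion.coeff(t, k) for k in range(5)]   # 1, 1/2, -1/12, 1/24, -19/720
```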
As an example, let's consider again the sum of the harmonic series, but now without making use of the explicit expression for the partial sum given by (0.6). We only need to use that the asymptotic form of $S(N-1)$ is given by $\log(N) + \gamma +$ terms that tend to zero at infinity, and that the summand also tends to zero at infinity, to see that the constant term at infinity of (4.4) yields Euler's constant $\gamma$.
Note that in this case, the subtraction method where we subtract the divergent logarithmic part and treat that term separately, also allows us to obtain this result without using the explicit expression (0.6).
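A small numerical sketch of (4.4) for the harmonic series, using only partial sums and forward differences of the summand, with the divergent $\log(N)$ term stripped by hand:

```python
# Apply (4.4) to the harmonic series: S(N-1) plus a few forward-difference
# corrections of f(N) = 1/N should reproduce gamma + log(N) closely.
import math

def delta(g):
    """Forward difference operator (4.2)."""
    return lambda n: g(n + 1) - g(n)

f = lambda n: 1.0 / n
N = 10
S_Nm1 = sum(1.0 / k for k in range(1, N))        # S(N-1) = H_{N-1}

integral = (S_Nm1 + f(N) / 2 - delta(f)(N) / 12
            + delta(delta(f))(N) / 24 - 19 * delta(delta(delta(f)))(N) / 720)

gamma_est = integral - math.log(N)               # strip the divergent log(N) term
gamma_ref = 0.5772156649015329
```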
5. Universal formulation of the smooth cutoff (regulator) method
A totally different approach to evaluating divergent series is to modify the summand using a parameter, such that the series becomes convergent, and such that the desired summand, for which the summation is divergent, is obtained as a term in the series expansion in powers of that parameter.
For example, to compute the sum of integers using this method, we may consider the function:
$$g(u) = \sum_{k=0}^{\infty}\exp(- u k)= \frac{1}{1-\exp(-u)}\tag{5.1}$$
One can then argue that the sum of the integers is minus the coefficient of $u$ in the expansion around $u = 0$, which indeed yields the correct value of $-\frac{1}{12}$. But there are then some problems with this approach. Purely formally, the divergent summation over the positive integers would be given by minus the derivative at $u = 0$, but the expansion around zero is given by:
$$g(u) = \frac{1}{u} + \frac{1}{2}+\frac{u}{12} + \mathcal{O}\left(u^2\right)\tag{5.2}$$
One may argue that the divergent term should simply be disregarded, however, this does not yield a universal prescription, as can be seen by considering the function:
$$h(u) = g\left(u+ u^2\right)\tag{5.3}$$
The coefficient of $u$ should then yield the value of the sum over the positive integers, because the $u^2$ term shouldn't formally contribute, but that's not the case:
$$h(u) =\frac{1}{u} - \frac{1}{2}+\frac{13}{12}u + \mathcal{O}\left(u^2\right)\tag{5.4}$$
Things are going wrong here because the $u^2$ term makes a contribution to the term proportional to $u$ due to the presence of the $\frac{1}{u}$ term in $g(u)$.
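The two expansions (5.2) and (5.4) can be checked with sympy:

```python
# Verify (5.2) and (5.4): the reparametrization u -> u + u^2 shifts the
# coefficient of u because of the 1/u pole.
import sympy as sp

u = sp.symbols('u')
g = 1 / (1 - sp.exp(-u))
h = g.subs(u, u + u**2)

g_coeffs = [sp.series(g, u, 0, 3).removeO().coeff(u, k) for k in (-1, 0, 1)]
h_coeffs = [sp.series(h, u, 0, 3).removeO().coeff(u, k) for k in (-1, 0, 1)]
```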
To fix these problems, we can use the universal formula (3.5) to derive the correct formulation of the regulator method. We then consider a partial sum $S(N)$ of a convergent summation that contains a parameter, but we postpone taking the constant part around infinity and consider the asymptotic expansion around infinity.
It is then convenient to use (4.4) and put
$$S(N-1) = S - \sum_{k=N}^{\infty}f(k)\tag{5.5}$$
where $S$ is the sum of the series in there, to obtain:
$$\int_{N-1}^N S(x) dx = S - \sum_{k=N}^{\infty}f(k)+ \frac{f(N)}{2}-\frac{\Delta f(N)}{12}+\frac{\Delta^2 f(N)}{24}+\cdots\tag{5.6}$$
Note that
$$\Delta \sum_{k=N}^{\infty}f(k) = -f(N)\tag{5.7}$$
therefore, we formally have:
$$\sum_{k=N}^{\infty}f(k) = -\frac{f(N)}{\Delta}\tag{5.8}$$
We can then write (5.6) as:
$$\int_{N-1}^N S(x) dx = S +\frac{f(N)}{\Delta}+ \frac{f(N)}{2}-\frac{\Delta f(N)}{12}+\frac{\Delta^2 f(N)}{24}+\cdots\tag{5.9}$$
We can then see from (4.3) that the sum of the entire expansion can be written as:
$$\int_{N-1}^N S(x) dx = S +\frac{f(N)}{\log\left(1+\Delta\right)}\tag{5.10}$$
We can then write the second term on the r.h.s. as an integral:
$$\frac{f(N)}{\log\left(1+\Delta\right)} = -\int_0^{\infty}\left(1+\Delta\right)^tf(N)dt = -\int_N^{\infty}f(x)dx\tag{5.11}$$
So, we get the result:
$$\int_{N-1}^N S(x) dx = S - \int_N^{\infty}f(x)dx\tag{5.12}$$
This tells us that obtaining the correct value of a divergent summation via a regularized summation requires one to subtract the integral from $N$ to infinity of the regularized summand. One then computes the constant term of the expansion around infinity in $N$; that expression will yield the correct value for the divergent summation.
As an example, we consider again the regularized summation $h(u)$ given by (5.3):
$$h(u) = \sum_{k=0}^{\infty} \exp\left[-\left(u+u^2\right)k\right]=\frac{1}{1-\exp\left[-\left(u+u^2\right)\right]}\tag{5.13}$$
The integral of the summand from $N$ to infinity is then:
$$\int_N^{\infty}\exp\left[-\left(u+u^2\right)x\right]dx= \frac{\exp\left[-\left(u+u^2\right)N\right]}{u+u^2}\tag{5.14}$$
Expanding this in powers of $u$ yields the expression:
$$\frac{1}{u} -1-N + \left(1+\frac{N^2}{2}\right) u +\mathcal{O}\left(u^2\right) \tag{5.15}$$
Subtracting the part of this expansion that is constant in $N$ from (5.4) yields:
$$\frac{1}{2} +\frac{u}{12} + \mathcal{O}\left(u^2\right)$$
And we see that minus the coefficient of $u$ again yields the correct value of the sum of the positive integers. We also see that the sum of $1$ from zero to infinity is now correctly represented by the constant term in $u$, and that the divergent term in $u$ has vanished, so these results then also follow from the derivatives w.r.t. $u$ at $u = 0$.
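The whole correction procedure can be carried out symbolically with sympy, confirming the final expansion above:

```python
# Carry out the corrected regulator prescription (5.12) for h(u):
# subtract the part of the tail integral (5.15) that is constant in N.
import sympy as sp

u, N = sp.symbols('u N', positive=True)
h = 1 / (1 - sp.exp(-(u + u**2)))
tail = sp.exp(-(u + u**2) * N) / (u + u**2)            # the integral (5.14)

tail_series = sp.expand(sp.series(tail, u, 0, 3).removeO())
const_in_N = sum(t for t in sp.Add.make_args(tail_series) if not t.has(N))

h_series = sp.series(h, u, 0, 3).removeO()
corrected = sp.expand(h_series - const_in_N)           # 1/2 + u/12 + ...
```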
6. Discussion and conclusion
I've argued, based on the way series actually present themselves to us when we're doing computations, i.e. with a remainder term, that the correct way to generalize the definition of the sum $S$ of a convergent series:
$$S = \lim_{N\to\infty}S(N)\tag{6.1} $$
where $S(N)$ is the partial sum obtained by summing the first $N$ terms of the series, is to use the expression:
$$S = \operatorname*{con}_{N}\left[\int_{N-1}^{N}S(x)dx\right]\tag{6.2} $$
where $\displaystyle \operatorname*{con}_{N}$ denotes the operation of extracting the constant term of the large $N$ asymptotic expansion.
Here one needs to invoke analytic continuation of the partial sum to real arguments. The conditions required for Carlson's theorem are then sufficient, and likely necessary, in the case of the form (3.6). However, one could argue that since (6.2) only requires an asymptotic expansion around the point at infinity, mapping that point to the origin yields an analytic continuation which is not unique if the conditions of Carlson's theorem are not met, but which can still make the integral in (6.2) uniquely determined for a wider class of cases.
An interesting corollary of (6.2) concerns the sum of the derivative of a function in case the partial sum of the function is known in analytic form. If:
$$S(n) = \sum_{k=0}^{n}f(k)\tag{6.3}$$
and we have an analytic expression for $S(x)$ (assumed to be unique by Carlson's theorem), then we have:
$$\sum_{k=0}^{\infty}f'(k)= \operatorname*{con}_{N} f(N) - S'(-1)\tag{6.4}$$
To derive this, we can use that the derivative of the partial sum $S(x)$ satisfies the recursion relation for the partial sum of $f'(k)$, as can be seen by differentiating both sides of the recursion for the partial sum of $f(k)$:
$$S(x) - S(x-1) = f(x)\tag{6.5}$$
But $S'(x)$ does not necessarily satisfy the boundary condition for the partial sum at $x = -1$, i.e. that it is zero there. This is remedied by subtracting $S'(-1)$ from $S'(x)$. So, the correct partial sum of $f'(x)$ is $S'(x) - S'(-1)$. We can then apply (6.2) to this; the integral of the partial sum from $N-1$ to $N$ is given by:
$$\int_{N-1}^{N} \left[S'(x) - S'(-1)\right]dx = S(N) - S(N-1) - S'(-1)= f(N) - S'(-1)\tag{6.6}$$
From this, (6.4) follows by extracting the constant term of the large $N$ expansion. A simple example is to compute again the sum of the positive integers by considering the formula for the partial sum of the squares of the integers:
$$S(n) = \sum_{k=0}^{n} k^2 = \frac{1}{6} n (n+1)(2n+1)\tag{6.7}$$
We then have $f(x) = x^2$, which has no constant term in the expansion around infinity. We then see that twice the sum of the positive integers is given by $-S'(-1)$ which is $-\frac{1}{6}$, therefore the sum of the positive integers is, as expected, found to be $-\frac{1}{12}$.
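The corollary for the sum of derivatives is easily checked with sympy for this example:

```python
# Check the corollary with f(x) = x^2 and partial sum S(n) = n(n+1)(2n+1)/6:
# the sum of f'(k) = 2k is con_N f(N) - S'(-1) = 0 - 1/6.
import sympy as sp

n = sp.symbols('n')
S = n * (n + 1) * (2*n + 1) / 6
S_prime = sp.diff(S, n)

correction = S_prime.subs(n, -1)     # S'(-1) = 1/6
sum_f_prime = -correction            # con_N of f(N) = N^2 is zero
sum_of_integers = sum_f_prime / 2    # since f'(k) = 2k
```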