
Trying to build an intuitive understanding of the Gamma function, I started to think of it as a way to measure how much each factorial power "helps" $x^n$ in the infinite sum of $e^x$. Trying to simplify the expression to look for combinatorial explanations, I approximated it as $$\Gamma(5)=4!= \int_0^\infty \frac{x^4}{e^x}\, dx = \gamma_4+0+\frac{1}{e}+\frac{2^4}{e^2}+ \frac{3^4}{e^3}+ \dots$$ where $\gamma_4$ is precisely the error between this coarse (unit-step) Riemann sum and the full integral. Playing with Wolfram, and seeing a faint connection with the Riemann zeta function, I eventually learned that the discrete sum is a polylogarithm, $\mathrm{Li}_{s}(z)=\sum_{k=1}^{\infty} \frac{z^{k}}{k^{s}}$, where setting $s=-n,\ z=e^{-1}$ gives $\mathrm{Li}_{-n}(e^{-1})=\sum_{k=1}^{\infty} \frac{k^{n}}{e^{k}}$. I would like to know whether there are bounds (or a way to get them) if we define $$\gamma_n = \Gamma(n+1) - \mathrm{Li}_{-n}(e^{-1}) = \int_0^\infty \frac{x^n}{e^x}\, dx -\sum_{k=1}^{\infty} \frac{k^{n}}{e^{k}}$$
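
To get a feel for the numbers, here is a quick numerical sketch of this difference (using mpmath's `gamma` and `polylog`; the precision and the range of $n$ are arbitrary choices of mine):

```python
# Quick numerical sketch of gamma_n = Gamma(n+1) - Li_{-n}(1/e).
# mpmath's polylog handles negative integer orders; 30 digits is an arbitrary choice.
from mpmath import mp, gamma, polylog, exp

mp.dps = 30

def gamma_n(n):
    """Difference between the integral Gamma(n+1) and the series Li_{-n}(e^{-1})."""
    return gamma(n + 1) - polylog(-n, exp(-1))

for n in range(1, 9):
    print(n, gamma_n(n))
```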

I find this may be fruitful because the closed form of $\mathrm{Li}_{-n}(x)$ gives lovely expressions such as (with $A_n$ being the Eulerian polynomials): $$ \frac{A_5(e)}{(-1+e)^6} =\frac{e+26e^2 +66 e^{3}+26 e^{4}+e^{5}} {(-1+e)^{6}} = \frac{e^6 A_5(e^{-1})}{e^6(1-e^{-1})^6} \approx 5!$$
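
As a quick sanity check of that value (a small sketch of mine, with the Eulerian numbers $\langle 5,k\rangle = 1, 26, 66, 26, 1$ hard-coded):

```python
# Check that A_5(e)/(e-1)^6 is close to 5! = 120.
# A_5(t) = t + 26 t^2 + 66 t^3 + 26 t^4 + t^5 (Eulerian numbers 1, 26, 66, 26, 1).
from mpmath import mp, e, factorial

mp.dps = 30
coeffs = [1, 26, 66, 26, 1]
A5_e = sum(c * e**(k + 1) for k, c in enumerate(coeffs))
print(A5_e / (e - 1)**6, factorial(5))   # the ratio comes out very close to 120
```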

Also, $\gamma_n$ seems to be bounded by $1$ in numerical experiments (and, according to Wikipedia, it goes to zero in the limit).

I have checked that there are indeed generalizations of the Euler–Mascheroni constant for which my definition of $\gamma_n$ would be valid; I suspect they could even be generalized to the so-called Stieltjes constants. However, trying to fit $\gamma_n$ into those definitions is out of my reach, as I'm just in my junior year of CS and some of them have very complex forms. I also guess a geometric bound would be much easier to obtain, but I would like some help with it.

In retrospect, I feel that the ratio form of the Gamma function is much more transparent and hints at its deep connections with other functions; in my opinion it should be presented that way, and not just as an integral that satisfies a functional equation. I would like to hear some opinions on this too.

EDIT: Summing over the coefficients, this would also assert that the Eulerian polynomials form a family satisfying $$n!=A_{n}(1)\approx \frac{A_n(e)}{(-1+e)^{n+1}} = \frac{A_n(e^{-1})}{(1-e^{-1})^{n+1}}.$$ Maybe this has some combinatorial explanation?
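
Here is a small sketch that builds the Eulerian numbers from the usual recurrence $\langle n,k\rangle=(k+1)\langle n-1,k\rangle+(n-k)\langle n-1,k-1\rangle$ and compares $n!=A_n(1)$ with $A_n(e)/(e-1)^{n+1}$ (the range of $n$ is an arbitrary choice):

```python
# Compare n! = A_n(1) with A_n(e)/(e-1)^(n+1) using the Eulerian-number
# recurrence <n,k> = (k+1)<n-1,k> + (n-k)<n-1,k-1>.
from math import factorial
from mpmath import mp, e

mp.dps = 30

def eulerian_row(n):
    """Eulerian numbers <n,0>, ..., <n,n-1>."""
    row = [1]                                   # <1,0> = 1
    for m in range(2, n + 1):
        row = [(k + 1) * (row[k] if k < len(row) else 0)
               + (m - k) * (row[k - 1] if k > 0 else 0)
               for k in range(m)]
    return row

for n in range(1, 9):
    coeffs = eulerian_row(n)                    # coefficients of t, t^2, ..., t^n in A_n(t)
    A_n_e = sum(c * e**(k + 1) for k, c in enumerate(coeffs))
    print(n, factorial(n), A_n_e / (e - 1)**(n + 1))
```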

The informal machinery behind these identities seems to have been available since Euler's time, when he tried to compute the $\eta(s)$ function, hence the name they carry (see for example https://mathoverflow.net/questions/13130/historical-question-in-analytic-number-theory), but I wonder whether there is a combinatorial reason why that ratio is a good approximation.

  • I have not touched this subject in a long time, but from what I found this expression isn't bounded. What Wikipedia states is that the ratio of the polylogarithm to the Gamma function converges to 1; their additive difference, however, seems to be unbounded. – Alejandro Quinche May 29 '24 at 16:28

1 Answer


To compare the series and the integral, we will use the Abel-Plana formula, which reads \begin{equation} \sum_{j=0}^\infty f(j)=\int_0^\infty f(x)\,dx+\frac{1}{2}f(0)+i\int_0^\infty\frac{f(it)-f(-it)}{e^{2\pi t}-1}\,dt \end{equation} for a function $f$ which is holomorphic in the region $\Re(z)>0$ and such that $\left|f(z)\right|<C/\left|z\right|^{1+\epsilon}$ for some constants $C,\epsilon>0$. This is the case for $f(z)=z^n\exp(-z)$ (for which $f(0)=0$ when $n\ge1$), and thus \begin{equation} \sum_{j=0}^\infty j^ne^{-j}=\int_0^\infty x^ne^{-x}\,dx+i\int_0^\infty\frac{(it)^ne^{-it}-(-it)^ne^{it}}{e^{2\pi t}-1}\,dt \end{equation} Then \begin{equation} \gamma_n=-i^{n+1}\int_0^\infty\frac{e^{-it}-(-1)^ne^{it}}{e^{2\pi t}-1}\,t^ndt \end{equation} When $n=2p$ is an even integer, \begin{align} \gamma_{2p}&=2(-1)^{p+1}\int_0^\infty\frac{t^{2p}\sin t}{e^{2\pi t}-1}\,dt\\ &=2(-1)^{p+1}\int_0^\infty\sum_{k\ge0}\frac{(-1)^k}{(2k+1)!}\frac{t^{2p+2k+1}}{e^{2\pi t}-1}\,dt\\ &=2(-1)^{p+1}\sum_{k\ge0}\frac{(-1)^k}{(2k+1)!}\int_0^\infty\frac{t^{2p+2k+1}}{e^{2\pi t}-1}\,dt \end{align} The interchange of integral and series is valid since $\int_0^\infty t^n\sinh t/(e^{2\pi t}-1)\,dt<\infty$. Using the integral representation of the Bernoulli numbers \begin{equation} B_{2s}=(-1)^{s+1}4s\int_{0}^{\infty}\frac{t^{2s-1}}{e^{2\pi t}-1}\mathrm{d}t \end{equation} we can express \begin{align} \gamma_{2p}&=2(-1)^{p+1}\sum_{k\ge0}\frac{(-1)^k}{(2k+1)!}\frac{(-1)^{p+k}B_{2p+2k+2}}{4(p+k+1)}\\ &=-\frac{1}{2}\sum_{k\ge0}\frac{B_{2p+2k+2}}{(p+k+1)(2k+1)!} \end{align}
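
As a numerical sanity check of this even-$n$ series (a sketch using mpmath's `bernoulli`, `gamma` and `polylog`; the truncation at 30 terms and the chosen values of $p$ are arbitrary):

```python
# Check gamma_{2p} = -(1/2) * sum_{k>=0} B_{2p+2k+2} / ((p+k+1)*(2k+1)!)
# against the direct value Gamma(2p+1) - Li_{-2p}(1/e).
from mpmath import mp, bernoulli, factorial, gamma, polylog, exp

mp.dps = 30

def gamma_even_series(p, terms=30):
    s = sum(bernoulli(2*p + 2*k + 2) / ((p + k + 1) * factorial(2*k + 1))
            for k in range(terms))
    return -s / 2

def gamma_direct(n):
    return gamma(n + 1) - polylog(-n, exp(-1))

for p in (1, 2, 3, 5):
    print(2*p, gamma_even_series(p), gamma_direct(2*p))
```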

When $n=2p+1$ is an odd integer, \begin{align} \gamma_n=2(-1)^{p}\int_0^\infty\frac{\cos t}{e^{2\pi t}-1}\,t^{2p+1}dt \end{align} Following the same lines, \begin{align} \gamma_{2p+1}&=2(-1)^{p}\sum_{k\ge0}\frac{(-1)^k}{(2k)!}\frac{(-1)^{p+k}B_{2p+2k+2}}{4(p+k+1)}\\ &=\frac{1}{2}\sum_{k\ge0}\frac{B_{2p+2k+2}}{(p+k+1)(2k)!} \end{align}
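
And the analogous check for odd $n=2p+1$ (same assumptions as above):

```python
# Check gamma_{2p+1} = (1/2) * sum_{k>=0} B_{2p+2k+2} / ((p+k+1)*(2k)!)
# against the direct value Gamma(2p+2) - Li_{-(2p+1)}(1/e).
from mpmath import mp, bernoulli, factorial, gamma, polylog, exp

mp.dps = 30

def gamma_odd_series(p, terms=30):
    s = sum(bernoulli(2*p + 2*k + 2) / ((p + k + 1) * factorial(2*k))
            for k in range(terms))
    return s / 2

for p in (1, 2, 3, 5):
    n = 2*p + 1
    print(n, gamma_odd_series(p), gamma(n + 1) - polylog(-n, exp(-1)))
```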

This alternating series can be evaluated by keeping only a few terms: the maximum possible error is given by the first neglected term. The sign of $\gamma_n$ can thus be deduced by keeping the first term. Since $(-1)^{s+1}B_{2s}>0$, we have \begin{align} \gamma_n>0 \text{ if } n \equiv 1,2 \pmod 4\\ \gamma_n<0 \text{ if } n \equiv 3,0 \pmod 4 \end{align}
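
This sign pattern can also be checked directly from the definition of $\gamma_n$ (a small sketch; the range of $n$ and the precision are arbitrary choices):

```python
# Sign of gamma_n versus n mod 4, computed directly from the definition.
from mpmath import mp, gamma, polylog, exp

mp.dps = 40   # enough digits that the small difference survives the cancellation

for n in range(1, 17):
    g = gamma(n + 1) - polylog(-n, exp(-1))
    print(n, n % 4, '+' if g > 0 else '-')
```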

Paul Enta
  • Happy you put the response back; I wonder why the Euler–Maclaurin derivation was wrong, since it seems you arrived at very similar results. These days I have experimented more and it seems $\gamma_n$ isn't bounded after all; in fact it gets very big very fast. It seems the limiting behavior of the series diverges quite drastically from the integral, contrary to what Wikipedia and the article it cited suggested, so the more interesting combinatorial interpretation seems to be nonexistent. Anyway, this was a nice introduction to special functions, so thanks for your help! – Alejandro Quinche May 27 '21 at 22:19
  • Thank you too, it was a nice exercise for me. The results of the EM derivation (which were numerically checked for several values of $n$) are identical to these ones. This indicates that the remainder term vanishes, but I could not find a proof of that. The large-$n$ behavior is not very clear to me. – Paul Enta May 28 '21 at 09:13