Define the finite difference operator acting on samples of a function $f$ by
$$\triangledown^s_{j=0} f(j)= \sum_{j = 0 }^\infty (-1)^j\binom{s}{j}f(j).$$
Then when $s$ is a positive integer $n$, we have the truncated series
$$\triangledown^n_{j=0} f(j)= \sum_{j = 0 }^n(-1)^j\binom{n}{j}f(j)$$
and the identity
$$\triangledown^n_{j=0} \triangledown_{k=0}^j f(k) = f(n).$$
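For a quick sanity check of the truncated operator and this inversion identity, here is a minimal Python sketch (standard library only; the helper `nabla` and the cubic test function are my illustrative choices):

```python
from math import comb

def nabla(f, upper):
    """Truncated difference operator: sum_{j=0}^{upper} (-1)^j C(upper, j) f(j)."""
    return sum((-1)**j * comb(upper, j) * f(j) for j in range(upper + 1))

f = lambda j: j**3 + 2*j + 1                 # arbitrary integer-valued test samples
for n in range(8):
    lhs = nabla(lambda j: nabla(f, j), n)    # ∇^n_{j=0} ∇^j_{k=0} f(k)
    assert lhs == f(n)
print("inversion identity verified for n = 0..7")
```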
For many useful functions, the Gregory-Newton finite-difference interpolation converges to the function over some right half-plane of the complex variable $s$; for example,
$$f(s-1;x) = \frac{x^{s-1}}{(s-1)!}= \triangledown_{n=0}^{s-1}\triangledown_{k=0}^{n} f(k;x)= \triangledown_{n=0}^{s-1}\triangledown_{k=0}^{n} \frac{x^k}{k!} $$
holds for $x > 0$ and $\operatorname{Re}(s) > 1/4$.
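As a hedged numerical illustration of this example (assuming Python with mpmath; the sample point and truncation level $N$ are my choices, and a partial sum only suggests, not proves, convergence):

```python
from mpmath import mp, mpf, binomial, factorial, gamma

mp.dps = 60                          # extra precision to tame the alternating sums

def inner(n, x):                     # ∇^n_{k=0} x^k/k!
    return sum((-1)**k * binomial(n, k) * x**k / factorial(k) for k in range(n + 1))

def gregory_newton(s, x, N):         # outer Newton series in s, truncated at n = N
    return sum((-1)**n * binomial(s - 1, n) * inner(n, x) for n in range(N + 1))

x, s = mpf(2), mpf('2.5')
print(gregory_newton(s, x, 80))      # truncated Gregory-Newton value
print(x**(s - 1) / gamma(s))         # target x^(s-1)/(s-1)!
```

At this non-integer $s$ the two printed values should agree to several digits if the stated half-plane of convergence is right.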
Questions:
(These change the relation between the variables relative to the example above.)
- When is the equality
$$\triangledown_{n=0}^{x}\triangledown_{k=0}^{n} k^s =x^s$$
valid for real $x >0$ (other than when $s$ is a positive integer)? Both Desmos and Wolfram Alpha start to give differing results even for the inner sum once $n \geq 40$ when $s = 1.5$. See this Desmos plot for the outer summation truncated to $n=40$.
- When is the equality
$$\triangledown_{n=2}^{x}\triangledown_{k=2}^{n} \ln(k) = \ln(x)$$
valid?
See this Desmos plot for the outer summation truncated to $n=40$.
The truncated series suggest regions of convergence, but the numerical estimates drift from the proposed limits as the upper limits of the series are increased, due to numerical instability. So the question remains whether the two Gregory-Newton series, as infinite sums, converge or diverge; ultimately, analytic proofs are required, of course.
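To sidestep the double-precision cancellation, the truncations can be recomputed in high precision; here is a sketch assuming Python with mpmath (the precision, sample points, and truncation levels are my choices, and the output only indicates trends):

```python
from mpmath import mp, mpf, binomial, power, log

mp.dps = 100                                   # generous guard digits against cancellation

def inner(f, n, k0=0):                         # ∇_{k=k0}^n f(k)
    return sum((-1)**k * binomial(n, k) * f(k) for k in range(k0, n + 1))

def outer(f, x, N, k0=0):                      # ∇_{n=k0}^x ∇_{k=k0}^n f(k), truncated at n = N
    return sum((-1)**n * binomial(x, n) * inner(f, n, k0) for n in range(k0, N + 1))

x, s = mpf('2.5'), mpf('1.5')
for N in (20, 40, 80, 160):                    # Q1 double sum, then Q2 (log) double sum
    print(N, outer(lambda k: power(k, s), x, N), outer(lambda k: log(k), x, N, k0=2))
print("targets:", power(x, s), log(x))
```

The successive truncations make the trend visible without the double-precision noise, but of course they cannot decide convergence of the infinite series.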
Motivation: In my answer to the MathOverflow Q&A "What actually is the 'right way' to view the analytic continuation of the Bell numbers?", I propose a natural interpolation from $n=1,2,3,\ldots$ to real $s >0$ of the Bell polynomials $Bell_n(x)=\phi_n(x)$, a.k.a. the Stirling polynomials of the second kind $ST2_n(x)$, as
$$ST2_s(x)= \sum_{k \geq 1} (-1)^k \left(\sum_{j = 1}^k (-1)^j \binom{k}{j}j^s \right) \frac{x^k}{k!}.$$
This preserves the umbral compositional inversion property of the Stirling polynomials of the first kind $ST1_n(x) = n!\binom{x}{n}$, a.k.a. the falling factorials, if the equality in Q1 is satisfied. For $s = n = 1,2,3,\ldots$, the umbral inverse relation is the finite sum
$$ST2_n(ST1.(x))= \sum_{k \geq 1} (-1)^k \left(\sum_{j = 1}^k (-1)^j \binom{k}{j}j^n \right) \frac{(ST1.(x))^k}{k!}$$
$$=\sum_{k \geq 1} (-1)^k \binom{x}{k} \sum_{j = 1}^k (-1)^j \binom{k}{j}j^n = \triangledown_{k=1}^x \triangledown_{j=1}^k j^n $$
$$=\sum_{k = 1}^n (-1)^k \binom{x}{k} \sum_{j = 1}^k (-1)^j \binom{k}{j}j^n = x^n ,$$
valid for all real or complex $x$. The outer sum over $k$ reduces to a finite number of summands in this case since
$$\sum_{j = 1}^k (-1)^j \binom{k}{j}j^n = 0$$
for $k > n$.
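These finite identities can be confirmed exactly, e.g., with sympy (a sketch with my helper names; sympy's `binomial(x, k)` with symbolic $x$ expands to $x(x-1)\cdots(x-k+1)/k!$, so the check is exact):

```python
from sympy import symbols, binomial, expand

x = symbols('x')

def inner(k, n):                                # sum_{j=1}^k (-1)^j C(k,j) j^n
    return sum((-1)**j * binomial(k, j) * j**n for j in range(1, k + 1))

for n in range(1, 6):
    double = sum((-1)**k * binomial(x, k) * inner(k, n) for k in range(1, n + 1))
    assert expand(double - x**n) == 0                            # ST2_n(ST1.(x)) = x^n
    assert all(inner(k, n) == 0 for k in range(n + 1, n + 5))    # inner sum vanishes for k > n
print("umbral inverse relation verified as a polynomial identity for n = 1..5")
```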
Comments on metamorphy's answer (Jul. 10, 2025):
Focusing on the last string of equalities in the answer
$$\Gamma(\sigma)\nabla_{n=1}^{\ x}\nabla_{k=1}^{\ n}k^{-\sigma} =\sum_{n=1}^\infty(-1)^n\binom{x}{n}\int_0^\infty t^{\sigma-1}[(1-e^{-t})^n-1]\,dt \\=\int_0^\infty t^{\sigma-1}\sum_{n=1}^\infty(-1)^n\binom{x}{n}[(1-e^{-t})^n-1]\,dt=\int_0^\infty t^{\sigma-1}e^{-xt}\,dt=\Gamma(\sigma)x^{-\sigma},$$
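For readers who want to poke at this numerically, here is a hedged sketch (assuming Python with mpmath; the sample values of $x$ and $\sigma$ and the truncation are mine) comparing a partial sum of the double series with the Mellin integral and the closed form $x^{-\sigma}$:

```python
from mpmath import mp, mpf, binomial, gamma, quad, exp, power, inf

mp.dps = 50

def inner(n, sigma):                           # ∇_{k=1}^n k^{-σ}
    return sum((-1)**k * binomial(n, k) * power(k, -sigma) for k in range(1, n + 1))

def outer(x, sigma, N):                        # partial sum of ∇_{n=1}^x ∇_{k=1}^n k^{-σ}
    return sum((-1)**n * binomial(x, n) * inner(n, sigma) for n in range(1, N + 1))

x, sigma = mpf('3.5'), mpf(2)
mellin = quad(lambda t: t**(sigma - 1) * exp(-x * t), [0, inf]) / gamma(sigma)
print(outer(x, sigma, 60))                     # truncated double sum
print(mellin, power(x, -sigma))                # Mellin integral and x^{-σ}
```

All three printed values should be close for these parameters; the agreement degrades as $x$ decreases, since the outer terms then decay more slowly.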
I'd like to indicate, for broader context, how this is related to interpolation by Ramanujan's master heuristic / theorem / formula (a.k.a. modified Mellin transform interpolation / extension) of the Stirling polynomials of the second kind $ST2_n(x)$, a.k.a. the Bell / Touchard / exponential polynomials discussed in my associated MO-Q, and to the umbral inverse compositional relation between the Stirling polynomials of the first and second kinds, that is,
$$x^n = n! \binom{ST2.(x)}{n} = ST1_n(ST2.(x)) = \sum_{k=0}^n ST1_{n,k}ST2_k(x)$$
$$= ST2_n(ST1.(x)) = \sum_{k=0}^n ST2_{n,k}ST1_k(x).$$
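Both expansions of $x^n$ can be checked symbolically, e.g., with sympy's `stirling`, `bell` (Bell/Touchard polynomial), and `ff` (falling factorial); the sketch below identifies $ST1_{n,k}$ with the signed Stirling numbers of the first kind and $ST2_{n,k}$ with those of the second kind:

```python
from sympy import symbols, expand, ff
from sympy.functions.combinatorial.numbers import stirling, bell

x = symbols('x')
for n in range(1, 7):
    # x^n = sum_k ST1_{n,k} ST2_k(x): signed Stirling-1 numbers against Bell/Touchard polynomials
    lhs1 = sum(stirling(n, k, kind=1, signed=True) * bell(k, x) for k in range(n + 1))
    # x^n = sum_k ST2_{n,k} ST1_k(x): Stirling-2 numbers against falling factorials k! C(x,k)
    lhs2 = sum(stirling(n, k, kind=2) * ff(x, k) for k in range(n + 1))
    assert expand(lhs1 - x**n) == 0 and expand(lhs2 - x**n) == 0
print("umbral inverse pair verified for n = 1..6")
```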
Since the e.g.f. for the Stirling polynomials of the second kind is
$$e^{ST2.(x)t} = e^{x(e^t-1)},$$
the umbral substitution
$$ x^k|_{x \to ST2.(x)} = (ST2.(x))^k = ST2_k(x)$$
in the string of equalities gives
$$\sum_{n=1}^\infty(-1)^n\binom{ST2.(x)}{n}\int_0^\infty \frac{t^{\sigma-1}}{(\sigma-1)!}[(1-e^{-t})^n-1]\,dt $$
$$ = \sum_{n=1}^\infty(-1)^n\frac{x^n}{n!}\int_0^\infty \frac{t^{\sigma-1}}{(\sigma-1)!}[(1-e^{-t})^n-1]\,dt $$
$$ = \sum_{n=1}^\infty(-1)^n\frac{x^n}{n!}\int_0^\infty \frac{t^{\sigma-1}}{(\sigma-1)!}\sum_{k=1}^n (-1)^k \binom{n}{k} e^{-kt}\,dt $$
$$ = \sum_{n=1}^\infty(-1)^n\frac{x^n}{n!}\sum_{k=1}^n (-1)^k \binom{n}{k} k^{-\sigma} $$
$$=\int_0^\infty \frac{t^{\sigma-1}}{(\sigma-1)!}\sum_{n=1}^\infty(-1)^n\binom{ST2.(x)}{n}[(1-e^{-t})^n-1]\,dt $$
$$=\int_0^\infty \frac{t^{\sigma-1}}{(\sigma-1)!}\sum_{n=1}^\infty(-1)^n\frac{x^n}{n!}[(1-e^{-t})^n-1]\,dt $$
$$=\int_0^\infty \frac{t^{\sigma-1}}{(\sigma-1)!}[e^{x(e^{-t}-1)}-e^{-x}]\,dt $$
$$=\int_0^\infty\frac{t^{\sigma-1}}{(\sigma-1)!}[e^{-t \; ST2.(x)}-e^{-x}]\,dt$$
$$ =: (ST2.(x))^{-\sigma}:=ST2_{-\sigma}(x), $$
and this last expression is a regularized version of Ramanujan's heuristic for the extension of the sequence $ST2_n(x)$, $n=1,2,3,\ldots$, for which the $ST2_n(1)$ are the Bell numbers, as presented in the MO-Q (Riemann did this for the Bernoulli numbers), while
$$ST2_{-\sigma}(x) = \sum_{n=1}^\infty(-1)^n\frac{x^n}{n!}\sum_{k=1}^n (-1)^k \binom{n}{k} k^{-\sigma} $$
is the modified Mellin transform extension of the sequence $ST2_n(x)$ for $n=1,2,3,\ldots$. Note that $ST2_0(x) = 1$ for the sequence of Stirling polynomials of the second kind, whereas $\lim_{\sigma \to 0} ST2_{-\sigma}(x) = 1-e^{-x}$ for the extension, so something like $ST2^{ext}_{-\sigma}(x)$ would be more accurate notation, with $ST2^{ext}_{n}(x) = ST2_n(x)$ except for $n=0$.
This Desmos graph gives some numerical support for the analysis.
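For those without Desmos handy, here is a rough cross-check in Python with mpmath (my truncation and sample choices; numerical support only): the series and integral forms of $ST2_{-\sigma}(x)$ should agree for $\sigma > 0$, the $\sigma \to 0$ value should approach $1-e^{-x}$, and at $\sigma = -n$ the series should collapse back to the Bell polynomial values $ST2_n(x)$.

```python
from mpmath import mp, mpf, binomial, factorial, gamma, power, exp, quad, inf

mp.dps = 40

def st2_ext_series(sigma, x, N=60):            # truncated double series for ST2_{-σ}(x)
    return sum((-1)**n * x**n / factorial(n)
               * sum((-1)**k * binomial(n, k) * power(k, -sigma) for k in range(1, n + 1))
               for n in range(1, N + 1))

def st2_ext_integral(sigma, x):                # regularized integral form (σ > 0)
    return quad(lambda t: t**(sigma - 1) / gamma(sigma)
                * (exp(x * (exp(-t) - 1)) - exp(-x)), [0, inf])

x = mpf(1)
print(st2_ext_series(mpf('0.5'), x), st2_ext_integral(mpf('0.5'), x))   # should agree
print(st2_ext_series(mpf('1e-8'), x), 1 - exp(-x))                      # σ → 0 limit
print(st2_ext_series(mpf(-3), x))              # should give ST2_3(1) = Bell(3) = 5
```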