This 'answer' corroborates Andre's answer; it shows Euler's method of developing the exponential generating function of the Bernoulli numbers via what is essentially a matrix calculation, and how close Euler came to the integral representation and the Helmut Hasse formula for the Hurwitz zeta function. I rely on three English translations of Euler's work.
Euler in "A succinct method for investigating the sums of infinite series through differential formulae" (translated by Jordan Bell) derives the e.g.f. for the Bernoulli numbers in a way (actually using a matrix calculation before their formal introduction by Cayley and Sylvester) that would have led him with one simple further step to the Hurwita zeta function expressed in integral form and also via the finite differences of the Helmut Hassse formula. The original paper in Latin was delivered to the St. Petersburg Academy on March 13, 1780, and was originally published as "Methodus succincta summas serierum infinitarum per formulas differentiales investigandi", Memoires de l’Academie Imperiale des Sciences de St. Petersbourg 5 (1815), 45–56, and reprinted in Leonhardi Euleri Opera Omni, Series 1: Opera Mathematica, Volume 16.
The arguments in "A succinct method ..." are similar but more general and clearer (with a little re-interpretation) than those in sections 26 and 27 (pgs. 35 and 36 in the English translation by Alexander Aycock) of his earlier paper "Considerations on certain series". The original Latin title is “De seriebus quibusdam considerationes“, presented to the St. Petersburg Academy in 1739, first published in Commentarii academiae scientiarum Petropolitanae 12, 1750, pp. 53-96, reprinted in Opera Omnia: Series 1, Volume 14, pp. 407 - 462, Eneström-Number E130, translated by Alexander Aycock for the Euler-Kreis Mainz.
See pages 48 and 52 of Correspondence of Leonhard Euler with Christian Goldbach, Part 1, edited by Franz Lemmermeyer and Martin Mattmüller, which display an integral formula for $\zeta(2)$ known to Euler. That book states that the exponential generating function for the Bernoulli numbers was first developed by Euler in "Considerations on certain series".
Succinctly presenting Euler's arguments with obvious changes in notation:
In "A succinct method ...", Euler writes
$S-S' = X$
or
$F(x) - F(x+1) = \sum_{k \geq 0} f(x+k) - \sum_{k \geq 0} f(x+k+1) = f(x),$ identifying $S$ with $F(x) = \sum_{k \geq 0} f(x+k)$, $S'$ with $F(x+1)$, and $X$ with $f(x)$.
Then
$S'= S+ \partial S + \frac{1}{2!} \partial^2 S + \frac{1}{3!} \partial^3 S + \cdots $
or
$F(x+1) = e^{\partial_x}F(x)$.
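As a side check (not part of Euler's argument), the shift identity can be verified symbolically on a polynomial, where the operator series terminates at the polynomial's degree. A minimal sketch, assuming sympy is available:

```
# Check that exp(d/dx) acts as the unit shift on a polynomial test function,
# where the exponential series of derivatives terminates.
from sympy import symbols, diff, factorial, expand

x = symbols('x')
F = x**4 - 3*x**2 + 7*x                      # an arbitrary polynomial test function
shifted = sum(diff(F, x, n) / factorial(n) for n in range(5))
print(expand(shifted - F.subs(x, x + 1)))    # prints 0
```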
Consequently,
$(\dagger)$
$0 = X + \partial S + \frac{1}{2!} \partial^2 S + \frac{1}{3!} \partial^3 S +\cdots $
or
$0 = f(x) - f(x) = f(x)+[F(x+1)-F(x)] = f(x) + (e^{\partial_x}-1)F(x),$
implying
$(\dagger\dagger)$
$\partial S = -X - \alpha \partial X - \beta \partial^2 X - \gamma \partial^3 X - \cdots$
or
$\partial F(x) = - e^{b.\partial_x}f(x) $
with $b_0=1$, $b_1 = \alpha$, $b_2/2! = \beta$, $b_3/3! = \gamma$, and so on, using umbral notation with the umbral evaluation rule $b.^n = b_n$.
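To make the umbral notation concrete (this illustration is mine, not Euler's), here is a small sympy sketch of the rule $b.^n \to b_n$ applied after expansion, anticipating the Bernoulli polynomials $(b.+x)^n$ used below:

```
# Umbral evaluation: expand first, then lower powers of b to subscripted symbols,
# e.g. (b. + x)^3 -> sum_k C(3,k) b_k x^{3-k}, with b_0 = 1 left implicit.
from sympy import symbols, expand

x, b = symbols('x b')
b1, b2, b3 = symbols('b_1 b_2 b_3')
expr = expand((b + x)**3)                               # expand in the umbral variable b
umbral = expr.subs([(b**3, b3), (b**2, b2), (b, b1)])   # then b^n -> b_n
print(umbral)   # b_3 + 3*b_2*x + 3*b_1*x**2 + x**3 (up to term order)
```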
Then, deviating from Euler's presentation, denote umbral evaluation performed before any other operation by angle brackets, so that $1/\langle e^{b.t}\rangle = 1/ \sum_{n\geq 0} b_n \frac{t^n}{n!}$, which is not $e^{-b.t}$ in general. Formally,
$(\dagger\dagger\dagger)$
$$\partial_x F(x) = e^{b.\partial_x}(F(x+1)-F(x)) = F(B.(x+1))- F(B.(x)),$$
(where $B.(x)^n = (b. +x)^n = \sum_{k=0}^n \binom{n}{k} b_k x^{n-k}$ are the Bernoulli polynomials) implying operationally
$$\frac{\partial_x}{\langle e^{b.\partial_x}\rangle}F(x) = F(x+1)-F(x) = (e^{\partial_x}-1)F(x),$$
so
$$\frac{1}{\langle e^{b.\partial_x}\rangle} = \frac{e^{\partial_x}-1}{\partial_x} =\sum_{n \geq 0} \frac{1}{n+1} \frac{\partial_x^n}{n!}= e^{\hat{b}.\partial_x} ,$$
and the reciprocal gives
$$ e^{b.\partial_x} = \frac{\partial_x}{e^{\partial_x}-1},$$
implying
$$1= e^{(\hat{b}.+b.)t} = e^{\hat{b}.t} e^{b.t} = \frac{e^t-1}{t}\frac{t}{e^t-1} = \left(\sum_{n \geq 0} \frac{t^n}{(n+1)!}\right)e^{b.t},$$
which is a conclusion at which Euler arrives in the paper in the following manner.
Euler iteratively takes derivatives of $(\dagger\dagger)$ and substitutes them into $(\dagger)$, abbreviating the result in a table (rendered here explicitly in matrix form for clarity): the $n$-th row contains the $n$-th summand of $(\dagger)$ expressed in terms of $X=f(x)$ and $\partial^kX =\partial_x^kf(x)$, and the $k$-th column contains the coefficients of $\partial^kX =\partial_x^kf(x)$. The first few rows of the infinite matrix are
$$\begin{pmatrix}
X \\
\partial S \\
\frac{1}{2!}\partial^2 S \\
\frac{1}{3!}\partial^3 S\\
\frac{1}{4!}\partial^4 S\\
\frac{1}{5!}\partial^5 S\\
\vdots\\
\end{pmatrix}
= \begin{pmatrix}
1 & & & & & \\
-1 & -b_1 & -\frac{b_2}{2!} & -\frac{b_3}{3!} & -\frac{b_4}{4!} & \cdots \\
 & -\frac{1}{2!} & -\frac{1}{2!} b_1& -\frac{1}{2!}\frac{b_2}{2!} & -\frac{1}{2!}\frac{b_3}{3!} & \cdots \\
 & & -\frac{1}{3!} & -\frac{1}{3!}b_1 & -\frac{1}{3!} \frac{b_2}{2!}& \cdots \\
 & & & -\frac{1}{4!} & -\frac{1}{4!} b_1& \cdots \\
\vdots& & & & -\frac{1}{5!} & \cdots \\
\end{pmatrix} \begin{pmatrix}
X \\
\partial X \\
\partial^2 X \\
\partial^3 X\\
\partial^4 X\\
\partial^5 X \\
\vdots\\
\end{pmatrix}$$
The sum of the entries of the left-hand column vector is zero by $(\dagger)$, and since this holds for arbitrary $f$ (so the $\partial^k X$ may be treated as independent), the sum over each column of the matrix must be zero; i.e., left multiplication by the row vector $(1 \; 1\; 1\; 1 \cdots)$ gives the zero row vector, and this determines the numerical values of the $b_n$ recursively.
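To make the recursion concrete, here is a minimal Python sketch (my addition): multiplying the $k$-th column condition by $(k+1)!$ turns it into $\sum_{m=0}^{k} \binom{k+1}{m} b_m = 0$ for $k \geq 1$, with $b_0 = 1$ from the zeroth column, which determines the $b_n$ one after another:

```
# Euler's column-sum conditions, rewritten as sum_{m=0}^{k} C(k+1,m) b_m = 0 for k >= 1,
# solved recursively for b_k in exact rational arithmetic.
from fractions import Fraction
from math import comb

b = [Fraction(1)]                                        # b_0 = 1 from the zeroth column
for k in range(1, 9):
    b.append(Fraction(-1, k + 1) * sum(comb(k + 1, m) * b[m] for m in range(k)))
print([str(bn) for bn in b])   # ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```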
With $X = f(x) = e^{zx}$, we have
$\partial_x^n X = z^n e^{zx}$
and
$\partial S = \partial_xF(x) = -e^{b.\partial_x}f(x)= -e^{b.\partial_x}e^{zx} = -e^{b.z}e^{zx}$
and, for $n \geq 1$,
$\partial^n S = -\partial_x^{n-1}e^{b.\partial_x}f(x)= -z^{n-1}e^{b.z}e^{zx}.$
Consequently, after the common factor $e^{zx}$ is divided out, the general matrix formula specializes to
$$\begin{pmatrix}
1 \\
-e^{b.z}\\
-\frac{1}{2!} ze^{b.z} \\
-\frac{1}{3!} z^2e^{b.z}\\
-\frac{1}{4!} z^3e^{b.z}\\
-\frac{1}{5!} z^4e^{b.z}\\
\vdots\\
\end{pmatrix}
= \begin{pmatrix}
1 & & & & & \\
-1 & -b_1 & -\frac{b_2}{2!} & -\frac{b_3}{3!} & -\frac{b_4}{4!} & \cdots \\
 & -\frac{1}{2!} & -\frac{1}{2!} b_1& -\frac{1}{2!}\frac{b_2}{2!} & -\frac{1}{2!}\frac{b_3}{3!} & \cdots \\
 & & -\frac{1}{3!} & -\frac{1}{3!}b_1 & -\frac{1}{3!} \frac{b_2}{2!}& \cdots \\
 & & & -\frac{1}{4!} & -\frac{1}{4!} b_1& \cdots \\
\vdots & & & & -\frac{1}{5!} & \cdots \\
\end{pmatrix} \begin{pmatrix}
1 \\
z \\
z^2 \\
z^3\\
z^4\\
z^5 \\
\vdots\\
\end{pmatrix}$$
Multiplication on the left by the row vector of all ones must still give zero, so
$$1 - e^{b.z}(1 + \frac{z}{2!} + \frac{z^2}{3!} + \cdots) = 1 - e^{b.z} \frac{(e^z-1)}{z} = 0$$
and
$$ e^{b.z} = \frac{z}{e^z-1}.$$
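As a cross-check (again assuming sympy is available), the $b_n$ obtained above from the column sums should be $n!$ times the Taylor coefficients of $z/(e^z-1)$:

```
# Compare the recursively computed b_n with n! [z^n] z/(exp(z)-1).
from sympy import symbols, exp, series, factorial

z = symbols('z')
egf = series(z / (exp(z) - 1), z, 0, 9).removeO()
print([egf.coeff(z, n) * factorial(n) for n in range(9)])
# [1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30] -- matching the b_n from the recursion
```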
At this point Euler had everything he needed to explicitly scoop Abel, Riemann, and Hurwitz on the hybrid Laplace-Mellin integral transform reps of the Riemann and Hurwitz zeta functions.
Let $f(x) = x^{-s}$. Then
$$F(x) = \sum_{n \geq 0} \frac{1}{(x+n)^s} = \zeta(s,x),$$
the Hurwitz zeta function, and $(\dagger\dagger)$ gives formally
$$\partial_x \zeta(s,x) = -s \zeta(s+1,x) = -e^{b.\partial_x}x^{-s}.$$
The R.H.S. leads to a divergent asymptotic series for the Hurwitz zeta, but it can be summed by writing $x^{-s}$ via the Euler integral rep of the gamma function and interchanging differentiation and integration:
$$ s \zeta(s+1,x) = e^{b.\partial_x}x^{-s} = e^{b.\partial_x}\int_0^\infty e^{-xt} \frac{t^{s-1}}{(s-1)!}dt = \int_0^\infty e^{b.\partial_x} e^{-xt} \frac{t^{s-1}}{(s-1)!}dt$$
$$ = \int_0^\infty e^{-b.t} e^{-xt} \frac{t^{s-1}}{(s-1)!}dt = \int_0^\infty \frac{-t}{e^{-t}-1} e^{-xt} \frac{t^{s-1}}{(s-1)!}dt .$$
Then
$$\zeta(s,x) = \int_0^\infty \frac{1}{1-e^{-t}} e^{-xt} \frac{t^{s-1}}{(s-1)!}dt$$
and
$$\zeta(s) = \zeta(s,1) = \int_0^\infty \frac{1}{e^{t}-1} \frac{t^{s-1}}{(s-1)!}dt.$$
These integrals can be analytically continued in $s$ in much the same fashion as the gamma function integral.
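A quick numerical sanity check of these representations, and of the relation $\partial_x \zeta(s,x) = -s\,\zeta(s+1,x)$ used above, assuming mpmath is available (its two-argument zeta is the Hurwitz zeta):

```
# Verify the integral representations and the derivative relation numerically.
from mpmath import mp, quad, gamma, zeta, exp, inf, diff

mp.dps = 25
s, x = mp.mpf(3), mp.mpf('1.5')

hurwitz = quad(lambda t: exp(-x*t) * t**(s - 1) / (1 - exp(-t)), [0, inf]) / gamma(s)
print(hurwitz, zeta(s, x))                     # the two values agree

riemann = quad(lambda t: t**(s - 1) / (exp(t) - 1), [0, inf]) / gamma(s)
print(riemann, zeta(s))                        # both ~ 1.20205690... = zeta(3)

print(diff(lambda y: zeta(s, y), x), -s * zeta(s + 1, x))   # again equal
```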
Alternatively, defining the forward difference op in terms of the derivative op by
$\delta_x = e^{\partial_x}-1$, we have $\partial_x = \ln(1+\delta_x)$ and
$$e^{b.\partial_x} = \frac{\partial_x}{e^{\partial_x}-1} = \frac{\partial_x}{\delta_x}= \frac{\ln(1+\delta_x)}{\delta_x}.$$
Expanding as a series in the op $\delta_x$ leads to the Helmut Hasse formula for the Hurwitz zeta function and the Bernoulli function of Milnor:
$$e^{b.\partial_x}f(x) = \frac{\ln(1+\delta_x)}{\delta_x} f(x)= \sum_{n \geq 0} \frac{1}{n+1} \sum_{k=0}^n (-1)^k \binom{n}{k}e^{k\partial_x} f(x)$$
$$ = \sum_{n \geq 0} \frac{1}{n+1} \sum_{k=0}^n (-1)^k \binom{n}{k} f(x+k).$$
Equality is understood as equivalence under analytic continuation or as a convergent summation of a divergent asymptotic series; e.g., with $f(x) = x^{-s}$ the series expansion in the $b_n$ is divergent for general complex $s$, whereas the final expression is convergent and gives the Bernoulli function of Milnor $B_{-s}(x) = s \zeta(s+1,x)$.
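As a final check of the operator identity $e^{b.\partial_x} = \ln(1+\delta_x)/\delta_x$, one can act on a test polynomial, where the finite differences vanish beyond the polynomial's degree and the sum terminates; both sides should then give the corresponding Bernoulli polynomial. A minimal sketch (my addition, assuming sympy):

```
# Apply ln(1+delta)/delta to f(x) = x^3 and compare with (b.+x)^3 = B_3(x).
from math import comb
from sympy import symbols, expand, Rational

x = symbols('x')

b = [Rational(1)]                              # b_n from the column-sum recursion above
for k in range(1, 4):
    b.append(Rational(-1, k + 1) * sum(comb(k + 1, m) * b[m] for m in range(k)))

f = x**3
hasse = sum(Rational(1, n + 1) * sum((-1)**k * comb(n, k) * f.subs(x, x + k)
                                     for k in range(n + 1))
            for n in range(5))                 # terms with n > 3 vanish
bern3 = sum(comb(3, k) * b[k] * x**(3 - k) for k in range(4))   # umbral (b. + x)^3

print(expand(hasse))   # x**3 - 3*x**2/2 + x/2
print(expand(bern3))   # the same Bernoulli polynomial B_3(x)
```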