With a second attempt I can now solve your problem to arbitrary precision, and can even generalize to polynomials of higher degree.
The problem you state is to find a polynomial $f(x)=a + bx + cx^2$ which when iterated $f(f(x))$ gives $$f(f(x)) = g(x)=1 + 1x + 1 x^2/2! + O(x^3) \approx \exp(x) \tag 1$$
Expanding $f(f(x))$ and collecting like powers of $x$ gives this set of equations:
$$ \begin{array}{rcll}
1 &=& a + ab + a^2c & (2.1)\\
1 &=& 2abc + b^2 & (2.2) \\
1/2! &=& b^2c + bc + 2ac^2 & (2.3)\\ \hline
?? &=& 2bc^2 & (2.4)\\
?? &=& c^3 & (2.5)\\
\end{array}$$
Equations (2.4) and (2.5) arise from the iteration $f(f(x))$ as well, but are ignored here since we only match the series of $\exp(x)$ up to $x^2$.
Using (2.2) one can express $b$ in terms of $a$ and $c$; using (2.1) one can then express $c$ in terms of $a$. The rearrangements that occur while solving show that $a$ must be smaller than $0.5$, and some trial and error shows that a bisection solver for $a$ in the interval $0.45 \lt a \lt 0.5$ finds the value of $a$ for which equations (2.1) to (2.3) are satisfied.
The solver gives the coefficients $$a=0.49789408...\\ b=0.87811291...\\ c=0.26179546...$$
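For reference, here is a minimal numerical sketch of that procedure (my reconstruction, not the original code; the helper names `solve_bc` and `residual` are mine). Instead of the explicit algebraic rearrangement, it resolves (2.1) and (2.2) for $b,c$ at fixed $a$ by a simple fixed-point iteration, then bisects on the residual of (2.3):

```python
from math import sqrt

def solve_bc(a, iters=500):
    """For fixed a, find b and c satisfying (2.1) and (2.2)
    by a simple fixed-point iteration."""
    b = 0.9
    for _ in range(iters):
        c = (1 - a - a * b) / a**2     # (2.1):  1 = a + a*b + a^2*c
        b = sqrt(1 - 2 * a * b * c)    # (2.2):  1 = b^2 + 2*a*b*c
    return b, c

def residual(a):
    """How far equation (2.3) is from being satisfied."""
    b, c = solve_bc(a)
    return b**2 * c + b * c + 2 * a * c**2 - 0.5

# Bisect a in (0.45, 0.5): the residual is positive at the left
# end of the interval and negative at the right end.
lo, hi = 0.45, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    if residual(mid) > 0:
        lo = mid
    else:
        hi = mid
a = (lo + hi) / 2
b, c = solve_bc(a)
print(a, b, c)   # ≈ 0.49789408, 0.87811291, 0.26179546
```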
Of course the approximation of $f(f(x))$ to $\exp(x)$ is only meaningful for $x$ in a small interval around zero; see the table of examples:
$$\small \begin{array} {r|rr|r}
x & f(x) & f(f(x)) & \exp(x) \\ \hline
-1/2 & 0.12428649 & 0.61107564 & 0.60653066 \\
-1/4 & 0.29472807 & 0.77943937 & 0.77880078 \\
-1/8 & 0.39222052 & 0.88258179 & 0.88249690 \\
0 & 0.49789408 & 1 & 1 \\
1/8 & 0.61174875 & 1.1330520 & 1.1331485 \\
1/4 & 0.73378452 & 1.2832008 & 1.2840254 \\
1/2 & 1.0023994 & 1.6411672 & 1.6487213
\end{array}$$
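The table can be reproduced directly from the coefficients (a small check script, assuming the rounded values above):

```python
from math import exp

a, b, c = 0.49789408, 0.87811291, 0.26179546   # coefficients from above

def f(x):
    return a + b * x + c * x * x

for x in (-0.5, -0.25, -0.125, 0.0, 0.125, 0.25, 0.5):
    print(f"{x:7.3f}  {f(x):.8f}  {f(f(x)):.8f}  {exp(x):.8f}")
```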
***Generalization***
I was not able to find similar analytical reductions for polynomials of higher degree and had to switch to an iterative process very similar to the Newton iteration for finding the square root - but applied to truncated Carleman matrices.
Let $F$ denote the (truncated) Carleman matrix of $f(x)$ and $G$ that of $g(x)$, where $g(x)=f(f(x))$. Because we work with truncated Carleman matrices, $F^2 = G$ does not hold exactly: either $F^2 = \hat G$ or $\hat F^2 = G$, where $\hat G$ resp. $\hat F$ is not of Carleman type. In my papers in the tetration-forum I always used the version $\hat F^2 = G$, which means the matrix associated with $f(x)$ is not really a Carleman matrix.
Here, however, the polynomial equations (2.1) to (2.3), or their generalizations to higher degree, force the solution $F^2 = \hat G$, where $\hat G$ is allowed to be non-Carleman and only its second column contains the leading coefficients $(1,1,1/2!,1/3!,...)$ of the truncated exponential series.
***Example***
For polynomial degree 3, so $g_3(x)=1 + x + x^2/2 + x^3/6 + O(x^4)$ and $f_3(x)=a + bx + cx^2 + dx^3$, the Carleman matrix is
$$F = \small \left[ \begin{array} {rrrr}
1 & a & a^2 & a^3 \\
0 & b & 2ab & 3ba^2 \\
0 & c & 2ac+b^2 & 3a^2c+3ab^2 \\
0 & d & 2bc+2ad & 6abc+3a^2d+b^3
\end{array} \right]$$
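Such a truncated Carleman matrix can be built programmatically. A minimal sketch (my own helper, with entry $[i][j]$ holding the coefficient of $x^i$ in $f(x)^j$, truncated at the matrix size):

```python
def carl(coeffs):
    """Truncated Carleman matrix of the polynomial whose coefficient
    vector (constant term first) is `coeffs`."""
    n = len(coeffs)
    col = [1.0] + [0.0] * (n - 1)          # column 0: f^0 = 1
    cols = [col]
    for _ in range(n - 1):
        new = [0.0] * n                    # next power: multiply by f, truncate
        for i, ci in enumerate(col):
            for j, fj in enumerate(coeffs):
                if i + j < n:
                    new[i + j] += ci * fj
        col = new
        cols.append(col)
    # transpose: rows indexed by power of x, columns by power of f
    return [[cols[j][i] for j in range(n)] for i in range(n)]

a, b, c, d = 0.5, 0.9, 0.25, 0.02          # arbitrary demo values
F = carl([a, b, c, d])
# spot-check one entry against the symbolic matrix above
print(F[2][2], 2*a*c + b*b)                # both 1.06
```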
with the idea $F^2 = \hat G$, actually using only the second column of $\hat G$, which reduces the notation to
$$F \cdot F_{0..3,1} = \hat G_{0..3,1} = \left[1,1,\frac12,\frac16\right]$$
In a matrix-multiplication-scheme this looks like
$$ \begin{array}{r|c}
\begin{array}{lr} F \cdot F_{0..3,1} = \hat G_{0..3,1} \qquad \qquad & \\ & \times \end{array}
& \small \begin{bmatrix} a \\b \\ c \\ d \end{bmatrix}\\ \hline
\small \left[ \begin{array} {rrrr}
1 & a & a^2 & a^3 \\
0 & b & 2ab & 3ba^2 \\
0 & c & 2ac+b^2 & 3a^2c+3ab^2 \\
0 & d & 2bc+2ad & 6abc+3a^2d+b^3
\end{array} \right]
& \small \begin{bmatrix} 1 \\1 \\ 1/2 \\ 1/6 \end{bmatrix}
\end{array}$$
The task is to find the matrix $F$ which solves that equation.
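Once a solution is found, it can be verified numerically. The sketch below takes the degree-3 coefficients from the result table further below and checks that $F \cdot F_{0..3,1}$ reproduces $\left[1,1,\frac12,\frac16\right]$ (the `carl` helper is my own reconstruction of the Carleman constructor):

```python
def carl(coeffs):
    # truncated Carleman matrix: entry [i][j] = coeff. of x^i in f(x)^j
    n = len(coeffs)
    col = [1.0] + [0.0] * (n - 1)
    cols = [col]
    for _ in range(n - 1):
        new = [0.0] * n
        for i, ci in enumerate(col):
            for j, fj in enumerate(coeffs):
                if i + j < n:
                    new[i + j] += ci * fj
        col = new
        cols.append(col)
    return [[cols[j][i] for j in range(n)] for i in range(n)]

# degree-3 coefficients (a, b, c, d) from the coefficient table below
T = [0.49857386, 0.87638317, 0.24739136, 0.024116727]
F = carl(T)
lhs = [sum(F[i][j] * T[j] for j in range(4)) for i in range(4)]
print(lhs)   # ≈ [1, 1, 1/2, 1/6]
```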
I used the Newton algorithm for finding the square root, which in general goes this way:
- Initialize $F$ , for instance with the identity matrix
- iterate $F = ( G\cdot F^{-1} + F) / 2$ until convergence
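These two steps can be sketched on a small $2\times2$ example (a stand-in for the Carleman matrices, just to show the iteration; starting from the identity keeps all iterates commuting with $G$, so the scalar-like formula is valid):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    # inverse of a 2x2 matrix
    (p, q), (r, s) = A
    det = p * s - q * r
    return [[s / det, -q / det], [-r / det, p / det]]

G = [[2.0, 1.0], [1.0, 2.0]]
F = [[1.0, 0.0], [0.0, 1.0]]                   # initialize with identity
for _ in range(30):
    GFinv = matmul(G, inv2(F))                 # G * F^-1
    F = [[(GFinv[i][j] + F[i][j]) / 2 for j in range(2)] for i in range(2)]
print(matmul(F, F))                            # ≈ G
```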
***Modification and improvement***
But this does not ensure that $F$ becomes a Carleman matrix, so we need to implement a matrix-valued function $\text{Carl}()$ which builds a (truncated) Carleman matrix from a column vector of coefficients. Moreover, we do not have the full matrix $G$ as target, only the one column of $\hat G$, so we must also rewrite the iteration step.
Thus we go:

- Initialize $(3.1) \qquad F = \text{Carl}([0.49, 0.88, 0.23, 0.1])$, using values near our earlier three-term solution $(a,b,c)$ and choosing one new coefficient. Call the relevant vector $T$: $T=F_{0..3,1}$.
- Initialize $Z = \hat G_{0..3,1}$ for notational convenience.
- Iterate $$(3.2) \qquad T = \left(\text{Carl}(T)^{-1} \cdot Z + T\right)/2$$ until convergence.
For higher polynomial degrees, to avoid divergence in step (3.2), I introduced a weight $w > 1$ such that
$$(3.2\text{a}) \qquad T = \left(1 \cdot \text{Carl}(T)^{-1} \cdot Z + w \cdot T\right)/(1+w)$$
Note: this is an update of a wrong formula in the original answer; I had $w$ at the wrong summand.
For degree $3$ it is already better to use $w=3$, and for degree $7$ I needed about $w=30$.
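Putting the pieces together, a sketch of iteration (3.2a) for degree 3 with $w=3$ (my reconstruction in Python; `carl` builds the truncated Carleman matrix, and a Gaussian-elimination solve replaces the explicit inverse):

```python
from math import factorial

def carl(coeffs):
    # truncated Carleman matrix: entry [i][j] = coeff. of x^i in f(x)^j
    n = len(coeffs)
    col = [1.0] + [0.0] * (n - 1)
    cols = [col]
    for _ in range(n - 1):
        new = [0.0] * n
        for i, ci in enumerate(col):
            for j, fj in enumerate(coeffs):
                if i + j < n:
                    new[i + j] += ci * fj
        col = new
        cols.append(col)
    return [[cols[j][i] for j in range(n)] for i in range(n)]

def solve(A, rhs):
    # plain Gaussian elimination with partial pivoting
    n = len(rhs)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            fac = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= fac * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

n = 4
Z = [1.0 / factorial(k) for k in range(n)]     # target column: 1, 1, 1/2, 1/6
T = [0.49, 0.88, 0.23, 0.1]                    # start values, as in (3.1)
w = 3.0
for _ in range(5000):
    S = solve(carl(T), Z)                      # S = Carl(T)^{-1} * Z
    T = [(s + w * t) / (1 + w) for s, t in zip(S, T)]  # damped step (3.2a)
print(T)
```

At convergence $\text{Carl}(T)\cdot T = Z$ holds, and $T$ reproduces the degree-3 row of the coefficient table below.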
Here is a table of the coefficients computed for polynomial degrees $t=2$ to $t=10$, with the coefficients of the Kneser solution in the last row for comparison:

    deg  at x^0      at x^1      at x^2      at x^3       at x^4          at x^5         at x^6          at x^7           at x^8           at x^9           at x^10
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------
      2  0.49789408  0.87811291  0.26179546  .            .               .              .
      3  0.49857386  0.87638317  0.24739136  0.024116727  .               .              .
      4  0.49856631  0.87631901  0.24749709  0.024635189  -0.00068603672  .              .
      5  0.49856360  0.87633476  0.24754665  0.024575891  -0.00093201041  0.00026544139  .
      6  0.49856335  0.87633377  0.24755048  0.024583631  -0.00093843971  0.00023767204  0.000024982253
      7  0.49856319  0.87633587  0.24755370  0.024574086  -0.00095506183  0.00024635371  0.000070926074  -0.000035173120
      8  0.49856327  0.87633598  0.24755248  0.024572800  -0.00095232321  0.00025114814  0.000069595854  -0.000045880532  0.0000071061759
      9  0.49856326  0.87633615  0.24755257  0.024571913  -0.00095318813  0.00025236663  0.000071951101  -0.000046081230  0.0000026638321  0.0000025972359
     10  0.49856328  0.87633614  0.24755226  0.024571792  -0.00095235183  0.00025322334  0.000071191576  -0.000047839864  0.0000025387777  0.0000055099949  -0.0000015206121
    ==== Kneser ===================================================================================================================================================
      K  0.49856329  0.87633613  0.24755219  0.024571812  -0.00095213638  0.00025333982  0.000070927552  -0.000048180843  ...
Here is a table of numerical evaluations of $f_{10}(x)$ for small $x$:
       x   f(x)         f(f(x))     exp(x)      exp(x)-f(f(x))
    ----------------------------------------------------------
      -1   -0.15588343  0.36787877  0.36787944  0.00000067066910
    -1/2    0.11914585  0.60653066  0.60653066  0.00000000032149465
    -1/4    0.29456338  0.77880078  0.77880078  1.3751544 E-13
    -1/8    0.39284104  0.88249690  0.88249690  5.8314013 E-17
       0    0.49856328  1.0000000   1.0000000   1.5123856 E-30
     1/8    0.61202107  1.1331485   1.1331485   -2.7847843 E-17
     1/4    0.73349981  1.2840254   1.2840254   -7.2336095 E-15
     1/2    1.0016400   1.6487213   1.6487213   0.00000000030613469
       1    1.6463542   2.7182781   2.7182818   0.0000037234229
The polynomial of degree 10 already gives 6 digits of precision on the interval $-1 \le x \le 1$ and 10 digits on the interval $-0.5 \le x \le 0.5$.
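These evaluations can be reproduced from the degree-10 row of the coefficient table (a small check script using the rounded values; precision is limited by the 8-digit rounding):

```python
from math import exp

# coefficients of f_10 from the "10" row of the coefficient table
coef = [0.49856328, 0.87633614, 0.24755226, 0.024571792,
        -0.00095235183, 0.00025322334, 0.000071191576, -0.000047839864,
        0.0000025387777, 0.0000055099949, -0.0000015206121]

def f(x):
    y = 0.0
    for c in reversed(coef):   # Horner evaluation of the polynomial
        y = y * x + c
    return y

for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(x, f(f(x)), exp(x), exp(x) - f(f(x)))
```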
Additional remark: this modification is better than what I got with my to-date standard method of "polynomial interpolation" for tetration with degree-8 or degree-16 polynomials, which employs the ansatz $\hat F^2 = G \leadsto F=\sqrt G$, so this idea of the OP is really interesting. Unfortunately, the parameter $w$ in (3.2a) must be increased exponentially to achieve convergence, and the required number of iterations increases with that parameter, so I have only tested up to polynomials of order 15 so far, needing $w>6000$ and many thousands of iterations.