10

I'm working on the half iterate of the exponential function. There is no single agreed-upon meaning for fractional iteration, but intuitively it should be a function $f(x)$ such that $f(f(x))=e^x$.

Here's how I'm finding $f(x)$ when $x\approx 0$:

If $x\approx 0$, then we have $$e^x\approx 1+x+\frac{x^2}{2} \tag{1}$$

Now, if we assume the required function $f(x)$ to be of the form $ax^2+bx+c$, then $$f(f(x))= a^3x^4+2a^2bx^3+(2a^2c+ab^2+ab)x^2+(2abc+b^2)x+ac^2+bc+c$$

But since $x\approx 0$, the $x^3$ and $x^4$ terms of $f(f(x))$ are negligible, so

$$f(f(x))=e^x\approx ac^2+bc+c+(2abc+b^2)x+(2a^2c+ab^2+ab)x^2 \tag{2}$$

Comparing coefficients of like powers of $x$ in equations (1) and (2), we get

$$ac^2+bc+c=1 \tag {3.1}$$ $$2abc+b^2=1 \tag {3.2}$$ $$2a^2c+ab^2+ab=\frac{1}{2} \tag {3.3}$$

The problem is solving these equations. I've tried substitution but they get reduced to a polynomial of very high degree which I don't know how to solve. Is there some way to solve these to get $a$, $b$, and $c$ and hence get the required half iteration function of $e^x$ as $ax^2+bx+c$? Please tell me how to solve these three equations.
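For reference, here is a quick numerical check of what a solution looks like (a sketch using scipy's `fsolve`; the starting guess is just a guess):

    # Sketch: solve (3.1)-(3.3) numerically; the starting guess is an assumption.
    import numpy as np
    from scipy.optimize import fsolve

    def equations(v):
        a, b, c = v
        return [a*c**2 + b*c + c - 1,           # (3.1)
                2*a*b*c + b**2 - 1,             # (3.2)
                2*a**2*c + a*b**2 + a*b - 0.5]  # (3.3)

    a, b, c = fsolve(equations, [0.25, 0.9, 0.5])
    print(a, b, c)  # converges to roughly a=0.2618, b=0.8781, c=0.4979

But this only finds the root numerically; I'd still like a systematic way to solve the system.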

  • 1
Solved on the whole real line by Hellmuth Kneser in 1950. – Will Jagy Feb 08 '17 at 18:52
  • 2
Here is a link to Kneser's article ... in German :)

    http://www.digizeitschriften.de/dms/img/?PPN=GDZPPN002175851

    – Adren Feb 08 '17 at 19:00
  • 1
For comparison: there is a real analytic solution for this, whereas the $C^\infty$ function with $g(g(x)) = \sin x$ cannot be extended to a function holomorphic around the origin; fixed points cause problems. – Will Jagy Feb 08 '17 at 19:09
@Will Jagy: Kneser's extension isn't that intuitive and is not agreed upon by all. And I can't understand words like 'holomorphic'. Is there a simple explanation of what the problem is? –  Feb 08 '17 at 19:14
Henryk Trappmann proved that Kneser's construction has a nice uniqueness property. The paper is available here: https://arxiv.org/pdf/1006.3981.pdf Holomorphic just means analytic. – Sheldon L Feb 09 '17 at 05:09
Brute-force searching finds a solution near $c=0.497541000000$, $b= 0.868934512796$ and $a=0.283293436579$, which is not completely off the first three coefficients of Sheldon's half-iterate solution ... Here I find a formula for $b$ in terms of $a,c$ from your second equation (3.2), and a formula for $a$ in terms of $c$ from the first equation (3.1). Then I search, by bisection over some meaningful initial interval for $c$, for the value making the third equation (3.3) approximately $\frac12$. – Gottfried Helms Mar 12 '17 at 11:36
I'm currently working on an answer using Carleman matrices of increasing sizes, following your initial ansatz. The results converge nicely to Sheldon's solution when the order of the initial polynomial is increased... (but it seems I need some more time to make a meaningful, fully self-explanatory answer) – Gottfried Helms Mar 12 '17 at 11:43
A new brute-force search finds a solution near $c=0.4978940790648882$, $b=0.8781129051944374$ and $a=0.2617954567357533$ for $f(x)=ax^2+bx+c$, giving $f(f(x))=1 + 1x + \frac12x^2+ 0.12x^3+0.018x^4$ for your polynomial (no solution is possible with the coefficients at $x^3$ and $x^4$ vanishing). This solution is a bit nearer to the Kneser solution than the one in the previous comment... – Gottfried Helms Mar 14 '17 at 01:41
Hah! I could even extend your ansatz to the degree-3 polynomial. Assuming $f(x)=a + bx + cx^2 + dx^3$ with $f(f(x)) = 1 + 1x+ \frac12 x^2+\frac{1}{3!}x^3 +O(x^4)$ (note that I changed the variable names!) I get $(a,b,c,d)=(0.4985738620087048, 0.8763831658843345, 0.2473913614530604, 0.02411672721119037)$, arbitrarily precise. It is a Newton-like iteration on Carleman matrices, but does not yet work nicely for polynomials of higher degree. – Gottfried Helms Mar 14 '17 at 03:05
For a short statement about the uniqueness properties of the Kneser solution see MO: http://mathoverflow.net/a/45166/7710 – Gottfried Helms Mar 14 '17 at 11:49
  • Could practice on x exp(x) first. – Cosmas Zachos Jun 02 '17 at 14:19
  • See https://oeis.org/A199203 – Vladimir Reshetnikov Aug 06 '19 at 04:42

5 Answers

6

There is a method using Carleman matrices which gives increasingly good approximations.
Consider the family of polynomials $$g_t(x) = 1+ x + \frac{x^2}{2!} +...+ \frac{x^t}{t!} $$ with the goal of finding, for increasing $t$, increasingly precise approximations $$ f_t(f_t(x)) \approx g_t(x) \approx e^x $$
For some chosen $t$ define the Carleman matrix $G$ for $g_t(x)$ (I use a version which is transposed relative to the Wikipedia entry), for instance for $t=2$ $$ G_2 = \left[\small \begin{array} {} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ 0 & 1/2 & 2 & 9/2 & 8 & 25/2 & 18 & 49/2 \\ 0 & 0 & 1 & 4 & 10 & 20 & 35 & 56 \\ 0 & 0 & 1/4 & 9/4 & 17/2 & 45/2 & 195/4 & 371/4 \\ 0 & 0 & 0 & 3/4 & 5 & 37/2 & 51 & 469/4 \\ 0 & 0 & 0 & 1/8 & 2 & 45/4 & 41 & 931/8 \\ 0 & 0 & 0 & 0 & 1/2 & 5 & 51/2 & 92 \end{array}\right]$$ We see the coefficients of $g_2(x)^0 = 1$ in the first column, those of $g_2(x)$ in the second, those of $g_2(x)^2$ in the third, those of $g_2(x)^3$ in the fourth, and so on. The key is that with a vector $V(x)=[1,x,x^2,x^3,...]$ of the matching dimension we can form the dot product $$ V(x) \cdot G_2 = V(g_2(x))$$ In software the columns of higher index are necessarily truncated versions of the powers of $g_2(x)$, so empirically we must take the approximations with a grain of salt.
The key point now is that, because of the form of Carleman matrices, the "output" has the same form as the "input", so we can repeat the application: $$ V(x) \cdot G_2 = V(g_2(x)) \\ V(g_2(x)) \cdot G_2 = V(g_2^{\circ 2}(x)) \\ V(g_2^{\circ 2}(x)) \cdot G_2 = V(g_2^{\circ 3}(x)) \\ $$ or more concisely, by associativity of the matrix products, $$V(x) \cdot G_2^h = V(g_2^{\circ h}(x)) $$ We see that the $h$-th power of $G_2$ gives the $h$-th iterate of $g_2(x)$, and we can expect that inserting $h=1/2$ gives at least an approximation to $g_2^{\circ 1/2}(x)=f(x)$.
What we need is a matrix function for the square root; this can be done either by diagonalization (implemented in Pari/GP and the larger CAS systems) or by Newton iteration.
What I find for $G_2^{0.5} $ is

  1.0000000      0.49649737      0.24650274       0.12207723   0.060459565  0.030432921  0.015814772  0.0088358869
          0      0.88272304      0.87964476       0.65278437    0.42951272   0.26484713   0.15906693   0.097969211
          0      0.29626378       1.0688901        1.3895246     1.2985405    1.0259175   0.73517218    0.50088082
          0    -0.073304928      0.44254563        1.4092345     2.1282729    2.3110280    2.0617264     1.6127751
          0     0.020386654   -0.0089884907       0.62013806     1.9423663    3.2482288    3.8802221     3.6776250
          0   -0.0023714159   -0.0087696684      0.062271773    0.87615938    2.7873325    5.0295289     6.2944776
          0  -0.00064038818    0.0039322053    -0.0081032492    0.14270358    1.2522409    4.3026809     8.3339822
          0   0.00019462828  -0.00056495459  -0.000028314112  0.0028871718   0.23073290    1.8777280     8.6538082

and we see that the coefficients in the second column give some approximation to Sheldon's half-iterate function.
Also the first three coefficients $(c=)\,0.49649737$, $(b=)\,0.88272304$, $(a=)\,0.29626378$ give some approximation to the values of $(c,b,a)$ which I gave in my earlier comment and which solve your system of equations (3.1) to (3.3).
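For illustration, this construction is easy to reproduce; here is a minimal sketch in Python/numpy (an assumed substitute for the Pari/GP computation; note that scipy's `sqrtm` returns a principal square root, whose branch could in principle differ from the diagonalization result):

    # Build the truncated Carleman matrix of g_2(x) = 1 + x + x^2/2 and take its
    # matrix square root; the second column should approximate the half iterate.
    import numpy as np
    from scipy.linalg import sqrtm

    N = 8                                      # truncation size, as in the 8x8 matrix above
    g = np.zeros(N); g[:3] = [1.0, 1.0, 0.5]   # coefficients of g_2(x)

    G = np.zeros((N, N))
    col = np.zeros(N); col[0] = 1.0            # g^0 = 1
    for j in range(N):
        G[:, j] = col                          # column j = coefficients of g(x)^j
        col = np.convolve(col, g)[:N]          # next power of g, truncated to N terms

    F = sqrtm(G).real                          # F @ F ~ G
    print(F[:3, 1])                            # ~ [0.4965, 0.8827, 0.2963], as above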

Now if we let $g_t(x)$ approximate the true $\exp(x)$ better, by taking say $t=8$, we get better approximations to Sheldon's Kneser solution. Let $G_8$ be defined with size 16×16; then its top left is:

  1       1           1               1               1                  1
  0       1           2               3               4                  5
  0     1/2           2             9/2               8               25/2
  0     1/6         4/3             9/2            32/3              125/6
  0    1/24         2/3            27/8            32/3             625/24
  0   1/120        4/15           81/40          128/15             625/24
  0   1/720        4/45           81/80          256/45           3125/144
  0  1/5040       8/315         243/560        1024/315         15625/1008
  0       0   127/20160       1093/6720       5461/3360         19531/2016
  0       0    41/30240      3271/60480       5459/7560         32549/6048
  0       0    19/75600     3247/201600     21809/75600       162697/60480
  0       0     1/25200      871/201600      3953/37800        14779/12096
  0       0  19/3628800    2533/2419200   62843/1814400        24589/48384
  0       0   1/1814400       37/161280     2389/226800       47129/241920
  0       0  1/25401600   1537/33868800    7499/2540160    702839/10160640
  0       0           0  4099/508032000  72803/95256000  3481427/152409600

and the square-root $G_8^{0.5} $ is

  1.0000000            0.49857405            0.24856073           0.12386855         0.061729127      0.030790812
          0            0.87630311            0.87401746           0.65347137          0.43393924       0.27003773
          0            0.24751412             1.0147132            1.3342705           1.2683840        1.0263147
          0           0.024641942            0.45793460            1.3404901           2.0049681        2.2203057
          0        -0.00094891787            0.10348608           0.72416179           1.8842050        3.0275836
          0         0.00022657746           0.010924205           0.23298583           1.1130489        2.7397893
          0        0.000077390020         0.00063916208          0.046228369          0.43595415        1.7059468
          0       -0.000032210830         0.00014299601         0.0059529659          0.11748258       0.75561802
          0       -0.000014796522       -0.000021914892        0.00061375239         0.022696178       0.24536210
          0        0.000011471527       -0.000021337036       0.000018742074        0.0033435423      0.060458407
          0      -0.0000014276569       0.0000089535786      -0.000019453706       0.00035980043      0.011807188
          0     -0.00000088358673      0.00000019051670      0.0000042134686      0.000016547312     0.0018917104
          0      0.00000031550433     -0.00000099866936     0.00000031495830     0.0000035791002    0.00026146045
          0  -0.00000000068329542      0.00000026181266    -0.00000053463876   -0.00000019767557   0.000038804566
          0    -0.000000014747905    -0.000000019088871     0.00000011141548  -0.000000094870895  0.0000061666156
          0    0.0000000019555565  -0.00000000099414503  -0.0000000090606767   0.000000019074856  0.0000014943166

The coefficients in the second column are now better approximations to Sheldon's solution and give a better $f(x)$ with $f(f(x))\approx \exp(x)$.

You see the principle. Ideally the Carleman matrix is of infinite size and the polynomial of infinite order (or better: equal to the exponential series).


By the logic of the Carleman matrices the following method ought to be less accurate, but its pattern of approximation towards the Kneser solution seems to be even better.

Here is a list of the coefficients of $f_t(x)$ for $t=3..16$, where the Carleman matrices $G_t$ are also truncated to size $t \times t$ (and not $2t \times 2t$, $3t \times 3t$ or the like). I've written them horizontally for easier visual comparison of the approximation towards the Kneser solution:

 t      at x^0      at x^1      at x^2        at x^3        at x^4          at x^5
 3      0.50000000  0.89442719  0.22360680            .               .               .
 4      0.49944144  0.88075164  0.23809540  0.022538920               .               .
 5      0.49907754  0.87768412  0.24309637  0.024235749   0.00082874617               .
 6      0.49887415  0.87676479  0.24517938  0.024772649   0.00013110906  0.000089433779
 7      0.49875947  0.87644601  0.24618449  0.024898334  -0.00030603063   0.00011382794
 8      0.49869216  0.87632858  0.24671872  0.024893887  -0.00055911402   0.00013292704
 9      0.49865090  0.87628661  0.24702315  0.024852593  -0.00070934134   0.00015154857
10      0.49862460  0.87627466  0.24720567  0.024805640  -0.00080017819   0.00016953631
11      0.49860726  0.87627481  0.24731954  0.024763154  -0.00085630807   0.00018510631
12      0.49859549  0.87627959  0.24739292  0.024727523  -0.00089167888   0.00019807499
13      0.49858729  0.87628581  0.24744148  0.024698565  -0.00091432971   0.00020860446
14      0.49858146  0.87629209  0.24747433  0.024675319  -0.00092902883   0.00021706224
15      0.49857724  0.87629790  0.24749699  0.024656725  -0.00093865651   0.00022382486
16      0.49857412  0.87630304  0.24751287  0.024641836  -0.00094499831   0.00022922925

Kneser

KN:     0.49856329  0.87633613  0.24755219  0.024571812  -0.00095213638   0.00025333982 ...

So I think the Kneser solution is the limit of this process as $t \to \infty$.
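The $t \times t$-truncated experiment is a small variation of the previous sketch (again Python/numpy assumed in place of Pari/GP):

    # For each t: Carleman matrix of the t-term exponential polynomial, truncated
    # to size t x t; column 1 of its square root gives the coefficients of f_t.
    import numpy as np
    from scipy.linalg import sqrtm
    from math import factorial

    def half_iterate_coeffs(t):
        g = np.array([1.0 / factorial(k) for k in range(t)])  # truncated exp series
        G = np.zeros((t, t))
        col = np.zeros(t); col[0] = 1.0
        for j in range(t):
            G[:, j] = col
            col = np.convolve(col, g)[:t]
        return sqrtm(G).real[:, 1]

    for t in (3, 8, 16):
        print(t, half_iterate_coeffs(t)[:4])   # should reproduce rows of the table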

  • So, is my extension of tetration the same as Kneser's? –  Mar 14 '17 at 04:03
@Dove: It seems so; no proof of course, because this is all heuristic (but a good one). In fact, your idea to solve for the coefficients seems even better, more straightforward, and faster at approximating the Kneser solution. I could solve it for polynomial orders 4, 5 and 6, and the approximation is even better than what I've shown in my answer with $t=16$. I'm just working on improving the method so it solves your ansatz for higher-order polynomials. – Gottfried Helms Mar 14 '17 at 06:26
@GottfriedHelms I intuitively expected that evaluating the half-iterate function of $e^x$ at $x=1$ would give the value of $^{0.5}e$, i.e. the solution of $x^x=e$. But that doesn't seem to be true when I calculate it with these coefficients. But still, thanks for calculating the coefficients. –  Mar 14 '17 at 06:48
@Dove: Well, that's a common misunderstanding. What you get is a function where you can insert the value $x_0=0$ to get some value $x_1$, and then insert $x_1$ to get $x_2 = \exp(x_0)$. Filling in $x_1=f(1) \approx 1.6378024$ and iterating once gives $x_2=f(x_1) \approx e$, so $x_1$ is the "half-iterate" of 1. A small table: $$\small \begin{array}{r|rrr} x & \exp(x) & f(f(x)) & f(x) \\ \hline -0.5 & 0.60653066 & 0.61107564 & 0.12428649 \\ -0.25 & 0.77880078 & 0.77943937 & 0.29472807 \\ 0 & 1 & 1 & 0.49789408 \\ 0.25 & 1.2840254 & 1.2832008 & 0.73378452 \\ 0.5 & 1.6487213 & 1.6411672 & 1.0023994 \end{array}$$ – Gottfried Helms Mar 14 '17 at 08:06
(...contd...) The three-coefficient polynomial does not give good approximations for $|x| \gt 0.25$, but the polynomials of higher order perform better. Here is the table using order 7: $$\small \begin{array}{r|rrr} x & \exp(x) & f(f(x)) & f(x) \\ \hline -0.5 & 0.60653066 & 0.60653006 & 0.11914544 \\ -0.25 & 0.77880078 & 0.77880078 & 0.29456380 \\ 0 & 1 & 1 & 0.49856335 \\ 0.25 & 1.2840254 & 1.2840254 & 0.73349939 \\ 0.5 & 1.6487213 & 1.6487224 & 1.0016400 \end{array}$$ We can even do $x_2=f(f(1))$ and get $x_2=2.7184827...$ – Gottfried Helms Mar 14 '17 at 08:17
  • How does $\exp^{[1/2]}(x)$ behave for $x\to-\infty$? Is it monotone? What is the minimal value it attains there? Is there $\lim\limits_{x\to-\infty}\exp^{[1/2]}(x)$? – Vladimir Reshetnikov Aug 14 '21 at 20:26
5

Here is a short problem I proposed two years ago to my students (translated from French ...)

I can add later a detailed solution if required.

I originally found this material somewhere on the web but don't remember where.


Let's make the assumption that there exists a continuous map $f:\mathbb{R}\rightarrow\mathbb{R}$ such that $f\circ f=\exp.$

We will prove, step by step, several properties of $f$.

  1. Prove that $\forall x\in\mathbb{R},\thinspace e^{x}>x.$
  2. Prove that the equation $f\left(x\right)=x$ doesn't have any solution.
  3. Deduce that $\forall x\in\mathbb{R},\thinspace f\left(x\right)>x.$
  4. Prove finally that $\forall x\in\mathbb{R},\thinspace f\left(x\right)<e^{x}.$

  5. Prove that $\forall x\in\mathbb{R},\thinspace f\left(e^{x}\right)=e^{f\left(x\right)}.$

  6. Compute ${\displaystyle \lim_{+\infty}f}.$

  7. Prove that there exists $\lambda<0$ such that ${\displaystyle \lim_{-\infty}f=\lambda.}$
  8. Prove that $f$ is strictly increasing.
Adren
  • 1
    What do you mean by all this? –  Feb 08 '17 at 21:27
@Dove: All this is teaching material which could be helpful to people interested in the subject mentioned above, i.e. the functional square root of the exponential map. It is not, strictly speaking, an answer, but it is closely connected to the question. I hope that some people will find it useful. – Adren Feb 08 '17 at 21:41
3

Looking at the equations $$ac^2+bc+c=1 \tag 1$$ $$2abc+b^2=1 \tag 2$$ $$2a^2c+ab^2+ab=\frac{1}{2} \tag 3$$ we can eliminate $b$ from $(1)$ $$b=\frac{1-c-a c^2}{c}$$ Replacing in $(2)$ and solving for $a$ leads to $$a=\frac{\sqrt{1-2 c}}{c^2}$$ Replacing in $(3)$ leads to $$-c \left(c^3+12 c+6 \sqrt{1-2 c}-14\right)+4 \sqrt{1-2 c}=4$$ After squaring steps, this reduces to $$c^7+24c^5-28c^4+152c^3-264c^2+160c-32=0$$ which has only one real root close to $c=\frac 12$.

Using Newton's method for finding the zero of the septic in $c$ leads to $$a=0.261795456735753$$ $$b=0.878112905194437$$ $$c=0.497894079064888$$ as already given in Gottfried Helms's comments. These numbers have the rational approximations $$a=\frac{37409}{142894}\qquad b=\frac{77821}{88623}\qquad c=\frac{18323}{36801}$$
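The whole computation fits in a few lines; here is a sketch with numpy (an assumed tool; any CAS or a hand-rolled Newton iteration does the same):

    import numpy as np

    # Septic from the elimination above: c^7 + 24c^5 - 28c^4 + 152c^3 - 264c^2 + 160c - 32
    p = [1, 0, 24, -28, 152, -264, 160, -32]
    roots = np.roots(p)
    c = roots[np.argmin(np.abs(roots.imag))].real  # the unique real root
    a = np.sqrt(1 - 2*c) / c**2                    # back-substitute into (2)
    b = (1 - c - a*c**2) / c                       # back-substitute into (1)
    print(a, b, c)  # ~ 0.261795, 0.878113, 0.497894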

Edit

Back to the problem eighteen months later, we could get expressions for the solution using $[1,n]$ Padé approximants for the septic equation (built around $c=\frac 12$). For different values of $n$, this would give $$a_1=\frac{146336 \sqrt{352121}}{331786225}\qquad b_1=\frac{18369-4 \sqrt{352121}}{18215}\qquad c_1=\frac{18215}{36584}$$ $$a_2=\frac{3257213 \sqrt{44685705147}}{2630063332009}\qquad b_2=\frac{1635466-\sqrt{44685705147}}{1621747}\qquad c_2=\frac{1621747}{3257213}$$

1

I've written a few programs that calculate Kneser's sexp and slog functions. One of my recent efforts is fatou.gp, a Pari/GP program that calculates the slog or Abel function for $\exp(x)$ for a wide range of real and complex bases. Kneser's construction requires calculating a Riemann mapping, for which it is very tricky to get accurate numerical results, so instead I do an equivalent iterated 1-cyclic $\theta(z)$ mapping from the $\alpha(z)$ or Abel functions at the two fixed points. Here is the Taylor series for the half iterate of $\exp(x)$. You can compare these results with your system of equations. $$f(f(x))=\exp(x);\;\;\;f(x)=\text{sexp}(\text{slog}(x)+0.5)$$

{halfe= 0.498563287941114
+x^ 1*  0.876336132224813
+x^ 2*  0.247552187310898
+x^ 3*  0.0245718116969028
+x^ 4* -0.000952136380204206
+x^ 5*  0.000253339819008525
+x^ 6*  7.09275516366956 E-5
+x^ 7* -4.81808433402200 E-5
+x^ 8*  2.63228465405932 E-6
+x^ 9*  5.96598826774286 E-6
+x^10* -1.30879479719986 E-6
+x^11* -7.47165552015529 E-7
+x^12*  2.68510892327235 E-7
+x^13*  1.12440534247329 E-7
+x^14* -4.80789869461353 E-8
+x^15* -2.20118629742874 E-8
+x^16*  8.17933994010676 E-9
+x^17*  5.30688749879415 E-9
+x^18* -1.23819700193839 E-9
+x^19* -1.41844961463076 E-9
+x^20*  1.05287927108075 E-10
+x^21*  3.89632939104118 E-10
+x^22*  3.51707444355649 E-11
+x^23* -1.04753098725701 E-10
+x^24* -2.89321996209624 E-11
+x^25*  2.62480364845324 E-11
+x^26*  1.38848625050719 E-11
+x^27* -5.58405052292307 E-12
+x^28* -5.57754465436342 E-12
+x^29*  6.94355445214551 E-13
+x^30*  2.00345639417985 E-12
+x^31*  1.89932160713883 E-13
+x^32* -6.47541759020915 E-13
+x^33* -2.08549856161407 E-13
+x^34*  1.82048262120683 E-13
+x^35*  1.15807785730247 E-13
+x^36* -3.92069387382815 E-14
+x^37* -5.14573309824226 E-14
+x^38*  2.35454506494555 E-15
+x^39*  1.97495294136643 E-14
+x^40*  3.87021261821085 E-15 }
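To use the series, truncate it and iterate; a small sketch with only the first six coefficients (plain double precision assumed):

    import math

    # Leading Taylor coefficients of the half iterate, from the listing above
    coeffs = [0.498563287941114, 0.876336132224813, 0.247552187310898,
              0.0245718116969028, -0.000952136380204206, 0.000253339819008525]

    def f(x):
        return sum(c * x**k for k, c in enumerate(coeffs))  # truncated Taylor series

    for x in (0.0, 0.25, 0.5):
        print(x, f(f(x)), math.exp(x))  # f(f(x)) matches exp(x) to several digits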
Sheldon L
Sheldon, you might be interested in my two answers; in the second I found an even better process to approximate your Kneser solution, see the comparisons at the end of the postings. It gives me a vague idea of what is behind Kneser's rationale when translated into more elementary terms. Perhaps I'll post this in the tetration forum, too. – Gottfried Helms Mar 14 '17 at 11:14
1

With a second attempt I can now solve your problem to arbitrary precision, and can even generalize to polynomials up to order 7.
The problem you state is to find a polynomial $f(x)=a + bx + cx^2$ which, iterated as $f(f(x))$, gives $$f(f(x)) = g(x)=1 + 1x + 1 x^2/2! + O(x^3) \approx \exp(x) \tag 1$$
Expanding $f(f(x))$ and collecting like powers of $x$ gives this set of equations: $$ \begin{array}{} 1 &= & a + ab + a^2c & (2.1)\\ 1 &= & 2abc + b^2 & (2.2) \\ 1/2! &= & b^2c + bc + 2ac^2 & (2.3)\\ \hline ?? &= & 2bc^2 & (2.4)\\ ?? &= & c^3 & (2.5)\\ \end{array}$$ Equations (2.4) and (2.5) arise from the iteration $f(f(x))$ but are ignored here.
Using (2.2) one can express $b$ in terms of $a$ and $c$ (namely $b=-ac+\sqrt{a^2c^2+1}$); inserting this into (2.1) then gives $c$ in terms of $a$ (namely $c=\sqrt{1-2a}/a^2$). These rearrangements show that $a$ must be smaller than $0.5$, and some trial and error shows that a bisection solver for $a$ in the interval $0.45 \lt a \lt 0.5$ can find the value of $a$ such that equations (2.1) to (2.3) are satisfied; a sketch of this bisection follows the table below.
I got the solving coefficients $$a=0.49789408...\\ b=0.87811291...\\ c=0.26179546...$$ Of course the approximations of $f(f(x))$ to $\exp(x)$ are only meaningful for $x$ in small intervals around zero, see the table of examples: $$\small \begin{array} {r|rr|r} x & f(x) & f(f(x)) & \exp(x) \\ \hline -1/2 & 0.12428649 & 0.61107564 & 0.60653066 \\ -1/4 & 0.29472807 & 0.77943937 & 0.77880078 \\ -1/8 & 0.39222052 & 0.88258179 & 0.88249690 \\ 0 & 0.49789408 & 1 & 1 \\ 1/8 & 0.61174875 & 1.1330520 & 1.1331485 \\ 1/4 & 0.73378452 & 1.2832008 & 1.2840254 \\ 1/2 & 1.0023994 & 1.6411672 & 1.6487213 \end{array}$$
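Here is a sketch of that bisection in Python, using the explicit formulas for $b$ and $c$ stated above (endpoint $0.5-10^{-12}$ is a numerical convenience):

    from math import sqrt

    def residual(a):
        c = sqrt(1 - 2*a) / a**2            # c in terms of a, from (2.1) and (2.2)
        b = -a*c + sqrt((a*c)**2 + 1)       # positive root of (2.2)
        return b*b*c + b*c + 2*a*c*c - 0.5  # equation (2.3); zero at the solution

    lo, hi = 0.45, 0.5 - 1e-12              # a must stay below 0.5
    for _ in range(60):                     # plain bisection
        mid = (lo + hi) / 2
        if residual(lo) * residual(mid) <= 0:
            hi = mid
        else:
            lo = mid

    a = (lo + hi) / 2
    c = sqrt(1 - 2*a) / a**2
    b = -a*c + sqrt((a*c)**2 + 1)
    print(a, b, c)  # ~ 0.49789408, 0.87811291, 0.26179546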


***Generalization*** I was not able to find similar analytical reductions for polynomials of higher degree, and had to switch to an iteration very similar to the Newton iteration for finding the square root, but applied to truncated Carleman matrices. Let $F$ denote the (truncated) Carleman matrix for $f(x)$ and $G$ that for $g(x)$, where $g(x)=f(f(x))$. Because we work with truncated Carleman matrices it is not exactly $F^2 = G$: either $F^2 = \hat G$ or $\hat F^2 = G$, where $\hat G$ resp. $\hat F$ is not of Carleman type. In my papers in the tetration forum I always used the version $\hat F^2 = G$, which means the matrix associated with $f(x)$ is not really Carleman. Here, however, the polynomial descriptions in (2.1) to (2.3), or their generalizations to higher degree, force the solution $F^2 = \hat G$, where $\hat G$ is allowed to be non-Carleman and only its second column contains the leading coefficients $(1,1,1/2!,1/3!,...)$ of the truncated exponential series.
***Example*** For polynomial degree 3, so $g_3(x)=1 + x + x^2/2 + x^3/6 + O(x^4)$ and $f_3(x)=a + bx + cx^2 + dx^3 $, we have $$F = \small \left[ \begin{array} {rrrr} 1 & a & a^2 & a^3 \\ 0 & b & 2ab & 3ba^2 \\ 0 & c & 2ac+b^2 & 3a^2c+3ab^2 \\ 0 & d & 2bc+2ad & 6abc+3a^2d+b^3 \end{array} \right]$$ with the idea $F^2 = \hat G$, actually using only the second column of $\hat G$, which reduces the notation to $$F \cdot F_{0..3,1} = \hat G_{0..3,1} = \left[1,1,\frac12,\frac16\right]$$ In a matrix-multiplication scheme this looks like $$ \begin{array}{r|c} \begin{array}{lr} F \cdot F_{0..3,1} = \hat G_{0..3,1} \qquad \qquad & \\ & \times \end{array} & \small \begin{bmatrix} a \\b \\ c \\ d \end{bmatrix}\\ \hline \small \left[ \begin{array} {rrrr} 1 & a & a^2 & a^3 \\ 0 & b & 2ab & 3ba^2 \\ 0 & c & 2ac+b^2 & 3a^2c+3ab^2 \\ 0 & d & 2bc+2ad & 6abc+3a^2d+b^3 \end{array} \right] & \small \begin{bmatrix} 1 \\1 \\ 1/2 \\ 1/6 \end{bmatrix} \end{array}$$

The task is to find the matrix $F$ which solves that equation.
I used the Newton algorithm for finding the square root, which in general goes this way:

  1. Initialize $F$ , for instance with the identity matrix
  2. iterate $F = ( G\cdot F^{-1} + F) / 2$ until convergence

Modification and improvement

But this does not ensure that $F$ becomes a Carleman matrix, so we need to implement a matrix function $\text{Carl}()$ which makes a (truncated) Carleman matrix from a set of coefficients in a column vector. Moreover, we do not have the full matrix $G$ as target, but only one column of $\hat G$, so we must also rewrite the iteration step.

Thus we go

1. Initialize $(3.1) \qquad \qquad F = \text{Carl}([0.49,0.88,0.23,0.1])$ with the values from the earlier three-term solution $(a,b,c)$ and one new coefficient. Call the relevant vector $T$: $ T=F_{0..3,1}$
    Initialize $Z = \hat G_{0..3,1}$ for notational convenience

  2. iterate
    $\qquad \qquad$ $ \displaystyle (3.2) \qquad T= (\text{Carl}(T)^{-1} \cdot Z + T)/2 \\ $
    $\hspace{160px}$ until convergence.

For polynomials of higher degree, to avoid divergence in step (3.2), I introduced a weighting $w >1 $ such that
$\qquad \qquad$ $ \displaystyle (3.2a) \qquad T= (1 \cdot \text{Carl}(T)^{-1} \cdot Z + w\cdot T)/(1+w) $
$\hspace{160px}$ Note: this is an update of a wrong formula in the original answer; I had $w$ at the wrong summand.
For degree $3$ it is already better to use $w=3$, and for degree 7 I needed about $w=30$. A code sketch of this iteration follows.
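Here is a compact sketch of steps (3.1)-(3.2a) in Python/numpy (an assumed substitute for my Pari/GP code; $\text{Carl}()$ is built by repeated convolution of coefficient vectors, and convergence for higher degrees may need tuning of $w$ and the iteration count, per the remarks above):

    import numpy as np
    from math import factorial

    def carl(T):
        # truncated Carleman matrix: column j = coefficients of f(x)^j
        n = len(T)
        F = np.zeros((n, n))
        col = np.zeros(n); col[0] = 1.0
        for j in range(n):
            F[:, j] = col
            col = np.convolve(col, T)[:n]
        return F

    def half_iterate(deg, w, iters=50000, tol=1e-15):
        # deg >= 3 assumed, so the 4-term initialization (3.1) fits
        n = deg + 1
        Z = np.array([1.0 / factorial(k) for k in range(n)])  # (1, 1, 1/2!, 1/3!, ...)
        T = np.zeros(n); T[:4] = [0.49, 0.88, 0.23, 0.1]      # initialization (3.1)
        for _ in range(iters):
            T_new = (np.linalg.solve(carl(T), Z) + w * T) / (1.0 + w)  # step (3.2a)
            if np.max(np.abs(T_new - T)) < tol:
                break
            T = T_new
        return T

    print(half_iterate(3, w=3.0))  # ~ (0.49857, 0.87638, 0.24739, 0.02412)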

Here is a table of coefficients computed for polynomial degrees $t=2$ to $t=10$:

deg at x^0      at x^1      at x^2      at x^3       at x^4          at x^5         at x^6          at x^7          at x^8           at x^9          at x^10
-----------------------------------------------------------------------------------------------------------------------
 2  0.49789408  0.87811291  0.26179546            .               .              .               .
 3  0.49857386  0.87638317  0.24739136  0.024116727               .              .               .
 4  0.49856631  0.87631901  0.24749709  0.024635189  -0.00068603672              .               .
 5  0.49856360  0.87633476  0.24754665  0.024575891  -0.00093201041  0.00026544139               .
 6  0.49856335  0.87633377  0.24755048  0.024583631  -0.00093843971  0.00023767204  0.000024982253
 7  0.49856319  0.87633587  0.24755370  0.024574086  -0.00095506183  0.00024635371  0.000070926074  -0.000035173120
 8  0.49856327  0.87633598  0.24755248  0.024572800  -0.00095232321  0.00025114814  0.000069595854  -0.000045880532   0.0000071061759
 9  0.49856326  0.87633615  0.24755257  0.024571913  -0.00095318813  0.00025236663  0.000071951101  -0.000046081230   0.0000026638321   0.0000025972359
10  0.49856328  0.87633614  0.24755226  0.024571792  -0.00095235183    0.00025322334  0.000071191576  -0.000047839864   0.0000025387777   0.0000055099949    -0.0000015206121
======= Kneser ========================================================================================================
 K  0.49856329  0.87633613  0.24755219  0.024571812  -0.00095213638  0.00025333982  0.000070927552  -0.000048180843 ...

Here is a table of numerical evaluations of $f_{10}(x)$ for small $x$:

    x    f(x)         f(f(x))      exp(x)    exp(x)-f(f(x))
  ----------------------------------------------------------
   -1 -0.15588343  0.36787877  0.36787944  0.00000067066910
  -1/2 0.11914585  0.60653066  0.60653066  0.00000000032149465
  -1/4 0.29456338  0.77880078  0.77880078  1.3751544 E-13
  -1/8 0.39284104  0.88249690  0.88249690  5.8314013 E-17
   0   0.49856328  1.0000000   1.0000000   1.5123856 E-30
  1/8  0.61202107  1.1331485   1.1331485  -2.7847843 E-17
  1/4  0.73349981  1.2840254   1.2840254  -7.2336095 E-15
  1/2  1.0016400   1.6487213   1.6487213   0.00000000030613469
   1   1.6463542   2.7182781   2.7182818   0.0000037234229

The polynomial of degree 10 already gives 6 digits of precision in the interval $-1 \le x \le 1$, and 10 digits in the interval $-0.5 \le x \le 0.5$.

Additional remark: this modification is better than what I got using my standard method of "polynomial interpolation" for tetration with degree-8 or degree-16 polynomials, which employs the ansatz $\hat F^2 = G \leadsto F=\sqrt G$; so this idea of the OP is really interesting. Unfortunately, the parameter $w$ in (3.2a) must be increased exponentially to achieve convergence, and the required number of iterations increases with that parameter, so I've only tested up to polynomials of order 15 so far, needing $w>6000$ and many thousands of iterations.

I think we can also allow somewhat larger $x$, because the coefficient of $x^7$ is of the order $10^{-5}$. We can also expect the coefficient of $x^8$ to be of that order. So I think we can use all values of $x$ for which $10^{-5}x^8$ is negligible, perhaps values of $x$ up to 3. –  Mar 16 '17 at 03:20
@Dove: yep. I think you have had a nice idea... :-) It is a helpful modification of my older procedure (which I had called "polynomial iteration", based on the truncated Carleman matrices), which converges much more slowly to the Kneser solution provided by Sheldon. I'm considering taking this method to the tetration forum (http://math.eretrandre.org/tetrationforum/index.php) - what do you think? – Gottfried Helms Mar 16 '17 at 03:28
  • Functional cube-roots can also be obtained by this method. If we assume $f(x)=ax^2+bx+c$, then $f(f(f(x)))$ will be of degree 8. Five terms will have to be ignored there. So, the functional-cube root will only work for small $x$. This method gives us the exact functional-root only when our assumed polynomial is of infinite degree. –  Mar 16 '17 at 03:31
@GottfriedHelms Yeah, sure. You can take it to the tetration forum. But I don't think this method is ingenious at all. I think I just got lucky that no one else thought about this simple method. –  Mar 16 '17 at 03:33
@Dove: ... which seems to be the Kneser solution. I had this hypothesis already, but not such good data. See my comparison treatise on 5 methods of interpolation for tetration: http://go.helms-net.de/math/tetdocs/ComparisionOfInterpolations.pdf , the last entry. – Gottfried Helms Mar 16 '17 at 03:34
But Kneser's solution only gives functional square roots. What about functional cube roots and other functions such that $f(f(f(....f(x)))...)=e^x$? This method can be used for that too. The computations are the only thing which makes this method ugly. –  Mar 16 '17 at 03:40
There's one advantage of using linear polynomials: you don't have to ignore any terms. So that gives us exact functional roots, but those functions are local to points. I got this function which, when applied twice to $c$, gives exactly $a^c$; the problem is that it itself depends upon $c$: $$x\sqrt{a^c\log_ea}+\frac{a^c(1-c\log_ea)}{\sqrt{a^c\log_ea}+1}$$ Similarly, I got the $n^{th}$ functional root to be: $$x(a^c\log_ea)^{\frac{1}{n}}+\frac{a^c(1-c\log_ea)((a^c\log_ea)^{\frac{1}{n}}-1)}{a^c\log_ea-1}$$ –  Mar 16 '17 at 03:43
@Dove: Well, the method shown here is for the square root only, true. But the Kneser method can be extended to any fractional iteration "height" (you may play with the Pari/GP implementation of Sheldon Levenstein), just as the matrix square root can be extended to any fractional (and even complex) power via diagonalization (or even your proposed modification of the Newton iteration). Unfortunately I do not see at the moment a way to restrict the diagonalization procedure such that the resulting fractional power is a truncated Carleman matrix... – Gottfried Helms Mar 16 '17 at 03:49
  • You might also be interested in this: http://tetration.org/IF.pdf. A user named David Geisler gave this link to me. But I couldn't understand it myself because I don't know anything beyond high-school mathematics. –  Mar 16 '17 at 03:55
  • What do you think about those local functional-roots of mine which depend upon $c$? –  Mar 16 '17 at 04:06
@Dove - sorry, it is late at night here and I have no idea yet. Let's see what I can do tomorrow... – Gottfried Helms Mar 16 '17 at 04:11
  • 1
@GottfriedHelms Here is a plot of the absolute values of the coefficients in the Taylor series of the half iterate of the exponential, on a log scale (blue points represent positive coefficients, red negative ones): https://i.sstatic.net/7gWw9.png – Vladimir Reshetnikov Oct 30 '20 at 23:59
@VladimirReshetnikov - nice! How did you get 12 coefficients with this method? I needed many digits of internal precision for only 7 or even 8 of them, so I gave up after that ... – Gottfried Helms Oct 31 '20 at 00:53
  • 1
@VladimirReshetnikov - when we discussed this topic at the end of October, I didn't remember that I had done a small study of the coefficients of tetration power series, though of the $t^x-1$ type (or $\exp( \ln t \cdot x)-1$), for some fractional heights. To get convergent series I tried a Stirling transformation, and this looked nice. It was 2008 and I had not much experience with this subject. Here is the text on my webspace: http://go.helms-net.de/math/tetdocs/CoefficientsUtFractionalHeight.htm Perhaps it gives an idea to try this with tetration itself. – Gottfried Helms Dec 12 '20 at 00:49