
I want to find a square root of a power series in the following way:

Let $$A(x)=a_0+a_1x+\cdots=\sum_{j\ge 0}a_jx^j\qquad \text{and}\qquad B(x)=b_0+b_1x+\cdots=\sum_{i\ge 0}b_ix^i$$ such that $a_0>0$, $b_0>0$, and $A=B^2$.

How can I express the coefficients $a_j$ in terms of $b_i$? I used the Cauchy product to get $a_j=\sum_{k=0}^jb_kb_{j-k}$, but have no idea how to continue.

Can someone help me out with finding a recursive relation between the $a_j$ and the $b_i$, with an initial value for the sequence $(b_i)$?

mary
    You just did that. Do you want $b$ in terms of $a$? – Stefano Mar 28 '17 at 11:47
  • As you found, it will be a self-convolution of the $b_j$s. You can also do it by taking a discrete Fourier transform of the coefficients, taking square roots, and inverse-transforming. – mathreadler Mar 28 '17 at 11:57

5 Answers


From the Cauchy product, equate the coefficients of successive powers of $x$:

$$b_0^2=a_0,\\ 2b_0b_1=a_1,\\ 2b_0b_2+b_1^2=a_2,\\ 2b_0b_3+2b_1b_2=a_3,\\ 2b_0b_4+2b_1b_3+b_2^2=a_4,\\ \cdots$$

Then $$b_0=\sqrt{a_0},\\ b_1=\frac{a_1}{2b_0},\\ b_2=\frac{a_2-b_1^2}{2b_0},\\ b_3=\frac{a_3-2b_1b_2}{2b_0},\\ b_4=\frac{a_4-b_2^2-2b_1b_3}{2b_0},\\ \cdots$$
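This back-substitution is easy to mechanize: each identity has exactly one new unknown $b_k$, giving an $O(n^2)$ procedure. Below is a minimal Python sketch (the function name `series_sqrt` and the test series $1+x$ are my choices, not part of the answer):

```python
import math

def series_sqrt(a, n_terms):
    """Coefficients of B(x) = sqrt(A(x)) via the recurrence above.

    Assumes a[0] > 0; coefficients of A beyond len(a) are treated as 0.
    """
    b = [math.sqrt(a[0])]
    for k in range(1, n_terms):
        a_k = a[k] if k < len(a) else 0.0
        cross = sum(b[i] * b[k - i] for i in range(1, k))  # terms without b_k
        b.append((a_k - cross) / (2.0 * b[0]))
    return b

# sqrt(1 + x) = 1 + x/2 - x^2/8 + x^3/16 - 5x^4/128 + ...
print(series_sqrt([1.0, 1.0], 5))  # [1.0, 0.5, -0.125, 0.0625, -0.0390625]
```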


There is no known closed form of $\{b_k\}_{k \in \mathbb{N}}$ in terms of $\{a_k\}_{k \in \mathbb{N}}$ in general.

Some particular cases are remarkable.

  • For $x \in \mathbb{R}$, we have $A=B^2$, with $$ A=e^x=\sum_{k=0}^{\infty} \frac{x^k}{k!},\quad B=\sqrt{e^x}=e^{\large\frac{x}2} = \sum_{k=0}^{\infty} \frac{x^k}{2^k k!}. \tag1$$
  • For $|x|<1$, we have $A=B^2$, with $$ A=1+x,\quad B=\sqrt{1+x}=(1+x)^{1/2} = \sum_{k=0}^{\infty} \binom{1/2}{k}x^k \tag2$$ and generally, for $|x|<1$, $$ A=(1+x)^{2\alpha},\quad B=(1+x)^{\alpha} = \sum_{k=0}^{\infty} \binom{\alpha}{k}x^k \tag3$$ where $$ \binom{\alpha}{k}:=\frac{\alpha(\alpha-1)\ldots(\alpha-k+1)}{k!},\quad \alpha \in \mathbb{R}. \tag4$$
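These expansions are easy to check numerically. The following sketch (my own illustration, not part of the answer) builds the truncated series from $(2)$ out of the generalized binomial coefficients in $(4)$, squares it by self-convolution, and recovers the low-order coefficients of $1+x$:

```python
import numpy as np

def gen_binom(alpha, k):
    """Generalized binomial coefficient alpha-choose-k from (4)."""
    out = 1.0
    for j in range(k):
        out *= (alpha - j) / (j + 1)
    return out

n = 8
b = [gen_binom(0.5, k) for k in range(n)]  # series (2) for sqrt(1 + x)
a = np.convolve(b, b)[:n]  # self-convolution; truncation leaves degrees < n intact
print(np.round(a, 12))     # ~ [1, 1, 0, 0, 0, 0, 0, 0]
```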
Olivier Oloa

What you are asking for is a special case of Faà di Bruno's formula, which expresses the Taylor coefficients of the composition $f(g(x))$ of two functions in terms of the coefficients of the individual functions $f(t)$ and $g(x)$. Your question is simply the case $f(t)=\sqrt{t}$, so just plug that function into the formula.

Note that Faà di Bruno's formula is fairly complicated: the $n$th coefficient of $f(g(x))$ is expressed as a sum over the $p(n)$ integer partitions of $n$. Since the number of partitions grows exponentially in $\sqrt{n}$, that can mean a lot of terms to process even for moderately large $n$.
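One way to avoid writing the partition sum by hand is the partial Bell polynomial form of the formula, $\frac{d^n}{dx^n}f(g(x)) = \sum_{k=1}^{n} f^{(k)}(g(x))\, B_{n,k}\bigl(g'(x),\ldots,g^{(n-k+1)}(x)\bigr)$. Here is a sketch using SymPy's `bell`; the test series $A(x)=e^x$, whose square root is $e^{x/2}$, is my choice of example:

```python
import sympy as sp

x, t = sp.symbols('x t')
A = sp.exp(x)      # hypothetical test series; sqrt(exp(x)) = exp(x/2)
n_max = 5

# Derivatives of the inner function at 0 and of f(t) = sqrt(t) at t = A(0).
g = [sp.diff(A, x, j).subs(x, 0) for j in range(n_max + 1)]
f = [sp.diff(sp.sqrt(t), t, k).subs(t, g[0]) for k in range(n_max + 1)]

# Placeholder symbols x1, x2, ... for A'(0), A''(0), ... inside B_{n,k}.
xs = sp.symbols(f'x1:{n_max + 1}')
vals = {xs[j]: g[j + 1] for j in range(n_max)}

def nth_deriv_of_sqrt_A(n):
    """Faa di Bruno: sum_k f^(k)(A(0)) * B_{n,k}(A'(0), ..., A^(n-k+1)(0))."""
    if n == 0:
        return f[0]
    return sum(f[k] * sp.bell(n, k, xs[:n - k + 1]).subs(vals)
               for k in range(1, n + 1))

coeffs = [nth_deriv_of_sqrt_A(n) / sp.factorial(n) for n in range(n_max + 1)]
print(coeffs)  # [1, 1/2, 1/8, 1/48, 1/384, 1/3840] -- the series of exp(x/2)
```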

Dan Romik

We can also use the convolution theorem in Fourier analysis $$\widehat{f*g} = \hat f \cdot \hat g,$$ where $*$ denotes convolution and $\cdot$ is the ordinary product.

We can do this because the product of two polynomials or power series corresponds to a discrete convolution of their coefficients, so it becomes a pointwise product once we step over into the Fourier domain.

What remains is to solve $x^2 = c$ in the Fourier domain, once for each Fourier coefficient. The trivial solution is to choose $x = \sqrt{c}$ for some suitable branch of the complex square root, used consistently across coefficients. Then, lastly, perform the inverse transform.
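Here is a minimal NumPy sketch of this approach; the test series (the start of $\sqrt{1+x}$) and the padding length are my choices. Note the branch caveat: the pointwise principal square root recovers $b$ exactly only when it agrees with the transform of $b$ at every frequency.

```python
import numpy as np

# Test data: b is the start of the series for sqrt(1 + x); a holds the
# coefficients of b(x)^2, i.e. the self-convolution of b.
b = np.array([1.0, 0.5, -0.125, 0.0625])
a = np.convolve(b, b)  # length 2*len(b) - 1

# At this length the DFT sees the full linear convolution without wrap-around,
# so fft(a) == fft(b, len(a))**2 exactly.
A_hat = np.fft.fft(a)
B_hat = np.sqrt(A_hat)                 # principal branch, coefficient-wise
b_rec = np.fft.ifft(B_hat).real[:len(b)]

# Here Re(fft(b, len(a))) > 0 at every frequency, so the principal branch
# is the right one and b is recovered exactly.
print(np.allclose(b_rec, b))           # True
```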

mathreadler

Let's consider $B(x) = A^\alpha(x)$. (My understanding is that OP's question is ultimately the case $\alpha=1/2$, expressing the $B_i$ in terms of the $A_i$.) If $B(x) = A^\alpha(x)$, then

\begin{align*} B_k &= \frac{1}{k A_{0}} \sum_{i=1}^{k} (i(\alpha+1) - k) A_i B_{k-i}\\ \text{and }B_0 &= A_0^\alpha. \end{align*}

Proof: Since $\alpha$ and $a$ look a little too similar on the page, we're going to deviate slightly from OP's notation and write $A(x) = \sum A_i x^i$ and $B(x) = \sum B_i x^i$.

\begin{align*} B(x) &= A^\alpha(x) \\ B'(x) &= \alpha A^{\alpha-1}(x)A'(x). \\ A(x)B'(x) &= \alpha A^{\alpha}(x)A'(x) \\ &= \alpha A'(x) B(x) \\ \end{align*}

Now let's look at the terms of degree $k-1$ on each side of $A(x)B'(x) = \alpha A'(x) B(x)$.

\begin{align*} \require{color} \sum_{j=0}^{\color{red}{k-1}} (j+1) A_{k-j-1} B_{j+1} &= \sum_{i=0}^{\color{red}{k-1}} \alpha (i+1) A_{i+1} B_{k-i-1} \\ \color{red}{k A_{0} B_{k} } \color{blue}{ + \sum_{j=0}^{k-2} (j+1) A_{k-j-1} B_{j+1}} &= \color{red}{\alpha k A_{k} B_{0}} \color{black}{+} \sum_{i=0}^{k-2} \alpha (i+1) A_{i+1} B_{k-i-1} \\ k A_{0} B_{k} &= \alpha k A_{k} B_{0} + \sum_{i=0}^{k-2} (i+1) \alpha A_{i+1} B_{k-i-1} \color{blue}{- \sum_{j=0}^{k-2} (j+1) A_{k-j-1} B_{j+1} }\\ \end{align*}

If we let $i = k - 2 - j$, basically just reversing the order we do this sum, we see that $\sum_{j=0}^{k-2} (j+1) A_{k-j-1} B_{j+1} = \sum_{i=0}^{k-2} (k-i-1) A_{i+1} B_{k-i-1}$. And so,

\begin{align*} k A_{0} B_{k} &= \alpha k A_{k} B_{0} + \sum_{i=0}^{k-2} \bigl(\alpha(i+1) A_{i+1} B_{k-i-1} - (k-i-1) A_{i+1} B_{k-i-1}\bigr) \\ &= \alpha k A_{k} B_{0} + \sum_{i=0}^{k-2} (\alpha(i + 1) - k + (i + 1)) A_{i+1} B_{k-i-1} \\ k A_{0} B_{k} &= \alpha k A_{k} B_{0} + \sum_{i=0}^{k-2} ((\alpha+1)(i + 1) - k) A_{i+1} B_{k-i-1} \\ B_{k} &= \alpha A_{k} \frac{B_{0}}{A_{0}} + \frac{1}{k A_{0}} \sum_{i=0}^{k-2} ((\alpha+1)(i + 1) - k) A_{i+1} B_{k-i-1} \\ \end{align*}

And we can make this a little tighter by messing with the index on the summation: \begin{align*} B_{k} &= \color{red}{\alpha A_{k} \frac{B_{0}}{A_{0}}} \color{black}{+} \frac{1}{k A_{0}} \sum_{i=1}^{k-1} (i(\alpha+1) - k) A_i B_{k-i}\\ &= \frac{1}{k A_{0}} \sum_{i=1}^{\color{red}{k}} (i(\alpha+1) - k) A_i B_{k-i}\\ \end{align*}

If we go back to $B(x) = A^\alpha(x)$ and set $x=0$ on both sides, that gives us $B_0 = A_0^\alpha$. $\tag*{$\blacksquare$}$

This argument is from Donald Knuth's The Art of Computer Programming, Vol. 2, Section 4.7, where he attributes it to Leonhard Euler's Introductio in Analysin Infinitorum. Knuth assumes that $A_0 = 1$ and consequently $B_0 = 1$.
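For reference, here is a short sketch of the recurrence in exact rational arithmetic (the function name is mine; as in Knuth, it assumes $A_0 = 1$ so that $B_0 = A_0^\alpha = 1$ needs no root extraction):

```python
from fractions import Fraction

def power_series_pow(A, alpha, n_terms):
    """Coefficients of B(x) = A(x)^alpha via the recurrence above (A[0] == 1)."""
    B = [Fraction(1)]
    for k in range(1, n_terms):
        # A_i = 0 for i >= len(A), so the sum stops early for polynomials.
        s = sum((i * (alpha + 1) - k) * A[i] * B[k - i]
                for i in range(1, min(k, len(A) - 1) + 1))
        B.append(s / (k * A[0]))
    return B

# alpha = 1/2 reproduces the binomial series for sqrt(1 + x):
A = [Fraction(1), Fraction(1)]
print(power_series_pow(A, Fraction(1, 2), 6))
# [1, 1/2, -1/8, 1/16, -5/128, 7/256]
```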

Now, if you're like me when I first read this argument, you might find this all a little suspicious. If $A(x)=1+5x+10x^2+10x^3+5x^4+x^5$ and $\alpha=1/5$, it looks like we can conclude that $(1 + 5x + 10x^2 + 10x^3 + 5x^4 + x^5)^{1/5} = 1 + x$ by computing just $B_0$ and $B_1$, using only the information in $A_0$ and $A_1$. But that conclusion doesn't follow yet. These computations treat the polynomials as power series, so to show that $A^{1/5}(x)=1+x$ we must also show that $B_i=0$ for all $i>1$, which isn't hard but may not be immediately obvious. For $i \in [2,6]$, a lot of nice cancellation happens. For $i>6$, the combination of $A_i=0$ and $B_j=0$ for $j \in [2,i-1]$ makes it easy to argue that $B_i=0$ for all $i>6$ as well. The short of it is that you really do need all the information in $A(x)$ to argue that $B(x)$ has finite degree, i.e. that it actually is a polynomial. That's all peripheral to OP's question, but I get why this might be somewhat controversial: it is a closed-form answer that doesn't have that illuminating shine.
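To watch that cancellation happen concretely, one can reuse `power_series_pow` from the sketch after the proof above (this snippet depends on that earlier block):

```python
# Reuses power_series_pow and Fraction from the sketch above.
A = [Fraction(c) for c in (1, 5, 10, 10, 5, 1)]   # (1 + x)^5
print(power_series_pow(A, Fraction(1, 5), 10))
# [1, 1, 0, 0, 0, 0, 0, 0, 0, 0] -- every B_i with i > 1 cancels to zero
```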