13

In general, are there any clever tricks to help find the roots of a finite Fourier series? Presumably there aren't analytic methods, but can we use the fact that our function is a finite Fourier series to facilitate numerical methods?

The hardest part of numerical root finding is bracketing the root, so is there a way we can do that easily? I know for a general function it's a hard problem.

Jay Lemmon
  • 2,228
  • There are methods that depend on calculating eigenvalues of a special matrix that depends on the Fourier series coefficients. I'm googling around a bit, but I know the Chebfun package for MATLAB does something similar, but for a Chebyshev polynomial instead of a Fourier series. – rajb245 Apr 24 '13 at 01:09
  • 1
    To go further, it seems the latest review article on the topic is called "Computing the zeros, maxima and inflection points of Chebyshev, Legendre and Fourier series: solving transcendental equations by spectral interpolation and polynomial rootfinding" – rajb245 Apr 24 '13 at 01:17
  • And finally, I was wrong about that being a review article. It is (according to the authors) the first work to publish the result. They give the formulation of how to construct the aforementioned matrix, which is in terms of the Fourier series coefficients. You calculate the eigenvalues of that matrix, and calculate some simple functions on those eigenvalues to give you the roots of your Fourier series. – rajb245 Apr 24 '13 at 01:33
  • @rajb245 That looks really interesting. Unfortunately I don't have access to Springer Journals, and I don't see a free PDF online anywhere. :/ – Jay Lemmon Apr 24 '13 at 03:33
  • 2
    By writing the sines and cosines in terms of exponentials you can analytically approximate the large zeros of your sum with exponentially decreasing absolute error. I treat a simple example in my answer to this question and give a reference to a paper which addresses the general case. – Antonio Vargas Apr 26 '13 at 00:16

2 Answers

15

The short answer to your question is "yes": you can exploit the structure of the Fourier series to reduce your rootfinding problem to a matrix eigenvalue problem, which can then be solved efficiently with the QR algorithm. None of this is my own; I'm paraphrasing a 2006 paper by John P. Boyd at the University of Michigan, entitled "Computing the zeros, maxima and inflection points of Chebyshev, Legendre and Fourier series: solving transcendental equations by spectral interpolation and polynomial rootfinding". Let a finite Fourier series with $2N+1$ terms be given by

$$ f_N(t) = \sum_{j=0}^N a_j\cos(jt) + \sum_{j=1}^N b_j\sin(jt) $$

Define the following $2N+1$ coefficients:

$$ h_k = \left\{\begin{array}{ll} a_{N-k}+i b_{N-k}, & k=0,1,\ldots,(N-1)\\ 2 a_0 & k=N \\ a_{k-N} - i b_{k-N} & k=N+1,N+2,\ldots, 2N \end{array}\right. $$

And define the following $2N\times2N$ matrix $\bf B$ with entries $B_{jk}$ at indices $j,k$ as $$ B_{jk} = \left\{ \begin{array}{ll} \delta_{j,k-1}, & j=1,2,\ldots,(2N-1)\\ - \frac{h_{k-1}}{a_N - i b_{N}} & j=2N \end{array}\right. $$

The delta above is the Kronecker delta that is one when its two arguments are the same, zero otherwise. Let the matrix $\bf B$ have eigenvalues given by $z_k$, then the roots of $f_N(t)$ are given by

$$ t_k = -i \log(z_k) $$

In general the $z_k$ will be complex or negative, so the logarithm here is the complex logarithm (this nicely captures that each root is periodic, the expected behavior for a periodic function):

$$ \log(z) = \log(|z|) + i (\arg(z)+2\pi m) \;\;\text{for integer }m $$

This gives a final formula for the periodic roots:

$$ t_k = \arg(z_k)+2\pi m-i \log(|z_k|),\;\;k=1,2,\ldots,2N,\;\;m\text{ an integer} $$

So the numerical task of Fourier-series rootfinding reduces to computing the eigenvalues of the matrix $\bf B$.
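For illustration, here's a minimal NumPy sketch of the construction above (the function name `fourier_roots` and the convention that `a` and `b` are indexed from $0$ to $N$ are my own choices, not from the paper):

```python
import numpy as np

def fourier_roots(a, b):
    """Roots of f(t) = sum a[j] cos(j t) + sum b[j] sin(j t), j = 0..N,
    via the companion-matrix method above (b[0] is ignored; assumes
    the leading coefficient a[N] - i b[N] is nonzero)."""
    N = len(a) - 1
    # The 2N+1 coefficients h_k of the degree-2N polynomial in z = exp(i t).
    h = np.empty(2 * N + 1, dtype=complex)
    for k in range(N):
        h[k] = a[N - k] + 1j * b[N - k]
    h[N] = 2 * a[0]
    for k in range(N + 1, 2 * N + 1):
        h[k] = a[k - N] - 1j * b[k - N]
    # Companion matrix B: ones on the superdiagonal, scaled -h on the last row.
    B = np.zeros((2 * N, 2 * N), dtype=complex)
    B[np.arange(2 * N - 1), np.arange(1, 2 * N)] = 1.0
    B[-1, :] = -h[:2 * N] / (a[N] - 1j * b[N])
    z = np.linalg.eigvals(B)
    # t = -i log z, principal branch; eigenvalues with |z| = 1 give real roots.
    return np.angle(z) - 1j * np.log(np.abs(z))
```

For example, with $f(t)=\cos t$ (so $N=1$, $a=(0,1)$, $b=(0,0)$) the matrix is $2\times 2$, its eigenvalues are $\pm i$, and the computed roots are $\pm\pi/2$.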

rajb245
  • 4,875
0

There's a way to do this by transforming it into an equivalent problem: finding polynomial roots.

Let $f_n$ be your finite Fourier series with $n$ harmonics:

$$ f_n(t) = \sum_{j=0}^{n} a_j \cdot \cos \left(jt\right) + \sum_{j=1}^{n} b_j \cdot \sin \left(jt\right) $$

Use the relations

$$ u = \tan \left(\dfrac{t}{2}\right) $$ $$ \sin t = \dfrac{2u}{1+u^2} \in \nu\left(R_2\right) $$ $$ \cos t = \dfrac{1-u^2}{1+u^2} \in \nu\left(R_2\right) $$

$\nu\left(R_{p}\right)$ is the space of all rational polynomial functions of degree at most $p$

It's easy to see that $\cos jt$ and $\sin jt$ lie in $\nu\left(R_{2j}\right)$:

$$ \sin 2t = 2 \sin \left(t\right)\cos \left(t\right) = \dfrac{2 \left(2u\right)\left(1-u^2\right)}{\left(1+u^2\right)^2} \in \nu\left(R_{4}\right) $$ $$ \cos 2t = \cos^2 \left(t\right) - \sin^2\left(t\right) = \dfrac{\left(1-u^2\right)^2 - \left(2u\right)^2}{\left(1+u^2\right)^2} \in \nu\left(R_{4}\right) $$

$$ \sin jt = \underbrace{\sin \left((j-k)t\right)}_{\in \ \nu\left(R_{2j-2k}\right)} \cdot \underbrace{\cos \left(kt\right)}_{\in \ \nu\left(R_{2k}\right)} + \underbrace{\cos\left((j-k)t\right)}_{\in \ \nu\left(R_{2j-2k}\right)} \cdot \underbrace{\sin \left(kt\right)}_{\in \ \nu\left(R_{2k}\right)} $$ $$ \cos jt = \underbrace{\cos \left((j-k)t\right)}_{\in \ \nu\left(R_{2j-2k}\right)} \cdot \underbrace{\cos \left(kt\right)}_{\in \ \nu\left(R_{2k}\right)} - \underbrace{\sin\left((j-k)t\right)}_{\in \ \nu\left(R_{2j-2k}\right)} \cdot \underbrace{\sin \left(kt\right)}_{\in \ \nu\left(R_{2k}\right)} $$ $$ \sin jt = \dfrac{\square + \square \cdot u + \cdots + \square \cdot u^{2j}}{\left(1+u^2\right)^j} \in \nu\left(R_{2j}\right) $$ $$ \cos jt = \dfrac{\square + \square \cdot u + \cdots + \square \cdot u^{2j}}{\left(1+u^2\right)^j} \in \nu\left(R_{2j}\right) $$

Expanding the numerators over the common denominator $\left(1+u^2\right)^n$, you find a polynomial $q(u)$ of degree at most $2n$:

$$f_{n}(t) \equiv p(u) = \dfrac{q(u)}{\left(1+u^2\right)^{n}} = \dfrac{q_0 + q_1 \cdot u + \cdots + q_{2n} \cdot u^{2n}}{\left(1+u^2\right)^{n}}$$

Hence, finding the real roots of $f_n$ is the same as finding the real roots of $p(u)$, which (since $1+u^2$ never vanishes for real $u$) are the same as those of $q(u)$.

Since $q(u)$ is a polynomial of degree at most $2n$, it has at most $2n$ roots.

If you find a root $u^{\star}$ such that $q(u^{\star}) = 0$, then the corresponding $t^{\star}$ is a root of $f_{n}$:

$$ t^{\star} = 2 \arctan\left(u^{\star}\right) $$

Note that for $u^{\star} \in \mathbb{R}$, we get $t^{\star} \in \left(-\pi, \ \pi\right)$.

Note the edge case: $t^{\star} = \pi$ can be a root, which would correspond to $u^{\star} = \pm \infty$. This case shows up when $p(u)$ approaches zero as $u \to \infty$, meaning:

$$t^{\star} = \pi \Leftrightarrow \lim_{u \to \infty} p(u) = 0 \Leftrightarrow \text{degree}(q) < 2n$$

It's not a problem, since it's a single value that can easily be checked beforehand.
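As a sanity check, the whole procedure can be sketched with NumPy's polynomial utilities (the function name and the angle-addition recurrence used to build the numerators of $\cos jt$ and $\sin jt$ are my own implementation choices):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def fourier_roots_tan(a, b):
    """Real roots in (-pi, pi) of f(t) = sum a[j] cos(jt) + sum b[j] sin(jt),
    j = 0..n, via the substitution u = tan(t/2). Coefficient arrays are
    low-to-high degree; b[0] is ignored."""
    n = len(a) - 1
    one_plus_u2 = np.array([1.0, 0.0, 1.0])          # 1 + u^2
    C, S = np.array([1.0]), np.array([0.0])          # numerators of cos(0t), sin(0t)
    C1, S1 = np.array([1.0, 0.0, -1.0]), np.array([0.0, 2.0])  # 1 - u^2, 2u
    q = np.array([0.0])
    for j in range(n + 1):
        # cos(jt) = C/(1+u^2)^j, sin(jt) = S/(1+u^2)^j; bring each term to the
        # common denominator (1+u^2)^n by multiplying with (1+u^2)^(n-j).
        scale = np.array([1.0])
        for _ in range(n - j):
            scale = P.polymul(scale, one_plus_u2)
        bj = b[j] if j > 0 else 0.0
        q = P.polyadd(q, P.polymul(P.polyadd(a[j] * C, bj * S), scale))
        # Angle-addition recurrence for the next harmonic's numerators.
        C, S = P.polysub(P.polymul(C, C1), P.polymul(S, S1)), \
               P.polyadd(P.polymul(S, C1), P.polymul(C, S1))
    u = P.polyroots(q)
    u = u.real[np.abs(u.imag) < 1e-9]                # keep the real roots
    return 2.0 * np.arctan(u)
```

For example, for $f_1(t) = \tfrac12 + \sin t$ this builds $q(u) = \tfrac12 u^2 + 2u + \tfrac12$, whose roots $u^{\star} = -2 \pm \sqrt{3}$ map to $t^{\star} = -5\pi/6$ and $t^{\star} = -\pi/6$.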

Carlos Adir
  • 1,394