30

On a closed interval (e.g. $[-\pi, \pi]$), $\cos{x}$ has finitely many zeros. Thus I wonder if we could fit a finite degree polynomial $p:\mathbb{R} \to \mathbb{R}$ perfectly to $\cos{x}$ on a closed interval such as $[-\pi, \pi]$.

The Taylor series is

$$\cos{x} = \sum_{i=0}^{\infty} (-1)^i\frac{x^{2i}}{(2i)!} = 1 - \frac{x^2}{2} + \frac{x^4}{4!} - \frac{x^6}{6!} + \frac{x^8}{8!}-\dots$$

Using Desmos to graph $\cos{x}$ and $1-\frac{x^2}{2}$ yields:

cosine x and first 2 terms of its Taylor series

which is clearly imperfect on $[-\pi,\pi]$. Using a degree 8 polynomial (the first 5 terms of the Taylor series above) looks more promising:

cosine x and first 5 terms of its Taylor series

But upon zooming in very closely, the approximation is still imperfect:

cosine x and first 5 terms of its Taylor series near x=pi

There is no finite degree polynomial that equals $\cos{x}$ on all of $\mathbb{R}$ (although I do not know how to prove this either), but can we prove that no finite degree polynomial can perfectly equal $\cos{x}$ on any closed interval $[a,b]\subseteq \mathbb{R}$? Would it be as simple as proving that the remainder term in Taylor's Theorem cannot equal 0? But this would only prove that no Taylor polynomial can perfectly fit $\cos{x}$ on a closed interval...
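To put a number on "still imperfect," here is a quick numeric check (a Python sketch, not part of the original question) of the worst-case gap between $\cos x$ and the degree-8 Taylor polynomial on $[-\pi,\pi]$:

```python
import math
import numpy as np

# Degree-8 Taylor polynomial of cos(x): the first 5 terms of the series above
def taylor8(x):
    return sum((-1)**i * x**(2*i) / math.factorial(2*i) for i in range(5))

xs = np.linspace(-np.pi, np.pi, 10001)
err = np.max(np.abs(np.cos(xs) - taylor8(xs)))
print(err)  # about 0.024, attained near the endpoints x = ±pi
```

The alternating-series bound $\pi^{10}/10! \approx 0.026$ predicts the same order of magnitude, so the mismatch seen when zooming in near $x=\pi$ is real, not a plotting artifact.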

jskattt797
    A finite degree polynomial has a finite number of zeros on all of $\mathbb{R}$ by the fundamental theorem of algebra. $\cos(x)$ has an infinite number of zeros e.g by periodicity. So they cannot be equal over all of $\mathbb{R}$. – NickD Apr 23 '20 at 22:29
  • 16
    Of course, there's none, since the rigorous definition of cosine is precisely that it is the sum of the series $\sum_{i=0}^\infty (-1)^i\frac{x^{2i}}{(2i)!}$. Any finite number of terms will necessarily be only an approximation. – Bernard Apr 23 '20 at 22:30
  • @Bernard Is it obvious a-priori that such an infinite sum could not work out to be equal to some finite polynomial? Not necessarily a sum of some finite number of the terms in the sequence, just a polynomial of some description? – MegaWidget Apr 24 '20 at 19:31
  • 3
    It is indeed clear, essentially because 1) a polynomial is its own Taylor series and 2) the Taylor series of a given function is unique. – Bernard Apr 24 '20 at 19:43
  • 3
    @Bernard I don't think you're factoring in the expertise of the OP. I don't know their background, but I doubt this is obvious to them if they're asking. What's more, I doubt that the concept of Taylor series would be obvious to many people if we took a random sampling. The idea that an infinite sum can be convergent confused a lot of my fellow students back when I took calculus. As a trained biotech, sigma factors are obvious to... me and the minority of other people who've studied them. This is a public Q&A, pls don't presume expertise. – Galen Apr 24 '20 at 21:03
  • I'm not complaining, but it appears that Arthur's answer to this question gave a proof that a polynomial cannot be embedded in a cosine-like function. What do you want? Were you looking for a more intuitive proof? I'm not sure if this is the case, but if you don't put a check mark beside any answers, it becomes harder for people to learn what type of answer tends to solve your problem and write one that solves your problem. Is that really what you want? I think the easiest way for me to learn is if you try and figure out yourself what you were confused about and how you resolved your confusion. – Timothy Apr 24 '20 at 21:43
  • From my perspective, there's more than one way you could have been confused, and I have no way to know in which way you are confused. – Timothy Apr 24 '20 at 21:44
  • @Galen: But it is part of the basic curriculum that the coefficients of the Taylor series are linked in a quite simple way to their successive derivatives, say at $0$. And two polynomials which have the same derivatives at $0$ are equal, since Taylor's formula is exact for polynomials (I learnt that as a first year student). – Bernard Apr 24 '20 at 21:45
  • @Bernard I learned it as a first year student as well, but that's not a criterion for 'obvious' on a public Q&A site. – Galen Apr 24 '20 at 21:49
  • 3
    To quote the description of Mathematics.SE, it is For people studying math at any level and professionals. Any level includes not knowing Taylor series. – Galen Apr 24 '20 at 21:53
  • Is my understanding correct? First, cosine may be defined by its Taylor series at 0. Second, the Taylor series of a polynomial at 0 is itself (I am not sure how to prove this, but the integral form of the remainder is 0 for the polynomial's Taylor polynomial...) Third, a function's Taylor series at a given point is unique. Since the Taylor series of a polynomial is not the Taylor series of cosine at the point 0, they cannot be the same function. However, does this necessarily imply that they cannot be the same function on a closed interval? – jskattt797 Apr 25 '20 at 06:36
  • 1
    You quote the series expansion of cosine. It has infinitely many terms. Therefore, if an expression does not have infinitely many terms, it is not cosine. Same applies for sine. The expansion is true for all $x\in\Bbb R$, and it's also true for all $x\in[a,b]$. – gen-ℤ ready to perish Apr 25 '20 at 18:37
  • 1
    @jskattt797 - Theoretically, it's obvious that the Taylor series of a polynomial is itself. A Taylor series is a limit of the best possible polynomial approximations to a given function (in a precisely definable sense of "best"). If anyone has a definition of "best possible polynomial approximation" such that some other polynomial would be a better approximation to $p(x)$ than $p(x)$ itself, then the definition of "best possible" is highly suspect, wouldn't you say? Not a proof, but you shouldn't think too hard about the details here before you have some intuition for it. – Dark Malthorp Apr 25 '20 at 20:28
  • @jskattt797 - If that fact is not intuitively clear, that's OK. But in that case, you should spend some time trying to understand what Taylor's theorem means on a non-technical "plain English" level. This will also make understanding the formal statement of it easier! – Dark Malthorp Apr 25 '20 at 20:33
  • @MegaWidget: I do not think it obvious that a power series with infinitely many non-$0$ coefficients cannot coincide with a polynomial on a real interval of positive length. It *does* follow readily from some important basic results in complex analysis. – DanielWainfleet Apr 26 '20 at 16:53
  • 2
    @Bernard: Describing it as clear is pointless and unnecessarily intimidating to the OP. It should be clear to you that if it was clear to the OP then they would not have asked. Besides, many things can be 'clear' from an intuitive perspective but painful to prove (the Jordan curve theorem, for example, or the fact that $x \mapsto \cos ( n \arccos x)$ has a finite Taylor series). – copper.hat Apr 28 '20 at 19:52
  • @copper.hat: But I recalled the arguments for why it should be clear, which the OP did not necessarily have in mind. – Bernard Apr 28 '20 at 20:12
  • @jskattt797 what do you mean by the remainder term of Taylor's theorem? That's quite confusing. – ProblemDestroyer Apr 07 '22 at 05:41
  • Suppose $f : (a,b) \to \mathbb{R}$ is $n$-times differentiable at some $c \in (a, b)$. The $n$th remainder is $f(x)$ minus its $n$th Taylor polynomial: $R(x) := f(x) - \sum_{i = 0}^n \frac{f^{(i)}(c)}{i!} (x - c)^i$. Thus "$f(x)$ equals its $n$th Taylor polynomial" is equivalent to "$n$th remainder equals $0$." – jskattt797 Apr 08 '22 at 15:32
  • For posterity, I would like to add that $$1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}+\frac{x^8}{8!}$$which appears in the Desmos plot in the answer post, is not only the eighth, but also the ninth degree Taylor polynomial of the cosine function. – Arthur May 28 '22 at 16:21

10 Answers

140

Yes, it is impossible.

Pick any point in the interior of the interval, and any polynomial. If you differentiate the polynomial repeatedly at that point, you will eventually get only zeroes. This doesn't happen for the cosine function, which instead repeats in an infinite cycle of length $4$. Thus the cosine function cannot be a polynomial on a domain with non-empty interior.
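This argument can be seen concretely with a SymPy sketch (not part of the original answer; the polynomial chosen here is the degree-8 Taylor polynomial from the question, but any polynomial behaves the same way):

```python
import sympy as sp

x = sp.symbols('x')
p = 1 - x**2/2 + x**4/24 - x**6/720 + x**8/40320  # any polynomial works here

# Nine differentiations annihilate a degree-8 polynomial...
assert sp.diff(p, x, 9) == 0
# ...while the derivatives of cos cycle with period 4 and never die out
assert sp.diff(sp.cos(x), x, 4) == sp.cos(x)
```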

Arthur
  • 1
    So for some $n \in \mathbb{N}$, if the $n$th derivative of two functions are not equal at a point $a$, then the functions cannot be equal in any interval $(a-\delta, a + \delta)$ for $\delta > 0$? – jskattt797 Apr 25 '20 at 06:44
  • 6
    @jskatt797 That's right, because if they were equal in that interval, then they would necessarily have the same derivatives at $a$. You could also, instead of looking at just a point, consider the $n$th derivative functions. For any polynomial they are, from some point on, the zero functions. Not so for the cosine. – Arthur Apr 25 '20 at 06:49
  • I'm in awe of your ability to write a clear answer. – copper.hat Apr 28 '20 at 20:01
68

We don't even need to differentiate many times. Just note that $f'' = -f$ is satisfied by $f = \cos$ but not if $f$ is a non-zero polynomial function because $f''$ has lower degree than $f$. (This implicitly uses the fact that two polynomials that are equal at infinitely many points must be identical.) $ \def\lfrac#1#2{{\large\frac{#1}{#2}}} $
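A SymPy sketch of the degree-drop argument (not part of the original answer; the polynomial below is an arbitrary example):

```python
import sympy as sp

x = sp.symbols('x')
p = 3*x**5 - x**2 + 7  # an arbitrary nonzero polynomial

# Differentiating twice strictly lowers the degree...
assert sp.degree(p.diff(x, 2), x) < sp.degree(p, x)
# ...so p'' + p keeps the leading term of p and cannot vanish identically
assert sp.expand(p.diff(x, 2) + p) != 0
```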

To answer a comment on Claude's post, here is a neat proof. Define $\deg(\lfrac{g}{h}) = \deg(g)-\deg(h)$ for any polynomial functions $g,h$. Given any function $f = \lfrac{g}{h}$ where $g,h$ are polynomial functions on some non-trivial interval, we have $f' = \lfrac{g'}{h}-\lfrac{g·h'}{h^2} = f·\lfrac{g'·h-g·h'}{g·h}$, and hence $\deg(f') < \deg(f) $ since $\deg(g'·h-g·h') < \deg(g·h)$. Thus $\deg(f'') < \deg(f)$ and therefore $f'' ≠ -f$. So even Padé approximants are not enough to perfectly fit anything except rational functions, on any non-trivial interval.
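The same degree bookkeeping can be checked mechanically for a rational function (a SymPy sketch with a hypothetical example $g/h$; "$\deg$" here means $\deg g - \deg h$, as defined in the answer):

```python
import sympy as sp

x = sp.symbols('x')
g, h = x**3 + 1, x + 2
f = g / h                       # "degree" deg(g) - deg(h) = 2

# Write f'' in lowest terms and compare degrees
num, den = sp.fraction(sp.cancel(sp.diff(f, x, 2)))
deg_f2 = sp.degree(num, x) - sp.degree(den, x)
assert deg_f2 < sp.degree(g, x) - sp.degree(h, x)  # so f'' = -f is impossible
```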

user21820
  • 60,745
  • I am missing something, because for $f(x)=x^3+x^2+x+1$ there is $f''(x)=6x+2$ and this can be solved for $x$ in $f(x)=-f''(x)$. But this is for the three roots, only. Maybe this is what I am missing? – a concerned citizen Apr 24 '20 at 18:44
  • 3
    @aconcernedcitizen you've described finding points at which the 2 functions have equal value. They would need to be identical EVERYWHERE, in order for the functions themselves to be identical. – Brondahl Apr 24 '20 at 18:52
  • @Brondahl Yes, as I stared a bit longer to it, it occured to me that was the case. I guess the part where it says "lower degree than f" confused me. – a concerned citizen Apr 24 '20 at 18:55
  • @aconcernedcitizen: I edited my post to make clear that we want the second derivative to be equal to its negation on its entire domain. – user21820 Apr 24 '20 at 19:26
21

Here's a proof using only basic trigonometry and algebra, no calculus or infinite series required.

We'll do a proof by contradiction. Suppose $\cos(x)$ is a polynomial on some closed interval $[a,b]$, with $a\ne b$. We'll split it into two cases, depending on whether or not $0\in [a,b]$.

Case 1. Suppose your interval contains the origin, i.e. $a \le 0 \le b$. If $\cos(x)$ is a polynomial function on $[a,b]$, then $2\cos^2(\frac x 2) - 1$ is also a polynomial function on $[a,b]$, since $x\in[a,b]$ implies $x/2 \in [a,b]$. Now, recall the half angle formula for $\cos(x)$:$$ \cos(x) = 2\cos^2(\frac x 2) - 1 $$ The half-angle formula tells us that these two polynomials are in fact the same polynomial. But if $\cos(x)$ has degree $n$, then $2\cos^2(\frac x 2) - 1$ must have degree $2n$. Since two polynomials with different degree cannot be equal on any interval, this implies $2n = n$, or $n=0$. Since $\cos(x)$ is not constant, we have a contradiction, so $\cos(x)$ is not a polynomial on any interval containing $0$.

Case 2. Now, what if the interval does not contain the origin? This takes a few more steps, but we can show that if $\cos(x)$ is a polynomial on $[a,b]$, then it must also be a polynomial (potentially a different polynomial) on $[0,b-a]$, which contains the origin so is impossible by the above argument.

For $x\in [0,b-a]$, we use the angle sum formula to find $$ \cos(x) = \cos(x+a -a) = \cos(x+a)\cos(a) + \sin(x+a)\sin(a) $$ Since $\cos(x+a)$ is a polynomial of $x$, and $\sin(x+a)^2 + \cos(x+a)^2= 1$, this means that on the interval $[0,b-a]$, the cosine of $x$ has the property that $$ \left(\cos(x) - p(x)\right)^2 = q(x) $$ for some polynomials $p$ and $q$. In particular $p(x) = \cos(a+x)\cos(a)$ and $q(x) = \sin^2(a) \left(1-\cos^2(x+a)\right)$. Equivalently, $\cos(x) = p(x) \pm \sqrt{q(x)}$. Again, the half-angle formula tells us $\cos x = 2\cos^2(\frac x 2) - 1$ (for $x\in[0,b-a]$). Substituting into the above, we get some very messy algebra:\begin{eqnarray} \left(2\cos^2\left(\frac x 2\right) - 1 - p(x)\right)^2 &=& q(x)\\ \left(2p(\frac x 2)^2 \pm 4 p(\frac x 2)\sqrt{q(\frac x 2)} + 2q(\frac x 2) - 1 - p(x)\right)^2 &=& q(x)\end{eqnarray} expanding the left-hand side, we get:$$ q(x) = \left(2p(\frac x 2)^2+ 2q(\frac x 2) - 1 - p(x)\right)^2 + 16 p(\frac x 2)^2q(\frac x 2) \pm 8\left(2p(\frac x 2)^2+ 2q(\frac x 2) - 1 - p(x)\right)p(\frac x 2)\sqrt{q(\frac x 2)} $$ which implies $\pm\sqrt{q(x/2)}$ is actually a rational function. Since its square is a polynomial, this means $\pm\sqrt{q(x/2)}$ is a polynomial itself, so $\pm\sqrt{q(x)}$ is also a polynomial. Therefore $\cos(x) = p(x) \pm \sqrt{q(x)}$ is a polynomial for $x\in[0,b-a]$. Since this interval contains the origin, we again have a contradiction, so $\cos(x)$ cannot be a polynomial on $[a,b]$.
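The two identities this step relies on can be sanity-checked numerically (a NumPy sketch; the value $a=1$ and the interval length are arbitrary choices, not from the answer):

```python
import numpy as np

a = 1.0                           # hypothetical left endpoint of the interval
x = np.linspace(0.0, 2.0, 1001)   # plays the role of [0, b - a]

# The angle-sum step: cos(x) = cos(x + a)cos(a) + sin(x + a)sin(a)
lhs = np.cos(x)
rhs = np.cos(x + a) * np.cos(a) + np.sin(x + a) * np.sin(a)
assert np.allclose(lhs, rhs)

# ...which gives (cos(x) - p(x))^2 = q(x) with the stated p and q:
p = np.cos(x + a) * np.cos(a)
q = np.sin(a)**2 * (1 - np.cos(x + a)**2)
assert np.allclose((lhs - p)**2, q)
```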

All this shows why results from calculus are helpful: the problem is trivial if we bring in derivatives!


As an addendum: All of these arguments can be generalized to show that $\cos(x)$ is also not a rational function on any interval, and that the other trig functions similarly are not polynomials or rational functions.

14

If $p$ is a polynomial the function $f(z) = p(z)-\cos z$ is entire, and the uniqueness theorem shows that if $f(z) = 0$ on any line segment then $f= 0$.

(The uniqueness theorem is stronger than that, it just needs $f$ to be zero on any sequence with an accumulation point.)

Addendum:

To clarify: a nonzero polynomial $p$ has at most $\deg p$ zeros, while $\cos$ has countably infinitely many, so we cannot have $f = 0$.

copper.hat
11

I do not know if you have any specific reason to require a polynomial.

Nevertheless, for function approximations, Padé approximants are much better than Taylor expansions, even if, to some extent, they look similar. For example $$\cos(x) \sim \frac {1-\frac{115 }{252}x^2+\frac{313 }{15120}x^4 } {1+\frac{11 }{252}x^2+\frac{13 }{15120}x^4 }$$ is better than the Taylor series to $O(x^{9})$ that you considered.

To compare $$\int_{-\pi}^\pi \Big[ \frac {1-\frac{115 }{252}x^2+\frac{313 }{15120}x^4 } {1+\frac{11 }{252}x^2+\frac{13 }{15120}x^4 }-\cos(x)\Big]^2\,dx=0.000108$$ $$\int_{-\pi}^\pi \Big[1-\frac{x^2}{2}+\frac{x^4}{24}-\frac{x^6}{720}+\frac{x^8}{40320}-\cos(x)\Big]^2\,dx=0.000174$$ but nothing is absolutely perfect.

If I add one more term to the Padé approximant, the values of the corresponding integral become $1.25\times 10^{-9}$ and for $x=\frac \pi 2$ the value of the approximated function is $-6.57\times 10^{-9}$.

Now, have a look at an approximation I built for you $$\cos(x)=\frac{1-\frac{399 }{881}x^2+\frac{20 }{1037}x^4 } {1+\frac{58 }{1237}x^2+\frac{1}{756}x^4 }$$ which gives for the integral $1.49\times 10^{-8}$.
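The squared-error integrals quoted above can be reproduced numerically (a NumPy sketch using a plain Riemann sum; not part of the original answer):

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 100001)
dx = x[1] - x[0]

pade = (1 - 115/252*x**2 + 313/15120*x**4) / (1 + 11/252*x**2 + 13/15120*x**4)
taylor = 1 - x**2/2 + x**4/24 - x**6/720 + x**8/40320

# Riemann-sum approximations of the two squared-error integrals above
pade_err = np.sum((pade - np.cos(x))**2) * dx
taylor_err = np.sum((taylor - np.cos(x))**2) * dx
print(pade_err, taylor_err)  # cf. the 0.000108 and 0.000174 quoted above
assert pade_err < taylor_err
```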

  • 1
    Indeed, Pade approximations are really good for some functions whose Taylor series only converge on a tiny bounded interval. – user21820 Apr 24 '20 at 10:06
  • Since a Pade approximant is a rational function, this raises another question: is it also impossible to perfectly approximate cosine on a closed interval with a rational function? Many of the answers so far only work for polynomials if I am not mistaken. – jskattt797 Apr 25 '20 at 23:57
  • 1
    @jskattt797. "Perfectly ?" , no – Claude Leibovici Apr 26 '20 at 01:08
  • Yes, this makes intuitive sense. I am wondering if there is a proof. I think the proofs given so far apply to polynomials but not necessarily to rational functions. – jskattt797 Apr 26 '20 at 01:45
  • 1
    @jskattt797 the proof is similar to that in the answer by 21820. Namely: none of the Padé approximants will satisfy the ODE for the cosine. – J. M. ain't a mathematician Apr 26 '20 at 02:45
10

One of the statements you mentioned you don't know how to prove is easy: $\cos x$ has infinitely many roots along the real line, but any polynomial of finite degree has only finitely many roots, so they cannot agree on all of $\mathbb{R}$.

But there also cannot be a finite-degree polynomial that equals $\cos x$ on $[-\pi, \pi]$, or any other closed interval for that matter. You could show that the power series you provided for $\cos x$ converges uniformly on any closed interval. So if $\cos x = p(x)$ for some finite-degree polynomial, $p(x)$ could also be viewed as a power series with finitely many nonzero coefficients. But the power series of a function (assuming convergence) is unique. Hence such $p$ cannot exist. However, you can approximate $\cos x$ with polynomials to within any precision you like, by the Stone-Weierstrass theorem.
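The Stone-Weierstrass part can be illustrated numerically (a NumPy sketch; least-squares fits via `np.polyfit` stand in for the approximating polynomials, which is an arbitrary choice):

```python
import numpy as np

xs = np.linspace(-np.pi, np.pi, 2001)

# Sup-norm error of least-squares polynomial fits of increasing degree:
# it shrinks toward 0 (Stone-Weierstrass) but never reaches exactly 0.
errs = []
for deg in (2, 4, 8, 12):
    c = np.polyfit(xs, np.cos(xs), deg)
    errs.append(np.max(np.abs(np.polyval(c, xs) - np.cos(xs))))

assert errs == sorted(errs, reverse=True)  # steadily improving...
assert all(e > 0 for e in errs)            # ...but never exact
```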

Besfort
4

No, it is not impossible, but only for the reason that a single point is a closed interval. You can certainly get exact agreement between cosine and a polynomial on any closed interval, $[p,p]$, $p \in \Bbb{R}$. If the closed interval you are interested in has nonempty interior, then, yes, it is impossible (as adequately explained elsewhere).

Eric Towers
  • I upvoted your answer because it is indeed good to be careful with boundary cases, even though it is implicit in the other answers. =) – user21820 Apr 27 '20 at 04:14
3

Given a smooth function on an interval and an interior point of that interval, the Taylor series of that function around the point is completely determined. Then you are looking for a polynomial whose Taylor series around $0$ (say) coincides with that of the cosine, which obviously does not exist, since any polynomial is its own Taylor series.
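The claim that a polynomial is its own Taylor series is easy to confirm symbolically (a SymPy sketch; the polynomial below is a hypothetical example):

```python
import sympy as sp

x = sp.symbols('x')
p = 2*x**3 - x + 5

# Taylor expansion of p at 0, taken past its degree, returns p itself
taylor = sp.series(p, x, 0, 10).removeO()
assert sp.expand(taylor - p) == 0
```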

Of course if you consider a single point to be a closed interval, then a perfect approximation on that interval is possible.

3

Although other people have already mentioned the impossibility of having a polynomial that is everywhere equal to the cosine over a finite interval, for a smooth function like cosine it is possible to obtain a uniform approximation that can be made arbitrarily close. This involves an expansion in terms of Chebyshev polynomials (of the first kind), and in fact there is an entire project, the Chebfun project, that relies on approximating complicated functions as (possibly piecewise) Chebyshev series.

I will give a concrete example in Mathematica (adapted from this answer). In the following, I have arbitrarily chosen a polynomial approximation of degree $128$ to approximate the cosine:

f[x_] := Cos[x];
{a, b} = {-π, π}; (* interval of approximation *)
n = 128; (* arbitrarily chosen integer *)
prec = 25; (* precision *)
cnodes = Rescale[N[Cos[π Range[0, n]/n], prec], {-1, 1}, {a, b}];
fc = f /@ cnodes;
cc = Sqrt[2/n] FourierDCT[fc, 1];
cc[[{1, -1}]] /= 2;

cosApprox[x_] = cc.ChebyshevT[Range[0, n], Rescale[x, {a, b}, {-1, 1}]]

{Plot[{f[x], cosApprox[x]}, {x, a, b},
      PlotLegends -> Placed[{"Exact", "Chebyshev series"}, Bottom],
      PlotStyle -> {AbsoluteThickness[4], AbsoluteThickness[1]}],
 Plot[f[x] - cosApprox[x], {x, a, b},
      PlotRange -> All, PlotStyle -> ColorData[97, 4]]} // GraphicsRow

cosine and its Chebyshev series approximant

In theory, as you increase the degree, the approximation gets better and better; in practice, you will often hit the limits of your machine's numerics.
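For readers without Mathematica, the same Chebyshev-interpolation idea can be sketched in Python with NumPy's `numpy.polynomial.chebyshev` utilities (not part of the original answer; degree $64$ is an arbitrary choice, cf. the $n = 128$ used above):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

a, b = -np.pi, np.pi
n = 64  # arbitrary degree of the approximation

# Work on [-1, 1] and rescale, mirroring the Rescale call in the Mathematica code
f = lambda t: np.cos((b - a)/2 * t + (b + a)/2)
coef = C.chebinterpolate(f, n)  # interpolates f at Chebyshev points

t = np.linspace(-1, 1, 5001)
err = np.max(np.abs(C.chebval(t, coef) - f(t)))
print(err)  # near machine precision for an entire function like cos
assert err < 1e-12
```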

  • "for a smooth function like cosine, it is possible to obtain a uniform approximation that can be made as close an approximation as possible" Doesn't a Taylor series do this? – jskattt797 Apr 25 '20 at 23:50
  • No, because a Taylor/Maclaurin series is (by definition) only good near the expansion point, and can diverge wildly far from it. – J. M. ain't a mathematician Apr 25 '20 at 23:54
  • Could you please clarify/provide more rigor for the meaning of "only good" near the expansion point? The degree 8 Taylor polynomial for cosine seems to be quite "good" at approximating cosine on the entire interval $[-\pi,\pi]$, not just at $0$. – jskattt797 Apr 26 '20 at 00:01
  • Did you already try plotting the (relative or absolute) difference between cosine and its degree-$8$ Taylor expansion? Do you already have a picture? – J. M. ain't a mathematician Apr 26 '20 at 00:23
  • I should have said on $[-\frac{\pi}{2},\frac{\pi}{2}]$ for the degree-8. The degree-18 Taylor polynomial approximates quite well on the entire interval $[-\pi,\pi]$: https://www.desmos.com/calculator/komaw3ihni – jskattt797 Apr 26 '20 at 02:01
  • 1
    If you'll look at that plot you just showed me, you'll notice that it's good near the middle, but the difference increases as you go closer to the ends. (Note also the scale in the $y$-axis.) As a point of contrast, this is what you get with a degree $16$ polynomial assembled from the Chebyshev series (and again note the scale in the $y$-axis). – J. M. ain't a mathematician Apr 26 '20 at 02:42
  • 3
    For different notions of closeness, different polynomials will be optimal. The Chebyshev series will win for one fairly popular notion. A good question might be: for what definition of closeness is a truncated Taylor series optimal? – badjohn Apr 26 '20 at 14:46
  • @bad, surely worth a separate question. ;) – J. M. ain't a mathematician Apr 26 '20 at 14:48
  • I'll consider it. The obvious answer is trivial: the polynomial that has the most consecutive correct derivatives at $0$. It is less clear whether it will be best for any other criterion. – badjohn Apr 26 '20 at 14:51
1

Although this is an incomplete answer unlike the ones I read here, I'd like to offer what I eventually thought of, since the idea still seems original: there can be no polynomial with rational coefficients that exactly equals $\cos$ on $[0,1]$, because it would have the wrong integral over this interval ($\sin 1$ being irrational). I believe this argument can be adapted to a different interval $[\alpha,\beta]$ by finding a sub-interval with rational endpoints $[a,b] \subset [\alpha,\beta]$ and using something like the notion of algebraic independence over $\mathbb{Q}$ (search for $a$ and $b$ such that $\sin b - \sin a$ is irrational, which should happen most of the time) and/or Niven's theorem. It could possibly be extended to real coefficients, since a polynomial with real coefficients can be well-approximated by sequences of polynomials with rational ones. Thank you for your question; it reminds me much of the kind I would've asked when younger!
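The core observation, that integrating a rational-coefficient polynomial over $[0,1]$ can only ever produce a rational number, is easy to check symbolically (a SymPy sketch; the polynomial is a hypothetical example):

```python
import sympy as sp

x = sp.symbols('x')
p = sp.Rational(1, 2)*x**4 - sp.Rational(3, 7)*x + 2  # rational coefficients

I = sp.integrate(p, (x, 0, 1))
assert I.is_rational                                   # always rational

# ...whereas the integral of cos over [0, 1] is sin(1),
# which is irrational by Niven's theorem
assert sp.integrate(sp.cos(x), (x, 0, 1)) == sp.sin(1)
```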

Vandermonde