
Let $$f(x) = \sum\limits_{i=1}^n a_i^x - \sum\limits_{i=1}^n b_i^x$$ where the $a_i$ and $b_i$ are positive reals such that $f(x)$ is not identically zero.

Is it possible to determine the maximum possible number of zeros of $f(x)$, and how is this affected by $n$?

Early experimentation suggests the maximum number of zeros might be $n$, as I could not find any examples producing more. E.g.:

  • $n=1, a_1=1, b_1=2$ has $f(0)=0$
  • $n=2, a_1=1, a_2=4, b_1=2, b_2=3$ has $f(0)=f(1)=0$
  • $n=3, a_1=1, a_2=6, a_3=8, b_1=2, b_2=3, b_3=10$ has $f(0)=f(0.7114953\ldots)=f(1)=0$
  • $n=4, a_1=10, a_2=11, a_3=60, a_4=79, b_1=9, b_2=20, b_3=30, b_4=101$ has $f(-4.46722769\ldots)=f(0)=f(0.19000515\ldots)=f(1)=0$

and so on.
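These examples are easy to check numerically. Below is a stdlib-only Python sketch for the $n=3$ case; the bracketing interval $[0.5, 0.9]$ for the interior root is an assumption read off from the sign of $f$ at those two points.

```python
# Verify the n=3 example: a=(1,6,8), b=(2,3,10).
def f(x, a=(1, 6, 8), b=(2, 3, 10)):
    return sum(ai**x for ai in a) - sum(bi**x for bi in b)

def bisect(g, lo, hi, tol=1e-12):
    # Simple bisection; assumes g(lo) and g(hi) have opposite signs.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(f(0), f(1))           # 0.0 0.0 (both exactly zero in floats)
print(bisect(f, 0.5, 0.9))  # ≈ 0.7114953...
```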

An earlier question suggests there cannot be more than $2n-1$ zeros, but that dealt with a slightly more general form. So perhaps here with $n=2$ there is an example with $3$ zeros, or with $n=3$ there are examples with $4$ or $5$ zeros, which I have not been able to find.

Henry
  • This is a super problem. Every technique I can think of using has some flaw. – kimchi lover May 31 '20 at 20:57
  • Idea (to get $n$ roots): take a polynomial $P(x)$ of degree $n$ with $n$ distinct real roots $a_1, a_2, \ldots, a_n$. Then consider a small nonzero $\alpha$ such that $P(x)+\alpha$ also has $n$ distinct real roots $b_1, b_2, \ldots, b_n$. It is then easy to see that in this case $f(0)=f(1)=\ldots=f(n-1)=0$ (because the first $n-1$ elementary symmetric polynomials agree). – richrow Jun 01 '20 at 13:53
  • @richrow: That is neat. So you could for example always take $a_i=i$ and $\alpha=0.1$. For $n=3$ you would get $a_1=1$, $a_2=2$, $a_3=3$, $b_1=0.953319468\ldots$, $b_2=2.101031257\ldots$, $b_3=2.945649273\ldots$, giving $f(0)=f(1)=f(2)=0$ – Henry Jun 01 '20 at 17:43
  • Perhaps https://mathoverflow.net/questions/45031/sturm-chain-analogue-for-exponential-polynomials and its references are relevant. – kimchi lover Jun 03 '20 at 12:42
  • @kimchilover I suspect that link might be too general, leading to the conclusion that there is a bound. You have already shown in your answer that the bound in this specific case is no more than $2n-2$ for $n>1$ while richrow has shown in a comment that the bound here is at least $n$ – Henry Jun 03 '20 at 13:40
  • Understood: I'm just grasping at straws. – kimchi lover Jun 03 '20 at 14:07
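richrow's construction in the comments can be carried out numerically. A stdlib-only Python sketch for $n=3$ with $P(x)=(x-1)(x-2)(x-3)$ and $\alpha=0.1$; the bracketing intervals for the roots of $P+\alpha$ are assumptions found by checking signs at the endpoints.

```python
# richrow's construction: a_i are the roots of P(x) = (x-1)(x-2)(x-3),
# b_i the roots of P(x) + 0.1.  Since only the constant term changes,
# the first two elementary symmetric polynomials agree.
def P_shift(x):
    return (x - 1) * (x - 2) * (x - 3) + 0.1

def bisect(g, lo, hi, tol=1e-13):
    # Simple bisection; each interval below brackets one root.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

a = (1, 2, 3)
b = [bisect(P_shift, *iv) for iv in [(0.9, 1.0), (2.0, 2.2), (2.9, 3.0)]]

def f(x):
    return sum(ai**x for ai in a) - sum(bi**x for bi in b)

for x in (0, 1, 2):
    print(x, f(x))   # all ≈ 0, up to the bisection tolerance
```

The computed $b_i$ match the values Henry quotes in the comment above.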

1 Answer


Here is an argument (revised, with nontrivial input from the OP) that the number of roots cannot exceed $n$. It has a calculus formula piece, a variation diminishing piece, and a combinatorics piece.

Change the notation somewhat, so $$f(x)=\sum_{i=1}^n e^{a_ix} - \sum_{i=1}^n e^{b_ix},$$ for real $a_i, b_i$. To simplify things later assume all the $a_i$ and $b_i$ are distinct.

Define $G(t)=\#\{i:a_i\le t\}-\#\{i:b_i\le t\}$, where "$\#$" denotes "cardinality of". Then $$f(x)= -x \int_{\mathbb R} e^{tx} G(t) dt,$$ as may be seen by writing $$f(x)=\sum_i (e^{a_ix}-e^{b_ix})=\sum_i x\int_{b_i}^{a_i}e^{tx}dt$$ and noting that each signed interval $\int_{b_i}^{a_i}$ contributes $+1$ to the integrand where $b_i\le t<a_i$ and $-1$ where $a_i\le t<b_i$, so that $\sum_i\int_{b_i}^{a_i}e^{tx}dt=-\int_{\mathbb R}e^{tx}G(t)dt$. This is the calculus formula piece of the argument.

The combinatorial argument below shows the function $G$ can have at most $n-1$ changes of sign. Hence (and this is the variation diminishing part of the argument) its bilateral Laplace transform $g(x)=\int_{\mathbb R}e^{tx}G(t)dt$ has at most $n-1$ roots, and so $f(x)=-xg(x)$ has at most $n$ roots. (The number of sign changes $S(G)$ in $G$ is defined as the supremum, over all increasing sequences $t_1<\cdots<t_k$ of all lengths $k$, of the number of strict sign changes in $G(t_1),\ldots, G(t_k)$, ignoring zero values.)
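The calculus identity can be sanity-checked numerically (note the overall sign: with $G$ as defined, $x\int e^{tx}G(t)\,dt$ reproduces $-f(x)$). A stdlib-only Python sketch for the question's $n=3$ example, taking exponents $a_i=\log 1,\log 6,\log 8$ and $b_i=\log 2,\log 3,\log 10$ so that $e^{a_ix}$ matches $1^x, 6^x, 8^x$; the evaluation points are arbitrary choices.

```python
import math

# Exponents are logs so that e^{a_i x} = 1^x, 6^x, 8^x, etc.
a = [math.log(v) for v in (1, 6, 8)]
b = [math.log(v) for v in (2, 3, 10)]

def f(x):
    return sum(math.exp(t * x) for t in a) - sum(math.exp(t * x) for t in b)

def G(t):
    return sum(1 for ai in a if ai <= t) - sum(1 for bi in b if bi <= t)

def laplace_G(x):
    # G is constant on each interval between consecutive jump points,
    # so integrate e^{tx} exactly piece by piece (x != 0 assumed).
    pts = sorted(a + b)
    return sum(G(lo) * (math.exp(hi * x) - math.exp(lo * x)) / x
               for lo, hi in zip(pts, pts[1:]))

for x in (-2.0, 0.5, 1.7):
    print(x, f(x), -x * laplace_G(x))   # the last two columns agree
```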

The function $G$ is piecewise constant, integer valued, continuous on the right, with limits on the left; all its discontinuities are $\pm1$ jumps, which occur exactly at points in $M$, the set of $a_i$ and $b_i$ values. Let $m_1\le m_2\le\cdots\le m_{2n}$ be the elements of $M$ in sorted numerical order. Hence $S(G)$ is equal to the number of sign changes in the particular sequence $G(m_1),\ldots,G(m_{2n})$.

Since consecutive values satisfy $G(m_{i+1})-G(m_i)=\pm1$ for all $i<2n$, the number of sign changes in $G$ is thus the number of subscripts $j$ with $1<j<2n$ for which $(G(m_{j-1}),G(m_j),G(m_{j+1}))=(1,0,-1)$ or $=(-1,0,1)$. For this to happen $j$ must be even, since $G(m_j)=0$ forces equally many $a$'s and $b$'s among $m_1,\ldots,m_j$. Since there are $n-1$ even values of $j$ with $1<j<2n$, we see that $S(G)$ is at most $n-1$.
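The bound $S(G)\le n-1$ can also be spot-checked empirically. A stdlib-only Python sketch that samples random exponents (distinct with probability one) and counts strict sign changes of $G$ over its jump points; the sampling ranges and sample counts are arbitrary choices.

```python
import random

def sign_changes(values):
    # Strict sign changes, ignoring zero values.
    signs = [v for v in values if v != 0]
    return sum(1 for u, v in zip(signs, signs[1:]) if u * v < 0)

def S_of_G(a, b):
    # G is piecewise constant and right-continuous, so evaluating it
    # at every jump point captures all values it takes.
    pts = sorted(a + b)
    G = [sum(1 for ai in a if ai <= t) - sum(1 for bi in b if bi <= t)
         for t in pts]
    return sign_changes(G)

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 6)
    a = [random.random() for _ in range(n)]
    b = [random.random() for _ in range(n)]
    assert S_of_G(a, b) <= n - 1
print("bound S(G) <= n-1 holds on all samples")
```

For the question's $n=3$ example (using $\log$s of the $a_i,b_i$ or, equivalently here, the values themselves, since $\log$ preserves order), $S(G)=2=n-1$, so the bound is attained.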

We can relax the restriction that all the elements of $M$ are distinct by observing that a perturbation of the elements of $M$ that preserves strict inequalities and breaks ties cannot decrease $S(G)$. I hope this example makes clear how this happens. Suppose we have $$a_1<b_1=b_2<a_2<a_3<b_3=b_4<a_4$$ and $$a_1^*<b_1^*<b_2^*<a_2^*<a_3^*<b_3^*<b_4^*<a_4^*$$ with corresponding $G$ and $G^*$ functions; we know $S(G^*)\le n-1$. The calculation of $S(G)$ is summarized in the chart $$ \begin{matrix} m_i:&a_1 & b_1 &b_2 & a_2 & a_3 & b_3&b_4 & a_4\\ G(m_i):& 1 & -1 & -1 & 0 & 1 & -1 & -1 & 0 \end{matrix} $$ (where we see jumps of size $-2$ at $m_2=m_3$, etc) and for $S(G^*)$ in the chart $$ \begin{matrix} m_i:&a_1^* & b_1^* &b_2^* & a_2^* & a_3^* & b_3^*&b_4^* & a_4^*\\ G^*(m_i):& 1 & 0 & -1 & 0 & 1 & 0 & -1 & 0 \end{matrix} $$ which, in this case, show the same number of sign changes in the bottom rows. More generally, to each increasing sequence $t_1<\cdots< t_k$ there corresponds a sequence $t_1^*<\cdots <t_k^*$ such that the sequence of values $G(t_i)$ is the same as the sequence of values $G^*(t_i^*)$. So the supremum defining $S(G)$ extends over a subset of the sequences defining $S(G^*)$. Hence $S(G)\le S(G^*)\le n-1$.
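The tie-breaking example can be reproduced with concrete numbers; the particular values below are illustrative choices satisfying $a_1<b_1=b_2<a_2<a_3<b_3=b_4<a_4$, together with a compatible perturbation.

```python
# Concrete instance of the tie-breaking chart: n=4 with tied b's,
# versus a perturbation with all eight values distinct.
def S_of_G(a, b):
    # Count strict sign changes of G over its jump points, ignoring zeros.
    pts = sorted(a + b)
    vals = [sum(1 for ai in a if ai <= t) - sum(1 for bi in b if bi <= t)
            for t in pts]
    nonzero = [v for v in vals if v != 0]
    return sum(1 for u, v in zip(nonzero, nonzero[1:]) if u * v < 0)

a = [1, 4, 5, 8]
b_tied = [2, 2, 6, 6]          # b_1 = b_2 = 2 and b_3 = b_4 = 6
b_star = [1.9, 2.1, 5.9, 6.1]  # ties broken, strict order preserved

print(S_of_G(a, b_tied), S_of_G(a, b_star))   # 3 3 — both <= n-1 = 3
```

The two charts in the text correspond to the value sequences $(1,-1,-1,0,1,-1,-1,0)$ and $(1,0,-1,0,1,0,-1,0)$, each with three sign changes.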

The basic variation diminishing (or total positivity) fact used here is due, I suppose, to Schoenberg: a bilateral Laplace transform $f(x)=\int_{\mathbb R} e^{xy}\nu(dy)$ of a signed measure $\nu$ cannot have more sign changes than $\nu$ has. This is more or less equivalent to convolution with the Gaussian kernel having the variation diminishing property, and it generalizes Descartes' rule of signs. It is contained in S. Karlin's magisterial but diffusely organized 1968 book Total Positivity (see pp. 233, 237). See Schoenberg, I. J., "On Pólya frequency functions. I. The totally positive functions and their Laplace transforms", J. Analyse Math. 1 (1951), 331–374 (MR0047732); if I come across a more recent and accessible source, I'll add it.

kimchi lover
  • I think your "if $x<0$ then $f(x)>0$" argument can be used in a similar way to give "if $x>0$ then $f(x)<0$" since then $e^{a_ix}<e^{b_ix}$, avoiding consideration of $f'(x)$ – Henry Jun 01 '20 at 18:03
  • But thank you anyway, as this amounts to showing the maximum number of zeros for $n=2$ is $2$ and removes $5$ zeros as a possibility for $n=3$. – Henry Jun 01 '20 at 18:06
  • This looks very impressive though I am going to have to read it several times to understand it all. You may have a small typo in "The number of sign changes in $G$ is at most more more than the number of sign changes in $[s,\infty)$" – Henry Jun 04 '20 at 21:05
  • A simplification of your combinatorial argument might be that only even values of $j$ can be special, but $2n$ is not special and there are only $n-1$ other even values. A complication with your definition of special may come if say $b_i=b_{i+1}$ but this does not look fundamental. Instead you might say that you need at least two $a_i$s or two $b_i$s to change the sign of $G(t)$ but you also need one at each end to move from $0$ and back to $0$, so the maximum number of sign changes is $\frac{2n-1-1}2=n-1$ – Henry Jun 04 '20 at 22:01
  • Thanks for the "more more" typo. I'm sure there is a simpler way to see the combinatorial stuff, and to take duplicates into account. – kimchi lover Jun 04 '20 at 22:16
  • Many thanks for this result and for all your efforts - have the bounty – Henry Jun 05 '20 at 19:55
  • Thank you for an interesting (and fun) question, and for your help in formulating my answer. – kimchi lover Jun 05 '20 at 20:16