
Hello, this is related to Boutin's identity:

First remark: I was working on the following problem: show the inequality $\left(\sum_{i=1}^{n}x_{i}+n\right)^n\ge \left(\prod_{i=1}^{n}x_{i}\right)\left(\sum_{i=1}^{n}\frac{1}{x_{i}}+n\right)^n$

which, after substitution, is equivalent to:

$$ \left(\dfrac{-\sum_{i=1}^{n}\sin(x_{i})^2+2n}{\sum_{i=1}^{n}\tan(x_i)^2+2n}\right)^n\le\prod_{i=1}^{n}\cos(x_i)^2 . $$

So my idea was to use the following identity on the right-hand side:

$$2\cos(a)\cos(b)=\cos(a+b)+\cos(a-b)$$

For $n=3$ we get :

$$4\cos(a)\cos(b)\cos(c)=2\big(\cos(a+b)+\cos(a-b)\big)\cos(c),$$ since $$2\big(\cos(a+b)+\cos(a-b)\big)\cos(c)=\cos(a+b+c)+\cos(a+b-c)+\cos(a-b+c)+\cos(a-b-c).$$

So we have :

$$4\cos(a)\cos(b)\cos(c)=\cos(a+b+c)+\cos(a+b-c)+\cos(a-b+c)+\cos(a-b-c)$$
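As a sanity check (not a proof), the $n=3$ identity can be verified numerically at a few arbitrary points; here is a minimal Python sketch:

```python
import math

# Numerical spot-check of the n = 3 product-to-sum identity,
# 4 cos(a) cos(b) cos(c) = cos(a+b+c) + cos(a+b-c) + cos(a-b+c) + cos(a-b-c),
# at a few arbitrary sample points.
def lhs(a, b, c):
    return 4 * math.cos(a) * math.cos(b) * math.cos(c)

def rhs(a, b, c):
    return (math.cos(a + b + c) + math.cos(a + b - c)
            + math.cos(a - b + c) + math.cos(a - b - c))

for a, b, c in [(0.3, 1.1, -0.7), (2.0, 0.5, 1.9), (-1.2, 0.4, 3.3)]:
    assert abs(lhs(a, b, c) - rhs(a, b, c)) < 1e-12
```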

So there is an analogy between Boutin's identity and this decomposition of a product of cosines.

So the natural question is: is this a coincidence or not?


Second question: what is the functional equation behind this? That is, if we take an expression of the form:

$$\sum_{2^{n-1}}\pm f(\pm x_1\pm x_2\pm\cdots\pm x_n)$$

what condition must $f$ satisfy in order to have:

$$\sum_{2^{n-1}}\pm f(\pm x_1\pm x_2\pm\cdots\pm x_n)=\alpha f(x_1)\cdots f(x_n)$$

Where $\alpha$ is a constant.

Example :

For the cosine it is $2f(a)f(b)=f(a+b)+f(a-b)$.
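For $f=\cos$ one can check numerically that the sign-sum expression above equals $\alpha f(x_1)\cdots f(x_n)$ with $\alpha=2^{n-1}$; a small Python experiment (the helper name `sign_sum` is just illustrative):

```python
import math
from itertools import product

# Sum f(x1 ± x2 ± ... ± xn) over all 2^(n-1) sign choices,
# keeping the sign of x1 positive.
def sign_sum(f, xs):
    x1, rest = xs[0], xs[1:]
    return sum(f(x1 + sum(s * x for s, x in zip(signs, rest)))
               for signs in product((1, -1), repeat=len(rest)))

# For f = cos the sum should equal 2^(n-1) * cos(x1) * ... * cos(xn).
xs = [0.4, 1.3, -0.8, 2.1]            # arbitrary sample point, n = 4
for n in (2, 3, 4):
    pt = xs[:n]
    alpha = 2 ** (n - 1)
    prod_f = math.prod(math.cos(x) for x in pt)
    assert abs(sign_sum(math.cos, pt) - alpha * prod_f) < 1e-12
```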

Thanks a lot

1 Answer


I think it's a very general problem and, by the way, very difficult... But it's a challenge!

Hi. :-) It was a challenge to my memory, which I failed: a few days before you asked the question I had seen a paper considering a similar equation. Trying to find it, I looked through more than five hundred PDF files. At last, when I googled for trigonometric identities and functional equations, I found the paper [Kan], and then I was also able to find it in a few-year-old gif file. :-) Unfortunately, I can read only the first page of this article, so it is not very useful for me.

But I also found a book [Eft], Section 7 of which is devoted to equations for trigonometric functions. Relevant to your question is the d'Alembert–Poisson I equation, considered in Subsection 7.2 (see also Subsection 7.4):

Find all continuous functions $f :\Bbb R\to \Bbb R$ satisfying for all $x,y\in\Bbb R$ the equality

$$f (x + y) + f (x − y) = 2 f (x) f (y).$$
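As a quick illustration, both $\cos$ and $\cosh$ satisfy this equation; a short numerical check at sample points (a sketch, not a proof):

```python
import math

# Residual of the d'Alembert-Poisson I equation
# f(x + y) + f(x - y) - 2 f(x) f(y); it should vanish for f = cos and f = cosh.
def dalembert_residual(f, x, y):
    return f(x + y) + f(x - y) - 2 * f(x) * f(y)

for f in (math.cos, math.cosh):
    for x, y in [(0.7, -1.4), (2.3, 0.9), (0.0, 1.8)]:
        assert abs(dalembert_residual(f, x, y)) < 1e-9
```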

These remarks allow us to answer your question.

Taking into account the examples you have presented, I guess that your question can be formulated more precisely, asking about only one equation for given $n$ and $\alpha$. Namely,

$$\sum f(x_1\pm x_2\pm\cdots\pm x_n)=\alpha f(x_1)\cdots f(x_n),\tag{1}$$

where the sign before each summand and before the variable $x_1$ is positive, while the remaining variables run through all $2^{n-1}$ possible combinations of signs. For instance, for $n=3$ we have the equation

$$f(x_1+x_2+x_3)+ f(x_1+x_2-x_3)+ f(x_1-x_2+x_3)+ f(x_1-x_2-x_3)=\alpha f(x_1)f(x_2)f(x_3).$$

But it is not so complicated as it looks. Indeed,

For $n=1$ we have a boring equation $f(x_1) =\alpha f(x_1)$ or $ f(x_1)(\alpha-1)=0$, whose solution is the zero function for $\alpha\ne 1$ and any function for $\alpha=1$.

So further we assume that $n>1$. If $f(0)=0$ or $\alpha=0$, then putting $x_2=\dots=x_n=0$ we obtain $2^{n-1}f(x_1)=0$, so $f$ is the zero function. So further we assume also that $f(0)\ne 0$ and $\alpha\ne 0$.

If $n=2$ then, multiplying the function $f$ by $\alpha/2$, we see that without loss of generality we may assume $\alpha=2$. In this case the solution for continuous $f$ is given in [Eft]; a more general case may be solved in [Kan] (JSTOR states that registered users may read the paper online for free).

We can reduce the case $n\ge 3$ to the previous one. Putting $x_3=\dots=x_n=0$, we obtain that the function $f$ satisfies equation (1) for $n=2$ with the constant $\alpha\left(\frac {f(0)}2\right)^{n-2}$ on the right-hand side. Moreover, putting all $x_i$ equal to zero, we obtain $2^{n-1}=\alpha f(0)^{n-1}$, that is, $\alpha\left(\frac {f(0)}2\right)^{n-2}=\frac 2{f(0)}$.
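For the model solution $f=\cos$, with $f(0)=1$ and $\alpha=2^{n-1}$, the consistency relation $\alpha\left(\frac {f(0)}2\right)^{n-2}=\frac 2{f(0)}$ can be confirmed directly (a trivial numeric check, assuming these cosine values):

```python
# Check alpha * (f(0)/2)^(n-2) == 2 / f(0) for f = cos,
# where f(0) = 1 and alpha = 2^(n-1).
f0 = 1.0
for n in range(3, 10):
    alpha = 2 ** (n - 1)
    assert abs(alpha * (f0 / 2) ** (n - 2) - 2 / f0) < 1e-12
```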

PS. Lou van den Dries in [vdDri] proved that all valid identities in terms of variables, real constants, the arithmetic operations of addition and multiplication, and the trigonometric operations of sine and cosine are consequences of a few familiar identities and numerical facts. More precisely, consider the familiar identities:

1) $\cos(x + y) = \cos x \cdot\cos y - \sin x\cdot\sin y$

2) $\sin(x + y) = \cos x \cdot\sin y + \sin x\cdot\cos y$

3) $\cos(-x) = \cos x, \sin(-x) = - \sin x.$

From these identities and numerical facts like $\cos 0 = 1$ we can derive other identities like $\cos^2 x + \sin^2 x = 1$. It is shown that all valid identities formulated in terms of variables, individual real numbers, the functions $\cos$ and $\sin$, and the ring operations $+$, $-$, $\cdot$ can be derived from identities 1)–3) above, plus the identities defining commutative rings with unit $1$, plus the "true numerical facts", these being the valid identities not containing variables. Terms like $\sin(\cos x)$ are allowed. The addition law for the cosine and the symmetry law for the cosine ($\cos(-x) =\cos x$) are derivable from the other identities, but one cannot leave out $\sin(-x) = - \sin x$.

References

[vdDri] Lou van den Dries, A completeness theorem for trigonometric identities and various results on exponential functions, Proc. Amer. Math. Soc. 96:2 (1986), 345–352.

[Eft] Costas Efthimiou, Introduction to functional equations: theory and problem-solving strategies for mathematical competitions and beyond.

[Kan] Pl. Kannappan Trigonometric Identities and Functional Equations, The Mathematical Gazette 88:512 (2004), 249-257.

Alex Ravsky