30

I am looking for a function over the real line, $g$, with $g*g = g^2$ (or a proof that such a function doesn't exist on some space like $L_1 \cap L_2$ or $L_1 \cap L_\infty$). This relation can't hold in a non-trivial way over any finite space; by that I mean that if $f$ is a probability density function, then $f g^2 = fg * fg$ implies, by integration, that $E_f[g^2] = E_f[g]^2$, so $g$ would have to have zero variance. Furthermore, the relation can't hold on the positive real line, since $e^{-x}$ is a probability density there with $e^{-x}(g*g) = (e^{-x}g)*(e^{-x}g)$, which would again mean that $g$ has zero variance, forcing it to be zero. Alternatively, one could hope for a power series for $g$ and use the fact that $h_n \equiv x^{n-1}/(n-1)!$ satisfies $h_n * h_m = h_{n+m}$ over the positive real line to derive that the power series of $g^2$ would have to be zero. None of the tricks in this paragraph seem to work over the entire real line, however.
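
Spelling out the positive-half-line step: since $e^{-x} = e^{-(x-y)}\,e^{-y}$,
$$e^{-x}(g*g)(x) = \int_0^x e^{-(x-y)}g(x-y)\,e^{-y}g(y)\,dy = \big[(e^{-x}g)*(e^{-x}g)\big](x),$$
so integrating $e^{-x}g^2 = (e^{-x}g)*(e^{-x}g)$ over $(0,\infty)$ gives $E_f[g^2] = E_f[g]^2$ for $f(x) = e^{-x}$, i.e. zero variance of $g$ under $f$.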

Instead, Fourier transforming $g^2 = g*g$ reveals the same equation in Fourier space: $\hat{g}^2 = \widehat{g^2} = \hat{g} * \hat{g}$. If we assume $g$ and $g^2$ have finite moments, this lets us relate them: differentiating $\hat{g}^2$ repeatedly yields the binomial-type formula $E_{g^2}[x^n] = \sum_{k = 0}^n {n \choose k}E_g[x^k]\,E_g[x^{n-k}]$, where $E_g[x^k]$ means $\int_{-\infty}^\infty g(x)\,x^k\,dx$. This is quite a strange condition to me, but it doesn't seem to lead to any obvious contradictions. It says that the cumulants of $g^2$ are twice those of $g$.
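
Explicitly: the relation $\widehat{g^2} = \hat g^{\,2}$ gives $\log \widehat{g^2} = 2\log\hat g$ near $\xi = 0$, so each Taylor coefficient of $\log\widehat{g^2}$ (i.e. each cumulant of $g^2$) is twice the corresponding cumulant of $g$; this is just the additivity of cumulants under convolution applied to $g^2 = g*g$.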

If we expand $g = \sum_{n=0}^\infty a_n \psi_n$ in terms of Hermite-Gauss functions $\psi_n(x) \equiv H_n(x)e^{-x^2/2}$ and use the convolution theorem, the fact that $\mathcal{F}[\psi_n] = (-i)^n \psi_n$, and the orthogonality of the $\psi_n$, we can derive that the four mod-4 families of modes $\{\psi_n : n \equiv k \pmod 4\}$ in $g^2$ come from products of modes in $g$ whose indices sum to $k \pmod 4$: $\langle \psi_k , g^2\rangle = \sum_{n+m \equiv k \pmod 4} a_n a_m \langle\psi_n \psi_m , \psi_k\rangle$ for any $k$. This is in contrast to the usual situation, where modes with $n+m \equiv k+2$ would also be included in the sum. In other words, the indices of $g$'s modes don't mix mod 4 after squaring. This condition is difficult to work with because there is no nice way of dealing with $\langle\psi_n \psi_m, \psi_k\rangle$. If there were an extra $e^{x^2/2}$ inside that inner product, we could use the triple product formula for $H_n(x)$, which has a nice form where only finitely many terms pop out (unlike the seemingly all-to-all coupling of the $\psi$ product).
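
To illustrate the last point, here is a quick numerical sketch (my own check, nothing rigorous) that evaluates the overlaps $\langle\psi_n\psi_m,\psi_k\rangle$ for the unnormalized $\psi_n$ above by Gauss-Hermite quadrature; for small indices with $n+m+k$ even they come out generically nonzero, which is the all-to-all coupling I mean. The helper names are of course my own.

import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval

def H(n, x):
    # physicists' Hermite polynomial H_n(x)
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermval(x, c)

# Gauss-Hermite rule: \int f(x) e^{-x^2} dx ~ sum_i w_i f(x_i)
nodes, weights = hermgauss(200)

def triple_overlap(n, m, k):
    # <psi_n psi_m, psi_k> = \int H_n H_m H_k e^{-3x^2/2} dx
    #                      = \int [H_n H_m H_k e^{-x^2/2}] e^{-x^2} dx
    f = H(n, nodes) * H(m, nodes) * H(k, nodes) * np.exp(-nodes**2 / 2)
    return np.dot(weights, f)

for n in range(4):
    for m in range(4):
        print(n, m, [round(float(triple_overlap(n, m, k)), 3) for k in range(6)])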

A solution of $g^2 = g*g$ would lead to solutions of $g^2 = \lambda\, g*g$ for any $\lambda$, using the scaling identity $\big[g(\cdot/\lambda)*g(\cdot/\lambda)\big](\lambda x) = \lambda\,(g*g)(x)$. Furthermore, using $(e^{\lambda x}g)^2 = e^{2 \lambda x}\, g*g = e^{\lambda x}\, (e^{\lambda x}g)*(e^{\lambda x}g)$ would give us a solution to $g^2 = e^{\lambda x}\, g*g$ for any $\lambda$. Substituting $g(x) \rightarrow g(x+\delta)$ gives solutions to translated equations $g^2(x) = g*g (x+\delta)$. Unfortunately, none of these transformations have made the solution to the problem obvious.

The problem becomes trivial if we are allowed to rescale the argument, e.g. $g(x)^2 = c\,(g*g)(2x)$ for a suitable constant $c$, as a Gaussian then works! Herein lies the reason why I suspect that there is no solution to $g^2 = g*g$: the left-hand side tightens $g$, whereas the right-hand side spreads $g$ out. Interestingly enough, there are solutions to $g = g*g$ (sinc) as well as to $g^{1/2} = g*g$ (again a Gaussian), but the homogeneous version of the problem seems more elusive. I have read some papers about solving convolution equations, but as far as I have seen they have either been over a finite space or have not had the non-linearity present here.
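
For concreteness: with $g(x) = e^{-ax^2}$ one has $g^2(x) = e^{-2ax^2}$ while $(g*g)(x) = \sqrt{\pi/(2a)}\,e^{-ax^2/2}$, so squaring doubles the exponent while self-convolution halves it. Rescaling the argument of the convolution by $2$ matches the widths, and choosing $a = \pi/2$ makes the constants match exactly, $g^2(x) = (g*g)(2x)$; without that rescaling the two widths can never agree for a Gaussian.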

I have also tried taking a fractional Fourier transform with angle $-\pi/4$ to convert $g^2 = g*g$ into $g_{-\pi/4}*_{-\pi/4}g_{-\pi/4} = g_{-\pi/4}*_{\pi/4} g_{-\pi/4}$, which leads to another two-dimensional integral that is easy to write down but not to solve. I also thought of doing a fractional Fourier transform through a very small angle in order to make the convolution $g*g$ less delocalized and the product $g^2$ smoother. This didn't really lead to anything helpful.

I tried to think up a scheme to find better and better approximations to $g$, as I would be happy even with an implicit answer or a numerical method to plot $g$. The obvious fixed-point iteration $g \mapsto \sqrt{g*g}$ has the problem that there are cycles (Gaussians), as well as convolutions being difficult to compute over the whole real line. If one thinks of $g^2-g*g$ as a Lagrangian and tries to do a path integral of $e^{-|g^2-g*g|}$ over configurations of $g$ with some given $\|g\|$, then I think this corresponds to a badly non-local field theory. Yikes!
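
For what it's worth, here is a minimal sketch of that iteration on a truncated uniform grid (the window $[-20,20]$, the grid size, the iteration count, and the use of np.convolve times the grid spacing as the discretized convolution are all arbitrary choices of mine); it is only meant to show the kind of scheme I have in mind, not a working solver.

import numpy as np

N = 4096
x = np.linspace(-20, 20, N)
dx = x[1] - x[0]
g = np.exp(-x**2 / 2)  # nonnegative start, so g*g >= 0 and the square root stays real

def self_conv(g):
    # discretized (g*g)(x) on the grid
    return np.convolve(g, g, mode="same") * dx

for _ in range(20):
    g = np.sqrt(np.clip(self_conv(g), 0.0, None))  # update g <- sqrt(g*g); clip guards against rounding

residual = np.max(np.abs(g**2 - self_conv(g)))
print("sup-norm residual of g^2 = g*g after 20 iterations:", residual)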

As a final note, the problem can be restated as finding a function for which the act of squaring commutes with taking a Fourier transform: $\hat{g}^2 = \widehat{g^2}$. This would be solved (sufficient but not necessary) by a function which is its own Fourier transform, $g = \hat{g}$, and whose square is its own Fourier transform, $g^2 = \widehat{g^2}$. But again we run into the problem that there is no good basis in which to do both the product and the Fourier transform: polynomials multiply well but Fourier transform/convolve poorly, and vice versa for Hermite-Gaussians (and so on with seemingly every choice of basis). Any help approaching this problem would be greatly appreciated, as I am seriously losing my mind over it!

Edit: There is also a simple asymptotic argument which shows that if $g$ is bounded above and below by $\sim x^{-n}$ at infinity for some $n$, then it can't solve $g^2 = g*g$. The reason is that the limit of $x^n g^2$ is $0$, but the limit of $x^n\, g*g$ would be $\int_{-\infty}^\infty g(x)\,dx$, so this integral would have to be zero (forcing the integral of $g^2$ to be zero as well). However, such an argument fails if $g$ decays faster than any polynomial, or if it repeatedly crosses $0$ as $x$ goes off to infinity.

Another edit: I believe that the answer is yes, and unique (but trivial), for $g$ taken to be a discrete sequence indexed by $\mathbb{Z}$. This is because Fourier transforming the problem moves it back to a compact space, where the variance argument shows that the only solution is constant, meaning that the only such $g$ is $g_i = \delta_{i0}$.
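
(Quick check of the trivial discrete solution, with $(g*g)_i = \sum_j g_{i-j}\,g_j$: for $g_i = \delta_{i0}$ one has $g_i^2 = \delta_{i0}$ and $(g*g)_i = \sum_j \delta_{i-j,0}\,\delta_{j0} = \delta_{i0}$, so the equation indeed holds.)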

  • If $g \in L^{1}$ then $g \in L^{2}$ automatically by the given equation. – Kavi Rama Murthy Sep 13 '23 at 08:50
  • nice, I think then g*g is continuous and so is g as well then – BigMathGuy Sep 13 '23 at 09:02
  • Is $g$ real valued? Or nonnegative? – LL 3.14 Sep 13 '23 at 12:38
  • I was hoping for a real valued g – BigMathGuy Sep 13 '23 at 17:43
  • 3
    A half-formed idea: since $g$ and its Fourier transform satisfy the same equation, maybe the uncertainty principle can be used to show the pair can’t actually share the same properties derived from the equation they satisfy. – kieransquared Sep 13 '23 at 18:57
  • No luck with that either I'm afraid. The uncertainty principle seems to be too weak a bound to kill this problem. I've also tried things like Young's inequality for convolutions together with clever changes of variables, but it seems like $g^2 = g*g$ is "just right" in the sense that it escapes violating the simple inequalities. – BigMathGuy Sep 13 '23 at 20:27
  • Related question in abelian groups : https://arxiv.org/pdf/2205.08749.pdf – LL 3.14 Sep 28 '23 at 09:56
  • 1
  • Joako, the issue is that those have clear ways of convolving (multiply the characteristic functions), but the real-space form and real-space multiplication are unclear. If you know of a clever way that these could be helpful please say so, as I am a novice (: – BigMathGuy Oct 02 '23 at 19:49
  • It kind of reminds me of an old question of mine: https://math.stackexchange.com/q/2332292/99220

    Here, one successful attempt was an expansion of the form $f(x) = \sum_{n\in\mathbb{Z}} a_n e^{-\frac{1}{2}c_n x^2}$. The convolution behaves nicely: $e^{-\frac{1}{2}a x^2}*e^{-\frac{1}{2} b x^2} \propto e^{-\frac{1}{2}\frac{ab}{a+b}x^2} = e^{-\frac{1}{2}(a\parallel b)x^2}$, using the [parallel operator](https://en.wikipedia.org/wiki/Parallel_(operator)) (see also: https://math.stackexchange.com/q/1785715/99220). If one finds a nice discrete set that is closed under $\parallel$, one could try this expansion.

    – Hyperplane Oct 04 '23 at 09:47
  • Hi Hyperplane, I actually tried this, except with your sum replaced by an integral, which is more general. I was unfortunately left with a two-dimensional integral as a functional equation for $a(n)$ which I couldn't solve (and to which there looked to be no solution). The question is actually very similar to yours, though, in the sense that we are trying to find a function with both a local property (real space) and a nonlocal property (Fourier space). – BigMathGuy Oct 05 '23 at 22:02

4 Answers

8

Partial answer: the answer is negative if $g\in L^1$ is a nonnegative function. Of course, by the equation, the same is true if we replace $g$ by its Fourier transform or $L^1$ by $L^2$.

Assume $g$ is a solution of the problem with the above assumptions. Without loss of generality, we take $\|g\|_{L^1} = 1$. Then $$ \int g^2 = \int g*g = \left(\int g\right)^2 = \|g\|_{L^1}^2 = 1 $$ so $g\in L^2$. Then by the assumption and the Cauchy–Schwarz inequality $$ g(x)^2 = \int g(x-y)\,g(y)\,\mathrm d y \leq \|g\|_{L^2}^2 = 1. $$ Therefore $$ C:= \|g\|_{L^{\infty}} \leq 1. $$

In particular, $(C-g)\,g$ is a nonnegative function such that $$ 0\leq \int (C-g)\,g = C - \int g^2 = C - \int g*g = C - 1, $$ and this is nonpositive since $C \leq 1$. Hence $(C-g)\,g = 0$, that is, $C\,g = g^2 = g*g$. Integrating, one sees that $C=1$, so $g^2= g$, which means that $g$ is the characteristic function of some set (i.e. for a.e. $x$, $g(x)\in\{0,1\}$).

In particular, $\hat g = \hat g * \hat g$, and so, by the functional relation of $g$ that is also verified by its Fourier transform, $\hat g = \hat g * \hat g = \hat g^2$. By this last relation, we deduce that $\hat g$ is also the characteristic function of some set, and in particular $\hat g\geq 0$. Therefore, $$ \|\hat g\|_{L^1} = \int \hat g = \int |\hat g|^2 = \int |g|^2 = \int g = 1, $$ from which we deduce that $\hat g\in L^1$ and so $g$ is continuous, which is of course in contradiction with the fact that $g$ is an integrable characteristic function (a continuous function taking only the values $0$ and $1$ on $\Bbb R$ would have to be constant, and neither constant has integral $1$).


Remark: If we do not assume $g\geq 0$, but only that $g$ is real valued, then we still get $$\label{1}\tag{1} \|g\|_{L^\infty} \leq \|g\|_{L^2} \leq \|g\|_{L^1} =: 1. $$ Since $g\in L^1$, we get $\hat g \in C_0^0$. Moreover, $$ \int |\mathcal{F}(g^2)| = \int |\hat g * \hat g| = \int |\hat g^2| = \int |\hat g|^2 = \int |g|^2 \leq 1 $$ and so $g^2 \in C^0_0$.

One can also derive other features of such a solution if it exists. First, $$ \int |g|^4 = \int |g*g|^2 = \int |\hat g^2|^2 = \int |\hat g|^4 $$ where the second identity follows from the Plancherel formula. Hence $\|g\|_{L^4} = \|\hat g\|_{L^4}$. In particular, assuming that $g:\Bbb R^d\to \Bbb R$, by the sharp Hausdorff–Young inequality and Hölder's inequality $$ \|\hat g\|_{L^4} \leq \theta_4^{d/2} \,\|g\|_{L^{4/3}} \leq \theta_4^{d/2} \,\|g\|_{L^4}^{1/3}\, \|g\|_{L^1}^{2/3}, $$ where $\theta_4 = 2/3^{3/4} < 1$ (actually, $\theta_4 \simeq 0.877$). Since $\|g\|_{L^4} = \|\hat g\|_{L^4}$ and $\|g\|_{L^1} = 1$, it yields $$ \|g\|_{L^4} \leq \theta_4^{3d/4} < 1. $$ Since on the other hand $\|g\|_{L^2} \leq \|g\|_{L^4}^{2/3}\|g\|_{L^1}^{1/3}$ and $\|g\|_{L^4} \leq \|g\|_{L^\infty}^{3/4}\|g\|_{L^1}^{1/4}$, one obtains the following improvement of Inequality \eqref{1}: $$\boxed{ \|g\|_{L^4}^{4/3} \leq \|g\|_{L^\infty} \leq \|g\|_{L^2} \leq \|g\|_{L^4}^{2/3} \leq \theta_4^{d/2} < 1. }$$ Notice also that since $\|g\|_{L^4} \leq \|g\|_{L^\infty}^{1/2}\|g\|_{L^2}^{1/2}$, by the above inequality, $\|g\|_{L^4} \leq \|g\|_{L^2}$.

LL 3.14
  • This is nice. Is there a way to extend this to functions which can be negative? Intuitively, $g$'s negative regions are the only thing which would allow it not to expand too much under its autoconvolution, so I suspect they are important. – BigMathGuy Sep 13 '23 at 17:47
  • You are also assuming that $g \in L^{\infty}$. The inequality $\|g\|_{\infty}^{2}\leq \|g\|_{\infty}\|g\|_{1}$ also holds when $\|g\|_{\infty}=\infty$. – Kavi Rama Murthy Sep 14 '23 at 04:45
  • Indeed! I edited the proof, now it should work better :) – LL 3.14 Sep 14 '23 at 09:06
  • Just for my understanding as a non-mathematician: do you analyze $\int g^2 = \left( \int g \right)^2$ under the assumption that $g$ is non-negative? Because I think you can easily show that $\int g^2 = \left( \int g \right)^2$ must hold for any $g$. – Jeroen Boschma Sep 17 '23 at 09:30
  • Yes indeed, this holds without the assumption that $g$ is non-negative. What needs the assumption is $(\int g)^2 = |g|_{L^1}^2$ (there is only an inequality if $g$ changes sign) and most importantly the fact that $(C-g)g\geq 0$ later in the proof. – LL 3.14 Sep 17 '23 at 09:44
  • I apologize for the lack of expertise, but is the Hausdorff–Young inequality still true for functions over the real line? I had thought it only works for functions over $(0,1)$. – BigMathGuy Sep 17 '23 at 22:40
  • Yes, it works on the whole real line... this is the inequality for the Fourier transform, not for Fourier series. And actually the sharp version I am using here is only valid on the whole space and not on $(0,1)$. – LL 3.14 Sep 18 '23 at 06:43
  • 2
    Be careful, $g^2$ continuous doesn't mean that $g$ is too, when $g$ can have any sign (take $g = 1$ on positive numbers, $g = -1$ on non-positive numbers). – Cactus Sep 27 '23 at 20:05
  • Excuse my ignorance, but I don't understand the answer: does it prove that a function $g(x)$ such that $g^2=g*g$ does not exist? What happens with $g := C\,\delta(x)$, as an example? I think it fulfills the conditions. – Joako Nov 11 '23 at 14:05
4

Not an answer, but we can try to find such a function numerically by minimizing $∫|g^2(x) - g^{*2}(x)|^2 dx$ over a discretized grid of points.

Below is the result of such a simulation, using $2^{16}$ grid points in the interval $[-16,+16]$, initialized as a Gaussian. The optimization terminates with a loss value which is numerically zero. One can observe three effects:

  1. Oscillations with increasing magnitude towards the sides, so it likely converges to something not in $L^2$.
  2. Fractal-like structure, with high-frequency components appearing near $0$.
  3. Aperiodic oscillations.

Note that the convolve(g, g, "same") could introduce some boundary artifacts, since it simply chops off the values outside the original range.

(Figure: the optimized $g$ over the full window, with zoomed views of a region to the left of the origin and of the region around $0$.)

Code (Python, using JAX)

#!/usr/bin/env python
# coding: utf-8

# In[1]:

get_ipython().run_line_magic('config', "InteractiveShell.ast_node_interactivity='last_expr_or_assign'")  # always print last expr.

import jax
import jax.numpy as np
import matplotlib.pyplot as plt
from jax.scipy.optimize import minimize as jax_minimize
from scipy.optimize import minimize as scipy_minimize
from scipy.sparse.linalg import LinearOperator

# In[2]:

# use 64 bit floats (slow)
jax.config.update("jax_enable_x64", True)

key = jax.random.PRNGKey(0)
x_max = 16
N = 2**16
x = np.linspace(-x_max, x_max, N)
g = np.exp(-(x**2) / 2)  # start with a Gaussian.


@jax.jit
def loss(g):
    # discrete residual g^2 - g*g (np.convolve truncated to the window)
    r = g * g - np.convolve(g, g, "same")
    return np.mean(r**2)


grad = jax.jit(jax.grad(loss))


class HVP(LinearOperator):
    # Hessian-vector product of the loss at the point x, as a scipy LinearOperator.
    def __init__(self, x):
        self.x = x
        self.shape = (len(x), len(x))
        self.dtype = x.dtype  # shape/dtype are required by the LinearOperator interface

    def _matvec(self, v):
        # reverse-over-reverse Hessian-vector product
        return jax.vjp(grad, self.x)[1](v)[0]


# In[ ]:

sol = jax_minimize(loss, x0=g, method="BFGS", options={"maxiter": 100})

sol = scipy_minimize(
    loss,
    x0=g,
    method="Newton-CG",
    options={"disp": True},
    jac=grad,
    hess=HVP,
)
print(f"Final loss-value: {sol.fun}")

# In[ ]:

get_ipython().run_line_magic('config', "InlineBackend.figure_format = 'svg'")

fig = plt.figure(figsize=(12, 8), constrained_layout=True)
gs = fig.add_gridspec(2, 2)
ax1 = fig.add_subplot(gs[0, :])
ax2 = fig.add_subplot(gs[1, 0])
ax3 = fig.add_subplot(gs[1, 1])
center = slice(N // 2 - N // 32, N // 2 + N // 32)
left = slice(N // 5, N // 3)
ax1.plot(x, sol.x)
ax2.plot(x[left], sol.x[left])
ax3.plot(x[center], sol.x[center])
fig.savefig("4768288.png", dpi=300)

Hyperplane
  • Interesting! I am wondering what is the effect of having to set a boundary however, since the function seems to grow on the boundary. Is there a possibility to see the Fourier transform too ? – LL 3.14 Sep 14 '23 at 12:18
  • @LL3.14 feel free to play around with the code. There are many things one can try, for example one could consider minimizing a Sobolev norm instead (adding $+λ‖∇g‖_{L^2}^2$ regularization term for some small $λ>0$, which kills high frequencies.) I tried this with + 0.001*np.mean(np.diff(g)**2) -- it makes the fractal structure completely disappear. – Hyperplane Sep 14 '23 at 12:30
  • Wow! This is super interesting! It kind of looks like what I expected, with lots of wild oscillations. The fractal-like nature is cool too; I had thought about g potentially being a rough path before but this makes me want to revisit that idea more seriously. Thank you! – BigMathGuy Sep 14 '23 at 16:57
  • @Hyperplane: great numerics. Can you explain which functional are you minimizing in your previous comment? (You wrote "adding $+\lambda\lVert\nabla g\rVert$") – Giuseppe Negro Sep 28 '23 at 22:44
  • @GiuseppeNegro It's minimizing the function loss in the pseudocode. – Hyperplane Sep 29 '23 at 07:00
  • So you are minimizing the functional $\int_{-\infty}^\infty \lvert g^2(x)-g^{\ast 2}(x)\rvert^2\, dx +\lambda \int_{-\infty}^\infty \lvert\nabla g(x)\rvert^2\, dx$, is that it? Is there anything you can do to prevent your iteration from converging to the trivial solution $g=0$? – Giuseppe Negro Sep 29 '23 at 12:53
  • @GiuseppeNegro Since it's a gradient based optimization, the found solution depends on the initialization. For this example I used a simple Gaussian, but different starting conditions might lead to different solutions. – Hyperplane Sep 29 '23 at 13:29
3

Defining a discrete function $U = \{u_0,\cdots,u_n\}$ with the properties $u_k \ge u_{k-1}$, $u_0 = 0$, $\sum_{k=0}^n u_k^2 = 1$, we can define the residual

$$ \delta_k = u_k^2-\sum_{j=1}^{k}u_{k-j}u_j $$

and then we can determine the $u_k$ values satisfying

$$ \min_U\sum_m \delta_m^2(U),\ \ \ \text{s.t.}\ \ \ u_k \ge u_{k-1},u_0 =0, \sum_k^n u_k^2 = 1 $$

The following Mathematica script performs the calculations.

n = 40;
equs = Table[u[k]^2 - Sum[u[k - j] u[j], {j, 1, k}], {k, 1, n}];
U = Table[u[k], {k, 0, n}];
restrs = {Table[u[k] > u[k - 1], {k, 1, n}], U.U == 1, u[0] == 0};
sol = NMinimize[Join[{equs.equs}, restrs], U, Method -> {"Automatic", "InitialPoints" -> {Table[x, {x, 0, 0.5, 0.5/n}]}}]
data = U /. sol[[2]];
gr1 = ListPlot[data, Filling -> Axis];

data2 = Table[Sum[data[[k - j]] data[[j]], {j, 1, k - 1}], {k, 2, n}]
data3 = Table[data[[j]]^2, {j, 1, n - 1}]
data23 = data3 - data2
gr4 = ListPlot[data23, Filling -> Axis]

Concluding, it seems that such functions do exist as a limit of the discrete case. Below are plots showing such a $U$ and also the close fit between $U^2$ and $U\circledast U$.

(Plots: the optimized sequence $U$, and the difference $U^2 - U\circledast U$.)

Cesareo
  • This seems wrong to me (but very limited math knowledge here)... For such a $U$, the convolution always shows a left and a right 'tail', and therefore $UU$ cannot approach the shape of $U$ or $U^2$ ($UU$ starts to approach a Gaussian, CLT...). If you perform the convolution $U*U$ with $U$ defined from $t=0$ to $t=T$, then there are 2 separate integration regions when shifting the 'flipped' $U$ over the non-flipped $U$: for $t<T$ you integrate from 0 to $t$, and for $t>T$ you only integrate from $(t-T)$ to $T$. I do not see that behavior back in your equations. – Jeroen Boschma Oct 07 '23 at 10:32
-1

I suspect there's a sense in which sine waves/the Dirac delta offer a (unique) solution to the equation? I'll note the following:

  • they're Fourier transforms (as you predict)
  • sinewaves (more generally $g(x) = \exp(zx)$ for complex $z$) are "almost a solution" in that the convolution term diverges. Tacking on a wide Gaussian ($g(x) = \exp(-\varepsilon x^2 + zx)$ for small $\varepsilon > 0$) should yield good approximations on large regions around the origin.
  • sinewaves seem to work for weird spaces, in particular $g(x) = \sin(x)$ solves the equation up to constant factor on the quotient space $\mathbb R / 2 \pi \mathbb Z $ (the circle with circumference $2 \pi$)
  • 1
    Unfortunately it seems that delta functions don't solve it, in the sense that there's no limit which gives them as a solution. They are in fact a solution to the discrete case, but in the continuous case, if $g$ integrates to $1$, then there's nothing you can do to prevent $g^2$ from integrating to a large number. Also, the convolution is bounded by $\|g\|_2 = 1$, so $|g| \le 1$ as well. – BigMathGuy Sep 16 '23 at 19:05