
Consider the following transform $g\mapsto f$, where $$ f(x) = \int_{0}^{\infty} \exp\left\{-\int_0^t\int_{x-\tau}^{x+\tau} g(y) \, dy \, d\tau\right\} \, dt $$ Assume $f,g>0$ are $C^\infty(\mathbb{R})$, and $f,g\in L^2(\mathbb{R})$. Assume also that $f$ is $1$-Lipschitz.

Is it possible to invert this transform (write $g$ in terms of $f$)? If not, what properties of $g$ can be inferred?

Some thoughts: Essentially, I want to infer properties of $g$ from $f$, such as bounds on its derivatives and its overall regularity. I understand this could also be mapped to a problem in functional analysis, integral geometry, or tomography, so could $f$ be related to the Radon transform? Or to Bochner's theorem?

An alternative formulation of $f$ is $$ f(x)=\int_{0}^{\infty} \exp\left\{-\int_{-t}^{t} \left(t-|u|\right) g(x+u)\,du\right\}\,dt $$ Note that one can write $$ h(x):=\int_{-t}^{t} \left(t-|u|\right) g(x+u)\,du $$ as a convolution, by setting $$ h(x)=(k * g)(x)=\int_{-\infty}^{\infty}k(u)g(x-u)\,du $$ where $k$ is the "tent" function (which depends on $t$) $$ k(u)=\begin{cases} t-|u|,&|u|\leq t,\\ 0,&|u|>t. \end{cases} $$ Could we use the Fourier transform to prove injectivity (or non-injectivity), perhaps via the convolution theorem in one dimension?
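As a numerical sanity check (my own sketch; the Gaussian $g$ below is just an arbitrary positive, smooth, $L^2$ example), one can verify that the original double-integral formulation and the tent-convolution formulation of $f$ agree:

```python
import numpy as np
from scipy import integrate

g = lambda y: np.exp(-y**2)  # arbitrary example: positive, smooth, in L^2

def inner_double(x, t):
    # \int_0^t \int_{x-tau}^{x+tau} g(y) dy dtau  (original formulation)
    return integrate.quad(
        lambda tau: integrate.quad(g, x - tau, x + tau)[0], 0, t)[0]

def inner_tent(x, t):
    # \int_{-t}^{t} (t - |u|) g(x+u) du  (tent-kernel formulation)
    return integrate.quad(
        lambda u: (t - abs(u)) * g(x + u), -t, t, points=[0.0])[0]

def f(x, inner, T=30.0):
    # Outer integral truncated at T: for this g the exponent grows roughly
    # like sqrt(pi) * t for large t, so the integrand decays exponentially.
    return integrate.quad(lambda t: np.exp(-inner(x, t)), 0, T, limit=200)[0]

x = 0.7
print(abs(inner_double(x, 2.0) - inner_tent(x, 2.0)))  # ~0
print(f(x, inner_double), f(x, inner_tent))            # should agree closely
```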

To answer my main question on invertibility, I lean towards a "no", since we probably do not have injectivity. The transform tends to smooth out or "blur" the details of $g$, partly because the double integral in the exponent averages $g$ over symmetric intervals around $x$. Smoothing operators of this form typically lose enough information about $g$ that one can often construct nontrivial perturbations $\delta g$ for which the induced change in $f$ is zero (or arbitrarily small), indicating a failure of injectivity. Is this the correct line of reasoning? How can I formalize it into a proof? Are there conditions I could impose on $g$ to guarantee invertibility (beyond smoothness)?

sam wolfe

4 Answers


Not an answer. Too long for a comment.

Following OP's idea, let $\theta(t) = (1 - |t|)\chi_{[-1,1]}(t)$, where $\chi$ is the characteristic function.

We denote the (somewhat) Mellin transform (or Stieltjes moments)

$$f(\alpha, x) = \int_0^{\infty} t^{2\alpha}\exp\left(-\int_{-\infty}^{\infty} t \theta\left(\frac{u}{t}\right) g(x-u) du\right)dt$$

Assume this is well-defined for all $\alpha > -\frac{1}{2}$. Then

$$\begin{aligned} \frac{\partial}{\partial x}f(\alpha, x) &= \int_0^{\infty} t^{2\alpha}\exp\left(-\int_{-\infty}^{\infty} t \theta\left(\frac{u}{t}\right) g(x-u) du\right) \cdot \left(-\int_{-\infty}^{\infty} t \theta\left(\frac{u}{t}\right) g'(x-u) du\right) dt \end{aligned}$$

And use integration by parts, $$\begin{aligned}\int_{-\infty}^{\infty} t \theta\left(\frac{u}{t}\right) g'(x-u) du &= t\theta\left(\frac{u}{t}\right) ( -g(x - u) )\Big|_{-\infty}^{\infty} + \int_{-\infty}^{\infty} \theta'\left(\frac{u}{t}\right) g(x - u) du\\&= 0 - \int_{0}^t g(x-u)du + \int_{-t}^0 g(x-u) du \end{aligned}$$

If $g$ is real analytic, let $G$ be an antiderivative of $g$ (so $G' = g$); then we can expand

$$-\int_{0}^t g(x-u)du + \int_{-t}^0 g(x-u) du = 2\sum_{s=1}^{\infty} \frac{G^{(2s)}(x)}{(2s)!} t^{2s}$$

That means

$$\frac{\partial}{\partial x}f(\alpha, x) = -2 \sum_{s=1}^{\infty} \frac{G^{(2s)}(x)}{(2s)!} f(\alpha+s, x)$$

The partial derivatives of $f(0, x)$ are known pointwise, and they generate an infinite system of equations relating the $f(k, x)$ and the derivatives of $g$ evaluated at $x$; if $g$ has some good properties, it might be possible to solve them all.
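A quick symbolic check of the even-derivative expansion above (my own sketch; the polynomial $g$ is an arbitrary example chosen so that the sum is finite):

```python
import sympy as sp

x, t, u = sp.symbols('x t u')
g = lambda y: y**3 - 2*y + 1          # polynomial example; G' = g
G = sp.integrate(g(x), x)             # an antiderivative of g

# Left-hand side: -int_0^t g(x-u) du + int_{-t}^0 g(x-u) du
lhs = (-sp.integrate(g(x - u), (u, 0, t))
       + sp.integrate(g(x - u), (u, -t, 0)))

# Right-hand side: 2 * sum_{s>=1} G^{(2s)}(x) t^{2s} / (2s)!
# (finite here: G has degree 4, so only s = 1, 2 contribute)
rhs = 2 * sum(sp.diff(G, x, 2 * s) * t**(2 * s) / sp.factorial(2 * s)
              for s in range(1, 3))

assert sp.expand(lhs - rhs) == 0
print("expansion verified for this polynomial g")
```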

Yimin
  • Thanks, this is interesting. For instance, could this be used to infer Lipschitz continuity for a certain constant on $g$, assuming $g$ is smooth? Could you provide an example of a potential application? Additionally, is the assumption of a well-defined function necessary, or could it potentially be proven? – sam wolfe Jan 09 '25 at 15:21
  • It cannot infer any continuity of $g$; this approach assumes analyticity of $g$, so continuity is already there. Some easy cases can be concluded: for instance, if $g$ is a polynomial of some degree, then the summation becomes finite. I believe that can probably give injectivity. Intuitively, if $g$'s Taylor series converges very fast, then there is probably a fixed-point argument to get uniqueness too. – Yimin Jan 09 '25 at 15:45
  • I have attempted a proof of non-injectivity (posted as an answer). Any thoughts on it? – sam wolfe Jan 09 '25 at 23:23

Edit: I think this is not quite right yet, since the transform $\widehat{g}$, and consequently $g$, would depend on $t$ (since the zeros $\{\omega\}$ depend on $t$). Any way around this?

This is my attempt at showing non-injectivity, following my argument. Any thoughts on whether this is correct would be greatly appreciated!

A direct calculation shows that the Fourier transform of $k$ is \begin{equation} \widehat{k}(\omega) = \int_{-\infty}^{\infty} k(u)e^{-i\omega u}du = \frac{4}{\omega^2}\sin^2\left(\frac{\omega t}{2}\right). \end{equation} In particular, $\widehat{k}(\omega)$ vanishes for infinitely many real $\omega$, namely at \begin{equation} \omega = \pm \frac{2\pi n}{t}, \quad n\in \mathbb{Z}\backslash\{0\}. \end{equation} By the convolution theorem, \begin{equation} \widehat{k*g}(\omega) = \widehat{k}(\omega)\widehat{g}(\omega). \end{equation} Since $\widehat{k}(\omega)$ has infinitely many real zeros, one can construct non-zero $\widehat{g}$ supported on those zeros of $\widehat{k}$ (strictly speaking, such a $\widehat{g}$ must be a distribution, e.g. a sum of Dirac masses, so the corresponding $g$ is a combination of $e^{\pm 2\pi i n x/t}$ and lies outside $L^2$). Such a $\widehat{g}$ (hence $g$) satisfies \begin{equation} \widehat{k*g}(\omega) = \widehat{k}(\omega)\widehat{g}(\omega) = 0 \quad \forall\,\omega, \end{equation} which means \begin{equation} (k*g)(x)=0 \quad \text{for all }x. \end{equation} Thus there exist distinct, non-trivial $g_1, g_2$ with $(k*g_1)=(k*g_2)$, showing that the map \begin{equation} g \mapsto\int_{-t}^t \left(t - |u|\right)g(x+u)du \end{equation} is not injective. Finally, since \begin{equation} f(x) = \int_{0}^{\infty} \exp\left(-\,(k*g)(x)\right)dt \end{equation} depends only on the value of $(k*g)(x)$, the same non-uniqueness (different $g$ giving the same $k*g$) implies the overall transform $g\mapsto f$ cannot be injective.
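For what it's worth, a small numerical check (my own sketch) of the closed form for $\widehat{k}$ and of its claimed zeros:

```python
import numpy as np
from scipy import integrate

t = 1.5  # fixed half-width of the tent

def k_hat(omega):
    # \int_{-t}^{t} (t - |u|) e^{-i omega u} du; real by symmetry of k,
    # so only the cosine part survives
    return integrate.quad(
        lambda u: (t - abs(u)) * np.cos(omega * u), -t, t, points=[0.0])[0]

omega = 3.3
closed_form = 4.0 / omega**2 * np.sin(omega * t / 2) ** 2
print(abs(k_hat(omega) - closed_form))    # ~0

# zeros at omega = 2*pi*n/t, n != 0
for n in (1, 2, 3):
    print(abs(k_hat(2 * np.pi * n / t)))  # ~0
```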

Is this correct?

sam wolfe
  • One of the answers here says that convolution is invertible for conditions that your specific $k(x)$ could fulfill with a high probability. – K. Grammatikos Jan 10 '25 at 00:17
  • @K.Grammatikos I am not entirely sure I follow the argument there, could you clarify how it could be applied in my case? However, I do think my proof might not be quite accurate, since it presupposes $t$-dependence of $\widehat{g}$ (and thus $g$). Any workaround? – sam wolfe Jan 10 '25 at 13:18
  • It seems the Fourier transform of $g$ also depends on $t$; it is just on a measure-zero set of $t$ that you can find your $g_1$ and $g_2$, isn't it? I am inclined to believe the injectivity is true, just not stable (no bounded inverse). – Yimin Jan 10 '25 at 14:47
  • @Yimin Indeed. I tend to agree, if $g\equiv g(y,\tau)$, non-injectivity is easily shown. A bit trickier to show that we gain injectivity in the $g\equiv g(y)$ case, perhaps. – sam wolfe Jan 10 '25 at 14:54
  • @K.Grammatikos what does "high probability" mean in this case? – sam wolfe Jan 17 '25 at 14:52
  • It's a funny way of saying that I don't have time to check whether the conditions cited are fulfilled by your choice of convolution, but I believe they are. – K. Grammatikos Jan 17 '25 at 18:47

There is a problem in your assumptions: indeed, if $g\geq 0$ and $g\in L^2$ then you must have $f(x) \to\infty$ as $x\to\infty$ or $x\to -\infty$, which is incompatible with $f\in L^2(\mathbb{R})$.

Indeed, if that were not the case, then there would be some $M>0$ and an increasing sequence $x_n \to \infty$ (the same reasoning holds with a sequence decreasing to $-\infty$ instead) such that $x_{n+1}-x_n > 4M$ and $f(x_n) \leq M$ for every $n$. But \begin{align*} M \geq f(x_n) &\geq \int_0^{2M} \mathrm{d} t \exp\left( - \int_0^t \int_{x_n-\tau}^{x_n+\tau} g(y) \mathrm{d} y \mathrm{d}\tau\right) \\ &\geq 2M \exp\left( - \int_0^{2M} \int_{x_n-\tau}^{x_n+\tau} g(y) \mathrm{d} y \mathrm{d}\tau\right) . \end{align*} It follows that $$ \int_0^{2M} \mathrm{d}\tau \int_{x_n-\tau}^{x_n+\tau} g(y) \mathrm{d}y = \int_{x_n-2M}^{x_n+2M} g(y) \max(0, 2M-|y-x_n|) \mathrm{d}y \geq \ln 2 . $$ Summing this over $n$, using that $\max(0, 2M-|y-x_n|) \leq 2M$ and that the intervals $[x_n-2M, x_n+2M]$ are disjoint, gives $$ 2M\int g(y) \mathrm{d}y = \infty . $$ On the other hand, the fact that the $L^p$ norms on a probability space (here $[x_n-2M, x_n+2M]$ with the uniform measure) are increasing in $p$ gives, writing $\phi(y) = \max(0, 2M-|y-x_n|)$, $$ \frac{\ln 2}{4M} \leq \frac{1}{4M} \int_{x_n-2M}^{x_n+2M} g(y) \phi(y) \mathrm{d} y \leq \left( \frac{1}{4M} \int_{x_n-2M}^{x_n+2M} g(y)^2 \phi(y)^2 \mathrm{d}y \right)^{1/2} $$ from which we deduce similarly that $\int g(y)^2 \mathrm{d} y = \infty$, contradicting $g\in L^2$.
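The blow-up can be illustrated numerically (my own sketch; the Gaussian $g$ below is just an arbitrary positive, smooth, $L^2$ example), using the tent-kernel form of the exponent:

```python
import numpy as np
from scipy import integrate

g = lambda y: np.exp(-y**2)  # positive, smooth, in L^2

def f(x, T=40.0):
    # exponent(t) = \int_{-t}^{t} (t - |u|) g(x+u) du
    expo = lambda t: integrate.quad(
        lambda u: (t - abs(u)) * g(x + u), -t, t, points=[0.0])[0]
    # For t >> |x| the exponent grows roughly like sqrt(pi) * (t - |x|),
    # so the outer integrand decays and truncating at T is harmless here.
    return integrate.quad(lambda t: np.exp(-expo(t)), 0, T, limit=300)[0]

vals = [f(x) for x in (0.0, 4.0, 8.0)]
print(vals)  # increasing: f grows as x moves away from the mass of g
```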


Here is a condition for injectivity with dubious applicability: if $g$ is $C^\infty$ on $[0, \infty)$ and zero on $(-\infty, 0)$ (a discontinuity at $0$ is allowed), then you can recover all its derivatives at $0$. This means that if $g$ is analytic on $[0, \infty)$ and zero on $(-\infty, 0)$ (again, there can be a discontinuity at $0$), then $g$ is uniquely determined.

Indeed, consider $f(x)-f(0)$ for $x>0$ small: $$ f(x) - f(0) = \int_0^\infty \mathrm{d}t \left[ \exp\left( -\int_0^t \mathrm{d}\tau \int_{x-\tau}^{x+\tau} g \right) - \exp\left( -\int_0^t \mathrm{d}\tau \int_{0}^{\tau} g \right) \right] \\ = \int_0^x \mathrm{d}t \exp\left( -\int_0^t \mathrm{d}\tau \int_{x-\tau}^{x+\tau} g \right) - \int_0^{2x} \mathrm{d} t \exp\left( -\int_0^t \mathrm{d}\tau \int_{0}^{\tau} g \right) . \qquad (\star) $$ If both $f$ and $g$ are analytic on $[0, \infty)$, and can hence be written for $x\geq 0$ as $g(x) = \sum_{n\geq 0} g_n \frac{x^n}{n!}$ and $f(x) = \sum_{n\geq 0} f_n \frac{x^n}{n!}$, then the above equation yields an infinite system of equations connecting the $g_n$ to the $f_n$. If this system is triangular, then the $g_n$ are uniquely determined by the $f_n$ and uniqueness is guaranteed. This is indeed the case. Observe first that $$ \int_0^t \mathrm{d}\tau \int_{x-\tau}^{x+\tau} g = \int_0^t \sum_{n\geq 0} \frac{g_n}{(n+1)!} ((x+\tau)^{n+1} - (x-\tau)^{n+1}) = \sum_{n\geq 0} \frac{g_n}{(n+2)!} ((x+t)^{n+2} - 2x^{n+2} + (x-t)^{n+2}) = O(x^2) . $$ Expand $(\star)$ with the first order estimate $e^{-x} = 1-x+O(x^2)$: $$ f(x)-f(0) = -x + O\left(x^4 \left(\int_0^{2x} g\right)^2\right) - \int_0^x \sum_{n\geq 0} \frac{g_n}{(n+2)!} ((x+t)^{n+2} - 2x^{n+2} + (x-t)^{n+2}) + \int_0^{2x} \sum_{n\geq 0} \frac{g_n}{(n+2)!} t^{n+2} \\ = -x + O\left(x^4 \left(\int_0^{2x} g\right)^2\right) - \sum_{n\geq 0} \frac{g_n}{(n+3)!} \left( (2x)^{n+3} - x^{n+3} - 2(n+3) x^{n+3} + x^{n+3} - (2x)^{n+3} \right) \\ = -x + O\left(x^4 \left(\int_0^{2x} g\right)^2\right) + 2 \sum_{n\geq 0} \frac{g_n}{(n+2)!} x^{n+3} . $$ If we expand the $O\left(x^4 \left(\int_0^{2x} g\right)^2\right)$, we will observe that only $g_k$ with $k<n$ appear in the term with coefficient $x^{n+3}$. This guarantees that the system is triangular.
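One can check the key intermediate identity above symbolically (my own sketch, with a polynomial $g$ as an arbitrary example so that the series is finite):

```python
import sympy as sp

x, t, tau, y = sp.symbols('x t tau y')
g = 1 + 2*y + 3*y**2   # polynomial example, with g_n = g^{(n)}(0)

# Left-hand side: \int_0^t \int_{x-tau}^{x+tau} g(y) dy dtau
lhs = sp.integrate(sp.integrate(g, (y, x - tau, x + tau)), (tau, 0, t))

# Right-hand side: sum_n g_n/(n+2)! * ((x+t)^{n+2} - 2 x^{n+2} + (x-t)^{n+2})
gn = [sp.diff(g, y, n).subs(y, 0) for n in range(3)]
rhs = sum(gn[n] / sp.factorial(n + 2)
          * ((x + t)**(n + 2) - 2 * x**(n + 2) + (x - t)**(n + 2))
          for n in range(3))

assert sp.expand(lhs - rhs) == 0
print("intermediate identity verified for this polynomial g")
```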

  • This is interesting. I wonder if there is a slightly more qualitative proof, without the need for an expansion, even if $g$ is not constructible. If we consider just the "tent" convolution, the answer by Robert to this question seems to suggest $L^2$ is enough. Any thoughts on this? – sam wolfe Mar 27 '25 at 19:42
  • @samwolfe your problem is less straightforward. Convolution translates very well through Fourier transform, but the transformation $g\mapsto f$ here is more involved; so I do not think that Robert's answer can shed light on your problem here. – Thomas Lehéricy Mar 28 '25 at 00:40
  • I see, thank you! It slightly bugs me that an inversion is not possible in a relatively straightforward way, without the need of an infinite system. I've considered some version of the D'Alembert formula, but with no success. Thank you so much for your contributions nonetheless! – sam wolfe Mar 28 '25 at 14:00