
Show that an ansatz solves $\Delta|u|^{\frac12}=0$ in 2 dimensions $(\mathbb{R}^{1+1})$

I have added a brute-force check of how the ansatz solves the equation, but I am stuck on properly defining its boundary conditions; I don't know whether that is even possible. I hope you can show me how if it is possible, or explain why it is not.


Initial question

I am trying to extend the results of this other ODE question into its PDE version.

So I would like to know if the following ansatz

$$u(x,t) = \frac{\text{sgn}(u(0,0))}{4}\left(2\sqrt{|u(0,0)|}-(x+t)\right)^2\cdot\theta\!\left(2\sqrt{|u(0,0)|}-(x+t)\right)$$

is a solution to

$$\Delta|u|^{\frac12}=0$$

I am trying this ansatz after noticing in Wolfram-Alpha that the solutions of $y'=-\sqrt{y}$ shown here have the same structure as the solutions of $\partial_t^2\sqrt{y}=0$ shown here, and by using an ansatz in the variable $(x+t)$ I am aiming for some separation of variables (I hope).

I tried it in Wolfram-Alpha and it kind of works: it shows that $\partial_x^2\sqrt{u}+\partial_t^2\sqrt{u}=0$, but I moved the unit step functions $\theta$ outside the derivatives, and maybe that is illegal in this case (I rely on cancellations of the form $(T-t)\delta(T-t)=0$). And I have no clue what to do with the initial/boundary conditions.

What would happen if I made the change of variable $\tau=-t$? Would it then solve $u_\tau=\Delta|u|^{\frac12}$, or maybe $(\partial_x^2-\partial_t^2)u^{\frac12}=0$?

And if it is not, which PDE does this ansatz solve?

And how should I define the initial/boundary conditions?

I hope you can show your steps and comment on why a degree of freedom is lost (the solution is determined by just one boundary value; no initial conditions on derivatives are required).

PS: I suspect the initial condition is ill defined, since it should be something like $u(0,t)$. Please adjust it in whatever way makes the most sense for the PDE, so that the ansatz fits as a solution.


Update 4 - some attempts

The ansatz is based on what I reviewed for ODEs like $y'=-\sqrt[n]{y}$ in this answer, which admit solutions of the form $y(t)=\left(\frac{(n-1)}{n}\left(T-t\right)\right)^{\frac{n}{n-1}}\theta(T-t)$ that become zero, $y(t)=0$ for all $t\geq T$, after some finite time $T<\infty$, and which have the property of cancelling the derivatives of $\theta$ since $x\delta(x)=0$.

Finite-duration behavior cannot be modeled by any non-piecewise power series, since that would violate the Identity theorem, so classic power-series ansätze won't work here; also, no linear ODE can show this behavior, because a Non-Lipschitz term is required (details in these papers by Vardia T. Haimo: paper 1, paper 2). Since this question I have wondered whether, for ODEs, something like $x(t)=\sum\limits_{k=2}^\infty a_k(T-t)^k\theta(T-t)$ could be a general finite-duration solution (notice it is not a Taylor series, since it is not defined at $t=T$; the series starts at $k=2$ because a constant term at $k=0$ would lead to Dirac delta functions in its derivatives, and $a_1=0$ so that $x'(T)=0$). Later I found that for PDEs the requirement of a Non-Lipschitz term may not apply, since it is possible for linear PDEs to have solutions that become zero if I fix one point in space: see $\text{Eq. 8}$ and $\text{Eq. 9}$ in this question, where a smooth bump function solves the classic wave equation in $\mathbb{R}^{1+1}$ (and is not even analytic). But I think that for the solution to become zero on the whole space of the variables $\{x,\ t\}$ the non-Lipschitz term is still required, as shown in $\text{Eq. 11}$; still, I could do little analysis with it since I don't know the Fourier transform of a smooth bump function.

So, looking for a simpler example where I could apply something similar to the ODE examples, I thought about Laplace's equation $\Delta u = 0$, since it can be solved by separation of variables. I started to look for an ansatz for $\Delta|u|^{\frac12}=0$ based on a quadratic polynomial, such that the second derivative of its square root is some constant (which I could add/subtract later by modifying the equation).

Substituting $n=2$, I have that $y'=-\sqrt{y}$ is solved by $y(t)=\frac14(T-t)^2\theta(T-t)$. Since the trivial zero solution solves $\Delta|u|^{\frac12}=0$, I will drop the $\theta(T-t)$ factor and focus only on the interval where the solution is not identically zero, so that its derivatives don't enter the computations.
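
As a sanity check, here is a minimal SymPy sketch (my addition, not from the original attempts) verifying that the quadratic piece solves the ODE before the extinction time; the symbol `s` is my shorthand for $T-t$, assumed positive:

```python
import sympy as sp

s = sp.symbols('s', positive=True)    # s = T - t > 0, i.e. before the extinction time

y = sp.Rational(1, 4) * s**2          # quadratic piece y(t) = (T - t)^2 / 4 written in s
# chain rule: dy/dt = -dy/ds, and sqrt(y) = s/2 because s > 0
residual = -sp.diff(y, s) + sp.sqrt(y)
print(sp.simplify(residual))          # prints 0: y' = -sqrt(y) holds for t < T
```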

Then, thinking of the separation of variables for Laplace's equation, I will try an ansatz $u(x,t)=y(x+t)$ and check whether each second-derivative term vanishes separately: $\partial_t^2\sqrt{u}=0$ and $\partial_x^2\sqrt{u}=0$.

  1. If I try to solve in Wolfram-Alpha $$\partial_x^2 \sqrt{u(x,t)}=0$$ it is expanded as $$\frac{2uu_{xx}-u_x^2}{4u^{\frac32}}=0$$ and the solution given is $$u(x,t)=\frac{c_1(t)^2x^2}{4c_2(t)}+xc_1(t)+c_2(t)$$ which is the solution obtained by setting the numerator equal to zero: $$2uu_{xx}-u_x^2=0$$
  2. By symmetry I expect a similar quadratic answer in the time variable, as shown in Wolfram-Alpha: $$\partial_t^2 \sqrt{u(x,t)}=0$$ is expanded as $$\frac{2uu_{tt}-u_t^2}{4u^{\frac32}}=0$$ and the solution given is $$u(x,t)=\frac{c_1(x)^2t^2}{4c_2(x)}+tc_1(x)+c_2(x)$$ which is the solution obtained by setting the numerator equal to zero: $$2uu_{tt}-u_t^2=0$$
  3. So far a quadratic solution looks like the valid alternative, so, keeping in mind the requirement that the derivatives coming from $\theta(T-t)$ cancel, let's review how the pieces behave: $$\begin{array}{r c l} \frac{\partial^2}{\partial x^2}\left(\sqrt{\frac14(T-(t+x))^2}\right) & = & 0\qquad\text{as intended}\\ \frac{\partial^2}{\partial t^2}\left(\sqrt{\frac14(T-(t+x))^2}\right) & = & 0\qquad\text{as intended}\\ \frac{\partial^2}{\partial x^2}\left(\frac14(T-(t+x))^2\right) & = & \frac12\\ \frac{\partial^2}{\partial t^2}\left(\frac14(T-(t+x))^2\right) & = & \frac12\\ \end{array}$$ so the second derivatives of the quadratic give a constant value as expected, and those of its square root vanish, solving the equation term by term thanks to the linearity of the Laplacian operator.
  4. Now, to verify that separation of variables is working, let's check the full equation in Wolfram-Alpha: $$\partial_x^2 \sqrt{u(x,t)}+\partial_t^2 \sqrt{u(x,t)}=0$$ It doesn't give a solution, but it does give the following expansion $$\frac{2u(u_{xx}+u_{tt})-u_x^2-u_t^2}{4u^{\frac32}}=0$$ where you can see it is the sum of the previous results for each term considered independently. If I try to solve the numerator in Wolfram-Alpha it again delivers no solution $$2u(u_{xx}+u_{tt})-u_x^2-u_t^2=0$$ but I can make the following ansatz: if I believe a quadratic function is a solution, then the sum of second derivatives should equal some constant, say $b$, so replacing $u_{xx}+u_{tt}=b$ in Wolfram-Alpha I find that $$2bu-u_x^2-u_t^2=0$$ does admit a quadratic function as a solution, so the ansatz is consistent with the PDE I am aiming to solve: $$u(x,t) = \frac{2bc_1tx+2bc_2t+bc_1^2x^2+2bc_1c_2x+bc_2^2+bt^2}{2(c_1^2+1)}$$ Now, taking $b=\frac12+\frac12=1$, I get $$u(x,t) = \frac{2c_1tx+2c_2t+c_1^2x^2+2c_1c_2x+c_2^2+t^2}{2(c_1^2+1)}=\frac{(c_2+t)^2+c_1x(2(c_2+t)+c_1x)}{2(c_1^2+1)}$$ and setting $c_1=1$ recovers the form $$u_0(x,t) = \frac14(c_2+t+x)^2 =\frac14(c_3-(t+x))^2$$ showing that, at least, the ansatz is consistent with the differential equation, and separation of variables is working (a SymPy check of this step is sketched right after this list).
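
Here is a small SymPy sketch (my addition; the symbol `c_3` plays the role of the constant $c_3$ above, and the theta factor is dropped) repeating the check of step 4: the quadratic ansatz kills the numerator $2u(u_{xx}+u_{tt})-u_x^2-u_t^2$, and on the region $c_3-(x+t)>0$ the square root of the ansatz is a linear, hence harmonic, function:

```python
import sympy as sp

x, t, c3 = sp.symbols('x t c_3', real=True)

u = sp.Rational(1, 4) * (c3 - (x + t))**2   # quadratic ansatz, theta factor dropped

# Numerator of Delta(sqrt(u)) = (2 u (u_xx + u_tt) - u_x^2 - u_t^2) / (4 u^{3/2})
num = (2*u*(sp.diff(u, x, 2) + sp.diff(u, t, 2))
       - sp.diff(u, x)**2 - sp.diff(u, t)**2)
print(sp.simplify(num))                     # 0, so Delta(sqrt(u)) = 0 wherever u > 0

# On the region c_3 - (x + t) > 0 we have sqrt(u) = (c_3 - x - t)/2, a linear
# function, consistent with the step-3 derivatives above.
v = (c3 - x - t) / 2
print(sp.diff(v, x, 2) + sp.diff(v, t, 2))  # 0
```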

My problem now is how to properly define boundary conditions: I don't know whether it is possible to specify them with just one constant value. Do I need a smart choice? Or does requiring just a constant show that the ansatz is not a valid solution of the PDE?

I am very rusty on boundary-condition problems, and here it looks like one side is predetermined, since the solution becomes zero there, losing a degree of freedom in this way (I believe). Wikipedia says Laplace's equation can be seen as a particular heat equation at steady state; I remember seeing this equation in engineering a decade ago, so I will review my notes and see if I can extrapolate something later.

But I hope someone can derive a formal solution during the bounty, rather than my ugly trial-and-error procedure.

Update 5

I have tried some techniques I saw for solving the heat equation, but they don't work; I think the ansatz has lost all flexibility, and only one specific shape for the boundaries is allowed. Fixing $T=5$, I have these plots of a quadratic function that decreases to zero as time passes: Contour plot / Desmos plot

Contour plot in Wolfram-Alpha

animation of the solution as time passes
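
For reference, here is a minimal matplotlib sketch (my addition, not one of the linked plots) that reproduces a contour plot of this ansatz with $T=5$; the axis ranges are my own choice:

```python
import numpy as np
import matplotlib.pyplot as plt

T = 5.0                                          # extinction value used in the linked plots
x = np.linspace(-1.0, 6.0, 300)
t = np.linspace(0.0, 6.0, 300)
X, Tt = np.meshgrid(x, t)

U = 0.25 * np.maximum(T - (X + Tt), 0.0) ** 2    # the ansatz, with theta written as max(., 0)

plt.contourf(X, Tt, U, levels=30)
plt.colorbar(label='u(x,t)')
plt.xlabel('x'); plt.ylabel('t')
plt.title('Quadratic ansatz, zero for x + t >= T = 5')
plt.show()
```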

I still think it is a valid solution; the value at $x=L$ would be $u(L,t)=\frac{1}{4}\left(\left(2\sqrt{u(0,0)}-L-t\right)^{+}\right)^2$.

I hope you can confirm whether this is formally a valid solution.


From here on, reading is not mandatory for giving an answer


Update 1 - answers to comments

User @K.defaoite has made some interesting observations, and answering them would make the comments too long, so I will explain here in more detail what I have done so far.

I aim to construct a simple example of a PDE with solutions of finite duration, and I started from something I already know shows this kind of behavior: the ODE in the mentioned question, $$u'(t) = -\text{sgn}(u(t))\sqrt{|u(t)|},$$ admits the solution $$u(t) = \frac{\text{sgn}(u(0))}{4}\left(2\sqrt{|u(0)|}-t\right)^2\cdot\theta\!\left(2\sqrt{|u(0)|}-t\right)$$ where, after the finite extinction time $T=2\sqrt{|u(0)|}<\infty$, the solution becomes zero forever. Notice this behavior cannot be modeled by any non-piecewise power series, since that would violate the Identity theorem, so classic power-series ansätze won't work here. Also, no linear ODE can have such finite-duration solutions, since they require a Non-Lipschitz term in the ODE, where uniqueness can break down (details in these papers by Vardia T. Haimo: paper 1, paper 2).

As studied in this other question, the analogous phenomenon for PDEs is not as easy to understand: in Eq. 8 and Eq. 9 I could make a linear PDE that shows a solution of finite duration if I focus on only one point in space; but since solutions of PDEs are not curves as in ODEs but rather surfaces/sheets, and those sheets never become zero on the full $\{t,x\}$ space, I tried to force this behavior and was able to build a PDE in Eq. 11 that does show the PDE kind of finite-duration behavior. Unfortunately, the solution I used is based on a smooth bump function, so I cannot use the Fourier transform to study its properties (whereas I can for the mentioned ODE solution, as shown here).

Later, thanks to the following comment by @CalvinKhor, I learned that this paper shows that the PDE $u_t=\Delta u^m,\,0<m<1$, supports this kind of finite-duration solution. Unfortunately the math level of the paper is beyond my current skills (I understood very little of it), but since the example looks similar to the one above, I am trying to build from it the simplest possible example of a closed-form finite-duration solution to a PDE.

What I did is the following: since I am thinking in only 2 dimensions, one for time and one for space ($\mathbb{R}^{1+1}$), in this case $\Delta \equiv \partial_t^2+\partial_x^2$, and the simplest equation would be just setting it equal to zero; this is why I based the question on $\Delta|u|^{\frac12}=0$ instead of the mentioned $u_t=\Delta u^{\frac12}$, but any example would work for me.

I first tried the following analogy with the wave equation: with a phase variable $\varphi = x-t$, the wave equation $u_{tt}=u_{xx}$ can be written as $u_{\varphi\varphi}=u_{\varphi\varphi}$, and the wave equation is recovered by expanding the LHS through the time variable and the RHS through the space variable. So I tried inserting a similar variable $\varphi = x\pm t$ into the solution of the ODE to see what would happen.

From now on I do some illegal steps to see what happens and how it fails, hoping to learn lessons to improve further attempts. The first non-rigorous assumption was keeping the initial condition as a constant $u(0,0)$, which I think is mistaken since PDEs require boundary conditions, but keeping it constant is a first approach to checking what happens with the change of variable $\varphi$.

Then I started by replacing $u(x,t):=u(\varphi_1)$ with $\varphi_1 = x+t$ and taking the second derivatives, but with an illegal step: moving the step function outside the derivative to avoid Dirac delta functions, something that can sometimes be done as explained in this comment by @md2perpe, but I am not sure whether it is valid here. Also, another illegal step: I will first focus only on scenarios where $u(x,t)\geq 0$, so I can drop the absolute value and evaluate $\Delta u^{\frac12}$ instead of $\Delta |u|^{\frac12}$. With all this I found the following in the answers of Wolfram-Alpha 1 and Wolfram-Alpha 2:

$$\begin{array}{r c l} \dfrac{\partial^2}{\partial x^2}\left(\sqrt{\frac{\text{sgn}(u(0,0))}{4}\left(2\sqrt{|u(0,0)|}-(x+t)\right)^2 }\right) & = & 0 \\ \dfrac{\partial^2}{\partial t^2}\left(\sqrt{\frac{\text{sgn}(u(0,0))}{4}\left(2\sqrt{|u(0,0)|}-(x+t)\right)^2 }\right) & = & 0 \\ \end{array}$$

and this gives me hope that maybe it can be improved into solving $\Delta |u|^{\frac12}=0$, or maybe $(\partial_x^2-\partial_t^2)|u|^{\frac12}=0$, but I don't know how to proceed with the boundary conditions; my PDE background is only in linear PDEs via Fourier methods, and that kind of approach doesn't work here (and, to be honest, it is very rusty).

I hope it is now clearer what I am trying to do. I don't know how to explain it further.


Update 3 - answer 2 to @K.defaoite's comments

If I focus on the equation $u_t=\Delta |u|^{\frac12}$, on the RHS I have the same situation described in answer 1 for $\Delta |u|^{\frac12} = 0$, and on the LHS, following Wolfram-Alpha, I have:

$$\begin{array}{r c l} \dfrac{\partial}{\partial t}\left(\sqrt{\frac{\text{sgn}(u(0,0))}{4}\left(2\sqrt{|u(0,0)|}-(x+t)\right)^2 }\right) & = & -\frac12\sqrt{\text{sgn}\!(u(0,0))}\,\text{sgn}\!\left(2\sqrt{|u(0,0)|}-(x+t)\right) \end{array}$$
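
A small SymPy sketch (my addition) of this computation, restricted to the region where $u(0,0)>0$ and $2\sqrt{|u(0,0)|}-(x+t)>0$, so the sign and the step function can be dropped; the symbol `u00` stands for $u(0,0)$:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
u00 = sp.symbols('u00', positive=True)      # stands for u(0,0) > 0, so sgn(u(0,0)) = 1

w = 2*sp.sqrt(u00) - (x + t)                # argument of the step function
# On the region w > 0 the ansatz is u = w**2 / 4, hence |u|^{1/2} = w/2 there.
print(sp.diff(w / 2, t))                    # -1/2, matching the Wolfram-Alpha output above
```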

And here I fall again into the issues with the boundary conditions that I don't know how to handle:

  • If I choose $u(0,0)=0$ the equation is solved, but it has no meaning, since it becomes the trivial zero solution for $t\geq 0$.
  • Now, if I try to choose $u(0,0)$ such that $2\sqrt{|u(0,0)|}-(x+t)=0$, then it would be a function of $\{x,\ t\}$ and all the previous derivatives become invalid, since $u(0,0)$ is no longer a constant but rather something like $u(0,t)$ or $u(x,0)$, which makes extra terms pop up.

The alternative could be to require $2\sqrt{|u(0,0)|}-(x+t)>0$ with $u(0,0)>0$, i.e., to stay on the whole region where the solution is nonzero, in which case: $$u_t\equiv -\frac12$$

So instead I should be trying to solve the differential equation: $$u_t+\frac12 = \Delta |u|^{\frac12}$$

or maybe try an ansatz $v(x,t) = u(x,t)+\frac{t}{2}$ for the equation $v_t = \Delta |v|^{\frac12}$ instead.

But since I don't fully understand how to define initial/boundary conditions for nonlinear PDEs, I could keep producing many other alternatives, and they would all have the same validity as these ones: none, since I don't even know whether they are ill defined from the very beginning.

That is why I am looking for a simple example, to have something to grab onto and start from. I hope you can give one, whichever you want, with a simple finite-duration solution.


Update 2 - answer to a comment by @AlexRavsky

With $\varphi = x-t$ you get:

$$\begin{array}{r c l}\text{RHS:}\quad u_{\varphi\varphi} & = & \frac{\partial}{\partial\varphi}\left(\frac{\partial u}{\partial\varphi}\right) = \frac{\partial}{\partial\varphi}\left(\frac{\partial u}{\partial\varphi}\underbrace{\frac{\partial x}{\partial x}}_{=1}\right) = \frac{\partial}{\partial\varphi}\left(\frac{\partial u}{\partial x}\frac{\partial x}{\partial \varphi}\right) = \frac{\partial}{\partial\varphi}\left(\frac{u_x}{\left(\underbrace{\frac{\partial \varphi}{\partial x}}_{=1}\right)}\right) \\ & = & \frac{\partial}{\partial\varphi}\left(u_x\right)\underbrace{\frac{\partial x}{\partial x}}_{=1} = \frac{\partial}{\partial x}\left(u_x\right)\frac{\partial x}{\partial \varphi} = u_{xx}\left(\underbrace{\frac{\partial \varphi}{\partial x}}_{=1}\right)^{-1} = u_{xx}\end{array}$$

$$\begin{array}{r c l}\text{LHS:}\quad u_{\varphi\varphi} & = & \frac{\partial}{\partial\varphi}\left(\frac{\partial u}{\partial\varphi}\right) = \frac{\partial}{\partial\varphi}\left(\frac{\partial u}{\partial\varphi}\underbrace{\frac{\partial t}{\partial t}}_{=1}\right) = \frac{\partial}{\partial\varphi}\left(\frac{\partial u}{\partial t}\frac{\partial t}{\partial \varphi}\right) = \frac{\partial}{\partial\varphi}\left(\frac{u_t}{\left(\underbrace{\frac{\partial \varphi}{\partial t}}_{=-1}\right)}\right) \\ & = & -\frac{\partial}{\partial\varphi}\left(u_t\right)\underbrace{\frac{\partial t}{\partial t}}_{=1} = -\frac{\partial}{\partial t}\left(u_t\right)\frac{\partial t}{\partial \varphi} = -u_{tt}\left(\underbrace{\frac{\partial \varphi}{\partial t}}_{=-1}\right)^{-1} = u_{tt}\end{array}$$

$$\Rightarrow u_{\varphi\varphi}=u_{\varphi\varphi} \iff u_{xx}=u_{tt} \iff u_{xx}-u_{tt}=0\quad\text{wave equation in 2D with }c=1$$
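
For completeness, here is a small SymPy sketch (my addition) confirming the conclusion of this change of variables: any twice-differentiable profile $u(x,t)=f(x-t)$ satisfies $u_{xx}=u_{tt}$:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
f = sp.Function('f')

u = f(x - t)                                   # any profile transported along phi = x - t
wave_residual = sp.diff(u, x, 2) - sp.diff(u, t, 2)
print(sp.simplify(wave_residual))              # 0: u_xx = u_tt, the 1D wave equation with c = 1
```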


Update 6 - comments on @CalvinKhor's answer

Thanks for the answer. This is going to be too long for the comment section, but in summary I would like you to elaborate on "non-zero harmonic functions cannot be identically zero on any non-empty open set".

In the intro section of the Wikipedia article on harmonic functions, they are defined as follows: "a harmonic function is a twice continuously differentiable function $f:\,U\to\mathbb{R}$, where $U$ is an open subset of $\mathbb{R}^n$, that satisfies Laplace's equation"; so in my mind (maybe mistakenly), all I need is to prove that it satisfies Laplace's equation (which I think I did in Update 4).

From what I was able to understand from your answer, writing $T:=2\sqrt{|u(0,0)|}$: away from the critical line $L:=\{T-(x+t)=0\}$ there are no issues on one side, since the trivial zero solution $u_0(x,t)=0$ solves Laplace's equation; and on the other side, assuming now $u(0,0)>0$, then $q(x,t)=\sqrt{u(x,t)}=\frac12 |T-(x+t)|$ should fulfill Laplace's equation away from the critical line $L$ (Wolfram-Alpha): $$\frac{\partial^2}{\partial \{x^2,\ t^2\}}\left(q(x,t)\right)=\delta(T-(x+t))$$ which is zero outside the critical line $L$, fulfilling Laplace's equation there.

So the problem is to see what happens on the critical line $L$: but there we already have $T-(x+t)=0$, which implies that $$u(x,t)\biggr|_{L}= \frac{\text{sgn}(u(0,0))}{4}\left(\overbrace{\require{cancel}\cancel{2\sqrt{|u(0,0)|}-(x+t)}}^{\displaystyle{=0}}\right)^2\cdot\theta\!\left(2\sqrt{|u(0,0)|}-(x+t)\right)\equiv 0$$ so there shouldn't be any problem here either, since we are again taking derivatives of the trivial zero solution.

$$\Delta|u|^{\frac12} = \begin{cases} 0,\quad \text{if}\,\, T-(x+t)>0 \,\,\text{since}\,\, \frac{\partial^2}{\partial \{x^2,\ t^2\}}\left(\sqrt{u(x,t)}\right)=\delta(T-(x+t)) \\ 0,\quad\text{if}\,\, T-(x+t)=0 \,\,\text{since}\,\, u(x,t)=0\\ 0,\quad\text{if}\,\, T-(x+t)<0 \,\,\text{since}\,\, \theta(T-(x+t))=0\\\end{cases}$$ which looks continuous to me (maybe I am mistaken here; please also elaborate on this).

With these points in mind I think $u(x,t)$ does indeed exactly solve $\Delta|u|^{\frac12}=0$, so $|u|^{\frac12}$ is harmonic in the sense of Wikipedia's definition.

But here is where I would like you to elaborate on the answer you shared: if I think about non-piecewise power series, like the polynomial solution mentioned in the comment by @paulgarrett, the function would never become a constant flat value, due to the Identity theorem; so in this non-piecewise sense I understand that I wouldn't have harmonic solutions.

But because of this, I would like to know whether the reasoning your answer is based on assumes, as an axiom, that the solution must be non-piecewise defined, since with the piecewise definition I have used it looks to me like it exactly solves the $\Delta|u|^{\frac12}=0$ equation. As an example of what I mean here: Liouville's theorem (complex analysis) is stated for an Entire function and, like the Maximum modulus principle, assumes the studied function is a Holomorphic function, hence an Analytic function; so both are built on the underlying assumption that the studied function can be represented by a non-piecewise power series on the whole domain where it is defined, an assumption that the function $u(x,t)$ does not fulfill to start with. I wonder whether the same issue is happening with the analysis you have shared (maybe I am mistaken and have a conceptual error here, so this could be an important point for me to understand).

PS: I was not aware of the Math3D website, thanks for sharing it!

Joako
    Is $\Delta$ supposed to represent $\partial_x^2+\partial_t^2$ in this case? It is customary to use the letters $x,y$ or $x_1,x_2$ and reserve $t$ as a variable only defined on $\mathbb R_+$. Just want to clarify. – K.defaoite Oct 16 '24 at 22:55
  • @K.defaoite here you caught me off guard. I was thinking of the wave equation and tried to split the phase as is done for the 1D wave equation, but I am aiming to make the simplest PDE example of finite-duration solutions of these equations with the lowest dimension possible, to see if I can reproduce what is done in the paper shared by @CalvinKhor in his comment here (a paper of which I understood very little) – Joako Oct 17 '24 at 00:11
  • @K.defaoite maybe I should change the title of the question from "... 2 dimensions" to "$\mathbb{R}^{1+1}$ dimensions"? – Joako Oct 17 '24 at 01:02
  • So.... when you write $\Delta$ you really just mean $\partial_x^2$ ? It's just an ODE in that case. Please explain what you mean by $\Delta$. Did you mean instead to write the d'Alembert operator $\square=\frac{1}{c^2}\partial_t^2-\partial_x^2$ ?? – K.defaoite Oct 17 '24 at 03:59
  • In the linked paper, $\Delta$ is assumed to have the standard meaning, i.e, $\Delta=\sum_i\partial_{x_i}^2$ – K.defaoite Oct 17 '24 at 04:01
  • @K.defaoite I was thinking of $\partial_x^2+\partial_t^2$ with $x$ a spatial variable and $t$ the time, and tried to use something like $\varphi=x\pm t$ in a similar ODE in the variable $\varphi$ to make an ansatz for these PDEs, since they look similar – Joako Oct 17 '24 at 04:21
  • If that's the case then it behaves completely differently from the equation in the linked paper. – K.defaoite Oct 17 '24 at 14:20
  • @K.defaoite probably, I understood very little from the paper. As I said, I am looking to make the simplest possible example of a PDE with solutions of finite duration so I can study it; I find examples very illustrative. So far the nearest I have found is the one of Eq. 11 in this other question, but the solution is a smooth bump function, so I don't know how to find its Fourier transform – Joako Oct 17 '24 at 15:15
  • so I guess what you probably want is $\partial_t u=\partial_x^2(|u|^{1/2})$ – K.defaoite Oct 17 '24 at 16:06
  • @K.defaoite I added what I am trying to do with the best explanation I can give (given my background in math and in English - I am not a native speaker and still struggle a lot). I hope it is clear enough to receive help on it. – Joako Oct 17 '24 at 22:01
  • "it could be transformed just in $u_{\varphi\varphi}=u_{\varphi\varphi}$" I guess there is a misprint somewhere. – Alex Ravsky Oct 18 '24 at 18:40
  • @AlexRavsky I have answered your observation as an update, since it didn't fit the comments' character limit. I hope it is clear now. – Joako Oct 18 '24 at 19:22
  • Just to be as clear as possible - the equations discussed in the linked paper about the fast diffusion equation are of the form $\partial_t u=\sum_i\partial^2_{x_i}u$, where each of the $x_i$ are defined on all of $\mathbb R$, but $t$ is defined on $\mathbb R_+$. Generally we prescribe an "initial condition" $u(0,x)=(...)$ for the $t$ variable and a "boundary condition" $u(t,x)=(...)~\text{for}~x\in\partial\Omega$. This equation behaves TOTALLY DIFFERENTLY from what you reference later, which is $(\partial_t^2+\partial_x^2)|u|^{1/2}=0.$ I believe the former is what you should be interested in – K.defaoite Oct 19 '24 at 03:38
  • @K.defaoite I appreciate your feedback, but as I don't know what the differences you mention are, I cannot say anything in favor of one or the other alternative. My intention is to build ANY simple PDE with finite-duration solutions so I can start studying it, so if you want to focus on whichever one you feel is more approachable, that is completely fine with me. As you can see, my knowledge of PDEs is not zero but is pretty basic, and all help is welcome, but if I start from the very abstract paper I will certainly be as lost at the end as I am now. – Joako Oct 19 '24 at 03:47
  • @K.defaoite I added what I can do for the case $u_t=\Delta|u|^{\frac12}$, but as with the previous one, there is little I can do since I don't understand the issues with initial/boundary conditions. I hope you can provide a simple example with finite-duration solutions. Thanks. – Joako Oct 19 '24 at 17:57
  • @K.defaoite I believe that in the update I just uploaded I show that the ansatz does somehow solve $\Delta |u|^{\frac12}=0$, but I am stuck on how to give it boundary conditions; I don't even know whether that is possible, or whether the ansatz is just wrong. I hope you can take a look. – Joako Oct 22 '24 at 17:56
  • Hi Joako, I gave you the link to that paper not because it was the best to read but because it's a modern survey that would link to various other papers. In particular one should definitely start on the no-boundary case, and on this the paper says "Since the 70s this model has been thoroughly studied and we can say that, although today the theory is quite complete at least for the Cauchy problem on the whole space, see [130, 18, 20, 30, 71, 21]". So those are probably better to start with – Calvin Khor Oct 22 '24 at 22:11
  • Also on skimming [130] I find there is an explicit solution that becomes extinct in finite time. The ansatz can be seen in the introduction of this paper https://www.sciencedirect.com/science/article/abs/pii/S0022123622002154 – Calvin Khor Oct 22 '24 at 22:13
  • @CalvinKhor Thanks for the comment. I have to read it in detail yet, it is not an easy read for me. – Joako Oct 22 '24 at 22:13
  • @Joako I don't know if it would be an easy read even for the authors :) but try some of the others. Unfortunately most of the basic results are old, so not that easy to find. But there's some, e.g. https://www.jstor.org/stable/54148 – Calvin Khor Oct 22 '24 at 22:15
  • @CalvinKhor Do you know whether, in the math literature, these kinds of finite-duration solutions are named or grouped under some specific term so one can search for them? It has been really hard for me to find related literature by myself, and since you have shared plenty of references, maybe I have been using the wrong terminology (I have recreated many of the equations in order to have simple examples, since I don't find them when I search) – Joako Oct 22 '24 at 22:27

1 Answer


I was invited to write an answer here. The differential equation is invariant under scaling $u \mapsto k u$ so we may as well assume $u(0,0)=1$ and ignore the $\pm1/4$ at the front. Thus the claimed solution is simply $$u(x,t) = \max\big((2 - x-t), 0\big)^2$$ and $$|u(x,t)|^{1/2} = \max\big((2 - x-t), 0\big).$$

The fact that $u$ does not solve $\Delta |u|^{1/2}=0$ can be seen 'without calculation': if it did, then of course $v:=|u|^{1/2}$ solves $\Delta v=0$, i.e. $v$ would be harmonic. But non-zero harmonic functions cannot be identically zero on any non-empty open set. In fact paul garrett in the above exchange points out that $v$ would have to be a polynomial.

The little triangle flap at the bottom right of the math3d graph is the part where the function is zero (i.e. $x+y>2$): 3d graph

Nevertheless we can try to explicitly see the issue from first principles. The issue is that $|u|^{1/2}$ is not continuously differentiable at the transition boundary $2=x+t$, so $\Delta |u|^{1/2}$ is not defined there. Indeed consider $|u|^{1/2}$ restricted along a line orthogonal to $\{(x,t):2=x+t\}$ e.g. $P(s) =(x(s), t(s)) = (s,s)$. Then $$|u|^{1/2}\circ P(s) = |u(s,s)|^{1/2} = 2\max(1-s,0)$$ Thus $(|u|^{1/2}\circ P)'(s) = -2$ when $s<1$ and $0$ when $s>1$. Expressed in terms of the partials this says that $\partial_x |u|^{1/2} + \partial_y |u|^{1/2}$ is not continuous.
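
A small numerical sketch (my addition, using the normalization $u(0,0)=1$ with the $1/4$ dropped as above; the helper names are my own) makes the same point: a centered finite-difference approximation of $\Delta|u|^{1/2}$ stays near zero away from the line $x+t=2$ but blows up like $2/h$ on it, consistent with the measure mentioned below:

```python
import numpy as np

def root_u(x, t):
    """|u(x,t)|^{1/2} = max(2 - x - t, 0) for the normalized ansatz."""
    return np.maximum(2.0 - x - t, 0.0)

def laplacian_root_u(x, t, h):
    """Centered 5-point finite-difference approximation of (d_xx + d_tt)|u|^{1/2}."""
    return (root_u(x + h, t) + root_u(x - h, t)
            + root_u(x, t + h) + root_u(x, t - h)
            - 4.0 * root_u(x, t)) / h**2

for h in (1e-1, 1e-2, 1e-3):
    away = laplacian_root_u(0.0, 0.0, h)   # point with x + t < 2, away from the line
    on   = laplacian_root_u(1.0, 1.0, h)   # point on the line x + t = 2
    print(f"h={h:g}: away from line -> {away: .3e}, on the line -> {on: .3e}")
# Away from x + t = 2 the discrete Laplacian is ~0 (up to rounding); on the line it
# grows like 2/h, i.e. it does not converge to 0 but concentrates on x + t = 2.
```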

If you were to try and weaken the above using some notion of weak derivatives you would discover that $\Delta |u|^{1/2}$ is different from zero by a certain measure supported on $x+y=2$.

PS: functions of the form $u=f(x+t)$ do solve a PDE called the transport equation: $ \binom{1}{-1}\cdot \nabla u=0$, or in terms of partials $ \partial_x u- \partial_y u= 0$.
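
A one-line SymPy sketch (my addition) of this remark, for an arbitrary differentiable profile $f$:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
f = sp.Function('f')

u = f(x + t)                                        # any function of x + t
print(sp.simplify(sp.diff(u, x) - sp.diff(u, t)))   # 0: (1,-1) . grad u = 0, the transport equation
```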

--

responding to update:

The fact that $u=0$ at a point has no bearing on the value of $\Delta u$ there at all. Consider instead $u(x,y) = x^2$. It is zero at $x=0$ for all $y$; is it true that $\Delta u(0,y) = 0$? Absolutely not: $\Delta u(x,y) \equiv 2$ for all $x,y$.

It seems here there is an erroneous order of operations. Let me write loosely $\text{ev}_p: \{\text{functions defined on 'most' of $\mathbb R^2$}\} \to \mathbb R \cup \{\text{undefined},\infty,-\infty\}$ to be the function that evaluates a function at $p$ (possibly in some extended sense by a limit). If $f$ is properly defined at $p$ then $$ \text{ev}_p f := f(p)$$ Your argument sounds to me like you are saying $\text{ev}_p f = 0$ for $p=(x,t)$ satisfying $2=x+t$, so we can differentiate it twice etc. But this is somehow $$ \Delta \text{ev}_p |u|^{1/2} = \Delta ( |u(p)|^{1/2}) = \Delta 0 = 0?$$ when instead it should be $$ \text{ev}_p \Delta |u|^{1/2} = (\Delta |u|^{1/2})(p) $$ which as I said, is not defined. $\Delta |u|^{1/2}$ is not defined because both $\partial_x^2 |u|^{1/2}$ and $\partial_y^2 |u|^{1/2}$ are not defined. They are not defined because $\partial_x^2 |u|^{1/2}$ = $\partial_x (\partial_x|u|^{1/2})$ and $\partial_x|u|^{1/2}$ is not continuous at those $p$. Not continuous at $p$ implies not differentiable at $p$ (equivalently differentiable implies continuous).

PPS a 'piecewise' definition cannot remove the singularity/discontinuity, per se. Sometimes there exists a continuous function that can be defined piecewise that extends the domain e.g. $\sin x/x$ but the fact that the definition is piecewise is as important as the ink colour you write the symbols "$\sin x/x$" in. $\sin x/x$ has a (unique) continuous extension defined on $\mathbb R$; there is no continuous extension of $\partial_x |u|^{1/2}$ that is defined on $\mathbb R^2$. You don't even strictly need a piecewise definition; $f(x) = \lim_{\mathbb R\setminus \{0\} \ni y \to x} \frac{\sin y}y$ is not 'piecewise' and extends sinc to the whole real line.

PPPS: in terms of Heavisides and deltas, for $f(s) = |u|^{1/2}(s,s) = \max(2-2s,0)$ we have $f'(s)= -2\,\theta(1-s)$ and $f''(s) = 2\,\delta(s-1)$. The mathematically rigorous meaning of this is in the sense of distributions.
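
As a sketch (my addition), SymPy's `Heaviside` and `DiracDelta` reproduce this bookkeeping; the exact printed form may differ between SymPy versions, so the comments only describe the structure of the result:

```python
import sympy as sp

s = sp.symbols('s', real=True)

# f(s) = |u|^{1/2}(s, s) with the step function written explicitly: (2 - 2s) * theta(1 - s)
f = (2 - 2*s) * sp.Heaviside(1 - s)

f1 = sp.diff(f, s)
print(f1)
# product rule: a -2*Heaviside(1 - s) term plus a term proportional to
# (2 - 2*s)*DiracDelta(1 - s); the delta is supported at s = 1, where its
# coefficient (2 - 2s) vanishes, so distributionally f'(s) = -2*theta(1 - s).

f2 = sp.diff(-2 * sp.Heaviside(1 - s), s)
print(f2)
# a Dirac delta of weight 2 concentrated at s = 1: the distributional second
# derivative is a point mass on the transition, not the zero function.
```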

PPPPS (sorry): this part is actually wrong:

assumes the studied function is a Holomorphic function, hence an Analytic function; so both are built on the underlying assumption that the studied function can be represented by a non-piecewise power series on the whole domain where it is defined,

Power series are only guaranteed to converge inside the radius of convergence. To get the maximal domain you can define the function on, you might need to repeatedly recenter your power series expansions and extend piecewise. For example $\sum_{k=0}^\infty (-1)^k z^{2k}$ converges (absolutely) only when $|z|<1$, and diverges when $|z|>1$. But it can be analytically continued to the whole complex plane minus two points ($z=\pm i$). Indeed that function is $\frac{1}{1+z^2}$.
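
A quick numerical illustration of this (my addition): the partial sums of $\sum_k(-1)^k z^{2k}$ match $1/(1+z^2)$ inside the radius of convergence but are huge outside it, even though the analytic continuation stays finite there:

```python
def partial_sum(z, N=60):
    """Partial sum of sum_{k=0}^{N-1} (-1)^k z^{2k}."""
    return sum((-1) ** k * z ** (2 * k) for k in range(N))

def continuation(z):
    """The analytic continuation 1/(1+z^2)."""
    return 1.0 / (1.0 + z * z)

for z in (0.5, 2.0):
    print(f"z={z}: partial sum = {partial_sum(z):.6g}, 1/(1+z^2) = {continuation(z):.6g}")
# At z = 0.5 the partial sum agrees with 1/(1+z^2) = 0.8; at z = 2 it is huge in
# magnitude, reflecting divergence of the series outside its radius of convergence.
```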

tl;dr: the issue is that your function $|u|^{1/2}$ is harmonic as a function on the disjoint sets $\{2 < x + t\}$ and $\{2 > x + t \}$ (in the precise sense of Wikipedia) but is not harmonic on any open set $U$ that intersects the line $\{ 2= x+t\}$.

Calvin Khor
  • Thanks for the answer. I have added my comments at the end of the question (too long for this section). I hope you can elaborate on what I mentioned there. – Joako Nov 05 '24 at 15:01
  • Your computation looks to be OK, but your conclusion is wrong. The delta function is not continuous. In fact a mathematician would say it is not even a function. This is why I mentioned a measure supported on the transition curve (the support of a measure is the set where it is not zero, roughly speaking) @Joako – Calvin Khor Nov 05 '24 at 16:32
  • But there is no delta function, in my opinion: the function value is zero where the supposedly undefined critical line lies. That is why I ask; I don't know if I have something conceptually wrong, but those delta functions are fictitious in my opinion, eliminated by the piecewise definition: on the critical line $\Delta |u|^{\frac12}$ is zero from the right, from the left, and at the line itself, so those delta functions never exist in reality. Another question about the plot: for your analysis it is equivalent to consider $u(x,t)=\max((2-x-t),0)^2$ or $u(x,t)=(2-x-t)\max((2-x-t),0)$, right? – Joako Nov 05 '24 at 16:40
  • @Joako I tried to respond in an edit – Calvin Khor Nov 05 '24 at 17:25
  • So, if I understood your explanation, I cannot fix the issue even if I define $$u(x,t):= \begin{cases} \frac14 (T-(x+t))^2, & \text{if } T-(x+t)>0 \\ 0, & \text{otherwise}\end{cases}$$ but this is what is done to properly define smooth bump functions in piecewise sections; I have even used as an example this answer where a bump function solves the wave equation, and this kind of clashes with the interpretation you are using for a harmonic solution. Surely I am mistaken (...) – Joako Nov 05 '24 at 22:37
  • (...) but I want to understand why sometimes I can work piecewise and sometimes not: could you share the source of the theorem behind your statement "non-zero harmonic functions cannot be identically zero on any non-empty open set"? I would like to review its assumptions. Thanks in advance. – Joako Nov 05 '24 at 22:40
  • @Joako re bump functions, it has nothing to do with the definition being piecewise, don't focus on that. It only has to do with the definitions of differentiability and continuity. Bump functions are $C^\infty$. $f(x) = \begin{cases} 1 & x>0\\ 0 & x\le 0\end{cases}$ is discontinuous. $f(x) = \begin{cases} x & x>0\\ 0 & x\le 0\end{cases}$ is not differentiable. $f(x) = \begin{cases} x^2 & x>0\\ 0 & x\le 0\end{cases}$ is not $C^2$. And the bumps are smooth. Since they are smooth you can stick them into whatever PDE you want. What is the crash? PS the theorem is the identity theorem – Calvin Khor Nov 05 '24 at 22:53
  • Hi, sorry for my late reply, I was without internet access because of a job at sea. Even if I have some mistakes in the analysis, I still believe the piecewise-defined function is a solution, since a piecewise-defined function is not tied to the Identity theorem, as smooth bump functions show: they are exactly zero outside the bump, so if they were tied to the Identity theorem they would have to be zero on the full domain, which is not the case since they are defined piecewise; in the same sense, the solution I show, defined for $T-(x+t)>0$ and zero otherwise, does solve the equation – Joako Nov 22 '24 at 21:39
  • @Joako 1. Being 'defined piecewise' is not a mathematical property; you cannot deduce any mathematics at all from being piecewise defined. So please don't focus on that! 2. It seems we do not agree then on the word 'solution'. In the same way that, if a function is only $C^2$ and not $C^3$, you don't bother talking about its Taylor series, how can you say that $g=|u|^{1/2}$, which is not twice differentiable, solves $\Delta g=0$? 3. I don't understand the comparison with bump functions. You said $g$ is harmonic. If we assume this is true, then we can apply the identity theorem, contradiction. – Calvin Khor Nov 23 '24 at 08:22
  • I have been thinking a lot and I think I get it now, but it is not related to the Identity theorem: my answer fails just because $v=|u|^{\frac12}$ is not twice continuously differentiable. It is like taking $\frac{\partial^2}{\partial x^2}\left(|x|\,\theta(x)\right)$: I get a jump discontinuity before the last derivative, independently of how I define $\theta(0)\equiv 0$. I am trying to understand how this deviates from examples like $y'=-\text{sgn}(y)\sqrt{|y|}$, which do admit piecewise-defined solutions that reach a finite extinction time. – Joako Nov 29 '24 at 15:53
  • Do you have any ideas on how I can modify $\Delta |u|^{\frac12}=0$ so that it admits the solution I presented above? – Joako Nov 29 '24 at 15:55
  • @Joako not in any interesting way, no. But glad to see you understood – Calvin Khor Nov 29 '24 at 19:24