
Let two smooth functions $v_1$ and $v_2$ both satisfy the system

$$\partial_t{v}-\Delta v=f \quad \text{in} \quad U \times (0,\infty), $$ $$v = g \quad \text{on} \quad \partial U \times (0,\infty),$$

for some fixed smooth $f: \bar{U}\times (0,\infty) \rightarrow \mathbb{R}$ and $g: \partial U \times (0,\infty) \rightarrow \mathbb{R},$ where $U \subset \mathbb{R}^n$ is open and bounded. Show that $$\sup_{x \in U} |v_1(t, x) - v_2(t, x)| \rightarrow 0 \quad \text{as } t \rightarrow \infty.$$

This is my work:

Let $u = v_1 - v_2$; it suffices to prove that $$\sup_{x \in U} |u(x,t)| \rightarrow 0 \quad \text{as } t \rightarrow \infty. \tag{1}$$

$u$ obeys the system $$\partial_t{u}-\Delta u=0 \quad \text{in} \quad U \times (0,\infty), $$ $$u = 0 \quad \text{on} \quad \partial U \times (0,\infty).$$ Multiply both sides by $u\,|u|^{2(m-1)}$ and note that $\partial_t(|u|^{2m})=2m\,\partial_t u\cdot u\,|u|^{2(m-1)},$ so that $$\dfrac{1}{2m}\partial_t\int_{U}|u|^{2m}\,dx=\int_{U}\Delta u\cdot u\,|u|^{2(m-1)}\,dx.$$ Integrating the right-hand side by parts (the boundary term vanishes since $u=0$ on $\partial U$), we get $$\dfrac{1}{2m}\partial_t\int_{U}|u|^{2m}\,dx=-(2m-1)\int_{U}|\nabla u|^2|u|^{2(m-1)}\,dx.$$ By the generalized Poincaré inequality, we obtain $$\partial_t\int_{U}|u|^{2m}\,dx \leq -2C\left(2-\dfrac{1}{m}\right)\int_{U}|u|^{2m}\,dx,$$ i.e. $$\partial_t\Vert u(t,\cdot)\Vert^{2m}_{L^{2m}(U)} \leq -2C\left(2-\dfrac{1}{m}\right)\Vert u(t,\cdot)\Vert^{2m}_{L^{2m}(U)}$$ $$\Rightarrow 2m \Vert u(t,\cdot)\Vert^{2m-1}_{L^{2m}(U)}\,\partial_t\Vert u(t,\cdot)\Vert_{L^{2m}(U)} \leq -2C\left(2-\dfrac{1}{m}\right)\Vert u(t,\cdot)\Vert^{2m}_{L^{2m}(U)}$$

$\Rightarrow \partial_t\Vert u(t,\cdot)\Vert_{L^{2m}(U)} \leq -\dfrac{C}{m}\left(2-\dfrac{1}{m}\right)\Vert u(t,\cdot)\Vert_{L^{2m}(U)}$

Applying Gronwall's inequality, we get

$\Vert u(t,\cdot)\Vert_{L^{2m}(U)} \leq e^{-\frac{C}{m}\left(2-\frac{1}{m}\right)t}\Vert u(0,\cdot)\Vert_{L^{2m}(U)}.$
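For completeness, here is where the constant $2C\left(2-\frac{1}{m}\right)$ comes from: since $\nabla\big(u|u|^{2(m-1)}\big)=(2m-1)|u|^{2(m-1)}\nabla u$ and $\big|\nabla(|u|^m)\big|^2=m^2|u|^{2(m-1)}|\nabla u|^2$, Poincaré's inequality applied to $|u|^m$ gives

$$\int_U|\nabla u|^2|u|^{2(m-1)}\,dx=\frac{1}{m^2}\int_U\big|\nabla(|u|^m)\big|^2\,dx\geq\frac{C}{m^2}\int_U|u|^{2m}\,dx,$$

so that

$$\partial_t\int_U|u|^{2m}\,dx\leq-2m(2m-1)\frac{C}{m^2}\int_U|u|^{2m}\,dx=-2C\left(2-\frac{1}{m}\right)\int_U|u|^{2m}\,dx.$$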

I am planning to let $m \rightarrow \infty$ to obtain a bound on $\Vert u(t,\cdot)\Vert_{L^{\infty}(U)}$ and then let $t \rightarrow \infty$, so that $e^{-\frac{C}{m}\left(2-\frac{1}{m}\right)t} \rightarrow 0$ yields (1). But the problem is that as $m \rightarrow \infty,$ $e^{-\frac{C}{m}\left(2-\frac{1}{m}\right)t} \rightarrow 1 \neq 0.$

Am I on the right track or did I make some wrong steps?

Could you provide any ideas to improve my work?

Arctic Char
dtttruc
  • More details about $U$ please: Bounded? In $\mathbb{R}$? In $\mathbb{R}^n$? – MOMO Mar 25 '24 at 21:39
  • $U $ is open, bounded and $U \subset \mathbb{R}^n$. – dtttruc Mar 25 '24 at 21:44
  • You do not want to expand the solution as an eigenfunction series? It seems the most direct way to prove this. – MOMO Mar 25 '24 at 22:06
  • In my class, we still haven't studied about eigenfunction series. We only start studying Heat equations via the Energy method. – dtttruc Mar 25 '24 at 22:44
  • Where is the initial condition? – Matthew Cassell Mar 26 '24 at 08:03
  • @MatthewCassell - This is true for general initial conditions and therefore it is not mentioned. – MOMO Mar 26 '24 at 08:42
  • @dtttruc I apologize. It seems more tricky than I first thought. Can you use anything else? Maybe maximum principle? Arzelà–Ascoli theorem might help since you have $L_p$ convergence and you want uniform convergence. – MOMO Mar 26 '24 at 20:33
  • @MOMO I think if we apply the maximum principle, we can only claim $\sup_{x \in U}|u(t,x)|$ be bounded by $\sup |u(0,x)|,$ but we have no clue that $\sup |u(0,x)| = 0?$ and why do you want uniform convergence? Could you tell me more? Thank you very much for your response. – dtttruc Mar 26 '24 at 23:30
  • @dtttruc It is not what I want. You asked to show $\sup_x|u|\rightarrow 0$ as $t\rightarrow\infty$, which is the definition of uniform convergence. In your accepted answer, it is shown that $E=||u||^2\rightarrow 0$, which is convergence in the $L^2$ norm. But this is something you already managed to prove in your post... (for $m=1$) – MOMO Mar 27 '24 at 05:09
  • @MOMO Oh, now I get your idea. If we have convergence in the $L^2$ norm, it is not sufficient to get convergence in the $L^{\infty}$ norm, which I tried to prove in my original work, is it? Moreover, if we have $u(x,t) \rightarrow 0,$ it is also not enough to conclude $\lim_{t \rightarrow \infty} \sup_{x} |u(x,t)|=0.$ – dtttruc Mar 27 '24 at 06:23
  • @dtttruc Right. $L^2$ convergence does not imply $L^\infty$ convergence. Even $L^p$ convergence for all $p>0$ (which is close to what you showed) does not imply $L^\infty$ convergence. When you claim $u(x,t)→0$ you need to specify for which $x$ you mean. Generally this is not true for any (or even almost any) $x$. What you can conclude from $L^2$ convergence is the existence of a subsequence $u(x,t_n)$ that converges to $0$ for almost any $x$. – MOMO Mar 27 '24 at 06:34
  • @dtttruc By using the maximum principle you might get the conditions for Arzelà–Ascoli theorem in order to show uniform convergence. – MOMO Mar 27 '24 at 06:37
  • @MOMO But, $u$ being the solution of the heat equation gives it certain smoothness properties, so I don't think it is possible that $u(t_n,x)\to 0$ as $n\to\infty$ without $u(t,x)\to 0$ in general, am I wrong? – K.defaoite Mar 27 '24 at 18:12
  • Maybe, but even so $u→0$ does not guarantee $\sup|u|→0$. Anyway, you should be careful when assuming anything about the smoothness of $u$, because as far as I know one can show this by finding the solution as an eigenfunction series, but you did not want to rely on this. – MOMO Mar 27 '24 at 18:40
  • @dtttruc Sorry for taking so long on this. It ended up being a bit more delicate than I expected. – K.defaoite Mar 28 '24 at 20:39
  • Please have a look at my edited answer. – K.defaoite Mar 28 '24 at 20:39

4 Answers

1

Yes, as mentioned in the comments, the energy integral is useful here. Let $u$ be defined as in the question.

Define

$$E(t)=\int_{U}{u(t,x)}^2~\mathrm d^m x$$ Note $E$ is bounded below by $0$.

Now, observe $$\dot E(t)=\int_U 2 ~u(t,x)~\partial_tu(t,x)~\mathrm d^m x \\ =2\int_U (u ~\Delta u)(t,x)\mathrm d^mx \\ =2\int_U \big(u ~\nabla\cdot( \nabla u)\big)(t,x)\mathrm d^mx$$

Recall the generalized integration by parts: $$\int_U \phi~\nabla\cdot v~\mathrm d\mu^m=\int_{\partial U}n\cdot \phi v~\mathrm d\mu^{m-1}-\int_{U}v\cdot \nabla\phi~\mathrm d\mu^m$$

Taking in our case $\phi=u$ and $v=\nabla u$, we get $$\dot E(t)=2\int_U \big(u ~\nabla\cdot( \nabla u)\big)(t,x)\mathrm d^mx \\ =2\int_{\partial U} \big( n\cdot (u\nabla u)\big)(t,x)\mathrm d^{m-1} x-2\int_U |\nabla u|^2(t,x)\mathrm d^m x$$ The first integral is zero due to the assumptions on the boundary data of $u$, and therefore we obtain $$\dot E(t)=-2\int_U|\nabla u|^2(t,x)\mathrm d^m x$$

Poincaré's inequality implies

$$\dot E(t)=-2\int_U |\nabla u|^2\mathrm d^m x=-2{\left\Vert\nabla u(t,\cdot)\right\Vert_2}^2\leq -2C {\Vert u(t,\cdot)\Vert_2}^2$$ Since ${\Vert u(t,\cdot)\Vert_2}^2=E$, we have $$\dot E\leq -2C E$$ Hence by Gronwall's inequality we get $E(t)\leq \mathrm e^{-2Ct}E(0)$, which implies $E\to 0$ and hence ${\Vert u(t,\cdot)\Vert_2}\to 0$ as $t\to\infty$.


So, we have shown that $\Vert u(t,\cdot)\Vert_2\to 0$, but this is not enough to show that $\Vert u(t,\cdot)\Vert_\infty\to 0$, as desired in the question. However, this is rectified using the strong maximum principle.

Define the parabolic cylinder and its boundary $$U(T):=(0,T]\times U \\ \Gamma(T)=\overline{U(T)}\setminus U(T)=(\{0\}\times \bar U)\cup ([0,T]\times\partial U)$$

THEOREM: Strong maximum principle for the heat equation. Assume $u$ is a classical solution of the heat equation in $U(T)$. Then, (i) $$\max_{\overline{U(T)}}u=\max_{\Gamma(T)}u$$ Furthermore (ii), if $U$ is connected and there exists a point $(t_0,x_0)\in U(T)$ such that $u(t_0,x_0)=\max_{\overline{U(T)}}u$, then $u$ is constant in $\overline{U(t_0)}$.

(For proof: See page 55 of Evans PDE book.)

The (ii) statement means that, if ever $u$ assumes its maximum inside $U$ at any positive time, then $u$ must be constant.

In our case, we know that $u=0$ on $\partial U$. That means that, aside from the trivial case $u\equiv 0$, $$\operatorname{argmax}_{\overline{U(T)}}|u|\in \{0\}\times U,$$ i.e., the maximum must occur in the open domain $U$ at $t=0$. Statement (ii) of the above theorem then tells us that, aside from the trivial solution $u\equiv 0$, for all times $t>0$, $$\sup_U |u(t,\cdot)| < \sup_U |u(0,\cdot)|,$$ because otherwise $u$ would be constant in the domain $U(t)$, and the only constant solution satisfying our boundary conditions is the zero solution. So we have shown that the function $$M(t)=\sup_{U}|u(t,\cdot)|$$ is a decreasing function bounded below by zero, and hence $M\to M_\infty\in\mathbb R_{\geq 0}$.


To show $M_\infty=0$ is a little bit more difficult. But essentially, the idea is, the only way for $\Vert u(t,\cdot)\Vert _2\to 0$ (as already shown) while maintaining $\Vert u(t,\cdot)\Vert_\infty \to M_\infty >0$ would be if the family of functions $u(t,\cdot)$ "clustered" around some finite collection of points $\{x_0,x_1,...x_{N-1}\}$, i.e $u(t,x)\to 0$ as $t\to\infty$ for all $x\in U\setminus \{x_0,...,x_{N-1}\}$ but with $u(t,x)\to L\leq M_\infty$ for all $x\in \{x_0,...,x_{N-1}\}$.

However, such "clustering" is impossible, as it would violate the smoothing properties of the heat equation. To make the proof easier, assume that only one "cluster point" $x_0$ exists, satisfying $|u(t,x_0)|\to M_\infty>0$. Since we already know that ($\star$) $u(t,x)\to 0$ as $t\to\infty$ for all $x\neq x_0$, this means that, given $t$ large enough and $\epsilon$ small enough, we can make the difference quotient

$(\star) ~:~ \text{proof needed!}$ $$\left|\frac{u(t,x_0+\epsilon\upsilon)-u(t,x_0)}{\epsilon}\right| \qquad (\upsilon \in \mathbb R^m ,~ |\upsilon|=1)$$ arbitrarily large. By the mean value theorem, $\exists x^*$ on the line segment connecting $x_0$ and $x_0+\epsilon\upsilon$ such that $$\upsilon \cdot \nabla u(t,x^*)=\frac{u(t,x_0+\epsilon\upsilon)-u(t,x_0)}{\epsilon},$$ which implies $$|\nabla u(t,x^*)|\geq|\upsilon \cdot \nabla u(t,x^*)|=\left|\frac{u(t,x_0+\epsilon\upsilon)-u(t,x_0)}{\epsilon}\right|.$$ Since we can make the RHS arbitrarily large, we can make $|\nabla u(t,x^*)|$ arbitrarily large, as long as we choose a point $x^*$ close enough to $x_0$ and a time $t$ large enough. However, we know from Theorem 9 on page 61 of Evans' PDE book that $\exists c\in\mathbb R_+$ such that $$\max_{C(t,x;r/2)}|\nabla u|\leq \frac{c}{r^{m+3}}\Vert u \Vert_{L^1\big(C(t,x;r)\big)},$$ where $C(t,x;r)$ is the cylinder $$C(t,x;r)=\{(s,y)\in\mathbb R_{\geq 0}\times\mathbb R^m : |x-y|\leq r ~\text{and}~t-r^2\leq s\leq t\}.$$

But, since $U$ is bounded, $L^2$ convergence implies $L^1$ convergence, so we know that $\Vert u \Vert_{L^1\big(C(t,x;r)\big)}\to 0$ as $t\to\infty$, and thus our ability to make $|\nabla u|$ arbitrarily large would contradict this gradient bound. Therefore, it is not possible for the family of functions $u(t,\cdot)$ to "cluster" around a point $x_0$, and therefore the only possible value for $M_\infty$ is $0$; in other words,

$$\boxed{\Vert u(t,\cdot)\Vert_\infty\to 0~~\text{as}~t\to\infty}$$

As desired. $\blacksquare$.
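(Not part of the proof, but an illustrative numerical sanity check: the decay of $\sup_x|u|$ can be watched directly for a simple 1D finite-difference discretization with zero Dirichlet data. The grid sizes, time step, and initial condition below are arbitrary choices.)

```python
# Solve u_t = u_xx on (0,1) with u = 0 at the boundary by an explicit
# finite-difference scheme, and record sup_x |u| at every step.
import numpy as np

def heat_sup_norms(u0, dx, dt, steps):
    """Advance u^{k+1} = u^k + (dt/dx^2) * D2 u^k with zero Dirichlet
    boundary values, returning the list of sup-norms over time."""
    u = u0.copy()
    r = dt / dx**2            # stability (and discrete max principle) needs r <= 1/2
    sups = [np.abs(u).max()]
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
        u[0] = u[-1] = 0.0    # enforce the boundary condition
        sups.append(np.abs(u).max())
    return sups

n = 101
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
dt = 0.4 * dx**2              # r = 0.4 <= 1/2
u0 = np.sin(3 * np.pi * x) + 0.5 * np.sin(7 * np.pi * x)  # plays the role of u = v1 - v2 at t = 0
sups = heat_sup_norms(u0, dx, dt, steps=20000)

print(sups[0], sups[-1])      # sup|u| initially and after 20000 steps
```

Since $r\leq 1/2$ each interior update is a convex combination of neighboring values, so the discrete sup-norm is non-increasing (the discrete analogue of the maximum principle), and it decays to numerical zero, consistent with the boxed conclusion.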


Addendum: Details.

The key point is to show that $\Vert u \Vert_{L^1\big(C(t,x;r)\big)}\to 0$ as $t\to\infty$. We already know that $E(t)={\Vert u(t,\cdot) \Vert_2}^2\leq \mathrm e^{-2Ct}E(0)$ from Gronwall's inequality. This means that, for any $0<T<t$, we have $$\sqrt{\int_{t-T}^{t}E(t')\,\mathrm dt'}=\Vert u \Vert_{L^2\big(U(t)\setminus U(t-T)\big)}\to 0 ~~\text{as}~t\to\infty.$$ Moreover, $C(t,x;\sqrt{T})\subset \big(U(t)\setminus U(t-T)\big)$, which implies $\Vert u \Vert_{L^2\big(C(t,x;\sqrt{T})\big)}\to 0$ as $t\to\infty$ for all positive $T$. And since $U$ is bounded, $L^2$ convergence implies $L^1$ convergence, so we get $\Vert u \Vert_{L^1\big(C(t,x;r)\big)}\to 0 $ as $t\to\infty$ for any $x\in U$ and suitable $r>0$.
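Quantitatively, the Gronwall bound even gives an exponential rate for this $L^2$ tail:

$$\int_{t-T}^{t}E(t')\,\mathrm dt'\leq E(0)\int_{t-T}^{t}\mathrm e^{-2Ct'}\,\mathrm dt'\leq\frac{E(0)}{2C}\,\mathrm e^{-2C(t-T)}\longrightarrow 0\quad\text{as }t\to\infty.$$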

K.defaoite
  • By showing $E\rightarrow 0$ you showed $L_2$ convergence, but uniform convergence is needed here. – MOMO Mar 26 '24 at 20:31
  • Thank you for your answer. I can prove $E \rightarrow 0,$ but how does it help to prove $u(x,t) \rightarrow 0?$ And sorry, what do you mean by $E_{\infty},$ is it $lim_{t \rightarrow \infty} E(t)$ or does it have any special properties? And why $\partial_t u \rightarrow 0?$ – dtttruc Mar 26 '24 at 23:21
  • @MOMO I don't see why uniform convergence is required here. – K.defaoite Mar 27 '24 at 03:36
  • @dtttruc $E\to 0\implies u\to 0$ since $u^2 \geq 0$. Yes, $E_\infty = \lim_{t\to\infty }E(t)$ . We know $\partial_t u\to 0$ because, $E\to E_\infty$ implies that $\dot E\to 0$, and $\dot{E}=2\int_U u\partial_t u \mathrm d\mu^n $ which means either $u\to 0$ (which is equivalent) or $\dot u\to 0$. – K.defaoite Mar 27 '24 at 03:41
  • Thank you very much, I got it ^^. For the second part $E \rightarrow E_{\infty}$ implying $\dot{E} \rightarrow 0,$ at first I thought we must have conditions $E$ converges uniformly to $E_{\infty}$ to conclude $\dot{E} \rightarrow 0$ but I checked these conditions work for sequence not for the real function, is it? I also will write an alternative proof using $E \rightarrow 0.$ – dtttruc Mar 27 '24 at 04:31
  • @K.defaoite see my comment on the original post. – MOMO Mar 27 '24 at 05:13
  • @dtttruc It should be reasonably easy to show that $E$ is a smooth function of $t$, which forces that $E\to E_\infty$ implies $\dot E\to 0$. – K.defaoite Mar 27 '24 at 18:14
  • @K.defaoite But I am not sure if we have the convergence of $L^2$ to imply the convergence pointwise of $u(x,t).$ Could you prove it? Furtheremore, as MOMO said, it is not sufficient from $u (x,t) \rightarrow 0$ to conclude $\sup_{x}|u(x,t)| \rightarrow 0.$ – dtttruc Mar 27 '24 at 21:47
  • @dtttruc I will come back to this tomorrow. – K.defaoite Mar 28 '24 at 03:57
  • @K.defaoite I have two questions, could you explain them to me? Firstly, can you explain clearly why, if $I(t,\varepsilon) \rightarrow 0$ as $t \rightarrow \infty,$ then the derivative estimate is arbitrarily large? Secondly, for the $L^2$ convergence implies $L^1$ convergence, we are considering the convergence of $u$ in $L^2(U)$ w.r.t $x$, but why can we conclude the convergence of $u$ in $L^1$ over another set $C(t,x,r)$ w.r.t $(x,t)$? – dtttruc Mar 29 '24 at 07:05
  • @dtttruc Perhaps the usage of an integral was not the best way to go about it. The ability to make the derivative estimate arbitrarily large follows from the fact that $u(t,x)\to 0$ as $t\to\infty$ for all $x\neq x_0$. – K.defaoite Mar 29 '24 at 14:55
  • As for the other point, we already know that $\Vert u \Vert_{L^2(C(t,x;r))}\to 0$ as $t\to\infty$. We know this because (1) we can simply integrate $E$ over some interval $(t,t+T)$ and let $t\to\infty$ to conclude that $\Vert u\Vert_{L^2\big(U(T+t)\setminus U(t)\big)}\to 0$ as $t\to\infty$. Then (2) we observe that $C(t+T,x;r)\subset U(t+T)\setminus U(t)$ for a suitable choice of $r$, and use the fact that $L^2$ convergence on an open domain $\Omega$ implies $L^2$ convergence on all sub-domains $\Omega'\subseteq \Omega$. – K.defaoite Mar 29 '24 at 14:59
  • @K.defaoite Sorry but can you explain why we have $u(x,t) \rightarrow 0$ as $t \rightarrow \infty$ for all $x \neq x_0$? For the second part, why, when $E(t) \rightarrow 0,$ can we conclude that $\int_{t}^{t+T}E(t')\,dt' \rightarrow 0$? Can you prove it? I have a counter-example: if we consider $u(x,t)=\sqrt{\dfrac{|\sin t|}{|t|}}\,f(x),$ then $E(t)= \int_{U}u^2(x,t)\,dx \rightarrow 0$ but $\int_{t}^{t+T}\int_U u^2(x,t)\,dx\,dt$ does not converge to $0.$ – dtttruc Mar 29 '24 at 18:34
  • @dtttruc Sorry, this follows from the fact that $E(t)\leq \mathrm e^{-2Ct}E(0)$ as given from Gronwall's inequality, not from the fact that $E(t)\to 0$. I'll edit. – K.defaoite Mar 30 '24 at 03:08
  • @dtttruc Note that the fact that $U$ is bounded is important here - otherwise the assertion that $L^2$ convergence implies $L^1$ convergence would not hold. – K.defaoite Mar 30 '24 at 03:29
  • Thank you very much for the clear details, it makes sense now for the second part. But sorry for the first part that $u(x,t) \rightarrow 0$ as $t \rightarrow \infty$ for all $x \neq x_0$ , you haven't explained to me yet. This is because I can find an example that L^2 convergence does not imply pointwise convergence. Furthermore, if we only assume there exist $x_0$ such that $u(t,x_0) \rightarrow M_{\infty} >0$, it is not sufficient to get $u(x,t) \rightarrow 0$ as $t \rightarrow \infty$ for all $x \neq x_0$. – dtttruc Mar 30 '24 at 04:55
  • @dtttruc I believe that $L^2$ convergence of a sequence of continuous functions does indeed imply pointwise convergence a.e , I will try to find a reference – K.defaoite Mar 31 '24 at 01:24
  • @K.defaoite This is what I found for the sequence of continuous functions that converge in $L^2$ but not converge pointwise. https://math.stackexchange.com/questions/372720/do-l2-convergence-and-continuity-imply-pointwise-convergence – dtttruc Mar 31 '24 at 01:39
  • @dtttruc Yes, it is possible for a sequence of continuous functions which converge in $L^2$ to not converge pointwise. However, for the purposes of this problem, only pointwise convergence almost everywhere is required, which I believe is guaranteed. – K.defaoite Mar 31 '24 at 15:19
  • @dtttruc Proven! Have a look at my other answer. – K.defaoite Mar 31 '24 at 21:15
  • @K.defaoite Sorry. Bothering you again. Now when I read it, I am confused why is $M(t)$ decreasing? Since it is a really important condition to have the limit of $M(t)$ finite. Could you explain it to me? – dtttruc Apr 18 '24 at 21:52
  • @dtttruc It follows from the strong maximum principle. Reread that section of my answer carefully. Statement (i) implies that $\dot M\leq 0$ - this follows from the fact that $u=0$ on $\partial\Omega$.Then, statement (ii) further improves this to get the strict inequality $\dot M < 0$, since if ever $M(t)=M(0)$ for $t>0$ this would imply that $u$ is constant, i.e $u\equiv 0$ the trivial case. – K.defaoite Apr 18 '24 at 22:47
  • For statement ii, we need $U$ is connected, but in the original problem, we do not have this condition. But I think we only need $\dot{M} \leq 0.$ Sorry but I don't see the relation of $\max_{U(T)}u(x,t) = \max_{\Gamma(T)}u(x,t)$ and $\dot{M}(t)$ to get $\dot{M} \leq 0.$ – dtttruc Apr 18 '24 at 23:03
1

To fix the gap $(\star)$ in my previous answer.

In my other answer, I claimed that $\Vert u(t,\cdot)\Vert_2\to 0$ as $t\to\infty$ implied that $u(t,x)\to 0$ for almost all $x\in U$.

Note that the restriction to "almost all" is necessary, because $L^p$ convergence does not imply everywhere pointwise convergence, not even for a sequence of continuous functions. However, we can fix this, with the help of the following theorem:

THEOREM: Rapid convergence in measure implies pointwise convergence almost everywhere.

Let $(X,\Sigma,\mu)$ be a measure space and let $(f_n)_{n\in\mathbb N}$ be a sequence of measurable functions which converges in measure to the measurable function $f$. Then, as shown by John Dawkins HERE as a consequence of the Borel-Cantelli lemma, if $$\sum_{n=1}^\infty \mu\big(\{x\in X:|f_n(x)-f(x)|>\epsilon\}\big)<\infty \\ \forall \epsilon>0$$ Then, $f_n\overset{\text{p.w}}{\longrightarrow}f$ almost everywhere as $n\to\infty$.


In our case, we can consider the measure space $(U,\mathscr B(U),\mu^m)$ where $\mathscr B(U)$ are the Borel subsets of $U$ and $\mu^m$ is the $m$ dimensional standard Euclidean measure. Since $L^p$ convergence implies convergence in measure, we know that $u_k\overset{\mu^m}{\longrightarrow} 0$ as $k\to\infty$ where $u_k=u(t_k,\cdot)$ is the function $u$ evaluated at time $t=t_k$, and $(t_k)_{k\in\mathbb N}$ is any strictly increasing sequence of positive real numbers.

First, observe that $$\text{IF}~~{\Vert u(t,\cdot )\Vert^2_{L^2(U)}}=\int_U {u(t,x)}^2\mathrm d^m x<\delta \\ \text{THEN}~~\mu^m\big(x\in U:{|u(t,x)|}^2>\epsilon\big)< \frac{\delta}{\epsilon} \\ \text{equivalently,}~~~\mu^m\big(x\in U:|u(t,x)|>\epsilon\big)< \frac{\delta}{\epsilon^2}$$

I.e, if ${\Vert u(t,\cdot )\Vert^2_{L^2(U)}}$ is less than $\delta$, then ${u(t,\cdot)}^2$ can only take values of $\epsilon$ or greater on a set $V\subset U$ with measure $\mu^m(V)$ no larger than $\delta/\epsilon$.
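The first implication is just Chebyshev's inequality applied to ${u(t,\cdot)}^2$:

$$\epsilon\,\mu^m\big(x\in U:{u(t,x)}^2>\epsilon\big)\leq\int_{\{u(t,\cdot)^2>\epsilon\}}{u(t,x)}^2\,\mathrm d^mx\leq{\Vert u(t,\cdot)\Vert^2_{L^2(U)}}<\delta,$$

and replacing $\epsilon$ by $\epsilon^2$ (using ${u(t,x)}^2>\epsilon^2\iff |u(t,x)|>\epsilon$) gives the second form.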

Now, recall our Gronwall inequality: $$\int_U {u(t,x)}^2\mathrm d^m x={\Vert u(t,\cdot )\Vert^2_{L^2(U)}}=E(t)\leq \mathrm e^{-2c_1 t}E(0)$$ Putting the above two together, we get

$${\Vert u(t,\cdot )\Vert^2_{L^2(U)}}<\mathrm e^{-2c_1 t}E(0)\implies \mu^m\big(x\in U:|u(t,x)|>\epsilon\big)< \frac{\mathrm e^{-2c_1 t}E(0)}{\epsilon^2}$$ And therefore, letting $t=t_k$,

$$\sum_{k=1}^\infty \mu^m(x\in U:|u_k(x)|>\epsilon) < \frac{E(0)}{\epsilon^2}\sum_{k=1}^\infty \mathrm e^{-2c_1t_k}\leq \frac{E(0)}{\epsilon^2}\sum_{k=1}^\infty \mathrm e^{-2c_1k}<\infty,$$ where the second inequality assumes, without loss of generality, that $t_k\geq k$.

Thus $u_k\overset{\text{p.w}}{\longrightarrow} 0$ almost everywhere as $k\to\infty$. Since this holds for any (sufficiently fast-growing) sequence $(t_k)_{k\in\mathbb N}$, we conclude that $u(t,x)\overset{\text{p.w}}{\longrightarrow} 0$ as $t\to \infty$ for almost all $x\in U$. My other answer uses this as a starting point to show that this "almost all" is in fact all.

K.defaoite
  • Thank you very much for this proof, I got it. But I still have one last question: how can we claim that if $u(x,t) \rightarrow 0$ as $t \rightarrow \infty$ for $x \neq x_0$, then for $t$ large enough and $\varepsilon$ small enough, the derivative estimate $$\Big|\dfrac{u(t,x_0+\varepsilon v)-u(t,x_0)}{\varepsilon}\Big|$$ is arbitrarily large? I wrote it down but I cannot prove this claim. – dtttruc Apr 02 '24 at 20:10
  • Moreover, I found a counter example: $u(x,t)=e^{-t(x-x_0)^2}+\dfrac{1}{1+t}$, we have $u(x,t) \rightarrow 0$ as $t \rightarrow \infty$ and $u(x_0,t) \rightarrow 1 \neq 0$ as $t \rightarrow \infty.$ However, $\lim_{t \rightarrow \infty} \partial_{x}u(x_0,t)=0,$ which contradicts to the above statement. – dtttruc Apr 02 '24 at 20:10
  • @dtttruc The derivative doesnt become arbitrarily large precisely at the point $x_0$, it becomes arbitrarily large at some point on the line connecting $x_0$ to $x_0+\epsilon\upsilon$. This is a consequence of the mean value theorem. – K.defaoite Apr 02 '24 at 21:50
  • @dtttruc So, for example, set $x_0=0$ in your example and see what happens to $|\partial_xu(t,0.1)|$ as $t\to\infty$. You will see that it increases without bound. Desmos. This violates the smoothing properties of the heat equation. – K.defaoite Apr 02 '24 at 21:54
  • Hmm, okay so can you show me some analytical steps to get the derivative estimate? I really appreciate your help. – dtttruc Apr 02 '24 at 22:40
1

EDIT: Clarification of the derivative estimate

In my first answer, we showed that $u(t,x)\to 0$ as $t\to\infty$ for almost all $x\in U$, that is, except possibly on a null set, which for simplicity we take to be a finite collection of points $\{x_0,x_1,\dots,x_{N-1}\}$ satisfying $u(t,x)\to L\leq M_{\infty}$ as $t\to\infty $ for $x\in \{x_0,\dots,x_{N-1}\}$. Here, $M_{\infty}=\lim_{t\to\infty}M(t)=\lim_{t\to\infty}\sup_{U}|u(t,\cdot)|$. We want to show that $M_\infty =0$.


The statements $u(t,x_0)\to M_\infty$ as $t\to\infty$ and $u(t,x)\to 0$ as $t\to\infty$ (for all $x\neq x_0$) mean that, given any $\delta'>0$, we can always find a $T'>0$ such that $|u(t,x_0)-M_\infty|<\delta'$ for all $t>T'$, and similarly given any $\delta''>0$ we can find a $T''>0$ so that $|u(t,x)|<\delta''$ for all $t>T''$ , for all $x\neq x_0$.

To make things easier, we can take $$T=\max(T',T'') \\ \delta=\max(\delta',\delta'')$$

To combine these into one statement: $$\text{for all }\delta>0\text{ we can find a }T>0\text{ such that} \\ |u(t,x_0)-M_\infty|<\delta~\text{and}~|u(t,x)|<\delta \\ \forall t>T~,~\forall x\in U\setminus \{x_0\}$$

Now, let $x=x_0+\epsilon\upsilon$, where $\epsilon>0$ and $\upsilon\in \partial\mathbb B^m(0,1)$. From the above, that means that, for all $t>T$, we can bound the difference of the values at these two points from below by $$|u(t,x_0)-u(t,x_0+\epsilon\upsilon)|>M_\infty -2\delta$$ since, if $u(t,x_0)$ is within $\delta$ of $M_\infty$ and $u(t,x_0+\epsilon\upsilon)$ is within $\delta $ of $0$, the difference between the two is at least $M_\infty -2\delta$. Now, dividing both sides of the above by $\epsilon$, we get $$\left|\frac{u(t,x_0)-u(t,x_0+\epsilon\upsilon)}{\epsilon}\right|>\frac{M_\infty-2\delta}{\epsilon}$$ Since we can make $\delta,\epsilon$ arbitrarily small, we can make the RHS arbitrarily large, say equal to some value $A$.
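Explicitly, this lower bound is the reverse triangle inequality combined with the two estimates above:

$$|u(t,x_0)-u(t,x_0+\epsilon\upsilon)|\geq|u(t,x_0)|-|u(t,x_0+\epsilon\upsilon)|>(M_\infty-\delta)-\delta=M_\infty-2\delta.$$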

To make things easier we can define the function $$f:\mathbb R\to\mathbb R \\f(s)=u(t,x_0+s\upsilon)$$ So $$\left|\frac{f(0)-f(\epsilon)}{\epsilon}\right|=A \\ \frac{f(\epsilon)-f(0)}{\epsilon}=\pm A$$

However, by the mean value theorem this means that we can find some value $\iota\in (0,\epsilon)$ such that $f'(\iota)=\frac{f(\epsilon)-f(0)}{\epsilon}$ exactly. Now, see that

$$f'(s)=\upsilon\cdot \nabla u(t,x_0+s\upsilon)$$ So $$f'(\iota)=\upsilon \cdot \nabla u(t,x_0+\iota\upsilon)=\frac{f(\epsilon)-f(0)}{\epsilon}=\pm A=\text{arbitrarily large}$$

Hence, writing $x^*=x_0+\iota\upsilon$, $$|\nabla u(t,x^*)|\geq|\upsilon\cdot \nabla u(t,x^*)|=A=\text{arbitrarily large}$$

But, the point $(t,x^*)$ is inside the set $C(t,x_0;r/2)$ for some positive $r>2\iota$, and we know from theorem 9 on page 61 of Evans that $$\max_{C(t,x_0;r/2)}|\nabla u|\leq \frac{c}{r^{m+3}}\Vert u \Vert_{L^1\big(C(t,x_0;r)\big)}$$

BUT, we have already demonstrated in our other answer that $\Vert u \Vert_{L^1\big(C(t,x_0;r)\big)}\to 0$ as $t\to\infty$. Therefore we have reached a contradiction.

K.defaoite
0

To address the lack of connectedness when applying the strong maximum principle:

Since $U\subset \mathbb R^m$ is open and $\mathbb R^m$ is locally connected, it follows that $U$ is locally connected, from which it follows (see Henno's answer here) that $U$ can be written as a countable union of disjoint, open, connected sets $(U_k)_{k\in\mathbb N}$. And, since $U$ is bounded, it follows that each of these $U_k$ is bounded as well. In this way we can simply break up our original initial-boundary value problem

$$\begin{cases}(\partial_t -\Delta )u=0 & \text{in}~\mathbb R_{\geq 0}\times U \\ u=0&\text{on}~\mathbb R_{\geq 0}\times \partial U\end{cases}$$
into a countable number of problems, one on each of the subsets $U_k$: $$\begin{cases}(\partial_t -\Delta )u_k=0 & \text{in}~\mathbb R_{\geq 0}\times U_k \\ u_k=0&\text{on}~\mathbb R_{\geq 0}\times \partial U_k\end{cases}$$ In each of these subsets $U_k$, both statements (i) and (ii) of the strong maximum principle apply, since the $U_k$ are open, bounded, and connected. We can then consider

$$M_k(t)=\sup_{U_k}|u_k(t,\cdot)|,$$ where for each, as previously discussed, we have $\dot{M}_k < 0$. (Note the strict inequality!) It then follows that the function $$M(t):=\sup_{k} M_k(t)=\sup _U |u(t,\cdot)|$$ is strictly decreasing as well. (It is not rigorous to state this as $\dot M<0$, since in this case $M$ need not be differentiable.)

K.defaoite