
This is a lemma from the paper "Mean curvature flow singularities for mean convex surfaces" by Gerhard Huisken and Carlo Sinestrari:

$\textbf{Lemma 3.2.}$ Suppose that $(1 + \eta) H^2 \leq |A|^2 \leq c_0 H^2$ holds for some $\eta, c_0 > 0$ at some point of $\mathscr{M}_t$. Then we also have

(i) $-2Z \geq \eta H^2|A|^2$;

(ii) $|H \nabla_i h_{kl} - \nabla_i H h_{kl}|^2 \geq \frac{\eta^2}{4n(n-1)^2c_0} H^2 |\nabla H|^2$.

My doubt concerns item $(ii)$; below is the argument given by the authors.

We have (see [10, Lemma $2.3$ (ii)])

$|H \ \nabla_i h_{kl} - \nabla_i H \ h_{kl}|^2 \geq \frac{1}{4} |\nabla_i H \ h_{kl} - \nabla_k H \ h_{il}|^2 = \frac{1}{2} (|A|^2 |\nabla H|^2 - |\nabla^i H h_{il}|^2).$

Let us denote by $\lambda_1, \cdots, \lambda_n$ the eigenvalues of $A$, ordered in such a way that $\lambda_n$ is an eigenvalue with the largest modulus. Then we have $|\nabla^i H \ h_{il}|^2 \leq \lambda_n^2 |\nabla H|^2$ and

\begin{align*} |H \ \nabla_i h_{kl} - \nabla_i H \ h_{kl}|^2 &\geq \frac{1}{2} \sum\limits_{i=1}^{n-1} \lambda_i^2 |\nabla H|^2 = \sum\limits_{i=1}^{n-1} \lambda_i^2 \lambda_n^2 \frac{|\nabla H|^2}{2\lambda_n^2}\\ &\geq \sum\limits_{i=1}^{n-1} \sum\limits_{j=i+1}^n \lambda_i^2 \lambda_j^2 \frac{|\nabla H|^2}{2(n-1)|A|^2}\\ &\geq \left( \sum\limits_{i,j=1, \ i < j}^n \lambda_i \lambda_j \right)^2 \frac{|\nabla H|^2}{n(n-1)^2|A|^2}\\ &= \frac{(|A|^2 - H^2)^2}{4n(n-1)^2|A|^2} |\nabla H|^2 \geq \frac{\eta^2 H^2}{4n(n-1)^2c_0} |\nabla H|^2. \square \end{align*}

I would like to understand the following equality and inequalities:

a) $\frac{1}{4} |\nabla_i H \ h_{kl} - \nabla_k H \ h_{il}|^2 = \frac{1}{2} (|A|^2 |\nabla H|^2 - |\nabla^i H h_{il}|^2)$;

b) $|H \ \nabla_i h_{kl} - \nabla_i H \ h_{kl}|^2 \geq \frac{1}{2} \sum\limits_{i=1}^{n-1} \lambda_i^2 |\nabla H|^2$;

c) $\sum\limits_{i=1}^{n-1} \sum\limits_{j=i+1}^n \lambda_i^2 \lambda_j^2 \frac{|\nabla H|^2}{2(n-1)|A|^2} \geq \left( \sum\limits_{i,j=1, \ i < j}^n \lambda_i \lambda_j \right)^2 \frac{|\nabla H|^2}{n(n-1)^2|A|^2}$.

My thoughts:

$a$ and $b$) I considered using normal coordinates, but this doesn't help me, because the right-hand side in $a$ and $b$ would then be zero.

$c$) I tried to prove that $\sum\limits_{i=1}^{n-1} \sum\limits_{j=i+1}^n \lambda_i^2 \lambda_j^2 \geq \left( \sum\limits_{i,j=1, \ i < j}^n \lambda_i \lambda_j \right)^2$, but I can't, because I don't know whether all the eigenvalues are non-negative. Indeed, I don't even know whether $H > 0$, since I didn't see the hypothesis that the hypersurface is mean convex anywhere in the paper before this lemma.

Thanks in advance!

George

1 Answer


(a) is just a direct computation:

\begin{align*} \frac{1}{4} |\nabla_i H \ h_{kl} - \nabla_k H \ h_{il}|^2 &= \frac 14 \sum_{i,k,l} (\nabla _i H h_{kl} - \nabla _k H h_{il})^2 \\ &=\frac 14 \sum_{i,k,l}\bigg( (\nabla _i H h_{kl})^2 + (\nabla _k H h_{il})^2 - 2\nabla _i H h_{kl} \nabla _k H h_{il}\bigg)\\ &=\frac 12 \left(\sum_i (\nabla_iH)^2 \sum_{k,l} h_{kl}^2\right) -\frac 12 \sum_{i,k,l} \nabla _i H h_{kl} \nabla _k H h_{il} \\ &= \frac 12 |\nabla H|^2 |A|^2 -\frac 12 \sum_{i,k,l} \nabla _i H h_{kl} \nabla _k H h_{il} \end{align*}
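In passing from the second line to the third, the two squared sums are equal; relabeling the dummy indices $i \leftrightarrow k$ in the second one turns it into the first:

$$\sum_{i,k,l} (\nabla _k H h_{il})^2 = \sum_{i,k,l} (\nabla _i H h_{kl})^2 = \sum_i (\nabla_iH)^2 \sum_{k,l} h_{kl}^2 = |\nabla H|^2 |A|^2.$$

This is where the factor $\frac 12$ in the third line comes from.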

For the remaining term, use that at the given point (after diagonalizing $A$) $h_{il} = \lambda _i \delta_{il}$, so $$\sum_{i,k,l} \nabla _i H h_{kl} \nabla _k H h_{il} =\sum_i (\nabla_iH)^2 \lambda_i^2 = |\nabla^i H h_{il}|^2$$
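In detail: the Kronecker deltas force $l = k$ and $l = i$, hence $i = k = l$ in the triple sum,

$$\sum_{i,k,l} \nabla _i H h_{kl} \nabla _k H h_{il} = \sum_{i,k,l} \nabla _i H (\lambda_k \delta_{kl}) \nabla _k H (\lambda_i \delta_{il}) = \sum_i \lambda_i^2 (\nabla_iH)^2.$$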

In the paper, (b) is shown using (a). By (a) and the bound $|\nabla^i H h_{il}|^2 \le \lambda_n^2 |\nabla H|^2$,

\begin{align*} |H \ \nabla_i h_{kl} - \nabla_i H \ h_{kl}|^2 &\ge \frac{1}{2} (|A|^2 |\nabla H|^2 - |\nabla^i H h_{il}|^2)\\ &\ge \frac 12\bigg( \left(\sum_{i=1}^n \lambda_i^2\right) |\nabla H|^2 - \lambda_n^2 |\nabla H|^2 \bigg)\\ &= \frac 12 \left(\sum_{i=1}^{n-1} \lambda_i^2\right) |\nabla H|^2 \end{align*}
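The bound on the middle term also comes from the diagonalization: at the point, $\nabla^i H h_{il} = \lambda_l \nabla_l H$ (no sum over $l$), so, since $\lambda_n$ has the largest modulus,

$$|\nabla^i H h_{il}|^2 = \sum_l \lambda_l^2 (\nabla_l H)^2 \le \lambda_n^2 \sum_l (\nabla_l H)^2 = \lambda_n^2 |\nabla H|^2.$$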

Lastly, (c) has nothing at all to do with MCF. Note that the two double sums run over the same index set:

$$ \sum_{i=1}^{n-1} \sum_{j=i+1}^n = \sum_{i,j=1, i<j}^n.$$

Thus

\begin{align*} \left(\sum_{i,j=1, i<j}^n \lambda_i \lambda_j \right)^2 &= \left(\sum_{i=1}^{n-1} \sum_{j=i+1}^n \lambda_i \lambda_j\right)^2 \\ &\le \left(\sum_{i=1}^{n-1} \sum_{j=i+1}^n (\lambda_i \lambda_j)^2\right) \left(\sum_{i=1}^{n-1} \sum_{j=i+1}^n 1^2\right) \ \ \ \ \text{(Cauchy-Schwarz inequality)}\\ &= \frac{n(n-1)}{2} \sum_{i=1}^{n-1} \sum_{j=i+1}^n (\lambda_i \lambda_j)^2. \end{align*}
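Here Cauchy-Schwarz is the usual one for the dot product in $\mathbb R^{n(n-1)/2}$, applied to the vector whose entries are the products $\lambda_i \lambda_j$ ($i < j$) and the all-ones vector:

$$\sum_{i,j=1, i<j}^n \lambda_i \lambda_j = \big( (\lambda_i \lambda_j)_{i<j} \big) \cdot (1, \cdots, 1).$$

Dividing the last display by $\frac{n(n-1)}{2}$ and multiplying by $\frac{|\nabla H|^2}{2(n-1)|A|^2}$ gives exactly the inequality in (c).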

Arctic Char
  • I have some doubts about your computations in $a$:

    a.1) You are considering normal coordinates, right? a.2) Why is the first inequality valid? I can't see it even if you are considering normal coordinates. Firstly, I think $i,k,l$ are fixed, because there is no indication from the authors that $i,k,l$ vary in the term $\nabla_i H \ h_{kl} - \nabla_k H \ h_{il}$, and secondly, I can't see it as a vector field. Are you seeing $\nabla_i H$ as $(\nabla_i H)^j \partial_j$ (I'm denoting $\partial_j := \frac{\partial}{\partial x_j}$ and by $(\nabla_i H)^j$ the coordinates in this basis)?

    – George Apr 02 '19 at 23:56
  • It would explain the first inequality, but I don't understand why you are considering the indices $i,k,l$ as varying in a sum.

    I have some doubts about item $c$ too: c.1) You seem to have left the summand out of the observation for $c$; I think the observation is that $\sum_{i,j=1, i<j}^n \lambda_i \lambda_j = \sum_{i=1}^{n-1} \sum_{j=i+1}^n \lambda_i \lambda_j$, right? c.2) Which inner product are you considering in order to use the Cauchy-Schwarz inequality? I can't see this.

    – George Apr 03 '19 at 00:00
  • @George You can assume that the Einstein convention is used in papers in geometric analysis (especially in Ricci flow or MCF). I am seeing e.g. $\nabla_i H h_{kl}$ as a representation of the $(0,3)$-tensor $\nabla H \otimes A$. $H$ is a function, so $\nabla_iH$ represents the $(0,1)$-tensor $\partial_i H \, dx^i$, while $\nabla^i H$ represents the $(1,0)$-tensor $g^{ij} \partial _j H \, \partial_i$. – Arctic Char Apr 03 '19 at 04:16
  • For part $(c)$, I did not write $\lambda_i\lambda_j$ since I want to emphasize that the summation range is the same, so it does not matter whether the summand is $\lambda_i \lambda_j$ or any $f(i, j)$. Lastly, for the inner product... it is really just the usual Cauchy-Schwarz for the dot product: we use $\sum_i f_i = (f_1, \cdots, f_k) \cdot (1,\cdots, 1)$. – Arctic Char Apr 03 '19 at 04:18
  • Firstly, I would like to apologize: there is a typo in my question $a.2$. I asked "why is the first inequality valid?", but the question should be "why is the first equality valid?", i.e., why is $\frac{1}{4} |\nabla_i H \ h_{kl} - \nabla_k H \ h_{il}|^2 = \frac 14 \sum_{i,k,l} (\nabla_i H h_{kl} - \nabla_k H h_{il})^2$ valid? I don't know if I was clear the first time because of the typo, but I wanted to say this beforehand. – George Apr 04 '19 at 18:00
  • About your comment that I can assume the Einstein convention: I understand that if I have a derivation $X$ and $f \in \mathcal{C}^{\infty}(M)$, then $Xf = \sum\limits_{i=1}^n X^i \frac{\partial f}{\partial x_i}$ can be rewritten in the Einstein convention as $Xf = X^i \frac{\partial f}{\partial x_i}$, where I know that the sum is over $i$ because the index $i$ appears above in $X^i$ and below in the partial derivative $\frac{\partial f}{\partial x_i}$. But how exactly do we see the Einstein convention in a tensor product? – George Apr 04 '19 at 18:01
  • Finally, for part $(c)$, I imagined that you were using the usual inner product, but what confuses me is that you have a double sum here. Are you seeing the inner product like this:

    $$\sum_{i=1}^{n-1} \sum_{j=i+1}^n \lambda_i \lambda_j = \left\langle \left( \sum_{j=2}^n \lambda_1 \lambda_j, \sum_{j=3}^n \lambda_2 \lambda_j, \cdots, \sum_{j=i+1}^n \lambda_i \lambda_j, \cdots, \lambda_{n-1} \lambda_n \right), (1, \cdots, 1) \right\rangle,$$

    where the vectors in the inner product are in $\mathbb{R}^{n-1}$?

    – George Apr 04 '19 at 18:14
  • @George I should be more clear: when doing tensor calculations, e.g. when we write $|X_{ijk}|^2$, it means (1) you have a $(0,3)$-tensor $X = \sum_{i,j,k} X_{ijk} \, dx^i \otimes dx^j \otimes dx^k$ and (2) you want to calculate the norm of this tensor. $|X_{ijk}|$ is not to be understood as the absolute value of the $(i,j,k)$ coefficient of $X$ (which depends on coordinates). – Arctic Char Apr 05 '19 at 04:55
  • For (c), let me just write down explicitly the case $n=3$; hopefully it will be clearer. I am basically treating the vector in $\mathbb R^{\frac{n(n-1)}{2}}$: $$\left(\sum_{i,j=1, i<j}^3 \lambda_i \lambda_j \right)^2 = (\lambda_1 \lambda_2 + \lambda_1\lambda_3 + \lambda_2\lambda_3)^2 = \bigg( (\lambda_1 \lambda_2 , \lambda_1\lambda_3 , \lambda_2\lambda_3)\cdot (1,1,1) \bigg)^2$$ – Arctic Char Apr 05 '19 at 04:59
  • I understood $(c)$, thanks! About my question $a.2$: let me see if I understood. Are you considering this norm as $|\cdot|$ and doing the computations in normal coordinates, in such a way that you will have

    $$\frac{1}{4} |\nabla_i H \ h_{kl} - \nabla_k H \ h_{il}|^2 = \frac 14 \sum_{i,k,l} g^{ii}g^{kk}g^{ll}(\nabla_i H h_{kl} - \nabla_k H h_{il})(\nabla_i H h_{kl} - \nabla_k H h_{il}) = \frac 14 \sum_{i,k,l} (\nabla_i H h_{kl} - \nabla_k H h_{il})^2?$$

    – George Apr 05 '19 at 13:30
  • Yes, exactly @George – Arctic Char Apr 05 '19 at 13:39