51

If a function $f : \mathbb{Z}\times \mathbb{Z} \rightarrow \mathbb{R}^{+}$ satisfies the following condition

$$\forall x, y \in \mathbb{Z}, f(x,y) = \dfrac{f(x + 1, y)+f(x, y + 1) + f(x - 1, y) +f(x, y - 1)}{4}$$

then is $f$ a constant function?

Willie Wong
  • 75,276
Jineon Baek
  • 1,104
  • You probably want to add a boundedness condition. Otherwise $f(x,y)=x$ is a counterexample. – Julián Aguirre Jul 17 '11 at 10:24
  • @Julian Aguirre: since $x\in\mathbb Z$, we don't have $f(x,y)\geq 0$. – Davide Giraudo Jul 17 '11 at 11:08
  • 1
    @girdav You are right. The lower bound is probably enough. – Julián Aguirre Jul 17 '11 at 13:08
  • This question was asked on [this forum](https://www.ocf.berkeley.edu/~wwu/cgi-bin/yabb/YaBB.cgi?action=display;board=riddles_hard;num=1131915398;start=0), where a user claims to have shown an elementary solution via considering the difference of each square and the square left of it. Sadly I don't understand how "after reading hint 3 the solution is trivial"; if anyone understands this poster's solution this could provide an elementary solution to the question. – Aditya Gupta Nov 26 '22 at 20:45
  • @AdityaGupta It's the proof given by Orangeskid below. – Hecatonchires Mar 18 '24 at 18:41

7 Answers

49

You can prove this with probability.

Let $(X_n)$ be the simple symmetric random walk on $\mathbb{Z}^2$. Since $f$ is harmonic, the process $M_n:=f(X_n)$ is a martingale. Because $f\geq 0$, the process $M_n$ is a non-negative martingale and so must converge almost surely by the Martingale Convergence Theorem. That is, we have $M_n\to M_\infty$ almost surely.

But $(X_n)$ is irreducible and recurrent, and so visits every state infinitely often. Thus, with probability one, the sequence $f(X_n)$ takes on every value in the range of $f$ infinitely often.

Thus $f$ is a constant function, since the sequence $M_n=f(X_n)$ can't take on distinct values infinitely often and still converge.
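As an added illustration (not part of the proof), the recurrence step can be seen empirically; the minimal sketch below estimates how often the simple symmetric walk on $\mathbb{Z}^2$ revisits its starting point. The step and trial counts are arbitrary choices.

```python
import random

def fraction_returning(steps=100_000, trials=100):
    """Estimate the fraction of simple symmetric random walks on Z^2
    that revisit their starting point within `steps` steps."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    returned = 0
    for _ in range(trials):
        x = y = 0
        for _ in range(steps):
            dx, dy = random.choice(moves)
            x, y = x + dx, y + dy
            if x == 0 and y == 0:
                returned += 1
                break
    return returned / trials

# Recurrence: this fraction tends to 1 as `steps` grows, though slowly
# (the chance of no return by time n decays only like 1/log n).
print(fraction_returning())
```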

  • 1
    I love probabilistic arguments in analysis! Very nice. –  Jul 17 '11 at 11:46
  • @Jonas Thanks. I'm not sure how to prove this without probability, though I suppose it is possible. –  Jul 17 '11 at 11:50
  • 7
    This is indeed very nice: Question: Is it possible to change this argument in such a way that it applies for $\mathbb{Z}^n$ instead of $\mathbb{Z}^2$ only? Is it even true that a non-negative harmonic function on $\mathbb{Z}^n$ is constant for $n \geq 3$? For bounded ones this seems clear by considering the Poisson boundary. – t.b. Jul 17 '11 at 11:53
  • @Theo I was thinking the same thing. I seem to recall another probabilistic proof for $d\geq 3$, but not the details. Let me think about it. –  Jul 17 '11 at 11:57
  • @Andrew This proof only works for $d=1,2$, but I think there is another proof for $d\geq 3$. Let me work on it. –  Jul 17 '11 at 11:58
  • 5
    @Byron: This paper contains the claim that it is true that "nonnegative nearest-neighbors harmonic function on $\mathbb{Z}^d$ are constant for any $d$" on page 2. – t.b. Jul 17 '11 at 12:08
  • And following the references therein one finds Theorems 7.1 and 7.3 in Woess's survey that gives a lot of references. It would be nice to extract a crisp proof from there, though. – t.b. Jul 17 '11 at 12:24
  • (Sorry for posting this comment as an answer; I do not have any reputation points yet.) Nice proof! But, even with the risk of sounding stupid, let me confirm that I understood correctly: is this the Liouville's theorem from complex analysis? This is related to Julian Aguirre's comment. Doesn't Liouville's theorem require that the entire function be bounded? What made it possible for us to work with just a lower bound (and no upper bound) on $f$ in the discrete case? Thanks! – Srivatsan Jul 18 '11 at 12:35
  • 5
    The usual Liouville theorem also holds with just a one-sided bound. – GEdgar Jul 18 '11 at 13:56
  • 3
    If you know that the real part of an entire function $f(z)$ is non-negative on the complex plane, what can you say about the function $g(z)=f(z)/(1+f(z))$? – Jyrki Lahtonen Jul 18 '11 at 14:07
15

I can give a proof for the $d$-dimensional case: if $f\colon\mathbb{Z}^d\to\mathbb{R}^+$ is harmonic then it is constant. The following is based on a quick proof that I mentioned in the comments to the same (closed) question on MathOverflow, Liouville property in $\mathbb{Z}^d$. [Edit: I updated the proof, using a random walk, to simplify it.]

First, as $f(x)$ is equal to the average of the values of $f$ over the $2d$ nearest neighbours of $x$, we have the inequality $f(x)\ge(2d)^{-1}f(y)$ whenever $x,y$ are nearest neighbours. If $\Vert x\Vert_1$ is the length of the shortest path from $x$ to $0$ (the taxicab metric, or $L^1$ norm), this gives $f(x)\le(2d)^{\Vert x\Vert_1}f(0)$. Now let $X_n$ be a simple symmetric random walk in $\mathbb{Z}^d$ starting from the origin and, independently, let $T$ be a random variable supported on all of the nonnegative integers such that $\mathbb{E}[(2d)^{2T}] < \infty$. Then $X_T$ has support $\mathbb{Z}^d$ and, for nonnegative harmonic $f$, $\mathbb{E}[f(X_T)]=f(0)$ and $\mathbb{E}[f(X_T)^2]\le\mathbb{E}[(2d)^{2T}]f(0)^2$. By compactness, we can choose $f$ with $f(0)=1$ to maximize $\Vert f\Vert_2\equiv\mathbb{E}[f(X_T)^2]^{1/2}$. (Compactness here: the set of nonnegative harmonic $f$ with $f(0)=1$ is compact under pointwise convergence, thanks to the bound $f(x)\le(2d)^{\Vert x\Vert_1}$, and $f\mapsto\mathbb{E}[f(X_T)^2]$ is continuous on it by dominated convergence with dominating variable $(2d)^{2T}$, so a maximizer exists.)

Writing $e_i$ for the unit vector in direction $i$, set $f_i^\pm(x)=f(x\pm e_i)/f(\pm e_i)$. Then, $f$ is equal to a convex combination of $f^+_i$ and $f^-_i$ over $i=1,\ldots,d$. Also, by construction, $\Vert f\Vert_2\ge\Vert f^\pm_i\Vert_2$. Comparing with the triangle inequality, we must have equality here, and $f$ is proportional to $f^\pm_i$. This means that there are constants $K_i > 0$ such that $f(x+e_i)=K_if(x)$. The average of $f$ on the $2d$ nearest neighbours of the origin is $$ \frac{1}{2d}\sum_{i=1}^d(K_i+1/K_i). $$ However, for positive $K$, $K+K^{-1}\ge2$ with equality iff $K=1$. So, $K_i=1$ and $f$ is constant.

Now, if $g$ is a positive harmonic function, then $\tilde g(x)\equiv g(x)/g(0)$ satisfies $\mathbb{E}[\tilde g(X_T)]=1$. So, $$ {\rm Var}(\tilde g(X_T))=\mathbb{E}[\tilde g(X_T)^2]-1\le\mathbb{E}[f(X_T)^2]-1=0, $$ and $\tilde g$ is constant.
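As an added sanity check (not part of the answer), the identity $\mathbb{E}[f(X_T)]=f(0)$ can be verified numerically. The sketch below, in $d=2$ for simplicity, evolves the exact distribution of the walk on a finite grid against the test function $f(x,y)=x+3$, which is harmonic though not nonnegative (the identity only needs harmonicity and integrability). The grid size and step count are arbitrary choices.

```python
import numpy as np

def step(p):
    """One step of the simple symmetric walk on Z^2: each site sends
    a quarter of its probability mass to each nearest neighbour."""
    q = np.zeros_like(p)
    q[1:, :] += p[:-1, :] / 4
    q[:-1, :] += p[1:, :] / 4
    q[:, 1:] += p[:, :-1] / 4
    q[:, :-1] += p[:, 1:] / 4
    return q

n = 12                       # grid half-width; the walk starts at the centre
size = 2 * n + 1
p = np.zeros((size, size))
p[n, n] = 1.0                # distribution of X_0: point mass at the origin

xs = np.arange(size) - n
f = xs[:, None] + 3.0        # f(x, y) = x + 3: harmonic, though not >= 0

for k in range(9):           # few enough steps that no mass reaches the edge
    print(k, float((p * f).sum()))   # E[f(X_k)] stays equal to f(0, 0) = 3
    p = step(p)
```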

  • Taxicab metric. Never heard that name (I learn maths in French at school). Funny! – Patrick Da Silva Jul 17 '11 at 20:24
  • @Patrick: Also called the Manhattan metric. – George Lowther Jul 17 '11 at 20:30
  • LAAAAAAAAAWL. Funnier. – Patrick Da Silva Jul 17 '11 at 20:39
  • 2
    Note: A similar proof will also show that harmonic $f\colon\mathbb{R}^d\to\mathbb{R}^+$ is constant. Interestingly, in the two dimensional case, Byron's proof can be modified to show that harmonic $f\colon\mathbb{R}^2\setminus\{0\}\to\mathbb{R}^+$ is constant (as 2d Brownian motion has zero probability of hitting $0$ at positive times). Neither of the proofs generalize to harmonic $f\colon\mathbb{R}^d\setminus\{0\}\to\mathbb{R}^+$ for $d\not=2$. In fact, considering $f(x)=\Vert x\Vert^{2-d}$, we see that $f$ need not be constant for $d\not=2$. – George Lowther Jul 17 '11 at 22:54
  • I don't understand how compactness is used. First, compactness of what? Second, it seems that in this part of the proof one restricts to a particular choice of f that maximizes a certain norm, and I am not sure where this leaves us with other functions that don't. Maybe I am missing something trivial... – Andrea Ferretti Apr 27 '23 at 16:28
9

Here is an elementary proof assuming we have bounds for $f$ on both sides.

Define a random walk on $\mathbb{Z}^2$ which, at each step, stays put with probability $1/2$ and moves to each of the four neighboring vertices with probability $1/8$. Let $p_k(u,v)$ be the probability that the walk travels from $(m,n)$ to $(m+u, n+v)$ in $k$ steps. Then, for any $(m, n)$ and $k$, we have $$f(m, n) = \sum_{(u,v) \in \mathbb{Z}^2} p_k(u,v) f(m+u,n+v).$$ So $$f(m+1, n) - f(m, n) = \sum_{(u,v) \in \mathbb{Z}^2} \left( p_k(u-1,v) - p_k(u,v) \right) f(m+u,n+v).$$ If we can show that $$\lim_{k \to \infty} \sum_{(u,v) \in \mathbb{Z}^2} \left| p_k(u-1,v) - p_k(u,v) \right| =0 \quad (\ast)$$ we deduce that $$f(m+1,n) = f(m,n)$$ and we win.

Remark: More generally, we could stay put with probability $p$ and travel to each neighbor with probability $(1-p)/4$. If we choose $p$ too small, then $p_k(u,v)$ tends to be larger for $u+v$ even than for $u+v$ odd, rather than depending "smoothly" on $(u,v)$. I believe that $(\ast)$ is true for any $p>0$, but this elementary proof only works for $p > 1/3$. For concreteness, we'll stick to $p=1/2$.

We study $p_k(u,v)$ using the generating function expression $$\left( \frac{x+x^{-1}+y+y^{-1}+4}{8} \right)^k = \sum_{u,v} p_k(u,v) x^u y^v.$$

Lemma: For fixed $v$, the quantity $p_k(u,v)$ increases as $u$ climbs from $-\infty$ up to $0$, and then decreases as $u$ continues climbing from $0$ to $\infty$.

Proof: We see that $\sum_u p_k(u,v) x^u$ is a positive sum of Laurent polynomials of the form $(x/8+1/2+x^{-1}/8)^j$. So it suffices to prove the same thing for the coefficients of this Laurent polynomial. In other words, writing $(x^2+4x+1)^j = \sum e_i x^i$, we want to prove that the sequence $(e_i)$ is unimodal with largest value in the center. Now, $e_i$ is the $i$-th elementary symmetric function in $j$ copies of $2+\sqrt{3}$ and $j$ copies of $2-\sqrt{3}$. By Newton's inequalities, $e_i^2 \geq \frac{(i+1)(2j-i+1)}{i(2j-i)} e_{i-1} e_{i+1} > e_{i-1} e_{i+1}$, so $(e_i)$ is log-concave and hence unimodal; by symmetry, the largest value is in the center. (The condition $p>1/3$ in the above remark is what makes the quadratic have real roots.) $\square$

Corollary: $$\sum_u \left| p_k(u-1,v) - p_k(u,v) \right| = 2 p_k(0,v).$$

Proof: The above lemma tells us the signs of all the absolute values; the sum is \begin{multline*} \cdots + (p_k(-1,v) - p_{k}(-2,v)) + (p_k(0,v) - p_{k}(-1,v)) + \\ (p_k(0,v) - p_k(1,v)) + (p_k(1,v) - p_k(2,v)) + \cdots = 2 p_k(0,v). \qquad \square\end{multline*}

So, in order to prove $(\ast)$, we must show that $\lim_{k \to \infty} \sum_v p_k(0,v)=0$. In other words, we must show that the coefficient of $x^0$ in $\left( \frac{x}{8}+\frac{3}{4} + \frac{x^{-1}}{8} \right)^k$ goes to $0$.

There are probably a zillion ways to do this; here is a probabilistic one. We are rolling an $8$-sided die $k$ times, and we want the probability that the numbers of ones and twos are precisely equal. The probability that we roll fewer than $k/5$ ones and twos approaches $0$ by the law of large numbers (which can be proved elementarily by, for example, Chebyshev's inequality). If we roll $2r > k/5$ ones and twos, the probability that we roll exactly the same number of ones and twos is $$2^{-2r} \binom{2r}{r} < \frac{1}{\sqrt{\pi r}} < \frac{1}{\sqrt{\pi k/10}}$$ which approaches $0$ as $k \to \infty$. See here for elementary proofs of the bound on $\binom{2r}{r}$.

I wrote this in two dimensions, but the same proof works in any number of dimensions.
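As an added numerical aside (not part of the proof), $(\ast)$ is easy to test: the sketch below updates the exact distribution $p_k$ of the lazy walk by convolution and prints the sum in $(\ast)$, which is seen to decay on the order of $1/\sqrt{k}$, matching the final estimate. The grid size and step count are arbitrary choices.

```python
import numpy as np

def lazy_step(p):
    """Stay put with probability 1/2; move to each of the four
    neighbours with probability 1/8."""
    q = p / 2
    q[1:, :] += p[:-1, :] / 8
    q[:-1, :] += p[1:, :] / 8
    q[:, 1:] += p[:, :-1] / 8
    q[:, :-1] += p[:, 1:] / 8
    return q

n = 70                        # grid half-width, larger than the step count
size = 2 * n + 1
p = np.zeros((size, size))
p[n, n] = 1.0                 # p_0: point mass at the origin

for k in range(1, 61):
    p = lazy_step(p)
    shifted = np.zeros_like(p)
    shifted[1:, :] = p[:-1, :]            # this is p_k(u - 1, v)
    if k % 10 == 0:
        # the total-variation sum in (*): it shrinks like O(1/sqrt(k))
        print(k, np.abs(shifted - p).sum())
```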

  • 1
    Hi, I know this is an old post (and an amazing solution). Do you maybe have a few words as to the motivation you had when solving it? I have 3 questions (sorry, the formatting won't let me enter): 1. What was the motivation to approach it with a random walk? 2. Did you try small cases and conjecture the unimodality for a fixed $v$? Or is there a good heuristic? Thanks in advance, I love your answers :) – Andy Sep 23 '17 at 01:25
3

A proof that if $f\colon\mathbb{Z}^d \to \mathbb{R}$ is harmonic and bounded then $f$ is constant, taken from a book by Dynkin and Yushkevich.

First, a Lemma: if $g\colon \mathbb{Z}^d\to \mathbb{R}$ is harmonic and there exists $L>0$ such that $|g(x) + g(x+e_1) + \cdots + g(x + k e_1)| \le L$ for all $x \in \mathbb{Z}^d$ and $k\ge 0$, then $g \equiv 0$. (Taking $k=0$ shows that $g$ is bounded, so $M:=\sup g$ is finite.) For assume that $g$ takes positive values, so $M > 0$. Note that if $g(x)> M-\epsilon$, then for all neighbors $x'$ of $x$ we have $g(x')> M-2 d \epsilon$: otherwise the average over the neighbors of $x$ would be at most $\frac{(2d-1)M+(M-2d\epsilon)}{2d} = M-\epsilon$, contradicting the fact that this average equals $g(x) > M-\epsilon$. In particular, $g(x+e_1) > M-2 d \epsilon$, and iterating, $g(x+j e_1) > M-(2d)^j \epsilon$. So, by taking $\epsilon$ small enough, we can ensure that an arbitrarily long chain of values $g(x)$, $g(x+e_1)$, $\ldots$, $g(x+k e_1)$ are all $> M/2$; for $k > 2L/M$ the sum of such a chain exceeds $L$, a contradiction. Hence $g \le 0$, and applying the same argument to $-g$ gives $g \ge 0$, so $g \equiv 0$.

Now, consider $f\colon \mathbb{Z}^d \to \mathbb{R}$ harmonic and bounded. Then the function $g(x) := f(x+e_1) - f(x)$ is again harmonic, and it satisfies the condition of the lemma, since the sum telescopes: $g(x) + g(x+e_1) + \cdots + g(x+k e_1) = f(x+(k+1)e_1) - f(x)$, which is bounded in absolute value by $2\sup|f|$. We conclude that $f(x) \equiv f(x+e_1)$. Similarly for all the other unit vectors $e_i$, and we conclude that $f$ is constant.

orangeskid
  • 56,630
  • 1
    Very elegant! Nice! – cnikbesku Jan 14 '23 at 19:00
  • Isn't there an easy way to convert that into a proof which only requires that the function is, say, non-negative - that is, bounded from below (or from above; what matters is that it is bounded from one side)? – JimT Mar 23 '23 at 23:23
  • @Jim T: I can't see it at the moment ;-) I am curious too.... – orangeskid Mar 24 '23 at 03:13
  • The original elementary proof by H. A. Heilbronn (that's the one in the Dynkin-Yushkevich book) certainly makes use of the numbers being bounded from both sides. But perhaps there is a trick which allows one to reduce one problem to the other. – JimT Mar 24 '23 at 16:10
  • @JimT: Interesting info that the proof is due to Heilbronn, thanks! In the general case I find the proof that uses the Poisson kernel the easiest to understand; it's only some properties of averages. Every such equation has a Poisson kernel (in the discrete case or in the continuous one). In the continuous case, for a ball, the properties of the kernel that we need are easy to establish. – orangeskid Mar 24 '23 at 16:49
  • 1
    H. A. Heilbronn, "On Discrete Harmonic Functions", Mathematical Proceedings of the Cambridge Philosophical Society, 1949. – JimT Mar 25 '23 at 04:05
  • @JimT: Thank you very much! Do you know if Heilbronn proved the result for bounded below (rather than bounded) functions? – orangeskid Mar 25 '23 at 05:04
  • 1
    No, I don't think so - but I think in some other article he mentions that it was valid. Not sure if I saw an elementary proof anywhere. – JimT Mar 25 '23 at 18:38
1

Let $S$ be the set of harmonic functions $f:\mathbb{Z}^d \to [0,+\infty)$ with the constraint $f(0)\in [0,1]$. For any $x,y\in \mathbb{Z}^d$, let $d(x,y)=\sum_{j=1}^d |x_j-y_j|$, i.e. we use the taxicab metric. For any $x,y \in \mathbb{Z}^d$ with $d(x,y)=1$, the harmonicity and non-negativity of $f$ imply that $f(y)\leq (2d)f(x)$, so as a corollary $f(x)\in [0,(2d)^{d(x,0)}]$ and also "$f\in S$ has a zero" $\Leftrightarrow f\equiv 0$. If we now endow the vector space of functions with domain $\mathbb{Z}^d$ and co-domain $\mathbb{R}$ with the norm $$\|g\| = \sup_{x\in \mathbb{Z}^d} (4d)^{-d(x,0)}|g(x)|,$$ then $S$ is a compact, convex subset.

Let $f\in S$ be an arbitrary extreme point of $S$. First consider the case where $f$ has a zero: then, as previously discussed, $f\equiv 0$ and we are done. In the other case $f$ is strictly positive, and moreover $f(0)=1$: if $0<f(0)<1$, then $f = f(0)\left[f/f(0)\right] + (1-f(0))\cdot 0$ would exhibit $f$ as a non-trivial convex combination of two elements of $S$. We then have $$f(.)=\sum_{j=1}^d \left[(2d)^{-1}f(e_j)\right]\underbrace{\left[f(.+e_j)/f(e_j)\right]}_{\in S}+\sum_{j=1}^d\left[(2d)^{-1}f(-e_j)\right]\underbrace{\left[f(.-e_j)/f(-e_j)\right]}_{\in S},$$ where the weights sum to $f(0)=1$, so the right hand side is a convex linear combination. Because $f$ is assumed to be an extreme point, the summands $f(.\pm e_j)/f(\pm e_j)$ must all be equal to $f(.)$. Fully integrating that result gives $$f(x)=\prod_{j=1}^d f(e_j)^{x_j}.$$ Comparing with the harmonicity condition at the origin, and writing $K_j=f(e_j)$, we get $$1 = \frac{1}{2d}\sum_{j=1}^d\left(K_j+K_j^{-1}\right);$$ since $K+K^{-1}\ge 2$ for $K>0$ with equality iff $K=1$, this further constrains $f$ to $f\equiv 1$.

So the extreme points of the convex set $S$ are the "identically 0" and "identically 1" functions. The Krein-Milman theorem (with $S$ the closed convex hull of its extreme points) then implies that $S$ only contains constant functions.

0

HINT:

Consider a square $Q_n = \{ (x,y) \ | \ \max(|x|,|y|) \le n\}$. A harmonic function is uniquely determined inside $Q_n$ by its values on the boundary of $Q_n$ (in the continuous case, that is the perimeter of the square; in the discrete case it is a finite set of lattice points on that perimeter). Therefore for a point $x$ inside we have

$$f(x) = \int_{\partial Q_n} \rho_n(x,y) f(y)\, d y$$

where $\rho_n(x, \cdot) \colon \partial Q_n \to [0, \infty)$ is the Poisson kernel (and the integral is a finite sum in the discrete case).

We have the following important property of the Poisson kernel. Given a compact region $K$ in the plane and $\epsilon > 0$, there exists $N_{\epsilon, K}$ such that

$$(1- \epsilon) \rho_n(x, y) \le \rho_n (x', y) \le (1+\epsilon) \rho_n(x,y)$$

for all $x, x' \in K$, all $y \in \partial Q_n$, and all $n\ge N_{\epsilon, K}$.

Now, if $f\ge 0$ on $\partial Q_n$, multiplying these bounds by $f(y)$ and integrating implies

$$(1-\epsilon) f(x) \le f(x') \le (1+\epsilon) f(x)$$

for $x$, $x' \in K$.

Therefore, if $f$ is harmonic and $\ge 0$ (indeed, $f \ge 0$ near infinity suffices), letting $n \to \infty$ and then $\epsilon \to 0$ gives $f(x') = f(x)$ for all $x, x'$, so $f$ is constant.
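As a rough numerical illustration of the kernel property (an added sketch, not from the original hint; grid sizes and iteration counts are arbitrary choices): the code below computes the discrete harmonic measure $\rho_n(x,\cdot)$ of a single boundary point of $Q_n$ by Jacobi relaxation and checks that the ratio $\rho_n(x',y)/\rho_n(x,y)$ at two fixed neighbouring interior points approaches $1$ as $n$ grows.

```python
import numpy as np

def harmonic_measure(n):
    """Discrete harmonic measure of the top-centre boundary point of
    Q_n: solve the discrete Dirichlet problem (boundary data 1 at that
    point, 0 elsewhere) by plain Jacobi relaxation."""
    size = 2 * n + 1
    u = np.zeros((size, size))
    boundary = np.zeros((size, size), dtype=bool)
    boundary[0, :] = boundary[-1, :] = True
    boundary[:, 0] = boundary[:, -1] = True
    u[0, n] = 1.0                      # the distinguished boundary point y
    for _ in range(30 * size * size):  # crude but sufficient iteration count
        v = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
             + np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4
        v[boundary] = u[boundary]      # keep the boundary data fixed
        u = v
    return u

for n in (4, 8, 16):
    rho = harmonic_measure(n)
    c = n                                 # index of the origin
    print(n, rho[c, c + 1] / rho[c, c])   # ratio at two fixed points -> 1
```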

orangeskid
  • 56,630
0

The following is just a sketch that should work for harmonic $f:\mathbb{Z}^d \to [0,+\infty)$ (it may prove tough to flesh out the details):

  1. If $f$ is bounded, follow the easy proof (as in orangeskid's answer above) that examines the difference $g(.)=f(.+e_j)-f(.)$.

  2. Now suppose $f$ is not bounded and suppose WLOG that $f(0)=1$.

  3. For every $M>1$ let $S_M\subseteq \mathbb{Z}^d$ be the connected component of $f^{-1}((M,+\infty))$ that is closest to the origin (connectedness and distance both judged in the nearest-neighbor metric). Note that the maximum principle implies that $S_M$ is an infinite set. For later use, let $d(M):=\text{dist}(0,S_M)$.

  4. With $S_M$ described in the previous point, we have that $$f(.)>M (2d)^{-\text{dist}(.,S_M)}=: g_M(.)$$ where distances are given in the taxicab metric. For later use, define $$h_M:\mathbb{Z}^{d} \to [0,+\infty):x \mapsto M (2d)^{-\text{dist}(x,\{(n,0,\ldots,0)\}_{n\geq d(M)})}$$

  5. Let $(X_t)$ be a simple symmetric random walk on $\mathbb{Z}^d$ initiated at the origin (at time $t=0$) and, independently of the walk, let $T$ be an $\mathbb{N}$-valued random variable with full support on the natural numbers.

  6. $1=\mathbb{E}[f(X_T)]\geq\mathbb{E}[g_M(X_T)]\geq \mathbb{E}[h_M(X_T)]$. Let me note that the final inequality may be tedious to prove, but I think it can be done through elementary means.

  7. But if we take $M$ sufficiently large and tune the random variable $T$ so that $\mathbb{E}[T]$ is sufficiently large, the right-hand side in the previous point should exceed $1$, giving a contradiction.