If a function $f : \mathbb Z\times \mathbb Z \rightarrow \mathbb{R}^{+} $ satisfies the following condition
$$\forall x, y \in \mathbb{Z}, f(x,y) = \dfrac{f(x + 1, y)+f(x, y + 1) + f(x - 1, y) +f(x, y - 1)}{4}$$
then is $f$ a constant function?
You can prove this with probability.
Let $(X_n)$ be the simple symmetric random walk on $\mathbb{Z}^2$. Since $f$ is harmonic, the process $M_n:=f(X_n)$ is a martingale. Because $f\geq 0$, the process $M_n$ is a non-negative martingale and so must converge almost surely by the Martingale Convergence Theorem. That is, we have $M_n\to M_\infty$ almost surely.
But $(X_n)$ is irreducible and recurrent and so visits every state infinitely often. Thus (with probability one) $f(X_n)$ takes on every $f$ value infinitely often.
Thus $f$ is a constant function, since the sequence $M_n=f(X_n)$ can't take on distinct values infinitely often and still converge.
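As a sanity check of the martingale step (not of the whole argument), here is a short simulation. The test function $f(x,y)=x^2-y^2$ is my choice: it satisfies the mean-value identity at every point of $\mathbb{Z}^2$, but it is signed rather than positive, so the convergence argument above does not apply to it; the code only illustrates $\mathbb{E}[f(X_n)]=f(X_0)$.

```python
import random

# Sanity check: f(x, y) = x^2 - y^2 is discrete harmonic on Z^2 (the average
# of its four neighbour values equals f(x, y)), so f(X_n) is a martingale and
# E[f(X_n)] = f(X_0). Note f is signed, not positive, so this illustrates only
# the martingale step, not the constancy conclusion.

def f(x, y):
    return x * x - y * y

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def f_after_walk(x, y, n):
    """Run the simple symmetric random walk for n steps; return f at the endpoint."""
    for _ in range(n):
        dx, dy = random.choice(STEPS)
        x, y = x + dx, y + dy
    return f(x, y)

random.seed(0)
trials, n, start = 200_000, 25, (3, 1)
avg = sum(f_after_walk(*start, n) for _ in range(trials)) / trials
print(f"f(start) = {f(*start)}, empirical E[f(X_n)] = {avg:.2f}")
# The two numbers agree up to Monte Carlo error.
```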
I can give a proof for the d-dimensional case, if $f\colon\mathbb{Z}^d\to\mathbb{R}^+$ is harmonic then it is constant. The following is based on a quick proof that I mentioned in the comments to the same (closed) question on MathOverflow, Liouville property in Zd. [Edit: I updated the proof, using a random walk, to simplify it]
First, as $f(x)$ is equal to the average of the values of $f$ over the $2d$ nearest neighbours of $x$, we have the inequality $f(x)\ge(2d)^{-1}f(y)$ whenever $x,y$ are nearest neighbours. If $\Vert x\Vert_1$ is the length of the shortest path from $x$ to $0$ (the taxicab metric, or $L^1$ norm), this gives $f(x)\le(2d)^{\Vert x\Vert_1}f(0)$. Now let $X_n$ be a simple symmetric random walk in $\mathbb{Z}^d$ starting from the origin and, independently, let $T$ be a random variable whose support is the nonnegative integers and such that $\mathbb{E}[(2d)^{2T}] < \infty$. Then, $X_T$ has support $\mathbb{Z}^d$ and, for nonnegative harmonic $f$, we have $\mathbb{E}[f(X_T)]=f(0)$ and $\mathbb{E}[f(X_T)^2]\le\mathbb{E}[(2d)^{2T}]f(0)^2$ (the second bound uses $f(X_T)\le(2d)^{\Vert X_T\Vert_1}f(0)\le(2d)^Tf(0)$). By compactness, we can choose $f$ with $f(0)=1$ to maximize $\Vert f\Vert_2\equiv\mathbb{E}[f(X_T)^2]^{1/2}$.
Writing $e_i$ for the unit vector in direction $i$, set $f_i^\pm(x)=f(x\pm e_i)/f(\pm e_i)$. Then, $f$ is equal to a convex combination of $f^+_i$ and $f^-_i$ over $i=1,\ldots,d$. Also, by construction, $\Vert f\Vert_2\ge\Vert f^\pm_i\Vert_2$. Comparing with the triangle inequality, we must have equality here, and $f$ is proportional to $f^\pm_i$. This means that there are constants $K_i > 0$ such that $f(x+e_i)=K_if(x)$. The average of $f$ on the $2d$ nearest neighbours of the origin is $$ \frac{1}{2d}\sum_{i=1}^d(K_i+1/K_i), $$ which must equal $f(0)=1$. However, for positive $K$, $K+K^{-1}\ge2$ with equality iff $K=1$. So, $K_i=1$ and $f$ is constant.
Now, if $g$ is a positive harmonic function, then $\tilde g(x)\equiv g(x)/g(0)$ satisfies $\mathbb{E}[\tilde g(X_T)]=1$. So, $$ {\rm Var}(\tilde g(X_T))=\mathbb{E}[\tilde g(X_T)^2]-1\le\mathbb{E}[f(X_T)^2]-1=0, $$ and $\tilde g$ is constant.
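For what it's worth, here is a minimal Monte Carlo sketch of the identity $\mathbb{E}[f(X_T)]=f(0)$ in $d=2$. The test function $f(x,y)=x^2-y^2$ and the geometric law for $T$ are my choices for illustration: $f$ is signed (nonconstant positive examples are exactly what the theorem rules out), and a geometric $T$ has far heavier tails than $\mathbb{E}[(2d)^{2T}]<\infty$ permits, but both suffice to check this particular identity for a polynomially bounded harmonic function.

```python
import random

# Sketch (my assumptions): d = 2, walk started at the origin, test function
# f(x, y) = x^2 - y^2 (signed, discrete harmonic), and T ~ Geometric. The
# proof above needs f >= 0 and a much lighter tail on T; this only checks
# the randomized-time identity E[f(X_T)] = f(0).

def f(x, y):
    return x * x - y * y

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def sample_f_at_X_T(p=0.3):
    """Draw T ~ Geometric(p) on {0, 1, ...}, run the walk T steps from 0, return f(X_T)."""
    t = 0
    while random.random() > p:  # number of failures before the first success
        t += 1
    x = y = 0
    for _ in range(t):
        dx, dy = random.choice(STEPS)
        x, y = x + dx, y + dy
    return f(x, y)

random.seed(1)
trials = 400_000
est = sum(sample_f_at_X_T() for _ in range(trials)) / trials
print(f"empirical E[f(X_T)] = {est:.4f} (exact value is f(0) = 0)")
```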
Here is an elementary proof, assuming $f$ is bounded both above and below.
Define a random walk on $\mathbb{Z}^2$ which, at each step, stays put with probability $1/2$ and moves to each of the four neighboring vertices with probability $1/8$. Let $p_k(u,v)$ be the probability that the walk travels from $(m,n)$ to $(m+u, n+v)$ in $k$ steps. Then, for any $(m, n)$ and $k$, we have $$f(m, n) = \sum_{(u,v) \in \mathbb{Z}^2} p_k(u,v) f(m+u,n+v).$$ So $$f(m+1, n) - f(m, n) = \sum_{(u,v) \in \mathbb{Z}^2} \left( p_k(u-1,v) - p_k(u,v) \right) f(m+u,n+v).$$ If we can show that $$\lim_{k \to \infty} \sum_{(u,v) \in \mathbb{Z}^2} \left| p_k(u-1,v) - p_k(u,v) \right| =0 \quad (\ast)$$ we deduce that $$f(m+1,n) = f(m,n)$$ and we win.
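As a quick numerical sanity check of $(\ast)$ (not needed for the proof), one can compute the law $p_k$ by dynamic programming and watch the sum decay; the helper below and its names are mine.

```python
import numpy as np

# Direct computation of the left-hand side of (*) for the lazy walk
# (stay w.p. 1/2, each neighbour w.p. 1/8); the sum visibly decays in k.

def lazy_law(k):
    """p_k as an array; index (i, j) corresponds to (u, v) = (i - k - 1, j - k - 1)."""
    size = 2 * k + 3  # one cell of padding so the shift below loses no mass
    p = np.zeros((size, size))
    p[k + 1, k + 1] = 1.0
    for _ in range(k):
        q = 0.5 * p
        q[1:, :] += 0.125 * p[:-1, :]
        q[:-1, :] += 0.125 * p[1:, :]
        q[:, 1:] += 0.125 * p[:, :-1]
        q[:, :-1] += 0.125 * p[:, 1:]
        p = q
    return p

for k in [1, 4, 16, 64, 256]:
    p = lazy_law(k)
    shifted = np.zeros_like(p)
    shifted[1:, :] = p[:-1, :]  # the array of p_k(u - 1, v)
    print(k, np.abs(shifted - p).sum())
```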
Remark: More generally, we could stay put with probability $p$ and travel to each neighbor with probability $(1-p)/4$. If we choose $p$ too small, then $p_k(u,v)$ tends to be larger for $u+v$ even than for $u+v$ odd, rather than depending "smoothly" on $(u,v)$. I believe that $(\ast)$ is true for any $p>0$, but this elementary proof only works for $p > 1/3$. For concreteness, we'll stick to $p=1/2$.
We study $p_k(u,v)$ using the generating function expression $$\left( \frac{x+x^{-1}+y+y^{-1}+4}{8} \right)^k = \sum_{u,v} p_k(u,v) x^u y^v.$$
Lemma: For fixed $v$, the quantity $p_k(u,v)$ increases as $u$ climbs from $-\infty$ up to $0$, and then decreases as $u$ continues climbing from $0$ to $\infty$.
Proof: We see that $\sum_u p_k(u,v) x^u$ is a positive sum of Laurent polynomials of the form $(x/8+1/2+x^{-1}/8)^j$. So it suffices to prove the same thing for the coefficients of this Laurent polynomial. In other words, writing $(x^2+4x+1)^j = \sum e_i x^i$, we want to prove that $e_i$ is unimodal with largest value in the center. Now, $e_i$ is the $i$-th elementary symmetric function in $j$ copies of $2+\sqrt{3}$ and $j$ copies of $2-\sqrt{3}$. By Newton's inequalities, $e_i^2 \geq \frac{(i+1)(2j-i+1)}{i(2j-i)} e_{i-1} e_{i+1} > e_{i-1} e_{i+1}$ so $e_i$ is unimodal; by symmetry, the largest value is in the center. (The condition $p>1/3$ in the above remark is precisely the condition for the corresponding quadratic to have real roots.) $\square$
Corollary: $$\sum_u \left| p_k(u-1,v) - p_k(u,v) \right| = 2 p_k(0,v).$$
Proof: The above lemma tells us the signs of all the absolute values; the sum is \begin{multline*} \cdots + (p_k(-1,v) - p_{k}(-2,v)) + (p_k(0,v) - p_{k}(-1,v)) + \\ (p_k(0,v) - p_k(1,v)) + (p_k(1,v) - p_k(2,v)) + \cdots = 2 p_k(0,v). \qquad \square\end{multline*}
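Here is an exact check of both the Lemma and the Corollary for a small $k$, using rational arithmetic so the equality in the Corollary can be tested exactly rather than up to floating-point error; purely a sanity check, not part of the proof.

```python
from fractions import Fraction

# Exact check (small k) of the Lemma and the Corollary for the lazy walk.

def lazy_law_exact(k):
    """Return {(u, v): p_k(u, v)} as exact fractions."""
    moves = [((0, 0), Fraction(1, 2)), ((1, 0), Fraction(1, 8)),
             ((-1, 0), Fraction(1, 8)), ((0, 1), Fraction(1, 8)),
             ((0, -1), Fraction(1, 8))]
    dist = {(0, 0): Fraction(1)}
    for _ in range(k):
        new = {}
        for (u, v), p in dist.items():
            for (du, dv), q in moves:
                new[(u + du, v + dv)] = new.get((u + du, v + dv), 0) + p * q
        dist = new
    return dist

k = 12
pk = lazy_law_exact(k)
for v in range(-3, 4):
    row = [pk.get((u, v), Fraction(0)) for u in range(-k, k + 1)]  # u = -k .. k
    unimodal = (all(row[i] <= row[i + 1] for i in range(k))        # rising up to u = 0
                and all(row[i] >= row[i + 1] for i in range(k, 2 * k)))  # then falling
    total = sum(abs(pk.get((u - 1, v), Fraction(0)) - pk.get((u, v), Fraction(0)))
                for u in range(-k - 1, k + 2))
    print(v, unimodal, total == 2 * pk.get((0, v), Fraction(0)))  # True True for each v
```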
So, in order to prove $(\ast)$, we must show that $\lim_{k \to \infty} \sum_v p_k(0,v)=0$. In other words, we must show that the coefficient of $x^0$ in $\left( \frac{x}{8}+\frac{3}{4} + \frac{x^{-1}}{8} \right)^k$ goes to $0$.
There are probably a zillion ways to do this; here is a probabilistic one. We are rolling an $8$-sided die $k$ times, and we want the probability that the numbers of ones and twos are precisely equal. The probability that we roll fewer than $k/5$ ones and twos approaches $0$ by the law of large numbers (which can be proved elementarily by, for example, Chebyshev's inequality). If we roll $2r > k/5$ ones and twos, the probability that we roll exactly the same number of ones and twos is $$2^{-2r} \binom{2r}{r} < \frac{1}{\sqrt{\pi r}} < \frac{1}{\sqrt{\pi k/10}}$$ which approaches $0$ as $k \to \infty$. See here for elementary proofs of the bound on $\binom{2r}{r}$.
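One can also just compute this coefficient exactly for growing $k$; the snippet below (my own helper, not part of the argument) does so, and multiplying by $\sqrt{k}$ suggests the expected $\sim C/\sqrt{k}$ rate of decay.

```python
from fractions import Fraction

# Exact computation of sum_v p_k(0, v) = [x^0] (x/8 + 3/4 + x^{-1}/8)^k.

def central_coefficient(k):
    """Coefficient of x^0 in (x/8 + 3/4 + x^{-1}/8)^k, computed exactly."""
    poly = {0: Fraction(1)}
    base = {1: Fraction(1, 8), 0: Fraction(3, 4), -1: Fraction(1, 8)}
    for _ in range(k):
        new = {}
        for e1, c1 in poly.items():
            for e2, c2 in base.items():
                new[e1 + e2] = new.get(e1 + e2, 0) + c1 * c2
        poly = new
    return poly[0]

for k in [1, 10, 50, 200]:
    c = float(central_coefficient(k))
    print(k, c, c * k ** 0.5)  # second column -> 0, third stays roughly constant
```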
I wrote this in two dimensions, but the same proof works in any number of dimensions.
Here is a proof, taken from a book by Dynkin and Yushkevich, that if $f\colon\mathbb{Z}^d \to \mathbb{R}$ is harmonic and bounded then $f$ is constant.
First, a Lemma: if $g\colon \mathbb{Z}^d\to \mathbb{R}$ is harmonic and there exists $L>0$ such that $|g(x) + g(x+e_1) + \cdots + g(x + k e_1)| \le L$ for all $x \in \mathbb{Z}^d$ and $k\ge 0$, then $g \equiv 0$. For assume that $g$ takes positive values, and let $\sup g= M > 0$. Note that if $g(x)> M-\epsilon$, then for all neighbors $x'$ of $x$ we have $g(x')> M-2 d \epsilon$; otherwise the average around $x$ would be $\le M-\epsilon$. In particular, $g(x+e_1) > M-2 d \epsilon$, and iterating, $g(x+ie_1) > M-(2d)^i\epsilon$. So, by taking $\epsilon$ small enough, we can ensure that a chain of values $g(x)$, $g(x+e_1)$, $\ldots$, $g(x+k e_1)$ of any prescribed length is $> M/2$; taking $k+1 > 2L/M$ then makes the sum exceed $L$, a contradiction. Hence $g \le 0$, and applying the same argument to $-g$ gives $g \equiv 0$.
Now, consider $f\colon \mathbb{Z}^d \to \mathbb{R}$ harmonic and bounded. Then the function $g(x) := f(x+e_1) - f(x)$ is again harmonic, and it satisfies the condition of the lemma, since the sum telescopes: $g(x)+g(x+e_1)+\cdots+g(x+ke_1) = f(x+(k+1)e_1)-f(x)$, which is bounded by $2\sup|f|$. We conclude that $f(x) \equiv f(x+e_1)$. Arguing similarly for the other unit vectors $e_i$, we conclude that $f$ is constant.
Let $S$ be the set of harmonic functions $f:\mathbb{Z}^d \to [0,+\infty)$ with the constraint $f(0)\in [0,1]$. For any $x,y\in \mathbb{Z}^d$, let $d(x,y)=\sum_{j=1}^d |x_j-y_j|$, i.e. we use the taxicab metric. For any $x,y \in \mathbb{Z}^d$ with $d(x,y)=1$, the harmonicity and non-negativity of $f$ imply that $f(y)\leq (2d)f(x)$; as a corollary, $f(x)\in [0,(2d)^{d(x,0)}]$, and also "$f\in S$ has a zero" $\Leftrightarrow f\equiv 0$.

If we now endow the vector space of functions with domain $\mathbb{Z}^d$ and co-domain $\mathbb{R}$ with the norm $$\|g\| = \sup_{x\in \mathbb{Z}^d} (4d)^{-d(x,0)}|g(x)|,$$ then $S$ is a compact, convex subset.

Let $f\in S$ be an arbitrary extreme point of $S$. First consider the case where $f$ has a zero: then, as previously discussed, $f\equiv 0$ and we are done. In the other case, we have $$f(.)=\sum_{j=1}^d \left[(2d)^{-1}f(e_j)\right]\underbrace{\left[f(.+e_j)/f(e_j)\right]}_{\in S}+\sum_{j=1}^d\left[(2d)^{-1}f(-e_j)\right]\underbrace{\left[f(.-e_j)/f(-e_j)\right]}_{\in S}$$ where the coefficients on the right-hand side sum to $f(0)$ by harmonicity at the origin. (If $0<f(0)<1$, then $f=f(0)\cdot[f/f(0)]+(1-f(0))\cdot 0$ with $f/f(0)\in S$, contradicting extremality; so $f(0)=1$ and the right-hand side is a genuine convex combination.) Because $f$ is assumed to be an extreme point, the summands $f(.\pm e_j)/f(\pm e_j)$ must all be equal to $f(.)$. Iterating that relation over the lattice gives $$f(x)=\prod_{j=1}^d f(e_j)^{x_j},$$ but comparing with the harmonicity condition further constrains this to $f\equiv 1$: harmonicity at the origin forces $\frac{1}{2d}\sum_j \big(f(e_j)+f(e_j)^{-1}\big)=1$, and $K+K^{-1}\ge 2$ with equality only at $K=1$ (see the numeric illustration below). So the extreme points of the convex set $S$ are the "identically 0" and "identically 1" functions. The Krein-Milman theorem then implies that $S$ only contains constant functions.
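To make that last step concrete, here is a tiny numeric illustration; the dimension $d=3$ and the sample values of $K_j$ are arbitrary choices of mine. For $f(x)=\prod_j K_j^{x_j}$, the average of $f$ over the $2d$ neighbours of $x$ is $f(x)\cdot\frac{1}{2d}\sum_j(K_j+K_j^{-1})$, and this factor equals $1$ only when every $K_j=1$.

```python
# Numeric illustration (d = 3 chosen arbitrarily): for f(x) = prod_j K_j^{x_j},
# the neighbour average of f at x equals f(x) * (1/(2d)) * sum_j (K_j + 1/K_j),
# so f is harmonic iff that factor is 1; K + 1/K >= 2 with equality only at
# K = 1, which forces every K_j = 1.

d = 3

def neighbour_average_factor(K):
    return sum(k + 1 / k for k in K) / (2 * d)

print(neighbour_average_factor([1.0, 1.0, 1.0]))  # 1.0 -> harmonic (constant f)
print(neighbour_average_factor([2.0, 1.0, 1.0]))  # > 1  -> not harmonic
# The factor is minimised (value 1) exactly when all K_j = 1:
print(min(neighbour_average_factor([t, 1.0, 1.0])
          for t in [0.25, 0.5, 0.9, 1.0, 1.1, 2.0, 4.0]))
```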
HINT:
Consider a square $Q_n = \{ (x,y) \ | \ \max(|x|,|y|) \le n\}$. A harmonic function is uniquely determined by its values on the boundary of $Q_n$ (in the continuous case, that is the perimeter of the square; in the discrete case, it is a finite set of points on that perimeter). Therefore for a point $x$ inside we have
$$f(x) = \int_{\partial Q_n} \rho_n(x,y) f(y)\, d y$$
where $\rho_n(x,\cdot) \colon \partial Q_n \to [0, \infty)$ is the Poisson kernel (in the discrete case, the integral is just a finite sum over the boundary points).
We have the following important property of the Poisson kernel. Consider $K$ a compact region in the plane and $\epsilon > 0$. There exists $N_{\epsilon, K}$ such that
$$(1- \epsilon) \rho_n(x, y) \le \rho_n (x', y) \le (1+\epsilon) \rho_n(x,y)$$
for all $x,x' \in K$, $y \in \partial Q_n$, and $n\ge N_{\epsilon, K}$.
Now, if $f\ge 0$ on $\partial Q_n$ this implies
$$(1-\epsilon) f(x) \le f(x') \le (1+\epsilon) f(x)$$
for $x$, $x' \in K$.
Therefore, if $f$ is harmonic and $\ge 0$ near infinity (nonnegative on $\partial Q_n$ for all large $n$ is enough), letting $n \to \infty$, i.e. $\epsilon \to 0$, gives $f(x) = f(x')$ for all $x, x'$, so $f$ is constant.
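Here is a numerical sketch of this hint, under assumptions of my own: the square $Q_n$ is realised as a $(2n+1)\times(2n+1)$ grid, and $\rho_n(\cdot,y_0)$ is computed as the discrete harmonic function with boundary data $\mathbf 1_{y_0}$, by repeated averaging over the four neighbours (a slow but simple solver). For fixed $x,x'$ the printed ratio should drift towards $1$ as $n$ grows, which is the comparability property above.

```python
import numpy as np

# Sketch (my assumptions): Q_n realised as the grid {0,...,2n} x {0,...,2n}
# with the origin at index (n, n); rho_n(., y0) is the harmonic function with
# boundary data 1_{y0}, found by repeated neighbour-averaging (Jacobi sweeps).

def poisson_kernel_column(n, y0, sweeps=20000):
    """Discrete harmonic function on Q_n with boundary value 1 at y0, 0 elsewhere."""
    size = 2 * n + 1
    h = np.zeros((size, size))
    h[y0] = 1.0  # y0 must be an index pair on the boundary of the grid
    for _ in range(sweeps):
        # averaging updates only the interior, so the boundary data is preserved
        h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1]
                                + h[1:-1, :-2] + h[1:-1, 2:])
    return h

n = 10
h = poisson_kernel_column(n, (0, n))   # boundary point: middle of one side
x, x_prime = (n, n), (n + 1, n)        # two nearby interior points
print(h[x], h[x_prime], h[x] / h[x_prime])  # ratio close to 1, and -> 1 as n grows
```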
The following is just a sketch that should work for harmonic $f:\mathbb{Z}^d \to [0,+\infty)$ (it may prove tough to flesh out the details):
If $f$ is bounded, follow the easy proof that examines the difference $g(.)=f(.+e_j)-f(.)$.
Now suppose $f$ is not bounded and suppose WLOG that $f(0)=1$.
For every $M>1$ let $S_M\subseteq \mathbb{Z}^d$ be the connected component of $f^{-1}((M,+\infty))$ that is closest to the origin (connectedness and distance both judged in the nearest-neighbor metric). Note that the maximum principle implies that $S_M$ is an infinite set. For later use, let $d(M):=\text{dist}(0,S_M)$.
With $S_M$ described in the previous point, we have that $$f(.)>M (2d)^{-\text{dist}(.,S_M)}=: g_M(.)$$ where distances are given in the taxicab metric. For later use, define $$h_M:\mathbb{Z}^{d} \to [0,+\infty):x \mapsto M (2d)^{-\text{dist}(x,\{(n,0,\ldots,0)\}_{n\geq d(M)})}$$
Let $(X_t)$ be a random walk initiated at the origin (at time $t=0$) and, independently of the walk, let $T$ be an $\mathbb{N}$-valued random variable with full support on the natural numbers.
Then $1=\mathbb{E}[f(X_T)]\geq\mathbb{E}[g_M(X_T)]\geq \mathbb{E}[h_M(X_T)]$ (the first equality requires $T$ to have light enough tails, as in the random-walk answer above). Let me note that the final inequality may be tedious to prove, but I think it can be done through elementary means.
But if we tune $M$ sufficiently large and tune the random variable $T$ so that $\mathbb{E}[T]$ is sufficiently large, the right hand side in the previous point should exceed $1$, giving a contradiction.