
Let $X_{0}=1$ and define the Markov chain $X_{n}$ given by the transition probabilities $p_{01}=1$, $p_{k,k+1}=p$ and $p_{k,k-1}=(1-p)=q$ for $k\geq 1$, where $p$ is some fixed number in $(0,1)$.

I want to show that if $p\leq\frac{1}{2}$, then $\frac{X_{n}}{n}\xrightarrow{a.s.} 0$ and if $p>\frac{1}{2}$, then $\frac{X_{n}}{n}\xrightarrow{a.s.}p-q$.
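A quick simulation is consistent with both limits (a sketch; the function name and parameter values are mine):

```python
import random

def simulate(p, n, seed=0):
    """Run the reflected walk: X_0 = 1, forced up-step at 0, else up w.p. p."""
    rng = random.Random(seed)
    x = 1
    for _ in range(n):
        if x == 0 or rng.random() < p:
            x += 1
        else:
            x -= 1
    return x

n = 200_000
for p in (0.3, 0.5, 0.7):
    # for p = 0.7 the ratio is close to p - q = 0.4; for p <= 0.5 it is near 0
    print(p, simulate(p, n) / n)
```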

Intuitively, I understand why this must be the case. I have shown that the chain is transient for $p>\frac{1}{2}$ and recurrent for $p\leq \frac{1}{2}$ (positive recurrent for $p<\frac{1}{2}$, null recurrent for $p=\frac{1}{2}$) by applying results about the usual simple random walk.

I can see that therefore, if $p> \frac{1}{2}$, then $X_{n}$ will hit $0$ only finitely many times.

I can write $X_{n}=X_{n}-X_{T_{\mathrm{last}}^{0}}$, where $T_{\mathrm{last}}^{0}$ denotes the last time the chain hits $0$ (finite a.s. for $p>\frac{1}{2}$), and then try to do something with $\frac{X_{n}-X_{T_{\mathrm{last}}^{0}}}{n}$; but the problem is that $T_{\mathrm{last}}^{0}$ is not a stopping time, so I cannot apply the Markov property.

Similarly, for $p\leq\frac{1}{2}$, I can see that the chain hits $0$ infinitely often, and hence $X_{n}(\omega)$ should grow sublinearly almost surely. But I am failing to make this rigorous.

Any help is appreciated.

Dovahkiin

2 Answers


For the $p>\frac12$ case: the intuition is that the Markov chain is transient, so at some point it leaves zero forever and from then on behaves exactly like a simple random walk, at which point you can use the regular SLLN. To prove this rigorously, let $\tau_N$ be the first hitting time of some arbitrary $N>0$ and let $\tau_{N,0}$ be the first hitting time of zero after $\tau_N$; note that $\tau_N<\infty$ a.s., while $\mathbb P[\tau_{N,0}<\infty]$ equals the chain's hitting probability of zero from $N$. Let $Y^N$ be a process such that $Y^N_n=X_n$ for $n\leq \tau_{N,0}$, and such that after the stopping time $\tau_{N,0}$, $Y^N$ diverges from $X$ and instead evolves as a simple random walk on the integers with parameter $p$. Then $(Y^N_{\tau_N+n})_{n\geq 0}$ has the law of a simple random walk with parameter $p$ started from $N$, and since $\tau_N$ is almost surely finite, the SLLN gives $\frac{Y^N_n}{n} \to p-q$ a.s. Next, observe that $$\left\{ \lim_{n\to\infty}\frac{X_n}{n} = p-q \right\} \supseteq \bigcap_{n=0}^\infty\{ Y^N_n = X_n \}\supseteq\{ \tau_{N,0} =\infty \},$$ so it only remains to bound the hitting probability of zero from $N$ and show that this probability $\to0$ as $N\to\infty$ ($N$ was chosen arbitrarily).
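For completeness, that last bound is the standard gambler's-ruin computation. Writing $h_k$ for the probability that the chain started from $k\geq 1$ ever hits $0$ (until it does, it moves exactly like the simple random walk), first-step analysis gives $$h_k=p\,h_{k+1}+q\,h_{k-1},\qquad h_0=1,$$ whose minimal solution in $[0,1]$ is $h_k=(q/p)^k$. Hence the hitting probability of zero from $N$ is $(q/p)^N\to 0$ as $N\to\infty$, since $q<p$.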

For the $p\leq\frac12$ case, let $X^p$ be the Markov chain with parameter $p$. Couple the Markov chains $(X^p: p\in[0,1])$ in such a way that $X^{p_1}_n\leq X^{p_2}_n$ for all $n$ whenever $p_1 \leq p_2$. Do this on the probability space generated by a sequence $(U_n)_{n=0}^\infty$ of i.i.d. random variables uniformly distributed on $[0,1]$. Then define the $X^p_n$ recursively: $X^p_0=1$ for all $p\in[0,1]$ and $$X^p_{n+1}=\begin{cases} X^p_n+1 & U_n\leq p\ \mathrm{or}\ X^p_n=0, \\ X^p_n-1 & \mathrm{otherwise}. \end{cases}$$ If $p_1 < p_2$, the only case where $X^{p_1}_{n+1}=X^{p_1}_n+1$ while $X^{p_2}_{n+1}=X^{p_2}_n-1$ for a given $n$ is when $X^{p_1}_n=0$ but $X^{p_2}_n>0$. But in that case we must have $X^{p_2}_n\geq 2$, because each $X^p_n$ has the deterministic parity of $n+1$ ($X^p_0=1$ is odd, $X^p_1$ is even, and so on), so this step never makes $X^{p_1}_{n+1}$ greater than $X^{p_2}_{n+1}$. In all other cases, $X^{p_1}_{n+1}=X^{p_1}_n+1$ implies $X^{p_2}_{n+1}=X^{p_2}_n+1$, and so by induction $X^{p_1}_n\leq X^{p_2}_n$ for all $n$.
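A minimal sketch of this coupling in Python (function and parameter names are mine); driving every chain with the same uniforms makes the pathwise domination visible directly:

```python
import random

def coupled_paths(ps, n, seed=42):
    """Drive every chain X^p with the same uniforms U_0, U_1, ...:
    step up iff U_k <= p or the chain sits at 0."""
    rng = random.Random(seed)
    us = [rng.random() for _ in range(n)]
    paths = {}
    for p in ps:
        x = [1]
        for u in us:
            if u <= p or x[-1] == 0:
                x.append(x[-1] + 1)
            else:
                x.append(x[-1] - 1)
        paths[p] = x
    return paths

paths = coupled_paths([0.3, 0.5, 0.8], 5_000)
# Monotonicity X^{0.3}_n <= X^{0.5}_n <= X^{0.8}_n holds at every step:
assert all(a <= b <= c for a, b, c in zip(paths[0.3], paths[0.5], paths[0.8]))
```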

Since $0\le X^{p}_n\le X^{p'}_n$ for every $p'>\frac12$, and $p'-q'\to 0$ as $p'\downarrow \frac12$, the squeeze theorem gives $$\lim_{n\to\infty}\frac{X^p_n}{n}=0$$ a.s. whenever $p\leq\frac12$.

Wei
  • Well I already have intuitions which can be used to intuitively justify the convergence; what I am lacking is a proper rigorous proof. While your answer is helpful, it does not really solve my issue. Take the $p>\frac{1}{2}$ case for example: by using a coupling of i.i.d. symmetric Bern$(p)$ variates, I can argue that almost surely (in some probability space) we have the desired convergence due to the strong law. But that only shows convergence in distribution (and also in probability). For $p=\frac{1}{2}$, I can use maximal inequalities to get sharper bounds and show convergence in probability. – Dovahkiin Aug 28 '24 at 08:06
  • Again, for the case of $p<\frac{1}{2}$, by your hint, I can argue that $P(X_{n}>\epsilon n)\to \sum_{k=\epsilon n}^{\infty}\pi_{k}\to 0$ where $\pi_{k}$ is the invariant distribution (which exists due to positive recurrence). But this again only shows convergence in probability. – Dovahkiin Aug 28 '24 at 08:17
  • I've edited my answer to provide more details for $p>\frac12$, and included all of $p\leq\frac12$ in the same coupling argument as the previous Borel-Cantelli approach was a bit too tricky. – Wei Aug 28 '24 at 08:52
  • Let me know if you need more details on the coupling argument, as I can fill that in as well (it's just a bit fiddly) – Wei Aug 28 '24 at 09:08
  • Thanks. The main trouble with my thinking was that I could not find a stopping time after which the walk evolved as a simple random walk. I was using $T_{last}$ which is not a stopping time. – Dovahkiin Aug 28 '24 at 09:54
  • I think you can simplify the stopping time $\tau_{N,0}$ by taking $\tau_{N}$ to be just the deterministic time $N$ and $\tau_{N,0}=\inf\{k\geq N:X_{k}=0\}$. That would work essentially the same way, since then too $P(\tau_{N,0}<\infty)\to 0$ as $N\to\infty$. Also, I think we can use the same coupling to conclude $\frac{X_{n}-n(p-q)}{\sqrt{n}}\xrightarrow{d}N(0,4pq)$. This is again due to the fact that after $\tau_{N,0}$, which is finite a.s., the walk evolves as a simple random walk, and $P(\tau_{N,0}<\infty)\to 0$ as $N\to\infty$. – Dovahkiin Aug 28 '24 at 10:05
  • I mean, the basic idea should be that hitting 0 after a large time becomes less and less likely and on the event that it does not hit 0, the walk evolves as a simple random walk. – Dovahkiin Aug 28 '24 at 10:46
  • I was the one who downvoted; if you explain how to do the coupling, I'll reverse my vote. I see how you would do the coupling for random walks without the boundary at zero: let $U_n\sim \text{Unif}(0,1)$ for each $n\ge 0$, then say that $X_{n+1}^p=X_n^p+1$ if $U_n\le p$, and $X_{n+1}^p=X_n^p-1$ if $U_n>p$. The first part, where $p>1/2$, appears to be completely correct, and elegant. – Mike Earnest Aug 28 '24 at 15:13
  • @MikeEarnest done (edited my answer). I think the key observation is that the deterministic parity of each $X_n$ allows you to get the same domination result as in the simple random walk case. – Wei Aug 28 '24 at 16:28
  • Thank you for filling in the details. :^) – Mike Earnest Aug 28 '24 at 17:05

The following lemma makes quick work of the $p<1/2$ case.

Lemma: For all $n\ge 0$ and all $k\ge 0$, $$P(X_n=k)\le (p/q)^{k-1}.$$

Proof: This is simple to prove by induction on $n$. The base case $n=0$ is immediate, and for $k\le 1$ the bound is trivial since $(p/q)^{k-1}\ge 1$. Assuming that the lemma is true for $n$, then for any $k\ge 2$, $$ \begin{align} P(X_{n+1}=k) &=p\cdot P(X_{n}=k-1)+q\cdot P(X_{n}=k+1) \\&\le p\cdot (p/q)^{k-2}+q\cdot (p/q)^k \\&=(p/q)^{k-1}. \end{align}$$ $\tag*{$\square$}$ To prove that $X_n/n\to 0$ almost surely, by Borel–Cantelli it suffices to prove that $\sum_{n\ge 1}P(X_n/n>\epsilon)$ is finite for all $\epsilon>0$. This is simple to do using the lemma, since $p/q<1$ when $p<1/2$:
$$ P(X_n/n > \epsilon) = \sum_{k=\lceil n\epsilon \rceil}^\infty P(X_n=k) \le \sum_{k=\lceil n\epsilon \rceil}^\infty (p/q)^{k-1} = \frac{(p/q)^{\lceil n\epsilon\rceil -1}}{1-p/q}. $$
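The lemma's bound can also be checked numerically by propagating the exact distribution of $X_n$ forward (a sketch; the function name is mine):

```python
from collections import defaultdict

def exact_dist(p, n):
    """Exact distribution of X_n (X_0 = 1, forced up-step at 0) by forward recursion."""
    q = 1 - p
    probs = {1: 1.0}
    for _ in range(n):
        nxt = defaultdict(float)
        for k, pr in probs.items():
            if k == 0:
                nxt[1] += pr          # p_{01} = 1
            else:
                nxt[k + 1] += pr * p  # up with probability p
                nxt[k - 1] += pr * q  # down with probability q
        probs = dict(nxt)
    return probs

p, q = 0.3, 0.7
for n in (5, 20, 50):
    d = exact_dist(p, n)
    # Lemma: P(X_n = k) <= (p/q)^(k-1) for every k >= 0
    assert all(pr <= (p / q) ** (k - 1) + 1e-12 for k, pr in d.items())
```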


Here is a trick which works when $p=1/2$. Let $S_n$ be a simple random walk on the integers started from $S_0=1$ (the Markov chain with transition probabilities $p_{k,k+1}=p_{k,k-1}=1/2$ for all $k\in \mathbb Z$). Note that the process $(X_n)$ has the exact same distribution as $(|S_n|)$. The strong law implies $S_n/n\to 0$ almost surely, hence $|S_n|/n\to 0$ almost surely; since almost sure convergence of the path is a property of the law of the whole process, the identically distributed $X_n/n\to 0$ almost surely as well.
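The distributional identity itself can be verified exactly for small $n$ by comparing the two forward recursions (a sketch; function names are mine):

```python
from collections import defaultdict

def reflected_dist(n):
    """Distribution of X_n for p = 1/2 (X_0 = 1, forced up-step at 0)."""
    probs = {1: 1.0}
    for _ in range(n):
        nxt = defaultdict(float)
        for k, pr in probs.items():
            if k == 0:
                nxt[1] += pr
            else:
                nxt[k + 1] += pr / 2
                nxt[k - 1] += pr / 2
        probs = dict(nxt)
    return probs

def abs_srw_dist(n):
    """Distribution of |S_n| for a symmetric simple random walk with S_0 = 1."""
    probs = {1: 1.0}
    for _ in range(n):
        nxt = defaultdict(float)
        for k, pr in probs.items():
            nxt[k + 1] += pr / 2
            nxt[k - 1] += pr / 2
        probs = dict(nxt)
    folded = defaultdict(float)
    for k, pr in probs.items():
        folded[abs(k)] += pr  # fold the signed walk onto the nonnegative axis
    return dict(folded)

for n in (1, 7, 20):
    a, b = reflected_dist(n), abs_srw_dist(n)
    assert set(a) == set(b) and all(abs(a[k] - b[k]) < 1e-12 for k in a)
```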


Finally, I deal with the $p>1/2$ case. Let $S_n$ be the random walk on the integers, started from $S_0=1$, that moves up with probability $p$. We shall couple $X_n$ with $S_n$ so that $X_n$ increases when $S_n$ increases, and $X_n$ decreases when $S_n$ decreases, except in the case where $X_n=0$, in which case $X_n$ increases no matter what $S_n$ does.

Let $D_n=X_n-S_n$. I claim that $\sup_n D_n$ is finite with probability one. Indeed, $D_n$ is non-decreasing, and it increases (by $2$) only when $X_n=0$ and $S_{n+1}=S_n-1$; since the chain is transient for $p>1/2$, there are only finitely many times at which $X_n=0$. Since $X_n/n=(S_n+D_n)/n$, we conclude by noting $S_n/n\stackrel{\text{a.s.}}\longrightarrow p-q$ by the SLLN, and $D_n/n\stackrel{\text{a.s.}}\longrightarrow 0$ since $\sup_n D_n$ is almost surely finite.
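The claimed behaviour of $D_n$ is easy to observe in a sketch of this coupling (function and parameter names are mine):

```python
import random

def coupled(p, n, seed=7):
    """Couple X (reflected) and S (free walk) on the same coin flips;
    X takes a forced up-step whenever it sits at 0."""
    rng = random.Random(seed)
    xs, ss = [1], [1]
    for _ in range(n):
        step = 1 if rng.random() < p else -1
        ss.append(ss[-1] + step)
        xs.append(xs[-1] + 1 if xs[-1] == 0 else xs[-1] + step)
    return xs, ss

xs, ss = coupled(0.6, 50_000)
ds = [x - s for x, s in zip(xs, ss)]
# D_n is non-decreasing, and it only jumps (by 2) at times when X_n = 0:
assert all(d1 <= d2 for d1, d2 in zip(ds, ds[1:]))
assert all(xs[i] == 0 for i in range(len(ds) - 1) if ds[i + 1] > ds[i])
```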

Mike Earnest
  • Yeah I spotted that $X_{n}$ has the same distribution as $|S_{n}|$ when $p=\frac{1}{2}$. That's why I guessed that $X_{n}/\sqrt{n}$ should converge to $N(0,1)$ in distribution. However I was confused because I thought that this meant that in the probability space where $X_{n}$ is defined, we would have convergence in distribution of $X_{n}/n$ to $0$. But indeed I was wrong. – Dovahkiin Aug 28 '24 at 15:53
  • Also, can you say whether taking $\tau_{N,0}$ to be the first time after $N$ that the walk hits $0$ suffices in the coupling given by Wei? – Dovahkiin Aug 28 '24 at 15:54
  • I wanted to know if in Wei's argument, we can simplify the stopping time $\tau_{N,0}$ to be the first hitting time of $0$ after $N$ instead of it being the first hitting time of $0$ after the first hitting time of $N$. – Dovahkiin Aug 28 '24 at 16:12
  • I don't think that works. Let $\sigma_{N,0}$ be the stopping time you described, so $\sigma_{N,0}=\inf\{n\ge N\mid X_n=0\}$. I cannot see how you would prove $P(\sigma_{N,0}=\infty)\to 1$ as $N\to\infty$, which is a key step in Wei's proof. – Mike Earnest Aug 28 '24 at 16:19
  • Isn't $\{\sigma_{N,0}<\infty\}\subseteq \{\text{there exists a }0\text{ after time }N\}$, and the probability of the latter goes to $0$? – Dovahkiin Aug 28 '24 at 16:23
  • I see the error in my thought. Thanks – Dovahkiin Aug 28 '24 at 16:25
  • @Dovahkiin I see it now! Yes, your version works as well. – Mike Earnest Aug 28 '24 at 16:28
  • I think I should say $\{\sigma_{N,0}<\infty\}\subseteq\{T_{last}^{0}>N\}$ and the probability of the latter goes to $0$, as $T_{last}^{0}$ is almost surely finite. – Dovahkiin Aug 28 '24 at 16:30