
Consider the Markov chain on $\{0, 1,2,...\}$ with an absorbing state at $0$. We start at $X_0 = 10$. At step $t$, we move to the left with probability $0.4$ and to the right with probability $0.6$. I want to rigorously prove that the absorption probability $q_{10} \neq 1$.

First, I calculated the absorption probability using the recurrence

$$q_n = 0.4 q_{n-1} + 0.6 q_{n+1}$$

Using the characteristic equation, I find that the general solution of the recurrence is $q_n = A\cdot 1^n + B\left( \frac{2}{3} \right)^{n}$. Substituting $q_0 = 1$ gives $A+B=1$, so the solution is $q_n = 1-B + B\left( \frac{2}{3} \right)^{n}$. Now all that remains is to prove $B\neq 0$. I tried calculating the expectation, and there is clearly drift to the right, but I am not sure how to rigorously show that $q_{10} \neq 1$.
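As a sanity check (my own addition, not part of the question), one can verify numerically that every member of the family $q_n = A + B(2/3)^n$ satisfies the recurrence, which is why the boundary condition alone cannot pin down $B$. The values of $A$ and $B$ below are illustrative:

```python
# Verify that q_n = A + B*(2/3)**n satisfies q_n = 0.4*q_{n-1} + 0.6*q_{n+1}
# for arbitrary constants A, B (here A = 0.3, B = 0.7, chosen arbitrarily
# subject to A + B = 1 so that q_0 = 1).
def q(n, A=0.3, B=0.7):
    return A + B * (2 / 3) ** n

for n in range(1, 50):
    lhs = q(n)
    rhs = 0.4 * q(n - 1) + 0.6 * q(n + 1)
    assert abs(lhs - rhs) < 1e-12   # the recurrence holds for every n >= 1

print("recurrence satisfied for all tested n")
```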

AspiringMat
  • I'm not sure what you are trying to do here. The reality is you only need to show that $q_1 \lt 1$, and then you get $q_m \lt 1$ for all $m\geq 1$ by the Markov property. I gave a complete solution here https://math.stackexchange.com/questions/4674446/does-an-asymmetric-one-dimensional-random-walk-come-back-home/ – user8675309 Aug 26 '24 at 00:17
  • @user8675309 I don’t understand how your argument rules out $P=1$ as a solution? Also to be clear, my solution is saying that $q_n = 1-B + B\cdot (2/3)^n $ and if you notice $(1-p)/p$ here is $2/3$. So what I am trying to show is that $B=1$ – AspiringMat Aug 26 '24 at 02:08
  • All you need is the lemma part of the link -- in words, it tells you that $\frac{2}{3}$ is an upper bound on the probability of absorption. So $P\leq \frac{2}{3}\lt 1$, which rules out $P=1$. – user8675309 Aug 26 '24 at 02:47

2 Answers


There are quite a few different ways to make a rigorous argument. Here are various ideas:

(1) You have correctly set up the system of recurrence relations and its boundary condition, and it has multiple solutions. A general theorem, e.g. see Theorem 1.3.2 in Section 1.3 of James Norris's Markov Chains, says that the collection of absorption probabilities is given by the minimal non-negative solution to the system of recurrences. That is, there exists a non-negative solution $(q_n)$ with the property that for any other non-negative solution $(q'_n)$, you have $q_n\leq q'_n$ for all $n$; and then the absorption probabilities are given by this $(q_n)$. In your case the minimal non-negative solution is obtained by putting $B=1$, giving $q_n=(2/3)^n$.
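A small numerical illustration of minimality (my own sketch, not from Norris): within the one-parameter family $q_n = (1-B) + B(2/3)^n$ found in the question, non-negativity as $n\to\infty$ forces $B \le 1$, and for each fixed $n \ge 1$ the value decreases in $B$, so $B=1$ gives the pointwise minimum:

```python
# The one-parameter family of solutions with q_0 = 1.
def q_family(n, B):
    return (1 - B) + B * (2 / 3) ** n

# For each n >= 1, scan B over [0, 1] and check that the minimum value
# in the family is attained at B = 1 (i.e. at q_n = (2/3)**n).
for n in range(1, 30):
    vals = [q_family(n, B / 10) for B in range(0, 11)]   # B = 0.0, 0.1, ..., 1.0
    assert min(vals) == q_family(n, 1.0)

print("B = 1 gives the pointwise minimal non-negative solution")
```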

(2) You mentioned considering the expectation. One way to make this into a rigorous argument is via the Strong Law of Large Numbers. Consider a random walk on $\mathbb{Z}$ without the absorbing barrier at $0$. The displacement from the initial location after $m$ steps is the sum of $m$ i.i.d. random variables with mean $0.2$. The SLLN then tells you that as $m\to\infty$, $X_m/m \to 0.2$ with probability $1$. In particular, $X_m\to\infty$ with probability $1$.

But now suppose that $q_{10}=1$. Then (applying the Markov property, or formally speaking the strong Markov property) with probability $1$, every time the chain hits $10$, it subsequently hits $0$. This is incompatible with $X_m\to\infty$. So we conclude that $q_{10}<1$.
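A quick Monte Carlo illustration of the drift (my simulation, not part of the answer): run the walk on $\mathbb{Z}$ without the barrier and check that $X_m/m$ lands near $0.2$.

```python
import random

# Simulate the unrestricted walk: +1 with probability 0.6, -1 with
# probability 0.4, started from X_0 = 10. By the SLLN, X_m/m -> 0.2.
random.seed(0)
m = 200_000
x = 10
for _ in range(m):
    x += 1 if random.random() < 0.6 else -1

print(x / m)   # close to the drift 0.2
```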

(3) Along the lines suggested by @Wei's answer, you could also consider a sequence of finite problems. Starting from $X_0=10$, let $A_N$ be the event that the chain hits $N$ before hitting $0$. For each $N>10$, you can obtain $P(A_N)$ by considering a finite set of recurrences, which has a unique solution. You will observe that $P(A_N)\to c$ as $N\to\infty$ for some strictly positive $c$.

Now notice that $A_N$ is a decreasing sequence of events, i.e. $A_{N+1}\subseteq A_N$ for all $N$. It follows from the probability axioms that for any such decreasing sequence, $P(\bigcap_N A_N) =\lim_{N\to\infty}P(A_N)$. Now, the event that the chain does not get absorbed at $0$ contains the event $\bigcap_N A_N$ (since if the walk hits $N$ before $0$ for every $N>10$, then in fact it never hits $0$). So the probability of non-absorption is at least $\lim_{N\to\infty}P(A_N)=c>0$. (As Wei comments, in fact it is exactly equal to $c$.)
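A sketch of the finite-barrier computation (my code, not from the answer; the closed form below is the standard gambler's-ruin formula with $r = (1-p)/p = 2/3$, which is the unique solution of the finite system $h_0 = 0$, $h_N = 1$, $h_k = 0.4\,h_{k-1} + 0.6\,h_{k+1}$):

```python
def p_hit_N_first(start, N, p=0.6):
    # P(A_N) = P(hit N before 0 | X_0 = start), via the gambler's-ruin
    # formula (1 - r**start) / (1 - r**N) with r = (1-p)/p.
    r = (1 - p) / p                      # = 2/3 here
    return (1 - r ** start) / (1 - r ** N)

# P(A_N) decreases in N and converges to a strictly positive limit.
for N in (20, 50, 100, 500):
    print(N, p_hit_N_first(10, N))

c = 1 - (2 / 3) ** 10                    # the limit as N -> infinity
print("limit c =", c)
```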

James Martin

In this case it's more tractable to calculate the probability that you don't hit zero.

Notice that in order to start at 10 and never hit zero, you need to hit 11 before you hit zero, then hit 12 before you hit zero, then hit 13 before you hit zero,...
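This factorization can be checked numerically (the decomposition into successive "hit $n+1$ before $0$" events is from the answer; the code is my own sketch). Each factor comes from the standard two-barrier gambler's-ruin formula with $r = 2/3$, and the product telescopes:

```python
r = 2 / 3

def hit_up_before_zero(n):
    # P(hit n+1 before 0 | currently at n): gambler's ruin on {0, ..., n+1}
    return (1 - r ** n) / (1 - r ** (n + 1))

# Multiply the factors for n = 10, 11, ...; the partial products converge
# quickly because each factor tends to 1 geometrically fast.
prod = 1.0
for n in range(10, 2000):
    prod *= hit_up_before_zero(n)

print(prod)   # the product telescopes to 1 - r**10, roughly 0.9827
```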

Wei