8

I know that for an irreducible and positive recurrent Markov chain there exists a unique stationary distribution. For a Markov chain with several closed communicating classes (say $C_1$ and $C_2$) there exist many stationary distributions (the convex combinations of $\pi_1$ and $\pi_2$).

But what about this one, where states $\{1,2,3\}$ communicate with each other while state $4$ can access the other states but cannot be reached from them?

$$ P = \begin{pmatrix} 0.5 & 0.5 & 0 & 0 \\ 0.3 & 0.3 & 0.4 & 0 \\ 0 & 0.5 & 0.5 & 0 \\ 0 & 0 & 0.5 & 0.5 \end{pmatrix} $$

Kathy
  • 97
  • Did you try to solve the equation for a distribution m to be stationary, that is, mP=m? – Did Mar 11 '15 at 08:35

1 Answer

5

One persistent class and one transient state. It seems you already know that you have two classes in this chain, $C_1 = \{1,2,3\}$ and $C_2 = \{4\}.$ Class $C_2$ (or, if you like, the element 4) is called transient. Once a transition is made from state 4 to state 3, the chain stays forever in class $C_1$, which is called a persistent class.

Considering just $C_1$ as its own chain, one can find its stationary (or steady-state) distribution. That will tell you the proportion of time the chain spends in each of its states 1, 2, and 3 over the long run. Over the long run, the chain spends essentially none of its time in state 4. (Starting in state 4, the average 'time to absorption' into $C_1$ is 1/0.5 = 2 steps.)
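The computation for $C_1$ can be sketched numerically. This is a minimal illustration (not from the original answer) that solves the balance equations $\pi P_1 = \pi$, $\sum_i \pi_i = 1$ for the $3 \times 3$ submatrix of the persistent class:

```python
import numpy as np

# Transition matrix restricted to the persistent class C1 = {1, 2, 3}.
P1 = np.array([[0.5, 0.5, 0.0],
               [0.3, 0.3, 0.4],
               [0.0, 0.5, 0.5]])

# Solve pi @ P1 = pi with sum(pi) = 1: drop one (redundant) balance
# equation and replace it with the normalization constraint.
A = np.vstack([(P1.T - np.eye(3))[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)  # [0.25, 0.41666..., 0.33333...] = (3/12, 5/12, 4/12)

# From state 4 the chain leaves with probability 0.5 each step, so the
# time to absorption into C1 is geometric with mean 1/0.5 = 2 steps.
mean_absorption_time = 1 / 0.5
```

The result $(3/12, 5/12, 4/12)$ agrees with solving the balance equations by hand: $\pi_1 = 0.6\,\pi_2$ and $\pi_3 = 0.8\,\pi_2$, normalized to sum to 1.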

Two states between which there is no intercommunication. Another case arises if there are two classes $C_1$ and $C_2$ between which transitions in neither direction are possible. Then it is simpler to consider the two classes as two separate chains and find the stationary distribution of each.
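To illustrate the two-class case, here is a small sketch with a hypothetical block-diagonal transition matrix (the specific numbers are my own example, not from the question): each class has its own stationary distribution, and any convex combination of the two, padded with zeros, is stationary for the full chain.

```python
import numpy as np

# Hypothetical chain with two non-communicating classes C1 = {1,2}
# and C2 = {3,4}; the transition matrix is block-diagonal.
P = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.2, 0.8],
              [0.0, 0.0, 0.6, 0.4]])

pi1 = np.array([5/6, 1/6, 0.0, 0.0])  # stationary on C1, padded with zeros
pi2 = np.array([0.0, 0.0, 3/7, 4/7])  # stationary on C2, padded with zeros

# Any convex combination a*pi1 + (1-a)*pi2 is again stationary for P.
a = 0.3
pi = a * pi1 + (1 - a) * pi2
print(np.allclose(pi @ P, pi))  # True for every a in [0, 1]
```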

Importance of resolving a chain into classes. Notice that in both cases, in order to talk about stationary distributions, it is convenient to resolve (or 'reduce') the chain into classes. The methods of finding stationary distributions of persistent classes should be applied to individual classes only. For this reason most textbooks refer to stationary distributions only for classes, and not for reducible chains.

More complex cases. This is not the place to give a complete guide to all possible combinations of persistent and transient classes. I will just mention one more-complex case. In the famous gambler's ruin problem, suppose A starts with 3 dollars and B starts with 2 dollars. On each toss of a coin the winner pays the loser a dollar, and the game continues until one of the players is broke. Number the states by how much money A has at each step in the process. The possible states are 0 (A is broke), 1, 2, 3, 4, and 5 (B is broke). In this chain state 0 is a singleton absorbing class, and state 5 is another singleton absorbing class. The remaining states $\{1,2,3,4\}$ form a third class, which is transient.

Here we do not talk about stationary distributions at all. The interesting question is the probability $p$ that A wins (the chain is 'absorbed' into state 5); then the probability of absorption into state 0 is $1 - p$. These probabilities depend on the starting state (3, as described above). Also of interest are the expected length of time until the game ends and the expected number of times each transient state is visited. These also depend on the starting state.
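These absorption quantities can be computed with the standard fundamental-matrix approach $N = (I - Q)^{-1}$, where $Q$ holds the transitions among the transient states. A minimal sketch for the fair-coin game above (my own illustration of the technique, not code from the answer):

```python
import numpy as np

# Gambler's ruin with a fair coin: states 0..5, A starts with 3 dollars.
# Q = transitions among the transient states {1, 2, 3, 4}.
Q = np.zeros((4, 4))
for i in range(4):          # row i corresponds to state i + 1
    if i > 0:
        Q[i, i - 1] = 0.5   # A loses a dollar
    if i < 3:
        Q[i, i + 1] = 0.5   # A wins a dollar
# R = transitions from transient states into the absorbing states {0, 5}.
R = np.zeros((4, 2))
R[0, 0] = 0.5               # state 1 -> state 0
R[3, 1] = 0.5               # state 4 -> state 5

N = np.linalg.inv(np.eye(4) - Q)  # expected visits to transient states
B = N @ R                         # absorption probabilities
t = N @ np.ones(4)                # expected steps until absorption

print(B[2, 1])  # P(A wins | start in state 3) = 3/5
print(t[2])     # expected game length from state 3 = 6
```

For a fair coin these match the classical closed forms: starting with $i$ of the $n = 5$ total dollars, $p = i/n$ and the expected duration is $i(n - i)$.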

In summary, for persistent classes the long-run behavior is described by the stationary distribution. For chains with both transient and persistent classes, the long-run behavior is described by the time until absorption and the expected numbers of visits to the various transient states before absorption.

BruceET
  • 52,418
  • @BruceET: Can you point me to a rigorous proof for the case of one persistent class and one transient class? Intuitively I think that if the persistent class has a unique steady-state solution $\pi_p$, then $\pi_p$ appended with zeros (corresponding to the transient class) should be the steady-state solution for the original Markov chain. – minion Nov 20 '15 at 01:53
  • This may be a matter of definitions. Seems to me some books define steady state only for persistent chains (or classes). – BruceET Nov 20 '15 at 04:29
  • For the persistent class of states 1,2,3: In R, P = matrix(c(.5,.5,0, .3,.3,.4, 0,.5,.5), byrow=T, nrow=3); g = eigen(t(P))$vectors[,1]; sg = g/sum(g); sg returns the steady-state vector $\sigma = (0.2500000, 0.4166667, 0.3333333) = (3/12, 5/12, 4/12).$ Thus $\sigma P = \sigma.$ – BruceET Oct 23 '19 at 18:50