
$A$ and $B$ play a round-based game. Each round $A$ wins with probability $\frac{1}{3}$ and $B$ with probability $\frac{2}{3}$. The loser of a round pays $1$ USD to the winner. The winner of the whole game is the one who wins all the USD from the other player. Now assume that $A$ starts with $n\geq 1$ USD and $B$ with $1$ USD. What is the winning probability of $B$? (Hint: Use recursion and the law of total probability)


Let $P(B)$ be the probability that $B$ wins the game and $P(B|(k))$ the probability that $B$ wins the game given that $B$ has $k\geq 1$ USD. So one could understand the question as asking us to compute either $P(B)$ or $P(B|(1))$. However, in both cases we must apply the law of total probability, which requires us to find a disjoint decomposition of $B$ or of $B\cap (1)$, respectively. As we don't know how the corresponding sets are defined, we can't do this.

Maybe someone can help me disentangle this problem or show me how to set up a proper recursion without knowing what the sets look like.

Philipp

3 Answers

Notation

We make the following notation local to this solution. This notation is required only because the OP asked for a completely rigorous proof.

For the situation when $B$ has $1$ USD and $A$ has $n$ USD, let us construct the corresponding sample space $\Omega_n$. We define $$ \Omega_{n} = \{(R_1,R_2,\ldots,R_m) : m \geq 1, R_i \in \{A,B\} \text{ and the game is complete after $m$ steps}\} $$

Here, $R_1,R_2,\ldots,R_m$ are letters, each either $A$ or $B$: if $R_i = A$ then $A$ won the $i$th round, and if $R_i = B$ then $B$ won the $i$th round. In addition, the tuple stops at round $m$, the first round at whose end the game has a winner.

For example, if $n=2$ then I can write down some elements of $\Omega_2$ as follows :

  • $(A)$ : So $A$ won the first round and $B$ has $0$ USD left, hence $A$ wins.

  • $(B,A,A)$ : $B$ won the first round, but $A$ won the next two. Now, $B$ has $0$ USD left, so $A$ wins the game.

  • $(B,B)$ : $B$ won both rounds, and $A$ now has $0$ USD left, hence $B$ wins the game.

  • $(B,A,B,B)$ : $B$ won the first round, $A$ won the next, and then $B$ won the next two so $A$ has $0$ USD left and $B$ wins the game.

An example of a tuple not in $\Omega_2$ is $(B)$: although $B$ won the first round, there is no winner at the end of this round. Similarly, $(B,A,B,A) \notin \Omega_2$. I hope that the description of the elements of $\Omega_n$ is now clear.

To make $\Omega_n$ into a probability space, we need a probability assignment $p_n : \Omega_n \to [0,1]$. This is easy : we know that $B$ wins each round with probability $\frac 23$ and $A$ with probability $\frac 13$. Hence, if $(R_1,\ldots,R_m) \in \Omega_n$ and $M$ is the number of $A$s in the tuple $(R_1,\ldots,R_m)$, then $$ p_n((R_1,\ldots,R_m)) = \left(\frac{1}{3}\right)^M \left(\frac 23\right)^{m-M}. $$ Now, $(\Omega_n,p_n)$ is a proper probability space for each $n \geq 1$ because we have a sample space along with a probability assignment on it. We will refer to the elements of $\Omega_n$ as "games" because that's what each element represents.
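As a quick mechanical check of these definitions, here is a small sketch of my own (not part of the answer; the helper names `game_prob` and `in_omega` are mine) that computes $p_n$ of a tuple and tests membership in $\Omega_n$:

```python
from fractions import Fraction

def game_prob(rounds):
    # Weight of a tuple of round winners: 1/3 for each A, 2/3 for each B.
    p = Fraction(1)
    for r in rounds:
        p *= Fraction(1, 3) if r == "A" else Fraction(2, 3)
    return p

def in_omega(rounds, n):
    # True iff the tuple lies in Omega_n: A starts with n USD, B with 1 USD,
    # and the first time a player hits 0 USD is exactly at the last round.
    a, b = n, 1
    for i, r in enumerate(rounds):
        if r == "A":
            a, b = a + 1, b - 1
        else:
            a, b = a - 1, b + 1
        if a == 0 or b == 0:               # game complete after round i+1
            return i == len(rounds) - 1
    return False                           # no winner yet: incomplete game
```

For $n=2$ this reproduces the examples above: $(A)$, $(B,A,A)$, $(B,B)$ and $(B,A,B,B)$ are complete games, while $(B)$ and $(B,A,B,A)$ are not, and $p_2((B,B)) = \frac49$.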

By definition of $\Omega_n$, every sample element is a game that either has $B$ as a winner or $A$ as a winner at the end. Let $P_n$ be the probability of the event that, at the end of a game in $\Omega_n$, $B$ is the winner.

We will find a way to relate $P_n$ to $P_{n-1}$, which is what the hint in the question intends.

(There are ways to solve this question without changing the value of $n$; this approach instead uses recursion in $n$, relating $P_n$ to $P_{n-1}$.) For a proof that doesn't change the value of $n$, see the other answer to this question.

We will require the analysis of another "new" game in order to set up the recursion. Here's the description of the "new" game : the rules are as before ($A$ wins each round with probability $\frac 13$, $B$ with probability $\frac 23$, and the loser of a round gives $1$ USD to the winner), but with the key difference that $A$ starts with $1$ USD, while $B$ starts with $n$ USD.

This "new" game has its own sample space $\Omega'_n$, which is given by $$ \Omega'_{n} = \{(R_1,R_2,\ldots,R_m) : m \geq 1, R_i \in \{A,B\} \text{ and the "new" game is complete after $m$ steps}\} $$

So for example, $(B) \in \Omega'_{2}$ because $B$ winning one round means that $A$ has $0$ USD at the end of this round so $B$ is the winner of the "new" game. Similarly, $(A,A),(A,B,B) \in \Omega'_{2}$ as well.

The same kind of probability assignment $p'_n$ can easily be specified for $\Omega'_n$ : $$ p'_n((R_1,\ldots,R_m)) = \left(\frac{1}{3}\right)^M \left(\frac 23\right)^{m-M}, $$ where $M$ is the number of $A$s in $R_1,\ldots,R_m$. Thus, we have another family of sample spaces $(\Omega'_n,p'_n), n \geq 1$, corresponding to the "new" game where $A$ starts with $1$ USD and $B$ starts with $n$ USD. We refer to elements of $\Omega'_n$ as "new" games.

Let $Q_n$ be the probability (in $\Omega'_n$) that $A$ is the winner at the end of a "new" game. For example, $A$ wins at the end of the "new" game $(A,A,A)$.

We will find ways to calculate both $P_n$ and $Q_n$ using recursive formulas dependent on each other.


Let $E_n \subset \Omega_n$ be the event that $B$ wins the usual game and $F_n \subset \Omega'_n$ be the event that $A$ wins the "new" game. Observe that $P_n$ is the probability (in $\Omega_n$) that $E_n$ occurs, and similarly $Q_n$ is the probability that $F_n$ occurs.

Let $(R_1,\ldots,R_m) \in E_n$. Then, $B$ wins at the end of the game, which means that $A$ has $0$ USD left. During every such game, there must therefore be a first point at which $A$ has exactly $1$ USD left and $B$ has $n$ USD. Define $f_n: E_n \to \mathbb N$ as $f_n((R_1,\ldots,R_m)) = i$, where $i$ is the smallest round after whose completion $A$ has $1$ USD left.

For example, consider the game $$ (B,A,B,B,A,B,B) \in \Omega_3. $$ In this game, $B$ wins at the end, but there are two occasions when $A$ has exactly $1$ USD left : at the end of the fourth round and at the end of the sixth round. In this case, $f_3((B,A,B,B,A,B,B)) = 4$.

Similarly, $$ (B,B) \in \Omega_2 \implies f_2((B,B)) = 1. $$

Now, consider the following "split" functions $G_{1,n}$ and $G_{2,n}$ defined on $E_n$ : if $l \in E_n$ is a game, then $G_{1,n}(l)$ is the tuple formed by taking all the rounds of the game up to the end of the $f_n(l)$th round. Whatever comes after that is $G_{2,n}(l)$.

For example, $$ f_3((B,A,B,B,A,B,B)) = 4\\ \implies G_{1,3}(B,A,B,B,A,B,B) = (B,A,B,B),G_{2,3}((B,A,B,B,A,B,B)) = (A,B,B)\\ f_2((B,B)) = 1 \\ \implies G_{1,2}((B,B)) = (B), G_{2,2}((B,B)) = (B) $$
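The split point $f_n$ and the two halves $G_{1,n}, G_{2,n}$ are easy to compute mechanically; the following sketch (my own illustration, with a hypothetical function name) reproduces the two examples above:

```python
def split_game(rounds, n):
    # For a game in E_n (A starts with n USD, B with 1 USD, B wins),
    # find the first round after which A has exactly 1 USD left and
    # return (G1, G2) = (rounds up to that point, everything after).
    a = n
    for i, r in enumerate(rounds, start=1):
        a += 1 if r == "A" else -1
        if a == 1:
            return rounds[:i], rounds[i:]
    raise ValueError("A never drops to 1 USD; tuple is not in E_n")
```

So `split_game(("B","A","B","B","A","B","B"), 3)` recovers the pair $((B,A,B,B),(A,B,B))$, and concatenating the two halves inverts the split.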


What follows is the first central claim of this entire argument, along with a proof (which can be skipped if it seems self-evident).

The "split" map $l \to (G_{1,n}(l), G_{2,n}(l))$ is a bijection between $E_{n}$ and $E_{n-1} \times (\Omega'_n \setminus F_n)$.

Here is an example of this bijection : take $l = (B,A,B,B,A,B,B) \in E_3$. Then, if you look just at $G_{1,3}(l) = (B,A,B,B)$, then by itself, $(B,A,B,B)$ represents an entire game in which $A$ had $2$ USD, $B$ had $1$ USD, and $B$ won. That is, $(B,A,B,B) \in E_{2}$. On the other hand, $G_{2,3}(l) = (A,B,B)$ is a "new" game in which $A$ had $1$ USD, $B$ had $3$ USD, and $A$ lost, which is why $(A,B,B) \in (\Omega'_3 \setminus F_3)$.

Proof : Let $(R_1,\ldots,R_m)\in E_n$. By definition , $$G_{1,n}((R_1,\ldots,R_m))=(R_1,\ldots,R_{f_n((R_1,\ldots,R_m))}).$$

Now, imagine that we were playing the game in which $A$ has $n$ USD to begin with. If the sequence of winners is given by $R_1,\ldots,R_{f_n((R_1,\ldots,R_m))}$ then at the end of round $f_n((R_1,\ldots,R_m))$ , $A$ has $1$ USD and $B$ has $n$ USD.

However, if $A$ instead had $n-1$ USD to begin with, then the same sequence of rounds $(R_1,\ldots,R_{f_n((R_1,\ldots,R_m))})$ played in that order clearly results in $A$ having $0$ USD at the end of round $f_n((R_1,\ldots,R_m))$ (because $A$ has one less dollar). It follows that $G_{1,n}((R_1,\ldots,R_m)) \in E_{n-1}$.

Now consider, by definition, $$ G_{2,n}((R_1,\ldots,R_m))=(R_{f_n((R_1,\ldots,R_m))+1},\ldots,R_m). $$ In the usual game, this is the phase of play which begins with $A$ having $1$ USD and $B$ having $n$ USD, and finishes with $A$ losing. However, we can treat this phase as a "new" game because $A$ is effectively starting from $1$ USD and loses from that point on. Hence, it is obvious that $$ G_{2,n}((R_1,\ldots,R_m)) \in \Omega'_n \setminus F_{n} $$ because $F_{n}$ consists of those "new" games in which $A$ wins, but we want $B$ to win instead, so we take the complement.

Now, there is an obvious map the other way, the "concatenation" map : given $(R_1,\ldots,R_m) \in E_{n-1}$ and $(S_1,\ldots,S_k) \in \Omega'_n \setminus F_{n}$, just play the "new" game after the old game. That is, consider the element $$ (R_1,\ldots,R_m,S_1,\ldots,S_k). $$ We claim that this element is in $E_n$. But that's obvious : if $B$ has $1$ USD and $A$ has $n$ USD, then because $(R_1,\ldots,R_m) \in E_{n-1}$, we see that at the end of round $R_m$, $B$ has $n$ USD and $A$ has $1$ USD left. Now, from that point on, $(S_1,\ldots,S_k)$ is chosen so that $A$ loses from this point on. All in all, the entire tuple represents a game in which $B$ starts with $1$ USD, $A$ starts with $n$ USD, and yet $B$ wins. Hence, the game belongs in $E_{n}$.

It is not difficult to see that the "split" and "concatenation" maps that have been described between $E_n$ and $E_{n-1} \times (\Omega'_n \setminus F_n)$ are inverses of each other. We are done with the proof of the main result. $\blacksquare$


Now, with a proof just like the above, we make a similar but equally important claim. Let us define the analogous notation.

Let $(R_1,\ldots,R_m) \in F_n$. Then, $A$ wins at the end of the game, which means that $B$ has $0$ USD left. During every such game, there must be a first point at which $B$ has exactly $1$ USD left and $A$ has $n$ USD. Define $f'_n: F_n \to \mathbb N$ as $f'_n((R_1,\ldots,R_m)) = i$, where $i$ is the smallest round after whose completion $B$ has $1$ USD left.

Consider the following "split" functions $G'_{1,n}$ and $G'_{2,n}$ defined on $F_n$ : if $l \in F_n$ is a game, then $G'_{1,n}(l)$ is the tuple formed by taking all the rounds of the game up to the end of the $f'_n(l)$th round. Whatever comes after that is $G'_{2,n}(l)$.

The second central claim follows, with a proof entirely analogous to that of the first claim.

The "split" map $l \to (G'_{1,n}(l), G'_{2,n}(l))$ is a bijection between $F_{n}$ and $F_{n-1} \times (\Omega_n \setminus E_n)$.


As a result of these bijections, we derive the following equations.

We have $P_1= \frac 23, Q_1 = \frac 13$. Furthermore, for every $n \geq 2$, $$ P_n = P_{n-1}(1-Q_{n}) \\ Q_n = Q_{n-1}(1-P_n) $$

Proof : We observe the following "probability-preserving property" of the "split" map we defined in our first claim. Let $(R_1,\ldots,R_m) \in E_n$ and let $M$ be the number of $A$s in $(R_1,\ldots,R_m)$, with $M_1,M_2$ being the number of $A$s in $G_{1,n}((R_1,\ldots,R_m))$ and $G_{2,n}((R_1,\ldots,R_m))$ respectively.

Because the "split" map literally splits the tuple $(R_1,\ldots,R_m)$ into two parts, it is obvious that $M = M_1+M_2$. Therefore, $$ p_n((R_1,\ldots,R_m)) = \left(\frac 13\right)^M \left(\frac 23\right)^{m-M} \\ = \left[\left(\frac 13\right)^{M_1} \left(\frac 23\right)^{f_n((R_1,\ldots,R_m))-M_1}\right]\left[\left(\frac 13\right)^{M_2} \left(\frac 23\right)^{m-f_n((R_1,\ldots,R_m))-M_2}\right] \\ = p_{n-1}(G_{1,n}((R_1,\ldots,R_m))) \times p'_{n}(G_{2,n}((R_1,\ldots,R_m))) $$

Now, because of the "concatenation" side of the bijection established in the first claim and this probability preserving property, \begin{align} &P_n \\&= \sum_{(R_1,\ldots,R_m) \in E_n} p_n(R_1,\ldots,R_m) \\&= \sum_{(S_1,\ldots,S_k) \in E_{n-1},(T_1,\ldots,T_l) \in \Omega'_n \setminus F_n} p_n(S_1,\ldots,S_k,T_1,\ldots,T_l) \\ &= \sum_{(S_1,\ldots,S_k) \in E_{n-1},(T_1,\ldots,T_l) \in \Omega'_n \setminus F_n}p_{n-1}((S_1,\ldots,S_k)) \times p'_{n}((T_1,\ldots,T_l)) \\ &= \left[\sum_{(S_1,\ldots,S_k) \in E_{n-1}} p_{n-1}((S_1,\ldots,S_k)) \right]\left[\sum_{(T_1,\ldots,T_l) \in \Omega'_n \setminus F_n} p'_{n}((T_1,\ldots,T_l))\right] \\&= P_{n-1}(1-Q_{n}) \end{align}
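The probability-preserving property is just the observation that the weight of a tuple factors over any split of it; a self-contained check on the running example (a sketch of mine, helper name hypothetical):

```python
from fractions import Fraction

def game_prob(rounds):
    # Weight of a tuple of round winners: 1/3 per A, 2/3 per B.
    p = Fraction(1)
    for r in rounds:
        p *= Fraction(1, 3) if r == "A" else Fraction(2, 3)
    return p

# l = (B,A,B,B,A,B,B) in E_3 splits at f_3(l) = 4 into G1 and G2:
l = ("B", "A", "B", "B", "A", "B", "B")
g1, g2 = l[:4], l[4:]
# p_3(l) = p_2(g1) * p'_3(g2): all three sides just count As and Bs.
assert game_prob(l) == game_prob(g1) * game_prob(g2)
```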

The proof of the other equation is similar and follows from the second central claim.


Thus, we have the equations $$ P_{n}=P_{n-1}(1-Q_n) \\ Q_n = Q_{n-1}(1-P_n) $$ Solving these equations, $$ P_{n} = P_{n-1}(1-Q_{n-1}(1-P_n)) = P_{n-1} - P_{n-1}Q_{n-1} + P_{n-1}Q_{n-1}P_n \\ \implies P_{n} =\frac{P_{n-1}(1-Q_{n-1})}{1-P_{n-1}Q_{n-1}} \\ Q_n = \frac{Q_{n-1}(1-P_{n-1})}{1-P_{n-1}Q_{n-1}} $$

This along with the initial values $$ P_1 = \frac 23, Q_1 = \frac 13 $$ furnishes the answer (the initial values are obvious). For example, $$ P_2 = \frac{4}{7} , Q_2 = \frac 17 \\ P_3 = \frac{8}{15}, Q_3 = \frac{1}{15} $$

But now, we begin to see a pattern here. Indeed, it is not difficult to observe that $$ P_i = \frac{2^i}{2^{i+1}-1}, Q_i = \frac{1}{2^{i+1}-1} $$

These can easily be proved by induction, providing the answer to the question.
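As a numerical cross-check (my own sketch, using exact rational arithmetic), iterating the recursion reproduces the values above and matches the closed form:

```python
from fractions import Fraction

def pq(n):
    # Iterate P_i = P_{i-1}(1-Q_{i-1})/(1-P_{i-1}Q_{i-1}) and
    # Q_i = Q_{i-1}(1-P_{i-1})/(1-P_{i-1}Q_{i-1}) from P_1=2/3, Q_1=1/3.
    p, q = Fraction(2, 3), Fraction(1, 3)
    for _ in range(n - 1):
        d = 1 - p * q
        p, q = p * (1 - q) / d, q * (1 - p) / d  # RHS uses the old p, q
    return p, q
```

For example, `pq(2)` gives $(\frac47,\frac17)$ and `pq(3)` gives $(\frac8{15},\frac1{15})$, and for every tested $i$ the pair equals $\left(\frac{2^i}{2^{i+1}-1},\frac{1}{2^{i+1}-1}\right)$.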

  • The problem with your approach is that the reasoning behind your main assumption $P_{n}=P_{n-1}(1-Q_n)$ remains very vague. Though it makes intuitive sense, I don't see a mathematical reason why this equation holds. – Philipp May 16 '23 at 21:30
  • @Philipp All right. At the expense of introducing plenty of notation, I'll try to have a modification up by the end of today. – Sarvesh Ravichandran Iyer May 17 '23 at 02:14
  • The probability space you have defined completely ignores the games which continue forever. So it doesn't fit the experiment we are talking about. Maybe there is some good reason that justifies this, something like "all those games must have probability $0$", but I am pretty sure that this reason requires more measure/probability theory. As this question is stated in a way that we should only use "basic" tools, like conditional probabilities, I think that it is simply not possible to justify any recursive equation with the tools at hand. (In fact, this is an old question from first year) – Philipp May 17 '23 at 10:44
  • Ok, I do agree that I'm counting out "infinite" games, but if you believe you can avoid measure theory at all, then there's a problem because if you wish to include infinite games, your sample space is uncountable and hence doesn't admit any p.m.f on it which isn't countably supported. So you cannot place any kind of "discrete" probability measure on it unless you wish to ignore those infinite sequences (i.e. give them "probability zero"). Giving them probability $0$ makes sense because there must be infinitely many $A$s and $B$s in such a sequence. – Sarvesh Ravichandran Iyer May 17 '23 at 11:12
  • It is possible to modify the argument by including these infinite sequences (which don't finish) in both $\Omega_n$ and $\Omega'_n$, and then giving them all probability $p_n(l) = 0$ for $l$ an infinite sequence. That $p_n(l)=0$ also fits the experiment as far as I can see it because there must be infinitely many $A$s and $B$s in such a sequence and extending the usual $p_n$ formula logically to these sequences gives $0$. We are "including them" and respecting the experiment, but simultaneously also justifying that we don't really need to study them. Makes sense? – Sarvesh Ravichandran Iyer May 17 '23 at 11:15
  • Though defining the probability of infinite games by $0$ creates a well defined prob. measure, it's a very strong assumption to say that the infinite games have no effect. How do we justify this assumption? Sure, we can do it intuitively by saying "it is very unlikely that the game lasts forever...", but this is exactly what I am trying to avoid. Moreover, just saying "extending the usual $p_n$ formula logically [...]" is no valid argument. You start from a countable probability space and then derive properties about elements that live in a completely different (uncountable) prob. space. – Philipp May 17 '23 at 12:07
  • What we could do, and I guess that you had something similar in mind but carried it out a bit too hand-wavily, is to assume from the beginning that there already exists a well-defined probability space which consists of the uncountable sample set $\Omega:=\{A,B\}^{\mathbb{N}}$ and already comes along with a prob. measure $p$ that matches the requirement that $A$ wins a round with $\frac{1}{3}$ and $B$ wins a round with $\frac{2}{3}$. – Philipp May 17 '23 at 12:11
  • In particular, we could assume that this $p$ can be defined by assigning only those $\omega$'s that represent a game that ends in round $m\geq1$ the probability $p(\omega)=\left(\frac{1}{3}\right)^{m-l}\left(\frac{2}{3}\right)^l$. Here $l$ denotes the number of rounds won by $B$. Based on these assumptions one could try to show that the probability of those $\omega$'s that represent infinite games must be $0$. If we have successfully proven this statement, then it is no restriction to continue with a countable probability space. – Philipp May 17 '23 at 12:13
  • But this reasoning includes a lot of ifs, so I still believe that questions of this type, which are for some reason very popular in first-year probability/stochastics classes, are not well defined without further assumptions. (Sorry for the lengthy comment) – Philipp May 17 '23 at 12:16
  • @Philipp The problem is that you can't insert all the $\Omega_n$ into a single probability space. Each one is a game (and hence probability space) by itself for a different value of $n$. You can only treat them as separate probability spaces. That's why putting a measure on $\{A,B\}^{\mathbb N}$ and extracting the $\Omega_n/\Omega'_n$ out of it won't work. I still believe that the best way to work this out is to justify that assigning infinite tuples a p.m.f value of $0$ is an accurate representation of the experiment. These are popular in first year because the assumption ... – Sarvesh Ravichandran Iyer May 18 '23 at 10:46
  • ... that infinite tuples are to be given p.m.f values of $0$ is not inspired by measure-theory, but rather by the infinite number of occurrences of $A,B$ in each such game. I spoke to some undergraduates about this question and they find that assigning $0$ weight to infinite tuples is an accurate representation of the experiment. I'll have to read up what good books say about it. Implicitly, the books have to assume this, otherwise they all fall in the measure theory trap. – Sarvesh Ravichandran Iyer May 18 '23 at 10:48

$\def\eqdef{\stackrel{\text{def}}{=}}$ Hint

If $\ p_k=P(B\,|\,(k)\,)\ $, then $\ p_k\ $ must satisfy the following recursion: $$ p_k=\frac{p_{k-1}}{3}+\frac{2p_{k+1}}{3}\ , $$ with boundary conditions $\ p_0=0, p_{n+1}=1\ $.
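The recursion plus its boundary conditions determines all the $p_k$. One concrete way to see this (a sketch of mine, not the answerer's code) is a shooting method: rewrite the recursion as $p_{k+1}=\frac{3p_k-p_{k-1}}{2}$, express every $p_k$ as a multiple of the unknown $p_1$, and fix $p_1$ from the condition $p_{n+1}=1$.

```python
from fractions import Fraction

def p1(n):
    # p_k = p_{k-1}/3 + 2 p_{k+1}/3  <=>  p_{k+1} = (3 p_k - p_{k-1}) / 2.
    # With p_0 = 0 and p_1 unknown, each p_k equals c_k * p_1 for a constant c_k.
    c_prev, c_cur = Fraction(0), Fraction(1)   # c_0 = 0, c_1 = 1
    for _ in range(n):                         # advance to c_{n+1}
        c_prev, c_cur = c_cur, (3 * c_cur - c_prev) / 2
    return 1 / c_cur                           # p_{n+1} = c_{n+1} * p_1 = 1
```

This returns $\frac23, \frac47, \frac8{15}, \dots$ for $n = 1, 2, 3, \dots$, matching $\frac{2^n}{2^{n+1}-1}$.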

Reply to OP's query in comment below

There's a hidden assumption underlying the above equation, which was almost certainly intended, but not explicitly stated, by the setter of the problem you've described. From the way you framed your question, I had assumed you were aware of this. From your query, however, I'm now guessing that your awareness of this assumption is only partial. The giveaway is your referral to $\ (k)\ $ as a "set". However, there's no single event at which player $B$ can have $\ k\ $ dollars, since he or she could have that amount after any round beyond the $\ (k-1)^\text{th}\ $.

What you write as $\ P(B\,|\,(k)\,)\ $ should actually be $\ P\big(\mathcal{B}\,\big|\,K_r=k\,\big)\ $, where $\ \mathcal{B}\ $ is the event that player $B$ wins, and $\ K_r\ $ is the number of dollars $B$ has after the $\ r^\text{th}\ $ round. If you don't make the hidden assumption I refer to above, then, as the problem is currently given, $\ P\big(\mathcal{B}\,\big|\,K_r=k\,\big)\ $ is not necessarily independent of $\ r\ $. Once you make this assumption, however, it becomes intuitively obvious that $\ P\big(\mathcal{B}\,\big|\,K_r=k\,\big)\ $ is independent of $\ r\ $ (at least for $\ r\ge k-1\ $. If $\ r< k-1\ $, then $\ \big\{\,K_r=k\,\big\}=\varnothing\ $ is a null event, and $\ P\big(\mathcal{B}\,\big|\,K_r=k\,\big)\ $ is meaningless).

Let $$ W_r=\cases{\hspace{0.8em}1&if $B$ wins round $\ r\ $\\ -1&if $B$ loses round $\ r\ $.} $$ Then $\ K_r=K_{r-1}+W_r\ $, provided $\ 1\le K_{r-1}\le n\ $. The hidden assumption referred to above is that $$ W_1,W_2,\dots,W_r,\dots $$ are independent. Then if the sequence $\ k_1,k_2,\dots, k_r\ $ satisfies the conditions $\ k_1=2, 1\le k_i\le n,\ $ and $\ \big|k_i-k_{i-1}\big|=1\ $ for $\ i=2,3,\dots,r\ $, then \begin{align} P\left(K_r=k_r\,\left|\,\bigcap_{i=1}^{r-1}\big\{K_i=k_i\big\}\right.\right)&=\frac{P\left(\bigcap\limits_{i=1}^r\big\{K_i=k_i\big\}\right)}{P\left(\bigcap\limits_{i=1}^{r-1}\big\{K_i=k_i\big\}\right)}\\ &=\frac{P\left(\big\{W_1=1\big\}\cap\bigcap\limits_{i=2}^r\big\{W_i=k_i-k_{i-1}\big\}\right)}{P\left(\big\{W_1=1\big\}\cap\bigcap\limits_{i=2}^{r-1}\big\{W_i=k_i-k_{i-1}\big\}\right)}\\ &=\frac{\prod\limits_{i=2}^rP\big(\big\{W_i=k_i-k_{i-1}\big\}\big)}{\prod\limits_{i=2}^{r-1}P\big(\big\{W_i=k_i-k_{i-1}\big\}\big)}\\ &=P\big(\,W_r=k_r-k_{r-1}\,\big)\\ &=P\big(\,K_r=k_r\,\big|\,K_{r-1}=k_{r-1}\,\big) \end{align} Also, \begin{align} P\big(\,K_r=k\,\big|\,K_{r-1}=0\,\big)&=\cases{1&if $\ k=0$\\ 0&otherwise}\\ P\big(\,K_r=k\,\big|\,K_{r-1}=n+1\,\big)&=\cases{1&if $\ k=n+1$\\ 0&otherwise} \end{align} Thus, under the above assumption, $\ K_r\ $ is a time-homogeneous Markov chain, with initial state $\ K_0=1\ $ and $\ (n+2)\times(n+2)\ $ transition matrix $\ P\ $ given by \begin{align} P_{ij}&\eqdef P\big(\,K_{r+1}=j\,\big|\,K_r=i\,\big)\\ &=\cases{\frac{2}{3}&if $\ 2\le j=i+1\le n+1$\\ \frac{1}{3}&if $\ 0\le j=i-1\le n-1$\\ 1&if $\ j=i\in\{0,n+1\}$\\ 0&otherwise,} \end{align} or $$ P=\pmatrix{1&0&0&0&\dots&\dots&0&0&0&0\\ \frac{1}{3}&0&\frac{2}{3}&0&\dots&\dots&0&0&0&0\\ 0&\frac{1}{3}&0&\frac{2}{3}&\dots&\dots&0&0&0&0\\ \vdots&&\ddots&\ddots&\ddots&&\vdots&\vdots&\vdots&\vdots\\ \vdots&\vdots&&\ddots&\ddots&\ddots&&\vdots&\vdots&\vdots\\ \vdots&\vdots&\vdots&&\ddots&\ddots&\ddots&&\vdots&\vdots\\ \vdots&\vdots&\vdots&\vdots&&\ddots&\ddots&\ddots&&\vdots\\ 0&0&0&0&\dots&\dots&\frac{1}{3}&0&\frac{2}{3}&0\\ 0&0&0&0&\dots&\dots&0&\frac{1}{3}&0&\frac{2}{3}\\ 0&0&0&0&\dots&\dots&0&0&0&1}\ . $$ This Markov chain has two absorbing states, at $\ k=0\ $ and $\ k=n+1\ $, and both of these are reachable from any of the other states, all of which are transient. The theory of such chains tells us that $\ P^r\ $ converges to a limit $\ P_\infty\ $ as $\ r\rightarrow\infty\ $, and \begin{align} P_\infty&=\pmatrix{1&0&\dots&0&0\\ 1-p_1&0&\dots&0&p_1\\ 1-p_2&0&\dots&0&p_2\\ \vdots&\vdots&&\vdots&\vdots\\ 1-p_n&0&\dots&0&p_n\\ 0&0&\dots&0&1}\\ &=(\mathbf{1}-p)\,\varepsilon_0^T+p\,\varepsilon_{n+1}^T\ , \end{align} where $\ \varepsilon_j\ $ is the $\ (n+2)\times1\ $ column vector whose $\ j^{\,\text{th}}\ $ entry is $\ 1\ $ and all of whose other entries are $\ 0\ $, $\ \mathbf{1}\ $ the $\ (n+2)\times1\ $ column vector all of whose entries are $\ 1\ $, and $\ p\ $ the $\ (n+2)\times1\ $ column vector whose $\ j^{\,\text{th}}\ $ entry is $\ p_j\eqdef P\big(\mathcal{B}\,\big|\,K_r=j\,\big)\ $. You can now obtain the above recursion from the fact that for $\ 1\le k\le n\ $ \begin{align} p_k&=\varepsilon_k^Tp\\ &=\varepsilon_k^TP_\infty\varepsilon_{n+1}\\ &=\varepsilon_k^TPP_\infty\varepsilon_{n+1}\\ &=\left(\frac{1}{3}\varepsilon_{k-1}+\frac{2}{3}\varepsilon_{k+1}\right)^TP_\infty\varepsilon_{n+1}\\ &=\frac{p_{k-1}}{3}+\frac{2p_{k+1}}{3}\ . \end{align} Once you become familiar with the properties of time-homogeneous Markov chains, all the above rigmarole becomes completely redundant. You can simply state that the independence of the $\ W_r\ $ implies that $\ K_r\ $ is a time-homogeneous Markov chain and write down the recursion without further ado.
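The limit $P_\infty$ can also be approximated numerically by raising the transition matrix to a large power; a plain-Python sketch of my own (names hypothetical) that reads off $p_1$, the probability that $B$ wins starting from $K_0=1$:

```python
def b_win_prob(n, power=256):
    # States 0..n+1 track B's capital; 0 and n+1 are absorbing.
    size = n + 2
    P = [[0.0] * size for _ in range(size)]
    P[0][0] = P[n + 1][n + 1] = 1.0
    for i in range(1, n + 1):
        P[i][i + 1] = 2 / 3        # B wins the round
        P[i][i - 1] = 1 / 3        # B loses the round

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(size))
                 for j in range(size)] for i in range(size)]

    Q = P
    for _ in range(power - 1):     # Q approximates P_infinity
        Q = matmul(Q, P)
    return Q[1][n + 1]             # start at K_0 = 1, absorb at n+1
```

For $n=2$ this gives approximately $\frac47\approx 0.5714$, agreeing with the solution of the recursion.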

  • What is the mathematical reasoning behind your equation? I think to argue why this equations should hold we need information about how the sets $B$ and $(k)$ are defined. – Philipp May 16 '23 at 14:14

Answer:† $\frac{2^{n}}{2^{n+1}-1}$.


Fix $n\ge 1$.

Suppose we're at some point of the game when $A$ has $m$ dollars and $B$ has $\left(n+1-m\right)$ dollars ($m\in\left\{ 0,1,2,\dots,n,n+1\right\} $).

Let $p_{m}$ be the probability that $B$ wins the game from this point. (So, $p_{n}$ will be our answer.)

Then

  • $p_{n+1}=0$.
  • $p_{0}=1$.
  • For every $m\in\left\{ 1,2,\dots,n\right\} $, $p_{m}=\frac{2}{3}p_{m-1}+\frac{1}{3}p_{m+1}$ or $p_{m-1}\overset{\star}{=}\frac{3}{2}\left(p_{m}-\frac{1}{3}p_{m+1}\right)$.

So,

  • $p_{n-1}\overset{\star}{=}\frac{3}{2}\left(p_{n}-\frac{1}{3}p_{n+1}\right)=\frac{3}{2}p_{n}$.
  • $p_{n-2}\overset{\star}{=}\frac{3}{2}\left(p_{n-1}-\frac{1}{3}p_{n}\right)=\frac{3}{2}\left(\frac{3}{2}p_{n}-\frac{1}{3}p_{n}\right)=\frac{7}{4}p_{n}$.
  • $p_{n-3}\overset{\star}{=}\frac{3}{2}\left(p_{n-2}-\frac{1}{3}p_{n-1}\right)=\frac{3}{2}\left(\frac{7}{4}p_{n}-\frac{1}{2}p_{n}\right)=\frac{15}{8}p_{n}$.

We can show by induction (omitted) that for any integer $0\leq k\leq n$, $p_{n-k}=\frac{2^{k+1}-1}{2^{k}}p_{n}$.

So, $p_{n}=\frac{2^{n}}{2^{n+1}-1}p_{n-n}=\frac{2^{n}}{2^{n+1}-1}p_{0}=\frac{2^{n}}{2^{n+1}-1}$.


†I assume $n$ is an integer but if $n$ can be a non-integer, then change the answer to $\frac{2^{\lceil n \rceil}}{2^{\lceil n\rceil+1}-1}$.
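The backward recursion $\star$ can be run mechanically; this sketch of mine (exact fractions, hypothetical names) tracks each $p_m$ as a multiple of the unknown $p_n$ and then uses $p_0=1$:

```python
from fractions import Fraction

def b_wins(n):
    # Run p_{m-1} = (3/2)(p_m - p_{m+1}/3) downward from p_{n+1} = 0,
    # writing p_m = c_m * p_n. Then p_0 = c_0 * p_n = 1 fixes p_n.
    c_next, c_cur = Fraction(0), Fraction(1)   # c_{n+1} = 0, c_n = 1
    for _ in range(n):                         # step down to c_0
        c_next, c_cur = c_cur, Fraction(3, 2) * (c_cur - c_next / 3)
    return 1 / c_cur
```

The coefficients that appear are exactly $\frac32, \frac74, \frac{15}8, \dots = \frac{2^{k+1}-1}{2^k}$, and `b_wins(n)` returns $\frac{2^{n}}{2^{n+1}-1}$.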