
Apologies in advance for the length; I tried to be precise.

Consider a game in which, on each turn, you can gamble a certain amount of money on the result of a fair coin toss. If the coin comes up tails you lose what you gambled, and if it comes up heads you gain twice what you gambled (a net gain of two times your stake). This goes on for $n$ turns.

To formalize this, let $X_{1},X_{2},\dots$ be i.i.d. random variables such that $\mathbb{P}\left(X_{i}=2\right)=\mathbb{P}\left(X_{i}=-1\right)=\frac{1}{2}$ and let $\mathcal{F}_{k}=\sigma\left(X_{1},\dots,X_{k}\right)$ be the natural filtration. A gambling strategy is a sequence of non-negative random variables $C_{1},C_{2},\dots$ which is previsible relative to $\left(\mathcal{F}_{k}\right)$ (that is, $C_{k}$ is $\mathcal{F}_{k-1}$-measurable). Your initial wealth is $Y_{0}=1$ and for each $1\leq k \leq n$ your accumulated wealth is $$Y_{k}=1+\sum_{j=1}^{k}C_{j}X_{j}.$$ A gambling strategy is said to be:

  1. Legal if $C_{k}\leq Y_{k-1}$ for all $k$.
  2. Conservative if $C_{k}\leq 0.99Y_{k-1}$ for all $k$.
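
(For concreteness, here is a minimal Python sketch of this wealth process under a constant-proportion strategy $C_k = A\,Y_{k-1}$; the function name, the seed, and the choice $A=0.25$ are only for illustration.)

```python
import random

def simulate_wealth(n, A, seed=None):
    """Simulate Y_0, ..., Y_n under the constant-proportion strategy C_k = A * Y_{k-1}."""
    rng = random.Random(seed)
    Y = 1.0          # initial wealth Y_0 = 1
    path = [Y]
    for _ in range(n):
        X = 2 if rng.random() < 0.5 else -1  # P(X = 2) = P(X = -1) = 1/2
        C = A * Y                            # bet a fixed proportion of current wealth
        Y = Y + C * X                        # Y_k = Y_{k-1} + C_k X_k
        path.append(Y)
    return path

# Example: 10 rounds, betting a quarter of the current wealth each round.
print(simulate_wealth(10, A=0.25, seed=1))
```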

Now for the actual question:

  • Show that the legal strategy that maximizes $\mathbb{E}\left[\log Y_{n}\right]$ is to always gamble a constant proportion $A$ of the current wealth, and that under this strategy there is a constant $B$ such that $\mathbb{E}\left[\log Y_{n}\right]=Bn$.
  • Let $M_{k}=\log\left(Y_{k}\right)-Bk$ (where $B$ is an arbitrary constant), and show that there is a $K<\infty$ such that under any conservative strategy $\left|M_{i}-M_{i-1}\right|<K$ almost surely.

Attempted solution:

For the first question I worked in a recursive fashion. Suppose we are at time $n-1$ with $Y_{n-1}=x$ and we need to choose the proportion $A$. Define
$$V_{1}\left(x,A\right):=\mathbb{E}_{A}\left[\ln\left(Y_{n}\right)\mid Y_{n-1}=x\right].$$
By direct calculation we get
$$V_{1}\left(x,A\right)=\frac{1}{2}\ln\left(\left(1+2A\right)x\right)+\frac{1}{2}\ln\left(\left(1-A\right)x\right)=\frac{1}{2}\left(\ln\left(1+2A\right)+\ln\left(1-A\right)\right)+\ln\left(x\right).$$
Differentiating twice with respect to $A$ gives
$$\frac{\partial}{\partial A}V_{1}\left(x,A\right)=\frac{4A-1}{2\left(A-1\right)\left(2A+1\right)},\qquad\frac{\partial^{2}}{\partial A^{2}}V_{1}\left(x,A\right)=\frac{-8A^{2}+4A-5}{2\left(A-1\right)^{2}\left(2A+1\right)^{2}},$$
which shows that $A=\frac{1}{4}$ is the unique maximizer of $V_{1}$, for which
$$V_{1}\left(x,\tfrac{1}{4}\right)=\frac{1}{2}\left(\ln\left(\tfrac{3}{2}\right)+\ln\left(\tfrac{3}{4}\right)\right)+\ln\left(x\right)=\frac{1}{2}\ln\left(\tfrac{9}{8}\right)+\ln\left(x\right).$$

By similar logic, if we are at time $n-2$ with $Y_{n-2}=x$ to invest,
$$V_{2}\left(x,A\right):=\mathbb{E}_{A}\left[\ln\left(Y_{n}\right)\mid Y_{n-2}=x\right]=\frac{1}{2}\mathbb{E}_{A'}\left[\ln\left(Y_{n}\right)\mid Y_{n-1}=\left(1+2A\right)x\right]+\frac{1}{2}\mathbb{E}_{A'}\left[\ln\left(Y_{n}\right)\mid Y_{n-1}=\left(1-A\right)x\right]$$
$$=\frac{1}{2}V_{1}\left(\left(1+2A\right)x,A'\right)+\frac{1}{2}V_{1}\left(\left(1-A\right)x,A'\right)\overset{(1)}{\leq}\frac{1}{2}\left(\frac{1}{2}\ln\left(\tfrac{9}{8}\right)+\ln\left(\left(1+2A\right)x\right)\right)+\frac{1}{2}\left(\frac{1}{2}\ln\left(\tfrac{9}{8}\right)+\ln\left(\left(1-A\right)x\right)\right)$$
$$=\underbrace{\frac{1}{2}\ln\left(1+2A\right)+\frac{1}{2}\ln\left(1-A\right)+\ln\left(x\right)+\frac{1}{2}\ln\left(\tfrac{9}{8}\right)}_{*},$$
where the inequality $(1)$ follows because $A'=\frac{1}{4}$ is optimal at time $n-1$. Maximizing the expression $*$ over $A$ is exactly the same problem as at time $n-1$, so it is again maximized at $A=\frac{1}{4}$, for which
$$V_{2}\left(x,\tfrac{1}{4}\right)=\frac{1}{2}\ln\left(\tfrac{9}{8}\right)+\ln\left(x\right)+\frac{1}{2}\ln\left(\tfrac{9}{8}\right)=2\cdot\frac{1}{2}\ln\left(\tfrac{9}{8}\right)+\ln\left(x\right).$$
Continuing inductively we obtain
$$\mathbb{E}\left[\ln\left(Y_{n}\right)\right]=n\cdot\frac{1}{2}\ln\left(\tfrac{9}{8}\right)+\ln\left(Y_{0}\right)=n\cdot\frac{1}{2}\ln\left(\tfrac{9}{8}\right),$$
which is indeed of the form $\mathbb{E}\left[\ln\left(Y_{n}\right)\right]=Bn$ with $B=\frac{1}{2}\ln\left(\tfrac{9}{8}\right)$.
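
As a purely numerical sanity check of the one-step optimization (not part of the proof; the grid resolution is arbitrary), one can verify that $A=\frac{1}{4}$ maximizes $\frac{1}{2}\left(\ln(1+2A)+\ln(1-A)\right)$ and that the maximum equals $\frac{1}{2}\ln\frac{9}{8}$:

```python
import math

def growth(A):
    """One-step expected log-growth when betting the proportion A of current wealth."""
    return 0.5 * (math.log(1 + 2 * A) + math.log(1 - A))

# Grid search over A in [0, 1); the maximizer should be A = 1/4.
best_A = max((a / 10_000 for a in range(10_000)), key=growth)
print(best_A)                    # ~0.25
print(growth(best_A))            # ~0.0589
print(0.5 * math.log(9 / 8))     # 0.0588..., i.e. B = (1/2) ln(9/8)
```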

Additionally, I tried writing this "solution path" by conditioning on $\mathcal{F}_{n-1}$, that is, by doing a similar analysis for $\mathbb{E}_{A}\left[\ln\left(Y_{n}\right)\mid\mathcal{F}_{n-1}\right]$, which seems more formally accurate; however, I couldn't make it work out.

As for the second question, I showed that $M_{k}$ is a supermartingale, and I know that if it had bounded differences then, relative to a well-chosen stopping time, the Optional Stopping Theorem could be applied. Is it possible to deduce that $M_{k}$ has bounded differences by contradiction, i.e. by deriving a contradiction with the Optional Stopping Theorem if the differences were unbounded? If so, what stopping time should be used?

Sorry for the lengthy read, help would be very appreciated.

LlamaMan
  • If you win, your wealth increases by the factor $1+2A$, no? You wrote instead $1+A$ – leonbloy Jul 13 '14 at 18:21
  • If you had $1$ and you invested $A$ and won $2A$ then you got $1+A$ total after the win. – LlamaMan Jul 13 '14 at 18:51
  • But your interpretation is not what the equation says: when you win, $Y_{k}=Y_{k-1}+ C_k X_k = Y_{k-1} + 2 A Y_{k-1}$ – leonbloy Jul 13 '14 at 21:36
  • If you had $x$ prior to making a gamble and you gambled $Ax$ and won then you receive $2Ax$ which are added to your remaining $(1-A)x$ and you end up with $(1+A)x$ – LlamaMan Jul 16 '14 at 18:10
  • I repeat once more: that interpretation is NOT what your first equation $Y_{k}=1+\sum_{j=1}^{k}C_{j}X_{j}$ says. This equation does NOT say (and is inconsistent with) your interpretation that when one wins, one receives $2Ax$. So, either that equation is wrong OR your interpretation here is wrong. – leonbloy Jul 16 '14 at 18:26
  • For example, you start with wealth $Y_0=1$, bet $C_1=A Y_0$, and win the first round (so $X_1 =2$). Your new accumulated wealth is (according to the equation above) $Y_1=1+ X_1 C_1 = Y_0+ 2 A Y_0 =Y_0( 1 +2 A)$. BTW, this implies that when you win, you are paid $2 C_j$ *plus* the amount gambled $C_j$ (so $3 C_j$ in total), so that the total *net* increase in wealth is $2 C_j$ – leonbloy Jul 16 '14 at 18:39
  • If this were not the case, you'd win or lose the same (relative) amount each time, so the constraint of a "legal" game would lead you to bet as small a value as possible (that's why you get $A=0$ when you maximize). The problem only makes sense if the game has some advantage for you. – leonbloy Jul 16 '14 at 18:48
  • @leonbloy You are absolutely right, my interpretation was indeed incorrect; thanks for repeating your explanation. I will adjust the numbers in my previous calculations and see if this leads to a more sensible result for the first question. Regardless, I would still very much love for someone to provide an answer to the first part that is phrased in the "language" of conditioning on sigma-algebras rather than on the events $Y_{k}=x$, and the second question also remains. – LlamaMan Jul 16 '14 at 18:50
  • I corrected the calculations and it seems to work out fine now (though I'd appreciate review of the calculations). I would still very much appreciate help formalizing the first question using conditioning on Sigma-Algebras and there is also the second question. (I asked a friend to offer a bounty for these). – LlamaMan Jul 16 '14 at 19:26
  • The second question is way easier than you're thinking. Just note $M_i - M_{i-1} = \log\frac{Y_i}{Y_{i-1}} - B$. For a conservative strategy, $\frac{Y_i}{Y_{i-1}}$ is bounded. So $|M_i - M_{i-1}|$ is bounded. If the strategy weren't conservative, it'd be possible to bet and lose all of $Y_{i-1}$, which would make $\log\frac{Y_i}{Y_{i-1}}$ unbounded. – Will Nelson Jul 18 '14 at 00:44

1 Answer


I think you are very close to the solution, but you made it more complicated than necessary.

First, simply condition and use the tower property of conditional expectation (law of total expectation): $E[\ln Y_n] = E\left[E[\ln Y_n \mid \mathcal{F}_{n-1}]\right]$.

Then it is the calculation you made before, so assuming it is correct and in line with your notation: betting the proportion $A$ of your wealth at step $n$ gives $E[\ln Y_n \mid \mathcal{F}_{n-1}] = \frac{1}{2}\left(\ln(1+2A)+\ln(1-A)\right) + \ln Y_{n-1}$, and taking expectations on both sides, $E[\ln Y_n] = \frac{1}{2}\left(\ln(1+2A)+\ln(1-A)\right) + E[\ln(Y_{n-1})]$.

This is related to stochastic control theory / dynamic programming in discrete time (the Bellman equation), which basically tells you that optimizing on a step-by-step basis gives you the overall optimal strategy.

You can then maximize over $A$ and find $A=\frac{1}{4}$. This proves that this is the optimal strategy at every step, since we only conditioned on the past. Iterating the recursion with $Y_{0}=1$ then gives $E[\ln Y_n]=n\cdot\frac{1}{2}\ln\frac{9}{8}$, i.e. $B=\frac{1}{2}\ln\frac{9}{8}$. You can also see that $\ln Y_k$ is "almost" a martingale, apart from a slight drift (positive and in your favour).
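
(If you want a quick empirical check of this, here is a rough Monte Carlo sketch, with an arbitrary sample size and seed, that the $A=\frac{1}{4}$ strategy indeed gives $E[\ln Y_n]\approx n\cdot\frac{1}{2}\ln\frac{9}{8}$.)

```python
import math
import random

def log_wealth(n, A, rng):
    """Return ln(Y_n) for one simulated run of the constant-proportion strategy C_k = A * Y_{k-1}."""
    Y = 1.0
    for _ in range(n):
        X = 2 if rng.random() < 0.5 else -1
        Y += A * Y * X                       # Y_k = Y_{k-1} + C_k X_k
    return math.log(Y)

rng = random.Random(0)
n, trials = 20, 200_000
estimate = sum(log_wealth(n, 0.25, rng) for _ in range(trials)) / trials
print(estimate, n * 0.5 * math.log(9 / 8))   # the two numbers should be close
```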

For the second part of the question, as mentioned in the comments, simply remark that $|M_{k}-M_{k-1}|=\left|\log(Y_{k})-Bk - \left(\log(Y_{k-1})-B(k-1)\right)\right| = \left|\log \frac{Y_k}{Y_{k-1}} - B\right|$, and that if at step $k$ you bet the previsible proportion $A_{k}=C_{k}/Y_{k-1}$, then $\frac{Y_k}{Y_{k-1}}$ equals either $1+2A_{k}$ or $1-A_{k}$.

At this stage you need both extremes of this ratio, since we are taking an absolute value. By the definition of a conservative strategy, $0 \leq A_{k} \leq 0.99$, so $0.01 \leq \frac{Y_k}{Y_{k-1}} \leq 2.98$ and therefore $\left|\log \frac{Y_k}{Y_{k-1}}\right| \leq \max\left(|\log 0.01|, \log 2.98\right) = \log 100$. Hence $|M_{k}-M_{k-1}| \leq \log 100 + |B| =: K < +\infty$, whatever the constant $B$ is.
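
(Only as an illustration, not a proof: here is a small simulation sketch, with an arbitrary random conservative strategy and seed, checking that the increments indeed stay below this $K$.)

```python
import math
import random

B = 0.5 * math.log(9 / 8)        # the constant B found in the first part, as an example
K = math.log(100) + abs(B)       # the claimed bound on |M_k - M_{k-1}|

rng = random.Random(0)
worst = 0.0
for _ in range(100_000):
    A = rng.uniform(0.0, 0.99)                               # any conservative bet proportion
    ratio = (1 + 2 * A) if rng.random() < 0.5 else (1 - A)   # Y_k / Y_{k-1}
    worst = max(worst, abs(math.log(ratio) - B))             # |M_k - M_{k-1}| = |log(Y_k/Y_{k-1}) - B|
print(worst, K)                  # worst observed increment vs. the bound K
```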

Hope this helps.

Matt B.