
I'm trying to find the expectation of a stopping time. Specifically,

Let $T_1, T_2, \ldots$ be i.i.d. exponential random variables with mean $1$. Let $S_n = T_1 + \cdots + T_n$ denote their partial sums. Define the stopping time,

$$ T = \inf\{n \geq 1 : S_n \geq 1\}$$

which is the first index $n$ at which $S_n$ reaches $1$. Calculate $E[T]$.

Here was my approach. Let $x = E[T]$. I want to condition on what happens with the first arrival $T_1$. If $T_1 \geq 1$, then the process $\{S_n\}$ has stopped and $T \equiv 1$. Otherwise, if $T_1 < 1$, then after one step the process $\{S_n\}$ renews and runs again until it reaches $1$. So we have the following equation,

\begin{eqnarray} x &=& E[T]\\ &=& E[T | T_1 < 1]P(T_1 < 1) + E[T | T_1 \geq 1]P(T_1 \geq 1)\\ &=& (1+x)P(T_1 < 1) + P(T_1 \geq 1)\\ &=& (1+x)(1-e^{-1}) + e^{-1} \end{eqnarray}

This results in $x = e$.

However, I was told that the answer is $2$. They gave a heuristic explanation that $S_T$ is distributed as $1 + S$, where $S$ is an exponential random variable with mean $1$ (combined with Wald's identity $E[S_T] = E[T]\,E[T_1] = E[T]$, this would give $E[T] = E[S_T] = 2$). I can see the intuition behind this from the memoryless property, but I can't prove why it is so. I ran three simulations in $\textsf{R}$ and got $x \approx 2.001, 2.0161, 1.9785$, which seems to confirm that the answer is $2$. Can someone explain this result?
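
Since the question mentions the $\textsf{R}$ simulations but does not show them, here is a minimal sketch of one way such a simulation could look (the function name `sample_T`, the seed, and the number of replicates are my own choices):

```r
# Draw one realization of T: keep adding Exp(1) variables until the
# partial sum reaches 1, and return how many terms were needed.
sample_T <- function() {
  s <- 0; n <- 0
  while (s < 1) {
    s <- s + rexp(1, rate = 1)
    n <- n + 1
  }
  n
}

set.seed(1)                       # arbitrary seed for reproducibility
mean(replicate(1e5, sample_T()))  # should land near 2
```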

Also, why/where did my approach fail?

  • When you say $\mathbb{E}[T|T_1 < 1] = 1 + x,$ you're asserting that, in expectation, if the first step doesn't get all the way to $1,$ then the remaining steps have to get all the way to $1$. This is clearly false - if $T_1 = 1/2,$ then $T_2 + \dots + T_T$ only have to get up to $1/2$. Since $T_1 > 0$ a.s., we should have $\mathbb{E}[T|T_1 < 1] < 1 + \mathbb{E}[T].$ – stochasticboy321 Apr 13 '19 at 19:00
  • I noticed that you wanted a non-heuristic proof - note that $\{T > n\} = \{S_n < 1\}$, since the $S_n$ are non-decreasing. It should be straightforward to figure out $\pi_n := P(S_n<1)$ by establishing a recurrence relation between $\pi_n$ and $\pi_{n-1}$ - you'll likely need the volume of a standard simplex in $n$ dimensions. This will directly give you $P(T = n),$ which you can then use to find the mean. – stochasticboy321 Apr 13 '19 at 21:19
  • A more high-level argument is from queueing theory - suppose it takes you $\mathrm{Exp}(1)$ time to do a job. How many jobs will you finish in $1$ unit of time? It is a classical result that this number is $\mathrm{Poisson}(1)$ distributed. However, intuitively, the mean is simpler to argue - your rate of doing jobs is $1$ per unit, so you should, in expectation, finish one job per unit (a Little's law type argument). You are interested in this number plus one - which job will you be doing when the time unit finishes. (A small simulation illustrating this is sketched after these comments.) – stochasticboy321 Apr 13 '19 at 21:24
  • @stochasticboy321 Thank you! I was able to do a brute force calculation along these lines. –  Apr 14 '19 at 16:24
  • ^That's grand, you're welcome :). You should add an answer below, so that others trying the same problem can have a reference. – stochasticboy321 Apr 15 '19 at 01:59
  • @stochasticboy321 I added an answer. If someone can confirm the value of that infinite sum, then I think the solution would be complete. –  Apr 15 '19 at 17:39

1 Answer


Following @stochasticboy321's approach, we want to find $P(T > n) = P(S_n < 1)$. Since $S_n = T_1 + ... + T_n \sim$ Gamma($n,1$), we have,

$$ P(S_n < 1) = \frac{1}{\Gamma(n)}\int_0^1x^{n-1}e^{-x}dx = 1 - \frac{\Gamma(n,1)}{\Gamma(n)}$$

where $\Gamma(n,1) = \int_1^\infty x^{n-1}e^{-x}\,dx$ is the upper incomplete Gamma function. To get this expression, I used the identity $\int_0^1 x^{n-1}e^{-x}\,dx = \Gamma(n) - \Gamma(n,1)$, i.e., the lower incomplete Gamma function is the complete one minus the upper one. Finally,

$$ E[T] = 1 + \sum\limits_{n=1}^\infty P(T > n) = 1 + \sum\limits_{n=1}^\infty P(S_n<1) = 1+ \sum\limits_{n=1}^\infty\left(1 - \frac{\Gamma(n,1)}{\Gamma(n)}\right)$$

where the first equality is the tail-sum formula $E[T] = \sum_{n \geq 0} P(T > n)$ together with the fact that $P(T > 0) = 1$.

I have no idea how to evaluate the sum in closed form, but a computation in Wolfram Alpha suggests it converges to $1$. Thus, $E[T] = 2$ (at least conjecturally, from this numerical computation).
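
For what it's worth, the same numerical check can be done directly in $\textsf{R}$, since $P(S_n < 1)$ is the Gamma$(n,1)$ CDF evaluated at $1$ (`pgamma`); this only confirms the value numerically, it is not a closed-form evaluation:

```r
# P(T > n) = P(S_n < 1) is the Gamma(n, 1) CDF at 1.
n <- 1:50
tail_probs <- pgamma(1, shape = n, rate = 1)
sum(tail_probs)       # ~ 1: the series converges very quickly
1 + sum(tail_probs)   # ~ 2 = E[T]
```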

  • If we set the integral above to $I_n,$ then by integration by parts, and using $\Gamma(n) = (n-1) \Gamma(n-1) = (n-1)!,$ we get, for $n \ge 1,$

    $$ P(T > n) = \frac{I_n}{\Gamma(n)} = \frac{(n-1) I_{n-1}}{(n-1) \Gamma(n-1)} - \frac{e^{-1}}{(n-1)!} = P(T > n-1) - \frac{e^{-1}}{(n-1)!} $$

    But then $$P(T = n) = P(T> n-1) - P(T > n) = \frac{e^{-1}}{(n-1)!}.$$ We immediately have that $T \overset{\mathrm{law}}{=} 1 + Z,$ where $Z \sim \mathrm{Pois}(1),$ and so $T$ has mean $2$. Implicitly this also solves the series above (by summation by parts). (A numerical cross-check of this closed form is sketched below.)

    – stochasticboy321 Apr 15 '19 at 19:35
  • @stochasticboy321 Wow I would give a thousand upvotes if it were possible. Thank you! –  Apr 15 '19 at 20:26
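
As the comment above derives, $P(T = n) = e^{-1}/(n-1)!$. Here is a quick numerical cross-check of that closed form against the Gamma-CDF expression, again a sketch using `pgamma` (with the convention $P(T > 0) = 1$):

```r
# P(T = n) = P(T > n-1) - P(T > n), with P(T > m) = pgamma(1, m, 1) for m >= 1.
n <- 1:8
p_gt <- c(1, pgamma(1, shape = 1:8, rate = 1))  # P(T > m) for m = 0, 1, ..., 8
via_cdf <- p_gt[n] - p_gt[n + 1]                # P(T = n) via the recurrence
closed  <- exp(-1) / factorial(n - 1)           # e^{-1} / (n-1)!
round(rbind(via_cdf, closed), 6)                # the two rows agree
```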