
In a sequence of Bernoulli experiments with probability of success $p$, the index $X$ of the experiment at which the first success occurs follows a discrete geometric distribution with parameter $p$: $\Pr[X=n]=(1-p)^{n-1}\cdot p$. A classical and intuitive result is that $E(X)=1/p$ for $0<p\le1$. I'm after an analytic proof of that.

By definition of expected value as $E(X)=\displaystyle\lim_{m\to\infty}\sum_{n=1}^m\Pr[X=n]\cdot n$, if it exists, I get $$E(X)=\lim_{m\to\infty}\sum_{n=1}^m (1-p)^{n-1}\cdot p\cdot n$$ I prove convergence of the series using the ratio test and that $1-p<1$, to get $$E(X)=p\sum_{n=1}^\infty(1-p)^{n-1}\cdot n$$ But I'm pretty stuck there. Please hint at a method.
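As a numerical sanity check (not a proof; just a Python sketch), partial sums of that series do approach $1/p$:

```python
# Partial sums of p * sum_{n>=1} n * (1-p)^(n-1), compared against 1/p.
# A numerical illustration only; the question asks for an analytic proof.
for p in (0.1, 0.5, 0.9):
    partial = p * sum(n * (1 - p) ** (n - 1) for n in range(1, 10_000))
    print(f"p = {p}: partial sum = {partial:.6f}, 1/p = {1/p:.6f}")
```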

fgrieu
  • @insipidintegrator: Sort of. I understand the computation there as the sum of a geometric series, and vaguely understand the reasoning that it must yield the same result as the one I'm trying to get, using that $\sum\Pr(X>x)$ technique. I was hoping for something that continues from where I'm stuck, but if that's the simplest, so be it. And no, I'm not familiar with arithmetic-geometric series (or forgot about them); I only see that I've met one! – fgrieu Dec 08 '23 at 10:52
    Try this: https://math.stackexchange.com/questions/301751/expectation-and-variance-of-the-geometric-distribution – insipidintegrator Dec 08 '23 at 11:05

2 Answers


We can conclude analytically from the point where I got stuck. The critical step is to rewrite $$E(X)=p\sum_{n=1}^\infty(1-p)^{n-1}\cdot n$$ as $$E(X)=p\sum_{n=1}^\infty\left(\sum_{m=n}^\infty(1-p)^{m-1}\right)$$ We then evaluate the inner sum as a geometric series to get $$E(X)=p\sum_{n=1}^\infty\left((1-p)^{n-1}\cdot\frac1p\right)$$ then $$E(X)=\sum_{n=1}^\infty(1-p)^{n-1}$$ and again recognize a geometric series to get the desired $$E(X)=\frac1p$$
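For completeness, swapping the order of summation in that rewrite is legitimate because all terms are nonnegative (Tonelli's theorem for double series): $$\sum_{m=1}^\infty m\,(1-p)^{m-1}=\sum_{m=1}^\infty\sum_{n=1}^m(1-p)^{m-1}=\sum_{n=1}^\infty\sum_{m=n}^\infty(1-p)^{m-1}$$ where the first equality just writes the factor $m$ as a sum of $m$ ones.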


Thanks to links in comments, I now see that we also get to the result by starting from $E(X)=\displaystyle\lim_{m\to\infty}\sum_{n=1}^m\Pr[X\ge n]$ rather than from $E(X)=\displaystyle\lim_{m\to\infty}\sum_{n=1}^m\Pr[X=n]\cdot n$ as I did.

The method in this answer is an analytic translation of that very idea.
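Spelled out, that route is even shorter: $\Pr[X\ge n]=(1-p)^{n-1}$, since $X\ge n$ exactly when the first $n-1$ trials all fail, hence $$E(X)=\sum_{n=1}^\infty\Pr[X\ge n]=\sum_{n=1}^\infty(1-p)^{n-1}=\frac1p$$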

fgrieu

Here is a different solution from the point where you got stuck.

You want to evaluate the sum

$$E(X) = p\sum_{n=1}^\infty(1-p)^{n-1}\cdot n$$

Here we recognize the "$n$ times something to the power $n-1$" theme from differentiation and conclude that

$$E(X) = pf'(1-p)$$

where $f'$ is the derivative of the function:

$$f(x) = \sum_{n=0}^\infty x^n$$

(This holds for $x$ where the sum converges, i.e. $|x| < 1$, which you already showed. Moreover, a power series may be differentiated term by term inside its radius of convergence, which is what justifies $f'(x)=\sum_{n=1}^\infty n\,x^{n-1}$.)

But recognizing the geometric series, we know that $f(x) = (1 - x)^{-1}$, from which it follows that

$$f'(x) = (1 - x)^{-2}$$

hence

$$E(X) = p \cdot f'(1-p) = p \cdot (1 - (1-p))^{-2} = \frac{p}{p^2} = \frac{1}{p}$$
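If you want to double-check the differentiation step mechanically, here is a small SymPy sketch (assuming SymPy is available; this is a verification aid, not part of the proof):

```python
import sympy as sp

x, p = sp.symbols('x p', positive=True)
f = 1 / (1 - x)                     # closed form of sum_{n>=0} x^n for |x| < 1
f_prime = sp.diff(f, x)             # expected: (1 - x)**(-2)
E = sp.simplify(p * f_prime.subs(x, 1 - p))
print(f_prime)                      # (1 - x)**(-2)
print(E)                            # 1/p
```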


That said, I prefer the following, different solution to the problem:

At the first Bernoulli experiment you succeed with probability $p$ and end with $X = 1$; with probability $(1-p)$ you fail, and since the remaining trials form a fresh copy of the original process, after this first trial you need an expected $E(X)$ further trials. So:

$$E(X) = p \cdot 1 + (1 -p) \cdot (1 + E(X))$$ and from this you can solve for $E(X)$ algebraically.
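Explicitly: $$E(X) = p + (1-p)\bigl(1 + E(X)\bigr) = 1 + (1-p)\,E(X) \implies p\,E(X) = 1 \implies E(X) = \frac1p$$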

Vincent