I found that if $X,Y\sim\operatorname{Geo}(p)$ are independent and $Z=X+Y$, then $P(Z=z)=\sum_y P(X=z-y)\cdot P(Y=y)$ by the law of total probability, and now I need to conclude that $Z\sim\operatorname{NB}(2,p)$, but I can't figure out how.
- https://math.stackexchange.com/q/548525/304635 – jlammy Dec 26 '21 at 20:30
1 Answer
There are several definitions for these distributions. Make sure you are using the appropriate ones.
- A Geometrically distributed random variable counts the failures before the first success in a sequence of Bernoulli trials with success rate $p$.$$X\sim\mathcal{Geo}(p)\implies\mathsf P(X=x)= (1-p)^x p~\mathbf 1_{x\in\Bbb N}\\Y\sim\mathcal{Geo}(p)\implies\mathsf P(Y=y)= (1-p)^y p~\mathbf 1_{y\in\Bbb N}$$
- A Negative Binomially distributed random variable counts the failures before success number $n$, in a sequence of Bernoulli trials with success rate $p$.$$B\sim\mathcal{NB}(n,p)\implies\mathsf P(B=w)=\dbinom{w+n-1}{n-1} (1-p)^w p^n~\mathbf 1_{w\in\Bbb N}\\Z\sim\mathcal{NB}(2,p)\implies\mathsf P(Z=z)=(z+1) (1-p)^z p^2~\mathbf 1_{z\in\Bbb N}$$
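As a quick sanity check on these conventions (the code and the choice $p=0.3$ are mine, not part of the argument), both formulas really are pmfs on $\Bbb N$: their totals come out to $1$.

```python
# Check the two pmf conventions above; p = 0.3 is an arbitrary choice.
p = 0.3

# Geo(p): probability of x failures before the first success.
geo = [(1 - p) ** x * p for x in range(2000)]

# NB(2, p): probability of z failures before the second success.
nb2 = [(z + 1) * (1 - p) ** z * p ** 2 for z in range(2000)]

# The tails vanish geometrically, so 2000 terms capture essentially all the mass.
print(round(sum(geo), 9), round(sum(nb2), 9))  # both approximately 1.0
```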
Then it is just a matter of substituting in the probability mass functions and evaluating the sum; the summand no longer depends on $y$, and the indicators restrict $y$ to the $z+1$ values $0,1,\dots,z$:$$\begin{align}\mathsf P(X+Y=z) ~&=~\sum_{y} \left(\mathsf P(X=z-y)\cdot\mathsf P(Y=y)\right)\\[1ex]&=~\sum_y\left((1-p)^{z-y}~p~\mathbf 1_{0\leq y}\cdot(1-p)^y~p~\mathbf 1_{0\leq z-y}\right)\\[1ex]&=~(1-p)^{z}~p^2\sum_y \mathbf 1_{0\leq y\leq z}\\[1ex]&=~(z+1)~(1-p)^z~p^2~\mathbf 1_{z\in\Bbb N}\end{align}$$which is exactly the $\mathcal{NB}(2,p)$ pmf above.
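The same conclusion can be checked numerically. This short sketch (again with an arbitrary $p=0.3$) convolves the two $\mathcal{Geo}(p)$ pmfs and compares the result with $(z+1)(1-p)^z p^2$:

```python
# Convolve two Geometric(p) pmfs and compare with the NB(2, p) pmf.
p = 0.3

def geom_pmf(k, p):
    """P(X = k): k failures before the first success."""
    return (1 - p) ** k * p

def nb2_pmf(z, p):
    """P(Z = z) for NB(2, p): z failures before the second success."""
    return (z + 1) * (1 - p) ** z * p ** 2

for z in range(20):
    # Sum over y = 0..z, exactly as the indicators dictate.
    conv = sum(geom_pmf(z - y, p) * geom_pmf(y, p) for y in range(z + 1))
    assert abs(conv - nb2_pmf(z, p)) < 1e-12

print("convolution matches NB(2, p) for z = 0..19")
```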
Graham Kemp
- But $\mathsf P(Z=z)=(z+1)(1-p)^z p^2~\mathbf 1_{z\in\Bbb N}$ and $\mathsf P(X+Y=z)=(1-p)^z p^2~\mathbf 1_{z\in\Bbb N}$ without the $(z+1)$. Where did the $(z+1)$ go? – Josef Sigron Dec 27 '21 at 14:49
- It is in the summation. $\sum_y\mathbf 1_{0\leqslant y\leqslant z}\mathbf 1_{0\leqslant z}= (z+1)\mathbf 1_{0\leqslant z}$ – Graham Kemp Dec 27 '21 at 19:41
- An indicator is a piecewise function which equals $1$ when the indicated statement holds, and $0$ otherwise.$$\mathbf 1_{0\leqslant y\lt z}=\begin{cases}1 &:& 0\leqslant y\lt z\\0&:&\text{otherwise}\end{cases}$$ – Graham Kemp Dec 28 '21 at 12:34