
It is well known that if $X_1,...,X_n \overset{iid}{\sim} Exp(\lambda)$, then the spacings $Y_i:=X_{(i)}-X_{(i-1)} \overset{ind}{\sim} Exp((n+1-i)\lambda)$ for $i=1,...,n$ (with $X_{(0)}:=0$). The standard proof applies a Jacobian transformation to the joint density of the order statistics.
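As a quick sanity check (my addition, not part of the original argument), the claimed spacing distributions can be verified by Monte Carlo: each normalized spacing $(n+1-i)\lambda\, Y_i$ should have mean $1$. The values of `n`, `lam`, and `reps` below are arbitrary choices.

```python
# Monte Carlo sketch: spacings of n iid Exp(lambda) order statistics
# should satisfy Y_i ~ Exp((n+1-i)*lambda), i.e. E[Y_i] = 1/((n+1-i)*lambda).
import numpy as np

rng = np.random.default_rng(0)
n, lam, reps = 5, 2.0, 200_000  # arbitrary illustration parameters

x = rng.exponential(scale=1.0 / lam, size=(reps, n))
x.sort(axis=1)                                   # order statistics per row
spacings = np.diff(x, axis=1, prepend=0.0)       # Y_1, ..., Y_n (X_(0) := 0)

for i in range(1, n + 1):
    empirical = spacings[:, i - 1].mean()
    theoretical = 1.0 / ((n + 1 - i) * lam)
    print(f"Y_{i}: sample mean {empirical:.4f} vs theory {theoretical:.4f}")
```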

However, I'm wondering if the method below for computing $Y_2$ alone can be justified as well:

First, $P(Y_2\geq y) = E[P(X_{(2)}-X_{(1)}\geq y | X_{(1)}, I_{X_k=X_{(1)}}, k=1,...,n)]$. Then since

\begin{align} & P(X_{(2)}-X_{(1)}\geq y \mid X_{(1)}=x,\ I_{X_i=X_{(1)}}=1,\ I_{X_j=X_{(1)}}=0\ \forall j\neq i) \\ ={}& P(X_{(2)}\geq y + x \mid X_{(1)}=x,\ I_{X_i=X_{(1)}}=1,\ I_{X_j=X_{(1)}}=0\ \forall j\neq i) \tag{1} \\ ={}& P(X_{(2)}\geq y + x \mid X_i=x,\ I_{X_j>x}=1\ \forall j\neq i) \tag{2} \\ ={}& P(X_{j}\geq y + x\ \forall j\neq i \mid X_i=x,\ I_{X_j>x}=1\ \forall j\neq i) \\ \overset{\text{(independence)}}{=}{}& P(X_{j}\geq y + x\ \forall j\neq i \mid I_{X_j>x}=1\ \forall j\neq i) \\ \overset{\text{(memoryless)}}{=}{}& P(X_{j}\geq y\ \forall j\neq i) \\ ={}& P\Big(\min_{j\neq i} X_{j}\geq y\Big) = \exp\{-(n+1-2)\lambda y\}, \end{align} and since this value depends on neither $x$ nor $i$, the term inside the expectation can be moved out unaffected, giving $P(Y_2\geq y)=e^{-(n-1)\lambda y}$. Done.
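The derivation's key claim — that the conditional survival probability equals $e^{-(n-1)\lambda y}$ regardless of the conditioning value $x$ — can be probed numerically (my addition). Since the conditioning event has probability zero, the sketch below approximates it by binning $X_{(1)}$ into a narrow window around each test value of $x$; all parameters are arbitrary.

```python
# Approximate check that P(X_(2)-X_(1) >= y | X_(1) = x) = exp(-(n-1)*lam*y)
# does not depend on x, using a narrow window around x in place of the
# measure-zero conditioning event.
import numpy as np

rng = np.random.default_rng(2)
n, lam, y, reps = 5, 1.0, 0.4, 500_000  # arbitrary illustration parameters

x_samp = rng.exponential(scale=1.0 / lam, size=(reps, n))
x_samp.sort(axis=1)
first, second = x_samp[:, 0], x_samp[:, 1]   # X_(1) and X_(2)

expected = np.exp(-(n - 1) * lam * y)
for x0 in (0.05, 0.2, 0.5):                  # several conditioning values
    window = np.abs(first - x0) < 0.02       # condition on X_(1) near x0
    cond_surv = np.mean(second[window] - first[window] >= y)
    print(f"x = {x0}: conditional survival {cond_surv:.3f} vs {expected:.3f}")
```

The printed conditional survival probabilities should agree with $e^{-(n-1)\lambda y}$ across all three values of $x$, consistent with the claimed independence from $x$.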

Question: My only concern lies in how to justify the change from (1) to (2). At first glance, one might think that because the two conditioning events are equal, the step is "clearly" valid. However, since both events have probability zero, we cannot justify it by the simple Bayes-formula event swap used in the discrete case.

In fact, following the measure-theoretic definition of conditional expectation, one can show that the two conditional probabilities do exist (see regular conditional probabilities or disintegration, e.g. in Foundations of Modern Probability by Olav Kallenberg). Justifying the event swap, though, is what I find challenging.

NXWang

1 Answer


You can derive the result rigorously using the following fact, valid for independent $X$ and $Y$: $$P((X, Y)\in B\mid X) = h(X)\quad\text{where}\quad h(x):=P((x,Y)\in B),\tag{$\ast$}$$ and therefore $P((X,Y)\in B) = E[h(X)]$.

Apply ($\ast$) to the event $\{X_j - X_i\ge y,\ \forall j\ne i,\ X_j>X_i,\ \forall j\ne i\}$ with $X:=X_i$ and $Y$ the vector of all $X_j$, $j\ne i$: $$\begin{aligned} h(x):=P(X_j - x\ge y,\ \forall j\ne i,\ X_j>x,\ \forall j\ne i) &=P(X_j\ge x+y,\ \forall j\ne i)\\ &=\left(e^{-\lambda y}\right)^{n-1}\left(e^{-\lambda x}\right)^{n-1}\\ &=\left(e^{-\lambda y}\right)^{n-1} P(X_j> x,\ \forall j\ne i) \end{aligned}$$ Conclude that

$$\begin{aligned}P(X_{(2)}-X_{(1)}\ge y)&=\sum_{i=1}^n P(X_j-X_i\ge y,\ \forall j\ne i, X_j>X_i,\ \forall j\ne i)\\ &=\sum_{i=1}^n E[h(X_i)]\\ &=\sum_{i=1}^n\left(e^{-\lambda y}\right)^{n-1} \frac1n\\ &=\left(e^{-\lambda y}\right)^{n-1}. \end{aligned} $$ To calculate $E[h(X_i)]$, you can perform a direct integration, or define $g(x):=P(X_j>x,\ \forall j\ne i)$ and observe that $E[g(X_i)]=P(X_j>X_i,\ \forall j\ne i)=\frac1n$.
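Both ingredients of this argument — the final survival function $(e^{-\lambda y})^{n-1}$ and the symmetry fact $P(X_j>X_i,\ \forall j\ne i)=\frac1n$ — can be checked by simulation. This is a sketch of my own (not the answerer's), with arbitrary parameter choices.

```python
# Numerical check of the two facts used in the answer:
#   (a) P(X_(2) - X_(1) >= y) = exp(-(n-1)*lam*y)
#   (b) P(X_i is the unique minimum) = 1/n, by symmetry
import numpy as np

rng = np.random.default_rng(1)
n, lam, y, reps = 4, 1.5, 0.3, 200_000  # arbitrary illustration parameters

x = rng.exponential(scale=1.0 / lam, size=(reps, n))
x.sort(axis=1)
emp_surv = np.mean(x[:, 1] - x[:, 0] >= y)      # (a) empirical survival
print(emp_surv, np.exp(-(n - 1) * lam * y))

x2 = rng.exponential(scale=1.0 / lam, size=(reps, n))
emp_min = np.mean(np.argmin(x2, axis=1) == 0)   # (b) P(X_1 is the minimum)
print(emp_min, 1.0 / n)
```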

grand_chat