
In the book “A Second Course in Probability” by Ross and Peköz, a simplified proof of Skorokhod’s representation theorem is given for the case of continuous random variables.

The proof uses the following Lemma:

Suppose $X_n \rightarrow_d X$, where $X$ and the $X_n$ are continuous with distribution functions $F$ and $F_n$ respectively. If $F_n(x_n) \rightarrow F(x)$ with $0<F(x) < 1$, then $x_n\rightarrow x$.

With such lemma, the authors write the following:

Let $U$ be a uniform $(0,1)$ random variable and set $Y_n = F_n^{-1}(U)$ and $Y=F^{-1}(U)$. Note that because $$ F_n(F_n^{-1}(u))=u=F(F^{-1}(u)), $$

it follows from the Lemma that $F_n^{-1}(u) \rightarrow F^{-1}(u)$ for all $u$. Thus, $Y_n \rightarrow_{a.s.} Y$.

My question is about the final assertion. Why does this imply that $Y_n \rightarrow_{a.s.} Y$? It would seem that the convergence is everywhere and not only a.s.
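
For concreteness, here is a small numerical sketch of the construction quoted above (my own illustration, not from the book), taking $X \sim N(0,1)$ and $X_n \sim N(1/n,1)$, so that $X_n \rightarrow_d X$ and every cdf involved is continuous and strictly increasing:

```python
# Sketch: Y_n = F_n^{-1}(U) and Y = F^{-1}(U) for one and the same uniform U,
# with F the N(0,1) cdf and F_n the N(1/n,1) cdf, so F_n^{-1}(u) = 1/n + F^{-1}(u).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
u = rng.uniform()                   # one sample of U; it lies in (0, 1) a.s.

y = norm.ppf(u)                     # Y(omega) = F^{-1}(u)
for n in (1, 10, 100, 1000):
    y_n = norm.ppf(u, loc=1.0 / n)  # Y_n(omega) = F_n^{-1}(u)
    print(n, y_n, abs(y_n - y))     # the gap is exactly 1/n, so Y_n -> Y
```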

user10354138
From what I see in Billingsley's 'Convergence of Probability Measures', the statement of the Skorohod representation theorem on page 70 is 'everywhere' rather than 'almost sure'. I guess you are right, and maybe the 'a.s.' statement is enough for most scenarios, so people state it this way. I am not 100% sure about this, so maybe you could check the proof in Billingsley to see if the convergence is indeed everywhere. – Robert Jul 20 '20 at 16:29

1 Answer


Based on the proof here https://www.columbia.edu/~ww2040/proofchno.pdf for the case $S=\mathbb{R}$, one simple reason for the "almost surely" is that we can define the inverse $F^{-1}(t)$ of the cdf $F$ only for $t\in (0,1)$,

$$F^{-1}(t):=\inf\{s\in \mathbb{R}: F(s)>t\}, 0<t<1,$$

because $F(-\infty)=0$ and $F(+\infty)=1$, so the infimum can be infinite at the endpoints: with the definition above, $F^{-1}(1)=\inf\{s: F(s)>1\}=\inf\emptyset=+\infty$, and $F^{-1}(0)$ can be $-\infty$ (e.g. for a standard normal). So when we take the uniform $\omega\in [0,1]$, we are forced to exclude $\omega=0,1$ and set

$$Y_{n}(\omega):=F_{n}^{-1}(\omega), \omega\in (0,1)$$

$$Y_{n}(0)=Y_{n}(1):=1 \ \text{(say, an arbitrary value)},$$

and similarly for the limiting variable $Y=Y_{\infty}$.

Therefore, we only get the convergence almost surely, i.e. away from $\omega=0,1$.
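
To make the endpoint issue concrete, here is a minimal check (my own sketch, using scipy's `norm.ppf` as the quantile function of a standard normal $F$): the generalized inverse is infinite at $\omega=0$ and $\omega=1$, which is exactly why those two points have to be excluded.

```python
# Sketch: for F = standard normal cdf, F^{-1} (scipy's norm.ppf) is infinite
# at the endpoints, so Y(omega) = F^{-1}(omega) is a real number only for
# omega in the open interval (0, 1).
from scipy.stats import norm

print(norm.ppf(0.0))   # -inf
print(norm.ppf(1.0))   # inf  (the infimum is over an empty set)
print(norm.ppf(0.5))   # 0.0, finite for any omega strictly between 0 and 1
```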

However, as mentioned in the comments and here https://eventuallyalmosteverywhere.wordpress.com/2014/10/13/skorohod-representation-theorem/, one can simply redefine the $Y_n$ on this measure-zero set. So we let them equal the limiting variable there,

$$Y_{n}(0)=Y_{n}(1)=Y(0)=Y(1),$$

and thus we get convergence everywhere.
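
As a quick sanity check of that last step (again my own sketch, reusing the normal family $X_n \sim N(1/n,1)$, $X \sim N(0,1)$): once $Y_n$ and $Y$ share an arbitrary common value at $\omega=0,1$, the gap $|Y_n(\omega)-Y(\omega)|$ is $1/n$ on $(0,1)$ and $0$ at the endpoints, so the convergence holds for every $\omega\in[0,1]$.

```python
# Sketch: redefine Y_n and Y at omega = 0, 1 to share an arbitrary value
# (here 0.0) and check convergence on a grid that includes both endpoints.
import numpy as np
from scipy.stats import norm

def Y(omega, n=None):
    """F_n^{-1}(omega) (or F^{-1}(omega) if n is None) on (0,1); an arbitrary
    common value at omega = 0, 1, so all the variables agree there."""
    if omega == 0.0 or omega == 1.0:
        return 0.0
    loc = 0.0 if n is None else 1.0 / n
    return norm.ppf(omega, loc=loc)

grid = np.linspace(0.0, 1.0, 11)           # includes omega = 0 and omega = 1
for n in (1, 10, 100):
    gap = max(abs(Y(w, n) - Y(w)) for w in grid)
    print(n, gap)                           # sup over the grid is 1/n -> 0
```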

Thomas Kojar