
So, I recently (re-)discovered that random variables learned in elementary probability, such as the exponentially distributed random variable $X$ with cdf $F_X(x) = 1-e^{- \lambda x}$, can be explicitly represented as

$$X(\omega) := \frac{1}{\lambda} \ln(\frac{1}{1-\omega}) \tag{*}$$

This can be derived with the formula

$$X(\omega) = \sup\{x \in \mathbb{R}: F_X(x) < \omega\}$$
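A quick sanity check of $(*)$, assuming nothing beyond the cdf above: composing the representation with $F_X$ should return $\omega$. A minimal sketch (function names are my own):

```python
import math

def skorokhod_exponential(omega, lam):
    """The representation (*): quantile function of Exp(lam) evaluated at omega."""
    return (1.0 / lam) * math.log(1.0 / (1.0 - omega))

def exponential_cdf(x, lam):
    """F_X(x) = 1 - e^{-lam * x}."""
    return 1.0 - math.exp(-lam * x)

# F_X(X(omega)) = omega: the representation really has cdf F_X
lam = 2.0
for omega in (0.1, 0.5, 0.9):
    assert abs(exponential_cdf(skorokhod_exponential(omega, lam), lam) - omega) < 1e-12
```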

This is apparently called Skorokhod representation (so-called in David Williams' Probability with Martingales). Where can I find the rest of the Skorokhod representations for the random variables we learned in elementary probability? I'm looking for something like this table for moment-generating functions.

ETA: I just realised there's a connection between the normal-distribution analogue of this and Excel's `NORMINV(RAND(), mu, sigma)`.
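In Python, the same construction can be sketched with the standard library's `statistics.NormalDist` (Python 3.8+), whose `inv_cdf` plays the role of `NORMINV`; the wrapper name below is my own:

```python
import random
from statistics import NormalDist

def norminv_rand(mu, sigma):
    """Analogue of Excel's NORMINV(RAND(), mu, sigma):
    inverse-transform sample from N(mu, sigma^2)."""
    return NormalDist(mu, sigma).inv_cdf(random.random())

# The quantile function itself, e.g. the 97.5% point of N(0,1):
print(round(NormalDist(0, 1).inv_cdf(0.975), 2))  # → 1.96
```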


(*) Some advanced probability stuff:

The probability space is $((0,1), \mathscr B(0,1), \mu)$ where $\mu$ is Lebesgue measure.

BCLC
  • 14,197

1 Answer


Part I is general. Part II is specific to $((0,1),\mathscr B(0,1), Leb)$.

Part I:

The Skorokhod representation of a random variable $X$ with distribution $F_X$ on the probability space $(\Omega, \mathscr F, \mathbb P)$ is given by

$$X(\omega) = \sup\{x \in \mathbb R | F_X(x) < \omega\}$$

However, on $(\mathbb R, \mathscr B(\mathbb R), \mathbb P)$, where $\mathbb P$ is the law of $X$, the Skorokhod representation is simply the identity random variable:

$$X(\omega) = \omega$$

Part II:

TL;DR To get the Skorokhod representation of $X$ on $((0,1),\mathscr B(0,1), Leb)$, just look up $X$'s quantile function $F^{-1}_X(y)$ and rename the argument from $y$ to $\omega$. Why? On the probability space in $(*)$ in the OP, the identity random variable $Y(\omega) = \omega$ is uniform. That is, quantile functions implicitly refer to $((0,1),\mathscr B(0,1), Leb)$ because $((0,1),\mathscr B(0,1), Leb)$ is to the uniform distribution what $(\mathbb R, \mathscr B(\mathbb R), \mathbb P)$ is to the exponential. The probability integral transform then states that $X = F^{-1}_X(Y)$. You can also look up 'Examples of the inverse transform sampling', or try Section 4.11, 'Sampling from a distribution', in Grimmett and Stirzaker's Probability and Random Processes.
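A minimal sketch of the "look up the quantile function and substitute $\omega$" recipe for two textbook distributions (the parameter values and function names are my own; each check verifies $F_X(F^{-1}_X(\omega)) = \omega$):

```python
import math

# Quantile functions looked up from standard tables, argument renamed to omega:

def exp_quantile(omega, lam=2.0):
    """Exp(lam): F(x) = 1 - e^{-lam x}, so F^{-1}(omega) = ln(1/(1-omega))/lam."""
    return math.log(1.0 / (1.0 - omega)) / lam

def logistic_quantile(omega, mu=0.0, s=1.0):
    """Logistic(mu, s): F(x) = 1/(1+e^{-(x-mu)/s}), so F^{-1}(omega) = mu + s ln(omega/(1-omega))."""
    return mu + s * math.log(omega / (1.0 - omega))

def exp_cdf(x, lam=2.0):
    return 1.0 - math.exp(-lam * x)

def logistic_cdf(x, mu=0.0, s=1.0):
    return 1.0 / (1.0 + math.exp(-(x - mu) / s))

# The composed maps recover omega, as the probability integral transform promises
for omega in (0.25, 0.5, 0.75):
    assert abs(exp_cdf(exp_quantile(omega)) - omega) < 1e-12
    assert abs(logistic_cdf(logistic_quantile(omega)) - omega) < 1e-12
```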


  1. By the probability integral transform, the basis for inverse transform sampling (hence the `NORMINV(RAND(), mu, sigma)`), we have that for any $X$ and uniform $Y$,

$$X=F^{-1}_X(Y)$$

where $F^{-1}_X(y)$ is the ordinary inverse of the cdf $F_X(x)$ when $F_X$ is continuous and strictly increasing, and in general is defined by:

$$F^{-1}_X(y) := \inf\{x \in \mathbb R | F_X(x) \ge y\}$$ and $y \in [0,1]$.

  2. Convince yourself that $$\inf\{x \in \mathbb R | F_X(x) \ge y\} = \sup\{x \in \mathbb R | F_X(x) < y\}$$

  3. Since the uniform distribution's Skorokhod representation is the identity random variable $Y(\omega) = \omega$ on $((0,1),\mathscr B(0,1), \mu)$ where $\mu$ is Lebesgue measure, any random variable is its quantile function evaluated at a uniformly distributed random variable, i.e. at such an $\omega$:

$$X=F^{-1}_X(Y):=\inf\{a \in \mathbb R | F_X(a) \ge Y\}$$

Conclusion? If you want Skorokhod representations, just look up quantile functions.
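A sketch of the lookup in the discrete case, using the generalized inverse $\inf\{x \in \mathbb R | F_X(x) \ge y\}$ on a step cdf. Note: I use the usual convention $P(X=1)=p$ here (so $F(0)=1-p$), and the helper names are mine:

```python
def generalized_inverse(cdf_steps, y):
    """F^{-1}(y) = inf{x : F(x) >= y} for a step cdf given as a
    sorted list of (x, F(x)) pairs at the jump points."""
    for x, fx in cdf_steps:
        if fx >= y:
            return x
    raise ValueError("y exceeds the cdf's range")

# Bernoulli(p) with P(X=1) = p: F(0) = 1-p, F(1) = 1
p = 0.3
bernoulli_cdf = [(0, 1.0 - p), (1, 1.0)]

assert generalized_inverse(bernoulli_cdf, 0.5) == 0  # 0.5 <= 1-p = 0.7
assert generalized_inverse(bernoulli_cdf, 0.9) == 1  # 0.9 >  1-p = 0.7
```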


For example, to get the Skorokhod representation of a Bernoulli random variable $X$ with parameter $p$, look up its quantile function:

$$F_X^{-1}(y) = 1_{(p,1)}(y)$$

ETA: It seems the convention for Bernoulli here is $P(X=0)=p$ rather than $P(X=1)=p$.

Conclude by probability integral transform:

$$X(\omega) = F_X^{-1}(\omega) = 1_{(p,1)}(\omega)\tag{*}$$

Wait, what's the difference between $y$ and $\omega$? $y$ has no distribution; it's just some number in $[0,1]$. Meanwhile $\omega$ is a sample point, and $Y(\omega) = \omega \sim Unif(0,1)$.

Going back, we can verify $(*)$ by proving that

$$1_{(p,1)}(\omega) = \sup\{a \in \mathbb R | p\,1_{[0,1)}(a) + 1_{[1,\infty)}(a) < \omega\}$$

and

$$1_{(p,1)}(\omega) \sim Be(p)$$
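A numeric check of both claims under the ETA convention $P(X=0)=p$, so that the cdf is $F(a)=p$ for $0 \le a < 1$ and $1$ for $a \ge 1$. The grid approximation of the sup is my own device and is accurate only up to the grid spacing:

```python
import random

def bernoulli_cdf(a, p):
    """cdf under the ETA convention P(X=0) = p: F = p on [0,1), 1 on [1,inf)."""
    if a < 0:
        return 0.0
    return p if a < 1 else 1.0

def sup_representation(omega, p, lo=-2.0, hi=2.0, n=4000):
    """sup{a : F(a) < omega}, approximated on a grid of spacing (hi-lo)/n."""
    step = (hi - lo) / n
    return max(lo + i * step for i in range(n + 1)
               if bernoulli_cdf(lo + i * step, p) < omega)

p = 0.3

# Claim 1: the sup formula collapses to the indicator 1_{(p,1)}(omega)
for omega in (0.1, 0.29, 0.31, 0.9):
    indicator = 1 if p < omega < 1 else 0
    assert abs(sup_representation(omega, p) - indicator) <= 1e-3 + 1e-9

# Claim 2: 1_{(p,1)}(omega) takes the value 1 with probability 1 - p,
# i.e. P(X=0) = p as the ETA convention requires (Monte Carlo check)
random.seed(0)
n_samples = 100_000
freq_one = sum(1 for _ in range(n_samples) if p < random.random() < 1) / n_samples
assert abs(freq_one - (1 - p)) < 0.01
```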

Btw, since it can be shown that $X(\omega) \sim $Be$(p) \iff X(1-\omega) \sim $Be$(p)$ (for $\omega \sim$ Unif$(0,1)$), the representation is not unique: $1_{(0,1-p)}(\omega)$ is also a Skorokhod representation of the same distribution, i.e.

$$1_{(0,1-p)}(\omega) \sim Be(p)$$
