Let $T$ be a positive random variable, $S(t)=\operatorname{P}(T\geq t)$. Prove that $$E[T]=\int^\infty_0 S(t)dt.$$ I have tried this unsuccessfully.
6 Answers
$$A_t=[T\geqslant t]\qquad S(t)=E[\mathbf 1_{A_t}]\qquad T=\int_0^\infty\mathbf 1_{A_t}\,\mathrm dt$$
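As the comment below suggests, the one-line argument can be finished by taking expectations and swapping $E$ with the time integral, which Tonelli's theorem permits since the integrand is nonnegative:

```latex
\mathbb E[T]
  = \mathbb E\!\left[\int_0^\infty \mathbf 1_{A_t}\,\mathrm dt\right]
  = \int_0^\infty \mathbb E[\mathbf 1_{A_t}]\,\mathrm dt
  = \int_0^\infty S(t)\,\mathrm dt.
```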
Maybe add a final note that $\mathbb E[T]=\int_0^\infty\mathbb E[\mathbf 1_{A_t}]\,\mathrm dt$. – Maverick Meerkat Jun 05 '21 at 12:23
Consider for any $n > 0$, $$\begin{align*} \int_{t=0}^n t f_T(t) \, dt &= \int_{t=0}^n \left(\int_{s=0}^t \, ds\right) f_T(t) \, dt \\ &= \int_{t=0}^n \int_{s=0}^t f_T(t) \, ds \, dt \\ &= \int_{s=0}^n \int_{t=s}^n f_T(t) \, dt \, ds \\ &= \int_{s=0}^n F_T(n) - F_T(s) \, ds. \end{align*}$$ Then as $n \to \infty$, $F_T(n) \to 1$ and we obtain $${\rm E}[T] = \int_{s=0}^\infty 1 - F_T(s) \, ds = \int_{s=0}^\infty S_T(s) \, ds.$$
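A quick numeric sanity check of the resulting identity (my own example, not part of the answer): for $T \sim \text{Exponential}(\lambda)$ we know $E[T]=1/\lambda$ and $S(t)=e^{-\lambda t}$, so the two sides of $E[T]=\int_0^\infty S(s)\,ds$ can be compared directly.

```python
import math

# Assumed example: T ~ Exponential(rate), so E[T] = 1/rate and
# S(t) = exp(-rate * t). We check E[T] = ∫_0^∞ S(t) dt numerically.
rate = 2.0
expected = 1.0 / rate  # known mean of the exponential distribution

# Midpoint Riemann sum for ∫_0^n S(t) dt, with n large enough that
# F(n) ≈ 1, mirroring the n → ∞ limit in the derivation above.
n, steps = 20.0, 200_000
dt = n / steps
integral = sum(math.exp(-rate * (i + 0.5) * dt) for i in range(steps)) * dt

assert abs(integral - expected) < 1e-6
```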
So you claim that $$\lim_{n\to\infty}\int_0^n (F_T(n)-F_T(s))ds=\lim_{n\to\infty}\int_0^n (1-F_T(s))ds$$ — why does this hold? – byk7 Feb 12 '25 at 19:47
Using integration by parts and the fact that $f(t)dt=dF(t)=-d(1-F(t))=-dS(t)$ $$ \begin{align*} E(T) = \int_0^\infty t f_T(t) \, dt &= \int_0^\infty -t\,dS(t) \\ &= \left. -tS(t) \right|_0^\infty - \int_0^\infty S(t) \, d(-t) \\ &= 0 + \int_0^\infty S(t) \, dt \\ &= \int_0^\infty S(t) \, dt. \end{align*} $$
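The integration-by-parts step can be sanity-checked on a finite interval (my own example, assuming $T \sim \text{Exponential}(1)$, so $f(t) = S(t) = e^{-t}$): the identity $\int_0^n t f(t)\,dt = -nS(n) + \int_0^n S(t)\,dt$ should hold for every $n$, before any limit is taken.

```python
import math

# Assumed example: T ~ Exponential(1), f(t) = S(t) = exp(-t).
n, steps = 5.0, 100_000
dt = n / steps

# Left side: midpoint Riemann sum for ∫_0^n t f(t) dt.
lhs = 0.0
for i in range(steps):
    t = (i + 0.5) * dt
    lhs += t * math.exp(-t) * dt

# Right side: -n S(n) + ∫_0^n S(t) dt, using the closed form
# ∫_0^n exp(-t) dt = 1 - exp(-n).
rhs = -n * math.exp(-n) + (1.0 - math.exp(-n))

assert abs(lhs - rhs) < 1e-6
```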
To see why $tS(t)\to 0$ as $t\to\infty$ check https://math.stackexchange.com/a/1398832/342736 – Maverick Meerkat Jun 05 '21 at 12:37
@MaverickMeerkat They assume the density to be non-increasing. On the other hand, the relation seems to be true without the monotonicity (or even when there is no density at all). – byk7 Feb 12 '25 at 19:57
Another way of thinking:
Consider $n>0$; integrating by parts, we have
$\int_{0}^{n}xF(dx)=nF(n)-\int_{0}^{n}F(x)dx = n-nS(n)-\int_{0}^{n}(1-S(x))dx$
$=\int_{0}^{n}S(x)dx-nS(n)$
where $S(x)=1-F(x)$ is the survival function.
As $n\to \infty$, the second term converges to zero. To see this, notice that $S(x)=\int_{x}^{\infty}f(t)dt$; provided $E(X)$ exists,
$\lim_{x\to \infty}xS(x)=\lim_{x\to \infty} x\int_{x}^{\infty}f(t)dt \leq \lim_{x\to \infty} \int_{x}^{\infty}tf(t)dt=0$
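The squeeze above can be illustrated numerically (my own example, assuming $X \sim \text{Exponential}(1)$): then $xS(x)=xe^{-x}$, while the dominating tail integral has the closed form $\int_x^\infty t e^{-t}\,dt=(x+1)e^{-x}$, and both vanish as $x$ grows.

```python
import math

# Assumed example: X ~ Exponential(1), so S(x) = exp(-x) and
# ∫_x^∞ t f(t) dt = (x + 1) exp(-x) in closed form.
xs = [1.0, 5.0, 10.0, 50.0]
products = [x * math.exp(-x) for x in xs]            # x * S(x)
tails = [(x + 1.0) * math.exp(-x) for x in xs]       # dominating tail

# x S(x) stays below the tail integral, and both tend to zero.
assert all(p <= q for p, q in zip(products, tails))
assert products[-1] < 1e-19
```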
Background. [Durrett, 2010, Exercise 1.7.2] Let $g \geq 0$ be a measurable function on a sigma-finite measure space $(\Omega, \mathcal{F}, \mu)$. Then \begin{align} \int_\Omega g \,d\mu = \int_0^\infty \mu(\{\omega : g(\omega) > y\}) \, d{y} \quad\quad\quad (1) \end{align} When $\Omega=\mathbb{R}$, this says that the "area under the curve" of $g$ can be obtained by integrating either vertical or horizontal cross-sections. This equality can be shown quickly via Fubini-Tonelli.
Argument. We want to show that the expected value of a random variable $T$ equals the integral of its survival function; i.e. \begin{align} E[T] := \int_\Omega T(\omega) \, d{P(\omega)} = \int_0^\infty P(\omega: T(\omega) > y) \, dy \quad\quad\quad (2) \end{align}
Now Eq. (2) is obtained from Eq. (1) by simply taking $\mu=P$ and the function of interest to be the random variable, $g=T$. Applying the "Law of the Unconscious Statistician", we can rewrite Eq. (2) in terms of the induced probability measure $P_T$ on the Borel subsets of the reals: \begin{align} E[T] = \int_\mathbb{R} t \, d{P_T(t)} = \int_0^\infty P_T(t: t > y) \, dy \quad\quad\quad (3) \end{align}
And so Eq. (3) is just a special case of the fact that the area under a curve can be obtained either by integrating vertical or horizontal cross sections.
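The cross-section picture in Eq. (1) is easy to test in a discrete setting (my own example): take $\mu$ to be the counting measure on a finite set and $g$ integer-valued, so $\int g\,d\mu$ is just $\sum g$, while the horizontal cross-section at height $y$ has measure $\#\{\omega : g(\omega) > y\}$, constant on each interval $[k, k+1)$.

```python
# Assumed discrete example: μ = counting measure on a 5-point set,
# g integer-valued, so Eq. (1) reduces to sums.
g = [3, 1, 4, 1, 5]

lhs = sum(g)  # ∫ g dμ under the counting measure

# ∫_0^∞ μ({g > y}) dy: the count #{g > y} is constant on [k, k+1),
# so the integral is a finite sum of those counts.
rhs = sum(sum(1 for v in g if v > k) for k in range(max(g)))

assert lhs == rhs == 14
```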
Note: my answer is essentially @Did 's answer, but with some extra detail. – ashman Sep 09 '22 at 18:49
\begin{align} &\int_0^\infty S(t) \,\mathrm dt = \int_0^\infty \mathop{\mathbb P_T}\left(\left[t, \infty\right)\right) \,\mathrm dt \\ = &\int_0^\infty \left(\int_{\mathbb R} \mathop{\mathbf1_{\left[t,\infty\right)}}(x) \,\mathrm d \mathbb P_T(x) \right)\,\mathrm dt = \int_0^\infty \left(\int_{\mathbb R} \mathop{\mathbf1_{\left[0,x\right]}}(t) \,\mathrm d \mathbb P_T(x) \right)\,\mathrm dt. \end{align} Note that $f: \mathbb R^2 \to \mathbb R, f(t,x) \mathrel{:=} \mathop{\mathbf1_{\left[0,x\right]}}(t)$ is $\left(\mathcal B(\mathbb R) \otimes \mathcal B(\mathbb R), \mathcal B(\mathbb R)\right)$-measurable since $A \mathrel{:=} \left\{(x,t) \in \mathbb R^2: 0 \leq t \leq x\right\} \in \mathcal B(\mathbb R^2) = \mathcal B(\mathbb R) \otimes \mathcal B(\mathbb R)$ and $\mathop{\mathbf1_{\left[0,x\right]}}(t) = \mathop{\mathbf1_A} (x, t)$ for all $(x,t) \in \mathbb R^2.$
Therefore, we can apply Fubini's theorem to get $$ \int_0^\infty \left(\int_{\mathbb R} \mathop{\mathbf1_{\left[0,x\right]}}(t) \,\mathrm d \mathbb P_T(x) \right)\,\mathrm dt = \int_{\mathbb R} \left(\int_0^\infty \mathop{\mathbf1_{\left[0,x\right]}}(t) \,\mathrm dt \right)\,\mathrm d \mathbb P_T(x) = \int_{\mathbb R} x\,\mathrm d \mathbb P_T(x). $$ And, by the law of the unconscious statistician, we have $$ \int_{\mathbb R} x\,\mathrm d \mathbb P_T(x)= \int_\Omega T(\omega) \,\mathrm d \mathbb P(\omega) = \mathop{\mathbb E}\left[T\right]. $$