
[Problem statement image: $X_1, X_2, \ldots$ are i.i.d. random variables, uniformly distributed on $(0,1)$; $M_n = \max\{X_1, \ldots, X_n\}$ and $T_n$ is the index at which this maximum is attained, so $M_n = X_{T_n}$. Part (a): show that the values are almost surely all distinct. Part (b): show that $T_n \to \infty$ almost surely.]

Hello, I am trying to solve the following problem. For part (a) my idea is as follows: $P\{\text{all values are distinct}\}=1-P\{\text{there exist } i\ne j \text{ such that } X_i=X_j\}$, and $P\left(\bigcup_{i\ne j}\{X_i=X_j\}\right)\leq \sum_{i\ne j}P(X_i=X_j)$. So it suffices to show that $P(X_i=X_j)=0$ for any $i\ne j$. To do this I was going to take the convolution to find the distribution of $X_i-X_j$; if we can show it has a density, then any single point has probability $0$. Is this reasoning accurate?

For part (b), I don't know how to approach it. I want to say that it should be true: the density ensures that we can always find some bigger value still less than $1$, and with enough trials we will eventually reach such a value. However, I don't know how to formalize this in the structure given. Any guidance would be very much appreciated.


2 Answers


For part (a), you are correct. For the final step, note that for $i \neq j$, $X_i - X_j$ is a continuous random variable, so $\mathbb{P}(X_i - X_j = 0) = 0$ for all $i \neq j$.
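To make the convolution step concrete (a worked computation, assuming as elsewhere in the thread that the $X_i$ are i.i.d. uniform on $(0,1)$): for $D = X_i - X_j$ with $i \neq j$, $$ f_D(d) = \int_{-\infty}^{\infty} \mathbf{1}_{(0,1)}(u)\,\mathbf{1}_{(0,1)}(u-d)\,du = 1 - \vert d \vert, \qquad \vert d \vert \le 1, $$ a triangular (not uniform) density. Since $D$ has a density, $\mathbb{P}(X_i = X_j) = \mathbb{P}(D = 0) = 0$.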

For part (b), let $M_n = \max\{X_1, X_2, \ldots, X_n\}$, and notice that $M_n = X_{T_n}$. It is readily seen that the following is true: $$ \lim_{n \rightarrow \infty} M_n = 1 \implies \lim_{n \rightarrow \infty} T_n = \infty $$

First, notice that $(T_n)$ is a non-decreasing sequence taking values in $\mathbb{N}$. So, if $\lim_{n \rightarrow \infty} T_n < \infty$, then $T_n$ is constant after some $n_0 \in \mathbb{N}$. Thus, $M_n = X_{T_n} = X_{T_{n_0}}$ for all $n \ge n_0$, which means $\lim_{n \rightarrow \infty} M_n = X_{T_{n_0}} < 1$ almost surely (the strict inequality holds because $\mathbb{P}(X_{T_{n_0}} = 1) = 0$).

Thus, $$ \mathbb{P}\left(\lim_{n \rightarrow \infty} M_n = 1\right) \le \mathbb{P}\left(\lim_{n \rightarrow \infty} T_n = \infty\right) $$

For $0 < \epsilon < 1$, consider the following series: $$ \sum_{n = 1}^\infty \mathbb{P}(\vert M_n - 1 \vert > \epsilon) = \sum_{n = 1}^\infty \mathbb{P}(M_n < 1 - \epsilon) \stackrel{\text{indep.}}{=} \sum_{n = 1}^\infty [\mathbb{P}(X_1 < 1 - \epsilon)]^n = \sum_{n = 1}^\infty (1 - \epsilon)^n < \infty $$ The Borel-Cantelli lemma gives $\mathbb{P}(\vert M_n - 1 \vert > \epsilon \text{ i.o.}) = 0$ for every such $\epsilon$, hence $\mathbb{P}\left(\lim_{n \rightarrow \infty} M_n = 1\right) = 1$. Therefore, $\mathbb{P}\left(\lim_{n \rightarrow \infty} T_n = \infty\right) = 1$.
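As a quick numerical sanity check of the summand above (a minimal simulation sketch, assuming $X_i \sim \mathrm{Unif}(0,1)$; the variable names are ours, not from the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps, trials = 50, 0.05, 100_000

# Each row is one sample of (X_1, ..., X_n); M_n is the row-wise maximum.
M_n = rng.random((trials, n)).max(axis=1)

# Empirical P(M_n < 1 - eps) against the exact value (1 - eps)^n.
print("empirical:", (M_n < 1 - eps).mean())   # ~0.077
print("exact:    ", (1 - eps) ** n)           # 0.95^50 ≈ 0.0769
```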


See that for a fixed integer $M$ and all $n>M$, $\{T_{n}\leq M\}=\bigcap_{k=1}^{n-M}\{X_{M+k}<\max(X_{1},\ldots,X_{M})\}$, up to a null set (ties occur with probability $0$ by part (a)).

Each $X_{M+k}$ is independent of $(X_{1},\ldots,X_{M})$, and $(X_{M+k})_{k}$ is still a sequence of i.i.d. uniform random variables, so the events $\{X_{M+k}<\max(X_{1},\ldots,X_{M})\}$ all have the same probability. They are not independent, however: they share the random threshold $\max(X_{1},\ldots,X_{M})$, and they become independent only after conditioning on its value.

Now $P(T_{n}\leq M)=\frac{M\cdot(n-1)!}{n!}=\frac{M}{n}$

You can see this by a simple combinatorial argument. Since the $X_{i}$ are i.i.d. with a continuous distribution, all $n!$ orderings of $X_{1},\ldots,X_{n}$ are equally likely, and there are $M\cdot(n-1)!$ ways to seat $n$ persons from left to right so that the rightmost seat goes to one of the first $M$ persons. Otherwise, see that $\max(X_{1},\ldots,X_{M})$ has the density $Mx^{M-1}$ and condition on its value.

So $P(T_{n}\leq M)=\int_{0}^{1}Mx^{M-1}\cdot x^{n-M}\,dx=\int_{0}^{1}Mx^{n-1}\,dx=\frac{M}{n}$
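A quick Monte Carlo check of this formula (again a sketch under the i.i.d. uniform assumption; NumPy's `argmax` breaks ties by the first index, which is harmless here since ties have probability $0$):

```python
import numpy as np

rng = np.random.default_rng(0)
M, n, trials = 3, 10, 100_000

# T_n is the (1-based) index at which max(X_1, ..., X_n) is attained.
X = rng.random((trials, n))
T_n = X.argmax(axis=1) + 1

# Empirical P(T_n <= M) against the exact value M / n.
print("empirical:", (T_n <= M).mean())  # ~0.30
print("exact:    ", M / n)              # 0.3
```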

Since $\sum_{n}\frac{M}{n}$ diverges, the Borel-Cantelli lemma is of no use here; monotonicity is. The $T_{n}$ are non-decreasing, so the events $\{T_{n}\leq M\}$ decrease in $n$, and continuity from above gives

$$P(T_{n}\leq M\ \text{for all}\ n)=\lim_{n\to\infty}P(T_{n}\leq M)=\lim_{n\to\infty}\frac{M}{n}=0$$

This holds for all integers $M$. Thus $$P\Big(\sup_{n}T_{n}=\infty\Big)=P\bigg(\bigcap_{M}\,\Big\{\sup_{n}T_{n}>M\Big\}\bigg)=1$$

Couple this with the fact that the $T_{n}$ form a non-decreasing sequence of random variables, so that $\sup_{n}T_{n}=\lim_{n\to\infty}T_{n}$, to get $$P\Big(\lim_{n\to\infty}T_{n}=\infty\Big)=1$$
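To see the conclusion on a single simulated path (a minimal sketch: the successive values taken by $T_n$ are exactly the record times of the sequence, extracted here via the running maximum):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000

# One path X_1, ..., X_N; a record time is an index n where X_n exceeds
# all earlier values. T_n equals the most recent record time <= n.
X = rng.random(N)
running_max = np.maximum.accumulate(X)
record_times = np.flatnonzero(X == running_max) + 1  # 1-based indices

# Roughly log(N) ≈ 14 records, growing without bound as N increases.
print(record_times)
```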