
Question:

Let $Z_n$, $n\geq 1$, be a sequence of random variables and $c$ a constant such that, for each $\epsilon > 0$, $P\{|Z_n-c|>\epsilon\}\rightarrow 0$ as $n \rightarrow \infty$. Show that, for any bounded continuous function $g$, $$E[g(Z_n)]\rightarrow g(c)\ \text{ as } n\rightarrow\infty.$$
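Before attempting a proof, a quick numerical sanity check may help; this is only a sketch with an illustrative choice of sequence of my own, $Z_n = c + N(0,1/n)$ (which converges to $c$ in probability), and $g=\tanh$ as the bounded continuous function:

```python
import numpy as np

rng = np.random.default_rng(0)
c = 2.0
g = np.tanh  # a bounded continuous function

for n in [10, 100, 1000, 10000]:
    # Illustrative choice: Z_n = c + N(0, 1/n), which -> c in probability
    z_n = c + rng.normal(0.0, 1.0 / np.sqrt(n), size=100_000)
    print(n, abs(g(z_n).mean() - g(c)))  # estimate of |E[g(Z_n)] - g(c)|
```

The printed gap shrinks toward $0$ as $n$ grows, consistent with the claim.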

Answer attempt:

I set $Z^n=Z_1+Z_2+...+Z_n$ to be my sequence of random variables.

Chebyshev's inequality states $P\{|X-\mu|\geq k\}\leq \frac{\sigma^2}{k^2}$. Based on this inequality I set $c=n\mu$, where $\mu$ is the mean of each $Z_i$, and $\epsilon = n\epsilon_0$, where $\epsilon_0>0$ is some arbitrary constant. Since $\operatorname{Var}(Z^n)=n\sigma^2$ for i.i.d. $Z_i$, plugging this into Chebyshev's inequality I get: $$P\left(|Z^n-\mu n|\geq\epsilon_0 n\right)=P\left(\left|\frac{Z^n}{n}-\mu\right|\geq\epsilon_0\right)\leq \frac{n\sigma^2}{n^2 \epsilon^2_0}=\frac{\sigma^2}{n \epsilon^2_0}.$$ So this goes to $0$ as $n\rightarrow \infty$, so it makes sense that $c=n\mu$.
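As an aside, here is a quick empirical check of this bound, with an illustrative distribution of my own choosing: assuming i.i.d. $Z_i\sim\mathrm{Exp}(1)$, so $\mu=\sigma^2=1$, a short simulation compares the empirical tail probability of the sample mean against the Chebyshev bound $\sigma^2/(n\epsilon_0^2)$:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2, eps0 = 1.0, 1.0, 0.5  # Exp(1) has mean 1 and variance 1

for n in [10, 100, 1000]:
    # 10,000 replications of the sample mean Z^n / n for i.i.d. Exp(1)
    sample_means = rng.exponential(1.0, size=(10_000, n)).mean(axis=1)
    empirical = (np.abs(sample_means - mu) >= eps0).mean()
    chebyshev = sigma2 / (n * eps0**2)
    print(n, empirical, chebyshev)  # empirical tail probability <= bound
```

The empirical probability stays below the bound and both decay as $n$ grows.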

So then, to prove $E[g(Z^n)] \rightarrow g(c)$ as $n\rightarrow\infty$, I am not sure what to do. I think, intuitively, $E[g(Z^n/n)] \rightarrow g(\mu)$ as $n\rightarrow\infty$, which would prove the claim.

But how do you prove $\sum_{i=0}^\infty g(z_i)p(z_i)=g(\mu)$? Anyway, I am lost.

Frank
    I suppose you mean $P(|Z_n-c|>\epsilon) \to \color{red}{0}$...? As it is written now it doesn't make any sense. – saz May 16 '18 at 19:25
  • It seems to me that this question is equivalent to the question: "Convergence in probability implies convergence in distribution". This is standard. See e.g. https://math.stackexchange.com/questions/236955/convergence-in-probability-implies-convergence-in-distribution – user52227 May 18 '18 at 12:53

1 Answer


Sorry, I do not follow your attempt. Here is a possible way:

What you are given says that for every $\epsilon >0$, $P(Z_n\notin B_\epsilon(c))\to 0$ as $n\to\infty$, where $B_\epsilon(c)=\{x:|x-c|<\epsilon\}$. For $f$ bounded and continuous and an arbitrary $\delta>0$, you can use the inequality \begin{align} |E[f(Z_n)]-f(c)|\le&\, E[|f(Z_n)-f(c)|\mathbf 1_{B_\epsilon(c)}(Z_n)]+|E[(f(Z_n)-f(c))\mathbf 1_{B_\epsilon(c)^c}(Z_n)]|\\ \le &\, E[|f(Z_n)-f(c)|\mathbf 1_{B_\epsilon(c)}(Z_n)]+ 2\sup_x|f(x)|\,P[Z_n\notin B_\epsilon(c)]. \end{align} By continuity of $f$ at $c$, you can choose $\epsilon$ small enough that $|f(x)-f(c)|\le \delta$ for all $x\in B_\epsilon(c)$, which makes $E[|f(Z_n)-f(c)|\mathbf 1_{B_\epsilon(c)}(Z_n)]\le \delta$ (independently of $n$); then choose $n$ big to make the second term smaller than $\delta$.
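Spelled out, once $\epsilon$ is fixed this way the bound becomes, for all $n$ large enough, $$|E[f(Z_n)]-f(c)|\le \delta + 2\sup_x|f(x)|\,P(Z_n\notin B_\epsilon(c))\le 2\delta,$$ and since $\delta>0$ was arbitrary, $E[f(Z_n)]\to f(c)$ as $n\to\infty$.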

(In general, convergence in probability implies convergence in distribution, which is equivalent to $E[f(Z_n)]\to E[f(Z)]$ for every bounded continuous $f$. When the limit is a constant, as here, convergence in distribution implies convergence in probability as well, so the result can be improved to an equivalence.)

Rgkpdx