
I've been told in class that, given a martingale difference sequence (MDS) $(X_t)_{t\geq 1}$, if $\mathbb{E}|X_t|^p < \infty$ for some $p> 1$, then $$ \frac{1}{T}\sum_{t=1}^T X_t \overset{P}{\to} \mathbb{E}[X_t] = 0. $$ However, the proof given in class cannot be right, because it uses the inequality $$ \mathbb{E}\bigg| \frac{1}{T}\sum_{t=1}^T X_t\bigg| \leq \mathbb{E}\bigg|\frac{1}{T}\sum_{t=1}^T X_t \mathbf{1}_{\{|X_t|>M\}}\bigg| $$ for some constant $M$ "sufficiently large", and that inequality does not hold in general. The idea of the proof is to apply Markov's inequality at some point.
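As a quick numerical sanity check of the statement (not a proof), here is a minimal simulation sketch of a *dependent* MDS with infinite variance, so that $\mathbb{E}|X_t|^p<\infty$ only for $p<1.5$; the particular construction (symmetrized Pareto innovations scaled by a bounded function of the past) is just an illustrative choice of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200_000

# Symmetrized Pareto (Lomax) innovations with tail index 1.5:
# E|eps_t|^p < infinity only for p < 1.5, so the variance is infinite.
eps = rng.pareto(1.5, size=T) * rng.choice([-1.0, 1.0], size=T)

# X_t = eps_t * g(X_{t-1}) with g bounded and positive: the X_t are
# dependent, yet E[X_t | X_1, ..., X_{t-1}] = g(X_{t-1}) * E[eps_t] = 0,
# so (X_t) is a martingale difference sequence.
X = np.empty(T)
prev = 0.0
for t in range(T):
    X[t] = eps[t] * (1.0 + 0.5 * np.tanh(prev))
    prev = X[t]

running_avg = np.cumsum(X) / np.arange(1, T + 1)
print(running_avg[[999, 9_999, 99_999, T - 1]])  # drifts toward 0
```

The running average does settle near zero, consistent with the claimed law of large numbers, even though the $X_t$ here are neither independent nor square-integrable.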

On the other hand, I have seen that a similar result is proven by John Elton (1981) in The Annals of Probability. However, he assumes independence. Is independence necessary for this result to hold, or is it a hypothesis that can be relaxed?

Thanks

R__
  • A standard reference for something like this is Chow (1971). That paper uses a truncation technique similar to what you saw in class. Decomposing $X$ into parts with $|X|\leq M$ and $|X|> M$, with $M$ "large enough", is a standard technique for proving laws of large numbers. Both parts are then manipulated separately. Your inequality above seems to be missing the $|X|\leq M$ part. – Galton Apr 08 '22 at 17:01
  • Yeah, that's what I argued to the professor, but I don't see how to prove that the $|X|\leq M$ part goes to zero... I will take a look at the paper. Thanks – R__ Apr 08 '22 at 17:10
  • You don't prove that the $\leq M$ part goes to zero. Usually, you recenter with $E X 1(X \leq M)$ and then you can apply moment inequalities to that object because it is bounded. As $M$ grows large, that object will behave a lot like the original problem and the $X > M$ part will disappear by an argument involving the dominated convergence theorem. – Galton Apr 08 '22 at 17:28
  • But I need the RHS to converge to zero; what's the point, then, of showing that the $\leq M$ part is bounded? I'm not getting the point, sorry – R__ Apr 08 '22 at 17:51
  • The strategy is usually as follows: you would show that the sample average of $X_i\mathbf 1(X_i \leq M) - \mathbb E X_i \mathbf 1(X_i \leq M)$ goes to zero for every fixed $M$, even if that $M$ is very large, using moment inequalities that can be applied because this sample average is guaranteed to have finite moments. You would then show that the sample average of $X_i\mathbf 1(X_i > M) - \mathbb E X_i \mathbf 1(X_i > M)$ can be made as small as desired, without changing the first argument, by choosing $M$ large enough. The paper I linked uses this strategy very effectively. – Galton Apr 08 '22 at 18:11
  • Okay, I will try. Thanks – R__ Apr 08 '22 at 19:03
  • Even the statement should be clarified: are the $X_t$ supposed to be uniformly bounded in $\mathbb L^p$? – Davide Giraudo Apr 11 '22 at 08:01
  • I agree... but I shared the statement as it was provided to me... – R__ Apr 12 '22 at 10:41

1 Answer


In the paper by Chow (1971), a stronger result was shown: if $(X_t)_{t\geqslant 1}$ is a uniformly integrable martingale difference sequence, then $T^{-1}\mathbb E\left\lvert\sum_{t=1}^TX_t \right\rvert\to 0$.

In particular, one does not need moments of order $p>1$.

Let us explain the idea. We consider the truncated version of $X_t$ defined by $$ X_{t,\leqslant M}:=X_t\mathbf{1}\{\lvert X_t\rvert\leqslant M\}-\mathbb E\left[X_t\mathbf{1}\{\lvert X_t\rvert\leqslant M\}\mid\mathcal F_{t-1}\right] $$ and the tail part $$ X_{t,\gt M}:=X_t\mathbf{1}\{\lvert X_t\rvert\gt M\}-\mathbb E\left[X_t\mathbf{1}\{\lvert X_t\rvert\gt M\}\mid\mathcal F_{t-1}\right]. $$ In this way, $X_t=X_{t,\leqslant M}+X_{t,\gt M}$ and $\left(X_{t,\leqslant M}\right)_{t\geqslant 1}$ and $\left(X_{t,\gt M}\right)_{t\geqslant 1}$ are martingale difference sequences.
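Here $(\mathcal F_t)_{t\geqslant 0}$ denotes the filtration with respect to which $(X_t)_{t\geqslant 1}$ is a martingale difference sequence. To spell out why, say, $\left(X_{t,\leqslant M}\right)_{t\geqslant 1}$ is again a martingale difference sequence: the recentering term is $\mathcal F_{t-1}$-measurable, so $$ \mathbb E\left[X_{t,\leqslant M}\mid\mathcal F_{t-1}\right]=\mathbb E\left[X_t\mathbf{1}\{\lvert X_t\rvert\leqslant M\}\mid\mathcal F_{t-1}\right]-\mathbb E\left[X_t\mathbf{1}\{\lvert X_t\rvert\leqslant M\}\mid\mathcal F_{t-1}\right]=0, $$ and the same computation applies to $X_{t,\gt M}$.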

Let $\tau\colon M\mapsto \sup_{t\geqslant 1}\mathbb E\left[\left\lvert X_t\right\rvert\mathbf{1}\{\left\lvert X_t\right\rvert>M\}\right]$; by definition of uniform integrability, $\tau(M)\to 0$ as $M$ goes to infinity.
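In particular, the hypothesis in the question implies uniform integrability: if $\sup_{t\geqslant 1}\mathbb E\lvert X_t\rvert^p<\infty$ for some $p>1$, then on the event $\{\lvert X_t\rvert>M\}$ we have $\lvert X_t\rvert\leqslant \lvert X_t\rvert^p/M^{p-1}$, hence $$ \tau(M)\leqslant \frac{\sup_{t\geqslant 1}\mathbb E\lvert X_t\rvert^p}{M^{p-1}}\xrightarrow[M\to\infty]{}0. $$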

Observe that $$\tag{0} \frac 1T\mathbb E\left\lvert\sum_{t=1}^TX_t \right\rvert\leqslant \frac 1T\mathbb E\left\lvert\sum_{t=1}^TX_{t,\leqslant M}\right\rvert+\frac 1T\mathbb E\left\lvert\sum_{t=1}^TX_{t,\gt M}\right\rvert; $$ therefore, we have to bound the contribution of each term.

The second one is easier to treat: by the triangle inequality and the conditional Jensen inequality, $\mathbb E\left\lvert X_{t,\gt M}\right\rvert\leqslant 2\mathbb E\left[\lvert X_t\rvert\mathbf 1\{\lvert X_t\rvert>M\}\right]$, hence $$ \frac 1T\mathbb E\left\lvert\sum_{t=1}^TX_{t,\gt M}\right\rvert\leqslant 2\tau(M)\tag{1}. $$

For the first one, we use the Cauchy–Schwarz inequality and the fact that the random variables $\left(X_{t,\leqslant M}\right)_{t\geqslant 1}$, being a bounded martingale difference sequence, are pairwise orthogonal. We get $$ \frac 1T\mathbb E\left\lvert\sum_{t=1}^TX_{t,\leqslant M}\right\rvert \leqslant \frac 1T\sqrt{ \mathbb E\left[\left(\sum_{t=1}^TX_{t,\leqslant M}\right)^2\right] }=\frac 1T\sqrt{ \sum_{t=1}^T\mathbb E\left[X_{t,\leqslant M} ^2\right] }. $$ Then, by $(a+b)^2\leqslant 2a^2+2b^2$ and the conditional Jensen inequality, $$ \mathbb E\left[X_{t,\leqslant M} ^2\right]\leqslant 4\mathbb E\left[X_t^2\mathbf{1}\{\lvert X_t\rvert\leqslant M\}\right]\leqslant 4M\tau(0)\tag{2}, $$ since $X_t^2\mathbf{1}\{\lvert X_t\rvert\leqslant M\}\leqslant M\lvert X_t\rvert$ and $\tau(0)=\sup_{t\geqslant 1}\mathbb E\lvert X_t\rvert$. Hence the combination of (0), (1) and (2) gives $$ \frac 1T\mathbb E\left\lvert\sum_{t=1}^TX_t \right\rvert\leqslant 2\frac{\sqrt M}{\sqrt T}\sqrt{\tau(0)}+2\tau(M). $$ Take $M=\sqrt T$ to conclude.
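To make the last step explicit: with $M=\sqrt T$ the bound reads $$ \frac 1T\mathbb E\left\lvert\sum_{t=1}^TX_t \right\rvert\leqslant \frac{2\sqrt{\tau(0)}}{T^{1/4}}+2\tau\left(\sqrt T\right), $$ and both terms tend to $0$, the first as a negative power of $T$ and the second by uniform integrability. This gives the $\mathbb L^1$ convergence; the convergence in probability asked about in the question then follows from Markov's inequality: for every $\varepsilon>0$, $$ \mathbb P\left(\left\lvert\frac 1T\sum_{t=1}^TX_t\right\rvert>\varepsilon\right)\leqslant \frac 1{\varepsilon T}\,\mathbb E\left\lvert\sum_{t=1}^TX_t \right\rvert\longrightarrow 0. $$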

Davide Giraudo