-1

For context, I’m a high schooler self-studying real analysis. Is my proof correct that the Cesàro mean sequence $C_{n}:=\frac{x_{1}+\dots+x_{n}}{n}$ converges to $a$ whenever $x_{n}$ converges to $a$?

Proof: From the assumption that $x_{n} \to a$, we can conclude that there exists an $N \in \mathbb{Z}^{+}$ for each $\epsilon \in \mathbb{R}^{+}$, such that $|x_{n}-a|<\epsilon$, for $n\ge N$. We want to show that there exists an $N \in \mathbb{Z}^{+}$ for each $\epsilon \in \mathbb{R}^{+}$, such that $|C_{n}-a|<\epsilon$, for $n\ge N$. Since $C_{n}$ is the mean of values from $x_{n}$, there does not exist any $n$ for which $C_{n}>\max_{n\in \mathbb{Z}^{+}} \{x_{n}\}$. Then for large $n$, we have that $|C_{n}-a|<|x_{n}-a|<\epsilon$, which implies that $|C_{n}-a|<\epsilon$, as desired.

I’m mostly worried that my proof skips over steps, or is based on a faulty premise. Please be as nit-picky as you deem appropriate!

Blabby
  • 17
  • The first two sentences are right (unpacking the hypothesis and the goal), but the rest doesn't work. To prove the goal, you need to show how $N$ can be chosen for a given $\epsilon$. To ensure $C_n$ is close enough to $a$, you need to ensure that enough elements in the mean are close to $a$ to offset the early elements that may be far from $a$. Think about how the actual calculation will work out. – Karl Dec 18 '24 at 17:10

2 Answers

2

The main issue is that your leap to $|C_n - a| < |x_n - a|$ is not justified (and I think this inequality isn't even true). As far as I can tell, you're using two inferences, both of which are incorrect:

First, even if every $C_n$ is at most $\max\{x_n\}$, that doesn't necessarily imply $C_n\leq x_n$ for every fixed large $n$.

Second, $C_n \leq x_n$ doesn't necessarily imply $|C_n - a| \leq |x_n - a|$, because it's not valid to chain inequalities inside absolute values like that. (Taking $a=1$ as an example, try to come up with examples with actual numbers $C, x$ where $C\leq x$, but $|C-1| > |x-1|$.)

Your observation that $C_n \leq \max\{x_n\}$ is correct, but not very helpful. I'd start over.

A hint for this exercise: write $C_n - a$ as $\frac{1}{n}\sum_{k=1}^n (x_k - a)$, and break the sum into two parts, one covering large $k$ (where $x_k$ is close to $a$) and one covering small $k$. Then try to show that both parts are small.
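Not part of the proof, but if it helps to see what the split buys you, here is a minimal numerical sketch. The sequence $x_k = a + (-1)^k/k$ and the cutoff $m=10$ are just illustrative choices: the head of the sum has a fixed numerator, so its contribution shrinks like $1/n$, while every term in the tail is already at most $1/(m+1)$ in absolute value.

```python
# Illustration of the hint: C_n - a = (1/n) * sum_{k=1}^n (x_k - a),
# split at a cutoff m into a "head" (k <= m) and a "tail" (k > m,
# where |x_k - a| is already small).
# x_k = a + (-1)**k / k is an illustrative choice of convergent sequence.

a = 1.0
m = 10  # cutoff index

def x(k):
    return a + (-1) ** k / k

for n in (100, 1_000, 10_000):
    head = sum(x(k) - a for k in range(1, m + 1)) / n      # fixed numerator, shrinks like 1/n
    tail = sum(x(k) - a for k in range(m + 1, n + 1)) / n   # each summand at most 1/(m+1) in size
    print(f"n={n:>6}  head={head:+.2e}  tail={tail:+.2e}  C_n - a={head + tail:+.2e}")
```

In the written proof, the same two observations become: the head is a fixed number divided by $n$, so it is eventually below $\epsilon/2$, and the tail is an average of at most $n$ numbers, each smaller than $\epsilon/2$ in absolute value.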

But really, this is a pretty challenging exercise. My undergraduate students struggle with it. You may want to warm up with more basic exercises like: if $x_n \to a$, prove using the $\varepsilon$-$N$ definition of convergence that $x_n^2 + 1\to a^2 +1$.

  • Thank you so much for the comprehensive answer! I'll give it another go once my semester exams are over with. Would it be most appropriate to respond to my existing post or to make another one once I believe that I've landed on a valid solution? – Blabby Dec 19 '24 at 03:01
0

Here is a proof from first principles with explicit bounds.

This is another of my "nothing original but redone my way" answers.

Theorem.

If $\lim_{n\to\infty} x_n=L $, then $\lim_{n\to\infty} \dfrac1{n}\sum_{k=1}^n x_k=L $.

Proof.

If $\lim_{n\to\infty} x_n=L $, then, for any $c > 0$, there is an $n_0(c)$ such that for all $n > n_0(c)$, $L-c \lt x_n \lt L+c $.

Let $s(n) =\sum_{k=1}^{n} x_k $.

Therefore, for any $N > n_0(c)$,

$\begin{aligned} \sum_{n=n_0(c)+1}^{N} x_n &\lt\sum_{n=n_0(c)+1}^{N} (L+c)\\ &=(N-n_0(c))(L+c)\\ &=NL+Nc-n_0(c)(L+c)\\ \text{and}\\ \sum_{n=n_0(c)+1}^{N} x_n &\gt\sum_{n=n_0(c)+1}^{N} (L-c)\\ &=(N-n_0(c))(L-c)\\ &=NL-Nc-n_0(c)(L-c) \end{aligned}$

so

$\begin{aligned} s(N) &=\sum_{k=1}^{N} x_k\\ &=\sum_{k=1}^{n_0(c)} x_k+\sum_{k=n_0(c)+1}^{N} x_k\\ &=s(n_0(c))+\sum_{k=n_0(c)+1}^{N} x_k\\ &\lt s(n_0(c))+NL+Nc-n_0(c)(L+c)\\ \text{and}\\ s(N) &=s(n_0(c))+\sum_{k=n_0(c)+1}^{N} x_k\\ &\gt s(n_0(c))+NL-Nc-n_0(c)(L-c) \end{aligned}$

or

$\begin{aligned} s(N)-NL &\lt s(n_0(c))+Nc-n_0(c)(L+c)\\ \text{and}\\ s(N)-NL &\gt s(n_0(c))-Nc-n_0(c)(L-c) \end{aligned}$

Dividing by $N$,

$\begin{aligned} \dfrac{s(N)}{N}-L &\lt c+\dfrac{s(n_0(c))-n_0(c)(L+c)}{N}\\ \text{and}\\ \dfrac{s(N)}{N}-L &\gt -c+\dfrac{s(n_0(c))-n_0(c)(L-c)}{N} \end{aligned}$

Then, for any $\epsilon > 0$, let $c=\dfrac{\epsilon}{2}$ and choose $N > n_0(c)$ so that

$\begin{aligned} \dfrac{\epsilon}{2} &\gt\left|\dfrac{s(n_0(c))-n_0(c)(L+c)}{N}\right|\\ \text{and}\\ \dfrac{\epsilon}{2} &\gt\left|\dfrac{s(n_0(c))-n_0(c)(L-c)}{N}\right|\\ \text{or}\\ N &\gt \dfrac{2}{\epsilon}\max(|s(n_0(c))-n_0(c)(L+c)|, |s(n_0(c))-n_0(c)(L-c)|) \end{aligned}$

Then $\left|\dfrac{s(N)}{N}-L\right| \lt \epsilon$.

If $L > 0$ then simplifications can be made in the formula for $N$.
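As a numerical sanity check of the bound (nothing here is part of the argument above), the sketch below plugs in the illustrative sequence $x_n = L + (-1)^n/\sqrt{n}$, for which $|x_n - L| = 1/\sqrt{n}$ and so one may take $n_0(c) = \lceil 1/c^2 \rceil$, computes $N$ from the formula, and confirms that $\left|\dfrac{s(N)}{N} - L\right| \lt \epsilon$. The values of $L$ and $\epsilon$ are arbitrary.

```python
import math

# Numerical sanity check of the explicit bound derived above.
# The sequence x_n = L + (-1)**n / sqrt(n) and the values of L and eps
# below are illustrative choices, not part of the original argument.

L = 2.0
eps = 0.01
c = eps / 2

def x(n):
    return L + (-1) ** n / math.sqrt(n)

# For this sequence |x_n - L| = 1/sqrt(n), so |x_n - L| < c once n > 1/c^2.
n0 = math.ceil(1 / c ** 2)

s_n0 = sum(x(k) for k in range(1, n0 + 1))   # s(n_0(c))

# N must exceed both n_0(c) and (2/eps) * max(...) from the last display.
bound = (2 / eps) * max(abs(s_n0 - n0 * (L + c)),
                        abs(s_n0 - n0 * (L - c)))
N = max(math.ceil(bound), n0) + 1

s_N = sum(x(k) for k in range(1, N + 1))
print(f"n0 = {n0}, N = {N}, |s(N)/N - L| = {abs(s_N / N - L):.2e} < eps = {eps}")
```

The resulting $N$ is far from sharp, but it does not need to be: it only has to make the fixed correction terms $\dfrac{s(n_0(c))-n_0(c)(L\pm c)}{N}$ smaller than $\dfrac{\epsilon}{2}$.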

marty cohen
  • 110,450