7

Let $a$ and $b$ be two real numbers such that $a < b$, let $E$ be any countable subset of the open interval $(a,b)$, and let the elements of $E$ be arranged in a sequence $$x_1, x_2, x_3, \ldots.$$ Now let $\{c_n\}$ be any sequence of positive real numbers such that the series $\sum c_n$ converges.

Now define the function $f \colon (a,b) \to \mathbb{R}$ as follows: $$f(x) := \sum_{x_n < x} c_n \quad \text{ for all } x \in (a,b).$$

Then Rudin states that (a) the function $f$ is monotonically increasing on $(a,b)$; if $a < x < y < b$, then $$f(y) = \sum_{x_n < y} c_n \geq \sum_{x_n < x} c_n = f(x),$$ because if any $x_n < x$, then that particular $x_n$ is obviously less than $y$ as well. (b) $f$ is discontinuous at every point of $E$; in fact, $$ f(x_n + ) - f(x_n - ) = c_n.$$ How does this hold? How can it be shown rigorously using the $\epsilon$-$\delta$ approach? (c) $f$ is continuous at every other point of $(a,b)$. How can this be shown rigorously?

Moreover, $f(x-) = f(x)$ at all points of $(a,b)$.
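
For intuition (an illustration added here, not part of Rudin's text), here is a small numerical sketch in Python. It truncates the series for one concrete, arbitrarily chosen $E$ and $(c_n)$ and checks the claimed monotonicity and the jump $f(x_n+)-f(x_n-)=c_n$.

    # Illustration only: approximate f by a finite truncation of the series.
    # The choices E = {k/10 : k = 1,...,9} inside (a,b) = (0,1) and c_n = 2^{-n}
    # are arbitrary; any countable E and summable positive (c_n) would do.
    a, b = 0.0, 1.0
    xs = [k / 10 for k in range(1, 10)]        # x_1, ..., x_9 (a finite stand-in for E)
    cs = [2.0 ** -n for n in range(1, 10)]     # c_1, ..., c_9

    def f(x):
        """f(x) = sum of c_n over all n with x_n < x (strict inequality)."""
        return sum(c for xn, c in zip(xs, cs) if xn < x)

    # (a) monotonicity, checked on a grid of sample points
    pts = [a + (b - a) * k / 1000 for k in range(1, 1000)]
    assert all(f(s) <= f(t) for s, t in zip(pts, pts[1:]))

    # (b) the jump at x_n: f(x_n + h) - f(x_n - h) equals c_n for small h > 0
    n, h = 4, 1e-9                             # look at x_4 = 0.4
    print(f(xs[n - 1] + h) - f(xs[n - 1] - h), "should equal c_4 =", cs[n - 1])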

7 Answers

11

$\newcommand{\eps}{\varepsilon}$One nice way to investigate the question rigorously is to consider the "unit step function" $$ H(x) = \begin{cases} 0 & \text{if $x \leq 0$,} \\ 1 & \text{if $x > 0$.} \end{cases} $$ The function $H$ is obviously non-decreasing and continuous everywhere except $0$.

For each positive integer $n$, the function $f_{n}(x) = c_{n} H(x - x_{n})$ is the "step of height $c_{n}$ at $x_{n}$"; again, this function is obviously non-decreasing and has a "jump" of size $c_{n}$ at $x_{n}$.

The interesting observation is that $$ f(x) = \sum_{n=1}^{\infty} f_{n}(x), $$ since $f_{n}(x) = 0$ unless $x_{n} < x$. It follows at once that $f$ is non-decreasing.

Parts (b) and (c) follow almost immediately from the (easy) fact that the preceding series "converges uniformly" to $f$. However, Rudin doesn't discuss uniform limits until Chapter 7 (if memory serves), so we'll have to establish a tool from the definitions.

Lemma: If $x \not\in E$, i.e., if $x \neq x_{n}$ for all $n$, then $f$ is continuous at $x$.

Proof (sketch): Fix $\eps > 0$ arbitrarily. Use summability of $(c_{n})$ to choose a natural number $N$ such that $$ \sum_{n = N+1}^{\infty} c_{n} < \eps. $$ Now pick $\delta > 0$ so that $(x - \delta, x + \delta)$ contains none of the $x_{n}$ with $n \leq N$; for example, take $$ \delta = \min \{|x_{n} - x| : 1 \leq n \leq N\}. $$ If $|x - y| < \delta$, then $$ |f(x) - f(y)| \leq \sum_{n=N+1}^{\infty} c_{n} < \eps. $$ (The first inequality requires justification; the point is, each of $f(x)$ and $f(y)$ is a sum of various $c_{n}$, but if $n \leq N$, then $x_{n}$ does not lie between $x$ and $y$, so "$c_{n}$ does not appear in the difference".)
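
To make the choices of $N$ and $\delta$ concrete, here is a small Python sketch of this recipe (my own illustration, not Rudin's; the enumeration of $E$ by the rationals in $(0,1)$ and the weights $c_n = 2^{-n}$ are arbitrary choices, and the infinite family is truncated for the computation).

    # Sketch of the lemma's recipe: given eps and a point x not in E, choose N
    # with tail sum < eps, then delta as in the proof, and check that
    # |f(x) - f(y)| < eps whenever |x - y| < delta (on a sample of points y).
    import itertools

    def rationals_in_01():
        """One enumeration x_1, x_2, ... of the rationals in (0, 1)."""
        seen = set()
        for q in itertools.count(2):
            for p in range(1, q):
                r = p / q
                if r not in seen:
                    seen.add(r)
                    yield r

    M = 5000                                   # truncate the infinite family for the demo
    xs = list(itertools.islice(rationals_in_01(), M))
    cs = [2.0 ** -n for n in range(1, M + 1)]  # c_n = 2^{-n}, so the tail past N is 2^{-N}

    def f(x):
        return sum(c for xn, c in zip(xs, cs) if xn < x)

    x, eps = 2.0 ** -0.5, 1e-3                 # x = 1/sqrt(2) is irrational, hence not in E

    N = 1
    while 2.0 ** -N >= eps:                    # smallest N with sum_{n > N} c_n < eps
        N += 1

    delta = min(abs(xn - x) for xn in xs[:N])  # none of x_1, ..., x_N lies within delta of x

    ys = [x + delta * k / 100 for k in range(-99, 100)]
    assert all(abs(f(x) - f(y)) < eps for y in ys)
    print("N =", N, "delta =", delta)

The printed $\delta$ is exactly the $\min \{|x_{n} - x| : 1 \leq n \leq N\}$ from the proof, and the assertion is the inequality $|f(x) - f(y)| < \eps$.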

This lemma handles part (c). Part (b) is immediate from the following "trick": For each $n$, we can "decompose" $f$ as $$ f(x) = \underbrace{f(x) - f_{n}(x)}_{g_{n}(x)} + f_{n}(x). $$ The difference $g_{n}(x)$ on the right-hand side is precisely the function constructed in the same manner as $f$, except by eliminating the point $x_{n}$ from the set $E$, and removing the corresponding summand from $f(x)$. As such $g_{n}$ is continuous at $x_{n}$ by the lemma (!). Since $f_{n}$ has a jump discontinuity at $x_{n}$, $f$ does, as well.

  • D.Hwang, can you explain your last paragraph about $g_n(x)$? Sorry, but I can't understand it, since English is not my native language. – RFZ Sep 15 '15 at 11:05
  • @Pacman: The idea is that the jump in $f(x)$ is "entirely caused by" the term $f_{n}$, so "removing" (i.e., subtracting) that term leaves a continuous function, $g_{n}$. More rigorously, removing one term from an infinite sequence (here, the discontinuities of $f$) leaves an infinite sequence (the discontinuities of $g_{n}$). By the lemma, $g_{n}$ is continuous at $x_{n}$, so $f = g_{n} + f_{n}$ must be discontinuous at $x_{n}$. – Andrew D. Hwang Sep 15 '15 at 14:31
  • D.Hwang, why is $g_n$ continuous at $x_n\in E$ by the lemma? The lemma says that $f(x)$ is continuous at every point of $(a,b)\setminus E$. – RFZ Sep 15 '15 at 14:50
  • The lemma says the sum of a sequence of a certain family of step functions is continuous at each point that is not a discontinuity of a term of the sequence. As you say, $f$ is a function of this type, but $g_{n}$ is also such a function. If you like, the "$f$" in the statement of the lemma refers to a generic sum of step functions, not (merely) to the function $f$ in the problem statement. – Andrew D. Hwang Sep 15 '15 at 16:10
2

Hint: Note that, if we assume for the picture that the $x_n$ are listed in increasing order, then for each $n\in \mathbb{N}$ and all $x\in (x_n,x_{n+1}]$ we have $$f(x)=\sum_{i=1}^n c_i,$$ which shows that $f$ is increasing on $(a,b)$, and also that $$f(x_n+)-f(x_n-)=\sum_{i=1}^n c_i-\sum_{i=1}^{n-1} c_i=c_n.$$ Also, note that the function is left-continuous.

It is better to first draw the function in your mind and then go for the $\epsilon$-$\delta$ proof, which follows easily from the picture.
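
For instance (a concrete case, not part of the original hint), take just three jump points $x_1 < x_2 < x_3$ in $(a,b)$ and $c_i = 2^{-i}$. Then $$ f(x) = \begin{cases} 0 & a < x \leq x_1, \\ \tfrac12 & x_1 < x \leq x_2, \\ \tfrac34 & x_2 < x \leq x_3, \\ \tfrac78 & x_3 < x < b, \end{cases} $$ a left-continuous staircase whose jump at each $x_i$ is exactly $c_i$.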

  • But this is only an intuitive argument; it's not a rigorous proof. – Saaqib Mahmood Mar 19 '15 at 12:25
  • Yes, I have not given you a rigorous proof, but I have shown you how to think of one. – Samrat Mukhopadhyay Mar 20 '15 at 06:36
  • @SamratMukhopadhyay, I upvoted you about a year ago, but now I realize that things can't always be made so simple as $f(x)=\sum_{i=1}^n c_i$. What if we want discontinuity at every rational in $(a,b)$? Then $f(x)$ is an infinite series for every $x$. – Silent Nov 18 '18 at 02:03
  • @Silent, that's a good point. In that case, though, we can write $f(x_n-)$ as a sum over a subset of the indices whose corresponding $c_i$'s, summed in full, produce $f(x_n+)$. – Samrat Mukhopadhyay Nov 18 '18 at 06:58
2

I think this example begs for the use of the concept of an absolutely summable family, as defined by Dieudonne in Chapter V, Section 3 of Foundations of Modern Analysis, or (at an introductory undergraduate level) in Chapter 5 of Alan F. Beardon, Limits: A New Approach to Real Analysis.

(Of course one can do without this, and perhaps then it is best to ignore Rudin's remark that "the order in which the terms are arranged is immaterial," which may be a bit of a red herring, because instead of using absolutely summable families one can define the sum of any series $\sum c_{n_k}$, where $( n_k : k \in \mathbb{N} )$ is any strictly increasing sequence, and then of course the order of the terms remains the same.)

If $J = \{ 1, 2, 3, \dotsc \}$, then $( c_n : n \in J )$ is an absolutely summable family, therefore so is $( c_n : n \in J_x )$, where $J_x = \{ n \in J : x_n < x \}$, for all $x \in (a, b)$, and: $$ f(x) = \sum_{n \in J_x} c_n \qquad (a < x < b). $$

The ordering of the index set $J$ is not used, and $( x_n : n \in J )$ may be just any countable family in $(a, b)$. This family is injective (the $x_n$ are distinct points of $E$), but I don't think we need that. However, for neatness, we can exploit the unused injectivity, as follows:

Take the given countable subset $E \subset (a, b)$ as the index set for the absolutely summable family, which now becomes $( c_x : x \in E )$.

If possible, I won't use the assumption that $E$ is infinite; that is, $E$ is assumed only to be at most countable.

Define: \begin{gather*} \mu(S) = \sum_{y \in S} c_y \qquad (S \subseteq E), \\ f(x) = \mu(E \cap (a, x)) \qquad (a < x < b). \end{gather*}

Property (a) is trivial.

To prove (b) and (c) together, we need to prove: (i) $f(x-) = f(x)$; (ii) $f(x+) = \mu(E \cap (a, x])$.

Proof of (i).

For all $\epsilon > 0$, there exists finite $F \subset E \cap (a, x)$ such that $\mu(F) > f(x) - \epsilon.$ If $\max(F) < t < x$, then $f(t) > f(x) - \epsilon$. Since we already know that $f(x-) \leqslant f(x)$, this proves that $f(x-) = f(x)$.

Proof of (ii).

Define $g(x) = \mu(E \cap (a, x])$ and $h(x) = \mu(E \cap (x, b))$. Then $g(x) + h(x) = \mu(E)$, which is a constant independent of $x$. By the same argument as in (i) (or else by a change of variable from $x$ to $a + b - x$), we have $h(x+) = h(x)$, therefore $g(x+) = g(x)$. But it is clear that $f(x+) = g(x+)$, because if $x < t < u < b,$ then $f(t) \leqslant g(u)$ and $g(t) \leqslant f(u)$. Hence $f(x+) = g(x)$. Q.E.D.

  • These notes are regurgitated from when I chewed over the same problem in 2004. I've only tidied up the notation, and added a reference to the textbook by Beardon. It's surprisingly hard to concentrate on a problem afresh when merely rehashing old notes like this - worth trying once, but perhaps not again! – Calum Gilhooley Mar 20 '15 at 00:50
  • This has been downvoted. Is the argument invalid, or just unclear? Either way, I'll work through it again, if I know what to look for - and simply delete it, if it's irreparably bad. (No rush - I can't attend to it immediately, anyway.) – Calum Gilhooley Mar 23 '15 at 09:20
2

Given $\varepsilon>0$, choose $M\in\mathbb N$ so large that $\sum_{m=M+1}^\infty c_m<\varepsilon$.

Then choose $\delta>0$ so small that all points in $\{x_1,\ldots,x_M\}$, with the possible exception of $x$ itself, are at a distance $>\delta$ from $x$.

Probably you can take it from there.

PS: So if $|y-x|<\delta$, then $|f(y)-f(x)|$ is a sum of members of the sequence $\{c_n\}_{n=1}^\infty$ whose sum is less than $\varepsilon$, unless $x$ itself is in the sequence. That proves continuity at numbers $x$ that are not in the sequence.

Now suppose $x$ is in the sequence, say $x=x_k$. If $x-\delta<y<x$, then again $|f(y)-f(x)|<\varepsilon$ for the same reason: $x_k$ itself contributes to neither sum, since the defining inequality $x_n<\cdot$ is strict. Thus $f(x-)$ equals $f(x)$, the sum of all members of $\{c_n\}_{n=1}^\infty$ for which $x_n<x$. If instead $x<y<x+\delta$, then $c_k$ does appear in $f(y)$, and $f(y)$ differs from $f(x)+c_k$ by a sum of members of $\{c_n\}_{n=1}^\infty$ that is less than $\varepsilon$. Hence $f(x+)=f(x)+c_k$, and the jump at $x_k$ is $f(x_k+)-f(x_k-)=c_k$.

1

I'll just add an answer here to make (b) more explicit. I would comment to improve upon tchappy's answer but I don't have enough reputation points.

To show $f(x+) - f(x-) = c_n$ for $x = x_n \in E$, we show the following, from which the result follows by subtraction (the two sums differ by exactly the single term $c_n$). $$\text{(I) }f(x-) = \sum_{x_n \lt x} c_n $$ $$\text{(II) }f(x+) = \sum_{x_n \leq x} c_n $$

From Theorem 4.29, we have $$f(x-) = \sup_{a<t<x}f(t), \qquad f(x+) = \inf_{x<t<b}f(t).$$

We first establish (I).
By monotonicity, for all $t<x$, $f(t)\leq f(x) = \sum_{x_n \lt x} c_n$.
So we have that $\sum_{x_n \lt x} c_n$ is an upper bound of $\{f(t) | a<t<x \}$. We show it is the least upper bound.

For arbitrary $\epsilon > 0$, we have a positive integer $N$ such that $$ \sum_{n=N}^{\infty} c_n < \epsilon $$

We have two sequences as follows:
$\{x_1, x_2, x_3, ..., x_N, ...\}$
$\{c_1, c_2, c_3, ..., c_N, ...\}$
We wish to determine some $t$ where $a<t<x$ so that $$f(x) - f(t) = \sum_{x_n \lt x} c_n - \sum_{x_n \lt t} c_n = \sum_{t \leq x_n \lt x} c_n < \epsilon$$

By producing a $t$ such that $a<t<x$ for which no element of $\{x_1, x_2, ..., x_N\}$ satisfies $t \leq x_n \lt x$, we can conclude that

$$ f(x) - f(t) = \sum_{t \leq x_n \lt x} c_n \leq \sum_{n=N}^{\infty} c_n < \epsilon $$ So we proceed to produce such a $t$.
If $x\leq\min(x_1, x_2, ..., x_N)$, then the sum defining $f(x)$ contains no term indexed by $\{1, 2, ..., N\}$, and for any $a<t<x$:

$$f(x)- f(t) \leq f(x) \leq \sum_{n=N}^{\infty} c_n < \epsilon$$

Otherwise, there exists some $x_i \in \{x_1, x_2, ..., x_N\}$ such that $x_i \lt x$. We choose the maximal such $x_i$ and let $t = \frac{x + x_i}{2}$. Then, by this choice of $t$, no element of $\{x_1, x_2, ..., x_N\}$ satisfies $t \leq x_n \lt x$.

Since $\epsilon$ was arbitrary, we conclude that (I) holds.

The proof for (II) follows similarly.
We establish $\sum_{x_n \leq x} c_n$ to be a lower bound of $\{f(t) | x<t<b \}$ by noting that for any $t$ such that $x<t<b$, $$f(t)= \sum_{x_n \lt t} c_n \geq \sum_{x_n \leq x} c_n$$

And similarly, for arbitrary $\epsilon > 0$ we produce $t$ where $x<t<b$ such that $$f(t) - \sum_{x_n \leq x} c_n = \sum_{x \lt x_n \lt t} c_n < \epsilon$$

by distinguishing two cases: $x\geq\max(x_1, x_2, ..., x_N)$ and otherwise. The first case is again trivial, while in the second case, we observe that there exists some $x_i \in \{x_1, x_2, ..., x_N\}$ such that $x_i \gt x$. We choose the minimal such $x_i$ and take $t = x_i$, so that no element of $\{x_1, x_2, ..., x_N\}$ satisfies $x \lt x_n \lt t$, and statement (II) follows.

0
  1. $\sup_{a<t<x_n} f(t) = \sum_{x_i<x_n} c_i$.
  2. $\inf_{x_n<t<b} f(t) = \sum_{x_i\leq x_n} c_i$.

Proof of 1.:
If $a < t < x_n$, then $f(t) \leq f(x_n)$ since $f$ is monotonically increasing on $(a, b)$.
Let $\epsilon$ be an arbitrary positive real number.
Let $N$ be a natural number such that $$\sum_{i = N}^{\infty} c_i < \epsilon.$$
There exists $t_0$ such that $a < t_0 < x_n$ and $N \leq \min \{i | t_0 \leq x_i < x_n\}$ ($\min \emptyset = +\infty$); for example, take $t_0$ greater than every $x_i$ with $i < N$ and $x_i < x_n$ (if there is no such $x_i$, any $t_0 \in (a, x_n)$ will do).
$$f(x_n) - f(t_0) = \sum_{i \in \{i | t_0 \leq x_i < x_n\}} c_i \leq \sum_{i = N}^{\infty} c_i < \epsilon.$$
So, $$f(x_n) - \epsilon < f(t_0) \leq f(x_n).$$
So, $$\sup_{a<t<x_n} f(t) = f(x_n-) = f(x_n) = \sum_{x_i<x_n} c_i.$$

Proof of 2.:
If $x_n < t < b$, then $\sum_{x_i\leq x_n} c_i \leq \sum_{x_i < t} c_i = f(t)$.
Let $\epsilon$ be an arbitrary positive real number.
Let $N$ be a natural number such that $$\sum_{i = N}^{\infty} c_i < \epsilon.$$
There exists $t_0$ such that $x_n < t_0 < b$ and $N \leq \min \{i | x_n < x_i < t_0\}$ ($\min \emptyset = +\infty$); for example, take $t_0$ smaller than every $x_i$ with $i < N$ and $x_i > x_n$ (if there is no such $x_i$, any $t_0 \in (x_n, b)$ will do).
$$f(t_0) - \sum_{x_i \leq x_n} c_i = \sum_{x_i < t_0} c_i - \sum_{x_i \leq x_n} c_i = \sum_{x_n < x_i < t_0} c_i \leq \sum_{i = N}^{\infty} c_i < \epsilon.$$
So, $$\sum_{x_i \leq x_n} c_i \leq f(t_0) < \sum_{x_i \leq x_n} c_i + \epsilon.$$
So, $$\inf_{x_n<t<b} f(t) = f(x_n+) = f(x_n) + c_n = \sum_{x_i\leq x_n} c_i.$$

From the above proof of 1. and 2., $f(x_n+) - f(x_n-) = c_n$.

– tchappy ha
0

Here is my attempt that uses what Rudin has established up to chapter 4.

Fix $a<x<b$ and $\epsilon>0$. Let $\Lambda$ be the set of all $j$'s such that $x_j<x$. If $\Lambda$ is finite, then we can find a $p$ for which $x_j<p<x$ for all $j\in\Lambda$, whence for $p<\xi<x$, $f(\xi)=f(x)$. Otherwise, we can arrange $\Lambda$ in a sequence $\{\alpha_n\}$. Since $f(x)=\sum c_{\alpha_n}$, there is an $N$ such that $$n\geq N\qquad\mbox{implies}\qquad f(x)-\epsilon<\sum^n_{j=1}c_{\alpha_j}\leq f(x)\mbox{.}$$ Choose $p'<x$ so that $p'>x_{\alpha_j}$, $j=1,2,...,N$. It follows that $$|f(x)-f(\xi)|<\epsilon\qquad\mbox{whenever}\qquad p'<\xi<x\mbox{.}$$ This shows that $f(x-)=f(x)$.

Next, we prove that $f(x_n+)=c_n+f(x_n)$, and that $f(x+)=f(x)$ for $x\not=x_n$. Observe that for $x'>x$, $$f(x') = f(x)+\sum_{x\leq x_j<x'}c_j = \begin{cases} f(x)+c_n+\sum_{x<x_j<x'}c_j & \mbox{if }x=x_n\mbox{ for some }n, \\ f(x)+\sum_{x<x_j<x'}c_j & \mbox{otherwise,} \end{cases}$$ therefore it suffices to show that $\sum_{x<x_j<x'}c_j$ can be made arbitrarily small if only $x'$ is close enough to $x$ from the right. By the Cauchy criterion, there exists $M$ such that $$ \sum_{j=m}^nc_j < \epsilon\qquad\mbox{for }n\geq m\geq M\mbox{.}$$ Picking $q>x$ so that $(x,q)$ contains none of the points $x_1,x_2,\ldots,x_M$, we thus have $$\sum_{x<x_j<q}c_j\leq\epsilon\mbox{.}$$ This establishes (b) and (c).