Pointwise Convergence of Function Sequences

Definition

Let $E \ne \emptyset$ be a subset of $\mathbb{R}$ and let $f : E \to \mathbb{R}$ be a function. If the sequence of functions $\left\{ f_{n} : E \to \mathbb{R} \right\}_{n=1}^{\infty}$ satisfies $f(x) = \lim \limits_{n \to \infty} f_{n} (x)$ for each $x \in E$, then $\left\{ f_{n} \right\}$ is said to converge pointwise to $f$ on $E$, denoted by:

$$ f_{n} \to f $$

Explanation

Rewriting the above definition with an epsilon-$N$ argument gives the following necessary and sufficient condition.

For every $\varepsilon > 0$ and every $x \in E$, there exists an $N \in \mathbb{N}$ such that $n \ge N \implies | f_{n} (x) - f(x) | < \varepsilon$.
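Note that $N$ is allowed to depend on $x$ as well as on $\varepsilon$. As a minimal numerical sketch (the choice $f_{n}(x) = x^{n}$ on $[0,1)$ with limit $f = 0$ and the helper `smallest_N` are my own illustration, not part of the definition), the following Python snippet computes the smallest admissible $N$ and shows it blowing up as $x$ approaches $1$:

```python
import math

def smallest_N(x, eps):
    """Smallest N with |x**n - 0| < eps for all n >= N, assuming 0 <= x < 1."""
    if x == 0:
        return 1
    # x**n < eps  <=>  n > log(eps) / log(x),  since log(x) < 0 for 0 < x < 1
    return max(1, math.floor(math.log(eps) / math.log(x)) + 1)

eps = 0.01
for x in [0.5, 0.9, 0.99, 0.999]:
    print(f"x = {x}:  N = {smallest_N(x, eps)}")
# The required N grows without bound as x -> 1, so no single N works for every x
# at once; that gap is exactly what separates pointwise from uniform convergence.
```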

Sequences are merely ‘functions whose domain is $\mathbb{N}$,’ so there is nothing wrong with their range being a set of functions, which makes it possible to consider intimidating objects like the sequence of functions $\left\{ f_{n} \right\}_{n \in \mathbb{N}}$. If you still loosely picture a sequence as ‘a point moving along a line as $n$ increases’, this will be hard to accept.

With the emergence of new kinds of sequences comes the need to discuss new kinds of convergence. The concept of pointwise convergence itself is not hard to accept: if there is even one exceptional point in $E$, we cannot speak of convergence on $E$. So why exactly does this common-sense ‘convergence’ get the special name ‘pointwise convergence’?

The reason, of course, is that pointwise convergence alone is insufficient when discussing the convergence of the functions themselves. In fact, compared with ‘better’ modes of convergence, pointwise convergence is essentially regarded as ‘not sufficiently good convergence’. Frankly, if one fixes a specific $x_{0}$, then $f_{n} (x_{0})$ is just an ordinary numerical sequence $a_{n} := f_{n} (x_{0})$, so pointwise convergence by itself hardly requires the concept of a sequence of functions at all.

Here are examples where the original properties of $f_{n}$ are not maintained when it is said to converge pointwise to $f$ in $E$.

Theorems

Assume that in $E$, $f_{n}$ converges pointwise to $f$.

(a) Even if $f_{n}$ is differentiable, $f$ may not be differentiable.

(b) Even if $f_{n}$ is integrable, $f$ may not be integrable.

(c) Even if $f_{n}$ and $f$ are differentiable, $\lim \limits_{n \to \infty} \dfrac{d}{dx} f_{n} (x) = \dfrac{d}{dx} \left( \lim \limits_{n \to \infty} f_{n} (x) \right)$ may not hold.

(d) Even if $f_{n}$ and $f$ are integrable, $\displaystyle \lim \limits_{n \to \infty} \int_{a}^{b} f_{n} (x) dx = \int_{a}^{b} \left( \lim \limits_{n \to \infty} f_{n} (x) \right) dx$ may not hold.


In particular, the counterexample for (a) also demonstrates that continuity need not be preserved.

Proof

Counterexample (a)

Let’s define $f_{n} , f$ in $E = [0,1]$ as follows.

$$ \begin{align*} f_{n} (x) &:= x^{n} \\ f(x) &:= \begin{cases} 0 &, 0 \le x < 1 \\ 1 &, x=1 \end{cases} \end{align*} $$

Clearly $f_{n} \to f$ pointwise on $E$. However, while each $f_{n}$ is differentiable on $[0,1]$, $f$ is not continuous at $x=1$ and therefore not differentiable there.
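As a sanity check (a minimal sketch of my own, not part of the original argument), tabulating $x^{n}$ at a few points makes the jump at $x = 1$ visible:

```python
# Tabulate f_n(x) = x**n at a few points of [0, 1] for growing n.
for n in [10, 100, 1000]:
    row = [round(x ** n, 6) for x in (0.5, 0.9, 0.99, 1.0)]
    print(f"n = {n:4d}: f_n at x = 0.5, 0.9, 0.99, 1.0 -> {row}")
# Every column with x < 1 tends to 0, while the column at x = 1 stays at 1,
# so the pointwise limit f is discontinuous at x = 1.
```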

Counterexample (b)

Let’s define $f_{n} , f$ in $E = [0,1]$ as follows.

$$ \begin{align*} f_{n} (x) &:= \begin{cases} 1 &, x = {{ p } \over { m }} , p \in \mathbb{Z} , m \in \left\{ 1 , \cdots , n \right\} \\ 0 &, \text{otherwise} \end{cases} \\ f(x) &:= \begin{cases} 1 &, x \in \mathbb{Q} \\ 0 &, \text{otherwise} \end{cases} \end{align*} $$

The definition of $f_{n}$ is somewhat involved: $f_{1} (x)$ is $1$ only at $ x \in \left\{ 0 , 1 \right\}$, $f_{2} (x)$ is $1$ only at $\displaystyle x \in \left\{ 0 , {{ 1 } \over { 2 }} , 1 \right\}$, and $f_{3} (x)$ is $1$ only at $x \in \left\{ 0 , {{ 1 } \over { 3 }} , {{ 1 } \over { 2 }} , {{ 2 } \over { 3 }} , 1 \right\}$. Continuing in this way, in the limit the value is $1$ at every rational $x \in [0,1]$, so $f_{n} \to f$ pointwise on $E$. However, while each $f_{n}$ is Riemann integrable on $[0,1]$ (it differs from $0$ at only finitely many points), the Dirichlet function $f$ is not integrable.
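To make the integrability gap concrete, one can compare Darboux sums on a uniform partition. The sketch below (a rough numerical illustration of my own, using the definitions above) shows that the upper sums of an $f_{n}$ shrink as the partition is refined, because its spikes occupy only finitely many points; for $f$, by contrast, every subinterval contains both rationals and irrationals, so the upper sum is always $1$ and the lower sum is always $0$.

```python
from fractions import Fraction

def upper_darboux_fn(n, pieces):
    """Upper Darboux sum of f_n over a uniform partition of [0, 1] into `pieces` cells.
    f_n equals 1 exactly at the fractions p/m in [0, 1] with m <= n, so a cell
    contributes its width only if it contains one of those finitely many points."""
    spikes = {Fraction(p, m) for m in range(1, n + 1) for p in range(0, m + 1)}
    width = Fraction(1, pieces)
    total = Fraction(0)
    for k in range(pieces):
        left, right = k * width, (k + 1) * width
        if any(left <= s <= right for s in spikes):
            total += width
    return float(total)

for pieces in [10, 100, 1000]:
    print(f"{pieces:5d} cells: upper Darboux sum of f_5 = {upper_darboux_fn(5, pieces)}")
# The upper sums tend to 0 as the partition is refined, and the lower sums are
# already 0, so each f_n is integrable with integral 0.  No such squeeze is
# possible for the Dirichlet limit f.
```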

Counterexample (c)

Let’s define $f_{n} , f$ in $E = [0,1]$ as follows.

$$ \begin{align*} f_{n} (x) &:= {{ x^{n} } \over { n }} \\ f(x) &:= 0 \end{align*} $$

Clearly $f_{n} \to f$ pointwise on $E$, and the derivatives are

$$ \begin{align*} f_{n}'(x) &= x^{n-1} \\ f'(x) &= 0 \end{align*} $$

However, at $x=1$,

$$ 1 = \lim \limits_{n \to \infty} \dfrac{d}{dx} f_{n} (1) \ne \dfrac{d}{dx} \left( \lim \limits_{n \to \infty} f_{n} (1) \right) = 0 $$
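This mismatch can also be checked symbolically. The sketch below uses SymPy (assuming it is available; the snippet is my own illustration, not part of the original proof) to confirm that $f_{n}'(1) = 1^{n-1} = 1$ for every $n$, while the pointwise limit at $x=1$ is $0$:

```python
import sympy as sp

x, n = sp.symbols("x n", positive=True)
f_n = x ** n / n

# Derivative of f_n evaluated at x = 1: it equals x**(n-1) at x = 1, i.e. 1 for every n.
print(sp.simplify(sp.diff(f_n, x).subs(x, 1)))   # 1

# Pointwise limit of f_n(1) = 1/n as n -> oo is 0, so the limit function is 0 there
# and its derivative is 0 as well.
print(sp.limit(f_n.subs(x, 1), n, sp.oo))        # 0

# Hence lim_n f_n'(1) = 1, while (d/dx lim_n f_n)(1) = 0: the two cannot be swapped.
```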

Counterexample (d)

Let’s define $f_{n} , f$ in $E = [0,1]$ as follows.

$$ \begin{align*} f_{1} (x) &:= 1 \\ f_{n} (x) &:= \begin{cases} n^2 x &, 0 \le x < {{ 1 } \over { n }} \\ 2n - n^2 x &, {{ 1 } \over { n }} \le x < {{ 2 } \over { n }} \\ 0 &, {{ 2 } \over { n }} \le x \le 1 \end{cases} \\ f(x) &:= 0 \end{align*} $$

(Figure: graphs of the triangular spike functions $f_{n}$.)

Though $f_{n}$ looks complicated, the diagram above makes it quite transparent, and one can see that $f_{n} \to f$ pointwise on $E$. Here $\displaystyle \int_{0}^{1} f_{n} (x) dx$ equals the area of a triangle with height $n$ and base ${{ 2 } \over { n }}$, so it is always $1$ regardless of $n$. However,

$$ \int_{0}^{1} f(x) dx = \int_{0}^{1} 0 dx = 0 $$

Hence,

$$ 1 = \lim \limits_{n \to \infty} \int_{0}^{1} f_{n} (x) dx \ne \int_{0}^{1} \left( \lim \limits_{n \to \infty} f_{n} (x) \right) dx = 0 $$
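As a sanity check (a minimal numerical sketch of my own, not part of the original proof), integrating the spikes with a routine midpoint Riemann sum shows the values pinned at $1$ while the pointwise limit integrates to $0$:

```python
def f_n(n, x):
    """Triangular spike of height n supported on [0, 2/n], with f_1 := 1 as above."""
    if n == 1:
        return 1.0
    if x < 1 / n:
        return n * n * x
    if x < 2 / n:
        return 2 * n - n * n * x
    return 0.0

def riemann(g, a=0.0, b=1.0, steps=200_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / steps
    return sum(g(a + (k + 0.5) * h) for k in range(steps)) * h

for n in [1, 10, 100, 1000]:
    print(f"n = {n:4d}: integral of f_n over [0, 1] = {riemann(lambda x: f_n(n, x)):.4f}")
# Every approximation is (essentially) 1, yet the pointwise limit f = 0 has integral 0,
# so the limit and the integral cannot be interchanged here.
```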
