Trapezoidal Rule
Definition
Let’s assume $f : [a,b] \to \mathbb{R}$ is integrable over $[a,b]$ and that $[a,b]$ is partitioned into equally spaced nodes $a = x_{0} < \cdots < x_{n} = b$ with spacing $\displaystyle h := {{b-a} \over {n}}$. The numerical integration operator $I_{n}^{1}$, defined as follows, is called the trapezoidal rule. $$ I_{n}^{1} (f) := \displaystyle \sum_{k=1}^{n} {{h} \over {2}} \left( f(x_{k-1}) + f(x_{k} ) \right) $$
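A minimal sketch of this definition in Python (the function name `trapezoid` and the test integrand $\sin$ on $[0, \pi]$ are illustrative choices, not part of the source):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule I_n^1(f) on n equal subintervals of width h = (b - a)/n."""
    h = (b - a) / n
    return sum(h / 2 * (f(a + (k - 1) * h) + f(a + k * h)) for k in range(1, n + 1))

# Example: integrate sin over [0, pi]; the exact value is 2.
print(trapezoid(math.sin, 0.0, math.pi, 100))  # ≈ 1.99984
```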
Theorem
Let’s say $f \in C^2 [a,b]$. The single-interval error $E_{1}^{1}$ and the asymptotic error $\tilde{E}_{n}^{1}$ of the trapezoidal rule are given as follows:
- [1]: $$E_{1}^{1} (f) = - {{1} \over {12}} h^{3} f '' ( \xi )$$
- [2]: $$\tilde{E}_{n}^{1} (f) = - {{ h^2 } \over {12}} [ f '(b) - f '(a) ]$$
Explanation
If we expand $I_{n}^{1} (f)$, it can be written as follows: $$ I_{n}^{1} (f) = h \left[ {{1} \over {2}} f(x_{0}) + f ( x_{1} ) + \cdots + f ( x_{n-1} ) + {{1} \over {2}} f(x_{n} ) \right] $$ The trapezoidal rule is one of the simplest methods for numerically computing the definite integral $\displaystyle I (f) = \int_{a}^{b} f(x) dx$. One could come up with it knowing nothing more than the method of finite sums.
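Since the bracket above is just the dot product of the weights $h \left[ {{1} \over {2}}, 1, \cdots, 1, {{1} \over {2}} \right]$ with the node values, the rule vectorizes naturally; a sketch assuming NumPy, with the hypothetical helper name `trapezoid_weights`:

```python
import numpy as np

def trapezoid_weights(f, a, b, n):
    """Trapezoidal rule written as the dot product h * [1/2, 1, ..., 1, 1/2] . f(nodes)."""
    x = np.linspace(a, b, n + 1)   # nodes x_0, ..., x_n
    w = np.ones(n + 1)
    w[0] = w[-1] = 0.5             # endpoint weights of 1/2
    h = (b - a) / n
    return h * np.dot(w, f(x))

# Same example: the integral of sin over [0, pi] is 2.
print(trapezoid_weights(np.sin, 0.0, np.pi, 100))  # ≈ 1.99984
```

NumPy also provides this rule directly (as `np.trapezoid`, formerly `np.trapz`), so in practice one rarely writes it by hand.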
Proof 1
[1]
Strategy: Since the trapezoid is a linear interpolation of the given function, we can use the properties of polynomial interpolation.
$$ I_{1}^{1} (f) := \left( {{ b - a } \over { 2 }} \right) [ f(a) + f(b) ] $$ This can be regarded as approximating $I(f)$ by integrating the linear interpolation of $f$ over the interval $[a,b]$. Then the error $E_{1}^{1} (f)$ between the actual $I(f)$ and $I_{1}^{1} (f)$ is computed as follows, for some $\xi \in [a,b]$.
- [4] Error with the actual function: For an $(n+1)$-times differentiable function $f : \mathbb{R} \to \mathbb{R}$, the polynomial interpolation $p_{n}$ of $f$ at the nodes $x_{0} , \cdots , x_{n}$ satisfies the following for each $t \in \mathbb{R}$, with some $\xi \in \mathscr{H} \left\{ x_{0} , \cdots , x_{n} \right\}$ depending on $t$: $$ f(t) - p_{n} (t) = {{ (t - x_{0}) \cdots (t - x_{n}) } \over { (n+1)! }} f^{(n+1)} ( \xi ) $$
$$ \begin{align*} E_{1}^{1} (f) :=& I(f) - I_{1}^{1} (f) \\ =& \int_{a}^{b} \left[ f(x) - {{ f(b) ( x - a ) - f(a) (x - b) } \over { b - a }} \right] dx \\ =& \int_{a}^{b} \left[ f(x) - p_{1} (x) \right] dx \\ =& {{1} \over {2}} f '' ( \xi ) \int_{a}^{b} (x-a) (x-b) dx \\ =& \left[ {{1} \over {2}} f '' ( \xi ) \right] \left[ - {{1} \over {6}} (b-a)^{3} \right] \\ =& - {{1} \over {12}} (b-a)^{3} f '' ( \xi ) \\ =& - {{1} \over {12}} h^{3} f '' ( \xi ) \end{align*} $$ In the fourth equality, the interpolation error formula gives $f(x) - p_{1}(x) = {{(x-a)(x-b)} \over {2}} f '' (\xi(x))$, and $f '' (\xi)$ can then be pulled out of the integral by the weighted mean value theorem for integrals, since $(x-a)(x-b)$ does not change sign on $[a,b]$. The last equality uses $h = b - a$, as $n = 1$ here.
■
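As a quick numerical illustration of [1] (the choice $f(x) = e^{x}$ on $[0, 1]$ is ours, not from the source), one can solve the error formula for $\xi$ and confirm that it lands inside $[a, b]$:

```python
import math

# Single-interval check of [1] with f(x) = exp(x) on [a, b] = [0, 1]:
# E_1^1(f) = I(f) - I_1^1(f) should equal -(1/12) h^3 f''(xi) for some xi in [a, b].
a, b = 0.0, 1.0
h = b - a
exact = math.exp(b) - math.exp(a)            # I(f) = e - 1
trap = h / 2 * (math.exp(a) + math.exp(b))   # I_1^1(f)
E = exact - trap                             # ≈ -0.1409

# Since f'' = exp, solving -(1/12) h^3 exp(xi) = E gives xi = log(-12 E / h^3).
xi = math.log(-12 * E / h ** 3)
print(E, xi)   # xi ≈ 0.525, which indeed lies inside [0, 1]
```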
[2]
Strategy: Once the error is written as a Riemann sum, the result follows naturally from the Fundamental Theorem of Calculus.
According to Theorem [1], the error between the actual $I(f)$ and $I_{n}^{1} (f)$, with some $\xi_{k} \in [x_{k-1}, x_{k} ]$ on each subinterval, is computed as follows: $$ \begin{align*} \displaystyle E_{n}^{1} (f) =& I (f) - I_{n}^{1} (f) \\ =& \sum_{k=1}^{n} \left( - {{ h^3 } \over { 12 }} f '' ( \xi_{k} ) \right) \end{align*} $$ Dividing by $h^2$ and letting $n \to \infty$, the sum $\displaystyle \sum_{k=1}^{n} h f '' ( \xi_{k} )$ is a Riemann sum of $f ''$ over $[a,b]$, so $$ \begin{align*} \lim_{n \to \infty} {{ E_{n}^{1} (f) } \over { h^2 }} =& \lim_{n \to \infty} {{1} \over {h^2}} \sum_{k=1}^{n} \left( - {{ h^3 } \over { 12 }} f '' ( \xi_{k} ) \right) \\ =& - {{ 1 } \over { 12 }} \lim_{ n \to \infty} \sum_{k=1}^{n} h f '' ( \xi_{k} ) \\ =& - {{ 1 } \over { 12 }} \int_{a}^{b} f ''(x) dx \\ =& - {{ 1 } \over { 12 }} [ f '(b) - f '(a) ] \end{align*} $$ Therefore, defining $\displaystyle \tilde{E}_{n}^{1} (f) := - {{ h^2 } \over { 12 }} [ f '(b) - f '(a) ]$, $$ \lim_{n \to \infty} {{\tilde{E}_{n}^{1} (f) } \over { E_{n}^{1} (f) }} = 1 $$
$$ E_{n}^{1} (f) \approx \tilde{E}_{n}^{1} (f) = - {{ h^2 } \over { 12 }} [ f '(b) - f '(a) ] $$
■
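A small numerical check of this asymptotic statement, again using $f = \exp$ on $[0, 1]$ as an illustrative test case:

```python
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return sum(h / 2 * (f(a + (k - 1) * h) + f(a + k * h)) for k in range(1, n + 1))

# Compare the true error E_n^1 with the asymptotic error tilde E_n^1 for f = exp on [0, 1].
a, b = 0.0, 1.0
exact = math.exp(b) - math.exp(a)
for n in (4, 16, 64, 256):
    h = (b - a) / n
    E = exact - trapezoid(math.exp, a, b, n)
    E_tilde = -(h ** 2 / 12) * (math.exp(b) - math.exp(a))  # f'(b) - f'(a), since f' = exp
    print(n, E / E_tilde)   # the ratio approaches 1 as n grows
```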
Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p253.