
Simpson's Rule

Definition


Let $f : [a,b] \to \mathbb{R}$ be integrable on $[a,b]$, let $n$ be even, and divide $[a,b]$ into nodes $a = x_{0} < \cdots < x_{n} = b$ with equal spacing $\displaystyle h:= {{b-a} \over {n}}$. Then the numerical integration operator $I_{n}^{2}$ defined as follows is called Simpson's Rule. $$ I_{n}^{2} (f) := \sum_{k=1}^{n/2} {{h} \over {3}} \left[ f(x_{2k-2}) + 4 f( x_{2k-1} ) + f(x_{2k} ) \right] $$
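The definition above can be sketched directly in code. This is a minimal illustration, assuming the function name `simpson` (my own choice, not from the text):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's Rule I_n^2 with n (even) subintervals of width h."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    x = [a + k * h for k in range(n + 1)]
    total = 0.0
    # One parabola per pair of subintervals: nodes x_{2k-2}, x_{2k-1}, x_{2k}
    for k in range(1, n // 2 + 1):
        total += (h / 3) * (f(x[2*k - 2]) + 4 * f(x[2*k - 1]) + f(x[2*k]))
    return total

print(simpson(math.sin, 0, math.pi, 10))  # close to the true value 2
```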

Theorem

Let $f \in C^4 [a,b]$. The error $E_{1}^{2}$ and the asymptotic error $\tilde{E}_{n}^{2}$ of Simpson's Rule are as follows:

  • [1]: $$E_{1}^{2} (f) = - {{h^5} \over {90}} f^{(4)} ( \xi )$$
  • [2]: $$\tilde{E}_{n}^{2} (f) = - {{ h^4 } \over {180}} [ f^{(3)} (b) - f^{(3)} (a) ]$$

Explanation

Expanding $I_{n}^{2} (f)$ gives the following. $$ \begin{align*} I_{n}^{2} (f) =& {{h} \over {3}} [ f(x_{0}) + 4 f ( x_{1} ) + 2 f( x_{2} ) + 4 f ( x_{3} ) + 2 f ( x_{4} ) + \cdots \\ & + 2 f (x_{n-2} ) + 4 f ( x_{n-1} ) + f(x_{n} ) ] \end{align*} $$ Unlike the Trapezoidal Rule, which approximates the definite integral $\displaystyle I (f) = \int_{a}^{b} f(x) dx$ via linear interpolation, this method employs quadratic interpolation.

A point to note is that the error $\displaystyle E_{1}^{2} (f) = - {{h^5} \over {90}} f^{(4)} ( \xi )$ involves the fourth derivative: even though only a degree-$2$ interpolation is performed for the integration, if $f$ is a polynomial of degree $3$ or less the error is exactly $0$.
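This exactness for cubics is easy to check numerically. A hedged sketch, with `simpson_one` as my own name for the single-parabola rule $I_{1}^{2}$ and an arbitrarily chosen cubic:

```python
def simpson_one(f, a, b):
    """Single-interval Simpson's Rule I_1^2 using the midpoint c = (a+b)/2."""
    c = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(c) + f(b))

f = lambda x: x**3 - 2 * x**2 + 5                # an arbitrary cubic
exact = (3**4 / 4 - 2 * 3**3 / 3 + 5 * 3)        # antiderivative evaluated on [0, 3]
print(abs(simpson_one(f, 0, 3) - exact))         # 0 up to rounding, since f^{(4)} = 0
```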

Proof 1

[1]

Strategy: Since the approximation integrates the quadratic interpolant of the given function, properties of polynomial interpolation can be utilized.


For convenience, let's denote $\displaystyle c:= {{a+b} \over {2}}$. $$ I_{1}^{2} (f) := \left( {{ b - a } \over { 6 }} \right) \left[ f(a) + 4 f \left( c \right) + f(b) \right] $$ This approximates $I(f)$ by the integral of the quadratic interpolant of $f$ over the interval $[a,b]$.

Newton’s Divided Differences Formula: $$p_{n} (x) =\sum_{i=0}^{n} f [ x_{0} , \cdots , x_{i} ] \prod_{j=0}^{i-1} (x - x_{j} )$$

Given the three points $a , c , b$ and for all $x \in [a,b]$, $$ \begin{align*} \displaystyle E_{1}^{2} (f) :=& I(f) - I_{1}^{2} (f) \\ =& \int_{a}^{b} \left[ f(x) - p_{2} (x) \right] dx \\ =& \int_{a}^{b} \left[ p_{2+1} (x) - p_{2} (x) \right] dx \\ =& \int_{a}^{b} (x-a)(x-c)(x-b) f [a,c,b,x] dx \end{align*} $$ Let's define $w$ as follows. $$ w(x) := \int_{a}^{x} (t-a)(t-c)(t-b) dt $$ Since $(t-a)(t-c)(t-b)$ is odd about $t = c$, its integral over all of $[a,b]$ vanishes, so $w(b)=0$; and since an integral over a degenerate interval is $0$, $w(a) = 0$. Then, by integration by parts and the differentiation formula for divided differences, $$ \begin{align*} \displaystyle E_{1}^{2} (f) =& \int_{a}^{b} w'(x) f [a,c,b,x] dx \\ =& \left[ w(x) f [ a,c,b, x] \right]_{a}^{b} - \int_{a}^{b} w(x) {{d} \over {dx}} f [a,c,b,x] dx \\ =& - \int_{a}^{b} w(x) f [a,c,b,x,x] dx \end{align*} $$ Since $w(x) \ge 0$ on $[a,b]$, we can use the Mean Value Theorem for Integrals.

Mean Value Theorem for Integrals: If a function $f$ is continuous on the closed interval $[a,b]$ and $w(x) \ge 0$ is integrable, then there exists at least one $\eta$ in $[a,b]$ satisfying $\displaystyle \int_{a}^{b} f(x) w(x) dx = f( \eta ) \int_{a}^{b} w(x) dx$.

Furthermore, by the properties of divided differences, there exist $\eta , \xi \in [a,b]$ such that $$ \begin{align*} \displaystyle E_{1}^{2} (f) =& - f [a,c,b,\eta,\eta] \int_{a}^{b} w(x) dx \\ =& - {{ f^{(4)} ( \xi ) } \over {24}} \left[ {{4} \over {15}} h^5 \right] \\ =& - {{h^5} \over {90}} f^{(4)} ( \xi ) \end{align*} $$ where $h = (b-a)/2$ is the node spacing.
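As a hedged numerical check of [1] (the helper name `simpson_one` is my own): for $f = \exp$ on $[0,1]$ with $h = 1/2$, the actual error should lie between the extreme values of $-\tfrac{h^5}{90} e^{\xi}$ over $\xi \in [0,1]$.

```python
import math

def simpson_one(f, a, b):
    """Single-interval Simpson's Rule I_1^2."""
    c = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(c) + f(b))

a, b = 0.0, 1.0
h = (b - a) / 2
err = (math.e - 1) - simpson_one(math.exp, a, b)  # true E_1^2(f) for f = exp
bound_lo = -h**5 / 90 * math.e                    # xi = 1 gives the most negative value
bound_hi = -h**5 / 90 * 1.0                       # xi = 0 gives the least negative value
print(bound_lo <= err <= bound_hi)                # True
```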

[2]

Strategy: Once the Riemann sum is derived, the rest follows naturally from the Fundamental Theorem of Calculus. However, since Simpson's Rule sums over $\displaystyle \sum_{k=1}^{n/2}$, a trick of multiplying by $\displaystyle {{2} \over {2}}$ is used in deriving the Riemann sum.


According to Theorem [1], the error between the actual $I(f)$ and $I_{n}^{2} (f)$ is computed as follows for some $\xi_{k} \in [x_{2(k-1)}, x_{2k} ]$. $$ \begin{align*} \displaystyle E_{n}^{2} (f) =& I (f) - I_{n}^{2} (f) \\ =& \sum_{k=1}^{n/2} \left( - {{ h^5 } \over { 90 }} f^{(4)} ( \xi_{k} ) \right) \end{align*} $$ From this, $$ \begin{align*} \lim_{n \to \infty} {{ E_{n}^{2} (f) } \over { h^4 }} =& \lim_{n \to \infty} {{1} \over {h^4}} \sum_{k=1}^{n/2} \left( - {{ h^5 } \over { 90 }} f^{(4)} ( \xi_{k} ) \right) \\ =& \lim_{n \to \infty} {{1} \over {h^4}} {{2} \over {2}} \sum_{k=1}^{n/2} \left( - {{ h^5 } \over { 90 }} f^{(4)} ( \xi_{k} ) \right) \\ =& - {{ 1 } \over { 180 }} \lim_{ n \to \infty} \sum_{k=1}^{n/2} 2 h f^{(4)} ( \xi_{k} ) \\ =& - {{ 1 } \over { 180 }} \lim_{ n \to \infty} \sum_{k=1}^{n/2} {{b-a} \over {n/2}} f^{(4)} ( \xi_{k} ) \\ =& - {{ 1 } \over { 180 }} \int_{a}^{b} f^{(4)} (x) dx \\ =& - {{ 1 } \over { 180 }} [ f^{(3)} (b) - f^{(3)} (a) ] \end{align*} $$ Therefore, $$ \lim_{n \to \infty} {{\tilde{E}_{n}^{2} (f) } \over { E_{n}^{2} (f) }} = 1 $$

$$ E_{n}^{2} (f) \approx \tilde{E}_{n}^{2} (f) = - {{ h^4 } \over { 180 }} [ f^{(3)}(b) - f^{(3)}(a) ] $$
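The approximation $E_{n}^{2} \approx \tilde{E}_{n}^{2}$ can be checked numerically; a minimal sketch, assuming $f = \sin$ on $[0, \pi]$ and a helper name `simpson` of my own choosing:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's Rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return h / 3 * s

f, d3 = math.sin, lambda x: -math.cos(x)   # third derivative of sin is -cos
a, b, true_I = 0.0, math.pi, 2.0           # integral of sin over [0, pi] is 2
ratios = []
for n in (8, 16, 32):
    h = (b - a) / n
    E = true_I - simpson(f, a, b, n)            # actual error E_n^2
    E_tilde = -h**4 / 180 * (d3(b) - d3(a))     # asymptotic error estimate
    ratios.append(E / E_tilde)
print(ratios)   # the ratio E_n^2 / E~_n^2 approaches 1 as n grows
```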


  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p257~258. ↩︎