
Convergence and Error of Multistep Methods

Theorem

Given the initial value problem $\begin{cases} y ' = f(x,y) \\ ( y( x_{0} ) , \cdots , y(x_{p}) )= ( Y_{0} , \cdots , Y_{p} ) \end{cases}$, suppose the multistep method $$ y_{n+1} = \sum_{j=0}^{p} a_{j} y_{n-j} + h \sum_{j = -1}^{p} b_{j} f (x_{n-j} , y_{n-j} ) $$ is consistent, the initial error $\displaystyle \eta (h) : = \max_{ 0 \le i \le p} | Y (x_{i} ) - y_{h} (x_{i} ) |$ satisfies $\displaystyle \lim_{ h \to 0} \eta (h) = 0$, $a_{j} \ge 0$ for $j = 0, 1, \cdots , p$, and $f$ satisfies the Lipschitz condition. Then the method converges, and for appropriate constants $c_{1} , c_{2}$, $$ \max_{ x_{0} \le x_{n} \le b} | Y (x_{n} ) - y_{h} (x_{n} ) | \le c_{1} \eta (h) + c_{2} \tau (h) $$ holds, where $\displaystyle \tau (h) := \max_{n} | \tau_{n} (Y) |$ is the largest truncation error on $[ x_{0} , b ]$. In particular, if $\eta (h) = O ( h^{m} )$ and $\tau (h) = O ( h^{m} )$, the method converges at rate $O ( h^{m} )$.

Explanation

The term multistep method does not refer to a single specific method but to a whole class, so proving convergence for the class at once is an important task. Understanding how the error grows as a function of $h$ is likewise a necessary step if one intends to use such a method in practice.
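For instance, the two-step Adams–Bashforth method $y_{n+1} = y_{n} + h \left( \frac{3}{2} f_{n} - \frac{1}{2} f_{n-1} \right)$ satisfies every hypothesis of the theorem ($a_{0} = 1 \ge 0$, consistency, and $\eta (h) = 0$ if exact starting values are used), and its $O ( h^{2} )$ convergence can be observed numerically. The following is a minimal sketch, not taken from the source; the test problem $y' = -y$ and the step sizes are arbitrary choices.

```python
import numpy as np

def f(x, y):
    """Right-hand side of the test problem y' = -y."""
    return -y

def Y(x):
    """Exact solution of y' = -y, y(0) = 1."""
    return np.exp(-x)

def ab2_max_error(x0, b, h):
    """Two-step Adams-Bashforth: y_{n+1} = y_n + h(3/2 f_n - 1/2 f_{n-1}).
    In the theorem's notation: p = 1, a_0 = 1, a_1 = 0,
    b_{-1} = 0, b_0 = 3/2, b_1 = -1/2."""
    x = np.arange(x0, b + h / 2, h)
    y = np.empty_like(x)
    y[0], y[1] = Y(x[0]), Y(x[1])  # exact starting values, so eta(h) = 0
    for n in range(1, len(x) - 1):
        y[n + 1] = y[n] + h * (1.5 * f(x[n], y[n]) - 0.5 * f(x[n - 1], y[n - 1]))
    return np.max(np.abs(Y(x) - y))  # max |Y(x_n) - y_h(x_n)| on [x0, b]

errors = [ab2_max_error(0.0, 2.0, h) for h in (0.1, 0.05, 0.025, 0.0125)]
for e_coarse, e_fine in zip(errors, errors[1:]):
    print(f"observed order ~ {np.log2(e_coarse / e_fine):.2f}")  # tends to 2
```

Halving $h$ should roughly quarter the maximum error, consistent with the $O ( h^{2} )$ rate the theorem predicts for this method.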

Proof 1

Denote the difference between the true value $Y_{i}$ and the computed value $y_{i}$ by $\epsilon_{i} := Y_{i} - y_{i}$. The true solution satisfies $$ Y_{n+1} = \sum_{j=0}^{p} a_{j} Y_{n-j} + h \sum_{j = -1}^{p} b_{j} f (x_{n-j} , Y_{n-j} ) + h \tau_{n} (Y) $$ where $\tau_{n} (Y)$ is the truncation error at step $n$, while the numerical solution satisfies

$$ y_{n+1} = \sum_{j=0}^{p} a_{j} y_{n-j} + h \sum_{j = -1}^{p} b_{j} f (x_{n-j} , y_{n-j} ) $$ Subtracting the two equations, $$ \epsilon_{n+1} = \sum_{j=0}^{p} a_{j} \epsilon_{n-j} + h \sum_{j = -1}^{p} b_{j} \left[ f (x_{n-j} , Y_{n-j} ) - f (x_{n-j} , y_{n-j} ) \right] + h \tau_{n} (Y) $$ Since $f$ satisfies the Lipschitz condition with constant $K$, $$ | \epsilon_{n+1} | \le \sum_{j=0}^{p} a_{j} | \epsilon_{n-j} | + h K \sum_{j = -1}^{p} | b_{j} | | \epsilon_{n - j} | + h \tau (h) $$ where $\displaystyle \tau (h) := \max_{n} | \tau_{n} (Y) |$. Using $\displaystyle E_{n} := \max_{0 \le i \le n} | \epsilon_{i} |$ for the largest error among the $\epsilon_{i}$ up to index $n$, and noting that the $j = -1$ term on the right involves $\epsilon_{n+1}$ itself, $$ | \epsilon_{n+1} | \le \sum_{j=0}^{p} a_{j} E_{n} + h K \sum_{j = -1}^{p} | b_{j} | E_{n+1} + h \tau (h) $$

Consistency and order of convergence of multistep methods: For the initial value problem $\begin{cases} y ' = f(x,y) \\ ( y( x_{0} ) , \cdots , y(x_{p}) )= ( Y_{0} , \cdots , Y_{p} ) \end{cases}$, the necessary and sufficient conditions for the multistep method $$ y_{n+1} = \sum_{j=0}^{p} a_{j} y_{n-j} + h \sum_{j = -1}^{p} b_{j} f (x_{n-j} , y_{n-j} ) $$ to be consistent and to have convergence order $m \in \mathbb{N}$ are (i) and (ii), respectively (a numerical check of these conditions is sketched after the list).

  • (i): $$\begin{cases} \displaystyle \sum_{j = 0}^{p} a_{j} = 1 \\ \displaystyle - \sum_{j = 0}^{p} j a_{j} + \sum_{j = -1}^{p} b_{j} = 1 \end{cases}$$
  • (ii): For $i = 0, 1 , \cdots , m$, $$\sum_{j=0}^{p} (-j)^{i} a_{j} + i \sum_{j=-1}^{p} (- j )^{i-1} b_{j} = 1$$
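Conditions (i) and (ii) can be verified mechanically for a concrete coefficient set. Below is a minimal sketch, again using the two-step Adams–Bashforth coefficients as an illustrative input; the helper `order_condition` is my own name, not from the text.

```python
from fractions import Fraction as F

def order_condition(a, b, i):
    """Left-hand side of condition (ii) for a given i, with
    a = [a_0, ..., a_p] and b = [b_{-1}, b_0, ..., b_p]."""
    s = sum(aj * F(-j) ** i for j, aj in enumerate(a))
    if i > 0:  # the b-sum is multiplied by i, so it drops out for i = 0
        s += i * sum(bj * F(-j) ** (i - 1) for j, bj in zip(range(-1, len(a)), b))
    return s

# Two-step Adams-Bashforth: y_{n+1} = y_n + h*(3/2 f_n - 1/2 f_{n-1})
a = [F(1), F(0)]               # a_0, a_1
b = [F(0), F(3, 2), F(-1, 2)]  # b_{-1}, b_0, b_1

print([order_condition(a, b, i) == 1 for i in range(4)])
# [True, True, True, False]: consistent (i = 0, 1) and of convergence order m = 2
```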

Since the given multistep method is consistent, $\displaystyle \sum_{j=0}^{p} a_{j} = 1$. Setting $\displaystyle c := K \sum_{j = -1}^{p} | b_{j} |$, $$ | \epsilon_{n+1} | \le E_{n} + hc E_{n+1} + h \tau (h) $$ Now note that $E_{n+1} = \max ( E_{n} , | \epsilon_{n+1} | )$. If $| \epsilon_{n+1} | \le E_{n}$, then $E_{n+1} = E_{n}$, so $$ E_{n+1} = E_{n} \le E_{n} + hc E_{n+1} + h \tau (h) $$ If $| \epsilon_{n+1} | > E_{n}$, then $E_{n+1} = | \epsilon_{n+1} |$, so $$ E_{n+1} = | \epsilon_{n+1} | \le E_{n} + hc E_{n+1} + h \tau (h) $$ Therefore, in either case, $$ E_{n+1} \le E_{n} + hc E_{n+1} + h \tau (h) $$ Rearranging, $$ (1 - hc) E_{n+1} \le E_{n} + h \tau (h) $$ Since $h \to 0$, we may take $h$ small enough that $\displaystyle hc \le {{1} \over {2}}$, which in particular gives $(1 - hc) > 0$. Using $\displaystyle {{1} \over {1 - hc}} \le 1 + 2hc$ for $\displaystyle hc \le {{1} \over {2}}$, $$ \begin{align*} E_{n+1} \le & {{E_{n} } \over {1 - hc}} + {{ h \tau (h) } \over {1 - hc }} \\ \le & (1 + 2 hc ) E_{n} + 2 h \tau (h) \end{align*} $$ Solving this recursion (see the worked step below) with $E_{p} \le \eta (h)$ and $nh \le b - x_{0}$, $$ \begin{align*} \max_{ x_{0} \le x_{n} \le b} | Y (x_{n} ) - y_{h} (x_{n} ) | =& E_{n} \\ \le & e^{2c(b-x_{0})} \eta (h) + \left[ {{ e^{2c (b - x_{0} ) } - 1 } \over {c}} \right] \tau (h) \end{align*} $$ which is the claimed bound with $c_{1} = e^{2c(b-x_{0})}$ and $c_{2} = \left( e^{2c(b-x_{0})} - 1 \right) / c$.
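The recursive solving glossed over above is the standard discrete Gronwall-type argument. A worked version in the notation used here, assuming the starting indices give $E_{p} \le \eta (h)$:

```latex
% Unroll E_{n+1} <= (1 + 2hc) E_n + 2h*tau(h) down to the starting index p:
\begin{align*}
E_{n} &\le (1 + 2hc)^{n-p} E_{p}
        + 2 h \tau (h) \sum_{k=0}^{n-p-1} (1 + 2hc)^{k} \\
      &= (1 + 2hc)^{n-p} E_{p}
        + 2 h \tau (h) \, \frac{(1 + 2hc)^{n-p} - 1}{2hc} \\
      &\le e^{2hcn} \eta (h) + \frac{e^{2hcn} - 1}{c} \tau (h)
        && \text{using } 1 + 2hc \le e^{2hc},\ E_{p} \le \eta (h) \\
      &\le e^{2c(b - x_{0})} \eta (h)
        + \frac{e^{2c(b - x_{0})} - 1}{c} \tau (h)
        && \text{since } nh \le b - x_{0}
\end{align*}
```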


  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p360~361. ↩︎