
Convergence and Root Condition of Consistent Multistep Methods

Theorem

If a multistep method is consistent, then the method is convergent $\iff$ the method satisfies the root condition.

Explanation

Partition the closed interval $[x_{0} , b]$ into node points with spacing $h$, written $x_{0} \le x_{1} \le \cdots \le x_{N(h) -1} \le x_{N(h) } \le b$, where $N(h)$ denotes the index of the last node point and depends on $h$.

That the method is convergent means: if $\displaystyle \eta (h) : = \max_{0 \le n \le p} | Y_{n} - y_{ n} | \to 0$ as $h \to 0$, then $\displaystyle \max_{x_{0} \le x_{n} \le b } | Y_{n} - y_{ n} | \to 0$ as $h \to 0$.

Proof¹

Consistency of Multistep Methods: For the initial value problem $\begin{cases} y ' = f(x,y) \\ ( y( x_{0} ) , \cdots , y(x_{p}) ) = (Y_{0}, \cdots , Y_{p} ) \end{cases}$, the multistep method $$ y_{n+1} = \sum_{j=0}^{p} a_{j} y_{n-j} + h \sum_{j = -1}^{p} b_{j} f (x_{n-j} , y_{n-j} ) $$ is consistent if and only if the following hold. $$ \begin{cases} \displaystyle \sum_{j = 0}^{p} a_{j} = 1 \\ \displaystyle - \sum_{j = 0}^{p} j a_{j} + \sum_{j = -1}^{p} b_{j} = 1 \end{cases} $$
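For example (a standard method, not taken from the source), the two-step Adams-Bashforth method satisfies both conditions: $$ y_{n+1} = y_{n} + h \left( \frac{3}{2} f_{n} - \frac{1}{2} f_{n-1} \right) \implies a_{0} = 1 , \ a_{1} = 0 , \ b_{-1} = 0 , \ b_{0} = \frac{3}{2} , \ b_{1} = - \frac{1}{2} $$ so that $\displaystyle \sum_{j=0}^{1} a_{j} = 1$ and $\displaystyle - \sum_{j=0}^{1} j a_{j} + \sum_{j=-1}^{1} b_{j} = 0 + \left( \frac{3}{2} - \frac{1}{2} \right) = 1$, hence the method is consistent.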

Root Condition: For a consistent multistep method, define $\displaystyle \rho ( r) := r^{p+1} - \sum_{j=0}^{p} a_{j} r^{p-j}$. If the roots $r_{0} , \cdots , r_{p}$ of the equation $\rho (r ) = 0$ satisfy the following conditions, the given multistep method is said to satisfy the root condition.

  • (i): $| r_{j} | \le 1$
  • (ii): $|r_{j}| = 1 \implies \rho ' (r_{j}) \ne 0$
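For example (again a standard method, not from the source), the midpoint method $y_{n+1} = y_{n-1} + 2 h f_{n}$ has $p = 1$, $a_{0} = 0$, $a_{1} = 1$, so $$ \rho (r) = r^{2} - 1 $$ whose roots $r_{0} = 1$ and $r_{1} = -1$ both satisfy $|r_{j}| \le 1$ and are simple, with $\rho ' ( \pm 1 ) = \pm 2 \ne 0$; the root condition holds.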

$(\Rightarrow)$

Suppose the method is convergent but does not satisfy the root condition. This assumption will be shown to lead to a contradiction via the initial value problem $\begin{cases} y ' = 0 \\ y(0) = 0 \end{cases}$.

The true solution of the given problem is trivially $Y_{n} = 0$ for all $n \ge 0$.

Since $f = 0$, the numerical solution satisfies $\displaystyle y_{n+1} = \sum_{j=0}^{p} a_{j} y_{n - j}$. Choose the starting values $y_{0} , \cdots , y_{p}$ so that $$ \eta (h) = \max_{0 \le n \le p} | Y_{n} - y_{ n} | = \max_{0 \le n \le p} | y_{ n} | \to 0 $$ as $h \to 0$. For some $0 \le j \le p$, consider the following two cases.

  • Case 1.
    If condition (i) fails, there is some $j$ with $|r_{j}| > 1$. Take $y_{n} = h r_{j} ^{n}$; then $$ \eta (h) = \max_{0 \le n \le p} | y_{ n} | = h \left| r_{j} \right| ^{p} \to 0 $$ as $h \to 0$. However, since $N(h) \to \infty$ as $h \to 0$, $$ \max_{x_{0} \le x_{n} \le b} | Y_{n} - y_{ n} | = h \left| r_{j} \right| ^{N (h) } \to \infty $$ Therefore the method is not convergent. (A numerical sketch of this blow-up follows the list.)
  • Case 2.
    If condition (ii) fails, there is some $j$ with $|r_{j}| = 1$ and $\rho ' (r_{j}) = 0$, which means $r_{j}$ is a repeated root of the characteristic equation $\rho (r) = 0$. Then $n r_{j}^{n}$ is also a solution of the difference equation, so take $y_{n} = h n r_{j} ^{n}$; then $$ \eta (h) = \max_{0 \le n \le p} | y_{ n} | = h p \left| r_{j} \right| ^{p} = hp \to 0 $$ as $h \to 0$. However, since $h N(h) \to b - x_{0} > 0$ as $h \to 0$, $$ \max_{x_{0} \le x_{n} \le b} | Y_{n} - y_{ n} | = h N(h) \left| r_{j} \right| ^{N (h) } = h N(h) \to b - x_{0} \ne 0 $$ Therefore the method is not convergent in this case either.
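The blow-up in Case 1 can be observed directly. Below is a minimal numerical sketch of my own (not from the source): the two-step method $y_{n+1} = -4 y_{n} + 5 y_{n-1} + h ( 4 f_{n} + 2 f_{n-1} )$ is consistent ($a_{0} = -4$, $a_{1} = 5$, $b_{0} = 4$, $b_{1} = 2$ satisfy both consistency conditions), but $\rho (r) = r^{2} + 4r - 5$ has roots $1$ and $-5$, so condition (i) fails. Apply it to $y ' = 0$, $y(0) = 0$ with the starting perturbation $y_{1} = h$:

```python
import numpy as np

# Two-step method y_{n+1} = -4 y_n + 5 y_{n-1} + h (4 f_n + 2 f_{n-1}):
# consistent, but rho(r) = r^2 + 4r - 5 has the root -5 with |-5| > 1.
# For y' = 0 the recursion reduces to y_{n+1} = -4 y_n + 5 y_{n-1},
# and the true solution is Y_n = 0 for all n.
b = 1.0                                # interval [x_0, b] with x_0 = 0
for h in [0.1, 0.05, 0.02]:
    N = round(b / h)                   # N(h): index of the last node point
    y = np.zeros(N + 1)
    y[1] = h                           # starting error, so eta(h) = h -> 0
    for n in range(1, N):
        y[n + 1] = -4.0 * y[n] + 5.0 * y[n - 1]
    print(f"h = {h:<5}  eta(h) = {h:.1e}  max|Y_n - y_n| = {np.abs(y).max():.1e}")
```

Even though the starting error $\eta (h) = h$ shrinks, the maximum error over $[0, 1]$ grows like $h \left| r_{j} \right| ^{N(h)}$, exactly as in Case 1.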

$(\Leftarrow)$ The original proof is overly complicated, with many logical leaps², so only a sketch is given here.

Define the error $e_{n} : = Y_{n} - y_{n}$, so that $Y = y + e$; the errors then satisfy a linear difference equation, obtained as follows.

The numerical solution satisfies the multistep recursion $$ \begin{align} y_{n+1} = \sum_{j=0}^{p} a_{j} y_{n-j} + h \sum_{j = -1}^{p} b_{j} f (x_{n-j} , y_{n-j} ) \end{align} $$

and, ignoring the truncation error in this sketch, the true solution satisfies $$ \begin{align} Y_{n+1} = \sum_{j=0}^{p} a_{j} Y_{n-j} + h \sum_{j = -1}^{p} b_{j} f (x_{n-j} , Y_{n-j} ) \end{align} $$ Subtracting $(1)$ from $(2)$ yields $$ e_{n+1} = \sum_{j=0}^{p} a_{j} e_{n-j} + h \sum_{j= -1}^{p} b_{j} \left[ f(x_{n-j} , Y_{n- j }) - f(x_{n-j} , y_{n- j }) \right] $$ Writing $Y_{n} = d_{0} s_{0}^{n} + d_{1} s_{1}^{n} + \cdots + d_{p} s_{p}^{n}$, the error takes the form $$ e_{n} = g_{0} ( r_{0} + s_{0} )^n + g_{1} ( r_{1} + s_{1} )^n + \cdots + g_{p} ( r_{p} + s_{p} )^n $$ Since the given method satisfies the root condition also for $( r_{0} + s_{0} ) , \cdots , ( r_{p} + s_{p} )$, for $N \ge p$ $$ \begin{align*} | e_{N} | \le & | g_{0} | | r_{0} + s_{0} |^N + | g_{1} | | r_{1} + s_{1} | ^N + \cdots + | g_{p} | | r_{p} + s_{p} |^N \\ \le & \max_{0 \le n \le p} | Y_{ n} - y_{n} | \end{align*} $$ Now if $\displaystyle \max_{0 \le n \le p} | Y_{ n} - y_{n} | \to 0$ as $h \to 0$, then $e_{N} \to 0$ as well, that is, $$ \max_{x_{0} \le x_{n} \le b} | Y_{n} - y_{ n} | \to 0 $$ and the method is convergent.
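To see this direction in action, here is a matching sketch of my own with the two-step Adams-Bashforth method introduced earlier, whose characteristic polynomial $\rho (r) = r^{2} - r$ has the simple roots $0$ and $1$ and therefore satisfies the root condition, applied to $y ' = -y$, $y(0) = 1$ with true solution $Y(x) = e^{-x}$:

```python
import numpy as np

# Two-step Adams-Bashforth: y_{n+1} = y_n + h (3/2 f_n - 1/2 f_{n-1}).
# rho(r) = r^2 - r has roots 0 and 1, so the root condition holds.
f = lambda x, y: -y                    # test problem y' = -y, y(0) = 1
b = 1.0
for h in [0.1, 0.01, 0.001]:
    N = round(b / h)
    x = np.linspace(0.0, b, N + 1)
    y = np.zeros(N + 1)
    y[0] = 1.0
    y[1] = np.exp(-h)                  # exact starting value, eta(h) = 0
    for n in range(1, N):
        y[n + 1] = y[n] + h * (1.5 * f(x[n], y[n]) - 0.5 * f(x[n - 1], y[n - 1]))
    print(f"h = {h:<6} max|Y_n - y_n| = {np.abs(np.exp(-x) - y).max():.1e}")
```

The maximum error now decreases with $h$ (at rate $O(h^{2})$ for this method), consistent with convergence.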


  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): pp. 401-403.

  2. Isaacson. (2012). Analysis of Numerical Methods: pp. 405-417. https://www.researchgate.net/file.PostFileLoader.html?id=56c583ac5e9d97577f8b458e&assetKey=AS:330399868833792@1455784875579