Stability and Root Conditions of Consistent Multistep Methods

Theorem

If a multistep method is consistent, the method is stable $\iff$ the method satisfies the root condition.

Explanation

When the closed interval $[x_{0} , b]$ is divided into node points with spacing $h$, write them as $x_{0} \le x_{1} \le \cdots \le x_{N(h) -1} \le x_{N(h) } \le b$. Here $N(h)$ denotes the index of the last node point, which depends on $h$.

Let $z_{0} , \cdots , z_{p}$ be a slight perturbation of the given initial values $y_{0} , \cdots , y_{p}$. The method is stable if, for all sufficiently small positive $h$ and $\epsilon$ with $\displaystyle \max_{0 \le n \le p} | y_{ n} - z_{n} | \le \epsilon$, there exists a constant $C$, independent of $h$, such that $\displaystyle \max_{0 \le n \le N(h) } | y_{ n} - z_{n} | \le C \epsilon$. If the initially small perturbation can grow into a large change as $h$ is refined, the method is not stable.
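The definition can be illustrated numerically. Below is a minimal sketch, assuming NumPy; the choice of method (the two-step midpoint method $y_{n+1} = y_{n-1} + 2 h f(x_{n}, y_{n})$, which is stable), the test problem $y ' = 0$, $y(0) = 0$, and the perturbation size are my own and serve only as an illustration. The perturbed solution is propagated to $x = b$ for several step sizes, and the ratio of the maximum deviation to the initial deviation stays bounded.

```python
import numpy as np

def midpoint_perturbed(h, eps, b=1.0):
    """Two-step midpoint method y_{n+1} = y_{n-1} + 2 h f(x_n, y_n)
    applied to y' = 0, y(0) = 0, started from values perturbed by eps."""
    N = int(round(b / h))                  # N(h): index of the last node point
    z = np.zeros(N + 1)
    z[0], z[1] = eps, eps                  # perturbed initial values z_0, z_1
    for n in range(1, N):
        z[n + 1] = z[n - 1] + 2 * h * 0.0  # f = 0 for y' = 0
    return np.max(np.abs(z))               # max_n |y_n - z_n|, since y_n = 0

eps = 1e-6
for h in [0.1, 0.01, 0.001]:
    print(h, midpoint_perturbed(h, eps) / eps)   # stays bounded (here exactly 1) as h -> 0
```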

Proof 1

Consistent Multistep Method: For the initial value problem $\begin{cases} y ' = f(x,y) \\ ( y( x_{0} ) , \cdots , y(x_{p}) ) = (Y_{0}, \cdots , Y_{p} ) \end{cases}$, a consistent multistep method $$ y_{n+1} = \sum_{j=0}^{p} a_{j} y_{n-j} + h \sum_{j = -1}^{p} b_{j} f (x_{n-j} , y_{n-j} ) $$ satisfies the following conditions. $$ \begin{cases} \displaystyle \sum_{j = 0}^{p} a_{j} = 1 \\ \displaystyle - \sum_{j = 0}^{p} j a_{j} + \sum_{j = -1}^{p} b_{j} = 1 \end{cases} $$
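As a quick sanity check, these two conditions can be verified directly from the coefficients. The following is a minimal sketch, assuming NumPy; the helper name `is_consistent` and the two example methods (midpoint and Adams-Bashforth 2) are my own illustrative choices.

```python
import numpy as np

def is_consistent(a, b):
    """Check the consistency conditions for
    y_{n+1} = sum_{j=0}^{p} a_j y_{n-j} + h sum_{j=-1}^{p} b_j f_{n-j},
    with a = [a_0, ..., a_p] and b = [b_{-1}, b_0, ..., b_p]."""
    p = len(a) - 1
    cond1 = np.isclose(sum(a), 1.0)                                   # sum_j a_j = 1
    cond2 = np.isclose(-sum(j * a[j] for j in range(p + 1)) + sum(b), 1.0)
    return bool(cond1 and cond2)

# Midpoint method: y_{n+1} = y_{n-1} + 2 h f_n
print(is_consistent(a=[0.0, 1.0], b=[0.0, 2.0, 0.0]))                 # True

# Adams-Bashforth 2: y_{n+1} = y_n + h (3/2 f_n - 1/2 f_{n-1})
print(is_consistent(a=[1.0, 0.0], b=[0.0, 1.5, -0.5]))                # True
```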

Root Condition: For a consistent multistep method, let $\displaystyle \rho ( r) = r^{p+1} - \sum_{j=0}^{p} a_{j} r^{p-j}$. The method is said to satisfy the Root Condition when the roots $r_{0} , \cdots , r_{p}$ of the equation $\rho (r ) = 0$ satisfy the following two conditions; a numerical check is sketched right after the list.

  • (i): $| r_{j} | \le 1$
  • (ii): $|r_{j}| = 1 \implies \rho '(r_{j}) \ne 0$
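Such a check can be automated by computing the roots of $\rho$ numerically. Below is a minimal sketch, assuming NumPy; the tolerance, the helper name `satisfies_root_condition`, and the second example method are illustrative assumptions on my part.

```python
import numpy as np

def satisfies_root_condition(a, tol=1e-10):
    """Check the root condition for rho(r) = r^{p+1} - sum_{j=0}^{p} a_j r^{p-j},
    with a = [a_0, ..., a_p]."""
    coeffs = np.concatenate(([1.0], -np.asarray(a, dtype=float)))  # rho in descending powers
    roots = np.roots(coeffs)
    dcoeffs = np.polyder(coeffs)
    for r in roots:
        if abs(r) > 1 + tol:                       # condition (i) violated
            return False
        if abs(abs(r) - 1) <= tol and abs(np.polyval(dcoeffs, r)) <= tol:
            return False                           # condition (ii) violated: repeated root on |r| = 1
    return True

print(satisfies_root_condition([0.0, 1.0]))    # midpoint method, roots +1 and -1 -> True
print(satisfies_root_condition([-4.0, 5.0]))   # roots 1 and -5 -> False (used again in Case 1 below)
```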

$(\Rightarrow)$

Assume that the method is stable but does not satisfy the root condition.

A contradiction will be derived using the initial value problem $\begin{cases} y ' = 0 \\ y(0) = 0 \end{cases}$.

The numerical solution to the given problem is trivially $y_{n} = 0$ for all $n \ge 0$.

Let $z_{0} , \cdots , z_{p}$ be a slight perturbation of $y_{0} , \cdots , y_{p}$. Since $f = 0$, applying the method gives $$ z_{n+1} = a_{0} z_{n} + a_{1} z_{n-1} + \cdots + a_{p-1} z_{n-p+1} + a_{p} z_{n -p} $$ To find the characteristic solutions, substitute $z_{n} := r^{n}$: $$ r^{n+1} = a_{0} r^{n} + a_{1} r^{n-1} + \cdots + a_{p-1} r^{n-p+1} + a_{p} r^{n -p} $$ Dividing both sides by $r^{n-p}$ gives $$ r^{p+1} = a_{0} r^{p} + a_{1} r^{p-1} + \cdots + a_{p-1} r + a_{p} $$ which is exactly the equation $\rho (r) = 0$. Since the method is consistent, $\sum_{j=0}^{p} a_{j} = 1$, so $r_{0}=1$ is a root, and this equation of degree $p+1$ has the further roots $r_{1}, r_{2} , \cdots , r_{p}$. Therefore, the general solution (when the roots are distinct) is $$ z_{n} = c_{0} r_{0}^{n} + c_{1} r_{1}^{n} + \cdots + c_{p} r_{p}^{n} $$ for some constants $c_{0} , \cdots , c_{p}$.
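That a linear combination of powers of the characteristic roots really does satisfy the recurrence can be checked numerically. Below is a minimal sketch, assuming NumPy; the example coefficients $c_{i}$ and the choice of method are arbitrary illustrations.

```python
import numpy as np

# Example: midpoint method with f = 0, i.e. z_{n+1} = z_{n-1}  (a_0 = 0, a_1 = 1)
a = np.array([0.0, 1.0])
p = len(a) - 1
roots = np.roots(np.concatenate(([1.0], -a)))    # roots of rho(r) = r^2 - 1: +1 and -1
c = np.array([0.3, -0.2])                        # arbitrary coefficients c_0, c_1

def z(n):
    """General solution z_n = sum_i c_i r_i^n built from the characteristic roots."""
    return float(np.real(np.sum(c * roots ** n)))

# Verify the recurrence z_{n+1} = sum_{j=0}^{p} a_j z_{n-j} for a few indices n
for n in range(p, 10):
    assert np.isclose(z(n + 1), sum(a[j] * z(n - j) for j in range(p + 1)))
print("recurrence satisfied")
```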

Consider the following two cases for some $0 \le j \le p$.

  • Case 1.
    Suppose condition (i) fails, i.e. $|r_{j}| > 1$ for some $j$. Choose the perturbation so that $c_{j} = \epsilon$ and every other coefficient is zero, that is, $$ z_{n} = c_{j} r_{j}^{n} = \epsilon r_{j}^{n} $$ Then $$ z_{0} = \epsilon, \quad z_{1} = \epsilon r_{j } , \quad \cdots , \quad z_{p} = \epsilon r_{j}^{p} $$ and $$ \max_{0 \le n \le p } | y_{n} - z_{n} | = \epsilon | r_{j} |^{p} $$ Applying the method over $[x_{0} , b]$ gives $$ \max_{x_{0} \le x_{n} \le b } | y_{n} - z_{n} | = \epsilon |r_{j}|^{N(h)} $$ But as $h \to 0$, $N(h) \to \infty$, and since $|r_{j}| > 1$ there cannot exist any constant $C>0$, independent of $h$, satisfying $$ \max_{x_{0} \le x_{n} \le b } | y_{n} - z_{n} | \le C \cdot \epsilon |r_{j}|^{p} $$ Therefore, the method is unstable (see the numerical sketch after this list).
  • Case 2.
    Suppose condition (ii) fails, i.e. $|r_{j}| = 1$ but $\rho '(r_{j}) = 0$ for some $j$. Then $r_{j}$ is a repeated root of the characteristic equation; let $\nu \ge 2$ be its multiplicity. In this case the general solution is a linear combination that includes the linearly independent particular solutions $r_{j}^{n} , n r_{j}^{n} , \cdots , n^{ \nu - 1} r_{j}^{n}$: $$ z_{n} = c_{j_{0}} r_{j}^{n} + c_{j_{1}} n r_{j}^{n} + \dots + c_{j_{\nu-1}} n^{\nu-1} r_{j}^{n} + \cdots $$ Choose the perturbation with $c_{j_{1}} = \epsilon$ and every other coefficient zero, so that $z_{n} = \epsilon n r_{j}^{n}$. Since $| r_{j} | = 1$, $$ | z_{n} | = \epsilon n , \qquad \max_{0 \le n \le p } | y_{n} - z_{n} | = \epsilon p $$ Applying the method over $[x_{0} , b]$ gives $$ \max_{x_{0} \le x_{n} \le b } | y_{n} - z_{n} | = \epsilon N(h) $$ But as $h \to 0$, $N(h) \to \infty$, so there cannot exist any constant $C>0$, independent of $h$, satisfying $$ \max_{x_{0} \le x_{n} \le b } | y_{n} - z_{n} | \le C \cdot \epsilon p $$ Therefore, the method is unstable.
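Case 1 can be observed directly in computation. The sketch below is my own illustration, not taken from the cited texts: the two-step method $y_{n+1} = -4 y_{n} + 5 y_{n-1} + h \left( 4 f_{n} + 2 f_{n-1} \right)$ satisfies both consistency conditions but has characteristic roots $1$ and $-5$, so it violates the root condition. Applied to $y ' = 0$, $y(0) = 0$ with a perturbed starting value, the deviation blows up as $h$ decreases.

```python
import numpy as np

def unstable_demo(h, eps, b_end=1.0):
    """Apply z_{n+1} = -4 z_n + 5 z_{n-1} (the method with f = 0) to y' = 0, y(0) = 0,
    starting from the perturbed values z_0 = 0, z_1 = eps."""
    N = int(round(b_end / h))             # N(h): index of the last node point
    z = np.zeros(N + 1)
    z[0], z[1] = 0.0, eps
    for n in range(1, N):
        z[n + 1] = -4.0 * z[n] + 5.0 * z[n - 1]
    return np.max(np.abs(z))              # max_n |y_n - z_n| over the whole interval

eps = 1e-10
for h in [0.25, 0.125, 0.0625]:
    print(h, unstable_demo(h, eps) / eps) # grows roughly like |r_j|^{N(h)} = 5^{N(h)}
```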

$(\Leftarrow)$ The original proof is quite difficult and contains many leaps2, so only a sketch of the idea is given here.

Setting $e_{n} := y_{n} - z_{n}$, we have $y = z + e$, and the perturbation $e_{n}$ can be analyzed through the linear difference equation it satisfies.

Both the original values $y_{n}$ and the perturbed values $z_{n} = y_{n} - e_{n}$ are generated by the same method: $$ \begin{align} y_{n+1} = \sum_{j=0}^{p} a_{j} y_{n-j} + h \sum_{j = -1}^{p} b_{j} f (x_{n-j} , y_{n-j} ) \end{align} $$

$$ \begin{align} z_{n+1} = \sum_{j=0}^{p} a_{j} z_{n-j} + h \sum_{j = -1}^{p} b_{j} f (x_{n-j} , z_{n-j} ) \end{align} $$ Subtracting $(2)$ from $(1)$ gives $$ e_{n+1} = \sum_{j=0}^{p} a_{j} e_{n-j} + h \sum_{j= -1}^{p} b_{j} \left[ f(x_{n-j} , y_{n- j }) - f(x_{n-j} , z_{n- j }) \right] $$ The second sum on the right carries a factor of $h$, so its effect can be viewed as perturbing each characteristic root $r_{i}$ by a small quantity $s_{i} = O(h)$, and the error can be written as $$ e_{n} = g_{0} ( r_{0} + s_{0} )^n + g_{1} ( r_{1} + s_{1} )^n + \cdots + g_{p} ( r_{p} + s_{p} )^n $$ Now suppose $\displaystyle \max_{0 \le n \le p} | y_{ n} - z_{n} | \le \epsilon$, so that the coefficients satisfy $| g_{i} | \le \epsilon$ up to a constant. Since the given method satisfies the root condition, $|r_{i}| \le 1$, and since $s_{i} = O(h)$, for $x_{0} \le x_{n} \le b$ one has $| r_{i} + s_{i} |^{n} \le (1 + c_{1} h )^{n} \le e^{ c_{1} (b - x_{0}) }$ for some constant $c_{1}$ independent of $h$. Therefore $$ \begin{align*} | e_{n} | \le & | g_{0} | | r_{0} + s_{0} |^n + | g_{1} | | r_{1} + s_{1} | ^n + \cdots + | g_{p} | | r_{p} + s_{p} |^n \\ \le & (p+1) \max_{0 \le i \le p} | g_{i}| \max_{0 \le i \le p} | r_{i} + s_{i} |^n \\ \le & (p+1) \cdot \epsilon \cdot e^{ c_{1} (b - x_{0}) } \end{align*} $$ Hence $\displaystyle \max_{ x_{0} \le x_{n} \le b} | e_{n} | \le C \epsilon$ with $C := (p+1) e^{ c_{1} (b - x_{0}) }$, independent of $h$, and the method is stable.
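The conclusion of this direction can also be observed numerically. Below is a minimal sketch, assuming NumPy; the choice of Adams-Bashforth 2 (which satisfies the root condition), the test problem $y ' = -2y$, $y(0) = 1$, and the perturbation size are my own illustrative assumptions. The perturbed and unperturbed computations are compared over the whole interval, and the ratio of the maximum deviation to $\epsilon$ stays bounded as $h$ decreases.

```python
import numpy as np

def ab2_max_deviation(h, eps, lam=-2.0, b_end=2.0):
    """Adams-Bashforth 2 applied to y' = lam * y, y(0) = 1, once with exact
    starting values and once with starting values perturbed by eps.
    Returns max_n |y_n - z_n| over [0, b_end]."""
    f = lambda x, y: lam * y
    N = int(round(b_end / h))
    x = np.linspace(0.0, b_end, N + 1)
    y = np.zeros(N + 1); z = np.zeros(N + 1)
    y[0], y[1] = 1.0, np.exp(lam * h)          # exact starting values
    z[0], z[1] = y[0] + eps, y[1] + eps        # perturbed starting values
    for n in range(1, N):
        y[n + 1] = y[n] + h * (1.5 * f(x[n], y[n]) - 0.5 * f(x[n - 1], y[n - 1]))
        z[n + 1] = z[n] + h * (1.5 * f(x[n], z[n]) - 0.5 * f(x[n - 1], z[n - 1]))
    return np.max(np.abs(y - z))

eps = 1e-6
for h in [0.1, 0.05, 0.025, 0.0125]:
    print(h, ab2_max_deviation(h, eps) / eps)  # ratio stays bounded as h decreases
```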


  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p398~401. ↩︎

  2. Isaacson. (2012). Analysis of Numerical Methods: p405~417. https://www.researchgate.net/file.PostFileLoader.html?id=56c583ac5e9d97577f8b458e&assetKey=AS:330399868833792@1455784875579 ↩︎