

Convergence and Root Condition of Consistent Multistep Methods

Theorem

If a multistep method is consistent, then the method is convergent $\iff$ the method satisfies the root condition.

Explanation

For a closed interval $[x_{0}, b]$, cut it into subintervals of length $h$ to create the node points $x_{0} \le x_{1} \le \cdots \le x_{N(h)-1} \le x_{N(h)} \le b$. Here $N(h)$ denotes the index of the last node point, which depends on $h$.

The method being convergent means: if $\displaystyle \eta(h) := \max_{0 \le n \le p} | Y_{n} - y_{n} | \to 0$ as $h \to 0$, then $\displaystyle \max_{x_{0} \le x_{n} \le b} | Y_{n} - y_{n} | \to 0$ as $h \to 0$.
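
The two quantities in this definition can be illustrated numerically. Below is a minimal sketch, assuming Euler's method and the test problem $y' = y$, $y(0) = 1$ on $[0, 1]$ (neither of which comes from the text above); with the exact starting value, $\eta(h) = 0$ trivially, and convergence means the global maximum error also vanishes as $h \to 0$.

```python
import math

# Sketch: Euler's method (an assumption for illustration, the simplest
# consistent one-step method) for y' = y, y(0) = 1 on [0, 1].
def max_error(h):
    """Return max_{x_0 <= x_n <= b} |Y_n - y_n| for Euler's method."""
    N = round(1.0 / h)          # N(h): index of the last node point
    y, worst = 1.0, 0.0         # exact start, so eta(h) = 0
    for n in range(1, N + 1):
        y = y + h * y           # y_{n+1} = y_n + h f(x_n, y_n)
        worst = max(worst, abs(math.exp(n * h) - y))
    return worst

# As h -> 0, the global maximum error also tends to 0: convergence.
for h in (0.1, 0.01, 0.001):
    print(h, max_error(h))
```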

Proof¹

Consistency of Multistep Methods: For the initial value problem
$$\begin{cases} y' = f(x,y) \\ ( y( x_{0} ) , \cdots , y(x_{p}) ) = (Y_{0}, \cdots , Y_{p} ) \end{cases}$$
a consistent multistep method
$$y_{n+1} = \sum_{j=0}^{p} a_{j} y_{n-j} + h \sum_{j = -1}^{p} b_{j} f (x_{n-j} , y_{n-j} )$$
satisfies the following.
$$\begin{cases} \displaystyle \sum_{j = 0}^{p} a_{j} = 1 \\ \displaystyle - \sum_{j = 0}^{p} j a_{j} + \sum_{j = -1}^{p} b_{j} = 1 \end{cases}$$
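
These two conditions can be checked mechanically. A small sketch, assuming the coefficient layout $a = (a_{0}, \ldots, a_{p})$, $b = (b_{-1}, b_{0}, \ldots, b_{p})$, and using the two-step Adams-Bashforth method as an example (not a method discussed in the text):

```python
# Check the two consistency conditions for given coefficients.
# Example (assumption for illustration): two-step Adams-Bashforth,
#   y_{n+1} = y_n + h( (3/2) f_n - (1/2) f_{n-1} ),  so p = 1,
#   a = (a_0, a_1) = (1, 0),  b = (b_{-1}, b_0, b_1) = (0, 3/2, -1/2).
def is_consistent(a, b):
    """a = [a_0, ..., a_p], b = [b_{-1}, b_0, ..., b_p]."""
    cond1 = abs(sum(a) - 1.0) < 1e-12                   # sum a_j = 1
    cond2 = abs(-sum(j * aj for j, aj in enumerate(a))  # -sum j a_j
                + sum(b) - 1.0) < 1e-12                 #  + sum b_j = 1
    return cond1 and cond2

print(is_consistent([1.0, 0.0], [0.0, 1.5, -0.5]))  # True
```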

Root Condition: For a consistent multistep method, let $\displaystyle \rho ( r) = r^{p+1} - \sum_{j=0}^{p} a_{j} r^{p-j}$. When the roots $r_{0} , \cdots , r_{p}$ of the equation $\rho (r ) = 0$ satisfy the following conditions, the given multistep method is said to satisfy the root condition.

  • (i): $| r_{j} | \le 1$
  • (ii): $|r_{j}| = 1 \implies \rho '(r_{j}) \ne 0$
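
For a two-step method ($p = 1$), $\rho$ is a quadratic, so its roots are available in closed form and the root condition can be checked directly. A sketch under that assumption (the inputs $a_{0}, a_{1}$ are illustrative coefficients, not taken from the text):

```python
import cmath

# Root-condition check for a two-step method (p = 1), where
# rho(r) = r^2 - a_0 r - a_1 and the quadratic formula applies.
def satisfies_root_condition(a0, a1, tol=1e-10):
    disc = cmath.sqrt(a0 * a0 + 4.0 * a1)
    roots = [(a0 + disc) / 2.0, (a0 - disc) / 2.0]
    for r in roots:
        if abs(r) > 1.0 + tol:
            return False                  # violates (i)
        if abs(abs(r) - 1.0) <= tol:
            # (ii): a root on the unit circle must be simple,
            # i.e. rho'(r) = 2r - a_0 must be nonzero.
            if abs(2.0 * r - a0) <= tol:
                return False
    return True

print(satisfies_root_condition(0.0, 1.0))   # roots +1, -1: True
print(satisfies_root_condition(3.0, -2.0))  # roots 1, 2: False, violates (i)
print(satisfies_root_condition(2.0, -1.0))  # double root 1: False, violates (ii)
```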

$(\Rightarrow)$

Assume the method is convergent but does not satisfy the root condition. This assumption will be shown to be contradicted by the initial value problem $\begin{cases} y ' = 0 \\ y(0) = 0 \end{cases}$.

The true solution of the given problem is trivially $Y_{n} = 0$ for all $n \ge 0$.

Since $f = 0$, the numerical solution satisfies $\displaystyle y_{n+1} = \sum_{j=0}^{p} a_{j} y_{n - j}$. Choose starting values $y_{0} , \cdots , y_{p}$ such that $\eta (h) = \max_{0 \le n \le p} | Y_{n} - y_{ n} | = \max_{0 \le n \le p} | y_{ n} | \to 0$ as $h \to 0$. For some $0 \le j \le p$, consider the following two cases.

  • Case 1.
    If (i) fails, so that $|r_{j}| > 1$ for some $j$, then $y_n = h r_{j}^{n}$ solves the difference equation and satisfies $\eta (h) = \max_{0 \le n \le p} | y_{ n} | = h \left| r_{j} \right| ^{p} \to 0$ as $h \to 0$. However, since $N(h) \to \infty$ as $h \to 0$, $$\max_{x_{0} \le x_{n} \le b} | Y_{n} - y_{ n} | = h \left| r_{j} \right| ^{N (h) } \to \infty$$ Therefore, the method is not convergent.
  • Case 2.
    If (ii) fails, so that $|r_{j}| = 1$ and $\rho '(r_{j}) = 0$ for some $j$, then $r_{j}$ is a repeated root of the characteristic equation, and so the difference equation has the particular solution $n r_{j}^{n}$. Then $y_n = h n r_{j}^{n}$ satisfies $\eta (h) = \max_{0 \le n \le p} | y_{ n} | = h p \left| r_{j} \right| ^{p} = hp \to 0$ as $h \to 0$. However, since $h N(h) \to b - x_{0}$ as $h \to 0$, $$\max_{x_{0} \le x_{n} \le b} | Y_{n} - y_{ n} | = h N(h) \left| r_{j} \right| ^{N (h)} = h N(h) \to b - x_{0} \ne 0$$ Therefore, the method is not convergent.
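
Case 1 can be observed numerically. A sketch assuming the two-step method $y_{n+1} = 3 y_{n} - 2 y_{n-1}$, whose $\rho (r) = (r-1)(r-2)$ has the root $2 > 1$; the $b_{j}$ can be chosen to make it consistent, but they drop out here since $f = 0$:

```python
# Sketch of Case 1: y_{n+1} = 3 y_n - 2 y_{n-1} (an illustrative
# method violating (i)) applied to y' = 0, y(0) = 0 on [0, b].
# True solution: Y_n = 0. Starting values of size h (eta(h) = h -> 0)
# are amplified like h * 2^n, so the maximum error blows up.
def run(h, b=1.0):
    N = round(b / h)                 # N(h) grows as h -> 0
    y_prev, y = 0.0, h               # y_0 = 0, y_1 = h
    worst = abs(y)
    for _ in range(2, N + 1):
        y_prev, y = y, 3.0 * y - 2.0 * y_prev
        worst = max(worst, abs(y))
    return worst                     # max |Y_n - y_n| = max |y_n|

for h in (0.1, 0.05, 0.025):
    print(h, run(h))   # grows without bound even though eta(h) -> 0
```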

$(\Leftarrow)$ The original proof is overly complicated with many logical leaps².

If we define $e_{n} := Y_{n} - y_{n}$, then finding $Y = y + e$ can be viewed as solving a nonhomogeneous linear difference equation.

Conversely, $y = Y - e$ satisfies a homogeneous linear difference equation.
$$y_{n+1} = \sum_{j=0}^{p} a_{j} y_{n-j} + h \sum_{j = -1}^{p} b_{j} f (x_{n-j} , y_{n-j} ) \tag{1}$$

$$Y_{n+1} = \sum_{j=0}^{p} a_{j} Y_{n-j} + h \sum_{j = -1}^{p} b_{j} f (x_{n-j} , Y_{n-j} ) \tag{2}$$
Subtracting $(1)$ from $(2)$ gives
$$e_{n+1} = \sum_{j=0}^{p} a_{j} e_{n-j} + h \sum_{j= -1}^{p} b_{j} \left[ f(x_{n-j} , Y_{n- j }) - f(x_{n-j} , y_{n- j }) \right]$$
Writing $Y_{n} = d_{0} s_{0}^{n} + d_{1} s_{1}^{n} + \cdots + d_{p} s_{p}^{n}$, the error takes the form
$$e_{n} = g_{0} ( r_{0} + s_{0} )^n + g_{1} ( r_{1} + s_{1} )^n + \cdots + g_{p} ( r_{p} + s_{p} )^n$$
Since the given method satisfies the root condition, for $N \ge p$ the root condition also holds for $( r_{0} + s_{0} ) , \cdots , ( r_{p} + s_{p} )$, so
$$\begin{align*} | e_{N} | & \le | g_{0} | | r_{0} + s_{0} |^N + | g_{1} | | r_{1} + s_{1} | ^N + \cdots + | g_{p} | | r_{p} + s_{p} |^N \\ & \le \max_{0 \le n \le p} | Y_{ n} - y_{n} | \end{align*}$$
Assuming $\displaystyle \max_{0 \le n \le p} | Y_{ n} - y_{n} | \to 0$ as $h \to 0$, we must have $e_{N} \to 0$, so
$$\max_{x_{0} \le x_{n} \le b} | Y_{n} - y_{ n} | \to 0$$
and thus the method is convergent.
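
The converse direction can likewise be observed numerically. A sketch assuming the midpoint (leapfrog) method $y_{n+1} = y_{n-1} + 2 h f(x_{n}, y_{n})$, which is consistent and satisfies the root condition ($\rho (r) = r^{2} - 1$, simple roots $\pm 1$), applied to an illustrative test problem not from the text:

```python
import math

# Sketch: the midpoint method y_{n+1} = y_{n-1} + 2 h f(x_n, y_n),
# consistent and root-condition-satisfying, on y' = y, y(0) = 1,
# over [0, 1], with the exact value as the extra starting value.
def midpoint_max_error(h, b=1.0):
    N = round(b / h)
    y_prev, y = 1.0, math.exp(h)     # y_0 = Y_0, y_1 = Y_1: eta(h) = 0
    worst = 0.0
    for n in range(2, N + 1):
        y_prev, y = y, y_prev + 2.0 * h * y
        worst = max(worst, abs(math.exp(n * h) - y))
    return worst

for h in (0.1, 0.01, 0.001):
    print(h, midpoint_max_error(h))  # max error -> 0 as h -> 0
```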


  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p401~403. ↩︎

  2. Isaacson. (2012). Analysis of Numerical Methods: p405~417. https://www.researchgate.net/file.PostFileLoader.html?id=56c583ac5e9d97577f8b458e&assetKey=AS:330399868833792@1455784875579 ↩︎