Convergence and Root Condition of Consistent Multistep Methods
Theorem
If a multistep method is consistent, then the method is convergent ⟺ the method satisfies the root condition.
Explanation
Partition the closed interval $[x_{0}, b]$ into node points with spacing $h$, written $x_{0} \le x_{1} \le \cdots \le x_{N(h)-1} \le x_{N(h)} \le b$, where $N(h)$ denotes the index of the last node point and depends on $h$.
The method is said to be convergent if $\eta (h) := \max\limits_{0 \le n \le p} | Y_{n} - y_{n} | \to 0$ as $h \to 0$ implies $\max\limits_{x_{0} \le x_{n} \le b} | Y_{n} - y_{n} | \to 0$ as $h \to 0$.
Consistency of Multistep Methods: For the initial value problem
$$ \begin{cases} y ' = f(x, y) \\ \left( y(x_{0}), \cdots, y(x_{p}) \right) = \left( Y_{0}, \cdots, Y_{p} \right) \end{cases} $$
the multistep method
$$ y_{n+1} = \sum_{j=0}^{p} a_{j} y_{n-j} + h \sum_{j=-1}^{p} b_{j} f \left( x_{n-j}, y_{n-j} \right) $$
is said to be consistent if it satisfies
$$ \begin{cases} \displaystyle \sum_{j=0}^{p} a_{j} = 1 \\ \displaystyle - \sum_{j=0}^{p} j a_{j} + \sum_{j=-1}^{p} b_{j} = 1 \end{cases} $$
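As a quick sanity check of these two conditions, here is a minimal Python sketch (my own illustration, not from the source) that verifies them for the explicit midpoint method $y_{n+1} = y_{n-1} + 2 h f(x_{n}, y_{n})$, i.e. $p = 1$, $a_{0} = 0$, $a_{1} = 1$, $b_{-1} = 0$, $b_{0} = 2$, $b_{1} = 0$.

```python
# Consistency check for a 2-step method (p = 1), using the explicit midpoint rule
# y_{n+1} = y_{n-1} + 2 h f(x_n, y_n) as an example.
a = [0.0, 1.0]        # a_0, ..., a_p
b = [0.0, 2.0, 0.0]   # b_{-1}, b_0, ..., b_p

p = len(a) - 1
cond1 = sum(a)                                          # should equal 1
cond2 = -sum(j * a[j] for j in range(p + 1)) + sum(b)   # should equal 1

print(cond1, cond2)   # 1.0 1.0 -> the method is consistent
```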
Root Condition: For a consistent multistep method, define $\rho (r) = r^{p+1} - \sum\limits_{j=0}^{p} a_{j} r^{p-j}$. If the roots $r_{0}, \cdots, r_{p}$ of the equation $\rho (r) = 0$ satisfy the following conditions, the multistep method is said to satisfy the root condition.
(i): $| r_{j} | \le 1$
(ii): $| r_{j} | = 1 \implies \rho ' (r_{j}) \ne 0$
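The root condition is also easy to test numerically. The sketch below (my own helper; the name satisfies_root_condition is made up) builds $\rho (r) = r^{p+1} - \sum_{j=0}^{p} a_{j} r^{p-j}$ from the coefficients $a_{j}$, finds its roots with numpy.roots, and checks (i) and (ii).

```python
import numpy as np

def satisfies_root_condition(a, tol=1e-10):
    """Check the root condition for rho(r) = r^(p+1) - sum_j a_j r^(p-j)."""
    p = len(a) - 1
    # Coefficients of rho in descending powers r^(p+1), r^p, ..., r^0.
    coeffs = np.concatenate(([1.0], -np.asarray(a, dtype=float)))
    for r in np.roots(coeffs):
        if abs(r) > 1 + tol:                                  # violates (i)
            return False
        if abs(abs(r) - 1) <= tol:                            # root on the unit circle:
            if abs(np.polyval(np.polyder(coeffs), r)) <= tol: # repeated -> violates (ii)
                return False
    return True

print(satisfies_root_condition([0.0, 1.0]))   # rho = r^2 - 1, roots +-1 (simple) -> True
print(satisfies_root_condition([-4.0, 5.0]))  # rho = r^2 + 4r - 5, root -5       -> False
```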
(⇒)
Assume the method is convergent but does not satisfy the root condition. We will derive a contradiction from the initial value problem
$$ \begin{cases} y ' = 0 \\ y(0) = 0 \end{cases} $$
The true solution of the given problem is trivially $Y_{n} = 0$ for all $n \ge 0$.
Since $f = 0$, the numerical solution satisfies $y_{n+1} = \sum\limits_{j=0}^{p} a_{j} y_{n-j}$. Suppose the starting values $y_{0}, \cdots, y_{p}$ satisfy
$$ \eta (h) = \max_{0 \le n \le p} | Y_{n} - y_{n} | = \max_{0 \le n \le p} | y_{n} | \to 0 $$
as $h \to 0$. For some $0 \le j \le p$, consider the following two cases.
Case 1. Suppose (i) fails, that is, $| r_{j} | > 1$ for some $j$. Since $r_{j}$ is a root of $\rho (r) = 0$, the sequence $y_{n} = h r_{j}^{n}$ solves the difference equation, and its starting values satisfy
$$ \eta (h) = \max_{0 \le n \le p} | y_{n} | = h | r_{j} |^{p} \to 0 $$
as $h \to 0$. However, since $N(h) \to \infty$ as $h \to 0$,
$$ \max_{x_{0} \le x_{n} \le b} | Y_{n} - y_{n} | = h | r_{j} |^{N(h)} \to \infty $$
Therefore, the method is not convergent.
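To see Case 1 concretely, one can take the consistent two-step method $y_{n+1} = -4 y_{n} + 5 y_{n-1} + h \left( 4 f_{n} + 2 f_{n-1} \right)$, whose $\rho (r) = r^{2} + 4r - 5$ has the root $r = -5$ outside the unit circle. The sketch below (my own illustration, not part of the source proof) applies it to $y ' = 0$ on $[0, 1]$ with starting values $y_{0} = 0$, $y_{1} = h$: the starting error shrinks with $h$, but the maximum error over the interval blows up.

```python
import numpy as np

def worst_error(h, b=1.0):
    """Apply y_{n+1} = -4 y_n + 5 y_{n-1} (f = 0) on [0, b] with y_0 = 0, y_1 = h."""
    n_steps = int(round(b / h))
    y = np.zeros(n_steps + 1)
    y[1] = h                      # starting error of size h, so eta(h) -> 0
    for n in range(1, n_steps):
        y[n + 1] = -4.0 * y[n] + 5.0 * y[n - 1]
    return np.max(np.abs(y))      # true solution is Y_n = 0, so this is the max error

for h in [0.1, 0.05, 0.025]:
    print(f"h = {h:<6} max error = {worst_error(h):.3e}")
# The starting error h shrinks, yet the max error over [0, 1] grows without bound.
```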
Case 2. Suppose (ii) fails, that is, $| r_{j} | = 1$ and $\rho ' (r_{j}) = 0$ for some $j$. Then $r_{j}$ is a repeated root of the characteristic equation, so the difference equation also admits the solution $n r_{j}^{n}$. Take $y_{n} = h n r_{j}^{n}$; its starting values satisfy
$$ \eta (h) = \max_{0 \le n \le p} | y_{n} | = h p | r_{j} |^{p} = hp \to 0 $$
as $h \to 0$. However, since $| r_{j} | = 1$ and $h N(h) \to b - x_{0} > 0$ as $h \to 0$,
$$ \max_{x_{0} \le x_{n} \le b} | Y_{n} - y_{n} | = h N(h) | r_{j} |^{N(h)} = h N(h) \not\to 0 $$
Therefore, the method is not convergent.
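Case 2 can be illustrated in the same way with $\rho (r) = r^{2} - 2r + 1 = (r - 1)^{2}$, i.e. $a_{0} = 2$, $a_{1} = -1$, which has the double root $r = 1$ on the unit circle. A minimal sketch (my own, under the same $y ' = 0$, $y_{0} = 0$, $y_{1} = h$ setup): the recursion reproduces exactly $y_{n} = nh$, so the starting error is $h \to 0$ while the maximum error over $[0, 1]$ stays near $1$.

```python
def worst_error_double_root(h, b=1.0):
    """y_{n+1} = 2 y_n - y_{n-1} (rho(r) = (r - 1)^2) applied to y' = 0 on [0, b]."""
    n_steps = int(round(b / h))
    y = [0.0, h]                       # y_0 = 0, y_1 = h: starting error of size h
    for n in range(1, n_steps):
        y.append(2.0 * y[n] - y[n - 1])
    return max(abs(v) for v in y)      # true solution is Y_n = 0

for h in [0.1, 0.01, 0.001]:
    print(h, worst_error_double_root(h))   # stays near b = 1.0 while h -> 0
```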
(⇐) The original proof2 is overly complicated and involves many logical leaps, so a simplified sketch is given here instead.
Consider the error $e_{n} := Y_{n} - y_{n}$. Then $Y = y + e$ can be regarded as solving a nonhomogeneous linear difference equation, and conversely $y = Y - e$ as solving a homogeneous one.
$$ y_{n+1} = \sum_{j=0}^{p} a_{j} y_{n-j} + h \sum_{j=-1}^{p} b_{j} f \left( x_{n-j}, y_{n-j} \right) \tag{1} $$
$$ Y_{n+1} = \sum_{j=0}^{p} a_{j} Y_{n-j} + h \sum_{j=-1}^{p} b_{j} f \left( x_{n-j}, Y_{n-j} \right) \tag{2} $$
Subtracting $(1)$ from $(2)$ gives
$$ e_{n+1} = \sum_{j=0}^{p} a_{j} e_{n-j} + h \sum_{j=-1}^{p} b_{j} \left[ f \left( x_{n-j}, Y_{n-j} \right) - f \left( x_{n-j}, y_{n-j} \right) \right] $$
Assuming $Y_{n} = d_{0} s_{0}^{n} + d_{1} s_{1}^{n} + \cdots + d_{p} s_{p}^{n}$,
$$ e_{n} = g_{0} \left( r_{0} + s_{0} \right)^{n} + g_{1} \left( r_{1} + s_{1} \right)^{n} + \cdots + g_{p} \left( r_{p} + s_{p} \right)^{n} $$
Since the given method satisfies the root condition, the root condition also holds for $\left( r_{0} + s_{0} \right), \cdots, \left( r_{p} + s_{p} \right)$, so for $N \ge p$
$$ | e_{N} | \le | g_{0} | | r_{0} + s_{0} |^{N} + | g_{1} | | r_{1} + s_{1} |^{N} + \cdots + | g_{p} | | r_{p} + s_{p} |^{N} $$
If $\max\limits_{0 \le n \le p} | Y_{n} - y_{n} | \to 0$ as $h \to 0$, then the coefficients $g_{0}, \cdots, g_{p}$, which are determined by the starting errors $e_{0}, \cdots, e_{p}$, also tend to $0$, and hence $e_{N} \to 0$. That is,
$$ \max_{x_{0} \le x_{n} \le b} | Y_{n} - y_{n} | \to 0 $$
and thus the method is convergent (a numerical check of this direction is given after the proof).
■
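As a numerical check of this direction, the sketch below (my own, not from the reference) uses the two-step Adams-Bashforth method $y_{n+1} = y_{n} + \frac{h}{2} \left( 3 f_{n} - f_{n-1} \right)$, which is consistent and whose $\rho (r) = r^{2} - r$ has only the simple roots $0$ and $1$, on the test problem $y ' = y$, $y(0) = 1$ over $[0, 1]$. The maximum error over the whole interval shrinks as $h \to 0$, as the theorem predicts.

```python
import numpy as np

def ab2_max_error(h, b=1.0):
    """Two-step Adams-Bashforth on y' = y, y(0) = 1; returns max |Y_n - y_n| on [0, b]."""
    f = lambda x, y: y
    n_steps = int(round(b / h))
    x = np.linspace(0.0, b, n_steps + 1)
    y = np.empty(n_steps + 1)
    y[0] = 1.0
    y[1] = np.exp(h)                  # start the 2-step method from an exact value
    for n in range(1, n_steps):
        y[n + 1] = y[n] + h * (1.5 * f(x[n], y[n]) - 0.5 * f(x[n - 1], y[n - 1]))
    return np.max(np.abs(np.exp(x) - y))

for h in [0.1, 0.05, 0.025, 0.0125]:
    print(f"h = {h:<7} max error = {ab2_max_error(h):.3e}")
# The max error over [0, 1] decreases (roughly like h^2): the method is convergent.
```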
Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p401~403. ↩︎