

Parasitic Solution

Definition 1

A **parasitic solution** refers to a term whose magnitude grows and whose sign changes as the method progresses. It is useful to imagine a sequence $a_{n} = 2^{-n} + (-2)^{n}$, which fails to converge because of the $(-2)^{n}$ term. The word 'parasitic' is quite intuitive and appropriate for such terms, since they hinder convergence.
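
As a quick sanity check, printing a few terms of this sequence makes the divergence visible (a minimal Python sketch):

```python
# The 2^{-n} part decays, but (-2)^n grows in magnitude and flips sign
# every step, so a_n cannot converge.
for n in range(9):
    a_n = 2 ** (-n) + (-2) ** n
    print(f"n={n}: a_n = {a_n}")
```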

Example: Dahlquist Problem

Consider, for example, the problem $\begin{cases} y' = \lambda y \\ y(0) = 1 \end{cases}$, whose exact solution is $Y = e^{\lambda x}$. However, if we need concrete values, numerical methods must be considered. Let's use the midpoint method for the calculation.

**Midpoint method**: Given an initial value problem $\begin{cases} y' = f(x,y) \\ (y(x_{0}), y(x_{1})) = (Y_{0}, Y_{1}) \end{cases}$ for a continuous function $f$ defined on $D \subset \mathbb{R}^{2}$, suppose the interval $(a,b)$ is divided into nodes $a \le x_{0} < x_{1} < \cdots < x_{n} < \cdots < x_{N} \le b$. In particular, for a sufficiently small $h > 0$, let $x_{j} = x_{0} + jh$; then, for the initial values $y_{0} = Y_{0}$ and $y_{1} = Y_{1}$:
$$ y_{n+1} = y_{n-1} + 2hf(x_{n}, y_{n}) $$
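
As a minimal sketch of this scheme (the function name and interface are my own, not from the source):

```python
def midpoint_method(f, x0, Y0, Y1, h, N):
    """Two-step midpoint (leapfrog) scheme y_{n+1} = y_{n-1} + 2h f(x_n, y_n).

    Being a two-step method, it needs two starting values:
    Y0 = y(x0) and Y1 = y(x0 + h).
    Returns the nodes x_0, ..., x_N and the approximations y_0, ..., y_N.
    """
    xs = [x0 + j * h for j in range(N + 1)]
    ys = [Y0, Y1]
    for n in range(1, N):
        ys.append(ys[n - 1] + 2 * h * f(xs[n], ys[n]))
    return xs, ys
```

Note that a two-step method cannot start from $y_{0}$ alone; in practice $Y_{1}$ is produced by a one-step method such as Euler's, or taken from the exact solution when it is known.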

Applying the midpoint method to the problem gives:

$$ y_{n+1} = y_{n-1} + 2h \lambda y_{n} $$

Treating this like a second-order linear homogeneous difference equation and assuming $y_{n} = r^{n}$, we get:

$$ r^{n+1} = r^{n-1} + 2h \lambda r^{n} $$

Dividing both sides by $r^{n-1}$ and rearranging into a quadratic equation yields:

$$ r^{2} - 2h \lambda r - 1 = 0 $$

Solving via the quadratic formula gives the two roots:

$$ \begin{align*} r_{0} &= h \lambda + \sqrt{1 + h^{2} \lambda^{2}} \\ r_{1} &= h \lambda - \sqrt{1 + h^{2} \lambda^{2}} \end{align*} $$

The general solution, for some $\beta_{0}, \beta_{1}$, is:

$$ y_{n} = \beta_{0} r_{0}^{n} + \beta_{1} r_{1}^{n} $$

Substituting $n = 0, 1$ gives:

$$ \begin{cases} y_{0} = \beta_{0} + \beta_{1} \\ y_{1} = \beta_{0} r_{0} + \beta_{1} r_{1} \end{cases} $$

Meanwhile, since we already know the exact solution $Y = e^{\lambda x}$,

$$ \begin{cases} y_{0} = 1 = \beta_{0} + \beta_{1} \\ y_{1} = e^{\lambda h} = \beta_{0} r_{0} + \beta_{1} r_{1} \end{cases} $$

from which we can derive:

$$ \begin{cases} \displaystyle \beta_{0} = \frac{e^{\lambda h} - r_{1}}{2 \sqrt{1 + h^{2} \lambda^{2}}} \\ \displaystyle \beta_{1} = \frac{r_{0} - e^{\lambda h}}{2 \sqrt{1 + h^{2} \lambda^{2}}} \end{cases} $$

Expanding $e^{\lambda h}$ as a Maclaurin series gives $e^{\lambda h} = 1 + \lambda h + \frac{\lambda^{2} h^{2}}{2} + O(h^{3} \lambda^{3})$, so:

$$ \begin{align*} \beta_{0} &= \frac{e^{\lambda h} - r_{1}}{2 \sqrt{1 + h^{2} \lambda^{2}}} \\ &= \frac{1 + \lambda h + \frac{\lambda^{2} h^{2}}{2} + O(h^{3} \lambda^{3}) - h \lambda + \sqrt{1 + h^{2} \lambda^{2}}}{2 \sqrt{1 + h^{2} \lambda^{2}}} \\ &= \frac{1 + \frac{\lambda^{2} h^{2}}{2} - \sqrt{1 + h^{2} \lambda^{2}} + 2 \sqrt{1 + h^{2} \lambda^{2}} + O(h^{3} \lambda^{3})}{2 \sqrt{1 + h^{2} \lambda^{2}}} \\ &= 1 + \frac{1 + \frac{\lambda^{2} h^{2}}{2} - \sqrt{1 + h^{2} \lambda^{2}} + O(h^{3} \lambda^{3})}{2 \sqrt{1 + h^{2} \lambda^{2}}} \end{align*} $$

Likewise, expanding $\sqrt{1 + h^{2} \lambda^{2}}$ as a Maclaurin series gives $\sqrt{1 + h^{2} \lambda^{2}} = 1 + \frac{\lambda^{2} h^{2}}{2} + O(h^{4} \lambda^{4})$, so:

$$ \begin{align*} \beta_{0} &= 1 + \frac{1 + \frac{\lambda^{2} h^{2}}{2} - \sqrt{1 + h^{2} \lambda^{2}} + O(h^{3} \lambda^{3})}{2 \sqrt{1 + h^{2} \lambda^{2}}} \\ &= 1 + \frac{1 + \frac{\lambda^{2} h^{2}}{2} + O(h^{3} \lambda^{3}) - 1 - \frac{\lambda^{2} h^{2}}{2} + O(h^{4} \lambda^{4})}{2 \sqrt{1 + h^{2} \lambda^{2}}} \end{align*} $$

Similarly for $\beta_{1}$:

$$ \begin{align*} \beta_{1} &= \frac{r_{0} - e^{\lambda h}}{2 \sqrt{1 + h^{2} \lambda^{2}}} \\ &= \frac{h \lambda + \sqrt{1 + h^{2} \lambda^{2}} - e^{h \lambda}}{2 \sqrt{1 + h^{2} \lambda^{2}}} \\ &= \frac{h \lambda + 1 + \frac{\lambda^{2} h^{2}}{2} + O(h^{4} \lambda^{4}) - 1 - h \lambda - \frac{\lambda^{2} h^{2}}{2} - O(h^{3} \lambda^{3})}{2 \sqrt{1 + h^{2} \lambda^{2}}} \end{align*} $$
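
These expansions are easy to double-check symbolically; a minimal sketch using SymPy (assuming it is installed):

```python
import sympy as sp

h, lam = sp.symbols('h lambda')
s = sp.sqrt(1 + h**2 * lam**2)
r0 = h * lam + s   # dominant root
r1 = h * lam - s   # parasitic root

# Vieta: r0 * r1 = -1, so whenever 0 < r0 < 1 we must have |r1| > 1.
print(sp.simplify(r0 * r1))          # -1

beta0 = (sp.exp(lam * h) - r1) / (2 * s)
beta1 = (r0 - sp.exp(lam * h)) / (2 * s)
print(sp.series(beta0, h, 0, 3))     # 1 + O(h**3)
print(sp.series(beta1, h, 0, 3))     # O(h**3)
```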

**When Big O notation is in the denominator, it can be moved to the numerator**: for $a \ne 0$, $p > 0$, and $n \in \mathbb{N}$,

$$ \frac{1}{\sqrt[p]{a + O(h^{n})}} = \frac{1}{\sqrt[p]{a}} + O(h^{n}) $$

In summary, we get:

$$ \begin{align*} \beta_{0} &= 1 + O(h^{3} \lambda^{3}) \\ \beta_{1} &= O(h^{3} \lambda^{3}) \end{align*} $$

That is, as $h \to 0$ we have $\beta_{0} \to 1$ and $\beta_{1} \to 0$, so:

$$ y_{n} = \beta_{0} r_{0}^{n} + \beta_{1} r_{1}^{n} \to r_{0}^{n} $$

Now the question is what happens to this general solution when $h > 0$ is fixed and $n$ increases. If $\lambda > 0$, there is nothing to worry about, since $r_{0} > |r_{1}| > 0$ makes $\beta_{0} r_{0}^{n}$ grow much faster than $\beta_{1} r_{1}^{n}$. However, if $\lambda < 0$, the story changes: now $0 < r_{0} < 1$ and $r_{1} < -1$ (note that $r_{0} r_{1} = -1$), so $\beta_{1} r_{1}^{n}$ will eventually overpower $\beta_{0} r_{0}^{n}$ in magnitude while changing its sign as $n$ increases.
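
To see this concretely, here is a self-contained sketch for $\lambda = -5$ and $h = 0.1$ (the parameter choices are mine), seeding the second starting value with the exact solution:

```python
import math

lam, h, N = -5.0, 0.1, 40        # h*lambda = -0.5, so r0 ≈ 0.618 and r1 ≈ -1.618
f = lambda x, y: lam * y
xs = [j * h for j in range(N + 1)]
ys = [1.0, math.exp(lam * h)]    # y_0 = 1, y_1 taken from the exact solution
for n in range(1, N):
    ys.append(ys[n - 1] + 2 * h * f(xs[n], ys[n]))

for n in (0, 10, 20, 37, 38, 39, 40):
    print(f"x={xs[n]:.1f}  midpoint={ys[n]: .4e}  exact={math.exp(lam * xs[n]):.4e}")
```

The exact solution decays monotonically to zero, while the computed values grow with alternating sign, exactly as the $\beta_{1} r_{1}^{n}$ term predicts.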

At this point, we call $\beta_{1} r_{1}^{n}$ a **parasitic solution**, and because of this danger the midpoint method is said to have only **weak stability**. It is therefore crucial to verify mathematically that no problem arises when the sign of $\dfrac{\partial f(x, Y(x))}{\partial y}$ is negative.

[Figure: output of the midpoint method applied to $y' = x - y^{2}$, $y(0) = 0$]

For example, when an initial value problem like $\begin{cases} y' = x - y^{2} \\ y(0) = 0 \end{cases}$ is solved with the midpoint method, it seems to work fine at first, but before long the solution starts to fluctuate significantly. Indeed, here $\dfrac{\partial f}{\partial y} = -2y$, which is negative as soon as $y > 0$.
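
A minimal sketch reproducing this behavior (the step size, interval, and starting values are my own choices, not from the source):

```python
f = lambda x, y: x - y * y   # df/dy = -2y, negative as soon as y > 0
h, N = 0.1, 35
xs = [j * h for j in range(N + 1)]
ys = [0.0, 0.0]              # y(0) = 0; one Euler step also gives y_1 = 0
for n in range(1, N):
    ys.append(ys[n - 1] + 2 * h * f(xs[n], ys[n]))

# Early values track the smooth, sqrt(x)-like exact solution, while the
# tail oscillates with growing amplitude.
for n in range(0, N + 1, 5):
    print(f"x={xs[n]:.1f}  y={ys[n]: .4f}")
```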

Similarly, Milne's method

$$ y_{n+1} = y_{n-1} + \frac{h}{3} \left[ f(x_{n-1}, y_{n-1}) + 4f(x_{n}, y_{n}) + f(x_{n+1}, y_{n+1}) \right] $$

can also be shown to have weak stability. A test problem of the form $\begin{cases} y' = \lambda y \\ y(0) = 1 \end{cases}$, as used above, is referred to as the Dahlquist problem.
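
Since $f(x, y) = \lambda y$ is linear, the implicit Milne step can be solved for $y_{n+1}$ in closed form: with $a = h\lambda/3$, one gets $y_{n+1} = \left[ (1 + a) y_{n-1} + 4a y_{n} \right] / (1 - a)$. A minimal sketch with the same assumed parameters as before:

```python
import math

lam, h, N = -5.0, 0.1, 40
a = h * lam / 3
ys = [1.0, math.exp(lam * h)]    # starting values from the exact solution
for n in range(1, N):
    # y_{n+1} = y_{n-1} + (h/3)[lam*y_{n-1} + 4*lam*y_n + lam*y_{n+1}],
    # rearranged to isolate y_{n+1}
    ys.append(((1 + a) * ys[n - 1] + 4 * a * ys[n]) / (1 - a))

for n in (0, 10, 20, 38, 39, 40):
    print(f"x={n * h:.1f}  milne={ys[n]: .4e}  exact={math.exp(lam * n * h):.4e}")
```

Despite Milne's higher order of accuracy, for $\lambda < 0$ the computed values again begin to oscillate and grow instead of decaying.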


1. Atkinson. (1989). *An Introduction to Numerical Analysis* (2nd Edition): p364-365.