
Parasitic Solution

Definition 1

A Parasitic Solution refers to a term in a numerical solution whose magnitude grows and whose sign alternates as the method progresses. It is useful to imagine a sequence such as $a_{n} = 2^{-n} + (-2)^{n}$, which fails to converge because of the $(-2)^{n}$ term. The word 'parasitic' is quite apt for such terms, since they attach themselves to the computation and hinder convergence.
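Concretely, the first few terms of this sequence are
$$ a_{1} = -1.5 ,\quad a_{2} = 4.25 ,\quad a_{3} = -7.875 ,\quad a_{4} = 16.0625 ,\quad \cdots $$
the $2^{-n}$ part quickly becomes negligible while the $(-2)^{n}$ part dominates with alternating sign.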

Example: Dahlquist Problem

Consider, for example, $\begin{cases} y ' = \lambda y \\ y(0) = 1 \end{cases}$, whose exact solution is $Y = e^{ \lambda x}$. However, if we need concrete numerical values, a numerical method must be used. Let's carry out the calculation with the midpoint method.

Midpoint Method: Let $f$ be a continuous function defined on $D \subset \mathbb{R}^2$, and consider the initial value problem $\begin{cases} y ' = f(x,y) \\ ( y( x_{0} ) , y (x_{1} )) = (Y_{0} ,Y_{1} ) \end{cases}$. Suppose the interval $(a,b)$ is divided into nodes $a \le x_{0} < x_{1} < \cdots < x_{n} < \cdots < x_{N} \le b$, and in particular, for a sufficiently small $h > 0$, let $x_{j} = x_{0} + j h$. Then, starting from the initial values $y_{0} = Y_{0}$ and $y_{1} = Y_{1}$: $$ y_{n+1} = y_{n-1} + 2 h f ( x_{n} , y_{n} ) $$
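Here is a minimal Python sketch of this two-step scheme (the function name `midpoint_method`, and the choice of one Euler step to generate the second starting value $y_{1}$, are my own and not part of the source):

```python
import numpy as np

def midpoint_method(f, x0, y0, h, n_steps):
    """Two-step midpoint (leapfrog) rule: y_{n+1} = y_{n-1} + 2 h f(x_n, y_n)."""
    x = x0 + h * np.arange(n_steps + 1)
    y = np.zeros(n_steps + 1)
    y[0] = y0
    y[1] = y0 + h * f(x[0], y0)          # second starting value via one Euler step
    for n in range(1, n_steps):
        y[n + 1] = y[n - 1] + 2 * h * f(x[n], y[n])
    return x, y

# Dahlquist problem y' = lambda * y, y(0) = 1, with lambda = 1 and h = 0.01
lam = 1.0
x, y = midpoint_method(lambda x, y: lam * y, 0.0, 1.0, 0.01, 100)
print(y[-1], np.exp(lam * x[-1]))        # both should be close to e = 2.71828...
```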

Applying the midpoint method to this problem gives: $$ y_{n+1} = y_{n-1} + 2h \lambda y_{n} $$ Approaching it the same way one solves a second-order linear homogeneous differential equation, assume $y_{n} = r^{n}$: $$ r^{n+1} = r^{n-1} + 2 h \lambda r^{n} $$ Dividing both sides by $r^{n-1}$ and rearranging into a quadratic equation yields: $$ r^2 - 2h \lambda r - 1 = 0 $$ Solving via the quadratic formula results in: $$ r_{0} = h \lambda + \sqrt{ 1 + h^2 \lambda^2 } $$

$$ r_{1} = h \lambda - \sqrt{ 1 + h^2 \lambda^2 } $$

The general solution, for some $\beta_{0} , \beta_{1}$, is: $$ y_{n} = \beta_{0} r_{0}^{n} + \beta_{1} r_{1}^{n} $$

Substituting $n=0,1$ gives: $$ \begin{cases} y_{0} = \beta_{0} + \beta_{1} \\ y_{1} = \beta_{0} r_{0} + \beta_{1} r_{1} \end{cases} $$

Meanwhile, since we already know $Y = e^{ \lambda x}$ is the exact solution, $$ \begin{cases} y_{0} = 1 = \beta_{0} + \beta_{1} \\ y_{1} = e^{ \lambda h } = \beta_{0} r_{0} + \beta_{1} r_{1} \end{cases} $$ we can derive: $$ \begin{cases} \displaystyle \beta_{0} = {{e^{ \lambda h} - r_{1} } \over {2 \sqrt{ 1+ h^2 \lambda^2 } }} \\ \displaystyle \beta_{1} = {{r_{0} - e^{ \lambda h} } \over {2 \sqrt{ 1+ h^2 \lambda^2 } } } \end{cases} $$

Expanding $e^{\lambda h}$ with a Maclaurin expansion, $\displaystyle e^{\lambda h} = 1 + \lambda h + {{\lambda^2 h^2} \over {2}} + O (h^3 \lambda^3)$, so:

$$ \begin{align*} \beta_{0} =& {{e^{ \lambda h} - r_{1} } \over {2 \sqrt{ 1+ h^2 \lambda^2 } }} \\ =& {{1 + \lambda h + {{\lambda^2 h^2} \over {2}} + O (h^3 \lambda^3 ) - h \lambda + \sqrt{ 1 + h^2 \lambda^2 } } \over {2 \sqrt{ 1+ h^2 \lambda^2 } }} \\ =& {{1 + {{\lambda^2 h^2} \over {2}} - \sqrt{ 1 + h^2 \lambda^2 } + 2 \sqrt{ 1 + h^2 \lambda^2 } + O (h^3 \lambda^3 ) } \over {2 \sqrt{ 1+ h^2 \lambda^2 } }} \\ =& 1 + {{1 + {{\lambda^2 h^2} \over {2}} - \sqrt{ 1 + h^2 \lambda^2 } + O (h^3 \lambda^3 ) } \over {2 \sqrt{ 1+ h^2 \lambda^2 } }} \end{align*} $$

Likewise, expanding $\sqrt{ 1+ h^2 \lambda^2 }$ with a Maclaurin expansion, $\displaystyle \sqrt{ 1+ h^2 \lambda^2 } = 1 + {{\lambda^2 h^2} \over {2}} + O (h^4 \lambda^4)$, so:

$$ \begin{align*} \beta_{0} =& 1 + {{1 + {{\lambda^2 h^2} \over {2}} - \sqrt{ 1 + h^2 \lambda^2 } +O (h^3 \lambda^3 ) } \over {2 \sqrt{ 1+ h^2 \lambda^2 } }} \\ =& 1 + {{1 + {{\lambda^2 h^2} \over {2}} + O (h^3 \lambda^3 ) - 1 - {{\lambda^2 h^2} \over {2}} + O (h^4 \lambda^4) } \over {2 \sqrt{ 1+ h^2 \lambda^2 } }} \end{align*} $$

Similarly for $\beta_{1}$:

$$ \begin{align*} \beta_{1} =& {{ r_{0} - e^{ \lambda h} } \over {2 \sqrt{ 1+ h^2 \lambda^2 } }} \\ =& { { h \lambda + \sqrt{ 1+ h^2 \lambda^2 } - e^{h \lambda } } \over {2 \sqrt{ 1+ h^2 \lambda^2 } }} \\ =& {{ h \lambda + 1 + {{\lambda^2 h^2} \over {2}} + O (h^4 \lambda^4 ) - 1 - h \lambda - {{\lambda^2 h^2} \over {2}} - O (h^3 \lambda^3) } \over {2 \sqrt{ 1+ h^2 \lambda^2 } }} \end{align*} $$

When Big O notation appears in the denominator, it can be moved to the numerator: for $a \ne 0$, $p>0$, and $n \in \mathbb{N}$, we may use $\displaystyle {{1} \over { \sqrt[p]{a + O ( h^n ) } }} = {{1} \over { \sqrt[p]{a } }}+ O(h^n)$.
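Carrying these expansions through by hand is error-prone, so as a sanity check (my own addition, not part of the source derivation) the series for $\beta_{0}$ and $\beta_{1}$ can be verified symbolically, for instance with sympy:

```python
import sympy as sp

h, lam = sp.symbols('h lambda')
s = sp.sqrt(1 + h**2 * lam**2)
r0, r1 = h * lam + s, h * lam - s

beta0 = (sp.exp(lam * h) - r1) / (2 * s)   # beta_0 = (e^{lambda h} - r_1) / (2 sqrt(1 + h^2 lambda^2))
beta1 = (r0 - sp.exp(lam * h)) / (2 * s)   # beta_1 = (r_0 - e^{lambda h}) / (2 sqrt(1 + h^2 lambda^2))

# Expand about h = 0; everything beyond the constant term is O(h^3).
print(sp.series(beta0, h, 0, 3))   # 1 + O(h**3)
print(sp.series(beta1, h, 0, 3))   # O(h**3)
```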

In summary, we get: $$ \begin{align*} \beta_{0} =& 1+ O (h^{3} \lambda^{3} ) \\ \beta_{1} =& O (h^{3} \lambda^{3} ) \end{align*} $$ That is, as $h \to 0$ we have $\beta_{0} \to 1$ and $\beta_{1} \to 0$, so: $$ y_{n} = \beta_{0} r_{0}^{n} + \beta_{1} r_{1}^{n} \to r_{0}^{n} $$ Now the question is what happens to this general solution when $h$ is fixed and $n$ increases. If $\lambda > 0$, there is nothing to worry about, because $r_{0} > | r_{1} | > 0$ makes $\beta_{0} r_{0}^{n}$ grow much faster than $\beta_{1} r_{1}^{n}$. If $\lambda <0$, however, the story changes: now $0 < r_{0} < 1$ and $r_{1} < -1$, so $\beta_{1} r_{1}^{n}$ overpowers $\beta_{0} r_{0}^{n}$ in magnitude while flipping its sign as $n$ increases.
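To see this numerically, take for example $\lambda = -2$ and $h = 0.05$ (values chosen arbitrarily for illustration), so that $h\lambda = -0.1$, $r_{0} = -0.1 + \sqrt{1.01} \approx 0.905$, and $r_{1} = -0.1 - \sqrt{1.01} \approx -1.105$. Reusing the `midpoint_method` sketch from above:

```python
lam, h = -2.0, 0.05
x, y = midpoint_method(lambda x, y: lam * y, 0.0, 1.0, h, 100)

for n in range(0, 101, 5):
    print(f"x = {x[n]:4.2f}   midpoint y_n = {y[n]:10.4f}   exact = {np.exp(lam * x[n]):.6f}")
# The exact solution e^{lambda x} decays to 0, but the computed y_n eventually
# grows with alternating sign: the parasitic term beta_1 r_1^n (|r_1| > 1) takes over.
```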

At this point, $\beta_{1} r_{1}^{n}$ is called a Parasitic Solution, and because of this danger the midpoint method is said to have only Weak Stability. For a general problem $y ' = f(x,y)$, the quantity playing the role of $\lambda$ is $\displaystyle { {\partial f(x, Y(x)) } \over {\partial y}}$, obtained by linearizing $f$ about the true solution, so before using the method it is essential to check mathematically whether this partial derivative is negative.

(Figure: numerical output of the midpoint method applied to $y ' = x - y^2$, $y(0) = 0$.)

For example, when an initial value problem such as $\begin{cases} y ' = x - y^2 \\ y(0) = 0 \end{cases}$ is solved with the midpoint method, everything seems fine at first, but before long the computed solution starts to fluctuate significantly. Indeed, here $\displaystyle {{\partial f} \over {\partial y}} = -2y$, which is negative along the true solution for $x > 0$.
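A quick way to reproduce this behavior with the `midpoint_method` sketch from above (the step size $h = 0.1$ and the interval are my own choices):

```python
x, y = midpoint_method(lambda x, y: x - y**2, 0.0, 0.0, 0.1, 50)

for n in range(0, 51, 5):
    print(f"x = {x[n]:3.1f}   y_n = {y[n]:.6g}")
# Early values roughly track the true solution, but as x grows (and df/dy = -2y
# becomes more negative) the computed values begin to oscillate and, if the
# integration is continued, eventually blow up.
```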

Similarly, the Milne’s Method

$$ y_{n+1} = y_{n-1} + {{h} \over {3}} [ f(x_{n-1} , y_{n-1}) + f(x_{n} , y_{n})] + f(x_{n+1} , y_{n+1}) $$ can also be shown to have weak stability. The commonly used problem like $\begin{cases} y ' = \lambda y \\ y(0) = 1 \end{cases}$ is referred to as Dahlquist Problem.
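Since the Dahlquist problem is linear, the implicit term $f(x_{n+1}, y_{n+1}) = \lambda y_{n+1}$ can be solved for exactly, which makes a quick numerical check of this claim easy. A sketch (with $\lambda = -5$ and $h = 0.1$ chosen arbitrarily, and the exact value used for the second starting value):

```python
import numpy as np

lam, h, N = -5.0, 0.1, 60
a = h * lam / 3
y = [1.0, np.exp(lam * h)]          # y_0 = 1, y_1 taken from the exact solution
for n in range(1, N):
    # Milne's rule with f = lam * y, solved for y_{n+1}:
    # (1 - a) y_{n+1} = (1 + a) y_{n-1} + 4 a y_n,  where a = h * lam / 3
    y.append(((1 + a) * y[n - 1] + 4 * a * y[n]) / (1 - a))

for n in range(0, N + 1, 5):
    print(f"x = {n * h:3.1f}   Milne y_n = {y[n]: .6e}   exact = {np.exp(lam * n * h):.6e}")
# The exact solution decays monotonically, but the Milne values eventually start
# to alternate in sign and grow -- the same parasitic, weakly stable behavior.
```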


  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p364~365.