
Initial Value Variation and the Error in the Euler Method

Theorem

Let $f$ be defined on $[x_{0}, b] \times \mathbb{R}$, and suppose that $\displaystyle f_{y} (x,y) = {{ \partial f (x,y) } \over { \partial y }}$ and $\displaystyle f_{yy} (x,y) = {{ \partial^{2} f (x,y) } \over { \partial y^{2} }}$ are both continuous and bounded, so that the solution $Y(x)$ to the initial value problem $\begin{cases} y ' = f(x,y) \\ y( x_{0} ) = Y_{0} \end{cases}$ satisfies $Y \in C^{3} [ x_{0} , b ]$. Suppose the initial value $y_{h} (x_{0})$ satisfies $Y_{0} - y_{h} (x_{0}) = \delta_{0} h + O(h^{2})$. Then the error of the Euler method satisfies $$ Y(x_{n}) - y_{h} (x_{n}) = D(x_{n}) h + O(h^{2}) $$ where $D(x)$ is the solution to the linear initial value problem $$ \begin{cases} \displaystyle D ' (x) = f_{y} (x, Y(x)) D(x) + {{1} \over {2}} Y'' (x) \\ D(x_{0}) = \delta_{0} \end{cases} $$

Explanation

The statement that the initial value $y_{h} (x_{0})$ satisfies $Y_{0} - y_{h} (x_{0}) = \delta_{0} h + O(h^{2})$ means that the initial value may differ slightly from the true value. Moreover, the error arising from this difference can be reduced further if $D$ can be computed.
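As a concrete illustration, consider the model problem $y ' = -y$, $Y(0) = 1$ on $[0, 1]$, whose exact solution is $Y(x) = e^{-x}$. Here $f_{y} = -1$ and $Y''(x) = e^{-x}$, so with an exact initial value ($\delta_{0} = 0$) the linear problem for $D$ has the closed-form solution $D(x) = \frac{x}{2} e^{-x}$. The following Python sketch (the model problem and step sizes are my own choices, not from the text) checks that the Euler error at $x = 1$ is close to $h D(1)$:

```python
import math

# Model problem (illustrative choice): y' = -y, Y(0) = 1 on [0, 1],
# exact solution Y(x) = exp(-x). Then f_y = -1 and Y''(x) = exp(-x),
# and with delta_0 = 0 the linear IVP for D has the closed form
# D(x) = (x/2) * exp(-x).

def euler(f, x0, y0, h, n):
    """Plain Euler method: y_{k+1} = y_k + h * f(x_k, y_k)."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: -y
b = 1.0
for h in (0.1, 0.05, 0.025):
    yh = euler(f, 0.0, 1.0, h, round(b / h))
    err = math.exp(-b) - yh              # Y(b) - y_h(b)
    pred = h * (b / 2) * math.exp(-b)    # h * D(b)
    print(f"h={h:<6} error={err:.6e}  h*D(b)={pred:.6e}  ratio={err/pred:.4f}")
```

The ratio tends to $1$ as $h \to 0$, consistent with $Y(x_{n}) - y_{h}(x_{n}) = D(x_{n}) h + O(h^{2})$.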

Proof 1

Strategy: Assume the strong Lipschitz condition, decompose the error as $\epsilon_{n} = g_{n} + k_{n}$, and show that $k_{n} = O(h^{2})$. The proof may feel tedious because of the heavy formula manipulation, which is quite common in numerical analysis. It is not a beautiful proof, but becoming familiar with this style of argument is worthwhile in itself.

Let $x_{i} := x_{0} + ih$; in particular, $x_{n+1} - x_{n} = h$. For convenience, write $$ Y(x_{n}) := Y_{n} $$ $$ y_{h}(x_{n}) := y_{n} $$ and denote the error at the $n$th step by $$ \epsilon_{n} := Y(x_{n}) - y_{h} (x_{n}) $$


Part 1.

Expanding $Y(x_{n+1})$ in a Taylor series about $x_{n}$ up to third order, for some $x_{n} \le \xi_{n} \le x_{n+1}$, $$ Y_{n+1} = Y_{n} + h Y_{n} ' + {{h^2} \over {2}} Y'' ( x_{n} ) + {{h^3} \over {6}} Y''' ( \xi_{n} ) $$ Subtracting the Euler step $y_{n+1} = y_{n} + h f(x_{n}, y_{n})$ and using $Y_{n} ' = f(x_{n}, Y_{n})$ gives the following. $$ \begin{align} \displaystyle \epsilon_{n+1} = \epsilon_{n} + h ( f( x_{n} , Y_{n} ) - f (x_{n} , y_{n}) ) + {{h^2} \over {2}} Y'' ( x_{n} ) + {{h^3} \over {6}} Y''' ( \xi_{n} ) \end{align} $$ Viewing $f(x_{n} , y_{n})$ in $(1)$ as a function of $y_{n}$ and expanding about $Y_{n}$, for some $\zeta_{n} \in \mathscr{H} \left\{ y_{n} , Y_{n} \right\}$, $$ f(x_{n} , y_{n} ) = f(x_{n} , Y_{n} ) + (y_{n } - Y_{n}) f_{y} (x_{n} , Y_{n} ) + {{1} \over {2}} ( y_{n} - Y_{n} )^{2} f_{yy} (x_{n} , \zeta_{n} ) $$ Since $y_{n} - Y_{n} = - \epsilon_{n}$, $$ \epsilon_{n+1} = \epsilon_{n} + h \left( \epsilon_{ n } f_{y} (x_{n} , Y_{n} ) - {{1} \over {2}} \epsilon_{n}^{2} f_{yy} (x_{n} , \zeta_{n} ) \right) + {{h^2} \over {2}} Y'' ( x_{n} ) + {{h^3} \over {6}} Y''' ( \xi_{n} ) $$ Rearranging, $$ \epsilon_{n+1} = [ 1 + h f_{y} (x_{n} , Y_{n} ) ] \epsilon_{n} + {{h^2} \over {2}} Y'' ( x_{n} ) + {{h^3} \over {6}} Y''' ( \xi_{n} ) - {{1} \over {2}} h f_{yy} (x_{n} , \zeta_{n} ) \epsilon_{n}^{2} $$
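For the model problem $y ' = -y$ from the Explanation (where $f_{yy} = 0$, so the $\epsilon_{n}^{2}$ term vanishes), one can check numerically that the one-step residual $\epsilon_{n+1} - [1 + h f_{y}] \epsilon_{n} - \frac{h^{2}}{2} Y''(x_{n})$ shrinks like $h^{3}$. A minimal sketch; the starting point and the small perturbation of $y$ are arbitrary choices for illustration:

```python
import math

# One-step check of the recursion from Part 1 for y' = -y (f_y = -1):
# epsilon_{n+1} - (1 - h) * epsilon_n - (h^2/2) * Y''(x_n) should be O(h^3).

def residual(h):
    x, y = 0.3, math.exp(-0.3) * 1.001     # some point with a small error
    eps_n = math.exp(-x) - y
    y_next = y + h * (-y)                  # one Euler step
    eps_next = math.exp(-(x + h)) - y_next
    return eps_next - (1 - h) * eps_n - (h ** 2 / 2) * math.exp(-x)

for h in (0.1, 0.05, 0.025):
    print(f"h={h:<6} residual={residual(h):.3e}")   # shrinks roughly by 1/8 per halving
```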


Part 2.

Strong Lipschitz condition and error in the Euler method: $$\max_{ x_{0} \le x_{n} \le b } | Y(x_{n}) - y_{h} (x_{n}) | \le e^{( b - x_{0} ) K} | \epsilon_{0} | + \left[ {{ e^{(b - x_{0}) K } - 1 } \over {K}} \right] \tau (h)$$

By the lemma above, since $\epsilon_{0} = O(h)$ and the truncation error satisfies $\tau (h) = O(h)$, we have $\epsilon_{n} = O(h)$. Therefore $$ {{h^3} \over {6}} Y''' ( \xi_{n} ) - {{1} \over {2}} h f_{yy} (x_{n} , \zeta_{n} ) \epsilon_{n}^{2} = O (h^{3} ) $$ and, dropping this $O(h^{3})$ remainder, we can define $g_{n+1}$ as follows. $$ g_{n+1} := [ 1 + h f_{y} (x_{n} , Y_{n} ) ] g_{n} + {{h^2} \over {2}} Y'' ( x_{n} ) $$ Since $g_{0} = h \delta_{0}$ by assumption, we may write $g_{n} = h \delta_{n}$; dividing both sides by $h$ gives $$ \delta_{n+1} = \delta_{n} + h \left[ f_{y} (x_{n} , Y_{n} ) \delta_{n} + {{1} \over {2}} Y'' ( x_{n} ) \right] $$ Rearranging, $$ {{\delta_{n+1} - \delta_{n} } \over {h}} = f_{y} (x_{n} , Y_{n} ) \delta_{n} + {{1} \over {2}} Y'' ( x_{n} ) $$ which is exactly the Euler method applied to the linear initial value problem $\begin{cases} \displaystyle D ' (x) = f_{y} (x, Y(x) ) D(x) + {{1} \over {2}} Y'' (x) \\ D ( x_{0} ) = \delta_{0} \end{cases}$. Hence $\delta_{n}$ is the Euler approximation of $D(x_{n})$, i.e., $D(x_{n}) - \delta_{n} = O(h)$. Multiplying both sides by $h$ and using the definition of $g_{n}$ gives $$ g_{n} = D ( x_{n} ) h + O ( h^{2} ) $$
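Continuing the model problem from the Explanation, the recursion for $\delta_{n}$ can be run directly and compared with the exact $D(x_{n}) = \frac{x_{n}}{2} e^{-x_{n}}$; a minimal sketch, with $h$ and $n$ chosen arbitrarily:

```python
import math

# Same model problem as above: f_y(x, Y(x)) = -1, Y''(x) = exp(-x), delta_0 = 0.
# delta_n is the Euler iterate for D, so D(x_n) - delta_n = O(h).
h, n = 0.05, 20
x, delta = 0.0, 0.0
for _ in range(n):
    delta += h * (-delta + 0.5 * math.exp(-x))  # delta_{n+1} = delta_n + h[f_y*delta_n + Y''/2]
    x += h
print(f"delta_n = {delta:.6f}   D(x_n) = {(x / 2) * math.exp(-x):.6f}")
```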


Part 3.

Setting $k_{n} := \epsilon_{n} - g_{n}$ and subtracting the recursion for $g_{n}$ from that for $\epsilon_{n}$ gives $$ k_{n+1} = [ 1 + h f_{y} (x_{n} , Y_{n} ) ] k_{n} + O(h^{3}) $$ and since $f_{y}$ is bounded, for some $K \ge 0$, $$ | k_{n+1} | \le ( 1 + h K ) | k_{n} | + O(h^{3}) $$ Unfolding this recursion, the factor $(1 + hK)^{n} \le e^{(b - x_{0})K}$ stays bounded, $k_{0} = \epsilon_{0} - g_{0} = O(h^{2})$ by assumption, and the accumulated remainder consists of $n = O(1/h)$ terms of size $O(h^{3})$, so $$ |k_{n}| = O(h^{2}) + O(h^{2}) = O(h^{2}) $$ Therefore, $$ \begin{align*} \epsilon_{n} =& g_{n} + k_{n} \\ =& [ D (x_{n} ) h + O(h^{2}) ] + O(h^{2}) \\ =& D (x_{n} ) h + O(h^{2}) \end{align*} $$
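As suggested in the Explanation, once $D$ is known the leading error term can be removed: $y_{h}(x_{n}) + h D(x_{n}) = Y(x_{n}) + O(h^{2})$. A quick check on the same model problem, with the step size again an arbitrary choice:

```python
import math

# y' = -y on [0, 1] with exact D(x) = (x/2) exp(-x), as derived above.
h, n = 0.05, 20
x, y = 0.0, 1.0
for _ in range(n):
    y += h * (-y)                                 # plain Euler
    x += h
D = (x / 2) * math.exp(-x)
raw = abs(math.exp(-x) - y)                       # O(h) error of plain Euler
corrected = abs(math.exp(-x) - (y + h * D))       # O(h^2) after adding h*D(x_n)
print(f"raw = {raw:.2e}   corrected = {corrected:.2e}")
```

The corrected value is two orders of magnitude more accurate here, as the theorem predicts.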


  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): pp. 352–354. ↩︎