
Initial Value Variation and the Error in the Euler Method 📂Numerical Analysis


Theorem

Suppose $f$ is defined on $[x_{0}, b] \times \mathbb{R}$ and let $Y(x)$ be the solution to the initial value problem
$$\begin{cases} y ' = f(x,y) \\ y( x_{0} ) = Y_{0} \end{cases}$$
If $f_{y} (x,y) = \dfrac{ \partial f (x,y) }{ \partial y }$ and $f_{yy} (x,y) = \dfrac{ \partial^{2} f (x,y) }{ \partial y^{2} }$ are both continuous and bounded, then $Y \in C^{3} [ x_{0} , b ]$. Suppose further that the initial value $y_{h} (x_{0})$ satisfies $Y_{0} - y_{h} (x_{0}) = \delta_{0} h + O(h^{2})$. Then the error produced by the Euler method satisfies
$$Y(x_{n}) - y_{h}(x_{n}) = D(x_{n}) h + O(h^{2})$$
where $D(x)$ is the solution to the linear initial value problem
$$\begin{cases} D ' (x) = f_{y} (x, Y(x)) D(x) + \dfrac{1}{2} Y'' (x) \\ D(x_{0}) = \delta_{0} \end{cases}$$

Explanation

The assumption that the initial value $y_{h}(x_{0})$ satisfies $Y_{0} - y_{h}(x_{0}) = \delta_{0} h + O(h^{2})$ means that the initial value may differ slightly from the true value. Moreover, the error arising from this difference can be reduced further if one can compute $D$.
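As a concrete check of the theorem, here is a minimal sketch with a test problem of our own choosing (not one from the text): for $y' = -y$, $y(0) = 1$, the true solution is $Y(x) = e^{-x}$, so $f_{y} = -1$ and $Y''(x) = e^{-x}$. With an exact initial value ($\delta_{0} = 0$), the linear problem for $D$ has the closed-form solution $D(x) = \frac{1}{2} x e^{-x}$, and the theorem predicts $Y(x_{n}) - y_{h}(x_{n}) \approx h\, D(x_{n})$.

```python
import math

def euler(f, x0, y0, h, n):
    """Forward Euler: return the approximation y_h(x0 + n*h)."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

# Illustrative problem (our choice, not from the text): y' = -y, y(0) = 1,
# so Y(x) = exp(-x), f_y = -1, Y''(x) = exp(-x).  With delta_0 = 0 the
# linear IVP D' = -D + exp(-x)/2, D(0) = 0 has solution D(x) = x*exp(-x)/2.
f = lambda x, y: -y
D = lambda x: 0.5 * x * math.exp(-x)

b = 1.0
for h in [0.1, 0.05, 0.025]:
    err = math.exp(-b) - euler(f, 0.0, 1.0, h, round(b / h))
    print(f"h={h:<6} error={err:.3e}  h*D(b)={h * D(b):.3e}  ratio={err / (h * D(b)):.4f}")
```

As $h$ shrinks, the ratio of the actual error to $h\, D(b)$ approaches $1$, as the theorem predicts.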

Proof 1

Strategy: Assume a strong Lipschitz condition and introduce the quantities $\epsilon_{n}$, $g_{n}$, and $k_{n}$, showing along the way that $k_{n} = O(h^{2})$. The proof involves a great deal of dogged formula manipulation, which is quite common in numerical analysis. It may not be a beautiful proof, but becoming familiar with this style of argument is worthwhile.

Let $x_{i} := x_{0} + ih$, so that in particular $x_{n+1} - x_{n} = h$. For convenience, write
$$Y(x_{n}) := Y_{n} \qquad y(x_{n}) := y_{n}$$
and denote the error at the $n$th step by
$$\epsilon_{n} := Y(x_{n}) - y(x_{n})$$


Part 1.

Taking the third-order Taylor expansion of $Y(x_{n+1})$ about $x_{n}$, for some $x_{n} \le \xi_{n} \le x_{n+1}$,
$$Y_{n+1} = Y_{n} + h Y_{n}' + {{h^{2}} \over {2}} Y''(x_{n}) + {{h^{3}} \over {6}} Y'''(\xi_{n})$$
Subtracting the Euler step $y_{n+1} = y_{n} + h f(x_{n}, y_{n})$ gives
$$\epsilon_{n+1} = \epsilon_{n} + h \left( f(x_{n}, Y_{n}) - f(x_{n}, y_{n}) \right) + {{h^{2}} \over {2}} Y''(x_{n}) + {{h^{3}} \over {6}} Y'''(\xi_{n}) \tag{1}$$
Viewing $f(x_{n}, y_{n})$ in $(1)$ as a function of $y_{n}$ and Taylor expanding about $Y_{n}$, for some $\zeta_{n}$ between $y_{n}$ and $Y_{n}$,
$$f(x_{n}, y_{n}) = f(x_{n}, Y_{n}) + (y_{n} - Y_{n}) f_{y}(x_{n}, Y_{n}) + {{1} \over {2}} (y_{n} - Y_{n})^{2} f_{yy}(x_{n}, \zeta_{n})$$
Therefore, since $Y_{n} - y_{n} = \epsilon_{n}$,
$$\epsilon_{n+1} = \epsilon_{n} + h \left( \epsilon_{n} f_{y}(x_{n}, Y_{n}) - {{1} \over {2}} \epsilon_{n}^{2} f_{yy}(x_{n}, \zeta_{n}) \right) + {{h^{2}} \over {2}} Y''(x_{n}) + {{h^{3}} \over {6}} Y'''(\xi_{n})$$
Rearranging,
$$\epsilon_{n+1} = \left[ 1 + h f_{y}(x_{n}, Y_{n}) \right] \epsilon_{n} + {{h^{2}} \over {2}} Y''(x_{n}) + {{h^{3}} \over {6}} Y'''(\xi_{n}) - {{1} \over {2}} h f_{yy}(x_{n}, \zeta_{n}) \epsilon_{n}^{2}$$


Part 2.

Strong Lipschitz condition and error in the Euler method:
$$\max_{x_{0} \le x_{n} \le b} \left| Y(x_{n}) - y_{h}(x_{n}) \right| \le e^{(b - x_{0}) K} \left| \epsilon_{0} \right| + \left[ {{e^{(b - x_{0}) K} - 1} \over {K}} \right] \tau(h)$$

By the bound above, $\epsilon_{n} = O(h)$, so
$$ {{h^{3}} \over {6}} Y'''(\xi_{n}) - {{1} \over {2}} h f_{yy}(x_{n}, \zeta_{n}) \epsilon_{n}^{2} = O(h^{3}) $$
Since this $O(h^{3})$ term is small, we define $g_{n}$ by dropping it:
$$ g_{n+1} := \left[ 1 + h f_{y}(x_{n}, Y_{n}) \right] g_{n} + {{h^{2}} \over {2}} Y''(x_{n}) $$
Given $g_{0} = h \delta_{0}$ from the assumption, set $g_{n} = h \delta_{n}$; dividing both sides by $h$ gives
$$ \delta_{n+1} = \delta_{n} + h \left[ f_{y}(x_{n}, Y_{n}) \delta_{n} + {{1} \over {2}} Y''(x_{n}) \right] $$
Rearranging slightly,
$$ {{\delta_{n+1} - \delta_{n}} \over {h}} = f_{y}(x_{n}, Y_{n}) \delta_{n} + {{1} \over {2}} Y''(x_{n}) $$
The form of this equation is exactly one Euler step for the linear initial value problem
$$\begin{cases} D ' (x) = f_{y}(x, Y(x)) D(x) + \dfrac{1}{2} Y''(x) \\ D(x_{0}) = \delta_{0} \end{cases}$$
In other words, $\delta_{n}$ is the Euler approximation of $D(x_{n})$, so $D(x_{n}) - \delta_{n} = O(h)$. Multiplying both sides by $h$ and using the definition of $g_{n}$,
$$ g_{n} = D(x_{n}) h + O(h^{2}) $$
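To see Part 2 in action, here is a sketch with an illustrative problem of our own choosing (not from the text): for $y' = -y$ we have $f_{y} = -1$ and $Y''(x) = e^{-x}$, and with $\delta_{0} = 0$ the closed-form solution is $D(x) = \frac{1}{2} x e^{-x}$. Iterating the $\delta$ recursion is literally the Euler method applied to the $D$ equation, so $\delta_{n}$ approaches $D(x_{n})$ at rate $O(h)$:

```python
import math

# Illustrative problem (our choice, not from the text): y' = -y, so
# f_y = -1 and Y''(x) = exp(-x).  With delta_0 = 0 the linear IVP
# D' = -D + exp(-x)/2, D(0) = 0 has solution D(x) = x*exp(-x)/2.
D_exact = lambda x: 0.5 * x * math.exp(-x)

def delta_at(b, h, delta0=0.0):
    """Iterate delta_{n+1} = delta_n + h*(f_y*delta_n + Y''(x_n)/2)."""
    x, d = 0.0, delta0
    for _ in range(round(b / h)):
        d += h * (-d + 0.5 * math.exp(-x))
        x += h
    return d

for h in [0.1, 0.05, 0.025]:
    gap = abs(D_exact(1.0) - delta_at(1.0, h))
    print(f"h={h:<6} |D(1) - delta_n| = {gap:.5f}")  # shrinks like O(h)
```

Halving $h$ roughly halves the gap, confirming $D(x_{n}) - \delta_{n} = O(h)$.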


Part 3.

Set $k_{n} := \epsilon_{n} - g_{n}$. Subtracting the recursion for $g_{n}$ from that for $\epsilon_{n}$ gives
$$ k_{n+1} = \left[ 1 + h f_{y}(x_{n}, Y_{n}) \right] k_{n} + O(h^{3}) $$
and since $f_{y}$ is bounded, for some $K \ge 0$,
$$ \left| k_{n+1} \right| \le (1 + hK) \left| k_{n} \right| + O(h^{3}) $$
Note that $k_{0} = \epsilon_{0} - g_{0} = O(h^{2})$ by the assumption on the initial value. Unrolling the recursion over the $n = O(1/h)$ steps, the initial term contributes $O(h^{2})$ and the accumulated $O(h^{3})$ terms also contribute $O(h^{2})$, so
$$ \left| k_{n} \right| = O(h^{2}) $$
Therefore,
$$ \begin{align*} \epsilon_{n} =& g_{n} + k_{n} \\ =& \left[ D(x_{n}) h + O(h^{2}) \right] + O(h^{2}) \\ =& D(x_{n}) h + O(h^{2}) \end{align*} $$
which is the claimed result.
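The practical payoff, sketched with an illustrative problem of our own choosing (not from the text): since $\epsilon_{n} = D(x_{n}) h + O(h^{2})$, adding $h\, D(x_{n})$ back to the Euler value cancels the leading error term and leaves only an $O(h^{2})$ residual. For $y' = -y$, $y(0) = 1$ with exact initial data ($\delta_{0} = 0$), the correction term is $D(x) = \frac{1}{2} x e^{-x}$.

```python
import math

def euler(f, x0, y0, h, n):
    """Forward Euler: return the approximation y_h(x0 + n*h)."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

# Illustrative problem (our choice, not from the text): y' = -y, y(0) = 1,
# Y(x) = exp(-x).  With exact initial data (delta_0 = 0) the correction
# term is D(x) = x*exp(-x)/2, so y_h(x_n) + h*D(x_n) should be O(h^2)
# accurate even though plain Euler is only O(h).
D = lambda x: 0.5 * x * math.exp(-x)

b = 1.0
for h in [0.1, 0.05]:
    yh = euler(lambda x, y: -y, 0.0, 1.0, h, round(b / h))
    raw = math.exp(-b) - yh                       # O(h) error
    corrected = math.exp(-b) - (yh + h * D(b))    # O(h^2) residual
    print(f"h={h:<5} raw={raw:.2e}  corrected={corrected:.2e}")
```

Halving $h$ halves the raw error but quarters the corrected residual, the signature of an $O(h^{2})$ method.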


  1. Atkinson. (1989). *An Introduction to Numerical Analysis* (2nd Edition): p352~354. ↩︎