

Strong Lipschitz Condition and Error of the Euler Method

Theorem

Let’s assume that the solution $Y(x)$ to the initial value problem $\begin{cases} y ' = f(x,y) \\ y( x_{0} ) = Y_{0} \end{cases}$, where $f$ is defined on $[x_{0} , b] \times \mathbb{R}$, is twice differentiable on $[x_{0} , b]$. If $f$ satisfies the strong Lipschitz condition $$ |f(x,y_{1} ) - f(x,y_{2}) | \le K | y_{1} - y_{2} | $$ for all $x_{0} \le x \le b$, $y_{1} , y_{2} \in \mathbb{R}$, and some $K \ge 0$, then the solution $\left\{ y_{h} ( x_{ n } ) \ : \ x_{0} \le x_{n} \le b \right\}$ obtained by Euler’s method with step size $h$ satisfies $$ \max_{ x_{0 } \le x_{n} \le b } | Y(x_{n}) - y_{h} (x_{n}) | \le e^{( b - x_{0} ) K} | \epsilon_{0} | + \left[ {{ e^{(b- x_{0}) K } - 1 } \over {K}} \right] \tau (h) $$ where $\tau (h) = {{h} \over {2}} \left\| Y'' \right\|_{\infty}$ and $\epsilon_{0} = Y_{0} - y_{h} (x_{0 } )$.

See Also

Strong Lipschitz condition $\implies$ Lipschitz condition $\implies$ Local Lipschitz condition

Description

To sum up a lengthy discussion: there is a condition slightly stronger than the Lipschitz condition that governs the accuracy of solutions obtained by Euler’s method. Unlike the plain Lipschitz condition, which only assumes continuity, the strong Lipschitz setting also brings differentiability into play.
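
As a concrete illustration, here is a minimal sketch of Euler’s method, assuming a hypothetical test problem $y' = y$, $y(0) = 1$ on $[0, 2]$ with exact solution $Y(x) = e^{x}$; the test problem and all names in the code are illustrative choices, not part of the theorem.

```python
import numpy as np

def euler(f, x0, y0, b, h):
    """Euler's method y_{n+1} = y_n + h f(x_n, y_n) on the grid x_i = x0 + i*h."""
    xs = np.arange(x0, b + h / 2, h)        # nodes x_0, x_1, ..., up to b
    ys = np.empty_like(xs)
    ys[0] = y0
    for n in range(len(xs) - 1):
        ys[n + 1] = ys[n] + h * f(xs[n], ys[n])
    return xs, ys

# Hypothetical test problem: y' = y, y(0) = 1, exact solution Y(x) = e^x.
f = lambda x, y: y
for h in [0.1, 0.05, 0.025]:
    xs, ys = euler(f, 0.0, 1.0, 2.0, h)
    err = np.max(np.abs(np.exp(xs) - ys))
    print(f"h = {h:6.3f}   max error = {err:.5f}")  # error roughly halves with h: O(h)
```

Consistent with the bound above, halving $h$ halves $\tau(h)$ and hence roughly halves the observed maximum error.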

Proof 1

On the grid of nodes $x_{i} := x_{0} + ih$, so that

$$ x_{n+1} - x_{n} = h $$

abbreviate

$$ Y_{n} := Y(x_{n} ), \qquad y_{n} := y_{h}(x_{n}) $$

and denote the error at the $n$th step by $\epsilon_{n} := Y_{n} - y_{n}$.

Performing a second-order Taylor expansion of $Y(x_{n+1} )$ about $x_{n}$, for some $x_{n} \le \xi_{n} \le x_{n+1}$, $$ Y_{n+1} = Y_{n} + h Y'_{n} + {{h^2} \over {2}} Y'' ( \xi_{n} ) $$ Writing $\displaystyle \tau_{n} := {{h} \over {2}} Y'' ( \xi_{n} )$ for convenience, $$ Y_{n+1} = Y_{n} + h Y'_{n} + h \tau_{n} $$
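
This says that the error committed in a single Euler step taken from the exact value is $h \tau_{n} = {{h^2} \over {2}} Y'' (\xi_{n}) = O(h^{2})$. A quick numerical check of that scaling, again assuming the illustrative problem $y' = y$ with $Y(x) = e^{x}$:

```python
import numpy as np

# One Euler step from the exact value: Y(x0 + h) - [Y(x0) + h f(x0, Y(x0))] = (h^2/2) Y''(xi).
# For the illustrative problem y' = y, Y(x) = e^x, so err / h^2 should approach Y''(x0)/2 = 0.5.
x0 = 0.0
for h in [0.1, 0.05, 0.025]:
    one_step = np.exp(x0) + h * np.exp(x0)          # Y_n + h f(x_n, Y_n)
    err = abs(np.exp(x0 + h) - one_step)
    print(f"h = {h:6.3f}   step error = {err:.2e}   err / h^2 = {err / h**2:.4f}")
```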

Note that $$ \max_{n} | \tau_{n} | \le \tau (h) $$ Since $Y$ solves the differential equation, $Y'_{n} = f(x_{n}, Y_{n})$, so subtracting the Euler update $y_{n+1} = y_{n} + h f ( x_{n} , y_{n} )$ from the expansion gives $$ Y_{n+1} - y_{n+1} = Y_{n} - y_{n} + h ( f( x_{n} , Y_{n} ) - f (x_{n} , y_{n}) ) + h \tau_{n} $$ In terms of $\epsilon_{n}$, $$ \epsilon_{n+1} = \epsilon_{n} + h ( f( x_{n} , Y_{n} ) - f (x_{n} , y_{n}) ) + h \tau_{n} $$ Taking absolute values of both sides, $$ | \epsilon_{n+1} | \le | \epsilon_{n} | + h | f( x_{n} , Y_{n} ) - f (x_{n} , y_{n}) | + h \tau (h) $$ By the Lipschitz condition, $$ | \epsilon_{n+1} | \le | \epsilon_{n} | + h K | Y_{n} - y_{n} | + h \tau (h) $$ that is, $$ | \epsilon_{n+1} | \le (1 + hK ) | \epsilon_{n} | + h \tau (h) $$ Unrolling this recursion, $$ | \epsilon_{n} | \le (1 + hK )^n | \epsilon_{0} | + [ 1 + (1 + hK) + \cdots + (1 + hK)^{n-1} ] h \tau (h) $$ By the sum formula for a finite geometric series, $$ | \epsilon_{n} | \le (1 + hK )^n | \epsilon_{0} | + \left[ {{ (1+ hK)^{n} - 1} \over {hK }} \right] h \tau (h) $$
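
As a sanity check, the one-step inequality $| \epsilon_{n+1} | \le (1 + hK ) | \epsilon_{n} | + h \tau (h)$ can be verified numerically at every step. The sketch below assumes the same illustrative problem $y' = y$ on $[0, 2]$, where $K = 1$ and $\left\| Y'' \right\|_{\infty} = e^{2}$.

```python
import numpy as np

# Verify |eps_{n+1}| <= (1 + hK)|eps_n| + h*tau(h) at every step
# for the illustrative problem y' = y, y(0) = 1 on [0, 2] (K = 1, ||Y''||_inf = e^2).
x0, b, K, h = 0.0, 2.0, 1.0, 0.05
xs = np.arange(x0, b + h / 2, h)
ys = np.empty_like(xs)
ys[0] = 1.0
for n in range(len(xs) - 1):
    ys[n + 1] = ys[n] + h * ys[n]                   # Euler step for f(x, y) = y
eps = np.abs(np.exp(xs) - ys)                       # |eps_n| = |Y(x_n) - y_h(x_n)|
tau = (h / 2) * np.exp(b)                           # tau(h) = (h/2) ||Y''||_inf
print(bool(np.all(eps[1:] <= (1 + h * K) * eps[:-1] + h * tau)))  # expected: True
```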

Corollary of Bernoulli’s inequality: $(1 + x )^{ \alpha } \le e^{ \alpha x }$

By this corollary of Bernoulli’s inequality, $(1 + hK )^n \le e^{hKn}$, and since $x_{n} = x_{0} + nh \le b$ implies $nh \le b - x_{0}$, $$ | \epsilon_{n} | \le e^{( b - x_{0} ) K} | \epsilon_{0} | + \left[ {{ e^{(b- x_{0}) K } - 1 } \over {K}} \right] \tau (h) $$
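
The final bound can likewise be checked numerically. The sketch below compares the actual maximum error with the right-hand side for the illustrative problem $y' = y$, $y(0) = 1$ on $[0, 2]$, where $\epsilon_{0} = 0$, $K = 1$, and $\left\| Y'' \right\|_{\infty} = e^{2}$.

```python
import numpy as np

# Compare max|Y(x_n) - y_h(x_n)| against e^{(b-x0)K}|eps_0| + [(e^{(b-x0)K} - 1)/K] tau(h)
# for the illustrative problem y' = y, y(0) = 1 on [0, 2] (eps_0 = 0, K = 1, ||Y''||_inf = e^2).
x0, b, K = 0.0, 2.0, 1.0
for h in [0.1, 0.05, 0.025]:
    xs = np.arange(x0, b + h / 2, h)
    ys = np.empty_like(xs)
    ys[0] = 1.0
    for n in range(len(xs) - 1):
        ys[n + 1] = ys[n] + h * ys[n]               # Euler step for f(x, y) = y
    actual = np.max(np.abs(np.exp(xs) - ys))
    tau = (h / 2) * np.exp(b)                       # tau(h) = (h/2) ||Y''||_inf
    bound = (np.exp((b - x0) * K) - 1) / K * tau    # |eps_0| = 0, so only this term remains
    print(f"h = {h:6.3f}   actual = {actual:.4f}   bound = {bound:.4f}")
```

Both quantities shrink linearly in $h$, with the bound staying a comfortable margin above the actual error.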

Additional Conditions

Now, consider what happens when the condition $\displaystyle {{\partial f (x,y) } \over { \partial y }} \le 0$ is added to the inequality $\displaystyle | \epsilon_{n+1} | \le | \epsilon_{n} | + h K | Y_{n} - y_{n} | + h \tau (h)$ obtained from the Lipschitz condition. Written out, $$ | \epsilon_{n+1} | \le (1 + hK) | \epsilon_{n} | + {{h^2} \over {2}} | Y'' ( \xi_{n } ) | $$ By the mean value theorem, there exists $\zeta_{n} \in \mathscr{H} \left\{ y_{h} (x_{n} ) , Y ( x_{n} ) \right\}$ satisfying $$ {{f(x_{n} ,Y_{n} ) - f(x_{n} ,y_{n}) } \over {Y_{n} - y_{n}}} = {{ \partial f ( x_{n} , \zeta_{n} ) } \over { \partial y }} $$ so that $$ \epsilon_{n+1} = \left( 1+ h {{ \partial f ( x_{n} , \zeta_{n} ) } \over { \partial y }} \right) \epsilon_{n} + h \tau_{n} $$ If $h$ is small enough that $\displaystyle 1+ h {{ \partial f ( x_{n} , \zeta_{n} ) } \over { \partial y }} \ge -1$, then together with $\displaystyle {{\partial f } \over { \partial y }} \le 0$ this factor lies in $[-1 , 1]$, and $$ | \epsilon_{n+1} | \le | \epsilon_{n} | + {{h^2} \over {2}} | Y'' ( \xi_{n } ) | $$ Solving this inequality recursively as well, $$ | \epsilon_{n} | \le | \epsilon_{0} | + {{h^2} \over {2}} [ |Y'' ( \xi_{0 } ) | + \cdots + |Y'' ( \xi_{n-1 } ) | ] $$ and therefore $$ | \epsilon_{n} | \le | \epsilon_{0} | + {{h^2} \over {2}} n \left\| Y'' \right\|_{\infty} $$ Since $nh \le b - x_{0}$, $$ | \epsilon_{n} | \le | \epsilon_{0} | + {{h} \over {2}} \left\| Y'' \right\|_{\infty} ( b - x_{0}) $$ In other words, the condition $\displaystyle {{\partial f (x,y) } \over { \partial y }} \le 0$ turns an upper bound on the error that would have grown exponentially in $(b - x_{0})$ into one that grows only linearly. Fortunately, many problems arising from natural phenomena satisfy this assumption, which significantly reduces the error.
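
The contrast between the two bounds is easy to observe. The sketch below assumes the illustrative problem $y' = -y$, $y(0) = 1$, for which $\partial f / \partial y = -1 \le 0$ and $\left\| Y'' \right\|_{\infty} = 1$ on $[0, b]$, and compares the actual maximum error with the linear bound $\frac{h}{2} \left\| Y'' \right\|_{\infty} (b - x_{0})$ and the generic exponential bound.

```python
import numpy as np

# For y' = -y (df/dy = -1 <= 0), compare the actual max error with the linear bound
# (h/2)||Y''||_inf (b - x0) and the generic exponential bound [(e^{(b-x0)K} - 1)/K] tau(h).
h, K = 0.01, 1.0                                    # 1 + h*df/dy = 0.99 >= -1, so h is small enough
for b in [2.0, 5.0, 10.0]:
    xs = np.arange(0.0, b + h / 2, h)
    ys = np.empty_like(xs)
    ys[0] = 1.0
    for n in range(len(xs) - 1):
        ys[n + 1] = ys[n] - h * ys[n]               # Euler step for f(x, y) = -y
    actual = np.max(np.abs(np.exp(-xs) - ys))
    tau = (h / 2) * 1.0                             # ||Y''||_inf = max e^{-x} = 1 on [0, b]
    linear = tau * b                                # linear bound, with eps_0 = 0
    exponential = (np.exp(b * K) - 1) / K * tau     # generic exponential bound
    print(f"b = {b:5.1f}   actual = {actual:.2e}   linear = {linear:.2e}   exponential = {exponential:.2e}")
```

As $b$ grows, the exponential bound explodes while both the linear bound and the actual error stay small.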


  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p347. ↩︎