
Euler Method in Numerical Analysis

Method 1

Let $f$ be a continuous function defined on $D \subset \mathbb{R}^2$, and consider the initial value problem $\begin{cases} y ' = f(x,y) \\ y( x_{0} ) = Y_{0} \end{cases}$. Suppose the interval $(a,b)$ is divided into nodes $a \le x_{0} < x_{1} < \cdots < x_{n} < \cdots < x_{N} \le b$. In particular, for a sufficiently small $h > 0$, take $x_{j} = x_{0} + j h$. Then, starting from the initial value $y_{0} \simeq Y_{0}$, $$ y_{n+1} = y_{n} + h f ( x_{n} , y_{n} ) $$
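For instance, take $f(x,y) = y$ with $Y_{0} = 1$ and $h = 0.1$ (an illustrative example, not from the reference). Two Euler steps give $$ y_{1} = 1 + 0.1 \cdot 1 = 1.1 , \qquad y_{2} = 1.1 + 0.1 \cdot 1.1 = 1.21 $$ while the exact solution $Y(x) = e^{x}$ gives $Y(0.2) = e^{0.2} \approx 1.2214$, so the error after two steps is about $0.011$.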

Explanation

Euler’s method is conceptually a very simple approach, but it demonstrates the core ideas of numerical analysis. Of course, by today’s standards it has many shortcomings, but that also means there is much room for improvement. What matters, therefore, is not so much the Euler method itself as the process by which it is derived, which is worth understanding carefully.

Derivation

Taylor Expansion

Expanding $Y ( x_{n+1} )$ about $x_{n}$ with a Taylor expansion up to the second-order term gives $$ Y ( x_{n+1} ) = Y ( x_{n} ) + ( x_{n+1} - x_{n}) Y ' ( x_{n} ) + {{( x_{n+1} - x_{n})^2} \over {2}} Y'' ( \xi_{n} ) $$ and simplifying with $x_{n+1} - x_{n} = h$ yields $$ Y ( x_{n+1} ) = Y ( x_{n} ) + h Y ' ( x_{n} ) + {{h^2} \over {2}} Y'' ( \xi_{n} ) $$ Now, discarding the error term $\displaystyle {{h^2} \over {2}} Y'' ( \xi_{n} )$ and using $Y ' ( x_{n} ) = f ( x_{n} , Y ( x_{n} ) )$ together with the approximation $y_{n} \simeq Y ( x_{n} )$, we obtain $$ y_{n+1} = y_{n} + h f ( x_{n} , y_{n} ) $$

What is immediately clear from the derivation via Taylor expansion is that the error shrinks as $h$ decreases. Moreover, there is no particular reason why the Taylor approximation must stop at the second-order term, which suggests that raising the order of the expansion will reduce the error further.
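As a rough sketch of the standard error argument: each step discards a local error of order $h^{2}$, and reaching $b$ from $a$ takes about $N = (b-a)/h$ steps, so the accumulated error behaves like $$ N \cdot O(h^{2}) = {{b-a} \over {h}} \cdot O(h^{2}) = O(h) $$ which is why Euler’s method is called a first-order method.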

Numerical Integration

Integrating $Y ' (t) = f (t, Y(t))$ over $[x_{n} , x_{n+1}]$ gives $$ \int_{x_{n}}^{x_{n+1}} f(t,Y(t)) dt = Y(x_{n+1}) - Y(x_{n}) $$ (Figure: left-endpoint rectangle approximation of the integral.) This introduces some error, but approximating the integral numerically by $\displaystyle \int_{x_{n}}^{x_{n+1}} f(t,Y(t)) dt \simeq h f(x_{n} , Y(x_{n}) )$ yields $$ y_{n+1} - y_{n} = h f ( x_{n} , y_{n} ) $$

Thinking in terms of the rectangle rule, it is again clear that the error decreases as $h$ becomes smaller. As the figure shows, there is a noticeable gap between the true integral and the numerically computed value, and using the trapezoidal rule or Simpson’s rule instead would reduce this error; see the formula sketched just below.
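For reference, here is a minimal sketch of what the same derivation yields when the left-endpoint rectangle is replaced by the trapezoidal rule (the implicit trapezoidal method, not derived in this post): $$ y_{n+1} = y_{n} + {{h} \over {2}} \left[ f ( x_{n} , y_{n} ) + f ( x_{n+1} , y_{n+1} ) \right] $$ Since $y_{n+1}$ now appears on both sides, each step requires solving an equation, or first approximating the right-hand $y_{n+1}$ with an Euler step, as in Heun’s method.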

Implementation

Below is code written in R.

Euler <- function(f, Y_0, a, b, h = 10^(-3))
{
  Y <- t(Y_0)                 # first row: the initial value Y_0
  node <- seq(a, b, by = h)   # nodes x_0, x_1, ... spaced by h
  
  for(x in node)
  {
    # one Euler step from the last row: y_{n+1} = y_n + h f(x_n, y_n)
    Y <- rbind(Y, Y[nrow(Y), ] + h*f(x, Y[nrow(Y), ]))
  }
  
  return(Y)
}
 
f <- function(x, y) {y}                      # y' = y, Y(0) = 1; exact solution exp(x)
out <- Euler(f, seq(1, 1, len = 100), 0, 2)  # 100 identical copies of the scalar problem
out[,1]
 
g <- function(x, y) {1/(1 + x^2) - 2*y^2}    # y' = 1/(1+x^2) - 2y^2, Y(0) = 0; exact solution x/(1+x^2)
out <- Euler(g, seq(0, 0, len = 100), 0, 2, h = 0.2)
out[,1]
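As a usage sketch (not part of the reference), the first example can be checked against its exact solution $e^{x}$, assuming the Euler and f defined above; the variable names below are ad hoc.

h   <- 10^(-3)
out <- Euler(f, seq(1, 1, len = 100), 0, 2, h = h)
x   <- (0:(nrow(out) - 1)) * h     # x value matched to each row of out
max(abs(out[, 1] - exp(x)))        # maximum global error; shrinks roughly in proportion to h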

  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p341. ↩︎