
Midpoint Method

Methods

For the initial value problem $\begin{cases} y ' = f(x,y) \\ ( y( x_{0} ), y (x_{1}) ) = ( Y_{0} , Y_{1} ) \end{cases}$ given by a continuous function $f$ defined on $D \subset \mathbb{R}^2$, assume the interval $(a,b)$ is divided into nodes $a \le x_{0} < x_{1} < \cdots < x_{n} < \cdots < x_{N} \le b$. In particular, for sufficiently small $h > 0$, let $x_{j} = x_{0} + j h$. Then, with the starting values $y_{0} = Y_{0}$ and $y_{1} = Y_{1}$, the midpoint method is $$ y_{n+1} := y_{n-1} + 2 h f ( x_{n} , y_{n} ) $$
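As a minimal sketch of a single step of this recursion, take $y ' = y$ with $Y(0) = 1$, whose exact solution is $e^x$; the step size $h = 0.1$ and the exact seeding of $y_{1}$ are illustrative choices, not part of the method itself.

```r
# One midpoint step for y' = y, Y(0) = 1 (exact solution exp(x)).
# h and the exact seeding of y_1 are illustrative assumptions.
f  <- function(x, y) y
h  <- 0.1
y0 <- 1
y1 <- exp(h)                 # second starting value taken from the exact solution
y2 <- y0 + 2 * h * f(h, y1)  # y_{n+1} = y_{n-1} + 2 h f(x_n, y_n) with n = 1
y2                           # 1.22103..., close to the exact exp(0.2) = 1.22140...
```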

Derivation 1

Expanding $Y'(t)$ in a Taylor series about $x_{n}$, there exists $\xi_{n} \in [ x_{n-1} , x_{n+1} ]$ satisfying $$ Y'(t) = Y ' ( x_{n} ) + ( t - x_{n} ) Y'' (x_{n} ) + {{( t - x_{n} )^2} \over {2}} Y^{(3)} ( \xi_{n} ) $$ Taking the definite integral from $t= x_{n-1}$ to $x_{n+1}$ on both sides gives $$ \int_{x_{n-1} }^{ x_{n+1} } Y'(t) dt = \int_{x_{n-1} }^{ x_{n+1} } \left[ Y ' ( x_{n} ) + ( t - x_{n} ) Y'' (x_{n} ) + {{( t - x_{n} )^2} \over {2}} Y^{(3)} ( \xi_{n} ) \right] dt $$ Since $x_{n-1} = x_{n} - h$ and $x_{n+1} = x_{n} + h$, the linear term integrates to zero and $$ \int_{x_{n-1} }^{ x_{n+1} } Y'(t) dt = ( x_{n} + h - (x_{n} - h) ) Y ' ( x_{n} ) + {{ h^3 - ( - h^3 ) } \over {6}} Y^{(3)} ( \xi_{n} ) $$ By the fundamental theorem of calculus, we have $$ Y(x_{n+1}) - Y(x_{n-1})= 2h Y ' ( x_{n} ) + {{ h^3 } \over {3}} Y^{(3)} ( \xi_{n} ) $$ Since $Y' = f$, omitting $\displaystyle {{ h^3 } \over {3}} Y^{(3)} ( \xi_{n} )$ gives $$ y_{n+1} = y_{n-1} + 2 h f ( x_{n} , y_{n} ) $$
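The omitted remainder says the one-step error with exact data should shrink like $h^3$. A quick numerical check, again on the illustrative test problem $y ' = y$ at $x_{n} = 0$, confirms this: halving $h$ divides the one-step error by about $2^3 = 8$.

```r
# One-step error of the midpoint formula with exact data for y' = y at x_n = 0:
# Y(h) - [ Y(-h) + 2 h Y'(0) ], which the derivation predicts is (h^3 / 3) Y'''(xi).
step_error <- function(h) {
  exp(h) - (exp(-h) + 2 * h * exp(0))
}
e1 <- step_error(0.1)
e2 <- step_error(0.05)
e1 / e2  # about 8: halving h cuts the local error by 2^3
```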

Comparison with Euler’s Method

Error Analysis

Compared to Euler's method, using two data points increases the computation somewhat, but the error can be expected to be smaller. Even under favorable conditions, the error of Euler's method is only bounded by $O (h)$.

On the other hand, for the midpoint method, being a multistep method, the error is bounded as: $$ \max_{ x_{0} \le x_{n} \le b} | Y (x_{n} ) - y_{h} (x_{n} ) | \le e^{2K(b-x_{0})} \eta (h) + \left[ {{ e^{2K (b - x_{0} ) } - 1 } \over {2K}} \right] \left[ {{1} \over {3}} h^2 | Y^{(3)} |_{\infty} \right] $$ Here the initial error is $\displaystyle \eta (h) = \max \left\{ |Y_{0} - y_{0}| , | Y(x_{1}) - y_{h} (x_{1}) | \right\}$, but since $y_{0} = Y_{0}$, it reduces to $\eta (h) = | Y(x_{1}) - y_{h} (x_{1}) |$. Expanding $Y$ in a Taylor series gives $$ Y (x_{1}) = Y(x_{0}) + h Y ' (x_{0}) + {{h^{2} } \over {2}} Y'' (\zeta) = y_{0} + h f(x_{0} , y_{0}) + {{h^{2} } \over {2}} Y'' (\zeta) $$ Since one Euler step gives $y_{0} + h f(x_{0} , y_{0}) = y_{1}$, $$ Y(x_{1}) - y_{h} (x_{1}) = {{h^{2} } \over {2}} Y'' (\zeta) = O(h^{2} ) $$ Consequently, the midpoint method can be said to be more accurate than Euler's method, since its error is bounded by $O (h^2)$.
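This $O(h^2)$ behavior is easy to observe numerically. The sketch below uses the illustrative test problem $y ' = -y$, $Y(0) = 1$, whose exact solution is $e^{-x}$, with $y_{1}$ seeded from the exact solution; halving $h$ should divide the maximum error by about 4.

```r
# Maximum global error of the midpoint method on y' = -y over [0, 1].
max_err <- function(h) {
  x <- seq(0, 1, by = h)
  y <- numeric(length(x))
  y[1] <- 1          # y_0 = Y_0
  y[2] <- exp(-h)    # y_1 seeded from the exact solution (illustrative choice)
  for (n in 2:(length(x) - 1)) {
    y[n + 1] <- y[n - 1] + 2 * h * (-y[n])  # f(x, y) = -y
  }
  max(abs(y - exp(-x)))
}
max_err(0.01) / max_err(0.005)  # roughly 4, consistent with O(h^2)
```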

Stability Issues

However, the midpoint method has a critical flaw. Below is the code implementing the midpoint method in R.

Midpoint <- function(f, Y_0, a, b, Y_1 = NA, h = 10^(-3))
{
  Y <- t(Y_0)
  if (is.na(Y_1))
  {
    # no second starting value supplied: generate y_1 with one Euler step
    Y <- rbind(Y, Y[1, ] + h * f(a, Y[1, ]))
  }
  else
  {
    Y <- rbind(Y, Y_1)
  }
  node <- seq(a, b, by = h)
  
  for (x in node[-1])
  {
    # midpoint step: y_{n+1} = y_{n-1} + 2 h f(x_n, y_n)
    Y <- rbind(Y, Y[nrow(Y) - 1, ] + 2 * h * f(x, Y[nrow(Y), ]))
  }
  
  return(Y)
}
 
f <- function(x, y) { 1/(1 + x^2) - 2*y^2 }
out <- Midpoint(f, seq(0, 0, len = 1), 0, 2, h = 0.1)
out[, 1]
 
g <- function(x, y) { x - y^2 }
out <- Midpoint(g, seq(0, 0, len = 1), 0, 3.25, h = 0.25)
out[, 1]

When run, unlike with f(), g() fails to yield a proper solution, exposing a stability problem in the midpoint method.
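The root of the problem can be seen on the linear test equation $y ' = \lambda y$ with $\lambda < 0$ (an illustrative choice): the two-term recursion has a second, parasitic characteristic root of magnitude greater than one, so a sign-alternating error component eventually grows even though the true solution decays.

```r
# Midpoint method on y' = -5y, Y(0) = 1; the exact solution exp(-5x) decays,
# but the recursion y_{n+1} = y_{n-1} - 10 h y_n has characteristic roots
# -5h +- sqrt(1 + 25 h^2), one of which has magnitude > 1.
h <- 0.1
x <- seq(0, 6, by = h)
y <- numeric(length(x))
y[1] <- 1
y[2] <- exp(-5 * h)  # y_1 seeded from the exact solution (illustrative)
for (n in 2:(length(x) - 1)) {
  y[n + 1] <- y[n - 1] + 2 * h * (-5 * y[n])
}
tail(y)  # large sign-alternating values, while exp(-5 * 6) is essentially 0
```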



  1. Atkinson. (1989). An Introduction to Numerical Analysis(2nd Edition): p362. ↩︎