

Trapezoidal Method

Definition 1

Given an initial value problem $\begin{cases} y ' = f(x,y) \\ y( x_{0} ) = Y_{0} \end{cases}$ for a function $f$ continuous on $D \subset \mathbb{R}^2$, suppose the interval $(a,b)$ is divided into nodes $a \le x_{0} < x_{1} < \cdots < x_{n} < \cdots < x_{N} \le b$. In particular, for sufficiently small $h > 0$, set $x_{j} = x_{0} + j h$; then, starting from the initial value $y_{0} = Y_{0}$, $$ y_{n+1} = y_{n} + {{h} \over {2}} [ f ( x_{n} , y_{n} ) + f ( x_{n+1} , y_{n+1} ) ] $$

Explanation

Predictor-Corrector Algorithm

Just as the Midpoint Method outperforms the Euler Method by using more data, we can expect the Trapezoidal Method to improve performance by doing more computation per step, even though it is essentially a one-step method. The catch is that it is an implicit method: $y_{n+1}$ appears on both sides of the formula, so it cannot be solved for directly and typically requires iterating some equation-solving method. Naturally, this increases the amount of computation significantly, and despite being a one-step method, it can be slower.
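To make the implicit nature concrete, the single step below solves the trapezoidal equation for $y_{n+1}$ with R's root finder uniroot instead of fixed-point iteration. This is only a minimal sketch: the function name implicit_step and the bracketing interval $[y_{n} - 1, y_{n} + 1]$ are illustrative assumptions, not part of the method itself.

# One implicit step: solve z = y_n + (h/2)[f(x_n,y_n) + f(x_n + h, z)] for z
implicit_step <- function(f, x_n, y_n, h)
{
  g <- function(z) { z - y_n - (h/2)*( f(x_n, y_n) + f(x_n + h, z) ) }
  uniroot(g, lower = y_n - 1, upper = y_n + 1)$root   # bracket is a crude guess
}
 
# e.g. one step of y' = 1/(1+x^2) - 2y^2 from y(0) = 0 with h = 0.1
f <- function(x, y) { 1/(1 + x^2) - 2*y^2 }
implicit_step(f, 0, 0, 0.1)   # roughly 0.0985, near the true Y(0.1) = 0.0990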

First, we ‘guess’ a value $y_{n+1}^{(0)}$ close enough to $Y_{n+1}$ and compute $$ y_{n+1}^{(1)} = y_{n} + {{h} \over {2}} [ f ( x_{n} , y_{n} ) + f ( x_{n+1} , y_{n+1}^{(0)} ) ] $$ Here, ‘guessing’ means producing a reasonably good estimate $y_{n+1}^{(0)}$ one step ahead, for example with the Euler Method. The calculation is then repeated until sufficient accuracy is achieved, computing $$ y_{n+1}^{(j+1)} = y_{n} + {{h} \over {2}} [ f ( x_{n} , y_{n} ) + f ( x_{n+1} , y_{n+1}^{(j)} ) ] $$ The superscript $(j)$ of $y_{n+1}^{(j)}$ denotes the iteration count, and the iterate is expected to become more precise with each iteration. Subtracting $$ y_{n+1}^{(j+1)} = y_{n} + {{h} \over {2}} [ f ( x_{n} , y_{n} ) + f ( x_{n+1} , y_{n+1}^{(j)} ) ] $$ from both sides of $$ y_{n+1} = y_{n} + {{h} \over {2}} [ f ( x_{n} , y_{n} ) + f ( x_{n+1} , y_{n+1} ) ] $$ gives $$ y_{n+1} - y_{n+1}^{(j+1)} = {{h} \over {2}} [ f ( x_{n+1} , y_{n+1} ) - f ( x_{n+1} , y_{n+1}^{ (j) } ) ] $$ Assuming $f$ satisfies a Lipschitz condition with constant $K$, we obtain $$ | y_{n+1} - y_{n+1}^{(j+1)} | \le {{hK} \over {2}} | y_{n+1} - y_{n+1}^{(j)} | $$ This means $\displaystyle {{ h K } \over {2}}<1$ must be satisfied for convergence, and if $K$ is large, then $h$ must be chosen quite small.
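As a quick numerical check of this contraction (a hedged illustration, not from the text): take $f(x,y) = -5y$, so $K = 5$, and $h = 0.1$, giving $hK/2 = 0.25$; each corrector sweep should then shrink the change between iterates by about a factor of $0.25$.

# Successive corrector sweeps for f(x,y) = -5y, starting from an Euler predictor
f   <- function(x, y) { -5*y }
h   <- 0.1; x_n <- 0; y_n <- 1
y_j <- y_n + h*f(x_n, y_n)                   # predictor y^(0) = 0.5
for(j in 1:5)
{
  y_new <- y_n + (h/2)*( f(x_n, y_n) + f(x_n + h, y_j) )
  cat(j, y_new, abs(y_new - y_j), "\n")      # differences drop by ~0.25 per sweep
  y_j <- y_new
}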

Additionally, since the Trapezoidal Method is derived from $$ Y_{n+1} = Y_{n} + {{h} \over {2}} [ f ( x_{n} , Y_{n} ) + f ( x_{n+1} , Y_{n+1} ) ] - {{h^3 } \over {12}} Y^{(3)} ( \xi_{n}) $$ and thus carries a truncation error of order $O( h^3 )$, the iteration should be continued until $|y_{n+1} - y_{n+1}^{(j)} |$ is smaller than that, say on the order of $O(h^4 )$. Reading the explanation above, one might think that, coded this way, the method is no different from the Euler Method, and indeed it contains it. It is more accurate, however, to view the Euler step as being corrected by the Trapezoidal Method. Hence, this algorithm is called a Predictor-Corrector Algorithm: the role of the Predictor is played by the Euler Method, and that of the Corrector by the Trapezoidal Method. The method predicts an appropriate value and then corrects it to find the numerical solution.
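In practice this suggests a stopping rule for the corrector loop: iterate until the change between successive sweeps falls below a tolerance on the order of $h^4$. The following is only a sketch under that assumption; the names trapezoid_correct, tol, and max_iter are illustrative.

# Corrector iteration with an h^4-scale stopping tolerance
trapezoid_correct <- function(f, x_n, y_n, y_guess, h, tol = h^4, max_iter = 25)
{
  for(j in 1:max_iter)
  {
    y_new <- y_n + (h/2)*( f(x_n, y_n) + f(x_n + h, y_guess) )
    if(abs(y_new - y_guess) < tol) break     # converged to within tol
    y_guess <- y_new
  }
  y_new
}
 
# e.g. converges to the fixed point 0.6 for f(x,y) = -5y, y_n = 1, h = 0.1
trapezoid_correct(function(x, y) { -5*y }, 0, 1, 0.5, 0.1)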

Implementation

[Screenshot: errors of the Trapezoidal and Euler Methods compared to the true solution]

The screenshot above shows the result of solving the initial value problem $\begin{cases} \displaystyle y ' = {{1} \over {1 + x^2 }} - 2y^2 \\ y(0) = 0 \end{cases}$ with both the Trapezoidal and Euler Methods and comparing the errors to the true solution $\displaystyle Y = {{x } \over {1 + x^2}}$. As expected, the error from the Trapezoidal Method is much smaller.

[Screenshot: numerical solutions from the Trapezoidal and Midpoint Methods]

The screenshot above shows the result of solving the initial value problem $\begin{cases} \displaystyle y ' = x - y^2 \\ y(0) = 0 \end{cases}$ with the Trapezoidal and Midpoint Methods and comparing their numerical solutions. The Midpoint Method produces fluctuating values as it progresses, indicating a Parasitic Solution, whereas the Trapezoidal Method solves the problem stably.

Below is the code, written in R. Here j denotes the number of additional corrector iterations to repeat; if it is not specified, the function simply makes one prediction with the Euler Method, applies the corrector once, and moves on.

# Euler's Method: y_{n+1} = y_n + h f(x_n, y_n)
Euler <- function(f, Y_0, a, b, h = 10^(-3))
{
  Y    <- t(Y_0)               # each row of Y is one iterate y_n
  node <- seq(a, b, by = h)
  
  for(x in node)
  {
    Y <- rbind(Y, Y[nrow(Y), ] + h*f(x, Y[nrow(Y), ]))
  }
  
  return(Y)
}
 
# Midpoint Method: y_{n+1} = y_{n-1} + 2h f(x_n, y_n), started with a single Euler step
Midpoint <- function(f, Y_0, a, b, h = 10^(-3))
{
  Y    <- t(Y_0)
  Y    <- rbind(Y, Y[1, ] + h*f(a, Y[1, ]))   # Euler starter for y_1
  node <- seq(a, b, by = h)
  
  for(x in node[-1])
  {
    Y <- rbind(Y, Y[nrow(Y) - 1, ] + 2*h*f(x, Y[nrow(Y), ]))
  }
  
  return(Y)
}
 
# Trapezoidal Method with an Euler predictor and j extra corrector sweeps
Trapezoidal <- function(f, Y_0, a, b, h = 10^(-3), j = 0)
{
  Y    <- t(Y_0)
  Y    <- rbind(Y, Y[1, ] + h*f(a, Y[1, ]))   # Euler starter for y_1
  node <- seq(a, b, by = h)
  
  for(x in node[-1])
  {
    # Predictor: one Euler step gives the guess y^(0)
    Y_guess <- Y[nrow(Y), ] + h*f(x, Y[nrow(Y), ])
    if(j > 0)
    {
      # Extra corrector sweeps: y^(j+1) = y_n + (h/2)[f(x_n,y_n) + f(x_{n+1},y^(j))]
      for(iter in 1:j)
      {
        Y_guess <- Y[nrow(Y), ] + (h/2)*( f(x, Y[nrow(Y), ]) + f(x + h, Y_guess) )
      }
    }
    # Final corrector application produces y_{n+1}
    Y <- rbind(Y, Y[nrow(Y), ] + (h/2)*( f(x, Y[nrow(Y), ]) + f(x + h, Y_guess) ))
  }
  
  return(Y)
}
 
Y <- function(x) { x/(1 + x^2) }             # true solution of the first IVP
 
f <- function(x, y) { 1/(1 + x^2) - 2*y^2 }
out <- Trapezoidal(f, seq(0, 0, len = 1), 0, 2, h = 0.1)
abs(Y(seq(0, 2.1, by = 0.1)) - out[, 1])     # errors of the Trapezoidal Method
out <- Euler(f, seq(0, 0, len = 1), 0, 2, h = 0.1)
abs(Y(seq(0, 2.1, by = 0.1)) - out[, 1])     # errors of the Euler Method
 
g <- function(x, y) { x - y^2 }
out <- Trapezoidal(g, seq(0, 0, len = 1), 0, 3.25, h = 0.25)
out[, 1]                                     # stable numerical solution
out <- Midpoint(g, seq(0, 0, len = 1), 0, 3.25, h = 0.25)
out[, 1]                                     # fluctuating parasitic solution

  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p367. ↩︎