

Trapezoidal Method

Definition 1

Given an initial value problem

$$\begin{cases} y' = f(x,y) \\ y(x_{0}) = Y_{0} \end{cases}$$

for a continuous function $f$ defined on $D \subset \mathbb{R}^{2}$, suppose the interval $(a,b)$ is divided into nodes $a \le x_{0} < x_{1} < \cdots < x_{n} < \cdots < x_{N} \le b$. In particular, for sufficiently small $h > 0$, assume $x_{j} = x_{0} + jh$. Then, starting from the initial value $y_{0} = Y_{0}$, the Trapezoidal Method is

$$y_{n+1} = y_{n} + \frac{h}{2} \left[ f(x_{n}, y_{n}) + f(x_{n+1}, y_{n+1}) \right]$$

Explanation

Predictor-Corrector Algorithm

Just as the Midpoint Method shows better performance than the Euler Method by using more data, we can expect the Trapezoidal Method to improve accuracy by doing more computation per step; it is, however, still essentially a one-step method. The catch is that it is an implicit method: $y_{n+1}$ appears on both sides of the formula, so it cannot be evaluated directly and typically requires iterating an equation-solving method. Naturally, this significantly increases the amount of computation, and despite being a one-step method it can be slower.
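As a concrete illustration of the implicit equation, the sketch below solves a single trapezoidal step for $y_{n+1}$ directly with R's built-in `uniroot` root finder instead of fixed-point iteration; the values of `h`, `x_n`, and `y_n` are illustrative, borrowed from the example further below.

```r
# A minimal sketch: one implicit trapezoidal step solved as a
# root-finding problem with uniroot.
f <- function(x, y) 1/(1 + x^2) - 2*y^2   # right-hand side from the example below

h   <- 0.1   # step size (illustrative)
x_n <- 0     # current node
y_n <- 0     # current value

# g(z) = 0 is the implicit trapezoidal equation, with z standing for y_{n+1}
g <- function(z) z - y_n - (h/2)*(f(x_n, y_n) + f(x_n + h, z))

y_next <- uniroot(g, interval = c(y_n - 1, y_n + 1), tol = 1e-9)$root
y_next   # the accepted value of y_{n+1}
```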

First, we ‘guess’ a value $y_{n+1}^{(0)}$ that approximates $Y_{n+1}$ well enough, and compute

$$y_{n+1}^{(1)} = y_{n} + \frac{h}{2} \left[ f(x_{n}, y_{n}) + f(x_{n+1}, y_{n+1}^{(0)}) \right]$$

‘Guessing’ means finding a reasonably good estimate $y_{n+1}^{(0)}$ one step ahead, for instance with the Euler Method. The calculation is then repeated until sufficient accuracy is achieved, computing

$$y_{n+1}^{(j+1)} = y_{n} + \frac{h}{2} \left[ f(x_{n}, y_{n}) + f(x_{n+1}, y_{n+1}^{(j)}) \right]$$

The superscript $(j)$ in $y_{n+1}^{(j)}$ counts the iterations, and each iterate is expected to be more precise than the last. Subtracting the iteration formula above from

$$y_{n+1} = y_{n} + \frac{h}{2} \left[ f(x_{n}, y_{n}) + f(x_{n+1}, y_{n+1}) \right]$$

gives

$$y_{n+1} - y_{n+1}^{(j+1)} = \frac{h}{2} \left[ f(x_{n+1}, y_{n+1}) - f(x_{n+1}, y_{n+1}^{(j)}) \right]$$

Assuming $f$ satisfies a Lipschitz condition in $y$ with constant $K$, we obtain

$$\left| y_{n+1} - y_{n+1}^{(j+1)} \right| \le \frac{hK}{2} \left| y_{n+1} - y_{n+1}^{(j)} \right|$$

This means $\displaystyle {{hK} \over {2}} < 1$ must be satisfied for the iteration to converge, and if $K$ is large, then $h$ must be chosen quite small.
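The contraction factor $hK/2$ can be observed numerically. The sketch below runs the corrector iteration for a single step of the example problem further below and records how much each sweep changes the iterate; the step size and starting point are illustrative.

```r
# A minimal sketch: corrector sweeps for one trapezoidal step, recording the
# size of each update. The updates should shrink roughly by hK/2 per sweep.
f <- function(x, y) 1/(1 + x^2) - 2*y^2   # right-hand side from the example below

h   <- 0.1
x_n <- 0
y_n <- 0

z <- y_n + h*f(x_n, y_n)                  # predictor: one Euler step

changes <- numeric(5)
for (j in 1:5) {
  z_new <- y_n + (h/2)*(f(x_n, y_n) + f(x_n + h, z))   # corrector sweep
  changes[j] <- abs(z_new - z)
  z <- z_new
}
print(changes)   # geometrically decreasing update sizes
```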

Additionally, the Trapezoidal Method is derived from

$$Y_{n+1} = Y_{n} + \frac{h}{2} \left[ f(x_{n}, Y_{n}) + f(x_{n+1}, Y_{n+1}) \right] - \frac{h^{3}}{12} Y^{(3)}(\xi_{n})$$

so its truncation error is $O(h^{3})$, and the iteration error $\left| y_{n+1} - y_{n+1}^{(j)} \right|$ should therefore be reduced to at least $O(h^{4})$ so that it does not dominate. Reading the explanation, one sees that the prediction step, coded as described above, is nothing other than the Euler Method; in that sense the algorithm contains the Euler Method. However, it is more appropriate to view the Euler step as a prediction that is then corrected by the Trapezoidal Method. Hence this algorithm is called a Predictor-Corrector Algorithm: the role of the Predictor is played by the Euler Method and that of the Corrector by the Trapezoidal Method, predicting an appropriate value and correcting it to find the numerical solution.
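Putting the predictor, the corrector, and the $O(h^{4})$ target together, a single step might be organized as below. The function name `trap_step`, the tolerance choice `h^4`, and the iteration cap are illustrative, not part of the original code.

```r
# A minimal sketch: one predictor-corrector step that keeps correcting until
# the update falls below a tolerance tied to the O(h^4) target.
trap_step <- function(f, x_n, y_n, h, tol = h^4, max_iter = 50) {
  z <- y_n + h*f(x_n, y_n)                              # predictor: Euler step
  for (j in 1:max_iter) {
    z_new <- y_n + (h/2)*(f(x_n, y_n) + f(x_n + h, z))  # corrector sweep
    if (abs(z_new - z) < tol) return(z_new)             # accurate enough: stop
    z <- z_new
  }
  z                                                     # give up after max_iter sweeps
}

f <- function(x, y) 1/(1 + x^2) - 2*y^2   # right-hand side from the example below
trap_step(f, 0, 0, 0.1)                   # one accepted step of the example problem
```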

Implementation

20181009_141134.png

The screenshot above shows the result of solving the initial value problem

$$\begin{cases} \displaystyle y' = {{1} \over {1 + x^{2}}} - 2y^{2} \\ y(0) = 0 \end{cases}$$

with both the Trapezoidal Method and the Euler Method, comparing the errors against the true solution $\displaystyle Y = {{x} \over {1 + x^{2}}}$. As expected, the error of the Trapezoidal Method is much smaller.

20181009_141144.png

The screenshot above shows the result of solving the initial value problem

$$\begin{cases} y' = x - y^{2} \\ y(0) = 0 \end{cases}$$

with the Trapezoidal Method and the Midpoint Method, comparing their numerical solutions. The Midpoint Method produces values that fluctuate as it progresses, indicating a parasitic solution, whereas the Trapezoidal Method solves the problem stably.

Below is the code, written in R. Here j denotes the number of extra corrector iterations; if it is not supplied, the function simply makes one prediction with the Euler Method and applies the corrector once.

Euler<-function(f,Y_0,a,b,h=10^(-3))
{
  Y <- t(Y_0)
  node <- seq(a,b,by=h)
  
  for(x in node)
  {
    # y_{n+1} = y_n + h f(x_n, y_n)
    Y<-rbind(Y,Y[length(Y[,1]),]+h*f(x,Y[length(Y[,1]),]))
  }
  
  return(Y)
}
 
Midpoint<-function(f,Y_0,a,b,h=10^(-3))
{
  Y <- t(Y_0)
  Y<-rbind(Y,Y[1,]+h*f(a,Y[1,]))   # first step by the Euler Method
  node <- seq(a,b,by=h)
  
  for(x in node[-1])
  {
    # y_{n+1} = y_{n-1} + 2h f(x_n, y_n)
    Y<-rbind(Y,Y[length(Y[,1])-1,]+2*h*f(x,Y[length(Y[,1]),]))
  }
  
  return(Y)
}
 
Trapezoidal<-function(f,Y_0,a,b,h=10^(-3),j=0)
{
  Y <- t(Y_0)
  Y<-rbind(Y,Y[1,]+h*f(a,Y[1,]))   # first step by the Euler Method
  node <- seq(a,b,by=h)
  
  for(x in node[-1])
  {
    y_n <- Y[length(Y[,1]),]                 # current value y_n
    Y_guess <- y_n + h*f(x,y_n)              # predictor: one Euler step
    if(j>0)
    {
      for(temp in 1:j)                       # optional extra corrector sweeps
      {
        Y_guess <- y_n + (h/2)*( f(x,y_n) + f(x+h,Y_guess) )
      }
    }
    # corrector: y_{n+1} = y_n + (h/2)[f(x_n,y_n) + f(x_{n+1},y_guess)]
    Y<-rbind(Y, y_n + (h/2)*( f(x,y_n) + f(x+h,Y_guess) ))
  }
  
  return(Y)
}
 
Y<-function(x) {x / (1 + x^2)}
 
f<-function(x,y) {1/(1+x^2) - 2*y^2}
out<-Trapezoidal(f,seq(0,0,len=1),0,2,h=0.1)
abs(Y(seq(0,2.1,by=0.1)) - out[,1])
out<-Euler(f,seq(0,0,len=1),0,2,h=0.1)
abs(Y(seq(0,2.1,by=0.1)) - out[,1])
 
g<-function(x,y) {x-y^2}
out<-Trapezoidal(g,seq(0,0,len=1),0,3.25,h=0.25)
out[,1]
out<-Midpoint(g,seq(0,0,len=1),0,3.25,h=0.25)
out[,1]

  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p367. ↩︎