
Implicit Runge-Kutta Methods 📂Numerical Analysis


Overview

This document introduces the implicit Runge-Kutta method, a solver for Ordinary Differential Equations (ODEs). The commonly used fourth-order Runge-Kutta method, RK4, is a type of explicit Runge-Kutta method; this document explains the implicit variant.

Buildup1

$$ y^{\prime} = f(t,y),\quad t \ge t_{0},\quad y(t_{0}) = y_{0} $$

Consider the ordinary differential equation given above, where $y$ is a function of $t$ and $^{\prime}$ denotes the derivative with respect to $t$. Writing $y_{n} = y(t_{n})$, an explicit Runge-Kutta method approximates $y_{n+1}$ for a given $y_{n}$ as follows:

$$ \begin{equation} y_{n+1} = y_{n} + h\sum_{j=1}^{\nu} b_{j} f\big(t_{n} + c_{j}h,\, y(t_{n} + c_{j}h)\big),\quad n=0,1,2,\cdots \end{equation} $$

Where

$$ \begin{align*} \xi_{1} &= y_{n} \\ \xi_{2} &= y_{n} + ha_{2,1}f(t_{n}, \xi_{1}) \\ \xi_{3} &= y_{n} + ha_{3,1}f(t_{n}, \xi_{1}) + ha_{3,2}f(t_{n} + c_{2}h, \xi_{2}) \\ &\vdots \\ \xi_{\nu} &= y_{n} + h\sum_{i=1}^{\nu-1}a_{\nu, i}f(t_{n} + c_{i}h, \xi_{i}) \end{align*} $$

According to the formulas above, each $\xi_{j}$ can be calculated once $\xi_{1}, \dots, \xi_{j-1}$ are known, and the process starts from the known value $\xi_{1} = y_{n}$. That is, the stages are set up so that they can be computed sequentially from $\xi_{2}$ through $\xi_{\nu}$, which is the defining attribute of explicit methods.
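This sequential computation of the stages can be sketched in a few lines of Python. The following is a minimal illustration (not from the source text), using the classical RK4 tableau as a concrete explicit example; the function and variable names are my own:

```python
import numpy as np

def explicit_rk_step(f, t, y, h, A, b, c):
    """One step of an explicit Runge-Kutta method for a scalar ODE.

    Because A is strictly lower triangular, each stage xi_j depends
    only on the previously computed stages xi_1, ..., xi_{j-1}.
    """
    nu = len(b)
    xi = np.zeros(nu)   # stage values xi_j
    k = np.zeros(nu)    # stage slopes f(t + c_i h, xi_i)
    for j in range(nu):
        xi[j] = y + h * sum(A[j, i] * k[i] for i in range(j))
        k[j] = f(t + c[j] * h, xi[j])
    return y + h * (b @ k)

# Classical RK4 tableau (a standard explicit example)
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = np.array([0.0, 0.5, 0.5, 1.0])

# y' = y, y(0) = 1: one step of size h = 0.1 approximates e^{0.1}
y1 = explicit_rk_step(lambda t, y: y, 0.0, 1.0, 0.1, A, b, c)
```

Note that the loop never references a stage that has not yet been computed, which is exactly what fails in the implicit case below.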

In implicit methods, by contrast, computing each $\xi_{j}$ may require all of the $\xi_{i}$. In other words, we must solve a system of equations for the $\xi_{j}$ in order to calculate $y_{n+1}$.

Definition

The implicit Runge-Kutta method approximates $y_{n+1}$ for a given $y_{n}$ as follows:

$$ y_{n+1} = y_{n} + h\sum_{j=1}^{\nu} b_{j} f(t_{n} + c_{j}h, \xi_{j}),\quad n=0,1,2,\dots $$

Where,

$$ \xi_{j} = y_{n} + h\sum_{i=1}^{\nu}a_{j,i}f(t_{n} + c_{i}h, \xi_{i}),\quad j=1,\dots,\nu $$

Here, the matrix $A = [a_{j,i}]$ is called the RK matrix. Also, $\mathbf{b}$ and $\mathbf{c}$ are referred to as the RK weights and RK nodes, respectively.

$$ \mathbf{b} = \begin{bmatrix} b_{1} \\ b_{2} \\ \vdots \\ b_{\nu} \end{bmatrix} \quad \text{and} \quad \mathbf{c} = \begin{bmatrix} c_{1} \\ c_{2} \\ \vdots \\ c_{\nu} \end{bmatrix} $$

Where,

$$ c_{j} = \sum_{i=1}^{\nu} a_{j, i},\quad j=1,2,\dots,\nu $$

Moreover, equation $(1)$ is said to have $\nu$ stages, and such methods are called $\nu$-stage Runge-Kutta methods.
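The definition above can be turned into a short Python sketch. The stage equations form a coupled system; here it is solved by simple fixed-point iteration for illustration (in practice, Newton-type iterations are the usual choice). This is my own minimal implementation under those assumptions, demonstrated with the one-stage implicit midpoint rule:

```python
import numpy as np

def implicit_rk_step(f, t, y, h, A, b, c, tol=1e-12, max_iter=100):
    """One step of an implicit RK method for a scalar ODE.

    The stage equations
        xi_j = y + h * sum_i a_{j,i} f(t + c_i h, xi_i)
    couple all stages, so they are solved here by fixed-point iteration.
    """
    nu = len(b)
    xi = np.full(nu, y)   # initial guess: every stage equals y_n
    for _ in range(max_iter):
        k = np.array([f(t + c[i] * h, xi[i]) for i in range(nu)])
        xi_new = y + h * (A @ k)
        if np.max(np.abs(xi_new - xi)) < tol:
            xi = xi_new
            break
        xi = xi_new
    k = np.array([f(t + c[i] * h, xi[i]) for i in range(nu)])
    return y + h * (b @ k)

# Implicit midpoint rule: a one-stage implicit RK method
A = np.array([[0.5]])
b = np.array([1.0])
c = np.array([0.5])

# y' = -y, y(0) = 1, one step h = 0.1; exact update is (1 - h/2)/(1 + h/2)
y1 = implicit_rk_step(lambda t, y: -y, 0.0, 1.0, 0.1, A, b, c)
```

For this linear problem the fixed-point iteration is a contraction (the factor is $h \lvert a_{1,1} \rvert = 0.05$), so it converges quickly; for stiff problems a Newton iteration would be needed instead.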

Explanation

Unlike in explicit methods, where the matrix $A = [a_{j,i}]$ is strictly lower triangular, in implicit methods $A$ can be any matrix.

In RK methods, the coefficients $a_{j,i}$, $b_{j}$, $c_{j}$ are not solved for but chosen in advance. They are conventionally collected in the array $$ \begin{array}{c|c} \mathbf{c} & A \\ \hline & \mathbf{b}^{t} \end{array} $$ which is called an RK tableau.
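As a concrete example (not given in the text above), the well-known two-stage Gauss–Legendre method, an implicit RK method of order four, has the tableau

$$ \begin{array}{c|cc} \frac{1}{2}-\frac{\sqrt{3}}{6} & \frac{1}{4} & \frac{1}{4}-\frac{\sqrt{3}}{6} \\[2pt] \frac{1}{2}+\frac{\sqrt{3}}{6} & \frac{1}{4}+\frac{\sqrt{3}}{6} & \frac{1}{4} \\[2pt] \hline & \frac{1}{2} & \frac{1}{2} \end{array} $$

Note that each row of $A$ sums to the corresponding node $c_{j}$, consistent with the condition above, and that $A$ is not lower triangular, so the stages genuinely couple.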

Compared with explicit methods, implicit methods naturally take much more time per step, since a system of equations must be solved. However, the advantage of implicit methods lies not in speed but in the stability of the solution: explicit methods can become unstable, particularly on stiff problems, and implicit methods are used as an alternative in such cases.
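This stability difference is easy to see on a stiff linear test problem. The sketch below (my own illustration, not from the source) compares forward Euler with backward Euler, the simplest implicit RK method ($a_{1,1} = b_{1} = c_{1} = 1$), on $y^{\prime} = -50y$ with a step size well outside the explicit method's stability interval:

```python
# Stiff test problem y' = -50 y, y(0) = 1; the exact solution decays to 0.
lam, h, n_steps = -50.0, 0.1, 10

# Forward (explicit) Euler: y_{n+1} = (1 + h*lam) * y_n.
# Here 1 + h*lam = -4, so each step multiplies the error by 4: unstable.
y_exp = 1.0
for _ in range(n_steps):
    y_exp = (1 + h * lam) * y_exp

# Backward (implicit) Euler: y_{n+1} = y_n + h*lam*y_{n+1},
# i.e. y_{n+1} = y_n / (1 - h*lam). The linear stage equation is
# solved exactly here, so no iteration is needed.
y_imp = 1.0
for _ in range(n_steps):
    y_imp = y_imp / (1 - h * lam)

# |y_exp| has exploded to 4**10, while y_imp = 6**-10 correctly decays.
```

The explicit iterate grows by a factor of $4$ per step while the implicit one shrinks by a factor of $6$, mirroring the true decaying solution regardless of the (large) step size.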

See Also


  1. Arieh Iserles, A First Course in the Numerical Analysis of Differential Equations (2nd Edition, 2009), p41-42 ↩︎