
Explicit Runge-Kutta Methods 📂Numerical Analysis

Overview

This article introduces the Runge-Kutta methods, a family of solvers for ordinary differential equations (ODEs). A separate article covers the commonly used 4th-order Runge-Kutta method, RK4, in detail.

Buildup

Consider the following ordinary differential equation, where $y$ is a function of $t$ and $^{\prime}$ denotes the derivative with respect to $t$.

$$ y^{\prime} = f(t,y),\quad t \ge t_{0},\quad y(t_{0}) = y_{0} $$

Integrating this from $t_{n}$ to $t_{n+1} = t_{n} + h$ (to avoid confusion, let's use $\tau$ instead of $t$ as the integration variable),

$$ \int_{t_{n}}^{t_{n+1}} y^{\prime} \, d\tau = \int_{t_{n}}^{t_{n+1}} f(\tau, y) \, d\tau $$

Evaluating the left side and then substituting $\tau \equiv t_{n} + h\tau$ in the right-hand integral, we obtain:

$$ \big[ y(\tau) \big]_{t_{n}}^{t_{n+1}} = \int_{t_{n}}^{t_{n+1}} f(\tau, y) \, d\tau $$

$$ y(t_{n+1}) = y(t_{n}) + \int_{t_{n}}^{t_{n+1}} f(\tau, y) \, d\tau = y(t_{n}) + h \int_{0}^{1} f\big(t_{n} + h\tau,\, y(t_{n} + h\tau)\big) \, d\tau $$

Now, approximating the last integral by a weighted sum, we obtain the following equation:

$$ y(t_{n+1}) = y(t_{n}) + h\sum_{j=1}^{\nu} b_{j} f\big(t_{n} + c_{j}h,\, y(t_{n} + c_{j}h)\big),\quad n=0,1,2,\cdots \tag{1} $$

This is an approximation in which, knowing $y_{n} = y(t_{n})$, we can obtain $y_{n+1} = y(t_{n+1})$. Of course, since the right side contains the unknown values $y(t_{n} + c_{j}h)$, these must also be approximated. Since the formula is cumbersome, let us write, for $j = 1,2,\dots,\nu$:

$$ \xi_{j} = y(t_{n} + c_{j}h) $$

First, set $c_{1} = 0$. Then $\xi_{1} = y_{n}$ is known. Now, approximate each $\xi_{j}$ sequentially, in the manner of a multistep method, as follows.

$$
\begin{align*}
\xi_{2} &= y(t_{n} + c_{2}h) \\
&= y_{n} + h a_{2,1} f\big(t_{n} + c_{1}h,\, y(t_{n} + c_{1}h)\big) \\
&= y_{n} + h a_{2,1} f(t_{n}, y_{n}) & (\because c_{1} = 0) \\
&= y_{n} + h a_{2,1} f(t_{n}, \xi_{1}) & (\because \xi_{1} = y_{n}) \\[2em]
\xi_{3} &= y(t_{n} + c_{3}h) \\
&= y_{n} + h a_{3,1} f\big(t_{n} + c_{1}h,\, y(t_{n} + c_{1}h)\big) + h a_{3,2} f\big(t_{n} + c_{2}h,\, y(t_{n} + c_{2}h)\big) \\
&= y_{n} + h a_{3,1} f(t_{n}, \xi_{1}) + h a_{3,2} f(t_{n} + c_{2}h, \xi_{2}) \\
&\ \vdots \\
\xi_{\nu} &= y_{n} + h \sum_{j=1}^{\nu-1} a_{\nu,j} f(t_{n} + c_{j}h, \xi_{j})
\end{align*}
$$

Now we have all the values appearing on the right side of $(1)$: $y_{n}$, $f(t_{n}, \xi_{1})$, $f(t_{n} + c_{2}h, \xi_{2})$, $\dots$, $f(t_{n} + c_{\nu}h, \xi_{\nu})$. The explicit Runge-Kutta method approximates the value of $y_{n+1}$ as a linear combination of these values.

Definition

The explicit Runge-Kutta method approximates $y_{n+1}$, given $y_{n}$, as follows.

$$ y_{n+1} = y_{n} + h\sum_{j=1}^{\nu} b_{j} f(t_{n} + c_{j}h, \xi_{j}),\quad n=0,1,2,\cdots \tag{2} $$

Here, each $\xi_{j}$ is given by

$$
\begin{align*}
\xi_{1} &= y_{n} \\
\xi_{2} &= y_{n} + h a_{2,1} f(t_{n}, \xi_{1}) \\
\xi_{3} &= y_{n} + h a_{3,1} f(t_{n}, \xi_{1}) + h a_{3,2} f(t_{n} + c_{2}h, \xi_{2}) \\
&\ \vdots \\
\xi_{\nu} &= y_{n} + h \sum_{i=1}^{\nu-1} a_{\nu,i} f(t_{n} + c_{i}h, \xi_{i})
\end{align*}
$$

Here, the $\nu \times \nu$ matrix $A = [a_{j,i}]$ is called the Runge-Kutta matrix. Components left empty in the formula are $0$.

$$
A = [a_{j,i}] = \begin{bmatrix}
 & & & & \\
a_{2,1} & & & & \\
a_{3,1} & a_{3,2} & & & \\
\vdots & & \ddots & & \\
a_{\nu,1} & a_{\nu,2} & \cdots & a_{\nu,\nu-1} &
\end{bmatrix}
$$

Also, the vectors $\mathbf{b}$ and $\mathbf{c}$ below are called the RK weights and RK nodes, respectively.

$$
\mathbf{b} = \begin{bmatrix} b_{1} \\ b_{2} \\ \vdots \\ b_{\nu} \end{bmatrix}
\quad \text{and} \quad
\mathbf{c} = \begin{bmatrix} c_{1} \\ c_{2} \\ \vdots \\ c_{\nu} \end{bmatrix}
$$

Also, such a method is said to have $\nu$ stages and is called a $\nu$-stage Runge-Kutta method. (The number of stages should not be confused with the order of accuracy of the method.)

Explanation

As defined, the RK matrix is strictly lower triangular: $a_{j,i} = 0$ whenever $i \ge j$.

Following the above procedure, the $\xi_{j}$ can be computed sequentially from $\xi_{1}$ to $\xi_{\nu}$. Consequently, $(2)$ can be evaluated and $y_{n+1}$ is obtained. Repeating this yields the values of $y$ at subsequent steps: $y_{n+2}$, $y_{n+3}$, $\dots$
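The sequential computation above can be sketched in Python for a scalar ODE. This is a minimal sketch, not code from the original text; the function name `erk_step` and the array-based representation of $(A, \mathbf{b}, \mathbf{c})$ are illustrative choices.

```python
import numpy as np

def erk_step(f, t_n, y_n, h, A, b, c):
    """One step of an explicit Runge-Kutta method given by the
    tableau (A, b, c); A must be strictly lower triangular.
    (Illustrative sketch for a scalar ODE y' = f(t, y).)"""
    nu = len(b)
    xi = np.empty(nu)  # stage values xi_1, ..., xi_nu
    k = np.empty(nu)   # slopes k_j = f(t_n + c_j h, xi_j)
    for j in range(nu):
        # xi_j = y_n + h * sum_{i < j} a_{j,i} k_i  (empty sum for j = 0)
        xi[j] = y_n + h * np.dot(A[j, :j], k[:j])
        k[j] = f(t_n + c[j] * h, xi[j])
    # y_{n+1} = y_n + h * sum_j b_j k_j, i.e. equation (2)
    return y_n + h * np.dot(b, k)
```

With the one-stage tableau $A = [0]$, $\mathbf{b} = [1]$, $\mathbf{c} = [0]$, this reduces to a forward Euler step, as expected from the definition.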

In the RK method, the coefficients $a_{j,i}$, $b_{j}$, $c_{j}$ are not solved for; they are chosen in advance. The coefficients are arranged as $\begin{array}{c|c} \mathbf{c} & A \\ \hline & \mathbf{b}^{t} \end{array}$, which is called the RK tableau. The RK tableau of the commonly used RK4 is as follows.

$$
\begin{array}{c|cccc}
0 & & & & \\[0.5em]
\frac{1}{2} & \frac{1}{2} & & & \\[0.5em]
\frac{1}{2} & 0 & \frac{1}{2} & & \\[0.5em]
1 & 0 & 0 & 1 & \\[0.5em]
\hline
 & \frac{1}{6} & \frac{1}{3} & \frac{1}{3} & \frac{1}{6}
\end{array}
$$
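As a concrete check, the RK4 tableau above can be plugged into the stage recursion. The following self-contained sketch (the name `rk4_solve` is a hypothetical choice) integrates $y^{\prime} = y$, $y(0) = 1$ from $t = 0$ to $t = 1$, where the exact answer is $e$.

```python
import numpy as np

# The RK4 tableau read off the table above (empty entries are 0)
A = np.array([[0,   0,   0, 0],
              [1/2, 0,   0, 0],
              [0,   1/2, 0, 0],
              [0,   0,   1, 0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = np.array([0, 1/2, 1/2, 1])

def rk4_solve(f, t0, y0, h, n_steps):
    """Integrate the scalar ODE y' = f(t, y) with the RK4 tableau."""
    t, y = t0, y0
    for _ in range(n_steps):
        k = np.zeros(4)
        for j in range(4):
            xi_j = y + h * np.dot(A[j, :j], k[:j])  # stage value xi_j
            k[j] = f(t + c[j] * h, xi_j)
        y = y + h * np.dot(b, k)  # equation (2)
        t += h
    return y

# y' = y, y(0) = 1  =>  y(1) = e
approx = rk4_solve(lambda t, y: y, 0.0, 1.0, 0.1, 10)
```

With step size $h = 0.1$, the result agrees with $e \approx 2.71828$ to roughly six digits, consistent with RK4's fourth-order accuracy.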

See Also