

Richardson Error Estimation

Buildup

Comparing with the true value is the best way to check the performance of a method for solving differential equations, but sometimes computing the true value is tedious, and sometimes the true solution cannot be found at all. In such cases, one can estimate the error by comparing $y_{h} (x_{n} )$ with the result $y_{2h} (x_{n} )$ obtained by doubling the step size to $2h$.

For instance, for the trapezoidal method, running the solver with step sizes $h$ and $2h$ yields the following two expansions: $$ Y(x_{n}) - y_{h} (x_{n} ) = D ( x_{n} ) h^2 + O ( h^3 ) $$

$$ Y(x_{n}) - y_{2h} (x_{n} ) = 4 D ( x_{n} ) h^2 + O ( h^3 ) $$ Eliminating $D( x_{n} )$ between the two, one obtains: $$ Y(x_{n}) = {{1 } \over {3}} [4 y_{h} (x_{n} ) - y_{2h} (x_{n} ) ] + O ( h^3 ) $$ Rearranging into the form we want: $$ Y(x_{n}) - y_{h} (x_{n} )= {{1 } \over {3}} [ y_{h} (x_{n} ) - y_{2h} (x_{n} ) ] + O ( h^3 ) $$ Discarding the comparatively small $O(h^3)$ term: $$ Y(x_{n}) - y_{h} (x_{n} ) \approx {{1 } \over {3}} [ y_{h} (x_{n} ) - y_{2h} (x_{n} ) ] $$ That is, without knowing the true solution, we can compute $\displaystyle {{1 } \over {3}} [ y_{h} (x_{n} ) - y_{2h} (x_{n} ) ] $ as an estimate of the actual error $\displaystyle Y(x_{n}) - y_{h} (x_{n} )$. Needless to say, this error decreases as $h$ is reduced.
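As a quick numerical check of this idea (a minimal sketch separate from the full adaptive code below; the model problem $y ' = -y$, $Y(x) = e^{-x}$, and the fixed-step solver `trap_solve` are my own choices for illustration), one can compare the Richardson estimate with the true error of the trapezoidal method:

```python
import math

def trap_solve(f, x0, y0, x_end, h):
    """Fixed-step trapezoidal method: Euler predictor, one trapezoidal correction."""
    n = round((x_end - x0) / h)  # assumes h divides the interval evenly
    x, y = x0, y0
    for _ in range(n):
        y_pred = y + h * f(x, y)                         # Euler predictor
        y = y + (h / 2) * (f(x, y) + f(x + h, y_pred))   # trapezoidal corrector
        x += h
    return y

f = lambda x, y: -y          # model problem y' = -y with Y(x) = exp(-x)
x_n, h = 1.0, 0.05

y_h  = trap_solve(f, 0.0, 1.0, x_n, h)       # step size h
y_2h = trap_solve(f, 0.0, 1.0, x_n, 2 * h)   # step size 2h

true_error = math.exp(-x_n) - y_h            # Y(x_n) - y_h(x_n), known here
estimate   = (y_h - y_2h) / 3                # Richardson estimate of the same error
print(true_error, estimate)                  # the two values agree closely
```

The estimate matches the true error to within a few percent even at this modest step size, and the agreement improves as $h$ shrinks.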

Definition 1

Simply put, Richardson error estimation refers to a method of estimating the error without knowing the true value.

Such a calculation applies not only to the trapezoidal method but can be usefully employed throughout numerical analysis. Moreover, the computation above is merely an example simplified with $h$ and $2h$; when the occasion for a Richardson error estimate arises, nothing prevents using $h$ and $\displaystyle {{h} \over {2}}$ instead.
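For completeness, repeating the derivation with step sizes $h$ and $h/2$ for the trapezoidal method gives $$ Y(x_{n}) - y_{h} (x_{n} ) = D ( x_{n} ) h^2 + O ( h^3 ) $$

$$ Y(x_{n}) - y_{h/2} (x_{n} ) = {{1} \over {4}} D ( x_{n} ) h^2 + O ( h^3 ) $$ and eliminating $D(x_{n})$ in exactly the same way yields $$ Y(x_{n}) - y_{h/2} (x_{n} ) \approx {{1 } \over {3}} [ y_{h/2} (x_{n} ) - y_{h} (x_{n} ) ] $$ so the finer solution $y_{h/2}$ is the one whose error gets estimated.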

Implementation

Below is Python code that solves the problem while automatically adjusting the step size via Richardson error estimation.

def Detrap(f,x_0,y_0,x_end,tol,h,h_min,h_max,ier) :
    print("x_n\t\th\t\ty_n\t\t\tY_n-y_n\t\ttrunc")
    x_1 = 0
    x_2 = 0
    y_1 = 0
    y_2 = 0
 
    #2
    loop = 1
    ier = 0
 
    while 1 :
    #for i in range(5) :
        #4
        y_guess = y_0 + h * f(x_0, y_0)
        y_guess = y_0 + (h / 2) * (f(x_0, y_0) + f(x_0 + h, y_guess))
        y_e = y_0 + (h / 2) * (f(x_0, y_0) + f(x_0 + h, y_guess)) # y_e : y_{h} (x_{0} + h)
 
        y_guess = y_0 + (h / 2) * f(x_0, y_0)
        y_guess = y_0 + (h / 4) * (f(x_0, y_0) + f(x_0 + h / 2, y_guess))
        y_ = y_0 + (h / 4) * (f(x_0, y_0) + f(x_0 + h / 2, y_guess))  # y_ : y_{h/2} (x_{0} + h/2)
 
        y_guess = y_ + (h / 2) * f(x_0 + h / 2, y_)
        y_guess = y_ + (h / 4) * (f(x_0 + h / 2, y_) + f(x_0 + h, y_guess))
        y__ = y_ + (h / 4) * (f(x_0 + h / 2, y_) + f(x_0 + h, y_guess))  # y__ : y_{h/2} (x_{0} + h)
 
        #5
        trunc = (4/3)*(y__ - y_e)
 
        #6
        if ( tol*h/4 <= abs(trunc) <= tol*h ) or loop==2 :
            x_1 = x_0 + h
            y_1 = y_e
            print("%0.4f\t%0.4f\t%0.6f\t%1.2e\t%1.2e" % (x_1,h,y_1,Y(x_1)-y_1,trunc))
            break
 
        #7
        D_3y = (f(x_0 + h,y__)- 2*f( x_0 + (h/2),y_) + f(x_0,y_0) ) / (h**2 / 4)
        if D_3y==0 :
            h = h_max
            loop = 2
        else :
            h = (6*tol/abs(D_3y))**(1/2)
 
        #8
        if h<h_min :
            ier = 2
            return
 
        if h>h_max :
            h=h_max
            ier = 1
            loop = 2
 
    while 1 :
        linepointer = [0.]
 
        while 1 :
            #11
            x_2 = x_1 + h
            y_guess = y_0 + 2*h*f(x_1,y_1)
            y_2 = y_1 + (h / 2) * (f(x_1, y_1) + f(x_2, y_guess))
 
            #12
            trunc = -(y_2 - y_guess)/6
 
            #13
            if abs(trunc)<((tol*h)/4) or (tol*h)<abs(trunc) :
                break
 
            #14
            if round(x_2,4) in [0.2725,1.9595,7.6959] :
                print("x_n\t\th\t\ty_n\t\t\tY_n-y_n\t\ttrunc")
                print("-------------------------------------------------")
            print("%0.4f\t%0.4f\t%0.6f\t%1.2e\t%1.2e" % (x_2,h,y_2,Y(x_2)-y_2,trunc))
 
            #15
            x_1 = x_2
            y_0 = y_1
            y_1 = y_2
 
            if not x_1<x_end :
                return
 
        #17
        x_0 = x_1
        y_0 = y_1
        h_0 = h
        h = ( (tol*( h_0**3 ) ) / ( 2 * abs(trunc)) )**(1/2)
 
        #18
        h = min(h,2*h_0)
 
        #19
        if h<h_min :
            ier = 2
            return
 
        if h>h_max :
            ier = 1
            h = h_max
 
        #20
        y_guess = y_0 + h * f(x_0,y_0)
        y_guess = y_0 + (h / 2) * (f(x_0, y_0) + f(x_0 + h, y_guess))
        y_1 = y_0 + (h / 2) * (f(x_0, y_0) + f(x_0 + h, y_guess))
        x_1 = x_0 + h
 
        #21
        if round(x_1,4) in [0.2725,1.9595,7.6959] :
            print("x_n\t\th\t\ty_n\t\t\tY_n-y_n\t\ttrunc")
            print("-------------------------------------------------")
 
        print("%0.4f\t%0.4f\t%0.6f\t%1.2e\t%1.2e" % (x_1,h,y_1,Y(x_1)-y_1,trunc))
 
        #22
        if not x_1<x_end :
            return
 
#Example
def f(x,y) :
    return (1/(1 + x**2)) - 2 * ( y**2 )
 
def Y(x) :
    return x/(1 + x**2)
 
Detrap(f,x_0 = 0,y_0 = 0,x_end = 10,tol=0.0005,h=0.1,h_min=0.001,h_max=1.0,ier=0)

  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p372. ↩︎