Numerical Solutions of Heat Equations: Finite Difference Method

Numerical Solution of the Heat Equation¹

Let’s assume we are given a one-dimensional heat equation as follows.

$$ \dfrac{\partial u}{\partial t} = \dfrac{\partial^{2} u}{\partial x^{2}},\qquad 0\le x \le 1,\quad t \ge 0 \tag{1} $$

Our goal is to approximate the solution at finitely many points. Let's discretize the space-time domain as follows.

$$ \left\{ (\ell \Delta x, n\Delta t) : \ell=0,1,\dots,d+1,\ n\ge 0 \right\}\quad \text{ where } \Delta x = \frac{1}{d+1} $$
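
For instance, with $d = 99$ this grid can be built in Julia as follows (a small sketch; the variable names are assumptions, not from the reference).

d  = 99
Δx = 1/(d+1)
xgrid = (0:d+1) .* Δx    # the points ℓΔx for ℓ = 0, 1, …, d+1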

Let's denote the approximation to $u(\ell \Delta x, n\Delta t)$ by $u_{\ell}^{n}$. Note that the superscript ${}^{n}$ is not a power but a time index. The right-hand and left-hand sides of $(1)$ are approximated as follows, respectively.

$$ \dfrac{\partial^{2} u(x,t)}{\partial x^{2}} = \dfrac{1}{(\Delta x)^{2}} \big[ u(x-\Delta x, t) - 2u(x,t) + u(x+\Delta x, t) \big] + \mathcal{O}\left( (\Delta x)^{2} \right) $$ $$ \dfrac{\partial u(x,t)}{\partial t} = \dfrac{1}{\Delta t} \big[ u(x,t+\Delta t) - u(x,t) \big] + \mathcal{O}(\Delta t) $$
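
Both formulas follow from Taylor expansion about $(x,t)$. For the spatial one, for instance,

$$ u(x \pm \Delta x, t) = u(x,t) \pm \Delta x\, u_{x}(x,t) + \dfrac{(\Delta x)^{2}}{2} u_{xx}(x,t) \pm \dfrac{(\Delta x)^{3}}{6} u_{xxx}(x,t) + \mathcal{O}\left( (\Delta x)^{4} \right) $$

so adding the two expansions cancels the odd-order terms, and dividing by $(\Delta x)^{2}$ leaves $u_{xx}(x,t) + \mathcal{O}\left( (\Delta x)^{2} \right)$. The temporal formula is just the first-order Taylor expansion in $t$.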

Now, let’s set $\mu = \dfrac{\Delta t}{(\Delta x)^{2}}$. Substituting the two equations into $(1)$ and then multiplying by $\Delta t = \mu (\Delta x)^{2}$, we obtain the following.

$$ \dfrac{u(x,t+\Delta t) - u(x,t)}{\Delta t} = \dfrac{1}{(\Delta x)^{2}} \big[ u(x-\Delta x, t) - 2u(x,t) + u(x+\Delta x, t) \big] + \mathcal{O}\left( (\Delta x)^{2} \right) $$ $$ \implies u(x,t+\Delta t) = u(x,t) + \dfrac{\mu (\Delta x)^{2}}{(\Delta x)^{2}} \big[ u(x-\Delta x, t) - 2u(x,t) + u(x+\Delta x, t) \big] + \mathcal{O}\left( (\Delta x)^{4} \right) $$ $$ \implies u(x,t+\Delta t) = u(x,t) + \mu \big[ u(x-\Delta x, t) - 2u(x,t) + u(x+\Delta x, t) \big] + \mathcal{O}\left( (\Delta x)^{4} \right) $$ $$ \implies u_{\ell}^{n+1} = u_{\ell}^{n} + \mu \left( u_{\ell-1}^{n} - 2u_{\ell}^{n} + u_{\ell+1}^{n} \right),\quad \ell=1,2,\dots,d,\ n=0,1,\dots \tag{2} $$

Here, $\mu = \dfrac{\Delta t}{(\Delta x)^{2}}$, the ratio of the time step to the square of the spatial step, is an important constant called the Courant number; it determines whether $(2)$ converges.
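
As a minimal sketch of how $(2)$ advances one time level in Julia (the function name and the convention that the vector holds $u_{0}^{n}, \dots, u_{d+1}^{n}$ are assumptions, not from the reference):

# One explicit step of scheme (2): interior points are updated, boundary entries are carried over.
# u: vector of length d+2 holding u_0^n, …, u_{d+1}^n;   μ = Δt/(Δx)^2
function explicit_step(u, μ)
    unew = copy(u)
    unew[2:end-1] = u[2:end-1] .+ μ .* (u[1:end-2] .- 2*u[2:end-1] .+ u[3:end])
    return unew
end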

Explanation

From the theorem below, $(2)$ converges when $\mu \le \frac{1}{2}$. This means that the spatial grid imposes an upper bound on the time step that can be taken at once: the finer the spatial grid, the smaller the admissible time step. For simple examples this is not a problem, but when the time domain is long it can lead to long computation times. For example, in the one-dimensional example below, where the space domain is divided into a vector of size 100, the time domain has to be divided into more than 1,700 pieces.
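
The number of required time levels can be computed directly (a small sketch, not part of the original code):

Δx = (1 - (-1))/99          # 100 spatial points on [-1, 1]
μ  = 0.5                    # largest value allowed by the theorem below
Δt = μ*Δx^2                 # largest admissible time step
T  = 0.35
println(length(0:Δt:T))     # number of time levels needed to reach T (over 1,700)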

Convergence

If $\mu \le \dfrac{1}{2}$, the numerical solution $(2)$ converges.

Proof

Let $t^{\ast} \gt 0$ be an arbitrary constant. Let's define the error between the approximation and the true solution at each grid point as follows.

$$ e_{\ell}^{n} := u_{\ell}^{n} - u(\ell\Delta x, n\Delta t),\qquad \ell = 0,1,\dots,d+1,\quad n=0,1,\dots,n_{\Delta t} $$

Here, $n_{\Delta t} = \lfloor t^{\ast}/{\Delta t} \rfloor = \lfloor t^{\ast} / (\mu (\Delta x)^{2}) \rfloor$ is the final time index. That the method converges can be expressed as the following statement about the error $e_{\ell}^{n}$.

$$ \lim\limits_{\Delta x \to 0} \left[ \max\limits_{n=0,1,\dots,n_{\Delta t}} \left( \max\limits_{\ell=0,1,\dots,d+1} \left| e_{\ell}^{n} \right| \right) \right] = 0 $$

Let's denote the inner maximum as follows.

$$ \eta^{n} := \max\limits_{\ell=0,1,\dots,d+1} \left| e_{\ell}^{n} \right|,\quad n=0,1,\dots,n_{\Delta t} \tag{3} $$

Then the expression for convergence is,

$$ \lim\limits_{\Delta x \to 0} \left( \max\limits_{n=0,1,\dots,n_{\Delta t}} \eta^{n} \right) = 0 \tag{4} $$

Now write $\tilde{u}_{\ell}^{n} := u(\ell\Delta x, n\Delta t)$ for the true value approximated by $u_{\ell}^{n}$. The numerical solution satisfies $(2)$ exactly, while the true values satisfy it only up to the truncation error:

$$ \begin{align*} u_{\ell}^{n+1} &= u_{\ell}^{n} + \mu \left( u_{\ell-1}^{n} - 2u_{\ell}^{n} + u_{\ell+1}^{n} \right) \\ \tilde{u}_{\ell}^{n+1} &= \tilde{u}_{\ell}^{n} + \mu \left( \tilde{u}_{\ell-1}^{n} - 2\tilde{u}_{\ell}^{n} + \tilde{u}_{\ell+1}^{n} \right) + \mathcal{O}\left( (\Delta x)^{4} \right) \end{align*} \qquad \begin{array}{l} \ell = 1,2,\dots,d \\ n=0,1,\dots,n_{\Delta t}-1 \end{array} $$

Subtracting these, we get the following.

$$ e_{\ell}^{n+1} = e_{\ell}^{n} + \mu \left( e_{\ell-1}^{n} - 2e_{\ell}^{n} + e_{\ell+1}^{n} \right) + \mathcal{O}\left( (\Delta x)^{4} \right) $$

By the definition of the asymptotic notation, the $\mathcal{O}\left( (\Delta x)^{4} \right)$ term is bounded in absolute value by $c (\Delta x)^{4}$ for some constant $c \gt 0$. Taking absolute values and applying the triangle inequality, we obtain the following.

$$ \begin{align*} \left| e_{\ell}^{n+1} \right| &\le \left| e_{\ell}^{n} + \mu \left( e_{\ell-1}^{n} - 2e_{\ell}^{n} + e_{\ell+1}^{n} \right) \right| + c (\Delta x)^{4} \\ &\le \mu \left| e_{\ell-1}^{n} \right| + \left| 1 - 2\mu \right| \left| e_{\ell}^{n} \right| + \mu \left| e_{\ell+1}^{n} \right| + c (\Delta x)^{4} \\ &\le (2\mu + \left| 1 - 2\mu \right|)\eta^{n} + c (\Delta x)^{4},\qquad n=0,1,\dots,n_{\Delta t}-1 \end{align*} $$

The last inequality holds due to $(3)$. Since $\mu \le 1/2$, we have $\left| 1 - 2\mu \right| = 1 - 2\mu$, so $2\mu + \left| 1 - 2\mu \right| = 1$. Taking the maximum over $\ell$ and using $(3)$ again, we obtain the following.

$$ \eta^{n+1} = \max\limits_{\ell=0,1,\dots,d+1} \left| e_{\ell}^{n+1} \right| \le \eta^{n} + c (\Delta x)^{4},\qquad n=0,1,\dots,n_{\Delta t}-1 $$

Repeatedly applying this inductively,

$$ \eta^{n+1} \le \eta^{n} + c (\Delta x)^{4} \le \eta^{n-1} + 2c (\Delta x)^{4} \le \eta^{n-2} + 3 c (\Delta x)^{4} \le \cdots $$ $$ \implies \eta^{n} \le \eta^{0} + n c (\Delta x)^{4},\qquad n=0,1,\dots,n_{\Delta t} $$

Since the initial data are given exactly, $\eta^{0} = 0$, and since $n(\Delta x)^{2} = n\Delta t / \mu \le t^{\ast} / \mu$ for every $n \le n_{\Delta t}$,

$$ \eta^{n} \le \dfrac{ct^{\ast}}{\mu}(\Delta x)^{2},\qquad n=0,1,\dots,n_{\Delta t} $$

Therefore, for all $n$,

$$ \eta^{n} \to 0\quad \text{ as } \Delta x \to 0 $$

Combining this with $(4)$ completes the proof.

Example

This can be implemented in Julia as follows.

1 Dimension

Now let’s consider the following one-dimensional heat equation.

$$ \begin{array}{lc} \text{Domain:} & \\ & \Omega = [-1,1]\times [0,0.35] \\ \text{Heat equation:} & \\ & u_{t} - u_{xx} = 0 \\[0.5em] & u(x,0) = \sin (\pi x) \\[0.5em] & u(-1, t) = 0,\quad u(1, t) = 0 \\[0.5em] \text{Exact solution:}& \\ & u(x,t) = \sin(\pi x) e^{-\pi^2 t} \end{array} $$
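
Indeed, the given exact solution satisfies the equation together with the initial and boundary conditions:

$$ u_{t} = -\pi^{2} \sin(\pi x) e^{-\pi^{2} t} = u_{xx},\qquad u(x,0) = \sin(\pi x),\qquad u(\pm 1, t) = \sin(\pm \pi) e^{-\pi^{2} t} = 0 $$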

Here, if we set $\Delta x = \dfrac{1-(-1)}{99}$ and $\Delta t = \mu (\Delta x)^{2} = 0.5(\Delta x)^{2}$ and solve with $(2)$, we get the following result.

(Figure mu=0.500.png: FDM solution, exact solution, and error for μ = 0.5)

On the other hand, if we instead set $\Delta t = \mu (\Delta x)^{2} = 0.51(\Delta x)^{2}$, we can see that the solution is not obtained properly.

(Figure mu=0.510.png: FDM solution, exact solution, and error for μ = 0.51)

using Plots
using LinearAlgebra

function Solver_Heat_1d(Δt, T, x, f, g)
    # Δt: time step,    T: last time,   x: space vector
    # f: initial value,  g: boundary value
    
    t = Vector(0:Δt:T)
    n = length(t)
    ℓ = length(x)
    Δx = x[2]-x[1]
    μ = Δt/((Δx)^2)

    U = zeros(ℓ,n)
    U[1,:] .= g[1]      # boundary value at x = x[1]
    U[end,:] .= g[end]  # boundary value at x = x[end]

    U[:,1] = f          # initial value at t = 0

    for i ∈ 2:n
        U[2:end-1, i] = (1-2μ)*U[2:end-1, i-1] + μ*(U[1:end-2, i-1] + U[3:end, i-1])                     
    end
    return U
end

function ref_sol(x, t, exact_sol)
    xt =[kron(ones(length(t)),x);; kron(t,ones(length(x)))]
    ref_U = reshape(exact_sol(xt),length(x),length(t))
    return ref_U
end

function results(t, x, U, ref_U, μ)
    h1 = heatmap(t, x, U, colormap=:viridis, clim=(-1,1))
    # xlabel!("time [0, $(round(t[end], digits=2)) ]")
    # ylabel!("space [$(x[1]),  $(x[end])]")
    title!("FDM solution")

    h2 = heatmap(t, x, ref_U, colormap=:viridis, clim=(-1,1))
    title!("exact solution")

    h3 = heatmap(t, x, U - ref_U, colormap=:viridis)
    title!("error")

    h4 = heatmap(framestyle=:none)
    annotate!([0.5],[0.5], text("μ = $(round(μ, digits=3))", 30))

    plot(h1, h2, h3, h4, size=(1500,750))
end

for μ ∈ [0.5, 0.51]
    x = Vector(LinRange(-1., 1, 100))
    Δx = x[2]-x[1]
    initial_f = sin.(π*x)
    Δt = μ*((Δx)^2)
    T = 0.35
    bdry_g = [0.0, 0.0]
    t = Vector(0:Δt:T)

    exact_sol(xt) = sin.(π*xt[:,1]) .* exp.(- (π)^2 * xt[:,2])
    ref_u = ref_sol(x, t, exact_sol)
    U = Solver_Heat_1d(Δt, T, x, initial_f, bdry_g)
    results(t, x, U, ref_u, μ)
    savefig("mu=" * string(μ) * ".png") 
end
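
To quantify the error shown in the third panel, the numerical and exact solutions can be compared directly, for example in the stable case $\mu = 0.5$. This is a small sketch reusing the functions above; it is not part of the original script.

# maximum pointwise error for μ = 0.5; it should shrink like (Δx)^2 as the grid is refined
x  = Vector(LinRange(-1., 1, 100))
Δt = 0.5*(x[2]-x[1])^2
t  = Vector(0:Δt:0.35)
exact(xt) = sin.(π*xt[:,1]) .* exp.(-π^2 * xt[:,2])
U  = Solver_Heat_1d(Δt, 0.35, x, sin.(π*x), [0.0, 0.0])
println(maximum(abs.(U - ref_sol(x, t, exact))))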

2 Dimensions

$$ \begin{array}{lc} \text{Domain:} & \\ & \Omega = [-2,2] \times [-2,2] \times [0,1] \\ \text{Heat equation:} & \\ & u_{t} - u_{xx} - u_{yy} = 0 \\[0.5em] & u(x,y,0) = \frac{1}{4} e^{-x^{2} -y^{2}} \\[0.5em] & u(x, y, t) = 0\qquad \text{ on } \partial \left( [-2,2] \times [-2,2] \right) \end{array} $$
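
Applying the same central difference in each spatial direction gives the two-dimensional analogue of $(2)$: with $u_{\ell,m}^{n} \approx u(\ell\Delta x, m\Delta x, n\Delta t)$ and $\mu = \Delta t / (\Delta x)^{2}$,

$$ u_{\ell,m}^{n+1} = u_{\ell,m}^{n} + \mu \left( u_{\ell-1,m}^{n} + u_{\ell+1,m}^{n} + u_{\ell,m-1}^{n} + u_{\ell,m+1}^{n} - 4 u_{\ell,m}^{n} \right) $$

and the same convergence argument now requires $\mu \le \dfrac{1}{4}$, since $4\mu + \left| 1 - 4\mu \right| = 1$ exactly when $1 - 4\mu \ge 0$. This is why the code below uses $\Delta t = 0.49(\Delta x)^{2}/2$, i.e. $\mu = 0.245$.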

Solving the above equation with the same scheme, we obtain the following result.

(Figure 2d.gif: animation of the 2D FDM solution)

using Plots

cd = @__DIR__

function Solver_Heat_2d(Δt, T, x, f)
    # Δt: time step,    T: last time,   x: space vector
    # f: initial value
    
    t = Vector(0:Δt:T)
    n = length(t)
    ℓ = length(x)
    Δx = x[2]-x[1]
    μ = Δt/((Δx)^2)

    U = zeros(ℓ,ℓ,n)    # zero Dirichlet boundary values are preserved since only interior points are updated
    U[:,:,1] = f        # initial value at t = 0

    for i ∈ 2:n
        U[2:end-1, 2:end-1, i] = (1-4μ)*U[2:end-1, 2:end-1, i-1] + μ*(U[1:end-2, 2:end-1, i-1] + U[3:end, 2:end-1, i-1] + U[2:end-1, 1:end-2, i-1] + U[2:end-1, 3:end, i-1])                     
    end

    return U
end

x = range(-2., stop=2, length=100)'
y = range(-2., stop=2, length=100)

initial_f = @. (1/4)exp(-x^2) * exp(-y^2) 

Δx = x[2]-x[1]
Δt = 0.49*((Δx)^2)/2    # μ = Δt/(Δx)^2 = 0.245, within the 2D stability bound μ ≤ 1/4
T = 1.0

U = Solver_Heat_2d(Δt, T, x, initial_f)

anim = @animate for i ∈ range(1, stop=size(U)[3], step=100)
    surface(U[:,:,i], zlims=(0,0.5), camera=(30,30), colorbar=:none, showaxis = false, grid=false, size=(500,500))
end

gif(anim, cd*"/2d.gif", fps=10)

Environment

  • OS: Windows 11
  • Version: Julia v1.8.3, Plots v1.38.6

  1. A. Iserles, A First Course in the Numerical Analysis of Differential Equations (2nd Ed., 2009), pp. 349-353 ↩︎