Divided Differences in Numerical Analysis

Definition

A Divided Difference of a function $f : \mathbb{R} \to \mathbb{R}$ at distinct points $x_{0} , \cdots , x_{n}$ is defined as follows:

$$ \begin{align*} f[x_{0}] :=& f( x_{0} ) \\ f [ x_{0} , x_{1} ] :=& {{ f ( x_{1} ) - f ( x_{0} ) } \over { x_{1} - x_{0} }} \\ f [ x_{0} , x_{1} , x_{2} ] :=& {{ f [ x_{1} , x_{2} ] - f [ x_{0} , x_{1} ] } \over { x_{2} - x_{0} }} \\ f [ x_{0} , \cdots , x_{n} ] :=& {{ f [ x_{1} , \cdots , x_{n} ] - f [ x_{0} , \cdots , x_{n-1} ] } \over { x_{n} - x_{0} }} \end{align*} $$
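
As a sanity check, the recursive definition above can be transcribed directly into code. The following is a minimal Python sketch (the function name `divided_difference` is ours, not from the source):

```python
def divided_difference(f, xs):
    """f[x_0, ..., x_n] for distinct nodes xs, by the recursive definition."""
    if len(xs) == 1:
        return f(xs[0])                     # f[x_0] = f(x_0)
    # (f[x_1, ..., x_n] - f[x_0, ..., x_{n-1}]) / (x_n - x_0)
    return (divided_difference(f, xs[1:]) -
            divided_difference(f, xs[:-1])) / (xs[-1] - xs[0])

f = lambda x: x**2 + 1
print(divided_difference(f, [0, 5]))      # 5.0
print(divided_difference(f, [2, 3, -1]))  # 1.0
```

Note that this naive recursion recomputes lower-order differences over and over; it is for illustrating the definition, not for efficiency.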

Theorem

  • [1]: $f [ x_{0} , \cdots , x_{n} ]$ is always the same regardless of the order of $x_{0} , \cdots , x_{n}$.
  • [2]: $$f [ x_{0} , \cdots , x_{n} ] = \sum_{i=0}^{n} {{ f( x_{i} ) } \over { \prod_{j \in \left\{ 0 , \cdots , n \right\} \setminus \left\{ i \right\} } ( x_{i} - x_{j} ) }}$$
  • [3]: If $f$ is differentiable, there exists $\xi \in \mathscr{H} \left\{ x_{0} , x_{1} \right\}$ that satisfies the following.$$f ' ( \xi ) = f [ x_{0} , x_{1} ]$$
  • [4]: If $f$ is $n$ times differentiable, there exists $\xi \in \mathscr{H} \left\{ x_{0} , \cdots , x_{n} \right\}$ that satisfies the following.$${{1} \over {n!}} f^{(n)} ( \xi ) = f [ x_{0} , \cdots , x_{n} ]$$
  • [3]’: $$f [ x_{i} , x_{i} ] = f ' ( x_{i} )$$
  • [4]’: $$f [ \underbrace{ x_{i} , \cdots , x_{i} }_{ n+1 } ] = {{1} \over {n!}} f^{(n)} ( x_{i} )$$

  • $\mathscr{H} \left\{ a,b,c, \cdots \right\}$ denotes the smallest interval that includes $a,b,c, \cdots$.

Explanation

Divided differences save a great deal of space when expressing the various results of numerical analysis.

Theorem [2] is especially useful in actual calculations. For instance, $$ f [ x_{0} , x_{1} , x_{2} ] = {{ f( x_{0} ) } \over { ( x_{0} - x_{1} ) ( x_{0} - x_{2} ) }} + {{ f( x_{1} ) } \over { ( x_{1} - x_{2} ) ( x_{1} - x_{0} ) }} + {{ f( x_{2} ) } \over { ( x_{2} - x_{0} ) ( x_{2} - x_{1} ) }} $$ so the recursion of the original definition is replaced by a single summation.
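
If it helps, the equivalence of the sum in Theorem [2] with the recursive definition can be checked numerically. A small Python sketch (helper names `dd_recursive` and `dd_sum` are ours):

```python
from math import prod, sin

def dd_recursive(f, xs):
    # the recursive definition of the divided difference
    if len(xs) == 1:
        return f(xs[0])
    return (dd_recursive(f, xs[1:]) - dd_recursive(f, xs[:-1])) / (xs[-1] - xs[0])

def dd_sum(f, xs):
    # Theorem [2]: sum over i of f(x_i) / prod_{j != i} (x_i - x_j)
    return sum(f(xi) / prod(xi - xj for j, xj in enumerate(xs) if j != i)
               for i, xi in enumerate(xs))

xs = [0.1, 0.4, 0.9, 1.3]
print(abs(dd_recursive(sin, xs) - dd_sum(sin, xs)) < 1e-10)  # True
```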

[3] is essentially the Mean Value Theorem, and if you have ever thought of differentiation in terms of an ‘instantaneous rate of change’, it should be clear why divided differences can stand in for derivatives and why they are called Divided Differences. Theorem [4] generalizes [3] to higher orders. [3]’ and [4]’ can be understood as the limits obtained when the arguments of the divided difference are brought infinitely close together, so that the divided difference becomes a derivative of order $n$. Such a limiting viewpoint overcomes the restriction to distinct points in the definition and enriches the theory of numerical analysis.
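
For instance, with $f(x) = x^{2} + 1$ the point $\xi$ promised by [3] can be found explicitly: $f[x_{0}, x_{1}] = x_{0} + x_{1}$ while $f ' (\xi) = 2 \xi$, so $\xi$ is simply the midpoint of $x_{0}$ and $x_{1}$. A Python sketch:

```python
f = lambda x: x**2 + 1
x0, x1 = 0.0, 5.0
dd = (f(x1) - f(x0)) / (x1 - x0)  # f[x0, x1] = x0 + x1 = 5.0
xi = dd / 2                       # solve f'(xi) = 2*xi = dd
print(xi, x0 <= xi <= x1)         # 2.5 True: xi lies in H{x0, x1}
```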

Example

Let’s compute a few simple divided differences of $f(x) := x^{2} + 1$.

For one point, $$ f[3] = f(3) = 10 $$

For two points, $$ f[0,5] = {{ f(0) - f(5) } \over { 0 - 5}} = {{1 - 26 } \over { -5 }} = 5 $$

For three points, $$ \begin{align*} f[2,3,-1] =& {{ f[3,-1] - f[2,3] } \over { -1 - 2 }} \\ =& {{ \displaystyle {{ f(-1) - f(3) } \over { -1 - 3 }} - {{ f(3) - f(2) } \over { 3 - 2 }} } \over { -3 }} \\ =& {{ 2 - 5 } \over { -3 }} \\ =& 1 \end{align*} $$ Divided differences can of course be computed for any finite number of points, but most applications only need orders up to $n=2$; higher orders are rarely used in practice.

Implementation

Here is an implementation of divided differences in R, which takes a function $f$ and a vector $X = ( x_{0} , \cdots , x_{n} )$ and returns $f [ x_{0} , \cdots , x_{n} ]$ via the formula of Theorem [2].

f <- function(x) {x^2 + 1}

# f[x_0, ..., x_n] computed with the sum formula of Theorem [2]
dd <- function(f, X) {
  if (length(X) == 1) {return(f(X))}  # f[x_0] = f(x_0)
  temp <- numeric(length(X))
  for (i in 1:length(X)) {temp[i] <- prod(X[i] - X[-i])}  # prod over j != i of (x_i - x_j)
  return(sum(f(X) / temp))
}
dd(f, c(3))
dd(f, c(0, 5))
dd(f, c(2, 3, -1))

Running the above code produces the following output, which exactly matches the hand calculations for $f(x) = x^{2} + 1$.

(Screenshot of the R console output: 10, 5, 1.)
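
For larger node sets, Newton's triangular scheme computes all the coefficients $f[x_{0}], f[x_{0},x_{1}], \cdots, f[x_{0},\cdots,x_{n}]$ in one $O(n^{2})$ pass, reusing lower-order differences. A Python sketch of that standard scheme (the function name is ours):

```python
def newton_coefficients(f, xs):
    """Return [f[x_0], f[x_0,x_1], ..., f[x_0,...,x_n]] via the triangular table."""
    n = len(xs)
    c = [f(x) for x in xs]                 # zeroth order: f[x_i]
    for k in range(1, n):                  # order-k differences
        for i in range(n - 1, k - 1, -1):  # overwrite in place, right to left
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - k])
    return c

f = lambda x: x**2 + 1
print(newton_coefficients(f, [2, 3, -1]))  # [5, 5.0, 1.0]
```

The last entry, $f[2,3,-1] = 1$, agrees with the hand calculation above.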

Proof

[2]

Strategy: Compare the leading coefficients of Lagrange’s formula and Newton’s divided difference formula.


$$ \Psi_{n} (x) := (x-x_{0}) \cdots (x - x_{n}) $$ Differentiating $\Psi_{n}$ with respect to $x$ and substituting $x = x_{j}$, every term of the product rule except one contains the factor $(x - x_{j})$ and vanishes, leaving $$ \Psi_{n} ' (x_{j}) = (x_{j}-x_{0}) \cdots (x_{j} - x_{j-1})(x_{j} - x_{j+1}) \cdots (x_{j} - x_{n}) $$

$$ \implies {{ \Psi_{n} (x) } \over { (x - x_{j}) \Psi_{n} ' (x_{j}) }} \equiv {{ (x - x_{0} ) \cdots (x - x_{j-1} ) (x - x_{j+1} ) \cdots (x - x_{n} ) } \over { (x_{j} - x_{0} ) \cdots (x_{j} - x_{j-1} ) (x_{j} - x_{j+1} ) \cdots (x_{j} - x_{n} ) }} = l_{j} (x) $$ According to Lagrange’s formula, the polynomial interpolation of $f$ is $$ p_{n} (x) = \sum_{j=0}^{n} {{ \Psi_{n} (x) } \over { (x - x_{j}) \Psi_{n} ' (x_{j}) }} \cdot f( x_{j}) $$ Therefore the coefficient of the highest-order term of $p_{n}$ is $\displaystyle \sum_{j=0}^{n} {{ f( x_{j}) } \over { \Psi_{n} ' (x_{j}) }}$. Meanwhile, in Newton’s divided difference formula the coefficient of the highest-order term of $p_{n}$ is $f [x_{0} , \cdots , x_{n} ]$, so $$ f [ x_{0} , \cdots , x_{n} ] = \sum_{i=0}^{n} {{ f( x_{i} ) } \over { \displaystyle \prod_{j \in \left\{ 0 , \cdots , n \right\} \setminus \left\{ i \right\} } ( x_{i} - x_{j} ) }} $$
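
The key fact used here, that the leading coefficient of the interpolating polynomial equals $f[x_{0}, \cdots, x_{n}]$, is easy to check on a polynomial: for a cubic $f$, the degree-3 interpolant at any four points is $f$ itself, so the divided difference of order 3 must be the coefficient of $x^{3}$. A Python sketch:

```python
from math import prod

def dd(f, xs):  # Theorem [2] form of the divided difference
    return sum(f(xi) / prod(xi - xj for j, xj in enumerate(xs) if j != i)
               for i, xi in enumerate(xs))

# For a cubic, p_3 = f itself, so the leading coefficient of the
# interpolating polynomial is just the coefficient of x^3, here 2.
f = lambda x: 2*x**3 - x + 4
print(abs(dd(f, [0.0, 1.0, 2.0, 4.0]) - 2.0) < 1e-9)  # True
```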

[1]

By Theorem [2], changing the order within $[x_{0} , \cdots , x_{n}]$ is just changing the order of addition when calculating $\displaystyle \sum_{i=0}^{n} {{ f( x_{i} ) } \over { \displaystyle \prod_{j \in \left\{ 0 , \cdots , n \right\} \setminus \left\{ i \right\} } ( x_{i} - x_{j} ) }}$, so it’s always the same.
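
This symmetry is also easy to confirm numerically over every ordering of the nodes. A small Python sketch:

```python
from itertools import permutations
from math import prod

def dd(f, xs):  # Theorem [2] form, manifestly symmetric in the nodes
    return sum(f(xi) / prod(xi - xj for j, xj in enumerate(xs) if j != i)
               for i, xi in enumerate(xs))

f = lambda x: x**3 - x
values = {round(dd(f, list(p)), 9) for p in permutations([1.0, 2.0, 4.0, 7.0])}
print(values)  # a single element: every ordering yields the same value
```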

[4]

Strategy: Determine the leading coefficient of Newton’s divided difference formula using the error formula for polynomial interpolation.


If we write $p_{n}(x) = p_{n-1}(x) + C(x)$, then $\deg C = n$, and since $p_{n}(x_{i}) = f(x_{i}) = p_{n-1}(x_{i})$ for $i= 0, \cdots, (n-1)$, every node $x_{0} , \cdots , x_{n-1}$ is a root of $C$, so for some constant $a_{n} \ne 0$ we have $C (x) = a_{n} (x - x_{0}) \cdots (x - x_{n-1})$. Evaluating $C$ at $x=x_{n}$ gives $$ C(x_{n} ) = p_{n} (x_{n}) - p_{n-1}(x_{n}) $$

$$ C (x_{n}) = a_{n} (x_{n} - x_{0}) \cdots (x_{n} - x_{n-1}) $$ Since $p_{n}$ is the polynomial interpolation of $f$, we have $p_{n} (x_{n}) = f(x_{n})$, so $$ a_{n} (x_{n} - x_{0}) \cdots (x_{n} - x_{n-1}) = f (x_{n}) - p_{n-1}(x_{n}) $$

$$ a_{n} = {{ f (x_{n}) - p_{n-1}(x_{n}) } \over { (x_{n} - x_{0}) \cdots (x_{n} - x_{n-1}) }} $$

Error between polynomial interpolation and actual function: Let $f : \mathbb{R} \to \mathbb{R}$ be $n$ times differentiable and let $p_{n-1}$ be its polynomial interpolation at $x_{0} , \cdots , x_{n-1}$. Then for any $x_{n} \in \mathbb{R}$ there exists some $\xi \in \mathscr{H} \left\{ x_{0} , \cdots , x_{n} \right\}$ such that $$ f(x_{n}) - p_{n-1} (x_{n}) = {{ (x_{n} - x_{0}) \cdots (x_{n} - x_{n-1}) } \over { n! }} f^{(n)} ( \xi ) $$

By this error formula, $$ a_{n} = {{ f^{(n)} ( \xi ) } \over { n! }} $$ Meanwhile, since the coefficient of the highest-order term of $p_{n}$ in Newton’s divided difference formula is $a_{n} = f [x_{0} , \cdots , x_{n} ]$, $$ {{1} \over {n!}} f^{(n)} ( \xi ) = f [ x_{0} , \cdots , x_{n} ] $$
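
As a concrete check of [4], take $f = \exp$ and $n = 2$: since $f^{(2)} = \exp$, the point $\xi$ can be solved for explicitly and verified to lie in the hull of the nodes. A Python sketch:

```python
import math

def dd(f, xs):  # Theorem [2] form of the divided difference
    return sum(f(xi) / math.prod(xi - xj for j, xj in enumerate(xs) if j != i)
               for i, xi in enumerate(xs))

nodes = [0.0, 0.5, 1.0]
d = dd(math.exp, nodes)       # f[x0, x1, x2]
xi = math.log(2 * d)          # solve exp(xi)/2! = d
print(min(nodes) <= xi <= max(nodes))  # True: xi lies in H{0, 0.5, 1}
```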

[3]

It follows immediately from Theorem [4] with $n = 1$.