

Chebyshev Nodes

Definition


On the interval $[-1,1]$, the points $\displaystyle x_{k} = \cos \left( {{2k-1} \over {2n}} \pi \right)$, $k=1, \cdots , n$, are called Chebyshev nodes.
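
For concreteness, these nodes are easy to compute numerically. Below is a minimal sketch in Python, assuming only numpy; the function name `chebyshev_nodes` is just for illustration.

```python
import numpy as np

def chebyshev_nodes(n: int) -> np.ndarray:
    """Chebyshev nodes x_k = cos((2k - 1) / (2n) * pi), k = 1, ..., n, on [-1, 1]."""
    k = np.arange(1, n + 1)
    return np.cos((2 * k - 1) / (2 * n) * np.pi)

print(chebyshev_nodes(5))
# approximately [ 0.951  0.588  0.    -0.588 -0.951]
```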

Explanation

Unlike the equidistant nodes typically used, Chebyshev nodes are the nodes obtained by dividing the arc of a semicircle into equal parts and projecting those division points onto the $x$-axis. The resulting points cluster more densely near the endpoints than in the middle, and as explained below, this is precisely the advantage of Chebyshev nodes.

The reason they are called Chebyshev nodes is that these points are the zeros of the Chebyshev polynomials.

Chebyshev polynomial: $T_{n} (x) = \cos \left( n \cos^{-1} x \right)$ is called the Chebyshev polynomial of the first kind.

  • [1]: The roots of $\displaystyle T_{n} (x)$ are $\displaystyle x_{k} = \cos \left( {{2k-1} \over {2n}} \pi \right)$, $k=1, \cdots , n$
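
A quick numerical check of [1]: the sketch below (assuming numpy; `chebval` evaluates a Chebyshev series) evaluates $T_{n}$ at the nodes both through the closed form $\cos \left( n \cos^{-1} x \right)$ and as a Chebyshev series, and both come out to zero up to rounding.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

n = 7
k = np.arange(1, n + 1)
nodes = np.cos((2 * k - 1) / (2 * n) * np.pi)      # Chebyshev nodes for this n

# T_n evaluated two ways: the closed form cos(n arccos x), and as a
# Chebyshev series whose only nonzero coefficient is the one multiplying T_n.
closed_form = np.cos(n * np.arccos(nodes))
series_form = chebval(nodes, [0] * n + [1])

print(np.max(np.abs(closed_form)))   # ~1e-15: the nodes are zeros of T_n
print(np.max(np.abs(series_form)))   # same, up to rounding
```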

As with the Chebyshev polynomials, Chebyshev nodes possess rich mathematical properties that are especially valuable for direct application in numerical analysis. Whether for interpolation or for solving differential equations, one might think that such unevenly spaced nodes would be worse than equidistant ones. However, having more information near the endpoints can lead to better results. Equidistant nodes may be easier to handle mathematically, but they can also be seen as collecting too much unimportant information.

Let’s look at interpolation as a brief example.

Polynomial Interpolation:

  • [4] The error from the actual function: for an $(n+1)$-times differentiable $f : \mathbb{R} \to \mathbb{R}$ and some $\xi \in \mathscr{H} \left\{ x_{0} , \cdots , x_{n} \right\}$, the polynomial interpolation $p_{n}$ of $f$ satisfies, for $t \in \mathbb{R}$, $\displaystyle f(t) - p_{n} (t) = {{ (t - x_{0}) \cdots (t - x_{n}) } \over { (n+1)! }} f^{(n+1)} ( \xi )$.

$$ f(x) - p_{n} (x) = {{ (x - x_{0}) \cdots (x - x_{n}) } \over { (n+1)! }} f^{(n+1)} ( \xi ) $$

In the error above, nothing can be done about $f^{(n+1)}$ and $(n+1)!$. The error therefore depends entirely on $x , x_{0} , \cdots , x_{n}$, but since $x$ is a variable, let’s set it aside for a moment. If changing $x_{0} , \cdots , x_{n}$ can reduce the error, it is better to collect data at good nodes from the start rather than blindly at equidistant nodes. From now on, let $x_{0} , \cdots , x_{n}$ be Chebyshev nodes. The Chebyshev polynomials satisfy the recurrence

$$ T_{0} (x) = 1 $$

$$ T_{1} (x) = x $$

$$ T_{n+1} (x) = 2x T_{n} (x) - T_{n-1} (x) $$

Therefore, the leading coefficient of $T_{n+1} (x)$ is $2^{n}$, and since its zeros are exactly the Chebyshev nodes $x_{0} , \cdots , x_{n}$,
$$ T_{n+1} (x) = 2^{n} ( x - x_{0} ) \cdots ( x - x_{n} ) $$
Meanwhile, since $| T_{n+1} (x) | \le 1$ on $[-1,1]$, moving $2^{n}$ to the other side and taking absolute values yields the following remarkable result.
$$ \begin{align*} & | ( x - x_{0} ) \cdots ( x - x_{n} ) | \le {{1} \over { 2^{n} }} \\ \implies& | f(x) - p_{n} (x) | \le {{ \left| f^{(n+1)} ( \xi ) \right| } \over { 2^{n} (n+1)! }} \end{align*} $$
Being able to bound this by $\displaystyle {{1} \over {2^{n}}}$ is impressive in itself, but in fact it is also known that no other choice of nodes can make this bound any smaller. That is, Chebyshev nodes are not merely good at reducing the error; they are known to be the optimal nodes.
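
To see this bound in action, here is a minimal numerical sketch (plain numpy; the degree $n$ and the grid resolution are arbitrary choices for illustration) comparing $\max_{x \in [-1,1]} | ( x - x_{0} ) \cdots ( x - x_{n} ) |$ for Chebyshev nodes against equidistant nodes.

```python
import numpy as np

n = 10                                    # degree; n + 1 nodes
xs = np.linspace(-1, 1, 2001)             # dense grid for estimating the max on [-1, 1]

def node_poly_max(nodes: np.ndarray) -> float:
    """Approximate max over [-1, 1] of |(x - x_0)...(x - x_n)|."""
    return float(np.max(np.abs(np.prod(xs[:, None] - nodes[None, :], axis=1))))

k = np.arange(1, n + 2)
cheb = np.cos((2 * k - 1) / (2 * (n + 1)) * np.pi)    # n + 1 Chebyshev nodes (zeros of T_{n+1})
equi = np.linspace(-1, 1, n + 1)                      # n + 1 equidistant nodes

print(node_poly_max(cheb), 1 / 2**n)   # both about 9.8e-4, matching the 1/2^n bound
print(node_poly_max(equi))             # several times larger for the same degree
```

For $n = 10$ the Chebyshev value matches $1/2^{10} \approx 9.8 \times 10^{-4}$, while the equidistant value comes out several times larger, which is exactly the gap the argument above predicts.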