Chebyshev Expansion

Buildup 1

To understand Chebyshev expansions, it is essential first to grasp how they arise. Instead of solving a minimax problem, consider solving a least squares problem. $$ M_{n} (f) := \inf_{\deg(r) \le n } \left\| f - r \right\|_{2} $$ For $f : [a,b] \to \mathbb{R}$, the least squares problem above asks for a polynomial $r_{n}^{ \ast }$ of degree $n$ or less that attains $M_{n} (f)$.

Fortunately, the space of polynomials of degree at most $n$ is a finite-dimensional inner product space with respect to the inner product $\displaystyle \left< f , g \right> : = \int_{a}^{b} f(x) g(x) w(x) dx$ defined with the weight $w(x)$, and the Gram-Schmidt process guarantees the existence of an orthonormal basis $\left\{ \varphi_{k} \right\}_{k=0}^{n}$. Writing $$ r(x) = b_{0} \varphi_{0} (x) + b_{1} \varphi_{1} (x) + \cdots + b_{n} \varphi_{n} (x) $$ turns the problem into finding the coefficients of this linear combination. In other words, the least squares problem is to find the $r = r_{n}^{ \ast }$ that minimizes $$ \left\| f - r \right\|_{2}^{2} = \int_{a}^{b} \left[ f(x) - \sum_{j=0}^{n} b_{j} \varphi_{j} (x) \right]^{2} w(x) dx $$

$$ \begin{align*} 0 \le & \left\| f - r \right\|_{2}^{2} \\ =& \left< f - \sum_{j=0}^{n} b_{j} \varphi_{j} , f - \sum_{j=0}^{n} b_{j} \varphi_{j} \right> \\ =& \left< f , f \right> - 2 \sum_{j=0}^{n} b_{j} \left< f , \varphi_{j} \right> + \sum_{i} \sum_{j} b_{i} b_{j} \left< \varphi_{i} , \varphi_{j} \right> \\ =& \left\| f \right\|_{2}^{2} - 2 \sum_{j=0}^{n} b_{j} \left< f , \varphi_{j} \right> + \sum_{j=0}^{n} b_{j}^2 \\ =& \left\| f \right\|_{2}^{2} - \sum_{j=0}^{n} \left< f , \varphi_{j} \right>^2 + \sum_{j=0}^{n} \left[ \left< f , \varphi_{j} \right> - b_{j} \right]^2 \end{align*} $$

Here $\left\| f \right\|_{2}^{2}$ and $\displaystyle \sum_{j=0}^{n} \left< f , \varphi_{j} \right>^2$ are fixed once $f$ and $\left\{ \varphi_{k} \right\}_{k=0}^{n}$ are given, so it suffices to minimize the last term $\displaystyle \sum_{j=0}^{n} \left[ \left< f , \varphi_{j} \right> - b_{j} \right]^2$, which is achieved simply by taking $b_{j} = \left< f , \varphi_{j} \right>$. This leads to the following least squares solution. $$ r_{n}^{ \ast } (x) = \sum_{j=0}^{n} \left< f , \varphi_{j} \right> \varphi_{j} (x) $$
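
As a concrete illustration of this projection, here is a minimal numerical sketch, assuming the weight $w(x) = 1$ on $[-1, 1]$ (so that the orthonormal basis consists of normalized Legendre polynomials); the function $f = e^{x}$ and the degree $n = 4$ are arbitrary choices for the example.

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.integrate import quad

f = np.exp  # function to approximate (illustrative choice)
n = 4       # degree of the approximation (illustrative choice)

def phi(j, x):
    """Orthonormal basis for w(x) = 1 on [-1, 1]: normalized Legendre polynomials."""
    P_j = legendre.Legendre.basis(j)           # Legendre polynomial P_j
    return np.sqrt((2 * j + 1) / 2) * P_j(x)   # since ||P_j||_2^2 = 2 / (2j + 1)

# Least squares coefficients b_j = <f, phi_j> = int_{-1}^{1} f(x) phi_j(x) dx
b = [quad(lambda x, j=j: f(x) * phi(j, x), -1, 1)[0] for j in range(n + 1)]

def r_star(x):
    """Least squares approximation r_n*(x) = sum_j <f, phi_j> phi_j(x)."""
    return sum(b[j] * phi(j, x) for j in range(n + 1))

x = np.linspace(-1, 1, 201)
print(np.max(np.abs(f(x) - r_star(x))))  # the residual is already small at degree 4
```

Let's now talk about Chebyshev polynomials in earnest.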

Definition 2

Chebyshev Polynomials: $T_{n} (x) := \cos \left( n \cos^{-1} x \right)$ is called the Chebyshev polynomial of the first kind.

  • Inner Product of Functions: With the inner product $\displaystyle \left<f, g\right>:=\int_a^b f(x) g(x) w(x) dx$ taken over $[-1, 1]$ with weight $\displaystyle w(x) := {{1} \over { \sqrt{1 - x^2} }}$, the set $\left\{ T_{0} , T_{1}, T_{2}, \cdots \right\}$ is orthogonal.

To normalize the orthogonal basis $\left\{ T_{k} \right\}_{k = 0}^{n}$, define the orthonormal basis $\left\{ \varphi_{k} \right\}_{k=0}^{n}$ as follows. $$ \varphi_{n} (x) := \begin{cases} \displaystyle \sqrt{ {{1} \over { \pi }} } & , n=0 \\ \displaystyle \sqrt{ {{2} \over { \pi }} } T_{n} (x) & , n \in \mathbb{N} \end{cases} $$ The following series is called the Chebyshev least squares approximation, or simply the Chebyshev expansion. $$ C_{n} (x) := \sum_{j=0}^{n} \left< f , \varphi_{j} \right> \varphi_{j} (x) $$ It is often more convenient to work with the $T_{k}$ directly, using the coefficients $$ c_{j} := {{2} \over { \pi }} \int_{-1}^{1} {{f(x) T_{j} (x) } \over {\sqrt{1 - x^2 }}} dx $$

$$ C_{n} (x) = {{c_{0}} \over {2}} + \sum_{j=1}^{n} c_{j} T_{j} (x) $$
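
In practice the coefficients $c_{j}$ can be computed numerically. Below is a minimal sketch of this, assuming an illustrative smooth function $f = e^{x}$ and degree $n = 6$; substituting $x = \cos \theta$ turns the coefficient integral into $\displaystyle c_{j} = {{2} \over {\pi}} \int_{0}^{\pi} f(\cos \theta) \cos (j \theta) \, d\theta$, which avoids the endpoint singularity of the weight.

```python
import numpy as np
from scipy.integrate import quad

f = np.exp  # illustrative choice of function
n = 6       # illustrative degree

def c(j):
    # c_j = (2/pi) * int_0^pi f(cos(theta)) * cos(j*theta) d(theta)
    return (2 / np.pi) * quad(lambda t: f(np.cos(t)) * np.cos(j * t), 0, np.pi)[0]

coef = [c(j) for j in range(n + 1)]

def C_n(x):
    # T_j(x) = cos(j * arccos(x)); the constant term enters with weight c_0 / 2
    theta = np.arccos(x)
    return coef[0] / 2 + sum(coef[j] * np.cos(j * theta) for j in range(1, n + 1))

x = np.linspace(-1, 1, 201)
print(np.max(np.abs(f(x) - C_n(x))))  # uniform error of the degree-6 Chebyshev expansion
```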

Explanation

The reason Chebyshev expansions are useful is that they satisfy, to a reasonable degree, the conditions for a good basis in function approximation:

  • (i): If $f^{(r)}$ exists and is continuous on $[-1,1]$, then there is a constant $B$, depending on $f$ and $r$, such that the Chebyshev least squares approximation $C_{n}$ satisfies $\displaystyle \left\| f - C_{n} \right\|_{\infty} \le {{B \ln n} \over {n^{r}}}$
  • (ii): $C_{n+1} (x) = C_{n} (x) + c_{n+1} T_{n+1} (x)$ (see the numerical sketch after this list)
  • (iii): $\displaystyle c_{n+1} \sim {{1} \over {2^{n}}}$
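
The sketch below is a rough numerical illustration of (ii) and (iii) under the same illustrative assumptions as before ($f = e^{x}$ and the range of degrees are arbitrary choices): the expansion is extended one term at a time, and both the coefficients $c_{j}$ and the uniform error shrink rapidly.

```python
import numpy as np
from scipy.integrate import quad

f = np.exp  # illustrative choice of function

def c(j):
    # c_j = (2/pi) * int_0^pi f(cos(theta)) * cos(j*theta) d(theta)
    return (2 / np.pi) * quad(lambda t: f(np.cos(t)) * np.cos(j * t), 0, np.pi)[0]

x = np.linspace(-1, 1, 401)
theta = np.arccos(x)

C = c(0) / 2 * np.ones_like(x)                     # C_0(x)
for j in range(1, 9):
    c_j = c(j)
    C = C + c_j * np.cos(j * theta)                # (ii): C_j(x) = C_{j-1}(x) + c_j T_j(x)
    print(j, abs(c_j), np.max(np.abs(f(x) - C)))   # (iii): |c_j| and the error both drop fast
```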

Conditions for a Good Basis in Function Approximation:

  • (i): $\displaystyle p_{n} (x)$ must be representable as a linear combination of $\displaystyle \left\{ \varphi_{k} (x) \in \mathbb{R} [ x ] : k \ge 0 \right\}$, and there must exist $n \in \mathbb{N}$ making $\displaystyle \max_{x \in [a,b]} | f(x) - p_{n} (x) |$ as small as desired.
  • (ii): It is better if $p_{n} (x)$ can be computed in the form $\displaystyle p_{n} (x) = \sum_{k=0}^{n} a_{k} \varphi_{k} (x)$; in other words, to improve accuracy it should suffice to add a new term rather than recompute everything from scratch.
  • (iii): It is favorable if $| a_{n} |$ converges quickly to $0$; in other words, the fewer terms needed to reach a given accuracy, the better.
  • (i): This is one of the most important reasons to study the Chebyshev expansion. By itself, the least squares method seeks an approximation that minimizes an averaged (squared) error rather than one that keeps the error uniformly small. Because the error is measured by sums or integrals, some parts of the domain may fit well while others do not, and if such discrepancies could simply be ignored there would be little reason to approximate functions in the first place. The true objective of function approximation is to minimize the uniform error, that is, to solve the minimax problem $\displaystyle \rho_{n} (f) = \inf_{\deg(q) \le n } \left\| f - q \right\|_{\infty}$. Since $\ln n$ grows far more slowly than $n^{r}$, the inequality $\displaystyle \left\| f - C_{n} \right\|_{\infty} \le {{B \ln n} \over {n^{r}}}$ says, perhaps surprisingly, that the Chebyshev least squares approximation is quite useful even for the minimax problem.
  • (ii): $T_{n}$ is determined by a recurrence, so computing it is not particularly difficult, and the $c_{j}$ are also simple to compute. There is no need to compute everything at once to reach the desired accuracy, which is always welcome from a practical standpoint: when less precision suffices, computation is saved, and when more accuracy is needed, none of the previous work has to be discarded.
  • (iii): Let us write the polynomial $p_{n+1}$ approximating $f$ in two ways: $$ \begin{align*} p_{n+1} (x) =& a_{0} + a_{1} x + \cdots + a_{n} x^{n} + a_{n+1} x^{n+1} \\ p_{n+1} (x) =& c_{0} + c_{1} T_{1} (x) + \cdots + c_{n} T_{n} (x) + c_{n+1} T_{n+1} (x) \end{align*} $$ The coefficient $a_{n+1}$ of the highest-order term of $p_{n+1} (x)$ must equal the highest-order coefficient of $c_{n+1} T_{n+1} (x)$. Since $$ \begin{align*} T_{0} (x) =& 1 \\ T_{1} (x) =& x \\ T_{n+1} (x) =& 2x T_{n} (x) - T_{n-1} (x) \end{align*} $$ the coefficient of the highest-order term of $T_{n+1} (x)$ is $2^{n}$. In formula form, $$ a_{n+1} = c_{n+1} 2^{n} \implies c_{n+1} = {{a_{n+1}} \over {2^{n}}} $$ Meanwhile, since $| T_{m} (x) | = \left| \cos \left( m \cos^{-1} x \right) \right| \le 1$, the term $\displaystyle \left| c_{m+1} T_{m+1} (x) \right| = {{1} \over {2^{m}}} \left| a_{m+1} T_{m+1} (x) \right|$ decreases remarkably fast. Qualitatively, this says that the Chebyshev expansion starts from a good basis: while a Taylor expansion adds the plain terms $1 , x , x^{2} , x^{3} , \cdots$ one after another, the Chebyshev expansion speeds things up by picking first the functions that contribute most to the approximation, adding a bit here and subtracting a bit there. (A small numerical check of the leading-coefficient argument follows this list.)
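
As mentioned, here is a small sketch checking the leading-coefficient argument above: it builds the $T_{n}$ from the recurrence $T_{n+1} = 2x T_{n} - T_{n-1}$ as coefficient arrays and verifies that the leading coefficient of $T_{n+1}$ is $2^{n}$. The range of degrees checked is an arbitrary choice.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Coefficient arrays in increasing degree: T_0 = 1, T_1 = x
T = [np.array([1.0]), np.array([0.0, 1.0])]

for n in range(1, 8):
    # T_{n+1} = 2x * T_n - T_{n-1}
    T.append(P.polysub(2 * P.polymulx(T[n]), T[n - 1]))

for n in range(8):
    print(n, T[n + 1][-1], 2.0 ** n)  # leading coefficient of T_{n+1} equals 2^n
```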

  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p. 216. ↩︎

  2. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p. 219. ↩︎