Function Approximation in Numerical Analysis

Buildup

Although computers are overwhelmingly faster at numerical calculation than humans, it isn't because they understand transcendental functions or irrational numbers. For instance, when asked to compute $\displaystyle \sin {{ \pi } \over {6}} = {{1} \over { 2 }}$, a computer does not draw a right triangle and take the ratio of the height to the hypotenuse as in the geometric definition of the trigonometric functions; instead, it expands the function into a series of polynomials and evaluates it through basic arithmetic.

From the computer's perspective, it doesn't particularly matter what $\displaystyle \sin x \approx x - {{x^3} \over {3!}} + {{x^5} \over {5!}}$ truly means; it simply evaluates the available polynomial, since it can perform arithmetic operations far faster than a human. And the existence of such polynomial approximations is always guaranteed by the following theorem.
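To make this concrete, here is a minimal sketch in Python of evaluating such a truncated series with nothing but arithmetic (the function name and the default number of terms are illustrative choices, not anything canonical):

```python
import math

def sin_taylor(x, terms=8):
    """Approximate sin(x) by the first `terms` terms of its series,
    using only addition, multiplication, and division."""
    total = 0.0
    term = x                # first term of the series: x^1 / 1!
    for k in range(terms):
        total += term
        # next odd-degree term: multiply by -x^2 / ((2k+2)(2k+3))
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

print(sin_taylor(math.pi / 6))  # ≈ 0.5
print(math.sin(math.pi / 6))    # library value, for comparison
```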

Stone-Weierstrass Theorem: If $f$ is continuous on $[a,b]$, then for a given $\epsilon > 0$, there exists a polynomial function $p(x)$ satisfying $\displaystyle \max_{x \in [a,b]} | f(x) - p (x) | < \epsilon$.

However, still thinking 'from the computer's standpoint', it's clear that evaluating numbers like $\displaystyle {{\pi^{31} } \over {31!}}$ can be quite challenging for computers as well. For one thing, both the numerator and the denominator are very large, so merely storing them before the calculation even begins is a burden, and a naive evaluation has no notion of rearranging the order of operations to save work the way a human would. [ NOTE: In fact, such techniques are implemented in applied mathematics fields like cryptography through algorithms such as Exponentiation by Squaring; a sketch follows below. ] Moreover, 'from the human's standpoint', a Taylor series is only guaranteed to approximate well on a sufficiently small interval $(-h, h)$ around the expansion point, which means the error guarantee of Taylor's formula is essentially pointwise rather than uniform.
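For reference, a minimal sketch of exponentiation by squaring, the technique mentioned in the note above (the function name is illustrative):

```python
def power(base, exp):
    """Compute base**exp with O(log exp) multiplications by
    exponentiation by squaring, instead of exp - 1 naive ones."""
    result = 1
    while exp > 0:
        if exp & 1:          # lowest bit set: multiply this factor in
            result *= base
        base *= base         # square for the next binary digit
        exp >>= 1
    return result

print(power(3, 31) == 3 ** 31)  # True, with only ~log2(31) squarings
```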

Although Taylor expansion is theoretically one of the most powerful tools in the history of mathematics, the considerations above make it understandable that it may not be the best formula for numerical calculations1. In other words, if one wishes to calculate $f(x)$ as $$ f(x) \approx \sum_{k=0}^{n} a_{k} \varphi_{k} (x) $$, then $\displaystyle \left\{ \varphi_{n} (x) = {{x^{n}} \over {n!}} : n \ge 0 \right\}$ is not a good basis.
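To see why, recall the Lagrange remainder: truncating the sine series after the $x^{2n+1}$ term leaves an error of at most $\displaystyle {{ |x|^{2n+3} } \over { (2n+3)! }}$, since every derivative of $\sin$ is bounded by $1$. The bound collapses immediately near $0$ but stays enormous for larger $|x|$. A quick numerical check of this standard bound (a sketch, with illustrative sample points):

```python
import math

# Lagrange remainder bound for the sine series truncated after the
# x^(2n+1) term: |sin(x) - p_n(x)| <= |x|**(2n+3) / (2n+3)!
# Near 0 it is tiny from the start; far from 0 it stays huge until
# many terms have been taken.
for x in (0.5, 2.0, 10.0):
    bounds = [x ** (2 * n + 3) / math.factorial(2 * n + 3) for n in range(6)]
    print(f"x = {x:5}: " + "  ".join(f"{b:.1e}" for b in bounds))
```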

Gram-Schmidt Process: Every finite-dimensional inner product space has an orthonormal basis.

Fortunately, since the polynomials in $\mathbb{Q} [ x ]$ of degree at most $n$ form a finite-dimensional inner product space, the existence of an orthonormal basis is guaranteed.
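To see the process in action, here is a minimal sketch of Gram-Schmidt applied to the monomials $1, x, x^{2}, \dots$ under the inner product $\displaystyle \langle p, q \rangle = \int_{-1}^{1} p(x) q(x) dx$ (the helper names are illustrative; up to normalization, the output is the Legendre polynomials):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def inner(p, q):
    """<p, q> = integral of p(x) q(x) over [-1, 1], computed exactly
    from the coefficient arrays of the polynomials."""
    antideriv = P.polyint(P.polymul(p, q))
    return P.polyval(1.0, antideriv) - P.polyval(-1.0, antideriv)

def orthonormal_basis(max_degree):
    """Gram-Schmidt on the monomials x^0, x^1, ..., x^max_degree."""
    basis = []
    for n in range(max_degree + 1):
        v = np.zeros(n + 1)
        v[n] = 1.0                               # start from the monomial x^n
        for q in basis:                          # subtract projections onto earlier vectors
            v = P.polysub(v, inner(v, q) * q)
        basis.append(v / np.sqrt(inner(v, v)))   # normalize
    return basis

for q in orthonormal_basis(3):
    print(np.round(q, 4))   # coefficient arrays, lowest degree first
```

In numerical analysis, the focus is particularly on bases with the following properties: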

What Makes a Good Basis?

  • (i): $p_{n} (x)$ should be a linear combination of elements of $\displaystyle \left\{ \varphi_{n} (x) \in \mathbb{Q} [ x ] : n \ge 0 \right\}$, and there should exist $n \in \mathbb{N}$ that makes $\displaystyle \max_{x \in [a,b]} | f(x) - p_{n} (x) |$ as small as desired.
  • (ii): It's preferable for $p_{n} (x)$ to be computed as $\displaystyle p_{n} (x) = \sum_{k=0}^{n} a_{k} \varphi_{k} (x)$, so that improving the accuracy means adding new terms to the sum already in hand rather than recalculating everything from scratch (see the sketch after this list).
  • (iii): $| a_{n} |$ should converge to $0$ quickly; the faster the coefficients decay, the fewer terms need to be added for a given accuracy.
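To illustrate (ii) and (iii), a minimal sketch of this incremental refinement using the sine series again (the variable names are illustrative): each pass adds one term to the running sum without recomputing the earlier ones, and the printed errors show how quickly the coefficients decay.

```python
import math

x = math.pi / 6
p, term = 0.0, x           # running sum p_n and the next term to add
for n in range(6):
    p += term              # refine: p_{n+1} = p_n + (one new term)
    print(f"{n + 1} term(s) kept, error = {abs(math.sin(x) - p):.2e}")
    term *= -x * x / ((2 * n + 2) * (2 * n + 3))
```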

The reason for considering the set of polynomials with rational coefficients $\mathbb{Q} [ x ]$ instead of $\mathbb{R} [ x ]$ is simple: a computer cannot represent irrational numbers exactly, so in numerical calculation it only ever handles rational values, such as floating-point numbers.


  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p199. ↩︎