Minimax Approximation and Least Squares Approximation in Numerical Analysis

Buildup¹

Let’s assume we are given the problem of approximating a function $f : [a,b] \to \mathbb{R}$. Since computation is the responsibility of computers, our goal is to approximate it with a polynomial function $q$. Approximating a function means that we want a function close to $f$ not just at a single point but across its entire domain $[a,b]$, so the goal is to reduce the error where it is largest. This kind of situation is referred to as a minimax problem. [ NOTE: The analogous term in economics denotes maximizing the minimum payoff (maximin); since here we minimize the maximum error, ‘minimax’ is the apt term, so let’s just remember it as ‘minimax’. ]

$$ \rho_{n} (f) := \inf_{\deg(q) \le n } \left\| f - q \right\|_{\infty} $$

Rewriting the minimax problem in formulaic terms, the goal is to find the polynomial function $q$ of degree $n$ or less that makes $\rho_{n} (f)$ the smallest. Here $\| \cdot \|_{\infty}$ is the maximum norm, and our objective is the $q_{n}^{ \ast }$ that attains the infimum. In other words, finding $q_{n}^{ \ast }$ that satisfies $\rho_{n} (f) = \left\| f - q_{n}^{ \ast } \right\|_{\infty}$ is the minimax approximation.
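
For a concrete feel, $\left\| f - q \right\|_{\infty}$ can be estimated on a dense grid. The following sketch is illustrative only, with every concrete choice assumed: $f(x) = e^{x}$ on $[-1, 1]$, degree $n = 3$, and a Chebyshev-basis fit standing in as a near-minimax candidate.

```python
import math
import numpy as np

# A sketch, not a minimax solver: estimate || f - q ||_inf on a dense grid.
# Assumed example: f(x) = e^x on [-1, 1], polynomials of degree n = 3.
f = np.exp
a, b, n = -1.0, 1.0, 3
x = np.linspace(a, b, 10_001)

# Candidate 1: the degree-3 Taylor polynomial of e^x at 0.
taylor = sum(x**k / math.factorial(k) for k in range(n + 1))

# Candidate 2: a least squares fit in the Chebyshev basis, same degree.
cheb = np.polynomial.chebyshev.Chebyshev.fit(x, f(x), deg=n)

print("||f - taylor||_inf ~", np.max(np.abs(f(x) - taylor)))
print("||f - cheb||_inf   ~", np.max(np.abs(f(x) - cheb(x))))
# The second candidate has a much smaller maximum error, showing that
# rho_n(f) can be far below what a naive choice of q achieves.
```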

However, this is easier said than done. To emphasize again, a minimax approximation must control the error across the entire domain $[a,b]$ at once, which is far from a trivial task. Even setting aside its structure as a closed interval, $[a,b]$ is an uncountably infinite set, so the problem is more complicated to handle than it appears.

Least Squares Approximation²

$$ M_{n} (f) := \inf_{\deg(r) \le n } \left\| f - r \right\|_{2} $$

To circumvent these difficulties, mathematicians brought in the notion of least squares approximation. The method of least squares seeks the solution that minimizes the integrated squared deviation. In formulaic terms, this means finding the polynomial function $r$ of degree $n$ or less that makes $M_{n} (f)$ the smallest. In other words, finding $r_{n}^{ \ast }$ that satisfies $M_{n} (f) = \left\| f - r_{n}^{ \ast } \right\|_{2}$ is the least squares approximation. The linear algebra techniques used in implementing this method carry over naturally to functional analysis.
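
As a sketch of how $r_{n}^{ \ast }$ might be computed in practice (the example $f(x) = e^{x}$ on $[-1, 1]$ and the use of a Legendre basis are assumptions for illustration, not part of the original discussion), one can project $f$ onto the Legendre polynomials, which are orthogonal on $[-1, 1]$:

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.integrate import quad

# Continuous least squares on [-1, 1] by Legendre projection (assumed f = exp).
# The Legendre polynomials satisfy ||P_k||_2^2 = 2 / (2k + 1), so
#   r_n*(x) = sum_k c_k P_k(x)  with  c_k = <f, P_k> / ||P_k||_2^2.
f = np.exp
n = 3

coeffs = []
for k in range(n + 1):
    P_k = legendre.Legendre.basis(k)                  # k-th Legendre polynomial
    inner, _ = quad(lambda t: f(t) * P_k(t), -1, 1)   # <f, P_k>
    coeffs.append(inner * (2 * k + 1) / 2)            # divide by ||P_k||_2^2

r_star = legendre.Legendre(coeffs)

x = np.linspace(-1, 1, 10_001)
err = f(x) - r_star(x)
print("||f - r*||_2   ~", np.sqrt(np.mean(err**2) * 2.0))  # 2.0 = length of [-1, 1]
print("||f - r*||_inf ~", np.max(np.abs(err)))
```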

The reason for choosing the least squares norm, $\displaystyle \| f - r \|_{2} = \sqrt{ \int_{a}^{b} \left| f(x) - r(x) \right|^{2} d x }$, comes down to inner products: among the Lebesgue spaces $L^{p}$, it is precisely $L^{2}$ that is a Hilbert space, i.e. whose norm is induced by an inner product. Being able to use an inner product opens up a variety of techniques, hence the choice of the Hilbert space among the Lebesgue spaces.
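
Concretely, with the inner product $\displaystyle \left\langle f , g \right\rangle = \int_{a}^{b} f(x) g(x) dx$, the least squares solution is exactly the orthogonal projection of $f$ onto the space of polynomials of degree $n$ or less: if $\left\{ \varphi_{0} , \dots , \varphi_{n} \right\}$ is an orthonormal basis of that space, then

$$ r_{n}^{ \ast } = \sum_{k=0}^{n} \left\langle f , \varphi_{k} \right\rangle \varphi_{k} $$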

And, surprisingly, the least squares solution $C_{n}$ performs well not only in $\| \cdot \|_{2}$ but also in $\| \cdot \|_{\infty}$. Even though it is not the minimax solution, a bound of the form $\displaystyle \| f - C_{n} \|_{\infty} \le {{ \ln n } \over { n }}$, up to a constant depending on $f$, still makes $C_{n}$ a rather usable approximation: $\ln n$ diverges far more slowly than $n$, so the bound tends to $0$.
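
Reading $C_{n}$, as in Atkinson, as the truncated Chebyshev expansion of $f$ (the least squares solution with respect to the Chebyshev weight), the following is a sketch under that assumption, again with the assumed example $f(x) = e^{x}$:

```python
import numpy as np

# A sketch (assumed f(x) = e^x on [-1, 1]): build the truncated Chebyshev
# series C_n via Gauss-Chebyshev quadrature and check its maximum error.
f = np.exp

def chebyshev_least_squares(f, n, m=200):
    # Nodes theta_j of the m-point Gauss-Chebyshev rule.
    theta = (np.arange(m) + 0.5) * np.pi / m
    fvals = f(np.cos(theta))
    # a_k ~ (2/m) * sum_j f(cos theta_j) cos(k theta_j); a_0 is halved below,
    # since the series is a_0 / 2 + sum_{k>=1} a_k T_k.
    a = np.array([2.0 / m * np.sum(fvals * np.cos(k * theta)) for k in range(n + 1)])
    a[0] /= 2.0
    return np.polynomial.chebyshev.Chebyshev(a)

x = np.linspace(-1, 1, 10_001)
for n in (2, 4, 8):
    C_n = chebyshev_least_squares(f, n)
    print(n, np.max(np.abs(f(x) - C_n(x))))
# The maximum error shrinks rapidly with n, consistent with C_n being a
# near-minimax approximation even though it only solves a least squares problem.
```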


  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p201. ↩︎

  2. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p204. ↩︎