
Minimization and Maximization Approximations and Least Squares Approximations in Numerical Analysis 📂Numerical Analysis


Buildup¹

Let’s assume we are given the problem of approximating a function $f : [a,b] \to \mathbb{R}$. Since computation is the responsibility of computers, our goal is to approximate it with a polynomial. Approximating a function means we want a function similar to $f$ not just at a single point but across its entire domain $[a,b]$, so the goal is to reduce the error where it is largest. This kind of situation is referred to as a minimax problem. [ NOTE: Unlike the related usage in economics, which denotes maximizing the minimum benefit, here we minimize the maximum error, so the term ‘minimax’ is the more apt one; let’s just remember it as ‘minimax’. ]
$$ \rho_{n} (f) := \inf_{\deg(q) \le n } \left\| f - q \right\|_{\infty} $$
Rewriting the minimax problem in formulaic terms, the goal is to find the polynomial $q$ of degree at most $n$ that makes $\rho_{n} (f)$ as small as possible. Here $\left\| \cdot \right\|_{\infty}$ denotes the maximum norm, and our objective is to find the minimizer $q_{n}^{\ast}$. In other words, finding the $q_{n}^{\ast}$ that satisfies $\rho_{n} (f) = \left\| f - q_{n}^{\ast} \right\|_{\infty}$ is the minimax approximation.
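As a small illustration (my own sketch, not from the text; the choice of $f(x) = e^{x}$ and the Taylor candidate are arbitrary), the maximum norm $\left\| f - q \right\|_{\infty}$ can be estimated numerically by sampling $|f - q|$ on a dense grid, which gives an upper bound on how small $\rho_{n}(f)$ can be:

```python
import numpy as np

def sup_norm_error(f, q, a, b, m=10001):
    """Estimate ||f - q||_inf by sampling |f - q| on a dense grid over [a, b]."""
    x = np.linspace(a, b, m)
    return float(np.max(np.abs(f(x) - q(x))))

f = np.exp
# Degree-2 Taylor polynomial of e^x at 0: 1 + x + x^2/2 (one candidate q)
taylor = np.polynomial.Polynomial([1.0, 1.0, 0.5])
err = sup_norm_error(f, taylor, -1.0, 1.0)
# Since rho_2(f) is an infimum over all degree-2 polynomials,
# rho_2(f) can be no larger than this candidate's sup-norm error.
print(err)
```

The true minimax polynomial $q_{2}^{\ast}$ would do strictly better than this Taylor candidate, whose worst error sits at the endpoint $x = 1$.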

However, this is easier said than done. To emphasize again, a minimax approximation must control the error across the entire domain $[a,b]$ at once, which is far from a trivial task. On closer inspection, even setting aside its structure as a closed interval, $[a,b]$ is an uncountably infinite set, making it more complicated to handle than it appears.

Least Squares Approximation²

$$ M_{n} (f) := \inf_{\deg(r) \le n } \left\| f - r \right\|_{2} $$
To circumvent these difficulties, mathematicians brought in the notion of least squares approximation. This method, known as least squares, seeks the solution that minimizes the integral of the squared deviations. Formulaically, this means finding the polynomial $r$ of degree at most $n$ that makes $M_{n} (f)$ as small as possible. In other words, finding the $r_{n}^{\ast}$ that satisfies $M_{n} (f) = \left\| f - r_{n}^{\ast} \right\|_{2}$ is the least squares approximation. The linear algebra techniques used in implementing this method carry over naturally to functional analysis as well.
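To make the linear-algebra viewpoint concrete, here is a sketch (my own, not from Atkinson, assuming numpy is available): on $[-1,1]$ the degree-$n$ least squares polynomial is the orthogonal projection of $f$ onto the Legendre polynomials $P_{0}, \dots, P_{n}$, with coefficients $c_{k} = \frac{2k+1}{2} \int_{-1}^{1} f P_{k} \, dx$.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def trapezoid(y, x):
    """Composite trapezoid rule for the integral of y over the grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def least_squares_coeffs(f, n, m=20001):
    """Legendre coefficients of the degree-n least squares polynomial r_n^*
    for f on [-1, 1]: c_k = (2k+1)/2 * integral of f * P_k."""
    x = np.linspace(-1.0, 1.0, m)
    fx = f(x)
    # np.eye(n + 1)[k] selects the basis polynomial P_k for legval
    return np.array([(2 * k + 1) / 2 * trapezoid(fx * leg.legval(x, np.eye(n + 1)[k]), x)
                     for k in range(n + 1)])

# Degree-3 least squares approximation of f(x) = e^x and its 2-norm error M_3(f)
coeffs = least_squares_coeffs(np.exp, 3)
x = np.linspace(-1.0, 1.0, 20001)
M3 = float(np.sqrt(trapezoid((np.exp(x) - leg.legval(x, coeffs)) ** 2, x)))
print(M3)
```

Because the $P_{k}$ are orthogonal, each coefficient is just an inner product; no linear system needs to be solved, which is exactly the payoff of working in an inner product space.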

The reason for choosing the least squares norm, $\displaystyle \left\| f - r \right\|_{2} = \sqrt{ \int_{a}^{b} \left| f(x) - r(x) \right|^{2} \, dx }$, comes down to the fact that among the Lebesgue spaces, only $L^{2}$ is a Hilbert space, that is, only its norm is induced by an inner product. Being able to define an inner product allows the use of various techniques, hence the choice of the Hilbert space among the Lebesgue spaces.

And, surprisingly, the least squares solution $C_{n}$ performs well not only in $\left\| \cdot \right\|_{2}$ but also in $\left\| \cdot \right\|_{\infty}$. Even if it is not the minimax solution, being bounded by $\displaystyle \left\| f - C_{n} \right\|_{\infty} \le {{ \ln n } \over { n }}$ still makes $C_{n}$ quite a usable approximation, since $\ln n$ grows far more slowly than $n$.
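As a quick sanity check (my own sketch, assuming numpy; the degree and the test function $f(x) = e^{x}$ are arbitrary choices, not from the text), one can compare the sup-norm error of the degree-3 least squares solution on $[-1,1]$ against a naive candidate such as the degree-3 Taylor polynomial:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def trapezoid(y, x):
    """Composite trapezoid rule for the integral of y over the grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

x = np.linspace(-1.0, 1.0, 20001)
fx = np.exp(x)

# Degree-3 least squares solution: orthogonal projection onto Legendre polynomials
coeffs = np.array([(2 * k + 1) / 2 * trapezoid(fx * leg.legval(x, np.eye(4)[k]), x)
                   for k in range(4)])
ls_err = float(np.max(np.abs(fx - leg.legval(x, coeffs))))

# Degree-3 Taylor polynomial of e^x at 0, as a naive comparison
taylor = np.polynomial.Polynomial([1.0, 1.0, 1/2, 1/6])
taylor_err = float(np.max(np.abs(fx - taylor(x))))

print(ls_err, taylor_err)
```

Even measured in the maximum norm, the norm the least squares solution was not optimized for, the projection beats the Taylor polynomial by a wide margin in this example.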


  1. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p201. ↩︎

  2. Atkinson. (1989). An Introduction to Numerical Analysis (2nd Edition): p204. ↩︎