Optimization Techniques in Mathematics

Definition

  1. The problem of finding the point $x^{ \ast } = \argmin_{x} f(x)$ at which a function $f : \mathbb{R}^{n} \to \mathbb{R}$ attains its minimum is called an Optimization Problem, and an algorithm for solving such a problem is called an Optimization Technique. The function $f$ given in an optimization problem is specifically referred to as the Objective Function.
  2. $x^{ \ast }$ is called a Global Optimizer if $f(x^{ \ast }) \le f(x)$ for all $x$.
  3. If there exists a neighborhood $\mathcal{N}$ of $x^{ \ast }$ satisfying $f(x^{ \ast }) \le f(x)$ for all $x \in \mathcal{N}$, then $x^{ \ast }$ is called a Local Optimizer.

In these definitions the direction of the inequality may be reversed; that is, everything stated here applies equally to maximization, and both cases are collectively referred to as optimization.
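
To make these definitions concrete, here is a minimal sketch in Python; the objective $f(x) = x^{4} - 3x^{2} + x$ and all names are made up for illustration, and plain gradient descent plays the role of the optimization technique. Note how different starting points converge to different local optimizers.

```python
def f(x):
    """A toy objective function with two local minima."""
    return x**4 - 3 * x**2 + x


def grad_f(x):
    """Derivative of f, used to choose the descent direction."""
    return 4 * x**3 - 6 * x + 1


def gradient_descent(x0, lr=0.05, steps=200):
    """A basic optimization technique: repeatedly step against the gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x


print(gradient_descent(1.5))   # ~ 1.13: a local, but not global, optimizer
print(gradient_descent(-1.5))  # ~ -1.30: the global optimizer
```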

Explanation

People who use optimization techniques may speak of ‘maximizing profit’ or ‘minimizing cost’, but whether the goal is a maximum or a minimum is not a significant issue for mathematicians. Minimization is near-synonymous with optimization for two reasons: an algorithm for a minimization problem can be applied to a maximization problem simply by multiplying the objective function by $-1$, and mathematically significant functions like metrics or norms have the set of real numbers greater than or equal to $0$ as their codomain, so they are bounded below and asking for a minimum is natural.
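
As a quick sketch of the $-1$ trick (the ‘profit’ function here is made up for illustration), `scipy.optimize.minimize` can be made to maximize simply by handing it the negated objective:

```python
import numpy as np
from scipy.optimize import minimize

def profit(x):
    """A made-up concave profit function; its maximum is at x = 3."""
    return -(x[0] - 3) ** 2 + 10

# Maximization reduced to minimization: minimize -profit instead.
result = minimize(lambda x: -profit(x), x0=np.array([0.0]))
print(result.x)  # ~ [3.], the profit-maximizing point
```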

With the recent trend toward deep learning, the objective function (also called a cost function or loss function) is usually assumed to be smooth, but there is no guarantee that it is, so algorithms and methods that overcome non-smoothness have also been studied, as sketched below. Moreover, the domain of the objective function need not be $\mathbb{R}^{n}$.
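
One such method is the subgradient method, which replaces the gradient with any subgradient at points of non-differentiability. A minimal sketch, assuming the made-up objective $f(x) = |x|$, which is not differentiable exactly at its minimizer:

```python
def subgradient(x):
    """A subgradient of |x|; at x = 0 any value in [-1, 1] is valid."""
    if x > 0:
        return 1.0
    if x < 0:
        return -1.0
    return 0.0

x = 5.0
for k in range(1, 1001):
    x -= (1.0 / k) * subgradient(x)  # diminishing step size 1/k
print(x)  # close to 0, the minimizer of |x|
```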

Global Optimizer

While the existence of an optimizer can be shown under just a few simple conditions, there is in general no theorem guaranteeing that a local optimizer is a global optimizer; special structure, such as convexity, is needed for that. Ideally everyone would like to find the global optimizer, but in practice it is rare to truly expect that the solution found is in fact the global one. There are many optimization problems, but no single ‘optimal’ optimization technique that fits them all, which is why numerous improved algorithms have been developed.
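
One common (purely heuristic) improvement is multi-start: run a local technique from many random initial points and keep the best result. A sketch, reusing the toy objective from the earlier example; nothing here guarantees global optimality:

```python
import random

def f(x):
    return x**4 - 3 * x**2 + x  # two local minima; the global one is near -1.30

def gradient_descent(x0, lr=0.05, steps=200):
    x = x0
    for _ in range(steps):
        x -= lr * (4 * x**3 - 6 * x + 1)
    return x

# Run the local method from many random starts and keep the best outcome.
candidates = [gradient_descent(random.uniform(-2, 2)) for _ in range(20)]
print(min(candidates, key=f))  # very likely ~ -1.30, but that is luck, not a theorem
```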

Strictness and Isolation of Optimizers

In most cases the following definitions are of little practical importance, but they are mentioned here nonetheless; the example after the list shows that the two notions genuinely differ.

  1. If there exists a neighborhood $\mathcal{N}$ of $x^{ \ast }$ satisfying $f(x^{ \ast }) < f(x)$ for all $x \in \mathcal{N}$ with $x \ne x^{ \ast }$, then $x^{ \ast }$ is called a Strict Local Optimizer.
  2. If there exists a neighborhood $\mathcal{N}$ of $x^{ \ast }$ in which $x^{ \ast }$ is the only local optimizer, then $x^{ \ast }$ is called an Isolated Local Optimizer.
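
The distinction is not vacuous, however. A standard example of a strict local optimizer that is not isolated is

$$ f(x) = x^{4} \cos \left( \frac{1}{x} \right) + 2 x^{4} , \qquad f(0) = 0 $$

Since $f(x) = x^{4} \left( 2 + \cos (1/x) \right) > 0$ for all $x \ne 0$, the point $x^{ \ast } = 0$ is a strict local optimizer; yet the oscillation of $\cos (1/x)$ produces infinitely many other local optimizers in every neighborhood of $0$, so $x^{ \ast } = 0$ is not isolated.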