
Multi-objective Optimization Problem: Pareto Optimization

Definition

In the context of an optimization problem, a problem whose objective function is extended to a vector-valued function is called a multi-objective optimization problem, and solving such a problem is called Pareto optimization.

Explanation

Optimization is often formulated as the problem of finding $x \in X$ that minimizes the value of a loss function given in the form $L : \mathbb{R}^{n} \to \mathbb{R}$. If the problem involves several tasks simultaneously and the loss is a vector-valued function $L = \left( L_{1} , \cdots , L_{m} \right)$, then, since $\mathbb{R}^{m}$ admits no canonical total order for $m \ge 2$, the notion of minimizing in a single direction no longer applies. For example, if one candidate obtained at the end of optimization excels at minimizing $L_{1}$ but performs poorly on $L_{2}$, and another candidate does the opposite, then each is superior only with respect to its own objective and neither can be declared better overall.
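As a small illustration with made-up numbers, suppose two candidates $x_{1}, x_{2}$ yield the loss vectors $$ L(x_{1}) = (1, 3), \qquad L(x_{2}) = (2, 1). $$ Here $x_{1}$ is better for $L_{1}$ while $x_{2}$ is better for $L_{2}$; neither dominates the other, so no single minimizer can be singled out without some further criterion.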

A widely used idea for converting a multi-objective optimization problem into a standard optimization problem is to define a new $L_{0}$ as their weighted sum, as follows. $$ L_{0} = \sum_{k=1}^{m} \lambda_{k} L_{k} $$
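The following is a minimal sketch of this weighted-sum scalarization. The two quadratic losses and the weights are invented for illustration; any single-objective optimizer could then be applied to the scalarized $L_{0}$, here plain gradient descent.

```python
import numpy as np

# Two toy losses pulling x in opposite directions (illustrative assumptions).
def L1(x):
    return np.sum((x - 1.0) ** 2)   # pulls x toward 1

def L2(x):
    return np.sum((x + 1.0) ** 2)   # pulls x toward -1

weights = [0.7, 0.3]                # the lambda_k, chosen as hyperparameters

def L0(x):
    # Scalarized objective: L0 = sum_k lambda_k * L_k
    return weights[0] * L1(x) + weights[1] * L2(x)

# Plain gradient descent on the scalarized objective.
x = np.zeros(3)
for _ in range(200):
    grad = weights[0] * 2 * (x - 1.0) + weights[1] * 2 * (x + 1.0)
    x -= 0.1 * grad

print(x)  # converges near 0.4 = 0.7*1 + 0.3*(-1) for these weights
```

Changing the weights $\lambda_{k}$ moves the solution along the trade-off between $L_{1}$ and $L_{2}$, which is exactly why the choice of weights becomes a hyperparameter problem.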

This shifts the difficulty of handling multiple objective functions onto the choice of the weights $\lambda_{k}$, that is, onto a hyperparameter problem. Ironically, because this method is so simple and intuitive, the need to consider Pareto optimality arises less often than one might expect, even when the objective function is vector-valued.

Meanwhile, in Pareto optimization, a solution is called Pareto optimal (non-dominated) if no objective can be improved without worsening at least one other objective, and the set of such solutions, which makes the trade-offs among the objectives explicit, is called the Pareto front. In multi-objective optimization problems it is common to compute the Pareto front and then select from it the solution that best matches the intended purpose.
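Below is a minimal sketch of extracting the Pareto front from a finite set of already-evaluated candidates. The function name and the example loss values are invented for illustration; it simply filters out every point dominated by another.

```python
import numpy as np

def pareto_front(losses):
    """Return a boolean mask of non-dominated rows (smaller is better)."""
    losses = np.asarray(losses)
    n = losses.shape[0]
    on_front = np.ones(n, dtype=bool)
    for i in range(n):
        if not on_front[i]:
            continue
        # Point i is dominated if some other point is no worse in every
        # objective and strictly better in at least one.
        dominated = np.all(losses <= losses[i], axis=1) & np.any(losses < losses[i], axis=1)
        if np.any(dominated):
            on_front[i] = False
    return on_front

# Example: five candidates evaluated on two losses.
candidates = np.array([
    [1.0, 4.0],
    [2.0, 2.0],
    [4.0, 1.0],
    [3.0, 3.0],   # dominated by [2.0, 2.0]
    [2.5, 2.5],   # dominated by [2.0, 2.0]
])
print(candidates[pareto_front(candidates)])
# -> [1, 4], [2, 2], [4, 1] remain on the Pareto front
```

Each surviving point represents a different trade-off between the objectives; picking among them is where the intended purpose of the problem comes back in.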