
Proof of the Existence of an Optimal Solution in the Equation Form of Linear Programming Problems

Theorem 1

$$ \begin{matrix} \text{Maximize} & \mathbf{c}^{T} \mathbf{x} \\ \text{subject to} & A \mathbf{x} = \mathbf{b} \\ & \mathbf{x} \ge \mathbf{0} \end{matrix} $$

Suppose a linear programming problem is given in equational form for a matrix $A \in \mathbb{R}^{m \times n}$ and vectors $\mathbf{b} \in \mathbb{R}^{m}$, $\mathbf{c} \in \mathbb{R}^{n}$. If at least one feasible solution exists and the objective function $\mathbf{c}^{T} \mathbf{x}$ is bounded above on the set of feasible solutions, then an optimal solution exists.


  • $\mathbf{c}^{T}$ denotes the transpose.
  • A feasible solution refers to a solution that satisfies the constraints, regardless of optimality.
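
Before the proof, here is a minimal numerical sketch of the statement using scipy's `linprog` (the data `A`, `b`, `c` are made up for illustration). Since `linprog` minimizes, maximizing $\mathbf{c}^{T} \mathbf{x}$ is encoded by passing $-\mathbf{c}$; the solver's default bounds already impose $\mathbf{x} \ge \mathbf{0}$.

```python
import numpy as np
from scipy.optimize import linprog

# Equational form: maximize c^T x  subject to  A x = b,  x >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])   # A in R^{2 x 4}, so m = 2, n = 4
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0, 0.0, 0.0])

# linprog minimizes, so pass -c; the default bounds are exactly x >= 0.
res = linprog(-c, A_eq=A, b_eq=b)

print(res.status)   # 0: optimization terminated successfully
print(res.x)        # an optimal solution, here (2, 2, 0, 0)
print(-res.fun)     # the optimal objective value c^T x = 10
```

This problem has a feasible solution, e.g. $\mathbf{x} = (0, 0, 4, 6)$, and its objective function is bounded above, so the theorem guarantees the optimal solution that the solver reports.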

Proof

Strategy: This proof is essentially the same as the proof of the theorem that if an optimal solution exists, then one of them is a basic feasible solution.

Definition of a basic feasible solution: A feasible solution $\mathbf{x} = \left( x_{1} , \cdots , x_{n} \right) \in \mathbb{R}^{n}$ is called a basic feasible solution if there exists a set $B \subseteq [n]$ of cardinality $m$ such that $A_{B}^{-1}$ exists and $x_{j} = 0$ for all $j \notin B$. This set $B$ is called a basis.
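
As a sketch of this definition, the following code computes the basic feasible solution determined by one candidate basis, reusing the made-up data from the example above (the choice of $B$, written 0-indexed in the code, is an assumption for illustration).

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])

B = [0, 1]                       # candidate basis (0-indexed), |B| = m = 2
A_B = A[:, B]                    # A_B must be invertible for B to be a basis
x = np.zeros(A.shape[1])         # x_j = 0 for every j not in B
x[B] = np.linalg.solve(A_B, b)   # x_B = A_B^{-1} b

# x is a basic *feasible* solution iff it also satisfies x >= 0.
print(x, bool(np.all(x >= 0)))   # [2. 2. 0. 0.] True
```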

Fix any feasible solution $\mathbf{x}_{0}$. Among all feasible solutions $\mathbf{x}$ satisfying $$ \mathbf{c}^{T} \mathbf{x}_{0} \le \mathbf{c}^{T} \mathbf{x} $$ let $\tilde{\mathbf{x}}$ denote one with the greatest number of components equal to $0$.

  • If $\tilde{\mathbf{x}}$ is shown to be a basic feasible solution, then since there are only finitely many basic feasible solutions, the existence of an optimal solution follows.
  • Moreover, since every optimal solution is in particular a feasible solution, there exists a basic feasible solution with the same objective function value, which is then itself optimal. Therefore, at least one of the optimal solutions must be a basic feasible solution.

Now, defining the index set $$ K = \left\{ k \in [n] : \left( \tilde{\mathbf{x}} \right)_{k} > 0 \right\} $$ consider whether the column vectors of $A_{K}$, the submatrix of $A$ formed by the columns indexed by $K$, are linearly independent or not.
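
Numerically, this case split is a rank check on $A_{K}$. The sketch below uses an assumed feasible solution for the same made-up data as before:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
x_tilde = np.array([1.0, 0.0, 3.0, 4.0])   # feasible: A @ x_tilde = (4, 6) = b

K = np.flatnonzero(x_tilde > 0)            # indices of the positive components
A_K = A[:, K]                              # columns of A indexed by K

# The columns are linearly independent iff A_K has full column rank.
independent = np.linalg.matrix_rank(A_K) == len(K)
print(K, independent)                      # [0 2 3] False  -> Case 2 applies
```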


Case 1. Linear Independence

If the columns of $A_{K}$ are linearly independent, the set $K$ can be extended to a set $B \supseteq K$ of cardinality $m$ for which $A_{B}^{-1}$ exists, so $\tilde{\mathbf{x}}$ is a basic feasible solution by definition, and there is nothing further to prove.


Case 2. Linear Dependence

If they are linearly dependent, there exists a nonzero vector $\mathbf{v} \in \mathbb{R}^{|K|}$ satisfying $$ A_{K} \mathbf{v} = \mathbf{0} $$ Extend $\mathbf{v}$ to a vector $\mathbf{w} \in \mathbb{R}^{n}$ with $\mathbf{w}_{K} = \mathbf{v}$ and $w_{j} = 0$ for $j \notin K$, so that $A \mathbf{w} = A_{K} \mathbf{v} = \mathbf{0}$. For example, if $n = 5$, $K = \left\{ 2,3,5 \right\}$, and $\mathbf{v} = (-2, 1, 7)$, then $\mathbf{w} = (0,-2,1,0,7)$ is the vector whose $2$nd, $3$rd, and $5$th components are the entries of $\mathbf{v}$ and whose remaining components are $0$. By replacing $\mathbf{v}$ with $-\mathbf{v}$ if necessary, we may further assume $$ \mathbf{c}^{T} \mathbf{w} \ge 0 $$ From here on, let $\mathbf{w}$ denote a vector satisfying the above inequality.
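
The construction of $\mathbf{w}$ can be sketched with scipy's `null_space`, which returns an orthonormal basis of $\left\{ \mathbf{v} : A_{K} \mathbf{v} = \mathbf{0} \right\}$; any nonzero column of it works. Continuing the assumed data from above:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
c = np.array([3.0, 2.0, 0.0, 0.0])
x_tilde = np.array([1.0, 0.0, 3.0, 4.0])

K = np.flatnonzero(x_tilde > 0)
v = null_space(A[:, K])[:, 0]    # nonzero v with A_K v = 0 (columns dependent)

w = np.zeros(A.shape[1])
w[K] = v                         # extend v by zeros outside K
if c @ w < 0:
    w = -w                       # flip the sign so that c^T w >= 0

print(np.allclose(A @ w, 0), c @ w >= 0)   # True True
```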

Now, for this $\mathbf{w}$, consider the following vector $\mathbf{x}(t)$ depending on $t \ge 0$. $$ \mathbf{x} (t) = \tilde{\mathbf{x}} + t \mathbf{w} $$ Since $\tilde{\mathbf{x}}$ is a feasible solution and $\mathbf{w}$ satisfies $A \mathbf{w} = A_{K} \mathbf{v} = \mathbf{0}$, for all $t \ge 0$, $$ \begin{align*} A \mathbf{x} (t) =& A \tilde{\mathbf{x}} + t A \mathbf{w} \\ =& A \tilde{\mathbf{x}} \\ =& \mathbf{b} \end{align*} $$ is satisfied. In other words, $\mathbf{x} (t)$ satisfies every constraint except possibly $\mathbf{x}(t) \ge \mathbf{0}$.

Meanwhile, the $j$-th component of $\mathbf{x} (t)$ is $x_{j}(t) = \tilde{x}_{j} + t w_{j}$, and we may assume that at least one component satisfies $w_{j} < 0$. Indeed, if $w_{j} \ge 0$ for all $j$, then $\mathbf{x} (t) = \tilde{\mathbf{x}} + t \mathbf{w} \ge \mathbf{0}$ for all $t \ge 0$, so every $\mathbf{x} (t)$ is feasible: when $\mathbf{c}^{T} \mathbf{w} > 0$ this contradicts the assumption that the objective function is bounded above, and when $\mathbf{c}^{T} \mathbf{w} = 0$ we may replace $\mathbf{w}$ with $-\mathbf{w}$, which still satisfies $\mathbf{c}^{T} \mathbf{w} \ge 0$ and now has a negative component.

So let $w_{j_{0}} < 0$. As $t$ increases from $0$, there is a first $t_{0} \ge 0$ at which some component reaches zero, $$ x_{j_{0}} \left( t_{0} \right) = \tilde{x}_{j_{0}} + t_{0} w_{j_{0}} = 0 $$ Because $t_{0}$ is the first such time, $\mathbf{x} (t) \ge \mathbf{0}$ for all $0 \le t \le t_{0}$; in particular, $\mathbf{x} \left( t_{0} \right)$ is feasible. Then $\mathbf{x} \left( t_{0} \right)$ has at least one more zero component than $\tilde{\mathbf{x}}$ while satisfying $$ \begin{align*} \mathbf{c}^{T} \mathbf{x} \left( t_{0} \right) =& \mathbf{c}^{T} \tilde{\mathbf{x}} + t_{0} \mathbf{c}^{T} \mathbf{w} \\ \ge & \mathbf{c}^{T} \mathbf{x}_{0} + t_{0} \mathbf{c}^{T} \mathbf{w} \\ \ge & \mathbf{c}^{T} \mathbf{x}_{0} \end{align*} $$ This contradicts the definition of $\tilde{\mathbf{x}}$, which was chosen to have the greatest number of zero components among feasible solutions satisfying $\mathbf{c}^{T} \mathbf{x}_{0} \le \mathbf{c}^{T} \mathbf{x}$.
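
The choice of $t_{0}$ is a minimum-ratio computation over the negative components of $\mathbf{w}$. A sketch, continuing the running example with the null-space direction from the previous snippet rescaled by hand:

```python
import numpy as np

def first_zero_step(x_tilde, w):
    """Smallest t >= 0 at which a component of x(t) = x_tilde + t w hits 0."""
    neg = w < 0
    assert np.any(neg), "w >= 0 would mean an unbounded objective (or flip w)"
    return np.min(-x_tilde[neg] / w[neg])   # minimum ratio over the w_j < 0

# This w spans the null space found above, rescaled so the numbers stay
# integral; it satisfies A w = 0 and c^T w = 3 >= 0.
x_tilde = np.array([1.0, 0.0, 3.0, 4.0])
w = np.array([1.0, 0.0, -1.0, -2.0])

t0 = first_zero_step(x_tilde, w)   # min(3/1, 4/2) = 2
x_t0 = x_tilde + t0 * w
print(t0, x_t0)                    # 2.0 [3. 0. 1. 0.] -- one more zero, feasible
```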


Uniqueness of the basic feasible solution: The basic feasible solution is uniquely determined by the basis $B$.

Consequently, the columns of $A_{K}$ cannot be linearly dependent, so $\tilde{\mathbf{x}}$ must be a basic feasible solution. Since the basic feasible solution determined by a choice of basis is unique and there are only finitely many possible bases, there are finitely many basic feasible solutions; among them, one with the largest objective function value is an optimal solution, which proves existence. Since every optimal solution is also a feasible solution, for any optimal solution there exists a basic feasible solution with the same objective function value, which is then itself optimal; therefore, at least one of the optimal solutions must be a basic feasible solution.

See Also

Basic Theorem of Linear Programming

This theorem is used in the proof of the Basic Theorem of Linear Programming.


  1. Matousek. (2007). Understanding and Using Linear Programming: p46. ↩︎