
If an Optimal Solution Exists in Linear Programming Problems, One of Them is a Basic Feasible Solution

Theorem 1

$$ \begin{matrix} \text{Maximize} & \mathbf{c}^{T} \mathbf{x} \\ \text{subject to} & A \mathbf{x} = \mathbf{b} \\ & \mathbf{x} \ge \mathbf{0} \end{matrix} $$

Given matrices $A \in \mathbb{R}^{m \times n}$, $\mathbf{b} \in \mathbb{R}^{m \times 1}$, and $\mathbf{c} \in \mathbb{R}^{n}$, suppose the linear programming problem is given in equational form as above. If an optimal solution exists, then an optimal basic feasible solution also exists.


  • $\mathbf{c}^{T}$ denotes the transpose of $\mathbf{c}$.
  • A feasible solution is a solution that satisfies the constraints, regardless of optimality.
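
For concreteness, here is a minimal numerical sketch of the equational form above; the data $A$, $\mathbf{b}$, $\mathbf{c}$ are made up for illustration, and since `scipy.optimize.linprog` minimizes, the objective is negated to maximize.

```python
# Hypothetical instance of: maximize c^T x subject to A x = b, x >= 0.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])   # A in R^{2x4}, so m = 2, n = 4
b = np.array([4.0, 6.0])
c = np.array([2.0, 3.0, 0.0, 0.0])

# linprog minimizes, so pass -c; bounds encode x >= 0 componentwise.
res = linprog(-c, A_eq=A, b_eq=b, bounds=[(0, None)] * 4)
print(res.x)      # an optimal solution returned by the solver
print(-res.fun)   # the optimal objective value c^T x
```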

Proof

Strategy: This proof is essentially the same as the proof of the existence of an optimal solution.

Definition of basic feasible solution: A feasible solution $\mathbf{x} = \left( x_{1} , \cdots , x_{n} \right) \in \mathbb{R}^{n}$ is a basic feasible solution if there exists an index set $B \subseteq [n]$ with $|B| = m$ such that $A_{B}$ is invertible, i.e. $\exists A_{B}^{-1}$, and $x_{j} = 0$ for all $j \notin B$. Such a set $B$ is called a basis.
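
This definition can be checked mechanically for small instances. Below is a hedged brute-force sketch (the helper `is_basic_feasible` and its tolerance handling are illustrative, not from the source): it searches for a basis $B$ of size $m$ that contains the support of $\mathbf{x}$ and makes $A_{B}$ invertible.

```python
# Brute-force check of the basic-feasible-solution definition (only
# practical for small n, since it enumerates all size-m index sets).
import numpy as np
from itertools import combinations

def is_basic_feasible(A, b, x, tol=1e-9):
    m, n = A.shape
    support = {j for j in range(n) if x[j] > tol}
    for B in combinations(range(n), m):
        if not support <= set(B):
            continue                        # B must contain the support of x
        A_B = A[:, list(B)]
        if abs(np.linalg.det(A_B)) < tol:
            continue                        # A_B must be invertible
        if np.allclose(A_B @ x[list(B)], b):
            return True                     # sanity check: A_B x_B = b
    return False
```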

Fix an arbitrary feasible solution $\mathbf{x}_{0}$. Among all feasible solutions $\mathbf{x}$ satisfying $$ \mathbf{c}^{T} \mathbf{x}_{0} \le \mathbf{c}^{T} \mathbf{x} $$ let $\tilde{\mathbf{x}}$ denote one with the largest number of $0$ components.

  • If we can show that $\tilde{\mathbf{x}}$ is a basic feasible solution, then since there are only finitely many basic feasible solutions, this also yields the existence of an optimal solution.
  • In particular, taking $\mathbf{x}_{0}$ to be an optimal solution, the basic feasible solution $\tilde{\mathbf{x}}$ satisfies $\mathbf{c}^{T} \tilde{\mathbf{x}} \ge \mathbf{c}^{T} \mathbf{x}_{0}$ and is therefore itself optimal. Hence at least one optimal solution is a basic feasible solution.

Now, define the index set $$ K = \left\{ k \in [n] : \left( \tilde{\mathbf{x}} \right)_{k} > 0 \right\} $$ and split into cases according to whether the columns of $A_{K}$ are linearly independent.


Case 1. Linear Independence

By the definition of a basic feasible solution, $\tilde{\mathbf{x}}$ is immediately a basic feasible solution (if $|K| < m$, extend $K$ to a basis $B$ with $|B| = m$ whose columns remain linearly independent), and there is nothing more to prove.


Case 2. Linear Dependence

If the columns of $A_{K}$ are linearly dependent, there exists some nonzero vector $\mathbf{v} \in \mathbb{R}^{|K|}$ satisfying: $$ A_{K} \mathbf{v} = \mathbf{0} $$ Consider the extended vector $\mathbf{w} \in \mathbb{R}^{n}$ that satisfies $\mathbf{w}_{K} = \mathbf{v}$ and has zeros elsewhere, so that $A \mathbf{w} = A_{K} \mathbf{v} = \mathbf{0}$. For instance, if $K = \left\{ 2,3,5 \right\}$ and $\mathbf{v} = (-2,1,7)$, then $\mathbf{w} = (0,-2,1,0,7)$ is the vector whose $2$nd, $3$rd, and $5$th components agree with $\mathbf{v}$ while the rest are $0$. Now, by flipping the sign of $\mathbf{v}$ if necessary, $\mathbf{w}$ can also be made to satisfy: $$ \mathbf{c}^{T} \mathbf{w} \ge 0 $$ From now on, let $\mathbf{w}$ be a vector satisfying the above inequality.
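
This construction can be sketched directly in code. The helper below (`build_w` is a hypothetical name; `A`, `c`, `x_tilde` are assumed to be numpy arrays as in the proof) computes a null-space vector of $A_{K}$, pads it into $\mathbf{w}$, and flips the sign so that $\mathbf{c}^{T} \mathbf{w} \ge 0$.

```python
import numpy as np
from scipy.linalg import null_space

def build_w(A, c, x_tilde, tol=1e-9):
    K = np.where(x_tilde > tol)[0]    # indices with positive components
    N = null_space(A[:, K])           # orthonormal basis of ker(A_K)
    if N.shape[1] == 0:
        return None                   # columns of A_K are independent
    v = N[:, 0]                       # any nonzero v with A_K v = 0
    w = np.zeros(A.shape[1])
    w[K] = v                          # w_K = v, zeros elsewhere: A w = 0
    if c @ w < 0:
        w = -w                        # ensure c^T w >= 0
    return w
```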

For this $\mathbf{w}$, consider the following vector $\mathbf{x}(t)$ depending on $t \ge 0$: $$ \mathbf{x} (t) = \tilde{\mathbf{x}} + t \mathbf{w} $$ First, $\tilde{\mathbf{x}}$ is a feasible solution, and since $\mathbf{w}$ was constructed to satisfy $A \mathbf{w} = A_{K} \mathbf{v} = \mathbf{0}$, for all $t \ge 0$ $$ \begin{align*} A \mathbf{x} (t) =& A \tilde{\mathbf{x}} + t A \mathbf{w} \\ =& A \tilde{\mathbf{x}} \\ =& \mathbf{b} \end{align*} $$ holds. In other words, $\mathbf{x} (t)$ satisfies every constraint except possibly $\mathbf{x}(t) \ge \mathbf{0}$. The $j$th component of $\mathbf{x} (t)$ is $x_{j}(t) = \tilde{x}_{j} + t w_{j}$, and we may assume that $w_{j} < 0$ for at least one $j$. Indeed, if $w_{j} \ge 0$ for all $j$, then $\mathbf{x} (t) = \tilde{\mathbf{x}} + t \mathbf{w} \ge \mathbf{0}$ for all $t \ge 0$, so every $\mathbf{x} (t)$ would be feasible; when $\mathbf{c}^{T} \mathbf{w} > 0$ this contradicts the existence of an optimal solution, since $$ \mathbf{c}^{T} \mathbf{x} (t) = \mathbf{c}^{T} \tilde{\mathbf{x}} + t \mathbf{c}^{T} \mathbf{w} \to \infty $$ and when $\mathbf{c}^{T} \mathbf{w} = 0$ we may flip the sign of $\mathbf{w}$, which preserves $\mathbf{c}^{T} \mathbf{w} \ge 0$ while making some nonzero component of $\mathbf{w}$ negative.

Now, for some $w_{j_{0}} < 0$, let $t_{0} > 0$ be the first value at which $$ x_{j_{0}} \left( t_{0} \right) = \tilde{x}_{j_{0}} + t_{0} w_{j_{0}} = 0 $$ Choosing $t_{0}$ smallest over all components with $w_{j} < 0$ keeps $\mathbf{x} \left( t_{0} \right) \ge \mathbf{0}$, and since $\mathbf{w}$ vanishes outside $K$, no zero component of $\tilde{\mathbf{x}}$ becomes positive. Hence $\mathbf{x} \left( t_{0} \right)$ is a feasible solution with at least one more $0$ component than $\tilde{\mathbf{x}}$, while still satisfying $$ \begin{align*} \mathbf{c}^{T} \mathbf{x} \left( t_{0} \right) =& \mathbf{c}^{T} \tilde{\mathbf{x}} + t_{0} \mathbf{c}^{T} \mathbf{w} \\ \ge & \mathbf{c}^{T} \mathbf{x}_{0} + t_{0} \mathbf{c}^{T} \mathbf{w} \\ \ge & \mathbf{c}^{T} \mathbf{x}_{0} \end{align*} $$ This contradicts the choice of $\tilde{\mathbf{x}}$ as a solution with the largest number of $0$ components among those satisfying $\mathbf{c}^{T} \mathbf{x}_{0} \le \mathbf{c}^{T} \mathbf{x}$.
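
The choice of $t_{0}$ is exactly the ratio test familiar from the simplex method. A short sketch (the helper name `ratio_test` is mine; `x_tilde` and `w` are assumed from the previous step):

```python
import numpy as np

def ratio_test(x_tilde, w):
    neg = np.where(w < 0)[0]              # components that decrease along w
    if neg.size == 0:
        raise ValueError("no component ever hits 0: unbounded direction")
    ratios = x_tilde[neg] / (-w[neg])     # the t at which x_j(t) = 0
    t0 = ratios.min()                     # first component to hit zero
    return t0, x_tilde + t0 * w           # x(t_0): one more zero component
```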


Uniqueness of basic feasible solution: A basic feasible solution is uniquely determined by its basis $B$.

Consequently, the columns of $A_{K}$ cannot be linearly dependent, so $\tilde{\mathbf{x}}$ must be a basic feasible solution. Because a basic feasible solution is uniquely determined by its choice of basis, there are only finitely many basic feasible solutions. In particular, taking $\mathbf{x}_{0}$ to be an optimal solution, $\tilde{\mathbf{x}}$ is a basic feasible solution with $\mathbf{c}^{T} \tilde{\mathbf{x}} \ge \mathbf{c}^{T} \mathbf{x}_{0}$, that is, with the same (optimal) objective function value. Therefore at least one optimal solution is a basic feasible solution.
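
Read constructively, the proof gives a "purification" procedure: starting from any optimal solution, repeat Case 2 until Case 1 applies. A hedged sketch reusing the hypothetical `build_w` and `ratio_test` helpers above (it assumes an optimal solution exists, so the unbounded branch cannot occur):

```python
def purify(A, c, x, tol=1e-9):
    """Turn a feasible solution into a basic feasible solution
    without ever decreasing the objective value c^T x."""
    while True:
        w = build_w(A, c, x, tol)
        if w is None:
            return x                 # Case 1: columns of A_K independent
        if (w > -tol).all():
            # w >= 0 with c^T w = 0 (anything else would make the LP
            # unbounded): flip the sign so some component is negative.
            w = -w
        _, x = ratio_test(x, w)      # Case 2: one more zero component
```

Applied to the solver output from the first sketch, `purify(A, c, res.x)` should return an optimal basic feasible solution with the same objective value.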

See Also

The Fundamental Theorem of Linear Programming

This theorem is used in the proof of the fundamental theorem of linear programming.


  1. Matousek. (2007). Understanding and Using Linear Programming: p46. ↩︎