

Eigenvalues of Positive Definite Matrices and the Maximum Value of Quadratic Forms

Theorem

Let the eigenpairs $\left\{ \left( \lambda_{k} , e_{k} \right) \right\}_{k=1}^{p}$ of a positive definite matrix $A \in \mathbb{R}^{p \times p}$ be ordered as $\lambda_{1} \ge \cdots \ge \lambda_{p} \ge 0$. On the unit sphere, the maximum and minimum values of the quadratic form $\mathbf{x}^{T} A \mathbf{x}$ are as follows. $$ \begin{align*} \max_{\left\| \mathbf{x} \right\| = 1} \mathbf{x}^{T} A \mathbf{x} =& \lambda_{1} & \text{, attained when } \mathbf{x} = e_{1} \\ \min_{\left\| \mathbf{x} \right\| = 1} \mathbf{x}^{T} A \mathbf{x} =& \lambda_{p} & \text{, attained when } \mathbf{x} = e_{p} \end{align*} $$ Meanwhile, the following holds for $k = 2, \cdots , p-1$: $$ \max_{\substack{\left\| \mathbf{x} \right\| = 1 \\ \mathbf{x} \perp e_{1} , \cdots , e_{k-1} }} \mathbf{x}^{T} A \mathbf{x} = \lambda_{k} \quad \text{, attained when } \mathbf{x} = e_{k} $$


Description

The theorem describes the relationship between the quadratic form and the eigenvalues for vectors satisfying $\left\| \mathbf{x} \right\| = 1$, i.e., for $\mathbf{x}$ on the unit sphere. In particular, the last formula shows that once $\mathbf{x}$ is restricted to be orthogonal to the first $(k-1)$ eigenvectors, the maximum value of the quadratic form is exactly $\lambda_{k}$.
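To make the statement concrete, here is a minimal numerical sketch in NumPy (the random positive definite matrix `A` and the sampling of unit vectors are illustrative assumptions, not part of the theorem): the quadratic form at $e_{1}$ and $e_{p}$ equals $\lambda_{1}$ and $\lambda_{p}$, and random unit vectors never fall outside $[\lambda_{p}, \lambda_{1}]$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative positive definite matrix: M M^T shifted to keep all eigenvalues positive
p = 4
M = rng.standard_normal((p, p))
A = M @ M.T + p * np.eye(p)

# Eigenpairs ordered so that lambda_1 >= ... >= lambda_p
lam, E = np.linalg.eigh(A)        # eigh returns ascending eigenvalues
lam, E = lam[::-1], E[:, ::-1]

# Quadratic form at e_1 and e_p gives lambda_1 and lambda_p
print(np.isclose(E[:, 0] @ A @ E[:, 0], lam[0]))     # True
print(np.isclose(E[:, -1] @ A @ E[:, -1], lam[-1]))  # True

# Random unit vectors stay inside [lambda_p, lambda_1]
X = rng.standard_normal((1000, p))
X /= np.linalg.norm(X, axis=1, keepdims=True)
q = np.einsum('ij,jk,ik->i', X, A, X)                # x^T A x for each row
print(q.max() <= lam[0] and q.min() >= lam[-1])      # True
```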

Proof 1

For a fixed $\mathbf{x} \ne 0$, if we set $\mathbf{x}_{1} = \mathbf{x} / \sqrt{ \mathbf{x}^{T} \mathbf{x} }$, then $\left\| \mathbf{x}_{1} \right\| = 1$, and hence $$ {{ \mathbf{x}^{T} A \mathbf{x} } \over { \mathbf{x}^{T} \mathbf{x} }} = \mathbf{x}_{1}^{T} A \mathbf{x}_{1} $$ Thus, in the proof we may work directly with the quotient ${{ \mathbf{x}^{T} A \mathbf{x} } \over { \mathbf{x}^{T} \mathbf{x} }}$ instead of enforcing the condition $\left\| \mathbf{x} \right\| = 1$.
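As a quick check of this reduction (a sketch with an assumed random positive definite `A`), the quotient at any nonzero $\mathbf{x}$ coincides with the quadratic form at the normalized vector $\mathbf{x}_{1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 4
M = rng.standard_normal((p, p))
A = M @ M.T + p * np.eye(p)     # assumed positive definite example

x = rng.standard_normal(p)      # any fixed nonzero vector
x1 = x / np.sqrt(x @ x)         # x_1 = x / sqrt(x^T x) has unit norm

# The quotient at x equals the quadratic form at the unit vector x_1
print(np.isclose((x @ A @ x) / (x @ x), x1 @ A @ x1))  # True
```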


Part 1. $\max \mathbf{x}^{T} A \mathbf{x} = \lambda_{1}$

Square root matrix of a positive definite matrix: Let the eigenpairs $\left\{ \left( \lambda_{k} , e_{k} \right) \right\}_{k=1}^{n}$ of a positive definite matrix $A$ be ordered as $\lambda_{1} > \cdots > \lambda_{n} > 0$. For an orthogonal matrix $P = \begin{bmatrix} e_{1} & \cdots & e_{n} \end{bmatrix} \in \mathbb{R}^{n \times n}$ and a diagonal matrix $\Lambda = \diag \left( \lambda_{1} , \cdots , \lambda_{n} \right)$, the square root matrix of $A$ is as follows. $$ \sqrt{A} = P \sqrt{\Lambda} P^{T} = \sum_{k=1}^{n} \sqrt{\lambda_{k}} e_{k} e_{k}^{T} $$
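The boxed lemma can also be checked numerically. The following sketch (again with an assumed random positive definite `A`) builds $\sqrt{A} = P \sqrt{\Lambda} P^{T}$ from `numpy.linalg.eigh` and confirms that it is symmetric, squares back to $A$, and agrees with the rank-one sum form:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 4
M = rng.standard_normal((p, p))
A = M @ M.T + p * np.eye(p)     # assumed positive definite example

lam, P = np.linalg.eigh(A)      # columns of P are orthonormal eigenvectors
sqrtA = P @ np.diag(np.sqrt(lam)) @ P.T

print(np.allclose(sqrtA, sqrtA.T))    # True: sqrt(A) is symmetric
print(np.allclose(sqrtA @ sqrtA, A))  # True: it squares back to A

# Rank-one sum form: sum_k sqrt(lambda_k) e_k e_k^T
sqrtA_sum = sum(np.sqrt(lam[k]) * np.outer(P[:, k], P[:, k]) for k in range(p))
print(np.allclose(sqrtA, sqrtA_sum))  # True
```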

Assuming that the orthogonal matrix $P$ and the diagonal matrix $\Lambda$ are defined as above, define the vector $\mathbf{y} \in \mathbb{R}^{p}$ as follows: $$ \mathbf{y} = \left( y_{1} , \cdots , y_{p} \right) := P^{T} \mathbf{x} $$ Since $P^{T}$ is invertible, $$ \mathbf{x} \ne \mathbf{0} \implies \mathbf{y} \ne \mathbf{0} $$ and hence, for the identity matrix $I \in \mathbb{R}^{p \times p}$, the following holds: $$ \begin{align*} & {{ \mathbf{x}^{T} A \mathbf{x} } \over { \mathbf{x}^{T} \mathbf{x} }} \\ =& {{ \mathbf{x}^{T} \sqrt{A} \sqrt{A} \mathbf{x} } \over { \mathbf{x}^{T} I \mathbf{x} }} \\ =& {{ \mathbf{x}^{T} P \sqrt{\Lambda} P^{T} P \sqrt{\Lambda} P^{T} \mathbf{x} } \over { \mathbf{x}^{T} P P^{T} \mathbf{x} }} \\ =& {{ \mathbf{y}^{T} \sqrt{\Lambda} I \sqrt{\Lambda} \mathbf{y} } \over { \mathbf{y}^{T} \mathbf{y} }} \\ =& {{ \mathbf{y}^{T} \Lambda \mathbf{y} } \over { \mathbf{y}^{T} \mathbf{y} }} \\ =& {{ \sum_{i=1}^{p} \lambda_{i} y_{i}^{2} } \over { \sum_{i=1}^{p} y_{i}^{2} }} \\ \le& \lambda_{1} {{ \sum_{i=1}^{p} y_{i}^{2} } \over { \sum_{i=1}^{p} y_{i}^{2} }} & \cdots \star \\ =& \lambda_{1} \end{align*} $$ Since $P = \begin{bmatrix} e_{1} & \cdots & e_{p} \end{bmatrix}$ with $e_{i}^{T} e_{j} = 0$ for $i \ne j$ and $e_{i}^{T} e_{i} = 1$, choosing $\mathbf{x} = e_{1}$ gives $$ \mathbf{y} = P^{T} e_{1} = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} = e_{1} $$ and hence, $$ \begin{align*} & {{ \mathbf{x}^{T} A \mathbf{x} } \over { \mathbf{x}^{T} \mathbf{x} }} \\ =& {{ \mathbf{y}^{T} \Lambda \mathbf{y} } \over { \mathbf{y}^{T} \mathbf{y} }} \\ =& {{ e_{1}^{T} \Lambda e_{1} } \over { e_{1}^{T} e_{1} }} \\ =& \lambda_{1} \end{align*} $$ Therefore, $$ \max_{\left\| \mathbf{x} \right\| = 1} \mathbf{x}^{T} A \mathbf{x} = \lambda_{1} \quad \text{, attained when } \mathbf{x} = e_{1} $$ The minimum value follows in the same way: in the step labelled $\star$, $\lambda_{1}$ is replaced by $\lambda_{p}$ and the direction of the inequality is reversed. $$ \min_{\left\| \mathbf{x} \right\| = 1} \mathbf{x}^{T} A \mathbf{x} = \lambda_{p} \quad \text{, attained when } \mathbf{x} = e_{p} $$
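The change of variables $\mathbf{y} = P^{T} \mathbf{x}$ and the bound marked $\star$ can be verified numerically as well. In the sketch below (the matrix `A` is an assumed random positive definite example), the quotient $\mathbf{x}^{T} A \mathbf{x} / \mathbf{x}^{T} \mathbf{x}$ matches $\sum_{i} \lambda_{i} y_{i}^{2} / \sum_{i} y_{i}^{2}$, stays below $\lambda_{1}$, and is attained at $\mathbf{x} = e_{1}$:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 4
M = rng.standard_normal((p, p))
A = M @ M.T + p * np.eye(p)      # assumed positive definite example

lam, P = np.linalg.eigh(A)
lam, P = lam[::-1], P[:, ::-1]   # descending: lambda_1 >= ... >= lambda_p

x = rng.standard_normal(p)       # any nonzero vector
y = P.T @ x                      # change of variables y = P^T x

quotient = (x @ A @ x) / (x @ x)
weighted = (lam @ y**2) / (y @ y)                 # sum lambda_i y_i^2 / sum y_i^2

print(np.isclose(quotient, weighted))             # True: the substitution in the proof
print(quotient <= lam[0])                         # True: the bound marked with a star
print(np.isclose(P[:, 0] @ A @ P[:, 0], lam[0]))  # True: attained at x = e_1
```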


Part 2. $\max_{\mathbf{x} \perp e_{1} , \cdots , e_{k-1} } \mathbf{x}^{T} A \mathbf{x} = \lambda_{k}$

Recall that $\mathbf{y}$ was defined as $\mathbf{y} = P^{T} \mathbf{x}$; since $P$ is an orthogonal matrix, $\mathbf{x} = P \mathbf{y}$. This $\mathbf{x}$ can be represented by the eigenvectors $e_{1} , \cdots , e_{p}$ as follows: $$ \begin{align*} \mathbf{x} =& P \mathbf{y} \\ =& y_{1} e_{1} + y_{2} e_{2} + \cdots + y_{p} e_{p} \end{align*} $$ If we impose the constraint $\mathbf{x} \perp e_{1} , \cdots , e_{k-1}$, then for every $e_{i}$ with $i < k$ the inner product is $$ \begin{align*} 0 =& e_{i}^{T} \mathbf{x} \\ =& y_{1} e_{i}^{T} e_{1} + y_{2} e_{i}^{T} e_{2} + \cdots + y_{p} e_{i}^{T} e_{p} \\ =& y_{i} e_{i}^{T} e_{i} \\ =& y_{i} \end{align*} $$ that is, $y_{1} = \cdots = y_{k-1} = 0$. Looking back at $\star$ from Part 1, $$ {{ \mathbf{x}^{T} A \mathbf{x} } \over { \mathbf{x}^{T} \mathbf{x} }} = {{ \sum_{i=k}^{p} \lambda_{i} y_{i}^{2} } \over { \sum_{i=k}^{p} y_{i}^{2} }} \le \lambda_{k} {{ \sum_{i=k}^{p} y_{i}^{2} } \over { \sum_{i=k}^{p} y_{i}^{2} }} = \lambda_{k} $$ and when we set $y_{k} = 1$ and $y_{k+1} = \cdots = y_{p} = 0$, i.e., when $\mathbf{x} = \sum_{i=1}^{p} y_{i} e_{i} = e_{k}$, this upper bound $\lambda_{k}$ is attained.
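Finally, the constrained maximum can be checked by projecting random vectors onto the orthogonal complement of $e_{1} , \cdots , e_{k-1}$ (again with an assumed random positive definite `A`; the choice $k = 3$ is arbitrary): the quadratic form never exceeds $\lambda_{k}$ there, and it is attained at $\mathbf{x} = e_{k}$.

```python
import numpy as np

rng = np.random.default_rng(4)
p = 5
M = rng.standard_normal((p, p))
A = M @ M.T + p * np.eye(p)       # assumed positive definite example

lam, P = np.linalg.eigh(A)
lam, P = lam[::-1], P[:, ::-1]    # descending eigenvalues, matching eigenvectors

k = 3                             # arbitrary choice with 2 <= k <= p - 1
Ek = P[:, :k - 1]                 # e_1, ..., e_{k-1}

# Random unit vectors orthogonal to e_1, ..., e_{k-1}
X = rng.standard_normal((1000, p))
X = X - (X @ Ek) @ Ek.T           # remove the components along e_1, ..., e_{k-1}
X /= np.linalg.norm(X, axis=1, keepdims=True)

q = np.einsum('ij,jk,ik->i', X, A, X)
print(q.max() <= lam[k - 1] + 1e-12)                          # True: never exceeds lambda_k
print(np.isclose(P[:, k - 1] @ A @ P[:, k - 1], lam[k - 1]))  # True: attained at x = e_k
```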


  1. Johnson. (2013). Applied Multivariate Statistical Analysis (6th Edition): p118~119. ↩︎