
Eigenvalues of Positive Definite Matrices and the Maximum Value of Quadratic Forms

Theorem

Let’s assume that the eigenpairs $\left\{ \left( \lambda_{k} , e_{k} \right) \right\}_{k=1}^{p}$ of a positive definite matrix $A \in \mathbb{R}^{p \times p}$ are ordered as $\lambda_{1} \ge \cdots \ge \lambda_{p} > 0$. On the unit sphere, the maximum and minimum values of the quadratic form $\mathbf{x}^{T} A \mathbf{x}$ are as follows.
$$
\begin{align*} \max_{\left\| \mathbf{x} \right\| = 1} \mathbf{x}^{T} A \mathbf{x} =& \lambda_{1} & \text{, attained when } \mathbf{x} = e_{1} \\ \min_{\left\| \mathbf{x} \right\| = 1} \mathbf{x}^{T} A \mathbf{x} =& \lambda_{p} & \text{, attained when } \mathbf{x} = e_{p} \end{align*}
$$
Meanwhile, the following holds for $k = 2, \cdots , p-1$:
$$
\max_{\substack{\left\| \mathbf{x} \right\| = 1 \\ \mathbf{x} \perp e_{1} , \cdots , e_{k-1} }} \mathbf{x}^{T} A \mathbf{x} = \lambda_{k} \quad \text{, attained when } \mathbf{x} = e_{k}
$$


Description

The theorem describes the relationship between the quadratic form and the eigenvalues for vectors satisfying $\left\| \mathbf{x} \right\| = 1$, i.e., for $\mathbf{x}$ on the unit sphere. In particular, the last formula shows that when $\mathbf{x}$ is additionally required to be orthogonal to the first $k-1$ eigenvectors, the maximum value of the quadratic form is exactly $\lambda_{k}$.
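As a quick sanity check, the following sketch (mine, not from the source; it relies only on `numpy`) samples random unit vectors and compares the quadratic form against $\lambda_{1}$ and $\lambda_{p}$:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5

# Build a random symmetric positive definite matrix A.
B = rng.standard_normal((p, p))
A = B @ B.T + p * np.eye(p)

# np.linalg.eigh returns eigenvalues in ascending order; reverse
# so that lam[0] >= ... >= lam[p-1], matching the theorem.
lam, P = np.linalg.eigh(A)
lam, P = lam[::-1], P[:, ::-1]

# Sample many unit vectors and evaluate the quadratic form x^T A x.
X = rng.standard_normal((100_000, p))
X /= np.linalg.norm(X, axis=1, keepdims=True)
q = np.einsum("ij,jk,ik->i", X, A, X)

print(q.max(), "<=", lam[0])   # sampled max never exceeds lambda_1
print(q.min(), ">=", lam[-1])  # sampled min never falls below lambda_p
print(P[:, 0] @ A @ P[:, 0])   # exactly lambda_1, attained at x = e_1
```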

Proof 1

For a fixed $\mathbf{x} \ne \mathbf{0}$, if we set $\mathbf{x}_{1} = \mathbf{x} / \sqrt{ \mathbf{x}^{T} \mathbf{x} }$, then $\left\| \mathbf{x}_{1} \right\| = 1$, and hence,
$$
{{ \mathbf{x}^{T} A \mathbf{x} } \over { \mathbf{x}^{T} \mathbf{x} }} = \mathbf{x}_{1}^{T} A \mathbf{x}_{1}
$$
Thus, in the proof, we may work directly with the Rayleigh quotient ${{ \mathbf{x}^{T} A \mathbf{x} } \over { \mathbf{x}^{T} \mathbf{x} }}$ instead of carrying the constraint $\left\| \mathbf{x} \right\| = 1$.
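This normalization step is easy to verify numerically (a minimal sketch with an arbitrarily chosen $A$ and $\mathbf{x}$):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 4
B = rng.standard_normal((p, p))
A = B @ B.T + p * np.eye(p)  # positive definite

x = rng.standard_normal(p)   # any nonzero vector
x1 = x / np.sqrt(x @ x)      # rescaled onto the unit sphere

# The Rayleigh quotient of x equals the quadratic form at x1.
print((x @ A @ x) / (x @ x))
print(x1 @ A @ x1)           # same value
```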


Part 1. $\max \mathbf{x}^{T} A \mathbf{x} = \lambda_{1}$

Square root matrix of a positive definite matrix: Let’s say the eigenpairs $\left\{ \left( \lambda_{k} , e_{k} \right) \right\}_{k=1}^{n}$ of a positive definite matrix $A$ are ordered as $\lambda_{1} > \cdots > \lambda_{n} > 0$. For an orthogonal matrix $P = \begin{bmatrix} e_{1} & \cdots & e_{n} \end{bmatrix} \in \mathbb{R}^{n \times n}$ and a diagonal matrix $\Lambda = \operatorname{diag} \left( \lambda_{1} , \cdots , \lambda_{n} \right)$, the square root matrix of $A$ is as follows.
$$
\sqrt{A} = P \sqrt{\Lambda} P^{T} = \sum_{k=1}^{n} \sqrt{\lambda_{k}} e_{k} e_{k}^{T}
$$
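This construction can be sketched in a few lines of `numpy` (the name `sqrtA` is mine, not from the source):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)   # positive definite

lam, P = np.linalg.eigh(A)    # columns of P are the orthonormal e_k

# sqrt(A) = P sqrt(Lambda) P^T, built from the spectral decomposition.
sqrtA = P @ np.diag(np.sqrt(lam)) @ P.T

print(np.allclose(sqrtA @ sqrtA, A))  # True: sqrt(A) sqrt(A) = A
print(np.allclose(sqrtA, sqrtA.T))    # True: sqrt(A) is symmetric
```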

Assuming that the orthogonal matrix $P$ and the diagonal matrix $\Lambda$ are defined as mentioned above, let’s define the vector $\mathbf{y} \in \mathbb{R}^{p}$ as follows:
$$
\mathbf{y} = \left( y_{1} , \cdots , y_{p} \right) := P^{T} \mathbf{x}
$$
Since $P$ is invertible,
$$
\mathbf{x} \ne \mathbf{0} \implies \mathbf{y} \ne \mathbf{0}
$$
and hence, for the identity matrix $I \in \mathbb{R}^{p \times p}$, the following holds:
$$
\begin{align*} & {{ \mathbf{x}^{T} A \mathbf{x} } \over { \mathbf{x}^{T} \mathbf{x} }} \\ =& {{ \mathbf{x}^{T} \sqrt{A} \sqrt{A} \mathbf{x} } \over { \mathbf{x}^{T} I \mathbf{x} }} \\ =& {{ \mathbf{x}^{T} P \sqrt{\Lambda} P^{T} P \sqrt{\Lambda} P^{T} \mathbf{x} } \over { \mathbf{x}^{T} P P^{T} \mathbf{x} }} \\ =& {{ \mathbf{y}^{T} \sqrt{\Lambda} I \sqrt{\Lambda} \mathbf{y} } \over { \mathbf{y}^{T} \mathbf{y} }} \\ =& {{ \mathbf{y}^{T} \Lambda \mathbf{y} } \over { \mathbf{y}^{T} \mathbf{y} }} \\ =& {{ \sum_{i=1}^{p} \lambda_{i} y_{i}^{2} } \over { \sum_{i=1}^{p} y_{i}^{2} }} \\ \le& \lambda_{1} {{ \sum_{i=1}^{p} y_{i}^{2} } \over { \sum_{i=1}^{p} y_{i}^{2} }} & \cdots \star \\ =& \lambda_{1} \end{align*}
$$
Since $P = \begin{bmatrix} e_{1} & \cdots & e_{p} \end{bmatrix}$ with $e_{i}^{T} e_{j} = 0$ for $i \ne j$ and $e_{i}^{T} e_{i} = 1$, choosing $\mathbf{x} = e_{1}$ gives
$$
\mathbf{y} = P^{T} e_{1} = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
$$
and hence,
$$
{{ \mathbf{x}^{T} A \mathbf{x} } \over { \mathbf{x}^{T} \mathbf{x} }} = {{ \mathbf{y}^{T} \Lambda \mathbf{y} } \over { \mathbf{y}^{T} \mathbf{y} }} = {{ \lambda_{1} \cdot 1^{2} } \over { 1^{2} }} = \lambda_{1}
$$
Therefore,
$$
\max_{\left\| \mathbf{x} \right\| = 1} \mathbf{x}^{T} A \mathbf{x} = \lambda_{1} \quad \text{, attained when } \mathbf{x} = e_{1}
$$
The minimum value is found in the same way: in the step labelled $\star$, $\lambda_{1}$ is replaced by $\lambda_{p}$ and the direction of the inequality is reversed.
$$
\min_{\left\| \mathbf{x} \right\| = 1} \mathbf{x}^{T} A \mathbf{x} = \lambda_{p} \quad \text{, attained when } \mathbf{x} = e_{p}
$$
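The change of variables $\mathbf{y} = P^{T} \mathbf{x}$ can also be checked numerically (a sketch; `numpy.linalg.eigh` stands in for the explicit spectral decomposition above):

```python
import numpy as np

rng = np.random.default_rng(3)
p = 5
B = rng.standard_normal((p, p))
A = B @ B.T + p * np.eye(p)

lam, P = np.linalg.eigh(A)
lam, P = lam[::-1], P[:, ::-1]   # descending: lam[0] = lambda_1
Lam = np.diag(lam)

x = rng.standard_normal(p)
y = P.T @ x                      # change of variables y = P^T x

# x^T A x / x^T x equals y^T Lambda y / y^T y.
print((x @ A @ x) / (x @ x), (y @ Lam @ y) / (y @ y))

# Choosing x = e_1 attains the bound lambda_1.
e1 = P[:, 0]
print(e1 @ A @ e1, lam[0])       # equal
```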


Part 2. $\max_{\mathbf{x} \perp e_{1} , \cdots , e_{k-1} } \mathbf{x}^{T} A \mathbf{x} = \lambda_{k}$

Define $\mathbf{y} = P^{T} \mathbf{x}$ as before; since $P$ is an orthogonal matrix, $P \mathbf{y} = \mathbf{x}$. This $\mathbf{x}$ can be represented in terms of the eigenvectors $e_{1} , \cdots , e_{p}$ as follows:
$$
\begin{align*} \mathbf{x} =& P \mathbf{y} \\ =& y_{1} e_{1} + y_{2} e_{2} + \cdots + y_{p} e_{p} \end{align*}
$$
If $\mathbf{x} \perp e_{1} , \cdots , e_{k-1}$, then for every $e_{i}$ with $i < k$, the inner product is
$$
\begin{align*} 0 =& e_{i}^{T} \mathbf{x} \\ =& y_{1} e_{i}^{T} e_{1} + y_{2} e_{i}^{T} e_{2} + \cdots + y_{p} e_{i}^{T} e_{p} \\ =& y_{i} e_{i}^{T} e_{i} \\ =& y_{i} \end{align*}
$$
so $y_{1} = \cdots = y_{k-1} = 0$. Looking back at $\star$ from Part 1, with the sums now running from $i = k$ and $\lambda_{i} \le \lambda_{k}$ for $i \ge k$,
$$
{{ \mathbf{x}^{T} A \mathbf{x} } \over { \mathbf{x}^{T} \mathbf{x} }} = {{ \sum_{i=k}^{p} \lambda_{i} y_{i}^{2} } \over { \sum_{i=k}^{p} y_{i}^{2} }} \le \lambda_{k} {{ \sum_{i=k}^{p} y_{i}^{2} } \over { \sum_{i=k}^{p} y_{i}^{2} }} = \lambda_{k}
$$
and the bound is attained when $y_{k} = 1$ and $y_{k+1} = \cdots = y_{p} = 0$, i.e., when $\mathbf{x} = \sum_{i=1}^{p} y_{i} e_{i} = e_{k}$.
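This constrained maximum can be checked numerically as well. The sketch below samples unit vectors from the orthogonal complement of $e_{1} , \cdots , e_{k-1}$ (the projection approach is my own choice, not from the source):

```python
import numpy as np

rng = np.random.default_rng(4)
p, k = 6, 3
B = rng.standard_normal((p, p))
A = B @ B.T + p * np.eye(p)

lam, P = np.linalg.eigh(A)
lam, P = lam[::-1], P[:, ::-1]   # lam[0] >= ... >= lam[p-1]

# Projector onto the orthogonal complement of span{e_1, ..., e_{k-1}}.
E = P[:, :k - 1]
proj = np.eye(p) - E @ E.T

X = rng.standard_normal((100_000, p)) @ proj  # x perp e_1,...,e_{k-1}
X /= np.linalg.norm(X, axis=1, keepdims=True)
q = np.einsum("ij,jk,ik->i", X, A, X)

print(q.max(), "<=", lam[k - 1])        # sampled max never exceeds lambda_k
print(P[:, k - 1] @ A @ P[:, k - 1])    # exactly lambda_k at x = e_k
```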


  1. Johnson. (2013). Applied Multivariate Statistical Analysis (6th Edition): p118~119. ↩︎