

Covariance Matrix

Definition 1

For a $p$-dimensional random vector $\mathbf{X} = \left( X_{1} , \cdots , X_{p} \right)$, the matrix $\text{Cov} (\mathbf{X})$ defined as follows is called the Covariance Matrix.

$$ \left( \text{Cov} \left( \mathbf{X} \right) \right)_{ij} := \text{Cov} \left( X_{i} , X_{j} \right) $$


Explanation

Written out in full, the definition reads as follows.

$$ \text{Cov} \left( \mathbf{X} \right) := \begin{pmatrix} \text{Var} \left( X_{1} \right) & \text{Cov} \left( X_{1} , X_{2} \right) & \cdots & \text{Cov} \left( X_{1} , X_{p} \right) \\ \text{Cov} \left( X_{2} , X_{1} \right) & \text{Var} \left( X_{2} \right) & \cdots & \text{Cov} \left( X_{2} , X_{p} \right) \\ \vdots & \vdots & \ddots & \vdots \\ \text{Cov} \left( X_{p} , X_{1} \right) & \text{Cov} \left( X_{p} , X_{2} \right) & \cdots & \text{Var} \left( X_{p} \right) \end{pmatrix} $$
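As a quick numerical illustration of the entrywise definition, here is a minimal NumPy sketch (the sample size, dimension, and data below are made up for illustration): it builds a sample covariance matrix entry by entry as $\text{Cov} \left( X_{i}, X_{j} \right)$ and checks that it matches `np.cov`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sample: n observations of a p-dimensional random vector,
# stored as the rows of X (all values here are arbitrary).
n, p = 1000, 3
X = rng.multivariate_normal(mean=np.zeros(p),
                            cov=[[2.0, 0.5, 0.0],
                                 [0.5, 1.0, 0.3],
                                 [0.0, 0.3, 1.5]],
                            size=n)

# Build the sample covariance matrix entry by entry, following the definition
# (Cov(X))_ij = Cov(X_i, X_j), with the biased (1/n) estimator.
S = np.empty((p, p))
for i in range(p):
    for j in range(p):
        S[i, j] = np.mean((X[:, i] - X[:, i].mean()) * (X[:, j] - X[:, j].mean()))

# np.cov with bias=True uses the same 1/n normalization, so the two agree.
assert np.allclose(S, np.cov(X, rowvar=False, bias=True))
```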

All covariance matrices are positive semi-definite. In other words, for every vector $\mathbf{x} \in \mathbb{R}^{p}$, the following holds.

$$ 0 \le \mathbf{x}^{T} \text{Cov} \left( \mathbf{X} \right) \mathbf{x} $$

This follows from theorem [2] below: taking $A = \mathbf{x}^{T}$ gives $\mathbf{x}^{T} \text{Cov} \left( \mathbf{X} \right) \mathbf{x} = \text{Cov} \left( \mathbf{x}^{T} \mathbf{X} \right) = \text{Var} \left( \mathbf{x}^{T} \mathbf{X} \right) \ge 0$.
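A quick way to see this numerically, as a sketch with arbitrary made-up data: the quadratic form is nonnegative for randomly drawn $\mathbf{x}$, and, equivalently, the eigenvalues of the (symmetric) sample covariance matrix are all nonnegative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary data; any sample covariance matrix will do for this check.
X = rng.standard_normal((500, 4))        # 500 observations of a 4-dimensional vector
S = np.cov(X, rowvar=False)

# x^T S x >= 0 for arbitrary vectors x ...
for _ in range(100):
    x = rng.standard_normal(4)
    assert x @ S @ x >= 0.0

# ... equivalently, all eigenvalues of the symmetric matrix S are nonnegative
# (up to floating-point tolerance).
assert np.all(np.linalg.eigvalsh(S) >= -1e-12)
```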

Theorems

  • [1]: If $\mathbf{\mu} \in \mathbb{R}^{p}$ is given as $\mathbf{\mu} := \left( EX_{1} , \cdots , EX_{p} \right)$, $$ \text{Cov} (\mathbf{X}) = E \left[ \mathbf{X} \mathbf{X}^{T} \right] - \mathbf{\mu} \mathbf{\mu}^{T} $$
  • [2]: If a matrix of constants $A \in \mathbb{R}^{k \times p}$ is given as $(A)_{ij} := a_{ij}$, $$ \text{Cov} ( A \mathbf{X}) = A \text{Cov} \left( \mathbf{X} \right) A^{T} $$

Proof

[1]

$$ \begin{align*} \text{Cov} \left( \mathbf{X} \right) =& E \left[ \left( \mathbf{X} - \mathbf{\mu} \right) \left( \mathbf{X} - \mathbf{\mu} \right)^{T} \right] \\ =& E \left[ \mathbf{X} \mathbf{X}^{T} - \mathbf{\mu} \mathbf{X}^{T} - \mathbf{X} \mathbf{\mu}^{T} + \mathbf{\mu} \mathbf{\mu}^{T} \right] \\ =& E \left[ \mathbf{X} \mathbf{X}^{T} \right] - \mathbf{\mu} E \left[ \mathbf{X}^{T} \right] - E \left[ \mathbf{X} \right] \mathbf{\mu}^{T} + E \left[ \mathbf{\mu} \mathbf{\mu}^{T} \right] \\ =& E \left[ \mathbf{X} \mathbf{X}^{T} \right] - \mathbf{\mu} \mathbf{\mu}^{T} \end{align*} $$
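The same identity holds exactly for the sample analogue, where $E[\,\cdot\,]$ is replaced by the sample average and $\mathbf{\mu}$ by the sample mean; here is a minimal check with made-up data (the biased $1/n$ normalization is needed for exact equality).

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary sample; rows of X are observations of a 3-dimensional vector.
X = rng.standard_normal((1000, 3))
mu = X.mean(axis=0)                                   # sample mean vector

cov_direct = np.cov(X, rowvar=False, bias=True)       # average of (x - mu)(x - mu)^T
cov_moment = (X.T @ X) / len(X) - np.outer(mu, mu)    # average of x x^T minus mu mu^T

assert np.allclose(cov_direct, cov_moment)
```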

[2] 2

$$ \begin{align*} \text{Cov} \left( A \mathbf{X} \right) =& E \left[ \left( A\mathbf{X} - A\mathbf{\mu} \right) \left( A\mathbf{X} - A\mathbf{\mu} \right)^{T} \right] \\ =& E \left[ A\left(\mathbf{X} -\mathbf{\mu} \right) \left( \mathbf{X} - \mathbf{\mu} \right)^{T} A^{T} \right] \\ =& A E \left[ \left(\mathbf{X} -\mathbf{\mu} \right) \left( \mathbf{X} - \mathbf{\mu} \right)^{T}\right] A^{T} \\ =& A \text{Cov}\left( \mathbf{X} \right) A^{T} \end{align*} $$
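Theorem [2] likewise holds verbatim for the sample covariance, since the proof only uses linearity; here is a minimal numerical check with an arbitrary constant matrix $A$ (all values made up).

```python
import numpy as np

rng = np.random.default_rng(2)

X = rng.standard_normal((1000, 3))      # rows are observations of a 3-dimensional vector
A = rng.standard_normal((2, 3))         # arbitrary constant 2x3 matrix

# Rows of X @ A.T are the transformed observations A x; compare Cov(AX) with A Cov(X) A^T.
lhs = np.cov(X @ A.T, rowvar=False)
rhs = A @ np.cov(X, rowvar=False) @ A.T

assert np.allclose(lhs, rhs)
```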


  1. Hogg et al. (2013). Introduction to Mathematical Statistics (7th Edition): p126. ↩︎

  2. https://stats.stackexchange.com/a/106207/172321 ↩︎