Linear Transformations of Multivariate Normal Distributions

Theorem 1

Normality of Linear Transformations

For a matrix $A \in \mathbb{R}^{m \times n}$ and a vector $\mathbf{b} \in \mathbb{R}^{m}$, if a random vector $\mathbf{X} \sim N_{n} \left( \mu , \Sigma \right)$ follows a multivariate normal distribution, then its linear transformation $\mathbf{Y} = A \mathbf{X} + \mathbf{b}$ also follows a multivariate normal distribution, namely $N_{m} \left( A \mu + \mathbf{b} , A \Sigma A^{T} \right)$.
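For intuition, here is a minimal numerical sketch (not from the cited text): it samples $\mathbf{X} \sim N_{3} \left( \mu, \Sigma \right)$ for arbitrarily chosen $\mu$, $\Sigma$, $A$, and $\mathbf{b}$, applies $\mathbf{Y} = A \mathbf{X} + \mathbf{b}$, and compares the sample mean and covariance of $\mathbf{Y}$ with $A \mu + \mathbf{b}$ and $A \Sigma A^{T}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary parameters for illustration (n = 3, m = 2)
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 0.5]])
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])
b = np.array([3.0, -1.0])

# Sample X ~ N_3(mu, Sigma) and transform: Y = A X + b
X = rng.multivariate_normal(mu, Sigma, size=200_000)   # shape (N, 3)
Y = X @ A.T + b                                        # shape (N, 2)

# Sample moments of Y should be close to A mu + b and A Sigma A^T
print(Y.mean(axis=0))           # ≈ A @ mu + b
print(np.cov(Y, rowvar=False))  # ≈ A @ Sigma @ A.T
print(A @ mu + b)
print(A @ Sigma @ A.T)
```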

Normality of Marginal Distributions

$$ \begin{align*} \mathbf{X} =& \begin{bmatrix} \mathbf{X}_{1} \\ \mathbf{X}_{2} \end{bmatrix} & : \Omega \to \mathbb{R}^{n} \\ \mu =& \begin{bmatrix} \mu_{1} \\ \mu_{2} \end{bmatrix} & \in \mathbb{R}^{n} \\ \Sigma =& \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix} & \in \mathbb{R}^{n \times n} \end{align*} $$ Suppose $\mathbf{X}$, $\mu$, and $\Sigma$ are partitioned in block form as above, with $\mathbf{X}_{1} : \Omega \to \mathbb{R}^{m}$, $\mu_{1} \in \mathbb{R}^{m}$, and $\Sigma_{11} \in \mathbb{R}^{m \times m}$. If $\mathbf{X} \sim N_{n} \left( \mu, \Sigma \right)$, then the marginal random vector $\mathbf{X}_{1}$ follows the multivariate normal distribution $N_{m} \left( \mu_{1} , \Sigma_{11} \right)$.

Proof

Linear Transformation

Moment Generating Function of Multivariate Normal Distribution: The moment generating function of $X \sim N_{p} \left( \mu , \Sigma \right)$ is as follows. $$ M_{X} \left( \mathbf{t} \right) = \exp \left( \mathbf{t}^{T} \mu + {{ 1 } \over { 2 }} \mathbf{t}^{T} \Sigma \mathbf{t} \right) \qquad , \mathbf{t} \in \mathbb{R}^{p} $$

It follows directly from the moment generating function of the multivariate normal distribution. Since $\exp \left( \mathbf{t}^{T} \mathbf{b} \right)$ is a constant, it can be pulled out of the expectation, and the moment generating function of $\mathbf{Y}$ is as follows. $$ \begin{align*} M_{\mathbf{Y}} \left( \mathbf{t} \right) =& E \left[ \exp \left( \mathbf{t}^{T} \mathbf{Y} \right) \right] \\ =& E \left[ \exp \left( \mathbf{t}^{T} \left( A \mathbf{X} + \mathbf{b} \right) \right) \right] \\ =& \exp \left( \mathbf{t}^{T} \mathbf{b} \right) E \left[ \exp \left( \mathbf{t}^{T} A \mathbf{X} \right) \right] \\ =& \exp \left( \mathbf{t}^{T} \mathbf{b} \right) E \left[ \exp \left( \left( A^{T} \mathbf{t} \right)^{T} \mathbf{X} \right) \right] \\ =& \exp \left( \mathbf{t}^{T} \mathbf{b} \right) \exp \left( \left( A^{T} \mathbf{t} \right)^{T} \mu + {{ 1 } \over { 2 }} \left( A^{T} \mathbf{t} \right)^{T} \Sigma \left( A^{T} \mathbf{t} \right) \right) \\ =& \exp \left( \mathbf{t}^{T} \mathbf{b} \right) \exp \left( \mathbf{t}^{T} A \mu + {{ 1 } \over { 2 }} \mathbf{t}^{T} A \Sigma A^{T} \mathbf{t} \right) \\ =& \exp \left( \mathbf{t}^{T} \left( A \mu + \mathbf{b} \right) + {{ 1 } \over { 2 }} \mathbf{t}^{T} A \Sigma A^{T} \mathbf{t} \right) \end{align*} $$

This is identical to the moment generating function of $N_{m} \left( A \mu + \mathbf{b} , A \Sigma A^{T} \right)$, so by the uniqueness of moment generating functions, $\mathbf{Y} \sim N_{m} \left( A \mu + \mathbf{b} , A \Sigma A^{T} \right)$.
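As a sanity check (not part of the proof), the closed form above can be compared with an empirical estimate of the moment generating function; the parameters and the evaluation point $\mathbf{t}$ below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary parameters for illustration (n = 3, m = 2)
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 0.5]])
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])
b = np.array([3.0, -1.0])
t = np.array([0.2, -0.1])   # fixed point at which to evaluate the MGF

Y = rng.multivariate_normal(mu, Sigma, size=500_000) @ A.T + b

# Empirical MGF  E[exp(t^T Y)]  vs  closed form  exp(t^T(A mu + b) + t^T A Sigma A^T t / 2)
empirical = np.mean(np.exp(Y @ t))
closed_form = np.exp(t @ (A @ mu + b) + 0.5 * (t @ (A @ Sigma @ A.T) @ t))
print(empirical, closed_form)   # the two values should nearly agree
```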

Marginal Distribution

This is a straightforward corollary of the theorem above. With the identity matrix $I_{m} \in \mathbb{R}^{m \times m}$ and the zero matrix $O_{m \times (n-m)} \in \mathbb{R}^{m \times (n-m)}$, define the matrix $A \in \mathbb{R}^{m \times n}$ as $$ A = \begin{bmatrix} I_{m} & O_{m \times (n-m)} \end{bmatrix} $$ so that $$ \mathbf{X}_{1} = A \mathbf{X} $$ This mapping, which drops some components of the vector, is also called the natural projection. Applying the theorem with $\mathbf{b} = \mathbf{0}$ gives $A \mu = \mu_{1}$ and $A \Sigma A^{T} = \Sigma_{11}$, hence $\mathbf{X}_{1} \sim N_{m} \left( \mu_{1} , \Sigma_{11} \right)$.
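As a minimal sketch (with the same kind of arbitrary parameters as above), writing out $A = \begin{bmatrix} I_{m} & O \end{bmatrix}$ explicitly shows that $A \mu$ and $A \Sigma A^{T}$ pick out exactly $\mu_{1}$ and $\Sigma_{11}$.

```python
import numpy as np

# Arbitrary illustrative parameters (n = 3); take the first m = 2 components
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 0.5]])
m, n = 2, 3

# Natural projection A = [ I_m  O_{m x (n-m)} ]
A = np.hstack([np.eye(m), np.zeros((m, n - m))])

# A mu picks out mu_1, and A Sigma A^T picks out the upper-left block Sigma_11
print(A @ mu)              # -> mu[:m]
print(A @ Sigma @ A.T)     # -> Sigma[:m, :m]
```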


  1. Hogg et al. (2013). Introduction to Mathematical Statistics (7th Edition): p183. ↩︎