Maximum a Posteriori Estimation for Linear Regression Model in Machine Learning
Theorem
Assume that the relationship between the data $\mathbf{x}_{i} \in \mathbb{R}^{n}$ and their labels $y_{i} \in \mathbb{R}$ can be expressed by the following linear model.
$$ y_{i} = \mathbf{w}^{\mathsf{T}} \mathbf{x}_{i} + \epsilon_{i}, \qquad i = 1, \ldots, K \tag{1} $$
The parameter $\mathbf{w}_{\text{MAP}}$ that maximizes the posterior probability is as follows. For $\mathbf{y} = \begin{bmatrix} y_{1} & \cdots & y_{K} \end{bmatrix}^{\mathsf{T}}$ and $\mathbf{X} = \begin{bmatrix} \mathbf{x}_{1} & \cdots & \mathbf{x}_{K} \end{bmatrix}^{\mathsf{T}} \in \mathbb{R}^{K \times n}$,
When the prior distribution is normal: $$ \mathbf{w}_{\text{MAP}} = \left(\dfrac{1}{\sigma^{2}} \mathbf{X}^{T}\mathbf{X} + \boldsymbol{\Sigma}^{-1} \right)^{-1} \left(\dfrac{1}{\sigma^{2}} \mathbf{X}^{T} \mathbf{y} + \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu} \right) $$
In this case, $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ are the mean vector and covariance matrix of the prior distribution of $\mathbf{w}$, respectively.
When the prior distribution is Laplace:
An explicit form of the optimal solution does not exist.
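For reference, the closed form above for the normal prior can be evaluated directly with NumPy. The sketch below is only an illustration of the theorem's formula; the function name `map_normal_prior` and the use of `np.linalg.solve` (rather than forming the inverse explicitly) are choices of this example.

```python
import numpy as np

def map_normal_prior(X, y, sigma2, mu, Sigma):
    """MAP estimate for y = Xw + eps with eps ~ N(0, sigma2) and prior w ~ N(mu, Sigma)."""
    Sigma_inv = np.linalg.inv(Sigma)
    A = X.T @ X / sigma2 + Sigma_inv        # (1/sigma^2) X^T X + Sigma^{-1}
    b = X.T @ y / sigma2 + Sigma_inv @ mu   # (1/sigma^2) X^T y + Sigma^{-1} mu
    return np.linalg.solve(A, b)            # solve A w = b instead of inverting A

# example usage with synthetic data
rng = np.random.default_rng(0)
K, n, sigma2 = 50, 3, 0.25
X = rng.normal(size=(K, n))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=np.sqrt(sigma2), size=K)
w_map = map_normal_prior(X, y, sigma2, mu=np.zeros(n), Sigma=np.eye(n))
```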
Explanation
Looking at $(3)$ below, if we assume that the prior distribution of $\mathbf{w}$ is the standard normal distribution $N(\mathbf{0}, \mathbf{I})$, it becomes clear that this is a problem similar to ridge regression.
$$ \argmin_{\mathbf{w}} \left[\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} + \dfrac{1}{2} \| \mathbf{w} \|_{2}^{2} \right] $$
On the other hand, assuming the prior distribution is the Laplace distribution $\operatorname{Laplace}(0, b)$, it becomes a problem similar to lasso regression.
$$ \argmin_{\mathbf{w}} \left[\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} + \dfrac{1}{b} \| \mathbf{w} \|_{1} \right] $$
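To see the ridge connection concretely: with $\boldsymbol{\Sigma} = \mathbf{I}$ and $\boldsymbol{\mu} = \mathbf{0}$, the closed form from the theorem reduces to $(\mathbf{X}^{T}\mathbf{X} + \sigma^{2}\mathbf{I})^{-1}\mathbf{X}^{T}\mathbf{y}$, i.e. ridge regression with regularization strength $\sigma^{2}$. The sketch below checks this numerically; it assumes scikit-learn is available and that `Ridge(fit_intercept=False)` minimizes $\| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} + \alpha \| \mathbf{w} \|_{2}^{2}$, so that $\alpha = \sigma^{2}$.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
K, n, sigma2 = 100, 4, 0.5
X = rng.normal(size=(K, n))
y = X @ np.array([2.0, 0.0, -1.0, 0.5]) + rng.normal(scale=np.sqrt(sigma2), size=K)

# MAP with standard normal prior: (X^T X + sigma^2 I)^{-1} X^T y
w_map = np.linalg.solve(X.T @ X + sigma2 * np.eye(n), X.T @ y)

# ridge regression with alpha = sigma^2 (no intercept, to match the model above)
w_ridge = Ridge(alpha=sigma2, fit_intercept=False).fit(X, y).coef_

print(np.allclose(w_map, w_ridge, atol=1e-6))  # expected: True
```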
Prior Distribution = Normal Distribution
In $(1)$, $\mathbf{w} \in \mathbb{R}^{n}$ is the parameter, and $\epsilon_{i} \sim N(0, \sigma^{2})$ is Gaussian noise. Since $\epsilon_{i}$ follows $N(0, \sigma^{2})$, $y_{i} = \mathbf{w}^{\mathsf{T}} \mathbf{x}_{i} + \epsilon_{i}$ follows $N(\mathbf{w}^{\mathsf{T}} \mathbf{x}_{i}, \sigma^{2})$.
$$ y_{i} \sim N(\mathbf{w}^{\mathsf{T}} \mathbf{x}_{i}, \sigma^{2}) $$
Maximum a posteriori estimation is about finding $\mathbf{w}_{\text{MAP}}$ that satisfies the following.
$$ \mathbf{w}_{\text{MAP}} = \argmax_{\mathbf{w}} p(\mathbf{y} | \mathbf{w}, \mathbf{X}) p(\mathbf{w}) \tag{2} $$
$p(\mathbf{y} | \mathbf{w}, \mathbf{X})$ is the likelihood, and $p(\mathbf{w})$ is the prior probability. The likelihood function is as follows.
$$ p(\mathbf{y} | \mathbf{w}, \mathbf{X}) = \dfrac{1}{(2\pi \sigma^{2})^{K/2}} \exp \left[ -\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} \right] $$
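This expression comes from assuming that the noise terms $\epsilon_{i}$, and hence the $y_{i}$ given $\mathbf{X}$ and $\mathbf{w}$, are independent, so that the likelihood factorizes into univariate normal densities:
$$ p(\mathbf{y} | \mathbf{w}, \mathbf{X}) = \prod_{i=1}^{K} \dfrac{1}{\sqrt{2\pi \sigma^{2}}} \exp \left[ -\dfrac{(y_{i} - \mathbf{w}^{\mathsf{T}} \mathbf{x}_{i})^{2}}{2\sigma^{2}} \right], \qquad \sum_{i=1}^{K} (y_{i} - \mathbf{w}^{\mathsf{T}} \mathbf{x}_{i})^{2} = \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} $$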
Next, assume that the prior distribution of $\mathbf{w}$ is the following multivariate normal distribution.
$$ \mathbf{w} \sim N(\boldsymbol{\mu}, \boldsymbol{\Sigma}), \qquad p(\mathbf{w}) = \dfrac{1}{\sqrt{(2\pi)^{n} \det \boldsymbol{\Sigma}}} \exp \left[ -\dfrac{1}{2} (\mathbf{w} - \boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{w} - \boldsymbol{\mu}) \right] $$
Since the posterior is proportional to a product of exponential functions, it is more convenient to maximize its logarithm.
$$ \begin{align*} \mathbf{w}_{\text{MAP}} &= \argmax_{\mathbf{w}} \log (p(\mathbf{y} | \mathbf{w}, \mathbf{X}) p(\mathbf{w})) \\ &= \argmax_{\mathbf{w}} \left[-\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} -\dfrac{1}{2} (\mathbf{w} - \boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{w} - \boldsymbol{\mu}) \right] \\ &= \argmin_{\mathbf{w}} \left[\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} + \dfrac{1}{2} (\mathbf{w} - \boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{w} - \boldsymbol{\mu}) \right] \tag{3}\\ \end{align*} $$
Since the objective in $(3)$ is convex, $\mathbf{w}_{\text{MAP}}$ is the point at which its gradient equals $\mathbf{0}$. Let's compute the gradient.
$$ \begin{align*} & \nabla_{\mathbf{w}} \left[\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} +\dfrac{1}{2} (\mathbf{w} - \boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{w} - \boldsymbol{\mu}) \right] \\ &= \nabla_{\mathbf{w}} \left[\dfrac{1}{2\sigma^{2}} (\mathbf{y} - \mathbf{X}\mathbf{w})^{T}(\mathbf{y} - \mathbf{X}\mathbf{w}) +\dfrac{1}{2} (\mathbf{w} - \boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{w} - \boldsymbol{\mu}) \right] \\ &= \nabla_{\mathbf{w}} \left[\dfrac{1}{2\sigma^{2}} (\mathbf{y} - \mathbf{X}\mathbf{w})^{T}(\mathbf{y} - \mathbf{X}\mathbf{w})\right] + \nabla_{\mathbf{w}} \left[\dfrac{1}{2} (\mathbf{w} - \boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{w} - \boldsymbol{\mu}) \right] \\ \end{align*} $$
Refer to the differentiation rules below.
Derivatives of Vectors and Matrices
Inner product $$ \frac{ \partial f(\mathbf{x})}{ \partial \mathbf{x} } = \frac{ \partial (\mathbf{w}^{T}\mathbf{x})}{ \partial \mathbf{x} } = \frac{ \partial (\mathbf{x}^{T}\mathbf{w})}{ \partial \mathbf{x} } = \mathbf{w} $$
Norm $$ \nabla f(\mathbf{x}) = \dfrac{\partial \left\| \mathbf{x} \right\|^{2}}{\partial \mathbf{x}} = \dfrac{\partial (\mathbf{x}^{T}\mathbf{x})}{\partial \mathbf{x}} = 2\mathbf{x} $$
Quadratic form $$ \dfrac{\partial f(\mathbf{x})}{\partial \mathbf{x}} = \dfrac{\partial (\mathbf{x}^{T}\mathbf{R}\mathbf{x})}{\partial \mathbf{x}} = (\mathbf{R} + \mathbf{R}^{T})\mathbf{x} $$ If $\mathbf{R}$ is a symmetric matrix, $$ \dfrac{\partial f(\mathbf{x})}{\partial \mathbf{x}} = 2\mathbf{R}\mathbf{x} $$
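These rules are easy to verify numerically. The following sketch compares the quadratic-form rule against a central finite-difference approximation of the gradient; the random test point and tolerance are arbitrary choices of this example.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
R = rng.normal(size=(n, n))
x = rng.normal(size=n)

f = lambda v: v @ R @ v              # f(x) = x^T R x
grad_rule = (R + R.T) @ x            # (R + R^T) x

# central finite-difference approximation of the gradient
eps = 1e-6
grad_fd = np.array([
    (f(x + eps * e) - f(x - eps * e)) / (2 * eps)
    for e in np.eye(n)
])

print(np.allclose(grad_rule, grad_fd, atol=1e-5))  # expected: True
```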
Computing the derivative of the first term, we have the following.
$$ \begin{align*} &\nabla_{\mathbf{w}} \left[\dfrac{1}{2\sigma^{2}} (\mathbf{y} - \mathbf{X}\mathbf{w})^{T}(\mathbf{y} - \mathbf{X}\mathbf{w})\right] \\ &=\nabla_{\mathbf{w}} \left[\dfrac{1}{2\sigma^{2}} \left( \mathbf{y}^{T}\mathbf{y} - 2\mathbf{y}^{T}\mathbf{X}\mathbf{w} + \mathbf{w}^{T}\mathbf{X}^{T}\mathbf{X}\mathbf{w} \right) \right] \\ &= \dfrac{1}{2\sigma^{2}}\left[- 2\mathbf{X}^{T}\mathbf{y} + 2\mathbf{X}^{T}\mathbf{X}\mathbf{w} \right] \\ &= \dfrac{1}{\sigma^{2}}\left[-\mathbf{X}^{T}\mathbf{y} + \mathbf{X}^{T}\mathbf{X}\mathbf{w} \right] \\ &= -\dfrac{1}{\sigma^{2}} \mathbf{X}^{T} \left(\mathbf{y} - \mathbf{X}\mathbf{w} \right) \\ \end{align*} $$
For the second term, since the covariance matrix is symmetric and the inverse of a symmetric matrix is also symmetric, the quadratic-form rule gives the following.
$$ \nabla_{\mathbf{w}} \left[\dfrac{1}{2} (\mathbf{w} - \boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{w} - \boldsymbol{\mu}) \right] = \boldsymbol{\Sigma}^{-1}(\mathbf{w} - \boldsymbol{\mu}) $$
Setting the gradient to $\mathbf{0}$ at $\mathbf{w} = \mathbf{w}_{\text{MAP}}$, we obtain the following.
$$ -\dfrac{1}{\sigma^{2}} \mathbf{X}^{T} \left(\mathbf{y} - \mathbf{X}\mathbf{w}_{\text{MAP}} \right) + \boldsymbol{\Sigma}^{-1}(\mathbf{w}_{\text{MAP}} - \boldsymbol{\mu}) = \mathbf{0} $$
Solving the above equation for $\mathbf{w}_{\text{MAP}}$ yields the following.
$$ \begin{align*} && - \dfrac{1}{\sigma^{2}} \mathbf{X}^{T} \mathbf{y} + \dfrac{1}{\sigma^{2}} \mathbf{X}^{T}\mathbf{X}\mathbf{w}_{\text{MAP}} + \boldsymbol{\Sigma}^{-1} \mathbf{w}_{\text{MAP}} - \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu} = \mathbf{0} \\ \implies && \left(\dfrac{1}{\sigma^{2}} \mathbf{X}^{T}\mathbf{X} + \boldsymbol{\Sigma}^{-1} \right) \mathbf{w}_{\text{MAP}} = \dfrac{1}{\sigma^{2}} \mathbf{X}^{T} \mathbf{y} + \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu} \\ \implies && \mathbf{w}_{\text{MAP}} = \left(\dfrac{1}{\sigma^{2}} \mathbf{X}^{T}\mathbf{X} + \boldsymbol{\Sigma}^{-1} \right)^{-1} \left(\dfrac{1}{\sigma^{2}} \mathbf{X}^{T} \mathbf{y} + \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu} \right) \end{align*} $$
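As a sanity check on this derivation, the sketch below verifies numerically that the gradient computed above vanishes at the closed-form $\mathbf{w}_{\text{MAP}}$; the synthetic data and prior parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
K, n, sigma2 = 80, 4, 0.3
X = rng.normal(size=(K, n))
y = X @ rng.normal(size=n) + rng.normal(scale=np.sqrt(sigma2), size=K)
mu = np.ones(n)
Sigma = 2.0 * np.eye(n)
Sigma_inv = np.linalg.inv(Sigma)

# closed-form MAP estimate
w_map = np.linalg.solve(X.T @ X / sigma2 + Sigma_inv,
                        X.T @ y / sigma2 + Sigma_inv @ mu)

# gradient of (3): -(1/sigma^2) X^T (y - Xw) + Sigma^{-1} (w - mu)
grad = -X.T @ (y - X @ w_map) / sigma2 + Sigma_inv @ (w_map - mu)
print(np.allclose(grad, 0.0, atol=1e-8))  # expected: True
```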
Prior Distribution = Laplace Distribution
Except for the prior distribution, which is now a Laplace distribution, the assumptions and calculations are the same as above. Maximum a posteriori estimation is about finding $\mathbf{w}_{\text{MAP}}$ that satisfies the following.
$$ \mathbf{w}_{\text{MAP}} = \argmax_{\mathbf{w}} p(\mathbf{y} | \mathbf{w}, \mathbf{X}) p(\mathbf{w}) $$
$p(\mathbf{y} | \mathbf{w}, \mathbf{X})$ is the likelihood, and $p(\mathbf{w})$ is the prior probability. The likelihood function is as follows.
$$ p(\mathbf{y} | \mathbf{w}, \mathbf{X}) = \dfrac{1}{(2\pi \sigma^{2})^{K/2}} \exp \left[ -\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} \right] $$
Assume that, for the prior distribution of $\mathbf{w}$, each component $w_{i}$ independently follows the Laplace distribution $\operatorname{Laplace}(\mu, b)$.
$$ p(\mathbf{w}) = \prod_{i=1}^{n} \dfrac{1}{2b} \exp \left( -\dfrac{|w_{i} - \mu|}{b} \right) $$ $$ \implies \log p(\mathbf{w}) = -n\log 2b - \sum_{i=1}^{n} \left[ \dfrac{|w_{i} - \mu|}{b} \right] $$
As before, since the posterior is proportional to a product of exponential functions, it is more convenient to maximize its logarithm.
$$ \begin{align*} \mathbf{w}_{\text{MAP}} &= \argmax_{\mathbf{w}} \log (p(\mathbf{y} | \mathbf{w}, \mathbf{X}) p(\mathbf{w})) \\ &= \argmax_{\mathbf{w}} \left[-\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} - \sum_{i=1}^{n} \dfrac{|w_{i} - \mu|}{b} \right] \\ &= \argmin_{\mathbf{w}} \left[\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} + \dfrac{1}{b} \| \mathbf{w} - \mu\mathbf{1} \|_{1} \right] \\ \end{align*} $$
Here, $\mathbf{1} \in \mathbb{R}^{n}$ is a vector with all elements being 1. If we let $\mu = 0$, it becomes a form similar to lasso regression.
$$ \argmin_{\mathbf{w}} \left[\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} + \dfrac{1}{b} \| \mathbf{w} \|_{1} \right] $$
Unfortunately, in this case, it is known that there is no closed-form expression for the optimal solution.
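Although no closed form exists, the objective is convex and can be minimized iteratively. The following is a minimal proximal gradient (ISTA-style) sketch for the $\mu = 0$ case; the function names, step size, and iteration count are choices of this example, not part of the text above.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def map_laplace_prior(X, y, sigma2, b, n_iter=5000):
    """Minimize (1/(2*sigma2)) ||y - Xw||_2^2 + (1/b) ||w||_1 by proximal gradient."""
    n = X.shape[1]
    w = np.zeros(n)
    # step size 1/L, where L = ||X^T X||_2 / sigma2 bounds the gradient's Lipschitz constant
    L = np.linalg.norm(X.T @ X, 2) / sigma2
    step = 1.0 / L
    for _ in range(n_iter):
        grad = -X.T @ (y - X @ w) / sigma2      # gradient of the smooth (least-squares) part
        w = soft_threshold(w - step * grad, step / b)
    return w

# example: w_sparse = map_laplace_prior(X, y, sigma2=0.5, b=0.1)
```

Smaller $b$ (a more sharply peaked prior) corresponds to stronger $\ell_{1}$ regularization and drives more coefficients exactly to zero, which is the usual lasso behavior.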