Maximum a Posteriori Estimation for Linear Regression Model in Machine Learning

Theorem

Assume that the relationship between data $\mathbf{x}_{i} \in \mathbb{R}^{n}$ and its labels $y_{i} \in \mathbb{R}$ can be expressed by the following linear model.

$$ y_{i} = \mathbf{w}^{\mathsf{T}} \mathbf{x}_{i} + \epsilon_{i}, \qquad i = 1, \ldots, K \tag{1} $$

The parameter $\mathbf{w}_{\text{MAP}}$ that maximizes the posterior probability is as follows. For $\mathbf{y} = \begin{bmatrix} y_{1} & \cdots & y_{K} \end{bmatrix}^{\mathsf{T}}$ and $\mathbf{X} = \begin{bmatrix} \mathbf{x}_{1} & \cdots & \mathbf{x}_{K} \end{bmatrix}^{\mathsf{T}} \in \mathbb{R}^{K \times n}$,

  • When the prior distribution is normal: $$ \mathbf{w}_{\text{MAP}} = \left(\dfrac{1}{\sigma^{2}} \mathbf{X}^{T}\mathbf{X} + \boldsymbol{\Sigma}^{-1} \right)^{-1} \left(\dfrac{1}{\sigma^{2}} \mathbf{X}^{T} \mathbf{y} + \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu} \right) $$

    In this case, $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ are the mean vector and covariance matrix of the prior distribution of $\mathbf{w}$, respectively. A numerical sketch of this formula is given right after this list.

  • When the prior distribution is Laplace:

    An explicit form of the optimal solution does not exist.
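
For the normal-prior case, the following is a minimal NumPy sketch of the closed-form formula above. The data-generating setup, the seed, and the names `sigma2`, `mu`, `Sigma` are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: K samples of dimension n (assumed setup for illustration)
K, n = 100, 5
X = rng.normal(size=(K, n))          # rows are x_i^T, so X is K x n
w_true = rng.normal(size=n)
sigma2 = 0.5**2                      # noise variance sigma^2
y = X @ w_true + rng.normal(scale=np.sqrt(sigma2), size=K)

# Normal prior w ~ N(mu, Sigma)
mu = np.zeros(n)
Sigma = np.eye(n)
Sigma_inv = np.linalg.inv(Sigma)

# Closed-form MAP estimate:
# w_MAP = (X^T X / sigma^2 + Sigma^{-1})^{-1} (X^T y / sigma^2 + Sigma^{-1} mu)
A = X.T @ X / sigma2 + Sigma_inv
rhs = X.T @ y / sigma2 + Sigma_inv @ mu
w_map = np.linalg.solve(A, rhs)

print(w_map)
```

Solving the linear system with `np.linalg.solve` rather than explicitly forming the inverse of the bracketed matrix is the usual, numerically safer choice.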

Explanation

Looking at $(3)$ below, if we assume that the prior distribution of $\mathbf{w}$ is the standard normal distribution $N(\mathbf{0}, \mathbf{I})$, it becomes clear that this is a problem similar to ridge regression.

$$ \argmin_{\mathbf{w}} \left[\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} + \dfrac{1}{2} \| \mathbf{w} \|_{2}^{2} \right] $$

On the other hand, if we assume that the prior distribution is the Laplace distribution $\operatorname{Laplace}(0, b)$, it becomes a problem similar to lasso regression.

$$ \argmin_{\mathbf{w}} \left[\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} + \dfrac{1}{b} \| \mathbf{w} \|_{1} \right] $$
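
In particular, with $\boldsymbol{\mu} = \mathbf{0}$ and $\boldsymbol{\Sigma} = \mathbf{I}$ the closed form from the theorem coincides with the ridge solution with regularization strength $\sigma^{2}$. The sketch below checks this numerically; it assumes scikit-learn is available, and the synthetic data and seed are arbitrary choices for the example.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
K, n = 200, 4
X = rng.normal(size=(K, n))
sigma2 = 0.3**2
y = X @ rng.normal(size=n) + rng.normal(scale=np.sqrt(sigma2), size=K)

# MAP with prior N(0, I): w = (X^T X / sigma^2 + I)^{-1} (X^T y / sigma^2)
w_map = np.linalg.solve(X.T @ X / sigma2 + np.eye(n), X.T @ y / sigma2)

# Ridge minimizes ||y - Xw||^2 + alpha ||w||^2, which is the MAP objective
# multiplied by 2*sigma^2, i.e. alpha = sigma^2
w_ridge = Ridge(alpha=sigma2, fit_intercept=False).fit(X, y).coef_

print(np.allclose(w_map, w_ridge))   # expected: True (up to solver tolerance)
```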

Prior Distribution = Normal Distribution

In $(1)$, $\mathbf{w} \in \mathbb{R}^{n}$ is a parameter, and $\epsilon_{i} \sim N(0, \sigma^{2})$ is Gaussian noise. Since $\epsilon_{i}$ follows $N(0, \sigma^{2})$, $y_{i} = \mathbf{w}^{\mathsf{T}} \mathbf{x}_{i} + \epsilon_{i}$ follows $N(\mathbf{w}^{\mathsf{T}} \mathbf{x}_{i}, \sigma^{2})$.

$$ y_{i} \sim N(\mathbf{w}^{\mathsf{T}} \mathbf{x}_{i}, \sigma^{2}) $$

Maximum a posteriori estimation is about finding the $\mathbf{w}_{\text{MAP}}$ that satisfies the following.

$$ \mathbf{w}_{\text{MAP}} = \argmax_{\mathbf{w}} p(\mathbf{y} | \mathbf{w}, \mathbf{X}) p(\mathbf{w}) \tag{2} $$

$p(\mathbf{y} | \mathbf{w}, \mathbf{X})$ is the likelihood, and $p(\mathbf{w})$ is the prior probability. Since the observations are assumed independent, the likelihood function is as follows.

$$ p(\mathbf{y} | \mathbf{w}, \mathbf{X}) = \dfrac{1}{(2\pi \sigma^{2})^{K/2}} \exp \left[ -\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} \right] $$

And let us assume that the prior distribution of $\mathbf{w}$ is the following multivariate normal distribution.

$$ \mathbf{w} \sim N(\boldsymbol{\mu}, \boldsymbol{\Sigma}), \qquad p(\mathbf{w}) = \dfrac{1}{\sqrt{(2\pi)^{n} \det \boldsymbol{\Sigma}}} \exp \left[ -\dfrac{1}{2} (\mathbf{w} - \boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{w} - \boldsymbol{\mu}) \right] $$

Since the posterior probability is expressed in terms of exponential functions, it is more convenient to work with its logarithm.

$$
\begin{align*}
\mathbf{w}_{\text{MAP}} &= \argmax_{\mathbf{w}} \log (p(\mathbf{y} | \mathbf{w}, \mathbf{X}) p(\mathbf{w})) \\
&= \argmax_{\mathbf{w}} \left[ -\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} - \dfrac{1}{2} (\mathbf{w} - \boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{w} - \boldsymbol{\mu}) \right] \\
&= \argmin_{\mathbf{w}} \left[ \dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} + \dfrac{1}{2} (\mathbf{w} - \boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{w} - \boldsymbol{\mu}) \right] \tag{3}
\end{align*}
$$

Thus, the $\mathbf{w}$ at which the gradient of the expression above equals $\mathbf{0}$ is $\mathbf{w}_{\text{MAP}}$. Let us compute the gradient.

$$
\begin{align*}
& \nabla_{\mathbf{w}} \left[\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} + \dfrac{1}{2} (\mathbf{w} - \boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{w} - \boldsymbol{\mu}) \right] \\
&= \nabla_{\mathbf{w}} \left[\dfrac{1}{2\sigma^{2}} (\mathbf{y} - \mathbf{X}\mathbf{w})^{T}(\mathbf{y} - \mathbf{X}\mathbf{w}) + \dfrac{1}{2} (\mathbf{w} - \boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{w} - \boldsymbol{\mu}) \right] \\
&= \nabla_{\mathbf{w}} \left[\dfrac{1}{2\sigma^{2}} (\mathbf{y} - \mathbf{X}\mathbf{w})^{T}(\mathbf{y} - \mathbf{X}\mathbf{w}) \right] + \nabla_{\mathbf{w}} \left[\dfrac{1}{2} (\mathbf{w} - \boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{w} - \boldsymbol{\mu}) \right]
\end{align*}
$$

Refer to the differentiation rules below.

Derivatives of Vectors and Matrices

  • Inner product: $\dfrac{ \partial f(\mathbf{x})}{ \partial \mathbf{x} } = \dfrac{ \partial (\mathbf{w}^{T}\mathbf{x})}{ \partial \mathbf{x} } = \dfrac{ \partial (\mathbf{x}^{T}\mathbf{w})}{ \partial \mathbf{x} } = \mathbf{w}$

  • Norm: $\nabla f(\mathbf{x}) = \dfrac{\partial \left\| \mathbf{x} \right\|^{2}}{\partial \mathbf{x}} = \dfrac{\partial (\mathbf{x}^{T}\mathbf{x})}{\partial \mathbf{x}} = 2\mathbf{x}$

  • Quadratic form: $\dfrac{\partial f(\mathbf{x})}{\partial \mathbf{x}} = \dfrac{\partial (\mathbf{x}^{T}\mathbf{R}\mathbf{x})}{\partial \mathbf{x}} = (\mathbf{R} + \mathbf{R}^{T})\mathbf{x}$. If $\mathbf{R}$ is a symmetric matrix, $\dfrac{\partial f(\mathbf{x})}{\partial \mathbf{x}} = 2\mathbf{R}\mathbf{x}$.
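
As a quick sanity check of the quadratic-form rule above (the one used for the prior term), here is a small finite-difference comparison; the random matrix, dimension, and step size are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
R = rng.normal(size=(n, n))          # a general (not necessarily symmetric) matrix
x = rng.normal(size=n)

f = lambda v: v @ R @ v              # f(x) = x^T R x

# Analytic gradient from the rule: (R + R^T) x
grad_analytic = (R + R.T) @ x

# Central finite differences along each coordinate direction
eps = 1e-6
grad_fd = np.array([
    (f(x + eps * e) - f(x - eps * e)) / (2 * eps)
    for e in np.eye(n)
])

print(np.max(np.abs(grad_analytic - grad_fd)))   # should be roughly 1e-8 or below
```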

Computing the derivative, starting with the first term, we have the following.

$$
\begin{align*}
&\nabla_{\mathbf{w}} \left[\dfrac{1}{2\sigma^{2}} (\mathbf{y} - \mathbf{X}\mathbf{w})^{T}(\mathbf{y} - \mathbf{X}\mathbf{w}) \right] \\
&= \nabla_{\mathbf{w}} \left[\dfrac{1}{2\sigma^{2}} \left( \mathbf{y}^{T}\mathbf{y} - 2\mathbf{y}^{T}\mathbf{X}\mathbf{w} + \mathbf{w}^{T}\mathbf{X}^{T}\mathbf{X}\mathbf{w} \right) \right] \\
&= \dfrac{1}{2\sigma^{2}} \left[ -2\mathbf{X}^{T}\mathbf{y} + 2\mathbf{X}^{T}\mathbf{X}\mathbf{w} \right] \\
&= \dfrac{1}{\sigma^{2}} \left[ -\mathbf{X}^{T}\mathbf{y} + \mathbf{X}^{T}\mathbf{X}\mathbf{w} \right] \\
&= -\dfrac{1}{\sigma^{2}} \mathbf{X}^{T} \left(\mathbf{y} - \mathbf{X}\mathbf{w} \right)
\end{align*}
$$

Calculating the second term yields the following. Since the covariance matrix is symmetric and the inverse of a symmetric matrix is also symmetric,

$$ \nabla_{\mathbf{w}} \left[\dfrac{1}{2} (\mathbf{w} - \boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{w} - \boldsymbol{\mu}) \right] = \boldsymbol{\Sigma}^{-1}(\mathbf{w} - \boldsymbol{\mu}) $$

Thus, setting the total gradient to $\mathbf{0}$ at $\mathbf{w}_{\text{MAP}}$, we obtain the following.

$$ -\dfrac{1}{\sigma^{2}} \mathbf{X}^{T} \left(\mathbf{y} - \mathbf{X}\mathbf{w}_{\text{MAP}} \right) + \boldsymbol{\Sigma}^{-1}(\mathbf{w}_{\text{MAP}} - \boldsymbol{\mu}) = \mathbf{0} $$

Solving the above equation for $\mathbf{w}_{\text{MAP}}$ yields:

$$
\begin{align*}
&& - \dfrac{1}{\sigma^{2}} \mathbf{X}^{T} \mathbf{y} + \dfrac{1}{\sigma^{2}} \mathbf{X}^{T}\mathbf{X}\mathbf{w}_{\text{MAP}} + \boldsymbol{\Sigma}^{-1} \mathbf{w}_{\text{MAP}} - \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu} = \mathbf{0} \\
\implies && \left(\dfrac{1}{\sigma^{2}} \mathbf{X}^{T}\mathbf{X} + \boldsymbol{\Sigma}^{-1} \right) \mathbf{w}_{\text{MAP}} = \dfrac{1}{\sigma^{2}} \mathbf{X}^{T} \mathbf{y} + \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu} \\
\implies && \mathbf{w}_{\text{MAP}} = \left(\dfrac{1}{\sigma^{2}} \mathbf{X}^{T}\mathbf{X} + \boldsymbol{\Sigma}^{-1} \right)^{-1} \left(\dfrac{1}{\sigma^{2}} \mathbf{X}^{T} \mathbf{y} + \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu} \right)
\end{align*}
$$
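
To double-check the derivation, one can verify numerically that the derived gradient vanishes at the closed-form $\mathbf{w}_{\text{MAP}}$. The sketch below does this on synthetic data; the prior mean, the covariance construction, and the seed are assumptions made only for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
K, n = 150, 5
X = rng.normal(size=(K, n))
sigma2 = 0.4**2
y = X @ rng.normal(size=n) + rng.normal(scale=np.sqrt(sigma2), size=K)

mu = rng.normal(size=n)                        # arbitrary prior mean
A_prior = rng.normal(size=(n, n))
Sigma = A_prior @ A_prior.T + n * np.eye(n)    # a positive definite covariance
Sigma_inv = np.linalg.inv(Sigma)

# Closed-form MAP estimate from the derivation above
w_map = np.linalg.solve(X.T @ X / sigma2 + Sigma_inv,
                        X.T @ y / sigma2 + Sigma_inv @ mu)

# Gradient of the objective in (3): -(1/sigma^2) X^T (y - X w) + Sigma^{-1} (w - mu)
grad = -X.T @ (y - X @ w_map) / sigma2 + Sigma_inv @ (w_map - mu)

print(np.max(np.abs(grad)))                    # should be numerically close to zero
```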

Prior Distribution = Laplace Distribution

The assumptions and calculations are the same as above, except that the prior distribution is the Laplace distribution. Maximum a posteriori estimation is about finding the $\mathbf{w}_{\text{MAP}}$ that satisfies the following.

$$ \mathbf{w}_{\text{MAP}} = \argmax_{\mathbf{w}} p(\mathbf{y} | \mathbf{w}, \mathbf{X}) p(\mathbf{w}) $$

$p(\mathbf{y} | \mathbf{w}, \mathbf{X})$ is the likelihood, and $p(\mathbf{w})$ is the prior probability. The likelihood function is as follows.

$$ p(\mathbf{y} | \mathbf{w}, \mathbf{X}) = \dfrac{1}{(2\pi \sigma^{2})^{K/2}} \exp \left[ -\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} \right] $$

For the prior distribution of $\mathbf{w}$, assume that each $w_{i}$ independently follows the Laplace distribution $\operatorname{Laplace}(\mu, b)$.

$$ p(\mathbf{w}) = \prod_{i=1}^{n} \dfrac{1}{2b} \exp \left( -\dfrac{|w_{i} - \mu|}{b} \right) \implies \log p(\mathbf{w}) = -n \log (2b) - \sum_{i=1}^{n} \dfrac{|w_{i} - \mu|}{b} $$

Since the posterior probability is expressed in terms of exponential functions, it is more convenient to work with its logarithm.

$$
\begin{align*}
\mathbf{w}_{\text{MAP}} &= \argmax_{\mathbf{w}} \log (p(\mathbf{y} | \mathbf{w}, \mathbf{X}) p(\mathbf{w})) \\
&= \argmax_{\mathbf{w}} \left[ -\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} - \sum_{i=1}^{n} \dfrac{|w_{i} - \mu|}{b} \right] \\
&= \argmin_{\mathbf{w}} \left[ \dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} + \dfrac{1}{b} \| \mathbf{w} - \mu\mathbf{1} \|_{1} \right]
\end{align*}
$$

Here, $\mathbf{1} \in \mathbb{R}^{n}$ is the vector with all elements equal to $1$. If we let $\mu = 0$, this takes a form similar to lasso regression.

$$ \argmin_{\mathbf{w}} \left[\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} + \dfrac{1}{b} \| \mathbf{w} \|_{1} \right] $$

Unfortunately, in this case, it is known that there is no [closed form for the optimal solution](2571/#최적해-6), so the estimate has to be computed numerically, as sketched below.
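
A minimal sketch of one way to do this, using proximal gradient descent (ISTA-style soft-thresholding) on the objective with $\mu = 0$, is shown below. The step size, iteration count, Laplace scale `b`, and data setup are assumptions for the example, not part of the derivation above.

```python
import numpy as np

rng = np.random.default_rng(4)
K, n = 200, 8
X = rng.normal(size=(K, n))
sigma2 = 0.3**2
b = 0.5                              # scale of the Laplace prior Laplace(0, b)
w_sparse = rng.normal(size=n) * (rng.random(n) < 0.5)   # sparse ground truth
y = X @ w_sparse + rng.normal(scale=np.sqrt(sigma2), size=K)

# Objective: f(w) = (1/(2*sigma^2)) ||y - Xw||^2 + (1/b) ||w||_1
lam = 1.0 / b                                  # weight of the l1 term
step = sigma2 / np.linalg.norm(X.T @ X, 2)     # 1 / Lipschitz constant of the smooth part

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

w = np.zeros(n)
for _ in range(5000):
    grad_smooth = X.T @ (X @ w - y) / sigma2   # gradient of the quadratic part
    w = soft_threshold(w - step * grad_smooth, step * lam)

print(w)                                       # sparse MAP estimate under the Laplace prior
```

For reference, dividing the same objective by $K / \sigma^{2}$ gives scikit-learn's Lasso objective with `alpha = sigma2 / (K * b)` and `fit_intercept=False`, which should produce a very similar solution up to solver tolerance.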

See Also