
Maximum Likelihood Estimation for Linear Regression Model in Machine Learning

Summary

Assume the relationship between data $\mathbf{x}_{i} \in \mathbb{R}^{n}$ and its labels $y_{i} \in \mathbb{R}$ is described by the following linear model.

$$ y_{i} = \mathbf{w}^{\mathsf{T}} \mathbf{x}_{i} + \epsilon_{i}, \qquad i = 1, \ldots, K \tag{1} $$

When $K > n$ and $\mathbf{X}$ (defined below) has full column rank, so that $\mathbf{X}^{\mathsf{T}} \mathbf{X}$ is invertible, the parameter $\mathbf{w}_{\text{ML}}$ that maximizes the likelihood is as follows.

$$ \mathbf{w}_{\text{ML}} = (\mathbf{X}^{\mathsf{T}} \mathbf{X})^{-1} \mathbf{X}^{\mathsf{T}} \mathbf{y} $$

Here, $\mathbf{y} = \begin{bmatrix} y_{1} & \cdots & y_{K} \end{bmatrix}^{\mathsf{T}}$ and $\mathbf{X} = \begin{bmatrix} \mathbf{x}_{1} & \cdots & \mathbf{x}_{K} \end{bmatrix}^{\mathsf{T}} \in \mathbb{R}^{K \times n}$.
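
As a quick sanity check, the closed form above can be evaluated directly with NumPy. The following is a minimal sketch on simulated data; the sizes $K = 100$, $n = 3$, the noise level, and the ground-truth weights are illustrative choices, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n, sigma = 100, 3, 0.5                 # illustrative sizes, not from the text
w_true = rng.normal(size=n)               # hypothetical ground-truth weights

X = rng.normal(size=(K, n))               # row i is x_i^T
y = X @ w_true + rng.normal(scale=sigma, size=K)   # y_i = w^T x_i + eps_i

# Closed-form ML estimate: solve X^T X w = X^T y rather than forming the inverse
w_ml = np.linalg.solve(X.T @ X, X.T @ y)
print("true:", w_true)
print("ML:  ", w_ml)                      # close to w_true when K >> n
```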

Explanation

In $(1)$, $\mathbf{w} \in \mathbb{R}^{n}$ is the parameter vector and $\epsilon_{i} \sim N(0, \sigma^{2})$ is Gaussian noise, with the $\epsilon_{i}$ assumed independent. Since $\epsilon_{i}$ follows $N(0, \sigma^{2})$, $y_{i} = \mathbf{w}^{\mathsf{T}} \mathbf{x}_{i} + \epsilon_{i}$ follows $N(\mathbf{w}^{\mathsf{T}} \mathbf{x}_{i}, \sigma^{2})$.

$$ y_{i} \sim N(\mathbf{w}^{\mathsf{T}} \mathbf{x}_{i}, \sigma^{2}) $$

Maximum likelihood estimation means finding the $\mathbf{w}_{\text{ML}}$ that satisfies the following.

$$ \mathbf{w}_{\text{ML}} = \argmax_{\mathbf{w}} p(\mathbf{y} | \mathbf{w}, \mathbf{X}) $$

The likelihood of a single observation $y_{i}$, and, by the independence of the noise terms, of the whole vector $\mathbf{y}$, is the following function of $\mathbf{w}$.

$$ p(y_{i} | \mathbf{w}, \mathbf{x}_{i}) = \dfrac{1}{\sqrt{2\pi \sigma^{2}}}\exp \left[ -\dfrac{(y_{i} - \mathbf{w}^{\mathsf{T}} \mathbf{x}_{i})^{2}}{2\sigma^{2}} \right] $$

$$ \begin{align*} p(\mathbf{y} | \mathbf{w}, \mathbf{X}) &= \prod_{i=1}^{K} p(y_{i} | \mathbf{w}, \mathbf{x}_{i}) \\ &= \prod_{i=1}^{K} \dfrac{1}{\sqrt{2\pi \sigma^{2}}} \exp \left[ -\dfrac{(y_{i} - \mathbf{w}^{\mathsf{T}} \mathbf{x}_{i})^{2}}{2\sigma^{2}} \right] \\ &= \dfrac{1}{(2\pi \sigma^{2})^{K/2}} \exp \left[ -\dfrac{1}{2\sigma^{2}} \sum_{i=1}^{K} (y_{i} - \mathbf{w}^{\mathsf{T}} \mathbf{x}_{i})^{2} \right] \\ &= \dfrac{1}{(2\pi \sigma^{2})^{K/2}} \exp \left[ -\dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} \right] \end{align*} $$
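
The factorization above can be checked numerically: summing per-sample Gaussian log densities gives the same value as the vectorized expression in the last line. A minimal sketch, assuming NumPy and SciPy, with all sizes and the test point $\mathbf{w}$ chosen arbitrarily for illustration:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
K, n, sigma = 100, 3, 0.5                      # illustrative values
X = rng.normal(size=(K, n))
y = X @ rng.normal(size=n) + rng.normal(scale=sigma, size=K)
w = rng.normal(size=n)                          # arbitrary test point, not the optimum

# Sum of per-sample log densities: sum_i log p(y_i | w, x_i)
ll_per_sample = norm.logpdf(y, loc=X @ w, scale=sigma).sum()

# Vectorized form: -(K/2) log(2 pi sigma^2) - ||y - Xw||^2 / (2 sigma^2)
ll_vector = (-K / 2 * np.log(2 * np.pi * sigma**2)
             - np.sum((y - X @ w) ** 2) / (2 * sigma**2))

assert np.isclose(ll_per_sample, ll_vector)
```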

Since the likelihood is an exponential function, it is convenient to work with the log-likelihood instead; because the logarithm is strictly increasing, both have the same maximizer.

$$ \begin{align*} \mathbf{w}_{\text{ML}} &= \argmax_{\mathbf{w}} \log p(\mathbf{y} | \mathbf{w}, \mathbf{X}) \\ &= \argmax_{\mathbf{w}} \left[ -\dfrac{K}{2} \log (2\pi \sigma^{2}) - \dfrac{1}{2\sigma^{2}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} \right] \\ &= \argmax_{\mathbf{w}} \left( -\| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} \right) \\ &= \argmin_{\mathbf{w}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} \end{align*} $$

Here the constant $-\frac{K}{2} \log (2\pi \sigma^{2})$ and the positive factor $\frac{1}{2\sigma^{2}}$ do not depend on $\mathbf{w}$, so they can be dropped without changing the maximizer.

According to the least squares method, $\mathbf{w}_{\text{ML}}$ is as follows.

$$ \mathbf{w}_{\text{ML}} = (\mathbf{X}^{\mathsf{T}} \mathbf{X})^{-1} \mathbf{X}^{\mathsf{T}} \mathbf{y} $$
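
For completeness, this closed form follows by setting the gradient of the squared error to zero, which yields the normal equations:

$$ \nabla_{\mathbf{w}} \| \mathbf{y} - \mathbf{X}\mathbf{w} \|_{2}^{2} = -2 \mathbf{X}^{\mathsf{T}} (\mathbf{y} - \mathbf{X}\mathbf{w}) = \mathbf{0} \quad \implies \quad \mathbf{X}^{\mathsf{T}} \mathbf{X} \, \mathbf{w}_{\text{ML}} = \mathbf{X}^{\mathsf{T}} \mathbf{y} $$

Since $\mathbf{X}^{\mathsf{T}} \mathbf{X}$ is invertible under the full-rank assumption, solving gives the stated formula. In practice it is numerically safer to solve the normal equations directly, or to call a least-squares routine such as `np.linalg.lstsq`, than to form the inverse explicitly.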

See Also