Gradient of the Sum of Squared Residuals
Overview
Many regression problems in statistics and machine learning use the sum of squared residuals as an objective function; in particular, when $f$ is a linear combination of the predictors, it can be written compactly in matrix form. $$ \begin{align*} RSS =& \sum_{k} \left( y_{k} - f \left( \mathbf{x}_{k} \right) \right)^{2} \\ =& \sum_{k} \left( y_{k} - \left( s_{0} + s_{1} x_{k1} + \cdots + s_{p} x_{kp} \right) \right)^{2} \\ =& \left( \mathbf{y} - X \mathbf{s} \right)^{T} \left( \mathbf{y} - X \mathbf{s} \right) \\ =& \left\| \mathbf{y} - X \mathbf{s} \right\|_{2}^{2} \end{align*} $$ As a slight generalization, we derive the gradient with respect to $\mathbf{s}$ of a scalar function of the following form, which inserts a matrix $R \in \mathbb{R}^{n \times n}$ between the residual factors.
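As a quick numerical sanity check, here is a minimal sketch, assuming Python with NumPy and arbitrary illustrative data (the names `X`, `s`, `y` mirror the notation above, with a leading column of ones standing in for the intercept $s_{0}$), that evaluates $RSS$ both as an explicit sum and as $\left\| \mathbf{y} - X \mathbf{s} \right\|_{2}^{2}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3                                                 # illustrative sizes
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, p))])    # column of ones for s_0
s = rng.normal(size=p + 1)                                   # coefficients s_0, ..., s_p
y = rng.normal(size=n)                                       # responses

# RSS as an explicit sum of squared residuals
rss_sum = sum((y[k] - X[k] @ s) ** 2 for k in range(n))

# RSS in matrix form: (y - Xs)^T (y - Xs) = ||y - Xs||_2^2
r = y - X @ s
rss_matrix = r @ r

print(np.isclose(rss_sum, rss_matrix))  # True
```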
Formula 1
$$ f \left( \mathbf{s} \right) := \left( \mathbf{y} - X \mathbf{s} \right)^{T} R \left( \mathbf{y} - X \mathbf{s} \right) $$ For the vector $\mathbf{y} \in \mathbb{R}^{n}$ that is independent of $\mathbf{s}$, and for the matrices $X \in \mathbb{R}^{n \times p}$ and $R \in \mathbb{R}^{n \times n}$, the following holds. $$ {{ \partial f \left( \mathbf{s} \right) } \over { \partial \mathbf{s} }} = - X^{T} \left( R + R^{T} \right) \left( \mathbf{y} - X \mathbf{s} \right) $$
Derivation
Properties of the transpose: Let $r, s \in \mathbb{R}$ be scalars and let $A, B$ be matrices whose sizes make the operations below well-defined. Then the following holds.
- (a) Linearity: $$\left( rA + sB\right)^{T}=r A^{T} + s B^{T}$$
Gradients with respect to vectors: $$ \frac{ \partial \mathbf{w}^{T}\mathbf{x}}{ \partial \mathbf{w} } = \frac{ \partial \mathbf{x}^{T}\mathbf{w}}{ \partial \mathbf{w} } = \mathbf{x} $$ $$ \frac{ \partial }{ \partial \mathbf{w} }\left( \mathbf{w}^{T}\mathbf{R}\mathbf{w} \right)= \left( \mathbf{R} + \mathbf{R}^{T} \right) \mathbf{w} $$
$$ \begin{align*} {{ \partial } \over { \partial \mathbf{s} }} f \left( \mathbf{s} \right) =& {{ \partial } \over { \partial \mathbf{s} }} \left( \mathbf{y} - X \mathbf{s} \right)^{T} R \left( \mathbf{y} - X \mathbf{s} \right) \\ =& {{ \partial } \over { \partial \mathbf{s} }} \left( \mathbf{y}^{T} - \mathbf{s}^{T} X^{T} \right) R \left( \mathbf{y} - X \mathbf{s} \right) \\ =& {{ \partial } \over { \partial \mathbf{s} }} \left( \mathbf{y}^{T} R \mathbf{y} - \mathbf{s}^{T} X^{T} R \mathbf{y} - \mathbf{y}^{T} R X \mathbf{s} + \mathbf{s}^{T} X^{T} R X \mathbf{s} \right) \\ =& - X^{T} R \mathbf{y} - X^{T} R^{T} \mathbf{y} + X^{T} \left( R + R^{T} \right) X \mathbf{s} \\ =& - X^{T} \left( R + R^{T} \right) \mathbf{y} + X^{T} \left( R + R^{T} \right) X \mathbf{s} \\ =& - X^{T} \left( R + R^{T} \right) \left( \mathbf{y} - X \mathbf{s} \right) \end{align*} $$ The constant term $\mathbf{y}^{T} R \mathbf{y}$ does not depend on $\mathbf{s}$, so its gradient vanishes; the remaining terms follow from the gradient identities above.
■
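The formula can also be checked numerically with central finite differences. The following is a minimal sketch assuming Python with NumPy and arbitrary data; `R` is deliberately left non-symmetric so that the $R + R^{T}$ factor matters.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 6, 4
X = rng.normal(size=(n, p))
R = rng.normal(size=(n, n))       # not symmetric in general
y = rng.normal(size=n)
s = rng.normal(size=p)

def f(s):
    r = y - X @ s
    return r @ R @ r              # (y - Xs)^T R (y - Xs)

# analytic gradient from Formula 1
grad_analytic = -X.T @ (R + R.T) @ (y - X @ s)

# central finite differences, one coordinate at a time
eps = 1e-6
grad_fd = np.array([
    (f(s + eps * e) - f(s - eps * e)) / (2 * eps)
    for e in np.eye(p)
])

print(np.allclose(grad_analytic, grad_fd, atol=1e-5))  # True
```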
Corollary 1
As a corollary, if $R$ is a symmetric matrix then $$ {{ \partial f \left( \mathbf{s} \right) } \over { \partial \mathbf{s} }} = - 2 X^{T} R \left( \mathbf{y} - X \mathbf{s} \right) $$ and if $R$ is the identity matrix we obtain $$ {{ \partial f \left( \mathbf{s} \right) } \over { \partial \mathbf{s} }} = - 2 X^{T} \left( \mathbf{y} - X \mathbf{s} \right) $$
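Setting the identity-matrix gradient to zero recovers the normal equations $X^{T} X \mathbf{s} = X^{T} \mathbf{y}$. The sketch below (assuming NumPy and arbitrary data) compares that solution with `np.linalg.lstsq` and confirms the gradient vanishes at the minimizer.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 30, 4
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# -2 X^T (y - X s) = 0  =>  X^T X s = X^T y  (normal equations)
s_normal = np.linalg.solve(X.T @ X, X.T @ y)
s_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(s_normal, s_lstsq))                 # True
print(np.allclose(-2 * X.T @ (y - X @ s_normal), 0))  # gradient vanishes at the minimizer
```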
Corollary 2
For the Hadamard product $\odot$, define $f(\mathbf{s}) := \left\| X(\boldsymbol{\tau} \odot \mathbf{s}) - \mathbf{y} \right\|_{2}^{2}$. Since $X(\boldsymbol{\tau} \odot \mathbf{s}) = X \diag(\boldsymbol{\tau}) \mathbf{s}$, Corollary 1 with $R$ the identity matrix gives
$$ \begin{align*} \dfrac{\partial f(\mathbf{s})}{\partial \mathbf{s}} & = 2 \left( X \diag(\boldsymbol{\tau}) \right)^{T} \left( X \diag(\boldsymbol{\tau})\mathbf{s} - \mathbf{y}\right) \\ & = 2 \diag(\boldsymbol{\tau})^{T} X^{T} \left( X (\boldsymbol{\tau} \odot \mathbf{s}) - \mathbf{y}\right) \\ & = 2 \boldsymbol{\tau} \odot X^{T} \left( X (\boldsymbol{\tau} \odot \mathbf{s}) - \mathbf{y}\right) \\ \end{align*} $$
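The same kind of finite-difference check applies to the Hadamard-product form. In the sketch below (assuming NumPy and arbitrary data), `tau` is a fixed vector and the assertion confirms that $X(\boldsymbol{\tau} \odot \mathbf{s}) = X \diag(\boldsymbol{\tau}) \mathbf{s}$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 8, 5
X = rng.normal(size=(n, p))
y = rng.normal(size=n)
tau = rng.normal(size=p)
s = rng.normal(size=p)

# X (tau ⊙ s) equals X diag(tau) s
assert np.allclose(X @ (tau * s), X @ np.diag(tau) @ s)

def f(s):
    r = X @ (tau * s) - y
    return r @ r

# gradient from Corollary 2: 2 tau ⊙ X^T (X (tau ⊙ s) - y)
grad = 2 * tau * (X.T @ (X @ (tau * s) - y))

eps = 1e-6
grad_fd = np.array([(f(s + eps * e) - f(s - eps * e)) / (2 * eps) for e in np.eye(p)])
print(np.allclose(grad, grad_fd, atol=1e-5))  # True
```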
Corollary 3
The derivative of the Euclidean norm $\left\| \mathbf{x} - \mathbf{a} \right\|$, regarded as the distance between a point $\mathbf{a}$ and a vector $\mathbf{x} = \mathbf{x} (t)$, is as follows. $$ {\frac{ d \left\| \mathbf{x} - \mathbf{a} \right\| }{ d t }} = \dot{\mathbf{x}} \cdot {\frac{ \mathbf{x} - \mathbf{a} }{ \left\| \mathbf{x} - \mathbf{a} \right\| }} $$ This is derived by taking $R$ and $X$ to be identity matrices (with $\mathbf{y} = \mathbf{a}$ and $\mathbf{s} = \mathbf{x}$) and applying the chain rule to $\sqrt{\cdot}$. $$ \begin{align*} & {\frac{ d \left\| \mathbf{x} - \mathbf{a} \right\| }{ d t }} \\ =& {\frac{ d }{ d t }} \sqrt{ \left\| \mathbf{a} - \mathbf{x} \right\|_{2}^{2} } \\ =& {\frac{ 1 }{ 2 \sqrt{ \left\| \mathbf{a} - \mathbf{x} \right\|_{2}^{2} } }} {\frac{ d }{ d t }} \left( \mathbf{a} - \mathbf{x} \right)^{T} \left( \mathbf{a} - \mathbf{x} \right) \\ =& {\frac{ - 2 \left( \mathbf{a} - \mathbf{x} \right) }{ 2 \left\| \mathbf{x} - \mathbf{a} \right\| } } \cdot {\frac{ d }{ d t }} \mathbf{x} \\ =& {\frac{ \mathbf{x} - \mathbf{a} }{ \left\| \mathbf{x} - \mathbf{a} \right\| }} \cdot \dot{\mathbf{x}} \end{align*} $$
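As a concrete check of Corollary 3, the sketch below (assuming NumPy; the curve $\mathbf{x}(t) = (\cos t, \sin t, t)$ and the point $\mathbf{a}$ are arbitrary choices) compares the formula with a finite difference in $t$.

```python
import numpy as np

a = np.array([2.0, -1.0, 0.5])        # fixed point (arbitrary choice)

def x(t):                             # a parametric curve x(t) = (cos t, sin t, t)
    return np.array([np.cos(t), np.sin(t), t])

def xdot(t):                          # its velocity dx/dt
    return np.array([-np.sin(t), np.cos(t), 1.0])

def dist(t):                          # ||x(t) - a||
    return np.linalg.norm(x(t) - a)

t = 0.7
# Corollary 3: d/dt ||x - a|| = xdot . (x - a) / ||x - a||
deriv_formula = xdot(t) @ (x(t) - a) / dist(t)

eps = 1e-6
deriv_fd = (dist(t + eps) - dist(t - eps)) / (2 * eps)
print(np.isclose(deriv_formula, deriv_fd, atol=1e-6))  # True
```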
- Corollary 3 can also be used to prove optical properties of an ellipse.
Petersen. (2008). The Matrix Cookbook: p10. ↩︎