
Subgradient Method

Definition1

Let the objective function $f : \mathbb{R}^{n} \to \mathbb{R}$ be a convex function, and let $\mathbf{g}^{(k)}$ denote a subgradient of $f$ at the point $\mathbf{x}^{(k)}$. The method that solves the optimization problem for $f$ by updating $\mathbf{x}^{(k)}$ as follows is called the subgradient method.

$$ \mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - \alpha \mathbf{g}^{(k)} $$
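
As a concrete illustration, here is a minimal Python sketch of this update applied to a least-absolute-deviations problem $f(\mathbf{x}) = \| A\mathbf{x} - \mathbf{b} \|_{1}$, which is convex but not differentiable wherever a residual is exactly zero. The matrix $A$, vector $\mathbf{b}$, the fixed step size, and the iteration count are illustrative choices, not part of the definition above.

```python
import numpy as np

# Illustrative problem data (not from the definition above).
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 5))
b = rng.normal(size=30)

def f(x):
    """Objective value f(x) = ||Ax - b||_1."""
    return np.sum(np.abs(A @ x - b))

def subgradient(x):
    """One valid subgradient: g = A^T sign(Ax - b).
    np.sign returns 0 at zero residuals, which is an allowed
    choice from the subdifferential there."""
    return A.T @ np.sign(A @ x - b)

alpha = 1e-3                      # fixed step size, as in the update rule
x = np.zeros(5)
best_x, best_f = x.copy(), f(x)

for k in range(5000):
    x = x - alpha * subgradient(x)   # x^(k+1) = x^(k) - alpha * g^(k)
    # The subgradient method is not a descent method,
    # so the best iterate seen so far is tracked separately.
    if f(x) < best_f:
        best_x, best_f = x.copy(), f(x)

print(best_f)
```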

Description2

It has the same form as gradient descent, with the gradient replaced by a subgradient.

$$ \text{gradient descent: } \mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - \alpha \nabla f(\mathbf{x}^{(k)}) $$

If $f$ is differentiable, gradient descent can be used, so the subgradient method is chosen when the objective function is not differentiable. Its drawback is a slow convergence rate.
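
For instance (an illustration not spelled out above), $f(x) = |x|$ is convex but not differentiable at $x = 0$, yet the subgradient method still applies there because the subdifferential is nonempty:

$$ \partial f(x) = \begin{cases} \{ -1 \}, & x < 0 \\ [-1, 1], & x = 0 \\ \{ +1 \}, & x > 0 \end{cases} $$

At $\mathbf{x}^{(k)} = 0$, any $\mathbf{g}^{(k)} \in [-1, 1]$ may be used in the update.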