Gradient of Scalar Field 📂Vector Analysis

Definition

The gradient of a scalar field $f : \mathbb{R}^{n} \to \mathbb{R}$, which is precisely its total derivative, is denoted by $\nabla f$.

$$ \begin{align*} \nabla f := f^{\prime} =& \begin{bmatrix} D_{1}f & D_{2}f & \cdots & D_{n}f\end{bmatrix} \\ =& \begin{bmatrix} \dfrac{\partial f}{\partial x_{1}} & \dfrac{\partial f}{\partial x_{2}} & \cdots & \dfrac{\partial f}{\partial x_{n}} \end{bmatrix} \\ =& \dfrac{\partial f}{\partial x_{1}}\hat{x}_{1} + \dfrac{\partial f}{\partial x_{2}}\hat{x}_{2} + \dots + \dfrac{\partial f}{\partial x_{n}}\hat{x}_{n} \end{align*} $$

Description

Simply put, the gradient is the derivative of a multivariable function. The gradient of a 3-dimensional scalar function, often used in physics and other fields, is as follows.

$$ \nabla f = \dfrac{\partial f}{\partial x}\hat{\mathbf{x}} + \dfrac{\partial f}{\partial y}\hat{\mathbf{y}} + \dfrac{\partial f}{\partial z}\hat{\mathbf{z}} = \left( \dfrac{\partial f}{\partial x}, \dfrac{\partial f}{\partial y}, \dfrac{\partial f}{\partial z} \right) $$
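The partial derivatives above can be approximated numerically. Below is a minimal sketch (the helper name `grad` and the sample function are illustrative choices, not from the original) that estimates the gradient by central finite differences and matches the analytic answer:

```python
import numpy as np

def grad(f, p, h=1e-6):
    """Approximate the gradient of a scalar field f at point p
    by central differences: (f(p + h e_i) - f(p - h e_i)) / (2h)."""
    p = np.asarray(p, dtype=float)
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

# Example: f(x, y, z) = x^2 y + z, whose gradient is (2xy, x^2, 1).
f = lambda v: v[0]**2 * v[1] + v[2]
print(grad(f, [1.0, 2.0, 3.0]))  # close to [4., 1., 1.]
```

Note that the scalar-valued `f` yields a vector-valued result, exactly as the definition says.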

(Figure: the surface $z = x^{2} - y^{2}$)

A noteworthy aspect is that the derivative of a scalar function, whose values are scalars, is a vector function, whose values are vectors. This follows immediately from the definition of the total derivative, but it can also be understood intuitively.

For example, consider the image above. It visually represents the function $z : \mathbb{R}^{2} \to \mathbb{R}$ defined by $z(x,y) = x^{2} - y^{2}$. Unlike a single-variable function of the form $y = f(x)$, when thinking about the rate of change of a function of two or more variables, it is necessary to consider not just the magnitude but also the direction of change.

Reflecting this idea, a directional derivative is the derivative taken along a given direction. A multivariable function therefore has infinitely many directional derivatives, but as the proof below shows, the gradient points in the direction of the greatest rate of change.
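This claim can be checked numerically for the surface $z = x^{2} - y^{2}$ from the figure. The sketch below (the evaluation point and the angular grid are arbitrary choices for illustration) computes the directional derivative $\nabla f \cdot \mathbf{d}$ over many unit directions and finds that the maximizer agrees with the normalized gradient:

```python
import numpy as np

# f(x, y) = x^2 - y^2, so grad f = (2x, -2y).
def grad_f(x, y):
    return np.array([2 * x, -2 * y])

x0 = np.array([1.0, 2.0])
g = grad_f(*x0)                      # gradient at (1, 2) is (2, -4)

# Directional derivatives along unit vectors d(t) = (cos t, sin t).
thetas = np.linspace(0, 2 * np.pi, 3600)
dirs = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
dd = dirs @ g                        # each entry is grad f · d

best = dirs[np.argmax(dd)]           # direction of greatest rate of change
print(best, g / np.linalg.norm(g))   # the two directions agree
```

The maximal value of `dd` also equals $\left\| \nabla f \right\|$ (up to the grid resolution), the magnitude of the steepest rate of change.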

Proof

Let $\mathbf{d} := ( d_{1} , \cdots , d_{n} )$ be a direction vector with $\left\| \mathbf{d} \right\| = 1$. By the multivariate Taylor's theorem,

$$ f \left( \mathbf{x}_{0} + h \mathbf{d} \right) = f ( \mathbf{x}_{0} ) + h \left[ {{ \partial f ( \mathbf{x}_{0} ) } \over { \partial x_{1} }} d_{1} + \cdots + {{ \partial f ( \mathbf{x}_{0} ) } \over { \partial x_{n} }} d_{n} \right] + O (h^{2}) $$

When converted to matrix form,

$$ f \left( \mathbf{x}_{0} + h \mathbf{d} \right) - f ( \mathbf{x}_{0} ) = h \begin{bmatrix} {{ \partial f ( \mathbf{x}_{0} ) } \over { \partial x_{1} }} \\ \vdots \\ {{ \partial f ( \mathbf{x}_{0} ) } \over { \partial x_{n} }} \end{bmatrix} \cdot \begin{bmatrix} d_{1} \\ \vdots \\ d_{n} \end{bmatrix} + O (h^{2}) $$

Dividing both sides by $h$ and writing the matrix product in vector form,

$$ {{ f \left( \mathbf{x}_{0} + h \mathbf{d} \right) - f ( \mathbf{x}_{0} )} \over {h}} = \nabla f \left( \mathbf{x}_{0} \right) \cdot \mathbf{d} + O (h) $$

When $h \to 0$,

$$ \nabla f \left( \mathbf{x}_{0} \right) \cdot \mathbf{d} = \lim_{h \to 0} {{ f \left( \mathbf{x}_{0} + h \mathbf{d} \right) - f ( \mathbf{x}_{0} )} \over {h}} $$
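As a numerical sanity check (a minimal sketch; the function, point, and direction below are arbitrary choices), the difference quotient does approach $\nabla f ( \mathbf{x}_{0} ) \cdot \mathbf{d}$ as $h \to 0$:

```python
import numpy as np

# f(x, y) = x^2 - y^2 at x0 = (1, 2), along an arbitrary unit direction d.
f = lambda v: v[0]**2 - v[1]**2
x0 = np.array([1.0, 2.0])
grad_x0 = np.array([2.0, -4.0])      # analytic gradient at x0
d = np.array([3.0, 4.0]) / 5.0       # unit vector

for h in [1e-1, 1e-3, 1e-5]:
    q = (f(x0 + h * d) - f(x0)) / h  # the difference quotient
    print(h, q, grad_x0 @ d)         # q approaches grad f(x0) · d = -2.0
```

The error shrinks linearly in $h$, matching the $O(h)$ term in the expansion above.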

For a unit vector $\mathbf{d}$, we have $\nabla f \left( \mathbf{x}_{0} \right) \cdot \mathbf{d} = \left\| \nabla f \left( \mathbf{x}_{0} \right) \right\| \cos \theta$, where $\theta$ is the angle between $\nabla f \left( \mathbf{x}_{0} \right)$ and $\mathbf{d}$. Hence the directional derivative

$$ \lim_{h \to 0} {{ f \left( \mathbf{x}_{0} + h \mathbf{d} \right) - f ( \mathbf{x}_{0} )} \over {h}} $$

is maximized when $\theta = 0$, that is, when $\mathbf{d}$ points in the same direction as $\nabla f \left( \mathbf{x}_{0} \right)$; the maximizing unit vector is $\displaystyle \mathbf{d} = {{\nabla f \left( \mathbf{x}_{0} \right) } \over { \left\| \nabla f \left( \mathbf{x}_{0} \right) \right\| }}$. Therefore,

$$ \nabla f \left( \mathbf{x}_{0} \right) = \left\| \nabla f \left( \mathbf{x}_{0} \right) \right\| \mathbf{d} $$

that is, the gradient of $f$ at $\mathbf{x}_{0}$ points in the direction of the greatest rate of change, and its magnitude equals that maximal rate.
