
Partial Derivatives

Definitions¹

Let $E\subset \mathbb{R}^{n}$ be an open set, $\mathbf{x}\in E$, and $\mathbf{f} : E \to \mathbb{R}^{m}$. Let $\left\{ \mathbf{e}_{1}, \mathbf{e}_{2}, \dots, \mathbf{e}_{n} \right\}$ and $\left\{ \mathbf{u}_{1}, \mathbf{u}_{2}, \dots, \mathbf{u}_{m} \right\}$ be the standard bases of $\mathbb{R}^{n}$ and $\mathbb{R}^{m}$, respectively.

Then, the components $f_{i} : \mathbb{R}^{n} \to \mathbb{R}$ of $\mathbf{f}$ are defined as follows.

$$ \mathbf{f} (\mathbf{x}) = \sum_{i=1}^{m} f_{i}(\mathbf{x})\mathbf{u}_{i}, \quad \mathbf{x} \in E $$

or

$$ f_{i} (\mathbf{x}) := \mathbf{f} (\mathbf{x}) \cdot \mathbf{u}_{i},\quad i \in \left\{ 1,\dots, m \right\} $$

If the following limit exists, it is called the partial derivative of $f_{i}$ with respect to $x_{j}$, denoted by $D_{j}f_{i}$ or $\dfrac{\partial f_{i}}{\partial x_{j}}$.

$$ \dfrac{\partial f_{i}}{\partial x_{j}} = D_{j}f_{i} := \lim _{t \to 0} \dfrac{f_{i}(\mathbf{x}+ t \mathbf{e}_{j}) -f_{i}(\mathbf{x})}{t} $$
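This limit can be approximated numerically by evaluating the difference quotient at a small step. Below is a minimal sketch (not from the source) that approximates $D_{j}f$ with a central difference in NumPy; the function `partial_derivative` and the sample $f(x, y) = x^{2}y$ are illustrative choices.

```python
import numpy as np

def partial_derivative(f, x, j, h=1e-6):
    """Approximate D_j f(x) by a central difference quotient:
    (f(x + h e_j) - f(x - h e_j)) / (2h)."""
    e_j = np.zeros_like(x, dtype=float)
    e_j[j] = 1.0  # j-th standard basis vector of R^n
    return (f(x + h * e_j) - f(x - h * e_j)) / (2 * h)

# Example: f(x, y) = x^2 * y, so D_1 f = 2xy and D_2 f = x^2
f = lambda v: v[0] ** 2 * v[1]
x = np.array([3.0, 2.0])
print(partial_derivative(f, x, 0))  # ≈ 2·3·2 = 12
print(partial_derivative(f, x, 1))  # ≈ 3² = 9
```

The central difference converges to the one-sided limit in the definition whenever the partial derivative exists and is continuous near $\mathbf{x}$.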

Explanation

The term “partial” indicates that the differentiation is biased toward a single variable: we differentiate with respect to one variable only, instead of all variables at once. This is in contrast to the total derivative. In other words, the object is not the derivative of some “partial function,” but a partial, one-variable-at-a-time derivative of the function itself.

Between the total derivative and the partial derivative of $\mathbf{f}$, the following theorem holds.

Theorem

Let $E, \mathbf{x}, \mathbf{f}$ be as defined in Definitions. Suppose $\mathbf{f}$ is differentiable at $\mathbf{x}$. Then, each partial derivative $D_{j}f_{i}(\mathbf{x})$ exists, and the following equation holds.

$$ \mathbf{f}^{\prime}(\mathbf{x})\mathbf{e}_{j} = \sum_{i=1}^{m} D_{j}f_{i}(\mathbf{x})\mathbf{u}_{i},\quad j \in \left\{ 1,\dots, n \right\} $$

Corollary

Since $\mathbf{f}^{\prime}(\mathbf{x})$ is a linear transformation, it can be represented by the following matrix with respect to the standard bases.

$$ \mathbf{f}^{\prime}(\mathbf{x}) = \begin{bmatrix} (D_{1}f_{1}) (\mathbf{x}) & (D_{2}f_{1}) (\mathbf{x}) & \cdots & (D_{n}f_{1}) (\mathbf{x}) \\ (D_{1}f_{2}) (\mathbf{x}) & (D_{2}f_{2}) (\mathbf{x}) & \cdots & (D_{n}f_{2}) (\mathbf{x}) \\ \vdots & \vdots & \ddots & \vdots \\ (D_{1}f_{m}) (\mathbf{x}) & (D_{2}f_{m}) (\mathbf{x}) & \cdots & (D_{n}f_{m}) (\mathbf{x}) \end{bmatrix} $$

This is also known as the Jacobian matrix of $\mathbf{f}$.
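Assembling the matrix column by column, as the theorem suggests ($j$-th column $= \mathbf{f}^{\prime}(\mathbf{x})\mathbf{e}_{j}$), can be sketched numerically as follows. This is an illustrative NumPy snippet, not part of the source; the helper `jacobian` and the test map $\mathbf{f}(x, y) = (xy,\ x + y^{2})$ are assumptions made for the example.

```python
import numpy as np

def jacobian(f, x, h=1e-6):
    """Approximate the m x n Jacobian matrix (D_j f_i)(x),
    one column per basis direction e_j, via central differences."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        e_j = np.zeros_like(x)
        e_j[j] = 1.0
        # j-th column is the difference quotient in direction e_j
        J[:, j] = (np.asarray(f(x + h * e_j)) - np.asarray(f(x - h * e_j))) / (2 * h)
    return J

# f : R^2 -> R^2, f(x, y) = (x*y, x + y^2)
f = lambda v: np.array([v[0] * v[1], v[0] + v[1] ** 2])
print(jacobian(f, [2.0, 3.0]))
# exact Jacobian at (2, 3): [[3, 2], [1, 6]]
```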

Proof

Let us fix $j$. Assuming that $\mathbf{f}$ is differentiable at $\mathbf{x}$, the following equation holds.

$$ \mathbf{f}( \mathbf{x} + t \mathbf{e}_{j}) - \mathbf{f}(\mathbf{x}) = \mathbf{f}^{\prime}(\mathbf{x})(t\mathbf{e}_{j}) + \mathbf{r}(t\mathbf{e}_{j}) $$

Here, $\mathbf{r}(t\mathbf{e}_{j})$ is a remainder that satisfies the following.

$$ \lim _{t \to 0} \dfrac{|\mathbf{r}(t\mathbf{e}_{j}) |}{|t|}=0 $$

Since $\mathbf{f}^{\prime}(\mathbf{x})$ is a linear transformation, the following holds.

$$ \dfrac{\mathbf{f}( \mathbf{x} + t \mathbf{e}_{j}) - \mathbf{f}(\mathbf{x})}{t} = \dfrac{\mathbf{f}^{\prime}(\mathbf{x})(t\mathbf{e}_{j})}{t} + \dfrac{\mathbf{r}(t\mathbf{e}_{j})}{t} = \mathbf{f}^{\prime}(\mathbf{x})(\mathbf{e}_{j}) + \dfrac{\mathbf{r}(t\mathbf{e}_{j})}{t} $$

Taking the limit $t \to 0$ on both sides, the remainder term vanishes and we obtain the following.

$$ \lim _{t \to 0} \dfrac{\mathbf{f}( \mathbf{x} + t \mathbf{e}_{j}) - \mathbf{f}(\mathbf{x})}{t} = \mathbf{f}^{\prime}(\mathbf{x})\mathbf{e}_{j} $$

Expressing $\mathbf{f}$ in components yields:

$$ \begin{align*} \mathbf{f}^{\prime}(\mathbf{x})\mathbf{e}_{j} &= \lim _{t \to 0} \dfrac{\mathbf{f}( \mathbf{x} + t \mathbf{e}_{j}) - \mathbf{f}(\mathbf{x})}{t} \\ &= \lim _{t \to 0} \dfrac{\sum_{i=1}^{m} f_{i}( \mathbf{x} + t \mathbf{e}_{j})\mathbf{u}_{i} - \sum_{i=1}^{m} f_{i}(\mathbf{x})\mathbf{u}_{i}}{t} \\ &= \sum_{i=1}^{m} \lim _{t \to 0} \dfrac{f_{i}( \mathbf{x} + t \mathbf{e}_{j}) - f_{i}(\mathbf{x})}{t} \mathbf{u}_{i} \end{align*} $$

Then, by the definition of partial derivatives, we obtain:

$$ \mathbf{f}^{\prime}(\mathbf{x})\mathbf{e}_{j} = \sum_{i=1}^{m} D_{j}f_{i}(\mathbf{x}) \mathbf{u}_{i} $$

Examples

Suppose that $f : \mathbb{R}^{3} \to \mathbb{R}$ and $\gamma : \mathbb{R} \to \mathbb{R}^{3}$ are differentiable functions. Also,

$$ \gamma (t) = \left( x(t), y(t), z(t) \right) $$

And let us denote the composition of $f$ and $\gamma$ as $g = f \circ \gamma$.

$$ g(t) = f \circ \gamma (t) = f \left( \gamma (t) \right) $$

Then $g^{\prime}$ can be computed as follows, using the chain rule, the definition of partial derivatives, and the theorem introduced above.

$$ \begin{align*} \dfrac{d g}{d t}(t_{0}) = g^{\prime}(t_{0}) =&\ f^{\prime}(\gamma (t_{0})) \gamma^{\prime}(t_{0}) \\ =&\ \begin{bmatrix} D_{1}f(\gamma (t_{0})) & D_{2}f(\gamma (t_{0})) & D_{3}f(\gamma (t_{0})) \end{bmatrix} \begin{bmatrix} D\gamma_{1} (t_{0}) \\ D\gamma_{2} (t_{0}) \\ D\gamma_{3} (t_{0}) \end{bmatrix} \\ =&\ \begin{bmatrix} \dfrac{\partial f}{\partial x}(\gamma (t_{0})) & \dfrac{\partial f}{\partial y}(\gamma (t_{0})) & \dfrac{\partial f}{\partial z}(\gamma (t_{0})) \end{bmatrix} \begin{bmatrix} \dfrac{d x}{d t}(t_{0}) \\ \dfrac{d y}{d t}(t_{0}) \\ \dfrac{d z}{d t}(t_{0}) \end{bmatrix} \\ =&\ \dfrac{\partial f}{\partial x}(\gamma (t_{0}))\dfrac{d x}{d t}(t_{0}) + \dfrac{\partial f}{\partial y}(\gamma (t_{0}))\dfrac{d y}{d t}(t_{0}) + \dfrac{\partial f}{\partial z}(\gamma (t_{0}))\dfrac{d z}{d t}(t_{0}) \end{align*} $$

Therefore,

$$ \implies \dfrac{d g}{d t} = \dfrac{\partial f}{\partial x}\dfrac{d x}{d t} + \dfrac{\partial f}{\partial y}\dfrac{d y}{d t} + \dfrac{\partial f}{\partial z}\dfrac{d z}{d t} $$
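This identity can be checked numerically. The sketch below (an illustration, not from the source) picks hypothetical test functions $f(x, y, z) = x^{2} + yz$ and $\gamma(t) = (\cos t, \sin t, t)$, differentiates $g = f \circ \gamma$ directly, and compares against the sum of partials times component derivatives.

```python
import numpy as np

# Hypothetical test functions: f : R^3 -> R and a curve gamma : R -> R^3
f = lambda p: p[0] ** 2 + p[1] * p[2]          # f(x, y, z) = x^2 + yz
gamma = lambda t: np.array([np.cos(t), np.sin(t), t])

h = 1e-6
t0 = 0.7

# Left side: direct derivative of g = f ∘ gamma
g = lambda t: f(gamma(t))
lhs = (g(t0 + h) - g(t0 - h)) / (2 * h)

# Right side: (∂f/∂x) dx/dt + (∂f/∂y) dy/dt + (∂f/∂z) dz/dt
x0, y0, z0 = gamma(t0)
grad_f = np.array([2 * x0,   # ∂f/∂x = 2x
                   z0,       # ∂f/∂y = z
                   y0])      # ∂f/∂z = y
dgamma = (gamma(t0 + h) - gamma(t0 - h)) / (2 * h)
rhs = grad_f @ dgamma

print(lhs, rhs)  # the two values agree up to discretization error
```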


  1. Walter Rudin, Principles of Mathematical Analysis (3rd Edition, 1976), p. 215 ↩︎