
Curvature of a Principal Curve

Buildup¹

To know in which direction and by how much a surface $M$ is curved, it suffices to know the normal curvature $\kappa_{n}$ in each direction. In other words, knowing all the $\kappa_{n}$ at a point $p$ tells us how $M$ is bent there. The first step toward this is to consider the maximum and minimum values of $\kappa_{n}$. The following lemma holds for a unit speed curve $\boldsymbol{\gamma}$:

Lemma

If $\mathbf{T}$ is the tangent field of the unit speed curve $\boldsymbol{\gamma}$, then $\kappa_{n} = II (\mathbf{T}, \mathbf{T})$ holds.

Therefore, our goal is to find the maximum and minimum values of $II(\mathbf{X}, \mathbf{X}) = \kappa_{n}$ for the tangent vector $\mathbf{X} \in T_{p}M$. Here, $II$ is the second fundamental form.
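As a concrete example, on a right circular cylinder of radius $r$ the normal curvature vanishes in the direction of the straight rulings, while in the direction of the circular cross-section its absolute value is $1/r$; so $\kappa_{n}$ genuinely depends on the chosen direction, and it makes sense to ask for its extreme values:

$$ \kappa_{n}(\text{ruling direction}) = 0, \qquad \left| \kappa_{n}(\text{circular direction}) \right| = \dfrac{1}{r} $$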

Finding these extreme values is, in other words, a maximization (minimization) problem for $II(\mathbf{X}, \mathbf{X})$ under the constraint $\left\langle \mathbf{X}, \mathbf{X} \right\rangle = 1$. Such problems can be solved by the method of Lagrange multipliers: instead of extremizing $II(\mathbf{X}, \mathbf{X})$ directly, we look for the extrema of the function $f$ below. Writing $II(\mathbf{X}, \mathbf{X}) = \left\langle L(\mathbf{X}), \mathbf{X} \right\rangle$ in terms of the Weingarten map $L$,

$$ \begin{align*} f(\mathbf{X}, \lambda) &= II(\mathbf{X}, \mathbf{X}) - \lambda (\left\langle \mathbf{X}, \mathbf{X} \right\rangle - 1) \\ &= \left\langle L(\mathbf{X}), \mathbf{X} \right\rangle - \lambda\left\langle \mathbf{X}, \mathbf{X} \right\rangle + \lambda\\ &= \left\langle L(\mathbf{X}) - \lambda \mathbf{X}, \mathbf{X} \right\rangle + \lambda\\ \end{align*} $$
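In the method of Lagrange multipliers, the candidates for the extrema are the stationary points of $f$, that is, the solutions of

$$ \dfrac{\partial f}{\partial X^{1}} = \dfrac{\partial f}{\partial X^{2}} = 0, \qquad \dfrac{\partial f}{\partial \lambda} = 0 $$

The condition $\dfrac{\partial f}{\partial \lambda} = 0$ simply recovers the constraint $\left\langle \mathbf{X}, \mathbf{X} \right\rangle = 1$, so only the derivatives with respect to $X^{l}$ need to be computed below.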

Expressing this in terms of the coordinate chart mapping $\mathbf{x}$, with $\mathbf{X} = X^{1}\mathbf{x}_{1} + X^{2}\mathbf{x}_{2}$ and $L(\mathbf{x}_{k}) = \sum\limits_{l}{L^{l}}_{k}\mathbf{x}_{l}$, we get

$$ \begin{align*} f(\mathbf{X}, \lambda) &= f(X^{1}, X^{2}, \lambda) \\ &= \lambda + \left\langle \sum\limits_{i,j} {L^{i}}_{j}X^{j}\mathbf{x}_{i} - \sum\limits_{j}\lambda X^{j}\mathbf{x}_{j}, \sum\limits_{k}X^{k}\mathbf{x}_{k} \right\rangle \\ &= \lambda + \left\langle {L^{i}}_{j}X^{j}\mathbf{x}_{i} - \lambda X^{j}\mathbf{x}_{j}, X^{k}\mathbf{x}_{k} \right\rangle & \text{by } \href{https://freshrimpsushi.github.io/posts/einstein-notation}{\text{Einstein notation}} \\ &= \lambda + {L^{i}}_{j}X^{j}X^{k}\left\langle \mathbf{x}_{i}, \mathbf{x}_{k} \right\rangle - \lambda X^{j}X^{k} \left\langle \mathbf{x}_{j}, \mathbf{x}_{k} \right\rangle \\ &= \lambda + {L^{i}}_{j}X^{j}X^{k}g_{ik} - \lambda X^{j}X^{k} g_{jk} \\ &= \lambda + {L^{i}}_{j}X^{j}X^{k}g_{ik} - \lambda X^{j}X^{k} \delta_{ij}g_{ik} \\ &= \lambda + ({L^{i}}_{j} - \lambda \delta_{ij}) X^{j}X^{k}g_{ik} \end{align*} $$

Here $\delta$ is the Kronecker delta. The method of Lagrange multipliers requires $\dfrac{\partial f}{\partial X^{l}} = 0$. Since $L_{jk} = \sum\limits_{l}{L^{l}}_{k}g_{lj}$, and since $L$ is self-adjoint so that $L_{jk} = L_{kj}$,

$$ \begin{align*} 0 = \dfrac{\partial f}{\partial X^{l}} &= \sum\limits_{ijk} ({L^{i}}_{j} - \lambda \delta_{ij})\delta_{jl}X^{k}g_{ik} + \sum\limits_{ijk} ({L^{i}}_{j} - \lambda \delta_{ij})\delta_{kl}X^{j}g_{ik} \\ &= \sum\limits_{ik} ({L^{i}}_{l} - \lambda \delta_{il})X^{k}g_{ik} + \sum\limits_{ij} ({L^{i}}_{j} - \lambda \delta_{ij})X^{j}g_{il} \\ &= \sum\limits_{ik} {L^{i}}_{l}X^{k}g_{ik} - \sum\limits_{ik}\lambda \delta_{il}X^{k}g_{ik} + \sum\limits_{ij} {L^{i}}_{j}X^{j}g_{il} - \sum\limits_{ij}\lambda \delta_{ij}X^{j}g_{il} \\ &= \sum\limits_{k} L_{kl}X^{k} - \sum\limits_{k}\lambda X^{k}g_{lk} + \sum\limits_{j} L_{lj}X^{j} - \sum\limits_{j}\lambda X^{j}g_{jl} \\ &= \sum\limits_{j} \left( L_{jl}X^{j} - \lambda X^{j}g_{lj} + L_{lj}X^{j} - \lambda X^{j}g_{jl} \right) \\ &= 2\sum\limits_{j} \left( L_{jl}X^{j} - \lambda X^{j}g_{lj} \right) = 2\sum\limits_{j}L_{jl}X^{j} - 2\sum\limits_{j}\lambda X^{j}g_{lj} \\ &= 2\sum\limits_{ij}{L^{i}}_{j}X^{j}g_{il} - 2\sum\limits_{ij}\lambda X^{j}\delta_{ij}g_{li} \\ &= 2\sum\limits_{ij}\left( {L^{i}}_{j} - \lambda\delta_{ij} \right)X^{j}g_{li} \\ \end{align*} $$

$$ \implies \sum\limits_{ij}\left( {L^{i}}_{j} - \lambda\delta_{ij} \right)X^{j}g_{li} = 0 $$

Since this holds for every $l$, multiplying by an arbitrary $Y^{l}$ and summing over $l$ gives the following.

$$ \sum\limits_{ijl}\left( {L^{i}}_{j} - \lambda\delta_{ij} \right)X^{j}Y^{l}g_{li} = 0 $$

This means that $\forall \mathbf{Y}=\sum\limits_{l}Y^{l}\mathbf{x}_{l}$,

$$ \begin{align*} \left\langle L(\mathbf{X}) - \lambda \mathbf{X}, \mathbf{Y} \right\rangle &= \left\langle L\left( \sum\limits_{j}X^{j}\mathbf{x}_{j} \right) - \sum\limits_{i}\lambda X^{i} \mathbf{x}_{i}, \sum\limits_{l}Y^{l}\mathbf{x}_{l} \right\rangle \\ &= \left\langle \sum\limits_{ij}{L^{i}}_{j}X^{j}\mathbf{x}_{i} - \sum\limits_{ij}\lambda \delta_{ij} X^{j} \mathbf{x}_{i}, \sum\limits_{l}Y^{l}\mathbf{x}_{l} \right\rangle \\ &= \sum\limits_{ijl}{L^{i}}_{j}X^{j}Y^{l}\left\langle \mathbf{x}_{i}, \mathbf{x}_{l} \right\rangle - \sum\limits_{ijl}\lambda \delta_{ij}X^{j}Y^{l}\left\langle \mathbf{x}_{i}, \mathbf{x}_{l} \right\rangle \\ &= \sum\limits_{ijl}({L^{i}}_{j} - \lambda \delta_{ij} )X^{j}Y^{l}g_{il} \\ &= 0 \end{align*} $$

Hence, we obtain the following.

$$ \dfrac{\partial f}{\partial X^{l}} = 0 \implies \left\langle L(\mathbf{X}) - \lambda \mathbf{X}, \mathbf{Y} \right\rangle = 0\quad \forall \mathbf{Y} \implies L(\mathbf{X}) = \lambda \mathbf{X} $$

Therefore, $\lambda$ is an eigenvalue of $L$, and $\mathbf{X}$ is the corresponding eigenvector. In particular, $\mathbf{X}$ must satisfy the constraint $\left\langle \mathbf{X}, \mathbf{X} \right\rangle = 1$, hence it is a unit eigenvector.

Therefore, $II(\mathbf{X}, \mathbf{X})$ attains its maximum (minimum) value at a unit eigenvector of $L$.
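Concretely, in the coordinate basis $\left\{ \mathbf{x}_{1}, \mathbf{x}_{2} \right\}$, the condition $L(\mathbf{X}) = \lambda \mathbf{X}$ is just the ordinary $2 \times 2$ eigenvalue problem

$$ \sum\limits_{j} {L^{i}}_{j} X^{j} = \lambda X^{i}, \qquad i = 1, 2 $$

for the matrix $\left[ {L^{i}}_{j} \right]$.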

Moreover, let $B = \left\{ \mathbf{x}_{1}, \mathbf{x}_{2} \right\}$ and, for convenience, denote the matrix representation of $L$ with respect to $B$ by the same symbol, $L \equiv \left[ L \right]_{B}$. Then $\lambda$ is a solution of the following equation.

$$ \begin{equation} \begin{aligned} \det(L - \lambda I) &= (\lambda - {L^{1}}_{1})(\lambda - {L^{2}}_{2}) - {L^{1}}_{2}{L^{2}}_{1} \\ &= \lambda^{2} - ({L^{1}}_{1} + {L^{2}}_{2})\lambda + ({L^{1}}_{1}{L^{2}}_{2} - {L^{1}}_{2}{L^{2}}_{1}) \\ &= \lambda^{2} - \tr(L) \lambda + \det(L) \\ &= 0 \end{aligned} \label{1} \end{equation} $$

Let’s denote the two solutions (eigenvalues) as $\kappa_{1}, \kappa_{2}$ ($\kappa_{1} \ge \kappa_{2}$). The following theorem states that these two values are indeed the minimum and maximum values of $\kappa_{n}$.

Theorem

At each point of the surface $M$, 1. there exist directions in which the normal curvature attains its maximum and its minimum, respectively, and 2. these two directions can be chosen to be orthogonal to each other.

Proof

1. The two eigenvalues of $L$ are respectively the maximum and minimum values of the normal curvature.

Following the discussion above, the normal curvature at a point $p$ on $M$ attains its maximum and minimum values in the directions of the eigenvectors of $L$. Denote the two eigenvalues of $L$ at $p$ by $\kappa_{1}, \kappa_{2}$ ($\kappa_{1} \ge \kappa_{2}$), and the corresponding unit eigenvectors by $\mathbf{X}_{1}, \mathbf{X}_{2}$. Then the maximum and minimum values of the normal curvature are as follows.

$$ \kappa_{n} = II(\mathbf{X}_{i}, \mathbf{X}_{i}) = \left\langle L(\mathbf{X}_{i}), \mathbf{X}_{i} \right\rangle = \left\langle \kappa_{i}\mathbf{X}_{i}, \mathbf{X}_{i} \right\rangle = \kappa_{i}\left\langle \mathbf{X}_{i}, \mathbf{X}_{i} \right\rangle = \kappa_{i} $$

Therefore, the larger eigenvalue $\kappa_{1}$ is the maximum normal curvature, and the smaller value $\kappa_{2}$ is the minimum normal curvature.

2. The two eigenvectors are orthogonal to each other.

  • $\kappa_{1} \ne \kappa_{2}$

In this case, since $L$ is self-adjoint,

$$ \kappa_{1} \left\langle \mathbf{X}_{1}, \mathbf{X}_{2} \right\rangle = \left\langle L(\mathbf{X}_{1}), \mathbf{X}_{2} \right\rangle = \left\langle \mathbf{X}_{1}, L(\mathbf{X}_{2}) \right\rangle = \left\langle \mathbf{X}_{1}, \kappa_{2} \mathbf{X}_{2} \right\rangle = \kappa_{2} \left\langle \mathbf{X}_{1}, \mathbf{X}_{2} \right\rangle \\ \implies (\kappa_{1} - \kappa_{2}) \left\langle \mathbf{X}_{1}, \mathbf{X}_{2} \right\rangle = 0 $$

Since $\kappa_{1} \ne \kappa_{2}$ by assumption, $\left\langle \mathbf{X}_{1}, \mathbf{X}_{2} \right\rangle = 0$.

  • $\kappa_{1} = \kappa_{2}$

Lemma

Suppose $\lambda$ and $\mathbf{X}$ are an eigenvalue and a corresponding eigenvector of $L$ at a point $p$ on the surface $M$, and suppose the unit tangent vector $\mathbf{Y} \in T_{p}M$ satisfies $\left\langle \mathbf{X}, \mathbf{Y} \right\rangle = 0$. Then $\mathbf{Y}$ is also an eigenvector of $L$.

Proof

By assumption, $\left\{ \mathbf{X}, \mathbf{Y} \right\}$ forms a basis of $T_{p}M$, so we may write $L(\mathbf{Y}) = a_{1} \mathbf{X} + a_{2} \mathbf{Y}$. Since $L$ is self-adjoint,

$$ 0 = \left\langle \lambda \mathbf{X}, \mathbf{Y} \right\rangle = \left\langle L(\mathbf{X}), \mathbf{Y} \right\rangle = \left\langle \mathbf{X}, L(\mathbf{Y}) \right\rangle = \left\langle \mathbf{X}, a_{1} \mathbf{X} + a_{2} \mathbf{Y} \right\rangle $$

Therefore $a_{1}=0$, so $L(\mathbf{Y}) = a_{2}\mathbf{Y}$ and $\mathbf{Y}$ is also an eigenvector.

According to the lemma, a unit vector orthogonal to $\mathbf{X}_{1}$ is also an eigenvector. Therefore, it can be chosen as $\mathbf{X}_{2}$.

Definition

  • The eigenvalues $\kappa_{1}, \kappa_{2}$ of the Weingarten map $L$ defined at point $p\in M$ are called the principal curvatures at point $p$ on the surface $M$. The eigenvectors of $L$ are called the principal directions at point $p$.

  • A point where the two principal curvatures $\kappa_{1}, \kappa_{2}$ are equal is called an umbilic.

  • If the tangent vector at every point of a curve is a principal direction at that point of the surface $M$, then the curve is called a line of curvature on $M$.

Explanation

According to the discussion above, the larger (smaller) principal curvature is the maximum (minimum) normal curvature at point $p$.

All points of the sphere $S^{2}$ and the plane $\mathbb{R}^{2}$ are umbilics. (The converse also holds: a connected surface consisting entirely of umbilics is contained in a sphere or a plane.)
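For instance, taking the sphere of radius $r$ centered at the origin, the unit normal at $p$ is $\mathbf{n}(p) = \pm\dfrac{1}{r}p$, so the Weingarten map is a scalar multiple of the identity; on the plane, $\mathbf{n}$ is constant, so $L = 0$. In both cases the two principal curvatures coincide at every point (the overall sign depends on the choice of unit normal):

$$ L_{S^{2}} = \pm\dfrac{1}{r}\,\mathrm{id} \implies \kappa_{1} = \kappa_{2} = \pm\dfrac{1}{r}, \qquad L_{\mathbb{R}^{2}} = 0 \implies \kappa_{1} = \kappa_{2} = 0 $$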

In $\eqref{1}$, by the relationship between roots and coefficients, $\kappa_{1} \kappa_{2} = \det L$ and $\kappa_{1} + \kappa_{2} = \tr L$ hold. The product $\kappa_{1}\kappa_{2} = \det L$ is called the Gaussian curvature, and $\dfrac{\kappa_{1} + \kappa_{2}}{2} = \dfrac{\tr{L}}{2}$ is called the mean curvature.
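To see these quantities in a concrete case, here is a minimal sympy sketch (the parametrized cylinder and all symbol names are assumptions made only for this example, not taken from the cited text). It builds the matrix of the Weingarten map from the relation $L_{jk} = \sum\limits_{l}{L^{l}}_{k}g_{lj}$ used above, i.e. $\left[ {L^{i}}_{j} \right] = \left[ g_{ij} \right]^{-1}\left[ L_{ij} \right]$, and reads off the principal curvatures, the Gaussian curvature, and the mean curvature from its eigenvalues, determinant, and trace; the signs depend on the choice of unit normal.

```python
import sympy as sp

u, v = sp.symbols("u v", real=True)
r = sp.Symbol("r", positive=True)  # radius of the example cylinder

# Example surface patch: a right circular cylinder x(u, v) = (r cos u, r sin u, v)
x = sp.Matrix([r * sp.cos(u), r * sp.sin(u), v])

x_u, x_v = x.diff(u), x.diff(v)
n = x_u.cross(x_v)
n = (n / sp.sqrt(n.dot(n))).applyfunc(sp.simplify)   # unit normal

# First fundamental form g_ij = <x_i, x_j>, second fundamental form L_ij = <x_ij, n>
g = sp.Matrix([[x_u.dot(x_u), x_u.dot(x_v)],
               [x_v.dot(x_u), x_v.dot(x_v)]])
L_low = sp.Matrix([[x_u.diff(u).dot(n), x_u.diff(v).dot(n)],
                   [x_v.diff(u).dot(n), x_v.diff(v).dot(n)]])

# Matrix of the Weingarten map in the basis {x_u, x_v}: [L^i_j] = [g_ij]^(-1) [L_ij]
L_mat = (g.inv() * L_low).applyfunc(sp.simplify)

kappas = [sp.simplify(k) for k in L_mat.eigenvals()]  # principal curvatures
K = sp.simplify(L_mat.det())                          # Gaussian curvature  kappa_1 * kappa_2
H = sp.simplify(L_mat.trace() / 2)                    # mean curvature  (kappa_1 + kappa_2) / 2

print(kappas, K, H)  # for this cylinder and normal: kappas = {-1/r, 0}, K = 0, H = -1/(2*r)
```

Replacing the parametrization with any other regular surface patch gives the corresponding principal curvatures in the same way.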


  1. Richard S. Millman and George D. Parker, Elements of Differential Geometry (1977), pp. 127–129. ↩︎