

Principal Curvatures

Buildup¹

To know in which direction and by how much a surface $M$ is curved, it suffices to know the normal curvature $\kappa_{n}$ in each direction. In other words, knowing all the $\kappa_{n}$ at a point $p$ tells us how $M$ bends there. The first step is to consider the maximum and minimum values of $\kappa_{n}$. Recall the following lemma about a unit speed curve $\boldsymbol{\gamma}$ on $M$:

Lemma

If $\mathbf{T}$ is the tangent field of a unit speed curve $\boldsymbol{\gamma}$ on $M$, then $\kappa_{n} = II(\mathbf{T}, \mathbf{T})$ holds.

Therefore, our goal is to find the maximum and minimum values of $II(\mathbf{X}, \mathbf{X}) = \kappa_{n}$ over unit tangent vectors $\mathbf{X} \in T_{p}M$. Here $II$ is the second fundamental form.
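For instance, on a sphere of radius $r$ with the unit normal chosen toward the center, every normal section is a great circle of radius $r$, so

$$
\kappa_{n} = \dfrac{1}{r} \quad \text{in every direction},
$$

and the maximum and minimum coincide. On a general surface, however, $\kappa_{n}$ genuinely depends on the direction of $\mathbf{X}$.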

In other words, this is the problem of maximizing (minimizing) $II(\mathbf{X}, \mathbf{X})$ subject to the constraint $\left\langle \mathbf{X}, \mathbf{X} \right\rangle = 1$. Such problems can be solved by the method of Lagrange multipliers, so the problem changes from finding the maximum (minimum) of $II(\mathbf{X}, \mathbf{X})$ to finding the maximum (minimum) of the following $f$. Considering the Weingarten map $L$, since $II(\mathbf{X}, \mathbf{X}) = \left\langle L(\mathbf{X}), \mathbf{X} \right\rangle$,

$$
\begin{align*}
f(\mathbf{X}, \lambda) &= II(\mathbf{X}, \mathbf{X}) - \lambda (\left\langle \mathbf{X}, \mathbf{X} \right\rangle - 1) \\
&= \left\langle L(\mathbf{X}), \mathbf{X} \right\rangle - \lambda\left\langle \mathbf{X}, \mathbf{X} \right\rangle + \lambda \\
&= \left\langle L(\mathbf{X}) - \lambda \mathbf{X}, \mathbf{X} \right\rangle + \lambda
\end{align*}
$$

Expressing this in terms of a coordinate patch $\mathbf{x}$, with $\mathbf{X} = X^{1}\mathbf{x}_{1} + X^{2}\mathbf{x}_{2}$ and $L(\mathbf{x}_{k}) = \sum\limits_{l}{L^{l}}_{k}\mathbf{x}_{l}$, gives the following.

$$
\begin{align*}
f(\mathbf{X}, \lambda) = f(X^{1}, X^{2}, \lambda)
&= \lambda + \left\langle \sum\limits_{i,j} {L^{i}}_{j}X^{j}\mathbf{x}_{i} - \sum\limits_{j}\lambda X^{j}\mathbf{x}_{j}, \sum\limits_{k}X^{k}\mathbf{x}_{k} \right\rangle \\
&= \lambda + \left\langle {L^{i}}_{j}X^{j}\mathbf{x}_{i} - \lambda X^{j}\mathbf{x}_{j}, X^{k}\mathbf{x}_{k} \right\rangle && \text{by } \href{https://freshrimpsushi.github.io/posts/einstein-notation}{\text{Einstein notation}} \\
&= \lambda + {L^{i}}_{j}X^{j}X^{k}\left\langle \mathbf{x}_{i}, \mathbf{x}_{k} \right\rangle - \lambda X^{j}X^{k} \left\langle \mathbf{x}_{j}, \mathbf{x}_{k} \right\rangle \\
&= \lambda + {L^{i}}_{j}X^{j}X^{k}g_{ik} - \lambda X^{j}X^{k} g_{jk} \\
&= \lambda + {L^{i}}_{j}X^{j}X^{k}g_{ik} - \lambda X^{j}X^{k} \delta_{ij}g_{ik} \\
&= \lambda + ({L^{i}}_{j} - \lambda \delta_{ij}) X^{j}X^{k}g_{ik}
\end{align*}
$$

$\delta$ is the Kronecker delta. By the method of Lagrange multipliers, we require $\dfrac{\partial f}{\partial X^{l}} = 0$. Since $L_{jk} = \sum\limits_{l}{L^{l}}_{k}g_{lj}$,

$$
\begin{align*}
0 = \dfrac{\partial f}{\partial X^{l}}
&= \sum\limits_{ijk} ({L^{i}}_{j} - \lambda \delta_{ij})\delta_{jl}X^{k}g_{ik} + \sum\limits_{ijk} ({L^{i}}_{j} - \lambda \delta_{ij})\delta_{kl}X^{j}g_{ik} \\
&= \sum\limits_{ik} ({L^{i}}_{l} - \lambda \delta_{il})X^{k}g_{ik} + \sum\limits_{ij} ({L^{i}}_{j} - \lambda \delta_{ij})X^{j}g_{il} \\
&= \sum\limits_{ik} {L^{i}}_{l}X^{k}g_{ik} - \sum\limits_{ik}\lambda \delta_{il}X^{k}g_{ik} + \sum\limits_{ij} {L^{i}}_{j}X^{j}g_{il} - \sum\limits_{ij}\lambda \delta_{ij}X^{j}g_{il} \\
&= \sum\limits_{k} L_{kl}X^{k} - \sum\limits_{k}\lambda X^{k}g_{lk} + \sum\limits_{j} L_{lj}X^{j} - \sum\limits_{j}\lambda X^{j}g_{jl} \\
&= \sum\limits_{j} \left( L_{jl}X^{j} - \lambda X^{j}g_{lj} + L_{lj}X^{j} - \lambda X^{j}g_{jl} \right) \\
&= 2\sum\limits_{j}L_{jl}X^{j} - 2\sum\limits_{j}\lambda X^{j}g_{lj} && \text{by } L_{jl} = L_{lj},\ g_{jl} = g_{lj} \\
&= 2\sum\limits_{ij}{L^{i}}_{j}X^{j}g_{il} - 2\sum\limits_{ij}\lambda X^{j}\delta_{ij}g_{li} \\
&= 2\sum\limits_{ij}\left( {L^{i}}_{j} - \lambda\delta_{ij} \right)X^{j}g_{li}
\end{align*}
$$

$$
\implies \sum\limits_{ij}\left( {L^{i}}_{j} - \lambda\delta_{ij} \right)X^{j}g_{li} = 0
$$

Therefore, for all $Y^{l}$, we obtain the following.

$$
\sum\limits_{ijl}\left( {L^{i}}_{j} - \lambda\delta_{ij} \right)X^{j}Y^{l}g_{li} = 0
$$

This means that for all $\mathbf{Y}=\sum\limits_{l}Y^{l}\mathbf{x}_{l}$,

$$
\begin{align*}
\left\langle L(\mathbf{X}) - \lambda \mathbf{X}, \mathbf{Y} \right\rangle &= \left\langle L\left( \sum\limits_{j}X^{j}\mathbf{x}_{j} \right) - \sum\limits_{i}\lambda X^{i} \mathbf{x}_{i}, \sum\limits_{l}Y^{l}\mathbf{x}_{l} \right\rangle \\
&= \left\langle \sum\limits_{ij}{L^{i}}_{j}X^{j}\mathbf{x}_{i} - \sum\limits_{ij}\lambda \delta_{ij} X^{j} \mathbf{x}_{i}, \sum\limits_{l}Y^{l}\mathbf{x}_{l} \right\rangle \\
&= \sum\limits_{ijl}{L^{i}}_{j}X^{j}Y^{l}\left\langle \mathbf{x}_{i}, \mathbf{x}_{l} \right\rangle - \sum\limits_{ijl}\lambda \delta_{ij}X^{j}Y^{l}\left\langle \mathbf{x}_{i}, \mathbf{x}_{l} \right\rangle \\
&= \sum\limits_{ijl}({L^{i}}_{j} - \lambda \delta_{ij} )X^{j}Y^{l}g_{il} \\
&= 0
\end{align*}
$$

Hence, we obtain the following.

$$
\dfrac{\partial f}{\partial X^{l}} = 0 \implies \left\langle L(\mathbf{X}) - \lambda \mathbf{X}, \mathbf{Y} \right\rangle = 0 \quad \forall \mathbf{Y} \implies L(\mathbf{X}) = \lambda \mathbf{X}
$$

Therefore, $\lambda$ is an eigenvalue of $L$, and $\mathbf{X}$ is a corresponding eigenvector. In particular, $\mathbf{X}$ must satisfy the constraint $\left\langle \mathbf{X}, \mathbf{X} \right\rangle = 1$, hence it is a unit eigenvector.

Therefore, we conclude that $II(\mathbf{X}, \mathbf{X})$ attains its maximum (minimum) value at the two unit eigenvectors of $L$.

Moreover, let $B = \left\{ \mathbf{x}_{1}, \mathbf{x}_{2} \right\}$ and, for convenience, denote the matrix representation of $L$ with respect to $B$ by the same symbol, $L \equiv \left[ L \right]_{B}$. Then $\lambda$ is a solution of the following equation.

$$
\begin{equation}
\begin{aligned}
\det(L - \lambda I) &= (\lambda - {L^{1}}_{1})(\lambda - {L^{2}}_{2}) - {L^{1}}_{2}{L^{2}}_{1} \\
&= \lambda^{2} - ({L^{1}}_{1} + {L^{2}}_{2})\lambda + ({L^{1}}_{1}{L^{2}}_{2} - {L^{1}}_{2}{L^{2}}_{1}) \\
&= \lambda^{2} - \tr(L) \lambda + \det(L) \\
&= 0
\end{aligned}
\label{1}
\end{equation}
$$

Let us denote the two solutions (eigenvalues) by $\kappa_{1}, \kappa_{2}$ ($\kappa_{1} \ge \kappa_{2}$). The following theorem states that these two values are indeed the maximum and minimum values of $\kappa_{n}$.
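As a quick numerical sanity check, here is a minimal SymPy sketch. The saddle patch $\mathbf{x}(u, v) = (u, v, u^{2} - v^{2})$ is just an assumed test surface, not anything from the text; the sketch builds the Weingarten matrix $[{L^{i}}_{j}] = [g_{ij}]^{-1}[L_{ij}]$ and finds its eigenvalues, i.e. the solutions of $\eqref{1}$:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# Assumed test surface: the saddle x(u, v) = (u, v, u^2 - v^2)
x = sp.Matrix([u, v, u**2 - v**2])

xu, xv = x.diff(u), x.diff(v)   # tangent vectors x_1, x_2
n = xu.cross(xv)
n = n / n.norm()                # unit normal

# First fundamental form g_ij = <x_i, x_j>
g = sp.Matrix([[xu.dot(xu), xu.dot(xv)],
               [xv.dot(xu), xv.dot(xv)]])

# Second fundamental form L_ij = <x_ij, n>
II = sp.Matrix([[x.diff(u, 2).dot(n), x.diff(u, v).dot(n)],
                [x.diff(u, v).dot(n), x.diff(v, 2).dot(n)]])

# Weingarten matrix [L^i_j] = g^{-1} [L_ij]; its eigenvalues are
# the principal curvatures kappa_1, kappa_2
L = sp.simplify(g.inv() * II)
print(L.subs({u: 0, v: 0}).eigenvals())   # {2: 1, -2: 1}
```

At the origin this returns $\kappa_{1} = 2$ and $\kappa_{2} = -2$, attained along the $u$- and $v$-directions respectively.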

Theorem

At each point of the surface $M$, there exist directions in which the normal curvature attains its maximum and minimum values, respectively, and these two directions are orthogonal to each other.

Proof

The two eigenvalues of $L$ are respectively the maximum and minimum values of the normal curvature.

Following the discussion above, the normal curvature at a point $p$ on $M$ attains its maximum and minimum values in the directions of the eigenvectors of $L$. Let the two eigenvalues of $L$ at $p$ be $\kappa_{1}, \kappa_{2}$ ($\kappa_{1} \ge \kappa_{2}$), with corresponding eigenvectors $\mathbf{X}_{1}, \mathbf{X}_{2}$. Then the maximum and minimum values of the normal curvature are as follows.

$$
\kappa_{n} = II(\mathbf{X}_{i}, \mathbf{X}_{i}) = \left\langle L(\mathbf{X}_{i}), \mathbf{X}_{i} \right\rangle = \left\langle \kappa_{i}\mathbf{X}_{i}, \mathbf{X}_{i} \right\rangle = \kappa_{i}\left\langle \mathbf{X}_{i}, \mathbf{X}_{i} \right\rangle = \kappa_{i}
$$

Therefore, the larger eigenvalue $\kappa_{1}$ is the maximum normal curvature, and the smaller eigenvalue $\kappa_{2}$ is the minimum normal curvature.
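For example, at a point where $g_{ij} = \delta_{ij}$ and $[L] = \operatorname{diag}(2, -2)$ (as for the saddle at the origin in the sketch above), the unit direction $\mathbf{X} = \cos\theta\, \mathbf{x}_{1} + \sin\theta\, \mathbf{x}_{2}$ gives

$$
\kappa_{n}(\theta) = II(\mathbf{X}, \mathbf{X}) = 2\cos^{2}\theta - 2\sin^{2}\theta = 2\cos 2\theta,
$$

which indeed attains its maximum $\kappa_{1} = 2$ at $\theta = 0$ and its minimum $\kappa_{2} = -2$ at $\theta = \pi/2$, in mutually orthogonal directions.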

The two eigenvectors are orthogonal to each other.

  • $\kappa_{1} \ne \kappa_{2}$

In this case, since $L$ is self-adjoint,

$$
\kappa_{1} \left\langle \mathbf{X}_{1}, \mathbf{X}_{2} \right\rangle = \left\langle L(\mathbf{X}_{1}), \mathbf{X}_{2} \right\rangle = \left\langle \mathbf{X}_{1}, L(\mathbf{X}_{2}) \right\rangle = \left\langle \mathbf{X}_{1}, \kappa_{2} \mathbf{X}_{2} \right\rangle = \kappa_{2} \left\langle \mathbf{X}_{1}, \mathbf{X}_{2} \right\rangle
$$

$$
\implies (\kappa_{1} - \kappa_{2}) \left\langle \mathbf{X}_{1}, \mathbf{X}_{2} \right\rangle = 0
$$

Since $\kappa_{1} - \kappa_{2} \ne 0$ by assumption, $\left\langle \mathbf{X}_{1}, \mathbf{X}_{2} \right\rangle = 0$.

  • $\kappa_{1} = \kappa_{2}$

Lemma

Let $\lambda$ and $\mathbf{X}$ be an eigenvalue of $L$ at a point $p$ on the surface $M$ and a corresponding eigenvector, and suppose the unit tangent vector $\mathbf{Y} \in T_{p}M$ satisfies $\left\langle \mathbf{X}, \mathbf{Y} \right\rangle = 0$. Then $\mathbf{Y}$ is also an eigenvector of $L$.

Proof

By assumption, $\left\{ \mathbf{X}, \mathbf{Y} \right\}$ forms a basis of $T_{p}M$, so we may write $L(\mathbf{Y}) = a_{1}\mathbf{X} + a_{2}\mathbf{Y}$. Since $L$ is self-adjoint and $\left\langle \mathbf{X}, \mathbf{Y} \right\rangle = 0$,

$$
0 = \left\langle \lambda \mathbf{X}, \mathbf{Y} \right\rangle = \left\langle L(\mathbf{X}), \mathbf{Y} \right\rangle = \left\langle \mathbf{X}, L(\mathbf{Y}) \right\rangle = \left\langle \mathbf{X}, a_{1} \mathbf{X} + a_{2} \mathbf{Y} \right\rangle
$$

Therefore $a_{1}=0$ holds, and since $L(\mathbf{Y}) = a_{2}\mathbf{Y}$, $\mathbf{Y}$ is also an eigenvector.

According to the lemma, any unit vector orthogonal to $\mathbf{X}_{1}$ is also an eigenvector, and can therefore be chosen as $\mathbf{X}_{2}$.

Definition

  • The eigenvalues $\kappa_{1}, \kappa_{2}$ of the Weingarten map $L$ at a point $p\in M$ are called the principal curvatures of the surface $M$ at $p$. The eigenvectors of $L$ are called the principal directions at $p$.

  • A point where the two principal curvatures $\kappa_{1}, \kappa_{2}$ are equal is called an umbilic.

  • If the tangent vector at every point of a curve is a principal direction at that point of the surface $M$, then the curve is called a line of curvature on $M$ (see the example below).
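As a concrete illustration of these definitions, consider the unit cylinder $\mathbf{x}(u, v) = (\cos u, \sin u, v)$ with outward unit normal $\mathbf{n} = (\cos u, \sin u, 0)$. A direct computation gives $g_{ij} = \delta_{ij}$ and

$$
[L] = \begin{pmatrix} -1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad \kappa_{1} = 0, \quad \kappa_{2} = -1,
$$

so the principal directions are $\mathbf{x}_{v}$ (along the rulings) and $\mathbf{x}_{u}$ (along the horizontal circles); the vertical lines and horizontal circles are the lines of curvature, and no point of the cylinder is an umbilic since $\kappa_{1} \ne \kappa_{2}$.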

Explanation

According to the discussion above, the larger (smaller) principal curvature is the maximum (minimum) normal curvature at the point $p$.

All points of $S^{2}$ and $\mathbb{R}^{2}$ are umbilics. [Conversely, a connected surface consisting entirely of umbilics is contained in a sphere or a plane.]

In $\eqref{1}$, by the relationship between roots and coefficients, $\kappa_{1} \kappa_{2} = \det L$ holds, and this is called the Gaussian curvature. Also, $\dfrac{\kappa_{1} + \kappa_{2}}{2} = \dfrac{\tr L}{2}$ is called the mean curvature.
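For instance, on a sphere of radius $r$ with the normal chosen toward the center, $L = \dfrac{1}{r}I$, so every direction is principal and

$$
\kappa_{1}\kappa_{2} = \dfrac{1}{r^{2}}, \qquad \dfrac{\kappa_{1} + \kappa_{2}}{2} = \dfrac{1}{r},
$$

consistent with every point of $S^{2}$ being an umbilic.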


  1. Richard S. Millman and George D. Parker, Elements of Differential Geometry (1977), pp. 127-129. ↩︎