

Hopf-Lax Formula

Buildup¹

Consider the initial value problem for the Hamilton-Jacobi equation in which the Hamiltonian $H$ depends only on $Du$.

$$ \begin{equation} \left\{ \begin{aligned} u_{t} + H(Du)&=0 && \text{in } \mathbb{R}^n \times (0,\infty) \\ u&=g && \text{on } \mathbb{R}^n \times \left\{ t=0 \right\} \end{aligned} \right. \label{eq1} \end{equation} $$

In general, the Hamiltonian also depends on the spatial variable, as in $H(Du, x)$, but here we assume it is not affected by $x$. In addition, assume that the Hamiltonian $H\in C^\infty$ satisfies the following.

$$ \begin{cases} H \mathrm{\ is\ convex} \\ \lim \limits_{|p|\to \infty} \dfrac{H(p)}{|p|}=\infty \end{cases} $$
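For example, the quadratic Hamiltonian $H(p)=\frac{1}{2}|p|^{2}$ satisfies both conditions, whereas $H(p)=|p|$ is convex but fails the superlinear growth condition.

$$ H(p)=\frac{1}{2}|p|^{2} \implies \lim\limits_{|p| \to \infty}\frac{H(p)}{|p|}=\lim\limits_{|p| \to \infty}\frac{|p|}{2}=\infty, \qquad H(p)=|p| \implies \lim\limits_{|p| \to \infty}\frac{H(p)}{|p|}=1 $$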

Also, if we set $L=H^{\ast}$, the Legendre transform of $H$, then the Lagrangian $L$ satisfies the same properties. Lastly, assume that the initial datum $g : \mathbb{R}^n \to \mathbb{R}$ is Lipschitz continuous. That is,

$$ \mathrm{Lip}(g):=\sup \limits_{\substack{x,y\in \mathbb{R}^n \\ x \ne y}} \dfrac{ |g(x)-g(y)| }{|x-y|} < \infty $$
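Here $H^{\ast}$ denotes the Legendre transform of $H$. For instance, for the quadratic Hamiltonian $H(p)=\frac{1}{2}|p|^{2}$ the supremum below is attained at $p=v$, so the Lagrangian is again quadratic, and in particular convex and superlinear.

$$ L(v)=H^{\ast}(v):=\sup\limits_{p \in \mathbb{R}^n}\big( p\cdot v-H(p) \big), \qquad H(p)=\frac{1}{2}|p|^{2} \implies L(v)=\frac{1}{2}|v|^{2} $$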

Furthermore, the characteristic equation of the given Hamilton-Jacobi equation $\eqref{eq1}$ is as follows.

$$ \begin{align*} \dot{\mathbf{p}}(s) &= -D_{x}H \big( \mathbf{p}(s), \mathbf{x}(s) \big) \\ \dot{z}(s) &= D_{p} H\big( \mathbf{p}(s),\ \mathbf{x}(s)\big)\cdot \mathbf{p}(s) -H\big( \mathbf{p}(s), \mathbf{x}(s)\big) \\ \dot{\mathbf{x}}(s) &= D_{p}H\big( \mathbf{p}(s), \mathbf{x}(s) \big) \end{align*} $$

Here, since $H$ is assumed to be independent of $x$, it can be rewritten as follows.

$$ \begin{align*} \dot{\mathbf{p}} &= 0 \\ \dot{z} &= D H( \mathbf{p} )\cdot \mathbf{p} -H ( \mathbf{p} ) \\ \dot{\mathbf{x}} &= DH ( \mathbf{p}) \end{align*} $$
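These equations can be solved at once: writing $p_{0}:=\mathbf{p}(0)$, $x_{0}:=\mathbf{x}(0)$, $z_{0}:=z(0)$ for the initial values, $\mathbf{p}$ is constant along each characteristic, so the characteristics are straight lines.

$$ \mathbf{p}(s)=p_{0}, \qquad \mathbf{x}(s)=x_{0}+s\, DH(p_{0}), \qquad z(s)=z_{0}+s\big( DH(p_{0})\cdot p_{0}-H(p_{0}) \big) $$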

Here $t(s)=s$, $\mathbf{p}(s)=Du\big(\mathbf{x}(s), s\big)$, and $z(s)=u\big(\mathbf{x}(s), s\big)$. Since $H$ does not depend on $x$, there is no need to distinguish differentiation with respect to $p$ from differentiation with respect to $x$, so the subscript of $D$ is omitted. Since the Euler-Lagrange equation is derived with both the starting point and the end point fixed, if the given Hamilton-Jacobi equation $\eqref{eq1}$ has a solution, it is in general only a local in time solution as follows.

$$ u= u(x,t) \in C^2\big( \mathbb{R}^n \times [0,T]\big) $$

Among the characteristic equations above, the first and third are Hamilton's equations, which follow from the Euler-Lagrange equation of the action minimization problem defined by the Lagrangian $L=H^{\ast}$.

If $H$ and $L$ are differentiable at $p \in \mathbb{R}^n$ and $v\in \mathbb{R}^n$ respectively, then the following statements are all equivalent.

$$ \begin{cases} p\cdot v=L(v) + H(p) \\ p=DL(v) \\ v=DH(p) \end{cases} $$
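As a quick check with the quadratic pair $H(p)=\frac{1}{2}|p|^{2}$, $L(v)=\frac{1}{2}|v|^{2}$: here $DH(p)=p$ and $DL(v)=v$, and all three conditions reduce to $p=v$, since

$$ p\cdot v=\frac{1}{2}|v|^{2}+\frac{1}{2}|p|^{2} \iff \frac{1}{2}|p-v|^{2}=0 \iff p=v $$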

Now set $\mathbf{v}:=\dot{\mathbf{x}}(s)=DH\big(\mathbf{p}(s)\big)$. Then $\mathbf{p}=DL(\mathbf{v})$ and $\mathbf{p}\cdot \mathbf{v}=L(\mathbf{v})+H(\mathbf{p})$ by the lemma above, so we obtain the following.

$$ \begin{align*} \dot{z}(s) &= DH(\mathbf{p})\cdot \mathbf{p}-H(\mathbf{p}) \\ &= \mathbf{v} \cdot \mathbf{p}-H(\mathbf{p}) \\ &= L(\mathbf{v})+H(\mathbf{p})-H(\mathbf{p}) \\ &= L(\mathbf{v}) = L\big(\dot{\mathbf{x}}(s)\big) \end{align*} $$

Therefore, computing $z(t)$ gives the following.

$$ \begin{align*} z(t) &= \int_{0}^t \dot{z}(s)ds +z(0) \\ &= \int_{0}^tL \big( \dot{\mathbf{x}}(s) \big)ds + u\big( \mathbf{x}(0),\ 0\big) \\ &= \int_{0}^t L\big( \dot{\mathbf{x}}(s)\big) ds +g\big(\mathbf{x}(0) \big) \end{align*} $$

But since, by the conditions above, $z(t)=u\big(\mathbf{x}(t), t\big)$, evaluating at the point $x=\mathbf{x}(t)$ gives the following.

$$ u(x,t)=\int_{0}^t L\big( \dot{\mathbf{x}}(s)\big) ds +g\big(\mathbf{x}(0) \big) \quad (0 \le t <T) $$
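In fact, since $\mathbf{p}$ is constant along each characteristic, $\dot{\mathbf{x}}=DH(\mathbf{p})$ is constant as well, so $\mathbf{x}(\cdot)$ is a straight line and the integral can be evaluated directly.

$$ \dot{\mathbf{x}}(s) \equiv \frac{x-\mathbf{x}(0)}{t} \implies u(x,t)=tL\left( \frac{x-\mathbf{x}(0)}{t} \right)+g\big( \mathbf{x}(0) \big) $$

This already has the form that appears in the Hopf-Lax formula below, with $y=\mathbf{x}(0)$.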

However, this is only a local in time smooth solution, so the question remains whether a global in time weak solution can be obtained. Returning to the action minimization problem, the difference from the setting in which the Euler-Lagrange equation was derived is that now only the end point is fixed.

Suppose a fixed $x \in \mathbb{R}^n$ and $t>0$ are given, and let the admissible class $\mathcal{A}$ be as follows.

$$ \mathcal{A}=\left\{ \mathbf{w}\in C^1\big( [0,t];\mathbb{R}^n \big)\ :\ \mathbf{w}(t)=x \right\} $$

And let’s consider the minimization problem of the following action.

$$ \mathbf{w}(\cdot) \in \mathcal{A} \mapsto \int_{0}^t L\big( \dot{\mathbf{w}}(s)\big) ds + g(\mathbf{w}(0)) $$

If a minimizer $\mathbf{x}(\cdot)$ exists, then with $\mathbf{p}(s):=DL\big(\dot{\mathbf{x}}(s)\big)$ it satisfies the Euler-Lagrange equation, and therefore also Hamilton's equations. Thus, as in the case of the local in time solution obtained earlier, the solution would be given as follows.

$$ u(x,t)=\int_{0}^tL\big( \dot{\mathbf{x}}(s)\big)ds +g \big( \mathbf{x}(0) \big) $$

Based on the above, if a global in time weak solution exists, it can be defined as follows.

$$ \begin{equation} u(x,t):=\inf \limits_{\mathbf{w} \in \mathcal{A}} \left\{ \int_{0}^t L\big( \dot{\mathbf{w}}(s) \big)ds + g\big( \mathbf{w}(0) \big) \right\} \label{eq2} \end{equation} $$

Theorem

Let $x \in \mathbb{R}^n$ and $t>0$. Then the solution of the minimization problem $\eqref{eq2}$ is given as follows.

$$ u(x,t) = \min \limits_{y \in \mathbb{R}^n} \left\{ tL\left( \dfrac{x-y}{t} \right) +g(y) \right\} $$

This is called the Hopf-Lax formula.
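For example, with the quadratic Hamiltonian $H(p)=\frac{1}{2}|p|^{2}$, so that $L(v)=\frac{1}{2}|v|^{2}$, the Hopf-Lax formula becomes the infimal convolution of $g$ with the quadratic kernel $\frac{|\cdot|^{2}}{2t}$.

$$ u(x,t)=\min\limits_{y \in \mathbb{R}^n}\left\{ \frac{|x-y|^{2}}{2t}+g(y) \right\} $$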

Proof

First, we show that the formula holds with $\inf$ in place of $\min$, and then show that the infimum is actually attained, so that it is indeed a $\min$.


  • Step 1.

    Fix an arbitrary $y \in \mathbb{R}^n$, with $t>0$ as given above, and define $\mathbf{w}$ as follows.

    $$ \mathbf{w}(s) :=y+\frac{s}{t}(x-y) \quad (0 \le s \le t) $$

    Then $\mathbf{w}(0)=y$ and $\mathbf{w}(t)=x$, so $\mathbf{w}$ is an element of the admissible class $\mathcal{A}$.

    $$ \mathcal{A}= \left\{ \mathbf{w}(\cdot) \in C^1\big([0,t];\mathbb{R}^n\big) \ \big| \ \mathbf{w}(t)=x\right\} $$

    Then, by the definition $\eqref{eq2}$ of $u$, the following inequality holds.

    $$ \begin{align*} u(x,t) & \le \int_{0}^t L \left( \frac{x-y}{t}\right)ds + g(y) \\ &= tL\left( \frac{x-y}{t}\right)+g(y) \end{align*} $$

    Since this inequality holds for all $y \in \mathbb{R}^n$, we obtain the following.

    $$ u(x,t) \le \inf \limits_{y \in \mathbb{R}^n} \left(t L\left(\frac{x-y}{t} \right) +g(y)\right) $$

  • Step 2.

    Let $\mathbf{w}(\cdot) \in \mathcal{A}$. Then $\mathbf{w}(\cdot) \in C^1\big([0,t];\mathbb{R}^n\big)$ and $\mathbf{w}(t)=x$.

    Jensen’s Inequality

    Suppose the function $f$ is convex. Then the following holds, where the dashed integral denotes the average over $U$, that is, $\dfrac{1}{|U|}\displaystyle\int_{U}$. $$ f \left( -\!\!\!\!\!\! \int_{U} u\, dx \right) \le -\!\!\!\!\!\! \int_{U} f(u)\, dx $$

    Then, applying the above lemma with $f=L$ on $U=[0,t]$, so that the average is $\dfrac{1}{t}\displaystyle\int_{0}^{t}$, the following holds.

    $$ L \left( \frac{1}{t}\int_{0}^t \dot{\mathbf{w}}(s) ds\right) \le \dfrac{1}{t}\int_{0}^t L \big( \dot{\mathbf{w}}(s) \big)ds $$

    Now let $y:=\mathbf{w}(0)$ denote the starting point. Since $\displaystyle\int_{0}^t \dot{\mathbf{w}}(s) ds=\mathbf{w}(t)-\mathbf{w}(0)$, the above inequality becomes the following.

    $$ \begin{align*} && L\left( \dfrac{1}{t} \big( \mathbf{w}(t)-\mathbf{w}(0) \big) \right) &\le \dfrac{1}{t}\int_{0}^tL \big( \dot{\mathbf{w}}(s) \big)ds
    \\ \implies&& L\left( \dfrac{x-y}{t} \right) &\le \dfrac{1}{t}\int_{0}^tL \big( \dot{\mathbf{w}}(s) \big)ds \end{align*} $$

    Multiply both sides by $t$ and add $g(y)$ to get the following.

    $$ tL\left( \dfrac{x-y}{t} \right) + g(y) \le \int_{0}^tL \big( \dot{\mathbf{w}}(s) \big)ds + g(y) $$

    The left-hand side is bounded below by the infimum of the same expression over all $y \in \mathbb{R}^n$, so for every $\mathbf{w}\in\mathcal{A}$ the following holds.

    $$ \inf \limits_{y \in \mathbb{R}^n} \left( tL\left( \dfrac{x-y}{t} \right) + g(y) \right) \le \int_{0}^tL \big( \dot{\mathbf{w}}(s) \big)ds + g\big(\mathbf{w}(0)\big) $$

    Finally, taking the infimum over $\mathbf{w}(\cdot) \in \mathcal{A}$ on the right-hand side, by the definition $\eqref{eq2}$ we obtain the following.

    $$ \inf \limits_{y \in \mathbb{R}^n} \left( tL\left( \dfrac{x-y}{t} \right) + g(y) \right) \le u(x,t) $$

Therefore, by Step 1. and Step 2., the following holds.

$$ \begin{equation} u(x,t) = \inf \limits_{y \in \mathbb{R}^n} \left( tL\left( \dfrac{x-y}{t} \right) + g(y) \right) \label{eq3} \end{equation} $$

  • Step 3.

    Let’s say $\left\{y_{k} \right\}_{k=1}^\infty$ is a minimizing sequence for $\eqref{eq3}$. Then, the following holds.

    $$ \begin{equation} tL\left( \dfrac{x-y_{k}}{t} \right) + g(y_{k}) \to u(x,t)\in [-\infty, \infty) \quad \mathrm{as}\ k\to \infty \label{eq4} \end{equation} $$

    First, suppose $\left\{y_{k} \right\}$ is not bounded. We will show that this assumption leads to a contradiction, which proves that $\left\{ y_{k} \right\}$ is bounded. Since the sequence is unbounded, by passing to a subsequence, again denoted $\left\{ y_{k} \right\}$, we may assume $|y_{k}| \to \infty$ and $y_{k}\ne 0$ for all $k$. Then the following holds.

    $$ \left| \dfrac{x-y_{k}}{t} \right| \to \infty $$

    Then, by the superlinearity assumed of the Lagrangian $L$, the following holds.

    $$ a_{k}:= \dfrac{L\left( \dfrac{x-y_{k}}{t}\right)}{\left| \dfrac{x-y_{k}}{t}\right|} \to \infty $$

    Therefore $L\left( \dfrac{x-y_{k}}{t}\right) \to \infty$, and multiplying by the constant $t>0$ yields the same result.

    $$ \begin{equation} tL\left( \dfrac{x-y_{k}}{t}\right) \to \infty \label{eq5} \end{equation} $$

    The Lipschitz condition on $g$ can be rewritten as follows.

    $$ \dfrac{|g(x)-g(y_{k})|}{|x-y_{k}|} \le \mathrm{Lip}(g)=C \quad \forall \ k \in \mathbb{N} $$

    Therefore, we obtain the following.

    $$ g(x)-g(y_{k}) \le C|x-y_{k}| $$

Adding the quantity $tL\left( \dfrac{x-y_{k}}{t}\right)$ from $\eqref{eq5}$ to both sides gives the following.

$$ tL\left( \dfrac{x-y_{k}}{t}\right)+ g(x) -g(y_{k}) \le C|x-y_{k}|+ tL\left( \dfrac{x-y_{k}}{t}\right) \quad \mathrm{for\ large}\ k $$

Appropriately rearranging the above formula yields the following.

$$ tL\left( \dfrac{x-y_{k}}{t}\right)-C|x-y_{k}| + g(x) \le tL\left( \dfrac{x-y_{k}}{t}\right) + g(y_{k}) $$

Since $tL\left( \dfrac{x-y_{k}}{t}\right)=a_{k}|x-y_{k}|$ by the definition of $a_{k}$, this can be rewritten as follows.

$$ a_{k}|x-y_{k}| -C|x-y_{k}| + g(x) =|x-y_{k}|(a_{k}-C)+g(x) \le tL\left( \dfrac{x-y_{k}}{t}\right) + g(y_{k}) $$

Since $a_{k}\to \infty$ and $|x-y_{k}| \to \infty$, the left-hand side diverges to $\infty$, and hence so does the right-hand side $tL\left( \dfrac{x-y_{k}}{t}\right) + g(y_{k})$. But by $\eqref{eq4}$ this sequence converges to $u(x,t)\in[-\infty,\infty)$, which is a contradiction. Therefore $\left\{ y_{k} \right\}$ is bounded.

Since $\left\{ y_{k} \right\}$ is bounded, it has a convergent subsequence; passing to it, we may assume $y_{k} \to y_{0}$ for some $y_{0} \in \mathbb{R}^n$. Then, since $L$ and $g$ are continuous, the following holds.

$$ tL \left( \dfrac{x-y_{k}}{t} \right)+g(y_{k}) \to tL \left( \dfrac{x-y_{0}}{t}\right)+g(y_{0}) $$

On the other hand, by $\eqref{eq4}$, the same sequence converges to $u(x,t)$.

$$ tL\left( \dfrac{x-y_{k}}{t} \right) + g(y_{k}) \to u(x,t) \quad \mathrm{as}\ k\to \infty $$

Therefore the infimum in $\eqref{eq3}$ is attained at $y_{0}$, so it is in fact a minimum, and we obtain the following.

$$ u(x,t) = \min \limits_{y \in \mathbb{R}^n} \left( tL\left( \dfrac{x-y}{t} \right) +g(y) \right) $$
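As a concrete example (not from the text), take $n=1$, $H(p)=\frac{1}{2}p^{2}$, $L(v)=\frac{1}{2}v^{2}$, and the Lipschitz initial datum $g(x)=|x|$. Minimizing $\frac{(x-y)^{2}}{2t}+|y|$ over $y$ gives $y=x-t\operatorname{sgn}(x)$ when $|x| \ge t$ and $y=0$ when $|x| < t$, so

$$ u(x,t)=\min\limits_{y \in \mathbb{R}}\left\{ \frac{(x-y)^{2}}{2t}+|y| \right\} = \begin{cases} |x|-\dfrac{t}{2} & \text{if } |x| \ge t \\ \dfrac{x^{2}}{2t} & \text{if } |x| < t \end{cases} $$

One can check directly that this $u$ satisfies $u_{t}+\frac{1}{2}u_{x}^{2}=0$ wherever it is differentiable and that $u(x,0)=|x|$.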


  1. Lawrence C. Evans, Partial Differential Equations (2nd Edition, 2010), pp. 122-124 ↩︎