Fokker-Planck Equation Derivation
Theorem
$$ d X_{t} = f \left( t, X_{t} \right) dt + g \left( t , X_{t} \right) d W_{t} \qquad , t \in \left[ t_{0} , T \right] $$ Consider a stochastic differential equation as above, and let $F \in C_{0}^{\infty} \left( \mathbb{R} \right)$ be a test function. Then the probability density function $p(t,x)$ of $X_{t}$ at time $t$ satisfies the following partial differential equation. $$ {{ \partial p(t,x) } \over { \partial t }} = - {{ \partial \left[ p(t,x) f(t,x) \right] } \over { \partial x }} + {{ 1 } \over { 2 }} {{ \partial^{2} \left[ p(t,x) \left( g(t,x) \right)^{2} \right] } \over { \partial x^{2} }} $$
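For example (an illustrative substitution, not part of the theorem statement): for geometric Brownian motion $d X_{t} = \mu X_{t} dt + \sigma X_{t} d W_{t}$, that is $f(t,x) = \mu x$ and $g(t,x) = \sigma x$, the formula reads $$ {{ \partial p(t,x) } \over { \partial t }} = - \mu {{ \partial \left[ x \, p(t,x) \right] } \over { \partial x }} + {{ \sigma^{2} } \over { 2 }} {{ \partial^{2} \left[ x^{2} p(t,x) \right] } \over { \partial x^{2} }} $$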
- $C_{0}^{\infty} \left( \mathbb{R} \right)$ is the class of functions that are infinitely differentiable and vanish as $x \to \pm \infty$.
Explanation
Note that what is described in the equation is not $X_{t}$ itself, but its probability distribution.
In particular, if $f = 0$ and $g$ is constant, the equation reduces to the heat equation.
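As a quick numerical illustration of this special case, here is a minimal sketch (not from the original derivation; all parameter values are arbitrary): for $f = 0$ and constant $g = \sigma$ with $X_{0} = 0$, the Fokker-Planck equation is $p_{t} = (\sigma^{2}/2) p_{xx}$, whose solution is the heat kernel, i.e. the density of $N(0, \sigma^{2} t)$, and an Euler-Maruyama simulation of the SDE agrees with it.

```python
import numpy as np

# Sketch: for dX_t = sigma dW_t (f = 0, g = sigma constant), the Fokker-Planck
# equation p_t = (sigma^2 / 2) p_xx has the heat-kernel solution N(0, sigma^2 t)
# when X_0 = 0. Compare it with a histogram of Euler-Maruyama samples.
sigma, T, n_steps, n_paths = 0.8, 1.0, 1_000, 200_000   # arbitrary values
dt = T / n_steps
rng = np.random.default_rng(0)

X = np.zeros(n_paths)                       # every path starts at X_0 = 0
for _ in range(n_steps):
    X += sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

edges = np.linspace(-3.0, 3.0, 61)
hist, _ = np.histogram(X, bins=edges, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
heat_kernel = np.exp(-centers**2 / (2 * sigma**2 * T)) / np.sqrt(2 * np.pi * sigma**2 * T)

print("max |empirical density - heat kernel| =", np.abs(hist - heat_kernel).max())
```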
Derivation
$$ \begin{align*} X =& X_{t} \\ f =& f \left( t, X_{t} \right) \\ g =& g \left( t, X_{t} \right) \\ g^{2} =& \left[ g \left( t, X_{t} \right) \right]^{2} \end{align*} $$
For convenience, we adopt the abbreviations above, along with the usual subscript notation for partial derivatives. $$ F_{x} = {{ \partial F } \over { \partial x }} $$
Part 1.
- Itô’s lemma: if $Y_{t} = V \left( t, X_{t} \right)$ and $d X_{t} = u \, dt + v \, d W_{t}$, then $$ dY_{t} = \left( V_{t} + V_{x} u + {{ 1 } \over { 2 }} V_{xx} v^{2} \right) dt + V_{x} v d W_{t} $$
- [6] Expected value of an Itô integral $(\star)$: $$ E \left[ \int_{a}^{b} f d W_{t} \right] = 0 $$
Applying Itô’s lemma to $F \left( X_{t} \right)$, integrating from $0$ to $t$, and taking the expectation, the Itô integral term vanishes by $(\star)$, $$ \begin{align*} & d F \left( X \right) = \left( f F_{x} + {{ 1 } \over { 2 }} g^{2} F_{xx} \right) dt + g F_{x} d W_{t} \\ \implies & F \left( X_{t} \right) - F \left( X_{0} \right) = \int_{0}^{t} \left( f F_{x} + {{ 1 } \over { 2 }} g^{2} F_{xx} \right) ds + \int_{0}^{t} g F_{x} d W_{s} \\ \implies & E \left[ F \left( X_{t} \right) \right] = E \left[ F \left( X_{0} \right) \right] + E \int_{0}^{t} \left( f F_{x} + {{ 1 } \over { 2 }} g^{2} F_{xx} \right) ds + 0 & \because (\star) \\ \implies & {{ d E (F) } \over { dt }} = E \left[ f F_{x} + {{ 1 } \over { 2 }} g^{2} F_{xx} \right] \end{align*} $$ The constant $E \left[ F \left( X_{0} \right) \right]$ disappears when differentiating with respect to $t$.
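This identity can be spot-checked by Monte Carlo. The sketch below is not part of the derivation; the SDE, the test function, and all constants are arbitrary choices, and the finite-difference and sampling errors are small but nonzero.

```python
import numpy as np

# Monte Carlo spot-check of  d/dt E[F(X_t)] = E[ f F_x + (1/2) g^2 F_xx ].
# Arbitrary illustrative choices:
#   dX_t = -X_t dt + dW_t  (f = -x, g = 1),  F(x) = exp(-x^2),  X_0 = 1,  t = 0.5.
rng = np.random.default_rng(1)
n_paths, dt, t = 1_000_000, 1e-3, 0.5

F    = lambda x: np.exp(-x**2)
F_x  = lambda x: -2 * x * np.exp(-x**2)
F_xx = lambda x: (4 * x**2 - 2) * np.exp(-x**2)

# Euler-Maruyama up to time t.
X = np.full(n_paths, 1.0)
for _ in range(int(t / dt)):
    X += -X * dt + np.sqrt(dt) * rng.standard_normal(n_paths)

# Right-hand side: E[f F_x + (1/2) g^2 F_xx] evaluated at time t.
rhs = np.mean(-X * F_x(X) + 0.5 * F_xx(X))

# Left-hand side: forward difference of E[F(X_t)] over one more small step.
X_next = X + (-X * dt + np.sqrt(dt) * rng.standard_normal(n_paths))
lhs = (F(X_next).mean() - F(X).mean()) / dt

print(f"d/dt E[F(X_t)] ~ {lhs:.3f},  E[f F_x + g^2 F_xx / 2] ~ {rhs:.3f}")
```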
Part 2. The emergence of $p (t,x)$
Meanwhile, the expected value of $F(X)$ can be written as $\int_{\mathbb{R}} p(t,x) F(x) dx$ in terms of the probability density function $p (t,x)$ of $X_{t}$ at time $t$, so $$ \begin{align*} {{ d } \over { dt } } E (F) =& E \left[ f F_{x} + {{ 1 } \over { 2 }} g^{2} F_{xx} \right] \\ \implies {{ d } \over { dt }} \int_{-\infty}^{\infty} p(t,x) F(x) dx =& \int_{-\infty}^{\infty} p(t,x)\left[ {\color{red} f F_{x}} + {\color{blue} {{ 1 } \over { 2 }} g^{2} F_{xx} } \right] dx \end{align*} $$ Now let us compute the two terms on the right-hand side one by one via integration by parts.
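The identification $E \left[ F \left( X_{t} \right) \right] = \int_{\mathbb{R}} p(t,x) F(x) dx$ can itself be spot-checked numerically with a histogram estimate of $p(t, \cdot)$. In the sketch below, $X_{t} = W_{t}$ at $t = 1$ and $F(x) = \cos x$ are arbitrary illustrative choices, not taken from the post.

```python
import numpy as np

# Sketch: E[F(X_t)] versus ∫ p(t, x) F(x) dx with p estimated by a histogram.
# Arbitrary choice: X_t = W_t at t = 1 (so X ~ N(0, 1)) and F = cos.
rng = np.random.default_rng(2)
X = rng.standard_normal(1_000_000)
F = np.cos

hist, edges = np.histogram(X, bins=200, density=True)   # histogram estimate of p(1, x)
centers = 0.5 * (edges[1:] + edges[:-1])

lhs = F(X).mean()                                        # E[F(X_1)]
rhs = np.sum(hist * F(centers) * np.diff(edges))         # ∫ p(1, x) F(x) dx
print(lhs, rhs)   # both should be close to exp(-1/2) ≈ 0.6065
```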
Part 3. Integration by Parts
Since $p(t,x)$ is a probability density function, $p \left( t,\pm \infty \right) = 0$, and since $F \in C_{0}^{\infty} \left( \mathbb{R} \right)$, we also have $F \left( \pm \infty \right) = 0$; hence every boundary term below vanishes. The first term, ${\color{red} \int_{-\infty}^{\infty} p(t,x) f F_{x} dx}$, becomes $$ \begin{align*} & \int_{-\infty}^{\infty} p(t,x) f F_{x} dx \\ =& \left[ p (t,x) f \cdot F(x) \right]_{-\infty}^{\infty} - \int_{-\infty}^{\infty} {{ \partial \left[ p (t,x) f \right] } \over { \partial x }} F(x) dx \\ =& 0 - 0 - \int_{-\infty}^{\infty} {{ \partial \left[ p (t,x) f \right] } \over { \partial x }} F (x) dx \end{align*} $$ The second term, ${\color{blue} \int_{-\infty}^{\infty} p(t,x) {{ 1 } \over { 2 }} g^{2} F_{xx} dx}$, is obtained by integrating by parts twice, $$ \begin{align*} & \int_{-\infty}^{\infty} p(t,x) {{ 1 } \over { 2 }} g^{2} F_{xx} dx \\ =& \left[ p (t,x) {{ 1 } \over { 2 }} g^{2} \cdot F_{x} (x) \right]_{-\infty}^{\infty} - {{ 1 } \over { 2 }} \int_{-\infty}^{\infty} {{ \partial \left[ p (t,x) g^{2} \right] } \over { \partial x }} F_{x} (x) dx \\ =& 0 - 0 - {{ 1 } \over { 2 }} \int_{-\infty}^{\infty} {{ \partial \left[ p (t,x) g^{2} \right] } \over { \partial x }} F_{x} (x) dx \\ =& - {{ 1 } \over { 2 }} \left[ {{ \partial \left[ p (t,x) g^{2} \right] } \over { \partial x }} F (x) \right]_{-\infty}^{\infty} + {{ 1 } \over { 2 }} \int_{-\infty}^{\infty} {{ \partial^{2} \left[ p (t,x) g^{2} \right] } \over { \partial x^{2} }} F (x) dx \\ =& - 0 + 0 + {{ 1 } \over { 2 }} \int_{-\infty}^{\infty} {{ \partial^{2} \left[ p (t,x) g^{2} \right] } \over { \partial x^{2} }} F (x) dx \end{align*} $$
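These two integration-by-parts identities can also be spot-checked symbolically for concrete rapidly decaying stand-ins for $p$, $f$, $g$, $F$. The functions below are arbitrary choices (not from the derivation) and only serve to confirm that the boundary terms really drop out.

```python
import sympy as sp

# Symbolic spot-check of the two integration-by-parts identities above, with
# arbitrary rapidly decaying stand-ins:
#   p(t, x) -> exp(-x^2),  f -> x,  g -> 1 + x,  F(x) -> exp(-x^2).
x = sp.symbols('x', real=True)
p, f, g, F = sp.exp(-x**2), x, 1 + x, sp.exp(-x**2)

# First term:  ∫ p f F_x dx  =  -∫ ∂_x[p f] F dx
first_lhs = sp.integrate(p * f * sp.diff(F, x), (x, -sp.oo, sp.oo))
first_rhs = -sp.integrate(sp.diff(p * f, x) * F, (x, -sp.oo, sp.oo))

# Second term:  ∫ p (1/2) g^2 F_xx dx  =  (1/2) ∫ ∂_xx[p g^2] F dx
second_lhs = sp.integrate(p * g**2 / 2 * sp.diff(F, x, 2), (x, -sp.oo, sp.oo))
second_rhs = sp.integrate(sp.diff(p * g**2, x, 2) * F, (x, -sp.oo, sp.oo)) / 2

print(sp.simplify(first_lhs - first_rhs))    # 0
print(sp.simplify(second_lhs - second_rhs))  # 0
```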
Part 4.
Since $F$ does not depend on $t$, the time derivative may be taken inside the integral; combining this with Parts 2 and 3 gives $$ \begin{align*} & \int_{-\infty}^{\infty} {{ \partial p(t,x) } \over { \partial t }} F(x) dx \\ =& {{ d } \over { dt }} \int_{-\infty}^{\infty} p(t,x) F(x) dx \\ =& \int_{-\infty}^{\infty} p(t,x)\left[ {\color{red} f F_{x}} + {\color{blue} {{ 1 } \over { 2 }} g^{2} F_{xx} } \right] dx \\ =& - \int_{-\infty}^{\infty} {{ \partial \left[ p (t,x) f \right] } \over { \partial x }} F (x) dx + {{ 1 } \over { 2 }} \int_{-\infty}^{\infty} {{ \partial^{2} \left[ p (t,x) g^{2} \right] } \over { \partial x^{2} }} F (x) dx \end{align*} $$ Equating the first and the last expressions in this chain and moving everything to one side, we obtain the following. $$ \int_{-\infty}^{\infty} \left[ {{ \partial p(t,x) } \over { \partial t }} - \left( - {{ \partial \left[ p (t,x) f \right] } \over { \partial x }} + {{ 1 } \over { 2 }} {{ \partial^{2} \left[ p (t,x) g^{2} \right] } \over { \partial x^{2} }} \right) \right] F(x) dx = 0 $$ Since the identity above holds for every $F \in C_{0}^{\infty} \left( \mathbb{R} \right)$, the expression inside the brackets must be identically $0$, which yields the desired partial differential equation. $$ {{ \partial p(t,x) } \over { \partial t }} = - {{ \partial \left[ p(t,x) f(t,x) \right] } \over { \partial x }} + {{ 1 } \over { 2 }} {{ \partial^{2} \left[ p(t,x) \left( g(t,x) \right)^{2} \right] } \over { \partial x^{2} }} $$
■
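As a sanity check of the result (an illustration that is not part of the original derivation): for the Ornstein-Uhlenbeck process $d X_{t} = - \theta X_{t} dt + \sigma d W_{t}$ with $\theta, \sigma > 0$, the equation becomes $$ {{ \partial p } \over { \partial t }} = \theta {{ \partial \left[ x \, p \right] } \over { \partial x }} + {{ \sigma^{2} } \over { 2 }} {{ \partial^{2} p } \over { \partial x^{2} }} $$ Setting $\partial p / \partial t = 0$, the right-hand side becomes ${{ \partial } \over { \partial x }} \left[ \theta x p + {{ \sigma^{2} } \over { 2 }} {{ \partial p } \over { \partial x }} \right] = 0$, so the bracketed flux is a constant; since it vanishes as $x \to \pm \infty$, we get $\theta x p + {{ \sigma^{2} } \over { 2 }} {{ \partial p } \over { \partial x }} = 0$, whose normalized solution is the Gaussian density of $N \left( 0, \sigma^{2} / 2 \theta \right)$, $$ p_{\infty} (x) = \sqrt{ {{ \theta } \over { \pi \sigma^{2} }} } \exp \left( - {{ \theta x^{2} } \over { \sigma^{2} }} \right) $$ which agrees with the well-known long-run distribution of the Ornstein-Uhlenbeck process.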
See Also
- It can be regarded as the stochastic differential equation version of the Kolmogorov differential equations.