
Stochastic Differential Equations with White Noise 📂Stochastic Differential Equations


Motivation

$$ \xi (t) \overset{?}{:=} \dot{W}(t) = \frac{d W (t)}{dt} $$

Imagine $\xi$ as the derivative of a Wiener process, as above. Thinking of Brownian motion, $\xi (t)$ would be noise representing the random fluctuation at time $t$. Although this seems intuitive and not at all awkward, unfortunately $\dot{W}(t)$ does not exist in the usual sense.

Non-differentiability of the Wiener Process 1

$$ Y_{h} := \frac{W (t + h) - W(t)}{h} $$

Consider the stochastic process of average rates of change $\left\{ Y_{h} \right\}$ of the Wiener process $\left\{ W_{t} \right\}_{t \ge 0}$, as above. Clearly $Y_{h}$ follows a normal distribution, and its mean and variance are calculated as follows.

$$ \begin{align*} E \left( Y_{h} \right) & = \frac{1}{h} E \left( W (t+h) - W (t) \right) = 0 \\ \operatorname{Var} \left( Y_{h} \right) & = \frac{1}{h^{2}} \operatorname{Var} \left( W (t+h) - W (t) \right) = \frac{1}{h^{2}} \cdot h = \frac{1}{h} \end{align*} $$

Therefore $Y_{h}$ can be written as $Y_{h} = \frac{1}{\sqrt{h}} Z$ for a random variable $Z$ following the standard normal distribution $N(0,1)$. Now consider the probability

$$ P \left( \left| Y_{h} \right| > k \right) = P \left( \left| \frac{Z}{\sqrt{h}} \right| > k \right) $$

for an arbitrary constant $k > 0$. No matter what $k$ is, $\left| \frac{Z}{\sqrt{h}} \right|$ diverges to infinity as $h \to 0$, so

$$ \lim_{h \to 0} P \left( \left| \frac{Z}{\sqrt{h}} \right| > k \right) = 1 $$

holds. In other words, what we hoped was a process of average rates of change $\left\{ Y_{h} \right\}$ in fact diverges in probability, and hence $W(t)$ is nowhere differentiable.
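The blow-up $\operatorname{Var} \left( Y_{h} \right) = 1/h$ can also be checked numerically. The following is a minimal sketch, assuming NumPy is available; the sample size and the values of $h$ are arbitrary choices. It samples Wiener increments $W(t+h) - W(t) \sim N(0, h)$ directly and estimates the variance of $Y_{h}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of sampled increments per h (arbitrary choice)

for h in (1.0, 0.1, 0.01):
    # W(t+h) - W(t) ~ N(0, h), so the average rate of change is Y_h = increment / h
    increments = rng.normal(0.0, np.sqrt(h), size=n)
    y_h = increments / h
    print(h, y_h.var())  # empirical variance, close to 1/h
```

As $h$ shrinks by a factor of 10, the empirical variance grows by a factor of 10, matching $\operatorname{Var} \left( Y_{h} \right) = 1/h$.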

Buildup

Since defining white noise through the Wiener process and ordinary derivatives runs into a problem, we circumvent it and formulate the definition differently. First, let $\left\{ X_{t} \right\}_{t \ge 0}$ be a Gaussian stochastic process with finite variance; in other words, for all $t \ge 0$,

$$ E \left( X_{t}^{2} \right) < \infty $$

If for all $t_{1}, t_{2} \ge 0$

$$ E \left( X_{t_{1}} \right) = E \left( X_{t_{2}} \right) $$

and the covariance satisfies, for some function $h : \mathbb{R} \to \mathbb{R}$,

$$ \operatorname{Cov} \left( X_{t_{1}} , X_{t_{2}} \right) = E \left( X_{t_{1}} \cdot X_{t_{2}} \right) = h \left( t_{2} - t_{1} \right) $$

then the Gaussian stochastic process $\left\{ X_{t} \right\}_{t \ge 0}$ is called stationary in the wide sense. This is a condition suitably relaxed from stationarity in time-series analysis: the mean is constant, and how the covariance varies with the time lag is described entirely by the function $h$.

Dirac measure: Let $\delta_{x_0}$ denote the Dirac measure defined as follows. $$ \delta_{x_0} (E) := \begin{cases} 1 & x_0 \in E \\ 0 & x_0 \notin E \end{cases} $$

In particular, if $X_{0} = 0$ and $h$ is the Dirac measure with $x_{0} = 0$, then intuitively $\left\{ X_{t} \right\}_{t \ge 0}$ becomes noise satisfying all of the following. $$ \begin{align*} X_{t} & \sim N \left( 0 , 1 \right) & \left( \because E X_{0} = 0 \land h(0) = 1 \implies E X_{t} = 0 \land E X_{t}^{2} = 1 \right) \\ X_{t_{1}} & \perp X_{t_{2}} & \left( \because h \left( t_{2} - t_{1} \right) = 0 \text{ for } t_{1} \ne t_{2} \implies \operatorname{Cov} \left( X_{t_{1}}, X_{t_{2}} \right) = 0 \right) \end{align*} $$

This means that the noise does not drift (the mean is $0$), follows a normal distribution with constant variance, and the noise at any one time is independent of the noise at any other time. Comparing with the definition of the Wiener process, properties (i) and (ii) are satisfied, while (iii) and (iv) are only meaningful for intervals rather than a single point in time, so they are not relevant when discussing white noise.
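In discrete time, these two properties simply describe an i.i.d. $N(0,1)$ sequence, which can be illustrated numerically. A minimal sketch, assuming NumPy; the sample size is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(1)
xi = rng.standard_normal(200_000)  # i.i.d. N(0,1) samples: discrete white noise

mean = xi.mean()                        # should be near 0 (no drift)
var = xi.var()                          # should be near 1 (constant variance)
lag1_cov = np.mean(xi[:-1] * xi[1:])    # sample covariance at lag 1, near 0

print(mean, var, lag1_cov)
```

The empirical mean and lag-1 covariance are both close to $0$, and the variance is close to $1$, matching the properties above.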

Wiener Process: For $s < t < t+u$, a stochastic process $\left\{ W_{t} \right\}$ satisfying the following conditions is called a Wiener process.

  • (i): $W_{0} = 0$
  • (ii): $\left( W_{t+u} - W_{t} \right) \perp W_{s}$
  • (iii): $\left( W_{t+u} - W_{t} \right) \sim N ( 0, u )$
  • (iv): The sample paths of $W_{t}$ are almost everywhere continuous.
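A process satisfying (i)–(iii) can be simulated by accumulating independent $N(0, \Delta t)$ increments. The sketch below is a standard construction, not taken from the text; NumPy is assumed, and the step size and path count are arbitrary choices. It also checks property (iii) empirically for the increment over $[0.5, 1]$.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.005          # time step (arbitrary choice)
n_steps = 200       # paths on [0, 1]
n_paths = 20_000

# Each increment is N(0, dt); cumulative sums give Wiener paths
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
W = np.hstack([np.zeros((n_paths, 1)), W])  # (i): W_0 = 0

# (iii): W(1) - W(0.5) should be N(0, 0.5); column k corresponds to time k * dt
incr = W[:, 200] - W[:, 100]
print(incr.mean(), incr.var())  # near 0 and near 0.5
```

The empirical mean and variance of the increment match $N(0, u)$ with $u = 0.5$, as (iii) requires.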

Accordingly, we introduce the following definition, not as an actual derivative of the Wiener process, but as something sufficient to serve as noise at each point in time. Indeed, once this definition is made, expressions such as the following are used naturally in formulas. $$ \begin{align*} \xi (t) & = \frac{d W (t)}{dt} \\ d W (t) & = \text{noise} \cdot dt \end{align*} $$
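This is exactly how $d W (t) = \text{noise} \cdot dt$ enters numerical schemes: over a step $dt$, the increment $dW$ is drawn as $\sqrt{dt}\, Z$ with $Z \sim N(0,1)$. Below is an Euler–Maruyama sketch for a toy Ornstein–Uhlenbeck equation $dX = -\theta X \, dt + \sigma \, dW$; the equation and parameter values are illustrative choices, not from the text, and NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, sigma = 1.0, 0.5     # illustrative OU parameters (not from the text)
dt, n_steps = 0.01, 1_000   # time step and horizon T = 10
n_paths = 20_000

x = np.ones(n_paths)        # X_0 = 1 for every path
for _ in range(n_steps):
    # dW(t) = noise * dt is realized as sqrt(dt) * Z, i.e. in distribution N(0, dt)
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    x = x + (-theta * x) * dt + sigma * dW

# For large T the OU process is close to its stationary law N(0, sigma^2 / (2 theta))
print(x.mean(), x.var())
```

With $\theta = 1$ and $\sigma = 0.5$, the stationary variance $\sigma^{2} / (2 \theta) = 0.125$ is recovered empirically, which is one sanity check that the $\sqrt{dt}\, Z$ realization of $dW$ is the right scaling.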

Extending the definition of the Wiener process to distributions, the distributional derivative of the Wiener process satisfies the definition of white noise. In other words, white noise is a weak derivative of the Wiener process.

Definition 2

A Gaussian stochastic process $\left\{ \xi (t) \right\}$ that is stationary in the wide sense is called White Noise if it satisfies the following two conditions:

  • (i): $E \left( \xi (0) \right) = 0$
  • (ii): $\operatorname{Cov} \left( \xi \left( t_{1} \right), \xi \left( t_{2} \right) \right) = \delta_{0} \left( t_{2} - t_{1} \right)$

See Also


  1. Panik. (2017). Stochastic Differential Equations: An Introduction with Applications in Population Dynamics Modeling: p110. ↩︎

  2. Panik. (2017). Stochastic Differential Equations: An Introduction with Applications in Population Dynamics Modeling: p109. ↩︎