

Wiener Process

Definition

When $s < t < t+u$, a stochastic process $\left\{ W_{t} \right\}$ that satisfies the following conditions is called a Wiener Process (a minimal simulation sketch follows the list):

  • (i): $W_{0} = 0$
  • (ii): $\left( W_{t+u} - W_{t} \right) \perp W_{s}$
  • (iii): $\left( W_{t+u} - W_{t} \right) \sim N(0, u)$
  • (iv): The sample paths of $W_{t}$ are almost surely continuous.
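
Since these conditions only involve independent Gaussian increments, a sample path can be approximated on a time grid by cumulatively summing $N(0, \Delta t)$ draws. Here is a minimal sketch in Python, assuming NumPy; the function name `wiener_path`, the grid size, and the seed are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(seed=0)  # arbitrary seed

def wiener_path(T=1.0, n=1_000):
    """Approximate one Wiener process sample path on [0, T] with n steps.

    (i)   the path starts at W_0 = 0,
    (ii)  each increment is drawn independently of the past,
    (iii) each increment over a step of length dt is N(0, dt).
    """
    dt = T / n
    increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n)
    return np.concatenate(([0.0], np.cumsum(increments)))

path = wiener_path()
print(path[:5])  # first few values of one simulated path
```

Condition (iv) shows up in the fact that consecutive grid values differ only by small Gaussian steps, so the piecewise path looks continuous as the grid is refined.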

Basic Properties

  • [1]: $W_{t} \sim N(0, t)$
  • [2]: $E(W_{t}) = 0$
  • [3]: $\operatorname{Var}(W_{t}) = t$
  • [4]: $\text{cov}(W_{t}, W_{s}) = E(W_{t} W_{s}) = \frac{1}{2} \left( |t| + |s| - |t-s| \right) = \min\{t, s\}$ (see the numerical check after this list)
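
These properties are easy to sanity-check by Monte Carlo. The following sketch, assuming NumPy, builds $W_{s}$ and $W_{t}$ from independent increments; the time points $t = 0.7$, $s = 0.3$, the seed, and the sample count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(seed=0)  # arbitrary seed

t, s = 0.7, 0.3        # arbitrary time points with s < t
n_paths = 200_000      # number of simulated paths

# Build W_s and then W_t = W_s + (W_t - W_s) from independent increments.
W_s = rng.normal(0.0, np.sqrt(s), size=n_paths)            # W_s ~ N(0, s)
W_t = W_s + rng.normal(0.0, np.sqrt(t - s), size=n_paths)  # increment ~ N(0, t - s)

print(W_t.mean())          # ~ 0               (property [2])
print(W_t.var())           # ~ t = 0.7         (property [3])
print((W_t * W_s).mean())  # ~ min(t, s) = 0.3 (property [4])
```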

Description

The Wiener Process is also called Brownian Motion.

  • (ii): Having $\left( W_{t+u} - W_{t} \right) \perp W_{s}$ means that an increment over a later interval is independent of the process's past values; in other words, the Wiener Process has independent increments.

  • (iii): The increments follow a normal distribution $N(0, u)$, signifying that the Wiener Process does not care about the specific points in time, only about the gap between them, and that the uncertainty increases as the gap between the two time points grows.

  • (iv): The fact that sample paths are almost surely continuous means that if there is a point following the Wiener process, the chance of it 'teleporting' is $0$. If that is too hard to grasp, it is enough to know that the path does not make sudden leaps.

  • [1]: An interesting fact is that the probability density function of $W_{t}$, $$ f_{W_{t}}(x, t) = \frac{1}{\sqrt{2 \pi t}} e^{-\frac{x^{2}}{2t}}, $$ is a solution to the heat equation $$ \frac{\partial u}{\partial t} = \frac{1}{2} \frac{\partial^{2} u}{\partial x^{2}}. $$ (A symbolic check follows this list.)

  • [4]: It’s not common to see the covariance expressed as the minimum of something. It’s highly recommended to follow the proof process and understand how it was derived.
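
The heat-equation claim in [1] can be verified symbolically. A minimal sketch assuming SymPy; it differentiates the density and confirms the two sides agree:

```python
import sympy as sp

x = sp.symbols("x", real=True)
t = sp.symbols("t", positive=True)

# Density of W_t ~ N(0, t)
f = sp.exp(-x**2 / (2 * t)) / sp.sqrt(2 * sp.pi * t)

lhs = sp.diff(f, t)                         # left side:  df/dt
rhs = sp.Rational(1, 2) * sp.diff(f, x, 2)  # right side: (1/2) d^2f/dx^2

print(sp.simplify(lhs - rhs))  # prints 0, so f solves the heat equation
```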

Proof

[1]

By (i) and (iii) (taking $t = 0$ and $u = t$ in (iii)), $W_{t} = W_{t} - 0 = W_{t} - W_{0} \sim N(0, t)$

[2]

Since $W_{t}$ follows the normal distribution $N(0, t)$ by [1], $E(W_{t}) = 0$

[3]

Since $W_{t}$ follows the normal distribution $N(0, t)$ by [1], $\operatorname{Var}(W_{t}) = t$

[4]

Let $t > s$. Then, by the definition of covariance and [2], $$ \text{cov}(W_{t}, W_{s}) = E\left( \left[ W_{t} - E(W_{t}) \right] \left[ W_{s} - E(W_{s}) \right] \right) = E\left( W_{t} W_{s} \right) $$

Since $W_{t} = (W_{t} - W_{s}) + W_{s}$,

$$ \begin{align*} E\left( W_{t} W_{s} \right) &= E\left[ \left( (W_{t} - W_{s}) + W_{s} \right) \cdot W_{s} \right] \\ &= E\left[ (W_{t} - W_{s}) \cdot W_{s} \right] + E\left( W_{s}^{2} \right) \end{align*} $$

For the first term, by (ii) and [2],

$$ E\left[ (W_{t} - W_{s}) \cdot W_{s} \right] = E(W_{t} - W_{s}) \cdot E(W_{s}) = 0 $$

For the second term, by [2] and [3],

$$ E\left( W_{s}^{2} \right) - 0^{2} = E\left( W_{s}^{2} \right) - \left[ E(W_{s}) \right]^{2} = \operatorname{Var}(W_{s}) = s $$

Summarizing, $\text{cov}(W_{t}, W_{s}) = s$. Similarly, when $s > t$ the same argument gives $\text{cov}(W_{t}, W_{s}) = t$, so in either case

$$ \text{cov}(W_{t}, W_{s}) = \min\{t, s\} $$
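
Each step of this proof can also be estimated by simulation. A sketch assuming NumPy, with arbitrary time points, seed, and sample count; it estimates the two terms of the decomposition separately and then the covariance itself:

```python
import numpy as np

rng = np.random.default_rng(seed=1)  # arbitrary seed

t, s = 0.7, 0.3   # arbitrary time points with t > s
n_paths = 500_000

W_s = rng.normal(0.0, np.sqrt(s), size=n_paths)
incr = rng.normal(0.0, np.sqrt(t - s), size=n_paths)  # W_t - W_s, independent of W_s

print((incr * W_s).mean())          # first term:   ~ 0
print((W_s**2).mean())              # second term:  ~ s = 0.3
print(((W_s + incr) * W_s).mean())  # cov(W_t, W_s) ~ min(t, s) = 0.3
```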