
Definition of Poisson Processes through Differential Operator Matrices 📂Probability Theory


Definition

Let us assume that $\lambda > 0$ is given. A stochastic process $\left\{ X(t) : t \in [0,\infty) \right\}$ that satisfies $X(0) = 0$ and has the following infinitesimal transition probabilities is called a **Poisson process**:
$$
\begin{align*}
p_{ij} \left( \Delta t \right) :=& P \left( X \left( t + \Delta t \right) = j \mid X(t) = i \right)
\\ =& \begin{cases}
\lambda \Delta t + o \left( \Delta t \right) & , \text{if } j = i+1
\\ 1 - \lambda \Delta t + o \left( \Delta t \right) & , \text{if } j = i
\\ o \left( \Delta t \right) & , \text{if } j > i + 1
\\ 0 & , \text{if } j < i
\end{cases}
\end{align*}
$$
These probabilities depend only on the time increment $\Delta t$.


  • $o \left( \Delta t \right)$ denotes a function that vanishes faster than $\Delta t$ itself as $\Delta t \to 0$: $$\lim_{\Delta t \to 0} {{ o \left( \Delta t \right) } \over { \Delta t }} = 0$$
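The definition above can be checked by direct simulation: discretize time into steps of length $\Delta t$ and let the count increase by $1$ in each step with probability $\lambda \Delta t$, dropping the $o(\Delta t)$ terms. A minimal sketch (function and variable names are illustrative, not from the source); for a Poisson process we expect $E[X(t)] = \lambda t$:

```python
import random

def simulate_poisson(lam, t_end, dt=1e-3, rng=random):
    """Simulate X(t_end) using the infinitesimal probabilities:
    in each step of length dt, the count increases by 1 with
    probability lam * dt (the o(dt) terms are neglected)."""
    x = 0  # X(0) = 0
    for _ in range(int(t_end / dt)):
        if rng.random() < lam * dt:
            x += 1
    return x

random.seed(0)
lam, t_end = 2.0, 5.0
samples = [simulate_poisson(lam, t_end) for _ in range(1000)]
mean = sum(samples) / len(samples)
# The empirical mean should be close to E[X(t)] = lam * t_end = 10.
print(mean)
```

Shrinking `dt` makes the discrete Bernoulli scheme a better approximation of the continuous-time process, at the cost of more steps.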

Explanation

Compared with the definition of the Poisson process through the exponential distribution, this definition does not explicitly display the exponential distribution that the interarrival times follow, but it makes it immediately apparent that the process is a continuous-time Markov chain.

Consider the transition probability matrix $P \left( \Delta t \right)$ and the infinitesimal generator matrix $Q = P'(0)$:
$$
P \left( \Delta t \right) = \begin{bmatrix} 1 - \lambda \Delta t & 0 & 0 & \cdots \\ \lambda \Delta t & 1 - \lambda \Delta t & 0 & \cdots \\ 0 & \lambda \Delta t & 1 - \lambda \Delta t & \cdots \\ 0 & 0 & \lambda \Delta t & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} + o \left( \Delta t \right)
$$
and
$$
Q = \lim_{\Delta t \to 0} {{ P \left( \Delta t \right) - I } \over { \Delta t }} = \begin{bmatrix} - \lambda & 0 & 0 & \cdots \\ \lambda & - \lambda & 0 & \cdots \\ 0 & \lambda & - \lambda & \cdots \\ 0 & 0 & \lambda & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}
$$
According to the Kolmogorov differential equations,
$$
{{ d P(t) } \over { dt }} = Q P(t)
$$
the probability $p_{k} (t)$ that the state is $k$ at time $t$ satisfies
$$
{{ d p_{k}(t) } \over { dt }} = -\lambda p_{k}(t) + \lambda p_{k-1}(t)
$$
and solving this system gives:
$$
\begin{align*}
p_{0} (t) =& e^{-\lambda t}
\\ p_{1} (t) =& \lambda t e^{-\lambda t}
\\ p_{2} (t) =& {{ \left( \lambda t \right)^{2} e^{-\lambda t} } \over { 2! }}
\\ \vdots &
\\ p_{k} (t) =& {{ \left( \lambda t \right)^{k} e^{-\lambda t} } \over { k! }}
\\ \vdots &
\end{align*}
$$
From this enumeration we can verify that $X(t)$ follows the Poisson distribution $\text{Poi} \left( \lambda t \right)$, and, since $P \left( \tau > t \right) = p_{0}(t) = e^{-\lambda t}$, that the waiting time (or arrival time) $\tau$ until the first event follows the exponential distribution $\exp \left( \lambda \right)$.
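The forward equations $d p_{k}/dt = -\lambda p_{k} + \lambda p_{k-1}$ can also be checked numerically: Euler-integrating them from the initial condition $p_{0}(0) = 1$ should reproduce the Poisson probabilities $p_{k}(t) = (\lambda t)^{k} e^{-\lambda t} / k!$. A sketch with illustrative names and a hypothetical state-space cutoff `kmax`:

```python
import math

def kolmogorov_forward(lam, t_end, kmax=30, dt=1e-3):
    """Euler-integrate dp_k/dt = -lam*p_k + lam*p_{k-1}
    on states k = 0, ..., kmax, starting from p_0(0) = 1."""
    p = [1.0] + [0.0] * kmax  # X(0) = 0, so all mass starts at k = 0
    for _ in range(int(t_end / dt)):
        new = p[:]
        for k in range(kmax + 1):
            inflow = lam * p[k - 1] if k > 0 else 0.0
            new[k] = p[k] + dt * (-lam * p[k] + inflow)
        p = new
    return p

lam, t = 1.5, 2.0
p = kolmogorov_forward(lam, t)
# Closed-form solution: p_k(t) = (lam*t)^k e^{-lam*t} / k!
poisson = [(lam * t) ** k * math.exp(-lam * t) / math.factorial(k)
           for k in range(len(p))]
err = max(abs(a - b) for a, b in zip(p, poisson))
print(err)  # small discretization error from the Euler scheme
```

The cutoff `kmax` is harmless here because the Poisson tail beyond $k = 30$ is negligible for $\lambda t = 3$; the maximum pointwise error shrinks roughly linearly in `dt`, as expected for forward Euler.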

See Also