
Definition of Poisson Processes through Differential Operator Matrices 📂Probability Theory

Definition

Assume that $\lambda > 0$ is given. A stochastic process $\left\{ X(t) : t \in [0,\infty) \right\}$ satisfying $X(0) = 0$ whose infinitesimal transition probabilities are given as follows is called a Poisson Process. $$ \begin{align*} p_{ij} \left( \Delta t \right) := & P \left( X \left( t + \Delta t \right) = j \mid X(t) = i \right) \\ =& \begin{cases} \lambda \Delta t + o \left( \Delta t \right) & , \text{if } j = i+1 \\ 1 - \lambda \Delta t + o \left( \Delta t \right) & , \text{if } j = i \\ o \left( \Delta t \right) & , \text{if } j > i + 1 \\ 0 & , \text{if } j < i \end{cases} \end{align*} $$ This probability depends only on the elapsed time $\Delta t$, not on $t$ itself.


  • $o \left( \Delta t \right)$ denotes a function that vanishes faster than $\Delta t$ as $\Delta t \to 0$: $$ \lim_{\Delta t \to 0} {{ o \left( \Delta t \right) } \over { \Delta t }} = 0 $$
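The infinitesimal definition above suggests a direct simulation: chop $[0, t]$ into short intervals of length $\Delta t$ and let the count increase by $1$ with probability $\lambda \Delta t$ in each interval, ignoring the $o(\Delta t)$ terms. A minimal sketch (the function name `simulate_poisson` and all parameter values are illustrative choices, not from the original):

```python
import random

# Discretized simulation of the infinitesimal definition: in each short
# interval of length dt, the count increases by 1 with probability
# approximately lambda * dt (the o(dt) terms are ignored).
def simulate_poisson(lam, t, dt=1e-2, rng=random.Random(0)):
    x = 0
    for _ in range(int(t / dt)):
        if rng.random() < lam * dt:
            x += 1
    return x

lam, t = 2.0, 5.0
samples = [simulate_poisson(lam, t) for _ in range(1000)]
mean = sum(samples) / len(samples)
# The sample mean should be close to lambda * t = 10.
```

Since each interval contributes an (approximately) independent Bernoulli$(\lambda \Delta t)$ increment, the total count converges in distribution to $\text{Poi}(\lambda t)$ as $\Delta t \to 0$, which is why the sample mean lands near $\lambda t$.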

Explanation

Compared with the definition of the Poisson process through the exponential distribution, this definition does not directly display the exponential distribution that the arrival times follow, but it makes it immediately apparent that the process is a continuous-time Markov chain.

Consider the transition probability matrix $P \left( \Delta t \right)$ and the infinitesimal generator matrix $Q = P’(0)$: $$ P \left( \Delta t \right) = \begin{bmatrix} 1 - \lambda \Delta t & 0 & 0 & \cdots \\ \lambda \Delta t & 1 - \lambda \Delta t & 0 & \cdots \\ 0 & \lambda \Delta t & 1 - \lambda \Delta t & \cdots \\ 0 & 0 & \lambda \Delta t & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} + o \left( \Delta t \right) $$ and $$ Q = \lim_{\Delta t \to 0} {{ P \left( \Delta t \right) - I } \over { \Delta t }} = \begin{bmatrix} - \lambda & 0 & 0 & \cdots \\ \lambda & - \lambda & 0 & \cdots \\ 0 & \lambda & - \lambda & \cdots \\ 0 & 0 & \lambda & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} $$ By the Kolmogorov differential equations, $$ {{ d P(t) } \over { dt }} = Q P(t) $$ the probability $p_{k} (t)$ that the state is $k$ at time $t$ satisfies $$ {{ d p_{k}(t) } \over { dt }} = -\lambda p_{k}(t) + \lambda p_{k-1}(t) $$ with $p_{-1}(t) := 0$. Solving these equations successively, starting from $p_{0}(0) = 1$, gives: $$ \begin{align*} p_{0} (t) =& e^{-\lambda t} \\ p_{1} (t) =& \lambda t e^{-\lambda t} \\ p_{2} (t) =& \left( \lambda t \right)^{2} {{ e^{-\lambda t} } \over { 2! }} \\ \vdots & \\ p_{k} (t) =& \left( \lambda t \right)^{k} {{ e^{-\lambda t} } \over { k! }} \\ \vdots & \end{align*} $$ From this enumeration we can verify directly that $X(t)$ follows the Poisson distribution $\text{Poi} \left( \lambda t \right)$, and since $P \left( \tau > t \right) = p_{0}(t) = e^{-\lambda t}$, the waiting time $\tau$ until the first event follows the exponential distribution $\exp \left( \lambda \right)$.
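The forward equations $dp_{k}/dt = -\lambda p_{k} + \lambda p_{k-1}$ can be checked numerically: integrate them with a simple Euler scheme from $p_{0}(0) = 1$ and compare the result against the closed-form Poisson probabilities $\left( \lambda t \right)^{k} e^{-\lambda t} / k!$. A minimal sketch (the function name `solve_forward`, the state truncation `kmax`, and the step size are illustrative assumptions):

```python
import math

# Explicit Euler integration of the Kolmogorov forward equations
# dp_k/dt = -lam * p_k + lam * p_{k-1}, with p_{-1} := 0,
# starting from p_0(0) = 1 and truncating the state space at kmax.
def solve_forward(lam, t, kmax=30, dt=1e-3):
    p = [1.0] + [0.0] * kmax
    for _ in range(int(t / dt)):
        prev = p[:]
        for k in range(kmax + 1):
            inflow = prev[k - 1] if k > 0 else 0.0
            p[k] = prev[k] + dt * (-lam * prev[k] + lam * inflow)
    return p

lam, t = 2.0, 3.0
p = solve_forward(lam, t)
# Closed-form solution: the Poisson(lam * t) probability mass function.
poisson = [(lam * t) ** k * math.exp(-lam * t) / math.factorial(k)
           for k in range(5)]
# p[k] should agree with poisson[k] to a few decimal places.
```

The truncation at `kmax` is harmless here because the $\text{Poi}(\lambda t)$ mass beyond $k = 30$ is negligible for $\lambda t = 6$.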

See Also