
Convergence of Probabilities Defined by Measure Theory 📂Probability Theory


Probability Convergence Defined Rigorously

Given a probability space $( \Omega , \mathcal{F} , P )$.

A sequence of random variables $\left\{ X_{n} \right\}_{n \in \mathbb{N}}$ is said to **converge in probability** to a random variable $X$ if it converges in measure to $X$, denoted $X_{n} \overset{P}{\to} X$.


  • If you’re not yet familiar with measure theory, the term probability space can be disregarded.

Explanation

The convergence of $\left\{ X_{n} \right\}_{n \in \mathbb{N}}$ to $X$ means that for all $\varepsilon > 0$,
$$ \lim_{n \to \infty} P \left( \left\{ \omega \in \Omega : \left| X_{n}(\omega) - X(\omega) \right| \ge \varepsilon \right\} \right) = 0 $$
Equivalently, in a more familiar form:
$$ \lim_{n \to \infty} P \left( \left| X_{n} - X \right| < \varepsilon \right) = 1 $$
Since a sequence of random variables is a stochastic process, this notion of convergence is also useful in the theory of stochastic processes.
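The definition can be checked numerically on a toy example. The sketch below (a hypothetical illustration, not from the original text) takes $X_{n} = X + B_{n}$ with $B_{n} \sim \operatorname{Bernoulli}(1/n)$, so that $P(|X_{n} - X| \ge \varepsilon) = 1/n \to 0$ for any $0 < \varepsilon \le 1$, and estimates that tail probability by Monte Carlo:

```python
import random

def estimate_tail_prob(n, eps=0.5, trials=100_000, seed=0):
    """Monte Carlo estimate of P(|X_n - X| >= eps) for the toy sequence
    X_n = X + B_n with B_n ~ Bernoulli(1/n), so |X_n - X| = B_n."""
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        b_n = 1 if rng.random() < 1 / n else 0  # B_n ~ Bernoulli(1/n)
        if b_n >= eps:
            count += 1
    return count / trials

# As n grows, the estimate tracks 1/n -> 0,
# illustrating X_n -> X in probability.
for n in (10, 100, 1000):
    print(n, estimate_tail_prob(n))
```

The estimates shrink roughly like $1/n$, matching the limit in the definition.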

Properties of probability convergence from measure convergence:

Since a probability $P$ is a measure, convergence in probability inherits the properties of convergence in measure.
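For instance, two standard facts carry over directly from convergence in measure (stated here as a sketch, not proved in this post):

```latex
% Limits in probability are unique almost surely:
X_{n} \overset{P}{\to} X \ \text{and}\ X_{n} \overset{P}{\to} Y
\implies P \left( X \ne Y \right) = 0

% Every sequence converging in probability admits a subsequence
% converging almost surely:
X_{n} \overset{P}{\to} X
\implies \exists \left\{ n_{k} \right\} : X_{n_{k}} \to X \ \text{a.s.}
```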

See Also