
Probability Bounds in Mathematical Statistics 📂Mathematical Statistics


Definition 1

Let a sequence of random variables $\left\{ X_{n} \right\}$ be given. If for every $\varepsilon > 0$ there exist an $N_{\varepsilon} \in \mathbb{N}$ and a constant $B_{\varepsilon} > 0$ such that the following holds, then $\left\{ X_{n} \right\}$ is said to be Bounded in Probability.
$$ n \ge N_{\varepsilon} \implies P \left[ \left| X_{n} \right| \le B_{\varepsilon} \right] \ge 1 - \varepsilon $$

Explanation

Many probability distributions encountered in practice have unbounded support. Even for the standard normal distribution $N(0,1)$, a sample value of, say, $10^{10}$ is extremely unlikely, but it occurs with probability greater than $0$. The definition above captures the sense in which such a distribution is nevertheless bounded probabilistically, even though it is not bounded in the sense used in analysis. On the other hand, for a sequence of random variables such as $\left\{ X_{n} \sim N(0,n) \right\}_{n \in \mathbb{N}}$, no matter how $B_{\varepsilon}$ is chosen it cannot cope with $n \to \infty$, so the sequence is not bounded in probability. Such distributions may seem contrived, but they arise naturally in stochastic processes, most notably the Wiener process.
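The contrast between the two examples can be checked numerically. The following sketch (not from the source; it uses NumPy, and the sample sizes and $\varepsilon = 0.05$ are illustrative choices) shows that for $X_{n} \sim N(0,1)$ a single fixed bound captures at least $1 - \varepsilon$ of the mass for every $n$, while for $X_{n} \sim N(0,n)$ the bound needed grows with $n$, so no fixed $B_{\varepsilon}$ can work:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.05

# Case 1: X_n ~ N(0,1) for every n. The distribution does not change with n,
# so one fixed bound B = 1.96 already satisfies P[|X_n| <= B] >= 1 - eps.
samples = rng.normal(0.0, 1.0, size=100_000)
frac_within = np.mean(np.abs(samples) <= 1.96)

# Case 2: X_n ~ N(0, n), i.e. variance n. The empirical 95th percentile of
# |X_n| grows like sqrt(n), so the bound needed to keep 1 - eps of the mass
# diverges as n -> infinity: the sequence is not bounded in probability.
bounds = [np.quantile(np.abs(rng.normal(0.0, np.sqrt(n), size=100_000)), 1 - eps)
          for n in (1, 100, 10_000)]

print(frac_within)  # close to 0.95
print(bounds)       # roughly 1.96 * sqrt(n), so the bounds keep growing
```

The growing list of quantiles is exactly the failure mode in the definition: any candidate $B_{\varepsilon}$ is eventually exceeded.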

Just as a convergent sequence in analysis is necessarily bounded, the following theorem holds.

Theorem

If it converges in distribution, it is bounded in probability.

Proof

Rigorous Definition
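The proof body does not appear in this section; a sketch of the standard argument (along the lines of Hogg et al., with $F$ denoting the cdf of the limiting variable $X$) runs as follows:

```latex
% Sketch: convergence in distribution implies bounded in probability.
% Assume X_n -> X in distribution, where F is the cdf of X.
\begin{proof}[Sketch]
Fix $\varepsilon > 0$. Since $F$ has at most countably many discontinuities,
we may choose continuity points $-B, B$ of $F$ with
$F(-B) \le \varepsilon/4$ and $F(B) \ge 1 - \varepsilon/4$.
Convergence in distribution gives $F_{X_n}(\pm B) \to F(\pm B)$, so there is
an $N_{\varepsilon}$ such that for all $n \ge N_{\varepsilon}$,
\[
P \left[ \left| X_{n} \right| \le B \right]
\ge F_{X_n}(B) - F_{X_n}(-B)
\ge \left( F(B) - \tfrac{\varepsilon}{4} \right)
  - \left( F(-B) + \tfrac{\varepsilon}{4} \right)
\ge 1 - \varepsilon .
\]
Taking $B_{\varepsilon} = B$ completes the argument.
\end{proof}
```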


  1. Hogg et al. (2013). Introduction to Mathematical Statistics (7th Edition): p306. ↩︎