Probability Bounds in Mathematical Statistics
Definition 1
Suppose a sequence of random variables $\left\{ X_{n} \right\}$ is given. If for every $\varepsilon > 0$ there exist an $N_{\varepsilon} \in \mathbb{N}$ and a constant $B_{\varepsilon} > 0$ such that the following holds, then $\left\{ X_{n} \right\}$ is said to be Bounded in Probability. $$ n \ge N_{\varepsilon} \implies P \left[ \left| X_{n} \right| \le B_{\varepsilon} \right] \ge 1 - \varepsilon $$
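To make the definition concrete, here is a minimal numerical sketch (our own illustration, assuming NumPy and SciPy are available; the constant sequence $X_{n} \sim N(0,1)$ and the quantile choice of $B_{\varepsilon}$ are not from the source):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
eps = 0.05

# Since X_n ~ N(0, 1) for every n, one quantile works uniformly in n:
# B_eps = Phi^{-1}(1 - eps/2) gives P[|X_n| <= B_eps] = 1 - eps exactly,
# so the definition is satisfied with N_eps = 1.
B_eps = norm.ppf(1 - eps / 2)

samples = rng.standard_normal(1_000_000)      # Monte Carlo draws from N(0, 1)
coverage = np.mean(np.abs(samples) <= B_eps)
print(f"B_eps = {B_eps:.3f},  P[|X_n| <= B_eps] ~ {coverage:.4f}  (target: {1 - eps})")
```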
Explanation
If you think about it, many of the probability distributions we encounter in practice have unbounded support. Take the standard normal distribution $N(0,1)$: however unlikely, a sample as large as $10^{10}$ still occurs with probability greater than $0$. With the definition above, such a sequence can be called bounded in a probabilistic sense, even though it is not bounded in the sense of analysis.

On the other hand, consider a sequence of random variables like $\left\{ X_{n} \sim N (0,n) \right\}_{n \in \mathbb{N}}$. No matter how large $B_{\varepsilon}$ is chosen, $P \left[ \left| X_{n} \right| \le B_{\varepsilon} \right] = 2 \Phi \left( B_{\varepsilon} / \sqrt{n} \right) - 1 \to 0$ as $n \to \infty$, so this sequence cannot be bounded in probability. It might seem unlikely to encounter such distributions, but in fact they appear quite naturally in stochastic processes, most notably in the Wiener process.
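The failure can be checked directly from the closed form above. The following sketch (our own example; the fixed candidate bound $B = 10$ is an arbitrary choice, and the conclusion is the same for any $B$) evaluates $P \left[ \left| X_{n} \right| \le B \right]$ for $X_{n} \sim N(0,n)$:

```python
import numpy as np
from scipy.stats import norm

B = 10.0  # any fixed candidate bound; the conclusion holds for every B

# For X_n ~ N(0, n), i.e. standard deviation sqrt(n),
# P[|X_n| <= B] = 2 * Phi(B / sqrt(n)) - 1, which sinks to 0 as n grows,
# so no B_eps can keep the probability above 1 - eps for all large n.
for n in [1, 100, 10_000, 1_000_000]:
    prob = 2 * norm.cdf(B / np.sqrt(n)) - 1
    print(f"n = {n:>9}:  P[|X_n| <= {B}] = {prob:.6f}")
```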
Just as a convergent sequence in analysis is necessarily bounded, the following theorem holds.
Theorem
If $\left\{ X_{n} \right\}$ converges in distribution, then it is bounded in probability.
Proof
Suppose $X_{n} \overset{D}{\to} X$ and let $F_{X}$ be the cumulative distribution function of $X$. Given $\varepsilon > 0$, pick continuity points $\eta_{1} < \eta_{2}$ of $F_{X}$ such that $$ F_{X} \left( \eta_{1} \right) < {{\varepsilon} \over {2}} \qquad \text{and} \qquad F_{X} \left( \eta_{2} \right) > 1 - {{\varepsilon} \over {2}} $$ and set $B_{\varepsilon} := \max \left\{ \left| \eta_{1} \right| , \left| \eta_{2} \right| \right\}$. Since $F_{X_{n}} \left( \eta_{i} \right) \to F_{X} \left( \eta_{i} \right)$ for $i = 1, 2$, there exists an $N_{\varepsilon} \in \mathbb{N}$ such that for all $n \ge N_{\varepsilon}$, $F_{X_{n}} \left( \eta_{1} \right) < \varepsilon / 2$ and $F_{X_{n}} \left( \eta_{2} \right) > 1 - \varepsilon / 2$. Therefore, for all $n \ge N_{\varepsilon}$, $$ P \left[ \left| X_{n} \right| \le B_{\varepsilon} \right] \ge F_{X_{n}} \left( \eta_{2} \right) - F_{X_{n}} \left( \eta_{1} \right) > 1 - \varepsilon $$
■
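As a numerical companion to the theorem, the following sketch (our own example, assuming NumPy and SciPy; the family $X_{n} \sim N(0, 1 + 1/n)$ is our choice, not from the source) takes a sequence converging in distribution to $N(0,1)$ and picks $B_{\varepsilon}$ from the limit distribution with slack, exactly as in the proof:

```python
import numpy as np
from scipy.stats import norm

eps = 0.05

# X_n ~ N(0, 1 + 1/n) converges in distribution to N(0, 1).
# As in the proof, take B_eps from the *limit* CDF with some slack:
# the (1 - eps/4)-quantile of N(0, 1), so that F_X(B_eps) > 1 - eps/2.
B_eps = norm.ppf(1 - eps / 4)

for n in [1, 2, 5, 10, 100, 1000]:
    sigma_n = np.sqrt(1 + 1 / n)                # standard deviation of X_n
    prob = 2 * norm.cdf(B_eps / sigma_n) - 1    # exact value of P[|X_n| <= B_eps]
    print(f"n = {n:>4}:  P[|X_n| <= {B_eps:.3f}] = {prob:.4f}  (target: {1 - eps})")
```

Running this, the probability dips below $1 - \varepsilon$ only for the first few $n$ (those play the role of $n < N_{\varepsilon}$) and stays above it for every later $n$, which is precisely what boundedness in probability asks for.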
Hogg et al. (2013). Introduction to Mathematical Statistics (7th Edition): p306. ↩︎