
Proof of the Weak Law of Large Numbers

Law

Given iid random variables $\left\{ X_{k} \right\}_{k=1}^{n}$ with mean $\mu$ and variance $\sigma^{2}$, as $n \to \infty$
$$ \overline{X}_{n} \overset{P}{\to} \mu $$


Explanation

The Weak Law of Large Numbers ranks among the most important theorems in statistics, alongside the Central Limit Theorem.

This theorem says that, no matter what the underlying distribution is, ‘the sample mean converges to the population mean’. That might sound obvious, but few statements called ‘obvious’ in the natural sciences carry this much weight. Beyond its practical uses, the term ‘law’ is fitting for such a proposition, especially from the perspective of those pursuing academia.

Fortunately, despite its significance, the proof itself is simple, requiring only the definition of convergence in probability and Chebyshev’s inequality.
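Before the proof, a minimal simulation sketch of the statement may help: it estimates $P(|\overline{X}_{n} - \mu| \ge \epsilon)$ by Monte Carlo and watches it shrink as $n$ grows. The exponential distribution, $\epsilon = 0.1$, and the sample sizes below are illustrative assumptions, not part of the theorem.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions: Exponential(1) draws, so mu = 1 and sigma^2 = 1.
mu, epsilon, trials = 1.0, 0.1, 1_000

for n in [10, 100, 1_000, 10_000]:
    # Each row is one experiment: the mean of n iid draws.
    sample_means = rng.exponential(scale=1.0, size=(trials, n)).mean(axis=1)
    # Monte Carlo estimate of P(|Xbar_n - mu| >= epsilon).
    tail_prob = np.mean(np.abs(sample_means - mu) >= epsilon)
    print(f"n = {n:>6}: P(|Xbar_n - mu| >= {epsilon}) ~ {tail_prob:.4f}")
```

The estimated probability drops toward zero as $n$ increases, which is exactly what convergence in probability asserts for each fixed $\epsilon$.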

Proof¹

The trick is to rewrite the event so that Chebyshev’s inequality can be applied.

For all $\epsilon > 0$,
$$ P\left( |\overline{X}_{n} - \mu | \ge \epsilon \right) = P\left( |\overline{X}_{n} - \mu | \ge \left( {\epsilon \sqrt{n} \over \sigma} \right) \left( {\sigma \over \sqrt{n}} \right) \right) $$

Here, the mean and variance of $\overline{X}_{n}$ are $\mu$ and $\sigma^{2} / n$ respectively, so the conditions for Chebyshev’s inequality are met with $k = \epsilon \sqrt{n} / \sigma$ and standard deviation $\sigma / \sqrt{n}$.
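For completeness, the variance claim follows directly from independence; a one-line derivation under the iid assumption:
$$ \operatorname{Var} \left( \overline{X}_{n} \right) = \operatorname{Var} \left( {1 \over n} \sum_{k=1}^{n} X_{k} \right) = {1 \over n^{2}} \sum_{k=1}^{n} \operatorname{Var} \left( X_{k} \right) = {n \sigma^{2} \over n^{2}} = {\sigma^{2} \over n} $$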

Chebyshev’s inequality:
$$ P\left( | X - \mu | \ge k\sigma \right) \le {1 \over k^{2}} $$

Therefore, as $n \to \infty$,
$$ P\left( |\overline{X}_{n} - \mu | \ge \epsilon \right) = P\left( |\overline{X}_{n} - \mu | \ge \left( {\epsilon \sqrt{n} \over \sigma} \right) {\sigma \over \sqrt{n}} \right) \le {\sigma^{2} \over n \epsilon^{2}} \to 0 $$
which is exactly the definition of $\overline{X}_{n} \overset{P}{\to} \mu$.
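As a numerical sanity check of the final bound, the sketch below compares a simulated tail probability against $\sigma^{2} / (n \epsilon^{2})$; the Uniform(0, 1) distribution and the parameter choices are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: Uniform(0, 1) draws, so mu = 0.5 and sigma^2 = 1/12.
mu, var, epsilon, trials = 0.5, 1 / 12, 0.05, 2_000

for n in [50, 500, 5_000]:
    sample_means = rng.uniform(0.0, 1.0, size=(trials, n)).mean(axis=1)
    empirical = np.mean(np.abs(sample_means - mu) >= epsilon)
    chebyshev_bound = var / (n * epsilon**2)  # sigma^2 / (n * epsilon^2)
    print(f"n = {n:>5}: empirical {empirical:.4f} <= Chebyshev bound {chebyshev_bound:.4f}")
```

The bound is loose, but it goes to zero as $n$ grows, which is all the proof needs.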


  1. Hogg et al. (2013). Introduction to Mathematical Statistics (7th Edition): p296.