Derivation of the Standard Normal Distribution as a Limiting Distribution of the Binomial Distribution

Theorem

De Moivre-Laplace Theorem

If $X_i \sim B(1,p)$ and $Y_n = X_1 + X_2 + \cdots + X_n$, then $Y_n \sim B(n,p)$ and
$$
\frac{Y_n - np}{\sqrt{np(1-p)}} \overset{D}{\to} N(0,1)
$$
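
As a quick numerical check of this statement (my own sketch, not part of the post; the value $p = 0.3$ and the grid of points $z$ are arbitrary choices), one can compare the CDF of the standardized $B(n,p)$ variable against the standard normal CDF as $n$ grows, for instance with SciPy.

```python
# Compare P((Y_n - np)/sqrt(np(1-p)) <= z) with Phi(z) for growing n.
from scipy.stats import binom, norm
import numpy as np

p = 0.3                                      # assumed success probability
zs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # evaluation points

for n in (10, 100, 1000, 10000):
    mean = n * p
    sd = np.sqrt(n * p * (1 - p))
    # P((Y_n - np)/sd <= z) = P(Y_n <= np + z*sd), and Y_n is integer-valued
    binom_cdf = binom.cdf(np.floor(mean + zs * sd), n, p)
    max_err = np.max(np.abs(binom_cdf - norm.cdf(zs)))
    print(f"n = {n:6d}   max |F_n(z) - Phi(z)| = {max_err:.4f}")
```

The printed discrepancy shrinks as $n$ increases, which is exactly what the convergence in distribution above asserts.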


Description

This theorem is known as the De Moivre-Laplace theorem and is widely known as a special case of the central limit theorem.


From the very beginning of learning statistics, one is taught that as the number of trials of a binomial distribution increases, it comes to approximate a normal distribution. Since this is familiar from experience, the proof itself does not carry great significance, but it is a good example for getting a concrete grip on convergence in distribution, which is hard to build intuition for from formulas alone.
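
A rough plotting sketch (my own; $p = 0.3$ and the values of $n$ are chosen purely for illustration) can reproduce this kind of comparison by overlaying the rescaled binomial pmf on the standard normal density.

```python
import numpy as np
from scipy.stats import binom, norm
import matplotlib.pyplot as plt

p = 0.3                                   # assumed parameter for illustration
fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharey=True)
for ax, n in zip(axes, (5, 30, 200)):
    k = np.arange(n + 1)
    sd = np.sqrt(n * p * (1 - p))
    z = (k - n * p) / sd                  # standardized support points
    # rescale the pmf by sd so the bars integrate to 1, like a density
    ax.bar(z, binom.pmf(k, n, p) * sd, width=1 / sd, alpha=0.5,
           label=f"B({n}, {p})")
    zz = np.linspace(-4, 4, 400)
    ax.plot(zz, norm.pdf(zz), color="black", label="N(0, 1)")
    ax.set_xlim(-4, 4)
    ax.legend()
plt.tight_layout()
plt.show()
```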

Derivation

Note that
$$
\frac{Y_n - np}{\sqrt{np(1-p)}} = \sqrt{n} \frac{\overline{X_n} - p}{\sqrt{p(1-p)}}
$$
Since $X_i \sim B(1,p)$, we have $E(X_i) = p$ and $\operatorname{Var}(X_i) = p(1-p)$, so by the central limit theorem,
$$
\sqrt{n} \frac{\overline{X_n} - p}{\sqrt{p(1-p)}} \overset{D}{\to} N(0,1)
$$
and hence
$$
\frac{Y_n - np}{\sqrt{np(1-p)}} \overset{D}{\to} N(0,1)
$$
■
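
As a sanity check on both steps, a small Monte Carlo sketch (with assumed values $p = 0.3$, $n = 500$, and 20,000 replicates, none of which come from the post) can confirm numerically that the two standardizations coincide and that the statistic behaves approximately like a standard normal variable.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 0.3, 500, 20_000

X = rng.binomial(1, p, size=(reps, n))    # each row: X_1, ..., X_n with X_i ~ B(1, p)
Y = X.sum(axis=1)                         # row sums, so Y ~ B(n, p)

lhs = (Y - n * p) / np.sqrt(n * p * (1 - p))
rhs = np.sqrt(n) * (X.mean(axis=1) - p) / np.sqrt(p * (1 - p))
print("identity holds:", np.allclose(lhs, rhs))   # the two standardizations agree

# If the limit is N(0,1), the sample mean and variance should be near 0 and 1.
print("mean:", round(float(lhs.mean()), 3), " variance:", round(float(lhs.var()), 3))
```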