Derivation of the Standard Normal Distribution as a Limiting Distribution of the Binomial Distribution
Theorem
De Moivre-Laplace Theorem
If $X_i \sim B(1,p)$ and $Y_n = X_1 + X_2 + \cdots + X_n$, then $Y_n \sim B(n,p)$ and $$ { { Y_n - np } \over {\sqrt{ np(1-p) } } }\overset{D}{\to} N(0,1) $$
- $N \left( \mu , \sigma^{2} \right)$ is a normal distribution with mean $\mu$ and variance $\sigma^{2}$.
- $B(n,p)$ is a binomial distribution with $n$ trials and probability $p$.
- $\overset{D}{\to}$ denotes convergence in distribution.
Description
This result is widely known as a special case of the central limit theorem.
From the start of a statistics education, students are taught that a binomial distribution is well approximated by a normal distribution as the number of trials grows. Since the claim itself is familiar from experience, the proof carries little practical weight; its value lies in making convergence in distribution concrete, a notion that can be hard to grasp from the formal definition alone.
Derivation
Write $\overline{X_n} := (X_1 + X_2 + \cdots + X_n)/n = Y_n / n$, so that $Y_n = n \overline{X_n}$. Then $$ { { Y_n - np } \over {\sqrt{ np(1-p) } } } = { { n \overline{X_n} - np } \over { \sqrt{n} \sqrt{ p(1-p) } } } = \sqrt{n} { { \overline{X_n} - p } \over { \sqrt{p(1-p)} } } $$ Since $X_i \sim B(1,p)$, we have $E(X_i ) = p$ and $\operatorname{Var}(X_i ) = p(1-p)$. Therefore, by the central limit theorem, $$ \sqrt{n} { { \overline{X_n} - p } \over { \sqrt{p(1-p)} } } \overset{D}{\to} N(0,1) $$
■
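As a sanity check, the limit can be illustrated by simulation: the empirical CDF of the standardized $Y_n$ should be close to the standard normal CDF $\Phi$. The sketch below does this with NumPy; the parameter choices ($n = 2000$, $p = 0.3$, and the number of simulated samples) are arbitrary assumptions for illustration, not values from the text.

```python
import math

import numpy as np

def normal_cdf(z: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative parameters (assumed, not from the theorem statement).
n, p, trials = 2000, 0.3, 100_000
rng = np.random.default_rng(0)

y = rng.binomial(n, p, size=trials)            # draws of Y_n ~ B(n, p)
z = (y - n * p) / math.sqrt(n * p * (1 - p))   # standardized as in the theorem

# Compare the empirical CDF of Z with Phi at a few points.
for c in (-1.0, 0.0, 1.0):
    emp = float(np.mean(z <= c))
    print(f"P(Z <= {c:+.0f}): empirical {emp:.4f} vs Phi {normal_cdf(c):.4f}")
```

With $n$ this large the two CDFs should agree to roughly two decimal places; the remaining gap is mostly the discreteness of the binomial, which shrinks like $1/\sqrt{np(1-p)}$.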