Small-Sample Hypothesis Testing for the Difference Between Two Population Means

Hypothesis Testing [1]

Assume two independent populations each follow a normal distribution, $N \left( \mu_{1} , \sigma_{1}^{2} \right)$ and $N \left( \mu_{2} , \sigma_{2}^{2} \right)$, with $\sigma_{1}^{2} = \sigma^{2} = \sigma_{2}^{2}$; that is, the population variances are unknown but assumed to be equal. When the samples are small, meaning the sample sizes satisfy $n_{1} , n_{2} < 30$, the hypothesis test for the difference between the two population means $D_{0}$ is as follows:

  • $H_{0}$: $\mu_{1} - \mu_{2} = D_{0}$. In other words, the difference in population means is $D_{0}$.
  • $H_{1}$: $\mu_{1} - \mu_{2} \ne D_{0}$. In other words, the difference in population means is not $D_{0}$.

Test Statistic

The test statistic, using the sample standard deviations $s_{1}, s_{2}$, is as follows: $$ t = {{ \left( \overline{X}_{1} - \overline{X}_{2} \right) - D_{0} } \over { \sqrt{ s_{p}^{2} \left( {{ 1 } \over { n_{1} }} + {{ 1 } \over { n_{2} }} \right) } }} $$ Here, $s_{p}^{2}$ is the pooled sample variance, calculated as follows: $$ s_{p}^{2} = {{ \left( n_{1} - 1 \right) s_{1}^{2} + \left( n_{2} - 1 \right) s_{2}^{2} } \over { n_{1} + n_{2} - 2 }} $$ This test statistic approximately follows a t-distribution whose degrees of freedom $\mathrm{df}$ are computed, using the floor function $\lfloor \cdot \rfloor$, as follows: $$ \mathrm{df} = \left\lfloor {{ \left( {{ s_{1}^{2} } \over { n_{1} }} + {{ s_{2}^{2} } \over { n_{2} }} \right)^{2} } \over { {{ \left( s_{1}^{2} / n_{1} \right)^{2} } \over { n_{1} - 1 }} + {{ \left( s_{2}^{2} / n_{2} \right)^{2} } \over { n_{2} - 1 }} }} \right\rfloor $$
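As a concrete numerical check, here is a minimal Python sketch (not from the reference) that computes $s_{p}^{2}$, the test statistic $t$, and the floor-truncated degrees of freedom from summary statistics; the function names and the summary values plugged in at the end are hypothetical.

```python
import numpy as np

def pooled_variance(s1_sq, s2_sq, n1, n2):
    """Pooled sample variance s_p^2 of two samples."""
    return ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)

def t_statistic(xbar1, xbar2, s1_sq, s2_sq, n1, n2, D0=0.0):
    """Test statistic t for H0: mu1 - mu2 = D0."""
    sp_sq = pooled_variance(s1_sq, s2_sq, n1, n2)
    return ((xbar1 - xbar2) - D0) / np.sqrt(sp_sq * (1 / n1 + 1 / n2))

def degrees_of_freedom(s1_sq, s2_sq, n1, n2):
    """Degrees of freedom df, truncated with the floor function."""
    num = (s1_sq / n1 + s2_sq / n2) ** 2
    den = (s1_sq / n1) ** 2 / (n1 - 1) + (s2_sq / n2) ** 2 / (n2 - 1)
    return int(num // den)  # floor of the ratio

# Hypothetical summary statistics: xbar1, xbar2, s1^2, s2^2, n1, n2
print(t_statistic(10.3, 9.1, 2.1, 2.4, 12, 15))  # observed t
print(degrees_of_freedom(2.1, 2.4, 12, 15))      # df for the t-distribution
```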

Derivation

Strategy: The derivation is genuinely hard for freshmen to grasp, and even undergraduates with some experience may struggle; it usually only becomes intuitive at the graduate level. Conversely, once you have studied that far, the derivation amounts to little more than stating a few lemmas.


Pooled Sample Variance: When the population variances are unknown but assumed to be equal, the unbiased estimator for the population variance is as follows: $$ S_{p}^{2} := {{ \left( n_{1} - 1 \right) S_{1}^{2} + \cdots + \left( n_{m} - 1 \right) S_{m}^{2} } \over { \left( n_{1} - 1 \right) + \cdots + \left( n_{m} - 1 \right) }} = {{ \sum_{i=1}^{m} \left( n_{i} - 1 \right) S_{i}^{2} } \over { \sum_{i=1}^{m} \left( n_{i} - 1 \right) }} $$
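The $m$-sample pooled variance is easy to compute directly. The sketch below is illustrative only and uses simulated data; since all groups share the same population variance, the estimate should land near the true $\sigma^{2}$.

```python
import numpy as np

def pooled_variance(samples):
    """S_p^2 = sum_i (n_i - 1) S_i^2 / sum_i (n_i - 1), with S_i^2 the ddof=1 sample variance."""
    num = sum((len(x) - 1) * np.var(x, ddof=1) for x in samples)
    den = sum(len(x) - 1 for x in samples)
    return num / den

# Three hypothetical groups with different means but a common variance sigma^2 = 4
rng = np.random.default_rng(0)
samples = [rng.normal(loc=mu, scale=2.0, size=n) for mu, n in [(0.0, 10), (1.0, 15), (3.0, 8)]]
print(pooled_variance(samples))  # should be close to 4
```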

Satterthwaite’s Approximation: For $k = 1, \cdots , n$, let $Y_{k} \sim \chi_{r_{k}}^{2}$ and $a_{k} \in \mathbb{R}$. If, for some $\nu > 0$, $$ \sum_{k=1}^{n} a_{k} Y_{k} \sim {{ \chi_{\nu}^{2} } \over { \nu }} $$ approximately holds, then $\nu$ can be estimated by $$ \hat{\nu} = {{ \left( \sum_{k} a_{k} Y_{k} \right)^{2} } \over { \sum_{k} {{ a_{k}^{2} } \over { r_{k} }} Y_{k}^{2} }} $$
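The estimator $\hat{\nu}$ translates directly into code. The sketch below is illustrative only; the weights, chi-squared realizations, and degrees of freedom passed in are hypothetical.

```python
import numpy as np

def satterthwaite_nu(a, y, r):
    """nu_hat = (sum_k a_k Y_k)^2 / sum_k (a_k^2 / r_k) Y_k^2."""
    a, y, r = np.asarray(a, float), np.asarray(y, float), np.asarray(r, float)
    return (a @ y) ** 2 / np.sum(a ** 2 / r * y ** 2)

# Hypothetical weights a_k, chi-squared realizations Y_k, and their degrees of freedom r_k
print(satterthwaite_nu(a=[0.5, 0.5], y=[9.2, 11.7], r=[10, 12]))
```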

Derivation of Student’s t-distribution from Independent Normal and Chi-squared Distributions: If two random variables $W,V$ are independent and $W \sim N(0,1)$, $V \sim \chi^{2} (r)$, then $$ T = { {W} \over {\sqrt{V/r} } } \sim t(r) $$
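A quick Monte Carlo sanity check of this lemma, assuming NumPy and SciPy are available: simulated values of $W / \sqrt{V/r}$ should reproduce the quantiles of $t(r)$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
r, size = 7, 200_000
W = rng.standard_normal(size)   # W ~ N(0, 1)
V = rng.chisquare(r, size)      # V ~ chi^2(r), independent of W
T = W / np.sqrt(V / r)

# Empirical quantiles of T versus the quantiles of t(r)
for q in (0.90, 0.95, 0.99):
    print(q, np.quantile(T, q), stats.t.ppf(q, df=r))
```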

$$ t = {{ \left( \overline{X}_{1} - \overline{X}_{2} \right) - D_{0} } \over { \sqrt{ s_{p}^{2} \left( {{ 1 } \over { n_{1} }} + {{ 1 } \over { n_{2} }} \right) } }} = {{ { \left( \overline{X}_{1} - \overline{X}_{2} \right) - D_{0} } \over { \displaystyle \sigma \sqrt{ {{ 1 } \over { n_{1} }} + {{ 1 } \over { n_{2} }} } } } \over { \sqrt{ \displaystyle {{ \mathrm{df} \, s_{p}^{2} } \over { \sigma^{2} }} / \mathrm{df} } }} $$ By Satterthwaite’s approximation, $\mathrm{df} \, s_{p}^{2} / \sigma^{2}$ in the denominator approximately follows a chi-squared distribution with $\mathrm{df}$ degrees of freedom, the numerator follows a standard normal distribution under $H_{0}$, and therefore $t$ approximately follows a t-distribution with $\mathrm{df}$ degrees of freedom. If $Y$ is a random variable following the t-distribution $t(\mathrm{df})$ and $t_{\alpha}$ satisfies $P \left( Y \ge t_{\alpha} \right) = \alpha$, then rejecting $H_{0}$ at the significance level $\alpha$ is equivalent to $$ \left| t \right| \ge t_{\alpha} $$ This means that, under the null hypothesis $\mu_{1} - \mu_{2} = D_{0}$, the observed difference $\overline{X}_{1} - \overline{X}_{2}$ is too far from $D_{0}$ to be credible.
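Putting the pieces together, here is a hedged end-to-end sketch on simulated data: it computes $t$ and $\mathrm{df}$ as above and applies the rejection rule $\left| t \right| \ge t_{\alpha}$, with $t_{\alpha}$ taken from SciPy's t quantile function. The data and the helper name are hypothetical.

```python
import numpy as np
from scipy import stats

def two_sample_t_test(x1, x2, D0=0.0, alpha=0.05):
    n1, n2 = len(x1), len(x2)
    s1_sq, s2_sq = np.var(x1, ddof=1), np.var(x2, ddof=1)
    sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
    t = (np.mean(x1) - np.mean(x2) - D0) / np.sqrt(sp_sq * (1 / n1 + 1 / n2))
    df = int(((s1_sq / n1 + s2_sq / n2) ** 2) //
             ((s1_sq / n1) ** 2 / (n1 - 1) + (s2_sq / n2) ** 2 / (n2 - 1)))
    t_alpha = stats.t.ppf(1 - alpha, df)  # P(Y >= t_alpha) = alpha
    reject = abs(t) >= t_alpha            # rejection rule |t| >= t_alpha from the text
    return t, df, reject

# Hypothetical small samples (n1 = 12, n2 = 15) drawn from normal populations
rng = np.random.default_rng(1)
x1 = rng.normal(loc=5.0, scale=2.0, size=12)
x2 = rng.normal(loc=4.0, scale=2.0, size=15)
print(two_sample_t_test(x1, x2))
```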


  1. Mendenhall. (2012). Introduction to Probability and Statistics (13th Edition): p400. ↩︎