
Likelihood Ratio Test Including Sufficient Statistic

Theorem

Hypothesis Testing: $$ \begin{align*} H_{0} :& \theta \in \Theta_{0} \\ H_{1} :& \theta \in \Theta_{0}^{c} \end{align*} $$

Likelihood Ratio Test Statistic: $$ \lambda \left( \mathbf{x} \right) := {{ \sup_{\Theta_{0}} L \left( \theta \mid \mathbf{x} \right) } \over { \sup_{\Theta} L \left( \theta \mid \mathbf{x} \right) }} $$

If $T \left( \mathbf{X} \right)$ is a sufficient statistic for the parameter $\theta$, and

  • $\lambda^{\ast} (t)$ is the likelihood ratio test statistic computed from $T$
  • $\lambda (\mathbf{x})$ is the likelihood ratio test statistic computed from the full sample $\mathbf{X}$

Then $\lambda^{\ast} \left( T \left( \mathbf{x} \right) \right) = \lambda \left( \mathbf{x} \right)$ holds for every $\mathbf{x} \in \Omega$ in the sample space.
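For instance, if $X_{1} , \dots , X_{n} \sim N \left( \theta , 1 \right)$ and $H_{0} : \theta = \theta_{0}$, the sample mean $T \left( \mathbf{X} \right) = \overline{X}$ is sufficient for $\theta$, and from $\sum \left( x_{i} - \theta_{0} \right)^{2} = \sum \left( x_{i} - \overline{x} \right)^{2} + n \left( \overline{x} - \theta_{0} \right)^{2}$ both statistics reduce to the same expression: $$ \lambda \left( \mathbf{x} \right) = \exp \left( - {{ n \left( \overline{x} - \theta_{0} \right)^{2} } \over { 2 }} \right) = \lambda^{\ast} \left( \overline{x} \right) $$ where $\lambda^{\ast}$ is computed from the distribution $\overline{X} \sim N \left( \theta , 1/n \right)$ of the sufficient statistic.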

Explanation

This theorem shows once more why a sufficient statistic deserves its name: $T$ retains all the information about $\theta$ that the likelihood ratio test needs. Accordingly, when conducting a likelihood ratio test, if a sufficient statistic is available, one can work directly with $\lambda^{\ast}$ without considering the full sample.
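As a quick numerical illustration, here is a minimal sketch in the normal-mean setting described above. The setup, sample size, and function names are illustrative choices, not from the source; it assumes NumPy and SciPy are available. It computes $\lambda \left( \mathbf{x} \right)$ from the full likelihood and $\lambda^{\ast} (t)$ from the distribution of $T = \overline{X}$, and the two values agree as the theorem asserts.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, theta0 = 20, 0.0                   # hypothetical sample size and H0 value
x = rng.normal(0.3, 1.0, size=n)      # data from N(0.3, 1); sigma = 1 known

# lambda(x): full-data LRT. sup over Theta0 = {theta0} is at theta0;
# sup over Theta = R is at the MLE, x.mean().
def lam_full(x, theta0):
    loglik = lambda th: norm.logpdf(x, loc=th).sum()
    return np.exp(loglik(theta0) - loglik(x.mean()))

# lambda*(t): LRT based only on T = X-bar ~ N(theta, 1/n);
# sup over Theta is attained at theta = t.
def lam_suff(t, theta0, n):
    logg = lambda th: norm.logpdf(t, loc=th, scale=1 / np.sqrt(n))
    return np.exp(logg(theta0) - logg(t))

print(lam_full(x, theta0))            # full-sample statistic
print(lam_suff(x.mean(), theta0, n))  # identical value from the sufficient statistic
```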

Proof 1

Since $T$ is sufficient for $\theta$, by the Neyman Factorization Theorem the pdf or pmf $f \left( \mathbf{x} \mid \theta \right)$ of $\mathbf{x}$ can be written as the product of the pdf or pmf $g \left( t \mid \theta \right)$ of $T$ and a function $h \left( \mathbf{x} \right)$ that does not depend on $\theta$:

$$ f \left( \mathbf{x} \mid \theta \right) = g \left( t \mid \theta \right) h \left( \mathbf{x} \right) $$

Then $$ \begin{align*} \lambda \left( \mathbf{x} \right) =& {{ \sup_{\Theta_{0}} L \left( \theta \mid \mathbf{x} \right) } \over { \sup_{\Theta} L \left( \theta \mid \mathbf{x} \right) }} \\ =& {{ \sup_{\Theta_{0}} f \left( \mathbf{x} \mid \theta \right) } \over { \sup_{\Theta} f \left( \mathbf{x} \mid \theta \right) }} \\ =& {{ \sup_{\Theta_{0}} g \left( T \left( \mathbf{x} \right) \mid \theta \right) h \left( \mathbf{x} \right) } \over { \sup_{\Theta} g \left( T \left( \mathbf{x} \right) \mid \theta \right) h \left( \mathbf{x} \right) }} & \because T \text{ is sufficient} \\ =& {{ \sup_{\Theta_{0}} g \left( T \left( \mathbf{x} \right) \mid \theta \right) } \over { \sup_{\Theta} g \left( T \left( \mathbf{x} \right) \mid \theta \right) }} & \because h \text{ does not depend on } \theta \\ =& {{ \sup_{\Theta_{0}} L^{\ast} \left( \theta \mid T \left( \mathbf{x} \right) \right) } \over { \sup_{\Theta} L^{\ast} \left( \theta \mid T \left( \mathbf{x} \right) \right) }} & \because g \text{ is the pdf or pmf of } T \\ =& \lambda^{\ast} \left( T \left( \mathbf{x} \right) \right) \end{align*} $$
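The key step is that $h \left( \mathbf{x} \right)$ cancels because it is constant in $\theta$. This can also be seen numerically: in the same hypothetical normal-mean setting as above (again an illustrative choice, assuming NumPy and SciPy), the ratio $f \left( \mathbf{x} \mid \theta \right) / g \left( T \left( \mathbf{x} \right) \mid \theta \right)$ stays constant as $\theta$ varies, and that constant is exactly $h \left( \mathbf{x} \right)$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 10
x = rng.normal(0.0, 1.0, size=n)          # hypothetical N(theta, 1) sample
t = x.mean()                              # sufficient statistic, T ~ N(theta, 1/n)

# f(x|theta) / g(t|theta) equals h(x) for every theta: the theta terms cancel
for theta in [-1.0, 0.0, 0.5, 2.0]:
    log_f = norm.logpdf(x, loc=theta).sum()
    log_g = norm.logpdf(t, loc=theta, scale=1 / np.sqrt(n))
    print(theta, np.exp(log_f - log_g))   # same value printed for each theta
```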


  1. Casella. (2001). Statistical Inference (2nd Edition): p377. ↩︎