
Most Powerful Test Containing Sufficient Statistics

Theorem

Hypothesis Testing:
$$
\begin{align*} H_{0} :& \theta = \theta_{0} \\ H_{1} :& \theta = \theta_{1} \end{align*}
$$

In this hypothesis test, let $T$ be a sufficient statistic for $\theta$, and let $g \left( t | \theta_{0} \right), g \left( t | \theta_{1} \right)$ denote the probability density function or probability mass function of $T$ under $\theta_{0}$ and $\theta_{1}$, respectively. Then, for a rejection region $S$ and some constant $k \ge 0$, every hypothesis test that depends on $T$ and satisfies the following three conditions is a most powerful test at level $\alpha$:

  • (i): If $g \left( t | \theta_{1} \right) > k g \left( t | \theta_{0} \right)$ then $t \in S$
  • (ii): If $g \left( t | \theta_{1} \right) < k g \left( t | \theta_{0} \right)$ then $t \in S^{c}$
  • (iii): $\alpha = P_{\theta_{0}} \left( T \in S \right)$

Explanation

This theorem is essentially a corollary of the Neyman-Pearson lemma. It not only plays a role in the proof of the Karlin-Rubin theorem but also shows that a sufficient statistic can be used to conveniently design a most powerful test.
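
For instance, consider testing $H_{0}: p = 0.5$ against $H_{1}: p = 0.7$ with $n = 20$ Bernoulli observations; the sufficient statistic $T = \sum X_{i}$ is binomial, and its likelihood ratio $g \left( t | p_{1} \right) / g \left( t | p_{0} \right)$ is increasing in $t$, so conditions (i) and (ii) reduce to rejecting for large $T$. The sketch below is a hypothetical numerical illustration, not from the cited text; the values $n = 20$, $p_{0} = 0.5$, $p_{1} = 0.7$, $\alpha = 0.05$ are assumptions, and since $T$ is discrete the exact size may fall below $\alpha$ unless the test is randomized.

```python
# A minimal sketch (hypothetical example, not from the cited text):
# designing a most powerful test through the sufficient statistic
# T = sum(X_i) for Bernoulli data, H0: p = 0.5 vs H1: p = 0.7, n = 20.
from scipy.stats import binom

n, p0, p1 = 20, 0.5, 0.7
alpha = 0.05

def likelihood_ratio(t):
    """Likelihood ratio g(t|p1) / g(t|p0) of the sufficient statistic."""
    return binom.pmf(t, n, p1) / binom.pmf(t, n, p0)

# The ratio is increasing in t, so conditions (i)/(ii) give S = {t : t >= c}.
# Smallest cutoff c with P_{p0}(T >= c) <= alpha; the exact size can fall
# below alpha because T is discrete.
c = min(t for t in range(n + 1) if binom.sf(t - 1, n, p0) <= alpha)

size = binom.sf(c - 1, n, p0)    # condition (iii), up to discreteness
power = binom.sf(c - 1, n, p1)   # power of the test against p = p1
k = likelihood_ratio(c)          # a valid constant k: points with ratio = k
                                 # (here t = c) may fall in either region

print(f"reject H0 when T >= {c}: size = {size:.4f}, power = {power:.4f}, k = {k:.3f}")
```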

Proof 1

The rejection region for the original sample $\mathbf{X}$ is $R = \left\{ \mathbf{x} : T \left( \mathbf{x} \right) \in S \right\}$. According to the Neyman factorization theorem, the probability density function or probability mass function of $\mathbf{X}$ can be written with some non-negative function $h \left( \mathbf{x} \right)$ as
$$
f \left( \mathbf{x} | \theta_{i} \right) = g \left( T \left( \mathbf{x} \right) | \theta_{i} \right) h \left( \mathbf{x} \right) \qquad , i = 0,1
$$
Assuming that conditions (i) and (ii) of the theorem hold and multiplying both sides of those inequalities by $h \left( \mathbf{x} \right) \ge 0$, the region $R$ satisfies
$$
\begin{align*} \mathbf{x} \in R \impliedby & f \left( \mathbf{x} | \theta_{1} \right) = g \left( T \left( \mathbf{x} \right) | \theta_{1} \right) h \left( \mathbf{x} \right) > k g \left( T \left( \mathbf{x} \right) | \theta_{0} \right) h \left( \mathbf{x} \right) = k f \left( \mathbf{x} | \theta_{0} \right) \\ \mathbf{x} \in R^{c} \impliedby & f \left( \mathbf{x} | \theta_{1} \right) = g \left( T \left( \mathbf{x} \right) | \theta_{1} \right) h \left( \mathbf{x} \right) < k g \left( T \left( \mathbf{x} \right) | \theta_{0} \right) h \left( \mathbf{x} \right) = k f \left( \mathbf{x} | \theta_{0} \right) \end{align*}
$$
And by condition (iii), the following holds:
$$
P_{\theta_{0}} \left( \mathbf{X} \in R \right) = P_{\theta_{0}} \left( T \left( \mathbf{X} \right) \in S \right) = \alpha
$$

Neyman-Pearson Lemma: In this hypothesis test, let $f \left( \mathbf{x} | \theta_{0} \right), f \left( \mathbf{x} | \theta_{1} \right)$ denote the probability density function or probability mass function under $\theta_{0}$ and $\theta_{1}$, and consider, for a rejection region $R$ and some constant $k \ge 0$, the conditions

  • (i): If $f \left( \mathbf{x} | \theta_{1} \right) > k f \left( \mathbf{x} | \theta_{0} \right)$ then $\mathbf{x} \in R$
  • (ii): If $f \left( \mathbf{x} | \theta_{1} \right) < k f \left( \mathbf{x} | \theta_{0} \right)$ then $\mathbf{x} \in R^{c}$
  • (iii): $\alpha = P_{\theta_{0}} \left( \mathbf{X} \in R \right)$

Then the following two statements hold:

  • $(\impliedby)$: Every hypothesis test that satisfies the above three conditions is a most powerful test at level $\alpha$.
  • $(\implies)$: If a hypothesis test satisfying the above three conditions with some constant $k > 0$ exists, then every most powerful test at level $\alpha$ is a size $\alpha$ test and, except possibly on a set $A \subset \Omega$ with $$P_{\theta_{0}} \left( \mathbf{X} \in A \right) = P_{\theta_{1}} \left( \mathbf{X} \in A \right) = 0$$ satisfies conditions (i) and (ii).

Therefore, by the $(\impliedby)$ direction of the Neyman-Pearson lemma, the given hypothesis test is a most powerful test at level $\alpha$.
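
As a quick numerical sanity check of the factorization step above (my own illustration, not from the cited text): for Bernoulli data with the hypothetical values $n = 5$, $p_{0} = 0.5$, $p_{1} = 0.7$, the factor $h \left( \mathbf{x} \right)$ cancels from the ratio, so the sample-level likelihood ratio $f \left( \mathbf{x} | \theta_{1} \right) / f \left( \mathbf{x} | \theta_{0} \right)$ depends on $\mathbf{x}$ only through $T \left( \mathbf{x} \right) = \sum x_{i}$, which is exactly why $R = \left\{ \mathbf{x} : T \left( \mathbf{x} \right) \in S \right\}$ is a Neyman-Pearson region.

```python
# A minimal sketch (own illustration, not from the cited text): checking that
# the factor h(x) cancels, so the sample likelihood ratio f(x|p1)/f(x|p0)
# depends on x only through T(x) = sum(x). Hypothetical values: n = 5,
# p0 = 0.5, p1 = 0.7.
from itertools import product
from math import comb, isclose

n, p0, p1 = 5, 0.5, 0.7

def f(x, p):
    """Joint pmf of the Bernoulli sample x."""
    t = sum(x)
    return p**t * (1 - p)**(n - t)

def g(t, p):
    """pmf of the sufficient statistic T ~ Binomial(n, p)."""
    return comb(n, t) * p**t * (1 - p)**(n - t)

# Here f(x|p) = g(T(x)|p) * h(x) with h(x) = 1 / C(n, T(x)); h cancels from
# the ratio, which is the step used in the proof above.
for x in product([0, 1], repeat=n):
    t = sum(x)
    assert isclose(f(x, p1) / f(x, p0), g(t, p1) / g(t, p0))

print("f(x|θ1)/f(x|θ0) = g(T(x)|θ1)/g(T(x)|θ0) for every sample x")
```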


  1. Casella. (2001). Statistical Inference (2nd Edition): p389. ↩︎