
Karlin-Rubin Theorem Proof 📂Mathematical Statistics

Theorem

Hypothesis Testing:
$$ \begin{align*} H_{0} :& \theta \le \theta_{0} \\ H_{1} :& \theta > \theta_{0} \end{align*} $$

In this hypothesis test, let $T$ be a sufficient statistic for $\theta$, and suppose the family $\left\{ g(t | \theta) : \theta \in \Theta \right\}$ of probability density functions or probability mass functions of $T$ has a Monotone Likelihood Ratio (MLR). Then, for every $t_{0}$, the test
$$ H_{0} \text{ is rejected if and only if } T > t_{0} $$
is a level $\alpha = P_{\theta_{0}} \left( T > t_{0} \right)$ most powerful test.


  • For a parameter $\theta$, the function $\beta (\theta) := P_{\theta} \left( \mathbf{X} \in R \right)$ with rejection region $R$ is called a power function. If $\sup_{\theta \in \Theta_{0}} \beta (\theta) \le \alpha$, the given hypothesis test is called a level $\alpha$ hypothesis test.
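
To make the level $\alpha = P_{\theta_{0}} \left( T > t_{0} \right)$ concrete, here is a minimal numeric sketch. The model is an assumption, not from the text: $X_{1}, \ldots, X_{n}$ iid $N(\theta, 1)$, so the sufficient statistic $T = \bar{X}$ is $N(\theta, 1/n)$.

```python
from math import erf, sqrt

# Hypothetical setting (not from the text): X_1, ..., X_n iid N(theta, 1),
# so the sufficient statistic T = Xbar is N(theta, 1/n).
def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def power(theta, t0, n):
    """Power function beta(theta) = P_theta(T > t0) for T = Xbar ~ N(theta, 1/n)."""
    return 1.0 - norm_cdf(sqrt(n) * (t0 - theta))

theta0, n = 0.0, 25
t0 = 1.645 / sqrt(n)          # cutoff chosen so the level comes out near 0.05
alpha = power(theta0, t0, n)  # level alpha = P_{theta0}(T > t0)
print(round(alpha, 3))        # about 0.05
```

Any choice of $t_{0}$ determines its own level; the theorem then says this cutoff rule is most powerful at that level.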

Explanation

Note that the given hypothesis test has $H_{0} : \theta \le \theta_{0}$, that is, it is a one-sided test. For example, a z-test that rejects the null hypothesis when $|Z| \ge z_{\alpha/2}$ is a two-sided test, so the Karlin-Rubin theorem cannot be applied to it indiscriminately. The typical issue is that such a test may be most powerful on one side but not on the other, which leads to considerations such as unbiasedness of tests.

The theorem guarantees that a one-sided test is most powerful once the family of probability density functions or probability mass functions of a sufficient statistic is shown to have a monotone likelihood ratio.

Proof 1

Part 1.

The null hypothesis is rejected when $T > t_{0}$, so the power function is $\beta (\theta) = P_{\theta} \left( T > t_{0} \right)$. Since the family of probability density functions or probability mass functions of the sufficient statistic $T$ for $\theta$ has a monotone likelihood ratio, $\beta \left( \theta \right)$ is a monotonically increasing function, and since
$$ \sup_{\theta \le \theta_{0}} \beta \left( \theta \right) = \beta \left( \theta_{0} \right) = \alpha $$
it is a level $\alpha$ test.


Part 2.

Definition of a Monotone Likelihood Ratio: For a parameter $\theta \in \mathbb{R}$ and a univariate random variable $T$, let the family of probability mass functions or probability density functions be $G := \left\{ g ( t | \theta) : \theta \in \Theta \right\}$. If for all $\theta_{2} > \theta_{1}$,
$$ {{ g \left( t | \theta_{2} \right) } \over { g \left( t | \theta_{1} \right) }} $$
is a monotone function of $t$ on $\left\{ t : g \left( t | \theta_{1} \right) > 0 \lor g \left( t | \theta_{2} \right) > 0 \right\}$, then $G$ is said to have a Monotone Likelihood Ratio (MLR).
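
As a quick sanity check of the definition, the normal location family (an assumed example, not from the text) has an MLR: the ratio of two $N(\theta, 1)$ densities with $\theta_{2} > \theta_{1}$ is increasing in $t$.

```python
from math import exp

# Assumed example (not from the text): g(t|theta) is the N(theta, 1) density.
def g(t, theta):
    # the normalizing constant cancels in the ratio, so it is omitted
    return exp(-0.5 * (t - theta) ** 2)

theta1, theta2 = 0.0, 1.0   # any theta2 > theta1
ts = [i / 10.0 for i in range(-50, 51)]
ratios = [g(t, theta2) / g(t, theta1) for t in ts]

# the likelihood ratio g(t|theta2)/g(t|theta1) is monotone increasing in t
assert all(r1 < r2 for r1, r2 in zip(ratios, ratios[1:]))
```

Algebraically the ratio equals $\exp \left( (\theta_{2} - \theta_{1}) t + c \right)$ for a constant $c$, which is why the monotonicity holds on the whole line.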

Now, fix $\theta ' > \theta_{0}$ and consider the following additional hypothesis test.
$$ \begin{align*} H'_{0} :& \theta = \theta_{0} \\ H'_{1} :& \theta = \theta ' \end{align*} $$
This new hypothesis test is set up so that the corollary of the Neyman-Pearson lemma can be applied, and its alternative value is an arbitrary $\theta ' \in \Theta_{0}^{c}$ from the alternative space of the original hypothesis test. Considering the set $\mathcal{T} := \left\{ t > t_{0} : g \left( t | \theta ' \right) > 0 \lor g \left( t | \theta_{0} \right) > 0 \right\}$ and
$$ k ' := \inf_{t \in \mathcal{T}} {{ g \left( t | \theta ' \right) } \over { g \left( t | \theta_{0} \right) }} $$
the monotone likelihood ratio together with the definition of $k '$ gives the following.
$$ t > t_{0} \iff {{ g \left( t | \theta ' \right) } \over { g \left( t | \theta_{0} \right) }} > k ' \iff g \left( t | \theta ' \right) > k ' g \left( t | \theta_{0} \right) $$
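
The equivalence can be verified numerically in the assumed normal example (again an illustration, not from the text): by the increasing MLR, the infimum $k '$ over $\left\{ t > t_{0} \right\}$ is reached in the limit $t \to t_{0}$, and exceeding $k '$ is exactly the same event as $t > t_{0}$.

```python
from math import exp

# Illustrative assumption: g(t|theta) proportional to the N(theta, 1) density,
# with theta' = 1 > theta0 = 0 and cutoff t0 = 0.5.
def ratio(t, theta_p=1.0, theta0=0.0):
    return exp(-0.5 * (t - theta_p) ** 2) / exp(-0.5 * (t - theta0) ** 2)

t0 = 0.5
grid = [i / 100.0 for i in range(-300, 301)]

# by the MLR and continuity, inf over {t > t0} equals the value at t0
k_prime = ratio(t0)

# t > t0 holds exactly when the likelihood ratio exceeds k'
assert all((t > t0) == (ratio(t) > k_prime) for t in grid)
```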


Part 3.

Most Powerful Test Involving a Sufficient Statistic:
$$ \begin{align*} H_{0} :& \theta = \theta_{0} \\ H_{1} :& \theta = \theta_{1} \end{align*} $$

In such a hypothesis test, let $g \left( t | \theta_{0} \right), g \left( t | \theta_{1} \right)$ be the probability density functions or probability mass functions of a sufficient statistic $T$ for $\theta$. Then every hypothesis test depending on $T$ that, for some rejection region $S$ and constant $k \ge 0$, satisfies the following three conditions is a level $\alpha$ most powerful test:

  • (i): If $g \left( t | \theta_{1} \right) > k g \left( t | \theta_{0} \right)$, then $t \in S$
  • (ii): If $g \left( t | \theta_{1} \right) < k g \left( t | \theta_{0} \right)$, then $t \in S^{c}$
  • (iii): $\alpha = P_{\theta_{0}} \left( T \in S \right)$
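
Here is a sketch checking (i)-(iii) for one assumed instance (not from the text): $T \sim N(\theta, 1)$, $\theta_{0} = 0$, $\theta_{1} = 1$, rejection region $S = \left\{ t : t > t_{0} \right\}$ with $t_{0} = 1.645$, and $k$ taken as the likelihood ratio at the cutoff.

```python
from math import exp, erf, sqrt

# Assumed instance: T ~ N(theta, 1), theta0 = 0, theta1 = 1, S = {t > 1.645}.
def g(t, theta):
    return exp(-0.5 * (t - theta) ** 2)  # normalizing constant cancels in the ratio

theta0, theta1, t0 = 0.0, 1.0, 1.645
k = g(t0, theta1) / g(t0, theta0)        # threshold matching the cutoff t0

grid = [i / 100.0 for i in range(-400, 401)]
for t in grid:
    if g(t, theta1) > k * g(t, theta0):
        assert t > t0                    # (i): such t lie in S
    elif g(t, theta1) < k * g(t, theta0):
        assert t <= t0                   # (ii): such t lie in S^c

# (iii): alpha = P_{theta0}(T in S) = P(Z > 1.645) for Z ~ N(0, 1)
alpha = 1.0 - 0.5 * (1.0 + erf(t0 / sqrt(2.0)))
print(round(alpha, 3))                   # about 0.05
```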

Conditions (i) and (ii) are met by Part 2, and condition (iii) by Part 1, hence the given test is a most powerful test of $H'_{0} \text{ vs } H'_{1}$. In other words, for the power function $\beta^{\ast}$ of any other level $\alpha$ test of $H'_{0}$,
$$ \beta^{\ast} \left( \theta ' \right) \le \beta \left( \theta ' \right) $$
holds. Meanwhile, every level $\alpha$ hypothesis test for $H_{0}$ satisfies
$$ \beta^{\ast} \left( \theta_{0} \right) \le \sup_{\theta \in \Theta_{0}} \beta^{\ast} \left( \theta \right) \le \alpha $$
and is therefore also a level $\alpha$ test of $H'_{0}$, so the comparison above applies to it. Since nothing in the argument depended on the particular value of $\theta '$, the inequality $\beta^{\ast} \left( \theta ' \right) \le \beta \left( \theta ' \right)$ holds for every $\theta ' \in \Theta_{0}^{c}$ of the original hypothesis test and for every level $\alpha$ hypothesis test for $H_{0}$. In other words, the hypothesis test given in the theorem is a level $\alpha$ most powerful test.
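
The conclusion $\beta^{\ast} \left( \theta ' \right) \le \beta \left( \theta ' \right)$ can be illustrated numerically by pitting the one-sided test from the theorem against a two-sided z-test of the same level. Everything here is an assumed example, not from the text: $X_{1}, \ldots, X_{n}$ iid $N(\theta, 1)$, $\theta_{0} = 0$, $n = 25$, $\alpha = 0.05$, with the standard approximate quantiles $z_{0.05} \approx 1.645$ and $z_{0.025} \approx 1.96$.

```python
from math import erf, sqrt

# Assumed example: X_1..X_n iid N(theta, 1), theta0 = 0, n = 25, alpha = 0.05.
def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

n, theta_p = 25, 0.3   # theta' > theta0 = 0

# one-sided test from the theorem: reject when Xbar > z_alpha / sqrt(n)
beta_one_sided = 1.0 - norm_cdf(1.645 - sqrt(n) * theta_p)

# competing level-alpha two-sided z-test: reject when |Xbar| * sqrt(n) > z_{alpha/2}
beta_two_sided = (1.0 - norm_cdf(1.96 - sqrt(n) * theta_p)
                  + norm_cdf(-1.96 - sqrt(n) * theta_p))

# beta*(theta') <= beta(theta'): the one-sided test dominates at theta' > theta0
assert beta_two_sided < beta_one_sided
print(round(beta_one_sided, 3), round(beta_two_sided, 3))
```

The gap between the two powers is exactly the price the two-sided test pays for spending part of its level on the other tail, which is the point of the discussion above.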


  1. Casella. (2001). Statistical Inference (2nd Edition): pp. 391-392. ↩︎