Karlin-Rubin Theorem Proof
Theorem
Hypothesis Testing:
$$ \begin{align*} H_{0} :& \theta \le \theta_{0} \\ H_{1} :& \theta > \theta_{0} \end{align*} $$
In this hypothesis test, let $T$ be a sufficient statistic for $\theta$, and suppose the family $\left\{ g(t | \theta) : \theta \in \Theta \right\}$ of probability density functions or probability mass functions of $T$ has a Monotone Likelihood Ratio (MLR). Then, for every $t_{0}$, the test defined by
$$
H_{0} \text{ is rejected if and only if } T > t_{0}
$$
is a uniformly most powerful (UMP) level $\alpha = P_{\theta_{0}} \left( T > t_{0} \right)$ test.
- For a test with rejection region $R$, the function $\beta (\theta) := P_{\theta} \left( \mathbf{X} \in R \right)$ of the parameter $\theta$ is called the power function. If $\sup_{\theta \in \Theta_{0}} \beta (\theta) \le \alpha$, the test is said to be a level $\alpha$ test.
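As a concrete illustration (an added assumption, not part of the theorem itself), let $X_{1}, \dots, X_{n}$ be iid $N \left( \theta, \sigma^{2} \right)$ with $\sigma$ known. Then $T = \overline{X} \sim N \left( \theta, \sigma^{2}/n \right)$ is sufficient for $\theta$, and its family of densities has a monotone likelihood ratio (verified in the example after the MLR definition below), so rejecting $H_{0} : \theta \le \theta_{0}$ if and only if $\overline{X} > t_{0}$ is a uniformly most powerful test of level
$$
\alpha = P_{\theta_{0}} \left( \overline{X} > t_{0} \right) = 1 - \Phi \left( {{ t_{0} - \theta_{0} } \over { \sigma / \sqrt{n} }} \right)
$$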
Explanation
Note that the given hypothesis test $H_{0} : \theta \le \theta_{0}$ is one-sided. For example, a z-test that rejects the null hypothesis when $|Z| \ge z_{\alpha/2}$ is two-sided, so the Karlin-Rubin theorem cannot be applied to it indiscriminately. The typical issue is that a test may be most powerful against alternatives on one side but not the other, which leads to additional considerations such as unbiasedness of tests.
The theorem guarantees that the one-sided test is uniformly most powerful whenever the family of probability density functions or probability mass functions of a sufficient statistic has a monotone likelihood ratio.
Proof 1
Part 1.
The null hypothesis is rejected exactly when $T > t_{0}$, so the power function is $\beta (\theta) = P_{\theta} \left( T > t_{0} \right)$. Since the family of probability density functions or probability mass functions of the sufficient statistic $T$ for $\theta$ has a monotone likelihood ratio, $T$ is stochastically increasing in $\theta$, so $\beta \left( \theta \right)$ is a monotonically increasing function of $\theta$, and therefore
$$
\sup_{\theta \le \theta_{0}} \beta \left( \theta \right) = \beta \left( \theta_{0} \right) = \alpha
$$
which makes the given test a level $\alpha$ test.
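For intuition, here is a minimal numerical sketch of Part 1 in the normal illustration above; the distributional setting and all constants are assumptions for demonstration, not part of the proof.

```python
import numpy as np
from scipy.stats import norm

# Assumed setting: X_1, ..., X_n iid N(theta, sigma^2) with sigma known,
# so the sufficient statistic T = Xbar is N(theta, sigma^2 / n) and the
# power function is beta(theta) = P_theta(T > t0).
sigma, n, t0 = 1.0, 25, 0.33

thetas = np.linspace(-1.0, 1.0, 201)
beta = norm.sf(t0, loc=thetas, scale=sigma / np.sqrt(n))  # P_theta(T > t0)

# beta should be monotonically increasing in theta, as Part 1 claims.
assert np.all(np.diff(beta) > 0)

# Usage: for a target level alpha, the cutoff t0 solves
# alpha = P_{theta0}(T > t0).
alpha, theta0 = 0.05, 0.0
t0_star = norm.isf(alpha, loc=theta0, scale=sigma / np.sqrt(n))
print(f"level-{alpha} cutoff: t0 = {t0_star:.4f}")  # ~ 0.3290
```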
Part 2.
Definition of a Monotone Likelihood Ratio: Let $G := \left\{ g ( t | \theta) : \theta \in \Theta \right\}$ be the family of probability mass functions or probability density functions of a univariate random variable $T$ with parameter $\theta \in \mathbb{R}$. If, for all $\theta_{2} > \theta_{1}$,
$$ {{ g \left( t | \theta_{2} \right) } \over { g \left( t | \theta_{1} \right) }} $$
is a monotone function of $t$ on $\left\{ t : g \left( t | \theta_{1} \right) > 0 \lor g \left( t | \theta_{2} \right) > 0 \right\}$, then $G$ is said to have a Monotone Likelihood Ratio (MLR).
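For example, in the normal illustration above (with known $\sigma$, an assumption for demonstration), $g \left( t | \theta \right) \propto \exp \left( - n \left( t - \theta \right)^{2} / 2 \sigma^{2} \right)$, so for $\theta_{2} > \theta_{1}$,
$$
{{ g \left( t | \theta_{2} \right) } \over { g \left( t | \theta_{1} \right) }} = \exp \left( - {{ n \left( t - \theta_{2} \right)^{2} - n \left( t - \theta_{1} \right)^{2} } \over { 2 \sigma^{2} }} \right) = \exp \left( {{ n \left( \theta_{2} - \theta_{1} \right) } \over { \sigma^{2} }} t - {{ n \left( \theta_{2}^{2} - \theta_{1}^{2} \right) } \over { 2 \sigma^{2} }} \right)
$$
which is monotonically increasing in $t$, so this family has an MLR.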
Now, fix $\theta' > \theta_{0}$ and consider another hypothesis test as follows.
$$
\begin{align*}
H'_{0} :& \theta = \theta_{0}
\\ H'_{1} :& \theta = \theta'
\end{align*}
$$
This new hypothesis test is a setup for applying the corollary of the Neyman-Pearson lemma, and $\theta' \in \Theta_{0}^{c}$ is an arbitrary point of the alternative parameter space of the original hypothesis test. Considering the set $\mathcal{T} := \left\{ t > t_{0} : g \left( t | \theta' \right) > 0 \lor g \left( t | \theta_{0} \right) > 0 \right\}$ and defining
$$
k' := \inf_{t \in \mathcal{T}} {{ g \left( t | \theta' \right) } \over { g \left( t | \theta_{0} \right) }}
$$
the monotonicity of the likelihood ratio in $t$ and the definition of $k'$ yield the following.
$$
T > t_{0} \iff {{ g \left( t | \theta' \right) } \over { g \left( t | \theta_{0} \right) }} > k' \iff g \left( t | \theta' \right) > k' g \left( t | \theta_{0} \right)
$$
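In the normal illustration, for instance, the likelihood ratio computed after the MLR definition is continuous and strictly increasing in $t$, so the infimum over $\left\{ t > t_{0} \right\}$ is attained in the limit $t \downarrow t_{0}$:
$$
k' = \exp \left( {{ n \left( \theta' - \theta_{0} \right) } \over { \sigma^{2} }} t_{0} - {{ n \left( \theta'^{2} - \theta_{0}^{2} \right) } \over { 2 \sigma^{2} }} \right)
$$
and indeed $T > t_{0}$ exactly when the likelihood ratio exceeds this $k'$.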
Part 3.
Most Powerful Test Involving a Sufficient Statistic: $$ \begin{align*} H_{0} :& \theta = \theta_{0} \\ H_{1} :& \theta = \theta_{1} \end{align*} $$
In this hypothesis test, let $g \left( t | \theta_{0} \right)$ and $g \left( t | \theta_{1} \right)$ be the probability density functions or probability mass functions of a sufficient statistic $T$ for $\theta$. Then every hypothesis test based on $T$ with rejection region $S$ that satisfies the following three conditions for some constant $k \ge 0$ is a level $\alpha$ most powerful test:
- (i): If $g \left( t | \theta_{1} \right) > k g \left( t | \theta_{0} \right)$, then $t \in S$
- (ii): If $g \left( t | \theta_{1} \right) < k g \left( t | \theta_{0} \right)$, then $t \in S^{c}$
- (iii): $\alpha = P_{\theta_{0}} \left( T \in S \right)$
Conditions (i) and (ii) hold by Part 2, and condition (iii) holds by Part 1, so the test of $H'_{0} \text{ vs } H'_{1}$ is a most powerful test. In other words, for the power function $\beta^{\ast}$ of any other level $\alpha$ test of $H'_{0}$,
$$
\beta^{\ast} \left( \theta' \right) \le \beta \left( \theta' \right)
$$
holds. Since $\beta$ is monotonically increasing by Part 1 and $\theta' > \theta_{0}$ was fixed in Part 2, this comparison covers every test whose power function satisfies $\beta^{\ast} \left( \theta_{0} \right) \le \alpha$. Meanwhile, every level $\alpha$ test of the original hypothesis $H_{0}$ satisfies
$$
\beta^{\ast} \left( \theta_{0} \right) \le \sup_{\theta \in \Theta_{0}} \beta^{\ast} \left( \theta \right) \le \alpha
$$
are satisfied. During the proof process, it was irrelevant what $\theta '$ was as long as it’s a level $\alpha$ hypothesis test, confirming that $\beta^{\ast} \left( \theta’ \right) \le \beta \left( \theta’ \right)$ also holds for any $\theta’ \in \Theta_{0}^{c}$ of the original hypothesis test as well as all level $\alpha$ hypothesis tests for $H_{0}$. In other words, the hypothesis test given in the theorem is a level $\alpha$ most powerful test.
■
Casella, G., & Berger, R. L. (2001). Statistical Inference (2nd Edition): pp. 391–392. ↩︎