Karlin-Rubin Theorem Proof
Theorem
Hypothesis Testing:
$$ H_{0} : \theta \le \theta_{0} \qquad H_{1} : \theta > \theta_{0} $$
In this hypothesis test, suppose that $T$ is a sufficient statistic for $\theta$ and that the family $\left\{ g \left( t \mid \theta \right) : \theta \in \Theta \right\}$ of probability density functions or probability mass functions of $T$ possesses a Monotone Likelihood Ratio (MLR). Then, for every $t_{0}$, the test defined by
$$ H_{0} \text{ is rejected if and only if } T > t_{0} $$
is a uniformly most powerful level $\alpha = P_{\theta_{0}} \left( T > t_{0} \right)$ test.
- For a parameter $\theta$, the function $\beta (\theta) := P_{\theta} \left( \mathbf{X} \in R \right)$ with rejection region $R$ is called the power function. If $\sup_{\theta \in \Theta_{0}} \beta (\theta) \le \alpha$, the given hypothesis test is called a level $\alpha$ hypothesis test.
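As a concrete illustration of the theorem's claims (a numerical sketch under an assumed normal model, not part of the original statement): for $X_{1}, \dots, X_{n} \sim N(\theta, 1)$ the sample mean $T = \bar{X}$ is sufficient and its normal family has an MLR, so the test rejecting when $T > t_{0}$ should have a monotonically increasing power function whose value at $\theta_{0}$ is exactly $\alpha$. The snippet below checks both properties using only the standard library.

```python
# Numerical sanity check (assumed model: X_1,...,X_n ~ N(theta, 1), so
# T = sample mean ~ N(theta, 1/n); this example is not from the original).
from statistics import NormalDist

n, theta0, alpha = 25, 0.0, 0.05
z = NormalDist().inv_cdf(1 - alpha)   # upper-alpha standard normal quantile
t0 = theta0 + z / n ** 0.5            # threshold chosen to give level alpha

def power(theta):
    """beta(theta) = P_theta(T > t0), with T ~ N(theta, 1/sqrt(n))."""
    return 1 - NormalDist(mu=theta, sigma=1 / n ** 0.5).cdf(t0)

# the power function is monotonically increasing in theta ...
betas = [power(t) for t in [-0.5, -0.2, 0.0, 0.2, 0.5]]
assert all(b1 < b2 for b1, b2 in zip(betas, betas[1:]))
# ... and sup over H0 is attained at theta0, where it equals alpha
assert abs(power(theta0) - alpha) < 1e-9
```

The assertions pass because $P_{\theta_{0}} \left( T > \theta_{0} + z_{\alpha} / \sqrt{n} \right) = 1 - \Phi \left( z_{\alpha} \right) = \alpha$ exactly.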
Explanation
Note that the given hypothesis test has $H_{0} : \theta \le \theta_{0}$, namely it is a one-sided test. By contrast, a $z$-test that rejects the null hypothesis when $\left| Z \right| \ge z_{\alpha / 2}$ is a two-sided test, so the Karlin-Rubin theorem cannot be applied to it indiscriminately. Typically, the issue is that such a test may be most powerful on one side but not on the other, which leads to considering notions such as unbiased tests.
The theorem guarantees that the one-sided test is the most powerful test, provided the family of probability density functions or probability mass functions of a sufficient statistic is shown to have a monotone likelihood ratio.
Proof
Part 1.
The null hypothesis is rejected when $T > t_{0}$, so the power function is $\beta (\theta) = P_{\theta} \left( T > t_{0} \right)$. Since the family of probability density functions or probability mass functions of the sufficient statistic $T$ for $\theta$ has a monotone likelihood ratio, $\beta (\theta)$ is a monotonically increasing function, and since
$$ \sup_{\theta \le \theta_{0}} \beta (\theta) = \beta \left( \theta_{0} \right) = \alpha $$
it is a level $\alpha$ test.
Part 2.
Definition of a Monotone Likelihood Ratio: for a parameter $\theta \in \mathbb{R}$ and a univariate random variable $T$, let the family of probability mass functions or probability density functions be $G := \left\{ g \left( t \mid \theta \right) : \theta \in \Theta \right\}$. If for all $\theta_{2} > \theta_{1}$,
$$ {{ g \left( t \mid \theta_{2} \right) } \over { g \left( t \mid \theta_{1} \right) }} $$
is a monotone function of $t$ on $\left\{ t : g \left( t \mid \theta_{1} \right) > 0 \lor g \left( t \mid \theta_{2} \right) > 0 \right\}$, then $G$ is said to have a Monotone Likelihood Ratio (MLR).
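As a quick check of this definition (an illustrative example, not part of the original text), consider the normal location family with density $g \left( t \mid \theta \right) = (2 \pi)^{-1/2} e^{- (t - \theta)^{2} / 2}$:

```latex
% likelihood ratio of the N(theta, 1) family for theta_2 > theta_1
\[
\frac{g \left( t \mid \theta_{2} \right)}{g \left( t \mid \theta_{1} \right)}
= \exp \left( \left( \theta_{2} - \theta_{1} \right) t
  - \frac{1}{2} \left( \theta_{2}^{2} - \theta_{1}^{2} \right) \right)
\]
% strictly increasing in t because theta_2 - theta_1 > 0,
% so this family has a monotone likelihood ratio
```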
Now, fix $\theta ' > \theta_{0}$ and consider another hypothesis test as follows.
$$ H_{0} ' : \theta = \theta_{0} \qquad H_{1} ' : \theta = \theta ' $$
This new hypothesis test is set up so that the corollary of the Neyman-Pearson lemma can be applied, and $\theta ' \in \Theta_{0}^{c}$ is an arbitrary point of the alternative region of the original hypothesis test. Considering the set $\mathcal{T} := \left\{ t > t_{0} : g \left( t \mid \theta ' \right) > 0 \lor g \left( t \mid \theta_{0} \right) > 0 \right\}$,
$$ k ' := \inf_{t \in \mathcal{T}} {{ g \left( t \mid \theta ' \right) } \over { g \left( t \mid \theta_{0} \right) }} $$
by its definition, the following is obtained.
$$ T > t_{0} \iff {{ g \left( t \mid \theta ' \right) } \over { g \left( t \mid \theta_{0} \right) }} > k ' \iff g \left( t \mid \theta ' \right) > k ' g \left( t \mid \theta_{0} \right) $$
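The equivalence above can be made concrete (a numerical sketch under an assumed $N(\theta, 1)$ model with hypothetical values $\theta_{0} = 0$, $\theta ' = 1$, $t_{0} = 1.645$; none of these appear in the original proof): since the likelihood ratio is strictly increasing in $t$, its infimum over $\left\{ t > t_{0} \right\}$ is its value at $t_{0}$, and the events $\left\{ T > t_{0} \right\}$ and $\left\{ g \left( T \mid \theta ' \right) > k ' g \left( T \mid \theta_{0} \right) \right\}$ coincide.

```python
# Illustration of Part 2 with an assumed normal example (not from the text).
import math

theta0, theta_p, t0 = 0.0, 1.0, 1.645   # hypothetical values for illustration

def g(t, theta):
    """Density of N(theta, 1) at t."""
    return math.exp(-(t - theta) ** 2 / 2) / math.sqrt(2 * math.pi)

def ratio(t):
    return g(t, theta_p) / g(t, theta0)

# the ratio is strictly increasing in t (checked on a grid), so its
# infimum over (t0, infinity) is ratio(t0)
grid = [-3 + 0.01 * k for k in range(601)]
assert all(ratio(a) < ratio(b) for a, b in zip(grid, grid[1:]))
k_prime = ratio(t0)

# hence T > t0 holds exactly when the likelihood ratio exceeds k'
for t in grid:
    assert (t > t0) == (ratio(t) > k_prime)
```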
Part 3.
Most Powerful Test Involving a Sufficient Statistic:
$$ H_{0} : \theta = \theta_{0} \qquad H_{1} : \theta = \theta_{1} $$
In such a hypothesis test, let $g \left( t \mid \theta_{0} \right)$ and $g \left( t \mid \theta_{1} \right)$ be the probability density functions or probability mass functions of a sufficient statistic $T$ for $\theta$. Then every hypothesis test based on $T$ with rejection region $S$ and some constant $k \ge 0$ satisfying the following three conditions is a level $\alpha$ most powerful test:
- (i): If $g \left( t \mid \theta_{1} \right) > k g \left( t \mid \theta_{0} \right)$, then $t \in S$
- (ii): If $g \left( t \mid \theta_{1} \right) < k g \left( t \mid \theta_{0} \right)$, then $t \in S^{c}$
- (iii): $\alpha = P_{\theta_{0}} \left( T \in S \right)$
Conditions (i) and (ii) are met by Part 2 and condition (iii) by Part 1, so the test of $H_{0} '$ versus $H_{1} '$ is a most powerful test. In other words, for every other level $\alpha$ test of $H_{0} '$ with power function $\beta^{\ast}$,
$$ \beta^{\ast} \left( \theta ' \right) \le \beta \left( \theta ' \right) $$
holds. Meanwhile, every level $\alpha$ hypothesis test for $H_{0}$ satisfies
$$ \beta^{\ast} \left( \theta_{0} \right) \le \sup_{\theta \in \Theta_{0}} \beta^{\ast} (\theta) \le \alpha $$
and is therefore also a level $\alpha$ test for $H_{0} ' : \theta = \theta_{0}$, so the comparison $\beta^{\ast} \left( \theta ' \right) \le \beta \left( \theta ' \right)$ applies to it. Since nothing in the argument of Part 2 depended on the particular choice of $\theta ' > \theta_{0}$, the inequality holds for every $\theta ' \in \Theta_{0}^{c}$ and every level $\alpha$ hypothesis test for $H_{0}$. In other words, the hypothesis test given in the theorem is a uniformly most powerful level $\alpha$ test.
■
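As a numerical follow-up to the remark in the Explanation (an assumed standard normal model, $Z \sim N(\theta, 1)$, not part of the original proof): the two-sided $z$-test is also a level $\alpha$ test of $H_{0} ' : \theta = 0$, but by the theorem the one-sided test is uniformly most powerful, so the two-sided test's power can never exceed it on the alternative $\theta ' > 0$.

```python
# Compare the one-sided UMP z-test with the two-sided z-test
# (assumed model Z ~ N(theta, 1); illustrative, not from the original).
from statistics import NormalDist

alpha = 0.05
std = NormalDist()
z_one = std.inv_cdf(1 - alpha)       # one-sided critical value
z_two = std.inv_cdf(1 - alpha / 2)   # two-sided critical value

def beta_one(theta):
    """Power of the one-sided test: reject iff Z > z_one."""
    return 1 - NormalDist(mu=theta).cdf(z_one)

def beta_two(theta):
    """Power of the two-sided test: reject iff |Z| > z_two."""
    d = NormalDist(mu=theta)
    return (1 - d.cdf(z_two)) + d.cdf(-z_two)

# both tests have size alpha at theta = 0 ...
assert abs(beta_one(0.0) - alpha) < 1e-9
assert abs(beta_two(0.0) - alpha) < 1e-9
# ... but the one-sided test dominates on the alternative theta > 0
for theta in [0.1, 0.5, 1.0, 2.0, 3.0]:
    assert beta_two(theta) < beta_one(theta)
```

This is exactly the sense in which the two-sided test fails to be most powerful on one side, motivating the restriction to one-sided hypotheses in the theorem.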