Likelihood Ratio Test with a Sufficient Statistic
Theorem
Hypothesis Testing:
$$
\begin{align*}
H_{0} &: \theta \in \Theta_{0} \\
H_{1} &: \theta \in \Theta_{0}^{c}
\end{align*}
$$
Likelihood Ratio test statistic:
$$
\lambda(x) := \frac{\sup_{\Theta} L(\theta \mid x)}{\sup_{\Theta_{0}} L(\theta \mid x)}
$$
If $T(X)$ is a sufficient statistic for the parameter $\theta$, and
- $\lambda^{\ast}(t)$ is a likelihood ratio test statistic based on $T$,
- $\lambda(x)$ is a likelihood ratio test statistic based on $X$,

then $\lambda^{\ast}\left( T(x) \right) = \lambda(x)$ holds for every point $x \in \Omega$ of the sample space.
Explanation
This theorem is another reminder of why a sufficient statistic was so named. When conducting a likelihood ratio test, if a sufficient statistic exists, one can start directly from $\lambda^{\ast}$, which depends on the data only through $T$, without considering other possibilities.
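As a minimal numerical sketch of this claim (not part of the original article): for a normal sample $X_1, \dots, X_n \sim N(\theta, 1)$ with $H_0 : \theta = \theta_0$, the sample mean is sufficient and has distribution $N(\theta, 1/n)$, so the ratio computed from the full sample and the ratio computed from the mean alone should agree. The helper names `loglik_full` and `loglik_T` are hypothetical, and the ratio follows the convention above with $\sup_{\Theta}$ in the numerator.

```python
import numpy as np

# Illustrative setup (assumed, not from the article):
# X_1, ..., X_n ~ N(theta, 1), testing H0: theta = theta0.
# T(X) = sample mean is sufficient, and T ~ N(theta, 1/n).
rng = np.random.default_rng(0)
n, theta0 = 20, 0.0
x = rng.normal(0.5, 1.0, size=n)
t = x.mean()

def loglik_full(theta):
    # log-likelihood of the full sample, up to an additive constant
    return -0.5 * np.sum((x - theta) ** 2)

def loglik_T(theta):
    # log-likelihood based on T alone (X-bar ~ N(theta, 1/n)), up to a constant
    return -0.5 * n * (t - theta) ** 2

# sup over Theta = R is attained at the MLE, which is the sample mean t;
# sup over Theta_0 = {theta0} is just the value at theta0
lam_full = np.exp(loglik_full(t) - loglik_full(theta0))
lam_T = np.exp(loglik_T(t) - loglik_T(theta0))
print(np.isclose(lam_full, lam_T))  # → True
```

The additive constants of each log-likelihood cancel inside each ratio, which is why they can be dropped.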
Proof
Since $T$ is sufficient, by the Neyman Factorization Theorem the pdf or pmf $f(x \mid \theta)$ of $X$ can be written as the product of the pdf or pmf $g(t \mid \theta)$ of $T$ and a function $h(x)$ that does not depend on $\theta$:
$$
f(x \mid \theta) = g\left( T(x) \mid \theta \right) h(x)
$$
Therefore,
$$
\begin{align*}
\lambda(x)
&= \frac{\sup_{\Theta} L(\theta \mid x)}{\sup_{\Theta_{0}} L(\theta \mid x)}
\\ &= \frac{\sup_{\Theta} f(x \mid \theta)}{\sup_{\Theta_{0}} f(x \mid \theta)}
\\ &= \frac{\sup_{\Theta} g\left( T(x) \mid \theta \right) h(x)}{\sup_{\Theta_{0}} g\left( T(x) \mid \theta \right) h(x)}
& \because T \text{ is sufficient}
\\ &= \frac{\sup_{\Theta} g\left( T(x) \mid \theta \right)}{\sup_{\Theta_{0}} g\left( T(x) \mid \theta \right)}
& \because h \text{ does not depend on } \theta
\\ &= \frac{\sup_{\Theta} L^{\ast}\left( \theta \mid T(x) \right)}{\sup_{\Theta_{0}} L^{\ast}\left( \theta \mid T(x) \right)}
& \because g \text{ is the pdf or pmf of } T
\\ &= \lambda^{\ast}\left( T(x) \right)
\end{align*}
$$
■
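As a worked instance of the theorem (illustrative, not part of the original proof), consider an exponential sample, where the dependence on the data through the sufficient statistic alone can be read off directly:

```latex
% Illustrative example: X_1, ..., X_n iid Exp(theta) (rate parameterization),
% testing H_0 : \theta = \theta_0 against H_1 : \theta \ne \theta_0.
% T(X) = \sum_i X_i is sufficient, and the MLE is \hat{\theta} = n / \sum_i x_i.
\[
\lambda(x)
= \frac{\sup_{\Theta} \theta^{n} e^{-\theta \sum_i x_i}}
       {\sup_{\Theta_0} \theta^{n} e^{-\theta \sum_i x_i}}
= \frac{\hat{\theta}^{n} e^{-\hat{\theta}\, t}}{\theta_0^{n} e^{-\theta_0 t}},
\qquad t = \sum_i x_i ,\quad \hat{\theta} = \frac{n}{t},
\]
% which depends on the sample only through t = T(x),
% so lambda(x) = lambda^*(T(x)), as the theorem asserts.
```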