

Lehmann-Scheffé Theorem Proof

Theorem 1 2

There exists a unique unbiased estimator that depends on a complete sufficient statistic. That is, for a complete sufficient statistic $T$, if $E \left[ \phi (T) \right] = \tau (\theta)$, then $\phi (T)$ is the unique unbiased estimator of $\tau (\theta)$, namely the best unbiased estimator.
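For concreteness, here is a standard illustration (not taken from the cited texts): for an i.i.d. Bernoulli sample, the sample mean is the best unbiased estimator of the success probability.
$$
X_1, \dots, X_n \overset{\text{iid}}{\sim} \text{Bernoulli}(p), \qquad T := \sum_{i=1}^{n} X_i
$$
$T$ is a complete sufficient statistic for $p$ and $E \left[ T/n \right] = p$, so by the theorem $\phi (T) = T/n = \overline{X}$ is the unique best unbiased estimator of $\tau (p) = p$.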

Explanation

The Lehmann-Scheffé theorem is a powerful result that guarantees the uniqueness of the unbiased estimator based on a complete sufficient statistic, and it is one reason why completeness and sufficiency of statistics matter. According to this theorem, finding a complete sufficient statistic is worthwhile: once an unbiased estimator depending on it is found, there is no need to look for a better unbiased estimator.

Proof

Rao-Blackwell theorem: Given a parameter $\theta$, let $T$ be a sufficient statistic for $\theta$ and $W$ be an unbiased estimator of $\tau \left( \theta \right)$. Defining $\phi \left( T \right) := E \left( W \mid T \right)$, for all $\theta$ it holds that:
$$
\begin{align*} E_{\theta} \phi (T) =& \tau (\theta) \\ \operatorname{Var}_{\theta} \phi (T) \le& \operatorname{Var}_{\theta} W \end{align*}
$$
In other words, $\phi (T)$ is a better unbiased estimator of $\tau (\theta)$ than $W$.
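The variance inequality in the Rao-Blackwell theorem follows from the law of total variance; a one-line sketch of this step (standard, not spelled out in the source):
$$
\operatorname{Var}_{\theta} W = E_{\theta} \left[ \operatorname{Var} \left( W \mid T \right) \right] + \operatorname{Var}_{\theta} \left[ E \left( W \mid T \right) \right] \ge \operatorname{Var}_{\theta} \phi (T)
$$
since $E_{\theta} \left[ \operatorname{Var} \left( W \mid T \right) \right] \ge 0$, while unbiasedness follows from $E_{\theta} \phi (T) = E_{\theta} \left[ E \left( W \mid T \right) \right] = E_{\theta} W = \tau (\theta)$.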

According to the Rao-Blackwell theorem, $\phi (T)$ is an unbiased estimator of $\tau (\theta)$ whose variance is no larger than that of the unbiased estimator $W$. Now let $W'$ be another unbiased estimator of $\tau (\theta)$ and define $\psi \left( T \right) := E \left( W' \mid T \right)$. Then for all $\theta$:
$$
E_{\theta} \left[ \phi \left( T \right) - \psi \left( T \right) \right] = \tau (\theta) - \tau (\theta) = 0
$$
so, by the completeness of $T$, for all $\theta$:
$$
E_{\theta} \left[ \phi \left( T \right) - \psi \left( T \right) \right] = 0 \implies P_{\theta} \left( \phi \left( T \right) = \psi \left( T \right) \right) = 1
$$
In other words, every unbiased estimator obtained by conditioning on $T$ coincides with $\phi (T)$ almost surely, and by the Rao-Blackwell theorem $\operatorname{Var}_{\theta} \phi (T) = \operatorname{Var}_{\theta} \psi (T) \le \operatorname{Var}_{\theta} W'$ for every unbiased estimator $W'$. Thus $\phi (T)$ is the unique unbiased estimator depending on $T$, making it the best unbiased estimator, completing the proof.
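As an illustration of how the proof is used in practice (a standard textbook example, not taken from the source): for $X_1, \dots, X_n \overset{\text{iid}}{\sim} \text{Poisson}(\lambda)$, the statistic $T = \sum_{i=1}^{n} X_i$ is complete sufficient, and $W = \mathbf{1} \left( X_1 = 0 \right)$ is unbiased for $\tau (\lambda) = e^{-\lambda}$. Since $X_1 \mid T = t \sim \text{Binomial} \left( t, 1/n \right)$,
$$
\phi (T) = E \left( W \mid T \right) = P \left( X_1 = 0 \mid T \right) = \left( \frac{n-1}{n} \right)^{T}
$$
which, by the Lehmann-Scheffé theorem, is the unique best unbiased estimator of $e^{-\lambda}$.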


  1. Casella. (2001). Statistical Inference (2nd Edition): p369.

  2. Hogg et al. (2013). Introduction to Mathematical Statistics (7th Edition): p402.