
Lehmann-Scheffé Theorem Proof 📂Mathematical Statistics


Theorem 1 2

An unbiased estimator that depends on the data only through a complete sufficient statistic is unique. That is, for a complete sufficient statistic $T$, if $E \left[ \phi (T) \right] = \tau (\theta)$, then $\phi (T)$ is the unique unbiased estimator of $\tau (\theta)$, and hence the best unbiased estimator.

Explanation

The Lehmann-Scheffé theorem is a powerful result that guarantees the uniqueness of unbiased estimators that are functions of a complete sufficient statistic, and it is one reason completeness and sufficiency matter. By this theorem, finding a complete sufficient statistic is meaningful in itself: once an unbiased estimator based on it is in hand, there is no need to search for a better unbiased estimator.
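As a concrete illustration of how the theorem is applied (a standard textbook example, not taken from the statement above):

```latex
% X_1, \dots, X_n \overset{\text{iid}}{\sim} \operatorname{Bernoulli}(p),
% \qquad T := \sum_{i=1}^{n} X_i
%
% T is a complete sufficient statistic for p (exponential family), and
% E \left[ \frac{T}{n} \right] = p,
% so by the Lehmann-Scheffé theorem \bar{X} = T/n is the unique
% best unbiased estimator (UMVUE) of p.
```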

Proof

Rao-Blackwell theorem: Given a parameter $\theta$, let $T$ be a sufficient statistic for $\theta$ and $W$ be an unbiased estimator of $\tau \left( \theta \right)$. Defining $\phi \left( T \right) := E \left( W | T \right)$, for all $\theta$, it holds that: $$ \begin{align*} E_{\theta} \phi (T) =& \tau (\theta) \\ \text{Var}_{\theta} \phi (T) \le& \text{Var}_{\theta} W \end{align*} $$ In other words, $\phi (T)$ is a better unbiased estimator for $\tau (\theta)$ than $W$.
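To make the Rao-Blackwell step concrete, here is a small Monte Carlo sketch; the Poisson setup, parameter values, and variable names are illustrative assumptions, not from the source. For i.i.d. $\operatorname{Poisson}(\lambda)$ data, $W = \mathbb{1}\{X_1 = 0\}$ is a crude unbiased estimator of $\tau(\lambda) = e^{-\lambda}$, and since $X_1 \mid T = t \sim \operatorname{Binomial}(t, 1/n)$ for $T = \sum_i X_i$, conditioning gives $\phi(T) = E(W \mid T) = \left( \frac{n-1}{n} \right)^T$.

```python
import numpy as np

# Hypothetical setup: estimate tau(lam) = P(X = 0) = exp(-lam) for Poisson data.
rng = np.random.default_rng(0)
lam, n, reps = 2.0, 10, 20_000          # illustrative parameter choices
target = np.exp(-lam)

x = rng.poisson(lam, size=(reps, n))    # each row is one sample of size n
T = x.sum(axis=1)                       # complete sufficient statistic, per sample

W_vals = (x[:, 0] == 0).astype(float)   # crude unbiased estimator W = 1{X_1 = 0}
phi_vals = ((n - 1) / n) ** T           # phi(T) = E(W | T): Rao-Blackwellized

print(f"target         : {target:.4f}")
print(f"mean of W      : {W_vals.mean():.4f}  var: {W_vals.var():.5f}")
print(f"mean of phi(T) : {phi_vals.mean():.4f}  var: {phi_vals.var():.5f}")
```

Both estimators average close to $e^{-\lambda}$, but the variance of $\phi(T)$ is far smaller, as the theorem predicts.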

According to the Rao-Blackwell theorem, $\phi (T)$ is an unbiased estimator whose variance is no larger than that of the unbiased estimator $W$ of $\tau (\theta)$. Now let $W'$ be any other unbiased estimator of $\tau (\theta)$ and define $\psi \left( T \right) := E \left( W' | T \right)$, which by the Rao-Blackwell theorem is also unbiased. Then for all $\theta$: $$ E_{\theta} \left[ \phi \left( T \right) - \psi \left( T \right) \right] = \tau (\theta) - \tau (\theta) = 0 $$ and by the completeness of $T$, for all $\theta$: $$ E_{\theta} \left[ \phi \left( T \right) - \psi \left( T \right) \right] = 0 \implies P_{\theta} \left( \phi \left( T \right) = \psi \left( T \right) \right) = 1 $$ Thus $\phi (T)$ is the unique unbiased estimator that is a function of $T$, making it the best unbiased estimator, which completes the proof.


  1. Casella. (2001). Statistical Inference (2nd Edition): p369. ↩︎

  2. Hogg et al. (2013). Introduction to Mathematical Statistics (7th Edition): p402. ↩︎