
Unbiased Estimator

Definition 1

If an estimator $T$ of $\theta$ satisfies the following, then $T$ is called an unbiased estimator of $\theta$.
$$E T = \theta$$

Explanation

In particular, among the unbiased estimators of $\theta$, the one with the smallest variance is called the minimum variance unbiased estimator.

Unbiasedness refers to the property of having no bias. For example, when we assume $X_{i} \sim \left( \mu, \sigma^{2} \right)$ and use the sample mean $\overline{X} = \frac{1}{n} \sum_{i} X_{i}$ as an estimator for $\mu$, then since $E \overline{X} = \mu$, $\overline{X}$ is an unbiased estimator of $\mu$. This might seem obvious at first, but the fact that an estimator hits the parameter exactly on average is a very important property and not at all a given. For instance, consider the variance and the sample variance.
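
To see this unbiasedness concretely, here is a minimal simulation sketch (the normal distribution, $\mu = 3$, $\sigma = 2$, the sample size, and the repetition count are all arbitrary assumptions for illustration): averaging the sample mean over many repeated samples should land very close to $\mu$.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 3.0, 2.0      # assumed population parameters for the illustration
n, reps = 20, 100_000     # sample size and number of repeated samples (arbitrary)

# Draw `reps` independent samples of size n and compute each sample mean
sample_means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)

# The average of the sample means should be close to mu, illustrating E[X̄] = μ
print(sample_means.mean())  # ≈ 3.0
```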

Example

If we assume $X_{i} \sim \left( \mu, \sigma^{2} \right)$, an unbiased estimator for the variance is the following:
$$S^{2} := \frac{1}{n-1} \sum_{i=1}^{n} \left( X_{i} - \overline{X} \right)^{2}$$
As is well known, unlike the sample mean, the sample variance sums all squared deviations and then divides by $n-1$, not $n$. The reason we divide by $n-1$ when calculating the sample variance can be explained in various ways depending on the reader's level, but the most precise, formula-level explanation is 'so that the sample variance is an unbiased estimator'.
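
Before the formal proof, a quick simulation sketch makes the point (again with hypothetical normal data and arbitrary parameter choices): dividing by $n$ systematically underestimates $\sigma^{2}$, while dividing by $n-1$ does not.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 3.0      # true parameters, so sigma**2 = 9
n, reps = 10, 200_000     # a small n makes the bias of the /n version easy to see

samples = rng.normal(mu, sigma, size=(reps, n))

# ddof=0 divides by n (biased); ddof=1 divides by n-1 (the sample variance S^2)
var_div_n   = samples.var(axis=1, ddof=0).mean()
var_div_nm1 = samples.var(axis=1, ddof=1).mean()

print(var_div_n)    # ≈ 9 * (n-1)/n = 8.1, systematically too small
print(var_div_nm1)  # ≈ 9, matching sigma^2
```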

Proof 2

If we set
$$\mu := E \overline{X}, \qquad \sigma^{2} := E X_{i}^{2} - \mu^{2},$$
then
$$\begin{align*} E \left( \overline{X}^{2} \right) - \mu^{2} =& E \left( \overline{X}^{2} \right) - \left( E \overline{X} \right)^{2} \\ =& \operatorname{Var} \overline{X} \\ =& \operatorname{Var} \left( \frac{1}{n} \sum_{i=1}^{n} X_{i} \right) \\ =& \frac{1}{n^{2}} \sum_{i=1}^{n} \operatorname{Var} X_{i} \\ =& \frac{1}{n^{2}} n \sigma^{2} \\ =& \frac{\sigma^{2}}{n} \end{align*}$$
Using $\sum_{i=1}^{n} \left( X_{i} - \overline{X} \right)^{2} = \sum_{i=1}^{n} X_{i}^{2} - n \overline{X}^{2}$, the expected value of the sample variance $S^{2}$ is
$$\begin{align*} E S^{2} =& (n-1)^{-1} E \sum_{i=1}^{n} \left( X_{i} - \overline{X} \right)^{2} \\ =& (n-1)^{-1} \left[ \sum_{i=1}^{n} E X_{i}^{2} - n E \overline{X}^{2} \right] \\ =& (n-1)^{-1} \left[ \sum_{i=1}^{n} \left( \sigma^{2} + \mu^{2} \right) - n \left( \mu^{2} + \frac{\sigma^{2}}{n} \right) \right] \\ =& (n-1)^{-1} \left[ n\sigma^{2} + n \mu^{2} - n \mu^{2} - \sigma^{2} \right] \\ =& (n-1)^{-1} (n-1) \sigma^{2} \\ =& \sigma^{2} \end{align*}$$
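
The intermediate fact $\operatorname{Var} \overline{X} = \sigma^{2} / n$ used above can also be checked numerically with a short sketch (same kind of hypothetical normal setup; all parameter values are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 5.0, 2.0      # assumed parameters; the target value is sigma**2 / n
n, reps = 25, 200_000

# Empirical variance of the sample mean across many repeated samples
sample_means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)

print(sample_means.var(ddof=1))  # empirical Var(X̄)
print(sigma**2 / n)              # theoretical value: 4 / 25 = 0.16
```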


  1. Hogg et al. (2013). Introduction to Mathematical Statistics (7th Edition): p208.

  2. Hogg et al. (2013). Introduction to Mathematical Statistics (7th Edition): p137.