Sufficient Statistics and Maximum Likelihood Estimators of a Normal Distribution
Theorem

Suppose a given random sample $\mathbf{X} := \left( X_{1} , \cdots , X_{n} \right) \sim N \left( \mu , \sigma^{2} \right)$ follows a normal distribution. The sufficient statistic $T$ and the maximum likelihood estimator $\left( \hat{\mu}, \widehat{\sigma^{2}} \right)$ for $\left( \mu, \sigma^{2} \right)$ are as follows:
\begin{align*}
T =& \left( \sum_{k} X_{k}, \sum_{k} X_{k}^{2} \right)
\\ \left( \hat{\mu}, \widehat{\sigma^{2}} \right) =& \left( {{ 1 } \over { n }} \sum_{k} X_{k}, {{ 1 } \over { n }} \sum_{k} \left( X_{k} - \overline{X} \right)^{2} \right)
\end{align*}
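Before the proof, a quick numerical illustration may help. The following sketch (assuming numpy, with illustrative parameter values and sample size) simulates a normal sample and computes both the sufficient statistic $T$ and the maximum likelihood estimates from the formulas above:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma2, n = 3.0, 4.0, 1_000          # illustrative true parameters
x = rng.normal(mu, np.sqrt(sigma2), n)   # random sample X_1, ..., X_n

# Sufficient statistic T = (sum of X_k, sum of X_k^2)
T = (x.sum(), (x ** 2).sum())

# Maximum likelihood estimates: sample mean and biased sample variance
mu_hat = x.sum() / n
sigma2_hat = ((x - mu_hat) ** 2).sum() / n

print(T, mu_hat, sigma2_hat)   # mu_hat ≈ 3, sigma2_hat ≈ 4
```

Note that the divisor is $1/n$ rather than $1/(n-1)$: the biased variance estimator is exactly what the maximum likelihood derivation below produces.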
Proof

Sufficient Statistic
\begin{align*}
f \left( \mathbf{x} ; \mu, \sigma^{2} \right) =& \prod_{k=1}^{n} f \left( x_{k} ; \mu, \sigma^{2} \right)
\\ =& \prod_{k=1}^{n} {{ 1 } \over { \sqrt{2 \pi} \sigma }} \exp \left[ - {{ 1 } \over { 2 }} \left( {{ x_{k} - \mu } \over { \sigma }} \right)^{2} \right]
\\ =& {{ 1 } \over { \sqrt{2 \pi}^{n} \sigma^{n} }} \exp \left[ - {{ 1 } \over { 2 \sigma^{2} }} \sum_{k=1}^{n} x_{k}^{2} \right] \exp \left[ {{ 1 } \over { \sigma^{2} }} \sum_{k=1}^{n} \mu x_{k} \right] \exp \left[ - {{ 1 } \over { 2 \sigma^{2} }} n \mu^{2} \right]
\\ \overset{\mu}{=}& \exp \left[ {{ \mu } \over { \sigma^{2} }} \sum_{k=1}^{n} x_{k} - {{ 1 } \over { 2 \sigma^{2} }} n \mu^{2} \right] \cdot {{ 1 } \over { \sqrt{2 \pi}^{n} \sigma^{n} }} \exp \left[ - {{ 1 } \over { 2 \sigma^{2} }} \sum_{k=1}^{n} x_{k}^{2} \right]
\\ \overset{\sigma}{=}& {{ 1 } \over { \sqrt{2 \pi}^{n} \sigma^{n} }} \exp \left[ - {{ 1 } \over { 2 \sigma^{2} }} \sum_{k=1}^{n} x_{k}^{2} \right] \exp \left[ {{ 1 } \over { \sigma^{2} }} \sum_{k=1}^{n} \mu x_{k} \right] \exp \left[ - {{ 1 } \over { 2 \sigma^{2} }} n \mu^{2} \right] \cdot 1
\end{align*}
In the last two lines, $\overset{\mu}{=}$ groups the factors that involve $\mu$ and $\overset{\sigma}{=}$ those that involve $\sigma$; in both factorizations the density depends on the data only through $\sum_{k} x_{k}$ and $\sum_{k} x_{k}^{2}$.
Neyman Factorization Theorem: Consider a random sample $X_{1} , \cdots , X_{n}$ with the same probability mass/density function $f \left( x ; \theta \right)$ for a parameter $\theta \in \Theta$. A statistic $Y = u_{1} \left( X_{1} , \cdots , X_{n} \right)$ is a sufficient statistic for $\theta$ if there exist two non-negative functions $k_{1} , k_{2} \ge 0$ satisfying the following:
f \left( x_{1} ; \theta \right) \cdots f \left( x_{n} ; \theta \right) = k_{1} \left[ u_{1} \left( x_{1} , \cdots , x_{n} \right) ; \theta \right] k_{2} \left( x_{1} , \cdots , x_{n} \right)
Note that $k_{2}$ must not depend on $\theta$.
According to the Neyman Factorization Theorem, $T := \left( \sum_{k} X_{k}, \sum_{k} X_{k}^{2} \right)$ is a sufficient statistic for $\left( \mu, \sigma^{2} \right)$.
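To make the factorization concrete, here is a small sketch (an illustration with assumed sample values, not part of the proof) that evaluates the joint log-density once directly and once using only $T$ and $n$; the two agree because the likelihood depends on the data only through $T$:

```python
import numpy as np

def loglik_direct(x, mu, sigma2):
    # Direct sum of the normal log-densities over the sample
    return np.sum(-0.5 * np.log(2 * np.pi * sigma2)
                  - (x - mu) ** 2 / (2 * sigma2))

def loglik_from_T(T, n, mu, sigma2):
    # The same log-likelihood written with the sufficient statistic T = (s1, s2)
    s1, s2 = T
    return (-n * np.log(np.sqrt(2 * np.pi * sigma2))
            - s2 / (2 * sigma2) + mu * s1 / sigma2 - n * mu ** 2 / (2 * sigma2))

rng = np.random.default_rng(0)
x = rng.normal(1.0, 2.0, 100)
T = (x.sum(), (x ** 2).sum())
print(np.isclose(loglik_direct(x, 1.0, 4.0), loglik_from_T(T, 100, 1.0, 4.0)))  # True
```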
Maximum Likelihood Estimator
\begin{align*}
\log L \left( \mu, \sigma^{2} ; \mathbf{x} \right) =& \log f \left( \mathbf{x} ; \mu, \sigma^{2} \right)
\\ =& - n \log \left( \sqrt{2 \pi} \sigma \right) - {{ 1 } \over { 2 \sigma^{2} }} \sum_{k=1}^{n} x_{k}^{2} + {{ 1 } \over { \sigma^{2} }} \sum_{k=1}^{n} \mu x_{k} - {{ 1 } \over { 2 \sigma^{2} }} n \mu^{2}
\end{align*}
The log-likelihood function of the random sample is as shown above. For the likelihood function to attain its maximum, the partial derivatives with respect to $\mu$ and $\sigma$ must be $0$. First, setting the partial derivative with respect to $\mu$ to $0$,
\begin{align*}
& 0 = {{ 1 } \over { \sigma^{2} }} \sum_{k=1}^{n} x_{k} - {{ 1 } \over { \sigma^{2} }} n \mu
\\ \implies & \mu = {{ 1 } \over { n }} \sum_{k=1}^{n} x_{k}
\end{align*}
thus, regardless of $\sigma$, we have $\hat{\mu} = \sum_{k=1}^{n} X_{k} / n$. Next, setting the partial derivative with respect to $\sigma$ to $0$,
\begin{align*}
& 0 = - {{ n } \over { \sigma }} + {{ 1 } \over { \sigma^{3} }} \sum_{k=1}^{n} x_{k}^{2} - {{ 2 } \over { \sigma^{3} }} \sum_{k=1}^{n} \mu x_{k} + {{ 1 } \over { \sigma^{3} }} n \mu^{2}
\\ \implies & n \sigma^{2} = \sum_{k=1}^{n} x_{k}^{2} - 2 \sum_{k=1}^{n} \mu x_{k} + n \mu^{2}
\\ \implies & \sigma^{2} = {{ 1 } \over { n }} \sum_{k=1}^{n} \left( x_{k} - \mu \right)^{2}
\end{align*}
thus, substituting $\hat{\mu} = \sum_{k=1}^{n} X_{k} / n = \overline{X}$ gives $\widehat{\sigma^{2}} = \sum_{k=1}^{n} \left( X_{k} - \overline{X} \right)^{2} / n$.
■
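As a final sanity check (a sketch assuming scipy is available, with illustrative data), numerically maximizing the log-likelihood recovers the closed-form estimators derived above:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
x = rng.normal(-1.5, 3.0, 500)
n = len(x)

def neg_loglik(params):
    mu, log_sigma = params              # optimize log(sigma) to keep sigma > 0
    sigma2 = np.exp(2 * log_sigma)
    return (n * np.log(np.sqrt(2 * np.pi * sigma2))
            + ((x - mu) ** 2).sum() / (2 * sigma2))

res = minimize(neg_loglik, x0=np.array([0.0, 0.0]))
mu_hat, sigma2_hat = res.x[0], np.exp(2 * res.x[1])

# Compare against the closed-form MLE from the theorem
print(mu_hat, x.mean())                             # should agree
print(sigma2_hat, ((x - x.mean()) ** 2).sum() / n)  # should agree
```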