Sufficient Statistics and Maximum Likelihood Estimators of Exponential Distributions

Theorem

Suppose a random sample $\mathbf{X} := \left( X_{1} , \cdots , X_{n} \right) \sim \exp \left( \lambda \right)$ is drawn from an exponential distribution with rate $\lambda$, that is, with density $f \left( x ; \lambda \right) = \lambda e^{-\lambda x}$ for $x > 0$.

A sufficient statistic $T$ and the maximum likelihood estimator $\hat{\lambda}$ for $\lambda$ are as follows. $$ \begin{align*} T =& \sum_{k=1}^{n} X_{k} \\ \hat{\lambda} =& {{ n } \over { \sum_{k=1}^{n} X_{k} }} \end{align*} $$

Proof

Sufficient Statistic

$$ \begin{align*} f \left( \mathbf{x} ; \lambda \right) =& \prod_{k=1}^{n} f \left( x_{k} ; \lambda \right) \\ =& \prod_{k=1}^{n} \lambda e^{-\lambda x_{k}} \\ =& \lambda^{n} e^{-\lambda \sum_{k} x_{k}} \\ =& \lambda^{n} e^{-\lambda \sum_{k} x_{k}} \cdot 1 \end{align*} $$

Neyman Factorization Theorem: Let a random sample $X_{1} , \cdots , X_{n}$ have the same probability mass/density function $f \left( x ; \theta \right)$ for a parameter $\theta \in \Theta$. A statistic $Y = u_{1} \left( X_{1} , \cdots , X_{n} \right)$ is a sufficient statistic for $\theta$ if and only if there exist two non-negative functions $k_{1} , k_{2} \ge 0$ that satisfy the following. $$ f \left( x_{1} ; \theta \right) \cdots f \left( x_{n} ; \theta \right) = k_{1} \left[ u_{1} \left( x_{1} , \cdots , x_{n} \right) ; \theta \right] k_{2} \left( x_{1} , \cdots , x_{n} \right) $$ Note that $k_{2}$ must not depend on $\theta$.

According to the Neyman Factorization Theorem, taking $k_{1} \left[ t ; \lambda \right] = \lambda^{n} e^{-\lambda t}$ and $k_{2} \left( x_{1} , \cdots , x_{n} \right) = 1$ shows that $T := \sum_{k} X_{k}$ is a sufficient statistic for $\lambda$.
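
As an informal sanity check, not part of the proof: since the joint density depends on the data only through $\sum_{k} x_{k}$, two samples with the same sum must yield the same likelihood at every $\lambda$. A minimal Python sketch (the sample values are arbitrary):

```python
import numpy as np

def joint_density(x, lam):
    """Joint exponential density: lambda^n * exp(-lambda * sum(x))."""
    x = np.asarray(x, dtype=float)
    return lam ** len(x) * np.exp(-lam * x.sum())

# Two different samples sharing the same sufficient statistic T = sum(x) = 6
x1 = [1.0, 2.0, 3.0]
x2 = [0.5, 2.5, 3.0]

for lam in (0.5, 1.0, 2.0):
    print(lam, joint_density(x1, lam), joint_density(x2, lam))
    # The two densities agree at every lambda: the data enter only through T
```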

Maximum Likelihood Estimator

$$ \begin{align*} \log L \left( \lambda ; \mathbf{x} \right) =& \log f \left( \mathbf{x} ; \lambda \right) \\ =& \log \lambda^{n} e^{-\lambda \sum_{k} x_{k}} \\ =& n \log \lambda - \lambda \sum_{k=1}^{n} x_{k} \end{align*} $$

The log-likelihood function of the random sample is as above. Since the logarithm is monotone, maximizing $L$ is equivalent to maximizing $\log L$, and setting the derivative with respect to $\lambda$ to $0$ gives $$ \begin{align*} & 0 = n {{ 1 } \over { \lambda }} - \sum_{k=1}^{n} x_{k} \\ \implies & \lambda = {{ n } \over { \sum_{k=1}^{n} x_{k} }} \end{align*} $$ The second derivative is $\partial^{2} \log L / \partial \lambda^{2} = - n / \lambda^{2} < 0$, so this critical point is indeed a maximum.
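
The same computation can be confirmed symbolically; in the sketch below (a sympy check, where the symbol $s$ stands in for $\sum_{k} x_{k}$), solving $\partial \log L / \partial \lambda = 0$ recovers $n / s$ and the second derivative comes out negative:

```python
import sympy as sp

lam = sp.Symbol("lambda", positive=True)
n = sp.Symbol("n", positive=True)
s = sp.Symbol("s", positive=True)  # stands in for sum_k x_k

log_L = n * sp.log(lam) - lam * s          # log-likelihood from above
print(sp.solve(sp.diff(log_L, lam), lam))  # [n/s], i.e. lambda-hat = n / sum(x)
print(sp.diff(log_L, lam, 2))              # -n/lambda**2 < 0, so a maximum
```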

Therefore, the maximum likelihood estimator $\hat{\lambda}$ for $\lambda$ is as follows. $$ \hat{\lambda} = {{ n } \over { \sum_{k=1}^{n} X_{k} }} $$
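
As a numerical illustration (a minimal numpy sketch; the true rate $\lambda = 2$ and the sample sizes are arbitrary choices), $\hat{\lambda} = n / \sum_{k} X_{k}$ should settle near the true rate as $n$ grows. Note that numpy parametrizes the exponential distribution by the scale $1 / \lambda$:

```python
import numpy as np

rng = np.random.default_rng(0)
true_lam = 2.0  # arbitrary true rate for this experiment

for n in (10, 100, 10_000):
    x = rng.exponential(scale=1.0 / true_lam, size=n)  # numpy uses scale = 1/lambda
    lam_hat = n / x.sum()                              # MLE: n / sum(X_k)
    print(n, lam_hat)  # approaches 2.0 as n grows
```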