
Uniform Distribution 📂Probability Distribution

Uniform Distribution

Definition 1

Continuous

For $[a,b] \subset \mathbb{R}$, a continuous probability distribution $U(a,b)$ with the following probability density function is called the Uniform Distribution. $$ f(x) = {{ 1 } \over { b - a }} \qquad , x \in [a,b] $$
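For concreteness, here is a minimal Python sketch (using NumPy and SciPy, with illustrative endpoints $a = 2$, $b = 5$) that evaluates this constant density and checks that it integrates to $1$ over $[a,b]$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import uniform

a, b = 2.0, 5.0                      # illustrative endpoints; any a < b works

# SciPy parameterizes U(a, b) as uniform(loc=a, scale=b - a)
X = uniform(loc=a, scale=b - a)

print(X.pdf(3.0), 1 / (b - a))       # the density is the constant 1/(b - a) on [a, b]

area, _ = quad(X.pdf, a, b)          # numerical integral of the density over [a, b]
print(area)                          # ≈ 1.0
```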

Discrete

For a finite set $\left\{ x_{k} \right\}_{k=1}^{n}$, a discrete probability distribution with the following probability mass function is called the Uniform Distribution. $$ p \left( x_{k} \right) = P \left( X = x_{k} \right) = {{ 1 } \over { n }} \qquad , k = 1, \cdots , n $$
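As a sketch of the discrete case, the following Python snippet (a six-sided die is just an illustrative choice) draws from the flat probability mass function $1/n$ and checks that the empirical frequencies come out roughly equal.

```python
import numpy as np

n = 6
faces = np.arange(1, n + 1)          # x_k = k, as with a die
pmf = np.full(n, 1 / n)              # p(x_k) = 1/n

rng = np.random.default_rng(0)
rolls = rng.choice(faces, size=100_000, p=pmf)

# empirical frequencies should all be close to 1/6 ≈ 0.1667
print(np.bincount(rolls, minlength=n + 1)[1:] / rolls.size)
```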

Description

As its name suggests, the uniform distribution assigns probability evenly across its support. A typical example of a discrete uniform distribution is $x_{k} = k$, as with a die, in which case the mathematical properties are usually of little interest. Unless otherwise mentioned, the uniform distribution refers to the continuous distribution.

The uniform distribution matters not for any single deep reason, but because it is the simplest distribution we can think of. It may look too simple to students familiar with distribution theory, yet it is still widely used across mathematics and artificial intelligence.

Information Theory

From the perspective of information theory, it is a very important distribution because, whether discrete or continuous, it maximizes Shannon entropy among distributions on the same support. Whereas other distributions have peaks and valleys in their probability functions, a flat probability function gives no hint about what a sample will look like, which is precisely maximal uncertainty.
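As a rough numerical illustration (the skewed probability vector below is an arbitrary example, not anything canonical), the flat pmf on six outcomes attains the maximal Shannon entropy $\log 6$, while a lopsided pmf on the same support falls short.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum_k p_k log p_k, in nats."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p)))

n = 6
flat = np.full(n, 1 / n)
skewed = np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.05])   # arbitrary non-uniform pmf

print(shannon_entropy(flat))     # log 6 ≈ 1.79, the maximum over 6 outcomes
print(shannon_entropy(skewed))   # ≈ 1.43, strictly smaller
```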

Maximizing entropy in the discrete uniform distribution is also a good example of the Lagrange multiplier method.
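For reference, the Lagrange multiplier calculation runs roughly as follows: maximize $H(p) = - \sum_{k} p_{k} \log p_{k}$ subject to $\sum_{k} p_{k} = 1$. $$ \begin{align*} \mathcal{L} \left( p_{1} , \cdots , p_{n} , \lambda \right) =& - \sum_{k=1}^{n} p_{k} \log p_{k} + \lambda \left( \sum_{k=1}^{n} p_{k} - 1 \right) \\ {{ \partial \mathcal{L} } \over { \partial p_{k} }} = - \log p_{k} - 1 + \lambda =& 0 \implies p_{k} = e^{\lambda - 1} \end{align*} $$ Every $p_{k}$ equals the same constant $e^{\lambda - 1}$, and the constraint $\sum_{k} p_{k} = 1$ forces that constant to be $1/n$, which is exactly the discrete uniform distribution.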

Basic Properties

Moment Generating Function

  • [1]: $$m(t) = {{ e^{tb} - e^{ta} } \over { t(b-a) }} \qquad , t \neq 0$$
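As a quick sanity check (the endpoints $a = 2$, $b = 5$ and $t = 0.7$ are arbitrary choices), one can compare this closed form with $E \left( e^{tX} \right)$ computed by numerical integration.

```python
import numpy as np
from scipy.integrate import quad

a, b = 2.0, 5.0                      # arbitrary endpoints with a < b
t = 0.7                              # any t != 0

# E[e^{tX}] against the flat density 1/(b - a) on [a, b]
numeric, _ = quad(lambda x: np.exp(t * x) / (b - a), a, b)

# closed form from [1]
closed = (np.exp(t * b) - np.exp(t * a)) / (t * (b - a))

print(numeric, closed)               # the two agree to numerical precision
```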

Mean and Variance

  • [2]: If $X \sim U(a,b)$ then $$ E(X) = {{ a+b } \over { 2 }} \\ \operatorname{Var}(X) = {{ (b-a)^{2} } \over { 12 }} $$

Sufficient Statistic and Maximum Likelihood Estimate

  • [3]: Suppose we are given a random sample $\mathbf{X} := \left( X_{1} , \cdots , X_{n} \right) \sim U \left( 0 , \theta \right)$.

The sufficient statistic $T$ and maximum likelihood estimate $\hat{\theta}$ for $\theta$ are as follows. $$ \begin{align*} T =& \max_{k=1 , \cdots , n} X_{k} \\ \hat{\theta} =& \max_{k=1 , \cdots , n} X_{k} \end{align*} $$
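A small Monte Carlo sketch (the true parameter $\theta = 3$ and sample size $n = 200$ are illustrative) shows the sample maximum tracking $\theta$ from below.

```python
import numpy as np

rng = np.random.default_rng(42)
theta = 3.0                          # illustrative true parameter
n = 200

sample = rng.uniform(0.0, theta, size=n)   # random sample from U(0, theta)
theta_hat = sample.max()                   # T = MLE = sample maximum

print(theta_hat)                     # slightly below 3.0, since no observation can exceed theta
```

Because the sample maximum never exceeds $\theta$, the estimate sits below the true value; in fact $E \left( \hat{\theta} \right) = {{ n } \over { n+1 }} \theta$, so the MLE is biased but the bias vanishes as $n$ grows.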

Proof

[1]

For $t \neq 0$, $$ \begin{align*} m(t) =& \int_{a}^{b} e^{tx} {{ 1 } \over { b-a }} dx \\ =& {{ 1 } \over { b-a }} \left[ {{ 1 } \over { t }} e^{tx} \right]_{a}^{b} \\ =& {{ e^{tb} - e^{ta} } \over { t(b-a) }} \end{align*} $$

[2]

Deduced directly.
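Spelled out, the direct computation is just two integrals against the constant density. $$ \begin{align*} E(X) =& \int_{a}^{b} {{ x } \over { b-a }} dx = {{ b^{2} - a^{2} } \over { 2(b-a) }} = {{ a+b } \over { 2 }} \\ E \left( X^{2} \right) =& \int_{a}^{b} {{ x^{2} } \over { b-a }} dx = {{ b^{3} - a^{3} } \over { 3(b-a) }} = {{ a^{2} + ab + b^{2} } \over { 3 }} \\ \operatorname{Var}(X) =& E \left( X^{2} \right) - \left[ E(X) \right]^{2} = {{ a^{2} + ab + b^{2} } \over { 3 }} - {{ (a+b)^{2} } \over { 4 }} = {{ (b-a)^{2} } \over { 12 }} \end{align*} $$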

[3]

Deduced directly.
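In outline, the likelihood of the sample factors as $$ L \left( \theta ; \mathbf{x} \right) = \prod_{k=1}^{n} {{ 1 } \over { \theta }} \mathbf{1}_{[0, \theta]} \left( x_{k} \right) = {{ 1 } \over { \theta^{n} }} \mathbf{1} \left( \max_{k} x_{k} \le \theta \right) \cdot \mathbf{1} \left( \min_{k} x_{k} \ge 0 \right) $$ so the factorization theorem gives $T = \max_{k} X_{k}$ as a sufficient statistic, and since $\theta^{-n}$ is decreasing in $\theta$ while the likelihood vanishes for $\theta < \max_{k} x_{k}$, the likelihood is maximized at $\hat{\theta} = \max_{k} X_{k}$.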


  1. Hogg et al. (2013). Introduction to Mathematical Statistics (7th Edition): p45.