
Laplace Prior Distribution

Buildup

If there’s almost no information about the parameter, there’s no reason to consider a complex prior distribution:

  • Example 1: If someone with a fair understanding of statistics is asked to guess the gender ratio of next year's incoming freshmen in a certain university's statistics department, they might make a reasonable guess based on the gender ratios of previous years. However, someone with no connection to or interest in the department would most likely just answer 50:50, having no particular reason to say otherwise.
  • Example 2: If we are told only that a bag contains red, blue, green, and yellow marbles, with no other information, then the natural guess is that the four colors are equally likely to be drawn, that is, in the ratio 25:25:25:25.

Definition 1

In such situations of extreme lack of information, the prior distribution used is called a Noninformative Prior. Among these, the prior that assumes no particular form and treats every possible value of the parameter equally is called the Laplace Prior.

Explanation

Improper Prior

If the parameter $\theta$ belongs to some bounded interval $(a,b)$, its prior distribution can simply be taken as the uniform distribution $\displaystyle \pi (\theta) = {{1} \over {b-a}}, \quad a < \theta < b$. The problem arises when the parameter is unbounded, as in $-\infty \le \theta \le \infty$. If $\pi ( \theta )$ is set to a constant in that case, then $\displaystyle \int_{-\infty}^{\infty} \pi ( \theta ) d \theta = \infty$, so it cannot serve as a probability distribution function. Such prior distributions are called Improper Priors. Special care is needed when using the Laplace prior, because improper priors can lead to improper posteriors.
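
This distinction can be checked symbolically. The following is a minimal sketch assuming SymPy, with arbitrary symbols $a$, $b$, $c$ introduced only for illustration; it integrates a flat prior over a bounded interval and over the whole real line:

```python
import sympy as sp

theta = sp.symbols('theta', real=True)
a, b = sp.symbols('a b', real=True)
c = sp.symbols('c', positive=True)

# Flat prior on a bounded interval (a, b): the total mass is 1, so it is proper.
bounded_mass = sp.simplify(sp.integrate(1 / (b - a), (theta, a, b)))
print(bounded_mass)  # expected: 1

# Flat prior pi(theta) = c on the whole real line: the total mass is infinite,
# so no choice of the constant c can turn it into a proper density.
unbounded_mass = sp.integrate(c, (theta, -sp.oo, sp.oo))
print(unbounded_mass)  # expected: oo
```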

Problems with Improper Priors

For instance, if the data is assumed to follow an exponential distribution $\displaystyle \exp \left( {{1} \over {\theta}} \right)$, then one could take $\displaystyle \pi ( \theta ) \propto c$ as the Laplace prior.

In this case, the posterior distribution of $\theta$ is
$$ p ( \theta | y ) \propto {{1} \over {\theta }} \exp \left( - {{y} \over {\theta }} \right) $$
To check whether this posterior distribution is proper, substitute $\displaystyle \theta = {{1} \over {z}}$ and evaluate the definite integral
$$ \int_{0}^{\infty} p ( \theta | y ) d \theta \propto \int_{0}^{\infty} z \exp ( - y z ) {{1} \over {z^2}} dz = \int_{0}^{\infty} {{\exp ( - y z )} \over {z}} dz = \infty $$
Since it cannot be normalized, the posterior is not a proper probability distribution function, and another prior distribution should be considered.
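
The divergence can also be seen numerically: the accumulated posterior mass keeps growing roughly like the logarithm of the upper cutoff instead of settling to a finite value. The sketch below assumes SciPy and a hypothetical observed value $y = 2$; neither is part of the text.

```python
import numpy as np
from scipy.integrate import quad

y_obs = 2.0  # hypothetical observed value, chosen only for illustration

# Unnormalized posterior kernel under the flat (Laplace) prior:
#   p(theta | y) ∝ (1/theta) * exp(-y/theta)
def kernel(theta):
    return np.exp(-y_obs / theta) / theta

# Accumulate the integral piece by piece over expanding cutoffs.
# If the posterior were proper, the running total would level off;
# here it keeps growing roughly like log(cutoff), signalling impropriety.
total = 0.0
edges = [0.0, 1e2, 1e4, 1e6, 1e8]
for lo, hi in zip(edges[:-1], edges[1:]):
    piece, _ = quad(kernel, lo, hi)
    total += piece
    print(f"integral up to {hi:.0e}: {total:.3f}")
```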

Improper Priors Are Not Always Problematic

However, not all improper priors lead to improper posteriors. For example, suppose the data is assumed to follow a normal distribution $N ( \theta , \sigma^2 )$ with $\sigma^2$ known, and take $\displaystyle \pi ( \theta ) \propto c$ as the Laplace prior. In this case, the posterior distribution of $\theta$ is
$$ p ( \theta | y_{1} , \cdots , y_{n} ) \propto \exp \left( - {{1} \over {2 \sigma^2}} \sum_{i=1}^{n} (y_{i} - \theta )^2 \right) $$
Writing $\displaystyle \sum_{i=1}^{n} (y_{i} - \theta )^2 = \sum_{i=1}^{n} (y_{i} - \overline{y} )^2 + n ( \theta - \overline{y} )^2$ and dropping the factor that does not depend on $\theta$ gives
$$ p ( \theta | y_{1} , \cdots , y_{n} ) \propto \exp \left( - {{n} \over {2 \sigma^2}} (\theta - \overline{y} )^2 \right) $$
so a proper posterior distribution $N ( \overline{y} , \sigma^2 / n )$ is obtained.
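
As a quick check on this closed form, the sketch below compares a numerically normalized grid version of the flat-prior posterior with the $N ( \overline{y} , \sigma^2 / n )$ density; the synthetic data, the known $\sigma$, and the grid settings are all assumptions made for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic data: n observations from N(theta_true, sigma^2) with sigma known.
sigma, theta_true, n = 1.5, 3.0, 20
y = rng.normal(theta_true, sigma, size=n)
ybar = y.mean()

# Unnormalized posterior under the flat prior pi(theta) ∝ c,
# evaluated on a grid around ybar and normalized numerically.
grid = np.linspace(ybar - 2.0, ybar + 2.0, 2001)
log_post = np.array([-np.sum((y - t) ** 2) / (2 * sigma**2) for t in grid])
post = np.exp(log_post - log_post.max())
post /= post.sum() * (grid[1] - grid[0])   # Riemann-sum normalization

# Closed-form posterior N(ybar, sigma^2 / n) derived above.
analytic = norm.pdf(grid, loc=ybar, scale=sigma / np.sqrt(n))

# The two curves should agree up to small numerical error.
print("max abs difference:", np.max(np.abs(post - analytic)))
```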


  1. 김달호. (2013). R과 WinBUGS를 이용한 베이지안 통계학: p114. ↩︎