Relative Entropy (Kullback-Leibler Divergence) in Classical Information Theory


For a discrete random variable $X$ with probability mass functions $p$ and $q$, the relative entropy of $p$ with respect to $q$ is defined as follows:

$$ D(p \| q) := \sum p(x) \log_{2} \dfrac{p(x)}{q(x)} \tag{1} $$

Here, for $p \ne 0$, we adopt the convention $p \log_{2}\frac{p}{0} := \infty$. For continuous random variables, it is defined through integration:

$$ D(p \| q) := \int p(x) \ln \dfrac{p(x)}{q(x)} dx $$

The expected value form is as follows:

$$ D(p \| q) = E_{p} \left[ \log \dfrac{p(X)}{q(X)} \right] $$
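As a concrete illustration of definition $(1)$, here is a minimal Python sketch (not part of the original text; the function name `kl_divergence` and the sample distributions are chosen for illustration). It computes the relative entropy in bits, using the convention $p \log_{2}\frac{p}{0} := \infty$ for $p \ne 0$ and the usual convention that terms with $p(x) = 0$ contribute $0$:

```python
import math

def kl_divergence(p, q):
    """Relative entropy D(p || q) in bits for discrete distributions p and q."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi == 0.0:
            continue            # terms with p(x) = 0 contribute nothing
        if qi == 0.0:
            return math.inf     # p * log2(p / 0) := infinity for p != 0
        total += pi * math.log2(pi / qi)
    return total

# Illustrative example: a fair coin modeled by a biased distribution.
p = [0.5, 0.5]
q = [0.25, 0.75]
print(kl_divergence(p, q))  # about 0.2075 bits
print(kl_divergence(p, p))  # 0.0, since D(p || p) = 0
```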

Explanation

Relative entropy is also referred to as Kullback-Leibler divergence and is denoted using the following notations:

$$ D(p \| q) = D_{\text{KL}}(p \| q) = H(p \| q) $$

$D(p \| q)$ measures how poorly $q$ represents $p$ when the distribution of $X$ is modeled as $q$ even though its actual distribution is $p$; in other words, it quantifies the degree of dissimilarity between $q$ and $p$. Since $-\log q$ is the information content under $q$, definition $(1)$ represents the average difference in information content between $q$ and $p$.

$$ \begin{align*} \sum p(x) \log_{2} \dfrac{p(x)}{q(x)} &= \sum p(x) \big[ -\log_{2}q(x) - (-\log_{2}p(x)) \big] \\ &= \sum p(x) \big[ I(q(x)) - I(p(x)) \big] \\ &= E \big[ I(q) - I(p) \big] \end{align*} $$
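As an illustrative numerical example (not from the original text), take $p = \left(\frac{1}{2}, \frac{1}{2}\right)$ and $q = \left(\frac{1}{4}, \frac{3}{4}\right)$. The information contents are $I(q) = \left(2, \log_{2}\frac{4}{3}\right)$ bits and $I(p) = (1, 1)$ bits, so

$$ D(p \| q) = \dfrac{1}{2}(2 - 1) + \dfrac{1}{2}\left( \log_{2}\dfrac{4}{3} - 1 \right) = \dfrac{1}{2}\log_{2}\dfrac{4}{3} \approx 0.208 \text{ bits}, $$

the average number of extra bits incurred by describing $X$ with $q$ instead of $p$.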

Properties

  1. Non-symmetry $$ D(p \| q) \ne D(q \| p) $$

  2. Non-negativity $$ D(p \| q) \ge 0 $$ Equality holds if and only if $p = q$.
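Both properties can be checked numerically. The following sketch assumes SciPy is available (`scipy.stats.entropy` returns the relative entropy when a second distribution is supplied); the distributions are illustrative:

```python
from scipy.stats import entropy  # entropy(p, q, base=2) gives D(p || q) in bits

p = [0.5, 0.5]
q = [0.25, 0.75]

d_pq = entropy(p, q, base=2)  # about 0.2075 bits
d_qp = entropy(q, p, base=2)  # about 0.1887 bits

print(d_pq, d_qp)               # the two directions differ: non-symmetry
print(d_pq >= 0 and d_qp >= 0)  # True: non-negativity
```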

Proof

2.

By definition, if $p=q$ then $D(p \| q) = 0$, so let’s consider $p \ne q$.

$$ \begin{align*} -D(p \| q) &= \sum p(x) \log_{2} \dfrac{q(x)}{p(x)} \\ &\le \log_{2} \left( \sum p(x) \dfrac{q(x)}{p(x)} \right) \\ &= \log_{2} \left( \sum q(x) \right) \\ &= \log_{2} 1 \\ &= 0 \end{align*} $$

The inequality follows from Jensen's inequality, since the logarithm is a concave function.

Jensen’s Inequality

If $f$ is a concave function, then for $\lambda_{k} \ge 0$ with $\sum_{k=1}^{n} \lambda_{k} = 1$, the following holds:

$$ f\left( \sum\limits_{k=1}^{n}\lambda_{k}x_{k} \right) \ge \sum\limits_{k=1}^{n} \lambda_{k} f(x_{k}) $$

Therefore, multiplying both sides by $-1$ gives:

$$ 0 \le D(p \| q) $$
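The Jensen step in this proof can also be verified numerically. The following sketch (assuming NumPy; the random distributions are illustrative) checks that $\sum p(x) \log_{2}\frac{q(x)}{p(x)} \le \log_{2}\left(\sum p(x)\frac{q(x)}{p(x)}\right) = 0$, i.e. $-D(p \| q) \le 0$:

```python
import numpy as np

rng = np.random.default_rng(0)

for _ in range(5):
    # Two strictly positive probability vectors drawn at random.
    p = rng.dirichlet(np.ones(4))
    q = rng.dirichlet(np.ones(4))
    lhs = np.sum(p * np.log2(q / p))    # -D(p || q)
    rhs = np.log2(np.sum(p * (q / p)))  # log2(sum q) = log2(1) = 0
    assert lhs <= rhs + 1e-12
    print(f"-D(p||q) = {lhs:.4f} <= {rhs:.4f}")
```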

See Also