Gibbs' Representation of Entropy
Formulas
Let $P_{i}$ denote the probability that the system is in macrostate $i$. The measured entropy $S$ of this system can then be expressed as follows: $$ S = - k_{B} \sum_{i} P_{i} \ln P_{i} $$ Here, $k_{B}$ is the Boltzmann constant.
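To make the formula concrete, here is a minimal numerical sketch; the helper name `gibbs_entropy` and the three-state distribution are illustrative choices, not from the text.

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant in J/K

def gibbs_entropy(P):
    """Gibbs entropy S = -k_B * sum_i P_i ln P_i for a probability vector P."""
    P = np.asarray(P, dtype=float)
    P = P[P > 0]  # terms with P_i = 0 contribute nothing to the sum
    return -k_B * np.sum(P * np.log(P))

# Example: a hypothetical three-macrostate system
print(gibbs_entropy([0.5, 0.3, 0.2]))  # ~1.42e-23 J/K
```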
Explanation
Shannon Entropy: When the probability mass function of a discrete random variable $X$ is $p(x)$, the entropy of $X$ is expressed as follows. $$ H(X) := - \sum p(x) \log_{2} p(x) $$
Gibbs’ expression for entropy takes essentially the same form as the Shannon entropy used in probability theory and information theory, differing only in the constant $k_{B}$ and the base of the logarithm.
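As a quick sanity check of this correspondence, the sketch below (with an illustrative distribution) confirms that the two quantities differ only by the constant factor $k_{B} \ln 2$ when the Shannon entropy is taken in base 2.

```python
import numpy as np

k_B = 1.380649e-23  # J/K

P = np.array([0.5, 0.25, 0.25])  # illustrative probability mass function

H = -np.sum(P * np.log2(P))       # Shannon entropy in bits
S = -k_B * np.sum(P * np.log(P))  # Gibbs entropy in J/K

# The two differ only by the constant factor k_B ln 2
print(np.isclose(S, k_B * np.log(2) * H))  # True
```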
Derivation 1
Part 1.
First law of thermodynamics: $$ d U = \delta Q + \delta W $$
Substituting $\delta Q = T dS$ (from the definition of entropy) and $\delta W= - p dV$ (the reversible work done on the system), we get the following equation, which is another form of the first law of thermodynamics. $$ dU = T dS - p dV $$
Considering the total differential of $U = U(S, V)$, we have: $$ dU = {{ \partial U } \over { \partial S }} dS + {{ \partial U } \over { \partial V }} dV $$ Comparing the coefficients of $dS$ in the two expressions for $dU$, we can see that the following holds. $$ T = {{ \partial U } \over { \partial S }} $$

Definition of temperature: $$ {{1 } \over {k_{B} T}} : = {{ d \ln ( \Omega ) } \over {d E }} $$
Since $T = \dfrac{ \partial U }{ \partial S }$, the definition of temperature gives $\dfrac{1}{k_{B}} \dfrac{ \partial S }{ \partial U } = \dfrac{ d \ln ( \Omega ) }{ d E }$. Identifying the energy as $U = E$ and integrating this differential equation (with the integration constant taken to be zero) yields: $$ S = k_{B} \ln \Omega $$ Here, $\Omega$ is the number of microstates accessible at energy $U$.
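As a consistency check between this result and the formula stated at the top, the Gibbs expression reduces to $S = k_{B} \ln \Omega$ when all $\Omega$ states are equally probable, i.e. $P_{i} = 1/\Omega$. A minimal sketch with an illustrative value of $\Omega$:

```python
import numpy as np

k_B = 1.380649e-23  # J/K

Omega = 1000                     # number of equally likely states (illustrative)
P = np.full(Omega, 1.0 / Omega)  # uniform distribution P_i = 1/Omega

S_gibbs = -k_B * np.sum(P * np.log(P))  # Gibbs formula
S_boltzmann = k_B * np.log(Omega)       # Boltzmann form from Part 1

print(np.isclose(S_gibbs, S_boltzmann))  # True
```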
Part 2. $S_{\text{total}} = S + S_{\text{micro}}$
According to Part 1, if the number of states we can easily observe and distinguish is $X$, then $S$ can be expressed as: $$ S = k_{B} \ln X $$ If each of these $X$ states is composed of the same number $Y$ of microstates, then, although the microstates cannot be observed directly, their entropy can still be expressed as $S_{\text{micro}} = k_{B} \ln Y$. Meanwhile, the entire system actually has $X \times Y = XY$ states, so $S_{\text{total}} = k_{B} \ln XY$. Written out, the total entropy is the sum of the observed entropy and the entropy corresponding to the microstates. $$ \begin{align*} S_{\text{total}} =& k_{B} \ln XY \\ =& k_{B} \ln X + k_{B} \ln Y \\ =& S + S_{\text{micro}} \end{align*} $$
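Numerically, this additivity is just $\ln (XY) = \ln X + \ln Y$; the counts $X$ and $Y$ below are illustrative.

```python
import numpy as np

k_B = 1.380649e-23  # J/K

X = 6    # observable (macro) states, illustrative
Y = 50   # microstates per observable state, illustrative

S = k_B * np.log(X)
S_micro = k_B * np.log(Y)
S_total = k_B * np.log(X * Y)

# ln(XY) = ln X + ln Y, so the entropies add
print(np.isclose(S_total, S + S_micro))  # True
```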
Part 3. $P_{i}$
Let the total number of possible microstates in the system be $N$, and the number of microstates corresponding to macrostate $i$ be $n_{i}$, so that $\sum \limits_{i} n_{i} = N$. Then the probability that the system is in macrostate $i$ is as follows: $$ P_{i} := {{n_{i}} \over {N}} $$
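A tiny sketch of this bookkeeping, with illustrative microstate counts:

```python
import numpy as np

# Illustrative microstate counts n_i for three macrostates
n = np.array([10, 30, 60])
N = n.sum()        # total number of microstates, here 100

P = n / N          # P_i = n_i / N
print(P, P.sum())  # [0.1 0.3 0.6] 1.0
```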
Part 4.
However, since the microscopic entropy cannot be pinned down directly (it is $S_{i} = k_{B} \ln n_{i}$ when the system is in macrostate $i$), we take its probabilistic expected value: $$ S_{\text{micro} } = \left< S_{i} \right> = \sum_{i} P_{i} S_{i} = \sum_{i} P_{i} k_{B} \ln n_{i} $$ Meanwhile, the entropy of the entire system can simply be represented as $S_{\text{total} } = k_{B} \ln N$, and since $S_{\text{total}} = S + S_{\text{micro}}$ by Part 2, $$ S = S_{\text{total} } - S_{\text{micro} } = k_{B} \left( \ln N - \sum_{i} P_{i} \ln n_{i} \right) $$ At this point, because $\sum_{i} P_{i} = 1$ implies $\ln N = \sum_{i} P_{i} \ln N$, the following holds true: $$ \ln N - \sum_{i} P_{i} \ln n_{i} = \sum_{i} P_{i} ( \ln N - \ln n_{i} ) = - \sum_{i} P_{i} \ln {{n_{i}} \over {N}} = - \sum_{i} P_{i} \ln P_{i} $$ To summarize, we obtain the following formula: $$ S = - k_{B} \sum_{i} P_{i} \ln P_{i} $$
■
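The algebra in Part 4 can also be checked numerically; the microstate counts below are illustrative.

```python
import numpy as np

k_B = 1.380649e-23  # J/K

n = np.array([10, 30, 60])  # illustrative microstate counts per macrostate
N = n.sum()
P = n / N

S_via_counts = k_B * (np.log(N) - np.sum(P * np.log(n)))  # S_total - S_micro
S_gibbs = -k_B * np.sum(P * np.log(P))                    # Gibbs formula

print(np.isclose(S_via_counts, S_gibbs))  # True
```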
Stephen J. Blundell and Katherine M. Blundell (2nd Edition, 2014): pp. 150–152. ↩︎