

Gibbs' Representation of Entropy

Formulas

Given that the probability of the system's macrostate being state $i$ is $P_{i}$, the measured entropy $S$ of this system can be expressed as follows:
$$ S = - k_{B} \sum_{i} P_{i} \ln P_{i} $$
Here, $k_{B}$ is the Boltzmann constant.
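
For a concrete feel of the formula, here is a minimal Python sketch (not from the source text); the function name `gibbs_entropy` and the two-level probabilities are illustrative assumptions.

```python
import numpy as np
from scipy.constants import Boltzmann as k_B  # k_B ≈ 1.380649e-23 J/K

def gibbs_entropy(P):
    """Gibbs entropy S = -k_B * sum_i P_i ln P_i for macrostate probabilities P."""
    P = np.asarray(P, dtype=float)
    P = P[P > 0]  # a term with P_i = 0 contributes nothing (0 ln 0 -> 0)
    return -k_B * np.sum(P * np.log(P))

# Example: a hypothetical two-state system occupied with probabilities 0.75 and 0.25
print(gibbs_entropy([0.75, 0.25]))  # ≈ 7.76e-24 J/K
```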

Explanation

Shannon Entropy: When the probability mass function of a discrete random variable $X$ is $p(x)$, the entropy of $X$ is expressed as follows.
$$ H(X) := - \sum p(x) \log_{2} p(x) $$

Gibbs’ expression of entropy takes the same form as the Shannon entropy defined in probability theory and information theory.
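
Concretely, since $\log_{2} x = \ln x / \ln 2$, the two expressions agree up to the unit-carrying factor $k_{B} \ln 2$: for the same distribution, $S = k_{B} \ln 2 \cdot H(X)$. A short sketch illustrating this (the distribution below is just an example):

```python
import numpy as np
from scipy.constants import Boltzmann as k_B

p = np.array([0.5, 0.25, 0.125, 0.125])  # example probability distribution

H = -np.sum(p * np.log2(p))       # Shannon entropy, in bits
S = -k_B * np.sum(p * np.log(p))  # Gibbs entropy, in J/K

print(H)                      # 1.75 bits
print(S / (k_B * np.log(2)))  # 1.75 again: S = k_B ln 2 * H
```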

Derivation 1

  • Part 1.

    First law of thermodynamics:
    $$ dU = \delta Q + \delta W $$

    From the definition of entropy, $\delta Q = T dS$, and since $\delta W = - p dV$, we get the following equation, which is another form of the first law of thermodynamics.
    $$ dU = T dS - p dV $$
    Considering the total differential of $U$, we have:
    $$ dU = {{ \partial U } \over { \partial S }} dS + {{ \partial U } \over { \partial V }} dV $$
    Comparing only the $dS$ terms of the two equations, we can see that the following holds.
    $$ T = {{ \partial U } \over { \partial S }} $$

    Definition of temperature:
    $$ {{1 } \over {k_{B} T}} := {{ d \ln ( \Omega ) } \over {d E }} $$

    Since $T = \dfrac{ \partial U }{ \partial S }$, the definition of temperature gives $\dfrac{1}{k_{B}} \dfrac{ \partial S }{ \partial U } = \dfrac{ d \ln ( \Omega ) }{ d E }$. Taking the energy to be $U = E$ and solving this differential equation yields:
    $$ S = k_{B} \ln \Omega $$
    Here, $\Omega$ is the number of states distinguishable at energy $U$.

  • Part 2. $S_{\text{total}} = S + S_{\text{micro}}$

    According to Part 1, if the number of states we can easily observe and distinguish is $X$, then $S$ can be expressed as:
    $$ S = k_{B} \ln X $$
    If each of these $X$ states is equally composed of $Y$ microstates, then, while it is impossible to observe the microstates directly, their entropy can still be written as $S_{\text{micro}} = k_{B} \ln Y$. Meanwhile, the entire system actually has $X \times Y = XY$ states, so $S_{\text{total}} = k_{B} \ln XY$. Written out, this is the sum of the observed entropy and the entropy corresponding to the microstates.
    $$ \begin{align*} S_{\text{total}} =& k_{B} \ln XY \\ =& k_{B} \ln X + k_{B} \ln Y \\ =& S + S_{\text{micro}} \end{align*} $$

  • Part 3. $P_{i}$

    Let the total number of possible microstates of the system be $N$, and let the number of microstates corresponding to macrostate $i$ be $n_{i}$, so that $\sum \limits_{i} n_{i} = N$. Then the probability that the macrostate is in state $i$ is:
    $$ P_{i} := {{n_{i}} \over {N}} $$

  • Part 4.

    However, since we cannot easily calculate the entropy of a microstate, we take its probabilistic expected value:
    $$ S_{\text{micro}} = \left< S_{i} \right> = \sum_{i} P_{i} S_{i} = \sum_{i} P_{i} k_{B} \ln n_{i} $$
    Meanwhile, the entropy of the entire system can simply be written as $S_{\text{total}} = k_{B} \ln N$, and since $S_{\text{total}} = S + S_{\text{micro}}$ in Part 2,
    $$ S = S_{\text{total}} - S_{\text{micro}} = k_{B} \left( \ln N - \sum_{i} P_{i} \ln n_{i} \right) $$
    At this point, because $\ln N = \sum_{i} P_{i} \ln N$, the following holds:
    $$ \ln N - \sum_{i} P_{i} \ln n_{i} = \sum_{i} P_{i} ( \ln N - \ln n_{i} ) = - \sum_{i} P_{i} \ln P_{i} $$
    To summarize, we obtain the following formula:
    $$ S = - k_{B} \sum_{i} P_{i} \ln P_{i} $$
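
The bookkeeping in Parts 2~4 can be checked numerically. The sketch below is only an illustration: the microstate counts $n_{i}$ are made-up numbers, and the final check confirms that for equal probabilities the Gibbs formula reduces to the Boltzmann form $S = k_{B} \ln \Omega$ from Part 1.

```python
import numpy as np
from scipy.constants import Boltzmann as k_B

# Hypothetical microstate counts n_i for each macrostate i (illustrative only)
n = np.array([4.0, 6.0, 10.0])
N = n.sum()      # total number of microstates
P = n / N        # P_i = n_i / N  (Part 3)

S_total = k_B * np.log(N)               # S_total = k_B ln N
S_micro = np.sum(P * k_B * np.log(n))   # <S_i> = sum_i P_i k_B ln n_i  (Part 4)
S_gibbs = -k_B * np.sum(P * np.log(P))  # Gibbs formula

print(np.isclose(S_total - S_micro, S_gibbs))  # True: S = S_total - S_micro

# Uniform case: P_i = 1/Omega over Omega states recovers S = k_B ln Omega (Part 1)
Omega = 8
P_uniform = np.full(Omega, 1 / Omega)
print(np.isclose(-k_B * np.sum(P_uniform * np.log(P_uniform)), k_B * np.log(Omega)))  # True
```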


  1. Stephen J. Blundell and Katherine M. Blundell, Concepts in Thermal Physics (2nd Edition, 2014): p150~152. ↩︎