Classical Statistics, also known as Classical Statistical Mechanics, is a branch of physics that deals with the statistical behavior of systems composed of a large number of particles. It is based on the laws of classical mechanics, particularly Newtonian mechanics, and is applicable when the effects of quantum mechanics are negligible.
The central idea of classical statistics is to use statistical methods to describe the macroscopic behavior of a system by averaging over the microscopic states of its constituents. Each particle in the system follows deterministic laws, but due to the large number of particles, it is practically impossible to track all their individual states. Therefore, a probabilistic approach is adopted.
Basic Postulates of Statistical Mechanics
The fundamental framework of statistical mechanics is built upon the following basic postulates:
1. Postulate of Equal A Priori Probability
In an isolated system in equilibrium, all accessible microstates are equally probable. This means if a system can exist in several configurations (microstates) with the same total energy, then it has no preference for any specific one of them.
2. Postulate of Ergodicity
Over a long period of time, the time average of a physical quantity for a system in equilibrium is equal to the ensemble average. This ensures that statistical methods can be applied to physical systems.
3. Postulate of Statistical Independence
The probability of the whole system being in a certain state is the product of the probabilities of its parts, provided the parts are non-interacting. This leads to the multiplicative nature of the partition function.
4. Microcanonical, Canonical, and Grand Canonical Ensembles
Depending on the physical constraints, different ensembles are used:
Microcanonical ensemble: Fixed energy, volume, and number of particles (E, V, N).
Canonical ensemble: Fixed temperature, volume, and number of particles (T, V, N).
Grand canonical ensemble: Fixed temperature, volume, and chemical potential (T, V, μ).
5. Additivity of Entropy
The total entropy of a composite system is the sum of the entropies of the subsystems. Entropy is an extensive property.
Postulates of Classical Statistical Mechanics
The fundamental assumptions of classical statistical mechanics include:
Each possible microstate of the system is equally probable if the system is isolated (microcanonical ensemble).
The system obeys classical mechanics: positions and momenta are continuous variables.
The system evolves in time according to Liouville’s theorem, which ensures conservation of phase space density.
Phase Space
A system with \( N \) particles has a phase space of \( 6N \) dimensions (3 for position and 3 for momentum per particle). A point in this space represents a complete microstate of the system.
Probability Distribution Function
The probability density \( \rho(q, p, t) \) in phase space describes the likelihood of the system being in a particular microstate. The evolution of this distribution over time is governed by Liouville's equation:
\[
\frac{\partial \rho}{\partial t} + \{\rho, H\} = 0
\]
Classical statistics can be used to derive thermodynamic properties such as pressure, temperature, and entropy for systems like ideal gases. It also forms the basis for concepts like:
Equipartition Theorem
Maxwell-Boltzmann Distribution
Microcanonical, Canonical, and Grand Canonical Ensembles
However, classical statistics fails at low temperatures and high densities, where quantum effects become important, requiring the use of quantum statistical mechanics instead.
Liouville’s Theorem states that in Hamiltonian mechanics, the density of phase points in phase space is conserved along the trajectories of the system. In other words, the flow of the ensemble of systems through phase space is incompressible.
If \( \rho(q, p, t) \) is the density of systems in phase space, then Liouville's theorem says:
\[
\frac{d\rho}{dt} = 0
\]
Consider a system with \( N \) degrees of freedom, having generalized coordinates \( q_i \) and conjugate momenta \( p_i \). The phase space is \( 2N \)-dimensional. Let \( \rho(q, p, t) \) be the density function of representative points in phase space.
The total time derivative of \( \rho \) along the trajectory of the system is:
\[
\frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + \sum_{i=1}^{N}\left( \frac{\partial \rho}{\partial q_i}\dot{q}_i + \frac{\partial \rho}{\partial p_i}\dot{p}_i \right)
\]
Conservation of representative points gives the continuity equation in phase space,
\[
\frac{\partial \rho}{\partial t} + \sum_{i=1}^{N}\left[ \frac{\partial}{\partial q_i}(\rho \dot{q}_i) + \frac{\partial}{\partial p_i}(\rho \dot{p}_i) \right] = 0 ,
\]
and Hamilton's equations \( \dot{q}_i = \partial H/\partial p_i \), \( \dot{p}_i = -\partial H/\partial q_i \) make the phase-space flow divergence-free, \( \sum_i \left( \partial \dot{q}_i/\partial q_i + \partial \dot{p}_i/\partial p_i \right) = 0 \). Combining the two results gives
\[
\frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + \{\rho, H\} = 0 .
\]
This completes the proof. The density \( \rho \) of representative points in phase space remains constant as the system evolves — which is Liouville’s Theorem. It is fundamental to statistical mechanics and ensures conservation of information in phase space.
A Microcanonical Ensemble is a collection of a large number of isolated systems, each having the same fixed values of:
Energy \( E \)
Volume \( V \)
Number of particles \( N \)
These systems are completely isolated from their surroundings, meaning they do not exchange energy or particles. This ensemble is the most fundamental and simplest of all statistical ensembles and is governed by the laws of classical or quantum mechanics.
Basic Assumptions
- All accessible microstates of the system consistent with the given \( E, V, N \) are equally probable.
- The system is in equilibrium.
Phase Space and Density of States
In classical statistical mechanics, the number of accessible microstates within a small energy range \( [E, E + \delta E] \) is given by the phase-space volume of that energy shell divided by \( h^{3N} \) (with an additional factor \( 1/N! \) for identical particles):
\[
\Omega(E) = \frac{1}{N!\, h^{3N}} \int_{E \,\le\, H(q,p) \,\le\, E+\delta E} d^{3N}q \; d^{3N}p
\]
Key Features
- Isolated system: no exchange of energy or particles with the surroundings.
- Fixed variables: energy \( E \), volume \( V \), and particle number \( N \) are constant.
- Equal probability: all accessible microstates are equally probable.
- Additive entropy: for independent systems, the total entropy is the sum of the individual entropies.
- Foundation for other ensembles: the canonical and grand canonical ensembles are derived from the microcanonical ensemble by allowing limited exchanges.
Applications
- Studying ideal gases in isolated containers
- Understanding entropy and fundamental thermodynamic relations
- Forming the basis for the derivation of canonical and grand canonical ensembles
State and Prove the Law of Equipartition of Energy
The kinetic energy of a dynamical system containing a large number of particles in thermal equilibrium is equally divided among all degrees of freedom. The average energy associated with each degree of freedom is:
\[
\frac{1}{2}kT
\]
Proof:
Consider one quadratic degree of freedom with energy \( \epsilon = a x^{2} \) (for example \( x = p \) with \( a = 1/2m \)). Since the molecules are similar and distinguishable, they obey the Maxwell-Boltzmann distribution, so the mean energy of this degree of freedom is
\[
\langle \epsilon \rangle
= \frac{\int_{-\infty}^{\infty} a x^{2} e^{-a x^{2}/kT}\, dx}{\int_{-\infty}^{\infty} e^{-a x^{2}/kT}\, dx}
= \frac{\tfrac{1}{2}\sqrt{\pi}\,(kT/a)^{3/2}\, a}{\sqrt{\pi kT/a}}
= \frac{1}{2} k T .
\]
A one-dimensional harmonic oscillator has two such quadratic terms (kinetic and potential), so its mean energy is \( kT \).
Hence, the mean energy of a harmonic oscillator in classical statistical mechanics is \( kT \), with \( \frac{1}{2}kT \) from each quadratic degree of freedom.
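A quick numerical sketch of this result follows (not part of the original derivation; it assumes units in which \( m = \omega = k_B = 1 \), so the prediction is \( \langle E \rangle = T \)):

```python
# Canonical average of a 1-D classical harmonic oscillator, H = p^2/2 + q^2/2,
# computed on a phase-space grid. Equipartition predicts <E> = k*T (= T here).
import numpy as np

def mean_energy(T, n=1201, cutoff=30.0):
    """Average energy <H> in the canonical ensemble at temperature T."""
    q = np.linspace(-cutoff, cutoff, n)
    p = np.linspace(-cutoff, cutoff, n)
    Q, P = np.meshgrid(q, p)
    E = 0.5 * P**2 + 0.5 * Q**2      # Hamiltonian on the grid
    w = np.exp(-E / T)               # Boltzmann weight
    return (E * w).sum() / w.sum()   # ratio of phase-space averages

for T in (0.5, 1.0, 2.0):
    print(T, mean_energy(T))         # each prints approximately T
```

The computed averages should match \( T \) to within the accuracy of the grid, i.e. \( \tfrac{1}{2}kT \) per quadratic term.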
Thermodynamic Properties in Microcanonical Ensemble
\[
\begin{aligned}
&\text{1. Partition Function: } W = \Delta \Gamma \\
&\text{2. Entropy: } S = k \ln W \\
\quad &= N k \ln \left(\dfrac{V}{N}\left(\dfrac{2 \pi m k T}{h^{2}}\right)^{3 / 2}\right)+\dfrac{5}{2} N k \\
&\text{3. Energy: } E=\dfrac{3 h^{2} N^{5 / 3}}{4 \pi m V^{2 / 3}} e^{\left(\dfrac{2 S}{3 N k}-\dfrac{5}{3}\right)} \\
&\text{4. Temperature: } T=\left(\dfrac{\partial E}{\partial S}\right)_{N,V}=\dfrac{2 E}{3 N k} \\
&\text{5. Average Energy: } \langle E\rangle=\dfrac{E}{N}=\dfrac{3}{2} k T \\
&\text{6. Specific Heat at Constant Volume:} \\
&C_{V}=\left(\dfrac{\partial E}{\partial T}\right)_{V}=\dfrac{\partial}{\partial T}\left(\dfrac{3}{2} N k T\right)=\dfrac{3}{2} N k
\end{aligned}
\]
Total Energy:
\[
E = NkT^2 \frac{\partial}{\partial T}(\ln Z)
\]
Average Energy:
\[
\langle E \rangle = kT^2 \frac{\partial}{\partial T}(\ln Z)
\]
Gibbs Free Energy:
\[
G = R T - N k T \ln Z
\]
Enthalpy:
\[
\begin{aligned}
H &= U + P V \\
&= E + R T \\
&= N\langle E\rangle + R T \\
&= N k T^2 \frac{\partial}{\partial T}(\ln Z) + R T
\end{aligned}
\]
Derive the statistical expression of entropy for a thermodynamic system ( \(S=k \ln W\))
Derivation : We have
\[
\begin{aligned}
&\Delta S = \int \frac{dQ}{T} \qquad \text{(reversible isothermal expansion of an ideal gas)} \\
&= \frac{\int P\, dV}{T}\\
&= \frac{\int \frac{nRT}{V}\, dV}{T}\\
&= nR \int \frac{1}{V}\, dV\\
&= nR \ln \left( \frac{V_2}{V_1} \right)\\
&= n k N_A \ln \left( \frac{V_2}{V_1} \right) \\
&= N k \ln \left( \frac{V_2}{V_1} \right)\\
&= k \ln \left( \frac{V_2}{V_1} \right)^N\\
&= k \ln \left( \frac{V_2^N}{V_1^N} \right)\\
&= k \ln \left( \frac{W_2}{W_1} \right)\\
\end{aligned}\]
(Since thermodynamic probability is proportional to volume available)
\[
= k \ln W_2 - k \ln W_1
\]
\[
\Rightarrow S_2 - S_1 = k \ln W_2 - k \ln W_1
\]
\[
\Rightarrow S = k \ln W
\]
Prove that $S_{A}+S_{B}=S_{AB}$, i.e. prove that entropy is additive in nature.
Proof.
Consider a vessel divided into two equal halves, A and B, in which there are two different gases at the same temperature \( T \) and pressure \( P \). Let \( W_A \) and \( W_B \) be the thermodynamic probabilities of the two halves.
Entropy of the first half A:
\[\begin{equation} S_A = k \ln W_A
\end{equation}\]
Entropy of the Second half B:
\[
\begin{equation} S_B = k \ln W_B
\end{equation}\]
Total entropy of (A + B):
\[
\begin{aligned}
S_{AB} &= k \ln W_{AB} \\
&= k \ln (W_A W_B)\\
&= k \ln W_A + k \ln W_B \\
&= S_A + S_B
\end{aligned}
\]
Thus, entropy of a system is the sum of the entropies of its subsystems.
That’s why it is an extensive parameter like volume, area, length, and internal energy.
What is Gibbs' Paradox? How is it resolved by Gibbs?
Gibbs' Paradox arises in classical statistical mechanics, where the entropy appears to increase upon mixing two identical gases, violating the expected additive nature of entropy. This paradox occurs due to the incorrect assumption of distinguishability of identical particles in classical statistics.
Resolution
The paradox is resolved using quantum statistics, which treats identical particles as indistinguishable. The overcounting in classical Boltzmann statistics is corrected by dividing the number of microstates by \( n! \), where \( n \) is the number of molecules.
Explanation
Consider two systems A and B, each at the same temperature \( T \). Let the systems be characterized by:
\[
(n, V, T, m) \quad \text{and} \quad (n, V, T, m)
\]
where:
\(n\) : Number of molecules,
\(V\) : Volume,
\(m\) : Mass of each molecule,
\(T\) : Temperature
Assume the partition separating systems A and B is both insulating and impermeable.
Then the entropy of the combined system (A + B) is:
\[
\begin{aligned}
S_{A+B} &= S_A + S_B \\
&= \left[nk \ln \left( \frac{V}{h^3}(2\pi m k T)^{3/2} \right) + \frac{3}{2} nk \right] \\
&\quad + \left[nk \ln \left( \frac{V}{h^3}(2\pi m k T)^{3/2} \right) + \frac{3}{2} nk \right] \\
&= 2 \left[nk \ln \left( \frac{V}{h^3}(2\pi m k T)^{3/2} \right) + \frac{3}{2} nk \right]
\end{aligned}
\]
Now, if we remove the partition and allow the gases to mix (even though they are identical), then:
Total number of molecules becomes \( 2n \),
Total volume becomes \( 2V \).
Then, the entropy becomes:
\[
\begin{aligned}
S_{AB} &= (2n)k \ln \left( \frac{2V}{h^3}(2\pi m k T)^{3/2} \right) + \frac{3}{2}(2n)k \\
&= 2 \left[ nk \ln \left( \frac{V}{h^3}(2\pi m k T)^{3/2} \right) + \frac{3}{2} nk \right] + 2nk \ln 2
\end{aligned}
\]
Hence it is observed that the entropy before and after removal of the partition differs by \( 2 n k \ln 2 \), violating the additive nature of entropy even though the gases are identical. This artificial increase in entropy upon mixing identical gases is the paradox. Quantum mechanics resolves it by acknowledging the indistinguishability of the particles and dividing the number of microstates by \( n! \), which cancels the extra \( \ln 2 \) term, as shown below.
Removal of Gibbs' Paradox
The partition function for one molecule is given by:
\[
Z = \frac{V}{h^{3}} (2 \pi m k T)^{3/2}
\]
The partition function for \( n \) molecules (assuming distinguishable particles) is:
\[
Z_n = \left( \frac{V}{h^{3}} (2 \pi m k T)^{3/2} \right)^n \]
\[= \frac{V^n}{h^{3n}} (2 \pi m k T)^{3n/2}
\]
However, since the molecules are indistinguishable, the corrected partition function should be:
\[
Z_n = \frac{1}{n!} \cdot \frac{V^n}{h^{3n}} (2 \pi m k T)^{3n/2}
\]
Now, the correct entropy is:
\[
\begin{aligned}
S &= k \ln Z_n + \frac{3}{2} n k \\
&= k \ln \left( \frac{V^n}{n! h^{3n}} (2 \pi m k T)^{3n/2} \right) + \frac{3}{2} n k \\
&= k \ln \left( \frac{V^n}{h^{3n}} (2 \pi m k T)^{3n/2} \right) - k \ln n! + \frac{3}{2} n k \\
&\approx k \ln \left( \frac{V^n}{h^{3n}} (2 \pi m k T)^{3n/2} \right) - n k \ln n + n k + \frac{3}{2} n k \\
&= k \ln \left( \frac{V^n}{n^n h^{3n}} (2 \pi m k T)^{3n/2} \right) + \frac{5}{2} n k \\
&= k \ln \left( \left( \frac{V}{n h^3} (2 \pi m k T)^{3/2} \right)^n \right) + \frac{5}{2} n k \\
&= n k \ln \left( \frac{V}{n h^3} (2 \pi m k T)^{3/2} \right) + \frac{5}{2} n k
\end{aligned}
\]
Thus, the final expression for entropy is:
\[
S = n k \ln \left( \frac{V}{n h^3} (2 \pi m k T)^{3/2} \right) + \frac{5}{2} n k \]
or
\[
S = n k \ln \left( \frac{V}{n h^3} (2 \pi m k T)^{3/2} e^{5/2} \right)
\]
This is the famous Sackur–Tetrode equation
Now, using this equation, the entropy of the system (A + B) before removal of the partition is:
\[
\begin{aligned}
S_{A+B} &= S_A + S_B \\
&= 2 \left[ n k \ln \left( \frac{V}{n h^3} (2 \pi m k T)^{3/2} \right) + \frac{5}{2} n k \right]
\end{aligned}
\]
And after removal of the partition, the entropy becomes:
\[
\begin{aligned}
S_{AB} &= (2n) k \ln \left( \frac{2V}{2n h^3} (2 \pi m k T)^{3/2} \right) + \frac{5}{2} (2n) k \\
&= 2 \left[ n k \ln \left( \frac{V}{n h^3} (2 \pi m k T)^{3/2} \right) + \frac{5}{2} n k \right]
\end{aligned}
\]
Conclusion: There is no change in entropy before and after mixing identical gases, which resolves Gibbs' paradox. The paradox vanishes when quantum indistinguishability is properly accounted for: the entropies before and after are equal, confirming the additive nature of entropy.
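As a small numerical illustration of this resolution (an added sketch, not part of the original notes; the helium mass, volume, and particle number are arbitrary illustrative choices), the following compares the entropy of two identical gas samples before and after the partition is removed, with and without the \( 1/n! \) correction:

```python
# Compare the mixing entropy of two identical gas samples using the uncorrected
# (distinguishable) entropy and the Sackur-Tetrode (1/n!-corrected) entropy.
import numpy as np

k = 1.380649e-23      # J/K
h = 6.62607015e-34    # J s
m = 6.64e-27          # kg, helium atom (illustrative choice)
T = 300.0             # K
V = 1e-3              # m^3 per half
n = 1e22              # molecules per half (illustrative choice)

def S_uncorrected(n, V):
    # S = n k ln( V/h^3 (2 pi m k T)^{3/2} ) + 3/2 n k   (distinguishable counting)
    return n * k * np.log(V / h**3 * (2*np.pi*m*k*T)**1.5) + 1.5 * n * k

def S_sackur_tetrode(n, V):
    # S = n k ln( V/(n h^3) (2 pi m k T)^{3/2} ) + 5/2 n k   (with the 1/n! correction)
    return n * k * np.log(V / (n * h**3) * (2*np.pi*m*k*T)**1.5) + 2.5 * n * k

for S in (S_uncorrected, S_sackur_tetrode):
    before = 2 * S(n, V)          # two separate halves
    after  = S(2*n, 2*V)          # partition removed
    print(S.__name__, (after - before) / (2*n*k*np.log(2)))
# prints ~1.0 for the uncorrected entropy (the spurious 2nk ln 2) and ~0.0 for Sackur-Tetrode
```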
What is Nernst's Heat Theorem ?
Ans: This theorem states that at absolute zero temperature the entropy of a system vanishes, i.e. at $\mathrm{T}=0$, $\mathrm{S}=0$. This is also called the third law of thermodynamics. As the temperature decreases, the system settles into its lowest quantum state, for which the statistical weight becomes $W = 1$, so $\ln W = 0$ and hence $S = 0$. This verifies the theorem.
But from the Sackur-Tetrode formula we find that $S$ decreases with decreasing $T$, yet it does not vanish at $T=0$ (the logarithmic term in fact diverges to $-\infty$). Hence the Sackur-Tetrode formula for a perfect gas, though free from Gibbs' paradox, does not satisfy the third law of thermodynamics.
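As a numerical sanity check of the Sackur-Tetrode result at ordinary temperatures (an added sketch, not part of the original notes), the formula can be evaluated for one mole of argon at 298.15 K and 1 atm; the result should be close to the tabulated standard molar entropy of argon, roughly 155 J/(mol K):

```python
# Sackur-Tetrode entropy of one mole of argon at 298.15 K and 1 atm.
import numpy as np

k  = 1.380649e-23        # J/K
h  = 6.62607015e-34      # J s
NA = 6.02214076e23       # 1/mol
m  = 39.948e-3 / NA      # kg, mass of one argon atom
T  = 298.15              # K
P  = 101325.0            # Pa
N  = NA                  # one mole of atoms
V  = N * k * T / P       # ideal-gas molar volume

S = N * k * (np.log(V / (N * h**3) * (2*np.pi*m*k*T)**1.5) + 2.5)
print(S)   # ~154.8 J/(mol K)
```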
Classical Ideal Gas
A classical ideal gas is a theoretical gas composed of many identical point particles that do not interact with each other except through elastic collisions. It follows the laws of classical mechanics (Newtonian mechanics), and its statistical properties can be described using Maxwell-Boltzmann statistics.
Properties of a Classical Ideal Gas:
Particles are identical, point-like, and non-interacting.
Each particle moves in straight lines unless it collides elastically with a wall or another particle.
The gas obeys the ideal gas law: \( PV = NkT \).
Energy is purely kinetic: \( E = \dfrac{1}{2}mv^2 \).
The probability distribution of particle speeds is given by the Maxwell-Boltzmann distribution.
Density of States:
The number of states available to a particle with energy in the range \( E \) to \( E + dE \) is:
\[
g(E)\, dE = \frac{2 \pi V}{h^{3}} (2m)^{3/2} E^{1/2}\, dE
\]
Averaging the energy over the Maxwell-Boltzmann distribution built on this density of states gives \( \langle E \rangle = \frac{3}{2}kT \) per particle, i.e. each translational degree of freedom contributes \( \frac{1}{2}kT \) to the average energy of the system, as predicted by the equipartition theorem.
What is the Partition Function? Write down its properties.
Ans:
\begin{align*}
n_{i}&=g_{i} e^{-\alpha} e^{-E_{i} / k T}\\
&\Rightarrow \sum n_{i}=e^{-\alpha} \sum g_{i} e^{-E_{i} / k T}\\
&\Rightarrow N=e^{-\alpha} \sum g_{i} e^{-E_{i} / k T}\\
&\Rightarrow \dfrac{n_{i}}{N}=\dfrac{g_{i} e^{-\alpha} e^{-E_{i} / k T}}{e^{-\alpha} \sum g_{i} e^{-E_{i} / k T}}\\
& \Rightarrow \dfrac{n_{i}}{N}=\dfrac{g_{i} e^{-E_{i} / k T}}{\sum g_{i} e^{-E_{i} / k T}}\\
& \Rightarrow \dfrac{n_{i}}{N}=\dfrac{g_{i} e^{-E_{i} / k T}}{Z}
\end{align*}
Z is called the partition function of the system. It represents how the N particles are distributed among their energy levels.
Features of Partition Function(Z)
It indicates the mode of distribution of energy among various energy levels.
It is a pure number, hence it is dimensionless.
It can never be zero.
Its lowest value is 1 at absolute zero temperature as molecules stay in the ground state.
It is much larger than 1 at higher temperature as fewer molecules stay in the ground state.
It is also a measure of the extent to which particles may escape from the ground state.
Partition Function for Canonical Ensembles with no degeneracy
\[Z=\sum{{{e}^{-\beta {{E}_{i}}}}}\]
Partition Function for Canonical Ensembles with degeneracy
\[Z=\sum{{{g}_{i}}{{e}^{-\beta {{E}_{i}}}}}\]
Partition Function for Grand Canonical Ensembles
\[ Z=\sum_{N}\sum_{n}{{e}^{-\beta \left( {{E}_{nN}}-\mu N \right)}}\]
Thermodynamic quantities in terms of Partition Function
Relation between Entropy $S$ and Z :
\[
\begin{align*}
S &= k \ln W = k \ln(Z_N e^{E/kT}) = k \ln Z_N + \frac{E}{T} \\
S &= k \ln\left(n! \prod_{i=1}^{\infty} \frac{g_i^{n_i}}{n_i!}\right) \\
&= k\left[n \ln n - n + \sum\left(n_i \ln g_i - n_i \ln n_i + n_i\right)\right] \\
&= k\left[n \ln n + \sum\left(n_i \ln g_i - n_i \ln n_i\right)\right] \\
&= k\left[n \ln n + \sum\left(n_i \ln \frac{g_i}{n_i}\right)\right] \\
&= k\left[n \ln n + \sum\left(n_i \ln \left(\frac{Z}{n} e^{\beta E_i}\right)\right)\right] \\
&= k\left[n \ln n + \sum\left(n_i \ln Z - n_i \ln n + \beta n_i E_i\right)\right] \\
&= k\left[n \ln n + \sum n_i \ln Z - \sum n_i \ln n + \beta \sum n_i E_i\right] \\
&= k[n \ln n + n \ln Z - n \ln n + \beta E] \\
&= kn \ln Z + k \cdot \frac{1}{kT} \cdot E \\
&= kn \ln Z + \frac{E}{T} \\
&= kn \ln Z + \frac{3}{2} nk \qquad \left(\text{using } E = \tfrac{3}{2} n k T\right)
\end{align*}
\]
Helmholtz Free Energy F
$$
\begin{aligned}
F & =E-T S \\
& =E-T\left(n k \ln Z+\frac{E}{T}\right)\\
F&=-n k T \ln Z
\end{aligned}
$$
Gibbs Free Energy G
$$
\begin{aligned}
& G=H-T S\\
&=E+PV-T S \\
& =E+R T-T(n k \ln Z+E / T) \\
& =E+R T-n k T \ln Z-E \\
& =R T-n k T \ln Z
\end{aligned}
$$
Enthalpy H
$$
\begin{aligned}
H & =U+P V\\
&=E+R T\\
&=N\langle E\rangle+R T \\
& =N k T^{2} \dfrac{\partial}{\partial T}(\ln Z)+R T
\end{aligned}
$$
Pressure of the gas $\mathbf{P}$ :
$$
\begin{aligned}
& \because F=E-T S \\
& \Rightarrow d F=d E-T d S-S d T \\
& \quad \because d Q=T d S=d E+d W=d E+P d V \\
& \Rightarrow d F=d E-(d E+P d V)-S d T \\
& \Rightarrow d F=-P d V-S d T \\
& \Rightarrow P=-\left(\frac{\partial F}{\partial V}\right)_{T} \\
& \Rightarrow P=-\left(\frac{\partial}{\partial V}(-n k T \ln Z)\right)_{T} \\
& \Rightarrow P=n k T\left(\frac{\partial}{\partial V} \ln Z\right)_{T}
\end{aligned}
$$
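A short symbolic cross-check of these relations is sketched below (an addition, assuming the sympy package is available and writing N for the number of molecules n). Using the single-molecule partition function derived in the next subsection, \( Z = \frac{V}{h^{3}}(2\pi m k T)^{3/2} \), and \( F = -NkT\ln Z \), the pressure should reduce to the ideal-gas law and the energy to \( \frac{3}{2}NkT \):

```python
# Symbolic check: P = -(dF/dV)_T and U = N k T^2 d(ln Z)/dT for the ideal gas.
import sympy as sp

V, T, N, m, k, h = sp.symbols('V T N m k h', positive=True)
Z = V / h**3 * (2*sp.pi*m*k*T)**sp.Rational(3, 2)   # single-molecule partition function

F = -N*k*T*sp.log(Z)                                # Helmholtz free energy
P = sp.simplify(-sp.diff(F, V))                     # pressure
U = sp.simplify(N*k*T**2 * sp.diff(sp.log(Z), T))   # internal energy

print(P)   # N*k*T/V   ->  P V = N k T
print(U)   # 3*N*k*T/2
```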
Relation between $\beta$ and temperature $T$ :
$$\begin{aligned}
\because d Q &=d U+d W =d U+P d V\\
\Rightarrow T d S&=d U+P d V\\
\Rightarrow\left(\frac{\partial S}{\partial U}\right)_{V}&=\frac{1}{T}\\
\text{From the constrained maximization of } \ln W :\quad d(\ln W-\beta U)&=0\\
\Rightarrow d(\ln W)&=\beta\, d U\\
\Rightarrow \beta &=\frac{d(\ln W)}{d U}=\frac{1}{k} \frac{d(k \ln W)}{d U}=\frac{1}{k} \frac{d S}{d U} \\
\Rightarrow \beta &=\frac{1}{k}\left(\frac{1}{T}\right)=\frac{1}{k T}
\end{aligned}$$
Prove that the Partition function of a monatomic ideal gas molecule is :
$$
Z=\frac{V}{h^{3}}(2 \pi m k T)^{3 / 2}
$$
Proof :
$$
\begin{aligned}
& Z=\sum_{i} g_{i} e^{-E_{i} / k T} \\
& =\sum_{i} g(E) e^{-E / k T} \\
& =\int_{0}^{\infty} \frac{2 \pi V}{h^{3}}(2 m)^{3 / 2} E^{1 / 2} d E e^{-E / k T} \\
& =\frac{2 \pi V}{h^{3}}(2 m)^{3 / 2} \int_{0}^{\infty} E^{1 / 2} e^{-E / k T} d E \\
& =\frac{2 \pi V}{h^{3}}(2 m)^{3 / 2} \frac{\Gamma(3 / 2)}{(1 / k T)^{1+1 / 2}} \\
& =\frac{2 \pi V}{h^{3}}(2 m)^{3 / 2}(k T)^{3 / 2} \sqrt{\pi} / 2 \\
& Z=\frac{V}{h^{3}}(2 \pi m k T)^{3 / 2}
\end{aligned}
$$
( Partition function of the system containing $\mathbf{N}$ distinct ideal gas molecules $\left.Z_{N}=\frac{V^{N}}{h^{3 N}}(2 \pi m k T)^{3 N / 2}\right)$
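A numerical illustration of this result (an added sketch; the choice of helium in a 1-litre box at 300 K is arbitrary) is given below. The enormous value of \( Z \) reflects the huge number of thermally accessible translational states:

```python
# Single-molecule translational partition function Z = V/h^3 (2 pi m k T)^{3/2}.
import numpy as np

k = 1.380649e-23      # J/K
h = 6.62607015e-34    # J s
m = 6.64e-27          # kg (helium atom)
T = 300.0             # K
V = 1e-3              # m^3 (one litre)

lam = h / np.sqrt(2*np.pi*m*k*T)    # thermal de Broglie wavelength
Z = V / lam**3                      # equals V/h^3 (2 pi m k T)^{3/2}
print(lam, Z)                       # lam ~ 5e-11 m, Z ~ 8e27
```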
Ensembles in Statistical Mechanics
What is an Ensemble?
In statistical mechanics, an ensemble is a large collection of imagined copies of a physical system, each representing a possible microstate.
These systems:
Follow the same physical laws.
Have the same macroscopic constraints (depending on the ensemble type).
Differ in microscopic configurations (positions and momenta).
Ensembles are used to compute average quantities that describe macroscopic behavior.
Why Are Ensembles Important?
🔬 Link between microscopic mechanics and macroscopic thermodynamics.
📊 Let us use probability to deal with uncertainty in microstates.
💡 Help derive laws of thermodynamics.
🧮 Make computations more feasible by replacing exact tracking with statistical averages.
Types of Ensembles
1. Microcanonical Ensemble
Fixed quantities: \( N, V, E \)
Represents isolated systems (no exchange of energy or particles).
Each accessible microstate has equal probability.
2. Canonical Ensemble
Fixed quantities: \( N, V, T \)
System is in thermal contact with a heat bath (energy can fluctuate).
The probability of a state with energy \( E \) is given by:
\[
P(E) \propto e^{-E / (k_B T)}
\]
3. Grand Canonical Ensemble
Fixed quantities: \( \mu, V, T \)
System can exchange both energy and particles with a reservoir.
Both energy and particle number fluctuate.
Summary Table
| Ensemble Type | Fixed Quantities | Fluctuating Quantities | Used For... |
|---|---|---|---|
| Microcanonical | \( N, V, E \) | None | Isolated systems |
| Canonical | \( N, V, T \) | Energy | Systems with thermal contact |
| Grand Canonical | \( \mu, V, T \) | Energy, Particle Number | Systems in contact with heat & matter reservoirs |
Partition Function of Canonical Ensemble
1. Canonical Ensemble Overview
In statistical mechanics, the canonical ensemble describes a system in thermal equilibrium
with a heat reservoir at a fixed temperature \( T \), fixed volume \( V \), and fixed number of particles \( N \).
The system can exchange energy with the reservoir but not particles.
2. Definition of the Partition Function
The central quantity of the canonical ensemble is the partition function, denoted by \( Z \).
For a system with discrete energy levels \( E_i \), the canonical partition function is defined as:
\( Z = \sum_{i} e^{-\beta E_i} \)
where:
\( E_i \) is the energy of the \( i \)-th microstate.
\( \beta = \dfrac{1}{k_B T} \), with \( k_B \) being the Boltzmann constant.
3. Physical Significance
The partition function encodes all thermodynamic information of the system. Once \( Z \) is known,
other thermodynamic quantities can be derived:
Helmholtz free energy:
\( F = -k_B T \ln Z \)
Internal energy:
\( U = -\dfrac{\partial}{\partial \beta} \ln Z \)
Entropy:
\( S = -\left( \dfrac{\partial F}{\partial T} \right)_V = k_B \left( \ln Z + \beta U \right) \)
The probability of the system being in state \( i \) is given by:
\( P_i = \dfrac{e^{-\beta E_i}}{Z} \)
4. Example: Two-Level System
Consider a system with two microstates of energies \( E_0 = 0 \) and \( E_1 = \epsilon \), so that \( Z = 1 + e^{-\beta \epsilon} \). For this system:
\( P_0 = \dfrac{1}{1 + e^{-\beta \epsilon}} \),
\( P_1 = \dfrac{e^{-\beta \epsilon}}{1 + e^{-\beta \epsilon}} \)
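A short numerical sketch of this two-level example follows (added for illustration; the gap \( \epsilon \) is set to 1 in units where \( k_B = 1 \)). At low temperature almost all the weight sits in the ground state, while at high temperature the two levels approach equal occupation:

```python
# Two-level canonical ensemble: Z, level probabilities, and mean energy vs T.
import numpy as np

eps = 1.0                       # energy gap, in units where k_B = 1 (assumption)
for T in (0.1, 1.0, 10.0):
    beta = 1.0 / T
    Z = 1.0 + np.exp(-beta*eps)
    P0 = 1.0 / Z
    P1 = np.exp(-beta*eps) / Z
    U = P1 * eps                # mean energy <E> = sum_i P_i E_i
    print(T, P0, P1, U)
```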
5. Continuous Energy Spectrum
If the energy spectrum is continuous, the sum becomes an integral:
\( Z = \int g(E) e^{-\beta E} \, dE \)
where \( g(E) \) is the density of states.
6. Importance in Thermodynamics
The partition function is fundamental because it serves as a generating function for all thermodynamic properties.
Its knowledge allows us to compute equilibrium behavior, energy distributions, fluctuations, and more.
Discuss the equivalence of the Canonical and Grand Canonical Ensembles.
Ans : In the canonical ensemble only the energy fluctuates, while in the grand canonical ensemble both the energy and the number of particles fluctuate. It is observed that the energy fluctuations in the two ensembles are nearly the same, so the fluctuation in particle number is the major difference. Hence the grand canonical ensemble is equivalent to the canonical ensemble when the fluctuation in density (particle number) in the grand canonical ensemble is very small, i.e. when the density is essentially constant.
Let us consider a system A characterised by temperature T, volume V, and chemical potential $\mu$, all of which are constants. Let it be in contact with a huge heat reservoir B through a conducting and permeable wall which allows exchange of both energy and particles.
Let E and $\mathrm{E}^{\prime}$ be the energies of A and B, and N and $\mathrm{N}^{\prime}$ the numbers of particles in A and B respectively. Then $E+E^{\prime}=E_{0}$ (with $E^{\prime}\gg E$) and $N+N^{\prime}=N_{0}$ (with $N^{\prime}\gg N$), where $E_{0}$ and $N_{0}$ are the total energy and total number of particles of A + B.
The composite system A + B is isolated, so its total energy and total particle number do not change. The microcanonical distribution is therefore applicable to the composite system. Thus:
The probability dW that A and B are in the volumes $d \Gamma_{1}$ and $d \Gamma_{2}$ of the phase space is :
The probability dW that A and B are in the volumes $d \Gamma_{1}$ and $d \Gamma_{2}$ of the phase space is :
$$
\begin{aligned}
d W & =C \delta\left(E+E^{\prime}-E_{0}\right) d \Gamma_{1} d \Gamma_{2} \\
\rho_{n N} & =\int d W=\int C \delta\left(E+E^{\prime}-E_{0}\right) d \Gamma_{1} d \Gamma_{2} \\
&= C \int \delta\left(E_{n, N}+E^{\prime}-E_{0}\right) d \Gamma_{1} d \Gamma_{2} \\
& = C \int \delta\left(E^{\prime}-\left(E_{0}-E_{n, N}\right)\right) d \Gamma_{1} d \Gamma_{2} \\
&= C \int \delta\left(E^{\prime}-\left(E_{0}-E_{n, N}\right)\right) d \Gamma_{2} \\
& =C \int \delta\left(E^{\prime}-\left(E_{0}-E_{n, N}\right)\right) \dfrac{\Delta \Gamma_{2}}{\Delta E^{\prime}} d E^{\prime} \\
&= C \int \delta\left(E^{\prime}-\left(E_{0}-E_{n, N}\right)\right) \dfrac{e^{\dfrac{S_{2}\left(E^{\prime}, N^{\prime}\right)}{k}}}{\Delta E^{\prime}} d E^{\prime}\\
& =C \dfrac{e^{\dfrac{S_{2}\left(E_{0}-E_{n N}, N^{\prime}\right)}{k}}}{\Delta\left(E_{0}-E_{n N}\right)} \\
& =C \dfrac{e^{\dfrac{S_{2}\left(E_{0}-E_{n N}, N_{0}-N\right)}{k}}}{\Delta\left(E_{0}-E_{n N}\right)} \\
& =C \dfrac{e^{\dfrac{S_{2}\left(E_{0}, N_{0}\right)-E_{n N}\left(\dfrac{\partial S_{2}\left(E_{0}, N_{0}\right)}{\partial E_{0}}\right)_{N_{0}}-N\left(\dfrac{\partial S_{2}\left(E_{0}, N_{0}\right)}{\partial N_{0}}\right)_{E_{0}}+\ldots}{k}}}{\Delta\left(E_{0}-E_{n N}\right)} \\
& =C \dfrac{e^{\dfrac{S_{2}\left(E_{0}, N_{0}\right)-E_{n N}\left(\dfrac{\partial S_{2}\left(E_{0}, N_{0}\right)}{\partial E_{0}}\right)_{N_{0}}-N\left(\dfrac{\partial S_{2}\left(E_{0}, N_{0}\right)}{\partial N_{0}}\right)_{E_{0}}+\ldots}{k}}}{\Delta E_{0}} \\
& =C \dfrac{e^{\dfrac{S_{2}\left(E_{0}, N_{0}\right)-E_{n N} \dfrac{1}{T}-N\left(\dfrac{-\mu}{T}\right)+\ldots .}{k}}}{\Delta E_{0}} \\
& =C \dfrac{e^{\dfrac{S_{2}\left(E_{0}, N_{0}\right)}{k}}}{\Delta E_{0}} e^{\dfrac{-E_{n N} \dfrac{1}{T}-N\left(\dfrac{-\mu}{T}\right)+\ldots}{k}} \\
& =C \dfrac{e^{\dfrac{S_{2}\left(E_{0}, N_{0}\right)}{k}}}{\Delta E_{0}} e^{\dfrac{\mu N-E_{n N}}{k T}} \\
& =A e^{\dfrac{\mu N-E_{n N}}{k T}} \\
& \sum_{n} \rho_{n N}=1=A \sum e^{\dfrac{\mu N-E_{n N}}{k T}} \\
& \Rightarrow A=\dfrac{1}{\sum e^{\dfrac{\mu N-E_{n N}}{k T}}} \\
& \Rightarrow \rho_{n N}=\dfrac{e^{\dfrac{\mu N-E_{n N}}{k T}}}{\sum e^{\dfrac{\mu N-E_{n N}}{k T}}} \\
& Z=\sum e^{\dfrac{\mu N-E_{n N}}{k T}}
\end{aligned}
$$
ALTERNATE METHOD :
Consider a thermodynamic system characterised by the variables T, V, and $\mu$ (chemical potential). Let n be the number of members in the ensemble, each with volume V and walls that are permeable and conducting. Let the whole ensemble be immersed in a huge reservoir at temperature T and chemical potential $\mu$ until equilibrium is reached; then place walls around the ensemble which are impermeable to both molecules and heat, and finally remove the ensemble from the reservoir. The ensemble itself is then an isolated supersystem.
The volume of this supersystem is nV and its total energy is E. As the number of molecules in each system is variable, let the average number of molecules per system be $\bar{n}$, so the total number of molecules in the supersystem is $n\bar{n}$.
Since the number of particles in a particular system is variable, the energy levels of a system are functions of the particle number.
Let $n_{i}(n)$ denote the occupation numbers of the supersystem, i.e. the number of systems which contain $n$ particles and are in the particular state of energy $E_{i}(n, V)$.
The number of possible quantum states of the supersystem is :
$$
W=\frac{n!}{\prod_{i, n} n_{i}(n)!}
$$
Subject to the constraints
$\sum_{i, n} n_{i}(n)=n$
$\sum_{i, n} E_{i}(n, V)\, n_{i}(n)=E$
$\sum_{i, n} n\, n_{i}(n)=n\bar{n}$
Of all the sets of occupation numbers, the most probable state is the one with the maximum thermodynamic probability, i.e. the one for which $\ln W$ is a maximum.
Hence, using Lagrange multipliers $\alpha$, $\beta$ and $\gamma$, we find
$$
\begin{aligned}
& : \quad d(\ln W)-d\left(\alpha \sum_{i, n} n_{i}(n)\right)-d\left(\sum_{i, n} \beta E_{i}(n, V) n_{i}(n)\right)-d\left(\gamma \sum_{i, n} n n_{i}(n)\right)=0 \\
& \quad \Rightarrow d\left[(n \log n-n)-\sum_{i, n} n_{i}(n)\left(\ln n_{i}(n)-1\right)-\alpha \sum_{i, n} n_{i}(n)-\beta \sum_{i, n} E_{i}(n, V) n_{i}(n)-\gamma \sum_{i, n} n n_{i}(n)\right]=0 \\
& \quad \Rightarrow 0-d n_{i}(n) \ln n_{i}(n)-\dfrac{n_{i}(n)}{n_{i}(n)} d n_{i}(n)+\sum d n_{i}(n)-\alpha \sum_{i, n} d n_{i}(n)-\beta \sum E_{i}(n, V) d n_{i}(n)-\gamma \sum_{i, n} n d n_{i}(n)=0 \\
& \quad \Rightarrow\left(-\ln n_{i}(n)-\alpha-\beta E_{i}(n, V)-\gamma n\right) d n_{i}(n)=0 \\
& \quad \Rightarrow \ln n_{i}(n)+\alpha+\beta E_{i}(n, V)+\gamma n=0 \\
& \quad \Rightarrow \ln n_{i}(n)=-\alpha-\beta E_{i}(n, V)-\gamma n \\
& \quad \Rightarrow n_{i}(n)=e^{-\alpha-\beta E_{i}(n, V)-\gamma n} \\
& \quad \Rightarrow n_{i}(n)=e^{-\alpha} e^{-\beta E_{i}(n, V)-\gamma n}
\end{aligned}
$$
$$
\begin{align*}
\Rightarrow \sum n_{i}(n)=e^{-\alpha} \sum e^{-\beta E_{i}(n, V)-\gamma n} \\
\Rightarrow e^{-\alpha}=\dfrac{n}{\sum e^{-\beta E_{i}(n, V)-\gamma n}} \\
\Rightarrow n_{i}(n)=\dfrac{n e^{-\beta E_{i}(n, V)-\gamma n}}{\sum e^{-\beta E_{i}(n, V)-\gamma n}} \\
\rho_{n, N}=\dfrac{n_{i}(n)}{n} \\
\Rightarrow \rho_{n, N}=\dfrac{e^{-\beta\left(E_{i}(n, V)-\mu n\right)}}{\sum e^{-\beta\left(E_{i}(n, V)-\mu n\right)}} \qquad \left(\text{identifying } \gamma=-\beta\mu\right) \\
\Rightarrow Z=\sum e^{-\beta\left(E_{i}(n, V)-\mu n\right)}
\end{align*}
$$
Fluctuations in the Density of a Grand Canonical Ensemble
In a Grand Canonical Ensemble, the number of particles \(N\) and energy \(E\) both fluctuate.
Since the density \(\rho = \dfrac{N}{V}\) and the volume \(V\) is fixed, fluctuations in density are equivalent to fluctuations in \(N\).
The average number of particles is:
\[
\left\langle N \right\rangle = \dfrac{\sum_N \sum_n N e^{-\dfrac{E_{nN} - \mu N}{kT}}}{\sum_N \sum_n e^{-\dfrac{E_{nN} - \mu N}{kT}}}
\]
Proof: Consider a system of \(N\) identical and weakly interacting particles contained in a vessel of volume \(V\) at an equilibrium temperature \(T\) and total energy \(U\). The potential energy is zero; the energy is purely kinetic, so this is a monatomic ideal gas. Since the mean separation between the molecules is much larger than the thermal de Broglie wavelength, they may be treated as distinguishable, and any number of particles can occupy any energy level.
Let there be \(l\) states with energies \(E_1, E_2, E_3, \ldots, E_l\) such that:
\(n_1\) particles with energy \(E_1\)
\(n_2\) particles with energy \(E_2\)
\(n_3\) particles with energy \(E_3\)
\(n_l\) particles with energy \(E_l\)
with degeneracies:
\(g_1\) levels in energy \(E_1\)
\(g_2\) levels in energy \(E_2\)
\(g_3\) levels in energy \(E_3\)
Subject to the constraints:
\[
\sum_{i=1}^{l} n_{i} = N, \qquad \sum_{i=1}^{l} n_{i} E_{i} = U,
\]
which means the total number of particles is constant and total energy of the system is constant. The particles can be distributed in a number of ways subject to the constraints and liberty that any number of particles can occupy any number of energy states.
The number of ways is thus the product of number of ways of particle's selection (considering distinguishability) and the number of arrangements among various energy states. \(n_{1}\) particles can be selected from \(N\) particles in \(^{N}C_{n_{1}}\) ways.
The next \(n_{2}\) particles will be selected from remaining \(N-n_{1}\) particles in \(^{N-n_{1}}C_{n_{2}}\) ways.
This way the total selection will be:
\[
W_{\text{selection}} = {}^{N}C_{n_{1}} \times {}^{N-n_{1}}C_{n_{2}} \times \cdots = \frac{N!}{n_{1}!\, n_{2}! \cdots n_{l}!}
\]
The arrangement can be done as follows. The first particle from \(n_{1}\) particles can occupy \(g_{1}\) levels in \(g_{1}\) ways. Similarly, the second particle from \(n_{1}\) particles in \(g_{1}\) ways, the third particle in \(g_{1}\) ways, and so on, as there is no restriction. Hence \(n_{1}\) particles can be arranged in \(g_{1}\) energy levels in \(g_{1}^{n_{1}}\) ways. Similarly, \(n_{2}\) particles in \(g_{2}\) energy levels in \(g_{2}^{n_{2}}\) ways and so on.
Thus, the number of arrangements for \(N\) particles in \(g_{1}, g_{2}, \ldots, g_{l}\) energy levels is:
\[
W_{\text{arrangements}} = {g}_{1}^{n_{1}} {g}_{2}^{n_{2}} {g}_{3}^{n_{3}} \cdots {g}_{l}^{n_{l}}
\]
Thus, the number of ways of distributing \(N\) particles in \(l\) states is:
\[
W = W_{\text{selection}} \times W_{\text{arrangements}}
\]
Thus the thermodynamic probability of MB distribution is:
\begin{align}
W= N! \prod_{1}^{l} \dfrac{g_i^{n_i}}{n_i!}
\end{align}
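The claim that the equilibrium distribution is the one maximizing \( W \) can be checked on a toy example (an added sketch, not from the original notes): \( N = 6 \) particles distributed over three non-degenerate levels of energies 0, 1, 2, with total energy fixed at \( U = 4 \):

```python
# Enumerate all occupation sets (n0, n1, n2) consistent with the constraints and
# find the one that maximizes W = N!/(n0! n1! n2!)  (all g_i = 1 here).
from math import factorial
from itertools import product

N, U = 6, 4
energies = [0, 1, 2]

best = None
for n in product(range(N + 1), repeat=3):
    if sum(n) == N and sum(ni*Ei for ni, Ei in zip(n, energies)) == U:
        W = factorial(N)
        for ni in n:
            W //= factorial(ni)
        if best is None or W > best[1]:
            best = (n, W)

print(best)   # the most probable occupation numbers and their W
```

The winning occupation set, (3, 2, 1), already falls off roughly exponentially with energy, as the Maxwell-Boltzmann form derived next predicts.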
Derivation of Maxwell-Boltzmann Distribution Function
Taking Logarithm of the thermodynamic probability we have:
\[
\begin{aligned}
\ln W= & (N \ln N-N)+\sum n_{i} \ln g_{i}-\sum n_{i} \ln n_{i}+\sum_{1}^{l} n_{i} \\
& =N \ln N+\sum n_{i} \ln g_{i}-\sum n_{i} \ln n_{i} \\
\Rightarrow d(\ln W) & =0+\sum d n_{i} \ln g_{i}-\sum d n_{i} \ln n_{i}-\sum n_{i} \dfrac{1}{n_{i}} d n_{i} \\
& =\sum \ln g_{i} d n_{i}-\sum \ln n_{i} d n_{i}-\sum d n_{i}=\sum \ln g_{i} d n_{i}-\sum \ln n_{i} d n_{i}-0\\
&=\sum \ln \dfrac{g_{i}}{n_{i}} d n_{i}
\end{aligned}\]
For most probable state \(W\) is maximum .
\[
\begin{aligned}
& \Rightarrow d(\ln W)=0 \\
& \Rightarrow d\left(\alpha \sum_{1}^{l} n_{i}\right)=\alpha \sum_{1}^{l} d n_{i}=0 \\
& \Rightarrow d\left(\beta \sum_{1}^{l} n_{i} E_{i}\right)=\beta \sum_{1}^{l} E_{i} d n_{i}=0
\end{aligned}
\]
Combining all these we get
\[
\begin{aligned}
& d(\ln W)=\alpha \sum_{1}^{l} d n_{i}+\beta \sum_{1}^{l} E_{i} d n_{i} \\
& \sum_{1}^{l} \ln \dfrac{g_{i}}{n_{i}} d n_{i}=\sum_{1}^{l}\left(\alpha+\beta E_{i}\right) d n_{i} \\
& \Rightarrow \ln \dfrac{g_{i}}{n_{i}}=\left(\alpha+\beta E_{i}\right) \\
& \Rightarrow \dfrac{g_{i}}{n_{i}}=e^{\alpha+\beta E_{i}} \\
& \Rightarrow n_{i}=\dfrac{g_{i}}{e^{\alpha+\beta E_{i}}} \\
& \Rightarrow \dfrac{n_{i}}{g_{i}}=\dfrac{1}{e^{\alpha+\beta E_{i}}} \\
& f\left(E_{i}\right)=\dfrac{1}{e^{\alpha+\beta E_{i}}}
\end{aligned}
\]
Evaluate $\alpha$ and $\beta$ appearing in the three types of distribution functions as follows :
$$
\begin{array}{ll}
n_{i}=\dfrac{g_{i}}{e^{\left(\alpha+\beta E_{i}\right)}} & \text { ( MB statistics ) } \\
n_{i}=\dfrac{g_{i}}{e^{\left(\alpha+\beta E_{i}\right)}-1} & \text { ( BE statistics ) } \\
n_{i}=\dfrac{g_{i}}{e^{\left(\alpha+\beta E_{i}\right)}+1} & \text { ( FD statistics ) }
\end{array}
$$
As $\alpha$ and $\beta$ do not depend on types of distribution we can take MB distribution function for easy evaluation.
Then
$$
\Rightarrow n_{i}=g_{i} e^{-\alpha} e^{-\beta E_{i}}
$$
$\Rightarrow n(E)=g(E) e^{-\alpha} e^{-\beta E}$, where for a continuous energy spectrum we replace $E_{i}$ by $E$, $g_{i}$ by $g(E)$, and $n_{i}$ by $n(E)$.
Thus, \(g(E)\, dE = 2\pi V\left(\dfrac{2m}{h^2}\right)^{3/2} E^{1/2}\, dE\)
(for particles with no spin; for particles with spin \(\pm 1/2\), multiply by 2).
As we have $n_{i}$ particles occupying energy $\mathrm{E}_{\mathrm{i}}$ in $\mathrm{g}_{\mathrm{i}}$ cells, $n_{i}=g_{i} e^{-\alpha} e^{-E_{i} / k T}$
Thus Total number of particles is
$$
N=\int_{0}^{\infty} n(E) d E
$$
( where $n(E)$ represents number of particles in the energy range $E$ and $E+d E$ )
$$
\begin{aligned}
\Rightarrow N&=\int_{0}^{\infty} g(E) e^{-\alpha} e^{-E / k T} d E\\
& =e^{-\alpha} \int_{0}^{\infty} 2 \pi V\left(\dfrac{2 m}{h^{2}}\right)^{3 / 2} E^{1 / 2} e^{-E / k T} d E \\
& =e^{-\alpha} 2 \pi V\left(\dfrac{2 m}{h^{2}}\right)^{3 / 2} \int_{0}^{\infty} E^{1 / 2} e^{-E / k T} d E \\
& =e^{-\alpha} 2 \pi V\left(\dfrac{2 m}{h^{2}}\right)^{3 / 2}\left(\dfrac{\sqrt{\pi}}{2(1 / k T)^{3 / 2}}\right) \\
& =e^{-\alpha} 2 \pi V\left(\dfrac{2 m}{h^{2}}\right)^{3 / 2}\left(\dfrac{(k T)^{3 / 2} \sqrt{\pi}}{2}\right) \\
& =e^{-\alpha} \pi^{3 / 2} V\left(\dfrac{2 m}{h^{2}}\right)^{3 / 2}(k T)^{3 / 2} \\
& =e^{-\alpha} V\left(\dfrac{2 m \pi k T}{h^{2}}\right)^{3 / 2} \\
& \Rightarrow e^{-\alpha}=\dfrac{N}{V}\left(\dfrac{h^{2}}{2 m \pi k T}\right)^{3 / 2}
\end{aligned}
$$
Relation of $e^{-\alpha}$ with the partition function: comparing with $Z=\dfrac{V}{h^{3}}(2 \pi m k T)^{3 / 2}$, we see that $e^{-\alpha}=\dfrac{N}{Z}$.
Prove that the number of particles in the energy range $E$ and $E+dE$ is :
$n(E) d E=\dfrac{2 N}{\sqrt{\pi}}\left(\dfrac{1}{k T}\right)^{3 / 2} \sqrt{E} e^{-E / k T} d E$\\
Proof :
$$
\begin{aligned}
n(p) d p&=4 \pi N\left(\dfrac{1}{2 \pi m k T}\right)^{3 / 2} p^{2} e^{-p^{2} / 2 m k T} d p\\
& \Rightarrow n(E) d E=4 \pi N\left(\dfrac{1}{2 \pi m k T}\right)^{3 / 2}(2 m E) e^{-E / k T} d(\sqrt{2 m E}) \\
& \Rightarrow n(E) d E=4 \pi N\left(\dfrac{1}{2 \pi m k T}\right)^{3 / 2}(2 m E) e^{-E / k T}(\sqrt{2 m}) \dfrac{1}{2 \sqrt{E}} d E \\
& \Rightarrow n(E) d E=4 \pi N\left(\dfrac{1}{2 \pi m k T}\right)^{3 / 2}(2 m)^{3 / 2} \dfrac{1}{2} E^{1 / 2} e^{-E / k T} d E \\
& \Rightarrow n(E) d E=\dfrac{2 N}{\sqrt{\pi}}\left(\dfrac{1}{k T}\right)^{3 / 2} E^{1 / 2} e^{-E / k T} d E
\end{aligned}
$$
Prove that the number of particles in the range $v$ and $v+d v$ is :
$n(v) d v=4 \pi N\left(\dfrac{m}{2 \pi k T}\right)^{3 / 2} e^{-m v^{2} / 2 k T} v^{2} d v$
Proof:
$$
\begin{aligned}
& n(p) d p=4 \pi N\left(\dfrac{1}{2 \pi m k T}\right)^{3 / 2} p^{2} e^{-p^{2} / 2 m k T} d p \\
& \Rightarrow n(v) d v=4 \pi N\left(\dfrac{1}{2 \pi m k T}\right)^{3 / 2} m^{2} v^{2} e^{-m v^{2} / 2 k T} m d v \\
& \Rightarrow n(v) d v=4 \pi N\left(\dfrac{m}{2 \pi k T}\right)^{3 / 2} e^{-m v^{2} / 2 k T} v^{2} d v
\end{aligned}
$$
Prove that the most probable speed of gas molecules is $v_{m p}=\sqrt{\dfrac{2 k T}{m}}$
Proof:
Since the most probable speed is the speed possessed by the maximum number of molecules,
then $\dfrac{d}{d v}(P(v))=0$,
Where $P$ is the probability that particles have speed between $ v $ and $v+dv$
$$
\begin{aligned}
&\Rightarrow \dfrac{d}{d v}\left(e^{-m v^{2} / 2 k T} v^{2}\right)=0, \quad P(v)=n(v) / N \\
&\Rightarrow e^{-m v^{2} / 2 k T}\left(-\dfrac{m}{2 k T} 2 v\right) v^{2}+e^{-m v^{2} / 2 k T}(2 v)=0\\
& \Rightarrow-\dfrac{m}{k T} v^{3}+2 v=0 \\
& \Rightarrow-\dfrac{m}{k T} v^{2}+2=0 \\
& \Rightarrow v^{2}=\dfrac{2 k T}{m} \\
& \Rightarrow v_{m p}=\sqrt{\dfrac{2 k T}{m}}
\end{aligned}
$$
Prove that the average ( mean ) speed of the gas molecules is: $v_{a v g}=\sqrt{\dfrac{8 k T}{\pi m}}$
$$
\begin{aligned}
v_{\text {avg }}&=\dfrac{n_{1} v_{1}+n_{2} v_{2}+\ldots}{n_{1}+n_{2}+\ldots .}=\dfrac{\sum_{0}^{\infty} n v}{\sum_{0}^{\infty} n} \\
&=\dfrac{\int_{0}^{\infty} n(v)\, v\, d v}{\int_{0}^{\infty} n(v)\, d v}\\
&=\dfrac{\int_{0}^{\infty}\left(4 \pi N\left(\dfrac{m}{2 \pi k T}\right)^{3 / 2} e^{-m v^{2} / 2 k T} v^{2} d v\right) v}{\int_{0}^{\infty} 4 \pi N\left(\dfrac{m}{2 \pi k T}\right)^{3 / 2} e^{-m v^{2} / 2 k T} v^{2} d v}\\
&=\dfrac{\int_{0}^{\infty}\left(e^{-m v^{2} / 2 k T} v^{3} d v\right)}{\int_{0}^{\infty} e^{-m v^{2} / 2 k T} v^{2} d v}\\
&=\dfrac{\int_{0}^{\infty} x^{3} e^{-a x^{2}} d x}{\int_{0}^{\infty} x^{2} e^{-a x^{2}} d x} \quad(\text { Taking } m / 2 k T=a)\\
&=\dfrac{\dfrac{1}{2 a^{2}}}{\dfrac{1}{4 a} \sqrt{\dfrac{\pi}{a}}}=\dfrac{2}{\sqrt{\pi a}}=\dfrac{2 \sqrt{2 k T}}{\sqrt{\pi m}}\\
&=\sqrt{\dfrac{8 k T}{\pi m}}
\end{aligned}\]
Prove that the rms speed of gas molecules is $v_{r m s}=\sqrt{\dfrac{3 k T}{m}}$
Proof:
$$
\begin{aligned}
v_{r m s} & =\sqrt{\dfrac{n_{1} v_{1}^{2}+n_{2} v_{2}^{2}+n_{3} v_{3}^{2}+\ldots}{n_{1}+n_{2}+n_{3}+\ldots}}\\
&=\sqrt{\dfrac{\int_{0}^{\infty} n(v)\, v^{2}\, d v}{\int_{0}^{\infty} n(v)\, d v}}=\sqrt{\dfrac{\int_{0}^{\infty} v^{4} e^{-m v^{2} / 2 k T}\, d v}{\int_{0}^{\infty} v^{2} e^{-m v^{2} / 2 k T}\, d v}} \\
& =\sqrt{\dfrac{\int_{0}^{\infty} x^{4} e^{-a x^{2}} d x}{\int_{0}^{\infty} x^{2} e^{-a x^{2}} d x}} \quad (\text { Taking } m / 2 k T=a) \\
& =\sqrt{\dfrac{\dfrac{3}{8 a^{2}} \sqrt{\dfrac{\pi}{a}}}{\dfrac{1}{4 a} \sqrt{\dfrac{\pi}{a}}}}\\
&=\sqrt{\dfrac{3}{2 a}}\\
&=\sqrt{\dfrac{6 k T}{2 m}} \\
& =\sqrt{\dfrac{3 k T}{m}}
\end{aligned}
$$
Prove that the mean kinetic energy of gas molecules is $\langle E\rangle=\dfrac{3}{2} k T$
Proof :
$$
\begin{aligned}
& \langle E\rangle=\dfrac{n_{1} E_{1}+n_{2} E_{2}+n_{3} E_{3}+\ldots \ldots . .}{n_{1}+n_{2}+n_{3}+\ldots \ldots .}=\dfrac{\int_{0}^{\infty} n(E) E d E}{\int_{0}^{\infty} n(E) d E} \\
& =\dfrac{\dfrac{2 N}{\sqrt{\pi}}\left(\dfrac{1}{k T}\right)^{3 / 2} \int_{0}^{\infty} E^{3 / 2} e^{-E / k T} d E}{\dfrac{2 N}{\sqrt{\pi}}\left(\dfrac{1}{k T}\right)^{3 / 2} \int_{0}^{\infty} E^{1 / 2} e^{-E / k T} d E} \\
& =\dfrac{\int_{0}^{\infty} E^{3 / 2} e^{-E / k T} d E}{\int_{0}^{\infty} E^{1 / 2} e^{-E / k T} d E} \\
& =\dfrac{\int_{0}^{\infty} x^{5 / 2-1} e^{-a x} d x}{\int_{0}^{\infty} x^{3 / 2-1} e^{-a x} d x}(\text { Taking } a=1 / k T, x=E) \\
& =\dfrac{\Gamma(5 / 2) / a^{5 / 2}}{\Gamma(3 / 2) / a^{3 / 2}}=\dfrac{\dfrac{3 \sqrt{\pi}}{4} \dfrac{1}{a^{5 / 2}}}{\frac{\sqrt{\pi}}{2} \frac{1}{a^{3 / 2}}}=\frac{3}{2} \frac{1}{a} \\
&=\frac{3}{2} k T
\end{aligned}
$$
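The three characteristic speeds and the normalisation of \( n(v) \) can be checked numerically, as in the sketch below (added for illustration; the mass \( 4.65\times10^{-26}\,\mathrm{kg} \) corresponds roughly to an N\(_2\) molecule at 300 K):

```python
# Numerical moments of the Maxwell-Boltzmann speed distribution vs the
# closed-form results v_mp, v_avg, v_rms derived above.
import numpy as np

k, m, T = 1.380649e-23, 4.65e-26, 300.0     # J/K, kg (N2 molecule), K
v = np.linspace(0.0, 5000.0, 200001)
dv = v[1] - v[0]
f = 4*np.pi*(m/(2*np.pi*k*T))**1.5 * v**2 * np.exp(-m*v**2/(2*k*T))   # n(v)/N

print((f*dv).sum())                                      # normalisation, ~1
print(v[np.argmax(f)],            np.sqrt(2*k*T/m))      # most probable speed, ~422 m/s
print((v*f*dv).sum(),             np.sqrt(8*k*T/(np.pi*m)))  # mean speed, ~476 m/s
print(np.sqrt((v**2*f*dv).sum()), np.sqrt(3*k*T/m))      # rms speed, ~517 m/s
```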
1. Definition of the Density Matrix
In quantum statistical mechanics, the density matrix (or density operator) describes the statistical state of a quantum system. It generalizes the concept of a state vector to include mixed states.
For a pure state \( |\psi\rangle \), the density matrix is:
\[
\rho = |\psi\rangle \langle \psi|
\]
For a mixed state with probabilities \( p_i \) and states \( |\psi_i\rangle \), it becomes:
\[
\rho = \sum_i p_i |\psi_i\rangle \langle \psi_i|
\]
2. Importance in Statistical Mechanics
The density matrix is crucial for describing quantum systems in equilibrium or interacting with an environment. It allows us to compute ensemble averages, account for thermal mixtures, and analyze decoherence.
In thermal equilibrium at temperature \( T \), the density matrix is:
\[
\rho = \frac{1}{Z} e^{-\beta H}, \quad \text{where } Z = \mathrm{Tr}(e^{-\beta H})
\]
This allows computation of thermodynamic quantities like:
\[
\langle A \rangle = \mathrm{Tr}(\rho A)
\]
Example 2: Two-Level System
A two-state system (e.g., spin-1/2) in thermal equilibrium has:
\[
\rho = \frac{1}{Z}
\begin{pmatrix}
e^{-\beta \varepsilon_0} & 0 \\
0 & e^{-\beta \varepsilon_1}
\end{pmatrix},
\quad Z = e^{-\beta \varepsilon_0} + e^{-\beta \varepsilon_1}
\]
5. Theorem: Expectation Value from Density Matrix
Theorem: For any observable \( A \), the ensemble average is given by:
\[
\langle A \rangle = \mathrm{Tr}(\rho A)
\]
Proof:
If the system is in state \( |\psi_i\rangle \) with probability \( p_i \):
\[
\langle A \rangle = \sum_i p_i \langle \psi_i | A | \psi_i \rangle
\]
But:
\[
\mathrm{Tr}(\rho A) = \mathrm{Tr}\left(\sum_i p_i |\psi_i\rangle\langle\psi_i| A\right)
= \sum_i p_i \langle \psi_i | A | \psi_i \rangle = \langle A \rangle
\]
6. Theorem: Density Matrix Commutes with Hamiltonian in Equilibrium
Theorem: In thermal equilibrium, the density matrix \( \rho \) commutes with the Hamiltonian \( H \):
\[
[\rho, H] = 0
\]
Proof:
In the canonical ensemble,
\[
\rho = \frac{1}{Z} e^{-\beta H}
\]
Since \( e^{-\beta H} \) is a function of \( H \), and any operator commutes with functions of itself:
\[
[\rho, H] = \left[\frac{1}{Z} e^{-\beta H}, H\right] = 0
\]
Therefore, \( \rho \) and \( H \) share the same eigenbasis in equilibrium.
7. Summary
The density matrix is a powerful tool in quantum statistical mechanics. It unifies treatment of pure and mixed states, encodes thermal properties, facilitates calculation of measurable quantities, and commutes with the Hamiltonian at equilibrium.
Show that the density matrix satisfies a quantum-mechanical analogue of Liouville's theorem.
If the time derivative of the density matrix is zero, then the density matrix commutes with the Hamiltonian.
Proof : The density operator evolves according to the von Neumann equation, the quantum analogue of Liouville's equation:
\[ i\hbar \frac{\partial \rho}{\partial t} = [H,\rho] \]
When the time derivative of the density matrix is zero,
\[ [H,\rho]=0 \]
Thus $$ H\rho-\rho H=0 ,$$ hence \(\rho \) commutes with the Hamiltonian.
The density matrix of a three-level quantum system, for example, is represented by a 3×3 matrix, which is Hermitian.
Quantum statistics reduces to classical statistics (more specifically MB statistics) when the mean separation between the particles is much larger than their mean de Broglie wavelength.
In classical statistics the particles are identical, neutral, non-interacting, and distinguishable.
In quantum statistics the particles are so close together that their wave functions overlap, and hence the particles become indistinguishable: their mean separation is comparable to the de Broglie wavelength. Moreover, a system that requires quantum statistics is generally at very low temperature, so that the thermal randomness decreases to the point where the particles are close enough for this overlap; in addition, the density of such a system is very high.
Derivation of the Mathematical Condition (VALIDITY OF MB STATISTICS)
From the Heisenberg uncertainty principle, the classical (MB) description requires
$$\Delta x\,\Delta p \gg \hbar$$
$$\Rightarrow \Delta x \gg \lambda ,$$
where $\lambda$ is the mean de Broglie wavelength. If the system contains N particles enclosed in volume V, the volume per particle is \( V/N \).
Assuming each particle occupies a cubic cell, the interparticle spacing \(\Delta x \) is given by:
\[\begin{align}
& \Delta x={{\left( \frac{V}{N} \right)}^{\frac{1}{3}}} \\
& \Rightarrow {{\left( \frac{V}{N} \right)}^{\frac{1}{3}}}\gg \lambda \\
& \Rightarrow \frac{V}{N}\gg {{\lambda }^{3}} \\
& \Rightarrow \rho {{\lambda }^{3}}\ll 1 \\
\end{align}\]
where \( \rho = N/V \) is the number density of the system. This is the condition for the validity of MB (classical) statistics.
Other Conditions
The other conditions for the classical approximation are (a numerical check follows the list below):
1. $N/V$ (the concentration of molecules) must be very small.
2. The temperature T must be very high.
3. The mass of the molecules must not be too small.
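The numerical check below (an added sketch; the densities quoted are rough literature values) evaluates \( \rho\lambda^{3} \) for nitrogen gas at STP and for conduction electrons in copper, showing one case deep in the classical regime and the other deep in the quantum regime:

```python
# Degeneracy parameter rho*lambda^3: MB (classical) statistics is valid when it is << 1.
import numpy as np

k, h = 1.380649e-23, 6.62607015e-34

def degeneracy_parameter(n, m, T):
    lam = h / np.sqrt(2*np.pi*m*k*T)   # thermal de Broglie wavelength
    return n * lam**3

# Nitrogen gas at STP: classical regime expected
print(degeneracy_parameter(n=2.7e25, m=4.65e-26, T=273.0))   # ~2e-7, well below 1

# Conduction electrons in a metal at room temperature: strongly quantum
print(degeneracy_parameter(n=8.5e28, m=9.11e-31, T=300.0))   # several thousand, >> 1
```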
IDEAL FERMI GAS :
Properties of a FERMI GAS and its Comparison with the Classical Ideal Gas
- A Fermi gas (for example the electron gas in a metal) consists of charged particles, while the classical ideal gas is neutral.
- Fermions retain non-zero momenta (up to the Fermi momentum) even at 0 K, while classical molecules come to rest at 0 K.
- Fermions have half-integer spin, while the molecules of a classical gas are treated as spinless.
- Fermions obey the Fermi-Dirac statistical distribution, while a classical gas obeys Maxwell-Boltzmann statistics.
- Fermions in a Fermi gas are indistinguishable, while molecules in a classical gas are treated as distinguishable.
- The particle density in a Fermi gas is much greater than the molecular density in a classical ideal gas.
- Fermions obey Pauli's exclusion principle, while classical molecules do not.
Prove that Internal energy of the Fermi gas is given by :
$$
U=\frac{3}{2} N k T \frac{f_{5 / 2}(z)}{f_{3 / 2}(z)}
$$
Proof:
$$
\begin{aligned}
& U=k T^{2}\left[\frac{\partial}{\partial T}(\ln \mathcal{Z})\right]_{z, V} \qquad \left(\ln \mathcal{Z}=\frac{P V}{k T}\right) \\
& =k T^{2} \frac{\partial}{\partial T}\left(\frac{P V}{k T}\right) \\
& =k T^{2} \frac{\partial}{\partial T}\left(\frac{V}{\lambda^{3}} f_{5 / 2}(z)\right)=k T^{2} V f_{5 / 2}(z) \frac{\partial}{\partial T}\left(\frac{1}{\lambda^{3}}\right) \\
& =k T^{2} V f_{5 / 2}(z)\left(-\frac{3}{\lambda^{4}} \frac{d \lambda}{d T}\right) \\
& =k T^{2} V f_{5 / 2}(z)\left(-\frac{3}{\lambda^{4}} \frac{d}{d T}\left(\frac{h}{\sqrt{2 \pi m k T}}\right)\right) \\
& =k T^{2} V f_{5 / 2}(z)\left(-\frac{3}{\lambda^{4}}\left(\frac{h}{\sqrt{2 \pi m k}}\left(\frac{-1}{2} T^{-3 / 2}\right)\right)\right) \\
& =\frac{3}{2} k T^{2} V f_{5 / 2}(z) \frac{1}{\lambda^{4}} \frac{h}{\sqrt{2 \pi m k T}} \frac{1}{T} \\
& =\frac{3}{2} k T V f_{5 / 2}(z) \frac{1}{\lambda^{3}}=\frac{3}{2} k T \frac{V}{\lambda^{3}} f_{5 / 2}(z)\\
& =\frac{3}{2} k T \frac{N}{f_{3 / 2}(z)} f_{5 / 2}(z) \\
& =\frac{3}{2} N k T \frac{f_{5 / 2}(z)}{f_{3 / 2}(z)}
\end{aligned}
$$
Prove that
\begin{align}
U &= -\frac{\partial}{\partial \beta} \ln Z \\
&= k T^2 \frac{\partial}{\partial T} \ln Z \\
&= k T^2 \frac{\partial \beta}{\partial T} \cdot \frac{\partial}{\partial \beta} \ln Z \\
&= k T^2 \frac{\partial}{\partial T} \left( \frac{1}{kT} \right) \cdot \frac{\partial}{\partial \beta} \ln Z \\
&= k T^2 \left( -\frac{1}{k T^2} \right) \cdot \frac{\partial}{\partial \beta} \ln Z \\
&= -\frac{\partial}{\partial \beta} \ln Z
\end{align}
Prove that for a Maxwell-Boltzmann gas the number of particles is given by
$$N=\frac{z V}{\lambda^{3}}$$
Proof:
$$
\begin{aligned}
& n_{i}=g_{i} e^{-\alpha} e^{-\beta E_{i}}=g_{i} z e^{-\beta E_{i}} \\
& \Rightarrow N=\sum n_{i}=z \sum g_{i} e^{-\beta E_{i}} \\
& =z \int_{0}^{\infty} \frac{V}{h^{3}} e^{-\frac{p^{2}}{2 m k T}} 4 \pi p^{2} d p \\
& =\frac{4 \pi z V}{h^{3}} \int_{0}^{\infty} e^{-\left(\frac{p^{2}}{2 m k T}\right)} p^{2} d p \\
& =\frac{4 \pi z V}{h^{3}} \int_{0}^{\infty} e^{-x}(2 m k T) x\left(\sqrt{2 m k T} \frac{1}{2 \sqrt{x}} d x\right) \text { Let } \frac{p^{2}}{2 m k T}=x \Rightarrow d p=\sqrt{2 m k T} \frac{1}{2 \sqrt{x}} d x \\
& =\frac{2 \pi z V}{h^{3}}(2 m k T)^{3 / 2} \int_{0}^{\infty} e^{-x} x^{1 / 2} d x \\
& =\frac{2 \pi z V}{h^{3}}(2 m k T)^{3 / 2} \frac{\sqrt{\pi}}{2}=\frac{z V}{h^{3}}(2 \pi m k T)^{3 / 2} \\
& =\frac{z V}{\lambda^{3}}
\end{aligned}
$$
Number of fermions at 0 K in the momentum range $p$ and $p+d p$ is :
\[
n(p)\, dp = \frac{8 \pi V p^{2}\, dp}{h^{3}}
\]
Number of fermions at 0 K in the energy range $E$ and $E+d E$ is :
\begin{align*}
n(E)\, dE
&= \frac{8 \pi V (2mE) \cdot \frac{\sqrt{2m}}{2\sqrt{E}}\, dE}{h^{3}} \\
&= \frac{8 \sqrt{2} \pi V m^{3/2} E^{1/2}\, dE}{h^{3}} \\
\end{align*}
Number of fermions at 0 K in the velocity range \( v\) to \( v+dv\) is:
\begin{align*}
n(v)\, dv &= \frac{8 \sqrt{2} \pi V m^{3/2} \left(\frac{1}{2}mv^{2}\right)^{1/2} \cdot \frac{1}{2}m \cdot 2v\, dv}{h^{3}} \\
\Rightarrow \quad n(v)\, dv
&= \frac{8 \pi V m^{3} v^{2}\, dv}{h^{3}}
\end{align*}
FERMI ENERGY ::
It is the energy of the highest occupied state of the fermions at $\mathbf{0 K}$.
Method-1
The total number of fermions at 0 K is obtained by integrating $n(E)$ over the filled levels, from $0$ to $E_{F}$ :
$$
\begin{aligned}
N&=\int_{0}^{E_{F}} n(E) d E \\
& =\int_{0}^{E_{F}} \frac{8 \sqrt{2} \pi V m^{3 / 2} E^{1 / 2} d E}{h^{3}} \\
& =\frac{8 \sqrt{2} \pi V m^{3 / 2}}{h^{3}} \int_{0}^{E_{F}} E^{1 / 2} d E \\
& =\frac{8 \sqrt{2} \pi V m^{3 / 2}}{h^{3}} \times \frac{2}{3} E_{F}^{3 / 2} \\
& =\left(\frac{16 \sqrt{2} \pi V m^{3 / 2}}{3 h^{3}}\right) E_{F}^{3 / 2} \\
\Rightarrow E_{F}^{3}&=\frac{9 N^{2} h^{6}}{256 \times 2 \pi^{2} V^{2} m^{3}} \\
\Rightarrow E_{F} & =\left(\frac{9 N^{2} h^{6}}{256 \times 2 \pi^{2} V^{2} m^{3}}\right)^{1 / 3}\\
&=\frac{h^{2}}{2 m}\left(\frac{3 N}{8 \pi V}\right)^{2 / 3} \\
\Rightarrow E_{F} & =\frac{h^{2}}{2 m}\left(\frac{3 \rho}{8 \pi}\right)^{2 / 3} \qquad \left(\rho=\frac{N}{V}\right)
\end{aligned}
$$
Method-2
Applying the Heisenberg Uncertainty Principle :
$$
\begin{aligned}
& d x d p_{x} d y d p_{y} d z d p_{z}=h^{3} \\
\Rightarrow & d x d y d z\left(d p_{x} d p_{y} d p_{z}\right)=h^{3} \\
\Rightarrow & \left(d p_{x} d p_{y} d p_{z}\right)=\frac{h^{3}}{d x d y d z}=\frac{h^{3}}{V}\\
\Rightarrow \dfrac{4}{3} \pi p_{F}^{3}&=\dfrac{h^{3}}{V} \frac{N}{2}
\end{aligned}
$$
as each momentum state inside the Fermi sphere can accommodate two electrons, one with spin up and the other with spin down.
$$
\begin{aligned}
\Rightarrow p_{F} &=\left(\frac{3 h^{3} N}{8 \pi V}\right)^{1 / 3} \\
\Rightarrow\left(2 m E_{F}\right)^{1 / 2} &=\left(\frac{3 h^{3} N}{8 \pi V}\right)^{1 / 3} \\
\Rightarrow E_{F} & =\frac{1}{2 m}\left(\frac{3 h^{3} N}{8 \pi V}\right)^{2 / 3} \\
\Rightarrow E_{F} &=\frac{h^{2}}{2 m}\left(\frac{3 N}{8 \pi V}\right)^{2 / 3} \\
\Rightarrow E_{F}&=\frac{h^{2}}{2 m}\left(\frac{3 \rho}{8 \pi}\right)^{2 / 3}
\end{aligned}
$$
FERMI SPEED :
It is the speed of fermions in the fermi level.
\begin{align*}
\Rightarrow \quad V_{F}
&= \sqrt{\frac{2 E_{F}}{m}} \\
&= \sqrt{\frac{2}{m} \cdot \frac{h^{2}}{2m} \left( \frac{3\rho}{8\pi} \right)^{2/3}} \\
\Rightarrow \quad V_{F}
&= \frac{h}{m} \left( \frac{3\rho}{8\pi} \right)^{1/3}
\end{align*}
FERMI MOMENTUM :
It is the momentum of the fermions in the fermi level.\\
\begin{align*}
p_{F}
&= m V_{F} \\
& = m \cdot \frac{h}{m} \left( \frac{3\rho}{8\pi} \right)^{1/3} \\
&= h \left( \frac{3\rho}{8\pi} \right)^{1/3}
\end{align*}
FERMI TEMPERATURE :
It is the temperature equivalent of the Fermi energy, defined by \( E_{F} = k T_{F} \).
\begin{align*}
T_{F}
&= \frac{E_{F}}{k} \\
&= \frac{1}{k} \cdot \frac{h^{2}}{2m} \left( \frac{3\rho}{8\pi} \right)^{2/3} \\
&= \frac{h^{2}}{2mk} \left( \frac{3\rho}{8\pi} \right)^{2/3}
\end{align*}
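A numerical illustration of these formulas (an added sketch; the electron density \( n \approx 8.5\times10^{28}\,\mathrm{m^{-3}} \) is the usual estimate of one conduction electron per copper atom) is given below; the Fermi energy should come out near 7 eV and the Fermi temperature near \( 8\times10^{4} \) K:

```python
# Fermi energy, momentum, speed, and temperature for the electron gas in copper.
import numpy as np

h  = 6.62607015e-34   # J s
k  = 1.380649e-23     # J/K
me = 9.109e-31        # kg
n  = 8.5e28           # electrons per m^3

EF = h**2 / (2*me) * (3*n / (8*np.pi))**(2.0/3.0)   # Fermi energy, J
pF = h * (3*n / (8*np.pi))**(1.0/3.0)               # Fermi momentum
vF = pF / me                                         # Fermi speed
TF = EF / k                                          # Fermi temperature

print(EF / 1.602e-19)   # ~7 eV
print(vF)               # ~1.6e6 m/s
print(TF)               # ~8e4 K
```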
Average Energy of fermions at 0 K is :
$$
\langle E\rangle=\frac{\int_{0}^{E_{F}} E\, n(E)\, d E}{\int_{0}^{E_{F}} n(E)\, d E}
$$
$$
\begin{aligned}
\langle E\rangle &= \frac{\int_{0}^{E_{F}} E\, \frac{8 \sqrt{2} \pi V m^{3 / 2}}{h^{3}} E^{1 / 2}\, d E}{\int_{0}^{E_{F}} \frac{8 \sqrt{2} \pi V m^{3 / 2}}{h^{3}} E^{1 / 2}\, d E} \\
&= \frac{\int_{0}^{E_{F}} E^{3 / 2}\, d E}{\int_{0}^{E_{F}} E^{1 / 2}\, d E} \\
&= \frac{(2 / 5) E_{F}^{5 / 2}}{(2 / 3) E_{F}^{3 / 2}} \\
&= \frac{3}{5} E_{F}
\end{aligned}
$$
Total energy of fermions at 0 K
$$
\begin{aligned}
E_{\text{total}} & =\int_{0}^{E_{F}} E\, n(E)\, d E\\
&=\int_{0}^{E_{F}} E \frac{8 \sqrt{2} \pi V m^{3 / 2}}{h^{3}} E^{1 / 2} d E \\
& =\frac{8 \sqrt{2} \pi V m^{3 / 2}}{h^{3}} \int_{0}^{E_{F}} E^{3 / 2} d E\\
&=\frac{8 \sqrt{2} \pi V m^{3 / 2}}{h^{3}} \frac{2}{5} E_{F}^{5 / 2} \\
& =\frac{16 \sqrt{2} \pi V m^{3 / 2}}{5 h^{3}} E_{F}^{5 / 2} \\
& =\frac{16 \sqrt{2} \pi V m^{3 / 2}}{5 h^{3}} E_{F}^{3 / 2} E_{F}\\
&=\frac{16 \sqrt{2} \pi V m^{3 / 2}}{5 h^{3}}\left(\frac{3 N h^{3}}{16 \sqrt{2} \pi V m^{3 / 2}}\right) E_{F} \\
& =\frac{3}{5} N E_{F}
\end{aligned}
$$
Average Speed of Fermions at 0K.
\begin{align*}
\langle v \rangle
&= \frac{\int_{0}^{v_{F}} v\, n(v)\, dv}{\int_{0}^{v_{F}} n(v)\, dv} \\
&= \frac{\int_{0}^{v_{F}} v \cdot \frac{8\pi V m^{3}}{h^{3}} v^{2}\, dv}{\int_{0}^{v_{F}} \frac{8\pi V m^{3}}{h^{3}} v^{2}\, dv} \\
&= \frac{\frac{8\pi V m^{3}}{h^{3}} \int_{0}^{v_{F}} v^{3}\, dv}{\frac{8\pi V m^{3}}{h^{3}} \int_{0}^{v_{F}} v^{2}\, dv} \\
&= \frac{\int_{0}^{v_{F}} v^{3}\, dv}{\int_{0}^{v_{F}} v^{2}\, dv} \\
&= \frac{\frac{1}{4} v_{F}^{4}}{\frac{1}{3} v_{F}^{3}} \\
&= \frac{3}{4} v_{F}
\end{align*}
How does the distribution function of a strongly degenerate Fermi Gas vary at $\mathbf{T = 0 K}$
Ans:
For a strongly degenerate Fermi gas, the chemical potential $\mu = \mu_{0}$ is known as the Fermi energy $E_{F}$ of the gas.
The distribution function is:
\[
f(E) = \frac{1}{e^{\frac{E - E_{F}}{kT}} + 1}
\]
Case 1: When $E < E_{F}$ (as $T \to 0$, the exponent $(E - E_{F})/kT \to -\infty$)
\begin{align*}
f(E) &= \frac{1}{e^{-\infty} + 1} = \frac{1}{\frac{1}{e^{\infty}} + 1} = \frac{1}{0 + 1} = \frac{1}{1} = 1
\end{align*}
This shows that all the energy states up to the Fermi level are completely filled.
Case 2: When $E > E_{F}$ (as $T \to 0$, the exponent $(E - E_{F})/kT \to +\infty$)
\begin{align*}
f(E) &= \frac{1}{e^{\infty} + 1} = \frac{1}{\infty + 1} = \frac{1}{\infty} = 0
\end{align*}
This shows that all the energy states above the Fermi level are empty.
Case 3: When $E = E_{F}$ and $T > 0\,\text{K}$
\begin{align*}
f(E) &= \frac{1}{e^{0} + 1} = \frac{1}{1 + 1} = \frac{1}{2} = 0.5
\end{align*}
This shows that at the Fermi level the occupation probability is exactly one half at any finite temperature.
THEORY OF WHITE DWARF STARS
These are old stars in which most of the hydrogen has been converted into helium through thermonuclear reactions. They are the most common end state of stellar evolution (the others being black holes and neutron stars). The material content is almost entirely helium. The rate of thermonuclear energy generation is very slow, hence the luminosity of the star is very low. The little brightness that remains comes from the gravitational energy released by the slow contraction of the star.
The mass of a white dwarf is around $10^{33}\ \mathrm{g}$ of helium, which is nearly the mass of the Sun, while its radius is nearly that of the Earth. Thus the density is roughly $\rho=10^{7}\ \mathrm{g/cm^{3}}$, and the temperature of the core is about $\mathrm{T}=10^{7}\ \mathrm{K}$. This temperature corresponds to a mean thermal energy of the order of $10^{3}\ \mathrm{eV}$, which is much larger than the ionisation potential of helium, so the helium is completely ionised.
Unlike most other stars that are supported against their own gravitation by normal gas pressure, white dwarf stars are supported by the degeneracy pressure of the electron gas in their interior. Degeneracy pressure is the increased resistance exerted by electrons composing the gas, as a result of stellar contraction. The application of the so-called Fermi-Dirac statistics and of special relativity to the study of the equilibrium structure of white dwarf stars leads to the existence of a mass-radius relationship through which a unique radius is assigned to a white dwarf of a given mass; the larger the mass, the smaller the radius. Furthermore, the existence of a limiting mass is predicted, above which no stable white dwarf star can exist. This limiting mass, known as the Chandrasekhar limit, is on the order of 1.4 solar masses. Both predictions are in excellent agreement with observations of white dwarf stars.
DETERMINATION OF CHANDRASEKHAR LIMIT:
The microscopic contents of a white dwarf may be considered as N electrons each of mass m and $N / 2$ helium nuclei ( each of mass $\approx 4 \mathrm{~m}_{\mathrm{p}}$ )
Thus mass of WHITE DWARF is :\\
\begin{align*}
M &= N m + \frac{1}{2} N (4 m_p) \approx 2N m_p \\
\Rightarrow \quad n &= \frac{N}{V} = \frac{M / 2m_p}{M / \rho} \\
\Rightarrow \quad n &= \frac{\rho}{2m_p} \\
\Rightarrow \quad n &= \frac{10^{7}}{2 \times 1.6 \times 10^{-24}\ \mathrm{g}} \\
&= \frac{10^{7}}{3.2 \times 10^{-24}} = 3.125 \times 10^{30}\ \mathrm{electrons/cm^3}
\end{align*}
\begin{align*}
n &\approx 10^{30} \ \text{electrons/cm}^{3}
\end{align*}
Thus, the white dwarf can be considered as a strongly degenerate electron gas, with \( N \) electrons each having Fermi momentum \( P_{F} \) as:
\[
p_{F} = \left( \frac{3N}{8\pi V} \right)^{1/3} h
= \left( \frac{3n}{8\pi} \right)^{1/3} h
\approx 3 \times 10^{-17} \ \mathrm{g\,cm/s}
\]
This momentum is comparable with the characteristic momentum $m_{0} c \approx 2.7 \times 10^{-17}\ \mathrm{g\,cm/s}$ of an electron. Thus the Fermi energy of the electrons is comparable with the rest-mass energy $m_{0} c^{2}$, which shows that the motion of the electrons is relativistic and the electron gas is in a completely degenerate state. The contribution of the helium nuclei, in comparison with the electrons, is insignificant and can be neglected; the contribution of the nuclei to the radiation is also neglected.\\
Thus the white dwarf is left with the electron gas only.
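A rough numerical cross-check of these order-of-magnitude estimates, in CGS units (a sketch; the density is the assumed value $\rho \sim 10^{7}\ \mathrm{g/cm^{3}}$ used above):

```python
# White-dwarf electron gas estimates in CGS units (illustrative only).
import math

h   = 6.626e-27    # erg s
m_e = 9.109e-28    # g
m_p = 1.67e-24     # g
c   = 3.0e10       # cm/s
rho = 1.0e7        # g/cm^3, assumed density

n   = rho / (2 * m_p)                        # electrons per cm^3 (2 nucleons/electron)
p_F = h * (3 * n / (8 * math.pi)) ** (1 / 3) # Fermi momentum
print(f"n     ~ {n:.2e} electrons/cm^3")     # ~3e30
print(f"p_F   ~ {p_F:.2e} g cm/s")           # ~5e-17 with this n
# (the notes round n to ~1e30 cm^-3, which gives p_F ~ 3e-17; either way p_F
#  is of the same order as m_e c, so the electrons are relativistic)
print(f"m_e c ~ {m_e * c:.2e} g cm/s")       # ~2.7e-17
```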
Considering the ground-state properties of $N$ relativistic electrons with spin degeneracy 2, the number of electrons in the momentum range $p$ and $p+dp$ is:
$$
d N=2 \times \frac{V \times 4 \pi p^{2} d p}{h^{3}}=\frac{8 \pi p^{2} V d p}{h^{3}}
$$
Total Number of electron is:
$N=\int d N=\int_{0}^{P_{F}} \frac{8 \pi p^{2} V d p}{h^{3}}=\frac{8 \pi V}{3 h^{3}} p_{F}^{3}$
$$
\begin{aligned}
& E_{0}=\int E\, d N=\int_{0}^{p_{F}} E\, \frac{8 \pi V}{h^{3}} p^{2}\, d p \\
& =\frac{8 \pi V m_{0} c^{2}}{h^{3}} \int_{0}^{p_{F}}\left(\left(1+\left(\frac{p}{m_{0} c}\right)^{2}\right)^{1 / 2}-1\right) p^{2}\, d p
\end{aligned}
$$
Total pressure
\begin{align*}
P_{0}&=\int d P_{0}
=\int_{0}^{p_{F}} \frac{p u}{3 V}\, d N
=\int_{0}^{p_{F}} \frac{p u}{3 V} \frac{8 \pi V}{h^{3}} p^{2}\, d p\\
& =\int_{0}^{p_{F}} \frac{p}{3 V}\left(\frac{p}{m_{0}\left(1+\left(\frac{p}{m_{0} c}\right)^{2}\right)^{1 / 2}}\right) \frac{8 \pi V}{h^{3}} p^{2}\, d p \\
& =\int_{0}^{p_{\mathrm{F}}} \frac{8 \pi}{3 h^{3}}\left(\frac{p^{4}}{m_{0}\left(1+\left(\frac{p}{m_{0} c}\right)^{2}\right)^{1 / 2}}\right) d p \\
& =\int_{0}^{\theta_{\mathrm{F}}} \frac{8 \pi}{3 h^{3}}\left(\frac{\left(m_{0} c \sinh \theta\right)^{4}}{m_{0}\left(1+\sinh ^{2} \theta\right)^{1 / 2}}\right)\left(m_{0} c \cosh \theta\right) d \theta \qquad (p=m_{0} c \sinh \theta) \\
& =\int_{0}^{\theta_{F}} \frac{8 \pi m_{0}^{4} c^{5}}{3 h^{3}} \sinh ^{4} \theta \, d \theta
=\frac{\pi m_{0}^{4} c^{5}}{3 h^{3}} \int_{0}^{\theta_{F}} 8 \sinh ^{4} \theta \, d \theta \\
& =\frac{\pi m_{0}^{4} c^{5}}{3 h^{3}} A(x) \tag{1}
\end{align*}
Where $A(x)=\int_{0}^{\theta_{\mathrm{F}}} 8 \sinh ^{4} \theta\, d \theta$, taking $x=\sinh \theta_{F}=\dfrac{p_{F}}{m_{0} c}$.
As the electron gas remains stable, the change in energy of the gas due to adiabatic expansion is balanced by the change in gravitational energy due to the change in size. Taking the white dwarf to be spherical, the gravitational potential energy is:\\
\begin{align*}
E_{g} &= -\frac{3}{5} \frac{G M^{2}}{R}
= \alpha \frac{G M^{2}}{R} \qquad \left(\alpha = -\tfrac{3}{5}\right)
\end{align*}
Thus: $d E_{0}+d E_{g}=0$
Hence the radius of the white dwarf is real only if $M < M_{0}$, i.e.\ the mass of the white dwarf must be less than a particular mass $M_{0}$, called the Chandrasekhar limit, $M_{0}=1.44\, M_{\text {sun }}$.
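For orientation only, the limiting mass can be estimated numerically from a commonly quoted closed form (a sketch, not the derivation given above); the constant $\omega \approx 2.018$ comes from the Lane-Emden solution for an $n=3$ polytrope, and $\mu_e = 2$ is the number of nucleons per electron for helium matter:

```python
# Rough Chandrasekhar-mass estimate: M_Ch ~ omega*(sqrt(3*pi)/2)*(hbar*c/G)^(3/2)/(mu_e*m_H)^2
import math

hbar  = 1.055e-34   # J s
c     = 2.998e8     # m/s
G     = 6.674e-11   # m^3 kg^-1 s^-2
m_H   = 1.673e-27   # kg
M_sun = 1.989e30    # kg
mu_e  = 2.0         # nucleons per electron (assumed, helium/carbon matter)
omega = 2.018       # Lane-Emden constant for an n = 3 polytrope (assumed)

M_ch = omega * (math.sqrt(3 * math.pi) / 2) * (hbar * c / G) ** 1.5 / (mu_e * m_H) ** 2
print(f"M_Ch ~ {M_ch:.2e} kg ~ {M_ch / M_sun:.2f} M_sun")   # ~1.4 M_sun
```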
ADDITIONAL THEORIES ON WHITE DWARFS
HOW IT IS FORMED:
Every star burns by nuclear fusion, fusing hydrogen into helium. This fusion pushes the star outwards (through internal pressure), while the star's own gravity pulls it inwards, giving it its spherical shape.\\
But less massive stars are not able to create enough of this pressure, so the star is crushed down by its own gravity. This gravity is so strong that even the electrons are squeezed inwards. When there is no space left for the electrons to be compressed further, the star settles into a white dwarf (which is obviously very dense).
POSITION OF WHITE DWARFS :
The nearest known white dwarf is Sirius B, at 8.6 light years, the smaller component of the Sirius binary star. There are currently thought to be nine white dwarf stars within 25 light years among the hundred star systems nearest the Sun.
DISCOVERY:
The unusual faintness of white dwarfs was first recognized in 1910. The name white dwarf was coined by Willem Luyten in 1922.
The estimated radii of observed white dwarfs are typically $0.8$--$2 \%$ of the radius of the Sun; this is comparable to the Earth's radius of approximately $0.9\%$ of the solar radius. Although white dwarfs are known with estimated masses as low as $0.17\, M_{\odot}$ and as high as $1.33\, M_{\odot}$, the mass distribution is strongly peaked at $0.6\, M_{\odot}$, and the majority lie between $0.5$ and $0.7\, M_{\odot}$.
It packs mass comparable to the Sun's into a volume that is typically a million times smaller than the Sun's.\\
A typical white dwarf has a density of between $10^{4}$ and $10^{7} \mathrm{~g} / \mathrm{cm}^{3}$\\
A white dwarf, also called a degenerate dwarf, is a stellar core remnant composed mostly of electron-degenerate matter. A white dwarf is very dense; its faint luminosity comes from the emission of stored thermal energy, since no fusion (conversion of mass into energy) takes place in a white dwarf.
HOW STABLE ?
Compression of a white dwarf will increase the number of electrons in a given volume. Applying the Pauli exclusion principle, this will increase the kinetic energy of the electrons, thereby increasing the pressure. This electron degeneracy pressure supports a white dwarf against gravitational collapse. The pressure depends only on density and not on temperature.
CHANDRASEKHAR LIMIT :
A white dwarf can have a mass of at most 1.44 times the mass of the Sun; this limiting mass is called the CHANDRASEKHAR LIMIT.
\section*{WHAT HAPPENS IF WHITE DWARF CROSSES CHANDRASEKHAR LIMIT ?}
If a white dwarf were to exceed the Chandrasekhar limit and nuclear reactions did not take place, the pressure exerted by the electrons would no longer be able to balance the force of gravity, and it would collapse into a denser object such as a neutron star or a black hole.
INTERNAL TEMPERATURE :
The interior of the white dwarf maintains a uniform temperature, approximately $10^{7}$ K. An outer shell of non-degenerate matter cools from approximately $10^{7}$ K to $10^{4}$ K.
RADIATION :
The visible radiation emitted by white dwarfs varies over a wide colour range, from the blue-white of an O-type main-sequence star to the red of an M-type red dwarf. White dwarf effective surface temperatures extend from over 150,000 K to barely under 4,000 K.\\
White dwarfs also radiate neutrinos through the Urca process.\\
Most observed white dwarfs have relatively high surface temperatures, between 8,000 K and 40,000 K. A white dwarf, though, spends more of its lifetime at cooler temperatures than at hotter temperatures, so we should expect that there are more cool white dwarfs than hot white dwarfs.
RATE OF COOLING IN WHITE DWARFS :
The rate of cooling has been estimated for a carbon white dwarf of $0.59\, M_{\odot}$ with a hydrogen atmosphere. After initially taking approximately 1.5 billion years to cool to a surface temperature of 7,140 K, cooling approximately 500 more kelvins to 6,590 K takes around 0.3 billion years, but the next two steps of around 500 kelvins (to 6,030 K and 5,550 K) take first 0.4 and then 1.1 billion years.
White dwarfs are thought to be the final evolutionary state of stars whose mass is not high enough to become a neutron star, i.e.\ stars of up to about 10 solar masses.
IDEAL BOSE GAS
An ideal Bose gas is a quantum-mechanical phase of matter, analogous to a classical ideal gas.
It has integral spins.
Its constituents are called bosons.
Its examples are photons, which travel at the speed of light and are quanta of electromagnetic radiation.
Bosons obey Bose-Einstein statistics.
Bosons are neutral.
The theory of Bose gas can explain many phenomena that were classically inexplicable, such as:
Black body radiation
Heat capacity variation
Superconductivity and so on
Properties of a PHOTON GAS and its Comparison with a Classical Ideal Gas:
The rest mass of a photon is zero, whereas a gas molecule has a finite mass.
Photons in a photon gas are indistinguishable, while molecules in a classical ideal gas are distinguishable.
A photon gas obeys Bose-Einstein statistics, whereas a classical ideal gas obeys Maxwell-Boltzmann statistics.
Photons have integral spin, while molecules are spinless.
Photons in a photon gas travel at the speed of light, but gas molecules travel with speeds ranging from 0 to infinity.
Photons are not conserved during collisions, whereas gas molecules are conserved during collisions.
Number of cells with momentum in the range $p$ and $p+d p$ as per BE statistics :
$$g(p) d p=2 \times \frac{V 4 \pi p^{2} d p}{h^{3}}$$
Number of cells with frequency in the range $f$ and $f+d f$ as per BE statistics :
$$
g(f) d f=\frac{8 \pi V}{h^{3}}\left(\frac{h f}{c}\right)^{2} \frac{h}{c} d f$$
$$=\frac{8 \pi V f^{2}}{c^{3}} d f
$$
Number of Photons/particles in the momentum range $p$ and $p+d p$ as per BE statistics :
$$
\begin{aligned}
& n(p) d p=\frac{g(p) d p}{e^{\beta E}-1}
=\frac{8 \pi p^{2} V d p}{h^{3}\left(e^{\beta E}-1\right)} \\
\Rightarrow n(p) d p &=\frac{8 \pi p^{2} V d p}{h^{3}\left(e^{\frac{p^{2}}{2 m k T}}-1\right)}
\end{aligned}
$$
Number of Photons/particles in the frequency range $f$ and $f+df$ as per BE statistics :
$$n(f) d f=\frac{g(f) d f}{\left(e^{\frac{h f}{k T}}-1\right)}$$
$$\Rightarrow n(f) d f=\frac{8 \pi V f^{2} d f}{c^{3}\left(e^{\frac{h f}{k T}}-1\right)}$$
Energy of Photons in the range $f$ and $f+d f$
\begin{align*}
\Rightarrow E(f) d f &=h f n(f) df \\
&=h f\, \frac{8 \pi V f^{2} d f}{c^{3}\left(e^{\frac{h f}{k T}}-1\right)}
\end{align*}
Energy density in the range $f$ and $f+df$
\begin{align*}
u(f) d f&=\frac{E(f) d f}{V}\\
&=\frac{8 \pi h f^{3} d f}{c^{3}\left(e^{\frac{h f}{k T}}-1\right)}
\end{align*}
Number of Photons/Particles in the wavelength range $\lambda$ and $\lambda+d \lambda $ as per BE statistics :
$$n(\lambda) d \lambda=\frac{8 \pi d \lambda}{\lambda^{4}\left(e^{\dfrac{h c}{\lambda k T}}-1\right)}$$
Energy density in the wavelength range $\lambda$ and $\lambda+d \lambda$
$$
u(\lambda) d \lambda=\frac{8 \pi h c\, d \lambda}{\lambda^{5}\left(e^{\frac{h c}{\lambda k T}}-1\right)}
$$
Special case (1): when $\lambda$ is very long
$$
u(\lambda) d \lambda=\frac{8 \pi h c d \lambda}{\lambda^{5}\left(1+\frac{h c}{\lambda k T}-1\right)}$$
$$=\frac{8 \pi k T}{\lambda^{4}} d \lambda
$$
Special case (2): when $\lambda$ is very short
\[
\begin{align}
u(\lambda) \, d\lambda
&= \frac{8 \pi h c \, d\lambda}{\lambda^{5} \left(e^{\frac{h c}{\lambda k T}} - 1\right)} \\
&\approx \frac{8 \pi h c \, d\lambda}{\lambda^{5}\, e^{\frac{h c}{\lambda k T}}} \\
&= \frac{8 \pi h c}{\lambda^{5}} \, e^{- \frac{h c}{\lambda k T}} \, d\lambda
\end{align}
\]
Wein's Displacement Law :
It states that as the temperature of a black body is raised, the maximum intensity of the radiation emitted by it is displaced towards shorter wavelengths, such that the product of the wavelength at which maximum intensity is radiated and the corresponding temperature is constant.
Mathematically $\lambda_{m} T=b$\\
Where $b$ is Wien's constant, $b=2.898 \times 10^{-3}\ \mathrm{m\,K}$.
Derivation of Wien's Displacement Law from Planck's Law :
When intensity of radiation is maximum
$$
\begin{aligned}
& \frac{d}{d \lambda}\left(E_{\lambda}\right)=\frac{d}{d \lambda}\left(\frac{8 \pi h c \lambda^{-5}}{e^{\frac{h c}{\lambda k T}}-1}\right)=0 \\
& \Rightarrow-5 \lambda^{-6}\left(e^{\frac{h c}{\lambda k T}}-1\right)^{-1}+\lambda^{-5}\left(e^{\frac{h c}{\lambda k T}}-1\right)^{-2} e^{\frac{h c}{\lambda k T}} \frac{h c}{k T} \lambda^{-2}=0 \\
& \Rightarrow-\frac{5}{\lambda^{6}} \frac{1}{e^{\frac{h c}{\lambda k T}}-1}+\frac{1}{\lambda^{7}} \frac{h c}{k T\left(e^{\frac{h c}{\lambda k T}}-1\right)^{2}}=0 \\
& \Rightarrow \frac{5}{\lambda^{6}}=\frac{1}{e^{\frac{h c}{\lambda k T}}-1}\left(\frac{h c}{k T}\right) \frac{1}{\lambda^{7}} \\
& \Rightarrow e^{\frac{h c}{\lambda k T}}-1=\frac{e^{\frac{h c}{\lambda k T}} h c}{5 \lambda k T} \\
& \Rightarrow e^{\frac{h c}{\lambda k T}}\left(1-\frac{h c}{5 \lambda k T}\right)=1 \\
& \Rightarrow e^{x}\left(1-\frac{x}{5}\right)=1 \\
& \Rightarrow x=4.965 \\
& \Rightarrow \frac{h c}{\lambda k T}=4.965 \\
& \Rightarrow \lambda_{m} T=\frac{h c}{4.965\, k}=b=2.898 \times 10^{-3}\ \mathrm{m\,K} \\
& \therefore \lambda_{m} \propto \frac{1}{T}
\end{aligned}
$$
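The transcendental step $e^{x}(1-x/5)=1$ can be checked numerically; a minimal sketch using bisection:

```python
# Solve e^x (1 - x/5) = 1 and recover Wien's constant b = hc/(x k).
import math

def f(x):
    return math.exp(x) * (1 - x / 5) - 1

lo, hi = 4.0, 5.0                      # f(4) > 0, f(5) < 0, so the root lies here
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
x = 0.5 * (lo + hi)

h, c, k = 6.626e-34, 2.998e8, 1.381e-23
b = h * c / (x * k)
print(f"x = {x:.4f}")           # ~4.965
print(f"b = {b:.3e} m K")       # ~2.898e-3 m K
```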
Stefan's Law :
It states that the total energy radiated per unit area per unit time by a black body at any temperature T is directly proportional to the fourth power of its absolute temperature.
Mathematically: $E \propto T^{4}$
$$
\text{or} \qquad E=\sigma T^{4} \qquad \left(\sigma=5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}},\ \text{called Stefan's constant}\right)
$$
Proof :
Energy density in the frequency range $f$ and $f+d f$
$$
\begin{aligned}
u(f) d f&=\frac{E(f) d f}{V}=\frac{8 \pi h f^{3} d f}{c^{3}\left(e^{\frac{h f}{k T}}-1\right)} \\
\Rightarrow u&=\int_{0}^{\infty} u(f)\, d f
=\frac{8 \pi h}{c^{3}}\left(\frac{k T}{h}\right)^{3} \frac{k T}{h} \int_{0}^{\infty} \frac{x^{3} d x}{e^{x}-1} \qquad \left(\text{putting } \frac{h f}{k T}=x \Rightarrow f=\frac{x k T}{h},\ d f=\frac{k T}{h} d x\right) \\
& =\frac{8 \pi k^{4} T^{4}}{c^{3} h^{3}} \int_{0}^{\infty} \frac{x^{3} d x}{e^{x}-1} \\
& =\frac{8 \pi k^{4} T^{4}}{c^{3} h^{3}} \int_{0}^{\infty} \frac{x^{3} e^{-x} d x}{1-e^{-x}} \\
& =\frac{8 \pi k^{4} T^{4}}{c^{3} h^{3}} \int_{0}^{\infty} x^{3} e^{-x}\left(1+e^{-x}+e^{-2 x}+e^{-3 x}+\ldots\right) d x \\
& =\frac{8 \pi k^{4} T^{4}}{c^{3} h^{3}} \int_{0}^{\infty}\left(x^{3} e^{-x}+x^{3} e^{-2 x}+x^{3} e^{-3 x}+x^{3} e^{-4 x}+\ldots\right) d x \\
& =\frac{8 \pi k^{4} T^{4}}{c^{3} h^{3}}\; 6\left(1+\frac{1}{2^{4}}+\frac{1}{3^{4}}+\frac{1}{4^{4}}+\ldots\right) \qquad \left(\text{using } \int_{0}^{\infty} x^{3} e^{-a x} d x=\frac{6}{a^{4}}\right) \\
& =\frac{8 \pi^{5} k^{4}}{15 c^{3} h^{3}}\, T^{4} \qquad \left(\text{since } \textstyle\sum_{n} \frac{1}{n^{4}}=\frac{\pi^{4}}{90}\right) \\
& \therefore u \propto T^{4}
\end{aligned}
$$
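A quick numerical check of the prefactor (a sketch): the radiation constant $a = 8\pi^{5}k^{4}/(15c^{3}h^{3})$ follows directly from the result above, and Stefan's constant is $\sigma = ca/4$, the factor $c/4$ converting energy density into flux radiated per unit area (a standard step not spelled out in these notes).

```python
# Radiation constant a and Stefan-Boltzmann constant sigma from fundamental constants.
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23
a = 8 * math.pi ** 5 * k ** 4 / (15 * c ** 3 * h ** 3)
sigma = c * a / 4
print(f"a     = {a:.3e} J m^-3 K^-4")      # ~7.57e-16
print(f"sigma = {sigma:.3e} W m^-2 K^-4")  # ~5.67e-8
```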
Number of Photons in Volume $\mathbf{V}$ at any temperature T is :
$$
\begin{aligned}
N&=\int_{0}^{\infty} n(f) d f \\
& =\int_{0}^{\infty} \frac{8 \pi V f^{2} d f}{c^{3}\left(e^{\frac{h f}{k T}}-1\right)} \\
& =\frac{8 \pi V}{c^{3}} \int_{0}^{\infty} \frac{f^{2} d f}{e^{\frac{h f}{k T}}-1} \\
& =\frac{8 \pi V}{c^{3}}\left(\frac{k T}{h}\right)^{2}\left(\frac{k T}{h}\right) \int_{0}^{\infty} \frac{x^{2} d x}{e^{x}-1} \qquad \left(\text{substituting } \frac{h f}{k T}=x\right) \\
& =8 \pi V\left(\frac{k T}{h c}\right)^{3} \int_{0}^{\infty} \frac{x^{2} e^{-x} d x}{1-e^{-x}} \\
& =8 \pi V\left(\frac{k T}{h c}\right)^{3} \int_{0}^{\infty} x^{2} e^{-x}\left(1+e^{-x}+e^{-2 x}+e^{-3 x}+\ldots\right) d x \\
& =8 \pi V\left(\frac{k T}{h c}\right)^{3} \int_{0}^{\infty}\left(x^{2} e^{-x}+x^{2} e^{-2 x}+x^{2} e^{-3 x}+x^{2} e^{-4 x}+\ldots\right) d x \\
& =8 \pi V\left(\frac{k T}{h c}\right)^{3} \times 2 \times \sum_{n=1}^{\infty} \frac{1}{n^{3}} \qquad \left(\text{using } \int_{0}^{\infty} x^{m} e^{-a x} d x=\frac{m !}{a^{m+1}}\right)
\end{aligned}
$$
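Evaluating this result numerically (a sketch; the temperature is an arbitrary illustrative choice):

```python
# Photon number density N/V = 16*pi*zeta(3)*(kT/hc)^3, with zeta(3) from a truncated sum.
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23
zeta3 = sum(1 / n ** 3 for n in range(1, 10001))   # ~1.202

T = 300.0                                          # K, illustrative
n_photon = 16 * math.pi * zeta3 * (k * T / (h * c)) ** 3
print(f"N/V ~ {n_photon:.2e} photons/m^3 at T = {T} K")   # ~5e14
```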
WHAT IS BOSE-EINSTEIN CONDENSATION ? DERIVE BOSE TEMPERATURE
Ans : It is the fifth state of matter, in which all the bosons share a common minimum-energy state.\\
When bosons are cooled to extremely low temperatures they fall, or condense, into the lowest accessible quantum state. This state is called a BE condensate.\\
This phenomenon can explain the superfluidity of liquid helium.
Thus,
\[
f(E) = \frac{1}{A \, e^{E/kT}}
\]
for a Maxwell-Boltzmann gas at \( A \gg 1 \), where
\[
A = e^{\alpha} = \frac{V}{N} \left( \frac{2\pi m k T}{h^2} \right)^{3/2}
\]
For a Bose-Einstein Gas:
\( A \geq 1 \), so the chemical potential is negative or zero.
\( A > 1 \) when \( T > T_B \) and \( A = 1 \) when \( T \leq T_B \).
\[
\begin{align}
A &= 1 = \frac{V}{N} \left( \frac{2\pi m k T_B}{h^2} \right)^{3/2} \quad (\text{at } T = T_B) \\
\Rightarrow N &= V \left( \frac{2\pi m k T_B}{h^2} \right)^{3/2} \\
\Rightarrow \frac{N_{\text{Excited}}}{N_{\text{Total}}} &= \frac{V \left( \frac{2\pi m k T}{h^2} \right)^{3/2}}{V \left( \frac{2\pi m k T_B}{h^2} \right)^{3/2}} = \left( \frac{T}{T_B} \right)^{3/2} \\
\Rightarrow N_{\text{Ground}} &= N_{\text{Total}} \left( 1 - \left( \frac{T}{T_B} \right)^{3/2} \right)
\end{align}
\]
This shows that when \( T < T_B \), a large number of particles accumulate in the ground state. At \( T = 0\,K \), all particles are in the ground state.
(This was first achieved in a laboratory in 1995 by Eric Cornell and Carl Wieman at the University of Colorado at Boulder (NIST–JILA lab), for which they were awarded the Nobel Prize in 2001.)
WHAT IS BOSE-EINSTEIN CONDENSATION?
Bose-Einstein Condensation and Bose Temperature
Definition:
Bose-Einstein Condensation (BEC) is a quantum phenomenon in which a gas of bosons at low density, when cooled to a sufficiently low temperature, locks together to share a common minimum energy state.\\
This state of matter is called the \textbf{Bose-Einstein Condensate} — often referred to as the \textbf{fifth state of matter}.\\
In BEC, the bosons condense in momentum space, with their wavefunctions overlapping and becoming indistinguishable from each other.
The theoretical prediction was given by \textbf{Satyendra Nath Bose} in 1924 and later extended by \textbf{Albert Einstein}. The phenomenon was first observed experimentally in \textbf{1995} (in Rubidium-87 and Sodium-23) and was recognised with the Nobel Prize in \textbf{2001}.\\
This phenomenon explains key quantum behaviors like:
Superfluidity in liquid Helium
Superconductivity in materials
Derivation of Bose Temperature \( T_B \)
The number of bosons in the excited states is given by:
\begin{align*}
N_e &= \int_0^\infty \frac{g(E)}{e^{\beta(E - \mu)} - 1} \, dE
\end{align*}
At the \textbf{critical temperature} (Bose temperature), the chemical potential \( \mu = 0 \). So:
\begin{align*}
N &= N_e = \int_0^\infty \frac{g(E)}{e^{\beta E} - 1} \, dE
\end{align*}
The density of states in 3D is:
\begin{align*}
g(E) &= 2\pi V \left( \frac{2m}{h^2} \right)^{3/2} E^{1/2}
\end{align*}
So:
\begin{align*}
N &= \int_0^\infty \frac{2\pi V \left( \frac{2m}{h^2} \right)^{3/2} E^{1/2}}{e^{\beta E} - 1} \, dE \\
&= 2\pi V \left( \frac{2m}{h^2} \right)^{3/2} (kT)^{3/2} \int_0^\infty \frac{u^{1/2}}{e^u - 1} \, du \quad (\text{Substituting } u = \beta E) \\
&= 2\pi V \left( \frac{2m k T}{h^2} \right)^{3/2} \Gamma(3/2)\, \zeta(3/2)
\end{align*}
Since \( \Gamma(3/2) = \frac{\sqrt{\pi}}{2} \), and \( \zeta(3/2) \approx 2.612 \), we have:
\begin{align*}
N &= V \left( \frac{2\pi m k T}{h^2} \right)^{3/2} \zeta(3/2)
\end{align*}
Solving for \( T \) gives the \textbf{Bose temperature} \( T_B \):
\begin{align*}
T_B &= \frac{h^2}{2\pi m k} \left( \frac{N}{V \zeta(3/2)} \right)^{2/3}
\end{align*}
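A small numerical sketch of this formula, for an assumed, purely illustrative case (Rb-87 atoms at a number density of $10^{20}\ \mathrm{m^{-3}}$):

```python
# Bose temperature T_B = (h^2 / 2*pi*m*k) * (n / zeta(3/2))^(2/3) for assumed values.
import math

h, k = 6.626e-34, 1.381e-23
m = 87 * 1.661e-27          # mass of a Rb-87 atom, kg (assumed example species)
n = 1.0e20                  # assumed number density, m^-3
zeta_3_2 = 2.612

T_B = (h ** 2 / (2 * math.pi * m * k)) * (n / zeta_3_2) ** (2 / 3)
print(f"T_B ~ {T_B * 1e9:.0f} nK")   # a few hundred nanokelvin
```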
Proof of BEC Below \( T_B \)
When \( T < T_B \), the number of particles in excited states becomes:
\begin{align*}
N_e &= N \left( \frac{T}{T_B} \right)^{3/2}
\end{align*}
Then, the number of particles in the ground state is:
\begin{align*}
N_0 &= N - N_e = N \left[ 1 - \left( \frac{T}{T_B} \right)^{3/2} \right]
\end{align*}
Conclusion:
As \( T \to 0 \), almost all bosons fall into the ground state:
\[
\lim_{T \to 0} N_0 = N
\]
This marks the formation of a Bose-Einstein Condensate — a macroscopic occupation of the lowest energy state, a purely quantum phenomenon.\\
Derive Bose Temperature and Prove that below Bose temperature The number of bosons will increase in the ground state to show the Bose-Einstein Condensation
ANS :\\
It is a quantum phenomenon in which a gas of bosons of low density, when cooled to a sufficiently low temperature, locks together to share a common minimum-energy state.
This state of matter is called the Bose-Einstein Condensate.
It is called as the 5th state of matter.
The bosons condense in the momentum space with their wavefunctions overlapping with each other.
The theory for the possibility of such a state of matter was given by S. N. Bose in 1924, but it was realised in the laboratory only 71 years later, in 1995 and 2001.
This phenomenon can explain the superfluidity of liquid helium and the phenomenon of superconductivity.
BOSE EINSTEIN GAS AT ANY TEMPERATURE T
$$
\begin{aligned}
N&=\int_{0}^{\infty} f(E) g(E) d E \\
& =\int_{0}^{\infty} \frac{1}{e^{(\alpha+\beta E)}-1} g(E) d E \\
& =\int_{0}^{\infty} \frac{1}{e^{\alpha} e^{\beta E}-1}\left(2 \pi V\left(\frac{2 m}{h^{2}}\right)^{3 / 2} E^{1 / 2}\right) d E \\
& =\int_{0}^{\infty} \frac{1}{z^{-1} e^{\beta E}-1}\left(2 \pi V\left(\frac{2 m}{h^{2}}\right)^{3 / 2} E^{1 / 2}\right) d E \\
& =2 \pi V\left(\frac{2 m}{h^{2}}\right)^{3 / 2}(k T)^{1 / 2}(k T) \int_{0}^{\infty} \frac{u^{1 / 2}}{z^{-1} e^{u}-1} d u \\
& (\beta E=u \Rightarrow E=u k T)
\end{aligned}
$$
$$
\begin{aligned}
& =2 \pi V\left(\frac{2 m k T}{h^{2}}\right)^{3 / 2} \frac{\sqrt{\pi}}{2} \frac{2}{\sqrt{\pi}} \int_{0}^{\infty} \frac{u^{3 / 2-1}}{z^{-1} e^{u}-1} d u \\
& =V\left(\frac{2 m \pi k T}{h^{2}}\right)^{3 / 2} \frac{2}{\sqrt{\pi}} \int_{0}^{\infty} \frac{u^{3 / 2-1}}{z^{-1} e^{u}-1} d u \\
& =V\left(\frac{2 m \pi k T}{h^{2}}\right)^{3 / 2} \frac{1}{\Gamma(3 / 2)} \int_{0}^{\infty} \frac{u^{3 / 2-1}}{z^{-1} e^{u}-1} d u \\
& =V\left(\frac{2 m \pi k T}{h^{2}}\right)^{3 / 2} F_{3 / 2}(z) \\
& =\frac{V}{\left(\frac{h^{2}}{2 m \pi k T}\right)^{3 / 2}} F_{3 / 2}(z) \\
& =\frac{V}{\lambda^{3}} F_{3 / 2}(z)
\end{aligned}
$$
where $$F_{n}(z)=\dfrac{1}{\Gamma(n)} \int_{0}^{\infty} \dfrac{u^{n-1}}{z^{-1} e^{u}-1} d u$$
SIMPLIFICATION OF THE FUNCTION IN TERMS OF THE RIEMANN ZETA FUNCTION
$$
\begin{aligned}
& F_{n}(z)=\frac{1}{\Gamma(n)} \int_{0}^{\infty} \frac{u^{n-1}}{z^{-1} e^{u}\left(1-\frac{1}{z^{-1} e^{u}}\right)} d u \\
& =\frac{1}{\Gamma(n)} \int_{0}^{\infty} \frac{u^{n-1}}{z^{-1} e^{u}}\left(1-\frac{1}{z^{-1} e^{u}}\right)^{-1} d u \\
& =\frac{1}{\Gamma(n)} \int_{0}^{\infty} z \frac{u^{n-1}}{e^{u}}\left(1+\frac{z}{e^{u}}+\frac{z^{2}}{e^{2 u}}+\ldots\right) d u \\
& =\frac{1}{\Gamma(n)} \int_{0}^{\infty} z \frac{u^{n-1}}{e^{u}} \sum_{r=1}^{r=\infty}\left(\frac{z}{e^{u}}\right)^{r-1} d u \\
& =\frac{1}{\Gamma(n)} \int_{0}^{\infty} \sum_{r=1}^{r=\infty}\left(\frac{z}{e^{u}}\right)^{r} u^{n-1} d u \\
& =\frac{1}{\Gamma(n)} \int_{0}^{\infty} \sum_{r=1}^{r=\infty} z^{r} e^{-r u} u^{n-1} d u \quad \text { Taking } r u=x \\
& =\frac{1}{\Gamma(n)} \int_{0}^{\infty} \sum_{r=1}^{r=\infty} z^{r} e^{-x}\left(\frac{x}{r}\right)^{n-1} \frac{d x}{r}\\
& =\frac{1}{\Gamma(n)} \int_{0}^{\infty} \sum_{r=1}^{r=\infty} z^{r} e^{-x} x^{n-1} \frac{d x}{r^{n}} \\
& =\frac{1}{\Gamma(n)} \int_{0}^{\infty} e^{-x} x^{n-1} d x \sum_{r=1}^{r=\infty} \frac{z^{r}}{r^{n}} \\
& =\sum_{r=1}^{r=\infty} \frac{z^{r}}{r^{n}}
\end{aligned}
$$
Hence,
\begin{align*}
F_{n}(z)&=\sum_{r=1}^{\infty} \frac{z^{r}}{r^{n}}\\
\Rightarrow N&=\frac{V}{\lambda^{3}} F_{3 / 2}(z)
\end{align*}
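As a quick check of this series (a sketch; the truncation length is arbitrary), note that at $z=1$ it reduces to the Riemann zeta function:

```python
# Truncated-series evaluation of F_n(z) = sum_{r>=1} z^r / r^n.
def F(n, z, terms=200000):
    return sum(z ** r / r ** n for r in range(1, terms + 1))

print(f"F_3/2(1)   ~ {F(1.5, 1.0):.3f}")   # ~2.61 (zeta(3/2) = 2.612; slow convergence)
print(f"F_5/2(1)   ~ {F(2.5, 1.0):.3f}")   # ~1.341 (zeta(5/2))
print(f"F_3/2(0.5) ~ {F(1.5, 0.5):.3f}")   # ~0.625
```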
Since
\begin{align*}
U &= \int_{0}^{\infty} f(E) g(E) E \, dE \\
&= \int_{0}^{\infty} \frac{1}{z^{-1} e^{\beta E}-1} \cdot 2 \pi V \left( \frac{2m}{h^2} \right)^{3/2} E^{3/2} \, dE \\
&= 2 \pi V \left( \frac{2m}{h^2} \right)^{3/2} (kT)^{3/2}\, kT \int_{0}^{\infty} \frac{u^{3/2}}{z^{-1} e^{u} - 1} \, du \quad (\beta E = u) \\
&= 2 \pi V \left( \frac{2m kT}{h^2} \right)^{3/2} kT\, \Gamma(5/2)\, \frac{1}{\Gamma(5/2)} \int_{0}^{\infty} \frac{u^{5/2 - 1}}{z^{-1} e^{u} - 1} \, du \\
&= V \left( \frac{2\pi m kT}{h^2} \right)^{3/2} \frac{3kT}{2} \, F_{5/2}(z) \qquad \left(\Gamma(5/2)=\tfrac{3\sqrt{\pi}}{4}\right) \\
\Rightarrow U &= \frac{3kT V}{2 \lambda^3} F_{5/2}(z)
\end{align*}
BOSE-EINSTEIN GAS AT $T \geq T_{B}$
At sufficiently high temperature:
All the particles are in the excited states.
The number of particles in the ground state with energy $E=0$ is negligibly small.
A. Specific Heat
Using the high-temperature expansion $U=\frac{3 N k T}{2}\left(1-\frac{1}{4 \sqrt{2}} \frac{N}{V}\left(\frac{h^{2}}{2 \pi m k T}\right)^{3 / 2}+\ldots\right)$, the specific heat is:
$$
\begin{aligned}
C_{V} & =\left(\frac{\partial U}{\partial T}\right)_{V}=\frac{3 N k}{2}\left(1-\frac{1}{4 \sqrt{2}} \frac{N}{V}\left(\frac{h^{2}}{2 \pi m k}\right)^{3 / 2} \frac{\partial}{\partial T}\left(\frac{1}{\sqrt{T}}\right)\right) \\
& =\frac{3 N k}{2}\left(1+\frac{1}{8 \sqrt{2}} \frac{N}{V}\left(\frac{h^{2}}{2 \pi m k T}\right)^{3 / 2}\right)
\end{aligned}
$$
The first term is that of the classical ideal gas.
The second term is the first-order correction due to the deviation from classical behaviour. When the temperature is sufficiently high, the correction term becomes very small.\\
B. Pressure
$$
\begin{aligned}
& P=\frac{2}{3 V} U=\frac{2}{3 V} \frac{3 N k}{2}\left(T-\frac{1}{4 \sqrt{2}} \frac{N}{\sqrt{T V}}\left(\frac{h^{2}}{2 \pi m k}\right)^{3 / 2}\right) \\
& =\frac{N k}{V}\left(T-\frac{1}{4 \sqrt{2}} \frac{N}{\sqrt{T} V}\left(\frac{h^{2}}{2 \pi m k}\right)^{3 / 2}\right) \\
& =\frac{N k T}{V}\left(1-\frac{1}{4 \sqrt{2}} \frac{N}{V}\left(\frac{h^{2}}{2 \pi m k T}\right)^{3 / 2}\right) \\
& =\frac{R T}{V}\left(1-\frac{1}{4 \sqrt{2}} \frac{N}{V}\left(\frac{h^{2}}{2 \pi m k T}\right)^{3 / 2}\right) \\
& \Rightarrow P V=R T\left(1-\frac{1}{4 \sqrt{2}} \frac{N}{V}\left(\frac{h^{2}}{2 \pi m k T}\right)^{3 / 2}\right)
\end{aligned}
$$
The first term gives the classical ideal-gas result and the second is the first quantum correction. The above expression is valid for all temperatures. Since the number of particles in a state at any temperature cannot be negative,
$$
A \geq 1 \Rightarrow e^{-\mu / k T} \geq 1 \Rightarrow-\mu / k T \geq 0 \Rightarrow \mu \leq 0
$$
since $A=e^{\alpha}=e^{-\mu / k T}$
At sufficiently low temperature $\mu \approx-10^{-38} \mathrm{~erg}$, which is very small and can be taken to be zero. Thus the chemical potential of bosons can be negative or zero.\\
It is zero for photons.
It can be positive or negative for fermions, as well as for a Maxwell--Boltzmann gas.
The chemical potential at temperatures greater than the Bose temperature is negative. So
$$
\text { A>1, when } T>T_{B}
$$
But when the temperature is lowered below $T_{B}$, the chemical potential becomes 0.
So
$A=1$, when $T \leq T_{B}$.
Cases:
\textbf{Case 1:} When \( T > T_B \)\\
The system is entirely in the gaseous phase. All the particles occupy excited states only.\\
\textbf{Case 2:} When \( T = T_B \)\\
\begin{align*}
N_{\text{ground}} &= N_{\text{total}} \left( 1 - \left( \frac{T_B}{T_B} \right)^{3/2} \right) \\
&= N_{\text{total}} (1 - 1) = 0
\end{align*}
So, no particles occupy the ground state. All particles are in the excited states.\\
Case 3:
When \( T < T_B \)\\
The system may be regarded as a mixture of two phases:
1. A Bose-Einstein condensate (ground state occupation).
2. A thermal gas (excited states).
a gaseous phase in the excited states, with number of particles $N_{e}=N_{\text {Total }}\left(\frac{T}{T_{B}}\right)^{3 / 2}$;
a condensed phase, with number of particles in the ground state $N_{0}=N_{\text {Total }}-N_{e}=N_{\text {Total }}\left(1-\left(\frac{T}{T_{B}}\right)^{3 / 2}\right)$.
A large number of particles are in the ground state, and some are in the excited states too.
Case 4
: When $\mathrm{T}=0\left(T \rightarrow 0, N_{0} \rightarrow N_{\text {total }} \rightarrow N\right)$\\
Since $N_{\text {ground }}=N_{\text {Total }}\left(1-\left(\frac{0}{T_{B}}\right)^{3 / 2}\right)=N_{\text {total }}$\\
All the particles occupy the ground state. The condensation of the particles takes place in momentum space, unlike the condensation of a gas in real space. The wavefunctions of the particles overlap with each other because of their increasing de Broglie wavelength. The state is called a Bose-Einstein Condensate and the process Bose-Einstein Condensation.\\
(Theoretical prediction in 1924; practical realisation in the laboratory in 1995 and 2001.)
Graphical Explanation
$f(E)=\frac{1}{e^{\alpha+\beta E}-1}=\frac{1}{e^{\alpha} e^{E / k T}-1}=\frac{1}{A e^{E / k T}-1}$
Thus $f(E) \approx \frac{1}{A e^{E / k T}}$ for an MB gas, where $A \ggg 1$ and $A=e^{\alpha}=\frac{V}{N}\left(\frac{2 \pi m k T}{h^{2}}\right)^{3 / 2}$\\
For a Bose-Einstein gas $A \geq 1$; therefore the chemical potential is negative or zero.\\
For a Bose-Einstein gas $A>1$ when $\mathrm{T}>\mathrm{T}_{\mathrm{B}} \quad \& \quad \mathrm{~A}=1$ when $T \leq T_{B}$.\\
$N_{\text {Total }}=N_{\text {Ground }}+N_{\text {excited }}$\\
$N_{\text {Total }}=N_{\text {excited }} \quad$ for $T \geq T_{B}$\\
$N_{\text {Total }}=N_{\text {Ground }} \quad$ for $T=0$
WHAT are PHONONS
Phonon is the quantum of lattice vibrational energy (elastic wave).
It has integral spin.
It obeys Bose-Einstein statistics and is made of indistinguishable particles.
Phonons travel with the speed of sound in a solid, unlike photons which travel at the speed of light.
The number of phonons is not conserved; phonons can be created and destroyed.
They are neutral.
Phonons exhibit both wave and particle characteristics.
Vibrational spectrum of a phonon varies in the range \( 10^4 \) to \( 10^{12} \) Hz.
Energy of a phonon is given by: \( E = \hbar \omega \)
Phonons play a major role in many physical properties of condensed matter, such as thermal and electrical conductivity.
The study of phonons is a crucial part of condensed matter physics.
(The concept of the phonon was introduced by Igor Tamm (1895–1971) in 1932. He was later awarded the Nobel Prize in 1958, for the theory of Cherenkov radiation.)
Classical Theory (Dulong and Petit's Law, 1819)
The specific heat is the same for all elementary solids: about 6 calories per mole per kelvin, i.e. $3R$, at room temperature. Specific heat capacity is defined as the amount of heat required to raise the temperature of a unit mass of a body through one unit of temperature. In solids, the energy supplied is taken up partly by the atoms, which vibrate simple-harmonically about their lattice sites, and partly by the free electrons, which move to excited states. Since the energy absorbed by the electrons is very small, the heat capacity of the solid is due to the atomic vibrations at the lattice sites only.
Atoms in a solid vibrate freely about their mean positions like harmonic oscillators.
All the oscillators vibrate with the same frequency.
Their energies are different as they vibrate with different amplitudes.
The energy of an oscillator can take values from 0 to infinity.
The internal energy of a solid is due to the sum of vibrational motion of atoms and thermal excitation of electrons to higher energy states.
\[
U_{\text{solid}} = U_{\text{lattice}} + U_{\text{electron}}
\]
Thus,
\[
C_V = \left( \frac{dU}{dT} \right)_V = \frac{dU_{\text{lattice}}}{dT} + \frac{dU_{\text{electron}}}{dT}
= C_{\text{lattice}} + C_{\text{electron}}
\]
The contribution from electrons is usually very small and can often be neglected.
$$
C_{V}=C_{\text {lattice }}
$$
DERIVATION OF SPECIFIC HEAT (DULONG-PETIT, CLASSICAL METHOD):
Let us derive the MEAN ENERGY of the solid first.
Since each atom is a simple harmonic oscillator, its total energy is
$$
E=\frac{p^{2}}{2 m}+\frac{1}{2} m \omega^{2} q^{2}=f(p)+f(q)
$$
Since $n(E)=e^{-\alpha-\beta E}=A e^{-\beta E}$, applying the Boltzmann distribution of energy, the average energy of an oscillator is :
$$
\begin{aligned}
\Rightarrow\langle E\rangle&=\frac{\int E e^{-\frac{E}{k T}} d E}{\int e^{-\frac{E}{k T}} d E}=\frac{\int(f(p)+f(q)) e^{-\left(\frac{f(p)+f(q)}{k T}\right)} d p d q}{\int e^{-\left(\frac{f(p)+f(q)}{k T}\right)} d p d q} \\
& =\frac{\int f(p) e^{-\frac{f(p)}{k T}} d p \int e^{-\frac{f(q)}{k T}} d q+\int f(q) e^{-\frac{f(q)}{k T}} d q \int e^{-\frac{f(p)}{k T}} d p}{\int e^{-\frac{f(p)}{k T}} d p \int e^{-\frac{f(q)}{k T}} d q}\\
& =\frac{\int f(p) e^{-\frac{f(p)}{k T}} d p}{\int e^{-\frac{f(p)}{k T}} d p}+\frac{\int f(q) e^{-\frac{f(q)}{k T}} d q}{\int e^{-\frac{f(q)}{k T}} d q} \\
& =\frac{\int_{-\infty}^{\infty} \frac{p^{2}}{2 m} e^{-\frac{p^{2}}{2 m k T}} d p}{\int_{-\infty}^{\infty} e^{-\frac{p^{2}}{2 m k T}} d p}+\frac{\int_{-\infty}^{\infty}\left(\frac{1}{2} m \omega^{2} q^{2}\right) e^{-\frac{m \omega^{2} q^{2}}{2 k T}} d q}{\int_{-\infty}^{\infty} e^{-\frac{m \omega^{2} q^{2}}{2 k T}} d q} \\
& =\frac{k T \int_{-\infty}^{\infty} x^{2} e^{-x^{2}} d x}{\int_{-\infty}^{\infty} e^{-x^{2}} d x}+\frac{k T \int_{-\infty}^{\infty}\left(y^{2}\right) e^{-y^{2}} d y}{\int_{-\infty}^{\infty} e^{-y^{2}} d y} \quad\left(\text { taking } x^{2}=\frac{p^{2}}{2 m k T}, y^{2}=\frac{m \omega^{2} q^{2}}{2 k T}\right) \\
& =k T \frac{\frac{1}{2} \sqrt{\pi}}{\sqrt{\pi}}+k T \frac{\frac{1}{2} \sqrt{\pi}}{\sqrt{\pi}} \quad\left(\because \int_{-\infty}^{\infty} e^{-a x^{2}} d x=\sqrt{\frac{\pi}{a}}, \int_{-\infty}^{\infty} x^{2} e^{-a x^{2}} d x=\frac{1}{2 a} \sqrt{\frac{\pi}{a}}\right) \\
& =\frac{k T}{2}+\frac{k T}{2}=k T \\
\Rightarrow E_{\text {total }}&=3 \mathrm{NkT}
\end{aligned}
$$
Specific heat of the solid is :
$$
C_{V}=\frac{d E_{\text {total }}}{d T}=\frac{d}{d T}(3 N k T)=3 N k=3 R \approx 25\ \mathrm{J\,mol^{-1}K^{-1}} \approx 6\ \mathrm{cal\,mol^{-1}K^{-1}}
$$
Thus the specific heat is constant at all temperatures and equal to about \( 25~\mathrm{J\,mol^{-1}K^{-1}} \).
The law was originally stated as about 6 calories per mole per kelvin; it was only later identified with \( 3R \), since \( R \) had not been defined at that time.
This law is valid at high temperatures for most solids.
It is valid for elements of atomic weight greater than 40
Limitations:
1. It fails for light elements such as Boron, Beryllium, and Carbon (diamond) at low temperatures.
2. Experimentally, $\mathrm{C}_{\mathrm{v}}$ approaches zero at low temperatures, whereas the law predicts a constant value of $3R$.
EINSTEIN THEORY OF SPECIFIC HEAT (1907):
In every crystal an atom is surrounded by a group of first nearest neighbours, second nearest neighbours, and so on. Every atom vibrates in the vicinity of its lattice point (equilibrium position), in a force field exerted by all its neighbours. Each atom vibrates independently, at a single constant frequency, in three-dimensional space. Each atom can be treated as a simple harmonic oscillator, and the energy of the nth oscillator is given by :
$$
E_{n}=\left(n+\frac{1}{2}\right) h f \quad \text { Where } \mathrm{n}=0,1,2,3 \ldots .
$$
Mean energy of the oscillator is :
As energy of every harmonic oscillator is quantised and energy of nth oscillator is :
$$
U_{n}=\left(n+\frac{1}{2}\right) h f
$$
Then the average energy of such oscillator is:
$$
\begin{aligned}
\langle U\rangle&=\dfrac{\sum_{n=0}^{n=\infty} U_{n} e^{-\left(\dfrac{U_{n}}{k T}\right)}}{\sum_{n=0}^{n=\infty} e^{-\left(\frac{U_{n}}{k T}\right)}}\\
& =\frac{\sum_{0}^{\infty}\left(n+\frac{1}{2}\right) h f e^{-\frac{\left(n+\frac{1}{2}\right) h f}{k T}}}{\sum_{0}^{\infty} e^{-\frac{\left(n+\frac{1}{2}\right) h f}{k T}}} \\
&= h f \frac{\sum_{0}^{\infty} n e^{-\frac{n h f}{k T}} e^{-\frac{h f}{2 k T}}}{\sum_{0}^{\infty} e^{\frac{-n h f}{k T}} e^{-\frac{h f}{2 k T}}}+\frac{1}{2} h f \frac{\sum_{0}^{\infty} e^{-\frac{n h f}{k T}} e^{-\frac{h f}{2 k T}}}{\sum_{0}^{\infty} e^{\frac{-n h f}{k T}} e^{-\frac{h f}{2 k T}}} \\
&= h f \frac{\sum_{0}^{\infty} n e^{-\frac{n h f}{k T}}}{\sum_{0}^{\infty} e^{\frac{-n h f}{k T}}}+\frac{1}{2} h f \frac{\sum_{0}^{\infty} e^{-\frac{n h f}{k T}}}{\sum_{0}^{\infty} e^{\frac{-n h f}{k T}}}=h f\left(\frac{\sum_{0}^{\infty} n e^{-\frac{n h f}{k T}}}{\sum_{0}^{\infty} e^{\frac{-n h f}{k T}}}+\frac{1}{2}\right) \\
&= h f\left(\frac{0+e^{-x}+2 e^{-2 x}+3 e^{-3 x}+4 e^{-4 x}+\ldots}{1+e^{-x}+e^{-2 x}+e^{-3 x}+e^{-4 x}+\ldots}+\frac{1}{2}\right) \quad \text{taking } x=\frac{h f}{k T} \\
&=h f\left(-\frac{d}{d x} \ln \left(1+e^{-x}+e^{-2 x}+e^{-3 x}+e^{-4 x}+\ldots .\right)+\frac{1}{2}\right) \\
&=h f\left(-\frac{d}{d x} \ln \left(1-e^{-x}\right)^{-1}+\frac{1}{2}\right) \\
&=h f\left(\frac{d}{d x} \ln \left(1-e^{-x}\right)+\frac{1}{2}\right)=h f\left(\frac{e^{-x}}{1-e^{-x}}+\frac{1}{2}\right)=h f\left(\frac{1}{2}+\frac{1}{e^{x}-1}\right) \\
&=h f\left(\frac{1}{2}+\frac{1}{e^{h f / k T}-1}\right)
\end{aligned}
$$
As a crystal containing N atoms has 3N harmonic oscillators, the total internal energy of the crystal becomes
$$
\begin{aligned}
U_{\text {total }}& =3 N\langle U\rangle\\
&=3 N\left(\frac{1}{2} h f+\frac{h f}{e^{h f / k T}-1}\right) \\
\Rightarrow U_{\text {total }}&=3 N\left(\frac{1}{2} h f+\frac{h f}{e^{h f / k T}-1}\right)
\end{aligned}
$$
Then specific heat of the crystal at constant volume is :
$$
\begin{aligned}
C_{V}&=\frac{d U_{\text {total }}}{d T}\\
&=0+3 N h f \frac{d}{d T}\left(\frac{1}{e^{h f / k T}-1}\right) \\
& =3 N h f \frac{d}{d T}\left(e^{h f / k T}-1\right)^{-1} \\
& =-(3 N h f)\left(e^{h f / k T}-1\right)^{-2}\left(-e^{h f / k T} \frac{h f}{k T^{2}}\right) \\
& =3 N k\left(\frac{h f}{k T}\right)^{2}\left(\frac{1}{e^{h f / k T}-1}\right)^{2} e^{h f / k T} \\
& =3 N k\left(\frac{\theta_{E}}{T}\right)^{2}\left(\frac{1}{e^{\theta_{E} / T}-1}\right)^{2} e^{\theta_{E} / T} \text { where } h f=k \theta_{E}, \theta_{E} \text { is Einstein Temperature }
\end{aligned}
$$
Special Cases :
(1) When the temperature is very high, \( T \gg \theta_E \): expanding the exponential, \( e^{\theta_E/T} \approx 1 + \theta_E/T \), so \( C_V \to 3Nk = 3R \), recovering the Dulong--Petit value.
(2) When the temperature is very low, \( T \ll \theta_E \): \( C_V \approx 3Nk \left(\frac{\theta_E}{T}\right)^{2} e^{-\theta_E/T} \), so as \( T \to 0 \) the exponential factor \( e^{-\theta_E/T} \to 0 \) and the specific heat \( C_V \to 0 \).
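Both limits can be verified numerically from the $C_V$ expression above (with $Nk = R$ per mole); a minimal sketch:

```python
# Einstein specific heat C_V = 3R (theta_E/T)^2 e^(theta_E/T) / (e^(theta_E/T) - 1)^2.
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def cv_einstein(t):              # t = T / theta_E
    x = 1.0 / t                  # x = theta_E / T
    return 3 * R * x ** 2 * math.exp(x) / (math.exp(x) - 1) ** 2

for t in (0.05, 0.1, 0.5, 1.0, 2.0, 5.0):
    cv = cv_einstein(t)
    print(f"T/theta_E = {t:4.2f}:  C_V = {cv:6.3f} J mol^-1 K^-1  ({cv / (3 * R):.3f} x 3R)")
# C_V -> 0 for T << theta_E and C_V -> 3R for T >> theta_E.
```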
Limitation of Einstein's Theory :
Einstein's theory fails at low temperature: experimentally $C_{V} \propto T^{3}$ at low temperature, instead of $C_{V} \propto e^{-\left(\frac{\theta_{E}}{T}\right)}$.
Einstein's assumption that each atomic oscillator vibrates independently at the same frequency as the others is incorrect. In reality, as per the Debye model, the oscillators are coupled together and vibrate with a wide range of frequencies.
DEBYE'S THEORY OF SPECIFIC HEAT :
DEBYE'S $T^{3}$ LAW (1912), PETER DEBYE (1884-1966, Netherlands)
As Einstein's model failed in the low-temperature region, owing to his assumption that all the atoms vibrate with the same frequency, Peter Debye came up with the idea of phonons. He considered the interaction between the vibrating atoms: each atom does not vibrate independently at a constant frequency. He suggested that the atoms vibrate collectively, as coupled oscillators. Their vibration generates a spectrum of frequencies ranging from 0 to a certain value called the Debye frequency ($f_{D}$); there is an upper limit on the frequency, as not all frequencies are allowed in the solid. There are two transverse vibrations and one longitudinal vibration. The system may be considered as comprising a large number N of non-interacting, identical, linear coupled harmonic oscillators, each of mass m, in thermal equilibrium at temperature T. The collective modes of vibration involving groups of atoms are the possible sound waves which can propagate through the solid. The energy of the lattice vibrations is quantised, and the quantum is called a phonon. If $f_{i}$ is the frequency of the ith degree of freedom then $E_{n}=h f_{i}\left(n+\frac{1}{2}\right)$\\
The number of modes of vibration dN in the frequency range f and $\mathrm{f}+\mathrm{df}$, counting all three directions of polarisation (2 transverse and 1 longitudinal), is\\
$$d N=\frac{9 N}{f_{D}^{3}} f^{2}\, d f, \qquad \text{with } \int_{0}^{f_{D}} d N=3 N .$$
The total vibrational energy of the solid is then :
$$
\begin{aligned}
E&=\int_{0}^{f_{D}}\langle E\rangle\, d N
=\int_{0}^{f_{D}} \frac{h f}{e^{\frac{h f}{k T}}-1}\, \frac{9 N}{f_{D}^{3}} f^{2}\, d f \\
& =\frac{9 N h}{f_{D}^{3}} \int_{0}^{f_{D}} \frac{f^{3}}{e^{\frac{h f}{k T}}-1}\, d f\\
&=\frac{9 N h}{f_{D}^{3}}\left(\frac{k T}{h}\right)^{4} \int_{0}^{x_{m}} \frac{x^{3}}{e^{x}-1}\, d x\\
&=\frac{9 N k T}{f_{D}^{3}}\left(\frac{k T}{h}\right)^{3} \int_{0}^{x_{m}} \frac{x^{3}}{e^{x}-1}\, d x\\
&=\frac{9 R T}{f_{D}^{3}}\left(\frac{k T}{h}\right)^{3} \int_{0}^{\theta_{D} / T} \frac{x^{3}}{e^{x}-1}\, d x \qquad (N k=R \text{ per mole})
\end{aligned}
$$
( where $x=\dfrac{h f}{k T}$, so the upper limit is $x_{m}=\dfrac{h f_{D}}{k T}=\dfrac{\theta_{D}}{T}$, with $k \theta_{D}=h f_{D}$ )
SPECIAL CASES :
Case-1:At high temperature
$$
\begin{aligned}
E &=\frac{9 R T}{f_{D}^{3}}\left(\frac{k T}{h}\right)^{3} \int_{0}^{\theta_{D} / T} \frac{x^{3}}{(1+x+\ldots)-1}\, d x \approx \frac{9 R T}{f_{D}^{3}}\left(\frac{k T}{h}\right)^{3} \int_{0}^{\theta_{D} / T} x^{2}\, d x \\
&=9 R T\left(\frac{k T}{h f_{D}}\right)^{3}\left(\frac{\theta_{D}}{T}\right)^{3} \frac{1}{3} \\
& =9 R T\left(\frac{T}{\theta_{D}}\right)^{3}\left(\frac{\theta_{D}}{T}\right)^{3} \frac{1}{3}=3 R T
\end{aligned}
$$
Thus $$C_{V}=\frac{d E}{d T}=3 R$$
Case-2: At low temperature
\[
\begin{align}
E&=\frac{9 R T}{f_{D}^{3}}\left(\frac{k T}{h}\right)^{3} \int_{0}^{\infty} \frac{x^{3}}{e^{x}-1}\, d x
=\frac{9 R T}{f_{D}^{3}}\left(\frac{k T}{h}\right)^{3} \frac{\pi^{4}}{15} \\
& =9 R T\left(\frac{k T}{h f_{D}}\right)^{3} \frac{\pi^{4}}{15}
=9 R T\left(\frac{T}{\theta_{D}}\right)^{3} \frac{\pi^{4}}{15}\\
&= \frac{9 \pi^{4} R}{15 \, \theta_D^{3}} \, T^{4} \\
\Rightarrow C_V &= \frac{dE}{dT} = \frac{36}{15} \, \pi^{4} R \left( \frac{T}{\theta_D} \right)^{3} \propto T^{3} \\
C_V &\propto T^{3}
\end{align}
\]
Debye's theory is also known as Debye's $T^{3}$ law. It fits both at high and low temperature. It is observed that at high temperature quantum effects are not of much importance as in each of the three theories specific heat at high temperature is $3 R$.
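The full Debye result can also be evaluated numerically to see both limits at once; a minimal sketch using the standard Debye form of $C_V$ (equivalent to differentiating the energy derived above) and a simple trapezoidal rule:

```python
# Debye specific heat C_V = 9R (T/theta_D)^3 * integral_0^{theta_D/T} x^4 e^x/(e^x-1)^2 dx.
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def cv_debye(t, steps=2000):                 # t = T / theta_D
    xm = 1.0 / t                             # upper limit theta_D / T
    def integrand(x):
        if x == 0.0:
            return 0.0                       # integrand -> 0 as x -> 0
        ex = math.exp(x)
        return x ** 4 * ex / (ex - 1) ** 2
    h = xm / steps                           # trapezoidal rule
    s = 0.5 * (integrand(0.0) + integrand(xm)) + sum(integrand(i * h) for i in range(1, steps))
    return 9 * R * t ** 3 * s * h

for t in (0.05, 0.1, 0.5, 1.0, 2.0):
    cv = cv_debye(t)
    print(f"T/theta_D = {t:4.2f}:  C_V = {cv:6.3f} J mol^-1 K^-1  ({cv / (3 * R):.3f} x 3R)")
# At T/theta_D = 0.05 the value is close to (12/5) pi^4 R (T/theta_D)^3, i.e. the T^3 law;
# at T/theta_D >~ 1 it approaches the Dulong-Petit plateau 3R.
```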
CRITICISM OF DEBYE'S THEORY:
The continuum model is valid for long wavelengths (the model assumes only low frequencies are allowed).
Debye's cut-off frequencies should be different, as longitudinal and transverse waves have different speeds.
Debye's temperature is assumed to be independent of actual temperature, but it is found to vary up to 10% with temperature.
Debye's theory assumes that the solid consists of identical atoms, so it is not valid for crystals like NaCl.
Debye's theory neglects interatomic interaction and electronic contribution to lattice specific heat.
The assumption of 3N modes of vibration is also not justifiable, as elastic waves are supposed to have infinite frequencies.
Debye's \( T^3 \) law does not hold well when the temperature is less than 10% of the Debye temperature.
What is a Phase ?
It is defined as any homogeneous and physically distinct part of a thermodynamic system which is separated from other parts of the system by a definite boundary.
What is Phase transition ?
Transformation of a thermodynamic system from one phase to another at a given constant temperature and pressure is called a Phase transition .
What are different types of Phase transition ?
There are two types of Phase transitions
First order
Second order
What is a First Order Phase transitions :
It is the type of phase transition in which the physical state of matter changes from one state to another at constant temperature and pressure. There is a transfer of heat (latent heat) associated with the transition.
Examples .
1. transition of ice at $0^{\circ} \mathrm{C}$ into water at $0^{\circ} \mathrm{C}$ (solid state to liquid state)
2. transition of water at $100^{\circ} \mathrm{C}$ to vapour at $100^{\circ} \mathrm{C}$ (liquid to gaseous state)
Properties of First order transitions :
The molar Gibbs function $G$ has the same value for both phases in equilibrium.
Specific entropy ( S/mass ) changes.
Specific volume ( V/mass ) changes.
Gibbs free energy G, entropy S, volume V, enthalpy H and specific heat vary with temperature.
The first order differential of G (the slope of G) is discontinuous at the transition temperature,
i.e. $\left(\frac{\partial G_{1}}{\partial T}\right)_{P} \neq\left(\frac{\partial G_{2}}{\partial T}\right)_{P} \ \&\ \left(\frac{\partial G_{1}}{\partial P}\right)_{T} \neq\left(\frac{\partial G_{2}}{\partial P}\right)_{T}$
Density changes discontinuously.
How the differentials of $G$ are discontinuous :\\
\begin{align}
G &= H - T S = U + P V - T S \\
dG &= dU + P\,dV + V\,dP - T\,dS - S\,dT \\
dG &= dU + dW + V\,dP - dQ - S\,dT \\
dG &= dU + dW - dQ + V\,dP - S\,dT \\
dG &= 0 + V\,dP - S\,dT \\
G &= G(P, T) \\
\Rightarrow dG &= \left(\frac{\partial G}{\partial P}\right)_{T} dP + \left(\frac{\partial G}{\partial T}\right)_{P} dT \\
\Rightarrow V &= \left(\frac{\partial G}{\partial P}\right)_{T}, \quad S = -\left(\frac{\partial G}{\partial T}\right)_{P}
\end{align}
As V and S change, the first derivatives of G are discontinuous.
Solution :
Let us consider an isolated system which contains two thermodynamic phases A and B of a substance in equilibrium. Phase A is characterised by $\left(E_{1}, V_{1}, N_{1}, S_{1}, T_{1}, \mu_{1}, G_{1}, P_{1}\right)$ and phase B by $\left(E_{2}, V_{2}, N_{2}, S_{2}, T_{2}, \mu_{2}, G_{2}, P_{2}\right)$, where\\
\textbf{Thermodynamic Definitions:}
\begin{align*}
E &= \text{Internal Energy} \\
V &= \text{Volume} \\
N &= \text{Number of Particles} \\
S &= \text{Entropy} \\
T &= \text{Temperature} \\
\mu &= \text{Chemical Potential} \\
G &= \text{Gibbs Free Energy} \\
P &= \text{Pressure}
\end{align*}
\textbf{Since the system is isolated:}
\begin{align*}
V_1 + V_2 &= V \\
N_1 + N_2 &= N \\
E_1 + E_2 &= E
\end{align*}
As the total system is isolated:
\begin{align*}
dV &= 0, \quad dN = 0, \quad dE = 0 \\
dV_1 &= -dV_2 \\
dN_1 &= -dN_2 \\
dE_1 &= -dE_2
\end{align*}
\textbf{As total entropy of the system is: } \( S = S_1 + S_2 \), we have:
\begin{align*}
0 &= dS_1(E_1, V_1, N_1) + dS_2(E_2, V_2, N_2) \\
0 &= \frac{\partial S_1}{\partial E_1} dE_1 + \frac{\partial S_1}{\partial V_1} dV_1 + \frac{\partial S_1}{\partial N_1} dN_1 \\
&\quad + \frac{\partial S_2}{\partial E_2} dE_2 + \frac{\partial S_2}{\partial V_2} dV_2 + \frac{\partial S_2}{\partial N_2} dN_2
\end{align*}
Substituting thermodynamic identities:
\begin{align*}
0 &= \frac{1}{T_1} dE_1 + \frac{P_1}{T_1} dV_1 - \frac{\mu_1}{T_1} dN_1 \\
&\quad + \frac{1}{T_2} dE_2 + \frac{P_2}{T_2} dV_2 - \frac{\mu_2}{T_2} dN_2
\end{align*}
Now using \( dE_2 = -dE_1 \), \( dV_2 = -dV_1 \), \( dN_2 = -dN_1 \):
\begin{align*}
0 &= \frac{1}{T_1} dE_1 + \frac{P_1}{T_1} dV_1 - \frac{\mu_1}{T_1} dN_1 \\
&\quad - \frac{1}{T_2} dE_1 - \frac{P_2}{T_2} dV_1 + \frac{\mu_2}{T_2} dN_1
\end{align*}
Combining terms:
Since this must hold for arbitrary \( dE_1, dV_1, dN_1 \), we conclude:
\begin{align*}
\frac{1}{T_1} - \frac{1}{T_2} &= 0 \\
\frac{P_1}{T_1} - \frac{P_2}{T_2} &= 0 \\
\frac{\mu_2}{T_2} - \frac{\mu_1}{T_1} &= 0
\end{align*}
\textbf{Thus:}
\begin{align*}
T_1 &= T_2, \quad P_1 = P_2, \quad \mu_1 = \mu_2
\end{align*}
Hence when two different phases of a substance are in equilibrium, their temperatures, pressures and chemical potentials must be equal.
This in turn shows that the system is in thermal equilibrium (as $\mathrm{T}_{1}=\mathrm{T}_{2}$), mechanical equilibrium (as $\mathrm{P}_{1}=\mathrm{P}_{2}$) and chemical equilibrium (as $\mu_{1}=\mu_{2}$).
\section*{As the chemical potential is the Gibbs free energy per particle}
\begin{align*}
\mu_1(P, T) &= G_1(P, T), \quad
\mu_2(P, T) = G_2(P, T)
\end{align*}
where $G_1$ and $G_2$ are the Gibbs free energies per particle of the two phases.
\textbf{Then the chemical equilibrium condition gives:}
\begin{align*}
\mu_1(P, T) = \mu_2(P, T)
\;\Rightarrow\; G_1(P, T) = G_2(P, T)
\end{align*}
i.e. the per-particle Gibbs functions of the two phases are equal, irrespective of $N_1$ and $N_2$.
\textbf{As chemical potentials are equal, the total Gibbs free energy is:}
\begin{align*}
G = N_1 G_1 + N_2 G_2
\end{align*}
\textbf{Also,}
\begin{align*}
N = N_1 + N_2
\end{align*}
\textbf{Taking the differential of } \( G \) \textbf{ at constant } \( P \) \textbf{ and } \( T \) (so that \( dG_1 = dG_2 = 0 \), since \( G_1 \) and \( G_2 \) depend only on \( P \) and \( T \)):
\begin{align*}
dG &= dN_1\, G_1 + N_1\, dG_1 + dN_2\, G_2 + N_2\, dG_2 \\
&= G_1\, dN_1 + G_2\, dN_2 \\
&= (G_1 - G_2)\, dN_1 \qquad (\text{using } dN_2 = -dN_1)
\end{align*}
\textbf{In equilibrium \( dG = 0 \) for an arbitrary transfer \( dN_1 \); therefore,}
\begin{align*}
G_1 - G_2 &= 0
\end{align*}
EXPLAIN HOW A FIRST ORDER PHASE TRANSITION IS A DISCONTINUOUS PHASE TRANSITION
The first-order phase transitions are those that involve a latent heat. During such a transition, a system either absorbs or releases a fixed (and typically large) amount of energy. Because energy cannot be instantaneously transferred between the system and its environment, first-order transitions are associated with "mixed-phase regimes" in which some parts of the system have completed the transition and others have not. This phenomenon is familiar to anyone who has boiled a pot of water: the water does not instantly turn into gas, but forms a turbulent mixture of water and water-vapour bubbles. Mixed-phase systems are difficult to study, because their dynamics are violent and hard to control. However, many important phase transitions fall in this category, including the solid/liquid/gas transitions.\\[0pt]
First order phase transitions occur between the triple point and the critical point (excluding the critical point).
WHY THE NAME FIRST ORDER ?
According to the Ehrenfest classification, the order of a phase transition is the order of the lowest differential of G which shows a discontinuity.\\
Hence, for a first order transition, $\left(\frac{\partial G}{\partial T}\right)_{P}$ and $\left(\frac{\partial G}{\partial P}\right)_{T}$ are discontinuous, since the entropy and volume are discontinuous,\\
given by the relations :
$$
\because\left(\frac{\partial G}{\partial T}\right)_{P}=-S \&\left(\frac{\partial G}{\partial P}\right)_{T}=V
$$
\section*{HOW ?}
$$
\begin{aligned}
\because G & =H-T S \\
G= & U+P V-T S \\
\Rightarrow d G & =d U+V d P+P d V-S d T-T d S \\
& =d U+P d V+V d P-S d T-d Q \\
& =d U+d W+V d P-S d T-d Q \\
& =d Q+V d P-S d T-d Q \\
& =V d P-S d T \\
\Rightarrow & \left(\frac{\partial G}{\partial P}\right)_{T}=V \quad \&\left(\frac{\partial G}{\partial T}\right)_{P}=-S
\end{aligned}
$$
In a 2nd order phase transition, the second order differentials of the Gibbs free energy are discontinuous.
Since : \textbf{Second Derivatives of Gibbs Free Energy:}
\begin{align*}
\left(\frac{\partial^2 G}{\partial T^2}\right)_P &= -\frac{C_P}{T} \\
\left(\frac{\partial^2 G}{\partial P^2}\right)_T &= -V k_T \\
\left(\frac{\partial^2 G}{\partial P \partial T}\right) &= V \beta_P
\end{align*}
\textbf{Proof:}
1. Temperature Derivative:
\begin{align*}
\frac{\partial}{\partial T} \left( \frac{\partial G}{\partial T} \right)_P
&= \left( \frac{\partial (-S)}{\partial T} \right)_P \\
&= - \left( \frac{\partial S}{\partial T} \right)_P \\
&= - \frac{1}{T} \left( \frac{\partial Q}{\partial T} \right)_P \qquad (\text{since } dQ = T\, dS) \\
&= - \frac{C_P}{T}
\end{align*}
2. Pressure Derivative:
\begin{align*}
\frac{\partial}{\partial P} \left( \frac{\partial G}{\partial P} \right)_T
&= \left( \frac{\partial V}{\partial P} \right)_T \\
&= -V \left( -\frac{1}{V} \frac{\partial V}{\partial P} \right)_T \\
&= -V k_T
\end{align*}
where
\[
k_T = -\frac{1}{V} \left( \frac{\partial V}{\partial P} \right)_T \quad \text{(Isothermal compressibility)}
\]
3. Mixed Derivative:
\begin{align*}
\frac{\partial}{\partial P} \left( \frac{\partial G}{\partial T} \right)_P
&= - \left(\frac{\partial S}{\partial P}\right)_T \\
&= \left(\frac{\partial V}{\partial T}\right)_P \qquad (\text{Maxwell relation}) \\
&= V \beta_P
\end{align*}
where
\[
\beta_P = \left( \frac{1}{V} \frac{\partial V}{\partial T} \right)_P \quad \text{(Thermal expansion coefficient)}
\]
Discontinuity Of Specific Heat In Phase Transitions:
Specific heat at constant pressure is :
$$
\begin{aligned}
C_{p} & =\left(\frac{\partial Q}{\partial T}\right)_{P}=T\left(\frac{\partial S}{\partial T}\right)_{P}=-T\left(\frac{\partial}{\partial T}\left(\frac{\partial G}{\partial T}\right)\right)_{P} \\
& =-T\left(\frac{\partial^{2} G}{\partial T^{2}}\right)_{P}
\end{aligned}
$$
Also, in terms of the latent heat, $\frac{d L}{d T}=\frac{L}{T}+C_{m f}-C_{m i}$. In first order phase transitions $L \neq 0$, thus the specific heat at constant pressure has a sharp jump.
In first order transitions the discontinuity is as shown below:\\
\includegraphics[max width=\textwidth, center]{2024_12_28_757fe0ed403a0addbd02g-77}
Change in Specific heat in second order Phase transition :\\
\includegraphics[max width=\textwidth, center]{2024_12_28_757fe0ed403a0addbd02g-77(1)}
Phase transitions of the second order show a finite discontinuity in the specific heat $C_{P}$.
What is a DISCONTINUOUS PHASE TRANSITION ?
A phase transition in which the change in entropy is discontinuous at a particular temperature. This happens because of the existence of latent heat,\\
$d S=\frac{L}{T}$.\\
It is seen in First order Phase transition.
What is a CONTINUOUS PHASE TRANSITION ?
A phase transition in which the change in entropy is continuous. There is no latent heat involved.\\
CONDITIONS FOR FIRST ORDER PHASE TRANSFORMATION :
There are changes in entropy ( $\mathrm{S}_{1}$ is not equal to $\mathrm{S}_{2}$ )
There are changes in volume
The first order derivatives of Gibbs function change discontinuously.
$\left(\frac{\partial G}{\partial T}\right)_{P}$ and $\left(\frac{\partial G}{\partial P}\right)_{T}$ are discontinuous, since
$$
\because\left(\frac{\partial G}{\partial T}\right)_{P}=-S \&\left(\frac{\partial G}{\partial P}\right)_{T}=V
$$
Consider a system consisting of two phases of a pure substance at equilibrium. Under these conditions, the two phases are at the same pressure and temperature. Consider the change of state associated with a transfer of dn moles from phase 1 to phase 2 (p, T remaining constant).
Define a SECOND ORDER PHASE TRANSITION (CONTINUOUS PHASE TRANSITION):
The second class of phase transitions are the continuous phase transitions, also called second-order phase transitions. These have no associated latent heat, so the Clausius--Clapeyron equation does not apply to them.\\
CONDITIONS FOR SECOND ORDER PHASE TRANSFORMATION
Both specific entropy ( $\mathrm{S} / \mathrm{mass}$ ) and specific volume ( Volume/mass) do not change in second-order phase transitions.\\
In 2nd order Phase Transition, the second order differential of Gibbs free energy are discontinous.\\
Since : $\quad\left(\frac{\partial^{2} G}{\partial T^{2}}\right)_{P}=-\frac{C_{p}}{T},\left(\frac{\partial^{2} G}{\partial P^{2}}\right)_{T}=-V k_{T}$ and $\left(\frac{\partial^{2} G}{\partial P \partial T}\right)=V \beta_{p}$
Examples of second-order phase transitions :
Ferromagnetic transition,
Superfluid transition
Bose-Einstein condensation.
OTHER TYPES OF PHASE TRANSITION :
Several transitions are known as infinite-order phase transitions. They are continuous but break no symmetries. The most famous example is the Berezinskii-Kosterlitz-Thouless transition in the two-dimensional XY model. Many quantum phase transitions in two-dimensional electron gases belong to this class. Apart from these, there is another phase transition known as the lambda phase transition.
CLAUSIUS-CLAPEYRON EQUATION :
Named after Rudolf Clausius (1822-1888, Germany) \& Benoit Paul Emile Clapeyron (1799-1884, Paris).
What is it ?
An equation that describes how pressure varies with temperature when a system is in equilibrium between two phases (first order).
It is applicable for:
1) Solid phase - Liquid phase transition
2) Liquid phase - Vapour phase transition
3) Solid phase - Vapour phase transition
DERIVATION:
Let $G_1$ and $G_2$ be the (per-mole) Gibbs functions for the two phases of the system which are in equilibrium, so that $G_1 = G_2$ everywhere along the coexistence curve. For a neighbouring point $(P+dP,\ T+dT)$ on the curve, $G_1 + dG_1 = G_2 + dG_2$, hence $dG_1 = dG_2$. Using $dG = V\,dP - S\,dT$ for each phase:
\begin{align*}
V_1\, dP - S_1\, dT &= V_2\, dP - S_2\, dT \\
\Rightarrow \frac{dP}{dT} &= \frac{S_2 - S_1}{V_2 - V_1} = \frac{\Delta S}{\Delta V} = \frac{L/T}{\Delta V}
\end{align*}
since the latent heat is $L = T\,\Delta S$. This is the Clausius-Clapeyron equation. Applied to the different transitions:
1.Solid-Liquid Phase:
\begin{align*}
\frac{dP}{dT} &= \frac{L_f/T}{\Delta V} = \frac{L_f}{T \Delta V}
\end{align*}
where $L_f$ is the latent heat of fusion.
Effect of Pressure on Melting Point of Solids:
\textbf{Case (1):} When the volume in the liquid phase is greater than in the solid phase (the substance expands on melting), i.e., $V_2 > V_1$ or $\Delta V > 0$, then
\[
\frac{dP}{dT} > 0
\]
So, the melting point increases with pressure.
Case (2) When the solid contracts on melting, i.e., $V_2 < V_1$ or $\Delta V < 0$, then
\[
\frac{dP}{dT} < 0
\]
Thus, the melting point decreases with increase in pressure.
\textbf{Exception:} Substances like Ice, Sb (Antimony), and Bi (Bismuth) contract on melting. So, for them at melting point:
\[
\frac{dP}{dT} < 0
\]
Hence, the P-T graph for solid-liquid transition may have positive or negative slope depending on the sign of $\Delta V$.
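A numerical illustration of case (2) for the ice-water transition (a sketch; the latent heat and specific volumes are assumed handbook-style values):

```python
# Clausius-Clapeyron slope for melting ice, dP/dT = L_f / (T * (v_water - v_ice)).
L_f   = 3.34e5        # J/kg, latent heat of fusion (assumed value)
T     = 273.15        # K
v_ice = 1.091e-3      # m^3/kg (assumed)
v_wat = 1.000e-3      # m^3/kg (assumed)

dPdT = L_f / (T * (v_wat - v_ice))        # Pa per K, negative since ice contracts on melting
dTdP_atm = 1.013e5 / dPdT                 # K per atmosphere
print(f"dP/dT ~ {dPdT:.2e} Pa/K")         # ~ -1.3e7 Pa/K
print(f"dT/dP ~ {dTdP_atm:.4f} K/atm")    # melting point drops ~0.0075 K per atm
```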
2.Liquid-Vapour Phase:
\begin{align*}
\frac{dP}{dT} &= \frac{L_V/T}{\Delta V} = \frac{L_V}{T \Delta V}
\end{align*}
where $L_V$ is the latent heat of vaporization.
In this phase transition, volume always increases, i.e., $\Delta V > 0$ or $V_2 > V_1$ at boiling point. Thus,
\[
\frac{dP}{dT} > 0 \quad \text{always}
\]
and the slope of the P-T graph is always positive.
Effect of Pressure on Boiling Point:
Boiling point increases with an increase in pressure.
How does pressure vary with temperature during Liquid-Vapour transition when volume of vapour is much larger than that of the liquid?
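Ans: A standard approximation (a sketch) answers this: neglect the liquid volume and treat the vapour as an ideal gas, so that per mole $\Delta V \approx V_{\text{vap}} = RT/P$. Then
\begin{align*}
\frac{dP}{dT} &= \frac{L_{V}}{T\,\Delta V} \approx \frac{L_{V}P}{RT^{2}}\\
\Rightarrow\; \frac{dP}{P} &= \frac{L_{V}}{R}\,\frac{dT}{T^{2}}\\
\Rightarrow\; P &= P_{0}\,e^{-L_{V}/RT}
\end{align*}
(taking the molar latent heat $L_{V}$ to be roughly constant over the range of interest), so the vapour pressure rises steeply, approximately exponentially in $-1/T$.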
3. Solid-Vapour Phase:
\begin{align*}
\Rightarrow \frac{dP}{dT} &= \frac{\frac{L_s}{T}}{\Delta V} \\
\Rightarrow \frac{dP}{dT} &= \frac{L_s}{T \Delta V} \quad \text{($L_s$ is the latent heat of sublimation)}
\end{align*}
In Solid-Vapour phase transition, the final volume in the vapour phase is always greater than that in the solid phase, thus
\[
V_2 > V_1 \quad \text{or} \quad \Delta V > 0
\]
Then,
\[
\frac{dP}{dT} > 0
\]
always, and the slope of the $P$-$T$ graph is always positive.
What are the criteria for the stability of a thermodynamic system?
Let us consider a thermodynamic system which is in contact with a heat reservoir at temperature \( T \). Suppose it undergoes an infinitesimal irreversible process by extracting heat \( dQ \) from the reservoir. If the change in entropy of the system is \( dS \), and that of the reservoir is \( dS_0 \), then the entropy of the universe increases for the irreversible process.
\[
dS + dS_0 > 0
\]
\begin{align}
\Rightarrow dS - \frac{dQ}{T} &> 0 \\
\Rightarrow T\,dS - dQ &> 0 \\
\Rightarrow T\,dS - (dU + dW) &> 0 \\
\Rightarrow T\,dS - dU - P\,dV &> 0
\end{align}
Stability criterion for an isolated adiabatic system:
Since \( dQ = 0 \), we have:
\[
dS > 0
\]
Thus, the system is stable when entropy is maximum.
Stability criterion for a system with constant volume and entropy:
In this case, \( dS = 0 \), \( dV = 0 \), so:
\[
dU < 0
\]
Hence, the system is stable when internal energy is minimum.
Stability criterion for constant pressure and entropy:
Here, \( dP = 0 \), \( dS = 0 \):
\begin{align}
dU + d(PV) &< 0 \\
\Rightarrow dH &< 0
\end{align}
The system is stable when enthalpy \( H \) is minimum.
Stability criterion for constant volume and temperature:
\( dV = 0 \), \( dT = 0 \):
\begin{align}
dU - d(TS) &< 0 \\
\Rightarrow d(U - TS) &< 0 \\
\Rightarrow dF &< 0
\end{align}
The system is stable when Helmholtz free energy \( F \) is minimum.
Stability criterion for constant temperature and pressure:
\( dT = 0 \), \( dP = 0 \):
\begin{align}
dU + P\,dV - T\,dS &< 0 \\
dU + d(PV) - d(TS) &< 0 \\
d(U + PV - TS) &< 0 \\
d(H - TS) &< 0 \\
dG &< 0
\end{align}
Thus, the system is stable when Gibbs free energy \( G \) is minimum.
How does latent heat vary with temperature?
Let \( S_i \) and \( S_f \) be the molar entropies of the initial and final states. For a first-order phase transition, the molar latent heat \( L \) is related to entropy as:
\begin{align}
\Delta S &= \frac{L}{T} \Rightarrow L = T(S_f - S_i) \\
\Rightarrow \frac{L}{T} &= S_f - S_i
\end{align}
Also note:
\[
\frac{L}{T} = -\left( \frac{\partial G_f}{\partial T} \right)_P + \left( \frac{\partial G_i}{\partial T} \right)_P
\]
Differentiating both sides with respect to \( T \):
\begin{align}
\frac{1}{T} \frac{dL}{dT} - \frac{L}{T^2} &= \frac{dS_f}{dT} - \frac{dS_i}{dT} \\
\Rightarrow \frac{dL}{dT} - \frac{L}{T} &= T\left( \frac{dS_f}{dT} - \frac{dS_i}{dT} \right) \\
\Rightarrow \frac{dL}{dT} - \frac{L}{T} &= \frac{dQ_f}{dT} - \frac{dQ_i}{dT} \\
\Rightarrow \frac{dL}{dT} - \frac{L}{T} &= C_{mf} - C_{mi} \\
\Rightarrow \frac{dL}{dT} &= \frac{L}{T} + C_{mf} - C_{mi}
\end{align}
This is also known as the second latent heat equation.
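A minimal numerical sketch of the relation $\frac{dL}{dT}=\frac{L}{T}+C_{mf}-C_{mi}$ is given below; the specific-heat and latent-heat values are placeholders, not data for any particular substance.

```python
def dL_dT(L, T, C_final, C_initial):
    """Second latent heat equation: rate of change of latent heat with temperature.

    L         -- latent heat at temperature T (J/kg or J/mol, used consistently)
    T         -- transition temperature (K)
    C_final   -- specific heat of the final phase
    C_initial -- specific heat of the initial phase
    """
    return L / T + C_final - C_initial

# Purely illustrative placeholder numbers (not measured values):
print(dL_dT(L=2.0e6, T=350.0, C_final=2.0e3, C_initial=4.0e3))
```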
What is a second order Phase Transition ?
Ans: It is a type of phase transition in which the physical state of the matter does not change; changes occur in atomic arrangements, spin ordering, etc.
Properties of Second Order Phase Transition:
The molar Gibbs free energy \( G \) and its first derivatives with respect to \( T \) and \( P \) are continuous, but higher-order derivatives are not:
\[
g_i = g_f
\]
Specific entropy and specific volume are continuous, but specific heats are discontinuous:
\[
s_i = s_f, \quad v_i = v_f
\]
Thus, the first derivatives of Gibbs free energy are continuous:
\[
\left( \frac{\partial G_1}{\partial T} \right)_P = \left( \frac{\partial G_2}{\partial T} \right)_P
\]
Proof:
From the first-order phase transition:
\begin{align}
\frac{L}{T} &= S_f - S_i
\end{align}
But also:
\begin{align}
\frac{L}{T} &= -\left( \frac{\partial G_f}{\partial T} \right)_P + \left( \frac{\partial G_i}{\partial T} \right)_P
\end{align}
Now, for a second-order phase transition, the latent heat \( L = 0 \), hence:
\begin{align}
-\left( \frac{\partial G_f}{\partial T} \right)_P + \left( \frac{\partial G_i}{\partial T} \right)_P &= 0 \\
\Rightarrow \left( \frac{\partial G_i}{\partial T} \right)_P &= \left( \frac{\partial G_f}{\partial T} \right)_P
\end{align}
Also, continuity of Gibbs free energy with respect to pressure implies:
\[
\left( \frac{\partial G_1}{\partial P} \right)_T = \left( \frac{\partial G_2}{\partial P} \right)_T
\]
The symmetry of the arrangement (such as the crystal structure) changes discontinuously, but the physical state of the body changes continuously.
Examples:
Transition of liquid helium I to liquid helium II
Transition from the non-ferromagnetic state to the ferromagnetic state
Superconducting transition in zero field
Bose-Einstein condensation
FIRST ORDER PHASE TRANSITION PARAMETERS VS TEMPERATURE
1. Gibbs Free energy vs Temperature
2. Entropy vs temperature
3. Volume vs Temperature
4. Enthalpy vs Temperature
5. specific heat at constant volume vs temperature
What are Ehrenfest Equations? Derive the Ehrenfest Equations:
These are the equations to determine the slope of $P$ vs $T$ graphs, i.e., to find the value of $\frac{dP}{dT}$ in a second-order phase transition.\\
There are two Ehrenfest equations.
Derivation of First Ehrenfest Equation:
Let the phases of matter be denoted by $i$ (initial) and $f$ (final). In a second-order phase transition, entropy is continuous:
\begin{align}
S_i &= S_f \\
S_i + dS_i &= S_f + dS_f \\
dS_i &= dS_f
\end{align}
Since $S = S(T, P)$, we write:
\begin{align}
dS &= \frac{\partial S}{\partial T} dT + \frac{\partial S}{\partial P} dP
\end{align}
Therefore,
\begin{align}
\frac{\partial S_i}{\partial T} dT + \frac{\partial S_i}{\partial P} dP &= \frac{\partial S_f}{\partial T} dT + \frac{\partial S_f}{\partial P} dP \\
T \frac{\partial S_i}{\partial T} dT + T \frac{\partial S_i}{\partial P} dP &= T \frac{\partial S_f}{\partial T} dT + T \frac{\partial S_f}{\partial P} dP \\
C_{P,i} dT + T \frac{\partial S_i}{\partial P} dP &= C_{P,f} dT + T \frac{\partial S_f}{\partial P} dP
\end{align}
Now, using the Maxwell relation:
\[
\left( \frac{\partial S}{\partial P} \right)_T = -\left( \frac{\partial V}{\partial T} \right)_P
\]
We get:
\begin{align}
C_{P,i} dT - T \left( \frac{\partial V_i}{\partial T} \right) dP &= C_{P,f} dT - T \left( \frac{\partial V_f}{\partial T} \right) dP
\end{align}
Multiplying and dividing by $V$:
\begin{align}
C_{P,i} dT - T V_i \beta_i dP &= C_{P,f} dT - T V_f \beta_f dP
\end{align}
Where volume expansivity is defined as:
\[
\beta = \frac{1}{V} \left( \frac{\partial V}{\partial T} \right)_P
\]
Rewriting:
\begin{align}
(C_{P,f} - C_{P,i}) dT &= T V (\beta_f - \beta_i) dP \\
\frac{dP}{dT} &= \frac{C_{P,f} - C_{P,i}}{T V (\beta_f - \beta_i)}
\end{align}
Since $V_i = V_f$ in second-order phase transitions, we take $V$ as common.
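As a quick consistency check, the Maxwell relation invoked above can be verified symbolically for a simple equation of state. The sketch below (assuming sympy is available) checks $\left(\frac{\partial S}{\partial P}\right)_T=-\left(\frac{\partial V}{\partial T}\right)_P$ for one mole of an ideal gas, writing the entropy as $S(T,P)=C_P\ln T-R\ln P+S_0$ with $S_0$ an arbitrary constant.

```python
import sympy as sp

# Symbols: temperature, pressure, gas constant, molar heat capacity, entropy constant.
T, P = sp.symbols('T P', positive=True)
R, C_p, S0 = sp.symbols('R C_p S0', positive=True)

# One mole of ideal gas: S(T, P) = C_p*ln T - R*ln P + S0,  V(T, P) = R*T/P.
S = C_p * sp.log(T) - R * sp.log(P) + S0
V = R * T / P

lhs = sp.diff(S, P)     # (dS/dP)_T
rhs = -sp.diff(V, T)    # -(dV/dT)_P

print(sp.simplify(lhs - rhs))   # prints 0: the Maxwell relation holds in this case
```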
Derivation of Second Ehrenfest Equation:
Since volume is continuous in both phases:
\begin{align}
dV_i &= dV_f
\end{align}
But $V = V(T, P)$, so:
\begin{align}
dV &= \left( \frac{\partial V}{\partial T} \right)_P dT + \left( \frac{\partial V}{\partial P} \right)_T dP
\end{align}
Multiplying and dividing by $V$:
\begin{align}
dV &= V \beta dT - V k dP
\end{align}
Where:
\[
\beta = \frac{1}{V} \left( \frac{\partial V}{\partial T} \right)_P, \quad
k = -\frac{1}{V} \left( \frac{\partial V}{\partial P} \right)_T
\]
Now, equating for both phases:
\begin{align}
V \beta_i dT - V k_i dP &= V \beta_f dT - V k_f dP \\
(\beta_f - \beta_i) dT &= (k_f - k_i) dP \\
\frac{dP}{dT} &= \frac{\beta_f - \beta_i}{k_f - k_i}
\end{align}
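For an ideal (Ehrenfest) second-order transition the two equations must give the same slope $\frac{dP}{dT}$ along the transition line. The sketch below checks this numerically; all jump values are hypothetical and chosen only to be mutually consistent.

```python
# Illustrative check that the two Ehrenfest equations give the same slope dP/dT
# when the jumps are mutually consistent. All numbers below are hypothetical.
T       = 300.0     # transition temperature, K
V       = 1.0e-3    # molar volume, m^3/mol (continuous across the transition)
d_beta  = 2.0e-4    # jump in expansivity beta_f - beta_i, 1/K
d_kappa = 1.0e-10   # jump in compressibility k_f - k_i, 1/Pa
d_Cp    = 120.0     # jump in C_P chosen consistently: d_Cp = T*V*d_beta**2/d_kappa

slope_1 = d_Cp / (T * V * d_beta)   # first Ehrenfest equation
slope_2 = d_beta / d_kappa          # second Ehrenfest equation

print(slope_1, slope_2)   # both ~2.0e6 Pa/K
```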
⬆️ Back to Table of Contents
One-Dimensional Ising Model
In a ferromagnetic material, below a certain critical temperature a macroscopic fraction of the atomic spins align in the same direction, yielding a net magnetic moment which is macroscopic in size. The simplest theoretical description of ferromagnetism is called the Ising model. This model was invented by Wilhelm Lenz in 1920: it is named after Ernst Ising, a student of Lenz who chose the model as the subject of his doctoral dissertation in 1925.
ISING MODEL:
A mathematical model that explains ferromagnetism. It can also describe phase transitions such as
a) ferromagnetic to antiferromagnetic ordering,
b) gas to liquid,
c) liquid to gas, and many more.
Ising Model considers the system as an array of \( N \) fixed lattice sites that form an \( n \)-dimensional periodic lattice.
The structure may be square, cubic, hexagonal, etc. Each lattice site is associated with a spin variable represented by \( S_{1}, S_{2}, S_{3}, \ldots \) or by \( S_{i} \ (i = 1, 2, 3, \ldots) \).
The spin variable is a number which is either \( +1 \) or \( -1 \). It is \( +1 \) for up spin and \( -1 \) for down spin.
If the \( i \)-th site has an up spin, then \( S_{i} = +1 \); and if the \( i \)-th site has a down spin, then \( S_{i} = -1 \).
The given set of spins \( S_{i} \) represents the configuration of the whole system.
Each spin interacts with its nearest spins only.
The energy of the system in a given configuration $\{S_{i}\}$ is given by
$$
E\{S_{i}\}=-J \sum_{\langle i, j\rangle} S_{i} S_{j}
$$
where the sum runs over nearest-neighbour pairs of lattice sites.
We disregard the kinetic energy of the atoms at the lattice sites.
The phase transition is essentially due to the interaction energy among nearest-neighbour atoms.\\
To study the property of susceptibility we subject the lattice to an external magnetic field (B) .\\
In this case an additional potential energy is associated with the spin $S_{i}$, given by $P.E.=-\mu B S_{i}$,\\
where $\mu=g \mu_{B} \sqrt{j(j+1)}$, $g=$ Landé g-factor, $\mu_{B}=$ Bohr magneton, $j=$ total angular momentum quantum number. The total energy of a configuration in the presence of the field is then
$$
H\{S_{i}\}=-J \sum_{\langle i, j\rangle} S_{i} S_{j}-\mu B \sum_{i} S_{i}
$$
The first term on the right-hand side of the equation above shows that the overall energy is lowered when neighbouring atomic spins are aligned. This effect is mostly due to the Pauli exclusion principle. Electrons cannot occupy the same quantum state, so two electrons on neighbouring atoms which have parallel spins (i.e., occupy the same orbital state) cannot come close together in space. No such restriction applies if the electrons have anti-parallel spins. Different spatial separations imply different electrostatic interaction energies, and the exchange energy, $J$, measures this difference. Note that since the exchange energy is electrostatic in origin, it can be quite large. This is far larger than the energy associated with the direct magnetic interaction between neighbouring atomic spins. However, the exchange effect is very short-range; hence, the restriction to nearest-neighbour interactions is quite realistic.\\
$J$ is the exchange interaction energy. It occurs between identical particles. It is a quantum-mechanical effect, arising because the wavefunctions of indistinguishable particles are subject to exchange symmetry.
If $\mathbf{J}>\mathbf{0}$, there is Ferromagnetism
If $\mathbf{J}<\mathbf{0}$, there will be Antiferromagnetism
The first term is the interaction energy among the spins; the second term is due to the interaction of the $i$-th spin with the applied magnetic field.\\
The partition function is $Z=\sum_{S_{1}} \sum_{S_{2}} \sum_{S_{3}} \cdots \sum_{S_{N}} e^{-\beta H\{S_{i}\}}$\\
(In what follows the external magnetic field is taken to be zero.)\\
To simplify the partition function, consider a one-dimensional Ising chain of three spins $S_{1}, S_{2}, S_{3}$, closed into a ring so that $S_{3}$ also couples to $S_{1}$. The number of spin configurations is $2^{3}=8$. For these, the nearest-neighbour sum takes the values
$$
S_{1} S_{2}+S_{2} S_{3}+S_{3} S_{1}=
\begin{cases}
+3 & \text{for the two fully aligned configurations},\\
-1 & \text{for the remaining six configurations}.
\end{cases}
$$
There are $2^{3}=8$ microstates in all: the two fully aligned configurations have energy $E=-3J$ (degeneracy $g=2$), while the remaining six have energy $E=+J$ (degeneracy $g=6$). The configurations are:
$$
\begin{array}{ccc}
S_{1} & S_{2} & S_{3} \\
+1 & +1 & +1 \\
-1 & -1 & -1 \\
+1 & +1 & -1 \\
+1 & -1 & +1 \\
-1 & +1 & +1 \\
-1 & -1 & +1 \\
-1 & +1 & -1 \\
+1 & -1 & -1
\end{array}
$$
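The energies and degeneracies quoted above can be checked by brute-force enumeration of the eight configurations in the table; the sketch below assumes the ring form of the three-spin chain used above, so that $S_{3}$ couples back to $S_{1}$.

```python
from itertools import product

J = 1.0   # exchange constant, arbitrary units

# Brute-force enumeration of the 2^3 = 8 configurations of the closed 3-spin ring,
# with energy E = -J*(S1*S2 + S2*S3 + S3*S1).
degeneracy = {}
for s1, s2, s3 in product((+1, -1), repeat=3):
    E = -J * (s1 * s2 + s2 * s3 + s3 * s1)
    degeneracy[E] = degeneracy.get(E, 0) + 1

print(degeneracy)   # {-3.0: 2, 1.0: 6} -> E = -3J twice, E = +J six times
```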
The ground state is expected to be the one in which all spins are in the same state, i.e., all $S_{i}=+1$ or all $S_{i}=-1$. Let us now calculate the internal energy and specific heat for a chain of $N$ spins on a line.
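For an open chain of $N$ spins in zero field the partition function factorises bond by bond; to leading order in large $N$ (a sketch, ignoring boundary corrections) the standard results are
$$
Z=\sum_{\{S_{i}\}} e^{\beta J \sum_{i} S_{i} S_{i+1}}=2\,(2 \cosh \beta J)^{N-1} \approx(2 \cosh \beta J)^{N},
\qquad
U=-\frac{\partial \ln Z}{\partial \beta} \approx-N J \tanh \left(\frac{J}{k T}\right)
$$
from which the specific heat follows by differentiation: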
\begin{align*}
C_{V}&=\frac{\partial U}{\partial T}\\
&=\frac{N J^{2}}{k T^{2}} \sec h^{2}\left(\frac{J}{k T}\right)
\end{align*}
As the specific heat varies smoothly with $T$, there is no transition temperature. Hence the one-dimensional Ising model cannot explain the ferromagnetic behaviour of real materials.
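A quick numerical sketch of $C_{V}(T)$ (with $k_{B}=1$ and $J=1$ as illustrative units) shows a broad, smooth maximum rather than any singularity:

```python
import numpy as np

J = 1.0                        # exchange energy, with k_B = 1 (illustrative units)
T = np.linspace(0.1, 5.0, 500)

# Specific heat per bond: C_V/(N k) = (J/kT)^2 * sech^2(J/kT)
C_V = (J / T) ** 2 / np.cosh(J / T) ** 2

# Smooth curve with a single broad maximum near kT ~ J; no divergence anywhere.
print(f"max C_V/Nk ~ {C_V.max():.2f} at kT/J ~ {T[C_V.argmax()]:.2f}")
```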
Net Magnetisation
The magnetisation is obtained by differentiating the free energy with respect to the field, not with respect to $\beta$:
$$
\bar{M}=-\left(\frac{\partial F}{\partial B}\right)_{T}
$$
Keeping the field $B$ in the transfer-matrix solution of the chain, the standard result (quoted here without derivation) for the magnetisation per spin is
$$
\frac{\bar{M}}{N \mu}=\frac{\sinh (\beta \mu B)}{\sqrt{\sinh ^{2}(\beta \mu B)+e^{-4 \beta J}}}
$$
which vanishes as $B \rightarrow 0$ for every $T>0$. A non-zero $\bar{M}$ at $B=0$ would signal spontaneous magnetisation; since $\bar{M}=0$ here, the one-dimensional Ising chain shows no spontaneous magnetisation at any finite temperature, in agreement with the smooth specific heat found above.
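A short numerical sketch of the quoted expression (again with $\mu=k_{B}=1$ and $J=1$ as illustrative units) shows the magnetisation per spin vanishing as the field is switched off:

```python
import numpy as np

J, T = 1.0, 1.0                           # exchange energy and temperature, mu = k_B = 1
B = np.array([1e-1, 1e-2, 1e-3, 1e-4])    # progressively weaker external fields

# Magnetisation per spin of the 1D Ising chain (transfer-matrix result):
m = np.sinh(B / T) / np.sqrt(np.sinh(B / T) ** 2 + np.exp(-4 * J / T))

print(m)   # decreases towards 0 as B -> 0: no spontaneous magnetisation at T > 0
```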
⬆️ Back to Table of Contents