
Degrees of Freedom in Statistics

Terminology

The number of independent data points that can change the value when calculating a certain statistic is called the Degrees of Freedom.

Explanation

Why It’s Hard to Explain Degrees of Freedom

When you first study statistics as a freshman, this thing called ‘degrees of freedom’ is genuinely annoying. Beyond being difficult and frequently mentioned, the trouble is that you can hardly find its definition clearly stated in any textbook. This post, too, introduces it merely as a ‘term’, using expressions that hardly qualify as rigorous mathematical statements, such as ‘when calculating’ or ‘that can change the value’.

The problem is, this is understandable. It is not that everyone is lazy and skips it; the concept of degrees of freedom itself is something ‘acquired through experience’ rather than ‘understood by studying’. Around sophomore or junior year you start to get a rough idea of what degrees of freedom are, and by graduate school you can usually explain them quite well, yet reciting a definition is still hard.

Part of the issue is the ‘positive feeling’ the expression itself gives off. Whether in fashion, an open-world game, or a democracy, more degrees of freedom are considered better. Even the degrees of freedom that freshmen first encounter are typically computed in ways like ‘since the number of samples is $n$, we have $(n-1)$ degrees of freedom after subtracting $1$’, so without much thought it is easy to assume that more is better and to read the statistical degrees of freedom as a number carrying some value judgment. In the context of handling and exploring them with precise formulas, however, degrees of freedom are just numbers.

It is also a problem that they appear out of context, and frequently at that. When learning Analysis of Variance or regression analysis, degrees of freedom like $n-1$ and $n-p-1$ suddenly spill out, computed with barely any explanation. Then, studying mathematical statistics, the t-distribution and the chi-squared distribution suddenly carry a parameter called degrees of freedom; the F-distribution is even said to have two of them, yet what these mean is never thoroughly clarified, leaving the odd feeling of somehow knowing them without really understanding. This usually happens around sophomore or junior year, by which time it is a bit embarrassing to ask about degrees of freedom, yet not something completely unknown either, so people tend to move on awkwardly.

Even granting that those numbers are necessary, calling them ‘degrees of freedom’ may seem meaningless at first glance. So let us try to appreciate why the term ‘degrees of freedom’ is needed.

Extreme Example: What if There Were No Concept of Degrees of Freedom?

One good way to explain a seemingly useless concept is to describe what kind of ‘cheating’ becomes possible in its absence. Setting aside the mathematical description of what a statistic is, let us imagine something playful. Suppose we are given a sample $A$.
$$ A = \left\{ 13, 7, 17, 3 \right\} $$
Here the number of samples is $n = 4$. But then a junior shows up with a sample $B$, claiming to have ‘developed’ it.
$$ B = \left\{ 13, 7, 17, 3 , 14, 8, 18, 4 \right\} $$
The junior says there are now $8$ samples, twice as many as in $A$. Not stopping there, they claim they can multiply the number of samples as much as they like, even $n \to \infty$, which would permit every statistical technique that applies to large samples. However, it is obvious at a glance that this sample is crudely forged: the method was merely to add $1$ to the existing data points to inflate the count.

At this point we should recognize that we focused on the essence, $A$, and were not deceived by the numbers the junior presented in $B$. The junior’s data
$$ B = B(A) = A \cup (A+1) $$
is nothing but a knockoff of $A$. The size of a sample is not just a raw count; it should properly be counted as the number of genuinely uncontrollable, in other words ‘free’, samples, and such a count that does not allow this kind of ‘cheating’ is called the degrees of freedom.
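To make the ‘cheating’ concrete, here is a minimal sketch in Python (not from the original post; it just reuses the toy numbers above): since $B$ is a deterministic function of $A$, every statistic of $B$ is already determined by $A$, so doubling the nominal sample size adds no free values.

```python
# Minimal sketch: the "developed" sample B is a deterministic function of A,
# so it carries no information beyond A itself.
A = [13, 7, 17, 3]
B = A + [a + 1 for a in A]   # B = B(A) = A ∪ (A + 1)

def mean(xs):
    return sum(xs) / len(xs)

# Any statistic of B can be reconstructed from A alone; for example, its mean
# is forced to be mean(A) + 0.5 no matter what A was.
assert mean(B) == mean(A) + 0.5

# The nominal sample size doubled, but the number of freely varying values
# did not: knowing the 4 entries of A pins down all 8 entries of B.
print(len(B), "nominal samples, but only", len(A), "free values")
```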

An Example You Will See Again and Again: $s^{2}$

Now consider the sample variance $s^{2}$, the example that comes up in almost every piece of writing that explains degrees of freedom. When the sample mean $\overline{x}$ is given, the sample variance is computed as
$$ s^{2} = {{ 1 } \over { n-1 }} \sum_{k=1}^{n} \left( x_{k} - \overline{x} \right)^{2} $$
The important point is that the constant $\overline{x} = \sum_{k} x_{k} / n$ is already given. Whichever $x_{k_{0}}$ you pick, that $x_{k_{0}}$ can be recovered from the remaining data as the function
$$ x_{k_{0}} = x_{k_{0}} \left( \left\{ x_{k} : k \ne k_{0} \right\} \right) = n \overline{x} - \sum_{k \ne k_{0}} x_{k} $$
This is just like how the junior’s data in the previous paragraph could be written in the form $B = B(A)$. The number of samples genuinely needed to compute $s^{2}$ is therefore not $n$ but $(n-1)$: under the constraint that $\overline{x}$ stays fixed, $x_{k_{0}}$ is pinned down by the others, and only the $(n-1)$ samples $\left\{ x_{k} : k \ne k_{0} \right\}$ can change the value of $s^{2}$, which is why $(n-1)$ is called the degrees of freedom of $s^{2}$.
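As a quick sanity check, here is a small Python sketch (again with assumed toy numbers, reusing the sample from the previous section): given $\overline{x}$ and any $n-1$ of the values, the remaining one is forced by the mean constraint, which is exactly the sense in which $s^{2}$ has only $n-1$ free values.

```python
# Minimal sketch: once the sample mean is fixed, any single value is determined
# by the other n - 1, so only n - 1 values are "free" when computing s^2.
x = [13.0, 7.0, 17.0, 3.0]
n = len(x)
xbar = sum(x) / n

# Drop x[0] and recover it from the constraint sum(x) = n * xbar.
rest = x[1:]
x0_recovered = n * xbar - sum(rest)
assert x0_recovered == x[0]

# The sample variance divides by the n - 1 free values, not by n.
s2 = sum((xi - xbar) ** 2 for xi in x) / (n - 1)
print(f"recovered x0 = {x0_recovered}, s^2 = {s2:.4f}")
```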

See Also