
Sample Standard Deviation and Standard Error Distinguished 📂Mathematical Statistics


Definition

Write the data obtained from a random variable $X$ as $\mathbf{x} = ( x_{1}, x_{2}, \cdots , x_{n} )$.

  1. Sample Mean: $$ \overline{x} = {{1} \over {n}} \sum_{i=1}^{n} x_{i} $$
  2. Sample Standard Deviation: $$ s_{x} = \sqrt { {{1} \over {n-1}} \sum_{i=1}^{n} ( x_{i} - \overline{x} )^2 } $$
  3. Standard Error: $$ \text{s.e.} \left( \overline{X} \right) = {{ s_{x} } \over { \sqrt{n} }} $$
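The three definitions above translate directly into a few lines of code. The following sketch uses only the standard library; the data values are purely illustrative:

```python
import math
import statistics

# Illustrative data; any list of numbers works.
x = [4.2, 5.1, 3.8, 4.9, 5.6, 4.4]
n = len(x)

x_bar = sum(x) / n          # sample mean
s_x = statistics.stdev(x)   # sample standard deviation (uses the n-1 denominator)
se = s_x / math.sqrt(n)     # standard error of the sample mean

print(x_bar, s_x, se)
```

Note that `statistics.stdev` already divides by $n-1$, matching the definition of $s_{x}$ above; `statistics.pstdev` would divide by $n$ instead.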

Explanation

Because the terms sound similar, surprisingly many people do not distinguish between the sample standard deviation and the standard error. This confusion ranges from high school students, who essentially learn statistics from textbooks alone, all the way to third- and fourth-year university students majoring in statistics.

It’s good to keep the following five points in mind before reading further:

  • (1): Standard error is the standard deviation of the sample mean.
  • (2): Standard error does not really provide any information about the population.
  • (3): Standard error is spoken of only in relation to samples; that is, concepts like “population standard error” are not considered.
  • (4): Standard error is primarily needed when performing hypothesis tests, or when constructing confidence or prediction intervals. In other words, it only needs to be considered in contexts where intervals are discussed.
  • (5): While you can’t judge standard deviation as good or bad based on whether it’s large or small, smaller is always better for standard error. This is because standard deviation looks at ‘how different’ the data are from each other, whereas standard error looks at ‘how wrong’ the sample mean is.
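Point (5) can be checked with a small simulation (standard library only; the population $N(10, 2^2)$ and the sample sizes are illustrative assumptions): as $n$ grows, the sample standard deviation hovers near the population $\sigma$, while the standard error keeps shrinking.

```python
import math
import random
import statistics

random.seed(0)

results = {}
for n in (10, 100, 1000):
    # Draw a sample of size n from an assumed population N(10, 2^2)
    sample = [random.gauss(10, 2) for _ in range(n)]
    s = statistics.stdev(sample)   # stays near the population sigma = 2
    se = s / math.sqrt(n)          # shrinks toward 0 as n grows
    results[n] = (s, se)
    print(n, round(s, 3), round(se, 3))
```

The sample standard deviation does not improve with more data because the data are not becoming any less spread out; only our estimate of the mean is becoming less wrong.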

Below University Level

Considering that the standard error is the standard deviation of the sample mean, it makes sense that sample means scatter around the population mean more tightly than the individual observations do. The standard error is still a measure of dispersion, but dispersion of $\overline{X}$, so it is naturally smaller than the sample standard deviation. In statistics our interest is usually in means, so the word “error” comes naturally: it measures how far the sample mean may miss. Multiplying it by $\pm 1.96$ and adding the result to the sample mean yields approximate 95% confidence intervals, which is why “standard” fits the name as well. Don’t let the similar-sounding terms blur together; digest the name one word at a time and it will stick.
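The $\pm 1.96$ construction above can be sketched as follows (standard library only; the data are illustrative, and the $1.96$ multiplier assumes the normal approximation applies):

```python
import math
import statistics

x = [12.1, 11.4, 13.0, 12.6, 11.9, 12.3, 12.8, 11.7]  # illustrative data
n = len(x)
x_bar = sum(x) / n
se = statistics.stdev(x) / math.sqrt(n)

# Approximate 95% confidence interval: sample mean +/- 1.96 standard errors
ci = (x_bar - 1.96 * se, x_bar + 1.96 * se)
print(ci)
```

For small $n$, a $t$-based multiplier would be more appropriate than $1.96$, as the section below explains.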

University Level and Above

Central Limit Theorem: $$ \sqrt{n} {{ \overline{X}_n - \mu } \over {\sigma}} \overset{D}{\to} N (0,1) $$

Rearranging the central limit theorem gives the approximation $\displaystyle \overline{X}_n \approx N \left( \mu , {{\sigma^{2}} \over {n}} \right)$ for large $n$, a normal distribution whose standard deviation is ${{\sigma} \over {\sqrt{n}}}$. The formula shows directly that the standard error is indeed the standard deviation of the sample mean.
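This can be verified empirically by drawing many samples and looking at the standard deviation of their means (a sketch with the standard library; the population parameters $\mu = 5$, $\sigma = 3$, $n = 50$, and the repetition count are illustrative assumptions):

```python
import math
import random
import statistics

random.seed(1)
mu, sigma, n = 5.0, 3.0, 50

# Draw many samples of size n and record each sample mean
means = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(5000)]

sd_of_means = statistics.stdev(means)  # empirical standard deviation of X-bar
theoretical = sigma / math.sqrt(n)     # sigma / sqrt(n) from the CLT
print(sd_of_means, theoretical)
```

The two printed values should agree closely, which is exactly the claim that the standard error is the standard deviation of the sample mean.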

What’s interesting is what happens when $\sigma$ is unknown and replaced by the sample standard deviation $S$. Since the numerator $\left( \overline{X}_{n} - \mu \right)$ follows a normal distribution and $(n-1) S^{2} / \sigma^{2}$ follows a chi-squared distribution with $n-1$ degrees of freedom, the studentized statistic $\sqrt{n} \left( \overline{X}_{n} - \mu \right) / S$ follows a $t$-distribution with $n-1$ degrees of freedom. With the right assumptions, the standard error can thus be seen as a statistic needed for testing, which connects back to points (3) and (4) above.
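The heavy tails of the $t$-distribution can be seen in a simulation: with a small $n$, the statistic $\sqrt{n}(\overline{X}_{n} - \mu)/S$ exceeds $\pm 1.96$ noticeably more often than the $5\%$ the normal distribution would predict (standard library only; the settings $\mu = 0$, $\sigma = 1$, $n = 5$, and the repetition count are illustrative assumptions):

```python
import math
import random
import statistics

random.seed(2)
mu, sigma, n = 0.0, 1.0, 5  # small n makes the t's heavy tails visible

# Simulate the t statistic sqrt(n) * (x_bar - mu) / s many times
exceed = 0
reps = 20000
for _ in range(reps):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    x_bar = statistics.fmean(x)
    s = statistics.stdev(x)
    t = math.sqrt(n) * (x_bar - mu) / s
    if abs(t) > 1.96:
        exceed += 1

rate = exceed / reps  # noticeably above 0.05: t with 4 d.o.f. has heavier tails than N(0,1)
print(rate)
```

This is precisely why small-sample confidence intervals use $t$ critical values rather than $1.96$.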