
Derivation of the F-distribution from the t-distribution

Theorem 1

If a random variable $X \sim t(\nu)$ follows a t-distribution with degrees of freedom $\nu > 0$, then $Y := X^{2}$ follows the F-distribution $F(1, \nu)$. $$ Y := X^{2} \sim F (1,\nu) $$
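
Before the proof, the claim can be sanity-checked by simulation. The snippet below is a minimal sketch, not part of the proof; the choice $\nu = 5$, the sample size, and the seed are arbitrary illustrative values, and it assumes numpy and scipy are available.

```python
# Square draws from t(nu) and compare them to F(1, nu) with a KS test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
nu = 5                                   # illustrative degrees of freedom
x = rng.standard_t(df=nu, size=100_000)  # X ~ t(nu)
y = x**2                                 # Y = X^2

# Kolmogorov-Smirnov test against the F(1, nu) CDF
result = stats.kstest(y, stats.f(dfn=1, dfd=nu).cdf)
print(result.pvalue)  # a large p-value is consistent with Y ~ F(1, nu)
```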

Proof

Via the Chi-Square Distribution

By the definition of the t-distribution, $X \sim t(\nu)$ can be written in terms of $Z$, which follows a standard normal distribution, and $W$, which follows a chi-square distribution with degrees of freedom $\nu$, independent of $Z$: $$ X^{2} = \left( {{ Z } \over { \sqrt{W / \nu} }} \right)^{2} = {{ Z^{2} / 1 } \over { W / \nu }} \qquad , Z \perp W $$ Moreover, $Z^{2}$ follows a chi-square distribution with degrees of freedom $1$. Since the F-distribution is defined as the ratio of two independent chi-square variables, each divided by its degrees of freedom, it follows that $X^{2} \sim F(1, \nu)$.
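
The construction above can also be checked numerically. The following is a minimal sketch assuming numpy and scipy; the degrees of freedom $\nu = 7$, the sample size, and the seed are arbitrary choices for illustration.

```python
# Build X^2 = (Z^2 / 1) / (W / nu) from independent Z ~ N(0, 1) and
# W ~ chi^2(nu): a ratio of independent chi-squares over their degrees
# of freedom, i.e. F(1, nu).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nu = 7
z = rng.standard_normal(200_000)        # Z ~ N(0, 1), so Z^2 ~ chi^2(1)
w = rng.chisquare(df=nu, size=200_000)  # W ~ chi^2(nu), independent of Z

x_squared = (z**2 / 1) / (w / nu)       # ratio form of X^2
print(stats.kstest(x_squared, stats.f(dfn=1, dfd=nu).cdf).pvalue)
```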

Direct Derivation via the Probability Density Function 2

Strategy: derive the probability density function of $Y = X^{2}$ directly and identify it as that of the F-distribution.

Definition of the t-distribution: the t-distribution $t \left( \nu \right)$ with degrees of freedom $\nu > 0$ is the continuous probability distribution whose probability density function is $$ f(x) = {{ \Gamma \left( {{ \nu + 1 } \over { 2 }} \right) } \over { \sqrt{\nu \pi} \Gamma \left( {{ \nu } \over { 2 }} \right) }} \left( 1 + {{ x^{2} } \over { \nu }} \right)^{- {{ \nu + 1 } \over { 2 }}} \qquad ,x \in \mathbb{R} $$

Definition of the F-distribution: the F-distribution $F \left( r_{1} , r_{2} \right)$ with degrees of freedom $r_{1}, r_{2} > 0$ is the continuous probability distribution whose probability density function is $$ f(x) = {{ 1 } \over { B \left( r_{1}/2 , r_{2} / 2 \right) }} \left( {{ r_{1} } \over { r_{2} }} \right)^{r_{1} / 2} x^{r_{1} / 2 - 1} \left( 1 + {{ r_{1} } \over { r_{2} }} x \right)^{-(r_{1} + r_{2}) / 2} \qquad , x \in (0, \infty) $$


$$ \begin{align*} & Y = X^{2} \\ \implies & X = \pm \sqrt{Y} \end{align*} $$ Since $\lambda (x) := x^{2}$ is not a bijective function, the support of $X$ is split into the two branches $x \ge 0$ and $x < 0$. From $y = x^{2}$ the Jacobian satisfies $dy = 2 x \, dx$, so $\left| dx / dy \right| = 1 / (2|x|)$. Summing over the two branches, and using the symmetry of the t-density, the probability density function $f_{Y}$ of $Y$ is $$ \begin{align*} f_{Y}(y) =& \sum_{k=1}^{2} {{ \Gamma \left( {{ \nu + 1 } \over { 2 }} \right) } \over { \sqrt{\nu \pi} \Gamma \left( {{ \nu } \over { 2 }} \right) }} \left( 1 + {{ x^{2} } \over { \nu }} \right)^{- {{ \nu + 1 } \over { 2 }}} \cdot \left| {{ 1 } \over { 2x }} \right| \\ =& {{ \Gamma \left( {{ \nu + 1 } \over { 2 }} \right) } \over { \sqrt{\nu \pi} \Gamma \left( {{ \nu } \over { 2 }} \right) }} \left( 1 + {{ x^{2} } \over { \nu }} \right)^{- {{ \nu + 1 } \over { 2 }}} \cdot {{ 1 } \over { x }} \qquad , x = \sqrt{y} \end{align*} $$
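
The change-of-variables step can be verified numerically: by the symmetry of the t-density, the two branches combine to $f_{Y}(y) = f_{X}(\sqrt{y}) / \sqrt{y}$. The check below is a sketch assuming scipy; $\nu = 5$ and the grid of $y$ values are arbitrary.

```python
# Sanity check of the Jacobian step: f_Y(y) = f_X(sqrt(y)) / sqrt(y)
# should match the F(1, nu) density derived below.
import numpy as np
from scipy import stats

nu = 5
y = np.linspace(0.1, 10, 50)
f_y = stats.t(df=nu).pdf(np.sqrt(y)) / np.sqrt(y)  # two branches cancel the 1/2
print(np.allclose(f_y, stats.f(dfn=1, dfd=nu).pdf(y)))  # True
```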

Relation between the Beta Function and the Gamma Function: $$ B(p,q) = {{\Gamma (p) \Gamma (q)} \over {\Gamma (p+q) }} $$

Since $\Gamma (1/2) = \sqrt{\pi}$ by Euler’s reflection formula, and by the lemma above, $$ \begin{align*} f_{Y}(y) =& {{ \Gamma \left( {{ \nu + 1 } \over { 2 }} \right) } \over { \Gamma (1/2) \Gamma \left( {{ \nu } \over { 2 }} \right) }} {{ 1 } \over { \sqrt{\nu} }} \left( 1 + {{ x^{2} } \over { \nu }} \right)^{- {{ \nu + 1 } \over { 2 }}} x^{-1} \\ =& {{ \Gamma \left( {{ \nu + 1 } \over { 2 }} \right) } \over { \Gamma (1/2) \Gamma \left( {{ \nu } \over { 2 }} \right) }} {{ 1 } \over { \sqrt{\nu} }} \sqrt{y}^{-1} \left( 1 + {{ y } \over { \nu }} \right)^{- {{ \nu + 1 } \over { 2 }}} \\ =& {{ 1 } \over { B (1/2, \nu/2) }} \left( {{ 1 } \over { \nu }} \right)^{1/2} y^{1/2-1} \left( 1 + {{ 1 } \over { \nu }} y \right)^{- {{ 1 + \nu } \over { 2 }}} \end{align*} $$ which is exactly the probability density function of $F(1, \nu)$. Therefore $Y = X^{2} \sim F(1, \nu)$. ■
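
As a final check, the closed form just derived can be evaluated directly with the beta function and compared against scipy's F-density; this is an illustration only, with $\nu = 5$ and the grid of $y$ values chosen arbitrarily.

```python
# Evaluate the last line of the derivation and compare to scipy.stats.f.pdf.
import numpy as np
from scipy import stats
from scipy.special import beta

nu = 5
y = np.linspace(0.1, 10, 50)
f_y = (1 / beta(0.5, nu / 2)) * (1 / nu) ** 0.5 \
      * y ** (0.5 - 1) * (1 + y / nu) ** (-(1 + nu) / 2)
print(np.allclose(f_y, stats.f.pdf(y, dfn=1, dfd=nu)))  # True
```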


  1. Casella. (2001). Statistical Inference (2nd Edition): p. 258. ↩︎

  2. http://www.math.wm.edu/~leemis/chart/UDR/PDFs/TF.pdf ↩︎