
F-distribution

Definition¹

The continuous probability distribution $F(r_{1}, r_{2})$ with the following probability density function, for degrees of freedom $r_{1}, r_{2} > 0$, is called the F-distribution:
$$ f(x) = \frac{1}{B(r_{1}/2, r_{2}/2)} \left( \frac{r_{1}}{r_{2}} \right)^{r_{1}/2} x^{r_{1}/2 - 1} \left( 1 + \frac{r_{1}}{r_{2}} x \right)^{-(r_{1} + r_{2})/2}, \qquad x \in (0, \infty) $$


  • $B(r_{1}/2, r_{2}/2)$ refers to the beta function.
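As a quick numerical sanity check, the density above can be compared against a reference implementation. The following is a minimal sketch, assuming NumPy and SciPy are available; the helper `f_pdf` is ours, not part of any library, and `scipy.stats.f` uses the same parameterization $(r_{1}, r_{2}) = (\mathtt{dfn}, \mathtt{dfd})$.

```python
import numpy as np
from scipy.special import beta
from scipy.stats import f

def f_pdf(x, r1, r2):
    """Density of F(r1, r2) exactly as written in the definition above."""
    return (
        (r1 / r2) ** (r1 / 2)
        * x ** (r1 / 2 - 1)
        * (1 + (r1 / r2) * x) ** (-(r1 + r2) / 2)
        / beta(r1 / 2, r2 / 2)
    )

x = np.linspace(0.01, 5, 200)
# Agrees with SciPy's implementation to machine precision.
assert np.allclose(f_pdf(x, 3, 10), f.pdf(x, 3, 10))
```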

Basic Properties

Moment Generating Function

  • [1]: The F-distribution does not have a moment-generating function.

Mean and Variance

  • [2]: If $X \sim F(r_{1}, r_{2})$, then
$$ \begin{align*} E(X) &= \frac{r_{2}}{r_{2} - 2}, \qquad r_{2} > 2 \\ \operatorname{Var}(X) &= \frac{2 r_{2}^{2} (r_{1} + r_{2} - 2)}{r_{1} (r_{2} - 2)^{2} (r_{2} - 4)}, \qquad r_{2} > 4 \end{align*} $$
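These closed forms can be checked against SciPy's generic moment machinery; a minimal sketch, assuming `scipy.stats.f`:

```python
from scipy.stats import f

r1, r2 = 5, 12  # any choice with r2 > 4, so both formulas apply
mean, var = f.stats(r1, r2, moments="mv")

assert abs(mean - r2 / (r2 - 2)) < 1e-12
assert abs(var - 2 * r2**2 * (r1 + r2 - 2) / (r1 * (r2 - 2) ** 2 * (r2 - 4))) < 1e-12
```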

Theorem

Let two random variables $U, V$ be independent with $U \sim \chi^{2}(r_{1})$ and $V \sim \chi^{2}(r_{2})$.

$k$th Moment

  • [a]: If $r_{2} > 2k$, then the $k$th moment of $\displaystyle F := \frac{U/r_{1}}{V/r_{2}}$ exists and is given by
$$ E F^{k} = \left( \frac{r_{2}}{r_{1}} \right)^{k} E U^{k} \, E V^{-k} $$

Derived from the Chi-Squared Distribution

  • [b]: $\displaystyle \frac{U/r_{1}}{V/r_{2}} \sim F(r_{1}, r_{2})$
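Theorem [b] is easy to check by simulation: build the ratio from independent chi-squared draws and compare it to $F(r_{1}, r_{2})$. A sketch, assuming SciPy, using a Kolmogorov-Smirnov test as the comparison:

```python
import numpy as np
from scipy.stats import chi2, f, kstest

rng = np.random.default_rng(0)
r1, r2, n = 4, 9, 100_000

u = chi2.rvs(r1, size=n, random_state=rng)  # U ~ chi^2(r1)
v = chi2.rvs(r2, size=n, random_state=rng)  # V ~ chi^2(r2), independent of U
ratio = (u / r1) / (v / r2)

# A large p-value means the KS test finds no mismatch with F(r1, r2).
print(kstest(ratio, f(r1, r2).cdf))
```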

Derived from the Beta Distribution

  • [c]: If a random variable $X \sim F(r_{1}, r_{2})$ follows the F-distribution with degrees of freedom $r_{1}, r_{2}$, then $Y$ defined below follows the beta distribution $\text{Beta}\left( \frac{r_{1}}{2}, \frac{r_{2}}{2} \right)$:
$$ Y := \frac{(r_{1}/r_{2}) X}{1 + (r_{1}/r_{2}) X} \sim \text{Beta}\left( \frac{r_{1}}{2}, \frac{r_{2}}{2} \right) $$
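The transformation in [c] can be verified the same way, by pushing F-distributed samples through $Y = \frac{(r_{1}/r_{2})X}{1 + (r_{1}/r_{2})X}$ and testing against $\text{Beta}(r_{1}/2, r_{2}/2)$; a sketch assuming SciPy:

```python
import numpy as np
from scipy.stats import beta, f, kstest

rng = np.random.default_rng(1)
r1, r2, n = 6, 8, 100_000

x = f.rvs(r1, r2, size=n, random_state=rng)  # X ~ F(r1, r2)
y = (r1 / r2) * x / (1 + (r1 / r2) * x)      # transform from [c]

print(kstest(y, beta(r1 / 2, r2 / 2).cdf))
```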

Derived from the t-Distribution

  • [d]: If a random variable $X \sim t(\nu)$ follows the t-distribution with degrees of freedom $\nu > 0$, then its square $Y$ follows the F-distribution $F(1, \nu)$:
$$ Y := X^{2} \sim F(1, \nu) $$
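Likewise for [d]: squaring t-distributed samples should be indistinguishable from $F(1, \nu)$. A sketch under the same assumptions:

```python
import numpy as np
from scipy.stats import t, f, kstest

rng = np.random.default_rng(2)
nu, n = 7, 100_000

y = t.rvs(nu, size=n, random_state=rng) ** 2  # square of a t(nu) variable

print(kstest(y, f(1, nu).cdf))
```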

Reciprocity

  • [e]: If $X \sim F(r_{1}, r_{2})$, then the distribution of its reciprocal is as follows:
$$ \frac{1}{X} \sim F(r_{2}, r_{1}) $$

Explanation

Just as the t-distribution is called the Student t-distribution, the F-distribution is also referred to as the Snedecor F-distribution, named after the statistician George Snedecor².

The probability density function of the F-distribution may look intimidating at first glance, but in practice there is rarely any need to manipulate the formula itself; what matters most is its relationship with the chi-squared distribution. Just as the chi-squared distribution can be used for goodness-of-fit tests, the F-distribution can be used to compare the variances of two populations. As can be seen directly in theorem [b], the F-distribution is expressed as a ratio of (scaled) chi-squared quantities, so if this statistic deviates too far from $1$, it can be inferred that the variances of the two populations differ; a sketch of such a test follows.
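Here is a minimal sketch of such a variance comparison on hypothetical data, assuming two independent normal samples; the sample sizes and scales below are made up for the example:

```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(3)
a = rng.normal(loc=0.0, scale=1.0, size=30)  # hypothetical sample 1
b = rng.normal(loc=0.0, scale=1.5, size=25)  # hypothetical sample 2

# Under H0 (equal variances), s_a^2 / s_b^2 ~ F(n_a - 1, n_b - 1).
stat = np.var(a, ddof=1) / np.var(b, ddof=1)
df1, df2 = len(a) - 1, len(b) - 1
p_value = 2 * min(f.cdf(stat, df1, df2), f.sf(stat, df1, df2))  # two-sided

print(f"F = {stat:.3f}, p = {p_value:.4f}")
```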

Proof

[1]

The existence of a moment-generating function for a random variable implies that the $k$th moment exists for every $k \in \mathbb{N}$. However, by theorem [a], the $k$th moment of the F-distribution exists only when $k < r_{2}/2$, so a moment-generating function cannot exist.

[2]

Use the moment formula stated in [a].
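Spelling this out: take $k = 1$ and $k = 2$ in [a], together with the chi-squared moments $E U = r_{1}$, $E U^{2} = r_{1}(r_{1} + 2)$, $E V^{-1} = \frac{1}{r_{2} - 2}$, and $E V^{-2} = \frac{1}{(r_{2} - 2)(r_{2} - 4)}$, which all follow from the moment formula quoted in the proof of [a] below:
$$ \begin{align*} E(X) &= \frac{r_{2}}{r_{1}} \cdot r_{1} \cdot \frac{1}{r_{2} - 2} = \frac{r_{2}}{r_{2} - 2} \\ E(X^{2}) &= \frac{r_{2}^{2}}{r_{1}^{2}} \cdot r_{1}(r_{1} + 2) \cdot \frac{1}{(r_{2} - 2)(r_{2} - 4)} = \frac{r_{2}^{2}(r_{1} + 2)}{r_{1}(r_{2} - 2)(r_{2} - 4)} \end{align*} $$
Then $\operatorname{Var}(X) = E(X^{2}) - [E(X)]^{2}$ simplifies to the stated expression.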

[a]

Substituting $t = \frac{r_{1}}{r_{2}} x$ gives $dt = \frac{r_{1}}{r_{2}} dx$, so
$$ \begin{align*} E F^{k} &= \int_{0}^{\infty} x^{k} \frac{1}{B(r_{1}/2, r_{2}/2)} \left( \frac{r_{1}}{r_{2}} \right)^{r_{1}/2} x^{r_{1}/2 - 1} \left( 1 + \frac{r_{1}}{r_{2}} x \right)^{-(r_{1}+r_{2})/2} dx \\ &= \frac{1}{B(r_{1}/2, r_{2}/2)} \left( \frac{r_{1}}{r_{2}} \right)^{r_{1}/2} \int_{0}^{\infty} x^{k + r_{1}/2 - 1} \left( 1 + \frac{r_{1}}{r_{2}} x \right)^{-(r_{1}+r_{2})/2} dx \\ &= \frac{1}{B(r_{1}/2, r_{2}/2)} \left( \frac{r_{1}}{r_{2}} \right)^{r_{1}/2} \int_{0}^{\infty} \left( \frac{r_{2}}{r_{1}} t \right)^{k + r_{1}/2 - 1} (1 + t)^{-(r_{1}+r_{2})/2} \frac{r_{2}}{r_{1}} dt \\ &= \frac{1}{B(r_{1}/2, r_{2}/2)} \left( \frac{r_{1}}{r_{2}} \right)^{r_{1}/2} \left( \frac{r_{2}}{r_{1}} \right)^{k + r_{1}/2} \int_{0}^{\infty} t^{k + r_{1}/2 - 1} (1 + t)^{-r_{1}/2 - r_{2}/2} dt \\ &= \frac{1}{B(r_{1}/2, r_{2}/2)} \left( \frac{r_{2}}{r_{1}} \right)^{k} \int_{0}^{\infty} t^{(r_{1}/2 + k) - 1} (1 + t)^{-(r_{1}/2 + k) - (r_{2}/2 - k)} dt \end{align*} $$

Representation of the beta function as a definite integral:
$$ B(p, q) = \int_{0}^{\infty} \frac{t^{p-1}}{(1+t)^{p+q}} dt $$

Relationship between the beta and gamma functions:
$$ B(p, q) = \frac{\Gamma(p) \Gamma(q)}{\Gamma(p+q)} $$

$$ \begin{align*} E F^{k} &= \frac{1}{B(r_{1}/2, r_{2}/2)} \left( \frac{r_{2}}{r_{1}} \right)^{k} B\left( \frac{r_{1}}{2} + k, \frac{r_{2}}{2} - k \right) \\ &= \left( \frac{r_{2}}{r_{1}} \right)^{k} \frac{\Gamma(r_{1}/2 + r_{2}/2)}{\Gamma(r_{1}/2) \Gamma(r_{2}/2)} \cdot \frac{\Gamma(r_{1}/2 + k) \Gamma(r_{2}/2 - k)}{\Gamma(r_{1}/2 + k + r_{2}/2 - k)} \\ &= \left( \frac{r_{2}}{r_{1}} \right)^{k} \frac{\Gamma(r_{1}/2 + k) \Gamma(r_{2}/2 - k)}{\Gamma(r_{1}/2) \Gamma(r_{2}/2)} \\ &= \left( \frac{r_{2}}{r_{1}} \right)^{k} \frac{2^{k} \Gamma(r_{1}/2 + k)}{\Gamma(r_{1}/2)} \cdot \frac{2^{-k} \Gamma(r_{2}/2 - k)}{\Gamma(r_{2}/2)} \end{align*} $$

Moment of the chi-squared distribution: Let $X \sim \chi^{2}(r)$. If $k > -r/2$, then the $k$th moment exists:
$$ E X^{k} = \frac{2^{k} \Gamma(r/2 + k)}{\Gamma(r/2)} $$

By this moment formula, and since $U$ and $V$ are independent, the two factors are exactly $E U^{k}$ and $E V^{-k}$:
$$ E F^{k} = \left( \frac{r_{2}}{r_{1}} \right)^{k} E U^{k} \, E V^{-k} $$

[b]

Derive directly from the joint density function.
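In outline, one standard route is: with $W := \frac{U/r_{1}}{V/r_{2}}$ and $Z := V$, the inverse transformation is $u = \frac{r_{1}}{r_{2}} w z$, $v = z$, whose Jacobian is $\frac{r_{1}}{r_{2}} z$. By independence,
$$ f_{W,Z}(w, z) = f_{U}\left( \frac{r_{1}}{r_{2}} w z \right) f_{V}(z) \cdot \frac{r_{1}}{r_{2}} z $$
and integrating out $z \in (0, \infty)$ with the chi-squared densities recovers the density in the definition.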

[c]

Derive directly from the variable change.
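In outline: solving $y = \frac{(r_{1}/r_{2})x}{1 + (r_{1}/r_{2})x}$ gives $x = \frac{r_{2}}{r_{1}} \cdot \frac{y}{1 - y}$ and $\frac{dx}{dy} = \frac{r_{2}}{r_{1}} (1 - y)^{-2}$; substituting into the density of the definition, and noting $1 + \frac{r_{1}}{r_{2}} x = (1 - y)^{-1}$, everything collapses to
$$ f_{Y}(y) = \frac{1}{B(r_{1}/2, r_{2}/2)} y^{r_{1}/2 - 1} (1 - y)^{r_{2}/2 - 1}, \qquad y \in (0, 1) $$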

[d]

Circumvent the density calculation by writing the statistic as a ratio of chi-squared variables, as sketched below.
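In outline: by definition of the t-distribution, $X = \frac{Z}{\sqrt{V/\nu}}$ with $Z \sim N(0, 1)$ and $V \sim \chi^{2}(\nu)$ independent, so
$$ X^{2} = \frac{Z^{2}/1}{V/\nu} $$
and since $Z^{2} \sim \chi^{2}(1)$, theorem [b] gives $X^{2} \sim F(1, \nu)$.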

[e]

Since taking the reciprocal merely swaps the numerator and the denominator, this is trivial by theorem [b], as spelled out below. From a practical statistician's point of view, it is more natural to define the F-distribution via theorem [b] in the first place and derive the probability density function from that definition.
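Explicitly, writing $X = \frac{U/r_{1}}{V/r_{2}}$ as in [b],
$$ \frac{1}{X} = \frac{V/r_{2}}{U/r_{1}} \sim F(r_{2}, r_{1}) $$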

See Also

Generalization: Non-central F-distribution


  1. Hogg et al. (2013). Introduction to Mathematical Statistics (7th Edition): p194. ↩︎

  2. Casella. (2001). Statistical Inference (2nd Edition): p222. ↩︎