Density and Cumulative Distribution Functions of Random Variables Defined by Measure Theory 📂Probability Theory

Definition 1

A probability space $( \Omega , \mathcal{F} , P )$ is given, and let $m$ be a measure.

  1. For an integrable $f \ge 0$, if the measure $P : \mathcal{F} \to \mathbb{R}$ has the form $$A \mapsto P(A) = \int_{A} f dm$$ then $P$ is said to be absolutely continuous. In particular, such an $f$ is called the density of $P$ with respect to the measure $m$.
  2. The $F$ defined as follows is called the (cumulative) distribution function corresponding to the density $f$. $$F(y) := \int_{-\infty}^{y} f(x) dx$$
  3. The $F_{X}$ defined as follows is called the (cumulative) distribution function of the random variable $X$. $$F_{X} (y) := P_{X} \left( ( -\infty , y ] \right)$$
  4. The $f_{X}$ satisfying the following for all $y \in \mathbb{R}$ is called the density of the random variable $X$. $$F_{X} (y) = \int_{-\infty}^{y} f_{X} (x) dx$$

  • If you’re not yet acquainted with measure theory, you can ignore the term “probability space.”
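The relationship between a density $f$ and its distribution function $F$ in definitions 2 and 4 can be sketched numerically. This is a minimal illustration with an assumed example density $f(x) = 2x$ on $[0,1]$ (not from the source), recovering $F(y) = \int_{-\infty}^{y} f(x) dx$ by a midpoint Riemann sum:

```python
def f(x):
    """A hypothetical density: f(x) = 2x on [0, 1], and 0 elsewhere."""
    return 2.0 * x if 0.0 <= x <= 1.0 else 0.0

def F(y, n=100_000):
    """Distribution function F(y) = ∫_{-∞}^{y} f(x) dx via a midpoint
    Riemann sum; since f vanishes below 0, integration starts at 0."""
    if y <= 0.0:
        return 0.0
    h = y / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h

print(F(0.5))  # analytically F(0.5) = 0.5**2 = 0.25
print(F(1.0))  # total mass: ∫_Ω f dm = P(Ω) = 1
```

Here $F(1) \approx 1$ reflects exactly the normalization $\int_{\Omega} f dm = 1$ discussed below.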

Explanation

Naturally, according to the definition of probability, $\displaystyle \int_{\Omega} f dm = P ( \Omega ) = 1$ holds.

Absolute Continuity

When $P$ is given by $\displaystyle P(A) = \int_{A} f dm$, its absolute continuity may seem hardly surprising upon a little thought: enlarging $A$ by a single point $a$ changes the value only by $\int_{\{ a \}} f dm$, which is $0$ whenever $m \left( \{ a \} \right) = 0$, so $P(A \cup \{ a \})$ and $P(A)$ cannot differ. This originates from the properties of the measure $m$, regardless of $P$ or $f$: even if $f$ is a discontinuous function, a $P$ of this form is continuous.
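This point can be made concrete with an assumed example (not from the source): the uniform density $f = \mathbf{1}_{[0,1]}$ jumps at $x = 1$, yet the distribution function it defines has no jump there, since the single point $\{1\}$ carries Lebesgue measure zero.

```python
def f(x):
    """Uniform density on [0, 1]: discontinuous at x = 0 and x = 1."""
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def F(y, n=200_000):
    """F(y) = ∫_0^y f(x) dx by a midpoint Riemann sum (f vanishes below 0)."""
    if y <= 0.0:
        return 0.0
    h = y / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h

eps = 1e-4
print(F(1.0 - eps), F(1.0 + eps))  # both ≈ 1: F has no jump where f does
```

Despite the discontinuity of $f$ at $1$, the two values straddling it agree up to the step size: $F$, and hence $P$, is continuous.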

Theorem

Properties of the Distribution Function 2

The distribution function has the following properties:

  • [1] Non-decreasing: $y_{1} \le y_{2} \implies F_{X} (y_{1}) \le F_{X} ( y_{2} )$
  • [2] Limits at the extremes: $$\begin{align*} \lim_{y \to \infty} F_{X} (y) =& 1 \\ \lim_{y \to -\infty} F_{X} (y) =& 0 \end{align*}$$
  • [3] Right-continuous: for $y \ge y_{0}$, $y \to y_{0} \implies F_{X} (y) \to F_{X} (y_{0} )$
  • [4] With the definition of density, you obtain another useful expression for the expected value of a non-negative random variable $X \ge 0$: $$E(X) = \int_{0}^{\infty} P(X>t) dt$$
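Properties [1] through [3] are easy to observe on a concrete distribution function. As an assumed example (not from the source), take the exponential distribution function $F_{X}(y) = 1 - e^{-y}$ for $y \ge 0$ and $F_{X}(y) = 0$ for $y < 0$:

```python
import math

def F(y):
    """Exponential distribution function: F(y) = 1 - e^{-y} for y >= 0."""
    return 1.0 - math.exp(-y) if y >= 0.0 else 0.0

ys = [i / 10.0 for i in range(-20, 101)]  # grid on [-2, 10]
nondecreasing = all(F(a) <= F(b) for a, b in zip(ys, ys[1:]))  # property [1]
print(nondecreasing)       # True
print(F(50.0))             # property [2]: ≈ 1 for large y; F(y) = 0 for y < 0
print(F(0.0), F(1e-12))    # property [3]: values agree approaching 0 from the right
```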

Proof

[4]

Since the region of integration is $0 \le t \le y < \infty$, according to Fubini's theorem $$\begin{align*} \int_{0}^{\infty} P(X>t) dt =& \int_{0}^{\infty} P_{X} \left( ( t , \infty ) \right) dt \\ =& \int_{0}^{\infty} \int_{t}^{\infty} f_{X} (y) dy dt \\ =& \int_{0}^{\infty} \int_{0}^{y} f_{X} (y) dt dy \\ =& \int_{0}^{\infty} \left( \int_{0}^{y} dt \right) f_{X} (y) dy \\ =& \int_{0}^{\infty} y f_{X} (y) dy \\ =& E(X) \end{align*}$$
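The identity proved above can be sanity-checked numerically. This sketch assumes $X \sim \text{Exp}(1)$ (an example chosen here, not from the source), where $P(X > t) = e^{-t}$ and $E(X) = 1$, and approximates $\int_{0}^{\infty} P(X>t) dt$ by a midpoint Riemann sum truncated at a large upper limit:

```python
import math

def tail(t):
    """P(X > t) for X ~ Exponential(1)."""
    return math.exp(-t)

def integral(upper=40.0, n=400_000):
    """Midpoint Riemann sum of the tail probability on [0, upper];
    the omitted mass beyond `upper` is below e^{-40}, hence negligible."""
    h = upper / n
    return sum(tail((k + 0.5) * h) for k in range(n)) * h

print(integral())  # ≈ 1.0 = E(X)
```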

  1. Capinski. (1999). Measure, Integral and Probability: p106~109. ↩︎

  2. Capinski. (1999). Measure, Integral and Probability: p110. ↩︎