Conditional Probability of Random Variables Defined by Measure Theory
Definition
Given a probability space $( \Omega , \mathcal{F} , P )$.
- When $\mathcal{G}$ is a sub-sigma-field of $\mathcal{F}$, the conditional probability of an event $F \in \mathcal{F}$ with respect to $\mathcal{G}$ is defined as
$$ P(F \mid \mathcal{G}) := E \left( \mathbf{1}_{F} \mid \mathcal{G} \right) $$
- The conditional density $f_{Y \mid X=x}$ of $Y$ given $X = x$ is defined as
$$ f_{Y \mid X=x}(y \mid X=x) := \frac{\partial}{\partial y} P(Y \le y \mid X = x) $$
- You may ignore the term probability space if you haven’t encountered measure theory yet, but understanding this post without any knowledge of measure theory is nearly impossible.
- That $\mathcal{G}$ is a sub-sigma-field of $\mathcal{F}$ means that both are sigma-fields on $\Omega$, with $\mathcal{G} \subset \mathcal{F}$.
Explanation
Conditional probability, as introduced through measure theory, is defined in terms of conditional expectation.
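To get a feel for $P(F \mid \mathcal{G}) := E \left( \mathbf{1}_{F} \mid \mathcal{G} \right)$ in the simplest possible setting, here is a minimal Python sketch; the dice example, the `partition` dictionary, and the `cond_prob` helper are illustrative choices, not part of the original post. When $\mathcal{G}$ is generated by a finite partition of $\Omega$, the conditional expectation of $\mathbf{1}_{F}$ is constant on each block $A$ and equals the elementary conditional probability $P(F \cap A) / P(A)$.

```python
from fractions import Fraction

# Finite sample space: two fair dice, uniform probability measure.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]
prob = {w: Fraction(1, 36) for w in omega}

# Sub-sigma-field G generated by the partition "value of the first die",
# i.e. G = sigma(X) for X(w) = w[0].
partition = {k: [w for w in omega if w[0] == k] for k in range(1, 7)}

# Event F: the sum of the dice equals 7.
F = {w for w in omega if w[0] + w[1] == 7}

def cond_prob(F, partition, prob):
    """P(F | G) = E(1_F | G): constant on each partition block A,
    with value P(F ∩ A) / P(A)."""
    value = {}
    for block in partition.values():
        p_block = sum(prob[w] for w in block)
        p_joint = sum(prob[w] for w in block if w in F)
        for w in block:
            value[w] = p_joint / p_block
    return value

cp = cond_prob(F, partition, prob)
print(cp[(3, 1)])   # 1/6: given the first die shows 3, P(sum = 7) = 1/6
print(sum(cp[w] * prob[w] for w in omega))  # 1/6 = P(F)
```

The last line is the tower property $E \big( P(F \mid \mathcal{G}) \big) = P(F)$, a quick consistency check on the computed version of $P(F \mid \mathcal{G})$.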
Meanwhile, for the smallest sigma-field $\sigma(X) = \left\{ X^{-1}(B) : B \in \mathcal{B}(\mathbb{R}) \right\}$ on $\Omega$ generated by the random variable $X$, we use the following familiar notation.
$$ E(Y \mid X) := E \left( Y \mid \sigma(X) \right) $$
And within the parentheses of a probability or expectation, $Y \le y$ denotes the following event.
$$ (Y \le y) := \left\{ \omega \in \Omega : Y(\omega) \le y \right\} \in \mathcal{F} $$
Let’s derive the formula $f_{Y \mid X=x}(y \mid X=x) = \dfrac{f(x,y)}{f_X(x)}$ for the conditional density using these notations.
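Before going through the derivation, a quick numerical sanity check may help. The standard bivariate normal below is an assumed example (not part of the original argument) and relies on `scipy`: for correlation $\rho$, the conditional law $Y \mid X = x$ is known to be $N(\rho x, 1 - \rho^2)$, so the ratio $f(x,y)/f_X(x)$ should reproduce exactly that density.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rho = 0.6          # correlation of the assumed bivariate normal example
x, y = 0.8, -0.3   # point at which to compare the two expressions

# Joint density f(x, y) of (X, Y) standard bivariate normal, and marginal f_X(x).
joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
f_joint = joint.pdf([x, y])
f_X = norm.pdf(x)

# Ratio f(x, y) / f_X(x): the formula derived in the post.
lhs = f_joint / f_X

# Known conditional law: Y | X = x ~ N(rho * x, 1 - rho^2).
rhs = norm.pdf(y, loc=rho * x, scale=np.sqrt(1.0 - rho**2))

print(lhs, rhs)              # the two numbers agree
assert np.isclose(lhs, rhs)
```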
Derivation
By the definition above, $P(Y \le y \mid X) = E \left( \mathbf{1}_{(Y \le y)} \mid X \right) = E \left( \mathbf{1}_{(Y \le y)} \mid \sigma(X) \right)$, which, being a conditional expectation given $\sigma(X)$, is $\sigma(X)$-measurable. Naturally, it is assumed that $X$ and $Y$ have a joint density $f(x,y) := f_{(X,Y)}(x,y)$.
For every Borel set $B \in \mathcal{B}(\mathbb{R})$ and $F = X^{-1}(B)$,
$$
\begin{align*}
\int_F P(Y \le y \mid X) \, dP 
&= \int_F E \left( \mathbf{1}_{(Y \le y)} \mid X \right) dP
\\ &= \int_F \mathbf{1}_{(Y \le y)} \, dP
\\ &= \int \mathbf{1}_{(Y \le y)} \mathbf{1}_{F} \, dP
\\ &= E \left( \mathbf{1}_{(Y \le y)} \mathbf{1}_{F} \right)
\\ &= \iint \mathbf{1}_{B}(x) \, \mathbf{1}_{(-\infty, y]}(u) \, f(x,u) \, du \, dx
\\ &= \int_{x \in B} \int_{-\infty}^{y} f(x,u) \, du \, dx
\\ &= \int_{x \in B} \int_{-\infty}^{y} \frac{f(x,u)}{f_X(x)} f_X(x) \, du \, dx
\\ &= \int_{x \in B} \left[ \int_{-\infty}^{y} \frac{f(x,u)}{f_X(x)} \, du \right] f_X(x) \, dx
\\ &= E \left( \mathbf{1}_{F} \int_{-\infty}^{y} \frac{f(X,u)}{f_X(X)} \, du \right)
\\ &= \int_F \int_{-\infty}^{y} \frac{f(X,u)}{f_X(X)} \, du \, dP
\end{align*}
$$
Here the second equality is the defining property of conditional expectation, which applies because $F = X^{-1}(B) \in \sigma(X)$; the fifth rewrites the expectation in terms of the joint density, using $\mathbf{1}_{F} = \mathbf{1}_{B}(X)$ and $\mathbf{1}_{(Y \le y)} = \mathbf{1}_{(-\infty, y]}(Y)$; the seventh multiplies and divides by $f_X(x)$; and the last two steps convert the integral in $x$ back into an integral over $F$ with respect to $P$.
Properties of Lebesgue Integration:
$$ \int_A f \, dm = 0 \iff f = 0 \text{ a.e.} $$
To conclude, since $\displaystyle \int_F P(Y \le y \mid X) \, dP = \int_F \int_{-\infty}^{y} \frac{f(X,u)}{f_X(X)} \, du \, dP$ holds for every $F = X^{-1}(B)$, and both sides of the desired identity are $\sigma(X)$-measurable, almost surely
$$ P(Y \le y \mid X) = \int_{-\infty}^{y} \frac{f(X,u)}{f_X(X)} \, du $$
Finally, evaluating at $X = x$ and applying the Fundamental Theorem of Calculus,
$$
\begin{align*}
f_{Y \mid X=x}(y \mid X=x) &= \frac{\partial}{\partial y} P(Y \le y \mid X = x)
\\ &= \frac{f(x,y)}{f_X(x)} \qquad \text{a.s.}
\end{align*}
$$
■
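As a concrete illustration of the result (the joint density below is an assumed toy example, not from the original post), take $f(x,y) = x + y$ on the unit square $[0,1]^2$:
$$
\begin{align*}
f_X(x) &= \int_0^1 (x + y) \, dy = x + \frac{1}{2} , \qquad 0 \le x \le 1
\\ f_{Y \mid X=x}(y \mid X=x) &= \frac{f(x,y)}{f_X(x)} = \frac{x + y}{x + \frac{1}{2}} , \qquad 0 \le y \le 1
\end{align*}
$$
One can check that $\int_0^1 \frac{x + y}{x + 1/2} \, dy = 1$, so this is indeed a probability density in $y$ for each fixed $x$.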