Independence of Random Variables Defined by Measure Theory
Definition 1
Let a probability space $( \Omega , \mathcal{F} , P)$ be given.
Random variables $X$, $Y$ are said to be independent if, for all Borel sets $B_{1} , B_{2} \in \mathcal{B} ( \mathbb{R} )$, the following holds: $$ P \left( X^{-1} (B_{1} ) \cap Y^{-1} (B_{2} ) \right) = P \left( X^{-1} (B_{1}) \right) P \left( Y^{-1} (B_{2}) \right) $$
- If you have not yet encountered measure theory, you may ignore the term probability space.
In fact, once one moves past the elementary theory of probability distributions, the independence of events is rarely discussed on its own. By definition, however, $X^{-1} (B_{1}) , Y^{-1} (B_{2}) \in \mathcal{F}$ are events, so this is essentially an extension of the independence of events, and defining it this way is entirely natural.
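Concretely, this contains the elementary definition of independence as a special case. The following short derivation, a standard consequence rather than part of the quoted text, takes the half-lines $B_{1} = (-\infty, x]$ and $B_{2} = (-\infty, y]$ as the Borel sets: $$ P(X \le x, Y \le y) = P \left( X^{-1} ((-\infty, x]) \cap Y^{-1} ((-\infty, y]) \right) \\ = P \left( X^{-1} ((-\infty, x]) \right) P \left( Y^{-1} ((-\infty, y]) \right) = P(X \le x) P(Y \le y) $$ which is exactly the factorization of the joint distribution function used in elementary courses.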
At first glance the formula may look complicated and even unwelcome. However, since random variables are introduced precisely to handle probability with mathematical efficiency, such a definition is faithful to the goal of 'handling it through formulas'. In particular, the following equivalent conditions are useful.
Theorem
The independence of random variables $X$, $Y$ is equivalent to each of the following.
- [1] Expected value: For all Borel functions $f$, $g$ (see the numerical sketch below) $$ E \left( f(X) g(Y) \right) = E \left( f(X) \right) E \left( g(Y) \right) $$
- [2] Joint density: When $X$, $Y$ have a joint density $f_{(X,Y)}$ $$ P_{(X,Y)} = P_{X} \times P_{Y} \\ f_{(X,Y)} (x,y) = f_{X} (x) f_{Y} (y) $$
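As a quick numerical sanity check of condition [1], here is a minimal Monte Carlo sketch in Python. It is not from the source; the choices $f(x) = x^{2}$, $g(y) = \cos y$, and the standard normal distribution are illustrative assumptions. For independent samples the two estimates should agree up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # fixed seed for reproducibility
n = 1_000_000                         # number of Monte Carlo samples

# X, Y: independent standard normal random variables (illustrative choice)
X = rng.standard_normal(n)
Y = rng.standard_normal(n)

# Illustrative Borel functions f, g; any measurable choice would do
f = lambda x: x**2
g = lambda y: np.cos(y)

lhs = np.mean(f(X) * g(Y))           # estimate of E[f(X) g(Y)]
rhs = np.mean(f(X)) * np.mean(g(Y))  # estimate of E[f(X)] E[g(Y)]

# The two estimates agree up to Monte Carlo error when X and Y are independent
print(f"E[f(X)g(Y)]   ~ {lhs:.4f}")
print(f"E[f(X)]E[g(Y)] ~ {rhs:.4f}")
```

If $X$ and $Y$ were dependent instead (for example $Y = X$), the two estimates would in general differ.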
Capiński, Kopp. (1999). Measure, Integral and Probability: p70. ↩︎