Independence of Events and Conditional Probability
Definition 1
Let’s assume a probability space $(\Omega , \mathcal{F} , P)$ is given.
- For $P(B)>0$, $\displaystyle P (A | B) = {{P(A \cap B)} \over {P(B)}}$ is called the conditional probability of $A$ given $B$.
- If $P( A \cap B) = P(A) \cdot P(B)$, or equivalently $P(A | B) = P(A)$ when $P(B) > 0$, then $A, B$ are said to be independent.
- If you haven’t yet encountered measure theory, you can ignore the term probability space.
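These definitions can be checked concretely on a finite sample space. The following sketch uses a hypothetical example, a fair six-sided die with the uniform measure, and verifies that the events "even" and "at most 4" satisfy the independence condition above; the event names and the helper `P` are illustrative choices, not part of the definition.

```python
from fractions import Fraction

# Sample space of a fair six-sided die: Omega = {1, ..., 6}
omega = set(range(1, 7))
A = {2, 4, 6}     # event "the roll is even"
B = {1, 2, 3, 4}  # event "the roll is at most 4"

def P(event):
    """Uniform probability measure on omega, as an exact fraction."""
    return Fraction(len(event), len(omega))

# Conditional probability P(A|B) = P(A ∩ B) / P(B), defined since P(B) > 0
P_A_given_B = P(A & B) / P(B)

print(P_A_given_B)  # 1/2
print(P(A))         # 1/2

# Independence: P(A ∩ B) = P(A) · P(B)
print(P(A & B) == P(A) * P(B))  # True
```

Since $P(A \mid B) = P(A) = 1/2$, conditioning on $B$ carries no information about $A$, which is exactly what independence means.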
Explanation
As long as the probability space is well defined, conditional probability and independence of events can be used exactly as they are defined at the high-school level. This is natural, since both definitions are quite intuitive. The reason for pointing this out is to emphasize that nothing changes when measure theory is introduced, in contrast to how independent random variables and conditional expectation must be redefined in the measure-theoretic setting.
Rearranging the definition of conditional probability yields the following two laws, which can be applied directly to derive Bayes’ theorem.
Theorems
- [1] Multiplication rule of probability: for $P(B) > 0$, $$ P(A \cap B) = P(B) P(A | B) $$
- [2] Law of total probability: if $B_{1} , \dots , B_{k}$ partition $\Omega$ with $P(B_{i}) > 0$ for each $i$, then $$ P(A) = \sum_{i=1}^{k} P(B_{i}) P (A|B_{i}) $$
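Both laws can be verified numerically on the same hypothetical fair-die space; the partition into low, middle, and high rolls below is an arbitrary illustrative choice, and `P_given` is an assumed helper, not standard library API.

```python
from fractions import Fraction

# Sample space of a fair six-sided die: Omega = {1, ..., 6}
omega = set(range(1, 7))

def P(event):
    """Uniform probability measure on omega, as an exact fraction."""
    return Fraction(len(event), len(omega))

def P_given(event, given):
    """Conditional probability P(event | given), assuming P(given) > 0."""
    return P(event & given) / P(given)

A = {2, 4, 6}     # event "the roll is even"
B = {1, 2, 3, 4}  # event "the roll is at most 4"

# [1] Multiplication rule: P(A ∩ B) = P(B) · P(A|B)
assert P(A & B) == P(B) * P_given(A, B)

# [2] Law of total probability over a partition of omega
partition = [{1, 2}, {3, 4}, {5, 6}]
total = sum(P(part) * P_given(A, part) for part in partition)
assert total == P(A)
```

The second assertion works because the partition pieces are disjoint and cover $\Omega$, so summing $P(B_{i}) P(A \mid B_{i}) = P(A \cap B_{i})$ over the pieces recovers $P(A)$ by additivity.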
Capiński (1999). Measure, Integral and Probability: pp. 47–49. ↩︎