
Proof of the Monotone Convergence Theorem for Conditional Cases 📂Probability Theory

Theorem

Let’s assume that a probability space $( \Omega , \mathcal{F} , P)$ and a sub-sigma-field $\mathcal{G} \subset \mathcal{F}$ are given.

Consider a sequence of random variables $\left\{ X_{n} \right\}_{n \in \mathbb{N}}$ and $X \in \mathcal{L}^{1} (\Omega)$ satisfying
$$ X_{1} \le X_{2} \le \cdots \le X \qquad X_{n} \to X \text{ a.s.} $$
Then
$$ \lim_{n \to \infty} E( X_{n} | \mathcal{G} ) = E( \lim_{n \to \infty} X_{n} | \mathcal{G} ) \text{ a.s.} $$


Explanation

The conditional monotone convergence theorem simply states that the monotone convergence theorem applies to conditional expectations just as well. Its role in probability theory is the same as that of MCT.
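To see the statement in action, here is a minimal numerical sketch on a finite probability space. The model, the partition, and all names are illustrative assumptions, not part of the theorem: when $\mathcal{G}$ is generated by a finite partition, $E(\,\cdot\,|\mathcal{G})$ is just the cellwise average, and $E(X_{n}|\mathcal{G})$ increases to $E(X|\mathcal{G})$.

```python
import numpy as np

# Toy model (an assumption for illustration): Omega = {0,...,7}, uniform P,
# G generated by the partition {0,...,3} and {4,...,7}.
omega = np.arange(8)
cells = [omega[:4], omega[4:]]

X = omega.astype(float)  # X(w) = w, integrable on a finite space

def cond_exp(Z, cells):
    """E(Z | G): on each cell of the partition, the average of Z over that cell."""
    out = np.empty_like(Z, dtype=float)
    for c in cells:
        out[c] = Z[c].mean()
    return out

# X_n = min(X, n) satisfies X_1 <= X_2 <= ... <= X and X_n -> X a.s.,
# and the cellwise conditional expectations increase to E(X | G).
for n in [1, 3, 7]:
    print(n, cond_exp(np.minimum(X, n), cells))
print("E(X|G):", cond_exp(X, cells))
```

On this toy space the convergence is exact once $n \ge \max X$, but the monotone increase of $E(X_{n}|\mathcal{G})$ toward $E(X|\mathcal{G})$ is already visible for small $n$.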

Proof

Strategy: Use the monotone convergence theorem to interchange $\displaystyle \int$ and $\displaystyle \lim_{n \to \infty}$, and use the defining property of conditional expectation to add and remove $E(\,\cdot\,|\mathcal{G})$ so that the integrands agree.


Part 1. $X_{1} \ge 0$

According to the monotone convergence theorem, for all $A \in \mathcal{G}$,
$$ \begin{align*} \int_{A} \lim_{n \to \infty} E( X_{n} | \mathcal{G} ) dP \color{red}{=}& \lim_{n \to \infty} \int_{A} E( X_{n} | \mathcal{G} ) dP \\ =& \lim_{n \to \infty} \int_{A} X_{n} dP \\ \color{red}{=}& \int_{A} \lim_{n \to \infty} X_{n} dP \\ =& \int_{A} E( \lim_{n \to \infty} X_{n} | \mathcal{G} ) d P \end{align*} $$
The red equalities use the monotone convergence theorem, and the black ones use the defining property of conditional expectation, since $A \in \mathcal{G}$. Since $\displaystyle \forall A \in \mathcal{F}, \int_{A} f dm = 0 \iff f = 0 \text{ a.e.}$, it follows that
$$ \lim_{n \to \infty} E( X_{n} | \mathcal{G} ) = E( \lim_{n \to \infty} X_{n} | \mathcal{G} ) \text{ a.s.} $$
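The un-highlighted equalities in the chain rest on the defining property of conditional expectation: $\int_{A} E( X_{n} | \mathcal{G} ) dP = \int_{A} X_{n} dP$ for every $A \in \mathcal{G}$. Here is a quick numerical sanity check of that property in a toy partition model (the space, weights, and names are illustrative assumptions):

```python
import numpy as np

# Toy model (illustrative assumption): finite Omega with uniform P,
# G generated by a two-cell partition.
rng = np.random.default_rng(0)
n_pts = 6
p = np.full(n_pts, 1 / n_pts)                     # uniform P
cells = [np.array([0, 1, 2]), np.array([3, 4, 5])]

X = rng.standard_normal(n_pts)

# E(X | G) is the cellwise weighted average.
cond = np.empty(n_pts)
for c in cells:
    cond[c] = (X[c] * p[c]).sum() / p[c].sum()

# For any A in G (a union of cells): int_A E(X|G) dP = int_A X dP.
for A in (cells[0], cells[1], np.concatenate(cells)):
    lhs = (cond[A] * p[A]).sum()
    rhs = (X[A] * p[A]).sum()
    assert np.isclose(lhs, rhs)
print("defining property verified on every A in G")
```

The same identity fails for sets outside $\mathcal{G}$ in general, which is why the proof only integrates over $A \in \mathcal{G}$.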


Part 2. $X_{1} < 0$

If we set $Y_{n} := X_{n} - X_{1}$, so that $Y_{n} \nearrow Y$ for the random variable $Y = X - X_{1}$, then $Y_{1} \ge 0$, and therefore, according to Part 1,
$$ \lim_{n \to \infty} E( Y_{n} | \mathcal{G} ) = E( \lim_{n \to \infty} Y_{n} | \mathcal{G} ) \text{ a.s.} $$
Then, by the linearity of conditional expectation, we obtain the following.
$$ \begin{align*} \lim_{n \to \infty} E( X_{n} | \mathcal{G} ) =& \lim_{n \to \infty} E( Y_{n} + X_{1} | \mathcal{G} ) \\ =& \lim_{n \to \infty} E( Y_{n} | \mathcal{G} ) + E( X_{1} | \mathcal{G} ) \\ =& E( X - X_{1} | \mathcal{G} ) + E( X_{1} | \mathcal{G} ) \\ =& E( X - X_{1} + X_{1} | \mathcal{G} ) \\ =& E( \lim_{n \to \infty} X_{n} | \mathcal{G} ) \text{ a.s.} \end{align*} $$
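The second equality in the chain uses linearity, $E( Y_{n} + X_{1} | \mathcal{G} ) = E( Y_{n} | \mathcal{G} ) + E( X_{1} | \mathcal{G} )$ a.s., which is what lets the shifted sequence $Y_{n}$ carry the result back to $X_{n}$. A quick check in a toy partition model (the setup and names are illustrative assumptions, not from the proof):

```python
import numpy as np

# Toy partition model (illustrative assumption): uniform P on 8 points,
# G generated by a two-cell partition.
size = 8
cells = [np.arange(4), np.arange(4, 8)]

def cond_exp(Z):
    """E(Z | G) for the partition-generated G: cellwise average under uniform P."""
    out = np.empty(size)
    for c in cells:
        out[c] = Z[c].mean()
    return out

rng = np.random.default_rng(1)
X1 = rng.standard_normal(size)  # plays the role of X_1 (may be negative)
Yn = rng.standard_normal(size)  # plays the role of Y_n = X_n - X_1 (>= 0 in the proof)

# Linearity: E(Y_n + X_1 | G) = E(Y_n | G) + E(X_1 | G), cellwise.
assert np.allclose(cond_exp(Yn + X1), cond_exp(Yn) + cond_exp(X1))
print("linearity holds cellwise")
```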
