
Crossings in Stochastic Processes 📂Probability Theory


Definition

Let’s assume that we have a probability space $( \Omega , \mathcal{F} , P)$ and a submartingale $\left\{ ( X_{n} , \mathcal{F}_{n} ) \right\}$. An upcrossing of the closed interval $[a,b]$ occurs when the process moves from some value $X_{t_{1}} \le a$ to a later value $X_{t_{2}} \ge b$, where $t_{1} < t_{2}$. The number of upcrossings observed up to time $N \in \mathbb{N}$ is denoted as follows: $$ \beta_{N} (a,b): = \text{the number of upcrossings of } \left\{ X_{n} \right\} \text{ over the interval } [a,b] $$

Basic Properties

  • [1]: $\chi_{i}$ is an $\mathcal{F}_{i-1}$-measurable function.
  • [2]: $\displaystyle E \beta_{N} (a,b) \le {{ E X_{N}^{+} + |a| } \over { b-a }}$

  • $\chi_{i}$ being an $\mathcal{F}_{i-1}$-measurable function means that for every Borel set $B \in \mathcal{B}(\mathbb{R})$, $\chi_{i}^{-1} (B) \in \mathcal{F}_{i-1}$ holds.

Explanation

(Figure: a sample path making three upcrossings of $[a,b]$.) Simply put, an upcrossing occurs when $X_{n}$ rises from below the lower bound $a$ to above the upper bound $b$. The number of times this happens up to time $N$ is denoted $\beta_{N} (a,b)$; in the picture above, $\beta_{N} (a,b) = 3$.
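Counting upcrossings of a concrete path is a simple scan: wait for a value at or below $a$, then wait for a value at or above $b$, and repeat. A minimal sketch, where the function name `count_upcrossings` is an illustrative choice, not from the text:

```python
# A minimal sketch: count upcrossings of [a, b] by scanning a sample path.
def count_upcrossings(path, a, b):
    """Number of moves from a value <= a to a later value >= b."""
    count = 0
    below = False  # have we touched a (or lower) since the last upcrossing?
    for x in path:
        if not below and x <= a:
            below = True
        elif below and x >= b:
            count += 1
            below = False
    return count

# The path dips to a = 0 twice and then rises past b = 2 both times.
print(count_upcrossings([3, 0, 1, 2, -1, 0, 3, 1], a=0, b=2))  # 2
```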

  • [1]: Before explaining what $\chi_{i}$ is, let me introduce some notation frequently used for upcrossings. If you would rather not read through these definitions, it is fine to just look at the pictures and understand them intuitively. $$ \tau_{1}:= \min_{n} \left\{ 1 \le n \le N: X_{n} \le a \right\} \\ \tau_{2}:= \min_{n} \left\{ \tau_{1} < n \le N: X_{n} \ge b \right\} \\ \tau_{3}:= \min_{n} \left\{ \tau_{2} < n \le N: X_{n} \le a \right\} \\ \tau_{4}:= \min_{n} \left\{ \tau_{3} < n \le N: X_{n} \ge b \right\} \\ \vdots $$ Defined this way, each $\tau_{k}$ is a stopping time at which $X_{n}$ leaves the interval $[a,b]$: for odd $k$ it leaves below $a$, and for even $k$ it leaves above $b$. Thus, for a natural number $m$, the moment of leaving below is $\tau_{2m-1}$ and the moment of leaving above is $\tau_{2m}$, so $m$ naturally indexes the $m$th upcrossing. (Figure: the stopping times $\tau_{k}$ marked on a sample path.)
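The alternating first-passage times above can be computed for a concrete path with one pass. A minimal sketch, using 0-based indexing (the text is 1-based); the function name is an illustrative choice:

```python
# A sketch of the stopping times tau_k for one sample path: alternately record
# the first time the path is <= a, then the first later time it is >= b, etc.
def stopping_times(path, a, b):
    """Alternating first-passage times: tau_1 (<= a), tau_2 (>= b), tau_3, ..."""
    taus, want_low = [], True
    for n, x in enumerate(path):
        if want_low and x <= a:
            taus.append(n)
            want_low = False   # next, wait for the path to reach b
        elif not want_low and x >= b:
            taus.append(n)
            want_low = True    # next, wait for the path to fall back to a
    return taus

# The path reaches a = 0 at times 1 and 4, and b = 2 at times 3 and 5.
print(stopping_times([3, 0, 1, 2, -1, 3], a=0, b=2))  # [1, 3, 4, 5]
```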

$J_{m}$ represents the set of indices during which the $m$th upcrossing is in progress. Written mathematically: $$ J_{m}:= \left\{ k \in \mathbb{N}: \tau_{2m-1} + 1 \le k \le \tau_{2m} \right\} $$ Accordingly, $\chi_{i}$ takes the value $1$ while an upcrossing is in progress and $0$ otherwise. The point of this function is to isolate the parts where upcrossings occur and eliminate the rest by multiplying them by $0$. Mathematically, it is defined as the following indicator function: $$ \begin{align*} \chi_{i} =& \mathbb{1}_{ \bigcup_{m} J_{m} } (i) \\ =& \begin{cases} 1 &, i \in J_{1} \cup J_{2} \cup \cdots \\ 0 &, \text{otherwise} \end{cases} \end{align*} $$ Let’s check this visually in the following picture: (Figure: $\chi_{i}$ equals $1$ exactly on the upcrossing intervals $J_{m}$.)
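The indicator $\chi_{i}$ can likewise be computed from the same alternating first-passage scan: pair up $(\tau_{2m-1}, \tau_{2m})$ and mark the indices in between. A minimal sketch with illustrative names and 0-based indexing:

```python
# A sketch of J_m and the indicator chi_i for one path.
def chi_indicator(path, a, b):
    """chi[i] = 1 iff tau_{2m-1} < i <= tau_{2m} for some m."""
    taus, want_low = [], True
    for n, x in enumerate(path):
        if want_low and x <= a:
            taus.append(n); want_low = False
        elif not want_low and x >= b:
            taus.append(n); want_low = True
    chi = [0] * len(path)
    # each complete pair (tau_{2m-1}, tau_{2m}) spans one set J_m
    for lo, hi in zip(taus[0::2], taus[1::2]):
        for i in range(lo + 1, hi + 1):
            chi[i] = 1
    return chi

# chi is 1 exactly while an upcrossing of [0, 2] is in progress.
print(chi_indicator([3, 0, 1, 2, -1, 3], a=0, b=2))  # [0, 0, 1, 1, 0, 1]
```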

  • [2]: While we cannot compute $E \beta_{N} (a,b)$ exactly, having an upper bound for it is already quite useful. A notable point is that we do not need to observe the whole path: the bound only involves the final value, through $E X_{N}^{+}$.
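The bound can be sanity-checked by simulation. A minimal sketch, assuming a symmetric simple random walk as the process (a martingale, hence in particular a submartingale); the helper name and all parameter choices (`N`, `trials`, the interval $[-1, 1]$) are illustrative, not from the text:

```python
# Monte Carlo check of E beta_N(a,b) <= (E X_N^+ + |a|) / (b - a)
# for a symmetric simple random walk started at 0.
import random

def count_upcrossings(path, a, b):
    count, below = 0, False
    for x in path:
        if not below and x <= a:
            below = True
        elif below and x >= b:
            count += 1
            below = False
    return count

random.seed(0)
N, trials, a, b = 200, 2000, -1.0, 1.0
beta_sum, x_plus_sum = 0.0, 0.0
for _ in range(trials):
    x, path = 0.0, []
    for _ in range(N):
        x += random.choice([-1.0, 1.0])  # one +-1 step of the walk
        path.append(x)
    beta_sum += count_upcrossings(path, a, b)
    x_plus_sum += max(path[-1], 0.0)     # X_N^+ for this path

lhs = beta_sum / trials                          # estimate of E beta_N(a,b)
rhs = (x_plus_sum / trials + abs(a)) / (b - a)   # estimate of the bound
print(lhs <= rhs)
```

With these choices the empirical average upcrossing count sits well below the bound, as the theorem predicts.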

Proof

[1]

Part 1. $(\tau_{2m-1} < i \le \tau_{2m}) = (\tau_{2m-1} < i ) \cap ( i \le \tau_{2m}) $

$\chi_{i}$ can be represented as follows, according to the definition: $$ \begin{align*} \chi_{i} =& \mathbb{1}_{\bigcup J_{m}} \\ =& \sum_{m=1}^{\beta_{N} (a,b) } \mathbb{1}_{J_{m}} \\ =& \sum_{m=1}^{\beta_{N} (a,b) } \mathbb{1}_{(\tau_{2m-1} < i \le \tau_{2m})} \end{align*} $$ So it suffices to check that $(\tau_{2m-1} < i \le \tau_{2m}) \in \mathcal{F}_{i-1}$. Decomposing this event as an intersection: $$ (\tau_{2m-1} < i \le \tau_{2m}) = (\tau_{2m-1} < i ) \cap ( i \le \tau_{2m}) $$


Part 2. $(i \le \tau_{2m} ) \in \mathcal{F}_{i-1}$

From the definition of $\tau_{k}$, the even case $k = 2m$ is the moment when $X_{n}$ rises above $b$. Whether this has happened by time $i-1$ is determined by the path up to time $i-1$ alone: the event $(\tau_{2m} \le i-1)$, i.e. $(\tau_{2m} < i)$, depends only on $X_{1}, \dots, X_{i-1}$ and so belongs to $\mathcal{F}_{i-1}$, and therefore so does its complement. Intuitively: for an upcrossing to occur, $X_{n}$ must go from below $a$ to above $b$, and returning to a point below $a$ takes at least one step, so if $\chi_{i-1} = 1$ and $X_{i-1} \ge b$, the next step must have $\chi_{i} = 0$. We can effectively determine $\chi_{i}$ while observing only up to time $i-1$, with just $\mathcal{F}_{i-1}$’s worth of information. Therefore $(i \le \tau_{2m} ) \in \mathcal{F}_{i-1}$.


Part 3. $(\tau_{2m-1} < i ) \in \mathcal{F}_{i-1}$

The argument of Part 2 applies verbatim to the odd-indexed stopping time $\tau_{2m-1}$, so $(i \le \tau_{2m-1}) \in \mathcal{F}_{i-1}$. Then, since the sigma field $\mathcal{F}_{i-1}$ is closed under complements: $$ \begin{align*} & (i \le \tau_{2m-1} ) \in \mathcal{F}_{i-1} \\ \implies& (i \le \tau_{2m-1} )^{c} \in \mathcal{F}_{i-1} \\ \implies& (\tau_{2m-1} < i ) \in \mathcal{F}_{i-1} \end{align*} $$


Part 4. $(\tau_{2m-1} < i \le \tau_{2m}) \in \mathcal{F}_{i-1}$

According to the definition of the sigma field $\mathcal{F}_{i-1}$: $$ \begin{align*} & (\tau_{2m-1} < i ) \in \mathcal{F}_{i-1} \land (i \le \tau_{2m} ) \in \mathcal{F}_{i-1} \\ \implies& (\tau_{2m-1} < i ) \cap ( i \le \tau_{2m}) \in \mathcal{F}_{i-1} \\ \implies& (\tau_{2m-1} < i \le \tau_{2m}) \in \mathcal{F}_{i-1} \end{align*} $$

[2]

$$ \begin{align*} \beta_{N} (a,b) =& \text{the number of upcrossings of } \left\{ X_{n} \right\} \text{ over } [a,b] \\ =& \text{the number of upcrossings of } \left\{ X_{n} - a \right\} \text{ over } [0,b-a] \\ =& \text{the number of upcrossings of } \left\{ ( X_{n} - a )^{+} \right\} \text{ over } [0,b-a] \end{align*} $$ Therefore, it suffices to prove the following inequality for $Y_{n}:= ( X_{n} - a )^{+}$; since $Y_{N} = ( X_{N} - a )^{+} \le X_{N}^{+} + |a|$ and the interval has length $b-a$, the claim $\displaystyle E \beta_{N} (a,b) \le {{ E X_{N}^{+} + |a| } \over { b-a }}$ then follows without loss of generality: $$ E \beta_{N} (0,b) \le {{ E Y_{N}} \over { b }} $$
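The equality of these three upcrossing counts is easy to verify numerically on a concrete path. A minimal sketch, with an illustrative helper and a hand-made path:

```python
# Check that the upcrossing count is unchanged by shifting by a and by
# clipping with ( . )^+, as used in the reduction above.
def count_upcrossings(path, a, b):
    count, below = 0, False
    for x in path:
        if not below and x <= a:
            below = True
        elif below and x >= b:
            count += 1
            below = False
    return count

X = [4, 1, 2, 3, 0, 1, 4, 2]
a, b = 1, 3
shifted = [x - a for x in X]           # X_n - a over [0, b-a]
clipped = [max(x - a, 0) for x in X]   # (X_n - a)^+ over [0, b-a]
print(count_upcrossings(X, a, b),
      count_upcrossings(shifted, 0, b - a),
      count_upcrossings(clipped, 0, b - a))  # 2 2 2
```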


Part 1. $E \left( Y_{n+1} | \mathcal{F}_{n} \right) \ge Y_{n}$

(Figure: the graph of $f(x) = (x - a)^{+}$.)

Looking at the figure above, it is easy to see that $f(x) = (x - a)^{+}$ is a convex, non-decreasing function. Thus, by the conditional Jensen’s inequality, and then by the submartingale property of $X_{n}$ together with the monotonicity of $( \cdot )^{+}$: $$ \begin{align*} E \left( Y_{n+1} | \mathcal{F}_{n} \right) =& E \left( ( X_{n+1} - a )^{+} | \mathcal{F}_{n} \right) \\ \ge& \left( E \left( X_{n+1} - a | \mathcal{F}_{n} \right) \right)^{+} \\ \ge& \left( X_{n} - a \right)^{+} \\ =& Y_{n} \end{align*} $$


Part 2. $\displaystyle b E \beta_{N} (0,b) \le E Y_{N}$

(Figure: the increase of $Y_{k}$ during each upcrossing is at least $b$.) From the definition of $\tau_{k}$, $\tau_{2m}$ is the moment $Y_{k}$ rises to $b$ or above, so $Y_{\tau_{2m}} \ge b$, while $\tau_{2m-1}$ is the moment it falls to $0$, so $Y_{\tau_{2m-1}} = 0$. Thus the increase of $Y_{k}$ during the $m$th upcrossing is $Y_{\tau_{2m}} - Y_{\tau_{2m-1}} \ge b$, and this holds for each of the $\beta_{N} (0,b)$ upcrossings. Therefore $b \beta_{N} (0,b)$ is at most the sum of these increases. As a formula: $$ \begin{align*} b \beta_{N} (0,b) \le & \sum_{m=1}^{\beta_{N} (0,b)} \left( Y_{\tau_{2m}} - Y_{\tau_{2m-1}} \right) \\ =& \sum_{m=1}^{\beta_{N} (0,b)} \left[ \left( Y_{\tau_{2m}} - Y_{\tau_{2m}-1} \right) + \left( Y_{\tau_{2m}-1} - Y_{\tau_{2m}-2} \right) + \cdots + \left( Y_{\tau_{2m-1}+1} - Y_{\tau_{2m-1}} \right) \right] \\ =& \sum_{m=1}^{\beta_{N} (0,b)} \sum_{i \in J_{m}} \left( Y_{i} - Y_{i-1} \right) \end{align*} $$ Since the summation limit $\beta_{N} (0,b)$ is itself random and awkward to handle, instead of summing over the $J_{m}$ we sum over all of $i = 1 , \cdots , N$ and use $\chi_{i}$, which multiplies by $1$ while an upcrossing is in progress and by $0$ when it is not. The formula above then simplifies as follows: $$ b \beta_{N} (0,b) \le \sum_{m=1}^{\beta_{N} (0,b)} \sum_{i \in J_{m}} \left( Y_{i} - Y_{i-1} \right) = \sum_{i=1}^{N} ( Y_{i} - Y_{i-1}) \chi_{i} $$
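The inequality $b \beta_{N} (0,b) \le \sum_{i=1}^{N} ( Y_{i} - Y_{i-1}) \chi_{i}$ can be checked on a concrete nonnegative path. A minimal sketch, with illustrative names and 0-based indexing:

```python
# Verify b * beta_N(0,b) <= sum_i (Y_i - Y_{i-1}) * chi_i on one path.
def upcrossing_data(Y, b):
    """Return (beta, chi) for the interval [0, b], scanning a 0-indexed path."""
    taus, want_low = [], True
    for n, y in enumerate(Y):
        if want_low and y <= 0:
            taus.append(n); want_low = False
        elif not want_low and y >= b:
            taus.append(n); want_low = True
    chi = [0] * len(Y)
    for lo, hi in zip(taus[0::2], taus[1::2]):   # pairs (tau_{2m-1}, tau_{2m})
        for i in range(lo + 1, hi + 1):
            chi[i] = 1
    return len(taus) // 2, chi

Y = [0, 1, 3, 0, 2, 4, 1]   # nonnegative path with Y[0] = 0
b = 3
beta, chi = upcrossing_data(Y, b)
increments = sum((Y[i] - Y[i - 1]) * chi[i] for i in range(1, len(Y)))
print(beta, increments, b * beta <= increments)  # 2 7 True
```

Here the two complete upcrossings contribute $3$ and $4$, so the right-hand side $7$ indeed dominates $b \beta = 6$.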


Part 3.

Properties of Conditional Expectation:

  • [3]: If $X$ is $\mathcal{F}$-measurable, then $E(X|\mathcal{F}) =X \text{ a.s.}$
  • [11]: For any sigma field $\mathcal{G}$, $E \left[ E ( X | \mathcal{G} ) \right] = E(X)$

Taking the expectation and applying [11] with the conditional expectation $E \left[ \cdot | \mathcal{F}_{i-1} \right]$: $$ \begin{align*} b E \beta_{N} (0,b) \le & \sum_{i=1}^{N} E \left[ ( Y_{i} - Y_{i-1}) \chi_{i} \right] \\ \color{red}{=}& \sum_{i=1}^{N} E \left[ E \left[ ( Y_{i} - Y_{i-1}) \chi_{i} | \mathcal{F}_{i-1} \right] \right] \end{align*} $$

Smoothing Property of Conditional Expectation: If $X$ is $\mathcal{G}$-measurable, then $$E(XY | \mathcal{G}) = X E (Y | \mathcal{G}) \text{ a.s.}$$

According to [1], $\chi_{i}$ is $\mathcal{F}_{i-1}$-measurable, and $Y_{i-1}$ is also $\mathcal{F}_{i-1}$-measurable since $\left\{ Y_{n} \right\}$ is adapted to the filtration. Applying the smoothing property and then [3]: $$ \begin{align*} b E \beta_{N} (0,b) \le & \sum_{i=1}^{N} E \left[ E \left[ ( Y_{i} - Y_{i-1}) \chi_{i} | \mathcal{F}_{i-1} \right] \right] \\ \color{blue}{=}& \sum_{i=1}^{N} E \left[ \chi_{i} E \left[ ( Y_{i} - Y_{i-1}) | \mathcal{F}_{i-1} \right] \right] \\ =& \sum_{i=1}^{N} E \left[ \chi_{i} E \left[ Y_{i} | \mathcal{F}_{i-1} \right] - \chi_{i} E \left[ Y_{i-1} | \mathcal{F}_{i-1} \right] \right] \\ \color{red}{=}& \sum_{i=1}^{N} E \left[ \chi_{i} \left( E \left[ Y_{i} | \mathcal{F}_{i-1} \right] - Y_{i-1} \right) \right] \end{align*} $$ Since $X_{n}$ is a submartingale by assumption, $Y_{n} = ( X_{n} - a )^{+}$ is also a submartingale by Part 1, hence $E \left[ Y_{i} | \mathcal{F}_{i-1} \right] - Y_{i-1} \ge 0$. Because this quantity is nonnegative and $0 \le \chi_{i} \le 1$, dropping $\chi_{i}$ can only increase each term, so the following inequality holds: $$ \begin{align*} b E \beta_{N} (0,b) \le & \sum_{i=1}^{N} E \left[ \chi_{i} \left( E \left[ Y_{i} | \mathcal{F}_{i-1} \right] - Y_{i-1} \right) \right] \\ \le & \sum_{i=1}^{N} \left[ E E \left[ Y_{i} | \mathcal{F}_{i-1} \right] - E Y_{i-1} \right] \\ =& \sum_{i=1}^{N} \left[ E Y_{i} - E Y_{i-1} \right] \\ =& E Y_{N} - E Y_{0} \\ \le & E Y_{N} \end{align*} $$ Finally, since $Y_{n} = ( X_{n} - a )^{+}$, so that $Y_{N} \le X_{N}^{+} + |a|$ and the interval has length $b - a$: $$ E \beta_{N} (a,b) \le {{ E Y_{N} } \over { b - a }} \le {{ E X_{N}^{+} + |a| } \over { b-a }} $$