\( \newcommand{\P}{\mathbb{P}} \) \( \newcommand{\E}{\mathbb{E}} \) \( \newcommand{\R}{\mathbb{R}} \) \( \newcommand{\N}{\mathbb{N}} \) \( \newcommand{\Z}{\mathbb{Z}} \) \( \newcommand{\bs}{\boldsymbol} \) \( \newcommand{\ms}{\mathscr} \) \( \newcommand{\mf}{\mathfrak} \) \( \newcommand{\cov}{\text{cov}} \) \( \newcommand{\cor}{\text{cor}} \) \( \newcommand{\var}{\text{var}} \) \( \newcommand{\sd}{\text{sd}} \)

9. Filtrations and Stopping Times

Introduction

Suppose that \( \bs{X} = \{X_t: t \in T\} \) is a stochastic process with state space \( (S, \ms S) \) defined on an underlying probability space \( (\Omega, \ms F, \P) \). To review, \( \Omega \) is the set of outcomes, \( \ms F \) the \( \sigma \)-algebra of events, and \( \P \) the probability measure on \( (\Omega, \ms F) \). Also \( S \) is the set of states, and \( \ms S \) the \( \sigma \)-algebra of admissible subsets of \( S \). Usually, \( S \) has a topology and \( \ms S \) is the Borel \( \sigma \)-algebra generated by the open subsets of \( S \). A standard set of assumptions is that the topology is locally compact, Hausdorff, and has a countable base, which we will abbreviate by LCCB. For the index set, we assume that either \( T = \N \) or \( T = [0, \infty) \), and as usual in these cases, we interpret the elements of \( T \) as points of time. The set \( T \) is also given a topology, the discrete topology in the first case and the standard Euclidean topology in the second case, and then the Borel \( \sigma \)-algebra \( \ms T \). So in discrete time with \( T = \N \), \( \ms T = \ms P(T) \), the power set of \( T \), so every subset of \( T \) is measurable, as is every function from \( T \) into another measurable space. Finally, \( X_t \) is a random variable and so by definition is measurable with respect to \( \ms F \) and \( \ms S \) for each \( t \in T \). We interpret \( X_t \) as the state of some random system at time \( t \in T \). Many important concepts involving \( \bs X \) are based on how the future behavior of the process depends on the past behavior, relative to a given current time.

For \( t \in T \), let \( \ms F_t = \sigma\left\{X_s: s \in T, \; s \le t\right\} \), the \( \sigma \)-algebra of events that can be defined in terms of the process up to time \( t \). Roughly speaking, for a given \( A \in \ms F_t \), we can tell whether or not \( A \) has occurred if we are allowed to observe the process up to time \( t \). The family of \( \sigma \)-algebras \( \mf F = \{\ms F_t: t \in T\} \) has two critical properties: the family is increasing in \( t \in T\), relative to the subset partial order, and all of the \( \sigma \)-algebras are sub \( \sigma \)-algebras of \( \ms F \). That is for \( s, \, t \in T \) with \( s \le t \), we have \( \ms F_s \subseteq \ms F_t \subseteq \ms F \).
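In discrete time, the natural filtration can be made completely concrete on a finite sample space. The sketch below (the three-coin-toss sample space and the `atoms` helper are illustrative choices, not part of the text) computes the partition of outcomes whose unions make up \( \ms F_n = \sigma\{X_1, \ldots, X_n\} \); the partitions refine as \( n \) grows, which is exactly the increasing property of a filtration in this finite setting.

```python
from itertools import product

# Sample space for three coin tosses; each outcome is a tuple like ('H', 'T', 'H').
omega = list(product("HT", repeat=3))

def atoms(n):
    """Partition of omega generated by the first n tosses.

    Two outcomes lie in the same atom iff they agree on tosses 1..n;
    the sigma-algebra F_n consists of all unions of these atoms.
    """
    blocks = {}
    for w in omega:
        blocks.setdefault(w[:n], []).append(w)
    return [frozenset(b) for b in blocks.values()]

# The partitions refine: each atom of F_{n+1} sits inside an atom of F_n,
# which is the increasing property F_n subset of F_{n+1} in this finite setting.
for n in range(3):
    assert all(any(a <= b for b in atoms(n)) for a in atoms(n + 1))

print([len(atoms(n)) for n in range(4)])  # [1, 2, 4, 8]
```

So at time \( n \) we can distinguish \( 2^n \) atoms: observing the process up to time \( n \) tells us which atom has occurred, but nothing finer.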

Filtrations

Basic Definitions

Sometimes we need \( \sigma \)-algebras that are a bit larger than the ones in the last paragraph. For example, there may be other random variables that we get to observe, as time goes by, besides the variables in \( \bs X \). Sometimes, particularly in continuous time, there are technical reasons for somewhat different \( \sigma \)-algebras. Finally, we may want to describe how our information grows, as a family of \( \sigma \)-algebras, without reference to a random process. For the remainder of this section, we have a fixed measurable space \( (\Omega, \ms F) \), which we again think of as a sample space, and the time space \( (T, \ms T) \) as described above.

A family of \( \sigma \)-algebras \(\mf F = \{\ms F_t: t \in T\} \) is a filtration on \( (\Omega, \ms F) \) if \(s, \, t \in T\) and \(s \le t\) imply \(\ms F_s \subseteq \ms F_t \subseteq \ms F\).

  1. The space \( \left(\Omega, \ms F, \mf F\right) \) is a filtered sample space.
  2. If \( \P \) is a probability measure on \( (\Omega, \ms F) \), then \( \left(\Omega, \ms F, \mf F, \P\right) \) is a filtered probability space.

So a filtration is simply an increasing family of sub-\(\sigma\)-algebras of \( \ms F \), indexed by \( T \). We think of \( \ms F_t \) as the \( \sigma \)-algebra of events up to time \( t \in T \). The larger the \( \sigma \)-algebras in a filtration, the more events that are available, so the following relation on filtrations is natural.

Suppose that \( \mf F =\{\ms F_t: t \in T\} \) and \( \mf G = \{\ms G_t: t \in T\} \) are filtrations on \( (\Omega, \ms F) \). Then \( \mf F \) is coarser than \( \mf G \) and \( \mf G \) is finer than \( \mf F \), and we write \( \mf F \preceq \mf G \), if \( \ms F_t \subseteq \ms G_t \) for all \( t \in T \). The relation \( \preceq \) is a partial order on the collection of filtrations on \( (\Omega, \ms F) \). That is, if \( \mf F = \{\ms F_t: t \in T\} \), \( \mf G = \{\ms G_t: t \in T\} \), and \( \mf H = \{\ms H_t: t \in T\} \) are filtrations then

  1. \( \mf F \preceq \mf F \), the reflexive property.
  2. If \( \mf F \preceq \mf G \) and \( \mf G \preceq \mf F \) then \( \mf F = \mf G \), the antisymmetric property.
  3. If \( \mf F \preceq \mf G \) and \( \mf G \preceq \mf H \) then \( \mf F \preceq \mf H \), the transitive property.
Details:

The proof is a simple consequence of the fact that the subset relation defines a partial order.

  1. \( \ms F_t \subseteq \ms F_t \) for each \( t \in T \) so \( \mf F \preceq \mf F \).
  2. If \( \mf F \preceq \mf G \) and \( \mf G \preceq \mf F \) then \( \ms F_t \subseteq \ms G_t \) and \( \ms G_t \subseteq \ms F_t \) for each \( t \in T \). Hence \( \ms F_t = \ms G_t \) for each \( t \in T \) and so \( \mf F = \mf G \).
  3. If \( \mf F \preceq \mf G \) and \( \mf G \preceq \mf H \) then \( \ms F_t \subseteq \ms G_t \) and \( \ms G_t \subseteq \ms H_t \) for each \( t \in T \). Hence \( \ms F_t \subseteq \ms H_t \) for each \( t \in T \) and so \( \mf F \preceq \mf H \).

So the coarsest filtration on \( (\Omega, \ms F) \) is the one where \( \ms F_t = \{\Omega, \emptyset\} \) for every \( t \in T \) while the finest filtration is the one where \( \ms F_t = \ms F \) for every \( t \in T \). In the first case, we gain no information as time evolves, and in the second case, we have complete information from the beginning of time. Usually neither of these is realistic.

It's also natural to consider the \( \sigma \)-algebra that encodes our information over all time.

For a filtration \( \mf F = \{\ms F_t: t \in T\} \) on \( (\Omega, \ms F) \), define \(\ms F_\infty = \sigma \left( \bigcup\left\{\ms F_t: t \in T\right\} \right)\). Then

  1. \(\ms F_\infty = \sigma \left( \bigcup\left\{\ms F_t: t \in T, t \ge s\right\} \right)\) for \( s \in T \).
  2. \( \ms F_t \subseteq \ms F_\infty \) for \( t \in T \).
Details:

These results follow since the \(\sigma\)-algebras in a filtration are increasing in time.

Of course, it may be the case that \( \ms F_\infty = \ms F \), but not necessarily. Recall that the intersection of a collection of sub \( \sigma \)-algebras of \( \ms F \) is another sub \( \sigma \)-algebra of \( \ms F \). We can use this to create new filtrations from a collection of given filtrations.

Suppose that \( \mf F_i = \left\{\ms F^i_t: t \in T\right\} \) is a filtration on \( (\Omega, \ms F) \) for each \( i \) in a nonempty index set \( I \). Then \( \mf F = \{\ms F_t: t \in T\} \) is also a filtration on \( (\Omega, \ms F) \) where \( \ms F_t = \bigcap_{i \in I} \ms F^i_t \) for \( t \in T \). This filtration is sometimes denoted \( \mf F = \bigwedge_{i \in I} \mf F_i \), and is the finest filtration that is coarser than \( \mf F_i \) for every \( i \in I \).

Details:

Suppose \( s, \, t \in T \) with \( s \le t \). Then \(\ms F^i_s \subseteq \ms F^i_t \subseteq \ms F\) for each \( i \in I \) so it follows that \( \bigcap_{i \in I} \ms F^i_s \subseteq \bigcap_{i \in I} \ms F^i_t \subseteq \ms F \).

Unions of \( \sigma \)-algebras are not in general \( \sigma \)-algebras, but we can construct a new filtration from a given collection of filtrations using unions in a natural way.

Suppose again that \( \mf F_i = \left\{\ms F^i_t: t \in T\right\} \) is a filtration on \( (\Omega, \ms F) \) for each \( i \) in a nonempty index set \( I \). Then \( \mf F = \{\ms F_t: t \in T\} \) is also a filtration on \( (\Omega, \ms F) \) where \( \ms F_t = \sigma\left(\bigcup_{i \in I} \ms F^i_t\right) \) for \( t \in T \). This filtration is sometimes denoted \( \mf F = \bigvee_{i \in I} \mf F_i \), and is the coarsest filtration that is finer than \( \mf F_i \) for every \( i \in I \).

Details:

Suppose \( s, \, t \in T \) with \( s \le t \). Then \(\ms F^i_s \subseteq \ms F^i_t \subseteq \ms F\) for each \( i \in I \) so it follows that \( \bigcup_{i \in I} \ms F^i_s \subseteq \bigcup_{i \in I} \ms F^i_t \subseteq \ms F \), and hence \( \sigma\left(\bigcup_{i \in I} \ms F^i_s\right) \subseteq \sigma\left(\bigcup_{i \in I} \ms F^i_t\right) \subseteq \ms F \).

Stochastic Processes

Note again that we can have a filtration without an underlying stochastic process in the background. However, we usually do have a stochastic process \( \bs X = \{X_t: t \in T\} \), and in this case the filtration \( \mf F^0 = \{\ms F^0_t: t \in T\} \) where \( \ms F^0_t = \sigma\{X_s: s \in T, \, s \le t\} \) is the natural filtration associated with \( \bs X \). More generally, the following definition is appropriate.

A stochastic process \(\bs X = \{X_t: t \in T\}\) on \( (\Omega, \ms F) \) is adapted to a filtration \( \mf F = \{\ms F_t: t \in T\}\) on \( (\Omega, \ms F) \) if \(X_t\) is measurable with respect to \(\ms F_t\) for each \(t \in T\).

Equivalently, \( \bs X \) is adapted to \( \mf F \) if \( \mf F \) is finer than \( \mf F^0 \), the natural filtration associated with \( \bs X \). That is, \( \sigma\{X_s: s \in T, \; s \le t\} \subseteq \ms F_t \) for each \( t \in T \). So clearly, if \( \bs X \) is adapted to a filtration, then it is adapted to any finer filtration, and \( \mf F^0 \) is the coarsest filtration to which \( \bs X \) is adapted. The basic idea behind the definition is that if the filtration \( \mf F \) encodes our information as time goes by, then the process \( \bs X \) is observable. In discrete time, there is a related definition.

Suppose that \( T = \N \). A stochastic process \( \bs X = \{X_n: n \in \N\} \) is predictable by the filtration \( \mf F = \{\ms F_n: n \in \N\} \) if \( X_{n +1}\) is measurable with respect to \( \ms F_n \) for all \( n \in \N \).

Clearly if \( \bs X \) is predictable by \( \mf F \) then \( \bs X \) is adapted to \( \mf F \). But predictable is better than adapted, in the sense that if \( \mf F \) encodes our information as time goes by, then we can look one step into the future in terms of \( \bs X \): at time \( n \) we can determine \( X_{n+1} \). The concept of predictability can be extended to continuous time, but the definition is much more complicated.
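A betting strategy gives a concrete predictable process in discrete time. In the sketch below (the `bets` helper, the win/loss coding, and the double-after-loss rule are illustrative choices), the stake for round \( n + 1 \) is computed from the outcomes of rounds \( 1, \ldots, n \) only, so the stake process is predictable by the natural filtration of the outcome process.

```python
def bets(outcomes):
    """Stake sizes for a double-after-loss strategy.

    bets(outcomes)[n] is the stake for round n+1 and depends only on
    outcomes[:n], so the stake process is predictable by the natural
    filtration of the outcome process.
    """
    stakes = [1]  # the initial stake is a constant, measurable w.r.t. F_0
    for w in outcomes[:-1]:  # the stake for round n+1 uses outcomes 1..n only
        stakes.append(1 if w == "W" else 2 * stakes[-1])
    return stakes

print(bets(["L", "L", "W", "L"]))  # [1, 2, 4, 1]
```

At time \( n \) the gambler already knows the stake for round \( n + 1 \): that is the one-step look into the future that predictability provides.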

Note that ultimately, a stochastic process \( \bs X = \{X_t: t \in T\} \) with sample space \( (\Omega, \ms F) \) and state space \( (S, \ms S) \) can be viewed as a function from \( \Omega \times T \) into \( S \), so \( X_t(\omega) \in S \) is the state at time \( t \in T \) corresponding to the outcome \( \omega \in \Omega \). By definition, \( \omega \mapsto X_t(\omega) \) is measurable for each \( t \in T \), but it is often necessary for the process to be jointly measurable in \( \omega \) and \( t \).

Suppose that \( \bs X = \{X_t: t \in T\} \) is a stochastic process with sample space \( (\Omega, \ms F) \) and state space \( (S, \ms S) \). Then \( \bs X \) is measurable if \( \bs X: \Omega \times T \to S \) is measurable with respect to \( \ms F \times \ms T \) and \( \ms S \).

When we have a filtration, as we usually do, there is a stronger condition that is natural. Let \( T_t = \{s \in T: s \le t\} \) for \( t \in T \), and let \( \ms T_t = \{A \cap T_t: A \in \ms T\} \) be the corresponding induced \( \sigma \)-algebra.

Suppose that \( \bs X = \{X_t: t \in T\} \) is a stochastic process with sample space \( (\Omega, \ms F) \) and state space \( (S, \ms S) \), and that \( \mf F = \{\ms F_t: t \in T\} \) is a filtration. Then \( \bs X \) is progressively measurable relative to \( \mf F \) if \( \bs X: \Omega \times T_t \to S\) is measurable with respect to \( \ms F_t \times \ms T_t \) and \( \ms S \) for each \( t \in T \).

Clearly if \( \bs X \) is progressively measurable with respect to a filtration, then it is progressively measurable with respect to any finer filtration. Of course when \( T \) is discrete, every process \( \bs X \) is measurable, and every process \( \bs X \) adapted to \( \mf F \) is progressively measurable, so these definitions are only of interest in the case of continuous time.

Suppose again that \( \bs X = \{X_t: t \in T\} \) is a stochastic process with sample space \( (\Omega, \ms F) \) and state space \( (S, \ms S) \), and that \( \mf F = \{\ms F_t: t \in T\} \) is a filtration. If \( \bs X \) is progressively measurable relative to \( \mf F \) then

  1. \( \bs X \) is measurable.
  2. \( \bs X \) is adapted to \( \mf F \).
Details:

Suppose that \( \bs X \) is progressively measurable relative to \( \mf F \).

  1. If \( A \in \ms S \) then \[ \{\bs X \in A\} = \{(\omega, t) \in \Omega \times T: X_t(\omega) \in A\} = \bigcup_{n=1}^\infty \{(\omega, t) \in \Omega \times T_n: X_t(\omega) \in A\} \] By progressive measurability, the \( n \)th term in the union is in \( \ms F_n \times \ms T_n \subseteq \ms F \times \ms T \), so the union is in \( \ms F \times \ms T \).
  2. Suppose that \( t \in T \). Then \( \bs X: \Omega \times T_t \to S\) is measurable with respect to \( \ms F_t \times \ms T_t \) and \( \ms S \). But \( X_t: \Omega \to S \) is just the cross section of this function at \( t \) and hence is measurable with respect to \( \ms F_t \) and \( \ms S \).

When the state space is a topological space (which is usually the case), then as you might guess, there is a natural link between continuity of the sample paths and progressive measurability.

Suppose that \( S \) has an LCCB topology and that \( \ms S \) is the \( \sigma \)-algebra of Borel sets. Suppose also that \( \bs X = \{X_t: t \in [0, \infty)\} \) is right continuous. Then \( \bs X \) is progressively measurable relative to the natural filtration \( \mf F^0 \).

So if \( \bs X \) is right continuous, then \( \bs X \) is progressively measurable with respect to any filtration to which \( \bs X \) is adapted. We have studied different ways that two stochastic processes can be equivalent. The following example illustrates some of the subtleties of processes in continuous time.

Suppose that \( \Omega = T = [0, \infty) \), \( \ms F = \ms T \) is the \( \sigma \)-algebra of Borel measurable subsets of \( [0, \infty) \), and \( \P \) is any continuous probability measure on \( (\Omega, \ms F) \). Let \( S = \{0, 1\} \) and \( \ms S = \ms P(S) = \{\emptyset, \{0\}, \{1\}, \{0, 1\}\} \). For \( t \in T \) and \( \omega \in \Omega \), define \( X_t(\omega) = \bs{1}_t(\omega) \) and \( Y_t(\omega) = 0 \). Then

  1. \( \bs X = \{X_t: t \in T\} \) is a version of \( \bs Y = \{Y_t: t \in T\} \)
  2. \( \bs X \) is not adapted to the natural filtration of \( \bs Y \).
Details:
  1. This was shown in the section on stochastic processes, but here it is again: For \( t \in T \), \( \P(X_t \ne Y_t) = \P(\{t\}) = 0 \).
  2. Trivially, \( \sigma(Y_t) = \{\emptyset, \Omega\} \) for every \( t \in T \), so \( \sigma\{Y_s: 0 \le s \le t\} = \{\emptyset, \Omega\} \). But \( \sigma(X_t) = \{\emptyset, \{t\}, \Omega \setminus \{t\}, \Omega\} \).

Completion

Suppose now that \( P \) is a probability measure on \( (\Omega, \ms F) \). Recall that \(\ms F\) is complete with respect to \( P \) if \(A \in \ms F\), \( B \subseteq A \), and \(P(A) = 0\) imply \(B \in \ms F\) (and hence \( P(B) = 0 \)). That is, if \( A \) is an event with probability 0 and \( B \subseteq A \), then \( B \) is also an event (and also has probability 0). For a filtration, the following definition is appropriate.

The filtration \(\{\ms F_t: t \in T\}\) is complete with respect to a probability measure \( P \) on \( (\Omega, \ms F) \) if

  1. \(\ms F\) is complete with respect to \( P \)
  2. If \(A \in \ms F\) and \(P(A) = 0\) then \(A \in \ms F_0\).

Suppose \( P \) is a probability measure on \( (\Omega, \ms F) \) and that the filtration \(\{\ms F_t: t \in T\}\) is complete with respect to \( P \). If \(A \in \ms F\) is a null event (\(P(A) = 0\)) or an almost certain event (\(P(A) = 1\)) then \(A \in \ms F_t\) for every \(t \in T\).

Details:

This follows since almost certain events are complements of null events and since the \(\sigma\)-algebras are increasing in \(t \in T\).

Recall that if \( P \) is a probability measure on \( (\Omega, \ms F) \), but \( \ms F \) is not complete with respect to \( P \), then \( \ms F \) can always be completed. Here's a review of how this is done: Let \[ \ms{N} = \{A \subseteq \Omega: \text{ there exists } N \in \ms F \text{ with } P(N) = 0 \text{ and } A \subseteq N\}\] So \( \ms{N} \) is the collection of null sets. Then we let \( \ms F^P = \sigma(\ms F \cup \ms{N}) \) and extend \( P \) to \( \ms F^P \) in the natural way: if \( A \in \ms F^P \) and \( A \) differs from \( B \in \ms F \) by a null set, then \( P(A) = P(B) \). Filtrations can also be completed.

Suppose that \( \mf F = \{\ms F_t: t \in T\} \) is a filtration on \( (\Omega, \ms F) \) and that \( P \) is a probability measure on \( (\Omega, \ms F) \). As above, let \( \ms{N} \) denote the collection of null subsets of \( \Omega \), and for \( t \in T \), let \( \ms F^P_t = \sigma(\ms F_t \cup \ms{N}) \). Then \( \mf F^P = \{\ms F^P_t: t \in T\} \) is a filtration on \( \left(\Omega, \ms F^P\right) \) that is finer than \( \mf F \) and is complete relative to \( P \).

Details:

If \( s, \, t \in T \) with \( s \le t \) then \( \ms F_s \subseteq \ms F_t \subseteq \ms F \) and hence \[ \sigma(\ms F_s \cup \ms{N}) \subseteq \sigma(\ms F_t \cup \ms{N}) \subseteq \sigma(\ms F \cup \ms{N})\] and so \( \ms F^P_s \subseteq \ms F^P_t \subseteq \ms F^P \). The probability measure \( P \) can be extended to \( \ms F^P \) as described above, and hence is defined on \( \ms F^P_t \) for each \( t \in T \). By construction, if \( A \in \ms F^P \) and \( P(A) = 0 \) then \( A \in \ms F^P_0 \) so \( \mf F^P \) is complete with respect to \( P \).

Naturally, \( \mf F^P \) is the completion of \( \mf F \) with respect to \( P \). Sometimes we need to consider all probability measures on \( (\Omega, \ms F) \).

Let \( \ms P \) denote the collection of probability measures on \( (\Omega, \ms F) \), and suppose that \( \mf F = \{\ms F_t: t \in T\} \) is a filtration on \( (\Omega, \ms F) \). Let \( \ms F^* = \bigcap \{\ms F^P: P \in \ms P\} \), and let \( \mf F^* = \bigwedge \{\mf F^P: P \in \ms P\} \). Then \( \mf F^* \) is a filtration on \( (\Omega, \ms F^*) \), known as the universal completion of \( \mf F \).

Details:

Note that \( \mf F^P \) is a filtration on \( (\Omega, \ms F^P) \) for each \( P \in \ms P \), so \( \mf F^* \) is a filtration on \( (\Omega, \ms F^*) \).

The last definition must seem awfully obscure, but it does have a place. In the theory of Markov processes we usually allow arbitrary initial distributions, which in turn produces a large collection of distributions on the sample space.

Right Continuity

In continuous time, we sometimes need to refine a given filtration somewhat.

Suppose that \( \mf F = \{\ms F_t: t \in [0, \infty)\} \) is a filtration on \( (\Omega, \ms F) \). For \( t \in [0, \infty) \), define \( \ms F_{t+} = \bigcap \{\ms F_s: s \in (t, \infty)\} \). Then \( \mf F_+ = \{\ms F_{t+}: t \in T\} \) is also a filtration on \( (\Omega, \ms F) \) and is finer than \( \mf F \).

Details:

For \( t \in [0, \infty) \) note that \( \ms F_{t+} \) is a \( \sigma \)-algebra since it is the intersection of \( \sigma \)-algebras, and clearly \( \ms F_{t+} \subseteq \ms F \). Next, if \( s, \, t \in [0, \infty) \) with \( s \le t \), then \( \{\ms F_r: r \in (t, \infty)\} \subseteq \{\ms F_r: r \in (s, \infty)\} \), so it follows that \[ \ms F_{s+} = \bigcap\{\ms F_r: r \in (s, \infty)\} \subseteq \bigcap\{\ms F_r: r \in (t, \infty)\} = \ms F_{t+} \] Finally, for \( t \in [0, \infty) \), \( \ms F_t \subseteq \ms F_s \) for every \( s \in (t, \infty) \) so \( \ms F_t \subseteq \bigcap\{\ms F_s: s \in (t, \infty)\} = \ms F_{t+} \).

Since the \( \sigma \)-algebras in a filtration are increasing, it follows that for \( t \in [0, \infty) \), \( \ms F_{t+} = \bigcap\{\ms F_s: s \in (t, t + \epsilon)\} \) for every \( \epsilon \in (0, \infty) \). So if the filtration \( \mf F \) encodes the information available as time goes by, then the filtration \( \mf F_+ \) allows an infinitesimal peek into the future at each \( t \in [0, \infty) \). In light of the previous result, the next definition is natural.

A filtration \( \mf F = \{\ms F_t: t \in [0, \infty)\} \) is right continuous if \( \mf F_+ = \mf F \), so that \( \ms F_{t+} = \ms F_t \) for every \( t \in [0, \infty) \).

Right continuous filtrations have some nice properties, as we will see later. If the original filtration is not right continuous, the slightly refined filtration is:

Suppose again that \( \mf F = \{\ms F_t: t \in [0, \infty)\} \) is a filtration. Then \( \mf F_+ \) is a right continuous filtration.

Details:

For \( t \in T \) \[ \ms F_{t++} = \bigcap\{\ms F_{s+}: s \in (t, \infty)\} = \bigcap\left\{\bigcap\{\ms F_r: r \in (s, \infty)\}: s \in (t, \infty)\right\} = \bigcap\{\ms F_u: u \in (t, \infty)\} = \ms F_{t+} \]

For a stochastic process \( \bs X = \{X_t: t \in [0, \infty)\} \) in continuous time, often the filtration \( \mf F \) that is most useful is the right-continuous refinement of the natural filtration. That is, \( \mf F = \mf F^0_+ \), so that \( \ms F_t = \ms F^0_{t+} \) for \( t \in [0, \infty) \), where \( \ms F^0_t = \sigma\{X_s: s \in [0, t]\} \).

Stopping Times

Basic Properties

Suppose again that we have a fixed sample space \( (\Omega, \ms F) \). Random variables taking values in the time set \( T \) are important, but often as we will see, it's necessary to allow such variables to take the value \( \infty \) as well as finite times. So let \( T_\infty = T \cup \{\infty\} \). We extend the order to \( T_\infty \) by the obvious rule that \( t \lt \infty \) for every \( t \in T \). We also extend the topology on \( T \) to \( T_\infty \) by the rule that for each \( s \in T \), the set \( \{t \in T_\infty: t \gt s\} \) is an open neighborhood of \( \infty\). That is, \( T_\infty \) is the one-point compactification of \( T \). The reason for this is to preserve the meaning of time converging to infinity. That is, if \( (t_1, t_2, \ldots) \) is a sequence in \( T_\infty \) then \( t_n \to \infty \) as \( n \to \infty \) if and only if, for every \( t \in T \) there exists \( m \in \N_+ \) such that \( t_n \gt t \) for \( n \gt m \). We then give \( T_\infty \) the Borel \( \sigma \)-algebra \( \ms T_\infty \) as before. In discrete time, this is once again the discrete \( \sigma \)-algebra, so that all subsets are measurable. In both cases, we now have the enhanced time space \( (T_\infty, \ms T_\infty) \). A random variable \( \tau \) taking values in \( T_\infty \) is called a random time.

Suppose that \( \mf F = \{\ms F_t: t \in T\} \) is a filtration on \( (\Omega, \ms F) \). A random time \( \tau \) is a stopping time relative to \( \mf F \) if \( \{\tau \le t\} \in \ms F_t \) for each \( t \in T \).

In a sense, a stopping time is a random time that does not require that we see into the future. That is, we can tell whether or not \( \tau \le t \) from our information at time \( t \). The term stopping time comes from gambling. Consider a gambler betting on games of chance. The gambler's decision to stop gambling at some point in time and accept his fortune must define a stopping time. That is, the gambler can base his decision to stop gambling on all of the information that he has at that point in time, but not on what will happen in the future. The terms Markov time and optional time are sometimes used instead of stopping time. If \( \tau \) is a stopping time relative to a filtration, then it is also a stopping time relative to any finer filtration:

Suppose that \( \mf F = \{\ms F_t: t \in T\} \) and \( \mf G = \{\ms G_t: t \in T\} \) are filtrations on \( (\Omega, \ms F) \), and that \(\mf G\) is finer than \( \mf F \). If a random time \( \tau \) is a stopping time relative to \( \mf F \) then \( \tau \) is a stopping time relative to \( \mf G \).

Details:

This is very simple. If \( t \in T \) then \( \{\tau \le t\} \in \ms F_t \) and hence \( \{\tau \le t\} \in \ms G_t \) since \( \ms F_t \subseteq \ms G_t \).

So, the finer the filtration, the larger the collection of stopping times. In fact, every random time is a stopping time relative to the finest filtration \( \mf F \) where \( \ms F_t = \ms F \) for every \( t \in T \). But this filtration corresponds to having complete information from the beginning of time, which of course is usually not sensible. At the other extreme, for the coarsest filtration \( \mf F \) where \( \ms F_t = \{\Omega, \emptyset\} \) for every \( t \in T \), the only stopping times are constants. That is, random times of the form \( \tau(\omega) = t \) for every \( \omega \in \Omega \), for some \(t \in T_\infty \).
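First passage times are the canonical examples of stopping times. The sketch below (the hitting level, the path length, and the `tau` helper are illustrative choices) checks the defining property directly on a finite sample space of random-walk paths: whether \( \tau \le n \) is determined by the first \( n \) steps alone, so that \( \{\tau \le n\} \in \ms F^0_n \).

```python
from itertools import product

def tau(path):
    """First time the walk with the given +/-1 steps hits level 2 (None = infinity)."""
    s = 0
    for n, step in enumerate(path, start=1):
        s += step
        if s == 2:
            return n
    return None  # stands in for tau = infinity

# All step sequences of length 4: a finite sample space.
paths = list(product([1, -1], repeat=4))

# Stopping-time property: whether tau <= n is determined by the first n steps.
# Any two paths that agree through time n must agree on the event {tau <= n}.
for n in range(1, 5):
    for p, q in product(paths, repeat=2):
        if p[:n] == q[:n]:
            assert ((tau(p) or 99) <= n) == ((tau(q) or 99) <= n)
print("tau is a stopping time for the natural filtration")
```

By contrast, a "last passage" time, such as the last visit to level 0, fails this check: whether it has already occurred by time \( n \) depends on the steps after time \( n \).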

Suppose again that \( \mf F = \{\ms F_t: t \in T\} \) is a filtration on \( (\Omega, \ms F) \). A random time \( \tau \) is a stopping time relative to \( \mf F \) if and only if \( \{\tau \gt t\} \in \ms F_t \) for each \( t \in T \).

Details:

This result is trivial since \( \{\tau \gt t\} = \{\tau \le t\}^c \) for \( t \in T \).

Suppose again that \( \mf F = \{\ms F_t: t \in T\} \) is a filtration on \( (\Omega, \ms F) \), and that \( \tau \) is a stopping time relative to \( \mf F \). Then each of the following events is in \(\ms F_t\) for every \(t \in T\):

  1. \( \{\tau \lt t\}\)
  2. \( \{\tau \ge t\}\)
  3. \( \{\tau = t\}\)
Details:
  1. Suppose first that \( T = \N \). Then \( \{\tau \lt 0\} = \emptyset \in \ms F_0 \), and \( \{\tau \lt t\} = \{\tau \le t - 1\} \in \ms F_{t-1} \subseteq \ms F_t \) for \( t \in \N_+ \). Next suppose that \( T = [0, \infty) \). Again \( \{\tau \lt 0\} = \emptyset \in \ms F_0 \). Fix \(t \in (0, \infty)\) and let \((s_1, s_2, \ldots)\) be a strictly increasing sequence in \([0, \infty)\) with \(s_n \uparrow t\) as \(n \to \infty\). Then \( \{\tau \lt t\} = \bigcup_{n=1}^\infty \{\tau \le s_n\} \). But \( \{\tau \le s_n\} \in \ms F_{s_n} \subseteq \ms F_t \) for each \( n \), so \( \{\tau \lt t\} \in \ms F_t \).
  2. This follows from (a) since \( \{\tau \ge t\} = \{\tau \lt t\}^c \) for \( t \in T \).
  3. For \( t \in T \) note that \( \{\tau = t\} = \{\tau \le t\} \setminus \{\tau \lt t\} \). Both events in the set difference are in \( \ms F_t \).

Note that when \(T = \N\), we actually showed that \(\{\tau \lt t\} \in \ms F_{t-1}\) and \(\{\tau \ge t\} \in \ms F_{t-1}\) for \( t \in \N_+ \). The converse to part (a) (or equivalently (b)) is not true, but in continuous time there is a connection to the right-continuous refinement of the filtration.

Suppose that \( T = [0, \infty) \) and that \( \mf F = \{\ms F_t: t \in [0, \infty)\} \) is a filtration on \( (\Omega, \ms F) \). A random time \( \tau \) is a stopping time relative to \( \mf F_+ \) if and only if \( \{\tau \lt t\} \in \ms F_t \) for every \(t \in [0, \infty)\).

Details:

So restated, we need to show that \( \{\tau \le t\} \in \ms F_{t+} \) for every \( t \in [0, \infty) \) if and only if \( \{\tau \lt t\} \in \ms F_t \) for every \( t \in [0, \infty) \). (Note by the way, that this is not the same as the statement that for a fixed \( t \in T \), \( \{\tau \le t\} \in \ms F_{t+} \) if and only if \( \{\tau \lt t\} \in \ms F_t \), which is not true.) Suppose first that \( \{\tau \lt t\} \in \ms F_t \) for every \( t \in [0, \infty) \). Fix \(t \in [0, \infty)\) and let \((t_1, t_2, \ldots)\) be a strictly decreasing sequence in \([0, \infty)\) with \(t_n \downarrow t\) as \(n \to \infty\). Then for each \(k \in \N_+\), \( \{\tau \le t\} = \bigcap_{n=k}^\infty \{\tau \lt t_n\} \). If \( s \gt t \) then there exists \( k \in \N_+ \) such that \( t_n \lt s \) for each \( n \ge k \). Hence \( \{\tau \lt t_n\} \in \ms F_{t_n} \subseteq \ms F_s \) for \( n \ge k \), and so it follows that \( \{\tau \le t\} \in \ms F_s \). Since this is true for every \( s \gt t \), it follows that \(\{\tau \le t\} \in \ms F_{t+}\). Conversely, suppose that \( \{\tau \le t\} \in \ms F_{t+} \) for every \( t \in [0, \infty) \). Trivially \( \{\tau \lt 0\} = \emptyset \in \ms F_0 \), so fix \( t \in (0, \infty) \) and let \( (t_1, t_2, \ldots) \) be a strictly increasing sequence in \( (0, \infty) \) with \( t_n \uparrow t \) as \( n \to \infty \). Then \( \{\tau \lt t\} = \bigcup_{n=1}^\infty \{\tau \le t_n\} \). But for every \( n \in \N_+ \) \[ \{\tau \le t_n\} \in \ms F_{t_n+} = \bigcap\left\{\ms F_s: s \in (t_n, t)\right\} \subseteq \ms F_t \] Hence \( \{\tau \lt t \} \in \ms F_t\).

If \( \mf F = \{\ms F_t: t \in [0, \infty)\} \) is a filtration and \( \tau \) is a random time that satisfies \( \{\tau \lt t \} \in \ms F_t \) for every \( t \in T \), then some authors call \( \tau \) a weak stopping time or say that \( \tau \) is weakly optional for the filtration \( \mf F \). But to me, the increase in jargon is not worthwhile, and it's better to simply say that \( \tau \) is a stopping time for the filtration \(\mf F_+\). The following corollary now follows.

Suppose that \( T = [0, \infty) \) and that \( \mf F = \{\ms F_t: t \in [0, \infty)\} \) is a right-continuous filtration. A random time \( \tau \) is a stopping time relative to \( \mf F \) if and only if \( \{\tau \lt t\} \in \ms F_t \) for every \( t \in [0, \infty) \).

The converse to part (c) of the result above holds in discrete time.

Suppose that \( T = \N \) and that \( \mf F = \{\ms F_n: n \in \N\} \) is a filtration on \( (\Omega, \ms F) \). A random time \( \tau \) is a stopping time for \( \mf F \) if and only if \( \{\tau = n\} \in \ms F_n \) for every \( n \in \N \).

Details:

If \(\tau\) is a stopping time then, as shown above, \(\{\tau = n\} \in \ms F_n\) for every \( n \in \N \). Conversely, suppose that this condition holds. For \(n \in \N\), \(\{\tau \le n\} = \bigcup_{k=0}^n \{\tau = k\}\). But \(\{\tau = k\} \in \ms F_k \subseteq \ms F_n\) for \(k \in \{0, 1, \ldots, n\}\) so \(\{\tau \le n\} \in \ms F_n\).

Basic Constructions

As noted above, a constant element of \(T_\infty\) is a stopping time, but not a very interesting one.

Suppose \(s \in T_\infty\) and that \(\tau(\omega) = s\) for all \(\omega \in \Omega\). Then \(\tau\) is a stopping time relative to any filtration on \( (\Omega, \ms F) \).

Details:

For \( t \in T \) note that \(\{\tau \le t\} = \Omega\) if \(s \le t\) and \(\{\tau \le t\} = \emptyset\) if \(s \gt t\).

If the filtration \(\{\ms F_t: t \in T\}\) is complete, then a random time that is almost certainly a constant is also a stopping time. The following theorems give some basic ways of constructing new stopping times from ones we already have.

Suppose that \( \mf F = \{\ms F_t: t \in T\} \) is a filtration on \( (\Omega, \ms F) \) and that \(\tau_1\) and \(\tau_2\) are stopping times relative to \( \mf F \). Then each of the following is also a stopping time relative to \( \mf F \):

  1. \(\tau_1 \vee \tau_2 = \max\{\tau_1, \tau_2\}\)
  2. \(\tau_1 \wedge \tau_2 = \min\{\tau_1, \tau_2\}\)
  3. \(\tau_1 + \tau_2\)
Details:
  1. Note that \(\{\tau_1 \vee \tau_2 \le t\} = \{\tau_1 \le t\} \cap \{\tau_2 \le t\} \in \ms F_t\) for \(t \in T\), so the result follows from the definition.
  2. Note that \(\{\tau_1 \wedge \tau_2 \gt t\} = \{\tau_1 \gt t\} \cap \{\tau_2 \gt t\} \in \ms F_t\) for \(t \in T\), so the result follows from the complementary characterization above.
  3. This is simple when \(T = \N\). In this case, \(\{\tau_1 + \tau_2 \le t\} = \bigcup_{n=0}^t \{\tau_1 = n\} \cap \{\tau_2 \le t - n\}\). But for \(n \le t\), \(\{\tau_1 = n\} \in \ms F_n \subseteq \ms F_t\) and \(\{\tau_2 \le t - n\} \in \ms F_{t - n} \subseteq \ms F_t\). Hence \(\{\tau_1 + \tau_2 \le t\} \in \ms F_t\). Suppose instead that \(T = [0, \infty)\) and \(t \in T\). Then \(\tau_1 + \tau_2 \gt t\) if and only if one of the following events occurs: \(\tau_1 \gt t\); \(\tau_1 = 0\) and \(\tau_2 \gt t\); or \(q \lt \tau_1 \le t\) and \(\tau_2 \gt t - q\) for some rational \(q \in [0, t)\). Each of these events implies \(\tau_1 + \tau_2 \gt t\), and conversely, if \(0 \lt \tau_1 \le t\) and \(\tau_2 \gt t - \tau_1\) then there is a rational \(q \in [0, t)\) with \(t - \tau_2 \lt q \lt \tau_1\). Each of the events is in \(\ms F_t\), and there are countably many of them, so \(\{\tau_1 + \tau_2 \gt t\} \in \ms F_t\).
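The set identities behind parts (a) and (b) are easy to check numerically. In the sketch below (the toy sample space and the two random times are arbitrary illustrative choices), \( \{\tau_1 \vee \tau_2 \le t\} = \{\tau_1 \le t\} \cap \{\tau_2 \le t\} \) and \( \{\tau_1 \wedge \tau_2 \le t\} = \{\tau_1 \le t\} \cup \{\tau_2 \le t\} \) are verified outcome by outcome.

```python
# Toy sample space and two hypothetical random times on it.
outcomes = range(12)
tau1 = {w: w % 3 + 1 for w in outcomes}
tau2 = {w: w % 4 + 1 for w in outcomes}

def event_le(tau, t):
    """The event {tau <= t} as a set of outcomes."""
    return {w for w in outcomes if tau[w] <= t}

# max <= t iff both are <= t; min <= t iff at least one is <= t.
for t in range(1, 6):
    assert {w for w in outcomes if max(tau1[w], tau2[w]) <= t} \
        == event_le(tau1, t) & event_le(tau2, t)
    assert {w for w in outcomes if min(tau1[w], tau2[w]) <= t} \
        == event_le(tau1, t) | event_le(tau2, t)
print("max/min identities verified")
```

Since \( \sigma \)-algebras are closed under finite intersections and unions, these identities are exactly what the proofs above use.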

Here is the obvious corollary, by induction on the number of terms.

If \( (\tau_1, \tau_2, \ldots, \tau_n) \) is a finite sequence of stopping times relative to \( \mf F \), then each of the following is also a stopping time relative to \( \mf F \):

  1. \( \tau_1 \vee \tau_2 \vee \cdots \vee \tau_n \)
  2. \( \tau_1 \wedge \tau_2 \wedge \cdots \wedge \tau_n \)
  3. \( \tau_1 + \tau_2 + \cdots + \tau_n \)

But we have to be careful when we try to extend these results to infinite sequences.

Suppose that \( \mf F = \{\ms F_t: t \in T\} \) is a filtration on \( (\Omega, \ms F) \), and that \((\tau_n: n \in \N_+)\) is a sequence of stopping times relative to \( \mf F \). Then \(\sup\{\tau_n: n \in \N_+\}\) is also a stopping time relative to \( \mf F \).

Details:

Let \(\tau = \sup\{\tau_n: n \in \N_+\}\). Note that \(\tau\) exists in \(T_\infty\) and is a random time. For \(t \in T\), \(\{\tau \le t\} = \bigcap_{n=1}^\infty \{\tau_n \le t\}\). But each event in the intersection is in \(\ms F_t\) and hence so is the intersection.

Suppose that \( \mf F = \{\ms F_t: t \in T\} \) is a filtration on \( (\Omega, \ms F) \), and that \((\tau_n: n \in \N_+)\) is an increasing sequence of stopping times relative to \( \mf F \). Then \(\lim_{n \to \infty} \tau_n\) is a stopping time relative to \( \mf F \).

Details:

This is a corollary of the previous result. Since the sequence is increasing, \(\lim_{n \to \infty} \tau_n = \sup\{\tau_n: n \in \N_+\}\).

Suppose that \( T = [0, \infty) \) and that \( \mf F = \{\ms F_t: t \in T\} \) is a filtration on \( (\Omega, \ms F) \). If \((\tau_n: n \in \N_+)\) is a sequence of stopping times relative to \( \mf F \), then each of the following is a stopping time relative to \( \mf F_+ \):

  1. \(\inf\left\{\tau_n: n \in \N_+\right\}\)
  2. \(\liminf_{n \to \infty} \tau_n\)
  3. \(\limsup_{n \to \infty} \tau_n\)
Details:
  1. Let \(\tau = \inf\left\{\tau_n: n \in \N_+\right\}\). Then \(\{\tau \ge t\} = \bigcap_{n=1}^\infty\{\tau_n \ge t\} \in \ms F_t\) for \(t \in T\), and hence \(\{\tau \lt t\} \in \ms F_t\) for \(t \in T\), which is precisely the condition for \(\tau\) to be a stopping time relative to \( \mf F_+ \).
  2. Recall that \(\liminf_{n \to \infty} \tau_n = \sup\left\{\inf\{\tau_k: k \ge n\}: n \in \N_+\right\}\) and so this is a stopping time relative to \( \mf F_+ \) by part (a) and the result above on the supremum of a sequence of stopping times.
  3. Similarly note that \(\limsup_{n \to \infty} \tau_n = \inf\left\{\sup\{\tau_k: k \ge n\}: n \in \N_+\right\}\) and so this is a stopping time relative to \( \mf F_+ \) by part (a) and the result above on the supremum of a sequence of stopping times.

As a simple corollary, we have the following result:

Suppose that \( T = [0, \infty) \) and that \( \mf F = \{\ms F_t: t \in T\} \) is a right-continuous filtration on \( (\Omega, \ms F) \). If \((\tau_n: n \in \N_+)\) is a sequence of stopping times relative to \( \mf F \), then each of the following is also a stopping time relative to \( \mf F \):

  1. \(\inf\left\{\tau_n: n \in \N_+\right\}\)
  2. \(\liminf_{n \to \infty} \tau_n\)
  3. \(\limsup_{n \to \infty} \tau_n\)

The \(\sigma\)-Algebra of a Stopping Time

Consider again the general setting of a filtration \(\mf F = \{\ms F_t: t \in T\}\) on the sample space \((\Omega, \ms F)\), and suppose that \(\tau\) is a stopping time relative to \( \mf F \). We want to define the \(\sigma\)-algebra \(\ms F_\tau\) of events up to the random time \(\tau\), analogous to \(\ms F_t\), the \( \sigma \)-algebra of events up to a fixed time \(t \in T\). Here is the appropriate definition:

Suppose that \( \mf F = \{\ms F_t: t \in T\} \) is a filtration on \( (\Omega, \ms F) \) and that \( \tau \) is a stopping time relative to \( \mf F \). Define \( \ms F_\tau = \left\{A \in \ms F: A \cap \{\tau \le t\} \in \ms F_t \text{ for all } t \in T\right\} \). Then \( \ms F_\tau \) is a \( \sigma \)-algebra.

Details:

First \(\Omega \in \ms F_\tau\) since \(\Omega \cap \{\tau \le t\} = \{\tau \le t\} \in \ms F_t\) for \(t \in T\). If \(A \in \ms F_\tau\) then \(A^c \cap \{\tau \le t\} = \{\tau \le t \} \setminus \left(A \cap \{\tau \le t\}\right) \in \ms F_t\) for \(t \in T\). Finally, suppose that \(A_i \in \ms F_\tau\) for \(i\) in a countable index set \(I\). Then \(\left(\bigcup_{i \in I} A_i\right) \cap \{\tau \le t\} = \bigcup_{i \in I} \left(A_i \cap \{\tau \le t\}\right) \in \ms F_t\) for \(t \in T\).

Thus, an event \(A\) is in \(\ms F_\tau\) if, for each \(t \in T\), we can determine whether \(A\) and \(\{\tau \le t\}\) both occurred given our information at time \(t\). If \(\tau\) is constant, then \(\ms F_\tau\) reduces to the corresponding member of the original filtration, as clearly should be the case; this is additional motivation for the definition.

Suppose again that \( \mf F = \{\ms F_t: t \in T\} \) is a filtration on \( (\Omega, \ms F) \). Fix \(s \in T\) and define \(\tau(\omega) = s\) for all \(\omega \in \Omega\). Then \(\ms F_\tau = \ms F_s\).

Details:

Suppose that \(A \in \ms F_s\). Then \(A \in \ms F\) and for \(t \in T\), \(A \cap \{\tau \le t\} = A\) if \(s \le t\) and \(A \cap \{\tau \le t\} = \emptyset\) if \(s \gt t\). In either case, \(A \cap \{\tau \le t\} \in \ms F_t\) and hence \(A \in \ms F_\tau\). Conversely, suppose that \(A \in \ms F_\tau\). Then \(A = A \cap \{\tau \le s\} \in \ms F_s\).

Clearly, if we have the information available in \(\ms F_\tau\), then we should know the value of \(\tau\) itself. This is indeed the case:

Suppose again that \( \mf F = \{\ms F_t: t \in T\} \) is a filtration on \( (\Omega, \ms F) \) and that \( \tau \) is a stopping time relative to \( \mf F \). Then \(\tau\) is measurable with respect to \(\ms F_\tau\).

Details:

It suffices to show that \(\{\tau \le s\} \in \ms F_\tau\) for each \(s \in T\). For \( s, \, t \in T \), \[\{\tau \le t\} \cap \{\tau \le s\} = \{\tau \le s \wedge t\} \in \ms F_{s \wedge t} \subseteq \ms F_t\] Hence \(\{\tau \le s\} \in \ms F_\tau\).

Here are other results that relate the \( \sigma \)-algebra of a stopping time to the original filtration.

Suppose again that \( \mf F = \{\ms F_t: t \in T\} \) is a filtration on \( (\Omega, \ms F) \) and that \( \tau \) is a stopping time relative to \( \mf F \). If \(A \in \ms F_\tau\) then for \(t \in T\),

  1. \(A \cap \{\tau \lt t\} \in \ms F_t\)
  2. \(A \cap \{\tau = t\} \in \ms F_t\)
Details:
  1. By definition, \(A \cap \{\tau \le t\} \in \ms F_t\). But \(\{\tau \lt t\} \subseteq \{\tau \le t\}\) and \(\{\tau \lt t\} \in \ms F_t\). Hence \(A \cap \{\tau \lt t\} = A \cap \{\tau \le t\} \cap \{\tau \lt t\} \in \ms F_t\).
  2. Similarly, \(\{\tau = t\} \subseteq \{\tau \le t\}\) and \(\{\tau = t\} \in \ms F_t\). Hence \(A \cap \{\tau = t\} = A \cap \{\tau \le t\} \cap \{\tau = t\} \in \ms F_t\).

The \( \sigma \)-algebra of a stopping time relative to a filtration is related to the \( \sigma \)-algebra of the stopping time relative to a finer filtration in the natural way.

Suppose that \( \mf F = \{\ms F_t: t \in T\} \) and \( \mf G = \{\ms G_t: t \in T\} \) are filtrations on \( (\Omega, \ms F) \) and that \( \mf G \) is finer than \( \mf F \). If \( \tau \) is a stopping time relative to \( \mf F \) then \( \ms F_\tau \subseteq \ms G_\tau \).

Details:

From the result above, \( \tau \) is also a stopping time relative to \( \mf G \), so the statement makes sense. If \( A \in \ms F_\tau \) then for \( t \in T \), \( A \cap \{\tau \le t\} \in \ms F_t \subseteq \ms G_t \), so \( A \in \ms G_\tau \).

When two stopping times are ordered, their \( \sigma \)-algebras are also ordered.

Suppose that \( \mf F = \{\ms F_t: t \in T\} \) is a filtration on \( (\Omega, \ms F) \) and that \(\rho\) and \(\tau\) are stopping times for \( \mf F \) with \(\rho \le\tau\). Then \(\ms F_\rho \subseteq \ms F_\tau\).

Details:

Suppose that \(A \in \ms F_\rho\) and \(t \in T\). Note that \(\{\tau \le t\} \subseteq \{\rho \le t\}\). By definition, \(A \cap \{\rho \le t\} \in \ms F_t\) and \(\{\tau \le t\} \in \ms F_t \). Hence \(A \cap \{\tau \le t\} = A \cap \{\rho \le t\} \cap \{\tau \le t\} \in \ms F_t\), so \(A \in \ms F_\tau\).

Suppose again that \( \mf F = \{\ms F_t: t \in T\} \) is a filtration on \( (\Omega, \ms F) \), and that \(\rho\) and \(\tau\) are stopping times for \( \mf F \). Then each of the following events is in \(\ms F_\tau\) and in \(\ms F_\rho\).

  1. \(\{\rho \lt \tau\}\)
  2. \(\{\rho = \tau\}\)
  3. \(\{\rho \gt \tau\}\)
  4. \(\{\rho \le \tau\}\)
  5. \(\{\rho \ge \tau\}\)
Details:

The proofs are easy when \(T = \N\). In each part we show that the event is in \(\ms F_\tau\); membership in \(\ms F_\rho\) follows from similar arguments, with the roles of \(\rho\) and \(\tau\) interchanged.

  1. Let \(t \in T\). Then \[\{\rho \lt \tau\} \cap \{\tau \le t\} = \bigcup_{n=0}^t \bigcup_{k=0}^{n-1} \{\tau = n, \rho = k\}\] But each event in the union is in \(\ms F_t\).
  2. Similarly, let \(t \in T\). Then \[ \{\rho = \tau\} \cap \{\tau \le t\} = \bigcup_{n=0}^t \{\rho = n, \tau = n\} \] and again each event in the union is in \(\ms F_t\).
  3. This follows from symmetry, reversing the roles of \(\rho\) and \(\tau\) in part (a).
  4. Note that \(\{\rho \le \tau\} = \{\rho \lt \tau\} \cup \{\rho = \tau\} \in \ms F_\tau\).
  5. Similarly, note that \(\{\rho \ge \tau\} = \{\rho \gt \tau\} \cup \{\rho = \tau\} \in \ms F_\tau\).

We can stop a filtration at a stopping time. In the next subsection, we will stop a stochastic process in the same way.

Suppose again that \( \mf F = \{\ms F_t: t \in T\} \) is a filtration on \( (\Omega, \ms F) \), and that \(\tau\) is a stopping time for \( \mf F \). For \( t \in T \) define \( \ms F^\tau_t = \ms F_{t \wedge \tau} \). Then \( \mf F^\tau = \{\ms F^\tau_t: t \in T\} \) is a filtration and is coarser than \( \mf F \).

Details:

The random time \( t \wedge \tau \) is a stopping time for each \( t \in T \) by the result above, so \( \ms F^\tau_t \) is a sub \( \sigma \)-algebra of \( \ms F \). If \( t \in T \), then by definition, \( A \in \ms F^\tau_t \) if and only if \( A \cap \{t \wedge \tau \le r\} \in \ms F_r \) for every \( r \in T \). But for \( r \in T \), \( \{t \wedge \tau \le r\} = \Omega \) if \( r \ge t \) and \( \{t \wedge \tau \le r\} = \{\tau \le r\} \) if \( r \lt t \). Hence \( A \in \ms F^\tau_t \) if and only if \( A \cap \{\tau \le r\} \in \ms F_r \) for \( r \lt t \) and \( A \in \ms F_t \). So in particular, \( \mf F^\tau \) is coarser than \( \mf F \). Further, suppose \( s, \, t \in T \) with \( s \le t \), and that \( A \in \ms F^\tau_s \). Let \( r \in T \). If \( r \lt s \) then \( A \cap \{\tau \le r\} \in \ms F_r \). If \( s \le r \lt t \) then \( A \in \ms F_s \subseteq \ms F_r \) and \( \{\tau \le r\} \in \ms F_r \) so again \( A \cap \{\tau \le r\} \in \ms F_r \). Finally if \( r \ge t \) then \( A \in \ms F_s \subseteq \ms F_t \). Hence \( A \in \ms F^\tau_t \).

Stochastic Processes

As usual, the most common setting is when we have a stochastic process \( \bs X = \{X_t: t \in T\} \) defined on our sample space \( (\Omega, \ms F) \) and with state space \( (S, \ms S) \). If \( \tau \) is a random time, we are often interested in the state \( X_\tau \) at the random time. But there are two issues. First, \( \tau \) may take the value infinity, in which case \( X_\tau \) is not defined. The usual solution is to introduce a new death state \( \delta \), and define \( X_\infty = \delta \). The \( \sigma \)-algebra \( \ms S \) on \( S \) is extended to \( S_\delta = S \cup \{\delta\} \) in the natural way, namely \( \ms S_\delta = \ms S \cup \{A \cup \{\delta\}: A \in \ms S\} \).

Our other problem is that we naturally expect \( X_\tau \) to be a random variable (that is, measurable), just as \( X_t \) is a random variable for a deterministic \( t \in T \). Moreover, if \( \bs X \) is adapted to a filtration \( \mf F = \{\ms F_t: t \in T\} \), then we would naturally also expect \( X_\tau \) to be measurable with respect to \( \ms F_\tau \), just as \( X_t \) is measurable with respect to \( \ms F_t \) for deterministic \( t \in T \). But this is not obvious, and in fact is not true without additional assumptions. Note that \( X_\tau \) is a random state at a random time, and so depends on an outcome \( \omega \in \Omega \) in two ways: \(X_{\tau(\omega)}(\omega)\).

Suppose that \( \bs X = \{X_t: t \in T\} \) is a stochastic process on the sample space \( (\Omega, \ms F) \) with state space \( (S, \ms S) \), and that \( \bs X \) is measurable. If \( \tau \) is a finite random time, then \( X_\tau \) is measurable. That is, \( X_\tau \) is a random variable with values in \( S \).

Details:

Note that \( X_\tau: \Omega \to S \) is the composition of the function \( \omega \mapsto (\omega, \tau(\omega)) \) from \( \Omega \) to \( \Omega \times T\) with the function \((\omega, t) \mapsto X_t(\omega) \) from \( \Omega \times T \) to \( S \). The first function is measurable because the two coordinate functions are measurable. The second function is measurable by assumption.

This result is one of the main reasons for the definition of a measurable process in the first place. Sometimes we literally want to stop the random process at a random time \( \tau \). As you might guess, this is the origin of the term stopping time.

Suppose again that \( \bs X = \{X_t: t \in T\} \) is a stochastic process on the sample space \( (\Omega, \ms F) \) with state space \( (S, \ms S) \), and that \( \bs X \) is measurable. If \( \tau \) is a random time, then the process \( \bs X^\tau = \{X^\tau_t: t \in T\} \) defined by \( X^\tau_t = X_{t \wedge \tau} \) for \( t \in T \) is the process \( \bs X \) stopped at \( \tau \).

Details:

For each \( t \in T \), note that \( t \wedge \tau \) is a finite random time, and hence \( X_{t \wedge \tau} \) is measurable by the previous result. Thus \( \bs X^\tau \) is a well-defined stochastic process on \( (\Omega, \ms F) \) with state space \( (S, \ms S) \).
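In discrete time the stopped process is easy to compute path by path. The sketch below is illustrative only (the helper names are hypothetical): it freezes a simulated random walk at the first time the walk reaches a given level, so that \( X^\tau_t = X_{t \wedge \tau} \).

```python
import random

def stopped_path(path, tau):
    """The stopped path t -> X_{t ^ tau}; tau = None is treated as infinity."""
    if tau is None:
        return list(path)
    return [path[min(t, tau)] for t in range(len(path))]

rng = random.Random(7)
path = [0]
for _ in range(50):
    path.append(path[-1] + rng.choice([-1, 1]))

# first time the walk reaches level 2 (None if it never does)
tau = next((n for n, x in enumerate(path) if x == 2), None)
frozen = stopped_path(path, tau)
# frozen agrees with path up to time tau, and is constant afterwards
```

The stopped path follows the original path until time \( \tau \) and then remains at the state \( X_\tau \) forever, which is exactly the behavior of \( \bs X^\tau \).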

When the original process is progressively measurable, so is the stopped process.

Suppose again that \( \bs X = \{X_t: t \in T\} \) is a stochastic process on the sample space \( (\Omega, \ms F) \) with state space \( (S, \ms S) \), and that \( \bs X \) is progressively measurable with respect to a filtration \( \mf F = \{\ms F_t: t \in T\} \). If \( \tau \) is a stopping time relative to \( \mf F \), then the stopped process \( \bs X^\tau = \{X^\tau_t: t \in T\} \) is progressively measurable with respect to the stopped filtration \( \mf F^\tau \).

Since \( \mf F \) is finer than \( \mf F^\tau \), it follows that \( \bs X^\tau \) is also progressively measurable with respect to \( \mf F \).

Suppose again that \( \bs X = \{X_t: t \in T\} \) is a stochastic process on the sample space \( (\Omega, \ms F) \) with state space \( (S, \ms S) \), and that \( \bs X \) is progressively measurable with respect to a filtration \( \mf F = \{\ms F_t: t \in T\} \) on \( (\Omega, \ms F) \). If \( \tau \) is a finite stopping time relative to \( \mf F \) then \( X_\tau \) is measurable with respect to \( \ms F_\tau \).

For many random processes, the first time that the process enters or hits a set of states is particularly important. In the discussion that follows, let \( T_+ = \{t \in T: t \gt 0\} \), the set of positive times.

Suppose that \( \bs X = \{X_t: t \in T\} \) is a stochastic process on \( (\Omega, \ms F) \) with state space \( (S, \ms S) \). For \( A \in \ms S \), define

  1. \( \rho_A = \inf\{t \in T: X_t \in A\} \), the first entry time to \( A \).
  2. \( \tau_A = \inf\{t \in T_+: X_t \in A\} \), the first hitting time to \( A \).

As usual, \( \inf(\emptyset) = \infty \) so \(\rho_A = \infty\) if \(X_t \notin A\) for all \(t \in T\), so that the process never enters \(A\), and \( \tau_A = \infty \) if \( X_t \notin A \) for all \( t \in T_+ \), so that the process never hits \( A \). In discrete time, it's easy to see that these are stopping times.

Suppose that \( \{X_n: n \in \N\} \) is a stochastic process on \( (\Omega, \ms F) \) with state space \( (S, \ms S) \). If \( A \in \ms S \) then \(\tau_A\) and \( \rho_A \) are stopping times relative to the natural filtration \( \mf F^0 \).

Details:

Let \( n \in \N \). Note that \(\{\rho_A \gt n\} = \{X_0 \notin A, X_1 \notin A, \ldots, X_n \notin A\} \in \sigma\{X_0, X_1, \ldots, X_n\}\). Similarly, \( \{\tau_A \gt n\} = \{X_1 \notin A, X_2 \notin A, \ldots, X_n \notin A \} \in \sigma\{X_0, X_1, \ldots, X_n\}\).
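The distinction between the entry time \( \rho_A \) (which checks time 0) and the hitting time \( \tau_A \) (which does not) is easy to see in a simulation. The Python sketch below is illustrative, with hypothetical function names: since the walk starts in \( A = \{0\} \), the entry time is 0 while the hitting time is the first return to 0.

```python
import random

def first_entry_time(path, A):
    """rho_A = inf{n in N : X_n in A}; None stands for infinity."""
    return next((n for n, x in enumerate(path) if x in A), None)

def first_hitting_time(path, A):
    """tau_A = inf{n >= 1 : X_n in A}; None stands for infinity."""
    return next((n for n, x in enumerate(path) if n >= 1 and x in A), None)

rng = random.Random(11)
path = [0]
for _ in range(100):
    path.append(path[-1] + rng.choice([-1, 1]))

A = {0}
rho = first_entry_time(path, A)    # 0, since the walk starts in A
tau = first_hitting_time(path, A)  # first return to 0 (None if no return)
```

Note that \( \rho_A = 0 \) here because the initial state is in \( A \), while \( \tau_A \) is necessarily even and at least 2 for a simple walk returning to its starting point.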

So of course in discrete time, \( \tau_A \) and \( \rho_A \) are stopping times relative to any filtration \( \mf F \) to which \( \bs X \) is adapted. You might think that \(\tau_A\) and \( \rho_A \) should always be stopping times, since \(\tau_A \le t\) if and only if \(X_s \in A\) for some \( s \in T_+ \) with \(s \le t\), and \( \rho_A \le t \) if and only if \( X_s \in A \) for some \( s \in T \) with \( s \le t \). It would seem that these events are known if one is allowed to observe the process up to time \(t\). The problem is that when \(T = [0, \infty)\), these are uncountable unions, so we need to make additional assumptions on the stochastic process \( \bs X \) or the filtration \( \mf F \), or both.

Suppose that \( S \) has an LCCB topology, and that \( \ms S \) is the \( \sigma \)-algebra of Borel sets. Suppose also that \( \bs X = \{X_t: t \in [0, \infty)\} \) is right continuous and has left limits. Then \( \tau_A \) and \( \rho_A \) are stopping times relative to \( \mf F^0_+ \) for every open \( A \in \ms S \).

Here is another result that requires less of the stochastic process \( \bs X \), but more of the filtration \( \mf F \).

Suppose that \( \bs X = \{X_t: t \in [0, \infty)\} \) is a stochastic process on \( (\Omega, \ms F) \) that is progressively measurable relative to a complete, right-continuous filtration \( \mf F = \{\ms F_t: t \in [0, \infty)\} \). If \( A \in \ms S \) then \( \rho_A \) and \( \tau_A \) are stopping times relative to \( \mf F \).