\(\newcommand{\P}{\mathbb{P}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\R}{\mathbb{R}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\Z}{\mathbb{Z}}\) \(\newcommand{\bs}{\boldsymbol}\) \( \newcommand{\cov}{\text{cov}} \) \( \newcommand{\var}{\text{var}} \) \( \newcommand{\sd}{\text{sd}} \)

3. Second Order and Gaussian Processes

Second Order Processes

Suppose that \( \bs{X} = \{X_t: t \in T\} \) is a random process with state space \(S \subseteq \R \) and index set \( T \), defined on a probability space \( (\Omega, \mathscr{F}, \P) \). Typically, the state space \( S \) is either countable, so that \( X_t \) has a discrete distribution for \( t \in T \), or an interval of \( \R \) with \( X_t \) having a continuous distribution for \( t \in T \). Typically also, the time space \( T \) is either \( \N \) (so that we have discrete time) or \( [0, \infty) \) (so that we have continuous time).

The process \( \bs{X} \) is a second order process if \( \E\left(X_t^2\right) \lt \infty \) for every \( t \in T \).

Suppose that \( \bs{X} = \{X_t: t \in T\} \) is a second order process. From basic properties of higher moments, \( \E\left(\left|X_t\right|\right) \lt \infty \) for \( t \in T \) and \( \E\left(\left|X_s X_t\right|\right) \lt \infty \) for every \( (s, t) \in T^2 \). Thus we can define \( m(t) = \E(X_t) \) for \( t \in T \) and \( c(s, t) = \cov(X_s, X_t) \) for \( (s, t) \in T^2 \), so that \( m: T \to \R \) and \( c: T^2 \to \R \). Appropriately enough, \( m \) is called the mean function and \( c \) the covariance function of \( \bs{X} \). In general of course, these functions do not determine the finite dimensional distributions of \( \bs{X} \), but they are nonetheless important. In some applications, the mean and covariance functions may be known, at least approximately, when the finite dimensional distributions are not.
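To make the definitions concrete, here is a minimal simulation sketch (the particular process is an illustrative assumption, not part of the text): a discrete-time random walk with independent standard normal steps is a second order process with \( m(n) = 0 \) and \( c(m, n) = \min\{m, n\} \), and both functions can be estimated empirically from simulated paths.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative second order process (an assumption for this sketch): a
# discrete-time random walk X_n = Z_1 + ... + Z_n with independent standard
# normal steps, for which m(n) = 0 and c(m, n) = min(m, n).
n_steps, n_paths = 20, 100_000
steps = rng.normal(size=(n_paths, n_steps))
X = steps.cumsum(axis=1)                  # X[:, n-1] holds X_n for each path

m_hat = X.mean(axis=0)                    # empirical mean function m(1), ..., m(20)
c_hat = np.cov(X[:, 4], X[:, 9])[0, 1]    # empirical estimate of c(5, 10)

print(m_hat[:5])    # all close to 0
print(c_hat)        # close to min(5, 10) = 5
```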

The mean and covariance functions determine the mean, variance, and covariance of linear combinations of the variables.

Suppose that \( (t_1, t_2, \ldots, t_n) \in T^n \), \( (a_1, a_2, \ldots, a_n) \in \R^n\), \( (s_1, s_2, \ldots, s_m) \in T^m \), and \( (b_1, b_2, \ldots, b_m) \in \R^m \). Then

  1. \( \E \left(\sum_{i=1}^n a_i X_{t_i}\right) = \sum_{i=1}^n a_i m(t_i) \)
  2. \( \cov \left(\sum_{i=1}^n a_i X_{t_i}, \sum_{j=1}^m b_j X_{s_j}\right) = \sum_{i=1}^n \sum_{j=1}^m a_i b_j c(t_i, s_j) \)
  3. \( \var \left(\sum_{i=1}^n a_i X_{t_i} \right) = \sum_{i=1}^n \sum_{j=1}^n a_i a_j c(t_i, t_j) \)
Proof:

These follow from basic properties of expected value, variance, and covariance, in particular the linearity of expected value and the bilinearity of covariance.
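As a small numerical illustration of these formulas, the mean and variance of a linear combination can be computed directly from \( m \) and \( c \). The particular mean and covariance functions below are illustrative assumptions, not from the text.

```python
import numpy as np

# Illustrative choices: m(t) = 0 and c(s, t) = exp(-|s - t|)
def m(t):
    return 0.0

def c(s, t):
    return np.exp(-abs(s - t))

a = np.array([1.0, -2.0, 0.5])     # coefficients a_1, ..., a_n
t = np.array([0.0, 1.0, 2.5])      # time points t_1, ..., t_n

# Part 1: mean of the linear combination sum_i a_i X_{t_i}
mean = sum(a[i] * m(t[i]) for i in range(len(t)))

# Part 3: variance as the double sum over the covariance function
var = sum(a[i] * a[j] * c(t[i], t[j])
          for i in range(len(t)) for j in range(len(t)))

print(mean, var)   # note that the variance is automatically nonnegative
```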

Clearly the covariance function is symmetric: \( c(s, t) = c(t, s) \) for \( (s, t) \in T^2 \). From part 3 of the previous theorem, \( c \) is also nonnegative definite: that is, if \( n \in \N_+ \), \( (t_1, t_2, \ldots, t_n) \in T^n \), and \( (a_1, a_2, \ldots, a_n) \in \R^n \), then \[ \sum_{i=1}^n \sum_{j=1}^n a_i a_j c(t_i, t_j) \ge 0 \] We will see below that the converse is also true: any symmetric, nonnegative definite function on \( T^2 \) is the covariance function of some second order process.
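Equivalently, for any finite set of time points the matrix of covariances \( \left[c(t_i, t_j)\right] \) is symmetric and positive semidefinite, which is easy to check numerically. The sketch below continues with the illustrative covariance function from the previous sketch.

```python
import numpy as np

def c(s, t):
    # Same illustrative covariance function as in the previous sketch
    return np.exp(-abs(s - t))

t = [0.0, 0.3, 1.0, 2.5, 4.0]
C = np.array([[c(s, u) for u in t] for s in t])   # matrix of covariances

eigvals = np.linalg.eigvalsh(C)          # eigenvalues of the symmetric matrix
print(np.allclose(C, C.T))               # symmetric
print(eigvals.min() >= -1e-10)           # nonnegative definite (up to rounding)
```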

Gaussian Processes

Suppose that \( \bs{X} = \{X_t: t \in T\} \) is a random process with state space \( \R \). Then \( \bs{X} \) is said to be a Gaussian process if all of the finite dimensional distributions are normal. That is, if \( t_1, t_2, \ldots, t_n \in T \) are distinct then \( (X_{t_1}, X_{t_2}, \ldots, X_{t_n}) \) has an \( n \)-dimensional multivariate normal distribution.

Gaussian processes are important in part because of the fundamental importance of the normal distribution, but also because they are simple to describe and have a number of nice mathematical properties. We use the general meaning of the term multivariate normal distribution:

\( \bs{X} = \{X_t: t \in T\} \) is a Gaussian process if and only if all finite linear combinations of the variables have normal distributions. That is, \( \bs{X} \) is a Gaussian process if and only if for distinct \( t_1, t_2, \ldots, t_n \in T \) and \( c_1, c_2, \ldots, c_n \in \R \), the random variable \( c_1 X_{t_1} + c_2 X_{t_2} + \cdots + c_n X_{t_n} \) has a (univariate) normal distribution.

Suppose that \( \bs{X} = \{X_t: t \in T\} \) is a Gaussian process. Clearly \( \bs{X} \) is also a second order process, so the mean function \( m \) and the covariance function \( c \) are well defined. In this case, the two functions do characterize the process.

The mean function \( m \) and the covariance function \( c \) determine the finite dimensional distributions of \( \bs{X} \).

Proof:

Suppose that \( t_1, t_2, \ldots, t_n \in T \) are distinct. Since the distribution of \( \left(X_{t_1}, X_{t_2}, \ldots, X_{t_n}\right) \) is multivariate normal, this distribution is determined by the vector of means and the matrix of covariances. The \( i \)th entry of the vector of means is \( m(t_i) \) and the \( (i, j) \)th entry of the matrix of covariances is \( c(t_i, t_j) \).
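In practice this means that for a Gaussian process, any finite dimensional distribution can be sampled directly from the vector of means and the matrix of covariances. Here is a minimal sketch, with an illustrative (assumed) choice of mean and covariance functions.

```python
import numpy as np

# Illustrative choices for this sketch: m(t) = 0 and the squared exponential
# covariance c(s, t) = exp(-(s - t)^2 / 2), which is nonnegative definite.
def m(t):
    return 0.0

def c(s, t):
    return np.exp(-((s - t) ** 2) / 2)

rng = np.random.default_rng(0)
t = [0.5, 1.0, 2.0, 3.0]                                   # distinct time points
mean_vec = np.array([m(ti) for ti in t])                   # vector of means
cov_mat = np.array([[c(si, tj) for tj in t] for si in t])  # matrix of covariances

# Draw samples of (X_{t_1}, ..., X_{t_n}) from the multivariate normal distribution
samples = rng.multivariate_normal(mean_vec, cov_mat, size=5)
print(samples)
```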

The following exercise gives a simple but interesting example of a Gaussian process.

Suppose that \(Z\) and \(W\) are independent random variables, each having the normal distribution with mean 0 and variance \(\sigma^2 \gt 0\). Define \[ X_t = W \cos(t) + Z \sin(t), \quad t \in [0, \infty) \]

  1. Verify that \( \bs{X} = \{X_t: t \in [0, \infty)\} \) is a Gaussian process.
  2. Find the mean function.
  3. Find the covariance function.
Answer:
  1. \( \bs{X} \) is a Gaussian process since any linear combination of the variables reduces to a linear combination of \( W \) and \( Z \), which has a normal distribution because \( (W, Z) \) has a bivariate normal distribution.
  2. \( m(t) = 0 \) for \( t \in [0, \infty) \)
  3. \( c(s, t) = \sigma^2 \cos(t - s) \) for \( s, \, t \in [0, \infty) \)
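As a numerical sanity check of these answers (a simulation sketch, not part of the exercise), the empirical mean and covariance of the simulated process should match \( m(t) = 0 \) and \( c(s, t) = \sigma^2 \cos(t - s) \).

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0                    # illustrative value of the common standard deviation
n_runs = 200_000

# Independent normal variables W and Z, each with mean 0 and variance sigma^2
W = rng.normal(0.0, sigma, size=n_runs)
Z = rng.normal(0.0, sigma, size=n_runs)

def X(t):
    # X_t = W cos(t) + Z sin(t), evaluated for every simulated pair (W, Z)
    return W * np.cos(t) + Z * np.sin(t)

s, t = 0.7, 2.0
print(X(t).mean())                 # close to m(t) = 0
print(np.cov(X(s), X(t))[0, 1])    # close to sigma^2 * cos(t - s)
print(sigma ** 2 * np.cos(t - s))
```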