\(\newcommand{\P}{\mathbb{P}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\R}{\mathbb{R}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\Z}{\mathbb{Z}}\) \(\newcommand{\bs}{\boldsymbol}\)

20. Chains Subordinate to the Poisson Process

Basic Theory

Introduction

Recall that the standard Poisson process with rate parameter \( r \in (0, \infty) \) involves three interrelated stochastic processes. First, the sequence of interarrival times \(\bs{T} = (T_1, T_2, \ldots) \) is independent, and each variable has the exponential distribution with parameter \( r \). Next, the sequence of arrival times \( \bs{\tau} = (\tau_0, \tau_1, \ldots) \) is the partial sum sequence associated with the interarrival sequence \( \bs{T} \): \[ \tau_n = \sum_{i=1}^n T_i, \quad n \in \N \] For \( n \in \N_+ \), the arrival time \( \tau_n \) has the gamma distribution with parameters \( n \) and \( r \). Finally, the Poisson counting process \( \bs{N} = \{N_t: t \in [0, \infty)\} \) is defined by \[ N_t = \max\{n \in \N: \tau_n \le t\}, \quad t \in [0, \infty) \] so that \( N_t \) is the number of arrivals in \( (0, t] \) for \( t \in [0, \infty) \). The counting variable \( N_t \) has the Poisson distribution with parameter \( r t \) for \( t \in [0, \infty) \). The counting process \( \bs{N} \) and the arrival time process \( \bs{\tau} \) are inverses in the sense that \( \tau_n \le t \) if and only if \( N_t \ge n \) for \( t \in [0, \infty) \) and \( n \in \N \). The Poisson counting process can be viewed as a continuous-time Markov chain.
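To make the definitions concrete, here is a minimal simulation sketch in Python with NumPy (the language we will use for all of the computational examples in this section); the rate \( r = 2 \), the horizon \( t = 5 \), and the cap of 100 arrivals are arbitrary illustrative choices, not part of the theory.

```python
import numpy as np

rng = np.random.default_rng(17)
r, t = 2.0, 5.0  # rate parameter and time horizon (arbitrary choices)

# Interarrival times are i.i.d. exponential with parameter r;
# arrival times are their partial sums.
T = rng.exponential(scale=1 / r, size=100)  # 100 arrivals is ample for t = 5
tau = np.cumsum(T)

# N_t = max{n: tau_n <= t}, the number of arrivals in (0, t]
N_t = int(np.searchsorted(tau, t, side="right"))
print(N_t)  # a single draw from the Poisson distribution with parameter r*t
```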

Suppose that \( X_0 \) takes values in \( \N \) and is independent of \( \bs{N} \). Define \( X_t = X_0 + N_t \) for \( t \in [0, \infty) \). Then \( \bs{X} = \{X_t: t \in [0, \infty)\} \) is a continuous-time Markov chain on \( \N \) with exponential parameter function given by \( \lambda(x) = r \) for \( x \in \N \) and jump transition matrix \( Q \) given by \( Q(x, x + 1) = 1 \) for \( x \in \N \).

Details:

This follows directly from the basic structure of a continuous-time Markov chain. Given \( X_t = x \), the holding time in state \( x \in \N \) is exponential with parameter \( r \), and the next state is deterministically \( x + 1 \). Note that the addition of the variable \( X_0 \) is just to allow us the freedom of arbitrary initial distributions on the state space, as is routine with Markov processes.

Note that the Poisson process, viewed as a Markov chain, is a pure birth chain. Clearly we can generalize this continuous-time Markov chain in a simple way by allowing a general embedded jump chain.

Suppose that \( \bs{X} = \{X_t: t \in [0, \infty)\} \) is a Markov chain with (countable) state space \( S \), and with constant exponential parameter \( \lambda(x) = r \in (0, \infty) \) for \( x \in S \), and jump transition matrix \( Q \). Then \( \bs{X} \) is said to be subordinate to the Poisson process with rate parameter \( r \).

  1. The transition times \( (\tau_1, \tau_2, \ldots) \) are the arrival times of the Poisson process with rate \( r \).
  2. The inter-transition times \( (\tau_1, \tau_2 - \tau_1, \ldots) \) are the inter-arrival times of the Poisson process with rate \( r \) (independent, and each with the exponential distribution with rate \( r \)).
  3. \( \bs{N} = \{N_t: t \in [0, \infty)\} \) is the Poisson counting process, where \( N_t \) is the number of transitions in \( (0, t] \) for \( t \in [0, \infty) \).
  4. The Poisson process and the jump chain \( \bs{Y} = (Y_0, Y_1, \ldots) \) are independent, and \( X_t = Y_{N_t} \) for \( t \in [0, \infty) \).
Details:

These results all follow from the basic structure of a continuous-time Markov chain.

Since all states are stable, note that we must have \( Q(x, x) = 0 \) for \( x \in S \). Note also that for \( x, \, y \in S \) with \( x \ne y \), the exponential rate parameter for the transition from \( x \) to \( y \) is \( \mu(x, y) = r Q(x, y) \). Conversely, suppose that \( \mu: S^2 \to [0, \infty) \) satisfies \( \mu(x, x) = 0 \) and \( \sum_{y \in S} \mu(x, y) = r \) for every \( x \in S \). Then the Markov chain with transition rates given by \( \mu \) is subordinate to the Poisson process with rate \( r \). It's easy to construct a Markov chain subordinate to the Poisson process.

Suppose that \( \bs N = \{N_t: t \in [0, \infty)\} \) is a Poisson counting process with rate \( r \in (0, \infty) \) and that \( \bs Y = \{Y_n: n \in \N\} \) is a discrete-time Markov chain on \( S \), independent of \( \bs N \), whose transition matrix satisfies \( Q(x, x) = 0 \) for every \( x \in S \). Let \( X_t = Y_{N_t} \) for \( t \in [0, \infty) \). Then \( \bs X = \{X_t: t \in [0, \infty)\} \) is a continuous-time Markov chain subordinate to the Poisson process.
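The construction translates directly into simulation code. Here is a sketch, with a hypothetical three-state jump chain \( Q \) and rate \( r = 1.5 \) invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical jump transition matrix on S = {0, 1, 2} with zero diagonal,
# and an arbitrary Poisson rate r -- both invented for illustration.
Q = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.2, 0.8, 0.0]])
r = 1.5

def sample_path(x0, t_max):
    """Simulate X on [0, t_max]: transition times from the Poisson process,
    successive states from the independent jump chain Y."""
    times, states = [0.0], [x0]
    t, x = 0.0, x0
    while True:
        t += rng.exponential(scale=1 / r)  # next arrival of the Poisson process
        if t > t_max:
            return times, states
        x = int(rng.choice(3, p=Q[x]))     # next state of the jump chain
        times.append(t)
        states.append(x)

times, states = sample_path(x0=0, t_max=10.0)
print(list(zip(np.round(times, 3), states)))
```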

Generator and Transition Matrices

Next let's find the generator matrix and the transition semigroup. Suppose again that \( \bs{X} = \{X_t: t \in [0, \infty)\} \) is a continuous-time Markov chain on \( S \) subordinate to the Poisson process with rate \( r \in (0, \infty) \) and with jump transition matrix \( Q \). As usual, let \( \bs P = \{P_t: t \in [0, \infty)\} \) denote the transition semigroup and \( G \) the infinitesimal generator.

The generator matrix \( G \) of \( \bs{X} \) is \( G = r (Q - I) \). Hence for \( t \in [0, \infty) \)

  1. The Kolmogorov backward equation is \( P^\prime_t = r (Q - I) P_t \)
  2. The Kolmogorov forward equation is \( P^\prime_t = r P_t (Q - I) \)
Details:

This follows directly from the general theory since \( G(x, x) = -\lambda(x) = -r \) for \( x \in S \) and \( G(x, y) = \lambda(x) Q(x, y) = r Q(x, y) \) for distinct \( x, \, y \in S \).

There are several ways to find the transition semigroup \( \bs{P} = \{P_t: t \in [0, \infty)\} \). The best way is a probabilistic argument using the underlying Poisson process.

For \( t \in [0, \infty) \), the transition matrix \( P_t \) is given by \[ P_t = \sum_{n=0}^\infty e^{-r t} \frac{(r t)^n}{n!} Q^n \]

Details:

Our first proof is from the underlying Poisson process. Let \( N_t \) denote the number of transitions in \( (0, t] \) for \( t \in [0, \infty) \), so that \( \bs{N} = \{N_t: t \in [0, \infty)\} \) is the Poisson counting process. Let \( \bs{Y} = (Y_0, Y_1, \ldots) \) denote the jump chain, with transition matrix \( Q \). Then \( \bs{N} \) and \( \bs{Y} \) are independent, and \( X_t = Y_{N_t} \) for \( t \in [0, \infty) \). Conditioning on \( N_t \) we have \begin{align*} P_t(x, y) & = \P(X_t = y \mid X_0 = x) = \P\left(Y_{N_t} = y \mid Y_0 = x\right) \\ & = \sum_{n=0}^\infty \P\left(Y_{N_t} = y \mid N_t = n, Y_0 = x\right) \P(N_t = n \mid Y_0 = x) \\ & = \sum_{n=0}^\infty \P(Y_n = y \mid Y_0 = x) \P(N_t = n) = \sum_{n=0}^\infty e^{-r t} \frac{(r t)^n}{n!} Q^n(x, y) \end{align*}

Our second proof uses the generator matrix. Note first that for \( n \in \N \), \[ G^n = [r (Q - I)]^n = r^n \sum_{k = 0}^n \binom{n}{k}(-1)^{n-k} Q^k \] Hence \begin{align*} P_t & = e^{t G} = \sum_{n=0}^\infty \frac{t^n}{n!} G^n = \sum_{n=0}^\infty \frac{t^n}{n!} r^n \sum_{k=0}^n \binom{n}{k} (-1)^{n-k} Q^k \\ & = \sum_{n=0}^\infty \sum_{k=0}^n \frac{(r t)^n}{k! (n - k)!} (-1)^{n-k} Q^k = \sum_{k=0}^\infty \sum_{n=k}^\infty \frac{(r t)^n}{k! (n - k)!} (-1)^{n-k} Q^k \\ & = \sum_{k=0}^\infty \frac{(r t)^k}{k!} Q^k \sum_{n=k}^\infty \frac{1}{(n - k)!}(- r t)^{n-k} = \sum_{k=0}^\infty e^{-r t} \frac{(r t)^k}{k!} Q^k \end{align*}
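As a numerical sanity check, here is a sketch that truncates the series for \( P_t \) (with the same hypothetical \( Q \) and \( r \) as before) and compares the result to the matrix exponential \( e^{t G} \):

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import poisson

# Same hypothetical jump chain and rate as before
Q = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.2, 0.8, 0.0]])
r, t = 1.5, 2.0

# Truncate the series P_t = sum_n e^{-rt} (rt)^n / n! Q^n; with mean
# r*t = 3, the Poisson mass beyond n = 60 is negligible.
P_t = np.zeros_like(Q)
Qn = np.eye(3)
for n in range(60):
    P_t += poisson.pmf(n, r * t) * Qn
    Qn = Qn @ Q

G = r * (Q - np.eye(3))
print(np.allclose(P_t, expm(t * G)))  # True: the series agrees with e^{tG}
```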

Potential Matrices

Next let's find the potential matrices. As with the transition matrices, we can do this in (at least) two different ways.

Suppose again that \( \bs{X} = \{X_t: t \in [0, \infty)\} \) is a continuous-time Markov chain on \( S \) subordinate to the Poisson process with rate \( r \in (0, \infty) \) and with jump transition matrix \( Q \). For \( \alpha \in (0, \infty) \), the potential matrix \( U_\alpha \) of \( \bs{X} \) is \[ U_\alpha = \frac{1}{\alpha + r} \sum_{n=0}^\infty \left(\frac{r}{\alpha + r}\right)^n Q^n \]

Details:

Our first proof is from the definition of the potential matrix, using the series form of \( P_t \) above: \begin{align*} U_\alpha(x, y) & = \int_0^\infty e^{-\alpha t} P_t(x, y) \, dt = \int_0^\infty e^{-\alpha t} \sum_{n=0}^\infty e^{-r t} \frac{(r t)^n}{n!} Q^n(x, y) \, dt \\ & = \sum_{n=0}^\infty Q^n(x, y) \frac{r^n}{n!} \int_0^\infty e^{-(r + \alpha) t} t^n \, dt \end{align*} The interchange of sum and integral is justified since the terms are nonnegative. Using the change of variables \( s = (r + \alpha) t \) gives \[ U_\alpha(x, y) = \frac{1}{\alpha + r} \sum_{n=0}^\infty \left(\frac{r}{\alpha + r}\right)^n \frac{1}{n!} Q^n(x, y) \int_0^\infty e^{-s} s^n \, ds \] The last integral is \( n! \).

Our second proof uses the generator. Since \( G = r (Q - I) \), \[ \alpha I - G = \alpha I - r (Q - I) = (\alpha + r) I - r Q = (\alpha + r)\left(I - \frac{r}{\alpha + r} Q\right) \] Since \( \left\| \frac{r}{\alpha + r} Q \right\| = \frac{r}{\alpha + r} \lt 1 \) we have \[ (\alpha I - G)^{-1} = \frac{1}{\alpha + r}\left(I - \frac{r}{\alpha + r} Q\right)^{-1} = \frac{1}{\alpha + r} \sum_{n=0}^\infty \left(\frac{r}{\alpha + r}\right)^n Q^n \]
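Similarly, here is a sketch that truncates the series for \( U_\alpha \) and compares it to \( (\alpha I - G)^{-1} \), again with the hypothetical \( Q \) and an arbitrary \( \alpha \):

```python
import numpy as np

Q = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.2, 0.8, 0.0]])
r, alpha = 1.5, 0.8
G = r * (Q - np.eye(3))

# Truncate the series for U_alpha; the geometric weights p^n with
# p = r / (alpha + r) < 1 decay fast enough that 200 terms is ample.
p = r / (alpha + r)
U = np.zeros_like(Q)
Qn = np.eye(3)
for n in range(200):
    U += p**n * Qn
    Qn = Qn @ Q
U /= alpha + r

print(np.allclose(U, np.linalg.inv(alpha * np.eye(3) - G)))  # True
```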

Recall that for \( p \in (0, 1) \), the \( p \)-potential matrix of the jump chain \( \bs{Y} \) is \( R_p = \sum_{n=0}^\infty p^n Q^n \). Hence we have the following nice relationship between the potential matrix of \( \bs{X} \) and the potential matrix of \( \bs{Y} \): \[ U_\alpha = \frac{1}{\alpha + r} R_{r / (\alpha + r)} \] Next recall that \( \alpha U_\alpha(x, \cdot) \) is the probability density function of \( X_T \) given \( X_0 = x \), where \( T \) has the exponential distribution with parameter \( \alpha \) and is independent of \( \bs{X} \). On the other hand, \( \alpha U_\alpha(x, \cdot) = (1 - p) R_p(x, \cdot) \) where \( p = r \big/ (\alpha + r) \). We know from our study of discrete potentials that \( (1 - p) R_p(x, \cdot) \) is the probability density function of \( Y_M \) where \( M \) has the geometric distribution on \( \N \) with parameter \( 1 - p \) and is independent of \( \bs{Y} \). But also \( X_T = Y_{N_T} \). So it follows that if \( T \) has the exponential distribution with parameter \( \alpha \), \( \bs{N} = \{N_t: t \in [0, \infty)\} \) is a Poisson process with rate \( r \), and is independent of \( T \), then \( N_T \) has the geometric distribution on \( \N \) with parameter \( \alpha \big/ (\alpha + r) \). Of course, we could easily verify this directly, but it's still fun to see such connections.
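The geometric distribution of \( N_T \) is easy to check by simulation. In the sketch below (the rates \( r = 1.5 \) and \( \alpha = 0.8 \) are arbitrary), we sample \( T \) and then \( N_T \), and compare the empirical frequencies to the geometric probability density function:

```python
import numpy as np

rng = np.random.default_rng(7)
r, alpha = 1.5, 0.8  # arbitrary rates
n_samples = 100_000

# T ~ exponential(alpha); given T = t, N_T ~ Poisson(r t)
T = rng.exponential(scale=1 / alpha, size=n_samples)
N_T = rng.poisson(r * T)

# Compare with the geometric pmf on N with parameter alpha / (alpha + r)
p = r / (alpha + r)
for k in range(5):
    print(k, round(float(np.mean(N_T == k)), 4), round((1 - p) * p**k, 4))
```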

Limiting Behavior and Stationary Distributions

Once again, suppose that \( \bs{X} = \{X_t: t \in [0, \infty)\} \) is a continuous-time Markov chain on \( S \) subordinate to the Poisson process with rate \( r \in (0, \infty) \) and with jump transition matrix \( Q \). Let \( \bs{Y} = \{Y_n: n \in \N\} \) denote the jump process. The limiting behavior and stationary distributions of \( \bs{X} \) are closely related to those of \( \bs{Y} \).

Suppose that \( \bs{X} \) (and hence \( \bs{Y} \)) are irreducible. Then

  1. \( g: S \to (0, \infty) \) is invariant for \( \bs{X} \) if and only if \( g \) is invariant for \( \bs{Y} \).
  2. \( f \) is an invariant probability density function for \( \bs{X} \) if and only if \( f \) is an invariant probability density function for \( \bs{Y} \).
  3. \( \bs{X} \) is null recurrent if and only if \( \bs{Y} \) is null recurrent, and in this case, \( \lim_{n \to \infty} Q^n(x, y) = \lim_{t \to \infty} P_t(x, y) = 0 \) for \( (x, y) \in S^2 \).
  4. \( \bs{X} \) is positive recurrent if and only if \( \bs{Y} \) is positive recurrent. If \( \bs{Y} \) is aperiodic, then \( \lim_{n \to \infty} Q^n(x, y) = \lim_{t \to \infty} P_t(x, y) = f(y) \) for \( (x, y) \in S^2 \), where \( f \) is the invariant probability density function.
Details:

All of these results follow from the basic theory of stationary and limiting distributions for continuous-time chains, and the fact that the exponential parameter function \( \lambda \) is constant.
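Here is a quick numerical illustration of parts 1 and 2, again with the hypothetical three-state \( Q \) from earlier: the invariant probability density function \( f \) of the jump chain, computed as a left eigenvector of \( Q \), is also invariant for the transition matrices of \( \bs X \).

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.2, 0.8, 0.0]])
r = 1.5

# Invariant pdf of the jump chain: normalized left eigenvector of Q
# for the eigenvalue 1.
vals, vecs = np.linalg.eig(Q.T)
f = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
f /= f.sum()

# The same f is invariant for X: f P_t = f for every t (here t = 2).
P_t = expm(2.0 * r * (Q - np.eye(3)))
print(np.allclose(f @ Q, f), np.allclose(f @ P_t, f))  # True True
```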

Time Reversal

Once again, suppose that \( \bs{X} = \{X_t: t \in [0, \infty)\} \) is a continuous-time Markov chain on \( S \) subordinate to the Poisson process with rate \( r \in (0, \infty) \) and with jump transition matrix \( Q \). Let \( \bs{Y} = \{Y_n: n \in \N\} \) denote the jump process. We assume that \( \bs X \) (and hence \( \bs Y \)) are irreducible. The time reversal of \( \bs X \) is closely related to that of \( \bs Y \).

Suppose that \( g: S \to (0, \infty) \) is invariant for \( \bs X \). The time reversal \( \hat{\bs X} \) with respect to \( g \) is also subordinate to the Poisson process with rate \( r \). The jump chain \( \hat{\bs Y} \) of \( \hat{\bs X} \) is the (discrete) time reversal of \( \bs Y \) with respect to \( g \).

Details:

From the previous result, \( g \) is also invariant for \( \bs Y \). From the general theory of time reversal, \( \hat{\bs X} \) has the same exponential parameter function as \( \bs X \) (namely the constant function \( r \)) and so is also subordinate to the Poisson process with rate \( r \). Finally, the jump chain \( \hat{\bs Y} \) of \( \hat{\bs X} \) is the reversal of \( \bs Y \) with respect to \( r g \) and hence also with respect to \( g \).

In particular, \( \bs X \) is reversible with respect to \( g \) if and only if \( \bs Y \) is reversible with respect to \( g \). As noted earlier, \( \bs X \) and \( \bs Y \) are of the same type: both transient or both null recurrent or both positive recurrent. In the recurrent case, there exists a positive invariant function that is unique up to multiplication by constants. In this case, the reversal of \( \bs X \) is unique, and is the chain subordinate to the Poisson process with rate \( r \) whose jump chain is the reversal of \( \bs Y \).
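As a small illustration, the sketch below computes the transition matrix of the reversed jump chain, \( \hat Q(x, y) = g(y) Q(y, x) / g(x) \), for the hypothetical \( Q \) used above, and verifies that it is a transition matrix with the same invariant function:

```python
import numpy as np

Q = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.2, 0.8, 0.0]])

# A positive invariant function g for Q (here the invariant pdf)
vals, vecs = np.linalg.eig(Q.T)
g = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
g /= g.sum()

# Jump chain of the reversal: Q_hat(x, y) = g(y) Q(y, x) / g(x)
Q_hat = (Q.T * g) / g[:, None]

# Q_hat is a transition matrix with zero diagonal and the same invariant g
print(np.allclose(Q_hat.sum(axis=1), 1), np.allclose(g @ Q_hat, g))
```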

Uniform Chains

In the construction above for a Markov chain \( \bs X = \{X_t: t \in [0, \infty)\} \) that is subordinate to the Poisson process with rate \( r \) and jump transition matrix \( Q \), we assumed of course that \( Q(x, x) = 0 \) for every \( x \in S \). So there are no absorbing states, and the arrival times \( (\tau_1, \tau_2, \ldots) \) of the Poisson process are the jump times of the chain \( \bs X \). However, in our introduction to continuous-time chains, we saw that the general construction of a chain starting with the function \( \lambda \) and the transition matrix \( Q \) works without this assumption on \( Q \), although the exponential parameters and transition probabilities change. The same idea works here.

Suppose that \( \bs N = \{N_t: t \in [0, \infty)\} \) is a Poisson counting process with rate \( r \in (0, \infty) \) and that \( \bs Y = \{Y_n: n \in \N\} \) is a discrete-time Markov chain on \( S \), independent of \( \bs N \), with transition matrix \( Q \) satisfying \( Q(x, x) \lt 1 \) for \( x \in S \). Define \( X_t = Y_{N_t} \) for \( t \in [0, \infty) \). Then \( \bs X = \{X_t: t \in [0, \infty)\} \) is a continuous-time Markov chain with exponential parameter function \( \lambda(x) = r [1 - Q(x, x)] \) for \( x \in S \) and jump transition matrix \( \tilde Q \) given by \[ \tilde Q(x, y) = \frac{Q(x, y)}{1 - Q(x, x)}, \quad (x, y) \in S^2, \, x \ne y \]

Details:

This follows from the result in the introduction.

The Markov chain constructed above is no longer subordinate to the Poisson process in the sense of the definition above, since the exponential parameter function is not constant, and the transition times of \( \bs X \) are no longer the arrival times of the Poisson process. Nonetheless, many of the basic results above still apply.

Let \( \bs X = \{X_t: t \in [0, \infty)\} \) be the Markov chain constructed in the previous theorem. Then

  1. For \( t \in [0, \infty) \), the transition matrix \( P_t \) is given by \[ P_t = \sum_{n=0}^\infty e^{- r t} \frac{(r t)^n}{n!} Q^n \]
  2. For \( \alpha \in (0, \infty) \), the \( \alpha \)-potential matrix is given by \[ U_\alpha = \frac{1}{\alpha + r} \sum_{n=0}^\infty \left(\frac{r}{\alpha + r}\right)^n Q^n \]
  3. The generator matrix is \( G = r (Q - I) \)
  4. \( g: S \to (0, \infty) \) is invariant for \( \bs X \) if and only if \( g \) is invariant for \( \bs Y \).
Details:

The proofs are just as before.

It's a remarkable fact that every continuous-time Markov chain with bounded exponential parameters can be constructed as in the theorem above, a process known as uniformization. The name comes from the fact that in the construction, the exponential parameters become constant, but at the expense of allowing the embedded discrete-time chain to jump from a state back to that state. To review the definition, suppose that \( \bs X = \{X_t: t \in [0, \infty)\} \) is a continuous-time Markov chain on \( S \) with transition semigroup \( \bs P = \{P_t: t \in [0, \infty)\} \), exponential parameter function \( \lambda \) and jump transition matrix \( Q \). Then \( \bs P \) is uniform if \( P_t(x, x) \to 1 \) as \( t \downarrow 0 \) uniformly in \( x \), or equivalently if \( \lambda \) is bounded.

Suppose that \( \lambda: S \to (0, \infty) \) is bounded and that \( Q \) is a transition matrix on \( S \) with \( Q(x, x) = 0 \) for every \( x \in S \). Let \( r \in (0, \infty) \) be an upper bound on \( \lambda \) and \( \bs N = \{N_t: t \in [0, \infty)\} \) a Poisson counting process with rate \( r \). Define the transition matrix \( \hat Q \) on \( S \) by \begin{align*} \hat Q(x, x) & = 1 - \frac{\lambda(x)}{r} \quad x \in S \\ \hat Q(x, y) & = \frac{\lambda(x)}{r} Q(x, y) \quad (x, y) \in S^2, \, x \ne y \end{align*} and let \( \bs Y = \{Y_n: n \in \N\} \) be a discrete-time Markov chain with transition matrix \( \hat Q \), independent of \( \bs N \). Define \( X_t = Y_{N_t} \) for \( t \in [0, \infty) \). Then \( \bs X = \{X_t: t \in [0, \infty)\} \) is a continuous-time Markov chain with exponential parameter function \( \lambda \) and jump transition matrix \( Q \).

Details:

Note that \( \hat Q(x, y) \ge 0 \) for every \( (x, y) \in S^2 \) and \( \sum_{y \in S} \hat Q(x, y) = 1 \) for every \( x \in S \). Thus \( \hat Q \) is a transition matrix on \( S \). Note also that \( \hat Q(x, x) \lt 1 \) for every \( x \in S \). By construction, \( \lambda(x) = r[1 - \hat Q(x, x)] \) for \( x \in S \) and \[ Q(x, y) = \frac{\hat Q(x, y)}{1 - \hat Q(x, x)}, \quad (x, y) \in S^2, \, x \ne y \] So the result now follows from the theorem above.
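In code, building the uniformized jump chain \( \hat Q \) is a one-liner. The sketch below uses a hypothetical bounded rate function \( \lambda \) and jump matrix \( Q \) (both invented for illustration) and confirms that the two parameterizations, \( (\lambda, Q) \) and \( (r, \hat Q) \), yield the same generator matrix:

```python
import numpy as np

# Hypothetical bounded rates and jump transition matrix with zero diagonal
lam = np.array([1.0, 2.5, 0.5])
Q = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.2, 0.8, 0.0]])
r = lam.max()  # any upper bound on lambda will do

# Uniformized jump chain: Q_hat(x, x) = 1 - lam(x) / r and
# Q_hat(x, y) = (lam(x) / r) Q(x, y) for y != x
Q_hat = (lam / r)[:, None] * Q
np.fill_diagonal(Q_hat, 1 - lam / r)

# Both parameterizations give the same generator matrix
G1 = r * (Q_hat - np.eye(3))
G2 = np.diag(lam) @ (Q - np.eye(3))
print(np.allclose(Q_hat.sum(axis=1), 1), np.allclose(G1, G2))  # True True
```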

Note in particular that if the state space \( S \) is finite then of course \( \lambda \) is bounded so the previous theorem applies. The theorem is useful for simulating a continuous-time Markov chain, since the Poisson process and discrete-time chains are simple to simulate. In addition, we have nice representations for the transition matrices, potential matrices, and the generator matrix.

Suppose that \( \bs X = \{X_t: t \in [0, \infty)\} \) is a continuous-time Markov chain on \( S \) with bounded exponential parameter function \( \lambda: S \to (0, \infty) \) and jump transition matrix \( Q \). Define \( r \) and \( \hat Q \) as in the last theorem. Then

  1. For \( t \in [0, \infty) \), the transition matrix \( P_t \) is given by \[ P_t = \sum_{n=0}^\infty e^{- r t} \frac{(r t)^n}{n!} \hat Q^n \]
  2. For \( \alpha \in (0, \infty) \), the \( \alpha \)-potential matrix is given by \[ U_\alpha = \frac{1}{\alpha + r} \sum_{n=0}^\infty \left(\frac{r}{\alpha + r}\right)^n \hat Q^n \]
  3. The generator matrix is \( G = r (\hat Q - I) \)
  4. \( g: S \to (0, \infty) \) is invariant for \( \bs X \) if and only if \( g \) is invariant for \( \hat Q \).
Details:

These results follow from the corresponding results for chains subordinate to the Poisson process above.

Examples

The Two-State Chain

The following exercise applies the uniformization method to the two-state chain.

Consider the continuous-time Markov chain \( \bs X = \{X_t: t \in [0, \infty)\} \) on \( S = \{0, 1\} \) with exponential parameter function \( \lambda = (a, b) \), where \( a, \, b \in (0, \infty) \). Thus, states 0 and 1 are stable and the jump chain has transition matrix \[ Q = \left[\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right] \] Let \( r = a + b \), an upper bound on \( \lambda \). Show that

  1. \( \hat Q = \frac{1}{a + b} \left[\begin{matrix} b & a \\ b & a \end{matrix} \right] \)
  2. \( G = \left[\begin{matrix} -a & a \\ b & - b \end{matrix}\right] \)
  3. \( P_t = \hat Q - \frac{1}{a + b} e^{-(a + b) t} G \) for \( t \in [0, \infty) \)
  4. \( U_\alpha = \frac{1}{\alpha} \hat Q - \frac{1}{(\alpha + a + b)(a + b)} G \) for \( \alpha \in (0, \infty) \)
Details:

The form of \( \hat Q \) follows easily from the definition above. Note that the rows of \( \hat Q \) are the invariant PDF. It then follows that \( \hat Q^n = \hat Q \) for \( n \in \N_+ \). The results for the transition matrix \( P_t \) and the potential matrix \( U_\alpha \) then follow easily from the theorem above.
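As a final check, here is a sketch that verifies the formula for \( P_t \) in part 3 against the matrix exponential \( e^{t G} \), with the arbitrary choices \( a = 1 \) and \( b = 3 \):

```python
import numpy as np
from scipy.linalg import expm

a, b = 1.0, 3.0  # arbitrary positive rates
r = a + b
G = np.array([[-a, a], [b, -b]])
Q_hat = np.array([[b, a], [b, a]]) / r

# P_t = Q_hat - e^{-(a+b)t} G / (a+b), compared to the matrix exponential
for t in (0.5, 2.0):
    print(np.allclose(expm(t * G), Q_hat - np.exp(-r * t) * G / r))  # True
```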

Although we have obtained all of these results for the two-state chain before, the derivation based on uniformization is the easiest.