\(\newcommand{\P}{\mathbb{P}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\R}{\mathbb{R}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\C}{\mathbb{C}}\) \(\newcommand{\ms}{\mathscr}\) \(\newcommand{\lfrta}{\leftrightarrow}\) \(\newcommand{\bs}{\boldsymbol}\)
Reliability
7. Paths and Related Graphs

1. The Infinite Path

Basics

Polynomials

In this section and the next, a family of polynomials will play a central role.

Define the family of polynomials \(\{P_n: n \in \N\}\) by the following initial conditions and recursion relation: \begin{align} & P_0(t) = 1, \; P_1(t) = t, \quad t \in \R \\ & P_{n + 1}(t) = t P_n(t) - P_{n - 1}(t), \quad n \in \N_+, \, t \in \R \end{align}

The first few members of the family are \begin{align*} P_0(t) & = 1 \\ P_1(t) & = t \\ P_2(t) & = t^2 - 1 \\ P_3(t) & = t^3 - 2 t \\ P_4(t) & = t^4 - 3 t^2 + 1 \end{align*} This family of polynomials is closely related to a family of Chebyshev polynomials, and with this connection, the important properties follow easily.
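The recursion can be evaluated directly. Here is a minimal numerical sketch; the function name `P` and the test point are our own choices, not from the text:

```python
# Evaluate P_n(t) via the defining recursion P_{n+1}(t) = t*P_n(t) - P_{n-1}(t).
# The name `P` is our own choice for this sketch.

def P(n, t):
    """Value of the polynomial P_n at t, computed by the recursion."""
    if n == 0:
        return 1.0
    p_prev, p_curr = 1.0, t  # P_0(t), P_1(t)
    for _ in range(n - 1):
        p_prev, p_curr = p_curr, t * p_curr - p_prev
    return p_curr

# Spot-check against a closed form listed above: P_4(t) = t^4 - 3 t^2 + 1
t = 1.5
print(P(4, t), t**4 - 3 * t**2 + 1)  # the two values agree
```
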

\(P_n(t) = U_n(t / 2)\) for \(n \in \N\) and \(t \in \R\) where \(U_n\) is the Chebyshev polynomial of the second kind with degree \(n\).

Details:

This follows directly from the defining conditions for \(\{U_n: n \in \N\}\): \begin{align*} & U_0(t) = 1, \; U_1(t) = 2 t, \quad t \in \R \\ & U_{n + 1}(t) = 2 t U_n(t) - U_{n - 1}(t), \quad n \in \N_+, \, t \in \R \end{align*}

For \(n \in \N_+\), the roots of \(P_n\) are \[2 \cos\left(\frac{k}{n + 1} \pi\right), \quad k \in \{1, 2, \ldots, n\}\]

Details:

This follows directly from the previous result and the known roots of the Chebyshev polynomials of the second kind.

Note that \(P_n\) has \(n\) distinct (and hence simple) roots in the interval \((-2, 2)\).
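A quick numerical check of the root formula; this is a sketch, and the degree \(n = 7\) is an arbitrary choice:

```python
import math

# Confirm numerically that 2*cos(k*pi/(n+1)), k = 1..n, are roots of P_n.

def P(n, t):
    """P_n(t) via the recursion P_{n+1}(t) = t*P_n(t) - P_{n-1}(t)."""
    if n == 0:
        return 1.0
    p_prev, p_curr = 1.0, t
    for _ in range(n - 1):
        p_prev, p_curr = p_curr, t * p_curr - p_prev
    return p_curr

n = 7
roots = [2 * math.cos(k * math.pi / (n + 1)) for k in range(1, n + 1)]
print(max(abs(P(n, r)) for r in roots))  # essentially 0, up to rounding
```
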

For \(n \in \N\) and \(\theta \in (0, \pi)\), \[P_n(2 \cos \theta) = \frac{\sin[(n + 1) \theta]}{\sin \theta}\]

Details:

This follows from the previous result and the trigonometric definition of the Chebyshev polynomials.

Graphs

The following definition gives the graph of interest in this section.

Define the relation \(\lfrta\) on \(\N\) by \(x \lfrta x + 1\) for \(x \in \N\) and \(x \lfrta x - 1\) for \(x \in \N_+\). The graph \((\N, \lfrta)\) is the symmetric, undirected simple path \((0, 1, 2, \ldots)\) (with no loops). The graph is periodic with period 2.

Details:

Clearly a walk from \(x \in \N\) back to \(x\) must have even length, and there is such a walk of length 2.

The graph \((\N, \lfrta)\) is a basic and interesting discrete structure. Since the graph is discrete, we have our usual reference space \((\N, \ms P(\N), \#)\) in the background.

The \(\sigma\)-algebra associated with \((\N, \lfrta)\) is the reference \(\sigma\)-algebra \(\ms P(\N)\).

Details:

By definition, the \(\sigma\)-algebra associated with the graph is \(\ms A = \sigma\{A_x: x \in \N\}\) where \(A_x = \{y \in \N: x \lfrta y\}\) is the neighbor set of \(x \in \N\). Thus \(A_x = \{x - 1, x + 1\}\) for \(x \in \N_+\) and \(A_0 = \{1\}\). Hence \(A_x \cap A_{x + 2} = \{x + 1\} \in \ms A\) for \(x \in \N\), so \(\{x\} \in \ms A\) for every \(x \in \N_+\). It then follows that \(\N_+ \in \ms A\) and hence \(\{0\} = \N \setminus \N_+ \in \ms A\). So \(\{x\} \in \ms A\) for every \(x \in \N\).

The walk functions for \((\N, \lfrta)\) are given recursively as follows.

  1. \[ u_{n}(0) = \binom{n}{\lfloor n/2 \rfloor}, \quad n \in \N \]
  2. \[ u_n(1) = u_{n + 1}(0) = \binom{n + 1}{\lfloor (n + 1) /2 \rfloor}, \quad n \in \N \]
  3. \[ u_n(x + 1) = u_{n + 1}(x) - u_n(x - 1), \quad x \in \N_+, \, n \in \N \]
Details:
  1. Consider walks ending in 0. A walk of even length \(2 n\) must start in a state in \(\{0, 2, \ldots, 2n\}\). A walk of odd length \(2 n + 1\) must start in a state in \(\{1, 3, \ldots 2 n + 1\}\). For \(n \in \N_+\), let \(a_n(k)\) be the number of walks of length \(2 n\) starting in \(k \in \{0, 2, \ldots, 2 n\}\) and ending in 0, and let \(b_n(k)\) be the number of walks of length \(2 n + 1\) starting in \(k \in \{1, 3, \ldots, 2 n + 1\}\) and ending in 0. So \[u_{2 n}(0) = \sum_{j = 0}^n a_n(2 j), \; u_{2 n +1}(0) = \sum_{j = 0}^n b_n(2 j + 1); \quad n \in \N_+\] Trivially \(a_n(2 n) = b_n(2 n + 1) = 1\) for \(n \in \N_+\). Also, \(b_n(k) = a_n(k - 1) + a_n(k + 1)\) for \(k \in \{1, 3, \ldots, 2n - 1\}\). Similarly, \(a_{n + 1}(0) = b_n(1)\) and \(a_{n + 1}(k) = b_n(k - 1) + b_n(k + 1)\) for \(k \in \{2, 4, \ldots, 2 n\}\). It then follows that \(u_{2 n}(0) = 2 u_{2 n - 1}(0)\) and \(u_{2 n + 1}(0) = 2 u_{2 n}(0) - a_n(0)\). This leads to (a).
  2. By definition, \(u_{n + 1}(x) = (u_n L)(x)\) for \(n \in \N\), where as usual, \(L\) is the adjacency kernel of \((\N, \lfrta)\). With \(x = 0\) we have \(u_{n + 1}(0) = (u_n L)(0) = u_n(1)\) which gives (b).
  3. Finally, for \(x \in \N_+\) we have \(u_{n + 1}(x) = (u_n L)(x) = u_n(x - 1) + u_n(x + 1)\). Solving for \(u_n(x + 1)\) gives (c).
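As a numerical sketch, the recursion \(u_{n + 1} = u_n L\) can be iterated on a path truncated far enough out that the cutoff never influences the reported values; the helper name `walk_functions`, the truncation rule, and the parameters here are our own choices:

```python
from math import comb

# Compute walk functions u_n(x) of the infinite path by iterating u_{n+1} = u_n L
# on a truncated state space {0, ..., M-1} with M large enough that the cutoff
# cannot propagate back to the values we report.

def walk_functions(n_max, x_max):
    M = n_max + x_max + 2          # boundary effects never reach x <= x_max
    u = [1] * M                    # u_0 = 1 (walks of length 0)
    out = [u[:x_max + 1]]
    for _ in range(n_max):
        u = [u[1] if x == 0
             else u[x - 1] + u[x + 1] if x < M - 1
             else 0
             for x in range(M)]
        out.append(u[:x_max + 1])
    return out                     # out[n][x] = u_n(x)

u = walk_functions(8, 4)
# Check part (a): u_n(0) = C(n, floor(n/2)), and part (b): u_n(1) = u_{n+1}(0)
print([u[n][0] for n in range(9)])
print(all(u[n][0] == comb(n, n // 2) for n in range(9)))
```
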

For \(x \in \N\), the generating function \(U(x, t)\) for \((\N, \lfrta)\) is defined for \(|t| \lt 1/2\) and satisfies \begin{align*} U(0, t) &= 1 + t U(1, t) \\ U(x, t) &= 1 + t U(x + 1, t) + t U(x - 1, t), \quad x \in \N_+ \end{align*}

Details:

First, it's easy to see that for \(x \in \N\), the generating function converges for \(|t| \lt \frac 1 2\). From part (b) of the proposition above we have \(t^{n + 1} u_n(1) = t^{n + 1} u_{n + 1}(0)\) for \(|t| \lt \frac 1 2\) and \(n \in \N\). Summing over \(n \in \N\) gives \[t U(1, t) = U(0, t) - u_0(0) = U(0, t) - 1\] From part (c) of the proposition we have \[t^{n + 1} u_n(x + 1) = t^{n + 1} u_{n + 1}(x) - t^{n + 1} u_n(x - 1), \quad n \in \N, \, x \in \N_+, \, |t| \lt \frac 1 2\] Summing over \(n \in \N\) gives \[t U(x + 1, t) = U(x, t) - u_0(x) - t U(x - 1, t) = U(x, t) - 1 - t U(x - 1, t), \quad x \in \N_+, \, |t| \lt \frac 1 2 \]
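The identity \(U(0, t) = 1 + t U(1, t)\) can be checked with truncated power series; this is a sketch, with \(t = 0.3\) and the truncation levels chosen by us:

```python
# Check U(0,t) = 1 + t*U(1,t) numerically with truncated series, using the
# walk-count recursion on a path long enough that the cutoff is harmless.

t = 0.3          # any |t| < 1/2 works; 0.3 is arbitrary
N, M = 30, 40    # number of series terms, truncated path length
u = [1.0] * M    # u_0 = 1
U0, U1 = u[0], u[1]
tn = 1.0
for n in range(1, N + 1):
    u = [u[1]] + [u[x - 1] + u[x + 1] for x in range(1, M - 1)] + [0.0]
    tn *= t
    U0 += tn * u[0]   # partial sum of U(0, t)
    U1 += tn * u[1]   # partial sum of U(1, t)
print(U0, 1 + t * U1)  # nearly equal; the discrepancy is the series tail
```
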

For the following result, \(L\) denotes the adjacency kernel of \((\N, \lfrta)\) and \(P_n\) denotes the polynomial of degree \(n \in \N\) defined above.

For \(\beta \in \R\), the solution of the equation \(L g = \beta g\) is given by \(g(x) = g(0) P_x(\beta)\) for \(x \in \N\).

Details:

For \(\beta \in \R\), the equation \(L g = \beta g\) becomes \begin{align*} g(1) & = \beta g(0) \\ g(x + 1) & = \beta g(x) - g(x - 1), \quad x \in \N_+ \end{align*} Hence \(g(x) = g(0) P_x (\beta)\) for \(x \in \N\) by definition of the polynomials.

So on the vector space of all functions from \(\N\) to \(\R\), every \(\beta \in \R\) is an eigenvalue of \((\N, \lfrta)\), and a corresponding eigenfunction is given by \(x \mapsto P_x(\beta)\). But usually we are interested in eigenvalues and eigenfunctions of \((\N, \lfrta)\) for the space \(\ms L_1\), where probability density functions live.
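The eigenfunction claim is easy to verify numerically; a sketch with an arbitrary \(\beta = 0.7\) and \(g(0) = 1\):

```python
# Verify that g(x) = P_x(beta) satisfies (L g)(x) = beta * g(x) on the path:
# g(1) = beta*g(0) at the endpoint, and g(x-1) + g(x+1) = beta*g(x) elsewhere.

def P(n, t):
    """P_n(t) via the recursion P_{n+1}(t) = t*P_n(t) - P_{n-1}(t)."""
    if n == 0:
        return 1.0
    p_prev, p_curr = 1.0, t
    for _ in range(n - 1):
        p_prev, p_curr = p_curr, t * p_curr - p_prev
    return p_curr

beta = 0.7                          # arbitrary choice
g = [P(x, beta) for x in range(20)]
ok0 = abs(g[1] - beta * g[0]) < 1e-12
okx = all(abs(g[x - 1] + g[x + 1] - beta * g[x]) < 1e-9 for x in range(1, 19))
print(ok0, okx)
```
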

Probability

Distributions

Suppose that \(X\) is a random variable in \(\N\) with density function \(f\). The reliability function \(F\) of \(X\) for \((\N, \lfrta)\) is given by \[F(0) = f(1), \quad F(x) = f(x - 1) + f(x + 1), \; x \in \N_+\]

As usual, we have our basic support assumption that \(F(x) \gt 0\) for all \(x \in \N\). So in particular, \(f(1) \gt 0\), and for every \(x \in \N_+\), either \(f(x - 1) \gt 0\) or \(f(x + 1) \gt 0\). The reliability function does in fact determine the distribution, and we can characterize reliability functions:

The graph \((\N, \lfrta)\) is stochastic. Suppose that \(f\) is a probability density function on \(\N\) and that \(F\) is the corresponding reliability function for \((\N, \lfrta)\). Then

  1. \( f(0) = 2 - \sum_{x = 0}^\infty F(x) \)
  2. \( f(2 x + 1) = (-1)^x \sum_{k = 0}^x (-1)^k F(2 k), \quad x \in \N \)
  3. \( f(2 x) = (-1)^x f(0) + (-1)^x \sum_{k = 1}^{x} (-1)^k F(2 k - 1), \quad x \in \N_+ \)

Conversely, if \(F\) is a positive function on \(\N\) such that \(1 \lt \sum_{x = 0}^\infty F(x) \lt 2\) and such that the right side of (b) is positive for \(x \in \N\) and the right side of (c) is positive for \(x \in \N_+\), then \(F\) is the reliability function for \((\N, \lfrta)\) of a distribution on \(\N\) with probability density function given by (a), (b), and (c).

Details:
  1. Suppose that \(f\) is a density function on \(\N\) and that \(F\) is the reliability function of \(f\) for \((\N, \lfrta)\). Note that \[\sum_{x = 0}^\infty F(x) = f(1) + \sum_{x = 1}^\infty [f(x - 1) + f(x + 1)] = f(0) + 2 \sum_{x = 1}^\infty f(x) = f(0) + 2[1 - f(0)] = 2 - f(0)\] and hence (a) holds.
  2. For the odd-order terms, note first that \(f(1) = F(0)\). Next, \(F(2) = f(1) + f(3)\) so \(f(3) = F(2) - F(0)\). Next, \(F(4) = f(3) + f(5)\) so \[f(5) = F(4) - f(3) = F(4) - F(2) + F(0)\] Continuing in this way gives (b).
  3. Similarly, \(F(1) = f(0) + f(2)\), so \(f(2) = F(1) - f(0)\), where \(f(0)\) is given in (a). Next, \(F(3) = f(2) + f(4)\) so \[f(4) = F(3) - f(2) = F(3) - F(1) + f(0)\] Continuing in this way gives (c).

Conversely, suppose that \(F\) is a positive function on \(\N\) such that \(1 \lt \sum_{x=0}^\infty F(x) \lt 2\) and with the right sides of (b) and (c) positive. Define \(f\) by (a), (b), and (c). Then \(f(x) \ge 0\) for \(x \in \N\) and it's easy to see from the definitions that \(F(0) = f(1)\) and \(F(x) = f(x - 1) + f(x + 1)\) for \(x \in \N_+\). So we have \[\sum_{x = 0}^\infty F(x) = f(0) + 2 \sum_{x = 1}^\infty f(x)\] Therefore \[f(0) + \sum_{x = 0}^\infty F(x) = 2 \sum_{x = 0}^\infty f(x)\] But by definition of \(f(0)\), the left side is \(2\). It follows that \(f\) is a density function on \(\N\) and \(F\) is the reliability function of \(f\) for \((\N, \lfrta)\).
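A round-trip check of formulas (a), (b), and (c) for a concrete distribution; this sketch uses the geometric distribution with \(p = 0.4\) (an arbitrary choice) and truncates the infinite sum in (a) where the geometric tail is negligible:

```python
# Round trip: build F from a geometric density f, then recover f from F via
# the inversion formulas (a), (b), (c).

p = 0.4
N = 200                                 # truncation point; tail is negligible
f = [p * (1 - p) ** x for x in range(N)]
F = [f[1]] + [f[x - 1] + f[x + 1] for x in range(1, N - 1)]

f0 = 2 - sum(F)                         # formula (a)
# formula (b): f(2x+1), and formula (c): f(2x)
odd = [(-1) ** x * sum((-1) ** k * F[2 * k] for k in range(x + 1))
       for x in range(20)]
even = [f0] + [(-1) ** x * f0 + (-1) ** x * sum((-1) ** k * F[2 * k - 1]
               for k in range(1, x + 1)) for x in range(1, 20)]
print(abs(f0 - f[0]),
      max(abs(odd[x] - f[2 * x + 1]) for x in range(20)),
      max(abs(even[x] - f[2 * x]) for x in range(20)))  # all essentially 0
```
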

Here is our main result:

The graph \((\N, \lfrta)\) does not have a constant rate distribution. However, the geometric distribution on \(\N\) has constant rate on \(\N_+\) and is the only such distribution.

Details:

Suppose that \(X\) is a random variable with values in \(\N\) having constant rate \(\alpha\). From our support assumptions, \(\P(X = 0) \gt 0\) and \(\P(X \in \N_+) \gt 0\). It follows that \(\E[u(X)] \in (1, 2)\) where as usual, \(u\) is the walk function of order 1. But \(\E[u(X)] = 1 / \alpha\) from the basic moment result in Section 1.5, so \(\alpha \in \left(\frac 1 2, 1\right)\). Suppose next that \(X\) has probability density function \(f\). By the constant rate property, \(L f = \frac 1 \alpha f\), so that

  1. \( f(1) = \frac 1 \alpha f(0) \)
  2. \( f(x + 1) = \frac 1 \alpha f(x) - f(x - 1), \quad x \in \N_+ \)
The characteristic equation of the difference equation (b) is \(r^2 - \frac{1}{\alpha} r + 1 = 0\), which has roots \[ r_1 = \frac{1 - \sqrt{1 - 4 \alpha^2}}{2 \alpha}, \quad r_2 = \frac{1 + \sqrt{1 - 4 \alpha^2}}{2 \alpha} \] Since \(\alpha \in \left(\frac 1 2, 1\right)\), the roots are complex conjugates, so either \(f(x) = 0\) for all \(x \in \N\) or \(f(x) \lt 0\) for infinitely many \(x \in \N\), in either case a contradiction. In fact, we can give the solution explicitly. Since \(1 \lt 1 / \alpha \lt 2\), we can write \(1 / \alpha = 2 \cos \theta\) for some \(\theta \in (0, \pi / 3)\). From the trigonometric identity above, \[f(x) = f(0) P_x\left(\frac 1 \alpha\right) = f(0) \frac{\sin[(x + 1) \theta]}{\sin \theta}, \quad x \in \N\]

Next, suppose that \(X\) has the geometric distribution on \(\N\) with success parameter \(p \in (0, 1)\), so that the probability density function \(f\) is given by \(f(x) = p (1 - p)^x\) for \(x \in \N\). Of course, \(X\) has an exponential distribution for the positive semigroup \((\N, +)\). We have \(F(0) = f(1) = p (1 - p)\) and \[F(x) = f(x - 1) + f(x + 1) = p (1 - p)^{x-1} + p (1 - p)^{x + 1} = p (1 - p)^{x-1}[1 + (1 - p)^2], \quad x \in \N_+\] So the rate function \(r\) of \(X\) for \((\N, \lfrta)\) satisfies \(r(0) = f(0) / F(0) = 1/(1 - p)\) and \[r(x) = \frac{f(x)}{F(x)} = \frac{1 - p}{1 + (1 - p)^2}, \quad x \in \N_+\] Hence \(X\) has constant rate \((1 - p) / [1 + (1 - p)^2]\) on \(\N_+\).

Conversely, suppose that \(X\) has constant rate \(\alpha\) on \(\N_+\), so that the probability density function \(f\) satisfies (b) above (but not necessarily (a)). The only possible solution with \(f(x) \to 0\) as \(x \to \infty\) occurs when \(\alpha \lt \frac{1}{2}\), so that the characteristic equation has a root \(r \lt 1\). The solution has the form \(f(x) = c r^x\). The requirement that \(f\) be a probability density function then implies that \(c = 1 - r\), so \(X\) has the geometric distribution with parameter \(p = 1 - r\).
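The computation for the geometric distribution is easy to verify numerically; a sketch with \(p = 0.25\) (an arbitrary choice):

```python
# Compute the rate function of the geometric distribution for the infinite path
# and confirm it is constant on the positive integers (but not at 0).

p = 0.25
f = lambda x: p * (1 - p) ** x                      # geometric density
F = lambda x: f(1) if x == 0 else f(x - 1) + f(x + 1)  # reliability function

rates = [f(x) / F(x) for x in range(1, 30)]
alpha = (1 - p) / (1 + (1 - p) ** 2)                # predicted constant rate
print(max(abs(r - alpha) for r in rates))           # essentially 0
print(f(0) / F(0), 1 / (1 - p))                     # rate at 0 is 1/(1-p)
```
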

In this sense, the geometric distribution is almost the most random way to put points in the infinite path \((\N, \lfrta)\): it has constant rate everywhere except at the endpoint 0.

Random Walks

Returning to the general case, we will consider random walk on \((\N, \lfrta)\) next.

Suppose that \(X\) is a random variable with probability density function \(f\) on \(\N\). Let \(\bs{Y} = (Y_1, Y_2, \ldots)\) denote the random walk on \((\N, \lfrta)\) associated with \(X\). Then \(Y_1\) has density function \(f\) and \(\bs Y\) has transition kernel \(P\) given by \(P(0, 1) = 1\) and \begin{align*} P(x, x - 1) & = \frac{f(x - 1)}{f(x -1) + f(x + 1)}, \quad x \in \N_+ \\ P(x, x + 1) & = \frac{f(x + 1)}{f(x - 1) + f(x + 1)}, \quad x \in \N_+ \end{align*}

Details:

This follows from the definition of the random walk: \(P(x, y) = f(y) / F(x)\) for \(x \lfrta y\) where \(F\) is the reliability function of \(X\) for \((\N, \lfrta)\).

Hence \(\bs Y\) is a birth-death chain on \(\N\) with reflecting boundary point 0.

The random walk \(\bs{Y}\) is periodic with period 2, and is positive recurrent and reversible. The invariant probability density function \(g\) is given by \(g(0) = f(0) f(1) / c\) and \(g(x) = f(x)[f(x - 1) + f(x + 1)] / c\) for \(x \in \N_+\) where \[c = 2 \sum_{x = 0}^\infty f(x) f(x + 1)\]

Details:

The random walk is periodic with period 2 since the underlying graph has this property. The other results follow from general results in Section 1.4. Recall that an invariant function is \( \varphi = f F\) where again, \(F\) is the reliability function of \(X\) for \((\N, \lfrta)\). The normalizing constant is \[c = \sum_{x = 0}^\infty \varphi(x) = 2 \sum_{x = 0}^\infty f(x) f(x + 1)\]

Of course, the general theory of birth-death chains would lead to the same conclusions.

Suppose that \(X\) has the geometric distribution on \(\N\) with success parameter \(p\) and that \(\bs Y\) is the random walk on \((\N, \lfrta)\) associated with \(X\).

  1. \(\bs Y\) has transition kernel \(P\) given by \(P(0, 1) = 1\) and \[P(x, x - 1) = \frac{1}{1 + (1 - p)^2}, \; P(x, x + 1) = \frac{(1 - p)^2}{1 + (1 - p)^2}, \quad x \in \N_+\]
  2. The invariant distribution of \(\bs Y\) is a modified geometric distribution with probability density function \(g\) given by \begin{align*} g(0) & = p (1 - p / 2) \\ g(x) &= [1 - p (1 - p / 2)][1 - (1 - p)^2] (1 - p)^{2 x - 2}, \quad x \in \N_+ \end{align*}
Details:

The results follow from the previous results, since \(X\) has density function \(f\) given by \(f(x) = p (1 - p)^x\) for \(x \in \N\) and has reliability function \(F\) for \((\N, \lfrta)\) given by \(F(0) = p (1 - p)\) and \(F(x) = p (1 - p)^{x - 1}[1 + (1 - p)^2]\) for \(x \in \N_+\).
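A numerical check that the modified geometric density in (b) agrees with the general invariant form \(g = f F / c\) and sums to 1; the parameter \(p = 0.3\) is an arbitrary choice:

```python
# Compare the modified geometric invariant density against g = f*F/c for the
# geometric distribution, and confirm that it is a probability density.

p = 0.3
q = 1 - p
f = lambda x: p * q ** x                               # geometric density
F = lambda x: f(1) if x == 0 else f(x - 1) + f(x + 1)  # reliability function
c = 2 * p * q / (2 - p)             # closed form of 2 * sum f(x) f(x+1)

g_gen = lambda x: f(x) * F(x) / c   # general invariant density
g0 = p * (1 - p / 2)
g = lambda x: g0 if x == 0 else (1 - g0) * (1 - q ** 2) * q ** (2 * x - 2)

print(max(abs(g(x) - g_gen(x)) for x in range(50)))  # essentially 0
print(sum(g(x) for x in range(200)))                 # essentially 1
```
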

The app below simulates the random walk on \((\N, \lfrta)\) corresponding to the geometric distribution with success parameter \(p\), as described above.