\(\newcommand{\R}{\mathbb{R}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\P}{\mathbb{P}}\) \(\newcommand{\var}{\text{var}}\) \(\newcommand{\sd}{\text{sd}}\) \(\newcommand{\cov}{\text{cov}}\) \(\newcommand{\cor}{\text{cor}}\) \(\newcommand{\skw}{\text{skew}}\) \(\newcommand{\kur}{\text{kurt}}\) \( \newcommand{\bs}{\boldsymbol} \)

The Uniform Distribution on an Interval

The continuous uniform distribution on an interval of \( \R \) is one of the simplest of all probability distributions, but nonetheless very important. In particular, continuous uniform distributions are the basic tools for simulating other probability distributions. The uniform distribution corresponds to picking a point at random from the interval. The uniform distribution on an interval is a special case of the general uniform distribution with respect to a measure, in this case Lebesgue measure (length measure) \(\lambda\) on the standard (Borel) measurable subsets of \( \R \).

The Standard Uniform Distribution

Definition

The continuous uniform distribution on the interval \( [0, 1] \) is known as the standard uniform distribution. Thus if \( U \) has the standard uniform distribution then \[ \P(U \in A) = \lambda(A) \] for every measurable subset \(A\) of \([0, 1]\).

A simulation of a random variable with the standard uniform distribution is known in computer science as a random number. All programming languages have functions for computing random numbers, as do calculators, spreadsheets, and mathematical and statistical software packages.
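For example, in Python (the language used for the code sketches in this section; any language with a random number generator would do), the standard library's `random` module produces random numbers:

```python
import random

# Each call simulates a value with the standard uniform distribution.
# Note that random.random() returns values in the half-open interval [0, 1).
u = random.random()
print(u)

# A sample of five random numbers
print([random.random() for _ in range(5)])
```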

Distribution Functions

Suppose that \( U \) has the standard uniform distribution. By definition, the probability density function is constant on \( [0, 1] \).

\( U \) has probability density function \(g\) given by \( g(u) = 1 \) for \( u \in [0, 1] \).

Since the density function is constant, the mode is not meaningful.

Open the Special Distribution Simulator and select the continuous uniform distribution. Keep the default parameter values for the standard uniform distribution. Run the simulation 1000 times and compare the empirical density function to the probability density function.

The distribution function is simply the identity function on \([0, 1]\).

\( U \) has distribution function \( G \) given by \( G(u) = u \) for \( u \in [0, 1] \).

Details:

Note that \( \P(U \le u) = \lambda[0, u] = u \) for \( u \in [0, 1] \). Recall again that \( \lambda \) is length measure.

The quantile function is the same as the distribution function.

\( U \) has quantile function \( G^{-1} \) given by \( G^{-1}(p) = p \) for \( p \in [0, 1] \). The quartiles are

  1. \( q_1 = \frac{1}{4} \), the first quartile
  2. \( q_2 = \frac{1}{2} \), the median
  3. \( q_3 = \frac{3}{4} \), the third quartile
Details:

\( G^{-1} \) is the ordinary inverse of \( G \) on the interval \( [0, 1] \), which is \( G \) itself since \( G \) is the identity function.

Open the quantile app and select the continuous uniform distribution. Keep the default parameter values for the standard uniform distribution. Compute a few values of the distribution function and the quantile function.

Moments

Suppose again that \( U \) has the standard uniform distribution. The moments (about 0) are simple.

For \(n \in \N\), \[ \E\left(U^n\right) = \frac{1}{n + 1} \]

Details:

Since the PDF is 1 on \( [0, 1] \), \[ \E\left(U^n\right) = \int_0^1 u^n \, du = \frac{1}{n + 1} \]
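As a quick numerical check (a sketch in Python; the sample size is an arbitrary choice), the sample mean of \( U^n \) should be close to \( 1/(n+1) \):

```python
import random

N = 100_000  # sample size (arbitrary)
for n in range(1, 5):
    # Sample mean of U^n, to be compared with 1 / (n + 1)
    sample_mean = sum(random.random() ** n for _ in range(N)) / N
    print(n, round(sample_mean, 4), 1 / (n + 1))
```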

The mean and variance follow easily from the general moment formula above.

The mean and variance of \( U \) are

  1. \( \E(U) = \frac{1}{2} \)
  2. \( \var(U) = \frac{1}{12} \)

Open the Special Distribution Simulator and select the continuous uniform distribution. Keep the default parameter values for the standard uniform distribution. Run the simulation 1000 times and compare the empirical mean and standard deviation to the true mean and standard deviation.

Next are the skewness and kurtosis.

The skewness and kurtosis of \( U \) are

  1. \( \skw(U) = 0 \)
  2. \( \kur(U) = \frac{9}{5} \)
Details:
  1. This follows from the symmetry of the distribution about the mean \( \frac{1}{2} \).
  2. This follows from the usual formula for kurtosis in terms of the moments, or directly, since \( \sigma^4 = \frac{1}{144} \) and \[ \E\left[\left(U - \frac{1}{2}\right)^4\right] = \int_0^1 \left(u - \frac{1}{2}\right)^4 du = \frac{1}{80} \]

So the excess kurtosis is \( \kur(U) - 3 = -\frac{6}{5} \)

Finally, we give the moment generating function.

The moment generating function \( m \) of \( U \) is given by \( m(0) = 1 \) and \[ m(t) = \frac{e^t - 1}{t}, \quad t \in \R \setminus \{0\} \]

Details:

Again, since the PDF is 1 on \( [0, 1] \), \[ \E\left(e^{t U}\right) = \int_0^1 e^{t u} \, du = \frac{e^t - 1}{t}, \quad t \ne 0 \] Trivially, \( m(0) = 1 \).
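A quick numerical check of this formula (a sketch; the Riemann sum and the test values of \( t \) are arbitrary choices):

```python
import math

def m(t):
    # MGF of the standard uniform distribution
    return 1.0 if t == 0 else (math.exp(t) - 1) / t

def m_numeric(t, k=10_000):
    # Midpoint Riemann sum for E(e^{tU}) = integral of e^{tu} over [0, 1]
    return sum(math.exp(t * (i + 0.5) / k) for i in range(k)) / k

for t in (-2.0, -0.5, 1.0, 3.0):
    print(t, m(t), m_numeric(t))
```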

Related Distributions

The standard uniform distribution is connected to every other probability distribution on \( \R \) by means of the quantile function of the other distribution. When the quantile function has a simple closed form expression, this result forms the primary method of simulating the other distribution with a random number.

Suppose that \( F \) is the distribution function for a probability distribution on \( \R \), and that \( F^{-1} \) is the corresponding quantile function. If \( U \) has the standard uniform distribution, then \( X = F^{-1}(U) \) has distribution function \( F \).

Details:

A basic property of quantile functions is that \( F^{-1}(p) \le x \) if and only if \( p \le F(x) \) for \( x \in \R \) and \( p \in (0, 1) \). Hence \[ \P(X \le x) = \P\left[F^{-1}(U) \le x\right] = \P[U \le F(x)] = F(x), \quad x \in \R \]
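For example (a sketch; the exponential distribution is chosen for illustration because its quantile function \( F^{-1}(p) = -\ln(1 - p) / r \), where \( r \) is the rate parameter, has a simple closed form):

```python
import math
import random

def sim_exponential(r):
    # Random quantile method: X = F^{-1}(U), where
    # F^{-1}(p) = -ln(1 - p) / r is the exponential quantile function
    return -math.log(1 - random.random()) / r

sample = [sim_exponential(2.0) for _ in range(100_000)]
print(sum(sample) / len(sample))  # should be close to the mean 1/r = 0.5
```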

Open the Random Quantile Experiment. For each distribution, run the simulation 1000 times and compare the empirical density function to the probability density function of the selected distribution. Note how the random quantiles simulate the distribution.

For a continuous distribution on an interval of \( \R \), the connection goes the other way.

Suppose that \( X \) has a continuous distribution with support on an interval \( I \subseteq \R \), with distribution function \( F \). Then \( U = F(X) \) has the standard uniform distribution.

Details:

For \( u \in (0, 1) \) recall that \( F^{-1}(u) \) is a quantile of order \( u \). Since \( X \) has a continuous distribution, \[ \P(U \ge u) = \P[F(X) \ge u] = \P[X \ge F^{-1}(u)] = 1 - F[F^{-1}(u)] = 1 - u \] Hence \( U \) is uniformly distributed on \( (0, 1) \).
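As an illustration (a sketch; the standard normal distribution is used because `math.erf` gives its distribution function), applying the distribution function to simulated values should produce standard uniform values:

```python
import math
import random
import statistics

# Simulate standard normal values, then apply the standard normal CDF
# F(x) = (1 + erf(x / sqrt(2))) / 2; the results should be uniform on (0, 1)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]
us = [(1 + math.erf(x / math.sqrt(2))) / 2 for x in xs]
print(statistics.mean(us))      # should be close to 1/2
print(statistics.variance(us))  # should be close to 1/12, about 0.0833
```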

The standard uniform distribution is a special case of the beta distribution.

The beta distribution with left parameter \( a = 1 \) and right parameter \( b = 1 \) is the standard uniform distribution.

Details:

The beta distribution with parameters \( a \gt 0 \) and \( b \gt 0 \) has PDF \[ x \mapsto \frac{1}{B(a, b)} x^{a-1} (1 - x)^{b-1}, \quad x \in (0, 1) \] where \( B \) is the beta function. With \( a = b = 1 \), the PDF is the standard uniform PDF.

The standard uniform distribution is also the building block of the Irwin-Hall distributions.

Suppose that \(n \in \N_+\) and that \(\bs{U} = (U_1, U_2, \ldots, U_n)\) is a sequence of independent random variables, each with the standard uniform distribution. Then \(X = \sum_{i=1}^n U_i\) has the Irwin-Hall distribution of order \(n\).
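A minimal simulation sketch (the order and sample size are arbitrary choices); by independence, the mean and variance of \( X \) should be \( n/2 \) and \( n/12 \):

```python
import random
import statistics

def irwin_hall(n):
    # Sum of n independent standard uniform variables
    return sum(random.random() for _ in range(n))

sample = [irwin_hall(3) for _ in range(100_000)]
print(statistics.mean(sample))      # should be close to 3/2
print(statistics.variance(sample))  # should be close to 3/12 = 0.25
```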

The Uniform Distribution on a General Interval

Definition

The standard uniform distribution is generalized by adding location-scale parameters.

Suppose that \( U \) has the standard uniform distribution. For \( a \in \R \) and \( w \in (0, \infty) \), the random variable \( X = a + w U \) has the uniform distribution with location parameter \( a \) and scale parameter \( w \).
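The definition translates directly into a simulation method (a sketch in Python; note that the standard library's `random.uniform` provides the same thing in the endpoint parameterization):

```python
import random

def uniform_on(a, w):
    # X = a + w U, where U is standard uniform: the uniform distribution
    # with location parameter a and scale parameter w
    return a + w * random.random()

print(uniform_on(2.0, 3.0))      # a random point in [2, 5)
print(random.uniform(2.0, 5.0))  # built-in equivalent, with endpoints a and b
```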

Distribution Functions

Suppose that \( X \) has the uniform distribution with location parameter \( a \in \R \) and scale parameter \( w \in (0, \infty) \).

\( X \) has probability density function \( f \) given by \( f(x) = 1/w \) for \( x \in [a, a + w] \).

Details:

Recall that \( f(x) = \frac{1}{w} g\left(\frac{x - a}{w}\right) \) for \( x \in [a, a + w] \), where \( g \) is the standard uniform PDF above. But \( g(u) = 1 \) for \( u \in [0, 1] \), so the result follows.

The theorem above shows that \( X \) really does have a uniform distribution, since the probability density function is constant on the support interval. Moreover, we can clearly parameterize the distribution by the endpoints of this interval, namely \( a \) and \( b = a + w \), rather than by the location and scale parameters \( a \) and \( w \). In fact, the distribution is more commonly known as the uniform distribution on the interval \( [a, b] \). Nonetheless, it is useful to know that the distribution is the location-scale family associated with the standard uniform distribution. In terms of the endpoint parameterization, \[ f(x) = \frac{1}{b - a}, \quad x \in [a, b] \]

Open the Special Distribution Simulator and select the uniform distribution. Vary the location and scale parameters and note the graph of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.

\( X \) has distribution function \( F \) given by \[ F(x) = \frac{x - a}{w}, \quad x \in [a, a + w] \]

Details:

Recall that \( F(x) = G\left(\frac{x - a}{w}\right) \) for \( x \in [a, a + w] \), where \( G \) is the standard uniform CDF above. But \( G(u) = u \) for \( u \in [0, 1] \), so the result follows. Of course, a direct proof using the PDF is also easy.

In terms of the endpoint parameterization, \[ F(x) = \frac{x - a}{b - a}, \quad x \in [a, b] \]

\( X \) has quantile function \( F^{-1} \) given by \( F^{-1}(p) = a + p w = (1 - p) a + p b \) for \( p \in [0, 1] \). The quartiles are

  1. \( q_1 = a + \frac{1}{4} w = \frac{3}{4} a + \frac{1}{4} b \), the first quartile
  2. \( q_2 = a + \frac{1}{2} w = \frac{1}{2} a + \frac{1}{2} b \), the median
  3. \( q_3 = a + \frac{3}{4} w = \frac{1}{4} a + \frac{3}{4} b \), the third quartile
Details:

Recall that \( F^{-1}(p) = a + w G^{-1}(p) \) where \( G^{-1} \) is the standard uniform quantile function above. But \( G^{-1}(p) = p \) for \( p \in [0, 1] \), so the result follows. Of course, a direct proof from the CDF is also easy.

Open the quantile app and select the uniform distribution. Vary the parameters and note the graph of the distribution function. For selected values of the parameters, compute a few values of the distribution function and the quantile function.

Moments

Again we assume that \( X \) has the uniform distribution on the interval \( [a, b] \) where \( a, \, b \in \R \) and \( a \lt b \). Thus the location parameter is \( a \) and the scale parameter is \( w = b - a \).

The moments of \( X \) are \[ \E(X^n) = \frac{b^{n+1} - a^{n+1}}{(n + 1)(b - a)}, \quad n \in \N \]

Details:

For \( n \in \N \), \[ \E(X^n) = \int_a^b x^n \frac{1}{b - a} dx = \frac{b^{n+1} - a^{n+1}}{(n + 1)(b - a)} \]

The mean and variance of \( X \) are

  1. \( \E(X) = \frac{1}{2}(a + b) \)
  2. \( \var(X) = \frac{1}{12}(b - a)^2 \)

Open the Special Distribution Simulator and select the uniform distribution. Vary the parameters and note the location and size of the mean\(\pm\)standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.

The skewness and kurtosis of \( X \) are

  1. \( \skw(X) = 0 \)
  2. \( \kur(X) = \frac{9}{5} \)
Details:

Recall that skewness and kurtosis are defined in terms of the standard score and hence are invariant under location-scale transformations.

Once again, the excess kurtosis is \( \kur(X) - 3 = -\frac{6}{5}\).

The moment generating function \( M \) of \( X \) is given by \( M(0) = 1 \) and \[ M(t) = \frac{e^{b t} - e^{a t}}{t(b - a)}, \quad t \in \R \setminus \{0\} \]

Details:

Recall that \( M(t) = e^{a t} m(w t) \) where \( m \) is the standard uniform MGF above. Substituting gives the result.

If \( h \) is a real-valued function on \( [a, b] \), then \( \E[h(X)] \) is the average value of \( h \) on \( [a, b] \), as defined in calculus:

If \( h: [a, b] \to \R \) is integrable, then \[ \E[h(X)] = \frac{1}{b - a} \int_a^b h(x) \, dx \]

Details:

This follows from the change of variables formula for expected value: \( \E[h(X)] = \int_a^b h(x) f(x) \, dx \).
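This identity is the basis for simple Monte Carlo integration: the sample mean of \( h(X_i) \) over uniform samples estimates the average value of \( h \). A sketch (the function and interval are arbitrary choices):

```python
import math
import random

a, b = 0.0, math.pi
N = 100_000
# Monte Carlo estimate of the average value of sin on [0, pi]
estimate = sum(math.sin(random.uniform(a, b)) for _ in range(N)) / N
print(estimate, 2 / math.pi)  # true average value is 2/pi, about 0.6366
```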

The entropy of the uniform distribution on an interval depends only on the length of the interval.

The entropy of \( X \) is \( H(X) = \ln(b - a) \).

Details:

\[ H(X) = \E\{-\ln[f(X)]\} = \int_a^b -\ln\left(\frac{1}{b - a}\right) \frac{1}{b - a} \, dx = -\ln\left(\frac{1}{b - a}\right) = \ln(b - a) \]

Related Distributions

Since the uniform distribution is a location-scale family, it is trivially closed under location-scale transformations.

If \( X \) has the uniform distribution with location parameter \( a \) and scale parameter \( w \), and if \( c \in \R \) and \( d \in (0, \infty) \), then \( Y = c + d X \) has the uniform distribution with location parameter \( c + d a \) and scale parameter \( d w \).

Details:

From the definition, we can take \( X = a + w U \) where \( U \) has the standard uniform distribution. Hence \( Y = c + d X = (c + d a) + (d w) U \).

As we saw above, the standard uniform distribution is a basic tool in the random quantile method of simulation. Uniform distributions on intervals are also basic in the rejection method of simulation. We sketch the method in the next paragraph; see the section on general uniform distributions for more theory.

Suppose that \( h \) is a probability density function for a continuous distribution with values in a bounded interval \( (a, b) \subseteq \R \). Suppose also that \( h \) is bounded, so that there exists \( c \gt 0 \) such that \( h(x) \le c \) for all \( x \in (a, b) \). Let \( \bs{X} = (X_1, X_2, \ldots) \) be a sequence of independent variables, each uniformly distributed on \( (a, b) \), and let \( \bs{Y} = (Y_1, Y_2, \ldots) \) be a sequence of independent variables, each uniformly distributed on \( (0, c) \). Finally, assume that \( \bs{X} \) and \( \bs{Y} \) are independent. Then \( ((X_1, Y_1), (X_2, Y_2), \ldots) \) is a sequence of independent variables, each uniformly distributed on \( (a, b) \times (0, c) \). Let \( N = \min\{n \in \N_+: 0 \lt Y_n \lt h(X_n)\} \). Then \( (X_N, Y_N) \) is uniformly distributed on \( R = \{(x, y) \in (a, b) \times (0, c): y \lt h(x)\} \) (the region under the graph of \( h \)), and therefore \( X_N \) has probability density function \( h \). In words, we generate uniform points in the rectangular region \( (a, b) \times (0, c) \) until we get a point under the graph of \( h \). The \( x \)-coordinate of that point is our simulated value. The rejection method can also be used to approximately simulate random variables when the region under the density function is unbounded, by truncating to a bounded region.
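A minimal sketch of the method (the target density, a beta(2, 2) density with bound \( c = 3/2 \), is an arbitrary choice for illustration):

```python
import random

def rejection_sample(h, a, b, c):
    # Generate uniform points (X, Y) in the rectangle (a, b) x (0, c)
    # until the point falls under the graph of h; return its x-coordinate.
    while True:
        x = random.uniform(a, b)
        y = random.uniform(0, c)
        if y < h(x):
            return x

# Example: beta(2, 2) density h(x) = 6 x (1 - x) on (0, 1), bounded by c = 1.5
h = lambda x: 6 * x * (1 - x)
sample = [rejection_sample(h, 0.0, 1.0, 1.5) for _ in range(100_000)]
print(sum(sample) / len(sample))  # should be close to the mean 1/2
```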

Open the rejection method simulator. For each distribution, select a set of parameter values. Run the experiment 2000 times and observe how the rejection method works. Compare the empirical density function, mean, and standard deviation to their distributional counterparts.