\(\newcommand{\R}{\mathbb{R}}\)
\(\newcommand{\N}{\mathbb{N}}\)
\(\newcommand{\Z}{\mathbb{Z}}\)
\(\newcommand{\E}{\mathbb{E}}\)
\(\newcommand{\P}{\mathbb{P}}\)
\(\newcommand{\var}{\text{var}}\)
\(\newcommand{\sd}{\text{sd}}\)
\(\newcommand{\cov}{\text{cov}}\)
\(\newcommand{\cor}{\text{cor}}\)
\(\newcommand{\bias}{\text{bias}}\)
\(\newcommand{\mse}{\text{mse}}\)
\(\newcommand{\eff}{\text{eff}}\)
\(\newcommand{\bs}{\boldsymbol}\)

As usual, our starting point is a random experiment with an underlying sample space and a probability measure \(\P\). In the basic statistical model, we have an observable random variable \(\bs{X}\) taking values in a set \(S\). Recall that in general, this variable can have quite a complicated structure. For example, if the experiment is to sample \(n\) objects from a population and record various measurements of interest, then the data vector has the form \[ \bs{X} = (X_1, X_2, \ldots, X_n) \] where \(X_i\) is the vector of measurements for the \(i\)th object. The most important special case is when \((X_1, X_2, \ldots, X_n)\) are independent and identically distributed (IID). In this case \(\bs{X}\) is a random sample of size \(n\) from the distribution of an underlying measurement variable \(X\).

Recall also that a statistic is an observable function of the outcome variable of the random experiment: \(\bs{U} = \bs{u}(\bs{X})\) where \( \bs{u} \) is a known function from \( S \) into another set \( T \). Thus, a statistic is simply a random variable derived from the observation variable \(\bs{X}\), with the assumption that \(\bs{U}\) is also observable. As the notation indicates, \(\bs{U}\) is typically also vector-valued. Note that the original data vector \(\bs{X}\) is itself a statistic, but usually we are interested in statistics derived from \(\bs{X}\). A statistic \(\bs{U}\) may be computed to answer an inferential question. In this context, if the dimension of \(\bs{U}\) (as a vector) is smaller than the dimension of \(\bs{X}\) (as is usually the case), then we have achieved data reduction. Ideally, we would like to achieve significant data reduction with no loss of information about the inferential question at hand.

In the technical sense, a parameter \(\bs{\theta}\) is a function of the *distribution* of \(\bs{X}\), taking values in a parameter space \(T\). Typically, the distribution of \(\bs{X}\) will have \(k \in \N_+\) real parameters of interest, so that \(\bs{\theta}\) has the form \(\bs{\theta} = (\theta_1, \theta_2, \ldots, \theta_k)\) and thus \(T \subseteq \R^k\). In many cases, one or more of the parameters are unknown, and must be estimated from the data variable \(\bs{X}\). This is one of the most important and basic of all statistical problems, and is the subject of this chapter. If \( \bs{U} \) is a statistic, then the distribution of \( \bs{U} \) will depend on the parameters of \( \bs{X} \), and thus so will distributional constructs such as means, variances, covariances, probability density functions and so forth. We usually suppress this dependence notationally to keep our mathematical expressions from becoming too unwieldy, but it's very important to realize that the underlying dependence is present. Remember that the critical idea is that by observing a value \( \bs{u} \) of a statistic \( \bs{U} \) we (hopefully) gain information about the unknown parameters.

Suppose now that we have an unknown real parameter \(\theta\) taking values in a parameter space \(T \subseteq \R\). A real-valued statistic \(U = u(\bs{X})\) that is used to estimate \(\theta\) is called, appropriately enough, an estimator of \(\theta\). Thus, the estimator is a random variable and hence has a distribution, a mean, a variance, and so on (all of which, as noted above, will generally depend on \( \theta \)). When we actually run the experiment and observe the data \(\bs{x}\), the observed value \(u = u(\bs{x})\) (a single number) is the estimate of the parameter \(\theta\). The following definitions are basic.

Suppose that \( U \) is a statistic used as an estimator of a parameter \( \theta \) with values in \( T \subseteq \R \). For \( \theta \in T \),

- \( U - \theta \) is the error.
- \(\bias(U) = \E(U - \theta) = \E(U) - \theta \) is the bias of \( U \).
- \(\mse(U) = \E\left[(U - \theta)^2\right] \) is the mean square error of \( U \).

Thus the error is the difference between the estimator and the parameter being estimated, so of course the error is a random variable. The bias of \( U \) is simply the expected error, and the mean square error (the name says it all) is the expected square of the error. Note that bias and mean square error are functions of \( \theta \in T \). The following definitions are a natural complement to the definition of bias.

Suppose again that \( U \) is a statistic used as an estimator of a parameter \( \theta \) with values in \( T \subseteq \R \).

- \(U\) is unbiased if \(\bias(U) = 0\), or equivalently \(\E(U) = \theta\), for all \(\theta \in T\).
- \(U\) is negatively biased if \(\bias(U) \le 0\), or equivalently \(\E(U) \le \theta\), for all \(\theta \in T\).
- \(U\) is positively biased if \(\bias(U) \ge 0\), or equivalently \(\E(U) \ge \theta\), for all \(\theta \in T\).

Thus, for an unbiased estimator, the expected value of the estimator is the parameter being estimated, clearly a desirable property. On the other hand, a positively biased estimator overestimates the parameter, on average, while a negatively biased estimator underestimates the parameter on average. Our definitions of negative and positive bias are *weak* in the sense that the weak inequalities \(\le\) and \(\ge\) are used. There are corresponding strong definitions, of course, using the strong inequalities \(\lt\) and \(\gt\). Note, however, that none of these definitions may apply. For example, it might be the case that \(\bias(U) \lt 0\) for some \(\theta \in T\), \(\bias(U) = 0\) for other \(\theta \in T\), and \(\bias(U) \gt 0\) for yet other \(\theta \in T\).

\(\mse(U) = \var(U) + \bias^2(U)\)

This follows from basic properties of expected value and variance: \[ \E[(U - \theta)^2] = \var(U - \theta) + [\E(U - \theta)]^2 = \var(U) + \bias^2(U) \]

In particular, if the estimator is unbiased, then the mean square error of \(U\) is simply the variance of \(U\).
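The decomposition above is easy to check numerically. The following sketch (assuming NumPy is available; the shrunken-mean estimator and the normal sampling model are hypothetical illustrations, not from the text) uses a deliberately biased estimator of a normal mean and confirms that the empirical mean square error equals the empirical variance plus the squared empirical bias.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 100_000

# A deliberately biased (shrunken) estimator of theta: divide the sum by n + 1
# instead of n, so E(U) = n * theta / (n + 1).
samples = rng.normal(loc=theta, scale=1.0, size=(reps, n))
U = samples.sum(axis=1) / (n + 1)

mse = np.mean((U - theta) ** 2)   # empirical mean square error
var = np.var(U)                   # empirical variance (ddof=0)
bias = np.mean(U) - theta         # empirical bias, about -theta / (n + 1)

# mse = var + bias^2 holds exactly for these empirical moments,
# since the decomposition is an algebraic identity.
print(mse, var + bias ** 2)
```

Note that the identity holds exactly for the empirical moments (up to floating-point error), not just in the limit, because it is algebraic rather than probabilistic.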

Ideally, we would like to have unbiased estimators with small mean square error. However, this is not always possible, and the result above shows the delicate relationship between bias and mean square error. In the next section we will see an example with two estimators of a parameter that are multiples of each other; one is unbiased, but the other has smaller mean square error. However, if we have two unbiased estimators of \(\theta\), we naturally prefer the one with the smaller variance (mean square error).

Suppose that \( U \) and \( V \) are unbiased estimators of a parameter \( \theta \) with values in \( T \subseteq \R \).

- \( U \) is more efficient than \( V \) if \( \var(U) \le \var(V) \).
- The relative efficiency of \(U\) with respect to \(V\) is \[ \eff(U, V) = \frac{\var(V)}{\var(U)} \]

Suppose again that we have a real parameter \( \theta \) with possible values in a parameter space \( T \). Often in a statistical experiment, we observe an infinite sequence of random variables over time, \(\bs{X} = (X_1, X_2, \ldots)\), so that at time \( n \) we have observed \( \bs{X}_n = (X_1, X_2, \ldots, X_n) \). In this setting we often have a general formula that defines an estimator of \(\theta\) for each sample size \(n\). Technically, this gives a *sequence* of real-valued estimators of \(\theta\): \( \bs{U} = (U_1, U_2, \ldots) \) where \( U_n \) is a real-valued function of \( \bs{X}_n \) for each \( n \in \N_+ \). In this case, we can discuss the asymptotic properties of the estimators as \(n \to \infty\). Most of the definitions are natural generalizations of the ones above.

The sequence of estimators \(\bs{U} = (U_1, U_2, \ldots)\) is asymptotically unbiased for \( \theta \) if \( \bias(U_n) \to 0\) as \(n \to \infty\) for every \(\theta \in T \), or equivalently, \(\E(U_n) \to \theta\) as \(n \to \infty\) for every \(\theta \in T\).

Suppose that \(\bs{U} = (U_1, U_2, \ldots)\) and \(\bs{V} = (V_1, V_2, \ldots)\) are two sequences of estimators that are asymptotically unbiased for \(\theta\). The asymptotic relative efficiency of \(\bs{U}\) to \(\bs{V}\) is \[ \lim_{n \to \infty} \eff(U_n, V_n) = \lim_{n \to \infty} \frac{\var(V_n)}{\var(U_n)} \] assuming that the limit exists.

Naturally, we expect our estimators to improve, as the sample size \(n\) increases, and in some sense to converge to the parameter as \( n \to \infty \). This general idea is known as *consistency*. Once again, for the remainder of this discussion, we assume that \(\bs{U} = (U_1, U_2, \ldots)\) is a sequence of estimators for a real-valued parameter \( \theta \), with values in the parameter space \( T \).

Consistency

- \( \bs{U} \) is consistent if \(U_n \to \theta\) as \(n \to \infty\) in probability for each \(\theta \in T\). That is, \( \P\left(\left|U_n - \theta\right| \gt \epsilon\right) \to 0\) as \(n \to \infty\) for every \(\epsilon \gt 0\) and \(\theta \in T\).
- \( \bs{U} \) is mean-square consistent if \( \mse(U_n) = \E[(U_n - \theta)^2] \to 0 \) as \( n \to \infty \) for \( \theta \in T \).

Here is the connection between the two definitions:

If \( \bs{U} \) is mean-square consistent then \(\bs{U}\) is consistent.

From Markov's inequality, \[ \P\left(\left|U_n - \theta\right| \gt \epsilon\right) = \P\left[(U_n - \theta)^2 \gt \epsilon^2\right] \le \frac{\E\left[(U_n - \theta)^2\right]}{\epsilon^2} \to 0 \text{ as } n \to \infty \]

That mean-square consistency implies simple consistency is simply a statistical version of the theorem that states that mean-square convergence implies convergence in probability. Here is another nice consequence of mean-square consistency.

If \( \bs{U} \) is mean-square consistent then \( \bs{U} \) is asymptotically unbiased.

This result follows from the fact that mean absolute error is smaller than root mean square error, which in turn is a special case of a general result for norms. See the advanced section on vector spaces for more details. So, using this result and the ordinary triangle inequality for expected value we have \[ |\E(U_n - \theta)| \le \E(|U_n - \theta|) \le \sqrt{\E\left[(U_n - \theta)^2\right]} \to 0 \text{ as } n \to \infty \] Hence \( \E(U_n) \to \theta \) as \( n \to \infty \) for \( \theta \in T \).

In the next several subsections, we will review several basic estimation problems that were studied in the chapter on Random Samples.

Suppose that \( X \) is a basic real-valued random variable for an experiment, with mean \( \mu \in \R\) and variance \( \sigma^2 \in (0, \infty) \). We sample from the distribution of \( X \) to produce a sequence \(\bs{X} = (X_1, X_2, \ldots)\) of independent variables, each with the distribution of \( X \). For each \( n \in \N_+ \), \( \bs{X}_n = (X_1, X_2, \ldots, X_n) \) is a random sample of size \(n\) from the distribution of \(X\).

This subsection is a review of some results obtained in the section on the Law of Large Numbers in the chapter on Random Samples. Recall that a natural estimator of the distribution mean \(\mu\) is the sample mean, defined by \[ M_n = \frac{1}{n} \sum_{i=1}^n X_i \]

The sample mean \(M_n\) satisfies the following properties:

- \(\E(M_n) = \mu\) so \(M_n\) is an unbiased estimator of \(\mu\).
- \(\var(M_n) = \sigma^2 / n\) so \(M_n\) is a consistent estimator of \(\mu\).

The consistency of the sample mean \(M_n\) as an estimator of the distribution mean \(\mu\) is simply the weak law of large numbers. Moreover, there are a number of important special cases of the results above. See the section on Sample Mean for the details.
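The two properties above can be illustrated by simulation. The sketch below (a minimal example assuming NumPy; the normal sampling distribution and the parameter values are illustrative choices) generates many replications of the sample mean at two sample sizes and checks that its empirical mean stays at \(\mu\) while its empirical variance shrinks like \(\sigma^2 / n\).

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 3.0, 2.0     # distribution mean and standard deviation (illustrative)
reps = 200_000           # number of replications of the experiment

results = {}
for n in (10, 100):
    # Each row is one sample of size n; M is the sample mean of each row.
    M = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    results[n] = (M.mean(), M.var())

# Empirical mean of M stays near mu (unbiasedness); empirical variance
# is near sigma^2 / n, shrinking with n (consistency).
for n, (mean, var) in results.items():
    print(n, mean, var)
```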

Special cases of the sample mean

- Suppose that \(X = \bs{1}_A\), the indicator variable for an event \(A\) that has probability \(\P(A)\). Then the sample mean for a random sample of size \( n \) from the distribution of \( X \) is the relative frequency or empirical probability of \(A\), denoted \(P_n(A)\). Hence \(P_n(A)\) is an unbiased and consistent estimator of \(\P(A)\).
- Suppose that \(F\) denotes the distribution function of a real-valued random variable \(Y\). Then for fixed \(y \in \R\), the empirical distribution function \(F_n(y)\) is simply the sample mean for a random sample of size \(n\) from the distribution of the indicator variable \(X = \bs{1}(Y \le y)\). Hence \(F_n(y)\) is an unbiased and consistent estimator of \(F(y)\).
- Suppose that \(U\) is a random variable with a discrete distribution on a countable set \(S\) and \(f\) denotes the probability density function of \(U\). Then for fixed \(u \in S\), the empirical probability density function \(f_n(u)\) is simply the sample mean for a random sample of size \(n\) from the distribution of the indicator variable \(X = \bs{1}(U = u)\). Hence \(f_n(u)\) is an unbiased and consistent estimator of \(f(u)\).
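The second special case above is easy to demonstrate numerically: the empirical distribution function at a point is just the sample mean of indicator variables. The sketch below (assuming NumPy; the standard normal distribution and the point \(y = 1\) are illustrative choices) compares \(F_n(y)\) with the known value of the standard normal distribution function at 1, which is about 0.8413.

```python
import numpy as np

rng = np.random.default_rng(2)
n, y = 10_000, 1.0

sample = rng.standard_normal(n)

# Empirical distribution function at y: the sample mean of the
# indicator variables 1(Y_i <= y).
F_n = np.mean(sample <= y)

# F_n should be close to the true standard normal cdf at 1 (about 0.8413),
# and is an unbiased, consistent estimator of it.
print(F_n)
```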

This subsection is a review of some results obtained in the section on the Sample Variance in the chapter on Random Samples. We also assume that the fourth central moment \(\sigma_4 = \E\left[(X - \mu)^4\right]\) is finite. Recall that \(\sigma_4 / \sigma^4\) is the kurtosis of \(X\). Recall first that if \(\mu\) is known (almost always an artificial assumption), then a natural estimator of \(\sigma^2\) is a special version of the sample variance, defined by \[ W_n^2 = \frac{1}{n} \sum_{i=1}^n (X_i - \mu)^2 \]

\(W_n^2\) is the sample mean of the random sample \(\left((X_1 - \mu)^2, (X_2 - \mu)^2, \ldots, (X_n - \mu)^2\right)\) and satisfies the following properties

- \(\E\left(W_n^2\right) = \sigma^2\) so \(W_n^2\) is an unbiased estimator of \(\sigma^2\).
- \(\var\left(W_n^2\right) = \frac{1}{n}(\sigma_4 - \sigma^4)\) so \(W_n^2\) is a consistent estimator of \(\sigma^2\).

If \(\mu\) is unknown (the more reasonable assumption), then a natural estimator of the distribution variance is the standard version of the sample variance, defined by \[ S_n^2 = \frac{1}{n - 1} \sum_{i=1}^n (X_i - M_n)^2 \]

The sample variance \(S_n^2\) satisfies the following properties:

- \(\E\left(S_n^2\right) = \sigma^2\) so \(S_n^2\) is an unbiased estimator of \(\sigma^2\).
- \(\var\left(S_n^2\right) = \frac{1}{n} \left(\sigma_4 - \frac{n - 3}{n - 1} \sigma^4 \right)\) so \(S_n^2\) is a consistent estimator of \(\sigma^2\).

Naturally, we would like to compare the estimators \( W_n^2 \) and \( S_n^2 \), since both are unbiased estimators of \( \sigma^2 \). But again remember that \( W_n^2 \) only makes sense as an estimator if \( \mu \) is known.

Comparison of \(W_n^2\) and \(S_n^2\) as estimators of \(\sigma^2\):

- \(\var\left(W_n^2\right) \lt \var(S_n^2)\).
- The asymptotic relative efficiency of \(S_n^2\) to \(W_n^2\) is 1.

So by the first part, \(W_n^2\) is better than \(S_n^2\), assuming that \(\mu\) is known so that we can actually *use* \(W_n^2\). This is perhaps not surprising, but by the second part, \(S_n^2\) works just about as well as \(W_n^2\) for a large sample size \( n \). Of course, the sample standard deviation \(S_n\) is a natural estimator of the distribution standard deviation \(\sigma\). Unfortunately, this estimator is biased. Here is a more general result:
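The comparison of \(W_n^2\) and \(S_n^2\) can be sketched by simulation. In the example below (assuming NumPy; normal data with a known mean is an illustrative choice, for which \(\sigma_4 = 3 \sigma^4\)) both estimators average to \(\sigma^2\), but the version that uses the known mean has the smaller empirical variance.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, n, reps = 5.0, 2.0, 20, 100_000

X = rng.normal(mu, sigma, size=(reps, n))
W2 = np.mean((X - mu) ** 2, axis=1)   # special version: uses the known mean mu
S2 = X.var(axis=1, ddof=1)            # standard version: uses the sample mean

# Both should average near sigma^2 = 4 (unbiasedness), but W2 should
# have the smaller variance across replications.
print(W2.mean(), S2.mean())
print(W2.var(), S2.var())
```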

Suppose that \( \theta \) is a parameter with possible values in \(T \subseteq (0, \infty) \) (with at least two points) and that \( U \) is a statistic with values in \( T \). If \( U^2 \) is an unbiased estimator of \( \theta^2 \) then \( U \) is a negatively biased estimator of \( \theta \).

Note that \[ \var(U) = \E(U^2) - [\E(U)]^2 = \theta^2 - [\E(U)]^2, \quad \theta \in T \] Since \( T \) has at least two points, \( U \) cannot be deterministic so \( \var(U) \gt 0 \). It follows that \( [\E(U)]^2 \lt \theta^2 \) so \( \E(U) \lt \theta \) for \( \theta \in T \).

Thus, we should not be too obsessed with the unbiased property. For most sampling distributions, there will be no statistic \(U\) with the property that \(U\) is an unbiased estimator of \(\sigma\) and \(U^2\) is an unbiased estimator of \(\sigma^2\).
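The negative bias of the sample standard deviation is easy to see in simulation. The sketch below (assuming NumPy; the normal distribution with \(\sigma = 2\) and sample size 10 are illustrative choices) shows that the average of \(S_n\) over many replications falls noticeably below \(\sigma\), even though \(S_n^2\) is unbiased for \(\sigma^2\).

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, n, reps = 2.0, 10, 200_000

X = rng.normal(0.0, sigma, size=(reps, n))
S = X.std(axis=1, ddof=1)   # sample standard deviation (ddof=1 gives unbiased S^2)

# Since S^2 is unbiased for sigma^2 and S is not deterministic,
# the result above forces E(S) < sigma.
print(S.mean())   # noticeably below sigma = 2
```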

In this subsection we review some of the results obtained in the section on Correlation and Regression in the chapter on Random Samples.

Suppose that \( X \) and \( Y \) are real-valued random variables for an experiment, so that \( (X, Y) \) has a bivariate distribution in \( \R^2 \). Let \( \mu = \E(X)\) and \( \sigma^2 = \var(X) \) denote the mean and variance of \( X \), and let \( \nu = \E(Y) \) and \( \tau^2 = \var(Y) \) denote the mean and variance of \( Y \). For the bivariate parameters, let \( \delta = \cov(X, Y) \) denote the distribution covariance and \( \rho = \cor(X, Y) \) the distribution correlation. We need one higher-order moment as well: let \( \delta_2 = \E\left[(X - \mu)^2 (Y - \nu)^2\right] \), and as usual, we assume that all of the parameters exist. So the general parameter spaces are \( \mu, \, \nu \in \R \), \( \sigma^2, \, \tau^2 \in (0, \infty) \), \( \delta \in \R \), and \( \rho \in [-1, 1] \). Suppose now that we sample from the distribution of \( (X, Y) \) to generate a sequence of independent variables \(\left((X_1, Y_1), (X_2, Y_2), \ldots\right)\), each with the distribution of \( (X, Y) \). As usual, we will let \(\bs{X}_n = (X_1, X_2, \ldots, X_n)\) and \(\bs{Y}_n = (Y_1, Y_2, \ldots, Y_n)\); these are random samples of size \(n\) from the distributions of \(X\) and \(Y\), respectively.

If \(\mu\) and \(\nu\) are known (almost always an artificial assumption), then a natural estimator of the distribution covariance \(\delta\) is a special version of the sample covariance, defined by \[ W_n = W\left(\bs{X}_n, \bs{Y}_n\right) = \frac{1}{n} \sum_{i=1}^n (X_i - \mu)(Y_i - \nu) \]

\(W_n\) is the sample mean of the random sample \(\left((X_1 - \mu)(Y_1 - \nu), (X_2 - \mu)(Y_2 - \nu), \ldots, (X_n - \mu)(Y_n - \nu)\right)\) and satisfies the following properties:

- \(\E\left(W_n\right) = \delta\) so \(W_n\) is an unbiased estimator of \(\delta\).
- \( \var\left(W_n\right) = \frac{1}{n}(\delta_2 - \delta^2) \) so \(W_n\) is a consistent estimator of \(\delta\).

If \(\mu\) and \(\nu\) are unknown (usually the more reasonable assumption), then a natural estimator of the distribution covariance \(\delta\) is the standard version of the sample covariance, defined by \[ S_n = S\left(\bs{X}_n, \bs{Y}_n\right) = \frac{1}{n - 1} \sum_{i=1}^n [X_i - M(\bs{X}_n)][Y_i - M(\bs{Y}_n)]\]

The sample covariance \(S_n\) satisfies the following properties:

- \(\E\left(S_n\right) = \delta\) so \(S_n\) is an unbiased estimator of \(\delta\).
- \( \var\left(S_n\right) = \frac{1}{n}\left(\delta_2 + \frac{1}{n - 1} \sigma^2 \tau^2 - \frac{n - 2}{n - 1} \delta^2\right) \) so \(S_n\) is a consistent estimator of \(\delta\).

Once again, since we have two competing estimators of \( \delta \), we would like to compare them.

Comparison of \(W_n\) and \(S_n\) as estimators of \(\delta\):

- \(\var\left(W_n\right) \lt \var\left(S_n\right)\) for every \( n \in \N_+ \) .
- The asymptotic relative efficiency of \(S_n\) to \(W_n\) is 1.

Thus, \(W_n\) is better than \(S_n\), assuming that \(\mu\) and \( \nu \) are known so that we can actually *use* \(W_n\). But for large sample sizes, \(S_n\) works just about as well as \(W_n\).

A natural estimator of the distribution correlation \(\rho\) is the sample correlation
\[ R_n = R(\bs{X}_n, \bs{Y}_n) = \frac{S(\bs{X}_n, \bs{Y}_n)}{S(\bs{X}_n) \, S(\bs{Y}_n)} \]
Note that this statistic is a nonlinear function of the sample covariance and the two sample standard deviations. For most distributions of \((X, Y)\), we have no hope of computing the bias or mean square error of this estimator. If we *could* compute the expected value, we would probably find that the estimator is biased. On the other hand, even though we cannot compute the mean square error, a simple application of the law of large numbers shows that \(R_n \to \rho\) as \(n \to \infty\) with probability 1. Thus, the estimator is at least consistent.

Recall that the distribution regression line, with \(X\) as the predictor variable and \(Y\) as the response variable, is \(y = a + b \, x\) where \[ a = \E(Y) - \frac{\cov(X, Y)}{\var(X)} \E(X), \quad b = \frac{\cov(X, Y)}{\var(X)} \] On the other hand, the sample regression line, based on the sample of size \( n \), is \(y = A_n + B_n x\) where \[ A_n = M(\bs{Y}_n) - \frac{S(\bs{X}_n, \bs{Y}_n)}{S^2(\bs{X}_n)} M(\bs{X}_n), \quad B_n = \frac{S(\bs{X}_n, \bs{Y}_n)}{S^2(\bs{X}_n)} \] Of course, the statistics \(A_n\) and \(B_n\) are natural estimators of the parameters \(a\) and \(b\), respectively, and in a sense are derived from our previous estimators of the distribution mean, variance, and covariance. Once again, for most distributions of \((X, Y)\), it would be difficult to compute the bias and mean square errors of these estimators. But applications of the law of large numbers show that with probability 1, \( A_n \to a \) and \( B_n \to b \) as \( n \to \infty \), so at least the estimators are consistent.
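The consistency of \(R_n\), \(A_n\), and \(B_n\) can be illustrated with a simulated bivariate sample. The sketch below (assuming NumPy; the linear model \(Y = 1 + 2X + \text{noise}\) with standard normal \(X\) and noise is a hypothetical example, not from the text) computes the three statistics from a large sample; the true values are \(a = 1\), \(b = 2\), and \(\rho = 2/\sqrt{5} \approx 0.894\).

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000

# Hypothetical bivariate model: Y = 1 + 2 X + noise, with X and the
# noise independent standard normals.  Then cov(X, Y) = 2, var(X) = 1,
# var(Y) = 5, so a = 1, b = 2, rho = 2 / sqrt(5).
X = rng.normal(0.0, 1.0, n)
Y = 1.0 + 2.0 * X + rng.normal(0.0, 1.0, n)

B = np.cov(X, Y, ddof=1)[0, 1] / np.var(X, ddof=1)   # slope estimator B_n
A = Y.mean() - B * X.mean()                          # intercept estimator A_n
R = np.corrcoef(X, Y)[0, 1]                          # sample correlation R_n

print(A, B, R)   # should be near 1, 2, and 0.894 respectively
```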

Let's consider a simple example that illustrates some of the ideas above. Recall that the Poisson distribution with parameter \(\lambda \in (0, \infty)\) has probability density function \(g\) given by
\[ g(x) = e^{-\lambda} \frac{\lambda^x}{x!}, \quad x \in \N \]
The Poisson distribution is often used to model the number of random points in a region of time or space, and is studied in more detail in the chapter on the Poisson process. The parameter \(\lambda\) is proportional to the size of the region of time or space; the proportionality constant is the average rate of the random points. The distribution is named for Siméon Poisson.

Suppose that \(X\) has the Poisson distribution with parameter \(\lambda\). The factorial moments are \( \E\left[X^{(n)}\right] = \lambda^n \) for \( n \in \N \). Hence

- \(\mu = \E(X) = \lambda\)
- \(\sigma^2 = \var(X) = \lambda\)
- \(\sigma_4 = \E\left[(X - \lambda)^4\right] = 3 \lambda^2 + \lambda\)
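These moment formulas can be checked against a large simulated sample. The sketch below (assuming NumPy; \(\lambda = 4\) and the sample size are illustrative choices) estimates the mean, the variance, and the fourth central moment, which should be near \(\lambda\), \(\lambda\), and \(3\lambda^2 + \lambda = 52\) respectively.

```python
import numpy as np

rng = np.random.default_rng(6)
lam, n = 4.0, 1_000_000

X = rng.poisson(lam, n)
m = X.mean()                      # should be near lam
v = X.var()                       # should also be near lam
c4 = np.mean((X - lam) ** 4)      # should be near 3 * lam**2 + lam = 52

print(m, v, c4)
```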

Suppose now that we sample from the distribution of \( X \) to produce a sequence of independent random variables \( \bs{X} = (X_1, X_2, \ldots) \), each having the Poisson distribution with unknown parameter \( \lambda \in (0, \infty) \). Again, \(\bs{X}_n = (X_1, X_2, \ldots, X_n)\) is a random sample of size \(n\) from the distribution for each \( n \in \N_+ \). From the previous exercise, \(\lambda\) is both the mean and the variance of the distribution, so that we could use either the sample mean \(M_n\) or the sample variance \(S_n^2\) as an estimator of \(\lambda\). Both are unbiased, so which is better? Naturally, we use mean square error as our criterion.

Comparison of \(M_n\) to \(S_n^2\) as estimators of \(\lambda\).

- \(\var\left(M_n\right) = \frac{\lambda}{n}\) for \( n \in \N_+ \)
- \(\var\left(S_n^2\right) = \frac{\lambda}{n} \left(1 + 2 \lambda \frac{n}{n - 1} \right)\) for \( n \in \N_+ \)
- \(\var\left(M_n\right) \lt \var\left(S_n^2\right)\) so \( M_n \) is more efficient than \( S_n^2 \) for \( n \in \N_+ \)
- The asymptotic relative efficiency of \(M_n\) to \(S_n^2\) as \( n \to \infty \) is \(1 + 2 \lambda\).

So our conclusion is that the sample mean \(M_n\) is a better estimator of the parameter \(\lambda\) than the sample variance \(S_n^2\), perhaps not surprising.
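This comparison can be sketched by simulation. In the example below (assuming NumPy; \(\lambda = 3\), \(n = 25\), and the number of replications are illustrative choices) both estimators average to \(\lambda\), but the sample mean has a much smaller variance, with the ratio of variances near the asymptotic relative efficiency \(1 + 2\lambda = 7\).

```python
import numpy as np

rng = np.random.default_rng(7)
lam, n, reps = 3.0, 25, 100_000

X = rng.poisson(lam, size=(reps, n))
M = X.mean(axis=1)               # sample mean of each replication
S2 = X.var(axis=1, ddof=1)       # sample variance of each replication

# Both should average near lam = 3 (unbiasedness); M should have the
# much smaller variance, with var(S2) / var(M) near 1 + 2 * lam = 7.
print(M.mean(), S2.mean())
print(M.var(), S2.var(), S2.var() / M.var())
```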

Run the Poisson experiment 100 times for several values of the parameter. In each case, compute the estimators \(M\) and \(S^2\). Which estimator seems to work better?

The emission of elementary particles from a sample of radioactive material in a time interval is often assumed to follow the Poisson distribution. Thus, suppose that the alpha emissions data set is a sample from a Poisson distribution. Estimate the rate parameter \(\lambda\).

- using the sample mean
- using the sample variance

- 8.367
- 8.649

In the sample mean experiment, set the sampling distribution to gamma. Increase the sample size with the scroll bar and note graphically and numerically the unbiased and consistent properties. Run the experiment 1000 times and compare the sample mean to the distribution mean.

Run the normal estimation experiment 1000 times for several values of the parameters.

- Compare the empirical bias and mean square error of \(M\) with the theoretical values.
- Compare the empirical bias and mean square error of \(S^2\) and of \(W^2\) to their theoretical values. Which estimator seems to work better?

In the matching experiment, the random variable is the number of matches. Run the simulation 1000 times and compare

- the sample mean to the distribution mean.
- the empirical density function to the probability density function.

Run the exponential experiment 1000 times and compare the sample standard deviation to the distribution standard deviation.

For Michelson's velocity of light data, compute the sample mean and sample variance.

852.4, 6242.67

For Cavendish's density of the earth data, compute the sample mean and sample variance.

5.448, 0.048817

For Short's parallax of the sun data, compute the sample mean and sample variance.

8.616, 0.561032

Consider the Cicada data.

- Compute the sample mean and sample variance of the body length variable.
- Compute the sample mean and sample variance of the body weight variable.
- Compute the sample covariance and sample correlation between the body length and body weight variables.

- 24.0, 3.92
- 0.180, 0.003512
- 0.0471, 0.4012

Consider the M&M data.

- Compute the sample mean and sample variance of the net weight variable.
- Compute the sample mean and sample variance of the total number of candies.
- Compute the sample covariance and sample correlation between the number of candies and the net weight.

- 57.1, 5.68
- 49.215, 2.3163
- 2.878, 0.794

Consider the Pearson data.

- Compute the sample mean and sample variance of the height of the father.
- Compute the sample mean and sample variance of the height of the son.
- Compute the sample covariance and sample correlation between the height of the father and height of the son.

- 67.69, 7.5396
- 68.68, 7.9309
- 3.875, 0.501

The estimators of the mean, variance, and covariance that we have considered in this section have been natural in a sense. However, for other parameters, it is not clear how to even find a reasonable estimator in the first place. In the next several sections, we will consider the problem of constructing estimators. Then we return to the study of the mathematical properties of estimators, and consider the question of when we can know that an estimator is the best possible, given the data.