Lecture 6 – Estimators of Parameters (Statistics)
We will now discuss estimators for each of these parameters.
The process of taking a random sample is an experiment.
Any particular random sample leads to a specific
set of observations, but these observations will vary across repeated samples.
Thus, the process of
taking a random sample generates a random variable.
A statistic is an estimator
of a parameter such as the
expected value or the variance of the population. Suppose we draw a random sample from a population.
We compute the value of the statistic by using the particular values in that random sample.
But if we draw
another random sample, we can compute another value of the statistic. Notice, then, that there is a
difference between the statistic itself and the value that the statistic assumes from each random sample.
Since the values in the sample can change with repeated samples, the values that the statistic can take on
also vary. In other words, a statistic is also a random variable that can take on a number of different values
with certain probabilities. Since a statistic is also a random variable, it has a probability distribution or
PDF, an expected value, and a variance.
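A short simulation makes this concrete. The setup below is hypothetical (a population uniform on [0, 10], sample size 30, 2000 repeated samples are all assumed for illustration): computing the sample mean on each repeated sample shows that the statistic itself varies from sample to sample, and so has its own expected value and variance.

```python
import random
import statistics

random.seed(0)

# Hypothetical population for illustration: uniform on [0, 10],
# so the true expected value is 5.0.
def draw_sample(n):
    return [random.uniform(0, 10) for _ in range(n)]

# The statistic (here, the sample mean) takes a different value on each
# random sample, so it is itself a random variable.
values_of_statistic = [statistics.mean(draw_sample(30)) for _ in range(2000)]

# The statistic therefore has its own expected value and variance.
print("mean of sample means:", round(statistics.mean(values_of_statistic), 2))
print("variance of sample means:", round(statistics.variance(values_of_statistic), 2))
```

The printed mean should land near the true population mean of 5, and the variance of the sample means should be much smaller than the population variance, since averaging 30 observations dampens the sampling variation.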
Why do we calculate statistics?
What we really want to know are the values of the population parameters,
i.e., the expected value, variance, etc.
Given that it is too costly or time-consuming to find the entire
population, we use a random sample to compute the value of statistics, expecting that these values will be
good estimates of the true parameter values. But what does it mean to be a good estimate?
We will now
discuss the properties that we would like our estimators to possess in order to be good estimators.
Let me again emphasize that we must distinguish between the statistic and the value the statistic takes on.
For instance, if the average is the statistic we are looking at, then the value of the statistic could be equal to,
say, 5 for one particular sample. But the average and the value the average takes on based on one random sample are two distinct
things. We call the value that a statistic takes on the estimate.

Properties of good estimators

Let θ represent the true parameter whose value we are trying to find. Let θ̂
be an estimator we are using to estimate θ.
For finite sample sizes n, we would like the estimator to have the following properties:
- we say that θ̂ is an unbiased estimator of θ if E(θ̂) = θ (the difference E(θ̂) − θ is a measure of bias).
Thus, an unbiased estimator is one that on average
is equal to the true parameter.
- we say that θ̂ is an efficient estimator of θ if, for a given finite sample size,
Var(θ̂) ≤ Var(θ̃), where θ̃ is any other unbiased estimator of θ.
Clearly, we prefer an efficient estimator: although on average the given estimator will equal
the true value of the parameter, any given particular value of the estimator may not be
equal to the true value.
In this case, an efficient estimator is likely to be closer (since its variance is smaller).
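The two properties above can be illustrated by simulation. In the sketch below, the population, sample size, and number of repetitions are all assumed for illustration: for a normal population, both the sample mean and the sample median are unbiased estimators of the population mean, but the sample mean has the smaller variance, so it is the more efficient of the two.

```python
import random
import statistics

random.seed(1)

# Hypothetical setup: normal population with true mean 0 and sd 1.
# Both the sample mean and the sample median are unbiased estimators
# of the population mean here, but they differ in variance.
N_SAMPLES, N = 5000, 25
means, medians = [], []
for _ in range(N_SAMPLES):
    sample = [random.gauss(0, 1) for _ in range(N)]
    means.append(statistics.mean(sample))
    medians.append(statistics.median(sample))

# Unbiasedness: both estimators average out to roughly the true mean 0.
print("avg of means:  ", round(statistics.mean(means), 3))
print("avg of medians:", round(statistics.mean(medians), 3))

# Efficiency: the sample mean has the smaller variance (about sigma^2/n
# for normal data), so any one of its values is likely to fall closer
# to the true parameter.
print("var of means:  ", round(statistics.variance(means), 4))
print("var of medians:", round(statistics.variance(medians), 4))
```

Both averages should come out near 0, confirming unbiasedness, while the variance of the medians should be noticeably larger than the variance of the means, which is why the sample mean is preferred for normal populations.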