Lecture 6 – Estimators of Parameters (Statistics)
What we will now do is discuss the estimators for each of these parameters.
Estimators (statistics)
The process of taking a random sample is an experiment. Any particular random sample leads to a specific set of observations, but these observations will vary as one does repeated samples. Thus, the process of taking a random sample generates a random variable.
A statistic is an estimator of a parameter such as the expected value or the variance of the population. Suppose we draw a random sample from a population. We compute the value of the statistic by using the particular values in that random sample. But if we draw another random sample, we can compute another value of the statistic. Notice, then, that there is a difference between the statistic itself and the value that the statistic assumes from each random sample. Since the values in the sample can change with repeated samples, the values that the statistic can take on also vary. In other words, a statistic is also a random variable that can take on a number of different values with certain probabilities. Since a statistic is also a random variable, it has a probability distribution or PDF, an expected value, and a variance.
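To make the idea concrete, here is a minimal simulation sketch (the population and sample size are hypothetical, chosen only for illustration): drawing repeated samples from the same population gives a different value of the sample mean each time, which is exactly what it means for the statistic to be a random variable.

```python
import random

random.seed(1)

# Hypothetical population: 100,000 draws from an exponential distribution
# with mean 5 (the "true" parameter we would be trying to estimate).
population = [random.expovariate(1 / 5) for _ in range(100_000)]

def sample_mean(pop, n=30):
    """Draw one random sample of size n and return its mean (one point estimate)."""
    sample = random.sample(pop, n)
    return sum(sample) / n

# Repeating the sampling experiment yields different values of the statistic.
estimates = [sample_mean(population) for _ in range(5)]
print(estimates)  # five generally different values of the same statistic
```

Each entry in `estimates` is one realization of the random variable "sample mean"; the spread across entries previews the idea that the statistic has its own variance.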
Why do we calculate statistics?
What we really want to know are the values of the population parameters, i.e., the expected value, variance, etc. Given that it is too costly or time-consuming to observe the entire population, we use a random sample to compute the values of statistics, expecting that these values will be good estimates of the true parameter values. But what does it mean to be a good estimate?
We will now discuss the properties that we would like our estimators to possess in order for them to be good estimators. Let me again emphasize that we must distinguish between the statistic and the value the statistic takes on. For instance, if the average is the statistic we are looking at, then the value of the statistic could be equal to, say, 3. But the average and the value the average takes on based on one random sample are two distinct things. We call the value that a statistic takes on the point estimate.
Properties of good estimators
Let β represent the true parameter whose value we are trying to find. Let β̂ be an estimator we are using to estimate β. For finite sample sizes, we would like the estimator to have the following properties:
a) Unbiasedness – we say that β̂ is an unbiased estimator of β if E(β̂) = β. (Hence, E(β̂) − β is a measure of bias.) Thus, an unbiased estimator is one that on average is equal to the true parameter.
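The statement E(β̂) = β can be checked empirically for the sample mean: averaging the point estimates from many repeated samples should land very close to the true parameter. The population below is hypothetical (normal draws with a known mean of 10), used only to illustrate the idea.

```python
import random

random.seed(0)
true_mean = 10.0

# Hypothetical population: 50,000 normal draws with mean 10 and sd 2.
population = [random.gauss(true_mean, 2) for _ in range(50_000)]

# Average the sample mean over many repeated samples of size n = 25.
n, reps = 25, 2000
means = []
for _ in range(reps):
    sample = random.sample(population, n)
    means.append(sum(sample) / n)

avg_of_means = sum(means) / reps
bias = avg_of_means - true_mean  # empirical analogue of E(beta_hat) - beta
print(avg_of_means, bias)  # the bias estimate should be near 0
```

The average of the 2,000 point estimates approximates E(β̂); its distance from 10 approximates the bias, which for the sample mean is essentially zero.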
b) Efficiency – we say that β̂ is an efficient estimator of β if, for a given finite sample size, V(β̂) < V(β̃), where β̃ is any other unbiased estimator of β. Clearly, we prefer an efficient estimator: although on average the given estimator will equal the true value of the parameter, any particular point estimate of the estimator may not be equal to the true value.
In this case, an efficient estimator is likely to be closer to the true value than is a non-efficient estimator, since its variance about the mean is smaller.
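Efficiency can also be illustrated by simulation. For a symmetric normal population, both the sample mean and the sample median are unbiased estimators of the center, but the mean has the smaller sampling variance (this comparison is an illustrative sketch; the distributions and sample size are assumptions, not from the notes).

```python
import random
import statistics

random.seed(42)

# Two unbiased estimators of the center of a standard normal population:
# the sample mean and the sample median.
n, reps = 30, 3000
means, medians = [], []
for _ in range(reps):
    sample = [random.gauss(0, 1) for _ in range(n)]
    means.append(statistics.fmean(sample))
    medians.append(statistics.median(sample))

# Compare the sampling variances of the two estimators.
var_mean = statistics.variance(means)
var_median = statistics.variance(medians)
print(var_mean, var_median)  # the mean's variance is the smaller of the two
```

For normal data, theory gives V(mean) ≈ σ²/n while V(median) ≈ πσ²/(2n), so the simulated variance of the mean should come out roughly 1.57 times smaller, matching the definition of relative efficiency.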
This note was uploaded on 03/03/2011 for the course ECO 230 taught by Professor Yongjinpark during the Spring '11 term at Conn College.