10.10.09.
EC220 New version of a subsection of Section R.8
Central limit theorem
If a random variable X has a normal distribution, its sample mean X̄ will also have a normal distribution.
This fact is useful for the construction of t statistics and confidence intervals if we are employing X̄ as an estimator of the population mean. However, what happens if we are not able to assume that X is normally distributed?
The standard response is to make use of a central limit theorem. Loosely speaking (we will make a more rigorous statement below), a central limit theorem states that the distribution of X̄ will approximate a normal distribution as the sample size becomes large, even when the distribution of X is not normal.
There are a number of central limit theorems, differing only in the assumptions that they make in order to obtain this result. Here we shall be content with using the simplest one, the Lindeberg–Levy central limit theorem. It states that, provided that the X_i in the sample are all drawn independently from the same distribution (the distribution of X), and provided that this distribution has finite population mean and variance, the distribution of X̄ will converge on a normal distribution. This means that our t statistics and confidence intervals will be approximately valid after all, provided that the sample size is large enough.
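In symbols, the theorem can be sketched as follows (this formal statement is supplied here for reference; μ and σ² denote the population mean and variance of X, which the theorem requires to be finite):

```latex
% Lindeberg–Levy CLT (sketch): if X_1, \dots, X_n are drawn independently
% from the same distribution with E(X_i) = \mu and \operatorname{var}(X_i) = \sigma^2 < \infty,
\sqrt{n}\,\frac{\bar{X} - \mu}{\sigma} \;\xrightarrow{\ d\ }\; N(0,\, 1)
% Equivalently, for large n, \bar{X} is approximately N(\mu,\, \sigma^2/n).
```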
We will start by looking at two examples. Figure R.14 shows the distribution of X̄ for the case where X has a uniform distribution with range 0 to 1, for 10,000,000 samples. A uniform distribution is one in which all values over the range in question are equally likely.
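A simulation in the spirit of Figure R.14 can be run in a few lines. The sketch below is not from the text: the sample size n = 25 is an assumption, and it uses 100,000 samples rather than the figure's 10,000,000, to keep the run fast. A uniform [0, 1] variable has mean 1/2 and variance 1/12, so the theorem predicts that the sample means are approximately N(1/2, 1/(12n)).

```python
import numpy as np

rng = np.random.default_rng(0)

n = 25                # observations per sample (assumed; not fixed by the text)
n_samples = 100_000   # fewer than the figure's 10,000,000, for speed

# Draw n_samples samples of size n from the uniform distribution on [0, 1]
# and compute the sample mean of each sample.
sample_means = rng.uniform(0.0, 1.0, size=(n_samples, n)).mean(axis=1)

# The CLT predicts these means are approximately N(1/2, 1/(12 * n)).
print(sample_means.mean())   # close to 0.5
print(sample_means.std())    # close to (1 / (12 * n)) ** 0.5, about 0.0577
```

Plotting a histogram of `sample_means` against the predicted normal density reproduces the bell shape of Figure R.14, even though the underlying distribution of X is flat.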