ECON 103, Lecture 13: Heteroskedasticity
Maria Casanova
February 25th (version 0)
Requirements for this lecture:
This topic is not covered in depth in Stock and Watson.
If you are interested in reading more about Weighted Least
Squares, you can use the book "Introductory Econometrics: A
Modern Approach" by Jeffrey Wooldridge, although this is NOT
REQUIRED.
0. Introduction
We saw in lecture 7 that the error term of the regression model (ε) is
said to be homoskedastic if its variance is constant conditional on the
explanatory variables X, i.e.

Var(ε_i | X_i) = σ²
On the other hand, ε is said to be heteroskedastic when its variance
depends on the values of the explanatory variables, i.e.

Var(ε_i | X_i) = f(X_i) = σ²_i
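As an illustration (a hypothetical simulation, not from the lecture), drawing errors whose standard deviation is proportional to x produces exactly this pattern: the conditional variance of ε is a function of X rather than a constant.

```python
import numpy as np

# Hypothetical simulation of a heteroskedastic error term:
# the error's standard deviation is proportional to x_i, so
# Var(eps_i | X_i) = (0.5 * x_i)**2 varies across observations.
rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(1.0, 10.0, n)
eps = rng.normal(0.0, 0.5 * x)   # sd grows with x -> heteroskedastic

# The spread of the errors is visibly larger where x is large
print(eps[x < 3].std(), eps[x > 8].std())
```

A homoskedastic error would show roughly equal spread in both groups; here the second standard deviation is several times the first.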
1. Consequences of heteroskedasticity for OLS
As long as the least squares assumptions hold, the OLS estimators
are unbiased and consistent, even if the error term is
heteroskedastic.

However, under heteroskedasticity, the OLS estimator does not have
the minimum variance among all the linear, unbiased estimators of
β (i.e., it is not BLUE).

(Remember that the Gauss-Markov theorem states that
homoskedasticity is a necessary condition for OLS to be BLUE.)

In particular, if the error term is heteroskedastic, our estimates of the
variance of β̂ will be biased and inconsistent (see lecture 12B).
Because of this, our usual hypothesis testing routines are unreliable
in the presence of heteroskedasticity.
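To see this bias concretely, here is a sketch on simulated data (hand-coded formulas, not part of the lecture) comparing the classical variance estimator of β̂, which assumes constant error variance, with White's heteroskedasticity-robust estimator, which allows Var(ε_i | X_i) to vary:

```python
import numpy as np

# Sketch: classical vs White-robust (HC0) variance estimates of beta-hat
# on simulated data with errors whose sd is proportional to x.
rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(1.0, 10.0, n)
X = np.column_stack([np.ones(n), x])           # design matrix with intercept
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.5 * x)   # heteroskedastic errors

beta = np.linalg.lstsq(X, y, rcond=None)[0]    # OLS estimates
e = y - X @ beta                               # residuals
XtX_inv = np.linalg.inv(X.T @ X)

# Classical estimator: assumes Var(eps_i | X_i) = sigma^2 for all i
V_classic = (e @ e / (n - 2)) * XtX_inv
# White (HC0) estimator: allows Var(eps_i | X_i) to depend on X_i
V_white = XtX_inv @ (X.T * e**2) @ X @ XtX_inv

print(np.sqrt(np.diag(V_classic)))   # classical standard errors
print(np.sqrt(np.diag(V_white)))     # robust standard errors
```

Because the error variance grows with x here, the classical formula understates the variance of the slope estimator, so tests based on it over-reject.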
1. Consequences of heteroskedasticity for OLS
Intuition: Why is OLS inefficient when ε is heteroskedastic?

We obtain the OLS estimator by minimizing the following expression:

min_β̂  Σ_i ε̂²_i

Notice that we weight each ε̂²_i equally, regardless of the size of its
variance.

Ideally, we would like to give more weight to observations with lower
associated variances, as this would enable us to estimate the population
regression line more accurately.

We will want to use an estimator that does exactly that. It is called
Weighted Least Squares (WLS).
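A minimal sketch of the WLS idea on simulated data (the variance function Var(ε_i | X_i) ∝ x_i² is an assumption chosen for illustration, not something derived in the lecture): dividing each observation by x_i makes the transformed error homoskedastic, so OLS on the transformed data weights low-variance observations more heavily.

```python
import numpy as np

# Simulated data where the error sd is proportional to x
rng = np.random.default_rng(2)
n = 2000
x = rng.uniform(1.0, 10.0, n)
X = np.column_stack([np.ones(n), x])
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.5 * x)

# OLS on the raw data: unbiased but inefficient
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# WLS: divide each row by x_i, i.e. minimize sum_i (eps_i / x_i)**2,
# so low-variance observations get more weight in the fit
w = 1.0 / x
beta_wls = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]

print(beta_ols)   # near the true (2, 3)
print(beta_wls)   # also near (2, 3), with smaller sampling variance
```

Both estimators are consistent for (2, 3); the gain from WLS is efficiency, not unbiasedness, which matches the discussion above.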
1. Consequences of heteroskedasticity for OLS
Figure: Homoskedastic vs heteroskedastic error term
[Scatter plots not reproduced; only axis tick values survived extraction.]
This note was uploaded on 03/15/2010 for the course ECON 103 taught by Professor Sandra Black during the Winter '07 term at UCLA.