ECON 103, Lecture 13: Heteroskedasticity
Maria Casanova
May 19th (version 0)
Requirements for this lecture: This topic is not covered in depth in Stock and Watson. If you are interested in reading more about Weighted Least Squares, you can use the book "Introductory Econometrics: A Modern Approach" by Jeffrey Wooldridge, although this is NOT REQUIRED.
0. Introduction

We saw in lecture 7 that the error term of the regression model ($\varepsilon$) is said to be homoskedastic if its variance is constant conditional on the explanatory variables $X$, i.e.

$$Var(\varepsilon_i \mid X_i) = \sigma^2$$

On the other hand, $\varepsilon$ is said to be heteroskedastic when its variance depends on the value of the explanatory variables, i.e.

$$Var(\varepsilon_i \mid X_i) = f(X_i) = \sigma^2_i$$
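The following is not part of the lecture slides; it is a minimal simulation sketch of the two definitions, where the variance function $f(X_i) = X_i$ and all parameter values are assumptions chosen purely for illustration:

```python
import numpy as np

# Minimal simulation of the two cases (illustrative assumption: f(X_i) = X_i).
rng = np.random.default_rng(0)
n = 100_000
X = rng.uniform(1, 10, size=n)

# Homoskedastic: Var(eps_i | X_i) = sigma^2, the same for every X_i
eps_homo = rng.normal(0, 2.0, size=n)

# Heteroskedastic: Var(eps_i | X_i) = f(X_i) = sigma_i^2 (here f(X_i) = X_i)
eps_hetero = rng.normal(0, np.sqrt(X), size=n)

# Compare the error variance for small vs. large X: roughly equal in the first
# case, increasing in X in the second.
for label, eps in [("homoskedastic", eps_homo), ("heteroskedastic", eps_hetero)]:
    v_low = eps[X < 3].var()
    v_high = eps[X > 8].var()
    print(f"{label}: Var(eps | X<3) = {v_low:.2f}, Var(eps | X>8) = {v_high:.2f}")
```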
1. Consequences of heteroskedasticity for OLS

As long as the least squares assumptions hold, the OLS estimators are unbiased and consistent, even if the error term is heteroskedastic.

However, under heteroskedasticity, the OLS estimator does not have the minimum variance among all the linear, unbiased estimators of $\beta$ (i.e., it is not BLUE). (Remember that the Gauss-Markov theorem requires homoskedasticity for OLS to be BLUE.)

In addition, if the error term is heteroskedastic, our usual estimates of the variance of $\hat{\beta}$ will be biased and inconsistent (see lecture 12B). Because of this, our usual hypothesis testing routines are unreliable in the presence of heteroskedasticity.
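The slides do not include code; the following is a small Monte Carlo sketch, under a made-up data generating process ($\beta = 2$, standard deviation of $\varepsilon_i$ proportional to $X_i$), illustrating both claims: the OLS slope stays centered on the true value, but the conventional standard error reported by OLS differs from the true sampling variability of $\hat{\beta}$.

```python
import numpy as np
import statsmodels.api as sm

# Monte Carlo check: under heteroskedastic errors OLS is still unbiased, but the
# conventional (homoskedasticity-only) standard error is biased. The DGP below
# (beta = 2, sd of eps proportional to X) is an arbitrary choice for illustration.
rng = np.random.default_rng(1)
n, reps, beta = 200, 2000, 2.0

slope_estimates, conventional_ses = [], []
for _ in range(reps):
    X = rng.uniform(1, 10, size=n)
    eps = rng.normal(0, X)                 # sd proportional to X -> heteroskedastic
    y = 1.0 + beta * X + eps
    res = sm.OLS(y, sm.add_constant(X)).fit()
    slope_estimates.append(res.params[1])
    conventional_ses.append(res.bse[1])

slope_estimates = np.array(slope_estimates)
print("mean of beta_hat:     ", slope_estimates.mean())     # close to 2 (unbiased)
print("true sd of beta_hat:  ", slope_estimates.std())      # Monte Carlo benchmark
print("mean conventional SE: ", np.mean(conventional_ses))  # systematically off
```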
1. Consequences of heteroskedasticity for OLS

Intuition: Why is OLS inefficient when $\varepsilon$ is heteroskedastic?

We obtain the OLS estimator by minimizing the following expression:

$$\min_{\hat\beta_0, \hat\beta_1} \sum_i \hat\varepsilon_i^2 = \sum_i \left( Y_i - \hat\beta_0 - \hat\beta_1 X_i \right)^2$$

Notice that we weight each $\hat\varepsilon_i^2$ equally, regardless of the size of its variance.

Ideally, we would like to give more weight to observations with lower associated variances, as this would enable us to estimate the population regression line more accurately.

We will want to use an estimator that does exactly that. It is called Weighted Least Squares (WLS).
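As a sketch of the idea (not taken from the lecture), the example below assumes the form of the heteroskedasticity is known, $Var(\varepsilon_i \mid X_i) \propto X_i^2$, and uses statsmodels' WLS with weights $1/\sigma_i^2$, so that low-variance observations count more in the minimization:

```python
import numpy as np
import statsmodels.api as sm

# Sketch of Weighted Least Squares when the form of the heteroskedasticity is
# known (assumption for illustration: Var(eps_i | X_i) proportional to X_i^2).
rng = np.random.default_rng(2)
n = 500
X = rng.uniform(1, 10, size=n)
eps = rng.normal(0, X)              # sd proportional to X
y = 1.0 + 2.0 * X + eps

X_design = sm.add_constant(X)

# OLS: every squared residual gets the same weight
ols_res = sm.OLS(y, X_design).fit()

# WLS: weight_i = 1 / Var(eps_i | X_i), so low-variance observations count more
wls_res = sm.WLS(y, X_design, weights=1.0 / X**2).fit()

print("OLS slope:", ols_res.params[1], " SE:", ols_res.bse[1])
print("WLS slope:", wls_res.params[1], " SE:", wls_res.bse[1])
```

With weights equal to the inverse of the error variance, WLS is the efficient linear unbiased estimator in this setting; the practical caveat is that $\sigma_i^2$ is rarely known and typically has to be modeled or estimated.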
1. Consequences of heteroskedasticity for OLS

Figure: Homoskedastic vs. heteroskedastic error term (scatter plots; axes run from 0 to 2,500).
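The original figure is not recoverable from the preview; the sketch below, under an assumed data generating process, produces a comparable pair of scatter plots:

```python
import numpy as np
import matplotlib.pyplot as plt

# Scatter of Y on X with homoskedastic errors (left) vs. heteroskedastic errors
# (right). The data generating process is an illustrative assumption.
rng = np.random.default_rng(3)
n = 300
X = rng.uniform(0, 2500, size=n)

y_homo = 100 + 0.8 * X + rng.normal(0, 150, size=n)          # constant spread
y_hetero = 100 + 0.8 * X + rng.normal(0, 0.15 * X, size=n)   # spread grows with X

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
axes[0].scatter(X, y_homo, s=8)
axes[0].set_title("Homoskedastic error term")
axes[1].scatter(X, y_hetero, s=8)
axes[1].set_title("Heteroskedastic error term")
for ax in axes:
    ax.set_xlabel("X")
axes[0].set_ylabel("Y")
plt.tight_layout()
plt.show()
```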