# ECON 103, Lecture 13: Heteroskedasticity


ECON 103, Lecture 13: Heteroskedasticity
Maria Casanova
February 25th (version 0)

Requirements for this lecture: This topic is not covered in depth in Stock and Watson. If you are interested in reading more about Weighted Least Squares, you can use the book "Introductory Econometrics: A Modern Approach" by Jeffrey Wooldridge, although this is NOT REQUIRED.
0. Introduction

We saw in lecture 7 that the error term of the regression model ($\varepsilon$) is said to be homoskedastic if its variance is constant conditional on the explanatory variables $X$, i.e.

$$Var(\varepsilon_i \mid X_i) = \sigma^2$$

On the other hand, $\varepsilon$ is said to be heteroskedastic when its variance depends on the value of the explanatory variables, i.e.

$$Var(\varepsilon_i \mid X_i) = f(X_i) = \sigma_i^2$$
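The distinction is easy to see on simulated data. The sketch below (all names and the skedastic function $f(X_i) = X_i^2$ are illustrative assumptions, not from the lecture) draws errors whose standard deviation grows with $x_i$ and compares the sample variance of the errors across two ranges of $x$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(1.0, 5.0, size=n)

# Heteroskedastic errors: Var(eps_i | x_i) = x_i^2 (an assumed skedastic function)
eps = rng.normal(0.0, x)  # the standard deviation of eps_i grows with x_i

# Under homoskedasticity these two numbers would coincide (up to sampling noise);
# here the variance among high-x observations is several times larger.
print(round(eps[x < 2.0].var(), 2), round(eps[x > 4.0].var(), 2))
```

Under homoskedasticity the same comparison would return two roughly equal numbers.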

1. Consequences of heteroskedasticity for OLS

As long as the least squares assumptions hold, the OLS estimators are unbiased and consistent even if the error term is heteroskedastic.

However, under heteroskedasticity the OLS estimator does not have the minimum variance among all linear, unbiased estimators of $\beta$ (i.e., it is not BLUE). (Remember that the Gauss-Markov theorem requires homoskedasticity, among its other assumptions, for OLS to be BLUE.)

In particular, if the error term is heteroskedastic, our usual estimates of the variance of $\hat{\beta}$ are biased and inconsistent (see lecture 12B). Because of this, our usual hypothesis testing routines are unreliable in the presence of heteroskedasticity.
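Both claims can be checked numerically. The numpy-only sketch below (simulated data; the skedastic function and all names are my assumptions) estimates a bivariate regression by OLS, then compares the homoskedasticity-only variance formula $s^2 (X'X)^{-1}$ against White's heteroskedasticity-robust (HC0) estimator; the slope estimate is still close to the truth, but the naive standard error understates the robust one:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x = rng.uniform(0.0, 10.0, size=n)
eps = rng.normal(0.0, 1.0 + x)          # heteroskedastic: sd of eps grows with x
y = 2.0 + 0.5 * x + eps                 # true beta = (2, 0.5)

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y            # OLS: still unbiased and consistent
resid = y - X @ beta_hat

# Naive variance estimate, valid only under homoskedasticity: s^2 (X'X)^{-1}
s2 = resid @ resid / (n - 2)
var_naive = s2 * XtX_inv

# White's heteroskedasticity-robust (HC0) sandwich estimator
meat = X.T @ (X * resid[:, None] ** 2)
var_robust = XtX_inv @ meat @ XtX_inv

print(beta_hat.round(2))
print(np.sqrt(var_naive[1, 1]), np.sqrt(var_robust[1, 1]))
```

The point of the comparison: the coefficient estimates are fine, but the homoskedasticity-only standard error is the wrong number to feed into a t-test here.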
1. Consequences of heteroskedasticity for OLS

Intuition: why is OLS inefficient when $\varepsilon$ is heteroskedastic?

We obtain the OLS estimator by minimizing the sum of squared residuals:

$$\min_{\hat{\beta}} \sum_i \hat{\varepsilon}_i^2$$

Notice that we weight each $\hat{\varepsilon}_i^2$ equally, regardless of the size of its variance. Ideally, we would like to give more weight to observations with lower associated variances, as this would enable us to estimate the population regression line more accurately. We will want to use an estimator that does exactly that: it is called Weighted Least Squares (WLS).
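The efficiency gain can be demonstrated with a small Monte Carlo. In the sketch below (simulated data; the skedastic function $\sigma_i = x_i$ is assumed known for illustration, which real applications rarely enjoy), WLS is computed by dividing every variable by $\sigma_i$ and running OLS on the transformed model; across repeated samples its slope estimates are visibly less dispersed than plain OLS:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 500, 500
slopes_ols, slopes_wls = [], []

for _ in range(reps):
    x = rng.uniform(1.0, 5.0, size=n)
    sigma = x                              # assumed known skedastic function: sd_i = x_i
    y = 1.0 + 2.0 * x + rng.normal(0.0, sigma)
    X = np.column_stack([np.ones(n), x])

    # OLS: every squared residual gets equal weight
    slopes_ols.append(np.linalg.lstsq(X, y, rcond=None)[0][1])

    # WLS = OLS on the transformed model: divide each observation by sigma_i,
    # so the transformed error term is homoskedastic with variance 1
    Xw, yw = X / sigma[:, None], y / sigma
    slopes_wls.append(np.linalg.lstsq(Xw, yw, rcond=None)[0][1])

# Sampling standard deviation of the slope estimator under each method
print(np.std(slopes_ols).round(3), np.std(slopes_wls).round(3))
```

Both estimators center on the true slope of 2; the WLS column is simply tighter, which is exactly the BLUE property OLS loses under heteroskedasticity.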

1. Consequences of heteroskedasticity for OLS

[Figure: Homoskedastic vs heteroskedastic error term — scatter plots not reproduced]