Ch09AssLinReg - Homoskedasticity and Heteroskedasticity (SW Section 17.5)

1. Homoskedasticity and Heteroskedasticity (SW Section 17.5)

If the conditional variance of the error term, var(u_i | X_{1,i}, ..., X_{K,i}), is constant (does not depend on X), then the errors are homoskedastic. Otherwise they are heteroskedastic.

Gauss-Markov Theorem: If the errors are homoskedastic, then OLS is the Best Linear Unbiased Estimator of the regression coefficients. If in addition the errors follow a Normal distribution, then least squares is the most efficient estimator.

2. Improving OLS under heteroskedasticity

Key idea: Transform the variables so that the errors are homoskedastic, then apply OLS to the transformed model.

Suppose var(u_i | X_{1,i}, ..., X_{K,i}) = \lambda \, h(X_{1,i}, ..., X_{K,i}), with h a known function. Then divide both sides of the regression by \sqrt{h(X_i)} to get

    \tilde{Y}_i = \beta_0 \tilde{X}_{0,i} + \beta_1 \tilde{X}_{1,i} + ... + \beta_K \tilde{X}_{K,i} + \tilde{u}_i

where \tilde{Y}_i = Y_i / \sqrt{h(X_i)}, \tilde{X}_{k,i} = X_{k,i} / \sqrt{h(X_i)} (with X_{0,i} = 1), and \tilde{u}_i = u_i / \sqrt{h(X_i)}.

Note that var(\tilde{u}_i | X) = var(u_i | X) / h(X_i) = \lambda, which is constant.

3. Improving OLS under heteroskedasticity (continued)

To compute the improved estimator, just regress \tilde{Y} on \tilde{X}_0, \tilde{X}_1, ...

Should you include a constant in this regression?
A. Yes
B. No

How should you compute the standard errors?
A. Use the heteroskedasticity-consistent standard errors
B. Use the default (homoskedastic) standard errors

4. Weighted Least Squares (WLS)

Since the conditional variance of \tilde{u}_i is constant, the Gauss-Markov Theorem applies to the transformed regression, and it is therefore more efficient than OLS on the untransformed regression. This method is called Weighted Least Squares since it is equivalent to minimizing

    \sum_{i=1}^{n} \frac{1}{h(X_i)} (Y_i - \beta_0 - \beta_1 X_{1,i} - ... - \beta_K X_{K,i})^2
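To make the transformation concrete, here is a minimal numpy sketch of WLS with a known variance function. The single regressor, the true coefficients, and the choice h(X_i) = X_{1,i}^2 are illustrative assumptions, not taken from the slides.

```python
import numpy as np

# Minimal sketch of the WLS transformation described above.
# Assumptions (not from the slides): one regressor X1, true beta0 = 2 and
# beta1 = 3, and a known variance function h(X_i) = X1_i**2, so that
# var(u_i | X) = lambda * X1_i**2 with lambda = 1.

rng = np.random.default_rng(0)
n = 500
x1 = rng.uniform(1.0, 5.0, size=n)
h = x1 ** 2                                   # known variance function h(X_i)
u = rng.normal(scale=np.sqrt(h))              # heteroskedastic errors
y = 2.0 + 3.0 * x1 + u

# Divide every variable, including the constant X0 = 1, by sqrt(h(X_i)).
w = 1.0 / np.sqrt(h)
X = np.column_stack([np.ones(n), x1])         # columns [X0, X1]
X_tilde = X * w[:, None]                      # X0_tilde = 1/sqrt(h), X1_tilde = X1/sqrt(h)
y_tilde = y * w

# OLS on the transformed model. Note that X0_tilde is not a column of ones,
# so no additional constant is added to the transformed regression.
beta_wls, *_ = np.linalg.lstsq(X_tilde, y_tilde, rcond=None)
print("WLS estimates (beta0, beta1):", beta_wls)
```

Because every column, including the constant, is divided by \sqrt{h(X_i)}, the transformed regression is run without adding another intercept, and OLS on the transformed variables minimizes exactly the weighted sum of squares shown on the next slide.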
5. Weighted Least Squares (continued)

Weighted Least Squares minimizes

    \sum_{i=1}^{n} \frac{1}{h(X_i)} (Y_i - \beta_0 - \beta_1 X_{1,i} - ... - \beta_K X_{K,i})^2

This estimator downweights (relatively) observations with high error variances.

What if h(·) is not fully known and needs to be estimated?

6. Feasible Weighted Least Squares

Suppose that var(u_i | X) = \theta_0 + \theta_1 X_{1,i}^2. Estimate \theta_0 and \theta_1 by regressing the squared OLS residuals \hat{u}_i^2 on a constant and X_{1,i}^2 to get \hat{h}(X_i) = \hat{\theta}_0 + \hat{\theta}_1 X_{1,i}^2.

This feasible WLS estimator converges to the same value as the WLS estimator that uses the true coefficients (\theta_0 and \theta_1), but it is not always more efficient than OLS, and the usual standard errors of the regression coefficients (\beta) are downward biased. (Why?)

7. Feasible Weighted Least Squares (continued)

Suppose that var(u_i | X) = \theta_0 + \theta_1 X_{1,i}^2. Do we need restrictions on \theta_0 and \theta_1?
A. Yes
B. No

8. WLS versus OLS with heteroskedasticity-consistent standard errors

- OLS with heteroskedasticity-consistent standard errors is the safe bet, but it may be inefficient.
- If the assumptions about the form of heteroskedasticity are correct, then WLS is a more efficient estimator.
- Feasible WLS may not be better than OLS with heteroskedasticity-consistent standard errors, and its usual standard errors are downward biased.
- The idea behind WLS (transform the model to a simpler situation) is important in econometrics; we will use it again in this course.
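As an illustration of the steps on slides 6 and 7, here is a minimal numpy sketch of feasible WLS. The data-generating values (\theta_0 = 0.5, \theta_1 = 1, \beta_0 = 2, \beta_1 = 3) and the positivity guard on the fitted variances are assumptions made for this example, not part of the slides.

```python
import numpy as np

# Minimal sketch of feasible WLS under the assumed variance model
# var(u_i | X) = theta0 + theta1 * X1_i**2. Data-generating values below
# are illustrative only.

rng = np.random.default_rng(1)
n = 500
x1 = rng.uniform(1.0, 5.0, size=n)
u = rng.normal(scale=np.sqrt(0.5 + 1.0 * x1 ** 2))
y = 2.0 + 3.0 * x1 + u
X = np.column_stack([np.ones(n), x1])

# Step 1: ordinary OLS to obtain residuals u_hat.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
u_hat = y - X @ beta_ols

# Step 2: regress u_hat^2 on a constant and X1^2 to estimate theta0, theta1.
Z = np.column_stack([np.ones(n), x1 ** 2])
theta_hat, *_ = np.linalg.lstsq(Z, u_hat ** 2, rcond=None)
h_hat = Z @ theta_hat                         # estimated variance function

# Step 3: rerun the regression with weights 1 / h_hat. The clip guards
# against nonpositive fitted variances (an assumption of this sketch).
w = 1.0 / np.sqrt(np.clip(h_hat, 1e-8, None))
beta_fwls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
print("OLS:         ", beta_ols)
print("Feasible WLS:", beta_fwls)
```

The clip on h_hat hints at the slide 7 question: the estimated conditional variance must be strictly positive for the weights to make sense.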
9. Weighted Least Squares for endogenous samples

An endogenous sample is one where the probability of being selected into the sample is related to the dependent variable.