Imbens, Lecture Notes 2, ARE213 Spring 06

ARE213 Econometrics, Spring 2006
UC Berkeley, Department of Agricultural and Resource Economics

Ordinary Least Squares II: Variance Estimation and the Bootstrap (W 4.2.3, 12.8.2)

In the first lecture we considered the standard linear model

\[ Y_i = X_i'\beta + \varepsilon_i. \qquad (1) \]

We looked at estimating \beta and functions of \beta under the following assumption:

Assumption 1: \varepsilon_i \mid X_i \sim N(0, \sigma^2).

Assuming also that the observations are drawn randomly from some population, the following distributional result was stated for the least squares estimator:

\[ \sqrt{N}\,(\hat\beta - \beta) \xrightarrow{d} N\bigl(0, \sigma^2 (E[XX'])^{-1}\bigr). \]

In fact, for this result it is sufficient that \varepsilon_i is independent of X_i; one does not need normality of the \varepsilon_i. We estimated the asymptotic variance as

\[ \hat\sigma^2 \Bigl( \frac{1}{N} \sum_{i=1}^N X_i X_i' \Bigr)^{-1}, \]

where \hat\sigma^2 can be the maximum likelihood estimator

\[ \hat\sigma^2_{ml} = \frac{1}{N} \sum_{i=1}^N \bigl(Y_i - X_i'\hat\beta\bigr)^2, \]

or the unbiased estimator

\[ \hat\sigma^2_{ub} = \frac{1}{N-K} \sum_{i=1}^N \bigl(Y_i - X_i'\hat\beta\bigr)^2, \]

where K is the dimension of the covariate vector X_i.

In this lecture I want to explore alternative ways of estimating the variance and relate them to alternative assumptions about the distribution and properties of the residuals.

First we consider the distribution of \hat\beta under much weaker assumptions. Instead of independence and normality of the \varepsilon_i, we make the following assumption:

Assumption 2: E[\varepsilon_i X_i] = 0.

This essentially defines the true value of \beta to be the best linear predictor:

\[ \beta = \arg\min_b E\bigl[(Y - X'b)^2\bigr] = (E[XX'])^{-1} E[XY]. \]

Under this assumption and independent sampling, we still have asymptotic normality for the least squares estimator, but now with a different variance:

\[ \sqrt{N}\,(\hat\beta - \beta) \xrightarrow{d} N\Bigl(0,\; (E[XX'])^{-1} \bigl(E[\varepsilon^2 XX']\bigr) (E[XX'])^{-1}\Bigr). \]

Let the asymptotic variance be denoted by

\[ V = (E[XX'])^{-1} \bigl(E[\varepsilon^2 XX']\bigr) (E[XX'])^{-1}. \]

This is known as the heteroskedasticity-consistent variance, or the robust variance, due to Eicker (1967) and White (1980). To see where this variance comes from, write the least squares estimator minus the truth as

\[ \hat\beta - \beta = \Bigl(\frac{1}{N}\sum_{i=1}^N X_i X_i'\Bigr)^{-1} \frac{1}{N}\sum_{i=1}^N X_i Y_i \;-\; \beta \]
\[ = \Bigl(\frac{1}{N}\sum_{i=1}^N X_i X_i'\Bigr)^{-1} \frac{1}{N}\sum_{i=1}^N X_i X_i'\beta \;+\; \Bigl(\frac{1}{N}\sum_{i=1}^N X_i X_i'\Bigr)^{-1} \frac{1}{N}\sum_{i=1}^N X_i \varepsilon_i \;-\; \beta \]
\[ = \Bigl(\frac{1}{N}\sum_{i=1}^N X_i X_i'\Bigr)^{-1} \frac{1}{N}\sum_{i=1}^N X_i \varepsilon_i. \]

The variance of the second factor is

\[ E\Bigl[\Bigl(\frac{1}{N}\sum_{i=1}^N X_i \varepsilon_i\Bigr)\Bigl(\frac{1}{N}\sum_{i=1}^N X_i \varepsilon_i\Bigr)'\Bigr] = \frac{1}{N^2}\sum_{i=1}^N E\bigl[\varepsilon_i^2 X_i X_i'\bigr] = \frac{1}{N}\, E[\varepsilon^2 XX']. \]

We can estimate the heteroskedasticity-consistent variance consistently as

\[ \hat V = \Bigl(\frac{1}{N}\sum_{i=1}^N X_i X_i'\Bigr)^{-1} \Bigl(\frac{1}{N}\sum_{i=1}^N \hat\varepsilon_i^2\, X_i X_i'\Bigr) \Bigl(\frac{1}{N}\sum_{i=1}^N X_i X_i'\Bigr)^{-1}, \]

where \hat\varepsilon_i = Y_i - X_i'\hat\beta is the (estimated) residual.
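Both variance estimators above are a few lines of matrix algebra. The following is a minimal numerical sketch in Python with NumPy (not part of the original notes); the function name ols_variance_estimators and the simulated data are illustrative assumptions, and any standard regression package will report the same quantities.

    import numpy as np

    def ols_variance_estimators(Y, X):
        """OLS coefficients with classical and heteroskedasticity-robust
        variance estimates, following the formulas in these notes.

        Y : (N,) response vector
        X : (N, K) covariate matrix (include a column of ones for an intercept)
        """
        N, K = X.shape

        # beta_hat = ( (1/N) sum X_i X_i' )^{-1} (1/N) sum X_i Y_i
        XX = X.T @ X / N
        XY = X.T @ Y / N
        beta_hat = np.linalg.solve(XX, XY)

        # residuals: eps_hat_i = Y_i - X_i' beta_hat
        eps_hat = Y - X @ beta_hat

        # homoskedastic error-variance estimates
        sigma2_ml = eps_hat @ eps_hat / N         # maximum likelihood
        sigma2_ub = eps_hat @ eps_hat / (N - K)   # unbiased (degrees-of-freedom corrected)

        # classical asymptotic variance: sigma^2 (E[XX'])^{-1}
        XX_inv = np.linalg.inv(XX)
        V_classical = sigma2_ub * XX_inv

        # Eicker-White robust variance:
        # (E[XX'])^{-1} ( (1/N) sum eps_hat_i^2 X_i X_i' ) (E[XX'])^{-1}
        meat = (X * eps_hat[:, None] ** 2).T @ X / N
        V_robust = XX_inv @ meat @ XX_inv

        # divide by N to convert the asymptotic variance of sqrt(N)(beta_hat - beta)
        # into an estimated variance for beta_hat itself
        return beta_hat, V_classical / N, V_robust / N

As a quick check, one can simulate data whose error variance depends on the covariate; the robust and classical standard errors should then differ noticeably:

    rng = np.random.default_rng(0)
    N = 1000
    x = rng.normal(size=N)
    X = np.column_stack([np.ones(N), x])
    eps = rng.normal(scale=1 + np.abs(x))          # heteroskedastic errors
    Y = X @ np.array([1.0, 2.0]) + eps
    beta_hat, V_c, V_r = ols_variance_estimators(Y, X)
    print(np.sqrt(np.diag(V_c)), np.sqrt(np.diag(V_r)))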