lect18_06apr10 - Imbens, Lecture Notes 18, ARE213 Spring 06...

ARE213 Econometrics, Spring 2006
UC Berkeley, Department of Agricultural and Resource Economics

Endogeneity II: Two-Stage-Least-Squares, Control Function, and Limited-Information-Maximum-Likelihood Estimation

1. Two-Stage-Least-Squares

A more systematic way to combine multiple instruments is through two-stage-least-squares (TSLS) estimation. Let us do this in more generality. The equation of interest is

    Y_i = X_i'\beta + \varepsilon_i = X_{i1}'\beta_1 + X_{i2}'\beta_2 + \varepsilon_i.

Let \sigma^2 be the variance of \varepsilon_i. The vector of covariates X_i can be split into two parts, a possibly endogenous part X_{i1} and an exogenous part X_{i2}. The vector of instruments is Z_i. It can be split into the excluded instruments Z_{i1} and the exogenous covariates X_{i2}, or Z_i = (Z_{i1}', X_{i2}')'. Typically the common part X_{i2} of the vectors Z_i and X_i will at least contain the intercept.

The TSLS estimation method consists of two stages. In the first stage all the endogenous regressors are regressed on all the instruments and exogenous variables. That is, we estimate

    X_{i1} = \Pi' Z_i + \eta_i = \Pi_1' Z_{i1} + \Pi_2' X_{i2} + \eta_i.

Note that X_{i1} is a K-vector, so that with Z_i an L-vector, \Pi is an L \times K matrix of parameters. Estimating this by least squares leads to

    \hat{\Pi} = (Z'Z)^{-1} Z'X_1.

We then calculate the predicted values for X_1 based on this regression:

    \hat{X}_1 = Z \hat{\Pi}.

Note that if we had a similar equation for X_{i2},

    X_{i2} = \Gamma' Z_i + \nu_i = \Gamma_1' Z_{i1} + \Gamma_2' X_{i2} + \nu_i,

the result would be \hat{\Gamma}_2 = I and \hat{\Gamma}_1 = 0, so that the predicted value is \hat{X}_2 = X_2. Hence in the end we can treat all regressors symmetrically and just regress X on Z to get

    \hat{X} = Z (Z'Z)^{-1} Z'X.

In the second stage the outcome is regressed on the predicted regressors:

    Y_i = \hat{X}_i'\beta + \nu_i = (\hat{\Pi}' Z_i)'\beta + \nu_i.

We can write the estimator for \beta as

    \hat{\beta}_{TSLS} = ( X'Z (Z'Z)^{-1} Z'X )^{-1} X'Z (Z'Z)^{-1} Z'Y.

In large samples, \sqrt{N} (\hat{\beta}_{TSLS} - \beta) is approximately distributed as

    N( 0, \sigma^2 ( (X'Z/N) (Z'Z/N)^{-1} (Z'X/N) )^{-1} ).
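The two stages and the one-shot matrix formula above can be sketched in a few lines of NumPy. The data-generating process below is hypothetical, chosen only so that the instrument is relevant and the regressor is endogenous; it is not part of the notes.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Hypothetical DGP: one endogenous regressor x1, an intercept as the
# exogenous covariate X2, and one excluded instrument z1.
u = rng.normal(size=N)                   # common shock creating endogeneity
z1 = rng.normal(size=N)                  # excluded instrument
x1 = 0.8 * z1 + u + rng.normal(size=N)   # first-stage relation
eps = u + rng.normal(size=N)             # correlated with x1
y = 1.0 + 2.0 * x1 + eps                 # true beta = (2, 1)

X = np.column_stack([x1, np.ones(N)])    # X = (X1, X2), X2 = intercept
Z = np.column_stack([z1, np.ones(N)])    # Z = (Z1, X2)

# First stage: Pi_hat = (Z'Z)^{-1} Z'X, predicted regressors Xhat = Z Pi_hat.
Pi_hat = np.linalg.solve(Z.T @ Z, Z.T @ X)
X_hat = Z @ Pi_hat

# Second stage: OLS of y on the predicted regressors.
beta_tsls = np.linalg.solve(X_hat.T @ X_hat, X_hat.T @ y)

# One-shot formula: (X'Z (Z'Z)^{-1} Z'X)^{-1} X'Z (Z'Z)^{-1} Z'Y.
ZZinv = np.linalg.inv(Z.T @ Z)
A = X.T @ Z @ ZZinv @ Z.T @ X
b = X.T @ Z @ ZZinv @ Z.T @ y
beta_direct = np.linalg.solve(A, b)

print(beta_tsls)                             # approximately (2.0, 1.0)
print(np.allclose(beta_tsls, beta_direct))   # True
```

The equivalence of the two computations is exactly the algebra in the text: since \hat{X}'\hat{X} = X'Z(Z'Z)^{-1}Z'X and \hat{X}'Y = X'Z(Z'Z)^{-1}Z'Y, the second-stage OLS coefficients coincide with the one-shot formula.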
The error variance \sigma^2 = E[(Y_i - X_i'\beta)^2] is estimated as

    \hat{\sigma}^2 = \frac{1}{N} \sum_i (Y_i - X_i'\hat{\beta})^2.

Note that this variance is not the variance you would get as the standard OLS variance from regressing Y_i on \hat{X}_i: the second-stage residuals Y_i - \hat{X}_i'\hat{\beta} are formed with the predicted rather than the actual regressors, so the naive second-stage standard errors are incorrect.
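The distinction can be checked numerically. Continuing the same style of hypothetical simulation (a sketch, not the notes' own code), the correct estimate uses the actual regressors X, while the naive second-stage residuals use \hat{X}:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

# Hypothetical DGP with Var(eps) = 2 (eps = u + independent noise).
u = rng.normal(size=N)
z1 = rng.normal(size=N)
x1 = 0.8 * z1 + u + rng.normal(size=N)
y = 1.0 + 2.0 * x1 + u + rng.normal(size=N)

X = np.column_stack([x1, np.ones(N)])
Z = np.column_stack([z1, np.ones(N)])
X_hat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
beta = np.linalg.solve(X_hat.T @ X_hat, X_hat.T @ y)

# Correct: residuals formed with the ACTUAL regressors X.
sigma2_correct = np.mean((y - X @ beta) ** 2)    # close to Var(eps) = 2

# Naive: residuals from the second-stage regression itself (uses X_hat);
# these also contain the first-stage error scaled by beta, so the
# estimate is badly inflated here.
sigma2_naive = np.mean((y - X_hat @ beta) ** 2)

print(sigma2_correct, sigma2_naive)
```

The naive residual is Y_i - \hat{X}_i'\hat{\beta} \approx (X_i - \hat{X}_i)'\beta + \varepsilon_i, which mixes in the first-stage prediction error, so its mean square does not estimate \sigma^2.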