ECON 140, Fall 2008, 10/30
Alex Rothenberg

Practice Problems: Endogeneity, Measurement Error, Omitted Variables, Diagnostics

Problem 1

The following set of questions asks whether or not OLS will produce biased estimates of $\beta$ in the following model:
$$Y_i = \alpha + \beta X_i + \varepsilon_i \qquad (1)$$
You will make heavy use of the following formula for the expected value of $\hat{\beta}_{OLS}$:
$$E[\hat{\beta}_{OLS}] = \beta + \frac{\mathrm{Cov}(X, \varepsilon)}{\mathrm{Var}(X)}$$
We proved this formula in class, but it's worth going through the proof at least once so it makes sense to you.

1. Suppose that, when we record data for our regression, we measure $Y_i$ with error. That is, we actually observe $\widetilde{Y}_i$, which is the sum of the truth and random noise:
$$\widetilde{Y}_i = Y_i + \nu_i$$
Assuming that everything else in our model satisfies the usual assumptions, when will $\hat{\beta}_{OLS}$ be an unbiased estimator of $\beta$?

ANSWER: We start by rewriting the model we actually run. Substituting the true model (1) into the definition of $\widetilde{Y}_i$:
$$\widetilde{Y}_i = Y_i + \nu_i = \alpha + \beta X_i + \varepsilon_i + \nu_i = \alpha + \beta X_i + \tilde{\varepsilon}_i$$
where $\tilde{\varepsilon}_i = \varepsilon_i + \nu_i$. So, the new error of the regression equation is the sum of the error from the true equation and the measurement error. From our omitted variables bias formula, we know that:
$$E[\hat{\beta}_{OLS}] = \beta + \frac{\mathrm{Cov}(X, \tilde{\varepsilon})}{\mathrm{Var}(X)} = \beta + \frac{\mathrm{Cov}(X, \varepsilon)}{\mathrm{Var}(X)} + \frac{\mathrm{Cov}(X, \nu)}{\mathrm{Var}(X)} = \beta + \frac{\mathrm{Cov}(X, \nu)}{\mathrm{Var}(X)}$$
since $\mathrm{Cov}(X, \varepsilon) = 0$ under the usual assumptions. So, as long as the measurement error in $Y$ and the levels of $X$ are uncorrelated (so that $\mathrm{Cov}(X, \nu) = 0$), OLS produces unbiased estimates of $\beta$.

2. Now, suppose that, when we record data for our regression, we measure each $X_i$ with error. That is, we actually observe $\widetilde{X}_i$, which is the sum of the truth and random noise:
$$\widetilde{X}_i = X_i + \nu_i$$
Assuming that everything else in our model satisfies the usual assumptions, when will $\hat{\beta}_{OLS}$ be an unbiased estimator of $\beta$? When will we overestimate $\beta$? When will we underestimate $\beta$?
ANSWER: We start by rewriting the model we run. Substituting $X_i = \widetilde{X}_i - \nu_i$ into the true model:
$$Y_i = \alpha + \beta X_i + \varepsilon_i = \alpha + \beta(\widetilde{X}_i - \nu_i) + \varepsilon_i = \alpha + \beta \widetilde{X}_i + (\varepsilon_i - \beta \nu_i) = \alpha + \beta \widetilde{X}_i + \tilde{\varepsilon}_i$$
So, the new error of the regression equation, $\tilde{\varepsilon}_i = \varepsilon_i - \beta \nu_i$, is the combination of the error from the true equation and $-\beta$ times the measurement error in $X$. From our omitted variables bias formula, we know that:
$$E[\hat{\beta}_{OLS}] = \beta + \frac{\mathrm{Cov}(\widetilde{X}, \tilde{\varepsilon})}{\mathrm{Var}(\widetilde{X})} = \beta + \frac{\mathrm{Cov}(\widetilde{X}, \varepsilon)}{\mathrm{Var}(\widetilde{X})} - \beta\,\frac{\mathrm{Cov}(\widetilde{X}, \nu)}{\mathrm{Var}(\widetilde{X})}$$
If the measurement error $\nu$ is uncorrelated with both $X$ and $\varepsilon$, then $\mathrm{Cov}(\widetilde{X}, \varepsilon) = 0$, while $\mathrm{Cov}(\widetilde{X}, \nu) = \mathrm{Cov}(X + \nu, \nu) = \sigma^2_\nu$ and $\mathrm{Var}(\widetilde{X}) = \sigma^2_X + \sigma^2_\nu$. Therefore:
$$E[\hat{\beta}_{OLS}] = \beta - \beta\,\frac{\sigma^2_\nu}{\sigma^2_X + \sigma^2_\nu} = \beta \left( \frac{\sigma^2_X}{\sigma^2_X + \sigma^2_\nu} \right)$$
The attenuation factor $\sigma^2_X / (\sigma^2_X + \sigma^2_\nu)$ lies strictly between 0 and 1 whenever $\sigma^2_\nu > 0$, so $\hat{\beta}_{OLS}$ is biased toward zero; it is unbiased only if there is no measurement error in $X$ at all. In magnitude we always underestimate $\beta$: in signed terms, we underestimate $\beta$ when $\beta > 0$ and overestimate it when $\beta < 0$.
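Both results are easy to check by simulation. Below is a minimal sketch with hypothetical parameter values ($\alpha = 1$, $\beta = 2$, all variances equal to 1); it regresses the mismeasured variables by hand using the sample analogue of $\mathrm{Cov}(x, y)/\mathrm{Var}(x)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# True model: Y = alpha + beta * X + eps (hypothetical values)
alpha, beta = 1.0, 2.0
n = 200_000
X = rng.normal(0.0, 1.0, n)    # Var(X) = 1
eps = rng.normal(0.0, 1.0, n)  # regression error
nu = rng.normal(0.0, 1.0, n)   # measurement error, Var(nu) = 1
Y = alpha + beta * X + eps

def ols_slope(x, y):
    """OLS slope of y on x: sample Cov(x, y) / Var(x)."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Part 1: error in Y only. Slope remains (approximately) unbiased
# because nu is uncorrelated with X.
b_y_error = ols_slope(X, Y + nu)

# Part 2: error in X only. Slope is attenuated toward zero by
# Var(X) / (Var(X) + Var(nu)) = 1/2 here.
b_x_error = ols_slope(X + nu, Y)

print(b_y_error)  # close to beta = 2.0
print(b_x_error)  # close to beta * 1/2 = 1.0
```

With 200,000 draws, the first estimate sits near the true $\beta = 2$ while the second sits near $1$, matching the attenuation factor $\sigma^2_X / (\sigma^2_X + \sigma^2_\nu) = 1/2$ derived above.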