Estimating the full model log(wage) = b0 + b1·educ + b2·IQ + u we obtain

  log(wage)^ = b0^ + b1^·educ + b2^·IQ = 5.658 + 0.03912·educ + 0.005863·IQ

Now, estimating the simple model log(wage) = b0 + b1·educ + u we obtain

  log(wage)~ = b0~ + b1~·educ = 5.973 + 0.05984·educ

(Here "^" marks estimates from the full model and "~" estimates from the
simple, misspecified model.) On average we expect the estimate of b1 to be
too high, E(b1~) > b1, since we expect b2 > 0 and Corr(educ, IQ) > 0.
VER. 9/25/2012. © P. KOLM

Omitted Variable Bias: Example (2/2)

We know that b1~ = b1^ + b2^·d1^, where d1^ is the slope of a regression of
x2 on x1. Let us verify that for this particular example. Estimating
x2 = d0 + d1·x1 + e we obtain d1^ = 3.534, and indeed

  b1~ = 0.05984
  b1^ + b2^·d1^ = 0.03912 + 0.005863·3.534 = 0.05984
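The decomposition b1~ = b1^ + b2^·d1^ is an exact in-sample identity of OLS, so it can be verified on any data. A minimal numpy sketch on made-up synthetic data (the variables and coefficients below are illustrative stand-ins for educ and IQ, not the actual wage data set):

```python
import numpy as np

# Hypothetical data: x1 plays the role of educ, x2 the role of IQ.
rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(12.0, 2.0, n)
x2 = 50.0 + 3.5 * x1 + rng.normal(0.0, 10.0, n)   # x2 correlated with x1
y = 5.0 + 0.04 * x1 + 0.006 * x2 + rng.normal(0.0, 0.3, n)

# Full ("long") regression of y on x1 and x2
X_long = np.column_stack([np.ones(n), x1, x2])
b0_hat, b1_hat, b2_hat = np.linalg.lstsq(X_long, y, rcond=None)[0]

# Simple ("short") regression of y on x1 only
X_short = np.column_stack([np.ones(n), x1])
_, b1_tilde = np.linalg.lstsq(X_short, y, rcond=None)[0]

# Auxiliary regression of x2 on x1 gives the slope d1^
_, d1_hat = np.linalg.lstsq(X_short, x2, rcond=None)[0]

# The omitted-variable identity b1~ = b1^ + b2^ * d1^ holds exactly in-sample
assert np.isclose(b1_tilde, b1_hat + b2_hat * d1_hat)
```

The identity holds regardless of sample size or the true coefficients, because it follows algebraically from the OLS normal equations.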
Omitted Variable Bias: The More General Case

We will not derive the general formula for this as it is a bit more
complicated. Technically, we can only determine the sign of the bias in the
more general case if all of the included x's are uncorrelated. In practice,
as a useful guide even if not strictly true, we can work through the bias
assuming the x's are uncorrelated.

Omitted Variable Bias Again: What Happens to the Variance? (1/2)

Consider the misspecified model from before, y = b0 + b1·x1. Note that

  Var(b1~) = s2 / SST1  ≤  s2 / (SST1·(1 − R1²)) = Var(b1^)

In fact, Var(b1~) < Var(b1^) unless x1 and x2 are uncorrelated.

Omitted Variable Bias Again: What Happens to the Variance? (2/2)

While the variance of the estimator is smaller for the misspecified model,
unless b2 = 0 the misspecified model is biased. As the sample size grows, the
variance of each estimator shrinks to zero, making the variance difference
less important.

Corollary: Including an extraneous or irrelevant variable cannot decrease the
variance of the estimator.

An Interlude

Source: http://xkcd.com/539/

Multiple Regression Analysis: Statistical Inference

Overview of Statistical Inference (1/2)

The statistical properties of the least squares estimators derive from the
assumptions of the model. These properties tell us something about the
optimality of the estimators (Gauss-Markov), but they also provide the
foundation for the process of statistical inference: "How confident are we
about the estimates that we have obtained?"

Overview of Statistical Inference (2/2)

Suppose we have estimated the model

  wage = 272.5 + 76.22·educ + 17.64·exper

We could have gotten the value of 76.22 for the coefficient on education by
chance. How confident are we that the true parameter value is not 80, 15, 34,
or 0? Statistical inference addresses this kind of question.

Assumptions of the Classical Linear Model (CLM)

We know that given the Gauss-Markov assumptions, OLS is BLUE. To do classical
hypothesis testing, we need to add another assumption beyond the Gauss-Markov
assumptions, namely

  u ~ N(0, s2)   [MLR.6]

Recall: The six assumptions, MLR.1-MLR.6, are called the CLM assumptions
(CLM = Classical Linear Model).

Remarks

Under CLM, OLS is n...
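The variance comparison Var(b1~) = s2/SST1 ≤ s2/(SST1·(1 − R1²)) = Var(b1^) from the slides above can be checked numerically. A minimal sketch with an assumed (illustrative) error variance s2; for a single regressor x2, R1² is just the squared sample correlation between x1 and x2:

```python
import numpy as np

# Illustrative data: x2 is deliberately correlated with x1.
rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(0.0, 1.0, n)
x2 = 0.6 * x1 + rng.normal(0.0, 1.0, n)
s2 = 1.0   # assumed error variance sigma^2 (hypothetical value)

SST1 = np.sum((x1 - x1.mean()) ** 2)

# R1^2 from regressing x1 on x2: with one regressor this is the
# squared sample correlation.
R1_sq = np.corrcoef(x1, x2)[0, 1] ** 2

var_tilde = s2 / SST1                    # misspecified (short) model
var_hat = s2 / (SST1 * (1.0 - R1_sq))    # full (long) model

# Since 0 <= R1^2 < 1, the short-model variance is never larger,
# with equality only when x1 and x2 are uncorrelated (R1^2 = 0).
assert var_tilde <= var_hat
```

This makes the trade-off on the slide concrete: omitting x2 shrinks the variance of the slope estimator by the factor (1 − R1²), but at the cost of bias whenever b2 ≠ 0.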
This document was uploaded on 02/17/2014 for the course COURANT G63.2751.0 at NYU, Fall '14.