# Lecture 9: Estimation of the Multiple Regression Model


## Multiple Regression Model: Estimation

    y = b0 + b1 x1 + b2 x2 + ... + bk xk + u

## Outline

1. Motivation for the Multiple Regression Model (MRM)
2. OLS estimator in the MRM
3. Changing more than one independent variable simultaneously
4. Algebraic properties of OLS
5. Goodness of fit
6. Unbiasedness of OLS
## The Multiple Regression Model (MRM)

Model:

    y = b0 + b1 x1 + b2 x2 + ... + bk xk + u

where:

- y = dependent variable
- x1, x2, ..., xk = the k regressors (also called independent or explanatory variables)
- b0 = the intercept parameter
- b1, b2, ..., bk = the k slope parameters; so k = the number of independent variables and k + 1 = the number of unknown b parameters
- u = the disturbance (error term, unobservables): all factors other than x1, x2, ..., xk that affect y

The key assumption in the MRM is:

    E(u | x1, x2, ..., xk) = 0

That is, the error u is mean independent of all the independent variables (the x's). This is the zero conditional mean assumption.
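As a minimal sketch of what OLS estimation in the MRM does, the snippet below simulates data from a model with k = 2 regressors (all coefficient values are hypothetical, chosen only for illustration) and recovers the b parameters by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 2  # n observations, k regressors

# Simulate data satisfying the zero conditional mean assumption:
# u is drawn independently of x1 and x2, so E(u | x1, x2) = 0.
X = rng.normal(size=(n, k))
u = rng.normal(size=n)
beta_true = np.array([1.0, 2.0, -0.5])  # hypothetical b0, b1, b2
y = beta_true[0] + X @ beta_true[1:] + u

# OLS: regress y on an intercept column and the k regressors.
X_design = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(X_design, y, rcond=None)
print(beta_hat)  # close to [1.0, 2.0, -0.5]
```

With n = 500 observations and well-behaved errors, the three estimates land close to the true values, illustrating the unbiasedness result discussed later in the outline.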

## Motivation for the MRM

There are several reasons why the MRM is more useful than the SLR:

1. We can measure the causal (ceteris paribus) effects of more than one variable.
2. Better R²: it allows us to build better-fitting models.
3. We can correct for omitted variable bias.
4. We can incorporate general (nonlinear) functional-form relationships between two variables, which allows more flexibility.

### Example 1: Determinants of College GPA

Suppose that the true model in the population is:

    colGPA = b0 + b1 hsGPA + b2 SAT + u,   with E(u | hsGPA, SAT) = 0

By estimating this model we can measure the ceteris paribus effects of two variables, hsGPA and SAT (as opposed to the SLR, where we could get the ceteris paribus effect of only one variable).
Suppose that, instead, we estimate the SLR:

    colGPA = b0 + b1 hsGPA + v

It is clear that v = b2 SAT + u. Therefore, in general, v is not mean independent of hsGPA, because SAT scores are likely to be correlated with high school performance (hsGPA); that is, the error term v of the SLR is correlated with the regressor hsGPA. So assumption 2 discussed in lecture 7 is violated. This implies that OLS estimation of the SLR will produce a biased estimator of b1, i.e. we have an omitted variable bias.
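The omitted variable bias can be seen directly in a simulation. The sketch below uses entirely hypothetical numbers: hsGPA and SAT are made positively correlated through a shared "ability" component, and colGPA is generated from the true two-regressor model. Fitting the full MRM recovers b1; dropping SAT inflates the hsGPA coefficient:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical data: hsGPA and SAT are positively correlated because
# both load on the same unobserved "ability" draw.
ability = rng.normal(size=n)
hsGPA = 3.0 + 0.5 * ability + 0.2 * rng.normal(size=n)
SAT = 1000 + 100 * ability + 50 * rng.normal(size=n)
u = 0.3 * rng.normal(size=n)

b0, b1, b2 = 1.0, 0.4, 0.001  # hypothetical true parameters
colGPA = b0 + b1 * hsGPA + b2 * SAT + u

# MRM: both regressors included -> consistent estimate of b1.
X_full = np.column_stack([np.ones(n), hsGPA, SAT])
b_full, *_ = np.linalg.lstsq(X_full, colGPA, rcond=None)

# SLR: SAT omitted -> the hsGPA coefficient absorbs part of SAT's effect.
X_short = np.column_stack([np.ones(n), hsGPA])
b_short, *_ = np.linalg.lstsq(X_short, colGPA, rcond=None)

print(b_full[1])   # near the true 0.4
print(b_short[1])  # biased upward: Cov(hsGPA, SAT) > 0 and b2 > 0
```

The direction of the bias matches the usual rule: with b2 > 0 and a positive correlation between the included and omitted regressors, the SLR estimate of b1 is biased upward.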

### Example 2: Wage Equation

Suppose that the true model in the population is:

    wage = b0 + b1 educ + b2 exper + b3 exper² + u,   with E(u | educ, exper, exper²) = 0

By estimating this model we can measure the ceteris paribus effects of two variables, education and experience (as opposed to the SLR, where we could get the ceteris paribus effect of only one variable). Note that the regressor exper² is exper × exper, so the model permits a nonlinear (quadratic) relationship between an individual's wage and her labor market experience, i.e. it allows for more flexibility.
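A quadratic term still leaves the model linear in the parameters, so OLS applies unchanged: exper² is simply treated as one more regressor. The sketch below (all coefficient values hypothetical) fits such a model and shows that the estimated marginal effect of experience, b2 + 2·b3·exper, shrinks as experience grows:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Hypothetical data: years of schooling and labor market experience.
educ = rng.integers(8, 21, size=n).astype(float)
exper = rng.uniform(0, 40, size=n)
u = rng.normal(scale=2.0, size=n)

# Hypothetical coefficients: wage rises with exper at a decreasing rate.
b0, b1, b2, b3 = 2.0, 0.6, 0.30, -0.005
wage = b0 + b1 * educ + b2 * exper + b3 * exper**2 + u

# exper**2 enters as just another column, so ordinary OLS still works.
X = np.column_stack([np.ones(n), educ, exper, exper**2])
bh, *_ = np.linalg.lstsq(X, wage, rcond=None)

# Marginal effect of experience: d(wage)/d(exper) = b2 + 2*b3*exper,
# so the return to one extra year of experience falls as exper grows.
effect_at_5 = bh[2] + 2 * bh[3] * 5
effect_at_25 = bh[2] + 2 * bh[3] * 25
print(effect_at_5, effect_at_25)  # the first exceeds the second
```

This is the sense in which the quadratic specification "allows more flexibility": the effect of experience is no longer forced to be the same constant at every experience level.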
Suppose that, instead, we estimate the SLR:

    wage = b0 + b1 educ + v

It is clear that v = b2 exper + b3 exper² + u. Therefore, in general, v is not mean independent of educ, because educ is correlated with exper; that is, the error term v of the SLR is correlated with the regressor educ. So assumption 2 discussed in lecture 7 is again violated, and OLS estimation of the SLR gives a biased estimator of b1.
