# ECON2007 Practical 2 Notes (Fall 2015 Term)


Original Author: Edmund Wright
Amended by Giacomo Mason & Gavin Kader
October 20, 2016

## Contents

1. Reminder of MLR assumptions
2. Unbiasedness of MLR slope estimate
   - 2.1 Partialling out
   - 2.2 Re-writing $\hat{\beta}_1$
   - 2.3 Proving unbiasedness
3. Variance of MLR slope estimate
   - 3.1 Derivation
   - 3.2 Another way of writing it


## 1 Reminder of MLR assumptions

Recall from the lectures that our MLR assumptions are these:

- **MLR.1 Linear in Parameters:** $y = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k + u$.
- **MLR.2 Random Sample:** $\{(x_{i1}, \dots, x_{ik}, y_i) : i = 1, \dots, n\}$, where the $(x_{i1}, \dots, x_{ik}, y_i)$ are i.i.d.
- **MLR.3 No Perfect Collinearity:** no $x_j$ is constant, and there are no exact linear relationships among the $x_j$ in the sample.
- **MLR.4 Zero Conditional Mean:** $\mathbb{E}[u \mid x_1, \dots, x_k] = 0$.
- **MLR.5 Homoscedasticity:** $\mathbb{E}[u^2 \mid x_1, \dots, x_k] = \sigma^2$.

We will use these at various points in what follows.
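A minimal numerical sketch of what MLR.3 rules out, using NumPy (the variable names and simulated data here are illustrative, not part of the original notes). A regressor that is an exact linear function of another makes the design matrix rank-deficient, which is exactly the failure of "no perfect collinearity":

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = 2 * x1 + 3          # exact linear function of x1: violates MLR.3
x3 = rng.normal(size=n)  # genuinely separate regressor: MLR.3 holds

# Design matrices including an intercept column
X_bad = np.column_stack([np.ones(n), x1, x2])
X_ok = np.column_stack([np.ones(n), x1, x3])

print(np.linalg.matrix_rank(X_bad))  # 2: fewer than 3 columns, rank-deficient
print(np.linalg.matrix_rank(X_ok))   # 3: full column rank
```

With `X_bad`, OLS has no unique solution, which is why MLR.3 is required before the estimators below are even well defined.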
## 2 Unbiasedness of MLR slope estimate

### 2.1 Partialling out

The computation of OLS estimates with multiple variables on the right-hand side becomes increasingly tedious as the number of variables grows. As a 'trick' to make our life easier, we can employ the so-called "partialling-out" algorithm.

Recall that the key idea behind MLR is to estimate the effect of each explanatory variable (e.g. $x_1$) on the outcome $y$ *holding all other $x$ variables constant*. Basically, we want to strip out of $x_1$ all the variation that is due to the other $x$'s, to isolate its effect on $y$. We can do this explicitly in the following way.

We will focus on the case where $k = 2$, that is, where there are two explanatory variables; the argument extends to the general $k$ case. Suppose we want to estimate the following MLR:

$$y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + u_i$$

The partialling-out algorithm works in the following way:

**STEP 1:** Estimate the parameters of the SLR of $x_1$ on $x_2$,

$$x_{i1} = \alpha_0 + \alpha_1 x_{i2} + r_{i1},$$

and save the residuals $\hat{r}_{i1}$.

**STEP 2:** Regress $y$ on the residuals $\hat{r}_{i1}$; the resulting slope coefficient is exactly $\hat{\beta}_1$ from the full MLR above.
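The two steps above can be checked numerically. This sketch (using NumPy and simulated data of my own choosing, not from the notes) runs the full MLR and the partialling-out algorithm side by side and confirms that both yield the same $\hat{\beta}_1$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)            # correlated with x1
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(size=n)

# Full MLR: regress y on (1, x1, x2)
X = np.column_stack([np.ones(n), x1, x2])
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]

# STEP 1: regress x1 on (1, x2) and keep the residuals r1
Z = np.column_stack([np.ones(n), x2])
alpha = np.linalg.lstsq(Z, x1, rcond=None)[0]
r1 = x1 - Z @ alpha

# STEP 2: regress y on r1 (no intercept needed: residuals from a
# regression with an intercept have mean zero)
beta1_pw = (r1 @ y) / (r1 @ r1)

print(beta_full[1], beta1_pw)  # equal up to floating-point error
```

The agreement holds in every sample, not just on average: partialling out is an algebraic identity of OLS (the Frisch-Waugh result), not a statistical approximation.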

