Professor Francesca Molinari, Spring 2010
TAs: Simon Kwok and Tae-Hoon Lim
Economics 320: Introduction to Econometrics
Draft of Suggested Solutions to Problem Set 2

1. Question 2.2 from Wooldridge: In the equation $Y = \beta_0 + \beta_1 X + u$, let $\alpha_0 = E(u)$ and add and subtract $\alpha_0$ on the right-hand side to get
$$Y = (\beta_0 + \alpha_0) + \beta_1 X + (u - \alpha_0).$$
Call the new error $e = u - \alpha_0$, so that $E(e) = 0$. The new intercept is therefore $\beta_0 + \alpha_0$, but the slope is still $\beta_1$.

2. Simple linear regression without a constant. Consider $Y_i = \beta_1 X_i + u_i$.

(a) Let $\tilde{\beta}_1$ denote the least-squares estimate of $\beta_1$, so the residuals are $\tilde{u}_i = Y_i - \tilde{\beta}_1 X_i$ and the residual sum of squares is $\sum \tilde{u}_i^2 = \sum (Y_i - \tilde{\beta}_1 X_i)^2$. We minimize this sum to find $\tilde{\beta}_1$:
$$\min_{\tilde{\beta}_1} \sum (Y_i - \tilde{\beta}_1 X_i)^2.$$
The first-order condition is
$$\frac{d \sum \tilde{u}_i^2}{d \tilde{\beta}_1} = -2 \sum (Y_i - \tilde{\beta}_1 X_i) X_i = -2 \sum (X_i Y_i - \tilde{\beta}_1 X_i^2) = 0
\;\Longrightarrow\; \sum X_i Y_i = \tilde{\beta}_1 \sum X_i^2
\;\Longrightarrow\; \tilde{\beta}_1 = \frac{\sum X_i Y_i}{\sum X_i^2}.$$
Note that since we now have no intercept in the model, the numerator and the denominator are NOT in deviations from the mean!
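As a quick numerical sanity check on the formula in part (a), here is a minimal sketch (the values $\beta_1 = 2$, $n = 100$, and the normal draws are illustrative assumptions, not part of the problem set) that computes $\tilde{\beta}_1 = \sum X_i Y_i / \sum X_i^2$ directly and cross-checks it against NumPy's least-squares solver on the no-intercept design:

```python
# Numerical check of the regression-through-origin formula from part (a).
# beta1 = 2.0 and n = 100 are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, beta1 = 100, 2.0
X = rng.normal(size=n)
u = rng.normal(size=n)
Y = beta1 * X + u

# Closed-form estimator from part (a): sum(X*Y) / sum(X^2).
beta1_tilde = (X * Y).sum() / (X ** 2).sum()

# Cross-check: least squares on the single regressor, with no intercept column.
beta1_lstsq = np.linalg.lstsq(X[:, None], Y, rcond=None)[0][0]

print(beta1_tilde, beta1_lstsq)  # the two estimates agree
```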
(b) Substituting the model into the estimator,
$$\tilde{\beta}_1 = \frac{\sum X_i Y_i}{\sum X_i^2} = \frac{\sum X_i (\beta_1 X_i + u_i)}{\sum X_i^2} = \frac{\beta_1 \sum X_i^2 + \sum X_i u_i}{\sum X_i^2} = \beta_1 + \frac{\sum X_i u_i}{\sum X_i^2},$$
such that
$$E\big[\tilde{\beta}_1 \,\big|\, X\big] = E\!\left[\beta_1 + \frac{\sum X_i u_i}{\sum X_i^2} \,\Big|\, X\right] = \beta_1 + \frac{1}{\sum X_i^2} \sum X_i \, E(u_i \mid X) = \beta_1,$$
since $E(u_i \mid X) = 0$. Next, by the law of iterated expectations,
$$E(\tilde{\beta}_1) = E\big[ E[\tilde{\beta}_1 \mid X] \big] = E[\beta_1] = \beta_1
\;\Longrightarrow\; \mathrm{bias}(\tilde{\beta}_1) = E(\tilde{\beta}_1) - \beta_1 = \beta_1 - \beta_1 = 0.$$

(c) Conditioning on $X$,
$$\mathrm{var}\big(\tilde{\beta}_1 \,\big|\, X\big) = \mathrm{var}(\beta_1 \mid X) + \mathrm{var}\!\left(\frac{\sum X_i u_i}{\sum X_i^2} \,\Big|\, X\right) = \mathrm{var}\!\left(\frac{\sum X_i u_i}{\sum X_i^2} \,\Big|\, X\right),$$
where we used the fact that the covariance between a random variable and a constant is zero (the $X$'s can be treated as constants since we are conditioning on them). Next,
$$\mathrm{var}\big(\tilde{\beta}_1 \,\big|\, X\big) = \frac{\sum X_i^2 \,\mathrm{var}(u_i \mid X)}{\left(\sum X_i^2\right)^2} = \frac{\sigma^2 \sum X_i^2}{\left(\sum X_i^2\right)^2} = \frac{\sigma^2}{\sum X_i^2},$$
where the first equality follows from random sampling (SLR.2) and the second equality follows from homoskedasticity (SLR.5). Note again that the variance in this case is NOT in terms of deviations from the mean. The estimators and their variances depend on the model you specify.

(d) Writing the sampling error with sample averages,
$$\tilde{\beta}_1 = \beta_1 + \frac{\sum X_i u_i}{\sum X_i^2} = \beta_1 + \frac{\frac{1}{n}\sum X_i u_i}{\frac{1}{n}\sum X_i^2},$$
such that
$$\mathrm{plim}\,\tilde{\beta}_1 = \mathrm{plim}\,\beta_1 + \mathrm{plim}\!\left[\frac{\frac{1}{n}\sum X_i u_i}{\frac{1}{n}\sum X_i^2}\right] = \beta_1 + \frac{0}{\tau^2} = \beta_1,$$
since $\mathrm{plim}\big(\frac{1}{n}\sum X_i u_i\big) = 0$ and $\mathrm{plim}\big(\frac{1}{n}\sum X_i^2\big) = \tau^2 \equiv E(X_i^2) > 0$. Thus $\mathrm{plim}\,\tilde{\beta}_1 = \beta_1$ and $\tilde{\beta}_1$ is a consistent estimator of $\beta_1$.
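Parts (b)–(d) can all be checked by simulation. The sketch below (all settings, including $\beta_1 = 2$, $\sigma = 1.5$, and the fixed normal design, are illustrative assumptions) holds $X$ fixed across replications, so the simulated mean and variance of $\tilde{\beta}_1$ should match $\beta_1$ and $\sigma^2/\sum X_i^2$; the final loop shows the estimates tightening around $\beta_1$ as $n$ grows:

```python
# Monte Carlo sketch of parts (b)-(d): unbiasedness, the conditional
# variance sigma^2 / sum(X_i^2), and consistency. Settings are illustrative.
import numpy as np

rng = np.random.default_rng(1)
beta1, sigma, n, reps = 2.0, 1.5, 50, 20_000

X = rng.normal(size=n)           # fixed design: we condition on X
denom = (X ** 2).sum()

u = rng.normal(scale=sigma, size=(reps, n))
Y = beta1 * X + u                # broadcasting: each row is one sample
estimates = (Y @ X) / denom      # beta1_tilde for every replication

print("mean of estimates:   ", estimates.mean())      # ~ beta1 (unbiased)
print("simulated variance:  ", estimates.var())       # ~ sigma^2 / sum(X^2)
print("theoretical variance:", sigma ** 2 / denom)

# Consistency (d): the estimator concentrates around beta1 as n grows.
for n_big in (50, 500, 5000):
    Xb = rng.normal(size=n_big)
    ub = rng.normal(scale=sigma, size=n_big)
    bt = (Xb * (beta1 * Xb + ub)).sum() / (Xb ** 2).sum()
    print(n_big, bt)
```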
(e) $\hat{\beta}_1$, the OLS slope estimator from the model with an intercept, is a linear unbiased estimator of $\beta_1$ for any value of $\beta_0$, including $\beta_0 = 0$. However, for the model $Y_i = \beta_1 X_i + u_i$, $\tilde{\beta}_1$ is the best linear unbiased estimator (BLUE). Therefore, by the Gauss–Markov theorem,
$$\mathrm{var}\big(\tilde{\beta}_1 \,\big|\, X\big) \le \mathrm{var}\big(\hat{\beta}_1 \,\big|\, X\big)$$
(a simulation sketch after part (f) illustrates this comparison).

(f) $Y_i = \beta_0 + \beta_1 X_i + u_i$. We derived $\hat{\beta}_1$ ...
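To see the variance ranking in part (e) numerically, here is a minimal sketch (illustrative settings with true $\beta_0 = 0$, not taken from the problem set). It uses a fixed design with nonzero mean, so that $\sum X_i^2 = \sum (X_i - \bar{X})^2 + n\bar{X}^2$ makes the gap between the two conditional variances visible:

```python
# Simulation of part (e): with beta0 = 0, both beta1_tilde (no intercept)
# and beta1_hat (with intercept) are unbiased, but beta1_tilde has weakly
# smaller variance. All settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
beta1, sigma, n, reps = 2.0, 1.5, 30, 20_000

X = rng.normal(loc=1.0, size=n)   # fixed design with nonzero mean
Xbar = X.mean()
denom_tilde = (X ** 2).sum()
denom_hat = ((X - Xbar) ** 2).sum()

u = rng.normal(scale=sigma, size=(reps, n))
Y = beta1 * X + u                  # true intercept is beta0 = 0

b_tilde = (Y @ X) / denom_tilde
b_hat = ((Y - Y.mean(axis=1, keepdims=True)) @ (X - Xbar)) / denom_hat

print("var(beta1_tilde):", b_tilde.var())  # ~ sigma^2 / sum(X^2), smaller
print("var(beta1_hat):  ", b_hat.var())    # ~ sigma^2 / sum((X - Xbar)^2)
```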