Econometric take home APPS_Part_1

Chapter 3  Least Squares

Exercises

1. Let $X = \begin{bmatrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_n \end{bmatrix}$.

(a) The normal equations are given by (3-12), $X'e = 0$ (we drop the minus sign); hence for each of the columns of $X$, $x_k$, we know that $x_k'e = 0$. This implies that $\sum_{i=1}^n e_i = 0$ and $\sum_{i=1}^n x_i e_i = 0$.

(b) Use $\sum_{i=1}^n e_i = 0$ to conclude from the first normal equation that $a = \bar{y} - b\bar{x}$.

(c) We know that $\sum_{i=1}^n e_i = 0$ and $\sum_{i=1}^n x_i e_i = 0$. It follows that $\sum_{i=1}^n (x_i - \bar{x}) e_i = 0$, because $\bar{x}\sum_{i=1}^n e_i = 0$. Substitute $e_i = y_i - a - bx_i$ to obtain $\sum_{i=1}^n (x_i - \bar{x})(y_i - a - bx_i) = 0$. Since $a = \bar{y} - b\bar{x}$, this is $\sum_{i=1}^n (x_i - \bar{x})\left[(y_i - \bar{y}) - b(x_i - \bar{x})\right] = 0$, so
$$b = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^n (x_i - \bar{x})^2}.$$

(d) The first derivative vector of $e'e$ is $-2X'e$ (the normal equations). The second derivative matrix is $\partial^2(e'e)/\partial b\,\partial b' = 2X'X$. We need to show that this matrix is positive definite. The diagonal elements are $2n$ and $2\sum_{i=1}^n x_i^2$, which are clearly both positive. The determinant is
$$(2n)\left(2\sum_{i=1}^n x_i^2\right) - \left(2\sum_{i=1}^n x_i\right)^2 = 4n\sum_{i=1}^n x_i^2 - 4(n\bar{x})^2 = 4n\left[\sum_{i=1}^n x_i^2 - n\bar{x}^2\right] = 4n\sum_{i=1}^n (x_i - \bar{x})^2,$$
which is positive unless all $x_i$ are equal. Note that a much simpler proof appears after (3-6).

2. Write $c$ as $b + (c - b)$. Then, the sum of squared residuals based on $c$ is
$$(y - Xc)'(y - Xc) = [y - X(b + (c - b))]'[y - X(b + (c - b))] = [(y - Xb) + X(c - b)]'[(y - Xb) + X(c - b)]$$
$$= (y - Xb)'(y - Xb) + (c - b)'X'X(c - b) + 2(c - b)'X'(y - Xb).$$
But the third term is zero, since $2(c - b)'X'(y - Xb) = 2(c - b)'X'e = 0$. Therefore,
$$(y - Xc)'(y - Xc) = e'e + (c - b)'X'X(c - b), \quad \text{or} \quad (y - Xc)'(y - Xc) - e'e = (c - b)'X'X(c - b).$$
The right-hand side can be written as $d'd$ where $d = X(c - b)$, so it is necessarily nonnegative. This confirms what we knew at the outset: least squares is least squares.

3. The residual vector in the regression of $y$ on $X$ is $M_X y = [I - X(X'X)^{-1}X']y$.
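The algebra in Exercises 1 and 2 can be checked numerically. The sketch below uses simulated data (the sample size and coefficient values are illustrative, not from the text): it fits the simple regression via the normal equations, then verifies the orthogonality conditions, the slope formula, and the sum-of-squares decomposition for an arbitrary alternative coefficient vector $c$.

```python
import numpy as np

# Simulated data -- purely illustrative values, not from the exercises.
rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

# Least squares via the normal equations X'Xb = X'y.
b_vec = np.linalg.solve(X.T @ X, X.T @ y)
a, b = b_vec
e = y - X @ b_vec

# (a) Each column of X is orthogonal to the residuals.
assert abs(e.sum()) < 1e-8
assert abs(x @ e) < 1e-8

# (b) The intercept satisfies a = ybar - b * xbar.
assert np.isclose(a, y.mean() - b * x.mean())

# (c) The slope equals the ratio of sums of deviations from means.
b_dev = ((x - x.mean()) @ (y - y.mean())) / ((x - x.mean()) @ (x - x.mean()))
assert np.isclose(b, b_dev)

# Exercise 2: for any other c, (y-Xc)'(y-Xc) - e'e = (c-b)'X'X(c-b) >= 0.
c = b_vec + np.array([0.3, -0.5])   # arbitrary perturbation of b
lhs = (y - X @ c) @ (y - X @ c) - e @ e
rhs = (c - b_vec) @ X.T @ X @ (c - b_vec)
assert np.isclose(lhs, rhs) and rhs >= 0
```

Because the decomposition in Exercise 2 is an algebraic identity, the check holds for any choice of `c`, not just the perturbation used here.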
The residual vector in the regression of $y$ on $Z = XP$, where $P$ is nonsingular, is
$$M_Z y = [I - Z(Z'Z)^{-1}Z']y = [I - XP((XP)'(XP))^{-1}(XP)']y = [I - XPP^{-1}(X'X)^{-1}(P')^{-1}P'X']y = M_X y.$$
Since the residual vectors are identical, the fits must be as well. Changing the units of measurement of the regressors is equivalent to postmultiplying by a diagonal $P$ matrix whose $k$th diagonal element is the scale factor to be applied to the $k$th variable (1 if it is to be unchanged). It follows from the result above that this will not change the fit of the regression.

4. In the regression of $y$ on $i$ and $X$, the coefficients on $X$ are $b = (X'M^0X)^{-1}X'M^0y$, where $M^0 = I - i(i'i)^{-1}i'$ is the matrix which transforms observations into deviations from their column means. Since $M^0$ is idempotent and symmetric, we may also write the preceding as $[(X'M^0)(M^0X)]^{-1}(X'M^0)(M^0y)$, which implies that the
regression of $M^0y$ on $M^0X$ produces the least squares slopes. If only $X$ is transformed to deviations, we would compute $[(X'M^0)(M^0X)]^{-1}(X'M^0)y$, but of course this is identical. However, if only $y$ is transformed, the result is $(X'X)^{-1}X'M^0y$, which is likely to be quite different.
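Exercises 3 and 4 can likewise be confirmed numerically. The sketch below uses simulated data with nonzero means (so that demeaning matters; the specific values are illustrative): it checks that residuals are invariant to rescaling the regressors by a nonsingular $P$, and that demeaning both $y$ and $X$, or $X$ alone, reproduces the full-regression slopes, while demeaning $y$ alone does not.

```python
import numpy as np

# Simulated data with nonzero means -- illustrative values only.
rng = np.random.default_rng(1)
n, k = 40, 2
X = rng.normal(size=(n, k)) + 5.0
y = rng.normal(size=n) + 3.0

def ols(A, b):
    # Least squares coefficients (A'A)^{-1} A'b.
    return np.linalg.solve(A.T @ A, A.T @ b)

# Exercise 3: residuals are unchanged when regressors become Z = XP,
# e.g. a diagonal P that rescales the units of each variable.
P = np.diag([100.0, 0.01])
e_X = y - X @ ols(X, y)
Z = X @ P
e_Z = y - Z @ ols(Z, y)
assert np.allclose(e_X, e_Z)

# Exercise 4: M0 transforms observations into deviations from their means.
M0 = np.eye(n) - np.ones((n, n)) / n
slopes_full = ols(np.column_stack([np.ones(n), X]), y)[1:]  # drop intercept
slopes_both = ols(M0 @ X, M0 @ y)   # both y and X demeaned
slopes_x    = ols(M0 @ X, y)        # only X demeaned -- identical
slopes_y    = ols(X, M0 @ y)        # only y demeaned -- generally different
assert np.allclose(slopes_full, slopes_both)
assert np.allclose(slopes_full, slopes_x)
assert not np.allclose(slopes_full, slopes_y)
```

The last assertion holds here because the columns of $X$ and $y$ all have nonzero means; with already-centered data all four computations would coincide.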