102c_lecture4

1 Partitioned Matrix

Consider the following matrix:
$$
A_{(p \times q)} = \begin{pmatrix}
a_{11} & a_{12} & \dots & a_{1q} \\
a_{21} & a_{22} & \dots & a_{2q} \\
\vdots & \vdots & \ddots & \vdots \\
a_{p1} & a_{p2} & \dots & a_{pq}
\end{pmatrix}
$$
It can be "partitioned" as
$$
A = \begin{pmatrix}
A_{11\,(r \times s)} & A_{12\,(r \times (q-s))} \\
A_{21\,((p-r) \times s)} & A_{22\,((p-r) \times (q-s))}
\end{pmatrix}
$$
For example,
$$
A = \begin{pmatrix} 1 & 2 & 5 \\ 3 & 1 & 1 \\ 0 & 7 & 6 \end{pmatrix}
$$
can be partitioned as
$$
A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix},
$$
where $A_{11} = (1)$, $A_{12} = \begin{pmatrix} 2 & 5 \end{pmatrix}$, $A_{21} = \begin{pmatrix} 3 \\ 0 \end{pmatrix}$, and $A_{22} = \begin{pmatrix} 1 & 1 \\ 7 & 6 \end{pmatrix}$.

For partitioned matrices we have
$$
A' = \begin{pmatrix} A_{11}' & A_{21}' \\ A_{12}' & A_{22}' \end{pmatrix}
$$
If $A_{11}$ and $A_{22}$ are square matrices we also have:
$$
A^{-1} = \begin{pmatrix}
Q^{-1} & -Q^{-1} A_{12} A_{22}^{-1} \\
-A_{22}^{-1} A_{21} Q^{-1} & A_{22}^{-1} + A_{22}^{-1} A_{21} Q^{-1} A_{12} A_{22}^{-1}
\end{pmatrix} \qquad (1)
$$
where $Q = A_{11} - A_{12} A_{22}^{-1} A_{21}$, if $Q$ and $A_{22}$ are non-singular.

2 Partitioned Regression

Consider the usual model
$$
y = X\beta + u
$$
and partition $X = (X_1 \mid X_2)$. Here $X_1$ has dimension $N \times k_1$ and $X_2$ has dimension $N \times (k - k_1)$. Conformably with $X$, partition $\beta$ into
$$
\beta = \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}
$$
where $\beta_1$ has dimension $k_1 \times 1$ and $\beta_2$ has dimension $(k - k_1) \times 1$. Hence we can write:
$$
y = X_1 \beta_1 + X_2 \beta_2 + u
$$
We know the formula for the OLS estimate, $\hat\beta = (X'X)^{-1} X'y$. How can we get OLS expressions for $\hat\beta_1$ and $\hat\beta_2$? The answer is through partitioned regression formulae.
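Since the derivation below leans directly on the block-inverse formula (1), it is worth checking that formula numerically before using it. The following is a minimal sketch with numpy, applied to the $3 \times 3$ example from Section 1; the numerical check itself is an addition to these notes, not part of the original derivation.

```python
import numpy as np

# The 3x3 example from Section 1, with blocks A11 (1x1), A12 (1x2),
# A21 (2x1), A22 (2x2).
A = np.array([[1., 2., 5.],
              [3., 1., 1.],
              [0., 7., 6.]])
A11, A12 = A[:1, :1], A[:1, 1:]
A21, A22 = A[1:, :1], A[1:, 1:]

# Q = A11 - A12 A22^{-1} A21; formula (1) requires Q and A22 non-singular
A22_inv = np.linalg.inv(A22)
Q_inv = np.linalg.inv(A11 - A12 @ A22_inv @ A21)

# Assemble A^{-1} block by block, exactly as in formula (1)
A_inv = np.block([
    [Q_inv,                  -Q_inv @ A12 @ A22_inv],
    [-A22_inv @ A21 @ Q_inv, A22_inv + A22_inv @ A21 @ Q_inv @ A12 @ A22_inv],
])

assert np.allclose(A_inv, np.linalg.inv(A))  # matches the direct inverse
```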
Note that the OLS objective is to minimize
$$
\hat u'\hat u = \left( y - X_1 \hat\beta_1 - X_2 \hat\beta_2 \right)' \left( y - X_1 \hat\beta_1 - X_2 \hat\beta_2 \right)
$$
The first order conditions are
$$
X_1'X_1 \hat\beta_1 + X_1'X_2 \hat\beta_2 = X_1'y
$$
$$
X_2'X_1 \hat\beta_1 + X_2'X_2 \hat\beta_2 = X_2'y
$$
which can be written using partitioned matrices as:
$$
\begin{pmatrix} X_1'X_1 & X_1'X_2 \\ X_2'X_1 & X_2'X_2 \end{pmatrix}
\begin{pmatrix} \hat\beta_1 \\ \hat\beta_2 \end{pmatrix}
= \begin{pmatrix} X_1'y \\ X_2'y \end{pmatrix}
$$
(which is simply $X'X\hat\beta = X'y$ using partitioned matrices). To obtain solutions for $\hat\beta_1$ and $\hat\beta_2$ we need to invert the partitioned matrix, i.e.
$$
\begin{pmatrix} \hat\beta_1 \\ \hat\beta_2 \end{pmatrix}
= \begin{pmatrix} X_1'X_1 & X_1'X_2 \\ X_2'X_1 & X_2'X_2 \end{pmatrix}^{-1}
\begin{pmatrix} X_1'y \\ X_2'y \end{pmatrix}
$$
Using (1), we obtain the expressions:
$$
\begin{pmatrix} \hat\beta_1 \\ \hat\beta_2 \end{pmatrix}
= \begin{pmatrix} (X_1'M_2X_1)^{-1} X_1'M_2\, y \\ (X_2'M_1X_2)^{-1} X_2'M_1\, y \end{pmatrix}
$$
where $M_j = I - X_j (X_j'X_j)^{-1} X_j'$ for $j = 1, 2$, so each $M_j$ is symmetric and idempotent. Hence we can rewrite this as
$$
\begin{pmatrix} \hat\beta_1 \\ \hat\beta_2 \end{pmatrix}
= \begin{pmatrix} \left( (M_2X_1)'M_2X_1 \right)^{-1} (M_2X_1)'M_2\, y \\ \left( (M_1X_2)'M_1X_2 \right)^{-1} (M_1X_2)'M_1\, y \end{pmatrix}
= \begin{pmatrix} (\tilde X_1'\tilde X_1)^{-1} \tilde X_1'\,\tilde y_2 \\ (\tilde X_2'\tilde X_2)^{-1} \tilde X_2'\,\tilde y_1 \end{pmatrix}
$$
i.e., $\hat\beta_1$ can be obtained from a (univariate) regression of $\tilde y_2$ onto $\tilde X_1$, and $\hat\beta_2$ can be obtained from a (univariate) regression of $\tilde y_1$ onto $\tilde X_2$. But what are $\tilde y_2$, $\tilde X_1$, $\tilde y_1$ and $\tilde X_2$?
$$
\tilde y_2 = M_2\, y = \left( I - X_2(X_2'X_2)^{-1}X_2' \right) y = y - X_2(X_2'X_2)^{-1}X_2'y = y - X_2 \hat\beta_{y,X_2} = \hat u_{y,X_2}
$$
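As a concrete check of these expressions, here is a small simulation sketch; the design matrices, coefficients, and seed are all made up for illustration. It verifies that the partitioned formula for $\hat\beta_1$ reproduces the corresponding block of the full OLS estimate $\hat\beta = (X'X)^{-1}X'y$.

```python
import numpy as np

# Made-up simulated design: N observations, k1 regressors in X1, 2 in X2
rng = np.random.default_rng(0)
N, k1 = 200, 2
X1 = rng.normal(size=(N, k1))
X2 = rng.normal(size=(N, 2))
y = X1 @ np.array([1.0, -2.0]) + X2 @ np.array([0.5, 3.0]) + rng.normal(size=N)

# Full OLS on X = (X1 | X2): solve X'X b = X'y
X = np.hstack([X1, X2])
b = np.linalg.solve(X.T @ X, X.T @ y)

# Partitioned expression: b1 = (X1' M2 X1)^{-1} X1' M2 y,
# with M2 = I - X2 (X2'X2)^{-1} X2'
M2 = np.eye(N) - X2 @ np.linalg.solve(X2.T @ X2, X2.T)
b1 = np.linalg.solve(X1.T @ M2 @ X1, X1.T @ M2 @ y)

assert np.allclose(b1, b[:k1])  # agrees with the first k1 entries of b
```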
and
$$
\tilde X_1 = M_2 X_1 = \left( I - X_2(X_2'X_2)^{-1}X_2' \right) X_1 = X_1 - X_2(X_2'X_2)^{-1}X_2'X_1 = X_1 - X_2 \hat\beta_{X_1,X_2} = \hat u_{X_1,X_2}
$$
Note that $\hat u_{y,X_2}$ is the residual of a regression of $y$ onto $X_2$, while $\hat u_{X_1,X_2}$ is the residual of a regression of $X_1$ onto $X_2$. The OLS estimate $\hat\beta_1$ is obtained by regressing $\hat u_{y,X_2}$ onto $\hat u_{X_1,X_2}$. What's the intuition? $\hat u_{y,X_2}$ is the variability of $y$ that is not explained by $X_2$, while $\hat u_{X_1,X_2}$ is the variability of $X_1$ that is not explained by $X_2$.
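This residual-on-residual construction is the Frisch-Waugh-Lovell theorem. A self-contained sketch of it follows, on the same made-up simulated design as above; the `residuals` helper is introduced here purely for illustration.

```python
import numpy as np

# Same made-up simulated design as in the previous sketch
rng = np.random.default_rng(0)
N = 200
X1 = rng.normal(size=(N, 2))
X2 = rng.normal(size=(N, 2))
y = X1 @ np.array([1.0, -2.0]) + X2 @ np.array([0.5, 3.0]) + rng.normal(size=N)

def residuals(target, regressors):
    # OLS residuals from regressing `target` on `regressors`
    coef = np.linalg.solve(regressors.T @ regressors, regressors.T @ target)
    return target - regressors @ coef

u_y_X2 = residuals(y, X2)    # variability of y  not explained by X2
u_X1_X2 = residuals(X1, X2)  # variability of X1 not explained by X2

# Regressing the y-residuals onto the X1-residuals recovers beta1_hat
b1 = np.linalg.solve(u_X1_X2.T @ u_X1_X2, u_X1_X2.T @ u_y_X2)

X = np.hstack([X1, X2])
b_full = np.linalg.solve(X.T @ X, X.T @ y)
assert np.allclose(b1, b_full[:2])  # same as the full-regression estimate
```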