Econometrics-I-3

Part 3: Least Squares Algebra: partitioning, residuals, and the residual maker M


Partitioning
- Population: y = E[y|x] + ε = conditional mean + disturbance.
- Sample: y = Xb + e = projection + residual.
  (Note: Xb is the projection of y into the column space of X.)

Algebraic Results - 2
- A "residual maker": M = I - X(X'X)^-1 X'.
- e = y - Xb = y - X(X'X)^-1 X'y = My.
- My = the residuals that result when y is regressed on X.
- MX = 0. (This result is fundamental!) How do we interpret it in terms of residuals? When a column of X is regressed on X, we get a perfect fit and zero residuals.
- Therefore My = MXb + Me = Me = e. (You should be able to prove this.)
- y = Py + My, where P = X(X'X)^-1 X' = I - M, and PM = MP = 0.
- Py is the projection of y into the column space of X.

The M Matrix
- M = I - X(X'X)^-1 X' is an n x n matrix.
- M is symmetric: M = M'.
- M is idempotent: MM = M (just multiply it out).
- M is singular: M^-1 does not exist. (We will prove this later as a side result in another derivation.)

Results when X Contains a Constant Term
- X = [1, x2, ..., xK]; the first column of X is a column of ones.
- Since X'e = 0, x1'e = 0, i.e. the residuals sum to zero (x1 is the column of ones).
- Define i = [1, 1, ..., 1]', a column of n ones. Then i'y = Σ y_i = nȳ.
- y = Xb + e implies i'y = i'Xb + i'e = i'Xb. Dividing by n gives ȳ = x̄'b: the regression line passes through the means.
- These results do not apply if the model has no constant term.

Least Squares Algebra; Least Squares; Residuals; Least Squares Residuals
[The worked matrix derivations shown on these four slides are not preserved in this text extract.]

Least Squares Algebra - 3
- M = I - X(X'X)^-1 X', e = My. M is n x n: potentially huge.

Least Squares Algebra - 4
- MX = 0.

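The residual maker and projection results in "Algebraic Results - 2" can be verified numerically. Below is a minimal NumPy sketch, assuming a small simulated data set; the sample size, coefficients, and random data are illustrative and are not taken from the slides.

```python
import numpy as np

# Illustrative simulated data (not from the slides); any full-rank n x K matrix X works the same way.
rng = np.random.default_rng(0)
n, K = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)    # least squares coefficients b = (X'X)^-1 X'y
e = y - X @ b                            # least squares residuals

M = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T   # residual maker M = I - X(X'X)^-1 X'
P = np.eye(n) - M                                  # projection matrix P = X(X'X)^-1 X'

print(np.allclose(M @ y, e))           # My = e: M makes residuals
print(np.allclose(M @ X, 0))           # MX = 0: regressing columns of X on X gives zero residuals
print(np.allclose(P @ y + M @ y, y))   # y = Py + My: projection + residual
print(np.allclose(P @ M, 0))           # PM = 0
```
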
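A similar sketch checks the three properties of the M matrix listed above (symmetric, idempotent, singular); singularity shows up here as rank(M) = n - K < n rather than as an explicit failed inversion. The data are again simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])   # any full-rank X will do

M = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T

print(np.allclose(M, M.T))                 # symmetric: M = M'
print(np.allclose(M @ M, M))               # idempotent: MM = M
print(np.linalg.matrix_rank(M) == n - K)   # rank n - K < n, so M is singular and M^-1 does not exist
```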

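The constant-term results (residuals sum to zero, the fitted line passes through the means, and neither need hold without a constant) can be checked the same way. This is a hedged sketch on illustrative simulated data, not the slides' example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # first column: constant term
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b

print(np.isclose(e.sum(), 0.0))                   # 1'e = 0: residuals sum to zero
print(np.isclose(y.mean(), X.mean(axis=0) @ b))   # ybar = xbar'b: line passes through the means

# Without the constant term, neither result needs to hold:
X0 = X[:, 1:]                                     # drop the column of ones
b0 = np.linalg.solve(X0.T @ X0, X0.T @ y)
print((y - X0 @ b0).sum())                        # generally nonzero
```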

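The remark in "Least Squares Algebra - 3" that M is n x n and potentially huge is a computational point: for large n one never forms M explicitly, and the residuals are computed directly as e = y - Xb. A sketch under that reading (the sample size and the memory figure are illustrative):

```python
import numpy as np

# For n = 100,000 observations, M would be a 100,000 x 100,000 matrix (about 80 GB of doubles),
# so residuals are computed as e = y - Xb rather than as e = My.
rng = np.random.default_rng(2)
n, K = 100_000, 5
X = rng.normal(size=(n, K))
y = X @ rng.normal(size=K) + rng.normal(size=n)

b, *_ = np.linalg.lstsq(X, y, rcond=None)   # least squares without forming M
e = y - X @ b                               # identical (up to rounding) to M @ y
print(e.shape, np.abs(X.T @ e).max())       # X'e = 0 still holds, up to floating-point error
```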
