Part 6: Finite Sample Properties of LS

Specification Errors (continued)

Omitting relevant variables: smaller variance but positive bias. If the bias is small, we may still favor the short regression. (A free lunch?) Suppose X1'X2 = 0. Then the bias goes away. Interpretation: the omitted information is not "wrong," it is irrelevant; b1 is the same as b1.2.
(25/34)

Specification Errors-2

Including superfluous variables: just reverse the results. Including superfluous variables increases variance. (The cost of not using information.) It does not cause a bias, because if the variables in X2 are truly superfluous, then β2 = 0, so E[b1.2] = β1.
(26/34)

Linear Restrictions

Context: How do linear restrictions affect the properties of the least squares estimator?
Model: y = Xβ + ε
Theory (information): Rβ - q = 0
Restricted least squares estimator:
    b* = b - (X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} (Rb - q)
Expected value:
    E[b*] = β - (X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} (Rβ - q)
Variance:
    Var[b*] = σ²(X'X)^{-1} - σ²(X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} R (X'X)^{-1}
            = Var[b] - a nonnegative definite matrix ≤ Var[b]
Implication: (as before) nonsample information reduces the variance of the estimator.
(27/34)

Interpretation

Case 1: The theory is correct: Rβ - q = 0 (the restrictions do hold).
    b* is unbiased.
    Var[b*] is smaller than Var[b]. How do we know this?
Case 2: The theory is incorrect: Rβ - q ≠ 0 (the restrictions do not hold).
    b* is biased. What does this mean?
    Var[b*] is still smaller than Var[b].
(28/34)

Restrictions and Information

How do we interpret this important result?
    The theory is "information."
    Bad information leads us away from "the truth."
    Any information, good or bad, makes us more certain of our answer. In this context, any information reduces variance.
    What about ignoring the information?
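The restricted least squares formulas above can be checked numerically. The sketch below uses hypothetical simulated data (NumPy assumed available) to impose a single true restriction Rβ = q, verify that the restricted estimator satisfies the restriction exactly, and confirm that Var[b] - Var[b*] is nonnegative definite, as the variance formula on the slide implies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design: n observations, K = 3 regressors
n, K = 200, 3
X = rng.normal(size=(n, K))
beta = np.array([1.0, 2.0, 2.0])          # true coefficients (restriction below holds)
sigma2 = 4.0
y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

# Unrestricted least squares: b = (X'X)^{-1} X'y
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y

# One linear restriction R beta = q, here beta_2 - beta_3 = 0
R = np.array([[0.0, 1.0, -1.0]])
q = np.array([0.0])

# Restricted estimator: b* = b - (X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} (Rb - q)
middle = np.linalg.inv(R @ XtX_inv @ R.T)
b_star = b - XtX_inv @ R.T @ middle @ (R @ b - q)

# Variance matrices from the slide's formulas
var_b = sigma2 * XtX_inv
var_b_star = var_b - sigma2 * XtX_inv @ R.T @ middle @ R @ XtX_inv

# Var[b] - Var[b*] should be nonnegative definite: all eigenvalues >= 0
diff_eigs = np.linalg.eigvalsh(var_b - var_b_star)
print(np.all(diff_eigs >= -1e-10))   # True
print(np.allclose(R @ b_star, q))    # restriction holds exactly: True
```

Note that R b* = q holds by construction, whether or not the restriction is true in the population; only the bias of b* depends on whether Rβ - q = 0.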
    Not using the correct information does not lead us away from "the truth."
    Not using the information foregoes the variance reduction, i.e., it does not use the ability to reduce "uncertainty."
(29/34)

Gauss–Markov Theorem

A theorem of Gauss and Markov: least squares is the minimum variance linear unbiased estimator (MVLUE).
    1. Linear estimator
    2. Unbiased: E[b | X] = β
Theorem: Var[b* | X] - Var[b | X] is nonnegative definite for any other linear and unbiased estimator b* that is not equal to b.
Definition: b is efficient in this class of estimators.
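The Gauss–Markov claim can be illustrated with a competing linear unbiased estimator. A minimal sketch, using a hypothetical weighted estimator b~ = (X'WX)^{-1} X'W y: since (X'WX)^{-1} X'W X = I, b~ is unbiased for any positive-definite W, and the theorem says its conditional variance can never beat that of OLS.

```python
import numpy as np

rng = np.random.default_rng(1)

n, K = 100, 2
X = rng.normal(size=(n, K))
sigma2 = 1.0

# OLS: b = (X'X)^{-1} X'y, with conditional variance sigma^2 (X'X)^{-1}
XtX_inv = np.linalg.inv(X.T @ X)
var_ols = sigma2 * XtX_inv

# A competing linear estimator b~ = A y with A = (X'WX)^{-1} X'W for an
# arbitrary positive-definite diagonal weight matrix W (hypothetical choice).
# Because A X = I, b~ is unbiased: E[b~ | X] = A X beta = beta.
w = np.exp(rng.normal(size=n))          # positive weights
W = np.diag(w)
A = np.linalg.inv(X.T @ W @ X) @ X.T @ W
var_alt = sigma2 * A @ A.T              # Var[b~ | X] = sigma^2 A A'

# Gauss-Markov: Var[b~ | X] - Var[b | X] is nonnegative definite
eigs = np.linalg.eigvalsh(var_alt - var_ols)
print(np.all(eigs >= -1e-10))           # True
```

The check works for any unbiased linear competitor A y with A X = I; the diagonal-weight choice here is just one convenient example.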
Fall '10, H. Bierens
Key terms: Econometrics, Least Squares, Standard Deviation, Variance, Mean squared error, Bias of an estimator