# Econometrics I, Part 6: Finite Sample Properties of Least Squares

# Smaller variance, but positive bias

If the bias is small, we may still favor the short regression: it trades a little bias for a smaller variance. (A free lunch?) Suppose X1'X2 = 0. Then the bias goes away; b1 is identical to b1.2. Interpretation: the omitted information is not "right" or "wrong" here; it is irrelevant.

# Specification Errors (2): Superfluous Variables

Including superfluous variables just reverses the results: it increases variance (the cost of not using the information that the extra coefficients are zero) but causes no bias. If the variables in X2 are truly superfluous, then β2 = 0, so E[b1.2] = β1.

# Linear Restrictions

Context: how do linear restrictions affect the properties of the least squares estimator?

Model: y = Xβ + ε

Theory (information): Rβ - q = 0

Restricted least squares estimator:

b* = b - (X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} (Rb - q)

Expected value:

E[b*] = β - (X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} (Rβ - q)

Variance:

Var[b*] = σ²(X'X)^{-1} - σ²(X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} R (X'X)^{-1}
        = Var[b] - (a nonnegative definite matrix) ≤ Var[b]

Implication: as before, nonsample information reduces the variance of the estimator.

# Interpretation

Case 1: the theory is correct, Rβ - q = 0 (the restrictions do hold).
- b* is unbiased.
- Var[b*] is smaller than Var[b]. (How do we know this?)

Case 2: the theory is incorrect, Rβ - q ≠ 0 (the restrictions do not hold).
- b* is biased. (What does this mean?)
- Var[b*] is still smaller than Var[b].

# Restrictions and Information

How do we interpret this important result?
- The theory is "information."
- Bad information leads us away from "the truth."
- Any information, good or bad, makes us more certain of our answer. In this context, any information reduces variance.

What about ignoring the information?
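The restricted least squares estimator can be checked numerically. The sketch below is my own illustration, not part of the lecture (the helper name `restricted_ls` is invented). One useful property falls out of the algebra: by construction the restricted estimator satisfies the restriction exactly, R b* = q.

```python
import numpy as np

def restricted_ls(X, y, R, q):
    """Restricted LS: b* = b - (X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} (Rb - q)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y                                  # unrestricted OLS
    A = XtX_inv @ R.T @ np.linalg.inv(R @ XtX_inv @ R.T)   # correction direction
    return b - A @ (R @ b - q), b

rng = np.random.default_rng(0)
n, k = 200, 3
X = rng.normal(size=(n, k))
beta = np.array([1.0, 2.0, 3.0])
y = X @ beta + rng.normal(size=n)

# One linear restriction: beta_2 + beta_3 = 5 (true for the beta above)
R = np.array([[0.0, 1.0, 1.0]])
q = np.array([5.0])

b_star, b = restricted_ls(X, y, R, q)
print(R @ b_star)  # the restricted estimator satisfies Rb* = q exactly
```

Note that `R @ b` (the unrestricted estimate) only satisfies the restriction approximately, while `R @ b_star` hits q up to floating-point rounding.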
Not using correct information does not lead us away from "the truth," but it forgoes the variance reduction: we give up the ability to reduce "uncertainty."

# Gauss-Markov Theorem

A theorem of Gauss and Markov: least squares is the minimum variance linear unbiased estimator (MVLUE).

1. Linear estimator.
2. Unbiased: E[b | X] = β.

Theorem: Var[b* | X] - Var[b | X] is nonnegative definite for any other linear, unbiased estimator b* that is not equal to b.

Definition: b is efficient in this class of estimators.
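The Gauss-Markov result can be illustrated by Monte Carlo. This is my own sketch, not from the slides: a weighted least squares estimator with arbitrary fixed weights is also linear in y and unbiased, but under homoskedastic errors its sampling variance must exceed that of OLS.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([1.0, 2.0])
W = np.diag(rng.uniform(0.5, 2.0, size=n))  # arbitrary fixed positive weights

ols, weighted = [], []
for _ in range(2000):
    y = X @ beta + rng.normal(size=n)  # homoskedastic errors: OLS is BLUE
    ols.append(np.linalg.solve(X.T @ X, X.T @ y)[1])            # OLS slope
    weighted.append(np.linalg.solve(X.T @ W @ X, X.T @ W @ y)[1])  # WLS slope

# Both estimators are unbiased for beta_2 = 2; OLS has the smaller variance.
print(np.mean(ols), np.mean(weighted))
print(np.var(ols), np.var(weighted))
```

The weighted estimator is a perfectly legitimate linear unbiased estimator; it simply wastes efficiency because the weights do not match the (constant) error variance.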