We need to estimate E[(X'X)^{-1}]. We will use the only information we have, X'X, itself.
Part 6: Finite Sample Properties of LS

Specification Errors (1): Omitting Relevant Variables

Suppose the correct model is y = X1β1 + X2β2 + ε, i.e., two sets of variables, and we compute least squares omitting X2. Some easily proved results:

Var[b1] is smaller than Var[b1.2]. (The latter is the northwest submatrix of the full covariance matrix; the proof uses the residual maker, again.) That is, you get a smaller variance when you omit X2. One interpretation: omitting X2 amounts to using extra information (β2 = 0). Even if the information is wrong (see the next result), it reduces the variance. (This is an important result.)
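To make the variance comparison concrete, here is a short numpy sketch (not from the slides; the data are simulated and all names are hypothetical). It compares Var[b1] = σ²(X1'X1)^{-1} from the short regression with the corresponding northwest block Var[b1.2] = σ²(X1'M2X1)^{-1} from the long regression, using the residual maker M2.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 200, 1.0
X1 = rng.normal(size=(n, 2))
X2 = rng.normal(size=(n, 1)) + 0.5 * X1[:, [0]]   # X2 correlated with X1

# Short regression variance: Var[b1] = sigma^2 (X1'X1)^{-1}
V_short = sigma2 * np.linalg.inv(X1.T @ X1)

# Long regression, northwest block: Var[b1.2] = sigma^2 (X1'M2X1)^{-1},
# where M2 = I - X2(X2'X2)^{-1}X2' is the residual maker for X2.
M2 = np.eye(n) - X2 @ np.linalg.inv(X2.T @ X2) @ X2.T
V_long = sigma2 * np.linalg.inv(X1.T @ M2 @ X1)

# V_long - V_short is nonnegative definite: all eigenvalues >= 0.
print(np.linalg.eigvalsh(V_long - V_short))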
Omitted Variables (No Free Lunch)

E[b1] = β1 + (X1'X1)^{-1}X1'X2β2 ≠ β1. So b1 is biased(!). The bias can be huge; it can reverse the sign of a price coefficient in a "demand equation." Still, b1 may be more "precise": precision = mean squared error = variance + squared bias. Smaller variance but positive bias; if the bias is small, we may still favor the short regression. (Free lunch?) Suppose X1'X2 = 0. Then the bias goes away. Interpretation: the information (β2 = 0) is not "right," it is irrelevant, and b1 is the same as b1.2.
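A minimal Monte Carlo sketch of the bias formula (a hypothetical simulation, not from the slides): it computes the bias term (X1'X1)^{-1}X1'X2β2 directly and checks that the short-regression b1 averages to β1 plus that bias.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
X1 = rng.normal(size=(n, 2))
X2 = 0.8 * X1[:, [0]] + rng.normal(size=(n, 1))  # X2 correlated with X1
beta1, beta2 = np.array([1.0, -1.0]), np.array([2.0])

# Theoretical bias of the short regression: (X1'X1)^{-1} X1'X2 beta2
bias = np.linalg.solve(X1.T @ X1, X1.T @ X2) @ beta2
print("bias:", bias)  # nonzero because X1'X2 != 0

# Monte Carlo average of b1 from the short regression (omitting X2)
b1s = []
for _ in range(2000):
    y = X1 @ beta1 + X2 @ beta2 + rng.normal(size=n)
    b1s.append(np.linalg.lstsq(X1, y, rcond=None)[0])
print("E[b1] approx:", np.mean(b1s, axis=0), "vs beta1 + bias:", beta1 + bias)
```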
Specification Errors (2): Including Superfluous Variables

Just reverse the results. Including superfluous variables increases variance (the cost of not using information). It does not cause a bias, because if the variables in X2 are truly superfluous, then β2 = 0, so E[b1.2] = β1.
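The reversed result can be checked the same way. A hypothetical simulation (again, not from the slides): with β2 truly zero, both the short-regression b1 and the long-regression b1.2 are approximately unbiased for β1, but the long regression is noisier.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
X1 = rng.normal(size=(n, 2))
X2 = rng.normal(size=(n, 1)) + 0.5 * X1[:, [0]]  # superfluous but correlated
beta1 = np.array([1.0, -1.0])                    # true beta2 = 0
X = np.hstack([X1, X2])

short, long_ = [], []
for _ in range(2000):
    y = X1 @ beta1 + rng.normal(size=n)          # X2 truly plays no role
    short.append(np.linalg.lstsq(X1, y, rcond=None)[0])
    long_.append(np.linalg.lstsq(X, y, rcond=None)[0][:2])

# Both are (approximately) unbiased, but the long regression is noisier.
print("means:    ", np.mean(short, axis=0), np.mean(long_, axis=0))
print("variances:", np.var(short, axis=0), np.var(long_, axis=0))
```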
Linear Restrictions

Context: How do linear restrictions affect the properties of the least squares estimator?
Model: y = Xβ + ε
Theory (information): Rβ − q = 0
Restricted least squares estimator: b* = b − (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}(Rb − q)
Expected value: E[b*] = β − (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}(Rβ − q)
Variance: Var[b*] = σ²(X'X)^{-1} − σ²(X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}R(X'X)^{-1} = Var[b] − a nonnegative definite matrix ≤ Var[b]
Implication: (as before) nonsample information reduces the variance of the estimator.
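A small numpy sketch of the restricted estimator formula (a hypothetical example; the restriction β1 + β2 = 3 is made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 100, 3
X = rng.normal(size=(n, k))
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(size=n)

# Restriction R beta = q, here beta_1 + beta_2 = 3 (hypothetical)
R = np.array([[1.0, 1.0, 0.0]])
q = np.array([3.0])

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                            # unrestricted LS

# Restricted LS: b* = b - (X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} (R b - q)
A = XtX_inv @ R.T @ np.linalg.inv(R @ XtX_inv @ R.T)
b_star = b - A @ (R @ b - q)

print(b_star, R @ b_star)  # b* satisfies the restriction exactly
```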
Interpretation

Case 1: The theory is correct, Rβ − q = 0 (the restrictions do hold).
- b* is unbiased.
- Var[b*] is smaller than Var[b]. How do we know this? (Var[b] − Var[b*] is the nonnegative definite matrix displayed on the previous slide.)

Case 2: The theory is incorrect, Rβ − q ≠ 0 (the restrictions do not hold).
- b* is biased. What does this mean? (E[b*] = β − (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}(Rβ − q) ≠ β.)
- Var[b*] is still smaller than Var[b].
Restrictions and Information

How do we interpret this important result?
- The theory is "information."
- Bad information leads us away from "the truth."
- Any information, good or bad, makes us more certain of our answer. In this context, any information reduces variance.

What about ignoring the information?
- Not using the correct information does not lead us away from "the truth."
- Not using the information forgoes the variance reduction, i.e., does not use the ability to reduce "uncertainty."
Gauss-Markov Theorem

A theorem of Gauss and Markov: least squares is the minimum variance linear unbiased estimator (MVLUE).
1. Linear estimator.
2. Unbiased: E[b|X] = β.
Theorem: Var[b*|X] − Var[b|X] is nonnegative definite for any other linear and unbiased estimator b* that is not equal to b.
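A numerical check of the theorem's claim, under the standard construction (a sketch with simulated data, not a proof): any other linear unbiased estimator can be written b* = [(X'X)^{-1}X' + C]y with CX = 0, in which case Var[b*|X] − Var[b|X] = σ²CC', a nonnegative definite matrix.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, sigma2 = 50, 3, 1.0
X = rng.normal(size=(n, k))

# Any other linear unbiased estimator has the form b* = [(X'X)^{-1}X' + C] y
# with C X = 0; project a random C onto the null space of X' to enforce this.
C = rng.normal(size=(k, n))
C = C - C @ X @ np.linalg.inv(X.T @ X) @ X.T     # now C @ X = 0

V_b = sigma2 * np.linalg.inv(X.T @ X)            # Var[b | X]
A = np.linalg.inv(X.T @ X) @ X.T + C
V_bstar = sigma2 * A @ A.T                       # Var[b* | X]

# The difference equals sigma^2 C C', which is nonnegative definite.
print(np.linalg.eigvalsh(V_bstar - V_b))         # all eigenvalues >= 0
```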