321w6p1 - We found that the variance of our ols estimators...

We found that the variance of our OLS estimators is:

$$\mathrm{Var}(\hat{\beta}_j) = \frac{\sigma^2}{\left[\sum_{i=1}^{n}(X_{ij} - \bar{X}_j)^2\right](1 - R_j^2)}$$

Because we do not know the $u_i$, we cannot calculate $\sigma^2 = n^{-1}\sum_{i=1}^{n} u_i^2$. An unbiased estimator of $\sigma^2$ is

$$\hat{\sigma}^2 = (n-k-1)^{-1}\sum_{i=1}^{n}\hat{u}_i^2$$

In the case of k+1 parameters, df = n-k-1.

3. If the errors are homoskedastic, then the estimate of $\mathrm{Var}(\hat{\beta}_j)$ using $\hat{\sigma}^2$ is unbiased:

$$\widehat{\mathrm{Var}}(\hat{\beta}_j) = \frac{\hat{\sigma}^2}{\left[\sum_{i=1}^{n}(X_{ij} - \bar{X}_j)^2\right](1 - R_j^2)}$$

4. Gauss-Markov Theorem – The OLS estimator is the Best Linear Unbiased Estimator (BLUE):
Linear – linear in the parameters
Unbiased – $E[\hat{\beta}] = \beta$
Best – smallest variance among linear unbiased estimators
Under assumptions 1-4 and homoskedasticity, $\hat{\beta}$ is the BLUE of $\beta$. These are known as the Gauss-Markov Assumptions.

Interpreting OLS Parameter Estimates

Recall, in our regression

$$Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \dots + \beta_k X_{ki} + u_i, \quad i = 1, \dots, n$$

$\beta_1$ is the partial effect of $X_1$ on $Y$. ...
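The variance formula above can be checked numerically. The sketch below (with simulated, hypothetical data) computes $\hat{\sigma}^2$ with df = n-k-1 and then the variance of a slope coefficient two ways: via $\hat{\sigma}^2 / [\mathrm{SST}_j(1-R_j^2)]$, where $R_j^2$ comes from an auxiliary regression of $X_j$ on the other regressors, and via the matrix formula $\hat{\sigma}^2 (X'X)^{-1}$. The two agree exactly.

```python
import numpy as np

# Simulated data (hypothetical): n observations, k = 2 regressors
rng = np.random.default_rng(0)
n, k = 200, 2
X1 = rng.normal(size=n)
X2 = 0.5 * X1 + rng.normal(size=n)   # correlated with X1, so R_1^2 > 0
u = rng.normal(size=n)
Y = 1.0 + 2.0 * X1 - 1.0 * X2 + u

# OLS with an intercept: k + 1 parameters, df = n - k - 1
X = np.column_stack([np.ones(n), X1, X2])
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
resid = Y - X @ beta_hat

# Unbiased estimator of sigma^2
sigma2_hat = resid @ resid / (n - k - 1)

# Matrix-based variance: sigma2_hat * (X'X)^{-1}
var_matrix = sigma2_hat * np.linalg.inv(X.T @ X)

# Formula-based variance for beta_1: auxiliary regression of X1
# on the remaining regressors gives R_1^2
Z = np.column_stack([np.ones(n), X2])
gamma = np.linalg.solve(Z.T @ Z, Z.T @ X1)
SST_1 = np.sum((X1 - X1.mean()) ** 2)
R2_1 = 1 - np.sum((X1 - Z @ gamma) ** 2) / SST_1
var_formula = sigma2_hat / (SST_1 * (1 - R2_1))

print(np.isclose(var_formula, var_matrix[1, 1]))  # True
```

The agreement is not approximate: by the Frisch-Waugh-Lovell theorem, $\hat{\sigma}^2/[\mathrm{SST}_j(1-R_j^2)]$ is exactly the (j, j) diagonal entry of $\hat{\sigma}^2 (X'X)^{-1}$.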