VER. 9/25/2012. © P. KOLM

Proof: Recall

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})\, y_i}{SST_x}, \qquad \text{where } SST_x \equiv \sum_{i=1}^{n} (x_i - \bar{x})^2$$

($SST_x$ = "total sum of squares of $x$"). Now, note that

$$\sum_{i=1}^{n} (x_i - \bar{x})\, y_i = \sum_{i=1}^{n} (x_i - \bar{x})(\beta_0 + \beta_1 x_i + u_i) = \beta_0 \sum_{i=1}^{n} (x_i - \bar{x}) + \beta_1 \sum_{i=1}^{n} (x_i - \bar{x})\, x_i + \sum_{i=1}^{n} (x_i - \bar{x})\, u_i$$

Since

$$\sum_{i=1}^{n} (x_i - \bar{x}) = 0, \qquad \sum_{i=1}^{n} (x_i - \bar{x})\, x_i = \sum_{i=1}^{n} (x_i - \bar{x})^2 = SST_x,$$

we can rewrite $\hat{\beta}_1$ as

$$\hat{\beta}_1 = \frac{\beta_1 SST_x + \sum_{i=1}^{n} (x_i - \bar{x})\, u_i}{SST_x} = \beta_1 + \frac{\sum_{i=1}^{n} (x_i - \bar{x})\, u_i}{SST_x}$$
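The decomposition above, $\hat{\beta}_1 = \beta_1 + \sum_{i}(x_i - \bar{x})u_i / SST_x$, is an exact algebraic identity, so it can be verified on any simulated sample. A minimal sketch (not from the notes; the parameter values and variable names are illustrative):

```python
# Numerically verify beta1_hat = beta1 + sum((x_i - xbar) * u_i) / SST_x
# on one simulated sample. True parameters are assumed for the demo.
import numpy as np

rng = np.random.default_rng(0)
n = 100
beta0, beta1 = 2.0, 0.5               # true parameters (illustrative)
x = rng.normal(size=n)
u = rng.normal(size=n)                # errors with E(u) = 0
y = beta0 + beta1 * x + u

xbar = x.mean()
SSTx = np.sum((x - xbar) ** 2)

beta1_hat = np.sum((x - xbar) * y) / SSTx              # OLS slope formula
decomposition = beta1 + np.sum((x - xbar) * u) / SSTx  # identity from the proof

print(np.isclose(beta1_hat, decomposition))            # agree up to rounding
```

The two quantities differ only by floating-point rounding, since the identity holds sample by sample, not just in expectation.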
Therefore,

$$E(\hat{\beta}_1) = \beta_1 + \frac{1}{SST_x} \sum_{i=1}^{n} (x_i - \bar{x})\, E(u_i) = \beta_1$$

The unbiasedness of $\hat{\beta}_0$ follows from (using $\bar{y} = \beta_0 + \beta_1 \bar{x} + \bar{u}$)

$$E(\hat{\beta}_0) = E(\bar{y} - \hat{\beta}_1 \bar{x}) = E(\beta_0 + \beta_1 \bar{x} + \bar{u} - \hat{\beta}_1 \bar{x}) = \beta_0 + E\big((\beta_1 - \hat{\beta}_1)\, \bar{x}\big) + E(\bar{u}) = \beta_0,$$

since $E(\hat{\beta}_1) = \beta_1$ and $E(\bar{u}) = 0$. We are done.

Sampling Variance of the OLS Estimators (1/2)
Now that we know the sampling distribution of our estimator is centered around the true regression parameters, we ask: How spread out is the sampling distribution? Or, in oth...
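As a complement to the derivation above, a small Monte Carlo sketch (not from the notes; parameter values and names are illustrative) shows the sampling distributions of $\hat{\beta}_0$ and $\hat{\beta}_1$ centered at the true values, and gives a first numeric look at their spread:

```python
# Monte Carlo check: OLS estimates are centered at the true parameters,
# and the spread of beta1_hat across replications matches sigma^2 / SST_x.
import numpy as np

rng = np.random.default_rng(42)
n, reps = 50, 5000
beta0, beta1 = 2.0, 0.5                   # true parameters (illustrative)
x = rng.normal(size=n)                    # keep x fixed across replications
xbar = x.mean()
SSTx = np.sum((x - xbar) ** 2)

b1_draws = np.empty(reps)
b0_draws = np.empty(reps)
for r in range(reps):
    u = rng.normal(size=n)                # fresh errors each replication
    y = beta0 + beta1 * x + u
    b1 = np.sum((x - xbar) * y) / SSTx    # OLS slope
    b0 = y.mean() - b1 * xbar             # OLS intercept
    b1_draws[r], b0_draws[r] = b1, b0

print(b1_draws.mean(), b0_draws.mean())   # close to the true 0.5 and 2.0
print(b1_draws.var() * SSTx)              # close to sigma^2 = 1
```

With the errors drawn i.i.d. with variance 1, the empirical variance of $\hat{\beta}_1$ times $SST_x$ lands near 1, previewing the variance formula discussed in this section.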
This document was uploaded on 02/17/2014 for the course COURANT G63.2751.0 at NYU.