Recall that $\hat\beta_1 = \sum c_i Y_i$ is an arbitrary linear unbiased estimator, $b_1 = \sum k_i Y_i$ is the least squares estimator with weights $k_i = (X_i - \bar X)/\sum (X_i - \bar X)^2$, and $d_i = c_i - k_i$. Then
\[
\sigma^2\{\hat\beta_1\} = \sigma^2 \sum (k_i + d_i)^2
= \sigma^2\Big(\sum k_i^2 + \sum d_i^2 + 2\sum k_i d_i\Big).
\]
Note we just demonstrated that $\sigma^2 \sum k_i^2 = \sigma^2\{b_1\}$, so $\sigma^2\{\hat\beta_1\}$ is $\sigma^2\{b_1\}$ plus some extra terms.

Proof cont.

Now by showing that $\sum k_i d_i = 0$ we are almost done:
\[
\sum k_i d_i = \sum k_i (c_i - k_i)
= \sum k_i c_i - \sum k_i^2
= \sum c_i \,\frac{X_i - \bar X}{\sum (X_i - \bar X)^2} - \frac{1}{\sum (X_i - \bar X)^2}
= \frac{\sum c_i X_i - \bar X \sum c_i}{\sum (X_i - \bar X)^2} - \frac{1}{\sum (X_i - \bar X)^2} = 0,
\]
where the last step uses the unbiasedness conditions $\sum c_i = 0$ and $\sum c_i X_i = 1$.

Proof end

So we are left with
\[
\sigma^2\{\hat\beta_1\} = \sigma^2\Big(\sum k_i^2 + \sum d_i^2\Big)
= \sigma^2\{b_1\} + \sigma^2 \sum d_i^2,
\]
which is minimized when $d_i = 0$ for all $i$. If $d_i = 0$ then $c_i = k_i$. This means that the least squares estimator $b_1$ has minimum variance among all unbiased linear estimators.
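The decomposition above can be checked numerically. A minimal sketch (assuming NumPy; the unbiasedness conditions $\sum c_i = 0$ and $\sum c_i X_i = 1$ are the ones used in the proof): any valid competing weight vector is $c = k + d$ with $d$ orthogonal to both the constant and $X$, so we construct such a $d$ as regression residuals and compare variances.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
X = rng.normal(size=n)
Sxx = np.sum((X - X.mean()) ** 2)
k = (X - X.mean()) / Sxx        # least squares weights: sum(k) = 0, sum(k*X) = 1

# Any unbiased linear estimator has weights c = k + d with
# sum(d) = 0 and sum(d * X) = 0. Build such a d by taking the
# residuals of a random vector regressed on [1, X].
A = np.column_stack([np.ones(n), X])
d0 = rng.normal(size=n)
d = d0 - A @ np.linalg.lstsq(A, d0, rcond=None)[0]
c = k + d

sigma2 = 1.0                    # error variance; its value does not affect the comparison
var_b1 = sigma2 * np.sum(k ** 2)          # sigma^2 {b_1}
var_beta1 = sigma2 * np.sum(c ** 2)       # sigma^2 {beta_1-hat}

cross_term_zero = np.isclose(np.sum(k * d), 0.0)                       # sum k_i d_i = 0
decomposition = np.isclose(var_beta1, var_b1 + sigma2 * np.sum(d ** 2))
ols_is_best = var_beta1 >= var_b1
print(cross_term_zero, decomposition, ols_is_best)
```

Because $k$ lies in the span of the constant and $X$ while $d$ is orthogonal to that span, the cross term vanishes exactly as in the proof, and $b_1$ always has the (weakly) smaller variance.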
This note was uploaded on 03/24/2012 for the course ECON 326 taught by Professor Whisler during the Spring '10 term at UBC.