Gauss Markov Theorem

Dr. Frank Wood
Digression: Gauss-Markov Theorem

In a regression model where $E\{\varepsilon_i\} = 0$, the variance $\sigma^2\{\varepsilon_i\} = \sigma^2 < \infty$, and $\varepsilon_i$ and $\varepsilon_j$ are uncorrelated for all $i \neq j$, the least squares estimators $b_0$ and $b_1$ are unbiased and have minimum variance among all unbiased linear estimators.

Remember:

$$ b_1 = \frac{\sum (X_i - \bar{X})(Y_i - \bar{Y})}{\sum (X_i - \bar{X})^2} = \sum k_i Y_i, \qquad k_i = \frac{X_i - \bar{X}}{\sum (X_i - \bar{X})^2} $$

$$ b_0 = \bar{Y} - b_1 \bar{X} $$

$$ \sigma^2\{b_1\} = \sigma^2\Big\{\sum k_i Y_i\Big\} = \sum k_i^2 \, \sigma^2\{Y_i\} = \sigma^2 \, \frac{1}{\sum (X_i - \bar{X})^2} $$
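As a quick numerical illustration (an addition to the slides, not part of the original derivation), the Python sketch below builds the weights $k_i$ for a small simulated dataset and checks that $\sum k_i Y_i$ matches the ratio form of $b_1$ and that $\sum k_i^2 = 1/\sum (X_i - \bar{X})^2$, the quantity appearing in $\sigma^2\{b_1\}$. The parameter values ($\beta_0 = 1$, $\beta_1 = 2$, $\sigma = 0.5$) and the sample size are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary example data: Y_i = beta_0 + beta_1 * X_i + eps_i
beta0, beta1, sigma = 1.0, 2.0, 0.5   # assumed values for illustration
X = rng.uniform(0, 10, size=50)
Y = beta0 + beta1 * X + rng.normal(0, sigma, size=50)

Xbar, Ybar = X.mean(), Y.mean()
Sxx = np.sum((X - Xbar) ** 2)

# Least squares weights k_i = (X_i - Xbar) / sum_j (X_j - Xbar)^2
k = (X - Xbar) / Sxx

b1_ratio = np.sum((X - Xbar) * (Y - Ybar)) / Sxx   # ratio form of b_1
b1_weights = np.sum(k * Y)                          # b_1 = sum k_i Y_i
b0 = Ybar - b1_weights * Xbar

print(np.isclose(b1_ratio, b1_weights))     # True: the two forms agree
print(np.isclose(np.sum(k ** 2), 1 / Sxx))  # True: sum k_i^2 = 1 / Sxx
print(b0, b1_weights)
```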
Gauss-Markov Theorem

- The theorem states that $b_1$ has minimum variance among all unbiased linear estimators of the form $\hat{\beta}_1 = \sum c_i Y_i$.
- As this estimator must be unbiased, we have
  $$ E\{\hat{\beta}_1\} = \sum c_i E\{Y_i\} = \beta_1 $$
  $$ \sum c_i (\beta_0 + \beta_1 X_i) = \beta_0 \sum c_i + \beta_1 \sum c_i X_i = \beta_1 $$
- This imposes some restrictions on the $c_i$'s.
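To make the unbiasedness condition concrete, here is a small simulation sketch (my addition; the weights and parameter values are illustrative assumptions). Any weights $c_i$ with $\sum c_i = 0$ and $\sum c_i X_i = 1$ give a linear estimator whose average over repeated samples is close to $\beta_1$. The example uses a "two-point" slope estimator, which is linear and unbiased but is not the least squares choice.

```python
import numpy as np

rng = np.random.default_rng(1)
beta0, beta1, sigma = 1.0, 2.0, 0.5       # assumed values for illustration
n = 50
X = rng.uniform(0, 10, size=n)

# A linear estimator other than least squares: the "two-point" slope,
# which uses only the smallest and largest X. Its weights satisfy
# sum(c_i) = 0 and sum(c_i * X_i) = 1, so it is unbiased for beta_1.
c = np.zeros(n)
lo, hi = np.argmin(X), np.argmax(X)
c[hi] = 1.0 / (X[hi] - X[lo])
c[lo] = -1.0 / (X[hi] - X[lo])
assert np.isclose(c.sum(), 0) and np.isclose(np.sum(c * X), 1)

# Average of sum(c_i Y_i) over many simulated samples (X held fixed)
estimates = [np.sum(c * (beta0 + beta1 * X + rng.normal(0, sigma, size=n)))
             for _ in range(20000)]
print(np.mean(estimates))  # close to beta_1 = 2.0: the estimator is unbiased
```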
Proof

- Given these constraints, $\beta_0 \sum c_i + \beta_1 \sum c_i X_i = \beta_1$, clearly it must be the case that $\sum c_i = 0$ and $\sum c_i X_i = 1$.
- The variance of this estimator is
  $$ \sigma^2\{\hat{\beta}_1\} = \sum c_i^2 \, \sigma^2\{Y_i\} = \sigma^2 \sum c_i^2 $$
- This also places a kind of constraint on the $c_i$'s.
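The variance formula $\sigma^2\{\hat{\beta}_1\} = \sigma^2 \sum c_i^2$ can also be checked numerically. The sketch below (an illustrative addition, with assumed parameter values) verifies that the least squares weights $k_i$ satisfy the two constraints and that the simulated variance of $\sum k_i Y_i$ agrees with $\sigma^2 \sum k_i^2$.

```python
import numpy as np

rng = np.random.default_rng(2)
beta0, beta1, sigma = 1.0, 2.0, 0.5      # assumed values for illustration
n = 50
X = rng.uniform(0, 10, size=n)
Xbar = X.mean()

# Least squares weights: one valid choice of c_i
k = (X - Xbar) / np.sum((X - Xbar) ** 2)
assert np.isclose(k.sum(), 0) and np.isclose(np.sum(k * X), 1)

# Theoretical variance sigma^2 * sum c_i^2 vs. simulated variance of sum c_i Y_i
theory = sigma ** 2 * np.sum(k ** 2)
draws = [np.sum(k * (beta0 + beta1 * X + rng.normal(0, sigma, size=n)))
         for _ in range(20000)]
print(theory, np.var(draws))  # the two should be close
```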
Proof cont.

Now define $c_i = k_i + d_i$, where the $k_i$ are the constants we already defined and the $d_i$ are arbitrary constants. Let's look at the variance of the estimator:
$$ \sigma^2\{\hat{\beta}_1\} = \sum c_i^2 \, \sigma^2\{Y_i\} = \sigma^2 \sum (k_i + d_i)^2 = \sigma^2 \Big( \sum k_i^2 + \sum d_i^2 + 2 \sum k_i d_i \Big) $$

Note that we just demonstrated that $\sigma^2 \sum k_i^2 = \sigma^2\{b_1\}$, so $\sigma^2\{\hat{\beta}_1\}$ is $\sigma^2\{b_1\}$ plus some extra terms.

Proof cont.

Now, by showing that $\sum k_i d_i = 0$, we are almost done:

$$ \sum k_i d_i = \sum k_i (c_i - k_i) = \sum k_i c_i - \sum k_i^2 $$
$$ = \sum c_i \left( \frac{X_i - \bar{X}}{\sum (X_i - \bar{X})^2} \right) - \frac{1}{\sum (X_i - \bar{X})^2} $$
$$ = \frac{\sum c_i X_i - \bar{X} \sum c_i}{\sum (X_i - \bar{X})^2} - \frac{1}{\sum (X_i - \bar{X})^2} $$
$$ = \frac{1 - \bar{X} \cdot 0}{\sum (X_i - \bar{X})^2} - \frac{1}{\sum (X_i - \bar{X})^2} = 0, $$

where the last step uses the unbiasedness constraints $\sum c_i = 0$ and $\sum c_i X_i = 1$.

Proof end

So we are left with

$$ \sigma^2\{\hat{\beta}_1\} = \sigma^2 \Big( \sum k_i^2 + \sum d_i^2 \Big) = \sigma^2\{b_1\} + \sigma^2 \sum d_i^2, $$

which is minimized when $d_i = 0$ for all $i$. If $d_i = 0$ then $c_i = k_i$. This means that the least squares estimator $b_1$ has minimum variance among all unbiased linear estimators.
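The decomposition at the heart of the proof can be illustrated numerically. The sketch below (an addition with assumed values, not part of the slides) constructs a perturbation $d$ with $\sum d_i = 0$ and $\sum d_i X_i = 0$, so that $c = k + d$ still satisfies the unbiasedness constraints, and checks that the cross term $\sum k_i d_i$ vanishes and that $\sigma^2\{\hat{\beta}_1\} = \sigma^2\{b_1\} + \sigma^2 \sum d_i^2 \geq \sigma^2\{b_1\}$.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 0.5                               # assumed error s.d. for illustration
n = 50
X = rng.uniform(0, 10, size=n)
Xbar = X.mean()
Sxx = np.sum((X - Xbar) ** 2)
k = (X - Xbar) / Sxx                      # least squares weights

# Build an arbitrary perturbation d with sum(d) = 0 and sum(d * X) = 0,
# so that c = k + d still satisfies the unbiasedness constraints.
Z = np.column_stack([np.ones(n), X])
d0 = rng.normal(size=n)
d = d0 - Z @ np.linalg.lstsq(Z, d0, rcond=None)[0]   # residual of d0 on (1, X)

c = k + d
print(np.isclose(c.sum(), 0), np.isclose(np.sum(c * X), 1))  # still unbiased
print(np.isclose(np.sum(k * d), 0))                          # cross term vanishes

var_b1   = sigma ** 2 / Sxx                    # sigma^2 { b_1 }
var_beta = sigma ** 2 * np.sum(c ** 2)         # sigma^2 { beta_hat_1 }
print(np.isclose(var_beta, var_b1 + sigma ** 2 * np.sum(d ** 2)))  # decomposition
print(var_beta >= var_b1)                      # any nonzero d inflates the variance
```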