Part 6: Finite Sample Properties of LS

(end of slide 30)
$$ \mathbf{b} = \boldsymbol{\beta} + \sum_{i=1}^{n} v_i \varepsilon_i $$

Implications of the Gauss-Markov Theorem (31/34)

Theorem: Var[b*|X] - Var[b|X] is nonnegative definite for any other linear, unbiased estimator b* that is not equal to b. This implies:

- Let b_k be the kth element of b. Then Var[b_k|X] is the kth diagonal element of Var[b|X], and Var[b_k|X] <= Var[b_k*|X] for each coefficient.
- Let c'b be any linear combination of the elements of b. Then Var[c'b|X] <= Var[c'b*|X] for any nonzero c and any b* not equal to b.

Aspects of the Gauss-Markov Theorem (32/34)

- Indirect proof: any other linear unbiased estimator has a larger covariance matrix.
- Direct proof: find the minimum-variance linear unbiased estimator.
- Other estimators:
  - Biased estimation: a minimum mean squared error estimator. Is there a biased estimator with a smaller "dispersion"? Yes, always.
  - Normally distributed disturbances: the Rao-Blackwell result. (General observation: for normally distributed disturbances, "linear" is superfluous.)
  - Nonnormal disturbances: least absolute deviations (LAD) and other nonparametric approaches may be better in small samples.

Distribution (33/34)

Source of the random behavior of b:
$$ \mathbf{b} = \boldsymbol{\beta} + (X'X)^{-1}X'\boldsymbol{\varepsilon} = \boldsymbol{\beta} + \sum_{i=1}^{n} v_i \varepsilon_i, $$
where $v_i = (X'X)^{-1} x_i$ and $x_i'$ is row $i$ of $X$. We derived E[b|X] and Var[b|X] earlier. The distribution of b|X is that of this linear combination of the disturbances ε_i.

If ε has a normal distribution, denoted ε ~ N[0, σ²I], then b|X = β + Aε, where A = (X'X)⁻¹X'. Therefore
$$ \mathbf{b} \mid X \sim N[\boldsymbol{\beta},\, \sigma^2 A A'] = N[\boldsymbol{\beta},\, \sigma^2 (X'X)^{-1}]. $$
Note how b inherits its stochastic properties from ε.
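The distribution result above can be checked numerically. The following is a minimal Monte Carlo sketch, assuming NumPy; the design matrix, β, σ, and sample sizes are invented for the demo and are not from the slides.

```python
import numpy as np

# Illustration: for a fixed X, draw many disturbance vectors e ~ N[0, sigma^2 I],
# form b = beta + A e with A = (X'X)^{-1} X', and check that the empirical
# mean and covariance of b match E[b|X] = beta and Var[b|X] = sigma^2 (X'X)^{-1}.
rng = np.random.default_rng(0)
n, k, sigma = 200, 3, 2.0
beta = np.array([1.0, -0.5, 0.25])   # invented "true" coefficients

X = rng.normal(size=(n, k))          # fixed design: we condition on X
XtX_inv = np.linalg.inv(X.T @ X)
A = XtX_inv @ X.T                    # b|X = beta + A e

reps = 20_000
E = rng.normal(scale=sigma, size=(reps, n))   # each row is one draw of e
draws = beta + E @ A.T                        # each row is one draw of b

print("mean of b      :", draws.mean(axis=0))   # close to beta (unbiased)
print("cov error (max):",
      np.abs(np.cov(draws, rowvar=False) - sigma**2 * XtX_inv).max())  # near 0
```

Note that nothing here requires estimating a regression: b|X is literally the stated linear combination of the disturbances, which is why it inherits normality from ε.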
Summary: Finite Sample Properties of b (34/34)

- Unbiased: E[b] = β.
- Variance: Var[b|X] = σ²(X'X)⁻¹.
- Efficiency: the Gauss-Markov theorem, with all its implications.
- Distribution: under normality, b|X ~ N[β, σ²(X'X)⁻¹]. (Without normality, the finite-sample distribution is generally unknown.)
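The efficiency claim can also be illustrated numerically. A competing estimator b* = (X'WX)⁻¹X'Wy is linear in y and unbiased for any positive definite W, so by Gauss-Markov Var[b*|X] − Var[b|X] must be nonnegative definite. The sketch below, assuming NumPy, uses arbitrary diagonal weights invented purely for the demo:

```python
import numpy as np

# Illustration of the Gauss-Markov theorem: compare the exact conditional
# covariance of OLS, Var[b|X] = sigma^2 (X'X)^{-1}, with that of a
# "wrongly weighted" LS estimator b* = M y, M = (X'WX)^{-1} X'W.
# Since b* = beta + M e, Var[b*|X] = sigma^2 M M'.
rng = np.random.default_rng(1)
n, k, sigma2 = 100, 3, 1.0

X = rng.normal(size=(n, k))
w = rng.uniform(0.5, 2.0, size=n)     # arbitrary positive weights (invented)
W = np.diag(w)

V_ols = sigma2 * np.linalg.inv(X.T @ X)         # Var[b|X]
M = np.linalg.inv(X.T @ W @ X) @ (X.T @ W)      # M X = I, so b* is unbiased
V_star = sigma2 * (M @ M.T)                     # Var[b*|X]

diff = V_star - V_ols
eigs = np.linalg.eigvalsh((diff + diff.T) / 2)  # symmetrize for stability
print("unbiased check (M X = I):", np.allclose(M @ X, np.eye(k)))
print("smallest eigenvalue of Var[b*|X] - Var[b|X]:", eigs.min())  # >= 0
```

All eigenvalues of the difference come out nonnegative, which is exactly the "nonnegative definite" statement of the theorem; it also delivers the diagonal-element and linear-combination implications listed on slide 31.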
Fall '10 · H. Bierens · Econometrics, Least Squares, Standard Deviation, Variance, Mean Squared Error, Bias of an Estimator
