# 07 Optimality of OLS Estimators


Economics 140A. Today we show that, for the classic regression model, the OLS estimator is the best linear unbiased estimator (or the best unbiased estimator, depending on the assumptions). Consider the population regression in deviation-from-means form

$$Y_t = \beta_0 X_t + U_t.$$

Because the classic assumptions ensure that the OLS estimator is unbiased, optimality of the OLS estimator results from minimum variance. Under Assumptions 1 through 5 the OLS estimator is the best linear unbiased estimator, as established in the Gauss-Markov Theorem.

**Gauss-Markov Theorem.** If Assumptions 1 through 5 hold, then the OLS estimator $B_{OLS}$ is the minimum variance linear unbiased estimator of $\beta_0$.

**Proof.** Let $\mathcal{B}$ be the set of linear unbiased estimators of $\beta_0$. For all $B \in \mathcal{B}$,

$$B = \sum_{t=1}^{n} w_t Y_t,$$

where $\{w_t\}_{t=1}^{n}$ is a set of weights that may depend on the regressor and satisfies

$$\sum_{t=1}^{n} w_t X_t = 1.$$

The first condition states that $B$ is a linear function of $\{Y_t\}_{t=1}^{n}$, while the second follows from the statement that $B$ is unbiased:

$$EB = E\sum_{t=1}^{n} w_t (\beta_0 X_t + U_t) = \beta_0 \sum_{t=1}^{n} w_t X_t.$$

The variance of $B$ is

$$V(B) = V\!\left(\sum_{t=1}^{n} w_t Y_t\right) = EU_t^2 \sum_{t=1}^{n} w_t^2.$$

The BLUE estimator is the member of $\mathcal{B}$ with the smallest variance. To find it we solve a constrained optimization problem: the estimator $B$ is constructed with the weights

$$\{w_t\}_{t=1}^{n} = \arg\min_{\{w_t\}_{t=1}^{n}} \left[\sum_{t=1}^{n} w_t^2 - \lambda\left(\sum_{t=1}^{n} w_t X_t - 1\right)\right],$$
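The Gauss-Markov conclusion can be illustrated numerically. The following is a minimal sketch (the variable names and simulated values are my own choices, not from the notes): it builds the OLS weights $w_t = X_t / \sum_s X_s^2$ from the proof, builds a second set of linear unbiased weights, and compares $\sum_t w_t^2$, which is proportional to the estimator's variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
x = x - x.mean()              # regressor in deviation-from-means form

# OLS weights from the proof: w_t = X_t / sum_s X_s^2
w_ols = x / np.sum(x**2)

# A different set of linear unbiased weights: sign(X_t), rescaled so that
# the unbiasedness constraint sum_t w_t X_t = 1 holds
w_alt = np.sign(x) / np.sum(np.abs(x))

assert np.isclose(w_ols @ x, 1.0)   # both satisfy the constraint
assert np.isclose(w_alt @ x, 1.0)

# With V(U_t) = sigma^2, V(B) = sigma^2 * sum_t w_t^2; set sigma^2 = 1
var_ols = np.sum(w_ols**2)
var_alt = np.sum(w_alt**2)
print(var_ols <= var_alt)   # Gauss-Markov: OLS weights give the smaller variance
```

Because the OLS weights solve the constrained minimization exactly, the comparison comes out in OLS's favor for any competing unbiased weights, not just this one.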

where $\lambda$ is the Lagrange multiplier. The $n$ first-order conditions are

$$2 w_t - \lambda X_t = 0, \quad t = 1, \ldots, n,$$

which implies $w_t = \frac{\lambda}{2} X_t$. Because the relation holds for each $t$,

$$\sum_{t=1}^{n} w_t X_t = \frac{\lambda}{2} \sum_{t=1}^{n} X_t^2,$$

which (because $\sum_{t=1}^{n} w_t X_t = 1$) implies $\lambda = \frac{2}{\sum_{t=1}^{n} X_t^2}$. The optimal weights are

$$w_t = \frac{X_t}{\sum_{s=1}^{n} X_s^2} \quad \text{and} \quad B = \sum_{t=1}^{n} \frac{X_t}{\sum_{s=1}^{n} X_s^2}\, Y_t,$$

which is identical to $B_{OLS}$. Q.E.D.

## OLSE as a Maximum Likelihood Estimator

If we add Assumption 6, which states that the regression error has a Gaussian distribution, then the OLS estimator is the best unbiased estimator. Why are we able to drop the restriction that the OLS estimator is best only within the set of linear estimators? If the error distribution is Gaussian, then the OLS estimator is identical to the maximum likelihood estimator. The maximum likelihood estimator requires that we specify a distribution for the data under study, but having done so the estimator is quite intuitive. Also (we state without proof) the ML estimator is the best unbiased estimator.

Consider the following problem. A game is run in your neighborhood in which
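The OLS-ML equivalence under Gaussian errors can be checked numerically. A minimal sketch with simulated data (the parameter values and names are my own choices): it computes the OLS slope $\sum_t X_t Y_t / \sum_t X_t^2$ and then maximizes the Gaussian log-likelihood over the slope by a fine grid search.

```python
import numpy as np

rng = np.random.default_rng(1)
beta0, sigma, n = 1.5, 1.0, 200          # true slope and error s.d., assumed
x = rng.normal(size=n)
x = x - x.mean()                          # deviation-from-means form
y = beta0 * x + rng.normal(scale=sigma, size=n)

# OLS slope, using the Gauss-Markov weights: sum(x*y) / sum(x^2)
b_ols = np.sum(x * y) / np.sum(x**2)

# Gaussian log-likelihood in the slope b (sigma held fixed); maximizing it
# is equivalent to minimizing the sum of squared residuals
def loglik(b):
    u = y - b * x
    return -0.5 * n * np.log(2 * np.pi * sigma**2) - np.sum(u**2) / (2 * sigma**2)

grid = np.linspace(b_ols - 1.0, b_ols + 1.0, 20001)
b_ml = grid[np.argmax([loglik(b) for b in grid])]

print(abs(b_ml - b_ols) < 1e-3)   # ML and OLS slope estimates coincide
```

The log-likelihood is quadratic in the slope, so its maximizer is exactly the least-squares solution; the grid search simply makes that coincidence visible without any calculus.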

*This note was uploaded on 09/04/2011 for the course ECON 140A, taught by Professor Staff during the Fall '08 term at UCSB.*
