# MIT14_30s09_lec19

MIT OpenCourseWare
http://ocw.mit.edu

14.30 Introduction to Statistical Methods in Economics
Spring 2009

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

14.30 Introduction to Statistical Methods in Economics

Lecture Notes 19

Konrad Menzel

April 28, 2009

## Maximum Likelihood Estimation: Further Examples

**Example 1** Suppose $X \sim N(\mu, \sigma^2)$, and we want to estimate the parameters $\mu$ and $\sigma^2$ from an i.i.d. sample $X_1, \ldots, X_n$. The likelihood function is

$$L(\theta) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(X_i - \mu)^2}{2\sigma^2}}$$

It turns out that it's much easier to maximize the log-likelihood,

$$\log L(\theta) = \sum_{i=1}^{n} \log\left(\frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(X_i - \mu)^2}{2\sigma^2}}\right)
= \sum_{i=1}^{n} \left[\log\frac{1}{\sqrt{2\pi\sigma^2}} - \frac{(X_i - \mu)^2}{2\sigma^2}\right]
= -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(X_i - \mu)^2$$

In order to find the maximum, we take the derivatives with respect to $\mu$ and $\sigma^2$, and set them equal to zero:

$$0 = \frac{1}{2\hat\sigma^2}\sum_{i=1}^{n} 2(X_i - \hat\mu) \quad\Leftrightarrow\quad \hat\mu = \frac{1}{n}\sum_{i=1}^{n} X_i$$

Similarly,

$$0 = -\frac{n}{2}\,\frac{2\pi}{2\pi\hat\sigma^2} + \frac{1}{2(\hat\sigma^2)^2}\sum_{i=1}^{n}(X_i - \hat\mu)^2 \quad\Leftrightarrow\quad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i - \hat\mu)^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar X_n)^2$$

Recall that we already showed that this estimator is not unbiased for $\sigma^2$, so in general, Maximum Likelihood Estimators need not be unbiased.

**Example 2** Going back to the example with the uniform distribution, suppose $X \sim U[0, \theta]$, and we are interested in estimating $\theta$. For the method of moments estimator, you can see that

$$\mu_1(\theta) = E_\theta[X] = \frac{\theta}{2}$$

so equating this with the sample mean, we obtain $\hat\theta_{MoM} = 2\bar X_n$.

What is the maximum likelihood estimator? Clearly, we wouldn't pick any $\hat\theta < \max\{X_1, \ldots, X_n\}$, because a sample containing realizations greater than $\hat\theta$ has zero probability under $\hat\theta$. Formally, the likelihood is

$$L(\theta) = \begin{cases} \dfrac{1}{\theta^n} & \text{if } 0 \le X_i \le \theta \text{ for all } i = 1, \ldots, n \\ 0 & \text{otherwise} \end{cases}$$

We can see that any value of $\theta < \max\{X_1, \ldots, X_n\}$ can't be a maximum because $L(\theta)$ is zero at all those points. Also, for $\theta \ge \max\{X_1, \ldots, X_n\}$ the likelihood function is strictly decreasing in $\theta$, and therefore it is maximized at

$$\hat\theta_{MLE} = \max\{X_1, \ldots, X_n\}$$

Note that since $X_i < \theta$ with probability 1, the Maximum Likelihood estimator is also going to be less than $\theta$ with probability one, so it's not unbiased. More specifically, the p.d.f. of $X_{(n)}$ is given by

$$f_{X_{(n)}}(y) = n\,[F_X(y)]^{n-1} f_X(y) = \begin{cases} n\left(\dfrac{y}{\theta}\right)^{n-1}\dfrac{1}{\theta} = \dfrac{n\,y^{n-1}}{\theta^n} & \text{if } 0 \le y \le \theta \\ 0 & \text{otherwise} \end{cases}$$

so that

$$E[X_{(n)}] = \int_{-\infty}^{\infty} y\,f_{X_{(n)}}(y)\,dy = \int_0^\theta y\,\frac{n\,y^{n-1}}{\theta^n}\,dy = \frac{n}{n+1}\,\theta$$

We could easily construct an unbiased estimator $\tilde\theta = \frac{n+1}{n} X_{(n)}$.

## 1.1 Properties of the MLE

The following is just a summary of the main theoretical results on the MLE (we won't do proofs for now):

- If there is an efficient estimator in the class of consistent estimators, the MLE will produce it. …
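The closed forms from Example 1 can be checked numerically. The sketch below (not part of the original notes; the names `log_likelihood` and `normal_mle` are my own) computes $\hat\mu$ and $\hat\sigma^2$ from a simulated sample and verifies that no nearby parameter values achieve a higher log-likelihood:

```python
import math
import random

def log_likelihood(xs, mu, sigma2):
    # log L = -(n/2) log(2*pi*sigma2) - sum_i (x_i - mu)^2 / (2*sigma2)
    n = len(xs)
    return (-0.5 * n * math.log(2 * math.pi * sigma2)
            - sum((x - mu) ** 2 for x in xs) / (2 * sigma2))

def normal_mle(xs):
    # Closed-form maximizers derived above: the sample mean and
    # the divide-by-n ("biased") variance.
    n = len(xs)
    mu_hat = sum(xs) / n
    sigma2_hat = sum((x - mu_hat) ** 2 for x in xs) / n
    return mu_hat, sigma2_hat

random.seed(0)
sample = [random.gauss(5.0, 2.0) for _ in range(10_000)]
mu_hat, sigma2_hat = normal_mle(sample)

# The closed forms should beat any perturbed parameter values.
best = log_likelihood(sample, mu_hat, sigma2_hat)
for dmu, ds2 in [(0.05, 0.0), (-0.05, 0.0), (0.0, 0.1), (0.0, -0.1)]:
    assert best >= log_likelihood(sample, mu_hat + dmu, sigma2_hat + ds2)
```

With $n = 10{,}000$ draws from $N(5, 4)$, both estimates land close to the true values, and the perturbation check confirms that the closed forms are indeed the maximizers.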
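The bias of $\hat\theta_{MLE}$ in Example 2, and the $\frac{n+1}{n}$ correction, can also be seen by simulation. A minimal sketch (again not from the original notes; `uniform_estimators` is a name I chose):

```python
import random

def uniform_estimators(xs):
    # theta_hat_MoM  = 2 * sample mean
    # theta_hat_MLE  = sample maximum  (biased downward)
    # theta_tilde    = (n+1)/n * maximum (the unbiased correction above)
    n = len(xs)
    theta_mom = 2 * sum(xs) / n
    theta_mle = max(xs)
    theta_tilde = (n + 1) / n * theta_mle
    return theta_mom, theta_mle, theta_tilde

random.seed(1)
theta, n, reps = 1.0, 10, 20_000
mle_draws, tilde_draws = [], []
for _ in range(reps):
    xs = [random.uniform(0.0, theta) for _ in range(n)]
    _, theta_mle, theta_tilde = uniform_estimators(xs)
    mle_draws.append(theta_mle)
    tilde_draws.append(theta_tilde)

mean_mle = sum(mle_draws) / reps      # ~ n/(n+1) * theta = 10/11
mean_tilde = sum(tilde_draws) / reps  # ~ theta = 1
```

Every simulated $\hat\theta_{MLE}$ is below $\theta$, its average matches $\frac{n}{n+1}\theta$ from the $E[X_{(n)}]$ calculation above, and the corrected estimator $\tilde\theta$ averages out to $\theta$.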