MIT OpenCourseWare
http://ocw.mit.edu

14.30 Introduction to Statistical Methods in Economics
Spring 2009

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

14.30 Introduction to Statistical Methods in Economics
Lecture Notes 19

Konrad Menzel

April 28, 2009

1 Maximum Likelihood Estimation: Further Examples

Example 1. Suppose $X \sim N(\mu, \sigma^2)$, and we want to estimate the parameters $\mu$ and $\sigma^2$ from an i.i.d. sample $X_1, \ldots, X_n$. The likelihood function is

$$L(\mu, \sigma^2) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(X_i - \mu)^2}{2\sigma^2}}$$

It turns out that it's much easier to maximize the log-likelihood,

$$\log L(\mu, \sigma^2) = \sum_{i=1}^n \log\left(\frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(X_i - \mu)^2}{2\sigma^2}}\right) = \sum_{i=1}^n \left[\log\frac{1}{\sqrt{2\pi\sigma^2}} - \frac{(X_i - \mu)^2}{2\sigma^2}\right] = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n (X_i - \mu)^2$$

In order to find the maximum, we take the derivatives with respect to $\mu$ and $\sigma^2$, and set them equal to zero:

$$0 = \frac{1}{2\hat{\sigma}^2}\sum_{i=1}^n 2(X_i - \hat{\mu}) \quad\Longleftrightarrow\quad \hat{\mu} = \frac{1}{n}\sum_{i=1}^n X_i$$

Similarly,

$$0 = -\frac{n}{2\hat{\sigma}^2} + \frac{1}{2\hat{\sigma}^4}\sum_{i=1}^n (X_i - \hat{\mu})^2 \quad\Longleftrightarrow\quad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \hat{\mu})^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X}_n)^2$$

Recall that we already showed that this estimator is not unbiased for $\sigma^2$, so in general, Maximum Likelihood Estimators need not be unbiased.

Example 2. Going back to the example with the uniform distribution, suppose $X \sim U[0, \theta]$, and we are interested in estimating $\theta$. For the method of moments estimator, you can see that

$$\mu_1(\theta) = E[X] = \frac{\theta}{2}$$

so equating this with the sample mean, we obtain

$$\hat{\theta}_{MoM} = 2\bar{X}_n$$

What is the maximum likelihood estimator? Clearly, we wouldn't pick any $\theta < \max\{X_1, \ldots, X_n\}$, because a sample with realizations greater than $\theta$ has zero probability under $\theta$. Formally, the likelihood is

$$L(\theta) = \begin{cases} \frac{1}{\theta^n} & \text{if } 0 \leq X_i \leq \theta \text{ for all } i = 1, \ldots, n \\ 0 & \text{otherwise} \end{cases}$$

We can see that any value $\theta < \max\{X_1, \ldots, X_n\}$ can't be a maximum because $L(\theta)$ is zero for all those points. Also, for $\theta \geq \max\{X_1, \ldots, X_n\}$ the likelihood function is strictly decreasing in $\theta$, and therefore it is maximized at

$$\hat{\theta}_{MLE} = \max\{X_1, \ldots, X_n\}$$

Note that since $X_i < \theta$ with probability 1, the Maximum Likelihood estimator is also going to be less than $\theta$ with probability one, so it's not unbiased. More specifically, the p.d.f. of $X_{(n)}$ is given by

$$f_{X_{(n)}}(y) = n[F_X(y)]^{n-1} f_X(y) = \begin{cases} \frac{n y^{n-1}}{\theta^n} & \text{if } 0 \leq y \leq \theta \\ 0 & \text{otherwise} \end{cases}$$

so that

$$E[X_{(n)}] = \int_{-\infty}^{\infty} y f_{X_{(n)}}(y)\, dy = \int_0^{\theta} \frac{n y^n}{\theta^n}\, dy = \frac{n}{n+1}\theta$$

We could easily construct an unbiased estimator $\tilde{\theta} = \frac{n+1}{n} X_{(n)}$.

1.1 Properties of the MLE

The following is just a summary of the main theoretical results on the MLE (we won't do proofs for now):

- If there is an efficient estimator in the class of consistent estimators, MLE will produce it. ...
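As a numerical sanity check of Example 1, here is a minimal sketch (not part of the original notes; it assumes numpy and scipy are available, and all variable names and parameter values are illustrative) that maximizes the normal log-likelihood numerically and compares the optimum to the closed-form MLEs derived above:

```python
# Illustrative check of Example 1: maximize the normal log-likelihood
# numerically and compare the optimum to the closed-form MLEs.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=500)  # i.i.d. N(mu, sigma^2) sample
n = len(x)

def neg_log_likelihood(params):
    # Parametrize sigma^2 on the log scale so it stays positive.
    mu, log_sigma2 = params
    sigma2 = np.exp(log_sigma2)
    return 0.5 * n * np.log(2 * np.pi * sigma2) + np.sum((x - mu) ** 2) / (2 * sigma2)

res = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="BFGS")
mu_numeric, sigma2_numeric = res.x[0], np.exp(res.x[1])

mu_closed = x.mean()                           # mu_hat = (1/n) sum X_i
sigma2_closed = np.mean((x - mu_closed) ** 2)  # sigma2_hat = (1/n) sum (X_i - X_bar)^2

print(mu_numeric, mu_closed)          # should agree up to optimizer tolerance
print(sigma2_numeric, sigma2_closed)
```

The log-scale parametrization of $\sigma^2$ is just a convenience so the unconstrained optimizer cannot wander into negative variances; it does not change the location of the maximum.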
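Similarly, a short Monte Carlo sketch for Example 2 (again assuming numpy; the values of $\theta$, $n$, and the number of replications are made up for illustration) compares $\hat{\theta}_{MoM} = 2\bar{X}_n$, $\hat{\theta}_{MLE} = X_{(n)}$, and the bias-corrected $\tilde{\theta} = \frac{n+1}{n} X_{(n)}$, and checks $E[X_{(n)}] = \frac{n}{n+1}\theta$:

```python
# Monte Carlo check of Example 2: simulate U[0, theta] samples and
# compare the three estimators of theta derived above.
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 3.0, 20, 100_000

samples = rng.uniform(0.0, theta, size=(reps, n))
x_max = samples.max(axis=1)             # X_(n) for each replication

theta_mom = 2 * samples.mean(axis=1)    # theta_hat_MoM = 2 * X_bar_n
theta_mle = x_max                       # theta_hat_MLE = X_(n)
theta_unbiased = (n + 1) / n * x_max    # tilde_theta = (n+1)/n * X_(n)

print("E[X_(n)] simulated:", x_max.mean(), " theory:", n / (n + 1) * theta)
print("mean MoM:      ", theta_mom.mean())       # approximately theta
print("mean MLE:      ", theta_mle.mean())       # biased downward
print("mean corrected:", theta_unbiased.mean())  # approximately theta
```

Both $\hat{\theta}_{MoM}$ and $\tilde{\theta}$ should come out approximately unbiased, while the raw MLE is biased downward by the factor $\frac{n}{n+1}$, matching the calculation above.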