Economics 245A
Efficient Maximum Likelihood Estimation

In the preceding lecture, we defined the likelihood function $L(\theta; y)$ and briefly discussed obtaining an estimate of $\theta$ by maximizing the (log) likelihood. The resultant estimate is the maximum likelihood (ML) estimate. Today we discuss the ML estimator, paying particular attention to the relation between the estimator and complete (minimal) sufficient statistics.

Efficient Score

Consider a sample $Y = \{Y_t\}_{t=1}^{n}$ with corresponding log likelihood $l_Y(\theta; Y) = \log f_Y(Y; \theta)$. As we are to maximize the log likelihood, we study the behavior of the efficient score
$$
S(\theta_0) = \left.\frac{\partial \log f_Y(Y;\theta)}{\partial \theta}\right|_{\theta=\theta_0} \equiv \frac{\partial \log f_Y(Y;\theta_0)}{\partial \theta}.
$$
Note that the efficient score is the derivative of the log of the population density $f_Y(Y;\theta)$, evaluated at the population parameter value $\theta_0$. If the observations are i.i.d., then $S(\theta_0) = \sum_{t=1}^{n} S_t(\theta_0)$, where $S_t(\theta_0) = \left.\partial \log f_{Y_t}(Y_t;\theta)/\partial \theta\right|_{\theta=\theta_0}$.

To obtain moments of the efficient score, observe that for any density
$$
\int f_Y(y;\theta)\,dy = 1 \quad\text{implies}\quad \int \frac{\partial f_Y(y;\theta)}{\partial \theta}\,dy = 0,
$$
regardless of whether the density is symmetric. Under suitable regularity conditions (explored in more detail below),
$$
E[S(\theta_0); \theta_0] \equiv \int \frac{\partial \log f_Y(y;\theta_0)}{\partial \theta}\, f_Y(y;\theta_0)\,dy = 0,
$$
which follows from
$$
\int \frac{\partial \log f_Y(y;\theta_0)}{\partial \theta}\, f_Y(y;\theta_0)\,dy = \int \frac{\partial f_Y(y;\theta_0)}{\partial \theta}\,dy.
$$
Because the mean of the efficient score is zero, the variance of the efficient score is...
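The zero-mean property of the efficient score can be checked numerically. Below is a minimal Monte Carlo sketch for the Gaussian location model $Y_t \sim N(\theta, 1)$, where the per-observation score works out to $S_t(\theta) = Y_t - \theta$; the model choice and all names (`theta0`, `n`, `reps`) are illustrative assumptions, not part of the notes.

```python
import numpy as np

# Sketch: verify E[S(theta_0)] = 0 by simulation for an assumed Gaussian model.
# Model: Y_t ~ N(theta, 1) i.i.d., so the per-observation score is
#   S_t(theta) = d/dtheta log f(Y_t; theta) = Y_t - theta,
# and the full-sample score is S(theta) = sum_t (Y_t - theta).

rng = np.random.default_rng(0)
theta0 = 2.0        # population parameter value (illustrative)
n = 50              # sample size (illustrative)
reps = 100_000      # Monte Carlo replications

# Draw `reps` samples of size n and compute S(theta0) for each replication.
Y = rng.normal(theta0, 1.0, size=(reps, n))
scores = (Y - theta0).sum(axis=1)

print(abs(scores.mean()))   # near 0, consistent with E[S(theta_0)] = 0
print(scores.var())         # near n, since each S_t has unit variance here
```

The simulated mean hovers near zero, matching the identity derived above; the simulated variance anticipates the information calculation that the notes turn to next.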
This note was uploaded on 12/26/2011 for the course ECON 245a taught by Professor Staff during the Fall '08 term at UCSB.