Economics 245A

Efficient Maximum Likelihood Estimation

In the preceding lecture, we defined the likelihood function L(\theta; y) and briefly discussed obtaining an estimate of \theta by maximizing the (log) likelihood. The resultant estimate is the maximum likelihood (ML) estimate. Today we discuss the ML estimator, paying particular attention to the relation between the estimator and complete (minimal) sufficient statistics.

Efficient Score

Consider a sample Y = \{Y_t\}_{t=1}^{n} with corresponding log likelihood l_Y(\theta; Y) = \log f_Y(Y; \theta). As we are to maximize the log likelihood, we study the behavior of the efficient score
\[
S(\theta_0) = \left. \frac{\partial \log f_Y(Y; \theta)}{\partial \theta} \right|_{\theta = \theta_0}.
\]
Note that the efficient score is the derivative of the log of the population density f_Y(Y; \theta), evaluated at the population parameter value \theta_0. If the observations are i.i.d., then S(\theta_0) = \sum_{t=1}^{n} S_t(\theta_0), where
\[
S_t(\theta_0) = \left. \frac{\partial \log f_{Y_t}(Y_t; \theta)}{\partial \theta} \right|_{\theta = \theta_0}.
\]
To obtain moments of the efficient score, observe that for any density
\[
\int f_Y(y; \theta) \, dy = 1 \quad \text{implies} \quad \int \frac{\partial f_Y(y; \theta)}{\partial \theta} \, dy = 0,
\]
regardless of whether the density is symmetric (the implication requires that differentiation and integration can be interchanged). Under suitable regularity conditions (explored in more detail below),
\[
E[S(\theta_0); \theta_0] \equiv \int \frac{\partial \log f_Y(y; \theta_0)}{\partial \theta} \, f_Y(y; \theta_0) \, dy = 0,
\]
which follows from
\[
\int \frac{\partial \log f_Y(y; \theta)}{\partial \theta} \, f_Y(y; \theta) \, dy = \int \frac{\partial f_Y(y; \theta)}{\partial \theta} \, dy,
\]
since \partial \log f_Y / \partial \theta = (\partial f_Y / \partial \theta) / f_Y. Because the mean of the efficient score is zero, the variance of the efficient score is...
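The mean-zero property of the efficient score can be checked numerically. As a minimal sketch (the model, sample size, and replication count below are illustrative choices, not from the notes), take Y_t ~ N(\theta_0, 1), for which S_t(\theta_0) = Y_t - \theta_0 and hence S(\theta_0) = \sum_t (Y_t - \theta_0). Averaging S(\theta_0) over many simulated samples should give a value near zero:

```python
# Monte Carlo check that E[S(theta_0)] = 0 at the true parameter.
# Model assumption (illustrative): Y_t ~ N(theta_0, 1) i.i.d., so the
# per-observation score is S_t(theta_0) = Y_t - theta_0 and the sample
# score is the sum of these terms.
import numpy as np

rng = np.random.default_rng(0)
theta0 = 2.0      # true (population) parameter value
n = 50            # sample size per replication
reps = 20_000     # number of simulated samples

# Each row is one sample of size n drawn at the true parameter.
samples = rng.normal(theta0, 1.0, size=(reps, n))

# S(theta_0) for every replication: sum of (Y_t - theta_0) over t.
scores = (samples - theta0).sum(axis=1)

print(scores.mean())  # close to 0, consistent with E[S(theta_0)] = 0
print(scores.var())   # close to n for this model
```

The simulated variance of the score is close to n here, which previews the point the notes turn to next: once the score has mean zero, its variance becomes the object of interest.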