Applied Econometrics
William Greene
Department of Economics, Stern School of Business

18. Maximum Likelihood Estimation

Maximum Likelihood Estimation
This defines a class of estimators based on the particular distribution assumed to have generated the observed random variable. The main advantage of ML estimators is that, among all consistent asymptotically normal estimators, MLEs have optimal asymptotic properties. The main disadvantage is that they are not necessarily robust to failures of the distributional assumptions; they depend heavily on those assumptions. The oft-cited disadvantage of their mediocre small-sample properties is probably overstated in view of the usual paucity of viable alternatives.

Setting up the MLE
The distribution of the observed random variable is written as a function of the parameters to be estimated: P(y_i | data, θ) = probability density given the parameters. The likelihood function is constructed from the density. Construction: the joint probability density function of the observed sample of data, which is generally the product of the individual densities when the data are a random sample.

Regularity Conditions
What they are:
1. log f(.) has three continuous derivatives with respect to the parameters.
2. Conditions needed to obtain expectations of derivatives are met. (E.g., the range of the variable is not a function of the parameters.)
3. The third derivative has finite expectation.
What they mean: moment conditions and convergence. We need to obtain expectations of derivatives, we need to be able to truncate Taylor series, and we will use central limit theorems.

The MLE
The log-likelihood function: log L(θ | data).
The likelihood equation(s): the first derivatives of log L equal zero at the MLE,
  (1/n) Σ_i ∂log f(y_i | θ)/∂θ |_θ=MLE = 0.  (A sample statistic; the 1/n is irrelevant.)
This is the first-order condition for maximization. It is a moment condition whose population counterpart is the fundamental result E[∂log L/∂θ] = 0. How do we use this result? An analogy principle.
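The steps above (build log L as a sum of log densities, then solve the likelihood equation) can be sketched numerically. This is a minimal illustration, not from the text: it uses a Poisson density, whose MLE has the closed form θ_hat = ybar, so we can check that the root of the score matches the sample mean. The data are made up for the example.

```python
import math

def log_f(y, theta):
    """Log density of one Poisson observation: -theta + y log(theta) - log(y!)."""
    return -theta + y * math.log(theta) - math.lgamma(y + 1)

def log_likelihood(data, theta):
    """log L(theta | data) = sum_i log f(y_i | theta) for a random sample."""
    return sum(log_f(y, theta) for y in data)

def score(data, theta):
    """Likelihood equation left-hand side: dlogL/dtheta = sum_i (y_i/theta - 1)."""
    return sum(y / theta - 1.0 for y in data)

def mle_newton(data, theta0=1.0, tol=1e-10):
    """Solve the likelihood equation score(theta) = 0 by Newton's method."""
    theta = theta0
    for _ in range(100):
        g = score(data, theta)
        h = -sum(y / theta**2 for y in data)  # second derivative of logL
        step = g / h
        theta -= step
        if abs(step) < tol:
            break
    return theta

data = [2, 4, 3, 5, 1, 3, 2, 4]          # hypothetical counts
theta_hat = mle_newton(data)
# For the Poisson the likelihood equation solves to theta_hat = ybar exactly.
```

The same pattern (log density → summed log-likelihood → root of the score) carries over to any regular parametric family; only `log_f` changes.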
Average Time Until Failure
Estimating the average time until failure, θ, of light bulbs, where y_i = observed life until failure.
Density: f(y_i | θ) = (1/θ) exp(-y_i/θ)
Likelihood: L(θ) = Π_i f(y_i | θ) = θ^(-N) exp(-Σ_i y_i/θ)
Log-likelihood: log L(θ) = -N log θ - Σ_i y_i/θ
Likelihood equation: ∂log L(θ)/∂θ = -N/θ + Σ_i y_i/θ² = 0
Note that ∂log f(y_i | θ)/∂θ = -1/θ + y_i/θ². Since E[y_i] = θ, E[∂log f(θ)/∂θ] = 0. (Regular.)

Properties of the Maximum Likelihood Estimator
We will sketch formal proofs of these results: the log-likelihood function, again; the likelihood equation and the information matrix.
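The light-bulb example can be verified in a few lines. This sketch assumes the exponential density above; the failure times are hypothetical, and the point is that the likelihood equation -N/θ + Σy_i/θ² = 0 solves to θ_hat = ybar, at which the score is zero and log L is maximized.

```python
import math

def log_likelihood(data, theta):
    """log L(theta) = -N log(theta) - sum(y_i)/theta (exponential sample)."""
    n = len(data)
    return -n * math.log(theta) - sum(data) / theta

def score(data, theta):
    """dlogL/dtheta = -N/theta + sum(y_i)/theta**2."""
    n = len(data)
    return -n / theta + sum(data) / theta**2

lifetimes = [120.0, 95.5, 310.2, 48.7, 201.1, 150.0]  # hypothetical hours
theta_hat = sum(lifetimes) / len(lifetimes)  # closed-form MLE: the sample mean
# score(lifetimes, theta_hat) is (numerically) zero, and logL falls off
# on either side of theta_hat, confirming a maximum.
```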
Econometrics, Spring '11, PP
