Econometrics I
Part 18: Maximum Likelihood Estimation
Professor William Greene
Stern School of Business, Department of Economics
Maximum Likelihood Estimation
Maximum likelihood defines a class of estimators based on the particular distribution assumed to have generated the observed random variable. Two typical settings:
- Not estimating a mean, so least squares is not available.
- Estimating a mean (possibly), but also using information about the distribution.
Advantage of the MLE
The main advantage of ML estimators is that, among all consistent, asymptotically normal estimators, MLEs have optimal asymptotic properties. The main disadvantage is that they are not necessarily robust to failures of the distributional assumptions; they depend heavily on the particular assumptions made. The oft-cited disadvantage of their mediocre small-sample properties is overstated in view of the usual paucity of viable alternatives.
Properties of the MLE
- Consistent: not necessarily unbiased, however.
- Asymptotically normally distributed: proof based on central limit theorems.
- Asymptotically efficient: among the possible estimators that are consistent and asymptotically normally distributed; the counterpart to Gauss-Markov for linear regression.
- Invariant: the MLE of g(θ) is g(the MLE of θ), as illustrated just below.
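A short worked illustration of invariance (the normal variance is my example, not the slide's): once the MLE of σ² is in hand, the MLE of σ = g(σ²) = √σ² follows directly, with no separate maximization.

```latex
\hat{\sigma}^{2}_{ML} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^{2}
\quad\Longrightarrow\quad
\hat{\sigma}_{ML} = g\!\left(\hat{\sigma}^{2}_{ML}\right)
                  = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^{2}}
```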
Setting Up the MLE
The distribution of the observed random variable is written as a function of the parameters to be estimated: P(yi | data, β) = probability density | parameters. The likelihood function is constructed from this density. Construction: the joint probability density function of the observed sample of data, which is generally the product of the individual densities when the data are a random sample (see the sketch below).
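A minimal sketch of this construction, assuming a hypothetical i.i.d. Poisson sample; the data values and the parameter name lam are illustrative only, not taken from the lecture.

```python
import numpy as np
from scipy.stats import poisson

y = np.array([2, 0, 3, 1, 2])              # hypothetical observed random sample

def likelihood(lam, y):
    # Joint density of an i.i.d. sample = product of the individual densities
    return np.prod(poisson.pmf(y, mu=lam))

print(likelihood(1.6, y))                   # L(lam | y) evaluated at lam = 1.6
```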
(Log) Likelihood Function
- f(yi | θ, xi) = probability density of the observed yi given the parameter(s) θ and, possibly, data xi.
- Observations are independent.
- Joint density = Πi f(yi | θ, xi) = L(θ | y, X).
- f(yi | θ, xi) is the contribution of observation i to the likelihood.
- The MLE of θ maximizes L(θ | y, X).
- In practice it is usually easier to maximize logL(θ | y, X) = Σi log f(yi | θ, xi), as in the sketch below.
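A minimal sketch of maximizing logL numerically, assuming the same hypothetical Poisson sample as above; minimizing the negative log-likelihood is the standard computational form.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

y = np.array([2, 0, 3, 1, 2])               # hypothetical observed random sample

def neg_loglik(lam):
    # logL(lam | y) = sum_i log f(y_i | lam); the sum of logs is far more
    # numerically stable than the raw product of densities
    return -np.sum(poisson.logpmf(y, mu=lam))

res = minimize_scalar(neg_loglik, bounds=(1e-6, 20.0), method="bounded")
print(res.x)   # approximately 1.6, the sample mean, which is the known Poisson MLE
```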
The MLE
The log-likelihood function: logL(θ | data).
The likelihood equation(s): the first derivatives of logL equal zero at the MLE,
(1/n) Σi ∂log f(yi | θ, xi)/∂θ, evaluated at the MLE, = 0. (A sample statistic; the 1/n is irrelevant.)
These are the "first order conditions" for maximization. This is a moment condition; its counterpart is the fundamental theoretical result E[∂logL/∂θ] = 0.
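A hedged, closed-form illustration of the likelihood equation (the exponential density is my example, not taken from the slides): setting the score to zero yields the MLE directly.

```latex
% f(y_i \mid \theta) = \theta e^{-\theta y_i}, \quad y_i > 0
\log L(\theta \mid y) = n\log\theta - \theta\sum_{i=1}^{n} y_i ,
\qquad
\frac{\partial \log L}{\partial \theta}
  = \frac{n}{\theta} - \sum_{i=1}^{n} y_i = 0
\;\Longrightarrow\;
\hat{\theta}_{ML} = \frac{1}{\bar{y}}
```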