Econometrics-I-18

(1/n)Σᵢ ∂log f(yᵢ|θ, xᵢ)/∂θ, evaluated at θ̂_MLE, = 0. (A sample statistic; the 1/n is irrelevant.) These are the "first order conditions" for maximization. This is a moment condition; its counterpart is the fundamental theoretical result E[∂log L/∂θ] = 0.

7/47 — Part 18: Maximum Likelihood Estimation

Average Time Until Failure

Estimating the average time until failure, θ, of light bulbs:
  yᵢ = observed life until failure
  f(yᵢ|θ) = (1/θ) exp(-yᵢ/θ)
  L(θ) = Πᵢ f(yᵢ|θ) = θ⁻ⁿ exp(-Σᵢ yᵢ/θ)
  log L(θ) = -n log θ - Σᵢ yᵢ/θ
Likelihood equation: ∂log L(θ)/∂θ = -n/θ + Σᵢ yᵢ/θ² = 0.
Solution: θ̂_MLE = Σᵢ yᵢ/n. Note: E[yᵢ] = θ.
Note that ∂log f(yᵢ|θ)/∂θ = -1/θ + yᵢ/θ². Since E[yᵢ] = θ, E[∂log f(yᵢ|θ)/∂θ] = 0. (One of the regularity conditions discussed below.)

8/47 — Part 18: Maximum Likelihood Estimation

The Linear (Normal) Model

Definition of the likelihood function: the joint density of the observed data, written as a function of the parameters we wish to estimate.
Definition of the maximum likelihood estimator: the function of the observed data that maximizes the likelihood function, or its logarithm.
For the model yᵢ = β′xᵢ + εᵢ, where εᵢ ~ N[0, σ²], the maximum likelihood estimators of β and σ² are b = (X′X)⁻¹X′y and s² = e′e/n. That is, least squares is ML for the slopes, but the variance estimator makes no degrees of freedom correction, so the MLE of σ² is biased.

9/47 — Part 18: Maximum Likelihood Estimation

Normal Linear Model

The log-likelihood function = Σᵢ log f(yᵢ|θ) = sum of logs of densities. For the linear regression model with normally distributed disturbances,
  log L = Σᵢ [ -½ log 2π - ½ log σ² - ½ (yᵢ - β′xᵢ)²/σ² ]
        = -(n/2)[ log 2π + log σ² + s²/σ² ],  where s² = e′e/n.

10/47 — Part 18: Maximum Likelihood Estimation

Likelihood Equations

The estimator is defined by the function of the data that equates ∂log L/∂θ to 0 (the likelihood equation). The derivative vector of the log-likelihood function is the score function. For the regression model, g = [∂log L/∂β′, ∂log L/∂σ²]′:
  ∂log L/∂β = Σᵢ (1/σ²) xᵢ (yᵢ - β′xᵢ) = X′ε/σ²
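The exponential time-until-failure example above can be checked numerically. A minimal sketch, assuming NumPy and simulated failure times (the data and the true θ here are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 1000.0                        # illustrative true mean life (hours)
y = rng.exponential(theta_true, size=500)  # simulated observed lives until failure

# Solution of the likelihood equation: theta_MLE = sum(y)/n = sample mean
theta_mle = y.mean()

def score(theta, y):
    """Score dlogL/dtheta = -n/theta + sum(y)/theta**2."""
    n = len(y)
    return -n / theta + y.sum() / theta**2

# The score vanishes at the MLE, up to floating-point rounding
print(theta_mle, score(theta_mle, y))
```

The second line of output is zero to machine precision: the sample mean is exactly the point where the first-order condition -n/θ + Σᵢyᵢ/θ² = 0 holds.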
  ∂log L/∂σ² = Σᵢ [ -1/(2σ²) + (yᵢ - β′xᵢ)²/(2σ⁴) ] = -(n/(2σ²))[ 1 - s²/σ² ]
For the linear regression model, the first derivative vector of log L is
  (1/σ²) X′(y - Xβ)   (K×1)
and
  (1/(2σ²)) Σᵢ [ (yᵢ - β′xᵢ)²/σ² - 1 ]   (1×1).
Note that we could compute these functions at any β and σ². If we compute them at b and e′e/n, the functions will be identically zero.

11/47 — Part 18: Maximum Likelihood Estimation

Information Matrix

The negative of the second derivatives matrix of the log-likelihood, -H, forms the basis for estimating the variance of the MLE. ...
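The claim that the score of the normal linear model vanishes when evaluated at b = (X′X)⁻¹X′y and s² = e′e/n can also be verified directly. A sketch assuming NumPy, with simulated data and an arbitrary illustrative β:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
beta_true = np.array([1.0, 0.5, -2.0])          # illustrative coefficients
y = X @ beta_true + rng.normal(scale=1.5, size=n)

# ML estimators: least squares slopes, variance with no df correction
b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b
s2 = e @ e / n                                   # e'e/n (not e'e/(n-K))

# Score components evaluated at (b, s2):
score_beta = X.T @ e / s2                        # (1/s2) X'(y - Xb), K x 1
score_sigma2 = (-n / (2 * s2)) * (1 - s2 / s2)   # -(n/(2s2))[1 - s2/s2], 1 x 1

print(np.abs(score_beta).max(), score_sigma2)
```

Both printed values are (numerically) zero: X′e = 0 is just the least squares normal equations, and the σ² component is zero by construction when s² = e′e/n is plugged in, which is the "identically zero" statement above.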