Part 18: Maximum Likelihood Estimation

(Slide 7/47, continued)
(1/n) Σᵢ ∂log f(yᵢ, xᵢ)/∂θ, evaluated at θ_MLE, = 0. (A sample statistic; the 1/n is irrelevant.) These are the "first order conditions" for maximization: a moment condition. Its counterpart is the fundamental theoretical result E[∂logL/∂θ] = 0.

Average Time Until Failure (slide 8/47)
Estimating the average time until failure, θ, of light bulbs.
  yᵢ = observed life until failure
  f(yᵢ) = (1/θ) exp(−yᵢ/θ)
  L(θ) = Πᵢ f(yᵢ) = θ⁻ⁿ exp(−Σyᵢ/θ)
  logL(θ) = −n log θ − Σyᵢ/θ
Likelihood equation: ∂logL(θ)/∂θ = −n/θ + Σyᵢ/θ² = 0
Solution: θ_MLE = Σyᵢ/n. Note: E[yᵢ] = θ.
Note also that ∂log f(yᵢ)/∂θ = −1/θ + yᵢ/θ². Since E[yᵢ] = θ, E[∂log f(yᵢ)/∂θ] = 0. (One of the regularity conditions discussed below.)

The Linear (Normal) Model (slide 9/47)
Definition of the likelihood function: the joint density of the observed data, written as a function of the parameters we wish to estimate.
Definition of the maximum likelihood estimator: the function of the observed data that maximizes the likelihood function, or its logarithm.
For the model yᵢ = xᵢ′β + εᵢ, where εᵢ ~ N[0, σ²], the maximum likelihood estimators of β and σ² are
  b = (X′X)⁻¹X′y  and  s² = e′e/n.
That is, least squares is ML for the slopes, but the variance estimator makes no degrees of freedom correction, so the MLE is biased.

Normal Linear Model (slide 10/47)
The log-likelihood function = Σᵢ log f(yᵢ) = sum of logs of densities. For the linear regression model with normally distributed disturbances,
  logL = Σᵢ [ −½ log 2π − ½ log σ² − ½(yᵢ − xᵢ′β)²/σ² ]
       = −(n/2)[ log 2π + log σ² + s²/σ² ],  where s² = e′e/n.

Likelihood Equations (slide 11/47)
The estimator is defined as the function of the data that equates ∂logL/∂θ to 0. (The likelihood equation.) The derivative vector of the log-likelihood function is the score function. For the regression model,
  g = [∂logL/∂β′, ∂logL/∂σ²]′
  ∂logL/∂β = Σᵢ (1/σ²) xᵢ (yᵢ − xᵢ′β) = X′ε/σ².
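As a quick numerical illustration (a sketch with simulated data, not part of the slides), the β-block of the score, X′e/σ², is identically zero when evaluated at the least-squares b:

```python
import numpy as np

# Sketch: verify that the beta-block of the score, X'(y - Xb)/sigma^2,
# vanishes at the least-squares estimator b. Data are simulated (hypothetical).
rng = np.random.default_rng(0)
n, K = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
beta = np.array([1.0, 0.5, -0.25])
sigma2 = 2.0
y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

# Least squares = ML for the slopes
b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b

score_beta = X.T @ e / sigma2   # K x 1 block of the score, at (b, sigma2)
print(np.max(np.abs(score_beta)))  # numerically zero
```

The check works at any value of σ², since the β-block factors as (1/σ²)·X′e and the normal equations force X′e = 0 at b.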
  ∂logL/∂σ² = Σᵢ [ −1/(2σ²) + (yᵢ − xᵢ′β)²/(2σ⁴) ] = −(n/(2σ²))[ 1 − s²/σ² ]
For the linear regression model, the first derivative vector of logL is
  (1/σ²) X′(y − Xβ)   (K×1)
  (1/(2σ²)) Σᵢ [ (yᵢ − xᵢ′β)²/σ² − 1 ]   (1×1)
Note that we could compute these functions at any β and σ². If we compute them at b and e′e/n, the functions will be identically zero.

Information Matrix
The negative of the second derivatives matrix of the log-likelihood, −H = −∂²logL/∂θ∂θ′, forms the basis for estimating the variance of the MLE. …
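The last two slides can be sketched together (simulated data, not from the slides): at the MLEs the σ² likelihood equation holds exactly, and the β-block of −H, which is X′X/σ² for this model, yields the familiar variance estimator s²(X′X)⁻¹ when inverted:

```python
import numpy as np

# Sketch (hypothetical simulated data): check the sigma^2 likelihood equation
# at the MLEs, and invert the beta-block of -H to estimate Var[b].
rng = np.random.default_rng(42)
n, K = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ np.array([2.0, 1.0, -0.5]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)   # ML estimator of beta
e = y - X @ b                           # residuals
s2 = e @ e / n                          # ML estimator of sigma^2 (no df correction)

# sigma^2 block of the score at (b, s2): -(n/(2 s2)) [1 - (e'e/n)/s2] = 0
g_s2 = -n / (2 * s2) * (1 - (e @ e / n) / s2)

neg_H_bb = X.T @ X / s2                 # beta-block of -H at the MLE
var_b = np.linalg.inv(neg_H_bb)         # estimated Var[b] = s2 (X'X)^{-1}
print(g_s2, np.diag(var_b))
```

Inverting the β-block of −H reproduces s²(X′X)⁻¹ exactly here because, for the normal linear model, the cross-derivatives ∂²logL/∂β∂σ² have expectation zero.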
Fall '10, H. Bierens