MISCELLANEOUS TOPICS RELATED TO LIKELIHOOD
INFORMATION CRITERIA

Akaike's information criterion is given by AIC = -2ℓ(θ̂) + 2k, where ℓ(θ̂) is the maximized log likelihood and k is the dimension of the model parameter space.
AIC = -2ℓ(θ̂) + 2k can be used to determine which of multiple models is "best" for a given data set. Small values of AIC are preferred. The +2k portion of AIC can be viewed as a penalty for model complexity.
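As an illustrative sketch (not part of the original notes), the Python code below computes AIC by hand for a normal model fit by maximum likelihood. The simulated data and the use of scipy.stats.norm are assumptions made purely for the example.

```python
# Minimal sketch: AIC = -2*loglik + 2*k for a normal model fit by ML.
# The data below are simulated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(loc=5.0, scale=2.0, size=100)   # hypothetical data

# ML estimates for a N(mu, sigma^2) model: sample mean and ML variance
mu_hat = y.mean()
sigma_hat = y.std(ddof=0)                      # ML uses divisor n, not n - 1

loglik = stats.norm.logpdf(y, loc=mu_hat, scale=sigma_hat).sum()
k = 2                                          # parameters: mu and sigma^2
aic = -2 * loglik + 2 * k
print(f"maximized log likelihood = {loglik:.3f}, AIC = {aic:.3f}")
```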
Schwarz's Bayesian information criterion is given by BIC = -2ℓ(θ̂) + k log(n). BIC is the same as AIC except that the penalty for model complexity is greater for BIC (when n ≥ 8, because k log(n) > 2k once log(n) > 2) and grows with n.
AIC and BIC can each be used to compare models even if they are not nested (i.e., even if one is not a special case of the other, as in our reduced vs. full model comparison discussed previously). However, if REML likelihoods are used, the compared models must have the same model for the response mean. Different models for the mean would yield different error contrasts and therefore different data sets for computation of the maximized REML likelihoods.
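For example, the sketch below compares two non-nested models for hypothetical count data (Poisson vs. geometric) using AIC and BIC computed from ordinary maximized likelihoods. Both candidate distributions and the simulated counts are assumptions introduced only for illustration.

```python
# Sketch: AIC/BIC comparison of two non-nested models for count data.
# Neither model is a special case of the other, so a reduced vs. full
# comparison is unavailable, but AIC and BIC can still be compared.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.poisson(lam=3.0, size=200)             # hypothetical counts
n = y.size

# Model 1: Poisson(lambda); MLE of lambda is the sample mean (k = 1)
lam_hat = y.mean()
ll_pois = stats.poisson.logpmf(y, mu=lam_hat).sum()

# Model 2: geometric on {0, 1, 2, ...}; MLE of p is 1/(1 + ybar) (k = 1)
p_hat = 1.0 / (1.0 + y.mean())
ll_geom = stats.geom.logpmf(y + 1, p=p_hat).sum()   # scipy's geom starts at 1

for name, ll, k in [("Poisson", ll_pois, 1), ("Geometric", ll_geom, 1)]:
    aic = -2 * ll + 2 * k
    bic = -2 * ll + k * np.log(n)
    print(f"{name:10s} AIC = {aic:8.2f}  BIC = {bic:8.2f}")
```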
LARGE n THEORY FOR MLEs

Suppose θ is a k × 1 parameter vector. Let ℓ(θ) denote the log likelihood function. Under regularity conditions discussed in, e.g., Shao, J. (2003) Mathematical Statistics, 2nd Ed., Springer, New York, we have the following.
There is an estimator θ̂ that solves the likelihood equations ∂ℓ(θ)/∂θ = 0 and is a (weakly) consistent estimator of θ, i.e., lim_{n→∞} Pr[ ||θ̂ − θ|| > ε ] = 0 for any ε > 0.
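As a hedged illustration of this result, the sketch below solves the likelihood equation numerically for an exponential rate parameter and checks empirically that |θ̂ − θ| shrinks as n grows. The exponential model, the true rate, and the sample sizes are assumptions chosen for the example.

```python
# Sketch: solve the likelihood equation numerically and watch the MLE
# approach the true parameter as n grows (weak consistency, empirically).
import numpy as np
from scipy import optimize

theta_true = 2.0                                # hypothetical true rate

def score(theta, y):
    """Derivative of the exponential log likelihood: n/theta - sum(y)."""
    return y.size / theta - y.sum()

rng = np.random.default_rng(2)
for n in (20, 200, 2000, 20000):
    y = rng.exponential(scale=1.0 / theta_true, size=n)
    # Root of the score function = solution of the likelihood equations
    theta_hat = optimize.brentq(score, 1e-6, 100.0, args=(y,))
    print(f"n = {n:6d}   theta_hat = {theta_hat:.4f}   "
          f"|theta_hat - theta| = {abs(theta_hat - theta_true):.4f}")
```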