Stat 312: Lecture 05
Maximum Likelihood Estimation
Moo K. Chung
[email protected]
September 14, 2004

1. (Invariance Principle) If $\hat\theta$ is the MLE of the parameter $\theta$, then the MLE of $h(\theta)$ is $h(\hat\theta)$ for a function $h$.

Proof (partial). Consider the likelihood function $L(\theta)$. The MLE $\hat\theta$ satisfies
$$\frac{dL(\theta)}{d\theta} = 0.$$
Let $\phi = h(\theta)$. Then the likelihood function for $\phi = h(\theta)$ is $L(h^{-1}(\phi))$. Differentiating this likelihood with respect to $\phi$ and applying the chain rule,
$$\frac{dL(h^{-1}(\phi))}{d\phi} = \frac{dL(\theta)}{d\theta}\,\frac{d\theta}{d\phi} = \frac{dL(\theta)}{d\theta}\,\frac{1}{h'(\theta)} = 0.$$

2. Log-likelihood. Maximizing $L(\theta)$ is equivalent to maximizing $\ln L(\theta)$, since $\ln$ is an increasing function.

Example. This technique is best illustrated by finding the MLEs of the parameters of $N(\mu, \sigma^2)$:
$$\hat\mu = \bar X, \qquad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar X)^2$$
are the MLEs of $\mu$ and $\sigma^2$, respectively. Note that $\hat\sigma^2$ is not an unbiased estimator of $\sigma^2$.

3.
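The two results above can be checked numerically. The sketch below (helper name `normal_mles` and the simulation parameters are illustrative, not from the lecture) computes $\hat\mu$ and $\hat\sigma^2$ for a simulated normal sample, compares $\hat\sigma^2$ with the unbiased sample variance $S^2 = \frac{n}{n-1}\hat\sigma^2$, and uses the invariance principle with $h(t) = \sqrt{t}$ to obtain the MLE of $\sigma$:

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

def normal_mles(xs):
    """Return (mu_hat, sigma2_hat), the MLEs of mu and sigma^2 for N(mu, sigma^2)."""
    n = len(xs)
    mu_hat = sum(xs) / n
    # The MLE divides by n, so it is biased; the unbiased estimator divides by n - 1.
    sigma2_hat = sum((x - mu_hat) ** 2 for x in xs) / n
    return mu_hat, sigma2_hat

# Simulate a sample from N(mu = 5, sigma^2 = 4), i.e. sigma = 2.
xs = [random.gauss(5.0, 2.0) for _ in range(1000)]
n = len(xs)

mu_hat, sigma2_hat = normal_mles(xs)

# Invariance principle: with h(t) = sqrt(t), the MLE of sigma is h(sigma2_hat).
sigma_hat = math.sqrt(sigma2_hat)

# The unbiased sample variance rescales the MLE by n / (n - 1).
s2 = sigma2_hat * n / (n - 1)
```

For a sample of this size the estimates land close to the true values $\mu = 5$ and $\sigma = 2$, and $\hat\sigma^2$ is always strictly smaller than $S^2$, which is the direction of its bias.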
This note was uploaded on 01/31/2008 for the course STAT 312 taught by Professor Chung during the Fall '04 term at University of Wisconsin.