Probability and Statistics for Engineering and the Sciences (with CD-ROM and InfoTrac)


Stat 312: Lecture 05
Maximum Likelihood Estimation
Moo K. Chung  [email protected]
September 14, 2004

1. (Invariance Principle) If θ̂ is the MLE of the parameter θ, then the MLE of h(θ) is h(θ̂) for any function h.

Proof (partial, for differentiable, invertible h). Consider the likelihood function L(θ). The MLE θ̂ satisfies dL(θ)/dθ = 0. Let φ = h(θ). Then the likelihood function for φ is L(h⁻¹(φ)). Differentiating with respect to φ by the chain rule,

    d/dφ L(h⁻¹(φ)) = dL(θ)/dθ · dθ/dφ = dL(θ)/dθ · 1/h′(θ) = 0,

so the likelihood for φ is maximized at the same point, i.e. φ̂ = h(θ̂).

2. Log-likelihood. Maximizing L(θ) is equivalent to maximizing ln L(θ), since ln is an increasing function.

Example. The technique is best illustrated by finding the MLEs of the parameters of N(μ, σ²):

    μ̂ = X̄,   σ̂² = (1/n) Σᵢ₌₁ⁿ (Xᵢ − X̄)²

are the MLEs of μ and σ² respectively. Note that σ̂² is not an unbiased estimator of σ²; the unbiased sample variance divides by n − 1 instead of n.
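The two results above can be checked numerically. The following sketch (not part of the original notes; the parameter values mu_true and sigma_true are illustrative assumptions) simulates a normal sample, computes the MLEs μ̂ and σ̂², and applies the invariance principle to get the MLE of σ as the square root of σ̂².

```python
import math
import random

# Illustrative true parameters (assumed for this demo, not from the notes).
random.seed(0)
mu_true, sigma_true = 2.0, 3.0
n = 10000
xs = [random.gauss(mu_true, sigma_true) for _ in range(n)]

# MLE of mu is the sample mean.
mu_hat = sum(xs) / n

# MLE of sigma^2 divides by n (biased); the unbiased version divides by n - 1.
ss = sum((x - mu_hat) ** 2 for x in xs)
sigma2_hat = ss / n
sigma2_unbiased = ss / (n - 1)

# Invariance principle with h(t) = sqrt(t): the MLE of sigma is sqrt(sigma2_hat).
sigma_hat = math.sqrt(sigma2_hat)

print(mu_hat, sigma2_hat, sigma_hat)
```

For large n the biased and unbiased variance estimates nearly coincide (their ratio is (n − 1)/n), which is why the bias of σ̂² matters mostly for small samples.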

This note was uploaded on 01/31/2008 for the course STAT 312 taught by Professor Chung during the Fall '04 term at University of Wisconsin.
