Stat 312: Lecture 05
Maximum Likelihood Estimation
Moo K. Chung, [email protected]
September 14, 2004

1. (Invariance Principle) If $\hat{\theta}$ is the MLE of the parameter $\theta$, then the MLE of $h(\theta)$ is $h(\hat{\theta})$ for a function $h$.

Proof (partial). Consider the likelihood function $L(\theta)$. The MLE $\hat{\theta}$ satisfies $\frac{dL(\theta)}{d\theta} = 0$. Let $\phi = h(\theta)$, where $h$ is one-to-one and differentiable. Then the likelihood function for $\phi$ is $L(h^{-1}(\phi))$. Differentiating with respect to $\phi$ by the chain rule,
$$\frac{d}{d\phi} L(h^{-1}(\phi)) = \frac{dL(\theta)}{d\theta} \cdot \frac{d\theta}{d\phi} = \frac{dL(\theta)}{d\theta} \cdot \frac{1}{h'(\theta)} = 0,$$
so the maximum over $\phi$ is attained at $\hat{\phi} = h(\hat{\theta})$. (A numerical illustration follows these notes.)

2. Loglikelihood. Maximizing $L(\theta)$ is equivalent to maximizing $\ln L(\theta)$, since $\ln$ is an increasing function.

Example. This technique is best illustrated by finding the MLE of the parameters in $N(\mu, \sigma^2)$:
$$\hat{\mu} = \bar{X}, \qquad \hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2$$
are the MLEs of $\mu$ and $\sigma^2$, respectively. Note that $\hat{\sigma}^2$ is not an unbiased estimator of $\sigma^2$. (See the second sketch below.)

3. Asymptotic unbiasedness. When the sample size is large, the maximum likelihood estimator of $\theta$ is approximately unbiased, and it is approximately the MVUE of $\theta$. This is why maximum likelihood is the most widely used estimation technique in statistics. For the previous example,
$$E\hat{\sigma}^2 = E\left(\frac{n-1}{n} S^2\right) = \frac{n-1}{n}\sigma^2 \to \sigma^2 \quad \text{as } n \to \infty.$$
(The third sketch below simulates this.)

4. If an explicit density function is not available,
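The invariance principle of item 1 can be checked numerically. The sketch below is my own illustration (not part of the original notes; the sample size, seed, and true mean are arbitrary choices) using the exponential distribution: the MLE of the rate is $\hat{\lambda} = 1/\bar{X}$, so by invariance the MLE of the mean $h(\lambda) = 1/\lambda$ is $\bar{X}$, and maximizing the likelihood directly over the mean recovers the same value.

```python
# Sketch of the invariance principle (item 1); all specifics here are
# illustrative assumptions, not from the lecture notes.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200)  # true mean 2.0, true rate 0.5

def neg_loglik_mean(mu):
    # Negative log-likelihood when the exponential is parameterized by
    # its mean mu, i.e. density (1/mu) * exp(-x/mu).
    return len(x) * np.log(mu) + x.sum() / mu

res = minimize_scalar(neg_loglik_mean, bounds=(1e-6, 100.0), method="bounded")
lam_hat = 1.0 / x.mean()          # MLE of the rate lambda
print(res.x, 1.0 / lam_hat)       # direct MLE of the mean == h(lam_hat) = Xbar
```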
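The normal example in item 2 can likewise be verified by direct numerical maximization. The following sketch (again my addition, assuming a generic quasi-Newton optimizer is acceptable) minimizes the negative log-likelihood of $N(\mu, \sigma^2)$ and compares the result with the closed-form $\hat{\mu} = \bar{X}$ and $\hat{\sigma}^2 = \frac{1}{n}\sum (X_i - \bar{X})^2$. Optimizing over $\log\sigma$ keeps $\sigma > 0$ and, by the invariance principle, $\exp$ of the maximizer is the MLE of $\sigma$.

```python
# Numerical MLE for N(mu, sigma^2) (item 2); the simulated data and
# starting values are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=500)

def neg_loglik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)  # invariance: exp of the MLE of log(sigma)
    return -np.sum(norm.logpdf(x, loc=mu, scale=sigma))

res = minimize(neg_loglik, x0=[0.0, 0.0])
mu_hat, sigma2_hat = res.x[0], np.exp(res.x[1]) ** 2

print(mu_hat, x.mean())            # numerical vs. closed-form mu_hat
print(sigma2_hat, x.var(ddof=0))   # numerical vs. closed-form (1/n) variance
```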
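Finally, a small Monte Carlo sketch of item 3 (my addition; the true $\sigma^2 = 4$, the sample sizes, and the replication count are arbitrary): the simulated mean of $\hat{\sigma}^2$ tracks the exact value $\frac{n-1}{n}\sigma^2$ and approaches $\sigma^2$ as $n$ grows.

```python
# Simulating the asymptotic unbiasedness of sigma_hat^2 (item 3);
# parameter choices are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 4.0
for n in [5, 20, 100, 1000]:
    samples = rng.normal(scale=np.sqrt(sigma2), size=(20000, n))
    sigma2_mle = samples.var(axis=1, ddof=0)  # (1/n) sum (X_i - Xbar)^2 per row
    print(n, sigma2_mle.mean(), (n - 1) / n * sigma2)  # MC mean vs. exact value
```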
