lec27 - Example for MLE; Confidence Intervals

Example for MLE:

Review: What is MLE? How to find it (5 steps)? Note that θ may be multidimensional: Θ ⊆ ℝ^p with p > 1.

Example: Let X_1, ..., X_n be i.i.d. N(μ, σ²), where both μ and σ² are unknown, and let x_1, ..., x_n be the data/sample values of X_1, ..., X_n.

What is the pdf of a normal random variable? Since we have values from n independent variables, the likelihood function is a product of n densities:

L(\mu, \sigma^2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(X_i - \mu)^2}{2\sigma^2}} = (2\pi\sigma^2)^{-n/2} \, e^{-\frac{1}{2\sigma^2} \sum_{i=1}^{n} (X_i - \mu)^2}
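For concreteness, here is a minimal numerical sketch of this likelihood in Python/NumPy; the sample values in x are hypothetical and used only for illustration:

```python
import numpy as np

# Hypothetical sample values (illustration only)
x = np.array([4.2, 5.1, 3.8, 4.9, 5.3, 4.4])

def likelihood(mu, sigma2, x):
    """Product of the n N(mu, sigma2) densities, evaluated at the sample x."""
    dens = np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    return np.prod(dens)

# Compare two candidate parameter values: the one with the larger
# likelihood explains the observed sample better.
print(likelihood(4.0, 1.0, x))
print(likelihood(x.mean(), x.var(), x))  # the MLE values derived below
```

In practice one works with the logarithm of this product, as on the next page, because a product of many small densities underflows quickly.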
The log-likelihood is:

\ell(\mu, \sigma^2) = \log L(\mu, \sigma^2) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(X_i - \mu)^2

Since we now have two parameters, μ and σ², we need two partial derivatives of the log-likelihood:

\frac{\partial}{\partial\mu}\log L(\mu, \sigma^2) = 0 - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(X_i - \mu)\cdot(-2) = \frac{1}{\sigma^2}\sum_{i=1}^{n}(X_i - \mu)

\frac{\partial}{\partial\sigma^2}\log L(\mu, \sigma^2) = -\frac{n}{2}\cdot\frac{1}{\sigma^2} + \frac{1}{2(\sigma^2)^2}\sum_{i=1}^{n}(X_i - \mu)^2

We need to find values of μ and σ² that yield zeros for both derivatives at the same time.
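As a sanity check on these derivatives, here is a small symbolic sketch using SymPy; the three symbolic observations X1, X2, X3 are a hypothetical stand-in for a general sample of size n:

```python
import sympy as sp

mu = sp.Symbol('mu', real=True)
sigma2 = sp.Symbol('sigma2', positive=True)
X = sp.symbols('X1:4', real=True)  # X1, X2, X3
n = len(X)

# Log-likelihood of an i.i.d. N(mu, sigma2) sample of size n
loglik = -sp.Rational(n, 2) * sp.log(2 * sp.pi * sigma2) \
         - sum((Xi - mu) ** 2 for Xi in X) / (2 * sigma2)

dmu = sp.simplify(sp.diff(loglik, mu))       # (1/sigma2) * sum(Xi - mu)
dsig = sp.simplify(sp.diff(loglik, sigma2))  # -n/(2*sigma2) + sum((Xi-mu)^2)/(2*sigma2^2)
print(dmu)
print(dsig)

# Solving both equations simultaneously reproduces the MLEs derived below
print(sp.solve([dmu, dsig], [mu, sigma2], dict=True))
```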
Setting \frac{\partial}{\partial\mu}\log L(\mu, \sigma^2) = 0 gives

\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} X_i = \bar{X}.

Plugging this value into the derivative for σ² and setting \frac{\partial}{\partial\sigma^2}\log L(\hat{\mu}, \sigma^2) = 0 gives

\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i - \hat{\mu})^2.

Do you notice something special? \hat{\sigma}^2 \neq S^2 (which divides by n - 1, not n)! The MLE is biased! However, the bias does not ruin the MLE's other nice features, such as small MSE.
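To see the bias concretely, a short simulation sketch (with a hypothetical true μ, σ², and sample size) can compare the average of the MLE σ̂² with the average of S² over many repeated samples:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true, sigma2_true, n = 5.0, 4.0, 10  # hypothetical true parameters and sample size

mle_vars, sample_vars = [], []
for _ in range(100_000):
    x = rng.normal(mu_true, np.sqrt(sigma2_true), size=n)
    mle_vars.append(np.var(x, ddof=0))     # MLE: divide by n
    sample_vars.append(np.var(x, ddof=1))  # S^2: divide by n - 1

print(np.mean(mle_vars))     # close to (n-1)/n * sigma2 = 3.6, i.e. biased low
print(np.mean(sample_vars))  # close to sigma2 = 4.0, unbiased
```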
Topic 2: Confidence intervals

Motivation: The last lectures provided a way to compute a point estimate for a parameter. Based on that, it is natural to ask "how good is this point estimate?" or "how close is the estimate to the true value of the parameter?"

Further thoughts: Instead of just looking at the point estimate, we will now try to compute an interval around the estimated parameter value in which the true parameter is "likely" to fall. Such an interval is called a confidence interval.

Definition: An interval (L, U) is a (1 - α) · 100% confidence interval for the parameter θ if it contains the parameter with probability 1 - α:

P(L < \theta < U) = 1 - \alpha.
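As a preview of the simplest case, here is a sketch of the standard z-based interval \bar{X} \pm z_{\alpha/2}\,\sigma/\sqrt{n} for the mean of a normal sample when σ is assumed known; the data and σ below are hypothetical, chosen only for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical data and known standard deviation (illustration only)
x = np.array([4.2, 5.1, 3.8, 4.9, 5.3, 4.4])
sigma = 1.0
alpha = 0.05  # 95% confidence level

n = len(x)
xbar = x.mean()
z = stats.norm.ppf(1 - alpha / 2)      # standard normal quantile z_{alpha/2}
half_width = z * sigma / np.sqrt(n)

L, U = xbar - half_width, xbar + half_width
print(f"{(1 - alpha) * 100:.0f}% CI for mu: ({L:.3f}, {U:.3f})")
```

Over repeated samples, an interval constructed this way contains the true μ with probability 1 - α, which is exactly the defining property above.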