…um likelihood estimation) fail.
22 Maximum Likelihood Estimation (MLE)

• The apparent paradox of the previous example happens because MoM does NOT use all of the information in the data efficiently. In contrast, maximum likelihood estimation (MLE) is known to be as efficient as possible (in great generality).
• This is one of the most commonly used estimation methods.
• Example: Suppose we collected 5 observations on interarrival times: 3, 1, 4, 3, and 8. We have hypothesized that the exponential distribution can be used to describe this data. What should the parameter of the exponential distribution be?
• Idea: Under MLE, we choose the parameter that makes the observed data most likely to occur!
• Implementation: We compute the likelihood of each observation under the parameter λ:

  Observation   Likelihood
  3             λe^(−3λ)
  1             λe^(−λ)
  4             λe^(−4λ)
  3             λe^(−3λ)
  8             λe^(−8λ)
23 Example: Exponential Distribution

• Let L(λ) denote the likelihood/probability of observing these 5 observations if λ is the underlying parameter of the exponential distribution:

  L(λ) = λe^(−3λ) · λe^(−λ) · λe^(−4λ) · λe^(−3λ) · λe^(−8λ) = λ^5 e^(−19λ)

• Goal: Under MLE, we choose the λ that maximizes the likelihood function L(·)
• For most problems, it is easier to work with the log-likelihood ln L(·)
• The maximizer of L(·) is the same as the maximizer of ln L(·)
  ln L(λ) = −19λ + 5 ln λ

• Solving for the MLE: maximize ln L(λ) over λ ≥ 0:

  d ln L(λ)/dλ = 0  ⇔  −19 + 5/λ̂ = 0  ⇔  λ̂ = 5/19

• In this example, the estimator under MLE coincides with the MoM estimator. It is possible for both methods to give the same answer, but they will often differ; when they do, one should prefer the MLE over the MoM.
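As a quick numerical check (a sketch using NumPy/SciPy, not part of the original notes), we can maximize the log-likelihood for the five observed interarrival times and confirm it agrees with the closed-form answer λ̂ = 5/19:

```python
import numpy as np
from scipy.optimize import minimize_scalar

data = np.array([3, 1, 4, 3, 8])  # the five observed interarrival times

def neg_log_likelihood(lam):
    # ln L(lambda) = n ln(lambda) - lambda * sum(x); negate for minimization
    return -(len(data) * np.log(lam) - lam * data.sum())

res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 10), method="bounded")
closed_form = len(data) / data.sum()  # 5/19, the analytical MLE
print(res.x, closed_form)  # both approximately 0.2632
```

The same code also illustrates the general result derived on the next slide: the MLE is n divided by the sum of the observations, i.e. the reciprocal of the sample mean.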
24 Example: General Exponential Distribution

• Suppose X1, ..., Xn are i.i.d. observations from an exponential distribution with parameter λ. We want to find the MLE estimator of λ.

  L(λ) = λe^(−λX1) · λe^(−λX2) · · · λe^(−λXn) = λ^n e^(−λ(X1 + ··· + Xn))

  ln L(λ) = −λ(X1 + ··· + Xn) + n ln λ
• The MLE estimator λ̂ is the maximizer of ln L(·):

  d ln L(λ)/dλ = 0  ⇔  −(X1 + ··· + Xn) + n/λ̂ = 0  ⇔  λ̂ = n/(X1 + ··· + Xn) = 1/[(X1 + ··· + Xn)/n]

  i.e., the reciprocal of the sample mean.

25 Example: Normal Distribution

• Suppose X1, ..., Xn are i.i.d. observations from a Normal distribution with mean μ and variance σ². We want to find the MLE estimators of μ and σ.
  L(μ, σ) = (1/(√(2π)σ)) e^(−(X1−μ)²/(2σ²)) · (1/(√(2π)σ)) e^(−(X2−μ)²/(2σ²)) · · · (1/(√(2π)σ)) e^(−(Xn−μ)²/(2σ²))
          = (1/(√(2π)σ))^n e^(−Σᵢ(Xi−μ)²/(2σ²))

  ln L(μ, σ) = −n ln(√(2π)σ) − (1/(2σ²)) Σᵢ (Xi − μ)²

• Solving for the MLE (μ̂, σ̂) by maximizing the log-likelihood function:

  0 = ∂ ln L(μ, σ)/∂μ  ⇔  0 = (1/σ̂²) Σᵢ (Xi − μ̂)
  0 = ∂ ln L(μ, σ)/∂σ  ⇔  0 = −n/σ̂ + (1/σ̂³) Σᵢ (Xi − μ̂)²

  ⇒  μ̂ = (1/n) Σᵢ Xi = X̄,   σ̂² = (1/n) Σᵢ (Xi − X̄)²
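The closed-form answers above (sample mean, and the 1/n sample variance) can be checked by maximizing the normal log-likelihood numerically. A sketch using NumPy/SciPy on synthetic data (the data, seed, and the log-σ reparameterization are illustrative choices, not part of the notes):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=500)  # synthetic illustrative data

def neg_log_likelihood(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)  # optimize over log(sigma) so sigma stays positive
    n = len(x)
    # -ln L = n ln(sqrt(2 pi) sigma) + sum((x - mu)^2) / (2 sigma^2)
    return n * np.log(np.sqrt(2 * np.pi) * sigma) + ((x - mu) ** 2).sum() / (2 * sigma**2)

res = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

# Closed-form MLEs: sample mean, and the biased (1/n) standard deviation
print(mu_hat, x.mean())
print(sigma_hat, x.std(ddof=0))
```

Note that the MLE of σ² divides by n, not n − 1, which is why the comparison uses `x.std(ddof=0)` rather than the unbiased sample standard deviation.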
This note was uploaded on 10/26/2010 for the course OR&IE 5580 at Cornell University (Engineering School).