Likelihood of Data

• Consider n i.i.d. random variables X_1, X_2, ..., X_n, where each X_i is a sample from the density function f(X_i | θ).
  o Note: we now explicitly specify the parameter θ of the distribution.
• We want to determine how "likely" the observed data (x_1, x_2, ..., x_n) is based on the density f(X_i | θ).
• Define the likelihood function, L(θ):

      L(θ) = ∏_{i=1}^{n} f(X_i | θ)

  o This is just a product since the X_i are i.i.d.
• Intuitively: what is the probability of the observed data using the density function f(X_i | θ), for some choice of θ?

Demo

Maximum Likelihood Estimator

• The Maximum Likelihood Estimator (MLE) of θ is the value of θ that maximizes L(θ). More formally:

      θ̂_MLE = argmax_θ L(θ)

• It is more convenient to use the log-likelihood function, LL(θ):

      LL(θ) = log L(θ) = log ∏_{i=1}^{n} f(X_i | θ) = ∑_{i=1}^{n} log f(X_i | θ)

• Note that the log function is "monotone" for positive values.
  o Formally: x ≤ y ⇔ log(x) ≤ log(y) for all x, y > 0.
• So, the θ that maximizes LL(θ) also maximizes L(θ).
  o Formally: argmax_θ LL(θ) = argmax_θ L(θ)
  o Similarly, for any positive constant c (not dependent on θ):

      argmax_θ (c · LL(θ)) = argmax_θ LL(θ) = argmax_θ L(θ)

Computing the MLE

• General approach for finding the MLE of θ:
  o Determine a formula for LL(θ).
  o Differentiate LL(θ) with respect to (each) θ: ∂LL(θ)/∂θ
  o To maximize, set ∂LL(θ)/∂θ = 0.
  o Solve the resulting (simultaneous) equations to get θ̂_MLE.
  o Make sure that the derived θ̂_MLE is actually a maximum (and not a minimum or saddle point). E.g., check that the second derivative LL″(θ̂_MLE) < 0.
    • This step is often ignored in expository derivations.
    • So, we'll ignore it here too (and won't require it in this class).
• For many standard distributions, someone has already done this work for you. (Yay!)

Maximizing Likelihood with Bernoulli

• Consider i.i.d. random variables X_1, X_2, ..., X_n...
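The procedure above (form LL(θ), maximize it over θ) can be sketched numerically. The code below is a minimal illustration, not from the notes: for i.i.d. Bernoulli samples with f(x | p) = p^x (1 − p)^(1−x), it compares a brute-force grid search over LL(p) against the known closed-form Bernoulli MLE, the sample mean. The function names `log_likelihood` and `mle_grid` and the particular data are hypothetical choices for the sketch.

```python
import math

def log_likelihood(xs, p):
    """LL(p) = sum_i log f(x_i | p) for i.i.d. Bernoulli(p) samples.

    Uses f(x | p) = p^x * (1 - p)^(1 - x), so each term is
    x * log(p) + (1 - x) * log(1 - p).
    """
    return sum(x * math.log(p) + (1 - x) * math.log(1 - p) for x in xs)

def mle_grid(xs, steps=10_000):
    """Approximate argmax_p LL(p) by searching a grid over (0, 1).

    Since log is monotone, maximizing LL(p) also maximizes L(p),
    so the grid search can safely work with LL instead of L.
    """
    grid = [(k + 1) / (steps + 2) for k in range(steps)]
    return max(grid, key=lambda p: log_likelihood(xs, p))

# Hypothetical observed data (x_1, ..., x_n):
xs = [1, 0, 1, 1, 0, 1, 1, 1]

analytic = sum(xs) / len(xs)   # closed-form Bernoulli MLE: the sample mean
numeric = mle_grid(xs)         # numerical maximizer of LL(p)

# The two estimates should agree (up to grid resolution).
print(analytic, round(numeric, 3))
```

In practice one would differentiate LL(p), set the derivative to zero, and solve, as the steps above describe; the grid search here is only a sanity check that the analytic answer really is the maximizer.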