EM and Monte Carlo EM Algorithms
Andrew J. Womack
September 26, 28, and 30, 2011

Setup

We have observed data x and parameter θ. We wish to maximize the likelihood, or equivalently the log-likelihood,

  L(θ | x) = f(x | θ),  ℓ(θ | x) = log L(θ | x).

The problem is that the (log-)likelihood is intractable. We can, however, write the likelihood as

  L(θ | x) = ∫ f(x, z | θ) dz = ∫ L(θ | x, z) dz,

where z is considered to be "missing" data, the complete-data likelihood L(θ | x, z) is tractable, and expectations with respect to f(z | x, θ) are easy to compute.

The EM algorithm begins with a guess θ^(0) and iterates two steps:

E-step: compute Q(θ | θ^(t)) = ∫ log f(x, z | θ) f(z | x, θ^(t)) dz.
M-step: set θ^(t+1) = argmax_θ Q(θ | θ^(t)).

Background: Convexity

A function φ : R → R is convex if φ(ta + (1 − t)b) ≤ t φ(a) + (1 − t) φ(b) for all a, b ∈ R and t ∈ [0, 1].

Derivative condition: if φ ∈ C²(R), then φ is convex if and only if d²φ/dx² ≥ 0 for all x ∈ R.

Uniqueness of the minimum: if φ is convex and attains a minimum, then the minimum value is unique. Note that this minimum might occur at multiple positions, as it does for the function

  φ(x) = { −x, x ≤ 0;  0, x ≥ 0 },

whose minimum value 0 is attained at every x ≥ 0.

Jensen's inequality: if φ is convex and X is a random variable with E[X] finite, then φ(E[X]) ≤ E[φ(X)].
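As a concrete instance of the E-step/M-step iteration above, here is a minimal sketch of EM for a two-component Gaussian mixture, where the missing data z are the unobserved component labels. The mixture model and all variable names are illustrative assumptions for this sketch, not taken from the notes; for this model both steps have closed forms.

```python
import numpy as np

def normal_pdf(x, mu, sigma2):
    # Density of N(mu, sigma2) evaluated elementwise at x.
    return np.exp(-0.5 * (x - mu) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)

def em_two_gaussians(x, n_iter=100):
    """EM for x_i ~ pi * N(mu1, s1) + (1 - pi) * N(mu2, s2) (illustrative sketch)."""
    # Initial guess theta^(0): crude but adequate for well-separated components.
    pi, mu1, mu2 = 0.5, np.min(x), np.max(x)
    s1 = s2 = np.var(x)
    for _ in range(n_iter):
        # E-step: responsibilities r_i = P(z_i = 1 | x_i, theta^(t)),
        # the expectations under f(z | x, theta^(t)) needed to form Q.
        p1 = pi * normal_pdf(x, mu1, s1)
        p2 = (1 - pi) * normal_pdf(x, mu2, s2)
        r = p1 / (p1 + p2)
        # M-step: theta^(t+1) = argmax_theta Q(theta | theta^(t)),
        # which here is a set of responsibility-weighted averages.
        pi = r.mean()
        mu1 = np.sum(r * x) / np.sum(r)
        mu2 = np.sum((1 - r) * x) / np.sum(1 - r)
        s1 = np.sum(r * (x - mu1) ** 2) / np.sum(r)
        s2 = np.sum((1 - r) * (x - mu2) ** 2) / np.sum(1 - r)
    return pi, mu1, mu2, s1, s2
```

Each iteration can only increase ℓ(θ | x) (the ascent property proved via Jensen's inequality below), so the sketch simply runs a fixed number of iterations rather than testing convergence. A Monte Carlo EM variant would replace the exact responsibilities in the E-step with averages over simulated draws of z.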