EM and Monte Carlo EM Algorithms
Andrew J. Womack
September 26, 28, and 30, 2011

Setup

We have observed data $x$ and parameter $\theta$. We wish to maximize the (log) likelihood
$$L(\theta \mid x) = f(x \mid \theta), \qquad \ell(\theta \mid x) = \log\big(L(\theta \mid x)\big).$$
The problem is that the (log) likelihood is intractable. We can write the likelihood as
$$L(\theta \mid x) = \int f(x, z \mid \theta)\,dz = \int L(\theta \mid x, z)\,dz,$$
where $z$ is considered to be missing data, $L(\theta \mid x, z)$ is tractable, and expectations with respect to $f(z \mid x, \theta)$ are easy to compute.

Begin with a guess $\theta^{(0)}$.

E-step: Compute $Q(\theta \mid \theta^{(t)}) = \int \log\big(f(x, z \mid \theta)\big)\, f(z \mid x, \theta^{(t)})\,dz$.

M-step: Determine $\theta^{(t+1)} = \operatorname{argmax}_{\theta}\, Q(\theta \mid \theta^{(t)})$.

Background

Convexity: $\varphi : \mathbb{R} \to \mathbb{R}$ is convex if $\varphi(ta + (1-t)b) \le t\,\varphi(a) + (1-t)\,\varphi(b)$ for all $a, b \in \mathbb{R}$ and $t \in [0, 1]$.

Derivative condition: If $\varphi \in C^2(\mathbb{R})$, then $\varphi$ is convex if and only if $\frac{d^2 \varphi}{dx^2} \ge 0$ for all $x \in \mathbb{R}$.

Minimum of a convex function: If $\varphi$ is convex, then every local minimum of $\varphi$ is a global minimum, so $\varphi$ has a single minimum value. Note that this minimum value might be attained at multiple points, as it is for a convex function that is constant on an interval.

Jensen's inequality: If $\varphi$ is convex and $X$ is a random variable, then $\varphi(E[X]) \le E[\varphi(X)]$.
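The E-step/M-step iteration above can be sketched on a toy model: a two-component Gaussian mixture with known unit variances and mixing weight 1/2, so that $\theta = (\mu_1, \mu_2)$ and the missing data $z_i$ are the component labels. This example and all names in it are illustrative, not taken from the notes; it uses only the standard library.

```python
import math
import random

def em_two_means(x, mu1, mu2, n_iter=50):
    """EM for x_i ~ 0.5 * N(mu1, 1) + 0.5 * N(mu2, 1), theta = (mu1, mu2)."""
    for _ in range(n_iter):
        # E-step: responsibilities r_i = f(z_i = 1 | x_i, theta^(t)),
        # i.e. the posterior probability that x_i came from component 1.
        r = []
        for xi in x:
            p1 = math.exp(-0.5 * (xi - mu1) ** 2)
            p2 = math.exp(-0.5 * (xi - mu2) ** 2)
            r.append(p1 / (p1 + p2))
        # M-step: argmax of Q(theta | theta^(t)) is available in closed
        # form here -- responsibility-weighted means of the data.
        mu1 = sum(ri * xi for ri, xi in zip(r, x)) / sum(r)
        mu2 = sum((1 - ri) * xi for ri, xi in zip(r, x)) / sum(1 - ri for ri in r)
    return mu1, mu2

# Simulated data from components centered at -2 and 3.
random.seed(0)
data = [random.gauss(-2, 1) for _ in range(200)] + \
       [random.gauss(3, 1) for _ in range(200)]
mu1, mu2 = em_two_means(data, mu1=-1.0, mu2=1.0)
```

After a few iterations the estimates settle near the true component means. In Monte Carlo EM (the second algorithm named in the title), the E-step expectation would instead be approximated by sampling $z$ from $f(z \mid x, \theta^{(t)})$ when, unlike here, it is not available in closed form.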
This note was uploaded on 01/29/2012 for the course STAT 6866 taught by Professor Womack during the Fall '11 term at University of Florida.