EM and Monte Carlo EM Algorithms
Andrew J. Womack

September 26, 28, and 30, 2011

Setup

We have observed data $x$ and parameter $\theta$. We wish to maximize the likelihood, or equivalently the log-likelihood,
$$L(\theta \mid x) = f(x \mid \theta), \qquad \ell(\theta \mid x) = \log L(\theta \mid x).$$
The problem is that the (log) likelihood is intractable. We can, however, write the likelihood as
$$L(\theta \mid x) = \int f(x, z \mid \theta)\,\mathrm{d}z = \int L(\theta \mid x, z)\,\mathrm{d}z,$$
where $z$ is considered to be missing data, the complete-data likelihood $L(\theta \mid x, z)$ is tractable, and expectations with respect to $f(z \mid x, \theta)$ are easy to compute.

The EM algorithm begins with a guess $\theta^{(0)}$ and iterates two steps:

E-step: compute $Q(\theta \mid \theta^{(t)}) = \int \log\!\big(f(x, z \mid \theta)\big)\, f(z \mid x, \theta^{(t)})\,\mathrm{d}z$.
M-step: determine $\theta^{(t+1)} = \operatorname{argmax}_{\theta}\, Q(\theta \mid \theta^{(t)})$.

Background

Convexity: $\varphi : \mathbb{R} \to \mathbb{R}$ is convex if $\varphi(ta + (1-t)b) \le t\,\varphi(a) + (1-t)\,\varphi(b)$ for all $a, b \in \mathbb{R}$ and $t \in [0, 1]$.

Derivative condition: if $\varphi \in C^2(\mathbb{R})$, then $\varphi$ is convex if and only if $\frac{\mathrm{d}^2 \varphi}{\mathrm{d}x^2} \ge 0$ for all $x \in \mathbb{R}$.

Uniqueness of minimum: if $\varphi$ is convex, then every local minimum is a global minimum, so the minimum value (when it is attained) is unique. Note that this minimum value might be attained at multiple points, as it is for a convex function that is constant on an interval, e.g. $\varphi(x) = \max(|x| - 1,\, 0)$, whose minimum value $0$ is attained on all of $[-1, 1]$.

Jensen's inequality: if $\varphi$ is convex and $X$ is a random variable with finite expectation, then $\varphi(\mathbb{E}[X]) \le \mathbb{E}[\varphi(X)]$.
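As a concrete illustration of the E- and M-steps above (not part of the notes), here is a minimal sketch of EM for a two-component Gaussian mixture with known unit variances; the function name `em_two_gaussians`, the initialization, and the iteration count are my own choices. In this model both steps are available in closed form; a Monte Carlo EM would instead approximate the E-step integral by an average over draws from $f(z \mid x, \theta^{(t)})$.

```python
import math
import random

def em_two_gaussians(x, iters=200):
    """EM for a mixture w*N(mu1, 1) + (1 - w)*N(mu2, 1).

    The missing data z_i indicate which component generated x_i, so the
    complete-data likelihood L(theta | x, z) is tractable and the E- and
    M-steps have closed forms.
    """
    # initial guess theta^(0): mixing weight w and the two component means
    w, mu1, mu2 = 0.5, min(x), max(x)
    dens = lambda u, m: math.exp(-0.5 * (u - m) ** 2)  # N(m, 1) density up to a constant
    for _ in range(iters):
        # E-step: responsibilities r_i = P(z_i = 1 | x_i, theta^(t)),
        # the expectations that define Q(theta | theta^(t))
        r = [w * dens(u, mu1) / (w * dens(u, mu1) + (1 - w) * dens(u, mu2))
             for u in x]
        # M-step: closed-form argmax of Q(theta | theta^(t))
        w = sum(r) / len(x)
        mu1 = sum(ri * u for ri, u in zip(r, x)) / sum(r)
        mu2 = sum((1 - ri) * u for ri, u in zip(r, x)) / sum(1 - ri for ri in r)
    return w, mu1, mu2
```

Initializing the means at the extremes of the data is a crude but common device to keep the two components from collapsing onto the same mode; each iteration increases the observed-data log-likelihood, as the notes go on to establish via Jensen's inequality.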

This note was uploaded on 01/29/2012 for the course STAT 6866 taught by Professor Womack during the Fall '11 term at University of Florida.
