EM and Monte Carlo EM Algorithms
Andrew J. Womack
September 26, 28, and 30, 2011

Setup

We have observed data x and parameter θ. We wish to maximize the (log) likelihood

    L(θ | x) = f(x | θ),        ℓ(θ | x) = log(L(θ | x)).

The problem is that the (log) likelihood is intractable. We can, however, write the likelihood as

    L(θ | x) = ∫ f(x, z | θ) dz = ∫ L(θ | x, z) dz,

where z is considered to be "missing" data, L(θ | x, z) is tractable, and expectations with respect to f(z | x, θ) are easy to compute.

The EM algorithm iterates as follows:

Begin with a guess θ^(0).
E-step: compute Q(θ | θ^(t)) = ∫ log(f(x, z | θ)) f(z | x, θ^(t)) dz.
M-step: set θ^(t+1) = argmax_θ Q(θ | θ^(t)).

Background

Convexity: φ : R → R is convex if φ(ta + (1 − t)b) ≤ tφ(a) + (1 − t)φ(b) for all a, b ∈ R and t ∈ [0, 1].

Derivative condition: if φ ∈ C²(R), then φ is convex if and only if d²φ/dx² ≥ 0 for all x ∈ R.

Uniqueness of minimum: if φ is convex, then every local minimum of φ is a global minimum, so the minimum value (when it exists) is unique. Note that this minimum value might be attained at multiple positions, as it is for the function

    φ(x) = −x for x ≤ 0,    φ(x) = 0 for x ≥ 0,

whose minimum value 0 is attained at every x ≥ 0.

Jensen's Inequality: ...
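The E-step/M-step iteration above can be made concrete with a model where both steps have closed forms. The sketch below uses a two-component Gaussian mixture with unit variances; the model choice, the function name em_gaussian_mixture, and all starting values are illustrative assumptions, not part of the notes. Here z is the unobserved component label, the E-step computes the responsibilities f(z | x, θ^(t)), and the M-step maximizes Q(θ | θ^(t)) exactly.

```python
import math

def em_gaussian_mixture(x, mu1=0.0, mu2=1.0, pi1=0.5, iters=50):
    """Minimal EM sketch: fit the means and the mixing weight of a
    two-component N(mu, 1) mixture (hypothetical example model)."""
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation,
        # i.e. f(z_i = 1 | x_i, theta^(t)).
        r = []
        for xi in x:
            p1 = pi1 * math.exp(-0.5 * (xi - mu1) ** 2)
            p2 = (1 - pi1) * math.exp(-0.5 * (xi - mu2) ** 2)
            r.append(p1 / (p1 + p2))
        # M-step: closed-form argmax of Q(theta | theta^(t)):
        # responsibility-weighted means and the average responsibility.
        n1 = sum(r)
        mu1 = sum(ri * xi for ri, xi in zip(r, x)) / n1
        mu2 = sum((1 - ri) * xi for ri, xi in zip(r, x)) / (len(x) - n1)
        pi1 = n1 / len(x)
    return mu1, mu2, pi1
```

On well-separated data, e.g. x = [-2.0, -1.9, -2.1, 3.0, 3.1, 2.9], the iteration settles near the two cluster means with a mixing weight near 1/2.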
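The title also mentions the Monte Carlo EM algorithm, which this preview does not detail. A common form replaces the exact E-step integral with an average over m simulated completions z^(1), ..., z^(m) drawn from f(z | x, θ^(t)). The sketch below applies that idea to the same illustrative mixture; mcem_gaussian_mixture, the sample size m, and the seed are all assumptions for demonstration only.

```python
import math
import random

def mcem_gaussian_mixture(x, mu1=0.0, mu2=1.0, pi1=0.5,
                          iters=100, m=50, seed=0):
    """Monte Carlo EM sketch for the same hypothetical 2-component
    N(mu, 1) mixture: the E-step is approximated by simulation."""
    rng = random.Random(seed)
    for _ in range(iters):
        # Monte Carlo E-step: for each x_i, draw m labels
        # z_i ~ f(z_i | x_i, theta^(t)) and record their frequency,
        # which estimates the exact responsibility.
        r = []
        for xi in x:
            p1 = pi1 * math.exp(-0.5 * (xi - mu1) ** 2)
            p2 = (1 - pi1) * math.exp(-0.5 * (xi - mu2) ** 2)
            prob = p1 / (p1 + p2)
            r.append(sum(rng.random() < prob for _ in range(m)) / m)
        # M-step: maximize the Monte Carlo estimate of Q; assumes both
        # components retain some simulated mass (true for mixed data).
        n1 = sum(r)
        mu1 = sum(ri * xi for ri, xi in zip(r, x)) / n1
        mu2 = sum((1 - ri) * xi for ri, xi in zip(r, x)) / (len(x) - n1)
        pi1 = n1 / len(x)
    return mu1, mu2, pi1
```

The iterates jitter because of the simulated E-step, but with a moderate m they stay near the same answer as exact EM; increasing m along the iterations is the usual refinement.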