2.3.3 The EM-algorithm for mixture distributions

In Section 2.2, we assumed that all distribution parameters were known. We now relax this requirement to allow more realistic modelling. We assume that the distribution type for each model class is known, but that the distribution parameters, $\theta = \{\theta_1, \ldots, \theta_K\}$, are unknown. The relative frequencies $\pi = \{\pi_1, \ldots, \pi_K\}$ are also unknown. Thus, we need to estimate $\Psi = \{\theta, \pi\}$ using the data $y = \{y_1, \ldots, y_n\}$. Here, the notation indicates that each model class, $k = 1, \ldots, K$, has its own unique parameter set $\theta_k$. In general, however, this is not strictly necessary, and there may be common elements in the different $\theta_k$. A natural example is when different models have the same variance or covariance structure.

When each data point is sampled from a randomly selected model class, the resulting density is a mixture,
\[
p(y \mid \Psi) = \prod_{i=1}^{n} \sum_{k=1}^{K} p(y_i \mid x_i = k, \theta_k)\, \pi_k,
\]
which can be very difficult to handle analytically. However, if the true class $x_i$ for each pixel were known, the likelihood would have the much nicer form
\[
p(y, x \mid \Psi) = \prod_{i=1}^{n} p(y_i \mid x_i, \theta_{x_i})\, \pi_{x_i}.
\]

The EM-algorithm is a general method for calculating maximum likelihood estimators by augmenting (extending) the observed data with unknown quantities, which can often yield more easily handled likelihood expressions.

Definition 2.3. (The EM-algorithm) Augment the data set $y$ with the random variables for the unknown quantities, $x$. The joint, or complete, variable $(y, x)$ is an extended version of $y$. Let $L(\Psi \mid y, x) = p(y, x \mid \Psi)$ be the joint likelihood for $(y, x)$. Given an initial parameter estimate $\Psi^{(0)}$, iterate the following steps.

1. E-step: Evaluate $Q(\Psi, \Psi^{(t)}) = \mathsf{E}\left(\log p(y, x \mid \Psi) \mid y, \Psi^{(t)}\right)$, i.e. with the expectation taken over the conditional (or posterior) distribution for $x$ given the observed data $y$ and the current parameter estimate $\Psi^{(t)}$.

2. M-step: Obtain the next parameter estimate by maximising the expected log-likelihood, $\Psi^{(t+1)} = \arg\max_{\Psi} Q(\Psi, \Psi^{(t)})$.
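For mixture distributions, the conditional distribution required in the E-step factorises over the observations, with posterior class probabilities $P(x_i = k \mid y_i, \Psi^{(t)}) = \pi_k^{(t)}\, p(y_i \mid x_i = k, \theta_k^{(t)}) \big/ \sum_{j=1}^{K} \pi_j^{(t)}\, p(y_i \mid x_i = j, \theta_j^{(t)})$. As a concrete illustration of the iteration in Definition 2.3, the following is a minimal sketch of EM for a one-dimensional Gaussian mixture, where both steps have closed forms. The function name, initialisation scheme, and fixed iteration count are assumptions made for this sketch, not part of the text.

    # Minimal EM sketch for a 1-D Gaussian mixture with K components.
    # Names and initialisation are illustrative choices, not fixed by the text.
    import numpy as np

    def em_gaussian_mixture(y, K, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        n = len(y)
        # Initial estimate Psi^(0): K distinct data points as means,
        # the pooled variance for every component, uniform weights.
        mu = rng.choice(y, size=K, replace=False).astype(float)
        sigma2 = np.full(K, y.var())
        pi_k = np.full(K, 1.0 / K)
        for _ in range(n_iter):
            # E-step: responsibilities w[i, k] = P(x_i = k | y_i, Psi^(t)),
            # the posterior distribution over which Q(Psi, Psi^(t)) averages.
            dens = (np.exp(-0.5 * (y[:, None] - mu) ** 2 / sigma2)
                    / np.sqrt(2.0 * np.pi * sigma2))
            w = dens * pi_k
            w /= w.sum(axis=1, keepdims=True)
            # M-step: maximise Q(Psi, Psi^(t)); for Gaussian components the
            # maximiser is available in closed form (weighted moments).
            Nk = w.sum(axis=0)
            pi_k = Nk / n
            mu = (w * y[:, None]).sum(axis=0) / Nk
            sigma2 = (w * (y[:, None] - mu) ** 2).sum(axis=0) / Nk
        return pi_k, mu, sigma2

    # Usage: recover two well-separated components from simulated data.
    rng = np.random.default_rng(42)
    y = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.0, 200)])
    print(em_gaussian_mixture(y, K=2))

Note that the likelihood here is the observed-data mixture likelihood given above; the E-step never requires the true classes $x_i$, only their posterior probabilities under the current estimate $\Psi^{(t)}$.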