CS229 Lecture notes
Andrew Ng

Part IX
The EM algorithm

In the previous set of notes, we talked about the EM algorithm as applied to fitting a mixture of Gaussians. In this set of notes, we give a broader view of the EM algorithm, and show how it can be applied to a large family of estimation problems with latent variables. We begin our discussion with a very useful result called Jensen's inequality.

1 Jensen's inequality

Let f be a function whose domain is the set of real numbers. Recall that f is a convex function if f''(x) ≥ 0 (for all x ∈ ℝ). In the case of f taking vector-valued inputs, this is generalized to the condition that its Hessian H is positive semi-definite (H ≥ 0). If f''(x) > 0 for all x, then we say f is strictly convex (in the vector-valued case, the corresponding statement is that H must be strictly positive semi-definite, written H > 0). Jensen's inequality can then be stated as follows:

Theorem. Let f be a convex function, and let X be a random variable. Then:

E[f(X)] ≥ f(E X).

Moreover, if f is strictly convex, then E[f(X)] = f(E X) holds true if and only if X = E[X] with probability 1 (i.e., if X is a constant).

Recall our convention of occasionally dropping the parentheses when writing expectations, so in the theorem above, f(E X) = f(E[X]).

For an interpretation of the theorem, consider the numerical sketch and the figure below.
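The following is a quick numerical check of the inequality (a minimal sketch; the two-point random variable with a = 1, b = 3, each taken with probability 0.5, and the convex choice f(x) = x² are illustrative assumptions, not values from the notes):

```python
import numpy as np

# Illustrative (not from the notes): X takes the value a or b with probability 0.5 each,
# and f(x) = x**2 is a strictly convex function.
a, b = 1.0, 3.0
f = lambda x: x ** 2

E_X = 0.5 * a + 0.5 * b            # E[X] = 2.0
E_fX = 0.5 * f(a) + 0.5 * f(b)     # E[f(X)] = 5.0
print(E_fX, f(E_X))                # 5.0 >= 4.0, as Jensen's inequality predicts

# The same check by Monte Carlo: for a convex f, the empirical average of f(X)
# is always at least f applied to the empirical average of X.
rng = np.random.default_rng(0)
samples = rng.choice([a, b], size=100_000)
assert f(samples).mean() >= f(samples.mean()) - 1e-9
```

Since f is strictly convex here and X is not constant, the inequality is strict; the gap E[f(X)] − f(E X) = 1 in this example.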

[Figure: a convex function f (solid line), with a, E[X], and b marked on the x-axis and f(a), f(E[X]), E[f(X)], and f(b) marked on the y-axis.]

Here, f is a convex function shown by the solid line. Also, X is a random variable that has a 0.5 chance of taking the value a, and a 0.5 chance of taking the value b (indicated on the x-axis). Thus, the expected value of X is given by the midpoint between a and b. We also see the values f(a), f(b) and f(E[X]) indicated on the y-axis. Moreover, the value E[f(X)] is now the midpoint on the y-axis between f(a) and f(b). From our example, we see that because f is convex, it must be the case that E[f(X)] ≥ f(E X).

Incidentally, quite a lot of people have trouble remembering which way the inequality goes, and remembering a picture like this is a good way to quickly figure out the answer.

Remark. Recall that f is [strictly] concave if and only if −f is [strictly] convex (i.e., f''(x) ≤ 0 or H ≤ 0). Jensen's inequality also holds for concave functions f, but with the direction of all the inequalities reversed (E[f(X)] ≤ f(E X), etc.).

2 The EM algorithm

Suppose we have an estimation problem in which we have a training set { x^(1), ..., x^(m) } consisting of m independent examples. We wish to fit the parameters of a model p(x, z) to the data, where the likelihood is given by

ℓ(θ) = Σ_{i=1}^m log p(x^(i); θ) = Σ_{i=1}^m log Σ_{z^(i)} p(x^(i), z^(i); θ).
But explicitly finding the maximum likelihood estimates of the parameters θ may be hard. Here, the z^(i) are the latent (unobserved) random variables.
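To make the likelihood concrete, here is a small sketch that evaluates ℓ(θ) for a one-dimensional mixture of Gaussians, the model from the previous set of notes. (Assumptions for illustration: the helper name log_likelihood, the toy data, and the candidate parameters are not from the notes; z^(i) indexes which Gaussian generated x^(i).)

```python
import numpy as np
from scipy.stats import norm

def log_likelihood(x, phi, mu, sigma):
    """l(theta) = sum_i log sum_z p(x^(i), z^(i); theta) for a 1-D Gaussian mixture.

    phi[j] = p(z = j); mu[j], sigma[j] parameterize p(x | z = j)."""
    total = 0.0
    for xi in x:
        # Marginalize out the latent z^(i): sum the joint over every component j.
        p_xi = sum(phi[j] * norm.pdf(xi, loc=mu[j], scale=sigma[j])
                   for j in range(len(phi)))
        total += np.log(p_xi)
    return total

# Toy data: only x is observed; which component generated each point (z) is latent.
x = np.array([-2.1, -1.9, -2.3, 1.8, 2.2, 2.0])
print(log_likelihood(x, phi=[0.5, 0.5], mu=[-2.0, 2.0], sigma=[0.5, 0.5]))  # higher
print(log_likelihood(x, phi=[0.5, 0.5], mu=[0.0, 0.0], sigma=[1.0, 1.0]))   # lower
```

Note that the logarithm sits outside a sum over the latent z^(i), so ℓ(θ) does not decompose into simple per-component terms; this is what makes direct maximization hard and what motivates the EM algorithm developed in these notes.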