Bootstrap Methods EM Algorithm

The log-likelihood based on the N observed data points is

$$\ell(\theta; \mathbf{Z}) = \sum_{i=1}^{N} \log\left[(1 - \pi)\,\phi_{\theta_1}(y_i) + \pi\,\phi_{\theta_2}(y_i)\right],$$

which is difficult to maximize directly, since the sum of terms appears inside the logarithm.

Bootstrap Methods EM Algorithm

We can simplify the problem by introducing a new variable $\Delta_i$ (an unobserved latent variable) that takes the value 1 if $y_i$ comes from component 2 and 0 otherwise. The complete-data log-likelihood is then

$$\ell_0(\theta; \mathbf{Z}, \Delta) = \sum_{i=1}^{N}\left[(1 - \Delta_i)\log\phi_{\theta_1}(y_i) + \Delta_i \log\phi_{\theta_2}(y_i)\right] + \sum_{i=1}^{N}\left[(1 - \Delta_i)\log(1 - \pi) + \Delta_i \log\pi\right].$$
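If the $\Delta_i$ were observed, $\ell_0$ would separate into pieces that can be maximized independently, which is what makes the latent-variable form attractive. A worked sketch of that observation (standard reasoning, not verbatim from the slides): the $\theta_1$ terms involve only observations with $\Delta_i = 0$, the $\theta_2$ terms only those with $\Delta_i = 1$, and the $\pi$ terms form a binomial likelihood, so

$$\hat{\mu}_2 = \frac{\sum_{i=1}^{N} \Delta_i\, y_i}{\sum_{i=1}^{N} \Delta_i}, \qquad \hat{\pi} = \frac{1}{N}\sum_{i=1}^{N} \Delta_i,$$

with the analogous weighted expressions for $\hat{\mu}_1$, $\hat{\sigma}_1^2$, and $\hat{\sigma}_2^2$.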
Bootstrap Methods EM Algorithm

Since the values of the $\Delta_i$'s are unknown, we substitute for each $\Delta_i$ its expected value,

$$\gamma_i(\theta) = \mathrm{E}[\Delta_i \mid \theta, \mathbf{Z}] = \Pr(\Delta_i = 1 \mid \theta, \mathbf{Z}),$$

called the responsibility of component 2 for observation $i$.
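By Bayes' rule this expectation has a closed form in the current parameters. Writing it out (a standard step, spelled out here because it is exactly the quantity computed in the Expectation Step of Algorithm 8.1 below):

$$\gamma_i(\theta) = \Pr(\Delta_i = 1 \mid \theta, \mathbf{Z}) = \frac{\pi\,\phi_{\theta_2}(y_i)}{(1 - \pi)\,\phi_{\theta_1}(y_i) + \pi\,\phi_{\theta_2}(y_i)}.$$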

Bootstrap Methods EM Algorithm

Algorithm 8.1: EM Algorithm for a Two-component Gaussian Mixture

1. Take initial guesses for the parameters $\hat{\mu}_1, \hat{\sigma}_1^2, \hat{\mu}_2, \hat{\sigma}_2^2, \hat{\pi}$.

2. Expectation Step: compute the responsibilities

$$\hat{\gamma}_i = \frac{\hat{\pi}\,\phi_{\hat{\theta}_2}(y_i)}{(1 - \hat{\pi})\,\phi_{\hat{\theta}_1}(y_i) + \hat{\pi}\,\phi_{\hat{\theta}_2}(y_i)}, \qquad i = 1, \ldots, N.$$

3. Maximization Step: compute the weighted means and variances

$$\hat{\mu}_1 = \frac{\sum_{i=1}^{N}(1 - \hat{\gamma}_i)\, y_i}{\sum_{i=1}^{N}(1 - \hat{\gamma}_i)}, \qquad \hat{\sigma}_1^2 = \frac{\sum_{i=1}^{N}(1 - \hat{\gamma}_i)(y_i - \hat{\mu}_1)^2}{\sum_{i=1}^{N}(1 - \hat{\gamma}_i)},$$

$$\hat{\mu}_2 = \frac{\sum_{i=1}^{N}\hat{\gamma}_i\, y_i}{\sum_{i=1}^{N}\hat{\gamma}_i}, \qquad \hat{\sigma}_2^2 = \frac{\sum_{i=1}^{N}\hat{\gamma}_i\,(y_i - \hat{\mu}_2)^2}{\sum_{i=1}^{N}\hat{\gamma}_i},$$

and the mixing probability $\hat{\pi} = \sum_{i=1}^{N}\hat{\gamma}_i / N$.

4. Iterate steps 2 and 3 until convergence. [2]
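A direct implementation helps make the update rules concrete. The sketch below is a minimal NumPy/SciPy version of Algorithm 8.1, assuming the data arrive as a 1-D array; the function and variable names are illustrative, not from the slides:

```python
import numpy as np
from scipy.stats import norm

def em_two_gaussians(y, n_iter=100, tol=1e-8, seed=0):
    """EM for a two-component Gaussian mixture (sketch of Algorithm 8.1)."""
    rng = np.random.default_rng(seed)
    # Step 1: initial guesses -- two random data points as means, pooled variance.
    mu1, mu2 = rng.choice(y, size=2, replace=False)
    var1 = var2 = np.var(y)
    pi = 0.5
    ll_old = -np.inf
    for _ in range(n_iter):
        # Step 2 (E-step): responsibilities of component 2 for each y_i.
        p1 = (1 - pi) * norm.pdf(y, mu1, np.sqrt(var1))
        p2 = pi * norm.pdf(y, mu2, np.sqrt(var2))
        gamma = p2 / (p1 + p2)
        # Step 3 (M-step): weighted means, variances, and mixing probability.
        w1, w2 = 1 - gamma, gamma
        mu1 = np.sum(w1 * y) / np.sum(w1)
        mu2 = np.sum(w2 * y) / np.sum(w2)
        var1 = np.sum(w1 * (y - mu1) ** 2) / np.sum(w1)
        var2 = np.sum(w2 * (y - mu2) ** 2) / np.sum(w2)
        pi = np.mean(gamma)
        # Step 4: stop once the observed-data log-likelihood stabilizes.
        ll = np.sum(np.log(p1 + p2))
        if abs(ll - ll_old) < tol:
            break
        ll_old = ll
    return mu1, var1, mu2, var2, pi

# Usage: recover the parameters of a simulated two-component mixture.
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(4.0, 1.5, 700)])
print(em_two_gaussians(y))
```

Monitoring the observed-data log-likelihood is a common convergence criterion here, since EM guarantees it never decreases between iterations.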
Bootstrap Methods EM Algorithm

Mixture of M Normals

We introduce new unknown random variables $\mathbf{Y}$ and use them to create a simpler expression for the likelihood. Since $p(\mathbf{X}, \mathbf{Y} \mid \Theta) = p(\mathbf{Y} \mid \mathbf{X}, \Theta)\, p(\mathbf{X} \mid \Theta)$, we can write

$$p(\mathbf{X} \mid \Theta) = \frac{p(\mathbf{X}, \mathbf{Y} \mid \Theta)}{p(\mathbf{Y} \mid \mathbf{X}, \Theta)}. \tag{1}$$

E-Step: $P^{(t)}(y) = P(y \mid \mathbf{x}, \Theta^{(t)})$

M-Step: $\Theta^{(t+1)} = \operatorname*{argmax}_{\Theta}\; \mathrm{E}_{P^{(t)}}\left[\ln P(y, \mathbf{x} \mid \Theta)\right]$
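Since $y$ ranges over a discrete set of component labels, the expectation in the M-Step can be written out as a finite sum. Spelling this out (a standard identity, added here for readability) connects the abstract M-Step to the lower bound on the next slide:

$$\Theta^{(t+1)} = \operatorname*{argmax}_{\Theta} \sum_{y} P(y \mid \mathbf{x}, \Theta^{(t)})\, \ln P(y, \mathbf{x} \mid \Theta).$$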

Bootstrap Methods EM Algorithm

For a mixture of M normals we have the lower bound

$$\lambda(\mathbf{X}, \Theta) \;\ge\; \sum_{i=1}^{N}\sum_{j=1}^{M} p^{(t)}(j \mid x_i, \Theta^{(t)})\, \ln\frac{p_j\, g(x_i; \mu_j, \sigma_j^2)}{p^{(t)}(j \mid x_i, \Theta^{(t)})} \;=\; b_t,$$

where $g(x_i; \mu_j, \sigma_j^2)$ denotes the Gaussian pdf. Our Expectation Step is expressed as

$$p^{(t)}(j \mid x_i, \Theta^{(t)}) = \frac{p_j^{(t)}\, g(x_i; \mu_j^{(t)}, \sigma_j^{2\,(t)})}{\sum_{k=1}^{M} p_k^{(t)}\, g(x_i; \mu_k^{(t)}, \sigma_k^{2\,(t)})}.$$
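The bound $b_t$ comes from Jensen's inequality applied to the concave logarithm, with the E-Step posterior as averaging weights. A sketch of that step for a single observation (standard EM reasoning, spelled out here for completeness):

$$\ln\sum_{j=1}^{M} p_j\, g(x_i; \mu_j, \sigma_j^2) = \ln\sum_{j=1}^{M} p^{(t)}(j \mid x_i, \Theta^{(t)})\, \frac{p_j\, g(x_i; \mu_j, \sigma_j^2)}{p^{(t)}(j \mid x_i, \Theta^{(t)})} \;\ge\; \sum_{j=1}^{M} p^{(t)}(j \mid x_i, \Theta^{(t)})\, \ln\frac{p_j\, g(x_i; \mu_j, \sigma_j^2)}{p^{(t)}(j \mid x_i, \Theta^{(t)})}.$$

Summing over $i = 1, \ldots, N$ gives $b_t$.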
Bootstrap Methods EM Algorithm

Since $b_t$ is a lower bound for the log-likelihood, maximizing $b_t$ improves the log-likelihood as well. Expanding $b_t$,

$$b_t = \sum_{i=1}^{N}\sum_{j=1}^{M} p^{(t)}(j \mid x_i, \Theta^{(t)})\, \ln\left[p_j\, g(x_i; \mu_j, \sigma_j^2)\right] \;-\; \sum_{i=1}^{N}\sum_{j=1}^{M} p^{(t)}(j \mid x_i, \Theta^{(t)})\, \ln p^{(t)}(j \mid x_i, \Theta^{(t)}),$$

and since the second sum does not depend on $\Theta$, the Maximization Step reduces to

$$\hat{\Theta} = \Theta^{(t+1)} = \operatorname*{argmax}_{\Theta} \sum_{i=1}^{N}\sum_{j=1}^{M} p^{(t)}(j \mid x_i, \Theta^{(t)})\, \ln\left[p_j\, g(x_i; \mu_j, \sigma_j^2)\right]. \tag{2}$$
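To see how (2) is maximized in practice, here is a minimal NumPy sketch of the M-component case; the closed-form weighted updates it uses follow from setting the derivatives of (2) to zero (the derivative computation begins on the next slide), and all names are illustrative:

```python
import numpy as np
from scipy.stats import norm

def em_m_gaussians(x, M, n_iter=200, seed=0):
    """EM for a mixture of M univariate normals, maximizing (2) at each step."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=M, replace=False)   # initial means: random data points
    var = np.full(M, np.var(x))                 # initial variances: pooled variance
    p = np.full(M, 1.0 / M)                     # initial mixing weights: uniform
    for _ in range(n_iter):
        # E-step: posterior p^(t)(j | x_i, Theta^(t)) for every i and j.
        dens = p * norm.pdf(x[:, None], mu, np.sqrt(var))   # shape (N, M)
        post = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates that maximize (2) in closed form.
        nj = post.sum(axis=0)                   # effective counts per component
        mu = (post * x[:, None]).sum(axis=0) / nj
        var = (post * (x[:, None] - mu) ** 2).sum(axis=0) / nj
        p = nj / len(x)
    return p, mu, var
```

Setting M = 2 recovers Algorithm 8.1, with the posterior column for component 2 playing the role of the responsibilities $\hat{\gamma}_i$.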

Bootstrap Methods EM Algorithm

To ease writing out formulas we define the function

$$q(j, i) = p_j\, g(x_i; \mu_j, \sigma_j^2).$$

This function has the following partial derivatives with respect to the parameters; with respect to $\mu_j$,

$$\frac{\partial q(j, i)}{\partial \mu_j} = q(j, i)\left(\frac{x_i - \mu_j}{\sigma_j^2}\right).$$
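A quick check of this derivative, assuming $g$ is the usual univariate Gaussian density (a verification sketch, not part of the slides):

$$g(x_i; \mu_j, \sigma_j^2) = \frac{1}{\sqrt{2\pi\sigma_j^2}}\, \exp\!\left(-\frac{(x_i - \mu_j)^2}{2\sigma_j^2}\right) \;\Longrightarrow\; \frac{\partial g}{\partial \mu_j} = g(x_i; \mu_j, \sigma_j^2)\, \frac{x_i - \mu_j}{\sigma_j^2},$$

and multiplying by the constant $p_j$ gives the stated derivative of $q(j, i)$. Setting the corresponding derivative of (2) to zero yields the weighted-mean update used in the Maximization Step.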