Hidden Markov Models (Part 2)
BMI/CS 576, www.biostat.wisc.edu/bmi576.html
Mark Craven, Fall 2011

Three important questions
– How likely is a given sequence?
– What is the most probable "path" for generating a given sequence?
– How can we learn the HMM parameters given a set of sequences?

Learning without hidden information
Learning is simple if we know the correct path for each sequence in our training set: estimate parameters by counting the number of times each parameter is used across the training set.
[figure: a small HMM (begin, states 1–5, end) with a known path; counts of C, A, G, T emissions and of each transition are tallied along the path]

Learning with hidden information
If we don't know the correct path for each sequence in our training set, consider all possible paths for the sequence, and estimate parameters through a procedure that counts the expected number of times each parameter is used across the training set.
[figure: the same HMM with the path unknown, so the counts are marked "?"]
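The count-and-normalize idea for the fully labeled case can be sketched as follows. This is an illustrative sketch, not the course's code: the function name, the dict-based parameter layout, and the pseudocount smoothing scheme are all assumptions.

```python
from collections import Counter

def count_estimate(sequences, paths, states, alphabet, pseudocount=1.0):
    """Estimate HMM transition and emission probabilities from training
    sequences whose state paths are known, by counting how often each
    parameter is used and then normalizing (with pseudocount smoothing).
    Illustrative sketch; names and layout are assumptions."""
    trans = Counter()  # (state_k, state_l) -> count of k -> l transitions
    emit = Counter()   # (state_k, symbol_b) -> count of b emitted in k
    for seq, path in zip(sequences, paths):
        for i, (state, symbol) in enumerate(zip(path, seq)):
            emit[(state, symbol)] += 1
            if i + 1 < len(path):
                trans[(state, path[i + 1])] += 1
    # normalize smoothed counts into probability distributions
    a = {(k, l): (trans[(k, l)] + pseudocount) /
         sum(trans[(k, m)] + pseudocount for m in states)
         for k in states for l in states}
    e = {(k, b): (emit[(k, b)] + pseudocount) /
         sum(emit[(k, c)] + pseudocount for c in alphabet)
         for k in states for b in alphabet}
    return a, e
```

With pseudocounts set to zero this reduces to the maximum-likelihood estimate; the smoothing simply keeps unseen transitions and emissions from getting probability zero, as the slides' "normalize/smooth" step suggests.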
Learning parameters
If we know the state path for each training sequence, learning the model parameters is simple:
– no hidden information during training
– count how often each parameter is used
– normalize/smooth to get probabilities
– the process is just like it was for Markov chain models
If we don't know the path for each training sequence, how can we determine the counts?
– key insight: estimate the counts by considering every path, weighted by its probability

Learning parameters: the Baum-Welch algorithm
– a.k.a. the Forward-Backward algorithm
– an Expectation Maximization (EM) algorithm: EM is a family of algorithms for learning probabilistic models in problems that involve hidden information
– in this context, the hidden information is the path that best explains each training sequence

Learning parameters: the Baum-Welch algorithm
Algorithm sketch:
– initialize the parameters of the model
– iterate until convergence:
  – calculate the expected number of times each transition or emission is used
  – adjust the parameters to maximize the likelihood of these expected values

The expectation step
We want to know the probability of generating sequence x with the i-th symbol being produced by state k (for all x, i, and k).
[figure: a small HMM (begin, states 1–5, end) with a transition probability on each edge and a per-state emission distribution over A, C, G, T]
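The quantity the expectation step needs can be computed literally the way the "key insight" describes: weight every possible path by its probability and sum the ones that put position i in state k. A brute-force sketch of that idea, under stated assumptions (the slides' begin state is replaced by an initial distribution, the end state is dropped, and all function names and the dict-based parameter layout are illustrative):

```python
from itertools import product

def path_prob(seq, path, init, trans, emit):
    """Joint probability P(x, path) of a sequence and one state path.
    init/trans/emit are probability dicts; layout is an assumption."""
    p = init[path[0]] * emit[(path[0], seq[0])]
    for i in range(1, len(seq)):
        p *= trans[(path[i - 1], path[i])] * emit[(path[i], seq[i])]
    return p

def posterior_state(seq, i, k, states, init, trans, emit):
    """P(pi_i = k | x): enumerate every possible path, weight each by
    its probability, and sum those whose i-th state is k.
    Exponential in len(seq), so only usable on toy examples."""
    total = 0.0
    match = 0.0
    for path in product(states, repeat=len(seq)):
        p = path_prob(seq, path, init, trans, emit)
        total += p
        if path[i] == k:
            match += p
    return match / total
```

This enumeration is only a conceptual illustration; the point of the forward-backward machinery introduced next is to get the same posteriors in polynomial time.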
The expectation step
the forward algorithm gives us f_k(i), the probability of generating the first i characters of the sequence and ending in state k
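A minimal sketch of the forward recursion follows. As above, this is an illustrative assumption-laden version, not the course's code: the begin state is folded into an initial distribution, the explicit end state is omitted (so P(x) is the sum over final states), and the dict-based parameters are hypothetical.

```python
def forward(seq, states, init, trans, emit):
    """Forward algorithm: f[i][k] = P(x_1..x_{i+1}, state at that
    position = k). Returns the table f and the total probability P(x).
    Sketch with an initial distribution instead of a begin/end state."""
    # base case: start in k and emit the first symbol there
    f = [{k: init[k] * emit[(k, seq[0])] for k in states}]
    for i in range(1, len(seq)):
        prev = f[-1]
        # recursion: sum over all predecessor states j, then emit seq[i]
        f.append({k: emit[(k, seq[i])] *
                  sum(prev[j] * trans[(j, k)] for j in states)
                  for k in states})
    px = sum(f[-1][k] for k in states)  # no end state: marginalize
    return f, px
```

Each table entry sums over all paths reaching state k at position i, which is exactly how the algorithm avoids the exponential enumeration of paths while still "considering every path weighted by its probability".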
