Lec10-hmm - Hidden Markov Models, CMSC 423. Based on Chapter 11 of Jones & Pevzner, An Introduction to Bioinformatics Algorithms.


Checking a Casino

Fair coin: Pr(Heads) = 0.5
Biased coin: Pr(Heads) = 0.75

Suppose either a fair or a biased coin was used to generate a sequence of heads and tails, but we don’t know which type of coin was actually used:

↑↑↓↓↓↓↑↑↑↑↓↑↓↑↓↑↓↑

How could we guess which coin was more likely?

Compute the Probability of the Observed Sequence

Consider the prefix x = ↑ ↑ ↓ ↓ ↓ ↓ ↑:

Pr(x | Fair) = 0.5 × 0.5 × 0.5 × 0.5 × 0.5 × 0.5 × 0.5 = 0.5^7 = 0.0078125
Pr(x | Biased) = 0.75 × 0.75 × 0.25 × 0.25 × 0.25 × 0.25 × 0.75 ≈ 0.001648

The log-odds score:

log2( Pr(x | Fair) / Pr(x | Biased) ) = log2( 0.0078 / 0.0016 ) ≈ 2.245 > 0

Hence “Fair” is a better guess.
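As a quick sanity check, here is a minimal Python sketch of this computation. The probabilities and the coin-toss prefix are the ones from the slides; the function name and the H/T encoding are just for illustration.

```python
from math import log2

# Heads = 'H', Tails = 'T'; the prefix x from the slide: up up down down down down up
x = "HHTTTTH"

def seq_prob(x, p_heads):
    """Probability of tossing the sequence x with a coin whose Pr(Heads) = p_heads."""
    prob = 1.0
    for toss in x:
        prob *= p_heads if toss == "H" else (1.0 - p_heads)
    return prob

p_fair = seq_prob(x, 0.5)     # 0.5**7 = 0.0078125
p_biased = seq_prob(x, 0.75)  # 0.75**3 * 0.25**4 ~= 0.001648

log_odds = log2(p_fair / p_biased)
print(p_fair, p_biased, log_odds)  # log-odds ~= 2.245 > 0, so "Fair" is the better guess
```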
What if the casino switches coins?

Fair coin: Pr(Heads) = 0.5
Biased coin: Pr(Heads) = 0.75
Probability of switching coins = 0.1

How could we guess which coin was more likely at each position? How can we compute the probability of the entire sequence?

What does this have to do with biology?

Before: How likely is it that this sequence was generated by a fair coin? Which parts were generated by a biased coin?

Now: How likely is it that this is a gene? Which parts are the start, middle and end?

atg gat ggg agc aga tca gat cag atc agg gac gat aga cga tag tga

[Figure: the sequence is partitioned among a Start Generator, a Middle of Gene Generator, and an End Generator.]

Hidden Markov Model (HMM)

[Figure: two states, Fair and Biased. Each state stays the same with probability 0.9 and switches to the other with probability 0.1. Fair emits H and T with probability 0.5 each; Biased emits H with probability 0.75 and T with probability 0.25.]

Formal Definition of an HMM

∑ = alphabet of symbols.
Q = set of states.
A = a |Q| × |Q| matrix where entry (k, l) is the probability of moving from state k to state l.
E = a |Q| × |∑| matrix where entry (k, b) is the probability of emitting b when in state k.

[Figure: A drawn as a |Q| × |Q| grid, e.g. entry (5, 7) is the probability of going from state 5 to state 7; E drawn as a |Q| × |∑| grid over the DNA alphabet, e.g. entry (4, T) is the probability of emitting T when in state 4.]

Constraints on A and E

The entries in each row of A, and in each row of E, must sum to 1.

Computing Probabilities Given a Path

x = ↓ ↑ ↓ ↑ ↑ ↑ ↓ ↑ ↑ ↓
π = F F F B B B B F F F

Pr(xi | πi) = 0.5, 0.5, 0.5, 0.75, 0.75, 0.75, 0.25, 0.5, 0.5, 0.5
Pr(πi → πi+1) = 0.9 whenever the coin stays the same and 0.1 whenever it switches; for this path: 0.9, 0.9, 0.1, 0.9, 0.9, 0.9, 0.1, 0.9, 0.9.

The Decoding Problem

Given x and π, we can compute:
• Pr(x | π): the product of the Pr(xi | πi)
• Pr(π): the product of the Pr(πi → πi+1)
• Pr(x, π): the product of all the Pr(xi | πi) and Pr(πi → πi+1):

Pr(x, π) = Pr(π0 → π1) · ∏ (i = 1 to n) Pr(xi | πi) Pr(πi → πi+1)

But these are “hidden” Markov models because π is unknown.

Decoding Problem: Given a sequence x1x2x3...xn generated by an HMM (∑, Q, A, E), find a path π that maximizes Pr(x, π).

The Viterbi Algorithm to Find the Best Path

A[a, k] := the probability of the best path for x1...xk that ends at state a. It is built from the best path for x1...xk-1 that ends at some state b, times the cost of a transition from b to a, times the cost of emitting xk from state a.

Viterbi DP Recurrence

A[a, k] = max over b ∈ Q of { A[b, k − 1] × Pr(b → a) × Pr(xk | πk = a) }

where A[b, k − 1] is the best path for x1...xk-1 ending in state b, Pr(b → a) is the probability of transitioning from state b to state a, and Pr(xk | πk = a) is the probability of emitting xk given that the kth state is a.

Base case: A[a, 1] = Pr(π1 = a) × Pr(x1 | π1 = a), the probability that the first state is a times the probability of emitting x1 given that the first state is a.

Which cells do we depend on? Cell (a, k) depends on every cell in column k − 1, so we fill in the matrix column by column, left to right. Where’s the answer? It is the maximum value in the last column (the cells for position n).

[Figures: the |Q| × n DP matrix for x = ↓↑↓↑↑↑↓↑↑↓ showing the dependencies, the fill order, and the final column; a graph view of Viterbi with one node per (state, position) pair.]

Running Time

• # of subproblems = O(n|Q|), where n is the length of the sequence.
• Time to solve a subproblem = O(|Q|).
• Total running time: O(n|Q|²).
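To make the recurrence concrete, here is a small Python sketch of Viterbi for the two-coin HMM. It is a direct transcription of the recurrence above using plain probabilities (the log-space variant discussed next avoids underflow on long sequences). The dictionary layout, the uniform initial distribution, and the variable names are assumptions for illustration, not from the slides.

```python
# States, transition matrix A, and emission matrix E for the casino HMM.
states = ["F", "B"]                                   # Fair, Biased
init = {"F": 0.5, "B": 0.5}                           # assumed uniform start (not specified on the slides)
A = {"F": {"F": 0.9, "B": 0.1}, "B": {"F": 0.1, "B": 0.9}}
E = {"F": {"H": 0.5, "T": 0.5}, "B": {"H": 0.75, "T": 0.25}}

def viterbi(x, states, init, A, E):
    """Return a most probable state path for the observed string x."""
    n = len(x)
    # V[k][a] = probability of the best path for x[0..k] ending in state a.
    V = [{a: init[a] * E[a][x[0]] for a in states}]
    back = [{}]                                       # back-pointers for traceback
    for k in range(1, n):
        V.append({})
        back.append({})
        for a in states:
            # Max over all previous states b, exactly as in the recurrence for A[a, k].
            best_b, best_p = max(
                ((b, V[k - 1][b] * A[b][a] * E[a][x[k]]) for b in states),
                key=lambda t: t[1],
            )
            V[k][a] = best_p
            back[k][a] = best_b
    # The answer is the max value in the last column; trace back to recover the path.
    last = max(states, key=lambda a: V[n - 1][a])
    path = [last]
    for k in range(n - 1, 0, -1):
        path.append(back[k][path[-1]])
    return list(reversed(path))

print(viterbi("THTHHHTHHT", states, init, A, E))      # the x = down up down up up up down up up down example
```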
Using Logs

Typically, we take the log of the probabilities to avoid multiplying a lot of terms:

log(A[a, k]) = max over b ∈ Q of { log(A[b, k − 1] × Pr(b → a) × Pr(xk | πk = a)) }
            = max over b ∈ Q of { log(A[b, k − 1]) + log(Pr(b → a)) + log(Pr(xk | πk = a)) }

Remember: log(ab) = log(a) + log(b).

Why do we want to avoid multiplying lots of terms? Multiplying leads to very small numbers: 0.1 × 0.1 × 0.1 × 0.1 × 0.1 = 0.00001. This can lead to underflow. Taking logs and adding keeps the numbers bigger.

Some Probabilities We Are Interested In

What is the probability of observing a string x under the assumed HMM?
Pr(x) = ∑ over all paths π of Pr(x, π)

What is the probability of observing x using a path where the ith state is a?
Pr(x, πi = a) = ∑ over paths π with πi = a of Pr(x, π)

What is the probability that the ith state is a?
Pr(πi = a | x) = Pr(x, πi = a) / Pr(x)

The Forward Algorithm

How do we compute Pr(x, πi = a)? Note that

Pr(x, πi = a) = Pr(x1, ..., xi, πi = a) · Pr(xi+1, ..., xn | πi = a)

Recall the recurrence that computes the best path for x1...xk ending at state a:

A[a, k] = max over b ∈ Q of { A[b, k − 1] × Pr(b → a) × Pr(xk | πk = a) }

Replacing the max with a sum gives the probability of emitting x1,...,xk using some path that ends in a:

F[a, k] = ∑ over b ∈ Q of F[b, k − 1] × Pr(b → a) × Pr(xk | πk = a)

The forward algorithm computes the total probability of all the paths of length k ending in state a. We still need to compute the probability of the paths leaving a and going to the end of the string.

[Figure: the DP grid with the paths through state a at position 4 highlighted, contributing to F[a, 4].]

The Backward Algorithm

The same idea as the forward algorithm, but we start from the end of the input string and work toward the beginning. B[a, k] = the probability of generating the string xk+1,...,xn given that the kth state is a:

B[a, k] = ∑ over b ∈ Q of B[b, k + 1] × Pr(a → b) × Pr(xk+1 | πk+1 = b)

where B[b, k + 1] accounts for the rest of the string after xk+1, Pr(a → b) is the probability of going from state a to state b, and Pr(xk+1 | πk+1 = b) is the probability of emitting xk+1 given that the next state is b.

The Forward-Backward Algorithm

Pr(πi = a | x) = Pr(x, πi = a) / Pr(x) = F[a, i] · B[a, i] / Pr(x)

Estimating HMM Parameters

Suppose we have training examples (x(1), π(1)), (x(2), π(2)), ... where both the outputs and the paths are known. Let Aab be the # of times the transition a → b is observed, and let Exa be the # of times symbol x was observed to be emitted from state a. Then:

Pr(a → b) = Aab / ∑ over q ∈ Q of Aaq
Pr(x | a) = Exa / ∑ over symbols σ ∈ ∑ of Eσa

Pseudocounts

What if a transition or emission is never observed in the training data? It gets probability 0, meaning that if we observe an example with that transition or emission in the real world, we will give it probability 0. But it’s unlikely that our training set will be large enough to observe every possible transition. Hence we take

Aab = (# of times a → b was observed) + 1

and similarly for Exa. The added 1 is called a “pseudocount”.
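Here is a minimal Python sketch of the forward, backward, and forward-backward (posterior) computations for the same two-coin HMM, following the recurrences above. The uniform initial distribution and the dictionary layout are assumptions for illustration, and no scaling or log-space tricks are applied, so this is only suitable for short sequences.

```python
states = ["F", "B"]
init = {"F": 0.5, "B": 0.5}                      # assumed uniform initial distribution
A = {"F": {"F": 0.9, "B": 0.1}, "B": {"F": 0.1, "B": 0.9}}
E = {"F": {"H": 0.5, "T": 0.5}, "B": {"H": 0.75, "T": 0.25}}

def forward(x):
    """F[k][a] = total probability of emitting x[0..k] over all paths ending in state a."""
    n = len(x)
    F = [{a: init[a] * E[a][x[0]] for a in states}]
    for k in range(1, n):
        F.append({a: sum(F[k - 1][b] * A[b][a] for b in states) * E[a][x[k]]
                  for a in states})
    return F

def backward(x):
    """B[k][a] = probability of generating x[k+1..n-1] given that the kth state is a."""
    n = len(x)
    B = [dict() for _ in range(n)]
    B[n - 1] = {a: 1.0 for a in states}          # nothing left to emit after the last position
    for k in range(n - 2, -1, -1):
        B[k] = {a: sum(A[a][b] * E[b][x[k + 1]] * B[k + 1][b] for b in states)
                for a in states}
    return B

def posteriors(x):
    """Pr(pi_i = a | x) = F[i][a] * B[i][a] / Pr(x)."""
    F, B = forward(x), backward(x)
    px = sum(F[len(x) - 1][a] for a in states)   # Pr(x) = sum of the last forward column
    return [{a: F[i][a] * B[i][a] / px for a in states} for i in range(len(x))]

for i, post in enumerate(posteriors("THTHHHTHHT")):
    print(i, {a: round(p, 3) for a, p in post.items()})
```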
Viterbi Training

• Problem: typically, in the real world we only have examples of the output x, and we don’t know the paths π.

Viterbi Training Algorithm:
1. Choose a random set of parameters.
2. Repeat:
   1. Find the best path for each training example (using the current parameters).
   2. Use those paths to estimate new parameters.

This is a local search algorithm. It’s also an example of a “Gibbs sampling”-style algorithm. The Baum-Welch algorithm is similar, but doesn’t commit to a single best path for each example. (A rough sketch of this loop appears after the recap below.)

Recap

• Hidden Markov Models (HMMs) model the generation of strings.
• They are governed by a string alphabet (∑), a set of states (Q), a set of transition probabilities (A), and a set of emission probabilities for each state (E).
• Given a string and an HMM, we can compute:
  • The most probable path the HMM took to generate the string (Viterbi algorithm).
  • The probability that the HMM was in a particular state at a given step (forward-backward algorithm).
• The algorithms are based on dynamic programming.
• Finding good parameters is a much harder problem. The Baum-Welch algorithm is an oft-used heuristic algorithm.
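For completeness, the sketch below shows the Viterbi training loop described above, reusing the viterbi() function from the earlier sketch. Everything here (the helper names, the fixed iteration count, the fixed uniform initial distribution, the pseudocount of 1) is an illustrative assumption; Baum-Welch would replace the hard path assignment with expected counts from forward-backward.

```python
import random

def estimate_from_paths(examples, paths, states, alphabet):
    """Re-estimate A and E from (x, pi) pairs using counts plus a pseudocount of 1."""
    At = {a: {b: 1.0 for b in states} for a in states}        # transition pseudocounts
    Et = {a: {s: 1.0 for s in alphabet} for a in states}      # emission pseudocounts
    for x, pi in zip(examples, paths):
        for i, (symbol, state) in enumerate(zip(x, pi)):
            Et[state][symbol] += 1
            if i + 1 < len(pi):
                At[state][pi[i + 1]] += 1
    # Normalize each row so it sums to 1.
    A = {a: {b: At[a][b] / sum(At[a].values()) for b in states} for a in states}
    E = {a: {s: Et[a][s] / sum(Et[a].values()) for s in alphabet} for a in states}
    return A, E

def viterbi_training(examples, states, alphabet, iterations=10):
    # 1. Choose a random set of parameters (each row normalized to sum to 1).
    def random_row(keys):
        w = [random.random() for _ in keys]
        return {k: v / sum(w) for k, v in zip(keys, w)}
    A = {a: random_row(states) for a in states}
    E = {a: random_row(alphabet) for a in states}
    init = {a: 1.0 / len(states) for a in states}             # kept fixed for simplicity
    # 2. Repeat: find the best path for each example, then re-estimate the parameters.
    for _ in range(iterations):
        paths = [viterbi(x, states, init, A, E) for x in examples]
        A, E = estimate_from_paths(examples, paths, states, alphabet)
    return A, E
```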