Ch11_HMM


An Introduction to Bioinformatics Algorithms (www.bioalgorithms.info)

Hidden Markov Models

Outline
- CG-islands
- The "Fair Bet Casino"
- Hidden Markov Models
- Decoding Algorithm
- Forward-Backward Algorithm
- Profile HMMs
- HMM Parameter Estimation
  - Viterbi training
  - Baum-Welch algorithm

Outline - CHANGE (notes on planned revisions to these slides)
- The "Fair Bet Casino": improve the graphics in "HMM for Fair Bet Casino (cont'd)".
- Decoding Algorithm: show the two-row graph for the casino problem.
- Forward-Backward Algorithm: show the similarity between the dynamic programming equations of the Viterbi and forward-backward algorithms.
- HMM Parameter Estimation: explain the idea of Baum-Welch.
- Profile HMM Alignment: show the "Profile HMM" slide more slowly (show the M states first and add the I and D slides later); show an alignment in terms of M, I, D states; it is not clear how the p(xi) term appears after the "/" in "Profile HMM Alignment: Dynamic Programming"; make a dynamic picture for "Paths in Edit Graph".

CG-Islands
Given 4 nucleotides, the probability of occurrence of each is ~1/4, so the probability of occurrence of a particular dinucleotide is ~1/16. However, the frequencies of dinucleotides in DNA sequences vary widely. In particular, CG is typically underrepresented (the frequency of CG is typically < 1/16).

Why CG-Islands?
CG is the least frequent dinucleotide because the C in CG is easily methylated and then has a tendency to mutate into T. However, methylation is suppressed around genes in a genome, so CG appears at relatively high frequency within these CG-islands. Finding the CG-islands in a genome is therefore an important problem.

CG-Islands and the "Fair Bet Casino"
The CG-islands problem can be modeled after a problem named "The Fair Bet Casino". The game is to flip coins, which results in only two possible outcomes: Head or Tail. The Fair coin gives Heads and Tails with the same probability 1/2. The Biased coin gives Heads with probability 3/4.

The "Fair Bet Casino" (cont'd)
Thus, we define the probabilities:
- P(H|F) = P(T|F) = 1/2
- P(H|B) = 3/4, P(T|B) = 1/4
- The crooked dealer changes between the Fair and Biased coins with probability 10%.

The Fair Bet Casino Problem
Input: A sequence x = x1x2x3...xn of coin tosses made by two possible coins (F or B).
Output: A sequence π = π1π2π3...πn, with each πi being either F or B, indicating that xi is the result of tossing the Fair or Biased coin respectively.

Problem: the Fair Bet Casino Problem is ill-defined
Any observed outcome of coin tosses could have been generated by any sequence of states! We need a way to grade different state sequences differently. This leads to the Decoding Problem.

P(x|fair coin) vs. P(x|biased coin)
Suppose first that the dealer never changes coins. Some definitions:
- P(x|fair coin): probability of the dealer using the F coin and generating the outcome x.
- P(x|biased coin): probability of the dealer using the B coin and generating the outcome x.

P(x|fair coin) vs. P(x|biased coin)
P(x|fair coin) = P(x1...xn|fair coin) = Π_{i=1..n} p(xi|fair coin) = (1/2)^n
P(x|biased coin) = P(x1...xn|biased coin) = Π_{i=1..n} p(xi|biased coin) = (3/4)^k (1/4)^(n−k) = 3^k / 4^n
where k is the number of Heads in x.
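A minimal Python sketch (not part of the original slides) that evaluates these two quantities for a concrete toss sequence; the example sequence x and the helper name prob_single_coin are illustrative assumptions:

```python
def prob_single_coin(x, p_heads):
    """Probability of the toss sequence x if one coin with P(Heads) = p_heads is used throughout."""
    prob = 1.0
    for toss in x:
        prob *= p_heads if toss == 1 else (1.0 - p_heads)
    return prob

x = [0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1]        # example outcome, 1 = Heads, 0 = Tails
p_fair = prob_single_coin(x, 0.5)             # equals (1/2)^n
p_biased = prob_single_coin(x, 0.75)          # equals (3/4)^k * (1/4)^(n-k)
print(p_fair, p_biased)
```

Whichever of the two values is larger is the single-coin model that better explains x.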
P(x|fair coin) vs. P(x|biased coin)
When is P(x|fair coin) = P(x|biased coin)?
(1/2)^n = 3^k / 4^n
2^n = 3^k
n = k log2(3)
so the two probabilities are equal when k = n / log2(3), i.e. k ≈ 0.63n.

Log-odds Ratio
We define the log-odds ratio as follows:
log2( P(x|fair coin) / P(x|biased coin) ) = Σ_{i=1..n} log2( p+(xi) / p−(xi) ) = n − k·log2(3)

Computing the Log-odds Ratio in Sliding Windows
x1x2x3x4x5x6x7x8...xn
Consider a sliding window over the outcome sequence and compute the log-odds ratio for this short window.
(Figure: the log-odds value plotted along the sequence; windows with log-odds above 0 indicate that the fair coin was most likely used, windows below 0 indicate the biased coin.)
Disadvantages:
- the length of a CG-island is not known in advance
- different windows may classify the same position differently

Hidden Markov Model (HMM)
An HMM can be viewed as an abstract machine with k hidden states that emits symbols from an alphabet Σ. Each state has its own probability distribution, and the machine switches between states and emits symbols according to these distributions. While in a certain state, the machine makes two decisions:
- What state should I move to next?
- What symbol - from the alphabet Σ - should I emit?

Why "Hidden"?
Observers can see the emitted symbols of an HMM but have no ability to know which state the HMM is currently in. Thus, the goal is to infer the most likely sequence of hidden states of an HMM from the given sequence of emitted symbols.

HMM Parameters
Σ: set of emission characters.
Ex.: Σ = {H, T} for coin tossing; Σ = {1, 2, 3, 4, 5, 6} for dice tossing.
Q: set of hidden states, each emitting symbols from Σ.
Ex.: Q = {F, B} for coin tossing.

HMM Parameters (cont'd)
A = (a_kl): a |Q| x |Q| matrix of probabilities of changing from state k to state l.
a_FF = 0.9, a_FB = 0.1, a_BF = 0.1, a_BB = 0.9
E = (e_k(b)): a |Q| x |Σ| matrix of probabilities of emitting symbol b while in state k.
e_F(0) = e_F(1) = 1/2; e_B(0) = 1/4, e_B(1) = 3/4

HMM for the Fair Bet Casino
The Fair Bet Casino in HMM terms:
Σ = {0, 1} (0 for Tails and 1 for Heads)
Q = {F, B} (F for the Fair coin, B for the Biased coin)
Transition probabilities A:
  Fair -> Fair: a_FF = 0.9      Fair -> Biased: a_FB = 0.1
  Biased -> Fair: a_BF = 0.1    Biased -> Biased: a_BB = 0.9
Emission probabilities E:
  Fair:   e_F(0) = 1/2, e_F(1) = 1/2
  Biased: e_B(0) = 1/4, e_B(1) = 3/4

HMM for the Fair Bet Casino (cont'd)
(Figure: the HMM for the Fair Bet Casino problem, with the transition and emission probabilities above.)
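To make the "abstract machine" view concrete, here is a minimal Python sketch (not from the slides) that writes the casino HMM down as data and simulates it; the dictionary layout, the uniform choice of "F" as the starting state, and the two-state switching shortcut are illustrative assumptions:

```python
import random

# Fair Bet Casino HMM: states Q = {F, B}, alphabet {0, 1} (0 = Tails, 1 = Heads).
states = ("F", "B")
A = {("F", "F"): 0.9, ("F", "B"): 0.1, ("B", "F"): 0.1, ("B", "B"): 0.9}   # a_kl
E = {("F", 0): 0.5, ("F", 1): 0.5, ("B", 0): 0.25, ("B", 1): 0.75}          # e_k(b)

def simulate(n, start="F"):
    """Generate n tosses; returns (hidden path, emitted symbols)."""
    path, emissions, state = [], [], start
    for _ in range(n):
        path.append(state)
        # Decision 1: what symbol should I emit? (according to e_state)
        emissions.append(0 if random.random() < E[(state, 0)] else 1)
        # Decision 2: what state should I move to next? (two states only, so
        # "stay" with probability a_state,state, otherwise switch)
        if random.random() >= A[(state, state)]:
            state = "B" if state == "F" else "F"
    return "".join(path), emissions

print(simulate(20))
```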
Hidden Paths
A path π = π1...πn in the HMM is defined as a sequence of states. Consider the path π = FFFBBBBBFFF and the sequence x = 01011101001.

x:           0    1    0    1    1    1    0    1    0    0    1
π:           F    F    F    B    B    B    B    B    F    F    F
P(xi|πi):    1/2  1/2  1/2  3/4  3/4  3/4  1/4  3/4  1/2  1/2  1/2
P(πi−1→πi):     9/10 9/10 1/10 9/10 9/10 9/10 9/10 1/10 9/10 9/10

P(xi|πi) is the probability that xi was emitted from state πi; P(πi−1→πi) is the transition probability from state πi−1 to state πi.

P(x|π) Calculation
P(x|π): probability that the sequence x was generated by the path π:
P(x|π) = P(π0→π1) · Π_{i=1..n} P(xi|πi) · P(πi→πi+1)
       = a_{π0,π1} · Π_{i=1..n} e_{πi}(xi) · a_{πi,πi+1}

P(x|π) Calculation (cont'd)
P(x|π) = a_{π0,π1} · Π_{i=1..n} e_{πi}(xi) · a_{πi,πi+1}
       = Π_{i=0..n} e_{πi+1}(xi+1) · a_{πi,πi+1}
if we count from i = 0 instead of i = 1.

Decoding Problem
Goal: Find an optimal hidden path of states given the observations.
Input: A sequence of observations x = x1...xn generated by an HMM M(Σ, Q, A, E).
Output: A path π that maximizes P(x|π) over all possible paths π.

Building a Manhattan for the Decoding Problem
Andrew Viterbi used the Manhattan grid model to solve the Decoding Problem. Every choice of π = π1...πn corresponds to a path in the graph. The only valid direction in the graph is eastward. This graph has |Q|^2 (n−1) edges.

Edit Graph for the Decoding Problem
(Figure: the |Q| x n grid of states and sequence positions.)

Decoding Problem vs. Alignment Problem
(Figure: valid directions in the alignment problem vs. valid directions in the decoding problem.)

Decoding Problem as Finding a Longest Path in a DAG
The Decoding Problem is reduced to finding a longest path in the directed acyclic graph (DAG) above.
Note: the length of a path here is defined as the product of its edges' weights, not the sum.

Decoding Problem (cont'd)
Every path π in the graph has probability P(x|π). The Viterbi algorithm finds the path that maximizes P(x|π) among all possible paths. The Viterbi algorithm runs in O(n|Q|^2) time.

Decoding Problem: weights of edges
Consider the edge from (k, i) to (l, i+1). What should its weight w be? Since
P(x|π) = Π_{i=0..n} e_{πi+1}(xi+1) · a_{πi,πi+1},
the i-th term of the product is e_{πi+1}(xi+1) · a_{πi,πi+1} = e_l(xi+1) · a_kl for πi = k, πi+1 = l.
So the weight of the edge from (k, i) to (l, i+1) is w = e_l(xi+1) · a_kl.

Decoding Problem and Dynamic Programming
s_{l,i+1} = max_{k∈Q} { s_{k,i} · (weight of the edge between (k,i) and (l,i+1)) }
          = max_{k∈Q} { s_{k,i} · a_kl · e_l(xi+1) }
          = e_l(xi+1) · max_{k∈Q} { s_{k,i} · a_kl }

Decoding Problem (cont'd)
Initialization: s_{begin,0} = 1 and s_{k,0} = 0 for k ≠ begin.
Let π* be the optimal path. Then P(x|π*) = max_{k∈Q} { s_{k,n} · a_{k,end} }.

Viterbi Algorithm
The value of the product can become extremely small, which leads to underflow. To avoid underflow, use log values instead:
s_{l,i+1} = log e_l(xi+1) + max_{k∈Q} { s_{k,i} + log(a_kl) }
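A minimal Python sketch of this log-space Viterbi recurrence for the Fair Bet Casino HMM (not code from the slides; uniform start probabilities and the omission of an explicit end state are simplifying assumptions):

```python
import math

states = ("F", "B")
A = {("F", "F"): 0.9, ("F", "B"): 0.1, ("B", "F"): 0.1, ("B", "B"): 0.9}
E = {("F", 0): 0.5, ("F", 1): 0.5, ("B", 0): 0.25, ("B", 1): 0.75}

def viterbi(x, states, A, E, start=None):
    """Return (log P(x|pi*), pi*) using the log-space recurrence above."""
    start = start or {k: 1.0 / len(states) for k in states}   # assumed uniform start
    # Initialization: s_{k,1} = log(start_k) + log e_k(x_1)
    s = [{k: math.log(start[k]) + math.log(E[(k, x[0])]) for k in states}]
    back = [{}]
    # Recurrence: s_{l,i+1} = log e_l(x_{i+1}) + max_k { s_{k,i} + log a_kl }
    for i in range(1, len(x)):
        s.append({})
        back.append({})
        for l in states:
            best_k = max(states, key=lambda k: s[i - 1][k] + math.log(A[(k, l)]))
            s[i][l] = math.log(E[(l, x[i])]) + s[i - 1][best_k] + math.log(A[(best_k, l)])
            back[i][l] = best_k
    # Termination (no explicit end state here): best final state, then trace back.
    last = max(states, key=lambda k: s[-1][k])
    path = [last]
    for i in range(len(x) - 1, 0, -1):
        path.append(back[i][path[-1]])
    return s[-1][last], "".join(reversed(path))

print(viterbi([0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1], states, A, E))
```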
Forward-Backward Problem
Given: a sequence of coin tosses generated by an HMM.
Goal: find the probability that the dealer was using the biased coin at a particular time.

Forward Algorithm
Define f_{k,i} (the forward probability) as the probability of emitting the prefix x1...xi and reaching the state πi = k.
The recurrence for the forward algorithm:
f_{k,i} = e_k(xi) · Σ_{l∈Q} f_{l,i−1} · a_lk

Backward Algorithm
However, the forward probability is not the only factor affecting P(πi = k|x). The sequence of transitions and emissions that the HMM undergoes between positions i+1 and n also affects P(πi = k|x).
(Figure: the forward contribution covers the sequence up to xi, the backward contribution covers the sequence after xi.)

Backward Algorithm (cont'd)
Define the backward probability b_{k,i} as the probability of being in state πi = k and emitting the suffix xi+1...xn.
The recurrence for the backward algorithm:
b_{k,i} = Σ_{l∈Q} e_l(xi+1) · b_{l,i+1} · a_kl

Backward-Forward Algorithm
The probability that the dealer used the biased coin at moment i:
P(πi = k|x) = P(x, πi = k) / P(x) = f_k(i) · b_k(i) / P(x)
where P(x) is the sum of P(x, πi = k) over all k.
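A minimal Python sketch (not from the slides) of this forward-backward computation for the Fair Bet Casino HMM, returning P(πi = B | x) at every position; uniform start probabilities and the omission of the a_{k,end} transition are simplifying assumptions:

```python
states = ("F", "B")
A = {("F", "F"): 0.9, ("F", "B"): 0.1, ("B", "F"): 0.1, ("B", "B"): 0.9}
E = {("F", 0): 0.5, ("F", 1): 0.5, ("B", 0): 0.25, ("B", 1): 0.75}

def posterior_biased(x, start=None):
    start = start or {k: 1.0 / len(states) for k in states}
    n = len(x)
    # Forward: f[i][k] = P(x_1..x_i, pi_i = k)
    f = [{k: start[k] * E[(k, x[0])] for k in states}]
    for i in range(1, n):
        f.append({l: E[(l, x[i])] * sum(f[i - 1][k] * A[(k, l)] for k in states)
                  for l in states})
    # Backward: b[i][k] = P(x_{i+1}..x_n | pi_i = k), with b[n-1][k] = 1
    b = [dict.fromkeys(states, 1.0) for _ in range(n)]
    for i in range(n - 2, -1, -1):
        b[i] = {k: sum(E[(l, x[i + 1])] * b[i + 1][l] * A[(k, l)] for l in states)
                for k in states}
    px = sum(f[n - 1][k] for k in states)        # P(x)
    return [f[i]["B"] * b[i]["B"] / px for i in range(n)]

print(posterior_biased([0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1]))
```

Note that the forward recurrence is structurally the same dynamic programming equation as Viterbi's, with max replaced by a sum.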
Finding Distant Members of a Protein Family
A distant cousin of functionally related sequences in a protein family may have weak pairwise similarities with each member of the family and thus fail a significance test. However, it may have weak similarities with many members of the family. The goal is to align a sequence to all members of the family at once. A family of related proteins can be represented by their multiple alignment and the corresponding profile.

Profile Representation of Protein Families
Aligned DNA sequences can be represented by a 4 x n profile matrix reflecting the frequencies of nucleotides in every aligned position. A protein family can be represented by a 20 x n profile representing the frequencies of amino acids.

Profiles and HMMs
HMMs can also be used for aligning a sequence against a profile representing a protein family. A 20 x n profile P corresponds to n sequentially linked match states M1,...,Mn in the profile HMM of P.

Multiple Alignments and Protein Family Classification
A multiple alignment of a protein family shows variations in conservation along the length of a protein. Example: after aligning many globin proteins, biologists recognized that the helix regions in globins are more conserved than other regions.

What are Profile HMMs?
A profile HMM is a probabilistic representation of a multiple alignment. A given multiple alignment (of a protein family) is used to build a profile HMM. This model may then be used to find and score less obvious potential matches of new protein sequences.

Profile HMM
(Figure: a profile HMM, with its chain of match states and the attached insertion and deletion states.)

Building a Profile HMM
- The multiple alignment is used to construct the HMM model.
- Assign each column to a Match state in the HMM; add Insertion and Deletion states.
- Estimate the emission probabilities according to the amino acid counts in each column. Different positions in the protein will have different emission probabilities.
- Estimate the transition probabilities between Match, Deletion and Insertion states.
- The HMM model then gets trained to derive the optimal parameters.

States of a Profile HMM
Match states M1...Mn (plus begin/end states)
Insertion states I0, I1, ..., In
Deletion states D1...Dn

Transition Probabilities in a Profile HMM
log(a_MI) + log(a_IM) = gap initiation penalty
log(a_II) = gap extension penalty

Emission Probabilities in a Profile HMM
Probability of emitting a symbol a at an insertion state Ij:
e_Ij(a) = p(a)
where p(a) is the frequency of the occurrence of the symbol a in all the sequences.
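As a small illustration of the emission-estimation step for the match states, here is a minimal Python sketch (not from the slides) that counts residues in each alignment column; the toy alignment is illustrative, every column is treated as a match state, and no pseudocounts are added, all of which are simplifications:

```python
from collections import Counter

# Toy multiple alignment (rows = sequences, columns = alignment positions); '-' marks a gap.
alignment = [
    "VGA-HAGEY",
    "VNA--NVDE",
    "VEA-DVAGH",
    "IAGADNGAG",
]

def match_emissions(alignment):
    """e_Mj(a): frequency of residue a among the non-gap characters of column j."""
    n_cols = len(alignment[0])
    emissions = []
    for j in range(n_cols):
        column = [row[j] for row in alignment if row[j] != "-"]
        counts = Counter(column)
        total = sum(counts.values())
        emissions.append({a: c / total for a, c in counts.items()})
    return emissions

for j, e in enumerate(match_emissions(alignment), start=1):
    print(f"M{j}:", e)
```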
Profile HMM Alignment
Define v^M_j(i) as the logarithmic likelihood score of the best path for matching x1..xi to the profile HMM, ending with xi emitted by the state Mj. v^I_j(i) and v^D_j(i) are defined similarly.

Profile HMM Alignment: Dynamic Programming
v^M_j(i) = log( e_Mj(xi) / p(xi) ) + max of:
  v^M_{j−1}(i−1) + log(a_{Mj−1,Mj})
  v^I_{j−1}(i−1) + log(a_{Ij−1,Mj})
  v^D_{j−1}(i−1) + log(a_{Dj−1,Mj})

v^I_j(i) = log( e_Ij(xi) / p(xi) ) + max of:
  v^M_j(i−1) + log(a_{Mj,Ij})
  v^I_j(i−1) + log(a_{Ij,Ij})
  v^D_j(i−1) + log(a_{Dj,Ij})

Paths in Edit Graph and Profile HMM
(Figure: a path through an edit graph and the corresponding path through a profile HMM.)

Making a Collection of HMMs for Protein Families
- Use BLAST to separate a protein database into families of related proteins.
- Construct a multiple alignment for each protein family.
- Construct a profile HMM and optimize the parameters of the model (transition and emission probabilities).
- Align the target sequence against each HMM to find the best fit between the target sequence and an HMM.

Application of Profile HMMs to Modeling Globin Proteins
Globins represent a large collection of protein sequences. 400 globin sequences were randomly selected from all globins and used to construct a multiple alignment. The multiple alignment was used to assign an initial HMM. This model was then trained repeatedly, with model lengths chosen randomly between 145 and 170, to obtain an HMM with optimized probabilities.

How Good is the Globin HMM?
The 625 remaining globin sequences in the database were aligned to the constructed HMM, resulting in a multiple alignment. This multiple alignment agrees extremely well with the structurally derived alignment. 25,044 proteins were randomly chosen from the database and compared against the globin HMM. This experiment resulted in an excellent separation between globin and non-globin families.

PFAM
Pfam describes protein domains. Each protein domain family in Pfam has:
- Seed alignment: a manually verified multiple alignment of a representative set of sequences.
- HMM: built from the seed alignment, used for further database searches.
- Full alignment: generated automatically from the HMM.
The distinction between seed and full alignments facilitates Pfam updates:
- Seed alignments are stable resources.
- HMM profiles and full alignments can be updated with newly found amino acid sequences.

PFAM Uses
Pfam HMMs span entire domains that include both well-conserved motifs and less-conserved regions with insertions and deletions. Modeling complete domains facilitates better sequence annotation and leads to more sensitive detection.

HMM Parameter Estimation
So far, we have assumed that the transition and emission probabilities are known. However, in most HMM applications the probabilities are not known, and estimating them is hard.

HMM Parameter Estimation Problem
Given: an HMM with states and alphabet (emission characters), and independent training sequences x1, ..., xm.
Find: HMM parameters Θ (that is, a_kl and e_k(b)) that maximize P(x1, ..., xm | Θ), the joint probability of the training sequences.

Maximize the Likelihood
P(x1, ..., xm | Θ), as a function of Θ, is called the likelihood of the model. The training sequences are assumed independent, therefore
P(x1, ..., xm | Θ) = Π_i P(xi | Θ)
The parameter estimation problem seeks the Θ that realizes max_Θ Π_i P(xi | Θ). In practice the log likelihood is computed to avoid underflow errors.

Two Situations
Known paths for training sequences:
- CG-islands are marked on the training sequences
- One evening the casino dealer allows us to see when he changes dice
Unknown paths:
- CG-islands are not marked
- We do not see when the casino dealer changes dice

Known Paths
A_kl = number of times each transition k→l is taken in the training sequences.
E_k(b) = number of times b is emitted from state k in the training sequences.
Compute a_kl and e_k(b) as maximum likelihood estimators:
a_kl = A_kl / Σ_{l'} A_{kl'}
e_k(b) = E_k(b) / Σ_{b'} E_k(b')

Pseudocounts
Some state k may not appear in any of the training sequences. This means A_kl = 0 for every state l, and a_kl cannot be computed with the given equation. To avoid this overfitting, use predetermined pseudocounts r_kl and r_k(b):
A_kl = (number of transitions k→l) + r_kl
E_k(b) = (number of emissions of b from k) + r_k(b)
The pseudocounts reflect our prior biases about the probability values.
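A minimal Python sketch (not from the slides) of the known-path case: count transitions and emissions along the given paths, add a flat pseudocount, and normalize; the training example and the pseudocount value are illustrative assumptions:

```python
from collections import defaultdict

def estimate_parameters(training, states, alphabet, pseudocount=1.0):
    """training: list of (x, pi) pairs with known paths.
    Returns maximum-likelihood a_kl and e_k(b) with flat pseudocounts."""
    A = defaultdict(lambda: pseudocount)   # A_kl, initialized to r_kl
    E = defaultdict(lambda: pseudocount)   # E_k(b), initialized to r_k(b)
    for x, pi in training:
        for i in range(len(x)):
            E[(pi[i], x[i])] += 1
            if i + 1 < len(x):
                A[(pi[i], pi[i + 1])] += 1
    a = {(k, l): A[(k, l)] / sum(A[(k, l2)] for l2 in states)
         for k in states for l in states}
    e = {(k, b): E[(k, b)] / sum(E[(k, b2)] for b2 in alphabet)
         for k in states for b in alphabet}
    return a, e

# Example with the casino HMM: one observed sequence with its known coin path.
training = [([0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1], "FFFBBBBBFFF")]
print(estimate_parameters(training, states=("F", "B"), alphabet=(0, 1)))
```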
Unknown Paths: Viterbi Training
Idea: use Viterbi decoding to compute the most probable path for each training sequence x. Start with some guess for the initial parameters and compute π*, the most probable path for x, under those parameters. Then iterate until there is no change in π* (a code sketch of this loop follows below):
1. Determine A_kl and E_k(b) from π* as before.
2. Compute new parameters a_kl and e_k(b) using the same formulas as before.
3. Compute a new π* for x and the current parameters.

Viterbi Training: Analysis
- The algorithm converges precisely: there are finitely many possible paths, and the new parameters are uniquely determined by the current π*. There may be several paths for x with the same probability, hence one must compare the new π* with all previous paths having the highest probability.
- Viterbi training does not maximize the likelihood Π_x P(x | Θ), but rather the contribution to the likelihood of the most probable paths, Π_x P(x | Θ, π*).
- In general it performs less well than Baum-Welch.
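A minimal sketch of the Viterbi-training loop (Python, not from the slides); it reuses the viterbi() and estimate_parameters() helpers from the earlier sketches, so it is only runnable together with them, and the iteration cap is an illustrative choice:

```python
def viterbi_training(xs, states, alphabet, a, e, max_iter=100):
    """xs: training sequences; (a, e): initial parameter guess."""
    paths = None
    for _ in range(max_iter):
        # Decode each training sequence with the current parameters.
        new_paths = [viterbi(x, states, a, e)[1] for x in xs]
        if new_paths == paths:          # no change in the most probable paths: stop
            break
        paths = new_paths
        # Re-estimate parameters from the decoded (x, pi*) pairs as in the known-path case.
        a, e = estimate_parameters(list(zip(xs, paths)), states, alphabet)
    return a, e, paths
```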
Unknown Paths: Baum-Welch
Idea:
1. Guess initial values for the parameters (art and experience, not science).
2. Estimate new (better) values for the parameters. (How?)
3. Repeat until a stopping criterion is met. (What criterion?)

Better Values for the Parameters
We would need the A_kl and E_k(b) values, but we cannot count them (the path is unknown), and we do not want to use only a most probable path. Instead, for all states k, l, symbols b and training sequences x, compute A_kl and E_k(b) as expected values, given the current parameters.

Notation
For any sequence of characters x emitted along some unknown path π, denote by πi = k the assumption that the state at position i (in which xi is emitted) is k.

Probabilistic Setting for A_kl
Given x1, ..., xm, consider a discrete probability space with elementary events
ε_{k,l} = "k→l is taken in x1, ..., xm".
For each x in {x1,...,xm} and each position i in x, let Y_{x,i} be the random variable defined by
Y_{x,i}(ε_{k,l}) = 1 if πi = k and πi+1 = l, and 0 otherwise.
Define Y = Σ_x Σ_i Y_{x,i}, the random variable that counts the number of times the event ε_{k,l} happens in x1,...,xm.

The Meaning of A_kl
Let A_kl be the expectation of Y:
E(Y) = Σ_x Σ_i E(Y_{x,i}) = Σ_x Σ_i P(Y_{x,i} = 1) = Σ_x Σ_i P(πi = k, πi+1 = l | x)
So we need to compute P(πi = k, πi+1 = l | x).

Probabilistic Setting for E_k(b)
Given x1, ..., xm, consider a discrete probability space with elementary events
ε_{k,b} = "b is emitted in state k in x1, ..., xm".
For each x in {x1,...,xm} and each position i in x, let Y_{x,i} be the random variable defined by
Y_{x,i}(ε_{k,b}) = 1 if xi = b and πi = k, and 0 otherwise.
Define Y = Σ_x Σ_i Y_{x,i}, the random variable that counts the number of times the event ε_{k,b} happens in x1,...,xm.

The Meaning of E_k(b)
Let E_k(b) be the expectation of Y:
E(Y) = Σ_x Σ_i E(Y_{x,i}) = Σ_x Σ_i P(Y_{x,i} = 1) = Σ_x Σ_{i: xi = b} P(πi = k | x)
So we need to compute P(πi = k | x).

Computing the New Parameters
Consider a training sequence x = x1...xn and concentrate on positions i and i+1. Use the forward and backward values:
f_{k,i} = P(x1 ... xi, πi = k)
b_{k,i} = P(xi+1 ... xn | πi = k)

Compute A_kl (1)
Probability that k→l is taken at position i of x:
P(πi = k, πi+1 = l | x1...xn) = P(x, πi = k, πi+1 = l) / P(x)
Compute P(x) using either the forward or the backward values. We will show that
P(x, πi = k, πi+1 = l) = b_{l,i+1} · e_l(xi+1) · a_kl · f_{k,i}
Expected number of times k→l is used in the training sequences:
A_kl = Σ_x Σ_i ( b_{l,i+1} · e_l(xi+1) · a_kl · f_{k,i} ) / P(x)

Compute A_kl (2)
P(x, πi = k, πi+1 = l)
= P(x1...xi, πi = k, πi+1 = l, xi+1...xn)
= P(πi+1 = l, xi+1...xn | x1...xi, πi = k) · P(x1...xi, πi = k)
= P(πi+1 = l, xi+1...xn | πi = k) · f_{k,i}
= P(xi+1...xn | πi = k, πi+1 = l) · P(πi+1 = l | πi = k) · f_{k,i}
= P(xi+1...xn | πi+1 = l) · a_kl · f_{k,i}
= P(xi+2...xn | xi+1, πi+1 = l) · P(xi+1 | πi+1 = l) · a_kl · f_{k,i}
= P(xi+2...xn | πi+1 = l) · e_l(xi+1) · a_kl · f_{k,i}
= b_{l,i+1} · e_l(xi+1) · a_kl · f_{k,i}

Compute E_k(b)
Probability that xi of x is emitted in state k:
P(πi = k | x1...xn) = P(πi = k, x1...xn) / P(x)
P(πi = k, x1...xn) = P(x1...xi, πi = k, xi+1...xn)
= P(xi+1...xn | x1...xi, πi = k) · P(x1...xi, πi = k)
= P(xi+1...xn | πi = k) · f_{k,i}
= b_{k,i} · f_{k,i}
Expected number of times b is emitted in state k:
E_k(b) = Σ_x Σ_{i: xi = b} ( f_{k,i} · b_{k,i} ) / P(x)

Finally, the New Parameters
a_kl = A_kl / Σ_{l'} A_{kl'}
e_k(b) = E_k(b) / Σ_{b'} E_k(b')
Pseudocounts can be added as before.

Stopping Criteria
We cannot actually reach the maximum (this is optimization of continuous functions), so we need stopping criteria:
- Compute the log likelihood of the model for the current Θ, Σ_x log P(x | Θ), compare it with the previous log likelihood, and stop if the difference is small.
- Or stop after a certain number of iterations.

The Baum-Welch Algorithm
Initialization: pick the best-guess model parameters (or arbitrary ones).
Iteration:
1. Run the forward algorithm for each x.
2. Run the backward algorithm for each x.
3. Calculate A_kl and E_k(b).
4. Calculate the new a_kl and e_k(b).
5. Calculate the new log-likelihood.
Repeat until the log-likelihood does not change much.
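A minimal Python sketch (not from the slides) of one Baum-Welch update for a single training sequence, following the expected-count formulas above; uniform start probabilities, the omission of an explicit end state, and the initial parameter guesses are illustrative assumptions:

```python
def baum_welch_step(x, states, alphabet, a, e, pseudocount=0.0):
    n = len(x)
    start = {k: 1.0 / len(states) for k in states}
    # Forward: f[i][k] = P(x_1..x_i, pi_i = k)
    f = [{k: start[k] * e[(k, x[0])] for k in states}]
    for i in range(1, n):
        f.append({l: e[(l, x[i])] * sum(f[i - 1][k] * a[(k, l)] for k in states)
                  for l in states})
    # Backward: b[i][k] = P(x_{i+1}..x_n | pi_i = k)
    b = [dict.fromkeys(states, 1.0) for _ in range(n)]
    for i in range(n - 2, -1, -1):
        b[i] = {k: sum(e[(l, x[i + 1])] * b[i + 1][l] * a[(k, l)] for l in states)
                for k in states}
    px = sum(f[n - 1][k] for k in states)                       # P(x)
    # Expected counts A_kl = sum_i f_{k,i} * a_kl * e_l(x_{i+1}) * b_{l,i+1} / P(x)
    A = {(k, l): pseudocount + sum(f[i][k] * a[(k, l)] * e[(l, x[i + 1])] * b[i + 1][l]
                                   for i in range(n - 1)) / px
         for k in states for l in states}
    # Expected counts E_k(b) = sum_{i: x_i = b} f_{k,i} * b_{k,i} / P(x)
    Ecnt = {(k, sym): pseudocount + sum(f[i][k] * b[i][k] for i in range(n) if x[i] == sym) / px
            for k in states for sym in alphabet}
    # New parameters: a_kl = A_kl / sum_l' A_kl',  e_k(b) = E_k(b) / sum_b' E_k(b')
    new_a = {(k, l): A[(k, l)] / sum(A[(k, l2)] for l2 in states)
             for k in states for l in states}
    new_e = {(k, sym): Ecnt[(k, sym)] / sum(Ecnt[(k, s2)] for s2 in alphabet)
             for k in states for sym in alphabet}
    return new_a, new_e, px

x = [0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1]
a0 = {("F", "F"): 0.5, ("F", "B"): 0.5, ("B", "F"): 0.5, ("B", "B"): 0.5}   # initial guess
e0 = {("F", 0): 0.5, ("F", 1): 0.5, ("B", 0): 0.3, ("B", 1): 0.7}            # initial guess
print(baum_welch_step(x, ("F", "B"), (0, 1), a0, e0))
```

Iterating this step and monitoring the log of the returned P(x) gives the full Baum-Welch loop summarized in the slide above.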
Baum-Welch Analysis
- The log-likelihood is increased by the iterations.
- Baum-Welch is a particular case of the EM (expectation maximization) algorithm.
- It converges to a local maximum; the choice of initial parameters determines the local maximum to which the algorithm converges.

The Relative Entropy of Two Distributions P, Q
H(P||Q) = Σ_i P(xi) log( P(xi) / Q(xi) )
Property: H(P||Q) is non-negative, and H(P||Q) = 0 iff P(xi) = Q(xi) for all i.
The proof of this property is based on the fact that f(x) = x − 1 − log x is non-negative, with f(x) = 0 iff x = 1.
We use this to show that the log-likelihood is increased by the iterations.

Proof (cont'd)
The log likelihood is log P(x | Θ). Since P(x, π | Θ) = P(π | x, Θ) · P(x | Θ), we have
log P(x | Θ) = log P(x, π | Θ) − log P(π | x, Θ)
Assume Θ^t are the current parameters. We choose Θ^{t+1} such that log P(x | Θ^{t+1}) is greater than log P(x | Θ^t).
Because Σ_π P(π | x, Θ^t) = 1,
log P(x | Θ) = Σ_π P(π | x, Θ^t) log P(x, π | Θ) − Σ_π P(π | x, Θ^t) log P(π | x, Θ)

Proof (cont'd)
Notation: Q(Θ | Θ^t) = Σ_π P(π | x, Θ^t) log P(x, π | Θ)
We show that a Θ^{t+1} that maximizes log P(x | Θ) may be chosen to be some Θ that maximizes Q(Θ | Θ^t):
log P(x | Θ) − log P(x | Θ^t) = Q(Θ | Θ^t) − Q(Θ^t | Θ^t) + Σ_π P(π | x, Θ^t) log( P(π | x, Θ^t) / P(π | x, Θ) )
The last sum is non-negative (it is a relative entropy).

Proof (cont'd)
Conclusion:
log P(x | Θ) − log P(x | Θ^t) ≥ Q(Θ | Θ^t) − Q(Θ^t | Θ^t)
with equality only when Θ = Θ^t or when P(π | x, Θ^t) = P(π | x, Θ) for some Θ ≠ Θ^t.

Proof (cont'd)
For an HMM,
P(x, π | Θ) = a_{π0,π1} · Π_{i=1..|x|} e_{πi}(xi) · a_{πi,πi+1}
Let A_kl(π) be the number of times k→l appears in this product, and E_k(b, π) the number of times an emission of b from k appears in this product. The product is a function of Θ, but A_kl(π) and E_k(b, π) do not depend on Θ.

Proof (cont'd)
Write the product using all the factors e_k(b) raised to the power E_k(b, π) and a_kl raised to the power A_kl(π). Then replace the product in Q(Θ | Θ^t):
Q(Θ | Θ^t) = Σ_π P(π | x, Θ^t) ( Σ_{k=1..M} Σ_b E_k(b, π) log e_k(b) + Σ_{k=0..M} Σ_{l=1..M} A_kl(π) log a_kl )

Proof (cont'd)
Remember the A_kl and E_k(b) computed by the Baum-Welch algorithm at every iteration, and consider those computed at iteration t (based on Θ^t). Then
A_kl = Σ_π P(π | x, Θ^t) A_kl(π)
E_k(b) = Σ_π P(π | x, Θ^t) E_k(b, π)
i.e., they are the expectations of A_kl(π) and E_k(b, π) over P(π | x, Θ^t).

Proof (cont'd)
Then, changing the order of summation,
Q(Θ | Θ^t) = Σ_{k=1..M} Σ_b E_k(b) log e_k(b) + Σ_{k=0..M} Σ_{l=1..M} A_kl log a_kl
Note that Θ consists of {a_kl} and {e_k(b)}. The algorithm computes Θ^{t+1} to consist of
a_kl = A_kl / Σ_{l'} A_{kl'} and e_k(b) = E_k(b) / Σ_{b'} E_k(b')
One then shows that this Θ^{t+1} maximizes Q(Θ | Θ^t) (by computing the differences for the A part and for the E part separately).

Speech Recognition
Create an HMM of the words in a language:
- Each word is a hidden state in Q.
- Each of the basic sounds in the language is a symbol in Σ.
Input: use speech as the input sequence.
Goal: find the most probable sequence of states.

Speech Recognition: Building the Model
Analyze some large source of English sentences, such as a database of newspaper articles, to form the probability matrices:
- A_0i: the chance that word i begins a sentence.
- A_ij: the chance that word j follows word i.

Building the Model (cont'd)
Analyze English speakers to determine what sounds are emitted with what words:
- E_k(b): the chance that sound b is spoken in word k.
This allows for alternate pronunciations of words.

Speech Recognition: Using the Model
Use the same dynamic programming algorithm as before. Weave the spoken sounds through the model the same way we wove the rolls of the die through the casino model. π then represents the most likely sequence of words.

Using the Model (cont'd)
How well does it work? Common words, such as 'the', 'a', 'of', make prediction less accurate, since there are so many words that can normally follow them.
Improving Speech Recognition
Initially we were using a 'bigram', a graph connecting every two words. Expand that to a 'trigram':
- Each state represents two words spoken in succession.
- Each edge joins one such state (A B) to another state representing (B C).
- This requires on the order of n^2 states and n^3 edges, where n is the number of words in the language.
Much better, but the context is still limited.

References
Slides for the CS 262 course at Stanford given by Serafim Batzoglou.