HMM-Lecture5 - Hidden Markov Models, Lecture 5, Tuesday April 15, 2003


Hidden Markov Models
Lecture 5, Tuesday April 15, 2003

Definition of a hidden Markov model

Definition: A hidden Markov model (HMM) consists of:
• An alphabet Σ = { b_1, b_2, …, b_M }
• A set of states Q = { 1, …, K }
• Transition probabilities between any two states:
  a_ij = transition probability from state i to state j,
  with a_i1 + … + a_iK = 1 for all states i = 1…K
• Start probabilities a_0i, with a_01 + … + a_0K = 1
• Emission probabilities within each state:
  e_k(b) = P( x_i = b | π_i = k ),
  with e_k(b_1) + … + e_k(b_M) = 1 for all states k = 1…K

[Diagram: states 1, 2, …, K with transitions between them]

The three main questions on HMMs

1. Evaluation
   GIVEN an HMM M and a sequence x,
   FIND Prob[ x | M ]

2. Decoding
   GIVEN an HMM M and a sequence x,
   FIND the sequence π of states that maximizes P[ x, π | M ]

3. Learning
   GIVEN an HMM M with unspecified transition/emission probabilities, and a sequence x,
   FIND parameters θ = ( e_i(.), a_ij ) that maximize P[ x | θ ]

Today
• Decoding
• Evaluation

Problem 1: Decoding

Find the best parse of a sequence.

Decoding
GIVEN x = x_1 x_2 … x_N,
we want to find π = π_1, …, π_N such that P[ x, π ] is maximized:

  π* = argmax_π P[ x, π ]

We can use dynamic programming! Let

  V_k(i) = max_{π_1,…,π_{i-1}} P[ x_1 … x_{i-1}, π_1, …, π_{i-1}, x_i, π_i = k ]
         = probability of the most likely sequence of states ending at state π_i = k

[Trellis diagram: states 1, 2, …, K in a column at each position x_1, x_2, x_3, …]

Decoding – main idea

Given that for all states k, and for a fixed position i,

  V_k(i) = max_{π_1,…,π_{i-1}} P[ x_1 … x_{i-1}, π_1, …, π_{i-1}, x_i, π_i = k ],

what is V_l(i+1)? From the definition,

  V_l(i+1) = max_{π_1,…,π_i} P[ x_1 … x_i, π_1, …, π_i, x_{i+1}, π_{i+1} = l ]
           = max_{π_1,…,π_i} P( x_{i+1}, π_{i+1} = l | x_1 … x_i, π_1, …, π_i ) P[ x_1 … x_i, π_1, …, π_i ]
           = max_{π_1,…,π_i} P( x_{i+1}, π_{i+1} = l | π_i ) P[ x_1 … x_{i-1}, π_1, …, π_{i-1}, x_i, π_i ]
           = max_k P( x_{i+1}, π_{i+1} = l | π_i = k ) max_{π_1,…,π_{i-1}} P[ x_1 … x_{i-1}, π_1, …, π_{i-1}, x_i, π_i = k ]
           = e_l(x_{i+1}) max_k a_kl V_k(i)
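The recurrence above is the core of the Viterbi algorithm. Below is a minimal sketch of Viterbi decoding on a toy two-state HMM (a fair/loaded coin); the states, symbols, and probability values are illustrative assumptions, not taken from the lecture. Log probabilities are used to avoid numerical underflow on long sequences.

```python
import math

# Hypothetical toy HMM: a Fair coin (F) and a Loaded coin (L) emitting H/T.
states = ["F", "L"]
start = {"F": 0.5, "L": 0.5}                 # a_0k
trans = {"F": {"F": 0.9, "L": 0.1},          # a_kl
         "L": {"F": 0.1, "L": 0.9}}
emit = {"F": {"H": 0.5, "T": 0.5},           # e_k(b)
        "L": {"H": 0.8, "T": 0.2}}

def viterbi(x):
    """Return (best log-probability, most likely state path) for sequence x."""
    # Initialization: V_k(1) = a_0k * e_k(x_1), computed in log space.
    V = [{k: math.log(start[k]) + math.log(emit[k][x[0]]) for k in states}]
    back = [{}]  # back[i][l] = best predecessor state of l at position i
    # Recurrence: V_l(i+1) = e_l(x_{i+1}) * max_k a_kl V_k(i)
    for i in range(1, len(x)):
        V.append({})
        back.append({})
        for l in states:
            best_k = max(states, key=lambda k: V[i - 1][k] + math.log(trans[k][l]))
            V[i][l] = (math.log(emit[l][x[i]])
                       + V[i - 1][best_k] + math.log(trans[best_k][l]))
            back[i][l] = best_k
    # Termination: pick the best final state, then trace back through pointers.
    last = max(states, key=lambda k: V[-1][k])
    path = [last]
    for i in range(len(x) - 1, 0, -1):
        path.append(back[i][path[-1]])
    path.reverse()
    return V[-1][last], path
```

For example, `viterbi("HHHHHHHH")` parses a run of heads as all-Loaded, while `viterbi("TTTTTTTT")` parses a run of tails as all-Fair, since the sticky transitions (0.9 self-loops) penalize switching states.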
This note was uploaded on 02/13/2012 for the course CS 91.510 taught by Professor Staff during the Fall '09 term at UMass Lowell.
