l21abwhmmlearn

Hidden Markov Models: Explanation and Model Learning
Brian C. Williams
16.410/16.413 Session 21
copyright Brian C. Williams, 2000
[Title image courtesy of JPL]

Reading Assignments
AIMA (Russell and Norvig):
§ Ch 15.1-15.3, 20.3: State Estimation and Hidden Markov Models
From last Monday:
§ Ch 13: Review of Probabilities
§ Ch 14.1-14.4: Probabilistic Reasoning

Outline
§ Review
§ Explanation and Learning in Statistical Natural Language
§ Decoding using the Viterbi Algorithm
§ Evaluation via the Forward and Backward Algorithms
§ Model learning via the Baum-Welch Algorithm
HMM Estimation is Pervasive
§ Engineering Operations (courtesy of NASA)
§ Dialogue Management (courtesy of NASA)
§ Robot Localization
[Images courtesy of Kanna Rajan, NASA Ames. Used with permission.]

Posterior Probability, after Observations X_{1,n}
P(M | x_{1,n}) = α P(x_{1,n} | M) P(M)
Assume:
§ A priori mode independence: P(M) = ∏_{Mi ∈ M} P(Mi)
§ All assignments consistent with the observations are equally likely.
P(x_{1,n} | M) is estimated using the model Φ, one observation x_i at a time (see the sketch below):
§ If previous observations X_{1,i-1} = x_{1,i-1}, M and Φ entail X_i = x_i, then P(x_i | M) = 1.
§ If previous observations X_{1,i-1} = x_{1,i-1}, M and Φ entail X_i ≠ x_i, then P(x_i | M) = 0.
§ Otherwise, let D_ci = {x | x_{1,i-1}, M and Φ are consistent with X_i = x}; then P(x_i | M) = 1/|D_ci|.

Estimating Dynamic Systems
Given a sequence of observations and commands:
§ What is the likelihood of a particular state? Belief State Update (filtering and smoothing)
§ What is the most likely sequence of states that got me here? Decoding (Viterbi Algorithm; sketched below)
§ What is the most likely sequence of observations generated? Evaluation/Prediction
§ What HMM most likely generated these observations? Learning (Baum-Welch Algorithm, Expectation-Maximization)
[Figure: state trellis X_0, X_1, ..., X_{N-1}, X_N]
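To make the three-case likelihood estimate from the "Posterior Probability" slide concrete, here is a minimal Python sketch. The helpers prior(M) (the product of per-component mode priors) and consistent_values(M, prior_obs, i) (the set D_ci of values of X_i that the model Φ leaves consistent) are hypothetical stand-ins for the lecture's model machinery, not functions the slides define.

    def observation_likelihood(x_i, consistent_vals):
        """P(x_i | M) per the three cases above: ruled out (0), entailed
        (1, when D_ci is the singleton {x_i}), else 1/|D_ci|."""
        if x_i not in consistent_vals:
            return 0.0                        # M and Phi entail X_i != x_i
        return 1.0 / len(consistent_vals)     # entailed case falls out as 1/1

    def posterior_over_modes(candidates, obs_seq, consistent_values, prior):
        """P(M | x_{1,n}) = alpha * P(x_{1,n} | M) * P(M), chaining one
        observation-likelihood factor per time step."""
        scores = {}
        for M in candidates:
            p = prior(M)                                  # a priori mode independence
            for i, x_i in enumerate(obs_seq):
                D = consistent_values(M, obs_seq[:i], i)  # D_ci, from the model Phi
                p *= observation_likelihood(x_i, D)
            scores[M] = p
        total = sum(scores.values())                      # 1/alpha
        return {M: p / total for M, p in scores.items()} if total else scores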
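The decoding question, the most likely state sequence, is answered by the Viterbi algorithm listed in the outline. The lecture's own development comes later; as a reference point, this is the standard textbook recursion rather than the lecture's code, with T[i, j] = P(S_{t+1} = j | S_t = i), O[j, x] = P(x | S = j), pi[j] = P(S_1 = j) (NumPy arrays), and actions omitted for simplicity.

    import numpy as np

    def viterbi(T, O, pi, obs):
        """Most likely hidden state sequence for an observation sequence."""
        n, m = len(obs), len(pi)
        delta = np.zeros((n, m))            # delta[t, j]: prob. of best path ending in j
        psi = np.zeros((n, m), dtype=int)   # psi[t, j]: best predecessor of j at time t
        delta[0] = pi * O[:, obs[0]]
        for t in range(1, n):
            for j in range(m):
                scores = delta[t - 1] * T[:, j]
                psi[t, j] = int(np.argmax(scores))
                delta[t, j] = scores[psi[t, j]] * O[j, obs[t]]
        path = [int(np.argmax(delta[-1]))]  # backtrack from the best final state
        for t in range(n - 1, 0, -1):
            path.append(int(psi[t, path[-1]]))
        return path[::-1]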
What is the likelihood of a state?
§ Filtering: probabilities of current states
§ Prediction: probabilities of future states
§ Smoothing: probabilities of past states
[Figure: timeline 1 ... t; smoothing covers past times, filtering the current time, prediction future times]

Notation
§ S_{t+1}: set of hidden variables in the t+1 time slice
§ s_{t+1}: set of values for those hidden variables at t+1
§ x_{t+1}: set of observations at time t+1
§ x_{1:t}: set of observations from all times from 1 to t
§ α: normalization constant

Hidden Markov Models
§ Finite states S, actions A, and observations Ω
§ State transition function T(S_i, A_i, S_{i+1}): P(S_{i+1} | S_i, A_i)
§ Observation function O(S_i, Ω_i): P(Ω_i | S_i)
§ Initial state distribution Θ(S): P(S_1)
Notation: ℘(S) denotes the set of all subsets of S.
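One plausible way to carry this tuple in code is a small container. The layout below (action-indexed transition tensor, one observation row per state) is an illustrative choice, not one the slides prescribe; for the action-free sketches in these notes, T collapses to a single state-by-state matrix.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class HMM:
        T: np.ndarray       # T[a, i, j] = P(S_{i+1} = j | S_i = i, A_i = a)
        O: np.ndarray       # O[j, w]    = P(Omega_i = w | S_i = j)
        theta: np.ndarray   # theta[j]   = P(S_1 = j)

        def check(self):
            """Verify each conditional distribution sums to one."""
            assert np.allclose(self.T.sum(axis=2), 1.0)
            assert np.allclose(self.O.sum(axis=1), 1.0)
            assert np.isclose(self.theta.sum(), 1.0)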
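The evaluation question from the "Estimating Dynamic Systems" slide, P(x_{1:n} | model), is what the forward algorithm named in the outline computes. A sketch under the same action-free matrix conventions as the Viterbi example above:

    import numpy as np

    def forward_likelihood(T, O, pi, obs):
        """P(x_{1:n} | model): propagate the forward message alpha_t,
        then marginalize out the final state."""
        alpha = pi * O[:, obs[0]]            # alpha_1(j) = P(S_1 = j) P(x_1 | j)
        for x in obs[1:]:
            alpha = (T.T @ alpha) * O[:, x]  # predict one step, weight by evidence
        return float(alpha.sum())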
Markov Assumptions
§ Markov assumption on state: P(S_t | S_{0:t-1}) = P(S_t | S_{t-1})
§ Markov assumption on evidence: P(X_t | S_{0:t}, X_{1:t-1}) = P(X_t | S_t)
Given a distribution over the current state, the future states and the current and future observations are independent of the past.
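These two assumptions are what make recursive belief-state update possible: P(S_{t+1} | x_{1:t+1}) = α P(x_{t+1} | S_{t+1}) Σ_{s_t} P(S_{t+1} | s_t) P(s_t | x_{1:t}), as derived in the AIMA Ch 15 reading. A minimal sketch of one filtering step, using the same action-free matrix conventions as the earlier examples:

    import numpy as np

    def filter_step(belief, T, O, x):
        """Update P(S_t | x_{1:t}) to P(S_{t+1} | x_{1:t+1}) given observation x."""
        predicted = T.T @ belief        # sum over s_t: P(S_{t+1} | s_t) P(s_t | x_{1:t})
        unnorm = O[:, x] * predicted    # weight by the evidence P(x_{t+1} | S_{t+1})
        return unnorm / unnorm.sum()    # alpha: normalize to a distribution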