# MIT16_410F10_lec20 — 16.410/413 Principles of Autonomy and Decision Making

## Lecture 20: Intro to Hidden Markov Models

Emilio Frazzoli, Aeronautics and Astronautics, Massachusetts Institute of Technology. November 22, 2010.

### Assignments

Readings:
- Lecture notes
- [AIMA] Ch. 15.1-3, 20.3
- Paper on Stellar: L. Rabiner, A tutorial on Hidden Markov Models...

### Outline

1. Markov Chains (example: whack-the-mole)
2. Hidden Markov Models
3. Problem 1: Evaluation
4. Problem 2: Explanation

### Markov Chains

**Definition (Markov chain).** A Markov chain is a sequence of random variables $X_1, X_2, X_3, \ldots, X_t, \ldots$ such that the probability distribution of $X_{t+1}$ depends only on $t$ and $x_t$ (the Markov property); in other words,

$$
\Pr[X_{t+1} = x \mid X_t = x_t, X_{t-1} = x_{t-1}, \ldots, X_1 = x_1] = \Pr[X_{t+1} = x \mid X_t = x_t].
$$

If each of the random variables $\{X_t : t \in \mathbb{N}\}$ can take values in a finite set $\mathcal{X} = \{x_1, x_2, \ldots, x_N\}$, then for each time step $t$ one can define a matrix of transition probabilities $T^t$ (the *transition matrix*), such that

$$
T^t_{ij} = \Pr[X_{t+1} = x_j \mid X_t = x_i].
$$

If the probability distribution of $X_{t+1}$ depends only on the preceding state $x_t$ (and not on the time step $t$), then the Markov chain is *stationary*, and we can describe it with a single transition matrix $T$.

### Graph models of Markov chains

The transition matrix has the following properties:

- $T_{ij} \ge 0$, for all $i, j \in \{1, \ldots, N\}$.
- $\sum_{j=1}^{N} T_{ij} = 1$, for all $i \in \{1, \ldots, N\}$ (the transition matrix is *stochastic*).

A finite-state, stationary Markov chain can be represented as a weighted graph $G = (V, E, w)$, such that $V = \mathcal{X}$, $E = \{(i,j) : T_{ij} > 0\}$, and $w((i,j)) = T_{ij}$.
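The basic operation on a stationary Markov chain is propagating a state distribution one step forward, $\pi_{t+1}[j] = \sum_i \pi_t[i]\, T_{ij}$. A minimal sketch in plain Python follows; the two-state matrix here is invented for illustration and is not from the lecture:

```python
def step(pi, T):
    """One step of a Markov chain: pi_{t+1}[j] = sum_i pi_t[i] * T[i][j]."""
    n = len(T)
    return [sum(pi[i] * T[i][j] for i in range(n)) for j in range(n)]

# A row-stochastic transition matrix (each row sums to 1) -- made up
# for this example.
T = [
    [0.9, 0.1],
    [0.5, 0.5],
]

pi = [1.0, 0.0]    # start in state x1 with certainty
pi = step(pi, T)   # distribution after one step -> [0.9, 0.1]
print(pi)
```

Note that `step` is just a vector-matrix product, so $d$ steps of propagation amount to multiplying by $T^d$.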
### Example: Whack-the-mole

A mole has burrowed a network of underground tunnels, with $N$ openings at ground level. We are interested in modeling the sequence of openings at which the mole will poke its head out of the ground. The probability distribution of the next opening depends only on the present location of the mole.

Three holes: $\mathcal{X} = \{x_1, x_2, x_3\}$, with transition probabilities

$$
T = \begin{bmatrix} 0.1 & 0.4 & 0.5 \\ 0.4 & 0 & 0.6 \\ 0 & 0.6 & 0.4 \end{bmatrix}.
$$

### Whack-the-mole (2/3)

Let us assume that we know with certainty that the mole was at hole $x_1$ at time step 1 (i.e., $\Pr[X_1 = x_1] = 1$). It takes $d$ time units to go get the mallet. Where should I wait for the mole if I want to maximize the probability of whacking it the next time it surfaces?...
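The question above can be answered by propagating the initial distribution $[1, 0, 0]$ through $T$ for $d$ steps and waiting at the hole with the highest probability. A sketch, assuming the transition matrix reconstructed from the lecture's graph:

```python
# Transition matrix as reconstructed above (each row sums to 1).
T = [
    [0.1, 0.4, 0.5],
    [0.4, 0.0, 0.6],
    [0.0, 0.6, 0.4],
]

def propagate(pi, T, d):
    """Apply pi <- pi T a total of d times."""
    n = len(T)
    for _ in range(d):
        pi = [sum(pi[i] * T[i][j] for i in range(n)) for j in range(n)]
    return pi

pi0 = [1.0, 0.0, 0.0]  # mole known to be at hole x1 at time step 1
for d in (1, 2, 3):
    pi = propagate(pi0, T, d)
    best = max(range(3), key=lambda j: pi[j])
    print(f"d={d}: {[round(p, 3) for p in pi]}, wait at x{best + 1}")
```

For $d = 1$, the distribution is $[0.1, 0.4, 0.5]$, so under this matrix the best place to wait is $x_3$.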

This note was uploaded on 12/26/2011 for the course 16.410, taught by Prof. Brian Williams during the Fall '10 term at MIT.
