# MIT6_01F09_lec10: 6.01 Introduction to EECS I, Week 10

6.01: Introduction to EECS I, Week 10, November 10, 2009. Bayesian estimation, etc.

## What about the bet?

- Total number of legos in the bag = 4.
- Random variable R: the number of red legos in the bag. Its domain is D_R = {0, 1, 2, 3, 4}.
- Assume a uniform prior on R (all values equally likely): Pr(R = n) = 1/5.
- Random variable L_0: the color of the first lego we draw out of the bag.
- Observation model: Pr(L_0 = red | R = n) = n/4, and Pr(L_0 = white | R = n) = 1 - Pr(L_0 = red | R = n).
- We want to know Pr(R = n | L_0 = whatever color we observed). Bayes!

## What do we know after the first lego draw?

Start from the prior Pr(R = r) and multiply by the observation model Pr(L_0 = l_0 | R = r) to get the joint Pr(L_0 = l_0, R = r); then normalize (divide by the sum over r) to get the posterior Pr(R = r | L_0 = l_0). [The slide shows these distributions as bar charts over the number of red legos, r = 0..4.]

## What do we know after the second lego draw?

Conditioned on R, the second draw does not depend on the first: Pr(L_1 = l_1 | R = r, L_0 = l_0) = Pr(L_1 = l_1 | R = r). Multiply the current belief Pr(R = r | L_0 = l_0) by Pr(L_1 = l_1 | R = r) to get Pr(L_1 = l_1, R = r | L_0 = l_0), then normalize to get Pr(R = r | L_0 = l_0, L_1 = l_1).

## Hidden Markov Models

A system with a state that changes over time, probabilistically:

- Discrete time steps 0, 1, ..., t.
- Random variables for the states at each time: S_0, S_1, S_2, ...
- Random variables for the observations: O_0, O_1, O_2, ...
- The state at time t determines the probability distribution over the observation at time t and over the state at time t + 1.

The model is specified by three distributions:

- Initial state distribution: Pr(S_0 = s)
- State transition model: Pr(S_{t+1} = s | S_t = r)
- Observation model: Pr(O_t = o | S_t = s)

Inference problem: given an actual sequence of observations o_0, ..., o_t, compute Pr(S_{t+1} = s | O_0 = o_0, ..., O_t = o_t).
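The multiply-and-normalize update for the lego bet can be sketched in Python. This is a minimal sketch, assuming draws are with replacement (so Pr(L_k = red | R = n) = n/4 on every draw) and assuming, for illustration, that red is observed on both draws:

```python
def bayes_update(belief, likelihoods):
    """Multiply a belief by per-state likelihoods, then renormalize."""
    joint = [b * l for b, l in zip(belief, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

def likelihood(color):
    """Pr(L = color | R = n) for n = 0..4, under draws with replacement."""
    return [n / 4 if color == "red" else 1 - n / 4 for n in range(5)]

# Uniform prior over R in {0, 1, 2, 3, 4}.
prior = [1 / 5] * 5

# Observe red on the first draw, then red again on the second.
after_first = bayes_update(prior, likelihood("red"))
after_second = bayes_update(after_first, likelihood("red"))
# after_first  = [0, 0.1, 0.2, 0.3, 0.4]
# after_second = [0, 1/30, 2/15, 0.3, 8/15]
```

Each red observation shifts belief toward larger values of R; note that R = 0 is ruled out entirely after the first red draw.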

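The HMM inference problem above can be attacked one observation at a time: weight the current belief by the observation model, renormalize, then push it through the transition model. A minimal sketch of a single such filtering step, with hypothetical names `trans` and `obs` for the transition and observation models (both indexed by integer states/observations):

```python
def filter_step(belief, o, trans, obs):
    """One HMM filtering step: observation update, then transition update.

    belief[s]   = Pr(S_t = s | earlier observations)
    trans[r][s] = Pr(S_{t+1} = s | S_t = r)
    obs[s][o]   = Pr(O_t = o | S_t = s)
    Returns Pr(S_{t+1} = s | earlier observations and O_t = o).
    """
    n = len(belief)
    # Observation update: weight by Pr(O_t = o | S_t = s) and renormalize.
    weighted = [belief[s] * obs[s][o] for s in range(n)]
    total = sum(weighted)
    posterior = [w / total for w in weighted]
    # Transition update: push the posterior through the transition model.
    return [sum(posterior[r] * trans[r][s] for r in range(n)) for s in range(n)]
```

Iterating `filter_step` over o_0, ..., o_t yields exactly the Pr(S_{t+1} = s | O_0 = o_0, ..., O_t = o_t) asked for above.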
## Bayes Jargon

- Belief: a probability distribution over the states.
- Prior: the initial belief, before any observations.

## Are my leftovers edible?

The state space is D_{S_t} = {tasty, smelly, furry}, with initial distribution Pr(S_0 = tasty) = 1 and Pr(S_0 = smelly) = Pr(S_0 = furry) = 0.

State transition model Pr(S_{t+1} = s | S_t = r):

| S_t \ S_{t+1} | T   | S   | F   |
|---------------|-----|-----|-----|
| T             | 0.8 | 0.2 | 0.0 |
| S             | 0.1 | 0.7 | 0.2 |
| F             | 0.0 | 0.0 | 1.0 |

With no observations, what is Pr(S_4 = s)? Apply the state transition update repeatedly.
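The repeated state transition update can be sketched directly from the table above. A minimal sketch (the matrix values are from the slide; with no observations, the belief is simply pushed through the transition model each step, Pr(S_{t+1} = s) = sum over r of Pr(S_t = r) * T[r][s]):

```python
# Transition model for the leftovers example: rows are S_t, columns S_{t+1},
# in the order (tasty, smelly, furry).
T = [
    [0.8, 0.2, 0.0],  # from tasty
    [0.1, 0.7, 0.2],  # from smelly
    [0.0, 0.0, 1.0],  # from furry (an absorbing state)
]

def transition_update(belief):
    """Pr(S_{t+1} = s) = sum_r Pr(S_t = r) * T[r][s]."""
    return [sum(belief[r] * T[r][s] for r in range(3)) for s in range(3)]

belief = [1.0, 0.0, 0.0]  # Pr(S_0 = tasty) = 1
for _ in range(4):
    belief = transition_update(belief)
# belief is now Pr(S_4 = s) = [0.4806, 0.3510, 0.1684]
```

Because "furry" is absorbing, its probability can only grow over time: after four steps the leftovers are furry with probability about 0.17.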

## This note was uploaded on 11/07/2011 for the course COMPUTER 6.01 taught by Professor Staff during the Spring '09 term at MIT.
