CS 188: Artificial Intelligence
Spring 2011
Lecture 18: HMMs and Particle Filtering
4/4/2011
Pieter Abbeel, UC Berkeley
Many slides over this course adapted from Dan Klein, Stuart Russell, Andrew Moore
Announcements
§ W4 out, due next Monday
§ P4 out, due next week
Announcements
§ Course contest
  § Fun! (And extra credit.)
  § Regular tournaments
  § Instructions posted soon!
Outline
§ Markov Models ( = a particular Bayes net)
§ Hidden Markov Models (HMMs)
  § Representation ( = another particular Bayes net)
  § Inference
    § Forward algorithm ( = variable elimination)
    § Particle filtering ( = likelihood weighting with some tweaks)
§ Why do we study them?
  § Widespread use for reasoning over time or space
§ Concept: Stationary distribution
Reasoning over Time
§ Often, we want to reason about a sequence of observations
  § Speech recognition
  § Robot localization
  § User attention
  § Medical monitoring
§ Need to introduce time into our models
§ Basic approach: hidden Markov models (HMMs)
§ More general: dynamic Bayes nets
Markov Models
§ A Markov model is a chain-structured BN
  § Each node is identically distributed (stationarity)
  § Value of X at a given time is called the state
§ As a BN:
  § Parameters: called transition probabilities or dynamics, specify how the state evolves over time (also, initial probs)

  X1 → X2 → X3 → X4
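As a concrete sketch (hypothetical Python; the two-state weather table is chosen for illustration), a Markov model is nothing more than an initial distribution plus one transition CPT shared by every time step:

```python
import random

# Illustrative two-state chain: one initial distribution plus one
# transition CPT P(X_t | X_{t-1}) reused at every step -- that
# reuse is exactly the stationarity assumption.
initial = {"sun": 1.0, "rain": 0.0}
transition = {
    "sun":  {"sun": 0.9, "rain": 0.1},
    "rain": {"sun": 0.1, "rain": 0.9},
}

def sample_chain(steps, seed=0):
    """Sample a state sequence X_1, ..., X_steps from the chain."""
    rng = random.Random(seed)
    states = list(initial)
    x = rng.choices(states, weights=[initial[s] for s in states])[0]
    seq = [x]
    for _ in range(steps - 1):
        x = rng.choices(states, weights=[transition[x][s] for s in states])[0]
        seq.append(x)
    return seq

print(sample_chain(10))
```

With self-loop probability 0.9, sampled sequences tend to contain long runs of the same state.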
Conditional Independence
§ Basic conditional independence:
  § Past and future independent given the present
  § Each time step only depends on the previous
  § This is called the (first order) Markov property
§ Note that the chain is just a (growing) BN
  § We can always use generic BN reasoning on it if we truncate the chain at a fixed length

  X1 → X2 → X3 → X4
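On a chain truncated at length 4, generic BN reasoning reduces to the chain rule: P(x1..x4) = P(x1) P(x2|x1) P(x3|x2) P(x4|x3). A minimal sketch (the transition numbers are illustrative, not part of the definition):

```python
# Joint probability of a full assignment on the truncated chain
# X1 -> X2 -> X3 -> X4, via the BN chain rule.
initial = {"sun": 1.0, "rain": 0.0}
transition = {
    "sun":  {"sun": 0.9, "rain": 0.1},
    "rain": {"sun": 0.1, "rain": 0.9},
}

def joint(assignment):
    """P(x1, ..., xn) = P(x1) * product of P(x_t | x_{t-1})."""
    p = initial[assignment[0]]
    for prev, cur in zip(assignment, assignment[1:]):
        p *= transition[prev][cur]
    return p

print(joint(["sun", "sun", "rain", "rain"]))  # 1.0 * 0.9 * 0.1 * 0.9
```

Because each factor conditions only on the previous state, the Markov property is built into the factorization itself.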
Example: Markov Chain
§ Weather:
  § States: X = {rain, sun}
  § Transitions: each state persists with probability 0.9 and switches with probability 0.1

      X_{t-1}   X_t    P(X_t | X_{t-1})
      sun       sun    0.9
      sun       rain   0.1
      rain      sun    0.1
      rain      rain   0.9

    (The state diagram and this table are two new representations of a CPT, not BNs!)
  § Initial distribution: 1.0 sun
  § What's the probability distribution after one step?
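The one-step question can be answered mechanically; a sketch assuming the symmetric 0.9/0.1 table above:

```python
transition = {
    "sun":  {"sun": 0.9, "rain": 0.1},
    "rain": {"sun": 0.1, "rain": 0.9},
}

def step(dist):
    """One step of the chain: P(X_t) = sum over x_{t-1} of
    P(X_t | x_{t-1}) * P(x_{t-1})."""
    return {
        s: sum(dist[prev] * transition[prev][s] for prev in dist)
        for s in transition
    }

print(step({"sun": 1.0, "rain": 0.0}))  # {'sun': 0.9, 'rain': 0.1}
```

Starting from 1.0 sun, one step gives 0.9 sun, 0.1 rain.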
Query: P(X_t)
§ Question: probability of being in state x at time t?
§ Slow answer:
  § Enumerate all sequences of length t which end in x
  § Add up their probabilities
  § = join on X_1 through X_{t-1} followed by sum over X_1 through X_{t-1}
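The slow answer can be written down directly; a brute-force sketch (exponential in t, using the illustrative weather table):

```python
from itertools import product

initial = {"sun": 1.0, "rain": 0.0}
transition = {
    "sun":  {"sun": 0.9, "rain": 0.1},
    "rain": {"sun": 0.1, "rain": 0.9},
}

def slow_p_xt(x, t):
    """P(X_t = x): enumerate every length-t sequence ending in x
    and add up the probability of each one."""
    states = list(transition)
    total = 0.0
    for prefix in product(states, repeat=t - 1):
        seq = prefix + (x,)
        p = initial[seq[0]]
        for prev, cur in zip(seq, seq[1:]):
            p *= transition[prev][cur]
        total += p
    return total

print(slow_p_xt("sun", 3))
```

The loop over `product(states, repeat=t-1)` is exactly the join over X_1 through X_{t-1}; the running `total` is the sum over them.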
Mini-Forward Algorithm
§ Question: What's P(X) on some day t?
§ An instance of variable elimination! (In order X_1, X_2, …)

  P(x_t) = Σ_{x_{t-1}} P(x_t | x_{t-1}) P(x_{t-1})

  (Trellis of {sun, rain} states across time steps: forward simulation)
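A sketch of the mini-forward algorithm (same illustrative table), pushing the whole distribution forward one step at a time instead of enumerating sequences:

```python
initial = {"sun": 1.0, "rain": 0.0}
transition = {
    "sun":  {"sun": 0.9, "rain": 0.1},
    "rain": {"sun": 0.1, "rain": 0.9},
}

def mini_forward(t):
    """P(X_t) by repeatedly applying the update
    P(x_t) = sum_{x_{t-1}} P(x_t | x_{t-1}) * P(x_{t-1}).
    This eliminates X_1, X_2, ... in order: linear in t,
    not exponential."""
    dist = dict(initial)
    for _ in range(t - 1):
        dist = {
            s: sum(dist[prev] * transition[prev][s] for prev in dist)
            for s in transition
        }
    return dist

print(mini_forward(3))
```

Each pass of the loop sums out one more past variable, which is exactly the variable elimination ordering named on the slide.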
Example
§ From initial observation of sun: P(X_1), P(X_2), P(X_3), …, P(X_∞)
§ From initial observation of rain: P(X_1), P(X_2), P(X_3), …, P(X_∞)
Stationary Distributions
§ If we simulate the chain long enough:
  § What happens?
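What happens can be checked by simulation; a sketch (assuming the symmetric example table) that pushes the distribution forward until it stops changing:

```python
initial = {"sun": 1.0, "rain": 0.0}
transition = {
    "sun":  {"sun": 0.9, "rain": 0.1},
    "rain": {"sun": 0.1, "rain": 0.9},
}

def stationary(tol=1e-12):
    """Apply the forward update until the distribution is (numerically)
    a fixed point of the transition dynamics."""
    dist = dict(initial)
    while True:
        new = {
            s: sum(dist[prev] * transition[prev][s] for prev in dist)
            for s in transition
        }
        if all(abs(new[s] - dist[s]) < tol for s in new):
            return new
        dist = new

print(stationary())  # converges toward the uniform distribution for this symmetric chain
```

For this symmetric chain the limit is 0.5 sun / 0.5 rain regardless of where we start, which is the stationary distribution the next slides introduce.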