Lecture 21 - Probabilistic Reasoning over Time

CS 561: Artificial Intelligence
Instructor: Sofus A. Macskassy, [email protected]
TAs: Nadeesha Ranashinghe ([email protected]), William Yeoh ([email protected]), Harris Chiu ([email protected])
Lectures: MW 5:00-6:20pm, OHE 122 / DEN
Office hours: By appointment
Class page: http://www-rcf.usc.edu/~macskass/CS561-Spring2010/

This class will use http://www.uscden.net/ and the class webpage for:
- Up-to-date information
- Lecture notes
- Relevant dates, links, etc.

Course material: [AIMA] Artificial Intelligence: A Modern Approach, by Stuart Russell and Peter Norvig (2nd ed.)

Temporal Probability Models [AIMA Ch. 15]
- Time and uncertainty
- Inference: filtering, prediction, smoothing
- Hidden Markov models
- Kalman filters (a brief mention)
- Dynamic Bayesian networks
- Particle filtering
Time and uncertainty

The world changes; we need to track and predict it. For example, diabetes management, unlike a static problem such as vehicle diagnosis, requires reasoning about a state that evolves over time.

Basic idea: make a copy of the state and evidence variables for each time step.
- X_t = set of unobservable state variables at time t, e.g., BloodSugar_t, StomachContents_t, etc.
- E_t = set of observable evidence variables at time t, e.g., MeasuredBloodSugar_t, PulseRate_t, FoodEaten_t

This assumes discrete time; the step size depends on the problem.
Notation: X_{a:b} = X_a, X_{a+1}, ..., X_{b-1}, X_b
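To make the indexing concrete, here is a minimal Python sketch (not part of the slides) of how state and evidence variables might be "copied" per time step for the diabetes example. The variable names follow the slide; the domains, values, and the x_range helper are assumptions made for illustration.

```python
# A minimal sketch (assumptions, not from the slides) of per-time-step
# state and evidence variables for the diabetes example.

from dataclasses import dataclass

@dataclass
class StateVars:           # X_t: unobservable state variables at time t
    blood_sugar: float
    stomach_contents: float

@dataclass
class EvidenceVars:        # E_t: observable evidence variables at time t
    measured_blood_sugar: float
    pulse_rate: float
    food_eaten: bool

# One entry per time step t = 0, 1, 2, ... (values are made up)
X = [StateVars(5.2, 0.8), StateVars(6.1, 0.5), StateVars(5.7, 0.2)]
E = [EvidenceVars(5.0, 72, True), EvidenceVars(6.3, 75, False),
     EvidenceVars(5.5, 70, False)]

def x_range(a, b):
    """X_{a:b} in the slide's notation: X_a through X_b, inclusive."""
    return X[a:b + 1]

print(x_range(0, 1))       # X_{0:1} = X_0, X_1
```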

Markov processes (Markov chains)

Construct a Bayes net from these variables: what are the parents?

Markov assumption: X_t depends on a bounded subset of X_{0:t-1}.
- First-order Markov process:  P(X_t | X_{0:t-1}) = P(X_t | X_{t-1})
- Second-order Markov process: P(X_t | X_{0:t-1}) = P(X_t | X_{t-2}, X_{t-1})

[The slide shows the first-order and second-order network structures as diagrams.]

Sensor Markov assumption: P(E_t | X_{0:t}, E_{0:t-1}) = P(E_t | X_t)

Stationary process: the transition model P(X_t | X_{t-1}) and the sensor model P(E_t | X_t) are fixed for all t.
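As a concrete illustration, here is a small Python sketch of a stationary first-order Markov process with a sensor model. The rain/umbrella domain and all probability values are assumptions chosen for illustration, not taken from this slide; the point is that the same transition and sensor tables are reused at every step, and that each sampled state depends only on the previous state while each observation depends only on the current state.

```python
# A minimal sketch (assumed two-state example) of a stationary first-order
# Markov process with a sensor model.

import random

# Transition model P(X_t | X_{t-1}); the same table is used at every t
transition = {
    "Rain":   {"Rain": 0.7, "NoRain": 0.3},
    "NoRain": {"Rain": 0.3, "NoRain": 0.7},
}

# Sensor model P(E_t | X_t); depends only on the current state
sensor = {
    "Rain":   {"Umbrella": 0.9, "NoUmbrella": 0.1},
    "NoRain": {"Umbrella": 0.2, "NoUmbrella": 0.8},
}

def sample(dist):
    """Draw one outcome from a {value: probability} dictionary."""
    r, total = random.random(), 0.0
    for value, p in dist.items():
        total += p
        if r <= total:
            return value
    return value  # guard against floating-point round-off

def simulate(x0, steps):
    """Sample a state/evidence trajectory from the first-order model."""
    xs, es = [x0], []
    for _ in range(steps):
        xs.append(sample(transition[xs[-1]]))  # X_t depends only on X_{t-1}
        es.append(sample(sensor[xs[-1]]))      # E_t depends only on X_t
    return xs, es

print(simulate("Rain", 5))
```

A second-order process would instead index the transition table by the last two states, (X_{t-2}, X_{t-1}).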