16.410/413 Principles of Autonomy and Decision Making
Lecture 22: Markov Decision Processes I

Emilio Frazzoli
Aeronautics and Astronautics, Massachusetts Institute of Technology
November 29, 2010

Assignments

Readings: Lecture notes; [AIMA] Ch. 17.1-3.

Outline

1. Markov Decision Processes

From deterministic to stochastic planning problems

A basic planning model for deterministic systems (e.g., graph/tree search algorithms) is the following.

Planning Model (Transition system + goal). A (discrete, deterministic) feasible planning model is defined by:
- A countable set of states S.
- A countable set of actions A.
- A transition relation → ⊆ S × A × S.
- An initial state s₁ ∈ S.
- A set of goal states S_G ⊂ S.

We considered the case in which the transition relation is purely deterministic: if (s, a, s′) are in relation, i.e., (s, a, s′) ∈ →, or, more concisely, s −a→ s′, then taking action a from state s will always take the state to s′.

Can we extend this model to include (probabilistic) uncertainty in the transitions?

Markov Decision Process

Instead of a (deterministic) transition relation, let us define transition probabilities; also, let us introduce a reward (or cost) structure.

Markov Decision Process (Stochastic transition system + reward). A Markov Decision Process (MDP) is defined by:
- A countable set of states S.
- A countable set of actions A.
- A transition probability function T : S × A × S → ℝ₊.
- An initial state s₁ ∈ S.
- A reward function R : S × A × S → ℝ₊.

In other words: if action a is applied from state s, a transition to state s′ will occur with probability T(s, a, s′). […]
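To make the two definitions above concrete, here is a minimal Python sketch of the deterministic planning model (transition system + goal). It is not from the lecture; the toy state/action labels and the helper names (transitions, successor, plan) are illustrative assumptions, using breadth-first search as one representative search algorithm.

```python
from collections import deque

# States and actions are just hashable labels (illustrative choice).
State = str
Action = str

# The transition relation → ⊆ S × A × S, stored as a dict mapping
# (state, action) to the unique successor state (deterministic case).
transitions: dict[tuple[State, Action], State] = {
    ("s1", "a"): "s2",
    ("s1", "b"): "s3",
    ("s2", "a"): "s3",
}

initial_state: State = "s1"
goal_states: set[State] = {"s3"}

def successor(s: State, a: Action) -> State | None:
    """Applying action a in state s always yields the same successor."""
    return transitions.get((s, a))

def plan(start: State) -> list[Action] | None:
    """Breadth-first search over the transition system for a feasible plan."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        s, actions = frontier.popleft()
        if s in goal_states:
            return actions
        for (s0, a), s1 in transitions.items():
            if s0 == s and s1 not in visited:
                visited.add(s1)
                frontier.append((s1, actions + [a]))
    return None  # no feasible plan reaches a goal state

print(plan(initial_state))  # e.g. ['b'], reaching goal state s3
```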
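And a matching sketch of the MDP: the deterministic successor is replaced by the transition probability function T(s, a, s′), and a reward R(s, a, s′) is collected on each transition. The toy probabilities, rewards, and the step helper are illustrative assumptions, not from the lecture.

```python
import random

# T[(s, a)] maps each possible successor s' to T(s, a, s');
# for every (s, a), the probabilities over s' must sum to 1.
T = {
    ("s1", "a"): {"s2": 0.8, "s1": 0.2},  # a usually advances, may stall
    ("s1", "b"): {"s3": 0.5, "s1": 0.5},  # b is a risky shortcut
    ("s2", "a"): {"s3": 1.0},
}

# R[(s, a, s')] is the reward collected on that transition.
R = {
    ("s1", "a", "s2"): 1.0, ("s1", "a", "s1"): 0.0,
    ("s1", "b", "s3"): 5.0, ("s1", "b", "s1"): 0.0,
    ("s2", "a", "s3"): 2.0,
}

def step(s, a):
    """Sample s' with probability T(s, a, s') and return (s', reward)."""
    successors = T[(s, a)]
    s_next = random.choices(list(successors), weights=successors.values())[0]
    return s_next, R[(s, a, s_next)]

# Simulate a short run from the initial state s1: unlike the deterministic
# model, repeating the same action sequence can visit different states.
s, total = "s1", 0.0
for a in ["a", "a"]:
    if (s, a) not in T:
        break
    s, r = step(s, a)
    total += r
print(s, total)
```

The key structural change is that T[(s, a)] is a distribution over successors rather than a single state, which is exactly what the lecture's question about adding probabilistic uncertainty asks for.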