16.410/413 Principles of Autonomy and Decision Making
Lecture 23: Markov Decision Processes, Policy Iteration
Emilio Frazzoli
Aeronautics and Astronautics, Massachusetts Institute of Technology
December 1, 2010

Assignments
Readings: Lecture notes; [AIMA] Ch. 17.1-3.

Searching over policies
Value iteration converges exponentially fast, but still only asymptotically. Recall how the best policy is recovered from the current estimate of the value function:

    π_i(s) = arg max_a E[ R(s, a, s′) + γ V_i(s′) ],  ∀ s ∈ S.

In order to find the optimal policy, it should not be necessary to compute the optimal value function exactly. Since there are only finitely many policies in a finite-state, finite-action MDP, it is reasonable to expect that a search over policies should terminate in a finite number of steps.

Policy evaluation
Assume we have a policy π : S → A that assigns an action to each state, i.e., action π(s) is chosen each time the system is in state s. Once the action taken at each state is fixed, the MDP is turned into a Markov chain (with rewards), and one can compute the expected utility collected over time under that policy. In other words, one can evaluate how well a certain policy does by computing the value function it induces.

Policy evaluation example: naive method
Same planning problem as in the previous lecture, in a smaller (4x4) world. Simple policy π: always go right, unless at the goal (or inside an obstacle). [Figure: 4x4 grid of right-pointing arrows, with a table of sample paths, their probabilities, and their utilities: →, prob. 0.75; ↑, prob. 0.08; ←, prob. 0.08; ↓→, prob. 0.06, utility 8.1; ...] The expected utility (value function) starting from the top-left corner, cell (2,2), is obtained by summing each path's probability times its utility; for example, the path ↓→ contributes 0.06 · 8.1 ≈ 0.5 to V^π(2,2).
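The greedy-policy recovery rule above can be sketched in code. This is a minimal NumPy illustration, not from the lecture; the (S, A, S') layout of the transition and reward tensors and all function names are assumptions for the example.

```python
import numpy as np

def greedy_policy(T, R, V, gamma=0.95):
    """Recover pi_i(s) = arg max_a E[R(s,a,s') + gamma * V_i(s')].

    Assumed (hypothetical) layout:
      T[s, a, s2] = P(s2 | s, a), R[s, a, s2] = one-step reward,
      V = current value-function estimate, shape (S,).
    """
    # Q[s, a] = sum_{s'} T(s,a,s') * (R(s,a,s') + gamma * V(s'))
    Q = np.einsum('sax,sax->sa', T, R + gamma * V[None, None, :])
    # The greedy policy picks, in each state, the action maximizing Q.
    return Q.argmax(axis=1)
```

On a tiny two-state example where action 1 in state 0 yields reward 1 and action 0 yields nothing, the greedy policy picks action 1 in state 0 even with V ≡ 0, since the one-step reward already breaks the tie.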
Policy evaluation
Recalling the MDP properties, one can write the value function at a state under a given policy as the expected reward collected at the first step plus the expected discounted value at the next state:

    V^π(s) = E[ R(s, π(s), s′) + γ V^π(s′) ] = Σ_{s′ ∈ S} T(s, π(s), s′) [ R(s, π(s), s′) + γ V^π(s′) ].
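Since the fixed policy reduces the MDP to a Markov chain with rewards, the Bellman equation above is a linear system in V^π and can be solved directly. A minimal sketch, not from the lecture, again assuming (S, A, S')-shaped tensors and hypothetical function names:

```python
import numpy as np

def evaluate_policy(T, R, pi, gamma=0.95):
    """Evaluate a fixed policy by solving the linear Bellman system.

    Under policy pi the chain has transitions T_pi[s, s'] = T(s, pi(s), s')
    and expected one-step rewards
    r_pi[s] = sum_{s'} T(s, pi(s), s') * R(s, pi(s), s'),
    so V^pi solves (I - gamma * T_pi) V^pi = r_pi.

    Assumed layout: T, R are (S, A, S) arrays; pi is an (S,) array of
    action indices.
    """
    S = T.shape[0]
    idx = np.arange(S)
    T_pi = T[idx, pi]                        # (S, S) chain transition matrix
    r_pi = (T_pi * R[idx, pi]).sum(axis=1)   # (S,) expected one-step reward
    return np.linalg.solve(np.eye(S) - gamma * T_pi, r_pi)
```

As a sanity check, a single absorbing state with reward 1 per step and γ = 0.5 has value 1 / (1 − γ) = 2, which the linear solve reproduces.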
This note was uploaded on 12/26/2011 for the course 16.410, taught by Prof. Brian Williams during the Fall '10 term at MIT.