Foundations of Artificial Intelligence: Reinforcement Learning
CS472 – Fall 2007, Thorsten Joachims

Reinforcement Learning Problem
- Make a sequence of decisions (a policy) to reach a goal / maximize utility.

Search Problems So Far
- Known environment: the state space, the consequences of actions, and the probability distribution of any non-deterministic elements.
- Known utility / cost function.
- First compute the sequence of decisions, then execute it (potentially re-computing along the way).

Real-World Problems
- The environment is unknown a priori and needs to be explored.
- The utility function is unknown – only example values are available for some states.
- No feedback on individual actions: the agent must learn to act and to assign blame/credit to individual actions.
- The agent must react quickly to unforeseen events (i.e., it must already have learned what to do).

Reinforcement Learning Issues
- Does the agent know the full environment a priori, or is the environment unknown?
- Is the agent passive (it only watches) or active (it explores)?
- Is feedback (i.e., reward) given only in terminal states, or is some feedback given in every state?
- How do we measure and estimate the utility of each action?
- Is the environment fully observable or only partially observable?
- Does the agent have a model of the environment and of the effects of its actions, or not?
→ Reinforcement learning will address these issues.

Markov Decision Process
Representation of the environment:
- a finite set of states S
- a set of actions A for each state s ∈ S
Process: at each discrete time step the agent observes state s_t ∈ S and then chooses action a_t ∈ A. The environment then gives the agent an immediate reward r_t and changes state to s_{t+1} (possibly probabilistically).

Markov Decision Process: Model
- Initial state: s_0
- Transition function: T(s, a, s') — the probability of moving from state s to state s' when executing action a.
- Reward function: R(s) — the real-valued reward the agent receives for entering state s.
Assumptions:
- Markov property: T(s, a, s') and R(s) depend only on the current state s, not on any states visited earlier.
- Extension: the function R may be non-deterministic as well.

Example (4x3 grid world)
[Figure: a 4x3 grid with START in the lower-left cell, a +1 terminal state in the upper-right cell, and a -1 terminal state directly below it.]
- Reward: the terminal states give reward +1 / -1 and the agent gets "stuck" there; every other state has a reward of -0.04.
- The agent moves in the desired direction with probability 0.8, slips 90 degrees to the left with probability 0.1, and slips 90 degrees to the right with probability 0.1.
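To make the MDP ingredients concrete, here is a minimal sketch of this grid world in Python. The names (STATES, T, R, etc.) are illustrative rather than from the slides, and the wall at cell (2,2) is an assumption taken from the standard textbook version of this example, since the slide figure is not fully legible in this extraction.

```python
# Minimal sketch of the 4x3 grid-world MDP from the example slide.
# States are (column, row) pairs, columns 1..4 and rows 1..3.
# Assumption: cell (2, 2) is a wall, as in the standard textbook grid.
# Terminal states: (4, 3) with reward +1 and (4, 2) with reward -1;
# every other state has reward -0.04.  An action moves in the intended
# direction with prob. 0.8 and slips 90 degrees left/right with prob. 0.1 each.

ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
WALL = {(2, 2)}
TERMINAL = {(4, 3): +1.0, (4, 2): -1.0}
STATES = [(c, r) for c in range(1, 5) for r in range(1, 4) if (c, r) not in WALL]

def R(s):
    """Reward the agent receives for entering state s."""
    return TERMINAL.get(s, -0.04)

def _move(s, delta):
    """Deterministic target cell; bumping into a wall or the edge stays put."""
    c, r = s[0] + delta[0], s[1] + delta[1]
    return (c, r) if (c, r) in STATES else s

def _left(delta):   # intended direction rotated 90 degrees counter-clockwise
    return (-delta[1], delta[0])

def _right(delta):  # intended direction rotated 90 degrees clockwise
    return (delta[1], -delta[0])

def T(s, a, s_next):
    """Probability of reaching s_next when executing action a in state s."""
    if s in TERMINAL:                 # terminal states are absorbing ("stuck")
        return 1.0 if s_next == s else 0.0
    delta = ACTIONS[a]
    p = 0.0
    for d, prob in ((delta, 0.8), (_left(delta), 0.1), (_right(delta), 0.1)):
        if _move(s, d) == s_next:
            p += prob
    return p
```

For instance, T((1, 1), "up", (1, 2)) is 0.8, while a left slip from (1, 1) runs off the grid and keeps the agent in place, so T((1, 1), "up", (1, 1)) is 0.1.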
Policy
- Definition: a policy π describes which action the agent selects in each state: a = π(s).
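A policy for a finite MDP can be written down as a simple lookup table from states to actions. The sketch below reuses STATES, TERMINAL, T, and R from the grid-world sketch above; the particular policy chosen here is illustrative, not the optimal policy from the lecture.

```python
import random

# A policy pi as a lookup table: a = pi(s) for every non-terminal state.
# Illustrative choice: go up until the top row, then go right toward the +1 cell.
pi = {s: "right" if s[1] == 3 else "up" for s in STATES if s not in TERMINAL}

def sample_next_state(s, a):
    """Draw a successor state according to the transition model T(s, a, .)."""
    states = list(STATES)
    weights = [T(s, a, s2) for s2 in states]
    return random.choices(states, weights=weights)[0]

def rollout(s, max_steps=100):
    """Follow pi from state s, summing the rewards of the states entered."""
    total = 0.0
    for _ in range(max_steps):
        if s in TERMINAL:
            break
        s = sample_next_state(s, pi[s])
        total += R(s)
    return total
```

The total reward accumulated along such a rollout is one simple, undiscounted way to score how good it is to follow π from a given start state, which is closely related to the notion of utility the next slide begins to introduce.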
Utility