ICML07-taylor - Cross-Domain Transfer for Reinforcement Learning

Cross-Domain Transfer for Reinforcement Learning

Matthew E. Taylor MTAYLOR@CS.UTEXAS.EDU
Peter Stone PSTONE@CS.UTEXAS.EDU
Department of Computer Sciences, The University of Texas at Austin

Abstract

A typical goal for transfer learning algorithms is to utilize knowledge gained in a source task to learn a target task faster. Recently introduced transfer methods in reinforcement learning settings have shown considerable promise, but they typically transfer between pairs of very similar tasks. This work introduces Rule Transfer, a transfer algorithm that first learns rules to summarize a source task policy and then leverages those rules to learn faster in a target task. This paper demonstrates that Rule Transfer can effectively speed up learning in Keepaway, a benchmark RL problem in the robot soccer domain, based on experience from source tasks in the gridworld domain. We empirically show, through the use of three distinct transfer metrics, that Rule Transfer is effective across these domains.

1. Introduction

Reinforcement learning (RL) methods excel at solving complex tasks with minimal feedback. Transfer learning, in an RL setting, typically attempts to decrease training time by learning a source task before learning the target task. While there have been a number of recent successes, most existing transfer methods focus on pairs of tasks that are closely related, such as different mazes where agents have the same sensors and actions available (Madden & Howley, 2004). Prior to this work, the most dissimilar source and target tasks we are aware of are pairs of tasks in a single domain with different reward structures, different actions, and/or different state variables (see, for instance, past transfer work in the robot soccer domain (Torrey et al., 2006)). A more difficult challenge is to transfer between different domains, where we informally define a domain to be a setting for a group of semantically similar tasks. Such cross-domain transfer has been a long-term goal of transfer learning because it could allow transfer between significantly less similar tasks. While previous transfer work has focused on reducing training time by transferring from a simple to complex task in a single domain, a (potentially) more powerful way of simplifying a task is to formulate it as an abstraction in a different domain. This work will focus on demonstrating that cross-domain transfer is feasible and effective, where the source task is selected from the relatively simple gridworld domain and the target task is the more complex RL benchmark task of 3 vs. 2 Keepaway in the robot soccer domain.

Appearing in Proceedings of the 24th International Conference on Machine Learning, Corvallis, OR, 2007. Copyright 2007 by the author(s)/owner(s).
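The two-step recipe the abstract describes (learn a source policy, summarize it as rules over features shared with the target, then use those rules to bias early target-task behavior) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the 1-D corridor gridworld, the "distance to goal" abstract feature, and the dictionary rule format are all assumptions introduced here for concreteness.

```python
import random

def learn_source_policy(episodes=300, steps=50):
    # Tabular Q-learning on a tiny 1-D corridor gridworld:
    # states 0..4, goal at state 4, actions 0 = left and 1 = right.
    q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
    for _ in range(episodes):
        s = 0
        for _ in range(steps):
            a = random.choice((0, 1))  # random behavior policy (Q-learning is off-policy)
            s2 = min(4, s + 1) if a == 1 else max(0, s - 1)
            r = 1.0 if s2 == 4 else 0.0
            q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
            s = s2
            if s == 4:  # episode ends at the goal
                break
    return q

def extract_rules(q):
    # Step 1: summarize the greedy source policy as if-then rules over
    # an abstract feature ("distance to goal") assumed to be meaningful
    # in both the source and target domains.
    rules = {}
    for s in range(5):
        dist = 4 - s
        rules[dist] = max((0, 1), key=lambda a: q[(s, a)])
    return rules

def rule_action(rules, dist_to_goal):
    # Step 2: in the target task, translate the target state into the
    # shared feature and let the matching rule suggest an action; a real
    # agent would fall back to its own exploration when no rule fires.
    if dist_to_goal in rules:
        return rules[dist_to_goal]
    return random.choice((0, 1))
```

In this sketch the extracted rules simply read "if distance to goal is d, move right," and the target learner consults them as an initial action-selection bias rather than as a fixed policy, which is the general spirit of using summarized source knowledge to speed up, not replace, target-task learning.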

