Increasing Replayability with Deliberative and Reactive Planning

Michael van Lent, Mark O. Riedl, Paul Carpenter, Ryan McAlinden, Paul Brobst
Institute for Creative Technologies, University of Southern California
13274 Fiji Way, Los Angeles, CA 90292
{vanlent, riedl, carpenter, mcalinden, brobst}

Abstract

Opponent behavior in today's computer games is often the result of a static set of Artificial Intelligence (AI) behaviors or a fixed AI script. While this ensures that the behavior is reasonably intelligent, it also makes the behavior very predictable, which can reduce the replayability of entertainment-based games and the educational value of training-based games. This paper proposes a move away from static, scripted AI through a combination of deliberative and reactive planning. The deliberative planning (or Strategic AI) system creates a novel strategy for the AI opponent before each gaming session. The reactive planning (or Tactical AI) system executes this strategy in real time and adapts to the player and the environment. These two systems, in conjunction with a future automated director module, form the Adaptive Opponent Architecture. This paper describes the architecture and the details of the deliberative and reactive planning components.

Introduction

In most of today's computer and video games the behavior of AI opponents is controlled by a static script or some other form of fixed behavior encoding. This well-controlled but limited approach to opponent behavior has both advantages and disadvantages. Because the behavior is tightly controlled, it is easier for developers to guarantee that the AI opponents will behave predictably and therefore ensure the intended gaming experience. Scripted or fixed AI techniques can also be less computationally and memory intensive, since a range of strategies does not need to be stored or considered during execution.
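The deliberative/reactive split summarized in the abstract can be sketched roughly as follows. This is a minimal illustration of the general idea, not the paper's actual implementation: the class names, strategy format, and game-state fields are all hypothetical.

```python
# Hypothetical sketch of a deliberative/reactive opponent architecture.
# The deliberative layer commits to a full strategy before the session;
# the reactive layer executes it step by step, adapting to the game state.
import random


class StrategicPlanner:
    """Deliberative layer: selects a complete strategy before play begins."""

    def __init__(self, strategies, rng=None):
        self.strategies = strategies
        self.rng = rng or random.Random()

    def plan(self):
        # Choosing a different high-level strategy each session is what
        # keeps the opponent's behavior from becoming predictable.
        return self.rng.choice(self.strategies)


class TacticalExecutor:
    """Reactive layer: carries out the chosen strategy in real time."""

    def __init__(self, strategy):
        self.steps = list(strategy["steps"])

    def next_action(self, game_state):
        # With the plan exhausted, fall back to a default behavior.
        if not self.steps:
            return "hold_position"
        # React to the player: abandon the current step if outnumbered.
        if game_state.get("enemy_strength", 0) > game_state.get("own_strength", 0):
            return "withdraw"
        return self.steps.pop(0)


# Illustrative strategies (hypothetical format).
strategies = [
    {"name": "flank_left", "steps": ["move_west", "advance", "attack"]},
    {"name": "frontal_assault", "steps": ["advance", "attack"]},
]

planner = StrategicPlanner(strategies, rng=random.Random(0))
strategy = planner.plan()            # deliberative: once per session
executor = TacticalExecutor(strategy)
action = executor.next_action({"own_strength": 5, "enemy_strength": 2})
```

The key design point is the division of labor: the expensive search over strategies happens offline, before the session, while the per-frame decision loop only consults the precomputed plan and the current game state.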
However, the limited nature of fixed AI opponents can hurt the long-term replayability of games. The player's early experiences, while the AI's behaviors are still novel, are enjoyable. But as the player learns to predict and counter the AI opponent's single approach, the experience starts to feel stale. After a fairly small number of game sessions, the experience shifts from exploring how to counter the AI to simply applying the same tried-and-true counter-strategy yet another time.

As games and game technology are used increasingly for non-entertainment applications, the current static techniques for generating opponent behavior show additional limitations. For example, Full Spectrum Command (FSC) (van Lent, Fisher, and Mancuso 2004) is a game-based training aid developed to help U.S. Army and Singapore Armed Forces company commanders learn the cognitive skills involved in commanding troops. In each

Copyright © 2005, American Association for Artificial Intelligence. All rights reserved.