STOCHASTIC GAMES AND STATIONARY STRATEGIES

O.J. VRIEZE
Maastricht University
Maastricht, The Netherlands

1. Introduction

In this chapter we treat stationary strategies and the limiting average criterion. Generally, under the average reward criterion, players need history-dependent strategies in order to play nearly optimally or in equilibrium. A behavioral strategy may condition its choice of mixed action, at any given stage, on the entire history; its implementation is therefore often a huge task. A stationary strategy, by contrast, conditions its choice of mixed action at any given stage only on the present state. Since only as many decision rules as there are states need be remembered, a stationary strategy is preferable. In addition, optimal stationary strategies remain optimal in the variant of the game where the players observe at every stage only the current state. These observations make it useful to know in which situations optimal stationary strategies exist. This question is the main topic of this chapter.

Two-person zero-sum stochastic games with the limiting average criterion were introduced by Gillette [6]. He considered games with perfect information and irreducible stochastic games; for both classes both players possess average optimal stationary strategies. Blackwell and Ferguson [3] introduced the big match, which showed that for limiting average stochastic games the value need not exist within the class of stationary strategies; this result shows that history-dependent strategies are indispensable. Hoffman and Karp [7] considered irreducible stochastic games and gave an algorithm that yields optimal stationary strategies. Starting with the paper of Parthasarathy and Raghavan [9], several papers on special classes of stochastic games appeared during the eighties. These classes were defined by conditions on the reward and/or transition structure.
These classes were defined on the one hand to represent practical situations and on the other hand to single out classes of games for which the solution is relatively easy, i.e., expressible in terms of stationary strategies. In this spirit we mention the papers of Parthasarathy et al. [10] and Vrieze et al. [14]. Finally, we mention the paper of Filar et al. [4], which examines the possibilities of stationary strategies from the computational viewpoint.

The chapter is built up around a limit theorem that connects limits of discounted rewards to average rewards as the discount factor tends to 0. This limit theorem turns out to be quite powerful, since many theorems concerning stationary strategies can now easily be proved. Among them are results on irreducible stochastic games and existence questions concerning ( )easy states. We end the chapter with some considerations regarding the relation between Puiseux series and the existence of optimal stationary strategies for the limiting average criterion and the total reward criterion.
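The limit theorem itself is not reproduced in this preview. As a sketch of the kind of relation meant, one standard formulation of the discounted-to-average connection is the following; the notation (λ for the discount factor, σ and τ for the players' strategies, r_t for the stage reward) is assumed here and may differ from the chapter's own.

```latex
% Normalized \lambda-discounted value starting in state s. The
% normalization \lambda \sum_t (1-\lambda)^{t-1} keeps v_\lambda on the
% same scale as the per-stage rewards:
v_\lambda(s) \;=\; \sup_{\sigma}\,\inf_{\tau}\;
  \mathbb{E}_{\sigma,\tau}\!\left[\,\lambda \sum_{t=1}^{\infty}
  (1-\lambda)^{t-1}\, r_t \;\middle|\; s_1 = s \right].

% If v(s) denotes the limiting average value of the game starting in s,
% the limit theorem alluded to above takes the form
v(s) \;=\; \lim_{\lambda \downarrow 0} v_\lambda(s).
```

In this convention the discount factor λ tends to 0, matching the phrasing above; conventions with a discount factor β tending to 1 state the same relation as v(s) = lim_{β↑1} (1−β)·(unnormalized discounted value).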
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R. Srikant during the Spring '09 term at the University of Illinois, Urbana-Champaign.