ECE 496 REPLICATOR DYNAMICS
Spring 2007

The replicator dynamics constitute the foundation of what I call the deterministic dynamical approach to evolutionary game theory (EGT). I'll attempt in what follows to present a succinct derivation of the equations, along with some thoughts about the strengths and weaknesses of the replicator-dynamics approach to EGT. Essentially, the approach entails modeling complicated, inherently random processes with deterministic nonlinear dynamical systems. Irrespective of whether the resulting models are faithful or effective, the well-developed machinery of dynamical-systems theory makes it possible to analyze them rigorously.

The starting point is a two-player finite symmetric game. Each player has the same action space A = {a_1, ..., a_n}. The payoff to a player playing a_i against an opponent playing a_j is u(a_i, a_j). Assume that these payoffs are all strictly positive. If \sigma = p_1 a_1 + p_2 a_2 + \cdots + p_n a_n is a mixed strategy for the game, let u(a_i, \sigma) denote the expected payoff to an a_i-player against a \sigma-player. That is,

\[ u(a_i, \sigma) = \sum_{j=1}^{n} p_j \, u(a_i, a_j). \]

We'll be considering large populations of agents that play the game with each other. Each agent is hardwired, or programmed, to play one of the actions in A every time it plays the game against another agent. The agents are not rational. You might as well think of the agent populations as multisets of actions from A rather than as populations of agents.

At each positive integer time t, let P(t) be the population of agents at time t. The population P(t+1) arises from P(t) as follows:

(i) Each agent in P(t) plays the game with an opponent chosen at random from P(t).

(ii) The agent clones a number of copies of itself proportional (with some proportionality constant K) to its payoff in the game; these copies become members of P(t+1).
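The random process described in steps (i) and (ii) can be sketched in code. The following is a minimal illustration, not part of the original notes: the payoff table U, the 2x2 game it encodes, and the use of round() to turn K times a payoff into an integer clone count are all assumptions made for the sketch (the notes only require proportional cloning with strictly positive payoffs).

```python
import random

# Hypothetical 2x2 payoff table (all payoffs strictly positive, as assumed above).
# Actions are indices 0..n-1; U[i][j] is the payoff to an i-player against a j-player.
U = [[3.0, 1.0],
     [2.0, 2.0]]

def expected_payoff(i, p, U):
    """u(a_i, sigma): expected payoff of action i against mixed strategy p."""
    return sum(p[j] * U[i][j] for j in range(len(p)))

def next_generation(pop, U, K=1.0, rng=random):
    """One generation of the process: each agent in pop plays a randomly
    chosen opponent (step (i)), then leaves round(K * payoff) clones in the
    next population (step (ii)); the old agents then disappear."""
    new_pop = []
    for a in pop:
        opponent = rng.choice(pop)               # step (i): random opponent
        payoff = U[a][opponent]
        new_pop.extend([a] * round(K * payoff))  # step (ii): proportional cloning
    return new_pop
```

Note that the population size can grow or shrink from one generation to the next, exactly as the procedure allows; nothing forces len(next_generation(pop, U)) to equal len(pop).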
(iii) After every agent in P(t) has completed (i) and (ii), the agents in P(t) disappear, and their clones from (ii) constitute P(t+1).

I refer to large populations of agents without specifying what "large" means. For now, think of "large" as meaning "large and finite." The procedure (i)-(iii) allows the size of the population to change over time. For each i, suppose P(t) contains N_i(t) agents programmed to play action...
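The preview breaks off here. For context, the standard derivation that this setup leads to runs as follows (a sketch, not necessarily the exact continuation of the original notes): write N(t) = \sum_i N_i(t) and let p_i(t) = N_i(t)/N(t) be the population shares, so that the vector p(t) plays the role of a mixed strategy. An a_i-agent's opponent is drawn at random from P(t), so its expected payoff is u(a_i, p(t)), and in expectation it leaves K u(a_i, p(t)) clones. Hence

\[ \mathbb{E}\left[ N_i(t+1) \right] = K \, N_i(t) \, u(a_i, p(t)), \]

and the expected share of a_i-players in the next generation is

\[ p_i(t+1) = \frac{N_i(t) \, u(a_i, p(t))}{\sum_{j=1}^{n} N_j(t) \, u(a_j, p(t))} = \frac{p_i(t) \, u(a_i, p(t))}{u(p(t), p(t))}, \]

where u(p, p) = \sum_i p_i \, u(a_i, p) is the mean payoff in the population. The proportionality constant K cancels, and the strict positivity of the payoffs keeps the denominator nonzero. This is the discrete-time replicator dynamic; replacing the random process by this deterministic update is exactly the modeling step the notes describe.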
Spring '07, DELCHAMPS
