# ECE 496 Replicator Dynamics (Spring 2007)
The replicator dynamics constitute the foundation of what I call the deterministic dynamical approach to evolutionary game theory (EGT). I'll attempt in what follows to present a succinct derivation of the equations along with some thoughts about the strengths and weaknesses of the replicator-dynamics approach to EGT. Essentially, the approach entails modeling complicated, inherently random processes with deterministic nonlinear dynamical systems. Irrespective of whether the resulting models are faithful or effective, the well-developed machinery of dynamical-systems theory makes it possible to analyze them rigorously.

The starting point is a two-player finite symmetric game. Each player has the same action space $A = \{a_1, \dots, a_n\}$. The payoff to a player playing $a_i$ against an opponent playing $a_j$ is $u(a_i, a_j)$. Assume that these payoffs are all strictly positive. If $\sigma = p_1 a_1 + p_2 a_2 + \cdots + p_n a_n$ is a mixed strategy for the game, let $u(a_i, \sigma)$ denote the expected payoff to an $a_i$-player against a $\sigma$-player. That is,

$$u(a_i, \sigma) = \sum_{j=1}^{n} p_j \, u(a_i, a_j).$$

We'll be considering large populations of agents that play the game with each other. Each agent is hard-wired, or programmed, to play one of the actions in $A$ every time it plays the game against another agent. The agents are not rational. You might as well think of the agent populations as multisets of actions from $A$ rather than as populations of agents.

At each positive integer time $t$, let $P(t)$ be the population of agents at time $t$. The population $P(t+1)$ arises from $P(t)$ as follows:

(i) Each agent in $P(t)$ plays the game with an opponent chosen at random from $P(t)$.

(ii) The agent clones a number of copies of itself proportional (with some proportionality constant $K$) to its payoff in the game; these copies become members of $P(t+1)$.
(iii) After every agent in $P(t)$ has completed (i) and (ii), the agents in $P(t)$ disappear, and their clones from (ii) constitute $P(t+1)$.

I refer to "large" populations of agents without specifying what "large" means. For now, think of "large" as meaning "large and finite." The procedure (i)-(iii) allows the size of the population to change over time. For each $i$, suppose $P(t)$ contains $N_i(t)$ agents programmed to play action...
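The procedure (i)-(iii) and the expected-payoff formula above can be sketched in a few lines of Python. The payoff matrix below is hypothetical (chosen only so that all payoffs are strictly positive, as the notes require), and rounding the clone count to the nearest integer is my own discretization choice, since "a number of copies proportional to the payoff" must be made an integer somehow in a finite simulation:

```python
import random

# Hypothetical 2-action symmetric game; U[i][j] = payoff to an
# a_i-player against an a_j-player. All entries strictly positive.
U = [[1.0, 3.0],
     [2.0, 1.0]]
ACTIONS = range(len(U))

K = 1.0  # proportionality constant for cloning

def expected_payoff(i, p):
    """u(a_i, sigma) for the mixed strategy sigma = (p_1, ..., p_n)."""
    return sum(p[j] * U[i][j] for j in ACTIONS)

def step(population, rng):
    """One generation of the procedure (i)-(iii).

    population is a list of action indices (the population viewed as a
    multiset of actions). Each agent plays one randomly matched opponent
    and leaves round(K * payoff) clones; the clones form P(t+1).
    """
    next_pop = []
    for a in population:
        opponent = rng.choice(population)        # (i) random matching
        clones = round(K * U[a][opponent])       # (ii) payoff-proportional cloning
        next_pop.extend([a] * clones)            # (iii) clones replace the parent
    return next_pop
```

Running `step` repeatedly from some initial multiset of actions lets the population size drift over time, exactly as noted above; the replicator equations arise as the deterministic large-population description of this random process.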