Evolutionary dynamics and potential games in noncooperative routing

Eitan Altman1, Yezekael Hayel2, and Hisao Kameda3

1 INRIA Sophia Antipolis, eitan.altman@sophia.inria.fr
2 LIA/University of Avignon, yezekael.hayel@univ-avignon.fr
3 University of Tsukuba, kamedans.tsukuba.ac.jp

Abstract. We consider a routing problem in a network with a general topology. Considering a link cost which is linear in the link flow, we obtain a unique Nash equilibrium and show that the noncooperative game can be expressed as a potential game. We establish various convergence and stability properties of the equilibrium related to the routing problem being a potential game. We then consider the routing problem in the framework of a population game and study the evolution of the size of the populations when the replicator dynamics is used.

Key words: potential game, replicator dynamics, global optimization.

1 Introduction

Non-cooperative routing games have long been studied in the context of road traffic in the framework of an infinite number of players (drivers), where the solution
concept is the Wardrop equilibrium [18]. In that context they can be modeled as potential games, which allows one to obtain a unique equilibrium (in terms of link flows) as the solution of an equivalent (single-player) optimization problem [4]. In the context of computer networks, noncooperative routing was introduced and studied in [10] in a context of finitely many users, each of which has to decide how to split its flow among the various links between sources and destinations. A user may correspond to a service provider that controls the routes taken by the traffic of its subscribers. This type of formulation, already studied in the context of road traffic ([6]), turns out to be much more difficult and does not in general enjoy the structure of a potential game. In particular, counterexamples are given in [10] for the non-uniqueness of the equilibrium. It is therefore of interest to identify conditions on the cost structure that allow one to obtain a potential game in the setting of [10]. In this paper we show that the case of linear link costs provides such conditions. We then exploit the potential game structure to obtain convergence to equilibrium of schemes based on best responses. This extends to general topologies some of the convergence results obtained in [2] for the case restricted to parallel links. We further exploit the potential game structure to establish the convergence of the replicator dynamics, which
has an evolutionary interpretation.

Related work. Potential games with a finite number of players have been defined in [9]. They have recently been used to study networking problems like energy control in wireless networks [15, 7, 16] or to study interference avoidance in [8]. They have also been used for studying evolutionary dynamics of congestion in a transportation network model in [13] in the context of the Wardrop equilibrium (infinite population). The use of potentials in finite networking games goes back to Rosenthal [11, 12]. In his framework, however, a user can send one or zero units on each link; the flow is not splittable as in our setting. By allowing a user to send more than one unit of flow over a link in the framework of Rosenthal, the routing problem is no longer a potential game.

The structure of the paper is as follows. We begin by introducing in the next
section the model. The potential game structure is then established in Section 3. Sections 4 and 5 then study the convergence of the best response and of the replicator dynamics, respectively.

2 Model

We study noncooperative routing problems in a general network topology. The
network is modeled as a directed graph (V, L, f), where V is the set of nodes, L is the set of directed arcs, and f = (f_l, l ∈ L), where f_l : R → R is the cost function of link l, which gives the cost per unit of traffic on link l. We consider the following linear cost:

$$f_l(\lambda_l) = a_l \lambda_l + b_l,$$

where λ_l is the total load on that link, and a_l and b_l are positive parameters. We consider N users, numbered 1, 2, ..., N, sharing the network. Each user i sends a fixed amount of traffic λ_i from a source to a destination. For i = 1, ..., N and l = 1, ..., M, denote by λ_l^i user i's rate on link l. We define, for i = 1, 2, ..., N and l = 1, 2, ..., M,

$$C_l^i(\lambda) = \lambda_l^i \left( a_l \lambda_l + b_l \right). \qquad (1)$$

This is the total cost on link l paid by user i. The cost perceived by each user is the sum of the costs perceived on each link carrying his traffic. Each user i's cost is thus defined by

$$C^i(\lambda) = \sum_{l=1}^{M} \lambda_l^i \left( a_l \lambda_l + b_l \right) = \sum_{l=1}^{M} C_l^i(\lambda). \qquad (2)$$
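To make the cost structure concrete, here is a minimal sketch (not from the paper; the slopes a_l, constants b_l, and the rate split are hypothetical) that evaluates equations (1) and (2) for two users sharing two parallel links:

```python
# Sketch with assumed parameters: evaluating the link costs (1) and the
# user costs (2).  lam[i][l] is user i's rate lambda_l^i on link l.
a = [1.0, 2.0]        # a_l: cost slopes (hypothetical values)
b = [0.5, 0.5]        # b_l: constant costs (hypothetical values)
lam = [[0.6, 0.4],    # user 0 splits 1.0 unit of traffic over 2 links
       [0.3, 0.7]]    # user 1 splits 1.0 unit of traffic over 2 links
N, M = len(lam), len(a)

total = [sum(lam[i][l] for i in range(N)) for l in range(M)]  # lambda_l

def link_cost(i, l):
    # C_l^i(lambda) = lambda_l^i * (a_l * lambda_l + b_l)  -- equation (1)
    return lam[i][l] * (a[l] * total[l] + b[l])

def user_cost(i):
    # C^i(lambda) = sum of C_l^i over all links            -- equation (2)
    return sum(link_cost(i, l) for l in range(M))

for i in range(N):
    print(f"user {i}: cost {user_cost(i):.3f}")
```

Note that each user's cost depends on the total load total[l], i.e., on the other user's choices, which is precisely what makes this a game.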
Each user has to determine the way his traffic is split in order to minimize his cost. We have a noncooperative game with a finite number of players and an infinite strategy space. The flows of each user i have to satisfy the following feasibility conditions: positivity,

$$\forall l \in L, \quad \lambda_l^i \geq 0,$$

and conservation, for all v ∈ V:

$$r^i(v) + \sum_{l \in In(v)} \lambda_l^i = \sum_{l \in Out(v)} \lambda_l^i, \qquad (3)$$

where

$$r^i(v) = \begin{cases} \lambda_i & \text{if } v \text{ is the source of user } i, \\ -\lambda_i & \text{if } v \text{ is the destination of user } i, \\ 0 & \text{otherwise}, \end{cases}$$

and In(v) (resp. Out(v)) is the set of links entering (resp. leaving) node v. The multi-strategy (or strategy vector) λ is written as λ = (λ^i, λ^{-i}), where λ^{-i} is the vector of the rates of all the other users on each link.
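The feasibility conditions (3) can be checked mechanically. The sketch below uses an assumed four-node network and a hypothetical split of one unit of traffic for a single user; the names links, lam, and feasible are illustrative, not from the paper:

```python
# Sketch (assumed network): checking positivity and flow conservation (3)
# for one user.  Links are (tail, head) pairs; lam[l] is the rate on link l.
links = [(0, 1), (0, 2), (1, 3), (2, 3)]   # hypothetical 4-node topology
lam = [0.7, 0.3, 0.7, 0.3]                 # candidate routing of 1.0 unit
source, dest, demand = 0, 3, 1.0

def r(v):
    # r^i(v): +demand at the source, -demand at the destination, 0 elsewhere
    return demand if v == source else -demand if v == dest else 0.0

def feasible(eps=1e-9):
    if any(rate < -eps for rate in lam):                   # positivity
        return False
    nodes = {u for link in links for u in link}
    for v in nodes:                                        # conservation
        inflow = sum(lam[l] for l, (_, head) in enumerate(links) if head == v)
        outflow = sum(lam[l] for l, (tail, _) in enumerate(links) if tail == v)
        if abs(r(v) + inflow - outflow) > eps:
            return False
    return True

print(feasible())
```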
We study in this paper Nash equilibria, i.e., multi-strategies λ satisfying

$$C^i(\lambda^i, \lambda^{-i}) \leq C^i(\gamma^i, \lambda^{-i})$$

for all players i and all feasible strategies γ^i of user i.

3 Establishing the potential game structure

Introduce the following function:

$$P(\lambda) = \sum_{l=1}^{M} \left\{ \frac{a_l}{2} \left[ \sum_{i=1}^{N} (\lambda_l^i)^2 + \Big( \sum_{i=1}^{N} \lambda_l^i \Big)^2 \right] + b_l \lambda_l \right\}. \qquad (4)$$

It is a potential of the game [9] if it satisfies, for each player i, each multi-strategy λ, and each strategy γ^i of player i:

$$P(\gamma^i, \lambda^{-i}) - P(\lambda^i, \lambda^{-i}) = C^i(\gamma^i, \lambda^{-i}) - C^i(\lambda^i, \lambda^{-i}). \qquad (5)$$

Considering the expression of the cost function (2), we have the following.
*y and each strategy Ai for player i: PW, A’2) * PW, A71) : 01W? A’2) * 01W) A71) (5) Considering the expression of the cost function (2)7 we have for all i E {1, . . . , L} Proposition 1. The function P deﬁned by equation (4) is a potential function
Mnite Eager game. Proof We deﬁne the following function for l : 1, 2, . . . ,]\/1. N N
am : % {2W + (3252} mm. <6) 1'21 i:1 IM A]
PW=ZMMMWQPZWW. m
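The defining property (5) is easy to verify numerically. In the sketch below (hypothetical randomly drawn parameters), user_cost implements (2) and potential implements (4); the check confirms that a unilateral deviation by user 0 changes P by exactly the change in that user's own cost:

```python
# Sketch with randomly drawn (hypothetical) parameters: verifying the
# potential property (5) under a unilateral deviation.
import random

random.seed(0)
N, M = 2, 3                                  # two users, three links
a = [random.uniform(0.5, 2.0) for _ in range(M)]
b = [random.uniform(0.1, 1.0) for _ in range(M)]

def user_cost(i, lam):
    # C^i(lambda), equation (2)
    tot = [sum(lam[j][l] for j in range(N)) for l in range(M)]
    return sum(lam[i][l] * (a[l] * tot[l] + b[l]) for l in range(M))

def potential(lam):
    # P(lambda), equation (4)
    P = 0.0
    for l in range(M):
        s = sum(lam[j][l] for j in range(N))
        sq = sum(lam[j][l] ** 2 for j in range(N))
        P += 0.5 * a[l] * (sq + s * s) + b[l] * s
    return P

lam = [[random.random() for _ in range(M)] for _ in range(N)]
gamma = [random.random() for _ in range(M)]  # a deviation of user 0
dev = [gamma, lam[1]]

lhs = potential(dev) - potential(lam)        # change in the potential
rhs = user_cost(0, dev) - user_cost(0, lam)  # change in user 0's cost
print(abs(lhs - rhs) < 1e-9)
```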
Considering the function P defined in equation (4), we have, for each link l:

$$g_l(\lambda^i, \lambda^{-i}) - g_l(\gamma^i, \lambda^{-i}) = \frac{a_l}{2} \left[ (\lambda_l^i)^2 - (\gamma_l^i)^2 + \Big( \sum_{j \neq i} \lambda_l^j + \lambda_l^i \Big)^2 - \Big( \sum_{j \neq i} \lambda_l^j + \gamma_l^i \Big)^2 \right] + b_l (\lambda_l^i - \gamma_l^i)$$

$$= a_l \big( (\lambda_l^i)^2 - (\gamma_l^i)^2 \big) + a_l \sum_{j \neq i} \lambda_l^j (\lambda_l^i - \gamma_l^i) + b_l (\lambda_l^i - \gamma_l^i) = C_l^i(\lambda^i, \lambda^{-i}) - C_l^i(\gamma^i, \lambda^{-i}).$$

Then, by summing the left-hand side and the right-hand side of the above equation over l = 1, 2, ..., M and using equation (7), we obtain equation (5); that is, P is a potential, the difference of costs being established link by link from strategy λ^i to γ^i. Then P is a potential function. ∎

The function P is strictly convex in λ over the compact set defined by constraints (3).

Proposition 2. The Nash equilibrium of the game with N players is the minimum of the following constrained optimization problem: min_λ P(λ) subject to constraints (3).

Proof. We construct the following Lagrangian function:

$$L(\lambda, \alpha) = P(\lambda) - \sum_{i=1}^{N} \sum_{v \in V} \alpha_v^i \left( r^i(v) + \sum_{l \in In(v)} \lambda_l^i - \sum_{l \in Out(v)} \lambda_l^i \right).$$

The Nash equilibrium of the noncooperative game coincides with the minimizer of the Lagrangian; we denote this solution by λ* and α*. As the potential function P is strictly convex, the minimum is unique and is λ*. ∎

Once an equilibrium exists, it follows that it is unique, because it corresponds to the minimum of the potential function. Closed-form expressions for this equilibrium are known, however, only for very special topologies such as parallel links [1].

4 Best response dynamics

Having shown that the game is a potential game, we proceed to obtain convergence.

Definition 1. Asynchronous Best-Response Update (ABBU): Consider
some strictly increasing time sequence T_n. An ABBU algorithm is an update rule in which, at each T_n, one user updates its strategy to the best response against the current strategies of the others, and the set of times at which each user i updates its strategy is infinite.

The next result follows from [9], exploiting the fact that the game has a potential.

Theorem 1. The ABBU dynamics converges to the unique NE.

5 Replicator dynamics

In this section we adopt the perspective of a population game in which the replicator dynamics is used to describe the evolution of strategies. We assume that
cator dynamics is used to describe the evolution of strategies. We assume that
the ﬂow A”: generated by population i is constant over time. A user i with rate
A, can be considered as a population i with a global mass A, oi inﬁnitesimal
users. The proportion of population i who use strategy m (i.e. server or link in) is pf” : . The replicator dynamics was ﬁrst introduced in 17 in the context of discrete
strategy space. In 5 , the author presents this evolutionary dynamics for continu—
ous strategy spaces. This dynamics comes from the basic tenet of the Darwinism
related to a model of evolution through imitation, where the percentage growth
rate of each strategy is given by the difference between that strategy’s ﬁtness
and the average ﬁtness of the population. Considering this dynamics, strategies
with higher ﬁtness will survive. In our model we have to modify the mathemat—
ical expression of the derivative equations as we consider cost function and not
ﬁtness. Hence, in our formulation of this dynamics, the strategy with
the lowest cost will survive. Moreover, the ﬁtness is by deﬁnition the
payoﬁ‘ obtained by unit mass which is expressed as the marginal cost
here following the deﬁnition given in [14]. Tie cost CZ perceived by users in population i who use link l is rewritten by: N
out) = WW 2M + bl).
1E1 where :L‘; = (m},xl2,...,vail,va). For all i = {1,...,N} and m = {1,...,]\/I},
the replicator dynamics is given by: it :fvi Fiﬁ?) *ffoEz) I: WW), (8) with the marginal cost given by: ; 301‘ N . . , .
17(11): =a;;x§Al+a1xE/ﬂ+bl and
M Fiﬁ) — Hit W) the mean population cost. When the constant cost I); are equal for all link l,i.e. bl = b, we prove that
the NE an" is a stationary point of the vector ﬁeld as for all i and l = 0.
The NE obtained in [2] for a general network topology is given by

$$\forall i, l: \quad x_l^i = \frac{1/a_l}{\sum_{m=1}^{M} 1/a_m}. \qquad (9)$$

Proposition 3. The NE is a stationary point of the replicator dynamics.

Proof. We have, for all i and l:
$$\bar{F}^i(x^*) - f_l^i(x^*) = \sum_{m=1}^{M} x_m^i f_m^i(x^*) - f_l^i(x^*),$$

where, at the equilibrium (9) and with b_l = b for all l,

$$f_m^i(x^*) = a_m \sum_{j=1}^{N} x_m^j \lambda_j + a_m x_m^i \lambda_i + b = \frac{\sum_{j=1}^{N} \lambda_j}{\sum_{l'=1}^{M} 1/a_{l'}} + \frac{\lambda_i}{\sum_{l'=1}^{M} 1/a_{l'}} + b,$$

which does not depend on m. Since \sum_{m=1}^{M} x_m^i = 1, the mean cost equals this common value, and therefore

$$\bar{F}^i(x^*) - f_l^i(x^*) = 0,$$
which implies the proposition. ∎

Remark: If the constant costs b_l are different and if, at equilibrium, each user sends a positive amount of traffic on each link, then the unique NE obtained in [2] is not a stationary point of the dynamics.

Considering the stability of the dynamical system, one can define two kinds of stability: local stability around the equilibrium, known as Lyapunov stability, and global stability, known as asymptotic stability.

The difficulty for local stability is to find a Lyapunov function; for this purpose, we can use our potential function defined by equation (4).

Definition 2. A stationary point x* is Lyapunov stable if there exists a function f, called a Lyapunov function, such that:
1. f is C^1,
2. for all x ∈ Ω \ {x*}, f(x) > 0,
3. for all t and x ∈ Ω \ {x*}, df(x(t))/dt ≤ 0.

We prove now that our potential function obtained in the first part is a global
Lyapunov function.

Proposition 4. The function f defined by

$$f(x) = P(x) - P(x^*) \qquad (10)$$

is a global Lyapunov function under the replicator dynamics.

Proof. Taking the expression of the potential (4), the function P can be expressed as

$$P(x) = \sum_{l=1}^{M} \left\{ \frac{a_l}{2} \left[ \sum_{i=1}^{N} (x_l^i \lambda_i)^2 + \Big( \sum_{i=1}^{N} x_l^i \lambda_i \Big)^2 \right] + b_l \sum_{i=1}^{N} x_l^i \lambda_i \right\}.$$

The function f is clearly C^1 and f(x*) = 0. Moreover, for all x ∈ Ω \ {x*} we have f(x) > 0, as x* is the unique minimum of P. Finally, for all t and x ∈ Ω \ {x*}, we have

$$\frac{d f(x(t))}{dt} = \nabla P(x(t)) \cdot \dot{x}(t) = \sum_{i=1}^{N} \sum_{l=1}^{M} \frac{\partial P}{\partial x_l^i} \, \dot{x}_l^i(t).$$

But for all i, l we have ∂P/∂λ_l^i = ∂C^i/∂λ_l^i = f_l^i, because P is a potential, and hence ∂P/∂x_l^i = λ_i f_l^i(x). Then,

$$\frac{d f(x(t))}{dt} = \sum_{i=1}^{N} \lambda_i \left[ \Big( \sum_{l=1}^{M} x_l^i f_l^i(x) \Big)^2 - \sum_{l=1}^{M} x_l^i \big( f_l^i(x) \big)^2 \right].$$

By Jensen's inequality (since Σ_l x_l^i = 1), each term of the summation over i is non-positive. Then we have proved that the function f is a Lyapunov function for the system. ∎

We have proved that the NE point is Lyapunov stable. A stronger stability
property is asymptotic stability, which implies Lyapunov stability and requires that, in addition, the population returns to the equilibrium after any small perturbation. We cannot use directly the result from [13], as the author assumes that the dynamics satisfies the non-extinction property4, which is not the case for the replicator dynamics, since φ_l^i(x) = 0 whenever x_l^i = 0 even if x is not a NE. But, in our case, we can use the convexity property of the Lyapunov function and the uniqueness of the NE in the interior of the space.

Theorem 2. The NE is asymptotically stable under the dynamics defined in (8),
and any solution trajectory of this dynamics starting from an interior initial condition converges to the NE.

Proof. As the dynamics (8) admits a global Lyapunov function, every solution trajectory converges to a connected set of rest points. Using the strict convexity of the Lyapunov function, we obtain by contradiction, as in [14], the convergence of any interior initial condition to the NE x*.

As x* is strictly positive (all components strictly positive), there exists a strictly positive real ε which defines a neighborhood A around the NE with strictly positive components. Then A is a local minimizer set of the potential function, and the only Nash equilibrium in A is x*. Also, for all x ∈ A \ {x*} we have df(x(t))/dt < 0, as the derivative equals 0 if and only if φ(x(t)) = 0, which is equivalent to x(t) = 0 or x(t) = x*. Thus, we can conclude that the function f is a strict local Lyapunov function for x*, and then x* is asymptotically stable. ∎

6 Conclusion and perspectives

We have investigated in this paper non-cooperative routing for link cost densities
(i.e., cost per packet) that are linear in the link flow, thus extending existing results for parallel links [2] to general topology. By identifying a potential structure for this game, we have obtained convergence of both the best response dynamics and the replicator dynamics. As future work we plan to extend the results to the multiclass setting of [3] and to study the stability of other dynamics.

4 This property requires that any extinct strategy can be reborn; it is expressed by the implication that φ_l^i(x) = 0 implies x is a NE.

Acknowledgement. The work of the first author was supported by the BIONETS European project.

References

1. E. Altman, T. Basar, T. Jimenez, and N. Shimkin. Competitive Routing in Networks with Polynomial Costs. In Proceedings of INFOCOM, 2000.
2. E. Altman, T. Basar, T. Jimenez, and N. Shimkin. Routing into Two Parallel Links: Game Theoretic Distributed Algorithms. Special Issue of the Journal of Parallel and Distributed Computing, 61, 2001.
3. E. Altman and H. Kameda. Equilibria for Multiclass Routing Problems in Multi-Agent Networks. In Advances in Dynamic Games, volume 7, pages 343-368.
4. M. Beckmann, C. B. McGuire, and C. B. Winsten. Studies in the Economics of Transportation. New Haven: Yale Univ. Press, 1956.
5. R. Cressman. Stability of the Replicator Equation with Continuous Strategy Space. Mathematical Social Sciences, 50:127-147, 2005.
6. A. Haurie and P. Marcotte. On the relationship between Nash-Cournot and Wardrop equilibria. Networks, 15:295-308, 1985.
7. T. Heikkinen. A Potential Game Approach to Distributed Power Control and Scheduling. Computer Networks, 50:2295-2311, 2006.
8. J. Hicks, A. MacKenzie, J. Neel, and J. Reed. A Game Theory Perspective on Interference Avoidance. In Proceedings of Globecom, 2004.
9. D. Monderer and L. Shapley. Potential Games. Games and Economic Behavior, 14:124-143, 1996.
10. A. Orda, R. Rom, and N. Shimkin. Competitive routing in multi-user communication networks. IEEE/ACM Transactions on Networking, 1:614-627, 1993.
11. R. W. Rosenthal. A class of games possessing pure-strategy Nash equilibria. Int. J. Game Theory, 2:65-67, 1973.
12. R. W. Rosenthal. The network equilibrium problem in integers. Networks, 3:53-59, 1973.
13. W. Sandholm. An Evolutionary Approach to Congestion, 1998.
14. W. Sandholm. Evolutionary Implementation and Congestion Pricing. Review of Economic Studies, 69:667-689, 2002.
15. G. Scutari, S. Barbarossa, and D. Palomar. Potential Games: A Framework for Vector Power Control Problems With Coupled Constraints. In Proceedings of ICASSP, 2006.
16. S. Shakkottai, E. Altman, and A. Kumar. The Case for Non-cooperative Multihoming of Users to Access Points in IEEE 802.11 WLANs. In Proceedings of INFOCOM, 2006.
17. P. Taylor and L. Jonker. Evolutionary Stable Strategies and Game Dynamics. Mathematical Biosciences, 40:145-156, 1978.
18. J. G. Wardrop. Some theoretical aspects of road traffic research. Proc. Inst. Civ. Eng., Part 2, 1:325-378, 1952.