Economics 611
Handout #3

Extensive Form Games
Example 1. Player 1 offers player 2 a number, x, of nickels from one to 20; player 2 says
"accept" or "reject". If reject, both get 0; if accept, player 2 receives 5x cents and player 1
receives 100 – 5x cents.

Example 2. (Bierman and Fernandez) Professor Brown announces he is going to auction off a
dollar. Bids proceed in increments of 50 cents. Bidders cannot bid twice in a
row, and once a bidder passes she does not get to bid again. The highest bidder
gets the dollar, but both the highest and second-highest bidders pay their bids to
him. Mary and Tom are the only two bidders; it is common knowledge that they
each have only $2 in their wallets; and Mary gets to make the first bid.

Example 3. Entry deterrence (MWG, Fig 9.B.1)

                Fight if E plays "In"    Accommodate if E plays "In"
    Out               0, 2                        0, 2
    In              –3, –1                        2, 1

Information sets: (MWG, Fig 9.C.1)

Finite game form in extensive form (with perfect recall):
An 11-tuple GF = ( X, Y, A, I, p(), α(), H, H(), ι(), ρ(), g() )
X: a finite set of nodes
Y : a space of outcomes
A : a finite set of possible actions
I : a finite set of players, I = {1, 2, ..., I} (plus player "0", Nature)

p() : the immediate predecessor function, p(): X → X ∪ {∅}

p(x) = ∅ for exactly one x ∈ X, called the initial node and labeled x0
s(x) : the immediate successor correspondence, s(x) = p⁻¹(x).
T : the set of terminal nodes, T = { x ∈ X : s(x) = ∅ }

X\T : the decision nodes
P(x) : the set of all predecessors of x : given y ∈ X, y ∈ P(x) if and only if there exists
a sequence (a "path") x1, x2, ..., xm such that x1 = x; xm = y; for all i, 2 ≤ i ≤ m,
xi = p(xi–1). Such a path is unique.
S(x) : the set of all successors of x : given y ∈ X, y ∈ S(x) if and only if there exists
a sequence (a "path") x1, x2, ..., xm such that x1 = x; xm = y; for all i, 2 ≤ i ≤ m,
xi ∈ s(xi–1). Such a path is unique.
We require, for all x ∈ X, that P(x) ∩ S(x) = ∅.
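The tree primitives above can be sketched in a few lines of Python on a hypothetical five-node tree: the predecessor function p() is the only stored data, and s(x), T, P(x), and S(x) are all derived from it.

```python
# Hypothetical tree: x0 is the initial node; p maps each other node to
# its immediate predecessor.
p = {"x1": "x0", "x2": "x0", "x3": "x1", "x4": "x1"}
nodes = {"x0"} | set(p)

def s(x):
    """Immediate successors: s(x) = p^{-1}(x)."""
    return {y for y, pred in p.items() if pred == x}

terminal = {x for x in nodes if not s(x)}  # T = { x : s(x) is empty }

def predecessors(x):
    """P(x): walk the unique path back toward the initial node."""
    out = set()
    while x in p:
        x = p[x]
        out.add(x)
    return out

def successors(x):
    """S(x): every node reachable by repeated application of s."""
    out, stack = set(), list(s(x))
    while stack:
        y = stack.pop()
        out.add(y)
        stack.extend(s(y))
    return out
```

With this tree, terminal = {x2, x3, x4}, and the requirement P(x) ∩ S(x) = ∅ holds at every node.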
α() : action function, α(): X\{x0} → A; α(x) is an action leading from p(x) to x.
If y ∈ P(x), there is a unique path x1, x2, ..., xm such that x1 = x; xm = y; for all i,
2 ≤ i ≤ m, xi = p(xi–1). We say α(xm–1) is the action at y on the path to x.

We require, for all x′, x″ ∈ s(x) with x′ ≠ x″, that α(x′) ≠ α(x″).
c(x) is the choice set at x: c(x) = { a ∈ A : a = α(x′) for some x′ ∈ s(x) }

H : a collection of information sets that partition X
GF is a game form of perfect information if every h ∈ H is a singleton.
H() : an information function, H(): X → H

We require, for all x, x′ ∈ X, that H(x) = H(x′) implies c(x) = c(x′)
Therefore we can write c(h) for each c(x) when h = H(x)
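This consistency requirement can be checked mechanically; a small sketch with hypothetical decision nodes and information sets:

```python
# Hypothetical partition of the decision nodes into information sets,
# and the choice set c(x) at each node.
H = [{"x1"}, {"x2", "x3"}]
c = {"x1": {"L", "R"}, "x2": {"a", "b"}, "x3": {"a", "b"}}

def consistent(H, c):
    """H(x) = H(x') must imply c(x) = c(x'): one choice set per info set."""
    return all(len({frozenset(c[x]) for x in h}) == 1 for h in H)

def c_of_h(h):
    """Since the requirement holds, c(h) is well defined."""
    return c[next(iter(h))]
```

Here consistent(H, c) is True, so writing c(h) for the common choice set is legitimate.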
ι() : an information assignment function, ι(): H → I ∪ {0}

The collection of player i's information sets:

Hi = { h ∈ H : ι(h) = i }
Perfect recall: (i) If H(x) = H(x′) with x ≠ x′, then x′ ∉ P(x) ∪ S(x) ( and x ∉ P(x′) ∪ S(x′) );
(ii) If H(x) = H(x′), x″ ∈ P(x), with ι(H(x″)) = ι(H(x)), and a is the action
at x″ on the path to x, then there exists an x* ∈ P(x′) ∩ H(x″) such
that the action at x* on the path to x′ is also a.
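Condition (i) can be tested directly on a hypothetical tree: an information set may never contain a node together with one of its own predecessors or successors.

```python
# Hypothetical tree, stored as an immediate-predecessor map.
p = {"x1": "x0", "x2": "x0", "x3": "x1"}

def predecessors(x):
    """P(x): walk back toward the initial node."""
    out = set()
    while x in p:
        x = p[x]
        out.add(x)
    return out

def violates_condition_i(h):
    """True if some node in information set h precedes another node in h."""
    return any(a != b and a in predecessors(b) for a in h for b in h)

# {"x1", "x3"} groups a node with its successor: condition (i) fails.
# {"x2", "x3"} groups two nodes on different paths: condition (i) holds.
```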
ρ() : a probability function (for Nature)

We require (i) ρ() : H0 × A → [0, 1] with ρ(h, a) = 0 if a ∉ c(h); (ii) Σa∈c(h) ρ(h, a) = 1
for each h ∈ H0

g() : an outcome function, g() : T → Y

Finite game in extensive form (with perfect recall): An ordered pair (GF, u) consisting of a
finite game form in extensive form (with perfect recall) together with a collection
u = { u1(), u2(), ..., uI() } of utility functions: ui(): Y → ℝ, the reals

We assume each ui() is a Bernoulli utility function [Expected utility]
Common knowledge : Everybody knows the game form, etc.; everybody knows that everybody
knows the game form, etc.; everybody knows that everybody knows that everybody
knows the game form, etc.; ... .

A (pure) strategy for player i ∈ I in game form GF = ( X, Y, A, I, p(), α(), H, H(), ι(), ρ(),
g() ) is a function si: Hi → A such that si(h) ∈ c(h) for all h ∈ Hi.
The collection of all of i's strategies is Si .
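Since a strategy assigns a feasible action to each of player i's information sets, Si is simply a Cartesian product of choice sets; a sketch with hypothetical information sets h1 and h2:

```python
from itertools import product

# Hypothetical choice sets c(h) at player i's two information sets.
choice_sets = {"h1": ["L", "R"], "h2": ["a", "b", "c"]}

info_sets = sorted(choice_sets)
# Each strategy is a function h -> c(h), represented here as a dict.
S_i = [dict(zip(info_sets, actions))
       for actions in product(*(choice_sets[h] for h in info_sets))]

# |S_i| = 2 * 3 = 6 pure strategies.
```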
Mas-Colell, Whinston, and Green, Exercise 7.D.1
A profile of strategies: s = (s1, s2, ... , sI) where si ∈ Si for all i ∈ I.
s = (s1, s2, ... , sI) = (si, s–i)
S = S1 × S2 × ... × SI and S–i = S1 × S2 × ... × Si–1 × Si+1 × ... × SI
The normal form representation of game form GF is ( I, {Si} )

General Theme of Chapter 9: Use dynamic structure (extensive form) to eliminate
those NE that result from non-credible actions. [Some NE are not sensible predictions.]
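To see the theme concretely, the Example 3 entry-deterrence matrix can be searched for pure-strategy Nash equilibria; the sketch below (payoffs ordered entrant, incumbent) finds both (Out, Fight) and (In, Accommodate), the first of which rests on the non-credible threat to fight.

```python
# Entry-deterrence payoffs from Example 3: (entrant, incumbent).
payoffs = {
    ("Out", "Fight"): (0, 2), ("Out", "Accommodate"): (0, 2),
    ("In", "Fight"): (-3, -1), ("In", "Accommodate"): (2, 1),
}
E_strats, I_strats = ["Out", "In"], ["Fight", "Accommodate"]

def is_nash(e, i):
    """Neither player gains from a unilateral deviation."""
    u_e, u_i = payoffs[(e, i)]
    return (all(payoffs[(e2, i)][0] <= u_e for e2 in E_strats) and
            all(payoffs[(e, i2)][1] <= u_i for i2 in I_strats))

nash = [(e, i) for e in E_strats for i in I_strats if is_nash(e, i)]
# nash == [("Out", "Fight"), ("In", "Accommodate")]
```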
Entry deterrence game (a second look)

Backward Induction for Games of Perfect Information.
Zermelo's Theorem. For every game of perfect information, there exists at least one
profile of strategies that can be obtained by backward induction. Any such profile is a Nash
equilibrium. If no player has the same payoff at any two terminal nodes, this equilibrium is
unique.
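Backward induction can be sketched on the perfect-information game of Example 1 (payoffs in cents): player 2's last move is to accept whenever 5x > 0, so player 1 offers the smallest number of nickels.

```python
# Backward induction on the Example 1 offer game, payoffs in cents.
def solve():
    best = None
    for x in range(1, 21):
        # Player 2's last move: accepting (5x) beats rejecting (0) for x >= 1.
        p2_accepts = 5 * x > 0
        u1 = 100 - 5 * x if p2_accepts else 0
        if best is None or u1 > best[1]:
            best = (x, u1)
    return best

# solve() == (1, 95): offer one nickel and keep 95 cents.
```

Per Zermelo's theorem this profile, with player 2 accepting every offer, is a Nash equilibrium.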
Given an extensive form game Γ = ( X, Y, A, I, p(), α(), H, H(), ι(), ρ(), g(), ui() ),
we say game Γ* = ( X*, Y*, A*, I*, p*(), α*(), H*, H*(), ι*(), ρ*(), g*(), u*i() ) is a subgame
of Γ if

1. X* ⊂ X;
2. Y* ⊂ Y;
3. A* ⊂ A;
4. I* = I;
5. p*(): X* → X* ∪ {∅}; there exists an element x* of X* such that p*(x*) = ∅, and for all x
in X*\{x*}, p*(x) = p(x) and p*(x) ∈ X*. Important: We require for each x in X*,
that S*(x) = S(x).
6. α*(): X*\{x*} → A* with α*(x) = α(x) for all x ∈ X*\{x*};
7. H* ⊂ H;
8. H*(): X* → H* with H*(x) = H(x) for all x ∈ X*\{x*} AND #H(x*) = 1;
[So if decision node x is in the subgame, then every x′ in H(x) is also.]
9. ι*(): H* → I ∪ {0} with ι*(h) = ι(h) for all h ∈ H*;
10. ρ*() : H*0 × A* → [0, 1] satisfies ρ*(h, a) = ρ(h, a) for all h ∈ H*0 and a ∈ A*;
11. g*() : T* → Y* and g*(t) = g(t) for all t ∈ T*;
12. u*i(): Y* → ℝ with u*i(y) = ui(y) for all y in Y*.

Given a mixed strategy profile σ = (σ1, σ2, ..., σI) in a game Γ and a subgame Γ* of Γ, we
say σ induces a Nash equilibrium in Γ* if the moves specified in σ when applied to information
sets in Γ* constitute a Nash equilibrium for Γ*.

A mixed strategy profile σ = (σ1, σ2, ..., σI) in a game Γ is a subgame perfect Nash
equilibrium (SPNE) if it induces a Nash equilibrium in every subgame Γ* of Γ. [Selten, 1965]
The entry deterrence game (a third look)
The Nash equilibria identified by backward induction in finite games of perfect
information are SPNE.

The Centipede Game [Rosenthal, JET (1981), pp. 92–100]
"...the SPNE concept insists that players should play an SPNE wherever they find
themselves in a game tree, even after a sequence of events that is contrary to the predictions of
the theory.... [P]layers will assume that the remaining play of the game will be an SPNE even if
play up to that point has contradicted the theory."
The Centipede Game

The subgame approach does not allow us to uncover all non-credible actions.
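A backward-induction sketch of a centipede-style game, with hypothetical payoffs chosen so that stopping at stage k pays the mover k+1, while letting the opponent stop next period would pay only k; as Rosenthal's argument predicts, the induction unravels all the way to stopping at the first node.

```python
# Hypothetical centipede payoffs: stopping at stage k pays the mover
# k+1 and the other player k-1; if nobody ever stops, both receive N.
N = 6  # number of stages

def solve(k=1):
    """Return (action at stage k, payoffs), payoffs as (mover, other)."""
    stop = (k + 1, k - 1)
    if k == N:
        cont = (N, N)
    else:
        _, (next_mover_u, next_other_u) = solve(k + 1)
        # Roles alternate: today's mover is tomorrow's "other" player.
        cont = (next_other_u, next_mover_u)
    return ("stop", stop) if stop[0] >= cont[0] else ("pass", cont)

# solve() == ("stop", (2, 0)): play stops immediately at stage 1.
```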
Example 9.C.1 again

Exercises from the book: 9.B.3, 9.B.5, 9.B.6, 9.B.9, 9.B.11.
Exercises from old exams:
1. At time 0, an incumbent firm (firm I) is already in the widget market, and a potential entrant
firm (firm E) is considering entry. In order to enter, firm E must incur a cost of K > 0. Firm E’s
only opportunity to enter is at time 0. There are three production periods. In any period in which
both firms are active in the market, the game in the figure below is played. Firm E moves first,
deciding whether to stay in or exit the market. If it stays in, firm I decides whether to fight.
Once firm E plays “out”, it is out of the market forever; firm E earns 0 in any period during
which it is out of the market, and firm I earns x. The discount factor for both firms is ä; i.e., a
dollar gained in period 1 is valued at $1; a dollar earned in period 2 is valued at ä$1; and a
dollar earned in period 3 is valued at ä2$1. Similarly for costs.
Assume: 1. x > z > y; 2. y + äx > (1 + ä) z; 3. 1 + ä > K; 4. 0 < ä < 1. A. (15) What are the SPNE of this game?
B. (5) Is there a NE that is not a SPNE?

2. In a variation on the game of problem 1, suppose now that firm E faces a financial constraint.
In particular if firm I fights once against firm E (in any period), firm E will be forced out of the
market from that point on.
A. (15) What are the SPNE of this game?
B. (5) Is there a NE that is not a SPNE?

3. For the following game, are there any subgame perfect Nash equilibria? If so, what are they?
Are there any other Nash equilibria? Separately treat the cases x ≥ 0 and x < 0.

4. There are two players. Player #1 offers a point s1. Player #2 can then accept s1 or reject it;
in the latter case the outcome, s, is the status quo value s0. If #2 accepts, the outcome, s, is s1.
In both parts below, player #2's preferences are represented by –(s – b2)² where b2 is #2's bliss
point.
(Part 1) In this part, player #1's preferences are represented by s (i.e., she prefers higher
values of s). Find the subgame perfect Nash equilibria. (Hint: Your answer may depend on the
sizes of the parameters s0 and b2 ).
(Part 2) In this part, player #1's preferences are represented by –(s – b1)² where b1 is #1's
bliss point. Find the subgame perfect Nash equilibria. (Hint: Your answer may depend on the
sizes of the parameters s0, b1, and b2.)

5. Consider this variation on the previous question: There are two periods. In the first period the
game of Question 4 is played. In the second period, the game is played again, but the default
status quo point is now the outcome of the first period game.
Payoffs are the undiscounted utilities of the outcome at the end of the second period.
Are there any subgame perfect Nash equilibria? If so, what are they?
[Assume s0 < b2.]

Fall '08, Kelly, J.