1. (10 points total) In the context of zero-sum, two-person, extensive form games, give the relevant definition in each part below.

(a) subgame

(b) perfect information

2. State Zermelo's Theorem for win-lose games. You must include all assumptions in the hypothesis of Zermelo's Theorem; no abbreviations.
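For reference, one common formulation of the theorem the question asks for; check it against the course's exact hypotheses before relying on it:

```latex
\begin{theorem}[Zermelo]
In every finite, two-person, win--lose extensive-form game with perfect
information and no chance moves, exactly one of the two players has a
winning strategy.
\end{theorem}
```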
3. (10 points total) For analyzing finite extensive-form games we typically rely on backward induction. The point of this question is to have you explain, carefully but briefly, the elementary step of backward induction — you do not need to explain the entire process, and you should NOT appeal to any prior knowledge of backward induction.

Given a finite extensive form game G with k ≥ 2 decision nodes, explain how to reduce it to an equivalent game with k − 1 decision nodes. (Your approach must be general — it will not suffice to analyze a specific example.) How is the assumption of finiteness used in your argument?
4. Consider a zero-sum two-person game where player I's payoffs are given by the 4 × 3 matrix

         -4   1  -1
    A =   0  -1  -2
         -3   0   1
          1  -2  -4

Based on a quick glance at the payoff matrix, you might anticipate that the value of this game is negative.

(a) Compute a maximin pure strategy for player I.
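As a quick numerical check of part (a): a maximin pure strategy for I maximizes the row minimum of A. This sketch just recomputes that from the matrix above.

```python
A = [[-4, 1, -1],
     [0, -1, -2],
     [-3, 0, 1],
     [1, -2, -4]]

# A maximin pure strategy maximizes I's worst-case (row-minimum) payoff.
row_minima = [min(row) for row in A]
best = max(range(len(A)), key=lambda i: row_minima[i])
print(best + 1, row_minima[best])   # row 2, guaranteeing at least -2
```

So in pure strategies I can guarantee -2, consistent with anticipating a negative value.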
(b) Write down a linear programming (LP) formulation of the problem of finding the value v of the game in mixed strategies and a maximin mixed strategy for player I, i.e., a mixed strategy for player I that guarantees player I an expected outcome of at least v no matter what strategy player II selects. (The LP formulation should be written out in full detail — no summation signs or matrix notation.) DO NOT ATTEMPT TO SOLVE THE LP. Relate the optimal values of the variables in the LP to the value of the game and to the strategy for player I.
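For reference, one standard formulation with the coefficients of the matrix above spelled out (a sketch, not a substitute for the exam answer):

```latex
\begin{align*}
\text{maximize } & v\\
\text{subject to } & -4p_1 + 0p_2 - 3p_3 + 1p_4 \ge v\\
& \phantom{-}1p_1 - 1p_2 + 0p_3 - 2p_4 \ge v\\
& -1p_1 - 2p_2 + 1p_3 - 4p_4 \ge v\\
& p_1 + p_2 + p_3 + p_4 = 1\\
& p_1, p_2, p_3, p_4 \ge 0, \qquad v \text{ unrestricted.}
\end{align*}
```

At an optimum, v equals the value of the game and (p1, p2, p3, p4) is a maximin mixed strategy for player I.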
(c) It has been suggested by a usually reliable tipster that p = (0, 2/3, 1/3, 0) is a good mixed strategy for player I. Assuming player I uses p = (0, 2/3, 1/3, 0), give the best lower bound you can on the expected payoff to player I.
(d) Still assuming player I uses p = (0, 2/3, 1/3, 0), and assuming that player II uses a mixed strategy vector q, give the expected payoff to player I in terms of the components of q. If the tipster's suggestion of p = (0, 2/3, 1/3, 0) is in fact optimal for I, what does that imply about the optimal choice of q for II? Be as precise as possible.
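A consistency check for these parts, assuming the tipster's strategy is p = (0, 2/3, 1/3, 0) and using the strategy q = (0.5, 0, 0.5) that part (e) below asks about:

```python
from fractions import Fraction as F

A = [[-4, 1, -1], [0, -1, -2], [-3, 0, 1], [1, -2, -4]]
p = [F(0), F(2, 3), F(1, 3), F(0)]   # tipster's strategy for I
q = [F(1, 2), F(0), F(1, 2)]         # strategy for II from part (e)

# (c) I's expected payoff against each pure column of II; the minimum
# is I's guarantee, so the value of the game is at least this much.
cols = [sum(p[i] * A[i][j] for i in range(4)) for j in range(3)]
assert cols == [F(-1), F(-2, 3), F(-1)] and min(cols) == -1

# (e) II's exposure against each pure row of I; the maximum bounds the
# value from above. Both bounds equal -1, so the value is exactly -1.
rows = [sum(A[i][j] * q[j] for j in range(3)) for i in range(4)]
assert max(rows) == -1
```

For (d), the same arithmetic gives the expected payoff against q as -q1 - (2/3)q2 - q3; since p earns -2/3 against column 2, strictly more than the value -1, an optimal q for II must put zero weight on column 2.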
(e) Evaluate the mixed strategy q = (0.5, 0, 0.5) for player II, and state as specifically as possible what it reveals about the game.

5. (33 points total) Now consider the extensive-form two-person game in the figure below. The ordered pair on each terminal node is (payoff to I, payoff to II) at that node.

(a) Explain in just a few words how you can tell that this game is NOT strictly competitive.
(b) Someone just suggested that he thinks the decision tree is almost correct — the only thing that is wrong, he says, is that nodes b, c and d should be in the same information set. He is wrong; how can you be certain of that?

Proceed to analyze the game as drawn — the decision tree is correct.

(c) (10 points) Use backward induction to analyze the game and determine a subgame perfect Nash Equilibrium. Show in the decision tree your complete analysis determining the value of each subgame to both players.

(d) (16 points) Write out the strategic form of this game and use it to find all Nash equilibria — make sure to show all work. Indicate which Nash equilibria are subgame perfect and which are not.

6. OPTIONAL BONUS QUESTION (Adapted from Dixit and Skeath.) Two proposals, A and B, are under consideration by the U.S. federal government. The Congressional leadership is deciding how to proceed with them, if at all, during
the current session. There are four possible outcomes: (u1) A becomes law but B does not; (u2) B becomes law but A does not; (u3) both A and B become law; (u4) neither becomes law. The Congress (player I) can decide how to package the proposals and then the President (player II) has a choice of vetoing or signing whatever bills the Congress passes. A bill becomes law only if the Congress passes it and the President signs it. (Congress does not have enough votes on these matters to override a veto.) Congress's preferences are

u1 >I u3 >I u4 >I u2,

while the President's preferences are

u2 >II u3 >II u4 >II u1.

Player I will start at the root of the game by selecting from one of four choices:
do nothing, pass A only, pass B only, pass both A and B as separate bills.

(a) Draw the decision tree for this game and use backward induction to analyze the game and determine all subgame perfect Nash Equilibria. Show in the decision tree your complete analysis determining the value of each subgame to both players.

(b) Both the Congressional leadership and the White House have game-theoretically savvy interns and both already know how this game will play out (to the subgame perfect Nash equilibrium you just found). However, before play has started, someone suggests a change to the game: add a branch at the root which corresponds to packaging A and B as a single bill. Modify the decision tree and repeat your analysis.
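The bonus game is small enough to brute-force. A sketch with hypothetical ordinal utilities encoding the stated preferences (any order-preserving numbers would do):

```python
from itertools import product

# Hypothetical ordinal utilities (larger = more preferred) encoding
# the preferences stated in the question.
congress  = {'u1': 3, 'u3': 2, 'u4': 1, 'u2': 0}   # u1 > u3 > u4 > u2
president = {'u2': 3, 'u3': 2, 'u4': 1, 'u1': 0}   # u2 > u3 > u4 > u1

def label(laws):
    """Map the set of bills that became law to its outcome name."""
    return {frozenset(): 'u4', frozenset('A'): 'u1',
            frozenset('B'): 'u2', frozenset('AB'): 'u3'}[frozenset(laws)]

def best_reply(bills):
    """The President may sign any subset of the separately passed
    bills; he signs the subset whose outcome he prefers most."""
    options = [label({b for b, s in zip(bills, signs) if s})
               for signs in product([True, False], repeat=len(bills))]
    return max(options, key=lambda o: president[o])

# Part (a): Congress's four root choices, each met by II's best reply.
results = {bills: best_reply(bills)
           for bills in [(), ('A',), ('B',), ('A', 'B')]}
spne_outcome = max(results.values(), key=lambda o: congress[o])

# Part (b): add a joint bill AB (the President signs or vetoes it whole).
joint = max(['u3', 'u4'], key=lambda o: president[o])
final = max([spne_outcome, joint], key=lambda o: congress[o])
print(spne_outcome, joint, final)   # u4 u3 u3
```

Under these assumptions the separate-bills game ends with neither proposal becoming law (u4): the President would veto a lone A and sign a lone B, so Congress's best paths all lead to u4. Adding the joint bill changes the outcome to u3, which both players prefer to u4.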
Spring '08
SHMOYS