GAME THEORY - Econ 414
Lecture 19-21: Repeated Games
Instructor: Nuno Limão

OUTLINE
- Repeated games: examples and an experiment
- Basics of repeated games
- Results: repetition and cooperation
- Application I: overfishing
- Application II: severity of punishment and alternating equilibria
- Reputation and cooperation: other applications
  - Collusion strategies
  - Pork barrel as intertemporal cooperation
  - International agreements
- Experimental evidence for the repeated prisoner's dilemma

REPEATED GAMES: EXAMPLES AND AN EXPERIMENT

Examples
- Firms setting prices/quantities can face the same competitors for many years
- A debtor deciding whether to repay
- Countries choosing a policy (exchange rate, trade policy) that can adversely affect others
- Others...?

Questions
- Why might repeated interaction in the same exact situation affect behavior and change the outcome?
- Can it generate more cooperation (e.g. collusion, repayment, cooperation in the examples above)?
- In what types of games is it most useful?
- Can it help explain periods of truce amidst brutal combat in the WWI trenches? http://www.youtube.com/watch?v=Uep9Q‐‐tCcY&feature=fvsr

Experiment: exploring the impact of repeated interaction and different time horizons on cooperation
- Log in at http://veconlab.econ.virginia.edu/login.htm, session ngl10
- Prisoner's dilemma below with fixed matching within each treatment (matching varies across treatments):

              Left            Right
  Top         $2.00, $2.00    $4.00, $1.00
  Bottom      $1.00, $4.00    $3.00, $3.00

- 1st treatment: play once
- 2nd treatment: play twice with the same player
- 3rd treatment: an indefinite number of rounds with the same player
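The one-shot structure of the classroom game above can be checked with a small script. This is an illustration, not part of the original slides: it verifies that each player has a dominant action and that the unique Nash equilibrium, (Top, Left), pays each player less than the cooperative cell (Bottom, Right), which is what makes this a prisoner's dilemma.

```python
# One-shot analysis of the classroom PD (payoffs in dollars).
# Row chooses Top/Bottom, Column chooses Left/Right.
payoffs = {
    ("Top", "Left"): (2.00, 2.00),
    ("Top", "Right"): (4.00, 1.00),
    ("Bottom", "Left"): (1.00, 4.00),
    ("Bottom", "Right"): (3.00, 3.00),
}

rows, cols = ["Top", "Bottom"], ["Left", "Right"]

def best_response_row(col):
    return max(rows, key=lambda r: payoffs[(r, col)][0])

def best_response_col(row):
    return max(cols, key=lambda c: payoffs[(row, c)][1])

# A Nash equilibrium is a profile where each action is a best response
# to the other player's action.
nash = [(r, c) for r in rows for c in cols
        if r == best_response_row(c) and c == best_response_col(r)]
print(nash)  # [('Top', 'Left')] -- each earns $2, though (Bottom, Right) pays $3 each
```

Running the best-response check over all four cells confirms that Top and Left are dominant, so (Top, Left) is the only one-shot equilibrium despite being jointly worse than (Bottom, Right).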
BASICS OF REPEATED GAMES

Definition: a repeated game is a situation in which the players face a given strategic interaction, known as the stage game, more than once.

Examples
- Firms choosing prices or quantities at the beginning of each period
- Allied and German soldiers deciding whether to kill or miss while in the trenches

If players know that they will interact repeatedly, then we must redefine their payoffs and strategies.

Strategies for the repeated game
- As before, a strategy assigns an action to each information set. What is an information set now?
- If players had perfect recall of the history of play but no common knowledge, then each player would know what he did before but not what his opponent has done (fig. 13.2).
- We assume instead that players have common knowledge of the history of play when they start a new repetition. With two repetitions, this gives the representation in fig. 13.3.
- Since fig. 13.3 has 5 information sets per player, a strategy is a quintuple of actions for each player.
- Since each player has 2 possible actions at each information set, there are 2^5 = 32 possible strategies to consider in the repeated game.

(Fig. 13.3: two-period trench warfare game with common knowledge of the history.)

Payoffs
- The payoffs that players maximize in a repeated game must take into account the payoffs received at every stage.
- Stage-game payoffs may be aggregated in different ways, but typically as either
  - the unweighted sum of stage-game payoffs u_t: V = u_1 + u_2 + ... + u_T, or
  - the weighted (discounted) sum of stage-game payoffs u_t: V = u_1 + δu_2 + ... + δ^(T-1)u_T = Σ_{t=1,...,T} δ^(t-1) u_t
- For example, if T = 2 and the history is (miss, miss), (kill, miss), where the first action in each ordered pair is the Allied side's, then the stream of single-period payoffs for the Allied side is 4, 6 and the total payoff is either
  - 10 with no discounting (as shown in the tree), or
  - 4 + 6δ < 10 (since δ < 1) with discounting.

RESULTS: REPETITION AND COOPERATION

Finite horizon in trench warfare (T = 2)
- Suppose the Allied and German soldiers interact exactly twice (T = 2) and there is no discounting (so simply sum the single-period payoffs). What is the SPNE of this two-period game?
- Represent it as a sequential game (fig. 13.3) so we can use backward induction. In addition to the full game there are 4 subgames, so solve each:
  - Period 2 subgame after (Kill, Kill) in period 1: the unique NE of the period-2 subgame is (Kill, Kill).
  - After (Kill, Miss): the unique NE of the period-2 subgame is (Kill, Kill).
  - After (Miss, Miss) and (Miss, Kill): the unique NE of the period-2 subgame is also (Kill, Kill). [VERIFY YOURSELF]
- Period-1 representation of the trench warfare game after backward induction: the NE of this subgame is (Kill, Kill), so the unique SPNE is (kill/kill/kill/kill/kill, kill/kill/kill/kill/kill).

Finite horizon in trench warfare (any finite T)
- Repeating the game twice does not generate cooperation. What if players know they will repeat many times but finish at a specific point in the future? What is the SPNE?
- Payoffs at time T after some history of play up to T−1: the Allies have accumulated a sum of payoffs A_{T−1} and the Germans G_{T−1}.
- Payoffs at time T are obtained by simply adding the payoffs already accumulated to the single-period payoffs for period T, which determines the NE at time T.
- Recall that kill is the dominant strategy in the one-shot game: the payoff of kill is higher than that of miss independently of what the other player does.
- Therefore, if we add a constant to all payoffs (here A_{T−1} and G_{T−1}), kill is still dominant. Thus (Kill, Kill) is a NE in the final stage for ANY history of play.

NE at time T−1
- The sums of payoffs for any earlier history are denoted A_{T−2} and G_{T−2}.
- The additional payoff obtained in the last period is already determined: (2, 2), since (kill, kill) is the NE in the final period. E.g., if the Allies kill and the Germans miss, the Allies get A_{T−2} (history) + 6 (current) + 2 (last period). NOTE TYPO IN TEXT FIG. 13.8: DELETE T−1 AND USE T−2.
- Recall that kill is the dominant strategy in the one-shot game. Adding a constant to all payoffs (A_{T−2} + 2 and G_{T−2} + 2 for the Allies and Germans respectively) leaves this true, so (Kill, Kill) is again the NE.
- What do you think the NE is for the subgames reached by backward induction at T−2, ..., down to t = 1?

What is the intuition?
- Last period: the previous payoffs are already realized and cannot be affected, and actions have no future consequence beyond the payoff of that stage game. The unique NE of the subgame is (kill, kill), so players kill in the last stage.
- Period before last: players know the outcome of the last stage and can't affect it, and the previous payoffs are already realized and cannot be affected. Thus the best response is kill, to maximize the current payoff, as in the one-shot game.
- Period T−n: apply the same logic iteratively; players can't affect what has already happened, and what will happen in the future can't be affected by what is done now.

Basic result I: no cooperation in finitely repeated games. If the stage game has a unique Nash equilibrium, then the finitely repeated game has a unique subgame perfect Nash equilibrium path, which is a repetition of the stage-game Nash equilibrium.

INFINITE OR INDEFINITE HORIZONS AND COOPERATION

Motivation
- In trench warfare (and many other applications) players do not know with certainty when the game will end.
- A known ending was essential in applying backward induction to show that no cooperation occurs; we now remove it by considering indefinite horizons.

Relationship between expected payoffs under infinite and indefinite horizons, with a fixed stage-game payoff u
- Infinite horizon with a "true" discount factor d (e.g. d = 1/(1+r), or time preference):
  V = u + du + d^2·u + ... = Σ_{t=1,...,∞} d^(t−1) u = u(1 + d + d^2 + ...) = u/(1−d)
- Indefinite horizon with probability p of the game continuing (no "true" discounting):
  V = u + pu + p^2·u + ... = Σ_{t=1,...,∞} p^(t−1) u = u(1 + p + p^2 + ...) = u/(1−p)
- Indefinite horizon with continuation probability p and "true" discounting:
  V = u + dpu + (dp)^2·u + ... = Σ_{t=1,...,∞} (dp)^(t−1) u = u/(1−dp)
- Therefore V = Σ_{t=1,...,∞} δ^(t−1) u = u/(1−δ), where δ = pd, can represent the expected value of a payoff stream under either an indefinite horizon (p < 1) or an infinite horizon (p = 1).

Definition: SPNE for a repeated game
A strategy profile is a SPNE of a repeated game if and only if, in each period and for each history, the prescribed action is optimal for a player, given that
i. the other player(s) act according to their strategies in the current period, and
ii. all players act according to their strategies in all future periods.

Note that this definition only requires us to check that, conditional on (i) and (ii), the strategy is not worse than one involving a single deviation. Why aren't we also considering whether it is worse than a series of deviations?
- We are, in effect: since we require that the SPNE strategy is not worse than a single deviation in any period and after any history, if no single deviation leaves you better off, then neither will deviating many times (see the appendix for more details).

Finding a SPNE of the repeated trench warfare game
- Candidate strategy: always play kill, independent of history.
- Payoff of the strategy for each player: 2 + δ2 + δ^2·2 + ... = 2/(1−δ)
- Payoff of a possible deviation to miss this period (conditional on the opponent playing the strategy, i.e. kill) is 0 in the current period, since that is the payoff from (miss, kill), plus the payoff of returning to the strategy (kill, kill) in all future periods: δ2 + δ^2·2 + ... = δ·2/(1−δ)
- (Kill, Kill) is a SPNE iff 2/(1−δ) > δ·2/(1−δ), which is always true since δ < 1.
- So repetition of the stage-game NE is also a SPNE of the infinitely repeated game. Are there others? Yes, but we need some reward/punishment scheme to induce players to cooperate.

Grim trigger strategy
- Period 1: choose miss.
- Period t > 1: miss if both players missed in all past periods; kill after any other history.

Showing when the grim trigger strategy is a SPNE of infinitely repeated trench warfare
- Payoff of the strategy if the history is all (miss, miss): 4 + δ4 + δ^2·4 + ... = 4/(1−δ)
- Payoff of deviating from the strategy after a history of (miss, miss), i.e. of choosing kill while the opponent plays miss AND then triggering the grim punishment (kill, kill) forever: 6 + δ2 + δ^2·2 + ... = 6 + δ·2/(1−δ)
- Miss is optimal given a history of (miss, miss) only if 4/(1−δ) > 6 + δ·2/(1−δ).
- Note that if this holds in the current period, it also holds in any period. E.g., if you consider deviating the following period, you compare
  4 + δ4 + δ^2·4 + ... > 4 + δ6 + δ^2·2/(1−δ) + ...
  which, after subtracting the common first-period payoff of 4 and dividing by δ, reduces to the same condition:
  4/(1−δ) > 6 + δ·2/(1−δ)

The condition above is necessary, but may not be sufficient. Why?
- SPNE requires us to check that no player can benefit from deviating in any period after any history, so we must consider the payoffs from all other histories.
- If anyone played kill before, then the strategy prescribes that both choose kill this period, so the only other history to consider is (kill, kill), with the payoff derived before: 2/(1−δ).
- Is there an incentive to unilaterally deviate from (kill, kill) in a given period? No, since the payoff of miss conditional on the opponent playing kill is (as seen before) δ·2/(1−δ).
- The last point also tells us that if the punishment prescribed by this strategy is enacted, then it is credible (i.e. there is no incentive to deviate from it). This is necessarily true for a grim trigger strategy that reverts to the stage-game NE, since we know repetition of the stage-game NE is a SPNE of any subgame.
- In sum, the grim trigger strategy pair is a SPNE of the infinitely repeated trench warfare game iff 4/(1−δ) > 6 + δ·2/(1−δ), i.e. iff δ ≥ 1/2: players must place enough weight on the future.

General "result": repetition of a given stage game generates an equilibrium with a more "cooperative" outcome than the one-shot interaction only if
- the game has some probability of continuing at any point (p > 0; otherwise there is no cooperation at the last stage and the game unravels, as seen with finite T);
- players place enough weight on future payoffs (δ sufficiently high, so the cost of lost cooperation or of punishment is large enough to counter the benefit of a present deviation);
- the previous history of play is observable, at least partially (so a reward/punishment strategy can be conditioned on history);
- there is an outcome with a higher joint payoff than the one-shot Nash (otherwise, e.g. in zero-sum games, there is no scope for increased cooperation).

The result above
- applies to other games;
- can be obtained with different strategies, e.g. grim trigger, or tit for tat (start by cooperating and then do whatever your opponent did in the previous period);
- typically leads to multiple equilibria.

Aside: we can use payoff space and graph average payoffs to illustrate when the result above works (e.g. a prisoner's dilemma such as trench warfare) and when it does not (zero-sum games).

Basic steps to find a SPNE in an infinitely repeated game
1. Is there scope for cooperation relative to the one-shot NE?
2. If so, and increased cooperation involves actions (c, c'), consider a candidate strategy s in which players use (c, c') but face some punishment (or forfeit some benefit of cooperation) if they deviate.
3. Compute the payoff stream under cooperation, Vc(δ).
4. Compute the payoff stream if one player deviates after a history of cooperation, Vd(δ).
5. Find the minimum δ for which Vc(δ) ≥ Vd(δ).
6. Consider alternative histories and repeat, e.g. the history after the punishment was triggered, which ensures the punishment is credible.

APPLICATION I: OVERFISHING

Setup (ex. 1, ch. 13)
- Each of 3 fishermen individually and simultaneously chooses each day whether to send 1 or 2 of his boats out.
- Daily cost of a boat: $15.
- The gross benefit increases in the number of fish caught, but there is a finite stock of fish, so the catch per boat decreases with the number of boats out.
- The stage game is symmetric, so the table can be used to determine any fisherman's payoff = gross benefit − cost.

Exercise
- What is the NE of the stage game? Is there any scope for increased cooperation?
- If the fishermen interact indefinitely (or infinitely), is there a grim-trigger strategy that can increase cooperation relative to the stage-game NE? [Assume the history is common knowledge.]
- Under what conditions is that strategy a symmetric subgame perfect Nash equilibrium of the infinitely repeated game?

Solution...
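The grim-trigger condition derived for trench warfare follows the basic steps listed earlier, and the cutoff generalizes to any stage game with a per-period cooperation payoff, a one-shot deviation payoff, and a Nash-reversion punishment payoff. The sketch below is illustrative (the function names are mine, not from the slides); it computes the closed-form cutoff and cross-checks it against a long truncated sum.

```python
def grim_trigger_min_delta(coop, dev, punish):
    """Smallest discount factor sustaining cooperation under grim trigger.

    Cooperation is sustainable iff coop/(1-d) >= dev + d*punish/(1-d),
    which rearranges to d >= (dev - coop) / (dev - punish).
    """
    return (dev - coop) / (dev - punish)

def coop_sustainable(coop, dev, punish, delta, horizon=10_000):
    # Brute-force check with a long truncated sum (approximates infinity).
    v_coop = sum(coop * delta**t for t in range(horizon))
    v_dev = dev + sum(punish * delta**t for t in range(1, horizon))
    return v_coop >= v_dev

# Trench warfare: cooperate (miss, miss) = 4, one-shot deviation = 6,
# Nash punishment (kill, kill) = 2.
print(grim_trigger_min_delta(4, 6, 2))   # 0.5, matching delta >= 1/2 above
print(coop_sustainable(4, 6, 2, 0.6))    # True: patient players cooperate
print(coop_sustainable(4, 6, 2, 0.4))    # False: impatient players defect
```

The same two functions can be reused for the overfishing exercise once the stage-game table supplies the three payoffs.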
APPLICATION II: SEVERITY OF PUNISHMENT & ALTERNATING EQUILIBRIA

Setup (ex. 5, ch. 13)
- Consider the infinitely repeated version of the stage game in FIGURE PR13.5. Assume that each player's payoff is the present value of her payoff stream and that the discount factor is δ.

Exercise
- What is the NE of the stage game? Is there room for improved cooperation?
- Find a strategy profile that results in an outcome path in which both players choose x in every period, and that is a subgame perfect Nash equilibrium.
- Alternating equilibria: find a strategy profile that results in an outcome path in which both players choose x in every odd period and y in every even period, and that is a subgame perfect Nash equilibrium.
- Severity of punishment: assume δ = 2/5. Find a strategy profile that results in an outcome path in which both players choose y in every period, and that is a subgame perfect Nash equilibrium.
- Intertemporal cooperation: is there a strategy profile that results in an outcome with higher payoffs for both players than any of those considered so far? Under what conditions is it a SPNE of the repeated game?

Solution...

REPUTATION AND COOPERATION: PORK BARREL

Pork barrel
- Definition: expenditures approved by Congress for specific "projects/purposes" in a given politician's constituency.
- Magnitude: "24 billion in 2005 for 6,376 pet projects spread among virtually every congressional district" (Boston Globe).
- Examples:
  - $1 million to preserve a sewer in Trenton, NJ, as a historic monument
  - $3.8 million to preserve a baseball stadium in Detroit
  - $223 million to build "a bridge to nowhere"? In 2005, the U.S. Congress considered a bill to approve $223 million to build a bridge connecting Ketchikan, Alaska (population 9,000) and the island of Gravina, Alaska (population 50).
"Puzzle": pork-barrel spending benefits an elected official's own constituents but does little for anyone else's. So why would the congressional representative of a taxpayer in Mississippi agree to spend tax revenue on a sewer in Trenton or a bridge to nowhere in Alaska?

Pork-barrel spending game setup
- Consider three U.S. senators: Senators Barrow, Byrd, and Stevens. Each senator proposes a pork-barrel project every three years. In each period, all 3 senators must vote in favor for the project to be approved.
- A senator earns a single-period payoff of 100 if his/her own project is approved, but a payoff of −25 if someone else's pork-barrel project is approved.
- In the NE of a simple one-shot version of the game, where a senator is chosen randomly to propose a project, the other two reject it (0 > −25).
- The "puzzle" may be explained by the repeated interaction of senators (mean tenure is about 14 years, and some, such as Byrd, last as long as 50 years).

Can the following strategy profile sustain pork-barrel spending as a SPNE?
- Initial period: all 3 senators support any project.
- Subsequent periods: if a senator votes "no" on a project, then the others vote no on that senator's next project.
- After punishing a deviating senator once, all return to supporting all projects.

Analysis of the repeated game
- Suppose Barrow proposes in t = 1, 4, 7, ..., Byrd in t = 2, 5, 8, ..., and Stevens in t = 3, 6, 9, ...
- Will Senator Byrd support Barrow's proposal at t = 1? Yes if Bs ≥ Bd, where
  Bs = −25 + δ·100 + δ^2·(−25) + δ^3·(−25) + δ^4·100 + ... (support)
  Bd = 0 + δ·0 + δ^2·(−25) + δ^3·(−25) + δ^4·100 + ... (deviate)
  The streams differ only in the first two periods, so Bs ≥ Bd iff −25 + δ·100 ≥ 0, i.e. δ ≥ δ1 = 0.25.
- Will Senator Stevens support Barrow's proposal at t = 1?
- Yes if Ss ≥ Sd, where
  Ss = −25 + δ·(−25) + δ^2·100 + δ^3·(−25) + δ^4·(−25) + δ^5·100 + ... (support)
  Sd = 0 + δ·0 + δ^2·0 + δ^3·(−25) + δ^4·(−25) + δ^5·100 + ... (deviate)
  The streams differ only in the first three periods, so Ss ≥ Sd iff −25 − 25δ + 100δ^2 ≥ 0, i.e. δ ≥ δ2 ≈ 0.64.
  [Note that if Stevens deviates on Barrow's proposal, then he votes no on Byrd's as well. Why?]

Analysis of the repeated game (ctd)
- Consider t = 2 (Byrd proposes). The analysis is similar, except that now the next proposer is Stevens, who must have a discount factor above the critical value δ1 = 0.25, while Barrow proposes in two periods and so must have a discount factor above δ2 ≈ 0.64.
- Consider t = 3: this requires Byrd to have a discount factor above δ2 ≈ 0.64.
- In sum, the strategy supports pork-barrel spending as a SPNE iff all senators have δ ≥ 0.64.

Discussion
- In this setup, is pork barrel "bad" for the senators?
- In this setup, is pork barrel bad for constituents?
- How would term limits for senators affect the outcome of the repeated pork-barrel game?

COLLUSION STRATEGIES

Collusion, or price fixing, between firms in a market is generally illegal but is nonetheless used by some firms. Here we focus on one case to illustrate the strategies that can sustain it.

Background on Christie's and Sotheby's
- The two premier auction houses for fine art, founded in London in the mid-18th century.
- They make profits by charging the seller a commission, e.g. 5% of the auction sale price.
- Profits of both houses declined in the early 90s (low sales, and low commission rates from competition with each other).
- The chairmen of the two houses met (1993) and decided to increase commissions.
- Price fixing (collusion) is illegal, and the houses were indicted in 2000 for "fixing" a high commission rate.
- Why do firms need to "agree" on a higher price, and how can they sustain collusion (given that it is illegal, so neither can sue the other for "breach of a price-fixing contract")?

Symmetric pricing stage game
- Unique NE: 6%, but both can be made better off by fixing prices (colluding) at 8%.
- What strategies can sustain collusion?
- Grim trigger (cooperate until a deviation occurs, then revert to the NE forever)
- Temporary reversion to moderate rates (6%)
- Temporary reversion to low rates (price wars)
- Punishing the deviating house and compensating the other

Grim trigger strategy
- In period 1, charge 8%; in subsequent periods, charge 8% if both houses charged 8% in all previous periods, and 6% otherwise. [Note that we can replace the last part of the strategy with "charge 8% if both charged 8% in the previous period and 6% otherwise", since a one-period punishment automatically means both charge 6%, after which the strategy says to continue at 6%.]
- Payoff from cooperation: 5/(1−δ)
- Payoff from deviation (to the best response, given the other house is at 8%): 7 + δ·4/(1−δ)
- Cooperation is sustained as a SPNE iff 5/(1−δ) ≥ 7 + δ·4/(1−δ), i.e. δ ≥ 2/3.
- Note that the punishment phase involves reversion to the stage-game Nash, so it is credible (no need to check again for an incentive to deviate from it).

Temporary reversion to moderate rates (6%)
- Strategy: in period 1, charge 8%. In any other period, charge 8% if both houses charged 8% in all previous periods. If both were supposed to charge 8% and one did not, then revert to 6% for 3 periods, and after that charge 8% again.
- Outcome [see page 427]: collusion at 8% is sustained iff δ ≥ 0.81.
- Three important lessons:
  1. Collusion is harder to sustain when the punishment for deviating is less harsh (δ ≥ 0.81 > 2/3).
  2. If players have δ ≥ 0.81, then the outcome of a trigger strategy that reverts to Nash is the same whether the reversion is permanent or temporary. This holds whenever no punishment occurs in equilibrium (as is the case here, since there are no shocks).
  3. When punishments are triggered along the equilibrium path, the type of strategy used to enforce a given cooperative outcome (e.g. 8%) can matter for the stream of payoffs, and so players may have an incentive to renegotiate (something we have not allowed).

Temporary reversion to low rates (price wars)
- Motivation:
  - We sometimes observe firms competing by pricing below cost. This suggests a price war, which can be captured by the strategy below.
  - If there is any chance of a price war, then firms would want a strategy that lets them return to collusion while still punishing deviations harshly.
  - In the example below, punishment does not occur along the equilibrium path, so for the model to predict observed price wars it would have to be modified.
- Candidate strategy profile: in period 1, charge 8%. In any other period, charge 8% if either (1) both auction houses charged 8% in the previous period, or (2) both charged 4% in the previous period; charge 4% after any other history.

Temporary reversion to low rates (price wars, ctd)
- Incentive to deviate after a history of cooperation:
  - Payoff from cooperation: 5 + δ5 + δ^2·5 + δ^3·5 + ...
  - Payoff from deviation: 7 + δ·0 + δ^2·5 + δ^3·5 + ...
  - No incentive to stop cooperating if 5 + δ5 ≥ 7, i.e. δ ≥ 2/5.
- Incentive to deviate from the punishment:
  - Since the punishment entails setting 4% when the other house is setting 4%, we must verify that there is no incentive to deviate from it.
  - Payoff from following the strategy in a punishment period: 0 + δ5 + δ^2·5 + δ^3·5 + ...
  - Payoff from not following it (deviating to the best response, 6%): 1 + δ·(0 + δ5 + δ^2·5 + ...), since deviating restarts the punishment.
  - No incentive to deviate from the punishment if δ5 + δ^2·5 + ... ≥ 1 + δ·(0 + δ5 + δ^2·5 + ...), i.e. δ ≥ 1/5 (which does not bind relative to the other condition).
- So, if δ ≥ 2/5, the auction houses can maintain a collusive commission rate of 8%.
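The two conditions for the price-war strategy can be checked numerically. This is an illustrative sketch (function names are mine), using the per-period stage payoffs stated above: 5 under collusion at 8%, 7 for a one-shot deviation, 0 during the mutual 4% punishment period, and 1 for deviating from the punishment to 6%.

```python
def pv(stream, delta):
    """Present value of a finite payoff stream (first entry undiscounted)."""
    return sum(u * delta**t for t, u in enumerate(stream))

def no_deviation_from_cooperation(delta, horizon=2000):
    cooperate = [5] * horizon
    deviate = [7, 0] + [5] * (horizon - 2)   # one punishment period, then back to 8%
    return pv(cooperate, delta) >= pv(deviate, delta)

def no_deviation_from_punishment(delta, horizon=2000):
    punish = [0] + [5] * (horizon - 1)       # serve the 4% period, then collude again
    deviate = [1, 0] + [5] * (horizon - 2)   # grab 1 now, but the punishment restarts
    return pv(punish, delta) >= pv(deviate, delta)

# The binding constraint is the cooperation one: delta >= 2/5.
print(no_deviation_from_cooperation(0.41))  # True: collusion holds
print(no_deviation_from_punishment(0.41))   # True: punishment is credible
print(no_deviation_from_cooperation(0.39))  # False: too impatient to collude
```

Evaluating both functions just above and just below 2/5 confirms that the cooperation constraint binds at δ = 2/5 while the punishment-credibility constraint (δ ≥ 1/5) is slack.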
Temporary reversion to low rates (price wars, ctd)
- Two important lessons:
  1. Since the price war entails a harsh punishment, with a price lower than the stage-game NE, we must verify its credibility. It is credible here because the punishment is temporary, so there is the prospect of returning to cooperation.
  2. Harsher temporary punishments (price wars) can be more effective than milder, longer-lasting ones (δ ≥ 2/3 > 2/5).

COOPERATION VIA INTERNATIONAL AGREEMENTS

Motivation
- Policies and actions in one country can have consequences for others: e.g. US tariffs can affect prices received by Mexican exporters, US CO2 emissions contribute to global warming, and missile systems can threaten other countries' security.
- Countries try to address these externalities via international agreements, e.g. the WTO (trade), Kyoto (environment), the ABM treaty (missile defense).
- But most of these agreements are subject to at least two important constraints:
  - They must be self-enforcing: there is no world court to enforce them and impose a penalty if a country does not comply, so a country will only cooperate if it is in its own best interest.
  - They must overcome monitoring problems, at least partially: if countries cannot monitor at all what others have done, then they are unable to detect, and thus to punish, cheating or to reward good behavior.

Anti-Ballistic Missile (ABM) treaty background
- Stability in the Cold War was thought to rely on MAD (mutually assured destruction): even if the US launched a first strike, the USSR had enough missiles left to destroy the US.
- ABMs were aimed at shooting down missiles and thus posed a threat to MAD, since after a first strike a country might survive retaliation thanks to its ABMs.
- Thus a country with ABMs may be more likely to attack, as would a country without them. This fear led to the 1972 ABM treaty between the US and USSR limiting ABMs, which remained in place until 2002.

Stage game in the ABM treaty game
- The stage game has a NE at (High, High) even though (No, No) is Pareto superior.

Repeated ABM game setup
- If the history is common knowledge, then the countries could sustain the cooperative outcome (No, No) using a grim trigger strategy: start with no ABMs and remain without them as long as both countries have no ABMs.
- However, the countries may only have imperfect monitoring: each may only detect the other's ABMs with some probability (given below). Once detection occurs, it is common knowledge.
- With imperfect monitoring we consider a modified strategy:
  - Period 1: no ABMs.
  - Any other period: no ABMs if neither country has observed ABMs in the other in any previous period; high ABMs if either country has observed ABMs in the other in some past period.

Repeated ABM game analysis
- If no ABMs were observed in previous periods, then the payoffs under this strategy are:
  Vn = 10/(1−δ) if the country abides and has no ABMs
  Vl = 12 + δ·[0.1·3/(1−δ) + 0.9·10/(1−δ)] if it chooses low ABMs
  Vh = 18 + δ·[0.5·3/(1−δ) + 0.5·10/(1−δ)] if it chooses high ABMs
- The country chooses no ABMs (abides by the treaty) iff Vn ≥ Vl, which requires δ ≥ 0.74, and Vn ≥ Vh, which requires δ ≥ 0.70.
- It chooses low ABMs iff 0.70 < δ < 0.74, and high ABMs if δ < 0.70.

Other points to note
- What is the impact of imperfect monitoring? It reduces the level of cooperation. To see this, note that if the probability of detection were 1, then a deviation would be quickly detected and punished, so Vl(p=1) < Vl(p=0.1), and similarly for Vh. Better monitoring thus reduces the incentive to deviate.
- A similar structure, and thus similar results, applies if we replace "ABMs" above with other choices, e.g. "import tariffs", so the model can be used to analyze other agreements, such as trade agreements.

EXPERIMENTAL EVIDENCE FOR THE REPEATED PRISONER'S DILEMMA

Theory offers stark predictions for simple prisoner's dilemma situations such as the one presented earlier and played in class.

Predictions: if the game is played
1. once, players will choose mean;
2. finitely many times, players will choose mean in every period;
3. an indefinite or infinite number of times, players are likely to choose nice sometimes;
4. an indefinite number of times, players are more likely to choose nice when the probability of continuation is higher.

Evidence from 390 UCLA undergraduates (fraction who play nice)
1. Once, players will choose mean: fails 9% of the time.
2. Finitely many times, players will choose mean in every period: fails up to 35% of the time. (But note that final-stage play is similar for different fixed T: 9, 7, 11.)
3. Indefinitely or infinitely many times, players are likely to choose nice some of the time: can't reject. (But why does the probability of cooperation fall with the number of rounds?)
4. Indefinite number of times, players are more likely to choose nice if the probability of continuation is higher: can't reject (46 > 31, 41 > 26, etc.).

Cooperative play in one-shot and finitely repeated games is evidence against the theory's predictions. What are some possible explanations?
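The discount-factor cutoffs in the repeated ABM analysis can be reproduced numerically. The sketch below is illustrative (the function names are mine); it uses the payoffs and detection probabilities stated in the analysis: 10 per period under mutual compliance, 12 from low ABMs detected with probability 0.1, 18 from high ABMs detected with probability 0.5, and 3 per period once both countries build high ABMs.

```python
def v_no(delta):
    # Abide by the treaty forever: 10 per period.
    return 10 / (1 - delta)

def v_deviate(delta, stage_payoff, p_detect):
    # Deviate this period; with probability p_detect the breach is spotted
    # and both countries build high ABMs forever (3 per period), otherwise
    # cooperation continues at 10 per period.
    punished = 3 / (1 - delta)
    unpunished = 10 / (1 - delta)
    return stage_payoff + delta * (p_detect * punished + (1 - p_detect) * unpunished)

def abide(delta):
    # Treaty holds iff Vn >= Vl (low ABMs) and Vn >= Vh (high ABMs).
    return (v_no(delta) >= v_deviate(delta, 12, 0.1)
            and v_no(delta) >= v_deviate(delta, 18, 0.5))

# Cutoffs from the analysis: delta >= ~0.74 (vs low ABMs), ~0.70 (vs high).
print(abide(0.75))  # True: the treaty holds
print(abide(0.72))  # False: the low-ABM deviation pays
```

Evaluating `abide` on either side of 0.74 confirms that the low-ABM constraint is the binding one, which is the sense in which imperfect monitoring (p = 0.1 for low ABMs) reduces cooperation.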
This note was uploaded on 10/25/2011 for the course ECON 414 taught by Professor Staff during the Spring '08 term at Maryland.