Bueno de Mesquita

This is a complicated problem because sometimes cheating is hard to detect and sometimes evidence mounts that cheating has taken place when in fact it has not. The available information is sometimes misleading. The problem is further complicated because even if cheating is detected and there is agreement that the culprit should be punished, nations still have to coordinate with each other to establish what the punishment should be and how it should be administered. At this juncture, the risk of problems of collective action arises again as some nations may try to free ride on the benefits of punishing a wayward state. The case of UN peacekeeping operations is an example. Although free riders benefit from the punishment's imposition, they do not share in the possible political or economic costs of the punishment. In fact, payments to the United Nations for peacekeeping efforts are almost always in arrears as members seek the benefits while trying to avoid the costs. In recent years, the United States, particularly, has been notable for its failure to pay its dues and honor other obligations to the United Nations.

LIBERAL THEORIES AND THE PROMOTION OF COOPERATION

In building up a system of cooperation, it is certainly undesirable to punish a nation mistakenly; at the same time, true cheaters must not go unpunished. The mechanisms or political institutions that are developed to monitor adherence to international norms [...]

[Photo caption: Leading military and political officials in Pakistan stand with their heads bowed as a Muslim cleric recites prayers for twenty-three Pakistani soldiers killed in Somalia on June 5, 1993, while serving there as part of the United Nations peacekeeping force. The long line of coffins emphasizes the fact that the collective benefits of peace entail high costs, reminding us why so many states prefer to gain a free ride at the expense of others.]

How the international system can reward cooperation and how it can punish cheaters through well-structured rules and regimes is central to understanding how liberal theory approaches international affairs.

Liberal theory focuses on two main solutions to the problem of promoting cooperation: hegemony and repeated interaction. Each solution can play a prominent role in promoting cooperation, but each also suffers from important deficiencies.

AMERICAN HEGEMONY AND BRETTON WOODS. Under hegemony, a hegemonic, or dominant, state is willing to bear the extra burden of providing public goods, such as enforcing a free trade regime, in order that all may benefit. It is in exactly this sense that liberal theories assume that international politics is hierarchical rather than anarchic. The hegemon is a central authority that is able and willing to enforce agreements and punish cheaters. At the end of World War II, the United States assumed responsibility for providing public goods to the international community—that is, it became a hegemon. It did so by signing the Bretton Woods Agreement. Under the terms of this agreement, the United States took on significant responsibility for helping to stabilize world currencies and control global inflation. By guaranteeing that the dollar could be converted to gold on demand by central banks in other countries, the United States created what was known as a dollar-gold equivalence standard. It provided a means to control inflation and stabilize the world money supply by making the U.S. dollar the world's reserve currency.
Thus, currencies acquired fixed exchange rates pegged to the value of the dollar. The cost of one ounce of gold was set at $35, so that anyone could trade an ounce of gold to the U.S. government for $35. Through this exchange rate mechanism the United States guaranteed the stability of currencies by absorbing the costs of inflation itself. At the same time, the United States joined and strengthened the International Monetary Fund and the International Bank for Reconstruction and Development, now known as the World Bank. These two institutions were designed at Bretton Woods, the former to stabilize currencies and economies and the latter to foster economic recovery and development. Each has evolved since then into a quite different organization with changed functions.

By August 1971 the global economic situation had changed dramatically from the days of American dominance in 1945. With deficits growing in the United States and with pressure from the British and French to convert dollars they held to gold, President Richard Nixon reneged on the agreement reached at Bretton Woods (Gowa 1983). This put an end to the fixed exchange rate system that had been created at Bretton Woods and moved much of the global economy to a system of floating exchange rates. Whereas under Bretton Woods the fixed exchange rate mechanism dampened global inflation by shifting the burden to the United States, under the floating exchange rate system currencies respond to market forces. One consequence of this shift was a rapid devaluation of the dollar against gold and a sustained outbreak of global inflation. Before President Nixon put an end to the Bretton Woods arrangement, for example, gold sold for $35 per ounce. Afterward, it soared to as high as $400 an ounce. Indeed, so dramatic were these changes that the discarding of the Bretton Woods Agreement and its aftermath sparked debate over whether an end to American hegemony had been reached (Gowa 1983; Keohane 1984; Russett 1985; Strange 1987; Nye 1988; Kugler and Organski 1989). Today, however, it seems clear that American hegemony, if anything, has increased.

A significant problem with hegemony as a solution to collective action problems is that, as liberal theorists acknowledge, the international system only rarely sees the emergence of a real hegemon. Furthermore, it can be quite costly for a hegemon to assume the burden of providing public goods, as Nixon's 1971 decision to renege to avoid inflation so dramatically demonstrates. Consequently, a hegemon cannot be counted on to provide public goods, especially when doing so is contrary to its interests. In fact, it is at least as easy to point to historic examples of dominant states using their position to extract tribute from dependent states as it is to find examples of them providing public goods.

The unpredictability of hegemons is one reason that liberal theorists began to investigate regimes and norms as alternative mechanisms that nations use to resolve collective action problems. Little evidence has emerged, however, to demonstrate that behavior is actually altered in response to regimes or norms. Having said this, we will see how international law, international organizations, and domestic political institutions might induce states to behave differently from the way they would if such laws and organizations did not exist.
We will see how, out of self-interest, leaders form and join organizations and agree on rules designed to tie their own hands by limiting their future choices. The earlier discussion of self-regulation of tuna fishing through changing norms already pointed to one way that leaders accept rules that restrict freedom of action for the benefit of long-term interests.

Give some examples of a hegemon providing public goods. What are some examples from history of hegemonic states extracting tribute or wealth from weaker states without in turn providing a public good to resolve a collective action problem?

COOPERATION THROUGH REPEATED INTERACTION. The second solution to fostering cooperation depends on the idea that self-interest can promote cooperation in the long run, even when short-term interests favor conflict, or at least the absence of cooperation. Liberal theory relies here on a concept called the shadow of the future. This concept states that under certain circumstances decision makers who benefit in the short run from noncooperation can be persuaded to engage in cooperative relationships if they are shown that to do so would garner them a long-term stream of benefits (Taylor 1976; Axelrod 1984).

The logic for promoting cooperation when short-term interests encourage noncooperative behavior is best depicted by a game called the prisoners' dilemma. The story behind the prisoners' dilemma—which you can see played out almost any night of the week on just about any television police show—is that two confederates in crime have been arrested. Each is held in a separate cell, with no communication between them. The police do not have enough evidence to convict both of them of the serious crime they allegedly committed. But they do have enough evidence to convict them of a lesser offense. If the prisoners cooperate with each other and both remain silent, they will be charged and convicted of the lesser crime. If they both confess, they will each receive a stiff sentence. However, if one confesses and the other does not, then the former will get off with only a light sentence (as part of a plea bargain) whereas the latter will be put away for a very long time.

Let's call the payoff that each prisoner receives when neither confesses (that is, when they cooperate with each other) the reward (R) and the payoff each receives if they both confess the punishment (P). If one prisoner cooperates by remaining silent while the other defects by confessing, then we will say that the cooperator gets the sucker's payoff (S) and the defector gets a payoff we'll call the temptation (T). In the game of the prisoners' dilemma, T is worth more than R, which is worth more than P, which is worth more than S (T > R > P > S). For repeated versions of the game (that is, when people play it over and over again), we will assume that R is worth more than the average of T and S (R > [T + S]/2), implying that it is better for the players to cooperate than it is for them to alternate between confessing and cooperating over time.

If, for example, R is worth 3 points, T is worth 6 points, and S is worth 1 point, then over time the two players could learn through experience to alternate the T and S payoffs between them. This could be achieved simply by one player choosing to defect when the other chooses to cooperate and then the first player choosing to cooperate when the second defects. This leaves them each with an average benefit of (6 + 1)/2, or 3.50 points, which is larger than R at 3 points.
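To make these conditions concrete, here is a minimal Python sketch (our own illustration, not from the text) that checks the one-shot ordering T > R > P > S and the repeated-play condition R > (T + S)/2 for the example values just given. With T = 6, alternating T and S beats mutual cooperation, exactly as described above.

```python
# Illustrative check of the prisoners' dilemma conditions discussed above.
# Payoff values follow the chapter's example (R = 3, P = 2, S = 1, T = 6);
# the function names are our own.

def is_prisoners_dilemma(T, R, P, S):
    """A one-shot prisoners' dilemma requires T > R > P > S."""
    return T > R > P > S

def cooperation_beats_alternation(T, R, S):
    """For repeated play, mutual cooperation should beat taking turns
    exploiting each other: R must exceed the average of T and S."""
    return R > (T + S) / 2

R, P, S = 3, 2, 1
T = 6
print(is_prisoners_dilemma(T, R, P, S))        # True: the payoff ordering holds
print((T + S) / 2)                             # 3.5: average of alternating T and S
print(cooperation_beats_alternation(T, R, S))  # False: with T = 6, alternation pays more than R
```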
If, however, T is worth less than 5 points—say it is worth 4.50 points—then alternating between cooperation and defection is not as good a strategy as trying to find a way for both to cooperate: (4.50 + 1)/2 = 2.75 points versus R at 3 points.

Table 4-1 displays the possible outcomes of the prisoners' dilemma. Notice that it does not specify the order of play. This is because under the rules of the game, the players each must make their choices without knowing what the other player's choice will be. (Remember, they are being held in separate cells with no communication possible.) The game can be solved by finding the Nash equilibrium. (Recall that a Nash equilibrium is the set of strategies from which no player has a unilateral incentive to switch.)

TABLE 4-1  The Prisoners' Dilemma

                          Player B's Choice
  Player A's Choice       Cooperate       Defect
  Cooperate               R, R            S, T
  Defect                  T, S            P, P

Player A (or Player B) can start by asking himself or herself what the best move to make is if B chooses to cooperate and what the best move is if B chooses to defect. By examining the implications for him or her of B's potential choices, A can determine which move will be most advantageous (though A cannot know what B will ultimately choose to do). Of course, A can also calculate from B's viewpoint, seeing what would be best for B if A cooperates or defects. In this way, both players can formulate their complete plan of action—their strategy—for the game.

The prisoners' dilemma is an interesting way to look at problems of cooperation and conflict because it has a surprising implication. Notice that whatever choice A assumes B will make, A is better off defecting than cooperating. If B cooperates, A will earn T by defecting and only R by cooperating. Because T is more valuable than R, it is in A's self-interest to defect. If A assumes that B will defect, then A earns P by defecting, which is not very good but still better than choosing to cooperate and thereby only earning S (the worst result). Thus, by defecting A can guarantee herself or himself a stiff prison sentence or a chance to get off with only a light sentence but avoid altogether the possibility of receiving a very long prison sentence. The same logic holds for B. Whatever A decides to do, B is better off defecting.

Defection is each player's dominant strategy. In consequence, they each will end up with the second-worst outcome and be handed a stiff prison term. Had they been able to coordinate their choices and cooperate with each other, they could have guaranteed themselves a light sentence, the second-best outcome. Thus, by choosing rationally they each suffered an outcome that was worse than what they would have gotten if they had cooperated. This type of outcome is said to be Pareto inferior. In contrast, a Pareto optimal outcome is one in which no player is made worse off and at least one is made better off. Joint cooperation is Pareto optimal, but the players do not seem to have a rational path to get there because no matter what the other player is expected to do, each finds that defecting dominates cooperating because it earns a bigger reward. This is the dilemma. If international politics frequently involves situations like this, then it seems that conflict rather than cooperation would prevail, as suggested by neorealism's focus on anarchy. Many situations in international relations mimic the conditions of the prisoners' dilemma.
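The best-response reasoning behind Table 4-1 can also be expressed mechanically. The short Python sketch below is our own illustration (the payoff numbers are example values consistent with the ordering above, and the helper names are ours): it computes each player's best response to each possible move by the other and confirms that mutual defection is the only Nash equilibrium, even though it is Pareto inferior to mutual cooperation.

```python
# Best responses and the Nash equilibrium of the prisoners' dilemma in Table 4-1.
# Example payoffs consistent with T > R > P > S (illustrative values only).

T, R, P, S = 6, 3, 2, 1

# payoffs[(a_move, b_move)] = (payoff to A, payoff to B)
payoffs = {
    ("Cooperate", "Cooperate"): (R, R),
    ("Cooperate", "Defect"):    (S, T),
    ("Defect",    "Cooperate"): (T, S),
    ("Defect",    "Defect"):    (P, P),
}
moves = ("Cooperate", "Defect")

def best_response_A(b_move):
    """A's payoff-maximizing move, given B's (hypothetical) move."""
    return max(moves, key=lambda a: payoffs[(a, b_move)][0])

def best_response_B(a_move):
    """B's payoff-maximizing move, given A's (hypothetical) move."""
    return max(moves, key=lambda b: payoffs[(a_move, b)][1])

# A Nash equilibrium is a pair of moves that are mutual best responses.
equilibria = [(a, b) for a in moves for b in moves
              if best_response_A(b) == a and best_response_B(a) == b]

print(equilibria)                     # [('Defect', 'Defect')]
print(payoffs[("Defect", "Defect")])  # (2, 2): Pareto inferior to (3, 3) from joint cooperation
```

Whatever B does, A's best response is to defect, and vice versa, which is exactly the dominant-strategy logic described above.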
Consider the example of telecommunications in the United States and Mexico. The Mexican government wants to sell its telephone services to Spanish speakers in the United States while still protecting its fledgling telephone industry against American competition. When Mexico privatized its telephone company (Telefonos de Mexico), it guaranteed the company a continuing monopoly for about a decade so that it could get on its feet, forge strategic alliances (which it did with Sprint), upgrade its equipment, and thereby compete in the marketplace. Although American telephone service providers such as MCI, Sprint, and AT&T would prefer to avoid negotiations with Telefonos de Mexico and enjoy open access to the Mexican telephone market, they also want to prevent the Mexican company from enjoying equal access to the large Spanish-speaking telephone marketplace in the United States. The Mexican government, being sensitive to its domestic political situation, is protecting its industry even as it seeks to gain free access for its phone company to the U.S. market. The United States, for its part, has also imposed restrictions on behalf of its home industry in an effort to reduce competition from Telefonos de Mexico for the U.S. Latino market. In effect, both "players" (Mexico and the United States) have sought T, leaving their opponents with S. Had each government opened its telephone market fully at the outset, each country's industry would have concentrated on the market niches in which it could be most competitive and productive. American and Mexican consumers would have enjoyed the greatest benefits. By working cooperatively and promoting free trade in telecommunications, then, each would have achieved the best outcome for both, R. Instead, because P (both governments impose restrictions on access by the other country's telephone services) is better than S, and T is better than R, each has followed a protectionist, regulatory policy that prevents achieving the best outcome for both governments through cooperation. Resistance to free trade globally arises from trade involving the prisoners' dilemma, where each state wants to protect its own industry but enjoy unfettered access to the markets in other countries.

International players may find themselves involved in this type of troubling situation over and over across an indefinite period of time. For example, during the cold war years the United States and the Soviet Union faced off repeatedly in situations where mutual cooperation would have benefitted both but mutual distrust prevented (potentially costly) attempts at cooperation. Distrust, in fact, is at the heart of the prisoners' dilemma and at the heart of arms races. Because the prisoners' dilemma is a noncooperative game, promises made by either player or both players to cooperate with the other mean nothing. Whatever agreement might have been reached previously, each should recognize that the other player could exploit the situation by defecting. So neither can count on any promise given by the other. This is a perennial problem when rival states unilaterally agree to reduce arms. The promise is not binding, nor is it credible, and if one state disarms and the other does not, the one that cheats gains a significant advantage. This is also a problem in trade relations where promises to open markets are made but no means of enforcing those promises are adopted. How can one escape the prisoners' dilemma?
Suppose that the sucker's payoff is bad, but not fatal. That is, suppose it is something from which one can recover over time. If the game is played an indefinite number of times, then it makes sense to experiment by starting out by cooperating. If the other player also cooperates, both are better off. If the other player does not cooperate, he or she can be punished if the first player then chooses not to cooperate again. Over an indefinite period of repetition, the small, one-time loss from that initial sucker's payoff becomes trivial against the possible benefit if the other player subsequently cooperates, provided enough value is attached to future payoffs.

If this is the case, then each player can credibly declare that his or her strategy will be to make the move the other player made in the previous round of interaction. If a player defects, then both players will get caught up in a cycle of repeated defection; if a player cooperates, however, a cycle of cooperation can continue indefinitely. Axelrod (1984) has shown that if the shadow of the future is large enough to allow a player to recover from a temporary setback, then possible equilibria of the game include cooperation. The key is that each player must believe that there is sufficient time to recover from a setback and that the risk of setback is amply rewarded by the prospects of a stream of high payoffs later resulting from cooperation. Defecting now and exploiting the cooperation of the other player provides a short-term benefit, but one that is more than offset by the indefinite stream of punishment that follows when the other player stops cooperating too.

How can players credibly promise to cooperate with one another when they are involved in an indefinitely repeating prisoners' dilemma? It turns out that the solution depends on being able to communicate to the other player how you plan to play the game and establish a credible scheme for punishing cheaters. The North American Free Trade Agreement (NAFTA) between the United States, Canada, and Mexico is, in essence, a declaration of what each country's strategy is for dealing with trade relations in the future. Each promises to keep its market open to the others largely unfettered by tariffs and nontariff barriers. Although there are areas where nontariff barriers exist within NAFTA (for example, U.S. environmental requirements imposed on Mexico), these are part of the agreement and so do not represent cheating. NAFTA has rules and procedures for mediating disputes over alleged cheating. But even without an international regime like NAFTA, it is possible for mutual self-interest to be effective in designing a strategy that leads to cooperation between states engaged in an indefinitely repeated prisoners' dilemma.

A strategy called tit-for-tat, or "do-unto-others-what-they-just-did-to-you," is an effective way to play the prisoners' dilemma game when it is repeated indefinitely (or infinitely) and when the shadow of the future is sufficiently large.⁹ Tit-for-tat simply involves doing on each move what the other player did to you on the previous move. If Player A defects in any round of play, then Player B will defect in the next round. In this way Player B punishes Player A for cheating. If A cooperates in any round, then B will cooperate in the next round. This is B's way of rewarding A rather than exploiting A's cooperation. The same, of course, holds for Player A.
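A small simulation makes the reciprocity mechanism easy to see. The Python sketch below is our own illustration (the strategy is the tit-for-tat rule just described; the payoff values are placeholders satisfying T > R > P > S, not the author's code): two tit-for-tat players lock into mutual cooperation, while a single defection is answered with a defection in the very next round.

```python
# A rough simulation of tit-for-tat in a repeated prisoners' dilemma
# (illustrative payoffs satisfying T > R > P > S).

T, R, P, S = 6, 3, 2, 1

def payoff(my_move, other_move):
    """My per-round payoff given both moves ('C' = cooperate, 'D' = defect)."""
    return {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(my_move, other_move)]

def tit_for_tat(my_history, other_history):
    """Cooperate first, then copy the other player's previous move."""
    return "C" if not other_history else other_history[-1]

def play(strategy_a, strategy_b, rounds=6):
    hist_a, hist_b, total_a, total_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        total_a += payoff(a, b)
        total_b += payoff(b, a)
        hist_a.append(a)
        hist_b.append(b)
    return hist_a, hist_b, total_a, total_b

# Two tit-for-tat players cooperate in every round.
print(play(tit_for_tat, tit_for_tat))

# A player who defects once is punished on the next move; with plain tit-for-tat
# that single defection then echoes back and forth between the two players.
def defect_once_then_tft(my_history, other_history):
    return "D" if not my_history else tit_for_tat(my_history, other_history)

print(play(defect_once_then_tft, tit_for_tat))
```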
Such a cooperative move by B (or A) would not be rational if the game were played a known number of times, but it is rational when the game is played indefinitely with a large shadow of the future so that there is a big cumulative impact on each decision maker's welfare from cooperating. Tit-for-tat is what Axelrod terms a "nice" strategy. It is quick to forgive and quick to punish; it is also easy for each decision maker to observe the emerging pattern of play.

⁹ If δ is the shadow of the future, then it is sufficiently large when δ > (T − R)/(T − P) and δ > (T − R)/(R − S). If these inequalities are satisfied, then cooperation can be a subgame perfect strategy.

Tit-for-tat cannot succeed in making cooperation an equilibrium strategy if the repetitions of the prisoners' dilemma are for a known number of times. In fact, in such a situation, the dilemma cannot be escaped. The reason is simple. Suppose you and I were to play this game five times. We might each promise to cooperate at the outset. We might even play a nastier strategy than tit-for-tat that increases the cost of punishment. We might follow a punishment strategy called a grim trigger. Under this punishment strategy, I declare that if you defect even once—even by accident—I will never cooperate again. It is easy to see that tit-for-tat becomes indistinguishable from the grim trigger once someone has defected. Now, it is straightforward for me to calculate that I cannot punish you if you defect the fifth time we play the game because there will not be a sixth repetition. Of course, you realize that the same holds for me. So, we each have an incentive to defect in the fifth round because at this point the game is not going to be repeated and there can be no punishment for defecting. That means that the fourth round of play really seems like the last part of the repeated game. However, I already know that you have a dominant strategy in the fifth round and that that strategy is to defect. As such, the fourth round really is now like the last repetition because I will have no subsequent opportunity to punish you for defecting. Therefore, because each of us will defect in the fourth round, round three will become like the last repetition, and so on down to round one. When the number of repetitions is known, the chance to cooperate unravels, pushing us to defect even in round one because there will be no opportunity to recover from the sucker's payoff in the future by avoiding the punishment payoff and obtaining the reward payoff.

How large must the shadow of the future be to induce players to play tit-for-tat and cooperate? To see the answer let us be more precise about the idea of a shadow of the future. The idea is that people attach more value to something that they receive today than they do to the same thing received tomorrow or the day after or the day after that. That is, they discount the value of something to be received in the future as compared with something they get now. Let us define the shadow of the future as δ such that 0 < δ < 1. The larger δ is, the larger is the shadow of the future. Suppose R = 3, T = 4, P = 2, and S = 1. If players cooperate, then they each receive 3 the first time they interact and place a value of 3δ on cooperating a second time, 3δ² on the third cooperative interaction, 3δ³ on the next cooperative interaction, and so on. If they cooperate repeatedly over an infinite time horizon, the sum of their expected payoff equals 3/(1 − δ).
That is, the sum of the infinite series Rδ⁰ + Rδ¹ + Rδ² + Rδ³ + … is known to converge on the value R/(1 − δ) provided 0 < δ < 1. Now suppose one player defects and the other cooperates. The defector gains the big payoff of T for that round but then faces a payoff of P for all subsequent rounds because the other player switches to defection as a punishment. Then the original defector's payoff is 4 + 2δ + 2δ² + …, which can be summarized as 4 + 2δ/(1 − δ). Suppose δ = 0.90 for each player; then if both players always cooperate, each receives a utility of 3 for each round across an infinite horizon of rounds. The current value a player attaches to 3 each round over that horizon, discounted by δ = 0.90, is equivalent to a payoff of 30—that is, 3/(1 − 0.90). If one player defects in the first round and then faces the punishment payoff for the rest of the game, the current discounted value of the payoff is 4 + 2δ/(1 − δ) in this case; that is, 4 + (2 × 0.90)/(1 − 0.90), or 22. In fact, gaining the temptation payoff T and then facing punishment still leaves the cheater slightly ahead of the cooperators through the first two rounds (5.80 versus 5.70), even with a discount factor or shadow of the future as high as 0.90 and given the payoffs as currently valued. By the third round of interaction, however, the cooperators have earned 8.13 and the cheater has earned only 7.42. What about the victim of cheating? In the first round, this player gets the sucker's payoff of 1 and then, having chosen the grim trigger punishment strategy, receives 2δ/(1 − δ) for the remaining period of play. For the assumed payoff values, never cooperating if someone once cheats you leads to a cumulative payoff equivalent to 19 over an infinite horizon. That is, 1 + [(2 × 0.90)/0.10] = 19. Clearly not only is the cheater better off if the players can get back on the path to cooperation (30 rather than 22), but so is the one doing the punishing (30 rather than 19). This makes the threat of permanent punishment not credible because the two players have an incentive to renegotiate after a period of punishment so that they can switch to cooperation and improve their lot. Still, there is no guarantee that they will cooperate forever.

It is important to recognize that with a large enough shadow of the future, and with indefinite repetition, cooperation can be an equilibrium strategy, and therefore the prisoners' dilemma can be escaped. But we must also realize that cooperation is not the only equilibrium strategy, even with indefinite or infinite repetition. Defection and just about every mix of moves in between always defecting and always cooperating are other possible equilibrium strategies. In fact, a well-known result in game theory, called the Folk Theorem, is that almost any combination of moves can be an equilibrium if a game is repeated an infinite or indefinite number of times. It is also important to note that tit-for-tat is an effective, but not foolproof, way to encourage cooperation in the indefinitely repeated prisoners' dilemma. As the examples above show, there can be incentives to cheat from time to time provided a switch back to temporary cooperation can be negotiated quickly enough. What is more, valuing the future a lot does not always guarantee an increased incentive to cooperate. That depends on the structure of the situation.
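Footnote 9's sufficiency condition can be checked directly. The short Python sketch below is our own (it uses the example payoffs T = 4, R = 3, P = 2, S = 1 introduced above); it computes the two bounds that δ must exceed for tit-for-tat cooperation to be sustainable.

```python
# Check of the footnote's condition on the shadow of the future
# (illustrative; payoffs are the example values from the text).

T, R, P, S = 4, 3, 2, 1

bound_one = (T - R) / (T - P)   # delta must exceed (T - R) / (T - P)
bound_two = (T - R) / (R - S)   # delta must exceed (T - R) / (R - S)
required_delta = max(bound_one, bound_two)

print(bound_one, bound_two)     # 0.5 0.5
print(0.90 > required_delta)    # True: delta = 0.90 comfortably exceeds both bounds
```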
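The discounted payoff streams in the passage above can also be verified numerically. The sketch below is our own illustration and assumes, as the text describes, that a defector earns T once and then P in every later round while the grim-trigger punisher earns S once and then P; a long finite horizon stands in for the infinite one.

```python
# Numerical check of the discounted payoff streams with delta = 0.90
# (our illustration; a 500-round horizon approximates the infinite sum).

delta = 0.90
R, T, P, S = 3, 4, 2, 1
horizon = 500

def present_value(stream):
    """Discounted sum of a per-round payoff stream: sum of x_t * delta**t."""
    return sum(x * delta**t for t, x in enumerate(stream))

cooperate_forever     = [R] * horizon              # ~ R / (1 - delta) = 30
defect_then_punished  = [T] + [P] * (horizon - 1)  # ~ T + P*delta/(1 - delta) = 22
victim_then_punishing = [S] + [P] * (horizon - 1)  # ~ S + P*delta/(1 - delta) = 19

print(round(present_value(cooperate_forever), 2))
print(round(present_value(defect_then_punished), 2))
print(round(present_value(victim_then_punishing), 2))

# Cumulative discounted earnings over the first two and three rounds, as in the text:
print(round(present_value([R, R]), 2), round(present_value([T, P]), 2))        # 5.7 vs 5.8
print(round(present_value([R, R, R]), 2), round(present_value([T, P, P]), 2))  # 8.13 vs 7.42
```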
Robert Powell (1999) has shown that in situations in which players can punish short-term exploitation in the long term, as in the prisoners' dilemma, a large shadow of the future encourages cooperation. In those cases, benefits are netted immediately through exploitation; in consequence, future costs are high. The desire to avoid those high costs encourages cooperation. The trade-off between current and future costs and benefits in a "guns versus butter" setting looks quite different, however. In some situations states that spend money on arms ("guns") now rather than on current consumption ("butter") acquire long-term rewards for short-term defection. The more a state values future consumption, the more attractive it is to that state to spend on the military now so that the state is in a stronger position to attack a rival and secure additional consumption opportunities by extracting resources from the vanquished state in the future. In this case, a large shadow of the future makes cooperation less likely because costs are borne now (some current consumption is forgone to build up military capabilities for the future), but there are future rewards from defecting. Depending on the temporal sequence of costs and benefits, a large shadow of the future can make cooperation more likely or less likely.

Liberal theory is not as parsimonious as neorealism, but it does provide an improved basis for understanding cooperation. In doing so, it also provides a basis for understanding conflict precipitated by collective action problems. It is less successful in explaining how cooperation may be achieved in situations where conflict and competition are brought about by fundamental disagreements rather than by internecine arguments over the division of a commonly shared pie (Brams and Taylor 1996). Neither is it effective in handling distributive problems, especially those not combined with commitment or coordination problems. Regimes and norms are useful ways of thinking about coordination or commitments, but they are not well suited to handling genuine conflicts of interest such as those that arise with distribution problems. In situations where one side's gains come directly at the expense of the other side, with no offsetting compensation for the loser, liberal theory has little to offer. Wars are sometimes thought of as zero-sum games, in which the winner wins exactly what the loser loses. Two-player zero-sum games do not have cooperative solutions precisely because the two parties have opposing interests. However, even zero-sum situations can offer incentives for some participants to cooperate if there are three or more players.

CONFLICT AND UNCERTAINTY. Liberal theorists view conflict as a product of uncertainty or misinformation about the intentions of other states. If violations of norms of behavior could always be detected and punished sufficiently to make cheating unacceptably costly, then collective action problems would be resolved. Everyone would have a strong incentive to cooperate. Cheating and free riding on the efforts of others would be eliminated (Palmer 1990; Sandler 1992). If the coordinating mechanisms of regimes, norms, and the like are working effectively, then they are disseminating information to the states that make up the international system. Information is presumed to help states avoid conflict because they know that others will know if they misbehave (Axelrod and Keohane 1986; Haas 1992).
Keohane, for example, maintains that

    international systems containing institutions that generate a great deal of high-quality information and make it available on a reasonably even basis to the major actors are likely to experience more cooperation than systems that do not contain such institutions, even if fundamental state interests and the distribution of power are the same in each system. (1984, 245)

The view that information improves cooperation is somewhat problematic, however. Even when states have complementary interests—which is always the case when there is a coordination problem between them—they may also have distributional issues that create a conflict of interest. If this is true, then from a logical standpoint information will not always improve cooperation. As we will see in Chapter 17, when we zoom in on arguments about norms that enhance cooperation, it is entirely feasible for decision makers to choose a violent, conflictual course of action because they are well informed and to eschew such behavior when they are suffering from uncertainty or incomplete information about the capabilities or intentions of others.

A brief example may help illustrate the point. Rivals in war often have common interests that can be realized only through mutual agreement, as required in liberal theory. The treatment of prisoners and the regulation of certain weapons are just two examples (Morrow 1998). Germany and the Allied powers (Britain, France, Russia, and the United States) had a common interest in ending World War II on a mutually acceptable basis. The problem was how to achieve such an agreement. One way would have been to weaken one side's position so severely that it was prepared to accept an unconditional surrender. In fact, this is what happened. Such a solution is costly, and states generally look for other ways to resolve disputes. Indeed, unconditional surrender is rare. Even Japan was allowed to impose one condition (preservation of its emperor) on its surrender in 1945 despite the devastation of Hiroshima and Nagasaki. A more recent example would be Saddam Hussein's avoidance of an unconditional surrender at the end of the Gulf War, despite the fact that his armed forces were completely routed (Haselkorn 1999).

Let's consider how high-quality information might have influenced the eventual resolution of World War II. German chemists had developed nerve gas to which there was no known antidote well before the war was over. Such a highly lethal weapon can quickly kill or incapacitate large numbers of people. The German government, as we know, was not reluctant to use toxic gases against civilian populations, as long as there was no credible threat of retaliation in kind by the Allies. Millions of innocent people were murdered in German concentration camps, many by lethal doses of cyanide. Hitler and others in Germany believed, erroneously, that the United States had developed nerve gas. The primary basis for their belief was recognition that many of Germany's best chemists were living in exile in the United States. Hitler apparently believed that because they were the best, they too had developed nerve gas (Brown 1968). Had he known the truth—had he possessed high-quality information on this matter—he might very well have ordered the use of nerve gas in combat, knowing that the Allies could not retaliate in kind.
It is conceivable that use of nerve gas over cities would have had an effect on the Allies comparable to the effect that use of the atom bomb had on Japan. It is plausible that the use of nerve gas would have prompted a conditional surrender at war's end rather than the unconditional surrender ultimately imposed. A conditional peace would have been potentially disastrous, perhaps leaving the Nazi regime in power in Germany. Secrecy, then, led to a better result than one that might have been obtained with high-quality information. Better information is not a guarantor of cooperation, and poorer information does not necessarily make conflict more likely.