Lecture III: Normal Form Games, Rationality and Iterated Deletion of Dominated Strategies

Markus M. Möbius
February 19, 2004

Readings:
• Gibbons, sections 1.1.A and 1.1.B
• Osborne, sections 2.1-2.5 and section 2.9

1 Definition of Normal Form Game

Game theory can be regarded as a multi-agent decision problem. It is useful to first define exactly what we mean by a game. Every normal form (strategic form) game has the following ingredients.

1. There is a list of players D = {1, 2, .., I}. We mostly consider games with just two players. As an example, consider two people who want to meet in New York.

2. Each player i can choose actions from a strategy set S_i. To continue our example, each of the players has the option to go to the Empire State Building or to meet at the old oak tree in Central Park (wherever that is ...). So the strategy sets of both players are S_1 = S_2 = {E, C}.

3. The outcome of the game is determined by the strategy profile, which consists of the strategies chosen by all individual players. For example, in our game there are four possible outcomes: both players meet at the Empire State Building (E, E), they miscoordinate, (E, C) and (C, E), or they meet in Central Park (C, C). Mathematically, the set of strategy profiles (or outcomes of the game) is defined as

   S = S_1 × S_2.

In our case, S has order 4. If player 1 can take 5 possible actions and player 2 can take 10 possible actions, the set of profiles has order 50.

4. Players have preferences over the outcomes of the play. You should realize that players cannot have preferences over actions alone: in a game my payoff also depends on your action. In our New York game, players just want to be able to meet at the same spot. They do not care whether they meet at the Empire State Building or in Central Park. If a player chooses E and the other player does so too, fine! If a player chooses E but the other player chooses C, then both are unhappy.
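The profile set in point 3 can be built mechanically as a Cartesian product. A minimal sketch (not part of the lecture; the names S1 and S2 simply mirror the strategy sets of the New York example):

```python
# Build the set of strategy profiles S = S1 x S2 for the New York game.
from itertools import product

S1 = S2 = ["E", "C"]        # Empire State Building or Central Park
S = list(product(S1, S2))   # all strategy profiles
print(S)                    # [('E', 'E'), ('E', 'C'), ('C', 'E'), ('C', 'C')]
print(len(S))               # order 4

# With 5 actions for player 1 and 10 for player 2, the order is 5 * 10 = 50.
print(len(list(product(range(5), range(10)))))  # 50
```

The order of the profile set is always the product of the sizes of the individual strategy sets, which is why 5 and 10 actions give 50 profiles.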
So what matters to players are outcomes, not actions (of course their actions influence the outcome, but for each action there may be many possible outcomes; in our example there are two possible outcomes per action). Recall that we can represent preferences over outcomes through a utility function. Mathematically, player i's preferences over outcomes are given by

u_i : S → R.

In our example, u_i = 1 if both agents choose the same action, and 0 otherwise. All this information can be conveniently expressed in a game matrix as shown in figure 1. A more formal definition of a game is given below:

Definition 1 A normal (strategic) form game G consists of
• a finite set of agents D = {1, 2, .., I};
• strategy sets S_1, S_2, .., S_I;
• payoff functions u_i : S_1 × S_2 × .. × S_I → R (i = 1, 2, .., I).

We write S = S_1 × S_2 × .. × S_I and call s ∈ S a strategy profile (s = (s_1, s_2, .., s_I)). We denote the strategy choices of all players except player i by s_{-i} = (s_1, s_2, .., s_{i-1}, s_{i+1}, .., s_I).
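The coordination payoffs just described (u_i = 1 on agreement, 0 otherwise) can be written out as a small sketch. This is an illustration under the assumptions of the New York example, not part of the lecture; the dict-of-profiles representation of the game matrix is one convenient choice among many:

```python
from itertools import product

def u(s):
    """Common payoff at profile s = (s1, s2): 1 if the players
    choose the same spot, 0 otherwise."""
    s1, s2 = s
    return 1 if s1 == s2 else 0

# The game matrix as a map from strategy profiles to payoff pairs (u1, u2).
# In this game both players have identical payoffs at every profile.
matrix = {s: (u(s), u(s)) for s in product("EC", repeat=2)}

for profile, payoffs in matrix.items():
    print(profile, payoffs)
```

Printing the dict recovers exactly the 2x2 game matrix of figure 1: payoff (1, 1) on the diagonal profiles (E, E) and (C, C), and (0, 0) on the miscoordination profiles.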