Econ 330 – Economic Behavior and Psychology
Spring 2010
Professor Sydnor

Problem Set 1 Solutions

Problem 1: When some stores started introducing charges for credit-card transactions, credit-card companies lobbied hard to have stores call the credit-card price the regular price and the difference a “cash rebate,” rather than call the cash price the regular price and the difference a “credit-card surcharge.” Explain how this relates to the concepts we have discussed so far in class.

This is an example of a framing effect. If the credit-card price is called the “regular price,” then that price may be seen as the reference point for how much you have to pay for the good. Paying less in cash then generates the feeling of a “gain,” which some customers might decide is worth giving up the convenience of the credit card. However, if the cash price is labeled the “regular price,” then using a credit card and paying more might be seen as a “loss” relative to the reference point of the low cash price. In that case, more people might decide that the cash price looks attractive, and it might be harder to get people to use credit cards.

Problem 2: Tim and Randy both have simple “prospect-theoretic” preferences for money (m) relative to their reference point (r). Both have preferences of the following form:

V(m, r) = m - r when m ≥ r
V(m, r) = -λ(r - m) when m < r

a) Do these preferences exhibit loss aversion? What are the conditions (if any) on the parameters for which the model exhibits loss aversion?

These preferences exhibit loss aversion only if λ > 1.

b) Do these preferences exhibit diminishing sensitivity?

No. These preferences are just the linear case of prospect theory, where each additional gain or loss has the same marginal impact on utility. The value function is still kinked at zero if there is loss aversion (i.e., λ > 1), but it is not concave in gains and convex in losses.
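These linear preferences are easy to see in code. Here is a minimal sketch (the function name and the numbers plugged in are mine, chosen for illustration, not part of the problem):

```python
def value(m, r, lam):
    """Piecewise-linear prospect-theory value of money m relative to
    reference point r, with loss-aversion coefficient lam."""
    if m >= r:
        return m - r           # gains count one-for-one
    return -lam * (r - m)      # losses are scaled up by lam

# With lam = 2 the value function is kinked at the reference point:
# a $100 loss hurts twice as much as a $100 gain helps.
gain = value(100, 0, 2.0)      # +100
loss = value(-100, 0, 2.0)     # -200
```

Loss aversion corresponds to |loss| exceeding the equal-sized gain, which holds exactly when lam > 1; with lam = 1 the function is a straight line through the reference point and choices reduce to expected value.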
Suppose that both of them currently have $50,000 and are sitting down to lunch with their friend Samuel Paulson. Samuel Paulson offers them each a gamble based on the flip of a fair coin. If the coin comes up heads they will owe Samuel $100, and if the coin comes up tails Samuel will pay them $150.

c) Both Tim and Randy turn down Samuel’s offer. What have we learned about the range of λ?

They compare the expected utility of the gamble to the expected utility of staying put. If they do not gamble, then they keep their current money and are still at the reference point, which has value V(r, r) = v(0) = 0. If they take the gamble, there is a 1/2 chance they end up at -$100 relative to their starting reference point and a 1/2 chance they end up at +$150 relative to their reference point. The expected utility of the gamble is then:

(1/2)v(+150) + (1/2)v(-100) = (1/2)(150) - λ(1/2)(100)

If they prefer V(r, r) = v(0) = 0 for sure, it means that:

(1/2)(150) - λ(1/2)(100) ≤ 0
λ ≥ 1.5

Let’s quickly check that this answer makes sense. It says that you need a coefficient of loss aversion of at least 1.5 to turn down this gamble. So if you did not have loss aversion (i.e., λ = 1), you would want the gamble. That fits, because a person with no loss aversion in this model is “risk neutral” and picks based on expected value. Since the expected value of the gamble is positive (EV = +$25), the risk-neutral person would want the gamble.

Now Samuel offers them the same gamble, but played out twice in succession (i.e., flip the coin once and then again). So, for example, if it comes up heads and then tails, Samuel would owe them $50. The money will be paid out after the two flips of the coin. Randy shouts “What difference does that make?
I did not want to play one round of this bet, so I’m not even going to think about it at all; I won’t play 2 or 3 or 4 or whatever number of bets.” So Randy is Narrow Bracketing. Tim, however, says “Hold on a minute. I’m going to calculate the possible outcomes from this combined gamble. I don’t like 1 flip of the coin, but I might like two flips.” So Tim is Broad Bracketing.

d) Tim does this calculation and then responds that he still does not want to play. What have we learned about the range of Tim’s λ? (Notice that we have learned nothing new about Randy because he is Narrow Bracketing.)

When the coin is flipped twice, there are 3 possible outcomes: a) -$200 (losing twice), b) +$50 (losing once and winning once, in either order), and c) +$300 (winning twice). These outcomes occur with probability 1/4, 1/2, and 1/4 respectively.

If Tim does not take the gamble, he still gets V(r, r) = v(0) = 0 for sure. If he gambles, though, he has expected utility of:

(1/4)v(+300) + (1/2)v(+50) + (1/4)v(-200) = (1/4)(300) + (1/2)(50) - λ(1/4)(200)

To prefer 0 to this expected utility of the gamble implies that:

(1/4)(300) + (1/2)(50) - λ(1/4)(200) ≤ 0
100 ≤ 50λ
λ ≥ 2

Notice that once he is broad bracketing, more of the outcomes occur in the positive domain, leaving him less chance of suffering a loss relative to where he started. That means he would have to be more loss averse to turn down the joint gamble than one play of it in isolation.

Samuel now offers them the same gamble again, but played out 3 times in succession (i.e., flip the coin three times in a row). The money will be paid out after the three flips of the coin. Randy screams “Leave me alone!” Tim says, “Ok, let me now calculate the possible outcomes from 3 independent trials of this gamble and I’ll consider whether I want to take it.”

e) Tim now decides to accept the gamble if they play 3 times.
What have we learned about the range of Tim’s λ?

Now there are 4 different possible outcomes: a) -$300 (losing 3 times), b) -$50 (losing twice and winning once), c) +$200 (winning twice and losing once), and d) +$450 (winning 3 times). These occur with probabilities 1/8, 3/8, 3/8, and 1/8 respectively. So now the expected utility of the gamble is:

(1/8)v(450) + (3/8)v(200) + (3/8)v(-50) + (1/8)v(-300) = (1/8)(450) + (3/8)(200) - λ(3/8)(50) - λ(1/8)(300)

Since he chooses the gamble now over a utility of 0, it means that:

(1/8)(450) + (3/8)(200) - λ(3/8)(50) - λ(1/8)(300) ≥ 0
131.25 ≥ 56.25λ
λ ≤ 2.3333

f) Could we learn anything else about the range of Tim’s λ by offering him the same gamble with more flips of the coin?

No. We now have an upper bound and a lower bound for Tim’s λ (it’s somewhere between 2 and 2.3333). If we add flips, he will just find the gamble more and more attractive and will always accept. But that will just give us a higher upper bound on his λ, which will not be informative.

Problem 3: Jane and Melody frequently play chess together and, to make it interesting, they sometimes play for money. They just had a $100 bet on a chess game; Jane lost and is now reeling from the fact that she just lost $100. Suppose that Jane has the following preferences for money relative to her reference point (which in this case is the amount of money she had before she started playing the game with Melody):

V(m, r) = m - r when m ≥ r
V(m, r) = -λ(r - m) when m < r

Jane is considering challenging Melody to another match, Double or Nothing (i.e., another $100 bet).

a) Given her preferences, how high must the probability (p) that Jane thinks she’ll beat Melody be in order for Jane to levy the challenge?

Now everything that is happening is in the loss domain. Jane is currently at -$100 relative to her reference point.
If she does the double-or-nothing bet, she will end up at either -$200 (if she loses again) or 0 (if she wins and gets back to even). She gets to 0 with probability p, the probability she wins. Because these preferences do not exhibit diminishing sensitivity, Jane is willing to place the bet as long as she thinks her chance of winning is at least 50%. The reason is that a win will increase her utility by λ100 and a loss will decrease it by λ100. The additional loss is exactly as painful as getting back to 0 would be pleasurable on the margin.

In math: to levy the bet it must be that, given her beliefs about the probability of winning (p), she thinks the expected utility of the bet is higher than the expected utility of staying at -$100.

Expected utility of the bet: p·v(0) + (1-p)·v(-200) = -(1-p)λ200
Expected utility of staying put: v(-100) = -λ100

The bet is better than staying put if:

-(1-p)λ200 ≥ -λ100
(1-p) ≤ 100/200 (remember to flip the sign of an inequality when dividing by a negative number)
p ≥ 1/2

b) Does the answer to (a) depend on λ? Explain in intuitive terms why it does or does not depend on λ.

No, it does not. You can see it in the equations above. The intuitive reason is that all of the possibilities here are in the loss domain. There is no chance of a “gain.” Loss aversion is about how much more painful losses are relative to gains. When a choice does not involve some possibility of both gains and losses, loss aversion does not matter. In these cases the only thing that drives behavior over risky gambles is diminishing sensitivity.

Now suppose instead that Jane’s preferences are given by:

V(m, r) = √(m - r) when m ≥ r
V(m, r) = -λ√(r - m) when m < r

c) Give a verbal description of how these preferences differ from those above and what it means for the psychology of how Jane reacts to changes in money.

Now Jane’s value function exhibits diminishing sensitivity.
The first gains relative to the reference point have a bigger marginal impact on utility than gains farther away from the reference point. Similarly, the first losses she might suffer will induce more pain than the additional pain that comes from a similar-sized loss once she has already lost a lot.

d) Given these new preferences, how high must the probability (p) that Jane thinks she’ll beat Melody be in order for Jane to levy the challenge?

Now the first $100 she loses feels much worse than losing an additional $100. So there is more upside (in terms of increased utility) from getting back to even than there is downside (in terms of additional lost utility) from going down to -$200. This means that she should be more willing to make the bet: even if she doesn’t think she’ll win with 50% chance, she would want to place the gamble. But the question is how low the probability could be for her to still be willing to place the gamble. Clearly if she knew she would lose (p = 0), she would not bet. So how do we find the exact number? Well, (as always) we use the expected utility calculation.

Staying put means utility = -λ√100 for sure.
Making the bet means expected utility of: p·0 - λ(1-p)√200

So she wants to bet when:

-λ(1-p)√200 ≥ -λ√100
(1-p) ≤ √100/√200 ≈ 0.7071
p ≥ 0.293

e) How does this problem relate to the “long-shot bias” in end-of-the-day betting at horse races?

Because of diminishing sensitivity, Jane is willing to bet even if she is relatively unlikely to win. The value of “getting even” is very high relative to falling deeper into the hole. The more diminishing sensitivity she has, the more she will be willing to bet on long odds to get herself back to even. This is the same phenomenon we see with horse-track bettors at the end of the day. They want to get back to “even” for the day and as such are willing to place bets on horses with long odds of winning but high payouts. Since the majority of bettors act that way, and since many are down by the end of the day (since the track takes a cut of the bets), their demand for these long shots pushes down the payout on the long shots to the point where they are a really bad bet.

Problem 4: Consider two gambles:

Gamble A is a 50/50 chance to lose $50 and a 50/50 chance to gain $55.
Gamble B is a 50/50 chance to lose $500 and a 50/50 chance to gain $4,000.

In class I argued that the tendency for people to reject small-stakes gambles like Gamble A is inconsistent with the expected-utility-of-wealth model, because in that model someone who turns down Gamble A would also be predicted to turn down pretty obviously favorable gambles like Gamble B. In this problem I want you to create a simple Excel sheet (or other program you like to use) to verify this.

Specifically, assume that Bob has a starting wealth level of $10,000. Also assume that Bob maximizes his expected utility of wealth and that his utility-of-wealth function is given by:

u($X) = X^(1-r)/(1-r)

Here r is the risk-aversion parameter. The higher r is, the greater is risk aversion.
(Note that in the special case where r = 1, this function is the natural log, but you can largely ignore that for this problem if you are moderately clever.)

a) Find the minimum value for r that makes Bob prefer staying at his wealth level of $10,000 to taking Gamble A. (Hint: it will be greater than 1. Feel free to use only one decimal place.)

If Bob takes the gamble, the two potential wealth outcomes are $9,950 and $10,055. Here is the equation for Bob’s expected utility of the gamble:

EU(gamble) = (1/2)·9,950^(1-r)/(1-r) + (1/2)·10,055^(1-r)/(1-r)

If Bob does not take the gamble, he stays at $10,000 for sure, which has (expected) utility:

EU(no gamble) = 10,000^(1-r)/(1-r)

Note that if r = 0, the utility function is linear and Bob would be risk neutral. That means he would choose based on expected value and in this case would choose the gamble. To see that, plug 0 in for r in both equations and you will see that EU(gamble) = 10,002.5, while EU(no gamble) is simply 10,000. Now, as r increases, the utility function gets more concave and Bob becomes more risk averse. That means that at some point, if r were high enough, the slightly higher expected value of the gamble would not be enough to get Bob to accept the gamble.

In order to find that level of r, we can simply put these equations in a spreadsheet and increase r until we find the point where EU(gamble) stops exceeding EU(no gamble). I have put my spreadsheet up on BB as an example of how one can do this. The most important columns have been highlighted in yellow. Take a look at the sheet labeled Part A (use the tabs at the bottom of the Excel worksheet). Column A varies the level of r. If you click on the cells, you will see that I did not tediously type these all in by hand. Instead, I used a formula and then did what is known as “filling down” with that formula. In cell A3 I hand-typed 0 and in cell A4 I hand-typed 0.0001. I started from 0.0001 so that later the grid would never hit r exactly equal to 1, where the formula is messy. Then in cell A5, I typed the following formula:

=A4+.1

That told Excel to add 0.1 to the value in A4. I then highlighted a lot of cells under that in column A and pressed Ctrl+D, a shortcut that tells Excel to use the same formula in all the other cells below. Excel automatically updates the cell references in the formula to be consistent as you go down.
The other columns then simply put in the equations above for u(X) and EU(gamble) = 0.5·u(X1) + 0.5·u(X2). The final column, K, shows the difference EU(gamble) - EU(sure $). This is what we really care about. We see it starts out positive at 2.5 (which is what we saw above) when r = 0. As r gets bigger, the difference in expected utility between the gamble and the sure $ starts to shrink. If you scroll way down, I have highlighted in red the lines where the difference goes from positive (the gamble is better) to negative (the sure $ is better), which occurs right around r = 18.2.

b) Verify that if Bob’s level of r is as in (a) he would turn down Gamble B. Do this by calculating the expected utility of Gamble B and comparing it to the expected utility of Gamble A with the level of r you found in part (a). You could also find the maximum r that Bob could have such that he would accept Gamble B and show that it’s less than the r you found in part (a).

In the sheet labeled Part B, I have generated the expected utilities of Gamble B and the sure $10,000 for the range of r. The only change I made between this worksheet and the sheet for Part A is the values in columns B and D. All the formulas that underlie the values calculated in columns C, E, F, I, and K are the same as before; you can click on the cells and look at the formulas to verify this. Here we see that once r reaches 14.5, Bob prefers the sure $ to the 50/50 gamble. So if his r were 18.2, he would prefer the sure money to the gamble with a 50% chance of losing $500 and a 50% chance of gaining $4,000. That means that if the model predicts he turns down the small gamble, it also predicts he turns down the large gamble.
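The same search can be done in a few lines of code instead of a spreadsheet. This is a sketch, not the BB spreadsheet itself: the function names and the step size are mine, but the utility function, the 0.0001 starting value, and the gambles are the ones described above.

```python
import math

def u(x, r):
    """CRRA utility of wealth: x**(1 - r) / (1 - r), with r = 1 as the log case."""
    if abs(r - 1.0) < 1e-9:
        return math.log(x)
    return x ** (1.0 - r) / (1.0 - r)

def min_rejecting_r(low, high, wealth=10_000.0, step=0.1):
    """Walk r up a grid (starting at 0.0001, as in the spreadsheet) and return
    the first r at which the 50/50 gamble over wealth levels (low, high) is no
    longer strictly preferred to keeping wealth for sure."""
    r = 0.0001
    while 0.5 * u(low, r) + 0.5 * u(high, r) > u(wealth, r):
        r += step
    return r

r_a = min_rejecting_r(9_950, 10_055)   # Gamble A: lose $50 / gain $55
r_b = min_rejecting_r(9_500, 14_000)   # Gamble B: lose $500 / gain $4,000
```

Running this reproduces the spreadsheet numbers: r_a comes out near 18.2 and r_b near 14.5, so anyone risk averse enough to refuse Gamble A is (in this model) more than risk averse enough to refuse Gamble B.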
So here is the logic behind the argument that diminishing marginal utility of wealth cannot be the explanation for why people are risk averse over small stakes: Gamble A is a small-stakes gamble that we could imagine people turning down. With this model, however, a person who turns down Gamble A would also turn down Gamble B. But Gamble B seems very attractive. So we think people will turn down Gamble A, which implies that they would turn down B, but we don’t think they will turn down B. That is the paradox, or more appropriately the inconsistency, in the model.
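That implication can also be checked directly: plug a level of risk aversion at the part (a) threshold into the utility function and evaluate both gambles against the sure wealth (a sketch; the variable names are mine):

```python
R = 18.2  # roughly the minimum risk aversion that makes Bob refuse Gamble A

def u(x):
    """CRRA utility of wealth at risk aversion R (R != 1 here)."""
    return x ** (1 - R) / (1 - R)

sure = u(10_000)                           # staying at $10,000 for sure
eu_a = 0.5 * u(9_950) + 0.5 * u(10_055)    # Gamble A: lose $50 / gain $55
eu_b = 0.5 * u(9_500) + 0.5 * u(14_000)    # Gamble B: lose $500 / gain $4,000

rejects_a = eu_a < sure
rejects_b = eu_b < sure
```

Both comparisons come out in favor of the sure wealth: a Bob who refuses the small-stakes Gamble A is forced by the model to refuse the wildly favorable Gamble B as well.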