6.042/18.062J Mathematics for Computer Science                      May 5, 2005
Srini Devadas and Eric Lehman                                      Lecture Notes

Expected Value II

1 The Number-Picking Game

Here is a game that you and I could play that reveals a strange property of expectation.

First, you think of a probability density function on the natural numbers. Your distribution can be absolutely anything you like. For example, you might choose a uniform distribution on 1, 2, ..., 6, like the outcome of a fair die roll. Or you might choose a binomial distribution on 0, 1, ..., n. You can even give every natural number a non-zero probability, provided that the sum of all probabilities is 1.

Next, I pick a random number z according to your distribution. Then, you pick a random number y_1 according to the same distribution. If your number is bigger than mine (y_1 > z), then the game ends. Otherwise, if our numbers are equal or mine is bigger (z >= y_1), then you pick a new number y_2 with the same distribution, and keep picking values y_3, y_4, etc. until you get a value that is strictly bigger than my number, z.

What is the expected number of picks that you must make? Certainly, you always need at least one pick, so the expected number is greater than one. An answer like 2 or 3 sounds reasonable, though one might suspect that the answer depends on the distribution. Let's find out whether or not this intuition is correct.

1.1 Analyzing the Game

The number of picks you must make is a natural-valued random variable. And, as we've seen, there is a nice formula for the expectation of a natural-valued random variable:

\[
\operatorname{Ex}(\#\text{ times you pick}) \;=\; \sum_{k=0}^{\infty} \Pr(\#\text{ times you pick} > k) \tag{1}
\]

Suppose that I've picked my number z, and you have picked k numbers y_1, y_2, ..., y_k. There are two possibilities:

- If there is a unique largest number among our picks, then my number is as likely to be it as any one of yours. So with probability 1/(k+1) my number is larger than all of yours, and you must pick again.

- Otherwise, there are several numbers tied for largest. My number is as likely to be one of these as any of your numbers, so with probability greater than 1/(k+1) you must pick again.

In either case, with probability at least 1/(k+1), you need more than k picks to beat me. In other words:

\[
\Pr(\#\text{ times you pick} > k) \;\geq\; \frac{1}{k+1} \tag{2}
\]

This suggests that in order to minimize your picks, you should choose a distribution such that ties are very rare. For example, you might choose the uniform distribution on {1, 2, ..., 10^100}. In this case, the probability that you need more than k picks to beat me is very close to 1/(k+1) for moderate values of k. For example, the probability that you need more than 99 picks is almost exactly 1%. This sounds very promising for you; intuitively, you might expect to win within a reasonable number of picks on average!...
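The preview breaks off here. As a rough, informal check of the bound in (2), the following is a minimal simulation sketch (not part of the original notes) of the game with a uniform distribution on {1, ..., n}. The function names `play_game` and `estimate_tail`, and the parameters `n`, `trials`, `max_k`, and `cap`, are illustrative choices, not anything defined in the lecture.

```python
import random


def play_game(n, cap=10_000):
    """One round of the number-picking game with the uniform distribution on
    {1, ..., n}: I draw z, then you draw y_1, y_2, ... until some y_i > z.
    If z happens to be the maximum value n, the game never ends, so the
    simulation truncates the round at `cap` picks."""
    z = random.randint(1, n)
    picks = 0
    while picks < cap:
        picks += 1
        if random.randint(1, n) > z:
            break
    return picks


def estimate_tail(n=10**6, trials=100_000, max_k=10):
    """Estimate Pr(# picks > k) empirically and compare it with 1/(k+1)."""
    counts = [play_game(n) for _ in range(trials)]
    for k in range(max_k + 1):
        empirical = sum(1 for c in counts if c > k) / trials
        print(f"k={k:2d}  Pr(picks > k) ~ {empirical:.4f}   1/(k+1) = {1/(k+1):.4f}")


if __name__ == "__main__":
    estimate_tail()
```

With a huge range such as n = 10^6, ties are rare and the empirical tail probabilities track 1/(k+1) closely, which is exactly the regime the notes describe for the uniform distribution on {1, 2, ..., 10^100}.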