Section 3.4 Independent Events 87

of the game receiving 1 unit from the loser. Any player whose fortune drops to 0 is
eliminated, and this continues until a single player has all n ≡ Σ_{i=1}^r n_i units, with
that player designated as the victor. Assuming that the results of successive games
are independent and that each game is equally likely to be won by either of its two
players, find P_i, the probability that player i is the victor.

Solution. To begin, suppose that there are n players, with each player initially having
1 unit. Consider player i. Each stage she plays will be equally likely to result in her
either winning or losing 1 unit, with the results from each stage being independent.
In addition, she will continue to play stages until her fortune becomes either 0 or n.
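This symmetric dynamic is easy to check empirically. Below is a minimal simulation sketch (not from the text; the player count, trial count, seed, and random pairing rule are my own choices, the last being legitimate since, as argued below, the result does not depend on how the players in each stage are chosen): n players start with 1 unit each, two surviving players are repeatedly chosen at random to play a fair game, and we record how often each player ends up as the victor.

```python
import random

def tournament(n, rng):
    """Play one tournament: n players, 1 unit each; fair 1-unit games
    continue until one player holds all n units. Returns the victor's index."""
    fortunes = [1] * n
    while max(fortunes) < n:
        # pick two distinct players who still have a positive fortune
        alive = [i for i in range(n) if fortunes[i] > 0]
        a, b = rng.sample(alive, 2)
        winner, loser = (a, b) if rng.random() < 0.5 else (b, a)
        fortunes[winner] += 1
        fortunes[loser] -= 1
    return fortunes.index(n)

rng = random.Random(0)
n, trials = 4, 5000
wins = [0] * n
for _ in range(trials):
    wins[tournament(n, rng)] += 1
freqs = [w / trials for w in wins]  # each should be near 1/n = 0.25
```

By symmetry each frequency hovers near 1/n; grouping the players into teams and summing their frequencies likewise illustrates the n_i/n claim made below for teams.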
Because this is the same for all n players, it follows that each player has the same
chance of being the victor, implying that each player has probability 1/n of being the
victor. Now, suppose these n players are divided into r teams, with team i containing
n_i players, i = 1, ..., r. Then the probability that the victor is a member of team i is
n_i/n. But because (a) team i initially has a total fortune of n_i units, i = 1, ..., r, and
(b) each game played by members of different teams is equally likely to be won by
either player and results in the fortune of members of the winning team increasing
by 1 and the fortune of the members of the losing team decreasing by 1, it is easy
to see that the probability that the victor is from team i is exactly the probability
we desire. Thus, P_i = n_i/n. Interestingly, our argument shows that this result does
not depend on how the players in each stage are chosen. ■

In the gambler's ruin problem, there are only 2 gamblers, but they are not assumed
to be of equal skill.

EXAMPLE 4l The gambler's ruin problem

Two gamblers, A and B, bet on the outcomes of successive flips of a coin. On each
ﬂip, if the coin comes up heads, A collects 1 unit from B, whereas if it comes up tails,
A pays 1 unit to B. They continue to do this until one of them runs out of money. If
it is assumed that the successive ﬂips of the coin are independent and each ﬂip results
in a head with probability p, what is the probability that A ends up with all the money
if he starts with i units and B starts with N − i units?

Solution. Let E denote the event that A ends up with all the money when he starts
with i and B starts with N — i, and to make clear the dependence on the initial fortune
of A, let P_i = P(E). We shall obtain an expression for P(E) by conditioning on the
outcome of the first flip as follows: Let H denote the event that the first flip lands on
heads; then

    P_i = P(E) = P(E|H)P(H) + P(E|H^c)P(H^c)
               = p P(E|H) + (1 − p) P(E|H^c)

Now, given that the first flip lands on heads, the situation after the first bet is that
A has i + 1 units and B has N − (i + 1). Since the successive flips are assumed to
be independent with a common probability p of heads, it follows that, from that point
on, A’s probability of winning all the money is exactly the same as if the game were
just starting with A having an initial fortune of i + 1 and B having an initial fortune
of N − (i + 1). Therefore,

    P(E|H) = P_{i+1}

Chapter 3 Conditional Probability and Independence

and similarly,

    P(E|H^c) = P_{i−1}
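The recursion just obtained, together with the boundary conditions P_0 = 0 and P_N = 1, already pins down every P_i. As a quick numerical sanity check (a sketch, not part of the text; the sample values p = 0.6, N = 15 are the illustration used later in this example), one can solve the recursion by forward substitution and compare against the closed form derived in the text, P_i = (1 − (q/p)^i)/(1 − (q/p)^N) for p ≠ 1/2:

```python
def ruin_probabilities(p, N):
    """Solve P_i = p*P_{i+1} + q*P_{i-1} with P_0 = 0, P_N = 1, by forward
    recursion: set P_1 = 1 provisionally, then rescale so that P_N = 1
    (valid because the recursion is linear in the P_i)."""
    q = 1 - p
    P = [0.0, 1.0]  # provisional P_0, P_1
    for i in range(1, N):
        # rearranged recursion: P_{i+1} = (P_i - q*P_{i-1}) / p
        P.append((P[i] - q * P[i - 1]) / p)
    scale = P[N]
    return [x / scale for x in P]

def closed_form(p, i, N):
    """Closed-form solution of the gambler's ruin recursion."""
    q = 1 - p
    if p == 0.5:
        return i / N
    r = q / p
    return (1 - r**i) / (1 - r**N)

P = ruin_probabilities(0.6, 15)
# e.g. P[5] should match (1 - (2/3)**5) / (1 - (2/3)**15), about .87
```

The rescaling trick mirrors the derivation below, which first expresses every P_i in terms of P_1 and only then uses P_N = 1.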
Hence, letting q = 1 − p, we obtain

    P_i = p P_{i+1} + q P_{i−1},    i = 1, 2, ..., N − 1        (4.2)

By making use of the obvious boundary conditions P_0 = 0 and P_N = 1, we shall
now solve Equation (4.2). Since p + q = 1, these equations are equivalent to

    p P_i + q P_i = p P_{i+1} + q P_{i−1}

or

    P_{i+1} − P_i = (q/p)(P_i − P_{i−1})        (4.3)

Hence, since P_0 = 0, we obtain, from Equation (4.3),

    P_2 − P_1 = (q/p)(P_1 − P_0) = (q/p) P_1
    P_3 − P_2 = (q/p)(P_2 − P_1) = (q/p)^2 P_1
    ⋮
    P_i − P_{i−1} = (q/p)(P_{i−1} − P_{i−2}) = (q/p)^{i−1} P_1        (4.4)
    ⋮
    P_N − P_{N−1} = (q/p)(P_{N−1} − P_{N−2}) = (q/p)^{N−1} P_1

Adding the first i − 1 equations of (4.4) yields

    P_i − P_1 = P_1 [ (q/p) + (q/p)^2 + ⋯ + (q/p)^{i−1} ]

or

    P_i = { P_1 (1 − (q/p)^i) / (1 − q/p)    if q/p ≠ 1
          { i P_1                            if q/p = 1

Using the fact that P_N = 1, we obtain

    P_1 = { (1 − q/p) / (1 − (q/p)^N)    if p ≠ 1/2
          { 1/N                          if p = 1/2

Hence,

    P_i = { (1 − (q/p)^i) / (1 − (q/p)^N)    if p ≠ 1/2        (4.5)
          { i/N                              if p = 1/2

Let Q_i denote the probability that B winds up
with i and B starts with N − i. Then, by symmetry to the situation described, and on
replacing p by q and i by N − i, it follows that

    Q_i = { (1 − (p/q)^{N−i}) / (1 − (p/q)^N)    if q ≠ 1/2
          { (N − i)/N                            if q = 1/2
Moreover, since q = 1/2 is equivalent to p = 1/2, we have, when q ≠ 1/2,

    P_i + Q_i = (1 − (q/p)^i) / (1 − (q/p)^N) + (1 − (p/q)^{N−i}) / (1 − (p/q)^N)

              = (p^N − p^{N−i} q^i) / (p^N − q^N) + (q^N − q^i p^{N−i}) / (q^N − p^N)

              = (p^N − p^{N−i} q^i − q^N + q^i p^{N−i}) / (p^N − q^N)

              = 1

This result also holds when p = q = 1/2, so

    P_i + Q_i = 1

In words, this equation states that, with probability 1, either A or B will wind up
with all of the money; in other words, the probability that the game continues indefinitely
with A's fortune always being between 1 and N − 1 is zero. (The reader must
be careful because, a priori, there are three possible outcomes of this gambling game,
not two: Either A wins, or B wins, or the game goes on forever with nobody winning.
We have just shown that this last event has probability 0.)

As a numerical illustration of the preceding result, if A were to start with 5 units
and B with 10, then the probability of A's winning would be 1/3 if p were 1/2, whereas
it would jump to

    (1 − (2/3)^5) / (1 − (2/3)^15) ≈ .87

if p were .6.

A special case of the gambler's ruin problem, which is also known as the problem of
duration of play, was proposed to Huygens by Fermat in 1657. The version Huygens
proposed, which he himself solved, was that A and B have 12 coins each. They play
for these coins in a game with 3 dice as follows: Whenever 11 is thrown (by either
player; it makes no difference who rolls the dice), A gives a coin to B. Whenever 14
is thrown, B gives a coin to A. The person who first wins all the coins wins the game.
Since P{roll 11} = 27/216 and P{roll 14} = 15/216, we see from Example 4l that, for A,
this is just the gambler's ruin problem with p = 15/42, i = 12, and N = 24. The general form
of the gambler’s ruin problem was solved by the mathematician James Bernoulli and
published 8 years after his death in 1713.

For an application of the gambler's ruin problem to drug testing, suppose that two
new drugs have been developed for treating a certain disease. Drug i has a cure rate
P_i, i = 1, 2, in the sense that each patient treated with drug i will be cured with probability
P_i. These cure rates are, however, not known, and we are interested in finding a
method for deciding whether P1 > P2 or P2 > P1. To decide on one of these alternatives,
consider the following test: Pairs of patients are to be treated sequentially, with
one member of the pair receiving drug 1 and the other drug 2. The results for each
pair are determined, and the testing stops when the cumulative number of cures from
one of the drugs exceeds the cumulative number of cures from the other by some
fixed, predetermined number. More formally, let

    X_j = { 1 if the patient in the jth pair that receives drug 1 is cured
          { 0 otherwise

    Y_j = { 1 if the patient in the jth pair that receives drug 2 is cured
          { 0 otherwise

For a predetermined positive integer M, the test stops after pair N, where N is the
first value of n such that either

    X_1 + ⋯ + X_n − (Y_1 + ⋯ + Y_n) = M

or

    X_1 + ⋯ + X_n − (Y_1 + ⋯ + Y_n) = −M

In the former case, we assert that P1 > P2, and in the latter that P2 > P1.

In order to help ascertain whether the foregoing is a good test, one thing we would
like to know is the probability that it leads to an incorrect decision. That is, for given
P1 and P2, where P1 > P2, what is the probability that the test will incorrectly assert
that P2 > P1? To determine this probability, note that after each pair is checked,
the cumulative difference of cures using drug 1 versus drug 2 will go up by 1 with
probability P1(1 − P2) (since this is the probability that drug 1 leads to a cure and
drug 2 does not), or go down by 1 with probability (1 − P1)P2, or remain the same
with probability P1 P2 + (1 − P1)(1 − P2). Hence, if we consider only those pairs
in which the cumulative difference changes, then the difference will go up by 1 with
probability

    P = P{up 1 | up 1 or down 1}
      = P1(1 − P2) / [P1(1 − P2) + (1 − P1)P2]

and down by 1 with probability

    1 − P = P2(1 − P1) / [P1(1 − P2) + (1 − P1)P2]
Thus, the probability that the test will assert that P2 > P1 is equal to the probability
that a gambler who wins each (one-unit) bet with probability P will go down M before
going up M. But Equation (4.5), with i = M, N = 2M, shows that this probability is
given by

    P{test asserts that P2 > P1} = 1 − [1 − ((1 − P)/P)^M] / [1 − ((1 − P)/P)^{2M}]

where

    P / (1 − P) = P1(1 − P2) / [P2(1 − P1)]

For instance, if P1 = .6 and P2 = .4, then the probability of an incorrect decision is
.017 when M = 5 and reduces to .0003 when M = 10.

Suppose that we are presented
whether at least one member of the set has a certain property. We can attack this
question probabilistically by randomly choosing an element of the set in such a way
that each element has a positive probability of being selected. Then the original question
can be answered by a consideration of the probability that the randomly selected
element does not have the property of interest. If this probability is equal to 1, then
none of the elements of the set have the property; if it is less than 1, then at least one
element of the set has the property. The final example of this section illustrates this
technique.

EXAMPLE 4m

The complete graph having n vertices is defined to be a set of n points (called vertices)
in the plane and the \binom{n}{2} lines (called edges) connecting each pair of vertices. The
complete graph having 3 vertices is shown in Figure 3.3. Suppose now that each edge
in a complete graph having n vertices is to be colored either red or blue. For a fixed
integer k, a question of interest is, Is there a way of coloring the edges so that no set
of k vertices has all of its \binom{k}{2} connecting edges the same color? It can be shown by a
probabilistic argument that if n is not too large, then the answer is yes.

FIGURE 3.3: The complete graph having 3 vertices.
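One standard form of the probabilistic argument colors each edge red or blue independently with probability 1/2 and bounds the probability that some k-set is monochromatic by the union bound \binom{n}{k} 2^{1−\binom{k}{2}}; whenever this bound is less than 1, a coloring with no monochromatic k-set must exist. A sketch of the computation (the sample values n = 10, k = 5 are my own choice, not from the text):

```python
from math import comb

def mono_bound(n, k):
    """Union bound on the probability that a uniformly random red/blue
    coloring of the complete graph on n vertices contains a set of k
    vertices whose comb(k, 2) connecting edges are all the same color."""
    # each fixed k-set is monochromatic with probability 2 * (1/2)**comb(k, 2)
    return comb(n, k) * 2 ** (1 - comb(k, 2))

# if the bound is < 1, some coloring of K_n has no monochromatic k-set
assert mono_bound(10, 5) < 1  # 252 * 2**-9, about 0.49, so a valid coloring exists
```

Note that the bound only certifies existence; it gives no recipe for producing the coloring, which is the hallmark of the probabilistic method.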