Copyright © 2009 by Karl Sigman

1  Gambler's Ruin Problem

Let N ≥ 2 be an integer and let 1 ≤ i ≤ N − 1. Consider a gambler who starts with an initial fortune of $i and then on each successive gamble either wins $1 or loses $1, independent of the past, with probabilities p and q = 1 − p respectively. Let X_n denote the total fortune after the n-th gamble. The gambler's objective is to reach a total fortune of $N without first getting ruined (running out of money). If the gambler succeeds, then the gambler is said to win the game. In any case, the gambler stops playing after winning or getting ruined, whichever happens first.

{X_n} yields a Markov chain (MC) on the state space S = {0, 1, ..., N}. The transition probabilities are given by P_{i,i+1} = p and P_{i,i−1} = q for 0 < i < N, and both 0 and N are absorbing states: P_{0,0} = P_{N,N} = 1.¹ For example, when N = 4 the transition matrix is given by

            ( 1  0  0  0  0 )
            ( q  0  p  0  0 )
        P = ( 0  q  0  p  0 )
            ( 0  0  q  0  p )
            ( 0  0  0  0  1 ).

While the game proceeds, this MC forms a simple random walk

        X_n = X_0 + Δ_1 + ··· + Δ_n,   X_0 = i,

where {Δ_n} forms an i.i.d. sequence of r.v.s. distributed as P(Δ = 1) = p, P(Δ = −1) = q = 1 − p, and represents the earnings on the successive gambles.

Since the game stops when either X_n = 0 or X_n = N, let

        τ_i = min{n ≥ 0 : X_n ∈ {0, N} | X_0 = i}

denote the time at which the game stops when X_0 = i. If X_{τ_i} = N, then the gambler wins; if X_{τ_i} = 0, then the gambler is ruined. Let P_i(N) = P(X_{τ_i} = N) denote the probability that the gambler wins when X_0 = i. P_i(N) denotes the probability that the gambler, starting initially with $i, reaches a total fortune of N before ruin; 1 − P_i(N) is thus the corresponding probability of ruin. Clearly P_0(N) = 0 and P_N(N) = 1 by definition, and we next proceed to compute P_i(N) for 1 ≤ i ≤ N − 1.

Proposition 1.1 (Gambler's Ruin Problem)

        P_i(N) = (1 − (q/p)^i) / (1 − (q/p)^N),   if p ≠ q;
        P_i(N) = i/N,                              if p = q = 0.5.   (1)
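The closed form in Proposition 1.1 is straightforward to evaluate directly. Here is a minimal Python sketch; the function name `win_prob` is my own choice, not from the notes:

```python
def win_prob(i: int, N: int, p: float) -> float:
    """P_i(N): probability of reaching fortune N before ruin, starting from $i.

    Implements Proposition 1.1: (1 - (q/p)^i) / (1 - (q/p)^N) if p != q,
    and i/N in the fair case p = q = 0.5.
    """
    if not 0 <= i <= N:
        raise ValueError("need 0 <= i <= N")
    if i == 0:          # already ruined: P_0(N) = 0
        return 0.0
    if i == N:          # already at the target: P_N(N) = 1
        return 1.0
    if p == 0.5:        # fair game
        return i / N
    r = (1 - p) / p     # r = q/p
    return (1 - r**i) / (1 - r**N)
```

For example, with N = 4 and a fair coin, starting from $2 gives win_prob(2, 4, 0.5) = 2/4 = 0.5; with a favorable p = 0.6 and i = 1, the formula evaluates to 27/65 ≈ 0.415.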
¹ There are three communication classes: C_1 = {0}, C_2 = {1, ..., N − 1}, C_3 = {N}. C_1 and C_3 are recurrent, whereas C_2 is transient.

Proof: For our derivation, we let P_i = P_i(N); that is, we suppress the dependence on N for ease of notation. The key idea is to condition on the outcome of the first gamble, Δ_1 = 1 or Δ_1 = −1, yielding

        P_i = p P_{i+1} + q P_{i−1}.   (2)

The derivation of this recursion is as follows: If Δ_1 = 1, then the gambler's total fortune increases to X_1 = i + 1, and so by the Markov property the gambler will now win with probability P_{i+1}. Similarly, if Δ_1 = −1, then the gambler's fortune decreases to X_1 = i − 1, and so by the Markov property the gambler will now win with probability P_{i−1}. The probabilities corresponding to the two outcomes are p and q, yielding (2). Since p + q = 1, (2) can be rewritten as p P_i + q P_i = p P_{i+1} + q P_{i−1}, yielding

        P_{i+1} − P_i = (q/p)(P_i − P_{i−1}). ...
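The preview cuts off here, but the difference equation above, together with the boundary conditions P_0 = 0 and P_N = 1, already pins down every P_i: iterating it gives P_{i+1} − P_i = (q/p)^i (P_1 − P_0), and summing these differences from 0 to N − 1 must produce P_N − P_0 = 1. The following Python sketch (function name mine) carries out that telescoping numerically:

```python
def win_probs_from_recursion(N: int, p: float) -> list[float]:
    """Solve P_i = p*P_{i+1} + q*P_{i-1} with P_0 = 0, P_N = 1.

    Uses the difference form P_{i+1} - P_i = (q/p)^i * (P_1 - P_0):
    the differences must sum to P_N - P_0 = 1, which fixes d0 = P_1.
    Returns [P_0, P_1, ..., P_N].
    """
    r = (1 - p) / p                        # r = q/p (equals 1 when p = 0.5)
    d0 = 1 / sum(r**k for k in range(N))   # first difference P_1 - P_0
    P = [0.0]
    for k in range(N):
        P.append(P[-1] + d0 * r**k)        # accumulate the telescoped differences
    return P
```

Its output agrees with Proposition 1.1 in both cases: for p = 0.5 it reduces to d0 = 1/N and P_i = i/N, and for p ≠ q the geometric sums reproduce (1 − (q/p)^i)/(1 − (q/p)^N).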