Copyright © 2009 by Karl Sigman

1 Gambler's Ruin Problem

Let $N \geq 2$ be an integer and let $1 \leq i \leq N-1$. Consider a gambler who starts with an initial fortune of $\$i$ and then on each successive gamble either wins $\$1$ or loses $\$1$, independent of the past, with probabilities $p$ and $q = 1-p$ respectively. Let $X_n$ denote the total fortune after the $n$th gamble. The gambler's objective is to reach a total fortune of $\$N$ without first getting ruined (running out of money). If the gambler succeeds, then the gambler is said to win the game. In any case, the gambler stops playing after winning or getting ruined, whichever happens first.

$\{X_n\}$ yields a Markov chain (MC) on the state space $S = \{0, 1, \ldots, N\}$. The transition probabilities are given by $P_{i,i+1} = p$, $P_{i,i-1} = q$, $0 < i < N$, and both $0$ and $N$ are absorbing states, $P_{00} = P_{NN} = 1$.¹ For example, when $N = 4$ the transition matrix is given by
$$
P = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
q & 0 & p & 0 & 0 \\
0 & q & 0 & p & 0 \\
0 & 0 & q & 0 & p \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}.
$$

While the game proceeds, this MC forms a simple random walk
$$
X_n = i + \Delta_1 + \cdots + \Delta_n, \quad n \geq 1, \quad X_0 = i,
$$
where $\{\Delta_n\}$ forms an i.i.d. sequence of r.v.s. distributed as $P(\Delta = 1) = p$, $P(\Delta = -1) = q = 1-p$, and represents the earnings on the successive gambles.

Since the game stops when either $X_n = 0$ or $X_n = N$, let
$$
\tau_i = \min\{n \geq 0 : X_n \in \{0, N\} \mid X_0 = i\}
$$
denote the time at which the game stops when $X_0 = i$. If $X_{\tau_i} = N$, then the gambler wins; if $X_{\tau_i} = 0$, then the gambler is ruined.

Let $P_i(N) = P(X_{\tau_i} = N)$ denote the probability that the gambler wins when $X_0 = i$; that is, $P_i(N)$ is the probability that the gambler, starting initially with $\$i$, reaches a total fortune of $N$ before ruin, and $1 - P_i(N)$ is thus the corresponding probability of ruin. Clearly $P_0(N) = 0$ and $P_N(N) = 1$ by definition, and we next proceed to compute $P_i(N)$, $1 \leq i \leq N-1$.

Proposition 1.1 (Gambler's Ruin Problem)
$$
P_i(N) =
\begin{cases}
\dfrac{1 - (q/p)^i}{1 - (q/p)^N}, & \text{if } p \neq q; \\[1.5ex]
\dfrac{i}{N}, & \text{if } p = q = 0.5.
\end{cases}
\tag{1}
$$

¹ There are three communication classes: $C_1 = \{0\}$, $C_2 = \{1, \ldots, N-1\}$, $C_3 = \{N\}$. $C_1$ and $C_3$ are recurrent, whereas $C_2$ is transient.

Proof: For our derivation, we let $P_i = P_i(N)$; that is, we suppress the dependence on $N$ for ease of notation. The key idea is to condition on the outcome of the first gamble, $\Delta_1 = 1$ or $\Delta_1 = -1$, yielding
$$
P_i = pP_{i+1} + qP_{i-1}. \tag{2}
$$
The derivation of this recursion is as follows: if $\Delta_1 = 1$, then the gambler's total fortune increases to $X_1 = i+1$, and so by the Markov property the gambler will now win with probability $P_{i+1}$. Similarly, if $\Delta_1 = -1$, then the gambler's fortune decreases to $X_1 = i-1$, and so by the Markov property the gambler will now win with probability $P_{i-1}$. The probabilities corresponding to the two outcomes are $p$ and $q$, yielding (2). Since $p + q = 1$, (2) can be re-written as $pP_i + qP_i = pP_{i+1} + qP_{i-1}$, yielding
$$
P_{i+1} - P_i = \frac{q}{p}\,(P_i - P_{i-1}), \ldots
$$
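As a quick numerical check of Proposition 1.1 (not part of the original notes), the following minimal Python sketch compares the closed-form win probability in (1) with a Monte Carlo estimate obtained by playing the game many times. The function names win_probability and simulate, and the example values i = 5, N = 10, p = 0.6, are illustrative choices, not anything from the source.

```python
import random

def win_probability(i, N, p):
    """Closed-form P_i(N) from equation (1): probability of reaching N before 0."""
    if p == 0.5:
        return i / N
    r = (1 - p) / p                  # r = q/p
    return (1 - r**i) / (1 - r**N)

def simulate(i, N, p, trials=100_000):
    """Monte Carlo estimate of P_i(N): play the game `trials` times from fortune i."""
    wins = 0
    for _ in range(trials):
        x = i
        while 0 < x < N:             # keep gambling until absorption at 0 or N
            x += 1 if random.random() < p else -1
        wins += (x == N)
    return wins / trials

# Illustrative example: i = 5, N = 10, p = 0.6; the formula gives about 0.8836,
# and the simulated frequency should be close to that value.
print(win_probability(5, 10, 0.6), simulate(5, 10, 0.6))
```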