feature of Markov chains
Example 1.1. Gambler's ruin. Consider a gambling game in which on any
turn you win $1 with probability p = 0.4 or lose $1 with probability 1 - p = 0.6.
Suppose further that you adopt the rule that you quit playing if your fortune
reaches $N . Of course, if your fortune reaches $0 the casino makes you stop.
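Before formalizing, the game itself can be simulated in a few lines. This is a sketch of my own, not from the text; the function name and parameters are illustrative:

```python
import random

def gamblers_ruin(start, N, p=0.4, seed=None):
    """Simulate one play of the game: win $1 with probability p, lose $1
    with probability 1 - p, and stop when the fortune reaches $0 or $N.
    Returns the whole trajectory X_0, X_1, ..., X_T."""
    rng = random.Random(seed)
    path = [start]
    while 0 < path[-1] < N:
        path.append(path[-1] + (1 if rng.random() < p else -1))
    return path

path = gamblers_ruin(start=5, N=10, seed=42)
print(path[0], path[-1])  # starts at 5; ends at 0 (ruin) or 10 (quit a winner)
```

Because p < 1/2, the game has a downward drift, so most trajectories end at $0.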
Let Xn be the amount of money you have after n plays. Your fortune Xn
has the “Markov property.” In words, this means that given the current state,
Xn , any other information about the past is irrelevant for predicting the next
state Xn+1 . To check this for the gambler’s ruin chain, we note that if you are
still playing at time n, i.e., your fortune Xn = i with 0 < i < N , then for any
possible history of your wealth i_{n-1}, i_{n-2}, ..., i_1, i_0,

P(X_{n+1} = i + 1 | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0) = 0.4

since to increase your wealth by one unit you have to win your next bet. Here
we have used P(B | A) for the conditional probability of the event B given that
A occurs. Recall that this is defined by

P(B | A) = P(B ∩ A) / P(A)
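To make the definition concrete, here is a small worked example of my own (not from the text), computing P(B | A) exactly for two fair dice:

```python
from fractions import Fraction

# Sample space: ordered outcomes of two fair six-sided dice.
omega = [(a, b) for a in range(1, 7) for b in range(1, 7)]

A = {w for w in omega if w[0] + w[1] >= 9}   # event A: the sum is at least 9
B = {w for w in omega if w[0] == 6}          # event B: the first die shows 6

def P(E):
    """Probability of an event E under the uniform distribution on omega."""
    return Fraction(len(E), len(omega))

# P(B | A) = P(B ∩ A) / P(A)
print(P(B & A) / P(A))  # → 2/5
```

Here A has 10 of the 36 outcomes and B ∩ A has 4 of them, so conditioning on A raises the probability that the first die shows 6 from 1/6 to 2/5.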
This document was uploaded on 03/06/2014 for the course MATH 4740 at Cornell, Spring '10, taught by DURRETT.