g(1) = (1 + 0.6(1.25))/0.75 = 1.75/0.75 = 2.3333

Example 1.46. Tennis. In Example 1.40 we formulated the last portion of
the game as a Markov chain in which the state is the difference of the scores.
The state space was S = {-2, -1, 0, 1, 2}, with 2 (win for server) and -2 (win
for opponent) absorbing. The transition probability was
        2    1    0   -1   -2
 2      1    0    0    0    0
 1     .6    0   .4    0    0
 0      0   .6    0   .4    0
-1      0    0   .6    0   .4
-2      0    0    0    0    1

Let g(x) be the expected time to complete the game when the current state
is x. By considering what happens on one step
g(x) = 1 + Σ_y p(x, y) g(y)

Since g(2) = g(-2) = 0, if we let r(x, y) be the restriction of the transition
probability to {1, 0, -1}, we have
g(x) - Σ_y r(x, y) g(y) = 1

Writing 1 for a 3 × 1 matrix (i.e., column vector) with all 1's we can write this
as
(I - r)g = 1, so g = (I - r)^{-1} 1.
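The linear system can be checked numerically. A minimal sketch with numpy, reading the entries of r off the transition probability above with the transient states ordered (1, 0, -1):

```python
import numpy as np

# Restriction r of the transition probability to the transient
# states, ordered (1, 0, -1): e.g. from state 1 the server wins
# the game w.p. .6 (absorbed) or the score ties at 0 w.p. .4.
r = np.array([[0.0, 0.4, 0.0],
              [0.6, 0.0, 0.4],
              [0.0, 0.6, 0.0]])

# Solve (I - r) g = 1 for the expected time to complete the game.
g = np.linalg.solve(np.eye(3) - r, np.ones(3))
print(g)  # expected game lengths from states 1, 0, -1
```

Solving by hand gives g(1) = 33/13 ≈ 2.54, g(0) = 50/13 ≈ 3.85, g(-1) = 43/13 ≈ 3.31, which the code reproduces.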
There is another way to see this. If N (y ) is the number of visits to y at
times n ≥ 0, then from (1.12)
E_x N(y) = Σ_{n=0}^∞ r^n(x, y)

To see that this is (I - r)^{-1}(x, y)...
This document was uploaded on 03/06/2014 for the course MATH 4740 at Cornell.
Spring '10, DURRETT