a. (The statement of part a is cut off in this excerpt; only the tail of its display, ending $= p$, survives.)
b. Show that
$$P\{X_{n+1} \in A_1, \ldots, X_{n+r} \in A_r \mid X_n = j,\ X_{n-1} \in B_{n-1}, \ldots, X_0 \in B_0\} = P_j\{X_1 \in A_1, \ldots, X_r \in A_r\}.$$
c. Recall the definition of hitting times: $T_i = \inf\{n > 0 : X_n = i\}$. Show that $P_i\{T_i = n + m \mid T_j = n,\ T_i > n\} = P_j\{T_i = m\}$, and conclude that $P_i\{T_i = T_j + m \mid T_j < \infty,\ T_i > T_j\} = P_j\{T_i = m\}$. This is one manifestation of the statement that the Markov chain probabilistically "restarts" after it hits $j$.
d. Show that $P_i\{T_i < \infty \mid T_j < \infty,\ T_i > T_j\} = P_j\{T_i < \infty\}$. Use this to show that if $P_i\{T_j < \infty\} = 1$ and $P_j\{T_i < \infty\} = 1$, then $P_i\{T_i < \infty\} = 1$.
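As a sanity check on part d, here is a minimal simulation sketch. The 3-state transition matrix `P`, the helpers `step` and `hitting_time`, and all numerical choices are invented for illustration; they are not from the text. For a finite irreducible chain, $P_i\{T_j < \infty\} = 1$ for all $i, j$, so in particular every simulated path should return to its starting state.

```python
import random

# Hypothetical irreducible 3-state chain; each row of P sums to 1.
P = [[0.2, 0.5, 0.3],
     [0.4, 0.1, 0.5],
     [0.3, 0.3, 0.4]]

def step(state, rng):
    """Sample the next state from the row P[state]."""
    u, acc = rng.random(), 0.0
    for j, p in enumerate(P[state]):
        acc += p
        if u < acc:
            return j
    return len(P) - 1

def hitting_time(start, target, rng, cap=10_000):
    """T_target = inf{n > 0 : X_n = target}, with X_0 = start.
    Returns None if the target is not hit within cap steps."""
    state = start
    for n in range(1, cap + 1):
        state = step(state, rng)
        if state == target:
            return n
    return None

rng = random.Random(0)
# Irreducibility gives P_0{T_0 < infinity} = 1: every path should return.
returns = [hitting_time(0, 0, rng) for _ in range(2000)]
print(all(t is not None for t in returns))  # expect True
```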
e. Let $i$ be a recurrent state and let $j \neq i$. Recall the idea of "cycles," the segments of the path between successive visits to $i$. For simplicity let's just look at the first two cycles. Formulate and prove an assertion to the effect that whether or not the chain visits state $j$ during the first and second cycles can be described by iid Bernoulli random variables.
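The iid Bernoulli claim in part e can be probed empirically. The sketch below uses an invented 3-state chain (state 0 playing the role of $i$, state 1 the role of $j$); it records whether $j$ is visited during each of the first two cycles and checks that the two indicators have roughly equal means and near-zero empirical dependence.

```python
import random

# Hypothetical 3-state chain for illustration only.
P = [[0.3, 0.3, 0.4],
     [0.5, 0.2, 0.3],
     [0.4, 0.4, 0.2]]

def step(state, rng):
    u, acc = rng.random(), 0.0
    for nxt, p in enumerate(P[state]):
        acc += p
        if u < acc:
            return nxt
    return len(P) - 1

def first_two_cycles_visit(rng, i=0, j=1):
    """Run the chain from i through two returns to i; report whether
    state j was visited during cycle 1 and during cycle 2."""
    visits = []
    for _ in range(2):            # two successive cycles
        state, seen_j = i, False
        while True:
            state = step(state, rng)
            if state == j:
                seen_j = True
            if state == i:        # a cycle ends on return to i
                break
        visits.append(seen_j)
    return visits

rng = random.Random(1)
trials = [first_two_cycles_visit(rng) for _ in range(20000)]
p1 = sum(v[0] for v in trials) / len(trials)      # est. P(visit j in cycle 1)
p2 = sum(v[1] for v in trials) / len(trials)      # est. P(visit j in cycle 2)
p12 = sum(v[0] and v[1] for v in trials) / len(trials)
# iid Bernoulli claim: equal marginals and independence,
# so both differences below should be near 0 (up to Monte Carlo noise).
print(abs(p1 - p2), abs(p12 - p1 * p2))
```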
Stochastic Processes, J. Chang, March 30, 1999

1.3 "It's all just matrix theory"

Recall that the vector $\pi_0$ having components $\pi_0(i) = P\{X_0 = i\}$ is the initial distribution of the chain. Let $\pi_n$ denote the distribution of the chain at time $n$, that is, $\pi_n(i) = P\{X_n = i\}$.
Suppose for simplicity that the state space is finite: $S = \{1, \ldots, N\}$, say. Then the Markov chain has an $N \times N$ probability transition matrix $P = (P_{ij}) = (P(i,j))$,
where $P(i,j) = P\{X_{n+1} = j \mid X_n = i\} = P\{X_1 = j \mid X_0 = i\}$. The law of total probability gives
$$\pi_{n+1}(j) = P\{X_{n+1} = j\} = \sum_{i=1}^{N} P\{X_n = i\}\, P\{X_{n+1} = j \mid X_n = i\} = \sum_{i=1}^{N} \pi_n(i)\, P(i,j),$$
which, in matrix notation, is just the equation $\pi_{n+1} = \pi_n P$.
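The update $\pi_{n+1} = \pi_n P$ is literally one row-vector/matrix multiplication. A minimal numerical sketch (the 2-state matrix and the initial distribution are invented for illustration):

```python
import numpy as np

# Hypothetical 2-state chain for illustration.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi0 = np.array([1.0, 0.0])   # start in state 1 with probability 1

# One application of the law of total probability:
# pi_{n+1}(j) = sum_i pi_n(i) P(i,j), i.e. pi_{n+1} = pi_n P.
pi1 = pi0 @ P
pi2 = pi1 @ P
print(pi1)   # [0.9 0.1]
print(pi2)   # approximately [0.83 0.17]
```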
Note that here we are thinking of $\pi_n$ and $\pi_{n+1}$ as row vectors, so that, for example, $\pi_n = (\pi_n(1), \ldots, \pi_n(N))$.
Thus, we have
$$\pi_1 = \pi_0 P, \qquad \pi_2 = \pi_1 P = \pi_0 P^2, \qquad \pi_3 = \pi_2 P = \pi_0 P^3, \tag{1.12}$$
and so on, so that by induction
$$\pi_n = \pi_0 P^n. \tag{1.13}$$

(1.14) Exercise. Let $P^n(i,j)$ denote the $(i,j)$ element in the matrix $P^n$, the $n$th power of $P$. Show that $P^n(i,j) = P\{X_n = j \mid X_0 = i\}$. Ideally, you should get quite confused about what is being asked, and then straighten it all out.

So, in principle, we can find the answer to any question about the probabilistic behavior
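Equation (1.13) and the content of Exercise (1.14) can be checked numerically: iterating the one-step update agrees with taking the $n$th matrix power in one shot, and each row of $P^n$ is itself a probability distribution (the time-$n$ distribution of a chain started deterministically in that row's state). A sketch with an invented 2-state chain:

```python
import numpy as np

# Hypothetical 2-state chain for illustration.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi0 = np.array([0.5, 0.5])

# Iterate pi_{k+1} = pi_k P five times ...
pi = pi0.copy()
for _ in range(5):
    pi = pi @ P

# ... and compare with pi_0 P^5 computed at once, as in (1.13).
P5 = np.linalg.matrix_power(P, 5)
print(np.allclose(pi, pi0 @ P5))  # True

# Row i of P^5 is the distribution P{X_5 = . | X_0 = i},
# so every row should sum to 1.
print(P5.sum(axis=1))
```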
of a Markov chain by doing matrix algebra, finding powers of matrices, etc. However, what is viable in practice may be another story. For example, the state space for a Markov chain that describes repeated shuffling of a deck of cards contains 52! elements: the permutations of the 52 cards of the deck. This number 52! is large: about 80 million million million million million million million million million million million. The probability transition matrix that describes the effect of a single shuffle is a 52! by 52! matrix. So, "all we have to do" to answer questions about shuffling is to take powers of such a matrix, find its eigenvalues, and so on! In a practical sense, simply reformulating probability questions as matrix calculations often provides only minimal illumination in concrete questions like how many shuffle…
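To see how hopeless the brute-force matrix approach is here, one can at least compute 52! exactly; the transition matrix would have $(52!)^2$ entries, far beyond anything storable.

```python
import math

# 52! = number of orderings of a standard deck.
n = math.factorial(52)
print(n)            # a 68-digit integer
print(f"{n:.2e}")   # 8.07e+67
```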
This document was uploaded on 03/06/2014 for the course MATH 4740 (Spring '10, DURRETT) at Cornell.