Next, he chooses $X_3$ according to row 2 of $P$. And so on.
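The step-by-step simulation recipe just described can be sketched in code. This is only an illustration, not part of the text: row 2 of the matrix below is taken to be $(1/3, 0, 2/3)$, matching the conditional distribution computed in the example, while rows 1 and 3 are hypothetical values invented here.

```python
import random

# States are 1, 2, 3.  Row i of P gives the distribution of the next
# state given that the current state is i.  Row 2 is (1/3, 0, 2/3),
# as in the text's example; rows 1 and 3 are hypothetical.
P = [
    [0.2, 0.6, 0.2],  # row 1 (hypothetical)
    [1/3, 0.0, 2/3],  # row 2 (from the text)
    [0.5, 0.3, 0.2],  # row 3 (hypothetical)
]

def simulate(x0, n, rng=random):
    """Simulate X_0, ..., X_n: at each step, sample the next state
    from the row of P indexed by the current state."""
    path = [x0]
    for _ in range(n):
        row = P[path[-1] - 1]   # the current state selects a row of P
        path.append(rng.choices([1, 2, 3], weights=row)[0])
    return path

print(simulate(3, 5))
```

Each new state is drawn using only the current state's row, which is exactly the source of the Markov property discussed next.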
Stochastic Processes, J. Chang, March 30, 1999. 1. MARKOV CHAINS, Page 14

1.2 The Markov property
Clearly, in the previous example, if I told you that we came up with the values $X_0 = 3$, $X_1 = 1$, and $X_2 = 2$, then the conditional probability distribution for $X_3$ is
$$P\{X_3 = j \mid X_0 = 3,\ X_1 = 1,\ X_2 = 2\} = \begin{cases} 1/3 & \text{for } j = 1 \\ 0 & \text{for } j = 2 \\ 2/3 & \text{for } j = 3, \end{cases}$$
which is also the conditional probability distribution for $X_3$ given only the information that $X_2 = 2$. In other words, given that $X_0 = 3$, $X_1 = 1$, and $X_2 = 2$, the only information relevant to the distribution of $X_3$ is the information that $X_2 = 2$; we may ignore the information that $X_0 = 3$ and $X_1 = 1$. This is clear from the description of how to simulate the chain! Thus,
$$P\{X_3 = j \mid X_2 = 2,\ X_1 = 1,\ X_0 = 3\} = P\{X_3 = j \mid X_2 = 2\} \quad \text{for all } j.$$
This is an example of the Markov property.
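This equality of conditional distributions can also be checked by Monte Carlo. The sketch below is not from the text: it uses a hypothetical 3-state matrix whose row 2 is $(1/3, 0, 2/3)$, as in the example, with the other rows invented so that $X_2 = 2$ can be reached along more than one history.

```python
import random
from collections import Counter

# Hypothetical 3-state chain: row 2 of P is (1/3, 0, 2/3), matching
# the example in the text; rows 1 and 3 are invented for illustration.
P = [
    [0.2, 0.6, 0.2],
    [1/3, 0.0, 2/3],
    [0.5, 0.3, 0.2],
]
rng = random.Random(0)

def step(i):
    return rng.choices([1, 2, 3], weights=P[i - 1])[0]

full = Counter()     # X3 samples given the full history X0=3, X1=1, X2=2
x2_only = Counter()  # X3 samples given only X2=2
for _ in range(200_000):
    x1 = step(3)     # start every run at X0 = 3
    x2 = step(x1)
    x3 = step(x2)
    if (x1, x2) == (1, 2):
        full[x3] += 1
    if x2 == 2:
        x2_only[x3] += 1

# Both empirical distributions should be close to (1/3, 0, 2/3):
for c in (full, x2_only):
    n = sum(c.values())
    print({j: round(c[j] / n, 3) for j in (1, 2, 3)})
```

Conditioning on the full history and conditioning on $X_2 = 2$ alone produce (up to sampling noise) the same distribution for $X_3$, which is the content of the Markov property.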
1.3 Definition. A process $X_0, X_1, \ldots$ satisfies the Markov property if
$$P\{X_{n+1} = i_{n+1} \mid X_n = i_n,\ X_{n-1} = i_{n-1},\ \ldots,\ X_0 = i_0\} = P\{X_{n+1} = i_{n+1} \mid X_n = i_n\}$$
for all $n$ and all $i_0, \ldots, i_{n+1} \in S$.

The issue addressed by the Markov property is the dependence structure among random
variables. The simplest dependence structure for $X_0, X_1, \ldots$ is no dependence at all, that is, independence. The Markov property could be said to capture the next simplest sort of dependence: in generating the process $X_0, X_1, \ldots$ sequentially, each $X_n$ depends only on the preceding random variable $X_{n-1}$, and not on the further past values $X_0, \ldots, X_{n-2}$. The Markov property allows much more interesting and general processes to be considered than if we restricted ourselves to independent random variables $X_i$, without allowing so much generality that a mathematical treatment becomes intractable.
The Markov property implies a simple expression for the probability of our Markov chain taking any specified path, as follows:
$$\begin{aligned}
P\{X_0 = i_0,\ X_1 = i_1,\ X_2 = i_2,\ \ldots,\ X_n = i_n\}
&= P\{X_0 = i_0\}\, P\{X_1 = i_1 \mid X_0 = i_0\}\, P\{X_2 = i_2 \mid X_1 = i_1,\ X_0 = i_0\} \\
&\qquad \cdots P\{X_n = i_n \mid X_{n-1} = i_{n-1},\ \ldots,\ X_1 = i_1,\ X_0 = i_0\} \\
&= P\{X_0 = i_0\}\, P\{X_1 = i_1 \mid X_0 = i_0\}\, P\{X_2 = i_2 \mid X_1 = i_1\} \cdots P\{X_n = i_n \mid X_{n-1} = i_{n-1}\} \\
&= \pi_0(i_0)\, P(i_0, i_1)\, P(i_1, i_2) \cdots P(i_{n-1}, i_n).
\end{aligned}$$
So, to get the probability of a path, we start out with the initial probability of the first state
and successively multiply by the matrix elements corresponding to the transitions along the
path.
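The path-probability formula translates directly into code. The sketch below is illustrative only: row 2 of the matrix is $(1/3, 0, 2/3)$, as in the text's example, while the other rows and the initial distribution $\pi_0$ are hypothetical values chosen here.

```python
# Hypothetical chain: row 2 of P is (1/3, 0, 2/3), matching the text's
# example; rows 1 and 3 and the initial distribution pi0 are invented.
P = [
    [0.2, 0.6, 0.2],
    [1/3, 0.0, 2/3],
    [0.5, 0.3, 0.2],
]
pi0 = [0.25, 0.25, 0.5]  # hypothetical initial distribution pi_0

def path_probability(path):
    """P{X_0 = i_0, ..., X_n = i_n}
       = pi0(i_0) * P(i_0, i_1) * ... * P(i_{n-1}, i_n)."""
    prob = pi0[path[0] - 1]          # initial probability of the first state
    for a, b in zip(path, path[1:]):
        prob *= P[a - 1][b - 1]      # multiply by each transition probability
    return prob

# The path X0 = 3, X1 = 1, X2 = 2 from the example:
print(path_probability([3, 1, 2]))   # 0.5 * 0.5 * 0.6 = 0.15
```

Note that a single zero transition probability along the path, such as $P(2, 2) = 0$ here, makes the whole path probability zero.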
1.4 Exercise. Let $X_0, X_1, \ldots$ be a Markov chain, and let $A$ and $B$ be subsets of the state space.

1. Is it true that $P\{X_2 \in B \mid X_1 = x_1,\ X_0 \in A\} = P\{X_2 \in B \mid X_1 = x_1\}$? Give a proof or counterexample.

2. Is it true that $P\{X_2 \in B \mid X_1 \in A,\ X_0 = x_0\} = P\{X_2 \in B \mid X_1 \in A\}$? Give a proof or counterexample.
The moral: be careful about what the Markov property says!

1.5 Exercise. Let $X_0, X_1, \ldots$ be a Markov chain on the state space $\{-1, 0, 1\}$, and suppose that $P(i, j) > 0$ for all $i, j$. What is a necessary and sufficient condition for the sequence of ab...
This document was uploaded on 03/06/2014 for the course MATH 4740 at Cornell.