Initial distribution π0.
This is the probability distribution of the Markov chain at time 0. For each state
i ∈ S, we denote by π0(i) the probability P{X0 = i} that the Markov chain starts out
in state i. Formally, π0 is a function taking S into the interval [0,1] such that
π0(i) ≥ 0 for all i ∈ S and

    Σ_{i ∈ S} π0(i) = 1.

Equivalently, instead of thinking of π0 as a function from S to [0,1], we could think
of π0 as the vector whose ith entry is π0(i) = P{X0 = i}.
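As a quick illustration (a sketch of ours, not from the text), an initial distribution can be stored as a plain vector and the two defining conditions checked directly:

```python
# Sketch (ours, not the text's): an initial distribution pi0 stored as a
# vector, with the two defining conditions checked directly.
pi0 = [0.5, 0.25, 0.25]   # pi0(i) = P{X0 = i} for states i = 1, 2, 3

assert all(p >= 0 for p in pi0)      # nonnegativity: pi0(i) >= 0 for all i
assert abs(sum(pi0) - 1.0) < 1e-9    # total probability: sum of pi0(i) is 1
print("valid initial distribution")
```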
Probability transition rule
This is specified by giving a matrix P = (Pij). If S is the finite set {1, ..., N}, say,
then P is an N × N matrix. Otherwise, P will have infinitely many rows and columns;
sorry. The interpretation of the number Pij is the conditional probability, given that
the chain is in state i at time n, say, that the chain jumps to the state j at time n + 1.
That is,

    Pij = P{Xn+1 = j | Xn = i}.

We will also use the notation P(i, j) for the same thing. Note that we have written
this probability as a function of just i and j, but of course it could depend on n
as well. The time homogeneity restriction mentioned in the previous footnote is
just the assumption that this probability does not depend on the time n, but rather
remains constant over time.
Formally, a probability transition matrix is an N × N matrix whose entries are
all nonnegative and whose rows sum to 1.
Finally, you may be wondering why we bother to arrange these conditional probabilities into a matrix. That is a good question, and will be answered soon.
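The formal definition above can be checked mechanically. Here is a small sketch (the helper function and example matrices are our own, not from the text) that tests whether a candidate matrix is a probability transition matrix:

```python
# Sketch (ours): check that P has nonnegative entries and rows summing
# to 1, i.e. that each row is a probability distribution over the states.
def is_transition_matrix(P, tol=1e-9):
    return all(
        all(p >= 0 for p in row) and abs(sum(row) - 1.0) < tol
        for row in P
    )

# A two-state example: from state 1, stay with probability 0.9, and so on.
P_ok = [[0.9, 0.1],
        [0.5, 0.5]]
P_bad = [[0.9, 0.2],      # this row sums to 1.1: not a distribution
         [0.5, 0.5]]
print(is_transition_matrix(P_ok), is_transition_matrix(P_bad))  # True False
```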
Stochastic Processes, J. Chang, March 30, 1999 (Section 1.1: Specifying and Simulating a Markov Chain, page 13).

[Figure 1.1: The Markov frog.]

We can now get to the question of how to simulate a Markov chain, now that we know how
to specify what Markov chain we wish to simulate. Let's do an example: suppose the state
space is S = {1, 2, 3}, the initial distribution is π0 = (1/2, 1/4, 1/4), and the probability
transition matrix is
              1    2    3
         1 (  0    1    0  )
    P =  2 ( 1/3   0   2/3 )                                    (1.2)
         3 ( 1/3  1/3  1/3 )
Think of a frog hopping among lily pads as in Figure 1.1. How does the Markov frog
choose a path? To start, he chooses his initial position X0 according to the specified
initial distribution π0. He could do this by going to his computer to generate a uniformly
distributed random number U0 ~ Unif(0,1), and then taking
         1 if 0 ≤ U0 < 1/2
    X0 = 2 if 1/2 ≤ U0 < 3/4
         3 if 3/4 ≤ U0 ≤ 1.
We don't have to be fastidious about specifying what to do if U0 comes out to be exactly 1/2
or 3/4, since the probability of this happening is 0. For example, suppose that U0 comes
out to be 0.8419, so that X0 = 3. Then the frog chooses X1 according to the probability
distribution in row 3 of P, namely, (1/3, 1/3, 1/3); to do this, he paws his computer again
to generate U1 ~ Unif(0,1) independently of U0, and takes
         1 if 0 ≤ U1 < 1/3
    X1 = 2 if 1/3 ≤ U1 < 2/3
         3 if 2/3 ≤ U1 ≤ 1.
Suppose he happens to get U1 = 0.1234, so that X1 = 1. Then he chooses X2 according to
row 1 of P, so that X2 = 2; there's no choice this time...
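The frog's procedure can be sketched in code. This is our own illustration (the function names and structure are ours, not the text's): sample X0 by inverting the cumulative distribution of π0 with a uniform draw, then at each step invert the cumulative distribution of row Xn of P.

```python
import random

# Sketch (ours) of the simulation procedure described above.
pi0 = [0.5, 0.25, 0.25]        # initial distribution over states 1, 2, 3
P = [[0.0, 1.0, 0.0],          # transition matrix (1.2); row i gives the
     [1/3, 0.0, 2/3],          # distribution of the next state when the
     [1/3, 1/3, 1/3]]          # chain is currently in state i

def sample(dist, u):
    """Invert the CDF of `dist`: return the 1-based state whose
    cumulative-probability interval contains u."""
    cum = 0.0
    for state, p in enumerate(dist, start=1):
        cum += p
        if u < cum:
            return state
    return len(dist)           # guard against rounding when u is near 1

def simulate(n_steps, rng=random.random):
    """Generate X0, X1, ..., Xn by repeated inverse-CDF sampling."""
    path = [sample(pi0, rng())]
    for _ in range(n_steps):
        path.append(sample(P[path[-1] - 1], rng()))
    return path

# Replaying the worked example: U0 = 0.8419 gives X0 = 3, then
# U1 = 0.1234 gives X1 = 1, and row 1 of P forces X2 = 2.
us = iter([0.8419, 0.1234, 0.5])
print(simulate(2, rng=lambda: next(us)))  # [3, 1, 2]
```

Calling `simulate(n_steps)` with the default generator produces a random path; fixing the uniforms, as above, reproduces the frog's path from the text.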