Chapter 11
Markov Chains
11.1 Introduction
Most of our study of probability has dealt with independent trials processes. These processes are the basis of classical probability theory and much of statistics. We have discussed two of the principal theorems for these processes: the Law of Large Numbers and the Central Limit Theorem.

We have seen that when a sequence of chance experiments forms an independent trials process, the possible outcomes for each experiment are the same and occur with the same probability. Further, knowledge of the outcomes of the previous experiments does not influence our predictions for the outcomes of the next experiment. The distribution for the outcomes of a single experiment is sufficient to construct a tree and a tree measure for a sequence of n experiments, and we can answer any probability question about these experiments by using this tree measure.
Modern probability theory studies chance processes for which the knowledge of previous outcomes influences predictions for future experiments. In principle, when we observe a sequence of chance experiments, all of the past outcomes could influence our predictions for the next experiment. For example, this should be the case in predicting a student's grades on a sequence of exams in a course. But to allow this much generality would make it very difficult to prove general results.
In 1907, A. A. Markov began the study of an important new type of chance
process. In this process, the outcome of a given experiment can affect the outcome
of the next experiment. This type of process is called a Markov chain.
Specifying a Markov Chain
We describe a Markov chain as follows: We have a set of states, S = {s_1, s_2, ..., s_r}. The process starts in one of these states and moves successively from one state to another. Each move is called a step. If the chain is currently in state s_i, then it moves to state s_j at the next step with a probability denoted by p_ij, and this probability does not depend upon which states the chain was in before the current
state. The probabilities p_ij are called transition probabilities. The process can remain in the state it is in, and this occurs with probability p_ii.
An initial probability distribution, defined on S, specifies the starting state. Usually this is done by specifying a particular state as the starting state.
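The description above can be sketched directly in code. The following is a minimal illustration (not from the text): the chain is stored as a transition matrix P whose entry P[i][j] is the probability p_ij of moving from state s_i to state s_j, and one step is taken by sampling from the current state's row. The function names `step` and `walk` are our own choices for this sketch.

```python
import random

def step(P, i, rng=random.random):
    """Take one step from state i: sample the next state from row P[i].

    P[i][j] is the transition probability p_ij; each row must sum to 1.
    """
    u = rng()
    cumulative = 0.0
    for j, p in enumerate(P[i]):
        cumulative += p
        if u < cumulative:
            return j
    return len(P[i]) - 1  # guard against floating-point round-off

def walk(P, start, n_steps, rng=random.random):
    """Run the chain for n_steps from the given starting state.

    Returns the list of states visited, beginning with `start`.
    Fixing `start` corresponds to an initial distribution concentrated
    on a single state, as described above.
    """
    states = [start]
    for _ in range(n_steps):
        states.append(step(P, states[-1], rng))
    return states
```

Note that because p_ij depends only on the current state i, the next state is sampled from P[i] alone; the earlier history in `states` plays no role, which is exactly the Markov property.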
R. A. Howard^1 provides us with a picturesque description of a Markov chain as a frog jumping on a set of lily pads. The frog starts on one of the pads and then jumps from lily pad to lily pad with the appropriate transition probabilities.
Example 11.1 According to Kemeny, Snell, and Thompson,^2 the Land of Oz is blessed by many things, but not by good weather. They never have two nice days in a row. If they have a nice day, they are just as likely to have snow as rain the next day. If they have snow or rain, they have an even chance of having the same kind of weather the next day. If there is a change from snow or rain, only half of the time is this a change to a nice day. With this information we form a Markov chain as follows.
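The rules above determine every transition probability. As a sketch (ours, not the text's), here is the resulting matrix with the three states ordered R (rain), N (nice), S (snow), using exact fractions so the row sums can be checked:

```python
from fractions import Fraction as F

# Land of Oz weather chain; states ordered R (rain), N (nice), S (snow).
# Each row follows from the rules in the example:
#   - never two nice days in a row, so p_NN = 0;
#   - after a nice day, rain and snow are equally likely (1/2 each);
#   - after rain or snow, the same weather recurs with probability 1/2,
#     and a change is nice only half the time (1/4 nice, 1/4 the other).
P = [
    [F(1, 2), F(1, 4), F(1, 4)],  # from R
    [F(1, 2), F(0),    F(1, 2)],  # from N
    [F(1, 4), F(1, 4), F(1, 2)],  # from S
]

# Sanity check: each row is a probability distribution.
assert all(sum(row) == 1 for row in P)
```

Reading across a row gives the distribution of tomorrow's weather given today's; for instance, after a rainy day the probabilities of rain, nice, and snow are 1/2, 1/4, and 1/4.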