Western University (Ontario), also known as the University of Western Ontario
Markov chains with application
STATISTICS 4654

Fall 2016
Textbook Sections 4.5, 4.6
4.5 The Gambler's Ruin Problem
Consider a modification of the random walk:
The state i represents the present wealth of an
individual, who gambles $1 at each play of a game.
With probability 0 < p < 1 the gambler wins $1; with
probability q = 1 - p the gambler loses $1.
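As a check on this setup, here is a minimal Python sketch (illustrative, not from the course materials; function names and defaults are my own): it estimates by simulation the probability that the gambler reaches a target fortune N before ruin, and compares it with the classical gambler's-ruin formula.

```python
import random

def ruin_sim(i, N, p, trials=100_000, seed=1):
    """Estimate P(reach wealth N before 0 | start at i) by simulation.
    Each play wins $1 with probability p, loses $1 otherwise."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        wealth = i
        while 0 < wealth < N:
            wealth += 1 if rng.random() < p else -1
        successes += (wealth == N)
    return successes / trials

def ruin_exact(i, N, p):
    """Classical closed form for the same probability."""
    if p == 0.5:
        return i / N
    r = (1 - p) / p
    return (1 - r**i) / (1 - r**N)
```

For p = 0.5 with i = 3 and N = 10, the closed form gives 3/10, and the simulated estimate should land within simulation error of that.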
Section 4.9
Background
Suppose that you are interested not so much in a
particular random variable X itself, but rather in some
function h(X) of that random variable.
Suppose further that it is the expected outcome
E[h(X)] that is of interest; denote this expected outcome by θ.
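The basic Monte Carlo idea behind this section can be sketched in a few lines of Python (an illustrative example with names of my own choosing): draw many copies of X and average h over the draws.

```python
import random

def mc_estimate(h, draw_x, n=200_000, seed=2):
    """Estimate theta = E[h(X)] by the sample mean of h(X) over n draws."""
    rng = random.Random(seed)
    return sum(h(draw_x(rng)) for _ in range(n)) / n

# Example: X ~ Uniform(0, 1) and h(x) = x^2, so theta = E[X^2] = 1/3.
theta_hat = mc_estimate(lambda x: x * x, lambda rng: rng.random())
```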
Section 4.9
A direct way to craft the transition
probabilities P(i, j), via the q(i, j)'s
Our decision to make this a time-reversible Markov
chain provides us with the means to determine the
P(i, j)'s. Since time reversibility means π(i)P(i, j) = π(j)P(j, i),
we will use these facts to construct the P(i, j)'s.
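A sketch of the algebra behind this construction, using π for the target distribution and q(i, j) for the proposal probabilities (this is the standard Metropolis-Hastings argument):

```latex
% Seek P_{ij} = q(i,j)\,\alpha(i,j) for i \ne j satisfying detailed balance:
\pi(i)\,P_{ij} = \pi(j)\,P_{ji}
\quad\Longleftrightarrow\quad
\pi(i)\,q(i,j)\,\alpha(i,j) = \pi(j)\,q(j,i)\,\alpha(j,i).
% This holds if we take the acceptance probability
\alpha(i,j) = \min\!\left(\frac{\pi(j)\,q(j,i)}{\pi(i)\,q(i,j)},\; 1\right),
% since then one of \alpha(i,j), \alpha(j,i) equals 1 and the other
% carries exactly the ratio needed for balance.
```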
Textbook Section 4.2
The Chapman-Kolmogorov equations
Let P^(n)_ij = P(X_{n+m} = j | X_m = i) for all n ≥ 0, where
{X_n} is a Markov chain. Then for all m, n ≥ 0 and all states i, j ≥ 0,

P^(n+m)_ij = Σ_{k=0}^∞ P^(n)_ik P^(m)_kj

We will prove this result in class.
If we look at this equation closely, it resembles the
equation for matrix multiplication.
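The matrix-multiplication reading of the Chapman-Kolmogorov equations can be checked numerically; a small Python sketch with an illustrative two-state transition matrix of my own choosing:

```python
def mat_mult(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Illustrative two-state transition matrix.
P = [[0.7, 0.3],
     [0.4, 0.6]]

P2 = mat_mult(P, P)   # two-step transition probabilities P^(2)
P3 = mat_mult(P2, P)  # Chapman-Kolmogorov: P^(2+1)_ij = sum_k P^(2)_ik P^(1)_kj
```

Each entry of P3 is exactly the Chapman-Kolmogorov sum, which is why the n-step transition matrix is the nth matrix power of P.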
Textbook Section 4.3
Example 4.18 The Random Walk
Consider a Markov chain defined on all the integers:
An individual starts at an arbitrary integer (which
we might as well take to be i = 0).
In each step of the Markov chain, with probability
0 < p < 1 the chain moves up one state (from i to i + 1),
and with probability 1 - p it moves down one state (from i to i - 1).
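A minimal simulation of this chain in Python (names and defaults are my own):

```python
import random

def random_walk(p, n_steps, seed=0):
    """Simple random walk on the integers started at 0:
    each step is +1 with probability p and -1 otherwise."""
    rng = random.Random(seed)
    position, path = 0, [0]
    for _ in range(n_steps):
        position += 1 if rng.random() < p else -1
        path.append(position)
    return path

path = random_walk(p=0.5, n_steps=1_000)
```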
Textbook Sections 4.3, 4.4
The reason to classify states
As we have seen through many examples, a number of
different outcomes are possible when one investigates
the behaviour of P^n as n gets large.
It is possible that there is a limiting matrix, all of
whose rows are identical.
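The "limiting matrix with identical rows" outcome is easy to see numerically; a Python sketch with an illustrative ergodic two-state chain (my own numbers), whose stationary distribution works out to (5/6, 1/6):

```python
def mat_mult(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Illustrative ergodic two-state chain.
P = [[0.9, 0.1],
     [0.5, 0.5]]

Pn = P
for _ in range(50):       # compute P^51
    Pn = mat_mult(Pn, P)
# Both rows of Pn are now (numerically) the stationary distribution (5/6, 1/6).
```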
Textbook Section 4.1
A Stochastic Process:
A stochastic process is defined in your text as
"a collection of random variables" (p. 84).
A better definition is that a stochastic process is this:
A stochastic process {X(t), t ∈ T} is a collection (or family) of
random variables indexed by a parameter t, usually interpreted as time.
Textbook Section 4.7
4.7 Branching Processes
A branching process is one model for studying whether a
family line survives or eventually dies out.
One starts with a single individual, who produces j ≥ 0
offspring with probability P_j. Each of these offspring in
turn independently produces offspring according to the
same distribution, and so on.
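A simulation sketch in Python (the offspring distribution and all names are illustrative choices of mine): with P_0 = 1/4, P_1 = 1/4, P_2 = 1/2 the mean number of offspring is 1.25 > 1, and solving π = P_0 + P_1 π + P_2 π² gives extinction probability 1/2, which the simulation should roughly reproduce.

```python
import random

def line_dies_out(probs, max_gens=200, cap=500, rng=None):
    """Simulate one branching process from a single ancestor.
    probs[j] = probability that an individual has j offspring.
    Returns True if the population reaches 0 within max_gens generations;
    a population above cap is treated as surviving forever."""
    rng = rng or random.Random()
    pop = 1
    for _ in range(max_gens):
        if pop == 0:
            return True
        if pop > cap:
            return False
        # each of the pop individuals draws an offspring count
        pop = sum(rng.choices(range(len(probs)), weights=probs, k=pop))
    return pop == 0

rng = random.Random(3)
trials = 5_000
est = sum(line_dies_out([0.25, 0.25, 0.5], rng=rng) for _ in range(trials)) / trials
```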
Textbook Section 4.8
4.8 Time-reversible Markov Chains
One can look at an ergodic Markov chain that has been
operating forever, run in reversed time.
It is the case that this reversed-time process is also a
Markov chain, as we will show in class. Let

Q_ij = P(X_m = j | X_{m+1} = i) = π_j P_ji / π_i
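The reversed-chain transition probabilities can be written down directly from the stationary distribution; a Python sketch with an illustrative two-state chain (stationary distribution (5/6, 1/6)), where, as for every ergodic two-state chain, the reversed chain coincides with the original:

```python
def reversed_chain(P, pi):
    """Transition matrix of the time-reversed chain:
    Q[i][j] = pi[j] * P[j][i] / pi[i]."""
    n = len(P)
    return [[pi[j] * P[j][i] / pi[i] for j in range(n)] for i in range(n)]

# Illustrative ergodic two-state chain with stationary distribution pi.
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = [5/6, 1/6]
Q = reversed_chain(P, pi)
# Every ergodic two-state chain is time reversible, so Q equals P here.
```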
# Markov Chain Monte Carlo
# WALS 1  October 21, 2016
# Metropolis-Hastings Algorithm
# Implementation of the Metropolis-Hastings algorithm to generate the P matrix
# Inputs are an initial 'Q' matrix and the 'b' vector
MH <- function(Q, b) {
  # identify the dimension of the state space
  n <- nrow(Q)
  P <- matrix(0, n, n)
  # accept i -> j with prob min(b[j]*Q[j,i] / (b[i]*Q[i,j]), 1) (Section 4.9)
  for (i in 1:n) for (j in 1:n) if (i != j && Q[i, j] > 0)
    P[i, j] <- Q[i, j] * min(1, (b[j] * Q[j, i]) / (b[i] * Q[i, j]))
  # diagonal entries absorb the rejected mass so rows sum to 1
  diag(P) <- 1 - rowSums(P)
  P
}