STAT 333 Assignment 3 SOLUTIONS
1.
Consider a sequence of repeated independent tosses of a fair coin, each toss resulting in H or T. For each n = 1, 2, 3, … define X_n = the length of the run after the nth toss, where a run is a maximal sequence of like outcomes (i.e., all H or all T).
For example, if the sequence of outcomes looks like H H T H H H H T …, then X_1 = 1, X_2 = 2, X_3 = 1, X_4 = 1, X_5 = 2, X_6 = 3, X_7 = 4, X_8 = 1, etc.
a. Model this as a Markov chain by writing down the state space S and transition matrix P.
S = {1, 2, 3, 4, …} and

        | ½  ½  0  0  0  … |
        | ½  0  ½  0  0  … |
P  =    | ½  0  0  ½  0  … |
        | ½  0  0  0  ½  … |
        | ⋮              ⋱ |

That is, P_k,1 = ½ (an opposite toss breaks the run, starting a new run of length 1) and P_k,k+1 = ½ (the run continues) for every k ≥ 1.
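As a quick sanity check (my own illustration, not part of the assignment), we can build a truncated version of this infinite matrix in Python and confirm each row is a valid probability distribution; the truncation size N is arbitrary.

```python
# Build an N x N truncation of the transition matrix P for the
# run-length chain: from state k, go to state 1 w.p. 1/2 (run broken)
# or to state k+1 w.p. 1/2 (run continues).
N = 6

# P[i][j] = probability of moving from state i+1 to state j+1
P = [[0.0] * N for _ in range(N)]
for i in range(N):
    P[i][0] = 0.5            # run broken: back to state 1
    if i + 1 < N:
        P[i][i + 1] = 0.5    # run continues: state k -> k+1

for row in P[:-1]:           # last row loses mass to truncation, skip it
    assert abs(sum(row) - 1.0) < 1e-12
print(P[0])  # [0.5, 0.5, 0.0, 0.0, 0.0, 0.0]
```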
b.
Prove that this chain is irreducible and find the period of the chain.
All states have a direct path to state 1, and state 1 can eventually get to any state (however
unlikely) by having an arbitrarily long run of identical tosses. Thus you can get from any
state to any other state. All states communicate and thus the chain is irreducible.
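The communication argument above can be illustrated numerically (a sketch on a truncated state space of my own choosing, not required by the assignment): breadth-first search over the transition graph shows every state reaches every other state.

```python
from collections import deque

# Directed transition graph of the run-length chain, truncated at N:
# state k has successors {1, k+1} (k+1 omitted at the truncation edge).
N = 8
succ = {k: [1] + ([k + 1] if k < N else []) for k in range(1, N + 1)}

def reachable(start):
    """Return the set of states reachable from `start` via BFS."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        for t in succ[s]:
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

# Every state reaches every other state (within the truncation),
# so all states communicate and the truncated chain is irreducible.
assert all(reachable(k) == set(range(1, N + 1)) for k in range(1, N + 1))
print("all states communicate")
```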
We can see immediately that the period is 1 because one of the diagonal elements is non-zero (P_1,1 = ½).
c. Prove that this chain is positive recurrent by solving recursively for the unique equilibrium distribution π = (π_1, π_2, π_3, …). What distribution is this? What is the expected number of tosses between returns to state 4?
We must solve πP = π such that the elements of π add to 1. Looking at the first few equations, we will get

π_1 = ½ (π_1 + π_2 + π_3 + …)
π_2 = ½ π_1
π_3 = ½ π_2 = (½)^2 π_1
…
π_k = ½ π_(k-1) = (½)^(k-1) π_1
Applying the condition that the elements add to 1 gives π_1 Σ_(k≥1) (½)^(k-1) = 2π_1 = 1, so π_1 = ½ and thus π_k = (½)^k for all k. This is the Geometric distribution with p = ½.
The expected number of tosses between runs of length 4 would be 1/π_4 = 1/(½)^4 = 16.
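Both conclusions are easy to check numerically (a sketch of mine, truncating the infinite system at an arbitrary N): the geometric vector is fixed by P up to truncation error, and 1/π_4 = 16.

```python
# pi_k = (1/2)^k for k = 1..N (truncated geometric stationary vector)
N = 30
pi = [0.5 ** k for k in range(1, N + 1)]

# For this chain, (pi P)_1 = (1/2) * sum_k pi_k and
# (pi P)_(k+1) = (1/2) * pi_k, since every state feeds 1 and its successor.
piP = [0.5 * sum(pi)] + [0.5 * pi[k] for k in range(N - 1)]

for a, b in zip(pi, piP):
    assert abs(a - b) < 1e-6   # pi P = pi, up to truncation error

print(1 / pi[3])  # expected return time to state 4: 16.0
```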
d. Suppose the coin is not fair. Why can this not be modelled directly as a Markov chain as above? What could you do (how could you augment the state space) to enable you to model this as a Markov chain?
The process is not Markovian in this case, because to know where we go next, we not only have to know the length of the current run, but also which type of run it is (H or T). We could make it Markovian by defining the state space differently: {H, T, HH, TT, HHH, TTT, HHHH, TTTT, …}. Then, for example, you would transition from state HHH to HHHH with probability P(H) and to state T with probability 1 − P(H).
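The augmented chain is easy to simulate. Here is a minimal sketch, representing a state as a (type, length) pair, e.g. ('H', 3) for the run HHH; the bias p = 0.7 is an arbitrary value chosen for illustration, not given in the problem.

```python
import random

p = 0.7  # assumed P(H), for illustration only

def step(state, rng=random):
    """One toss of the biased coin: from (face, length), either extend
    the current run or start a new run of length 1 of the other face."""
    face, length = state
    next_face = 'H' if rng.random() < p else 'T'
    if next_face == face:
        return (face, length + 1)   # run continues
    return (next_face, 1)           # run broken: new run of length 1

random.seed(0)
state = ('H', 1)
for _ in range(5):
    state = step(state)
print(state)
```

Because the state records both the run type and its length, the next state depends only on the current state, which is exactly the Markov property the plain run-length process lacked.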