STAT 333 Assignment 3 SOLUTIONS

1. Consider a sequence of repeated independent tosses of a fair coin, each toss resulting in H or T. For each n = 1, 2, 3, ... define

    X_n = length of the run after the n-th toss,

where a run is a maximal sequence of like outcomes (i.e., all H or all T). For example, if the sequence of outcomes looks like H H T H H H H T ..., then X_1 = 1, X_2 = 2, X_3 = 1, X_4 = 1, X_5 = 2, X_6 = 3, X_7 = 4, X_8 = 1, etc.

a. Model this as a Markov chain by writing down the state space S and transition matrix P.

S = {1, 2, 3, 4, ...}. From state k, the next toss either matches the current run (probability 1/2), extending it to length k + 1, or differs (probability 1/2), starting a new run of length 1. So P(k, 1) = 1/2 and P(k, k + 1) = 1/2 for every k, with all other entries 0:

        [ 1/2  1/2   0    0   ... ]
        [ 1/2   0   1/2   0   ... ]
    P = [ 1/2   0    0   1/2  ... ]
        [  :    :    :    :       ]

b. Prove that this chain is irreducible and find the period of the chain.

Every state has a direct transition to state 1 (the run ends), and state 1 can eventually reach any state (however unlikely) via an arbitrarily long run of identical tosses. Thus any state can be reached from any other state: all states communicate, and the chain is irreducible. The period is 1, because one of the diagonal elements is non-zero (P(1, 1) = 1/2), so state 1 is aperiodic, and since period is a class property, the whole chain is aperiodic.

c. Prove that this chain is positive recurrent by solving recursively for the unique equilibrium distribution π = (π_1, π_2, π_3, ...). What distribution is this? What is the expected number of tosses between returns to state 4?

We must solve πP = π subject to the elements of π summing to 1. The first few equations give

    π_1 = (1/2)(π_1 + π_2 + π_3 + ...)
    π_2 = (1/2) π_1
    π_3 = (1/2) π_2 = (1/2)^2 π_1
    ...
    π_k = (1/2) π_{k-1} = (1/2)^(k-1) π_1

Applying the condition that the elements sum to 1 (a geometric series: the sum over k of (1/2)^(k-1) π_1 equals 2 π_1 = 1), we get π_1 = 1/2 and thus π_k = (1/2)^k for all k. This is the Geometric distribution with p = 1/2. Since this irreducible chain has an equilibrium distribution, it is positive recurrent, and the expected number of tosses between returns to state 4 is 1/π_4 = 1/(1/2)^4 = 16.

d. Suppose the coin is not fair. Why can this not be modelled directly as a Markov chain as above? What could you do (how could you augment the state space) to enable you to model this as a Markov chain?
The process is not Markovian in this case: to know where we go next, we need not only the length of the current run but also which type of run it is (H or T). We can make it Markovian by augmenting the state space to record the run's type as well as its length: {H, T, HH, TT, HHH, TTT, HHHH, TTTT, ...}. Then, for example, the chain moves from state HHH to HHHH with probability P(H) and to T with probability 1 - P(H).
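This augmented chain is easy to simulate. The sketch below is an illustration added here, not part of the original solutions: it represents each state as a pair (side, run_length) rather than a string, and the bias P(H) = 0.7 is an arbitrary choice for the example. Balance equations of the same kind as in part (c) give the long-run fraction of time in state ("H", k) as p^k (1 - p), which the simulation can be checked against.

```python
import random
from collections import Counter

# Augmented chain for a biased coin (sketch; p = P(H) is assumed to be 0.7).
# From ("H", k) the chain moves to ("H", k + 1) with probability p and to
# ("T", 1) otherwise; symmetrically, ("T", k) continues with probability 1 - p.
def step(state, p, rng):
    side, k = state
    continue_prob = p if side == "H" else 1 - p
    if rng.random() < continue_prob:
        return (side, k + 1)
    return ("T", 1) if side == "H" else ("H", 1)

p = 0.7
rng = random.Random(1)
state = ("H", 1)
counts = Counter()
N = 500_000
for _ in range(N):
    state = step(state, p, rng)
    counts[state] += 1

# Long-run fraction of time in ("H", k) should approach p**k * (1 - p),
# e.g. 0.7 * 0.3 = 0.21 for state ("H", 1).
print(round(counts[("H", 1)] / N, 3))
```

Note that the chain spends total mass p on the H side and 1 - p on the T side, matching the overall fraction of heads among the tosses.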
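Returning to part (c): the stationary distribution and the mean return time to state 4 can also be checked by direct simulation. The Python sketch below is added for illustration (the toss count and seed are arbitrary); the long-run fraction of time the run-length chain spends in state k should approach (1/2)^k, and the average spacing between visits to state 4 should approach 1/π_4 = 16.

```python
import random
from collections import Counter

def simulate_run_lengths(n_tosses, seed=0):
    """Toss a fair coin n_tosses times; record X_n (current run length) after each toss."""
    rng = random.Random(seed)
    lengths, prev, run = [], None, 0
    for _ in range(n_tosses):
        toss = rng.choice("HT")
        run = run + 1 if toss == prev else 1
        prev = toss
        lengths.append(run)
    return lengths

lengths = simulate_run_lengths(1_000_000)
n = len(lengths)
counts = Counter(lengths)

# Long-run fraction of time in state k should be close to pi_k = (1/2)^k.
for k in range(1, 5):
    print(k, round(counts[k] / n, 4))

# Mean spacing between visits to state 4 should be close to 1/pi_4 = 16.
visits = [i for i, x in enumerate(lengths) if x == 4]
gaps = [b - a for a, b in zip(visits, visits[1:])]
print(round(sum(gaps) / len(gaps), 2))
```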
2. Consider a discrete-time Markov chain with state space S = {0, ..., 6} and transition matrix P. [The entries of P did not survive extraction and cannot be reliably reconstructed from this preview.]

a. Determine the classes of this chain and whether each is open or closed, write P in simplified form, and find the period of each closed class.
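Since the matrix for this question is unreadable in the preview, here is a general-purpose sketch (an illustration added here, not the course's method) of how part (a) can be checked mechanically: communicating classes via mutual reachability, closedness by testing whether any transition leaves the class, and the period as the gcd of return-time lengths. The 5-state matrix below is hypothetical, not the one from the question.

```python
from math import gcd

def reachable(P, i):
    """All states reachable from state i (including i itself) along positive-probability paths."""
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for t, p in enumerate(P[s]):
            if p > 0 and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def period(P, i, max_len=50):
    """gcd of all path lengths <= max_len that return from state i to itself."""
    g, cur = 0, {i}
    for k in range(1, max_len + 1):
        cur = {t for s in cur for t, p in enumerate(P[s]) if p > 0}
        if i in cur:
            g = gcd(g, k)
    return g

def analyse(P):
    """Return (states, closed?, period if closed else None) for each communicating class."""
    n = len(P)
    R = [reachable(P, i) for i in range(n)]
    seen, out = set(), []
    for i in range(n):
        cls = frozenset(j for j in R[i] if i in R[j])  # mutual reachability
        if cls in seen:
            continue
        seen.add(cls)
        closed = all(R[j] <= cls for j in cls)  # no transitions leave the class
        out.append((tuple(sorted(cls)), closed, period(P, i) if closed else None))
    return out

# Hypothetical example (NOT the matrix from the question): {0, 1} is closed
# with period 2, {2} is open, {3, 4} is closed with period 1.
P = [
    [0,   1,   0,   0,   0  ],
    [1,   0,   0,   0,   0  ],
    [0.2, 0.2, 0.2, 0.2, 0.2],
    [0,   0,   0,   0.5, 0.5],
    [0,   0,   0,   1,   0  ],
]
for cls, closed, d in analyse(P):
    print(cls, "closed" if closed else "open", d)
```

Writing P in simplified (canonical) form then just means reordering the states so that each closed class's rows are grouped together, with the open class(es) last.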

This note was uploaded on 03/17/2012 for the course STAT 333 taught by Professor Chisholm during the Winter '08 term at Waterloo.
