# IEOR E4701 Assignment 3 - Solutions, Summer 2011

## Question 1

Consider the Markov chain on the state space $\{0, 1, 2\}$ whose transition probabilities are given by

$$
P = \begin{pmatrix} 0 & \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{3} & 0 & \tfrac{2}{3} \\ \tfrac{3}{4} & \tfrac{1}{4} & 0 \end{pmatrix}
$$

a) Is the chain irreducible? Why?

b) Is the chain positive recurrent? Why?

c) Compute the equilibrium distribution in two different ways.

**Answer**

a) Notice that $p(i,j) > 0$ for every $i \neq j$. This means that every state can be reached from every other state in a single step, independent of the starting point of the Markov chain. Therefore the Markov chain is irreducible.

b) As observed in class, every irreducible Markov chain on a finite state space is positive recurrent, so we may conclude that the given Markov chain is positive recurrent.

c) Since the given Markov chain is irreducible and positive recurrent, we know that the equilibrium (steady-state) distribution $\pi$ exists and is unique. We will find $\pi$ using two different methods: the first via the formula $\pi^T = \pi^T P$, and the second through the connection

$$
\pi(i) = \frac{1}{E_i[\tau_i]}, \qquad \tau_i = \min\{n \ge 1 : X_n = i\}, \quad i = 0, 1, 2.
$$

**Method 1:** $\pi^T = \pi^T P$:

$$
(\pi(0), \pi(1), \pi(2)) = (\pi(0), \pi(1), \pi(2)) \begin{pmatrix} 0 & \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{3} & 0 & \tfrac{2}{3} \\ \tfrac{3}{4} & \tfrac{1}{4} & 0 \end{pmatrix},
$$

which, together with the normalization $\pi(0) + \pi(1) + \pi(2) = 1$, leads to the following system of linear equations:

$$
\pi(0) = \tfrac{1}{3}\pi(1) + \tfrac{3}{4}\pi(2), \qquad
\pi(1) = \tfrac{1}{2}\pi(0) + \tfrac{1}{4}\pi(2), \qquad
\pi(2) = \tfrac{1}{2}\pi(0) + \tfrac{2}{3}\pi(1)
$$

$$
\Longrightarrow \quad \pi = \left(\tfrac{4}{11}, \tfrac{3}{11}, \tfrac{4}{11}\right).
$$

**Method 2:** $\pi(i) = 1/E_i[\tau_i]$, where $\tau_i = \min\{n \ge 1 : X_n = i\}$.

Take $i = 0$. Let $T = \min\{n \ge 0 : X_n = 0\}$ be the first hitting time of state $0$, and let $g(i) = E(T \mid X_0 = i)$. Conditioning on the first step and using the Markov property,

$$
E_0[\tau_0] = 1 + p(0,1)\,g(1) + p(0,2)\,g(2) = 1 + \tfrac{1}{2}\,g(1) + \tfrac{1}{2}\,g(2).
$$

To find $g(i)$ we set up the following system of linear equations, again conditioning on the first step:

- For $i = 0$: $\; g(0) = 0$.
- For $i = 1$: $\; g(1) = 1 + p(1,0)\,g(0) + p(1,2)\,g(2) = 1 + \tfrac{2}{3}\,g(2)$.
- For $i = 2$: $\; g(2) = 1 + p(2,0)\,g(0) + p(2,1)\,g(1) = 1 + \tfrac{1}{4}\,g(1)$.

Solving gives $g(1) = 2$ and $g(2) = \tfrac{3}{2}$, so

$$
E_0[\tau_0] = 1 + \tfrac{1}{2}\cdot 2 + \tfrac{1}{2}\cdot\tfrac{3}{2} = \tfrac{11}{4}
\quad \Longrightarrow \quad \pi(0) = \tfrac{4}{11},
$$

in agreement with Method 1. The same computation for $i = 1$ and $i = 2$ gives $E_1[\tau_1] = \tfrac{11}{3}$ and $E_2[\tau_2] = \tfrac{11}{4}$, i.e. $\pi(1) = \tfrac{3}{11}$ and $\pi(2) = \tfrac{4}{11}$.
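As a numerical check (not part of the original solution), both methods can be carried out with NumPy: Method 1 by solving the balance equations with the normalization constraint, and Method 2 by solving the mean-hitting-time systems for each state. The variable names here are my own.

```python
import numpy as np

# Transition matrix for Question 1 (each row sums to 1).
P = np.array([
    [0.0, 1 / 2, 1 / 2],
    [1 / 3, 0.0, 2 / 3],
    [3 / 4, 1 / 4, 0.0],
])

# Method 1: solve pi^T = pi^T P together with sum(pi) = 1.
# Rewrite as (P^T - I) pi = 0 and replace one redundant equation
# with the normalization constraint.
A = P.T - np.eye(3)
A[-1, :] = 1.0                       # last row: pi(0) + pi(1) + pi(2) = 1
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))

# Method 2: pi(i) = 1 / E_i[tau_i].  For each i, the mean hitting
# times g(j) = E_j[T_i] (with g(i) = 0) solve, for j != i,
#     g(j) = 1 + sum_{k != i} p(j, k) g(k),
# and the mean return time is E_i[tau_i] = 1 + sum_k p(i, k) g(k).
pi_method2 = np.zeros(3)
for i in range(3):
    others = [j for j in range(3) if j != i]
    M = np.eye(2) - P[np.ix_(others, others)]
    g = np.zeros(3)
    g[others] = np.linalg.solve(M, np.ones(2))
    pi_method2[i] = 1.0 / (1.0 + P[i] @ g)

print(pi)              # -> [0.3636... 0.2727... 0.3636...] = (4/11, 3/11, 4/11)
print(pi_method2)      # same distribution, via mean return times
```

Both printouts agree with the hand computation above, and for state 0 the loop reproduces $g(1) = 2$, $g(2) = 3/2$, and $E_0[\tau_0] = 11/4$.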