ORIE 3510: Introduction to Engineering Stochastic Processes, Spring 2010
Section 4 Review

Topics:
- Interpretations of the stationary distribution
- Computation of steady-state costs/rewards
- Transient-state analysis (expected number of visits to transient states and absorption probabilities)

Problem 4.67

Note that if $X_n = i$, then the chain can only move to $i-1$ or $i+1$, or stay at $i$. Hence

$$P_{i,i+1} = P(X_{n+1} = i+1 \mid X_n = i, X_{n-1}, \dots, X_0) = p\,\frac{N-i}{N},$$

$$P_{i,i} = P(X_{n+1} = i \mid X_n = i, X_{n-1}, \dots, X_0) = p\,\frac{i}{N} + (1-p)\,\frac{N-i}{N},$$

$$P_{i,i-1} = P(X_{n+1} = i-1 \mid X_n = i, X_{n-1}, \dots, X_0) = (1-p)\,\frac{i}{N}.$$

Since the transition probabilities depend on the past only through the fact that $X_n = i$, this is a Markov chain. For $0 < p < 1$ all states communicate, so the chain is irreducible and hence recurrent. Further, $P_{0,0} = 1-p > 0$, so state $0$ has period 1 and the chain is aperiodic.

Suppose $N = 2$. Then

$$P = \begin{pmatrix} 1-p & p & 0 \\[2pt] \frac{1-p}{2} & \frac{1}{2} & \frac{p}{2} \\[2pt] 0 & 1-p & p \end{pmatrix}$$

and the stationarity equations $\pi = \pi P$ give

$$\pi_0 = (1-p)\,\pi_0 + \frac{1}{2}(1-p)\,\pi_1 \quad\Longrightarrow\quad \pi_1 = \frac{2p}{1-p}\,\pi_0 = \binom{2}{1}\left(\frac{p}{1-p}\right)^{1}\pi_0$$

and

$$\pi_2 = \frac{1}{2}\,p\,\pi_1 + p\,\pi_2 \quad\Longrightarrow\quad \pi_2 = \frac{p}{2(1-p)}\,\pi_1 = \binom{2}{2}\left(\frac{p}{1-p}\right)^{2}\pi_0.$$

Since $\sum_i \pi_i = 1$, we have

$$\pi_0 = \left(1 + \sum_{i=1}^{2}\binom{2}{i}\left(\frac{p}{1-p}\right)^{i}\right)^{-1}.$$
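As a quick numerical sanity check (not part of the original solution), the stationary distribution for the $N = 2$ chain can be computed by solving $\pi = \pi P$, $\sum_i \pi_i = 1$ directly. Normalizing the expressions above gives $\pi_i = \binom{2}{i} p^i (1-p)^{2-i}$, the Binomial$(2, p)$ pmf; the sketch below verifies this for one assumed value of $p$:

```python
import numpy as np
from math import comb

p = 0.3  # assumed value for illustration
N = 2

# Transition matrix for N = 2 (rows indexed by current state 0, 1, 2),
# as derived above.
P = np.array([
    [1 - p,        p,     0.0],
    [(1 - p) / 2,  0.5,   p / 2],
    [0.0,          1 - p, p],
])

# Solve pi P = pi together with sum(pi) = 1, i.e. the overdetermined
# system stacking (P^T - I) pi = 0 on top of the normalization row.
A = np.vstack([P.T - np.eye(N + 1), np.ones(N + 1)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Closed form after normalization: Binomial(N, p) probabilities.
pi_closed = np.array([comb(N, i) * p**i * (1 - p)**(N - i)
                      for i in range(N + 1)])

print(pi)                              # numerical stationary distribution
print(np.allclose(pi, pi_closed))      # matches the closed form
```

Detailed balance ($\pi_i P_{i,i+1} = \pi_{i+1} P_{i+1,i}$) gives the same answer, since this is a birth-and-death chain.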
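For the transient-state analysis topic listed in the review, the standard computation uses the fundamental matrix $S = (I - Q)^{-1}$, where $Q$ is the transient-to-transient block of the transition matrix: $S_{ij}$ is the expected number of visits to transient state $j$ starting from transient state $i$, and $SR$ gives the absorption probabilities, where $R$ is the transient-to-absorbing block. A minimal sketch on a made-up four-state chain (states 1, 2 transient; states 0, 3 absorbing; the specific probabilities are illustrative, not from the course):

```python
import numpy as np

# Q: one-step probabilities among the transient states {1, 2}.
Q = np.array([[0.0, 0.4],
              [0.5, 0.0]])
# R: one-step probabilities from transient states into the
# absorbing states {0, 3}.
R = np.array([[0.6, 0.0],
              [0.0, 0.5]])

# Fundamental matrix: S[i, j] = expected number of visits to
# transient state j before absorption, starting from i.
S = np.linalg.inv(np.eye(2) - Q)

# Absorption probabilities: B[i, k] = P(absorbed in state k | start i).
B = S @ R

print(S)
print(B)  # each row sums to 1: absorption is certain
```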
This note was uploaded on 10/29/2011 for the course ORIE 3510 taught by Professor Resnik during the Spring '09 term at Cornell University (Engineering School).