MIT6_041F10_assn08_sol

Massachusetts Institute of Technology
Department of Electrical Engineering & Computer Science
6.041/6.431: Probabilistic Systems Analysis (Fall 2010)

Problem Set 8: Solutions

1. (a) We consider a Markov chain with states $0, 1, 2, 3, 4, 5$, where state $i$ indicates that there are $i$ shoes available at the front door in the morning before Oscar leaves on his run. Now we can determine the transition probabilities. Assuming $i$ shoes are at the front door before Oscar sets out on his run, with probability $1/2$ Oscar will return to the same door from which he set out, and thus before his next run there will still be $i$ shoes at the front door. Alternatively, with probability $1/2$ Oscar returns to a different door, and in this case, with equal probability there will be $\min\{i+1,\,5\}$ or $\max\{i-1,\,0\}$ shoes at the front door before his next run. The resulting transition probabilities (illustrated as a Markov chain diagram in the original document; figure omitted) are

$$p_{i,i} = \frac{1}{2}, \qquad p_{i,i+1} = p_{i,i-1} = \frac{1}{4} \qquad \text{for } i = 1, \ldots, 4,$$

with boundary transitions $p_{0,0} = p_{5,5} = \frac{3}{4}$ and $p_{0,1} = p_{5,4} = \frac{1}{4}$.

(b) When there are either 0 or 5 shoes at the front door, with probability $1/2$ Oscar will leave on his run from the door with 0 shoes and hence run barefooted. To find the long-term probability of Oscar running barefooted, we must find the steady-state probabilities $\pi_0$ and $\pi_5$ of being in states 0 and 5, respectively. Note that the steady-state probabilities exist because the chain is recurrent and aperiodic. Since this is a birth-death process, we can use the local balance equations. We have

$$\pi_0\, p_{0,1} = \pi_1\, p_{1,0},$$

implying that $\pi_1 = \pi_0$, and similarly $\pi_5 = \cdots = \pi_1 = \pi_0$. As $\sum_{i=0}^{5} \pi_i = 1$, it follows that $\pi_i = \frac{1}{6}$ for $i = 0, 1, \ldots, 5$. Hence,

$$P(\text{Oscar runs barefooted in the long term}) = \frac{1}{2}\,(\pi_0 + \pi_5) = \frac{1}{6}.$$

2. (a) Consider any possible sequence of values $x_1, x_2, \ldots, x_{t-1}, i$ for $X_1, X_2, \ldots, X_t$, and note that

$$P\big(|X_{t+1}| = |i|+1 \,\big|\, X_t = i, X_{t-1} = x_{t-1}, \ldots, X_1 = x_1\big) = \begin{cases} \frac{1}{2}, & 0 < |i| < m, \\ 1, & i = 0, \\ 0, & |i| = m, \end{cases}$$

$$P\big(|X_{t+1}| = |i| \,\big|\, X_t = i, X_{t-1} = x_{t-1}, \ldots, X_1 = x_1\big) = \begin{cases} \frac{1}{2}, & |i| = m, \\ 0, & |i| \neq m, \end{cases}$$

$$P\big(|X_{t+1}| = |i|-1 \,\big|\, X_t = i, X_{t-1} = x_{t-1}, \ldots, X_1 = x_1\big) = \begin{cases} \frac{1}{2}, & 0 < |i| \leq m, \\ 0, & i = 0, \end{cases}$$

$$P\big(|X_{t+1}| = j \,\big|\, X_t = i, X_{t-1} = x_{t-1}, \ldots, X_1 = x_1\big) = 0 \quad \text{for } \big||i| - j\big| > 1.$$

As the conditional probabilities above depend only on $|i|$, where $|X_t| = |i|$, it follows that $|X_1|, |X_2|, \ldots$ satisfy the Markov property. The associated Markov chain is illustrated in the original document (figure omitted).
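The steady-state result in Problem 1(b) can be verified numerically. Below is a minimal Python sketch (not part of the original solutions) that builds the transition matrix from part (a) and power-iterates to the stationary distribution:

```python
import numpy as np

# Transition matrix of the shoe chain from Problem 1(a):
# from state i, stay with prob 1/2; otherwise move to
# min(i+1, 5) or max(i-1, 0) with prob 1/4 each.
P = np.zeros((6, 6))
for i in range(6):
    P[i, i] += 0.5
    P[i, min(i + 1, 5)] += 0.25
    P[i, max(i - 1, 0)] += 0.25

# Power iteration from a non-uniform start; the chain is
# recurrent and aperiodic, so pi converges to the steady state.
pi = np.array([1.0, 0, 0, 0, 0, 0])
for _ in range(2000):
    pi = pi @ P

print(np.round(pi, 4))            # each entry converges to 1/6
print(0.5 * (pi[0] + pi[5]))      # P(barefoot) = 1/6
```

The uniform answer also follows directly from the local balance argument above: every $\pi_i$ is equal, so normalization forces $\pi_i = 1/6$.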
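The conditional probabilities in Problem 2(a) can be checked by simulation. The sketch below assumes the underlying walk takes $\pm 1$ steps clipped to stay within $[-m, m]$ — an assumption, since the original problem statement is not reproduced here, but one that is consistent with the conditional probabilities derived above — and estimates the transition frequencies of $|X_t|$:

```python
import random
from collections import Counter

random.seed(0)
m = 3
counts = Counter()  # (|X_t|, |X_{t+1}|) -> observed transition count
x = 0
for _ in range(200_000):
    step = random.choice((-1, 1))
    nxt = max(-m, min(m, x + step))   # assumed clipped +-1 walk
    counts[(abs(x), abs(nxt))] += 1
    x = nxt

# Empirical P(|X_{t+1}| = j given |X_t| = i), zeros dropped.
probs = {}
for i in range(m + 1):
    total = sum(c for (a, _), c in counts.items() if a == i)
    probs[i] = {j: counts[(i, j)] / total
                for j in range(m + 1) if counts[(i, j)]}
print(probs)
```

The estimates match the table above: state 0 always moves to 1, interior states split $1/2$–$1/2$ between $|i|-1$ and $|i|+1$, and state $m$ stays or moves to $m-1$ with probability $1/2$ each.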

This note was uploaded on 01/11/2012 for the course 6.431, taught by Prof. Dimitri Bertsekas during the Fall 2010 term at MIT.
