ECE320 Solution Notes 8
Spring 2006, Cornell University, T. L. Fine

1. In a Markov chain, let X = {1, 2, 3} and

   π^(1) = [.2, .3, .5],   P = [ .5  .5   0 ]
                               [ .3  .3  .4 ]
                               [  0  .6  .4 ].

(a) Evaluate P(X_2 = 2).

   π^(2) = π^(1) P, so P(X_2 = 2) = π^(2)(2) = .2(.5) + .3(.3) + .5(.6) = .49.

(b) Draw a state transition diagram.

   Figure 1: State transition diagram for states 1, 2, 3, with the entries of P as edge labels; each state has a self-loop.

(c) Classify the three states as to whether they are absorbing, persistent, or transient.

   All three states are persistent, in that with probability one the chain always eventually returns to each of them. None of them is absorbing, in that with positive probability (actually probability one) the chain eventually exits from each of these three states.

(d) What are the periodicities d_i for i in X?

   As each state has a self-loop, a positive probability of returning to itself in one step, the periodicities are all equal to 1.

(e) Identify the communicating classes.

   There is a single communicating class containing all three states; each state is accessible in no more than two steps from each other state.

(f) Identify the closed communicating classes.

   As there is a single communicating class in a finite Markov chain, it must be closed. Alternatively, as all states reside in a single communicating class, it is obviously closed.

(g) If there is a stationary or limiting initial distribution π, then determine π. If there is no such limiting initial distribution, then provide reasons.

   This solution exists because the Markov chain is irreducible (a single closed communicating class) and aperiodic (period 1 for all states). The unique stationary or limiting distribution satisfies π = πP:

      π(1) = .5 π(1) + .3 π(2) + 0,
      π(2) = .5 π(1) + .3 π(2) + .6 π(3),
      π(3) = 0 + .4 π(2) + .4 π(3).

   The first equation gives π(1) = (3/5) π(2) and the third gives π(3) = (2/3) π(2). The unique solution, normalized to sum to 1 as required of a pmf, is

      π(1) = 9/34,   π(2) = 15/34,   π(3) = 10/34.

(h) If there is a stationary limiting distribution, what is the expected time between returns to state 2?

   As the Markov chain is irreducible and aperiodic,
      ET_{2,2} = 1/π(2) = 34/15.

2. Repeat Problem 1 for

   P = [ .5  .5   0 ]
       [  0  .6  .4 ]
       [  0  .2  .8 ].

(a) P(X_2 = 2) = π^(2)(2) = .2(.5) + .3(.6) + .5(.2) = .38.

(b) [State transition diagram not reproduced in this preview.]

(c) State 1 is transient; with probability one the chain eventually leaves this state, never to return. The other two states are both persistent, but neither is absorbing, as there is a positive probability of leaving each of 2, 3 in one step. ...
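The arithmetic in both problems can be checked numerically. Below is a minimal sketch using numpy; the variable names (P1, P2, pi1, pi_st) are ours, not from the notes, and the stationary distribution is found as the left eigenvector of P for eigenvalue 1 rather than by hand as above.

```python
import numpy as np

# Problem 1: transition matrix and initial distribution pi^(1).
P1 = np.array([[0.5, 0.5, 0.0],
               [0.3, 0.3, 0.4],
               [0.0, 0.6, 0.4]])
pi1 = np.array([0.2, 0.3, 0.5])

# (a) One-step update pi^(2) = pi^(1) P; P(X2 = 2) is its second component.
pi2 = pi1 @ P1
print(pi2[1])  # 0.49 up to float rounding

# (g) Stationary distribution: left eigenvector of P1 for eigenvalue 1,
# normalized to sum to 1.
vals, vecs = np.linalg.eig(P1.T)
v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi_st = v / v.sum()
print(pi_st)   # approximately [9/34, 15/34, 10/34]

# (h) Mean recurrence time of state 2 is the reciprocal 1/pi(2) = 34/15.
print(1.0 / pi_st[1])

# Problem 2: state 1 is transient -- the only way to return is to never
# leave, so the n-step return probability P^n[1,1] = 0.5**n tends to 0.
P2 = np.array([[0.5, 0.5, 0.0],
               [0.0, 0.6, 0.4],
               [0.0, 0.2, 0.8]])
print((pi1 @ P2)[1])                          # 0.38, part 2(a)
print(np.linalg.matrix_power(P2, 20)[0, 0])   # 0.5**20, nearly 0
```

Solving π = πP via the eigendecomposition of the transpose is a standard alternative to the componentwise equations written out above; both give the same distribution.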