Assignment 9



…is the first holding interval (U1). Thus,

E[V | X0 = i, X1 = i + 1] = E[U1 | X0 = i, X1 = i + 1] = 1/(λ + µ).

Conditional on {X0 = i, X1 = i − 1}, we know that the first transition is a departure, so the time until the first arrival is the sum of the time of the first transition (i.e., a departure) and the time until the next arrival. The second term is exponentially distributed with rate λ, so we have:

E[V | X0 = i, X1 = i − 1] = E[U1 | X0 = i, X1 = i − 1] + E[V | X1 = i − 1] = 1/(λ + µ) + 1/λ.

d) Using the total expectation lemma, we have:

E[V | X0 = i] = E[V | X0 = i, X1 = i + 1] Pr{X1 = i + 1 | X0 = i} + E[V | X0 = i, X1 = i − 1] Pr{X1 = i − 1 | X0 = i}
             = [1/(λ + µ)] · λ/(λ + µ) + [1/(λ + µ) + 1/λ] · µ/(λ + µ)
             = 1/λ.

Since this is true for any choice of i > 0, and it was assumed that X0 = i for some i > 0, E[V] = 1/λ.

Exercise 6.2:

The transition diagram for the embedded chain has states 0, 1, 2, 3, 4, …: from state 0 the chain goes to state 1 with probability 1, and from each state i ≥ 1 it goes up to i + 1 with probability 2/5 and down to i − 1 with probability 3/5.

a) The steady state probabilities satisfy π0 = (3/5)π1 and (2/5)πi−1 = (3/5)πi for i ≥ 2. Iterating on these equations,

πi = (2/3)^(i−1) π1 = (5/3)(2/3)^(i−1) π0, for i ≥ 1,

1 = Σi πi = π0 [1 + (5/3) Σi≥1 (2/3)^(i−1)] = 6π0, so π0 = 1/6.

πi …
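As a sanity check on both results, the short Python sketch below (an illustration, not part of the original solution) verifies the algebra numerically. The rates λ = 2, µ = 3 are assumed values chosen only so that λ/(λ + µ) = 2/5 and µ/(λ + µ) = 3/5, matching the embedded chain above.

from fractions import Fraction

# Assumed rates for illustration: lam/(lam+mu) = 2/5, mu/(lam+mu) = 3/5.
lam, mu = Fraction(2), Fraction(3)

# Part d): total expectation over the first transition.
e_up   = 1 / (lam + mu)              # E[V | X0 = i, X1 = i+1]
e_down = 1 / (lam + mu) + 1 / lam    # E[V | X0 = i, X1 = i-1]
p_up   = lam / (lam + mu)            # Pr{X1 = i+1 | X0 = i}
p_down = mu / (lam + mu)             # Pr{X1 = i-1 | X0 = i}
assert e_up * p_up + e_down * p_down == 1 / lam   # E[V] = 1/lambda

# Exercise 6.2 a): pi_i = (5/3)(2/3)^(i-1) pi_0 for i >= 1, then normalize.
N = 200                              # truncation level; the geometric tail is negligible
weights = [Fraction(1)] + [Fraction(5, 3) * Fraction(2, 3) ** (i - 1) for i in range(1, N)]
total = sum(weights)                 # approaches 6 as N grows
print(float(total), float(1 / total))   # ~6.0 and ~0.1667 = 1/6

Because the check uses exact fractions, the part d) assertion holds for any other positive choice of lam and mu as well, consistent with the claim that E[V] = 1/λ for all λ, µ > 0.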