Discrete-time stochastic processes


has a non-zero transition to a lower numbered state). Show that there is some state other than 1 to i, say i + 1, and some decision k_{i+1} such that P^{(k_{i+1})}_{i+1,l} > 0 for some l ≤ i.

d) Use parts a), b), and c) to observe that there is a stationary policy k = k_1, . . . , k_M for which state 1 is accessible from each other state.

Chapter 5

COUNTABLE-STATE MARKOV CHAINS

5.1 Introduction and classification of states

Markov chains with a countably-infinite state space (more briefly, countable-state Markov chains) exhibit some types of behavior not possible for chains with a finite state space. Figure 5.1 helps explain how these new types of behavior arise. If the right-going transitions p in the figure satisfy p > 1/2, then transitions to the right occur with higher frequency than transitions to the left. Thus, reasoning heuristically, we expect the state X_n at time n to drift to the right with increasing n. Given X_0 = 0, the probability P^n_{0j} of being in state j at time n should then tend to zero for any fixed j with increasing n. If one tried to define the steady-state probability of state j as lim_{n→∞} P^n_{0j}, then this limit would be 0 for all j. These probabilities would not sum to 1, and thus would not correspond to a limiting distribution. Thus we say that a steady state does not exist. In more poetic terms, the state wanders off into the wild blue yonder.

[Diagram for Figure 5.1: states 0, 1, 2, 3, 4, ... in a row; each state moves right with probability p and left with probability q = 1 − p, with a self-loop of probability q at state 0.]

Figure 5.1: A Markov chain with a countable state space. If p > 1/2, then as time n increases, the state X_n becomes large with high probability, i.e., lim_{n→∞} Pr{X_n ≥ j} = 1 for each integer j.

The truncation of Figure 5.1 to k states is analyzed in Exercise 4.7. The solution there defines ρ = p/q and shows that if ρ ≠ 1, then π_i = (1 − ρ)ρ^i / (1 − ρ^k) for each i, 0 ≤ i < k. For ρ = 1, π_i = 1/k for each i. For ρ < 1, the limiting behavior as k → ∞ is π_i = (1 − ρ)ρ^i.
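The truncated-chain result above can be checked numerically. The sketch below is my own illustration, not part of Exercise 4.7: it assumes the truncation keeps states 0, ..., k−1 and turns the transitions that would leave this set into self-loops, then compares the stationary distribution found by power iteration with the closed form π_i = (1 − ρ)ρ^i / (1 − ρ^k).

```python
def truncated_stationary(p, k, iters=20000):
    """Stationary distribution of a k-state truncation of the Figure 5.1 chain.

    Assumed model: state i moves to i+1 with probability p and to i-1 with
    q = 1 - p; the endpoint transitions that would leave {0, ..., k-1}
    become self-loops (at 0 and at k-1).
    """
    q = 1.0 - p
    pi = [1.0 / k] * k                        # start from the uniform distribution
    for _ in range(iters):                    # power iteration: pi <- pi P
        new = [0.0] * k
        for i, mass in enumerate(pi):
            right = i + 1 if i + 1 < k else i     # p-transition (self-loop at k-1)
            left = i - 1 if i > 0 else i          # q-transition (self-loop at 0)
            new[right] += mass * p
            new[left] += mass * q
        pi = new
    return pi

def closed_form(p, k):
    """pi_i = (1 - rho) rho^i / (1 - rho^k) with rho = p/q, valid for rho != 1."""
    rho = p / (1.0 - p)
    return [(1 - rho) * rho ** i / (1 - rho ** k) for i in range(k)]

p, k = 0.4, 6                                 # rho = 2/3 < 1
numeric = truncated_stationary(p, k)
exact = closed_form(p, k)
print(max(abs(a - b) for a, b in zip(numeric, exact)))
```

For ρ < 1 the computed π_i shrink geometrically with i, matching the claim that the truncated chain resembles the untruncated one; rerunning with p = 0.6 shows the mass piling up at the right edge instead.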
Thus for ρ < 1, the truncated Markov chain is similar to the untruncated chain. For ρ > 1, on the other hand, the steady-state probabilities for the truncated case are geometrically decreasing from the right, and the states with significant probability keep moving to the right as k increases. Although the probability of each fixed state j approaches 0 as k increases, the truncated chain never resembles the untruncated chain. This example is further studied in Section 5.3, which considers a generalization known as birth-death Markov chains.

Fortunately, the strange behavior of Figure 5.1 when p > q is not typical of the Markov chains of interest for most applications. For typical countable-state Markov chains, a steady state does exist, and the steady-state probabilities of all but a finite number of states (the number depending on the chain and the application) can almost be ignored for numerical calculations.

It turns out that the appropriate tool to analyze the behavior, and particularly the long-term behavior, of countable-state Markov chains is renewal theory. In particular, we will first revise the definition of recurrent states for finite-state Markov chains to cover the countable-state case. We then show that for any given recurrent state j, the sequence of discrete time epochs n at which the state X_n is equal to j essentially forms a renewal process. The renewal theorems then specify the time-average relative frequency of state j, the limiting probability of j with increasing time, and a number of other relations.

To be slightly more precise, we want to understand the sequence of epochs at which one state, say j, is entered, conditional on starting the chain either at j or at some other state, say i. We will see that, subject to the classification of states i and j, this gives rise to a delayed renewal process.
In preparing to study this delayed renewal process, we need to understand the inter-renewal intervals. The probability mass functions (PMF's) of these intervals are called first-passage-time probabilities in the notation of Markov chains.

Definition 5.1. The first-passage-time probability, f_ij(n), is the probability that the first entry to state j occurs at discrete time n (for n ≥ 1), given that X_0 = i. That is, for n = 1, f_ij(1) = P_ij. For n ≥ 2,

    f_ij(n) = Pr{X_n = j, X_{n−1} ≠ j, X_{n−2} ≠ j, . . . , X_1 ≠ j | X_0 = i}.   (5.1)

For n ≥ 2, note the distinction between f_ij(n) and P^n_{ij} = Pr{X_n = j | X_0 = i}. The definition in (5.1) also applies for j = i; f_ii(n) is thus the probability, given X_0 = i, that the first occurrence of state i after time 0 occurs at time n. Since the transition probabilities are independent of time, f_ij(n − 1) is also the probability, given X_1 = i, that the first subsequent occurrence of state j occurs at time n. Thus we can calculate f_ij(n) from the iterative relations

    f_ij(n) = Σ_{k≠j} P_ik f_kj(n − 1),  n > 1;    f_ij(1) = P_ij.   (5.2)

With this iterative approach, the first-passage-time probabilities f_ij(n) for a given n mu...
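The recursion (5.2) translates directly into code. The sketch below is illustrative only; the 2x2 transition matrix is my own example, chosen so that f_01(n) has an obvious closed form (stay at 0 for n−1 steps, then move) against which the recursion can be sanity-checked.

```python
def first_passage(P, i, j, n):
    """f_ij(n) via (5.2): f_ij(1) = P_ij and
    f_ij(n) = sum over k != j of P_ik * f_kj(n - 1) for n > 1."""
    if n == 1:
        return P[i][j]
    return sum(P[i][k] * first_passage(P, k, j, n - 1)
               for k in range(len(P)) if k != j)

# Illustrative two-state chain: from state 0, the first entry to state 1
# at time n requires staying at 0 for n-1 steps and then moving, so
# f_01(n) should equal 0.7**(n-1) * 0.3.
P = [[0.7, 0.3],
     [0.5, 0.5]]
for n in range(1, 5):
    print(n, first_passage(P, 0, 1, n))
```

Note that summing only over k ≠ j is what distinguishes f_ij(n) from P^n_{ij}: paths that visit j before time n are excluded. For larger state spaces the naive recursion repeats work; iterating (5.2) forward in n over all i at once (or memoizing) keeps the cost linear in n.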

This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R. Srikant during the Spring '09 term at University of Illinois, Urbana-Champaign.
