Unformatted text preview: eak
up the matrix [etΛ ] into a sum of n matrices, each with only a single nonzero element, then
(6.35) becomes
[P (t)] = n
X T ∫ i et∏i π i . (6.36) i=1 Note that each term in (6.36) is a matrix formed by the product of a column vector ∫i and
a row vector π_i, scaled by e^{tλ_i}. From Theorem 6.3, one of these eigenvalues, say λ_1, is equal
to 0, and thus e^{tλ_1} = 1 for all t. Each other eigenvalue λ_i has a negative real part, so e^{tλ_i}
goes to zero with increasing t for each of these terms. Thus the term for i = 1 in (6.36)
contains the steady-state solution, e p, where e is the column vector of all ones; this is a matrix in
which each row is the steady-state probability vector p.
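As a concrete illustration, the decomposition (6.36) can be checked numerically. The following is a minimal sketch, assuming a hypothetical 3-state generator matrix [Q] (not taken from the text); the right eigenvectors ν_i are the columns of V and the matching left eigenvectors π_i are the rows of V^{-1}, so that π_i ν_j = δ_ij:

```python
import numpy as np

# Hypothetical 3-state generator matrix [Q] (illustrative only):
# off-diagonal entries are the transition rates q_ij, rows sum to 0.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 2.0,  2.0, -4.0]])
n = Q.shape[0]

# Right eigenvectors nu_i are the columns of V; the matching left
# eigenvectors pi_i are the rows of V^{-1}.
lam, V = np.linalg.eig(Q)
W = np.linalg.inv(V)

def P(t):
    """[P(t)] = sum_i nu_i e^{t lam_i} pi_i, as in (6.36)."""
    return np.real(sum(np.exp(lam[i] * t) * np.outer(V[:, i], W[i, :])
                       for i in range(n)))

# Sanity checks: [P(0)] = I, rows of [P(t)] sum to 1, and the
# semigroup property [P(s+t)] = [P(s)][P(t)] holds.
assert np.allclose(P(0.0), np.eye(n))
assert np.allclose(P(0.7).sum(axis=1), 1.0)
assert np.allclose(P(1.2), P(0.5) @ P(0.7))

# For large t only the lam_1 = 0 term survives: every row of [P(t)]
# approaches the same steady-state vector p (here p = (0.3, 0.5, 0.2)).
P_big = P(50.0)
assert np.allclose(P_big[0], P_big[1]) and np.allclose(P_big[1], P_big[2])
print(np.round(P_big[0], 4))
```

For this particular [Q] the eigenvalues are 0, −4, and −5, so the convergence of each row to p is geometric at rate e^{−4t}.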
Another way to express the solution to these equations (for finite n) is by the use of Laplace
transforms. Let Lij (s) be the Laplace transform of Pij (t) and let [L(s)] be the n by n matrix
(for each s) of the elements Lij (s). Then the equation d[P (t)]/dt = [Q][P (t)] for t ≥ 0,
along with the initial condition [P (0)] = I , becomes the Laplace transform equation
[L(s)] = [sI − [Q]]^{−1} .    (6.37)

This appears to be simpler than (6.36), but it is really just more compact in notation. It
can still be used, however, when [Q] has fewer than n eigenvectors.

CHAPTER 6. MARKOV PROCESSES WITH COUNTABLE STATE SPACES

6.4 Uniformization

Up until now, we have discussed Markov processes under the assumption that q_ii = 0 (i.e.,
no transitions from a state into itself are allowed). We now consider what happens if this
restriction is removed. Suppose we start with some Markov process defined by a set of
transition rates qij with qii = 0, and we modify this process by some arbitrary choice of
q_ii ≥ 0 for each state i. This modification changes the embedded Markov chain, since ν_i is
increased from Σ_{k≠i} q_ik to Σ_{k≠i} q_ik + q_ii. From (6.5), P_ij is changed to q_ij/ν_i for the new
value of ν_i for each i, j. Thus the steady-state probabilities π_i for the embedded chain are
changed. The Markov process {X (t); t ≥ 0} is not changed, since a transition from i into
itself does not change X (t) and does not change the distribution of the time until the next
transition to a different state. The steady-state probabilities for the process still satisfy
p_j ν_j = Σ_k p_k q_kj ;        Σ_i p_i = 1.    (6.38)

The addition of the new term q_jj increases ν_j by q_jj, thus increasing the left-hand side by
p_j q_jj. The right-hand side is similarly increased by p_j q_jj, so that the solution is unchanged
(as we already determined it must be).
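This invariance is easy to verify numerically. The sketch below assumes a hypothetical 3-state rate matrix (not from the text); it solves (6.38) for p, adds arbitrary self-transition rates q_jj, and checks that the same p still satisfies the balance equations while the embedded chain's transition probabilities do change:

```python
import numpy as np

# Hypothetical 3-state rate matrix (q_ij for i != j, zero diagonal).
R = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 1.0],
              [2.0, 2.0, 0.0]])
nu = R.sum(axis=1)                      # nu_i = sum_{k != i} q_ik

# Solve (6.38) for p: p_j nu_j = sum_k p_k q_kj with sum_i p_i = 1.
G = R - np.diag(nu)                     # generator [Q]: p [Q] = 0
A = np.vstack([G.T, np.ones(3)])
p = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]

# Add arbitrary self-transition rates q_jj >= 0.
s = np.array([0.5, 1.5, 0.0])
R2 = R + np.diag(s)                     # new rates, with q_jj on the diagonal
nu2 = nu + s                            # each nu_j grows by q_jj

# Both sides of (6.38) grow by p_j q_jj, so the same p still satisfies it.
assert np.allclose(p * nu2, p @ R2)

# The embedded chain, however, does change: P_ij = q_ij / nu_i.
P_old, P_new = R / nu[:, None], R2 / nu2[:, None]
assert not np.allclose(P_old, P_new)
```

Note that the process steady state p is untouched even though the embedded-chain probabilities are rescaled row by row.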
A particularly convenient way to add self-transitions is to add them in such a way as to
make the transition rate ν_j the same for all states. Assuming that the transition rates
{ν_i; i ≥ 0} are bounded, we define ν* as sup_j ν_j for the original transition rates. Then we
set q_jj = ν* − Σ_{k≠j} q_jk for each j. With this addition of self-transitions, all transition rates
become ν*. From (6.19), we see that the new steady-state probabilities, π*_i, in the embedded
Markov chain become equal to the steady-state process probabilities, p_i. Naturally, we could
also choose any ν greater than ν* and increase each q_jj to make all transition rates equal to
that value of ν. When the transition rates are changed in this way, the resulting embedded
chain is called a uniformized chain and the Markov process is called the uniformized process.
The uniformized process is the same as the original process, except that quantities like the
number of transitions over some interval are different because of the self-transitions.
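A minimal numerical sketch of this construction, again with a hypothetical 3-state rate matrix: it forms ν*, builds the uniformized embedded chain [P*] = I + [Q]/ν* (equivalently P*_ij = q_ij/ν* with the self-rates q_jj on the diagonal), checks that its stationary vector equals the process steady state p, and verifies that Poisson-weighted powers of [P*] reproduce e^{[Q]t}:

```python
import numpy as np
from math import factorial

# Hypothetical 3-state rate matrix (q_ij for i != j, zero diagonal).
R = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 1.0],
              [2.0, 2.0, 0.0]])
nu = R.sum(axis=1)
nu_star = nu.max()                       # nu* = sup_j nu_j

# Uniformize: q_jj = nu* - sum_{k != j} q_jk, so every state has rate nu*.
# The embedded chain becomes P*_ij = q_ij / nu*, i.e. [P*] = I + [Q]/nu*.
G = R - np.diag(nu)                      # generator [Q]
P_star = np.eye(3) + G / nu_star
assert np.allclose(P_star.sum(axis=1), 1.0)

# Stationary vector of the uniformized embedded chain equals the process
# steady state p (solve p [Q] = 0 with sum_i p_i = 1).
A = np.vstack([G.T, np.ones(3)])
p = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
assert np.allclose(p @ P_star, p)

# Transition probabilities over [0, t]: condition on the Poisson number
# of uniformized transitions, N(t) ~ Poisson(nu* t).
def P_t(t, terms=60):
    out = np.zeros((3, 3))
    Pn = np.eye(3)                       # [P*]^n, starting at n = 0
    for k in range(terms):
        out += np.exp(-nu_star * t) * (nu_star * t) ** k / factorial(k) * Pn
        Pn = Pn @ P_star
    return out

# Compare with a direct power-series evaluation of e^{[Q]t}.
def expm_series(M, terms=60):
    out, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ M / k
        out += term
    return out

t = 0.8
assert np.allclose(P_t(t), expm_series(G * t))
```

The last check is just the identity e^{[Q]t} = e^{−ν*t} e^{ν*t[P*]}, which follows from [Q] = ν*([P*] − I).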
Assuming that all transition rates are made equal to ν*, the new transition probabilities in
the embedded chain become P*_ij = q_ij/ν*. Let N(t) be the total number of transitions that
occur from 0 to t in the uniformized process. Since the rate of transitions is the same from
all states and the intertransition intervals are independent and identically exponentially
distributed, N (t) is a Poisson counting process of rate ∫ ∗ . Also, N (t) is independent of
the sequence of transitions in the embedded uniformized Markov chain. Thus, given that
N(t) = n, the probability that X(t) = j given that X(0) = i is just the probability that
the embedded chain goes from i to j in n steps, i.e., P*_ij^n, the i, j entry of the nth power of
[P*]. This gives us another formula for calculating P_ij(t), i.e., the probability that X(t) = j
given that X(0) = i:
P_ij(t) = Σ_{n=0}^{∞} P*_ij^n e^{−ν*t} (ν*t)^n / n! .    (6.39)

Another situation where the uniformized process is useful is in extending Markov decision
theory to Markov processes, but we do not pursue this.

6.5 Birth-death processes

Birth-death processes are very similar to the birth-death Markov chains that we studied
earlier. Here transitions occur only between neighboring states, so it is convenient to define
λ_i as q_{i,i+1} and µ_i as q_{i,i−1} (see Figure 6.6). Since the number of transitions from i to i + 1
is within 1 of the number of...
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R. Srikant during the Spring '09 term at the University of Illinois, Urbana-Champaign.