Copyright © 2010 by Karl Sigman

1 Continuous-Time Markov Chains

A Markov chain in discrete time, {X_n : n ≥ 0}, remains in any state for exactly one unit of time before making a transition (change of state). We proceed now to relax this restriction by allowing a chain to spend a continuous amount of time in any state, but in such a way as to retain the Markov property. As motivation, consider the rat in the open maze: it is clearly more realistic to keep track of where the rat is at any continuous time t ≥ 0, as opposed to only where the rat is after n "steps". Assume throughout that our state space is S = Z = {..., -2, -1, 0, 1, 2, ...} (or some subset thereof). Suppose now that whenever a chain enters state i ∈ S, independent of the past, the length of time spent in state i is a continuous, strictly positive (and proper) random variable H_i, called the holding time in state i. When the holding time ends, the process makes a transition into state j according to the transition probability P_ij, independent of the past, and so on.¹ Letting X(t) denote the state at time t, we end up with a continuous-time stochastic process {X(t) : t ≥ 0} with state space S. As we shall see, the holding times will have to be exponentially distributed to ensure that the continuous-time process satisfies the Markov property: the future, {X(s+t) : t ≥ 0}, given the present state X(s), is independent of the past, {X(u) : 0 ≤ u < s}. Such a process is called a continuous-time Markov chain (CTMC). The formal definition is given by

Definition 1.1 A stochastic process {X(t) : t ≥ 0} with discrete state space S is called a continuous-time Markov chain (CTMC) if for all t ≥ 0, s ≥ 0, i ∈ S, j ∈ S,

P(X(s+t) = j | X(s) = i, {X(u) : 0 ≤ u < s}) = P(X(s+t) = j | X(s) = i) = P_ij(t).
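The construction just described can be simulated directly: hold in state i for an exponential time H_i, then jump according to the row i of the embedded transition matrix. The sketch below uses a hypothetical 3-state chain; the rates and the matrix P are made up purely for illustration and are not from the notes.

```python
import random

# Hypothetical 3-state example (not from the notes): exponential
# holding-time rates, and embedded transition probabilities P_ij
# with each row summing to 1 (here P_ii = 0, although P_ii > 0 is allowed).
RATES = [1.0, 2.0, 0.5]          # rate of the exponential holding time H_i
P = [[0.0, 0.7, 0.3],
     [0.5, 0.0, 0.5],
     [0.4, 0.6, 0.0]]

def simulate_ctmc(start, t_end, seed=42):
    """Simulate one path of the CTMC up to time t_end.

    In each state i we hold for an Exp(RATES[i]) amount of time, then
    jump to state j with probability P[i][j], independent of the past.
    Returns the path as a list of (jump time, new state) pairs.
    """
    rng = random.Random(seed)
    t, state = 0.0, start
    path = [(0.0, start)]
    while True:
        hold = rng.expovariate(RATES[state])   # holding time H_i in state i
        if t + hold >= t_end:
            return path                        # no more jumps before t_end
        t += hold
        # sample the next state from row `state` of the embedded matrix P
        u, cum = rng.random(), 0.0
        for j, p in enumerate(P[state]):
            cum += p
            if u < cum:
                state = j
                break
        path.append((t, state))

print(simulate_ctmc(0, 10.0)[:3])
```

Because each holding time is drawn fresh and independently of the past, the sampled process {X(t)} has exactly the structure described above; that the result is Markov hinges on the exponential distribution, as the notes explain next.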
P_ij(t) is the probability that the chain will be in state j, t time units from now, given it is in state i now. For each t ≥ 0 there is a transition matrix P(t) = (P_ij(t)), i, j ∈ S, and P(0) = I, the identity matrix. As for discrete-time Markov chains, we are assuming here that the distribution of the future, given the present state X(s), does not depend on the present time s, but only on the present state X(s) = i, whatever it is, and the amount of time t that has elapsed since time s. In particular, P_ij(t) = P(X(t) = j | X(0) = i).

¹ P_ii > 0 is allowed, meaning that a transition back into state i from state i can occur. Each time this happens, though, a new H_i, independent of the past, determines the new length of time spent in state i.
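For a finite-state chain the matrices P(t) can be computed numerically as a matrix exponential of a rate matrix Q (machinery developed later in these notes, not in the excerpt above). The sketch below uses a hypothetical 2-state generator purely to illustrate the two properties just stated: P(0) = I, and each row of P(t) is a probability distribution.

```python
# Hypothetical 2-state rate matrix Q (rows sum to 0); this is only a
# numerical illustration, assuming P(t) = e^{Qt} for a finite-state CTMC.
Q = [[-1.0, 1.0],
     [2.0, -2.0]]

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(Q, t, terms=40):
    """P(t) = e^{Qt} via a truncated Taylor series (adequate for small Qt)."""
    n = len(Q)
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # I
    term = [row[:] for row in P]                  # current series term (Qt)^k / k!
    Qt = [[q * t for q in row] for row in Q]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, Qt)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

print(mat_exp(Q, 0.0))   # the identity matrix, i.e. P(0) = I
print(mat_exp(Q, 1.0))   # each row sums to 1: a transition matrix
```

A further check worth running is the semigroup (Chapman-Kolmogorov) property P(s + t) = P(s)P(t), which is exactly the time-homogeneity assumption stated above in matrix form.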