
# 6.262 Lecture 15 - Discrete Stochastic Processes


Lecture 15 - 3/31/2010

- Delayed Renewal Processes (Section 3.8): brief overview
- Countable-State Markov Chains (Sections 5.1 and 5.2)
  - Examples from the M/M/1 queue
  - Renewal-theory approach to first-passage times
  - Transient, positive-recurrent, and null-recurrent classes
  - Steady-state probabilities and mean recurrence times
  - Birth-death chains

## Delayed Renewal Processes

It often occurs in applications that a random process is almost a renewal process, except that the first inter-renewal interval, $X_1$, while independent of all the others, has a different distribution. A familiar example is a Bernoulli process in which we declare a renewal every time the sequence 0101 has just appeared: then $X_1 \ge 4$, while the $X_n$ for $n \ge 2$ are i.i.d. and $P(X_n = 2) > 0$.

Def: A delayed renewal process is characterized by a sequence of arrival epochs $S_n = \sum_{k=1}^{n} X_k$, $n \ge 1$, where $X_1, X_2, \dots$ are independent, $X_2, X_3, \dots$ are i.i.d. with $E[|X_k|] < \infty$ for $k \ge 2$, and $X_1$ is a non-defective random variable.
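The 0101 example can be checked by simulation. The sketch below is my own illustration (the helper name and parameters are not from the lecture): it generates fair-coin Bernoulli trials and records a renewal each time 0101 has just appeared, so the first interval is always at least 4 while later intervals can be as short as 2.

```python
import random

def sample_interrenewals(n_renewals=10_000, seed=1):
    """Fair-coin Bernoulli process; a renewal occurs each time the last
    four bits equal 0101.  Returns the inter-renewal intervals
    X_1, X_2, ..., where X_1 is the (delayed) first interval."""
    rng = random.Random(seed)
    intervals, window, last, t = [], [], 0, 0
    while len(intervals) < n_renewals:
        t += 1
        window = (window + [rng.randrange(2)])[-4:]  # keep last 4 bits
        if window == [0, 1, 0, 1]:
            intervals.append(t - last)
            last = t
    return intervals

xs = sample_interrenewals()
# X_1 needs at least 4 trials; a renewal can recur after only 2 more
# trials (...0101 + 01 -> ...010101).
```

Since the pattern 0101 appears at any given trial with probability $1/16$, the i.i.d. intervals $X_2, X_3, \dots$ have mean 16, even though the waiting time from scratch, $E[X_1]$, is larger (20 for this pattern, by the standard overlap argument).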
Since $P(X_1 = \infty) = 0$, the effect of $X_1$ on the long-term behavior is washed out with time. For example, the strong law and the elementary renewal theorem continue to hold, i.e., with $\overline{X}_2 = E[X_2]$,

$$\lim_{t\to\infty} \frac{N(t)}{t} \;=\; \lim_{t\to\infty} \frac{E[N(t)]}{t} \;=\; \frac{1}{\overline{X}_2}.$$

Blackwell's theorem is similarly unaffected, and for renewal-reward processes,

$$\lim_{t\to\infty} \frac{1}{t}\int_0^t R(\tau)\,d\tau \;=\; \frac{E[R_n]}{\overline{X}_2} \quad \text{for every } n \ge 2.$$

Sections 3.8.2 and 3.8.3 give interesting results for transient behavior, and for an "equilibrium process" in which the distribution of $X_1$ is chosen so that $m(t) = t/\overline{X}_2$ for all $t \ge 0$, but we will not develop those topics further.
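As a sanity check on the strong law for delayed renewals, the following sketch (parameter values and the helper name are my own, chosen for illustration) gives the first interval a deliberately different mean from the rest and verifies that $N(t)/t$ still approaches $1/\overline{X}_2$:

```python
import random

def renewal_rate(t_end=200_000.0, first_mean=50.0, mean2=2.0, seed=7):
    """Delayed renewal process: X_1 is exponential with a (deliberately
    different) mean first_mean; X_2, X_3, ... are i.i.d. exponential
    with mean mean2.  Returns N(t_end) / t_end."""
    rng = random.Random(seed)
    t = rng.expovariate(1.0 / first_mean)   # the delayed first arrival
    n = 0
    while t <= t_end:
        n += 1
        t += rng.expovariate(1.0 / mean2)
    return n / t_end

rate = renewal_rate()
# rate approaches 1 / mean2 = 0.5 as t_end grows; the atypical first
# interval has no effect on the limit.
```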

## Countable-State Markov Chains

Several new phenomena can occur in a Markov chain when the number of states becomes infinite. One is that for some infinite chains, starting from each state $i$ you can reach any other state $j$ along a path of positive probability, yet the expected first-passage time can be infinite, or, even worse, the probability that you ever reach state $j$ can be less than 1.

Similarly, let $P_{ij}^{n}$ be the probability that, starting from state $i$ at time 0, the chain is in state $j$ at time $n$. Then for some infinite chains $\lim_{n\to\infty} P_{ij}^{n} = 0$ for every choice of $(i,j)$, so there is no steady-state distribution $\vec{\pi}$. The first section of Chapter 5 gives a careful analysis of two examples, which you should study in detail. Here are two slightly different examples that model the M/M/1 queue.
### M/M/1 Queue Example

Customers arrive as a Poisson process $A(t)$ with rate $\lambda$, and service times are exponentially distributed with rate $\mu$ (i.e., expected service time $= 1/\mu$). Let $n(t)$ denote the number of customers in queue plus in service at time $t$.
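The sample path $n(t)$ is easy to simulate: in any state with $n \ge 1$, the time to the next event is exponential with rate $\lambda + \mu$, and that event is an arrival with probability $\lambda/(\lambda+\mu)$. The sketch below is my own illustration (function name and parameter values are arbitrary, with $\rho = \lambda/\mu < 1$); it estimates the time-average of $n(t)$, which for a stable M/M/1 queue is $\rho/(1-\rho)$.

```python
import random

def mm1_time_average(lam=0.5, mu=1.0, t_end=500_000.0, seed=3):
    """Simulate n(t) for an M/M/1 queue (Poisson arrivals at rate lam,
    exponential services at rate mu); return the time-average of n(t)."""
    rng = random.Random(seed)
    t, n, area = 0.0, 0, 0.0
    while t < t_end:
        rate = lam + (mu if n > 0 else 0.0)   # total event rate in state n
        dt = rng.expovariate(rate)            # time to the next event
        area += n * dt                        # n(t) is constant on [t, t+dt)
        t += dt
        if rng.random() < lam / rate:         # arrival with prob. lam/rate
            n += 1
        else:                                 # otherwise a departure (n > 0)
            n -= 1
    return area / t

avg = mm1_time_average()
# with rho = lam/mu = 0.5, the time-average tends to rho/(1-rho) = 1
```

Note that in state $n = 0$ the only possible event is an arrival, which the code handles automatically since $\lambda/\text{rate} = 1$ there.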
