Discrete-time stochastic processes

3.4 Renewal-reward processes; time-averages

Contents (excerpt)

3.4 Renewal-reward processes; time-averages
3.5 Renewal-reward processes; ensemble-averages
3.6 Applications of renewal-reward theory
    3.6.1 Little's theorem
    3.6.2 Expected queueing time for an M/G/1 queue
3.7 Delayed renewal processes
    3.7.1 Delayed renewal-reward processes
    3.7.2 Transient behavior of delayed renewal processes
    3.7.3 The equilibrium process
3.8 Summary
3.9 Exercises

4 FINITE-STATE MARKOV CHAINS
4.1 Introduction
4.2 Classification of states
4.3 The matrix representation
    4.3.1 The eigenvalues and eigenvectors of P
4.4 Perron-Frobenius theory
4.5 Markov chains with rewards
4.6 Markov decision theory and dynamic programming
    4.6.1 Introduction
    4.6.2 Dynamic programming algorithm
    4.6.3 Optimal stationary policies
    4.6.4 Policy iteration and the solution of Bellman's equation
    4.6.5 Stationary policies with arbitrary final rewards
4.7 Summary
4.8 Exercises

5 COUNTABLE-STATE MARKOV CHAINS
5.1 Introduction and classification of states
5.2 Branching processes
5.3 Birth-death Markov chains
5.4 Reversible Markov chains
5.5 The M/M/1 sample-time Markov chain
5.6 Round-robin and processor sharing
5.7 Semi-Markov processes
5.8 Example — the M/G/1 queue
5.9 Summary
5.10 Exercises

6 MARKOV PROCESSES WITH COUNTABLE STATE SPACES
6.1 Introduction
    6.1.1 The sampled-time approximation to a Markov process
6.2 Steady-state behavior of irreducible Markov processes
    6.2.1 The number of transitions per unit time
    6.2.2 Renewals on successive entries to a state
    6.2.3 The strong law for time-average state probabilities
    6.2.4 The equations for the steady-state process probabilities
    6.2.5 The sampled-time approximation again
    6.2.6 Pathological cases
6.3 The Kolmogorov differential equations
6.4 Uniformization
6.5 Birth-death processes
6.6 Reversibility for Markov processes
6.7 Jackson networks
    6.7.1 Closed Jackson networks
6.8 Summary
6.9 Exercises

7 RANDOM WALKS, LARGE DEVIATIONS, AND MARTINGALES
7.1 Introduction
    7.1.1 Simple random walks
    7.1.2 Integer-valued random walks
    7.1.3 Renewal processes as special cases of random walks
7.2 The waiting time in a G/G/1 ...

