# 03.3 Discrete Time Markov Chains – Part A

15-359 Probability & Computing
Harchol-Balter & Lafferty
Lecture 15
March 21, 2006

Lecture 15: Discrete Time Markov Chains – Part A

Announcements:

- Next Quiz – Tuesday, March 28. The quiz will cover Lectures 15 & 16.
- Midterm 2 date moved back a week to Wednesday, April 12.

Contents

1. Definition of DTMC
2. Examples of finite state DTMCs
   - 2.1 Repair facility problem
   - 2.2 Umbrella problem
   - 2.3 Distribution of forest trees problem
3. Powers of P: n-step transition probabilities
4. Limiting probabilities
5. Aperiodicity and irreducibility
6. Stationary ("balance") equations
7. Equivalence between stationary solution and limiting probabilities
8. Examples of solving stationary equations
   - 8.1 Repair facility problem with cost
   - 8.2 Umbrella problem
   - 8.3 Forest tree problem
   - 8.4 No limiting distribution problem

## 1 Definition of DTMC

So far you have seen several examples of random variables. A *stochastic process* is simply a sequence of random variables.

**Definition 1** A **DTMC** (discrete-time Markov chain) is a stochastic process {X_t, t = 0, 1, 2, ...}, where X_t denotes the state at time t, such that

Pr{X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0} = Pr{X_{n+1} = j | X_n = i} = P_{ij},  ∀ n ≥ 0, ∀ i_0, ..., i_{n-1}, i, j,

where P_{ij} is independent of time and of past history.

**Definition 2** **Markovian Property** – The conditional distribution of any future state X_{n+1}, given the past states X_0, X_1, ..., X_{n-1} and the present state X_n, is independent of the past states and depends only on the present state X_n.

**Definition 3** **Transition Probability Matrix** P, associated with a DTMC: this is a matrix P whose (i, j)th entry, P_{ij}, represents the probability of moving to state j on the next transition, given that the current state is i.

Observe that the transition probability matrix P might have infinite dimensions, if there are infinitely many states. Also observe that by definition, Σ_j P_{ij} = 1, ∀ i.

Throughout this lecture we will focus on DTMCs with a *finite* number of states, M. In Lecture 16 we will move on to DTMCs with an infinite number of states.

## 2 Examples of finite state DTMCs

Markov chains are extremely powerful, and are used to model problems in computer science, statistics, physics, biology, ..., you name it!

Markov chain theory is prevalent within computer science. You will see it in AI/machine learning, computer science theory, and in all areas of system modeling (analysis of networking protocols, memory management protocols, server performance, disk protocols, etc.). We will now consider a few examples of some simple Markov chains. As we continue through the next few lectures, and the homeworks, you will see more and more examples.
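The definitions above can be sketched in code. The following is a minimal illustration, not one of the lecture's worked examples: a hypothetical 2-state chain (state 0 = "machine working", state 1 = "machine broken") with made-up transition probabilities. It checks the row-sum property Σ_j P_ij = 1 from Definition 3, and the simulation step uses only the current state, exactly as the Markovian property requires.

```python
import random

# Hypothetical 2-state transition probability matrix (illustrative values,
# not from the lecture). Entry P[i][j] = Pr{next state = j | current state = i}.
P = [
    [0.95, 0.05],  # from state 0: stay working w.p. 0.95, break w.p. 0.05
    [0.40, 0.60],  # from state 1: repaired w.p. 0.40, stay broken w.p. 0.60
]

# By Definition 3, every row of P must sum to 1.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)

def step(state, rng=random):
    """One transition: sample the next state from row P[state].

    Note the Markovian property: the argument is only the *current* state;
    no past history is needed or used.
    """
    u, cum = rng.random(), 0.0
    for j, p in enumerate(P[state]):
        cum += p
        if u < cum:
            return j
    return len(P[state]) - 1  # guard against floating-point round-off

# Simulate the chain X_0, X_1, ..., X_10 starting from state 0.
state = 0
for t in range(10):
    state = step(state)
```

The same `step` function works unchanged for any finite state space: only the matrix `P` grows. For an infinite-state DTMC (Lecture 16), `P` could no longer be stored as a dense table.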