Agenda for This Week

- Monday, April 11: Case 2 due; Markov Processes
- Wednesday, April 13: Markov Processes (HWs); Final Project Topic due
- Friday, April 15: Case 3 Review; Dynamic Programming
- Monday, April 18: Dynamic Programming
Chapter 17 Markov Processes – Part 3
Review

- A Markov process describes a system that occupies exactly one state at a time.
- Transitions between states are probabilistic.
- The next state depends ONLY on the current state of the system (the Markov property).
- Steady-state probabilities: the long-run probability of being in a particular state, regardless of which state you begin in.
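The steady-state probabilities described above can be computed by solving the system pi P = pi together with the requirement that the probabilities sum to 1. A minimal sketch in Python, using a hypothetical two-state transition matrix (the matrix below is an illustration, not one from the slides):

```python
import numpy as np

# Hypothetical transition matrix for a 2-state Markov process;
# each row gives the probabilities of moving to each state next,
# so every row must sum to 1.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

n = P.shape[0]

# Solve pi @ P = pi subject to sum(pi) = 1.
# Rearranged: (P.T - I) pi = 0, plus one normalization equation.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)  # long-run probability of being in each state
```

For this matrix the long-run probabilities work out to 0.75 and 0.25; note they do not depend on the starting state, which is exactly the "no matter which state you begin in" property above.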
Absorbing States

Markov chains can also be used to analyze systems in which some states are "absorbing": once the system reaches such a state, it never leaves it. Formally, a state is absorbing if the probability that the process remains in that state, once it enters it, is 1.
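For a chain with absorbing states, the standard analysis partitions the transition matrix into an identity block for the absorbing states, a matrix R of transient-to-absorbing probabilities, and a matrix Q of transient-to-transient probabilities; the fundamental matrix N = (I - Q)^-1 then gives expected visits to each transient state, and N R gives the absorption probabilities. A sketch with a hypothetical four-state chain (two absorbing, two transient; the numbers are illustrative, not from the slides):

```python
import numpy as np

# Hypothetical chain: states 0 and 1 are absorbing, states 2 and 3
# are transient. Full matrix partitions as [[I, 0], [R, Q]].
R = np.array([[0.4, 0.1],    # transient state 2 -> absorbing states 0, 1
              [0.4, 0.2]])   # transient state 3 -> absorbing states 0, 1
Q = np.array([[0.3, 0.2],    # transient -> transient probabilities
              [0.3, 0.1]])

# Fundamental matrix: expected number of visits to each transient
# state before absorption, for each transient starting state.
N = np.linalg.inv(np.eye(2) - Q)

# B[i, j]: probability of eventually being absorbed in absorbing
# state j, starting from transient state i.
B = N @ R
print(B)
```

Since the process is eventually absorbed with probability 1, each row of B sums to 1, which is a quick sanity check on the arithmetic.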