Handbook 6.3: Absorbing Markov Chains


Consider the following board game, played on a row of five squares numbered 0 through 4. Begin on square 1. On each turn, flip two coins, then move left or right one square according to the following rules:

  - If you are on square 1 and flip two heads, move left. Otherwise, move right.
  - If you are on square 2 and flip one head and one tail, move left. Otherwise, move right.
  - If you are on square 3 and flip two heads, move right. Otherwise, move left.
  - If you land on square 0, the game is over: you have lost the game.
  - If you land on square 4, the game is over: you have won the game.

Play continues until you have either won or lost the game.

This game is clearly a Markov chain: where we move next is determined only by where we currently are. We can take the states to be the different positions on the board. This game also has two special properties:

1. The Markov chain has two states which cannot be left: states 0 and 4. If you reach state 0, you have lost, and will not move to any other square. If you reach state 4, you have won, and will not move to any other square. We call these states absorbing states.

2. Consider the states which can be left, which we call the non-absorbing states. No matter which of these states we are currently on, we can always make our way to an absorbing state.

If a Markov chain has both of these properties, we call it an absorbing Markov chain.

Let's find the transition matrix T for this Markov chain, taking the states in the order 0, 4, 1, 2, 3. To show that 0 is an absorbing state, we set the probability of going from state 0 to state 0 equal to 1. This means that if we are currently in state 0, we are guaranteed to remain in state 0 at the next stage. Similarly, to show that 4 is an absorbing state, we set the probability of going from state 4 to state 4 equal to 1: if we are currently in state 4, we are guaranteed to remain in state 4 at the next stage.
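The preview ends before the matrix itself appears, so the following is a reconstruction from the rules above rather than the handbook's own display. Two heads come up with probability 1/4, and one head and one tail with probability 1/2. Assuming the column convention in which the entry in row i, column j is the probability of moving from the j-th listed state to the i-th listed state (take the transpose if the handbook instead puts the "from" state on the rows), the matrix works out to:

    T = \begin{pmatrix}
          1 & 0 & 1/4 & 0   & 0   \\
          0 & 1 & 0   & 0   & 1/4 \\
          0 & 0 & 0   & 1/2 & 0   \\
          0 & 0 & 3/4 & 0   & 3/4 \\
          0 & 0 & 0   & 1/2 & 0
        \end{pmatrix}

with rows and columns both labelled in the order 0, 4, 1, 2, 3. Each column sums to 1, and the upper-left 2x2 identity block records exactly the two absorbing states: from state 0 or state 4, the chain stays put with probability 1.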