Lecture 13: Markov Chains

11/15/2005
Most of our study of probability has dealt with independent trials processes. A Markov chain is a process in which the outcome of a given experiment can affect the outcome of the next experiment.
Markov Chains

We have a set of states, $S = \{s_1, s_2, \ldots, s_r\}$. The process starts in one of these states and moves successively from one state to another. Each move is called a step. If the chain is currently in state $s_i$, then it moves to state $s_j$ at the next step with a probability denoted by $p_{ij}$, and this probability does not depend on which states the chain was in before the current state.
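To make the definition concrete, here is a minimal Python sketch of a hypothetical two-state chain (the states and the numerical probabilities are invented for illustration, not taken from the lecture). The matrix entry P[i][j] plays the role of $p_{ij}$, and one step of the chain samples the next state from the row of the current state.

```python
import random

# Hypothetical two-state chain: state 0 = "sunny", state 1 = "rainy".
# P[i][j] is the transition probability p_ij: the chance of moving
# from state i to state j at the next step. Each row sums to 1.
P = [
    [0.9, 0.1],  # from sunny: stay sunny 0.9, turn rainy 0.1
    [0.5, 0.5],  # from rainy: turn sunny 0.5, stay rainy 0.5
]

def step(i):
    """Take one step from state i, sampling the next state according
    to row i of P. The choice depends only on the current state i,
    not on any earlier history (the Markov property)."""
    return random.choices(range(len(P)), weights=P[i])[0]

# Simulate a few steps starting from state 0.
state = 0
for _ in range(5):
    state = step(state)
    print(state)
```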
The probabilities $p_{ij}$ are called transition probabilities. The process can remain in the state it is in, and this occurs with probability $p_{ii}$. An initial probability distribution, defined on $S$, specifies the starting state.
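As a sketch (again with invented numbers), an initial distribution can be represented as a probability vector $u$ over $S$, and the distribution after one step is the vector-matrix product $uP$, whose $j$-th entry is $\sum_i u_i \, p_{ij}$:

```python
# Transition matrix from the sketch above (rows sum to 1).
P = [[0.9, 0.1],
     [0.5, 0.5]]

# Hypothetical initial distribution: start in state 0 with
# probability 0.3 and in state 1 with probability 0.7.
u = [0.3, 0.7]

# Distribution after one step: (uP)_j = sum_i u_i * p_ij.
u_next = [sum(u[i] * P[i][j] for i in range(2)) for j in range(2)]
print(u_next)  # approximately [0.62, 0.38], up to floating-point rounding
```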