Markov Chains, part I
December 8, 2010

1 Introduction

A Markov chain is a sequence of random variables $X_0, X_1, \ldots$, where each $X_i \in S$, such that
$$P(X_{i+1} = s_{i+1} \mid X_i = s_i, X_{i-1} = s_{i-1}, \ldots, X_0 = s_0) \,=\, P(X_{i+1} = s_{i+1} \mid X_i = s_i);$$
that is, the value of the next random variable is dependent at most on the value of the previous random variable. The set $S$ here is what we call the state space, and it can be either continuous or discrete (or a mix); however, in our discussions we will take $S$ to be discrete, and in fact we will always take $S = \{1, 2, \ldots, N\}$.

Since $X_{t+1}$ only depends on $X_t$, it makes sense to define transition probabilities
$$P_{i,j} \,:=\, P(X_{t+1} = j \mid X_t = i),$$
which completely determine the dynamics of the Markov chain... well, almost: we need either to be given $X_0$, or to choose its value according to some distribution on the state space. In the theory of Hidden Markov Models, one has a set of probabilities $\pi_1, \ldots, \pi_N$, with $\pi_1 + \cdots + \pi_N = 1$, such that $P(X_0 = i) = \pi_i$; however, in some other applications, such as in the Gambler's Ruin Problem discussed in another note, we start with the value for $X_0$.

Ok, so how could we generate a sequence $X_0, X_1, \ldots$, given $X_0$ and given the $P_{i,j}$'s? Well, suppose $X_0 = i$. Then we choose $X_1$ at random from $\{1, 2, \ldots, N\}$, where $P(X_1 = j) = P_{i,j}$. Next, supposing $X_1 = j$, we select $X_2$ at random according to the distribution $P(X_2 = k) = P_{j,k}$. We then continue the process.

1.1 Graphical representation

Sometimes a more convenient way to represent a Markov chain is to use a transition diagram, which is a directed graph on $N$ vertices that represent the states. Each edge corresponds to a transition probability $P_{i,j}$; however, not all $N^2$ edges are necessarily in the graph: when an edge is missing, it means that the corresponding $P_{i,j}$ has value 0.
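The sampling procedure just described can be sketched in plain Python. This is only an illustration (the function name and the row-wise storage of the $P_{i,j}$'s are my own choices, not part of the notes), using the transition probabilities from the example in the next subsection:

```python
import random

# Transition probabilities stored row-wise: P_rows[i-1][j-1] = P_{i,j},
# i.e. the probability of moving from state i to state j (states are 1..N).
P_rows = [
    [1/3, 2/3, 0.0],   # out of state 1
    [1/2, 0.0, 1/2],   # out of state 2
    [1.0, 0.0, 0.0],   # out of state 3
]

def simulate_chain(P_rows, x0, steps, rng=random):
    """Generate X_0, X_1, ..., X_steps, starting from state x0 (1-based)."""
    states = list(range(1, len(P_rows) + 1))
    path = [x0]
    for _ in range(steps):
        i = path[-1]
        # Choose the next state according to the distribution P_{i,1}, ..., P_{i,N}.
        nxt = rng.choices(states, weights=P_rows[i - 1])[0]
        path.append(nxt)
    return path

path = simulate_chain(P_rows, x0=1, steps=10)
```

Note that missing edges of the transition diagram simply become zero weights here, so those states are never chosen as successors.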
Here is an example: suppose that $N = 3$, and suppose
$$P_{1,1} = 1/3, \quad P_{1,2} = 2/3, \quad P_{2,1} = 1/2, \quad P_{2,3} = 1/2, \quad P_{3,1} = 1.$$
[Transition diagram: three states 1, 2, 3; a self-loop at 1 labeled $1/3$; edges $1 \to 2$ labeled $2/3$, $2 \to 1$ labeled $1/2$, $2 \to 3$ labeled $1/2$, and $3 \to 1$ labeled $1$.]

1.2 Matrix representation, and population distributions

It is also convenient to collect the $P_{i,j}$'s into an $N \times N$ matrix; and I will do this here a little bit backwards from how you might see it presented in other books, for reasons that will become clear later on: form the matrix $P$ whose $(j,i)$ entry is $P_{i,j}$. So, the $i$th column of the matrix represents all the transition probabilities out of node $i$, while the $j$th row represents all transition probabilities into node $j$. For example, the matrix corresponding to the example in the previous subsection is
$$P \,=\, \begin{pmatrix} P_{1,1} & P_{2,1} & P_{3,1} \\ P_{1,2} & P_{2,2} & P_{3,2} \\ P_{1,3} & P_{2,3} & P_{3,3} \end{pmatrix} \,=\, \begin{pmatrix} 1/3 & 1/2 & 1 \\ 2/3 & 0 & 0 \\ 0 & 1/2 & 0 \end{pmatrix}.$$
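A short sketch of why this "backwards" (column-per-source-state) convention is handy, which I am adding as an illustration rather than taking from the notes: each column of $P$ sums to 1, and if $p$ is a column vector whose $j$th entry is $P(X_t = j)$, then the matrix-vector product $Pp$ gives the distribution of $X_{t+1}$ directly.

```python
import numpy as np

# The matrix from the example above, with (j, i) entry equal to P_{i,j}:
# column i holds the probabilities out of state i, row j the probabilities into state j.
P = np.array([
    [1/3, 1/2, 1.0],
    [2/3, 0.0, 0.0],
    [0.0, 1/2, 0.0],
])

# Every column sums to 1 (the matrix is column-stochastic).
assert np.allclose(P.sum(axis=0), 1.0)

p0 = np.array([1.0, 0.0, 0.0])  # start surely in state 1
p1 = P @ p0                      # distribution of X_1: (1/3, 2/3, 0)
p2 = P @ p1                      # distribution of X_2: (4/9, 2/9, 1/3)
```

With the more common row-stochastic convention one would instead multiply a row vector on the right; the two are transposes of each other.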