The transition matrix P tells us where we go next from state i: row i gives the probability of moving to each state j.
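As a small illustration of this idea, here is a minimal sketch of a transition matrix in Python. The 3-state matrix and its entries are made up for the example; the only property that matters is the one the slides state next, that every row sums to 1.

```python
import numpy as np

# Hypothetical 3-state transition matrix (values chosen for illustration):
# P[i][j] = probability of stepping from state i to state j.
P = np.array([
    [0.1, 0.6, 0.3],
    [0.5, 0.0, 0.5],
    [0.2, 0.2, 0.6],
])

# From state i the walk must go *somewhere*, so each row sums to 1.
assert np.allclose(P.sum(axis=1), 1.0)
```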

Introduction to Information Retrieval, Sec. 21.2.1

Ergodic Markov chains

- Clearly, for all i, Σ_{j=1..n} P_ij = 1.
- Markov chains are abstractions of random walks.
- Exercise: represent the teleporting random walk from 3 slides ago as a Markov chain, for this case.

- For any (ergodic) Markov chain, there is a unique long-term visit rate for each state: the steady-state probability distribution.
- Over a long time period, we visit each state in proportion to this rate.
- It doesn't matter where we start.

Probability vectors

- A probability (row) vector x = (x1, ..., xn) tells us where the walk is at any point.

Change in probability vector

- If the probability vector is x = (x1, ..., xn) at this step, what is it at the next step?
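The steady-state claim above can be checked numerically. The sketch below, using the same hypothetical 3-state matrix as before, repeatedly applies the update x ← xP (each step multiplies the current probability row vector by P) and shows that the resulting distribution stops changing, i.e. it satisfies π = πP regardless of the starting state.

```python
import numpy as np

# Same hypothetical 3-state transition matrix as above (illustrative values).
P = np.array([
    [0.1, 0.6, 0.3],
    [0.5, 0.0, 0.5],
    [0.2, 0.2, 0.6],
])

# Start the walk entirely in state 0.
x = np.array([1.0, 0.0, 0.0])

# Power iteration: the distribution at the next step is x P.
for _ in range(200):
    x = x @ P

# x now approximates the steady-state distribution: it is (numerically)
# unchanged by one more step, and it is still a probability vector.
assert np.allclose(x, x @ P)
assert np.allclose(x.sum(), 1.0)
```

Starting from a different initial vector, e.g. (0, 0, 1), converges to the same x, which is the "it doesn't matter where we start" property of ergodic chains.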