Lecture Slides 17 - AMS 210 Applied Linear Algebra

AMS 210: Applied Linear Algebra, November 12, 2009
Topics Today
- Problem Set 9
- Markov Chain Stable Distributions
- Markov Chain Stable Distribution Examples
- Absorbing States in Markov Chains
Problem Set 9
This week's problem set is due next Thursday, November 19.
Read sections 4.2, 4.3 (first two examples), 4.4, and 4.5 (first two examples).
Exercises: 4.2: 2, 5, 6; 4.3: 2, 3; 4.4: 1, 6, 16, 19. Show work.
If you submit multiple sheets, stapling is strongly preferred; graders may deduct points for insecurely bound problem sets.
Solving for Markov Chain Stable Distributions
This first appeared in section 3.5. If A is the transition matrix and p is the vector of probabilities of being in each state, then Ap is the distribution after one step. (This is old.) A stable distribution is a p with p = Ip = Ap, that is, (I - A)p = 0. Equivalently, p is an eigenvector of A with eigenvalue 1 (this is just the definition of an eigenvalue/eigenvector). Since the probabilities must sum to 1, we also require 1 · p = 1. That gives n + 1 equations in n variables, but the system is still solvable: since any multiple of an eigenvector is again an eigenvector, the n homogeneous equations are dependent, and one of them can be dropped in favor of the normalization.
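A minimal sketch of this solve in numpy (the helper name and the example matrix are illustrations, not from the slides): drop one redundant row of (I - A) and replace it with the normalization 1 · p = 1, leaving an ordinary n x n system.

```python
import numpy as np

def stable_distribution(A):
    """Solve (I - A) p = 0 together with sum(p) = 1.

    Because the columns of A each sum to 1, the n homogeneous
    equations are linearly dependent; replacing one of them with
    the normalization row of all ones gives a solvable system.
    """
    n = A.shape[0]
    M = np.eye(n) - A
    M[-1, :] = 1.0          # swap the redundant equation for sum(p) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(M, b)

# Example transition matrix (columns sum to 1)
A = np.array([[0.7, 0.2],
              [0.3, 0.8]])
p = stable_distribution(A)
print(p)
```

For this A the result is the vector p with p = Ap, i.e. [0.4, 0.6].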
Markov Chain Stable Distribution Examples: General 2x2
Let
A = [ 1-b   a  ]
    [  b   1-a ].
Solving for the stable state gives -b p1 + a p2 = 0, b p1 - a p2 = 0, and p1 + p2 = 1, so
p1 = a/(a+b) and p2 = b/(a+b).
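A quick numeric check of the closed form, with sample rates a = 0.2, b = 0.3 chosen purely for illustration: one step of the chain applied to (p1, p2) should return the same vector.

```python
# Verify the 2x2 stable state p1 = a/(a+b), p2 = b/(a+b)
a, b = 0.2, 0.3
p1 = a / (a + b)            # 0.4
p2 = b / (a + b)            # 0.6

# One step of the chain, p' = A p with A = [[1-b, a], [b, 1-a]]
p1_next = (1 - b) * p1 + a * p2
p2_next = b * p1 + (1 - a) * p2

assert abs(p1_next - p1) < 1e-12
assert abs(p2_next - p2) < 1e-12
```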
Markov Chain Stable Distributions
Aside: the matrix in section 3.4, example 1 is wrong; its first column sums to .5 rather than 1.
Goal: determine when the distributions of a Markov chain converge to a stable distribution.
Definition: A Markov chain with transition matrix A is regular if, for some positive integer h, the matrix A^h has all positive entries.
Theorem: Every regular Markov chain with transition matrix A has a stable probability vector p* to which p(k) = A^k p converges, for any probability vector p. All the columns of A^k also converge to p*.
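The theorem can be seen numerically; this sketch (matrix and starting vector are arbitrary examples, not from the slides) iterates p ← Ap for a regular chain and watches it approach the stable vector regardless of the starting distribution.

```python
import numpy as np

# A regular 2x2 chain: A itself already has all positive entries.
A = np.array([[0.7, 0.2],
              [0.3, 0.8]])
p = np.array([1.0, 0.0])    # start surely in state 1

# p(k) = A^k p: repeated multiplication drives p toward p*
for _ in range(100):
    p = A @ p

print(p)                    # approaches p* = [a/(a+b), b/(a+b)] = [0.4, 0.6]
```

Starting instead from [0.0, 1.0], or any other probability vector, gives the same limit, which is what regularity guarantees.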