15 - Probability Matrices



Math 1b Practical — February 25, 2011 — revised February 28, 2011

Probability matrices

A probability vector is a nonnegative vector whose coordinates sum to 1. A square matrix P is called a probability matrix (or a left-stochastic matrix, or a column-stochastic matrix) when all of its columns are probability vectors. [Caution: some references use "probability matrix" to mean a row-stochastic matrix.] Probability matrices arise as "transition matrices" in Markov chains.

Let A = (a_ij) be a probability matrix and u = (u_1, ..., u_n) a probability vector. Think of u_j as the proportion of some commodity or population in "state" j at a given moment (or as the probability that a member of the population is in state j), and of a_ij as the proportion of (or the probability that) the commodity or population in state j that will change to state i after a unit of time. Then the proportion of the commodity or population in state i after one unit of time is

    a_i1 u_1 + ... + a_in u_n.

That is, the vector describing the new proportions in the various states after one unit of time is Au. After k units of time, the vector describing the proportions is A^k u.

If a_ij ≠ 0, the corresponding directed edge in the digraph of a probability matrix A may be labeled with the number a_ij; this may help to "visualize" the matrix and its meaning.

Here is an example (taken from Wikipedia), similar to Story 1 about smoking. Assume that weather observations at some location show that a sunny day is 90% likely to be followed by another sunny day, and a rainy day is 50% likely to be followed by another rainy day. We are asked to predict the proportion of sunny days in a year. Let x_n = (a_n, b_n), where a_n is the probability that it is sunny on day n and b_n = 1 - a_n. Then

    [ a_{n+1} ]       [ a_n ]                [ .9   .5 ]
    [ b_{n+1} ]  =  A [ b_n ] ,  where  A =  [ .1   .5 ] .

[Figure: the digraph of A, not reproduced in this preview.]

Theorem 1. Any probability matrix P has 1 as an eigenvalue.

Proof: Let 1 denote the row vector of all ones. Since each column of P sums to 1, we have 1P = 1. (We could call 1 a left eigenvector and 1 a left eigenvalue.) This means 1(P - I) = 0, so the rows of P - I are linearly dependent; hence the columns of P - I are linearly dependent, so (P - I)u = 0 for some nonzero vector u, i.e. Pu = u.
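The weather example can be checked numerically. The following sketch (using NumPy, which is an addition of these notes' editor, not part of the original handout) iterates u ↦ Au and watches the distribution settle toward the steady-state vector (5/6, 1/6), which answers the question about the long-run proportion of sunny days:

```python
import numpy as np

# Column-stochastic transition matrix from the weather example:
# state 0 = sunny, state 1 = rainy.  Column j holds the transition
# probabilities out of state j.
A = np.array([[0.9, 0.5],
              [0.1, 0.5]])

# Start from an arbitrary probability vector, say "day 0 is rainy".
u = np.array([0.0, 1.0])

# After k units of time the distribution is A^k u.
for _ in range(30):
    u = A @ u

# The iterates approach the steady state (5/6, 1/6):
# about 83.3% of days are sunny in the long run.
print(u)
```

The convergence is fast because the other eigenvalue of A is 0.4, so the distance to the steady state shrinks by a factor of 0.4 at every step.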
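Theorem 1 and its proof can also be tested on a random example. This sketch (again NumPy, an assumption of the editor) builds a random column-stochastic matrix P, verifies that the all-ones row vector is a left eigenvector, and confirms that 1 appears among the eigenvalues:

```python
import numpy as np

# Build a random 4x4 column-stochastic matrix: nonnegative entries,
# with each column normalized to sum to 1.
rng = np.random.default_rng(0)
P = rng.random((4, 4))
P /= P.sum(axis=0)

# The row vector of all ones satisfies 1·P = 1, exactly as in the proof.
ones = np.ones(4)
print(np.allclose(ones @ P, ones))

# Hence 1 is an (ordinary, right) eigenvalue of P as well.
eigvals = np.linalg.eigvals(P)
print(np.isclose(eigvals, 1).any())
```

Both checks print True; the second works because a matrix and its transpose have the same eigenvalues, which is exactly the row/column dependence step in the proof.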
