Math 1b Practical — February 25, 2011 — revised February 28, 2011
Probability matrices
A probability vector is a nonnegative vector whose coordinates sum to 1. A square matrix P is called a probability matrix (or a left-stochastic matrix or a column-stochastic matrix) when all of its columns are probability vectors. [Caution: Some references use probability matrix to mean a row-stochastic matrix.] Probability matrices arise as “transition matrices” in Markov chains.
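The column condition is easy to test in code. Here is a minimal sketch in Python; the helper names is_probability_vector and is_probability_matrix are hypothetical, not from the notes:

```python
def is_probability_vector(v, tol=1e-12):
    # nonnegative coordinates that sum to 1 (up to floating-point tolerance)
    return all(x >= 0 for x in v) and abs(sum(v) - 1.0) <= tol

def is_probability_matrix(P, tol=1e-12):
    # P is given as a list of rows; every COLUMN must be a probability vector
    columns = list(zip(*P))
    return all(is_probability_vector(c, tol) for c in columns)

A = [[0.9, 0.5],
     [0.1, 0.5]]
print(is_probability_matrix(A))   # columns (0.9, 0.1) and (0.5, 0.5) each sum to 1
```

Note the transpose via zip(*P): for a row-stochastic matrix one would instead test the rows directly.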
Let A = (a_ij) be a probability matrix and u = (u_1, ..., u_n) a probability vector. If we think of u_j as the proportion of some commodity or population in ‘state’ j at a given moment (or as the probability that a member of the population is in state j), and of a_ij as the proportion of (or probability that) the commodity or population in state j that will change to state i after a unit of time, we would find the proportion of the commodity or population in state i after one unit of time to be a_i1 u_1 + ... + a_in u_n. That is, the vector that describes the new proportions of the commodity or population in various states after one unit of time is Au. After k units of time, the vector that describes the proportions of the commodity or population in the various states is A^k u.
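The computation a_i1 u_1 + ... + a_in u_n, repeated k times, can be sketched directly in Python (the function names here are illustrative, and the 2×2 matrix is the weather example discussed below):

```python
def mat_vec(A, u):
    # (A u)_i = a_i1 u_1 + ... + a_in u_n : proportion in state i after one step
    return [sum(a_ij * u_j for a_ij, u_j in zip(row, u)) for row in A]

def after_k_steps(A, u, k):
    # apply A to u a total of k times, i.e. compute A^k u
    for _ in range(k):
        u = mat_vec(A, u)
    return u

A = [[0.9, 0.5],
     [0.1, 0.5]]
u = [1.0, 0.0]                 # everything starts in state 1
print(mat_vec(A, u))           # [0.9, 0.1] after one unit of time
print(after_k_steps(A, u, 2))  # approximately [0.86, 0.14] after two
```

Each step of after_k_steps is one more application of A, so the result after k steps is A^k u without ever forming the matrix power explicitly.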
If a_ij ≠ 0, the edge directed from i to j in the digraph of a probability matrix A may be labeled with the number a_ij; this may help to ‘visualize’ the matrix and its meaning.
Here is an example (taken from Wikipedia) similar to Story 1 about smoking. Assume that weather observations at some location show that a sunny day is 90% likely to be followed by another sunny day, and a rainy day is 50% likely to be followed by another rainy day. We are asked to predict the proportion of sunny days in a year. Let x_n = (a_n, b_n)^T where a_n is the probability that it is sunny on day n, and b_n = 1 − a_n. Then

    ( a_{n+1} )       ( a_n )
    (         )  =  A (     )        where   A = ( .9  .5 )
    ( b_{n+1} )       ( b_n )                    ( .1  .5 ).

The digraph of A is [figure not reproduced in this copy].
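The long-run behavior of this example can be found by iterating x_{n+1} = A x_n: solving p = Ap with p_1 + p_2 = 1 gives the stationary vector p = (5/6, 1/6), so about 83% of days fall in the first state (sunny, the state with the 0.9 self-loop). A short sketch, starting from an arbitrary day-0 distribution:

```python
A = [[0.9, 0.5],
     [0.1, 0.5]]

def step(x):
    # one day of weather: x_{n+1} = A x_n
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

x = [0.0, 1.0]        # assume day 0 is rainy (any starting distribution works)
for _ in range(50):
    x = step(x)
print(x)              # approaches the stationary vector (5/6, 1/6)
```

The other eigenvalue of A is 0.4, so the error shrinks by a factor of 0.4 per step and 50 iterations are far more than enough for machine precision.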
Theorem 1. Any probability matrix A has 1 as an eigenvalue.
Proof: Let 1 be the row vector of all ones. We have 1A = 1. (We could call 1 a left eigenvector and 1 a left eigenvalue.) This means 1(A − I) = 0, so the rows of A − I are linearly dependent, so the columns of A − I are linearly dependent, so (A − I)u = 0 for some nonzero u, or Au = u. □
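Both halves of the proof can be checked numerically on the 2×2 weather matrix. The closed form u proportional to (a_12, a_21) used below comes from solving (A − I)u = 0 by hand and is specific to the 2×2 case, not part of the proof:

```python
A = [[0.9, 0.5],
     [0.1, 0.5]]
n = 2

# left-multiplying by the all-ones row vector: (1 A)_j is the j-th column sum
left = [sum(A[i][j] for i in range(n)) for j in range(n)]
print(left)          # [1.0, 1.0], i.e. 1 A = 1

# a right eigenvector for eigenvalue 1: for a 2x2 probability matrix,
# (A - I)u = 0 is solved by u proportional to (a_12, a_21)
u = [A[0][1], A[1][0]]
Au = [sum(A[i][j] * u[j] for j in range(n)) for i in range(n)]
print(Au, u)         # Au agrees with u (up to floating-point rounding)
```

Normalizing u by its coordinate sum recovers the stationary probability vector (5/6, 1/6) from the weather example.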
(In general, the left eigenvalues of a matrix are the same as its right eigenvalues, and they occur with the same geometric and algebraic multiplicities on both sides.)