…implies that the two parts of our journey are independent.
Proof of (1.2). We do this by combining the solutions of Q1 and Q2. Breaking things down according to the state at time m,
\[
P(X_{m+n} = j \mid X_0 = i) = \sum_k P(X_{m+n} = j, X_m = k \mid X_0 = i).
\]
Using the definition of conditional probability as in the solution of Q1,
\[
\begin{aligned}
P(X_{m+n} = j, X_m = k \mid X_0 = i)
&= \frac{P(X_{m+n} = j, X_m = k, X_0 = i)}{P(X_0 = i)} \\
&= \frac{P(X_{m+n} = j, X_m = k, X_0 = i)}{P(X_m = k, X_0 = i)} \cdot \frac{P(X_m = k, X_0 = i)}{P(X_0 = i)} \\
&= P(X_{m+n} = j \mid X_m = k, X_0 = i) \cdot P(X_m = k \mid X_0 = i).
\end{aligned}
\]
By the Markov property (1.1) the last expression is
\[
= P(X_{m+n} = j \mid X_m = k) \cdot P(X_m = k \mid X_0 = i) = p^m(i, k)\, p^n(k, j),
\]
and substituting this back into the sum over k, we have proved (1.2).
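
Equation (1.2) is the Chapman–Kolmogorov relation p^{m+n}(i, j) = \sum_k p^m(i, k)\, p^n(k, j): the (m+n)-step transition probabilities are obtained by multiplying the m-step and n-step transition matrices. As a quick numerical sanity check, here is a minimal Python sketch; the 3-state transition matrix and the choices m = 2, n = 3 are illustrative assumptions, not values from the text.

# Numerical check of (1.2) on a hypothetical 3-state chain.
import numpy as np

p = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])  # each row sums to 1

m, n = 2, 3
pm = np.linalg.matrix_power(p, m)        # m-step probabilities p^m(i, k)
pn = np.linalg.matrix_power(p, n)        # n-step probabilities p^n(k, j)
pmn = np.linalg.matrix_power(p, m + n)   # (m+n)-step probabilities

for i in range(3):
    for j in range(3):
        rhs = sum(pm[i, k] * pn[k, j] for k in range(3))  # sum_k p^m(i,k) p^n(k,j)
        assert abs(pmn[i, j] - rhs) < 1e-12

The assertion passes for every pair (i, j), matching the identity just proved.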
Having established (1.2), we now return to computations.
Example 1.11. Gambler’s ruin. Suppose for simplicity that N = 4 in
Example 1.1, so that the transition probability is
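
The matrix display itself does not appear in this excerpt. As a sketch of the matrix being set up, the N = 4 gambler's ruin chain can be constructed in Python as follows; the win probability 0.4 is an assumption (a value often used with this example), since the value from Example 1.1 is not visible here.

# Sketch: transition matrix for gambler's ruin with N = 4 (Example 1.11).
# The win probability 0.4 is an assumption, not taken from this excerpt.
import numpy as np

N = 4
p_win = 0.4  # assumed probability of winning $1 on each play

P = np.zeros((N + 1, N + 1))
P[0, 0] = 1.0  # fortune 0: bankrupt, absorbing state
P[N, N] = 1.0  # fortune N: goal reached, absorbing state
for i in range(1, N):
    P[i, i + 1] = p_win        # win: fortune increases by 1
    P[i, i - 1] = 1.0 - p_win  # lose: fortune decreases by 1

print(np.round(P, 2))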