Ch 1: Markov Chains - Finding the cheese in the maze

1  Markov Chains (Chapter 1)

2  Finding the cheese in the maze
[Figure: a maze containing a mouse, a piece of cheese, and a black hole]

3  Assumptions
- The mouse does not know where the cheese is.
- The mouse gets nothing and leaves the game if it enters the black hole.

4  Questions
A biologist may want to know the following:
- Can the mouse detect the cheese by smell?
- Can the mouse train itself if it is put back into the game after entering the black hole?
- Is the chance of getting the cheese the same for different initial positions?

5  Establishing a probabilistic model
To decide whether the mouse can find the cheese by smell, we use the following idea:
- Determine the theoretical probability of success under the alternative hypothesis, i.e. when the mouse simply wanders through the maze at random.
- Conduct experiments and compare the observed results with this probability.
- If the observed success rate is not significantly higher than the theoretical probability, the smell hypothesis is rejected.

6  Establishing a probabilistic model
Self-training hypothesis. Idea:
- Build the maze from special materials so that the mouse cannot find the cheese by smell.
- Determine the theoretical probability of success under the alternative hypothesis.
- If the observed success rate is not significantly higher than the theoretical probability, the self-training hypothesis is rejected.

7  Model construction: assigning room numbers
The rooms of the maze are numbered as a 4 x 4 grid:

    15 14 13 12
    11 10  9  8
     7  6  5  4
     3  2  1  0

8  Model construction: the stochastic variable
- Define the stochastic variable X_n to be the position of the mouse at step n.
- X_0 = 0 means the mouse is initially placed in room 0.
- Success probability: P(X_n = 13 for some n)   (room 13 holds the cheese)
- Failure probability: P(X_n = 7 for some n)   (room 7 is the black hole)
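
Slides 5-8 call for the theoretical probability of success when the mouse wanders at random. Below is a minimal Monte Carlo sketch of that computation. It fills in details the slides leave to the figure, so treat these as assumptions: the 16 rooms form a 4 x 4 grid, every pair of side-by-side rooms is connected by a door, and at each step the mouse moves to a uniformly chosen adjacent room until it reaches room 13 (cheese) or room 7 (black hole). The names neighbours and run_once are illustrative, not from the course notes.

# Monte Carlo sketch of the maze model in slides 7-8 (assumptions as stated above).
import random

CHEESE, BLACK_HOLE = 13, 7

def neighbours(room):
    # Room r sits at grid position (row, col) = divmod(r, 4); its neighbours
    # are the rooms one step up, down, left or right inside the 4 x 4 grid.
    row, col = divmod(room, 4)
    adj = []
    if row > 0: adj.append(room - 4)
    if row < 3: adj.append(room + 4)
    if col > 0: adj.append(room - 1)
    if col < 3: adj.append(room + 1)
    return adj

def run_once(start=0):
    """Simulate one mouse until it hits the cheese or the black hole."""
    x = start
    while x not in (CHEESE, BLACK_HOLE):
        x = random.choice(neighbours(x))   # Markov step: depends only on the current room
    return x == CHEESE

trials = 100_000
wins = sum(run_once() for _ in range(trials))
print(f"estimated P(X_n = 13 for some n | X_0 = 0) ~ {wins / trials:.3f}")

Changing the start argument of run_once gives a quick look at the last question of slide 4, namely whether the chance of reaching the cheese depends on the initial position.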

9  Properties of the stochastic model
- X_{n+1} should depend on X_n.
- We need to assign values to the conditional probabilities
      P(X_{n+1} = j | X_n = i) = p_ij^{n,n+1}.
- If the mouse cannot train itself, it is unable to remember previous information.
- Translated into probabilistic language: X_{n+1} is independent of X_k for k < n.

10  Markov property
- Since X_{n+1} is independent of X_k for k < n,
      P(X_{n+1} = j | X_0 = i_0, ..., X_{n-1} = i_{n-1}, X_n = i) = P(X_{n+1} = j | X_n = i) = p_ij^{n,n+1}.
- Definition: a stochastic process {X_n} that satisfies the above property is called a Markov chain (or Markov process); the property itself is called the Markov property.

11  A smaller maze
Consider a smaller maze with six rooms in a 2 x 3 grid, the cheese in room 5:

      5  4  3
      2  1  0

From every other room the mouse moves to one of the adjacent rooms with equal probability; room 5 (the cheese) is absorbing. The transition probabilities are

            0     1     2     3     4     5
      0  [  0    1/2    0    1/2    0     0  ]
      1  [ 1/3    0    1/3    0    1/3    0  ]
      2  [  0    1/2    0     0     0    1/2 ]
      3  [ 1/2    0     0     0    1/2    0  ]
      4  [  0    1/3    0    1/3    0    1/3 ]
      5  [  0     0     0     0     0     1  ]

12  Markov matrix
The matrix P = ||p_ij||, whose (i, j) entry is p_ij, is called the Markov matrix or the transition probability matrix:

            0      1      2      3    ...
      0  [ p_00   p_01   p_02   p_03  ... ]
      1  [ p_10   p_11   p_12   p_13  ... ]
      2  [ p_20   p_21   p_22   p_23  ... ]
         [  :      :      :      :        ]
      i  [ p_i0   p_i1   p_i2   p_i3  ... ]
         [  :      :      :      :        ]
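
The matrix of slide 11 can be written down directly from the adjacency of the rooms. The sketch below does this in Python/NumPy under the same assumptions used to reconstruct the matrix above (rooms laid out as 5 4 3 over 2 1 0, doors between all side-by-side rooms, room 5 absorbing); the doors dictionary is an illustrative name, not the course's notation.

# Build the transition probability matrix of the smaller maze (assumed layout above).
import numpy as np

# adjacency of the 2 x 3 maze; room 5 "moves" only to itself (absorbing cheese room)
doors = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5], 3: [0, 4], 4: [1, 3, 5], 5: [5]}

P = np.zeros((6, 6))
for i, adj in doors.items():
    P[i, adj] = 1.0 / len(adj)    # p_ij = 1/deg(i) for each room j adjacent to i

print(P[1])           # from room 1 the mouse moves to rooms 0, 2, 4, each with probability 1/3
print(P.sum(axis=1))  # every row sums to 1, as required of a Markov matrix

Row i of the resulting array is exactly the conditional distribution P(X_{n+1} = . | X_n = i) described in slide 13.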

13  Markov matrix
- The (i+1)-st row of P is the probability distribution of X_{n+1} conditional on X_n = i.
- Each p_ij is non-negative, and the entries in each row sum to one:
      sum_j p_ij = 1 for every i.
- A Markov process is completely defined once its transition probability matrix and initial state X_0 are specified.

14  Characterizing a Markov chain
- Denote the initial distribution by P(X_0 = i) = p_i.
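
To illustrate slides 13-14: once P and the distribution of X_0 are given, the chain is completely specified, so we can both compute the distribution of X_n (as the row vector p P^n) and sample paths. The sketch below reuses the hypothetical 6-room matrix built in the previous sketch; sample_path and the random seed are illustrative choices, not part of the notes.

# Sampling and propagating a Markov chain given its transition matrix and initial distribution.
import numpy as np

# rebuild the hypothetical 6-room transition matrix from the previous sketch
doors = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5], 3: [0, 4], 4: [1, 3, 5], 5: [5]}
P = np.zeros((6, 6))
for i, adj in doors.items():
    P[i, adj] = 1.0 / len(adj)

rng = np.random.default_rng(0)

def sample_path(P, x0, n_steps):
    """Sample X_0, X_1, ..., X_n by drawing each step from row X_k of P."""
    path = [x0]
    for _ in range(n_steps):
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path

# initial distribution p_i = P(X_0 = i): start in room 0 with certainty
p0 = np.array([1.0, 0, 0, 0, 0, 0])

# the distribution of X_n is the row vector p0 @ P^n
p20 = p0 @ np.linalg.matrix_power(P, 20)
print("P(X_20 = 5 | X_0 = 0) =", round(float(p20[5]), 3))
print("one sampled path:", sample_path(P, 0, 10))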

