CME308: Assignment 4
Due: Tuesday, May 18, 2010

Due Date: This assignment is due on Tuesday, May 18, 2010, by 5pm under the door of 380-383V. See the course website for the policy on incentives for LaTeX solutions.

Topics:
• Markov chains and first transition analysis.
• Linear regression models.
• Least squares estimation and the bootstrap.

Problem 1 (10 pts): The Smiths receive the paper every morning and place it on a pile after reading it. Each afternoon, with probability 1/3, someone takes all the papers in the pile and puts them in the recycling bin. Also, if ever there are at least five papers in the pile, Mr. Smith (with probability 1) takes all the papers to the bin.

1. This problem can be modeled with a 5-state Markov chain whose states are {0, 1, 2, 3, 4}, corresponding to the number of papers in the pile in the evening. What is the corresponding transition matrix?
2. After a long time, what would be the expected number of papers in the pile?
3. Assume the pile starts with 0 papers. What is the expected time until the pile again has 0 papers?

Solution:

1. From state i < 4, the pile gains one paper in the morning and is emptied that afternoon with probability 1/3; from state 4, the morning paper makes five, so the pile is emptied with probability 1. The transition matrix is

P = \begin{pmatrix}
1/3 & 2/3 & 0   & 0   & 0   \\
1/3 & 0   & 2/3 & 0   & 0   \\
1/3 & 0   & 0   & 2/3 & 0   \\
1/3 & 0   & 0   & 0   & 2/3 \\
1   & 0   & 0   & 0   & 0
\end{pmatrix}

2. Since the Markov chain is irreducible and aperiodic, solving \pi P = \pi (together with \sum_j \pi_j = 1) gives the unique stationary distribution:

\pi = (0.3839, \ 0.2559, \ 0.1706, \ 0.1137, \ 0.0758)

The expected number of papers in the pile is

E[\text{\# of papers in the pile}] = \sum_{j=0}^{4} j \pi_j = 1.2417

3. We are asked for the expected return time to state 0. This is given by

E[\text{return time to state 0}] = \frac{1}{\pi_0} = 2.6049

Problem 2 (10 pts):

1. Let G = (V, E) be a finite, undirected, connected graph. Let X = (X_n : n \geq 0) be a Markov chain that moves from vertex to vertex by choosing uniformly among the available edges. Compute the stationary distribution of this Markov chain (known as a "random walk on the graph G").
2.
A knight moves randomly on a chessboard, making each admissible move with equal probability, starting from a corner. What is the average time it takes to return to the corner it started from?

Solution:

1. The transition probability from a node is uniform over its neighbors, so the transition matrix has entries

P_{ij} = \frac{1}{\deg(v_i)} \, 1_{\{\{v_i, v_j\} \in E\}}

Since the graph is connected, there is a path between any two vertices, so there is a nonzero probability of going from any state of the corresponding Markov chain to any other (at least the probability of following that path). The Markov chain is therefore irreducible. Furthermore, the state space V is finite, so the chain is also positive recurrent, and it has a unique stationary distribution: the solution \pi of \pi = \pi P. One can check directly that \pi_v = \deg(v)/(2|E|) works: for every vertex v,

(\pi P)_v = \sum_{u : \{u,v\} \in E} \frac{\deg(u)}{2|E|} \cdot \frac{1}{\deg(u)} = \frac{\deg(v)}{2|E|} = \pi_v
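The Problem 1 answers can be sanity-checked numerically. Below is a minimal sketch (assuming NumPy; the variable names are mine) that solves \pi P = \pi together with the normalization \sum_j \pi_j = 1:

```python
import numpy as np

# Transition matrix from the Problem 1 solution: from state i < 4 the pile
# gains a paper and is recycled with probability 1/3 (back to 0) or kept
# (probability 2/3); from state 4 the next paper makes five, so the pile
# is emptied with probability 1.
P = np.array([
    [1/3, 2/3, 0,   0,   0  ],
    [1/3, 0,   2/3, 0,   0  ],
    [1/3, 0,   0,   2/3, 0  ],
    [1/3, 0,   0,   0,   2/3],
    [1,   0,   0,   0,   0  ],
])

# Stationary distribution: pi P = pi, i.e. (P^T - I) pi = 0.  Replace one
# (redundant) balance equation with the normalization sum(pi) = 1.
A = np.vstack([(P.T - np.eye(5))[:-1], np.ones(5)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

print(np.round(pi, 4))       # stationary distribution, ~(0.3839, 0.2559, ...)
print(pi @ np.arange(5))     # expected number of papers, ~1.2417
print(1 / pi[0])             # expected return time to state 0, ~2.6049
```

Dropping one balance equation in favor of the normalization condition is the standard trick: for an irreducible chain the balance equations are rank-deficient by exactly one, so the resulting square system has a unique solution.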
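For part 2 of Problem 2, the stationary distribution \pi_v = \deg(v)/(2|E|) gives an expected return time of 1/\pi_v = 2|E|/\deg(v). A hedged sketch applying this to the knight's graph on the 8x8 board (the move list and the helper `degree` are my own):

```python
# Vertices are board squares; edges are legal knight moves.  For a random
# walk on a graph, the expected return time to v is 2|E| / deg(v).
moves = [(1, 2), (2, 1), (-1, 2), (-2, 1),
         (1, -2), (2, -1), (-1, -2), (-2, -1)]

def degree(r, c):
    """Number of legal knight moves from square (r, c) on an 8x8 board."""
    return sum(0 <= r + dr < 8 and 0 <= c + dc < 8 for dr, dc in moves)

total_degree = sum(degree(r, c) for r in range(8) for c in range(8))  # = 2|E|
corner_degree = degree(0, 0)   # a corner square has only 2 knight moves

print(total_degree, corner_degree)    # 336 2
print(total_degree / corner_degree)   # expected return time: 168.0
```

The knight's graph has 168 edges (total degree 336) and a corner has degree 2, so the mean return time to the starting corner is 336/2 = 168 moves.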
Spring '08, Peter Glynn