Massachusetts Institute of Technology
Department of Electrical Engineering & Computer Science
6.041/6.431: Probabilistic Systems Analysis (Fall 2010)

Problem Set 9 Solutions

1. (a) Yes, to 0. Applying the weak law of large numbers, we have
\[ P(|U_i - \mu| > \epsilon) \to 0 \text{ as } i \to \infty, \quad \text{for all } \epsilon > 0. \]
Here $\mu = 0$ since $X_i \sim U(-1.0, 1.0)$.

(b) Yes, to 1. Since $W_i \le 1$, we have for $\epsilon > 0$,
\begin{align*}
\lim_{i \to \infty} P(|W_i - 1| > \epsilon) &= \lim_{i \to \infty} P(\max\{X_1, \ldots, X_i\} < 1 - \epsilon) \\
&= \lim_{i \to \infty} P(X_1 < 1 - \epsilon) \cdots P(X_i < 1 - \epsilon) \\
&= \lim_{i \to \infty} \left(1 - \frac{\epsilon}{2}\right)^i = 0.
\end{align*}

(c) Yes, to 0. We have $|V_n| \le \min\{|X_1|, |X_2|, \ldots, |X_n|\}$, and $\min\{|X_1|, |X_2|, \ldots, |X_n|\}$ converges to 0 in probability. So, since $|V_n| \ge 0$, $|V_n|$ converges to 0 in probability. To see why $\min\{|X_1|, \ldots, |X_n|\}$ converges to 0 in probability, note that
\begin{align*}
\lim_{i \to \infty} P(|\min\{|X_1|, \ldots, |X_i|\} - 0| > \epsilon) &= \lim_{i \to \infty} P(\min\{|X_1|, \ldots, |X_i|\} > \epsilon) \\
&= \lim_{i \to \infty} P(|X_1| > \epsilon) \cdot P(|X_2| > \epsilon) \cdots P(|X_i| > \epsilon) \\
&= \lim_{i \to \infty} (1 - \epsilon)^i \quad \text{since } |X_i| \text{ is uniform between 0 and 1} \\
&= 0.
\end{align*}

2. Consider a random variable $X$ with PMF
\[ p_X(x) = \begin{cases} p, & \text{if } x = \mu - c; \\ p, & \text{if } x = \mu + c; \\ 1 - 2p, & \text{if } x = \mu. \end{cases} \]
The mean of $X$ is $\mu$, and the variance of $X$ is $2pc^2$. To make the variance equal $\sigma^2$, set $p = \frac{\sigma^2}{2c^2}$. For this random variable, we have
\[ P(|X - \mu| \ge c) = 2p = \frac{\sigma^2}{c^2}, \]
and therefore the Chebyshev inequality is tight.

3. (a) Let $t_i$ be the expected time until the state $HT$ is reached, starting in state $i$, i.e., the mean first passage time to reach state $HT$ starting in state $i$. Note that $t_S$ is the expected number of tosses until first observing heads directly followed by tails.
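The three-point construction in Problem 2 is easy to sanity-check numerically. The sketch below uses illustrative values $\mu = 0$, $c = 2$, $\sigma^2 = 1$ (these particular numbers are our own choice, not part of the problem):

```python
# Sanity check of Problem 2: a three-point PMF that makes Chebyshev tight.
# mu, c, sigma2 are illustrative choices, not values from the problem.
mu, c, sigma2 = 0.0, 2.0, 1.0

p = sigma2 / (2 * c ** 2)  # p = sigma^2 / (2 c^2), as derived above

# PMF: mass p at mu - c, mass p at mu + c, mass 1 - 2p at mu.
pmf = {mu - c: p, mu + c: p, mu: 1 - 2 * p}

mean = sum(x * q for x, q in pmf.items())
var = sum((x - mean) ** 2 * q for x, q in pmf.items())
tail = sum(q for x, q in pmf.items() if abs(x - mu) >= c)

print(mean)                  # 0.0  (equals mu)
print(var)                   # 1.0  (equals sigma^2)
print(tail, sigma2 / c**2)   # 0.25 0.25 -- Chebyshev bound met with equality
```

As expected, $P(|X - \mu| \ge c)$ coincides exactly with the Chebyshev bound $\sigma^2 / c^2$.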
We have
\begin{align*}
t_S &= 1 + \tfrac{1}{2} t_H + \tfrac{1}{2} t_T \\
t_T &= 1 + \tfrac{1}{2} t_H + \tfrac{1}{2} t_T \\
t_H &= 1 + \tfrac{1}{2} t_H
\end{align*}
and by solving these equations, we find that the expected number of tosses until first observing heads directly followed by tails is $t_S = 4$.

(b) To find the expected number of additional tosses necessary to again observe heads followed by tails, we recognize that this is the mean recurrence time $t^*_{HT}$ of state $HT$. This can be determined as
\begin{align*}
t^*_{HT} &= 1 + p_{HT,H}\, t_H + p_{HT,T}\, t_T \\
&= 1 + \tfrac{1}{2} \cdot 2 + \tfrac{1}{2} \cdot 4 \\
&= 4.
\end{align*}

(c) Let's consider a Markov chain with states $S, H, T, TT$, where $S$ is a starting state, $H$ indicates heads on the current toss, $T$ indicates tails on the current toss (without tails on the previous toss), and $TT$ indicates tails over the last two tosses. The transition probabilities for this Markov chain are illustrated below in the state transition diagram:

[State transition diagram: $S \to H$ and $S \to T$ each with probability $\tfrac{1}{2}$; $H \to H$ and $H \to T$ each with probability $\tfrac{1}{2}$; $T \to H$ and $T \to TT$ each with probability $\tfrac{1}{2}$; $TT \to TT$ with probability 1.]

Let $t_i$ be the expected time until the state $TT$ is reached, starting in state $i$, i.e., the mean first passage time to reach state...
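The first-passage answer $t_S = 4$ from Problem 3(a) can be confirmed by simulation. A minimal Monte Carlo sketch (the function name and trial count are our own choices):

```python
import random

def tosses_until(pattern, rng):
    """Toss a fair coin until `pattern` (e.g. 'HT') first appears; return the toss count."""
    history = ""
    while not history.endswith(pattern):
        history += rng.choice("HT")  # each toss is H or T with probability 1/2
    return len(history)

rng = random.Random(0)  # fixed seed for reproducibility
n = 200_000
avg_ht = sum(tosses_until("HT", rng) for _ in range(n)) / n
print(avg_ht)  # close to the computed t_S = 4
```

The same code estimates the mean first-passage time for part (c) via `tosses_until("TT", rng)`; the estimate comes out larger (near 6), which is what the part (c) analysis goes on to compute.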
Fall 2010, Prof. Dimitri Bertsekas