CS 70 Discrete Mathematics for CS, Fall 2006, Papadimitriou & Vazirani — Lecture 23: Variance

Question: At each time step, I flip a fair coin. If it comes up Heads, I walk one step to the right; if it comes up Tails, I walk one step to the left. How far do I expect to have traveled from my starting point after n steps?

Denoting a right-move by +1 and a left-move by −1, we can describe the probability space here as the set of all words of length n over the alphabet {±1}, each having equal probability 1/2^n. Let the r.v. X denote our position (relative to our starting point 0) after n moves. Thus X = X_1 + X_2 + ··· + X_n, where X_i = +1 if the i-th toss is Heads, and X_i = −1 otherwise.

Now obviously we have E(X) = 0. The easiest rigorous way to see this is to note that E(X_i) = (1/2 × 1) + (1/2 × (−1)) = 0, so by linearity of expectation E(X) = ∑_{i=1}^n E(X_i) = 0. Thus after n steps, my expected position is 0! But of course this is not very informative, and is due to the fact that positive and negative deviations from 0 cancel out. What the above question is really asking is: what is the expected value of |X|, our distance from 0?

Rather than consider the r.v. |X|, which is a little awkward due to the absolute value operator, we will instead look at the r.v. X^2. Notice that this also has the effect of making all deviations from 0 positive, so it should also give a good measure of the distance traveled. However, because it is the squared distance, we will need to take a square root at the end.

Let's calculate E(X^2):

    E(X^2) = E((X_1 + X_2 + ··· + X_n)^2)
           = E(∑_{i=1}^n X_i^2 + ∑_{i≠j} X_i X_j)
           = ∑_{i=1}^n E(X_i^2) + ∑_{i≠j} E(X_i X_j)

In the last line here, we used linearity of expectation. To proceed, we need to compute E(X_i^2) and E(X_i X_j) (for i ≠ j). Let's consider first X_i^2. Since X_i can take on only the values ±1, clearly X_i^2 = 1 always, so E(X_i^2) = 1. What about E(X_i X_j)?
Since X_i and X_j are independent, it is the case that E(X_i X_j) = E(X_i) E(X_j) = 0.[1] Plugging these values into the above equation gives E(X^2) = (n × 1) + 0 = n. So we see that our expected squared distance from 0 is n. One interpretation of this is that we might expect to be a distance of about √n away from 0 after n steps. However, we have to be careful here: we cannot …

[1] The following fact was proved in class at the same time as we proved linearity of expectation (Lecture 20): for independent random variables X, Y, we have E(XY) = E(X) E(Y). If you missed the proof in class, you should try to prove it yourself from the definition of expectation, in similar fashion to the proof of Theorem 20.1. Note that E(XY) = E(X) E(Y) is false for general r.v.'s X, Y; as an example just look at E…
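The conclusion E(X) = 0 and E(X^2) = n can be checked by brute force. The following sketch (not part of the original notes; the function name is mine) enumerates all 2^n equally likely ±1 words of length n, exactly as the probability space is defined above, and computes both moments exactly:

```python
# Verify E(X) = 0 and E(X^2) = n for the +/-1 random walk by exact
# enumeration over all 2^n equally likely words over the alphabet {+1, -1}.
from itertools import product

def exact_moments(n):
    """Return (E[X], E[X^2]) for the n-step walk, computed exactly."""
    count = 0
    sum_x = 0
    sum_x2 = 0
    for word in product((+1, -1), repeat=n):
        x = sum(word)          # final position X = X_1 + ... + X_n
        sum_x += x
        sum_x2 += x * x
        count += 1
    return sum_x / count, sum_x2 / count

for n in (1, 4, 10):
    e_x, e_x2 = exact_moments(n)
    print(n, e_x, e_x2)        # prints: n 0.0 float(n) for each n
```

Enumeration is exponential in n, of course; for large n a Monte Carlo estimate over random walks would show the same pattern approximately, with the root-mean-square distance hovering around √n.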