CS 70 Discrete Mathematics for CS    Spring 2005    Clancy/Wagner    Notes 22

Variance

Question: At each time step, I flip a fair coin. If it comes up Heads, I walk one step to the right; if it comes up Tails, I walk one step to the left. How far do I expect to have traveled from my starting point after $n$ steps?

Denoting a right-move by $+1$ and a left-move by $-1$, we can describe the probability space here as the set of all words of length $n$ over the alphabet $\{\pm 1\}$, each having equal probability $\frac{1}{2^n}$. Let the r.v. $X$ denote our position (relative to our starting point 0) after $n$ moves. Thus $X = X_1 + X_2 + \cdots + X_n$, where

$$X_i = \begin{cases} +1 & \text{if the } i\text{th toss is Heads;} \\ -1 & \text{otherwise.} \end{cases}$$

Now obviously we have $E(X) = 0$. The easiest rigorous way to see this is to note that $E(X_i) = (\frac{1}{2} \times 1) + (\frac{1}{2} \times (-1)) = 0$, so by linearity of expectation $E(X) = \sum_{i=1}^{n} E(X_i) = 0$. Thus after $n$ steps, my expected position is 0!

But of course this is not very informative, and is due to the fact that positive and negative deviations from 0 cancel out. What the above question is really asking is: what is the expected value of $|X|$, our distance from 0? Rather than consider the r.v. $|X|$, which is a little awkward due to the absolute value operator, we will instead look at the r.v. $X^2$. Notice that this also has the effect of making all deviations from 0 positive, so it should also give a good measure of the distance traveled. However, because it is the squared distance, we will need to take a square root at the end.

Let's calculate $E(X^2)$:

$$E(X^2) = E\big((X_1 + X_2 + \cdots + X_n)^2\big) = E\Big(\sum_{i=1}^{n} X_i^2 + \sum_{i \neq j} X_i X_j\Big) = \sum_{i=1}^{n} E(X_i^2) + \sum_{i \neq j} E(X_i X_j).$$

In the last step here, we used linearity of expectation. To proceed, we need to compute $E(X_i^2)$ and $E(X_i X_j)$ (for $i \neq j$). Let's consider first $X_i^2$. Since $X_i$ can take on only the values $\pm 1$, clearly $X_i^2 = 1$ always, so $E(X_i^2) = 1$. What about $E(X_i X_j)$? Since $X_i$ and $X_j$ are independent, it is the case that $E(X_i X_j) = E(X_i)E(X_j) = 0$.[1]

Plugging these values into the above equation gives

$$E(X^2) = (n \times 1) + 0 = n.$$

So we see that our expected squared distance from 0 is $n$. One interpretation of this is that we might expect to be a distance of about $\sqrt{n}$ away from 0 after $n$ steps. However, we have to be careful here: we cannot simply argue that $E(|X|) = \sqrt{E(X^2)} = \sqrt{n}$.

[1] Two random variables $X$ and $Y$ are independent if the events $X = a$ and $Y = b$ are independent for all pairs of values $a$, $b$. If $X$, $Y$ are independent, then we have $E(XY) = E(X)E(Y)$; you'll be asked to prove this on one of your homeworks. Note that $E(XY) = E(X)E(Y)$ is false for general r.v.'s $X$, $Y$; as an example, just look at $E(X_i^2)$ in the present discussion.
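To make the result concrete, here is a minimal simulation sketch (not part of the original notes; the function names and the choices of n = 100 and 100,000 trials are purely illustrative) that estimates $E(X)$ and $E(X^2)$ empirically and compares them to the derived values 0 and $n$:

    import random

    def random_walk(n):
        # One n-step walk: +1 for Heads, -1 for Tails.
        return sum(random.choice((1, -1)) for _ in range(n))

    def estimate_moments(n, trials=100_000):
        # Empirical estimates of E(X) and E(X^2) over many independent walks.
        # (Parameter values are illustrative, not from the notes.)
        positions = [random_walk(n) for _ in range(trials)]
        mean = sum(positions) / trials
        mean_square = sum(x * x for x in positions) / trials
        return mean, mean_square

    n = 100
    mean, mean_square = estimate_moments(n)
    print(f"E(X)   is approximately {mean:.3f}  (derived value: 0)")
    print(f"E(X^2) is approximately {mean_square:.1f}  (derived value: n = {n})")

With n = 100, the first estimate should hover near 0 while the second lands near 100, matching the derivation above.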
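The footnote's two claims can also be checked numerically. In the following sketch (again illustrative, not from the notes), the empirical average of $X_i X_j$ for independent $\pm 1$ flips approaches $E(X_i)E(X_j) = 0$, whereas the dependent product $X_i \cdot X_i = X_i^2$ has expectation 1, not $E(X_i)E(X_i) = 0$:

    import random

    trials = 200_000  # illustrative sample size

    # Independent flips: E(X_i X_j) should match E(X_i) E(X_j) = 0.
    indep = sum(random.choice((1, -1)) * random.choice((1, -1))
                for _ in range(trials)) / trials

    # Dependent case, X_i times itself: X_i^2 = 1 always, yet E(X_i)E(X_i) = 0.
    dep = sum(random.choice((1, -1)) ** 2 for _ in range(trials)) / trials

    print(f"independent: E(X_i X_j) is approximately {indep:.4f}  (product of expectations: 0)")
    print(f"dependent:   E(X_i^2)   = {dep:.4f}  (yet E(X_i)E(X_i) = 0)")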
