CS 70-2 Discrete Mathematics and Probability Theory, Spring 2009
Alistair Sinclair, David Tse

Lecture 15: Variance

Question: At each time step, I flip a fair coin. If it comes up Heads, I walk one step to the right; if it comes up Tails, I walk one step to the left. How far do I expect to have traveled from my starting point after $n$ steps?

Denoting a right-move by $+1$ and a left-move by $-1$, we can describe the probability space here as the set of all words of length $n$ over the alphabet $\{-1,+1\}$, each having equal probability $\frac{1}{2^n}$. Let the r.v. $X$ denote our position (relative to our starting point 0) after $n$ moves. Thus $X = X_1 + X_2 + \cdots + X_n$, where

$$X_i = \begin{cases} +1 & \text{if the $i$th toss is Heads;} \\ -1 & \text{otherwise.} \end{cases}$$

Now obviously we have $E(X) = 0$. The easiest way to see this is to note that $E(X_i) = (\frac{1}{2} \times 1) + (\frac{1}{2} \times (-1)) = 0$, so by linearity of expectation $E(X) = \sum_{i=1}^{n} E(X_i) = 0$. Thus after $n$ steps, my expected position is 0! But of course this is not very informative, and is due to the fact that positive and negative deviations from 0 cancel out. What the above question is really asking is: what is the expected value of $|X|$, our distance from 0?

Rather than consider the r.v. $|X|$, which is a little awkward due to the absolute value operator, we will instead look at the r.v. $X^2$. Notice that this also has the effect of making all deviations from 0 positive, so it should also give a good measure of the distance traveled. However, because it is the squared distance, we will need to take a square root at the end.

Let's calculate $E(X^2)$:

$$E(X^2) = E((X_1 + X_2 + \cdots + X_n)^2) = E\Big(\sum_{i=1}^{n} X_i^2 + \sum_{i \neq j} X_i X_j\Big) = \sum_{i=1}^{n} E(X_i^2) + \sum_{i \neq j} E(X_i X_j).$$

In the last line here, we used linearity of expectation. To proceed, we need to compute $E(X_i^2)$ and $E(X_i X_j)$ (for $i \neq j$). Let's consider first $X_i^2$. Since $X_i$ can take on only values $\pm 1$, clearly $X_i^2 = 1$ always, so $E(X_i^2) = 1$. What about $E(X_i X_j)$? Well, $X_i X_j = +1$ when $X_i = X_j = +1$ or $X_i = X_j = -1$, and otherwise $X_i X_j = -1$. Also,

$$\Pr[(X_i = X_j = +1) \vee (X_i = X_j = -1)] = \Pr[X_i = X_j = +1] + \Pr[X_i = X_j = -1] = \tfrac{1}{4} + \tfrac{1}{4} = \tfrac{1}{2},$$

so $X_i X_j = 1$ with probability $\frac{1}{2}$. Therefore $X_i X_j = -1$ with probability $\frac{1}{2}$ also. Hence $E(X_i X_j) = 0$. Plugging these values into the above equation gives

$$E(X^2) = (n \times 1) + 0 = n.$$
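Both results, $E(X) = 0$ and $E(X^2) = n$, are easy to check empirically. The following is a minimal simulation sketch, not part of the original lecture; the function name `simulate_walk` and the trial count are our own choices.

```python
import random

def simulate_walk(n, trials=100_000):
    """Monte Carlo estimates of E(X) and E(X^2) for the n-step walk."""
    total = 0
    total_sq = 0
    for _ in range(trials):
        # X is the sum of n independent fair +/-1 steps.
        x = sum(random.choice((-1, 1)) for _ in range(n))
        total += x
        total_sq += x * x
    return total / trials, total_sq / trials

n = 100
mean, mean_sq = simulate_walk(n)
print(f"E(X)   estimate: {mean:+.3f}  (exact value: 0)")
print(f"E(X^2) estimate: {mean_sq:.1f}  (exact value: n = {n})")
```

For $n = 100$, the first estimate hovers near 0 and the second near 100, matching the calculation above.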
So we see that our expected squared distance from 0 is $n$. One interpretation of this is that we might expect to be a distance of about $\sqrt{n}$ away from 0 after $n$ steps. However, we have to be careful here: we cannot simply argue that $E(|X|) = \sqrt{E(X^2)} = \sqrt{n}$. (Why not?) We will see later in the lecture how to make precise deductions about $|X|$ from knowledge of $E(X^2)$. For the moment, however, let's agree to view $E(X^2)$ as an intuitive measure of the spread of the r.v. $X$.
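As a numerical hint at the "Why not?", here is another small sketch (again our own, with the hypothetical helper `mean_abs_position`) that estimates $E(|X|)$ directly and compares it to $\sqrt{n}$.

```python
import math
import random

def mean_abs_position(n, trials=100_000):
    """Monte Carlo estimate of E(|X|) for the n-step walk."""
    total = 0
    for _ in range(trials):
        x = sum(random.choice((-1, 1)) for _ in range(n))
        total += abs(x)
    return total / trials

n = 100
est = mean_abs_position(n)
print(f"E(|X|) estimate:  {est:.2f}")
print(f"sqrt(E(X^2))   = sqrt({n}) = {math.sqrt(n):.2f}")
# The estimate comes out near 0.8 * sqrt(n), strictly below sqrt(n):
# since the square root is concave, Jensen's inequality gives
# E(|X|) = E(sqrt(X^2)) <= sqrt(E(X^2)), with equality only when
# |X| is constant.
```

The gap is real: averaging squared distances weights large deviations more heavily than averaging distances does, so $\sqrt{E(X^2)}$ overstates $E(|X|)$ in general.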
