# CS 70-2 Discrete Mathematics and Probability Theory (Spring 2009), Note 14: Some Important Distributions

Alistair Sinclair, David Tse

**Question:** A biased coin with Heads probability p is tossed repeatedly until the first Head appears. What is the expected number of tosses?

As always, our first step in answering the question must be to define the sample space Ω. A moment's thought tells us that

Ω = {H, TH, TTH, TTTH, ...},

i.e., Ω consists of all sequences over the alphabet {H, T} that end with H and contain no other H's. This is our first example of an infinite sample space (though it is still discrete).

What is the probability of a sample point, say ω = TTH? Since successive coin tosses are independent (this is implicit in the statement of the problem), we have

Pr[TTH] = (1 − p) × (1 − p) × p = (1 − p)² p.

And generally, for any sequence ω ∈ Ω of length i, we have Pr[ω] = (1 − p)^(i−1) p.

To be sure everything is consistent, we should check that the probabilities of all the sample points add up to 1. Since there is exactly one sequence of each length i ≥ 1 in Ω, we have

∑_{ω ∈ Ω} Pr[ω] = ∑_{i=1}^∞ (1 − p)^(i−1) p = p ∑_{i=0}^∞ (1 − p)^i = p × 1/(1 − (1 − p)) = 1,

as expected. [In the second-to-last step here, we used the formula for summing a geometric series.]

Now let the random variable X denote the number of tosses in our sequence (i.e., X(ω) is the length of ω). Our goal is to compute E(X). Despite the fact that X counts something, there's no obvious way to write it as a sum of simple r.v.'s as we did in many examples in an earlier lecture note. (Try it!) In a later lecture, we will give a slick way to do this calculation. For now, let's just dive in and try a direct computation of E(X). Note that the distribution of X is quite simple:

Pr[X = i] = (1 − p)^(i−1) p   for i = 1, 2, 3, ...
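As a sanity check on this distribution (a small sketch, not part of the original note), we can truncate the infinite sum at a large N to confirm the probabilities add up to 1, and simulate the coin-tossing experiment directly to estimate the expected number of tosses. The function names `geom_pmf` and `tosses_until_head` are illustrative choices, not from the note.

```python
import random

# Sanity check for the distribution Pr[X = i] = (1 - p)^(i-1) * p.
# We truncate the infinite sum at a large N, and simulate the
# "toss until the first Head" experiment directly.

def geom_pmf(i, p):
    """Probability that the first Head appears on toss i."""
    return (1 - p) ** (i - 1) * p

def tosses_until_head(p, rng):
    """Toss a p-biased coin until the first Head; return the number of tosses."""
    count = 1
    while rng.random() >= p:  # Tails with probability 1 - p
        count += 1
    return count

p = 0.3
N = 10_000

# The truncated sum should be extremely close to 1.
total = sum(geom_pmf(i, p) for i in range(1, N + 1))
print(f"sum of Pr[X = i] for i <= {N}: {total:.6f}")

# Monte Carlo estimate of E(X); it should hover near 1/p.
rng = random.Random(0)
trials = 100_000
avg = sum(tosses_until_head(p, rng) for _ in range(trials)) / trials
print(f"empirical E(X): {avg:.3f}  (compare 1/p = {1 / p:.3f})")
```

The simulation already hints at the answer the note is working toward: the average number of tosses clusters around 1/p.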
So from the definition of expectation we have

E(X) = (1 × p) + (2 × (1 − p)p) + (3 × (1 − p)² p) + ··· = p ∑_{i=1}^∞ i (1 − p)^(i−1).

This series is a blend of an arithmetic series (the i part) and a geometric series (the (1 − p)^(i−1) part). There are several ways to sum it. Here is one way, using an auxiliary trick (given in the following theorem) that is often very useful. [Ask your TA about other ways.]

**Theorem 14.1:** Let X be a random variable that takes on only non-negative integer values. Then

E(X) = ∑_{i=1}^∞ Pr[X ≥ i].

**Proof:** For notational convenience, let's write p_i = Pr[X = i], for i = 0, 1, 2, .... From the definition of expectation, we have

E(X) = (0 × p_0) + (1 × p_1) + (2 × p_2) + (3 × p_3) + (4 × p_4) + ···
     = p_1 + (p_2 + p_2) + (p_3 + p_3 + p_3) + (p_4 + p_4 + p_4 + p_4) + ···
     = (p_1 + p_2 + p_3 + p_4 + ···) + (p_2 + p_3 + p_4 + ···) + (p_3 + p_4 + ···) + (p_4 + ···) + ···
     = Pr[X ≥ 1] + Pr[X ≥ 2] + Pr[X ≥ 3] + Pr[X ≥ 4] + ···,

which is exactly ∑_{i=1}^∞ Pr[X ≥ i], as claimed.
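Theorem 14.1 is easy to check numerically for the geometric distribution (a sketch under my own truncation choice of N, not part of the note): the direct expectation sum ∑ i Pr[X = i] and the tail sum ∑ Pr[X ≥ i] should agree. For this distribution the tails have a clean closed form, Pr[X ≥ i] = (1 − p)^(i−1), since X ≥ i means the first i − 1 tosses were all Tails; both sums also match the standard value E(X) = 1/p.

```python
# Numerical check of Theorem 14.1 for the geometric distribution:
# compare the direct expectation  sum_i i * Pr[X = i]
# against the tail-sum form       sum_i Pr[X >= i],
# truncating both at a large N (truncation error is negligible).

p = 0.3
N = 10_000

pmf = [(1 - p) ** (i - 1) * p for i in range(1, N + 1)]

# Direct definition of expectation.
direct = sum(i * pmf[i - 1] for i in range(1, N + 1))

# Tail probabilities: Pr[X >= i] = (1 - p)^(i-1), because the event
# "X >= i" is exactly "the first i - 1 tosses all came up Tails".
tail_sum = sum((1 - p) ** (i - 1) for i in range(1, N + 1))

print(f"direct sum: {direct:.6f}")
print(f"tail sum:   {tail_sum:.6f}")
print(f"1/p:        {1 / p:.6f}")  # both sums agree with E(X) = 1/p
```

Note how much simpler the tail sum is than the direct sum: it is a pure geometric series, which is precisely why the trick in Theorem 14.1 is useful here.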