# Algorithms Lecture 11: Tail Inequalities [Fa'10]


> If you hold a cat by the tail you learn things you cannot learn any other way.
>
> — Mark Twain

## 11 Tail Inequalities

The simple recursive structure of skip lists made it relatively easy to derive an upper bound on the expected worst-case search time, by way of a stronger high-probability upper bound on the worst-case search time. We can prove similar results for treaps, but because of their more complex recursive structure, we need slightly more sophisticated probabilistic tools. These tools are usually called *tail inequalities*; intuitively, they bound the probability that a random variable with a bell-shaped distribution takes a value in the tails of the distribution, far away from the mean.

### 11.1 Markov's Inequality

Perhaps the simplest tail inequality was named after the Russian mathematician Andrey Markov; however, in strict accordance with Stigler's Law of Eponymy, it first appeared in the works of Markov's probability teacher, Pafnuty Chebyshev.¹

**Markov's Inequality.** Let $X$ be a non-negative integer random variable. For any $t > 0$, we have $\Pr[X \ge t] \le E[X]/t$.

**Proof:** The inequality follows from the definition of expectation by simple algebraic manipulation.

$$
\begin{aligned}
E[X] &= \sum_{k=0}^{\infty} k \cdot \Pr[X = k] && \text{[definition of $E[X]$]} \\
&= \sum_{k=0}^{\infty} \Pr[X > k] && \text{[algebra]} \\
&\ge \sum_{k=0}^{t-1} \Pr[X > k] && \text{[since $t < \infty$]} \\
&\ge \sum_{k=0}^{t-1} \Pr[X \ge t] && \text{[since $k < t$]} \\
&= t \cdot \Pr[X \ge t] && \text{[algebra]} \qquad \blacksquare
\end{aligned}
$$

Unfortunately, the bounds that Markov's inequality implies (at least directly) are often very weak, even useless. (For example, Markov's inequality implies that with high probability, every node in an $n$-node treap has depth $O(n^2 \log n)$. Well, duh!) To get stronger bounds, we need to exploit some additional structure in our random variables.

¹ The closely related tail bound traditionally called Chebyshev's inequality was actually discovered by the French statistician Irénée-Jules Bienaymé, a friend and colleague of Chebyshev's.
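As a quick sanity check (not part of the original notes), the following Python sketch compares the empirical tail probability $\Pr[X \ge t]$ with the Markov bound $E[X]/t$, taking $X$ to be the number of heads in ten fair coin flips; the trial counts and thresholds are illustrative choices:

```python
import random

# Illustrative check of Markov's inequality: X = number of heads in 10 fair
# coin flips, a non-negative integer random variable with E[X] = 5.
random.seed(0)

n_flips, trials = 10, 50_000
samples = [sum(random.randint(0, 1) for _ in range(n_flips))
           for _ in range(trials)]

mean = sum(samples) / trials                 # empirical E[X], close to 5
for t in (6, 8, 10):
    tail = sum(x >= t for x in samples) / trials   # empirical Pr[X >= t]
    bound = mean / t                               # Markov bound E[X]/t
    print(f"t={t}: Pr[X >= t] ~ {tail:.4f} <= E[X]/t ~ {bound:.4f}")
```

Note how loose the bound is: the true tail at $t = 10$ is about $2^{-10}$, while Markov only promises it is at most roughly $1/2$, echoing the "weak, even useless" remark above.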


### 11.2 Independence

A set of random variables $X_1, X_2, \ldots, X_n$ are said to be *mutually independent* if and only if

$$
\Pr\left[\bigwedge_{i=1}^{n} (X_i = x_i)\right] = \prod_{i=1}^{n} \Pr[X_i = x_i]
$$

for all possible values $x_1, x_2, \ldots, x_n$. For example, different flips of the same fair coin are mutually independent, but the number of heads and the number of tails in a sequence of $n$ coin flips are not independent (since they must add up to $n$). Mutual independence of the $X_i$'s implies that the expectation of the product of the $X_i$'s is equal to the product of the expectations:

$$
E\left[\prod_{i=1}^{n} X_i\right] = \prod_{i=1}^{n} E[X_i].
$$
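Both halves of this discussion can be checked exactly by enumerating outcomes; the small Python sketch below (an illustration, not from the notes) verifies the product rule for three independent fair coin flips, and shows it failing for the dependent pair heads/tails mentioned above:

```python
from itertools import product as cartesian

# All 8 equally likely outcomes of three independent fair coin flips,
# with X_i = 1 for heads and 0 for tails.
outcomes = list(cartesian([0, 1], repeat=3))
p = 1 / len(outcomes)

# Mutually independent: E[X1*X2*X3] equals E[X1]*E[X2]*E[X3].
e_product = sum(x1 * x2 * x3 for (x1, x2, x3) in outcomes) * p
product_e = 1.0
for i in range(3):
    product_e *= sum(x[i] for x in outcomes) * p
print(e_product, product_e)  # both 0.125

# Not independent: heads H and tails T = 3 - H, so the product rule fails.
e_h = sum(sum(x) for x in outcomes) * p                    # E[H] = 1.5
e_ht = sum(sum(x) * (3 - sum(x)) for x in outcomes) * p    # E[H*T]
print(e_ht, e_h * (3 - e_h))  # 1.5 vs 2.25
```

The second pair of numbers differs precisely because $H$ and $T$ are constrained to sum to $n = 3$, which is the dependence noted in the text.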

