CS174 Lecture 10                                                John Canny

Chernoff Bounds

Chernoff bounds are another kind of tail bound. Like the Markov and Chebyshev bounds, they bound the total probability that a random variable Y lands in the "tail", i.e. far from its mean. Recall that Markov bounds apply to any non-negative random variable Y and have the form

    Pr[ Y ≥ t ] ≤ E[Y] / t

Markov bounds don't depend on any knowledge of the distribution of Y. Chebyshev bounds use knowledge of the standard deviation to give a tighter bound. The Chebyshev bound for a random variable X with mean E[X] and standard deviation σ is

    Pr[ |X − E[X]| ≥ tσ ] ≤ 1/t²

But we have already seen that for some random variables (e.g. the number of balls in a bin), the tail probability falls off exponentially with distance from the mean. For such random variables, Markov and Chebyshev are very weak bounds. The Chernoff bound applies to a class of random variables and does give exponential fall-off of probability with distance from the mean. The critical condition needed for a Chernoff bound is that the random variable be a sum of independent indicator random variables. Since that is true for balls in bins, Chernoff bounds apply.

Bernoulli Trials and the Binomial Distribution

The first kind of random variable that Chernoff bounds work for is a sum of independent indicator variables that all have the same distribution (Bernoulli trials). That is, each X_i is a random variable with Pr[X_i = 1] = p and Pr[X_i = 0] = 1 − p, and the X_i are all independent. Tossing a coin is a Bernoulli trial. So is the event that a randomly tossed ball falls into a particular one of n bins (p = 1/n). If

    X = X_1 + X_2 + ... + X_n

is a sum of n Bernoulli trials, then X has the binomial distribution with parameters n and p.
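To see concretely how weak the Markov and Chebyshev bounds are for a binomially distributed variable, the following sketch (with assumed illustrative parameters n = 100, p = 1/2) compares the exact binomial tail Pr[X ≥ t] against both bounds at a few values of t:

```python
# Sketch (assumed parameters): compare the exact tail Pr[X >= t] of
# X ~ Binomial(n, p) with the Markov bound E[X]/t and the Chebyshev
# bound 1/k^2 at t = mu + k*sigma.
from math import comb, sqrt

n, p = 100, 0.5                  # 100 Bernoulli trials, each with prob 1/2
mu = n * p                       # E[X] = np
sigma = sqrt(n * p * (1 - p))    # standard deviation of a binomial

def tail(t):
    """Exact Pr[X >= t] for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t, n + 1))

for t in (60, 70, 80):
    exact = tail(t)
    markov = mu / t                  # Markov: Pr[X >= t] <= E[X]/t
    k = (t - mu) / sigma
    chebyshev = 1 / k**2             # Chebyshev: Pr[|X - mu| >= k*sigma] <= 1/k^2
    print(f"t={t}: exact={exact:.2e}  Markov={markov:.3f}  Chebyshev={chebyshev:.3f}")
```

The exact tail shrinks by orders of magnitude as t grows, while the Markov bound shrinks only like 1/t and the Chebyshev bound only like 1/k²; this gap is exactly what the Chernoff bound closes.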
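The balls-in-bins example can also be checked by simulation. A minimal sketch, with assumed values n = 20 bins and m = 1000 balls: the count of balls landing in a fixed bin is a sum of m independent Bernoulli(1/n) indicators, so its average over many runs should be close to m/n.

```python
# Sketch (assumed illustrative values): the number of balls in a fixed
# bin is a sum of independent Bernoulli(1/n) indicator variables,
# which is exactly the kind of variable Chernoff bounds apply to.
import random

random.seed(0)   # fixed seed so the simulation is reproducible
n = 20           # number of bins
m = 1000         # number of balls

def balls_in_bin_0():
    # Each ball independently lands in bin 0 with probability 1/n,
    # so this count is a sum of m Bernoulli(1/n) indicators.
    return sum(1 for _ in range(m) if random.randrange(n) == 0)

runs = [balls_in_bin_0() for _ in range(2000)]
avg = sum(runs) / len(runs)
print(f"empirical mean = {avg:.2f}, expected E[X] = m/n = {m/n:.1f}")
```

Over 2000 runs the empirical mean settles near m/n = 50, consistent with E[X] = np for a binomial count.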
This note was uploaded on 10/23/2011 for the course CS 4540 taught by Professor Staff during the Fall '10 term at Georgia Tech.