Inequality, Probability, and Joviality

• In many cases, we don't know the true form of a probability distribution
    E.g., midterm scores
    But, we know the mean
    May also have other measures/properties
       o Variance
       o Non-negativity
       o Etc.
    Inequalities and bounds still allow us to say something about the probability distribution in such cases
       o May be imprecise compared to knowing the true distribution!

Markov's Inequality

• Say X is a non-negative random variable. Then, for all a > 0:

       P(X ≥ a) ≤ E[X] / a

• Proof: Let I = 1 if X ≥ a, and I = 0 otherwise
    Since X ≥ 0, we have I ≤ X/a
    Taking expectations:

       E[I] = P(X ≥ a) ≤ E[X/a] = E[X] / a

Andrey Andreyevich Markov

• Andrey Andreyevich Markov (1856–1922) was a Russian mathematician
    Markov's Inequality is named after him
    He also invented Markov Chains…
       o …which are the basis for Google's PageRank algorithm
    His facial hair inspires fear in Charlie Sheen

Markov and the Midterm

• Statistics from last quarter's CS109 midterm
    X = midterm score
    Using sample mean X̄ = 80.9 as an estimate of E[X]
    What is P(X ≥ 91)?

       P(X ≥ 91) ≤ E[X] / 91 = 80.9 / 91 ≈ 0.889

    Markov bound: at most 88.9% of the class scored 91 or greater
    In fact, 36.67% of the class scored 91 or greater
       o Markov's inequality can be a very loose bound
       o But it made no assumption at all about the form of the distribution!

Chebyshev's Inequality

• Say X is a random variable with E[X] = μ and Var(X) = σ². Then, for all k > 0:

       P(|X − μ| ≥ k) ≤ σ² / k²

• Proof: Since (X − μ)² is a non-negative random variable, apply Markov's Inequality with a = k²:

       P((X − μ)² ≥ k²) ≤ E[(X − μ)²] / k² = σ² / k²

    Note that (X − μ)² ≥ k² is equivalent to |X − μ| ≥ k, yielding:

       P(|X − μ| ≥ k) ≤ σ² / k²
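The Markov bound and the midterm example above can be checked numerically. The sketch below is an illustration, not part of the original lecture; it only assumes the slide's numbers (sample mean 80.9 used as E[X], threshold a = 91).

```python
# Markov's inequality: for a non-negative random variable X and a > 0,
#   P(X >= a) <= E[X] / a.
# Numbers below are the midterm example from the text.

def markov_bound(mean, a):
    """Upper bound on P(X >= a) for a non-negative random variable X."""
    if a <= 0:
        raise ValueError("threshold a must be positive")
    return mean / a

bound = markov_bound(80.9, 91)
print(f"P(X >= 91) <= {bound:.4f}")  # roughly 0.889, the loose bound from the slide
```

Comparing the ≈ 0.889 bound with the actual 36.67% of the class who scored 91 or greater shows just how loose a distribution-free bound can be.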
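Chebyshev's inequality can be sanity-checked the same way: simulate a distribution with known mean and variance and compare the empirical tail mass to the σ²/k² bound. The standard-normal choice below is an arbitrary assumption for illustration; any distribution with finite variance would do.

```python
import random

def chebyshev_bound(variance, k):
    """Upper bound on P(|X - mu| >= k) from Chebyshev's inequality."""
    if k <= 0:
        raise ValueError("k must be positive")
    return variance / k ** 2

random.seed(109)
mu, sigma, k = 0.0, 1.0, 2.0
samples = [random.gauss(mu, sigma) for _ in range(100_000)]

# Empirical P(|X - mu| >= k) versus the distribution-free bound sigma^2 / k^2.
empirical = sum(abs(x - mu) >= k for x in samples) / len(samples)
bound = chebyshev_bound(sigma ** 2, k)
print(f"empirical tail: {empirical:.4f}, Chebyshev bound: {bound:.4f}")
# For a standard normal the true two-sided tail at k = 2 is about 0.0455,
# well under the 0.25 bound -- loose, but it holds for *any* distribution.
```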