# CS 70 Discrete Mathematics and Probability Theory, Fall 2010 (Tse/Wagner) Lecture 17: Polling and the Law of Large Numbers


**Polling.** Question: We want to estimate the proportion p of Democrats in the US population by taking a small random sample. How large does our sample have to be to guarantee that our estimate will be within (say) 0.1 of the true value with probability at least 0.95?

This is perhaps the most basic statistical estimation problem, and it shows up everywhere. We will develop a simple solution that uses only Chebyshev's inequality; more refined methods can be used to get sharper results.

Let's denote the size of our sample by n (to be determined), and the number of Democrats in it by the random variable S_n. (The subscript n just reminds us that the r.v. depends on the size of the sample.) Then our estimate will be the value A_n = (1/n) S_n.

Now, as has often been the case, we will find it helpful to write S_n = X_1 + X_2 + ... + X_n, where

    X_i = 1 if person i in the sample is a Democrat, and X_i = 0 otherwise.

Note that each X_i can be viewed as a coin toss, with Heads probability p (though of course we do not know the value of p), and the coin tosses are independent. Hence, S_n is a binomial random variable with parameters n and p.

What is the expectation of our estimate?

    E(A_n) = E((1/n) S_n) = (1/n) E(S_n) = (1/n)(np) = p.

So for any value of n, our estimate always has the correct expectation p. (Such an r.v. is often called an unbiased estimator of p.)

Presumably, as we increase the sample size n, our estimate should get more and more accurate. This shows up in the fact that the variance decreases with n: as n increases, the probability that we are far from the mean p gets smaller. To see this, we need to compute Var(A_n). But A_n = (1/n) S_n, which is just a constant times a binomial random variable.

**Theorem 17.1:** For any random variable X and constant c, we have Var(cX) = c^2 Var(X).
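The claims above are easy to check empirically. The sketch below (an illustration, not part of the original notes; the true proportion p = 0.4 and sample size n = 1000 are arbitrary choices) simulates many independent polls, and checks that the empirical mean of A_n is close to p (unbiasedness) and that its empirical variance is close to Var(A_n) = Var(S_n)/n^2 = p(1 - p)/n, as predicted by Theorem 17.1 with c = 1/n.

```python
import random

def poll_estimate(p, n, rng):
    """Simulate one poll: draw n independent indicator variables X_i
    (Heads with probability p), so S_n = X_1 + ... + X_n is
    Binomial(n, p), and return the estimate A_n = S_n / n."""
    s_n = sum(1 for _ in range(n) if rng.random() < p)
    return s_n / n

rng = random.Random(0)          # fixed seed for reproducibility
p, n, trials = 0.4, 1000, 2000  # assumed values for illustration

estimates = [poll_estimate(p, n, rng) for _ in range(trials)]

# Empirical mean and variance of A_n over many repeated polls.
mean_a = sum(estimates) / trials
var_a = sum((a - mean_a) ** 2 for a in estimates) / trials

print(f"empirical E(A_n)  = {mean_a:.4f}  (theory: p = {p})")
print(f"empirical Var(A_n) = {var_a:.6f}  (theory: p(1-p)/n = {p*(1-p)/n:.6f})")
```

With these parameters the theoretical variance is 0.4 * 0.6 / 1000 = 0.00024, so a typical poll of 1000 people lands within a couple of percentage points of p, which is exactly the n-dependence the next part of the argument quantifies via Chebyshev.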

