Stat 134 Study Group
Faculty: Prof. Ani Adhikari
Study Group Leader: Prateek Bhakta, pbhakta@berkeley.edu
Study Group Location: MW 10-11am, 115 Chávez
Community through Academics and Leadership

Final Review

Chapter 1:
Key things to remember from chapter 1:
1. The basics of probability
2. Proportions of numbers / areas
3. Conditional probability
   a. Bayes' rule / Bayes trees
   b. Information you are given can sometimes be applied directly, without Bayes' rule.
4. Counting
   a. Deliberate over- or under-counting (with compensation) can solve some problems easily, but watch out for accidental over- or under-counting when you don't want it.

Chapter 2:
Key things to remember from chapter 2:
1. Canonical distributions
   a. Bernoulli(p): a trial or event that succeeds with probability p. The basic building block.
   b. Binomial(n, p): for n independent Bernoulli(p) trials, what is P(k total successes)?
   c. Geometric(p): how many Bernoulli(p) trials do we do until the first success?
   d. Normal(μ, σ²): approximates binomials (and more!)
   e. Poisson(λ): approximates Binomial(n, p) when n is large and λ = np is small (< 3). (And more!)
2. Normal approximation to the binomial
   a. Use the continuity correction for binomials. (It won't matter that much.)
3. More counting. Counting is hard! The hypergeometric and multinomial distributions are commonly applied ideas in counting things. More advanced counting techniques include recurrence relations and the stars-and-bars trick.

Chapter 3:
Key things to remember from chapter 3:
1. The idea and concept of a random variable: it is a number assigned to each outcome.
2. Expected value. Three main ways to find expected value:
   a. The definition (only use for simple things or certain series problems)
   b. Tail sums (usually more useful for mins, maxes, and some infinite series)
   c. Indicators (use cleverly to solve most problems)
   d. Expected value is always linear.
      This is why indicators work: just ensure that the sum of your indicators counts the same thing your random variable is counting.
   e. E(XY) = E(X)E(Y) when X and Y are independent.
3. Variance and the normal approximation
   a. Remember Var(X) = E(X²) - E(X)².
   b. Sometimes you can expand the product of a sum of indicators to find E(X²) in indicator-based problems. (Keep in mind what it means to take the product of two indicators.)
   c. Remember Var(X + Y) = Var(X) + Var(Y) if X and Y are independent.
   d. Use the normal approximation when dealing with the sum of many independent, identically distributed random variables. To do this, you'll need to know the mean and SD.
4. Discrete distributions
   a. Use the concepts of probability to find distributions for weirder R.V....
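The Bayes' rule step from Chapter 1 can be sketched numerically. This is a minimal example with hypothetical numbers (the test sensitivity, false-positive rate, and base rate below are invented for illustration, not from the course):

```python
# Bayes' rule on a classic two-stage setup (hypothetical numbers):
# a test detects a condition with P(+|sick) = 0.95, false-positive
# rate P(+|healthy) = 0.02, and base rate P(sick) = 0.01.
p_sick = 0.01
p_pos_given_sick = 0.95
p_pos_given_healthy = 0.02

# Rule of total probability (the denominator of a Bayes tree):
# P(+) = P(+|sick)P(sick) + P(+|healthy)P(healthy)
p_pos = p_pos_given_sick * p_sick + p_pos_given_healthy * (1 - p_sick)

# Bayes' rule: P(sick|+) = P(+|sick)P(sick) / P(+)
p_sick_given_pos = p_pos_given_sick * p_sick / p_pos
print(round(p_sick_given_pos, 3))  # about 0.324
```

Note how the posterior (~32%) is far below the sensitivity (95%) because the base rate is so small: exactly the kind of "proportions" reasoning the outline emphasizes.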
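The stars-and-bars trick from Chapter 2 says the number of nonnegative integer solutions to x1 + ... + xn = k is C(k + n - 1, n - 1). A quick brute-force check for small n and k (the values below are arbitrary):

```python
from math import comb
from itertools import product

# Stars and bars: the number of nonnegative integer solutions to
# x1 + ... + xn = k is C(k + n - 1, n - 1).
n, k = 3, 5

# Brute force: enumerate all n-tuples with entries in 0..k and count
# the ones that sum to k.
brute = sum(1 for xs in product(range(k + 1), repeat=n) if sum(xs) == k)

# Closed form from the stars-and-bars argument.
formula = comb(k + n - 1, n - 1)
print(brute, formula)  # both 21
```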
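The continuity correction from Chapter 2 can be checked directly: approximate a Binomial(n, p) probability with the Normal(np, np(1-p)) CDF evaluated at k + 0.5 rather than k. A sketch using only the standard library (n = 100, p = 0.5 chosen arbitrarily):

```python
from math import comb, erf, sqrt

n, p = 100, 0.5                 # Binomial(n, p)
mu = n * p                      # mean = np
sigma = sqrt(n * p * (1 - p))   # SD = sqrt(np(1-p))

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

# Exact P(X <= 55) by summing the binomial pmf.
exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(56))

# Normal approximation with continuity correction: evaluate at 55.5,
# not 55, since the binomial is discrete.
approx = phi((55.5 - mu) / sigma)
```

For these values the corrected approximation agrees with the exact probability to within about a thousandth, which is why the outline notes the correction "won't matter that much" for large n.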
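The indicator method from Chapter 3 can be illustrated with the classic matching problem: the expected number of fixed points of a uniform random permutation. Each indicator I_k (position k holds item k) has E[I_k] = 1/n, so by linearity E[X] = n * (1/n) = 1, even though the indicators are dependent. A brute-force check over all permutations of a small n (n = 5 is arbitrary):

```python
from itertools import permutations

# Expected number of fixed points of a uniform random permutation.
# Indicator I_k = 1 if position k holds item k; E[I_k] = 1/n, so
# linearity gives E[X] = sum_k E[I_k] = 1 for every n.
n = 5
perms = list(permutations(range(n)))
total_fixed = sum(
    sum(1 for k in range(n) if perm[k] == k)  # fixed points of perm
    for perm in perms
)
avg_fixed = total_fixed / len(perms)
print(avg_fixed)  # exactly 1.0
```

The point the outline makes is visible here: linearity needs no independence, only that the sum of the indicators counts the same thing as the random variable.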
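The variance shortcut Var(X) = E(X²) - E(X)² from Chapter 3 is easy to verify on a fair six-sided die (a standard example, not from the course notes):

```python
# Var(X) = E(X^2) - E(X)^2 for a fair six-sided die.
faces = range(1, 7)
ex = sum(x for x in faces) / 6        # E(X)   = 3.5
ex2 = sum(x * x for x in faces) / 6   # E(X^2) = 91/6
var = ex2 - ex**2                     # 91/6 - 49/4 = 35/12
print(var)  # about 2.917
```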
This note was uploaded on 09/14/2009 for the course STAT 134 taught by Professor Aldous during the Fall '03 term at University of California, Berkeley.