# pbs134review2 - Stat 134 Study Group Final Review (Faculty: Prof. Ani Adhikari)


Stat 134 Study Group
Faculty: Prof. Ani Adhikari
Study Group Leader: Prateek Bhakta, [email protected]
Study Group Location: MW 10-11am, 115 Chávez
Community through Academics and Leadership

## Final Review

### Chapter 1

Key things to remember from chapter 1:

1. The basics of probability.
2. Proportions of numbers / areas.
3. Conditional probability.
   a. Bayes' rule / Bayes trees.
   b. Information you are given can sometimes be applied directly, without Bayes' rule.
4. Counting.
   a. Deliberate over/under-counting (with compensation) can solve some problems easily, but watch out for accidental over/under-counting when you don't want it.

### Chapter 2

Key things to remember from chapter 2:

1. Canonical distributions:
   a. Bernoulli(p): a trial or event that succeeds with probability p. The basic building block.
   b. Binomial(n, p): for n independent Bernoulli(p) trials, what is P(k total successes)?
   c. Geometric(p): how many Bernoulli(p) trials do we do until a success?
   d. Normal(μ, σ²): approximates binomials (and more!).
   e. Poisson(μ): approximates binomials when n is large and μ = np is small (< 3) (and more!).
2. Normal approximation to the binomial:
   a. Use the continuity correction for binomials (it won't matter that much).
3. More counting. Counting is hard! The hypergeometric and multinomial distributions are commonly applied ideas in counting things. More advanced counting techniques include recurrence relations and the "stars and bars" trick.

### Chapter 3

Key things to remember from chapter 3:

1. The idea and concept of a random variable: it is a number assigned to an outcome.
2. Expected value. Three main ways to find expected value:
   a. The definition (only use for simple things or certain series problems).
   b. Tail sums (usually more useful for mins, maxes, and some infinite series).
   c. Indicators (use cleverly to solve most problems).
   d. Expected value is always linear. This is why indicators work: just ensure that the sum of your indicators counts the same thing your random variable is counting.
   e. E(XY) = E(X)E(Y) when X and Y are independent.
3. Variance and the normal approximation:
   a. Remember Var(X) = E(X²) − (E(X))².
   b. Sometimes you use indicator expansions in indicator-based problems to find E(X²). Keep in mind what it means to take the product of indicators.
   c. Remember Var(X + Y) = Var(X) + Var(Y) if X and Y are independent.
   d. Use the normal approximation when dealing with the sum of many independent, identically distributed random variables. To do this, you'll need to know the mean and SD.
4. Discrete distributions.

   a. Use the concepts of probability to find distributions of weirder random variables.
   b. Some common random variables that you analyze: Geometric, Negative Binomial, Poisson.
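Chapter 2's note on the continuity correction and Chapter 3's normal-approximation recipe (find the mean and SD, then use the normal curve) can be checked numerically. Here is a minimal sketch, using only the Python standard library; the function names and the example numbers (n = 100, p = 0.5) are illustrative choices, not from the review sheet:

```python
import math

def binom_cdf(n, p, k):
    # Exact P(X <= k) for X ~ Binomial(n, p).
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

def normal_cdf(x, mu, sigma):
    # Phi((x - mu) / sigma), written with the error function.
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def binom_cdf_normal_approx(n, p, k):
    # Normal approximation with the continuity correction:
    # P(X <= k) ~ Phi((k + 0.5 - np) / sqrt(np(1-p))).
    mu = n * p
    sigma = math.sqrt(n * p * (1 - p))
    return normal_cdf(k + 0.5, mu, sigma)

exact = binom_cdf(100, 0.5, 55)
approx = binom_cdf_normal_approx(100, 0.5, 55)
```

For these values the two answers agree to about two decimal places, which is the sense in which the correction "won't matter that much" for large n.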
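The indicator method and linearity of expectation from Chapter 3 (items 2c-2d) can be illustrated with the classic matching problem; this worked example is an illustration I am adding, not one taken from the sheet. If X counts the fixed points of a random permutation of n cards, write X = I_1 + ... + I_n, where I_j indicates that card j lands in position j. Each E(I_j) = 1/n, so linearity gives E(X) = n · (1/n) = 1 for every n, and a quick simulation confirms it:

```python
import random

def matches(perm):
    # Count fixed points of a permutation given as a list of card labels.
    return sum(1 for pos, card in enumerate(perm) if pos == card)

def expected_matches_by_indicators(n):
    # E(X) = sum of E(I_j) = n * (1/n) = 1, by linearity of expectation.
    return n * (1 / n)

def simulated_mean_matches(n, trials=20000):
    # Monte Carlo check of the indicator calculation.
    total = 0
    for _ in range(trials):
        perm = list(range(n))
        random.shuffle(perm)
        total += matches(perm)
    return total / trials
```

Note how the indicators were chosen so that their sum counts exactly what X counts; that is the condition item 2d warns about.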

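Chapter 1's Bayes' rule (item 3a) amounts to one line of arithmetic: weight each hypothesis's prior by the likelihood of the data, then renormalize. Here is a minimal sketch with a hypothetical two-coin example of my own (not from the review sheet): pick one of two coins at random, one fair and one two-headed, flip it once, and observe heads.

```python
def bayes_posterior(prior, likelihoods):
    # P(H_i | data) = P(data | H_i) P(H_i) / sum_j P(data | H_j) P(H_j)
    joint = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Hypotheses: [fair coin, two-headed coin]; data: one flip came up heads.
posterior = bayes_posterior(prior=[0.5, 0.5], likelihoods=[0.5, 1.0])
# posterior[1] = P(two-headed | heads) = (0.5 * 1.0) / (0.5 * 0.5 + 0.5 * 1.0) = 2/3
```

The same computation is what a Bayes tree organizes visually: each root-to-leaf path is one term of the denominator.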