Waterloo SOS STAT 230 Final Review Package
Prepared by Arin Goswami, Fall 2010

Table of Contents
Important Formulas
Chapter 3 – Counting Techniques
Chapter 4 – Probability Rules and Conditional Probability
Chapter 5 – Discrete Distributions
Chapter 7 – Expectation, Averages and Variability
Chapter 8 – Discrete Multivariate Distributions
Chapter 9 – Continuous Distributions
Extra Practice for Final
Important formulas
1. n^(r) = n!/(n − r)! = n(n − 1)(n − 2)…(n − r + 1)
18. If X and Y are independent, then Cov(X, Y) = 0.
19. The correlation coefficient of X and Y is ρ(X, Y) = Cov(X, Y)/(σ_X σ_Y).
22. If we have n identically distributed random variables X_1, …, X_n and a_i = 1 for all i = 1, …, n, then Var(X_1 + … + X_n) = n Var(X_1) + n(n − 1) Cov(X_1, X_2).
(The remaining formulas in this list did not survive extraction; each is restated in its chapter section below.)
Chapter 3 – Counting Techniques
Definitions

The Addition Rule: If we can do job A in p ways and job B in q ways, then we can do either A or B (but not both) in p + q ways.

The Multiplication Rule: If we can do job A in p ways, and for each of these ways we can do job B in q ways, then we can do both A and B in p × q ways.

Permutation Rules:
a) The number of ways to arrange n distinct objects in a row is n! = n(n − 1)(n − 2)…(1).
b) The number of ways to arrange r objects selected from n distinct objects is n^(r) = n!/(n − r)! = n(n − 1)(n − 2)…(n − r + 1).
c) The number of distinct arrangements of n objects when n1 are alike of one type, n2 alike of a second type, …, nk alike of a kth type (where n1 + n2 + … + nk = n) is n!/(n1! n2! … nk!).

Combination Rules: The number of ways to choose r objects from n is denoted by C(n, r) = n!/(r!(n − r)!). The r! in this formula removes the redundant orderings counted by the corresponding permutation.

*Please do not confuse permutations and combinations. Combinations do not keep track of order, so a combination count is always less than or equal to its corresponding permutation count.
Chapter 3 – Distinguishing Permutations and Combinations

Is the following a permutation or a combination?
1. A group with 30 members chooses a President, Vice President, Treasurer and Secretary.
Permutation – order does matter, because you care which person is chosen for which office.

2. A group with 30 members chooses 4 members to serve on a public relations committee.
Combination – order does NOT matter; you don't care which member was chosen first or last.

3. In a race with 9 horses, how many ways are there for any three horses to finish in win, place or show?
Permutation – order does matter, because you care which horse wins, which comes in second and which comes in third.

4. A pizza restaurant stocks 11 different toppings. How many ways are there to order a 3-topping pizza?
Combination – you care whether you got the three toppings you wanted, but you don't care in which order the pizza guy put the ingredients on the dough.
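These four counts can be checked directly. A short Python sketch (Python is my choice of illustration language, not part of the original notes) using the standard-library math.perm and math.comb:

```python
import math

# 1) Choosing 4 distinct officers from 30 members: order matters
officers = math.perm(30, 4)      # 30 * 29 * 28 * 27

# 2) Choosing a 4-person committee from 30 members: order ignored
committee = math.comb(30, 4)

# The r! factor removes the redundant orderings of the permutation:
assert committee == officers // math.factorial(4)

# 3) Win/place/show among 9 horses
podium = math.perm(9, 3)         # 9 * 8 * 7

# 4) A 3-topping pizza from 11 toppings
pizzas = math.comb(11, 3)

print(officers, committee, podium, pizzas)
```

As the assertion shows, each combination count is the matching permutation count divided by r!.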
Chapter 3 – Counting Technique Examples

1. True or False: If A and B are mutually exclusive events with P(A) > 0, P(B) > 0, then P(AB) = P(A)P(B).
FALSE. If A and B are mutually exclusive, P(AB) = 0, which cannot equal P(A)P(B) > 0.

2. Two boys and two girls randomly select one of five sections of STAT 230. Find the probability of the following events.

a) All four students pick the same section.
Number of ways for the students to choose sections: 5^4 = 625. Number of ways for all to choose the same one: 5. So P(all same) = 5/625 = 1/125 = 0.008.

b) All four students pick different sections.
Number of ways to all choose different sections: 5 × 4 × 3 × 2 = 120. So P(all different) = 120/625 = 0.192.

c) No student picks section one.
Number of ways if section one is eliminated: 4^4 = 256. So P(no section one) = 256/625 = 0.4096.

d) The two boys pick different sections and the two girls pick the same section (which may be the same as one of the boys').
For the boys, there are 5 × 4 = 20 ways to have them in different sections. The girls can both be in any one of the five sections. So the number of ways this can occur is 20 × 5 = 100, and P(boys different, girls same) = 100/625 = 4/25 = 0.16.

e) If two sections are cancelled, what is the probability of all the students selecting different sections?
If two sections are cancelled, there are only three sections and four students. So there is no way they can all be in different sections! The probability is 0.
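Because the sample space here is small (5^4 = 625 equally likely outcomes), all four probabilities can be verified by brute-force enumeration. A Python sketch (my own check, not part of the original package), treating the first two coordinates as the boys and the last two as the girls:

```python
from itertools import product

sections = range(1, 6)
outcomes = list(product(sections, repeat=4))   # 5**4 = 625 equally likely picks
n = len(outcomes)

p_same = sum(len(set(o)) == 1 for o in outcomes) / n      # all four alike
p_diff = sum(len(set(o)) == 4 for o in outcomes) / n      # all four different
p_no_one = sum(1 not in o for o in outcomes) / n          # nobody in section 1
p_bd_gs = sum(o[0] != o[1] and o[2] == o[3] for o in outcomes) / n

print(p_same, p_diff, p_no_one, p_bd_gs)   # 0.008 0.192 0.4096 0.16
```

Enumeration like this is a handy habit for checking counting arguments whenever the sample space has only a few hundred points.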
Chapter 4 – Probability Rules and Conditional Probability
Rules
1) 0 ≤ P(A) ≤ 1 for any event A, and P(S) = 1 for the sample space S.
2) P(A̅) = 1 − P(A).
3) P(A ∪ B) = P(A) + P(B) − P(AB); if A and B are mutually exclusive, P(A ∪ B) = P(A) + P(B).
4) The conditional probability of A given B is P(A | B) = P(AB)/P(B), provided P(B) > 0.
5) Product rule: P(AB) = P(A | B)P(B) = P(B | A)P(A).
6) A and B are independent if and only if P(AB) = P(A)P(B).
Chapter 4 Examples

1. If A and B are events with P(A) = 0.3, P(B̄) = 0.6 and P(AB̄) = 0.2, then
a) P(B) = 1 − 0.6 = 0.4
b) P(AB) = P(A) − P(AB̄) = 0.3 − 0.2 = 0.1, so P(A∪B) = P(A) + P(B) − P(AB) = 0.3 + 0.4 − 0.1 = 0.6
c) P(A | B) = P(AB)/P(B) = 0.1/0.4 = 0.25
d) A and B are independent: FALSE, since P(A)P(B) = 0.12 ≠ 0.1 = P(AB).

2. Suppose A, B and C are three events such that A and B are mutually exclusive, and C is independent of both A and B. Also, P(A) = 0.3, P(B) = 0.5 and P(C) = 0.4. Calculate
a) P(A∪B) = 0.3 + 0.5 = 0.8
b) P(AC) = 0.3 × 0.4 = 0.12
c) P(A∪C) = 0.3 + 0.4 − 0.12 = 0.58
d) P(A∪C | B) = P(AB ∪ CB)/P(B) = (0 + 0.5 × 0.4 − 0)/0.5 = 0.4
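Example 2 can be checked by building the atoms of the sample space explicitly: which of the mutually exclusive events A, B (or neither) occurs, crossed with whether the independent event C occurs. A Python sketch (my own construction; the 0.3/0.5/0.4 values are from the example):

```python
# Mutually exclusive alternatives A, B, or neither, with an independent event C.
ab_probs = {"A": 0.3, "B": 0.5, "none": 0.2}
c_probs = {True: 0.4, False: 0.6}

# Independence of C lets us multiply to get each atom's probability.
atoms = {(ab, c): p * q
         for ab, p in ab_probs.items()
         for c, q in c_probs.items()}

p_a_union_c = sum(p for (ab, c), p in atoms.items() if ab == "A" or c)
p_b = sum(p for (ab, c), p in atoms.items() if ab == "B")
p_aucb = sum(p for (ab, c), p in atoms.items() if ab == "B" and (ab == "A" or c))

print(round(p_a_union_c, 2), round(p_aucb / p_b, 2))   # 0.58 0.4
```

This reproduces both P(A∪C) = 0.58 and P(A∪C | B) = 0.4 from parts (c) and (d).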
Chapter 5 – Discrete Distributions
Definitions

1. Random variable – a function that assigns a real number to each point in a sample space S.
The probability function (p.f.) of a discrete random variable X is the function f(x) = P(X = x).
The cumulative distribution function (c.d.f.) of a random variable X is the function F(x) = P(X ≤ x), defined for all real x.

2. The following are the properties of a cdf F(x):
a. F(x) is non-decreasing in x.
b. 0 ≤ F(x) ≤ 1 for all real x.
c. F(x) → 0 as x → −∞ and F(x) → 1 as x → ∞.

3. Distributions
A) Discrete Uniform Distribution
If X takes on values a, a+1, a+2, …, b with all values being equally likely, then X has a discrete uniform distribution on [a, b], with probability function
f(x) = 1/(b − a + 1), for x = a, a+1, …, b.

B) Hypergeometric Distribution
We pick n objects at random without replacement from a collection of N items, r of which are "successes", and X is the number of successes among the n objects picked. Then X has a hypergeometric distribution, with probability function
f(x) = C(r, x) C(N − r, n − x) / C(N, n).

Intuition:
Numerator: We use the counting techniques from Chapter 3. We have r "success items" within the collection of N items. We select x objects out of the r successes, and select the remaining n − x objects out of the N − r "failure items".
Denominator: If we don't impose any restrictions, we can choose all n objects from any of the N items available, in C(N, n) ways.

C) Binomial Distribution
Suppose we conduct an experiment that results in either Success or Failure (a Bernoulli r.v.). Let the probability of success be p and the probability of failure be 1 − p. We then repeat the experiment n independent times. Let X be the number of successes obtained. Then X has a binomial distribution, with probability function
f(x) = C(n, x) p^x (1 − p)^(n − x), for x = 0, 1, …, n.

Intuition: The x successes can happen in any of the n trials, and the x successes and n − x failures are like repeated objects. Thus, by the counting techniques from Chapter 3, we can arrange them in C(n, x) ways. Since each one of the n trials is independent, by the multiplication rule from Chapter 4 we can simply multiply all the probabilities together.

D) Geometric Distribution
Suppose we conduct an experiment that can either result in success (with probability p) or failure (with probability 1 − p). We keep repeating the experiment independently until we obtain a success. Let X be the number of failures obtained before the first success. Then X has a geometric distribution, with probability function
f(x) = p(1 − p)^x, for x = 0, 1, 2, …

Intuition: We use the multiplication rule from Chapter 4 for independent events. We have x failures before obtaining the first success, so we multiply the probabilities for all of these events together.

E) Negative Binomial Distribution (a generalization of the geometric distribution)
Suppose we conduct an experiment which results in success (probability p) or failure (probability 1 − p). We keep repeating the experiment independently until we obtain k successes. Let X be the number of failures obtained before the kth success. Then X has a negative binomial distribution, with probability function
f(x) = C(x + k − 1, x) p^k (1 − p)^x, for x = 0, 1, 2, …

Intuition: When we have x failures before obtaining the kth success, we have a total of x + k trials. Note that the x failures and the first k − 1 successes can happen in any order within the first x + k − 1 trials (the last trial must be the kth success).
Thus, by the counting techniques from Chapter 3, we can arrange them in C(x + k − 1, x) ways. Since each one of the x + k trials is independent, by the multiplication rule in Chapter 4 we can simply multiply the corresponding probabilities.

F) Poisson Distribution
There are two ways to derive a Poisson distribution: one using the binomial approximation, which we will see later, and another using the conditions of a Poisson process. We will see the latter first.

The 3 conditions of a Poisson process occurring randomly over time (or space):
a. Independence: the numbers of events in non-overlapping intervals are independent.
b. Individuality: for a very small time interval, the probability of 2 events occurring in the same interval is close to 0. In other words, no more than 1 event can take place at one exact point in time.
c. Homogeneity/Uniformity: events occur at a uniform rate λ.

Suppose an event of some type occurs according to a Poisson process as defined above. Let X be the number of event occurrences in a time period of length t. Then X has a Poisson distribution with mean µ = λt:
f(x) = e^(−µ) µ^x / x!, for x = 0, 1, 2, …

Note: By the uniformity argument, if the mean number of events over a time period of length t1 is µ, then the mean over a time period of length t2 is µ t2/t1.

G) Approximations
a. Approximation of the Hypergeometric distribution using the Binomial distribution.
Recall, if X has a hypergeometric distribution, f(x) = C(r, x) C(N − r, n − x) / C(N, n). If N and r are large relative to n, we can approximate this using a binomial distribution with p = r/N. Then we have
f(x) ≈ C(n, x) p^x (1 − p)^(n − x).

b. Approximation of the Binomial distribution using the Poisson distribution.
Suppose that X has a Binomial(n, p) distribution. If we let n → ∞ and p → 0 while keeping np fixed at some µ, then we can use a Poisson distribution with µ = np to approximate X. Thus, we have
f(x) ≈ e^(−µ) µ^x / x!
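The quality of approximation (b) is easy to see numerically. A small Python comparison (my own illustration; n = 2000 and p = 0.0015 are arbitrary "large n, small p" values with µ = np = 3 held fixed):

```python
from math import comb, exp, factorial

def binom_pmf(n, p, x):
    # Binomial probability function: C(n, x) p^x (1-p)^(n-x)
    return comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(mu, x):
    # Poisson probability function: e^(-mu) mu^x / x!
    return exp(-mu) * mu**x / factorial(x)

n, p = 2000, 0.0015
mu = n * p   # 3.0

for x in range(6):
    print(x, round(binom_pmf(n, p, x), 5), round(poisson_pmf(mu, x), 5))
```

The two columns agree to about three decimal places; the agreement improves as n grows and p shrinks with np fixed.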
Example 1
In some "random digit dialling" surveys, a computer phones randomly selected telephone numbers. However, not all numbers are "active" (belong to a telephone account), and they may belong to businesses as well as to individuals or residences. Suppose that for a given large set of telephone numbers, 52% are active residential or individual numbers. We will call these "personal" numbers. Suppose that we wish to interview (over the phone) 1000 persons in a survey.

(a) Suppose that the probability a call to a personal number is answered is 0.75, and that the probability the person answering agrees to be interviewed is 0.55. Give the probability distribution for X, the number of calls needed to obtain 1000 interviews.

(b) Suppose instead that 3200 randomly selected numbers were dialled. Give the probability distribution for Y, the number of interviews obtained, and find an expression for a suitable approximation of P(Y = 1000).

Solutions:
(a) From the question, it seems that we will need to use the negative binomial distribution. However, we first need to find the probability of success, in other words P(interview). To get an interview, we need one exact sequence of events: we first need to reach a personal number; the call to the personal number must be answered; and the person answering needs to agree to be interviewed. We define the following events:
Pers => reached a personal number
A => call was answered
I => agrees to be interviewed

Thus, p = P(interview) = P(Pers) P(A | Pers) P(I | A, Pers) = 0.52 × 0.75 × 0.55 = 0.2145.

Now we can use the negative binomial distribution. However, note that X is the total number of calls needed before obtaining 1000 interviews, not the number of failures. Hence, if X = x, then the number of failures is x − 1000. Thus,
P(X = x) = C(x − 1, 999) p^1000 (1 − p)^(x − 1000), for x = 1000, 1001, …

(b) Since the event of getting an interview is either a success or a failure (Bernoulli) and we have 3200 independent repetitions, Y has a Binomial(3200, 0.2145) distribution. Thus,
P(Y = 1000) = C(3200, 1000) (0.2145)^1000 (0.7855)^2200,
which can be approximated by a normal distribution with mean np = 686.4 and variance np(1 − p) ≈ 539.2, using a continuity correction.
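As a sanity check on part (a): for a negative binomial count of total calls, the expected number of calls is k/p = 1000/0.2145 ≈ 4662. A rough Monte Carlo sketch in Python (my own illustration, not part of the original solution):

```python
import random

p = 0.52 * 0.75 * 0.55    # P(one call yields an interview) = 0.2145
target = 1000             # interviews required

random.seed(1)

def calls_needed():
    # Dial until the 1000th interview is obtained; return total calls made.
    calls = interviews = 0
    while interviews < target:
        calls += 1
        if random.random() < p:
            interviews += 1
    return calls

reps = 200
avg = sum(calls_needed() for _ in range(reps)) / reps
print(round(avg), round(target / p))   # simulated mean vs exact mean ≈ 4662
```

The simulated average lands within a few standard errors of 1000/p, consistent with the negative binomial model.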
Example 2
a) Consider a random variable X taking positive integer values with cumulative distribution function F(x) = 1 − 1/x, for x = 1, 2, 3, … Then
P(X ≥ 3) = 1 − F(2) = 1 − (1 − 1/2) = 1/2

b) A random variable X has probability function f(x) = , x = 1, 2, … Find the cumulative distribution function.
F(x) = 1 − , for x = 1, 2, …
F(x) = 0, if x < 1
Example 3
The probability function (p.f.) of a random variable X is given by f(x) = kx, for x = 1, 2, …, 9.

a) Find k.
f(1) + f(2) + … + f(9) = 1
k(1 + 2 + … + 9) = 1
45k = 1
k = 1/45

b) Find F(x), the cumulative distribution function (c.d.f.) of X, for all values of x.
F(x) = f(1) + … + f(x) = (1 + … + x)/45 = x(x + 1)/90 = (x² + x)/90, for x = 1, 2, …, 9
F(x) = 0, for x < 1
F(x) = 1, for x > 9

c) Sketch the probability function (histogram) and the cumulative distribution function (graph) of X for 0 ≤ x ≤ 5.
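Both answers are easy to verify numerically. A short Python check (my own, not part of the original notes) that k = 1/45 normalizes the p.f. and that the running sum matches the closed form x(x + 1)/90:

```python
# p.f. f(x) = x/45 for x = 1, ..., 9
f = {x: x / 45 for x in range(1, 10)}
assert abs(sum(f.values()) - 1) < 1e-12   # k = 1/45 makes probabilities sum to 1

def F(x):
    # c.d.f. by direct summation of the p.f.
    return sum(f[t] for t in range(1, x + 1))

# Compare with the closed form F(x) = x(x + 1)/90
for x in range(1, 10):
    assert abs(F(x) - x * (x + 1) / 90) < 1e-12

print(F(9))   # 1.0 (approximately, up to floating-point rounding)
```

If the closed form were wrong in any term, one of the assertions would fail.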
Chapter 7 – Expectation, Averages and Variability
Definitions:

1. The median of a sample is a value such that half the results are below it and half the results are above it, when the results are arranged in numerical order.

2. The mode of a sample is the value which occurs most often. It is possible to have more than 1 mode in a sample.

3. The expected value of a discrete random variable X with probability function f(x) is
E(X) = µ = Σ_x x f(x).

4. Suppose that the random variable X has probability function f(x). Then the expected value of some function g(X) of X is given by
E[g(X)] = Σ_x g(x) f(x).
Linearity of expectation: E(aX + b) = aE(X) + b for constants a and b.

5. The variance of a r.v. X is given by
Var(X) = σ² = E[(X − µ)²].
We also have the following two expressions for the variance, which are usually handier:
Var(X) = E(X²) − µ² and Var(X) = E[X(X − 1)] + µ − µ².

6. The standard deviation of a random variable X is σ = sqrt(Var(X)).

7. Means and variances of special discrete distributions:
a. Binomial distribution: if X is a Binomial(n, p) random variable, then E(X) = np and Var(X) = np(1 − p).
b. Poisson distribution: if Y is a Poisson(µ) random variable, then E(Y) = µ and Var(Y) = µ.

8. The moment generating function (m.g.f.) of a discrete r.v. X with p.f. f(x) is given by
M(t) = E(e^(tX)) = Σ_x e^(tx) f(x).

9. The moments of the distribution of a random variable X can be derived from its m.g.f. as follows:
E(X^r) = M^(r)(0).
In other words, the rth moment of X can be obtained by differentiating M(t) r times and evaluating the result at t = 0.
Example 1
A free casino game has six envelopes, each with cash amounts of 1, 2, 4, 8, 10 and 20 dollar(s). You are randomly given an envelope and receive the cash.

a) What are your expected winnings?
E(X) = (1 + 2 + 4 + 8 + 10 + 20)/6 = 7.5

b) Show that Var(X) = E[(X − µ)²] = E(X²) − µ².
Var(X) = E[(X − µ)²] = E(X² − 2µX + µ²) = E(X²) − 2µE(X) + µ² = E(X²) − 2µ² + µ² = E(X²) − µ²

c) Calculate the variance of your winnings.
E(X²) = (1 + 4 + 16 + 64 + 100 + 400)/6 = 97.5
Var(X) = 97.5 − (7.5)² = 97.5 − 56.25 = 41.25

d) Winnings from casino games are taxed at a rate of ln(x). What are your expected taxes?
E(ln X) = (ln 1 + ln 2 + … + ln 20)/6 = 1.5762

e) For a one-time event, the casino squares all the cash amounts in the envelopes. What is your expected after-tax profit in this case?
The after-tax profit is X² − ln(X²), so
E[X² − ln(X²)] = E(X²) − 2E(ln X) = 97.5 − 2(1.5762) = 94.35

Example 2
A random variable X has a Poisson distribution with f(x) = e^(−µ)µ^x/x!, x = 0, 1, 2, … Let g(X) = . Find E[g(X)].
E[g(X)] = Σ_x g(x) e^(−µ)µ^x/x!
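The arithmetic in Example 1 is quick to verify. A Python check (my own, using only the six amounts given in the problem):

```python
from math import log

amounts = [1, 2, 4, 8, 10, 20]
n = len(amounts)

ex = sum(amounts) / n                     # E(X) = 7.5
ex2 = sum(a * a for a in amounts) / n     # E(X^2) = 97.5
var = ex2 - ex ** 2                       # Var(X) = E(X^2) - E(X)^2 = 41.25
e_log = sum(log(a) for a in amounts) / n  # expected tax E(ln X) ≈ 1.5762

print(ex, var, round(e_log, 4))
```

Each quantity is a plain average over six equally likely values, which is exactly what "randomly given an envelope" means here.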
Example 3
Suppose that X is a discrete random variable with an m.g.f. M(t) as follows:

(a) Find E(X) and Var(X).
(b) Find

Solutions
(a) Using the moment relations from definition 9: E(X) = M′(0) and Var(X) = M″(0) − [M′(0)]².
(b)
Chapter 8 – Discrete Multivariate Distributions
Definitions

1. Suppose that there are two random variables X and Y. We define f(x, y), the joint probability function of (X, Y), as
f(x, y) = P(X = x, Y = y).
And as usual, Σ_x Σ_y f(x, y) = 1.

2. We define the marginal distributions of X and Y as
f1(x) = Σ_y f(x, y) and f2(y) = Σ_x f(x, y).
In words, this means that to find the marginal distribution of X, sum the joint p.f. over all values of Y; and similarly, to find the marginal distribution of Y, sum the joint p.f. over all values of X.

3. Let X1, X2, …, Xn be some collection of random variables. We say that X1, X2, …, Xn are independent if and only if
f(x1, x2, …, xn) = f1(x1) f2(x2) … fn(xn) for all (x1, x2, …, xn).
Note: This is similar to the definition of independent events we saw in Chapter 4.

4. The conditional probability function of X given Y = y is
f(x | y) = f(x, y)/f2(y).
Similarly, f(y | x) = f(x, y)/f1(x).

5. If we have a new variable U, such that U = g(X, Y), then the probability function for U is
f_U(u) = Σ f(x, y), where the sum is over all (x, y) with g(x, y) = u.

6. The multinomial distribution: similar to the binomial case, we conduct an experiment which has k different outcomes (instead of just two), with probabilities p1, p2, …, pk (p1 + p2 + … + pk = 1). This experiment is repeated independently n times. Let X1 be the number of times outcome 1 occurs, X2 the number of times outcome 2 occurs, …, Xk the number of times outcome k occurs. Then (X1, X2, …, Xk) has a multinomial distribution with the joint probability function
f(x1, x2, …, xk) = [n!/(x1! x2! … xk!)] p1^x1 p2^x2 … pk^xk, where x1 + x2 + … + xk = n.

Intuition: Think about the multinomial in the same way as you did about the binomial distribution. There are n objects, and we have k repeating types among the n objects. There are xi objects of the ith repeating type, so by our repetition rule for permutations we can arrange
these objects in n!/(x1! x2! … xk!) ways. Due to independence, we can simply multiply the probability of each type occurring its corresponding number of times.

7. The expected value of a function g(X, Y) of discrete r.v.'s X and Y is
E[g(X, Y)] = Σ_x Σ_y g(x, y) f(x, y).
This can be extended beyond two variables X and Y.

8. Property of multivariate expectation (linearity):
E[a g1(X, Y) + b g2(X, Y)] = a E[g1(X, Y)] + b E[g2(X, Y)].

9. The covariance of X and Y, denoted Cov(X, Y), is
Cov(X, Y) = E[(X − µX)(Y − µY)].
Note: A handier formula for covariance is Cov(X, Y) = E(XY) − E(X)E(Y).

10. If X and Y are independent, then Cov(X, Y) = 0.

11. Suppose X and Y are independent random variables. Then, if g1 and g2 are any two functions, E[g1(X) g2(Y)] = E[g1(X)] E[g2(Y)].

12. The correlation coefficient of X and Y is
ρ = Cov(X, Y)/(σX σY).
Note: This is a measure of the strength of the relationship between X and Y that lies in the interval [−1, 1].

13. Properties of covariances:
a. Cov(X, X) = Var(X).
b. Cov(aX + bY, cU + dV) = ac Cov(X, U) + ad Cov(X, V) + bc Cov(Y, U) + bd Cov(Y, V).
Intuition: Think of this as multiplying the two terms (aX + bY) and (cU + dV) together (which is exactly how it is derived using the definition).

14. Variance of a linear combination:
Var(aX + bY) = a² Var(X) + b² Var(Y) + 2ab Cov(X, Y).
In fact, more generally, if we have n r.v.'s X1, X2, …, Xn,
Var(Σ ai Xi) = Σ ai² Var(Xi) + 2 Σ_{i<j} ai aj Cov(Xi, Xj).
If we have n identically distributed random variables and ai = 1 for all i = 1, …, n, then
Var(X1 + … + Xn) = n Var(X1) + n(n − 1) Cov(X1, X2).
Note: This general formula is very useful in problems involving indicator random variables. If all n random variables are independent, then
Var(X1 + … + Xn) = n Var(X1).
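The multinomial joint probability function from definition 6 can be sketched directly. A Python version (my own illustration; the counts (14, 4, 2) and probabilities (0.7, 0.2, 0.1) are borrowed from the cat blood-type setup of Example 2 below, and are just one plausible sample outcome):

```python
from math import factorial

def multinomial_pmf(counts, probs):
    # f(x1,...,xk) = n!/(x1! ... xk!) * p1^x1 * ... * pk^xk
    n = sum(counts)
    coef = factorial(n)
    for x in counts:
        coef //= factorial(x)      # sequential division stays exact
    prob = 1.0
    for x, p in zip(counts, probs):
        prob *= p ** x
    return coef * prob

# 20 cats sampled with type probabilities A: 0.7, B: 0.2, AB: 0.1;
# probability of observing exactly 14 A's, 4 B's, 2 AB's:
print(round(multinomial_pmf((14, 4, 2), (0.7, 0.2, 0.1)), 4))
```

With k = 2 this reduces to the binomial probability function, which is a quick way to test such a helper.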
Example 1
X and Y are two random variables that take on integer values from 0 to 2. You are given the following information about the distribution of X and Y:

(a) What is the correlation between X and Y?
(b) Suppose that you were not given E(Y), but were instead told that X and Y are independent. You are also given two additional random variables U and V, such that U = 5X and V = 3X − 2Y². Calculate Cov(U, V).

Solutions
(a) We want to draw a table for the joint distribution of X and Y. But first, we set P(X = 0) = p, giving P(X = 0, Y = 1) = 0.4p.

Using the given information, we fill in the following table:

          X = 0         X = 1         X = 2         Row sum
Y = 0     0.12          0.27 − 0.4p   0.2           0.59 − 0.4p
Y = 1     0.4p          0.23          0.13 − 0.4p   0.36
Y = 2     0.6p − 0.12   0.4p          0.17 − 0.6p   0.4p + 0.05
Col sum   p             0.5           0.5 − p       1

Note: The remaining entries were calculated by first using the fact that each row and column must add up to its marginal total, and then by simple subtractions and additions. Make sure that you can do them.
Filling in the table for p = 0.25, we have the following:

          X = 0   X = 1   X = 2   Row sum
Y = 0     0.12    0.17    0.2     0.49
Y = 1     0.1     0.23    0.03    0.36
Y = 2     0.03    0.1     0.02    0.15
Col sum   0.25    0.5     0.25    1

Now, recall that the correlation between X and Y is given by ρ = Cov(X, Y)/(σX σY). From the table, we can calculate Var(X), Var(Y) and E(XY), and hence Cov(X, Y) = E(XY) − E(X)E(Y) and ρ.

(b) From the first table we made in part (a), using the independence of X and Y, we obtain p = 0.2568.

Plugging this into the table in part (a), we have:

          X = 0     X = 1     X = 2     Row sum
Y = 0     0.12      0.16728   0.2       0.48728
Y = 1     0.10272   0.23      0.02728   0.36
Y = 2     0.03408   0.10272   0.01592   0.15272
Col sum   0.2568    0.5       0.2432    1

Now, Cov(U, V) = Cov(5X, 3X − 2Y²) = 15 Cov(X, X) − 10 Cov(X, Y²).
Cov(X, X) = Var(X) = E(X²) − [E(X)]². Using the same method as in part (a), we get Var(X) = 0.4998.
Cov(X, Y²) = E(XY²) − E(X)E(Y²) = 0.8228 − (0.9864)(0.97088) = −0.1349. (Verify this.)
Thus, Cov(U, V) = 15(0.4998) − 10(−0.1349) = 8.846.
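Given any joint table like the p = 0.25 one from part (a), the covariance and correlation computations are mechanical. A Python sketch (my own check; the nine cell probabilities are taken from that table):

```python
# Joint p.f. from the p = 0.25 table: keys are (x, y)
f = {(0, 0): 0.12, (1, 0): 0.17, (2, 0): 0.20,
     (0, 1): 0.10, (1, 1): 0.23, (2, 1): 0.03,
     (0, 2): 0.03, (1, 2): 0.10, (2, 2): 0.02}
assert abs(sum(f.values()) - 1) < 1e-12     # a valid joint p.f. sums to 1

ex = sum(x * p for (x, y), p in f.items())
ey = sum(y * p for (x, y), p in f.items())
exy = sum(x * y * p for (x, y), p in f.items())
vx = sum(x * x * p for (x, y), p in f.items()) - ex ** 2
vy = sum(y * y * p for (x, y), p in f.items()) - ey ** 2

cov = exy - ex * ey                 # Cov(X, Y) = E(XY) - E(X)E(Y)
rho = cov / (vx * vy) ** 0.5        # correlation coefficient
print(round(cov, 4), round(rho, 4))
```

For this table, Cov(X, Y) = −0.09 and ρ ≈ −0.176: a weak negative relationship.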
Example 2
The proportions of cats with blood types A, B and AB in a large population are 0.7, 0.2 and 0.1 respectively. Let X1, X2, X3 denote the frequencies of these three types in a random sample of size 20 taken from the population. Find the conditional distribution of X2 given X1 = 12.

Given X1 = 12, the remaining 20 − 12 = 8 cats each have type B with probability 0.2/(0.2 + 0.1) = 2/3 and type AB with probability 1/3. So for x2 = 0, 1, …, 8,
P(X2 = x2 | X1 = 12) = C(8, x2) (2/3)^x2 (1/3)^(8 − x2);
that is, X2 given X1 = 12 has a Binomial(8, 2/3) distribution.
Example 3
Assume random variables X and Y have the joint probability function given below.

f(x, y)   y = 0   y = 1   y = 2
x = 0     0.20    0       0.05
x = 1     0.30    0       0.20
x = 2     0       0       0.25

a. Find the marginal probability function of X.
f(x) = 0.25, 0.5, 0.25 for x = 0, 1, 2

b. Find Cov(X, Y).
E(X) = 1, E(Y) = 1, E(XY) = 2 × 0.2 + 4 × 0.25 = 1.4
So Cov(X, Y) = E(XY) − E(X)E(Y) = 1.4 − 1 = 0.4

c. Are X and Y independent? Why or why not?
They are not independent, since Cov(X, Y) ≠ 0 (independence would force the covariance to be 0).
Example 4
Suppose that a pond contains 100 fish, and 40 of them are salmon. One day, 30 random fish are caught from the pond. Let X be the number of salmon caught. What are E(X) and Var(X)? Use indicator random variables to solve this problem.

Solution: We first define indicator variables X1, X2, …, X30, where
Xi = 1 if the ith fish caught is a salmon, and Xi = 0 otherwise,
so that X = X1 + X2 + … + X30.

Also note that P(Xi = 1) = 40/100 = 0.4.
Justification: 40 out of the 100 fish in the pond are salmon.

Now, E(Xi) = E(Xi²) = 0.4, so Var(Xi) = 0.4 − 0.4² = 0.24.
Thus, E(X) = E(X1) + … + E(X30) = 30 × 0.4 = 12.

Also, Var(X) = Σ Var(Xi) + 2 Σ_{i<j} Cov(Xi, Xj).

Now, note that E(Xi Xj) = P(Xi = 1, Xj = 1) = (40/100)(39/99) for i ≠ j.
Justification: For fish i, we have a total of 100 fish and 40 salmon. If fish i is a salmon, then we have a total of 99 fish and 39 salmon left for fish j. This gives
Cov(Xi, Xj) = E(Xi Xj) − E(Xi)E(Xj) = (0.4)(39/99) − 0.16 ≈ −0.002424.

Hence,
Var(X) = 30(0.24) + 2 C(30, 2)(−0.002424) = 7.2 − 870(0.002424) ≈ 5.09.

*Note that the hypergeometric distribution would work for this question too: Var(X) = n(r/N)(1 − r/N)(N − n)/(N − 1) = 30(0.4)(0.6)(70/99) ≈ 5.09.
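The indicator-variable answer can be checked against a direct simulation of catching 30 fish without replacement. A Python sketch (my own check, using the 100/40/30 numbers from the example):

```python
import random

N, r, n = 100, 40, 30                # 100 fish, 40 salmon, catch 30
pond = [1] * r + [0] * (N - r)       # 1 = salmon, 0 = other fish

random.seed(2)
trials = 20000
draws = [sum(random.sample(pond, n)) for _ in range(trials)]  # without replacement

mean = sum(draws) / trials
var = sum((d - mean) ** 2 for d in draws) / trials

# Exact values from the indicator argument / hypergeometric formulas:
exact_mean = n * r / N                                        # 12
exact_var = n * (r / N) * (1 - r / N) * (N - n) / (N - 1)     # ≈ 5.09
print(round(mean, 2), round(var, 2), exact_mean, round(exact_var, 2))
```

random.sample draws without replacement, which is exactly the hypergeometric sampling scheme, so the simulated mean and variance should land near 12 and 5.09.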
Chapter 9 – Continuous Distributions

Note: In this chapter, we present relevant examples after each group of definitions, as there is a lot of material to cover.

Definitions
1. The probability density function (p.d.f.) f(x) for a continuous random variable X is the derivative
f(x) = dF(x)/dx,
where F(x) is the c.d.f. for X.

2. The following are properties of a p.d.f.:
a. f(x) ≥ 0 for all x.
b. The total area under f is 1: ∫ f(x) dx = 1 over the whole range of X.
c. P(a ≤ X ≤ b) = ∫ from a to b of f(x) dx = F(b) − F(a).
d. F(x) = ∫ from −∞ to x of f(t) dt.

3. When X is continuous, P(X = x) = 0 for any single point x, so P(a ≤ X ≤ b) = P(a < X < b): including or excluding the endpoints does not change the probability.

Example 1
Let f(x) (a piecewise function on 0 ≤ x ≤ 2) be a p.d.f. Find:
a) k
b) F(x)
c) P(1/4 < X < 5/4)
d) Var(X)

Solutions:
a) When finding the area of a region bounded by different functions, we split the integral into pieces; setting the total area equal to 1 determines k.
b) We start with the easy part, which we forget all too often: F(x) = 0 if x ≤ 0 and F(x) = 1 if x ≥ 2; in between, integrate f piece by piece.
c) P(1/4 < X < 5/4) = F(5/4) − F(1/4).
d) Var(X) = E(X²) − [E(X)]², with each expectation computed by integration.
Definitions (continued)
Special distributions:

4. Change of variables: when we know the p.d.f. or c.d.f. of a continuous random variable X and want to find the p.d.f. or c.d.f. of another random variable Y, we follow the steps below:
a. Write the c.d.f. of Y as a function of X.
b. Use F_X to evaluate this probability and obtain F_Y(y). Then, if you want the p.d.f., differentiate F_Y(y).
c. Find the range of values of y.

Example 2
Suppose that the random variable X has p.d.f. f(x). Let Y = 5X². Find the probability density function of Y.
Solution: Integrating f(x), we find F(x). Then F_Y(y) = P(5X² ≤ y), which we evaluate using F(x), and differentiating gives the p.d.f. of Y.
Note: We can obtain the range for y by plugging the range for x into the equation Y = 5X².

Definitions (continued)
5. Suppose that X takes values in some interval [a, b], with equal probability for subintervals of equal length. Then X has a continuous uniform distribution, and we write X ~ U(a, b). The probability density function of X is
f(x) = 1/(b − a), for a ≤ x ≤ b,
and the c.d.f. of X is
F(x) = (x − a)/(b − a), for a ≤ x ≤ b (0 below a, 1 above b).
The mean and variance of X are
E(X) = (a + b)/2 and Var(X) = (b − a)²/12.

6. If the continuous random variable X has an exponential distribution with mean θ, then its p.d.f. is of the form
f(x) = (1/θ) e^(−x/θ), for x > 0,
and its c.d.f. is of the form
F(x) = 1 − e^(−x/θ), for x > 0.
Its mean and variance are E(X) = θ and Var(X) = θ².

7. Generation of random variables: to generate a random value from a r.v. X which has c.d.f. F(x), generate a random number U between 0 and 1, and find X = F⁻¹(U).

Example 3
Suppose that the waiting times in a doctor's office follow an exponential distribution with mean 1/2 hour. The doctor will receive a complaint if a waiting time is greater than 2 hours. What is the probability that he receives his 3rd complaint after seeing the 10th patient?

Solution: The last sentence of the problem indicates that this involves a negative binomial distribution. However, we need to calculate p first. This is an exponential probability:
p = P(wait > 2) = 1 − F(2) = e^(−2/(1/2)) = e^(−4) ≈ 0.0183.

Definitions (continued)
8. If the continuous random variable X has a normal distribution with parameters µ and σ², written X ~ N(µ, σ²), then it has mean µ and variance σ². It's impossible to integrate the p.d.f. of this distribution by hand, so we use the standard normal distribution instead. To transform the normally distributed r.v. X into the standard normal r.v. Z, we use the transformation
Z = (X − µ)/σ.
In other words, if X ~ N(µ, σ²) and Z = (X − µ)/σ, then Z ~ N(0, 1). Note that P(X ≤ x) can also be written as P(Z ≤ (x − µ)/σ).

Notes on reading probabilities for Z from the standard normal table (for P(Z ≤ z)): only values for z > 0 are shown, so we use the fact that N(0, 1) is symmetric about 0.
a. If we want P(Z ≤ z) and z < 0, we use the fact that P(Z ≤ z) = P(Z ≥ −z) = 1 − P(Z ≤ −z).
b. Similarly, if we want P(Z ≥ z) and z < 0, we use P(Z ≥ z) = P(Z ≤ −z).
c. If we want P(−q ≤ Z ≤ p) where q, p > 0, then we use P(−q ≤ Z ≤ p) = P(Z ≤ p) − [1 − P(Z ≤ q)].
d.
If we have Q where P(Z ≤ z) = Q and Q < 0.5, then z < 0; we use P(Z ≤ −z) = 1 − Q to look up −z, then negate it. Use a similar approach if you have R where P(Z ≥ z) = R and R > 0.5.
e. If we have Q where P(−z ≤ Z ≤ z) = Q, use P(Z ≤ z) = Q + 0.5(1 − Q) = 0.5Q + 0.5.
I recommend that you understand these properties using a graph of a normal distribution rather than memorizing them.

9. Linear combinations of independent normal random variables:
a. If X ~ N(µ, σ²) and Y = aX + b, where a and b are constants, then Y ~ N(aµ + b, a²σ²).
b. If X1, …, Xn are independent with Xi ~ N(µi, σi²), then Σ ai Xi ~ N(Σ ai µi, Σ ai² σi²), where a1, …, an are constants.
c. Let X1, X2, …, Xn be independent N(µ, σ²) random variables. Then the sample mean X̄ = (X1 + … + Xn)/n ~ N(µ, σ²/n).

10. Normal Distribution in Approximations
Central Limit Theorem: Let X1, X2, …, Xn be independent, identically distributed r.v.'s with mean µ and variance σ². As the number of random variables n → ∞, the c.d.f.'s of
(X1 + … + Xn − nµ)/(σ√n) and (X̄ − µ)/(σ/√n)
approach the N(0, 1) c.d.f.

Notes
a. The CLT can be applied to a Poisson distribution with a large µ by thinking of it as a sum of independent Poisson r.v.'s with parameter 1.
b. The CLT can be applied to a Binomial distribution with a large n by thinking of it as a sum of n independent Bernoulli r.v.'s with parameter p.
c. Continuity corrections: think of this as going "halfway towards the next probability". Essentially, you increase the range that you are calculating the probability for by 0.5 in each direction of consideration when using a continuous distribution in place of a discrete distribution. Sketch a histogram to help in understanding this.

Example 4
Suppose that X and Y are independent normal random variables. Calculate the probability P(X > Y − X + 3).

Solution: Note that X > Y − X + 3 can be rewritten as 2X − Y > 3. So we first find the distribution of W = 2X − Y: by the rules above, W is normal with mean 2E(X) − E(Y) and variance 4 Var(X) + Var(Y). Recall that P(W > w) can be rewritten in terms of a standard normal distribution as
P(W > w) = P(Z > (w − E(W))/sd(W)),
so P(2X − Y > 3) follows from the standard normal table.

Example 5
An astrophysicist suspects that the number of volcano eruptions per month on one of Jupiter's moons can be modelled with a Poisson distribution. However, he isn't sure what the parameter of the model should be. His data showed that in the last 10 years, there were only 3 months with more than 100 eruptions. What are the possible parameters of the Poisson distribution? (Use a continuity correction.)

Solution: This problem requires us to work backwards from a probability using the Central Limit Theorem. The data indicate that P(X > 100) ≈ 3/120 = 0.025, where X is the number of eruptions in a month. With a continuity correction, P(X ≥ 100.5) = 0.025. Converting to a standard normal probability (for a Poisson, E(X) = Var(X) = µ),
P(Z ≥ (100.5 − µ)/√µ) = 0.025, so (100.5 − µ)/√µ = 1.96.
Thus µ satisfies µ + 1.96√µ − 100.5 = 0, giving √µ ≈ 9.09 and µ ≈ 82.7.
Extra Practice for Final

1. Suppose X has a normal distribution with mean 2 and variance 4. Find the constant c so that P(|X − 2| ≤ c) = 0.99.
P(−c ≤ X − 2 ≤ c) = P(−c/2 ≤ Z ≤ c/2) = 0.99, so c/2 = 2.57 or 2.58 (from the table), giving c = 5.14 or 5.16.

2. A random variable X has probability function
x          0     1     2
P(X = x)   0.5   0.3   0.2
Find the moment generating function of X.
M(t) = E(e^(tX)) = 0.5 + 0.3e^t + 0.2e^(2t)

3. Use an expression for P(A∪B) (the additivity rule of probability) to prove that P(AB) ≥ P(A) + P(B) − 1.
P(A∪B) = P(A) + P(B) − P(AB). Therefore P(AB) = P(A) + P(B) − P(A∪B) ≥ P(A) + P(B) − 1, since P(A∪B) ≤ 1.

4. If P(A) = 0.6, P(B) = 0.5 and P(B̄ | A) = 0.5, find P(A∪B). Are A, B independent?
P(AB) = P(A) − P(AB̄) = 0.6 − (0.6)(0.5) = 0.3
P(A∪B) = P(A) + P(B) − P(AB) = 0.6 + 0.5 − 0.3 = 0.8
Since P(AB) = 0.3 = P(A)P(B), the events are independent.

5. Suppose X and Y are discrete random variables with joint probability function f(x, y). Are they independent?
Try any pair of values. For example, compare f(1, 1) with f1(1)f2(1). If f(1, 1) ≠ f1(1)f2(1), they are NOT INDEPENDENT.

6. Suppose X, Y are independent normal random variables with E(X) = E(Y) = 1 and Var(X) = Var(Y) = 5. Find P(X − 2Y + 3 > 1).
E(X − 2Y + 3) = 1 − 2 + 3 = 2
Var(X − 2Y + 3) = Var(X) + 4 Var(Y) = 5 + 20 = 25
P(X − 2Y + 3 > 1) = P(Z > (1 − 2)/5) = P(Z > −0.2) = 0.5793, where Z is standard normal.

7. Suppose X and Y are independent geometric random variables, each with the same probability function f(x) = p(1 − p)^x, x = 0, 1, 2, …, and define a random variable T = X + Y. Is the correlation coefficient between X and T positive, negative or zero? Explain why.
Large values of X will tend to result in larger values of T, and small values of X in smaller values of T. Therefore ρ_XT is positive.
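Problem 7's answer can be made quantitative: for independent, identically distributed X and Y, Cov(X, T) = Var(X) and sd(T) = √2 sd(X), so ρ_XT = 1/√2 ≈ 0.71. A Monte Carlo sketch in Python (my own check; p = 0.3 is an arbitrary choice):

```python
import random

random.seed(3)
p, trials = 0.3, 50000

def geometric(p):
    # Number of failures before the first success
    k = 0
    while random.random() >= p:
        k += 1
    return k

xs = [geometric(p) for _ in range(trials)]
ys = [geometric(p) for _ in range(trials)]
ts = [x + y for x, y in zip(xs, ys)]

def corr(a, b):
    # Sample correlation coefficient
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / len(a)
    sa = (sum((u - ma) ** 2 for u in a) / len(a)) ** 0.5
    sb = (sum((v - mb) ** 2 for v in b) / len(b)) ** 0.5
    return cov / (sa * sb)

print(round(corr(xs, ts), 2))   # close to 1/sqrt(2), clearly positive
```

The estimate is positive for any p, since Cov(X, X + Y) = Var(X) > 0 whenever Y is independent of X.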
8. Four people are listed as friends on Facebook according to the diagram:
a. Find the probability of each indicator event.
b. Find E(X) using the relationship between X and the indicators Xi.
X = X1 + X2 + X3 + X4
So E(X) = E(X1) + E(X2) + E(X3) + E(X4) = 0.01 + 0.01 + 0.001 + 0.001 = 0.022
c. Find Var(Xi).
Var(X1) = Var(X2) = 0.0099; Var(X3) = Var(X4) = 0.00099
d. Find all Cov(Xi, Xj)'s.
Cov(Xi, Xj) = E(XiXj) − E(Xi)E(Xj) = 0.00009 (the same for the symmetric pairs).

e. Find Var(X).
Var(X) = Σ Var(Xi) + 2 Σ_{i<j} Cov(Xi, Xj)
= 0.0099 + 0.0099 + 0.00099 + 0.00099 + 2(0 + 0.00099 + 0.000009)
= 0.022536
This note was uploaded on 04/05/2011 for the course STAT 230 taught by Professor Various during the Spring '06 term at Waterloo.