Conditional Probability Revised

Conditional Probability

Let A and B be events. The probability that event B occurs given (knowing) that event A occurs is called a conditional probability, denoted P(B | A). The event on the right of the bar is the given (or known) event.

Let us think about a die example. Suppose our experiment is rolling a fair die once, and define the following events:

A: the die shows a 3.
B: the die is odd.
C: the die is even.

With these 3 events we have the following 6 conditional probabilities:

P(B | A) = 1      P(B | C) = 0      P(C | A) = 0
P(A | B) = 1/3    P(A | C) = 0      P(C | B) = 0

You know P(A) = 1/6, P(B) = .5, and P(C) = .5. How do these conditional probabilities compare?

The formula for conditional probability is as follows (keep in mind we need P(A) > 0 for this):

P(B | A) = N(A ∩ B) / N(A) = [N(A ∩ B) / n] / [N(A) / n] = P(A ∩ B) / P(A).

In general, a conditional probability will change the original probability. This change may be an increase or a decrease; however, the probability can also remain the same. When the conditional probability is the same as the unconditional probability, the events are said to be independent. We can now define independence as P(B | A) = P(B), or equivalently P(A ∩ B) = P(A) * P(B).

General Multiplication Rule

P(A ∩ B) = P(A) * P(B | A)

P(A_1 ∩ A_2 ∩ ... ∩ A_N) = P(A_1) * P(A_2 | A_1) * ... * P(A_N | A_1 ∩ A_2 ∩ ... ∩ A_{N-1}).

Example: Consider a local telephone number, i.e. one with 7 digits. Suppose the 1st digit cannot be zero, and there are no other restrictions. What is the probability that the phone number consists of all even digits (yes, zero is an even number)? If we define E_i to be the event that the ith digit is even, then we are looking for the probability of the intersection of the E_i's:

P(E_1 ∩ ... ∩ E_7) = P(E_1) * P(E_2 | E_1) * ... * P(E_7 | E_1 ∩ E_2 ∩ ... ∩ E_6).

However, one nice thing about this fictitious problem is that the E_i's are independent. What that means in terms of the problem is that we do not "really" have conditional probabilities (see above). We have

P(E_1 ∩ ... ∩ E_7) = P(E_1) * P(E_2) * ... * P(E_7).

As you can see, independence is a wonderful quality in probability. Our answer becomes

P(E_1 ∩ ... ∩ E_7) = (4/9) * (.5)^6,

since the first digit must be one of the 4 even digits among 1-9, and each of the remaining six digits is even with probability 5/10.

Let us actually use the conditional probability now. Suppose there is a bag full of candy bars: 10 "100 Grand" bars, 7 "3 Musketeers," and 3 "Snickers." If we draw 5 candy bars at random, without replacement, what is the probability that we get all "100 Grand" bars? Define G_i to be the event that the ith draw is a "100 Grand" bar. Then we are looking for

P(G_1 ∩ ... ∩ G_5) = P(G_1) * P(G_2 | G_1) * ... * P(G_5 | G_1 ∩ G_2 ∩ G_3 ∩ G_4).

Here, I claim we do not have independence of events. Why? What could we change so that the events would be independent?

P(G_1 ∩ ... ∩ G_5) = (10/20) * (9/19) * (8/18) * (7/17) * (6/16).

Law of Total Probability

Suppose A_1, A_2, ..., A_N form a partition. Then for any event B,

P(B) = P(B ∩ A_1) + ... + P(B ∩ A_N).

A modified version of this that is extremely useful is

P(B) = P(B ∩ E) + P(B ∩ E^C).

We can rewrite P(B ∩ E) as

P(B ∩ E) = P(B) * P(E | B) = P(E) * P(B | E).
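
As a quick numeric check of the two multiplication-rule examples and the total-probability identity above, here is a minimal Python sketch (an addition to these notes, not part of the original): exact fractions reproduce both answers, and an unordered count cross-checks the candy-bar result.

```python
from fractions import Fraction
from math import comb

# Phone-number example: the digits are independent, so the multiplication
# rule needs no conditional probabilities.  First digit: 4 even choices out
# of the 9 digits 1-9; each remaining digit: 5 even choices out of 10.
p_phone = Fraction(4, 9) * Fraction(5, 10) ** 6
print("all digits even:", p_phone, float(p_phone))      # 1/144 ~ 0.0069

# Candy-bar example: draws without replacement are NOT independent, so we
# chain conditional probabilities: P(G1)*P(G2|G1)*...*P(G5|G1∩...∩G4).
total, grand = 20, 10
p_candy = Fraction(1)
for i in range(5):
    p_candy *= Fraction(grand - i, total - i)
print("all 100 Grand:", p_candy, float(p_candy))        # 21/1292 ~ 0.0163

# Cross-check: counting unordered hands gives the same answer.
assert p_candy == Fraction(comb(10, 5), comb(20, 5))

# Law of total probability on the second draw:
# P(G2) = P(G1)*P(G2|G1) + P(G1^C)*P(G2|G1^C) = 1/2, the same as P(G1).
p_g2 = Fraction(10, 20) * Fraction(9, 19) + Fraction(10, 20) * Fraction(10, 19)
assert p_g2 == Fraction(1, 2)
```

The last assertion illustrates a nice side fact: even though the draws are dependent, each draw on its own behaves like the first.
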
Example: Suppose we are looking at a specific disease. Define D as the event that a particular person has the disease, and O as the event that the person tests positive for the disease. Then we have the following:

P(O) = P(D) * P(O | D) + P(D^C) * P(O | D^C).

This might seem "silly" for the above example because we are not usually interested in P(O). However, I chose this example for a different reason. There is another important theorem in conditional probability called Bayes' Rule.

Bayes' Rule

Suppose A_1, A_2, ..., A_N form a partition. Then for any event B we have the following:

P(A_j | B) = [P(A_j) * P(B | A_j)] / [P(A_1) * P(B | A_1) + ... + P(A_N) * P(B | A_N)].

Why is this formula true? How does it compare to the previous conditional probability formula? The idea behind Bayes' rule is that we revise probabilities in accordance with newly acquired information. It will be used quite frequently, and some problems will force you to use it.

Let us think about our previous example. Suppose we are interested in the probability of having the disease given that the test was positive. We now have the following:

P(D | O) = [P(D) * P(O | D)] / [P(D) * P(O | D) + P(D^C) * P(O | D^C)].

This is more often what we are concerned with in this problem: the idea of having the disease (or being pregnant) given that the test was positive. The formula takes into account both the probability of testing positive because the disease is present and the probability of a false positive.

Example: Suppose a specific disease occurs in only 5 out of every 1000 people. Suppose the test for the disease is accurate 99% of the time when a person has the disease, and right 95% of the time when a person does not have the disease. What is the probability that a person has the disease given that they tested positive?
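
Here is that question worked numerically, as a sketch added to these notes (the variable names p_d, sens, and false_pos are mine, not the notes'):

```python
# Bayes' rule for the disease example: the disease occurs in 5 of every
# 1000 people; the test is right 99% of the time when the disease is
# present and 95% of the time when it is absent.
p_d = 0.005          # prior: P(D)
sens = 0.99          # P(O | D)
false_pos = 0.05     # P(O | D^C) = 1 - 0.95

# The denominator is the law of total probability: P(O).
p_o = p_d * sens + (1 - p_d) * false_pos
p_d_given_o = p_d * sens / p_o          # posterior: P(D | O)

print(f"P(O)   = {p_o:.4f}")            # 0.0547
print(f"P(D|O) = {p_d_given_o:.4f}")    # 0.0905
```

Even with a quite accurate test, the posterior is only about 9%: the disease is so rare that false positives dominate the positive results.
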
Different Conditional Probability Example: Polya's Urn Scheme

An urn contains b black balls and r red balls. One ball is selected at random, its color is recorded, and then it and c balls of the same color are put in the urn. This process is repeated. Find the probability that balls 1 and 2 are black and ball 3 is red. If we define B_i to be the event that the ith draw is black and R_i to be the event that the ith draw is red, then we are looking for P(B_1 ∩ B_2 ∩ R_3).

Greeks Problem

Suppose at a given university the following hold: 15% of females are in a sorority, and 18-20% of males are in a fraternity. The campus paper uses this information to claim that 33-35% of campus is "Greek." Is this correct? If your answer is no, what is wrong with it and how would you fix it?

For Bayes' Rule problems we have 2 additional terms. The prior probability is the unconditioned probability; for our disease example this would be P(D). The posterior probability is the revised probability, the one that takes the new information into account; for our example, this would be P(D | O).

Recall that when the conditional probability is the same as the unconditional probability, the events are said to be independent: P(B | A) = P(B), or P(A ∩ B) = P(A) * P(B).

Special Multiplication Rule

If A is independent of B, then P(A ∩ B) = P(A) * P(B). Also, if A and B are independent, then A^C and B are independent, A and B^C are independent, and A^C and B^C are independent.

Pairwise Independent Events

The events A, B, and C are pairwise independent if any collection of 2 of these events is independent. That is, A and B are independent, A and C are independent, and B and C are independent. However, this does not imply that A, B, and C are mutually independent.

Mutually Independent Events

The events A, B, and C are mutually independent if they are pairwise independent and, in addition, P(A ∩ B ∩ C) = P(A) * P(B) * P(C). A more formal definition is as follows. Let A_1, A_2, ..., A_N be events. They are said to be mutually independent if each subcollection of the N events satisfies the multiplication property. That is, for each integer n with 2 ≤ n ≤ N, we have

P(A_{k1} ∩ A_{k2} ∩ ... ∩ A_{kn}) = P(A_{k1}) * P(A_{k2}) * ... * P(A_{kn}),

where k1, k2, ..., kn are distinct integers between 1 and N. So if we had n = 5 and N = 20, the ki's could be 1, 2, 4, 8, 10 and not just 1, 2, 3, 4, 5. Basically, it means that every subcollection of the events is independent. Mutual independence implies pairwise independence; however, pairwise independence does not necessarily imply mutual independence.

Example: Let us roll 2 dice, a hunter green die and a cardinal red die. Let A be the event that the hunter green die is odd, B the event that the cardinal red die is odd, and C the event that the sum of the dice is odd. Prove that these events are pairwise independent but not mutually independent. (A computational check appears after the table below.)

More Examples:

            Male    Female    Total
Single      .25     .15       .40
Married     .10     .10       .20
Divorced    .10     .13       .23
Widowed     .03     .14       .17
Total       .48     .52       1.00

What is the probability that a person is male given that they are single? What is the probability that the person is single given that they are male? What is the probability that the person is divorced given that they are female? What is the probability that a person is male given that they are widowed?
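
As promised after the dice example, here is a short enumeration check (again an addition to the notes, not part of them): with 36 equally likely outcomes, exact fractions show that every pair of the events multiplies while the triple does not.

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of (green die, red die).
outcomes = set(product(range(1, 7), repeat=2))

A = {(g, r) for g, r in outcomes if g % 2 == 1}        # green die odd
B = {(g, r) for g, r in outcomes if r % 2 == 1}        # red die odd
C = {(g, r) for g, r in outcomes if (g + r) % 2 == 1}  # sum odd

def p(event):
    return Fraction(len(event), len(outcomes))

# Pairwise independent: every pair multiplies (each side equals 1/4).
for X, Y in [(A, B), (A, C), (B, C)]:
    assert p(X & Y) == p(X) * p(Y)

# Not mutually independent: two odd dice force an even sum, so
# A ∩ B ∩ C is empty while P(A)*P(B)*P(C) = 1/8.
print(p(A & B & C), p(A) * p(B) * p(C))   # 0 vs 1/8
```

The table questions work the same way by hand: each answer is a joint probability divided by a marginal, e.g. P(male | single) = .25/.40 = .625 and P(divorced | female) = .13/.52 = .25.
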