LECTURE NOTES ON PROBABILITY I

1. Combinatorial Analysis

Probability concerns how likely an event is to occur in a random experiment. For those random experiments with a finite number of outcomes, say $N$, each being equally likely, the probability of an event $E$ is defined classically by
$$P(E) := \frac{N(E)}{N},$$
where $N(E)$ denotes the number of outcomes resulting in the occurrence of the event $E$.

Example 1.1. Consider flipping a fair coin, and consider the event $E$ that heads comes up. Then $N = 2$ and $N(E) = 1$, so the probability of this event is $P(E) = \frac{1}{2}$.

Example 1.2. Consider tossing two dice, and denote by $(i, j)$, $i, j = 1, \ldots, 6$, the outcome where $i$ appears on the first die and $j$ appears on the second die. Let $E$ denote the event consisting of those outcomes $(i, j)$ with $i + j = 8$. Then $N = 6 \times 6 = 36$ and $N(E) = 5$. Thus $P(E) = \frac{5}{36}$.

In order to obtain the probability, we need to (i) calculate the total number of possible outcomes of the random experiment, and (ii) calculate the number of outcomes belonging to the event in question. This requires techniques from combinatorics.

1.1. The counting principle.

Proposition 1.1 (The basic counting principle). Suppose that two experiments are to be performed. If the first experiment has $m$ possible outcomes and the second experiment has $n$ possible outcomes, then there are $mn$ possible outcomes of the two experiments.

Proof. The proof follows from enumerating all the possible outcomes. Indeed, suppose the $m$ outcomes of the first experiment are $a_1, a_2, \ldots, a_m$ and the $n$ outcomes of the second experiment are $b_1, b_2, \ldots, b_n$. Then all the possible outcomes of the two experiments are
$$(a_1, b_1), (a_1, b_2), \ldots, (a_1, b_n)$$
$$(a_2, b_1), (a_2, b_2), \ldots$$
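As a quick illustration (not part of the original notes), the following Python sketch enumerates the sample space of Example 1.2, checks the basic counting principle for the two dice ($36 = 6 \times 6$ outcomes), and recovers $P(E) = \frac{5}{36}$ from the classical definition. The names `outcomes` and `E` are purely illustrative.

```python
from itertools import product
from fractions import Fraction

# Sample space for tossing two dice: all ordered pairs (i, j), i, j = 1, ..., 6.
outcomes = list(product(range(1, 7), repeat=2))
assert len(outcomes) == 6 * 6  # basic counting principle: m * n outcomes

# Event E from Example 1.2: outcomes (i, j) with i + j = 8.
E = [(i, j) for (i, j) in outcomes if i + j == 8]

# Classical probability: P(E) = N(E) / N.
p = Fraction(len(E), len(outcomes))
print(E)  # [(2, 6), (3, 5), (4, 4), (5, 3), (6, 2)]
print(p)  # 5/36
```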