York University
MATH 2030 3.0 (Elementary Probability), Fall 2011
Assignment 1 Solutions, Sept 2011

1.1 No. 2
(a) We use an equally likely outcomes model with

Ω = {suppose, a, word, is, picked, at, random, from, this, sentence}.

This has 10 elements. The event in question is

A = {suppose, word, picked, random, from, this, sentence},

which has 7 elements. So P(A) = 7/10.
(b) Now we have an event B = {suppose, picked, random, sentence}, and P(B) = 4/10.
(c) A ∩ B = B in this particular case, so P(A ∩ B) = P(B) = 4/10.

1.3 No. 4
(a) Yes: the event is {0, 1}.
(b) Yes, this is just another way of saying there was exactly one head: the event is {1}.
(c) No, the event can't be expressed in this model. This particular model lets us keep track of the number of heads, but not the order in which they occur.
(d) Yes: the event is {1, 2}.

1.3 No. 6
(a) There are 10 words, each of which is equally likely to be picked. I'll use an equally likely outcomes model with sample space

Ω = {suppose, a, word, is, picked, at, random, from, this, sentence}.

Let X be the random variable that counts the length of the word, e.g. X(suppose) = 7, X(word) = 4, etc. For each x we have to count the number of outcomes (i.e. words in the sentence) of length x, and then divide by 10 to get the desired probabilities. We compute that

  x        1     2     4     6     7     8
  P(X=x)  1/10  2/10  3/10  2/10  1/10  1/10

(b) We have the same model, but a different random variable Y that counts the number of vowels in a word. For example, Y(suppose) = 3, Y(sentence) = 3. Now for every y we have to count the number of words with y vowels in order to compute the probabilities. Doing this we get that

  y        1     2     3
  P(Y=y)  6/10  2/10  2/10

You are free to add other possible values of x or y to these tables, but the associated probabilities will be 0.

1.3 No. 9
One way to do this is to fill in probabilities for all regions in a Venn diagram, and then add up those probabilities as needed.
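The counting in 1.3 No. 6 can be double-checked with a short script. This is just a sketch of the same tally-and-divide-by-10 computation; the helper name `vowels` and the use of `Counter`/`Fraction` are my own choices, not part of the assignment.

```python
from collections import Counter
from fractions import Fraction

words = "suppose a word is picked at random from this sentence".split()
n = len(words)  # 10 equally likely outcomes

# Distribution of X = length of the picked word
length_counts = Counter(len(w) for w in words)
p_X = {x: Fraction(c, n) for x, c in sorted(length_counts.items())}

# Distribution of Y = number of vowels in the picked word
def vowels(w):
    return sum(ch in "aeiou" for ch in w)

vowel_counts = Counter(vowels(w) for w in words)
p_Y = {y: Fraction(c, n) for y, c in sorted(vowel_counts.items())}
```

Note that `Fraction` reduces automatically, so e.g. P(X = 4) = 3/10 appears as `Fraction(3, 10)` but P(X = 2) = 2/10 appears as `Fraction(1, 5)`; the values agree with the tables above.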
This would give that

P(F ∩ G ∩ H) = 0.1
P(F ∩ G ∩ Hᶜ) = P(F ∩ G) − P(F ∩ G ∩ H) = 0.4 − 0.1 = 0.3
P(F ∩ Gᶜ ∩ H) = P(F ∩ H) − P(F ∩ G ∩ H) = 0.3 − 0.1 = 0.2
P(Fᶜ ∩ G ∩ H) = P(G ∩ H) − P(F ∩ G ∩ H) = 0.2 − 0.1 = 0.1
P(F ∩ Gᶜ ∩ Hᶜ) = P(F) − P(F ∩ G ∩ Hᶜ) − P(F ∩ Gᶜ ∩ H) − P(F ∩ G ∩ H) = 0.7 − 0.3 − 0.2 − 0.1 = 0.1
P(Fᶜ ∩ G ∩ Hᶜ) = P(G) − P(F ∩ G ∩ Hᶜ) − P(Fᶜ ∩ G ∩ H) − P(F ∩ G ∩ H) = 0.6 − 0.3 − 0.1 − 0.1 = 0.1
P(Fᶜ ∩ Gᶜ ∩ H) = P(H) − P(F ∩ Gᶜ ∩ H) − P(Fᶜ ∩ G ∩ H) − P(F ∩ G ∩ H) = 0.5 − 0.2 − 0.1 − 0.1 = 0.1

(a) P(F ∪ G) is the sum of the six terms above, other than P(Fᶜ ∩ Gᶜ ∩ H) …
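The region-by-region bookkeeping in 1.3 No. 9 can also be verified numerically. A minimal sketch, assuming the given probabilities from the problem; the region labels (lowercase = complement, e.g. `"FGh"` for F ∩ G ∩ Hᶜ) are my own convention:

```python
# Given probabilities from the problem statement
P_F, P_G, P_H = 0.7, 0.6, 0.5
P_FG, P_FH, P_GH = 0.4, 0.3, 0.2
P_FGH = 0.1

# Probabilities of the seven Venn-diagram regions inside F, G, H
regions = {
    "FGH": P_FGH,
    "FGh": P_FG - P_FGH,  # in F and G but not H
    "FgH": P_FH - P_FGH,  # in F and H but not G
    "fGH": P_GH - P_FGH,  # in G and H but not F
}
regions["Fgh"] = P_F - regions["FGh"] - regions["FgH"] - P_FGH  # F only
regions["fGh"] = P_G - regions["FGh"] - regions["fGH"] - P_FGH  # G only
regions["fgH"] = P_H - regions["FgH"] - regions["fGH"] - P_FGH  # H only

# (a) P(F ∪ G): every region except the "H only" one
p_F_or_G = sum(v for k, v in regions.items() if k != "fgH")
```

Adding the six relevant regions gives P(F ∪ G) = 0.9, and as a sanity check the seven regions sum to 1 here (so the complement of F ∪ G ∪ H carries probability 0 with these numbers).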
This note was uploaded on 01/23/2012 for the course MATH 2030 taught by Professor Salisbury during the Spring '08 term at York University.