ELEMENTARY PROBABILITY

Events and event sets. Consider tossing a die. There are six possible outcomes, which we shall denote by the elements of the set {A_i ; i = 1, 2, ..., 6}. A numerical value is assigned to each outcome or event:

    A_1 → 1,  A_2 → 2,  ...,  A_6 → 6.

The variable x = 1, 2, ..., 6 that is associated with these outcomes is called a random variable. The set of all possible outcomes, S = A_1 ∪ A_2 ∪ ... ∪ A_6, is called the sample space. This is distinct from the elementary event set. For example, we can define the event E ⊂ S of getting an even number from tossing the die. We may denote this event by E = A_2 ∪ A_4 ∪ A_6, which is the event of getting a 2 or a 4 or a 6. The elementary events that constitute E are, of course, mutually exclusive. The collection of subsets of S that can be formed by taking unions of the events within S, together with the intersections and complements of these unions, is described as a sigma field, or σ-field; one also speaks of a sigma algebra. The random variable x is described as a scalar-valued set function defined over the sample space S. There are numerous other random variables that we might care to define in respect of the sample space, by identifying other subsets and assigning numerical values to the corresponding events. We shall do so presently.

Summary measures of a statistical experiment. Let us toss the die 30 times and record the value assumed by the random variable at each toss: 1, 2, 5, 3, ..., 4, 6, 2, 1. To summarise this information, we may construct a frequency table:

    x     f       r
    1     8    8/30
    2     7    7/30
    3     5    5/30
    4     5    5/30
    5     3    3/30
    6     2    2/30
         30       1

Here, f_i = frequency, n = Σ f_i = sample size, and r_i = f_i/n = relative frequency. In this case, the order in which the numbers occur is of no interest; therefore, there is no loss of information in creating this summary.
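As an illustrative sketch (the notes give no code), the frequency table can be reproduced in Python. The list of tosses is a hypothetical sample, constructed here only so that its frequencies match the tabulated values:

```python
from collections import Counter

# Hypothetical record of 30 tosses, arranged to match the frequency table:
# 8 ones, 7 twos, 5 threes, 5 fours, 3 fives, 2 sixes.
tosses = [1]*8 + [2]*7 + [3]*5 + [4]*5 + [5]*3 + [6]*2

n = len(tosses)                                # sample size n = sum of f_i
freq = Counter(tosses)                         # f_i: frequency of each face
rel = {x: f / n for x, f in freq.items()}      # r_i = f_i / n

for x in sorted(freq):
    print(f"x = {x}   f = {freq[x]}   r = {freq[x]}/{n} = {rel[x]:.3f}")

# The relative frequencies must sum to 1, as in the table's final row.
assert abs(sum(rel.values()) - 1.0) < 1e-12
```

Because the summary discards only the order of the tosses, any sample with these frequencies yields the same table.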
We can describe the outcome of the experiment more economically by calculating various summary statistics. First, there is the mean of the sample:

    x̄ = Σ_i x_i f_i / n = Σ_i x_i r_i.

The variance is a measure of the dispersion of the sample relative to the mean, and it is the average of the squared deviations. It is defined by

    s² = Σ_i (x_i − x̄)² f_i / n
       = Σ_i (x_i − x̄)² r_i
       = Σ_i (x_i² − x_i x̄ − x̄ x_i + x̄²) r_i
       = Σ_i x_i² r_i − x̄²,

which follows since x̄ is a constant that can be taken outside the averaging operation: Σ_i x̄ x_i r_i = x̄ Σ_i x_i r_i = x̄², and Σ_i x̄² r_i = x̄².

Probability and the limit of relative frequency. Let us consider an indefinite sequence of tosses. We should observe that, as n → ∞, the relative frequencies will converge to values in the near neighbourhood of 1/6. We will recognise that the value p_i = 1/6 is the probability of the occurrence of a particular number x_i in the set {x_i ; i = 1, ..., 6} = {1, 2, ..., 6}. We are tempted to define probability simply....
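A short sketch, again using a hypothetical sample that matches the frequency table above, confirms numerically that the direct definition of s² and the shortcut formula Σ x_i² r_i − x̄² agree, and simulates the convergence of the relative frequency of a six towards 1/6:

```python
import random

# Hypothetical sample matching the frequency table (8, 7, 5, 5, 3, 2).
tosses = [1]*8 + [2]*7 + [3]*5 + [4]*5 + [5]*3 + [6]*2
n = len(tosses)

xbar = sum(tosses) / n                                # sample mean x̄
s2_direct = sum((x - xbar)**2 for x in tosses) / n    # Σ (x_i − x̄)² r_i
s2_short = sum(x**2 for x in tosses) / n - xbar**2    # Σ x_i² r_i − x̄²
print(f"mean = {xbar}, variance = {s2_direct}")
assert abs(s2_direct - s2_short) < 1e-12              # the two forms agree

# As the number of tosses grows, the relative frequency of a six
# settles in the near neighbourhood of 1/6 ≈ 0.1667.
random.seed(0)
for n_big in (100, 10_000, 1_000_000):
    sixes = sum(random.randint(1, 6) == 6 for _ in range(n_big))
    print(f"n = {n_big:>9}: relative frequency of six = {sixes / n_big:.4f}")
```

For this particular sample, x̄ = 84/30 = 2.8 and s² = 308/30 − 2.8² ≈ 2.427.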
D.S.G. Pollock, Spring '12