STA6126 Chapter 4, Revised on August 22, 2011

Chapter 4 Probability Distributions

A Statistical Experiment is any activity that has
a) two or more possible outcomes, and
b) uncertainty about which outcome will be observed until the end of the experiment.

The sample space S of a statistical experiment is the set of all possible outcomes of the experiment. An event is a subset of the sample space. Note that S and φ = { } are also subsets of S, hence they are also events. The event S is called the definite event. The event φ = { } is called the impossible event.

Definition 1: If a statistical experiment has k equally likely outcomes, i.e., each of the k possible outcomes (elementary events) has the same probability, then P(E_i) = 1/k for each elementary event E_i.

Definition 2: Given a statistical experiment with a sample space S that has k equally likely outcomes, and an event A of S that contains e elements, the probability of observing the event A, denoted by P(A), is defined as

P(A) = (Number of elements in A) / (Number of elements in S) = n(A) / n(S) = e / k.

What if the elements of the sample space are not equally likely? We may find an estimate of the probability of an event A by repeating the experiment a large number of times (say n) and counting the number of times the event A is observed (say f). Then, approximately,

P(A) ≈ f / n.

The larger n (the number of repetitions) is, the better the approximation, that is,

P(A) = lim(n→∞) f / n.

Thus, we may define the probability of a given event A as the proportion of times that event would occur in a long run of repeated observations.

Definition: Two events A and B are said to be mutually exclusive (disjoint) if they exclude each other, i.e., if observing both A and B at the same time is impossible: P(A and B) = P({ }) = 0.
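The following Python sketch is not part of the original notes; it is a minimal illustration of Definition 2 and the long-run relative frequency idea, assuming a fair six-sided die and the event A = "an even number is rolled".

```python
import random

# Sample space for one roll of a fair six-sided die (equally likely outcomes)
S = [1, 2, 3, 4, 5, 6]
A = {2, 4, 6}  # event A: an even number is rolled

# Definition 2: P(A) = n(A)/n(S) = e/k
print("classical:", len(A) / len(S))  # 3/6 = 0.5

# Relative frequency estimate: repeat the experiment n times, count the
# number of times A occurs (f), and use f/n as an approximation of P(A)
for n in (100, 10_000, 1_000_000):
    f = sum(random.choice(S) in A for _ in range(n))
    print(f"n = {n:>9,}: f/n = {f / n:.4f}")
# As n grows, f/n settles near 0.5, illustrating P(A) = lim(n->infinity) f/n
```

Because the die is fair, the classical value e/k = 3/6 and the simulated ratios f/n should agree more and more closely as n grows.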
Definition: The conditional probability of observing an event A, when we know (or given that) another event B has occurred, denoted P(A|B), is defined as

P(A|B) = P(A and B) / P(B), when P(B) > 0.

Similarly, the conditional probability of B given A is

P(B|A) = P(A and B) / P(A), when P(A) > 0.

Definition: If two events A and B are independent events, then
P(A given B) = P(A|B) = P(A) when P(B) > 0,
P(B given A) = P(B|A) = P(B) when P(A) > 0,
P(A and B) = P(A) × P(B).

Note that if one of the statements in the above definition is true, then all of them are true. Similarly, if one of them is false, all of them are false.

Basic Probability Rules:
1. For any event A, 0 ≤ P(A) ≤ 1.
2. P(Not A) = 1 – P(A) and P(A) = 1 – P(Not A).
3. P(A or B) = P(A) + P(B) – P(A and B).
4. P(A or B) = P(A) + P(B) if A and B are mutually exclusive events, i.e., A and B = { }.
5. P(A and B) = P(A) × P(B given A).
6. P(B given A) = P(B) if A and B are independent events.
7. P(A given B) = P(A) if A and B are independent events.
8. P(A and B) = P(A) × P(B) if A and B are independent events.
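The sketch below is not from the notes; it is a hypothetical example that enumerates the 36 equally likely outcomes of rolling two fair dice and checks the conditional probability and independence definitions above numerically.

```python
from fractions import Fraction
from itertools import product

# Sample space: the 36 equally likely outcomes of rolling two fair dice
S = list(product(range(1, 7), repeat=2))

def prob(event):
    """Classical probability: n(event)/n(S) as an exact fraction."""
    return Fraction(sum(1 for outcome in S if event(outcome)), len(S))

A = lambda o: o[0] == 6            # event A: the first die shows 6
B = lambda o: o[0] + o[1] == 7     # event B: the sum of the two dice is 7

p_A = prob(A)                                  # 1/6
p_B = prob(B)                                  # 1/6
p_A_and_B = prob(lambda o: A(o) and B(o))      # 1/36

# Conditional probability: P(A|B) = P(A and B) / P(B), defined since P(B) > 0
p_A_given_B = p_A_and_B / p_B
print(p_A_given_B, p_A)                        # 1/6 1/6

# Independence checks: P(A|B) = P(A) and P(A and B) = P(A) x P(B)
print(p_A_given_B == p_A)                      # True
print(p_A_and_B == p_A * p_B)                  # True
```

Replacing B with "the sum of the two dice is 8" makes both checks fail, since then P(A|B) = 1/5 while P(A) = 1/6, so those two events are dependent.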
4.2 Random Variables and their Probability Distributions

A random variable is a real number assigned to every element of the sample space. Thus, we may define events in terms of a random variable, instead of writing them as subsets of the sample space.
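As an illustration (not taken from the notes), the following Python sketch assumes the experiment of tossing a fair coin twice and uses X = number of heads as the random variable; it builds the probability distribution of X directly from the sample space.

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Sample space: the 4 equally likely outcomes of tossing a fair coin twice
S = list(product("HT", repeat=2))   # ('H','H'), ('H','T'), ('T','H'), ('T','T')

# The random variable X assigns a real number (the count of heads) to each outcome
def X(outcome):
    return outcome.count("H")

# Probability distribution of X: P(X = x) = (number of outcomes with X = x) / n(S)
counts = Counter(X(outcome) for outcome in S)
distribution = {x: Fraction(c, len(S)) for x, c in sorted(counts.items())}
print(distribution)        # P(X=0) = 1/4, P(X=1) = 1/2, P(X=2) = 1/4

# The event "at least one head" can be written as X >= 1 instead of listing outcomes
p_at_least_one = sum(p for x, p in distribution.items() if x >= 1)
print(p_at_least_one)      # 3/4
```

The event "at least one head" can be written either as the subset {HH, HT, TH} of S or, more compactly, as X ≥ 1; both give probability 3/4.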