A Quick Review of Basic Probability and Statistics

Peter Glynn ([email protected])
LaTeX set by Nick West
April 18, 2007

This course presumes knowledge of Chapters 1 to 3 of "Introduction to Probability Models" by Sheldon M. Ross. This material is also largely covered in the course text by P. Bremaud.

1.1 Probability: The Basics

Ω : sample space
ω ∈ Ω : sample outcome
A ⊆ Ω : event
X : Ω → S : "S-valued random variable"
P : a probability (distribution / measure) on Ω

A probability has the following properties:

1. 0 ≤ P{A} ≤ 1 for each event A.
2. P{Ω} = 1.
3. For each sequence A_1, A_2, ... of mutually disjoint events,

       P{ ∪_{i=1}^∞ A_i } = Σ_{i=1}^∞ P{A_i}.

1.2 Conditional Probability

The conditional probability of A given B, written P{A | B}, is defined to be

    P{A | B} = P{A ∩ B} / P{B}.

It is a probability on the new sample space Ω_B = B ⊂ Ω; P{A | B} is interpreted as the likelihood / probability that A occurs given knowledge that B has occurred. Conditional probability is fundamental to stochastic modeling. In particular, in modeling "causality" in a stochastic setting, a causal connection between B and A means:

    P{A | B} ≥ P{A}.

1.3 Independence

Two events A and B are independent of one another if P{A | B} = P{A}, i.e.

    P{A ∩ B} = P{A} P{B}.

Knowledge of B's occurrence has no effect on the likelihood that A will occur.

1.4 Discrete Random Variables

Given a discrete random variable (rv) X which takes on values in S = {x_1, x_2, ...}, its probability mass function is defined by:

    P_X(x_i) = P{X = x_i},  i ≥ 1.

Given a collection X_1, X_2, ..., X_n of S-valued rvs, their joint probability mass function (pmf) is defined as

    P_{(X_1, X_2, ..., X_n)}(x_1, x_2, ..., x_n) = P{X_1 = x_1, X_2 = x_2, ..., X_n = x_n}.

The conditional pmf of X given Y = y is then given by

    P_{X|Y}(x | y) = P_{(X,Y)}(x, y) / P_Y(y).

The collection of rvs X_1, X_2, ..., X_n is independent if

    P_{(X_1, X_2, ..., X_n)}(x_1, x_2, ..., x_n) = P_{X_1}(x_1) P_{X_2}(x_2) ··· P_{X_n}(x_n)

for all (x_1, ..., x_n) ∈ S^n.

1.5 Continuous Random Variables

Given a continuous rv X taking values in ℝ, its probability density function f_X(·) is the function satisfying:

    P{X ≤ x} = ∫_{-∞}^{x} f_X(t) dt.

We interpret f_X(x) as the "likelihood" that X takes on the value x. However, we need to exercise care in that interpretation. Note that

    P{X = x} = ∫_x^x f_X(t) dt = 0,

so the probability that X takes on precisely the value x (to infinite precision) is zero. The "likelihood interpretation" comes from the fact that, as ε → 0,

    P{X ∈ [a, a + ε]} / P{X ∈ [b, b + ε]} = (∫_a^{a+ε} f_X(t) dt) / (∫_b^{b+ε} f_X(t) dt) → f_X(a) / f_X(b),

so f_X(a) does indeed measure the relative likelihood that X takes on a value near a (as opposed, say, to b).
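The definitions of conditional probability (Section 1.2) and independence (Section 1.3) can be checked by direct enumeration on a small finite sample space. The sketch below uses two fair dice as a hypothetical example; the events A, B, C and all helper names are illustrative choices, not from the notes.

```python
from fractions import Fraction

# Sample space for two fair dice: Omega = {(i, j) : 1 <= i, j <= 6},
# each of the 36 outcomes having probability 1/36.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def prob(event):
    """P{A} under the uniform probability on omega."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

def cond_prob(a, b):
    """P{A | B} = P{A and B} / P{B}."""
    return prob(lambda w: a(w) and b(w)) / prob(b)

A = lambda w: w[0] + w[1] == 7     # the dice sum to 7
B = lambda w: w[0] == 3            # the first die shows 3
C = lambda w: w[0] + w[1] >= 10    # the dice sum to at least 10

# A and B are independent: P{A | B} = P{A} = 1/6,
# equivalently P{A and B} = P{A} P{B}.
print(cond_prob(A, B), prob(A))    # 1/6 1/6

# C and B are not independent: given that the first die shows 3,
# a sum of at least 10 is impossible, so P{C | B} = 0 < P{C} = 1/6.
print(cond_prob(C, B), prob(C))    # 0 1/6
```

Using exact `Fraction` arithmetic (rather than floats) makes the equalities in the independence check exact rather than approximate.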
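The joint, marginal, and conditional pmfs of Section 1.4 can likewise be computed mechanically. The joint pmf table below is a made-up illustrative example (not from the notes); the marginal p_Y(y) is obtained by summing the joint pmf over x, and the conditional pmf then follows the definition p_{X|Y}(x | y) = p_{(X,Y)}(x, y) / p_Y(y).

```python
from fractions import Fraction
from collections import defaultdict

# Hypothetical joint pmf of (X, Y) on {0, 1} x {0, 1}, stored as a dict.
joint = {
    (0, 0): Fraction(1, 8), (0, 1): Fraction(1, 4),
    (1, 0): Fraction(1, 4), (1, 1): Fraction(3, 8),
}

def marginal(joint, axis):
    """Marginal pmf: sum the joint pmf over the other coordinate."""
    m = defaultdict(Fraction)
    for xy, p in joint.items():
        m[xy[axis]] += p
    return dict(m)

def conditional(joint, y):
    """Conditional pmf p_{X|Y}(x | y) = p_{(X,Y)}(x, y) / p_Y(y)."""
    p_y = marginal(joint, 1)[y]
    return {x: p / p_y for (x, yy), p in joint.items() if yy == y}

p_X = marginal(joint, 0)   # p_X(0) = 3/8, p_X(1) = 5/8
p_Y = marginal(joint, 1)   # p_Y(0) = 3/8, p_Y(1) = 5/8

# p_{X|Y}(0 | 1) = (1/4)/(5/8) = 2/5 and p_{X|Y}(1 | 1) = (3/8)/(5/8) = 3/5.
print(conditional(joint, 1))

# X and Y are NOT independent here: p_{(X,Y)}(0,0) = 1/8,
# but p_X(0) p_Y(0) = 9/64, so the product factorization fails.
print(joint[(0, 0)], p_X[0] * p_Y[0])
```

Note that each conditional pmf sums to 1, as it must: conditioning on Y = y renormalizes the slice of the joint pmf by p_Y(y).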
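The two facts in Section 1.5 — that P{X = x} = 0 for a continuous rv, and that the ratio of small-interval probabilities converges to f_X(a)/f_X(b) — can be checked numerically. The density f_X(t) = 2t on [0, 1] used below (with CDF F(x) = x² there) is an assumed illustrative choice, not from the notes.

```python
# Illustrative density f_X(t) = 2t on [0, 1]; its CDF is F(x) = x^2 there.
def f(t):
    return 2.0 * t if 0.0 <= t <= 1.0 else 0.0

def cdf(x):
    """P{X <= x} for the illustrative density above."""
    if x < 0.0:
        return 0.0
    return min(x, 1.0) ** 2

def interval_prob(lo, hi):
    """P{X in [lo, hi]} computed from the CDF."""
    return cdf(hi) - cdf(lo)

# A degenerate interval has probability zero: P{X = x} = 0.
print(interval_prob(0.3, 0.3))   # 0.0

# As eps shrinks, P{X in [a, a+eps]} / P{X in [b, b+eps]}
# approaches f(a)/f(b) = 0.5 for the points chosen below.
a, b = 0.25, 0.5
for eps in (0.1, 0.01, 0.001):
    print(interval_prob(a, a + eps) / interval_prob(b, b + eps))
```

For this density the ratio works out to (2a + eps)/(2b + eps), which makes the convergence to a/b = f(a)/f(b) visible directly as eps → 0.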