# 4106-08-Notes-1 - Copyright (c) 2006 by Karl Sigman


## 1 Review of Probability

Random variables are denoted by $X, Y, Z$, etc. The cumulative distribution function (c.d.f.) of a random variable $X$ is denoted by $F(x) = P(X \le x)$, $-\infty < x < \infty$, and if the random variable is continuous then its probability density function is denoted by $f(x)$, which is related to $F(x)$ via

$$f(x) = F'(x) = \frac{d}{dx}F(x), \qquad F(x) = \int_{-\infty}^{x} f(y)\, dy.$$

The probability mass function (p.m.f.) of a discrete random variable is given by

$$p(k) = P(X = k), \quad -\infty < k < \infty,$$

for integers $k$. $1 - F(x) = P(X > x)$ is called the tail of $X$ and is denoted by $\overline{F}(x) = 1 - F(x)$. Whereas $F(x)$ increases to 1 as $x \to \infty$ and decreases to 0 as $x \to -\infty$, the tail $\overline{F}(x)$ decreases to 0 as $x \to \infty$ and increases to 1 as $x \to -\infty$. If a r.v. $X$ has a certain distribution with c.d.f. $F(x) = P(X \le x)$, then we write, for simplicity of expression,

$$X \sim F. \tag{1}$$

### 1.1 Moments and variance

The expected value of a r.v. is denoted by $E(X)$ and defined by

$$E(X) = \sum_{k=-\infty}^{\infty} k\, p(k) \quad \text{(discrete case)}, \qquad E(X) = \int_{-\infty}^{\infty} x f(x)\, dx \quad \text{(continuous case)}.$$

$E(X)$ is also referred to as the first moment or mean of $X$ (or of its distribution). Higher moments $E(X^n)$, $n \ge 1$, can be computed via

$$E(X^n) = \sum_{k=-\infty}^{\infty} k^n p(k) \quad \text{(discrete case)}, \qquad E(X^n) = \int_{-\infty}^{\infty} x^n f(x)\, dx \quad \text{(continuous case)},$$

and more generally $E(g(X))$ for a function $g = g(x)$ can be computed via

$$E(g(X)) = \sum_{k=-\infty}^{\infty} g(k)\, p(k) \quad \text{(discrete case)}, \qquad E(g(X)) = \int_{-\infty}^{\infty} g(x) f(x)\, dx \quad \text{(continuous case)}.$$
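As a quick numerical sanity check of the continuous-case moment formula $E(X^n) = \int x^n f(x)\, dx$, the sketch below integrates against an exponential density. The choice of distribution and rate (Exponential with $\lambda = 2$, where $E(X) = 1/\lambda$ and $E(X^2) = 2/\lambda^2$) is our own illustrative example, not part of the notes; the integral is approximated with a plain trapezoid rule.

```python
# Numerical check of E(X^n) = integral of x^n f(x) dx for a continuous r.v.,
# using the Exponential(lam) density f(x) = lam * exp(-lam * x), x >= 0.
# (Hypothetical example: lam = 2.0 is an arbitrary choice for illustration.)
import math

lam = 2.0

def f(x):
    """Density of an Exponential(lam) random variable."""
    return lam * math.exp(-lam * x)

def moment(n, upper=50.0, steps=200_000):
    """Approximate E(X^n) via the trapezoid rule on [0, upper];
    the tail beyond `upper` is negligible for this density."""
    h = upper / steps
    total = 0.5 * (0.0 ** n * f(0.0) + upper ** n * f(upper))
    for i in range(1, steps):
        x = i * h
        total += x ** n * f(x)
    return total * h

mean = moment(1)     # analytic value: 1/lam = 0.5
second = moment(2)   # analytic value: 2/lam**2 = 0.5
print(mean, second)
```

The same pattern works for any $n$ and any density; only `f` and the integration range need to change.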


(Letting $g(x) = x^n$ yields the moments, for example.) Finally, the variance of $X$ is denoted by $Var(X)$, defined by $E\{|X - E(X)|^2\}$, and can be computed via

$$Var(X) = E(X^2) - E^2(X), \tag{2}$$

the second moment minus the square of the first moment. For any r.v. $X$ and any number $a$,

$$E(aX) = aE(X), \qquad Var(aX) = a^2\, Var(X). \tag{3}$$

For any two r.v.s. $X$ and $Y$,

$$E(X + Y) = E(X) + E(Y). \tag{4}$$

If $X$ and $Y$ are independent, then

$$Var(X + Y) = Var(X) + Var(Y). \tag{5}$$

The above properties generalize in the obvious fashion to any finite number of r.v.s. In general (independent or not),

$$Var(X + Y) = Var(X) + Var(Y) + 2\, Cov(X, Y),$$

where

$$Cov(X, Y) \stackrel{\text{def}}{=} E(XY) - E(X)E(Y).$$

When $Cov(X, Y) > 0$, $X$ and $Y$ are said to be positively correlated, whereas when $Cov(X, Y) < 0$, $X$ and $Y$ are said to be negatively correlated. When $Cov(X, Y) = 0$, $X$ and $Y$ are said to be uncorrelated, and in general this is weaker than independence of $X$ and $Y$: there are examples of uncorrelated r.v.s. that are not independent.

### 1.2 Moment generating functions

The moment generating function (mgf) of a r.v. $X$ (or its distribution) is defined for all $s \in (-\infty, \infty)$ by

$$M(s) \stackrel{\text{def}}{=} E(e^{sX}) = \int_{-\infty}^{\infty} e^{sx} f(x)\, dx \quad \left( = \sum_{k=-\infty}^{\infty} e^{sk} p(k) \ \text{in the discrete r.v. case} \right). \tag{6}$$

It is so called because it generates the moments of $X$ by differentiation at $s = 0$: $M'(0) = E(X)$.
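The variance formula (2) and the mgf property $M'(0) = E(X)$ can both be checked on a small discrete example. The sketch below uses a fair six-sided die, which is our own toy choice (not from the notes): it computes $E(X)$, $E(X^2)$, and $Var(X)$ directly from the p.m.f., then approximates $M'(0)$ by a central finite difference.

```python
# Check Var(X) = E(X^2) - (E(X))^2 (formula (2)) and the mgf property
# M'(0) = E(X), on a fair six-sided die (an illustrative toy distribution).
import math

outcomes = [1, 2, 3, 4, 5, 6]
p = 1.0 / 6.0  # uniform p.m.f. on the six outcomes

mean = sum(k * p for k in outcomes)        # E(X) = 3.5
second = sum(k * k * p for k in outcomes)  # E(X^2) = 91/6
var = second - mean ** 2                   # formula (2): 35/12

def M(s):
    """Moment generating function M(s) = E(e^{sX}) for the die."""
    return sum(math.exp(s * k) * p for k in outcomes)

# Central-difference approximation of M'(0); should be close to E(X) = 3.5.
h = 1e-6
mgf_deriv_at_0 = (M(h) - M(-h)) / (2 * h)
print(mean, var, mgf_deriv_at_0)
```

Higher moments follow the same pattern: an $n$-th numerical derivative of $M$ at 0 approximates $E(X^n)$, though finite differences of high order become numerically delicate.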

## This note was uploaded on 10/20/2010 for the course IEOR 4106 taught by Professor Whitt during the Fall '08 term at Columbia.

