4703-10-Notes-0

Copyright © 2007 by Karl Sigman

1 Review of Probability

Random variables are denoted by $X$, $Y$, $Z$, etc. The cumulative distribution function (c.d.f.) of a random variable $X$ is denoted by $F(x) = P(X \le x)$, $-\infty < x < \infty$, and if the random variable is continuous then its probability density function is denoted by $f(x)$, which is related to $F(x)$ via
$$f(x) = F'(x) = \frac{d}{dx}F(x), \qquad F(x) = \int_{-\infty}^{x} f(y)\, dy.$$
The probability mass function (p.m.f.) of a discrete random variable is given by $p(k) = P(X = k)$, $-\infty < k < \infty$, for integers $k$. $1 - F(x) = P(X > x)$ is called the tail of $X$ and is denoted by $\bar{F}(x) = 1 - F(x)$. Whereas $F(x)$ increases to 1 as $x \to \infty$ and decreases to 0 as $x \to -\infty$, the tail $\bar{F}(x)$ decreases to 0 as $x \to \infty$ and increases to 1 as $x \to -\infty$. If a r.v. $X$ has a certain distribution with c.d.f. $F(x) = P(X \le x)$, then we write, for simplicity of expression,
$$X \sim F. \tag{1}$$

1.1 Moments and variance

The expected value of a r.v. is denoted by $E(X)$ and defined by
$$E(X) = \sum_{k=-\infty}^{\infty} k\, p(k), \ \text{discrete case}, \qquad E(X) = \int_{-\infty}^{\infty} x f(x)\, dx, \ \text{continuous case}.$$
$E(X)$ is also referred to as the first moment or mean of $X$ (or of its distribution). Higher moments $E(X^n)$, $n \ge 1$, can be computed via
$$E(X^n) = \sum_{k=-\infty}^{\infty} k^n p(k), \ \text{discrete case}, \qquad E(X^n) = \int_{-\infty}^{\infty} x^n f(x)\, dx, \ \text{continuous case},$$
and more generally $E(g(X))$ for a function $g = g(x)$ can be computed via
$$E(g(X)) = \sum_{k=-\infty}^{\infty} g(k)\, p(k), \ \text{discrete case}, \qquad E(g(X)) = \int_{-\infty}^{\infty} g(x) f(x)\, dx, \ \text{continuous case}.$$
(Letting $g(x) = x^n$ yields moments, for example.) Finally, the variance of $X$ is denoted by $Var(X)$, defined by $E\{|X - E(X)|^2\}$, and can be computed via
$$Var(X) = E(X^2) - E^2(X), \tag{2}$$
the second moment minus the square of the first moment. We usually denote the variance by $\sigma^2 = Var(X)$ and when necessary (to avoid confusion) include $X$ as a subscript, $\sigma^2_X = Var(X)$. $\sigma = \sqrt{Var(X)}$ is called the standard deviation of $X$. For any r.v. $X$ and any number $a$,
$$E(aX) = aE(X), \quad \text{and} \quad Var(aX) = a^2\, Var(X). \tag{3}$$
For any two r.v.s. $X$ and $Y$,
$$E(X + Y) = E(X) + E(Y). \tag{4}$$
If $X$ and $Y$ are independent, then
$$Var(X + Y) = Var(X) + Var(Y). \tag{5}$$
The above properties generalize in the obvious fashion to any finite number of r.v.s. In general (independent or not),
$$Var(X + Y) = Var(X) + Var(Y) + 2\, Cov(X, Y),$$
where
$$Cov(X, Y) \stackrel{\text{def}}{=} E(XY) - E(X)E(Y)$$
is called the covariance between $X$ and $Y$, and is usually denoted by $\sigma_{X,Y} = Cov(X, Y)$. When $Cov(X, Y) > 0$, $X$ and $Y$ are said to be positively correlated, whereas when $Cov(X, Y) < 0$, $X$ and $Y$ are said to be negatively correlated. When $Cov(X, Y) = 0$, $X$ and $Y$ are said to be uncorrelated, and in general this is weaker than independence of $X$ and $Y$: there are examples of uncorrelated r.v.s. that are not independent. Note in passing that $Cov(X, X) = Var(X)$.
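
The following numerical sketches are not part of the original notes; they simply illustrate the definitions above. The first is a minimal check, using the standard normal distribution as the example r.v. $X$, that $F(x)$ equals the integral of the density up to $x$ and that the tail is $1 - F(x)$:

    # Numerical check (illustration only, not from the notes) that
    # F(x) = integral_{-inf}^{x} f(y) dy and that the tail is 1 - F(x),
    # with X taken to be standard normal.
    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    X = stats.norm()                        # standard normal: f = X.pdf, F = X.cdf
    x = 1.5

    F_from_pdf, _ = quad(X.pdf, -np.inf, x) # integral of f(y) dy from -inf to x
    print(F_from_pdf, X.cdf(x))             # both ~ 0.9332
    print(1.0 - X.cdf(x), X.sf(x))          # tail: both ~ 0.0668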
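The second sketch (again an illustration under assumed choices, not from the notes) computes moments via $E(g(X)) = \int g(x) f(x)\, dx$ with $g(x) = x^n$ and checks $Var(X) = E(X^2) - E^2(X)$, taking $X \sim \text{Exp}(1)$ so that $E(X) = 1$, $E(X^2) = 2$, and $Var(X) = 1$:

    # Moments by numerical integration and by simulation, for X ~ Exp(1).
    # Illustration only; the distribution and sample size are arbitrary choices.
    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    X = stats.expon()                             # Exp(1)

    def moment(n):
        # E(X^n) = integral x^n f(x) dx (continuous case)
        val, _ = quad(lambda x: x**n * X.pdf(x), 0, np.inf)
        return val

    m1, m2 = moment(1), moment(2)
    print(m1, m2, m2 - m1**2)                     # ~ 1.0, 2.0, 1.0 = Var(X)

    # The same quantities estimated from a large i.i.d. sample:
    rng = np.random.default_rng(0)
    sample = rng.exponential(scale=1.0, size=10**6)
    print(sample.mean(), (sample**2).mean(), sample.var())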
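The last sketch illustrates the general variance-of-a-sum formula and one standard instance of uncorrelated but dependent r.v.s. (my choice of example, not one given in the notes): $X$ standard normal and $Y = X^2$, for which $Cov(X, Y) = E(X^3) - E(X)E(X^2) = 0$ even though $Y$ is completely determined by $X$.

    # Simulation sketch (illustration only) of
    #   Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y),
    # with X ~ N(0,1) and Y = X^2: uncorrelated yet clearly not independent.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10**6

    X = rng.standard_normal(n)
    Y = X**2                                          # a function of X, hence dependent

    cov_XY = np.mean(X * Y) - X.mean() * Y.mean()     # Cov(X,Y) = E(XY) - E(X)E(Y) ~ 0
    print(cov_XY)

    lhs = np.var(X + Y)
    rhs = np.var(X) + np.var(Y) + 2 * cov_XY
    print(lhs, rhs)                                   # agree up to floating-point error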