Lecture 8 - Review of Two Dimensional Random Vectors


ORIE 3500/5500, Summer 2011, Li

1 Review of Two Dimensional Random Vectors

Recall that the probability distribution of a two-dimensional random vector is characterized by its joint cumulative distribution function, or joint cdf. It is defined as
$$F_{X,Y}(x, y) = P[X \le x, Y \le y], \qquad -\infty < x, y < \infty.$$

In the discrete case, the joint probability mass function is $p_{X,Y}(x_i, y_j) = P[X = x_i, Y = y_j]$, and the marginal pmf of $X$ is obtained by summing the joint pmf over the values of $Y$:
$$p_X(x_i) = P[X = x_i] = \sum_j P[X = x_i, Y = y_j] = \sum_j p_{X,Y}(x_i, y_j).$$

In the continuous case, the joint probability density function is
$$f_{X,Y}(x, y) = \frac{\partial^2}{\partial x \, \partial y} F_{X,Y}(x, y).$$

Example. Suppose that 3 balls are randomly selected from an urn containing 3 red, 4 white, and 5 blue balls. If we let $X$ and $Y$ denote, respectively, the numbers of red and white balls chosen, then the joint probability mass function of $X$ and $Y$ is given by

    x_i \ y_j |    0        1        2        3    | p_X(x_i)
    ----------+-------------------------------------+---------
        0     |  10/220   40/220   30/220    4/220 |  84/220
        1     |  30/220   60/220   18/220      0   | 108/220
        2     |  15/220   12/220     0         0   |  27/220
        3     |   1/220     0        0         0   |   1/220
    ----------+-------------------------------------+---------
    p_Y(y_j)  |  56/220  112/220   48/220    4/220 |

Example. The joint density function of $X$ and $Y$ is given by
$$f(x, y) = 2 e^{-x} e^{-2y}, \qquad 0 < x < \infty, \; 0 < y < \infty.$$
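The entries of the joint pmf table in the urn example can be reproduced with a short script. This is a sketch using only the Python standard library; the function name `p` is our own choice, not notation from the lecture.

```python
from math import comb

# Urn example: choose 3 balls from 3 red, 4 white, 5 blue (12 balls total).
# X = number of red balls drawn, Y = number of white balls drawn.
total = comb(12, 3)  # 220 equally likely samples

def p(x, y):
    """Joint pmf p_{X,Y}(x, y); the remaining 3 - x - y chosen balls are blue."""
    blue = 3 - x - y
    if x < 0 or y < 0 or blue < 0:
        return 0.0
    return comb(3, x) * comb(4, y) * comb(5, blue) / total

# Reproduce the table, including the row marginals p_X.
for x in range(4):
    row = [p(x, y) for y in range(4)]
    print(x, [f"{v * total:.0f}/220" for v in row],
          "p_X =", f"{sum(row) * total:.0f}/220")

# Column marginals p_Y, obtained by summing over x.
p_Y = [sum(p(x, y) for x in range(4)) for y in range(4)]
print("p_Y:", [f"{v * total:.0f}/220" for v in p_Y])
```

Each entry is a hypergeometric-type count: choose $x$ of the 3 red, $y$ of the 4 white, and the rest of the 5 blue, divided by $\binom{12}{3} = 220$.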

Compute $P(X > 1, Y < 1)$, $P(X < Y)$, $P(X < a)$, and $f_X(a)$.

Solution:

(a) $P(X > 1, Y < 1) = \int_0^1 \int_1^\infty 2 e^{-x} e^{-2y} \, dx \, dy = e^{-1} \int_0^1 2 e^{-2y} \, dy = (1 - e^{-2}) e^{-1}$.

(b) $P(X < Y) = \int_0^\infty \int_0^y 2 e^{-x} e^{-2y} \, dx \, dy = 1/3$.

(c) $P(X < a) = \int_0^a \int_0^\infty 2 e^{-x} e^{-2y} \, dy \, dx = 1 - e^{-a}$ for $a > 0$.

(d) $f_X(a) = e^{-a}$ for $a > 0$.

2 Higher Dimensional Random Vectors

The extension of bivariate random vectors to higher dimensions is what one would expect. $(X_1, X_2, \ldots, X_n)$ is said to be jointly distributed if each $X_i$ is a random variable and they have a joint cumulative distribution function
$$F_{X_1, \ldots, X_n}(x_1, \ldots, x_n) = P[X_1 \le x_1, \ldots, X_n \le x_n].$$
Its properties are similar to those in the two-dimensional case, but they become increasingly difficult to write down in mathematical notation. You can try to write down the fourth property of joint cdfs for fun!

It is important to know how to obtain marginal distributions from the joint distribution. As before,
$$F_{X_1}(x_1) = \lim_{x_i \to \infty, \, i \ne 1} F_{X_1, \ldots, X_n}(x_1, \ldots, x_n) = F_{X_1, \ldots, X_n}(x_1, \infty, \ldots, \infty).$$
More generally, the marginal of $X_k$ is
$$F_{X_k}(x_k) = \lim_{x_i \to \infty, \, i \ne k} F_{X_1, \ldots, X_n}(x_1, \ldots, x_n) = F_{X_1, \ldots, X_n}(\infty, \ldots, \infty, x_k, \infty, \ldots, \infty).$$
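The four answers in the example with $f(x,y) = 2e^{-x}e^{-2y}$ can be sanity-checked numerically. The sketch below uses only the Python standard library; the midpoint-rule integrator, the grid size `n`, and the cutoff `BIG` standing in for $+\infty$ are implementation choices of this note, not part of the lecture.

```python
from math import exp

# Joint density from the example: f(x, y) = 2 e^{-x} e^{-2y} for x, y > 0.
def f(x, y):
    return 2.0 * exp(-x) * exp(-2.0 * y)

def integrate(g, a, b, n=500):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

BIG = 40.0  # stands in for +infinity; the tail beyond 40 is negligible here

# (a) P(X > 1, Y < 1): x over (1, inf), y over (0, 1).
pa = integrate(lambda y: integrate(lambda x: f(x, y), 1.0, BIG), 0.0, 1.0)

# (b) P(X < Y): the inner integral runs over 0 < x < y.
pb = integrate(lambda y: integrate(lambda x: f(x, y), 0.0, y), 0.0, BIG)

# (c) P(X < a) and (d) the marginal density f_X(a), checked at a = 2.
a = 2.0
pc = integrate(lambda x: integrate(lambda y: f(x, y), 0.0, BIG), 0.0, a)
fx = integrate(lambda y: f(a, y), 0.0, BIG)

print(round(pa, 4), "vs", round((1 - exp(-2)) * exp(-1), 4))
print(round(pb, 4), "vs", round(1 / 3, 4))
print(round(pc, 4), "vs", round(1 - exp(-a), 4))
print(round(fx, 4), "vs", round(exp(-a), 4))
```

Each printed pair compares the numerical integral against the closed-form answer derived in the solution above.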
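The recipe for marginals (send the unwanted arguments to infinity) can be illustrated with the bivariate example above. The closed form $F_{X,Y}(x,y) = (1 - e^{-x})(1 - e^{-2y})$ used below is not stated in the notes; it follows by integrating the example density over $(0, x] \times (0, y]$, and `BIG` is an arbitrary stand-in for $+\infty$.

```python
from math import exp

# Joint cdf of the example density f(x, y) = 2 e^{-x} e^{-2y}, x, y > 0
# (assumed closed form, obtained by integrating the density):
def F(x, y):
    return (1.0 - exp(-x)) * (1.0 - exp(-2.0 * y))

# Sending the unwanted argument to infinity recovers the marginal cdf:
# F_X(x) = lim_{y -> inf} F_{X,Y}(x, y) = 1 - e^{-x}.
BIG = 50.0  # stands in for +infinity
for x in [0.5, 1.0, 2.0]:
    print(x, round(F(x, BIG), 6), "vs", round(1.0 - exp(-x), 6))
```

At `y = BIG` the factor $1 - e^{-2y}$ is already 1 to machine precision, so evaluating the joint cdf there matches the limit that defines the marginal.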