ISYE 2028 A and B, Lecture 8
Kobi Abayomi
March 25, 2009

1 Independent Random Variables

Two random variables are independent if

    p_{X,Y}(x, y) = p_X(x) p_Y(y)    (1)

or

    f_{X,Y}(x, y) = f_X(x) f_Y(y)    (2)

This is directly analogous to the general probability rules. The conditional probability mass and density functions are then just

    p_{X|Y}(x | y) = p_{X,Y}(x, y) / p_Y(y) = p_X(x) p_Y(y) / p_Y(y) = p_X(x)

    f_{X|Y}(x | y) = f_{X,Y}(x, y) / f_Y(y) = f_X(x) f_Y(y) / f_Y(y) = f_X(x)

Dependence is any violation of this condition.

1.1 Example

Let

    f_{X_1,X_2}(x_1, x_2) = (x_1 + x_2) · 1{0 < x_1 < 1, 0 < x_2 < 1}

Are X_1 and X_2 independent?
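In the discrete case, the independence condition in (1) can be checked directly on a pmf table. A minimal sketch (not from the lecture; the function name and example pmfs are illustrative): compute both marginals and test whether the joint factors into their product.

```python
# Check the discrete independence condition p_{X,Y}(x,y) = p_X(x) p_Y(y)
# for a small joint pmf stored as a dict {(x, y): probability}.
from itertools import product

def is_independent(joint, tol=1e-12):
    """Return True if the joint pmf factors into its marginals."""
    xs = sorted({x for x, _ in joint})
    ys = sorted({y for _, y in joint})
    # Marginals: sum the joint over the other coordinate.
    px = {x: sum(joint.get((x, y), 0.0) for y in ys) for x in xs}
    py = {y: sum(joint.get((x, y), 0.0) for x in xs) for y in ys}
    return all(abs(joint.get((x, y), 0.0) - px[x] * py[y]) <= tol
               for x, y in product(xs, ys))

# Independent example: two fair coins tossed separately.
indep = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
# Dependent example: two perfectly correlated coins.
dep = {(0, 0): 0.5, (1, 1): 0.5, (0, 1): 0.0, (1, 0): 0.0}
```

For the correlated pair, each marginal is still fair (0.5, 0.5), yet the joint puts 0.5 where the product of marginals is 0.25, so the condition fails.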

Well...:

    f_1(x_1) = ∫ f(x_1, x_2) dx_2 = ∫_0^1 (x_1 + x_2) dx_2 = (x_1 + 1/2) · 1{0 < x_1 < 1}

    f_2(x_2) = ∫ f(x_1, x_2) dx_1 = ∫_0^1 (x_1 + x_2) dx_1 = (1/2 + x_2) · 1{0 < x_2 < 1}

But x_1 + x_2 ≠ (x_1 + 1/2)(x_2 + 1/2). The answer is no.

1.2 Independence is factorization of the pdf

In general, for X_1, X_2 ~ f(x_1, x_2), if f(x_1, x_2) = g(x_1) h(x_2), this implies that X_1 is independent of X_2.

This is not a formal "proof"[1]: we know that we can always write a joint pdf as the product of a conditional and a marginal,

    f_{X_1,X_2}(x_1, x_2) = f_{X_2|X_1}(x_2 | x_1) f_{X_1}(x_1)

If the functional form of f_{X_2|X_1}(x_2 | x_1) does not depend on x_1, say f_{X_2|X_1}(x_2 | x_1) = h(x_2), then integrate both sides over x_1,

    ∫ f_{X_1,X_2}(x_1, x_2) dx_1 = ∫ f_{X_2|X_1}(x_2 | x_1) f_{X_1}(x_1) dx_1

which yields

    f_{X_2}(x_2) = h(x_2) ∫ f_{X_1}(x_1) dx_1

and of course, since the marginal integrates to 1,

    f_{X_2}(x_2) = h(x_2) = f_{X_2|X_1}(x_2 | x_1)

[1] But then, what is, these days...?
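The calculation in Example 1.1 can be checked numerically. A rough sketch (not from the lecture; a midpoint Riemann sum stands in for the exact integral, and the variable names are illustrative): recover the marginal f_1, then compare the joint density against the product of the marginals at one point.

```python
# Numeric check of Example 1.1: f(x1, x2) = x1 + x2 on the unit square.
N = 2000
h = 1.0 / N
grid = [(i + 0.5) * h for i in range(N)]  # midpoints of (0, 1)

def f(x1, x2):
    return x1 + x2

def marginal_1(x1):
    # f_1(x1) = integral of f over x2 in (0, 1); exact answer is x1 + 1/2.
    return sum(f(x1, x2) for x2 in grid) * h

x1, x2 = 0.3, 0.7
m1 = marginal_1(x1)   # ≈ 0.3 + 1/2 = 0.8
m2 = 0.5 + x2         # f_2(x2) = 1/2 + x2, by symmetry; here 1.2
# Joint at (0.3, 0.7) is 1.0, but the product of marginals is 0.96:
print(f(x1, x2), m1 * m2)
```

The mismatch (1.0 vs. 0.96) at a single point is already enough to rule out independence, matching the algebraic conclusion above.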
Thus, independence of X_2 and X_1 is equivalent to factorability of the joint distribution. The definitions of independence are straightforward. Often we can exploit the strong assumption of independence to simplify modelling and get interesting results.

1.3 Example

In n + m independent trials, each with probability p of success, let X be the number of successes in the first n trials and Y the number of successes in the final m trials. Then

    P(X = x, Y = y) = C(n, x) p^x (1 - p)^{n - x} · C(m, y) p^y (1 - p)^{m - y}

and it is apparent that X and Y are independent.

What about Z = X + Y? Are Z and X independent?

    P(X = x, Z = z) = P(X = x, Y = z - x) = C(n, x) p^x (1 - p)^{n - x} · C(m, z - x) p^{z - x} (1 - p)^{m - (z - x)}

This is not the product P(X = x) P(Z = z), which implies that X and Z are not independent.

2 Mutual Independence

Here's an example of mutual independence. Let

    f(x, y, z) = e^{-(x + y + z)} · 1{0 < x, y, z < ∞}

The cumulative distribution, then, is

    F(x, y, z) = ∫_0^z ∫_0^y ∫_0^x e^{-(u + v + w)} du dv dw = ··· = (1 - e^{-x})(1 - e^{-y})(1 - e^{-z})

The joint distribution is completely factorable: X, Y and Z are mutually independent. In general, for X_1, ..., X_n we say they are mutually independent if
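Example 1.3 can be verified exactly with small n and m. A sketch (the values n = 3, m = 2, p = 0.4 are chosen for illustration, not taken from the lecture): build the binomial pmfs with math.comb, then compare the joint P(X = x, Z = z) against the product P(X = x) P(Z = z).

```python
# Exact check of Example 1.3: X ~ Bin(n, p), Y ~ Bin(m, p) independent,
# Z = X + Y ~ Bin(n + m, p).
from math import comb

n, m, p = 3, 2, 0.4

def pX(x):
    return comb(n, x) * p**x * (1 - p)**(n - x)

def pY(y):
    return comb(m, y) * p**y * (1 - p)**(m - y)

def pZ(z):
    return comb(n + m, z) * p**z * (1 - p)**(n + m - z)

def joint_XZ(x, z):
    # P(X = x, Z = z) = P(X = x, Y = z - x) = pX(x) pY(z - x).
    y = z - x
    return pX(x) * pY(y) if 0 <= y <= m else 0.0

# Sanity check: summing the joint over x recovers the Bin(n+m, p) marginal.
assert abs(sum(joint_XZ(x, 2) for x in range(n + 1)) - pZ(2)) < 1e-12

# At (x, z) = (0, 0) the joint and the product of marginals disagree:
print(joint_XZ(0, 0), pX(0) * pZ(0))
```

Here joint_XZ(0, 0) = (1-p)^{n+m} ≈ 0.078 while pX(0) pZ(0) = (1-p)^{2n+m} ≈ 0.017, so X and Z are indeed dependent, even though X and Y are independent by construction.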

    P(X_1 ≤ x_1, ..., X_n ≤ x_n) = ∏_{i=1}^n P(X_i ≤ x_i)    (3)

2.1 Expectations of independent random variables

Recall our result for expectations of sums of random variables:

    E(X_1 + ··· + X_n) = ∑_{i=1}^n E(X_i)

This result holds regardless of the dependence structure of the joint distribution f(x_1, ..., x_n). For a product of random variables,

    E(X_1 ··· X_n) = ∏_{i=1}^n E(X_i)

holds only under independence.
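A quick Monte Carlo sketch (not from the lecture; sample sizes and seed are arbitrary) of the product rule: for independent uniforms, E(XY) ≈ E(X)E(Y) = 1/4, while for the fully dependent pair (X, X), E(X·X) = E(X²) = 1/3 ≠ (1/2)² = 1/4.

```python
# Monte Carlo illustration: the product rule E(XY) = E(X)E(Y) needs
# independence; it fails for the dependent pair (X, X).
import random

random.seed(0)
N = 100_000
xs = [random.random() for _ in range(N)]  # X ~ Uniform(0, 1)
ys = [random.random() for _ in range(N)]  # Y ~ Uniform(0, 1), independent of X

def mean(v):
    return sum(v) / len(v)

e_xy = mean([x * y for x, y in zip(xs, ys)])  # ≈ E(X) E(Y) = 0.25
e_xx = mean([x * x for x in xs])              # ≈ E(X^2) = 1/3, not 0.25

print(e_xy, e_xx)
```

The sum rule, by contrast, needs no such check: mean([x + x for x in xs]) is 2·mean(xs) whatever the dependence, exactly as the lecture's result for sums states.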