10. Joint Moments and Joint Characteristic Functions

Following section 6, in this section we shall introduce various parameters to compactly represent the information contained in the joint p.d.f. of two r.vs. Given two r.vs $X$ and $Y$ and a function $g(x, y)$, define the r.v.
$$Z = g(X, Y). \tag{10-1}$$
Using (6-2), we can define the mean of $Z$ to be
$$\mu_Z = E(Z) = \int_{-\infty}^{+\infty} z\, f_Z(z)\, dz. \tag{10-2}$$

PILLAI

However, the situation here is similar to that in (6-13), and it is possible to express the mean of $Z = g(X, Y)$ in terms of $f_{XY}(x, y)$ without computing $f_Z(z)$. To see this, recall from (5-26) and (7-10) that
$$f_Z(z)\,\Delta z = P(z < Z \le z + \Delta z) = P(z < g(X, Y) \le z + \Delta z) = \sum_{(x,y)\in D_{\Delta z}} f_{XY}(x, y)\,\Delta x\,\Delta y, \tag{10-3}$$
where $D_{\Delta z}$ is the region in the $xy$-plane satisfying the inequality $z < g(x, y) \le z + \Delta z$. From (10-3), we get
$$z\, f_Z(z)\,\Delta z = \sum_{(x,y)\in D_{\Delta z}} g(x, y)\, f_{XY}(x, y)\,\Delta x\,\Delta y. \tag{10-4}$$
As $\Delta z$ covers the entire $z$-axis, the corresponding regions $D_{\Delta z}$ are non-overlapping, and they cover the entire $xy$-plane. By integrating (10-4), we obtain the useful formula
$$E(Z) = \int_{-\infty}^{+\infty} z\, f_Z(z)\, dz = \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} g(x, y)\, f_{XY}(x, y)\, dx\, dy, \tag{10-5}$$
or
$$E[g(X, Y)] = \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} g(x, y)\, f_{XY}(x, y)\, dx\, dy. \tag{10-6}$$
If $X$ and $Y$ are discrete-type r.vs, then
$$E[g(X, Y)] = \sum_i \sum_j g(x_i, y_j)\, P(X = x_i, Y = y_j). \tag{10-7}$$
Since expectation is a linear operator, we also get
$$E\Big[\sum_k a_k\, g_k(X, Y)\Big] = \sum_k a_k\, E[g_k(X, Y)]. \tag{10-8}$$
If $X$ and $Y$ are independent r.vs, it is easy to see that $Z = g(X)$ and $W = h(Y)$ are always independent of each other. In that case, using (10-7), we get the interesting result
$$E[g(X)\, h(Y)] = \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} g(x)\, h(y)\, f_X(x)\, f_Y(y)\, dx\, dy = \int_{-\infty}^{+\infty} g(x)\, f_X(x)\, dx \int_{-\infty}^{+\infty} h(y)\, f_Y(y)\, dy = E[g(X)]\, E[h(Y)]. \tag{10-9}$$
However, (10-9) is in general not true (if $X$ and $Y$ are not independent). In the case of one random variable (see (10-6)), we defined the parameters mean and variance to represent its average behavior. How does one parametrically represent similar cross-behavior between two random variables?
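Before turning to that question, the discrete-sum formula (10-7) and the independence factorization (10-9) can be sanity-checked numerically. Below is a minimal Python sketch; the pmf values and the functions `g`, `h` are arbitrary choices for illustration, not taken from the text.

```python
# Numerical check of (10-7) and (10-9) for discrete-type r.vs.
# Independent X and Y: the joint pmf factors into the product of marginals.
px = {0: 0.3, 1: 0.7}          # hypothetical marginal P(X = x)
py = {1: 0.4, 2: 0.6}          # hypothetical marginal P(Y = y)
pxy = {(x, y): px[x] * py[y] for x in px for y in py}

def E(g, pmf):
    """E[g(X, Y)] computed via the double sum in (10-7)."""
    return sum(g(x, y) * p for (x, y), p in pmf.items())

g = lambda x, y: x * x        # a function of X alone
h = lambda x, y: 3 * y + 1    # a function of Y alone

lhs = E(lambda x, y: g(x, y) * h(x, y), pxy)   # E[g(X) h(Y)]
rhs = E(g, pxy) * E(h, pxy)                    # E[g(X)] E[h(Y)]
print(abs(lhs - rhs) < 1e-12)   # (10-9) holds under independence
```

Replacing `pxy` with a joint pmf that does not factor into its marginals makes `lhs` and `rhs` disagree, which is the "in general not true" remark after (10-9).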
Towards this, we can generalize the variance definition given in (6-16) as shown below.

Covariance: Given any two r.vs $X$ and $Y$, define
$$\mathrm{Cov}(X, Y) = E\big[(X - \mu_X)(Y - \mu_Y)\big]. \tag{10-10}$$
By expanding and simplifying the right side of (10-10), we also get
$$\mathrm{Cov}(X, Y) = E(XY) - \mu_X \mu_Y = E(XY) - E(X)\, E(Y) = \overline{XY} - \overline{X}\,\overline{Y}. \tag{10-11}$$
It is easy to see that
$$\mathrm{Cov}(X, Y)^2 \le \mathrm{Var}(X)\,\mathrm{Var}(Y). \tag{10-12}$$
To see (10-12), let $U = aX + Y$, so that
$$\mathrm{Var}(U) = E\big[(a(X - \mu_X) + (Y - \mu_Y))^2\big] = a^2\,\mathrm{Var}(X) + 2a\,\mathrm{Cov}(X, Y) + \mathrm{Var}(Y) \ge 0. \tag{10-13}$$
The right side of (10-13) represents a quadratic in the ...
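The equivalence of the two covariance forms (10-10) and (10-11), and the bound (10-12), can be checked on simulated data. The sketch below uses a made-up linear model for the correlated pair; the sample size and distributions are arbitrary choices.

```python
# Check (10-11) against the definition (10-10), and the bound (10-12),
# using sample moments of simulated data.
import random

random.seed(0)
n = 100_000
# Correlated pair: Y depends on X plus independent noise (hypothetical model).
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [2 * x + random.gauss(0, 1) for x in xs]

mean = lambda v: sum(v) / len(v)
mx, my = mean(xs), mean(ys)

cov_def = mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])   # (10-10)
cov_alt = mean([x * y for x, y in zip(xs, ys)]) - mx * my       # (10-11)

var = lambda v, m: mean([(t - m) ** 2 for t in v])
print(abs(cov_def - cov_alt) < 1e-8)              # the two forms agree
print(cov_def ** 2 <= var(xs, mx) * var(ys, my))  # (10-12) holds
```

The bound (10-12) holds exactly for sample moments as well, since the discriminant argument in (10-13) applies to any inner product, including the empirical one.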
This note was uploaded on 10/16/2009 for the course EL el6303 taught by Professor Prof during the Spring '09 term at NYU Poly.