# 10. Joint Moments and Joint Characteristic Functions


Following section 6, in this section we shall introduce various parameters to compactly represent the information contained in the joint p.d.f. of two r.vs. Given two r.vs $X$ and $Y$ and a function $g(x,y)$, define the r.v

$$Z = g(X, Y). \tag{10-1}$$

Using (6-2), we can define the mean of $Z$ to be

$$\mu_Z = E(Z) = \int_{-\infty}^{+\infty} z\, f_Z(z)\, dz. \tag{10-2}$$

PILLAI
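As a concrete numerical illustration of (10-1) and (10-2), here is a minimal Monte Carlo sketch. The distributions and the choice $g(x,y) = xy$ are our own illustrative assumptions, not from the lecture; for independent $X, Y \sim \text{Uniform}(0,1)$ the exact answer is $E(Z) = E(X)E(Y) = 1/4$.

```python
import numpy as np

# Hypothetical example (not from the lecture): X, Y ~ Uniform(0,1)
# independent, and Z = g(X, Y) = X*Y, so that E[Z] = 1/2 * 1/2 = 1/4.
rng = np.random.default_rng(0)
n = 500_000
x = rng.uniform(0.0, 1.0, n)
y = rng.uniform(0.0, 1.0, n)

z = x * y          # samples of the r.v. Z = g(X, Y) in (10-1)
mu_z = z.mean()    # Monte Carlo estimate of E(Z) in (10-2)

print(mu_z)        # close to 0.25
```

Note that we never needed the density $f_Z(z)$ itself here, which anticipates the point made next.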

However, the situation here is similar to that in (6-13), and it is possible to express the mean of $Z = g(X,Y)$ in terms of $f_{XY}(x,y)$ without computing $f_Z(z)$. To see this, recall from (5-26) and (7-10) that

$$f_Z(z)\,\Delta z = P\big(z < Z \le z + \Delta z\big) = P\big(z < g(X,Y) \le z + \Delta z\big) = \iint_{(x,y)\in D_z} f_{XY}(x,y)\, dx\, dy, \tag{10-3}$$

where $D_z$ is the region in the $xy$ plane satisfying the above inequality. From (10-3), we get

$$z\, f_Z(z)\, \Delta z = \iint_{(x,y)\in D_z} g(x,y)\, f_{XY}(x,y)\, dx\, dy. \tag{10-4}$$

As $\Delta z$ covers the entire $z$ axis, the corresponding regions $D_z$ are nonoverlapping, and they cover the entire $xy$ plane.
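The strip identity (10-3) can be checked numerically. In this sketch (our own illustrative setup, not from the lecture) we take $g(x,y) = x + y$ with $X, Y \sim \text{Uniform}(0,1)$ independent, so $Z = X + Y$ has the triangular density $f_Z(z) = z$ on $[0,1]$; the fraction of samples falling in the strip $D_z$ should approximate $f_Z(z)\,\Delta z$.

```python
import numpy as np

# Hypothetical illustration of (10-3): count samples in the strip
# D_z = {(x, y) : z < g(x, y) <= z + dz} for g(x, y) = x + y.
rng = np.random.default_rng(4)
n = 1_000_000
x = rng.uniform(0.0, 1.0, n)
y = rng.uniform(0.0, 1.0, n)
z_vals = x + y

z, dz = 0.5, 0.01
strip = (z_vals > z) & (z_vals <= z + dz)
prob = strip.mean()    # estimates P(z < Z <= z + dz) = f_Z(z) * dz

# Since f_Z(z) = z on [0, 1], the exact value is the integral of z
# over [0.5, 0.51], i.e. 0.00505, roughly f_Z(0.5) * dz = 0.005.
print(prob)
```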
By integrating (10-4), we obtain the useful formula

$$E(Z) = \int_{-\infty}^{+\infty} z\, f_Z(z)\, dz = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} g(x,y)\, f_{XY}(x,y)\, dx\, dy \tag{10-5}$$

or

$$E[g(X,Y)] = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} g(x,y)\, f_{XY}(x,y)\, dx\, dy. \tag{10-6}$$

If $X$ and $Y$ are discrete-type r.vs, then

$$E[g(X,Y)] = \sum_i \sum_j g(x_i, y_j)\, P(X = x_i,\, Y = y_j). \tag{10-7}$$

Since expectation is a linear operator, we also get

$$E\Big[\sum_k a_k\, g_k(X,Y)\Big] = \sum_k a_k\, E[g_k(X,Y)]. \tag{10-8}$$
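The discrete formula (10-7) and the linearity property (10-8) can be verified exactly on a small joint p.m.f. The table below is a made-up example of ours, not from the lecture:

```python
import numpy as np

# Hypothetical joint p.m.f. P(X = x_i, Y = y_j), x in {0,1}, y in {0,1,2}.
xs = np.array([0.0, 1.0])
ys = np.array([0.0, 1.0, 2.0])
P = np.array([[0.10, 0.20, 0.10],   # rows index x_i, columns index y_j
              [0.25, 0.15, 0.20]])
assert np.isclose(P.sum(), 1.0)

def g(x, y):
    return (x + y) ** 2             # an arbitrary g(x, y)

def h(x, y):
    return x * y                    # a second function, for (10-8)

def expect(f):
    """E[f(X,Y)] via the double sum in (10-7)."""
    return sum(f(xs[i], ys[j]) * P[i, j]
               for i in range(len(xs)) for j in range(len(ys)))

E_g, E_h = expect(g), expect(h)

# Linearity check (10-8): E[a*g + b*h] = a*E[g] + b*E[h]
a, b = 2.0, -3.0
E_lin = expect(lambda x, y: a * g(x, y) + b * h(x, y))
assert np.isclose(E_lin, a * E_g + b * E_h)
print(E_g)
```

For this table, working the sum by hand gives $E[(X+Y)^2] = 0.20 + 0.40 + 0.25 + 0.60 + 1.80 = 3.25$.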

If $X$ and $Y$ are independent r.vs, it is easy to see that $Z = g(X)$ and $W = h(Y)$ are always independent of each other. In that case, using (10-7), we get the interesting result

$$E[g(X)h(Y)] = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} g(x)h(y)\, f_X(x)\, f_Y(y)\, dx\, dy = \int_{-\infty}^{+\infty} g(x)\, f_X(x)\, dx \int_{-\infty}^{+\infty} h(y)\, f_Y(y)\, dy = E[g(X)]\, E[h(Y)]. \tag{10-9}$$

However, (10-9) is in general not true (if $X$ and $Y$ are not independent). In the case of one random variable (see (10-6)), we defined the parameters mean and variance to represent its average behavior. How does one parametrically represent similar cross-behavior between two random variables? Towards this, we can generalize the variance definition given in (6-16) as shown below.
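Here is a minimal Monte Carlo sketch of the factorization (10-9). The choices below, independent standard normals with $g(x) = x^2$ and $h(y) = \cos y$, are our own illustrative assumptions; exactly, $E[g(X)] = 1$ and $E[h(Y)] = e^{-1/2}$.

```python
import numpy as np

# Hypothetical check of (10-9): X, Y independent standard normals,
# g(x) = x**2 and h(y) = cos(y). Exactly, E[g(X)] * E[h(Y)] = exp(-1/2).
rng = np.random.default_rng(1)
n = 1_000_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)

lhs = np.mean(x**2 * np.cos(y))            # E[g(X) h(Y)]
rhs = np.mean(x**2) * np.mean(np.cos(y))   # E[g(X)] * E[h(Y)]

print(lhs, rhs)   # both close to exp(-0.5), about 0.6065
```

Replacing `y` with a variable dependent on `x` (say `y = x`) makes the two sides visibly disagree, which is the sense in which (10-9) fails without independence.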
**Covariance:** Given any two r.vs $X$ and $Y$, define

$$\mathrm{Cov}(X,Y) = E\big[(X - \mu_X)(Y - \mu_Y)\big]. \tag{10-10}$$

By expanding and simplifying the right side of (10-10), we also get

$$\mathrm{Cov}(X,Y) = E(XY) - \mu_X \mu_Y = E(XY) - E(X)E(Y) = \overline{XY} - \overline{X}\,\overline{Y}. \tag{10-11}$$

It is easy to see that

$$\big|\mathrm{Cov}(X,Y)\big| \le \sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}. \tag{10-12}$$

To see (10-12), let $U = aX + Y$, so that

$$\mathrm{Var}(U) = E\big[\{a(X - \mu_X) + (Y - \mu_Y)\}^2\big] = a^2\,\mathrm{Var}(X) + 2a\,\mathrm{Cov}(X,Y) + \mathrm{Var}(Y) \ge 0. \tag{10-13}$$
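On sample moments, the two covariance expressions (10-10) and (10-11) agree identically, and the bound (10-12) holds as well (it is Cauchy-Schwarz). A sketch, with made-up correlated data of our own choosing:

```python
import numpy as np

# Hypothetical data: Y = 0.5*X + noise, so Cov(X, Y) = 0.5 in theory.
rng = np.random.default_rng(2)
n = 100_000
x = rng.standard_normal(n)
y = 0.5 * x + 0.3 * rng.standard_normal(n)

mu_x, mu_y = x.mean(), y.mean()

# (10-10): Cov(X, Y) = E[(X - mu_X)(Y - mu_Y)]
cov_def = np.mean((x - mu_x) * (y - mu_y))

# (10-11): Cov(X, Y) = E[XY] - E[X] E[Y]
cov_alt = np.mean(x * y) - mu_x * mu_y

# (10-12): |Cov(X, Y)| <= sqrt(Var(X) Var(Y))
bound = np.sqrt(x.var() * y.var())

print(cov_def, cov_alt, bound)   # cov_def == cov_alt, both within the bound
```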

The right side of (10-13) represents a quadratic in the variable $a$ that has no distinct real roots (Fig. 10.1). Thus the roots are imaginary (or double), and hence the discriminant $4[\mathrm{Cov}(X,Y)]^2 - 4\,\mathrm{Var}(X)\,\mathrm{Var}(Y)$ must be non-positive, and that gives (10-12). Using (10-12), we may define the normalized parameter

$$\rho_{XY} = \frac{\mathrm{Cov}(X,Y)}{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}}, \qquad -1 \le \rho_{XY} \le 1, \tag{10-14}$$

or

$$\mathrm{Cov}(X,Y) = \rho_{XY}\, \sigma_X \sigma_Y, \tag{10-15}$$

and it represents the correlation coefficient between $X$ and $Y$.
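A short sketch of the correlation coefficient on samples (the data construction below is our own illustrative assumption): with $Y = -0.8X + 0.6N$ for independent standard normals $X, N$, one has $\sigma_Y = 1$ and $\rho_{XY} = -0.8$ exactly, and on any sample $\rho$ stays inside $[-1, 1]$ while $\mathrm{Cov} = \rho\,\sigma_X \sigma_Y$ holds by construction.

```python
import numpy as np

# Hypothetical check of (10-14)/(10-15): Y = -0.8*X + 0.6*N gives
# sigma_Y = sqrt(0.64 + 0.36) = 1 and rho_XY = -0.8 in theory.
rng = np.random.default_rng(3)
n = 200_000
x = rng.standard_normal(n)
y = -0.8 * x + 0.6 * rng.standard_normal(n)

cov = np.mean(x * y) - x.mean() * y.mean()   # (10-11)
sigma_x, sigma_y = x.std(), y.std()

rho = cov / (sigma_x * sigma_y)              # (10-14)
print(rho)                                   # close to -0.8, within [-1, 1]
```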