MA2216/ST2131 Probability Notes 9
Review & Examples and Properties of Expectation

§ 1. Linear Transformation.

Let X_1, X_2, ..., X_n be continuous random variables having joint density f, and let random variables Y_1, Y_2, ..., Y_n be defined by the linear transformation

    Y_i = Σ_{j=1}^n a_{ij} X_j,    i = 1, 2, ..., n,

where the matrix A = (a_{ij})_{n×n} has nonzero determinant det A. Thus

    (y_1, ..., y_n)^t = A (x_1, ..., x_n)^t,

where (x_1, ..., x_n)^t denotes the transpose of the row vector (x_1, ..., x_n). Since det A ≠ 0, the linear transformation A is non-singular and hence admits an inverse A^{-1}, so that

    (x_1, ..., x_n)^t = A^{-1} (y_1, ..., y_n)^t.    (1.1)

Equivalently, x = y (A^{-1})^t, where x = (x_1, ..., x_n) and y = (y_1, ..., y_n) are row vectors. Note also that the Jacobian of the transformation A is nothing but its determinant:

    J(x_1, x_2, ..., x_n) = det A.

Therefore, Y_1, Y_2, ..., Y_n have joint density f_{Y_1,...,Y_n} given by

    f_{Y_1,...,Y_n}(y_1, ..., y_n) = (1/|det A|) f(x_1, ..., x_n)
                                   = (1/|det A|) f(y (A^{-1})^t).    (1.2)
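Formula (1.2) can be checked numerically. The example below (my own illustration, not from the notes) takes X_1, X_2 i.i.d. standard normal, so f(x) = exp(-|x|²/2)/(2π), and an arbitrary invertible matrix A. For Y = AX, formula (1.2) must agree with the N(0, A·Aᵗ) density, since a linear image of a standard Gaussian vector is Gaussian with covariance A·Aᵗ:

```python
import numpy as np

def f_X(x):
    # Joint density of two i.i.d. standard normals: exp(-|x|^2/2) / (2*pi).
    return np.exp(-0.5 * x @ x) / (2 * np.pi)

# An arbitrary invertible A (assumption for illustration); det A = 5.5.
A = np.array([[2.0, 1.0],
              [0.5, 3.0]])
A_inv = np.linalg.inv(A)

def f_Y_via_formula(y):
    # Formula (1.2): f_Y(y) = f(A^{-1} y) / |det A|.
    return f_X(A_inv @ y) / abs(np.linalg.det(A))

def f_Y_gaussian(y):
    # Direct N(0, Sigma) density with Sigma = A A^t (2-dimensional case).
    Sigma = A @ A.T
    q = y @ np.linalg.solve(Sigma, y)
    return np.exp(-0.5 * q) / (2 * np.pi * np.sqrt(np.linalg.det(Sigma)))

y = np.array([1.0, -0.7])
print(f_Y_via_formula(y), f_Y_gaussian(y))  # agree up to floating-point error
```

The agreement follows from |A^{-1}y|² = yᵗ(AAᵗ)^{-1}y and det(AAᵗ) = (det A)², exactly the cancellations that (1.2) encodes.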
§ 2. Cauchy-Schwarz Inequality.

1. For sequences {a_1, a_2, ..., a_n} and {b_1, b_2, ..., b_n}, the classical Cauchy-Schwarz inequality is stated as follows:

    ( Σ_{i=1}^n a_i b_i )² ≤ ( Σ_{i=1}^n a_i² ) · ( Σ_{i=1}^n b_i² ).

If a = (a_1, a_2, ..., a_n) and b = (b_1, b_2, ..., b_n) are regarded as vectors in ℝ^n, the above inequality becomes

    |a · b| ≤ ||a|| ||b||,

where "·" refers to the inner product, and

    ||a|| := ( Σ_{i=1}^n a_i² )^{1/2}

refers to the Euclidean norm.

2. Remarks.

(i) The inequality becomes an equality if and only if a and b are linearly dependent, i.e., one of them is a scalar multiple of the other (say b = t·a for some real constant t).

(ii) The inequality can be generalized to infinite series {a_i : i = 1, 2, ...} and {b_i : i = 1, 2, ...}, provided that

    Σ_{i=1}^∞ a_i² < ∞  and  Σ_{i=1}^∞ b_i² < ∞.

In this case, the Cauchy-Schwarz inequality is given by

    ( Σ_{i=1}^∞ a_i b_i )² ≤ ( Σ_{i=1}^∞ a_i² ) · ( Σ_{i=1}^∞ b_i² ).
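The finite-sequence inequality and the equality condition of Remark (i) can be sketched numerically (the vectors below are arbitrary choices of mine, not from the notes):

```python
import numpy as np

# Arbitrary vectors in R^3 (illustrative values).
a = np.array([1.0, -2.0, 3.0])
b = np.array([0.5, 4.0, -1.0])

# Classical Cauchy-Schwarz: (sum a_i b_i)^2 <= (sum a_i^2)(sum b_i^2).
lhs = (a @ b) ** 2
rhs = (a @ a) * (b @ b)
assert lhs <= rhs

# Remark (i): equality holds when b is a scalar multiple of a.
t = -2.5
b_dep = t * a
assert np.isclose((a @ b_dep) ** 2, (a @ a) * (b_dep @ b_dep))
```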
3. Cauchy-Schwarz Inequality for R.V.'s. Let X and Y be random variables having joint density f(x, y), each with finite second moment. Then

    ( E[XY] )² ≤ E[X²] E[Y²].    (2.1)

Proof. Unless Y = -tX for some constant t (in which case (2.1) holds with equality), we have, for all real t,

    0 < E[(tX + Y)²] = E[X²] t² + 2 E[XY] t + E[Y²].

Hence the quadratic equation (in t)

    E[X²] t² + 2 E[XY] t + E[Y²] = 0

has no real roots, which implies that the discriminant must be negative:

    (2 E[XY])² - 4 E[X²] E[Y²] < 0.

Thus (2.1) is established; it is known as the Cauchy-Schwarz inequality.

4. To be specific, for discrete r.v.'s,

    ( Σ_x Σ_y x y f(x, y) )² ≤ ( Σ_x x² f_X(x) ) · ( Σ_y y² f_Y(y) );    (2.2)

and for continuous r.v.'s,

    ( ∫_ℝ ∫_ℝ x y f(x, y) dx dy )² ≤ ( ∫_ℝ x² f_X(x) dx ) · ( ∫_ℝ y² f_Y(y) dy ).    (2.3)

5. In particular (taking Y ≡ 1), for a single discrete r.v. X,

    ( E[X] )² = ( Σ_x x f_X(x) )² ≤ Σ_x x² f_X(x) = E[X²];    (2.4)

and, for a single continuous r.v. X,

    ( E[X] )² = ( ∫_ℝ x f_X(x) dx )² ≤ ∫_ℝ x² f_X(x) dx = E[X²].    (2.5)
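Inequality (2.1) and the single-variable case (2.4)/(2.5) can be illustrated with a Monte Carlo sketch. The particular pair below (X uniform on (-1, 1), Y a noisy function of X) is my own choice for illustration; note that the sample-average version of (2.1) holds exactly, since it is the classical inequality applied to the sample vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# A dependent pair, chosen for illustration: X ~ Uniform(-1, 1),
# Y = X^2 + Gaussian noise.
X = rng.uniform(-1, 1, size=200_000)
Y = X**2 + 0.3 * rng.standard_normal(X.size)

# (2.1): (E[XY])^2 <= E[X^2] E[Y^2], with expectations replaced by
# sample means.
lhs = np.mean(X * Y) ** 2
rhs = np.mean(X**2) * np.mean(Y**2)
assert lhs <= rhs

# (2.4)/(2.5): (E[X])^2 <= E[X^2], equivalently Var(X) >= 0.
assert np.mean(X) ** 2 <= np.mean(X**2)
```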
§ 3. An Application.