Variances of General Linear Combinations

Let $X$ be an $m \times 1$ random vector with variance matrix $\operatorname{Var}(X)$.

(var3) If $A$ is a $k \times m$ nonrandom matrix and $b$ is a nonrandom $k \times 1$ vector, then
\[
\operatorname{Var}(AX + b) = A \operatorname{Var}(X) A'.
\]

Proof: Let $Y = AX + b$, so that $\mu_Y = A\mu_X + b$. Then $Y - \mu_Y = A(X - \mu_X)$, and so
\[
\mathrm{E}\bigl[(Y - \mu_Y)(Y - \mu_Y)'\bigr]
= \mathrm{E}\bigl[A(X - \mu_X)(X - \mu_X)'A'\bigr]
= A\,\mathrm{E}\bigl[(X - \mu_X)(X - \mu_X)'\bigr]\,A'
= A \operatorname{Var}(X) A'.
\]
As a corollary, if $A$ is an $m \times m$ symmetric matrix, then $\operatorname{Var}(AX + b) = A \operatorname{Var}(X) A$.
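As a quick numerical sanity check of (var3), here is a minimal simulation sketch; the dimensions, the random-number setup, and all variable names are illustrative assumptions, not part of the notes. Because each simulated draw of $Y = AX + b$ is an exact linear transformation of the corresponding draw of $X$, the identity holds for the sample variance matrices up to floating-point rounding.

```python
# Hedged sketch (assumed setup): check Var(AX + b) = A Var(X) A'
# on sample covariance matrices.
import numpy as np

rng = np.random.default_rng(0)
m, k, n = 3, 2, 10_000            # X is m x 1, A is k x m, n simulated draws

A = rng.normal(size=(k, m))       # nonrandom k x m matrix
b = rng.normal(size=k)            # nonrandom k x 1 vector

L = rng.normal(size=(m, m))       # arbitrary mixing to give X a nontrivial Var
X = rng.normal(size=(n, m)) @ L.T # rows are draws of X'
Y = X @ A.T + b                   # rows are draws of (AX + b)'

lhs = np.cov(Y, rowvar=False)               # sample Var(AX + b)
rhs = A @ np.cov(X, rowvar=False) @ A.T     # A (sample Var X) A'
print(np.allclose(lhs, rhs))                # True: exact up to rounding
```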
The Covariance Between Two Random Vectors

Suppose $X$ is a $k \times 1$ vector and $Y$ is an $m \times 1$ vector. The covariance between $X$ and $Y$ is defined as the $k \times m$ matrix
\[
\operatorname{Cov}(X, Y) = \mathrm{E}\bigl[(X - \mu_X)(Y - \mu_Y)'\bigr] = \mathrm{E}(XY') - \mu_X \mu_Y'.
\]
Notice that in the vector case, $\operatorname{Cov}(X, Y) \neq \operatorname{Cov}(Y, X)$ except in special cases; in fact, $\operatorname{Cov}(X, Y)$ is $k \times m$ and $\operatorname{Cov}(Y, X)$ is $m \times k$.
Using matrix multiplication it is easy to show
\[
\operatorname{Cov}(X, Y) =
\begin{pmatrix}
\operatorname{Cov}(X_1, Y_1) & \operatorname{Cov}(X_1, Y_2) & \cdots & \operatorname{Cov}(X_1, Y_m) \\
\operatorname{Cov}(X_2, Y_1) & \operatorname{Cov}(X_2, Y_2) & \cdots & \operatorname{Cov}(X_2, Y_m) \\
\vdots & \vdots & & \vdots \\
\operatorname{Cov}(X_k, Y_1) & \operatorname{Cov}(X_k, Y_2) & \cdots & \operatorname{Cov}(X_k, Y_m)
\end{pmatrix}.
\]
We say $X$ and $Y$ are uncorrelated if $\operatorname{Cov}(X, Y) = 0$, which means each element of $X$ is uncorrelated with each element of $Y$.
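The elementwise description can be checked directly in code. The numpy sketch below (all names and the simulated data are illustrative assumptions) builds the $k \times m$ sample cross-covariance matrix both ways, and its transpose illustrates the $\operatorname{Cov}(Y, X)$ relationship.

```python
# Illustrative sketch: the matrix formula E[(X - muX)(Y - muY)'] agrees
# entry by entry with the scalar covariances Cov(X_i, Y_j).
import numpy as np

rng = np.random.default_rng(1)
k, m, n = 2, 3, 500                # X is k x 1, Y is m x 1, n paired draws
X = rng.normal(size=(n, k))
Y = rng.normal(size=(n, m))

Xc = X - X.mean(axis=0)            # center each coordinate
Yc = Y - Y.mean(axis=0)
cov_XY = Xc.T @ Yc / (n - 1)       # k x m sample version of Cov(X, Y)

# Entry (i, j) computed one scalar covariance at a time.
elementwise = np.array([[np.cov(X[:, i], Y[:, j])[0, 1] for j in range(m)]
                        for i in range(k)])
print(np.allclose(cov_XY, elementwise))   # True
print(cov_XY.shape, cov_XY.T.shape)       # (k, m) vs (m, k): Cov(Y, X) = Cov(X, Y)'
```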
(cov3) If $A$ is $r \times k$, $b$ is $r \times 1$, $C$ is $s \times m$, and $d$ is $s \times 1$, and each is nonrandom, then
\[
\operatorname{Cov}(AX + b, CY + d) = A \operatorname{Cov}(X, Y)\, C'.
\]

If $X$ is $k \times 1$ and $Y$ is a scalar, then
\[
\operatorname{Cov}(X, Y) =
\begin{pmatrix}
\operatorname{Cov}(X_1, Y) \\
\operatorname{Cov}(X_2, Y) \\
\vdots \\
\operatorname{Cov}(X_k, Y)
\end{pmatrix}
\quad\text{and}\quad
\operatorname{Cov}(Y, X) =
\bigl(\operatorname{Cov}(X_1, Y) \;\; \operatorname{Cov}(X_2, Y) \;\; \cdots \;\; \operatorname{Cov}(X_k, Y)\bigr).
\]
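Property (cov3) can also be verified numerically. In the hedged sketch below (the dimensions and names are assumptions for illustration), the sample cross-covariance obeys the same identity exactly, since centering and linear transformation commute draw by draw.

```python
# Illustrative check of (cov3): Cov(AX + b, CY + d) = A Cov(X, Y) C'.
import numpy as np

rng = np.random.default_rng(2)
r, k, s, m, n = 2, 3, 4, 2, 1_000
A, b = rng.normal(size=(r, k)), rng.normal(size=r)
C, d = rng.normal(size=(s, m)), rng.normal(size=s)
X, Y = rng.normal(size=(n, k)), rng.normal(size=(n, m))

def cross_cov(U, V):
    """Sample version of Cov(U, V) = E[(U - muU)(V - muV)']."""
    Uc, Vc = U - U.mean(axis=0), V - V.mean(axis=0)
    return Uc.T @ Vc / (len(U) - 1)

lhs = cross_cov(X @ A.T + b, Y @ C.T + d)   # Cov(AX + b, CY + d), r x s
rhs = A @ cross_cov(X, Y) @ C.T             # A Cov(X, Y) C'
print(np.allclose(lhs, rhs))                # True
```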
Minkowski Inequality

Let $X$ and $Y$ be random variables. Then because $|X + Y| \le |X| + |Y|$, it follows that $\mathrm{E}|X + Y| \le \mathrm{E}|X| + \mathrm{E}|Y|$, which is only useful if $\mathrm{E}|X| < \infty$ and $\mathrm{E}|Y| < \infty$. This inequality has a generalization. Let $p \ge 1$. Then the Minkowski inequality states
\[
\bigl(\mathrm{E}|X + Y|^p\bigr)^{1/p} \le \bigl(\mathrm{E}|X|^p\bigr)^{1/p} + \bigl(\mathrm{E}|Y|^p\bigr)^{1/p}.
\]
The $p = 2$ case is especially useful:
\[
\bigl(\mathrm{E}|X + Y|^2\bigr)^{1/2} \le \bigl(\mathrm{E}|X|^2\bigr)^{1/2} + \bigl(\mathrm{E}|Y|^2\bigr)^{1/2}.
\]
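The inequality is easy to see in action with simulated data. In the sketch below (the choice of distributions and all names are illustrative assumptions), expectations are replaced by sample means; since Minkowski holds for the empirical measure as well, the printed inequality is exact, not merely approximate.

```python
# Hedged numerical illustration of Minkowski's inequality:
# (E|X+Y|^p)^(1/p) <= (E|X|^p)^(1/p) + (E|Y|^p)^(1/p),
# with expectations replaced by sample means.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
X = rng.standard_t(df=5, size=n)   # heavier tails; E|X|^p finite for p < 5
Y = rng.normal(size=n)

for p in (1.0, 2.0, 3.0):
    lhs = np.mean(np.abs(X + Y) ** p) ** (1 / p)
    rhs = (np.mean(np.abs(X) ** p) ** (1 / p)
           + np.mean(np.abs(Y) ** p) ** (1 / p))
    print(f"p = {p}: {lhs:.3f} <= {rhs:.3f}")
```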
By subtracting off means we get $\mathrm{SD}(X + Y) \le \mathrm{SD}(X) + \mathrm{SD}(Y)$. The inequality extends to several random variables:
\[
\Bigl(\mathrm{E}\Bigl|\sum_{j=1}^m X_j\Bigr|^p\Bigr)^{1/p} \le \sum_{j=1}^m \bigl(\mathrm{E}|X_j|^p\bigr)^{1/p}.
\]
This is used mostly in theory, and can be proven by a clever application of Jensen's inequality.
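Both consequences can be checked on simulated data as well; the sketch below is an assumed setup for illustration, with columns of different scales so the bound is not trivially tight.

```python
# Illustrative check: SD is subadditive, and the m-term Minkowski bound
# holds for the sum of several variables (exactly, on sample moments).
import numpy as np

rng = np.random.default_rng(4)
n, m, p = 50_000, 4, 3.0
Xs = rng.normal(size=(n, m)) * np.arange(1, m + 1)   # columns with scales 1..m

S = Xs.sum(axis=1)
print(S.std() <= Xs.std(axis=0).sum())               # True: SD(sum) <= sum of SDs

lhs = np.mean(np.abs(S) ** p) ** (1 / p)
rhs = sum(np.mean(np.abs(Xs[:, j]) ** p) ** (1 / p) for j in range(m))
print(lhs <= rhs)                                    # True
```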