Independence and Correlation

If $X$ and $Y$ are independent random variables with $E[X^2] < \infty$ and $E[Y^2] < \infty$, then they are always uncorrelated: $\mathrm{Cov}(X, Y) = 0$. We already covered a more general version of this finding: in fact, for any functions $g(X)$ and $h(Y)$ with finite second moments,

$$\mathrm{Cov}(g(X), h(Y)) = 0.$$

Why? Under independence we showed $E[g(X)h(Y)] = E[g(X)]\,E[h(Y)]$, and so

$$\mathrm{Cov}(g(X), h(Y)) = E[g(X)h(Y)] - E[g(X)]\,E[h(Y)] = 0.$$
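As a quick numerical sanity check of this result (a minimal sketch; the distributions and the choices $g(x) = x^3$, $h(y) = e^y$ are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Independent draws: X ~ N(0, 1), Y ~ Uniform(0, 1).
x = rng.standard_normal(n)
y = rng.uniform(size=n)

# Arbitrary functions g(X) and h(Y) of the independent variables.
g, h = x**3, np.exp(y)

# The sample covariance should be near 0, up to Monte Carlo error.
cov_gh = np.mean(g * h) - np.mean(g) * np.mean(h)
print(f"Cov(g(X), h(Y)) = {cov_gh:.5f}")
```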
Independence is a much stronger requirement than uncorrelatedness. For example, suppose $X$ has a distribution that is symmetric about its mean of zero, and let $Y = X^2$. Then $X$ and $Y$ are clearly not independent. For example,

$$P(Y \le 1, |X| \le 1) = P(X^2 \le 1, |X| \le 1) = P(|X| \le 1)$$

$$\neq P(X^2 \le 1)\,P(|X| \le 1) = P(Y \le 1)\,P(|X| \le 1)$$

unless $P(|X| \le 1) = 0$ or $1$. (Since $P(X^2 \le 1) = P(|X| \le 1)$, the product equals $P(|X| \le 1)^2$, which differs from $P(|X| \le 1)$ except in those two degenerate cases.)
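A quick simulation check of these probabilities (a sketch assuming $X \sim N(0,1)$, which is symmetric about zero; the slides do not fix a specific distribution):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)  # symmetric about its mean of 0
y = x**2

# For N(0,1), P(|X| <= 1) ~ 0.683: the joint probability and the
# product of the marginals disagree, confirming dependence.
p_joint = np.mean((y <= 1) & (np.abs(x) <= 1))
p_prod = np.mean(y <= 1) * np.mean(np.abs(x) <= 1)
print("P(Y<=1, |X|<=1):  ", p_joint)   # ~ 0.683
print("P(Y<=1) P(|X|<=1):", p_prod)    # ~ 0.466
```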
Interestingly, $X$ and $Y = X^2$ are uncorrelated:

$$\mathrm{Cov}(X, X^2) = E[X \cdot X^2] - E[X]\,E[X^2] = E[X^3] = 0$$

by symmetry (here $E[X] = 0$, so the second term drops out, and $E[X^3] = 0$ because $X^3$ is an odd function of a symmetric variable). Even though $Y$ is a deterministic function of $X$ (once we know $X$, we always know $Y$), $Y$ is not linearly related to $X$. Correlation is often described as a measure of linear association. For many purposes it is suitable, but it can miss more complicated relationships.
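The uncorrelatedness claim also checks out numerically (same sketch setup as above, $X \sim N(0,1)$):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)
y = x**2

# Cov(X, X^2) = E[X^3] = 0 by symmetry; the sample estimate is near
# zero even though Y is a deterministic function of X.
print("Cov(X, Y): ", np.cov(x, y)[0, 1])
print("Corr(X, Y):", np.corrcoef(x, y)[0, 1])
```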
As it turns out, if $g(X)$ and $h(Y)$ are uncorrelated for all functions $g(\cdot)$ and $h(\cdot)$ with finite second moments, then $X$ and $Y$ must be independent. This gives a sense of how much stronger independence between $X$ and $Y$ is than merely saying that $X$ and $Y$ are uncorrelated.
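The contrapositive can be seen in the example above (same sketch setup, $X \sim N(0,1)$, $Y = X^2$; the pair $g(x) = x^2$, $h(y) = y$ is my choice, not from the slides): this $g$, $h$ pair has clearly nonzero covariance, which certifies that $X$ and $Y$ are dependent even though they are themselves uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1_000_000)
y = x**2

# X and Y are uncorrelated, but g(X) = X^2 and h(Y) = Y are not:
# Cov(X^2, Y) = Var(X^2) = 2 for N(0,1), so independence must fail.
print("Cov(X, Y):      ", np.cov(x, y)[0, 1])      # ~ 0
print("Cov(g(X), h(Y)):", np.cov(x**2, y)[0, 1])   # ~ 2
```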
Variance-Covariance Matrix

Let $\mathbf{X}$ be an $m \times 1$ random vector with $E[X_j^2] < \infty$. We want to define the variance matrix of $\mathbf{X}$, sometimes called the variance-covariance matrix of $\mathbf{X}$ or (less preferred) the covariance matrix of $\mathbf{X}$. Let

$$\sigma_j^2 = \mathrm{Var}(X_j), \quad j = 1, 2, \ldots, m$$

$$\sigma_{ij} = \mathrm{Cov}(X_i, X_j), \quad i \neq j,$$

so we have $m$ (possibly) different variances and $m(m-1)/2$ (possibly) distinct covariances.
Arrange these in an $m \times m$ matrix and call this $\mathrm{Var}(\mathbf{X})$:

$$\mathrm{Var}(\mathbf{X}) = \begin{pmatrix}
\sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1m} \\
\sigma_{12} & \sigma_2^2 & \cdots & \sigma_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_{1m} & \sigma_{2m} & \cdots & \sigma_m^2
\end{pmatrix}$$

Sometimes we write $\Sigma_{\mathbf{X}} = \mathrm{Var}(\mathbf{X})$ or even $\Sigma = \mathrm{Var}(\mathbf{X})$. Note that $\sigma_{ij} = \sigma_{ji}$ has been imposed. This means that $\mathrm{Var}(\mathbf{X})$ is a symmetric matrix: $\Sigma_{\mathbf{X}}' = \Sigma_{\mathbf{X}}$.
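In practice the whole matrix can be estimated at once from data. A minimal sketch (the mixing matrix below is an arbitrary choice made to induce correlation; `np.cov` is NumPy's standard estimator):

```python
import numpy as np

rng = np.random.default_rng(3)

# n draws of an m x 1 random vector X, stored as the rows of `data`.
n, m = 100_000, 3
mix = np.array([[1.0, 0.5, 0.0],
                [0.0, 1.0, 0.3],
                [0.0, 0.0, 1.0]])
data = rng.standard_normal((n, m)) @ mix

# m x m variance-covariance matrix: variances sigma_j^2 on the
# diagonal, covariances sigma_ij off the diagonal.
V = np.cov(data, rowvar=False)
print(V)
print("symmetric:", np.allclose(V, V.T))  # sigma_ij = sigma_ji
```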
Often it is convenient to obtain $\mathrm{Var}(\mathbf{X})$ from a matrix expectation. Note that

$$\mathbf{X} - \boldsymbol{\mu} = \begin{pmatrix} X_1 - \mu_1 \\ X_2 - \mu_2 \\ \vdots \\ X_m - \mu_m \end{pmatrix}$$

and so
$$(\mathbf{X} - \boldsymbol{\mu})(\mathbf{X} - \boldsymbol{\mu})' = \begin{pmatrix} X_1 - \mu_1 \\ X_2 - \mu_2 \\ \vdots \\ X_m - \mu_m \end{pmatrix} \begin{pmatrix} X_1 - \mu_1 & X_2 - \mu_2 & \cdots & X_m - \mu_m \end{pmatrix}$$

$$= \begin{pmatrix}
(X_1 - \mu_1)^2 & (X_1 - \mu_1)(X_2 - \mu_2) & \cdots & (X_1 - \mu_1)(X_m - \mu_m) \\
(X_1 - \mu_1)(X_2 - \mu_2) & (X_2 - \mu_2)^2 & \cdots & (X_2 - \mu_2)(X_m - \mu_m) \\
\vdots & \vdots & \ddots & \vdots \\
(X_1 - \mu_1)(X_m - \mu_m) & (X_2 - \mu_2)(X_m - \mu_m) & \cdots & (X_m - \mu_m)^2
\end{pmatrix}$$
It follows that

$$\mathrm{Var}(\mathbf{X}) = E\left[(\mathbf{X} - \boldsymbol{\mu})(\mathbf{X} - \boldsymbol{\mu})'\right],$$

since taking expectations entry by entry yields the variances $\sigma_j^2$ on the diagonal and the covariances $\sigma_{ij}$ off the diagonal.
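This identity is also how one estimates $\mathrm{Var}(\mathbf{X})$ by hand: average the outer products of the centered observations. A sketch reusing the simulated data setup from above (`bias=True` makes `np.cov` divide by $n$, matching the plain average):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 100_000, 3
mix = np.array([[1.0, 0.5, 0.0],
                [0.0, 1.0, 0.3],
                [0.0, 0.0, 1.0]])
data = rng.standard_normal((n, m)) @ mix

# Var(X) = E[(X - mu)(X - mu)'], estimated by the average of the
# outer products of the centered rows.
centered = data - data.mean(axis=0)
V_outer = (centered.T @ centered) / n
print(np.allclose(V_outer, np.cov(data, rowvar=False, bias=True)))  # True
```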