The most important special case is

\[
E\left(\sum_{j=1}^m X_j\right) = \sum_{j=1}^m E(X_j)
\]

Unlike the mean, the median is not a linear operator. In fact, in general Med(X + Y) ≠ Med(X) + Med(Y). As a simple example, let X take on the values −1, 0 with probabilities 1/3, 2/3, let Y take on the values 0, 1 with probabilities 1/3, 2/3, and assume they are independent. The random variable Z = X + Y takes on the values −1, 0, 1 with probabilities 1/9, 4/9, 4/9. So

Med(X + Y) = Med(Z) = 0 ≠ Med(X) + Med(Y) = 0 + 1 = 1.
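The median counterexample can be checked with exact arithmetic. Below is a minimal sketch in plain Python (the `median` helper is my own, written for these discrete distributions) that convolves the two pmfs and compares the medians.

```python
from fractions import Fraction as F

def median(pmf):
    """Return the smallest value m with P(X <= m) >= 1/2.

    For the distributions below, P(X >= m) >= 1/2 also holds at that
    point, so this is a valid median."""
    cdf = F(0)
    for v in sorted(pmf):
        cdf += pmf[v]
        if cdf >= F(1, 2):
            return v

# Distributions from the text: X in {-1, 0}, Y in {0, 1}, independent.
pX = {-1: F(1, 3), 0: F(2, 3)}
pY = {0: F(1, 3), 1: F(2, 3)}

# pmf of Z = X + Y by convolution (uses independence).
pZ = {}
for x, px in pX.items():
    for y, py in pY.items():
        pZ[x + y] = pZ.get(x + y, F(0)) + px * py

print(median(pZ))               # 0
print(median(pX) + median(pY))  # 1
```

The convolution reproduces the pmf 1/9, 4/9, 4/9 stated in the text, and the two medians indeed differ.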
EXAMPLE: Earlier we gave a somewhat cumbersome proof that if Y ~ Binomial(n, p) then E(Y) = np. We can show this much more simply using (e11). Recall that Y = X_1 + X_2 + ... + X_n, where each X_i ~ Bernoulli(p). (The X_i are also independent, but we do not use that here.) Therefore,

\[
E(Y) = E(X_1 + X_2 + \cdots + X_n) = \sum_{i=1}^n E(X_i) = \sum_{i=1}^n p = np.
\]
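As a numerical cross-check, the sketch below (with illustrative values of n and p, not from the text) computes E(Y) the "cumbersome" way, directly from the Binomial pmf in exact arithmetic, and confirms it equals np.

```python
from fractions import Fraction as F
from math import comb

# Illustrative values, not from the text.
n, p = 7, F(2, 5)

# E(Y) computed directly from the Binomial pmf: sum of k * P(Y = k).
E_Y = sum(k * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

print(E_Y)           # 14/5
print(E_Y == n * p)  # True
```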

In fact, the conclusion of the example is much more general. Suppose that Y = X_1 + X_2 + ... + X_n for any set of RVs with E(X_i) = μ for i = 1, 2, ..., n (where the X_i may be arbitrarily dependent). Then E(Y) = nμ.
Next let X be m × 1, let A be a k × m nonrandom matrix, and let b be a k × 1 nonrandom vector. We can form the product AX, which is a k × 1 vector, using matrix multiplication:

\[
AX =
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1m} \\
a_{21} & a_{22} & \cdots & a_{2m} \\
\vdots & \vdots &        & \vdots \\
a_{k1} & a_{k2} & \cdots & a_{km}
\end{pmatrix}
\begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_m \end{pmatrix}
=
\begin{pmatrix}
\sum_{j=1}^m a_{1j} X_j \\
\sum_{j=1}^m a_{2j} X_j \\
\vdots \\
\sum_{j=1}^m a_{kj} X_j
\end{pmatrix}
\]

Therefore,

\[
AX + b =
\begin{pmatrix}
\sum_{j=1}^m a_{1j} X_j + b_1 \\
\sum_{j=1}^m a_{2j} X_j + b_2 \\
\vdots \\
\sum_{j=1}^m a_{kj} X_j + b_k
\end{pmatrix}
\]

Using the definition of the expected value of a vector, it is easily seen that

(e12) E(AX + b) = A E(X) + b
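Property (e12) can be sanity-checked by exact enumeration. The sketch below uses a small hypothetical discrete random 3-vector (the outcomes, A, and b are illustrative choices of mine, not from the text) and compares E(AX + b), computed outcome by outcome, with A E(X) + b.

```python
from fractions import Fraction as F

# Hypothetical discrete random 3-vector X: (outcome, probability) pairs.
outcomes = [((1, 0, 2), F(1, 4)),
            ((0, 1, 1), F(1, 2)),
            ((2, 2, 0), F(1, 4))]

A = [[1, 2, 0],
     [0, 1, 3]]   # 2 x 3 nonrandom matrix (k = 2, m = 3)
b = [5, -1]       # 2 x 1 nonrandom vector

def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

# Left side: E(AX + b), averaging AX + b over the outcomes.
lhs = [F(0)] * len(A)
for x, prob in outcomes:
    Ax_b = [v + bi for v, bi in zip(matvec(A, x), b)]
    lhs = [acc + prob * v for acc, v in zip(lhs, Ax_b)]

# Right side: A E(X) + b.
EX = [sum(prob * x[j] for x, prob in outcomes) for j in range(3)]
rhs = [v + bi for v, bi in zip(matvec(A, EX), b)]

print(lhs == rhs)  # True
```

Because the probabilities are Fractions, the comparison is exact rather than a floating-point approximation.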
There are other useful ways to represent matrix multiplication. Write

\[
A = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_k \end{pmatrix}
\]

where row i of A, a_i, is a 1 × m vector. Then

\[
AX = \begin{pmatrix} a_1 X \\ a_2 X \\ \vdots \\ a_k X \end{pmatrix}
\]

(e13) If Z is a k × m random matrix, A is a q × k nonrandom matrix, and B is an m × r nonrandom matrix, then AZB is a q × r random matrix. It can be shown that

E(AZB) = A E(Z) B

These properties for linear functions of random vectors and matrices can be summarized succinctly: the expected value operator is a linear operator.
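Property (e13) can be checked the same way as (e12). The sketch below uses a hypothetical random 2 × 2 matrix Z with two equally likely outcomes (all the numbers are illustrative, not from the text) and compares E(AZB) with A E(Z) B.

```python
from fractions import Fraction as F

def matmul(P, Q):
    return [[sum(P[i][t] * Q[t][j] for t in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

# Hypothetical random 2x2 matrix Z: two equally likely outcomes.
Z_outcomes = [([[1, 0], [2, 1]], F(1, 2)),
              ([[3, 2], [0, 1]], F(1, 2))]

A = [[1, 1]]     # 1 x 2 nonrandom (q = 1, k = 2)
B = [[2], [1]]   # 2 x 1 nonrandom (m = 2, r = 1)

# Left side: E(AZB), averaging the 1x1 matrix AZB over outcomes.
lhs = [[F(0)]]
for Z, prob in Z_outcomes:
    azb = matmul(matmul(A, Z), B)
    lhs[0][0] += prob * azb[0][0]

# Right side: A E(Z) B, with E(Z) taken entry by entry.
EZ = [[sum(prob * Z[i][j] for Z, prob in Z_outcomes) for j in range(2)]
      for i in range(2)]
rhs = matmul(matmul(A, EZ), B)

print(lhs == rhs)  # True
```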
Covariance and Correlation

It is often very useful to have a single number describing the dependence (lack of independence) between two random variables. Assume E(X²) < ∞ and E(Y²) < ∞, and let μ_X = E(X), μ_Y = E(Y). Consider the random variable

(X − μ_X)(Y − μ_Y)

If this product is positive, then X and Y are either both above or both below their means. If this is true on average, then we conclude that X and Y have a "positive association."

Define the covariance between X and Y as

\[
\mathrm{Cov}(X, Y) = \sigma_{XY} = E\big[(X - \mu_X)(Y - \mu_Y)\big]
\]
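To make the definition concrete, here is a small sketch with a hypothetical joint pmf of my own choosing (not from the text), under which X and Y tend to land on the same side of their means; the covariance comes out positive, matching the "positive association" interpretation.

```python
from fractions import Fraction as F

# Hypothetical joint pmf of (X, Y): mass concentrated on (0,0) and (1,1),
# so X and Y tend to be above or below their means together.
joint = {(0, 0): F(2, 5), (1, 1): F(2, 5),
         (0, 1): F(1, 10), (1, 0): F(1, 10)}

# Means mu_X = E(X) and mu_Y = E(Y).
muX = sum(p * x for (x, y), p in joint.items())
muY = sum(p * y for (x, y), p in joint.items())

# Cov(X, Y) = E[(X - mu_X)(Y - mu_Y)], computed exactly.
cov = sum(p * (x - muX) * (y - muY) for (x, y), p in joint.items())

print(cov)  # 3/20
```

The product (X − μ_X)(Y − μ_Y) is positive with probability 4/5 here, which is why the average comes out positive.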