where the second-to-last equality follows from the independence of $X_1, \ldots, X_m$ and the last inequality again follows from the definition of $g_{j-1}$. 31
EXAMPLE: Suppose $X$ and $Y > 0$ are independent. Then $X$ and $\log Y$ are independent, as are $\exp(X)$ and $\log Y$, and $X^2$ and $1/Y$, and so on. 32
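As a rough numerical illustration of this example (not from the notes), we can draw independent $X$ and $Y > 0$ and check that transformed pairs such as $\exp(X)$ and $\log Y$ show no sample correlation. Zero correlation is only a necessary consequence of independence, not a proof of it; the distributions chosen below are arbitrary.

```python
# Sketch: functions of independent RVs remain independent, so their
# sample correlation should be near zero. Distributions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)          # X
y = np.exp(rng.normal(size=n))  # Y > 0, drawn independently of X

# exp(X) and log(Y) inherit independence from (X, Y);
# near-zero sample correlation is a weak check of that.
corr = np.corrcoef(np.exp(x), np.log(y))[0, 1]
```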
3. Moments Involving Random Vectors

Computing moments of random vectors, and of functions of them, is very important in economics and econometrics. Let $X$ be a random vector of dimension $m$, and let $g : \mathcal{X} \to \mathbb{R}$ be a real-valued function defined on the range of $X$, $\mathcal{X} \subseteq \mathbb{R}^m$. Then, of course, $Y = g(X)$ is a random variable. In principle, given the CDF or PDF of $X$, we can derive the distribution of $Y$. 33
If we are interested in just a few moments of $g(X)$, it may be much easier to compute them directly. The expected value of $g(X)$, assuming it exists, is a weighted average, as usual. If $X$ is discrete, taking on values $x_1, x_2, \ldots$ (where each $x_h$ is an $m \times 1$ vector), then
$$E[g(X)] = \sum_{h=1}^{\infty} g(x_h) f_X(x_h). \qquad 34$$
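The discrete formula is just a probability-weighted sum, which can be sketched directly. The support points, pmf, and function $g$ below are made up for illustration.

```python
# Sketch: E[g(X)] for a discrete random vector X (m = 2) as a
# probability-weighted sum over its support. Values are illustrative.
support = [(0, 0), (0, 1), (1, 0), (1, 1)]   # the points x_h
pmf     = [0.1, 0.2, 0.3, 0.4]               # f_X(x_h); sums to 1

def g(x):
    # any real-valued function of the vector; chosen arbitrarily here
    return x[0] + 2 * x[1]

# E[g(X)] = sum_h g(x_h) f_X(x_h)
e_g = sum(g(x) * p for x, p in zip(support, pmf))
```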
A condition that ensures $E[g(X)]$ is well defined and a real number is that $E|g(X)| < \infty$, that is,
$$\sum_{h=1}^{\infty} |g(x_h)| f_X(x_h) < \infty.$$
This is never an issue when $X$ takes on a finite number of values; the expected value is then always well defined. As in the case of a single RV, $E[g(X)]$ is well defined, but may be $+\infty$, when $g(\cdot) \ge 0$. 35
In the continuous case,
$$E[g(X)] = \int_{\mathcal{X}} g(x) f_X(x)\,dx = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} g(x_1, \ldots, x_m) f_X(x_1, \ldots, x_m)\,dx_1 \cdots dx_m.$$
A general measure-theoretic approach allows the discrete, continuous, and mixed cases, within or across elements of $X$, to be treated in one framework. In practice, it is a combination of summing and integrating. 36
Cauchy-Schwarz Inequality

If $X$ and $Y$ are two random variables, the Cauchy-Schwarz inequality says that
$$E|XY| \le (E[X^2])^{1/2} (E[Y^2])^{1/2}.$$
Because $|E[XY]| \le E|XY|$, it also follows that
$$|E[XY]| \le (E[X^2])^{1/2} (E[Y^2])^{1/2}. \qquad 37$$
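The inequality also holds when population moments are replaced by sample averages (it is Cauchy-Schwarz for the empirical distribution), which we can sketch on simulated data; the data-generating process below is arbitrary.

```python
# Sketch: checking Cauchy-Schwarz with sample moments in place of
# population moments. X and Y are deliberately correlated.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

lhs = np.abs(x * y).mean()                        # E|XY|
rhs = np.sqrt((x ** 2).mean() * (y ** 2).mean())  # (E X^2)^{1/2} (E Y^2)^{1/2}
mid = abs((x * y).mean())                         # |E[XY]|, bounded by both
```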
Expected Value and Independence

Suppose we have random variables $X$ and $Y$, and we would like to compute $E[g(X)h(Y)]$ for functions $g$ and $h$. Generally, we would need the joint density of $(X, Y)$ and sum or integrate. But if $X$ and $Y$ are independent, the calculation is much simpler.

(e10) If $X$ and $Y$ are independent, $E|g(X)| < \infty$, and $E|h(Y)| < \infty$, then
$$E[g(X)h(Y)] = E[g(X)]\,E[h(Y)]. \qquad 38$$
Proof: Because $X$ and $Y$ are independent, $f_{X,Y}(x, y) = f_X(x) f_Y(y)$ for all $(x, y)$. For concreteness, assume $X$ and $Y$ are continuous, and let $\mu_g = E[g(X)]$ and $\mu_h = E[h(Y)]$. Then
$$E[g(X)h(Y)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x) h(y) f_X(x) f_Y(y)\,dx\,dy = \left( \int_{-\infty}^{\infty} g(x) f_X(x)\,dx \right) \left( \int_{-\infty}^{\infty} h(y) f_Y(y)\,dy \right) = \mu_g \mu_h. \qquad 39$$
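Property (e10) can also be sketched by simulation for one choice of $g$ and $h$; the distributions and functions below are illustrative, chosen so both moment conditions hold.

```python
# Sketch: simulated check of E[g(X)h(Y)] = E[g(X)] E[h(Y)] for
# independent X ~ N(0,1) and Y ~ Uniform(0,1), g = exp, h = sqrt.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
x = rng.normal(size=n)      # X
y = rng.uniform(size=n)     # Y, drawn independently of X

g, h = np.exp, np.sqrt      # E|g(X)| and E|h(Y)| are finite here
lhs = (g(x) * h(y)).mean()          # estimates E[g(X) h(Y)]
rhs = g(x).mean() * h(y).mean()     # estimates E[g(X)] E[h(Y)]
```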