Two g(·) functions of special interest.

1. g(X) = X. Obviously this yields E(X), the expected value of the random variable itself (frequently referred to as the mean and represented by the character μ), which is a constant providing a measure of where the centre of the distribution is located. The metric here is the same as that of the random variable, so that if f(x) is an income distribution measured in $US, then its location will be in terms of a $US value. The usefulness of the linearity property of the expectations operator can be seen by letting g(X) = a + bX, where a and b are fixed constants. Taking expectations of this g(X) yields the expected value of a linear function of X which, following the respective rules of summation and integration, can be shown to be the same linear function of the expected value of X as follows:

$$E(a+bX) = \sum_{\text{all possible } x_i} (a+bx_i)f(x_i) = a\sum_{\text{all possible } x_i} f(x_i) + b\sum_{\text{all possible } x_i} x_i f(x_i) = a+bE(X)$$

$$E(a+bX) = \int (a+bx)f(x)\,dx = a\int f(x)\,dx + b\int xf(x)\,dx = a+bE(X)$$

...
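The discrete case above can be sketched numerically. This is a minimal illustration, not part of the original text: the pmf values, a, and b below are assumed purely for demonstration; the check computes E(a + bX) directly from the definition and compares it with a + bE(X).

```python
# Sketch: verify linearity of expectation, E(a + bX) = a + b*E(X),
# for a hypothetical discrete random variable X (pmf chosen for illustration).
values = [0, 1, 2, 3]          # possible values x_i (assumed)
pmf    = [0.1, 0.2, 0.3, 0.4]  # f(x_i); probabilities sum to 1

a, b = 5.0, 2.0                # fixed constants

# E(X) = sum over all possible x_i of x_i * f(x_i)
e_x = sum(x * p for x, p in zip(values, pmf))

# E(a + bX) computed directly from the definition of E[g(X)]
e_g = sum((a + b * x) * p for x, p in zip(values, pmf))

print(e_x)          # 2.0
print(e_g)          # 9.0
print(a + b * e_x)  # 9.0 -- agrees with E(a + bX), as the derivation shows
```

The middle step of the derivation is visible here: the term a·Σf(x_i) contributes a because the probabilities sum to one, and b·Σx_i f(x_i) contributes b·E(X).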

