LECTURE 05: Mathematical Expectations
Keywords:
moments, moment generating function, mean, variance, skewness, kurtosis, quantile,
median, value at risk.
Reference: Chapter 4 of the textbook
Mathematical Expectations under Univariate Distributions
Question: What information can we extract from a distribution $f_X(x)$?
Definition [Expected Value of $g(X)$]: Suppose $X$ is a rv with pmf or pdf $f_X(x)$. Then the expected value or mean of a measurable function $g(X)$ is defined as
$$E[g(X)] = \begin{cases} \sum_x g(x) f_X(x), & \text{drv,} \\ \int_{-\infty}^{\infty} g(x) f_X(x)\,dx, & \text{crv,} \end{cases}$$
provided the summation or integral exists, where the summation is over all possible values of $X$ for the discrete case. Here $E$ denotes the expectation operator.
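For a discrete rv, the definition above is a direct weighted sum over the support. The following sketch computes $E[g(X)]$ for a fair six-sided die (a hypothetical example, not from the notes), with $g(x) = x$ and $g(x) = x^2$:

```python
# Hypothetical example: E[g(X)] for a discrete rv (a fair six-sided die).
pmf = {x: 1/6 for x in range(1, 7)}          # f_X(x) for x = 1, ..., 6

def expect(g, pmf):
    """Compute E[g(X)] = sum_x g(x) f_X(x) over the support."""
    return sum(g(x) * p for x, p in pmf.items())

mean   = expect(lambda x: x, pmf)            # E[X] = 3.5
second = expect(lambda x: x**2, pmf)         # E[X^2] = 91/6
```

The same `expect` helper works for any measurable $g$, since only the weights $f_X(x)$ and the values $g(x)$ enter the sum.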
Remarks:
(i) $g(X)$ is a r.v. because $X$ is a r.v.
(ii) If $E|g(X)| = \infty$, we say that $E[g(X)]$ does not exist.
(iii) There is another way to compute $E[g(X)]$: put $Y = g(X)$ and find the pmf/pdf $f_Y(y)$. Then
$$E[g(X)] = E(Y) = \begin{cases} \sum_y y f_Y(y), & \text{drv,} \\ \int_{-\infty}^{\infty} y f_Y(y)\,dy, & \text{crv.} \end{cases}$$
This gives the same result as the above definition using the distribution of $X$.
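The agreement between the two routes can be checked numerically. In this sketch (the rv and $g$ are hypothetical choices for illustration), $X$ is uniform on $\{-1, 0, 1, 2\}$ and $g(x) = x^2$; note that distinct $x$ values mapping to the same $y$ must pool their probability mass when building $f_Y$:

```python
from collections import defaultdict

# Hypothetical discrete rv: X uniform on {-1, 0, 1, 2}, with g(x) = x^2.
pmf_X = {x: 0.25 for x in (-1, 0, 1, 2)}
g = lambda x: x**2

# Route 1: weight g(x) by f_X(x) directly.
e1 = sum(g(x) * p for x, p in pmf_X.items())

# Route 2: derive the pmf of Y = g(X), then take E(Y) = sum_y y f_Y(y).
pmf_Y = defaultdict(float)
for x, p in pmf_X.items():
    pmf_Y[g(x)] += p          # x = -1 and x = 1 both contribute to y = 1
e2 = sum(y * p for y, p in pmf_Y.items())
# e1 and e2 agree, as remark (iii) asserts
```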
(iv) Why weighted by $f_X(x)$? Because $f_X(x)$ is proportional to the relative frequency with which the value $x$ will occur.
(v) The expectation $E(\cdot)$ is a linear operator, namely
$$E[a g_1(X) + b g_2(X)] = a E[g_1(X)] + b E[g_2(X)].$$
Linearity of the expectation operator can simplify calculation and derivation in many cases.
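Linearity is easy to verify numerically. This sketch (again using a fair die as a hypothetical example, with $g_1(x) = x$, $g_2(x) = x^2$, and arbitrary constants $a$, $b$) compares the two sides of the identity:

```python
# Hypothetical check of linearity on a fair six-sided die.
pmf = {x: 1/6 for x in range(1, 7)}
E = lambda g: sum(g(x) * p for x, p in pmf.items())

a, b = 2.0, 3.0
lhs = E(lambda x: a * x + b * x**2)               # E[a g1(X) + b g2(X)]
rhs = a * E(lambda x: x) + b * E(lambda x: x**2)  # a E[g1(X)] + b E[g2(X)]
# lhs and rhs coincide
```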
We now consider some examples of expectations.
Case I: $g(X) = X$.
Definition [Mean of $X$]:
$$\mu_X = E(X) = \begin{cases} \sum_x x f_X(x), & \text{drv,} \\ \int_{-\infty}^{\infty} x f_X(x)\,dx, & \text{crv,} \end{cases}$$
where the summation is over all possible $x$'s.
Remarks:
(i) The mean $\mu_X$ is also called the expected value of $X$, or the first moment of $X$. It is a measure of central tendency for the distribution of $X$. It can be viewed as a "location" parameter.
Question: Why is $\mu_X$ called a measure of central tendency?
Suppose $X_1, X_2, \ldots, X_n$ follow the same distribution as $X$, and these RVs are mutually independent. Then the so-called law of large numbers (LLN) implies
$$n^{-1} \sum_{i=1}^{n} X_i \to \mu_X \quad \text{as } n \to \infty.$$
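A quick Monte Carlo sketch of this convergence (purely illustrative; the choice of Uniform(0,1) draws and the sample size are assumptions, not part of the notes):

```python
import random
random.seed(0)  # fixed seed so the run is reproducible

# LLN illustration: the sample mean of n iid Uniform(0,1) draws
# should be close to mu_X = 0.5 for large n.
n = 100_000
xs = [random.random() for _ in range(n)]
sample_mean = sum(xs) / n   # near 0.5
```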
(ii) If $X$ is the stock return, $\mu_X$ is the expected stock return, or long-run average stock return.
(iii) The terminology of expectation has its origin in games of chance. This can be illustrated as follows. Four small similar chips, numbered 1, 1, 1, and 2, respectively, are placed in a bowl and are mixed. A player is blindfolded and is to draw a chip from the bowl. If she draws one of the three chips numbered 1, she will receive one dollar. If she draws the chip numbered 2, she will receive two dollars. It seems reasonable to assume that the player has a "$\frac{3}{4}$ claim" on the $1 and a "$\frac{1}{4}$ claim" on the $2. Her "total claim" is $1 \cdot \frac{3}{4} + 2 \cdot \frac{1}{4} = \frac{5}{4} = \$1.25$. Thus the expectation of $X$ is precisely the player's claim in this game.
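The chip game above is just the mean of a two-point payoff distribution, which can be computed exactly with rational arithmetic (a small sketch of the calculation in the notes):

```python
from fractions import Fraction

# Payoff X from the chip game: $1 with probability 3/4, $2 with probability 1/4.
pmf = {1: Fraction(3, 4), 2: Fraction(1, 4)}

# E(X) = sum_x x f_X(x) = 1*(3/4) + 2*(1/4) = 5/4
expected_payoff = sum(x * p for x, p in pmf.items())
```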
Theorem: If $Y = aX + b$, then $\mu_Y = a\mu_X + b$.
Proof: Let $g(X) = aX + b$. Then
$$\mu_Y = E(Y) = E[g(X)] = E[aX + b] = E(aX) + E(b) = aE(X) + b = a\mu_X + b.$$
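The theorem can be checked on a concrete distribution. This sketch (a fair die and the constants $a = 2$, $b = 10$ are hypothetical choices) computes $\mu_Y$ directly from the definition and compares it with $a\mu_X + b$:

```python
# Hypothetical check of the linear-transformation theorem on a fair die.
pmf = {x: 1/6 for x in range(1, 7)}
mu_X = sum(x * p for x, p in pmf.items())        # mu_X = 3.5

a, b = 2.0, 10.0
# Direct computation: mu_Y = E(aX + b) = sum_x (a x + b) f_X(x) ...
mu_Y = sum((a * x + b) * p for x, p in pmf.items())
# ... which equals a * mu_X + b, as the theorem states
```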
Remark: The expectation $E(\cdot)$ is a linear operator. Here, $a$ is a scale parameter and $b$ is a location parameter.
 Fall '10