MA2216/ST2131 Probability
Notes 9
Review & Examples
and
Properties of Expectation
§1. Linear Transformation.

Let $X_1, X_2, \ldots, X_n$ be continuous random variables having joint density $f$, and let random variables $Y_1, Y_2, \ldots, Y_n$ be defined by the following linear transformation
$$Y_i = \sum_{j=1}^{n} a_{ij} X_j, \qquad i = 1, 2, \ldots, n,$$
where the matrix $A = (a_{ij})_{n \times n}$ has nonzero determinant $\det A$. Thus,
$$(y_1, \ldots, y_n)^t = A\,(x_1, \ldots, x_n)^t,$$
where $(x_1, \ldots, x_n)^t$ denotes the transpose of the row vector $(x_1, \ldots, x_n)$.
Since $\det A \neq 0$, the linear transformation $A$ is nonsingular, and hence admits an inverse $A^{-1}$ such that
$$(x_1, \ldots, x_n)^t = A^{-1}(y_1, \ldots, y_n)^t. \tag{1.1}$$
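To make this concrete, here is a small illustration with $n = 2$; the particular transformation is a hypothetical example, not one from the notes:

```latex
% Hypothetical n = 2 example: Y_1 = X_1 + X_2, Y_2 = X_1 - X_2.
\[
A = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad
\det A = -2 \neq 0, \qquad
A^{-1} = \begin{pmatrix} \tfrac12 & \tfrac12 \\ \tfrac12 & -\tfrac12 \end{pmatrix},
\]
% so, by (1.1), x_1 = (y_1 + y_2)/2 and x_2 = (y_1 - y_2)/2.
```

Solving for $(x_1, x_2)$ in this way is exactly what is needed later to evaluate the joint density of $(Y_1, Y_2)$.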
Equivalently, $\underset{\sim}{x} = \underset{\sim}{y}\,(A^{-1})^t$, where $\underset{\sim}{x} = (x_1, \ldots, x_n)$.
Note also that, since $\partial y_i / \partial x_j = a_{ij}$, the Jacobian matrix of this transformation is $A$ itself, so the Jacobian is nothing but its determinant:
$$J(x_1, x_2, \ldots, x_n) = \det A.$$
Therefore, $Y_1, Y_2, \ldots, Y_n$ have joint density $f_{Y_1, \ldots, Y_n}$ given by
$$f_{Y_1, \ldots, Y_n}(y_1, \ldots, y_n)
= \frac{1}{|\det A|}\, f(x_1, \ldots, x_n)
= \frac{1}{|\det A|}\, f\bigl(\underset{\sim}{y}\,(A^{-1})^t\bigr). \tag{1.2}$$
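As a sanity check of (1.2), consider a hypothetical example (an assumption for illustration, not taken from the notes): let $X_1, X_2$ be i.i.d. standard normal and $Y_1 = X_1 + X_2$, $Y_2 = X_1 - X_2$, so $|\det A| = 2$. Then (1.2) gives $f_{Y_1,Y_2}(y_1, y_2) = \frac{1}{4\pi} e^{-(y_1^2 + y_2^2)/4}$, the product of two $N(0,2)$ densities. A minimal Python sketch confirming this agreement numerically:

```python
import math

# Hypothetical example: X1, X2 i.i.d. standard normal,
# Y1 = X1 + X2, Y2 = X1 - X2, i.e. A = [[1, 1], [1, -1]].
A = [[1.0, 1.0], [1.0, -1.0]]
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # = -2

def f_X(x1, x2):
    """Joint density of (X1, X2): product of two standard normal densities."""
    return math.exp(-(x1 * x1 + x2 * x2) / 2) / (2 * math.pi)

def f_Y(y1, y2):
    """Density of (Y1, Y2) via formula (1.2): |det A|^{-1} f(A^{-1} y)."""
    # Apply the 2x2 inverse: x1 = (y1 + y2)/2, x2 = (y1 - y2)/2.
    x1 = (A[1][1] * y1 - A[0][1] * y2) / det_A
    x2 = (-A[1][0] * y1 + A[0][0] * y2) / det_A
    return f_X(x1, x2) / abs(det_A)

def f_Y_closed(y1, y2):
    """Closed form: product of two N(0, 2) densities."""
    return math.exp(-(y1 * y1 + y2 * y2) / 4) / (4 * math.pi)

# The two expressions agree at arbitrary test points.
for (y1, y2) in [(0.0, 0.0), (1.0, -0.5), (2.3, 1.7)]:
    assert abs(f_Y(y1, y2) - f_Y_closed(y1, y2)) < 1e-12
```

The check is deterministic (no sampling needed): it evaluates both sides of (1.2) pointwise, which is exactly how the change-of-variables formula is meant to be used.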