Suppose $X$ and $Y$ have a joint pdf $f_{X,Y}$, and suppose $W = aX + bY$ and $Z = cX + dY$ for some constants $a$, $b$, $c$, and $d$. Equivalently, in matrix notation, suppose $\binom{X}{Y}$ has joint pdf $f_{X,Y}$ and suppose
\[
\begin{pmatrix} W \\ Z \end{pmatrix} = A \begin{pmatrix} X \\ Y \end{pmatrix}, \qquad \text{where } A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.
\]
Thus, we begin with a random point $\binom{X}{Y}$ and get another random point $\binom{W}{Z}$. For ease of analysis, we can suppose that $\binom{X}{Y}$ lies in the $u$-$v$ plane and $\binom{W}{Z}$ lies in the $\alpha$-$\beta$ plane. That is, $\binom{W}{Z}$ is the image of $\binom{X}{Y}$ under the linear mapping
\[
\begin{pmatrix} \alpha \\ \beta \end{pmatrix} = A \begin{pmatrix} u \\ v \end{pmatrix}.
\]
The determinant of $A$ is defined by $\det(A) = ad - bc$. If $\det A \neq 0$, then the mapping has an inverse, given by
\[
\begin{pmatrix} u \\ v \end{pmatrix} = A^{-1} \begin{pmatrix} \alpha \\ \beta \end{pmatrix}, \qquad \text{where } A^{-1} = \frac{1}{\det A} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.
\]
An important property of such linear transformations is that if $R$ is a set in the $u$-$v$ plane and $S$ is its image under the mapping, then
\[
\operatorname{area}(S) = |\det(A)| \, \operatorname{area}(R).
\]
Consider the problem of finding $f_{W,Z}(\alpha, \beta)$ for some fixed $(\alpha, \beta)$. If $S$ is a small rectangle with a corner at $(\alpha, \beta)$, then
\[
f_{W,Z}(\alpha, \beta) \approx \frac{P\{(W,Z) \in S\}}{\operatorname{area}(S)}.
\]
Now $\{(W,Z) \in S\}$ is the same event as $\{(X,Y) \in R\}$, where $R$ is the preimage of $S$ under the mapping.
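The derivation then concludes in the standard way; the following is a sketch of the remaining step, using only the facts stated above. Since $(X,Y)$ has pdf $f_{X,Y}$ and $R$ is small, $P\{(X,Y) \in R\} \approx f_{X,Y}(u,v)\,\operatorname{area}(R)$ for the corner point $(u,v) = A^{-1}\binom{\alpha}{\beta}$, and $\operatorname{area}(S) = |\det A|\,\operatorname{area}(R)$, so
\[
f_{W,Z}(\alpha, \beta) \approx \frac{f_{X,Y}(u,v)\,\operatorname{area}(R)}{|\det A|\,\operatorname{area}(R)} = \frac{1}{|\det A|}\, f_{X,Y}(u, v), \qquad \begin{pmatrix} u \\ v \end{pmatrix} = A^{-1} \begin{pmatrix} \alpha \\ \beta \end{pmatrix}.
\]
As an illustrative example (not from the original notes), take $W = X + Y$ and $Z = X - Y$, so that $A = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$ and $\det A = -2$. Then $A^{-1}\binom{\alpha}{\beta} = \binom{(\alpha+\beta)/2}{(\alpha-\beta)/2}$, and the formula gives
\[
f_{W,Z}(\alpha, \beta) = \tfrac{1}{2}\, f_{X,Y}\!\left(\tfrac{\alpha+\beta}{2},\, \tfrac{\alpha-\beta}{2}\right).
\]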