Lecture05-2010 - Distribution Functions for Random Variables

Distribution Functions for Random Variables: Lecture V
Charles B. Moss
AEB 6571 Econometric Methods I, Fall 2010
June 30, 2010

I. Bivariate Continuous Random Variables

A. Definition 3.4.1. If there is a nonnegative function $f(x, y)$ defined over the whole plane such that

$$P(x_1 \le X \le x_2, y_1 \le Y \le y_2) = \int_{y_1}^{y_2} \int_{x_1}^{x_2} f(x, y)\, dx\, dy \quad (1)$$

for $x_1$, $x_2$, $y_1$, and $y_2$ satisfying $x_1 \le x_2$ and $y_1 \le y_2$, then $(X, Y)$ is a bivariate continuous random variable and $f(x, y)$ is called the joint density function.

B. Much of the work with distribution functions involves integration. To demonstrate a couple of solution techniques, I will work through two examples.

1. Example 3.4.1. If $f(x, y) = xy \exp(-x - y)$ for $x > 0$, $y > 0$ and $0$ otherwise, what is $P(X > 1, Y < 1)$?

$$P(X > 1, Y < 1) = \int_0^1 \int_1^\infty xy\, e^{-(x+y)}\, dx\, dy \quad (2)$$

First, note that the integral can be separated into two terms:

$$P(X > 1, Y < 1) = \int_1^\infty x e^{-x}\, dx \int_0^1 y e^{-y}\, dy \quad (3)$$

Each of these integrals can be solved using integration by parts:

$$d(uv) = v\, du + u\, dv \;\Rightarrow\; v\, du = d(uv) - u\, dv \;\Rightarrow\; \int v\, du = uv - \int u\, dv \quad (4)$$

In terms of a definite integral we have

$$\int_a^b v\, du = uv \Big|_a^b - \int_a^b u\, dv \quad (5)$$

In this case we take $v = x$ and $du = e^{-x}\, dx$, so that $dv = dx$ and $u = -e^{-x}$:

$$\int_1^\infty x e^{-x}\, dx = -x e^{-x} \Big|_1^\infty + \int_1^\infty e^{-x}\, dx = e^{-1} + e^{-1} = 2e^{-1} \approx 0.736 \quad (6)$$

Working on the second part of the integral,

$$\int_0^1 y e^{-y}\, dy = -y e^{-y} \Big|_0^1 + \int_0^1 e^{-y}\, dy = -e^{-1} + \left(1 - e^{-1}\right) = 1 - 2e^{-1} \approx 0.264 \quad (7)$$

Putting the two parts together,

$$P(X > 1, Y < 1) = \int_1^\infty x e^{-x}\, dx \int_0^1 y e^{-y}\, dy = (0.736)(0.264) = 0.194 \quad (8)$$

2. Example 3.4.3. This example demonstrates the use of a change of variables. Implicitly, the example assumes that the random variables are jointly uniform over $0 < x, y < 1$. The question is then: what is the probability that $X^2 + Y^2 < 1$?
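The integration-by-parts results in equations (6)-(8) can be checked numerically. The sketch below (Python, written for these notes rather than taken from them) compares the closed-form values $2e^{-1}$ and $1 - 2e^{-1}$ against a simple midpoint-rule quadrature:

```python
import math

def integrate(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Closed-form values from the integration-by-parts steps, eqs. (6) and (7):
ix = 2 * math.exp(-1)        # ∫_1^∞ x e^{-x} dx = 2/e ≈ 0.736
iy = 1 - 2 * math.exp(-1)    # ∫_0^1 y e^{-y} dy = 1 - 2/e ≈ 0.264
p = ix * iy                  # eq. (8): P(X > 1, Y < 1) ≈ 0.194

# Numerical check; the upper limit ∞ is truncated at 50, where the
# integrand x e^{-x} is already negligible (on the order of 1e-20).
ix_num = integrate(lambda x: x * math.exp(-x), 1.0, 50.0)
iy_num = integrate(lambda y: y * math.exp(-y), 0.0, 1.0)

print(round(p, 3))                # → 0.194
print(round(ix_num * iy_num, 3))  # → 0.194
```

The separation in equation (3) is what makes this cheap: each one-dimensional integral is evaluated once, and the joint probability is just their product.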
Mathematically, this question is not separable:

$$P\left(X^2 + Y^2 < 1\right) = \int_0^1 \left( \int_0^{\sqrt{1 - x^2}} dy \right) dx \;\ldots$$
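The preview cuts off here, but the inner integral evaluates to $\sqrt{1 - x^2}$, so the probability reduces to $\int_0^1 \sqrt{1 - x^2}\, dx$, the area of a quarter of the unit disk, which is $\pi/4 \approx 0.785$. A minimal numerical check (my sketch, not part of the original notes):

```python
import math

def integrate(f, a, b, n=500_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Inner integral: ∫_0^{√(1-x²)} dy = √(1-x²), so the probability is
# ∫_0^1 √(1-x²) dx — the area of a quarter of the unit disk, π/4.
p = integrate(lambda x: math.sqrt(1.0 - x * x), 0.0, 1.0)

print(round(p, 4))            # → 0.7854
print(round(math.pi / 4, 4))  # → 0.7854
```

Unlike Example 3.4.1, the region of integration couples $x$ and $y$, which is why the inner limit depends on $x$ and the problem does not factor into a product of one-dimensional integrals.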