307_outline_101508

Lecture Oct 15

SORRY THAT CLASS WAS CANCELED TODAY. HERE ARE THE TWO KEY POINTS TO COVER.

TOPIC I: A special joint distribution: the bivariate normal.

X, Y have the bivariate normal distribution if their joint density is

\[
f_{X,Y}(x,y) = \frac{1}{2\pi \sigma_X \sigma_Y \sqrt{1-\rho^2}}
\exp\!\left( -\frac{1}{2(1-\rho^2)} \left[ \left(\frac{x-\mu_X}{\sigma_X}\right)^{2} - 2\rho \left(\frac{x-\mu_X}{\sigma_X}\right)\left(\frac{y-\mu_Y}{\sigma_Y}\right) + \left(\frac{y-\mu_Y}{\sigma_Y}\right)^{2} \right] \right).
\]

The idea: X and Y are each normal, and they are correlated (there is a dependence). Sketch the density. We will look into correlation later. \(\rho\) IS THE CORRELATION. X, Y ARE INDEPENDENT IFF \(\rho = 0\).

Fact: X, Y bivariate normal \(\Rightarrow\) X normal, Y normal.

Pf: Find the marginals. We want to show

\[
\int_{-\infty}^{\infty} f_{X,Y}(x,y)\, dy = \frac{1}{\sqrt{2\pi}\,\sigma_X} \exp\!\left( -\frac{1}{2} \left(\frac{x-\mu_X}{\sigma_X}\right)^{2} \right).
\]

Idea: Perhaps we can factor \(f_{X,Y}(x,y) = f_X(x)\, g(x,y)\) and then see that, as a function of y, g is a normal density, so it integrates to 1. Clue: we see that the standard deviation of g will be \(\sigma_Y \sqrt{1-\rho^2}\). So let's go:

\[
\frac{f_{X,Y}(x,y)}{f_X(x)} = \frac{1}{\sqrt{2\pi}\,\sigma_Y \sqrt{1-\rho^2}} \exp(-Q),
\]

where

\[
Q = \frac{1}{2(1-\rho^2)} \left[ \left(\frac{x-\mu_X}{\sigma_X}\right)^{2} - 2\rho \left(\frac{x-\mu_X}{\sigma_X}\right)\left(\frac{y-\mu_Y}{\sigma_Y}\right) + \left(\frac{y-\mu_Y}{\sigma_Y}\right)^{2} \right] - \frac{1}{2} \left(\frac{x-\mu_X}{\sigma_X}\right)^{2}
\]

\[
= \frac{1}{2(1-\rho^2)} \left[ \rho^2 \left(\frac{x-\mu_X}{\sigma_X}\right)^{2} - 2\rho \left(\frac{x-\mu_X}{\sigma_X}\right)\left(\frac{y-\mu_Y}{\sigma_Y}\right) + \left(\frac{y-\mu_Y}{\sigma_Y}\right)^{2} \right]
\]

\[
= \frac{1}{2\sigma_Y^2 (1-\rho^2)} \left[ y - \mu_Y - \rho \sigma_Y \left(\frac{x-\mu_X}{\sigma_X}\right) \right]^2.
\]

(In the second step, the subtracted \(\frac{1}{2}\left(\frac{x-\mu_X}{\sigma_X}\right)^{2}\) was brought inside the bracket as \(-(1-\rho^2)\left(\frac{x-\mu_X}{\sigma_X}\right)^{2}\), leaving \(\rho^2\left(\frac{x-\mu_X}{\sigma_X}\right)^{2}\); the bracket is then a perfect square.)

This shows

\[
(*)\quad f_{X,Y}(x,y) = \frac{1}{\sqrt{2\pi}\,\sigma_X} \exp\!\left( -\frac{1}{2} \left(\frac{x-\mu_X}{\sigma_X}\right)^{2} \right) \cdot \frac{1}{\sqrt{2\pi}\,\sigma_Y \sqrt{1-\rho^2}} \exp\!\left( -\frac{1}{2} \left[ \frac{y - \mu_Y - \rho(\sigma_Y/\sigma_X)(x-\mu_X)}{\sigma_Y \sqrt{1-\rho^2}} \right]^2 \right).
\]

In other words, our joint density factors into the hoped-for marginal of X and a term which, for each fixed x, is a normal density in y, with mean \(\mu_Y + \rho(\sigma_Y/\sigma_X)(x-\mu_X)\) and standard deviation \(\sigma_Y\sqrt{1-\rho^2}\). Thus

\[
\int_{-\infty}^{\infty} f_{X,Y}(x,y)\, dy = \frac{1}{\sqrt{2\pi}\,\sigma_X} \exp\!\left( -\frac{1}{2} \left(\frac{x-\mu_X}{\sigma_X}\right)^{2} \right),
\]

which is what we wanted to show. Remember equation (*) just above; it will be important shortly.
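The marginal computation in Topic I can be checked numerically: fix x, integrate the joint density over y, and compare with the claimed \(N(\mu_X, \sigma_X^2)\) marginal density at that x. The sketch below is my own illustration, not part of the notes; the parameter values, the fixed point x0, and the trapezoid-rule grid are all arbitrary choices.

```python
import numpy as np

# Arbitrary illustration parameters (not from the notes).
mu_x, mu_y = 1.0, -2.0
s_x, s_y = 1.5, 0.7
rho = 0.6            # correlation, must satisfy |rho| < 1

def joint_density(x, y):
    """Bivariate normal density f_{X,Y}(x, y) as defined in Topic I."""
    u = (x - mu_x) / s_x
    v = (y - mu_y) / s_y
    q = (u * u - 2 * rho * u * v + v * v) / (2 * (1 - rho ** 2))
    return np.exp(-q) / (2 * np.pi * s_x * s_y * np.sqrt(1 - rho ** 2))

# Fix x = x0 and integrate the joint density over y (trapezoid rule on a
# wide grid; the tails beyond 10 standard deviations are negligible).
x0 = 0.5
y = np.linspace(mu_y - 10 * s_y, mu_y + 10 * s_y, 50001)
fy = joint_density(x0, y)
marginal_numeric = float(np.sum(0.5 * (fy[1:] + fy[:-1]) * np.diff(y)))

# The claimed marginal: the N(mu_X, sigma_X^2) density evaluated at x0.
marginal_exact = float(
    np.exp(-0.5 * ((x0 - mu_x) / s_x) ** 2) / (np.sqrt(2 * np.pi) * s_x)
)

# The two values agree to well within the quadrature error.
assert abs(marginal_numeric - marginal_exact) < 1e-6
print(marginal_numeric, marginal_exact)
```

Repeating the check at other values of x0 (or with other parameter choices) gives the same agreement, which is exactly the statement that the y-integral of the joint density is the normal marginal of X.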
TOPIC II: Section 3.4, Independent random variables.

Definition: the random variables \(X_1, \ldots, X_n\) are independent if

\[
f_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = f_{X_1}(x_1) \cdots f_{X_n}(x_n),
\]

and this definition holds in both the discrete mass function case and the density case. This turns out to be just what we need to conclude: knowing the values of some of these random variables gives us no information on the locations of the others.

Interesting equivalence: The events \(A_1, \ldots, A_n\) are independent if and only if the indicator random variables \(1_{A_1}, \ldots, 1_{A_n}\) are independent. (Recall \(1_S(\omega) = 1\) if \(\omega \in S\), \(= 0\) if \(\omega \notin S\), for \(\omega \in \Omega\), the sample space.)
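A small sketch of the equivalence (my illustration, not part of the notes): on the sample space of two fair dice, take \(A_1\) = {first die is even} and \(A_2\) = {second die is 5 or 6}, and verify that the joint mass function of the indicator pair \((1_{A_1}, 1_{A_2})\) factors into the product of its marginals at every point, which is the definition of independence in the discrete case.

```python
from itertools import product
from fractions import Fraction

# Sample space: two fair dice; each of the 36 outcomes has probability 1/36.
omega = list(product(range(1, 7), repeat=2))
p = Fraction(1, 36)

ind_a = lambda w: 1 if w[0] % 2 == 0 else 0   # indicator of A1 = {first die even}
ind_b = lambda w: 1 if w[1] > 4 else 0        # indicator of A2 = {second die in {5, 6}}

# Joint and marginal mass functions of the indicator pair (1_{A1}, 1_{A2}).
def pmf_joint(a, b):
    return sum(p for w in omega if ind_a(w) == a and ind_b(w) == b)

def pmf_a(a):
    return sum(p for w in omega if ind_a(w) == a)

def pmf_b(b):
    return sum(p for w in omega if ind_b(w) == b)

# The joint pmf factors into the product of marginals at all four points,
# so the indicators (equivalently, the events A1 and A2) are independent.
for a in (0, 1):
    for b in (0, 1):
        assert pmf_joint(a, b) == pmf_a(a) * pmf_b(b)
print("indicators independent")
```

Exact arithmetic with `fractions.Fraction` avoids floating-point comparisons: here \(P(A_1) = 1/2\), \(P(A_2) = 1/3\), and for example the joint pmf at (1, 1) is 6/36 = 1/6, matching the product.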

This note was uploaded on 06/12/2010 for the course MATH 307 taught by Professor Luikonnen during the Fall '08 term at Tulane.
