multivariate-normal

ARE 210 Course Notes — Last Update: 10/21/2005

LINEAR COMBINATIONS OF NORMAL RANDOM VARIABLES

Let $x$ be an $n$-dimensional column vector of random variables with a multivariate normal distribution with mean vector $\mu$ and variance-covariance matrix $\Sigma$. We write $x \sim N(\mu, \Sigma)$. The joint probability density function for $x$ is

$$f_X(x) = (2\pi)^{-n/2}\,|\Sigma|^{-1/2} \exp\!\big\{ -\tfrac{1}{2}(x - \mu)^\mathsf{T} \Sigma^{-1} (x - \mu) \big\}.$$

Proposition 1: The $m$-dimensional random vector $y$ defined by $y = Ax + b$, where $A$ is $m \times n$ and has rank $m \le n$, and $b$ is $m \times 1$, is multivariate normal with mean $A\mu + b$ and variance-covariance matrix $A\Sigma A^\mathsf{T}$, i.e., $y \sim N(A\mu + b,\, A\Sigma A^\mathsf{T})$.

The mean and variance parts of the proposition are trivial. Given the definition of $y$,

$$E(y) = A\,E(x) + b = A\mu + b,$$

and

$$E\big[(y - E(y))(y - E(y))^\mathsf{T}\big] = E\big[(Ax + b - A\mu - b)(Ax + b - A\mu - b)^\mathsf{T}\big] = A\,E\big[(x - \mu)(x - \mu)^\mathsf{T}\big]A^\mathsf{T} = A\Sigma A^\mathsf{T}.$$

The distributional part is only a little more difficult. Let's do the easiest case first by deriving the distribution directly. If $m = n$, so that $A$ is square and nonsingular, then we can solve for $x$ in terms of $y$ as $x = A^{-1}(y - b)$, so that $\partial x / \partial y^\mathsf{T} = A^{-1}$. Note that $|A\Sigma A^\mathsf{T}| = |A| \cdot |\Sigma| \cdot |A^\mathsf{T}| = |A|^2\,|\Sigma|$ by well-known properties of determinants. This in turn implies that $|A\Sigma A^\mathsf{T}|^{-1/2} = \big(|A|^{+}\big)^{-1}\,|\Sigma|^{-1/2}$,
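As a numerical sanity check on Proposition 1 in the square, nonsingular case, the sketch below (with made-up example matrices, not taken from these notes) compares the $N(A\mu + b,\, A\Sigma A^\mathsf{T})$ density at an arbitrary point against the change-of-variables formula $f_Y(y) = f_X\!\big(A^{-1}(y - b)\big)\,/\,|{\det A}|$:

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    """Density of N(mu, Sigma) evaluated at the point x."""
    n = len(mu)
    d = x - mu
    quad = d @ np.linalg.solve(Sigma, d)
    return np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** n * np.linalg.det(Sigma))

# Hypothetical example values (illustration only, not from the notes)
A = np.array([[2.0, 1.0], [0.0, 3.0]])        # square, nonsingular
b = np.array([1.0, -1.0])
mu = np.array([0.5, 2.0])
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])

y0 = np.array([1.7, 4.2])                     # arbitrary evaluation point

# Left side: density of y ~ N(A mu + b, A Sigma A^T) at y0
lhs = mvn_pdf(y0, A @ mu + b, A @ Sigma @ A.T)

# Right side: change-of-variables formula, |det A|^{-1} f_X(A^{-1}(y0 - b))
x0 = np.linalg.solve(A, y0 - b)
rhs = mvn_pdf(x0, mu, Sigma) / abs(np.linalg.det(A))

print(lhs, rhs)   # the two agree to machine precision
```

The agreement reflects exactly the determinant identity in the text: $|A\Sigma A^\mathsf{T}|^{1/2} = |A|^{+}\,|\Sigma|^{1/2}$ makes the normalizing constants match once the Jacobian $|A|^{-1}$ is included.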
where the superscript "+" on the determinant of $A$ on the right-hand side indicates the absolute value, or positive square root, of $|A|^2 = |A| \cdot |A^\mathsf{T}|$. We therefore obtain the joint density function for $y$ from the change of variables formula as

$$f_Y(y) = (2\pi)^{-n/2}\,|A\Sigma A^\mathsf{T}|^{-1/2} \exp\!\big\{ -\tfrac{1}{2}(y - A\mu - b)^\mathsf{T} (A\Sigma A^\mathsf{T})^{-1} (y - A\mu - b) \big\}.$$

Now consider $m < n$. Let $z \sim N(0, I_n)$; $x$ can always be written as $x = \Sigma^{1/2} z + \mu$, where $\Sigma = R \Delta R^\mathsf{T}$ factors $\Sigma$, with $R^\mathsf{T} R = R R^\mathsf{T} = I_n$ and $\Delta = \mathrm{diag}(\delta_{ii})$, $\delta_{ii} > 0$, $i = 1, \dots, n$. $R$ is an $n \times n$ matrix of eigen (or characteristic) vectors for $\Sigma$ and $\Delta$ is a diagonal matrix of eigen (or characteristic) values for $\Sigma$. In other words, for each column vector in $R$, say $r_i$, and the corresponding main diagonal element $\delta_i$ of $\Delta$, we have

$$\Sigma r_i = \delta_i r_i, \qquad r_i^\mathsf{T} r_i = 1.$$

We also have the orthogonality conditions between the eigenvectors, $r_i^\mathsf{T} r_j = 0 \;\; \forall\, i \ne j$, which implies that $R^\mathsf{T}$ is the inverse of $R$, i.e., $R^\mathsf{T} R = I_n$. Since $(R^{-1})^{-1} = R$ by the properties of inverses, and the inverse is unique, it follows that $R$ is the inverse of $R^\mathsf{T}$, i.e., $R R^\mathsf{T} = I_n$. Finally, since $\Sigma$ is positive definite, all of the eigenvalues must be strictly positive. The reason for this is that if we define the $n$-vector $w = R^\mathsf{T} z$, then

$$\sum_{i=1}^{n} w_i^2 = w^\mathsf{T} w = z^\mathsf{T} R R^\mathsf{T} z = z^\mathsf{T} z = \sum_{i=1}^{n} z_i^2 > 0 \quad \forall\, z \ne 0.$$
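The eigendecomposition properties used above can be checked numerically. The sketch below (with a hypothetical positive definite $\Sigma$, not from the notes) uses `numpy.linalg.eigh`, which returns the eigenvalues $\delta_i$ and the orthogonal eigenvector matrix $R$ for a symmetric matrix, and verifies $\Sigma = R\Delta R^\mathsf{T}$, the orthogonality conditions, strict positivity of the eigenvalues, and that $\Sigma^{1/2} = R\Delta^{1/2}R^\mathsf{T}$ is a valid square root for the representation $x = \Sigma^{1/2} z + \mu$:

```python
import numpy as np

# Hypothetical positive definite covariance matrix (illustration only)
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 0.5]])

delta, R = np.linalg.eigh(Sigma)   # eigenvalues delta_i, eigenvector matrix R

# R is orthogonal: R^T R = R R^T = I_n
assert np.allclose(R.T @ R, np.eye(3))
assert np.allclose(R @ R.T, np.eye(3))

# Sigma = R Delta R^T, and each column r_i satisfies Sigma r_i = delta_i r_i
assert np.allclose(R @ np.diag(delta) @ R.T, Sigma)
for i in range(3):
    assert np.allclose(Sigma @ R[:, i], delta[i] * R[:, i])

# Positive definiteness implies strictly positive eigenvalues
assert np.all(delta > 0)

# Sigma^{1/2} = R Delta^{1/2} R^T squares back to Sigma, so
# x = Sigma^{1/2} z + mu with z ~ N(0, I_n) has covariance Sigma
root = R @ np.diag(np.sqrt(delta)) @ R.T
assert np.allclose(root @ root.T, Sigma)
print("all eigendecomposition checks passed")
```

Note that `eigh` already normalizes each $r_i$ to unit length, which is exactly the condition $r_i^\mathsf{T} r_i = 1$ in the text.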
This note was uploaded on 08/01/2008 for the course ARE 210 taught by Professor Lafrance during the Fall '07 term at Berkeley.
