Handout 8: Matrix approach to regression analysis

Recall that the simple linear regression model is given by
\[
Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i, \qquad i = 1, \ldots, n,
\]
where (X_1, Y_1), \ldots, (X_n, Y_n) are the observations, and \varepsilon_1, \ldots, \varepsilon_n are independent with mean zero and variance \sigma^2. This model can be rewritten as
\[
Y = X\beta + \varepsilon,
\]
where
\[
Y = \begin{pmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{pmatrix}, \quad
X = \begin{pmatrix} 1 & X_1 \\ 1 & X_2 \\ \vdots & \vdots \\ 1 & X_n \end{pmatrix}, \quad
\beta = \begin{pmatrix} \beta_0 \\ \beta_1 \end{pmatrix}
\quad \text{and} \quad
\varepsilon = \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{pmatrix}.
\]

Random vectors and matrices

A random vector is a vector of random variables. So if Y_1, Y_2 and Y_3 are random variables, then
\[
Y = \begin{pmatrix} Y_1 \\ Y_2 \\ Y_3 \end{pmatrix}
\]
is a random vector. If the means of Y_1, Y_2 and Y_3 are \mu_1, \mu_2 and \mu_3, then the mean of the random vector Y is defined to be
\[
E(Y) = \begin{pmatrix} \mu_1 \\ \mu_2 \\ \mu_3 \end{pmatrix}.
\]
The variance-covariance matrix of the random vector Y is defined to be
\[
\sigma^2(Y) = \operatorname{Var}(Y) =
\begin{pmatrix}
\operatorname{Var}(Y_1) & \operatorname{Cov}(Y_1, Y_2) & \operatorname{Cov}(Y_1, Y_3) \\
\operatorname{Cov}(Y_2, Y_1) & \operatorname{Var}(Y_2) & \operatorname{Cov}(Y_2, Y_3) \\
\operatorname{Cov}(Y_3, Y_1) & \operatorname{Cov}(Y_3, Y_2) & \operatorname{Var}(Y_3)
\end{pmatrix}.
\]

Some basic results

Let Y ...
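A minimal numerical sketch of the matrix form above, assuming small illustrative values for n, the X_i's, \beta_0, \beta_1 and \sigma (none of these numbers come from the handout), builds the design matrix X with a column of ones and generates Y = X\beta + \varepsilon:

import numpy as np

rng = np.random.default_rng(0)

n = 5                                    # illustrative sample size (assumption)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # illustrative predictor values (assumption)

# Design matrix X: first column of ones (intercept), second column the X_i's.
X = np.column_stack([np.ones(n), x])     # shape (n, 2)

beta = np.array([1.0, 2.0])              # illustrative (beta_0, beta_1) (assumption)
sigma = 0.5                              # illustrative error standard deviation (assumption)

# Errors eps_1, ..., eps_n: independent with mean zero and variance sigma^2.
eps = rng.normal(loc=0.0, scale=sigma, size=n)

# The model in matrix form: Y = X beta + eps.
Y = X @ beta + eps

print(X)
print(Y)

Each row of X pairs a 1 (multiplying \beta_0) with one X_i (multiplying \beta_1), so the matrix product X\beta reproduces \beta_0 + \beta_1 X_i for every observation at once.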
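The mean vector and variance-covariance matrix can also be checked numerically. The sketch below, again with made-up values of \mu and \operatorname{Var}(Y) chosen purely for illustration, draws many replicates of a 3-component random vector and compares the sample mean vector and sample covariance matrix with the definitions above:

import numpy as np

rng = np.random.default_rng(1)

# Illustrative mean vector (mu_1, mu_2, mu_3) and variance-covariance matrix (assumptions).
mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 2.0, 0.3],
                  [0.2, 0.3, 1.5]])

# Draw many independent replicates of the random vector Y = (Y_1, Y_2, Y_3)'.
draws = rng.multivariate_normal(mu, Sigma, size=100_000)  # shape (100000, 3)

# Sample analogues of E(Y) and Var(Y): entry (i, j) of the sample covariance
# matrix estimates Cov(Y_i, Y_j); the diagonal entries estimate Var(Y_i).
print(draws.mean(axis=0))           # approximately mu
print(np.cov(draws, rowvar=False))  # approximately Sigma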