Stat 5102 Lecture Slides: Deck 5
Charles J. Geyer
School of Statistics
University of Minnesota

Linear Models

We now return to frequentist statistics for the rest of the course. The next subject is linear models, parts of which are variously called regression and analysis of variance (ANOVA) and analysis of covariance (ANCOVA), with regression being subdivided into simple linear regression and multiple regression. Although users have a very fractured view of the subject (many think regression and ANOVA have nothing to do with each other), a unified view is much simpler and more powerful.

Linear Models (cont.)

In linear models we have data on $n$ individuals. For each individual we observe one variable, called the response, which is treated as random, and also observe other variables, called predictors or covariates, which are treated as fixed. If the predictors are actually random, then we condition on them.

Collect the response variables into a random vector $Y$ of length $n$. In linear models we assume the components of $Y$ are normally distributed and independent and have the same variance $\sigma^2$. We do not assume they are identically distributed. Their means can be different.

Linear Models (cont.)

$$E(Y) = \mu \qquad (*)$$
$$\operatorname{var}(Y) = \sigma^2 I \qquad (**)$$

where $I$ is the $n \times n$ identity matrix. Hence

$$Y \sim \mathcal{N}(\mu, \sigma^2 I) \qquad (***)$$

Recall that we are conditioning on the covariates, hence the expectation $(*)$ is actually a conditional expectation, conditioning on any covariates that are random, although we have not indicated that in the notation. Similarly, the variance in $(**)$ is a conditional variance, and the distribution in $(***)$ is a conditional distribution.

Linear Models (cont.)

One more assumption gives linear models its name:

$$\mu = M \beta$$

where $M$ is a nonrandom matrix, which may depend on the covariates, and $\beta$ is a vector of dimension $p$ of unknown parameters. The matrix $M$ is called the model matrix or the design matrix. We will always use the former, since the latter doesn't make much sense except for a designed experiment.

Each row of $M$ corresponds to one individual. The $i$-th row determines the mean for the $i$-th individual

$$E(Y_i) = m_{i1} \beta_1 + m_{i2} \beta_2 + \cdots + m_{ip} \beta_p$$

and $m_{i1}, \ldots, m_{ip}$ depend only on the covariate information for this individual.

Linear Models (cont.)

The joint PDF of the data is

$$f(y) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{1}{2\sigma^2} (y_i - \mu_i)^2 \right)
       = (2\pi\sigma^2)^{-n/2} \exp\left( -\frac{1}{2\sigma^2} \sum_{i=1}^{n} (y_i - \mu_i)^2 \right)
       = (2\pi\sigma^2)^{-n/2} \exp\left( -\frac{1}{2\sigma^2} (y - M\beta)^T (y - M\beta) \right)$$

Hence the log likelihood is

$$l(\beta, \sigma^2) = -\frac{n}{2} \log(\sigma^2) - \frac{1}{2\sigma^2} (y - M\beta)^T (y - M\beta)$$

The Method of Least Squares

The maximum likelihood estimate for $\beta$ maximizes the log likelihood, which is the same as minimizing the quadratic function

$$(y - M\beta)^T (y - M\beta)$$

Hence this method of estimation is also called the method of least squares. Historically, the method of least squares was invented about 1800 and the method of maximum likelihood was invented about 1920. So the older name still attaches to the method.
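The equivalence of maximum likelihood and least squares for $\beta$ can be checked numerically. Below is a minimal sketch, not part of the slides, written in Python with NumPy; the covariate values, the "true" $\beta$, and $\sigma$ are invented for illustration, and `np.linalg.lstsq` is used as one convenient way to minimize $(y - M\beta)^T (y - M\beta)$.

```python
# A minimal sketch (not from the slides) of the linear model
# Y ~ N(M beta, sigma^2 I) and of least squares as the MLE of beta.
# The covariates, true beta, and sigma are made-up assumptions.
import numpy as np

rng = np.random.default_rng(0)

# n individuals, one covariate x; the model matrix M has a column of
# ones (intercept) and a column of covariate values, so p = 2.
n = 100
x = rng.uniform(0.0, 10.0, size=n)
M = np.column_stack([np.ones(n), x])

beta_true = np.array([3.0, 1.5])   # assumed "unknown" parameter vector
sigma = 2.0                        # common standard deviation

# Response: independent normals with mean M beta and variance sigma^2.
y = M @ beta_true + rng.normal(0.0, sigma, size=n)

# Least squares / maximum likelihood estimate of beta: minimize
# (y - M beta)^T (y - M beta), here via np.linalg.lstsq.
beta_hat, *_ = np.linalg.lstsq(M, y, rcond=None)

# Log likelihood as on the slides (the additive constant -(n/2) log(2 pi)
# is dropped, which does not affect maximization over beta).
def log_lik(beta, sigma2):
    resid = y - M @ beta
    return -0.5 * n * np.log(sigma2) - resid @ resid / (2.0 * sigma2)

print("beta_hat:", beta_hat)
print("log lik at beta_hat: ", log_lik(beta_hat, sigma**2))
print("log lik at beta_true:", log_lik(beta_true, sigma**2))  # never larger
```

For any fixed $\sigma^2$, the log likelihood evaluated at `beta_hat` is at least as large as at any other $\beta$, including `beta_true`, which is exactly the least-squares property restated.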