Penalized least squares versus generalized least squares representations of linear mixed models

Douglas Bates
Department of Statistics
University of Wisconsin–Madison

May 5, 2009

Abstract

The methods in the lme4 package for R for fitting linear mixed models are based on sparse matrix methods, especially the Cholesky decomposition of sparse positive-semidefinite matrices, in a penalized least squares representation of the conditional model for the response given the random effects. The representation is similar to that in Henderson's mixed-model equations. An alternative representation of the calculations is as a generalized least squares problem. We describe the two representations, show their equivalence, and explain why we feel that the penalized least squares approach is more versatile and more computationally efficient.

1 Definition of the model

We consider linear mixed models in which the random effects are represented by a $q$-dimensional random vector, $\mathcal{B}$, and the response is represented by an $n$-dimensional random vector, $\mathcal{Y}$. We observe a value, $y$, of the response. The random effects are unobserved.

For our purposes, we will assume a spherical multivariate normal conditional distribution of $\mathcal{Y}$, given $\mathcal{B}$. That is, we assume the variance-covariance matrix of $\mathcal{Y}\mid\mathcal{B}$ is simply $\sigma^2 I_n$, where $I_n$ denotes the identity matrix of order $n$. (The term "spherical" refers to the fact that contours of the conditional density are concentric spheres.)

The conditional mean, $\mathrm{E}[\mathcal{Y}\mid\mathcal{B}=b]$, is a linear function of $b$ and the $p$-dimensional fixed-effects parameter, $\beta$,

$$\mathrm{E}[\mathcal{Y}\mid\mathcal{B}=b] = X\beta + Zb, \quad (1)$$

where $X$ and $Z$ are known model matrices of sizes $n\times p$ and $n\times q$, respectively. Thus

$$\mathcal{Y}\mid\mathcal{B}=b \sim \mathcal{N}\!\left(X\beta + Zb,\; \sigma^2 I_n\right). \quad (2)$$

The marginal distribution of the random effects,

$$\mathcal{B} \sim \mathcal{N}\!\left(0,\; \sigma^2\Sigma(\theta)\right), \quad (3)$$

is also multivariate normal, with mean $0$ and variance-covariance matrix $\sigma^2\Sigma(\theta)$. The scalar $\sigma^2$ in (3) is the same as the $\sigma^2$ in (2).
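As a concrete illustration of the model in (1)–(3), the following minimal sketch simulates a response from the conditional distribution given a draw of the random effects. Python/NumPy stands in for the paper's R setting, and all dimensions, matrices, and parameter values are invented for illustration (here $\Sigma(\theta)$ is taken to be a simple diagonal matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

n, p, q = 50, 2, 3           # observations, fixed effects, random effects
sigma = 0.5                  # residual standard deviation

# Known model matrices X (n x p) and Z (n x q), simulated here
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = rng.normal(size=(n, q))
beta = np.array([1.0, -2.0])          # p-dimensional fixed-effects parameter

# Relative variance-covariance matrix Sigma(theta): a diagonal example
# with variance ratios theta_k^2; a zero entry is allowed (semidefinite)
theta = np.array([1.5, 0.8, 0.0])
Sigma = np.diag(theta**2)

# Draw b from the marginal N(0, sigma^2 * Sigma(theta)) of eq. (3);
# for a diagonal Sigma the matrix square root is just sigma * theta
b = sigma * theta * rng.normal(size=q)

# Draw y from the conditional N(X beta + Z b, sigma^2 I_n) of eq. (2)
y = X @ beta + Z @ b + sigma * rng.normal(size=n)
```

Note that the same scalar `sigma` scales both the marginal covariance of the random effects and the conditional covariance of the response, matching the statement that the $\sigma^2$ in (3) is the same as the $\sigma^2$ in (2).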
As described in the next section, the relative variance-covariance matrix, $\Sigma(\theta)$, is a $q\times q$ positive-semidefinite matrix depending on a parameter vector, $\theta$. Typically the dimension of $\theta$ is much, much smaller than $q$.

1.1 Variance-covariance of the random effects

The relative variance-covariance matrix, $\Sigma(\theta)$, must be symmetric and positive semidefinite (i.e. $x'\Sigma(\theta)x \ge 0,\ \forall x\in\mathbb{R}^q$). Because the estimate of a variance component can be zero, it is important to allow for a semidefinite ...
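One standard way to guarantee positive semidefiniteness, in the spirit of the relative covariance factor used in lme4, is to parameterize through a matrix factor, $\Sigma(\theta) = \Lambda(\theta)\Lambda(\theta)'$, which is semidefinite for any real $\theta$, including $\theta$ values that drive a variance component to zero. The specific diagonal $\Lambda$ below is an invented toy example, not the paper's parameterization:

```python
import numpy as np

rng = np.random.default_rng(1)
q = 3

# Factor Lambda(theta); Sigma = Lambda @ Lambda.T is positive
# semidefinite for ANY real theta, including exact zeros.
theta = np.array([1.2, 0.3, 0.0])   # third variance component at zero
Lam = np.diag(theta)                 # simplest case: diagonal factor
Sigma = Lam @ Lam.T

# Numerically check x' Sigma x >= 0 over random directions x in R^q
xs = rng.normal(size=(1000, q))
quad = np.einsum('ij,jk,ik->i', xs, Sigma, xs)
assert (quad >= -1e-12).all()

# Sigma itself is singular (rank 2), so a Cholesky decomposition of
# Sigma would fail; working with the factor Lambda sidesteps that.
print(np.linalg.matrix_rank(Sigma))   # 2
```

This illustrates why allowing semidefiniteness matters in practice: when a variance component is estimated as zero, $\Sigma(\theta)$ loses full rank, yet the factored form remains well defined.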