lec3

Again returning to the GLMM with random intercept, an importance sampler can be constructed by using $\phi(u_i; \mu_i, \sigma_i^2)$, the normal density approximation to the posterior density $f(u_i \mid \mathbf{y}_i)$. That is, the representation from p.196 of the integral we're after,

$$
E\{f(\mathbf{y}_i \mid b_i)\}
= \int_{-\infty}^{+\infty} \phi(u_i; \mu_i, \sigma_i^2)
\left\{ \frac{\phi(u_i; 0, \psi) \prod_j f(y_{ij} \mid u_i)}{\phi(u_i; \mu_i, \sigma_i^2)} \right\} du_i,
$$

suggests drawing a sample $(u_1^*, \ldots, u_R^*)$ from $\phi(u_i; \mu_i, \sigma_i^2)$ and using the approximation

$$
E\{f(\mathbf{y}_i \mid b_i)\}
\approx \frac{1}{R} \sum_{r=1}^{R}
\left\{ \frac{\phi(u_r^*; 0, \psi) \prod_j f(y_{ij} \mid u_r^*)}{\phi(u_r^*; \mu_i, \sigma_i^2)} \right\}.
$$

This method is akin to adaptive Gaussian quadrature (AGQ), which can be viewed as a deterministic version of importance sampling. Both methods require an extra computation of $\mu_i, \sigma_i^2$, the mean and variance of the normal approximation to the posterior distribution.

Importance sampling is implemented in PROC NLMIXED.

3. Analytic Approximation of the Likelihood: Most of the methods that fall into this category are based upon taking a Laplace approximation to the integral involved in the GLMM loglikelihood.

For a unidimensional integral, the Laplace approximation can be written as

$$
\int_{-\infty}^{+\infty} \exp\{f(x)\}\, dx
\approx \int_{-\infty}^{+\infty} \exp\{f(\hat{x}) - (x - \hat{x})^2/(2\sigma^2)\}\, dx
= \int_{-\infty}^{+\infty} \exp\{f(\hat{x})\} \sqrt{2\pi\sigma^2}\, \phi(x; \hat{x}, \sigma^2)\, dx
= \exp\{f(\hat{x})\} \sqrt{2\pi\sigma^2},
$$

where $\phi(x; \hat{x}, \sigma^2)$ is a normal density with mean $\hat{x}$ and variance $\sigma^2$, $\hat{x}$ is the mode of $f(x)$ and hence of $\exp\{f(x)\}$, and

$$
\sigma^2 = \left[ -\left. \frac{\partial^2 f(x)}{\partial x^2} \right|_{x = \hat{x}} \right]^{-1}.
$$

The approximation in the first line here is obtained by approximating $f(x)$ by a second-order Taylor expansion around its mode. You'll notice that the first-derivative term $f'(\hat{x})(x - \hat{x})$ drops out here because it is evaluated at the mode, at which point $f'(\hat{x}) = 0$.
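To make the importance-sampling estimate concrete, here is a minimal numerical sketch for a single cluster. This is not the notes' (PROC NLMIXED) implementation: the Bernoulli random-intercept model, the values `beta0 = 0.5` and `psi = 1.0`, and the responses `y` are all hypothetical illustrative choices, and the posterior mode and curvature are found numerically on a grid rather than analytically.

```python
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(0)

# Hypothetical one-cluster example (values are illustrative):
# y_ij | u_i ~ Bernoulli(expit(beta0 + u_i)),  u_i ~ N(0, psi).
beta0, psi = 0.5, 1.0
y = np.array([1, 0, 1, 1, 0])

def integrand(u):
    """phi(u; 0, psi) * prod_j f(y_ij | u): the integrand of E{f(y_i | b_i)}."""
    u = np.atleast_1d(np.asarray(u, dtype=float))
    p = 1.0 / (1.0 + np.exp(-(beta0 + u[:, None])))      # shape (n_u, n_j)
    lik = np.prod(np.where(y == 1, p, 1.0 - p), axis=1)  # prod over j
    return stats.norm.pdf(u, 0.0, np.sqrt(psi)) * lik

# "Exact" value of the integral by adaptive quadrature, for comparison.
exact, _ = integrate.quad(lambda u: integrand(u)[0], -10.0, 10.0)

# Normal approximation phi(u; mu_i, sigma_i^2) to the posterior:
# mode and curvature of the log integrand, located numerically on a grid.
grid = np.linspace(-6.0, 6.0, 4001)
logf = np.log(integrand(grid))
k = int(np.argmax(logf))
mu = grid[k]                                   # posterior mode
h = grid[1] - grid[0]
# inverse negative second difference approximates [-(d2/du2) log f]^{-1}
sig2 = -h**2 / (logf[k + 1] - 2.0 * logf[k] + logf[k - 1])

# Importance sampling: draw u*_r from the normal approximation and
# average the reweighted integrand, as in the displayed approximation.
R = 20000
u_star = rng.normal(mu, np.sqrt(sig2), size=R)
weights = integrand(u_star) / stats.norm.pdf(u_star, mu, np.sqrt(sig2))
is_est = weights.mean()
```

Because the proposal is matched to the posterior's mode and curvature, the weights are nearly constant and the estimate is accurate with modest `R`; a badly matched proposal would give highly variable weights and a noisy estimate.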
For the GLMM with random intercept, the integrand in the likelihood contribution for the $i$th cluster (which plays the role of $\exp\{f(x)\}$) is

$$
\phi(b_i; 0, \psi) \prod_j f(y_{ij} \mid b_i)
= \exp\left[ \log\left\{ \phi(b_i; 0, \psi) \prod_j f(y_{ij} \mid b_i) \right\} \right].
$$

As mentioned before, this quantity is proportional to the posterior distribution $f(b_i \mid \mathbf{y}_i)$. Therefore, in the Laplace approximation, the quantity playing the role of $\hat{x}$ is the posterior mode

$$
\hat{b}_i = \arg\max_{b_i}\, \phi(b_i; 0, \psi) \prod_j f(y_{ij} \mid b_i),
$$

and the curvature (inverse negative Hessian) of the log of the integrand, $\hat{\sigma}_i^2$, plays the role of $\sigma^2$. Thus, the Laplace approximation to the loglikelihood contribution from the $i$th subject becomes

$$
\log f(\mathbf{y}_i; \boldsymbol{\beta}, \psi)
\approx \tfrac{1}{2}\log(2\pi\hat{\sigma}_i^2) + \log\{\phi(\hat{b}_i; 0, \psi)\} + \sum_j \log f(y_{ij} \mid \hat{b}_i)
= \log(\hat{\sigma}_i/\sqrt{\psi}) - \hat{b}_i^2/(2\psi) + \sum_j \log f(y_{ij} \mid \hat{b}_i). \quad (**)
$$

This approximation is good whenever the posterior density of $b_i$ is approximately normal. This occurs for large cluster sizes.
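The approximation $(**)$ can be checked numerically for one cluster. The sketch below assumes a hypothetical Poisson random-intercept model; `beta0`, `psi`, and the counts `y` are illustrative values, not from the notes. It finds the posterior mode $\hat{b}_i$, computes the curvature $\hat{\sigma}_i^2$ from the model's analytic second derivative, and compares the Laplace value with the loglikelihood contribution obtained by quadrature.

```python
import numpy as np
from scipy import integrate, optimize, special, stats

# Hypothetical one-cluster example (values are illustrative):
# y_ij | b_i ~ Poisson(exp(beta0 + b_i)),  b_i ~ N(0, psi).
beta0, psi = 0.2, 0.8
y = np.array([3, 1, 4, 2, 2])

def sum_logf(b):
    """sum_j log f(y_ij | b) for the Poisson model."""
    eta = beta0 + b
    return np.sum(y * eta - np.exp(eta) - special.gammaln(y + 1))

def log_integrand(b):
    """log{ phi(b; 0, psi) prod_j f(y_ij | b) }, proportional to log f(b | y_i)."""
    return stats.norm.logpdf(b, 0.0, np.sqrt(psi)) + sum_logf(b)

# Posterior mode b_hat: the quantity playing the role of x-hat.
res = optimize.minimize_scalar(lambda b: -log_integrand(b),
                               bounds=(-5.0, 5.0), method="bounded")
b_hat = res.x

# Curvature sigma2_hat = [-(d2/db2) log integrand at b_hat]^{-1};
# for this Poisson model the second derivative is -(n_i exp(beta0+b) + 1/psi).
sigma2_hat = 1.0 / (len(y) * np.exp(beta0 + b_hat) + 1.0 / psi)

# Laplace approximation (**) to the loglikelihood contribution.
laplace = (np.log(np.sqrt(sigma2_hat) / np.sqrt(psi))
           - b_hat**2 / (2.0 * psi) + sum_logf(b_hat))

# Loglikelihood contribution by numerical quadrature, for comparison.
exact = np.log(integrate.quad(lambda b: np.exp(log_integrand(b)),
                              -10.0, 10.0)[0])
```

Even with only five observations in the cluster, the log-concave Poisson-times-normal integrand is close to normal around its mode, so the two values agree closely, consistent with the remark that the approximation improves with cluster size.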

This note was uploaded on 11/13/2011 for the course STAT 8630 taught by Professor Staff during the Fall '08 term at University of Georgia Athens.