MC methods: MH. Homework, Ch. 6.

Metropolis-Hastings Algorithm

Explain clearly what you do. When describing the MH algorithm, state which is your target distribution $f$ and which is your proposal $q$.

Ex 1. The negative binomial density can be parametrized in terms of its mean $\mu$ and overdispersion parameter $\alpha$,
$$
p(k \mid \mu, \alpha) = \frac{1}{B(k+1,\,\alpha^{-1})\,(k+\alpha^{-1})} \left( \frac{\alpha\mu}{1+\alpha\mu} \right)^{k} \left( \frac{1}{1+\alpha\mu} \right)^{1/\alpha}, \qquad k = 0, 1, \ldots, \tag{1}
$$
where $B(x,y) = \Gamma(x)\Gamma(y)/\Gamma(x+y)$ is the beta function. Under this parameterization, $E(k) = \mu$ and $\mathrm{var}(k) = \mu + \alpha\mu^{2}$.

The marginal density of $\alpha$ is obtained by integrating $\mu$ out of the density. To do this we assume a conditional density for $\mu$, namely $\mu \mid \alpha \sim F(\nu_1, \nu_2)$, the F-distribution with $\nu_1$ and $\nu_2$ degrees of freedom. This is a conjugate prior distribution for $\mu$, and allows us to calculate the marginal distribution of $\alpha$. In practice, we take $\nu_1 = 2a$ and $\nu_2 = 2a/\alpha$, where $a > 0$ is a chosen constant. Then, denoting the conditional density by $p(\mu \mid \alpha)$, we have
$$
p(\mu \mid \alpha) = \frac{1}{B(a,\, a/\alpha)} \, \frac{\alpha^{a}\, \mu^{a-1}}{(1+\alpha\mu)^{a + a/\alpha}},
$$
and the integrated likelihood function is given by
$$
p(k \mid \alpha) = \int p(k \mid \mu, \alpha)\, p(\mu \mid \alpha)\, d\mu \tag{2}
$$
$$
= \left[ \prod_{j} \frac{1}{B(k_j + 1,\,\alpha^{-1})\,(k_j + \alpha^{-1})} \right] \frac{B\!\left(a + \sum_j k_j,\ (J+a)/\alpha\right)}{B(a,\, a/\alpha)},
$$
where $k$ denotes the sample $k = (k_1, \ldots, k_J) \sim \mathrm{NegBin}(\mu, \alpha)$, and $p(k \mid \mu, \alpha) = \prod_j p(k_j \mid \mu, \alpha)$.
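As a starting point for the exercise, the integrated likelihood of Eq. (2) and a random-walk MH sampler for $\alpha$ can be sketched as below. Here the target $f$ is taken to be $p(k \mid \alpha)$ itself; the choice $a = 1$, the log-normal random-walk proposal $q$, its step size, and the example data are all illustrative assumptions, not part of the assignment.

```python
# Sketch: MH sampling of the overdispersion parameter alpha, with target
# f(alpha) = p(k | alpha) from Eq. (2). All tuning choices (a, proposal
# step, starting point) are assumptions for illustration.
import numpy as np
from scipy.special import betaln  # betaln(x, y) = log B(x, y)

def log_p_k_given_alpha(k, alpha, a=1.0):
    """Log integrated likelihood log p(k | alpha), Eq. (2)."""
    k = np.asarray(k, dtype=float)
    J = k.size
    inv = 1.0 / alpha
    # log prod_j 1 / [ B(k_j + 1, 1/alpha) (k_j + 1/alpha) ]
    term1 = -np.sum(betaln(k + 1.0, inv) + np.log(k + inv))
    # log [ B(a + sum_j k_j, (J + a)/alpha) / B(a, a/alpha) ]
    term2 = betaln(a + k.sum(), (J + a) * inv) - betaln(a, a * inv)
    return term1 + term2

def mh_alpha(k, n_iter=5000, step=0.3, alpha0=1.0, a=1.0, seed=0):
    """Random-walk MH on log(alpha): the proposal q is log-normal
    centered at the current value, so the acceptance ratio includes
    the Jacobian term log(alpha_prop / alpha)."""
    rng = np.random.default_rng(seed)
    alpha = alpha0
    logp = log_p_k_given_alpha(k, alpha, a)
    chain = np.empty(n_iter)
    for t in range(n_iter):
        prop = alpha * np.exp(step * rng.standard_normal())
        logp_prop = log_p_k_given_alpha(k, prop, a)
        # accept with prob min(1, f(prop) q(alpha|prop) / (f(alpha) q(prop|alpha)))
        if np.log(rng.uniform()) < logp_prop - logp + np.log(prop / alpha):
            alpha, logp = prop, logp_prop
        chain[t] = alpha
    return chain
```

A useful sanity check: with $\alpha = 1$, $a = 1$ and $k = (1, 2)$, Eq. (2) reduces to $B(4, 3)/B(1, 1) = 1/60$, which the function above reproduces.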
- Fall '11