
# Stat 5102 (Geyer) Midterm 1


## Problem 1

**(a)** The joint density is

$$
f(x \mid \theta) = \prod_{i=1}^n \theta x_i^{\theta-1}
= \theta^n \left( \prod_{i=1}^n x_i \right)^{\theta-1}
= \theta^n a_n^{\theta-1},
$$

where for convenience we have defined

$$
a_n = \prod_{i=1}^n x_i.
$$

That is also the likelihood when considered a function of $\theta$ rather than $x$. The prior density is

$$
g(\theta) = \lambda e^{-\lambda\theta}.
$$

Hence the unnormalized posterior is likelihood $\times$ prior:

$$
h(\theta \mid x) \propto \theta^n a_n^{\theta-1} \cdot \lambda e^{-\lambda\theta}
= \frac{\lambda}{a_n}\, \theta^n e^{-\lambda\theta + \theta \log a_n}.
$$

This is clearly an unnormalized $\text{Gam}(n+1,\, \lambda - \log a_n)$ density. So that is the posterior distribution.

**(b)** Equation (11.36) in the notes gives the mode of the gamma distribution: shape parameter minus one, divided by the rate parameter. In this case that is

$$
\frac{n}{\lambda - \log a_n}.
$$

## Problem 2

**(a)** By Example 11.2.4 in the notes, the posterior distribution is $\text{Normal}(a, b^{-1})$, where

$$
a = \frac{n\lambda \bar{x}_n + \lambda_0 \mu_0}{n\lambda + \lambda_0},
\qquad
b = n\lambda + \lambda_0,
$$

and where $\lambda$, $\mu_0$, and $\lambda_0$ are the precision of the data distribution and the mean and precision of the prior distribution, respectively. Since the variances are 4 and 1, the precisions are $\lambda = 1/4$ and $\lambda_0 = 1$. Also $\mu_0 = 0$. Plugging those in gives

$$
a = \frac{n\bar{x}_n}{n+4},
\qquad
b = \frac{n+4}{4}.
$$
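As a numerical sanity check (not part of the original solution, and using made-up data and prior rate), one can confirm the conjugacy claim in Problem 1(a): the ratio of the unnormalized posterior $\theta^n a_n^{\theta-1} e^{-\lambda\theta}$ to the $\text{Gam}(n+1, \lambda - \log a_n)$ kernel should be constant in $\theta$ (equal to $1/a_n$).

```python
import math

# Hypothetical data on (0, 1) and prior rate -- illustrative values only
x = [0.3, 0.7, 0.5, 0.9, 0.2]
lam = 2.0
n = len(x)
a_n = math.prod(x)            # a_n = product of the x_i

shape = n + 1                 # posterior shape from Problem 1(a)
rate = lam - math.log(a_n)    # posterior rate (log a_n < 0, so rate > 0)

def unnorm_posterior(theta):
    # likelihood times prior, dropping constants not involving theta
    return theta**n * a_n**(theta - 1) * math.exp(-lam * theta)

def gamma_kernel(theta):
    # unnormalized Gam(shape, rate) density: theta^(shape-1) e^(-rate*theta)
    return theta**(shape - 1) * math.exp(-rate * theta)

# If the posterior really is Gam(n+1, lam - log a_n), this ratio is
# the same at every theta (and equals 1 / a_n)
ratios = [unnorm_posterior(t) / gamma_kernel(t) for t in (0.5, 1.0, 2.0, 5.0)]
print(all(abs(r - ratios[0]) < 1e-9 * ratios[0] for r in ratios))  # True
```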

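A quick check of the specialization in Problem 2(a), with hypothetical values of $n$ and $\bar{x}_n$: plugging $\lambda = 1/4$, $\lambda_0 = 1$, $\mu_0 = 0$ into the general normal-normal update should reproduce $a = n\bar{x}_n/(n+4)$ and $b = (n+4)/4$.

```python
# Illustrative values only (any n and xbar would do)
n, xbar = 25, 1.3
lam, lam0, mu0 = 0.25, 1.0, 0.0   # data precision, prior precision, prior mean

# General normal-normal conjugate update from Example 11.2.4
a = (n * lam * xbar + lam0 * mu0) / (n * lam + lam0)  # posterior mean
b = n * lam + lam0                                    # posterior precision

# Compare against the specialized answers derived in the solution
print(abs(a - n * xbar / (n + 4)) < 1e-12)  # True
print(abs(b - (n + 4) / 4) < 1e-12)         # True
```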
**(b)** The HPD region for a normal posterior distribution is

$$
\text{posterior mean} \pm 1.96 \times \text{posterior standard deviation},
$$

which in this case is

$$
\frac{n\bar{x}_n}{n+4} \pm 1.96 \sqrt{\frac{4}{n+4}}
$$

(recall that $b$ is the posterior *precision*, which is one over the posterior variance).

## Problem 3

**(a)** The likelihood for $\lambda$ is

$$
L_n(\lambda) = \prod_{i=1}^n (1 - e^{-\lambda}) e^{-\lambda x_i}
= (1 - e^{-\lambda})^n e^{-\lambda \sum_i x_i}
= (1 - e^{-\lambda})^n e^{-n\lambda \bar{x}_n}.
$$

The log likelihood is

$$
l_n(\lambda) = -n\lambda \bar{x}_n + n \log(1 - e^{-\lambda})
$$

and

$$
l_n'(\lambda) = -n \bar{x}_n + \frac{n e^{-\lambda}}{1 - e^{-\lambda}}.
$$
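As a check of the differentiation in Problem 3(a) (again with hypothetical data, not from the exam), a central finite difference of $l_n$ should agree with the closed-form score $l_n'$:

```python
import math

# Hypothetical nonnegative-integer data and a trial lambda
x = [0, 2, 1, 3, 0, 1]
n = len(x)
xbar = sum(x) / n
lam = 0.8

def loglik(l):
    # l_n(lambda) = -n*lambda*xbar_n + n*log(1 - exp(-lambda))
    return -n * l * xbar + n * math.log(1 - math.exp(-l))

def score(l):
    # l'_n(lambda) = -n*xbar_n + n*exp(-lambda)/(1 - exp(-lambda))
    return -n * xbar + n * math.exp(-l) / (1 - math.exp(-l))

# Central finite difference of the log likelihood at lam
h = 1e-6
fd = (loglik(lam + h) - loglik(lam - h)) / (2 * h)
print(abs(fd - score(lam)) < 1e-4)  # True if the derivative is correct
```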
