Diagnostics in MCMC
Hoff, Chapter 6
October 13, 2010

Convergence to the Posterior Distribution

Theory tells us that if we run the Gibbs sampler long enough, the samples we obtain will be samples from the joint posterior distribution (the target, or stationary, distribution). This does not depend on the starting point (the chain forgets its past).

- How long do we need to run the Markov chain to adequately explore the posterior distribution?
- Mixing of the chain plays a critical role in how quickly we obtain good results. How can we tell if the chain is mixing well (or poorly)?

Three-Component Mixture Model

Posterior for θ:

  θ | Y ~ 0.45 N(-3, 1/3) + 0.10 N(0, 1/3) + 0.45 N(3, 1/3)

How can we draw samples from the posterior? Introduce a mixture component indicator δ, an unobserved latent variable that simplifies sampling:

- If δ = 1, then θ | δ, Y ~ N(-3, 1/3) and P(δ = 1 | Y) = 0.45;
- If δ = 2, then θ | δ, Y ~ N(0, 1/3) and P(δ = 2 | Y) = 0.10;
- If δ = 3, then θ | δ, Y ~ N(3, 1/3) and P(δ = 3 | Y) = 0.45.

Monte Carlo sampling: draw δ; then, given δ, draw θ.

MC Density

[Figure: histogram of θ from 1000 MC draws, with the posterior mixture density overlaid as a solid line.]

MC Variation

If we want to find the posterior mean of g(θ), the Monte Carlo estimate based on M MC samples is

  ḡ_MC = (1/M) Σ_m g(θ^(m)) → E[g(θ) | Y]

with variance

  Var[ḡ_MC] = E[(ḡ_MC − E[ḡ_MC])²] = Var[g(θ) | Y] / M,

leading to the Monte Carlo standard error √Var[ḡ_MC]. We expect the posterior mean of g(θ) to lie in the interval ḡ_MC ± 2 √Var[ḡ_MC] for roughly 95% of repeated MC samples.
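The two-step sampling scheme and the Monte Carlo standard error above can be sketched as follows. This is a minimal illustration, not code from the slides: the variable names, the choice of M = 1000, and taking g as the identity are my assumptions; the weights, means, and component variance 1/3 come from the mixture posterior stated above.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 1000                                  # number of MC draws (assumed)
weights = np.array([0.45, 0.10, 0.45])    # P(delta = k | Y), k = 1, 2, 3
means = np.array([-3.0, 0.0, 3.0])        # component means
sd = np.sqrt(1.0 / 3.0)                   # each component has variance 1/3

# Step 1: draw the latent indicator delta from its marginal posterior.
# Step 2: given delta, draw theta from the corresponding normal component.
delta = rng.choice(3, size=M, p=weights)
theta = rng.normal(loc=means[delta], scale=sd)

# MC estimate of E[g(theta) | Y]; here g is the identity, so this
# estimates the posterior mean of theta (which is 0 by symmetry).
g = theta
g_mc = g.mean()
mcse = g.std(ddof=1) / np.sqrt(M)         # estimate of sqrt(Var[g(theta)|Y] / M)

# Rough 95% interval for the posterior mean, as on the slide.
lo, hi = g_mc - 2 * mcse, g_mc + 2 * mcse
print(g_mc, mcse, (lo, hi))
```

Because these are independent draws (plain Monte Carlo, not MCMC), the MCSE formula applies directly; for Gibbs output, autocorrelation between draws inflates the effective standard error, which is what the diagnostics in this chapter are designed to detect.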