param-diag - Diagnostics in MCMC (Hoff Chapter 6)

Diagnostics in MCMC
Hoff Chapter 6
October 13, 2010

Convergence to Posterior Distribution

Theory tells us that if we run the Gibbs sampler long enough, the samples we obtain will be samples from the joint posterior distribution (the target or stationary distribution). This does not depend on the starting point (the chain forgets the past).

- How long do we need to run the Markov chain to adequately explore the posterior distribution?
- Mixing of the chain plays a critical role in how fast we can obtain good results. How can we tell whether the chain is mixing well (or poorly)?

Three-Component Mixture Model

Posterior for μ:

  μ | Y ∼ 0.45 N(−3, 1/3) + 0.10 N(0, 1/3) + 0.45 N(3, 1/3)

How can we draw samples from the posterior? Introduce a "mixture component indicator" δ, an unobserved latent variable that simplifies sampling:

- if δ = 1, then μ | δ, Y ∼ N(−3, 1/3) and P(δ = 1 | Y) = 0.45;
- if δ = 2, then μ | δ, Y ∼ N(0, 1/3) and P(δ = 2 | Y) = 0.10;
- if δ = 3, then μ | δ, Y ∼ N(3, 1/3) and P(δ = 3 | Y) = 0.45.

Monte Carlo sampling: draw δ; given δ, draw μ.

[Figure: histogram of μ from 1000 MC draws, with the posterior mixture density overlaid as a solid line.]

MC Variation

If we want to find the posterior mean of g(μ), the Monte Carlo estimate based on M MC samples is

  ĝ_MC = (1/M) Σ_m g(μ^(m)) → E[g(μ) | Y]

with variance

  Var[ĝ_MC] = E[(ĝ_MC − E[ĝ_MC])²] = Var[g(μ) | Y] / M,

leading to the Monte Carlo standard error √Var[ĝ_MC]. We expect the posterior mean of g(μ) to lie in the interval ĝ_MC ± 2 √Var[ĝ_MC] for roughly 95% of repeated MC samples.
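The two-step sampler above (draw the indicator δ, then draw μ from the selected component) and the Monte Carlo standard error can be sketched as follows. This is a minimal illustration, not code from Hoff; it assumes NumPy, uses g(μ) = μ, and all variable names (weights, delta, g_hat, mcse) are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Posterior of mu as the three-component normal mixture from the notes:
# mu | Y ~ 0.45 N(-3, 1/3) + 0.10 N(0, 1/3) + 0.45 N(3, 1/3)
weights = np.array([0.45, 0.10, 0.45])
means = np.array([-3.0, 0.0, 3.0])
sd = np.sqrt(1.0 / 3.0)  # common component standard deviation

M = 1000  # number of Monte Carlo draws, as in the histogram

# Step 1: draw the latent mixture-component indicator delta
delta = rng.choice(3, size=M, p=weights)

# Step 2: given delta, draw mu from the corresponding normal component
mu = rng.normal(loc=means[delta], scale=sd)

# Monte Carlo estimate of E[g(mu) | Y] for g(mu) = mu,
# with MC standard error sqrt(Var[g(mu) | Y] / M)
g_hat = mu.mean()
mcse = mu.std(ddof=1) / np.sqrt(M)
print(f"ghat = {g_hat:.3f} +/- {2 * mcse:.3f}")
```

For this mixture the true posterior mean is 0.45(−3) + 0.10(0) + 0.45(3) = 0, so ĝ_MC should land within roughly ±2·MCSE of zero.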
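The claim that ĝ_MC ± 2·MCSE covers the posterior mean in roughly 95% of repeated MC samples can be checked empirically. A hypothetical sketch, again assuming NumPy and g(μ) = μ; the repetition count reps is an arbitrary choice of mine:

```python
import numpy as np

rng = np.random.default_rng(1)

# Mixture from the notes: mu | Y ~ 0.45 N(-3,1/3) + 0.10 N(0,1/3) + 0.45 N(3,1/3)
weights = np.array([0.45, 0.10, 0.45])
means = np.array([-3.0, 0.0, 3.0])
sd = np.sqrt(1.0 / 3.0)

true_mean = weights @ means  # E[mu | Y] = 0 for this mixture

M = 1000     # MC draws per repetition
reps = 2000  # number of repeated MC samples
covered = 0
for _ in range(reps):
    delta = rng.choice(3, size=M, p=weights)
    mu = rng.normal(means[delta], sd)
    g_hat = mu.mean()
    mcse = mu.std(ddof=1) / np.sqrt(M)
    # does the +/- 2 MCSE interval contain the true posterior mean?
    if abs(g_hat - true_mean) <= 2 * mcse:
        covered += 1

coverage = covered / reps
print(f"coverage over {reps} repetitions: {coverage:.3f}")
```

The printed coverage should come out near 0.95, matching the "roughly 95% of repeated MC samples" statement.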