ISyE8843A, Brani Vidakovic
Handout 9

1 Bayesian Computation

If the selection of an adequate prior was the major conceptual and modeling challenge of Bayesian analysis, the major implementational challenge is computation. As soon as the model deviates from the conjugate structure, finding the posterior distribution (and, first, the marginal) and the Bayes rule is anything but simple. A closed-form solution is the exception rather than the rule, and even such closed-form solutions rely on lucky mathematical coincidences, convenient mixtures, and other "tricks." Up to this point you should have gotten a sense of this computational challenge.

If classical statistics relies on optimization, Bayesian statistics relies on integration. The marginal needed for the posterior is the integral

\[
m(x) = \int_{\Theta} f(x \mid \theta)\, \pi(\theta)\, d\theta,
\]

and the Bayes estimator of $h(\theta)$, with respect to squared error loss, is a ratio of integrals,

\[
\delta_{\pi}(x) = \int_{\Theta} h(\theta)\, \pi(\theta \mid x)\, d\theta
= \frac{\int_{\Theta} h(\theta)\, f(x \mid \theta)\, \pi(\theta)\, d\theta}{\int_{\Theta} f(x \mid \theta)\, \pi(\theta)\, d\theta}.
\]

The difficulties in calculating this Bayes rule are that (i) the posterior may not be representable in finite form, and (ii) even when the posterior has a closed form, the integral of $h(\theta)$ against it may not. Adopting a different loss function usually makes the calculation even more difficult. An exception is the 0-1 loss, for which the Bayes rule is the mode of the posterior, and the mode is not influenced by the normalizing (trouble-making) constant $m(x)$.

The last two decades of research in Bayesian statistics have broadened the scope of Bayesian models tremendously. Models that could not be handled before are now routinely solved. This is done by Markov chain Monte Carlo (MCMC) methods, whose introduction to the field of statistics revolutionized Bayesian analysis. This handout overviews pre-MCMC techniques: Monte Carlo integration, importance sampling, and analytic approximations (Riemann, Laplace, and saddlepoint); a small Monte Carlo sketch is given below.
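Of these, Monte Carlo integration is the most direct: drawing $\theta_1, \dots, \theta_N$ from the prior $\pi(\theta)$ turns both integrals in the ratio above into sample averages,

\[
\hat{m}(x) = \frac{1}{N} \sum_{j=1}^{N} f(x \mid \theta_j), \qquad
\hat{\delta}_{\pi}(x) = \frac{\sum_{j=1}^{N} h(\theta_j)\, f(x \mid \theta_j)}{\sum_{j=1}^{N} f(x \mid \theta_j)}.
\]

A minimal Python sketch follows. The particular model (normal likelihood with a Cauchy prior, a standard non-conjugate pair), the observation $x = 2$, and $h(\theta) = \theta$ are illustrative assumptions, not the handout's example.

```python
# Monte Carlo estimate of the marginal m(x) and of the Bayes estimator
# delta_pi(x) = E[h(theta) | x], using draws from the prior.
# Model (an illustrative assumption): x | theta ~ N(theta, 1),
# theta ~ Cauchy(0, 1), h(theta) = theta.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

x = 2.0            # observed data point (hypothetical)
N = 1_000_000      # number of prior draws

theta = stats.cauchy.rvs(size=N, random_state=rng)  # theta_j ~ pi(theta)
lik = stats.norm.pdf(x, loc=theta, scale=1.0)       # f(x | theta_j)

m_hat = lik.mean()                                  # estimate of m(x)
delta_hat = np.sum(theta * lik) / np.sum(lik)       # estimate of E[theta | x]

print(f"m(x)     ~= {m_hat:.5f}")
print(f"delta(x) ~= {delta_hat:.5f}")
```

Note that the same draws serve both the numerator and the denominator, so the normalizing constant $m(x)$ never has to be computed separately; this ratio form is also the starting point for importance sampling.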
1.1 Bayesian CLT

Suppose that $X_1, X_2, \dots, X_n \sim f(x \mid \theta)$, where $\theta$ is a $p$-dimensional parameter, and that the prior on $\theta$ is $\pi(\theta)$. The prior $\pi(\theta)$ could be improper, but we assume that the posterior is proper and that its mode exists. Then, as $n \to \infty$,

\[
[\theta \mid x] \to \mathrm{MVN}_p\!\left(\theta_M,\; H^{-1}(\theta_M)\right),
\]

where $\theta_M$ is the posterior mode, i.e., a solution of

\[
\frac{\partial \log \pi^*(\theta \mid x)}{\partial \theta_i} = 0, \quad i = 1, \dots, p,
\]

where $\pi^*(\theta \mid x) = f(x \mid \theta)\, \pi(\theta)$ is the non-normalized posterior. Let $H$ be the Hessian of the negative log posterior,

\[
H(\theta) = \left( -\frac{\partial^2 \log \pi^*(\theta \mid x)}{\partial \theta_i\, \partial \theta_j} \right)_{i,j=1}^{p}.
\]
The asymptotic covariance matrix is

\[
H^{-1}(\theta_M) = \left( H(\theta) \right)^{-1} \Big|_{\theta = \theta_M}.
\]

The proof can be found in standard texts on asymptotic theory.

Example: Bernoulli.
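For Bernoulli observations with a conjugate Beta prior, every ingredient of the theorem is available in closed form, and the exact posterior can be used to check the normal approximation. With $X_i \sim \mathrm{Ber}(\theta)$, a $\mathrm{Beta}(a, b)$ prior, and $s = \sum_i x_i$, the non-normalized posterior is $\pi^*(\theta \mid x) = \theta^{s+a-1}(1-\theta)^{n-s+b-1}$, with mode $\theta_M = (s+a-1)/(n+a+b-2)$ and

\[
H(\theta) = \frac{s+a-1}{\theta^2} + \frac{n-s+b-1}{(1-\theta)^2}.
\]

A minimal sketch along these lines, where the concrete values of $n$, $s$, $a$, $b$ are hypothetical rather than the handout's:

```python
# Bayesian CLT check for the Bernoulli example: X_1, ..., X_n ~ Ber(theta)
# with a Beta(a, b) prior, so the exact posterior Beta(a + s, b + n - s)
# is available for comparison with MVN_1(theta_M, 1 / H(theta_M)).

import numpy as np
from scipy import stats

n, s = 50, 31        # sample size and number of successes (hypothetical)
a, b = 1.0, 1.0      # Beta(1, 1), i.e., a flat prior on theta

theta_M = (s + a - 1) / (n + a + b - 2)   # posterior mode
H = (s + a - 1) / theta_M**2 + (n - s + b - 1) / (1 - theta_M)**2

approx = stats.norm(loc=theta_M, scale=np.sqrt(1 / H))  # normal approximation
exact = stats.beta(a + s, b + n - s)                    # exact posterior

for q in (0.05, 0.5, 0.95):
    print(f"q={q:.2f}: exact {exact.ppf(q):.4f}, approx {approx.ppf(q):.4f}")
```

With the flat $\mathrm{Beta}(1,1)$ prior the mode coincides with the MLE $s/n$, so this check also reproduces the familiar frequentist normal approximation to the binomial proportion.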