Part 24: Bayesian Estimation

[Fragment carried over from the preceding slide's MNP specification; the only recoverable piece is the disturbance assumption $\varepsilon_{itj} \sim N[0, \lambda_j]$.]
Part 24: Bayesian Estimation – Bayesian Priors (19/34)

Prior densities:
  $\beta_i \sim N[\bar\beta, \mathbf{V}_\beta]$, which implies $\beta_i = \bar\beta + \mathbf{w}_i$, $\mathbf{w}_i \sim N[\mathbf{0}, \mathbf{V}_\beta]$
  $\lambda_j \sim \text{Inverse Gamma}[v, s_j]$ (looks like chi-squared), $v = 3$, $s_j = 1$

Priors over model parameters:
  $\bar\beta \sim N[\beta^0, a\mathbf{V}_\beta]$, $\beta^0 = \mathbf{0}$, $a = 8$
  $\mathbf{V}_\beta^{-1} \sim \text{Wishart}[v_0, \mathbf{V}_0]$, $v_0 = 8$, $\mathbf{V}_0 = 8\mathbf{I}$

(A sampling sketch follows.)
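For concreteness, here is a minimal sketch of drawing one set of parameters from these priors, assuming NumPy/SciPy. The dimension K, the inverse-gamma parameterization (shape v/2, scale s_j/2), and all variable names are illustrative assumptions, not taken from the slides.

```python
import numpy as np
from scipy.stats import invgamma, wishart

rng = np.random.default_rng(0)
K = 3                                   # number of coefficients (illustrative)

# Hyperparameters from the slide: v = 3, s_j = 1, a = 8, v0 = 8, V0 = 8I
v, s_j, a, v0 = 3.0, 1.0, 8.0, 8.0
V0 = 8.0 * np.eye(K)

# V_beta^{-1} ~ Wishart[v0, V0], so V_beta is the inverse of a Wishart draw
V_beta = np.linalg.inv(wishart.rvs(df=v0, scale=V0, random_state=rng))

# beta_bar ~ N[beta0, a * V_beta] with beta0 = 0
beta_bar = rng.multivariate_normal(np.zeros(K), a * V_beta)

# lambda_j ~ Inverse Gamma[v, s_j]; the (v/2, s_j/2) mapping is an assumption
lam_j = invgamma.rvs(a=v / 2.0, scale=s_j / 2.0, random_state=rng)

# beta_i = beta_bar + w_i, w_i ~ N[0, V_beta]
beta_i = beta_bar + rng.multivariate_normal(np.zeros(K), V_beta)
```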
Part 24: Bayesian Estimation – Bayesian Estimator (20/34)

Joint posterior: $p[\beta_1, \ldots, \beta_N, \bar\beta, \mathbf{V}, \lambda_1, \ldots, \lambda_J \mid \text{data}]$

• The integral does not exist in closed form.
• Estimate by drawing random samples from the joint posterior.
• The full joint posterior is not of a known form, so it is not possible to sample from it directly; instead, the Gibbs sampler on the next slide cycles through the known conditional posteriors.
Part 24: Bayesian Estimation – Gibbs Cycles for the MNP Model (21/34)

Samples from the marginal posteriors (one Gibbs pass is sketched below):

Marginal posterior for the individual parameters (known and can be sampled):
  $p[\beta_i \mid \bar\beta, \mathbf{V}, \lambda, \text{data}]$

Marginal posteriors for the common parameters (each known and each can be sampled):
  $p[\bar\beta \mid \beta_1, \ldots, \beta_N, \mathbf{V}, \lambda, \text{data}]$
  $p[\mathbf{V} \mid \beta_1, \ldots, \beta_N, \bar\beta, \lambda, \text{data}]$
  $p[\lambda \mid \beta_1, \ldots, \beta_N, \bar\beta, \mathbf{V}, \text{data}]$
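A skeletal sketch of one Gibbs pass over the common parameters, assuming conjugate normal/inverse-Wishart conditionals in the diffuse-prior limit. The model-specific draws of each $\beta_i$ and $\lambda_j$, which require the MNP likelihood, are deliberately omitted; the function name gibbs_cycle and the simplifications are assumptions of this sketch.

```python
import numpy as np
from scipy.stats import invwishart

def gibbs_cycle(beta_i, v0, V0, rng=None):
    """One pass over the common parameters (beta_bar, V) given the current
    individual draws beta_i of shape (N, K). The beta_i and lambda_j
    updates need the MNP likelihood and are not shown."""
    rng = rng or np.random.default_rng()
    N, K = beta_i.shape
    b_bar = beta_i.mean(axis=0)

    # V | beta_1..beta_N ~ Inverse Wishart (conjugate update)
    S = (beta_i - b_bar).T @ (beta_i - b_bar)
    V = invwishart.rvs(df=v0 + N, scale=V0 + S, random_state=rng)

    # beta_bar | beta_1..beta_N, V ~ N[b_bar, V / N]
    # (diffuse-prior limit; the proper N[0, aV] prior would shrink b_bar)
    beta_bar = rng.multivariate_normal(b_bar, V / N)
    return beta_bar, V
```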
Part 24: Bayesian Estimation – Results (22/34)

• Individual parameter vectors and disturbance variances
• Individual estimates of choice probabilities
• The same as the "random parameters model," with slightly different weights.
• Allenby and Rossi call the classical method an "approximate Bayesian" approach.
  - (Greene calls the Bayesian estimator an "approximate random parameters model.")
  - Who's right?
• The Bayesian approach layers on implausible uninformative priors and calls the maximum likelihood results "exact" Bayesian estimators.
• The classical approach is strongly parametric and a slave to its distributional assumptions.
• The Bayesian approach is even more strongly parametric than the classical one.
• Neither is right; both are right.
Part 24: Bayesian Estimation – Comparison of Maximum Simulated Likelihood and Hierarchical Bayes (23/34)

• Ken Train: "A Comparison of Hierarchical Bayes and Maximum Simulated Likelihood for Mixed Logit"
• Mixed logit (a simulation sketch of this setup follows):

$U(i,t,j) = \beta_i'\mathbf{x}(i,t,j) + \varepsilon(i,t,j)$

$i = 1, \ldots, N$ individuals; $t = 1, \ldots, T_i$ choice situations; $j = 1, \ldots, J$ alternatives (may also vary)
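A small NumPy sketch of this data-generating process; every dimension and parameter value below is made up for illustration. The i.i.d. extreme-value (Gumbel) disturbance is what makes the kernel a logit.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, J, K = 100, 5, 3, 2          # individuals, situations, alternatives, attributes

b_true = np.array([1.0, -0.5])     # population mean of beta_i (illustrative)
W_true = 0.5 * np.eye(K)           # population covariance (illustrative)

beta_i = rng.multivariate_normal(b_true, W_true, size=N)   # (N, K)
X = rng.normal(size=(N, T, J, K))                          # attributes x(i,t,j)

# U(i,t,j) = beta_i' x(i,t,j) + eps(i,t,j), eps i.i.d. extreme value
U = np.einsum('ntjk,nk->ntj', X, beta_i) + rng.gumbel(size=(N, T, J))
choice = U.argmax(axis=2)          # j*, the alternative chosen by i at t
```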
Part 24: Bayesian Estimation – Stochastic Structure – Conditional Likelihood (24/34)

Note the individual-specific parameter vector, $\beta_i$:

$\text{Prob}(i,t,j) = \dfrac{\exp(\beta_i'\mathbf{x}_{i,t,j})}{\sum_{j=1}^{J} \exp(\beta_i'\mathbf{x}_{i,t,j})}$

$\text{Likelihood}_i = \prod_{t=1}^{T_i} \dfrac{\exp(\beta_i'\mathbf{x}_{i,t,j^*})}{\sum_{j=1}^{J} \exp(\beta_i'\mathbf{x}_{i,t,j})}$

$j^*$ = indicator for the specific choice made by $i$ at time $t$. (This translates directly into the function sketched below.)
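A direct transcription of the conditional likelihood; the function name is hypothetical and the array layout follows the simulation sketch above (X_i of shape (T, J, K), choice_i of length T).

```python
import numpy as np

def individual_likelihood(beta, X_i, choice_i):
    """Product over t of the logit probability of the chosen alternative j*."""
    v = X_i @ beta                                   # (T, J) systematic utilities
    v = v - v.max(axis=1, keepdims=True)             # guard against overflow
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
    return p[np.arange(len(choice_i)), choice_i].prod()
```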
Part 24: Bayesian Estimation – Classical Approach (25/34)

$\beta_i \sim N[\mathbf{b}, \boldsymbol\Sigma]$; write $\beta_i = \mathbf{b} + \mathbf{w}_i = \mathbf{b} + \boldsymbol\Sigma^{1/2}\mathbf{v}_i$,
where $\boldsymbol\Sigma^{1/2} = \boldsymbol\Gamma$ (so $\boldsymbol\Gamma\boldsymbol\Gamma' = \boldsymbol\Sigma$); $\boldsymbol\Gamma = \text{diag}(\gamma)$ if uncorrelated.

$\log L = \sum_{i=1}^{N} \log \int_{\mathbf{w}_i} \prod_{t=1}^{T_i} \dfrac{\exp[(\mathbf{b} + \mathbf{w}_i)'\mathbf{x}_{i,t,j^*}]}{\sum_{j=1}^{J} \exp[(\mathbf{b} + \mathbf{w}_i)'\mathbf{x}_{i,t,j}]}\, f(\mathbf{w}_i)\, d\mathbf{w}_i$

Maximize over $\mathbf{b}, \boldsymbol\Gamma$ using maximum simulated likelihood (the random parameters model; a sketch follows).
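A sketch of the simulated log likelihood for the uncorrelated (diagonal) case, replacing the integral with an average over R random draws; plain Monte Carlo draws are used here for brevity, where Halton draws are the usual refinement. Names and the array layout carry over from the earlier sketches.

```python
import numpy as np

def simulated_loglike(b, gamma, X, choice, R=200, rng=None):
    """Simulated log likelihood: beta_ir = b + gamma * v_ir, v_ir ~ N(0, I).
    X has shape (N, T, J, K); choice has shape (N, T)."""
    rng = rng or np.random.default_rng(2)
    N, T, J, K = X.shape
    ll = 0.0
    for i in range(N):
        beta_r = b + gamma * rng.normal(size=(R, K))        # (R, K) draws
        v = np.einsum('tjk,rk->rtj', X[i], beta_r)          # (R, T, J)
        v = v - v.max(axis=2, keepdims=True)
        p = np.exp(v) / np.exp(v).sum(axis=2, keepdims=True)
        p_chosen = p[:, np.arange(T), choice[i]]            # (R, T)
        ll += np.log(p_chosen.prod(axis=1).mean())          # average over draws
    return ll
```

Maximizing this over (b, gamma), for example by handing its negative to scipy.optimize.minimize, gives the maximum simulated likelihood estimator.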
Part 24: Bayesian Estimation – Bayesian Approach – Gibbs Sampling and Metropolis-Hastings (26/34)

$\text{Posterior} = L(\text{data} \mid \beta_1, \ldots, \beta_N, \mathbf{b}, \gamma) \times \text{priors}$

$\text{Prior} = p(\beta_1, \ldots, \beta_N \mid \mathbf{b}, \gamma) \times p(\gamma) \times p(\mathbf{b})$
  $p(\beta_1, \ldots, \beta_N \mid \mathbf{b}, \gamma)$: normal
  $p(\gamma)$: inverse gamma (IG), assumed parameters
  $p(\mathbf{b})$: normal with large variance
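The draws of each $\beta_i$ are the piece that needs Metropolis-Hastings: their conditional posterior, the logit likelihood times the normal prior, is not a distribution that can be sampled directly. A minimal random-walk M-H sketch for the diagonal case; the fixed step scale is a simplification (implementations typically tune it toward a target acceptance rate), and the function name is hypothetical.

```python
import numpy as np

def mh_step_beta_i(beta_cur, b, gamma, X_i, choice_i, step=0.1, rng=None):
    """One random-walk Metropolis-Hastings update of beta_i."""
    rng = rng or np.random.default_rng()

    def log_target(beta):
        v = X_i @ beta                                   # (T, J) utilities
        v = v - v.max(axis=1, keepdims=True)
        log_p = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
        loglike = log_p[np.arange(len(choice_i)), choice_i].sum()
        logprior = -0.5 * np.sum(((beta - b) / gamma) ** 2)  # N[b, diag(gamma)^2]
        return loglike + logprior

    proposal = beta_cur + step * rng.normal(size=beta_cur.shape)
    if np.log(rng.uniform()) < log_target(proposal) - log_target(beta_cur):
        return proposal                                  # accept
    return beta_cur                                      # reject, keep current
```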
Part 24: Bayesian Estimation – Gibbs Sampling from Posteriors: b (27/34)

$p(\mathbf{b} \mid \beta_1, \ldots, \beta_N, \boldsymbol\Gamma) = N\big[(1/N)\textstyle\sum_{i=1}^{N}\beta_i,\; (1/N)\boldsymbol\Gamma\big]$

Easy to sample from: a normal with known mean and variance, obtained by transforming a set of draws from the standard normal (sketched below).
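This conditional is exactly as convenient as the slide says. A sketch (the function name is hypothetical) that does the standard-normal transformation with a Cholesky factor:

```python
import numpy as np

def draw_b(beta_i, Gamma, rng=None):
    """Draw b | beta_1..beta_N, Gamma ~ N[mean of beta_i, Gamma / N]
    by transforming standard-normal draws: b = beta_bar + chol(Gamma/N) z."""
    rng = rng or np.random.default_rng()
    N, K = beta_i.shape
    beta_bar = beta_i.mean(axis=0)
    L = np.linalg.cholesky(Gamma / N)   # Gamma must be (K, K) positive definite
    return beta_bar + L @ rng.normal(size=K)
```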