Econometrics I, Part 24: Bayesian Estimation

[Fragment of the preceding slide: the multinomial probit disturbances are scaled as $\varepsilon_{it,j} \sim N[0, \lambda_j]$.]

Bayesian Priors

Prior densities:
$\beta_i = \bar\beta + w_i$ with $w_i \sim N[0, V_\beta]$, which implies $\beta_i \sim N[\bar\beta, V_\beta]$.
$\lambda_j \sim \text{Inverse Gamma}[v, s_j]$ (looks like chi-squared), with $v = 3$ and $s_j = 1$.

Priors over the model parameters:
$\bar\beta \sim N[\beta^0, a V_\beta]$
$V_\beta^{-1} \sim \text{Wishart}[v_0, V^0]$, with $v_0 = 8$ and $V^0 = 8I$.

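As a concrete illustration, the sketch below draws once from these priors with NumPy/SciPy. The dimension K, the scale a, the zero mean $\beta^0$, and all variable names are assumptions for illustration, and SciPy's `invgamma` and `wishart` parameterizations stand in for the $[v, s]$ notation on the slide.

```python
import numpy as np
from scipy.stats import invgamma, wishart

rng = np.random.default_rng(0)
K = 4    # number of coefficients (assumed)
N = 100  # individuals
J = 3    # alternatives

# V_beta^{-1} ~ Wishart[v0, V0] with v0 = 8, V0 = 8I
v0 = 8
V0 = 8 * np.eye(K)
V_beta = np.linalg.inv(wishart.rvs(df=v0, scale=V0, random_state=rng))

# beta-bar ~ N[beta^0, a * V_beta]; beta^0 = 0 and a = 4 are assumptions
beta0, a = np.zeros(K), 4
beta_bar = rng.multivariate_normal(beta0, a * V_beta)

# beta_i = beta-bar + w_i, w_i ~ N[0, V_beta]
beta_i = beta_bar + rng.multivariate_normal(np.zeros(K), V_beta, size=N)

# lambda_j ~ Inverse Gamma[v, s_j] with v = 3, s_j = 1
lam = invgamma.rvs(a=3, scale=1, size=J, random_state=rng)
```
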
Bayesian Estimator

The Bayesian estimator is the joint posterior mean,
$E[\beta_1, \dots, \beta_N, \bar\beta, V_\beta, \lambda_1, \dots, \lambda_J \mid \text{data}]$.

- The integral does not exist in closed form.
- Estimate it by drawing random samples from the joint posterior.
- The full joint posterior is not known, so it is not possible to sample from it directly.

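Once a sampler produces draws from the joint posterior, the point estimate is simply the average of the retained draws. A minimal sketch, assuming `draws` is an array of sampler output and the burn-in length is chosen by the user:

```python
import numpy as np

def posterior_mean(draws, burn_in=1000):
    """Monte Carlo estimate of E[parameter | data]: average the draws
    retained after discarding an initial burn-in period."""
    return np.asarray(draws)[burn_in:].mean(axis=0)
```
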
Gibbs Cycles for the MNP Model

Sample from the marginal (conditional) posteriors.

Marginal posterior for the individual parameters (known and can be sampled):
$\beta_i \mid \bar\beta, V_\beta, \lambda, \text{data}$

Marginal posteriors for the common parameters (each known and each can be sampled):
$\bar\beta \mid \beta_1, \dots, \beta_N, V_\beta, \lambda, \text{data}$
$V_\beta \mid \beta_1, \dots, \beta_N, \bar\beta, \lambda, \text{data}$
$\lambda \mid \beta_1, \dots, \beta_N, \bar\beta, V_\beta, \text{data}$

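The cycle itself is mechanical once each conditional sampler is in hand. A skeleton sketch follows; `initialize`, `draw_beta_i`, `draw_beta_bar`, `draw_V_beta`, and `draw_lambda` are hypothetical helpers standing in for the four known conditional samplers, and only the cycling structure is the point:

```python
def gibbs_mnp(data, n_draws=10_000):
    # Hypothetical initializer and conditional samplers; each draw_* call
    # samples one parameter block from its known conditional posterior.
    beta_i, beta_bar, V_beta, lam = initialize(data)
    draws = []
    for _ in range(n_draws):
        beta_i   = draw_beta_i(beta_bar, V_beta, lam, data)     # beta_i   | rest
        beta_bar = draw_beta_bar(beta_i, V_beta, lam, data)     # beta-bar | rest
        V_beta   = draw_V_beta(beta_i, beta_bar, lam, data)     # V_beta   | rest
        lam      = draw_lambda(beta_i, beta_bar, V_beta, data)  # lambda   | rest
        draws.append((beta_i, beta_bar, V_beta, lam))
    return draws
```
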
Results

- Individual parameter vectors and disturbance variances.
- Individual estimates of choice probabilities.
- The same as the "random parameters model," with slightly different weights.
- Allenby and Rossi call the classical method an "approximate Bayesian" approach.
  - (Greene calls the Bayesian estimator an "approximate random parameters model.")
  - Who's right?
- The Bayesian approach layers on implausible uninformative priors and calls the maximum likelihood results "exact" Bayesian estimators.
- The classical approach is strongly parametric and a slave to its distributional assumptions.
- The Bayesian approach is even more strongly parametric than the classical one.
- Neither is right; both are right.

Comparison of Maximum Simulated Likelihood and Hierarchical Bayes

- Ken Train: "A Comparison of Hierarchical Bayes and Maximum Simulated Likelihood for Mixed Logit"
- Mixed logit:

$U(i,t,j) = \beta_i' x(i,t,j) + \varepsilon(i,t,j)$

$i = 1, \dots, N$ individuals; $t = 1, \dots, T$ choice situations; $j = 1, \dots, J$ alternatives (may also vary).

Stochastic Structure: Conditional Likelihood

$\text{Prob}(i,t,j) = \dfrac{\exp(\beta_i' x_{i,t,j})}{\sum_{j=1}^{J} \exp(\beta_i' x_{i,t,j})}$

$\text{Likelihood} = \prod_{t=1}^{T} \dfrac{\exp(\beta_i' x_{i,t,j^*})}{\sum_{j=1}^{J} \exp(\beta_i' x_{i,t,j})}$

where $j^*$ indicates the specific choice made by individual $i$ at time $t$. Note the individual-specific parameter vector $\beta_i$.

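A direct NumPy translation of this conditional likelihood; the array layout and names are assumptions (`x_i` holds the T x J x K attribute array for one individual, `choices_i` the index $j^*$ chosen in each period):

```python
import numpy as np

def conditional_likelihood(beta_i, x_i, choices_i):
    """Product over t of exp(beta_i'x_{i,t,j*}) / sum_j exp(beta_i'x_{i,t,j})."""
    v = x_i @ beta_i                         # utilities, shape (T, J)
    v = v - v.max(axis=1, keepdims=True)     # guard against overflow in exp
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
    return np.prod(p[np.arange(len(p)), choices_i])
```
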
Classical Approach

$\beta_i \sim N[b, \Gamma\Gamma']$; write $\beta_i = b + \Gamma w_i$ with $w_i \sim N[0, I]$, where
$\Gamma = \text{diag}(\gamma_1, \dots, \gamma_K)$ if the parameters are uncorrelated.

$\log L = \sum_{i=1}^{N} \log \int_{w_i} \prod_{t=1}^{T} \frac{\exp[(b + \Gamma w_i)' x_{i,t,j^*}]}{\sum_{j=1}^{J} \exp[(b + \Gamma w_i)' x_{i,t,j}]} \, f(w_i)\, dw_i$

Maximize over $(b, \Gamma)$ using maximum simulated likelihood (the random parameters model).

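A minimal sketch of the simulated log-likelihood for the uncorrelated case, $\Gamma = \text{diag}(\gamma)$, reusing `conditional_likelihood` from above. The number of draws R, the data layout, and all names are assumptions:

```python
import numpy as np

def simulated_log_likelihood(b, gamma, X, choices, R=500, seed=0):
    """Approximate sum_i log E_w[L_i(b + Gamma w)] with R draws per person."""
    rng = np.random.default_rng(seed)
    ll = 0.0
    for x_i, ch_i in zip(X, choices):
        w = rng.standard_normal((R, len(b)))     # w_ir ~ N[0, I]
        beta_draws = b + w * gamma               # beta_ir = b + diag(gamma) w_ir
        sims = [conditional_likelihood(bd, x_i, ch_i) for bd in beta_draws]
        ll += np.log(np.mean(sims))              # simulated choice probability
    return ll

# Maximize over (b, gamma), e.g. with scipy.optimize.minimize on the negative.
```
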
Bayesian Approach: Gibbs Sampling and Metropolis-Hastings

$\text{Posterior} = \prod_{i=1}^{N} L(\text{data}_i \mid \beta_i) \times \text{priors}$

$\text{Priors} = N(\beta_1, \dots, \beta_N \mid b, \gamma) \times IG(\gamma \mid \text{parameters}) \times N(b \mid \text{assumed parameters})$

where $N(\cdot)$ is normal (the prior for $b$ has a large variance, i.e., is diffuse) and $IG(\cdot)$ is inverse gamma with assumed parameters.

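The conditional posterior for each $\beta_i$ (a logit likelihood times its normal prior) has no known form, which is where the Metropolis-Hastings step enters the Gibbs cycle. A sketch of one such update, with an assumed random-walk proposal, an assumed diagonal prior, and `conditional_likelihood` from above; the step size and names are illustrative:

```python
import numpy as np

def mh_step_beta_i(beta_i, b, gamma, x_i, choices_i, rng, step=0.1):
    """One random-walk Metropolis-Hastings update for beta_i."""
    def log_post(beta):
        log_prior = -0.5 * np.sum(((beta - b) / gamma) ** 2)  # N(b, diag(gamma^2))
        return np.log(conditional_likelihood(beta, x_i, choices_i)) + log_prior

    proposal = beta_i + step * rng.standard_normal(beta_i.shape)
    if np.log(rng.uniform()) < log_post(proposal) - log_post(beta_i):
        return proposal   # accept the candidate
    return beta_i         # reject: retain the current draw
```
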
Gibbs Sampling from Posteriors: b

$p(b \mid \beta_1, \dots, \beta_N, \gamma) = N\left[\bar\beta, (1/N)\gamma\right]$, where $\bar\beta = (1/N)\sum_{i=1}^{N} \beta_i$.

It is easy to sample from a normal distribution with known mean and variance by transforming a set of draws from the standard normal.

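A sketch of this draw: average the current $\beta_i$ draws for the mean, then transform standard normal draws by the Cholesky factor of the known variance. Treating the variance as a full K x K matrix `Gamma` is an assumption, as are the names:

```python
import numpy as np

def draw_b(beta_i_draws, Gamma, rng):
    """Draw b | beta_1..beta_N, Gamma ~ N[beta-bar, (1/N) Gamma]."""
    N, K = beta_i_draws.shape
    beta_bar = beta_i_draws.mean(axis=0)            # (1/N) sum_i beta_i
    L = np.linalg.cholesky(Gamma / N)               # (1/N) Gamma = L L'
    return beta_bar + L @ rng.standard_normal(K)    # mean + L z, z ~ N[0, I]
```
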