Econometrics-I-24

# Neither is right – Both are right

Part 24: Bayesian Estimation

## Comparison of Maximum Simulated Likelihood and Hierarchical Bayes (23/34)

Ken Train: "A Comparison of Hierarchical Bayes and Maximum Simulated Likelihood for Mixed Logit."

Mixed logit model:

$$
U(i,t,j) = \boldsymbol{\beta}_i'\mathbf{x}_{i,t,j} + \varepsilon_{i,t,j}
$$

where $i = 1,\dots,N$ individuals, $t = 1,\dots,T$ choice situations, and $j = 1,\dots,J$ alternatives ($J$ may also vary).

## Stochastic Structure – Conditional Likelihood (24/34)

Note the individual-specific parameter vector $\boldsymbol{\beta}_i$.

$$
\mathrm{Prob}(i,t,j) = \frac{\exp(\boldsymbol{\beta}_i'\mathbf{x}_{i,t,j})}{\sum_{j=1}^{J}\exp(\boldsymbol{\beta}_i'\mathbf{x}_{i,t,j})}
$$

$$
\text{Likelihood} = \prod_{t=1}^{T}\frac{\exp(\boldsymbol{\beta}_i'\mathbf{x}_{i,t,j^*})}{\sum_{j=1}^{J}\exp(\boldsymbol{\beta}_i'\mathbf{x}_{i,t,j})}
$$

$j^*$ = indicator for the specific choice made by $i$ at time $t$.

## Classical Approach (25/34)

$$
\boldsymbol{\beta}_i \sim N[\mathbf{b}, \boldsymbol{\Omega}]; \quad \text{write } \boldsymbol{\beta}_i = \mathbf{b} + \mathbf{w}_i = \mathbf{b} + \boldsymbol{\Gamma}\mathbf{v}_i
$$

where $\boldsymbol{\Omega} = \boldsymbol{\Gamma}\boldsymbol{\Gamma}'$ and, for uncorrelated parameters, $\boldsymbol{\Gamma} = \mathrm{diag}(\gamma_1,\dots,\gamma_K) = \boldsymbol{\Omega}^{1/2}$.

$$
\log L = \sum_{i=1}^{N} \log \int_{\mathbf{w}_i} \prod_{t=1}^{T} \frac{\exp[(\mathbf{b}+\mathbf{w}_i)'\mathbf{x}_{i,t,j^*}]}{\sum_{j=1}^{J}\exp[(\mathbf{b}+\mathbf{w}_i)'\mathbf{x}_{i,t,j}]}\, f(\mathbf{w}_i)\, d\mathbf{w}_i
$$

Maximize over $(\mathbf{b}, \boldsymbol{\Gamma})$ using maximum simulated likelihood (the random parameters model).

## Bayesian Approach – Gibbs Sampling and Metropolis-Hastings (26/34)

$$
\text{Posterior} = L(\text{data} \mid \boldsymbol{\beta}_1,\dots,\boldsymbol{\beta}_N, \mathbf{b}, \boldsymbol{\Omega}) \times \text{priors}
$$

Priors:
- $p(\boldsymbol{\beta}_1,\dots,\boldsymbol{\beta}_N \mid \mathbf{b}, \boldsymbol{\Omega}) = \prod_{i=1}^{N} N[\boldsymbol{\beta}_i \mid \mathbf{b}, \boldsymbol{\Omega}]$ (normal)
- $p(\gamma_k)$ = inverse gamma for the variance parameters
- $p(\mathbf{b})$ = normal with large variance for the assumed parameters

## Gibbs Sampling from Posteriors: b (27/34)

$$
p(\mathbf{b} \mid \boldsymbol{\beta}_1,\dots,\boldsymbol{\beta}_N, \boldsymbol{\Omega}) = N\!\left[\bar{\boldsymbol{\beta}},\, (1/N)\boldsymbol{\Omega}\right], \quad \bar{\boldsymbol{\beta}} = (1/N)\sum_{i=1}^{N}\boldsymbol{\beta}_i
$$

Easy to sample from a normal with known mean and variance by transforming a set of draws from the standard normal.

## Gibbs Sampling from Posteriors: Ω (28/34)

$$
p(\gamma_k \mid \mathbf{b}, \boldsymbol{\beta}_1,\dots,\boldsymbol{\beta}_N) \sim \text{Inverse Gamma}\!\left[1+N,\; 1+N\bar{V}_k\right]
$$

$$
\bar{V}_k = (1/N)\sum_{i=1}^{N}(\beta_{i,k} - b_k)^2 \quad \text{for each } k=1,\dots,K
$$

Draw from the inverse gamma for each $k$: take $1+N$ draws $h_{r,k}$ from $N[0,1]$; then the draw is

$$
\frac{1+N\bar{V}_k}{\sum_{r=1}^{1+N} h_{r,k}^2}.
$$

## Gibbs Sampling from Posteriors: β_i (29/34)

$$
p(\boldsymbol{\beta}_i \mid \mathbf{b}, \boldsymbol{\Omega}) = M \times L(\text{data} \mid \boldsymbol{\beta}_i) \times g(\boldsymbol{\beta}_i \mid \mathbf{b}, \boldsymbol{\Omega})
$$

$M$ = a constant, $L$ = likelihood, $g$ = prior. (This is the definition of the posterior.) Not clear how to sample. Use the Metropolis-Hastings algorithm.

## Metropolis – Hastings Method (30/34)

Define:
- $\boldsymbol{\beta}_{i,0}$ = an "old" draw (vector)
- $\boldsymbol{\beta}_{i,1}$ = the "new" draw (vector)
- $\mathbf{d}_r = \sigma\boldsymbol{\Gamma}\mathbf{v}_r$, where $\sigma$ = a constant (see below), $\boldsymbol{\Gamma}$ = the diagonal matrix of standard deviations, and $\mathbf{v}_r$ = a vector of $K$ draws from the standard normal.
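The conditional (on $\boldsymbol{\beta}_i$) likelihood is straightforward to compute directly. A minimal NumPy sketch, assuming attributes are stored in a `(T, J, K)` array; the function name and shapes are illustrative, not from the slides:

```python
import numpy as np

def conditional_likelihood(beta_i, X, choices):
    """L(data_i | beta_i): product over t of the logit probability
    of the alternative j* actually chosen.

    beta_i  : (K,) individual-specific parameter vector
    X       : (T, J, K) attributes x_{i,t,j}
    choices : (T,) indices j* of the chosen alternatives
    """
    util = X @ beta_i                              # (T, J) systematic utilities
    util = util - util.max(axis=1, keepdims=True)  # guard exp() against overflow
    p = np.exp(util)
    p = p / p.sum(axis=1, keepdims=True)           # logit choice probabilities
    return float(np.prod(p[np.arange(X.shape[0]), choices]))
```

Subtracting the row maximum before exponentiating leaves the probabilities unchanged but avoids numerical overflow for large utilities.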
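The simulated log likelihood replaces the integral over $\mathbf{w}_i$ with an average over draws. A sketch of one individual's contribution under the uncorrelated-parameters assumption; the draw count `R` and all names are illustrative:

```python
import numpy as np

def simulated_loglik_i(b, gamma, X, choices, R=500, seed=0):
    """Simulated log-likelihood contribution of one individual.

    The integral over w_i is approximated by the average of the
    conditional likelihood over R draws beta_r = b + diag(gamma) v_r,
    v_r ~ N(0, I)  (uncorrelated random parameters).
    """
    rng = np.random.default_rng(seed)
    T, J, K = X.shape
    betas = b + gamma * rng.standard_normal((R, K))     # (R, K) parameter draws
    util = np.einsum('tjk,rk->rtj', X, betas)           # (R, T, J) utilities
    util -= util.max(axis=2, keepdims=True)             # numerical stabilization
    p = np.exp(util)
    p /= p.sum(axis=2, keepdims=True)                   # logit probabilities
    lik = np.prod(p[:, np.arange(T), choices], axis=1)  # (R,) conditional likelihoods
    return float(np.log(lik.mean()))
```

In practice the simulated log likelihood is summed over individuals and maximized over $(\mathbf{b}, \boldsymbol{\gamma})$; quasi-random (e.g. Halton) draws are often used in place of the pseudo-random draws shown here.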
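The Gibbs draw of $\mathbf{b}$ is exactly the "transform standard-normal draws" step: a Cholesky factor of $(1/N)\boldsymbol{\Omega}$ maps $N(0, I)$ draws to the conditional posterior. A sketch with illustrative names:

```python
import numpy as np

def draw_b(betas, Omega, rng):
    """One Gibbs draw: b | beta_1..beta_N, Omega ~ N[beta_bar, (1/N) Omega].

    betas : (N, K) current draws of the individual parameter vectors
    Omega : (K, K) current draw of the covariance matrix
    """
    N, K = betas.shape
    beta_bar = betas.mean(axis=0)
    C = np.linalg.cholesky(Omega / N)              # (1/N) Omega = C C'
    return beta_bar + C @ rng.standard_normal(K)   # transform N(0, I) draws
```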
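The inverse-gamma draws for the diagonal variance terms can be taken with the chi-squared recipe described on the slide; a sketch implementing that recipe directly (the helper name is hypothetical):

```python
import numpy as np

def draw_gamma_k(betas, b, rng):
    """One Gibbs draw of each diagonal variance term, following the slide:
    V_k = (1/N) sum_i (beta_ik - b_k)^2, draw 1+N values h_r ~ N(0,1),
    and return (1 + N V_k) / sum_r h_r^2 for each k = 1,...,K."""
    N, K = betas.shape
    V = ((betas - b) ** 2).mean(axis=0)          # (K,) sample variances about b
    h = rng.standard_normal((1 + N, K))          # 1+N standard-normal draws per k
    return (1.0 + N * V) / (h ** 2).sum(axis=0)  # (K,) inverse-gamma draws
```

Dividing the scale $1 + N V_k$ by a chi-squared variate (the sum of $1+N$ squared standard normals) is the standard way to generate an inverse-gamma draw without a dedicated sampler.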
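A single Metropolis-Hastings update of one $\boldsymbol{\beta}_i$ can be sketched as follows, with the candidate built as $\mathbf{d} = \sigma\boldsymbol{\Gamma}\mathbf{v}$ and the standard accept/reject rule; `log_post` and the other names are illustrative:

```python
import numpy as np

def mh_step(beta_old, Gamma, sigma, log_post, rng):
    """One Metropolis-Hastings update for an individual beta_i.

    Candidate: beta_new = beta_old + sigma * Gamma @ v,  v ~ N(0, I).
    Accept with probability min(1, posterior(new)/posterior(old));
    log_post(beta) should return log[L(data|beta) * g(beta|b,Omega)],
    so the unknown constant M cancels in the ratio.
    """
    v = rng.standard_normal(beta_old.size)
    cand = beta_old + sigma * Gamma @ v
    if np.log(rng.uniform()) < log_post(cand) - log_post(beta_old):
        return cand            # accept the 'new' draw
    return beta_old            # keep the 'old' draw
```

Because this random-walk candidate is symmetric, the proposal densities cancel and only the posterior ratio appears in the accept step; $\sigma$ is tuned to keep the acceptance rate reasonable.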