Econometric Analysis of Panel Data
William Greene
Department of Economics, Stern School of Business

22. Individual Heterogeneity and Random Parameter Variation

Heterogeneity
- Observational: observable differences across individuals (e.g., choice makers)
- Choice strategy: how consumers make decisions (the underlying behavior)
- Structural: differences in model frameworks
- Preferences: differences in model 'parameters'

Parameter Heterogeneity
(1) Regression model: y_it = x_it′β_i + ε_it
(2) Conditional probability or other nonlinear model: f(y_it | x_it, β_i)
(3) Heterogeneity: how are the parameters distributed across individuals?
    (a) Discrete: the population contains a mixture of Q types of individuals.
    (b) Continuous: the parameters are part of the stochastic structure of the population.

Distinguish Bayes and Classical
Both depart from the heterogeneous 'model,' f(y_it | x_it) = g(y_it, x_it, β_i).
What do we mean by 'randomness'?
- With respect to the information of the analyst (Bayesian)
- With respect to some stochastic process governing 'nature' (Classical)
Bayesian: no difference between 'fixed' and 'random' parameters.
Classical: full specification of the joint distributions for the observed random variables; piecemeal definitions of 'random' parameters, usually a form of 'random effects.'

Hierarchical Bayesian Estimation
Sample data generation: f(y_it | x_it, β_i) = g(y_it, x_it, β_i), e.g., y_it = x_it′β_i + ε_it
Individual heterogeneity: β_i = β + u_i, u_i ~ N[0, Γ]
What information exists about 'the model'?
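The two-level structure just described, a regression whose coefficient vector varies across individuals, is easy to illustrate by simulation. The sketch below is not from the course; all sizes, parameter values, and variable names are illustrative assumptions, and it assumes regressors independent of the individual effects, so that pooled OLS recovers the population mean β.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: N individuals, T periods, K regressors.
N, T, K = 500, 10, 2

beta_mean = np.array([1.0, -0.5])   # population mean parameter vector (beta)
Gamma = np.diag([0.25, 0.09])       # covariance of individual deviations u_i

# Continuous heterogeneity: beta_i = beta + u_i, u_i ~ N[0, Gamma]
beta_i = beta_mean + rng.multivariate_normal(np.zeros(K), Gamma, size=N)

# Data generation: y_it = x_it' beta_i + eps_it
X = rng.normal(size=(N, T, K))
y = np.einsum('ntk,nk->nt', X, beta_i) + rng.normal(scale=0.5, size=(N, T))

# With regressors independent of u_i, the composite disturbance
# x_it'u_i + eps_it is uncorrelated with x_it, so pooled OLS
# consistently estimates the population mean beta.
b_pooled = np.linalg.lstsq(X.reshape(N * T, K), y.reshape(N * T), rcond=None)[0]
print(b_pooled)   # close to [1.0, -0.5]
```

The discrete alternative in (3a) would replace the normal draw for β_i with a draw from Q support points, i.e., a latent class model.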
Prior densities for the structural parameters:
  p(β) = N[β̄, Σ], e.g., β̄ = 0 and Σ large (diffuse)
  p(Γ) = Inverse Wishart[v, A], v (large)
  p(γ) = whatever works for the other parameters in the model
Priors for the parameters of interest:
  p(β_i) = N[β, Γ]
End result: a joint prior distribution for all parameters,
  p(β_i, β, Γ, γ | prior 'beliefs' β̄, Σ, v, A, and the assumed densities)

Allenby and Rossi: Structure
Conditional data generation mechanism:
  y*_it,j = x_it,j′β_i + ε_it,j = utility for consumer i, choice situation t, brand j
  (consumer choice among brands of ketchup: the 'scanner data')
  Y_it,j = 1[y*_it,j is the maximum utility among the J choices]
  x_it,j = (constant, log price, "availability," "featured")
  ε_it,j ~ N[0, λ_j], λ_1 = 1
This implies a J-outcome multinomial probit model.

Priors
  β_i ~ N[β̄, V_β], which implies β_i = β̄ + w_i, w_i ~ N[0, V_β]
  λ_j ~ Inverse Gamma[v, s_j] (looks like chi-squared), v = 3, s_j = 1
Priors over the structural model parameters:
  β̄ ~ N[β̄0, aV_β], β̄0 = 0
  V_β ~ Wishart[v0, V0], v0 = 8, V0 = 8I

Bayesian Posterior Analysis...
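Estimation in the hierarchical model is typically carried out by Markov chain Monte Carlo, cycling through the conditional posteriors of β_i, β, and Γ. The sketch below is a minimal Gibbs sampler for the linear case; it assumes, for brevity, a known error variance, a flat prior on β, and an inverse Wishart prior on Γ. The sample sizes, prior values, and names are illustrative assumptions, not taken from the slides.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(1)

# --- Simulate panel data from the hierarchical model (illustrative sizes) ---
N, T, K = 150, 8, 2
sigma = 0.5                                  # error std dev, treated as known
beta_bar_true = np.array([1.0, -0.5])
Gamma_true = np.diag([0.25, 0.09])
betas = beta_bar_true + rng.multivariate_normal(np.zeros(K), Gamma_true, N)
X = rng.normal(size=(N, T, K))
y = np.einsum('ntk,nk->nt', X, betas) + rng.normal(scale=sigma, size=(N, T))

# --- Priors: flat on beta_bar; Gamma ~ Inverse Wishart[v0, V0] (assumed) ---
v0, V0 = K + 2, np.eye(K)

# --- Gibbs sampler ---
beta_i = np.zeros((N, K))
beta_bar = np.zeros(K)
Gamma = np.eye(K)
draws = []
for it in range(800):
    G_inv = np.linalg.inv(Gamma)
    # 1. beta_i | rest: normal, combining the likelihood with N[beta_bar, Gamma]
    for i in range(N):
        Di = np.linalg.inv(X[i].T @ X[i] / sigma**2 + G_inv)
        di = X[i].T @ y[i] / sigma**2 + G_inv @ beta_bar
        beta_i[i] = rng.multivariate_normal(Di @ di, Di)
    # 2. beta_bar | rest: normal around the mean of the beta_i (flat prior)
    beta_bar = rng.multivariate_normal(beta_i.mean(0), Gamma / N)
    # 3. Gamma | rest: conjugate inverse Wishart update
    S = (beta_i - beta_bar).T @ (beta_i - beta_bar)
    Gamma = invwishart.rvs(df=v0 + N, scale=V0 + S, random_state=rng)
    if it >= 400:                            # discard burn-in draws
        draws.append(beta_bar.copy())

post_mean = np.mean(draws, axis=0)
print(post_mean)    # posterior mean of beta_bar, near [1.0, -0.5]
```

In practice the error variance would also receive a prior and be sampled; Rossi and Allenby's bayesm package for R automates this class of models, including the multinomial probit case above.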
This note was uploaded on 01/05/2012 for the course B 55.9912, taught by Professor William Greene during the Fall '11 term at NYU.