
# ISYE6414 Summer 2010 Lecture 11: Bayesian Approaches, Random Effects, Some General Linear Models


Dr. Kobi Abayomi

July 29, 2010

## 1 Introduction

We can think of Random Effects methods (or models) as a generalization of what we've done in ordinary linear models. In ordinary linear regression, we model the response as a linear function of some covariates, via estimation of the linear combination of the covariates, $\hat{\beta}$. In the ordinary setup, the effects (the coefficients) are *fixed*:

$$Y_i = \beta_0 + \beta_1 X_1 + \epsilon_i$$

Both the intercept and slope parameters are fixed. In a Random Effects model, the parameters have a probability distribution. We can write the model as

$$Y_i = \beta_{0,j} + \beta_{1,j} X_1 + \epsilon_i$$

where $\beta_{0,j}$ and $\beta_{1,j}$ vary (across groups $j$, say) with some probability distribution. In the Bayesian terminology, this is equivalent to placing a prior distribution on the parameters. In the classical/frequentist setup
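The contrast between the two models above can be made concrete with a short simulation. This is an illustrative sketch, not from the lecture: in the fixed-effects version every observation shares one $(\beta_0, \beta_1)$, while in the random-effects version each group $j$ draws its own coefficients from a distribution (the specific means and scales below are assumptions for illustration).

```python
import numpy as np

rng = np.random.default_rng(42)

# Fixed effects: one (beta0, beta1) shared by all observations
beta0, beta1 = 1.0, 0.5
x = rng.normal(size=20)
y_fixed = beta0 + beta1 * x + rng.normal(scale=0.3, size=20)

# Random effects: each of 4 groups gets its own intercept and slope,
# drawn around the fixed values -- i.e. a distribution on the parameters
n_groups = 4
beta0_j = rng.normal(loc=beta0, scale=0.5, size=n_groups)
beta1_j = rng.normal(loc=beta1, scale=0.2, size=n_groups)
groups = rng.integers(0, n_groups, size=20)   # group label for each observation
y_random = beta0_j[groups] + beta1_j[groups] * x + rng.normal(scale=0.3, size=20)
```

Observations in the same group share a coefficient draw, which is what induces the within-group correlation discussed later in these notes.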


$$Y = X^T\beta + \epsilon$$

with $\epsilon_i \sim N(0, \sigma^2)$. Thus $Y_i \sim N(X^T\beta, \sigma^2)$. A likelihood for $y_1, \ldots, y_n$ is

$$\mathrm{lik}(y_1,\ldots,y_n \mid \beta, \sigma^2) = \prod_{i=1}^{n} \phi(y_i \mid \beta, \sigma^2)$$

The Bayesian program includes prior distributions for the parameters $\beta$. Remember Bayes' rule:

$$\pi(\beta, \sigma^2 \mid y_1,\ldots,y_n) = \frac{\mathrm{lik}(y_1,\ldots,y_n \mid \beta, \sigma^2)\,\pi(\beta, \sigma^2)}{g(y_1,\ldots,y_n)} \tag{1}$$

where

$$g(y_1,\ldots,y_n) = \int_{\beta,\sigma^2} \mathrm{lik}(y_1,\ldots,y_n \mid \beta, \sigma^2)\,\pi(\beta, \sigma^2)\,d\beta\,d\sigma^2$$

The frequentist setup can be seen as a subset of the Bayesian, where the prior $\pi(\beta, \sigma^2)$ is "non-informative" or uniform over the support of the parameters. A Bayesian setup augments $Y = X^T\beta + \epsilon$ with, for example,

$$\beta \sim N(\mu_\beta, \Sigma_\beta)$$

We call $\mu_\beta, \Sigma_\beta$ *hyperparameters*, in that they are parameters for the parameters of interest ($\beta$).

## 2 Random and Mixed Effects

Let's look at a real setup using the ANOVA approach. Let

$$Y_{ij} = \beta_0 + \beta_j + \epsilon_i$$

with $j = 1, \ldots, K$ the levels of the factor and $i = 1, \ldots, n$ for each $j$. The ordinary setup has $\epsilon_i \sim N(0, \sigma^2_Y)$;
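For the normal prior $\beta \sim N(\mu_\beta, \Sigma_\beta)$ with $\sigma^2$ treated as known, the posterior in equation (1) has a closed form by normal-normal conjugacy. The following is a minimal sketch of that computation (the data, prior values, and variable names are illustrative assumptions, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: Y = X beta + eps, eps ~ N(0, sigma2 * I), sigma2 known
n, p = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one covariate
beta_true = np.array([1.0, 2.0])
sigma2 = 1.0
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

# Hyperparameters: prior mean and covariance for beta
mu_beta = np.zeros(p)
Sigma_beta = 10.0 * np.eye(p)   # weak prior

# Standard conjugate update:
#   Sigma_post = (Sigma_beta^{-1} + X'X / sigma2)^{-1}
#   mu_post    = Sigma_post (Sigma_beta^{-1} mu_beta + X'y / sigma2)
Sigma_post = np.linalg.inv(np.linalg.inv(Sigma_beta) + X.T @ X / sigma2)
mu_post = Sigma_post @ (np.linalg.inv(Sigma_beta) @ mu_beta + X.T @ y / sigma2)
```

With a weak (near-uniform) prior, `mu_post` is close to the ordinary least squares estimate, which is the "frequentist as a subset of Bayesian" point made above.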
the Random Effects (or Bayesian) augmentation adds

$$\beta_j \sim N(0, \sigma^2_\beta)$$

So $Y_{ij}$ and $Y_{i'j}$ have correlation

$$\frac{\sigma^2_\beta}{\sigma^2_\beta + \sigma^2_Y}$$

The Sum of Squares for treatment under this model is

$$SSTr = \sum_{j=1}^{k} \sum_{i=1}^{n} (\bar{y}_{j\cdot} - \bar{y}_{\cdot\cdot})^2$$

which has expectation $E(SSTr) = (k-1)(n\sigma^2_\beta + \sigma^2_Y)$. A rough estimator we get via algebra:

$$\hat{\sigma}^2_\beta = \frac{\frac{SSTr}{k-1} - \hat{\sigma}^2_Y}{n} = \frac{MSTr - MSE}{n}$$

with

$$\hat{\sigma}^2_Y = \frac{SSE}{k(n-1)}$$

These 'hand calculations' are useful but have some drawbacks:

- If $MSTr < MSE$ then $\hat{\sigma}^2_\beta < 0$.
- Unbalanced models, where the observations are unequal across treatments, cannot yield unique estimators.

A mixed effects model

$$Y = X^T\beta + Z^T\gamma + \epsilon$$

introduces $\gamma$ as the coefficients for the random effects $Z$. The response variable $Y$ has the distribution

$$Y \mid \gamma \sim N(X^T\beta + Z^T\gamma, \sigma^2 I)$$

a conditional distribution given the random effects

$$\gamma \sim N(0, \sigma^2 D) \quad \text{(say)}$$
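The 'hand calculation' estimators above can be carried out directly for a balanced layout. This sketch simulates a small balanced one-way random effects dataset (the true variance components and seed are illustrative assumptions) and computes $MSTr$, $MSE$, and $\hat{\sigma}^2_\beta$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Balanced layout: k groups, n observations per group
k, n = 5, 10
sigma2_beta_true, sigma2_y_true = 4.0, 1.0
group_effects = rng.normal(scale=np.sqrt(sigma2_beta_true), size=k)  # beta_j
y = group_effects[:, None] + rng.normal(scale=np.sqrt(sigma2_y_true), size=(k, n))

grand_mean = y.mean()
group_means = y.mean(axis=1)

# SSTr = sum over i,j of (ybar_j. - ybar..)^2 = n * sum_j (ybar_j. - ybar..)^2
SSTr = n * np.sum((group_means - grand_mean) ** 2)
SSE = np.sum((y - group_means[:, None]) ** 2)

MSTr = SSTr / (k - 1)
MSE = SSE / (k * (n - 1))            # this is sigma2_Y_hat
sigma2_beta_hat = (MSTr - MSE) / n   # can be negative when MSTr < MSE
```

Re-running with `sigma2_beta_true` near zero makes `sigma2_beta_hat < 0` easy to observe, which is the first drawback listed above.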


with $D$ some diagonal matrix. The variance of $Y$ in this model:

$$Var(Y) = Var(Z^T\gamma) + Var(\epsilon) = \sigma^2 Z^T D Z + \sigma^2 I$$
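This marginal covariance can be built explicitly for a random-intercept model. A minimal sketch, using the common orientation $Y = X\beta + Z\gamma + \epsilon$ with $Z$ an $N \times K$ group-membership indicator matrix (the group sizes and variance values are hypothetical), which recovers the intraclass correlation $\sigma^2_\beta / (\sigma^2_\beta + \sigma^2_Y)$ from Section 2:

```python
import numpy as np

# 3 groups, 2 observations per group
k, n = 3, 2
N = k * n
Z = np.kron(np.eye(k), np.ones((n, 1)))   # indicator of group membership

sigma2_y, sigma2_beta = 1.0, 4.0
D = (sigma2_beta / sigma2_y) * np.eye(k)  # so sigma2_y * D = sigma2_beta * I

# Var(Y) = sigma2_y * Z D Z' + sigma2_y * I
V = sigma2_y * (Z @ D @ Z.T) + sigma2_y * np.eye(N)

# Two observations in the same group share covariance sigma2_beta,
# so their correlation is sigma2_beta / (sigma2_beta + sigma2_y)
rho = V[0, 1] / V[0, 0]
```

Observations in different groups have zero covariance (`V[0, 2] == 0` here), while same-group pairs are correlated exactly as the earlier formula predicts.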

## This note was uploaded on 09/01/2011 for the course ISYE 6414 taught by Professor Staff during the Fall '08 term at Georgia Tech.



