# poisson_704_beamer: Lecture 23, Poisson Regression (Stat 704)

Lecture 23: Poisson Regression. Stat 704: Data Analysis I, Fall 2010. Tim Hanson, Ph.D., University of South Carolina. (27 slides.)

## Chapter 14, 14.13: Poisson regression

* Regular regression data $\{(x_i, Y_i)\}_{i=1}^n$, but now $Y_i$ is a count (a non-negative integer): new cancer cases in a year, number of monkeys killed, etc.
* For Poisson data, $\operatorname{var}(Y_i) = E(Y_i)$: variability increases with the predicted value. In regular OLS regression, this manifests itself as the "megaphone shape" in the plot of $r_i$ versus $\hat{Y}_i$.
* If you see this shape, consider whether the data could be Poisson (e.g. the blood pressure data, p. 428).
* Any count could potentially be approximately Poisson. In fact, binomial data where $n_i$ is very large (and the success probability is small) are approximately Poisson.
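The mean-variance identity above can be checked by simulation. A minimal sketch in Python (standard library only), using Knuth's multiplication method to draw Poisson variates; the sample sizes and values of $\mu$ are arbitrary:

```python
import math
import random

def poisson_draw(mu, rng):
    # Knuth's multiplication method: multiply uniforms until the product
    # drops below exp(-mu); the number of multiplications is Pois(mu).
    limit = math.exp(-mu)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

rng = random.Random(0)
for mu in (2.0, 10.0):
    draws = [poisson_draw(mu, rng) for _ in range(20000)]
    mean = sum(draws) / len(draws)
    var = sum((d - mean) ** 2 for d in draws) / (len(draws) - 1)
    print(f"mu={mu}: sample mean={mean:.2f}, sample variance={var:.2f}")
```

For each $\mu$, the sample variance comes out close to the sample mean, which is the signature of Poisson data.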
## Log and identity links

Let $Y_i \sim \text{Pois}(\mu_i)$. The log link relating $\mu_i$ to $x_i'\beta$ is standard:

$$Y_i \sim \text{Pois}(\mu_i), \qquad \log \mu_i = \beta_0 + x_{i1}\beta_1 + \cdots + x_{i,p-1}\beta_{p-1},$$

the log-linear Poisson regression model. The identity link can also be used:

$$Y_i \sim \text{Pois}(\mu_i), \qquad \mu_i = \beta_0 + x_{i1}\beta_1 + \cdots + x_{i,p-1}\beta_{p-1}.$$

Both can be fit in PROC GENMOD.
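For intuition, the two links map the same linear predictor $\eta_i = x_i'\beta$ onto the mean scale differently. A small Python illustration with made-up coefficients ($\beta_0 = 0.5$, $\beta_1 = 0.2$ are arbitrary, not from any fitted model):

```python
import math

beta0, beta1 = 0.5, 0.2   # made-up coefficients, for illustration only
for x in (0.0, 5.0, 10.0):
    eta = beta0 + beta1 * x        # linear predictor x_i' beta
    mu_log = math.exp(eta)         # log link: log(mu) = eta, so mu = e^eta
    mu_identity = eta              # identity link: mu = eta directly
    print(f"x={x}: log-link mu={mu_log:.3f}, identity-link mu={mu_identity:.3f}")
```

Note one design consequence: the log link guarantees $\mu_i > 0$ for any $\beta$, while the identity link can produce negative fitted means unless the linear predictor stays positive over the data.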

## Interpretation for the log link

We have $Y_i \sim \text{Pois}(\mu_i)$. The log link $\log(\mu_i) = x_i'\beta$ is most common:

$$Y_i \sim \text{Pois}(\mu_i), \qquad \mu_i = e^{\beta_0 + \beta_1 x_{i1} + \cdots + \beta_k x_{ik}},$$

or simply $Y_i \sim \text{Pois}\left(e^{\beta_0 + \beta_1 x_{i1} + \cdots + \beta_k x_{ik}}\right)$.

Say we have $k = 3$ predictors. The mean satisfies

$$\mu(x_1, x_2, x_3) = e^{\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3}.$$

Then increasing $x_2$ to $x_2 + 1$ gives

$$\mu(x_1, x_2 + 1, x_3) = e^{\beta_0 + \beta_1 x_1 + \beta_2 (x_2 + 1) + \beta_3 x_3} = \mu(x_1, x_2, x_3)\, e^{\beta_2}.$$

In general, increasing $x_j$ by one while holding the other predictors constant multiplies the mean by a factor of $e^{\beta_j}$.
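This multiplicative interpretation can be verified numerically. A quick check in Python, using hypothetical coefficients (the values of $\beta_0, \ldots, \beta_3$ below are invented for illustration):

```python
import math

# Hypothetical coefficients for a 3-predictor log-link model (illustration only).
b0, b1, b2, b3 = -1.0, 0.3, 0.8, -0.2

def mu(x1, x2, x3):
    # Mean under the log link: mu = exp(b0 + b1*x1 + b2*x2 + b3*x3)
    return math.exp(b0 + b1 * x1 + b2 * x2 + b3 * x3)

base = mu(1.0, 2.0, 3.0)
bumped = mu(1.0, 3.0, 3.0)   # same point, but x2 increased by one
ratio = bumped / base
print(ratio, math.exp(b2))   # the ratio of means equals e^{b2}
```

The ratio does not depend on the values of the other predictors, which is what makes $e^{\beta_j}$ a clean summary of the effect of $x_j$.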
## Example: crab mating

Data on female horseshoe crabs:

* C = color (1 = light medium, 2 = medium, 3 = dark medium, 4 = dark).
* S = spine condition (1 = both good, 2 = one worn or broken, 3 = both worn or broken).
* W = carapace width (cm).
* Wt = weight (kg).
* Sa = number of satellites (additional male crabs, besides her nest-mate husband) nearby.

## Looking at the data...

We initially examine width as a predictor for the number of satellites. A raw scatterplot of the number of satellites versus the predictors does not show an approximately linear trend.

```
ods png; ods graphics on;
```
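PROC GENMOD fits the log-linear Poisson model by maximum likelihood. The same estimate can be sketched in plain Python with Newton-Raphson (equivalently, iteratively reweighted least squares) for one predictor plus an intercept; the toy data below are invented for illustration and are NOT the crab data:

```python
import math

# Invented toy data: counts y that grow roughly exponentially in x.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1, 1, 2, 4, 5, 9]
n = len(x)

b0 = math.log(sum(y) / n)   # start at the null model: mu_i = mean(y)
b1 = 0.0
for _ in range(25):                       # Newton-Raphson / IRLS iterations
    mu = [math.exp(b0 + b1 * xi) for xi in x]
    # Score vector U = X'(y - mu)
    u0 = sum(yi - mi for yi, mi in zip(y, mu))
    u1 = sum(xi * (yi - mi) for xi, yi, mi in zip(x, y, mu))
    # Fisher information I = X' diag(mu) X, a 2x2 matrix
    i00 = sum(mu)
    i01 = sum(xi * mi for xi, mi in zip(x, mu))
    i11 = sum(xi * xi * mi for xi, mi in zip(x, mu))
    det = i00 * i11 - i01 * i01
    # Newton step: beta <- beta + I^{-1} U (2x2 inverse written out)
    b0 += (i11 * u0 - i01 * u1) / det
    b1 += (i00 * u1 - i01 * u0) / det

print(f"b0 = {b0:.4f}, b1 = {b1:.4f}")
```

At convergence the score equations $\sum_i (y_i - \hat\mu_i) = 0$ and $\sum_i x_i (y_i - \hat\mu_i) = 0$ hold, the Poisson analogue of the OLS normal equations.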