tied to the normal distribution. Indeed, most of the results on hypothesis testing rely on the assumption of normality. The assumption that $y_1, \ldots, y_n$ are normal may not always be appropriate. Examples arise when $y_1, \ldots, y_n$ are binary or when they represent counts. It is therefore natural to generalize the theory of linear models to include these other distributional assumptions for the response values.

Generalized Linear Models (GLMs) generalize linear models by including both of the above features: they allow more general distributional assumptions for $y_1, \ldots, y_n$, and they also allow (1).

2  Distributional Assumptions in GLMs

In GLMs, the response variables $y_1, \ldots, y_n$ can be either discrete (have pmfs) or continuous (have pdfs). It is assumed that $y_1, \ldots, y_n$ are independent. We also assume that the pmf or pdf of $y_i$ can be modelled by two parameters $\theta_i$ and $\phi_i$ and can be written as
\[
f(x; \theta_i, \phi_i) := h(x, \phi_i)\, \exp\!\left(\frac{x\theta_i - b(\theta_i)}{a(\phi_i)}\right). \tag{2}
\]
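As a concrete check of the family form in (2), the sketch below verifies numerically that the Poisson pmf (a natural choice for count responses, though not named in the text) fits this form with $\theta = \log\lambda$, $b(\theta) = e^\theta$, $a(\phi) = 1$, and $h(x, \phi) = 1/x!$; the function and parameter names here are illustrative, not from the source.

```python
import math

def exp_family_pmf(x, theta, b, a_phi, h):
    """Evaluate f(x; theta, phi) = h(x) * exp((x*theta - b(theta)) / a(phi))."""
    return h(x) * math.exp((x * theta - b(theta)) / a_phi)

# Poisson(lam): theta = log(lam), b(theta) = exp(theta), a(phi) = 1, h(x) = 1/x!
lam = 3.5
theta = math.log(lam)
for x in range(10):
    f = exp_family_pmf(x, theta, b=math.exp, a_phi=1.0,
                       h=lambda x: 1.0 / math.factorial(x))
    direct = lam**x * math.exp(-lam) / math.factorial(x)  # lam^x e^{-lam} / x!
    assert abs(f - direct) < 1e-12
```

The same `exp_family_pmf` skeleton covers the other examples mentioned (binary responses, normal responses) by swapping in the appropriate $b$, $a$, and $h$.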
Here $\theta_i$ is the main parameter (also called the canonical parameter), and $\phi_i$ is called the dispersion parameter; one often assumes that $\phi_i$ is the same for all $i$. The function $b(\theta_i)$ is called the cumulant function.

This distributional form includes the normal density assumption used in the classical linear models. In classical linear models, we assume that $y_i \sim N(\mu_i, \sigma^2)$. The density of $y_i$ can then be written as
\[
f(x) := \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{(x-\mu_i)^2}{2\sigma^2}\right)
\]
and this can be rewritten as
\[
f(x) := \frac{\exp\!\left(-x^2/(2\sigma^2)\right)}{\sqrt{2\pi}\,\sigma} \exp\!\left(\frac{x\mu_i - \mu_i^2/2}{\sigma^2}\right).
\]
Comparing with (2), this is of the required form with $\theta_i = \mu_i$, $b(\theta_i) = \theta_i^2/2$, $a(\phi_i) = \sigma^2$, and $h(x, \phi_i) = \exp\!\left(-x^2/(2\sigma^2)\right)/(\sqrt{2\pi}\,\sigma)$.
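The algebra behind this rewrite (expanding $-(x-\mu_i)^2/(2\sigma^2)$ into $-x^2/(2\sigma^2) + (x\mu_i - \mu_i^2/2)/\sigma^2$) can be checked numerically; the sketch below compares the standard $N(\mu, \sigma^2)$ density with the exponential-family form at a few points. Function names are illustrative.

```python
import math

def normal_pdf(x, mu, sigma):
    """Standard form of the N(mu, sigma^2) density."""
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (math.sqrt(2 * math.pi) * sigma)

def normal_pdf_glm_form(x, mu, sigma):
    """Same density in the form (2): theta = mu, b(theta) = theta^2/2,
    a(phi) = sigma^2, h(x, phi) = exp(-x^2/(2 sigma^2)) / (sqrt(2 pi) sigma)."""
    h = math.exp(-x**2 / (2 * sigma**2)) / (math.sqrt(2 * math.pi) * sigma)
    theta, a_phi = mu, sigma**2
    b = theta**2 / 2
    return h * math.exp((x * theta - b) / a_phi)

# The two forms agree to floating-point precision.
for x in [-2.0, 0.0, 1.3, 4.0]:
    assert abs(normal_pdf(x, 1.0, 2.0) - normal_pdf_glm_form(x, 1.0, 2.0)) < 1e-12
```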