# lec3 - The contributions of distinct sets of explanatory variables

The contributions of distinct sets of explanatory variables to the model are typically captured by breaking the overall regression (or model) sum of squares into distinct components. This is useful quite generally in linear models, but especially in ANOVA models, where the response is modeled in terms of one or more class variables, or factors. In such cases, the model sum of squares is decomposed into sums of squares for each of the distinct sets of dummy (indicator) variables needed to capture each factor in the model.

For example, the following model is appropriate for a randomized complete block design (RCBD):

$$y_{ij} = \mu + \beta_j + \alpha_i + e_{ij},$$

where $y_{ij}$ is the response from the $i$th treatment in the $j$th block, and $\beta_j$ and $\alpha_i$ are block and treatment effects, respectively. This model can also be written as

$$y = \mu \mathbf{j}_n + \beta_1 \mathbf{b}_1 + \cdots + \beta_b \mathbf{b}_b + \alpha_1 \mathbf{t}_1 + \cdots + \alpha_a \mathbf{t}_a + e. \qquad (*)$$

In this context, the notation $SS(\alpha \mid \beta, \mu)$ denotes the extra regression sum of squares due to fitting the $\alpha_i$s after fitting $\mu$ and the $\beta_j$s, and is given by

$$SS(\alpha \mid \beta, \mu) = y^T \left( P_{C(X)} - P_{C(X_1)} \right) y,$$

where $X_1 = (\mathbf{j}_n, \mathbf{b}_1, \ldots, \mathbf{b}_b)$ and $X = (X_1, \mathbf{t}_1, \ldots, \mathbf{t}_a)$.

Sums of squares like this one, which can be computed by fitting successively more complex models and taking the difference in regression/model sums of squares at each step, are called *sequential sums of squares*. They represent the contribution of each successive group of explanatory variables above and beyond those explanatory variables already in the model.
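As a quick numeric sketch of the formula above (the design sizes, the `proj` helper, and the fake response are all assumptions, not part of the notes), $SS(\alpha \mid \beta, \mu)$ can be computed directly as a difference of projections:

```python
import numpy as np

def proj(X):
    """Orthogonal projection matrix onto C(X); the pseudoinverse handles the
    rank deficiency of the full dummy-variable coding."""
    return X @ np.linalg.pinv(X.T @ X) @ X.T

# A small hypothetical RCBD: a = 3 treatments, b = 4 blocks, fake response.
a, b = 3, 4
n = a * b
rng = np.random.default_rng(0)
y = rng.normal(size=n)

jn = np.ones((n, 1))                          # j_n
blk = np.kron(np.ones((a, 1)), np.eye(b))     # b_1, ..., b_b
trt = np.kron(np.eye(a), np.ones((b, 1)))     # t_1, ..., t_a

X1 = np.hstack([jn, blk])    # X1 = (j_n, b_1, ..., b_b)
X = np.hstack([X1, trt])     # X  = (X1, t_1, ..., t_a)

# Extra (sequential) sum of squares for treatments after mu and the blocks:
ss_alpha = float(y @ (proj(X) - proj(X1)) @ y)
print(ss_alpha)
```

Because $C(X_1) \subset C(X)$, the matrix $P_{C(X)} - P_{C(X_1)}$ is itself a projection, so `ss_alpha` is always nonnegative.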

Any model that can be written as

$$y = X\beta + e = X_1 \beta_1 + X_2 \beta_2 + X_3 \beta_3 + \cdots + e$$

has a sequential sum of squares decomposition. That is, the regression or model sum of squares

$$SS_{\text{Model}} = y^T P_{C(X)} y = \| P_{C(X)} y \|^2$$

can always be decomposed as follows:

$$SS_{\text{Model}} = \| P_{C(X)} y \|^2 = \| P_{C(X_1)} y \|^2 + \| ( P_{C(X_1, X_2)} - P_{C(X_1)} ) y \|^2 + \| ( P_{C(X_1, X_2, X_3)} - P_{C(X_1, X_2)} ) y \|^2 + \cdots$$

or

$$SS_{\text{Model}} = SS(\beta_1) + SS(\beta_2 \mid \beta_1) + SS(\beta_3 \mid \beta_1, \beta_2) + \cdots$$

Note that, by construction, the projections and squared lengths of projections in such a decomposition are independent, because the spaces onto which we are projecting are mutually orthogonal. Such a decomposition can be extended to any number of terms.
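The decomposition and the orthogonality claim can be checked numerically. In this sketch the column-group sizes and data are arbitrary assumptions; the point is only that the three pieces add back to $SS_{\text{Model}}$ and that consecutive projection differences annihilate each other:

```python
import numpy as np

def proj(X):
    """Projection matrix onto C(X) via the pseudoinverse."""
    return X @ np.linalg.pinv(X.T @ X) @ X.T

# Hypothetical design with three nested column groups (sizes assumed).
rng = np.random.default_rng(1)
n = 12
X1 = np.ones((n, 1))
X2 = rng.normal(size=(n, 2))
X3 = rng.normal(size=(n, 3))
y = rng.normal(size=n)

P1 = proj(X1)
P12 = proj(np.hstack([X1, X2]))
P123 = proj(np.hstack([X1, X2, X3]))

ss_model = float(y @ P123 @ y)
ss1 = float(y @ P1 @ y)               # SS(beta1)
ss2_g1 = float(y @ (P12 - P1) @ y)    # SS(beta2 | beta1)
ss3_g12 = float(y @ (P123 - P12) @ y) # SS(beta3 | beta1, beta2)
```

Here `P1 @ (P12 - P1)` and `(P12 - P1) @ (P123 - P12)` are zero matrices (up to rounding), which is exactly the mutual orthogonality that makes the squared lengths add.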
Consider the RCBD model (*). This model can be written as

$$y = X_1 \beta_1 + X_2 \beta_2 + X_3 \beta_3 + e,$$

where $X_1 = \mathbf{j}_n$, $X_2 = (\mathbf{b}_1, \ldots, \mathbf{b}_b)$, $X_3 = (\mathbf{t}_1, \ldots, \mathbf{t}_a)$, and $\beta_1 = \mu$, $\beta_2 = (\beta_1, \ldots, \beta_b)^T$, $\beta_3 = (\alpha_1, \ldots, \alpha_a)^T$. The sequential breakdown of the model sum of squares here is

$$SS_{\text{Model}} = SS(\mu) + SS(\beta \mid \mu) + SS(\alpha \mid \beta, \mu). \qquad (**)$$

Consider the null hypothesis $H_0: \alpha_1 = \cdots = \alpha_a = 0$. The null model corresponding to this hypothesis is $y_{ij} = \mu + \beta_j + e_{ij}$. Fitting just the null model, we have

$$SS_{\text{Model}_0} = SS(\mu) + SS(\beta \mid \mu).$$

Note that $SSE = SS_T - SS_{\text{Model}}$, where $SS_T = \|y\|^2$ is the total (uncorrected) sum of squares. Therefore, the difference in error sums of squares between the null model and the maintained model is

$$SSE_0 - SSE = (SS_T - SS_{\text{Model}_0}) - (SS_T - SS_{\text{Model}}) = SS_{\text{Model}} - SS_{\text{Model}_0} = SS(\alpha \mid \beta, \mu).$$
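The identity $SSE_0 - SSE = SS(\alpha \mid \beta, \mu)$ can be verified by fitting the null and maintained models by least squares and comparing residual sums of squares. The design sizes, data, and the `sse`/`proj` helpers below are illustrative assumptions:

```python
import numpy as np

# Hypothetical RCBD data: a = 3 treatments, b = 4 blocks (values made up).
a, b = 3, 4
n = a * b
rng = np.random.default_rng(2)
y = rng.normal(size=n)

jn = np.ones((n, 1))
blk = np.kron(np.ones((a, 1)), np.eye(b))   # block indicators
trt = np.kron(np.eye(a), np.ones((b, 1)))   # treatment indicators

X0 = np.hstack([jn, blk])   # null model: mu + beta_j
X = np.hstack([X0, trt])    # maintained model: mu + beta_j + alpha_i

def sse(X, y):
    """Residual sum of squares from least squares on (rank-deficient) X."""
    yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return float(np.sum((y - yhat) ** 2))

def proj(X):
    """Projection matrix onto C(X) via the pseudoinverse."""
    return X @ np.linalg.pinv(X.T @ X) @ X.T

sse0 = sse(X0, y)       # SSE under the null model
sse_full = sse(X, y)    # SSE under the maintained model

# SS(alpha | beta, mu) computed directly as a difference of projections:
ss_alpha = float(y @ (proj(X) - proj(X0)) @ y)
```

The two routes agree: the drop in error sum of squares when the treatment columns are added equals the sequential sum of squares for treatments.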

