lec2
These last two results can be demonstrated as follows:

$$SS_A = n \sum_i (\bar{y}_{i\cdot} - \bar{y}_{\cdot\cdot})^2.$$

Since $\bar{y}_{i\cdot} = \mu + a_i + \bar{e}_{i\cdot}$ and $\bar{y}_{\cdot\cdot} = \mu + \bar{a}_{\cdot} + \bar{e}_{\cdot\cdot}$, it follows that

$$SS_A = n \sum_i (a_i + \bar{e}_{i\cdot} - \bar{a}_{\cdot} - \bar{e}_{\cdot\cdot})^2 = n \sum_i (\gamma_i - \bar{\gamma}_{\cdot})^2,$$

where $\gamma_i = a_i + \bar{e}_{i\cdot}$ and $\bar{\gamma}_{\cdot} = \bar{a}_{\cdot} + \bar{e}_{\cdot\cdot}$. Notice that $\gamma_1, \ldots, \gamma_a \stackrel{iid}{\sim} N(0, \sigma_a^2 + \sigma^2/n)$. Therefore, $\gamma_1/\sqrt{\sigma_a^2 + \sigma^2/n}, \ldots, \gamma_a/\sqrt{\sigma_a^2 + \sigma^2/n} \stackrel{iid}{\sim} N(0, 1)$. It follows that

$$\frac{SS_A}{n(\sigma_a^2 + \sigma^2/n)} = \frac{\sum_i (\gamma_i - \bar{\gamma}_{\cdot})^2}{\sigma_a^2 + \sigma^2/n} = \sum_i \left( \frac{\gamma_i}{\sqrt{\sigma_a^2 + \sigma^2/n}} - \frac{\bar{\gamma}_{\cdot}}{\sqrt{\sigma_a^2 + \sigma^2/n}} \right)^2$$

is a sum of $a$ squared deviations from the mean of standard normal random variables (the $\gamma_i/\sqrt{\sigma_a^2 + \sigma^2/n}$'s). Therefore,

$$\frac{SS_A}{n\sigma_a^2 + \sigma^2} \sim \chi^2(a-1).$$

And since the expected value of a $\chi^2(d.f.)$ random variable is $d.f.$, we have

$$E\left(\frac{SS_A}{n\sigma_a^2 + \sigma^2}\right) = a - 1 \quad\Longrightarrow\quad E\left(\frac{SS_A}{a-1}\right) = n\sigma_a^2 + \sigma^2.$$
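As a numerical check on this derivation, the distributional claim can be verified by simulation. The sketch below is illustrative only: the group count $a$, replicate count $n$, and variance values are arbitrary choices, not taken from the notes.

```python
# Monte Carlo check that SS_A / (n*sigma_a^2 + sigma^2) ~ chi^2(a - 1)
# under the balanced one-way random effects model y_ij = mu + a_i + e_ij.
# The parameter values below are arbitrary, chosen only for illustration.
import numpy as np

rng = np.random.default_rng(1)
a, n = 5, 4                      # a treatments, n replicates each
mu, sigma_a, sigma = 10.0, 2.0, 1.5
reps = 20000

scaled_ssa = np.empty(reps)
for r in range(reps):
    ai = rng.normal(0.0, sigma_a, size=a)       # random treatment effects
    e = rng.normal(0.0, sigma, size=(a, n))     # errors
    y = mu + ai[:, None] + e                    # data matrix (a x n)
    ybar_i = y.mean(axis=1)                     # treatment means
    ybar = y.mean()                             # grand mean
    ss_a = n * np.sum((ybar_i - ybar) ** 2)
    scaled_ssa[r] = ss_a / (n * sigma_a**2 + sigma**2)

print("mean of SS_A/(n*sigma_a^2 + sigma^2):", scaled_ssa.mean())
print("theoretical E[chi^2(a-1)] = a-1     :", a - 1)
```

The average of the scaled sums of squares should come out close to $a-1 = 4$, consistent with the chi-square result above.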
Under $H_0: \sigma_a^2 = 0$,

$$F = \frac{MS_A}{MS_E} \sim F(a-1, N-a),$$

and we reject $H_0$ if $F > F_\alpha(a-1, N-a)$.

ANOVA Table:

Source of      Sum of      d.f.     Mean       E(MS)                       F
Variation      Squares              Squares
Treatments     $SS_A$      $a-1$    $MS_A$     $n\sigma_a^2 + \sigma^2$    $MS_A/MS_E$
Error          $SS_E$      $N-a$    $MS_E$     $\sigma^2$
Total          $SS_T$      $N-1$

For an unbalanced design, replace $n$ with $(N - \sum_i n_i^2/N)/(a-1)$ in the above ANOVA table.
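For concreteness, a minimal sketch of the balanced-case F test follows. The small data array and the $\alpha$ level are hypothetical; only the formulas for $SS_A$, $SS_E$, the mean squares, and the rejection rule come from the table above.

```python
# F test of H0: sigma_a^2 = 0 in a balanced one-way random effects design.
# The data set here is made up purely for illustration.
import numpy as np
from scipy import stats

y = np.array([[6.1, 5.8, 6.4],       # treatment 1
              [7.0, 7.3, 6.8],       # treatment 2
              [5.2, 5.5, 5.1],       # treatment 3
              [6.6, 6.9, 6.5]])      # treatment 4
a, n = y.shape
N = a * n

ybar_i = y.mean(axis=1)                          # treatment means
ybar = y.mean()                                  # grand mean

ss_a = n * np.sum((ybar_i - ybar) ** 2)          # treatment sum of squares
ss_e = np.sum((y - ybar_i[:, None]) ** 2)        # error sum of squares

ms_a = ss_a / (a - 1)
ms_e = ss_e / (N - a)
F = ms_a / ms_e

alpha = 0.05
f_crit = stats.f.ppf(1 - alpha, a - 1, N - a)    # F_alpha(a-1, N-a)
print(f"F = {F:.3f}, critical value = {f_crit:.3f}, reject H0: {F > f_crit}")
```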
Estimation of Variance Components

Since $MS_E$ is an unbiased estimator of its expected value $\sigma^2$, we use $\hat{\sigma}^2 = MS_E$ to estimate $\sigma^2$. In addition,

$$E\left(\frac{MS_A - MS_E}{n}\right) = \frac{(n\sigma_a^2 + \sigma^2) - \sigma^2}{n} = \sigma_a^2,$$

so we use

$$\hat{\sigma}_a^2 = \frac{MS_A - MS_E}{n}$$

to estimate $\sigma_a^2$. The validity of this estimation procedure does not depend on normality assumptions (on the $a_i$'s and $e_{ij}$'s). In addition, it can be shown that (under certain assumptions) the proposed estimators are optimal in a certain sense.

Occasionally, $MS_A < MS_E$. In such a case we will get $\hat{\sigma}_a^2 < 0$. Since a negative estimate of a variance component makes no sense, in this case $\hat{\sigma}_a^2$ is set equal to 0.
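The method-of-moments estimates described here can be computed directly from the mean squares. In the sketch below the mean squares and $n$ are placeholder values, not results from the notes.

```python
# Variance component estimates from the ANOVA mean squares:
#   sigma^2 estimate   = MS_E
#   sigma_a^2 estimate = (MS_A - MS_E) / n, set to 0 if negative.
# ms_a, ms_e, and n below are hypothetical placeholder values.
ms_a, ms_e, n = 2.85, 0.40, 3

sigma2_hat = ms_e
sigma2_a_hat = max((ms_a - ms_e) / n, 0.0)   # truncate at 0 when MS_A < MS_E

print("sigma^2 estimate  :", sigma2_hat)
print("sigma_a^2 estimate:", sigma2_a_hat)
```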
Confidence Intervals for Variance Components:

Since $SS_E/\sigma^2 \sim \chi^2(N-a)$, it must be true that

$$\Pr\left(\chi^2_{1-\alpha/2}(N-a) \le \frac{SS_E}{\sigma^2} \le \chi^2_{\alpha/2}(N-a)\right) = 1 - \alpha.$$

Inverting all three terms in the inequality just reverses the $\le$'s to $\ge$'s:

$$\Pr\left(\frac{1}{\chi^2_{1-\alpha/2}(N-a)} \ge \frac{\sigma^2}{SS_E} \ge \frac{1}{\chi^2_{\alpha/2}(N-a)}\right) = 1 - \alpha$$

$$\Pr\left(\frac{SS_E}{\chi^2_{1-\alpha/2}(N-a)} \ge \sigma^2 \ge \frac{SS_E}{\chi^2_{\alpha/2}(N-a)}\right) = 1 - \alpha.$$

Therefore, a $100(1-\alpha)\%$ CI for $\sigma^2$ is

$$\left(\frac{SS_E}{\chi^2_{\alpha/2}(N-a)}, \; \frac{SS_E}{\chi^2_{1-\alpha/2}(N-a)}\right).$$

It turns out that it is a good bit more complicated to derive a confidence interval for $\sigma_a^2$. However, we can more easily find exact CIs for the intraclass correlation coefficient

$$\rho = \frac{\sigma_a^2}{\sigma_a^2 + \sigma^2}$$

and for the ratio of the variance components:

$$\theta = \frac{\sigma_a^2}{\sigma^2}.$$

Both of these parameters have useful interpretations: $\rho$ represents the proportion of the total variance that is the result of differences between treatments; $\theta$ represents the ratio of the between-treatment variance to the within-treatment (error) variance.
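A minimal numerical sketch of the chi-square interval for $\sigma^2$ derived above is given below; the values of $SS_E$, $N$, $a$, and $\alpha$ are hypothetical, and the chi-square quantiles are taken from scipy.stats.

```python
# 100(1 - alpha)% confidence interval for sigma^2:
#   ( SS_E / chi2_{alpha/2}(N - a),  SS_E / chi2_{1 - alpha/2}(N - a) )
# SS_E, N, a, and alpha below are placeholder values for illustration.
from scipy import stats

ss_e, N, a, alpha = 3.20, 12, 4, 0.05
df = N - a

# Note: chi2_{alpha/2} in the notes denotes the upper-alpha/2 critical point,
# i.e. the (1 - alpha/2) quantile in scipy's parameterization.
upper_pt = stats.chi2.ppf(1 - alpha / 2, df)   # chi^2_{alpha/2}(N - a)
lower_pt = stats.chi2.ppf(alpha / 2, df)       # chi^2_{1 - alpha/2}(N - a)

ci = (ss_e / upper_pt, ss_e / lower_pt)
print(f"95% CI for sigma^2: ({ci[0]:.4f}, {ci[1]:.4f})")
```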