ch11sol - Chapter 11: Analysis of Variance and Regression


Chapter 11: Analysis of Variance and Regression

11.1 a. The first-order Taylor series approximation is
$$\mathrm{Var}[g(Y)] \approx [g'(\theta)]^2\,\mathrm{Var}\,Y = [g'(\theta)]^2 v(\theta).$$
b. If we choose $g(y) = g^*(y) = \int_a^y \frac{1}{\sqrt{v(x)}}\,dx$, then
$$\frac{dg^*(\theta)}{d\theta} = \frac{d}{d\theta}\int_a^\theta \frac{1}{\sqrt{v(x)}}\,dx = \frac{1}{\sqrt{v(\theta)}},$$
by the Fundamental Theorem of Calculus. Then, for any $\theta$,
$$\mathrm{Var}[g^*(Y)] \approx \left(\frac{1}{\sqrt{v(\theta)}}\right)^2 v(\theta) = 1.$$

11.2 a. $v(\theta) = \theta$, $g^*(y) = \sqrt{y}$, and $\frac{dg^*(\theta)}{d\theta} = \frac{1}{2\sqrt{\theta}}$, so
$$\mathrm{Var}\,g^*(Y) \approx \left(\frac{dg^*(\theta)}{d\theta}\right)^2 v(\theta) = \frac{1}{4},$$
independent of $\theta$.
b. To use the Taylor series approximation, we need to express everything in terms of $\theta = \mathrm{E}Y = np$. Then $v(\theta) = \theta(1-\theta/n)$ and, for $g^*(y) = \arcsin\sqrt{y/n}$,
$$\left(\frac{dg^*(\theta)}{d\theta}\right)^2 = \left(\frac{1}{\sqrt{1-\theta/n}}\cdot\frac{1}{2\sqrt{\theta/n}}\cdot\frac{1}{n}\right)^2 = \frac{1}{4n\theta(1-\theta/n)}.$$
Therefore
$$\mathrm{Var}[g^*(Y)] \approx \left(\frac{dg^*(\theta)}{d\theta}\right)^2 v(\theta) = \frac{1}{4n},$$
independent of $\theta$, that is, independent of $p$.
c. $v(\theta) = K\theta^2$, $g^*(y) = \log y$, and $\frac{dg^*(\theta)}{d\theta} = \frac{1}{\theta}$, so
$$\mathrm{Var}[g^*(Y)] \approx \left(\frac{1}{\theta}\right)^2 K\theta^2 = K,$$
independent of $\theta$.

11.3 a. $g^*_\lambda(y) = \frac{y^\lambda - 1}{\lambda}$ is clearly continuous in $\lambda$, with the possible exception of $\lambda = 0$. For that value use l'H\^{o}pital's rule to get
$$\lim_{\lambda\to 0}\frac{y^\lambda - 1}{\lambda} = \lim_{\lambda\to 0}\frac{(\log y)\,y^\lambda}{1} = \log y.$$
b. From Exercise 11.1, we want to find $v(\theta)$ that satisfies
$$\frac{y^\lambda - 1}{\lambda} = \int_a^y \frac{1}{\sqrt{v(x)}}\,dx.$$
Taking derivatives,
$$\frac{d}{dy}\,\frac{y^\lambda - 1}{\lambda} = y^{\lambda-1} = \frac{d}{dy}\int_a^y \frac{1}{\sqrt{v(x)}}\,dx = \frac{1}{\sqrt{v(y)}}.$$
Thus $v(y) = y^{-2(\lambda-1)}$. From Exercise 11.1,
$$\mathrm{Var}\!\left[\frac{Y^\lambda - 1}{\lambda}\right] \approx \left(\frac{d}{d\theta}\,\frac{\theta^\lambda - 1}{\lambda}\right)^2 v(\theta) = \theta^{2(\lambda-1)}\,\theta^{-2(\lambda-1)} = 1.$$
Note: If $\lambda = 1/2$, then $v(\theta) = \theta$, which agrees with Exercise 11.2(a). If $\lambda = 0$, then $v(\theta) = \theta^2$, which agrees with Exercise 11.2(c) with $K = 1$.

11.5 For the model
$$Y_{ij} = \mu + \tau_i + \epsilon_{ij}, \qquad i = 1,\ldots,k, \quad j = 1,\ldots,n_i,$$
take $k = 2$. The two parameter configurations
$$(\mu, \tau_1, \tau_2) = (10, 5, 2) \qquad\text{and}\qquad (\mu, \tau_1, \tau_2) = (7, 8, 5)$$
have the same values for $\mu + \tau_1$ and $\mu + \tau_2$, so they give the same distributions for $Y_{1j}$ and $Y_{2j}$.

11.6 a. Under the ANOVA assumptions $Y_{ij} = \theta_i + \epsilon_{ij}$, where the $\epsilon_{ij}$ are independent $\mathrm{n}(0,\sigma^2)$, the $Y_{ij}$ are independent $\mathrm{n}(\theta_i,\sigma^2)$. Therefore the joint pdf of the sample is
$$\prod_{i=1}^k\prod_{j=1}^{n_i}(2\pi\sigma^2)^{-1/2}\,e^{-(y_{ij}-\theta_i)^2/(2\sigma^2)} = (2\pi\sigma^2)^{-\sum_i n_i/2}\exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^k\sum_{j=1}^{n_i}(y_{ij}-\theta_i)^2\right)$$
$$= (2\pi\sigma^2)^{-\sum_i n_i/2}\exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^k n_i\theta_i^2\right)\exp\!\left(-\frac{1}{2\sigma^2}\sum_i\sum_j y_{ij}^2 + \frac{2}{2\sigma^2}\sum_{i=1}^k\theta_i n_i\bar{y}_i\right).$$
Therefore, by the Factorization Theorem,
$$\left(\bar{Y}_1, \bar{Y}_2, \ldots, \bar{Y}_k, \sum_i\sum_j Y_{ij}^2\right)$$
is jointly sufficient for $(\theta_1,\ldots,\theta_k,\sigma^2)$. Since $(\bar{Y}_1,\ldots,\bar{Y}_k, S_p^2)$ is a one-to-one function of this vector, $(\bar{Y}_1,\ldots,\bar{Y}_k, S_p^2)$ is also jointly sufficient.
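The variance-stabilization results in Exercises 11.1 and 11.2 are easy to check by simulation. The Python sketch below is only an illustration: the sample sizes, the parameter grids, the value $K = 0.04$, and the gamma model used to realize $v(\theta) = K\theta^2$ in part (c) are assumptions made here, not part of the exercises.

# Monte Carlo check of the variance-stabilizing transformations in
# Exercises 11.1-11.2.  Sample sizes, parameter grids, and the gamma
# model in part (c) are illustrative assumptions, not from the text.
import numpy as np

rng = np.random.default_rng(0)
reps = 200_000

# (a) Poisson, v(theta) = theta, g*(y) = sqrt(y): Var g*(Y) ~ 1/4.
for theta in (5.0, 20.0, 80.0):
    y = rng.poisson(theta, reps)
    print(f"Poisson  theta={theta:5.1f}  Var sqrt(Y) = {np.sqrt(y).var():.4f}  (target 0.25)")

# (b) Binomial(n, p), g*(y) = arcsin(sqrt(y/n)): Var g*(Y) ~ 1/(4n).
n = 50
for p in (0.2, 0.5, 0.8):
    y = rng.binomial(n, p, reps)
    v = np.arcsin(np.sqrt(y / n)).var()
    print(f"Binomial p={p:.1f}  Var arcsin(sqrt(Y/n)) = {v:.5f}  (target {1 / (4 * n):.5f})")

# (c) Constant coefficient of variation, v(theta) = K*theta^2,
#     g*(y) = log(y): Var g*(Y) ~ K.  A gamma(1/K, K*theta) model is
#     assumed here; it has mean theta and variance K*theta^2.
K = 0.04
for theta in (1.0, 10.0, 100.0):
    y = rng.gamma(shape=1 / K, scale=K * theta, size=reps)
    print(f"Gamma    theta={theta:6.1f}  Var log(Y) = {np.log(y).var():.4f}  (target {K:.2f})")

In each case the simulated variance of $g^*(Y)$ stays near its target ($1/4$, $1/(4n)$, or $K$) as $\theta$ varies, which is exactly the sense in which the transformation stabilizes the variance.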
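The Factorization Theorem argument in Exercise 11.6 can also be illustrated numerically: the one-way ANOVA log-likelihood computed from the raw observations agrees, at every parameter value, with the one computed from $(\bar{y}_1,\ldots,\bar{y}_k,\sum_{ij}y_{ij}^2)$ alone. In the Python sketch below the group sizes, group means, and variance are arbitrary choices made for illustration.

# Numerical illustration of the sufficiency result in Exercise 11.6:
# the one-way ANOVA log-likelihood depends on the data only through
# (ybar_1,...,ybar_k, sum_ij y_ij^2).  Group sizes, means, and the
# variance below are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
n_i = [4, 6, 5]
theta_true = np.array([2.0, -1.0, 0.5])
sigma2_true = 1.7
y = [rng.normal(t, np.sqrt(sigma2_true), n) for t, n in zip(theta_true, n_i)]

def loglik_raw(theta, sigma2, y):
    # Log-likelihood computed from the raw observations y_ij.
    N = sum(len(g) for g in y)
    ss = sum(((g - t) ** 2).sum() for g, t in zip(y, theta))
    return -0.5 * N * np.log(2 * np.pi * sigma2) - ss / (2 * sigma2)

def loglik_suff(theta, sigma2, ybar, sumsq, n_i):
    # Same log-likelihood computed from the jointly sufficient statistics.
    n_i = np.asarray(n_i, dtype=float)
    N = n_i.sum()
    ss = sumsq - 2 * np.sum(n_i * theta * ybar) + np.sum(n_i * theta ** 2)
    return -0.5 * N * np.log(2 * np.pi * sigma2) - ss / (2 * sigma2)

ybar = np.array([g.mean() for g in y])
sumsq = sum((g ** 2).sum() for g in y)

# The two evaluations agree at any parameter value, as the factorization requires.
theta0, sigma2_0 = np.array([1.0, 0.0, 1.0]), 2.0
print(loglik_raw(theta0, sigma2_0, y))
print(loglik_suff(theta0, sigma2_0, ybar, sumsq, n_i))

Because the two evaluations coincide for all $(\theta_1,\ldots,\theta_k,\sigma^2)$, the likelihood depends on the sample only through the jointly sufficient statistics, as the solution to 11.6(a) shows analytically.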