# ch11sol - Chapter 11: Analysis of Variance and Regression

Solutions Manual for *Statistical Inference*, Second Edition

**11.1**

a. The first-order Taylor series approximation is
$$\operatorname{Var}[g(Y)] \approx [g'(\theta)]^2 \cdot \operatorname{Var} Y = [g'(\theta)]^2 \cdot v(\theta).$$

b. If we choose $g(y) = g^*(y) = \int_a^y \frac{1}{\sqrt{v(x)}}\,dx$, then
$$\frac{dg^*(\theta)}{d\theta} = \frac{d}{d\theta}\int_a^\theta \frac{1}{\sqrt{v(x)}}\,dx = \frac{1}{\sqrt{v(\theta)}},$$
by the Fundamental Theorem of Calculus. Then, for any $\theta$,
$$\operatorname{Var}[g^*(Y)] \approx \left(\frac{1}{\sqrt{v(\theta)}}\right)^2 v(\theta) = 1.$$

**11.2**

a. $v(\lambda) = \lambda$, $g^*(y) = \sqrt{y}$, and $\frac{dg^*(\lambda)}{d\lambda} = \frac{1}{2\sqrt{\lambda}}$, so
$$\operatorname{Var}[g^*(Y)] \approx \left(\frac{dg^*(\lambda)}{d\lambda}\right)^2 \cdot v(\lambda) = \frac{1}{4},$$
independent of $\lambda$.

b. To use the Taylor series approximation, we need to express everything in terms of $\theta = \mathrm{E}Y = np$. Then $v(\theta) = \theta(1-\theta/n)$ and, with $g^*(y) = \arcsin\sqrt{y/n}$,
$$\left(\frac{dg^*(\theta)}{d\theta}\right)^2 = \frac{1}{1-\frac{\theta}{n}}\cdot\frac{1}{4\frac{\theta}{n}}\cdot\frac{1}{n^2} = \frac{1}{4n\,\theta(1-\theta/n)}.$$
Therefore
$$\operatorname{Var}[g^*(Y)] \approx \left(\frac{dg^*(\theta)}{d\theta}\right)^2 v(\theta) = \frac{1}{4n},$$
independent of $\theta$, that is, independent of $p$.

c. $v(\theta) = K\theta^2$, $g^*(y) = \log y$, and $\frac{dg^*(\theta)}{d\theta} = \frac{1}{\theta}$, so
$$\operatorname{Var}[g^*(Y)] \approx \left(\frac{1}{\theta}\right)^2 \cdot K\theta^2 = K,$$
independent of $\theta$.

**11.3**

a. $g^*_\lambda(y)$ is clearly continuous with the possible exception of $\lambda = 0$. For that value, use l'Hôpital's rule to get
$$\lim_{\lambda\to 0}\frac{y^\lambda - 1}{\lambda} = \lim_{\lambda\to 0}\frac{(\log y)\,y^\lambda}{1} = \log y.$$

b. From Exercise 11.1, we want to find the variance function $v$ that satisfies
$$\frac{y^\lambda - 1}{\lambda} = \int_a^y \frac{1}{\sqrt{v(x)}}\,dx.$$
Taking derivatives,
$$\frac{d}{dy}\,\frac{y^\lambda - 1}{\lambda} = y^{\lambda - 1} = \frac{d}{dy}\int_a^y \frac{1}{\sqrt{v(x)}}\,dx = \frac{1}{\sqrt{v(y)}}.$$
Thus $v(y) = y^{-2(\lambda-1)}$. From Exercise 11.1,
$$\operatorname{Var}\!\left[\frac{Y^\lambda - 1}{\lambda}\right] \approx \left(\frac{d}{d\theta}\,\frac{\theta^\lambda - 1}{\lambda}\right)^2 v(\theta) = \theta^{2(\lambda-1)}\,\theta^{-2(\lambda-1)} = 1.$$
Note: if $\lambda = 1/2$, then $v(\theta) = \theta$, which agrees with Exercise 11.2(a); if $\lambda = 0$, then $v(\theta) = \theta^2$, which agrees with Exercise 11.2(c).
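These delta-method calculations are easy to sanity-check by simulation. The sketch below is a minimal illustration, not part of the manual: the distributions stand in for the variance functions above (Poisson for $v(\lambda)=\lambda$, binomial for $v(\theta)=\theta(1-\theta/n)$, and a gamma with fixed shape for $v(\theta)=K\theta^2$), and all parameter values and replication counts are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
reps = 200_000  # Monte Carlo replications (arbitrary choice)

# 11.2(a): Y ~ Poisson(lam); Var(sqrt(Y)) should be near 1/4 for large lam.
for lam in [10, 50, 200]:
    y = rng.poisson(lam, reps)
    print(f"Poisson lam={lam}: Var(sqrt Y) = {np.sqrt(y).var():.4f} (delta method: 0.25)")

# 11.2(b): Y ~ binomial(n, p); Var(arcsin sqrt(Y/n)) should be near 1/(4n).
n = 100
for p in [0.2, 0.5, 0.8]:
    y = rng.binomial(n, p, reps)
    print(f"binomial p={p}: Var(arcsin sqrt(Y/n)) = {np.arcsin(np.sqrt(y / n)).var():.5f} "
          f"(delta method: {1 / (4 * n):.5f})")

# 11.2(c): Var(Y) = K * theta^2; Var(log Y) should be near K, whatever theta is.
# A gamma(shape=a, scale=theta/a) variable has mean theta and variance theta^2/a, so K = 1/a.
# (Exactly, Var(log Y) is the trigamma value psi'(a) ~ 1/a, so a small gap from K is expected.)
a = 25
for theta in [1.0, 10.0, 100.0]:
    y = rng.gamma(a, theta / a, reps)
    print(f"gamma theta={theta}: Var(log Y) = {np.log(y).var():.4f} (delta method: {1 / a:.4f})")
```

In each case the printed variance is essentially constant across the parameter values, which is exactly what a variance-stabilizing transformation is supposed to achieve.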

**11.5**

For the model
$$Y_{ij} = \mu + \tau_i + \varepsilon_{ij}, \qquad i = 1,\ldots,k, \quad j = 1,\ldots,n_i,$$
take $k = 2$. The two parameter configurations
$$(\mu, \tau_1, \tau_2) = (10, 5, 2) \qquad\text{and}\qquad (\mu, \tau_1, \tau_2) = (7, 8, 5)$$
have the same values for $\mu + \tau_1$ and $\mu + \tau_2$, so they give the same distributions for $Y_1$ and $Y_2$.

**11.6**

a. Under the ANOVA assumptions $Y_{ij} = \theta_i + \varepsilon_{ij}$, where the $\varepsilon_{ij}$ are independent $\mathrm{n}(0, \sigma^2)$, the $Y_{ij}$ are independent $\mathrm{n}(\theta_i, \sigma^2)$. Therefore the sample pdf is
$$\prod_{i=1}^k\prod_{j=1}^{n_i}(2\pi\sigma^2)^{-1/2}e^{-(y_{ij}-\theta_i)^2/(2\sigma^2)} = (2\pi\sigma^2)^{-\Sigma n_i/2}\exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^k\sum_{j=1}^{n_i}(y_{ij}-\theta_i)^2\right)$$
$$= (2\pi\sigma^2)^{-\Sigma n_i/2}\exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^k n_i\theta_i^2\right)\times\exp\!\left(-\frac{1}{2\sigma^2}\sum_i\sum_j y_{ij}^2 + \frac{1}{\sigma^2}\sum_{i=1}^k \theta_i n_i \bar{y}_{i\cdot}\right).$$
Therefore, by the Factorization Theorem, $\bigl(\bar{Y}_{1\cdot}, \bar{Y}_{2\cdot}, \ldots, \bar{Y}_{k\cdot}, \sum_i\sum_j Y_{ij}^2\bigr)$ is jointly sufficient for $(\theta_1, \ldots, \theta_k, \sigma^2)$. Since $(\bar{Y}_{1\cdot}, \ldots, \bar{Y}_{k\cdot}, S_p^2)$ is a one-to-one function of this vector, $(\bar{Y}_{1\cdot}, \ldots, \bar{Y}_{k\cdot}, S_p^2)$ is also jointly sufficient.

b. We can write
$$(2\pi\sigma^2)^{-\Sigma n_i/2}\exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^k\sum_{j=1}^{n_i}(y_{ij}-\theta_i)^2\right) = (2\pi\sigma^2)^{-\Sigma n_i/2}\exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^k\sum_{j=1}^{n_i}\bigl([y_{ij}-\bar{y}_{i\cdot}] + [\bar{y}_{i\cdot}-\theta_i]\bigr)^2\right)$$
$$= (2\pi\sigma^2)^{-\Sigma n_i/2}\exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^k\sum_{j=1}^{n_i}[y_{ij}-\bar{y}_{i\cdot}]^2\right)\exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^k n_i[\bar{y}_{i\cdot}-\theta_i]^2\right),$$
since the cross term $2[\bar{y}_{i\cdot}-\theta_i]\sum_j(y_{ij}-\bar{y}_{i\cdot})$ vanishes. So, by the Factorization Theorem, each $\bar{Y}_{i\cdot}$, $i = 1,\ldots,k$, is independent of the $Y_{ij}-\bar{Y}_{i\cdot}$, $j = 1,\ldots,n_i$, and hence $S_p^2$ is independent of each $\bar{Y}_{i\cdot}$.

c. Just identify $\sqrt{n_i}\,\bar{Y}_{i\cdot}$ with $X_i$ and redefine $\theta_i$ as $\sqrt{n_i}\,\theta_i$.
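As a numerical illustration of 11.6, not taken from the manual, the sketch below simulates a small unbalanced one-way layout, computes the jointly sufficient statistics $(\bar{Y}_{1\cdot},\ldots,\bar{Y}_{k\cdot}, S_p^2)$, and checks that $S_p^2$ is empirically uncorrelated with each cell mean, as the independence in part (b) requires. The layout, parameter values, and helper name `one_way_stats` are my own choices.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.array([2.0, 5.0, 9.0])   # cell means theta_i (arbitrary)
n_i = np.array([4, 7, 10])          # unbalanced cell sizes (arbitrary)
sigma = 3.0
reps = 50_000

def one_way_stats():
    """Simulate one layout; return the sufficient statistics (Ybar_i., S_p^2)."""
    ybars, ss = [], 0.0
    for th, n in zip(theta, n_i):
        y = rng.normal(th, sigma, n)
        ybars.append(y.mean())
        ss += ((y - y.mean()) ** 2).sum()     # within-cell sum of squares
    return np.array(ybars), ss / (n_i.sum() - len(n_i))  # pooled variance S_p^2

draws = [one_way_stats() for _ in range(reps)]
ybars = np.array([d[0] for d in draws])
s2p = np.array([d[1] for d in draws])

# Part (b): S_p^2 is independent of every cell mean, so these correlations are ~0.
for i in range(len(theta)):
    print(f"corr(Ybar_{i + 1}., S_p^2) = {np.corrcoef(ybars[:, i], s2p)[0, 1]:+.4f}")
print(f"mean S_p^2 = {s2p.mean():.3f}  (sigma^2 = {sigma ** 2})")
```

Independence is stronger than zero correlation, of course; the simulation can only fail to contradict part (b), while the factorization argument above proves it.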
**11.7**

Let $U_i = \bar{Y}_{i\cdot} - \theta_i$. Then
$$\sum_{i=1}^k n_i\bigl[(\bar{Y}_{i\cdot} - \bar{\bar{Y}}) - (\theta_i - \bar{\theta})\bigr]^2 = \sum_{i=1}^k n_i(U_i - \bar{U})^2.$$
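The preview stops mid-solution, but the displayed identity itself is easy to verify numerically. The sketch below assumes the bars denote $n_i$-weighted averages, e.g. $\bar{\bar{Y}} = \sum_i n_i \bar{Y}_{i\cdot} / \sum_i n_i$, the convention under which $U_i - \bar{U}$ equals $(\bar{Y}_{i\cdot} - \bar{\bar{Y}}) - (\theta_i - \bar{\theta})$ term by term; all numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n_i = np.array([3, 5, 8])                    # cell sizes (arbitrary)
theta = np.array([1.0, 4.0, 6.0])            # cell means (arbitrary)
ybar = theta + rng.normal(0.0, 1.0, 3)       # stand-ins for the cell means Ybar_i.

w = n_i / n_i.sum()                          # weights defining the grand means (assumed convention)
ybarbar, thetabar = w @ ybar, w @ theta
u = ybar - theta                             # U_i = Ybar_i. - theta_i
ubar = w @ u                                 # note ubar = ybarbar - thetabar

lhs = (n_i * ((ybar - ybarbar) - (theta - thetabar)) ** 2).sum()
rhs = (n_i * (u - ubar) ** 2).sum()
print(lhs, rhs, np.isclose(lhs, rhs))        # identical up to floating-point error
```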
