Chapter 10

10–1. Both estimators are unbiased. Now $V(\bar{X}_1) = \sigma^2/2n$ while $V(\bar{X}_2) = \sigma^2/n$. Since $V(\bar{X}_1) < V(\bar{X}_2)$, $\bar{X}_1$ is the more efficient estimator.

10–2. $E(\hat\theta_1) = \mu$ and $E(\hat\theta_2) = \frac{1}{2}E(2X_1 - X_6 + X_4) = \frac{1}{2}(2\mu - \mu + \mu) = \mu$, so both estimators are unbiased. Their variances are

$$V(\hat\theta_1) = \sigma^2/7, \qquad V(\hat\theta_2) = \left(\tfrac{1}{2}\right)^2 V(2X_1 - X_6 + X_4) = \tfrac{1}{4}\left[4V(X_1) + V(X_6) + V(X_4)\right] = \tfrac{1}{4}(6\sigma^2) = 3\sigma^2/2.$$

Thus $\hat\theta_1$ has the smaller variance.

10–3. Since $\hat\theta_1$ is unbiased, $\mathrm{MSE}(\hat\theta_1) = V(\hat\theta_1) = 10$. For $\hat\theta_2$,

$$\mathrm{MSE}(\hat\theta_2) = V(\hat\theta_2) + (\mathrm{Bias})^2 = 4 + (\theta - \theta/2)^2 = 4 + \theta^2/4.$$

If $\theta^2 < 24$, i.e. $|\theta| < \sqrt{24} \approx 4.899$, then $\hat\theta_2$ is the better estimator of $\theta$ because it has the smaller MSE.

10–4. $\mathrm{MSE}(\hat\theta_1) = V(\hat\theta_1) = 12$, $\mathrm{MSE}(\hat\theta_2) = V(\hat\theta_2) = 10$, and $\mathrm{MSE}(\hat\theta_3) = E(\hat\theta_3 - \theta)^2 = 6$. $\hat\theta_3$ is the best estimator because it has the smallest MSE.

10–5. $E(S^2) = \frac{1}{24}E(10S_1^2 + 8S_2^2 + 6S_3^2) = \frac{1}{24}(10\sigma^2 + 8\sigma^2 + 6\sigma^2) = \frac{1}{24}(24\sigma^2) = \sigma^2.$

10–6. Any linear estimator of $\mu$ has the form $\hat\theta = \sum_{i=1}^n a_i X_i$, where the $a_i$ are constants. $\hat\theta$ is an unbiased estimator of $\mu$ only if $E(\hat\theta) = \mu$, which requires $\sum_{i=1}^n a_i = 1$. Now $V(\hat\theta) = \sigma^2 \sum_{i=1}^n a_i^2$, so we must choose the $a_i$ to minimize $V(\hat\theta)$ subject to the constraint $\sum a_i = 1$. Let $\lambda$ be a Lagrange multiplier and set

$$F(a_1,\dots,a_n,\lambda) = \sigma^2\sum_{i=1}^n a_i^2 - \lambda\left(\sum_{i=1}^n a_i - 1\right).$$

Setting $\partial F/\partial a_i = 0$ gives $2a_i\sigma^2 - \lambda = 0$ for $i = 1, 2, \dots, n$, and $\partial F/\partial\lambda = 0$ recovers the constraint $\sum_{i=1}^n a_i = 1$. The solution is $a_i = 1/n$. Thus $\hat\theta = \bar{X}$ is the best linear unbiased estimator of $\mu$.

10–7. For a Poisson sample,

$$L(\alpha) = \prod_{i=1}^n \frac{\alpha^{X_i} e^{-\alpha}}{X_i!} = \alpha^{\sum X_i} e^{-n\alpha} \Big/ \prod_{i=1}^n X_i!$$

$$\ln L(\alpha) = \left(\sum_{i=1}^n X_i\right)\ln\alpha - n\alpha - \ln\left(\prod_{i=1}^n X_i!\right)$$

$$\frac{d\ln L(\alpha)}{d\alpha} = \frac{\sum_{i=1}^n X_i}{\alpha} - n = 0 \quad\Longrightarrow\quad \hat\alpha = \frac{\sum_{i=1}^n X_i}{n} = \bar{X}.$$

10–8. For the Poisson distribution, $E(X) = \alpha = \mu_1$. Also, $M_1 = \bar{X}$. Thus $\hat\alpha = \bar{X}$ is the moment estimator of $\alpha$.

10–9. For an exponential sample,

$$L(\lambda) = \prod_{i=1}^n \lambda e^{-\lambda t_i} = \lambda^n e^{-\lambda\sum_{i=1}^n t_i}$$

$$\ln L(\lambda) = n\ln\lambda - \lambda\sum_{i=1}^n t_i$$

$$\frac{d\ln L(\lambda)}{d\lambda} = \frac{n}{\lambda} - \sum_{i=1}^n t_i = 0 \quad\Longrightarrow\quad \hat\lambda = n\Big/\sum_{i=1}^n t_i = (\bar{t})^{-1}.$$

10–10. $E(t) = 1/\lambda = \mu_1$, and $M_1 = \bar{t}$. Thus $1/\lambda = \bar{t}$, or $\hat\lambda = (\bar{t})^{-1}$.

10–11. If $X$ is a gamma random variable, then $E(X) = r/\lambda$ and $V(X) = r/\lambda^2$, so $E(X^2) = (r + r^2)/\lambda^2$. Now $M_1 = \bar{X}$ and $M_2 = (1/n)\sum_{i=1}^n X_i^2$. Equating population and sample moments,

$$\frac{r}{\lambda} = \bar{X}, \qquad \frac{r + r^2}{\lambda^2} = \frac{1}{n}\sum_{i=1}^n X_i^2,$$

which gives

$$\hat\lambda = \bar{X}\Big/\left[\frac{1}{n}\sum_{i=1}^n X_i^2 - \bar{X}^2\right], \qquad \hat{r} = \bar{X}^2\Big/\left[\frac{1}{n}\sum_{i=1}^n X_i^2 - \bar{X}^2\right].$$

10–12. $E(X) = 1/p$ and $M_1 = \bar{X}$. Thus $1/p = \bar{X}$, or $\hat{p} = 1/\bar{X}$.
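As an optional sanity check, the variance claims in 10–2 are easy to confirm by simulation. The sketch below is not part of the textbook solution: it assumes a normal population with arbitrarily chosen $\mu = 5$ and $\sigma = 2$, and estimates the mean and variance of both estimators over many samples of size 7.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, reps = 5.0, 2.0, 200_000

# 200,000 simulated samples of size 7 from a N(mu, sigma^2) population.
X = rng.normal(mu, sigma, size=(reps, 7))

theta1 = X.mean(axis=1)                         # theta1-hat: mean of all 7 observations
theta2 = (2 * X[:, 0] - X[:, 5] + X[:, 3]) / 2  # theta2-hat: (2 X1 - X6 + X4)/2

print(theta1.mean(), theta2.mean())  # both close to mu = 5 (unbiased)
print(theta1.var(), sigma**2 / 7)    # close to sigma^2/7, about 0.571
print(theta2.var(), 1.5 * sigma**2)  # close to 3 sigma^2/2 = 6.0
```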
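The Lagrange-multiplier result in 10–6 can be checked the same way: minimizing $\sigma^2\sum a_i^2$ subject to $\sum a_i = 1$ with a general-purpose constrained optimizer should recover the equal weights $a_i = 1/n$. A minimal sketch, assuming SciPy is available; $n = 5$, $\sigma^2 = 1$, and the random starting point are arbitrary choices:

```python
import numpy as np
from scipy.optimize import minimize

n, sigma2 = 5, 1.0
rng = np.random.default_rng(1)

# Minimize V(theta-hat) = sigma^2 * sum(a_i^2) subject to sum(a_i) = 1.
result = minimize(
    fun=lambda a: sigma2 * np.sum(a**2),
    x0=rng.uniform(0.0, 1.0, size=n),  # arbitrary starting weights
    constraints=[{"type": "eq", "fun": lambda a: np.sum(a) - 1.0}],
)
print(result.x)  # close to [0.2, 0.2, 0.2, 0.2, 0.2], i.e. a_i = 1/n
```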
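Finally, the maximum-likelihood and moment estimators from 10–7, 10–9, and 10–11 can be evaluated on simulated data; with a large sample each estimate should land near the true parameter. The parameter values below are arbitrary, and note that NumPy parameterizes the exponential and gamma distributions by the scale $1/\lambda$ rather than the rate $\lambda$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Problem 10-7: Poisson(alpha) -- the MLE is the sample mean.
alpha = 3.0
x = rng.poisson(alpha, n)
print(x.mean())  # close to alpha = 3.0

# Problem 10-9: exponential with rate lambda -- the MLE is 1/t-bar.
lam = 0.5
t = rng.exponential(1 / lam, n)  # NumPy uses scale = 1/lambda
print(1 / t.mean())  # close to lambda = 0.5

# Problem 10-11: gamma(r, lambda) -- moment estimators from the solution above.
r, lam = 4.0, 2.0
g = rng.gamma(r, 1 / lam, n)  # shape r, scale 1/lambda
m1, m2 = g.mean(), np.mean(g**2)
lam_hat = m1 / (m2 - m1**2)   # X-bar / (M2 - X-bar^2)
r_hat = m1**2 / (m2 - m1**2)  # X-bar^2 / (M2 - X-bar^2)
print(lam_hat, r_hat)  # close to lambda = 2.0 and r = 4.0
```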