# Assign 3 - THE UNIVERSITY OF HONG KONG DEPARTMENT OF STATISTICS AND ACTUARIAL SCIENCE

THE UNIVERSITY OF HONG KONG
DEPARTMENT OF STATISTICS AND ACTUARIAL SCIENCE
STAT1302 PROBABILITY AND STATISTICS II
SUGGESTED SOLUTION TO ASSIGNMENT 3 (MARCH 2008)

## Question 1

(a) $\mathrm{Bias}(T_1) = E(T_1) - \mu = \frac{\mu + \mu}{2} - \mu = 0$, thus $T_1$ is unbiased.

$$
\mathrm{MSE}(T_1) = E\left(\frac{X_1 + X_2}{2} - \mu\right)^2
= \frac{1}{4}E(X_1 + X_2 - 2\mu)^2
= \frac{1}{4}\left\{E(X_1 - \mu)^2 + 2E(X_1 - \mu)E(X_2 - \mu) + E(X_2 - \mu)^2\right\}
= \frac{\sigma^2}{2} = \frac{\mu^2 c_v^2}{2},
$$

or, noticing that $T_1$ is unbiased, simply use

$$
\mathrm{MSE}(T_1) = \mathrm{Var}(T_1) = \mathrm{Var}\left(\frac{X_1 + X_2}{2}\right)
= \frac{\mathrm{Var}(X_1) + \mathrm{Var}(X_2)}{4} = \frac{\mu^2 c_v^2}{2}.
$$

(b) $\mathrm{Bias}(T_2) = E(T_2) - \mu = \frac{2\mu}{2 + c_v^2} - \mu = -\frac{\mu c_v^2}{2 + c_v^2}$, thus $T_2$ is biased unless $c_v = 0$.

$$
\mathrm{MSE}(T_2) = \mathrm{Var}(T_2) + \{\mathrm{Bias}(T_2)\}^2
= E\left(\frac{X_1 + X_2}{2 + c_v^2} - \frac{2\mu}{2 + c_v^2}\right)^2 + \left(\frac{\mu c_v^2}{2 + c_v^2}\right)^2
= \frac{1}{(2 + c_v^2)^2}E(X_1 + X_2 - 2\mu)^2 + \left(\frac{\mu c_v^2}{2 + c_v^2}\right)^2
= \frac{2\sigma^2 + \mu^2 c_v^4}{(2 + c_v^2)^2}
= \frac{\mu^2 c_v^2}{2 + c_v^2}.
$$

(c) $T_2$ is recommended. A good estimator should be both accurate and precise. $T_1$ is unbiased while $T_2$ is not (unless $c_v = 0$), so $T_1$ is more accurate. However, $T_2$ is more precise, since

$$
\frac{\mathrm{Var}(T_1)}{\mathrm{Var}(T_2)} = \frac{4 + 4c_v^2 + c_v^4}{4} > 1,
\quad\text{i.e.}\quad \mathrm{Var}(T_1) > \mathrm{Var}(T_2).
$$

We therefore use the MSE to compare the two estimators, since it takes both accuracy and precision into account: a smaller MSE means the sampling distribution of the corresponding estimator is more concentrated near the parameter. Here, after some algebra,

$$
\frac{\mathrm{MSE}(T_1)}{\mathrm{MSE}(T_2)} = 1 + \frac{c_v^2}{2} > 1,
\quad\text{i.e.}\quad \mathrm{MSE}(T_1) > \mathrm{MSE}(T_2).
$$

## Question 2

(a) For $y > 0$, by a transformation, the cdf of $Y$ is

$$
F_Y(y \mid \mu, \sigma) = P(|Z| \le y) = P(-y \le Z \le y)
= \int_{-y}^{y} \psi(\mu, \sigma \mid t)\,dt
= \int_{-y}^{0} \psi(\mu, \sigma \mid t)\,dt + \int_{0}^{y} \psi(\mu, \sigma \mid t)\,dt
= \int_{0}^{y} \psi(-\mu, \sigma \mid t)\,dt + \int_{0}^{y} \psi(\mu, \sigma \mid t)\,dt,
$$

where the substitution $t \mapsto -t$ gives the first term. Differentiating once with respect to $y$ proves the conclusion.

(b) i. $l_z(\sigma) = \psi(0, \sigma \mid z)$, and $l^*_y(\sigma) = \psi(0, \sigma \mid y) + \psi(0, \sigma \mid -y) = 2\psi(0, \sigma \mid y)$.
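As a quick numerical check of (a) and (b)i: when $\mu = 0$, the density of $Y = |Z|$ is $2\psi(0, \sigma \mid y)$, so $P(Y \le y_0) = 2\Phi(y_0/\sigma) - 1 = \mathrm{erf}(y_0/(\sigma\sqrt{2}))$. A minimal simulation sketch, where the values $\sigma = 2$ and $y_0 = 1.5$ are illustrative assumptions, not taken from the assignment:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter choices (not from the assignment).
sigma = 2.0      # standard deviation of Z ~ N(0, sigma^2)
y0 = 1.5         # point at which to compare the two cdfs

# Simulate Z and fold it to get Y = |Z|.
z = rng.normal(0.0, sigma, size=500_000)
y = np.abs(z)

# From (a)/(b)i, the density of Y is 2*psi(0, sigma | y) for y > 0,
# so P(Y <= y0) = 2*Phi(y0/sigma) - 1 = erf(y0 / (sigma * sqrt(2))).
empirical = np.mean(y <= y0)
theoretical = math.erf(y0 / (sigma * math.sqrt(2.0)))

print(empirical, theoretical)  # the two should agree to roughly three decimals
```

With 500,000 draws the Monte Carlo error is on the order of $10^{-3}$, so the empirical and theoretical cdf values match closely.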
ii. Up to an additive constant,

$$
\log l_z(\sigma) = -\log\sigma - \frac{z^2}{2\sigma^2}, \qquad
\frac{d \log l_z(\sigma)}{d\sigma} = -\frac{1}{\sigma} + \frac{z^2}{\sigma^3}, \qquad
\frac{d^2 \log l_z(\sigma)}{d\sigma^2} = \frac{1}{\sigma^2} - \frac{3z^2}{\sigma^4},
$$

so, using $E(Z^2) = \sigma^2$,

$$
-E\left[\frac{d^2 \log l_z(\sigma)}{d\sigma^2}\right] = -\frac{1}{\sigma^2} + \frac{3\sigma^2}{\sigma^4} = \frac{2}{\sigma^2}.
$$

iii. It is the same as in (ii), since $l^*_y(\sigma) = 2\,l_z(\sigma)$ differs from $l_z(\sigma)$ only by a constant factor. There is no information loss.

iv. Yes. By the factorization criterion $Y$ is sufficient for $\sigma$, since $\psi(0, \sigma \mid z)$ depends on $z$ only through $z^2 = y^2$.

(c) i. Substituting $\sigma = 1$ into (a), we get the conclusion.

ii. Up to an additive constant, $\log\psi(\mu, 1 \mid z) = -(z - \mu)^2/2$, so $d \log\psi(\mu, 1 \mid z)/d\mu = z - \mu$, and the Fisher information contained in $Z$ is

$$
i_1(\mu) = E\left[\left(\frac{d \log\psi(\mu, 1 \mid Z)}{d\mu}\right)^2\right] = \mathrm{Var}(Z) = 1.
$$

iii.

$$
\log l^{**}_y(\mu) = \log\left[\exp\left(-\frac{(y - \mu)^2}{2}\right) + \exp\left(-\frac{(y + \mu)^2}{2}\right)\right],
$$

$$
\frac{d \log l^{**}_y}{d\mu}
= \frac{\exp\left(-\frac{(y - \mu)^2}{2}\right)(y - \mu) - \exp\left(-\frac{(y + \mu)^2}{2}\right)(y + \mu)}
       {\exp\left(-\frac{(y - \mu)^2}{2}\right) + \exp\left(-\frac{(y + \mu)^2}{2}\right)}
= \frac{e^{\mu y}(y - \mu) - e^{-\mu y}(y + \mu)}{e^{\mu y} + e^{-\mu y}},
$$

$$
\frac{d^2 \log l^{**}_y}{d\mu^2}
= \frac{4y^2 - (e^{\mu y} + e^{-\mu y})^2}{(e^{\mu y} + e^{-\mu y})^2}
= \frac{4y^2}{(e^{\mu y} + e^{-\mu y})^2} - 1.
$$

The Fisher information contained in $Y$ is therefore

$$
i_2(\mu) = 1 - 4E\left[\{e^{\mu Y} + e^{-\mu Y}\}^{-2}\,Y^2\right] < 1 = i_1(\mu),
$$

so some information about $\mu$ is lost in passing from $Z$ to $Y = |Z|$.
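The inequality $i_2(\mu) < 1$ in (c)iii can be checked by Monte Carlo, estimating $4E[Y^2/(e^{\mu Y} + e^{-\mu Y})^2]$ directly. A sketch, where $\mu = 1.5$ is an illustrative assumption; note that $e^{\mu y} + e^{-\mu y} = 2\cosh(\mu y)$:

```python
import numpy as np

rng = np.random.default_rng(0)

mu = 1.5                 # illustrative value of the unknown mean (assumption)
n = 400_000

# Simulate Z ~ N(mu, 1) and fold it to get the observed Y = |Z|.
z = rng.normal(mu, 1.0, size=n)
y = np.abs(z)

# From (c)iii: i_2(mu) = 1 - 4 E[ Y^2 / (e^{mu Y} + e^{-mu Y})^2 ].
# Writing the denominator as (2*cosh(mu*y))^2 keeps the computation stable.
loss = 4.0 * np.mean(y**2 / (2.0 * np.cosh(mu * y)) ** 2)
i2 = 1.0 - loss

print(i2)  # strictly below i_1(mu) = 1: observing |Z| loses information about mu
```

The estimated `loss` term is strictly positive for any $\mu$, which is exactly why $i_2(\mu) < i_1(\mu) = 1$: folding $Z$ into $|Z|$ discards the sign, which carries information about $\mu$.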