Therefore, the UMVUE is
$$
E\left(T \,\Big|\, \sum_{i=1}^{n+1} X_i = y\right)
= \begin{cases}
0 & \text{if } y = 0 \\[6pt]
\dfrac{\binom{n}{y} p^y (1-p)^{n-y}\,(1-p)}{\binom{n+1}{y} p^y (1-p)^{n-y+1}}
  = \dfrac{\binom{n}{y}}{\binom{n}{y} + \binom{n}{y-1}}
  = \dfrac{n+1-y}{n+1} & \text{if } y = 1 \text{ or } 2 \\[6pt]
1 & \text{if } y > 2,
\end{cases}
$$
where the middle case uses the identity $\binom{n}{y} + \binom{n}{y-1} = \binom{n+1}{y}$.
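As a quick numerical confirmation of the middle-case algebra (my own sketch, not part of the original solution; it only uses scipy.special.comb):

```python
# Verify the Pascal's-identity simplification used above:
# C(n,y) / (C(n,y) + C(n,y-1)) == (n+1-y)/(n+1) for y = 1, 2.
from scipy.special import comb

for n in range(2, 25):
    for y in (1, 2):
        lhs = comb(n, y) / (comb(n, y) + comb(n, y - 1))
        rhs = (n + 1 - y) / (n + 1)
        assert abs(lhs - rhs) < 1e-12, (n, y, lhs, rhs)
print("middle-case simplification checked for n = 2, ..., 24")
```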
7.59 We know $T = (n-1)S^2/\sigma^2 \sim \chi^2_{n-1}$. Then
$$
E\,T^{p/2}
= \frac{1}{\Gamma\left(\frac{n-1}{2}\right) 2^{(n-1)/2}}
  \int_0^\infty t^{\frac{p+n-1}{2}-1} e^{-t/2}\,dt
= \frac{2^{p/2}\,\Gamma\left(\frac{p+n-1}{2}\right)}{\Gamma\left(\frac{n-1}{2}\right)}
= C_{p,n}.
$$
Thus $E\left[\left((n-1)S^2/\sigma^2\right)^{p/2}\right] = C_{p,n}$, so $(n-1)^{p/2} S^p / C_{p,n}$ is an unbiased estimator of $\sigma^p$. From Theorem 6.2.25, $(\bar{X}, S^2)$ is a complete, sufficient statistic. The unbiased estimator $(n-1)^{p/2} S^p / C_{p,n}$ is a function of $(\bar{X}, S^2)$. Hence, it is the best unbiased estimator.
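A Monte Carlo sanity check of the unbiasedness claim (my own sketch; the choices n = 10, p = 3, and sigma = 2 are arbitrary):

```python
# Monte Carlo check that (n-1)^(p/2) * S^p / C_{p,n} is unbiased for sigma^p,
# where C_{p,n} = 2^(p/2) * Gamma((p+n-1)/2) / Gamma((n-1)/2).
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(0)
n, p, sigma = 10, 3, 2.0
C = 2 ** (p / 2) * gamma((p + n - 1) / 2) / gamma((n - 1) / 2)

x = rng.normal(0.0, sigma, size=(200_000, n))
s = x.std(axis=1, ddof=1)                 # sample standard deviation S
print((n - 1) ** (p / 2) * np.mean(s ** p) / C, sigma ** p)  # both ~ 8.0
```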
7.61 The pdf of $Y \sim \chi^2_\nu$ is
$$
f(y) = \frac{1}{\Gamma(\nu/2)\,2^{\nu/2}}\, y^{\nu/2-1} e^{-y/2}.
$$
Thus the pdf of $S^2 = \sigma^2 Y/\nu$ is
$$
g(s^2) = \frac{\nu}{\sigma^2}\,\frac{1}{\Gamma(\nu/2)\,2^{\nu/2}}
         \left(\frac{\nu s^2}{\sigma^2}\right)^{\nu/2-1} e^{-\nu s^2/(2\sigma^2)}.
$$
Thus, the log-likelihood has the form (gathering together constants that do not depend on $s^2$ or $\sigma^2$)
$$
\log L(\sigma^2 \mid s^2)
= \log\frac{1}{\sigma^2} + K \log\frac{s^2}{\sigma^2} - K'\,\frac{s^2}{\sigma^2} + K'',
$$
where $K > 0$ and $K' > 0$. The loss function in Example 7.3.27 is
$$
L(\sigma^2, a) = \frac{a}{\sigma^2} - \log\frac{a}{\sigma^2} - 1,
$$
so the loss of an estimator is the negative of its likelihood.
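To make the likelihood-loss correspondence concrete, here is a numerical check (my own sketch; nu = 8 and s2 = 1.7 are arbitrary, and the scale factor nu/2, which is the constant K' above, is my identification rather than something stated in the solution):

```python
# Check that -log g(s^2 | sigma^2), viewed as a function of sigma^2, equals
# (nu/2) * L(sigma^2, s^2) plus a constant not involving sigma^2, where
# L(sigma^2, a) = a/sigma^2 - log(a/sigma^2) - 1 is the loss above.
import numpy as np
from scipy.stats import chi2

nu, s2 = 8, 1.7

def neg_log_g(sig2):
    # S^2 = sigma^2 * Y / nu, Y ~ chi^2_nu: change of variables on the chi^2 pdf
    return -(chi2.logpdf(nu * s2 / sig2, df=nu) + np.log(nu / sig2))

def stein_loss(sig2):
    return s2 / sig2 - np.log(s2 / sig2) - 1

grid = np.array([0.5, 1.0, 1.7, 3.0, 10.0])
diff = neg_log_g(grid) - (nu / 2) * stein_loss(grid)
print(np.ptp(diff))   # spread ~ 0: the difference is constant in sigma^2
```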
7.63 Let $a = \tau^2/(\tau^2+1)$, so the Bayes estimator is $\delta^\pi(x) = ax$. Then
$$
R(\theta, \delta^\pi) = (a-1)^2\theta^2 + a^2.
$$
As $\tau^2$ increases, $R(\theta, \delta^\pi)$ becomes flatter. (A simulation check of this risk formula appears below, after 7.65.)

7.65 a. Figure omitted.
b. The posterior expected loss is
$$
E\left(L(\theta, a) \mid x\right)
= e^{ca}\,E\left(e^{-c\theta} \mid x\right) - c\,E(a - \theta \mid x) - 1,
$$
where the expectation is with respect to $\pi(\theta \mid x)$. Then
$$
\frac{d}{da}\,E\left(L(\theta, a) \mid x\right)
= c\,e^{ca}\,E\left(e^{-c\theta} \mid x\right) - c \stackrel{\text{set}}{=} 0,
$$
and $a = -\frac{1}{c}\log E\left(e^{-c\theta} \mid x\right)$ is the solution. The second derivative is positive, so this is the minimum.
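For 7.63, the risk formula is easy to verify by simulation (a sketch assuming $X \sim n(\theta, 1)$, the unit-variance model this risk presupposes; tau2 = 2 and theta = 1.5 are arbitrary):

```python
# Monte Carlo check of R(theta, delta^pi) = (a-1)^2 * theta^2 + a^2
# for delta^pi(x) = a*x with a = tau^2 / (tau^2 + 1), X ~ n(theta, 1).
import numpy as np

rng = np.random.default_rng(1)
tau2, theta = 2.0, 1.5
a = tau2 / (tau2 + 1)

x = rng.normal(theta, 1.0, size=1_000_000)
print(np.mean((a * x - theta) ** 2),          # simulated risk
      (a - 1) ** 2 * theta ** 2 + a ** 2)     # formula; both ~ 0.694
```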
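Likewise for 7.65(b), the closed-form minimizer can be checked against direct numerical minimization (a sketch; the normal posterior is only a stand-in, since the result holds for any $\pi(\theta \mid x)$ with a finite mgf):

```python
# Check that a = -(1/c) * log E(e^{-c*theta} | x) minimizes the posterior
# expected LINEX loss E(e^{c(a-theta)} - c(a-theta) - 1 | x).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
c, mu, sd = 0.7, 1.0, 1.3
theta = rng.normal(mu, sd, size=500_000)       # stand-in draws from pi(theta|x)

def post_loss(a):
    return np.mean(np.exp(c * (a - theta)) - c * (a - theta) - 1)

a_formula = -np.log(np.mean(np.exp(-c * theta))) / c
a_numeric = minimize_scalar(post_loss, bounds=(-5, 5), method="bounded").x
print(a_formula, a_numeric, mu - c * sd ** 2 / 2)   # all three should agree
```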