# Homework 8: Partial Solutions


Chapter 10, Problem 1.

We differentiate the log-likelihood, set the derivative to zero, and hence solve the likelihood equation; the likelihood is maximized by the resulting estimate, which gives the mle.

Chapter 10, Problem 2.

The likelihood is largest at the biggest observation consistent with the model, so the mle is the maximum data point X_(n). Comparing mean squared errors: by simulation, the mle has MSE about .015, substantially smaller than the nonparametric plug-in estimator.

Chapter 10, Problem 3.

The mle is computed from the likelihood. The asymptotic standard error is estimated by plugging the mle into the asymptotic formula, and an approximate confidence interval is the mle plus or minus two estimated standard errors.

Chapter 10, Problem 4.

The mle is found by solving the likelihood equation. The estimated standard error of the mle then gives an approximate confidence interval.
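The simulation comparison claimed in Problem 2 can be sketched in code. This is a Python sketch (the solutions themselves use R), and it assumes, as the "maximum data point" mle suggests, a Uniform(0, theta) model with illustrative values of n and theta rather than the ones in the problem; the plug-in estimator is taken to be 2 times the sample mean.

```python
import numpy as np

# Hypothetical Monte Carlo comparison of the mle (the sample maximum)
# with a plug-in estimator, assuming a Uniform(0, theta) model.
# n and theta are illustrative choices, not values from the problem.
rng = np.random.default_rng(0)
theta, n, reps = 1.0, 50, 20000

x = rng.uniform(0.0, theta, size=(reps, n))
mle = x.max(axis=1)            # the maximum data point X_(n)
plugin = 2.0 * x.mean(axis=1)  # plug-in estimator 2 * sample mean

mse_mle = np.mean((mle - theta) ** 2)
mse_plugin = np.mean((plugin - theta) ** 2)
# For Uniform(0, theta), theory gives MSE(X_(n)) = 2 theta^2 / ((n+1)(n+2))
# versus MSE(2 Xbar) = theta^2 / (3n), so the mle's MSE is far smaller.
```

Under this assumed model the mle's MSE shrinks like 1/n^2 while the plug-in's shrinks like 1/n, which is consistent with the "substantially smaller" MSE reported in the solution.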
Chapter 10, Problem 5.

The likelihood is the product of the densities, so the log-likelihood is their sum. The mle is obtained by setting the derivative of the log-likelihood to zero and solving. Consistency follows from the weak law of large numbers.

Chapter 10, Problem 6.

(a)-(c) The distribution has a mean that identifies the parameter, so the method of moments estimator is obtained by matching the sample mean to the theoretical mean and solving.
(d) By the delta method, the estimated standard error of the resulting estimator follows from the derivative of the parameter map and the standard error of the sample mean.

Chapter 10, Problem 7.

(a) The likelihood is the product of two binomial likelihoods, and setting its gradient to zero gives the mles p1.hat = X1/n1 and p2.hat = X2/n2; the mle of tau = p1 - p2 is tau.hat = p1.hat - p2.hat.
(b) The Fisher information matrix is computed from the matrix of second derivatives of the log-likelihood.
(c) The estimated standard error of tau.hat, and an approximate 95 per cent confidence interval, follow from the Fisher information.
(d) The bootstrap code is:

```r
B <- 10000
tau.boot <- rep(0, B)
for (i in 1:B) {
  xx1 <- rbinom(1, n1, p1.hat)
  xx2 <- rbinom(1, n2, p2.hat)
  tau.boot[i] <- (xx1 / n1) - (xx2 / n2)
}
```

(e) By the law of large numbers, the estimator converges in probability to a limit other than the true value, so the mle is inconsistent; on the other hand, the alternative estimator converges in probability to the true value and is still consistent. The ARE is the ratio of the asymptotic variances of the two estimators.
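The R bootstrap for tau can be mirrored in Python. The sketch below is not the course's code: the sample sizes n1, n2 and the estimates p1_hat, p2_hat are hypothetical stand-ins (the original R snippet leaves them defined elsewhere), and the standard-error and percentile-interval lines at the end are a common follow-up, not part of the original.

```python
import numpy as np

# Python sketch of the parametric bootstrap for tau = p1 - p2.
# n1, n2, p1_hat, p2_hat are hypothetical illustration values.
rng = np.random.default_rng(1)
n1, n2 = 200, 200
p1_hat, p2_hat = 0.8, 0.74

B = 10000
xx1 = rng.binomial(n1, p1_hat, size=B)  # resample successes at the mle
xx2 = rng.binomial(n2, p2_hat, size=B)
tau_boot = xx1 / n1 - xx2 / n2          # bootstrap replications of tau.hat

se_boot = tau_boot.std(ddof=1)          # bootstrap standard error
ci = (np.quantile(tau_boot, 0.025), np.quantile(tau_boot, 0.975))
```

Drawing from Binomial(n_j, p_hat_j) rather than resampling the data is what makes this a parametric bootstrap: the fitted model, not the empirical distribution, generates the replications.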

*This note was uploaded on 05/25/2008 for the course STAT 36-625, taught by Professor Larry Wasserman during the Fall '02 term at Carnegie Mellon.*
