Solutions Manual for Statistical Inference

c. From part (a) we get $\hat{\theta} = 1$. From part (b), $x_2 = 1$ implies $z = 0$ which, if we use the second density, gives us $\hat{\theta} = $ .
d. The posterior distributions are just the normalized likelihood times prior, so of course they are different.

7.18 a. The usual first two moment equations for $X$ and $Y$ are
\[
\bar{x} = \mathrm{E}X = \mu_X, \qquad \bar{y} = \mathrm{E}Y = \mu_Y,
\]
\[
\frac{1}{n}\sum_i x_i^2 = \mathrm{E}X^2 = \mu_X^2 + \sigma_X^2, \qquad
\frac{1}{n}\sum_i y_i^2 = \mathrm{E}Y^2 = \mu_Y^2 + \sigma_Y^2.
\]
We also need an equation involving $\rho$:
\[
\frac{1}{n}\sum_i x_i y_i = \mathrm{E}XY = \mathrm{Cov}(X,Y) + (\mathrm{E}X)(\mathrm{E}Y)
= \rho\sigma_X\sigma_Y + \mu_X\mu_Y.
\]
Solving these five equations yields the estimators given. Facts such as
\[
\frac{1}{n}\sum_i x_i^2 - \bar{x}^2
= \frac{\sum_i x_i^2 - \bigl(\sum_i x_i\bigr)^2/n}{n}
= \frac{\sum_i (x_i - \bar{x})^2}{n}
\]
are used.

b. Two answers are provided. First, use the Miscellanea: For
\[
L(\theta \mid x) = h(x)\,c(\theta)\exp\Biggl(\sum_{i=1}^{k} w_i(\theta)\,t_i(x)\Biggr),
\]
the solutions to the $k$ equations
\[
\sum_{j=1}^{n} t_i(x_j) = \mathrm{E}_\theta \sum_{j=1}^{n} t_i(X_j) = n\,\mathrm{E}_\theta\, t_i(X_1),
\qquad i = 1, \ldots, k,
\]
provide the unique MLE for $\theta$. Multiplying out the exponent in the bivariate normal pdf shows it has this exponential family form with $k = 5$ and $t_1(x,y) = x$, $t_2(x,y) = y$, $t_3(x,y) = x^2$, $t_4(x,y) = y^2$ and $t_5(x,y) = xy$. Setting up the method of moments equations, we have
\[
\sum_i x_i = n\mu_X, \qquad \sum_i y_i = n\mu_Y,
\]
\[
\sum_i x_i^2 = n(\sigma_X^2 + \mu_X^2), \qquad
\sum_i y_i^2 = n(\sigma_Y^2 + \mu_Y^2),
\]
\[
\sum_i x_i y_i = n\bigl[\mathrm{Cov}(X,Y) + \mu_X\mu_Y\bigr] = n(\rho\sigma_X\sigma_Y + \mu_X\mu_Y).
\]
These are the same equations as in part (a) if you divide each one by $n$, so the MLEs are the same as the method of moments estimators in part (a).

For the second answer, use the hint in the book to write
\[
L(\theta \mid x, y) = L(\mu_X, \sigma_X^2 \mid x)\,L(\mu_Y, \sigma_Y^2, \rho \mid x, y)
\]
\[
= \underbrace{\bigl(2\pi\sigma_X^2\bigr)^{-n/2}
\exp\Biggl(-\frac{1}{2\sigma_X^2}\sum_i (x_i - \mu_X)^2\Biggr)}_{A}
\times
\underbrace{\bigl(2\pi\sigma_Y^2(1-\rho^2)\bigr)^{-n/2}
\exp\Biggl(-\frac{1}{2\sigma_Y^2(1-\rho^2)}
\sum_i \Bigl[y_i - \Bigl(\mu_Y + \rho\frac{\sigma_Y}{\sigma_X}(x_i - \mu_X)\Bigr)\Bigr]^2\Biggr)}_{B}
\]
...
This note was uploaded on 02/03/2012 for the course STA 1014 taught by Professor Dr.hackney during the Spring '12 term at UNF.