PROF. HONG, FALL 2006
ECONOMICS 619 FINAL EXAM

Notes: (1) This is a closed book/notes exam. There are 7 questions, with a total of 100 points; (2) you have 150 minutes; (3) suggestion: have a look at all problems and first solve the problems you feel are easiest; (4) good luck!

1. [15 pts] Suppose $Y = \alpha + \beta X + |X|\varepsilon$, where $E(X) = 0$, $\mathrm{var}(X) = \sigma_X^2$, $E(\varepsilon) = 0$, $\mathrm{var}(\varepsilon) = \sigma_\varepsilon^2$, and $\varepsilon$ and $X$ are independent. Both $\alpha$ and $\beta$ are constants.

(a) [5 pts] Find $E(Y|X)$.
(b) [5 pts] Find $\mathrm{var}(Y|X)$.
(c) [5 pts] Show $\mathrm{cov}(Y, X) = 0$ if and only if $\beta = 0$.

ANS:
(a) $E(Y|X) = E[\alpha + \beta X + |X|\varepsilon \mid X] = \alpha + \beta X + |X|\,E(\varepsilon|X) = \alpha + \beta X$.
(b) $\mathrm{var}(Y|X) = \mathrm{var}(\alpha + \beta X + |X|\varepsilon \mid X) = \mathrm{var}(|X|\varepsilon \mid X) = X^2 \sigma_\varepsilon^2$.
(c)
$\mathrm{cov}(Y, X) = \mathrm{cov}(\alpha + \beta X + |X|\varepsilon,\, X) = \beta\,\mathrm{cov}(X, X) + \mathrm{cov}(|X|\varepsilon, X)$
$= \beta\,\mathrm{cov}(X, X) + E[(|X|\varepsilon - E(|X|\varepsilon))X] = \beta\,\mathrm{cov}(X, X) + E(X|X|)E(\varepsilon)$ (by independence of $\varepsilon$ and $X$)
$= \beta \sigma_X^2 = 0$ if and only if $\beta = 0$.

Grading policy: 5 points each. Correct formula gets 1 point.

2. [10 pts] Suppose $\{X_i\}_{i=1}^n$ is an i.i.d. $N(\mu, \sigma^2)$ random sample, where both $\mu$ and $\sigma^2$ are unknown parameters. Construct an unbiased estimator for $\theta = \mu^2$, and justify that it is unbiased.

ANS: A possible answer: let
$\hat\theta = X_1^2 - S_n^2,$
where $S_n^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X_n)^2$ and $\bar X_n = \frac{1}{n}\sum_{i=1}^n X_i$. We can check unbiasedness by
$E(\hat\theta) = E(X_1^2) - E(S_n^2) = (\mu^2 + \sigma^2) - \sigma^2 = \mu^2 = \theta.$
(Remark: there are many solutions for this construction; e.g., we could also let $\hat\theta = \frac{1}{n}\sum_{i=1}^n X_i^2 - S_n^2$.)

Grading policy: knowing the concept of unbiasedness 2 points. Correct construction 8 points.

3. [12 pts]
(a) [5 pts] Suppose $X^n = \{X_i\}_{i=1}^n$ is an i.i.d. random sample with population probability density $f(x; \theta)$. Is $X^n$ a sufficient statistic for $\theta$? Give your reasoning.
(b) [7 pts] Suppose $X^n = \{X_i\}_{i=1}^n$ is an i.i.d. random sample from a $N(\theta, \theta)$ population, where $\theta$ is unknown. Find a ONE-DIMENSIONAL sufficient statistic for $\theta$. Give your reasoning.
[Note: the pdf of a $N(\mu, \sigma^2)$ is $\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$.]

ANS:
(a) Yes, since
$f(X^n; \theta) = \prod_{i=1}^n f(x_i; \theta) = g(X^n; \theta)\, h(X^n),$
where $g(X^n; \theta) = \prod_{i=1}^n f(x_i; \theta)$ and $h(X^n) = 1$; by the factorization theorem, $X^n$ is sufficient for $\theta$.

Grading policy: knowing the factorization theorem 1 point.

(b)
$\prod_{i=1}^n f(x_i; \theta) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi\theta}} \exp\left(-\frac{(x_i - \theta)^2}{2\theta}\right)$
$= \left(\frac{1}{\sqrt{2\pi\theta}}\right)^n \exp\left(-\frac{\sum_{i=1}^n (x_i - \theta)^2}{2\theta}\right)$
$= \left(\frac{1}{\sqrt{2\pi\theta}}\right)^n \exp\left(-\frac{\sum_{i=1}^n x_i^2 - 2\theta\sum_{i=1}^n x_i + n\theta^2}{2\theta}\right)$
$= \left(\frac{1}{\sqrt{2\pi\theta}}\right)^n \exp\left(-\frac{\sum_{i=1}^n x_i^2}{2\theta} - \frac{n\theta}{2}\right) \exp\left(\sum_{i=1}^n x_i\right)$
$= g\left(\sum_{i=1}^n x_i^2;\, \theta\right) h(x^n),$
where $g\left(\sum_{i=1}^n x_i^2; \theta\right) = \left(\frac{1}{\sqrt{2\pi\theta}}\right)^n \exp\left(-\frac{\sum_{i=1}^n x_i^2}{2\theta} - \frac{n\theta}{2}\right)$ and $h(x^n) = \exp\left(\sum_{i=1}^n x_i\right)$.
Thus $\sum_{i=1}^n X_i^2$ is a sufficient statistic for $\theta$.

Grading policy: knowing the factorization theorem 1 point. Correct joint pdf 1 point. Correct separation 2 points. Correct construction 3 points.
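The construction in Question 2 can be sanity-checked numerically. A minimal Monte Carlo sketch (the values $\mu = 2$, $\sigma = 1.5$, the sample size, and the replication count are illustrative assumptions, not from the exam):

```python
import numpy as np

# Monte Carlo check that theta_hat = X_1^2 - S_n^2 is unbiased for mu^2.
# mu, sigma, n, and reps are hypothetical illustrative values.
rng = np.random.default_rng(0)
mu, sigma, n, reps = 2.0, 1.5, 10, 200_000

x = rng.normal(mu, sigma, size=(reps, n))
s2 = x.var(axis=1, ddof=1)        # S_n^2: the (n-1)-denominator sample variance
theta_hat = x[:, 0] ** 2 - s2     # one draw of the estimator per replication
print(theta_hat.mean())           # should be close to mu^2 = 4
```

The `ddof=1` argument makes `np.var` use the $(n-1)$ denominator, matching the $S_n^2$ in the answer.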
4. [20 pts] Suppose $\{X_i\}_{i=1}^n$ are i.i.d. $N(0, \sigma^2)$. There are two estimators for $\sigma^2$:
$\hat\sigma_1^2 = \frac{1}{n}\sum_{i=1}^n X_i^2, \qquad \hat\sigma_2^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar X)^2,$
where $\bar X = n^{-1}\sum_{i=1}^n X_i$.
(a) [5 pts] Check whether $\hat\sigma_1^2$ and $\hat\sigma_2^2$ are unbiased for $\sigma^2$. Give your reasoning.
(b) [15 pts] Which estimator, $\hat\sigma_1^2$ or $\hat\sigma_2^2$, is more efficient in terms of mean squared error? Give your reasoning.

ANS:
(a)
$E(\hat\sigma_1^2) = E\left(\frac{1}{n}\sum_{i=1}^n X_i^2\right) = \frac{1}{n}\sum_{i=1}^n E(X_i^2) = \frac{1}{n}\sum_{i=1}^n (\sigma^2 + 0) = \sigma^2,$
$E(\hat\sigma_2^2) = \frac{n-1}{n} E(S_n^2) = \frac{n-1}{n}\sigma^2,$
where $S_n^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2$ and $E(S_n^2) = \sigma^2$.
Thus $\hat\sigma_1^2$ is unbiased but $\hat\sigma_2^2$ is not.

Grading policy: knowing the concept of unbiasedness 1 point. Correct answer 2 points each.
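The bias computation in part (a) can be illustrated by simulation ($\sigma = 2$ and $n = 5$ are arbitrary choices that make the factor $(n-1)/n$ visible):

```python
import numpy as np

# Estimate E(sigma1_hat^2) and E(sigma2_hat^2) by simulation; sigma, n, and
# reps are hypothetical values, not from the exam.
rng = np.random.default_rng(1)
sigma, n, reps = 2.0, 5, 400_000

x = rng.normal(0.0, sigma, size=(reps, n))
sig1 = (x ** 2).mean(axis=1)      # (1/n) sum X_i^2
sig2 = x.var(axis=1, ddof=0)      # (1/n) sum (X_i - Xbar)^2
print(sig1.mean())                # close to sigma^2 = 4
print(sig2.mean())                # close to (n-1)/n * sigma^2 = 3.2
```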
(b)
$\mathrm{MSE}(\hat\sigma_1^2) = \mathrm{var}(\hat\sigma_1^2) + (E(\hat\sigma_1^2) - \sigma^2)^2 = \mathrm{var}(\hat\sigma_1^2)$
$= \mathrm{var}\left(\frac{1}{n}\sum_{i=1}^n X_i^2\right) = \frac{1}{n^2}\sum_{i=1}^n \mathrm{var}(X_i^2) = \frac{1}{n^2}\sum_{i=1}^n \left[E(X_i^4) - E(X_i^2)^2\right]$
$= \frac{1}{n^2}\sum_{i=1}^n (3\sigma^4 - \sigma^4) = \frac{2\sigma^4}{n}.$

$\mathrm{MSE}(\hat\sigma_2^2) = \mathrm{var}(\hat\sigma_2^2) + (E(\hat\sigma_2^2) - \sigma^2)^2 = \mathrm{var}(\hat\sigma_2^2) + \frac{\sigma^4}{n^2}.$
Since
$\frac{n\hat\sigma_2^2}{\sigma^2} = \frac{(n-1)S_n^2}{\sigma^2} \sim \chi^2(n-1),$
we have
$\mathrm{var}\left(\frac{n\hat\sigma_2^2}{\sigma^2}\right) = 2(n-1) \;\Rightarrow\; \mathrm{var}(\hat\sigma_2^2) = \frac{2(n-1)\sigma^4}{n^2}.$
Thus
$\mathrm{MSE}(\hat\sigma_2^2) = \frac{2(n-1)\sigma^4}{n^2} + \frac{\sigma^4}{n^2} = \frac{(2n-1)\sigma^4}{n^2} < \frac{2\sigma^4}{n} = \mathrm{MSE}(\hat\sigma_1^2).$
So $\hat\sigma_2^2$ is more efficient in terms of MSE.

Grading policy: right MSE formula 2 points. Getting $\mathrm{MSE}(\hat\sigma_1^2)$ correctly 6 points, and $\mathrm{MSE}(\hat\sigma_2^2)$ correctly 7 points.
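The MSE comparison can be checked against the two closed-form expressions by simulation ($n = 8$, $\sigma = 1$, and the replication count are illustrative choices):

```python
import numpy as np

# Compare simulated MSEs with the derived formulas
#   MSE(sigma1_hat^2) = 2 sigma^4 / n,   MSE(sigma2_hat^2) = (2n-1) sigma^4 / n^2.
rng = np.random.default_rng(2)
sigma, n, reps = 1.0, 8, 500_000

x = rng.normal(0.0, sigma, size=(reps, n))
sig1 = (x ** 2).mean(axis=1)                     # (1/n) sum X_i^2
sig2 = x.var(axis=1, ddof=0)                     # (1/n) sum (X_i - Xbar)^2
mse1 = ((sig1 - sigma ** 2) ** 2).mean()
mse2 = ((sig2 - sigma ** 2) ** 2).mean()
print(mse1, 2 * sigma ** 4 / n)                  # simulated vs formula
print(mse2, (2 * n - 1) * sigma ** 4 / n ** 2)   # simulated vs formula
```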
5. [10 pts] Suppose a sequence of random variables $\{Z_n\}$ is defined by
$P(Z_n = n) = \frac{1}{n}, \qquad P(Z_n = 0) = 1 - \frac{1}{n}.$
(a) [4 pts] Does $Z_n$ converge in mean square to 0? Give your reasoning clearly.
(b) [6 pts] Does $Z_n$ converge in probability to 0? Give your reasoning clearly.

ANS:
(a) $E(Z_n - 0)^2 = E(Z_n^2) = 0^2\left(1 - \frac{1}{n}\right) + n^2 \cdot \frac{1}{n} = n \to \infty$. Thus $Z_n$ does not converge to zero in mean square.
(b) Take any $\varepsilon > 0$; for all $n > \varepsilon$, the event $\{|Z_n - 0| > \varepsilon\}$ is exactly $\{Z_n = n\}$, so
$\lim_{n\to\infty} \Pr(|Z_n - 0| > \varepsilon) = \lim_{n\to\infty} \Pr(Z_n = n) = \lim_{n\to\infty} \frac{1}{n} = 0.$
Thus $Z_n$ converges to zero in probability.

Grading policy: correct formula 1 point.
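Both parts can be illustrated numerically for this two-point distribution ($n = 50$ and the replication count are arbitrary illustrative choices):

```python
import numpy as np

# Z_n = n with probability 1/n, and 0 otherwise: the second moment is large
# (E(Z_n^2) = n), while the exceedance probability P(|Z_n| > eps) = 1/n is small.
rng = np.random.default_rng(3)
n, reps = 50, 200_000

z = np.where(rng.random(reps) < 1.0 / n, n, 0.0)
second_moment = (z ** 2).mean()     # estimates E(Z_n^2) = n
p_exceed = (z != 0).mean()          # estimates P(|Z_n| > eps) = 1/n
print(second_moment, p_exceed)
```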
6. [13 pts] Let $X^n = \{X_1, X_2, \ldots, X_n\}$ be an independent but not identically distributed random sample with $E(X_i) = \mu$ and $\mathrm{Var}(X_i) = \sigma^2/i^2$, where $i = 1, 2, \ldots, n$. Both $\mu$ and $\sigma^2$ are unknown. Define a class of estimators for $\mu$ as
$\hat\mu = \sum_{i=1}^n c_i X_i.$
(a) [4 pts] Show that $\hat\mu$ is unbiased if and only if $\sum_{i=1}^n c_i = 1$.
(b) [9 pts] Find the most efficient unbiased estimator $\hat\mu$ from the class of $\hat\mu$.
[Hint: $\sum_{i=1}^n i^2 = n(n+1)(2n+1)/6$.]

ANS:
(a) $E(\hat\mu) = \sum_{i=1}^n E(c_i X_i) = \sum_{i=1}^n c_i E(X_i) = \mu \sum_{i=1}^n c_i = \mu$ if and only if $\sum_{i=1}^n c_i = 1$.

Grading policy: correct formula 1 point.

(b)
$\mathrm{var}(\hat\mu) = \sum_{i=1}^n \mathrm{var}(c_i X_i) = \sum_{i=1}^n c_i^2\, \mathrm{var}(X_i) = \sigma^2 \sum_{i=1}^n c_i^2/i^2,$
$\mathrm{MSE}(\hat\mu) = \mathrm{var}(\hat\mu) + (E(\hat\mu) - \mu)^2 = \mathrm{var}(\hat\mu) = \sigma^2 \sum_{i=1}^n c_i^2/i^2$ for unbiased $\hat\mu$.
Now, the most efficient unbiased $\hat\mu$ is found by solving
$\min_{\{c_i\}_{i=1}^n}\; \mathrm{MSE}(\hat\mu) = \sigma^2 \sum_{i=1}^n c_i^2/i^2 \quad \text{s.t.} \quad \sum_{i=1}^n c_i = 1.$
The Lagrange function is
$L = \sigma^2 \sum_{i=1}^n c_i^2/i^2 + \lambda\left(1 - \sum_{i=1}^n c_i\right).$
F.O.C.:
$\frac{\partial L}{\partial c_i} = 2\sigma^2 c_i/i^2 - \lambda = 0 \text{ for all } i, \qquad 1 - \sum_{i=1}^n c_i = 0,$
so $c_i = \lambda i^2/(2\sigma^2)$ and hence
$\sum_{i=1}^n c_i = \frac{\lambda}{2\sigma^2}\sum_{i=1}^n i^2 = 1 \;\Rightarrow\; \lambda = \frac{2\sigma^2}{\sum_{i=1}^n i^2} = \frac{12\sigma^2}{n(n+1)(2n+1)},$
which gives
$c_i = \frac{6 i^2}{n(n+1)(2n+1)}.$
The S.O.C. is satisfied, since we have a convex quadratic objective function and a linear constraint.
Thus the most efficient estimator is $\hat\mu = \sum_{i=1}^n \frac{6 i^2 X_i}{n(n+1)(2n+1)}$.

Grading policy: set up the problem correctly 3 points. FOC 2 points. SOC 1 point. Answer 3 points.
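The optimal weights can be computed and compared with naive equal weighting ($n = 10$ is illustrative; $\sigma^2$ cancels from the comparison, so it is set to 1):

```python
import numpy as np

# Optimal weights c_i = i^2 / sum_j j^2 = 6 i^2 / (n(n+1)(2n+1)), versus the
# equal weights c_i = 1/n; variances are computed with sigma^2 = 1.
n = 10
i = np.arange(1, n + 1)

assert (i ** 2).sum() == n * (n + 1) * (2 * n + 1) // 6   # the hint's identity
c_opt = i ** 2 / (i ** 2).sum()
var_opt = (c_opt ** 2 / i ** 2).sum()      # var(mu_hat) with optimal weights
var_eq = ((1.0 / n) ** 2 / i ** 2).sum()   # var of the equal-weight estimator
print(var_opt, var_eq)
```

Note that `var_opt` equals $1/\sum_{i=1}^n i^2$, confirming the Lagrangian solution is strictly better than equal weighting here.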
7. [20 pts] Suppose $\{X_i\}_{i=1}^n$ is an independent random sample with $X_i \sim N(a_i\theta, \sigma_i^2)$, $i = 1, \ldots, n$, where $a_i$ and $\sigma_i^2$ are known constants that differ across different $i$'s, and $\theta$ is an unknown parameter. Thus, the probability density of $X_i$ is
$f(x_i; \theta) = \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\left(-\frac{(x_i - \theta a_i)^2}{2\sigma_i^2}\right).$
(a) [10 pts] Find the MLE $\hat\theta$ for $\theta$, and check if it is a global maximizer.
(b) [10 pts] Does $\hat\theta$ achieve the Cramer-Rao lower bound? Give your reasoning.

ANS:
(a) The log-likelihood function is
$l(x_1, x_2, \ldots, x_n \mid \theta) = \log \prod_{i=1}^n f(x_i; \theta) = \sum_{i=1}^n \log \frac{1}{\sqrt{2\pi\sigma_i^2}} - \sum_{i=1}^n \frac{(x_i - \theta a_i)^2}{2\sigma_i^2}.$
So
$\max_\theta\, l(x_1, x_2, \ldots, x_n \mid \theta) \iff \min_\theta \sum_{i=1}^n \frac{(x_i - \theta a_i)^2}{2\sigma_i^2}.$
F.O.C.:
$\sum_{i=1}^n \frac{(x_i - \hat\theta a_i)a_i}{\sigma_i^2} = 0.$
The solution is
$\hat\theta = \frac{\sum_{i=1}^n x_i a_i/\sigma_i^2}{\sum_{i=1}^n a_i^2/\sigma_i^2}.$
To check whether it is a global maximum, we check the S.O.C.:
$\frac{\partial^2 l(x_1, x_2, \ldots, x_n \mid \theta)}{\partial \theta^2} = -\sum_{i=1}^n \frac{a_i^2}{\sigma_i^2} < 0,$
which ensures concavity. Thus the log-likelihood function is concave and $\hat\theta$ is the global maximum.

Grading policy: set up the problem correctly 3 points. FOC 2 points. SOC 2 points. Answer 3 points.
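The closed-form MLE is a weighted least-squares coefficient; a sketch comparing it with a brute-force grid maximization of the log-likelihood (the vectors `a` and `s2`, the true $\theta$, and the grid are hypothetical illustrative choices):

```python
import numpy as np

# theta_hat = sum(a_i x_i / s2_i) / sum(a_i^2 / s2_i), checked against a
# grid maximization of l(theta); a, s2, and theta_true are illustrative.
rng = np.random.default_rng(4)
a = np.array([1.0, 2.0, 0.5, 1.5, 3.0, 1.0])
s2 = np.array([1.0, 0.5, 2.0, 1.0, 0.25, 4.0])   # known sigma_i^2
theta_true = 1.3
x = rng.normal(a * theta_true, np.sqrt(s2))      # one draw of the sample

theta_hat = (a * x / s2).sum() / (a ** 2 / s2).sum()

grid = np.linspace(theta_hat - 2.0, theta_hat + 2.0, 100_001)
resid = x[:, None] - grid[None, :] * a[:, None]
loglik = -(resid ** 2 / (2.0 * s2[:, None])).sum(axis=0)  # up to a constant
theta_grid = grid[loglik.argmax()]
print(theta_hat, theta_grid)      # the two agree to grid precision
```

Because the objective is a concave quadratic in $\theta$, the grid maximizer can only miss the closed-form solution by at most one grid step.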
(b) Since $\hat\theta$ is unbiased for $\theta$, the Cramer-Rao lower bound is
$\left[-E\frac{\partial^2 l(x_1, x_2, \ldots, x_n \mid \theta)}{\partial\theta^2}\right]^{-1} = \frac{1}{\sum_{i=1}^n a_i^2/\sigma_i^2},$
and
$\mathrm{Var}(\hat\theta) = \frac{\mathrm{Var}\left(\sum_{i=1}^n x_i a_i/\sigma_i^2\right)}{\left(\sum_{i=1}^n a_i^2/\sigma_i^2\right)^2} = \frac{\sum_{i=1}^n \mathrm{var}(x_i)\, a_i^2/\sigma_i^4}{\left(\sum_{i=1}^n a_i^2/\sigma_i^2\right)^2} = \frac{\sum_{i=1}^n a_i^2/\sigma_i^2}{\left(\sum_{i=1}^n a_i^2/\sigma_i^2\right)^2} = \frac{1}{\sum_{i=1}^n a_i^2/\sigma_i^2}.$
Thus $\hat\theta$ achieves the Cramer-Rao lower bound.

Grading policy: correctly calculating the bound 5 points. Variance 5 points.
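A simulation sketch of the efficiency claim: the sampling variance of $\hat\theta$ matches $1/\sum_{i=1}^n a_i^2/\sigma_i^2$ (the parameter values and replication count are illustrative assumptions):

```python
import numpy as np

# Simulated var(theta_hat) versus the Cramer-Rao bound 1 / sum(a_i^2 / s2_i);
# a, s2, theta, and reps are hypothetical illustrative values.
rng = np.random.default_rng(5)
reps = 200_000
a = np.array([1.0, 2.0, 0.5, 1.5])
s2 = np.array([1.0, 0.5, 2.0, 1.0])
theta = 0.7

x = rng.normal(a * theta, np.sqrt(s2), size=(reps, a.size))
theta_hat = (x * a / s2).sum(axis=1) / (a ** 2 / s2).sum()

crlb = 1.0 / (a ** 2 / s2).sum()
print(theta_hat.var(), crlb)      # the two should be close
```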
This test prep was uploaded on 12/08/2007 for the course ECON 6190 taught by Professor Hong during the Fall '07 term at Cornell University (Engineering School).