Problem Set 3 Solutions
Econ 120c / Winter 2008
Due date: Wednesday, February 13, 2008

1 Exercises

1. Let Y be a binary dependent variable with explanatory variables X_1, X_2, ..., X_k.

(a) Show that E(Y | X_1, X_2, ..., X_k) = Pr(Y = 1 | X_1, X_2, ..., X_k).

Hint: Recall that a Bernoulli random variable is one that can assume only the values 1 or 0. If Y is a Bernoulli random variable and Pr(Y = 1) = p, then Pr(Y = 0) = 1 - p, so

    E(Y) = 1 * p + 0 * (1 - p) = p.

Because Y is a binary variable, it takes on only the values 0 and 1. So,

    E(Y | X_1, X_2, ..., X_k) = 1 * Pr(Y = 1 | X_1, X_2, ..., X_k) + 0 * Pr(Y = 0 | X_1, X_2, ..., X_k)
                             = Pr(Y = 1 | X_1, X_2, ..., X_k).

(b) What is E(Y^2 | X_1, X_2, ..., X_k)?

Note that Y^2 takes on the value 1^2 = 1 with probability Pr(Y = 1 | X_1, X_2, ..., X_k) and the value 0^2 = 0 with probability Pr(Y = 0 | X_1, X_2, ..., X_k). So Y^2 has the same distribution as Y. Thus,

    E(Y^2 | X_1, X_2, ..., X_k) = Pr(Y = 1 | X_1, X_2, ..., X_k).

(c) Use the results of the above calculations to find the conditional variance of the binary dependent variable, Var(Y | X_1, X_2, ..., X_k).

Recall that for any random variable Z, Var(Z) = E(Z^2) - [E(Z)]^2. Let p = Pr(Y = 1 | X_1, X_2, ..., X_k). Then, from (a) and (b),

    Var(Y | X_1, X_2, ..., X_k) = E(Y^2 | X_1, X_2, ..., X_k) - [E(Y | X_1, X_2, ..., X_k)]^2
                                = p - p^2 = p(1 - p).

(d) For what value of p = Pr(Y = 1 | X_1, X_2, ..., X_k) is the conditional variance the largest? For what values of p is the variance the smallest?

The value of p that maximizes the conditional variance is not obvious, so we use calculus to derive it. The first derivative of the conditional variance with respect to p equals 1 - 2p, and 1 - 2p = 0 is solved at p = 1/2.
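As a quick numerical check (an illustrative Python sketch, not part of the original solutions), the conditional variance p(1 - p) can be evaluated on a grid of p values to confirm it peaks at p = 1/2 and vanishes at the endpoints:

```python
# Numerical check that Var(Y | X_1, ..., X_k) = p(1 - p) is maximized
# at p = 1/2 and equals 0 at p = 0 and p = 1. Uses only the standard library.

def bernoulli_variance(p):
    """Conditional variance of a Bernoulli(p) variable: E[Y^2] - E[Y]^2 = p - p^2."""
    return p * (1 - p)

# Scan a fine grid of p values in [0, 1].
grid = [i / 1000 for i in range(1001)]
p_star = max(grid, key=bernoulli_variance)

print(p_star)                       # value of p that maximizes the variance
print(bernoulli_variance(p_star))   # the maximum variance, p(1 - p) at p = 1/2
print(bernoulli_variance(0.0), bernoulli_variance(1.0))  # endpoint values
```

The grid search agrees with the calculus argument: the maximizer is p = 0.5 (variance 0.25), and the variance is exactly 0 at p = 0 and p = 1.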
Because the second derivative equals -2 < 0, p = 1/2 gives the largest conditional variance. We know that Var(Z) >= 0 for any random variable Z, so the conditional variance above is always at least 0. The conditional variance equals exactly 0, and is thus the smallest it can be, only when p = 0 or p = 1.

2. Consider a probit model in the following form, where X is a continuous variable and D is a dummy variable:

    Pr(Y = 1 | X, D) = Phi(beta_0 + beta_1 X + beta_2 D),    (1)

where Phi denotes the standard normal c.d.f.

(a) Based on the probit formulation shown above, what is the marginal effect of an increase in X on Pr(Y = 1 | X, D)?

If we denote the standard normal p.d.f. by phi = Phi', the marginal effect of an increase in X is

    beta_1 * phi(beta_0 + beta_1 X + beta_2 D).

This uses the chain rule; see the lecture notes on WebCT for more details.
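The chain-rule formula above can be verified numerically. The sketch below (illustrative only; the coefficient values beta_0 = -0.5, beta_1 = 0.8, beta_2 = 0.3 are hypothetical, not from the problem set) compares the analytic marginal effect beta_1 * phi(beta_0 + beta_1 X + beta_2 D) against a finite-difference derivative of the probit probability:

```python
import math

def std_normal_pdf(z):
    """Standard normal density phi(z) = exp(-z^2 / 2) / sqrt(2 * pi)."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def std_normal_cdf(z):
    """Standard normal c.d.f. Phi(z), via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def probit_prob(b0, b1, b2, x, d):
    """Pr(Y = 1 | X, D) = Phi(b0 + b1*X + b2*D), equation (1)."""
    return std_normal_cdf(b0 + b1 * x + b2 * d)

def marginal_effect_x(b0, b1, b2, x, d):
    """Analytic marginal effect of X: b1 * phi(b0 + b1*X + b2*D)."""
    return b1 * std_normal_pdf(b0 + b1 * x + b2 * d)

# Hypothetical coefficients, chosen only for illustration.
b0, b1, b2 = -0.5, 0.8, 0.3
x, d = 1.0, 1

analytic = marginal_effect_x(b0, b1, b2, x, d)

# Central finite difference in X should match the chain-rule result.
h = 1e-6
numeric = (probit_prob(b0, b1, b2, x + h, d)
           - probit_prob(b0, b1, b2, x - h, d)) / (2 * h)

print(analytic, numeric)
```

Note that, unlike in the linear probability model, the marginal effect depends on the values of X and D through the density term phi(.), so it must be evaluated at a specific point.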
This note was uploaded on 04/08/2008 for the course ECON 120C taught by Professor Stohs during the Winter '08 term at UCSD.