
CHAPTER 17 SOLUTIONS TO PROBLEMS

17.1 (i) Let m0 denote the number (not the percent) correctly predicted when yi = 0 (so the prediction is also zero) and let m1 be the number correctly predicted when yi = 1. Then the proportion correctly predicted is (m0 + m1)/n, where n is the sample size. By simple algebra, we can write this as

    (m0 + m1)/n = (n0/n)(m0/n0) + (n1/n)(m1/n1) = (1 − ȳ)(m0/n0) + ȳ(m1/n1),

where we use the fact that ȳ = n1/n (the proportion of the sample with yi = 1) and 1 − ȳ = n0/n (the proportion of the sample with yi = 0). But m0/n0 is the proportion correctly predicted when yi = 0, and m1/n1 is the proportion correctly predicted when yi = 1. Therefore,

    (m0 + m1)/n = (1 − ȳ)(m0/n0) + ȳ(m1/n1).

If we multiply through by 100 we obtain

    p̂ = (1 − ȳ)q̂0 + ȳq̂1,

where, by definition, p̂ = 100[(m0 + m1)/n], q̂0 = 100(m0/n0), and q̂1 = 100(m1/n1).

(ii) We just use the formula from part (i): p̂ = .30(80) + .70(40) = 52. Therefore, overall we correctly predict only 52% of the outcomes. This is because, while 80% of the time we correctly predict y = 0, yi = 0 accounts for only 30 percent of the outcomes. More weight (.70) is given to the predictions when yi = 1, and we do much less well predicting that outcome (getting it right only 40% of the time).

17.3 (i) We use the chain rule and equation (17.23). In particular, let x1 ≡ log(z1). Then, by the chain rule,

    ∂E(y | y > 0, x)/∂z1 = [∂E(y | y > 0, x)/∂x1]·(∂x1/∂z1) = [∂E(y | y > 0, x)/∂x1]·(1/z1),

where we use the fact that the derivative of log(z1) is 1/z1. When we plug in (17.23) for ∂E(y | y > 0, x)/∂x1, we obtain the answer.

(ii) As in part (i), we use the chain rule, which is now more complicated:

    ∂E(y | y > 0, x)/∂z1 = [∂E(y | y > 0, x)/∂x1]·(∂x1/∂z1) + [∂E(y | y > 0, x)/∂x2]·(∂x2/∂z1),

where x1 = z1 and x2 = z1². But

    ∂E(y | y > 0, x)/∂x1 = β1{1 − λ(xβ/σ)[xβ/σ + λ(xβ/σ)]},
    ∂E(y | y > 0, x)/∂x2 = β2{1 − λ(xβ/σ)[xβ/σ + λ(xβ/σ)]},

∂x1/∂z1 = 1, and ∂x2/∂z1 = 2z1. Plugging these into the first formula and rearranging gives the answer.

17.5 (i) patents is a count variable, and so the Poisson regression model is appropriate.

(ii) Because β1 is the coefficient on log(sales), β1 is the elasticity of patents with respect to sales. (More precisely, β1 is the elasticity of E(patents | sales, RD) with respect to sales.)

(iii) We use the chain rule to obtain the partial derivative of exp[β0 + β1 log(sales) + β2 RD + β3 RD²] with respect to RD:

    ∂E(patents | sales, RD)/∂RD = (β2 + 2β3 RD)·exp[β0 + β1 log(sales) + β2 RD + β3 RD²].

A simpler way to interpret this model is to take the log and then differentiate with respect to RD: this gives β2 + 2β3 RD, which shows that the semi-elasticity of patents with respect to RD is 100(β2 + 2β3 RD).

17.7 For the immediate purpose of determining the variables that explain whether accepted applicants choose to enroll, there is no sample selection problem. The population of interest is applicants accepted by the particular university, and you have a random sample from this population. Therefore, it is perfectly appropriate to specify a model for this group, probably a linear probability model, a probit model, or a logit model, and estimate the model using the data at hand.
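The weighted-average decomposition in 17.1 is easy to verify numerically. Here is a minimal sketch; the function name and the example counts (30 zeros of which 24 are predicted correctly, 70 ones of which 28 are) are illustrative choices consistent with part (ii), not numbers from the text:

```python
def percent_correct(m0, n0, m1, n1):
    """Overall percent correctly predicted, computed via the weighted
    average p_hat = (1 - ybar)*q0_hat + ybar*q1_hat from Problem 17.1(i)."""
    n = n0 + n1
    ybar = n1 / n            # proportion of the sample with y_i = 1
    q0 = 100 * m0 / n0       # percent correct when y_i = 0
    q1 = 100 * m1 / n1       # percent correct when y_i = 1
    return (1 - ybar) * q0 + ybar * q1

# Part (ii): zeros are 30% of outcomes (80% predicted correctly),
# ones are 70% of outcomes (40% predicted correctly).
print(percent_correct(m0=24, n0=30, m1=28, n1=70))  # approximately 52
```

The same number results from computing 100(m0 + m1)/n directly, which is the point of the algebra in part (i).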
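The chain-rule answer in 17.3(i) can also be checked by finite differences, using the Tobit conditional mean E(y | y > 0, x) = xβ + σλ(xβ/σ), where λ(c) = φ(c)/Φ(c) is the inverse Mills ratio. The sketch below assumes that conditional-mean form and uses illustrative parameter values (b0, b1, sigma, z1 are not from the text):

```python
from math import erf, exp, log, pi, sqrt

def Phi(c):
    """Standard normal cdf."""
    return 0.5 * (1.0 + erf(c / sqrt(2.0)))

def phi(c):
    """Standard normal pdf."""
    return exp(-0.5 * c * c) / sqrt(2.0 * pi)

def imr(c):
    """Inverse Mills ratio lambda(c) = phi(c)/Phi(c)."""
    return phi(c) / Phi(c)

def cond_mean(z1, b0, b1, sigma):
    """E(y | y > 0, x) for a Tobit model with x*beta = b0 + b1*log(z1)."""
    xb = b0 + b1 * log(z1)
    return xb + sigma * imr(xb / sigma)

def analytic_deriv(z1, b0, b1, sigma):
    """Chain-rule derivative from 17.3(i):
    beta1 * {1 - lambda(c)*[c + lambda(c)]} * (1/z1), with c = x*beta/sigma."""
    c = (b0 + b1 * log(z1)) / sigma
    lam = imr(c)
    return b1 * (1.0 - lam * (c + lam)) / z1

# Illustrative values (hypothetical, not estimates from the text).
b0, b1, sigma, z1 = 0.5, 1.2, 1.0, 3.0
h = 1e-6
numeric = (cond_mean(z1 + h, b0, b1, sigma) - cond_mean(z1 - h, b0, b1, sigma)) / (2 * h)
print(abs(numeric - analytic_deriv(z1, b0, b1, sigma)) < 1e-6)  # True
```

The central-difference quotient agrees with the analytic expression, confirming that the 1/z1 factor from differentiating log(z1) is the only modification to the partial effect in (17.23).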

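The partial effect derived in 17.5(iii) can likewise be verified numerically. A minimal sketch; the coefficient values and data point below are hypothetical, not estimates from the text:

```python
from math import exp, log

def E_patents(sales, RD, b0, b1, b2, b3):
    """Poisson conditional mean exp[b0 + b1*log(sales) + b2*RD + b3*RD^2]."""
    return exp(b0 + b1 * log(sales) + b2 * RD + b3 * RD * RD)

def dE_dRD(sales, RD, b0, b1, b2, b3):
    """Partial effect from 17.5(iii): (b2 + 2*b3*RD) times the conditional mean."""
    return (b2 + 2 * b3 * RD) * E_patents(sales, RD, b0, b1, b2, b3)

def semi_elasticity(RD, b2, b3):
    """100*(b2 + 2*b3*RD): approximate percent change in expected patents
    per one-unit change in RD."""
    return 100 * (b2 + 2 * b3 * RD)

# Hypothetical coefficients and evaluation point.
b = dict(b0=-1.0, b1=0.9, b2=0.15, b3=-0.002)
sales, RD = 500.0, 10.0
h = 1e-6
numeric = (E_patents(sales, RD + h, **b) - E_patents(sales, RD - h, **b)) / (2 * h)
print(abs(numeric - dE_dRD(sales, RD, **b)) < 1e-5)  # True
```

Note that the semi-elasticity depends on RD through the quadratic term, so it should be reported at representative values of RD (for example, the sample mean).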
