CHAPTER 17 SOLUTIONS TO PROBLEMS

17.1 (i) Let m_0 denote the number (not the percent) correctly predicted when y_i = 0 (so the prediction is also zero) and let m_1 be the number correctly predicted when y_i = 1. Then the proportion correctly predicted is (m_0 + m_1)/n, where n is the sample size. By simple algebra, we can write this as

(m_0 + m_1)/n = (n_0/n)(m_0/n_0) + (n_1/n)(m_1/n_1) = (1 − ȳ)(m_0/n_0) + ȳ(m_1/n_1),

where we have used the fact that ȳ = n_1/n (the proportion of the sample with y_i = 1) and 1 − ȳ = n_0/n (the proportion of the sample with y_i = 0). But m_0/n_0 is the proportion correctly predicted when y_i = 0, and m_1/n_1 is the proportion correctly predicted when y_i = 1. Therefore, (m_0 + m_1)/n = (1 − ȳ)(m_0/n_0) + ȳ(m_1/n_1). If we multiply through by 100 we obtain p̂ = (1 − ȳ)q̂_0 + ȳq̂_1, where, by definition, p̂ = 100[(m_0 + m_1)/n], q̂_0 = 100(m_0/n_0), and q̂_1 = 100(m_1/n_1).

(ii) We just use the formula from part (i): p̂ = .30(80) + .70(40) = 52. Therefore, overall we correctly predict only 52% of the outcomes. This is because, while 80% of the time we correctly predict y = 0, the outcome y_i = 0 accounts for only 30 percent of the sample. More weight (.70) is given to the predictions when y_i = 1, and we do much less well predicting that outcome (getting it right only 40% of the time).
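As a quick numerical check of the decomposition in part (i), the sketch below tabulates a hypothetical confusion matrix and confirms that the overall percent correctly predicted equals (1 − ȳ)q̂_0 + ȳq̂_1. The counts are invented solely to reproduce the part (ii) numbers: a 30/70 split and within-group rates of 80% and 40%.

```python
# Numerical check of the decomposition in Problem 17.1.
# The counts below are hypothetical, chosen so that 30% of the
# sample has y = 0, q0_hat = 80, and q1_hat = 40.
n0, n1 = 30, 70          # observations with y = 0 and y = 1
m0, m1 = 24, 28          # correctly predicted within each group (80%, 40%)
n = n0 + n1

p_hat_direct = 100 * (m0 + m1) / n     # overall percent correctly predicted
y_bar = n1 / n                         # fraction of the sample with y = 1
q0_hat = 100 * m0 / n0                 # percent correct when y = 0
q1_hat = 100 * m1 / n1                 # percent correct when y = 1
p_hat_decomposed = (1 - y_bar) * q0_hat + y_bar * q1_hat

print(p_hat_direct, p_hat_decomposed)  # both print 52.0
```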
17.3 (i) We use the chain rule and equation (17.23). In particular, let x_1 ≡ log(z_1). Then, by the chain rule,

\[ \frac{\partial E(y \mid y>0, \mathbf{x})}{\partial z_1} = \frac{\partial E(y \mid y>0, \mathbf{x})}{\partial x_1} \cdot \frac{\partial x_1}{\partial z_1} = \frac{\partial E(y \mid y>0, \mathbf{x})}{\partial x_1} \cdot \frac{1}{z_1}, \]

where we use the fact that the derivative of log(z_1) is 1/z_1. When we plug in (17.23) for ∂E(y | y > 0, x)/∂x_1, we obtain the answer.

(ii) As in part (i), we use the chain rule, which is now more complicated:

\[ \frac{\partial E(y \mid y>0, \mathbf{x})}{\partial z_1} = \frac{\partial E(y \mid y>0, \mathbf{x})}{\partial x_1} \cdot \frac{\partial x_1}{\partial z_1} + \frac{\partial E(y \mid y>0, \mathbf{x})}{\partial x_2} \cdot \frac{\partial x_2}{\partial z_1}, \]

where x_1 = z_1 and x_2 = z_1^2. But

\[ \frac{\partial E(y \mid y>0, \mathbf{x})}{\partial x_1} = \beta_1\{1 - \lambda(\mathbf{x}\boldsymbol\beta/\sigma)[\mathbf{x}\boldsymbol\beta/\sigma + \lambda(\mathbf{x}\boldsymbol\beta/\sigma)]\}, \]
\[ \frac{\partial E(y \mid y>0, \mathbf{x})}{\partial x_2} = \beta_2\{1 - \lambda(\mathbf{x}\boldsymbol\beta/\sigma)[\mathbf{x}\boldsymbol\beta/\sigma + \lambda(\mathbf{x}\boldsymbol\beta/\sigma)]\}, \]

∂x_1/∂z_1 = 1, and ∂x_2/∂z_1 = 2z_1. Plugging these into the first formula and rearranging gives the answer:

\[ \frac{\partial E(y \mid y>0, \mathbf{x})}{\partial z_1} = (\beta_1 + 2\beta_2 z_1)\{1 - \lambda(\mathbf{x}\boldsymbol\beta/\sigma)[\mathbf{x}\boldsymbol\beta/\sigma + \lambda(\mathbf{x}\boldsymbol\beta/\sigma)]\}. \]
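To make the Tobit adjustment factor concrete, here is a minimal numerical sketch of the part (ii) partial effect. All parameter and data values are invented for illustration (none come from the problem); only the formulas come from the solution above, with the inverse Mills ratio λ(c) = φ(c)/Φ(c).

```python
# Sketch of the part (ii) Tobit partial effect.
# Parameter values are hypothetical; the formulas follow the solution above.
from scipy.stats import norm

def inv_mills(c):
    """Inverse Mills ratio: lambda(c) = phi(c)/Phi(c)."""
    return norm.pdf(c) / norm.cdf(c)

# Hypothetical Tobit index E(y*|x) = b0 + b1*z1 + b2*z1**2
b0, b1, b2 = 1.0, 0.5, -0.02
sigma = 2.0
z1 = 3.0

xb = b0 + b1 * z1 + b2 * z1**2                   # index x*beta at this z1
c = xb / sigma
adjust = 1 - inv_mills(c) * (c + inv_mills(c))   # adjustment factor in (0, 1)

# Partial effect of z1 on E(y | y > 0, x), from part (ii)
pe = (b1 + 2 * b2 * z1) * adjust
print(f"adjustment factor = {adjust:.4f}, partial effect = {pe:.4f}")
```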

17.5 (i) patents is a count variable, and so the Poisson regression model is appropriate.

(ii) Because β_1 is the coefficient on log(sales), β_1 is the elasticity of patents with respect to sales. (More precisely, β_1 is the elasticity of E(patents | sales, RD) with respect to sales.)

(iii) We use the chain rule to obtain the partial derivative of exp[β_0 + β_1 log(sales) + β_2 RD + β_3 RD^2] with respect to RD:

\[ \frac{\partial E(patents \mid sales, RD)}{\partial RD} = (\beta_2 + 2\beta_3 RD)\exp[\beta_0 + \beta_1 \log(sales) + \beta_2 RD + \beta_3 RD^2]. \]

A simpler way to interpret this model is to take the log and then differentiate with respect to RD: this gives β_2 + 2β_3 RD, which shows that the semi-elasticity of patents with respect to RD is 100(β_2 + 2β_3 RD).
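A short numerical sketch of the part (iii) calculation follows. The coefficient values and the (sales, RD) point are hypothetical, since the problem gives none; the snippet simply evaluates the level effect and the semi-elasticity at the chosen point.

```python
# Sketch of the Poisson partial effects in Problem 17.5(iii).
# Coefficients are hypothetical; only the formulas come from the text.
import math

b0, b1, b2, b3 = -1.0, 0.8, 0.10, -0.002   # invented Poisson coefficients
sales, RD = 500.0, 10.0                    # invented data point

mean = math.exp(b0 + b1 * math.log(sales) + b2 * RD + b3 * RD**2)

# Level effect: change in expected patents per one-unit change in RD
dE_dRD = (b2 + 2 * b3 * RD) * mean

# Semi-elasticity: approximate percent change in expected patents
# per one-unit change in RD
semi_elasticity = 100 * (b2 + 2 * b3 * RD)

print(f"E(patents) = {mean:.3f}, dE/dRD = {dE_dRD:.3f}, "
      f"semi-elasticity = {semi_elasticity:.2f}%")
```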