Other Discrete Dependent Variable Models

Our model is still
$y^*_i = \alpha + \beta x_i + u_i$
where $y^*_i$ is a latent dependent variable (for example, the net utility associated with making a certain choice), and the only thing we observe is the index:
$y_i = 1$ if $y^*_i \ge 0$, and $y_i = 0$ if $y^*_i < 0$.

Logit model

In the logit model, instead of assuming that $u$ is standard normally distributed (which would give the probit model), one assumes that it follows a logistic distribution. A logistic distribution has c.d.f. given by
$F(z) = \frac{e^z}{1 + e^z}$
Note that this is a symmetric distribution because $F(-z) = 1 - F(z)$. The p.d.f. is given by
$f(z) = \frac{\partial F(z)}{\partial z} = \frac{e^z}{(1 + e^z)^2} = F(z)[1 - F(z)]$
Hence
$\Pr(y_i = 1) = \Pr(y^*_i \ge 0) = \Pr(u_i \ge -(\alpha + \beta x_i)) = \Pr(u_i \le \alpha + \beta x_i) = F(\alpha + \beta x_i) = \frac{e^{\alpha + \beta x_i}}{1 + e^{\alpha + \beta x_i}}$

This model is also estimated by ML. The individual contribution to the likelihood function is
$l_i = \left( \frac{e^{\alpha + \beta x_i}}{1 + e^{\alpha + \beta x_i}} \right)^{y_i} \left( \frac{1}{1 + e^{\alpha + \beta x_i}} \right)^{1 - y_i}$
and the log-likelihood is
$\ln L = \sum_{i=1}^{n} \ln l_i$
The FOC is
$\frac{\partial \ln L}{\partial \beta} = 0 \;\rightarrow\; \sum_{i=1}^{n} \left[ y_i - \frac{e^{\alpha + \beta x_i}}{1 + e^{\alpha + \beta x_i}} \right] x_i = 0$
and again a numerical method must be used to obtain the estimates $\hat{\alpha}$ and $\hat{\beta}$. Logit and probit will often give very similar results, since the shapes of their c.d.f.'s are very similar except in the tails.

Since $E(y_i \mid x_i) = \frac{e^{\alpha + \beta x_i}}{1 + e^{\alpha + \beta x_i}}$, the logit marginal effect is
$\frac{\partial E(y_i \mid x_i)}{\partial x_i} = \frac{\partial F(\alpha + \beta x_i)}{\partial x_i} = \beta f(\alpha + \beta x_i) = \beta \frac{e^{\alpha + \beta x_i}}{(1 + e^{\alpha + \beta x_i})^2}$

Ordered Probit

Suppose that instead of a binary index we have a multiple index for an ordered discrete dependent variable. For example, suppose that $y^*_i$ is the utility from working, and the only thing we observe are categories of hours of work:
$y_i = 1$ if $y^*_i \le \alpha_1$
$y_i = 2$ if $\alpha_1 < y^*_i \le \alpha_2$
$y_i = 3$ if $\alpha_2 < y^*_i \le \alpha_3$
$\vdots$
$y_i = k$ if $y^*_i > \alpha_{k-1}$
with $\alpha_1 \le \alpha_2 \le \dots \le \alpha_{k-1}$ being "thresholds". Here $y_i = 1$ could correspond to 0 hours, $y_i = 2$ to 1 to 10 hours, and so forth.
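The logistic c.d.f., log-likelihood, and marginal-effect formulas above can be checked numerically with a short sketch (the function names and parameter values here are illustrative, not from the notes):

```python
import numpy as np

def logistic_cdf(z):
    """Logistic c.d.f.: F(z) = e^z / (1 + e^z)."""
    return np.exp(z) / (1.0 + np.exp(z))

def logit_loglik(alpha, beta, x, y):
    """ln L = sum_i [ y_i ln F(alpha + beta x_i) + (1 - y_i) ln(1 - F(alpha + beta x_i)) ]."""
    p = logistic_cdf(alpha + beta * x)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def logit_marginal_effect(alpha, beta, x):
    """Marginal effect: beta * f(alpha + beta x), using f(z) = F(z)[1 - F(z)]."""
    p = logistic_cdf(alpha + beta * x)
    return beta * p * (1.0 - p)

# Illustrative values: when alpha + beta*x = 0, the predicted
# probability is exactly 1/2 and the marginal effect is beta/4.
print(logistic_cdf(0.0))                       # 0.5
print(logit_marginal_effect(0.5, 2.0, -0.25))  # 0.5 (= beta/4 with beta = 2)
```

Note that the marginal effect is largest at $F = 1/2$ and shrinks toward zero in the tails, which is the same tail behavior that makes logit and probit results so similar in practice.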
Figure 1 gives an illustration (for $k = 6$): the thresholds $\alpha_1, \dots, \alpha_5$ partition the $y^*$ axis into the six categories $y_i = 1, \dots, y_i = 6$.

Note that the assumption of normality of $u_i$ is all we need to estimate the model by ML. In particular,
$\Pr(y_i = j \mid x_i) = \Pr(\alpha_{j-1} < y^*_i \le \alpha_j \mid x_i) = \Pr(\alpha_{j-1} - (\alpha + \beta x_i) < u_i \le \alpha_j - (\alpha + \beta x_i)) = \Phi(\alpha_j - (\alpha + \beta x_i)) - \Phi(\alpha_{j-1} - (\alpha + \beta x_i))$
with $\alpha_0 = -\infty$ and $\alpha_k = +\infty$. It follows that the individual contribution to the log-likelihood is
$\ln l_i = \sum_{j=1}^{k} 1\{y_i = j\} \ln\left[ \Phi(\alpha_j - (\alpha + \beta x_i)) - \Phi(\alpha_{j-1} - (\alpha + \beta x_i)) \right]$
where $1\{y_i = j\}$ is an indicator function, i.e., a variable that equals 1 if the statement in brackets is true, and 0 otherwise. Note that the intercept $\alpha$ cannot be identified separately from the thresholds.
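The ordered-probit category probabilities can be sketched in code. This is a minimal illustration (the function names and cutoff values are my own, not from the notes), computing $\Phi$ from the error function:

```python
import math

def Phi(z):
    """Standard normal c.d.f., computed from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ordered_probit_probs(alpha, beta, x, cutoffs):
    """Pr(y = j | x) = Phi(a_j - (alpha + beta x)) - Phi(a_{j-1} - (alpha + beta x)),
    for j = 1..k, with a_0 = -inf and a_k = +inf (math.erf handles the
    infinite endpoints, giving Phi(-inf) = 0 and Phi(+inf) = 1)."""
    a = [-math.inf] + list(cutoffs) + [math.inf]
    index = alpha + beta * x
    return [Phi(a[j] - index) - Phi(a[j - 1] - index) for j in range(1, len(a))]

# Hypothetical thresholds for k = 4 categories; the k probabilities
# telescope, so they sum to one.
probs = ordered_probit_probs(0.0, 1.0, 0.3, [-1.0, 0.0, 1.0])
print(sum(probs))  # 1.0 (up to floating-point rounding)
```

The individual log-likelihood contribution for an observation in category $j$ is then just the log of the $j$-th entry of this vector, which is what the indicator-function sum above selects.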
Econometrics (Utility), Pistaferri, L., Spring '11.