Lecture 13: Extensions to Tobit and Heckit regression models

1 Multiple censoring points

A useful extension to the traditional Tobit model

$$y_i^* = \beta_0 + \beta_1 x_i + u_i$$

$$y_i = \begin{cases} y_i^* & \text{if } y_i^* > 0 \\ 0 & \text{if } y_i^* \le 0 \end{cases}$$

arises when the data are subject to two forms of censoring. For example, $y_i^*$ could be fully observed only if it lies between a lower and an upper threshold, and be set equal to the thresholds otherwise. In this case the model is written:

$$y_i^* = \beta_0 + \beta_1 x_i + u_i$$

$$y_i = \begin{cases} l_i & \text{if } y_i^* \le l_i \\ y_i^* & \text{if } l_i < y_i^* < L_i \\ L_i & \text{if } y_i^* \ge L_i \end{cases}$$

where $l_i$ and $L_i$ are known constants. Figure 1 shows the relationship between $y_i$ and $y_i^*$ in this particular case (the red line). For values of $y_i^*$ between the two thresholds, $y_i = y_i^*$. For values above $L_i$ or below $l_i$, the data are censored at the thresholds.

[Figure 1: $y_i$ plotted against $y_i^*$, with censoring at the lower threshold $l$ and the upper threshold $L$.]

One example is survey data on incomes that are censored at some upper or lower threshold for confidentiality reasons. Another (economic) example is a labor supply model with two possible corner solutions: 0 hours of leisure per day ($l$) or 24 hours of leisure per day ($L$).
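To make the censoring mechanism concrete, here is a minimal simulation sketch in Python (not part of the original notes). The parameter values and the choice of constant thresholds $l_i = l$ and $L_i = L$ for every $i$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Illustrative parameter values (assumptions, not from the notes).
beta0, beta1, sigma = 1.0, 2.0, 1.5
l, L = 1.0, 4.0                         # constant thresholds l_i = l, L_i = L

x = rng.uniform(0.0, 2.0, size=n)
u = rng.normal(0.0, sigma, size=n)
y_star = beta0 + beta1 * x + u          # latent outcome y*_i

# Observed outcome: y*_i censored at the two thresholds.
y = np.clip(y_star, l, L)

d1 = y_star <= l                        # d_{1i} = 1{y*_i <= l}
d2 = y_star >= L                        # d_{2i} = 1{y*_i >= L}
print(f"share censored below: {d1.mean():.2%}, above: {d2.mean():.2%}")
```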
Estimation is again by ML. Assume $u_i \sim N(0, \sigma^2)$ for all $i$. Define two dummies $d_{1i} = 1\{y_i^* \le l_i\}$ and $d_{2i} = 1\{y_i^* \ge L_i\}$. Clearly, $(1 - d_{1i} - d_{2i}) = 1\{l_i < y_i^* < L_i\}$. The contribution of an individual to the likelihood is

$$\ell_i = \left[ \Phi\left( \frac{l_i - (\beta_0 + \beta_1 x_i)}{\sigma} \right) \right]^{d_{1i}} \left[ 1 - \Phi\left( \frac{L_i - (\beta_0 + \beta_1 x_i)}{\sigma} \right) \right]^{d_{2i}} \left[ \frac{1}{\sigma} \, \phi\left( \frac{y_i - (\beta_0 + \beta_1 x_i)}{\sigma} \right) \right]^{1 - d_{1i} - d_{2i}}$$

and so the likelihood function is

$$L = \prod_{i=1}^{n} \ell_i$$
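As an illustration of the ML step, the following sketch (continuing the simulated data above) minimizes the negative log of the likelihood $L$ with a generic numerical optimizer. The function name `neg_loglik` is introduced here, and parameterizing $\log \sigma$ rather than $\sigma$ is only a convenience to keep $\sigma > 0$ during the search.

```python
import numpy as np
from scipy import optimize, stats

def neg_loglik(theta, y, x, l, L):
    """Negative log-likelihood of the two-limit Tobit model."""
    beta0, beta1, log_sigma = theta
    sigma = np.exp(log_sigma)                 # enforce sigma > 0
    xb = beta0 + beta1 * x
    d1 = y <= l                               # censored below
    d2 = y >= L                               # censored above
    mid = ~d1 & ~d2                           # "complete" observations

    ll = np.empty_like(y)
    ll[d1] = stats.norm.logcdf((l - xb[d1]) / sigma)         # log Phi(.)
    ll[d2] = stats.norm.logsf((L - xb[d2]) / sigma)          # log(1 - Phi(.))
    ll[mid] = stats.norm.logpdf((y[mid] - xb[mid]) / sigma) - np.log(sigma)
    return -ll.sum()

# Continuing with y, x, l, L from the simulation sketch above:
res = optimize.minimize(neg_loglik, x0=np.zeros(3), args=(y, x, l, L))
beta0_hat, beta1_hat = res.x[0], res.x[1]
sigma_hat = np.exp(res.x[2])
print(beta0_hat, beta1_hat, sigma_hat)        # should be near 1.0, 2.0, 1.5
```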

There are two alternative estimation methods that are based on a two-step strategy.

1.1 Method 1: Use only "complete" observations

In this case, one considers the conditional mean

$$E(y_i \mid x_i, l_i < y_i^* < L_i) = \beta_0 + \beta_1 x_i + E(u_i \mid x_i, l_i < y_i^* < L_i) = \beta_0 + \beta_1 x_i + E(u_i \mid l_i - (\beta_0 + \beta_1 x_i) < u_i < L_i - (\beta_0 + \beta_1 x_i))$$

If $E(u_i \mid l_i - (\beta_0 + \beta_1 x_i) < u_i < L_i - (\beta_0 + \beta_1 x_i)) \ne 0$, OLS on the sample of "complete" observations will deliver biased and inconsistent estimates of the parameters of interest. This is because there is sample selectivity: only individuals whose realizations of $u_i$ are not too extreme remain in the sample.

We can eliminate the sample selectivity bias using a control function strategy. The problem is one of omitted variable bias: the variable $E(u_i \mid l_i - (\beta_0 + \beta_1 x_i) < u_i < L_i - (\beta_0 + \beta_1 x_i))$ is omitted from the regression, so we can solve the problem by including this variable in the regression. For this purpose, we need to know what this expectation is. Under the normality assumption on $u$, we can use the formulae of the normal distribution to construct an estimate of this expectation. In fact, writing $a_i = l_i - (\beta_0 + \beta_1 x_i)$ and $b_i = L_i - (\beta_0 + \beta_1 x_i)$,

$$E(u_i \mid a_i < u_i < b_i) = \int_{a_i}^{b_i} u_i \, f(u_i \mid a_i < u_i < b_i) \, du_i = \frac{\int_{a_i}^{b_i} u_i \, f(u_i) \, du_i}{\Phi(b_i/\sigma) - \Phi(a_i/\sigma)} = \sigma \, \frac{\phi(a_i/\sigma) - \phi(b_i/\sigma)}{\Phi(b_i/\sigma) - \Phi(a_i/\sigma)}$$
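The following sketch (again continuing the simulation above) illustrates the control function idea: it computes the correction term in closed form and includes it as an extra regressor on the "complete" subsample. For simplicity it plugs in the true $\beta_0$, $\beta_1$, $\sigma$, which is an illustrative shortcut; a feasible two-step procedure would use consistent first-step estimates instead.

```python
import numpy as np
from scipy import stats

def trunc_mean(a, b, sigma):
    """E(u | a < u < b) for u ~ N(0, sigma^2): the correction term above."""
    num = stats.norm.pdf(a / sigma) - stats.norm.pdf(b / sigma)
    den = stats.norm.cdf(b / sigma) - stats.norm.cdf(a / sigma)
    return sigma * num / den

# True parameters plugged in for illustration only.
xb = beta0 + beta1 * x
lam = trunc_mean(l - xb, L - xb, sigma)       # the omitted variable

mid = (y_star > l) & (y_star < L)             # the "complete" observations

# Naive OLS of y on x over the complete sample: biased by selectivity.
X_naive = np.column_stack([np.ones(mid.sum()), x[mid]])
b_naive = np.linalg.lstsq(X_naive, y[mid], rcond=None)[0]

# Control function regression: include lam as an extra regressor. Under
# normality its coefficient should be close to 1, and the coefficients on
# the constant and x should be close to beta0 and beta1.
X_cf = np.column_stack([np.ones(mid.sum()), x[mid], lam[mid]])
b_cf = np.linalg.lstsq(X_cf, y[mid], rcond=None)[0]
print("naive OLS:", b_naive)
print("control function:", b_cf)
```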
