# Lecture 13: Extensions to Tobit and Heckit Regression Models


## 1 Multiple censoring points

A useful extension to the traditional Tobit model

$$
y_i^* = \beta_0 + \beta_1 x_i + u_i, \qquad
y_i = \begin{cases} y_i^* & \text{if } y_i^* > 0 \\ 0 & \text{if } y_i^* \le 0 \end{cases}
$$

is when the data are subject to two forms of censoring. For example, $y_i^*$ could be fully observed only if it lies between a lower and an upper threshold, and be set equal to the thresholds otherwise. In this case the model is written

$$
y_i^* = \beta_0 + \beta_1 x_i + u_i, \qquad
y_i = \begin{cases}
l_i & \text{if } y_i^* \le l_i \\
y_i^* & \text{if } l_i < y_i^* < L_i \\
L_i & \text{if } y_i^* \ge L_i
\end{cases}
$$

where $l_i$ and $L_i$ are known constants. Figure 1 shows the relationship between $y_i$ and $y_i^*$ in this particular case (the red line). For values of $y_i^*$ between the two thresholds, $y_i = y_i^*$. For values above $L_i$ or below $l_i$, the data are censored at the thresholds. One example is data from a survey of incomes censored at some upper or lower threshold for confidentiality reasons. Another (economic) example is a labor supply model with two possible corner solutions: 0 hours of leisure ($l$) or 24 hours of leisure ($L$) per day.

Estimation is again by ML. Assume $u_i \sim N(0, \sigma^2)$ for all $i$. Define two dummies $d_{1i} = \mathbf{1}\{y_i^* \le l_i\}$ and $d_{2i} = \mathbf{1}\{y_i^* \ge L_i\}$. Clearly, $(1 - d_{1i} - d_{2i}) = \mathbf{1}\{l_i < y_i^* < L_i\}$. The contribution of an individual to the likelihood is

$$
\ell_i = \left[ \Phi\!\left( \frac{l_i - (\beta_0 + \beta_1 x_i)}{\sigma} \right) \right]^{d_{1i}}
\left[ 1 - \Phi\!\left( \frac{L_i - (\beta_0 + \beta_1 x_i)}{\sigma} \right) \right]^{d_{2i}}
\left[ \frac{1}{\sigma}\,\phi\!\left( \frac{y_i - (\beta_0 + \beta_1 x_i)}{\sigma} \right) \right]^{1 - d_{1i} - d_{2i}}
$$

and so the likelihood function is

$$
L = \prod_{i=1}^{n} \ell_i.
$$

There are two alternative estimation methods that are based on a two-step strategy.

*[Figure 1: the observed $y$ (red line) plotted against the latent $y^*$, with censoring at the thresholds $l$ and $L$.]*

### 1.1 Method 1: Use only "complete" observations

In this case, one considers the conditional mean

$$
\begin{aligned}
E(y_i \mid x_i,\, l_i < y_i^* < L_i)
&= \beta_0 + \beta_1 x_i + E(u_i \mid x_i,\, l_i < y_i^* < L_i) \\
&= \beta_0 + \beta_1 x_i + E\big(u_i \mid l_i - (\beta_0 + \beta_1 x_i) < u_i < L_i - (\beta_0 + \beta_1 x_i)\big).
\end{aligned}
$$

If $E\big(u_i \mid l_i - (\beta_0 + \beta_1 x_i) < u_i < L_i - (\beta_0 + \beta_1 x_i)\big) \neq 0$, OLS on the sample of "complete" observations will deliver biased and inconsistent estimates of the parameters of interest. This is because of sample selectivity: only individuals whose realizations of $u_i$ are not too extreme remain in the sample.

We can eliminate the sample selectivity bias using a control function strategy. The problem is one of omitted variable bias: the variable $E\big(u_i \mid l_i - (\beta_0 + \beta_1 x_i) < u_i < L_i - (\beta_0 + \beta_1 x_i)\big)$ is omitted from the regression. We can solve the problem by including this variable in the regression. For this purpose, we need to know what this expectation is. With a normality assumption on $u$, we can use the formulae of the normal distribution to construct an estimate of this expectation. In fact,
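The selectivity bias from keeping only "complete" observations can be seen in a quick simulation. This is an illustration under an assumed data-generating process, not part of the lecture:

```python
# Sketch: OLS on only the non-censored ("complete") observations is biased,
# because the retained draws of u_i are not too extreme. Design is illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(size=n)
beta0, beta1, sigma = 1.0, 2.0, 1.5
y_star = beta0 + beta1 * x + rng.normal(scale=sigma, size=n)
l, L = 0.0, 4.0

keep = (y_star > l) & (y_star < L)          # drop censored observations
X = np.column_stack([np.ones(keep.sum()), x[keep]])
b_ols, *_ = np.linalg.lstsq(X, y_star[keep], rcond=None)
print(b_ols)   # slope is attenuated well below the true value 2.0
```

Despite the large sample, the slope estimate does not converge to the true $\beta_1$: the truncation mechanically pulls extreme outcomes out of the sample, flattening the fitted line.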
$$
\begin{aligned}
E\big(u_i \mid l_i - (\beta_0 + \beta_1 x_i) < u_i < L_i - (\beta_0 + \beta_1 x_i)\big)
&= \int_{l_i - (\beta_0 + \beta_1 x_i)}^{L_i - (\beta_0 + \beta_1 x_i)} u_i \, f\big(u_i \mid l_i - (\beta_0 + \beta_1 x_i) < u_i < L_i - (\beta_0 + \beta_1 x_i)\big)\, du_i \\
&= \frac{\int_{l_i - (\beta_0 + \beta_1 x_i)}^{L_i - (\beta_0 + \beta_1 x_i)} u_i \, f(u_i)\, du_i}{\Phi_H - \Phi_L}
= \sigma\,\frac{\phi_L - \phi_H}{\Phi_H - \Phi_L}
\end{aligned}
$$

where $\Phi_L = \Phi\!\left[ \frac{l_i - (\beta_0 + \beta_1 x_i)}{\sigma} \right]$, $\Phi_H = \Phi\!\left[ \frac{L_i - (\beta_0 + \beta_1 x_i)}{\sigma} \right]$, and similarly for $\phi_H$ and $\phi_L$. It follows that

$$
E(y_i \mid x_i,\, l_i < y_i^* < L_i) = \beta_0 + \beta_1 x_i + \sigma\,\frac{\phi_L - \phi_H}{\Phi_H - \Phi_L}.
$$
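The truncated-mean formula above can be checked numerically against a Monte Carlo average. The bounds and the value of $\sigma$ in this sketch are arbitrary illustrative choices:

```python
# Sketch: verify E(u | a < u < b) = sigma * (phi_L - phi_H) / (Phi_H - Phi_L)
# for u ~ N(0, sigma^2), where phi_L = phi(a/sigma), Phi_H = Phi(b/sigma), etc.
# The values of sigma, a, b are illustrative assumptions.
import numpy as np
from scipy.stats import norm

sigma = 1.5
a, b = -1.0, 0.5                     # truncation bounds: a = l - xb, b = L - xb
phi_L, phi_H = norm.pdf(a / sigma), norm.pdf(b / sigma)
Phi_L, Phi_H = norm.cdf(a / sigma), norm.cdf(b / sigma)
analytic = sigma * (phi_L - phi_H) / (Phi_H - Phi_L)

rng = np.random.default_rng(2)
u = rng.normal(scale=sigma, size=1_000_000)
mc = u[(u > a) & (u < b)].mean()     # Monte Carlo truncated mean
print(analytic, mc)                  # the two should agree closely
```

In a two-step implementation, this correction term would be evaluated at first-step estimates of $(\beta_0, \beta_1, \sigma)$ and included as an extra regressor, following the control function logic described above.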

