# EC220 Revision Lectures: MLE and Limited Dependent Variable Models



## Introduction: MLE

We then choose $\beta_1, \beta_2$ which maximize the likelihood

$$L = \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^N e^{-\frac{1}{2}\left(\frac{Y_1 - \beta_1 - \beta_2 X_1}{\sigma}\right)^2} e^{-\frac{1}{2}\left(\frac{Y_2 - \beta_1 - \beta_2 X_2}{\sigma}\right)^2} \cdots e^{-\frac{1}{2}\left(\frac{Y_N - \beta_1 - \beta_2 X_N}{\sigma}\right)^2}$$

Alternatively, we can maximize the log-likelihood, which is easier mathematically:

$$\log(L) = \log\left[\left(\frac{1}{\sigma\sqrt{2\pi}}\right)^N\right] - \frac{1}{2\sigma^2}\sum_{i=1}^{N}(Y_i - \beta_1 - \beta_2 X_i)^2$$

Since we have assumed a normal density, we find that this is the same problem as OLS! Only the estimation of $\sigma$ will be slightly different.

(B KOO, EC220 Revision Lectures)

## P6 2005

Key issues:

1. Construction of the (log-)likelihood function.
2. Maximize it.

## Limited Dependent Variable Models

We use LDV models when we model scenarios where the outcome depends on an unobserved underlying model.
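The equivalence with OLS can be checked numerically. Below is a minimal sketch, assuming synthetic data (the sample size and coefficient values are illustrative, not from the lectures): maximizing the normal log-likelihood recovers the same $\beta_1, \beta_2$ as a least-squares fit.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data: Y = beta1 + beta2*X + normal error (values are illustrative)
rng = np.random.default_rng(0)
N = 200
X = rng.normal(size=N)
Y = 1.0 + 2.0 * X + rng.normal(scale=0.5, size=N)

def neg_log_likelihood(params):
    """Negative of the normal log-likelihood from the slide above."""
    b1, b2, log_sigma = params
    sigma = np.exp(log_sigma)  # parameterize via log(sigma) to keep sigma > 0
    resid = Y - b1 - b2 * X
    return N * np.log(sigma * np.sqrt(2 * np.pi)) + np.sum(resid**2) / (2 * sigma**2)

# Maximize the likelihood (i.e. minimize its negative)
mle = minimize(neg_log_likelihood, x0=[0.0, 0.0, 0.0]).x

# OLS for comparison
A = np.column_stack([np.ones(N), X])
ols, *_ = np.linalg.lstsq(A, Y, rcond=None)

print(mle[:2])  # MLE estimates of beta1, beta2
print(ols)      # OLS estimates -- essentially identical
```

Only the estimate of $\sigma$ differs between the two approaches: the MLE divides the residual sum of squares by $N$, while the usual OLS estimator divides by $N - 2$.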
In the context of this syllabus, we look at three broad types of models:

- Binary Choice Models: Linear Probability Model, Logit, Probit
- Censored Models: Tobit
- Sample Selection Bias

Basically, Binary Choice Models are concerned with

$$Y_i = \begin{cases} 1 \\ 0 \end{cases}$$

Censored Models are concerned with

$$Y_i = \begin{cases} Y^* & Y^* > Y_L \\ Y_L & Y^* \le Y_L \end{cases}$$

Sample Selection Bias models are concerned with

$$Y_i = \begin{cases} Y^* & Y^* > Y_L \\ \text{no observation} & Y^* \le Y_L \end{cases}$$

### Linear Probability Model

LPM: the probability of the event occurring is assumed to be a linear function of a set of explanatory variables, and we fit the data with a linear regression model. Thus

$$E[Y|X] = \Pr[Y=1|X] = \beta_1 + \beta_2 X$$

### Drawbacks of the LPM

1. The error term does not have a normal distribution, so the standard errors and hence the t statistics are invalid. The distribution of the disturbance consists of just two specific values, not a continuum, which invalidates the standard errors and the usual test statistics.
2. There is heteroskedasticity in the errors: the two possible values of the disturbance change with the regressor.
3. The predicted value can be less than 0 or greater than 1, a nonsensical result for a probability.
4. Marginal effects are constant for any characterization.

### Logit

One way to overcome the problems with the LPM is to use a logit function:

$$\Pr(Y_i = 1|X_i) = F(Z_i) = \frac{1}{1 + e^{-Z_i}}$$

where $Z_i = \beta\ldots$
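Drawback 3 of the LPM and the way the logit link fixes it can be seen in a short sketch. The coefficient values below are hypothetical, chosen only to make the contrast visible; they do not come from the lectures.

```python
import numpy as np

def logit_prob(z):
    """Logistic CDF: F(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical coefficients and regressor values (illustrative only)
beta1, beta2 = -1.0, 0.8
X = np.array([-5.0, 0.0, 5.0])
Z = beta1 + beta2 * X

lpm_fitted = Z               # LPM "probabilities": unbounded
logit_fitted = logit_prob(Z)  # logit probabilities: always strictly in (0, 1)

print(lpm_fitted)    # can fall below 0 or exceed 1
print(logit_fitted)  # squashed into (0, 1) by the logistic CDF
```

For extreme regressor values the LPM produces fitted "probabilities" outside $[0, 1]$, while the logistic CDF maps the same linear index $Z_i$ into $(0, 1)$ for every value of $Z_i$.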

## This document was uploaded on 03/12/2014 for the course ECON 202 at University of London International Programmes (Distance Learning).
