1 Determinants of Wages
(Berndt, Chapter 5)

1.1 Human Capital Theory and the Log Wage Equation

- Equalizing differences: the returns to schooling and post-schooling investments must equalize the costs and gains of those investments.
- An individual chooses the level of schooling $s$ to maximize lifetime wealth:
  $$\max_s \lim_{T\to\infty} \int_s^T W(s) e^{-rt}\, dt = \max_s \left[ -\frac{W(s) e^{-rt}}{r} \right]_s^{\infty} = \max_s \frac{W(s) e^{-rs}}{r},$$
  where $W(s)$ is the wage associated with schooling $s$ and $r$ is the discount rate.
- The first-order condition, marginal gain minus marginal cost, is
  $$\frac{W'(s) e^{-rs}}{r} - W(s) e^{-rs} = 0,$$
  so the optimal level of schooling is given by $W'(s)/W(s) = r$.
- Suppose the only costs of schooling are those of forgone earnings. The rate of return on the first year of education, $r_1$, is then computed as incremental benefits divided by incremental costs: $r_1 = (W_1 - W)/W$, which can be rewritten as $W_1 = W(1 + r_1)$.
- Similarly, for year 2 of schooling, the rate of return $r_2$ is defined as $r_2 = (W_2 - W_1)/W_1$, which implies $W_2 = W_1(1 + r_2) = W(1 + r_1)(1 + r_2)$.
- After $s$ years of schooling, $W_s = W(1 + r_1)(1 + r_2)\cdots(1 + r_s)$.
- In general, $r_s$ is decreasing in $s$, meaning that there is some optimal level of schooling, at which the marginal return equals $r$ (as derived above).
- Assume $r_1 = r_2 = \cdots = r_s = \rho$ (we will relax this later) and approximate $(1 + \rho)$ by $e^{\rho}$ (a good approximation as long as $\rho$ is small). Then $W_s = W e^{\rho s}$.
- In the real world, wages are determined by many things other than schooling. Incorporate those things by appending a multiplicative disturbance term $e^u$, so that $W_s = W e^{\rho s} e^u$.
- Taking logs yields the log wage equation: $\log W_s = \log W + \rho s + u$.
- Rewriting this in a standard linear regression setting for an individual $i$:
  $$y_i = \beta_0 + \beta_1 s_i + u_i,$$
  where $y_i = \log W_i$, and $\beta_0$ and $\beta_1$ are the intercept and slope parameters.
- In the log wage equation model, $\beta_1$ has the interpretation of the return to schooling (we will see this later).
- Sometimes we can incorporate other variables into the equation. One example is adding experience and its square:
  $$y_i = \beta_0 + \beta_1 s_i + \beta_2 x_i + \beta_3 x_i^2 + u_i.$$
- When there is more than one explanatory variable, we call it multiple regression. Before we talk about multiple regression, we first discuss simple regression, where there is only one explanatory variable.

1.2 Simple Regression

- Regression analysis: e.g., the heights of fathers and their sons; e.g., schooling and log wages.
- We have $n$ subjects indexed by $i = 1, \ldots, n$ and two data variables $(x, y)$. A data variable stores a value for each subject: $(x_i, y_i)$.
- We explain a dependent variable...
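The log wage equation above can be estimated by exactly this kind of simple regression. The sketch below simulates data from $y_i = \beta_0 + \rho s_i + u_i$ and recovers the slope and intercept with the standard OLS formulas (slope $= \widehat{\mathrm{cov}}(s, y)/\widehat{\mathrm{var}}(s)$, intercept $= \bar{y} - \widehat{\beta}_1 \bar{s}$). The data are simulated, and the "true" values $\beta_0 = 1.5$ and $\rho = 0.08$ are illustrative assumptions, not estimates from any real sample.

```python
import numpy as np

# Simulate the log wage equation y_i = beta0 + rho * s_i + u_i
# with illustrative (assumed) parameters beta0 = 1.5, rho = 0.08.
rng = np.random.default_rng(0)
n = 1000
s = rng.integers(8, 17, size=n).astype(float)  # years of schooling (8..16)
u = rng.normal(0.0, 0.3, size=n)               # multiplicative disturbance, in logs
y = 1.5 + 0.08 * s + u                         # y_i = log W_i

# Simple-regression OLS estimates:
#   beta1_hat = sample cov(s, y) / sample var(s)
#   beta0_hat = ybar - beta1_hat * sbar
s_bar, y_bar = s.mean(), y.mean()
beta1_hat = np.sum((s - s_bar) * (y - y_bar)) / np.sum((s - s_bar) ** 2)
beta0_hat = y_bar - beta1_hat * s_bar
print(beta0_hat, beta1_hat)
```

With a slope estimate near 0.08, the interpretation from the notes is that each additional year of schooling is associated with roughly an 8% higher wage.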
This note was uploaded on 06/05/2008 for the course ECON 483, taught by Professor Seik Kim during the Spring '08 term at the University of Washington.