
The determinant of the bordered Hessian equals
\[
-p_1\Big(p_1\alpha_2(\alpha_2-1)(x_1-\gamma_1)^{\alpha_1}(x_2-\gamma_2)^{\alpha_2-2}-p_2\alpha_1\alpha_2(x_1-\gamma_1)^{\alpha_1-1}(x_2-\gamma_2)^{\alpha_2-1}\Big)
+p_2\Big(p_1\alpha_1\alpha_2(x_1-\gamma_1)^{\alpha_1-1}(x_2-\gamma_2)^{\alpha_2-1}-p_2\alpha_1(\alpha_1-1)(x_1-\gamma_1)^{\alpha_1-2}(x_2-\gamma_2)^{\alpha_2}\Big).
\]
If we set \(z_1=(x_1-\gamma_1)\) and \(z_2=(x_2-\gamma_2)\), this reduces to
\[
-p_1\Big(p_1\alpha_2(\alpha_2-1)z_1^{\alpha_1}z_2^{\alpha_2-2}-p_2\alpha_1\alpha_2\,z_1^{\alpha_1-1}z_2^{\alpha_2-1}\Big)
+p_2\Big(p_1\alpha_1\alpha_2\,z_1^{\alpha_1-1}z_2^{\alpha_2-1}-p_2\alpha_1(\alpha_1-1)z_1^{\alpha_1-2}z_2^{\alpha_2}\Big).
\]
This is equivalent to Eq. (1) above with \(z_1, z_2\) in place of \(x_1, x_2\), so this is basically just the Cobb-Douglas case as long as \(z_1, z_2 > 0\). The requirements will therefore be the same.
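As a numerical sanity check on the reduction above, the following sketch (all parameter values are assumed for illustration, not taken from the problem) evaluates the bordered-Hessian determinant for \(u(x)=(x_1-\gamma_1)^{\alpha_1}(x_2-\gamma_2)^{\alpha_2}\) at an interior point with \(z_1, z_2 > 0\), and confirms it is positive, as the second-order condition for a constrained maximum with two goods requires.

```python
# Sanity check (assumed parameter values): the bordered-Hessian determinant
# for u(x) = (x1 - g1)^a1 * (x2 - g2)^a2 should be positive at an interior
# point with z1, z2 > 0 when 0 < a1, a2 < 1.

def bordered_hessian_det(x1, x2, a1, a2, g1, g2, p1, p2):
    z1, z2 = x1 - g1, x2 - g2
    # Second partials of the objective:
    u11 = a1 * (a1 - 1) * z1 ** (a1 - 2) * z2 ** a2
    u22 = a2 * (a2 - 1) * z1 ** a1 * z2 ** (a2 - 2)
    u12 = a1 * a2 * z1 ** (a1 - 1) * z2 ** (a2 - 1)
    # Determinant of [[0, -p1, -p2], [-p1, u11, u12], [-p2, u12, u22]],
    # expanded along the first row; matches the factored form in the text.
    return -p1 * (p1 * u22 - p2 * u12) + p2 * (p1 * u12 - p2 * u11)

# Hypothetical values with z1 = 2 > 0 and z2 = 3 > 0:
d = bordered_hessian_det(x1=3.0, x2=4.0, a1=0.5, a2=0.5,
                         g1=1.0, g2=1.0, p1=1.0, p2=1.0)
print(d > 0)  # True: second-order condition for a maximum holds
```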
iii. Constant Elasticity of Substitution: \(u(x)=\big(\alpha_1 x_1^{\rho}+\alpha_2 x_2^{\rho}\big)^{1/\rho}\)

I am going to take the transformation \(y\mapsto y^{\rho}\) of the objective, so the Lagrangian is
\[
L(x,\lambda)=\alpha_1 x_1^{\rho}+\alpha_2 x_2^{\rho}-\lambda(p_1x_1+p_2x_2-w).
\]
This has FONCs
\[
\alpha_1\rho x_1^{\rho-1}-\lambda p_1=0,\qquad
\alpha_2\rho x_2^{\rho-1}-\lambda p_2=0,\qquad
-(p_1x_1+p_2x_2-w)=0.
\]
The first two equations imply that
\[
\frac{\alpha_1 x_1^{\rho-1}}{\alpha_2 x_2^{\rho-1}}=\frac{p_1}{p_2}.
\]
Let's raise this to the \(\tfrac{1}{\rho-1}\) power to get
\[
\frac{x_1}{x_2}=\left(\frac{p_1\alpha_2}{p_2\alpha_1}\right)^{\frac{1}{\rho-1}}=\gamma(p_1,p_2).
\]
Substitute \(x_2=x_1/\gamma(p_1,p_2)\) into the constraint to get
\[
p_1x_1+p_2\,\frac{x_1}{\gamma(p_1,p_2)}=w,
\]
or
\[
x_1=\frac{\gamma(p_1,p_2)\,w}{p_1\gamma(p_1,p_2)+p_2}
\quad\text{and}\quad
x_2=\frac{w}{p_1\gamma(p_1,p_2)+p_2}.
\]
The comparative statics are (writing \(\gamma_2=\partial\gamma/\partial p_2\))
\[
\frac{\partial x_1}{\partial p_2}
=\frac{\gamma_2\big(p_1\gamma(p_1,p_2)+p_2\big)-\big(p_1\gamma_2+1\big)\gamma(p_1,p_2)}{\big(p_1\gamma(p_1,p_2)+p_2\big)^2}\,w,
\]
whose sign is ambiguous, and
\[
\frac{\partial x_1}{\partial w}=\frac{\gamma(p_1,p_2)}{p_1\gamma(p_1,p_2)+p_2}>0.
\]

iv. Leontief: \(u(x)=\min\{\alpha_1x_1,\alpha_2x_2\}\)

Well, the Lagrangian is
\[
L(x,\lambda)=\min\{\alpha_1x_1,\alpha_2x_2\}-\lambda(p_1x_1+p_2x_2-w).
\]
The objective is non-differentiable wherever \(\alpha_1x_1=\alpha_2x_2\), so we must add all of those points to the candidate list. Otherwise, the gradient of the objective is \((\alpha_1,0)\) whenever \(\alpha_1x_1<\alpha_2x_2\), and \((0,\alpha_2)\) whenever \(\alpha_2x_2<\alpha_1x_1\). None of these points can satisfy the first-order conditions, however: the gradient of the constraint is \((p_1,p_2)\), but the gradient of the objective, \((\alpha_1,0)\) or \((0,\alpha_2)\), vanishes in one component. For example, the system of equations
\[
\alpha_1-\lambda p_1=0,\qquad 0-\lambda p_2=0,\qquad -(p_1x_1+p_2x_2-w)=0
\]
cannot be solved, since the first equation implies \(\lambda=\alpha_1/p_1>0\) but the second implies \(\lambda=0\). Therefore, we must add all of these points to the candidate list as well. That implies that... everything in \(\mathbb{R}^2_+\) is on the candidate list.

Now, we must be a bit more clever. Suppose the condition \(\alpha_1x_1=\alpha_2x_2\) fails; in particular, assume \(\alpha_1x_1>\alpha_2x_2\). Then if we take \(\varepsilon\) away from \(x_1\) and re-allocate the freed expenditure \(p_1\varepsilon\) to \(x_2\), the value of the objective increases to \(\alpha_2(x_2+p_1\varepsilon/p_2)>\alpha_2x_2\) for \(\varepsilon\) small enough. So at any solution, we must have \(\alpha_1x_1=\alpha_2x_2\).
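The CES demands derived above can be checked numerically. The sketch below (parameter values are assumed for illustration) evaluates the closed forms \(x_1=\gamma w/(p_1\gamma+p_2)\) and \(x_2=w/(p_1\gamma+p_2)\) and verifies that the budget constraint binds and that the first-order-condition ratio equals \(p_1/p_2\).

```python
# Check (assumed values): closed-form CES demands with
# gamma = (p1*a2 / (p2*a1)) ** (1/(rho-1)).

def ces_demand(p1, p2, w, a1, a2, rho):
    gamma = (p1 * a2 / (p2 * a1)) ** (1.0 / (rho - 1.0))
    x1 = gamma * w / (p1 * gamma + p2)
    x2 = w / (p1 * gamma + p2)
    return x1, x2

p1, p2, w, a1, a2, rho = 2.0, 1.0, 10.0, 0.4, 0.6, 0.5
x1, x2 = ces_demand(p1, p2, w, a1, a2, rho)

# Budget constraint binds exactly:
print(abs(p1 * x1 + p2 * x2 - w) < 1e-9)  # True
# FONC ratio: a1*x1^(rho-1) / (a2*x2^(rho-1)) == p1/p2
mrs = (a1 * x1 ** (rho - 1.0)) / (a2 * x2 ** (rho - 1.0))
print(abs(mrs - p1 / p2) < 1e-9)  # True
```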
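The Leontief argument pins the solution to the kink \(\alpha_1x_1=\alpha_2x_2\); combined with the budget constraint, this gives \(x_1=w/(p_1+p_2\alpha_1/\alpha_2)\). The sketch below (with assumed parameter values) confirms that this bundle beats every other bundle on the budget line via a coarse grid search.

```python
# Check (assumed values): Leontief demand from the kink condition
# a1*x1 == a2*x2 together with the budget constraint.

def leontief_demand(p1, p2, w, a1, a2):
    x1 = w / (p1 + p2 * a1 / a2)
    return x1, a1 * x1 / a2

p1, p2, w, a1, a2 = 1.0, 2.0, 12.0, 3.0, 1.0
x1, x2 = leontief_demand(p1, p2, w, a1, a2)
best = min(a1 * x1, a2 * x2)

# Coarse grid search over bundles that exhaust the budget:
for i in range(1001):
    t1 = (i / 1000.0) * (w / p1)   # candidate x1 on the budget line
    t2 = (w - p1 * t1) / p2        # remaining income spent on x2
    assert min(a1 * t1, a2 * t2) <= best + 1e-9  # nothing does better

print(abs(a1 * x1 - a2 * x2) < 1e-9)     # True: kink condition holds
print(abs(p1 * x1 + p2 * x2 - w) < 1e-9)  # True: budget exhausted
```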
