Machina Mathematical Handout (Econ 100A)



...several variables. In other words, the conditions for $(x_1^*,\dots,x_n^*)$ to be a solution to the constrained maximization problem:

$$\max_{x_1,\dots,x_n} f(x_1,\dots,x_n) \quad \text{subject to} \quad g(x_1,\dots,x_n) = c$$

is that no legal tradeoff between any pair of variables $x_i$ and $x_j$ be able to affect the value of the objective function. In other words, the tradeoff rate between $x_i$ and $x_j$ that preserves the value of $g(x_1,\dots,x_n)$ must be the same as the tradeoff rate between $x_i$ and $x_j$ that preserves the value of $f(x_1,\dots,x_n)$. We thus have the condition:

$$\left.\frac{dx_i}{dx_j}\right|_{\Delta g = 0} = \left.\frac{dx_i}{dx_j}\right|_{\Delta f = 0} \quad \text{for all } i \text{ and } j$$

or in other words, that:

$$-\frac{g_j(x_1,\dots,x_n)}{g_i(x_1,\dots,x_n)} = -\frac{f_j(x_1,\dots,x_n)}{f_i(x_1,\dots,x_n)} \quad \text{for all } i \text{ and } j$$

Again, the only way to ensure that these ratios will be equal for all $i$ and $j$ is to have:

$$\begin{aligned}
f_1(x_1^*,\dots,x_n^*) &= \lambda \cdot g_1(x_1^*,\dots,x_n^*) \\
f_2(x_1^*,\dots,x_n^*) &= \lambda \cdot g_2(x_1^*,\dots,x_n^*) \\
&\;\;\vdots \\
f_n(x_1^*,\dots,x_n^*) &= \lambda \cdot g_n(x_1^*,\dots,x_n^*)
\end{aligned}$$

To summarize: the first order conditions for the constrained maximization problem:

$$\max_{x_1,\dots,x_n} f(x_1,\dots,x_n) \quad \text{subject to} \quad g(x_1,\dots,x_n) = c$$

are that the solutions $(x_1^*,\dots,x_n^*)$ satisfy the equations:

$$\begin{aligned}
f_1(x_1^*,\dots,x_n^*) &= \lambda \cdot g_1(x_1^*,\dots,x_n^*) \\
f_2(x_1^*,\dots,x_n^*) &= \lambda \cdot g_2(x_1^*,\dots,x_n^*) \\
&\;\;\vdots \\
f_n(x_1^*,\dots,x_n^*) &= \lambda \cdot g_n(x_1^*,\dots,x_n^*)
\end{aligned}$$

and the constraint:

$$g(x_1^*,\dots,x_n^*) = c$$

Once again, the easy way to remember this is simply that the normal vector of $f(x_1,\dots,x_n)$ be a scalar multiple of the normal vector of $g(x_1,\dots,x_n)$ at the optimal point, i.e.:

$$\big(f_1(x_1^*,\dots,x_n^*),\,\dots,\,f_n(x_1^*,\dots,x_n^*)\big) = \lambda \cdot \big(g_1(x_1^*,\dots,x_n^*),\,\dots,\,g_n(x_1^*,\dots,x_n^*)\big)$$

and also that the constraint $g(x_1^*,\dots,x_n^*) = c$ be satisfied.

Lagrangians

The first order conditions for the above constrained maximization problem are just a system of $n+1$ equations in the $n+1$ unknowns $x_1,\dots,x_n$ and $\lambda$.
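To make the gradient-proportionality condition concrete, here is a minimal numeric sketch in Python. The particular objective $f(x, y) = x \cdot y$ and constraint $x + y = 10$ are an invented illustration, not from the handout; at the solution $x^* = y^* = 5$, the normal vector of $f$ is $\lambda$ times the normal vector of $g$, with $\lambda = 5$.

```python
# Hypothetical example (not from the handout): maximize f(x, y) = x*y
# subject to g(x, y) = x + y = 10.  The first order conditions are
#   f_1 = lam * g_1   and   f_2 = lam * g_2,   i.e.   y = lam  and  x = lam,
# together with the constraint x + y = 10, giving x* = y* = 5 and lam = 5.

def grad_f(x, y):
    # Normal (gradient) vector of f(x, y) = x*y
    return (y, x)

def grad_g(x, y):
    # Normal (gradient) vector of g(x, y) = x + y
    return (1.0, 1.0)

x_star, y_star, lam = 5.0, 5.0, 5.0

fx, fy = grad_f(x_star, y_star)
gx, gy = grad_g(x_star, y_star)

# The normal vector of f is a scalar multiple of the normal vector of g...
assert abs(fx - lam * gx) < 1e-12
assert abs(fy - lam * gy) < 1e-12
# ...and the constraint g(x*, y*) = c holds:
assert abs(x_star + y_star - 10.0) < 1e-12
```

Checking a candidate point this way (gradient proportionality plus the constraint) is exactly the mnemonic in the text; it does not by itself distinguish a maximum from a minimum, which requires second order conditions.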
Personally, I suggest that you get these first order conditions the direct way, by simply setting the normal vector of $f(x_1,\dots,x_n)$ equal to a scalar multiple of the normal vector of $g(x_1,\dots,x_n)$ (with scale factor $\lambda$). However, another way to obtain these equations is to construct the Lagrangian function:

$$L(x_1,\dots,x_n,\lambda) \equiv f(x_1,\dots,x_n) + \lambda \cdot [\,c - g(x_1,\dots,x_n)\,]$$

(where $\lambda$ is called the Lagrangian multiplier). Then, if we calculate the partial derivatives $\partial L/\partial x_1, \dots, \partial L/\partial x_n$ and $\partial L/\partial \lambda$ and set them all equal to zero, we get the equations:

$$\begin{aligned}
\frac{\partial L(x_1^*,\dots,x_n^*,\lambda)}{\partial x_1} &= f_1(x_1^*,\dots,x_n^*) - \lambda \cdot g_1(x_1^*,\dots,x_n^*) = 0 \\
\frac{\partial L(x_1^*,\dots,x_n^*,\lambda)}{\partial x_2} &= f_2(x_1^*,\dots,x_n^*) - \lambda \cdot g_2(x_1^*,\dots,x_n^*) = 0 \\
&\;\;\vdots \\
\frac{\partial L(x_1^*,\dots,x_n^*,\lambda)}{\partial x_n} &= f_n(x_1^*,\dots,x_n^*) - \lambda \cdot g_n(x_1^*,\dots,x_n^*) = 0 \\
\frac{\partial L(x_1^*,\dots,x_n^*,\lambda)}{\partial \lambda} &= c - g(x_1^*,\dots,x_n^*) = 0
\end{aligned}$$

But these equations are the same as our original $n+1$ first order conditions. In other words...
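The Lagrangian route can also be checked numerically. The sketch below uses the same invented example as before ($f = x \cdot y$, $g = x + y$, $c = 10$; none of these come from the handout) and verifies by finite differences that all $n+1$ partial derivatives of $L$ vanish at the candidate solution:

```python
# Minimal sketch, assuming the invented example f(x, y) = x*y, g(x, y) = x + y,
# c = 10 from above.  L(x, y, lam) = f + lam*(c - g); its partials are
#   dL/dx = y - lam,   dL/dy = x - lam,   dL/dlam = c - (x + y),
# all of which are zero at (x*, y*, lam) = (5, 5, 5).

def lagrangian(x, y, lam, c=10.0):
    f = x * y
    g = x + y
    return f + lam * (c - g)

def partial(fn, args, i, h=1e-6):
    # Central finite-difference estimate of the i-th partial derivative of fn
    up = list(args); up[i] += h
    dn = list(args); dn[i] -= h
    return (fn(*up) - fn(*dn)) / (2 * h)

point = (5.0, 5.0, 5.0)   # candidate (x*, y*, lam)
grads = [partial(lagrangian, point, i) for i in range(3)]

# Setting all n+1 partials of L to zero reproduces the first order conditions:
assert all(abs(gv) < 1e-6 for gv in grads)
```

Note that the last partial, $\partial L/\partial \lambda = 0$, is just the constraint itself, which is why the Lagrangian packages the $n$ gradient conditions and the constraint into one system.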

This document was uploaded on 09/18/2013.
