to be a local
minimum is that ƒ(x1,..., xn) be a weakly convex function of (x1,..., xn) (i.e., the bottom of a bowl)
in the locality of this point. Thus, if there is only one control variable x, the second order
condition is that ƒ ″(x*) > 0. If there are two control variables, the conditions are:
ƒ11(x1*, x2*) > 0
ƒ22(x1*, x2*) > 0

and

| ƒ11(x1*, x2*)   ƒ12(x1*, x2*) |
| ƒ21(x1*, x2*)   ƒ22(x1*, x2*) |  >  0

(yes, this last determinant really is supposed to be positive).

Econ 100A Mathematical Handout

First Order Conditions for Constrained Optimization Problems (VERY important)

The first order conditions for the two-variable constrained optimization problem:
max  ƒ(x1, x2)   subject to   g(x1, x2) = c
x1, x2

are easy to see from the following diagram:
[Figure: level curves of ƒ(x1, x2) and the constraint curve g(x1, x2) = c in the (x1, x2) plane, with the constrained optimum marked at (x1*, x2*).]
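Before turning to the diagram, the two-variable second-order conditions above can be checked numerically. A minimal sketch, using an illustrative function of my own choosing (ƒ(x1, x2) = x1² + x1·x2 + x2², not from the handout), whose critical point is at the origin:

```python
# Check the second-order conditions for a local minimum of
#   f(x1, x2) = x1**2 + x1*x2 + x2**2
# at its critical point (0, 0), where f1 = f2 = 0.
# (Illustrative function; not from the handout.)

def f11(x1, x2):  # second partial with respect to x1
    return 2.0

def f22(x1, x2):  # second partial with respect to x2
    return 2.0

def f12(x1, x2):  # cross partial (equals f21 by Young's theorem)
    return 1.0

x1s, x2s = 0.0, 0.0  # the critical point

cond1 = f11(x1s, x2s) > 0
cond2 = f22(x1s, x2s) > 0
det = f11(x1s, x2s) * f22(x1s, x2s) - f12(x1s, x2s) ** 2
cond3 = det > 0  # yes, the determinant must be positive

print(cond1, cond2, cond3)  # all True, so (0, 0) is a local minimum
```

All three conditions hold (the determinant is 2·2 − 1² = 3 > 0), confirming a local minimum at the origin.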
The point (x1*, x2*) is clearly not an unconstrained maximum, since increasing both x1 and x2
would move you to a higher level curve for ƒ(x1, x2). However, this change is not “legal” since it
does not satisfy the constraint – it would move you off of the level curve g(x1, x2) = c. In order to
stay on the level curve, we must jointly change x1 and x2 in a manner which preserves the value
of g(x1, x2). That is, we can only trade off x1 against x2 at the “legal” rate:

dx2/dx1 |g(x1,x2)=c  =  dx2/dx1 |Δg=0  =  − g1(x1, x2) / g2(x1, x2)

The condition for maximizing ƒ(x1, x2) subject to g(x1, x2) = c is that no tradeoff between x1 and x2
at this “legal” rate be able to raise the value of ƒ(x1, x2). This is the same as saying that the level
curve of the constraint function be tangent to the level curve of the objective function. In other
words, the tradeoff rate which preserves the value of g(x1, x2) (the “legal” rate) must be the same
as the tradeoff rate that preserves the value of ƒ(x1, x2). We thus have the condition:
dx2/dx1 |Δg=0  =  dx2/dx1 |Δƒ=0

which implies that:

− g1(x1, x2) / g2(x1, x2)  =  − ƒ1(x1, x2) / ƒ2(x1, x2)

which is in turn equivalent to:

ƒ1(x1*, x2*) = λ · g1(x1*, x2*)
ƒ2(x1*, x2*) = λ · g2(x1*, x2*)

for some scalar λ.
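To see these conditions at work numerically, here is a minimal sketch for a concrete problem of my own choosing (maximize ƒ(x1, x2) = x1·x2 subject to x1 + x2 = 10, whose solution is x1* = x2* = 5 with λ = 5); the example problem is illustrative and not from the handout:

```python
# Verify the first order conditions f1 = λ*g1, f2 = λ*g2 and the
# constraint g = c for the illustrative problem:
#   max x1*x2  subject to  x1 + x2 = 10.
# (Example problem chosen for illustration; not from the handout.)

def f1(x1, x2):  # ∂ƒ/∂x1 for ƒ = x1*x2
    return x2

def f2(x1, x2):  # ∂ƒ/∂x2
    return x1

def g1(x1, x2):  # ∂g/∂x1 for g = x1 + x2
    return 1.0

def g2(x1, x2):  # ∂g/∂x2
    return 1.0

x1s, x2s, lam = 5.0, 5.0, 5.0  # candidate solution and multiplier
c = 10.0

assert f1(x1s, x2s) == lam * g1(x1s, x2s)  # ƒ1 = λ·g1
assert f2(x1s, x2s) == lam * g2(x1s, x2s)  # ƒ2 = λ·g2
assert x1s + x2s == c                      # constraint g(x1*, x2*) = c
print("first order conditions hold at (5, 5) with λ = 5")
```

Any point with x1 ≠ x2 on the constraint line would fail the tangency test, since ƒ1/ƒ2 = x2/x1 would differ from g1/g2 = 1.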
To summarize, we have that the first order conditions for the constrained maximization problem:
max  ƒ(x1, x2)   subject to   g(x1, x2) = c
x1, x2

are that the solutions (x1*, x2*) satisfy the equations:

ƒ1(x1*, x2*) = λ · g1(x1*, x2*)
ƒ2(x1*, x2*) = λ · g2(x1*, x2*)
g(x1*, x2*) = c

for some scalar λ. An easy way to remember these conditions is simply that the normal vector to
ƒ(x1, x2) at the optimal point (x1*, x2*) must be a scalar multiple of the normal vector to g(x1, x2) at
the optimal point (x1*, x2*), i.e. that:

( ƒ1(x1*, x2*) , ƒ2(x1*, x2*) )  =  λ · ( g1(x1*, x2*) , g2(x1*, x2*) )

and also that the constraint g(x1*, x2*) = c be satisfied. This same principle extends to the case of se...
