$\bar{x}$). Hence we can take $k_l$ such that $\|x_{k_l,l} - \bar{x}\| < 1/l$ and $\|y_l - \alpha_{k_l,l}(x_{k_l,l} - \bar{x})\| < 1/l$. Then $x_{k_l,l} \to \bar{x}$ and
\[
\|y - \alpha_{k_l,l}(x_{k_l,l} - \bar{x})\| \le \|y - y_l\| + \|y_l - \alpha_{k_l,l}(x_{k_l,l} - \bar{x})\| \to 0,
\]
so $y \in T_C(\bar{x})$.
The dual cone of $T_C(\bar{x})$ is called the normal cone at $\bar{x}$ and is denoted by $N_C(\bar{x})$. By the definition of the dual cone, we have
\[
N_C(\bar{x}) = \left\{ z \in \mathbb{R}^N \mid (\forall y \in T_C(\bar{x}))\ \langle y, z \rangle \le 0 \right\}.
\]
The following theorem is fundamental for constrained optimization.
Theorem 2. If $f$ is differentiable and $\bar{x}$ is a local solution of the problem
\[
\text{minimize } f(x) \quad \text{subject to } x \in C,
\]
then $-\nabla f(\bar{x}) \in N_C(\bar{x})$.
Proof. By the definition of the normal cone, it suffices to show that
\[
\langle -\nabla f(\bar{x}), y \rangle \le 0 \iff \langle \nabla f(\bar{x}), y \rangle \ge 0
\]
for all $y \in T_C(\bar{x})$. Let $y \in T_C(\bar{x})$ and take a sequence such that $\alpha_k \ge 0$, $x_k \to \bar{x}$, and $\alpha_k(x_k - \bar{x}) \to y$. Since $\bar{x}$ is a local solution, for sufficiently large $k$ we have $f(x_k) \ge f(\bar{x})$. Since $f$ is differentiable, we have
\[
0 \le f(x_k) - f(\bar{x}) = \langle \nabla f(\bar{x}), x_k - \bar{x} \rangle + o(\|x_k - \bar{x}\|).^{1}
\]
Multiplying both sides by $\alpha_k \ge 0$ and letting $k \to \infty$, we get
\[
0 \le \langle \nabla f(\bar{x}), \alpha_k(x_k - \bar{x}) \rangle + \|\alpha_k(x_k - \bar{x})\| \cdot \frac{o(\|x_k - \bar{x}\|)}{\|x_k - \bar{x}\|} \to \langle \nabla f(\bar{x}), y \rangle + \|y\| \cdot 0 = \langle \nabla f(\bar{x}), y \rangle.
\]
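To see Theorem 2 at work numerically (the particular $f$ and $C$ below are our own illustration), minimize $f(x) = \|x - (1,1)\|^2$ over the half-space $C = \{x : x_1 + x_2 \le 1\}$. The solution is $\bar{x} = (1/2, 1/2)$, where $-\nabla f(\bar{x}) = (1,1)$ points along the outward normal of the constraint, exactly as the theorem requires:

```python
import numpy as np

# Illustration of Theorem 2 (example ours, not the notes'):
# minimize f(x) = ||x - (1,1)||^2  subject to  x_1 + x_2 <= 1.
# The solution x_bar = (0.5, 0.5) lies on the boundary of C.
target = np.array([1.0, 1.0])
x_bar = np.array([0.5, 0.5])

grad = 2 * (x_bar - target)          # ∇f(x_bar) = (-1, -1)

# At a boundary point of the half-space {x : x_1 + x_2 <= 1}, the tangent
# cone is {y : y_1 + y_2 <= 0} and the normal cone is {t*(1,1) : t >= 0}.
# So -∇f(x_bar) = (1, 1) is in the normal cone, and for every tangent
# direction y we must have <-∇f(x_bar), y> <= 0.
rng = np.random.default_rng(1)
ys = rng.normal(size=(1000, 2))
ys = ys[ys.sum(axis=1) <= 0]         # keep tangent directions only

print(np.all(ys @ (-grad) <= 1e-12))  # True: obtuse angle with every y
```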
The geometrical interpretation of Theorem 2 is the following. By the previous lecture, $-\nabla f(\bar{x})$ is the direction in which $f$ decreases fastest around the point $\bar{x}$. The tangent cone $T_C(\bar{x})$ consists of the directions in which $x$ can move from $\bar{x}$ without violating the constraint $x \in C$. Hence, in order for $\bar{x}$ to be a local minimum, $-\nabla f(\bar{x})$ must make an obtuse angle with every vector in the tangent cone, for otherwise $f$ could be decreased further. This is the same as $-\nabla f(\bar{x})$ belonging to the normal cone.

3 Karush-Kuhn-Tucker theorem

Theorem 2 is very general. Usually, we are interested in cases where the constraint set $C$ is given parametrically. Consider the minimization problem
\[
\begin{aligned}
\text{minimize} \quad & f(x) \\
\text{subject to} \quad & g_i(x) \le 0 \quad (i = 1, \dots, I), \\
& h_j(x) = 0 \quad (j = 1, \dots, J). \qquad (2)
\end{aligned}
\]

$^1$ $o(h)$ represents any quantity $q(h)$ such that $q(h)/h \to 0$ as $h \to 0$.

2014W Econ 172B Operations Research (B) Alexis Akira Toda

This problem is a special case of problem (1) by setting
\[
C = \left\{ x \in \mathbb{R}^N \mid (\forall i)\ g_i(x) \le 0,\ (\forall j)\ h_j(x) = 0 \right\}.
\]
$g_i(x) \le 0$ is called an inequality constraint, and $h_j(x) = 0$ is called an equality constraint.
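A minimal sketch of solving one instance of this parametric form numerically (our own example, with a single inequality constraint and no equality constraints). Because the constraint set here is a half-space, the projection onto $C$ has the standard closed form, and projected gradient descent converges to the solution:

```python
import numpy as np

# A small instance of problem (2) (example ours, not the notes'):
#   minimize f(x) = ||x - (2, 2)||^2   s.t.  g_1(x) = x_1 + x_2 - 1 <= 0.
# C is the half-space {x : a·x <= b} with a = (1, 1), b = 1, so projection
# onto C is explicit, and the solution is x_bar = (0.5, 0.5).
a, b = np.array([1.0, 1.0]), 1.0
grad_f = lambda x: 2 * (x - np.array([2.0, 2.0]))

def project(x):
    # Euclidean projection onto the half-space {x : a·x <= b}.
    return x - max(0.0, (a @ x - b) / (a @ a)) * a

x = np.zeros(2)
for _ in range(200):
    x = project(x - 0.1 * grad_f(x))  # gradient step, then project back to C

print(np.round(x, 4))  # ≈ [0.5 0.5]
```

For general $g_i$ and $h_j$ the projection has no closed form, and one would use a nonlinear programming solver instead; the point of the sketch is only that problem (2) is an ordinary constrained minimization over the set $C$ defined above.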
Let $\bar{x} \in C$ be a local solution. To study the shape of $C$ around $\bar{x}$, we define...
This document was uploaded on 02/18/2014 for the course ECON 172b at UCSD.