as follows. The set of indices for which the inequality constraints are binding,

    I(x̄) = {i : g_i(x̄) = 0},

is called the active set. Assume that the g_i's and h_j's are differentiable. The set

    LC(x̄) = {y ∈ R^N : (∀i ∈ I(x̄)) ⟨∇g_i(x̄), y⟩ ≤ 0, (∀j) ⟨∇h_j(x̄), y⟩ = 0}

is called the linearizing cone of the constraints g_i and h_j. (LC(x̄) is always convex, since it is cut out by linear inequalities and equalities in y.) The reason why LC(x̄) is called the linearizing cone is the following. Since
    g_i(x̄ + ty) − g_i(x̄) = t⟨∇g_i(x̄), y⟩ + o(t),

the point x = x̄ + ty almost satisfies the constraint g_i(x) ≤ 0 if g_i(x̄) = 0 (i.e., i is an active constraint) and ⟨∇g_i(x̄), y⟩ ≤ 0. The same holds for the h_j's. Thus y ∈ LC(x̄) implies that from x̄ we can move slightly in the direction of y and still (approximately) satisfy the constraints. Hence we can expect the linearizing cone to be approximately equal to the tangent cone. The following proposition makes this statement precise.
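To make the definitions concrete, here is a small numerical sketch. The constraint set, the point x̄, and the helper functions `active_set` and `in_linearizing_cone` are all illustrative assumptions (not from the text above); the script simply evaluates the defining inequalities of I(x̄) and LC(x̄).

```python
import numpy as np

# Hypothetical example (not from the notes): C = {x in R^2 :
#   g1(x) = x1^2 + x2^2 - 1 <= 0,   g2(x) = -x1 <= 0 }.
g = [lambda x: x[0]**2 + x[1]**2 - 1.0,
     lambda x: -x[0]]
grad = [lambda x: np.array([2.0 * x[0], 2.0 * x[1]]),
        lambda x: np.array([-1.0, 0.0])]

def active_set(x, tol=1e-9):
    """Indices i with g_i(x) = 0, i.e. the binding inequality constraints."""
    return [i for i in range(len(g)) if abs(g[i](x)) <= tol]

def in_linearizing_cone(x, y, tol=1e-9):
    """Check <grad g_i(x), y> <= 0 for every active i (no equality constraints here)."""
    return all(grad[i](x) @ y <= tol for i in active_set(x))

x_bar = np.array([0.0, 1.0])   # both constraints are active at this point
print(active_set(x_bar))                                    # [0, 1]
print(in_linearizing_cone(x_bar, np.array([1.0, -1.0])))    # True:  y1 >= 0, y2 <= 0
print(in_linearizing_cone(x_bar, np.array([-1.0, 0.0])))    # False: violates -y1 <= 0
```

At x̄ = (0, 1) the active gradients are ∇g₁(x̄) = (0, 2) and ∇g₂(x̄) = (−1, 0), so LC(x̄) = {y : y₂ ≤ 0, y₁ ≥ 0}, which the two membership tests confirm.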
Proposition 3. Suppose that x̄ ∈ C. Then cl co TC(x̄) ⊂ LC(x̄), where cl co denotes the closed convex hull.
Proof. Since LC(x̄) is a closed convex cone, it suffices to prove TC(x̄) ⊂ LC(x̄). Let y ∈ TC(x̄). Take a sequence {α_k} with α_k ≥ 0 and C ∋ x_k → x̄ such that α_k(x_k − x̄) → y. Since g_i(x̄) = 0 for i ∈ I(x̄) and g_i is differentiable, we get

    0 ≥ g_i(x_k) = g_i(x_k) − g_i(x̄) = ⟨∇g_i(x̄), x_k − x̄⟩ + o(‖x_k − x̄‖).

Multiplying both sides by α_k ≥ 0 and letting k → ∞, we get

    0 ≥ ⟨∇g_i(x̄), α_k(x_k − x̄)⟩ + ‖α_k(x_k − x̄)‖ · o(‖x_k − x̄‖)/‖x_k − x̄‖
      → ⟨∇g_i(x̄), y⟩ + ‖y‖ · 0 = ⟨∇g_i(x̄), y⟩.

A similar argument applies to the h_j's. Hence y ∈ LC(x̄).
Note that the tangent cone is defined directly by the constraint set C, whereas the linearizing cone is defined through the functions that define the set C. Therefore different parametrizations of the same set may lead to different linearizing cones (exercise).
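As a standard illustration of this phenomenon (this example is my own, not from the text), consider C = {0} ⊂ R. If we parametrize it by the equality constraint h(x) = x = 0, then LC(0) = {y : ⟨1, y⟩ = 0} = {0}, which equals TC(0). If instead we parametrize the same set by the inequality constraint g(x) = x² ≤ 0, then ∇g(0) = 0, so every y satisfies ⟨∇g(0), y⟩ ≤ 0 and LC(0) = R, which strictly contains TC(0) = {0}. Thus two parametrizations of the same C give different linearizing cones.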
The main result in static optimization is the following.

Theorem 4 (Karush–Kuhn–Tucker). Suppose that f, g_i, h_j are differentiable and x̄ is a local solution to the minimization problem (2). If LC(x̄) ⊂ cl co TC(x̄), then there exist vectors (called Lagrange multipliers) λ ∈ R^I_+ and μ ∈ R^J such that

    ∇f(x̄) + Σ_{i=1}^I λ_i ∇g_i(x̄) + Σ_{j=1}^J μ_j ∇h_j(x̄) = 0    (first-order condition)

    (∀i) λ_i g_i(x̄) = 0    (complementary slackness)

(This is the most important theorem of the course.)
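As a sanity check of the theorem's conclusion, here is a minimal numerical sketch. The problem instance, its solution x̄ = (1/2, 1/2), and the multiplier λ = 1 are assumptions chosen for illustration (solving the first-order condition by hand on this hypothetical instance, not anything stated in the notes).

```python
import numpy as np

# Hypothetical instance (not from the notes):
#   minimize f(x) = x1^2 + x2^2   subject to   g(x) = 1 - x1 - x2 <= 0.
# The constraint is active at the solution x = (1/2, 1/2), and solving the
# first-order condition grad f + lambda * grad g = 0 by hand gives lambda = 1.
f      = lambda x: x @ x
grad_f = lambda x: 2.0 * x
g      = lambda x: 1.0 - x[0] - x[1]
grad_g = lambda x: np.array([-1.0, -1.0])

x_bar = np.array([0.5, 0.5])
lam = 1.0

# First-order condition: grad f(x_bar) + lam * grad g(x_bar) should vanish.
stationarity = grad_f(x_bar) + lam * grad_g(x_bar)
print(stationarity)                               # [0. 0.]

# Dual feasibility (lam >= 0) and complementary slackness (lam * g(x_bar) = 0).
print(lam >= 0 and abs(lam * g(x_bar)) < 1e-12)   # True
```

Here g(x̄) = 0, so the constraint is active and λ > 0 is consistent with complementary slackness; moving the solution into the interior of the feasible set would instead force λ = 0.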