Lecture 4


… as follows. The set of indices for which the inequality constraints are binding,

$$I(\bar x) = \{\, i \mid g_i(\bar x) = 0 \,\},$$

is called the active set. Assume that the $g_i$'s and $h_j$'s are differentiable. The set

$$LC(\bar x) = \{\, y \in \mathbb{R}^N \mid \langle \nabla g_i(\bar x), y \rangle \le 0 \ (\forall i \in I(\bar x)),\ \langle \nabla h_j(\bar x), y \rangle = 0 \ (\forall j) \,\}$$

is called the linearizing cone of the constraints $g_i$ and $h_j$. (It is always convex, since it is defined by linear inequalities and equalities.) The reason why $LC(\bar x)$ is called the linearizing cone is the following. Since

$$g_i(\bar x + t y) - g_i(\bar x) = t \langle \nabla g_i(\bar x), y \rangle + o(t),$$

the point $x = \bar x + t y$ almost satisfies the constraint $g_i(x) \le 0$ if $g_i(\bar x) = 0$ ($i$ is an active constraint) and $\langle \nabla g_i(\bar x), y \rangle \le 0$. The same holds for the $h_j$'s. Thus $y \in LC(\bar x)$ implies that from $\bar x$ we can move slightly in the direction of $y$ and still (approximately) satisfy the constraints. Hence we can expect the linearizing cone to be approximately equal to the tangent cone. The following proposition makes this statement precise.

Proposition 3. Suppose that $\bar x \in C$. Then $\operatorname{co} T_C(\bar x) \subset LC(\bar x)$.

Proof. Since $LC(\bar x)$ is a closed convex cone, it suffices to prove $T_C(\bar x) \subset LC(\bar x)$. Let $y \in T_C(\bar x)$. Take sequences $\alpha_k \ge 0$ and $C \ni x_k \to \bar x$ such that $\alpha_k (x_k - \bar x) \to y$. Since $g_i(\bar x) = 0$ for $i \in I(\bar x)$ and $g_i$ is differentiable, we get

$$0 \ge g_i(x_k) = g_i(x_k) - g_i(\bar x) = \langle \nabla g_i(\bar x), x_k - \bar x \rangle + o(\lVert x_k - \bar x \rVert).$$

Multiplying both sides by $\alpha_k \ge 0$ and letting $k \to \infty$, we get

$$0 \ge \langle \nabla g_i(\bar x), \alpha_k (x_k - \bar x) \rangle + \lVert \alpha_k (x_k - \bar x) \rVert \cdot \frac{o(\lVert x_k - \bar x \rVert)}{\lVert x_k - \bar x \rVert} \to \langle \nabla g_i(\bar x), y \rangle + \lVert y \rVert \cdot 0 = \langle \nabla g_i(\bar x), y \rangle.$$

A similar argument applies to the $h_j$'s, with equalities in place of the inequalities. Hence $y \in LC(\bar x)$.

Note that the tangent cone is defined directly by the constraint set $C$, whereas the linearizing cone is defined through the functions that define the set $C$. Therefore different parametrizations of the same set may lead to different linearizing cones (exercise; an example is given after the theorem below).

The main result in static optimization is the following.

Theorem 4 (Karush-Kuhn-Tucker; the most important theorem of the course). Suppose that $f$, the $g_i$'s, and the $h_j$'s are differentiable and $\bar x$ is a local solution to the minimization problem (2). If $LC(\bar x) \subset \operatorname{co} T_C(\bar x)$, then there exist vectors $\lambda \in \mathbb{R}^I_+$ and $\mu \in \mathbb{R}^J$ (called Lagrange multipliers) such that

$$\nabla f(\bar x) + \sum_{i=1}^{I} \lambda_i \nabla g_i(\bar x) + \sum_{j=1}^{J} \mu_j \nabla h_j(\bar x) = 0 \quad \text{(first-order condition)},$$

$$(\forall i)\ \lambda_i g_i(\bar x) = 0 \quad \text{(complementary slackness)}.$$
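To fix ideas about the active set and the linearizing cone, here is a small worked example; it is an illustration added here, not taken from the lecture.

Example. Let $C = \{\, x \in \mathbb{R}^2 \mid g_1(x) = -x_1 \le 0,\ g_2(x) = -x_2 \le 0 \,\}$ and $\bar x = (0,0)$. Both constraints are binding, so $I(\bar x) = \{1, 2\}$. Since $\nabla g_1(\bar x) = (-1, 0)$ and $\nabla g_2(\bar x) = (0, -1)$,

$$LC(\bar x) = \{\, y \in \mathbb{R}^2 \mid -y_1 \le 0,\ -y_2 \le 0 \,\} = \mathbb{R}^2_+.$$

Here the tangent cone is also $\mathbb{R}^2_+$, since from the origin one can move into $C$ exactly in the directions of the positive orthant, so the inclusion in Proposition 3 holds with equality.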
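For the exercise on parametrizations, one standard construction, sketched here as a suggestion rather than the intended solution, is the following.

Example. Let $C = \{0\} \subset \mathbb{R}$, so $T_C(0) = \{0\}$. If $C$ is described by the constraint $h_1(x) = x = 0$, then $h_1'(0) = 1$ and $LC(0) = \{\, y \in \mathbb{R} \mid 1 \cdot y = 0 \,\} = \{0\}$. If instead $C$ is described by $h_1(x) = x^2 = 0$, then $h_1'(0) = 0$ and $LC(0) = \{\, y \in \mathbb{R} \mid 0 \cdot y = 0 \,\} = \mathbb{R}$. The set $C$ is the same, but the linearizing cones differ. In the second parametrization $LC(0) = \mathbb{R} \not\subset \operatorname{co} T_C(0) = \{0\}$, so the hypothesis of Theorem 4 (a constraint qualification) fails.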

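A minimal worked application of Theorem 4, with an objective and constraint chosen here for illustration (not from the lecture):

Example. Minimize $f(x) = x_1^2 + x_2^2$ subject to $g_1(x) = 1 - x_1 - x_2 \le 0$. Consider $\bar x = (1/2, 1/2)$, where the constraint binds: $g_1(\bar x) = 0$, so $I(\bar x) = \{1\}$. The first-order condition reads

$$\nabla f(\bar x) + \lambda_1 \nabla g_1(\bar x) = (1, 1) + \lambda_1 (-1, -1) = 0,$$

which gives $\lambda_1 = 1 \ge 0$, and complementary slackness $\lambda_1 g_1(\bar x) = 0$ holds because the constraint is active. Since the constraint is affine, $LC(\bar x) = \operatorname{co} T_C(\bar x)$ and the constraint qualification in Theorem 4 is satisfied; moreover, since $f$ is convex, the KKT conditions here are sufficient as well as necessary, so $\bar x$ is the global solution.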
