# L23_bundle - Lecture 23: Steepest Descent Subgradient Methods

Lecture 23: Steepest Descent Subgradient Methods

April 16, 2007

## Lecture 23 Outline

- Directional Derivatives and Subgradients
- More Subgradient Properties
- Steepest Descent Subgradient
- Bundle-Type Methods using the "Steepest Descent Idea"
- ε-Subgradients and ε-Subdifferentials

*Convex Optimization*

## From Subdifferential to Directional Derivative

**Theorem.** Let $f$ be convex with $\operatorname{dom} f = \mathbb{R}^n$. Then for any $x \in \mathbb{R}^n$ and any $d \in \mathbb{R}^n$,

$$f'(x; d) = \max_{s \in \partial f(x)} s^T d$$

- If we know the whole subdifferential $\partial f(x)$, then for any direction $d$, the directional derivative at $x$ is obtained by maximizing $s^T d$ over $s \in \partial f(x)$.
- The relation also goes in the other direction: given the directional derivatives $f'(x; d)$ for all $d$, one can recover the subdifferential $\partial f(x)$.

## From Directional Derivative to Subdifferential

**Theorem.** Let $f$ be convex with $\operatorname{dom} f = \mathbb{R}^n$. Then for any $x \in \mathbb{R}^n$,

$$\partial f(x) = \{\, s \mid f'(x; d) \ge s^T d \ \text{for all } d \in \mathbb{R}^n \,\} \ (= K)$$

**Proof.** From the definitions of the subgradient and of $f'(x; d)$, we have for any $s \in \partial f(x)$,

$$f'(x; d) = \lim_{\alpha \downarrow 0} \frac{f(x + \alpha d) - f(x)}{\alpha} \ \ge\ \lim_{\alpha \downarrow 0} \frac{s^T (x + \alpha d - x)}{\alpha} = s^T d$$

Hence $\partial f(x) \subseteq K$. Suppose now $s \in K$, so that $s^T d \le f'(x; d)$ for all $d \in \mathbb{R}^n$. Since the difference quotient is nondecreasing in $\alpha$ by convexity, we have for any $d \in \mathbb{R}^n$:

$$s^T d \ \le\ \inf_{\alpha > 0} \frac{f(x + \alpha d) - f(x)}{\alpha} \ \le\ f(x + d) - f(x)$$

By letting $d = z - x$ for an arbitrary $z \in \mathbb{R}^n$, we obtain $f(z) \ge f(x) + s^T(z - x)$, so that $s \in \partial f(x)$. ∎

## Directional Derivative and Optimality

**Theorem (Optimality Condition).** Let $f$ be convex with $\operatorname{dom} f = \mathbb{R}^n$ and let $X \subset \mathbb{R}^n$ be closed and convex. The vector $x^*$ is a minimizer of $f$ over $X$ if and only if

$$f'(x^*; x - x^*) \ge 0 \quad \text{for all } x \in X$$

**Proof.** The result follows from the following two facts:

- $x^*$ is optimal if and only if there exists $s \in \partial f(x^*)$ such that $s^T(x - x^*) \ge 0$ for all $x \in X$;
- $f'(x^*; d) = \max_{s \in \partial f(x^*)} s^T d$.
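To make the formula $f'(x; d) = \max_{s \in \partial f(x)} s^T d$ concrete, here is a small numerical sketch (not from the lecture) using $f(x) = \|x\|_1$, whose subdifferential is a box: $s_i = \operatorname{sign}(x_i)$ when $x_i \ne 0$ and $s_i \in [-1, 1]$ otherwise. The max over this box decomposes coordinate-wise, and the result is checked against a one-sided finite difference.

```python
import numpy as np

def dir_deriv_l1(x, d, tol=1e-12):
    """f'(x; d) for f(x) = ||x||_1 via the formula max_{s in ∂f(x)} s^T d.
    On coordinates with x_i != 0 the subgradient is fixed at sign(x_i);
    on coordinates with x_i == 0 it ranges over [-1, 1], so the maximizing
    choice is s_i = sign(d_i), contributing |d_i|."""
    active = np.abs(x) > tol
    return np.sign(x[active]) @ d[active] + np.abs(d[~active]).sum()

def dir_deriv_fd(f, x, d, alpha=1e-7):
    """One-sided finite-difference approximation (f(x + alpha d) - f(x)) / alpha."""
    return (f(x + alpha * d) - f(x)) / alpha

x = np.array([1.0, -2.0, 0.0])
d = np.array([0.5, 1.0, -3.0])
f = lambda v: np.abs(v).sum()

# sign(1)*0.5 + sign(-2)*1.0 + |-3.0| = 0.5 - 1.0 + 3.0 = 2.5
print(dir_deriv_l1(x, d))       # exact value from the max formula
print(dir_deriv_fd(f, x, d))    # finite-difference check
```

The agreement of the two values illustrates the theorem: knowing $\partial f(x)$ is enough to evaluate the directional derivative in every direction.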
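The optimality condition can likewise be checked numerically. The sketch below (my own illustration, with a hypothetical feasible set $X = \{x : x \ge 1\}$) verifies that $x^* = (1, \dots, 1)$, the minimizer of $\|x\|_1$ over $X$, satisfies $f'(x^*; x - x^*) \ge 0$ at randomly sampled feasible points.

```python
import numpy as np

def l1_dir_deriv(x, d, tol=1e-12):
    """f'(x; d) for f = ||.||_1, via the max-of-subgradients formula."""
    active = np.abs(x) > tol
    return np.sign(x[active]) @ d[active] + np.abs(d[~active]).sum()

rng = np.random.default_rng(0)
x_star = np.ones(4)  # candidate minimizer of ||x||_1 over X = {x : x >= 1}

violated = False
for _ in range(1000):
    x = 1.0 + rng.exponential(size=4)          # random feasible point in X
    if l1_dir_deriv(x_star, x - x_star) < -1e-10:
        violated = True                         # would contradict optimality of x_star

print(violated)  # no feasible direction of descent was found
```

Here every feasible direction $x - x^*$ has nonnegative coordinates, so $f'(x^*; x - x^*) = \sum_i (x_i - 1) \ge 0$, consistent with the theorem; a single negative value would certify that $x^*$ is not optimal.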

This note was uploaded on 08/22/2008 for the course GE 498 AN, taught by Professor Angelia Nedich during the Spring '07 term at the University of Illinois at Urbana–Champaign.
