3sgmethod - EE236C (Spring 2008-09), 3. Subgradient method

EE236C (Spring 2008-09)

3. Subgradient method

• subgradient method
• convergence analysis
• optimal step size when f⋆ is known
• alternating projections
• optimality

Subgradient method

to minimize a nondifferentiable convex function f: choose x^(0) and repeat

    x^(k) = x^(k−1) − t_k g^(k−1),   k = 1, 2, . . .

where g^(k−1) is any subgradient of f at x^(k−1)

step size rules (a code sketch of all three appears at the end of these notes):

• fixed step: t_k constant
• fixed length: t_k ‖g^(k−1)‖₂ constant (i.e., ‖x^(k) − x^(k−1)‖₂ constant)
• diminishing: t_k → 0, ∑_{k=1}^∞ t_k = ∞

Assumptions

• f has finite optimal value f⋆, with minimizer x⋆
• f is convex, with dom f = Rⁿ
• f is Lipschitz continuous with constant G > 0:

    |f(x) − f(y)| ≤ G ‖x − y‖₂   for all x, y

this is equivalent to ‖g‖₂ ≤ G for all g ∈ ∂f(x) and all x

Analysis

the subgradient method is not a descent method; the key quantity in the analysis is the distance to the optimal set:

    ‖x^(i) − x⋆‖₂² = ‖x^(i−1) − t_i g^(i−1) − x⋆‖₂²
                   = ‖x^(i−1) − x⋆‖₂² − 2 t_i g^(i−1)ᵀ(x^(i−1) − x⋆) + t_i² ‖g^(i−1)‖₂²
                   ≤ ‖x^(i−1) − x⋆‖₂² − 2 t_i (f(x^(i−1)) − f⋆) + t_i² ‖g^(i−1)‖₂²

(the last step applies the subgradient inequality f⋆ ≥ f(x^(i−1)) + g^(i−1)ᵀ(x⋆ − x^(i−1)))

define f_best^(k) = min_{0 ≤ i < k} f(x^(i)) and combine the inequalities for i = 1, . . . , k:

    2 (∑_{i=1}^k t_i) (f_best^(k) − f⋆) ≤ ‖x^(0) − x⋆‖₂² − ‖x^(k) − x⋆‖₂² + ∑_{i=1}^k t_i² ‖g^(i−1)‖₂²
                                        ≤ ‖x^(0) − x⋆‖₂² + ∑_{i=1}^k t_i² ‖g^(i−1)‖₂²

Fixed step size t_i = t

    f_best^(k) − f⋆ ≤ ‖x^(0) − x⋆‖...
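The preview is cut off in the middle of the fixed step size bound. Substituting t_i = t and ‖g^(i−1)‖₂ ≤ G into the combined inequality above gives the standard completion (a reconstruction, not text recovered from the preview):

    f_best^(k) − f⋆ ≤ (‖x^(0) − x⋆‖₂² + G² t² k) / (2kt) = ‖x^(0) − x⋆‖₂² / (2kt) + G² t / 2

so with a fixed step the suboptimality gap does not go to zero; it approaches G² t / 2.

The sketch below illustrates the method and the three step size rules in Python on a small synthetic problem: minimizing the piecewise-linear convex function f(x) = max_i (a_iᵀ x + b_i), whose subgradient at x is the gradient a_j of any active piece. The test problem, data, constants, and helper names (f, subgradient, run) are illustrative assumptions, not part of the original notes.

```python
import numpy as np

# Hypothetical test problem: f(x) = max_i (a_i^T x + b_i), a nondifferentiable
# convex (piecewise-linear) function with random data.
rng = np.random.default_rng(0)
m, n = 20, 10
A = rng.standard_normal((m, n))   # rows are the a_i^T
b = rng.standard_normal(m)

def f(x):
    return np.max(A @ x + b)

def subgradient(x):
    # The gradient of any piece active at x is a subgradient of the max.
    return A[np.argmax(A @ x + b)]

def run(step, iters=5000):
    """Subgradient method x^(k) = x^(k-1) - t_k g^(k-1), tracking f_best."""
    x = np.zeros(n)                 # x^(0)
    best = np.inf
    for k in range(1, iters + 1):
        best = min(best, f(x))      # f_best^(k) = min over iterates so far
        g = subgradient(x)          # any g in the subdifferential at x
        x = x - step(k, g) * g      # not a descent method: f(x) may increase
    return best

# The three step size rules from the slides; the constants are arbitrary choices.
rules = {
    "fixed step":   lambda k, g: 0.01,                       # t_k constant
    "fixed length": lambda k, g: 0.05 / np.linalg.norm(g),   # t_k ||g||_2 constant
    "diminishing":  lambda k, g: 0.1 / np.sqrt(k),           # t_k -> 0, sum t_k = inf
}

for name, step in rules.items():
    print(f"{name:13s} f_best after 5000 iterations: {run(step):.4f}")
```

Tracking the running best value f_best^(k) rather than f(x^(k)) matters here precisely because the iteration is not a descent method. Consistent with the analysis above, the fixed step rule stalls at a gap of roughly G² t / 2, while the diminishing rule keeps improving, albeit slowly.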