Subgradient Methods

Stephen Boyd and Almir Mutapcic
Notes for EE364b, Stanford University, Winter 2006-07
April 13, 2008

Contents

1  Introduction
2  Basic subgradient method
   2.1  Negative subgradient update
   2.2  Step size rules
   2.3  Convergence results
3  Convergence proof
   3.1  Assumptions
   3.2  Some basic inequalities
   3.3  A bound on the suboptimality bound
   3.4  A stopping criterion
   3.5  Numerical example
4  Alternating projections
   4.1  Optimal step size choice when f⋆ is known
   4.2  Finding a point in the intersection of convex sets
   4.3  Solving convex inequalities
   4.4  Positive semidefinite matrix completion
5  Projected subgradient method
   5.1  Numerical example
6  Projected subgradient for dual problem
   6.1  Numerical example
7  Subgradient method for constrained optimization
   7.1  Numerical example
8  Speeding up subgradient methods

1  Introduction

The subgradient method is a very simple algorithm for minimizing a nondifferentiable convex function.
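For reference, the basic update (detailed in Section 2.1, the negative subgradient update) takes the form

   x^(k+1) = x^(k) − α_k g^(k),

where g^(k) is any subgradient of f at x^(k) and α_k > 0 is the kth step size.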
The method looks very much like the ordinary gradient method for differentiable functions, but with several notable exceptions:

•  The subgradient method applies directly to nondifferentiable f.
•  The step lengths are not chosen via a line search, as in the ordinary gradient method. In the most common cases, the step lengths are fixed ahead of time.
•  Unlike the ordinary gradient method, the subgradient method is not a descent method; the function value can (and often does) increase.

The subgradient method is readily extended to handle problems with constraints.

Subgradient methods can be much slower than interior-point methods (or Newton's method in the unconstrained case). In particular, they are first-order methods; their performance depends very much on the problem scaling and conditioning. (In contrast, Newton and interior-point methods are second-order methods, not affected by problem scaling.)

However, subgradient methods do have some advantages over interior-point and Newton methods. They can be immediately applied to a far wider variety of problems than interior-point or Newton methods. The memory requirement of subgradient methods can be much smaller than that of interior-point or Newton methods.
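As an illustration of the points above (a minimal NumPy sketch, not from the notes; the problem data and step-size constant are made up), here is the basic method with a diminishing step size rule, applied to the nondifferentiable piecewise-linear objective f(x) = ||Ax − b||_1:

```python
import numpy as np

# Made-up problem data: minimize f(x) = ||A x - b||_1, which is
# convex but nondifferentiable wherever some residual is zero.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def f(x):
    return np.abs(A @ x - b).sum()

def subgrad(x):
    # A valid subgradient of ||A x - b||_1 at x is A^T sign(A x - b)
    # (sign(0) = 0 lies in the subdifferential [-1, 1] of |.| at 0).
    return A.T @ np.sign(A @ x - b)

x = np.zeros(5)
f_best = f(x)
for k in range(1, 2001):
    alpha = 0.1 / np.sqrt(k)        # diminishing (nonsummable) step size rule
    x = x - alpha * subgrad(x)      # not a descent step: f(x) may increase,
    f_best = min(f_best, f(x))      # so we track the best value found so far
```

Since the method is not a descent method, the quantity reported is the best objective value seen, f_best; with a nonsummable diminishing step size rule, f_best converges to the optimal value f⋆.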
This note was uploaded on 04/09/2010 for the course EE 364B at Stanford.
