hw3sol - EE364b, Prof. S. Boyd, Homework 3 Solutions

1. Minimizing a quadratic. Consider the subgradient method with constant step size $\alpha$, used to minimize the quadratic function $f(x) = (1/2)x^T P x + q^T x$, where $P \succ 0$. For which values of $\alpha$ do we have $x^{(k)} \to x^\star$, for any $x^{(1)}$? What value of $\alpha$ gives the fastest asymptotic convergence?

Solution. The only subgradient of a quadratic function is the gradient, $\nabla f(x) = Px + q$. Each subgradient method iteration is
$$x^{(k+1)} = x^{(k)} - \alpha(Px^{(k)} + q) = (I - \alpha P)x^{(k)} - \alpha q.$$
In general, the $k$th iterate is
$$x^{(k)} = (I - \alpha P)^k x^{(0)} - \alpha \sum_{i=0}^{k-1} (I - \alpha P)^i q.$$
This can be viewed as a discrete-time linear dynamical system, which is stable (and the subgradient method converges) if and only if the eigenvalues of $I - \alpha P$ are less than 1 in magnitude. Since $P \succ 0$, all the eigenvalues of $P$ are positive. Thus, we require $\alpha \lambda_{\max}(P) < 2$ for convergence. The equivalent constraint on $\alpha$ is
$$0 < \alpha < \frac{2}{\lambda_{\max}(P)}.$$
The asymptotic convergence rate is determined by the eigenvalue of $I - \alpha P$ with largest magnitude, i.e.,
$$\max_{i=1,\ldots,n} |1 - \alpha \lambda_i|,$$
where $\lambda_i$ are the eigenvalues of $P$. We can minimize this expression by requiring that $(1 - \alpha\lambda_{\min}) = -(1 - \alpha\lambda_{\max})$, i.e., that
$$\alpha = \frac{2}{\lambda_{\max} + \lambda_{\min}}.$$
In other words, the optimal step size is the inverse of the average of the smallest and largest eigenvalues of $P$.

2. Step sizes that guarantee moving closer to the optimal set. Consider the subgradient method iteration $x^+ = x - \alpha g$, where $g \in \partial f(x)$. Show that if $0 < \alpha < 2(f(x) - f^\star)/\|g\|_2^2$ (which is twice Polyak's optimal step size value), we have
$$\|x^+ - x^\star\|_2 < \|x - x^\star\|_2,$$
for any optimal point $x^\star$. This implies that $\mathrm{dist}(x^+, X^\star) < \mathrm{dist}(x, X^\star)$. (Methods in which successive iterates move closer to the optimal set are called Fejér monotone. Thus, the subgradient method with Polyak's optimal step size is Fejér monotone.)

Solution. For any subgradient $g \in \partial f(x)$ and any optimal point $x^\star$,
$$g^T(x - x^\star) \ge f(x) - f^\star.$$
Thus, if $\alpha < 2(f(x) - f^\star)/\|g\|_2^2$, then
$$\alpha < \frac{2\, g^T(x - x^\star)}{\|g\|_2^2},$$
so
$$\alpha\, g^T g - 2 g^T(x - x^\star) < 0.$$
Because $\alpha > 0$, we also have
$$\alpha^2 g^T g - 2\alpha\, g^T(x - x^\star) < 0.$$
Now we write
$$\|x - x^\star\|_2^2 + \alpha^2 g^T g - 2\alpha\, g^T(x - x^\star) < \|x - x^\star\|_2^2,$$
$$x^T x - 2 x^T x^\star + x^{\star T} x^\star + \alpha^2 g^T g - 2\alpha\, g^T(x - x^\star) < \|x - x^\star\|_2^2,$$
$$(x - \alpha g)^T(x - \alpha g) - 2(x - \alpha g)^T x^\star + x^{\star T} x^\star < \|x - x^\star\|_2^2,$$
$$\|x^+ - x^\star\|_2^2 < \|x - x^\star\|_2^2,$$
and $\|x^+ - x^\star\|_2 < \|x - x^\star\|_2$, as required.
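The strict-decrease property proved in problem 2 can be illustrated numerically on a nondifferentiable function. The sketch below uses NumPy and a made-up test case, $f(x) = \|x\|_1$ with optimal point $x^\star = 0$ and $f^\star = 0$; it takes a step equal to 1.5 times Polyak's value, which lies inside the allowed interval $(0,\, 2(f(x)-f^\star)/\|g\|_2^2)$:

```python
import numpy as np

# Illustration of the strict-decrease (Fejer monotone) property on a
# nondifferentiable function: f(x) = ||x||_1, a made-up test case whose
# optimal point is x* = 0 with f* = 0.
def f(x):
    return np.abs(x).sum()

def subgrad(x):
    # sign(x) is a valid subgradient of the l1 norm; sign(0) = 0 is fine,
    # since any value in [-1, 1] works for a zero component.
    return np.sign(x)

f_star = 0.0
x = np.array([3.0, -1.0, 2.0])
dists = [np.linalg.norm(x)]          # distances ||x - x*||_2, with x* = 0
for _ in range(20):
    fx = f(x)
    if fx == f_star:                 # already optimal; step size undefined
        break
    g = subgrad(x)
    # any alpha in (0, 2(f(x) - f*)/||g||^2) works; use 1.5x Polyak's step
    alpha = 1.5 * (fx - f_star) / (g @ g)
    x = x - alpha * g
    dists.append(np.linalg.norm(x))

# every step moved strictly closer to the optimal point
assert all(b < a for a, b in zip(dists, dists[1:]))
```

Note that the guarantee is only that each iterate moves closer to the optimal set; unlike Polyak's step itself, an arbitrary step in the allowed interval need not minimize the decrease bound.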
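Returning to problem 1, the step-size bounds can also be checked numerically. A minimal sketch, assuming NumPy; the matrix $P$, vector $q$, dimension, and iteration counts are made-up illustration values, not part of the assignment:

```python
import numpy as np

# Sanity check of the constant-step analysis for f(x) = (1/2) x'Px + q'x.
# P, q, and the iteration counts are made-up illustration values.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
P = A @ A.T + np.eye(5)          # symmetric positive definite, so P > 0
q = rng.standard_normal(5)
x_star = -np.linalg.solve(P, q)  # the minimizer satisfies Px + q = 0

lam = np.linalg.eigvalsh(P)
lam_min, lam_max = lam[0], lam[-1]
alpha_opt = 2.0 / (lam_max + lam_min)   # fastest asymptotic convergence

def run(alpha, k):
    """Run k constant-step iterations x+ = x - alpha*(Px + q) from x = 0."""
    x = np.zeros(5)
    for _ in range(k):
        x = x - alpha * (P @ x + q)
    return np.linalg.norm(x - x_star)

err_opt = run(alpha_opt, 100)       # inside (0, 2/lam_max), optimal rate
err_slow = run(0.5 / lam_max, 100)  # also converges, but more slowly
err_div = run(2.2 / lam_max, 100)   # alpha * lam_max > 2: diverges

assert err_opt < err_slow
assert err_div > err_slow
```

The three runs mirror the three regimes of the analysis: the optimal step contracts every eigenmode at the rate $(\lambda_{\max}-\lambda_{\min})/(\lambda_{\max}+\lambda_{\min})$, a smaller step still converges but leaves the slowest mode barely damped, and a step above $2/\lambda_{\max}$ makes the mode along the top eigenvector grow geometrically.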
This note was uploaded on 04/09/2010 for the course EE 360B taught by Professor Stephen Boyd during the Fall '09 term at Stanford.
