EE364b Prof. S. Boyd

EE364b Homework 3

1. Minimizing a quadratic. Consider the subgradient method with constant step size $\alpha$, used to minimize the quadratic function $f(x) = (1/2)x^T P x + q^T x$, where $P \succ 0$. For which values of $\alpha$ do we have $x^{(k)} \to x^\star$, for any $x^{(1)}$? What value of $\alpha$ gives the fastest asymptotic convergence?

Solution. The only subgradient of a quadratic function is its gradient, $\nabla f(x) = Px + q$. Each subgradient method iteration is
\[
x^{(k+1)} = x^{(k)} - \alpha (Px^{(k)} + q) = (I - \alpha P)x^{(k)} - \alpha q.
\]
In general, the $k$th iterate is
\[
x^{(k)} = (I - \alpha P)^k x^{(0)} - \alpha \sum_{i=0}^{k-1} (I - \alpha P)^i q.
\]
This can be viewed as a discrete-time linear dynamical system, which is stable (so the subgradient method converges) if and only if the eigenvalues of $I - \alpha P$ are less than 1 in magnitude. Since $P \succ 0$, all the eigenvalues of $P$ are positive. Thus, we require $\alpha \lambda_{\max}(P) < 2$ for convergence; the equivalent constraint on $\alpha$ is
\[
0 < \alpha < \frac{2}{\lambda_{\max}(P)}.
\]
The asymptotic convergence rate is determined by the eigenvalue of $I - \alpha P$ with largest magnitude, i.e., $\max_{i=1,\dots,n} |1 - \alpha \lambda_i|$, where $\lambda_i$ are the eigenvalues of $P$. We can minimize this expression by requiring that $(1 - \alpha \lambda_{\min}) = -(1 - \alpha \lambda_{\max})$, i.e., that
\[
\alpha = \frac{2}{\lambda_{\max} + \lambda_{\min}}.
\]
In other words, the optimal step size is the inverse of the average of the smallest and largest eigenvalues of $P$.

2. Step sizes that guarantee moving closer to the optimal set. Consider the subgradient method iteration $x^+ = x - \alpha g$, where $g \in \partial f(x)$. Show that if $0 < \alpha < 2(f(x) - f^\star)/\|g\|_2^2$ (which is twice Polyak's optimal step size value), we have
\[
\|x^+ - x^\star\|_2 < \|x - x^\star\|_2
\]
for any optimal point $x^\star$. This implies that $\mathop{\mathrm{dist}}(x^+, X^\star) < \mathop{\mathrm{dist}}(x, X^\star)$. (Methods in which successive iterates move closer to the optimal set are called Fejér monotone. Thus, the subgradient method with Polyak's optimal step size is Fejér monotone.)
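The step-size analysis in problem 1 can be sanity-checked numerically. The sketch below uses made-up illustrative data (a small diagonal $P$ with eigenvalues 1 and 4, so the convergence threshold is $2/\lambda_{\max} = 0.5$ and the optimal step is $2/(\lambda_{\max}+\lambda_{\min}) = 0.4$); it is not part of the assigned solution.

```python
import numpy as np

# Illustrative (made-up) problem data: eigenvalues of P are 1 and 4.
P = np.diag([1.0, 4.0])
q = np.array([1.0, -2.0])
x_star = -np.linalg.solve(P, q)  # minimizer of (1/2) x^T P x + q^T x

def subgrad_iterates(alpha, k=200):
    """Run k constant-step-size (sub)gradient steps from x = 0."""
    x = np.zeros(2)
    for _ in range(k):
        x = x - alpha * (P @ x + q)
    return x

# alpha = 0.4 = 2/(lambda_max + lambda_min) is the optimal step: the
# contraction factor |1 - alpha*lambda_i| is 0.6 for both eigenvalues.
print(np.allclose(subgrad_iterates(0.4), x_star))   # True

# alpha = 0.6 > 2/lambda_max = 0.5: |1 - 0.6*4| = 1.4 > 1, so it diverges.
print(np.linalg.norm(subgrad_iterates(0.6)) > 1e6)  # True
```

With the optimal step the error shrinks by a factor $0.6$ per iteration along every eigendirection, which is exactly the balanced condition $(1-\alpha\lambda_{\min}) = -(1-\alpha\lambda_{\max})$ derived above.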
Solution. For any subgradient $g \in \partial f(x)$ we have $g^T(x - x^\star) \ge f(x) - f^\star$ (apply the subgradient inequality $f(y) \ge f(x) + g^T(y - x)$ at $y = x^\star$). Thus, if $\alpha < 2(f(x) - f^\star)/\|g\|_2^2$, then
\[
\alpha < \frac{2 g^T(x - x^\star)}{\|g\|_2^2},
\]
so
\[
\alpha g^T g - 2 g^T(x - x^\star) < 0.
\]
Because $\alpha > 0$, we also have
\[
\alpha^2 g^T g - 2\alpha g^T(x - x^\star) < 0.
\]
Now we write
\begin{align*}
\|x - x^\star\|_2^2 + \alpha^2 g^T g - 2\alpha g^T(x - x^\star) &< \|x - x^\star\|_2^2, \\
x^T x - 2 x^T x^\star + x^{\star T} x^\star + \alpha^2 g^T g - 2\alpha g^T(x - x^\star) &< \|x - x^\star\|_2^2, \\
(x - \alpha g)^T(x - \alpha g) - 2(x - \alpha g)^T x^\star + x^{\star T} x^\star &< \|x - x^\star\|_2^2, \\
\|x^+ - x^\star\|_2^2 &< \|x - x^\star\|_2^2,
\end{align*}
and $\|x^+ - x^\star\|_2 < \|x - x^\star\|_2$, as required.

3. A variation on alternating projections. We consider the problem of finding a point in the intersection $C = C_1 \cap \cdots \cap C_m \ne \emptyset$ of convex sets $C_1, \dots, C_m \subseteq \mathbf{R}^n$. To do this, we use alternating projections to find a point in the intersection of the two sets
\[
C_1 \times \cdots \times C_m \subseteq \mathbf{R}^{mn}
\quad \text{and} \quad
\{(z_1, \dots, z_m) \in \mathbf{R}^{mn} \mid z_1 = \cdots = z_m\} \subseteq \mathbf{R}^{mn}.
\]
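The product-space trick in problem 3 can be sketched numerically in the simplest case $n = 1$, with made-up interval sets $C_i$ (not data from the assignment): projecting onto $C_1 \times \cdots \times C_m$ means projecting each block onto its own set, and projecting onto the "diagonal" set $\{z_1 = \cdots = z_m\}$ means replacing every block with the average.

```python
import numpy as np

# Made-up example sets: each C_i is an interval on R (n = 1).
# Their intersection is [1.5, 2].
intervals = [(0.0, 2.0), (1.0, 3.0), (1.5, 5.0)]

def proj_product(z):
    """Project onto C_1 x ... x C_m: project each block onto its own set."""
    return np.array([np.clip(zi, lo, hi) for zi, (lo, hi) in zip(z, intervals)])

def proj_diagonal(z):
    """Project onto {(z_1,...,z_m) : z_1 = ... = z_m}: average the blocks."""
    return np.full_like(z, z.mean())

z = np.array([10.0, -4.0, 7.0])  # arbitrary starting point
for _ in range(100):
    z = proj_diagonal(proj_product(z))

# All blocks now agree, and the common value lies in every C_i
# (here approximately 2.0, a point of the intersection [1.5, 2]).
print(z)
```

The payoff of the reformulation is that intersecting $m$ sets in $\mathbf{R}^n$ becomes intersecting just two sets in $\mathbf{R}^{mn}$, and both projections are cheap whenever each individual projection onto $C_i$ is.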