Lecture 19: Convex Non-Smooth Optimization
April 2, 2007

Outline

- Convex non-smooth problems
- Examples
- Subgradients and subdifferentials
- Subgradient properties
- Operations with subgradients and subdifferentials

Convex-Constrained Non-Smooth Minimization

    minimize f(x)  subject to x ∈ C

Characteristics:

- The function f: Rⁿ → R is convex and possibly non-differentiable
- The set C ⊆ Rⁿ is nonempty and convex
- The optimal value f* is finite

Our focus here is non-differentiability. Renewed interest comes from large-scale problems and the need for distributed computations.

Main questions:

- Where do such problems arise?
- How do we deal with non-differentiability?
- How can we solve such problems?

Where They Arise

Naturally in some applications (communication networks, data fitting, neural networks):

- Least-squares problems

      minimize Σ_{j=1}^m ‖h(w, x_j) − y_j‖²  over the weights w

  Here (x_j, y_j), j = 1, ..., m, are the input-output pairs, w are the weights (decision variables) to be optimized, and h is convex and possibly non-smooth.

- In Lagrangian duality

      minimize −q(λ, μ)  subject to μ ≥ 0

  A systematic approach for generating bounds on the primal optimal value; a part of some primal-dual schemes.

- In (sharp) penalty approaches

      min_{x ∈ C} { f(x) + t P(g(x)) }

  where t > 0 is a penalty parameter and the penalty function is

      P(u) = Σ_{j=1}^m max{u_j, 0}   or   P(u) = max{u_1, ..., u_m, 0}

Example: Optimization in Network Coding (Linear Cost Model)

    minimize    Σ_{(i,j)∈L} a_ij max_{s∈S} x^s_ij
    subject to  max_{s∈S} x^s_ij ≤ c_ij                                for all (i,j) ∈ L
                Σ_{j | (i,j)∈L} x^s_ij − Σ_{j | (j,i)∈L} x^s_ji = b^s_i  for all i ∈ N, s ∈ S

- N is the set of nodes in the communication network
- S is the set of sessions [a session is a pair of nodes that communicate]
- L is the set of directed links; (i,j) denotes a link originating at node i and ending at node j
- c_ij is the communication rate capacity of the link (i,j)
- a_ij is the cost for the link (i,j)
- x^s_ij ...
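The questions "How do we deal with non-differentiability?" and "How can we solve such problems?" are answered by the subgradient machinery this lecture introduces. As a small illustration tied to the sharp-penalty approach above, here is a sketch of a subgradient method with diminishing steps applied to a penalized toy problem; the instance (f(x) = x² with constraint x ≥ 1), the penalty weight t = 4, and the step sizes 1/k are illustrative assumptions, not from the notes.

```python
# Sketch: subgradient method on a sharp (exact) penalty formulation.
# Toy instance (hypothetical):  minimize x^2  subject to 1 - x <= 0,
# penalized as  F(x) = x^2 + t * max(1 - x, 0)  with t = 4 > |f'(1)| = 2,
# so the penalty is exact and the minimizer is x* = 1 with F(x*) = 1.

def F(x, t=4.0):
    return x * x + t * max(1.0 - x, 0.0)

def subgrad_F(x, t=4.0):
    # Gradient of x^2 plus one subgradient of t * max(1 - x, 0).
    # The max term is non-differentiable at x = 1; any element of
    # [-t, 0] is valid there -- we pick -t when 1 - x > 0, else 0.
    g = 2.0 * x
    if 1.0 - x > 0.0:
        g -= t
    return g

def subgradient_method(x0=5.0, iters=2000):
    # Diminishing step sizes 1/k; since the method is not a descent
    # method on non-smooth functions, we track the best iterate seen.
    x, best_x, best_val = x0, x0, F(x0)
    for k in range(1, iters + 1):
        x = x - (1.0 / k) * subgrad_F(x)
        if F(x) < best_val:
            best_x, best_val = x, F(x)
    return best_x, best_val
```

Tracking the best iterate matters: the iterates oscillate across the kink at x = 1, and only the step-size decay shrinks the oscillation.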
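Both the penalty function P(u) = max{u_1, ..., u_m, 0} and the network-coding objective's max_{s∈S} x^s_ij are pointwise maxima of affine functions, the canonical non-smooth convex example. A minimal sketch of evaluating such a function and returning one subgradient (the gradient of any active affine piece); the encoded instance below is hypothetical.

```python
# Sketch: f(x) = max_j (a_j . x + b_j), a pointwise max of affine
# functions.  At any x, the coefficient vector a_j of an affine piece
# attaining the maximum is a valid subgradient of f at x.

def max_affine(a, b, x):
    """Return (f(x), one subgradient of f at x)."""
    vals = [sum(ai * xi for ai, xi in zip(aj, x)) + bj
            for aj, bj in zip(a, b)]
    j_star = max(range(len(vals)), key=vals.__getitem__)
    # gradient of the active piece = one subgradient of f
    return vals[j_star], list(a[j_star])

# Hypothetical instance: f(x) = max{x1, x2, 0} via three affine pieces
a = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
b = [0.0, 0.0, 0.0]
```

At a point where several pieces are active (e.g. x1 = x2 > 0 above), any convex combination of the active pieces' gradients is also a subgradient; this sketch just returns the first active one.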