EE364b
Prof. S. Boyd
EE364b Homework 7
1. MPC for output tracking. We consider the linear dynamical system x(t + 1) = Ax(t) + Bu(t), y(t) = Cx(t), t = 0, . . . , T − 1,
with state x(t) ∈ Rn, input u(t) ∈ Rm, and output y(t) ∈ Rp. The matrices A and B are known
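The excerpt breaks off, but the dynamics it states are complete enough to simulate. A minimal sketch with made-up A, B, C, sizes, and horizon (the homework's actual data is not shown here):

```python
import numpy as np

# Made-up problem sizes; the homework's actual data is not in the excerpt.
n, m, p, T = 4, 2, 2, 10
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) / np.sqrt(n)   # state matrix
B = rng.standard_normal((n, m))                # input matrix
C = rng.standard_normal((p, n))                # output matrix

x = np.zeros(n)                      # x(0) = 0
U = rng.standard_normal((T, m))      # an arbitrary input trajectory
Y = np.zeros((T, p))
for t in range(T):                   # y(t) = C x(t), then x(t+1) = A x(t) + B u(t)
    Y[t] = C @ x
    x = A @ x + B @ U[t]
```

Since x(0) = 0 here, the first output y(0) = C x(0) is zero regardless of the input.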
ConvexOptimizationII-Lecture11 Instructor (Stephen Boyd): Hey, I guess we'll start. Let me just make a couple of announcements. I guess we've combined the rewrite of the preliminary project proposal with the mid-term project review, and I think that's due thi
ConvexOptimizationII-Lecture09 Instructor (Stephen Boyd): We should make sure the pile of projects is somewhere. Who's got them? I guess we'll start. I guess the most obvious thing, I guess most of you have figured out by now is we have looked through these
ConvexOptimizationII-Lecture08 Instructor (Stephen Boyd): Okay. Well, well. It's the good weather effect, the post-project-proposal submission effect. So this is maybe a very good time for me to remind you that even though the class is televised, attendance
ConvexOptimizationII-Lecture07 Instructor (Stephen Boyd): Well, this is we're all supposed to pretend it's Tuesday. I'll be in Washington, unfortunately. So today I'll finish up and give a wrap-up on the Analytic Center Cutting-Plane Method, and then we'll move
ConvexOptimizationII-Lecture02 Instructor (Stephen Boyd):Hey, I think this means we're on. Are we on? We're on, okay. So well, the first announcement I guess is kind of obvious, but I guess if you're here. We've moved. It was a challenge actually getting
The choice of metric in subgradient
methods
Stephen Boyd & John Duchi (with help from P. Giselsson)
EE364b, Stanford University
Mirror descent methods
subgradient method without using Euclidean steps
let h be a differentiable convex function, then associ
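The excerpt cuts off mid-definition, but the construction it is introducing is standard: a differentiable convex h generates a Bregman divergence, and mirror descent replaces the Euclidean step with a Bregman step. A sketch for the common special case h(x) = Σi xi log xi on the probability simplex, where the update becomes multiplicative (the linear objective c below is an illustrative placeholder):

```python
import numpy as np

def md_entropy_step(x, g, alpha):
    """One mirror-descent step with h(x) = sum_i x_i log x_i (negative entropy);
    the associated Bregman projection keeps x on the probability simplex."""
    w = x * np.exp(-alpha * g)   # multiplicative update
    return w / w.sum()           # normalize back onto the simplex

# Illustrative use: minimize f(x) = c^T x over the simplex (g = c everywhere).
c = np.array([3.0, 1.0, 2.0])
x = np.ones(3) / 3
for _ in range(200):
    x = md_entropy_step(x, c, alpha=0.5)
```

The iterates stay on the simplex by construction and concentrate on the coordinate with the smallest cost.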
Primal-Dual Subgradient Method
equality constrained problems
inequality constrained problems
Prof. S. Boyd, EE364b, Stanford University
Primal-dual subgradient method
minimize f0(x)
subject to fi(x) ≤ 0, i = 1, . . . , m
Ax = b
with variable x ∈ Rn, fi : Rn
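One simple iteration of this flavor, shown here only for the equality-constrained case and on a toy instance (the slides' exact method may differ): take a subgradient step on the Lagrangian in x and an ascent step in the multiplier ν.

```python
import numpy as np

# Toy instance: minimize (1/2)||x||^2 subject to a^T x = b.
# KKT: x + nu*a = 0 and a^T x = b, so x* = (0.5, 0.5), nu* = -0.5.
a = np.array([1.0, 1.0])
b = 1.0
x = np.zeros(2)
nu = 0.0
alpha = 0.1
for _ in range(2000):
    x = x - alpha * (x + nu * a)    # subgradient (here gradient) step on L(x, nu)
    nu = nu + alpha * (a @ x - b)   # ascent step on the equality residual
```

On this strongly convex toy problem the primal and dual iterates both settle at the KKT point.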
Stochastic Subgradient Methods
Stephen Boyd and Almir Mutapcic
Notes for EE364b, Stanford University, Winter 2006-07
April 8, 2014
Noisy unbiased subgradient
Suppose f : Rn → R is a convex function. We say that a random vector g ∈ Rn is a noisy (unbiased) subgradient
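The definition the excerpt is building toward can be illustrated numerically: for a finite-sum objective, sampling one term gives a noisy subgradient whose mean is an exact subgradient. A sketch with made-up data:

```python
import numpy as np

# f(x) = (1/N) sum_i |a_i^T x - b_i|.  Sampling a single term yields a noisy
# subgradient g with E[g] equal to a (deterministic) subgradient of f at x.
rng = np.random.default_rng(1)
N, n = 50, 3
A_data = rng.standard_normal((N, n))
b_data = rng.standard_normal(N)
x = rng.standard_normal(n)

def noisy_subgrad(x):
    i = rng.integers(N)                            # sample one data point
    return A_data[i] * np.sign(A_data[i] @ x - b_data[i])

g_full = A_data.T @ np.sign(A_data @ x - b_data) / N   # exact subgradient
g_avg = np.mean([noisy_subgrad(x) for _ in range(20000)], axis=0)
```

Averaging many samples drives g_avg toward g_full, which is the unbiasedness property in action.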
Stochastic Subgradient Method
noisy unbiased subgradient
stochastic subgradient method
convergence proof
stochastic programming
expected value of convex function
on-line learning and adaptive signal processing
Prof. S. Boyd, EE364b, Stanford University
Subgradient Methods
subgradient method and stepsize rules
convergence results and proof
optimal step size and alternating projections
speeding up subgradient methods
Prof. J. Duchi, EE364b, Stanford University
Subgradient method
subgradient method is
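The line above breaks off; the update being introduced is the usual one, x(k+1) = x(k) − αk g(k), where g(k) is any subgradient of f at x(k). A minimal sketch on a piecewise-linear function with made-up data and the diminishing step size αk = 1/k:

```python
import numpy as np

# Minimize f(x) = max_i (a_i^T x + b_i) by the subgradient method (made-up data).
rng = np.random.default_rng(2)
m, n = 5, 2
A_pl = rng.standard_normal((m, n))
b_pl = rng.standard_normal(m)

x = np.zeros(n)
f_best = float(np.max(A_pl @ x + b_pl))       # f(x(0))
for k in range(1, 3001):
    i = int(np.argmax(A_pl @ x + b_pl))       # an active piece at x
    g = A_pl[i]                               # a valid subgradient at x
    x = x - (1.0 / k) * g                     # diminishing step size 1/k
    f_best = min(f_best, float(np.max(A_pl @ x + b_pl)))
```

Because the subgradient method is not a descent method, one tracks the best value f_best seen so far rather than the last iterate's value.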
Subgradient Methods
Stephen Boyd (with help from Jaehyun Park)
Notes for EE364b, Stanford University, Spring 2013-14
May 2014; based on notes from January 2007
Contents
1 Introduction . . . . . . 3
2 Basic subgradient method
2.1 Negative subgradient update . . . . . .
Subgradients
S. Boyd, J. Duchi, and L. Vandenberghe
Notes for EE364b, Stanford University, Spring 2014-15
April 1, 2015
Definition
We say a vector g ∈ Rn is a subgradient of f : Rn → R at x ∈ dom f if for all z ∈ dom f,
f(z) ≥ f(x) + gT(z − x). (1)
If f is convex
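Inequality (1) is easy to check numerically. A sketch for the l1 norm f(x) = ||x||1, where g = sign(x) is one valid subgradient:

```python
import numpy as np

# Spot-check (1) for f(x) = ||x||_1 with the subgradient g = sign(x).
rng = np.random.default_rng(3)
x = rng.standard_normal(5)
g = np.sign(x)
ok = all(
    np.abs(z).sum() >= np.abs(x).sum() + g @ (z - x) - 1e-12
    for z in rng.standard_normal((1000, 5))
)
```

The check passes for every trial point z, since |g_i| ≤ 1 gives gT z ≤ ||z||1 while gT x = ||x||1.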
Subgradients
subgradients
strong and weak subgradient calculus
optimality conditions via subgradients
directional derivatives
Prof. S. Boyd, EE364b, Stanford University
Basic inequality
recall basic inequality for convex differentiable f:
f(y) ≥ f(x) + ∇f(x)T(y − x)
Ellipsoid Method
S. Boyd
Notes for EE364b, Stanford University
May 26, 2014
These notes were taken from the book Linear Controller Design: Limits of Performance,
by Boyd and Barratt [BB91], and edited (at various times over many years) by Stephen
Boyd, Jo
ConvexOptimizationII-Lecture13 Instructor (Stephen Boyd): Great, I guess this means we've started. So today, we'll continue with the conjugate gradient stuff. So last time let me just review sort of where we were. It was actually yesterday, but logic, I mean
ConvexOptimizationII-Lecture15 Instructor (Stephen Boyd): All right, I think this means we are on. There is no good way in this room to know if you are when the lecture starts. Okay, well, we are down to a skeleton crew here, mostly because it's too hot out
EE364b
Prof. S. Boyd
EE364b Homework 6
1. Conjugate gradient residuals. Let r(k) = b − Ax(k) be the residual associated with the kth element of the Krylov sequence. Show that r(j)T r(k) = 0 for j ≠ k. In other words, the Krylov sequence residuals are mutually orthogonal.
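A numerical check of this claim (not the homework's proof, just a sanity check), using a plain CG implementation on a random symmetric positive definite A:

```python
import numpy as np

# Plain conjugate gradient on a random SPD system; collect the residuals
# r(k) = b - A x(k) and inspect their Gram matrix.
rng = np.random.default_rng(4)
n = 8
M = rng.standard_normal((n, n))
A_spd = M @ M.T + n * np.eye(n)     # symmetric positive definite
b_vec = rng.standard_normal(n)

x = np.zeros(n)
r = b_vec - A_spd @ x
p = r.copy()
residuals = [r.copy()]
for _ in range(n):
    Ap = A_spd @ p
    step = (r @ r) / (p @ Ap)
    x = x + step * p
    r_new = r - step * Ap
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
    residuals.append(r.copy())

R = np.array(residuals[:n])
G = R @ R.T          # off-diagonal entries should vanish (up to roundoff)
```

The Gram matrix G comes out (numerically) diagonal, and the final residual is essentially zero after n steps, as the exact-termination property of CG predicts.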
EE364b
Prof. S. Boyd
EE364b Homework 5
1. Distributed method for bi-commodity network flow problem. We consider a network (directed graph) with n arcs and p nodes, described by the incidence matrix A ∈ Rp×n, where Aij = +1 if arc j enters node i, and Aij = −1 if arc j l
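The incidence-matrix convention in the excerpt can be made concrete on a small made-up graph; by construction each column has one +1 and one −1, so Ax = 0 for any circulation x:

```python
import numpy as np

# Incidence matrix of a made-up directed graph with p = 4 nodes, n = 5 arcs:
# A[i, j] = +1 if arc j enters node i, -1 if arc j leaves node i, 0 otherwise.
arcs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]   # (tail, head) pairs
p_nodes, n_arcs = 4, len(arcs)
A_inc = np.zeros((p_nodes, n_arcs))
for j, (tail, head) in enumerate(arcs):
    A_inc[tail, j] = -1.0   # arc j leaves its tail node
    A_inc[head, j] = +1.0   # arc j enters its head node

# A circulation (flow around the cycle 0 -> 1 -> 2 -> 0) is conserved: A x = 0.
x_cyc = np.array([1.0, -1.0, 1.0, 0.0, 0.0])
```

Row i of A_inc @ x is the net flow into node i, so a zero product is exactly flow conservation.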
EE364b
Prof. S. Boyd
EE364b Homework 4
1. Projection onto the probability simplex. In this problem you will work out a simple method for finding the Euclidean projection y of x ∈ Rn onto the probability simplex P = {z | z ≥ 0, 1T z = 1}. Hints. Consider the p
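For reference, one well-known way to compute this projection is the sort-and-threshold method (offered as a numerical check, not as the derivation the hint has in mind):

```python
import numpy as np

def proj_simplex(x):
    """Euclidean projection of x onto {z | z >= 0, 1^T z = 1}
    via the standard sort-and-threshold method."""
    u = np.sort(x)[::-1]                      # sort in decreasing order
    css = np.cumsum(u) - 1.0
    ks = np.arange(1, x.size + 1)
    rho = int(np.max(ks[u - css / ks > 0]))   # number of positive entries kept
    tau = (np.sum(u[:rho]) - 1.0) / rho       # the optimal threshold
    return np.maximum(x - tau, 0.0)

y = proj_simplex(np.array([0.5, 1.2, -0.3]))
```

The result shifts the large entries down by a common threshold and clips the rest to zero, which is the structure the problem's Lagrangian hint leads to.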
EE364b
Prof. S. Boyd
EE364b Homework 3
1. Minimizing a quadratic. Consider the subgradient method with constant step size α, used to minimize the quadratic function f(x) = (1/2)xT P x + qT x, where P ≻ 0. For which values of α do we have x(k) → x⋆, for any x(1
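A numerical experiment one might run while thinking about this question: iterate x := x − α(Px + q) for two different step sizes on a toy P and q (not the homework's data) and watch whether the iterates settle.

```python
import numpy as np

# Toy data, not the homework's: P > 0 with eigenvalues 1 and 4.
P = np.diag([1.0, 4.0])
q = np.array([-1.0, -2.0])
x_star = -np.linalg.solve(P, q)          # minimizer, here (1, 0.5)

def run(alpha, iters=500):
    x = np.zeros(2)
    for _ in range(iters):
        x = x - alpha * (P @ x + q)      # constant-step (sub)gradient iteration
    return x

x_small = run(0.4)   # settles at x_star on this instance
x_large = run(0.6)   # blows up on this instance
```

The contrast between the two runs is the behavior the problem asks you to characterize in terms of α and the eigenvalues of P.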
EE364b
Prof. S. Boyd
EE364b Homework 2
1. Subgradient optimality conditions for nondifferentiable inequality constrained optimization. Consider the problem minimize f0(x) subject to fi(x) ≤ 0, i = 1, . . . , m,
with variable x ∈ Rn. We do not assume that f0
EE364b
Prof. S. Boyd
EE364b Homework 1
1. For each of the following convex functions, explain how to calculate a subgradient at a given x. (a) f(x) = maxi=1,...,m (aiT x + bi). (b) f(x) = maxi=1,...,m |aiT x + bi|. (c) f(x) = sup0≤t≤1 p(t), where p(t) =
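For part (a), the well-known answer is g = ai for any index i achieving the maximum; the subgradient inequality can be spot-checked numerically with made-up data:

```python
import numpy as np

# For f(x) = max_i (a_i^T x + b_i), any maximizing index i gives a
# subgradient g = a_i at x.  Spot-check the inequality with made-up data.
rng = np.random.default_rng(5)
A_aff = rng.standard_normal((4, 3))
b_aff = rng.standard_normal(4)

def f(x):
    return float(np.max(A_aff @ x + b_aff))

x0 = rng.standard_normal(3)
g = A_aff[int(np.argmax(A_aff @ x0 + b_aff))]   # subgradient at x0
ok = all(f(z) >= f(x0) + g @ (z - x0) - 1e-12
         for z in rng.standard_normal((500, 3)))
```

The check holds for every trial point, since f(z) is at least the active affine piece evaluated at z.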
ConvexOptimizationII-Lecture18 Instructor (Stephen Boyd): Well, let me you can turn off all amplification in here. So yeah, so it's still you still have amplification on in here so you can oh, well, we'll let them figure that out. Let's see, couple of announc