DAMTP 2008/NA06
On the convergence of a wide range of trust region
methods for unconstrained optimization
M.J.D. Powell
Abstract: We consider trust region methods for seeking the unconstrained minimum of an objective function F(x), x ∈ R^n, when the gradient …
Lecture 13: Proximity, Bregman Distance, Proximal Gradient Method
Proximity, Bregman Distance
Proximal gradient method (PG)
Analysis of PG method
Proximity Operator
Let f : dom(f) → R be convex (possibly nonsmooth). For every x ∈ dom(f), the proximity operator is defined as prox_f(x) = argmin_u { f(u) + (1/2)||u − x||^2 }.
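For many common choices of f the proximity operator has a closed form. As an illustrative sketch (the helper names prox_l1 and pg_step are mine, not from the lecture), here is the prox of the scaled l1 norm, i.e. soft-thresholding, together with one proximal gradient (PG) step:

```python
import numpy as np

def prox_l1(x, lam):
    """prox of f(u) = lam*||u||_1, i.e. argmin_u lam*||u||_1 + 0.5*||u - x||^2.
    Closed form: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def pg_step(x, grad_g, alpha, lam):
    """One proximal gradient step for minimizing g(x) + lam*||x||_1:
    x+ = prox_{alpha*lam*||.||_1}(x - alpha * grad_g(x))."""
    return prox_l1(x - alpha * grad_g(x), alpha * lam)

# Soft-thresholding shrinks every entry toward zero by lam and
# zeroes out entries whose magnitude is below lam.
y = prox_l1(np.array([3.0, -0.5, 1.0]), 1.0)
```

With g(x) = (1/2)||Ax − b||^2 and grad_g(x) = Aᵀ(Ax − b), iterating pg_step gives the standard PG method for LASSO.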
Lecture 12: Analysis of AGP Method and Extensions
Error bounds for optimization problems
Linear convergence analysis of AGP method
Extension to nonconvex quadratic programming (QP)
Extension to nonsmooth optimization: LASSO, group LASSO, etc.
Lecture 11: Approximate Gradient Projection
Linear convergence without strong convexity: composite objective function f(x) = g(Ex) + b^T x.
A general framework for approximate gradient projection
Matrix splitting, coordinate descent, extra-gradient, pr…
EE5239 Introduction to Nonlinear Optimization
Zhi-Quan (Tom) Luo
Department of Electrical and Computer Engineering
University of Minnesota
[email protected]
Lecture 10: Issues of Large Scale Optimization
Motivating examples:
compl…
Lecture 9: Penalty Methods and Multiplier Methods
Quadratic Penalty Methods
Introduction to Multiplier Methods
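The quadratic penalty idea can be seen on a one-variable toy problem; this is my own sketch, not an example from the lecture (the helper argmin_penalized is hypothetical):

```python
# Quadratic penalty method for: minimize x^2 subject to x = 1.
# Penalized (unconstrained) problem: minimize P_c(x) = x^2 + (c/2)*(x - 1)^2.
def argmin_penalized(c):
    # Setting P_c'(x) = 2x + c*(x - 1) = 0 gives x = c / (2 + c).
    return c / (2.0 + c)

# As the penalty parameter c grows, the unconstrained minimizers
# approach the constrained solution x* = 1 (but reach it only in the limit).
trajectory = [argmin_penalized(c) for c in (1.0, 10.0, 100.0, 1000.0)]
```

The monotone approach of the trajectory to x* = 1 illustrates why penalty methods drive the penalty parameter to infinity, which in turn motivates the better-conditioned multiplier (augmented Lagrangian) methods.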
Lecture 8: Constrained Optimization: Duality Theory
Convex Cost/Linear Constraints
Duality Theory
Lecture 7: Constrained Optimization: Lagrangian Multipliers,
Optimality Conditions
Equality constraints
Lecture 6: Optimization over a Convex Set
Optimality conditions
Projection theorem
Feasible directions
Lecture 5: Second Order Methods
Newton's Method
Convergence Rate of the Pure Form
Global Convergence
Lecture 4: Optimal First Order Methods
Unconstrained smooth convex minimization
Analysis of clas…
Lecture 3: Additional First Order Methods
Incremental Gradient Method
Conjugate Directions
Conjugate Gradient Method
Lecture 2: Gradient Methods
Gradient Methods - Motivation
Principal Gradi…
Lecture 1: Unconstrained Optimization
Definitions
Necessary first/second order conditions
November 23, 1991
ERROR BOUNDS AND CONVERGENCE ANALYSIS
OF FEASIBLE DESCENT METHODS: A GENERAL APPROACH
by
Zhi-Quan Luo and Paul Tseng
ABSTRACT
We survey and extend a general approach to analyzing the convergence and the rate of convergence of feasible descent methods …
Course Notes on First Order Optimization Methods
Zhi-Quan Luo, University of Minnesota
Spring, 2011
Introduction
So far we have studied various first order methods for unconstrained smooth convex optimization. We have also analyzed their iteration complexity …
Consider a convex differentiable minimization problem:

    minimize f(x)
    subject to x ∈ X,        (1)

where X ⊆ IR^n is a closed nonempty convex set, and f is a continuously differentiable function.
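The natural method for problem (1) is gradient projection. As a minimal sketch, assuming X is a box so that the Euclidean projection is just componentwise clipping (the names project_box and projected_gradient are illustrative, not from the notes):

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box X = {x : lo <= x <= hi}."""
    return np.clip(x, lo, hi)

def projected_gradient(grad_f, x0, lo, hi, alpha=0.1, iters=200):
    """Gradient projection iteration: x_{k+1} = P_X(x_k - alpha * grad_f(x_k))."""
    x = x0
    for _ in range(iters):
        x = project_box(x - alpha * grad_f(x), lo, hi)
    return x

# Example: minimize f(x) = 0.5*||x - c||^2 over the box [0, 1]^2,
# with c outside the box. The solution is the projection of c: (1, 0).
c = np.array([2.0, -1.0])
x_star = projected_gradient(lambda x: x - c, np.zeros(2), 0.0, 1.0)
```

With a step size alpha ≤ 1/L (L the Lipschitz constant of grad f; here L = 1), each iteration is a contraction toward the constrained minimizer, which is the fixed point x* = P_X(x* − alpha·grad f(x*)).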