1.204 Lecture 22
Unconstrained nonlinear optimization: Amoeba, BFGS
Linear programming: GLPK

Multiple optimum values (from Press)
Heuristics to deal with multiple optima:
- Start at many initial points and choose the best of the optima found (sketched below).
- Find a local optimum, take a step away from it, and search again.
- Simulated annealing takes 'random' steps repeatedly.
[Figure: a function of x1 and x2 with several local optima, points labeled A-G and X, Y, Z. Figure by MIT OpenCourseWare.]
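The multi-start heuristic is straightforward to automate. A minimal sketch in Python, assuming SciPy is available; the objective f, the search box, and the number of restarts are illustrative placeholders, not part of the lecture:

    import numpy as np
    from scipy.optimize import minimize

    def f(x):
        # Illustrative multimodal objective (not from the lecture).
        return np.sin(3 * x[0]) * np.cos(2 * x[1]) + 0.1 * (x[0]**2 + x[1]**2)

    rng = np.random.default_rng(0)
    best = None
    for _ in range(20):                              # many initial points
        x0 = rng.uniform(-3, 3, size=2)              # random start inside the search box
        res = minimize(f, x0, method="Nelder-Mead")  # local search from x0
        if best is None or res.fun < best.fun:
            best = res                               # keep the best optimum found
    print(best.x, best.fun)

Each restart only finds a local optimum; keeping the best result over many restarts is exactly the "choose the best of the optima found" heuristic above.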
Nonlinear optimization
- Unconstrained nonlinear optimization algorithms generally use the same strategy as constrained optimization:
  - Select a descent direction.
  - Use a one-dimensional line search to set the step size.
  - Step, and iterate until convergence.
  (A generic descent loop with a backtracking line search is sketched after this list.)
- Constrained optimization used the constraints to limit the maximum step size; unconstrained optimization must select the maximum step size itself.
  - The step size is problem-specific and must be tuned.
- Memory requirements are rarely a problem; convergence, accuracy and speed are the issues.

Family of nonlinear algorithms
- Amoeba (Nelder-Mead) method
  - Solves the nonlinear optimization problem directly.
  - Requires no derivatives or line search.
  - Adapts its step size based on the change in function value.
- Conjugate gradient and quasi-Newton methods
  - Require the function, first derivatives and a line search.
  - The line search step size adapts as the algorithm proceeds.
- Newton-Raphson method (last lecture)
  - Used to solve nonlinear optimization problems by solving the set of first-order conditions.
  - Uses the step dx that makes f(x+dx) = 0; little control over the step size.
  - The 'globally convergent' Newton variant takes smaller steps.
  - Needs first and second derivatives (and the function).
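To make the shared strategy concrete, here is a minimal sketch of the descent-direction/line-search/step loop, using steepest descent with an Armijo backtracking line search. This is an assumed illustration of the general pattern, not any of the lecture's algorithms:

    import numpy as np

    def descent(f, grad, x0, tol=1e-6, max_iter=500):
        """Generic loop: pick a descent direction, line-search a step size, step."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:       # converged: gradient is ~0
                break
            d = -g                            # descent direction (steepest descent)
            t = 1.0                           # backtracking (Armijo) line search
            while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
                t *= 0.5
            x = x + t * d                     # take the step and iterate
        return x

    # Illustrative quadratic test function.
    f = lambda x: (x[0] - 1)**2 + 2 * (x[1] + 2)**2
    grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 2)])
    print(descent(f, grad, [0.0, 0.0]))       # approaches (1, -2)

The conjugate gradient and quasi-Newton methods listed below differ mainly in how they choose the direction d; the amoeba replaces the whole loop with simplex moves.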
Choosing among the algorithms
- Amoeba is the simplest, most robust, and slowest: it "crawls downhill with no assumptions about the function," and no derivatives are required.
- Conjugate gradient (Polak-Ribiere) (not covered): needs first derivatives; less storage than quasi-Newton, but less accuracy.
- Quasi-Newton (Davidon-Fletcher-Powell or Broyden-Fletcher-Goldfarb-Shanno): the standard version uses first derivatives; a variation computes the first derivatives numerically. Better than conjugate gradient for most problems.
- Newton-Raphson: needs the function, first and second derivatives. Simplest code, but not robust or flexible.
- Use amoeba if you want a simple approach.

Amoeba algorithm
- The easiest algorithm for unconstrained nonlinear optimization is known as Nelder-Mead, or "the amoeba."
- It is very different from the line search methods: it requires only function evaluations, no derivatives.
- It is less efficient than the line search algorithms, but it tends to be robust (line methods are temperamental).
- It is short (~150 lines) and relatively easy to implement.
- It works on problems where derivatives are difficult, e.g. fingerprint matching and models of brain function.
- We'll use logit demand model estimation as the test case for all the algorithms today (a small illustrative comparison follows).
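As a preview of that comparison, here is a minimal sketch that fits a binary logit model by minimizing its negative log-likelihood with both Nelder-Mead (derivative-free) and BFGS (quasi-Newton). The synthetic data, coefficients and function names are illustrative assumptions, not the lecture's logit demand test case, and SciPy's built-in optimizers stand in for the course implementations:

    import numpy as np
    from scipy.optimize import minimize

    # Tiny synthetic binary logit dataset (illustrative only).
    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])  # intercept + 2 attributes
    true_beta = np.array([0.5, -1.0, 2.0])
    y = (rng.random(200) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

    def neg_log_likelihood(beta):
        u = X @ beta
        # Binary logit: sum of log(1 + exp(u)) - y*u
        return np.sum(np.logaddexp(0.0, u) - y * u)

    x0 = np.zeros(3)
    amoeba = minimize(neg_log_likelihood, x0, method="Nelder-Mead")  # no derivatives
    bfgs = minimize(neg_log_likelihood, x0, method="BFGS")           # gradient approximated numerically
    print("Nelder-Mead:", amoeba.x, amoeba.nfev, "function evaluations")
    print("BFGS:       ", bfgs.x, bfgs.nfev, "function evaluations")

Both runs should recover coefficients near true_beta; BFGS typically needs far fewer function evaluations, while Nelder-Mead makes no use of derivative information at all.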
Amoeba steps
- A simplex is the volume defined by n+1 points in n dimensions (a triangle in two dimensions, a tetrahedron in three). See the sketch below.
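The amoeba's moves all manipulate this simplex of n+1 points. As a small illustration, here is a sketch of building an initial simplex around a starting point and applying one reflection of the worst vertex through the centroid of the others; the function names, step size, and test function are assumptions for illustration, not the lecture's implementation:

    import numpy as np

    def initial_simplex(x0, step=0.1):
        """n+1 points in n dimensions: x0 plus x0 perturbed along each axis."""
        x0 = np.asarray(x0, dtype=float)
        simplex = [x0.copy()]
        for i in range(len(x0)):
            v = x0.copy()
            v[i] += step
            simplex.append(v)
        return np.array(simplex)              # shape (n+1, n)

    def reflect_worst(simplex, f, alpha=1.0):
        """One basic amoeba move: reflect the worst vertex through the
        centroid of the remaining vertices, keeping it only if it improves."""
        values = np.array([f(p) for p in simplex])
        worst = np.argmax(values)
        centroid = np.delete(simplex, worst, axis=0).mean(axis=0)
        reflected = centroid + alpha * (centroid - simplex[worst])
        if f(reflected) < values[worst]:
            simplex[worst] = reflected
        return simplex

    f = lambda x: (x[0] - 1)**2 + (x[1] + 2)**2   # illustrative test function
    s = initial_simplex([0.0, 0.0])
    print(reflect_worst(s, f))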