
# MIT1_204S10_lec22 - 1.204 Lecture 22: Unconstrained nonlinear optimization (Amoeba, BFGS); Linear programming (GLPK)


## Multiple optimum values

From Press. Heuristics to deal with multiple optima:

- Start at many initial points; choose the best of the optima found.
- Find a local optimum, take a step away from it, and search again.
- Simulated annealing takes 'random' steps repeatedly.

[Figure: an objective with multiple local optima (points labeled A through G) over axes X1 and X2. Figure by MIT OpenCourseWare.]
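The first heuristic above (many initial points, keep the best optimum) can be sketched as a multi-start loop. This is a hedged illustration, not code from the lecture: the one-dimensional objective `f` below is an invented multimodal test function, and scipy's Nelder-Mead is assumed as the local optimizer.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Invented multimodal test function: a sinusoid plus a quadratic bowl,
    # which has several local minima on [-3, 3].
    return np.sin(3 * x[0]) + (x[0] - 0.5) ** 2

rng = np.random.default_rng(0)
best = None
for _ in range(20):
    x0 = rng.uniform(-3.0, 3.0, size=1)        # random initial point
    res = minimize(f, x0, method="Nelder-Mead") # local search from x0
    if best is None or res.fun < best.fun:
        best = res                              # keep best optimum found
```

Each start converges only to the local minimum of its own basin; taking the best over many random starts makes finding the global minimum likely, though never guaranteed.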

## Nonlinear optimization

Unconstrained nonlinear optimization algorithms generally use the same strategy as constrained ones:

- Select a descent direction.
- Use a one-dimensional line search to set the step size.
- Step, and iterate until convergence.

Constrained optimization uses the constraints to limit the maximum step size; unconstrained optimization must select the maximum step size itself. The step size is problem-specific and must be tuned. Memory requirements are rarely a problem; convergence, accuracy, and speed are the issues.

## Family of nonlinear algorithms

- Amoeba (Nelder-Mead) method
  - Solves the nonlinear optimization problem directly.
  - Requires no derivatives or line search.
  - Adapts its step size based on the change in function value.
- Conjugate gradient and quasi-Newton methods
  - Require the function, first derivatives, and a line search.
  - The line search step size adapts as the algorithm proceeds.
- Newton-Raphson method (last lecture)
  - Used to solve nonlinear optimization problems by solving the set of first-order conditions.
  - Uses the step size dx that makes f(x + dx) = 0; little control.
  - The 'globally convergent' Newton variant has a smaller step size.
  - Needs first and second derivatives (and the function).
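The three-step strategy above (descent direction, line search, iterate) can be sketched minimally. This is an illustrative implementation, not the lecture's code: it uses steepest descent for the direction, a backtracking (Armijo) line search for the step size, and a numerical gradient; the function names and tolerances are assumptions.

```python
import numpy as np

def grad(f, x, h=1e-6):
    # Central-difference numerical gradient of f at x.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def descend(f, x0, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(f, x)
        if np.linalg.norm(g) < tol:   # convergence test on gradient norm
            break
        d = -g                        # 1. descent direction (steepest descent)
        t = 1.0                       # 2. backtracking line search (Armijo)
        while f(x + t * d) > f(x) - 0.5 * t * (g @ g):
            t *= 0.5
        x = x + t * d                 # 3. step, then iterate
    return x

# Quadratic test problem with minimum at (1, 2).
xmin = descend(lambda x: (x[0] - 1) ** 2 + 2 * (x[1] - 2) ** 2, [0.0, 0.0])
```

Quasi-Newton methods such as BFGS follow the same loop but replace the steepest-descent direction with one scaled by an approximate inverse Hessian, which converges much faster on ill-conditioned problems.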
## Choosing among the algorithms

- Amoeba: simplest, most robust, slowest. "Crawls downhill with no assumptions about the function." No derivatives required.
- Conjugate gradient (Polak-Ribiere) (not covered): needs first derivatives; less storage than quasi-Newton, but less accuracy.
- Quasi-Newton (Davidon-Fletcher-Powell or Broyden-Fletcher-Goldfarb-Shanno): standard version uses first derivatives; a variation computes first derivatives numerically. Better than conjugate gradient for most problems.
- Newton-Raphson: needs the function plus first and second derivatives. Simplest code, but not robust or flexible.

Use amoeba if you want a simple approach.

## Amoeba algorithm

The easiest algorithm for unconstrained nonlinear optimization is known as Nelder-Mead, or "the amoeba." It is very different from the others:

- It requires only function evaluations; no derivatives are required.
- It is less efficient than the line search algorithms, but it tends to be robust (line methods are temperamental).
- It is short (~150 lines) and relatively easy to implement.
- It works in problems where derivatives are difficult, e.g. fingerprint matching and models of brain function.

We'll use logit demand model estimation as the test case for all the algorithms today.
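Since the lecture's test case is logit model estimation, here is a hedged sketch of how that looks with the amoeba: fit a binary logit by minimizing the negative log-likelihood with scipy's Nelder-Mead implementation, using only function evaluations and no derivatives. The data below are synthetic and the coefficient values are invented, not taken from the lecture.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic binary-choice data: 200 observations, 2 attributes.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
true_beta = np.array([1.0, -2.0])               # assumed "true" coefficients
p = 1.0 / (1.0 + np.exp(-X @ true_beta))        # logit choice probabilities
y = (rng.uniform(size=200) < p).astype(float)   # simulated choices

def neg_log_likelihood(beta):
    # Negative binary-logit log-likelihood; only the function value is
    # needed, so a derivative-free method applies directly.
    z = X @ beta
    return np.sum(np.log1p(np.exp(z)) - y * z)

res = minimize(neg_log_likelihood, x0=np.zeros(2), method="Nelder-Mead")
```

The estimated coefficients `res.x` should recover the signs and rough magnitudes of `true_beta`; a derivative-based method such as BFGS would reach the same optimum in fewer function evaluations.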

## Amoeba steps