# Lecture 10: Nonlinear Optimization Algorithms I


Yinyu Ye
Department of Management Science and Engineering
Stanford University, Stanford, CA 94305, U.S.A.
http://www.stanford.edu/~yyye

## Introduction

Optimization algorithms tend to be *iterative procedures*. Starting from a given point x_0, they generate a sequence {x_k} of iterates (or trial solutions). We study algorithms that produce iterates according to well-determined rules (deterministic algorithms) rather than by some random selection process (randomized algorithms). The rules to be followed and the procedures that can be applied depend to a large extent on the characteristics of the problem to be solved.
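The deterministic-procedure idea above can be sketched in code: the next iterate is a fixed function of the current one, so rerunning from the same x_0 reproduces the same sequence {x_k}. This is a minimal illustrative sketch, not an algorithm from the notes; the objective f(x) = (x - 3)^2 and the step size 0.4 are made-up examples.

```python
def iterate(step, x0, max_iters=100, tol=1e-8):
    """Apply the deterministic update rule `step` repeatedly, recording
    the sequence {x_k}, until successive iterates stop moving (or the
    iteration budget runs out)."""
    x = x0
    history = [x]
    for _ in range(max_iters):
        x_next = step(x)
        history.append(x_next)
        if abs(x_next - x) < tol:
            break
        x = x_next
    return x, history

# Hypothetical example: minimize f(x) = (x - 3)^2 with a fixed-step
# gradient update x_{k+1} = x_k - 0.4 * f'(x_k). The rule is a pure
# function of x_k, so the procedure is deterministic.
grad = lambda x: 2.0 * (x - 3.0)
x_star, hist = iterate(lambda x: x - 0.4 * grad(x), x0=0.0)
```

Because the same starting point always yields the same trajectory, convergence proofs can reason about the fixed rule itself rather than about random draws.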

## Classes of problems

Some of the distinctions between optimization problems stem from:

(a) differentiable versus nondifferentiable functions;
(b) unconstrained versus constrained variables;
(c) one-dimensional versus multi-dimensional variables;
(d) convex versus nonconvex minimization.

**Finite versus convergent iterative methods.** For some classes of optimization problems (e.g., linear and quadratic optimization) there are algorithms that obtain a solution, or detect that the objective function is unbounded, in a finite number of iterations. For this reason, we call them *finite algorithms*. Most algorithms encountered in optimization are not finite, but instead are *convergent*, or at least are designed to be so. Their object is to generate a sequence of trial or approximate solutions that converges to a "solution."

## The meaning of "solution"

What is meant by a solution may differ from one algorithm to another. In some cases one seeks a local minimum; in some cases, a global minimum; in others, a KKT point of some sort, as in the method of steepest descent discussed below. In fact, there are several possibilities for defining what a solution is. Once the definition is chosen, there must be a way of testing whether or not a point (trial solution) belongs to the set of solutions.
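As a concrete instance of such a membership test: for an unconstrained differentiable problem, a KKT point is simply a stationary point, so one common test accepts x as a candidate solution when the gradient norm falls below a tolerance. A minimal sketch, with a hypothetical quadratic objective chosen for illustration:

```python
import math

def is_stationary(grad_f, x, tol=1e-6):
    """Test whether x is approximately stationary, i.e. a KKT point of
    the unconstrained problem min f(x): ||grad f(x)|| <= tol."""
    g = grad_f(x)
    return math.sqrt(sum(gi * gi for gi in g)) <= tol

# Hypothetical example: f(x, y) = x^2 + 2 y^2, with gradient (2x, 4y).
grad_f = lambda x: (2.0 * x[0], 4.0 * x[1])
print(is_stationary(grad_f, (0.0, 0.0)))  # True: the minimizer
print(is_stationary(grad_f, (1.0, 0.0)))  # False: gradient norm is 2
```

Note that this test only certifies stationarity; distinguishing a local from a global minimum generally requires more structure, such as convexity.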
## Search directions

Typically, a nonlinear optimization algorithm generates a sequence of points through an iterative scheme of the form

    x_{k+1} = x_k + alpha_k * p_k,

where p_k is the *search direction* and alpha_k is the *step size* (or step length). The key is that once x_k is known, p_k is chosen as some function of x_k, and the scalar alpha_k may be chosen in accordance with some line (one-dimensional) search rule.

## The general idea

One selects a starting point and generates a possibly infinite sequence of trial solutions, each of which is specified by the algorithm. The idea is to do this in such a way that the sequence of iterates generated by the algorithm converges to an element of the solution set of the problem.
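The scheme x_{k+1} = x_k + alpha_k * p_k can be sketched concretely with the steepest-descent choice p_k = -grad f(x_k) and a simple backtracking (Armijo) line search for alpha_k. This is an illustrative sketch, not code from the notes; the objective f(x, y) = x^2 + 5 y^2, the Armijo constant 1e-4, and the halving factor are assumed example values.

```python
import math

def steepest_descent(f, grad_f, x0, max_iters=500, tol=1e-8):
    """Run x_{k+1} = x_k + alpha_k * p_k with p_k = -grad f(x_k) and a
    backtracking line search for the step size alpha_k."""
    x = list(x0)
    for _ in range(max_iters):
        g = grad_f(x)
        if math.sqrt(sum(gi * gi for gi in g)) <= tol:
            break  # approximately stationary: stop
        p = [-gi for gi in g]  # search direction p_k
        gg = sum(gi * gi for gi in g)
        alpha, fx = 1.0, f(x)
        # Backtracking: halve alpha until the Armijo sufficient-decrease
        # condition f(x + alpha p) <= f(x) - c * alpha * ||g||^2 holds.
        while f([xi + alpha * pi for xi, pi in zip(x, p)]) > fx - 1e-4 * alpha * gg:
            alpha *= 0.5
        # The generic update step: x_{k+1} = x_k + alpha_k * p_k
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
    return x

# Hypothetical example: minimize f(x, y) = x^2 + 5 y^2 (minimizer at 0).
f = lambda x: x[0] ** 2 + 5.0 * x[1] ** 2
grad_f = lambda x: [2.0 * x[0], 10.0 * x[1]]
x_star = steepest_descent(f, grad_f, [4.0, -2.0])
```

Other methods fit the same template by changing how p_k is computed from x_k (e.g., Newton-type directions) while reusing the line-search rule for alpha_k.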


