ISE 536--Fall03: Linear Programming and Extensions                November 24, 2003

Lecture 22: IPM, Path Following Methods
Lecturer: Fernando Ordóñez

1  A few ideas from convex optimization

For a convex function $f : \mathbb{R}^n \to \mathbb{R}$, the point that minimizes $f(x)$ satisfies $\nabla f(x) = 0$, i.e.:
$$\frac{\partial f}{\partial x_i}(x) = 0, \qquad i = 1, \dots, n.$$
Assume now that there is also a function $g : \mathbb{R}^n \to \mathbb{R}^m$, and that you are interested in the minimizer of $f(x)$ constrained to $g(x) = 0$. How do you find the point that solves
$$\min \ f(x) \quad \text{s.t.} \quad g(x) = 0\,?$$

1.1  Newton's method

To obtain the minimizer of such optimization problems we need to find $x$ such that $h(x) = 0$ for some system of equations. Newton's method does just this! If $x \in \mathbb{R}$, to find $x$ with $h(x) = 0$ Newton's method constructs the following iteration:
$$x_{k+1} = x_k - \frac{h(x_k)}{h'(x_k)}.$$
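As a concrete illustration (not from the lecture itself), here is a minimal Python sketch of this one-dimensional iteration. The test function $h(x) = x^2 - 2$, the starting point, and the stopping tolerance are illustrative assumptions.

```python
# Newton's method in one dimension: find x with h(x) = 0 by iterating
#   x_{k+1} = x_k - h(x_k) / h'(x_k).
# The example function and tolerance below are illustrative choices.

def newton_1d(h, h_prime, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = h(x) / h_prime(x)  # Newton step h(x_k) / h'(x_k)
        x -= step
        if abs(step) < tol:       # stop when the step is tiny
            break
    return x

# Example: solve x^2 - 2 = 0 starting from x0 = 1 (converges to sqrt(2)).
root = newton_1d(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
print(root)  # ~1.4142135623...
```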
For $n$ equations and $n$ unknowns, that is, if $h : \mathbb{R}^n \to \mathbb{R}^n$ and $x \in \mathbb{R}^n$, then Newton's method is:
$$x_{k+1} = x_k - J(x_k)^{-1} h(x_k),$$
where $J(x_k)$ is the $n \times n$ matrix of partial derivatives:
$$J(x_k)_{ij} = \frac{\partial h_i}{\partial x_j}(x_k).$$
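A minimal Python sketch of the multivariate iteration follows; the test system, starting point, and tolerance are illustrative assumptions, not from the lecture. Note that it solves the linear system $J(x_k)\,d = h(x_k)$ instead of forming $J(x_k)^{-1}$ explicitly, which is the standard numerically sound way to compute the Newton step.

```python
import numpy as np

# Multivariate Newton: x_{k+1} = x_k - J(x_k)^{-1} h(x_k), implemented by
# solving the linear system J(x_k) d = h(x_k) for the step d.
# The example system below is an illustrative choice.

def newton_nd(h, jac, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = np.linalg.solve(jac(x), h(x))  # Newton step: J(x) d = h(x)
        x = x - d
        if np.linalg.norm(d) < tol:        # stop when the step is tiny
            break
    return x

# Example: h(x, y) = (x^2 + y^2 - 1, x - y), a root on the unit circle.
h = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
print(newton_nd(h, jac, [1.0, 0.5]))  # -> [0.70710678, 0.70710678]
```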