IE417: Nonlinear Programming: Lecture 10
Jeff Linderoth, Department of Industrial and Systems Engineering, Lehigh University
16th February 2006

Last Time: Conjugate Gradient

Solving Ax = b, or equivalently min φ(x) := (1/2) x^T A x − b^T x, with A ∈ S^n_{++} (symmetric positive definite).

Conjugate Gradient Algorithm
1. Choose x_0; set r_0 = A x_0 − b, d_0 = −r_0, k = 0.
2. α_k = −(r_k^T d_k) / (d_k^T A d_k)
3. x_{k+1} = x_k + α_k d_k
4. r_{k+1} = A x_{k+1} − b; β_{k+1} = (r_{k+1}^T A d_k) / (d_k^T A d_k)
5. d_{k+1} = −r_{k+1} + β_{k+1} d_k
6. If r_{k+1} = 0, stop. Else set k = k + 1 and go to step 2.

CG for Unconstrained Optimization

min_{x ∈ R^n} f(x)

Fletcher-Reeves Conjugate Gradient
1. Given x_0; set d_0 = −∇f(x_0), k = 0.
2. Compute α_k; set x_{k+1} = x_k + α_k d_k.
3. Compute ∇f(x_{k+1}). If ∇f(x_{k+1}) = 0, stop. Else
   β_{k+1} = (∇f(x_{k+1})^T ∇f(x_{k+1})) / (∇f(x_k)^T ∇f(x_k))
4. Compute the new direction: d_{k+1} = −∇f(x_{k+1}) + β_{k+1} d_k.

The Fletcher-Reeves method is globally convergent as long as the step lengths α_k satisfy the strong Wolfe conditions (with c_2 < 1/2).

Today: Quasi-Newton

Recall the quadratic model
   m_k(d) = f(x_k) + ∇f(x_k)^T d + (1/2) d^T B_k d.
The minimizer of this quadratic function is d_k = −B_k^{-1} ∇f(x_k), giving the step x_{k+1} = x_k + α_k d_k.

The Question: What can B do for you? Given the gradient information that you have recently seen, how would you like your model to behave?
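The linear conjugate gradient algorithm above can be sketched directly from the six steps. This is a minimal NumPy illustration, not the lecture's code; the function name, tolerance, and iteration cap are choices of this sketch (in exact arithmetic CG for an n-by-n system terminates in at most n steps, so the cap defaults to n).

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    """Solve Ax = b for symmetric positive definite A via conjugate gradients,
    following the six steps of the slide's algorithm."""
    x = np.asarray(x0, dtype=float)
    r = A @ x - b                         # step 1: r_0 = A x_0 - b
    d = -r                                # step 1: d_0 = -r_0
    if max_iter is None:
        max_iter = len(b)
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:       # step 6: stop when the residual vanishes
            break
        Ad = A @ d
        alpha = -(r @ d) / (d @ Ad)       # step 2: exact minimizing step length
        x = x + alpha * d                 # step 3
        r = A @ x - b                     # step 4: new residual
        beta = (r @ Ad) / (d @ Ad)        # step 4: conjugacy coefficient
        d = -r + beta * d                 # step 5: new A-conjugate direction
    return x
```

Each new direction is A-conjugate to the previous one, which is what lets CG terminate finitely on quadratics.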
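The Fletcher-Reeves method can likewise be sketched in a few lines. The line search here is an assumption of this sketch: a simple bisection routine aiming at the strong Wolfe conditions (with c_2 = 0.1 < 1/2, as the convergence result requires); the function names and tolerances are illustrative, not from the lecture.

```python
import numpy as np

def wolfe_step(f, grad, x, d, c1=1e-4, c2=0.1, max_bisect=50):
    """Bisection search for a step length satisfying the strong Wolfe
    conditions: sufficient decrease plus the curvature bound |phi'(a)| <= -c2 phi'(0)."""
    phi0, dphi0 = f(x), grad(x) @ d
    lo, hi, alpha = 0.0, None, 1.0
    for _ in range(max_bisect):
        dphi = grad(x + alpha * d) @ d
        if f(x + alpha * d) > phi0 + c1 * alpha * dphi0:
            hi = alpha                      # step too long: sufficient decrease fails
        elif dphi < c2 * dphi0:
            lo = alpha                      # slope still too negative: step too short
        elif dphi > -c2 * dphi0:
            hi = alpha                      # overshot the one-dimensional minimizer
        else:
            return alpha                    # strong Wolfe conditions satisfied
        alpha = (lo + hi) / 2.0 if hi is not None else 2.0 * alpha
    return alpha

def fletcher_reeves(f, grad, x0, tol=1e-6, max_iter=200):
    """Fletcher-Reeves nonlinear conjugate gradient, following the slide's four steps."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # step 1: d_0 = -grad f(x_0)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:        # step 3: stop at a stationary point
            break
        alpha = wolfe_step(f, grad, x, d)  # step 2: strong-Wolfe step length
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # step 3: Fletcher-Reeves beta_{k+1}
        d = -g_new + beta * d              # step 4: new direction
        g = g_new
    return x
```

Applied to the quadratic φ(x) = (1/2) x^T A x − b^T x (whose gradient is Ax − b), this reduces to something very close to linear CG, which is a useful sanity check.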
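The quasi-Newton step is cheap to illustrate: setting the gradient of the model m_k(d) to zero gives the linear system B_k d = −∇f(x_k), so d_k is computed by a linear solve rather than by forming B_k^{-1} explicitly. A tiny sketch (the function name is illustrative):

```python
import numpy as np

def quasi_newton_direction(grad_fk, B_k):
    """Minimize m_k(d) = f(x_k) + grad_fk^T d + 0.5 d^T B_k d.
    Setting grad m_k(d) = grad_fk + B_k d = 0 gives B_k d = -grad_fk."""
    return np.linalg.solve(B_k, -grad_fk)
```

Note the two extremes: B_k = I recovers the steepest-descent direction −∇f(x_k), while B_k = ∇²f(x_k) recovers the Newton direction; quasi-Newton methods choose B_k between these, which is exactly the question the slide poses.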
This note was uploaded on 02/29/2008 for the course IE 417 taught by Professor Linderoth during the Spring '08 term at Lehigh University.