Fundamental Engineering Optimization Methods

The Augmented Lagrangian Method


The augmented Lagrangian (AL) method is introduced below using an equality-constrained optimization problem, given as (Belegundu and Chandrupatla, p. 276):

$$\min_{\mathbf{x}} \; f(\mathbf{x}) \quad \text{subject to: } h_i(\mathbf{x}) = 0, \; i = 1, \ldots, l \qquad (7.37)$$
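To make the development concrete, consider a small worked example (an illustration, not from the text): minimize $f(\mathbf{x}) = x_1^2 + x_2^2$ subject to $h(\mathbf{x}) = x_1 + x_2 - 1 = 0$. The stationarity condition $\nabla f + v\,\nabla h = \mathbf{0}$ gives $2x_1 + v = 2x_2 + v = 0$, which together with the constraint yields $\mathbf{x}^* = (\tfrac{1}{2}, \tfrac{1}{2})$ and $v^* = -1$. This example is reused in the code sketches below.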

The augmented Lagrangian function for the problem is defined as:

$$\mathcal{P}(\mathbf{x}, \mathbf{v}, r) = f(\mathbf{x}) + \sum_i \left( v_i h_i(\mathbf{x}) + \frac{1}{2} r\, h_i^2(\mathbf{x}) \right) \qquad (7.38)$$

In the above, the $v_i$ are the Lagrange multipliers, and the additional quadratic term defines an exterior penalty function with $r$ as the penalty parameter. The gradient and Hessian of the AL are computed as:

$$\nabla \mathcal{P}(\mathbf{x}, \mathbf{v}, r) = \nabla f(\mathbf{x}) + \sum_i \left( v_i + r\, h_i(\mathbf{x}) \right) \nabla h_i(\mathbf{x})$$

$$\nabla^2 \mathcal{P}(\mathbf{x}, \mathbf{v}, r) = \nabla^2 f(\mathbf{x}) + \sum_i \left( \left( v_i + r\, h_i(\mathbf{x}) \right) \nabla^2 h_i(\mathbf{x}) + r\, \nabla h_i(\mathbf{x})\, \nabla h_i(\mathbf{x})^T \right) \qquad (7.39)$$

While the Hessian of the Lagrangian may not be uniformly positive definite, a large enough value of $r$ makes the Hessian of the AL positive definite at $\mathbf{x}$.

Next, since the AL is stationary at the optimum, then, paralleling the developments in duality theory (Sec. 5.7), we can solve the above optimization problem via a min-max framework as follows: first, for a given $r$ and $\mathbf{v}$, we define a dual function via the following minimization problem:

$$\psi(\mathbf{v}) = \min_{\mathbf{x}} \mathcal{P}(\mathbf{x}, \mathbf{v}, r) = f(\mathbf{x}) + \sum_i \left( v_i h_i(\mathbf{x}) + \frac{1}{2} r \left( h_i(\mathbf{x}) \right)^2 \right) \qquad (7.40)$$

This step is then followed by a maximization problem: $\max_{\mathbf{v}} \psi(\mathbf{v})$. The derivative of the dual function is computed as $\frac{d\psi}{dv_i} = h_i(\mathbf{x}) + \nabla\psi^T \frac{d\mathbf{x}}{dv_i}$, where the latter term is zero, since $\nabla\psi = \nabla\mathcal{P} = \mathbf{0}$ at the minimizer. Further, an expression for the Hessian is given as $\frac{d^2\psi}{dv_i\, dv_j} = \nabla h_i^T \frac{d\mathbf{x}}{dv_j}$, where the term $\frac{d\mathbf{x}}{dv_j}$ can be obtained by differentiating the stationarity condition $\nabla\mathcal{P} = \mathbf{0}$, which gives $\nabla h_j + \nabla^2\mathcal{P} \left( \frac{d\mathbf{x}}{dv_j} \right) = \mathbf{0}$, or $\nabla^2\mathcal{P} \left( \frac{d\mathbf{x}}{dv_j} \right) = -\nabla h_j$. Therefore, the Hessian is computed as:

$$\frac{d^2\psi}{dv_i\, dv_j} = -\nabla h_i^T \left( \nabla^2 \mathcal{P} \right)^{-1} \nabla h_j \qquad (7.41)$$

The AL method proceeds as follows: we choose a suitable $\mathbf{v}^k$ and solve the minimization problem in (7.40) to define $\psi(\mathbf{v}^k)$; we then take a step in the maximization problem to update the multipliers toward values for which the minimizer of the AL solves the original problem. The latter step can be done using gradient-based methods. For example, the Newton update for the maximization problem is given as:

$$\mathbf{v}^{k+1} = \mathbf{v}^k - \left( \frac{d^2\psi}{dv_i\, dv_j} \right)^{-1} \mathbf{h}(\mathbf{x}) \qquad (7.42)$$
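The following Python sketch evaluates Eqs. (7.38)-(7.39) for the small example above, together with the Newton multiplier update of Eqs. (7.41)-(7.42). It is an illustration only: the function names and the specific choices of f and h are assumptions made for this sketch, not from the text.

import numpy as np

# Example problem (assumed for illustration): f(x) = x1^2 + x2^2,
# h(x) = x1 + x2 - 1 = 0, with solution x* = (1/2, 1/2), v* = -1.
def f(x):      return x[0]**2 + x[1]**2
def grad_f(x): return np.array([2.0*x[0], 2.0*x[1]])
def hess_f(x): return 2.0*np.eye(2)

def h(x):      return np.array([x[0] + x[1] - 1.0])   # equality constraints h_i(x)
def jac_h(x):  return np.array([[1.0, 1.0]])          # rows are the gradients of h_i

def AL(x, v, r):
    # Eq. (7.38): P(x, v, r) = f(x) + sum_i (v_i h_i + (r/2) h_i^2)
    hx = h(x)
    return f(x) + np.sum(v*hx + 0.5*r*hx**2)

def grad_AL(x, v, r):
    # Eq. (7.39): grad P = grad f + sum_i (v_i + r h_i) grad h_i
    return grad_f(x) + jac_h(x).T @ (v + r*h(x))

def hess_AL(x, v, r):
    # Eq. (7.39): hess P = hess f + sum_i ((v_i + r h_i) hess h_i
    #             + r grad h_i grad h_i^T); h is linear here, so hess h_i = 0.
    J = jac_h(x)
    return hess_f(x) + r*(J.T @ J)

def newton_multiplier_update(x, v, r):
    # Eq. (7.41): d2psi/dv_i dv_j = -grad h_i^T (hess P)^{-1} grad h_j
    # Eq. (7.42): v <- v - (d2psi)^{-1} dpsi/dv, where dpsi/dv = h(x)
    J = jac_h(x)
    d2psi = -J @ np.linalg.solve(hess_AL(x, v, r), J.T)
    return v - np.linalg.solve(d2psi, h(x))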
For large $r$, the update may be approximated as $v_j^{k+1} = v_j^k + r\, h_j(\mathbf{x})$, $j = 1, \ldots, l$ (Belegundu and Chandrupatla, p. 278). For inequality-constrained problems, the AL may be defined analogously (Arora, p. 480).
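A minimal outer loop based on this approximate update might look as follows. This is a sketch under stated assumptions: the solver structure, tolerances, penalty value, and starting point are choices made here, not prescribed by the text; the inner minimization of Eq. (7.40) uses Newton steps on the AL defined above.

def minimize_AL(x, v, r, tol=1e-10, max_iter=50):
    # Inner problem, Eq. (7.40): minimize P(., v, r) over x by Newton's method.
    for _ in range(max_iter):
        g = grad_AL(x, v, r)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess_AL(x, v, r), g)
    return x

def al_method(x0, r=10.0, outer_iter=25, tol=1e-8):
    # Outer loop: approximate multiplier update v <- v + r*h(x), valid for large r.
    x = np.asarray(x0, dtype=float)
    v = np.zeros(h(x).size)
    for _ in range(outer_iter):
        x = minimize_AL(x, v, r)
        hx = h(x)
        if np.linalg.norm(hx) < tol:
            break
        v = v + r*hx
    return x, v

x_star, v_star = al_method([0.0, 0.0])
print(x_star, v_star)   # expect approximately [0.5, 0.5] and [-1.0]

Swapping the update line for v = newton_multiplier_update(x, v, r) applies the exact Newton update of Eq. (7.42) instead; for this quadratic example with a linear constraint the dual function is quadratic in $\mathbf{v}$, so the Newton update recovers $v^* = -1$ in a single outer iteration.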