Chapter 6. Optimization: Method of Lagrange Multipliers

6.1. Constrained Optimization

In Chapter 4 we studied a method for finding all stationary points of a function and classifying them as local extrema or saddle points. In this chapter we consider the problem of maximizing or minimizing a function of $n$ variables subject to the constraint that the points $(x_1, \dots, x_n)^T$ must satisfy certain equations. Here we illustrate a systematic way to deal with constrained optimization problems.

The method of Lagrange multipliers replaces the problem of finding stationary points of a constrained function $f : \mathbb{R}^n \to \mathbb{R}$ of $n$ variables with $k$ constraints by that of finding stationary points of an unconstrained function $L$ in $n + k$ variables. The method introduces a new variable, known as a Lagrange multiplier, for each constraint, and defines a new function, called the Lagrangian, involving the original function, the constraints and the Lagrange multipliers. The $k$ constraints are often given by a vector-valued function $g : \mathbb{R}^n \to \mathbb{R}^k$ of $n$ variables:
$$g(x_1, \dots, x_n) = \big(g_1(x_1, \dots, x_n), \dots, g_k(x_1, \dots, x_n)\big)^T.$$

Let us begin with the simplest case. Let $f : \mathbb{R}^2 \to \mathbb{R}$ and $g : \mathbb{R}^2 \to \mathbb{R}$ both be real-valued functions of two variables. Consider the problem of maximizing or minimizing $f(x, y)$ subject to the single constraint $g(x, y) = 0$. We sometimes write
$$\max f(x, y) \quad (\text{or } \min f(x, y)) \quad \text{subject to } g(x, y) = 0.$$

Consider the contour curves $f(x, y) = c$ of $f$ for various values of $c$, and the contour curve $g(x, y) = 0$ of $g$ that represents the constraint of our problem. Now fix our attention on the contour curve $g(x, y) = 0$. In general this contour curve will intersect different contour curves $f(x, y) = c$, which means the value of $f$ can change as we move along the contour curve $g(x, y) = 0$.

When the contour curve $g(x, y) = 0$ meets the contour $f(x, y) = c$ for some $c \in \mathbb{R}$ tangentially at the point $(a, b)^T$, the value of $f$ is neither increasing nor decreasing there; that is, the directional derivative of $f$ along the tangential direction of the curve $g(x, y) = 0$ at $(a, b)^T$ is equal to $0$. Recall that the direction in which $f$ has zero change is the direction orthogonal to $\nabla f$. In other words, the gradient vector $\nabla f$ must be normal to the curve $g(x, y) = 0$ at $(a, b)^T$. On the other hand, the gradient vector $\nabla g$ is of course normal to its own contour $g(x, y) = 0$ at $(a, b)^T$. Geometrically, what we have deduced is that the contour curves $f(x, y) = c$ and $g(x, y) = 0$ have a common tangent at $(a, b)^T$, and the gradient vectors $\nabla f(a, b)$ and $\nabla g(a, b)$ must point in the same or opposite directions. The point $(a, b)^T$ is sometimes called a constrained stationary point of $f$.

Now we can write
$$\nabla f(a, b) + \lambda \nabla g(a, b) = \mathbf{0}$$
for some scalar $\lambda \in \mathbb{R}$, or equivalently
$$\frac{\partial f}{\partial x}(a, b) + \lambda \frac{\partial g}{\partial x}(a, b) = 0 \quad \text{and} \quad \frac{\partial f}{\partial y}(a, b) + \lambda \frac{\partial g}{\partial y}(a, b) = 0.$$
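To make the two-variable case concrete, here is a minimal computational sketch (not part of the original notes) using Python's SymPy library. The objective $f(x, y) = xy$ and the constraint $g(x, y) = x + y - 1 = 0$ are hypothetical choices for illustration; the code forms the Lagrangian $L = f + \lambda g$ and finds its stationary points in the $n + k = 3$ variables $x$, $y$, $\lambda$.

```python
# A sketch of the method of Lagrange multipliers with SymPy,
# for the illustrative problem: max f(x, y) = x*y subject to x + y - 1 = 0.
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

f = x * y        # objective function
g = x + y - 1    # constraint, written so that g(x, y) = 0

# Lagrangian L(x, y, lam) = f(x, y) + lam * g(x, y)
L = f + lam * g

# Stationary points of L: set all three partial derivatives to zero
eqs = [sp.diff(L, v) for v in (x, y, lam)]
sols = sp.solve(eqs, [x, y, lam], dict=True)
print(sols)  # [{x: 1/2, y: 1/2, lam: -1/2}]
```

Note that the equation $\partial L / \partial \lambda = 0$ simply recovers the constraint $g(x, y) = 0$, which is why a stationary point of $L$ satisfies both the gradient condition $\nabla f + \lambda \nabla g = \mathbf{0}$ and the constraint simultaneously. Here the constrained stationary point is $(1/2, 1/2)^T$, where $f$ attains its constrained maximum $1/4$.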