Optimization with constraints
Xin Li, Department of Mathematics, UCF

From Calculus I, we have all learned that if a differentiable function $f$ (defined for all $x \in \mathbb{R}$) has a maximum or minimum at $x^*$, then $f'(x^*) = 0$. Putting this into the framework of optimization, we have: if

$$\min_{x \in \mathbb{R}} f(x)$$

has a solution at $x^*$, then $f'(x^*) = 0$. This was generalized in Calculus III: if

$$\min_{x \in \mathbb{R}^n} f(x)$$

has a solution at $x^*$, then $\nabla f(x^*) = 0$. Now, we want to study minimization under constraints:

$$\min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad g(x) = 0. \tag{Opt}$$

We are going to use two heuristic arguments to derive Lagrange's method for solving this constrained minimization problem.

I. First, let us consider a geometric argument. Consider the case $n = 3$, so the constraint $g(x) = 0$ gives us a surface in 3-dimensional space. The constrained minimization problem is "to find the point on the surface that gives the smallest value of $f(x)$." Suppose that $P_0$ is a point on the surface that gives the minimum value of $f(x)$. Let $C$ be any smooth curve on the surface passing through $P_0$, with parametric equation $\mathbf{r}(t)$ for $t \in [a, b]$ and $\mathbf{r}(t_0) = P_0$.
Then $F(t) = f(\mathbf{r}(t))$ is a function of one variable on $[a, b]$ that has a minimum value at $t_0$, so $F'(t_0) = 0$. But

$$F'(t) = \nabla f(\mathbf{r}(t)) \cdot \mathbf{r}'(t).$$

So $\nabla f(P_0) \cdot \mathbf{r}'(t_0) = 0$: $\nabla f(P_0)$ is perpendicular to the tangent vector at $P_0$ of every curve on the surface. But the totality of these tangent vectors forms the tangent plane of the surface at $P_0$. Thus, $\nabla f(P_0)$ is perpendicular to the tangent plane at $P_0$ of the surface given by $g(x) = 0$. On the other hand, the gradient of $g$ at $P_0$ is also perpendicular to this tangent plane: differentiating both sides of $g(\mathbf{r}(t)) = 0$ gives

$$\nabla g(\mathbf{r}(t)) \cdot \mathbf{r}'(t) = 0,$$

and, in particular, at $t_0$ (when $\mathbf{r}(t_0) = P_0$) we have $\nabla g(P_0) \cdot \mathbf{r}'(t_0) = 0$. So, $\nabla g(P_0)$ is also perpendicular to the tangent plane at $P_0$. Hence, the two gradients must be parallel. Thus, there is a constant, say $\lambda$, such that

$$\nabla f(P_0) = \lambda \nabla g(P_0).$$

This is the necessary condition for $P_0$ to be a minimum point of $f$ subject to $g = 0$. To summarize, we have the following theorem.

Theorem (Lagrange Multiplier Method). If $x^*$ is a solution to the constrained minimization problem (Opt), then

$$\nabla_x L(x^*; \lambda) = 0; \qquad \frac{\partial L}{\partial \lambda}(x^*; \lambda) = 0,$$

where $L(x; \lambda) = f(x) - \lambda g(x)$ is the Lagrangian function (of the constrained optimization).

Remark. When there are two constraints, $g_1(x) = 0$ and $g_2(x) = 0$, we can still apply the Lagrange method, but with the Lagrangian

$$L(x; \lambda_1, \lambda_2) = f(x) - \lambda_1 g_1(x) - \lambda_2 g_2(x),$$

and the necessary condition is

$$\nabla f(x) = \lambda_1 \nabla g_1(x) + \lambda_2 \nabla g_2(x), \qquad g_1(x) = 0, \qquad g_2(x) = 0.$$
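As a concrete illustration of the theorem (this example is not part of the original notes), the following sketch uses sympy to solve the necessary conditions $\nabla_x L = 0$, $\partial L/\partial \lambda = 0$ for the hypothetical problem: minimize $f(x, y) = x^2 + y^2$ subject to $g(x, y) = x + y - 1 = 0$.

```python
# Lagrange multiplier method, solved symbolically with sympy.
# Hypothetical example: minimize f = x^2 + y^2 subject to g = x + y - 1 = 0.
from sympy import symbols, diff, solve

x, y, lam = symbols('x y lam')
f = x**2 + y**2        # objective
g = x + y - 1          # constraint, g = 0
L = f - lam * g        # Lagrangian L = f - lam*g

# Necessary conditions: dL/dx = 0, dL/dy = 0, and the constraint g = 0
eqs = [diff(L, x), diff(L, y), g]
sol = solve(eqs, [x, y, lam], dict=True)[0]
# sol gives x = y = 1/2 and lam = 1
```

Here the conditions $2x - \lambda = 0$, $2y - \lambda = 0$, $x + y = 1$ force $x = y = 1/2$ and $\lambda = 1$, the closest point on the line to the origin, as expected.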
II. Now, let us consider a second argument for "deriving" the Lagrange method. We use the implicit function theorem from calculus: if $g$ is differentiable and at $(x_0, y_0)$ we have

$$g(x_0, y_0) = 0, \qquad \frac{\partial g}{\partial y}(x_0, y_0) \neq 0,$$

then we can solve for $y$ in the equation $g(x, y) = 0$: there is a function $y = \varphi(x)$ for $x$ near $x_0$ such that $\varphi(x_0) = y_0$ and $g(x, \varphi(x)) = 0$. Furthermore, $\varphi$ is differentiable near $x_0$ and

$$\varphi'(x) = -\frac{\partial g/\partial x}{\partial g/\partial y}.$$

So, we first solve for $y$ in the constraint equation $g(x, y) = 0$ (assuming that we have two variables and $\partial g/\partial y \neq 0$): $y = \varphi(x)$. Substituting this into the objective function transforms the constrained minimization problem of two variables into a minimization problem, without constraint, of a single variable:

$$\min_{x} h(x), \qquad h(x) := f(x, \varphi(x)).$$

So, if $(x_0, y_0)$ is a minimum point of $f$ subject to the constraint, then $x_0$ is a minimum point of the single-variable function $h$. By Calculus I, $h'(x_0) = 0$. But

$$h'(x) = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}\,\varphi'(x) = \frac{\partial f}{\partial x} - \frac{\partial f}{\partial y}\cdot\frac{\partial g/\partial x}{\partial g/\partial y}.$$

So,

$$\frac{\partial f}{\partial x}\frac{\partial g}{\partial y} - \frac{\partial f}{\partial y}\frac{\partial g}{\partial x} = 0 \quad \text{at } (x_0, y_0), \qquad \text{or} \qquad \frac{\partial f/\partial x}{\partial g/\partial x} = \frac{\partial f/\partial y}{\partial g/\partial y}.$$

Call this common ratio $\lambda$; then the above equality becomes

$$\frac{\partial f}{\partial x} = \lambda \frac{\partial g}{\partial x}, \qquad \frac{\partial f}{\partial y} = \lambda \frac{\partial g}{\partial y},$$

which is what we have arrived at in the Lagrange Method. ...
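The elimination argument can be checked numerically on a hypothetical example (not from the original notes): take $f(x, y) = x^2 + y^2$ and $g(x, y) = x + y - 1$, so that solving the constraint for $y$ gives $\varphi(x) = 1 - x$ (here $\partial g/\partial y = 1 \neq 0$).

```python
# Elimination sketch: reduce the constrained 2-variable problem to one variable.
def f(x, y):
    return x**2 + y**2      # objective

def phi(x):
    return 1.0 - x          # y solved from the constraint x + y - 1 = 0

def h(x):
    return f(x, phi(x))     # single-variable objective h(x) = f(x, phi(x))

# h'(x) = f_x + f_y * phi'(x) = 2x - 2(1 - x) = 4x - 2, so h'(x) = 0 at x = 1/2.
x_star = 0.5

# Central-difference check that h'(x_star) is (numerically) zero:
eps = 1e-6
h_prime = (h(x_star + eps) - h(x_star - eps)) / (2 * eps)

# The common ratio lam = (df/dx)/(dg/dx) at the minimizer: f_x = 2x, g_x = 1
lam = 2 * x_star / 1.0
```

At $x^* = 1/2$ the common ratio is $\lambda = 1$, matching what the multiplier condition $\nabla f = \lambda \nabla g$ gives for this problem.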