MIT OpenCourseWare http://ocw.mit.edu 16.323 Principles of Optimal Control, Spring 2008. For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

16.323 Lecture 2
Nonlinear Optimization

• Constrained nonlinear optimization
• Lagrange multipliers
• Penalty/barrier functions are also often used, but will not be discussed here.

Figure by MIT OpenCourseWare.

Constrained Optimization

• Consider a problem with the next level of complexity: optimization with equality constraints

    \min_{y} F(y) \quad \text{such that} \quad f(y) = 0

  where f is a vector of n constraints.

• To simplify the notation, assume that the p-state vector y can be separated into a decision m-vector u and a state n-vector x related to the decision variables through the constraints. The problem now becomes

    \min_{u} F(x, u) \quad \text{such that} \quad f(x, u) = 0

  – Assume that p > n; otherwise the problem is completely specified by the constraints (or over-specified).

• One solution approach is direct substitution, which involves:
  – Solving for x in terms of u using f.
  – Substituting this expression into F and solving for u using an unconstrained optimization.
  – Works best if f is linear (the assumption is that f and F are not both linear).

• Example: minimize F = x_1^2 + x_2^2 subject to the constraint that x_1 + x_2 + 2 = 0.
  – Clearly the unconstrained minimum is at x_1 = x_2 = 0.
  – Substitution in this case gives the equivalent problems

      \min_{x_2} \tilde{F}_2 = (-2 - x_2)^2 + x_2^2 \quad \text{or} \quad \min_{x_1} \tilde{F}_1 = x_1^2 + (-2 - x_1)^2

    for which the solution (\partial \tilde{F}_2 / \partial x_2 = 0) is x_1 = x_2 = -1 (see the numerical sketch at the end of these notes).

  [Figure 2.8: Simple function minimization with constraint.]

• Bottom line: substitution works well for linear constraints, but the process is hard to generalize for larger systems or nonlinear constraints.

Lagrange Multipliers

• Need a more general strategy: using Lagrange multipliers.

• Since f(x, u) = 0, we can adjoin it to the cost with constants \lambda^T = [\lambda_1 \;\cdots\; \lambda_n] without changing the function value along the constraint, to create the Lagrangian function

    L(x, u, \lambda) = F(x, u) + \lambda^T f(x, u)

• Given values of x and u for which f(x, u) = 0, consider differential changes to the Lagrangian from differential changes to x and u:

    dL = \frac{\partial L}{\partial x}\, dx + \frac{\partial L}{\partial u}\, du

  where

    \frac{\partial L}{\partial u} = \left[ \frac{\partial L}{\partial u_1} \;\cdots\; \frac{\partial L}{\partial u_m} \right] \quad \text{(row vector)}

• Since u are the decision variables, it is convenient to choose \lambda so that

    \frac{\partial L}{\partial x} \equiv \frac{\partial F}{\partial x} + \lambda^T \frac{\partial f}{\partial x} = 0   (2.1)

    \Rightarrow \quad \lambda^T = -\frac{\partial F}{\partial x} \left( \frac{\partial f}{\partial x} \right)^{-1}   (2.2)

  (A numerical check of these conditions on the example above also follows at the end of these notes.)

• To proceed, must determine what changes are possible to the cost while keeping the equality constraint satisfied....
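A minimal numerical sketch of the direct-substitution example above: minimize F = x_1^2 + x_2^2 subject to x_1 + x_2 + 2 = 0 by eliminating x_2 = -2 - x_1 and minimizing the resulting one-variable cost. The function and variable names (F_sub, x1_grid, etc.) are illustrative, not from the lecture notes.

```python
import numpy as np

def F_sub(x1):
    """Cost after eliminating x2 via the constraint: x2 = -2 - x1."""
    return x1**2 + (-2.0 - x1)**2

# Analytic stationary point: dF~/dx1 = 2*x1 - 2*(-2 - x1) = 4*x1 + 4 = 0
# gives x1 = -1; the coarse grid search below confirms it numerically.
x1_grid = np.linspace(-3.0, 1.0, 401)
x1_star = x1_grid[np.argmin(F_sub(x1_grid))]
x2_star = -2.0 - x1_star

print(x1_star, x2_star)   # both approximately -1, matching the notes
```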
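And a hedged check of the Lagrange-multiplier conditions (2.1)-(2.2) on the same example, treating x_1 as the "state" x and x_2 as the "decision" u; everything is scalar here, so the matrix inverse in (2.2) reduces to a division. The names (F, f, lam, x_star, u_star) and the hand-computed derivatives are assumptions for this sketch, not part of the original notes.

```python
def F(x, u):
    """Cost F(x, u) = x^2 + u^2 for the example above."""
    return x**2 + u**2

def f(x, u):
    """Scalar equality constraint f(x, u) = x + u + 2 = 0."""
    return x + u + 2.0

x_star, u_star = -1.0, -1.0     # candidate optimum from the notes

# Partial derivatives, worked out by hand for this simple quadratic example.
dF_dx, dF_du = 2.0 * x_star, 2.0 * u_star
df_dx, df_du = 1.0, 1.0

# Eq. (2.2): lambda^T = -(dF/dx)(df/dx)^{-1}
lam = -dF_dx / df_dx

print(lam)                      # 2.0
print(dF_dx + lam * df_dx)      # 0.0 -- Eq. (2.1) holds by this choice of lambda
print(dF_du + lam * df_du)      # 0.0 -- stationarity in the decision u at the optimum
print(f(x_star, u_star))        # 0.0 -- constraint satisfied
```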