Optimization-Based Control
Richard M. Murray
Control and Dynamical Systems
California Institute of Technology

DRAFT v2.1a, January 4, 2010
© California Institute of Technology. All rights reserved.
This manuscript is for review purposes only and may not be reproduced, in whole or in part, without written consent from the author.

Chapter 2
Optimal Control

This set of notes expands on Chapter 6 of Feedback Systems by Åström and Murray (AM08), which introduces the concepts of reachability and state feedback. We also expand on topics in Section 7.5 of AM08 in the area of feedforward compensation. Beginning with a review of optimization, we introduce the notion of Lagrange multipliers and provide a summary of Pontryagin's maximum principle. Using these tools, we derive the linear quadratic regulator for linear systems and describe its use.

Prerequisites. Readers should be familiar with modeling of input/output control systems using differential equations, linearization of a system around an equilibrium point, and state space control of linear systems, including reachability and eigenvalue assignment. Some familiarity with optimization of nonlinear functions is also assumed.

2.1 Review: Optimization

Optimization refers to the problem of choosing a set of parameters that maximize or minimize a given function. In control systems, we are often faced with having to choose a set of parameters for a control law so that some performance condition is satisfied. In this chapter we seek to optimize a given specification, choosing the parameters that maximize the performance (or minimize the cost). In this section we review the conditions for optimization of a static function, and then extend this to optimization of trajectories and control laws in the remainder of the chapter. More information on basic techniques in optimization can be found in [Lue97] or the introductory chapter of [LS95].
Consider first the problem of finding the minimum of a smooth function $F \colon \mathbb{R}^n \to \mathbb{R}$. That is, we wish to find a point $x^* \in \mathbb{R}^n$ such that $F(x^*) \le F(x)$ for all $x \in \mathbb{R}^n$. A necessary condition for $x^*$ to be a minimum is that the gradient of the function be zero at $x^*$:
$$\frac{\partial F}{\partial x}(x^*) = 0.$$
The function $F(x)$ is often called a cost function and $x^*$ is the optimal value for $x$. Figure 2.1 gives a graphical interpretation of the necessary condition for a minimum. Note that these are not sufficient conditions; the points $x_1$, $x_2$, and $x^*$ in the figure all satisfy the necessary condition, but only one is the (global) minimum.

The situation is more complicated if constraints are present. Let $G_i \colon \mathbb{R}^n \to \mathbb{R}$, $i = 1, \dots, k$ be a set of smooth functions with $G_i(x) = 0$ representing the constraints. Suppose that we wish to find $x^* \in \mathbb{R}^n$ such that $G_i(x^*) = 0$ and $F(x^*) \le F(x)$ for all $x \in \{x \in \mathbb{R}^n : G_i(x) = 0,\ i = 1, \dots, k\}$. This situation can be ...
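The necessary condition above can be illustrated numerically. The following sketch (not part of the original text; the function $F(x) = x^4 - 2x^2$ and the step size are illustrative choices) runs plain gradient descent until the gradient is nearly zero. This $F$ has three critical points, $x = -1, 0, 1$, all satisfying $\partial F/\partial x = 0$, but only $x = \pm 1$ are minima, echoing the point that the gradient condition is necessary but not sufficient.

```python
# Illustrative sketch: the necessary condition dF/dx(x*) = 0 for a minimum.
# F(x) = x^4 - 2x^2 has critical points at x = -1, 0, 1; only x = +/-1 are
# (global) minima, while x = 0 is a local maximum.

def F(x):
    return x**4 - 2 * x**2

def dF(x):
    # Analytic gradient of F.
    return 4 * x**3 - 4 * x

def gradient_descent(x0, step=0.05, tol=1e-8, max_iter=10_000):
    """Follow the negative gradient until |dF/dx| < tol."""
    x = x0
    for _ in range(max_iter):
        g = dF(x)
        if abs(g) < tol:
            break
        x = x - step * g
    return x

if __name__ == "__main__":
    # Different starting points reach different critical points; starting
    # exactly at x = 0 stalls there even though 0 is not a minimum.
    for x0 in (0.5, -0.5, 0.0):
        xstar = gradient_descent(x0)
        print(f"start {x0:+.1f} -> x* = {xstar:+.4f}, dF(x*) = {dF(xstar):+.2e}")
```

Since gradient descent only seeks a zero of the gradient, it cannot by itself distinguish the global minimum from other critical points; that distinction requires second-order (or global) information.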