Chapter 6  Linear Quadratic Optimal Control

6.1 Introduction

In previous lectures, we discussed the design of state feedback controllers using eigenvalue (pole) placement algorithms. For single-input systems, given a set of desired eigenvalues, the feedback gain that achieves them is unique (as long as the system is controllable). For multi-input systems, the feedback gain is not unique, so there is additional design freedom. How does one utilize this freedom? A more fundamental issue is that the choice of eigenvalues itself is not obvious: there are, for example, trade-offs between robustness, performance, and control effort. Linear quadratic (LQ) optimal control can resolve some of these issues, not by specifying directly where the closed-loop eigenvalues should be, but by specifying a performance objective function to be optimized. Other optimal control objectives, besides the LQ type, can also be used to resolve the trade-offs and exploit the extra design freedom. We first consider the finite-time-horizon case for general time-varying linear systems, and then proceed to the infinite-time-horizon case for linear time-invariant (LTI) systems.

6.2 Finite Time Horizon LQ Regulator

6.2.1 Problem Formulation

Consider the m-input, n-state system with x \in \mathbb{R}^n, u \in \mathbb{R}^m:

    \dot{x} = A(t) x + B(t) u(t);  x(0) = x_0.    (6.1)

Find the open-loop control u(\cdot), t \in [t_0, t_f], such that the following objective function is minimized:

    J(u, x_0, t_0, t_f) = \int_{t_0}^{t_f} \left[ x^T(t) Q(t) x(t) + u^T(t) R(t) u(t) \right] dt + x^T(t_f) S x(t_f),    (6.2)

where Q(t) and S are symmetric positive semi-definite n \times n matrices, and R(t) is a symmetric positive definite m \times m matrix. Notice that x_0, t_0, and t_f are fixed and given data. The control goal is generally to keep x(t) close to 0, especially at the final time t_f, using little control effort u.
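As a concrete illustration of the finite-horizon problem above, here is a minimal numpy sketch that approximates it numerically: the system is discretized with Euler steps and the optimal feedback gains are obtained by a backward Riccati difference recursion, with terminal condition P(t_f) = S. The function name, the discretization, and the restriction to constant A, B, Q, R are my own assumptions for illustration, not part of the notes.

```python
import numpy as np

def finite_horizon_lq(A, B, Q, R, S, t0, tf, N):
    """Approximate the finite-horizon LQ problem (6.1)-(6.2) on N Euler steps.

    Returns the list of feedback gains K[k] (so that u_k = -K[k] x_k) and the
    Riccati matrix at t0. Assumes time-invariant A, B, Q, R for simplicity.
    """
    dt = (tf - t0) / N
    n = A.shape[0]
    Ad = np.eye(n) + dt * A          # Euler discretization of the dynamics
    Bd = dt * B
    Qd, Rd = dt * Q, dt * R          # running-cost weights scaled by the step
    P = S.copy()                     # terminal condition: P(t_f) = S
    gains = [None] * N
    for k in reversed(range(N)):
        # Discrete-time Riccati difference equation, solved backward in time.
        K = np.linalg.solve(Rd + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
        P = Qd + Ad.T @ P @ (Ad - Bd @ K)
        P = 0.5 * (P + P.T)          # re-symmetrize to suppress round-off drift
        gains[k] = K
    return gains, P

# Example: double integrator, unit weights.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
gains, P0 = finite_horizon_lq(A, B, np.eye(2), np.array([[1.0]]),
                              np.eye(2), 0.0, 5.0, 500)
```

Because the recursion runs backward from t_f, the gain K(t) is time-varying even for constant A and B; it is the continuous-time Riccati differential equation, discussed later in the notes, that this recursion approximates.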
To wit, notice that in (6.2):

- x^T(t) Q(t) x(t) penalizes the transient state deviation,
- x^T(t_f) S x(t_f) penalizes the final state deviation,
- u^T(t) R(t) u(t) penalizes the control effort.

This formulation can also accommodate regulating an output y(t) = C(t) x(t) \in \mathbb{R}^r near 0. In this case, one choice for S and Q(t) is C^T(t) W(t) C(t), where W(t) is an r \times r symmetric positive definite matrix.

6.2.2 Solution to the optimal control problem

General finite, fixed-horizon optimal control problem: for the system with fixed initial condition,

    \dot{x} = f(x, u, t);  x(t_0) = x_0 given,

and a given time horizon [t_0, t_f], find u(t), t \in [t_0, t_f], such that the following cost function is minimized:

    J(u(\cdot), x_0) = \phi(x(t_f)) + \int_{t_0}^{t_f} L(x(t), u(t), t) \, dt,

where the first term is the final cost and the second term is the running cost.
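The output-regulation choice Q(t) = C^T(t) W(t) C(t) works because penalizing y^T W y is, by substitution of y = C x, identical to penalizing x^T (C^T W C) x. A small numpy check (the particular C and W below are hypothetical values chosen for illustration):

```python
import numpy as np

# Hypothetical output map and output weight (not from the notes):
C = np.array([[1.0, 0.0]])   # y = C x extracts the first state component
W = np.array([[2.0]])        # symmetric positive definite output weight

# State weight induced by penalizing the output:
Q = C.T @ W @ C

x = np.array([3.0, -1.0])
y = C @ x

# The two quadratic forms agree: y^T W y == x^T Q x
print(y @ W @ y, x @ Q @ x)
```

Note that Q built this way is only positive semi-definite in general (it vanishes on the null space of C), which is exactly why the formulation requires Q(t) \geq 0 rather than Q(t) > 0.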
This note was uploaded on 02/07/2012 for the course ME 8281 taught by Professor Staff during the Fall '08 term at Minnesota.