Topic #17
16.30/31 Feedback Control Systems
October 17, 2010

Improving the transient performance of the LQ Servo
- Feedforward Control Architecture Design
- DOFB Servo
- Handling saturations in DOFB

Fall 2010    16.30/31 17-1

LQ Servo Revisited

Earlier (13-??) we described how to use an integrator to achieve zero steady-state error in a way that is a bit more robust to modeling errors than the $\bar{N}$ formulation.

An issue with this architecture is that the response to the reference is driven only by the integrated error: there is no direct path from the reference input to the system, so the transient might be slow.

If the relevant system output is $y = C_y x$ and the reference is $r$, we previously added extra states $x_I$, where $\dot{x}_I = e$, and then penalized both $x$ and $x_I$ in the cost.

Caveat: note that we are free to define $e = r - y$ or $e = y - r$, but the implementation must be consistent.

As before, if the state of the original system is $x$, then (taking $e = r - y$) the dynamics are modified to be

  $\begin{bmatrix} \dot{x} \\ \dot{x}_I \end{bmatrix} = \begin{bmatrix} A & 0 \\ -C_y & 0 \end{bmatrix} \begin{bmatrix} x \\ x_I \end{bmatrix} + \begin{bmatrix} B \\ 0 \end{bmatrix} u + \begin{bmatrix} 0 \\ I \end{bmatrix} r$

and we define $\bar{x} = [x^T \; x_I^T]^T$.

The optimal feedback for the cost

  $J = \int_0^\infty \left( \bar{x}^T R_{xx} \bar{x} + u^T R_{uu} u \right) dt$

is of the form

  $u = -\begin{bmatrix} K & K_I \end{bmatrix} \begin{bmatrix} x \\ x_I \end{bmatrix} = -\bar{K} \bar{x}$

Fall 2010    16.30/31 17-2

Once we have used LQR to design the control gains $K$ and $K_I$, we have the freedom to choose how we implement the control system, provided that we don't change the feedback path and thus modify the closed-loop pole locations.

The first controller architecture on (13-??) was of the form:

  [Block diagram: $r$ and $y$ form the error $e$, which is integrated ($1/s$) to give $x_I$ and fed through $K_I$; the state $x$ of the plant $G(s)$ is fed back through $K$; the two signals combine to form $u$.]

And the suggestion is that we modify it to this revised form:

  [Fig. 1: Revised implementation of the LQ servo -- as above, but with the error $e$ also fed forward through $\bar{R}$ into the state-feedback path.]

Note the key difference from the above architecture: the feedforward term of the error $e$ through $\bar{R}$. This actually adds another feedback loop, so we need to clarify how this is done.
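The augmented LQR design above can be sketched numerically. Everything plant-specific here is an illustrative assumption (a hypothetical double-integrator plant and arbitrary weight choices, not values from the notes), and scipy's continuous-time algebraic Riccati solver is used in place of a dedicated `lqr` routine:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant: state x = [position, velocity]
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Cy = np.array([[1.0, 0.0]])  # y = position

n, m = A.shape[0], B.shape[1]
p = Cy.shape[0]

# Augmented dynamics with integrator state x_I, using e = r - y:
#   d/dt [x; x_I] = [A 0; -Cy 0][x; x_I] + [B; 0] u + [0; I] r
Abar = np.block([[A, np.zeros((n, p))],
                 [-Cy, np.zeros((p, p))]])
Bbar = np.vstack([B, np.zeros((p, m))])

# LQR weights penalizing both x and x_I (illustrative choices)
Rxx = np.diag([10.0, 1.0, 5.0])
Ruu = np.array([[1.0]])

# Solve the Riccati equation and form Kbar = [K, K_I]
P = solve_continuous_are(Abar, Bbar, Rxx, Ruu)
Kbar = np.linalg.solve(Ruu, Bbar.T @ P)
K, KI = Kbar[:, :n], Kbar[:, n:]

# With u = -Kbar @ xbar and r = 0, the augmented closed loop is stable
Acl = Abar - Bbar @ Kbar
print(np.linalg.eigvals(Acl).real.max() < 0)  # → True
```

The augmented pair $(\bar{A}, \bar{B})$ is controllable here even though $\bar{A}$ has three eigenvalues at the origin, so the Riccati equation has a stabilizing solution.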
Fall 2010    16.30/31 17-3

Design Approach

The control law for the revised implementation can be written as:

  $u = -K \left( x + \bar{R} e \right) - K_I x_I$

Note that the tracking error now appears in both the output of the integrator and the state-feedback component, so this approach has the potential to correct the slow-response problems identified in the original implementation.

But how do we determine $\bar{R}$?

Assume that the state can be partitioned into parts we care about for tracking ($T x$), which we assume are directly available from $y$, and parts we don't ($\tilde{x} = \bar{T} x$).

We can think of $T$ and $\bar{T}$ as selector matrices with a diagonal of 1s and 0s (although they are not always of this form), but we must have $T + \bar{T} = I$.

Example: consider a position control case, where the location $x$ is part of the state vector $x = [x, v, \ldots]^T$, and $y = x = C x$. Take a scalar position reference $r$, with $e = r - y$ and $\dot{x}_I = e$.

In this example it is clear that $T$ selects the position, so $T x = [x, 0, \ldots]^T$ and $\bar{T} x = [0, v, \ldots]^T$. ...
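The selector matrices and the revised control law can be illustrated with a small numerical sketch. Everything here is a hypothetical example, not from the notes: the two-state plant, the gain values `K` and `KI`, and in particular the choice `Rbar = [1, 0]^T` (which simply routes the scalar error into the position channel), since the preview ends before $\bar{R}$ is actually derived:

```python
import numpy as np

# Two-state position-control example: x = [position, velocity]
x = np.array([2.0, -0.5])
r = 1.0              # scalar position reference
e = r - x[0]         # e = r - y, with y = position

# Selector matrices: T picks the tracked part (position, available from y),
# Tbar picks the rest (velocity); diagonals of 1s and 0s with T + Tbar = I
T = np.diag([1.0, 0.0])
Tbar = np.diag([0.0, 1.0])
assert np.allclose(T + Tbar, np.eye(2))

# Hypothetical gains (these would come from the LQR design) and a
# hypothetical Rbar mapping the scalar error into the position slot
K = np.array([[3.0, 1.5]])
KI = np.array([[2.0]])
Rbar = np.array([1.0, 0.0])
xI = 0.2             # current integrator state

# Revised control law u = -K(x + Rbar*e) - KI*xI: the error e enters the
# state-feedback term directly, in addition to driving the integrator
u = (-K @ (x + Rbar * e)).item() - (KI * xI).item()
print(u)  # ≈ -2.65
```

The point of the sketch is structural: $e$ now reaches $u$ through two paths (the integrator and the state-feedback term), which is the direct reference path the original implementation lacked.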
This note was uploaded on 02/03/2012 for the course AERO 16.30, taught by Professor Eric Feron during the Fall '04 term at MIT.