Topic #12: 16.30/31 Feedback Control Systems (Fall 2010)
State-Space Systems: Full-State Feedback Control
October 17, 2010

- How do we change the location of the state-space eigenvalues/poles?
- Or, if we can change the pole locations, where do we put the poles?
  - Heuristics
  - Linear Quadratic Regulator
- How well does this approach work?
- Reading: FPE 7.4

Pole Placement Approach

So far we have looked at how to pick K to get the dynamics to have some nice properties (i.e., stabilize A):

    λᵢ(A) → λᵢ(A − BK)

Question: where should we put the closed-loop poles?

Approach #1: use time-domain specifications to locate the dominant poles, i.e., the roots of

    s² + 2ζωₙs + ωₙ² = 0

Then place the rest of the poles so they are much faster than the dominant 2nd-order behavior.

- Example: could keep the same damped frequency ω_d and then move the real part to be 2-3 times faster than the real part of the dominant poles, −ζωₙ.
- Just be careful about moving the poles too far to the left, because it takes a lot of control effort.

Recall the rules of thumb (ROT) for the 2nd-order response:

    10-90% rise time:        t_r = (1 + 1.1ζ + 1.4ζ²)/ωₙ
    Settling time (5%):      t_s = 3/(ζωₙ)
    Time to peak amplitude:  t_p = π/(ωₙ√(1 − ζ²))
    Peak overshoot:          M_p = e^(−ζωₙ t_p)

Key difference in this case: since all of the poles are being placed, the assumption of dominant 2nd-order behavior is pretty much guaranteed to be valid.

Linear Quadratic Regulator

Approach #2: place the pole locations so that the closed-loop system optimizes the cost function

    J_LQR = ∫₀^∞ [ x(t)ᵀQx(t) + u(t)ᵀRu(t) ] dt

where:
- xᵀQx is the state cost, with weight Q
- uᵀRu is the control cost, with weight R
- This is the basic form of the Linear Quadratic Regulator problem.
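The pole-placement recipe above can be sketched numerically. This is an illustrative example, not from the notes: it assumes a hypothetical double-integrator plant (with only two states, both poles form the dominant pair) and uses SciPy's `place_poles` to assign the roots of s² + 2ζωₙs + ωₙ² = 0, then evaluates the 2nd-order rules of thumb.

```python
import numpy as np
from scipy import signal

# Hypothetical double-integrator plant: x = [position, velocity]
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Dominant 2nd-order poles from chosen zeta, omega_n
zeta, wn = 0.7, 2.0
dominant = np.roots([1.0, 2 * zeta * wn, wn**2])   # complex-conjugate pair

# Pick K so that eig(A - B K) lands on the requested locations
K = signal.place_poles(A, B, dominant).gain_matrix

# Check: closed-loop eigenvalues match the requested poles
print(np.sort_complex(np.linalg.eigvals(A - B @ K)))

# Second-order rules of thumb from the notes
t_r = (1 + 1.1 * zeta + 1.4 * zeta**2) / wn        # 10-90% rise time
t_s = 3 / (zeta * wn)                              # settling time (5%)
t_p = np.pi / (wn * np.sqrt(1 - zeta**2))          # time to peak
M_p = np.exp(-zeta * wn * t_p)                     # peak overshoot
print(t_r, t_s, t_p, M_p)
```

With more than two states, the extra (faster) poles, e.g. with real parts 2-3 times that of the dominant pair, would simply be appended to the list passed to `place_poles`.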
The MIMO optimal control is a time-invariant linear state feedback

    u(t) = −K_lqr x(t)

with K_lqr found by solving the Algebraic Riccati Equation (ARE):

    0 = AᵀP + PA + Q − PBR⁻¹BᵀP
    K_lqr = R⁻¹BᵀP

Some details to follow, but this is discussed at length in 16.323.

Note: the state cost was written using xᵀQx, but we could define a system output of interest z = C_z x that is not based on a physical sensor measurement, and use the cost function

    J_LQR = ∫₀^∞ [ x(t)ᵀC_zᵀQC_z x(t) + ρ u(t)ᵀu(t) ] dt

- Then we effectively have the state penalty Q̃ = C_zᵀQC_z.
- The selection of z is used to isolate the system states of particular interest that you would like to be regulated to zero.
- R = ρI effectively sets the controller bandwidth.

Fig. 1: Example #1: G(s) = (8·14·20)/((s + 8)(s + 14)(s + 20)), with control penalty ρ and 10ρ.
Fig. 2: Example #2: G(s) = 0.94/(s² − 0.0297), with control penalty ρ and 10ρ.
Fig. 3: Example #3: G(s) = (8·14·20)/((s − 8)(s − 14)(s − 20)), with control penalty ρ and 10ρ.
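The ARE solution maps directly onto `scipy.linalg.solve_continuous_are`. A minimal sketch, again assuming a hypothetical double-integrator plant rather than one of the example systems in the figures; the choices of C_z, Q, and ρ here are illustrative:

```python
import numpy as np
from scipy import linalg

# Hypothetical double-integrator plant: x = [position, velocity]
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Output of interest z = Cz x: penalize position only
Cz = np.array([[1.0, 0.0]])
Qz = np.array([[1.0]])            # weight on z
Q = Cz.T @ Qz @ Cz                # effective state penalty Cz' Q Cz
rho = 1.0
R = rho * np.eye(1)               # R = rho*I sets the bandwidth

# Solve the ARE: 0 = A'P + PA + Q - P B R^-1 B' P
P = linalg.solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # K_lqr = R^-1 B' P

print(K)                          # K = [[1, sqrt(2)]] for these weights
print(np.linalg.eigvals(A - B @ K))
```

For this plant and these weights the ARE has the closed-form solution K = [1, √2], putting the closed-loop poles at ζ ≈ 0.707, ωₙ = 1; increasing ρ penalizes control effort more heavily and lowers the bandwidth.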
This note was uploaded on 02/03/2012 for the course AERO 16.30, taught by Professor Eric Feron during the Fall '04 term at MIT.