
# 16.30/31 Feedback Control Systems, Topic #12: Full-State Feedback Control

**State-Space Systems: Full-State Feedback Control**

- How do we change the location of the state-space eigenvalues/poles?
- Or, if we can change the pole locations, where do we put the poles?
  - Heuristics
  - Linear Quadratic Regulator
- How well does this approach work?

Reading: FPE 7.4

## Pole Placement Approach

*16.30/31, Fall 2010 (October 17, 2010)*

So far we have looked at how to pick $K$ to get the dynamics to have some nice properties (i.e., stabilize $A$):

$$\lambda_i(A) \;\rightarrow\; \lambda_i(A - BK)$$

**Question:** where should we put the closed-loop poles?

**Approach #1:** use time-domain specifications to locate the dominant poles, i.e., the roots of

$$s^2 + 2\zeta\omega_n s + \omega_n^2 = 0$$

Then place the rest of the poles so they are "much faster" than the dominant 2nd-order behavior.

- Example: could keep the same damped frequency $\omega_d$ and then move the real part to be 2–3 times faster than the real part of the dominant poles, $-\zeta\omega_n$.
- Just be careful about moving the poles too far to the left, because it takes a lot of control effort.

Recall the rules of thumb for the 2nd-order step response (Topic 4):

- 10–90% rise time: $t_r = \dfrac{1 + 1.1\zeta + 1.4\zeta^2}{\omega_n}$
- Settling time (5%): $t_s = \dfrac{3}{\zeta\omega_n}$
- Time to peak amplitude: $t_p = \dfrac{\pi}{\omega_n\sqrt{1-\zeta^2}}$
- Peak overshoot: $M_p = e^{-\zeta\omega_n t_p}$

Key difference in this case: since **all** poles are being placed, the assumption of dominant 2nd-order behavior is pretty much guaranteed to be valid.
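The recipe above can be sketched in a few lines: turn the time-domain specs into a dominant pole pair, add a "much faster" third pole, and place all of them with full-state feedback. The plant matrices, the specs $\zeta = 0.7$, $t_r = 0.5$ s, and the factor of 3 are illustrative assumptions, not values from the notes; only the rules of thumb are taken from the text.

```python
# Sketch: translate 2nd-order time-domain specs into dominant pole
# locations, then place all closed-loop poles via full-state feedback.
import numpy as np
from scipy.signal import place_poles

# Assumed specs for illustration: damping ratio and 10-90% rise time.
zeta = 0.7
t_r = 0.5                                      # desired rise time [s]
w_n = (1 + 1.1*zeta + 1.4*zeta**2) / t_r       # rise-time rule of thumb

# Dominant 2nd-order pair: roots of s^2 + 2*zeta*w_n*s + w_n^2 = 0
dominant = np.roots([1.0, 2*zeta*w_n, w_n**2])

# Remaining pole placed 3x faster than the dominant real part
fast = 3 * dominant[0].real                    # real part is negative

# Assumed 3-state plant (controllable), purely for illustration
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
B = np.array([[0.0], [0.0], [1.0]])

desired = np.append(dominant, fast)
K = place_poles(A, B, desired).gain_matrix

# Closed-loop eigenvalues should land at the requested locations
print(np.sort_complex(np.linalg.eigvals(A - B @ K)))
```

Note the trade-off mentioned above: pushing `fast` further left makes `K` (and hence the control effort) grow quickly.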
## Linear Quadratic Regulator

**Approach #2:** place the pole locations so that the closed-loop system optimizes the cost function

$$J_{LQR} = \int_0^\infty \left[ x(t)^T Q\, x(t) + u(t)^T R\, u(t) \right] dt$$

where:

- $x^T Q x$ is the **state cost** with weight $Q$
- $u^T R u$ is the **control cost** with weight $R$

This is the basic form of the Linear Quadratic Regulator problem.

The MIMO optimal control is a time-invariant linear state feedback

$$u(t) = -K_{lqr}\, x(t)$$

with $K_{lqr}$ found by solving the Algebraic Riccati Equation (ARE):

$$0 = A^T P + PA + Q - PBR^{-1}B^T P, \qquad K_{lqr} = R^{-1}B^T P$$

Some details to follow, but this is discussed at length in 16.323.

Note: the state cost is written as $x^T Q x$, but we could define a system output of interest $z = C_z x$ that is not based on a physical sensor measurement and use the cost function

$$\tilde{J}_{LQR} = \int_0^\infty \left[ x(t)^T C_z^T \tilde{Q} C_z\, x(t) + \rho\, u(t)^T u(t) \right] dt$$

Then we effectively have the state penalty $Q = C_z^T \tilde{Q} C_z$.

- The selection of $z$ is used to isolate the system states of particular interest that you would like to be regulated to "zero".
- $R = \rho I$ effectively sets the controller bandwidth.
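A minimal sketch of the LQR computation above: solve the ARE for $P$, then form $K_{lqr} = R^{-1}B^T P$. The plant is an assumed state-space realization of Example #2 below, $G(s) = 0.94/(s^2 - 0.0297)$; the weights $Q = I$ and $\rho = 1$ are chosen only for illustration.

```python
# Sketch of the LQR computation: solve the algebraic Riccati equation
# 0 = A^T P + P A + Q - P B R^{-1} B^T P, then K_lqr = R^{-1} B^T P.
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed realization of G(s) = 0.94 / (s^2 - 0.0297):
#   x1' = x2,  x2' = 0.0297*x1 + 0.94*u  (open loop is unstable)
A = np.array([[0.0, 1.0],
              [0.0297, 0.0]])
B = np.array([[0.0], [0.94]])

Q = np.eye(2)                     # assumed state cost weight
rho = 1.0
R = rho * np.eye(1)               # control cost weight, R = rho * I

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # K_lqr = R^{-1} B^T P

# LQR guarantees a stable closed loop A - B*K
print(np.linalg.eigvals(A - B @ K))
```

Even though the open-loop plant has a pole at $+\sqrt{0.0297}$, the LQR closed loop is stable by construction.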

Fig. 1: Example #1: $G(s) = \dfrac{8 \cdot 14 \cdot 20}{(s+8)(s+14)(s+20)}$ with control penalty $\rho$ and $10\rho$.
Fig. 2: Example #2: $G(s) = \dfrac{0.94}{s^2 - 0.0297}$ with control penalty $\rho$ and $10\rho$.

Fig. 3: Example #3: $G(s) = \dfrac{8 \cdot 14 \cdot 20}{(s-8)(s-14)(s-20)}$ with control penalty $\rho$ and $10\rho$.
Fig. 4: Example #4: $G(s) = \dfrac{s-1}{(s+1)(s-3)}$ with control penalty $\rho$ and $10\rho$.

Fig. 5: Example #5: $G(s) = \dfrac{(s-2)(s-4)}{s^2(s-1)(s-3)(s^2+0.8s+4)}$ with control penalty $\rho$ and $10\rho$.
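Each figure caption above compares the LQR design for control penalties $\rho$ and $10\rho$. That comparison can be sketched for Example #1 using an assumed controllable-canonical realization of $G(s)$ and the output-weighted state cost $Q = C_z^T \tilde{Q} C_z$ from the notes (here with $\tilde{Q} = 1$ and $z$ the plant output); the specific $\rho$ values are illustrative.

```python
# Sketch: effect of raising the control penalty from rho to 10*rho,
# as in the figure captions.  Assumed controllable-canonical form of
# Example #1, G(s) = 8*14*20 / ((s+8)(s+14)(s+20)), with Q = C^T C.
import numpy as np
from scipy.linalg import solve_continuous_are

# (s+8)(s+14)(s+20) = s^3 + 42 s^2 + 552 s + 2240
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-2240.0, -552.0, -42.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[2240.0, 0.0, 0.0]])
Q = C.T @ C                        # output-weighted state penalty

def lqr_gain(rho):
    """Solve the ARE with R = rho*I; return K = R^{-1} B^T P and P."""
    R = rho * np.eye(1)
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P), P

K1, P1 = lqr_gain(1.0)
K10, P10 = lqr_gain(10.0)

# Higher rho makes control "more expensive", so the optimal cost-to-go
# matrix can only grow: P10 - P1 is positive semidefinite.
print(np.linalg.eigvals(A - B @ K1).real)
print(np.linalg.eigvals(A - B @ K10).real)
```

This mirrors the qualitative message of the figures: a larger control penalty yields a lower-bandwidth (gentler) controller, while both designs remain stable.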