# 15. The Linear Quadratic Regulator

S. Lall, Stanford, 2009.11.11.02

## Outline

- Regulation and the least-squares formulation of regulation
- The LQR problem formulation
- Constrained optimization formulation
- Dynamic programming example: path optimization
- Solving the Hamilton–Jacobi equation
- The Riccati recursion
- Summary of the LQR solution via DP
- Example: force on a mass
- The steady-state regulator
- Time-varying systems and tracking problems
- Infinite-horizon problems
- The algebraic Riccati equation

## The Key Points of This Section

- The idea of regulation: keep the output small, using as little input as possible.
- It is a multi-objective problem: it allows a trade-off to be made between input effort and regulation.
- It can be formulated as a large least-squares problem; instead, we solve it via dynamic programming.
- The solution is the Riccati recursion, which is much faster to compute.
- The controller is linear state feedback $u(t) = K_t x(t)$.
- We often use the steady-state solution; to find it, solve the algebraic Riccati equation.
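The key points above can be sketched numerically. The function below is a minimal implementation of the standard backward Riccati recursion (the recursion itself is derived later in these notes); the scalar system used in the usage note is an illustrative choice, not an example from the notes.

```python
import numpy as np

def riccati_recursion(A, B, Q, R, Qf, N):
    """Backward Riccati recursion for the finite-horizon LQR problem.

    Returns cost-to-go matrices P[0..N] and feedback gains K[0..N-1],
    so that the optimal input is the state feedback u(t) = K[t] @ x(t).
    """
    P = [None] * (N + 1)
    K = [None] * N
    P[N] = Qf                                   # terminal cost-to-go
    for t in range(N - 1, -1, -1):
        S = R + B.T @ P[t + 1] @ B              # input-weighted cost-to-go
        K[t] = -np.linalg.solve(S, B.T @ P[t + 1] @ A)
        # equivalent to Q + A'PA - A'PB (R + B'PB)^{-1} B'PA
        P[t] = Q + A.T @ P[t + 1] @ (A + B @ K[t])
    return P, K
```

For the scalar system with $A = B = Q = R = Q_f = 1$, the recursion converges as $t$ moves away from the horizon, and $P_0$ approaches the golden ratio $(1+\sqrt{5})/2$, which is the positive root of the corresponding algebraic Riccati equation $p^2 - p - 1 = 0$.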

## Regulation

The usual discrete-time system:

$$x(t+1) = A x(t) + B u(t), \qquad x(0) = x_0, \qquad y(t) = C x(t)$$

A multiobjective problem:

- Regulation: keep $y(t)$ small on $t = 0, \dots, N-1$; we'd like to keep small
  $$J_{\text{out}} = \sum_{t=0}^{N-1} \| y(t) \|^2$$
- Use low input effort; we'd like to keep small
  $$J_{\text{in}} = \sum_{t=0}^{N-1} \| u(t) \|^2$$

## Least-Squares Formulation

As before, we have

$$
\begin{bmatrix} y(0) \\ y(1) \\ y(2) \\ \vdots \\ y(N-1) \end{bmatrix}
=
\begin{bmatrix}
0 & & & & \\
CB & 0 & & & \\
CAB & CB & 0 & & \\
\vdots & & & \ddots & \\
CA^{N-2}B & CA^{N-3}B & \cdots & CB & 0
\end{bmatrix}
\begin{bmatrix} u(0) \\ u(1) \\ u(2) \\ \vdots \\ u(N-1) \end{bmatrix}
+
\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{N-1} \end{bmatrix}
x(0)
= L u_{\text{seq}} + M x_0
$$

The multiobjective least-squares problem:

$$
J_{\text{out}}(u_{\text{seq}}) + \mu J_{\text{in}}(u_{\text{seq}})
= \| L u_{\text{seq}} + M x_0 \|^2 + \mu \| u_{\text{seq}} \|^2
= \left\| \begin{bmatrix} L \\ \sqrt{\mu}\, I \end{bmatrix} u_{\text{seq}} + \begin{bmatrix} M x_0 \\ 0 \end{bmatrix} \right\|^2
$$

The least-squares solution is open-loop; it does not use measurements of $x(t)$ on $t = 0, \dots, N-1$.
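A minimal numerical sketch of this formulation: build $L$ and $M$ exactly as above and solve the stacked least-squares problem with `numpy.linalg.lstsq`. The concrete system matrices, horizon, and weight `mu` below are illustrative assumptions, not values from the notes.

```python
import numpy as np

def build_LM(A, B, C, N):
    """Build L and M so that y_seq = L @ u_seq + M @ x0 (block row t is y(t))."""
    n, m, p = A.shape[0], B.shape[1], C.shape[0]
    L = np.zeros((N * p, N * m))
    M = np.zeros((N * p, n))
    Ak = np.eye(n)                          # holds A**t
    for t in range(N):
        M[t*p:(t+1)*p, :] = C @ Ak          # block C A^t
        Ak = A @ Ak
    for t in range(N):
        for s in range(t):                  # y(t) depends on u(s) only for s < t
            L[t*p:(t+1)*p, s*m:(s+1)*m] = C @ np.linalg.matrix_power(A, t-1-s) @ B
    return L, M

# illustrative double-integrator-like system (not from the notes)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
x0 = np.array([1.0, 0.0])
N, mu = 20, 0.1

L, M = build_LM(A, B, C, N)
# stacked problem: minimize ||L u + M x0||^2 + mu ||u||^2
Abig = np.vstack([L, np.sqrt(mu) * np.eye(N * B.shape[1])])
bbig = np.concatenate([-M @ x0, np.zeros(N * B.shape[1])])
u_seq, *_ = np.linalg.lstsq(Abig, bbig, rcond=None)
```

Note that `u_seq` is computed once from `x0` alone, which makes the open-loop nature of this solution concrete.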
## Cost Function

$$
J(u_{\text{seq}}) = J_{\text{out}}(u_{\text{seq}}) + \mu J_{\text{in}}(u_{\text{seq}})
= \sum_{t=0}^{N-1} \bigl( \| y(t) \|^2 + \mu \| u(t) \|^2 \bigr)
= \sum_{t=0}^{N-1} \bigl( x(t)^T C^T C x(t) + \mu\, u(t)^T u(t) \bigr)
$$

We'll use the slightly more general cost function

$$
J(u_{\text{seq}}) = \sum_{t=0}^{N-1} \bigl( x(t)^T Q x(t) + u(t)^T R u(t) \bigr) + x(N)^T Q_f x(N)
$$

where $Q \geq 0$, $Q_f \geq 0$, and $R > 0$ are called the state cost, final state cost, and input cost matrices.

- $N$ is called the time horizon.
- The first term measures state deviation.
- The second term measures input size, or actuator authority.
- The last term measures final state deviation.
- $Q$ and $R$ set the relative weights of state deviation and input usage.
- $R > 0$ means any nonzero input adds to the cost $J$.
- We often use $Q = Q_f = C^T C$ and $R = \mu I$.

**LQR problem:** find $u(0), \dots, u(N-1)$ that minimizes $J(u_{\text{seq}})$.
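The cost function above is easy to evaluate by simulation, and doing so checks the identity between the output/input form and the $Q$, $R$ form. A short sketch; the system, inputs, and weight `mu` in the test are illustrative choices, not data from the notes.

```python
import numpy as np

def lqr_cost(A, B, x0, u_seq, Q, R, Qf):
    """Evaluate J(u_seq) = sum_t (x'Qx + u'Ru) + x(N)' Qf x(N) by simulation."""
    x, J = np.asarray(x0, dtype=float), 0.0
    for u in u_seq:                      # running cost over t = 0, ..., N-1
        J += float(x @ Q @ x + u @ R @ u)
        x = A @ x + B @ u
    return J + float(x @ Qf @ x)         # terminal cost at t = N
```

With $Q = C^T C$, $R = \mu I$, and $Q_f = 0$, this reduces exactly to $J_{\text{out}} + \mu J_{\text{in}}$ from the least-squares formulation.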

## Constrained Optimization