19 LINEAR QUADRATIC REGULATOR

19.1 Introduction

The simple form of loopshaping for scalar systems does not extend directly to multivariable (MIMO) plants, which are characterized by transfer matrices instead of transfer functions. The notion of optimality is closely tied to MIMO control system design. Optimal controllers, i.e., controllers that are the best possible according to some figure of merit, turn out to generate only stabilizing controllers for MIMO plants. In this sense, optimal control solutions provide an automated design procedure: we have only to decide what figure of merit to use. The linear quadratic regulator (LQR) is a well-known design technique that provides practical feedback gains.

19.2 Full-State Feedback

For the derivation of the linear quadratic regulator, we assume the plant to be written in state-space form

\dot{x} = Ax + Bu,

and that all n states x are available to the controller. The feedback gain is a matrix K, implemented as u = -K(x - x_{desired}). The system dynamics are then written as

\dot{x} = (A - BK)x + BK x_{desired}.    (210)

Here x_{desired} represents the vector of desired states and serves as the external input to the closed-loop system. The A-matrix of the closed-loop system is (A - BK), and the B-matrix of the closed-loop system is BK. The closed-loop system has exactly as many outputs as inputs: n. The column dimension of B equals the number of channels available in u, and must match the row dimension of K. Pole placement is the process of placing the poles of (A - BK) at stable, suitably damped locations in the complex plane.

19.3 The Maximum Principle

First we develop a general procedure for solving optimal control problems, using the calculus of variations.
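The closed-loop structure above can be checked numerically. The following is a minimal sketch (not from the notes): the plant, the chosen gain K, and the target pole locations are all illustrative assumptions, here a double integrator with K picked so that (A - BK) has poles at -1 and -2.

```python
import numpy as np

# Assumed example plant: a double integrator, states [position, velocity].
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Gain chosen by hand so that det(sI - (A - BK)) = s^2 + 3s + 2,
# i.e., closed-loop poles at s = -1 and s = -2.
K = np.array([[2.0, 3.0]])

A_cl = A - B @ K                      # closed-loop A-matrix (A - BK)
poles = np.linalg.eigvals(A_cl)       # closed-loop poles
print(np.sort(poles.real))            # all real parts negative -> stable
```

Any gain that puts every eigenvalue of (A - BK) in the open left half plane is stabilizing; LQR, developed next, is one systematic way of producing such a K.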
We begin with the statement of the problem for a fixed end time t_f: choose u(t) to minimize

J = \phi(x(t_f)) + \int_{t_o}^{t_f} L(x(t), u(t), t) \, dt    (211)

subject to

\dot{x} = f(x(t), u(t), t)    (212)
x(t_o) = x_o,    (213)

where \phi(x(t_f), t_f) is the terminal cost; the total cost J is the sum of the terminal cost and an integral along the way. We assume that L(x(t), u(t), t) is nonnegative. The first step is to augment the cost using the costate vector \lambda(t). ...
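Before deriving the optimality conditions, it helps to see that J in (211) is just a number once a candidate control is fixed. The sketch below evaluates J for an assumed scalar example (plant, costs, and the zero control are all illustrative, not from the notes) by Euler-stepping (212) and accumulating the running cost.

```python
import numpy as np

# Assumed scalar example: xdot = -x + u, running cost L = x^2 + u^2,
# terminal cost phi = x(tf)^2, over [0, 1], starting from x(0) = 1.
f = lambda x, u: -x + u            # plant dynamics (212)
L = lambda x, u: x**2 + u**2       # running cost in (211)
phi = lambda x: x**2               # terminal cost in (211)

t0, tf, n = 0.0, 1.0, 10000
dt = (tf - t0) / n
x = 1.0                            # initial state x(t0) = x_o, per (213)
J_running = 0.0
for _ in range(n):
    u = 0.0                        # candidate control: u(t) = 0
    J_running += L(x, u) * dt      # accumulate the integral in (211)
    x += f(x, u) * dt              # Euler step of (212)
J = phi(x) + J_running             # total cost (211)
print(J)
```

For this example x(t) = e^{-t}, so the analytic value is (1 - e^{-2})/2 + e^{-2} ≈ 0.568; the optimal control problem asks which u(t) makes this number smallest, which is what the maximum principle answers.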
This note was uploaded on 02/27/2012 for the course MECHANICAL 2.154, taught by Professor Michael Triantafyllou during the Fall '04 term at MIT.