Topic #17: 16.31 Feedback Control Systems


Topic #17
16.31 Feedback Control Systems

Deterministic LQR
- Optimal control and the Riccati equation
- Weight selection

Cite as: Jonathan How, course materials for 16.31 Feedback Control Systems, Fall 2007. MIT OpenCourseWare (http://ocw.mit.edu), Massachusetts Institute of Technology. Downloaded on [DD Month YYYY].

Fall 2007    16.31 17-1

Linear Quadratic Regulator (LQR)

- We have seen the solutions to the LQR problem, which result in linear full-state feedback control.
- We would like to demonstrate from first principles that this is the optimal form of the control.

Deterministic Linear Quadratic Regulator

- Plant:

    \dot{x}(t) = A(t) x(t) + B_u(t) u(t),  x(t_0) = x_0
    z(t) = C_z(t) x(t)

- Cost:

    J_{LQR} = \frac{1}{2} \int_{t_0}^{t_f} \left[ z^T(t) R_{zz}(t) z(t) + u^T(t) R_{uu}(t) u(t) \right] dt + \frac{1}{2} x(t_f)^T P_{t_f} x(t_f)

  where P_{t_f} \ge 0, R_{zz}(t) > 0, and R_{uu}(t) > 0. Define R_{xx}(t) = C_z^T R_{zz}(t) C_z \ge 0.

- A(t) is a continuous function of time; B_u(t), C_z(t), R_{zz}(t), and R_{uu}(t) are piecewise continuous functions of time, and all are bounded.

- Problem statement: find the input u(t), t \in [t_0, t_f], that minimizes J_{LQR}.
  - Note that this is not necessarily specified to be a feedback controller.

November 7, 2007

Fall 2007    16.31 17-2

- This is the most general form of the LQR problem; we rarely need this level of generality, and we often suppress the time dependence of the matrices.
  - The finite-horizon problem is important for short-duration control, such as landing an aircraft.
- The control design problem is a constrained optimization, with the constraints being the dynamics of the system.
- The standard way of handling the constraints in an optimization is to add them to the cost using a Lagrange multiplier, which results in an unconstrained optimization.
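As a rough numerical illustration of the finite-horizon problem above (ahead of the course's formal derivation), the sketch below integrates the scalar Riccati differential equation backward from the terminal condition. The plant and weight values (a, b, q, r) are made-up illustrative numbers, not taken from the notes, and the simple explicit Euler stepping is an assumption for brevity.

```python
# Sketch: finite-horizon LQR for a scalar plant xdot = a*x + b*u with cost
# J = 0.5 * integral(q*x^2 + r*u^2) dt + 0.5 * p_tf * x(tf)^2.
# Integrate the Riccati ODE  -Pdot = 2*a*P + q - (b^2/r)*P^2
# backward in time from P(tf) = p_tf using explicit Euler steps.
# All numeric values are illustrative assumptions, not from the notes.

def riccati_backward(a=-1.0, b=1.0, q=1.0, r=1.0, p_tf=0.0,
                     tf=10.0, h=1e-3):
    P = p_tf
    gains = []  # feedback gain K(t) = (b/r) * P(t), stored from tf back toward t0
    for _ in range(int(tf / h)):
        # One Euler step backward: P(t - h) = P(t) + h * (2aP + q - (b^2/r) P^2)
        P += h * (2 * a * P + q - (b ** 2 / r) * P ** 2)
        gains.append(b / r * P)
    return P, gains

P0, gains = riccati_backward()
# Over a long horizon, P(t0) settles at the steady-state algebraic
# Riccati solution for this plant, sqrt(2) - 1.
print(round(P0, 3))  # -> 0.414
```

For short horizons the gain history is genuinely time-varying, which is why the finite-horizon formulation matters for tasks like the aircraft-landing example above.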
- Example: minimize f(x, y) = x^2 + y^2 subject to the constraint c(x, y) = x + y + 2 = 0.

[Figure 1: Optimization results; level sets of f on the (x, y) plane, with both axes spanning [-2, 2].]

- Clearly the unconstrained minimum is at x = y = 0.

Fall 2007    16.31 17-3

- To find the constrained minimum, form the augmented cost function

    L \triangleq f(x, y) + \lambda c(x, y) = x^2 + y^2 + \lambda (x + y + 2)

  where \lambda is the Lagrange multiplier.
  - Note that if the constraint is satisfied, then L \equiv f.

- The solution approach without constraints is to find the stationary point of f(x, y) (\partial f / \partial x = \partial f / \partial y = 0). With constraints, we find the stationary points of L:

    \frac{\partial L}{\partial x} = \frac{\partial L}{\partial y} = \frac{\partial L}{\partial \lambda} = 0

  which gives

    \frac{\partial L}{\partial x} = 2x + \lambda = 0, ...
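The stationarity conditions of L for this example can be solved by hand; a minimal sketch, using only the example's own numbers, carries the substitution through and checks the result:

```python
# Stationary point of L(x, y, lam) = x^2 + y^2 + lam*(x + y + 2).
# Conditions: dL/dx = 2x + lam = 0, dL/dy = 2y + lam = 0,
#             dL/dlam = x + y + 2 = 0.

# From the first two conditions: x = y = -lam/2.
# Substituting into the constraint: -lam/2 - lam/2 + 2 = 0  =>  lam = 2.
lam = 2.0
x = -lam / 2  # -1.0
y = -lam / 2  # -1.0

# Verify all three stationarity conditions hold exactly.
assert 2 * x + lam == 0
assert 2 * y + lam == 0
assert x + y + 2 == 0

print(x, y, lam)  # -> -1.0 -1.0 2.0
```

So the constrained minimum sits at (x, y) = (-1, -1) with f = 2, compared with f = 0 at the unconstrained minimum x = y = 0.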
This note was uploaded on 11/07/2011 for the course AERO 16.31, taught by Professor Jonathan How during the Fall '07 term at MIT.


