Control of Nonlinear Dynamic Systems: Theory and Applications
J. K. Hedrick and A. Girard, 2005

Nonlinear Observers

Introduction to Nonlinear Observers

Motivation

The main weakness of all the control methodologies we have learned so far is that they require the full state. Sometimes the state is impossible to measure; sometimes it is measurable, but only at great expense.

Key points

All the control methodologies covered so far require full state information, which can be impossible, or expensive, to obtain. Nonlinear observers fall into two broad classes:

o Deterministic:
  - Lyapunov based: Thau, Raghavan
  - Geometric
  - Sliding
o Stochastic:
  - Extended Kalman Filter (EKF)

Methodologies learned so far:

o Linearization
o I/O and I/S feedback linearization
o Sliding control (robust I/O linearization)
o Integrator backstepping
o Dynamic surface control

Note: make sure you understand the similarities and differences between all of those methodologies!

In general, we only have access to p sensor outputs, that is:

    z = M x + v(t)

where:

    z is the measurement (of size p x 1)
    M is the measurement matrix (of size p x n)
    x is the state (of size n x 1)
    v(t) represents measurement noise (of size p x 1)

Even in nonlinear systems, the measurements will in general be linearly related to the state (a property of any useful sensor).

The best-known methodology for dealing with a full state feedback controller is to separate the problem into a static controller (for example, u = K x) and a dynamic observer. We then:

a. Design the controller as if \hat{x} = x.
b. Design the observer so that \hat{x} \to x as quickly as possible.

Review of Linear Observers

Process:

    \dot{x} = A x + B u + w(t)

Measurement:

    z = M x + v(t)

Let the observer be:

    \dot{\hat{x}} = A \hat{x} + B u + L (z - M \hat{x})

where the last term is essentially a correction term, L (z - \hat{z}).

Defining the estimation error as \tilde{x} = x - \hat{x}, the error dynamics are:

    \dot{\tilde{x}} = \dot{x} - \dot{\hat{x}}
                    = A x + B u + w(t) - [A \hat{x} + B u + L (M x + v(t) - M \hat{x})]
                    = (A - L M) \tilde{x} + w(t) - L v(t)

There are two classical approaches to choosing L:

A. Deterministic (Luenberger observer)

Ignore w(t) and v(t). Then

    \dot{\tilde{x}} = (A - L M) \tilde{x}

and if (A, M) is observable, the eigenvalues of (A - L M) can be placed arbitrarily.

B. Stochastic (Kalman-Bucy filter)

L is chosen to minimize the variance of x - \hat{x}, the state estimation error:

    \dot{\hat{x}} = A \hat{x} + B u + L (z - M \hat{x})
    L = P M^T R^{-1}

(Intuition: if the measurement noise variance is small, make the gains large.) Here R and Q are noise statistics (R is associated with v), and P is the error covariance. Note that R^{-1} is the matrix analogue of 1/r for scalars.
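The Luenberger design above can be sketched numerically. The following is a minimal illustration, not from the notes: the system (a double integrator with a position measurement) and the pole locations are my own choices, picked so the gain L can be computed by hand from the characteristic polynomial of (A - LM).

```python
# Luenberger observer sketch for an illustrative double-integrator example.
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # process matrix (double integrator)
M = np.array([[1.0, 0.0]])      # measure position only (p = 1, n = 2)

# For this A and M, det(sI - (A - L M)) = s^2 + l1*s + l2,
# so placing the error eigenvalues at -2 and -3 (s^2 + 5s + 6) gives:
L = np.array([[5.0],
              [6.0]])

A_err = A - L @ M               # error dynamics matrix (A - LM)
print(sorted(np.linalg.eigvals(A_err).real))  # both in the left half-plane

# Simulate the error dynamics x_tilde' = (A - LM) x_tilde with forward Euler;
# with stable eigenvalues the estimation error decays toward zero.
x_tilde = np.array([[1.0], [1.0]])
dt = 0.001
for _ in range(5000):           # 5 seconds of simulated time
    x_tilde = x_tilde + dt * (A_err @ x_tilde)
print(np.linalg.norm(x_tilde))  # small: the estimate has converged
```

Note that the convergence rate is set entirely by where the eigenvalues of (A - LM) are placed, which is exactly the design freedom the observability of (A, M) provides.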
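For the Kalman-Bucy gain L = P M^T R^{-1}, a scalar example makes the structure easy to check by hand. This sketch uses my own illustrative numbers (a, q, r are assumptions, not from the notes): for a scalar process x' = a x + w, z = x + v, the steady-state Riccati equation 2aP + q - P^2/r = 0 can be solved in closed form for P, and L follows directly.

```python
# Scalar Kalman-Bucy gain sketch: L = P M^T R^{-1} with M = 1.
# Illustrative values (my own): x' = a x + w, z = x + v,
# with noise intensities E[w^2] ~ q and E[v^2] ~ r.
import math

a, q, r = -1.0, 2.0, 0.5

# Steady-state Riccati equation: 2 a P + q - P^2 / r = 0.
# Taking the positive root of the quadratic:
P = r * (a + math.sqrt(a * a + q / r))
L = P / r            # L = P M^T R^{-1}, here just P / r

# Verify that P actually satisfies the Riccati equation:
residual = 2 * a * P + q - P * P / r
print(abs(residual) < 1e-9)

# Intuition from the notes: if the measurement noise variance r is
# small, the gain becomes large (trust the measurement more).
r_small = 0.01
L_small_r = (r_small * (a + math.sqrt(a * a + q / r_small))) / r_small
print(L_small_r > L)
```

Shrinking r from 0.5 to 0.01 raises the gain by roughly an order of magnitude here, which is the "small variance, large gains" intuition stated in the notes.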
Spring '08, HEDRICK