Optimization-Based Control
Richard M. Murray
Control and Dynamical Systems, California Institute of Technology
DRAFT v2.1a, January 3, 2010
© California Institute of Technology. All rights reserved.
This manuscript is for review purposes only and may not be reproduced, in whole or in part, without written consent from the author.

Chapter 5
Kalman Filtering

In this chapter we derive the optimal estimator for a linear system in continuous time (also referred to as the Kalman-Bucy filter). This estimator minimizes the covariance and can be implemented as a recursive filter.

Prerequisites. Readers should have basic familiarity with continuous-time stochastic systems at the level presented in Chapter ??.

5.1 Linear Quadratic Estimators

Consider a stochastic system
\[
  \dot X = A X + B u + F W, \qquad Y = C X + V,
\]
where X represents the state, u is the (deterministic) input, W represents disturbances that affect the dynamics of the system, and V represents measurement noise. Assume that the disturbance W and noise V are zero-mean, Gaussian white noise (but not necessarily stationary):
\[
  p(w) = \frac{1}{\sqrt{(2\pi)^m \det R_W}}\, e^{-\frac{1}{2} w^T R_W^{-1} w},
  \qquad
  E\{W(s) W^T(t)\} = R_W(t)\,\delta(t - s),
\]
\[
  p(v) = \frac{1}{\sqrt{(2\pi)^p \det R_V}}\, e^{-\frac{1}{2} v^T R_V^{-1} v},
  \qquad
  E\{V(s) V^T(t)\} = R_V(t)\,\delta(t - s).
\]
We also assume that the cross correlation between W and V is zero, so that the disturbances are not correlated with the noise. Note that we use multi-variable Gaussians here, with noise intensities $R_W \in \mathbb{R}^{m \times m}$ and $R_V \in \mathbb{R}^{p \times p}$. In the scalar case, $R_W = \sigma_W^2$ and $R_V = \sigma_V^2$.

We formulate the optimal estimation problem as finding the estimate $\hat X(t)$ that minimizes the mean square error $E\{(X(t) - \hat X(t))(X(t) - \hat X(t))^T\}$ given $\{Y(\tau) : 0 \le \tau \le t\}$. It can be shown that this is equivalent to finding the expected value of X subject to the constraint given by all of the previous measurements, so that $\hat X(t) = E\{X(t) \mid Y(\tau),\ \tau \le t\}$. This was the way that Kalman originally formulated the problem, and it can be viewed as solving a least squares problem: given all previous $Y(\tau)$, find the estimate $\hat X$ that satisfies the dynamics and minimizes the square error with the measured data. We omit the proof since we will work directly with the error formulation.

Theorem 5.1 (Kalman-Bucy, 1961). The optimal estimator has the form of a linear observer
\[
  \dot{\hat X} = A \hat X + B u + L\,(Y - C \hat X),
\]
where $L(t) = P(t) C^T R_V^{-1}$ and $P(t) = E\{(X(t) - \hat X(t))(X(t) - \hat X(t))^T\}$ satisfies
\[
  \dot P = A P + P A^T - P C^T R_V^{-1}(t)\, C P + F R_W(t) F^T,
  \qquad
  P(0) = E\{X(0) X^T(0)\}.
\]
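To make the structure of Theorem 5.1 concrete, the following sketch simulates the Kalman-Bucy filter numerically. It is not from the text: the double-integrator system, the noise intensities, the initial covariance, and the forward-Euler time stepping are all illustrative assumptions chosen for simplicity. The filter update uses the gain $L(t) = P(t) C^T R_V^{-1}$ and propagates $P(t)$ with the Riccati differential equation exactly as stated in the theorem.

```python
# Minimal sketch of the continuous-time Kalman-Bucy filter (Theorem 5.1).
# Hypothetical example: a double integrator with force disturbance and a
# position measurement. All numerical values are assumptions for illustration.
import numpy as np

# System: dX/dt = A X + B u + F W,   Y = C X + V
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
F = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

RW = np.array([[0.1]])    # disturbance intensity R_W (assumed value)
RV = np.array([[0.01]])   # measurement noise intensity R_V (assumed value)
RV_inv = np.linalg.inv(RV)

dt, T = 1e-3, 10.0
rng = np.random.default_rng(0)

x = np.array([[1.0], [0.0]])   # true state (unknown to the filter)
xhat = np.zeros((2, 1))        # estimate, X̂(0) = 0
P = np.eye(2)                  # error covariance P(0) (assumed)

for k in range(int(T / dt)):
    u = np.array([[np.sin(0.5 * k * dt)]])   # arbitrary known input

    # Simulate the true system; white noise with intensity R has a
    # discrete-time standard deviation sqrt(R/dt) at this step size.
    w = rng.normal(scale=np.sqrt(RW[0, 0] / dt), size=(1, 1))
    v = rng.normal(scale=np.sqrt(RV[0, 0] / dt), size=(1, 1))
    x = x + dt * (A @ x + B @ u + F @ w)
    y = C @ x + v

    # Observer update with gain L(t) = P(t) C^T R_V^{-1}
    L = P @ C.T @ RV_inv
    xhat = xhat + dt * (A @ xhat + B @ u + L @ (y - C @ xhat))

    # Riccati differential equation for the error covariance P(t)
    Pdot = A @ P + P @ A.T - P @ C.T @ RV_inv @ C @ P + F @ RW @ F.T
    P = P + dt * Pdot

print("final estimation error:", (x - xhat).ravel())
print("final covariance P:\n", P)
```

Running the sketch, the covariance P(t) settles toward a steady value and the estimation error stays small, which is the behavior the theorem guarantees for the optimal gain; a production implementation would use a proper SDE or stiff ODE integrator rather than forward Euler.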