
# obc-kalman_22Dec09 - Optimization-Based Control, Richard M. Murray


Optimization-Based Control
Richard M. Murray
Control and Dynamical Systems, California Institute of Technology
DRAFT v2.1a, January 3, 2010
© California Institute of Technology. All rights reserved. This manuscript is for review purposes only and may not be reproduced, in whole or in part, without written consent from the author.

## Chapter 5: Kalman Filtering

In this chapter we derive the optimal estimator for a linear system in continuous time (also referred to as the Kalman-Bucy filter). This estimator minimizes the covariance and can be implemented as a recursive filter.

*Prerequisites.* Readers should have basic familiarity with continuous-time stochastic systems at the level presented in Chapter ??.

### 5.1 Linear Quadratic Estimators

Consider a stochastic system

$$\dot{X} = AX + Bu + FW, \qquad Y = CX + V,$$

where $X$ represents the state, $u$ is the (deterministic) input, $W$ represents disturbances that affect the dynamics of the system, and $V$ represents measurement noise. Assume that the disturbance $W$ and noise $V$ are zero-mean, Gaussian white noise (but not necessarily stationary):

$$p(w) = \frac{1}{\sqrt{2\pi}^{\,n}\sqrt{\det R_W}}\, e^{-\frac{1}{2} w^T R_W^{-1} w}, \qquad E\{W(s)W^T(t)\} = R_W(t)\,\delta(t-s),$$

$$p(v) = \frac{1}{\sqrt{2\pi}^{\,n}\sqrt{\det R_V}}\, e^{-\frac{1}{2} v^T R_V^{-1} v}, \qquad E\{V(s)V^T(t)\} = R_V(t)\,\delta(t-s).$$

We also assume that the cross correlation between $W$ and $V$ is zero, so that the disturbances are not correlated with the noise. Note that we use multi-variable Gaussians here, with noise intensities $R_W \in \mathbb{R}^{m \times m}$ and $R_V \in \mathbb{R}^{p \times p}$. In the scalar case, $R_W = \sigma_W^2$ and $R_V = \sigma_V^2$.

We formulate the optimal estimation problem as finding the estimate $\hat{X}(t)$ that minimizes the mean square error $E\{(X(t) - \hat{X}(t))(X(t) - \hat{X}(t))^T\}$ given $\{Y(\tau) : 0 \le \tau \le t\}$. It can be shown that this is equivalent to finding the expected value of $X$ subject to the "constraint" given by all of the previous measurements, so that $\hat{X}(t) = E\{X(t) \mid Y(\tau),\, \tau \le t\}$.
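The multivariable Gaussian density above can be checked numerically. A minimal sketch (assuming NumPy; the covariance matrix and test point below are illustrative choices, not values from the text):

```python
import numpy as np

def gaussian_density(w, R):
    """Evaluate the zero-mean multivariate Gaussian density
    p(w) = exp(-w^T R^{-1} w / 2) / (sqrt(2*pi)^n * sqrt(det R)),
    matching the formula for p(w) in the text."""
    n = len(w)
    norm = 1.0 / (np.sqrt(2.0 * np.pi) ** n * np.sqrt(np.linalg.det(R)))
    # Use solve() rather than an explicit inverse for numerical robustness.
    return norm * np.exp(-0.5 * w @ np.linalg.solve(R, w))

# Scalar sanity check: with R_W = sigma^2 = 4, p(0) = 1 / (2 * sqrt(2*pi)).
p0 = gaussian_density(np.array([0.0]), np.array([[4.0]]))
```

In the scalar case this reduces to the familiar $\frac{1}{\sigma\sqrt{2\pi}} e^{-w^2/2\sigma^2}$, which provides an easy consistency check on the normalization.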
This was the way that Kalman originally formulated the problem, and it can be viewed as solving a least squares problem: given all previous $Y(t)$, find the estimate $\hat{X}$ that satisfies the dynamics and minimizes the square error with the measured data. We omit the proof since we will work directly with the error formulation.

**Theorem 5.1** (Kalman-Bucy, 1961). *The optimal estimator has the form of a linear observer*

$$\dot{\hat{X}} = A\hat{X} + Bu + L(Y - C\hat{X}),$$

*where $L(t) = P(t) C^T R_V^{-1}$ and $P(t) = E\{(X(t) - \hat{X}(t))(X(t) - \hat{X}(t))^T\}$ satisfies*

$$\dot{P} = AP + PA^T - PC^T R_V^{-1}(t) CP + FR_W(t)F^T, \qquad P(0) = E\{X(0)X^T(0)\}.$$
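Theorem 5.1 can be exercised numerically by integrating the Riccati equation alongside the observer. A minimal forward-Euler sketch for a hypothetical scalar system (the matrices $A$, $C$, $F$ and noise intensities below are illustrative choices, not values from the text, and $u = 0$):

```python
import numpy as np

# Hypothetical scalar system: xdot = A x + F w, y = C x + v.
A, C, F = -1.0, 1.0, 1.0
Rw, Rv = 1.0, 1.0  # illustrative noise intensities

def riccati_step(P, dt):
    """One Euler step of Pdot = AP + PA^T - P C^T Rv^{-1} C P + F Rw F^T."""
    Pdot = A * P + P * A - P * C * (1.0 / Rv) * C * P + F * Rw * F
    return P + dt * Pdot

def kalman_bucy(y, xhat0, P0, dt):
    """Propagate the observer xhatdot = A xhat + L (y - C xhat),
    with the time-varying gain L(t) = P(t) C^T Rv^{-1}."""
    xhat, P = xhat0, P0
    estimates = []
    for yk in y:
        L = P * C / Rv                       # Kalman-Bucy gain
        xhat += dt * (A * xhat + L * (yk - C * xhat))
        P = riccati_step(P, dt)
        estimates.append(xhat)
    return np.array(estimates), P
```

For this scalar example the Riccati equation reduces to $\dot{p} = -2p - p^2 + 1$, whose stable equilibrium $p = \sqrt{2} - 1$ gives the steady-state error covariance; the integration should converge to it from any positive initial condition.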

## This note was uploaded on 01/04/2012 for the course CDS 110b taught by Professor R. Murray during the Fall '08 term at Caltech.

