
# lecture_2 - 2.160 Identification, Estimation, and Learning...


2.160 Identification, Estimation, and Learning
Lecture Notes No. 2
February 13, 2006

## 2. Parameter Estimation for Deterministic Systems

### 2.1 Least Squares Estimation

Consider a deterministic system with inputs $u_1, u_2, \ldots, u_m$, output $y$, and unknown parameters. A linearly parameterized model has the input-output relation

$$y = b_1 u_1 + b_2 u_2 + \cdots + b_m u_m$$

Parameters to estimate: $\theta = [b_1 \cdots b_m]^T \in \mathbb{R}^m$

Observations: $\varphi = [u_1 \cdots u_m]^T \in \mathbb{R}^m$, so that

$$y = \varphi^T \theta \qquad (1)$$

The problem is to find the parameters $\theta = [b_1 \cdots b_m]^T$ from the observation data

$$(\varphi(1), y(1)),\ (\varphi(2), y(2)),\ \ldots,\ (\varphi(N), y(N)).$$

The system may be a linear dynamic system, e.g.

$$y(t) = b_1 u(t-1) + b_2 u(t-2) + \cdots + b_m u(t-m)$$

$$\varphi(t) = [u(t-1),\ u(t-2),\ \ldots,\ u(t-m)]^T \in \mathbb{R}^m$$

or a nonlinear dynamic system, e.g.

$$y(t) = b_1 u(t-1) + b_2 u(t-2)\,u(t-1)$$

$$\varphi(t) = [u(t-1),\ u(t-2)\,u(t-1)]^T$$

Note that the parameters $b_1, b_2, \ldots$ enter the input-output equation linearly. Using an estimated parameter vector $\hat\theta$, we can write a predictor that predicts the output from the inputs:

$$\hat y(t) = \varphi^T(t)\,\hat\theta \qquad (2)$$

We evaluate the predictor's performance by the squared error

$$V_N(\hat\theta) = \frac{1}{N} \sum_{t=1}^{N} \big(y(t) - \hat y(t \mid \hat\theta)\big)^2 \qquad (3)$$

Problem: find the parameter vector $\hat\theta$ that minimizes the squared error:

$$\hat\theta = \arg\min_{\hat\theta} V_N(\hat\theta) \qquad (4)$$

Differentiating $V_N(\hat\theta)$ and setting it to zero, ...
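The least-squares criterion (3)–(4) can be illustrated numerically. Below is a minimal NumPy sketch (not from the notes): it builds the regressor vectors $\varphi(t)$ for the linear dynamic model above and minimizes $V_N$ with `numpy.linalg.lstsq`, which solves the same normal equations one obtains by setting $dV_N/d\hat\theta = 0$. The parameter values `b_true`, the noise level, and the data sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "true" parameters (an assumption, not from the notes):
# y(t) = b1*u(t-1) + b2*u(t-2) + small measurement noise
b_true = np.array([2.0, -0.5])
m, N = 2, 200

u = rng.standard_normal(N + m)  # input sequence u(0), u(1), ...

# Stack regressors phi(t) = [u(t-1), u(t-2)]^T row-wise, one row per time step
Phi = np.array([[u[t - 1], u[t - 2]] for t in range(m, m + N)])
y = Phi @ b_true + 0.01 * rng.standard_normal(N)

# Minimize V_N(theta) = (1/N) * sum_t (y(t) - phi(t)^T theta)^2
theta_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(theta_hat)  # close to b_true
```

Equivalently one could form the normal-equation solution $(\Phi^T\Phi)^{-1}\Phi^T y$ explicitly, but `lstsq` is numerically preferable since it avoids forming $\Phi^T\Phi$.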

## This note was uploaded on 02/27/2012 for the course MECHANICAL 2.160, taught by Professor Harry Asada during the Spring '06 term at MIT.

