2.160 System Identification, Estimation, and Learning
Lecture Notes No. 21
May 8, 2006

16. Information Theory of System Identification

16.1 Overview

Maximum Likelihood Estimate (MLE):

$$\hat{\theta}_{ML}(Z^N) = \arg\max_{\theta} \log L(\theta) \qquad (1)$$

Likelihood function (the joint PDF of the observations, viewed as a function of the parameter $\theta$):

$$L(\theta) = f(y_1, \cdots, y_N; \theta) \qquad (2)$$

For dynamical systems, writing the prediction error as $\varepsilon(t, \theta) = y(t) - \hat{y}(t \mid \theta)$,

$$L(\theta) = \prod_{t=1}^{N} f(\varepsilon(t, \theta); t, \theta) \qquad (3)$$

$$\hat{\theta}_{ML}(Z^N) = \arg\max_{\theta} \frac{1}{N} \sum_{t=1}^{N} \log f(\varepsilon(t, \theta); t, \theta) \qquad (4)$$

Entropy, the expected negative logarithm of probability:

$$H = -\sum_{i} p_i \log p_i \qquad (5)$$

Model-data agreement is quantified in terms of the amount of information. The punch line: a "good" model fully exploits all the information contained in the data.

- Least Square Estimate and the correlation method: the prediction error and the data are orthogonal, i.e. uncorrelated with each other.
- Kalman Filter: the innovation process e(t) is a white-noise random process.
- Maximum Likelihood Estimate and the information-theoretic approach: the logarithmic joint probability of the prediction error is maximized; equivalently, the degree of randomness in the prediction error is maximized (the Entropy Maximization Principle). Numerical sketches of (4) and of the entropy (5) follow below.
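To make (4) concrete, here is a minimal numerical sketch, added as an illustration and not part of the original notes: it fits a first-order autoregressive model $y(t) = a\,y(t-1) + e(t)$ with Gaussian noise of known variance by maximizing the average log-density of the prediction errors. The AR(1) structure, the known noise variance, and the use of scipy are all simplifying assumptions.

```python
# Sketch of Eq. (4): MLE as maximization of the average log-density
# of the prediction errors (hypothetical AR(1) example).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Simulate y(t) = a0*y(t-1) + e(t), e ~ N(0, s2)  (assumed true system)
a0, s2, N = 0.7, 0.5, 500
y = np.zeros(N)
for t in range(1, N):
    y[t] = a0 * y[t - 1] + rng.normal(scale=np.sqrt(s2))

def avg_log_likelihood(a):
    """(1/N) * sum_t log f(eps(t, a)) for Gaussian prediction errors."""
    eps = y[1:] - a * y[:-1]          # prediction error eps(t, theta)
    return np.mean(-0.5 * np.log(2 * np.pi * s2) - eps**2 / (2 * s2))

# arg max of the average log-likelihood = arg min of its negative
res = minimize_scalar(lambda a: -avg_log_likelihood(a),
                      bounds=(-1, 1), method="bounded")
print(f"true a = {a0}, ML estimate = {res.x:.3f}")
```

For Gaussian prediction errors the log-density is a negative quadratic in $\varepsilon$, so maximizing (4) reduces to least squares; this is one reason the punch line places the least-squares and ML estimates in the same family.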
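The entropy (5) can also be checked numerically. The short sketch below, again an added illustration rather than part of the notes, confirms that on a fixed alphabet H is largest for the uniform distribution and zero for a deterministic one; this is the sense in which "maximizing the randomness of the prediction error" is an entropy-maximization principle.

```python
# Check of Eq. (5): H = -sum_i p_i log p_i peaks at the uniform
# distribution and vanishes for a deterministic outcome.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # by convention 0 * log 0 = 0
    return -np.sum(p * np.log(p))

print(entropy([0.25, 0.25, 0.25, 0.25]))  # uniform: log 4 ~ 1.386
print(entropy([0.70, 0.10, 0.10, 0.10]))  # peaked:  ~ 0.940
print(entropy([1.00, 0.00, 0.00, 0.00]))  # deterministic: 0.0
```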

In the information-theoretic approach, we use "information" as a generic measure for evaluating how well a model fits a given set of data. This unified measure allows us to compare diverse model structures on the same basis, and it provides an objective means of selecting an optimal model based on the trade-off between estimate bias and variance, i.e. accuracy vs. reliability.

The expected end results:

Akaike's Information Criterion (AIC):

$$\hat{\theta}_{AIC}(Z^N) = \arg\min_{\theta} \left[ -\frac{1}{N} \log L(\theta) + \frac{\dim \theta}{N} \right] \qquad (6)$$

Here $\dim \theta$, the dimension of the parameter vector (the number of parameters), is the penalty for using many parameters; a numerical illustration of (6) is sketched below.

MDL (Minimum Description Length), by Rissanen: a good model = the shortest code describing the data.
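As a concrete illustration of (6), the snippet below fits AR models of increasing order to the same data and selects the order that minimizes the normalized AIC. This is a sketch under assumed conditions, not taken from the notes: the errors are assumed Gaussian, least squares is used as the ML fit, and only the AR coefficients are counted in $\dim \theta$.

```python
# Hypothetical illustration of Eq. (6): AIC-based model-order selection
# for AR(k) models fit by least squares (Gaussian errors assumed).
import numpy as np

rng = np.random.default_rng(1)

# Data from an AR(2) process; the model order is what AIC must recover.
N = 400
y = np.zeros(N)
for t in range(2, N):
    y[t] = 0.5 * y[t-1] - 0.3 * y[t-2] + rng.normal()

def aic(y, k):
    """(1/n) * [ -log L(theta_hat) + dim(theta) ] for an AR(k) model,
    counting only the k AR coefficients as parameters for simplicity."""
    Y = y[k:]
    Phi = np.column_stack([y[k-i:len(y)-i] for i in range(1, k+1)])
    theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    eps = Y - Phi @ theta                       # prediction errors
    s2 = np.mean(eps**2)                        # ML noise-variance estimate
    n = len(Y)
    log_L = -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)
    return (-log_L + k) / n

for k in range(1, 6):
    print(f"order {k}: AIC = {aic(y, k):.4f}")
# The minimum is expected near k = 2, the true order: the log-likelihood
# term rewards fit, while dim(theta) penalizes extra parameters.
```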
16.2 The Kullback-Leibler Information Distance

We want to measure the distance between the true system and its model: how much does the model differ from the truth?

[Figure: two probability density functions plotted against x, the PDF $f_0(x)$ of the truth (the real process) and the PDF $g(x)$ of the model.]

To evaluate the overall difference between the model and the truth, consider the following measure; if $f_0(x) = g(x)$ for all $x$, then the distance must be zero:

$$I(f_0, g) = \int_{\mathbb{R}^N} f_0(x) \log \frac{f_0(x)}{g(x; \theta_m)} \, dx \qquad (7)$$

This is called the Kullback-Leibler information distance.

Properties of $I(f, g)$:

1) $I(f, g) \geq 0$ for all PDFs $f$ and $g$.

Proof: Since $f$ and $g$ are PDFs,

$$f(x) \geq 0, \quad \int f(x) \, dx = 1, \quad g(x) \geq 0, \quad \int g(x) \, dx = 1$$

Define

$$h(x) \equiv \frac{g(x)}{f(x)} - 1, \quad \text{i.e.} \quad g(x) = f(x) \left( h(x) + 1 \right) \qquad (8)$$

Note that

$$\int \left( g(x) - f(x) \right) dx = \int f(x) \, h(x) \, dx = 0 \qquad (9)$$

Combining (8) and (9) with the inequality $\log(1 + h) \leq h$ completes the argument: $I(f, g) = -\int f(x) \log\big(1 + h(x)\big) \, dx \geq -\int f(x) \, h(x) \, dx = 0$. A numerical check follows below.
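The nonnegativity property is easy to verify numerically. Below is a minimal sketch, an added illustration rather than material from the notes, that evaluates (7) on a grid for two Gaussian densities; $I(f_0, g)$ shrinks to zero as the model $g$ approaches the truth $f_0$.

```python
# Numerical check of Eq. (7) and property 1): I(f0, g) >= 0,
# with equality when the model matches the truth.
import numpy as np

x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]

def gaussian(x, mu, sigma):
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def kl(f0, g):
    """Grid approximation of  integral f0(x) * log(f0(x)/g(x)) dx."""
    mask = f0 > 0                     # 0 * log(0/g) contributes nothing
    return np.sum(f0[mask] * np.log(f0[mask] / g[mask])) * dx

f0 = gaussian(x, 0.0, 1.0)            # truth
for mu in [2.0, 1.0, 0.5, 0.0]:       # models approaching the truth
    g = gaussian(x, mu, 1.0)
    print(f"model mean {mu}: I(f0, g) = {kl(f0, g):.4f}")
# Closed form for equal unit variances: I = mu^2 / 2, reaching 0 at mu = 0.
```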