lecture_18 - 2.160 System Identification, Estimation, and Learning
2.160 System Identification, Estimation, and Learning
Lecture Notes No. 18
April 26, 2006

13 Asymptotic Distribution of Parameter Estimates

13.1 Overview

If convergence is guaranteed, then $\hat{\theta}_N \rightarrow \theta^*$. But how quickly does the estimate approach the limit $\theta^*$? How many data points are needed? These questions lead to asymptotic variance analysis.

[Figure: distributions of $\hat{\theta}_N$ plotted against the iteration/data number $N$. The variance is large for small $N$ and small for large $N$; how quickly does the variance reduce?]

The main points to be obtained in this chapter: the variance analysis will reveal that

a) the estimate $\hat{\theta}_N$ converges to $\theta^*$ at a rate proportional to $1/\sqrt{N}$;
b) the distribution of $\sqrt{N}(\hat{\theta}_N - \theta^*)$ converges to a Gaussian distribution $N(0, Q)$;
c) $\mathrm{Cov}\,\hat{\theta}_N$ depends on the parameter sensitivity of the predictor $\hat{y}$.

The identified model parameter $\hat{\theta}_N$, together with its covariance, thus carries a "quality tag": a confidence interval.

13.2 Central Limit Theorems

The mathematical tool needed for asymptotic variance analysis is the family of "central limit" theorems. The following is a quick review of the theory.
Consider two independent random variables, $X$ and $Y$, with PDFs $f_X(x)$ and $f_Y(y)$. Define another random variable $Z$ as the sum of $X$ and $Y$:

$$Z = X + Y$$

Let us obtain the PDF of $Z$. For a small increment $\Delta z$,

$$\Pr(z < Z \le z + \Delta z) = \iint_{z < x + y \le z + \Delta z} f_X(x) f_Y(y)\,dx\,dy = \left[\int_{-\infty}^{\infty} f_X(x) f_Y(z - x)\,dx\right] \Delta z$$

so that

$$f_Z(z) = \int_{-\infty}^{\infty} f_X(x) f_Y(z - x)\,dx$$

that is, $f_Z$ is the convolution of $f_X$ and $f_Y$.

Example: Let $f_X(x)$ and $f_Y(y)$ have the same uniform distribution, $f_X(x) = 1/2$ for $-1 \le x \le 1$ and zero elsewhere. Combining the two distributions by the convolution above, we can obtain the distribution of $Z$: a triangular PDF $f_Z(z)$ that rises linearly from $0$ at $z = -2$ to a peak of $1/2$ at $z = 0$ and falls back to $0$ at $z = 2$.

Further, consider $W = X + Y + V$, where $V$ has the same rectangular PDF as $X$ and $Y$. The resultant PDF $f_W(w)$, supported on $-3 \le w \le 3$, is getting close to a Gaussian distribution.
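The uniform-sum example above can be checked numerically. The following sketch (assuming NumPy is available; the sample size and seed are arbitrary choices, not from the lecture) draws i.i.d. uniform samples on $[-1, 1]$ and verifies that the variances of $Z = X + Y$ and $W = X + Y + V$ match the values implied by the convolution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # number of Monte Carlo samples (arbitrary)

# X, Y, V are i.i.d. uniform on [-1, 1], matching the rectangular
# PDF f_X(x) = 1/2 on [-1, 1] from the example above.
X = rng.uniform(-1, 1, n)
Y = rng.uniform(-1, 1, n)
V = rng.uniform(-1, 1, n)

Z = X + Y      # triangular PDF on [-2, 2], peak value 1/2 at z = 0
W = X + Y + V  # piecewise-quadratic PDF on [-3, 3], close to Gaussian

# Each uniform variable has variance 1/3, and variances of independent
# variables add, so Var(Z) = 2/3 and Var(W) = 1.
print(Z.var(), W.var())
```

A histogram of `W` would already look bell-shaped, which is the qualitative point of the example.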
In general, the PDF of a random variable $\sum_{i=1}^{N} X_i$ approaches a Gaussian distribution, regardless of the PDF of each $X_i$, as $N$ gets larger. More rigorously, the following central limit theorem has been proven.

A Central Limit Theorem of Independent Random Variables

Let $X_t$, $t = 0, 1, \ldots$, be a $d$-dimensional random variable with

Mean: $E(X_t) = m$
Covariance: $E\left[(X_t - m)(X_t - m)^T\right] = Q$  (1)

Consider the normalized sum of $X_t - m$ given by

$$Y_N = \frac{1}{\sqrt{N}} \sum_{t=1}^{N} (X_t - m)$$  (2)

Then, as $N$ tends to infinity, the distribution of $Y_N$ converges to the Gaussian distribution given by the PDF

$$f_Y(y) = \frac{1}{(2\pi)^{d/2} \sqrt{\det Q}} \exp\left(-\frac{1}{2}\, y^T Q^{-1} y\right)$$  (3)

where $y = \lim_{N \to \infty} \frac{1}{\sqrt{N}} \sum_{t=1}^{N} (X_t - m)$.

13.3 Distribution of Estimate $\hat{\theta}_N$

Applying the central limit theorem, we can obtain the distribution of the estimate $\hat{\theta}_N$ as $N$ tends to infinity. Let $\hat{\theta}_N$ be an estimate based on the prediction error method (PEM):

$$\hat{\theta}_N = \arg \min_{\theta \in D_M} V_N(\theta, Z^N)$$  (4)

$$V_N(\theta, Z^N) = \frac{1}{N} \sum_{t=1}^{N} \frac{1}{2}\, \varepsilon^2(t, \theta)$$  (5)

For simplicity, we first assume that the predictor $\hat{y}(t \mid \theta)$ is given by a linear regression:

$$\hat{y}(t \mid \theta) = \varphi^T(t)\, \theta$$  (6)
and that the parameter vector of the true system, $\theta_0$, is contained in the model set: $\theta_0 \in D_M$.
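As an illustration of point (a) of the overview, the following sketch simulates the PEM estimate for a scalar linear-regression predictor over many independent experiments. The true parameter value, noise level, regressor distribution, and trial counts are hypothetical choices, not from the lecture; the only structural fact used is that, for a linear regression, minimizing $V_N$ in (5) reduces to ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
theta0 = 1.5   # hypothetical true parameter (scalar case for simplicity)
trials = 2000  # independent experiments per data length

def pem_std(N):
    """Empirical standard deviation of the PEM estimate for N data points."""
    phi = rng.uniform(-1, 1, (trials, N))  # regressors phi(t)
    e = rng.normal(0.0, 0.5, (trials, N))  # white prediction errors
    y = theta0 * phi + e                   # data generated by the true system
    # With the linear-regression predictor, minimizing V_N reduces
    # to ordinary least squares (one estimate per trial):
    theta_hat = (phi * y).sum(axis=1) / (phi ** 2).sum(axis=1)
    return theta_hat.std()

# If the spread of theta_hat shrinks like 1/sqrt(N), quadrupling the
# data length should roughly halve the empirical standard deviation.
s100, s400 = pem_std(100), pem_std(400)
print(s100, s400, s100 / s400)
```

The printed ratio should be close to 2, consistent with the $1/\sqrt{N}$ convergence rate claimed in the overview.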