2.160 Identification, Estimation, and Learning
Lecture Notes No. 3
February 15, 2006

2.3 Physical Meaning of Matrix P

The Recursive Least Squares (RLS) algorithm updates the parameter vector $\hat{\theta}(t-1)$ based on the new data $(\varphi^T(t),\, y(t))$ in such a way that the overall squared error is minimized. This is done by multiplying the prediction error $\varphi^T(t)\hat{\theta}(t-1) - y(t)$ by a gain matrix that contains the matrix $P_{t-1}$. To better understand the RLS algorithm, let us examine the physical meaning of matrix $P_{t-1}$.

Recall the definition of the matrix:

$$P_t^{-1} = \sum_{i=1}^{t} \varphi(i)\,\varphi^T(i) = \Phi\Phi^T \qquad (17)$$

$$\Phi = \left[\,\varphi(1) \;\cdots\; \varphi(t)\,\right] \in R^{m \times t}, \qquad \Phi^T \in R^{t \times m}, \qquad \Phi\Phi^T \in R^{m \times m}$$

Note that the matrix $\Phi\Phi^T$ varies depending on how the set of vectors $\{\varphi(i)\}$ spans the m-dimensional space. See the figure below.

[Figure: Geometric interpretation of the matrix $P^{-1} = \Phi\Phi^T$ in m-dimensional space. The data vectors $\varphi(1), \varphi(2), \ldots, \varphi(i)$ trace out an ellipsoid: many $\varphi$-vectors lie along the well-traveled direction of $\lambda_{\max}(\Phi\Phi^T)$, which is the direction of $\lambda_{\min}(P)$, while few lie along the less-traveled direction of $\lambda_{\min}(\Phi\Phi^T)$. New data $\varphi(t)$ arrives relative to this ellipsoid.]

Since $\Phi\Phi^T \in R^{m \times m}$ is a symmetric matrix of real numbers, all of its eigenvalues are real. The eigenvectors associated with the individual eigenvalues are also real. Therefore, the matrix $\Phi\Phi^T$ can be reduced to a diagonal matrix by a coordinate transformation, i.e., using the eigenvectors as the basis:

$$\Phi\Phi^T \;\Rightarrow\; D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & & \vdots \\ \vdots & & \ddots & \\ 0 & \cdots & & \lambda_m \end{pmatrix} \in R^{m \times m} \qquad (19)$$

$$\lambda_1 = \lambda_{\max} \ge \lambda_2 \ge \cdots \ge \lambda_m = \lambda_{\min}$$

$$P = (\Phi\Phi^T)^{-1} \;\Rightarrow\; D^{-1} = \begin{pmatrix} 1/\lambda_1 & 0 & \cdots & 0 \\ 0 & 1/\lambda_2 & & \vdots \\ \vdots & & \ddots & \\ 0 & \cdots & & 1/\lambda_m \end{pmatrix} \in R^{m \times m} \qquad (20)$$

The direction of $\lambda_{\max}(\Phi\Phi^T)$ is the direction of $\lambda_{\min}(P)$.

If $\lambda_{\min} = 0$, then $\det(\Phi\Phi^T) = 0$, and the ellipsoid collapses. This implies that there is no input data $\varphi(i)$ in the direction of $\lambda_{\min}$; i.e., the input data set does not contain any information in that direction. Consequently, the m-dimensional parameter vector $\theta$ cannot be fully determined from the data set.

In the direction of $\lambda_{\max}$, there are plenty of input data $\varphi(i)$. This direction has been well explored, i.e., well excited. Even when new data are obtained, the correction to the parameter vector $\hat{\theta}(t-1)$ is small if the new input data $\varphi(t)$ lie in the same direction as that of $\lambda_{\max}$. See the figure above.
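To make this geometry concrete, here is a small numerical sketch in Python/NumPy. Everything in it is our own illustration, not part of the notes: the synthetic data, the tiny regularization added so that $P$ exists when $\lambda_{\min} = 0$, and all variable names. It builds $\Phi$ with one strongly excited direction and one completely unexcited direction, confirms that $\lambda_{\min} \approx 0$ makes $\Phi\Phi^T$ singular, and compares the size of an RLS-style correction $P\,\varphi(t)\,(y(t) - \varphi^T(t)\hat{\theta})$ along the $\lambda_{\max}$ and $\lambda_{\min}$ eigen-directions.

```python
import numpy as np

m, t = 3, 200
rng = np.random.default_rng(0)

# Build Phi = [phi(1) ... phi(t)] in R^{m x t}.
# Direction e1 is strongly excited, e2 weakly, e3 not at all.
Phi = np.zeros((m, t))
Phi[0, :] = rng.normal(scale=10.0, size=t)   # well-traveled direction
Phi[1, :] = rng.normal(scale=0.1, size=t)    # less-traveled direction
# Phi[2, :] stays zero: no information in that direction

A = Phi @ Phi.T                              # Phi Phi^T, eq. (17)

# Symmetric real matrix -> real eigenvalues, real eigenvectors (eq. 19).
lam, V = np.linalg.eigh(A)                   # ascending: lam[0] = lam_min
print("eigenvalues of Phi Phi^T:", lam)      # lam_min ~ 0
print("det(Phi Phi^T):", np.linalg.det(A))   # ~ 0: the ellipsoid collapses

# P = (Phi Phi^T)^{-1}, eq. (20). The 1e-9 ridge is a numerical
# convenience (our assumption) so the inverse exists despite lam_min = 0.
P = np.linalg.inv(A + 1e-9 * np.eye(m))

theta_true = np.array([1.0, 2.0, 3.0])
theta_hat = np.zeros(m)

# One RLS-style correction for a unit-norm new datum phi(t) taken along
# the lam_max eigen-direction, then along the lam_min eigen-direction.
for label, phi_new in (("lam_max", V[:, -1]), ("lam_min", V[:, 0])):
    err = phi_new @ theta_true - phi_new @ theta_hat
    delta = P @ phi_new * err
    print(f"correction norm along {label} direction: {np.linalg.norm(delta):.3e}")
# Expect: tiny correction along lam_max (well excited),
# enormous correction along lam_min (no prior excitation).
```

In the standard RLS recursion the gain also carries the normalizing factor $1/(1 + \varphi^T(t) P_{t-1} \varphi(t))$, which rescales the step but does not change the directional picture: directions where $\Phi\Phi^T$ has large eigenvalues receive small corrections, and vice versa.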