
# 8. Least Squares (S. Lall, Stanford 2009.10.14.01)


Topics:

- The pseudo-inverse
- Example: pseudo-inverse
- Estimation and least-squares
- Effects of noise on estimation
- Example: navigation
- Regression or curve-fitting
- Example: fitting polynomials
- Example: rocket
- Control and minimum-norm problems
- Example: force on mass
- Matlab and the pseudo-inverse
- History of least-squares

## The Key Points of This Section

- Estimation problems: given $y_{\text{meas}}$, find the least-squares solution $x$ that minimizes $\|y_{\text{meas}} - Ax\|$.
- Control problems: given $y_{\text{des}}$, find the minimum-norm $x$ that satisfies $y_{\text{des}} = Ax$.
- The SVD gives a computational approach; it also gives useful information even when important assumptions don't hold.
- Estimation: usually need $A$ skinny and full rank. Control: usually need $A$ fat and full rank.
- The SVD also gives quantitative information about the usefulness of the solutions.
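The two problem types above can be sketched with NumPy's pseudo-inverse; the matrices and measurement vectors below are made-up illustrative data, not taken from the slides.

```python
import numpy as np

# estimation: A skinny (m > n), full rank; the least-squares solution
# minimizes ||y_meas - A x||  (illustrative data)
A_est = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 x 2, rank 2
y_meas = np.array([1.0, 2.0, 4.0])
x_ls = np.linalg.pinv(A_est) @ y_meas

# control: A fat (m < n), full rank; the minimum-norm x satisfies A x = y_des
A_ctl = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])    # 2 x 3, rank 2
y_des = np.array([1.0, 2.0])
x_mn = np.linalg.pinv(A_ctl) @ y_des

print(x_ls, np.linalg.norm(y_meas - A_est @ x_ls))  # least-squares estimate and residual
print(x_mn, np.linalg.norm(x_mn))                   # minimum-norm solution and its norm
```

In the fat case the residual is exactly zero (the equation is solved), and among all solutions `x_mn` has the smallest norm.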
## Important Facts

$\operatorname{null}(A^T) = \operatorname{range}(A)^\perp$

This is easy via the SVD: if the SVD of $A$ is $A = U \Sigma V^T$, then

$\operatorname{range}(A) = \operatorname{span}\{u_1, \dots, u_r\}$

Also, the SVD of $A^T$ is $A^T = V \Sigma^T U^T$, so

$\operatorname{null}(A^T) = \operatorname{span}\{u_{r+1}, \dots, u_m\}$
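This fact is easy to check numerically: the trailing left singular vectors are killed by $A^T$ and are orthogonal to the leading ones. The rank-2 test matrix below is an illustrative assumption.

```python
import numpy as np

# build a rank-2 matrix of size 4 x 3 (illustrative)
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))      # numerical rank

range_A = U[:, :r]              # span{u_1, ..., u_r} = range(A)
null_At = U[:, r:]              # span{u_{r+1}, ..., u_m} = null(A^T)

# A^T annihilates null_At, and null_At is orthogonal to range(A)
print(np.linalg.norm(A.T @ null_At))        # ~ 0
print(np.linalg.norm(range_A.T @ null_At))  # ~ 0
```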

## One More

$\operatorname{null}(A^T A) = \operatorname{null}(A)$

Also easy via the SVD:

$A^T A = V \Sigma^T U^T U \Sigma V^T = V \Sigma^T \Sigma V^T$

which gives an SVD of $A^T A$. $\Sigma^T \Sigma$ has the same number of non-zero elements as $\Sigma$, so both $A$ and $A^T A$ have null space $\operatorname{span}\{v_{r+1}, \dots, v_n\}$.
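A quick numerical confirmation of this identity, again on an illustrative rank-2 matrix: both null spaces are spanned by the trailing right singular vectors.

```python
import numpy as np

# rank-2 matrix of size 5 x 4 (illustrative)
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))

# equal ranks => equal null spaces (both are subspaces of R^4)
r_A = np.linalg.matrix_rank(A)
r_AtA = np.linalg.matrix_rank(A.T @ A)

# the trailing right singular vectors v_{r+1}, ..., v_n span both null spaces
_, s, Vt = np.linalg.svd(A)
N = Vt[r_A:, :].T
print(r_A, r_AtA)
print(np.linalg.norm(A @ N), np.linalg.norm(A.T @ A @ N))  # both ~ 0
```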
## The Pseudo-Inverse

The thin SVD is

$A = \hat{U} \hat{\Sigma} \hat{V}^T$

Here $\hat{\Sigma}$ is square, diagonal, and positive definite; $\hat{U}$ and $\hat{V}$ are skinny, with orthonormal columns.

The pseudo-inverse of $A$ is

$A^\dagger = \hat{V} \hat{\Sigma}^{-1} \hat{U}^T$

It is computed using the SVD.
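The definition translates directly into code: slice the full SVD down to rank $r$, invert $\hat{\Sigma}$, and multiply. The test matrix is an illustrative assumption; the result matches NumPy's built-in `np.linalg.pinv`.

```python
import numpy as np

# rank-2 matrix of size 4 x 5 (illustrative)
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 5))

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))

# thin SVD factors: U_hat (m x r), Sigma_hat (r x r), V_hat (n x r)
U_hat = U[:, :r]
S_hat = np.diag(s[:r])
V_hat = Vt[:r, :].T

# pseudo-inverse: A_dag = V_hat Sigma_hat^{-1} U_hat^T
A_dag = V_hat @ np.linalg.inv(S_hat) @ U_hat.T
print(np.allclose(A_dag, np.linalg.pinv(A)))
```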

## Example

Consider a rank-2 matrix $A \in \mathbb{R}^{3 \times 5}$ with singular values $\sigma_1 = 35.69$ and $\sigma_2 = 7.02$. The full SVD is $A = U \Sigma V^T$ with $U \in \mathbb{R}^{3 \times 3}$, $\Sigma \in \mathbb{R}^{3 \times 5}$, and $V \in \mathbb{R}^{5 \times 5}$; since $\operatorname{rank}(A) = 2$, the third row of $\Sigma$ is zero. The thin SVD keeps only the first two columns of $U$ and $V$:

$A = \hat{U} \hat{\Sigma} \hat{V}^T, \qquad \hat{\Sigma} = \begin{bmatrix} 35.69 & 0 \\ 0 & 7.02 \end{bmatrix}$

and the pseudo-inverse is

$A^\dagger = \hat{V} \hat{\Sigma}^{-1} \hat{U}^T = \hat{V} \begin{bmatrix} 1/35.69 & 0 \\ 0 & 1/7.02 \end{bmatrix} \hat{U}^T$
## Properties of the Pseudo-Inverse

Key point: the pseudo-inverse solves

- least-squares estimation problems
- minimum-norm control problems

Properties:

- If $A$ is invertible, then $A^\dagger = A^{-1}$.
- $A$ is $m \times n$ $\implies$ $A^\dagger$ is $n \times m$.
- $(A^\dagger)^\dagger = A$
- $(A^T)^\dagger = (A^\dagger)^T$
- $(\lambda A)^\dagger = \lambda^{-1} A^\dagger$ for $\lambda \neq 0$.
- Caution: in general, $(AB)^\dagger \neq B^\dagger A^\dagger$.
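The properties above can all be checked numerically. The matrices below are illustrative random data; for the caution at the end, a generic pair of matrices is enough to exhibit the failure of $(AB)^\dagger = B^\dagger A^\dagger$.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 5))    # illustrative fat matrix, m=3, n=5
A_dag = np.linalg.pinv(A)

print(A_dag.shape)                                        # n x m, i.e. (5, 3)
print(np.allclose(np.linalg.pinv(A_dag), A))              # (A†)† = A
print(np.allclose(np.linalg.pinv(A.T), A_dag.T))          # (A^T)† = (A†)^T
lam = 2.5
print(np.allclose(np.linalg.pinv(lam * A), A_dag / lam))  # (λA)† = λ^{-1} A†

# caution: (AB)† = B† A† fails for generic A, B
B = rng.standard_normal((5, 2))
lhs = np.linalg.pinv(A @ B)
rhs = np.linalg.pinv(B) @ A_dag
print(np.allclose(lhs, rhs))
```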

## Estimation and Least-Squares

Assume $A$ is skinny and full rank, so $m > n$ and $\operatorname{rank}(A) = n$.
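Under this assumption the pseudo-inverse admits a closed form via the normal equations, $A^\dagger = (A^T A)^{-1} A^T$ (a standard identity for skinny full-rank $A$, used here to preview the estimation setup). The data below are illustrative.

```python
import numpy as np

# skinny full-rank A (m > n) and a measurement vector (illustrative)
rng = np.random.default_rng(4)
A = rng.standard_normal((6, 3))
y_meas = rng.standard_normal(6)

# for skinny full-rank A, the pseudo-inverse reduces to (A^T A)^{-1} A^T
A_dag = np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(A_dag, np.linalg.pinv(A)))

# x = A† y_meas is the least-squares estimate, minimizing ||y_meas - A x||
x_hat = A_dag @ y_meas
x_ref = np.linalg.lstsq(A, y_meas, rcond=None)[0]
print(np.allclose(x_hat, x_ref))
```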