Stochastic Process                                                11/10/2006

Lecture 7: Minimum Mean Square Error Estimation                       NCTUEE

Summary

In this lecture, I will discuss:

- Least Squares
- Least Squares using SVD
- Fundamental Theorem of Estimation
- Linear MMSE

Notation

We will use the following notation rules, unless otherwise noted, to represent symbols during this course.

- Boldface upper-case letters represent matrices.
- Boldface lower-case letters represent vectors.
- The superscripts $(\cdot)^T$ and $(\cdot)^H$ denote transpose and Hermitian (conjugate transpose), respectively.
- Upper-case italic letters represent random variables.
1 Least Squares

Consider the linear model

    $y = H\theta + w$,

where $H$ is a known $m \times n$ observation matrix, $\theta$ is an $n \times 1$ unknown parameter vector which may or may not be random, and $w$ is a noise vector. The least-squares estimator of $\theta$ minimizes the squared 2-norm

    $\|y - H\theta\|^2 = (y - H\theta)^T (y - H\theta)$

and is given by

    $\hat{\theta}_{LS} = \arg\min_{\theta} \|y - H\theta\|^2 = (H^T H)^{-1} H^T y$.    (1)

Remarks:

(1) When $H$ is square and non-singular, the least-squares estimator reduces to $\hat{\theta}_{LS} = H^{-1} y$.

(2) The matrix $H^{\dagger} = (H^T H)^{-1} H^T$ is called the pseudo-inverse of $H$, so the LS estimator can be written as $\hat{\theta}_{LS} = H^{\dagger} y$.

(3) The matrix $H^T H$ must be non-singular for (1) to hold, which requires $H$ to be full rank. In practice, we often solve least-squares problems using the system of normal equations

    $(H^T H)\,\hat{\theta}_{LS} = H^T y$.

(4) Let $\tilde{y} = y - H\hat{\theta}_{LS}$. The normal equations then give $H^T \tilde{y} = 0$; this is known as the orthogonality condition.

(5) The minimum least-squares error is

    $J_{\min} = \|y - H\hat{\theta}_{LS}\|^2 = y^T \left( I - H (H^T H)^{-1} H^T \right) y$.
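To make the estimator in (1) and the remarks above concrete, here is a minimal numerical sketch in NumPy. The observation matrix, true parameter, and noise level are illustrative assumptions, not values from the lecture; the snippet checks that the closed-form estimator, the normal-equations solution, and the orthogonality condition agree.

```python
# Minimal sketch of the least-squares estimator (1) on a synthetic problem.
# H, theta_true, and the noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 3                        # m x n observation matrix with m >= n
H = rng.standard_normal((m, n))    # known, full-rank observation matrix
theta_true = np.array([1.0, -2.0, 0.5])
w = 0.1 * rng.standard_normal(m)   # noise vector
y = H @ theta_true + w             # linear model y = H theta + w

# Closed form: theta_LS = (H^T H)^{-1} H^T y
theta_ls = np.linalg.inv(H.T @ H) @ H.T @ y

# In practice, solve the normal equations (H^T H) theta_LS = H^T y
# rather than forming the inverse explicitly.
theta_ls_ne = np.linalg.solve(H.T @ H, H.T @ y)

# Orthogonality condition: the residual y_tilde = y - H theta_LS
# satisfies H^T y_tilde = 0.
residual = y - H @ theta_ls
print(theta_ls)
print(np.allclose(theta_ls, theta_ls_ne))            # True
print(np.allclose(H.T @ residual, 0.0, atol=1e-10))  # True
```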
2 Geometric Interpretations

The least-squares problem for the linear model $y = H\theta + w$ can also be interpreted geometrically, in terms of the distance measured by the 2-norm.

(1) The received signal $y \in \mathbb{R}^m$. If the matrix $H \in \mathbb{R}^{m \times n}$ with $m \geq n$ is full rank, then the range space $S$ of $H$ has dimension $n$ and is a subspace of $\mathbb{R}^m$.

(2) The LS estimate $\hat{\theta}_{LS}$ is the vector that makes $\hat{s} = H\hat{\theta}_{LS}$ the orthogonal projection of $y$ onto the subspace spanned by the column vectors of $H$, i.e., the range of $H$. The orthogonal projection is given by

    $\hat{s} = H\hat{\theta}_{LS} = \underbrace{H (H^T H)^{-1} H^T}_{\triangleq\, P}\, y = P\, y$,

where $P = H (H^T H)^{-1} H^T$.
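A short sketch of this projection view, under the same illustrative assumptions as the previous snippet (a random full-rank $H$): it checks that $P y$ coincides with $H\hat{\theta}_{LS}$, that $P$ is symmetric and idempotent, and that the residual $y - P y$ is orthogonal to the range of $H$.

```python
# Sketch of the orthogonal-projection interpretation; H and y are
# illustrative assumptions, not data from the lecture.
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 3
H = rng.standard_normal((m, n))   # full-rank with probability 1
y = rng.standard_normal(m)

P = H @ np.linalg.inv(H.T @ H) @ H.T        # projection onto range(H)
theta_ls = np.linalg.solve(H.T @ H, H.T @ y)
s_hat = H @ theta_ls                         # s_hat = H theta_LS

print(np.allclose(P @ y, s_hat))             # P y equals H theta_LS
print(np.allclose(P, P.T))                   # P is symmetric
print(np.allclose(P @ P, P))                 # P is idempotent: P^2 = P
print(np.allclose(H.T @ (y - P @ y), 0.0, atol=1e-10))  # residual orthogonal to range(H)
```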