Stochastic Process (NCTUEE)                                       11/10/2006
Lecture 7: Minimum Mean Square Error Estimation

Summary

In this lecture, I will discuss:
• Least Squares
• Least Squares using SVD
• Fundamental Theorem of Estimation
• Linear MMSE

Notation

We will use the following notation rules, unless otherwise noted, to represent symbols during this course.
• Boldface upper-case letters represent matrices.
• Boldface lower-case letters represent vectors.
• Superscripts $(\cdot)^T$ and $(\cdot)^H$ denote transpose and Hermitian (conjugate transpose), respectively.
• Upper-case italic letters represent random variables.

1 Least Squares

Consider the linear model $y = H\theta + w$, where $H$ is a known $m \times n$ observation matrix, $\theta$ is an $n \times 1$ unknown parameter which may or may not be random, and $w$ is a noise vector. Then the least-squares estimator of $\theta$, which minimizes the squared 2-norm $\|y - H\theta\|^2 = (y - H\theta)^T (y - H\theta)$, is given by

    $\hat{\theta}_{LS} = \arg\min_{\theta} \|y - H\theta\|^2 = (H^T H)^{-1} H^T y.$    (1)

Remarks:

(1) When $H$ is square and non-singular, the least-squares estimator reduces to $\hat{\theta}_{LS} = H^{-1} y$.

(2) The matrix $H^{\dagger} = (H^T H)^{-1} H^T$ is called the pseudo-inverse of $H$, so the LS estimator can be written as $\hat{\theta}_{LS} = H^{\dagger} y$.

(3) The matrix $H^T H$ must be non-singular for (1) to hold, which requires $H$ to be full-rank. In practice, we often solve least-squares problems using the system of normal equations $(H^T H)\,\hat{\theta}_{LS} = H^T y$.

(4) Let $\tilde{y} = y - H\hat{\theta}_{LS}$. From the normal equations we find $H^T \tilde{y} = 0$. This is known as the orthogonality condition.

(5) The minimum least-squares error is

    $J_{\min} = \|y - H\hat{\theta}_{LS}\|^2 = y^T \left( I - H (H^T H)^{-1} H^T \right) y.$

2 Geometric Interpretations

The least-squares problem for the linear model $y = H\theta + w$ can be interpreted geometrically, using the notion of distance induced by the 2-norm.

(1) The received signal $y \in \mathbb{R}^m$. If the matrix $H \in \mathbb{R}^{m \times n}$, $m \ge n$, is full-rank, then the range space $S$ of $H$ has dimension $n$ and is a subspace of $\mathbb{R}^m$.

(2) The LS estimate $\hat{\theta}_{LS}$ is the vector that makes $\hat{s} = H\hat{\theta}_{LS}$ the orthogonal projection of $y$ onto the subspace spanned by the column vectors of $H$, i.e. the range of $H$. The orthogonal projection is given by

    $\hat{s} = H\hat{\theta}_{LS} = \underbrace{H (H^T H)^{-1} H^T}_{\triangleq\, P}\; y = P\, y,$

where $P = H (H^T H)^{-1} H^T$ is the orthogonal projection matrix onto the range of $H$.
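The following is a minimal numerical sketch, in Python with NumPy, of the LS estimator in (1). The matrix sizes, noise level, and random data are illustrative assumptions, not values from the lecture; the point is to show the normal-equation solve of remark (3) and the orthogonality condition of remark (4).

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 8, 3                        # sizes are illustrative assumptions
H = rng.standard_normal((m, n))    # "known" observation matrix (full-rank a.s.)
theta = rng.standard_normal(n)     # unknown parameter (drawn at random here)
w = 0.1 * rng.standard_normal(m)   # noise vector
y = H @ theta + w                  # linear model y = H theta + w

# Remark (3): solve the normal equations (H^T H) theta_hat = H^T y
# rather than forming the inverse explicitly.
theta_ls = np.linalg.solve(H.T @ H, H.T @ y)

# The same estimate from NumPy's built-in least-squares routine.
theta_lstsq, *_ = np.linalg.lstsq(H, y, rcond=None)
print(np.allclose(theta_ls, theta_lstsq))     # True

# Remark (4): orthogonality condition H^T (y - H theta_hat) = 0.
residual = y - H @ theta_ls
print(np.allclose(H.T @ residual, 0.0))       # True up to round-off
```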
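In the same spirit, a short sketch of the geometric interpretation: the matrix $P = H(H^T H)^{-1}H^T$ built below is the orthogonal projector onto the range of $H$, and the residual $y - \hat{s}$ is orthogonal to that subspace. The dimensions and data are again illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

m, n = 6, 2                              # illustrative sizes, m >= n
H = rng.standard_normal((m, n))          # full-rank by assumption
y = rng.standard_normal(m)

# Orthogonal projector onto the range (column space) of H.
P = H @ np.linalg.inv(H.T @ H) @ H.T
s_hat = P @ y                            # s_hat = H theta_LS

# An orthogonal projector is symmetric and idempotent.
print(np.allclose(P, P.T), np.allclose(P @ P, P))   # True True

# The error y - s_hat lies in the orthogonal complement of range(H),
# consistent with the orthogonality condition.
print(np.allclose(H.T @ (y - s_hat), 0.0))          # True
```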