16. Mean Square Estimation

Given some information that is related to an unknown quantity of interest, the problem is to obtain a good estimate for the unknown in terms of the observed data. Suppose $X_1, X_2, \ldots, X_n$ represent a sequence of random variables for which one set of observations is available, and $Y$ represents an unknown random variable. The problem is to obtain a good estimate for $Y$ in terms of the observations $X_1, X_2, \ldots, X_n$. Let

    $\hat{Y} = \varphi(X_1, X_2, \ldots, X_n) = \varphi(X)$    (16-1)

represent such an estimate for $Y$. Note that $\varphi(\cdot)$ can be a linear or a nonlinear function of the observations $X_1, X_2, \ldots, X_n$. Clearly

    $\varepsilon(X) = Y - \hat{Y} = Y - \varphi(X)$    (16-2)

represents the error in the above estimate, and $|\varepsilon|^2$ the square of the error. Since $\varepsilon$ is a random variable, $E\{|\varepsilon|^2\}$ represents the mean square error. One strategy to obtain a good estimator is to minimize the mean square error over all possible forms of $\varphi(\cdot)$; this procedure gives rise to the Minimization of the Mean Square Error (MMSE) criterion for estimation. Thus under the MMSE criterion, the estimator $\varphi(\cdot)$ is chosen such that the mean square error $E\{|\varepsilon|^2\}$ is at its minimum.

Next we show that the conditional mean of $Y$ given $X$ is the best estimator in the above sense.

Theorem 16.1: Under the MMSE criterion, the best estimator for the unknown $Y$ in terms of $X_1, X_2, \ldots, X_n$ is given by the conditional mean of $Y$ given $X$. Thus

    $\hat{Y} = \varphi(X) = E\{Y \mid X\}$.    (16-3)

Proof: Let $\hat{Y} = \varphi(X)$ represent an estimate of $Y$ in terms of $X = (X_1, X_2, \ldots, X_n)$. Then the error is $\varepsilon = Y - \hat{Y}$, and the mean square error is given by

    $\sigma_\varepsilon^2 = E\{|\varepsilon|^2\} = E\{|Y - \hat{Y}|^2\} = E\{|Y - \varphi(X)|^2\}$.    (16-4)

Since $E_z\{z\} = E_X\left[E_z\{z \mid X\}\right]$, we can rewrite (16-4) as

    $\sigma_\varepsilon^2 = E_X\left[\, E_Y\{\, |Y - \varphi(X)|^2 \mid X \,\}\,\right]$,    (16-5)

where the inner expectation is with respect to $Y$ and the outer one is with respect to $X$. Thus

    $\sigma_\varepsilon^2 = \int_{-\infty}^{+\infty} E\{\, |Y - \varphi(X)|^2 \mid X = x \,\}\, f_X(x)\, dx$.    (16-6)

To obtain the best estimator $\varphi$, we need to minimize $\sigma_\varepsilon^2$ in (16-6) with respect to $\varphi$. In (16-6), since $f_X(x) \ge 0$ and $E\{|Y - \varphi(X)|^2 \mid X\} \ge 0$, and the variable $\varphi$ appears only in the integrand, minimization of the mean square error in (16-6) with respect to $\varphi$ is equivalent to minimization of $E\{|Y - \varphi(X)|^2 \mid X\}$ with respect to $\varphi$.

Since $X$ is fixed at some value, $\varphi(X)$ is no longer random, and hence minimization of $E\{|Y - \varphi(X)|^2 \mid X\}$ is equivalent to

    $\dfrac{\partial}{\partial \varphi}\, E\{\, |Y - \varphi(X)|^2 \mid X \,\} = 0$.    (16-7)

This gives

    $E\{\, (Y - \varphi(X)) \mid X \,\} = 0$,    (16-8)

or

    $E\{Y \mid X\} = E\{\varphi(X) \mid X\} = \varphi(X)$,    (16-9)

since $E\{\varphi(X) \mid X\} = \varphi(X)$: when $X = x$ is fixed, $\varphi(x)$ is a fixed number. Using (16-9) in (16-8), we get the desired estimator to be

    $\hat{Y} = \varphi(X) = E\{Y \mid X\}$.

Thus the conditional mean of $Y$ given $X_1, X_2, \ldots, X_n$ represents the best estimator.

PILLAI
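Theorem 16.1 can be checked numerically. The sketch below assumes a toy model that is not part of the notes: $X \sim \mathrm{Uniform}(0,1)$ and $Y = X^2 + N(0, 0.1^2)$, so the conditional mean $E\{Y \mid X\} = X^2$ is known in closed form. It estimates $E\{|Y-\varphi(X)|^2\}$ by Monte Carlo for a few candidate estimators $\varphi$ and shows that the conditional mean attains the smallest mean square error (approximately the noise variance $0.1^2 = 0.01$).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (an illustrative assumption, not from the notes):
# X ~ Uniform(0, 1), Y = X**2 + N(0, 0.1**2), so E{Y|X} = X**2.
n = 200_000
X = rng.uniform(0.0, 1.0, n)
Y = X**2 + rng.normal(0.0, 0.1, n)

def mse(phi):
    """Empirical mean square error E{|Y - phi(X)|^2} for an estimator phi."""
    return np.mean((Y - phi(X)) ** 2)

# Candidate estimators phi(.): the conditional mean and two competitors.
candidates = {
    "conditional mean E{Y|X} = x^2": lambda x: x**2,
    "identity phi(x) = x":           lambda x: x,
    "constant phi(x) = E{Y}":        lambda x: np.full_like(x, Y.mean()),
}

for name, phi in candidates.items():
    print(f"{name:32s} MSE = {mse(phi):.4f}")

# The conditional mean should win, with MSE near the noise floor 0.01.
errors = {name: mse(phi) for name, phi in candidates.items()}
assert min(errors, key=errors.get) == "conditional mean E{Y|X} = x^2"
```

Any other choice of $\varphi$ only adds a nonnegative term $E\{|\varphi(X) - E\{Y \mid X\}|^2\}$ on top of this noise floor, which is exactly what the proof of (16-7)-(16-9) establishes.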
This note was uploaded on 12/07/2010 for the course ECE 313 taught by Professor No during the Fall '10 term at City University of Seattle.