# Estimators


[Figure 8.4: Asymptotic behavior of the MAP and LMS estimators, and the corresponding conditional mean squared errors, for fixed x = 0.5 and n → ∞, in Problem 8.12.]

We have

$$E[X] = E[\Theta] + E[W], \qquad \sigma_X^2 = \sigma_\Theta^2 + \sigma_W^2,$$

$$\mathrm{cov}(\Theta, X) = E\big[(\Theta - E[\Theta])(X - E[X])\big] = E\big[(\Theta - E[\Theta])^2\big] = \sigma_\Theta^2,$$

where the last relation follows from the independence of $\Theta$ and $W$. Using the formulas for the mean and variance of the uniform PDF, we have

$$E[\Theta] = 7, \quad \sigma_\Theta^2 = 3, \qquad E[W] = 0, \quad \sigma_W^2 = 1/3.$$

Thus, the linear LMS estimator is

$$\hat\Theta = 7 + \frac{3}{3 + 1/3}\,(X - 7), \qquad \text{or} \qquad \hat\Theta = 7 + \frac{9}{10}\,(X - 7).$$

The mean squared error is $(1 - \rho^2)\sigma_\Theta^2$. We have

$$\rho^2 = \left(\frac{\mathrm{cov}(\Theta, X)}{\sigma_\Theta \sigma_X}\right)^2 = \left(\frac{\sigma_\Theta^2}{\sigma_\Theta \sigma_X}\right)^2 = \frac{\sigma_\Theta^2}{\sigma_X^2} = \frac{3}{3 + 1/3} = \frac{9}{10}.$$
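These numbers can be checked with a short script. This is a minimal sketch using Python's exact `fractions` arithmetic, taking the moments $E[\Theta] = 7$, $\sigma_\Theta^2 = 3$, $E[W] = 0$, $\sigma_W^2 = 1/3$ as given in the solution:

```python
from fractions import Fraction

# Given moments from the solution: E[Theta] = 7, var(Theta) = 3,
# E[W] = 0, var(W) = 1/3, with Theta and W independent and X = Theta + W.
var_theta = Fraction(3)
var_w = Fraction(1, 3)

var_x = var_theta + var_w       # sigma_X^2 = 3 + 1/3 = 10/3
slope = var_theta / var_x       # coefficient of (X - 7) in the linear LMS estimator
rho_sq = var_theta / var_x      # rho^2 = sigma_Theta^2 / sigma_X^2
mse = (1 - rho_sq) * var_theta  # (1 - rho^2) * sigma_Theta^2

print(slope, rho_sq, mse)  # 9/10 9/10 3/10
```

The exact fractions confirm the slope $9/10$, $\rho^2 = 9/10$, and mean squared error $3/10$ derived above.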

Hence the mean squared error is

$$(1 - \rho^2)\sigma_\Theta^2 = \left(1 - \frac{9}{10}\right)\cdot 3 = \frac{3}{10}.$$

**Solution to Problem 8.15.** The conditional mean squared error of the MAP estimator $\hat\Theta = X$ is

$$E\big[(\hat\Theta - \Theta)^2 \mid X = x\big] = E\big[\hat\Theta^2 - 2\hat\Theta\Theta + \Theta^2 \mid X = x\big] = x^2 - 2x\,E[\Theta \mid X = x] + E[\Theta^2 \mid X = x]$$

$$= x^2 - 2x\,\frac{101 - x}{\displaystyle\sum_{i=x}^{100} \frac{1}{i}} + \frac{\displaystyle\sum_{i=x}^{100} i}{\displaystyle\sum_{i=x}^{100} \frac{1}{i}}.$$

The conditional mean squared error of the LMS estimator

$$\hat\Theta = \frac{101 - X}{\displaystyle\sum_{i=X}^{100} \frac{1}{i}}$$

is

$$E[(\hat\Theta - \Theta)^2 \mid X = x] = E[\hat\Theta^2 - 2\hat\Theta\Theta + \Theta^2 \mid X = x] = \hat\Theta^2 - 2\hat\Theta\,E[\Theta \mid X = x] + E[\Theta^2 \mid X = x]$$

$$= -\frac{(101 - x)^2}{\left(\displaystyle\sum_{i=x}^{100} \frac{1}{i}\right)^2} + \frac{\displaystyle\sum_{i=x}^{100} i}{\displaystyle\sum_{i=x}^{100} \frac{1}{i}}.$$

To obtain the linear LMS estimator, we compute the expectation and variance of $X$. We have

$$E[X] = E\big[E[X \mid \Theta]\big] = E\left[\frac{\Theta + 1}{2}\right] = \frac{(101/2) + 1}{2} = 25.75,$$

and

$$\mathrm{var}(X) = E[X^2] - \big(E[X]\big)^2 = \frac{1}{100}\sum_{x=1}^{100} x^2 \left(\sum_{\theta=x}^{100} \frac{1}{\theta}\right) - (25.75)^2 = 490.19.$$
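As a check on the arithmetic, the distribution of $X$ can be enumerated directly. This is a sketch assuming the model implied by the formulas above: $\Theta$ uniform on $\{1, \ldots, 100\}$ and $X$ given $\Theta$ uniform on $\{1, \ldots, \Theta\}$:

```python
from fractions import Fraction

N = 100
# p_X(x) = sum_{theta=x}^{N} (1/N)(1/theta): Theta uniform on {1..N},
# X given Theta uniform on {1..Theta}.
p_x = {x: sum(Fraction(1, N * t) for t in range(x, N + 1))
       for x in range(1, N + 1)}

mean_x = sum(x * p for x, p in p_x.items())
var_x = sum(x * x * p for x, p in p_x.items()) - mean_x ** 2

print(float(mean_x), float(var_x))  # 25.75 490.1875
```

The exact variance is $490.1875$, which the solution rounds to $490.19$.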
The covariance of $\Theta$ and $X$ is

$$\mathrm{cov}(\Theta, X) = E\big[(X - E[X])(\Theta - E[\Theta])\big] = \sum_{\theta=1}^{100} \frac{1}{100} \sum_{x=1}^{\theta} \frac{1}{\theta}\,(x - 25.75)(\theta - 50.5) = 416.63.$$

Applying the linear LMS formula yields

$$\hat\Theta = E[\Theta] + \frac{\mathrm{cov}(\Theta, X)}{\mathrm{var}(X)}\big(X - E[X]\big) = 50.5 + \frac{416.63}{490.19}\,(X - 25.75) = 0.85X + 28.61.$$

The mean squared error of the linear LMS estimator is

$$E\big[(\hat\Theta - \Theta)^2 \mid X = x\big] = E\big[\hat\Theta^2 - 2\hat\Theta\Theta + \Theta^2 \mid X = x\big] = \hat\Theta^2 - 2\hat\Theta\,E[\Theta \mid X = x] + E[\Theta^2 \mid X = x]$$

$$= (0.85x + 28.61)^2 - 2(0.85x + 28.61)\,\frac{101 - x}{\displaystyle\sum_{i=x}^{100} \frac{1}{i}} + \frac{\displaystyle\sum_{i=x}^{100} i}{\displaystyle\sum_{i=x}^{100} \frac{1}{i}}.$$

Figure 8.5 plots the conditional mean squared error of the MAP, LMS, and linear LMS estimators as a function of $x$. Note that the conditional mean squared error is lowest for the LMS estimator, but the linear LMS estimator comes very close.
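The covariance sum and the resulting linear LMS coefficients can also be verified by enumeration. This sketch assumes the same model as above ($\Theta$ uniform on $\{1, \ldots, 100\}$, $X$ given $\Theta$ uniform on $\{1, \ldots, \Theta\}$), so that $E[\Theta] = 101/2$:

```python
from fractions import Fraction

N = 100
mean_theta = Fraction(N + 1, 2)  # E[Theta] = 50.5
mean_x = (mean_theta + 1) / 2    # E[X] = E[(Theta + 1)/2] = 25.75

# cov(Theta, X) = sum_theta (1/N) sum_x (1/theta) (x - E[X]) (theta - E[Theta])
cov = sum(Fraction(1, N * t) * (x - mean_x) * (t - mean_theta)
          for t in range(1, N + 1) for x in range(1, t + 1))

# var(X) from the same joint enumeration
var_x = sum(Fraction(1, N * t) * x * x
            for t in range(1, N + 1) for x in range(1, t + 1)) - mean_x ** 2

slope = cov / var_x                      # coefficient of X in the linear LMS estimator
intercept = mean_theta - slope * mean_x  # constant term
print(float(cov), round(float(slope), 2), round(float(intercept), 2))
```

The exact covariance is $416.625 \approx 416.63$, and the coefficients round to the $0.85X + 28.61$ form of the estimator.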


