# Chapter 13 Mean Square Estimation


## 13.1 Introduction

We want to estimate $S(t)$ at a specific time $t$ in terms of $X(\xi)$ for $a \le \xi \le b$. Here we assume that both $X(t)$ and $S(t)$ are WSS. The general form of the linear estimator is

$$\hat S(t) = \hat E\{S(t) \mid X(\xi),\ a \le \xi \le b\} = \int_a^b h(\alpha)\,X(\alpha)\,d\alpha \tag{13-1}$$

The objective is to minimize the mean square (MS) error defined by

$$P = E\{[S(t) - \hat S(t)]^2\} = E\left\{\left[S(t) - \int_a^b h(\alpha)X(\alpha)\,d\alpha\right]^2\right\} \tag{13-2}$$

Based on the extension of the orthogonality principle, the error must be orthogonal to every data value:

$$E\left\{\left[S(t) - \int_a^b h(\alpha)X(\alpha)\,d\alpha\right]X(\xi)\right\} = 0, \qquad a \le \xi \le b \tag{13-3}$$

hence

$$R_{SX}(t,\xi) = \int_a^b h(\alpha)\,R_{XX}(\alpha,\xi)\,d\alpha, \qquad a \le \xi \le b \tag{13-4}$$

In this case, the estimation error is given by

$$P = E\left\{\left[S(t) - \int_a^b h(\alpha)X(\alpha)\,d\alpha\right]S(t)\right\} = R_{SS}(0) - \int_a^b h(\alpha)\,R_{SX}(t,\alpha)\,d\alpha$$

In the following discussions, we assume the random processes are real and WSS. We consider several cases:

(1) $t \in (a,b)$. In this case, $\hat S(t)$ is called *smoothing*.
(2) $t \notin (a,b)$ and $X(t) = S(t)$ (i.e., no noise). This is a prediction case: for $t > b$, $\hat S(t)$ is a forward predictor, while for $t < a$, $\hat S(t)$ is a backward predictor.
(3) $t \notin (a,b)$ and $X(t) \ne S(t)$. This is a filtering and prediction case.
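A discrete analog of (13-3)–(13-4) can be sketched numerically: with finitely many noisy samples, the integral equation becomes a linear system for the weights $h_k$. Everything concrete below (the exponential covariance, noise variance, sample grid, and estimation time) is an illustrative assumption, not part of the text:

```python
import numpy as np

# Discrete analog of (13-3)-(13-4): estimate S(t) from finitely many noisy
# samples X(xi_j) = S(xi_j) + N_j on [a, b].  Hypothetical model for
# illustration: R_SS(tau) = exp(-|tau|), noise N_j white with variance sigma2.
a, b, t = 0.0, 1.0, 1.5          # observe on [0, 1], estimate at t = 1.5
M, sigma2 = 50, 0.1
xi = np.linspace(a, b, M)

R_SS = lambda tau: np.exp(-np.abs(tau))

# Orthogonality conditions: sum_k h_k R_XX(xi_k, xi_j) = R_SX(t, xi_j)
R_XX = R_SS(xi[:, None] - xi[None, :]) + sigma2 * np.eye(M)
R_SX = R_SS(t - xi)
h = np.linalg.solve(R_XX, R_SX)

# MS error, discrete form of P = R_SS(0) - integral of h * R_SX
P = R_SS(0.0) - h @ R_SX
print(P)
```

Since $t = 1.5$ lies outside the observation interval and the samples are noisy, this sketch corresponds to case (3), filtering and prediction.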

## Some Simple Illustrations

**Prediction of $S(t+\lambda)$ in terms of $S(t)$.** The linear predictor is

$$\hat S(t+\lambda) = \hat E\{S(t+\lambda) \mid S(t)\} = a\,S(t).$$

From $E\{[S(t+\lambda) - aS(t)]\,S(t)\} = 0$, we obtain

$$a = \frac{R(\lambda)}{R(0)}.$$

The prediction error is

$$P = E\{[S(t+\lambda) - aS(t)]\,S(t+\lambda)\} = R(0) - aR(\lambda) = R(0) - \frac{R^2(\lambda)}{R(0)}.$$

**Special case.** If $R(\tau) = A e^{-\alpha|\tau|}$, then $a = e^{-\alpha\lambda}$. In this case, for every $\xi \ge 0$,

$$E\{[S(t+\lambda) - aS(t)]\,S(t-\xi)\} = R(\lambda+\xi) - aR(\xi) = A e^{-\alpha(\lambda+\xi)} - e^{-\alpha\lambda}\,A e^{-\alpha\xi} = 0.$$

That is, the prediction error of $S(t+\lambda)$ is orthogonal to $S(t-\xi)$ for all $\xi \ge 0$. So predicting $S(t+\lambda)$ from the entire past $\{S(t-\xi),\ \xi \ge 0\}$ is equivalent to predicting it from $S(t)$ alone. Such a process is called *wide-sense Markov of order 1*.

**Estimation of $S(t+\lambda)$ using $S(t)$ and $S'(t)$.** The estimator is

$$\hat S(t+\lambda) = a_1 S(t) + a_2 S'(t).$$

Since the error $S(t+\lambda) - \hat S(t+\lambda)$ must be orthogonal to both $S(t)$ and $S'(t)$, we have

$$R(\lambda) - a_1 R(0) - a_2 R_{S'S}(0) = 0$$
$$R_{SS'}(\lambda) - a_1 R_{SS'}(0) - a_2 R_{S'S'}(0) = 0$$

Since $R'(0) = 0$, $R_{SS'}(\tau) = -R'(\tau)$, and $R_{S'S'}(\tau) = -R''(\tau)$, these give

$$a_1 = \frac{R(\lambda)}{R(0)} \qquad \text{and} \qquad a_2 = \frac{R'(\lambda)}{R''(0)}$$
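The wide-sense Markov property above is easy to verify numerically. A minimal sketch, where the values of $A$, $\alpha$, and $\lambda$ are arbitrary assumptions:

```python
import numpy as np

# For R(tau) = A*exp(-alpha*|tau|), the one-point predictor of S(t+lam) uses
# a = R(lam)/R(0) = exp(-alpha*lam).  The prediction error is then orthogonal
# to every past sample:  R(lam + xi) - a*R(xi) = 0 for all xi >= 0.
A, alpha, lam = 2.0, 0.7, 0.3
R = lambda tau: A * np.exp(-alpha * np.abs(tau))

a = R(lam) / R(0.0)                   # equals exp(-alpha*lam)
xi = np.linspace(0.0, 5.0, 101)
cross = R(lam + xi) - a * R(xi)       # error-vs-past correlations
P = R(0.0) - R(lam) ** 2 / R(0.0)     # prediction error
print(np.max(np.abs(cross)), P)
```

The correlations `cross` vanish (up to floating-point error) for every lag, confirming that adding older samples cannot improve the predictor.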
and the corresponding MS error is

$$P = E\{[S(t+\lambda) - a_1 S(t) - a_2 S'(t)]\,S(t+\lambda)\} = R(0) - a_1 R(\lambda) + a_2 R'(\lambda).$$

If $\lambda$ is small, then $R(\lambda) \approx R(0)$ and $R'(\lambda) \approx R'(0) + \lambda R''(0) = \lambda R''(0)$, so that $a_1 \approx 1$ and $a_2 \approx \lambda$, giving

$$\hat S(t+\lambda) \approx S(t) + \lambda S'(t),$$

which is the first-order Taylor approximation of $S(t+\lambda)$.

**Filtering.** $\hat S(t) = \hat E\{S(t) \mid X(t)\} = a\,X(t)$. From $E\{[S(t) - aX(t)]\,X(t)\} = 0$ we get

$$a = \frac{R_{SX}(0)}{R_{XX}(0)},$$

and the MS error is

$$P = E\{[S(t) - aX(t)]\,S(t)\} = R_{SS}(0) - aR_{SX}(0).$$

**Interpolation.** As in Fig. 13.1, we wish to estimate $S(t+\lambda)$, with $0 < \lambda < T$, in terms of the $2N+1$ samples $S(t+kT)$ for $-N \le k \le N$. The estimator is

$$\hat S(t+\lambda) = \sum_{k=-N}^{N} a_k S(t+kT), \qquad 0 < \lambda < T.$$

The orthogonality conditions

$$E\Big\{\Big[S(t+\lambda) - \sum_{k=-N}^{N} a_k S(t+kT)\Big]\,S(t+nT)\Big\} = 0, \qquad |n| \le N,$$

yield

$$\sum_{k=-N}^{N} a_k R(kT - nT) = R(\lambda - nT), \qquad |n| \le N.$$

This is a system of $2N+1$ equations, which we can solve for the $2N+1$ unknowns $a_k$. The MS error is

$$P = E\Big\{\Big[S(t+\lambda) - \sum_{k=-N}^{N} a_k S(t+kT)\Big]\,S(t+\lambda)\Big\} = R(0) - \sum_{k=-N}^{N} a_k R(kT - \lambda).$$

The estimation error $\varepsilon_N(t) = S(t+\lambda) - \hat S(t+\lambda)$ can be regarded as the output of
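The interpolation system can be set up and solved directly. A sketch with an assumed Gaussian-shaped covariance $R(\tau) = e^{-\tau^2}$ and assumed values of $N$, $T$, $\lambda$ (any valid covariance would do):

```python
import numpy as np

# Interpolator  S^(t+lam) = sum_{k=-N..N} a_k S(t+kT)  with 0 < lam < T.
# Orthogonality gives the (2N+1)x(2N+1) linear system
#     sum_k a_k R(kT - nT) = R(lam - nT),   |n| <= N.
N, T, lam = 3, 1.0, 0.4
R = lambda tau: np.exp(-np.asarray(tau, dtype=float) ** 2)  # assumed covariance

k = np.arange(-N, N + 1)
Rmat = R((k[:, None] - k[None, :]) * T)   # entries R(kT - nT); symmetric since R is even
rhs = R(lam - k * T)                      # right-hand sides R(lam - nT)
a = np.linalg.solve(Rmat, rhs)            # interpolation weights a_k

# MS error: P = R(0) - sum_k a_k R(kT - lam)
P = R(0.0) - a @ R(k * T - lam)
print(a, P)
```

Because the joint covariance of $S(t+\lambda)$ and the samples is positive semidefinite, $P$ always lands between $0$ and $R(0)$.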

the filter

$$E_N(\omega) = e^{j\omega\lambda} - \sum_{k=-N}^{N} a_k e^{jk\omega T}$$

with input $S(t)$. Hence, we can express $P$ in terms of the power spectrum $S(\omega)$ of $S(t)$:

$$P = E\{\varepsilon_N^2(t)\} = \frac{1}{2\pi}\int_{-\infty}^{\infty} S(\omega)\,\Big|e^{j\omega\lambda} - \sum_{k=-N}^{N} a_k e^{jk\omega T}\Big|^2\,d\omega \tag{13-11}$$
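Formula (13-11) can be checked against the time-domain expression for $P$. The sketch below assumes the standard Fourier pair $R(\tau) = e^{-|\tau|} \leftrightarrow S(\omega) = 2/(1+\omega^2)$; the frequency integral is truncated and discretized, so only approximate agreement is expected:

```python
import numpy as np

# Compare  P = R(0) - sum_k a_k R(kT - lam)  with (13-11),
#   P = (1/2pi) * Int S(w) |e^{jw lam} - sum_k a_k e^{jkwT}|^2 dw,
# for the assumed pair R(tau) = exp(-|tau|)  <->  S(w) = 2/(1 + w^2).
N, T, lam = 2, 1.0, 0.4
R = lambda tau: np.exp(-np.abs(tau))
k = np.arange(-N, N + 1)

# Solve the interpolation system for the weights a_k
a = np.linalg.solve(R((k[:, None] - k[None, :]) * T), R(lam - k * T))
P_time = R(0.0) - a @ R(k * T - lam)

# Numerical frequency-domain integral, truncated to |w| <= 2000
dw = 0.001
w = np.arange(-2000.0, 2000.0, dw)
E = np.exp(1j * w * lam)              # error filter E_N(w)
for a_k, k_k in zip(a, k):
    E -= a_k * np.exp(1j * w * k_k * T)
S = 2.0 / (1.0 + w ** 2)
P_freq = np.sum(S * np.abs(E) ** 2) * dw / (2 * np.pi)
print(P_time, P_freq)
```

The two values agree to within the truncation and discretization error of the integral.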
