
# Chapter 13: Mean Square Estimation


Teacher: Sin-Horng Chen
Office: Engineering Bld. #4, Room 805
Tel: ext. 31822
Email: [email protected]

## 13.1 Introduction

We want to estimate $\mathbf{S}(t)$ at a specific time $t$ in terms of $\mathbf{X}(\xi)$ for $a \le \xi \le b$. Here we assume that both $\mathbf{X}(t)$ and $\mathbf{S}(t)$ are WSS. The general form of linear estimation can be expressed by

$$\hat{\mathbf{S}}(t) = \hat{E}\{\mathbf{S}(t) \mid \mathbf{X}(\xi),\ a \le \xi \le b\} = \int_a^b h(\alpha)\,\mathbf{X}(\alpha)\,d\alpha \tag{13-1}$$

The objective is to minimize the mean square (MS) error defined below:

$$P = E\left\{\left|\mathbf{S}(t) - \hat{\mathbf{S}}(t)\right|^2\right\} = E\left\{\left|\mathbf{S}(t) - \int_a^b h(\alpha)\,\mathbf{X}(\alpha)\,d\alpha\right|^2\right\} \tag{13-2}$$

Based on the extension of the *orthogonality principle*, we have

$$E\left\{\left[\mathbf{S}(t) - \int_a^b h(\alpha)\,\mathbf{X}(\alpha)\,d\alpha\right]\mathbf{X}(\xi)\right\} = 0, \quad a \le \xi \le b \tag{13-3}$$

$$\Rightarrow\quad R_{SX}(t,\xi) = \int_a^b h(\alpha)\,R_{XX}(\alpha,\xi)\,d\alpha, \quad a \le \xi \le b \tag{13-4}$$

In this case, the prediction error is given by

$$P = E\left\{\left[\mathbf{S}(t) - \int_a^b h(\alpha)\,\mathbf{X}(\alpha)\,d\alpha\right]\mathbf{S}(t)\right\} = R_{SS}(0) - \int_a^b h(\alpha)\,R_{SX}(t,\alpha)\,d\alpha$$

In the following discussions, we assume the RPs are real and WSS. We consider several cases:

1. $t \in (a, b)$. In this case, $\hat{\mathbf{S}}(t)$ is called *smoothing*.
2. $t \notin (a, b)$ and $\mathbf{X}(t) = \mathbf{S}(t)$ (i.e., no noise). This is a prediction case. For $t > b$, $\hat{\mathbf{S}}(t)$ is a forward predictor, while for $t < a$ it is a backward predictor.
3. $t \notin (a, b)$ and $\mathbf{X}(t) \ne \mathbf{S}(t)$. This is a filtering and prediction case.

## Simple Illustrations

**Prediction of $\mathbf{S}(t+\lambda)$ in terms of $\mathbf{S}(t)$.** The linear predictor is expressed by

$$\hat{\mathbf{S}}(t+\lambda) = \hat{E}\{\mathbf{S}(t+\lambda) \mid \mathbf{S}(t)\} = a\,\mathbf{S}(t).$$

From $E\{[\mathbf{S}(t+\lambda) - a\,\mathbf{S}(t)]\,\mathbf{S}(t)\} = 0$, we obtain $a = R(\lambda)/R(0)$. The prediction error is

$$P = E\{[\mathbf{S}(t+\lambda) - a\,\mathbf{S}(t)]\,\mathbf{S}(t+\lambda)\} = R(0) - a\,R(\lambda) = R(0) - \frac{R^2(\lambda)}{R(0)}.$$

**Special case 1.** If $R(\tau) = A e^{-\alpha|\tau|}$, then $a = e^{-\alpha\lambda}$.
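The one-sample predictor above can be checked numerically. The sketch below (the values of $A$, $\alpha$, and $\lambda$ are illustrative choices, not from the notes) evaluates $a = R(\lambda)/R(0)$ and the MS error for the exponential autocorrelation, and verifies the orthogonality of the prediction error to every past sample:

```python
import math

def R(tau, A=2.0, alpha=0.5):
    """Exponential autocorrelation R(tau) = A * exp(-alpha * |tau|)."""
    return A * math.exp(-alpha * abs(tau))

A, alpha, lam = 2.0, 0.5, 1.3  # illustrative parameters

# Predictor coefficient a = R(lam)/R(0); for this R it reduces to exp(-alpha*lam).
a = R(lam) / R(0)
assert math.isclose(a, math.exp(-alpha * lam))

# MS prediction error P = R(0) - R(lam)^2 / R(0).
P = R(0) - R(lam) ** 2 / R(0)

# Orthogonality: the error S(t+lam) - a*S(t) is orthogonal to every past
# sample S(t - xi), xi >= 0, i.e. R(lam + xi) - a*R(xi) == 0.
# This is the wide-sense Markov (order 1) property of the exponential R.
for xi in [0.0, 0.25, 1.0, 4.0]:
    assert abs(R(lam + xi) - a * R(xi)) < 1e-12

print(f"a = {a:.6f}, P = {P:.6f}")
```

Because the residual is already orthogonal to the entire past, adding older samples $\mathbf{S}(t-\xi)$ to the predictor cannot reduce the error, which is exactly the equivalence claimed in the text.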
In this case,

$$E\{[\mathbf{S}(t+\lambda) - a\,\mathbf{S}(t)]\,\mathbf{S}(t-\xi)\} = R(\lambda+\xi) - a\,R(\xi) = A e^{-\alpha(\lambda+\xi)} - e^{-\alpha\lambda}\,A e^{-\alpha\xi} = 0.$$

The prediction error of $\mathbf{S}(t+\lambda)$ is orthogonal to $\mathbf{S}(t-\xi)$ for $\xi \ge 0$. So the prediction of $\mathbf{S}(t+\lambda)$ using $\mathbf{S}(t-\xi)$ for $\xi \ge 0$ is equivalent to that of using $\mathbf{S}(t)$ only. The process is called *wide-sense Markov of order 1*.

**Estimate $\mathbf{S}(t+\lambda)$ using $\mathbf{S}(t)$ and $\mathbf{S}'(t)$.** The estimator is

$$\hat{\mathbf{S}}(t+\lambda) = a_1\,\mathbf{S}(t) + a_2\,\mathbf{S}'(t).$$

From $\hat{\mathbf{S}}(t+\lambda) - \mathbf{S}(t+\lambda) \perp \mathbf{S}(t), \mathbf{S}'(t)$, we have

$$R(\lambda) - a_1 R(0) - a_2 R_{S'S}(0) = 0$$

$$R_{SS'}(\lambda) - a_1 R_{SS'}(0) - a_2 R_{S'S'}(0) = 0$$

Since $R'(0) = 0$, $R_{SS'}(\tau) = -R'(\tau)$, and $R_{S'S'}(\tau) = -R''(\tau)$,

$$\therefore\quad a_1 = \frac{R(\lambda)}{R(0)} \quad\text{and}\quad a_2 = \frac{R'(\lambda)}{R''(0)},$$

and

$$P = E\{[\mathbf{S}(t+\lambda) - a_1\mathbf{S}(t) - a_2\mathbf{S}'(t)]\,\mathbf{S}(t+\lambda)\} = R(0) - a_1 R(\lambda) + a_2 R'(\lambda).$$

If $\lambda$ is small, then $R(\lambda) \approx R(0)$ ...
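The two-term predictor can likewise be checked numerically. The sketch below uses the Gaussian autocorrelation $R(\tau) = e^{-\tau^2}$ as an illustrative choice (not from the notes; it is twice differentiable, as the derivation requires), verifies both orthogonality conditions, and compares the error against the one-term predictor:

```python
import math

# Gaussian autocorrelation and its first two derivatives (illustrative choice).
def R(tau):   return math.exp(-tau * tau)
def dR(tau):  return -2.0 * tau * math.exp(-tau * tau)                # R'(tau)
def ddR(tau): return (4.0 * tau * tau - 2.0) * math.exp(-tau * tau)  # R''(tau)

lam = 0.1  # small illustrative lead time

# Coefficients from the orthogonality conditions (using R'(0) = 0):
a1 = R(lam) / R(0)     # a1 = R(lam)/R(0)
a2 = dR(lam) / ddR(0)  # a2 = R'(lam)/R''(0)

# Orthogonality of the error to S(t):  R(lam) - a1*R(0) - a2*R_{S'S}(0),
# where R_{S'S}(0) = R'(0) = 0.
assert abs(R(lam) - a1 * R(0)) < 1e-12
# Orthogonality to S'(t):  R_{SS'}(lam) - a2*R_{S'S'}(0) = -R'(lam) + a2*R''(0).
assert abs(-dR(lam) + a2 * ddR(0)) < 1e-12

# MS error of the two-term predictor vs. the one-term predictor:
P2 = R(0) - a1 * R(lam) + a2 * dR(lam)   # uses S(t) and S'(t)
P1 = R(0) - R(lam) ** 2 / R(0)           # uses S(t) only
assert 0.0 <= P2 < P1  # the derivative term strictly reduces the error here

print(f"P1 = {P1:.6e}, P2 = {P2:.6e}")
```

For this $R$, $P_1 = 1 - e^{-2\lambda^2} = O(\lambda^2)$ while $P_2 = 1 - (1 + 2\lambda^2)e^{-2\lambda^2} = O(\lambda^4)$, which illustrates why adding $\mathbf{S}'(t)$ pays off for small $\lambda$.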

## This note was uploaded on 07/21/2009 for the course CM EM5102 taught by Professor Sin-Horng Chen during the Fall '08 term at National Chiao Tung University.
