EE 562a Homework Solutions 7, 16 April 2007

1. (a) The biggest step in solving this problem is to formulate it in terms of what we have previously learned. Since we only observe $x(u,0)$ and $x(u,1)$, it makes sense to define an observation vector

$$x(u) = \begin{bmatrix} x(u,0) \\ x(u,1) \end{bmatrix}.$$

Then we can formulate the hypotheses in terms of the available observations:

$$H_0:\; x(u) = s_0 + w(u), \qquad H_1:\; x(u) = s_1 + w(u),$$

where

$$w(u) = \begin{bmatrix} w(u,0) \\ w(u,1) \end{bmatrix}, \qquad s_i = \sqrt{P}\begin{bmatrix} \cos(\pi i) \\ \cos(\pi(i+1)) \end{bmatrix} = (-1)^i \sqrt{P} \begin{bmatrix} 1 \\ -1 \end{bmatrix}.$$

The PSD of $w(u,n)$ provides a second-moment description of $w(u,n)$. Since there is no Dirac delta at $\nu = 0$, the mean of $w(u,n)$ is zero, so

$$K_w(m) = R_w(m) = \mathcal{F}^{-1}\{S_w(\nu)\} = \sigma^2 \rho^{|m|},$$

where the inversion can be carried out by any of several methods. It follows that

$$m_w = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad K_w = \begin{bmatrix} K_w(0) & K_w(-1) \\ K_w(1) & K_w(0) \end{bmatrix} = \sigma^2 \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}.$$

Since $|\rho| < 1$, $K_w$ is nonsingular. The minimum-distance decision rule is then

$$x^t(u) K_w^{-1} s_0 \;\underset{H_1}{\overset{H_0}{\gtrless}}\; 0.$$

Because $K_w$ is circulant, we know its eigenvectors and eigenvalues. In particular, $s_0$ is an eigenvector of $K_w$ (in fact $s_0 = \sqrt{2P}\, e_1$, where $e_1$ is a standard eigenvector of a $2 \times 2$ circulant matrix). The corresponding eigenvalue is $\lambda_w(1) = \sigma^2(1-\rho)$. It follows that

$$K_w^{-1} s_0 = \frac{1}{\sigma^2(1-\rho)}\, s_0,$$

so the decision rule is independent of $\rho$:

$$\frac{1}{\sigma^2(1-\rho)}\, x^t(u)\, s_0 \;\underset{H_1}{\overset{H_0}{\gtrless}}\; 0 \iff x(u,0) \;\underset{H_1}{\overset{H_0}{\gtrless}}\; x(u,1).$$

(b) The bound on the error probability is

$$P(\mathcal{E}) \le \frac{4}{(s_0 - s_1)^t K_w^{-1} (s_0 - s_1)} = \left( \frac{2P}{\sigma^2(1-\rho)} \right)^{-1}.$$
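The eigenvector claim and the resulting simplification of the decision rule in part (a) can be checked numerically. The sketch below uses example values $\sigma = 1$, $\rho = 0.5$, $P = 2$, which are not given in the problem and are chosen only for illustration.

```python
# Numerical check of part (a), with assumed example values
# sigma = 1.0, rho = 0.5, P = 2.0 (not specified in the problem).
import numpy as np

sigma, rho, P = 1.0, 0.5, 2.0
s0 = np.sqrt(P) * np.array([1.0, -1.0])   # s_0 = sqrt(P) [1, -1]^t
Kw = sigma**2 * np.array([[1.0, rho],
                          [rho, 1.0]])    # covariance matrix of w(u)

# s_0 is an eigenvector of K_w with eigenvalue sigma^2 (1 - rho):
print(np.allclose(Kw @ s0, sigma**2 * (1.0 - rho) * s0))          # True

# Hence K_w^{-1} s_0 = s_0 / (sigma^2 (1 - rho)):
print(np.allclose(np.linalg.solve(Kw, s0),
                  s0 / (sigma**2 * (1.0 - rho))))                 # True

# The statistic x^t K_w^{-1} s_0 has the same sign as x(u,0) - x(u,1),
# so the rule reduces to comparing the two observations directly:
x = np.array([0.3, -0.7])                 # arbitrary observation
stat = x @ np.linalg.solve(Kw, s0)
print(np.sign(stat) == np.sign(x[0] - x[1]))                      # True
```

Trying other values of $\rho$ in $(-1, 1)$ leaves the sign comparison unchanged, which is exactly the $\rho$-independence noted in the solution.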
(c) Since we have a complete statistical description of the noise, we can evaluate the probability of error exactly. First consider the case when $H_1$ is true:

$$P(\mathcal{E} \mid H_1) = P\{x^t(u) s_0 > 0 \mid x(u) = s_1 + w(u)\} = P\{(s_1 + w(u))^t s_0 > 0\} = P\{w^t(u) s_0 > -s_1^t s_0\} = P\{\eta(u) > \|s_0\|^2\},$$

where $\eta(u) = w^t(u) s_0$ is a Gaussian random variable with

$$E\{\eta(u)\} = 0, \qquad \mathrm{Var}\{\eta(u)\} = s_0^t K_w s_0 = \sigma^2(1-\rho)\|s_0\|^2.$$

If we define $v(u) = \eta(u) / (\sigma\sqrt{1-\rho}\,\|s_0\|)$, then we have

$$P(\mathcal{E} \mid H_1) = P\left\{ v(u) > \frac{\|s_0\|}{\sigma\sqrt{1-\rho}} \right\} = Q\left( \frac{\|s_0\|}{\sigma\sqrt{1-\rho}} \right) = Q\left( \sqrt{\frac{2P}{\sigma^2(1-\rho)}} \right).$$

You can verify that $P(\mathcal{E} \mid H_0) = P(\mathcal{E} \mid H_1)$ (you can even argue this by symmetry), so the unconditional probability of error is

$$P(\mathcal{E}) = Q\left( \sqrt{\frac{2P}{\sigma^2(1-\rho)}} \right).$$

(d) From its definition it is clear that the $Q$-function is strictly monotonically decreasing (i.e., $x > z \iff Q(x) < Q(z)$). So the condition on the argument to obtain $P(\mathcal{E}) \le 10^{-3}$ is

$$\sqrt{\frac{2P}{\sigma^2(1-\rho)}} \ge 3.$$

When we specialize to white noise (i.e., $\rho = 0$) we have

$$\frac{2P}{\sigma^2} \ge 9 \implies \frac{P}{\sigma^2} \ge \frac{9}{2}.$$

Notice that if we instead used the bound on the error probability from part (b), we would require $(2P/\sigma^2)^{-1} \le 10^{-3}$, i.e., $P/\sigma^2 \ge 500$, a far more conservative requirement than the exact analysis gives.
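The closed-form error probability in part (c) can be sanity-checked by simulation. This sketch again assumes the example values $\sigma = 1$, $\rho = 0.5$, $P = 2$ (not from the problem statement): it draws correlated Gaussian noise with covariance $K_w$, applies the decision rule $x(u,0) \gtrless x(u,1)$ under $H_1$, and compares the empirical error rate to $Q\!\left(\sqrt{2P/(\sigma^2(1-\rho))}\right)$.

```python
# Monte Carlo check of part (c), with assumed example values
# sigma = 1.0, rho = 0.5, P = 2.0 (not specified in the problem).
import numpy as np
from math import erfc, sqrt

def Q(x):
    # Gaussian tail probability Q(x) = P{ N(0,1) > x }
    return 0.5 * erfc(x / sqrt(2.0))

sigma, rho, P = 1.0, 0.5, 2.0
rng = np.random.default_rng(0)
n = 200_000

s0 = sqrt(P) * np.array([1.0, -1.0])
Kw = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
L = np.linalg.cholesky(Kw)                # so that L @ L.T = K_w

# Simulate under H1: x = s_1 + w = -s_0 + w; an error occurs when the
# rule decides H0, i.e. when x(u,0) > x(u,1).
w = rng.standard_normal((n, 2)) @ L.T     # noise with covariance K_w
x = -s0 + w
empirical = np.mean(x[:, 0] > x[:, 1])

exact = Q(sqrt(2.0 * P / (sigma**2 * (1.0 - rho))))
print(empirical, exact)
```

With these values the exact probability is $Q(\sqrt{8}) \approx 2.3 \times 10^{-3}$, and the empirical estimate should agree to within a few parts in $10^4$ at this sample size.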

Course: EE 562a, taught by Professor Todd Brun, Spring 2007, USC (uploaded 05/06/2008).
