soln7: EE 562a Homework Solutions 7, 16 April 2007


1. (a) The biggest step in solving this problem is to formulate it in terms of what we have previously learned. Since we only observe $x(u,0)$ and $x(u,1)$, it makes sense to define the observation vector
$$ \mathbf{x}(u) = \begin{pmatrix} x(u,0) \\ x(u,1) \end{pmatrix}. $$
Then we can formulate the hypotheses in terms of the available observations:
$$ H_0: \ \mathbf{x}(u) = \mathbf{s}_0 + \mathbf{w}(u), \qquad H_1: \ \mathbf{x}(u) = \mathbf{s}_1 + \mathbf{w}(u), $$
where
$$ \mathbf{w}(u) = \begin{pmatrix} w(u,0) \\ w(u,1) \end{pmatrix}, \qquad \mathbf{s}_i = \sqrt{P} \begin{pmatrix} \cos(\pi i) \\ \cos(\pi(i+1)) \end{pmatrix} = (-1)^i \sqrt{P} \begin{pmatrix} 1 \\ -1 \end{pmatrix}. $$
The PSD of $w(u,n)$ provides a second-moment description of $w(u,n)$. Since there is no Dirac delta at $\nu = 0$, the mean of $w(u,n)$ is zero, and
$$ K_w(m) = \mathcal{F}^{-1}\{ S_w(\nu) \} = \sigma^2 \rho^{|m|}, $$
where the inversion can be found using any of several methods. It follows that
$$ \mathbf{m}_w = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \qquad \mathbf{K}_w = \begin{pmatrix} K_w(0) & K_w(-1) \\ K_w(1) & K_w(0) \end{pmatrix} = \sigma^2 \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}. $$
Since $|\rho| < 1$, $\mathbf{K}_w$ is nonsingular. The minimum-distance decision rule is then
$$ \mathbf{x}^t(u)\, \mathbf{K}_w^{-1} \mathbf{s}_0 \ \overset{H_0}{\underset{H_1}{\gtrless}} \ 0. $$
Because $\mathbf{K}_w$ is circulant, we know its eigenvectors and eigenvalues. In particular, $\mathbf{s}_0$ is an eigenvector of $\mathbf{K}_w$ (in fact $\mathbf{s}_0 = \sqrt{2P}\,\mathbf{e}_1$, where $\mathbf{e}_1$ is a standard eigenvector of a $2 \times 2$ circulant matrix). The corresponding eigenvalue is $\lambda_w(1) = \sigma^2(1-\rho)$. It follows that
$$ \mathbf{K}_w^{-1} \mathbf{s}_0 = \frac{1}{\sigma^2(1-\rho)}\, \mathbf{s}_0, $$
so the decision rule is independent of $\rho$:
$$ \frac{1}{\sigma^2(1-\rho)}\, \mathbf{x}^t(u)\, \mathbf{s}_0 \ \overset{H_0}{\underset{H_1}{\gtrless}} \ 0 \iff x(u,0) \ \overset{H_0}{\underset{H_1}{\gtrless}} \ x(u,1). $$
(b) The bound on the error probability is
$$ P(\mathcal{E}) \le \frac{4}{(\mathbf{s}_0 - \mathbf{s}_1)^t \mathbf{K}_w^{-1} (\mathbf{s}_0 - \mathbf{s}_1)} = \left( \frac{2P}{\sigma^2(1-\rho)} \right)^{-1}. $$
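The algebra in part (a) can be checked numerically. The sketch below (the particular values of $P$, $\sigma$, and $\rho$, and all variable names, are illustrative assumptions) verifies that $\mathbf{s}_0$ is an eigenvector of $\mathbf{K}_w$ with eigenvalue $\sigma^2(1-\rho)$, so the whitened statistic $\mathbf{x}^t \mathbf{K}_w^{-1} \mathbf{s}_0$ reduces to a positive multiple of $x(u,0) - x(u,1)$:

```python
import numpy as np

# Illustrative parameter values (assumptions); any P > 0, |rho| < 1 work.
P, sigma, rho = 1.0, 1.0, 0.5

s0 = np.sqrt(P) * np.array([1.0, -1.0])             # signal under H0
Kw = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])  # noise covariance

# s0 is an eigenvector of the circulant K_w with eigenvalue sigma^2 (1 - rho):
lam = sigma**2 * (1.0 - rho)
assert np.allclose(Kw @ s0, lam * s0)

# Hence K_w^{-1} s0 = s0 / lam, and the decision statistic x^t K_w^{-1} s0
# is a positive multiple of x(u,0) - x(u,1), independent of rho:
x = np.array([0.3, -0.9])                           # an arbitrary observation
stat = x @ np.linalg.solve(Kw, s0)
assert np.isclose(stat, np.sqrt(P) * (x[0] - x[1]) / lam)
```

Because the scaling $1/(\sigma^2(1-\rho))$ is positive for $|\rho| < 1$, the sign of the statistic, and hence the decision, depends only on whether $x(u,0)$ exceeds $x(u,1)$.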

(c) Since we have a complete statistical description of the noise, we can evaluate the probability of error exactly. First consider the case when $H_1$ is true:
$$ P(\mathcal{E} \mid H_1) = P\{ \mathbf{x}^t(u)\,\mathbf{s}_0 > 0 \mid \mathbf{x}(u) = \mathbf{s}_1 + \mathbf{w}(u) \} = P\{ (\mathbf{s}_1 + \mathbf{w}(u))^t \mathbf{s}_0 > 0 \} = P\{ \mathbf{w}^t(u)\,\mathbf{s}_0 > -\mathbf{s}_1^t \mathbf{s}_0 \} = P\{ \eta(u) > \|\mathbf{s}_0\|^2 \}, $$
where the last step uses $\mathbf{s}_1 = -\mathbf{s}_0$, and $\eta(u) = \mathbf{w}^t(u)\,\mathbf{s}_0$ is a Gaussian random variable with
$$ E\{\eta(u)\} = 0, \qquad \mathrm{Var}\{\eta(u)\} = \mathbf{s}_0^t \mathbf{K}_w \mathbf{s}_0 = \sigma^2(1-\rho)\,\|\mathbf{s}_0\|^2. $$
If we define $v(u) = \eta(u) / (\sigma\sqrt{1-\rho}\,\|\mathbf{s}_0\|)$, then we have
$$ P(\mathcal{E} \mid H_1) = P\left\{ v(u) > \frac{\|\mathbf{s}_0\|}{\sigma\sqrt{1-\rho}} \right\} \tag{1} $$
$$ = Q\left( \frac{\|\mathbf{s}_0\|}{\sigma\sqrt{1-\rho}} \right) \tag{2} $$
$$ = Q\left( \sqrt{\frac{2P}{\sigma^2(1-\rho)}} \right). \tag{3} $$
You can verify that $P(\mathcal{E} \mid H_0) = P(\mathcal{E} \mid H_1)$ (you can even argue this using symmetry), so the unconditional probability of error is
$$ P(\mathcal{E}) = Q\left( \sqrt{\frac{2P}{\sigma^2(1-\rho)}} \right). $$
(d) From the definition it is clear that the $Q$-function is strictly monotonically decreasing (i.e. $x > z \iff Q(x) < Q(z)$). So the condition on the argument needed to obtain $P(\mathcal{E}) \le 10^{-3}$ is
$$ \sqrt{\frac{2P}{\sigma^2(1-\rho)}} \ge 3. $$
When we specialize to white noise (i.e. $\rho = 0$) we have
$$ \frac{2P}{\sigma^2} \ge 9 \implies \frac{P}{\sigma^2} \ge \frac{9}{2}. $$
Notice that if we had instead used the bound on the error probability from part (b), we would require $P/\sigma^2 \ge 500$; so we do quite a bit better in determining the required signal level by knowing the complete statistical description.
(e) The performance improves as the argument of the $Q$-function increases, so the best performance occurs as $\rho \to 1$. In this case the probability of error goes to zero; this corresponds to a noise-free direction parallel to the signals. The worst case is $\rho \to -1$; then the error probability goes to $Q(\sqrt{P/\sigma^2})$. In this case the noise-free direction is orthogonal to the signals and the decision rule is a reduced-dimension version.
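As a sanity check on the exact result in (c), a short Monte Carlo simulation can compare the empirical error rate under $H_1$ against the closed form $Q\bigl(\sqrt{2P/(\sigma^2(1-\rho))}\bigr)$. This is only a sketch; the parameter values, seed, and all names are assumptions.

```python
import numpy as np
from math import erfc, sqrt

def Q(x):
    """Gaussian tail probability Q(x) = P{N(0,1) > x}."""
    return 0.5 * erfc(x / sqrt(2.0))

# Illustrative values (assumptions); rho = 0 is the white-noise case.
P, sigma, rho = 1.0, 1.0, 0.0
rng = np.random.default_rng(0)
n = 200_000

s0 = sqrt(P) * np.array([1.0, -1.0])
s1 = -s0
Kw = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
L = np.linalg.cholesky(Kw)          # to draw correlated Gaussian noise

# Send s1; an error occurs when the statistic x^t s0 favors H0 (i.e. > 0).
w = rng.standard_normal((n, 2)) @ L.T
x = s1 + w
err_rate = np.mean(x @ s0 > 0.0)

exact = Q(sqrt(2 * P / (sigma**2 * (1 - rho))))
assert abs(err_rate - exact) < 5e-3  # Monte Carlo agrees with the closed form
```

With 200,000 trials the Monte Carlo standard error is well below the tolerance used, so the agreement is a meaningful check rather than luck.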
The relation to standard decision problems is that the signals are always eigenvectors of $\mathbf{K}_w$, and the corresponding eigenvalue is $\sigma^2(1-\rho)$.
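The limiting behavior described in (e) can also be seen numerically. This sketch (the values of $P$ and $\sigma$ are illustrative assumptions) evaluates the exact error probability near the two extremes of $\rho$:

```python
from math import erfc, sqrt

def Q(x):
    """Gaussian tail probability Q(x) = P{N(0,1) > x}."""
    return 0.5 * erfc(x / sqrt(2.0))

P, sigma = 1.0, 1.0   # illustrative values (assumptions)

def pe(rho):
    """Exact error probability from part (c) as a function of rho."""
    return Q(sqrt(2 * P / (sigma**2 * (1 - rho))))

# Error probability falls monotonically as rho increases toward 1 ...
assert pe(0.999) < pe(0.0) < pe(-0.999)
# ... and approaches the worst-case value Q(sqrt(P/sigma^2)) as rho -> -1.
assert abs(pe(-0.999) - Q(sqrt(P / sigma**2))) < 1e-3
```

The monotonicity check mirrors the argument in (e): shrinking $1-\rho$ inflates the $Q$-function's argument, driving the error probability toward zero.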
