Var[n_i] = N_0/2, and {n_i}_{i=0}^{∞} is an independent, identically distributed (i.i.d.) sequence of random variables with Gaussian density functions.

EECS 455 (Univ. of Michigan), Fall 2012, October 3, 2012, Lecture Notes 7

Example 2: Additive White Gaussian Noise
Let

s_0(t) = ϕ_0(t) + 2ϕ_1(t)
s_1(t) = 2ϕ_0(t) + ϕ_1(t)
s_2(t) = ϕ_0(t) − 2ϕ_1(t)

Note that the energy of each of the three signals is the same, i.e.

∫_0^T s_i^2(t) dt = ||s_i||^2 = 5.

Then we have a three-hypothesis testing problem:

H_0 : r(t) = s_0(t) + n(t) = Σ_{i=0}^{∞} (s_{0,i} + n_i) ϕ_i(t)
H_1 : r(t) = s_1(t) + n(t) = Σ_{i=0}^{∞} (s_{1,i} + n_i) ϕ_i(t)
H_2 : r(t) = s_2(t) + n(t) = Σ_{i=0}^{∞} (s_{2,i} + n_i) ϕ_i(t)
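Since ϕ_0 and ϕ_1 are orthonormal, each signal's energy equals the squared norm of its coefficient vector. A minimal numerical check in plain Python (the coefficient vectors are read off the definitions above):

```python
# Coefficient vectors of the three signals in the orthonormal basis
# (phi_0, phi_1): s_i(t) = c_0*phi_0(t) + c_1*phi_1(t).
signals = {
    "s0": (1.0, 2.0),
    "s1": (2.0, 1.0),
    "s2": (1.0, -2.0),
}

def energy(s):
    """Energy of a signal = squared Euclidean norm of its coefficients,
    by orthonormality of the basis functions."""
    return sum(c * c for c in s)

for name, s in signals.items():
    print(name, energy(s))  # each of the three signals has energy 5
```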
Example 1: Additive White Gaussian Noise
[Figure: the three signal points s_0(t), s_1(t), and s_2(t) plotted in the (ϕ_0, ϕ_1) signal-space plane.]

[Figure: the same constellation with axes labeled ϕ_0(t) and ϕ_1(t).]

Example 1: Decision Rule
The decision rule that minimizes the average error probability is given as follows:

Decide H_i if π_i p_i(r) = max_j π_j p_j(r).

First let us consider the first L + 1 variables and normalize each side by the density function for the noise alone. The noise density function for L + 1 variables is

p^{(L)}(r) = (1/√(2π N_0/2))^{L+1} exp{ −(1/N_0) Σ_{m=0}^{L} r_m^2 }.
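Because the noise samples are i.i.d., the joint density above is just a product of one-dimensional Gaussian densities with variance N_0/2. A quick numerical sketch confirming this (the vector r and the value of N_0 are illustrative):

```python
import math

def noise_density(r, N0):
    """Joint density of len(r) i.i.d. zero-mean Gaussians with variance
    N0/2, evaluated at the vector r: a sketch of p^(L)(r)."""
    var = N0 / 2.0
    coef = (1.0 / math.sqrt(2 * math.pi * var)) ** len(r)
    return coef * math.exp(-sum(x * x for x in r) / (2 * var))

def gauss(x, var):
    """One-dimensional zero-mean Gaussian density."""
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

r = [0.3, -1.2, 0.5]  # illustrative projections (L + 1 = 3)
N0 = 2.0

# The joint density factors into a product of scalar densities.
prod = 1.0
for x in r:
    prod *= gauss(x, N0 / 2)

print(noise_density(r, N0))
print(prod)  # the two computations agree
```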
The optimal decision rule is equivalent to:

Decide H_i if π_i p_i(r)/p(r) = max_j π_j p_j(r)/p(r).

As usual, assume π_i = 1/M. Then

p_0^{(L)}(r)/p^{(L)}(r)
  = ( (1/√(2π N_0/2))^{L+1} exp{ −(1/N_0)[ Σ_{i=0,1} (r_i − s_{0,i})^2 + Σ_{i=2}^{L} r_i^2 ] } )
    / ( (1/√(2π N_0/2))^{L+1} exp{ −(1/N_0) Σ_{i=0}^{L} r_i^2 } )
  = exp{ −(1/N_0) Σ_{i=0,1} [ (r_i − s_{0,i})^2 − r_i^2 ] }
  = exp{ +(1/N_0) [2 r_0 + 4 r_1 − 5] }.

Since the expression above does not depend on L, we can let L → ∞ and the result is the same.
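The cancellation above can be checked numerically: the ratio of the two Gaussian densities collapses to exp{(1/N_0)[2⟨r, s_0⟩ − ||s_0||^2]}, which for s_0 = (1, 2) is exp{(1/N_0)[2r_0 + 4r_1 − 5]}. A sketch comparing the direct ratio with the closed form (r and N_0 are illustrative):

```python
import math

def likelihood_ratio_direct(r, s, N0):
    """Ratio p_s^(L)(r) / p^(L)(r): the (1/sqrt(2*pi*N0/2))^(L+1)
    factors cancel, leaving a ratio of exponentials."""
    num = -sum((ri - si) ** 2 for ri, si in zip(r, s)) / N0
    num += -sum(ri ** 2 for ri in r[len(s):]) / N0      # tail terms i >= 2
    den = -sum(ri ** 2 for ri in r) / N0
    return math.exp(num - den)

def likelihood_ratio_closed(r, s, N0):
    """Closed form exp{(1/N0)[2<r,s> - ||s||^2]} from the algebra above."""
    dot = sum(ri * si for ri, si in zip(r, s))
    return math.exp((2 * dot - sum(si * si for si in s)) / N0)

r = [0.7, -0.4, 1.1, 0.2]   # illustrative first L + 1 = 4 projections
s0 = [1.0, 2.0]             # coefficients of s_0(t)
N0 = 2.0

print(likelihood_ratio_direct(r, s0, N0))
print(likelihood_ratio_closed(r, s0, N0))  # the two agree
```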
Example 1

p_0(r)/p(r) ≜ lim_{L→∞} p_0^{(L)}(r)/p^{(L)}(r) = exp{ +(1/N_0)[2r_0 + 4r_1 − 5] }.

Similarly,

p_1(r)/p(r) = exp{ +(1/N_0)[4r_0 + 2r_1 − 5] }
p_2(r)/p(r) = exp{ +(1/N_0)[2r_0 − 4r_1 − 5] }.
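With equal priors and equal signal energies, deciding the hypothesis with the largest of the three ratios above is equivalent to choosing the nearest signal point. A sketch verifying this equivalence on a few test points (the value of N_0 and the test points r are illustrative):

```python
import math

signals = [(1.0, 2.0), (2.0, 1.0), (1.0, -2.0)]  # s_0, s_1, s_2 coefficients
N0 = 2.0

def ratio(r, s):
    """exp{(1/N0)[2<r,s> - ||s||^2]}, the form of the three ratios above."""
    dot = r[0] * s[0] + r[1] * s[1]
    return math.exp((2 * dot - (s[0] ** 2 + s[1] ** 2)) / N0)

def decide_map(r):
    """Pick the hypothesis with the largest likelihood ratio."""
    return max(range(3), key=lambda j: ratio(r, signals[j]))

def decide_min_distance(r):
    """Pick the signal point closest to r in Euclidean distance."""
    return min(range(3), key=lambda j: (r[0] - signals[j][0]) ** 2
                                       + (r[1] - signals[j][1]) ** 2)

for r in [(1.2, 1.8), (2.5, 0.4), (0.9, -1.5), (-1.0, 0.5)]:
    assert decide_map(r) == decide_min_distance(r)
print("MAP and minimum-distance decisions agree")
```

The equivalence holds because maximizing 2⟨r, s_j⟩ − ||s_j||^2 is the same as minimizing ||r − s_j||^2 once the common ||r||^2 term is added.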
[Figure: the signal points s_0(t), s_1(t), and s_2(t) shown with axes labeled ϕ_1(t) and ϕ_2(t).]

Decision Regions
[Figure: decision regions for the three-signal constellation.]

Likelihood Ratio for Real Signals in AWGN

Assume two signals in Gaussian noise.
H_0 : r(t) = s_0(t) + n(t)
H_1 : r(t) = s_1(t) + n(t)

Goal: Find the decision rule that minimizes the average error probability.

Let n(t) have autocorrelation function R(s, t) = (N_0/2) δ(t − s). We assume that n(t) is a zero-mean white Gaussian noise random process.
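Anticipating where this two-hypothesis case leads: for equally likely signals, the minimum-error rule again reduces to a minimum-distance comparison between the projection vector r and the two signal points. A sketch with illustrative antipodal signal vectors (not taken from the slides):

```python
def decide_binary(r, s0, s1):
    """Minimum-distance rule for two equally likely signals in AWGN.
    r, s0, s1 are coefficient vectors in an orthonormal basis."""
    d0 = sum((ri - si) ** 2 for ri, si in zip(r, s0))
    d1 = sum((ri - si) ** 2 for ri, si in zip(r, s1))
    return 0 if d0 <= d1 else 1

# Illustrative antipodal pair (an assumption for this sketch).
s0 = [1.0, 0.0]
s1 = [-1.0, 0.0]

print(decide_binary([0.4, 0.3], s0, s1))   # closer to s0 -> 0
print(decide_binary([-0.9, 0.1], s0, s1))  # closer to s1 -> 1
```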