The basis functions ϕ0(t) and ϕ1(t) are orthogonal if fc T ≫ 1. The 8-PSK signal points are

    s_m = e^{j 2\pi m/8},  m = 0, 1, \ldots, 7

with Gray-coded bit labels:

    Data bits | Signal | (ϕ0, ϕ1) coordinates
    ----------+--------+----------------------
      000     |  s0    | (1, 0)
      001     |  s1    | (√2/2, √2/2)
      011     |  s2    | (0, 1)
      010     |  s3    | (−√2/2, √2/2)
      110     |  s4    | (−1, 0)
      111     |  s5    | (−√2/2, −√2/2)
      101     |  s6    | (0, −1)
      100     |  s7    | (√2/2, −√2/2)
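As a numerical check on the table, the points s_m = e^{j2πm/8} can be generated directly; a minimal sketch (the variable names are illustrative, not from the notes):

```python
import cmath

# Gray-coded bit labels in the order m = 0..7, matching the table above.
gray_labels = ["000", "001", "011", "010", "110", "111", "101", "100"]

# 8-PSK constellation: s_m = exp(j * 2*pi*m / 8).
constellation = {}
for m in range(8):
    s = cmath.exp(1j * 2 * cmath.pi * m / 8)
    # Real part is the coordinate along phi0, imaginary part along phi1.
    constellation[gray_labels[m]] = (s.real, s.imag)
```

Note that consecutive labels in `gray_labels` differ in exactly one bit, so a noise-induced decision in favor of an adjacent constellation point causes only a single bit error.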
EECS 455 (Univ. of Michigan), Fall 2012. Lecture Notes 7, October 3, 2012.

[Figure: the 8-PSK constellation, points s0 through s7 on the unit circle in the (ϕ0(t), ϕ1(t)) plane.]

Optimal Receiver

Note that we can completely recover r(t) if we know the
coefficients rm, m = 0, 1, ....
So the optimal decision based on observing r0 , r1 , ... is also the
optimal decision based on observing r (t ).
Given that signal si(t) is transmitted, we can determine the
probability density of rm as follows.
First, rm is Gaussian since it is the result of integrating Gaussian
noise.
Second, the mean of rm, conditioned on signal si(t) being
transmitted, is si,m and the variance is N0/2.
So the probability density of rm conditioned on signal si(t) being
transmitted (event Hi) is

    p_i(r_m) = f_{r_m \mid H_i}(r_m)
             = \frac{1}{\sqrt{2\pi N_0/2}} \exp\left\{ -\frac{(r_m - s_{i,m})^2}{2(N_0/2)} \right\}

Next note that rm is independent of rn for m ≠ n.
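The per-coefficient density, and the product form of the joint likelihood that follows from conditional independence, can be sketched numerically (function names are illustrative; the variance N0/2 follows the notes):

```python
import math

def p_i(r_m, s_im, N0):
    """Density of r_m given H_i: Gaussian with mean s_im and variance N0/2."""
    var = N0 / 2
    return math.exp(-(r_m - s_im) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def joint_likelihood(r, s_i, N0):
    """Because the r_m are conditionally independent given H_i,
    the joint density factors into a product over coordinates."""
    out = 1.0
    for r_m, s_im in zip(r, s_i):
        out *= p_i(r_m, s_im, N0)
    return out
```

For example, with N0 = 1 each coordinate has variance 1/2, so p_i(0, 0, 1) evaluates to 1/√π.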
Thus

    f_{r_0, r_1, \ldots, r_k \mid H_i}(x_0, x_1, \ldots, x_k)
        = \prod_{m=0}^{k} f_{r_m \mid H_i}(x_m)
        = \prod_{m=0}^{k} p_i(x_m)

M-ary Detection Problem
Consider the problem of deciding which of M hypotheses is true
based on observing a random variable (vector) r .
The performance criterion we consider is the average error
probability, that is, the probability of deciding anything other than
hypothesis Hj when hypothesis Hj is true.
The underlying model is that there is a conditional probability
density (mass) function of the observation r given each hypothesis
Hi, so that

    P\{r \in R_m \mid H_i\} = \int_{R_m} p_i(r)\,dr

There are disjoint decision regions R0, R1, ..., R_{M−1}. When r ∈ Rm
the receiver decides Hm.
Decision Regions

[Figure: an example partition of the observation space into disjoint decision regions R0 through R7.]

Objective
Our goal is to ﬁnd the decision regions R0 , R1 , ..., RM −1 that minimize
the error probability.
    E[P_e] = \sum_{i=0}^{M-1} P_{e,i}\,\pi_i
           = \sum_{i=0}^{M-1} P\{\text{don't decide } H_i \mid H_i\}\,\pi_i
           = \sum_{i=0}^{M-1} \left[ 1 - P\{\text{decide } H_i \mid H_i \text{ true}\} \right] \pi_i
           = \sum_{i=0}^{M-1} \pi_i - \sum_{i=0}^{M-1} \int_{R_i} p_i(r)\,\pi_i\,dr
           = 1 - \sum_{i=0}^{M-1} \int_{R_i} p_i(r)\,\pi_i\,dr.
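The average error probability can also be estimated by simulation for a chosen decision rule. A minimal Monte Carlo sketch for the 8-PSK constellation with minimum-distance decisions, which coincide with the optimal rule under equal priors (the noise levels and trial count here are illustrative, not from the notes):

```python
import cmath
import math
import random

random.seed(0)

# 8-PSK constellation points on the unit circle.
points = [cmath.exp(1j * 2 * math.pi * m / 8) for m in range(8)]

def decide(r):
    # Minimum-distance (nearest-neighbor) decision; optimal for
    # equally likely signals in additive white Gaussian noise.
    return min(range(8), key=lambda m: abs(r - points[m]))

def estimate_pe(N0, trials=20000):
    errors = 0
    sigma = math.sqrt(N0 / 2)   # variance N0/2 per dimension
    for _ in range(trials):
        i = random.randrange(8)  # equally likely messages, pi_i = 1/8
        noise = complex(random.gauss(0, sigma), random.gauss(0, sigma))
        if decide(points[i] + noise) != i:
            errors += 1
    return errors / trials

pe_low = estimate_pe(N0=0.01)   # weak noise: errors are rare
pe_high = estimate_pe(N0=1.0)   # strong noise: errors are common
```

As expected, the estimated error probability increases with N0, since the Gaussian noise in each coordinate has variance N0/2.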
The decision rule that minimizes the average error probability is
the decision rule that maximizes
    \Gamma = \sum_{i=0}^{M-1} \int_{R_i} p_i(r)\,\pi_i\,dr
           = \int_{-\infty}^{\infty} \sum_{i=0}^{M-1} p_i(r)\,\pi_i\,I(r \in R_i)\,dr.

Consider a small region r ∈ A = (r0, r0 + ∆) where each pi(r) is
nearly constant.
If r ∈ A then the contribution to Γ is p0(r0)π0∆ if the decision
rule puts r0 ∈ R0, or p1(r0)π1∆ if it puts r0 ∈ R1, and so on for
each region.
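Maximizing Γ pointwise therefore means assigning each r to the hypothesis with the largest weighted density πi pi(r), i.e. the MAP rule. A sketch for scalar Gaussian observations (function and parameter names are illustrative):

```python
import math

def map_decide(r, means, priors, N0):
    """Choose the index i maximizing priors[i] * p_i(r), so that the
    contribution of the small interval around r to Gamma is maximized."""
    var = N0 / 2
    def weighted(i):
        p = math.exp(-(r - means[i]) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
        return priors[i] * p
    return max(range(len(means)), key=weighted)

# With equal priors the rule reduces to choosing the nearest mean:
map_decide(0.9, means=[-1.0, 1.0], priors=[0.5, 0.5], N0=1.0)  # -> 1
```

With unequal priors the boundary between the regions shifts toward the less likely hypothesis, since a larger πi enlarges the region Ri.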
This note was uploaded on 02/12/2014 for the course EECS 455 taught by Professor Stark during the Fall '08 term at University of Michigan.