Lecture Outline
Channel Coding
Reading:
1. Review of Error Probability
• M-PSK: s_m(t) = g(t) cos(2π f_c t + 2πm/M), m = 0, 1, 2, …, M − 1
• M-FSK: s_m(t) = g(t) cos(2π(f_c + m∆f)t), m = 0, 1, 2, …, M − 1.
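As a sketch of the two signal sets above (function names and sampling parameters are my own, assuming the rectangular pulse g(t) = √(2E_s/T) defined next), each symbol can be generated sample-by-sample:

```python
import math

def mpsk_sample(t, m, M, fc, T, Es):
    """One M-PSK symbol: s_m(t) = g(t) * cos(2*pi*fc*t + 2*pi*m/M)."""
    g = math.sqrt(2 * Es / T) if 0 <= t <= T else 0.0  # rectangular pulse, symbol energy Es
    return g * math.cos(2 * math.pi * fc * t + 2 * math.pi * m / M)

def mfsk_sample(t, m, M, fc, df, T, Es):
    """One M-FSK symbol: s_m(t) = g(t) * cos(2*pi*(fc + m*df)*t)."""
    g = math.sqrt(2 * Es / T) if 0 <= t <= T else 0.0
    return g * math.cos(2 * math.pi * (fc + m * df) * t)

# Riemann-sum check that one M-PSK symbol carries (approximately) energy Es:
fs, T, Es, fc = 10000, 1.0, 1.0, 10.0
energy = sum(mpsk_sample(i / fs, 1, 8, fc, T, Es) ** 2 for i in range(int(fs * T))) / fs
```

The factor √(2E_s/T) makes the integrated energy of g(t)cos(·) over one symbol equal E_s, since cos² averages to 1/2.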
• Typically, g(t) = √(2E_s/T) for 0 ≤ t ≤ T.
• Energy per symbol is E_s. Energy per bit is E_b = E_s / log₂ M.
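The symbol-to-bit energy relation above is a one-line conversion; a minimal sketch (the function name is my own):

```python
import math

def ebn0_from_esn0(esn0, M):
    """Convert symbol SNR Es/N0 to bit SNR Eb/N0 using Eb = Es / log2(M)."""
    return esn0 / math.log2(M)

# Example: 16-ary signaling carries log2(16) = 4 bits per symbol,
# so Es/N0 = 8 corresponds to Eb/N0 = 2.
ratio = ebn0_from_esn0(8.0, 16)
```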
• The noise in each dimension of signal space has variance σ² = N₀/2.
• Each signal point is at distance √E_s from the origin.
[Figure: signal-space constellations for M-PSK and M-FSK.]
• Message error probabilities:
  M-PSK: P_M ≈ 2 Q(√(2E_s/N₀) · sin(π/M))
  M-FSK: P_M ≤ (M − 1) Q(√(E_s/N₀))
  M-QAM: P_M ≈ 4(1 − 1/√M) Q(√(3E_av/((M − 1)N₀)))
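The three error-probability expressions above can be evaluated with the Gaussian tail function Q(x) = ½ erfc(x/√2); a sketch, with function names of my own choosing:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_mpsk(es_n0, M):
    """Approximate M-PSK symbol error probability: 2*Q(sqrt(2*Es/N0)*sin(pi/M))."""
    return 2 * Q(math.sqrt(2 * es_n0) * math.sin(math.pi / M))

def pe_mfsk(es_n0, M):
    """Union bound on orthogonal M-FSK symbol error probability: (M-1)*Q(sqrt(Es/N0))."""
    return (M - 1) * Q(math.sqrt(es_n0))

def pe_mqam(eav_n0, M):
    """Approximate square M-QAM symbol error probability."""
    return 4 * (1 - 1 / math.sqrt(M)) * Q(math.sqrt(3 * eav_n0 / (M - 1)))
```

Increasing M for PSK shrinks the angular spacing sin(π/M), so at fixed E_s/N₀ the error probability grows; this is the trend the next bullet refers to.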
• It seems that the error probability goes to zero only at very high SNR.
•
Shannon theory: We can transmit at a data rate up to channel capacity while keeping
error probability as small as we like. (Quite a surprise!)
• For a passband channel of bandwidth W corrupted by AWGN, Shannon derived the channel capacity: C = W log₂(1 + P/(N₀W)) bits/s, where P is the average received signal power.
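Shannon's capacity for a bandlimited AWGN channel is C = W log₂(1 + P/(N₀W)) bits/s; a minimal sketch evaluating it (the function name is my own):

```python
import math

def awgn_capacity(W, P, N0):
    """Capacity of a bandlimited AWGN channel: C = W * log2(1 + P/(N0*W)) bits/s."""
    return W * math.log2(1 + P / (N0 * W))

# Example: W = 1 MHz, N0 = 1e-6 W/Hz, P = 3 W gives SNR = 3,
# so C = 1e6 * log2(4) = 2 Mbit/s.
C = awgn_capacity(1e6, 3.0, 1e-6)
```

Below capacity, arbitrarily small error probability is achievable (with coding); above it, it is not — which is the surprise the previous bullet points out.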
Spring '09, Hui