Gaussian Channel
Besma SMIDA
ECE 595K: Chapter 7
Fall 2011

Today’s outline
Discrete-time Gaussian Channel Capacity
Continuous Typical Set and AEP
Gaussian Channel Coding Theorem
Bandlimited Channel
Parallel Gaussian Channels
Channels with Colored Gaussian Noise
Gaussian Channel with Feedback

Discrete-time Gaussian Channel Capacity
Definition of Gaussian Channel
(Figure: additive noise channel — input X_i and noise Z_i sum to give output Y_i.)

Definition (Gaussian channel): A discrete-time channel with input X_i, noise Z_i, and output Y_i at time i:

Y_i = X_i + Z_i,

where the noise Z_i is drawn i.i.d. from N(0, N) and is assumed independent of the signal X_i.

Average power constraint:

\frac{1}{n} \sum_{i=1}^{n} x_i^2 \le P \quad \Rightarrow \quad E[X^2] \le P.

Since X and Z are independent and E[Z] = 0,

E[Y^2] = E[(X + Z)^2] = E[X^2] + 2E[X]E[Z] + E[Z^2] \le P + N.
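The power relation above can be checked empirically by simulating the channel; the values of P and N below are illustrative choices, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
P, N, n = 1.0, 0.5, 1_000_000  # illustrative power constraint and noise variance

X = rng.normal(0.0, np.sqrt(P), n)   # Gaussian input meeting E[X^2] <= P
Z = rng.normal(0.0, np.sqrt(N), n)   # i.i.d. noise ~ N(0, N), independent of X
Y = X + Z                            # channel output Y_i = X_i + Z_i

print(np.mean(Y**2))  # close to P + N = 1.5; the cross term 2 E[X]E[Z] vanishes
```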

Discrete-time Gaussian Channel Capacity
Information Capacity
Definition: The information capacity with power constraint P is

C = \max_{E[X^2] \le P} I(X; Y).

I(X; Y) = h(Y) - h(Y|X) = h(Y) - h(X + Z|X)
        = h(Y) - h(Z|X) = h(Y) - h(Z)
        \le \frac{1}{2}\log(2\pi e(P + N)) - \frac{1}{2}\log(2\pi e N)
        = \frac{1}{2}\log\left(1 + \frac{P}{N}\right).

The optimum input is Gaussian, and the worst noise is Gaussian.
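The capacity expression is easy to evaluate numerically; a minimal sketch (the function name is ours):

```python
import math

def gaussian_capacity(P, N):
    """Capacity in bits per transmission: C = (1/2) log2(1 + P/N)."""
    return 0.5 * math.log2(1.0 + P / N)

print(gaussian_capacity(1.0, 1.0))    # 0.5 bit  (SNR = 1)
print(gaussian_capacity(100.0, 1.0))  # ~3.33 bits (SNR = 100)
```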

Discrete-time Gaussian Channel Capacity
Gaussian Channel Code
(Figure: message W \in \{1, ..., M\} → Encoder → X_i → channel with additive noise Z_i → Y_i → Decoder → \hat{W}.)

Definition: An (M, n) code for the Gaussian channel with power constraint P consists of the following:

1. An index set \{1, 2, ..., M\}.
2. An encoding function x : \{1, 2, ..., M\} \to \mathcal{X}^n, yielding codewords x^n(1), x^n(2), ..., x^n(M) satisfying the power constraint P; that is, for every codeword,
   \sum_{i=1}^{n} x_i^2(w) \le nP, \quad w = 1, 2, ..., M.
3. A decoding function g : \mathcal{Y}^n \to \{1, 2, ..., M\}.

Discrete-time Gaussian Channel Capacity
Coding Theorem for the Gaussian Channel
Definition: A rate R is said to be achievable with power constraint P if there exists a sequence of (2^{nR}, n) codes, with codewords satisfying the power constraint, such that the maximal probability of error \lambda^{(n)} tends to zero as n \to \infty. The capacity of the channel is the supremum of the achievable rates.

Theorem: The capacity of a Gaussian channel with power constraint P and noise variance N is

C = \frac{1}{2}\log\left(1 + \frac{P}{N}\right) \text{ bits per transmission}.

Conversely, rates R > C are not achievable.
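Inverting the capacity formula gives the smallest SNR at which a target rate is still below capacity; a small worked sketch (the function name is ours):

```python
import math

def min_snr_for_rate(R):
    """Smallest P/N for which rate R (bits/transmission) is achievable,
    from inverting C = (1/2) log2(1 + P/N):  P/N = 2^(2R) - 1."""
    return 2.0 ** (2.0 * R) - 1.0

print(min_snr_for_rate(0.5))  # 1.0: half a bit per transmission needs P = N
print(min_snr_for_rate(2.0))  # 15.0: two bits per transmission needs P = 15 N
```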

Discrete-time Gaussian Channel Capacity
Sphere Packing
Each transmitted codeword x^n is received as a probabilistic cloud around y^n.

Cloud "radius" = \sqrt{n\,\mathrm{Var}(Y|X)} = \sqrt{nN}.

The energy of y^n is constrained to n(P + N), so the clouds must fit into a hypersphere of radius \sqrt{n(P + N)}.

Volume of a hypersphere \propto r^n.

Maximum number of non-overlapping clouds:

\frac{(n(P + N))^{n/2}}{(nN)^{n/2}} = 2^{\frac{n}{2}\log\left(1 + \frac{P}{N}\right)}.

Maximum rate is \frac{1}{2}\log\left(1 + \frac{P}{N}\right).

Continuous AEP
Jointly Typical Set
Definition: The set J_\epsilon^{(n)} of jointly typical sequences \{(x^n, y^n)\} with respect to the density f_{X,Y}(x, y) is defined as follows:

J_\epsilon^{(n)} = \Big\{ (x^n, y^n) \in \mathbb{R}^n \times \mathbb{R}^n :
\left| -\frac{1}{n}\log f_X(x^n) - h(X) \right| < \epsilon,
\left| -\frac{1}{n}\log f_Y(y^n) - h(Y) \right| < \epsilon,
\left| -\frac{1}{n}\log f_{X,Y}(x^n, y^n) - h(X, Y) \right| < \epsilon \Big\}.
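The first typicality condition can be illustrated with a Monte Carlo check that -(1/n) log f_X(x^n) concentrates around h(X) for a Gaussian source (entropies in nats; the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2, n = 2.0, 10_000  # source variance and block length

x = rng.normal(0.0, np.sqrt(sigma2), n)

# -(1/n) log f_X(x^n) for an i.i.d. N(0, sigma2) density, in nats
log_f = -0.5 * np.log(2 * np.pi * sigma2) - x**2 / (2 * sigma2)
empirical = -np.mean(log_f)

h = 0.5 * np.log(2 * np.pi * np.e * sigma2)  # differential entropy h(X) in nats

print(empirical, h)  # the empirical value falls within a small epsilon of h(X)
```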

Continuous AEP
Properties of Jointly Typical Set
1. Joint pdf: (x^n, y^n) \in J_\epsilon^{(n)} \Rightarrow \log f_{X,Y}(x^n, y^n) = -n\,h(X, Y) \pm n\epsilon.
2. Total probability: \Pr(J_\epsilon^{(n)}) > 1 - \epsilon for n sufficiently large.