Since the fraction of good indices of BEC($Z(P)$) is $1 - Z(P)$, the total error that we make by replacing $P$ with BEC($Z(P)$) is at most $Z(P)$, which in the above algorithm is less than the parameter $\delta$.
Now, for a fixed level $n$, according to Theorem 1, if we make $k$ large enough, the fraction of the quantized subchannels whose Bhattacharyya value is less than $\delta$ approaches its original value (with no quantization), and for these subchannels, as explained above, the total error made by the algorithm is at most $\delta$. Then, from the polarization theorem, by sending $\delta$ to zero we deduce that as $k \to \infty$ the number of good indices approaches the capacity of the original channel.
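For intuition, this limiting behavior can be checked numerically in the one case where the Bhattacharyya parameters evolve in closed form: for a BEC($\epsilon$) the polar transform maps $z$ to $2z - z^2$ (worse channel) and $z^2$ (better channel). The sketch below (an illustration of polarization, not of the quantization algorithm itself; function names are ours) counts the fraction of indices with $Z < \delta$, which approaches the capacity $1 - \epsilon$:

```python
# Illustrative sketch: exact polarization of a BEC, where the Bhattacharyya
# parameter Z equals the erasure probability and transforms in closed form.
def bec_polarize(eps, n):
    """Return the 2**n Bhattacharyya parameters after n polarization levels."""
    zs = [eps]
    for _ in range(n):
        # Each channel splits into a worse (2z - z^2) and a better (z^2) copy.
        zs = [g for z in zs for g in (2*z - z*z, z*z)]
    return zs

eps, delta = 0.5, 1e-3
for n in (5, 10, 15):
    zs = bec_polarize(eps, n)
    good = sum(z < delta for z in zs) / len(zs)
    print(n, good)  # fraction of good indices; capacity is 1 - eps = 0.5
```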
3.5 Simulation Results
In order to evaluate the performance of our quantization algorithm, we compare the performance of the degraded quantized channel with that of an upgraded quantized channel. Similarly to Algorithm 2 in Section 3.2, we introduce an algorithm which this time splits each mass between its two neighbors.
To clarify, consider three neighboring masses at positions $(x_{i-1}, x_i, x_{i+1})$ with probabilities $(p_{i-1}, p_i, p_{i+1})$. Let $t = \frac{x_i - x_{i-1}}{x_{i+1} - x_{i-1}}$. Then, we split the middle mass at $x_i$ between the other two masses so that the final probabilities are $(p_{i-1} + (1-t)p_i,\ p_{i+1} + t p_i)$ at positions $(x_{i-1}, x_{i+1})$. The greedy algorithm is shown in Algorithm 3.
An upper bound on the error of this algorithm can be provided similarly to Section 3.3
Algorithm 3 Splitting Masses Algorithm
1: Start from the list $(p_1, x_1), \dots, (p_n, x_n)$.
2: Repeat $n - k$ times:
3: Find $j = \arg\min \left\{ p_i \left( f(x_i) - t f(x_{i+1}) - (1-t) f(x_{i-1}) \right) : i \neq 1, n \right\}$.
4: Add $(1-t)p_j$ to $p_{j-1}$ and $t p_j$ to $p_{j+1}$.
5: Delete $(p_j, x_j)$ from the list.
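A minimal sketch of this greedy splitting procedure in Python may help make it concrete (the function names and the choice of $f$ as the binary entropy are our own; the text also considers other choices of $f$):

```python
import math

def h(x):
    """Binary entropy function, used here as the f in the greedy criterion."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def split_masses(pairs, k, f=h):
    """Greedy mass-splitting quantizer: reduce a list of (p_i, x_i) masses
    down to k masses, splitting the cheapest interior mass at each step."""
    pairs = sorted(pairs, key=lambda px: px[1])  # sort by position x
    while len(pairs) > k:
        best_j, best_err = None, None
        for j in range(1, len(pairs) - 1):  # endpoints are never removed
            (pl, xl), (pm, xm), (pr, xr) = pairs[j - 1], pairs[j], pairs[j + 1]
            t = (xm - xl) / (xr - xl)
            err = pm * (f(xm) - t * f(xr) - (1 - t) * f(xl))
            if best_err is None or err < best_err:
                best_j, best_err = j, err
        (pl, xl), (pm, xm), (pr, xr) = (pairs[best_j - 1], pairs[best_j],
                                        pairs[best_j + 1])
        t = (xm - xl) / (xr - xl)
        pairs[best_j - 1] = (pl + (1 - t) * pm, xl)  # (1-t)p_j to left mass
        pairs[best_j + 1] = (pr + t * pm, xr)        # t*p_j to right mass
        del pairs[best_j]
    return pairs
```

Note that the choice of $t$ makes each split preserve both the total probability and the mean of the distribution, since $(1-t)x_{i-1} + t\,x_{i+1} = x_i$.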
with a slight modification. Consider the error made at each step of the algorithm:
$$
\begin{aligned}
e_i &= p_i \left( f(x_i) - t\, f(x_{i+1}) - (1-t)\, f(x_{i-1}) \right) && (3.32)\\
&= -t\, p_i \left( f(x_{i+1}) - f(x_i) \right) + (1-t)\, p_i \left( f(x_i) - f(x_{i-1}) \right) && (3.33)\\
&\leq -t\, p_i (1-t)\, \Delta x_i\, f'(x_{i+1}) + (1-t)\, p_i\, t\, \Delta x_i\, f'(x_{i-1}) && (3.34)\\
&= p_i\, t (1-t)\, \Delta x_i^2 \left| f''(c_i) \right|, && (3.35)
\end{aligned}
$$
where $x_{i-1} \leq c_i \leq x_{i+1}$ and $\Delta x_i$ denotes $x_{i+1} - x_{i-1}$. The difference with Section 3.3 is that now $\sum_i \Delta x_i \leq 1$ (not $1/2$).
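As a quick sanity check of the per-step bound (3.35) for the binary entropy function (the numeric values and helper names below are our own choices, not from the text; we use the fact that $|h''(x)| = \frac{1}{x(1-x)\ln 2}$ is convex, so its maximum over an interval is attained at an endpoint):

```python
import math

def h(x):
    """Binary entropy function."""
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def h2_abs(x):
    """|h''(x)| = 1 / (x (1 - x) ln 2); convex, so max over an
    interval occurs at one of its endpoints."""
    return 1.0 / (x * (1 - x) * math.log(2))

# An arbitrary interior triple of positions and a mass p (assumed values).
xl, xm, xr, p = 0.2, 0.25, 0.35, 0.1
t = (xm - xl) / (xr - xl)
err = p * (h(xm) - t * h(xr) - (1 - t) * h(xl))   # exact e_i of (3.32)
dx = xr - xl                                       # Delta x_i
bound = p * t * (1 - t) * dx**2 * max(h2_abs(xl), h2_abs(xr))
print(err, bound)
assert 0.0 <= err <= bound  # (3.35) holds on this triple
```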
On the other hand, for $0 \leq t \leq 1$ we have $t(1-t) \leq \frac{1}{4}$.
Therefore, exactly the same results as in Section 3.3 apply here, and the total error of the algorithm can be upper bounded by $O\!\left(\frac{\log(k)}{k\sqrt{k}}\right)$ for the entropy function, and by $O\!\left(\frac{1}{k^2}\right)$ for functions for which $\left| f''(x) \right|$ is bounded.
In the simulations, we measure the maximum achievable rate while keeping the probability of error less than $10^{-3}$, by finding the maximum possible number of channels with the smallest Bhattacharyya parameters such that the sum of their Bhattacharyya parameters is upper bounded by $10^{-3}$. The channel is a binary symmetric channel with capacity $0.5$.
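The selection rule just described can be sketched as follows (a toy illustration with names of our own choosing; for the demonstration we substitute an exactly computable BEC for the simulated BSC, so the numbers are not those of the text):

```python
def achievable_rate(bhattacharyya, budget=1e-3):
    """Largest fraction of channels, taken in order of smallest
    Bhattacharyya parameter, whose parameters sum to at most `budget`."""
    total, count = 0.0, 0
    for z in sorted(bhattacharyya):
        if total + z > budget:
            break
        total += z
        count += 1
    return count / len(bhattacharyya)

# Toy check: polarize a BEC(0.5), where Z transforms in closed form,
# for 10 levels and measure the rate under the same selection rule.
zs = [0.5]
for _ in range(10):
    zs = [g for z in zs for g in (2 * z - z * z, z * z)]
print(achievable_rate(zs))  # below, but approaching, the capacity 0.5
```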
First, we compare three different functions: $f_1(x) = h(x)$ (the entropy function), $f_2(x) = 2\sqrt{x(1-x)}$ (the Bhattacharyya function), and $f_3(x) = x(1-x)$.