3.2. Algorithms for Quantization
(ii) Replacing the channel with a BEC channel with the same Bhattacharyya parameter.

Furthermore, note that the stochastic dominance of the random variable $\tilde{\chi}$ with respect to $\chi$ implies that $\tilde{W}$ is stochastically degraded with respect to $W$. (But the reverse is not true.)
In the following, we state two algorithms based on different methods of polar degradation of the channel. The first is a naive algorithm, called the mass transportation algorithm, based on the stochastic dominance of the random variable $\tilde{\chi}$; the second, which outperforms the first, is called the greedy mass merging algorithm. For both algorithms, the quantized channel is stochastically degraded with respect to the original one.
3.2.1 Greedy Mass Transportation Algorithm
In the most general form of this algorithm, we basically look at the problem as a mass transport problem. In fact, we have non-negative masses $p_i$ at locations $x_i$, $i = 1, \ldots, m$, with $x_1 < \cdots < x_m$. What is required is to move the masses, by moves to the right only, so as to concentrate them on $k < m$ locations, while minimizing $\sum_i p_i d_i$, where $d_i = x_{i+1} - x_i$ is the amount the $i$th mass has moved. Later, we will show that this method is not optimal, but it is useful in the theoretical analysis of the algorithms that follow.
Algorithm 1 Mass Transportation Algorithm
1: Start from the list $(p_1, x_1), \ldots, (p_m, x_m)$.
2: Repeat $m - k$ times:
3: Find $j = \operatorname{argmin}\{p_i d_i : i \neq m\}$.
4: Add $p_j$ to $p_{j+1}$ (i.e., move $p_j$ to $x_{j+1}$).
5: Delete $(p_j, x_j)$ from the list.
Note that Algorithm 1 is based on the stochastic dominance of the random variable $\tilde{\chi}$ with respect to $\chi$. Furthermore, in general, we can let $d_i = f(x_{i+1}) - f(x_i)$ for an arbitrary bounded increasing function $f$.
3.2.2 Mass Merging Algorithm
The second algorithm merges the masses. Two masses $p_1$ and $p_2$ at positions $x_1$ and $x_2$ would be merged into one mass $p_1 + p_2$ at position $\bar{x}_1 = \frac{p_1}{p_1 + p_2} x_1 + \frac{p_2}{p_1 + p_2} x_2$. This algorithm is based on the stochastic degradation of the channel, but the random variable $\chi$ is not stochastically dominated by $\tilde{\chi}$. The greedy algorithm for merging the masses is shown in Algorithm 2.
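A minimal Python sketch of such a greedy merging loop is given below. The selection rule used here, merging the adjacent pair that changes $\sum_i p_i f(x_i)$ the least for a concave $f$, is our assumption for illustration; the details of Algorithm 2 may differ.

```python
# A hypothetical sketch of greedy mass merging. The pair of adjacent masses
# to merge is chosen to minimize the change in sum_i p_i * f(x_i) for a
# concave f; this selection rule is an assumption, not taken from the text.

def merge_cost(p1, x1, p2, x2, f):
    """Change in sum p_i f(x_i) caused by merging (p1, x1) and (p2, x2).

    For concave f this is non-negative by Jensen's inequality.
    """
    x_bar = (p1 * x1 + p2 * x2) / (p1 + p2)   # merged position (center of mass)
    return (p1 + p2) * f(x_bar) - p1 * f(x1) - p2 * f(x2)

def mass_merge(p, x, k, f=lambda t: t * (1 - t)):
    """Greedily merge adjacent masses until only k locations remain."""
    p, x = list(p), list(x)
    for _ in range(len(p) - k):
        j = min(range(len(p) - 1),
                key=lambda i: merge_cost(p[i], x[i], p[i + 1], x[i + 1], f))
        # Replace (p_j, x_j) and (p_{j+1}, x_{j+1}) by their merger.
        x[j] = (p[j] * x[j] + p[j + 1] * x[j + 1]) / (p[j] + p[j + 1])
        p[j] += p[j + 1]
        del p[j + 1], x[j + 1]
    return p, x
```

Unlike Algorithm 1, the merged mass sits at the pair's center of mass rather than at an existing location, which is why $\chi$ is not stochastically dominated by $\tilde{\chi}$ here.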
Note that in practice, the function $f$ can be any increasing concave function, for example, the entropy function or the Bhattacharyya function. In fact, since the algorithm is greedy and suboptimal, it is hard to investigate explicitly how changing the function $f$ will affect the total error of the algorithm in the end (i.e., how far $\tilde{W}$ is from $W$). In Section 3.5, we will see the results of applying Algorithm 2 with three different functions: the Bhattacharyya function, the entropy function, and the function $f(x) = x(1 - x)$.
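For reference, the three candidate functions can be written out as follows; treating $x$ as a crossover-type probability in $(0, 1)$ is our assumption about the parametrization, not stated in the text:

```python
import math

# The three cost functions mentioned in the text, written for a parameter
# x in (0, 1). The interpretation of x as a crossover-type probability
# (and hence these exact formulas) is an assumption for illustration.

def entropy(x):
    """Binary entropy function h(x), in bits."""
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def bhattacharyya(x):
    """Bhattacharyya parameter of a BSC with crossover probability x."""
    return 2 * math.sqrt(x * (1 - x))

def quadratic(x):
    """f(x) = x(1 - x), a simple concave surrogate for the two above."""
    return x * (1 - x)
```

All three are concave on $(0, 1)$ and peak at $x = 1/2$, which is the property the merging cost above exploits.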
