Bernoulli random variables with parameter 1/2.
Theorem 3. Using a random parity-check algorithm, we can achieve an error exponent of N/log N and an erasure exponent of √N without any rate loss.
Proof. Using Lemma 4, let p be O(N/log N). Then the rate would be

\[
r = \frac{k - N/\log N}{N} = O\!\left(\frac{k}{N}\right) \tag{4.17}
\]

So we do not have a rate loss asymptotically.
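As a quick numeric sanity check (a sketch, assuming p = N/log₂ N parity bits, so the rate loss is p/N = 1/log₂ N), the loss indeed vanishes as the block length grows:

```python
import math

# Rate loss of the parity-check scheme: with p = N / log2(N) parity
# bits, the loss relative to the polar code rate is p/N = 1/log2(N).
for m in (10, 20, 30):
    N = 2 ** m
    p = N / math.log2(N)
    print(f"N = 2^{m}: rate loss = {p / N:.4f}")
# N = 2^10: rate loss = 0.1000
# N = 2^20: rate loss = 0.0500
# N = 2^30: rate loss = 0.0333
```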
We know that P_SC is asymptotically O(2^{-√N}) [7]. Therefore, by Lemma 4.2.2 the error probability is asymptotically

\[
P_e = O\!\left(\frac{2^{-\sqrt{N}}}{2^{N/\log N}}\right) = O\!\left(2^{-N/\log N}\right) \tag{4.18}
\]
The erasure probability is

\[
P_{\mathrm{erase}} = P_{SC} - P_e = O\!\left(2^{-\sqrt{N}}\right) \tag{4.19}
\]
In Table 4.3, we see the simulation results of the parity-check algorithm. The polar code rate is 0.4, but, as mentioned previously, we have some rate loss due to the use of parity-check bits. The error probability of the SC decoder is the error probability when transmitting at the "actual" rate. The validity of Lemma 4 is shown numerically in Table 4.3.

p        3       4       5       6       7       8
R        0.3965  0.3955  0.3945  0.3936  0.3926  0.3916
P_e      0.0448  0.0225  0.0104  0.0059  0.0033  0.0017
P_erase  0.3027  0.3249  0.3371  0.3416  0.3442  0.3458
P_SC     0.3298  0.3207  0.3090  0.3024  0.2976  0.2806

Table 4.3: Performance of the parity-check algorithm with p bits of parity, when transmission takes place over a BSC with capacity 0.5, block-length N = 2^10, and polar code rate 0.4. (P_SC = 0.3475)
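The mechanism behind Lemma 4 can be illustrated with a small simulation. Below is a minimal sketch (not the exact construction used in the thesis): p random parity checks are drawn over the k message positions, and a wrong decoder output satisfies all p checks, producing an undetected error, with probability about 2^{-p}. All function names and parameters here are illustrative assumptions.

```python
import random

def make_checks(k: int, p: int, rng: random.Random):
    """p random parity checks: each message position joins a check
    independently with probability 1/2 (hypothetical construction)."""
    return [[i for i in range(k) if rng.random() < 0.5] for _ in range(p)]

def parity_bits(msg, checks):
    """Recompute the p parity bits of a message."""
    return [sum(msg[i] for i in c) % 2 for c in checks]

# A wrong decoded message passes all p checks (an undetected error)
# with probability ~2^-p; otherwise an erasure is declared.
k, p, trials = 64, 5, 20000
rng = random.Random(0)
checks = make_checks(k, p, rng)
undetected = 0
for _ in range(trials):
    msg = [rng.randint(0, 1) for _ in range(k)]
    sent = parity_bits(msg, checks)
    wrong = [rng.randint(0, 1) for _ in range(k)]  # model of an SC decoding error
    if wrong == msg:
        continue  # not actually an error (vanishingly rare)
    if parity_bits(wrong, checks) == sent:
        undetected += 1
rate = undetected / trials
print(rate)  # close to 2**-p = 0.03125
```

This matches the trend in Table 4.3: each additional parity bit roughly halves P_e, at the cost of a small rate loss and a slightly larger erasure probability.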
4.2.3
Combined Algorithm
In this section, we combine the algorithms described in Sections 4.2.1 and 4.2.2 to provide a powerful tool for detecting block errors in a general channel while using a polar encoder and decoder. In fact, we declare an erasure if either the parity check or the typicality check fails. The interesting problem here is that we have two parameters, t and p, to tune. Similarly to Lemma 4, it is easy to prove that for the combined algorithm the error probability is reduced by a factor of 2^p with respect to the error probability of the typicality-check algorithm (P_e = P_{e,typical}/2^p). One interesting question in this regard is whether combining the algorithms helps at all. In fact, both algorithms detect most of the block errors of the SC decoder, at the expense of missing some of the correct blocks, or alternatively some rate loss. Therefore, one algorithm might totally outperform the other. The intuitive answer to this question is that combining the two algorithms does help the performance, since each of them detects errors through a different underlying logic. The parity-check algorithm detects all of the errors except a fraction of 1/2^p on average, and its success has nothing to do with the "quality" of the received data. On the other hand, the typicality-check algorithm detects what we call "strong" noises (which are indeed the non-typical ones) and misses the errors due to "mild" noises. Therefore, combining the two algorithms has the advantage that even in the case of a mild noise we detect all the errors except a small fraction of 1/2^p. Another interesting engineering problem here is how to tune the parameters p and t to design a polar code with given error probability and erasure probability requirements.
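The combined decision rule can be sketched as follows. This is a minimal illustration, not the thesis's exact implementation: the typicality check is modeled here as a test on the estimated noise weight for a BSC (accept only if the weight is within a tolerance t of the typical weight N·ε), which is an assumption, and all names and numeric values below are illustrative.

```python
def combined_decision(decoded_parities, sent_parities,
                      noise_weight: int, n: int, eps: float, t: float):
    """Combined erasure rule: accept only if BOTH checks pass.

    - Parity check: the p parity bits recomputed from the decoded
      message must equal the transmitted ones.
    - Typicality check (BSC model, an assumption here): the estimated
      noise weight must be within t of the typical weight n * eps.
    Returns "accept" or "erase".
    """
    parity_ok = decoded_parities == sent_parities
    typical_ok = abs(noise_weight - n * eps) <= t
    return "accept" if (parity_ok and typical_ok) else "erase"

# Illustrative values: N = 1024, BSC crossover eps ~ 0.11 (capacity ~ 0.5),
# weight tolerance t = 40.
print(combined_decision([0, 1, 1], [0, 1, 1], 120, 1024, 0.11, 40))  # accept
print(combined_decision([0, 1, 1], [0, 1, 0], 120, 1024, 0.11, 40))  # erase (parity fails)
print(combined_decision([0, 1, 1], [0, 1, 1], 200, 1024, 0.11, 40))  # erase (atypical noise)
```

The two tests are complementary, mirroring the discussion above: the parity check catches a 1 − 2^{-p} fraction of errors regardless of the noise, while the weight test flags the "strong" (non-typical) noise realizations.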

