$$
\begin{aligned}
I(X;\hat{X}) &= H(X) - H(X \mid \hat{X})\\
&= \log m - H(E \mid \hat{X}) - H(X \mid \hat{X}, E)\\
&\geq \log m - H(E) - H(X \mid \hat{X}, E)\\
&\geq \log m - D \log(m-1) - H_b(D), \qquad (1)
\end{aligned}
$$

where $H_b(\cdot)$ is the binary entropy function and $E$ is the indicator that $\hat{X} = X$. The second equality follows by the chain rule and since $H(X \mid \hat{X}) = H(X, E \mid \hat{X})$ (because $E$ is a function of $(X,\hat{X})$); the subsequent inequality since conditioning reduces entropy; and the last inequality from the rate-distortion constraint $P_E(0) = \mathsf{E}[d(X,\hat{X})] \leq D$, since $H(E) = H_b(P_E(0))$ and $H(X \mid \hat{X}, E) \leq P_E(0)\log(m-1)$, and by noting that the function $t\log(m-1) + H_b(t)$ is nondecreasing in $t$ in the interval $t \in \big[0, \frac{m-1}{m}\big]$.
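The bound in (1) can be checked numerically (an illustrative sketch, not part of the proof; it uses NumPy and natural logarithms): draw random test channels $P_{\hat{X}\mid X}$ for a uniform $m$-ary source, compute the resulting Hamming distortion $D = \Pr[X \neq \hat{X}]$, and confirm that $I(X;\hat{X})$ never falls below $\log m - D\log(m-1) - H_b(D)$.

```python
import numpy as np

def Hb(t):
    """Binary entropy in nats; Hb(0) = Hb(1) = 0."""
    if t <= 0.0 or t >= 1.0:
        return 0.0
    return -t * np.log(t) - (1 - t) * np.log(1 - t)

def mutual_information(Pxy):
    """I(X;Y) in nats for a joint pmf given as a 2-D array."""
    Px = Pxy.sum(axis=1, keepdims=True)
    Py = Pxy.sum(axis=0, keepdims=True)
    mask = Pxy > 0
    return float((Pxy[mask] * np.log(Pxy[mask] / (Px @ Py)[mask])).sum())

rng = np.random.default_rng(0)
m = 4
for _ in range(1000):
    W = rng.dirichlet(np.ones(m), size=m)   # random test channel; row x is P(.|x)
    Pxy = W / m                             # joint pmf of (X, X-hat); X uniform
    D = 1.0 - np.trace(Pxy)                 # Hamming distortion Pr[X != X-hat]
    lower = np.log(m) - D * np.log(m - 1) - Hb(D)
    assert mutual_information(Pxy) >= lower - 1e-9
```

Here the bound is applied with $D$ equal to the exact distortion of the drawn channel, which is the form in which it can hold with equality.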
Next we want to show that with the following choice of the conditional distribution of $\hat{X}$ given $X$,

$$
P_{\hat{X}\mid X}(\hat{x}\mid x) =
\begin{cases}
1-D & \text{if } \hat{x} = x,\\
\dfrac{D}{m-1} & \text{else},
\end{cases} \qquad (2)
$$

both inequalities in (1) are met with equality. Indeed, noting that
$$
P_{\hat{X}}(\hat{x}) = \sum_{x} P_{\hat{X}\mid X}(\hat{x}\mid x)\,P_X(x)
= (1-D)\,\frac{1}{m} + (m-1)\,\frac{D}{m-1}\,\frac{1}{m}
= \frac{1}{m},
$$

and that for $x \neq \hat{x}$ we have
$$
P_{X\mid\hat{X}}(x\mid\hat{x})
= \frac{P_{X,\hat{X}}(x,\hat{x})}{P_{\hat{X}}(\hat{x})}
= \frac{P_{\hat{X}\mid X}(\hat{x}\mid x)\,P_X(x)}{P_{\hat{X}}(\hat{x})},
$$

we obtain
$$
P_{E\mid\hat{X}}(0\mid\hat{x})
= \sum_{x \neq \hat{x}} P_{X\mid\hat{X}}(x\mid\hat{x})
= (m-1)\,\frac{\frac{D}{m-1}\cdot\frac{1}{m}}{\frac{1}{m}}
= (m-1)\,\frac{D}{m-1}
= D.
$$
Since $P_{E\mid\hat{X}}(0\mid\hat{x})$ does not depend on $\hat{x}$, it follows that $P_E(0) = D$. Thus the chosen distribution satisfies the distortion constraint; additionally, $E$ is independent of $\hat{X}$, and hence the first inequality in (1) holds with equality.
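Both facts — that $\hat{X}$ is again uniform and that $P_{E\mid\hat{X}}(0\mid\hat{x}) = D$ for every $\hat{x}$ — are easy to confirm numerically for the channel in (2). A minimal sketch (the values of $m$ and $D$ below are arbitrary examples):

```python
import numpy as np

m, D = 5, 0.3                                # example values, D < (m-1)/m
# test channel (2): 1 - D on the diagonal, D/(m-1) off it
W = np.full((m, m), D / (m - 1)) + (1 - D - D / (m - 1)) * np.eye(m)
Pxy = W / m                                  # joint pmf; X is uniform

Pxhat = Pxy.sum(axis=0)                      # marginal of X-hat
assert np.allclose(Pxhat, 1 / m)             # X-hat is also uniform

# P(X != X-hat | X-hat = xhat): the same value D for every xhat
err_given_xhat = 1 - np.diag(Pxy) / Pxhat
assert np.allclose(err_given_xhat, D)

# and given an error, X is uniform over the m-1 symbols other than X-hat
assert np.isclose(Pxy[0, 1] / (Pxhat[1] * D), 1 / (m - 1))
```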
Furthermore, from the choice in (2) it follows that

$$
P_{X\mid\hat{X},E}(x\mid\hat{x},0) =
\begin{cases}
0 & \text{for } x = \hat{x},\\
\dfrac{1}{m-1} & \text{for } x \neq \hat{x}.
\end{cases}
$$

This can be verified by writing
$$
P_{X\mid\hat{X},E}(x\mid\hat{x},0)
= \frac{P_{X,\hat{X},E}(x,\hat{x},0)}{P_{\hat{X},E}(\hat{x},0)}
= \frac{P_{\hat{X}}(\hat{x})\,P_{X\mid\hat{X}}(x\mid\hat{x})\,P_{E\mid\hat{X},X}(0\mid\hat{x},x)}{P_{\hat{X}}(\hat{x})\,P_{E\mid\hat{X}}(0\mid\hat{x})}
= \frac{P_{X\mid\hat{X}}(x\mid\hat{x})\,P_{E\mid\hat{X},X}(0\mid\hat{x},x)}{P_{E\mid\hat{X}}(0\mid\hat{x})},
$$

and by noting that
$$
P_{E\mid\hat{X},X}(0\mid\hat{x},x) =
\begin{cases}
0 & \text{if } x = \hat{x},\\
1 & \text{if } x \neq \hat{x},
\end{cases}
$$

and
$$
P_{E\mid\hat{X}}(0\mid\hat{x})
= \sum_{x} P_{X\mid\hat{X}}(x\mid\hat{x})\,P_{E\mid\hat{X},X}(0\mid\hat{x},x)
= \sum_{x \neq \hat{x}} \frac{D}{m-1}\cdot 1
= D.
$$

Hence, given $E = 0$ and $\hat{X} = \hat{x}$, the source symbol $X$ is uniform over the $m-1$ symbols other than $\hat{x}$, so $H(X\mid\hat{X},E) = P_E(0)\log(m-1) = D\log(m-1)$, and the second inequality in (1) also holds with equality. We can thus conclude that for $D \leq \frac{m-1}{m}$,

$$
R(D) = \min_{P_{\hat{X}\mid X}:\,\mathsf{E}[d(X,\hat{X})]\leq D} I(X;\hat{X})
= \log m - D\log(m-1) - H_b(D).
$$

© Amos Lapidoth, 2012
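Finally, one can check numerically that the channel in (2) indeed achieves the closed-form expression, and that the endpoints come out as expected, $R(0) = \log m$ and $R\!\left(\frac{m-1}{m}\right) = 0$ (again an illustrative sketch, in natural logarithms):

```python
import numpy as np

def Hb(t):
    """Binary entropy in nats; Hb(0) = Hb(1) = 0."""
    if t == 0.0 or t == 1.0:
        return 0.0
    return -t * np.log(t) - (1 - t) * np.log(1 - t)

def R(D, m):
    """Closed-form rate-distortion function of a uniform m-ary source."""
    return np.log(m) - D * np.log(m - 1) - Hb(D)

def mutual_information(Pxy):
    """I(X;Y) in nats for a joint pmf given as a 2-D array."""
    Px = Pxy.sum(axis=1, keepdims=True)
    Py = Pxy.sum(axis=0, keepdims=True)
    mask = Pxy > 0
    return float((Pxy[mask] * np.log(Pxy[mask] / (Px @ Py)[mask])).sum())

m = 6
for D in (0.05, 0.2, 0.5, (m - 1) / m):
    # channel (2) for this distortion level
    W = np.full((m, m), D / (m - 1)) + (1 - D - D / (m - 1)) * np.eye(m)
    I = mutual_information(W / m)
    assert np.isclose(I, R(D, m))            # (2) achieves the closed form
assert np.isclose(R(0.0, m), np.log(m))      # R(0) = log m
assert np.isclose(R((m - 1) / m, m), 0.0)    # R((m-1)/m) = 0
```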
This note was uploaded on 05/18/2013 for the course EE Informatio taught by Professor Amos Lapidoth during the Fall '11 term at Swiss Federal Institute of Technology Zurich.