ECE 178 Digital Image Processing Discussion Session #8
{anindya,msargin}@ece.ucsb.edu
March 2, 2007

The notes are based on material in Cover and Thomas.
Entropy is a measure of the uncertainty of a random variable. Let X be a discrete random variable that takes values in an alphabet 𝒳, with probability mass function p(x) = Pr{X = x}, x ∈ 𝒳. For notational convenience, we denote p_X(x) (the probability that the random variable X takes the value x) by p(x).
The entropy H(X) of a discrete random variable X is defined by

H(X) = -\sum_{x \in \mathcal{X}} p(x) \log p(x)    (1)
If the base of the logarithm is b, we denote the entropy as H_b(X). If the base of the logarithm is e, the entropy is measured in nats. Generally, logarithms are computed to base 2, and the corresponding unit for the entropy is bits.
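As an illustration (not part of the original notes), the definition in Eq. (1) can be computed directly from a probability mass function; the sketch below is in Python, and the helper name `entropy` is an assumption for this example:

```python
import math

def entropy(pmf, base=2):
    """Entropy H(X) = -sum_x p(x) log_b p(x) of a probability mass function.

    `pmf` is a sequence of probabilities summing to 1. Terms with p(x) = 0
    contribute nothing, by the usual convention 0 log 0 = 0.
    """
    return -sum(p * math.log(p, base) for p in pmf if p > 0)
```

For example, a fair coin has entropy `entropy([0.5, 0.5]) = 1.0` bit, while a deterministic outcome (`entropy([1.0])`) has zero entropy, i.e. no uncertainty.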
For a binary random variable X, where Pr(X = 0) = p and Pr(X = 1) = 1 - p, the entropy H(X) can be written as H(p). Thus,

H(p) = -(p \log_2 p + (1 - p) \log_2 (1 - p)).
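The binary entropy function can be checked numerically; the small Python sketch below is an addition to the notes, and the name `binary_entropy` is a hypothetical helper:

```python
import math

def binary_entropy(p):
    """H(p) = -(p log2 p + (1 - p) log2(1 - p)), with 0 log 0 = 0."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no uncertainty
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
```

H(p) peaks at 1 bit for p = 1/2 (maximum uncertainty), is symmetric in p and 1 - p, and falls to 0 as p approaches 0 or 1.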
Example: suppose a random variable X has a uniform distribution over 32 possible outcomes. Since an outcome of X can take one of 32 values, we need a 5-bit number to represent the outcome. Thus, Pr(X = i) = 1/32 for i = 1, ..., 32 (assuming X takes the values 1 to 32 with equal probability). The entropy of the random variable X is

H(X) = -\sum_{i=1}^{32} p(i) \log p(i) = -\sum_{i=1}^{32} \frac{1}{32} \log \frac{1}{32} = \log 32 = 5 bits,

assuming logarithm to base 2.
Thus, the entropy equals the number of bits required to represent X. In this case, if we use a 5-bit number, we can represent X exactly (with no uncertainty). This is why the entropy of a random variable is called a measure of its “uncertainty”.
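The 5-bit result for the uniform distribution can be verified numerically; a minimal Python sketch (an addition, not part of the original notes):

```python
import math

# Uniform distribution over 32 outcomes: p(i) = 1/32 for each outcome i.
pmf = [1 / 32] * 32

# H(X) = -sum_i p(i) log2 p(i)
H = -sum(p * math.log2(p) for p in pmf)
print(H)  # 5.0, i.e. 5 bits = log2(32)
```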
We now consider an example where X follows a non-uniform distribution. Suppose we have a horse race with 8 horses taking part. Assume that their probabilities of winning are (1/2, 1/4, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64). We can calculate the entropy of the horse race as

H(X) = -\frac{1}{2}\log\frac{1}{2} - \frac{1}{4}\log\frac{1}{4} - \frac{1}{8}\log\frac{1}{8} - \frac{1}{16}\log\frac{1}{16} - 4 \cdot \frac{1}{64}\log\frac{1}{64} = 2 bits.
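The entropy of this non-uniform distribution can also be checked numerically; a short Python sketch (an addition to the notes):

```python
import math

# Winning probabilities of the 8 horses, as listed above.
probs = [1/2, 1/4, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64]

# H(X) = -sum_i p(i) log2 p(i)
H = -sum(p * math.log2(p) for p in probs)
print(H)  # 2.0 bits
```

Note that 2 bits is well below the 3 bits needed for a uniform distribution over 8 horses: the skewed probabilities reduce the uncertainty of the race.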
Winter '08, Instructor: Manjunath. Notes by Mehmet Emre Sargin and Anindya Sarkar.