Module 4, Lecture 1: Entropy Rate
G.L. Heileman

Entropy Rate

In this module we study how to quantify the uncertainty associated with a stochastic process. In the last module we studied this same problem, but only for the very special case of an iid process. Specifically, we used the AEP to show that on average, $nH(X)$ bits are sufficient to describe a sequence $X^n$ of iid RVs. When the RVs are dependent, we would expect this dependence to lower the overall uncertainty, since each symbol becomes partially predictable from those that precede it. We will now drop the iid assumption and consider whether the per-symbol uncertainty associated with the process converges to a limit.
Entropy Rate

Definition (Entropy Rate). The entropy rate of a stochastic process $\{X_i\}$, $X_i \in \mathcal{X}$, is defined as
$$H(\mathcal{X}) = \lim_{n \to \infty} \frac{1}{n} H(X_1, \ldots, X_n),$$
when the limit exists. $H(\mathcal{X})$ gives a measure of how the entropy of the sequence $X^n$ grows with $n$. Since we are dividing by $n$, we can think of $H(\mathcal{X})$ as providing an average per-symbol entropy of the random variables in the sequence. It is important to distinguish between $H(\mathcal{X})$ and $H(X_n)$, where the latter is the entropy of the process at time $n$.
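To make the definition concrete, here is a minimal Python sketch (my own illustration, not from the slides) that evaluates $\frac{1}{n}H(X_1,\ldots,X_n)$ directly from a joint pmf. The toy process used here, a binary sequence in which each symbol repeats the previous one with probability 0.9, is a hypothetical example chosen only to show the per-symbol entropy settling toward a limit as $n$ grows; the function names are mine.

```python
import itertools
import math

def joint_entropy(pmf):
    """Entropy in bits of a joint pmf given as {outcome_tuple: probability}."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def per_symbol_entropy(joint_pmf_of_n, n):
    """(1/n) * H(X_1, ..., X_n): the quantity whose limit defines the entropy rate."""
    return joint_entropy(joint_pmf_of_n) / n

# Toy dependent binary process (hypothetical, for illustration only):
# X_1 ~ Bernoulli(0.5), and each X_{i+1} repeats X_i with probability 0.9.
def pmf_first_n(n, stay=0.9):
    pmf = {}
    for seq in itertools.product((0, 1), repeat=n):
        p = 0.5                              # uniform start
        for a, b in zip(seq, seq[1:]):
            p *= stay if a == b else 1 - stay
        pmf[seq] = p
    return pmf

for n in (1, 2, 4, 8):
    print(n, per_symbol_entropy(pmf_first_n(n), n))
    # decreases with n: 1.0, 0.73, 0.60, 0.54, ... heading toward ~0.469 bits/symbol
```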
Entropy Rate

Ex: $X, X_1, X_2, \ldots$ are iid RVs. Then
$$H(\mathcal{X}) = \lim_{n \to \infty} \frac{1}{n} H(X_1, \ldots, X_n) = \lim_{n \to \infty} \frac{nH(X)}{n} = H(X).$$
Thus, if $X$ has the uniform distribution, then $H(\mathcal{X}) = \log |\mathcal{X}|$.

Ex: $X_1, X_2, \ldots$ are independent (but not necessarily identically distributed) RVs. Then
$$H(\mathcal{X}) = \lim_{n \to \infty} \frac{1}{n} H(X_1, \ldots, X_n) = \lim_{n \to \infty} \frac{\sum_{i=1}^{n} H(X_i)}{n}.$$
There are distributions on $X_1, X_2, \ldots$ for which this limit does not exist, e.g., assume $\{X_i\}$ is a binary sequence where $p_i = \Pr\{X_i = 1\}$ is a function of $i$ given by
$$p_i = \begin{cases} 0.5, & \text{if } 2k < \log \log i \le 2k+1, \\ 0, & \text{if } 2k+1 < \log \log i \le 2k+2, \end{cases} \qquad k = 0, 1, 2, \ldots$$
(A small code sketch of this construction appears below.)
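The sketch below (not from the slides) makes the construction explicit. It assumes natural logarithms in $\log \log i$ and handles the interval endpoints only approximately. Because $\log \log i$ crosses successive integers only around $i = e^{e^m}$, the blocks where $p_i$ is constant grow doubly exponentially, so each new block dwarfs everything before it and drags the running average of $H(X_i)$ toward 1 (during $p_i = 0.5$ blocks) or toward 0 (during $p_i = 0$ blocks).

```python
import math

def p_i(i):
    """Pr{X_i = 1} from the slide: 0.5 when log log i lies in (2k, 2k+1],
    0 when it lies in (2k+1, 2k+2].  Assumes natural logs and i >= 3 so
    that log(log(i)) > 0; interval endpoints are handled approximately."""
    t = math.log(math.log(i))
    return 0.5 if (t % 2.0) <= 1.0 else 0.0

def binary_entropy(p):
    """Entropy in bits of a Bernoulli(p) random variable."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Running average (1/n) * sum_i H(X_i).  The constant-p blocks end near
# i = e^e ~ 15, e^(e^2) ~ 1618, e^(e^3) ~ 5.3e8, ..., so the average keeps
# being pulled toward 1 and then back toward 0; hence no limit exists.
running_sum = 0.0
for i in range(3, 2001):
    running_sum += binary_entropy(p_i(i))
    if i in (15, 1618, 2000):
        print(i, running_sum / (i - 2))
```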
Entropy Rate

Ex (con't): This leads to a nonstationary process. The running average of $H(X_i)$ oscillates between 0 and 1, which means the limit, and therefore $H(\mathcal{X})$, does not exist.

Consider a two-state Markov chain with probability transition matrix
$$[P_{ij}] = \begin{bmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{bmatrix}.$$
We previously showed that the stationary distribution of this Markov chain is given by
$$\mu_1 = \frac{\beta}{\alpha + \beta}, \qquad \mu_2 = \frac{\alpha}{\alpha + \beta}.$$
Thus, the entropy of this process at time $n$ is
$$H(X_n) = H\!\left(\frac{\beta}{\alpha + \beta}, \frac{\alpha}{\alpha + \beta}\right).$$
This is ...
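As a sanity check on the formulas above (my own sketch, not part of the lecture), the following Python snippet computes the stationary distribution $(\mu_1, \mu_2) = (\beta, \alpha)/(\alpha+\beta)$ for given $\alpha, \beta$, verifies that it is invariant under the transition matrix, and evaluates $H(X_n) = H(\mu_1, \mu_2)$ in bits; the function names and the example values $\alpha = 0.1$, $\beta = 0.3$ are assumptions for illustration.

```python
import math

def binary_entropy(p):
    """Entropy in bits of a Bernoulli(p) random variable."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def stationary_state_entropy(alpha, beta):
    """Stationary distribution and per-state entropy H(X_n) of the
    two-state chain with P = [[1-a, a], [b, 1-b]]."""
    mu1 = beta / (alpha + beta)
    mu2 = alpha / (alpha + beta)

    # Check invariance: mu P = mu.
    assert abs(mu1 * (1 - alpha) + mu2 * beta - mu1) < 1e-12
    assert abs(mu1 * alpha + mu2 * (1 - beta) - mu2) < 1e-12

    return (mu1, mu2), binary_entropy(mu1)

# Example: alpha = 0.1, beta = 0.3 gives mu = (0.75, 0.25),
# so H(X_n) = H(0.75, 0.25) ~ 0.811 bits once the chain is stationary.
print(stationary_state_entropy(0.1, 0.3))
```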

