Module 2, Lecture 1
Fundamental Concepts: Entropy
G.L. Heileman
Measuring Information

Intuitively, we obtain "information" when we learn something we didn't know before. We can also say that we gain information when the level of uncertainty about some outcome is reduced. E.g., as a certain date approaches, the level of uncertainty that we have about what the weather will be like on that date decreases; i.e., we have more "information" about what the weather will be like.

But how do we measure these things? In this course we will generally measure information using a probabilistic framework. However, it is important to recognize that there are other ways to measure information; we'll discuss a few of these later in the course.
Measuring Information–Probabilistic Approach

Consider the following experiment, involving an information source that is generating events (or messages) according to some probability distribution:

There are n possible messages, m = {m_1, ..., m_n}, and the a priori probability of message m_i is denoted Pr{m_i}. For convenience, we will write Pr{m_i} as p(m_i).

Since |m| is finite, this is called a discrete information source (or simply a discrete source).

We'll assume that the events are exhaustive, i.e., ∑_i p(m_i) = 1, as well as disjoint.
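As a concrete illustration of such a discrete source, the short Python sketch below draws messages according to a fixed distribution. The four message labels and the probabilities used here are illustrative assumptions, not values from the lecture.

    import random

    messages = ["m1", "m2", "m3", "m4"]     # the n possible messages
    probs = [0.5, 0.25, 0.125, 0.125]       # a priori probabilities p(m_i)

    # Exhaustive events: the probabilities must sum to 1.
    assert abs(sum(probs) - 1.0) < 1e-9

    def emit(k=10):
        """Draw k messages according to the source's distribution."""
        return random.choices(messages, weights=probs, k=k)

    print(emit())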
Measuring Information–Probabilistic Approach

Before the discrete source generates an outcome there is a certain amount of uncertainty about what the message will be, and after an outcome is generated, we gain a certain amount of information about the source. Consider the extremes:

- p(m_1) = 1 and p(m_i) = 0 for i = 2, ..., n. The outcome of the experiment is certain, so there is no uncertainty, and we gain no information by observing the outcome.
- p(m_i) = 1/n for i = 1, ..., n. When each of the outcomes is equally likely, it seems that uncertainty should be maximal, and that we will get the maximal amount of information possible by observing the outcome.

Thus, the task of measuring information (or uncertainty) seems to involve a function that maps a priori probabilities to a single real number given in units of information (or uncertainty).
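To make the two extremes concrete, the sketch below evaluates the standard Shannon uncertainty measure, H(p) = ∑_i p_i log2(1/p_i) (equivalently -∑_i p_i log2 p_i), at both distributions. This formula is assumed here purely for illustration; the lecture has not yet derived it at this point, and n = 4 is an arbitrary choice.

    import math

    def H(p):
        """Uncertainty of a distribution p, in bits; terms with p_i = 0 contribute 0."""
        return sum(pi * math.log2(1 / pi) for pi in p if pi > 0)

    n = 4
    certain = [1.0] + [0.0] * (n - 1)   # p(m_1) = 1, p(m_i) = 0 for i = 2, ..., n
    uniform = [1.0 / n] * n             # p(m_i) = 1/n for every i

    print(H(certain))   # 0.0 -> no uncertainty, no information gained
    print(H(uniform))   # 2.0 -> log2(n) bits, the maximum for n = 4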
Measuring Information–Probabilistic Approach

Let's call the function identified on the previous slide the uncertainty function, and denote it H(p(m_1), ..., p(m_n)). We can further define this function by stating some of the properties it should possess:

Property 1: We would like H(p_1, ..., p_n) to be defined for all p_1, ..., p_n satisfying 0 ≤ p_i ≤ 1 and ∑_i p_i = 1. (I.e., H is a function of the probabilities only.)

Property 2: A small change in probabilities should produce only a small change in uncertainty.
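Property 2 is essentially a continuity requirement. The sketch below, again assuming the Shannon form of H purely for illustration, perturbs an arbitrary example distribution by a small eps and shows that the uncertainty changes by a correspondingly small amount.

    import math

    def H(p):
        """Shannon uncertainty in bits, assumed here only to illustrate Property 2."""
        return sum(pi * math.log2(1 / pi) for pi in p if pi > 0)

    p = [0.5, 0.25, 0.25]
    eps = 1e-4
    q = [0.5 + eps, 0.25 - eps, 0.25]   # a slightly perturbed (still valid) distribution

    # The difference is on the order of eps: a small change in the probabilities
    # produces only a small change in uncertainty.
    print(abs(H(p) - H(q)))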