Module 2, Lecture 1
Fundamental Concepts: Entropy
G.L. Heileman
Measuring Information
Intuitively, we obtain "information" when we learn something we didn't know before. We can also say that we gain information when the level of uncertainty about some event is reduced.
Module 2, Lecture 2
Fundamental Concepts: Entropy, Relative Entropy, Mutual Information
G.L. Heileman
Entropy
Definition (Entropy) The entropy of a discrete RV X ~ p(x) is:

H(X) = \sum_{x \in \mathcal{X}} p(x) \log \frac{1}{p(x)} = E\left[ \log \frac{1}{p(X)} \right]
Properties of H(X):
1. H(X) ≥ 0.
Proof: Since 0 ≤ p(x) ≤ 1, log(1/p(x)) ≥ 0 for every x, so the expectation is nonnegative.
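The definition can be checked numerically. A minimal sketch (the function name `entropy` is mine, not from the lecture) computes H(X) in bits for a finite distribution, using the convention 0 log(1/0) = 0:

```python
import math

def entropy(p):
    """Shannon entropy H(X) = sum_x p(x) * log2(1/p(x)), in bits.
    Terms with p(x) = 0 contribute 0 by convention."""
    return sum(-px * math.log2(px) for px in p if px > 0)

# A fair coin carries one bit of uncertainty; a deterministic RV carries none.
print(entropy([0.5, 0.5]))   # 1.0
print(entropy([1.0, 0.0]))   # 0.0
print(entropy([0.25] * 4))   # 2.0
```

Note that the uniform distribution on four outcomes attains log2(4) = 2 bits, the maximum for an alphabet of size four.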
Module 2, Lecture 3
Fundamental Concepts: Information Inequalities I
G.L. Heileman
Convex Functions
Many useful inequalities in information theory make use of the notion of convexity. Definition (Convex) A real function f defined on (a, b) is convex if for every x1, x2 ∈ (a, b) and every λ with 0 ≤ λ ≤ 1, f(λx1 + (1 − λ)x2) ≤ λ f(x1) + (1 − λ) f(x2).
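Jensen's inequality, E[f(X)] ≥ f(E[X]) for convex f, follows from this definition. A small numerical sketch (the helper name `jensen_gap` is mine) checks it for f(x) = x², where the gap is exactly Var(X):

```python
def jensen_gap(f, xs, probs):
    """E[f(X)] - f(E[X]); nonnegative whenever f is convex (Jensen's inequality)."""
    ex = sum(p * x for p, x in zip(probs, xs))     # E[X]
    efx = sum(p * f(x) for p, x in zip(probs, xs)) # E[f(X)]
    return efx - f(ex)

# For f(x) = x^2 the gap is E[X^2] - (E[X])^2 = Var(X) >= 0.
gap = jensen_gap(lambda x: x * x, [1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4])
print(gap)  # approximately 1.0, the variance of this distribution
```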
Module 2, Lecture 4
Fundamental Concepts: Information Inequalities II
G.L. Heileman
Stochastic Processes
A number of the inequalities we consider next are associated with Markov chains, a special type of stochastic process. Definition (Markov chain) A discrete stochastic process X1, X2, ... forms a Markov chain if, for all n, Pr(X_{n+1} = x_{n+1} | X_n = x_n, ..., X_1 = x_1) = Pr(X_{n+1} = x_{n+1} | X_n = x_n).
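A small numerical illustration (the helper name `stationary` is mine): the distribution of a well-behaved Markov chain settles to a stationary distribution μ satisfying μP = μ, which can be found by iterating the transition matrix:

```python
def stationary(P, iters=500):
    """Stationary distribution of a Markov chain with transition matrix P
    (rows sum to 1), found by repeatedly applying P to a uniform start."""
    n = len(P)
    mu = [1.0 / n] * n
    for _ in range(iters):
        mu = [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]
    return mu

# Two-state chain: stays in state 0 with prob 0.9, in state 1 with prob 0.8.
P = [[0.9, 0.1], [0.2, 0.8]]
print(stationary(P))  # approximately [2/3, 1/3]
```

The balance condition μ0 · 0.1 = μ1 · 0.2 gives μ = (2/3, 1/3) analytically, matching the iteration.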
Module 3, Lecture 1
The Asymptotic Equipartition Property
G.L. Heileman
Asymptotic Equipartition Property
Consider an experiment in which you tossed a fair coin 100 times. Would you be surprised if you observed 100 heads in a row? Well, note that every specific sequence of 100 tosses, including 100 heads in a row, occurs with exactly the same probability, 2^−100.
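This can be made concrete with a short computation (standard library only): each individual sequence has probability 2^−100, yet nearly all of the probability mass is carried by sequences with close to 50 heads. That concentration on "typical" sequences is the essence of the AEP:

```python
from math import comb

n = 100
total = 2 ** n

# Every individual sequence, including all heads, has probability 2^-100,
# but almost all of the mass sits on sequences with roughly 50 heads.
p_all_heads = 1 / total
p_40_to_60 = sum(comb(n, k) for k in range(40, 61)) / total

print(p_all_heads)  # about 7.9e-31
print(p_40_to_60)   # about 0.96
```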
Module 4, Lecture 1
Entropy Rate
G.L. Heileman
Entropy Rate
In this module we study how to quantify the uncertainty associated with a stochastic process. In the last module we also studied this problem, but under the very special case of i.i.d. random variables.
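For a stationary Markov chain the entropy rate has the closed form H = Σ_i μ_i Σ_j P_ij log(1/P_ij), where μ is the stationary distribution. A minimal numerical sketch (the helper name is mine):

```python
import math

def markov_entropy_rate(P, mu):
    """Entropy rate H = sum_i mu_i * H(P_i.) of a stationary Markov chain,
    where mu is the stationary distribution and P the transition matrix."""
    rate = 0.0
    for i, row in enumerate(P):
        rate += mu[i] * sum(-p * math.log2(p) for p in row if p > 0)
    return rate

# Two-state chain from before, with stationary distribution (2/3, 1/3).
P = [[0.9, 0.1], [0.2, 0.8]]
mu = [2 / 3, 1 / 3]
print(markov_entropy_rate(P, mu))  # about 0.553 bits per symbol
```

The rate is well below 1 bit per symbol: the chain's memory (it tends to stay put) makes each new symbol far less uncertain than a fair coin toss.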
Module 5, Lecture 1
Data Compression: Introduction
G.L. Heileman
Data Compression Introduction
In this module we study the science (art) of representing information (i.e., data) in a compact form. Key idea: Compact representations are obtained by assigning short descriptions to the most probable outcomes, and necessarily longer descriptions to the less probable ones.
Module 5, Lecture 2
Data Compression: Huffman and Arithmetic Coding
G.L. Heileman
Huffman Coding
For a given RV X, assuming knowledge of only p1, ..., p|X|, Huffman coding will produce a prefix code C_Huf that is optimal in terms of expected codeword length.
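A compact sketch of the construction (the function name `huffman_code` is mine): repeatedly merge the two least probable nodes, prepending a 0 to the codewords on one side and a 1 on the other:

```python
import heapq

def huffman_code(probs):
    """Optimal prefix code for symbols 0..n-1 with the given probabilities,
    built by repeatedly merging the two least probable nodes."""
    heap = [(p, i, {i: ""}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    tie = len(probs)  # unique tiebreaker so tuples never compare dicts
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, tie, merged))
        tie += 1
    return heap[0][2]

probs = [0.4, 0.3, 0.2, 0.1]
code = huffman_code(probs)
avg_len = sum(p * len(code[i]) for i, p in enumerate(probs))
print(code, avg_len)  # expected length 1.9 bits
```

Here the expected length 1.9 bits sits just above the entropy of this source (about 1.85 bits), as the source coding theorem requires.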
Module 5, Lecture 3
Data Compression: Dictionary Methods
G.L. Heileman
Dictionary Methods
The compression methods we have considered so far make use of a probability model associated with the source in order to compress the data produced by that source.
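Dictionary methods such as LZ78 avoid an explicit probability model by building a dictionary of previously seen phrases as the data streams by. A minimal sketch of LZ78 parsing (function name mine), which emits (dictionary index, new symbol) pairs:

```python
def lz78_parse(s):
    """LZ78 parsing: split s into phrases, each a previously seen phrase
    plus one new symbol; emit (dictionary index, symbol) pairs."""
    dictionary = {"": 0}  # index 0 is the empty phrase
    out = []
    phrase = ""
    for ch in s:
        if phrase + ch in dictionary:
            phrase += ch                     # keep extending a known phrase
        else:
            out.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:  # leftover phrase at end of input
        out.append((dictionary[phrase[:-1]], phrase[-1]))
    return out

# Phrases: A | B | AA | BA | BAA | BAB
print(lz78_parse("ABAABABAABAB"))
# [(0, 'A'), (0, 'B'), (1, 'A'), (2, 'A'), (4, 'A'), (4, 'B')]
```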
Module 6, Lecture 1
Channel Coding: Introduction
G.L. Heileman
Channel Coding Introduction
In the last module, we assumed the channel from the source encoder to the source decoder was error free. In this module we consider how to deal with channels that introduce errors.
Module 6, Lecture 2
Channel Coding: The Channel Coding Theorem
G.L. Heileman
Definitions
W → encoder → X^n → channel p(y|x) → Y^n → decoder → Ŵ

A discrete channel is given by the triple (X, p(y|x), Y), where X and Y are finite sets corresponding to the channel's input and output alphabets.
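For the binary symmetric channel with crossover probability ε, the capacity works out to C = 1 − H(ε) bits per use, achieved by a uniform input distribution. A quick numerical sketch (helper names mine):

```python
import math

def h2(p):
    """Binary entropy function H(p) in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(eps):
    """Capacity C = 1 - H(eps) of a binary symmetric channel with
    crossover probability eps."""
    return 1.0 - h2(eps)

print(bsc_capacity(0.0))   # 1.0: a noiseless channel carries one bit per use
print(bsc_capacity(0.5))   # 0.0: output is independent of input
print(bsc_capacity(0.11))  # about 0.5
```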
Module 6, Lecture 3
Channel Coding: Error-Correcting Codes
G.L. Heileman
Error-correcting Codes
In this lecture we'll look at some of the practical considerations of channel coding. Specifically, we'll first consider the question of whether codes with practical encoding and decoding procedures can approach the performance promised by the channel coding theorem.
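The simplest error-correcting code is the n-fold repetition code: with n = 3 it corrects any single bit flip per block, at the steep cost of a rate of only 1/n. A minimal sketch (function names mine):

```python
def encode_repetition(bits, n=3):
    """Repeat each bit n times (a rate-1/n repetition code)."""
    return [b for bit in bits for b in [bit] * n]

def decode_repetition(received, n=3):
    """Majority-vote decoding of an n-fold repetition code."""
    out = []
    for i in range(0, len(received), n):
        block = received[i:i + n]
        out.append(1 if sum(block) > n // 2 else 0)
    return out

codeword = encode_repetition([1, 0, 1])  # [1,1,1, 0,0,0, 1,1,1]
corrupted = codeword[:]
corrupted[1] = 0                         # flip one bit in the first block
print(decode_repetition(corrupted))      # [1, 0, 1]: the flip is corrected
```

Repetition codes show that reliability is achievable, but their rate goes to zero; the channel coding theorem promises far better, namely reliable communication at any rate below capacity.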
Module 7, Lecture 1
Information Theory and Gambling
G.L. Heileman
Background: Contests
Consider a contest (e.g., a race, lottery, beauty contest, etc.) involving m contestants (e.g., horses, people, dogs, etc.) in which one and only one participant can win.
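For a race with win probabilities p and odds o, betting a fraction b_i of wealth on contestant i yields the doubling rate W(b, p) = Σ_i p_i log2(b_i o_i), and proportional betting b = p maximizes it. A numerical sketch (helper name mine):

```python
import math

def doubling_rate(p, b, odds):
    """W(b, p) = sum_i p_i * log2(b_i * o_i): the exponent at which wealth
    grows when a fraction b_i of wealth is bet on contestant i at odds o_i."""
    return sum(pi * math.log2(bi * oi) for pi, bi, oi in zip(p, b, odds))

p = [0.5, 0.25, 0.25]   # true win probabilities
odds = [2.0, 4.0, 4.0]  # fair odds, o_i = 1/p_i

# Proportional betting b = p maximizes W; with fair odds the optimum is W* = 0.
print(doubling_rate(p, p, odds))                 # 0.0
print(doubling_rate(p, [1/3, 1/3, 1/3], odds))   # negative (about -0.085)
```

With fair odds no strategy grows wealth, and any bet other than b = p loses at an exponential rate, here about 0.085 doublings per race.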
Module 8, Lecture 1
Differential Entropy and the Gaussian Channel
G.L. Heileman
Differential Entropy
Definition (Differential entropy) The differential entropy of a continuous RV X with pdf f(x) is

h(X) = −∫_S f(x) log f(x) dx,

where S is the support set of X, i.e., the set where f(x) > 0.
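Two standard closed forms make the definition concrete: for X uniform on [0, a], h(X) = log2(a), and for X ~ N(0, σ²), h(X) = ½ log2(2πeσ²). A minimal sketch (helper names mine) that also shows differential entropy, unlike H(X), can be negative:

```python
import math

def diff_entropy_uniform(a):
    """h(X) = log2(a) bits for X uniform on [0, a]."""
    return math.log2(a)

def diff_entropy_gaussian(var):
    """h(X) = 0.5 * log2(2 * pi * e * var) bits for X ~ N(0, var)."""
    return 0.5 * math.log2(2 * math.pi * math.e * var)

print(diff_entropy_uniform(0.5))   # -1.0: differential entropy can be negative
print(diff_entropy_gaussian(1.0))  # about 2.047 bits
```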