Harvard SEAS ES250 – Information Theory

Channel Capacity*

1 Preliminaries and Definitions

1.1 Preliminaries and Examples

• Communication between A (the sender) and B (the receiver) is successful when both A and B agree on the content of the message.
• A communication channel is modeled as a probabilistic function.
• The maximum number of distinguishable signals for n uses of a communication channel grows exponentially with n, at a rate termed the channel capacity.
• Our goal is to infer the transmitted message from the received data with a vanishingly small probability of error.

Definition (Discrete memoryless channel). A discrete channel consists of an input alphabet X, an output alphabet Y, and a probability transition matrix p(y|x). The channel is memoryless if the probability distribution of the output depends only on the input at that time and is conditionally independent of previous channel inputs and outputs.

Definition ("Information" channel capacity of a discrete memoryless channel):

    C = max_{p(x)} I(X; Y),

where the maximum is taken over all possible input distributions p(x).

Examples of channel capacity:

1. Noiseless binary channel: C = 1 bit.
2. Noisy channel with non-overlapping outputs: also C = 1 bit.
3. Noisy typewriter: C = log(number of keys) − 1 bits.
4. Binary symmetric channel: C = 1 − H(p) bits, where p is the crossover probability.
5. Binary erasure channel: C = 1 − α, where α is the fraction of bits erased.

1.2 Definitions and Properties

Definition (Symmetric and weakly symmetric channels). A channel is said to be symmetric if all rows of the channel transition matrix p(y|x) are permutations of each other, and all columns are permutations of each other. A channel is said to be weakly symmetric if every row of the transition matrix p(·|x) is a permutation of every other row and all the column sums Σ_x p(y|x) are equal.
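The closed-form capacities in examples 4 and 5 are easy to evaluate numerically. Below is a minimal sketch (the function names `binary_entropy`, `bsc_capacity`, and `bec_capacity` are my own, not from the notes) computing C = 1 − H(p) for the binary symmetric channel and C = 1 − α for the binary erasure channel:

```python
import math

def binary_entropy(p):
    """Binary entropy H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

def bec_capacity(alpha):
    """Capacity of a binary erasure channel with erasure probability alpha."""
    return 1.0 - alpha
```

As sanity checks: a noiseless BSC (p = 0) achieves the full 1 bit per use, while p = 1/2 gives C = 0, since the output is then independent of the input.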
* Based on Cover & Thomas, Chapter 7.

Theorem. For a weakly symmetric channel,

    C = log |Y| − H(row of transition matrix),

achieved by a uniform distribution on the input alphabet.
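The theorem above can be sketched directly: check weak symmetry of a given transition matrix, then evaluate log|Y| minus the entropy of any row. This is an illustrative implementation of the stated formula (the function name and tolerance are my own choices):

```python
import math

def weakly_symmetric_capacity(P, tol=1e-9):
    """Capacity C = log2|Y| - H(row) of a weakly symmetric channel.

    P is the transition matrix as a list of rows, with P[x][y] = p(y|x).
    Checks the weak-symmetry conditions: every row is a permutation of
    every other row, and all column sums are equal.
    """
    sorted_rows = [sorted(row) for row in P]
    if any(any(abs(a - b) > tol for a, b in zip(r, sorted_rows[0]))
           for r in sorted_rows):
        raise ValueError("rows must be permutations of each other")
    col_sums = [sum(P[x][y] for x in range(len(P))) for y in range(len(P[0]))]
    if any(abs(s - col_sums[0]) > tol for s in col_sums):
        raise ValueError("column sums must be equal")
    # Entropy of one row (any row works, since rows are permutations).
    h_row = -sum(p * math.log2(p) for p in P[0] if p > 0)
    return math.log2(len(P[0])) - h_row
```

For instance, the binary symmetric channel [[0.9, 0.1], [0.1, 0.9]] is (weakly) symmetric, and the function recovers 1 − H(0.1), consistent with example 4 above. A matrix such as [[1/3, 1/6, 1/2], [1/3, 1/2, 1/6]] is weakly symmetric but not symmetric, since its columns are not permutations of each other, yet the formula still applies.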