# Lecture 14 - Colorado State University, Ft. Collins, ECE 516: Information Theory


Colorado State University, Ft. Collins, Fall 2008, ECE 516: Information Theory. Lecture 14, October 14, 2008.

**Recap:**

**Definition:** A discrete memoryless channel is denoted by $(\mathcal{X}, p(y|x), \mathcal{Y})$, where $\mathcal{X}$ and $\mathcal{Y}$ are finite sets, $p(y|x) \ge 0$ for all $x \in \mathcal{X}$ and $y \in \mathcal{Y}$, and $\sum_{y} p(y|x) = 1$ for all $x \in \mathcal{X}$.

**Definition:** An $(M, n)$ code for the channel $(\mathcal{X}, p(y|x), \mathcal{Y})$ consists of

1. An index set $\{1, \ldots, M\}$.
2. An encoding function $X^n : \{1, \ldots, M\} \to \mathcal{X}^n$.
3. A decoding function $g : \mathcal{Y}^n \to \{1, \ldots, M\}$; a deterministic function $g(Y^n)$.

**Definition:** Probability of error (conditional):
$$\lambda_i = P\left[\, g(Y^n) \ne i \mid X^n = X^n(i) \,\right]$$

**Definition:** Maximal probability of error:
$$\lambda^{(n)} = \max_{i = 1, \ldots, M} \lambda_i$$
written $\lambda^{(n)}$ to make the dependence on the $(M, n)$ code explicit.

**Definition:** Average (arithmetic) probability of error:
$$P_e^{(n)} = \frac{1}{M} \sum_{i=1}^{M} \lambda_i$$

**Definition:** The rate $R$ of an $(M, n)$ code is $R = \frac{1}{n} \log M$ bits per transmission.

**Definition:** A rate $R$ is said to be achievable if there exists a sequence of $(\lceil 2^{nR} \rceil, n)$ codes such that $\lambda^{(n)} \to 0$ as $n \to \infty$.

**Proof of the converse.** Lemma: For a DMC without feedback, $I(X^n; Y^n) \le nC$.
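The defining row-stochastic property of a DMC and the rate formula $R = \frac{1}{n}\log M$ are easy to check numerically. Below is a minimal sketch (not from the lecture): the transition matrix `p` for a binary symmetric channel with crossover probability 0.1 and the `rate` helper are illustrative assumptions.

```python
import math

# A hypothetical binary symmetric channel with crossover probability 0.1,
# given as a transition matrix p[x][y] = p(y|x).
p = [[0.9, 0.1],
     [0.1, 0.9]]

# Check the defining property of a DMC: each row p(.|x) is a valid PMF,
# i.e., p(y|x) >= 0 and sum_y p(y|x) = 1.
for row in p:
    assert all(v >= 0 for v in row)
    assert abs(sum(row) - 1.0) < 1e-12

def rate(M, n):
    """Rate R = (1/n) log2 M of an (M, n) code, in bits per transmission."""
    return math.log2(M) / n

# An (M, n) = (4, 8) code carries log2(4) = 2 bits over 8 channel uses,
# so its rate is 0.25 bits per transmission.
R = rate(4, 8)
```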

## 7.6 Joint Typical Sequences

**Definition:** The set $A_\varepsilon^{(n)}$ of jointly typical sequences with respect to the joint PMF $p(x, y)$ is the set of pairs $(x^n, y^n)$ whose empirical entropies are $\varepsilon$-close to the true entropies:

$$A_\varepsilon^{(n)} = \left\{ (x^n, y^n) \in \mathcal{X}^n \times \mathcal{Y}^n : \left| -\tfrac{1}{n} \log p(x^n) - H(X) \right| < \varepsilon,\ \left| -\tfrac{1}{n} \log p(y^n) - H(Y) \right| < \varepsilon,\ \left| -\tfrac{1}{n} \log p(x^n, y^n) - H(X, Y) \right| < \varepsilon \right\}$$

where $p(x^n, y^n) = \prod_{i=1}^{n} p(x_i, y_i)$.

**Theorem (Joint AEP):** Let $(X^n, Y^n)$ be drawn i.i.d. according to $p(x^n, y^n) = \prod_{i=1}^{n} p(x_i, y_i)$. Then:

1. $\lim_{n \to \infty} P\left( (X^n, Y^n) \in A_\varepsilon^{(n)} \right) = 1$, i.e., $P\left( (X^n, Y^n) \in A_\varepsilon^{(n)} \right) > 1 - \varepsilon$ for large enough $n$.
2. $(1 - \varepsilon)\, 2^{n(H(X,Y) - \varepsilon)} \le \left| A_\varepsilon^{(n)} \right| \le 2^{n(H(X,Y) + \varepsilon)}$.
3. If $(\tilde{X}^n, \tilde{Y}^n) \sim p(\tilde{x}^n)\, p(\tilde{y}^n)$, i.e., $\tilde{X}^n$ and $\tilde{Y}^n$ are independent with the marginals $p(x^n)$ and $p(y^n)$ obtained from $p(x^n, y^n)$, then
$$(1 - \varepsilon)\, 2^{-n(I(X;Y) + 3\varepsilon)} \le P\left( (\tilde{X}^n, \tilde{Y}^n) \in A_\varepsilon^{(n)} \right) \le 2^{-n(I(X;Y) - 3\varepsilon)}$$

**The big picture:**

a) For a typical $x^n$,
$$p(x^n) = \sum_{y^n} p(x^n, y^n) \approx \sum_{y^n \in A_\varepsilon^{(n)}} p(x^n, y^n)$$
There are about $2^{nH(X,Y)}$ jointly typical pairs in total, so for each typical $x^n$ there are about
$$L = \frac{2^{nH(X,Y)}}{2^{nH(X)}} = 2^{nH(Y|X)}$$
jointly typical $y^n$ sequences.
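The three empirical-entropy conditions defining $A_\varepsilon^{(n)}$ can be tested directly on sampled sequences. The sketch below is illustrative, not from the lecture: the joint PMF `p_xy` on $\{0,1\}\times\{0,1\}$ and the helper names are assumptions chosen so the marginals are uniform. Part 1 of the joint AEP says a pair drawn i.i.d. from $p(x,y)$ should pass the check with high probability for large $n$.

```python
import math
import random

random.seed(0)

# A hypothetical joint PMF p(x, y) on {0,1} x {0,1} (an assumption for
# illustration); its marginals p_x and p_y are both uniform.
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
p_x = {0: 0.5, 1: 0.5}
p_y = {0: 0.5, 1: 0.5}

def H(pmf):
    """Entropy in bits of a PMF given as {outcome: probability}."""
    return -sum(q * math.log2(q) for q in pmf.values() if q > 0)

def jointly_typical(xs, ys, eps):
    """Check the three empirical-entropy conditions defining A_eps^(n)."""
    n = len(xs)
    ex = -sum(math.log2(p_x[a]) for a in xs) / n
    ey = -sum(math.log2(p_y[b]) for b in ys) / n
    exy = -sum(math.log2(p_xy[(a, b)]) for a, b in zip(xs, ys)) / n
    return (abs(ex - H(p_x)) < eps and
            abs(ey - H(p_y)) < eps and
            abs(exy - H(p_xy)) < eps)

# Draw (X^n, Y^n) i.i.d. from p(x, y); by the joint AEP, the pair is
# jointly typical with probability approaching 1 as n grows.
n = 2000
pairs = random.choices(list(p_xy), weights=list(p_xy.values()), k=n)
xs, ys = zip(*pairs)
```

By contrast, drawing $\tilde{x}^n$ and $\tilde{y}^n$ independently from the marginals would make the joint-entropy condition fail with probability roughly $1 - 2^{-nI(X;Y)}$, which is the estimate part 3 of the theorem makes precise.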
## 7.7 Side Story: Converting Channel Coding to Sphere Packing

Consider a discrete-time memoryless channel with additive noise. The channel output symbol is given by $y = x + n$, where $x$ is the input symbol and $n$ is the additive noise with $E[n] = 0$ and $E[n^2] = N_0/2$. Assume we have an average power constraint $E[x^2] \le P$. How many bits can we transmit through this channel reliably?

Let us group $N$ symbols together. First, consider the noise only. A vector of $N$ noise samples
$$\mathbf{n}_N = \begin{bmatrix} n_0 & n_1 & \cdots & n_{N-1} \end{bmatrix}$$
is a point in $N$-dimensional space. $E[n^2] = N_0/2$ implies that, if $N$ is large, we will roughly have $\|\mathbf{n}_N\|^2 \approx N N_0/2$. This implies that $\mathbf{n}_N$ is located close to the surface of the $N$-dimensional sphere of radius $\sqrt{N N_0/2}$.

Now take a look at the transmitted symbols, again grouping $N$ symbols together. A vector of $N$ input symbols
$$\mathbf{x}_N = \begin{bmatrix} x_0 & x_1 & \cdots & x_{N-1} \end{bmatrix}$$
is a point in $N$-dimensional space. The average power constraint $E[x^2] \le P$ implies $\|\mathbf{x}_N\|^2 \le NP$ for large $N$. Note that the transmitter designs the input symbols.
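The concentration claim $\|\mathbf{n}_N\|^2 \approx N N_0/2$ is easy to see by simulation. The sketch below is a minimal Monte Carlo check under an assumed Gaussian noise model (the lecture only specifies the mean and second moment); the values of `N0` and `N` are arbitrary choices for illustration.

```python
import math
import random

random.seed(1)

N0 = 2.0  # assumed noise spectral level, so per-sample variance N0/2 = 1.0
N = 10000  # dimension of the grouped noise vector

# Draw one noise vector n_N with E[n] = 0 and E[n^2] = N0/2
# (assumed Gaussian for the simulation).
sigma = math.sqrt(N0 / 2)
noise = [random.gauss(0.0, sigma) for _ in range(N)]
sq_norm = sum(v * v for v in noise)

# For large N the vector lies close to the surface of the sphere of
# radius sqrt(N * N0 / 2); measure the relative gap to that radius.
radius = math.sqrt(N * N0 / 2)
relative_gap = abs(math.sqrt(sq_norm) - radius) / radius
```

The relative gap shrinks like $1/\sqrt{N}$ (the law of large numbers applied to $n_i^2$), which is exactly why high-dimensional noise vectors concentrate on a thin spherical shell.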


## This note was uploaded on 03/17/2010 for the course ECE 516 taught by Professor Rocky during the Spring '08 term at Colorado State.
