Module 6, Lecture 3 - Channel Coding: Error-Correcting Codes

G.L. Heileman

Error-Correcting Codes

In this lecture we'll look at some of the practical considerations of channel coding. Specifically, we'll first consider the question of whether or not the best error-correcting code can be chosen independently of any source-coding issues. Then we'll consider a specific error-correcting code in order to get a better feel for how these codes actually work in practice.

Source-Channel Coding

In practice, one encounters applications that involve both data compression and channel coding. We've considered these two problems separately, and have investigated the situations under which each is optimal. Is there something to be gained by combining them? That is, consider a source that is generating symbols according to the random variables V_1, V_2, .... If we jointly perform the source and channel coding in one step, the situation looks like:

    V^n -> [source/channel encoder] -> X^n -> [channel p(y|x)] -> Y^n -> [source/channel decoder] -> V̂^n

Source-Channel Separation Theorem

The two main results we've discussed so far are:

    Source coding theorem:  L >= H(X)
    Channel coding theorem: R < C

The source-channel separation theorem ties these two results together. Specifically, this theorem describes conditions under which we can achieve optimal overall performance by designing the source and channel codes separately, and then simply combining the results. This result assumes we have a stationary ergodic source V that generates symbols V_1, ..., V_n drawn from an alphabet 𝒱. The General AEP is an extension of the AEP, and states that if H(𝒱) is the entropy rate of a finite-valued stationary ergodic process {V_n}, then

    -(1/n) log p(V_1, ..., V_n) -> H(𝒱)   with probability 1.
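The AEP convergence can be illustrated numerically for the simplest stationary ergodic source, an i.i.d. Bernoulli(p) process, where the entropy rate H(𝒱) reduces to the binary entropy H(p). The sketch below is not from the lecture; the function names and the choice of a Bernoulli source are our own illustration. It draws n symbols, evaluates -(1/n) log2 p(v_1, ..., v_n), and lets you compare the result against H(p).

```python
import math
import random

def entropy(p):
    """Binary entropy H(p) in bits, for 0 < p < 1."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def empirical_neg_log_prob(p, n, seed=0):
    """Sample n i.i.d. Bernoulli(p) symbols and return -(1/n) log2 p(v_1,...,v_n).

    For an i.i.d. source the joint probability factors, so the sum of the
    per-symbol -log2 probabilities divided by n is exactly the quantity in
    the (General) AEP statement.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(n):
        v = 1 if rng.random() < p else 0
        total += -math.log2(p if v == 1 else 1 - p)
    return total / n

if __name__ == "__main__":
    p = 0.3
    for n in (100, 10_000, 100_000):
        print(n, empirical_neg_log_prob(p, n), "vs H(p) =", entropy(p))
```

As n grows, the printed estimates cluster around H(0.3) ≈ 0.881 bits, which is the "with probability 1" convergence the theorem asserts, specialized to an i.i.d. source.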
This note was uploaded on 05/06/2010 for the course ECE 549, taught by Professor G.L. Heileman during the Spring '10 term at the University of New Brunswick.
