Module 6, Lecture 3
Channel Coding: Error-Correcting Codes
G.L. Heileman

Error-Correcting Codes

In this lecture we'll look at some of the practical considerations of channel coding. Specifically, we'll first consider the question of whether or not the best error-correcting code can be chosen independently of any source-coding issues. Then we'll consider a specific error-correcting code in order to get a better feel for how these codes actually work in practice.

Source-Channel Coding

In practice, one encounters applications that involve both data compression and channel coding. We've considered these two problems separately, and have investigated the situations under which each is optimal. Is there something to be gained by combining them? I.e., consider a source that is generating symbols according to RVs V_1, V_2, .... If we jointly perform the source and channel coding in one step, the situation looks like:

  V^n --> [source/channel encoder] --> X^n --> [channel p(y|x)] --> Y^n --> [source/channel decoder] --> V̂^n

Source-Channel Separation Theorem

The two main results we've discussed so far are:

  Source coding theorem:  L >= H(X)
  Channel coding theorem: R <= C.

The source-channel separation theorem ties these two results together. Specifically, this theorem describes conditions under which we can achieve optimal overall performance by designing the source and channel codes separately, and then simply combining the results.

This result assumes we have a stationary ergodic source V that generates symbols V_1, ..., V_n drawn from an alphabet 𝒱. The General AEP is an extension of the AEP, and states that if H(V) is the entropy rate of a finite-valued stationary ergodic process {V_n}, then

  -(1/n) log p(V_1, ..., V_n) --> H(V),  with probability 1.
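The General AEP convergence can be checked numerically in the simplest setting, an i.i.d. Bernoulli source, where -(1/n) log p(V_1, ..., V_n) reduces to the sample average of -log p(V_i). This is only an illustrative sketch (the function names and parameters are not from the lecture), but it shows the per-symbol log-probability settling onto the entropy H(V):

```python
import math
import random

def entropy(p):
    """Binary entropy H(p) in bits, for 0 < p < 1."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def empirical_aep(p_one=0.3, n=100_000, seed=0):
    """Estimate -(1/n) log2 p(V_1, ..., V_n) for an i.i.d. Bernoulli(p_one)
    source. For i.i.d. symbols the joint log-probability factors, so this is
    just the average of -log2 p(V_i); by the AEP it converges to H(V)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        v = 1 if rng.random() < p_one else 0
        total += -math.log2(p_one if v == 1 else 1 - p_one)
    return total / n
```

For p_one = 0.3 the entropy is about 0.881 bits/symbol, and with n = 100,000 samples the empirical value lands within a few hundredths of a bit of it; the stationary ergodic case stated in the theorem is broader, but the i.i.d. case already conveys the typical-set intuition.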
This note was uploaded on 05/06/2010 for the course ECE 549 taught by Professor G.l.heileman during the Spring '10 term at University of New Brunswick.