Module6_3 - Module 6 Lecture 3 Channel Coding...

Module 6, Lecture 3
Channel Coding: Error-Correcting Codes
G.L. Heileman

Error-Correcting Codes

In this lecture we'll look at some of the practical considerations of channel coding. Specifically, we'll first consider the question of whether or not the best error-correcting code can be chosen independently of any source-coding issues. Then we'll consider a specific error-correcting code in order to get a better feel for how such codes actually work in practice.

Source-Channel Coding

In practice, one encounters applications that involve both data compression and channel coding. We've considered these two problems separately, and have investigated the situations under which each is optimal. Is there something to be gained by combining them? That is, consider a source that generates symbols according to the random variables V_1, V_2, .... If we jointly perform the source and channel coding in one step, the situation looks like:

  V^n -> [source/channel encoder] -> X^n -> [channel p(y|x)] -> Y^n -> [source/channel decoder] -> V̂^n

Source-Channel Separation Theorem

The two main results we've discussed so far are:

  Source coding theorem: L >= H(X)
  Channel coding theorem: R <= C

The source-channel separation theorem ties these two results together. Specifically, this theorem describes conditions under which we can achieve optimal overall performance by designing the source and channel codes separately, and then simply combining the results. This result assumes we have a stationary ergodic source V that generates symbols V_1, ..., V_n drawn from an alphabet V.

The general AEP is an extension of the AEP, and states that if H(V) is the entropy rate of a finite-valued stationary ergodic process {V_n}, then

  -(1/n) log p(V_1, ..., V_n) -> H(V), with probability 1.
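As a quick numerical illustration of the AEP (here in its i.i.d. special case, where the entropy rate is just the per-symbol entropy), the following Python sketch draws a long sequence from a Bernoulli source and checks that -(1/n) log2 p(V_1, ..., V_n) is close to H(V). The source parameter p = 0.3 and sample size n are arbitrary choices for the demonstration, not values from the lecture.

```python
import math
import random

random.seed(0)

p = 0.3        # P(V_i = 1), an arbitrary choice for this demo
n = 100_000    # sequence length

# Entropy of the Bernoulli(p) source, in bits.
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Draw an i.i.d. sequence V_1, ..., V_n.
seq = [1 if random.random() < p else 0 for _ in range(n)]
k = sum(seq)   # number of ones

# log2 p(V_1, ..., V_n) = k*log2(p) + (n-k)*log2(1-p) for an i.i.d. source.
log_prob = k * math.log2(p) + (n - k) * math.log2(1 - p)
rate = -log_prob / n

print(f"entropy H(V)        = {H:.4f} bits")
print(f"-(1/n) log2 p(V^n)  = {rate:.4f} bits")
```

For large n the two printed values agree to a few decimal places, which is exactly the "with probability 1" convergence the theorem asserts.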