
Module 6, Lecture 2
Channel Coding: The Channel Coding Theorem
G.L. Heileman

Definitions

[Block diagram: message $W$ -> encoder -> $X^n$ -> channel $p(y|x)$ -> $Y^n$ -> decoder -> $\hat{W}$]

A discrete channel is given by the triple $(\mathcal{X}, p(y|x), \mathcal{Y})$, where $\mathcal{X}$ and $\mathcal{Y}$ are finite sets corresponding to the input and output, and $p(y|x)$ is a collection of probability mass functions, one for each $x \in \mathcal{X}$, such that $p(y|x) \geq 0$ for every $x \in \mathcal{X}$ and $y \in \mathcal{Y}$, and $\sum_y p(y|x) = 1$ for every $x$.

The n-th extension of a discrete memoryless channel (DMC) is the triple $(\mathcal{X}^n, p(y^n|x^n), \mathcal{Y}^n)$, where
$$p(y_k \mid x^k, y^{k-1}) = p(y_k \mid x_k), \quad k = 1, \ldots, n$$
(this is why it is called "memoryless").

Definitions

An $(M, n)$ block code for the channel $(\mathcal{X}, p(y|x), \mathcal{Y})$ consists of:
1. An index set $\mathcal{W} = \{1, \ldots, M\}$.
2. An encoding function $X^n : \mathcal{W} \to \mathcal{X}^n$, yielding codewords $X^n(1), \ldots, X^n(M)$. These codewords form the codebook.
3. A decoding function $g : \mathcal{Y}^n \to \mathcal{W}$, a deterministic rule assigning a "guess" to each possible received vector.

Note: if the encoder and decoder are fixed, and we assume the index is selected according to the random variable $W$, then $W \to X^n \to Y^n \to g(Y^n) = \hat{W}$ forms a Markov chain.

Definitions

The probability of a block error for an $(M, n)$ block code and decoder, given channel $(\mathcal{X}, p(y|x), \mathcal{Y})$ and assuming index $w \in \mathcal{W}$ was sent, is
$$\lambda_w = \Pr\{g(Y^n) \neq w \mid X^n = X^n(w)\} = \sum_{y^n} p(y^n \mid x^n(w)) \cdot I(g(y^n) \neq w),$$
where $I(\cdot)$ is the indicator function. Thus, the maximal probability of a block error for an $(M, n)$ block code is
$$\lambda^{(n)} = \max_{w \in \{1, \ldots, M\}} \lambda_w,$$
and the (arithmetic) average probability of a block error for an $(M, n)$ block code is defined as
$$P_e^{(n)} = \frac{1}{M} \sum_{w=1}^{M} \lambda_w.$$

Definitions

Another way of calculating the average probability of a block error:
$$P_e^{(n)} = \sum_{w=1}^{M} p(w) \cdot \Pr\{w \neq \hat{w}\}.$$
If $W$ is uniformly distributed (i.e., index $w \in \mathcal{W}$ is chosen according to the uniform distribution) and we assume $X^n(w)$ is sent, then
$$P_e^{(n)} = \sum_{w=1}^{M} \frac{1}{M} \cdot \Pr\{w \neq g(Y^n)\},$$
and if the probability of error is the same for every index $w$, then
$$P_e^{(n)} = \Pr\{w \neq g(Y^n)\}.$$
The rate of an $(M, n)$ block code is
$$R = \frac{\log M}{n} \ \text{bits/transmission}.$$

Channel Coding Theorem

Let's look at the theorem statement again:

Theorem (Shannon's channel coding theorem). Associated with each discrete memoryless channel, there is a nonnegative number $C$ (the channel capacity) that determines the limits of the channel as follows: (1) For any $\epsilon > 0$ and $R < C$, and large enough $n$, there exists a block code of length $n$ with rate $\geq R$, along with a decoding algorithm, such that the maximal probability of block error is $< \epsilon$. ...
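To make these definitions concrete, here is a small illustrative sketch in Python (not part of the lecture) that treats the binary symmetric channel BSC(p) as a DMC and computes $\lambda_w$, $\lambda^{(n)}$, $P_e^{(n)}$, and $R$ exactly for a tiny $(M, n) = (2, 3)$ repetition code with majority-vote decoding. The channel choice, the crossover probability $p = 0.1$, and all function names are assumptions made for illustration only.

from itertools import product
from math import log2

P = 0.1  # assumed BSC crossover probability (illustrative choice)

def p_symbol(y, x):
    # BSC(p) as a DMC: p(y|x) = 1 - P if y == x, else P
    return 1 - P if y == x else P

def p_block(y_n, x_n):
    # Memorylessness: p(y^n | x^n) is the product of per-symbol terms
    prob = 1.0
    for y, x in zip(y_n, x_n):
        prob *= p_symbol(y, x)
    return prob

# An (M, n) = (2, 3) repetition code with index set W = {1, 2}
codebook = {1: (0, 0, 0), 2: (1, 1, 1)}   # encoding function X^n(w)
M, n = len(codebook), 3

def g(y_n):
    # Decoding function g : Y^n -> W (majority vote)
    return 2 if sum(y_n) >= 2 else 1

# lambda_w = sum over y^n of p(y^n | x^n(w)) * I(g(y^n) != w)
lam = {w: sum(p_block(y_n, x_n)
              for y_n in product((0, 1), repeat=n) if g(y_n) != w)
       for w, x_n in codebook.items()}

max_err = max(lam.values())        # lambda^(n), maximal probability of error
avg_err = sum(lam.values()) / M    # P_e^(n), average probability of error
rate = log2(M) / n                 # R = (log M)/n bits/transmission

print(lam)                  # both approximately 0.028 = 3 p^2 (1-p) + p^3
print(max_err, avg_err)     # ~0.028, ~0.028
print(rate)                 # ~0.333

With these assumed numbers, the repetition code cuts the block error probability from the raw crossover probability 0.1 down to about 0.028, but only at rate 1/3. The coding theorem promises much more: for this channel the capacity is $C = 1 - H(p) \approx 0.531$ bits/transmission, and any rate below $C$ is achievable with arbitrarily small maximal probability of block error for large enough $n$.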