ECE 562 Fall 2011

Error Control Coding Basics

A code is a mapping that takes a sequence of information symbols and produces a (larger) sequence of code symbols, so as to be able to detect/correct errors in the transmission of the symbols. The simplest class of codes is the class of binary linear block codes. Here each vector of k information bits x_i = [x_{i,1} ... x_{i,k}] is mapped to a vector of n code bits c_i = [c_{i,1} ... c_{i,n}], with n > k. The rate R of the code is defined to be the ratio k/n.

A binary linear block code can be defined in terms of a k × n generator matrix G with binary entries, such that the code vector c_i corresponding to an information vector x_i is given by:

    c_i = x_i G                                                (1)

(The multiplication and addition are the standard binary, or GF(2), operations.)

Example: (7,4) Hamming Code

        [ 1 0 0 0 1 0 1 ]
    G = [ 0 1 0 0 1 1 1 ]         x_i G = c_i                  (2)
        [ 0 0 1 0 1 1 0 ]
        [ 0 0 0 1 0 1 1 ]

Note that the codewords of this code are in systematic form, with 4 information bits followed by 3 parity bits, i.e.,

    c_i = [ x_{i,1} x_{i,2} x_{i,3} x_{i,4} c_{i,5} c_{i,6} c_{i,7} ]    (3)

with c_{i,5} = x_{i,1} + x_{i,2} + x_{i,3}, c_{i,6} = x_{i,2} + x_{i,3} + x_{i,4}, and c_{i,7} = x_{i,1} + x_{i,2} + x_{i,4}.

It is easy to write down the 16 codewords of the (7,4) Hamming code. It is also easy to see that the minimum (Hamming) distance between the codewords, d_min, equals 3.

General Result. If d_min = 2t + 1, then the code can correct t errors.

Example: Repetition Codes. A rate 1/n repetition code is defined by the codebook:

    0 ↦ [0 0 ... 0],  and  1 ↦ [1 1 ... 1]                     (4)

The minimum distance of this code is n, and hence it can correct ⌊(n−1)/2⌋ errors. The optimum decoder for this code is simply a majority logic decoder. A rate 1/2 repetition code can detect one error, but cannot correct any errors. A rate 1/3 repetition code can correct one error.
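The claims above can be checked directly: encoding every 4-bit message with the generator matrix G from equation (2) yields 16 codewords whose minimum pairwise Hamming distance is 3. A minimal sketch (the function name `encode` is illustrative, not from the notes):

```python
import itertools

# Generator matrix of the (7,4) Hamming code in systematic form, as in eq. (2).
G = [
    [1, 0, 0, 0, 1, 0, 1],
    [0, 1, 0, 0, 1, 1, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 1, 1],
]

def encode(x, G):
    """Encode information bits x as c = xG over GF(2)."""
    n = len(G[0])
    return [sum(x[i] * G[i][j] for i in range(len(x))) % 2 for j in range(n)]

# Enumerate all 16 codewords and compute the minimum Hamming distance.
codewords = [encode(list(x), G) for x in itertools.product([0, 1], repeat=4)]
d_min = min(
    sum(a != b for a, b in zip(c1, c2))
    for c1, c2 in itertools.combinations(codewords, 2)
)
print(d_min)  # 3, so by the General Result the code corrects t = (3-1)/2 = 1 error
```

Since the code is linear, d_min also equals the minimum weight of the nonzero codewords, which is a cheaper check for larger codes.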
Coding Gain

The coding gain of a code is the gain in SNR, at a given error probability, that is achieved by using the code before modulation. The coding gain of a code is a function of: (i) the error probability considered, (ii) the modulation scheme used, and (iii) the channel. We now compute the coding gain for BPSK signaling in AWGN for some simple codes. Before we proceed, we introduce the following notation:

    γ_c = SNR per code bit,   γ_b = SNR per information bit = γ_c / R

Example: Rate 1/2 repetition code, BPSK in AWGN

    P{code bit in error} = Q(√(2γ_c)) = Q(√γ_b)                (5)

For an AWGN channel, bit errors are independent across the codeword. It is easy to see that with majority logic decoding (with a tie between the two code bits broken by a fair coin flip),

    P_ce = P{decoding error} = [Q(√γ_b)]² + (1/2) · 2 · Q(√γ_b)[1 − Q(√γ_b)] = Q(√γ_b).    (6)

Thus

    P_b (with coding) = Q(√γ_b) > Q(√(2γ_b)) = P_b (without coding).    (7)

The rate 1/2 repetition code results in a 3 dB coding loss for BPSK in AWGN at all error probabilities.

© V.V. Veeravalli, 2011
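The algebra in equations (5)–(7) can be verified numerically: the majority-logic expression p² + (1/2)·2·p(1−p) collapses to p, and doubling γ_b (i.e., adding 3 dB) recovers the uncoded performance. A small check using the identity Q(x) = (1/2) erfc(x/√2) (the function names and the 9.6 dB operating point are illustrative choices, not from the notes):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def p_coded(gamma_b):
    """Decoded bit error probability of the rate-1/2 repetition code with
    majority logic decoding, ties broken by a fair coin: p^2 + p(1-p)."""
    p = Q(math.sqrt(gamma_b))  # per-code-bit error prob., since 2*gamma_c = gamma_b
    return p**2 + 0.5 * 2 * p * (1 - p)

def p_uncoded(gamma_b):
    """Uncoded BPSK bit error probability in AWGN."""
    return Q(math.sqrt(2 * gamma_b))

gamma_b = 10 ** (9.6 / 10)  # an illustrative operating point (~9.6 dB)
print(p_coded(gamma_b), p_uncoded(gamma_b))
# p_coded(gamma_b) equals Q(sqrt(gamma_b)) exactly, and p_coded(2*gamma_b)
# equals p_uncoded(gamma_b): the 3 dB coding loss of eq. (7).
```

The same comparison at any γ_b gives the same 3 dB gap, consistent with the statement that the loss holds at all error probabilities.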