[Figure 7-3: Turbo Encoder Block Diagram — constituent encoder outputs for code rates 1/2, 1/3, 1/4, and 1/6; the rate-1/2 output is formed by taking every other parity symbol]

The two convolutional encoders in the Recommended Standard (reference [3]) are recursive with constraint length K = 5, and are realized by feedback shift registers. However, unlike the encoder for the recommended plain convolutional code in section 4, the turbo codeblock is terminated by running each encoder for an additional K-1 bit times beyond the end of the information bit frame. After encoding the last bit in the frame, the leftmost adder in each component encoder receives two copies of the same feedback bit, causing it to zero its output. After K-1 more bit times, all four memory cells are filled with zeros, but in the interim the encoder continues to output nonzero encoded symbols. (A minimal sketch of this shift-register realization and termination procedure is given at the end of this subsection.)

The Recommended Standard (reference [3]) allows options for non-punctured codes with rates 1/3, 1/4, and 1/6. The puncturer is used only for code rate 1/2.

The interleaver in the Recommended Standard (reference [3]) is based on a permutation rule that can be computed on the fly, or pre-computed and stored in a look-up table, for all allowable frame lengths (1784 to 16384 bits).

In figure 7-2, CLK indicates the frame clock. It is used: (1) by the input buffer to determine when to empty and refill the buffer; (2) by the output buffer/multiplexer to determine when to insert the frame sync marker; and (3) by each of the convolutional encoders to determine when to terminate the codeblock.

Note that an entire information block of k bits must be read in before the encoding can proceed, because some of the bits in the tail end of the block will be permuted to the front and need to be encoded first. Thus, there is a fundamental encoding latency of at least k bits in the encoding process.

The turbo code introduces a couple of unique encoder complexity issues. The information block needs to be buffered and read out in a permuted order as part of the encoding process. This buffering has no analog in the plain convolutional encoder, but the size of this buffer is comparable to that required for an interleaved Reed-Solomon codeblock of the same size. The difference is that the traditional concatenated coding architecture completely separates the Reed-Solomon encoder (with its associated buffer) from the convolutional encoder. Thus, the turbo encoder cannot be regarded as a plug-in replacement for the convolutional encoder hardware; it actually replaces the Reed-Solomon/convolutional encoder combination.

Another complexity consideration is how to implement the permutation. The best permutations for turbo codes look very random, but specifying a random-looking readout order requires a ROM (table look-up). An alternative is to use a permutation that can be generated by a simple rule rather than from a look-up table, at a minor sacrifice in performance. The Recommended Standard (reference [3]) specifies a permutation based on a simple rule, because this was preferred for implementation on the spacecraft.
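To make the shift-register realization and the termination procedure concrete, the following minimal sketch (Python, not part of the Recommended Standard) encodes one frame with a single recursive constituent encoder of constraint length K = 5 and then runs the K-1 termination steps described above. The tap vectors G_FB and G_FWD, the function name rsc_encode, and the helper step are illustrative assumptions only; they are not the connection polynomials or the exact circuit specified in reference [3].

    # Minimal sketch of one recursive constituent encoder (constraint length K = 5)
    # realized as a feedback shift register, including the K-1 termination steps.
    # G_FB and G_FWD are placeholder tap vectors, NOT those of the Recommended Standard.

    K = 5                       # constraint length
    M = K - 1                   # number of memory cells in the shift register
    G_FB  = [1, 0, 0, 1]        # feedback taps on the M memory cells (placeholder)
    G_FWD = [1, 1, 0, 1, 1]     # forward taps on [feedback node] + M cells (placeholder)

    def rsc_encode(info_bits):
        """Return (systematic, parity) bit streams for one terminated frame."""
        state = [0] * M
        systematic, parity = [], []

        def step(u):
            # leftmost adder: input bit XOR feedback from the register
            fb = (u + sum(g * s for g, s in zip(G_FB, state))) % 2
            # parity symbol formed from the feedback node and the register cells
            p = sum(g * v for g, v in zip(G_FWD, [fb] + state)) % 2
            state.insert(0, fb)         # shift the feedback value into the register
            state.pop()                 # discard the oldest cell
            return p

        for u in info_bits:             # encode the information frame
            systematic.append(u)
            parity.append(step(u))

        for _ in range(M):              # termination: feed the encoder its own feedback
            tail = sum(g * s for g, s in zip(G_FB, state)) % 2
            systematic.append(tail)     # the adder now sees two copies of the same bit,
            parity.append(step(tail))   # so a zero is shifted in each bit time
        assert state == [0] * M         # after K-1 steps all memory cells are zero
        return systematic, parity

In a full turbo encoder, two such constituent encoders run in parallel: one on the buffered information block in natural order, the other on the same block read out in the permuted order defined by the interleaver; a rate-1/2 code is then obtained by puncturing (taking every other parity symbol), as indicated in figure 7-3.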
7.3 TURBO DECODER

A turbo decoder uses an iterative decoding algorithm based on simple decoders individually matched to the two simple constituent codes. Each constituent decoder makes likelihood estimates that are derived, initially, without using any received parity symbols other than those produced by its own constituent encoder. The (noisy) received uncoded information symbols are available to both decoders for making these estimates. Each decoder then sends its likelihood estimates to the other decoder, and uses the corresponding estimates from the other decoder to determine new likelihoods by extracting the ‘extrinsic information’ contained in the other decoder’s estimates, which is based on the parity symbols available only to that decoder. Both decoders use the ‘a posteriori probability’ (APP) bitwise decoding algorithm, which requires the same number of states as the well-known Viterbi algorithm. The turbo decoder iterates between the outputs of the two constituent decoders until reaching satisfactory convergence; the final output is a hard-quantized version of the likelihood estimates of either decoder. (A structural sketch of this iteration is given after the list of decoder characteristics below.)

The Recommended Standard (reference [3]) does not include a detailed description of the specific turbo decoding algorithm. However, the performance curves in 7.4 for the turbo code family in the Recommended Standard (reference [3]) were obtained using a decoding algorithm with the following characteristics:

a) decoder type: Iterative ‘turbo’ decoding using two 16-state component decoders (see reference [18]);

b) type of component decoders: Soft-input, soft-output APP decoders (see reference [19]);

c) quantization of channel symbols: At least 6 bits/symbol;

d) quantization of decoder metrics: At least 8 bits;

e) number...
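Since the Recommended Standard does not prescribe a decoder, the following is only a structural sketch (Python) of how the two constituent APP decoders exchange extrinsic information; it is not the algorithm of references [18] and [19]. The soft-input soft-output component decoder is left as a placeholder callable app_decode, and the function name turbo_decode, the parameter names, the sign convention, and the default iteration count are assumptions made for the sketch.

    # Structural sketch of the iterative exchange of extrinsic information.
    # `app_decode(llr_sys, llr_par, apriori) -> posterior LLRs` stands for any
    # soft-input soft-output APP decoder for one 16-state constituent code.
    # All quantities are log-likelihood ratios (LLRs); LLR > 0 is decided as '1'
    # (a convention chosen only for this sketch).

    def turbo_decode(llr_sys, llr_par1, llr_par2, perm, app_decode, n_iter=10):
        """llr_sys:  channel LLRs of the received information symbols
           llr_par1: channel LLRs of the parity symbols from encoder 1
           llr_par2: channel LLRs of the parity symbols from encoder 2
           perm:     interleaver permutation (decoder-2 position i <- natural position perm[i])
           n_iter:   iteration count, chosen arbitrarily for this sketch"""
        k = len(llr_sys)
        apriori1 = [0.0] * k                                # no prior information at the start
        llr_sys2 = [llr_sys[perm[i]] for i in range(k)]     # systematic symbols, interleaved

        for _ in range(n_iter):
            # decoder 1: natural order, primed with decoder 2's de-interleaved extrinsic info
            post1 = app_decode(llr_sys, llr_par1, apriori1)
            extrinsic1 = [post1[i] - llr_sys[i] - apriori1[i] for i in range(k)]

            # decoder 2: interleaved order, primed with decoder 1's extrinsic info
            apriori2 = [extrinsic1[perm[i]] for i in range(k)]
            post2 = app_decode(llr_sys2, llr_par2, apriori2)
            extrinsic2 = [post2[i] - llr_sys2[i] - apriori2[i] for i in range(k)]

            # de-interleave decoder 2's extrinsic info for the next pass of decoder 1
            apriori1 = [0.0] * k
            for i in range(k):
                apriori1[perm[i]] = extrinsic2[i]

        # final output: hard-quantized version of the last likelihood estimates
        decisions = [0] * k
        for i in range(k):
            decisions[perm[i]] = 1 if post2[i] > 0 else 0
        return decisions

In practice the iteration is stopped when the likelihood estimates have converged satisfactorily, as noted above, rather than after a fixed count.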