of decoder iterations: variable depending on signal-to-noise ratio.

CCSDS 130.1-G-1: TM SYNCHRONIZATION AND CHANNEL CODING —SUMMARY OF CONCEPT AND RATIONALE (June 2006)

Variations from this algorithm will result in performance tradeoffs.
The overall turbo decoding procedure is depicted in figure 7-1 and described earlier. The
'simple decoders 1 and 2' each compute likelihood estimates (APP estimates) based on a
version of the APP or log-APP algorithm,⁵ as described in reference [14]. A diagram
showing the structure of the turbo decoder in more detail is shown in figure 7-4. Figure 7-5
shows the basic circuits needed to implement the log-APP algorithm.
[Figure 7-4: Structure of the Turbo Decoder. The diagram shows the two constituent MAP
decoders (MAP1 and MAP2), each with FORWARD and BACKWARD recursion units operating
on branch metrics Γk formed from the channel symbols plus a priori likelihoods. Extrinsic
information (the 'innovation') is exchanged between the decoders through the interleaver (P)
and deinterleaver (P⁻¹), with a delay element on one path, and the decoded bits are taken
from the output of MAP2.]

⁵ In the early turbo coding literature the APP algorithm was designated as the MAP (maximum
a posteriori) algorithm because it was derived from a homonymous algorithm for making optimum
bitwise hard decisions on plain convolutionally encoded symbols.

[Figure 7-5: Basic Circuits to Implement the Log-APP Algorithm. The left panel shows the
basic structure for the forward and backward computation in the log-APP algorithm: state
metrics Ak−1(Si(0)) and Ak−1(Si(1)) are added to branch metrics Γk(x(0,Si)) and Γk(x(1,Si)),
a compare/select stage picks the larger total metric, a look-up table supplies the correction
term log(1+e−x), and the result is normalized as Ak(Si) − maxj{Ak(Sj)}. The right panel shows
the basic structure for the bit reliability computation log Pk(u|y) from the normalized state
metrics Ak(Si).]
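The compare/select stage and look-up table in figure 7-5 together implement the Jacobian logarithm, often called the 'max-star' operation. A minimal sketch of that operation (function names here are illustrative, not from the standard):

```python
import math

def max_star(a, b):
    # Exact log-APP combining of two log-domain metrics:
    # ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|).
    # The second term is what the look-up table in figure 7-5 supplies.
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log_approx(a, b):
    # Dropping the correction term gives the approximate algorithm
    # mentioned in the complexity discussion below.
    return max(a, b)

print(max_star(0.0, 0.0))       # ln 2 ≈ 0.6931
print(max_log_approx(0.0, 0.0))  # 0.0
```

Since the correction term lies between 0 and ln 2 and depends only on |a − b|, a small fixed table suffices in hardware.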
Because the decoder processes whole blocks of k bits at a time, there is a minimum decoding
delay of k bits. This latency is further increased by the time required for the decoder to
process each block. If parallel decoders are used to increase decoding throughput, the latency
increases in proportion to the number of parallel decoders.
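One simple way to express the latency relations above as arithmetic. All numbers here (block length, channel bit time, per-block processing time) are hypothetical illustrations, not values taken from the standard:

```python
# Illustration only: a rough latency model. A full block must be
# buffered before decoding can begin, and processing time adds to that.

def decoding_latency(block_bits, bit_time_s, processing_time_s,
                     parallel_decoders=1):
    block_time = block_bits * bit_time_s
    # With N parallel decoders throughput scales, but any one block may
    # wait up to N block times before its decoder finishes earlier work,
    # so latency grows roughly in proportion to N (per the text above).
    return parallel_decoders * block_time + processing_time_s

# One decoder on an 8920-bit block at 1 Mb/s with 10 ms of processing:
single = decoding_latency(8920, 1e-6, 0.01)
# Four parallel decoders quadruple the buffering contribution:
parallel = decoding_latency(8920, 1e-6, 0.01, parallel_decoders=4)
```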
To first order, the decoding complexity of a turbo decoder relative to that of a convolutional
decoder using the same number of trellis states and branches can be estimated by multiplying
several factors: (a) a factor of 2 because the turbo code uses two component decoders; (b)
another factor of 2 because the individual decoders use forward and backward recursions
compared to the Viterbi decoder's forward-only recursion; (c) another small factor because
the turbo decoder's recursions require somewhat more complex calculations than the Viterbi
decoder's; and (d) a factor to account for the turbo decoder's multiple iterations compared to
the Viterbi decoder's single iteration. The relative decoding complexity for two different
turbo codes or two different convolutional codes can be estimated by multiplying two
additional factors: (e) the number of trellis states; and (f) the number of trellis branches per
input bit into each state. Factor (c) can be reduced to one by implementing an approximate
log-MAP algorithm at a small sacrifice in performance. Factors (b) and (d) might be reduced
on the average by using a more advanced turbo decoding algorithm with stopping rules or
different iteration schedules. Such an algorithm might allow the decoder to stop its iterations
early if a given codeword can already be decoded reliably, or to skip over portions of the
forward and backward recursions for some iterations. Factors (a) through (d) are 1 for Viterbi
decoders of convolutional codes. For the CCSDS standard constraint-length-7 convolutional
decoder, factor (e) is 2⁶ = 64, and factor (f) is 2/1 = 2. For the Cassini/Pathfinder
constraint-length-15, rate-1/6 convolutional decoder, factor (e) is 2¹⁴ = 16384 and factor (f)
is 6/1 = 6. For the turbo codes specified in 7.2, factor (e) is 2⁴ = 16 and factor (f) ranges
from 2/1 = 2 to 6/1 = 6.
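The factor multiplication above can be sketched as a back-of-the-envelope calculator. The iteration count (10) and the unit recursion-cost factor are assumptions for illustration; the state and branch counts are those given in the text:

```python
# Sketch (not from the standard): multiply the factors (a)-(f) described
# above to compare decoder complexities to first order.

def viterbi_complexity(constraint_length, branches_per_state):
    # Factors (a)-(d) are 1 for a Viterbi decoder; (e) is the number of
    # trellis states, (f) the branches per input bit into each state.
    states = 2 ** (constraint_length - 1)
    return states * branches_per_state

def turbo_complexity(states, branches_per_state, iterations,
                     recursion_cost_factor=1.0):
    # (a) two component decoders; (b) forward + backward recursions;
    # (c) extra per-step cost vs. Viterbi (1.0 if approximate log-MAP);
    # (d) number of iterations; (e), (f) as above.
    return (2 * 2 * recursion_cost_factor * iterations
            * states * branches_per_state)

# CCSDS constraint-length-7 convolutional code: 2**6 = 64 states, 2 branches.
print(viterbi_complexity(7, 2))    # 128
# Cassini/Pathfinder constraint-length-15 code: 2**14 states, 6 branches.
print(viterbi_complexity(15, 6))   # 98304
# CCSDS turbo code: 16 states, 2 branches, assuming 10 iterations.
print(turbo_complexity(16, 2, 10))  # 1280.0
```

Under these assumed figures, even the 16-state turbo decoder is far cheaper than the constraint-length-15 Viterbi decoder it replaced.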
A basic form of turbo decoder stops iterating after a predetermined number of iterations. For
some codewords (or sections of codewords), the predetermined number of iterations may be
too many or too few. A more efficient turbo decoder can employ a stopping rule to stop the
decoder’s iterations when convergence is satisfactory, i.e., without wasting iterations when
the decoder has already converged, and without halting iterations prematurely when the
decoder needs a little more time. Such a rule reduces the average number of iterations and
increases the average decoding throughput. This comes at the expense of a slightly more
complicated decoding algorithm and increased decoder buffering requirements to
accommodate variable decoding times.
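The control flow of such a stopping rule can be sketched as follows. The decoder internals (`decode_iteration`, `hard_decisions`) are hypothetical placeholders, and the convergence test shown (stable hard decisions between iterations) is just one of several rules in use; only the loop structure mirrors the text above:

```python
# Hedged sketch of an iteration loop with a stopping rule.

def decode_with_stopping_rule(decode_iteration, hard_decisions,
                              max_iterations=10):
    """Iterate until the hard decisions stop changing between iterations,
    or until max_iterations (the 'predetermined number') is reached."""
    state = None
    previous = None
    for iteration in range(1, max_iterations + 1):
        state = decode_iteration(state)
        current = hard_decisions(state)
        if current == previous:   # convergence test: decisions are stable
            return current, iteration
        previous = current
    return previous, max_iterations

# Toy demonstration: the 'decoder' refines a counter whose decisions
# change for the first few iterations and then stabilize, so the loop
# stops early instead of running all 10 iterations.
def toy_iteration(state):
    return 0 if state is None else min(state + 1, 3)

def toy_decisions(state):
    return [state]   # stand-in for the hard bit decisions

bits, used = decode_with_stopping_rule(toy_iteration, toy_decisions)
print(bits, used)   # [3] 5
```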
7.4 PERFORMANCE OF THE RECOMMENDED TURBO CODES

7.4.1 SIMULATED TURBO CODE PERFORMANCE CURVES

Fig...