
Name: Jatin Garg

Roll No: 40711502818


Branch: ECE-2(M)

INFORMATION THEORY AND CODING (ITC)


CASE STUDY-1

Turbo Codes and their Performance


Introduction
In 1948 Claude Shannon published a classic paper that established a mathematical basis for the
consideration of the noisy communications channel. In his analysis he quantified the maximum
theoretical capacity of a communications channel, the Shannon limit, and indicated that
error-correcting channel codes must exist that would allow this maximum capacity to be achieved.
The intervening years have seen many well-considered channel codes inch towards the Shannon
limit, but all contenders have required large block lengths to perform close to the limit. The
consequent complexity, cost, and signal latency of these codes have made them impractical
within 3 to 5 dB of the limit, though they provide useful coding gain at higher values of Eb/No
and bit error rate.

In 1993 Berrou, Glavieux and Thitimajshima proposed "a new class of convolutional codes called
turbo codes" whose performance in terms of Bit Error Rate (BER) is close to the Shannon limit.

Principles of Turbo Codes


It is theoretically possible to approach the Shannon limit by using a block code with large block
length or a convolutional code with a large constraint length. The processing power required to
decode such long codes makes this approach impractical.
Turbo codes overcome this limitation by using recursive coders and iterative soft decoders. The
recursive coder makes convolutional codes with short constraint length appear to be block codes
with a large block length, and the iterative soft decoder progressively improves the estimate of
the received message.

Performance of Turbo Codes


Turbo codes perform well because they combine a random-like appearance on the channel with a
physically realizable decoding structure. They are, however, affected by an error floor.

The error floor is a phenomenon encountered in modern iterated sparse-graph-based error-correcting
codes such as LDPC codes and turbo codes. When the bit error ratio (BER) of conventional codes,
such as Reed–Solomon codes under algebraic decoding or convolutional codes under Viterbi decoding,
is plotted against SNR, the BER decreases steadily as the SNR improves. For turbo codes there is
a point after which the curve no longer falls as quickly as before; in other words, there is a
region in which performance flattens. This is called the error floor region, and the region of
steep descent just before it is called the waterfall region.
In the case of turbo codes, error floors are usually attributed to low-weight codewords.

Encoding of Turbo Codes
A specific type of convolutional coder is used to generate turbo codes. The convolutional coder
shown in Figure 1a has a single input, x, outputs p0 and p1, and a constraint length K=3.
Multiplexing the outputs generates a code of rate R=1/2.

The convolutional coder shown in Figure 1b differs in that one of the outputs, p0, has been
"folded back" and is presented at the coder input, making it recursive. This has the effect of
increasing the apparent block length without affecting the constraint length of the coder. The
input is also presented as one of the outputs of the coder, making it systematic. Such coders are
thus called recursive systematic convolutional (RSC) coders.
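
To make this concrete, here is a minimal Python sketch of a K=3 RSC coder in the spirit of
Figure 1b. The generator polynomials (feedback 7, feedforward 5 in octal) are an illustrative
assumption, not necessarily those of the figure.

    def rsc_encode(bits):
        """Rate-1/2 recursive systematic convolutional (RSC) coder, K = 3.

        Assumed generators (1, 5/7) in octal: feedback 7 = 1 + D + D^2,
        feedforward 5 = 1 + D^2.
        """
        s1 = s2 = 0                      # K - 1 = 2 shift-register stages
        systematic, parity = [], []
        for x in bits:
            fb = x ^ s1 ^ s2             # recursive feedback term
            systematic.append(x)         # input passed through: systematic
            parity.append(fb ^ s2)       # parity output
            s1, s2 = fb, s1              # shift the register
        return systematic, parity

Multiplexing the two returned streams gives the rate R=1/2 code.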

In non-recursive convolutional codes it is common practice to flush the coder with zeros to bring
the decoder to a known end state. Flushing with zeros does not readily work with recursive coders;
however, relatively simple binary arithmetic can establish the input sequence that will generate a
zero state. RSC codes can thus be made to appear like linear block codes.
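
The binary arithmetic involved is simple enough to sketch for the coder above: choosing each
flushing input to cancel the feedback drives the register to the all-zero state in K-1 steps.

    def rsc_tail(s1, s2):
        """Tail bits that drive the coder above from state (s1, s2) to zero.

        Flushing with zeros fails for a recursive coder; instead each tail
        bit is chosen as x = s1 XOR s2, so fb = x ^ s1 ^ s2 = 0 and zeros
        shift into the register.
        """
        tail = []
        for _ in range(2):               # K - 1 = 2 steps
            x = s1 ^ s2                  # cancels the feedback term
            tail.append(x)
            s1, s2 = 0, s1               # register fills with zeros
        return tail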

A turbo code is the parallel concatenation of a number of RSC codes. Usually the number of
codes is kept low, typically two, as the added performance of more codes is not justified by the
added complexity and increased overhead. The input to the second coder is an interleaved
version of the systematic input x, so the outputs of coder 1 and coder 2 are time-displaced codes
generated from the same input sequence. The input sequence is presented only once at the
output. The outputs of the two coders may be multiplexed into the stream, giving a rate R=1/3
code, or they may be punctured to give a rate R=1/2 code. This is illustrated in Figure 2.
The interleaver design has a significant effect on code performance. A low-weight code can
produce poor error performance, so it is important that one or both of the coders produce codes
with good weight. If an input sequence x produces a low-weight output from coder 1, then the
interleaved version of x needs to produce a code of good weight from coder 2. Block interleavers
give adequate performance, but pseudo-random interleavers have been shown to give superior
performance.
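
Putting the pieces together, the following sketch parallel-concatenates two copies of the RSC
coder above through a pseudo-random interleaver, giving the rate R=1/3 structure of Figure 2;
the seeded random permutation is only a stand-in for a real interleaver design.

    import random

    def turbo_encode(bits, seed=42):
        """Parallel concatenation of two RSC coders (rate R = 1/3 sketch)."""
        perm = list(range(len(bits)))
        random.Random(seed).shuffle(perm)    # pseudo-random interleaver
        interleaved = [bits[i] for i in perm]
        x, p1 = rsc_encode(bits)             # coder 1: natural order
        _, p2 = rsc_encode(interleaved)      # coder 2: interleaved order
        return x, p1, p2                     # systematic bits sent only once

Puncturing to rate R=1/2 would transmit alternate bits of p1 and p2 instead of both full
parity streams.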

Decoding Principles
At the receiver, the signal is demodulated with its associated noise and a soft output is provided
to the decoder. The soft output might take the form of a quantized value of the decoded bit with
its associated noise, or it may be a bit with an associated probability (e.g. 1 with P(1)=0.65).
Most often it is the log-likelihood ratio (LLR), which is defined as:

Λi = ln [ P(mi = 1 | y') / P(mi = 0 | y') ]
The LLR is a measure of the probability that, given a received soft input y', the message bit mi
associated with a transition in the trellis is 1 or 0. If the events are equiprobable then the
output is 0, but any tendency for mi towards 1 or 0 will result in positive or negative values of
Λi. It is simplest to view the decoding process as two stages: initializing the decoder and
decoding the sequence. The demodulator output contains the soft values of the sequence x' and
the parity bits p1' and p2'. These are used to initialize the decoder, as shown in Figure 3a. The
interleaved sequence is sent to decoder 2, while the sequence derived from x' is sent to decoder 1
and presented to decoder 2 through an interleaver. This re-sequences the bits so that bits
generated from the same bit in x are presented simultaneously to decoder 2, whether they come
from x', p1' or p2'.
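
For BPSK signalling over an AWGN channel, with the assumed mapping 0 → -1 and 1 → +1, the
definition above collapses to a particularly simple channel LLR:

    def channel_llr(y, noise_var):
        """Per-sample LLR for BPSK (0 -> -1, 1 -> +1) in AWGN.

        The log of the ratio of the two Gaussian likelihoods of a soft
        sample y' reduces to 2*y'/sigma^2: zero when the bits are
        equiprobable, positive when a 1 is more likely.
        """
        return [2.0 * yi / noise_var for yi in y]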

The decoder may have some knowledge of the probability of the transmitted signal; for example,
it may know that some messages are more likely than others. This a priori information assists the
decoder, which adds information gained from the decoding process, forming the a posteriori
output. The decoder uses all this information to make its best estimate of the received sequence.
The output is then de-interleaved and presented back to decoder 1, which makes its best estimate.
Further iterations through decoders 1 and 2, with associated interleaving and de-interleaving,
refine the estimate until a final version of the block, x’’, is presented at the output. This process is
shown in Figure 3b.
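
The iterative exchange of Figure 3b can be summarized in code. In this sketch siso_decode is a
hypothetical soft-in soft-out stage (a MAP or SOVA decoder, described below) returning extrinsic
LLRs; only the interleaving and de-interleaving plumbing is shown.

    def turbo_decode(llr_x, llr_p1, llr_p2, perm, iterations=8):
        """Skeleton of iterative turbo decoding (cf. Figure 3b).

        siso_decode(systematic, parity, apriori) is assumed to exist
        and to return extrinsic LLRs; it is not implemented here.
        """
        n = len(llr_x)
        llr_x_int = [llr_x[i] for i in perm]       # interleaved systematic
        apriori = [0.0] * n                        # no prior knowledge at start
        for _ in range(iterations):
            ext1 = siso_decode(llr_x, llr_p1, apriori)        # decoder 1
            ext1_int = [ext1[i] for i in perm]                # interleave
            ext2 = siso_decode(llr_x_int, llr_p2, ext1_int)   # decoder 2
            apriori = [0.0] * n
            for k, i in enumerate(perm):                      # de-interleave
                apriori[i] = ext2[k]                          # feed back
        total = [llr_x[i] + ext1[i] + apriori[i] for i in range(n)]
        return [1 if t > 0 else 0 for t in total]  # hard estimate x''
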
Decoding Algorithms
The two main types of decoder are Maximum A Posteriori (MAP) and the Soft Output Viterbi
Algorithm (SOVA). MAP looks for the most likely symbol received, while SOVA looks for the most
likely sequence. Both perform similarly at high Eb/No; at low Eb/No MAP has a distinct
advantage, gained at the cost of added complexity.
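
The distinction matters because the most likely sequence and the bit-wise most likely symbols
need not agree. A toy joint distribution over two-bit messages (the numbers are assumptions
chosen only to make the two criteria disagree) illustrates this:

    # Toy joint distribution over two-bit messages.
    p = {"00": 0.00, "01": 0.35, "10": 0.40, "11": 0.25}

    # Sequence view (what SOVA optimizes): the single most likely sequence.
    best_sequence = max(p, key=p.get)                # -> "10"

    # Symbol view (what MAP optimizes): decide each bit from its marginal.
    p_bit0 = p["10"] + p["11"]                       # P(bit 0 = 1) = 0.65
    p_bit1 = p["01"] + p["11"]                       # P(bit 1 = 1) = 0.60
    best_symbols = f"{int(p_bit0 > 0.5)}{int(p_bit1 > 0.5)}"   # -> "11"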

MAP looks for the most probable value for each received bit by calculating the conditional
probability of the transition from the previous bit, given the probability of the received bit. The
focus on transitions, or state changes within the trellis, makes LLR a very suitable probability
measure for use in MAP.

SOVA is very similar to the standard Viterbi algorithm used in hard-decision decoders. It uses a
trellis to establish a surviving path but, unlike its hard counterpart, compares this with the
sequences that were used to establish the non-surviving paths. Where surviving and non-surviving
paths overlap, the likelihood of that section being on the correct path is reinforced.
Where there are differences, the likelihood of that section of the path is reduced. At the output of
each decoding stage the values of the bit sequence are scaled by a channel reliability factor,
calculated from the likely output sequence, to reduce the probability of over-optimistic soft
outputs. The sequence and its associated confidence factors are then presented to the interleaver
for further iterations. After the prescribed number of iterations, the SOVA decoder will output
the sequence with the maximum likelihood.
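
The reliability update at the heart of this comparison can be sketched with a toy helper: where a
competing path disagrees with the survivor, the reliability of that bit is capped by the
path-metric difference. The data layout here is an assumption made for illustration.

    def sova_reliability(survivor_bits, competitors):
        """Toy SOVA-style reliability update.

        competitors is an assumed list of (delta, bits) pairs: the metric
        difference against the survivor and the competing bit sequence.
        Agreement leaves a bit's reliability untouched (reinforced);
        disagreement caps it at the metric difference (reduced).
        """
        rel = [float("inf")] * len(survivor_bits)
        for delta, bits in competitors:
            for i, (s, c) in enumerate(zip(survivor_bits, bits)):
                if s != c:
                    rel[i] = min(rel[i], delta)
        return rel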

APPLICATIONS
• Turbo codes are used extensively in 3G and 4G mobile telephony standards, e.g. in HSPA, EV-DO and LTE.
• MediaFLO, a terrestrial mobile television system from Qualcomm.
• The interaction channel of satellite communication systems such as DVB-RCS[4] and DVB-RCS2.
• Recent NASA missions such as the Mars Reconnaissance Orbiter, which uses turbo codes as an alternative to concatenated Reed–Solomon/Viterbi codes.
• IEEE 802.16 (WiMAX), a wireless metropolitan area network standard, which uses block turbo coding and convolutional turbo coding.

ADVANTAGES
• Remarkable power efficiency in AWGN and flat fading channels for moderately low BER.
• Design tradeoffs suitable for delivery of multimedia services.

DISADVANTAGES
• Long latency.
• Relatively poor performance at very low BER (the error floor).
• Because turbo codes operate at very low SNR, channel estimation and tracking are a critical issue.
