


Convolutional codes

and their decoding

KAKALI SAHARIA
M.TECH (ECE)
Department of Electronics
Pondicherry University
OUTLINE

Introduction to convolutional codes

Convolutional encoder

Convolutional code representations, explained with examples
-Generator Representation
-State Diagram Representation
-Tree Diagram Representation
-Trellis Diagram Representation

Decoding of convolutional codes
-Sequential decoding
-Viterbi algorithm

Advantages of convolutional codes

Practical examples of convolutional codes
FORWARD ERROR CORRECTION CODES

There are four important forward error correction codes that find
applications in digital transmission. They are:
Block Parity
Hamming Code
Interleaved Code
Convolutional Codes
Introduction to convolutional codes

Convolutional codes were introduced by Elias in 1955.
Convolutional coding is a popular error-correcting coding
method used to improve the reliability of communication systems.
A message is convolved with the encoder's generator sequences and
then transmitted over a noisy channel.
This convolution operation encodes some redundant
information into the transmitted signal, thereby improving the
reliability of transmission over the channel.
Convolutional codes are error-correcting codes used to reliably
transmit digital data over unreliable communication channels
subject to channel noise.
INTRODUCTION TO CONVOLUTIONAL
CODES (Cont’d)

Convolutional codes map information bits to code bits, but
sequentially convolve the sequence of information bits
according to some rule.
Convolutional coding can be applied to a continuous data
stream as well as to blocks of data, whereas block codes can
be applied only to blocks of data.
Convolutional Encoder
 A convolutional encoder is a finite state machine (FSM)
that processes information bits in a serial manner.
 Convolutional encoding of data is accomplished using a shift
register and associated combinatorial logic that performs
modulo-two addition.
A shift register is merely a chain of flip-flops wherein the
output of the nth flip-flop is tied to the input of the (n+1)th flip
flop.
Every time the active edge of the clock occurs, the input to
the flip-flop is clocked through to the output, and thus the
data are shifted over one stage.
Convolutional encoder (cont’d)

In a convolutional code, the block of n code bits generated
by the encoder at a particular time instant depends not
only on the block of k message bits within that time instant
but also on the blocks of message bits within a previous span of
N-1 time instants (N>1).
A convolutional code with constraint length N consists of
an N-stage shift register (SR) and ν modulo-2 adders.
Convolutional encoder (cont’d)

Fig(a): Convolutional Encoder
Fig(a) shows a convolutional encoder. Here N=3, ν=2.
 The message bits are applied at the input of the shift register
(SR). The coded digit stream is obtained at the commutator
output. The commutator samples the ν modulo-2 adders in a
sequence, once during each input-bit interval.
Convolutional encoder (cont’d)
Example: Assume that the input digits are 1010. Find the
coded sequence output for Fig(a).
Initially, the shift register stages s1 = s2 = s3 = 0.
When the first message bit 1 enters the SR, s1 = 1, s2 = s3 = 0.
Then ν1 = 1, ν2 = 1 and the coder output is 11.
When the second message bit 0 enters the SR, s1 = 0, s2 = 1,
s3 = 0. Then ν1 = 1 and ν2 = 0 and the coder output is 10.
When the third message bit 1 enters the SR, s1 = 1, s2 = 0 and
s3 = 1. Then ν1 = 0 and ν2 = 0 and the coder output is 00.
When the fourth message bit 0 enters the SR, s1 = 0, s2 = 1 and
s3 = 0. Then ν1 = 1 and ν2 = 0 and the coder output is 10.
The coded output sequence is: 11 10 00 10, i.e. 11100010.
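The stepping of the shift register and the two modulo-2 adders is easy to
check in code. The short Python sketch below is only an illustration (it
assumes the Fig(a) tap pattern g1 = 111, g2 = 101); it reproduces the worked
example above, turning the input 1010 into 11 10 00 10.

    # Minimal sketch of the Fig(a) encoder: 3-stage shift register, two modulo-2 adders.
    # Assumed taps: g1 = 111, g2 = 101.
    def convolutional_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
        s = [0, 0, 0]                                   # shift register s1, s2, s3, initially all zero
        out = []
        for b in bits:
            s = [b] + s[:2]                             # new message bit enters s1, the rest shift right
            v1 = sum(g * x for g, x in zip(g1, s)) % 2  # first modulo-2 adder
            v2 = sum(g * x for g, x in zip(g2, s)) % 2  # second modulo-2 adder
            out += [v1, v2]                             # commutator samples v1 then v2
        return out

    print(convolutional_encode([1, 0, 1, 0]))           # [1,1, 1,0, 0,0, 1,0] -> 11100010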
Encoder Example
Convolutional encoder (cont’d)
PARAMETERS OF A CONVOLUTIONAL ENCODER
Convolutional codes are commonly specified by three
parameters: (n,k,m):
n = number of output bits
k = number of input bits
m = number of memory registers
Code Rate: The quantity k/n is called the code rate. It is a
measure of the efficiency of the code.
Constraint Length: The quantity L (or K) is called the
constraint length of the code. It represents the number of bits
in the encoder memory that affect the generation of the n
output bits. It is defined by
Constraint Length, L = k(m - 1)
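A quick arithmetic check of these two definitions, sketched in Python with
hypothetical parameter values (n = 2, k = 1, m = 3) chosen purely for
illustration:

    # Hypothetical encoder parameters for illustration: n outputs, k inputs, m memory registers.
    n, k, m = 2, 1, 3
    code_rate = k / n                  # k/n = 1/2
    L = k * (m - 1)                    # constraint length as defined on this slide: L = k(m-1) = 2
    print(code_rate, L)                # 0.5 2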
ENCODER REPRESENTATIONS

The encoder can be represented in several different but


equivalent ways. They are:

I. Generator Representation
II. State Diagram Representation
III. Tree Diagram Representation
IV. Trellis Diagram Representation
ENCODER REPRESENTATIONS
(cont’d)
I. Generator Representation
 Generator representation shows the hardware connection of
the shift register taps to the modulo-2 adders. A generator
vector represents the position of the taps for an output. A
“1” represents a connection and a “0” represents no
connection.
A convolutional code may be defined by a set of n generator
polynomials, one for each encoder output.
For the circuit under consideration:
g1(D) = 1 + D + D²
g2(D) = 1 + D²
The set {gi(D)} defines the code completely. The length of the
shift register is determined by the highest-degree generator
polynomial.
Example
ENCODER REPRESENTATIONS
(cont’d)
 For example, the two generator vectors for the encoder in
Fig(a) are g1 = [111] and g2 = [101], where the subscripts 1 and 2
denote the corresponding output terminals.
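Since encoding is literally a discrete convolution of the message with each
generator sequence (modulo 2), the generator vectors can be checked directly.
The Python sketch below is an illustration only: it convolves the message
1010 from the earlier example with g1 = [1,1,1] and g2 = [1,0,1], and the
first four output pairs come out as 11 10 00 10, matching that example (the
remaining terms correspond to flushing the register with zeros).

    # Sketch: encoding as modulo-2 convolution of the message with each generator vector.
    import numpy as np

    msg = [1, 0, 1, 0]
    g1, g2 = [1, 1, 1], [1, 0, 1]                   # generator vectors for Fig(a)

    v1 = np.convolve(msg, g1) % 2                   # output stream of the first adder
    v2 = np.convolve(msg, g2) % 2                   # output stream of the second adder

    pairs = [f"{a}{b}" for a, b in zip(v1[:len(msg)], v2[:len(msg)])]
    print(pairs)                                    # ['11', '10', '00', '10']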

II. State Diagram Representation


 In the state diagram, the state information of the encoder
is shown in the circles. Each new input information bit causes
a transition from one state to another.
 Contents of the rightmost (K-1) shift register stages define
the states of the encoder. The transition of an encoder from
one state to another, as caused by input bits, is depicted in the
state diagram.
ENCODER REPRESENTATIONS
(cont’d)
 The path information between the states, denoted as x/c,
represents input information bit x and output encoded bits c.
It is customary to begin convolutional encoding from the all
zero state.
Example: State diagram representation of convolutional codes.

Fig(b): State diagram of the encoder. Here k=1, n=2, K=3.

ENCODER REPRESENTATIONS
(cont’d)
From the state diagram
Let 00 = State a; 01 = State b; 10 = State c; 11 = State d.
(1) State a goes to State a when the input is 0 and the output is 00
(2) State a goes to State b when the input is 1 and the output is 11
(3) State b goes to State c when the input is 0 and the output is 10
(4) State b goes to State d when the input is 1 and the output is 01
(5) State c goes to State a when the input is 0 and the output is 11
(6) State c goes to State b when the input is 1 and the output is 00
(7) State d goes to State c when the input is 0 and the output is 01
(8) State d goes to State d when the input is 1 and the output is 10
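These eight transitions can be written down directly as a lookup table. The
sketch below is only an illustration, using the labels a = 00, b = 01,
c = 10, d = 11 from this slide; walking the table for the input 1010
reproduces the coder output obtained earlier.

    # State diagram of the k=1, n=2, K=3 encoder as a lookup table.
    # Key: (current state, input bit) -> (next state, output bits)
    TRANSITIONS = {
        ('a', 0): ('a', '00'), ('a', 1): ('b', '11'),
        ('b', 0): ('c', '10'), ('b', 1): ('d', '01'),
        ('c', 0): ('a', '11'), ('c', 1): ('b', '00'),
        ('d', 0): ('c', '01'), ('d', 1): ('d', '10'),
    }

    def encode_via_state_diagram(bits, state='a'):   # encoding starts from the all-zero state a
        out = []
        for b in bits:
            state, v = TRANSITIONS[(state, b)]
            out.append(v)
        return ' '.join(out)

    print(encode_via_state_diagram([1, 0, 1, 0]))    # 11 10 00 10, as in the earlier example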
ENCODER REPRESENTATIONS
(cont’d)
III. Tree Diagram Representation
 The tree diagram representation shows all possible
information and encoded sequences for the
convolutional encoder.
 In the tree diagram, a solid line represents input
information bit 0 and a dashed line
represents input information bit 1.
 The corresponding output encoded bits are shown on the
branches of the tree.
 An input information sequence defines a specific path
through the tree diagram from left to right.
ENCODER REPRESENTATIONS
(cont’d)
Example: Tree Diagram representation of convolutional codes

Fig(c): Tree diagram


ENCODER REPRESENTATIONS
(cont’d)
The tree diagram in Fig(c) tends to suggest that
there are eight states in the last layer of the tree and that
this will continue to grow. However, some states in the last
layer (i.e. the stored data in the encoder) are equivalent, as
indicated by the same letter on the tree (for example H
and h).
These pairs of states may be assumed to be
equivalent because they have the same internal state for
the first two stages of the shift register and therefore will
respond in exactly the same way to the receipt of a new (0 or
1) input data bit.
ENCODER REPRESENTATIONS (cont’d)
IV. Trellis Diagram Representation
The trellis diagram is basically a redrawing of the state
diagram. It shows all possible state transitions at each time
step.
The trellis diagram is drawn by lining up all the possible
states (2^L) on the vertical axis. Then we connect each state to
the next state by the allowable codewords for that state.
There are only two choices possible at each state. These
are determined by the arrival of either a 0 or a 1 bit.
The arrows show the input bit and the output bits are
shown in parentheses.
 The arrows going upwards represent a 0 bit and going
downwards represent a 1 bit.
ENCODER REPRESENTATIONS (cont’d)
Steps to construct trellis diagram
 It starts from scratch (all 0’s in the SR, i.e., state a) and
makes transitions corresponding to each input data digit.
These transitions are denoted by a solid line for the
next data digit 0 and by a dashed line for the next data
digit 1.
 Thus when the first input digit is 0, the encoder output
is 00 (solid line)
When the input digit is 1, the encoder output is 11
(dashed line).
 We continue this way for the second input digit and so
on as depicted in Fig (e) that follows.
ENCODER REPRESENTATIONS (cont’d)
Example: Encoding of convolutional codes using Trellis
Representation
k=1, n=2, K=3 convolutional code

We begin in state 00:


Input Data: 0 1 0 1 1 0 0
Output: 00 11 01 00 10 10 11
Decoding of convolutional codes
 There are several different approaches to decoding of
convolutional codes.
 These are grouped in two basic categories:

A. Sequential Decoding
-Fano Algorithm.
B. Maximum Likelihood Decoding
-Viterbi Algorithm.
These two methods represent fundamentally different
approaches.
Decoding of convolutional codes
(cont’d)
Each node examined represents a path through part of the
tree.
The Fano algorithm can only operate over a code tree
because it cannot examine path merging.
At each decoding stage, the Fano algorithm retains the
information regarding three paths:
-the current path,
-its immediate predecessor path,
-one of its successor paths.
Based on this information, the Fano algorithm can move
from the current path to either its immediate predecessor
path or the selected successor path.
Decoding of convolutional codes
(cont’d)
 It allows both forward and backward movement
through the code tree.
Example: Decoding using sequential decoding (Fano algorithm)

Suppose the coded sequence 01 11 01 11 01 01 11 is received. The
algorithm starts at the root of the tree and tallies the received
bits against the outputs it finds on the way. If an output does
not tally, it retraces its position back to the previous
ambiguous decision and tries the alternative branch.
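A heavily simplified sketch of this tally-and-retrace behaviour is given
below. It is not the full Fano algorithm (which maintains a running path
metric and an adaptive threshold); it merely follows the better-matching
branch of the code tree of the k=1, n=2, K=3 encoder used earlier, and backs
up to the previous decision whenever the accumulated mismatch count exceeds a
fixed threshold.

    # Simplified sequential (tree) decoding sketch: follow branches that tally with the
    # received pairs, and retrace to the previous decision when too many bits disagree.
    # Illustrative only; the actual Fano algorithm uses path metrics and thresholds.
    def tree_decode(received_pairs, threshold=1):
        def branch(state, bit):                      # same encoder as before: taps 111 and 101
            m1, m2 = state
            return (bit, m1), ((bit + m1 + m2) % 2, (bit + m2) % 2)

        def search(state, depth, mismatches, path):
            if depth == len(received_pairs):
                return path                          # reached a leaf: decoding complete
            r = received_pairs[depth]
            def disagreement(bit):                   # how badly this branch tallies with r
                out = branch(state, bit)[1]
                return sum(x != y for x, y in zip(out, r))
            # try the branch that tallies better first; keep the other for backtracking
            for bit in sorted((0, 1), key=disagreement):
                d = disagreement(bit)
                if mismatches + d <= threshold:      # go forward only while still plausible
                    result = search(branch(state, bit)[0], depth + 1, mismatches + d, path + [bit])
                    if result is not None:
                        return result
            return None                              # dead end: the caller retraces one step

        return search((0, 0), 0, 0, [])

    # Usage: the codeword 11 10 00 10 from the encoding example, with one bit flipped.
    print(tree_decode([(1, 1), (1, 0), (0, 1), (1, 0)]))   # -> [1, 0, 1, 0]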
Decoding of convolutional codes
(cont’d)
A. Sequential Decoding – Fano Algorithm.

 It was one of the first methods proposed for decoding of a
convolutionally coded bit stream.
 It was first proposed by Wozencraft and later a better
version was proposed by Fano.
 Sequential decoding concentrates only on a certain
number of likely codewords.
The purpose of sequential decoding is to search through
the nodes of the code tree in an efficient way to find the
maximum likelihood path.
Decoding of convolutional codes
(cont’d)

Fig: Decoding using sequential decoding (Fano algorithm)


Decoding of convolutional codes
(cont’d)
B. Maximum Likelihood Decoding - Viterbi Algorithm
The Viterbi decoder examines the entire received sequence
of a given length.
It applies the maximum likelihood decoding rule, which tries to
minimize the error between the detected sequence and the
original transmitted sequence.
A trellis diagram is constructed for the code, and the decoder
traces the paths through the trellis level by level against the
received sequence.
If the received sequence does not correspond exactly to any
path through the trellis, Viterbi decoding finds the best path,
taking the subsequent received symbols into account.
Decoding of convolutional codes
(cont’d)
The best path is termed the survivor.
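A minimal hard-decision Viterbi decoder for the k=1, n=2, K=3 code used in
these slides can be sketched as follows. This is an illustration only (it
assumes the Fig(a) taps 111 and 101); practical decoders add soft decisions,
traceback windows and tail-bit handling.

    # Minimal hard-decision Viterbi decoder sketch for the k=1, n=2, K=3 code (taps 111, 101).
    # The state is the pair of previous input bits; the branch metric is the Hamming
    # distance between the received pair and the pair the encoder would have produced.
    def viterbi_decode(received_pairs):
        def step(state, bit):                         # one encoder transition
            m1, m2 = state
            v1 = (bit + m1 + m2) % 2                  # adder with taps 1 1 1
            v2 = (bit + m2) % 2                       # adder with taps 1 0 1
            return (bit, m1), (v1, v2)

        survivors = {(0, 0): (0, [])}                 # start in the all-zero state
        for r in received_pairs:
            nxt_survivors = {}
            for state, (metric, path) in survivors.items():
                for bit in (0, 1):
                    nxt, out = step(state, bit)
                    branch = (out[0] != r[0]) + (out[1] != r[1])    # Hamming distance
                    cand = (metric + branch, path + [bit])
                    # keep only the best (lowest-metric) path into each state: the survivor
                    if nxt not in nxt_survivors or cand[0] < nxt_survivors[nxt][0]:
                        nxt_survivors[nxt] = cand
            survivors = nxt_survivors
        return min(survivors.values())[1]             # message bits along the best survivor

    # Usage: decode 11 10 00 10 (the earlier codeword) received with one bit in error.
    print(viterbi_decode([(1, 1), (1, 0), (0, 1), (1, 0)]))   # -> [1, 0, 1, 0]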
Example: Maximum Likelihood decoding –Viterbi algorithm.

Fig: Maximum Likelihood decoding –Viterbi algorithm(Step1)


Decoding of convolutional codes
(cont’d)

Fig: Maximum Likelihood decoding –Viterbi algorithm(Step2)


Decoding of convolutional codes
(cont’d)

Fig: Maximum Likelihood decoding –Viterbi algorithm(Step3)


Decoding of convolutional codes
(cont’d)

Fig: Maximum Likelihood decoding –Viterbi algorithm(Step4)


Advantages of convolutional codes

Convolutional coding is a popular error-correcting coding
method used in digital communications.
The convolution operation encodes some redundant
information into the transmitted signal, thereby improving
the reliability of transmission over the channel.
Convolutional encoding with Viterbi decoding is a powerful
FEC technique that is particularly suited to a channel in
which the transmitted signal is corrupted mainly by AWGN.
It is simple and has good performance with low
implementation cost.
Practical examples of convolutional
codes

NASA uses a standard r=1/2, K=7 convolutional code.
The IS-54/136 TDMA cellular standard uses a r=1/2, K=6
convolutional code.
The GSM cellular standard uses a r=1/2, K=5 convolutional
code.
The IS-95 CDMA cellular standard uses a r=1/2, K=9
convolutional code for the forward channel and a r=1/3, K=9
convolutional code for the reverse channel.
The Galileo space probe used a constraint length 15
convolutional code.
THANK
YOU
