
Conference Paper · December 2004


Convolutional codes simulation using Matlab
EUGEN PETAC
Ovidius University of Constantza,
ROMANIA
ABDEL-RAHMAN ALZOUBAIDI
Mutah University
JORDAN

Abstract: In order to reduce the effects of random and burst errors in the transmitted signal it is necessary to use error-control coding. We researched some possibilities of such coding using the MATLAB Communications Toolbox. Two types of codes are available: linear block codes and convolutional codes. In block coding, the coding algorithm transforms each piece (block) of information into a code word, part of which is generated, structured redundancy. A convolutional code uses an extra parameter (memory), which puts an additional constraint on the code. Convolutional codes operate on serial data, one or a few bits at a time. This paper describes basic aspects of convolutional codes and illustrates Matlab encoding and decoding implementations. Convolutional codes are often used to improve the performance of radio and satellite links.

Key words: Convolutional codes, error-control coding, radio and satellite links.

1. Introduction
Convolutional codes are commonly specified by three parameters (n,k,m): n = number of output bits; k = number of input bits; m = number of memory registers. The quantity k/n, called the code rate, is a measure of the efficiency of the code. Commonly the parameters k and n range from 1 to 8, m from 2 to 10, and the code rate from 1/8 to 7/8, except for deep space applications, where code rates as low as 1/100 or even lower have been employed.
Often the manufacturers of convolutional code chips specify [1] the code by the parameters (n,k,L). The quantity L is called the constraint length of the code and is defined by L = k(m - 1). The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bits. The constraint length L is also referred to by the capital letter K, which can be confusing with the lower case k, which represents the number of input bits. In some books K is defined as equal to the product of k and m. Often in commercial specifications, the codes are specified by (r, K), where r = the code rate k/n and K is the constraint length. The constraint length K, however, is equal to L - 1, as defined in this paper.
Even though a convolutional coder accepts a fixed number of message symbols and produces a fixed number of code symbols, its computations depend not only on the current set of input symbols but on some of the previous input symbols.
In general, a rate R = k/n, k <= n, convolutional encoder input (information sequence) is a sequence of binary k-tuples, u = ..., u-1, u0, u1, u2, ..., where ui = (ui^(1), ..., ui^(k)). The output (code sequence) is a sequence of binary n-tuples, v = ..., v-1, v0, v1, v2, ..., where vi = (vi^(1), ..., vi^(n)). The sequences must start at a finite (positive or negative) time and may or may not end.
The relation between the information sequences and the code sequences is determined by the equation

    v = uG,

where

        | G0  G1  ...  Gm                 |
    G = |     G0  G1  ...  Gm             |
        |         G0  G1  ...  Gm         |
        |             ...  ...  ...       |

is the semi-infinite generator matrix, and where the sub-matrices Gi, 0 <= i <= m, are binary k x n matrices. The arithmetic in v = uG is carried out over the binary field, F2, and the parts left blank in the generator matrix G are assumed to be filled in with zeros. The right hand side of v = uG defines a discrete-time convolution between u and g = (G0, G1, ..., Gm); hence the name convolutional codes [2].
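This convolution view can be checked numerically. The following small sketch is ours (not a listing from the paper), for the common rate-1/2 code with generator sequences (1,1,1) and (1,0,1) (octal 7 and 5): each output stream is the mod-2 convolution of the message with one generator sequence.

```matlab
% Rate-1/2 convolutional encoding as two mod-2 convolutions (base MATLAB).
u  = [1 0 1 1];            % information sequence
g1 = [1 1 1];              % generator sequence for G1(D) = 1 + D + D^2
g2 = [1 0 1];              % generator sequence for G2(D) = 1 + D^2
v1 = mod(conv(u, g1), 2);  % first code stream
v2 = mod(conv(u, g2), 2);  % second code stream
v  = reshape([v1; v2], 1, []);  % interleave the two streams into v
```

Interleaving the two streams symbol by symbol reproduces the code sequence v of the matrix description above.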
As in many other situations where convolutions appear, it is convenient to express the sequences in some sort of transform. In information theory and coding theory [3], [4] it is common to use the delay operator D, the D-transform. The information and code sequences become

    u(D) = ... + u-1 D^-1 + u0 + u1 D + u2 D^2 + ...

and

    v(D) = ... + v-1 D^-1 + v0 + v1 D + v2 D^2 + ...

They are related through the equation

    v(D) = u(D)G(D),

where

    G(D) = G0 + G1 D + ... + Gm D^m

is the generator matrix.
The set of polynomial matrices is a special case of the rational generator matrices. Hence, instead of having a finite impulse response in the encoder, as for the polynomial case, we can allow periodically repeating infinite impulse responses. To make the formal definitions for this case it is easier to start in the D-domain.
Let F2((D)) denote the field of binary Laurent series. The element

    x(D) = sum_{i=r}^{inf} xi D^i  in F2((D)), r in Z,

contains at most finitely many negative powers of D. Similarly, let F2[D] denote the ring of binary polynomials. A polynomial

    x(D) = sum_{i=0}^{t} xi D^i  in F2[D], t in Z+,

contains no negative powers of D and only finitely many positive ones.
Given a pair of polynomials x(D), y(D) in F2[D], where y(D) != 0, we can obtain the element x(D)/y(D) in F2((D)) by long division. All non-zero ratios x(D)/y(D) are invertible, so they form the field of binary rational functions, F2(D), which is a sub-field of F2((D)).
A rate R = k/n (binary) convolutional transducer over the field of rational functions F2(D) is a linear mapping

    F2^k((D)) -> F2^n((D)), u(D) -> v(D),

which can be represented as v(D) = u(D)G(D), where G(D) is a k x n transfer function matrix of rank k with entries in F2(D) and v(D) is called the code sequence corresponding to the information sequence u(D).
A rate R = k/n convolutional code C over F2 is the image set of a rate R = k/n convolutional transducer. We will only consider realizable (causal) transfer function matrices, which we call generator matrices. A transfer function matrix of a convolutional code is called a generator matrix if it is realizable (causal).
It follows from the definitions that a rate R = k/n convolutional code C with the k x n generator matrix G(D) is the row space of G(D) over F2((D)). Hence, it is the set of all code sequences generated by the convolutional generator matrix G(D).
A rate R = k/n convolutional encoder of a convolutional code with rate R = k/n generator matrix G(D) over F2(D) is a realization by linear sequential circuits of G(D).

2. Convolutional encoder simulation
The Convolutional Encoder block encodes a sequence of binary input vectors to produce a sequence of binary output vectors. This block can process multiple symbols at a time. If the encoder takes k input bit streams (that is, can receive 2^k possible input symbols), then this block's input vector length is L*k for some positive integer L. Similarly, if the encoder produces n output bit streams (that is, can produce 2^n possible output symbols), then this block's output vector length is L*n. The input can be a sample-based vector with L = 1, or a frame-based column vector with any positive integer for L. For a variable in the MATLAB workspace [5], [6] that contains the trellis structure, we put its name as the Trellis structure parameter. This way is preferable because it causes Simulink [5] to spend less time updating the diagram at the beginning of each simulation, compared with the second alternative. To specify the encoder using its constraint length, generator polynomials, and possibly feedback connection polynomials, we used a poly2trellis command within the Trellis structure field. For example, for an encoder with a constraint length of 7, code generator polynomials of 171 and 133 (in octal numbers), and a feedback connection of 171 (in octal), we have used the Trellis structure parameter poly2trellis(7,[171 133],171).
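The same encoder can also be exercised from the MATLAB command line; this is our sketch (assuming the Communications Toolbox functions poly2trellis, istrellis and convenc), not a listing from the paper:

```matlab
% Build the trellis for constraint length 7, generators 171 and 133 (octal),
% with feedback connection 171 (octal), then encode a random message.
trellis = poly2trellis(7, [171 133], 171);
istrellis(trellis)               % returns true for a valid trellis structure
msg  = randi([0 1], 100, 1);     % 100 random message bits
code = convenc(msg, trellis);    % 200 coded bits (rate 1/2)
```

Passing the trellis variable around, rather than rebuilding it, matches the workspace-variable usage recommended above.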
The encoder registers begin in the all-zeros state. We configured the encoder so that it resets its registers to the all-zeros state during the course of the simulation: the value None indicates that the encoder never resets; the value On each frame indicates that the encoder resets at the beginning of each frame, before processing the next frame of input data; the value On nonzero Rst input causes the block to have a second input port, labeled Rst. The signal at the Rst port is a scalar signal. When it is nonzero, the encoder resets before processing the data at the first input port.

3. Convolutional decoder simulation
3.1. Viterbi Decoder
The Viterbi Decoder block [7], [1] decodes input symbols to produce binary output symbols. This block can process several symbols at a time for faster performance. If the convolutional code uses an alphabet of 2^n possible symbols, then this block's input vector length is L*n for some positive integer L. Similarly, if the decoded data uses an alphabet of 2^k possible output symbols, then this block's output vector length is L*k. The integer L is the number of frames that the block processes in each step. The input can be either a sample-based vector with L = 1, or a frame-based column vector with any positive integer for L.
The entries of the input vector are either bipolar, binary, or integer data, depending on the Decision type parameter: Unquantized - real numbers; Hard Decision - 0, 1; Soft Decision - integers between 0 and 2^k - 1, where k is the Number of soft decision bits parameter, with 0 the most confident decision for logical zero and 2^k - 1 the most confident decision for logical one. Other values represent less confident decisions.
If the input signal is frame-based, then the block has three possible methods for transitioning between successive frames. The Operation mode parameter controls which method the block uses. In Continuous mode, the block saves its internal state metric at the end of each frame, for use with the next frame. Each traceback path is treated independently. In Truncated mode, the block treats each frame independently. The traceback path starts at the state with the best metric and always ends in the all-zeros state. This mode is appropriate when the corresponding Convolutional Encoder block has its Reset parameter set to On each frame. In Terminated mode, the block treats each frame independently, and the traceback path always starts and ends in the all-zeros state. This mode is appropriate when the uncoded message signal (that is, the input to the corresponding Convolutional Encoder block) has enough zeros at the end of each frame to fill all memory registers of the encoder. If the encoder has k input streams and constraint length vector constr (using the polynomial description), then "enough" means k*max(constr-1). In the special case when the frame-based input signal contains only one symbol, the Continuous mode is most appropriate.
The Traceback depth parameter, D, influences the decoding delay. The decoding delay is the number of zero symbols that precede the first decoded symbol in the output. If the input signal is sample-based, then the decoding delay consists of D zero symbols. If the input signal is frame-based and the Operation mode parameter is set to Continuous, then the decoding delay consists of D zero symbols. If the Operation mode parameter is set to Truncated or Terminated, then there is no output delay and the Traceback depth parameter must be less than or equal to the number of symbols in each frame. If the code rate is 1/2, then a typical Traceback depth value is about five times the constraint length of the code.
The reset port is usable only when the Operation mode parameter is set to Continuous. Checking the Reset input check box causes the block to have an additional input port, labeled Rst. When the Rst input is nonzero, the decoder returns to its initial state by configuring its internal memory as follows: it sets the all-zeros state metric to zero; sets all other state metrics to the maximum value; and sets the traceback memory to zero. Using a reset port on this block is analogous to setting the Reset parameter in the Convolutional Encoder block to On nonzero Rst input.
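The block settings above have command-line counterparts. The following is a hedged sketch of ours (assuming the Communications Toolbox functions convenc, vitdec and biterr) of hard-decision decoding in continuous mode, with the traceback depth set to about five times the constraint length, as recommended above for a rate-1/2 code:

```matlab
trellis = poly2trellis(7, [171 133]);           % feedforward rate-1/2 encoder
tblen   = 5 * 7;                                % traceback depth, about 5*K
msg     = randi([0 1], 200, 1);
code    = convenc(msg, trellis);
rx      = double(xor(code, rand(size(code)) < 0.01));  % a few channel bit errors
decoded = vitdec(rx, trellis, tblen, 'cont', 'hard');
% 'cont' mode delays the output by tblen bits, so align before comparing:
[~, ber] = biterr(decoded(tblen+1:end), msg(1:end-tblen));
```

The alignment step mirrors the decoding delay of D zero symbols described above for Continuous mode.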
3.2. APP Decoder
The APP Decoder block [8] performs a posteriori probability (APP) decoding of a convolutional code. The input L(u) represents the sequence of log-likelihoods of encoder input bits, while the input L(c) represents the sequence of log-likelihoods of code bits. The outputs L(u) and L(c) are updated versions of these sequences, based on information about the encoder. If the convolutional code uses an alphabet of 2^n possible symbols, then this block's L(c) vectors have length Q*n for some positive integer Q. Similarly, if the decoded data uses an alphabet of 2^k possible output symbols, then this block's L(u) vectors have length Q*k. The integer Q is the number of frames that the block processes in each step.
The inputs can be either sample-based vectors having the same dimension and orientation, with Q = 1, or frame-based column vectors with any positive integer for Q.
To define the convolutional encoder that produced the coded input, we have used the Trellis structure MATLAB parameter. We tested two ways: giving the name of a variable in the MATLAB workspace that contains the trellis structure as the Trellis structure parameter, which is preferable because it causes Simulink to spend less time updating the diagram at the beginning of each simulation; or specifying the encoder using its constraint length, generator polynomials, and possibly feedback connection polynomials, with a poly2trellis command within the Trellis structure field. For example, for an encoder with a constraint length of 7, code generator polynomials of 171 and 133 (in octal numbers), and a feedback connection of 171 (in octal), we used the Trellis structure parameter poly2trellis(7,[171 133],171).
To indicate how the encoder treats the trellis at the beginning and end of each frame, it is necessary to set the Termination method parameter to either Truncated or Terminated. The Truncated option indicates that the encoder resets to the all-zeros state at the beginning of each frame, while the Terminated option indicates that the encoder forces the trellis to end each frame in the all-zeros state.
We can control part of the decoding algorithm using the Algorithm parameter. The True APP option implements a posteriori probability decoding. To gain speed, both the Max* and Max options approximate expressions by other quantities. The Max option uses max{ai} as the approximation, while the Max* option uses max{ai} plus a correction term. The Max* option enables the Scaling bits parameter in the mask. This parameter is the number of bits by which the block scales the data it processes internally. We have used this parameter to avoid losing precision during the computations. It is especially appropriate for implementations that use fixed-point components.

4. Conclusions
In this work we have constructed and tested in Matlab convolutional encoders and decoders of various types, rates, and memories. Convolutional codes are fundamentally different from other classes of codes, in that a continuous sequence of message bits is mapped into a continuous sequence of encoder output bits. It is well known in the literature and in practice that these codes achieve a larger coding gain than block coding of the same complexity. The encoder, operating at a rate of 1/n bits/symbol, may be viewed as a finite-state machine that consists of an M-stage shift register with prescribed connections to n modulo-2 adders, and a multiplexer that serializes the outputs of the adders.

References:
[1] Viterbi, Andrew J. "An Intuitive Justification and a Simplified Implementation of the MAP Decoder for Convolutional Codes." IEEE Journal on Selected Areas in Communications, vol. 16, February 1998, pp. 260-264.
[2] Gitlin, Richard D., Jeremiah F. Hayes, and Stephen B. Weinstein. Data Communications Principles. New York: Plenum, 1992.
[3] Clark, George C. Jr. and J. Bibb Cain. Error-Correction Coding for Digital Communications. New York: Plenum Press, 1981.
[4] Pless, V. Introduction to the Theory of Error-Correcting Codes, 3rd ed. New York: John Wiley & Sons, 1998.
[5] Matlab Documentation, http://www.math.niu.edu/help/math/matlab/
[6] Matlab Online Reference Documentation, http://www.utexas.edu/math/Matlab/Manual/ReferenceTOC.html
[7] Heller, Jerrold A. and Irwin Mark Jacobs. "Viterbi Decoding for Satellite and Space Communication." IEEE Transactions on Communication Technology, vol. COM-19, October 1971, pp. 835-848.
[8] Höst, S., Johannesson, R., Zyablov, V. V., and Skopintsev, O. "Generator matrices for binary woven convolutional codes", Proceedings of the 6th Intern. Workshop on Algebraic and Comb. Coding Theory, pp. 142-146, Pskov, Russia, Sept. 6-12, 1998.
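The coding gain noted in the conclusions can be checked end to end. The closing sketch below is ours (again assuming convenc, vitdec and biterr from the Communications Toolbox, not code from the paper); it compares coded and uncoded error rates over a simple binary symmetric channel:

```matlab
trellis = poly2trellis(7, [171 133]);
msg     = randi([0 1], 1e4, 1);
code    = convenc(msg, trellis);
p       = 0.02;                                  % channel crossover probability
rxCoded   = double(xor(code, rand(size(code)) < p));
rxUncoded = double(xor(msg,  rand(size(msg))  < p));
decoded = vitdec(rxCoded, trellis, 35, 'trunc', 'hard');   % tblen = 5*K
[~, berCoded]   = biterr(decoded,   msg);
[~, berUncoded] = biterr(rxUncoded, msg);
% berCoded is typically far below berUncoded, which stays close to p.
```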
