
5CS3-01: Information Theory & Coding

Unit-3: Linear Block Code
Linear Block Code
Introduction to error correcting codes
Coding & decoding of linear block codes
Minimum distance consideration
Conversion of non-systematic form of matrices into systematic form
Introduction
• Noise, which introduces errors into the signal, is the main factor that degrades
the reliability of a communication system.
• Error control coding is the coding procedure used to control the occurrence of
errors. These techniques help in error detection and error correction.
• There are many different error correcting codes, depending on the mathematical
principles applied to them. Historically, these codes have been classified into
linear block codes and convolutional codes.
Linear Block Codes
In linear block codes, the parity bits are formed as linear combinations of the
message bits, and consequently the modulo-2 sum of any two code words is itself
a code word.
Linear Block Codes
• Consider blocks of data containing k bits each. Each k-bit block is mapped to a
block of n bits, where n is greater than k. The transmitter adds n − k redundant
bits. The ratio k/n is the code rate; it is denoted by r, and r < 1.
• The n − k added bits are parity bits. Parity bits help in error detection and
error correction, and also in locating the errors. In the data being transmitted,
the leftmost bits of the code word correspond to the message bits, and the
rightmost bits correspond to the parity bits.
Systematic Code
• A block code is systematic when the message bits appear unaltered within the
code word, with the parity bits occupying the remaining positions. Hence, an
unaltered block code is called a systematic code.
• The structure of the code word, according to this allocation of positions, is
represented below.
• If the message is not altered, the code is called systematic. It means that the
encoding of the data should not change the message bits.
Convolutional Codes
• So far, for linear block codes, we have seen that a systematic (unaltered) code
is preferred. There, of the total n bits transmitted, k bits are message bits and
n − k bits are parity bits.
• In the block-encoding process, the parity bits are separated from the whole
data, the message bits are encoded, and then the parity bits are appended again
and the whole block is transmitted.
• The following figure gives an example of blocks of data and a stream of data,
used for the transmission of information.
• The whole block-based process stated above is tedious and has drawbacks: the
allotment of buffers is a main problem when the system is busy.
• This drawback is removed in convolutional codes, where the whole stream of data
is assigned symbols and then transmitted. As the data is handled as a stream of
bits, there is no need of a buffer for storage.
Hamming Codes
• The linearity property of a code is that the sum of two code words is also a
code word. Hamming codes are a type of linear error correcting code that can
detect up to two-bit errors, or correct one-bit errors without detection of
uncorrected errors.
• In Hamming codes, extra parity bits are used to identify a single-bit error.
The number of bits that must be changed to get from one bit pattern to another
is termed the Hamming distance. If a code has a distance of 2, a one-bit flip
can be detected but cannot be corrected, and a two-bit flip cannot be detected.
• However, the Hamming code is a better procedure for error detection and
correction than the ones discussed previously.
BCH Codes
• BCH codes are named after their inventors Bose, Chaudhuri and Hocquenghem.
During BCH code design, there is control over the number of symbols to be
corrected, and hence multiple-bit correction is possible. BCH codes are a
powerful technique among error correcting codes.
• For any positive integers m ≥ 3 and t < 2^(m-1), there exists a binary BCH code
with the following parameters:
  Block length n = 2^m - 1
  Number of parity-check digits n - k ≤ mt
  Minimum distance dmin ≥ 2t + 1
Such a code is called a t-error-correcting BCH code.
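
As an illustration of these parameter formulas, here is a minimal sketch; the
choice m = 4, t = 2 is an arbitrary example, not from the slides:

```python
# Illustrative sketch: evaluate the BCH parameter bounds for a chosen m and t.

def bch_parameter_bounds(m: int, t: int):
    """Return (block length n, bound on parity-check digits, bound on dmin)."""
    assert m >= 3 and t < 2 ** (m - 1), "parameters outside the stated range"
    n = 2 ** m - 1           # block length n = 2^m - 1
    parity_bound = m * t     # n - k <= mt
    dmin_bound = 2 * t + 1   # dmin >= 2t + 1
    return n, parity_bound, dmin_bound

print(bch_parameter_bounds(4, 2))   # (15, 8, 5): a (15, k >= 7) code correcting t = 2 errors
```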
Cyclic Codes
• The cyclic property of code words is that any cyclic shift of a code word is
also a code word. Cyclic codes follow this cyclic property.
• For a linear code C, if every code word c = (c1, c2, ..., cn) in C remains a
code word after a cyclic right shift of its components, the code is cyclic. One
cyclic right shift is equivalent to n - 1 cyclic left shifts, so the code is
invariant under any cyclic shift. A linear code C that is invariant under any
cyclic shift can therefore be called a cyclic code (see the shift sketch below).
• Cyclic codes are used for error correction. They are mainly used to correct
double errors and burst errors.
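
A minimal sketch of the cyclic-shift property; the 7-bit word used below is an
arbitrary illustration, not a code word of a specific cyclic code:

```python
# Sketch: a cyclic right shift and the equivalent n-1 cyclic left shifts.

def cyclic_right_shift(bits, k=1):
    """Rotate the bit list right by k positions."""
    k %= len(bits)
    return bits[-k:] + bits[:-k]

def cyclic_left_shift(bits, k=1):
    """Rotate the bit list left by k positions."""
    k %= len(bits)
    return bits[k:] + bits[:k]

c = [1, 0, 1, 1, 0, 0, 1]
print(cyclic_right_shift(c))             # [1, 1, 0, 1, 1, 0, 0]
print(cyclic_left_shift(c, len(c) - 1))  # same result: n-1 left shifts = 1 right shift
```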
Introduction
Figure : Block diagram of a digital communication system. Transmit side:
Information to be transmitted → Source coding → Channel coding → Modulation →
Transmitter → Antenna. The signal passes through the air. Receive side:
Antenna → Receiver → Demodulation → Channel decoding → Source decoding →
Information received.
Forward Error Correction (FEC)
• The key idea of FEC is to transmit enough redundant data to allow the receiver
to recover from errors all by itself. No sender retransmission is required.
• The major categories of FEC codes are
  • Block codes
  • Cyclic codes
  • Reed-Solomon codes (not covered here)
  • Convolutional codes, and
  • Turbo codes, etc.
Linear Block Codes
• Information is divided into blocks of length k
• r parity bits or check bits are added to each block (total length n = k + r)
• Code rate R = k/n
• An (n, k) block code is said to be linear if the vector sum of two codewords is
a codeword
• Tradeoffs between
  – Efficiency
  – Reliability
  – Encoding/Decoding complexity
• All arithmetic is performed using modulo-2 addition (exclusive OR):

  a b | a ⊕ b
  0 0 |   0
  0 1 |   1
  1 0 |   1
  1 1 |   0
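
A minimal sketch of modulo-2 (XOR) arithmetic on bit vectors and of the code
rate; the example vectors are arbitrary:

```python
# Sketch: modulo-2 (XOR) addition of bit vectors, the arithmetic used by
# linear block codes, plus the code rate R = k/n.

def mod2_add(u, v):
    """Component-wise modulo-2 addition (exclusive OR) of equal-length bit lists."""
    return [(a + b) % 2 for a, b in zip(u, v)]

u = [1, 0, 1, 1, 0, 0, 1]
v = [0, 1, 1, 0, 1, 0, 1]
print(mod2_add(u, v))   # [1, 1, 0, 1, 1, 0, 0]

n, k = 7, 4             # the (7, 4) code used in the examples below
print(k / n)            # code rate R = k/n ≈ 0.571
```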
Linear Block Codes
• Let the uncoded k data bits be represented by the vector m:
  m = (m1, m2, …, mk)
  and the corresponding codeword by the n-bit vector c:
  c = (c1, c2, …, ck, ck+1, …, cn-1, cn)
• Each parity bit is a weighted modulo-2 sum of the data bits, where ⊕ denotes
exclusive OR (modulo-2 addition).
Linear Block Codes
k data bits and r = n - k redundant bits:

c1 = m1
c2 = m2
...
ck = mk
ck+1 = m1 p1(k+1) ⊕ m2 p2(k+1) ⊕ ... ⊕ mk pk(k+1)
...
cn = m1 p1n ⊕ m2 p2n ⊕ ... ⊕ mk pkn
Linear Block Codes: Example
Example: Find the linear block code encoder G if the code generator polynomial
is g(x) = 1 + x + x^3 for a (7, 4) code; n = total number of bits = 7,
k = number of information bits = 4, r = number of parity bits = n - k = 3.

p1 = Rem[ x^3 / (1 + x + x^3) ] = 1 + x       -> 1 1 0
p2 = Rem[ x^4 / (1 + x + x^3) ] = x + x^2     -> 0 1 1
p3 = Rem[ x^5 / (1 + x + x^3) ] = 1 + x + x^2 -> 1 1 1
p4 = Rem[ x^6 / (1 + x + x^3) ] = 1 + x^2     -> 1 0 1

              [ 1 0 0 0 | 1 1 0 ]
              [ 0 1 0 0 | 0 1 1 ]
G = [I | P] = [ 0 0 1 0 | 1 1 1 ]
              [ 0 0 0 1 | 1 0 1 ]

I is the identity matrix; P is the parity matrix.
Linear Block Codes: Example
The generator polynomial can be used to determine the generator matrix G, which
allows the parity bits for given data bits m to be determined by multiplication
as follows:

                       [ 1 0 0 0 1 1 0 ]
                       [ 0 1 0 0 0 1 1 ]
c = m.G = [1 0 1 1] .  [ 0 0 1 0 1 1 1 ]  = [1 0 1 1 | 1 0 0]
                       [ 0 0 0 1 1 0 1 ]
             Data                             Data    Parity

Other combinations of m can be used to determine all other possible code words.
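
A minimal sketch verifying this encoding by modulo-2 matrix multiplication:

```python
# Sketch: encode m = [1 0 1 1] with the (7, 4) generator matrix G = [I | P]
# from the example above, using modulo-2 arithmetic.

G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 0, 1],
]

def encode(m, G):
    """Codeword c = m.G over GF(2): XOR together the rows of G selected by m."""
    c = [0] * len(G[0])
    for bit, row in zip(m, G):
        if bit:
            c = [a ^ b for a, b in zip(c, row)]
    return c

m = [1, 0, 1, 1]
print(encode(m, G))   # [1, 0, 1, 1, 1, 0, 0] -> data 1011, parity 100
```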
Linear Block Codes
Linear Block Code
The code word C of a linear block code is
C = mG
where m is the k-bit information (message) block and G is the generator matrix.
G = [Ik | P]k×n
where Pi = Remainder of [ x^(n-k+i-1) / g(x) ] for i = 1, 2, .., k, and Ik is the
k × k unit (identity) matrix.
At the receiving end, the parity check matrix is given as:
H = [PT | In-k], where PT is the transpose of the matrix P.
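
A sketch of how H = [PT | In-k] can be assembled from P for the (7, 4) example
above and checked against G; the orthogonality check is the property derived a
few slides below, and the helper names are illustrative:

```python
# Sketch: build H = [P^T | I_{n-k}] for the (7, 4) example and check that every
# row of G = [I | P] is orthogonal (mod 2) to every row of H, i.e. G.H^T = 0.

P = [
    [1, 1, 0],
    [0, 1, 1],
    [1, 1, 1],
    [1, 0, 1],
]
G = [[int(i == j) for j in range(4)] + P[i] for i in range(4)]    # G = [I4 | P]

P_T = [list(col) for col in zip(*P)]                              # transpose of P
H = [P_T[i] + [int(i == j) for j in range(3)] for i in range(3)]  # H = [P^T | I3]

def dot_mod2(u, v):
    return sum(a * b for a, b in zip(u, v)) % 2

print(all(dot_mod2(g_row, h_row) == 0 for g_row in G for h_row in H))   # True
```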
Linear Block Codes
Example: Find the linear block code encoder G for a code generator polynomial
g(x) with k data bits and r = n - k parity bits.

              [ 1 0 ... 0 | P1 ]
              [ 0 1 ... 0 | P2 ]
G = [I | P] = [   ...          ]
              [ 0 0 ... 1 | Pk ]

where Pi = Remainder of [ x^(n-k+i-1) / g(x) ], for i = 1, 2, ..., k
Block Codes: Linear Block Codes
Figure : Operations of the generator matrix and the parity check matrix. At the
transmitter, the message vector m is multiplied by the generator matrix G to
give the code vector C; after transmission through the air, the receiver
multiplies the received code vector by the parity check matrix HT, which yields
the null vector 0 when no error has occurred.

Consider a (7, 4) linear block code, given by G as

    [ 1 0 0 0 | 1 1 0 ]
    [ 0 1 0 0 | 0 1 1 ]
G = [ 0 0 1 0 | 1 1 1 ]
    [ 0 0 0 1 | 1 0 1 ]

with the corresponding parity check matrix (the transpose of HT = [P over In-k])

                  [ 1 0 1 1 | 1 0 0 ]
H = [PT | I3]  =  [ 1 1 1 0 | 0 1 0 ]
                  [ 0 1 1 1 | 0 0 1 ]

For convenience, the code vector is expressed as
c = [m | cp], where cp = mP is an (n-k)-bit parity check vector.


Block Codes: Linear Block Codes

Define the matrix HT as

       [ P    ]
HT  =  [ In-k ]

The received code vector is x = c ⊕ e, where e is an error vector. The matrix HT
has the property

                     [ P    ]
c.HT  =  [m | cp] .  [ In-k ]  =  mP ⊕ cp  =  cp ⊕ cp  =  0
Block Codes: Linear Block Codes
• The transpose of the matrix HT is
  H = [PT | In-k]
  where In-k is an (n-k) × (n-k) unit matrix and PT is the transpose of the
  parity matrix P.
• H is called the parity check matrix.
• Compute the syndrome as
  s = x.HT
    = (c ⊕ e).HT
    = c.HT ⊕ e.HT = e.HT
Linear Block Codes
• If s is 0, the message is correct; otherwise there are errors in it, and from
the commonly known error patterns the correct message can be decoded.
For the (7, 4) linear block code given by G, with the corresponding H, as

    [ 1 0 0 0 | 1 1 1 ]
    [ 0 1 0 0 | 1 1 0 ]              [ 1 1 1 0 | 1 0 0 ]
G = [ 0 0 1 0 | 1 0 1 ]          H = [ 1 1 0 1 | 0 1 0 ]
    [ 0 0 0 1 | 0 1 1 ]              [ 1 0 1 1 | 0 0 1 ]

• For m = [1 0 1 1], c = mG = [1 0 1 1 | 0 0 1]. If there is no error, the
received vector x = c, and s = cHT = [0, 0, 0].
Linear Block Codes
• Let c suffer an error such that the received vector is
  x = c ⊕ e = [ 1 0 1 1 0 0 1 ] ⊕ [ 0 0 1 0 0 0 0 ]
            = [ 1 0 0 1 0 0 1 ]
Then the syndrome is

                                    [ 1 1 1 ]
                                    [ 1 1 0 ]
                                    [ 1 0 1 ]
  s = x.HT = [ 1 0 0 1 0 0 1 ] .    [ 0 1 1 ]   = [1 0 1]  ( = e.HT )
                                    [ 1 0 0 ]
                                    [ 0 1 0 ]
                                    [ 0 0 1 ]

• The syndrome equals the third row of HT, which indicates the error position,
giving the corrected vector as [1 0 1 1 0 0 1].
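
A minimal sketch of this syndrome decoding for the (7, 4) code above; it handles
single-bit errors only, and the helper names are illustrative:

```python
# Sketch: syndrome decoding for the (7, 4) code with P rows 111, 110, 101, 011.
# A single-bit error is located by matching s = x.H^T against the rows of H^T.

P = [[1, 1, 1], [1, 1, 0], [1, 0, 1], [0, 1, 1]]
H_T = P + [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # rows of H^T: P, then I_{n-k}

def syndrome(x):
    """s = x.H^T over GF(2)."""
    s = [0, 0, 0]
    for bit, row in zip(x, H_T):
        if bit:
            s = [a ^ b for a, b in zip(s, row)]
    return s

def correct_single_error(x):
    """Flip the bit whose H^T row equals the syndrome (assumes at most one error)."""
    s = syndrome(x)
    if s == [0, 0, 0]:
        return x                    # zero syndrome: accept as is
    pos = H_T.index(s)              # error position from the matching row
    return [b ^ int(i == pos) for i, b in enumerate(x)]

x = [1, 0, 0, 1, 0, 0, 1]           # 1011001 with bit 3 flipped in transit
print(syndrome(x))                  # [1, 0, 1]
print(correct_single_error(x))      # [1, 0, 1, 1, 0, 0, 1]
```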
Note

Data can be corrupted during transmission.

Some applications require that errors be detected and corrected.
INTRODUCTION

Let us first discuss some issues related, directly or indirectly, to error
detection and correction.

Topics discussed in this section:
Types of Errors
Redundancy
Detection Versus Correction
Forward Error Correction Versus Retransmission
Coding
Modular Arithmetic
Figure : Single-bit error
Figure : Burst error of length 8
Figure : The structure of encoder and decoder

• To detect or correct errors, we need to send redundant bits.
BLOCK CODING

In block coding, we divide our message into blocks, each of k bits, called
datawords. We add r redundant bits to each block to make the length n = k + r.
The resulting n-bit blocks are called codewords.

Topics discussed in this section:
Error Detection
Error Correction
Hamming Distance
Minimum Hamming Distance
Example

In the 4B/5B block coding scheme, k = 4 and n = 5. As we saw, we have 2^k = 16
datawords and 2^n = 32 codewords. We saw that 16 out of the 32 codewords are
used for message transfer and the rest are either used for other purposes or
unused.
Table : A code for error detection

What if we want to send 01? We code it as 011. If 011 is received, no problem.
What if 001 is received? An error is detected.
What if 000 is received? An error occurred, but it is not detected.
Let's add more redundant bits to see if we can correct errors.

Table : A code for error correction

Let’s say we want to send 01. We then transmit 01011.


What if an error occurs and we receive 01001. If we
assume one bit was in error, we can correct.
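
A minimal sketch of this nearest-codeword correction. Only the mapping
01 → 01011 comes from the text above; the rest of the dataword/codeword table is
assumed here purely for illustration:

```python
# Sketch: correct a received word by choosing the codeword at minimum Hamming
# distance. Only "01" -> "01011" is taken from the text; the other codebook
# entries are assumed for illustration.

codebook = {
    "00": "00000",
    "01": "01011",   # from the example in the text
    "10": "10101",
    "11": "11110",
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(received):
    """Return the dataword whose codeword is closest to the received word."""
    return min(codebook, key=lambda d: hamming(codebook[d], received))

print(decode("01001"))   # '01', since 01001 differs from 01011 in only one bit
```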
Note

The Hamming distance between two words is the number of differences between
corresponding bits.
Example

Let us find the Hamming distance between two pairs of words.

1. The Hamming distance d(000, 011) is 2 because 000 ⊕ 011 = 011, which contains
two 1s.
2. The Hamming distance d(10101, 11110) is 3 because 10101 ⊕ 11110 = 01011,
which contains three 1s.
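
These distances can be checked with a short sketch (the helper name is
illustrative):

```python
# Sketch: Hamming distance = number of positions where two words differ.

def hamming_distance(a: str, b: str) -> int:
    """Count differing bit positions between two equal-length words."""
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("000", "011"))       # 2
print(hamming_distance("10101", "11110"))   # 3
```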


Note

The minimum Hamming distance is the smallest Hamming distance between all
possible pairs in a set of words.
Example

Find the minimum Hamming distance of the coding scheme.

Solution
We first find all Hamming distances.
The dmin in this case is 2.

Example

Find the minimum Hamming distance of the coding scheme.

Solution
We first find all the Hamming distances.
The dmin in this case is 3.
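
A minimal sketch for computing dmin over a set of codewords; the codeword set
below is an assumed example, not the table referenced in the slides:

```python
# Sketch: minimum Hamming distance of a code = smallest pairwise distance.
from itertools import combinations

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def d_min(codewords):
    """Smallest Hamming distance over all pairs of distinct codewords."""
    return min(hamming_distance(a, b) for a, b in combinations(codewords, 2))

print(d_min(["000", "011", "101", "110"]))   # 2 for this even-parity example
```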


Figure : Geometric concept for finding dmin in error detection
Figure : Geometric concept for finding dmin in error correction
Note

To guarantee correction of up to t errors in all cases, the minimum Hamming
distance in a block code must be dmin = 2t + 1.
Example

A code scheme has a Hamming distance dmin = 4. What is the error detection and
correction capability of this scheme?

Solution
This code guarantees the detection of up to three errors (s = 3, since
dmin = s + 1), but it can correct only one error (t = 1, since dmin ≥ 2t + 1).
In other words, if this code is used for error correction, part of its
capability is wasted. Error correction codes need to have an odd minimum
distance (3, 5, 7, . . .).
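
A short sketch of this capability calculation, using s = dmin - 1 and
t = floor((dmin - 1) / 2) from the notes above:

```python
# Sketch: detection/correction capability implied by a code's minimum distance.

def capability(d_min: int):
    """Return (detectable errors s, correctable errors t) guaranteed by d_min."""
    s = d_min - 1            # detection: d_min >= s + 1
    t = (d_min - 1) // 2     # correction: d_min >= 2t + 1
    return s, t

print(capability(4))   # (3, 1): detects up to 3 errors, corrects up to 1
```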
LINEAR BLOCK CODES

Almost all block codes used today belong to a subset called linear block codes.
A linear block code is a code in which the exclusive OR (addition modulo-2) of
two valid codewords creates another valid codeword.

Topics discussed in this section:
Minimum Distance for Linear Block Codes
Some Linear Block Codes
Note

A simple parity-check code is a single-bit error-detecting code in which
n = k + 1 with dmin = 2.
Table : Simple parity-check code C(5, 4)
Figure : Encoder and decoder for simple parity-check code
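
A minimal sketch of the simple parity-check encoder and decoder for C(5, 4),
assuming even parity (which gives dmin = 2); the helper names are illustrative:

```python
# Sketch: simple parity-check code C(5, 4). The encoder appends one even-parity
# bit; the decoder recomputes it, and a nonzero syndrome flags a bit error.

def encode(dataword):
    """Append r0 = a3 ⊕ a2 ⊕ a1 ⊕ a0 so every codeword has even parity."""
    parity = sum(dataword) % 2
    return dataword + [parity]

def check(codeword):
    """Syndrome = modulo-2 sum of all received bits; 0 means 'accept'."""
    return sum(codeword) % 2

c = encode([1, 0, 1, 1])
print(c, check(c))      # [1, 0, 1, 1, 1] 0 -> accepted
c[2] ^= 1               # a single bit is flipped in transit
print(c, check(c))      # [1, 0, 0, 1, 1] 1 -> error detected (but not correctable)
```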
Unit-3 Completed
