
SECX1033 - DIGITAL COMMUNICATION

UNIT IV - ERROR CONTROL CODING

Introduction :

Errors are introduced into the data when it passes through the channel. Channel noise
interferes with the signal, so the received signal power is reduced. Transmission power and
channel bandwidth are the main parameters in the transmission of data over a channel.
Together with these parameters, the power spectral density of the channel noise determines
the signal-to-noise ratio (SNR), and the SNR in turn determines the probability of error.
Using coding techniques, the SNR required for a fixed probability of error is reduced.

Fig. Digital communication system with channel encoding

Channel encoder :

The channel encoder adds bits (redundancy) to the message bits. The encoded
signal is then transmitted over the noisy channel.

Channel decoder :

It identifies the redundant bits and uses them to detect and correct the errors, if any, in the
message bits. Thus the number of errors introduced due to channel noise is minimized by
the encoder and decoder. Due to the redundant bits the overall data rate increases, and
hence the channel has to accommodate this increased data rate. The system also becomes
slightly more complex because of the coding techniques.
Types of codes:

(1) Block Codes :

These codes consist of ‘n’ bits in one block or codeword. The codeword
consists of ‘k’ message bits and (n − k) redundant bits. Such block codes are called (n, k)
block codes.

(2) Convolutional Codes :

The coding operation is a discrete-time convolution of the input sequence with the impulse
response of the encoder. The convolutional encoder accepts message bits
continuously and generates the encoded sequence continuously.

The codes can also be classified as linear or nonlinear codes.

a. Linear Code: If any two codewords of a linear code are added using modulo-2
arithmetic, the result is another codeword of the same code.

b. Nonlinear Code: The modulo-2 sum of two codewords of a nonlinear code does not
necessarily produce another codeword.

Discrete memoryless channel :

A discrete channel consists of an input alphabet X, an output alphabet Y, and a
likelihood function (probability transition matrix) p(y|x). The channel is said to be memoryless
if the probability distribution of the output depends only on the input at that time and is
conditionally independent of previous channel inputs or outputs. The "information" channel
capacity of a discrete memoryless channel is

C = max I(X; Y)

where the maximum is taken over all possible input distributions p(x).

Methods of error correction

Forward Error Correction (FEC)

- Coding is designed so that errors can be corrected at the receiver.
- Appropriate for delay-sensitive and one-way transmission of data (e.g., broadcast TV).
- Two main types, namely block codes and convolutional codes.

Error correction with retransmission or Automatic Repeat Request (ARQ)

- The decoder checks the received sequence.
- When it detects an error, it discards that part of the sequence and requests the
transmitter for retransmission.
- The transmitter then retransmits the part of the sequence in which the error was detected.
- Hence the decoder does not correct errors; it only detects them.
- It has a low probability of error but is slow.

Types of errors

Random errors: These errors are due to white Gaussian noise in the channel.
The errors generated in a particular interval do not affect the performance of the system
in subsequent intervals; these errors are totally uncorrelated.

Burst errors: These errors are due to impulsive noise in the channel, caused by lightning
and switching transients. The errors generated in a particular interval do affect the
performance of the system in subsequent intervals.

Important words in error control coding techniques:

Codeword: The encoded block of ‘n’ bits is called a codeword. It contains message
bits and redundant bits.
Block length: The number of bits ‘n’ after coding is called the block length of the
code.
Code rate: The ratio of the number of message bits (k) to the number of encoder output
bits (n) is called the code rate. The code rate r is defined by

r = k/n,   0 < r < 1

Channel Data Rate : It is the bit rate at the output of the encoder. If the bit rate at the
input of the encoder is Rs, then the channel data rate will be

Ro = (n/k) Rs

Code Vectors : An ‘n’-bit codeword can be visualised in an n-dimensional space as
a vector whose elements are the bits of the codeword. For 3-bit code vectors there
are 2^k = 2^3 = 8 different codewords.

Sl.No   b2 = Z   b1 = Y   b0 = X
1       0        0        0
2       0        0        1
3       0        1        0
4       0        1        1
5       1        0        0
6       1        0        1
7       1        1        0
8       1        1        1
Table. Code vectors in 3-dimensional space

Fig. Code vectors representing 3-bit codewords

Hamming Distance:

• Error control capability is determined by the Hamming distance.
• The Hamming distance between two codewords is equal to the number of bit positions in
which they differ, e.g., 10011011 and 11010010 have a Hamming distance of 3.
• Alternatively, it can be computed by adding the codewords (mod 2) to get 01001001 and
counting the ones (a short sketch of this computation follows this list).
• The maximum number of detectable errors is

𝑡𝑑𝑒𝑡 = 𝑑𝑚𝑖𝑛 − 1

• The maximum number of correctable errors is given by

𝑡𝑐𝑜𝑟𝑟 = ⌊(𝑑𝑚𝑖𝑛 − 1)/2⌋

where dmin is the minimum Hamming distance between any two codewords and ⌊ ⌋
denotes rounding down to the nearest integer.
• From the example, tcorr = (3 − 1)/2 = 1, so a one-bit error can be corrected.
• A two-bit error will cause either an undecided correction or a failed correction.
• From the example, tdet = 3 − 1 = 2, so all two-bit errors will be detected.
• An error of as few as three bits might cause a failed detection.
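The distance computation and the resulting detection/correction limits can be checked with a short script. The following is a minimal Python sketch (an illustration, not part of the original notes); the function name hamming_distance is chosen here for convenience.

def hamming_distance(a, b):
    # Count the bit positions in which the two equal-length codewords differ.
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

c1 = [1, 0, 0, 1, 1, 0, 1, 1]          # 10011011 (example above)
c2 = [1, 1, 0, 1, 0, 0, 1, 0]          # 11010010
d = hamming_distance(c1, c2)            # 3
print("distance =", d)
print("t_det    =", d - 1)              # maximum detectable errors = d_min - 1 = 2
print("t_corr   =", (d - 1) // 2)       # maximum correctable errors = floor((d_min - 1)/2) = 1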

Code efficiency :
Code efficiency = (message bits in the block)/(transmitted bits per block) = k/n
Weight of the Code :

The number of non-zero elements in the transmitted code vector is called the weight of
the code. It is denoted by w(X), where X is the code vector.

Block Codes (consider only binary data):

• Data is grouped into blocks of length k bits (dataword or message).
• Each message is coded into a block of length n bits (codeword), where in general n > k.
• This is known as an (n, k) block code.
• Vector notation is used for the message and codewords:
• Message M = (m1 m2 … mk)
• Codeword C = (c1 c2 … cn)

Codeword structure:  | k message bits | (n − k) check bits |

Linear Block Codes:

• The sum of two codewords will produce another codeword.
• This shows that any code vector can be expressed as a linear combination of the other
code vectors.
• Consider a code vector having m1, m2, m3 ... mk message bits and c1, c2, c3 ... cq
check bits; then the code vector can be written as

X = (m1, m2, m3 ... mk, c1, c2, c3 ... cq)

- where q is the number of redundant bits added by the encoder, q = n − k

• The code vector can also be written as X = (M | C)

• M = k-bit message vector
• C = q-bit check vector (the check bits play the role of error correction and detection)
• The code vector can be represented as X = MG
• X = code vector of 1×n size, or n bits
• M = message vector of 1×k size, or k bits
• G = generator matrix of k×n size

In matrix form [X]1×n = [M]1×k [G]k×n, and the generator matrix G can be represented as
Gk×n = [Ik : Pk×q], where Ik is the k×k identity matrix and P is the k×q submatrix.

The check vector can be represented as C = MP

The expanded form is

                                         [ P11 P12 … P1q ]
[C1 C2 C3 … Cq]1×q = [M1 M2 M3 … Mk]1×k  [ P21 P22 … P2q ]
                                         [  …             ]
                                         [ Pk1 Pk2 … Pkq ]k×q

By solving the above equation the check vector can be obtained (additions are mod-2 additions):

𝐶1 = 𝑀1 𝑃11 ⊕ 𝑀2 𝑃21 ⊕ 𝑀3 𝑃31 ⊕ … ⊕ 𝑀𝑘 𝑃𝑘1

𝐶2 = 𝑀1 𝑃12 ⊕ 𝑀2 𝑃22 ⊕ 𝑀3 𝑃32 ⊕ … ⊕ 𝑀𝑘 𝑃𝑘2

𝐶3 = 𝑀1 𝑃13 ⊕ 𝑀2 𝑃23 ⊕ 𝑀3 𝑃33 ⊕ … ⊕ 𝑀𝑘 𝑃𝑘3 and So on....
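These mod-2 equations are easy to evaluate programmatically. Below is a minimal Python sketch (illustrative, not from the original notes) that computes the check vector C = M·P with XOR arithmetic; the P matrix used is the one that appears in the (6,3) problem that follows.

def check_bits(M, P):
    # C_j = M_1*P_1j xor M_2*P_2j xor ... xor M_k*P_kj  (mod-2 arithmetic)
    k, q = len(P), len(P[0])
    return [sum(M[i] * P[i][j] for i in range(k)) % 2 for j in range(q)]

P = [[0, 1, 1],        # P submatrix of the (6,3) code in the next problem
     [1, 0, 1],
     [1, 1, 0]]

print(check_bits([0, 0, 1], P))   # [1, 1, 0], so the codeword is 0 0 1 1 1 0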

Problem

The generator matrix for a (6,3) block code is given below. Find all code vectors of
this code.

    [ 1 0 0 : 0 1 1 ]
G = [ 0 1 0 : 1 0 1 ]
    [ 0 0 1 : 1 1 0 ]

Solution :

(i) Determination of P Submatrix from generator matrix

We know that

Gk×n = [Ik : Pk×q]

     [ 1 0 0 ]            [ 0 1 1 ]
Ik = [ 0 1 0 ]    Pk×q =  [ 1 0 1 ]
     [ 0 0 1 ]            [ 1 1 0 ]
(ii) To obtain equations for check bits

Here k = 3, q = 3 and n = 6; the block size of the message vector is 3 bits. Hence there will
be 2^3 = 8 message vectors, as shown in the table.

Sl.No   Message vector (M1 M2 M3)
1       0 0 0
2       0 0 1
3       0 1 0
4       0 1 1
5       1 0 0
6       1 0 1
7       1 1 0
8       1 1 1

Then the check vector

                         [ 0 1 1 ]
[C1 C2 C3] = [M1 M2 M3]  [ 1 0 1 ]
                         [ 1 1 0 ]

C1 = M1·0 ⊕ M2·1 ⊕ M3·1 = M2 ⊕ M3

C2 = M1·1 ⊕ M2·0 ⊕ M3·1 = M1 ⊕ M3

C3 = M1·1 ⊕ M2·1 ⊕ M3·0 = M1 ⊕ M2

(iii) To determine check bits and code vectors for every message vector

For m1, m2, m3 = (000)

𝐶1 = 𝑀2 ⊕ 𝑀3 = 0 ⊕ 0 = 0

𝐶2 = 𝑀1 ⊕ 𝑀3 = 0 ⊕ 0 = 0

𝐶3 = 𝑀1 ⊕ 𝑀2 = 0 ⊕ 0 = 0, i.e. [𝐶1 𝐶2 𝐶3 ] = (0 0 0)

For (001)

𝐶1 = 𝑀2 ⊕ 𝑀3 = 0 ⊕ 1 = 1

𝐶2 = 𝑀1 ⊕ 𝑀3 = 0 ⊕ 1 = 1

𝐶3 = 𝑀1 ⊕ 𝑀2 = 0 ⊕ 0 = 0, i.e. [𝐶1 𝐶2 𝐶3 ] = (1 1 0)
Sl.No  Message bits  Check bits                        Complete code vector
       M1 M2 M3      C1=M2⊕M3  C2=M1⊕M3  C3=M1⊕M2      M1 M2 M3 C1 C2 C3
1      0 0 0         0 0 0                             0 0 0 0 0 0
2      0 0 1         1 1 0                             0 0 1 1 1 0
3      0 1 0         1 0 1                             0 1 0 1 0 1
4      0 1 1         0 1 1                             0 1 1 0 1 1
5      1 0 0         0 1 1                             1 0 0 0 1 1
6      1 0 1         1 0 1                             1 0 1 1 0 1
7      1 1 0         1 1 0                             1 1 0 1 1 0
8      1 1 1         0 0 0                             1 1 1 0 0 0
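The table above can be regenerated by multiplying every message vector by G (mod 2). A minimal Python sketch (illustrative), assuming the generator matrix given in this problem:

from itertools import product

G = [[1, 0, 0, 0, 1, 1],
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 1, 1, 0]]

def encode(M, G):
    # X = M*G with mod-2 (XOR) arithmetic; X has n = len(G[0]) bits.
    return [sum(M[i] * G[i][j] for i in range(len(M))) % 2 for j in range(len(G[0]))]

for M in product([0, 1], repeat=3):       # all 2^k = 8 message vectors
    print(M, "->", encode(list(M), G))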

Parity Check Matrix:

For every block code the parity check matrix can be defined as

Hq×n = [PT : Iq]

The submatrix P is

       [ P11 P12 … P1q ]
Pk×q = [ P21 P22 … P2q ]
       [  …             ]
       [ Pk1 Pk2 … Pkq ]

and its transpose is

       [ P11 P21 … Pk1 ]
PT   = [ P12 P22 … Pk2 ]
       [  …             ]
       [ P1q P2q … Pkq ]q×k

so that

         [ P11 P21 … Pk1 : 1 0 … 0 ]
[H]q×n = [ P12 P22 … Pk2 : 0 1 … 0 ]
         [  …                       ]
         [ P1q P2q … Pkq : 0 0 … 1 ]
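The construction H = [PT : Iq] can be expressed directly in code. A minimal Python sketch (illustrative only; the helper name parity_check_matrix is an assumption), using the P submatrix of the earlier (6,3) example:

def parity_check_matrix(P):
    # Build H = [P^T : I_q] (q x n) from the k x q submatrix P of G = [I_k : P].
    k, q = len(P), len(P[0])
    PT = [[P[i][j] for i in range(k)] for j in range(q)]            # transpose of P
    Iq = [[1 if r == c else 0 for c in range(q)] for r in range(q)] # q x q identity
    return [PT[r] + Iq[r] for r in range(q)]

P = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
for row in parity_check_matrix(P):
    print(row)    # prints [0,1,1,1,0,0], [1,0,1,0,1,0], [1,1,0,0,0,1]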
Hamming Codes:

These are (n, k) linear block codes that satisfy the following conditions.

1. Number of check bits q ≥ 3.
2. Block length n = 2^q − 1.
3. Number of message bits k = n − q.
4. Minimum distance dmin = 3.

We know that

Code rate r = k/n,   0 < r < 1

r = (n − q)/n = 1 − q/n = 1 − q/(2^q − 1)
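The conditions above fix (n, k) and the code rate once q is chosen. A minimal Python sketch (illustrative) listing the parameters for a few values of q:

for q in range(3, 7):
    n = 2 ** q - 1          # block length
    k = n - q               # message bits
    print(f"q={q}: ({n},{k}) Hamming code, rate r = {k/n:.3f}")
# q = 3 gives the (7,4) code used in the problems below, with r = 0.571.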

Error detection and correction capabilities of hamming codes

Since dmin is 3 for a Hamming code, it can detect double errors and correct single
errors.

Problem

The parity check matrix of a particular (7,4 ) linear block code is given by

      [ 1 1 1 0 : 1 0 0 ]
[H] = [ 1 1 0 1 : 0 1 0 ]
      [ 1 0 1 1 : 0 0 1 ]

1. Find the generator Matrix G


2. List all the code vectors
3. What is the minimum distance between code vectors
4. How many errors can be detected and how many errors can be corrected.

Solution :

Here n = 7 and k = 4.

1. Number of check bits q = n − k = 7 − 4 = 3.
2. Block length n = 2^q − 1 = 8 − 1 = 7. This shows that the given code is a Hamming code.

(1) To determine the P Submatrix

The parity check matrix of size q×n is given, with q = 3, n = 7, k = 4.

         [ P11 P21 P31 P41 : 1 0 0 ]
[H]3×7 = [ P12 P22 P32 P42 : 0 1 0 ]
         [ P13 P23 P33 P43 : 0 0 1 ]

H = [PT : I3]

     [ P11 P21 P31 P41 ]   [ 1 1 1 0 ]
PT = [ P12 P22 P32 P42 ] = [ 1 1 0 1 ]
     [ P13 P23 P33 P43 ]   [ 1 0 1 1 ]

Therefore

     [ P11 P12 P13 ]   [ 1 1 1 ]
     [ P21 P22 P23 ]   [ 1 1 0 ]
P  = [ P31 P32 P33 ] = [ 1 0 1 ]
     [ P41 P42 P43 ]   [ 0 1 1 ]

(2) To obtain generator matrix G

Gk×n = [Ik : Pk×q]   →   G = [I4 : P4×3]4×7

     [ 1 0 0 0 : 1 1 1 ]
     [ 0 1 0 0 : 1 1 0 ]
G  = [ 0 0 1 0 : 1 0 1 ]
     [ 0 0 0 1 : 0 1 1 ]

(3) To find all codewords

The check vector is C = MP

                                         [ P11 P12 … P1q ]
[C1 C2 C3 … Cq]1×q = [M1 M2 M3 … Mk]1×k  [ P21 P22 … P2q ]
                                         [  …             ]
                                         [ Pk1 Pk2 … Pkq ]k×q

For this code,

                                  [ 1 1 1 ]
                                  [ 1 1 0 ]
[C1 C2 C3]1×3 = [M1 M2 M3 M4]1×4  [ 1 0 1 ]
                                  [ 0 1 1 ]4×3

C1 = M1·1 ⊕ M2·1 ⊕ M3·1 ⊕ M4·0 = M1 ⊕ M2 ⊕ M3

C2 = M1·1 ⊕ M2·1 ⊕ M3·0 ⊕ M4·1 = M1 ⊕ M2 ⊕ M4

C3 = M1·1 ⊕ M2·0 ⊕ M3·1 ⊕ M4·1 = M1 ⊕ M3 ⊕ M4

For example if (m1, m2, m3, m4) = (1011)

𝐶1 = 𝑀1 ⊕ 𝑀2 ⊕ 𝑀3 = 1 ⊕ 0 ⊕ 1 = 0

𝐶2 = 𝑀1 ⊕ 𝑀2 ⊕ 𝑀4 = 1 ⊕ 0 ⊕ 1 = 0

𝐶3 = 𝑀1 ⊕ 𝑀3 ⊕ 𝑀4 = 1 ⊕ 1 ⊕ 1 = 1

Sl.No  Message bits   Check bits   Complete code vector     Weight of the
       M1 M2 M3 M4    C1 C2 C3     M1 M2 M3 M4 C1 C2 C3     code vector
1      0 0 0 0        0 0 0        0 0 0 0 0 0 0            0
2      0 0 0 1        0 1 1        0 0 0 1 0 1 1            3
3      0 0 1 0        1 0 1        0 0 1 0 1 0 1            3
4      0 0 1 1        1 1 0        0 0 1 1 1 1 0            4
5      0 1 0 0        1 1 0        0 1 0 0 1 1 0            3
6      0 1 0 1        1 0 1        0 1 0 1 1 0 1            4
7      0 1 1 0        0 1 1        0 1 1 0 0 1 1            4
8      0 1 1 1        0 0 0        0 1 1 1 0 0 0            3
9      1 0 0 0        1 1 1        1 0 0 0 1 1 1            4
10     1 0 0 1        1 0 0        1 0 0 1 1 0 0            3
11     1 0 1 0        0 1 0        1 0 1 0 0 1 0            3
12     1 0 1 1        0 0 1        1 0 1 1 0 0 1            4
13     1 1 0 0        0 0 1        1 1 0 0 0 0 1            3
14     1 1 0 1        0 1 0        1 1 0 1 0 1 0            4
15     1 1 1 0        1 0 0        1 1 1 0 1 0 0            4
16     1 1 1 1        1 1 1        1 1 1 1 1 1 1            7
(4) Minimum distance between code vectors

There are 2^k = 2^4 = 16 code vectors, listed above along with their weights. The smallest
weight of any non-zero code vector is 3; therefore the minimum distance dmin = 3.

(5) Error correction and Detection capabilities

dmin =3

dmin ≥ s + 1  ⇒  3 ≥ s + 1, therefore s ≤ 2; thus up to two errors can be detected. And
dmin ≥ 2t + 1  ⇒  3 ≥ 2t + 1, therefore t ≤ 1; thus one error can be corrected.

Encoder of (7,4) Hamming code

Definition of Syndrome:

Every valid code vector X satisfies XHT = (0 0 0 … 0). When errors are present in the
received vector Y, it is no longer a valid code vector and does not satisfy this property:

if YHT = (0 0 0 … 0), then X = Y, i.e. there are no errors and Y is a valid code vector;

if YHT is non-zero, then X ≠ Y, i.e. some errors have occurred.

The non-zero output of the product YHT is called the syndrome, and it is used to detect the
errors in Y. The syndrome, represented by S, can be written as

[S]1×q = [Y]1×n [HT]n×q   with   Y = X ⊕ E  and  X = Y ⊕ E

where E is the error vector.

Relationship between the syndrome vector (S) and the error vector (E)

S = YHT = (X ⊕ E)HT = XHT ⊕ EHT

S = EHT, since XHT = 0
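The relation S = YHT, and the fact that a valid codeword gives the all-zero syndrome, can be verified with a short script. A minimal Python sketch (illustrative), using the parity check matrix and the codeword 1 0 1 1 0 0 1 from the earlier (7,4) problem:

def syndrome(Y, H):
    # S = Y * H^T (mod 2): S_j is the XOR of Y_i over positions i where H[j][i] = 1.
    q, n = len(H), len(H[0])
    return [sum(Y[i] * H[j][i] for i in range(n)) % 2 for j in range(q)]

H = [[1, 1, 1, 0, 1, 0, 0],      # parity check matrix of the earlier (7,4) problem
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]

X = [1, 0, 1, 1, 0, 0, 1]        # a valid codeword (message 1011)
print(syndrome(X, H))            # [0, 0, 0]  -> no error detected

Y = X[:]
Y[4] ^= 1                        # introduce a single-bit error
print(syndrome(Y, H))            # non-zero syndrome -> error detected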

Detecting errors with the help of the syndrome:

Problem

The parity check matrix of a (7,4) block code is given as

      [ 1 1 1 0 : 1 0 0 ]
[H] = [ 0 1 1 1 : 0 1 0 ]
      [ 1 1 0 1 : 0 0 1 ]

Calculate the syndrome vectors.

(1) To determine the error patterns for single-bit errors

The syndrome is a 3-bit vector, since q = 3. Therefore there are 2^q − 1 = 7 non-zero
syndromes, and the 7 single-bit error patterns are represented by these 7 non-zero
syndromes. The error vector E is an n-bit vector representing the error pattern.

Sl.No   Bit in error   Error vector E (the non-zero bit marks the error)
1       1st            1 0 0 0 0 0 0
2       2nd            0 1 0 0 0 0 0
3       3rd            0 0 1 0 0 0 0
4       4th            0 0 0 1 0 0 0
5       5th            0 0 0 0 1 0 0
6       6th            0 0 0 0 0 1 0
7       7th            0 0 0 0 0 0 1
(2) Calculation of Syndrome

[𝑆]1×𝑞 = [𝑌]1×𝑛 [ 𝐻 𝑇 ]𝑛×𝑞 and [𝑆]1×3 = [𝑌]1×7 [ 𝐻 𝑇 ]7×3

1 0 1
1 1 1

1 1 0
𝑇  
𝐻 = 0 1 1
1 0 0
 
0 1 0
0 0 1

For example, the syndrome of a first-bit error is

S = EHT = (1 0 0 0 0 0 0) HT

S = (1⊕0⊕0⊕0⊕0⊕0⊕0   0⊕0⊕0⊕0⊕0⊕0⊕0   1⊕0⊕0⊕0⊕0⊕0⊕0)

S = (1 0 1)

The syndrome of a second-bit error is

S = EHT = (0 1 0 0 0 0 0) HT

S = (0⊕1⊕0⊕0⊕0⊕0⊕0   0⊕1⊕0⊕0⊕0⊕0⊕0   0⊕1⊕0⊕0⊕0⊕0⊕0)

S = (1 1 1)
The syndrome vectors for single-bit errors are the rows of HT.

Sl.No   Error vector E (the non-zero bit marks the error)   Syndrome vector
1       0 0 0 0 0 0 0                                       0 0 0
2       1 0 0 0 0 0 0                                       1 0 1   (1st row of HT)
3       0 1 0 0 0 0 0                                       1 1 1   (2nd row of HT)
4       0 0 1 0 0 0 0                                       1 1 0   (3rd row of HT)
5       0 0 0 1 0 0 0                                       0 1 1   (4th row of HT)
6       0 0 0 0 1 0 0                                       1 0 0   (5th row of HT)
7       0 0 0 0 0 1 0                                       0 1 0   (6th row of HT)
8       0 0 0 0 0 0 1                                       0 0 1   (7th row of HT)

Error correction using the syndrome vector

Let us consider the above (7,4) block code and the code vector
X = (1 0 0 1 1 1 0)

Let an error be created in the 3rd bit, so that

Y = (1 0 (1) 1 1 1 0)

Error correction can now be done using the following steps:

(1) Calculate the syndrome S = YHT.
(2) Find the row of HT which is the same as S.
(3) If this is the Pth row of HT, the Pth bit is in error. Write the
corresponding error vector E.
(4) Obtain the correct vector as X = Y ⊕ E.

(1) To obtain the syndrome vector

S = YHT = (1 0 1 1 1 1 0) HT = (1 1 0)

(2) The row of HT which is the same as S is the 3rd row.
(3) The corresponding error vector is E = (0 0 1 0 0 0 0).
(4) The correct vector is

X = Y ⊕ E = (1 0 1 1 1 1 0) ⊕ (0 0 1 0 0 0 0) = (1 0 0 1 1 1 0)

Thus the single-bit error can be corrected using the syndrome.
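The four steps above can be combined into a small decoder. This is a minimal Python sketch (illustrative, single-bit errors only), using the parity check matrix of this example:

def syndrome(Y, H):
    # S = Y * H^T (mod 2)
    return [sum(Y[i] * H[j][i] for i in range(len(Y))) % 2 for j in range(len(H))]

def correct_single_error(Y, H):
    S = syndrome(Y, H)
    if not any(S):
        return Y[:]                               # zero syndrome: accept Y as it is
    for p in range(len(Y)):                       # find the row of H^T (column of H) equal to S
        if [H[j][p] for j in range(len(H))] == S:
            X = Y[:]
            X[p] ^= 1                             # flip the bit in error
            return X
    return Y[:]                                   # no single-bit error pattern matches S

H = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

Y = [1, 0, 1, 1, 1, 1, 0]                         # X = 1001110 with an error in the 3rd bit
print(correct_single_error(Y, H))                 # [1, 0, 0, 1, 1, 1, 0]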

If a double error occurs:

Consider the same code vector

X = (1 0 0 1 1 1 0)

and

Y = (1 0 (1) (0) 1 1 0)

S = YHT = (1 0 1 0 1 1 0) HT = (1 0 1)

S is equal to the 1st row of HT, so E = (1 0 0 0 0 0 0) and the error correction and detection
go wrong. However, the probability of occurrence of multiple errors is small compared to
single errors. To handle double errors, extended Hamming codes are used; in these codes
one extra parity bit is provided, so that double errors can be detected while single errors
are still corrected. We know that for (n, k) block codes there are 2^(n−k) − 1 distinct non-zero
syndromes. There are nC1 = n single-error patterns, nC2 double-error patterns, nC3
triple-error patterns, and so on. Therefore, to correct up to t errors,

2^q − 1 ≥ nC1 + nC2 + nC3 + … + nCt

Hamming Bound

2^q ≥ 1 + nC1 + nC2 + nC3 + … + nCt

2^q ≥ Σ (i = 0 to t) nCi

2^(n−k) ≥ Σ (i = 0 to t) nCi

Taking logarithms to base 2 on both sides,

n − k ≥ log2 ( Σ (i = 0 to t) nCi )

1 − k/n ≥ (1/n) log2 ( Σ (i = 0 to t) nCi )

Since the coding rate r = k/n,

1 − r ≥ (1/n) log2 ( Σ (i = 0 to t) nCi )
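The bound can be checked numerically for any proposed (n, k, t) combination. A minimal Python sketch (illustrative; the function name is chosen here for convenience):

from math import comb

def satisfies_hamming_bound(n, k, t):
    # 2^(n-k) must cover all correctable patterns: the sum of nCi for i = 0..t.
    return 2 ** (n - k) >= sum(comb(n, i) for i in range(t + 1))

print(satisfies_hamming_bound(7, 4, 1))   # True : 2^3 = 8 >= 1 + 7, so (7,4) may correct 1 error
print(satisfies_hamming_bound(7, 4, 2))   # False: 8 < 1 + 7 + 21, so it cannot correct 2 errors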

Problem:

For a linear block code, show with an example that

1. The syndrome depends only on the error pattern, not on the transmitted codeword.
2. All error patterns that differ by a codeword have the same syndrome.

Solution:

(1) The syndrome depends only on the error pattern, not on the transmitted codeword.

We know that S = EHT.

This equation shows that the syndrome depends only on the error pattern, not on the
transmitted codeword.

(2) All error patterns that differ by a codeword have the same syndrome.

Syndrome for the first received code

S1 = Y1 H T = (𝑋1 ⊕ 𝐸)𝐻 𝑇 = 𝑋1 𝐻 𝑇 ⊕ 𝐸𝐻 𝑇
𝑆1 = 𝐸𝐻 𝑇 𝑠𝑖𝑛𝑐𝑒 𝑋1 𝐻 𝑇 = 0

Syndrome for the Second received code

S2 = Y2 H T = (𝑋2 ⊕ 𝐸)𝐻 𝑇 = 𝑋2 𝐻 𝑇 ⊕ 𝐸𝐻 𝑇

𝑆2 = 𝐸𝐻 𝑇 𝑠𝑖𝑛𝑐𝑒 𝑋2 𝐻 𝑇 = 0

For example let us consider

      [ 1 1 1 0 : 1 0 0 ]
[H] = [ 1 1 0 1 : 0 1 0 ]
      [ 1 0 1 1 : 0 0 1 ]

Consider two codewords

X2 = 0 0 0 1 0 1 1 ;   X3 = 0 0 1 0 1 0 1

An error is introduced in the MSB of each:

Y2 = (𝟏)0 0 1 0 1 1 ; Y3 = (𝟏) 0 1 0 1 0 1

1 1 1
1 1 0

1 0 1
 
S2 = Y2 H T = ( 𝟏 0 0 1 0 1 1) 0 1 1 = ( 1 1 1)
1 0 0
 
0 1 0
0 0 1

1 1 1
1 1 0

1 0 1
T  
S3 = Y3 H = ( 𝟏 0 1 0 1 0 1) 0 1 1 = ( 1 1 1)
1 0 0
 
0 1 0
0 0 1

Thus the syndromes S2 = S3 = (1 1 1) even though the two codewords are different. This
proves that all error patterns that differ by a codeword have the same syndrome.

Syndrome decoder for (n,k) block code

Other linear codes:

Single Parity bit code:

If m1, m2, m3 ... mk are the bits of the k-bit message word, then m1 ⊕ m2
⊕ m3 ⊕ … ⊕ mk ⊕ c1 = 0. In this equation c1 is the parity bit added to the
message. If the message contains an even number of 1s then c1 = 0, and vice versa. For
this code the number of transmitted bits is n = k + 1 and q = 1. This code can detect a
single-bit error but cannot correct it.
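A minimal Python sketch of this single parity-bit code (even parity, as described above); the function names are illustrative:

def add_parity(message):
    # c1 = m1 xor m2 xor ... xor mk, so the transmitted word has even parity.
    return message + [sum(message) % 2]

def parity_ok(word):
    # True if the received word still has even parity (no odd number of bit errors).
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1])
print(word, parity_ok(word))      # [1, 0, 1, 1, 1] True
word[2] ^= 1                      # a single-bit error
print(word, parity_ok(word))      # parity check fails: the error is detected (not corrected)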

Repetition codes

In this code a single message bit is repeated: k = 1, the number of check bits is q = 2t, and
the number of transmitted bits is n = 2t + 1. Decoding is done by majority vote, so the code
can correct t errors per block. It requires a large bandwidth.
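A minimal Python sketch of a repetition code with k = 1 and n = 2t + 1, decoded by majority vote:

def repeat_encode(bit, t):
    # Transmit the single message bit n = 2t + 1 times.
    return [bit] * (2 * t + 1)

def repeat_decode(block):
    # Majority vote: correct as long as at most t of the 2t + 1 bits are in error.
    return 1 if sum(block) > len(block) // 2 else 0

block = repeat_encode(1, t=2)             # n = 5, can correct up to t = 2 errors
block[0] ^= 1
block[3] ^= 1                             # two bit errors
print(block, "->", repeat_decode(block))  # [0, 1, 1, 0, 1] -> 1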

Hadamard Code

It is derived from the Hadamard matrix. Here n = 2^k and q = n − k = 2^k − k. The code rate is

r = k/n = k/2^k

This shows that the code rate will be very small for large k.
