Error Control Coding
Error control coding has evolved from structured sequences (block codes, cyclic codes, etc.) to random-looking sequences (convolutional codes, turbo codes, etc.).
IMPORTANCE
• Coding gains can provide more than 10 dB of performance improvement.
• VLSI and DSP chips are cheaper than high-power transmitters and/or bigger antennas.
Concept of redundancy
• Systematic generation and addition of redundant bits to the message bits at the transmitter, such that transmission errors can be minimized by performing an inverse operation at the receiver.
• e.g. Triple repetition code: data bit '0' is replaced by the codeword 000 (and '1' by 111).
Detect up to t errors: dmin >= t + 1
Correct up to t errors: dmin >= 2t + 1
e.g. the triple repetition code has dmin = 3, so it can detect up to 2 errors or correct a single error.
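As a quick illustration, here is a small Python sketch (an illustrative helper, not from the slides) that computes dmin of a code by exhaustive pairwise comparison and derives the guaranteed detection/correction capability from the two bounds above:

```python
from itertools import combinations

def hamming_distance(a, b):
    """Number of bit positions in which two equal-length codewords differ."""
    return sum(x != y for x, y in zip(a, b))

def dmin(code):
    """Minimum Hamming distance over all pairs of codewords."""
    return min(hamming_distance(a, b) for a, b in combinations(code, 2))

# Triple repetition code: '0' -> 000, '1' -> 111
rep3 = ["000", "111"]
d = dmin(rep3)                 # dmin = 3
t_detect = d - 1               # dmin >= t+1  -> detect up to dmin-1 errors
t_correct = (d - 1) // 2       # dmin >= 2t+1 -> correct up to floor((dmin-1)/2) errors
print(d, t_detect, t_correct)  # 3 2 1
```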
Code Rate (Rc)
Rc = k/n, where k is the number of message bits and n is the total number of codeword bits.
Example: The probability of transmission error of a channel is 'x' per bit and triple repetition codes are used. Calculate the probability that the received word will be in error when the code is used for (a) error correction, (b) error detection.
- Uncoded: Pwe,uncoded = x
- With the triple repetition code (P(i, n) denotes the probability of exactly i bit errors in n bits):
  Pwe,detection = P(3, 3) = x^3
  Pwe,correction = P(2, 3) + P(3, 3) = 3x^2(1 − x) + x^3 = 3x^2 − 2x^3
- For any channel with x << 1: Pwe,uncoded > Pwe,correction > Pwe,detection
- Improved BER at the cost of a reduced data rate.
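A minimal check of these expressions, assuming a memoryless binary symmetric channel with bit-error probability x (an illustrative sketch, not part of the slides):

```python
import random

def simulate_rep3(x, trials=200_000, mode="correct"):
    """Estimate word-error probability of the (3,1) repetition code on a BSC(x).

    mode="correct": majority-vote decoding fails if 2 or 3 bits are flipped.
    mode="detect":  an error goes undetected only if all 3 bits are flipped.
    """
    errors = 0
    for _ in range(trials):
        flips = sum(random.random() < x for _ in range(3))
        if mode == "correct" and flips >= 2:
            errors += 1
        elif mode == "detect" and flips == 3:
            errors += 1
    return errors / trials

x = 0.1
print("uncoded   :", x)
print("correction:", simulate_rep3(x, mode="correct"), "theory:", 3*x**2 - 2*x**3)
print("detection :", simulate_rep3(x, mode="detect"),  "theory:", x**3)
```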
FEC and BEC systems
• In FEC systems the receiver processes every received data packet (sometimes it may introduce more errors instead of correcting an existing error).
• BEC systems use error-detecting codes, and retransmission of the packet is requested for error correction. An ARQ protocol is used for this purpose.
• Error detecting codes
– Horizontal Redundancy Check (HRC)
– Vertical Redundancy Check (VRC)
– Cyclic Redundancy Check (CRC)
– Checksum
• Automatic Repeat Request (ARQ)
– Stop and wait
– Go-back-N
– Selective repeat/reject
Performance of FEC system
• Thus (t + 1)·Rc > 1 is required for better performance of an FEC system. Also, the 1- or 2-bit error-correcting codewords should have a relatively high Rc.
• By plotting Pbe for both cases (coded and uncoded) it can be observed that coding is beneficial for a reasonably high value of γb.
Example (7,4) code (M = message bits, C = check bits, w(X) = weight of the codeword X = [M C]):
 M     C   w(X)      M     C   w(X)
0000  000   0       1000  101   3
0001  011   3       1001  110   4
0010  110   3       1010  011   4
0011  101   4       1011  000   3
0100  111   4       1100  010   3
0101  100   3       1101  001   4
0110  001   3       1110  100   4
0111  010   4       1111  111   7
Syndrome Decoding
• H^T = [P ; Iq], i.e. the parity submatrix P stacked over the q × q identity matrix Iq.
• For every valid codeword: Xi · H^T = (0 0 0 . . . 0)1×q
• If there is any error, the received word is Yi = Xi + Ei.
• The operation Yi · H^T then generates a non-zero (1×q) vector called the syndrome (S):
  S = Y · H^T = (X + E) · H^T = E · H^T
• The receiver already possesses a syndrome table. The generated syndrome is looked up in this table to find the corresponding error vector.
• This error vector is added to Yi to recover Xi, since Yi + Ei = Xi + Ei + Ei = Xi.
• How does the receiver already have a syndrome table? It is precomputed: for every correctable error pattern E, the syndrome E·H^T is stored against E.
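A small Python sketch of syndrome decoding for the (7,4) systematic code tabulated above (the parity submatrix P below is read off that table; only single-error patterns are stored), offered as an illustrative sketch rather than the slides' own implementation:

```python
import numpy as np

# Parity submatrix P read from the (7,4) example table (one row per message bit).
P = np.array([[1, 0, 1],
              [1, 1, 1],
              [1, 1, 0],
              [0, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])      # systematic generator matrix [I4 | P]
HT = np.vstack([P, np.eye(3, dtype=int)])     # H^T = [P ; I3]

# Precomputed syndrome table: syndrome -> single-bit error pattern
syndrome_table = {}
for i in range(7):
    e = np.zeros(7, dtype=int); e[i] = 1
    syndrome_table[tuple(e @ HT % 2)] = e

def encode(m):
    return m @ G % 2

def decode(y):
    s = tuple(y @ HT % 2)
    e = syndrome_table.get(s, np.zeros(7, dtype=int))  # zero syndrome -> no correction
    return (y + e) % 2                                  # corrected codeword

x = encode(np.array([1, 0, 1, 1]))   # codeword for message 1011
y = x.copy(); y[5] ^= 1              # introduce a single-bit error
print(np.array_equal(decode(y), x))  # True: the error is corrected
```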
• For a t-error-correcting (n, k) code, the Hamming bound gives
  1 − Rc ≥ (1/n)·log2 { Σ i=0..t nCi }   (equality holds for perfect codes)
• "Rc should be close to unity in order to minimize overhead, but it requires n >> 1, which in turn requires more memory space and decoding time."
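To see why a code rate close to unity pushes n upward, a small sketch (illustrative, not from the slides) that evaluates the Hamming-bound redundancy fraction for single-error correction (t = 1):

```python
from math import comb, log2

def min_redundancy_fraction(n, t):
    """Lower bound on 1 - Rc from the Hamming bound for a t-error-correcting code of length n."""
    return log2(sum(comb(n, i) for i in range(t + 1))) / n

for n in (7, 15, 31, 63, 127):
    frac = min_redundancy_fraction(n, t=1)
    print(f"n = {n:3d}: 1 - Rc >= {frac:.3f}  (Rc <= {1 - frac:.3f})")
```

As n grows, the required redundancy fraction shrinks, so Rc can approach unity only for long codes.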
If every cyclic shift of a valid code vector produces another valid code vector, the code is known as a cyclic code.
(7,4) Cyclic code example
 M     C   w(X)      M     C   w(X)
0000  000   0       1000  101   3
0001  011   3       1001  110   4
0010  110   3       1010  011   4
0011  101   4       1011  000   3
0100  111   4       1100  010   3
0101  100   3       1101  001   4
0110  001   3       1110  100   4
0111  010   4       1111  111   7
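The table above can be reproduced with the standard systematic cyclic encoding, i.e. appending the remainder of p^q·M(p) divided by G(p); the generator polynomial G(p) = p^3 + p + 1 and this construction are assumptions consistent with the table, not stated on these slides:

```python
def gf2_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division (polynomials packed into integers)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

g = 0b1011                          # assumed generator G(p) = p^3 + p + 1 (q = 3)
for m in range(16):                 # every 4-bit message M
    check = gf2_mod(m << 3, g)      # C = remainder of p^3 * M(p) / G(p)
    codeword = (m << 3) | check     # systematic codeword X = [M | C]
    print(f"{m:04b} {check:03b}  weight={bin(codeword).count('1')}")
```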
Generation of cyclic codes
• The cyclic shift of a codeword:
  If X = (x_(n-1) x_(n-2) ... x_1 x_0) with X(p) = x_(n-1)·p^(n-1) + x_(n-2)·p^(n-2) + ... + x_1·p + x_0, then the shifted word
  X' = (x_(n-2) ... x_1 x_0 x_(n-1)) has X'(p) = p·X(p) + x_(n-1)·(p^n + 1)   (arithmetic over GF(2))
• Theorem: If G(p) is a factor of (p^n + 1) having degree q, then the multiplication QM(p)·G(p) generates a cyclic code, where QM(p) is the polynomial of any k-bit message (k = n − q).
• However, for systematic codes the X(p) so generated may not be used as the codeword for the message QM(p), since the product QM(p)·G(p) is in general non-systematic.
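A short sketch of this non-systematic construction over GF(2), assuming the generator G(p) = p^3 + p + 1 (a degree-3 factor of p^7 + 1; the choice is an assumption consistent with the (7,4) example above):

```python
def gf2_mul(a, b):
    """Multiply two GF(2) polynomials packed into integers (bit i = coefficient of p^i)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

g = 0b1011                                       # G(p) = p^3 + p + 1, q = 3, so n = 7, k = 4
codewords = {gf2_mul(m, g) for m in range(16)}   # non-systematic codewords QM(p)*G(p)

def rotate7(c):
    """One cyclic shift within a 7-bit word."""
    return ((c << 1) | (c >> 6)) & 0b1111111

# Every cyclic shift of a codeword is again a codeword, confirming the code is cyclic.
print(all(rotate7(c) in codewords for c in codewords))   # True
for m in (0b0001, 0b1011):
    print(f"{m:04b} -> {gf2_mul(m, g):07b}")
```

Note that these codewords differ from the systematic table above, which is exactly the point made in the last bullet.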
Coding Gain
• For an AWGN channel, the decoded error probability of a convolutional code having code rate Rc = k/n and free distance df can be derived as
  Pbe ∝ e^(−(Rc·df/2)·γb)
Maximum error correction, but maximum delay and high storage and computation requirements.
Other Decoding Methods
• Sequential Decoding: the output is available after each step; an increasing threshold is used. (Minimum delay, minimum error-correction capability.)
• Feedback Decoding: the first output is available after a predetermined number of steps (e.g. 3). (Moderate delay, moderate error correction.)
Punctured Convolutional Codes
(Improving the code rate from 1/n to k'/n')
• Choose a puncturing matrix P of dimension n × k'.
• It should have n' ones, with all other elements zero.
• k' consecutive codewords (corresponding to k' input bits, assuming a rate-1/n encoder) are compared with the k' columns of the matrix P.
• Wherever there is a zero in matrix P, the corresponding bit (whether zero or one) of the corresponding codeword is dropped.
• Thus the total number of bits transmitted is n' for k' input bits, so the effective code rate is k'/n'. (A sketch of this procedure follows this list.)
• The same matrix P is used at the receiver. Either all zeros or all ones are inserted in place of the dropped bits.
• The result can then be decoded using a simple decoder, which treats the puncturing errors as transmission errors.
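A minimal Python sketch of the puncture/depuncture step, assuming a rate-1/2 mother code (n = 2) and a hypothetical puncturing matrix chosen to give an effective rate of k'/n' = 3/4:

```python
import numpy as np

# Hypothetical puncturing matrix P (n x k' = 2 x 3) with n' = 4 ones:
# column j says which of the n = 2 coded bits of input bit j are kept.
P = np.array([[1, 1, 0],
              [1, 0, 1]])
n, k_prime = P.shape
n_prime = int(P.sum())            # 4 -> effective code rate k'/n' = 3/4

def puncture(coded_bits):
    """Drop the coded bits aligned with zeros in P (coded_bits has length n*k')."""
    block = np.reshape(coded_bits, (k_prime, n)).T   # column j = codeword of input bit j
    return block[P == 1]                             # keep only positions marked 1

def depuncture(received_bits, fill=0):
    """Re-insert 'fill' values where bits were dropped, restoring n*k' positions."""
    block = np.full((n, k_prime), fill, dtype=int)
    block[P == 1] = received_bits
    return block.T.reshape(-1)

coded = np.array([1, 0, 1, 1, 0, 1])   # 3 input bits -> 6 coded bits from the rate-1/2 encoder
tx = puncture(coded)                   # 4 bits actually transmitted
rx = depuncture(tx)                    # 6 bits handed to the ordinary rate-1/2 decoder
print(tx, rx)
```

The zeros re-inserted by depuncture are exactly the "puncturing errors" that the ordinary decoder treats as transmission errors.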
Recursive Systematic Convolutional (RSC) Codes
Claude Berrou et al., 'Near Shannon limit error correcting coding and decoding', Proc. IEEE Intl. Conf. on Communications, Geneva, 1993.
Hard vs. Soft Decision Decoding
• Representing the demodulator output with m > 1 bits corresponds to soft-decision decoding.
• Soft decision gives about a 2 dB gain over hard decision with a 3-bit representation, and about 2.2 dB with an infinite-precision representation.
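As an illustration of what an m-bit soft output means, here is a sketch of a uniform 3-bit quantizer applied to a noisy BPSK demodulator output (the clipping range ±1.5 and the mapping are assumptions, not from the slides); a hard decision keeps only the sign:

```python
import numpy as np

def soft_quantize(samples, m=3, clip=1.5):
    """Map real demodulator outputs to 2**m soft levels (0 .. 2**m - 1)."""
    levels = 2 ** m
    clipped = np.clip(samples, -clip, clip)
    return np.round((clipped + clip) / (2 * clip) * (levels - 1)).astype(int)

rng = np.random.default_rng(0)
tx = np.array([+1, -1, +1, +1, -1])              # BPSK symbols
rx = tx + 0.5 * rng.standard_normal(tx.size)     # noisy demodulator output
hard = (rx < 0).astype(int)                      # hard decision: 1 bit per symbol
soft = soft_quantize(rx)                         # soft decision: 3 bits per symbol
print(hard, soft)
```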