Error Control Coding

Evolved from structured sequences (block codes, cyclic codes, etc.)
to random-looking sequences (convolutional codes, turbo codes, etc.)

IMPORTANCE
• Coding gains can give more than 10 dB of performance improvement.
• VLSI and DSP chips are cheaper than using higher-power transmitters and/or bigger antennas.
Concept of redundancy
• Systematic generation and addition of redundant bits to the message bits at the transmitter, such that transmission errors can be minimized by performing an inverse operation at the receiver. e.g. triple repetition codes:
  Data bit '0' is replaced by the code 000
  Data bit '1' is replaced by the code 111
• A lower error rate should be obtained for the same transmitted power, i.e. the n coded bits should carry the same power as was carried by the k uncoded bits.
• It embodies Shannon's power-bandwidth tradeoff: for the same BER, power saving is possible at the expense of the extra bandwidth needed for redundant bit transmission.
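A minimal sketch of the triple repetition code described above (function names are illustrative, not from the slides); majority voting at the receiver is the "inverse operation" that corrects any single bit error per code group:

```python
# A minimal sketch of the triple repetition code, assuming hard decisions.

def encode(bits):
    """Replace each data bit with three copies (0 -> 000, 1 -> 111)."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    """Majority vote over each group of three received bits."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

msg = [1, 0, 1]
tx = encode(msg)          # [1, 1, 1, 0, 0, 0, 1, 1, 1]
tx[1] ^= 1                # flip one channel bit in the first group
assert decode(tx) == msg  # a single error per group is corrected
```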
Code vectors, Hamming Distance
• An n-bit codeword can be represented as a code vector in an n-dimensional vector space. Each unit-weight vector represents a unique dimension/direction.
• Only 2^k of the 2^n total possible unique vectors are valid code vectors.
• The distance between two code vectors is measured as the weight of the resultant when mod-2 addition is done on these vectors.
• The minimum distance over all possible pairs of valid code vectors is called the minimum (Hamming) distance, dmin.

Detect up to t errors:   dmin >= t + 1
Correct up to t errors:  dmin >= 2t + 1
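A minimal sketch of these two definitions (names are illustrative): distance as the weight of the mod-2 sum, and dmin as the minimum over all codeword pairs:

```python
# Distance between code vectors = weight of their mod-2 (XOR) sum;
# dmin = minimum over all pairs of valid codewords.
from itertools import combinations

def distance(a, b):
    return sum(x ^ y for x, y in zip(a, b))

def d_min(codewords):
    return min(distance(a, b) for a, b in combinations(codewords, 2))

rep3 = [(0, 0, 0), (1, 1, 1)]   # the triple repetition code
print(d_min(rep3))              # 3 -> detects 2 errors, corrects 1
```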
Code Rate (Rc)
Rc = k/n

A good code should have a high dmin and an Rc close to 1.
Triple repetition codes
If the probability of transmission error of a channel is 'x' per bit and triple repetition codes are used, calculate the probability that the received word will be in error when
(a) it is used for error correction
(b) it is used for error detection

  Pwe,uncoded = x

- With the triple repetition code:
  Pwe,detection = P(3,3) = x^3
  Pwe,correction = P(2,3) + P(3,3) = 3x^2 - 2x^3

- For any channel with x << 1:
  Pwe,uncoded > Pwe,correction > Pwe,detection

- Improved BER at the cost of reduced data rate.
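A quick numeric check of the expressions above for an assumed per-bit error probability x = 10^-3:

```python
# Numeric check of the triple-repetition word error probabilities.
x = 1e-3
p_uncoded = x
p_detect  = x**3                  # only an all-three-bits error goes undetected
p_correct = 3 * x**2 - 2 * x**3   # P(2,3) + P(3,3)
print(p_uncoded, p_correct, p_detect)   # 1e-3 > ~3e-6 > 1e-9
```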
FEC and BEC systems
• In FEC systems the receiver processes every received data packet (sometimes it may introduce more errors instead of correcting an existing one).
• BEC systems use error-detecting codes, and retransmission of the packet is requested for error correction. An ARQ protocol is used for this purpose.

• Error detecting codes
  – Horizontal Redundancy Check (HRC)
  – Vertical Redundancy Check (VRC)
  – Cyclic Redundancy Check (CRC)
  – Checksum
• Automatic Repeat Request (ARQ)
  – Stop and wait
  – Go-back-N
  – Selective repeat/reject
Performance of FEC system
• If Eb represents the average energy per message bit for uncoded transmission, then γb = Eb/N0 and the message-bit BER introduced by the channel is
  Pube = Q(sqrt(2*γb))
• For a fair comparison the message-bit energy in the coded system should remain the same; the energy per code bit is therefore lower, giving γc = Rc*γb, and the BER introduced by the channel becomes
  α = Q(sqrt(2*Rc*γb))

• If a code can correct up to t errors in a word, then the word error probability, considering word errors due to t+1 channel errors, can be given by
  Pwe ≈ C(n, t+1)*α^(t+1)   (ignoring the (1-α) term and the error patterns of t+2 to n errors)
• Since the uncorrected n-bit codeword contains t+1 error bits, the message-bit errors per uncorrected codeword ≈ (k/n)(t+1).
• When N words are transmitted, the total number of erroneous message bits ≈ (k/n)(t+1)*N*Pwe, so
  Pbe = [(k/n)(t+1)*N*Pwe]/(N*k) = [(t+1)/n]*Pwe = C(n-1, t)*α^(t+1)
  (using the identity [(t+1)/n]*C(n, t+1) = C(n-1, t))
  Pbe = C(n-1, t)*[Q(sqrt(2*Rc*γb))]^(t+1)

• On approximating the Q function by an exponential, Pbe involves e^(-(t+1)*Rc*γb) and e^(-γb) for the coded and uncoded cases respectively.
• Thus (t+1)*Rc > 1 is needed for the FEC system to perform better. Also, 1- or 2-bit error-correcting codewords should have a relatively high Rc.
• By plotting Pbe for both cases (coded and uncoded) it can be observed that coding is beneficial for reasonably high values of γb.
• For moderate values it might be better to increase the power of an uncoded transmission; for low values coding makes the situation worse. (Ex 13.1.1)
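A hedged numeric sketch of the comparison above, assuming the (7,4) Hamming code with t = 1 (my choice of example) and writing Q(x) = 0.5*erfc(x/sqrt(2)):

```python
# Coded vs. uncoded message-bit error rate per the expressions above.
from math import comb, erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

n, k, t = 7, 4, 1                 # (7,4) Hamming code, corrects t = 1 error
Rc = k / n
gamma_b = 10.0                    # Eb/N0 as a ratio (10 dB)

p_uncoded = Q(sqrt(2 * gamma_b))
alpha = Q(sqrt(2 * Rc * gamma_b))            # channel BER on coded bits
p_coded = comb(n - 1, t) * alpha ** (t + 1)  # Pbe = C(n-1,t) * alpha^(t+1)
print(p_uncoded, p_coded)                    # coding wins at this gamma_b
```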
Performance of ARQ
• As far as Pbe is concerned (which depends upon the undetected errors), the expression for BEC remains the same as for FEC, but with dmin = t + 1.
• Clearly, BEC seems advantageous in terms of Pbe, but we need to consider the loss due to retransmission.
• Probability of retransmission: p = 1 - [P(0,n) + Pwe], where P(0,n) = (1-α)^n is the probability of zero errors in n bits.
• p = 1 - (1-α)^n ≈ nα (ignoring Pwe and the higher-order terms of the (1-α)^n expansion).
• In BEC, Rc' plays the same role as Rc does in FEC; thus, simply by making this substitution, all the expressions developed for FEC can be used.
• Exercise: Find the expression for Rc' for the other sliding-window protocols.
• Exercise: Find the average number of transmitted words per accepted word and the throughput efficiency for selective repeat (srej). (A numeric sketch follows.)
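A hedged numeric sketch of the retransmission probability; the selective-repeat average m = 1/(1 - p) is a standard geometric-distribution result, stated here as an assumption since the slide leaves it as an exercise:

```python
# Retransmission probability and average transmissions per accepted word.
alpha = 1e-4          # assumed channel bit error probability
n = 127               # assumed codeword length

p = 1 - (1 - alpha) ** n      # approx. n*alpha for alpha << 1
m_srej = 1 / (1 - p)          # avg transmissions per accepted word (srej)
print(p, n * alpha, m_srej)
```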
Linear Systematic Block Codes
• Linearity property:
  – The all-zero vector must be a valid code vector.
  – The sum of any two valid code vectors produces another valid code vector.
• Requirement for linear systematic block codes (n,k):
  [X] = [M | C]
  If G is the generator matrix such that X = M*G, then G must be given as [Ik | P].
  What should be the dimensions of X, M, C and P? (See the sketch below.)

Types of codes
• Error detecting/correcting
• Systematic/nonsystematic
• Binary/nonbinary
• Block (includes cyclic codes)/convolutional
• Recursive/nonrecursive
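A minimal sketch of X = M*G over GF(2), using the P matrix from the (7,4) example on the next slide; the shapes answer the dimension question (M is 1 x k, C is 1 x q, X is 1 x n, P is k x q):

```python
# Systematic encoding X = M*G with G = [I_k | P] over GF(2).
import numpy as np

P = np.array([[1, 0, 1],
              [1, 1, 1],
              [1, 1, 0],
              [0, 1, 1]])                    # k x q parity matrix (k=4, q=3)
k, q = P.shape
G = np.hstack([np.eye(k, dtype=int), P])     # k x n generator, n = k + q

M = np.array([1, 0, 1, 1])                   # 1 x k message
X = M @ G % 2                                # 1 x n codeword [M | C]
print(X)                                     # [1 0 1 1 0 0 0] -> C = 000, per the table
```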
(7,4) Systematic Hamming code
The codewords for P = [101; 111; 110; 011]

M     C    W(x) |  M     C    W(x)
0000  000  0    |  1000  101  3
0001  011  3    |  1001  110  4
0010  110  3    |  1010  011  4
0011  101  4    |  1011  000  3
0100  111  4    |  1100  010  3
0101  100  3    |  1101  001  4
0110  001  3    |  1110  100  4
0111  010  4    |  1111  111  7
Syndrome Decoding
• H^T = [P; Iq] (the k x q matrix P stacked over the q x q identity matrix).
• For any valid codeword: Xi*H^T = (0 0 0 . . . 0)1xq.
• If there is any error, then Yi = Xi + Ei.
• The operation Yi*H^T will then generate a nonzero (1xq) vector called the syndrome (S):
  S = Y*H^T = E*H^T
• All possible error patterns E are already known (depending upon the error-correcting capability of the code), and H^T is also a known quantity.
• Hence, if a code is to correct up to t errors per word, then
  2^q - 1 >= C(n,1) + C(n,2) + C(n,3) + . . . + C(n,t)
  (total number of nonzero syndromes) >= (number of correctable error patterns)
• The receiver already possesses a syndrome table. The generated syndrome is searched in it to find the corresponding error vector.
• This error vector is added to Yi to recover Xi, since Yi + Ei = Xi + Ei + Ei = Xi.
• How does the receiver already have a syndrome table? (See the sketch below.)

To accommodate the syndrome table the decoder needs to store (q+n)*2^q bits. The larger the table, the more time is required to find the error vector, and thus the longer the decoding delay.
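A minimal sketch of building and using the syndrome table for the (7,4) code above, assuming single-error correction (t = 1); the helper names are illustrative:

```python
# Syndrome-table decoding: precompute S = E*H^T for every correctable E.
import numpy as np

P = np.array([[1, 0, 1], [1, 1, 1], [1, 1, 0], [0, 1, 1]])
k, q = P.shape
n = k + q
H_T = np.vstack([P, np.eye(q, dtype=int)])   # n x q, H^T = [P; I_q]

table = {}                                   # syndrome -> error vector
for i in range(n):                           # all single-bit error patterns
    e = np.zeros(n, dtype=int)
    e[i] = 1
    table[tuple(e @ H_T % 2)] = e

Y = np.array([1, 0, 1, 1, 0, 1, 0])          # received word, one bit flipped
s = tuple(Y @ H_T % 2)
X = Y if s == (0,) * q else (Y + table[s]) % 2   # add E back to recover X
print(X)                                         # [1 0 1 1 0 0 0]
```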
Drawbacks of block codes
• Syndrome decoding requires
  2^q - 1 >= C(n,1) + C(n,2) + C(n,3) + . . . + C(n,t)
  q >= log2{∑ C(n,i)},  0 <= i <= t
  n - k = n*(1 - k/n) ≈ log2{∑ C(n,i)}
  1 - Rc ≈ (1/n)*log2{∑ C(n,i)}
• "Rc should be close to unity in order to minimize overhead, but that requires n >> 1, which in turn requires more memory space and decoding time."

Cyclic codes
If every cyclic shift of a valid code vector produces another valid code vector, then it is known as a cyclic code.
(7,4) Cyclic code example
(Note that the codeword set is identical to the (7,4) systematic Hamming code above.)

M     C    W(x) |  M     C    W(x)
0000  000  0    |  1000  101  3
0001  011  3    |  1001  110  4
0010  110  3    |  1010  011  4
0011  101  4    |  1011  000  3
0100  111  4    |  1100  010  3
0101  100  3    |  1101  001  4
0110  001  3    |  1110  100  4
0111  010  4    |  1111  111  7
Generation of cyclic codes
• The cyclic shift of a codeword:
  X = (x_{n-1} x_{n-2} . . . x_1 x_0), with
  X(p) = x_{n-1}*p^(n-1) + x_{n-2}*p^(n-2) + . . . + x_1*p + x_0, then
  X' = (x_{n-2} . . . x_1 x_0 x_{n-1}), with
  X'(p) = p*X(p) + x_{n-1}*(p^n + 1)   (in mod-2 arithmetic)

• Theorem: If G(p) is a factor of (p^n + 1) having degree q, then the multiplication QM(p)*G(p) will generate a cyclic code, where QM(p) is any k-bit message polynomial. Note, though, that for systematic codes the X(p) so generated may not be used as the codeword for the message QM(p).

• Hamming codes, BCH codes, Golay codes, etc. are examples of cyclic codes.

• If there is more than one factor of (p^n + 1) having degree q, then choose any factor. It will generate a valid cyclic code, but it may not possess optimum error-correcting capability.
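A minimal sketch of the theorem for the (7,4) case, assuming G(p) = p^3 + p + 1 (one of the two degree-3 factors of p^7 + 1 over GF(2)); polynomials are stored as integers with bit i holding the coefficient of p^i:

```python
# Generate a (nonsystematic) (7,4) cyclic code as QM(p)*G(p) over GF(2).

def polymul(a, b):
    """Carry-less (GF(2)) polynomial multiplication."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

G = 0b1011                     # G(p) = p^3 + p + 1
for m in range(16):            # all 4-bit message polynomials QM(p)
    x = polymul(m, G)          # 7-bit cyclic codeword
    print(f"{m:04b} -> {x:07b}")
```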
Systematic cyclic codes
• For an (n,k) code (q = n - k) the requirement is
  X(p) = p^q*M(p) + C(p)
• Combining this with the cyclic property:
  QM(p)*G(p) = p^q*M(p) + C(p)
  QM(p) = {p^q*M(p)}/G(p) + C(p)/G(p)
  {p^q*M(p)}/G(p) = QM(p) + C(p)/G(p)
  i.e. the check bits are the division remainder: C(p) = rem{p^q*M(p)/G(p)}

ADVANTAGES:
• Simplified encoding and decoding.
• Easy syndrome calculation: S(p) = rem{Y(p)/G(p)}.
• Ingenious error-correcting decoding methods have been devised for specific cyclic codes. They eliminate the storage needed for table lookup.
• Cyclic Redundancy Check (CRC) codes, a class of cyclic codes, have the ability to detect burst errors of length q, all single-bit errors, any odd number of errors if (p + 1) is a factor of G(p), and double errors if G(p) has at least three 1's.
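A minimal sketch of systematic encoding via C(p) = rem{p^q*M(p)/G(p)}, with the same integer-as-polynomial convention as the previous sketch; it reproduces the table entry M = 1010 -> C = 011:

```python
# Systematic cyclic encoding by GF(2) polynomial division (CRC-style).

def polymod(a, g):
    """Remainder of GF(2) polynomial division of a by g."""
    dg = g.bit_length() - 1
    while a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

G, q = 0b1011, 3               # G(p) = p^3 + p + 1, degree q = 3
M = 0b1010                     # message polynomial M(p)
C = polymod(M << q, G)         # check bits C(p)
X = (M << q) | C               # X(p) = p^q*M(p) + C(p), systematic form
print(f"{X:07b}")              # 1010011 -> C = 011, per the table
```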
(2,1,2) Convolutional Encoder
G1(p) = 1 + p + p^2 and G2(p) = 1 + p^2
The message polynomial:
  M(p) = 1 + p + p^3 + p^4 + p^5 + p^8
  Xj' = M(p)*G1(p) = 1 + p^5 + p^7 + p^8 + p^9 + p^10
  Xj'' = M(p)*G2(p) = 1 + p + p^2 + p^4 + p^6 + p^7 + p^8 + p^10

Thus the interleaved output Xj = Xj'Xj'' = 11 01 01 00 01 10 . . .
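A minimal sketch reproducing the example above, reusing the carry-less multiply convention from the cyclic-code sketch (bit i = coefficient of p^i):

```python
# (2,1,2) convolutional encoding as GF(2) polynomial multiplication.

def polymul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

G1, G2 = 0b111, 0b101                      # 1+p+p^2 and 1+p^2
M = 0b100111011                            # 1+p+p^3+p^4+p^5+p^8
X1, X2 = polymul(M, G1), polymul(M, G2)
pairs = [f"{(X1 >> i) & 1}{(X2 >> i) & 1}" for i in range(11)]
print(" ".join(pairs))                     # 11 01 01 00 01 10 ...
```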


Catastrophic error
• Occurs when a finite number of channel errors causes an infinite number of decoding errors.
• It occurs if all the generator polynomials G1(p), G2(p), . . . producing Xj', Xj'', . . . have a common factor.
• Systematic codes are free from this error, but they have poor error-correcting performance.

Example: G1(p) = 1 + p and G2(p) = p + p^2 have the common factor (1 + p).
Free Distance and Coding Gain
• Free distance: df = [w(X)]min over all paths, excluding the all-zero path.
• The trellis is terminated (to ensure the all-zero state for the next run and to find the minimum-weight nontrivial path; df = 5 in the figure).

Coding Gain
• For an AWGN channel, the decoded error probability for a convolutional code having code rate Rc = k/n and free distance df can be derived as
  Pbe proportional to e^(-(Rc*df/2)*γb)
• Pbe improves when (Rc*df/2) > 1.
• By definition, coding gain = Rc*df/2, usually expressed in dB.
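As a quick worked check of the definition above, using the Rc = 1/2, df = 5 values from the example trellis:

```python
# Coding gain = Rc*df/2, expressed in dB.
from math import log10

Rc, df = 1 / 2, 5
gain = Rc * df / 2                   # = 1.25 > 1, so Pbe improves
print(f"{10 * log10(gain):.2f} dB")  # ~0.97 dB
```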
Viterbi Decoding
Maximum error correction, but maximum delay, high storage and computation requirements.

Other Decoding Methods
• Sequential decoding: the output is available after each step; an increasing threshold is used. (Minimum delay, minimum error-correction capability.)
• Feedback decoding: the first output is available after a predetermined number of steps, e.g. 3. (Moderate delay, moderate error correction.)
Punctured Convolutional Codes
(Improving the code rate to k'/n' from 1/n)
• Choose a matrix P of dimension n x k'.
• It should have n' ones and all other elements zero.
• Consecutive k' codewords (corresponding to k' input bits, assuming a rate-1/n encoder) are compared with the k' columns of matrix P.
• Wherever there is a zero in matrix P, the corresponding bit (which may be zero or one) of the corresponding codeword is dropped.
• Thus the total number of transmitted bits is n' for k' input bits, so the effective code rate is k'/n'. (See the sketch below.)
• The same matrix P is used at the receiver. Either all zeros or all ones are inserted in place of the dropped bits.
• The result can then be decoded using a simple decoder which treats the puncturing errors as transmission errors.
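A minimal sketch of puncturing, assuming a rate-1/2 mother code (n = 2) punctured to rate 2/3 with the common pattern P = [[1, 1], [1, 0]]; the pattern choice is an assumption, since the slide leaves P generic:

```python
# Puncture a rate-1/2 coded stream to rate 2/3 (n' = 3 ones, k' = 2 columns).
P = [[1, 1],          # row j: keep (1) / drop (0) for output branch j
     [1, 0]]

def puncture(pairs):
    """pairs: list of (x1, x2) coded bit pairs, one pair per input bit."""
    out = []
    for col, (x1, x2) in enumerate(pairs):
        if P[0][col % 2]:
            out.append(x1)
        if P[1][col % 2]:
            out.append(x2)
    return out

coded = [(1, 1), (0, 1), (0, 1), (0, 0)]
print(puncture(coded))    # [1, 1, 0, 0, 1, 0] -> 6 bits for 4 inputs, Rc = 2/3
```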
Recursive Systematic Convolutional (RSC) Codes
• A (23,35) RSC encoder with R = 1/2:
  – Polynomial for the feedback connection: 10011 = 23 (octal)
  – Polynomial for the check-bit connection: 11101 = 35 (octal)
Turbo Codes
"Almost all codes are good, except those we can think of." – J. Wolfowitz

Turbo code encoder

Turbo code decoder

C. Berrou et al., "Near Shannon limit error-correcting coding and decoding," Proc. IEEE Int. Conf. on Communications, Geneva, 1993.
Hard vs. Soft Decision Decoding
Representing the demodulator output with m > 1 bits corresponds to soft-decision decoding.
About 2 dB of gain is obtained with a 3-bit representation, and 2.2 dB with an infinite-bit representation.
