
Chapter 10

Error Detection
and
Correction

10.1 Copyright © The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
Position of the data-link layer

10.2
Data link layer duties

 Packetizing: move data packets or frames from one hop to the next within the
same LAN or WAN
 Addressing: uses physical addresses (MAC addresses) to identify
the next hop in hop-to-hop delivery
 Error Control: uses error detection and correction to reduce the severity of
error occurrence
 Flow Control: controls how much data a sender can transmit before it must
wait for an acknowledgement from the receiver in order not to overwhelm it
 Medium Access Control (MAC): controls how the nodes should access a
shared media in a way to prevent packet collisions

10.3
Note

Data can be corrupted during transmission.

Some applications require that errors be detected and corrected.

10.4
10-1 INTRODUCTION

Let us first discuss some issues related, directly or
indirectly, to error detection and correction.

Topics discussed in this section:


Types of Errors
Redundancy
Detection Versus Correction
Forward Error Correction Versus Retransmission
Coding
Modular Arithmetic

10.5
Note

In a single-bit error, only 1 bit in the data unit has changed.

10.6
Figure 10.1 Single-bit error

10.7
Note

A burst error means that 2 or more bits in the data unit have changed.

10.8
Figure 10.2 Burst error of length 8

 The length of the burst is measured from the first to the last corrupted bit
 A burst error does not necessarily mean that errors occur in consecutive
bits; bits between the first and last corrupted bits may be intact
 Burst errors are most likely to occur in serial transmission
 The noise duration is usually longer than the duration of one bit
 The severity of a burst error depends on the data rate and the noise duration
 For example: a noise duration of 0.01 s would corrupt 10 bits of a 1 kbps
transmission and 10,000 bits of a 1 Mbps transmission
10.9
Error Detection & Correction Concepts
 Error detection: find out whether an error has occurred
 Error correction: find the exact number of corrupted bits
and their exact location within the message
 Error detection is the first step towards error correction
 Error detection and correction use the concept of
redundancy, which is adding extra bits to the data unit
 The extra bits carry information about the data unit that
lets the receiver determine whether the data unit has
changed during transmission
 The extra bits are discarded as soon as the accuracy of
the transmission has been determined
10.10
Note

To detect or correct errors, we need to send extra (redundant) bits with the data.

10.11
Forward Error Correction vs. Retransmission
 Error correction by retransmission
 If error is detected, the sender is asked to retransmit
 Forward Error Correction (FEC):
 The receiver uses an error correcting code, which automatically
corrects certain types of errors
 Theoretically, any error can be automatically corrected
 Error correction is harder than error detection and needs more
redundant bits
 The secret of error correction is to locate the invalid bits
 To locate a single-bit error in a data unit, you need enough
redundant bits to cover all states of change in the data bits as
well as the redundant bits and the no-change case

10.12
Coding
 Redundancy is achieved via coding schemes
 Sender adds extra bits to the data bits based on a
certain relationship
 Receiver checks the relationship in order to detect
and/or correct errors
 Factors of coding schemes
 The ratio of the redundant bits to the data bits
 The robustness of the whole process
 Categories of coding schemes:
 Block coding
 Convolution coding
10.13
Note

In this book, we concentrate on block codes; we leave convolution codes to advanced texts.

10.14
Figure 10.3 The structure of encoder and decoder

10.15
Modular Arithmetic
 Use a limited number of integers
 Define an upper limit called modulus, N
 Use only the integers from 0 to N-1, inclusive
 This is called modulo-N arithmetic
 If a number is greater than or equal to N, it is divided by N and
the remainder is the result
 If a number is negative, add as many N’s as needed to
make it positive
 There is no carry out when numbers are added or
subtracted
 In modulo-2 arithmetic, the addition and subtraction
are similar and are performed using the bit-wise XOR
function

10.16
Figure 10.4 XORing of two single bits or two words

10.17
10-2 BLOCK CODING

In block coding, we divide our message into blocks,
each of k bits, called datawords. We add r redundant
bits to each block to make the length n = k + r. The
resulting n-bit blocks are called codewords.

Topics discussed in this section:


Error Detection
Error Correction
Hamming Distance
Minimum Hamming Distance

10.18
Figure 10.5 Datawords and codewords in block coding

10.19
Example 10.1

The 4B/5B block coding discussed in Chapter 4 is a good
example of this type of coding. In this coding scheme,
k = 4 and n = 5. As we saw, we have 2^k = 16 datawords
and 2^n = 32 codewords. We saw that 16 out of the 32
codewords are used for message transfer and the rest are
either used for other purposes or unused.

10.20
Figure 10.6 Process of error detection in block coding

10.21
Example 10.2

Let us assume that k = 2 and n = 3. Table 10.1 shows the list of datawords and
codewords. Later, we will see how to derive a codeword from a dataword.
Assume the sender encodes the dataword 01 as 011 and sends it to the receiver.
Consider the following cases:
1. The receiver receives 011. It is a valid codeword. The receiver extracts the
dataword 01 from it.
2. The codeword is corrupted during transmission, and 111 is received. This is
not a valid codeword and is discarded.
3. The codeword is corrupted during transmission, and 000 is received. This is a
valid codeword. The receiver incorrectly extracts the dataword 00.
Two corrupted bits have made the error undetectable.

Table 10.1 A code for error detection (Example 10.2)
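The receiver behavior in Example 10.2 amounts to a table lookup. The mapping below reconstructs Table 10.1 from the examples (01 → 011 and 00 → 000 from Example 10.2, the third codeword 101 from Example 10.7, and 110 as the XOR of the second and third per Example 10.10); a minimal sketch, not the book's code:

```python
# Table 10.1 (reconstructed from the examples): each 2-bit dataword
# maps to a 3-bit codeword; any other 3-bit word is invalid.
code_table = {"00": "000", "01": "011", "10": "101", "11": "110"}
valid = {cw: dw for dw, cw in code_table.items()}

def decode(received: str):
    """Return the dataword, or None if the codeword is invalid."""
    return valid.get(received)

print(decode("011"))  # case 1: valid -> 01
print(decode("111"))  # case 2: invalid -> None (error detected)
print(decode("000"))  # case 3: valid but wrong -> 00 (undetected)
```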

10.22
Note

An error-detecting code can detect only the types of errors
for which it is designed; other types of errors may remain
undetected.

10.23
Figure 10.7 Structure of encoder and decoder in error correction

10.24
Example 10.3

Let us add more redundant bits to Example 10.2 to see if
the receiver can correct an error without knowing what
was actually sent. We add 3 redundant bits to the 2-bit
dataword to make 5-bit codewords. Table 10.2 shows the
datawords and codewords.

Assume the dataword is 01. The sender creates the
codeword 01011. The codeword is corrupted during
transmission, and 01001 is received. First, the receiver
finds that the received codeword is not in the table. This
means an error has occurred. The receiver, assuming that
there is only 1 bit corrupted, uses the following strategy
to guess the correct dataword.
10.25
Example 10.3 (continued)
1. Comparing the received codeword with the first
codeword in the table (01001 versus 00000), the
receiver decides that the first codeword is not the one
that was sent because there are two different bits.

2. By the same reasoning, the original codeword cannot be
the third or fourth one in the table.

3. The original codeword must be the second one in the
table because this is the only one that differs from the
received codeword by 1 bit. The receiver replaces
01001 with 01011 and consults the table to find the
dataword 01.
10.26
Table 10.2 A code for error correction (Example 10.3)

 Codeword sent = 01011, codeword received = 01001

 01001 vs. 00000 → 2-bit difference
 01001 vs. 10101 → 3-bit difference
 01001 vs. 11110 → 4-bit difference
 01001 vs. 01011 → 1-bit difference

10.27
Note

The Hamming distance between two words is the number of
differences between the corresponding bits.

10.28
Example 10.4

Let us find the Hamming distance between two pairs of
words.

1. The Hamming distance d(000, 011) is 2 because
000 ⊕ 011 = 011 (two 1s).

2. The Hamming distance d(10101, 11110) is 3 because
10101 ⊕ 11110 = 01011 (three 1s).
10.29
Note

The minimum Hamming distance is the smallest Hamming
distance between all possible pairs in a set of words.

10.30
Example 10.5

Find the minimum Hamming distance of the coding
scheme in Table 10.1.
Solution
We first find all Hamming distances.

The dmin in this case is 2.

10.31
Example 10.6

Find the minimum Hamming distance of the coding
scheme in Table 10.2.
Solution
We first find all the Hamming distances.

The dmin in this case is 3.
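Examples 10.5 and 10.6 can be checked mechanically. The codeword sets below are those used in the examples (Table 10.1 as reconstructed from Examples 10.2/10.7/10.10, Table 10.2 from the comparisons on slide 10.27); a sketch, not the book's code:

```python
from itertools import combinations

def hamming(x, y):
    """Hamming distance between two equal-length words."""
    return sum(a != b for a, b in zip(x, y))

def d_min(codewords):
    """Smallest Hamming distance over all pairs of codewords."""
    return min(hamming(x, y) for x, y in combinations(codewords, 2))

# Codewords of Table 10.1 and Table 10.2
print(d_min(["000", "011", "101", "110"]))          # -> 2
print(d_min(["00000", "01011", "10101", "11110"]))  # -> 3
```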

10.32
Note

To guarantee the detection of up to s errors in all cases,
the minimum Hamming distance in a block code
must be dmin = s + 1.

10.33
Example 10.7

The minimum Hamming distance for our first code scheme
(Table 10.1) is 2. This code guarantees detection of only a
single error. For example, if the third codeword (101) is
sent and one error occurs, the received codeword does not
match any valid codeword. If two errors occur, however,
the received codeword may match a valid codeword and
the errors are not detected.

10.34
Example 10.8

Our second block code scheme (Table 10.2) has dmin = 3.
This code can detect up to two errors. Again, we see that
when any of the valid codewords is sent, two errors create
a codeword which is not in the table of valid codewords.
The receiver cannot be fooled.

However, some combinations of three errors change a
valid codeword to another valid codeword. The receiver
accepts the received codeword and the errors are
undetected.

10.35
Figure 10.8 Geometric concept for finding dmin in error detection

 All received codewords that have 1 to s errors are
inside (or on the perimeter of) the circle
 All other valid codewords are outside the circle
10.36
Figure 10.9 Geometric concept for finding dmin in error correction

 If the received codeword is within a territory, the
receiver decides that the original codeword is at the
center of that territory
 If more than t errors occur, a wrong decision may be
made
10.37
Note

To guarantee correction of up to t errors in all cases,
the minimum Hamming distance in a block code
must be dmin = 2t + 1.

10.38
Example 10.9

A code scheme has a Hamming distance dmin = 4. What
is the error detection and correction capability of this
scheme?

Solution
This code guarantees the detection of up to three errors
(s = 3), but it can correct up to one error. In other words,
if this code is used for error correction, part of its
capability is wasted. Error correction codes need to have
an odd minimum distance (3, 5, 7, . . . ).

10.39
10-3 LINEAR BLOCK CODES

Almost all block codes used today belong to a subset
called linear block codes. A linear block code is a code
in which the exclusive OR (addition modulo-2) of two
valid codewords creates another valid codeword.

Topics discussed in this section:


Minimum Distance for Linear Block Codes
Some Linear Block Codes

10.40
Note

In a linear block code, the exclusive OR (XOR) of any two
valid codewords creates another valid codeword.

10.41
Example 10.10

Let us see if the two codes we defined in Table 10.1 and
Table 10.2 belong to the class of linear block codes.
1. The scheme in Table 10.1 is a linear block code because the
result of XORing any codeword with any other codeword is a
valid codeword. For example, the XORing of the second and
third codewords creates the fourth one.
2. The scheme in Table 10.2 is also a linear block code. We can
create all four codewords by XORing two other codewords.
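The closure property in Example 10.10 is easy to verify exhaustively; a minimal sketch (not the book's code), using the codeword sets of Tables 10.1 and 10.2 as given in the examples:

```python
from itertools import combinations

def is_linear(codewords):
    """A block code is linear if the XOR of any two valid
    codewords is itself a valid codeword."""
    words = {int(cw, 2) for cw in codewords}
    return all(a ^ b in words for a, b in combinations(words, 2))

print(is_linear(["000", "011", "101", "110"]))          # Table 10.1 -> True
print(is_linear(["00000", "01011", "10101", "11110"]))  # Table 10.2 -> True
```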

10.42
Example 10.11

In our first code (Table 10.1), the numbers of 1s in the
nonzero codewords are 2, 2, and 2. So the minimum
Hamming distance is dmin = 2.

In our second code (Table 10.2), the numbers of 1s in the
nonzero codewords are 3, 3, and 4. So in this code we have
dmin = 3.

Note: the minimum Hamming distance for a linear block
code is the smallest number of 1s in its nonzero
codewords.

10.43
Note

A simple parity-check code is a single-bit error-detecting
code in which n = k + 1 with dmin = 2.

10.44
Table 10.3 Simple even parity-check code C(5, 4)

10.45
Figure 10.10 Encoder and decoder for simple parity-check code

10.46
Example 10.12

Let us look at some transmission scenarios. Assume the sender sends the
dataword 1011. The codeword created from this dataword is 10111, which is
sent to the receiver. We examine five cases:
1. No error occurs; the received codeword is 10111. The syndrome is 0. The
dataword 1011 is created.
2. One single-bit error changes a1. The received codeword is 10011. The
syndrome is 1. No dataword is created.
3. One single-bit error changes r0. The received codeword is 10110. The
syndrome is 1. No dataword is created.
4. An error changes r0 and a second error changes a3. The received codeword is
00110. The syndrome is 0. The dataword 0011 is created at the receiver. Note
that here the dataword is wrongly created even though the syndrome is 0.
5. Three bits a3, a2, and a1 are changed by errors. The received codeword is
01011. The syndrome is 1. The dataword is not created. This shows that the
simple parity check, guaranteed to detect one single error, can also find any
odd number of errors.
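The five cases of Example 10.12 can be traced with a short sketch of the even-parity C(5, 4) encoder and decoder (illustrative, not the book's code):

```python
def encode(dataword: str) -> str:
    """Append an even-parity bit r0 so the 5-bit codeword
    has an even number of 1s (C(5, 4) code)."""
    r0 = dataword.count("1") % 2
    return dataword + str(r0)

def decode(codeword: str):
    """Syndrome = parity of all received bits; 0 means accept."""
    syndrome = codeword.count("1") % 2
    return codeword[:-1] if syndrome == 0 else None

print(encode("1011"))   # -> 10111
print(decode("10111"))  # case 1: no error -> 1011
print(decode("10011"))  # case 2: one bit flipped -> None (detected)
print(decode("00110"))  # case 4: two bits flipped -> 0011 (wrong, undetected)
```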

10.47
Note

A simple parity-check code can detect an odd number of errors.

10.48
Figure 10.11 Two-dimensional parity-check code

 Datawords are organized in a table (rows and columns)
 For each row and each column, a parity-check bit is
added
10.49
Figure 10.11 Two-dimensional parity-check code

10.50
Performance of the Two-Dimensional
Parity Check
 Increases the likelihood of detecting burst errors
 It fails if the error bits are in exactly the same
positions in two different data units (the changes
will cancel each other in this case)
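The row-and-column construction can be sketched as follows (an illustrative layout: parity bit appended to each row, then a parity row over the columns of the augmented table; not the book's code):

```python
def two_d_parity(rows):
    """Append an even-parity bit to each row, then append a row of
    column parities (computed over the augmented rows)."""
    with_row_parity = [r + [sum(r) % 2] for r in rows]
    column_parity = [sum(col) % 2 for col in zip(*with_row_parity)]
    return with_row_parity + [column_parity]

table = two_d_parity([[1, 1, 0, 0],
                      [1, 0, 1, 1],
                      [0, 1, 1, 1]])
for row in table:
    print(row)
# Every row and every column of the result has even parity.
```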

10.51
The Hamming Error Correction Codes
 The category of Hamming codes discussed
here is with dmin= 3:
 2-bit detection & 1-bit correction
 We need to choose an integer m ≥ 3 such that:
 n = 2^m − 1 and k = n − m
 The number of check bits r = m
 For example:
 If m = 3, then n = 7 and k = 4
 This is the Hamming code C(7,4) with dmin = 3
 Each codeword = a3 a2 a1 a0 r2 r1 r0

10.52
Table 10.4 Hamming code C(7, 4)

10.53
Figure 10.12 The structure of the encoder and decoder for a Hamming code

 The generator creates the 3-bit even-parity check bits:
 r0 = a2 + a1 + a0 (modulo-2)
 r1 = a3 + a2 + a1 (modulo-2)
 r2 = a1 + a0 + a3 (modulo-2)
 The checker creates the 3-bit syndrome:
 s0 = b2 + b1 + b0 + q0 (modulo-2)
 s1 = b3 + b2 + b1 + q1 (modulo-2)
 s2 = b1 + b0 + b3 + q2 (modulo-2)
10.54
Table 10.5 Logical decision made by the correction logic analyzer

 The 3-bit syndrome represents 1 of 8 different error conditions,
as shown in the table
 The decoder is only concerned with the conditions in the table
that are not shaded, in which a data bit has been flipped

10.55
Example 10.13

Let us trace the path of three datawords from the sender to the
destination:
1. The dataword 0100 becomes the codeword 0100011. The
codeword 0100011 is received. The syndrome is 000, the final
dataword is 0100.
2. The dataword 0111 becomes the codeword 0111001. The
codeword 0011001 is received. The syndrome is 011. After
flipping b2 (changing the 1 to 0), the final dataword is 0111.
3. The dataword 1101 becomes the codeword 1101000. The
codeword 0001000 is received. The syndrome is 101. After
flipping b0, we get 0000, the wrong dataword. This shows that
our code cannot correct two errors.
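Example 10.13 can be traced with a sketch of the C(7,4) encoder and decoder. The parity and syndrome equations are the ones on the encoder/decoder slide; the syndrome-to-bit map is derived from those equations (each syndrome value implicates the one bit that appears in exactly those checks), matching Table 10.5. Illustrative only:

```python
def encode(dataword: str) -> str:
    """Hamming C(7,4) encoder; codeword layout a3 a2 a1 a0 r2 r1 r0."""
    a3, a2, a1, a0 = (int(b) for b in dataword)
    r0 = (a2 + a1 + a0) % 2
    r1 = (a3 + a2 + a1) % 2
    r2 = (a1 + a0 + a3) % 2
    return dataword + f"{r2}{r1}{r0}"

def decode(codeword: str) -> str:
    """Compute the syndrome s2 s1 s0 and flip the single bit it implicates."""
    b3, b2, b1, b0, q2, q1, q0 = (int(b) for b in codeword)
    s0 = (b2 + b1 + b0 + q0) % 2
    s1 = (b3 + b2 + b1 + q1) % 2
    s2 = (b1 + b0 + b3 + q2) % 2
    # String index of the bit each nonzero syndrome implicates
    # (codeword indices: b3=0, b2=1, b1=2, b0=3, q2=4, q1=5, q0=6)
    flip = {0b001: 6, 0b010: 5, 0b011: 1, 0b100: 4, 0b101: 3, 0b110: 0, 0b111: 2}
    s = (s2 << 2) | (s1 << 1) | s0
    bits = list(codeword)
    if s:
        pos = flip[s]
        bits[pos] = "1" if bits[pos] == "0" else "0"
    return "".join(bits[:4])  # the (corrected) dataword

print(encode("0100"))     # -> 0100011 (case 1)
print(decode("0011001"))  # 0111001 with b2 flipped -> 0111 (case 2)
```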

10.56
Example 10.14

We need a dataword of at least 7 bits. Calculate values of k
and n that satisfy this requirement.
Solution
We need to make k = n − m greater than or equal to 7, or
2^m − 1 − m ≥ 7.
1. If we set m = 3, the result is n = 2^3 − 1 = 7 and
k = 7 − 3 = 4, which is not acceptable.
2. If we set m = 4, then n = 2^4 − 1 = 15 and k = 15 − 4 =
11, which satisfies the condition. So the code is
C(15, 11).

10.57
Performance of the Hamming Code
 It can detect a 2-bit error
 It can correct a 1-bit error
 It can be modified in order to correct burst
errors of size N:
 It is based on splitting the burst error among N
codewords
 Convert the frame into N codewords
 Rearrange the codewords in the frame from row-wise
to column-wise transmission (see the example)

10.58
Figure 10.13 Burst error correction using Hamming code

10.59
10-4 CYCLIC CODES

Cyclic codes are special linear block codes with one
extra property. In a cyclic code, if a codeword is
cyclically shifted (rotated), the result is another
codeword.

Topics discussed in this section:


Cyclic Redundancy Check
Hardware Implementation
Polynomials
Cyclic Code Analysis
Advantages of Cyclic Codes
Other Cyclic Codes
10.60
Table 10.6 A CRC (Cyclic Redundancy Check) code with C(7, 4)

10.61
Cyclic Redundancy Check (CRC)
 At the encoder of the sender:
 The k-bit dataword is augmented by appending (n−k) zeros to
its right-hand side
 The n-bit result is fed into the generator
 The generator uses an agreed-upon divisor of size (n-k+1)
 The generator divides the augmented dataword by the divisor
using modulo-2 division
 The quotient of the division is discarded
 The remainder is appended to the dataword to create the
codeword
 At the decoder of the receiver:
 The received codeword is divided by the same divisor to create
the syndrome, which has a size of (n-k) bits
 If all the syndrome bits are zeros, then no error is detected
 Otherwise, an error is detected and the received dataword is
discarded
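The encoder and decoder steps above can be sketched in Python. The generator 1011 (x^3 + x + 1) is an assumed divisor for a C(7,4) CRC, chosen for illustration; this is a bit-string sketch of modulo-2 long division, not the book's implementation:

```python
def mod2_div(dividend: str, divisor: str) -> str:
    """Modulo-2 (XOR) long division; returns the
    (len(divisor) - 1)-bit remainder."""
    n = len(divisor) - 1
    work = list(dividend)
    for i in range(len(work) - n):
        if work[i] == "1":  # only subtract (XOR) when the leading bit is 1
            for j, d in enumerate(divisor):
                work[i + j] = str(int(work[i + j]) ^ int(d))
    return "".join(work[-n:])

def encode(dataword: str, divisor: str) -> str:
    """Append n-k zeros, divide, and replace the zeros with the remainder."""
    n = len(divisor) - 1
    remainder = mod2_div(dataword + "0" * n, divisor)
    return dataword + remainder

def syndrome(codeword: str, divisor: str) -> str:
    """Receiver side: divide the received codeword by the same divisor."""
    return mod2_div(codeword, divisor)

g = "1011"                     # x^3 + x + 1 (assumed generator)
cw = encode("1001", g)
print(cw)                      # -> 1001110
print(syndrome(cw, g))         # -> 000 (no error detected)
print(syndrome("1000110", g))  # corrupted codeword -> nonzero syndrome
```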
10.62
Figure 10.14 CRC encoder and decoder

10.63
Figure 10.15 Division in CRC encoder

10.64
Figure 10.16 Division in the CRC decoder for two cases

10.65
Figure 10.21 A polynomial to represent a binary word

 The 1s and 0s represent the coefficients of the polynomial
 The power of each term represents the position of the bit
 The degree of the polynomial is its highest power
 Adding and subtracting polynomials is done by adding
and subtracting the coefficients of the same-power terms

10.66
Figure 10.22 CRC division using polynomials

10.67
Note

The divisor in a cyclic code is normally called the
generator polynomial or simply the generator.

10.68
Cyclic Code Analysis
 Define the following cyclic code parameters as
polynomials:
 Dataword: d(x)
 Codeword: c(x)
 Generator: g(x)
 Syndrome: s(x)
 Error: e(x)
 Received codeword = c(x) + e(x)
 Received codeword/g(x) = c(x)/g(x) + e(x)/g(x)
 c(x)/g(x) does not have a remainder by definition
 s(x) is the remainder of e(x)/g(x)
 If s(x)≠0, then an error is detected
 If s(x)=0, then either no error has occurred or the error has not
been detected
 If e(x) is divisible by g(x), then e(x) can’t be detected
10.69
Note

In a cyclic code,
if s(x) ≠ 0, one or more bits are corrupted.
If s(x) = 0, either

a. no bit is corrupted, or
b. some bits are corrupted, but the
decoder failed to detect them.

10.70
Note

In a cyclic code, those e(x) errors that are divisible by g(x)
are not caught.

10.71
Single-Bit Error
 A single-bit error is e(x)=xi, where i is the
position of the bit
 If a single-bit error is caught, then xi is not
divisible y g(x) (i.e.; there is a remainder)
 If g(x) has, at least, two terms and the
coefficient of x0 is not zero, then e(x) can’t
be divided by g(x). That is, all single-bit
errors will be caught

10.72
Note

If the generator has more than one term and the coefficient
of x^0 is 1, all single errors can be caught.

10.73
Example 10.15

Which of the following g(x) values guarantees that a
single-bit error is caught? For each case, what is the error
that cannot be caught?
a. x + 1    b. x^3    c. 1
Solution
a. No x^i can be divisible by x + 1. Any single-bit error can
be caught.
b. If i is equal to or greater than 3, x^i is divisible by g(x).
Only single-bit errors in positions i < 3 (i.e., x^0, x^1, x^2)
are caught.
c. All values of i make x^i divisible by g(x). No single-bit
error can be caught. This g(x) is useless.

10.74
Figure 10.23 Representation of two isolated single-bit errors using polynomials

 Two isolated single-bit errors are represented as
e(x) = x^j + x^i = x^i (x^(j−i) + 1) = x^i (x^t + 1), where t = j − i
 If g(x) has at least two terms and the coefficient of x^0 is
not zero, then x^i can't be divided by g(x)
 Therefore, for g(x) to detect e(x), it must not divide (x^t + 1),
where t is the distance between the two error bits and t can
be between 2 and n − 1 (n is the length of the codeword)

10.75
Note

If a generator cannot divide x^t + 1
(t between 2 and n − 1),
then all isolated double errors
can be detected.

10.76
Example 10.16

Find the status of the following generators related to two
isolated, single-bit errors.
a. x + 1    b. x^4 + 1    c. x^7 + x^6 + 1    d. x^15 + x^14 + 1
Solution
a. This is a very poor choice for a generator. Any two
errors next to each other cannot be detected.
b. This generator cannot detect two errors that are four
positions apart.
c. This is a good choice for this purpose.
d. This polynomial cannot divide (x^t + 1) if t is less than
32,768. A codeword with two isolated errors up to
32,768 bits apart can be detected by this generator.

10.77
Note

A generator that contains a factor of (x + 1) can detect
all odd-numbered errors.

10.78
Note

❏ All burst errors with L ≤ r will be
detected (r is the degree of g(x)).

❏ All burst errors with L = r + 1 will be
detected with probability 1 − (1/2)^(r−1).

❏ All burst errors with L > r + 1 will be
detected with probability 1 − (1/2)^r.
10.79
Example 10.17

Find the suitability of the following generators in relation
to burst errors of different lengths.
a. x^6 + 1    b. x^18 + x^7 + x + 1    c. x^32 + x^23 + x^7 + 1

Solution
a. This generator can detect all burst errors with a length
less than or equal to 6 bits.
About 3 out of 100 burst errors with length 7 will slip by,
since 1 − (1/2)^(6−1) = 1 − 1/32 ≈ 0.97.
About 16 out of 1000 burst errors of length 8 or more will
slip by, since 1 − (1/2)^6 = 1 − 1/64 ≈ 0.984.

10.80
Example 10.17 (continued)

b. This generator can detect all burst errors with a length
less than or equal to 18 bits.
About 8 out of 1 million burst errors with length 19 will
slip by; about 4 out of 1 million burst errors of length 20
or more will slip by.

c. This generator can detect all burst errors with a length
less than or equal to 32 bits.
About 5 out of 10 billion burst errors with length 33 will
slip by; about 3 out of 10 billion burst errors of length 34
or more will slip by.

10.81
Note

A good polynomial generator needs to
have the following characteristics:
1. It should have at least two terms.
2. The coefficient of the term x^0 should
be 1.
3. It should not divide x^t + 1, for t
between 2 and n − 1.
4. It should have the factor x + 1.

10.82
Performance of Cyclic Codes
 Detect single-bit errors
 Detect double-bit isolated errors
 Detect an odd number of errors
 Detect burst errors of length less than or
equal to the degree of the generator
 Detect, with a very high probability, burst
errors of length greater than the degree of
the generator
 Can be implemented in either software or hardware

10.83
Table 10.7 Standard polynomials

10.84
10-5 CHECKSUM

The last error detection method we discuss here is
called the checksum. The checksum is used in the
Internet by several protocols although not at the data
link layer. However, we briefly discuss it here to
complete our discussion on error checking.

Topics discussed in this section:


Idea
One’s Complement
Internet Checksum

10.85
Example 10.18

Suppose our data is a list of five 4-bit numbers that we
want to send to a destination. In addition to sending these
numbers, we send the sum of the numbers. For example,
if the set of numbers is (7, 11, 12, 0, 6), we send (7, 11, 12,
0, 6, 36), where 36 is the sum of the original numbers.
The receiver adds the five numbers and compares the
result with the sum. If the two are the same, the receiver
assumes no error, accepts the five numbers, and discards
the sum. Otherwise, there is an error somewhere and the
data are not accepted.

10.86
Example 10.19

We can make the job of the receiver easier if we send the
negative (complement) of the sum, called the checksum.
In this case, we send (7, 11, 12, 0, 6, −36). The receiver
can add all the numbers received (including the
checksum). If the result is 0, it assumes no error;
otherwise, there is an error.

10.87
Example 10.20

How can we represent the number 21 in one's
complement arithmetic using only four bits?

Solution
The number 21 in binary is 10101 (it needs five bits). We
can wrap the leftmost bit and add it to the four rightmost
bits. We have (0101 + 1) = 0110 or 6.

10.88
Example 10.21

How can we represent the number −6 in one's
complement arithmetic using only four bits?

Solution
In one's complement arithmetic, the negative or
complement of a number is found by inverting all bits.
Positive 6 is 0110; negative 6 is 1001. If we consider only
unsigned numbers, this is 9. In other words, the
complement of 6 is 9. Another way to find the
complement of a number in one's complement arithmetic
is to subtract the number from 2^n − 1 (16 − 1 = 15 in this
case).

10.89
Example 10.22

Let us redo Example 10.19 using one's complement
arithmetic. Figure 10.24 shows the process at the sender
and at the receiver. The sender initializes the checksum
to 0 and adds all data items and the checksum (the
checksum is considered as one data item and is shown in
color). The result is 36. However, 36 cannot be expressed
in 4 bits. The extra two bits are wrapped and added with
the sum to create the wrapped sum value 6. In the figure,
we have shown the details in binary. The sum is then
complemented, resulting in the checksum value 9 (15 − 6
= 9). The sender now sends six data items to the receiver
including the checksum 9.
10.90
Example 10.22 (continued)

The receiver follows the same procedure as the sender. It
adds all data items (including the checksum); the result
is 45. The sum is wrapped and becomes 15. The wrapped
sum is complemented and becomes 0. Since the value of
the checksum is 0, this means that the data is not
corrupted. The receiver drops the checksum and keeps
the other data items. If the checksum is not zero, the
entire packet is dropped.
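The wrap-and-complement procedure of Example 10.22 can be sketched with 4-bit words, exactly as in the example (an illustrative sketch, not the book's code):

```python
def wrap4(x: int) -> int:
    """Fold any carry beyond 4 bits back into the sum
    (one's-complement wrapping)."""
    while x > 0xF:
        x = (x & 0xF) + (x >> 4)
    return x

def checksum4(items) -> int:
    """Complement of the wrapped one's-complement sum."""
    return 0xF - wrap4(sum(items))

data = [7, 11, 12, 0, 6]
cks = checksum4(data)
print(cks)  # sum 36 wraps to 6; checksum = 15 - 6 -> 9
# Receiver: wrapped sum of everything, complemented, must be 0.
print(0xF - wrap4(sum(data + [cks])))  # -> 0
```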

10.91
Figure 10.24 Example 10.22


10.92
The Internet Checksum
 The checksum generator subdivides the data unit into
equal segments of 16 bits each
 The segments are added using 1's complement
arithmetic, so that the total is always 16 bits long
 The segments are added with ordinary binary addition to give the
partial sum
 Any carry out is wrapped around and added to the partial sum to
give the sum
 The sum is 1's complemented to give the checksum
 The checksum is appended to the data unit
 The checksum checker performs the same operation
including the checksum:
 If the result is zero, then the data unit has not been altered

10.93
Note

Sender site: (Internet Checksum)


1. The message is divided into 16-bit words.
2. The value of the checksum word is set to 0.
3. All words including the checksum are
added using one’s complement addition.
4. The sum is complemented and becomes the
checksum.
5. The checksum is sent with the data.

10.94
Note

Receiver site: (Internet Checksum)


1. The message (including checksum) is
divided into 16-bit words.
2. All words are added using one’s
complement addition.
3. The sum is complemented and becomes the
new checksum.
4. If the value of checksum is 0, the message
is accepted; otherwise, it is rejected.

10.95
Example 10.23

Let us calculate the checksum for a text of 8 characters
("Forouzan"). The text needs to be divided into 2-byte (16-
bit) words. We use ASCII (see Appendix A) to change each
byte to a 2-digit hexadecimal number. For example, F is
represented as 0x46 and o is represented as 0x6F. Figure
10.25 shows how the checksum is calculated at the sender
and receiver sites. In part a of the figure, the value of the
partial sum for the first column is 0x36. We keep the
rightmost digit (6) and insert the leftmost digit (3) as the
carry in the second column. The process is repeated for
each column. Note that if there is any corruption, the
checksum recalculated by the receiver is not all 0s. We
leave this as an exercise.
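The 16-bit calculation of Example 10.23 can be sketched as follows (an illustrative implementation of the steps described, not the book's code; the hexadecimal result is computed from the ASCII values given in the example):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of 16-bit words, complemented."""
    if len(data) % 2:
        data += b"\x00"  # pad to a whole number of 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # wrap the carry around
    return ~total & 0xFFFF

cks = internet_checksum(b"Forouzan")
print(hex(cks))  # wrapped sum 0x8FC7, complemented -> 0x7038
```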
10.96
Figure 10.25 Example 10.23


10.97
Performance of the Checksum
 Detects all odd numbers of errors
 Detects most even numbers of errors
 If corresponding bits of opposite values in
different data units are affected, the errors will
not be detected
 Generally weaker than the CRC

10.98
