
Section 15

Digital Communications

DEFINITION. Information: Knowledge or intelligence communicated or received.

DEFINITION. Data: Information, for example, numbers, text, images, and sounds,
in a form that is suitable for storage in or processing by a computer.

DEFINITION. Digital Transmission: The transmittal of digital pulses between
two or more points in a communication system.

DEFINITION. Digital Radio: The transmittal of a digitally modulated analog carrier
between two or more points in a communication system.

A. INFORMATION THEORY

1. Information Measure
The information sent from a digital source when the ith message is transmitted is
given by

Ii = logb (1/Pi) = -logb (Pi)          where: Pi = probability of the ith message

2. Average Information (Entropy)


In general, the information content will vary from message to message because the
probabilities of transmitting the messages will not be equal. Consequently, we need an
average information measure for the source, considering all the possible messages we
can send.

H = Σ Pi logb (1/Pi)          (summed over i = 1 to N)

For your information...


If the symbols have the same probability of occurrence (P1 = P2 = P3 = ... = PN), then
the entropy is maximum (H = logb N).

3. Relative Entropy
The ratio of the entropy of a source to the maximum value the entropy could take for
the same source symbol.

HR = H/HMAX HMAX = logb N; N = total number of symbols


4. Redundancy

r = 1 - HR

5. Rate of Information

R = H/T          where: T = Σ Pi ti = average time required to transmit a symbol

Sample Problem:

A telephone touch-tone keypad has the digits 0 to 9, plus the * and # keys. Assume the
probability of sending * or # is 0.005 and the probability of sending 0 to 9 is 0.099 each.
If the keys are pressed at a rate of 2 keys/s, compute the entropy and data rate for this
source.

Solution:

I0 = I1 = I2 = ... = I9 = log2 (1/0.099) = 3.34 bits

I* = I# = log2 (1/0.005) = 7.64 bits

H = Σ Pi Ii = P0I0 + P1I1 + P2I2 + ... + P#I#

Since: P0I0 = P1I1 = P2I2 = ... = P9I9 and P*I* = P#I#

H = 10 {0.099(3.34)} + 2 {0.005(7.64)}
  = 3.38 bits/key

R = H/T = H x (key rate) = 3.38 bits/key x 2 keys/sec = 6.76 bps
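The keypad entropy and data rate can be checked numerically; a minimal sketch in Python (the variable names are mine):

```python
import math

# Probabilities for the 12 touch-tone keys: 0-9 at 0.099 each, * and # at 0.005 each
probs = [0.099] * 10 + [0.005] * 2
assert abs(sum(probs) - 1.0) < 1e-9  # probabilities must sum to 1

# Entropy: H = sum of Pi * log2(1/Pi), in bits/key
H = sum(p * math.log2(1 / p) for p in probs)

# Rate of information: R = H x key rate (2 keys/s)
R = H * 2

print(round(H, 2), round(R, 2))  # 3.38 bits/key, 6.76 bps
```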

Sample Problem:

A source emits seven symbols with the following probabilities and transmission times:

Symbol    Probability of Occurrence P(Xi)    Time required to transmit the symbol xi
x1        0.21                               10 s
x2        0.14                               15 s
x3        0.09                               20 s
x4        0.11                               30 s
x5        0.15                               25 s
x6        0.18                               15 s
x7        0.12                               25 s
Determine the following:

a. Entropy (H)
b. Relative Entropy (HR)
c. Rate of Information

Solution:

A. Entropy (H)

H = Σ Pi log2 (1/Pi) = 0.21 log2 (1/0.21) + 0.14 log2 (1/0.14) + 0.09 log2 (1/0.09)
    + 0.11 log2 (1/0.11) + 0.15 log2 (1/0.15) + 0.18 log2 (1/0.18)
    + 0.12 log2 (1/0.12) = 2.76 bits/symbol

B. Relative Entropy (HR)

HMAX = log2 N = log2 7 = 2.81 bits/symbol          (maximum entropy)

HR = H/HMAX = 2.76/2.81 = 0.98

C. Rate of Information

T = Σ Pi ti = 0.21 (10) + 0.14 (15) + 0.09 (20) + 0.11 (30) + 0.15 (25) + 0.18 (15)
    + 0.12 (25) = 18.75 sec

R = H/T = 2.76 bits / 18.75 sec = 0.147 bps
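The three quantities above can be verified numerically; a short Python sketch:

```python
import math

# (probability, transmit time in s) for symbols x1..x7
symbols = [(0.21, 10), (0.14, 15), (0.09, 20), (0.11, 30),
           (0.15, 25), (0.18, 15), (0.12, 25)]

H = sum(p * math.log2(1 / p) for p, _ in symbols)  # entropy, bits/symbol
H_max = math.log2(len(symbols))                    # maximum entropy = log2 N
H_R = H / H_max                                    # relative entropy
T = sum(p * t for p, t in symbols)                 # average symbol time, s
R = H / T                                          # rate of information, bps

print(round(H, 2), round(H_R, 2), round(T, 2), round(R, 3))
```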

Parameter                    Equation
Code Word Length             l = logb (M)
Average Code Word Length     L = Σ Pi li
Coding Efficiency            η = (Lmin / L) x 100%
Coding Redundancy            r = 1 - η

In the equation Ii = logb (1/Pi):

If b = 2; the unit of information is bits


If b = 10; the unit of information is dit/Hartley/decit
If b = e; the unit of information is nats/Hepits
1 Hartley = 3.32 bits and 1 nat = 1.443 bits
Sample Problem:
Calculate the coding efficiency in representing the 26 letters of the alphabet using a
binary and decimal system.

Solution:

l = log2 (26) = 4.7 bits; 5 bits must be used:  η = (4.7/5) x 100 = 94%
l = log10 (26) = 1.415 dits; 2 dits must be used:  η = (1.415/2) x 100 = 71%

This proves the fact that binary coding is more efficient than decimal coding.
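The comparison can be reproduced for any symbol count and base; a small Python sketch (the helper name `coding_efficiency` is mine):

```python
import math

def coding_efficiency(num_symbols, base):
    """Minimum vs. actual code word length when coding num_symbols in the given base."""
    l_min = math.log(num_symbols, base)  # theoretical minimum code word length
    l_used = math.ceil(l_min)            # whole digits that must actually be used
    return l_min, l_used, l_min / l_used * 100

print(coding_efficiency(26, 2))   # binary:  ~4.70 bits, 5 used, ~94% efficient
print(coding_efficiency(26, 10))  # decimal: ~1.415 dits, 2 used, ~71% efficient
```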

D. CHANNEL CLASSIFICATIONS

1. Lossless Channels
A channel described by a channel matrix with only one non-zero element in each
column.

2. Deterministic Channel
Described by a channel matrix with only one non-zero element in each row.

3. Noiseless Channel
A channel which is both lossless and deterministic.

E. CHANNEL CAPACITY

The maximum rate at which information can be transmitted through a channel

1. For Lossless Channels

C1 = log2 (N)

The lossless channel capacity is equal to the source entropy, and no source information
is lost in transmission.

2. For Deterministic Channel

C2 = log2 (M)

The deterministic channel capacity is equal to the destination entropy. Each member of
the source alphabet is uniquely associated with one, and only one, member of the
destination alphabet.

3. For Noiseless Channel

C3 = log2 (M) = log2 (N)

4. For Additive White Gaussian Noise Channel (AWGN)

C4 = (1/2) log2 (1 + S/N)

Note: The units of the previous equation are on a per-sample basis. Since there are 2BW
(Nyquist Sampling Theorem) samples per unit time, the capacity per unit time can be
written as

C = 2BWC4

5. Shannon Limit For Information Capacity

C = 2BWC4 = BW log2 (1 + S/N)

6. Shannon-Hartley Theorem

C = 2BWC2 = 2BW log2 (M)

Where: C = channel capacity in bps


N = is the number of input symbols
M = is the number of output symbols
BW = channel bandwidth in Hz

ECE Board Exam: April 2003


A binary digital signal is to be transmitted at 10 kbits/s. What is the absolute minimum
bandwidth required to pass the fastest information change undistorted?

Solution:

From deterministic channel capacity;


C = 2BW log2 (M) M = 2 for binary signalling

BW = C/2 = 10000/2 = 5 KHz

ECE Board Exam: April 2003


What is the bandwidth needed to support a capacity of 20000 bits/s (using Shannon's
theorem), when the ratio of signal power to noise power is 200? Also compute the
information density.

Solution:

C = BW log2 (1 + S/N)          BW = C/{log2 (1 + S/N)}

BW = 20000/[log2 (1 + 200)] = 2614 Hz

η = C/BW = log2 (1 + S/N) = 20000 bps/2614 Hz = 7.65 bps/Hz

ECE Board Exam: April 2003


What is the channel capacity for a signal power of 200 W, noise power of 10 W and a
bandwidth of 2 KHz of a digital system? Also calculate the spectrum efficiency?

Solution:

C = BW log2 (1+S/N) = 2x10^3 log2 (1 + 200/10) = 8.78 kbps

η = C/BW = log2 (1+S/N) = 8.78 kbps/2 KHz = 4.39 bps/Hz
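Both board-exam results follow from the same capacity formula; a quick numerical check in Python (using unrounded intermediate values, which give BW ≈ 2614 Hz):

```python
import math

def shannon_capacity(bw_hz, snr):
    """Shannon limit: channel capacity in bps for a given bandwidth and linear S/N."""
    return bw_hz * math.log2(1 + snr)

# Bandwidth needed for 20000 bps at S/N = 200
bw = 20000 / math.log2(1 + 200)
print(round(bw, 1))  # ~2614 Hz

# Capacity and spectral efficiency for S = 200 W, N = 10 W, BW = 2 kHz
C = shannon_capacity(2000, 200 / 10)
print(round(C / 1000, 2), round(C / 2000, 2))  # ~8.78 kbps, ~4.39 bps/Hz
```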

Sample Problem:
Consider a digital source that converts an analog signal to digital form. If an input
analog signal (4 KHz bandwidth) is sampled at 1.25x the Nyquist rate and each sample
is quantized into one of 256 equally likely levels:

a. What is the information rate of this source?
b. Calculate the minimum bit error rate of this source if transmitted over an AWGN
   channel with a BW = 10 KHz and a 20 dB S/N ratio.
c. Find the S/N in dB required for error-free transmission if the BW = 10 KHz.
d. Find the BW required for error-free transmission if the S/N is 20 dB.

Assume that the successive samples are statistically independent.

Solution:

a. Information rate

H = log2 (256) = 8 bits/sample; Nyquist Rate = 2fm (max) = 2 (4 KHz) = 8 KHz


Actual rate = 1.25 x RNYQUIST = 1.25 (8 KHz) = 10 KHz = 10 kilosamples/sec
Information Rate = H x RACTUAL = 8 bits/sample {10x103 samples/sec} = 80 kbps

b. Minimum BER
Channel limitation due to noise:

C = BW log2 (1+S/N) = 10 KHz log2 (1+100) = 66.58 kbps

RINFO - C = 80 kbps - 66.58 kbps = 13.42 kbps

Since RINFO exceeds the channel capacity, the source cannot be transmitted without
errors.

c. S/N required for error free transmission


Channel capacity ≥ Info rate

C ≥ RINFO:  10 KHz log2 (1+S/N) ≥ 80 kbps

S/N ≥ 2^8 - 1 = 255, or 24.1 dB

d. BW required for error free transmission


Channel capacity ≥ Info rate

C ≥ RINFO:  BW log2 (1+100) ≥ 80 kbps

BW ≥ 12.02 KHz

This means to provide an error free transmission the bandwidth must be at least 12.02
KHz or the S/N ratio must be greater than 24.1 dB.
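All four parts can be reproduced numerically; a Python sketch (the 4 KHz analog bandwidth is the one used in the solution above):

```python
import math

# a. information rate: 1.25 x Nyquist rate of a 4 KHz signal, 8 bits/sample
fs = 1.25 * 2 * 4000          # sampling rate, samples/s
H = math.log2(256)            # 8 bits per sample (256 equally likely levels)
R_info = H * fs
print(R_info)                 # 80000.0 bps

# b. capacity of the 10 KHz, 20 dB (S/N = 100) channel
C = 10e3 * math.log2(1 + 100)
print(round(C))               # ~66582 bps < 80 kbps: error-free transmission impossible

# c. S/N required so that C >= 80 kbps with BW = 10 KHz
snr_req = 2 ** (80e3 / 10e3) - 1
print(snr_req, round(10 * math.log10(snr_req), 1))  # 255.0, 24.1 dB

# d. BW required so that C >= 80 kbps with S/N = 100
bw_req = 80e3 / math.log2(1 + 100)
print(round(bw_req))          # ~12015 Hz, i.e. about 12.02 KHz
```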

F. DIGITAL SIGNAL LINE ENCODING FORMAT

Line encoding is the method used for converting a binary information sequence into a
digital signal in a digital communication system.
1. Types of Signalling

i. Unipolar Signalling
Binary 1 is represented by a high level and binary 0 by a zero level.

ii. Polar Signalling


Binary 1s and 0s are represented by equal-magnitude positive and negative levels.

iii. Bipolar (pseudoternary) signalling


Binary 1s are represented by alternately positive and negative values. The
binary 0 is represented by a zero level.

iv. Manchester Signalling (Split-phase encoding)


Each binary 1 is represented by a positive half-bit period pulse followed by a
negative half-bit period pulse. Similarly, a binary 0 is represented by a negative
half-bit period pulse followed by a positive half-bit period pulse.

G. DEFINITION OF DIGITAL ENCODING FORMAT

1. Non-Return to Zero (NRZ)


i. Non-Return to Zero-Level (NRZ-L)
Where L denotes the logic level (positive logic).
1 = High Level
0 = Low Level

ii. Non-Return to Zero-Mark (NRZ-M)


Where M denotes inversion on mark
1 = Transition at beginning of interval
0 = No transition

iii. Non-Return to Zero-Space (NRZ-S)


Where S denotes inversion on space using negative logic
1 = No change
0 = Transition at the beginning of interval

2. Return to Zero (RZ)


1 = Transition from High to Low in middle of interval
0 = Low level

3. Biphase

i. Biphase-Level (Manchester)
1 = Transition from High to Low in middle of interval
0 = Transition from Low to High in middle of interval

ii. Biphase-Mark
Always a transition at the beginning of interval
1 = Transition in middle of interval
0 = No transition in middle of interval

iii. Biphase-Space
Always a transition at beginning of interval
1 = No transition in middle of interval
0 = Transition in middle of interval

4. Differential Manchester
1 = No transition in middle of interval
0 = Transition at beginning of interval

5. Delay Modulation (Miller)


1 = Transition in middle of interval
0 = No transition if followed by 1
Transition at end of interval if followed by 0

6. Bipolar-AMI (Alternate Mark Inversion)


1 = Pulse in first half of bit interval, alternating polarity pulse to pulse
0 = No pulse

The term bipolar has two different conflicting definitions:


In the space communication industry, polar NRZ is sometimes called bipolar
NRZ, or simply bipolar.
In the telephone industry, the term bipolar denotes pseudoternary signalling, as
in the T1 bipolar RZ signalling

So many names...
Polar NRZ is also called NRZ-L.
Bipolar NRZ is called NRZ-M.
Negative Logic Bipolar NRZ is called NRZ-S.
Bipolar RZ is also called BPRZ, RZ-AMI, BPRZ-AMI, AMI or simply bipolar.
Manchester NRZ is also Manchester code, or Biphase-L, for biphase with normal
logic level.
Biphase-M is used for encoding SMPTE time-code data for recording on
videotapes.
BW Efficiency of Popular Line Codes

Signalling    Code Type     Bandwidth    Spectral Efficiency (bps/Hz)
Unipolar      NRZ           fb/2         2
Unipolar      RZ            fb           1
Polar         NRZ           fb/2         2
Polar         RZ            fb           1
Polar         Manchester    fb           1
Bipolar       AMI           fb/2         2

H. SIGNAL ELEMENT VS. DATA ELEMENT


Relation between Bit Rate and Baud Rate

fb = n x fB          where: n = number of data elements per signal element
                            fb = bit rate in bps
                            fB = baud rate in Baud

Sample Problem:

A signal is carrying data in which four data elements are encoded as one signal
element. If the bit rate is 100 kbps, what is the average value of the baud rate?

Solution:

fB = fb / n = (100 kbps) / 4 = 25 kBaud
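The relation fB = fb / n is straightforward to express in code; a one-function sketch (the function name is mine):

```python
def baud_rate(bit_rate_bps, data_elements_per_signal_element):
    """fB = fb / n: baud rate from bit rate and data elements per signal element."""
    return bit_rate_bps / data_elements_per_signal_element

print(baud_rate(100e3, 4))  # 25000.0 Baud, matching the sample problem
```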

I. DIGITAL MODULATION
1. Amplitude Shift Keying (ASK)
Digital Amplitude Modulation is simply a double-sideband, full-carrier amplitude
modulation where the input modulating signal is a binary waveform.

a.k.a Continuous-wave modulation, On-Off Keying (OOK)

Implementation of binary ASK


2. Frequency Shift Keying
Frequency Shift Keying is a form of constant-amplitude angle modulation similar to
conventional frequency modulation except that the modulating signal is a binary signal
that varies between two discrete voltage levels.

Implementation of binary FSK


i. Bandwidth Consideration

BW = 2 (fb + Δf)          where: Δf = frequency deviation
ii. FSK Receiver
Noncoherent FSK Demodulator

Coherent FSK Demodulator

Minimum Shift Keying (MSK)


With MSK, the mark and space frequencies are selected such that they are separated
from the center frequency by an exact odd multiple of one-half of the bit rate.

fm - fs = Δf = n x (fb/2)          where: n = positive odd integer


Sample Problem:
Calculate the frequency shift (Deviation) between mark and space for GSM cellular
radio system that uses Gaussian MSK (GMSK) with a transmission rate of 270.833
kbps.

Solution:

GMSK is a special case of FSK where n = 1

fm - fs = (270.833 kbps) / 2 = 135.4165 KHz

3. Phase Shift Keying (PSK)


Phase Shift Keying is a form of angle-modulated, constant-amplitude digital
modulation similar to conventional phase modulation, except that with PSK the input
signal is a binary digital signal and only a limited number of output phases are possible.

a. Binary Phase Shift Keying (BPSK)


With BPSK, two output phases are possible for a single carrier frequency.

Implementation of binary PSK


Phasor and Constellation Diagram

Truth Table and Minimum Nyquist Bandwidth

Binary Input Output Phase


Logic 0 180
Logic 1 0

fN = fb

Sample Problem:
Determine the minimum Nyquist bandwidth and the Baud rate for a BPSK modulator
with a carrier frequency of 70 MHz and an input bit rate of 10 Mbps.

Solution:

Nyquist BW Baud Rate

fN = fb = 10 MHz fB = 10 MBaud

M-ary Encoding
M-ary is a term derived from the word binary. M is simply a digit that represents the
number of conditions or combinations possible for a given number of binary variables.

N = log2 (M)
N M
1 2
2 4
3 8
4 16
5 32

Where:
N = # of bits per symbol
M = # of output conditions or symbols possible w/ N bits

Sample Problem:
How many bits are needed to address 256 different level combinations?

Solution:

N = log2 (M) = log2 (256) = [log10 256] / [log102] = 8 bits

b. Quarternary Phase Shift Keying (QPSK)


QPSK is a form of angle-modulated, constant-amplitude digital modulation where four
output phases are possible (M = 4).
Implementation of Quarternary PSK

Phasor and Constellation Diagram


Truth Table and Minimum Nyquist Bandwidth

Binary Input
QPSK Output
Q I
0 0 -135
0 1 -45
1 0 +135
1 1 +45

fN = fb / 2

Sample Problem:
Calculate the minimum double-sided Nyquist BW and the Baud for a QPSK modulator
with an input data rate equal to 40 Mbps and a carrier frequency of 110 MHz.

Solution:

Nyquist BW (QPSK):  fN = fb/2 = 40 Mbps/2 = 20 MHz
Baud Rate:          fB = fb/2 = 40 Mbps/2 = 20 MBaud

c. Eight Phase Shift Keying (8-PSK)


8-PSK is another form of angle-modulated, constant-amplitude digital modulation
where eight output phases are possible (M = 8)
Phasor and Constellation Diagram
Truth Table and Minimum Nyquist Bandwidth

Binary Input
8-PSK Output
Q I C
0 0 0 -112.5
0 0 1 -157.5
0 1 0 -67.5
0 1 1 -22.5 fN = fb / 3
1 0 0 +112.5
1 0 1 +157.5
1 1 0 +67.5
1 1 1 +22.5

Sample Problem:

Calculate the minimum double-sided Nyquist BW and the Baud for an 8-PSK modulator
with an input data rate equal to 25 Mbps and a carrier frequency of 45 MHz.

Solution:

Nyquist BW (8-PSK):  fN = fb/3 = 25 Mbps/3 = 8.33 MHz
Baud Rate:           fB = fb/3 = 25 Mbps/3 = 8.33 MBaud

d. Sixteen Phase Shift Keying (16-PSK)


16-PSK is another form of angle-modulated, constant-amplitude digital modulation
where sixteen output phases are possible (N = 4 & M = 16).

Phasor and Constellation Diagram


Truth Table and Minimum Nyquist Bandwidth

Bit Code    Phase       Bit Code    Phase
0 0 0 0     11.25       1 0 0 0     191.25
0 0 0 1     33.75       1 0 0 1     213.75
0 0 1 0     56.25       1 0 1 0     236.25
0 0 1 1     78.75       1 0 1 1     258.75
0 1 0 0     101.25      1 1 0 0     281.25
0 1 0 1     123.75      1 1 0 1     303.75
0 1 1 0     146.25      1 1 1 0     326.25
0 1 1 1     168.75      1 1 1 1     348.75

fN = fb/4

4. Quadrature Amplitude Modulation (QAM)


QAM is a form of digital modulation where the digital information is contained in both
the amplitude and phase of the transmitted carrier.

a. Eight Quadrature Modulation (8-QAM)


8-QAM is an M-ary encoding technique where M = 8. Two amplitudes and four phases
are used to give 8 different symbols.

Phasor and Constellation Diagram


b. Sixteen Quadrature Modulation (16-QAM)
16-QAM is an M-ary encoding technique where M = 16.

3 amplitudes and 12 phases are used to give 16 different symbols


4 amplitudes and 8 phases are used to give 16 different symbols
2 amplitudes and 8 phases are used to give 16 different symbols

Phasor and Constellation Diagram

J. SPECTRAL EFFICIENCY OF DIGITAL SYSTEMS

Spectral Efficiency (Bandwidth Efficiency) or information density is an indication of
how efficiently a certain modulation scheme utilizes its bandwidth.

In terms of Channel Capacity and Bandwidth:        η = C/BW
In terms of Signal to Noise Ratio:                 η = log2 (1 + S/N)
In terms of Bit Rate and Nyquist BW (Baud Rate):   η = fb/fB = fb/BW

Sample Problem:
Determine the bandwidth efficiency for the following modulation schemes:

a. BPSK, fb = 15 Mbps
b. QPSK, fb = 20 Mbps
c. 8-PSK, fb = 28 Mbps
d. 8-QAM, fb = 30 Mbps
e. 16-PSK, fb = 40 Mbps
f. 16-QAM, fb = 42 Mbps
Solution:

a. BPSK:    BW = fb = 15 MHz;        η = fb/BW = 15 Mbps/15 MHz = 1 bps/Hz
b. QPSK:    BW = fb/2 = 10 MHz;      η = fb/BW = 20 Mbps/10 MHz = 2 bps/Hz
c. 8-PSK:   BW = fb/3 = 9.33 MHz;    η = fb/BW = 28 Mbps/9.33 MHz = 3 bps/Hz
d. 8-QAM:   BW = fb/3 = 10 MHz;      η = fb/BW = 30 Mbps/10 MHz = 3 bps/Hz
e. 16-PSK:  BW = fb/4 = 10 MHz;      η = fb/BW = 40 Mbps/10 MHz = 4 bps/Hz
f. 16-QAM:  BW = fb/4 = 10.5 MHz;    η = fb/BW = 42 Mbps/10.5 MHz = 4 bps/Hz

Summary of Various Digital Modulation Systems

System    # Bits Encoded per Symbol    Minimum Nyquist BW (Baud Rate)    Spectral Efficiency (bps/Hz)
BPSK      1                            fb                                1
QPSK      2                            fb/2                              2
8-PSK     3                            fb/3                              3
8-QAM     3                            fb/3                              3
16-PSK    4                            fb/4                              4
16-QAM    4                            fb/4                              4
32-QAM    5                            fb/5                              5
64-QAM    6                            fb/6                              6
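All the table rows follow from fN = fB = fb / log2(M); a small Python check (the helper name is mine):

```python
import math

def nyquist_bw_and_baud(bit_rate, M):
    """Minimum Nyquist bandwidth (= baud rate) for an M-ary scheme: fN = fb / log2(M)."""
    n = math.log2(M)        # bits encoded per symbol = spectral efficiency in bps/Hz
    return bit_rate / n, n  # (fN in Hz = fB in Baud, bits per symbol)

# QPSK at 40 Mbps -> 20 MHz / 20 MBaud; 8-PSK at 25 Mbps -> ~8.33 MHz / ~8.33 MBaud
print(nyquist_bw_and_baud(40e6, 4))
print(nyquist_bw_and_baud(25e6, 8))
```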

K. ERROR PROBABILITIES FOR DIGITAL COMMUNICATION SYSTEMS

1. Coherent Systems with Additive White Gaussian Noise (AWGN) Channels

Pe = Q[ √(2Eb/No) ]

2. Non-Coherent Systems with Additive White Gaussian Noise (AWGN) Channels

Pe = 0.5e-(Eb/2No)
Where: Eb = energy per bit in Joules/bit
No = noise density in W/Hz
Sample Problem:
Calculate the probability of error for a non-coherent FSK system if the carrier power is
10-13 W, bit rate of 30 kbps, BW of 60 KHz and noise power of 10 -14 W.

Solution:

Eb = Po/fb = [10^-13 W] / 30 kbps = 3.3 attojoules/bit

No = N/BW = 10^-14 W / 60 KHz = 166.67x10^-21 W/Hz

Pe = 0.5e^-(Eb/2No) = 0.5e^-[3.3x10^-18 / (2 x 166.67x10^-21)] = 2.51x10^-5

This probability of error means that about 25 bits are expected to be corrupted (in
error) for every 1 million bits transmitted.
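The same computation in Python, with unrounded intermediates (which give Pe ≈ 2.27x10^-5; the rounded Eb = 3.3 aJ figure used in the hand solution gives 2.51x10^-5):

```python
import math

P_carrier = 1e-13   # carrier power, W
fb = 30e3           # bit rate, bps
N = 1e-14           # noise power, W
BW = 60e3           # bandwidth, Hz

Eb = P_carrier / fb # energy per bit, J
No = N / BW         # noise density, W/Hz

# Non-coherent FSK over AWGN: Pe = 0.5 * exp(-Eb / (2*No))
Pe = 0.5 * math.exp(-Eb / (2 * No))
print(Pe)           # ~2.27e-5 with unrounded values
```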

L. ERROR DETECTIONS

1. Redundancy
Redundancy involves transmitting each character twice. If the same character
is not received twice in succession, a transmission error has occurred.

2. Echoplex
Echoplex involves the receiving device echoing the received data back to the
transmitting device. The transmitting operator can view the data, as received and
echoed, making corrections as appropriate.

3. Exact-count encoding
The number of 1s in each character is the same; therefore, a simple count of the
number of 1s received in each character can determine whether a transmission error
has occurred.

4. Parity Checking
Parity checking is by far the most commonly used method for error detection
and correction, as it is used in asynchronous devices such as PCs. Parity involves the
transmitting terminals appending one or more parity bits to the data set in order to
create odd parity or even parity.

Dimensions of Parity Checking

i. Vertical Redundancy Checking (VRC)


VRC entails the appending of a parity bit at the end of each transmitted character or
value to create an odd or even total mathematical bit value.

ii. Longitudinal Redundancy Checking (LRC) or Block Checking Character (BCC)


LRC adds another level of reliability, as data is viewed in a block or data set, as
though the receiving device were viewing data set in matrix format. Also known as
checksum, the LRC is sent as an extra character at the end of each data block.

5. Cyclic Redundancy Checking (CRC)


CRC validates transmission of a set of data, formatted in a block or frame, through
the use of a unique mathematical polynomial know to both transmitter and receiver. The
result of that calculation is appended to the block or frame or text as either a 16- or 32-
bit value.

i. CRC Encoding Procedures

1. Multiply i(x) by xn-k (puts zeroes in (n-k) low-order positions).


2. Divide xn-k by g(x).
xn-k i(x) = g(x) q(x) = r(x)
3. Add remainder r(x) to xn-k i(x)
(puts check bits in the (n-k) low-order positions).
b(x) = xn-k i(x) + r(x)

n = number of bits in a codeword


k = information bits
q(x) = quotient
r(x) = remainder
g(x) = generator polynomial
i(x) = information polynomial
b(x) = transmitted information
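The three encoding steps can be sketched with bitwise modulo-2 division. The 4-bit message and the generator polynomial below are illustrative choices, not taken from the text:

```python
def crc_remainder(data_bits, gen_bits):
    """Modulo-2 long division: remainder of x^(n-k) i(x) divided by g(x)."""
    n_k = len(gen_bits) - 1                  # number of check bits (n - k)
    dividend = list(data_bits) + [0] * n_k   # step 1: multiply i(x) by x^(n-k)
    for i in range(len(data_bits)):          # step 2: divide by g(x), XOR as subtraction
        if dividend[i]:
            for j, g in enumerate(gen_bits):
                dividend[i + j] ^= g
    return dividend[-n_k:]                   # remainder r(x)

# Example: i(x) = 1101, g(x) = x^3 + x + 1 (bits 1011)
data = [1, 1, 0, 1]
gen = [1, 0, 1, 1]
r = crc_remainder(data, gen)
codeword = data + r                          # step 3: b(x) = x^(n-k) i(x) + r(x)
print(r, codeword)                           # [0, 0, 1] [1, 1, 0, 1, 0, 0, 1]
```

The receiver repeats the same division on the received block; a zero remainder indicates no detected error.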

ii. Standard CRC Polynomial Codes

Name Used In
CRC-8 ATM Header error check
CRC-10 ATM CRC
CRC-16 Bisync
CCITT-16 HDLC, XMODEM, V.41
CCITT-32 IEEE 802, V.32

M. ERROR CORRECTIONS

1. Symbol Substitution
With symbol substitution, if a character is received in error, a unique substitute
character is displayed in its place rather than reverting to a higher level of error
correction or displaying the incorrect character.
2. Retransmission (ARQ)
Retransmission, as the name implies, is resending a message when it is received in
error; the receiving terminal automatically calls for retransmission of the entire
message.

3. Forward Error Correction (FEC)


FEC involves the addition of redundant information embedded in the data set
in order that the receiving device can detect errors and correct for them without
requiring a retransmission. The most commonly employed technique is Hamming
Code.

Hamming Distance
The number of bit positions in which two codewords differ is called the Hamming
distance.

2^n ≥ m + n + 1          where: n = number of Hamming bits
                                m = number of bits in the data character

ECE Board Exam: APRIL 2003


How many Hamming Bits would be added to a data block containing 128 bits?

Solution:

Number of Hamming Bits: 2^n ≥ m + n + 1

For n = 6:  2^6 = 64;   m + n + 1 = 128 + 6 + 1 = 135;  64 < 135 (not satisfied)

For n = 7:  2^7 = 128;  m + n + 1 = 128 + 7 + 1 = 136;  128 < 136 (not satisfied)

For n = 8:  2^8 = 256;  m + n + 1 = 128 + 8 + 1 = 137;  256 ≥ 137 (satisfied)

Answer: Hamming Bits = 8

Sample Problem:
Calculate the Hamming distance needed to detect and to correct 3 single-bit errors that
occur during transmission. Also compute the number of Hamming bits for a 23-bit data
string.

Solution:

Required Hamming distance for error detection

Hd = d+1 = 3+1 = 4

Required Hamming distance for error correction

Hd = 2d+1 = (2x3) + 1 = 7

Number of Hamming bits

2^n ≥ m + n + 1

For n = 4:  2^4 = 16;  m + n + 1 = 23 + 4 + 1 = 28;  16 < 28 (not satisfied)

For n = 5:  2^5 = 32;  m + n + 1 = 23 + 5 + 1 = 29;  32 ≥ 29 (satisfied)

Answer: Hamming bits = 5
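The trial-and-error search used in both problems can be automated; a minimal sketch (the function name is mine):

```python
def hamming_bits(m):
    """Smallest n satisfying 2^n >= m + n + 1 for an m-bit data block."""
    n = 1
    while 2 ** n < m + n + 1:
        n += 1
    return n

print(hamming_bits(128))  # 8 (board problem above)
print(hamming_bits(23))   # 5 (sample problem above)
```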

To detect d single-bit errors, you need a Hamming distance of d + 1, because with
such a code there is no way that d single-bit errors can change a valid codeword
into another valid codeword.
Similarly, to correct d single-bit errors, you need a distance 2d + 1 code, because
that way the legal codewords are so far apart that even with d changes the
original codeword is still closer than any other codeword.
Hamming codes can only correct single-bit errors. However, there is a trick that
can be used to permit Hamming codes to correct burst errors.
