Section 15 - Digital Communications
DEFINITION. Data: Information, for example, numbers, text, images, and sounds,
in a form that is suitable for storage in or processing by a computer.
A. INFORMATION THEORY
1. Information Measure
The information sent from a digital source when the ith message is transmitted is
given by
Ii = log2 (1/Pi) bits
where Pi is the probability of transmitting the ith message.
2. Entropy
The average information per message from the source:
H = Σ (i=1 to m) Pi Ii = Σ (i=1 to m) Pi log2 (1/Pi) bits/message
3. Relative Entropy
The ratio of the entropy of a source to the maximum value the entropy could take for
the same source symbols:
HR = H / Hmax = H / log2 (m)
4. Redundancy
r = 1 - HR
5. Rate of Information
R = H / T bits/s
where T is the average time per message:
T = Σ (i=1 to m) Pi Ti
Sample Problem:
A telephone touch-tone keypad has the digits 0 to 9, plus the * and # keys. Assume the
probability of sending * or # is 0.005 and the probability of sending 0 to 9 is 0.099 each.
If the keys are pressed at a rate of 2 keys/s, compute the entropy and data rate for this
source.
Solution:
H = Σ Pi Ii = P0I0 + P1I1 + P2I2 + ... + P#I#
H = 10 [0.099 log2 (1/0.099)] + 2 [0.005 log2 (1/0.005)]
H = 3.303 + 0.076 = 3.38 bits/key
R = 3.38 bits/key x 2 keys/s = 6.76 bits/s
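The entropy and data-rate figures can be checked numerically. This is a quick sketch; the `entropy` helper below is ours, not part of any standard library.

```python
import math

def entropy(probs):
    """Average information per symbol: H = sum(Pi * log2(1/Pi)) bits."""
    return sum(p * math.log2(1 / p) for p in probs)

# 10 digits at 0.099 each, plus * and # at 0.005 each (probabilities sum to 1)
probs = [0.099] * 10 + [0.005] * 2
H = entropy(probs)   # ~3.38 bits/key
R = 2 * H            # 2 keys/s -> ~6.76 bits/s
print(f"H = {H:.2f} bits/key, R = {R:.2f} bits/s")
```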
a. Entropy (H)
b. Relative Entropy (HR)
c. Rate of Information
Solution:
A. Entropy (H)
H = Σ Pi log2 (1/Pi) = 2.76 bits
B. Relative Entropy (HR)
HR = H / log2 (7) = 2.76 / 2.81 = 0.98
C. Rate of Information
T = Σ (i=1 to 7) Pi Ti = 0.21 (10) + 0.14 (15) + 0.09 (20) + 0.11 (30) + 0.15 (25) +
0.18 (15) + 0.12 (25) = 18.75 s
R = H / T = 2.76 / 18.75 = 0.147 bits/s
Parameter                   Equation
Code Word Length            l = logb (M)
Average Code Word Length    L = Σ (i=1 to M) Pi li
Coding Efficiency           η = (Lmin / L) x 100%
Coding Redundancy           1 - η
Solution:
This shows that binary coding is more efficient than decimal coding.
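The binary-versus-decimal comparison can be illustrated with a small sketch. The message count M = 256 below is an assumed example value, not taken from the original problem; the helper names are ours.

```python
import math

def code_word_length(M, b):
    """Smallest whole number of base-b symbols that can label M messages."""
    L = 1
    while b ** L < M:
        L += 1
    return L

def coding_efficiency(M, b):
    """eta = Lmin / L, where Lmin = logb(M) is the ideal fractional length."""
    return math.log(M, b) / code_word_length(M, b)

M = 256  # assumed number of equally likely messages, for illustration only
eta_binary = coding_efficiency(M, 2)    # needs 8 bits; log2(256) = 8 -> 100%
eta_decimal = coding_efficiency(M, 10)  # needs 3 digits; log10(256) ~ 2.41 -> ~80%
print(f"binary: {eta_binary:.1%}, decimal: {eta_decimal:.1%}")
```

With M = 256 the binary code word length lands exactly on log2(M), so binary coding wastes nothing while decimal coding is only about 80% efficient.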
D. CHANNEL CLASSIFICATIONS
1. Lossless Channels
A channel described by a channel matrix with only one non-zero element in each
column.
2. Deterministic Channel
A channel described by a channel matrix with only one non-zero element in each row.
3. Noiseless Channel
A channel which is both lossless and deterministic.
E. CHANNEL CAPACITY
C1 = log2 (N)
The lossless channel capacity is equal to the maximum source entropy, and no source
information is lost in transmission.
C2 = log2 (M)
The deterministic channel capacity is equal to the maximum destination entropy. Each
member of the source set is uniquely associated with one, and only one, member of the
destination alphabet.
Note: The units of the previous equation are on a per-sample basis. Since there are 2BW
(Nyquist sampling theorem) samples per unit time, the capacity per unit time can be
written as
C = 2BWC4
6. Shannon-Hartley Theorem
C = BW log2 (1 + S/N)
where BW is the channel bandwidth in Hz and S/N is the signal-to-noise ratio (absolute,
not in dB).
Sample Problem:
Consider a digital source that converts an analog signal to digital form. The input
analog signal is sampled at 1.25 times the Nyquist rate and each sample is quantized
into one of 256 equally likely levels. Determine:
a. Information rate
b. Minimum BER
Solution:
Channel limitation due to noise:
This means that to provide error-free transmission, the bandwidth must be at least 12.02
kHz or the S/N ratio must be greater than 24.1 dB.
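The bandwidth/SNR trade-off quoted above follows from the Shannon-Hartley theorem. The sketch below is an illustration, not the original solution: the 120 kbps rate and 15 kHz bandwidth are assumed example values (chosen to land near an S/N of 24.1 dB), since the analog signal's bandwidth is not stated in the problem as preserved here.

```python
import math

def shannon_capacity(bw_hz, snr_linear):
    """Shannon-Hartley capacity in bits/s: C = BW * log2(1 + S/N)."""
    return bw_hz * math.log2(1 + snr_linear)

def min_bandwidth(rate_bps, snr_db):
    """Smallest bandwidth supporting rate_bps error-free at the given S/N (dB)."""
    snr = 10 ** (snr_db / 10)
    return rate_bps / math.log2(1 + snr)

def min_snr_db(rate_bps, bw_hz):
    """Smallest S/N (dB) supporting rate_bps error-free in bandwidth bw_hz."""
    return 10 * math.log10(2 ** (rate_bps / bw_hz) - 1)

# Assumed example: a 120 kbps stream in a 15 kHz channel needs S/N >= ~24.1 dB
print(f"{min_snr_db(120e3, 15e3):.1f} dB")
```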
Line encoding is the method used for converting a binary information sequence into a
digital signal in a digital communication system.
1. Types of Signalling
i. Unipolar Signalling
Binary 1 is represented by a high level and binary 0 by a zero level.
3. Biphase
i. Biphase-Level (Manchester)
1 = Transition from High to Low in middle of interval
0 = Transition from Low to High in middle of interval
ii. Biphase-Mark
Always a transition at the beginning of interval
1 = Transition in middle of interval
0 = No transition in middle of interval
iii. Biphase-Space
Always a transition at beginning of interval
1 = No transition in middle of interval
0 = Transition in middle of interval
4. Differential Manchester
Always a transition in middle of interval
1 = No transition at beginning of interval
0 = Transition at beginning of interval
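The Biphase-L rules above can be sketched as a short encoder. This follows the convention given here (1 = high-to-low transition in mid-interval); note that some references, such as IEEE 802.3, define Manchester with the opposite polarity. The function name is ours.

```python
def manchester_encode(bits):
    """Biphase-L (Manchester): each bit becomes two half-interval levels.
    1 -> (+1, -1): high-to-low transition in the middle of the interval
    0 -> (-1, +1): low-to-high transition in the middle of the interval
    """
    out = []
    for b in bits:
        out += [+1, -1] if b else [-1, +1]
    return out

print(manchester_encode([1, 0, 1]))  # [1, -1, -1, 1, 1, -1]
```

Because every bit interval contains a transition, the encoded stream is self-clocking, which is why Manchester trades bandwidth (fb instead of fb/2) for easier receiver synchronization.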
So many names...
Polar NRZ is also called NRZ-L.
Bipolar NRZ is called NRZ-M.
Negative Logic Bipolar NRZ is called NRZ-S.
Bipolar RZ is also called BPRZ, RZ-AMI, BPRZ-AMI, AMI or simply bipolar.
Manchester NRZ is also Manchester code, or Biphase-L, for biphase with normal
logic level.
Biphase-M is used for encoding SMPTE time-code data for recording on
videotapes.
BW Efficiency of Popular Line Codes

Signalling   Code Type    Bandwidth   Spectral Efficiency (bps/Hz)
Unipolar     NRZ          fb/2        2
             RZ           fb          1
Polar        NRZ          fb/2        2
             RZ           fb          1
             Manchester   fb          1
Bipolar      AMI          fb/2        2
Sample Problem:
A signal is carrying data in which 4 data elements are encoded as one signal element. If
the bit rate is 100 kbps, what is the average value of the baud rate?
Solution:
Baud = bit rate / data elements per signal element = 100 kbps / 4 = 25 kbaud
I. DIGITAL MODULATION
1. Amplitude Shift Keying (ASK)
Digital Amplitude Modulation is simply a double-sideband, full-carrier amplitude
modulation where the input modulating signal is a binary waveform.
BW = 2 (fb + Δf)
ii. FSK Receiver
Noncoherent FSK Demodulator
fN = fb
Sample Problem:
Determine the minimum Nyquist bandwidth and the Baud rate for a BPSK modulator
with a carrier frequency of 70 MHz and an input bit rate of 10 Mbps.
Solution:
fN = fb = 10 MHz fB = 10 MBaud
M-ary Encoding
M-ary is a term derived from the word binary. M is simply a digit that represents the
number of conditions or combinations possible for a given number of binary variables.
N = log2 (M)
N    M
1    2
2    4
3    8
4    16
5    32

Where:
N = number of bits per symbol
M = number of output conditions or symbols possible with N bits
Sample Problem:
How many bits are needed to address 256 different level combinations?
Solution:
N = log2 (M) = log2 (256) = 8 bits
Binary Input    QPSK Output
Q  I            Phase
0  0            -135°
0  1            -45°
1  0            +135°
1  1            +45°

fN = fb / 2
Sample Problem:
Calculate the minimum double-sided Nyquist BW and the Baud for a QPSK modulator
with an input data rate equal to 40 Mbps and a carrier frequency of 110 MHz.
Solution:
fN = fb / 2 = 40 Mbps / 2 = 20 MHz
fB = fb / 2 = 20 MBaud
Binary Input    8-PSK Output
Q  I  C         Phase
0  0  0         -112.5°
0  0  1         -157.5°
0  1  0         -67.5°
0  1  1         -22.5°
1  0  0         +112.5°
1  0  1         +157.5°
1  1  0         +67.5°
1  1  1         +22.5°

fN = fb / 3
Sample Problem:
Calculate the minimum double-sided Nyquist BW and the Baud for an 8-PSK modulator
with an input data rate equal to 25 Mbps and a carrier frequency of 45 MHz.
Solution:
fN = fb / 3 = 25 Mbps / 3 = 8.33 MHz
fB = fb / 3 = 8.33 MBaud
Sample Problem:
Determine the bandwidth efficiency for the following modulation scheme
a. BPSK, fb = 15 Mbps
b. QPSK, fb = 20 Mbps
c. 8-PSK, fb = 28 Mbps
d. 8-QAM, fb = 30 Mbps
e. 16-PSK, fb = 40 Mbps
f. 16-QAM, fb = 42 Mbps
Solution:
Bandwidth efficiency Bη = fb / minimum bandwidth = log2 (M) bps/Hz
a. BPSK: 1 bps/Hz      b. QPSK: 2 bps/Hz
c. 8-PSK: 3 bps/Hz     d. 8-QAM: 3 bps/Hz
e. 16-PSK: 4 bps/Hz    f. 16-QAM: 4 bps/Hz
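The PSK problems above all follow one pattern: the minimum Nyquist bandwidth, baud, and bandwidth efficiency come directly from the bits-per-symbol count N = log2(M). A small sketch (helper name is ours):

```python
import math

def mpsk_parameters(fb_bps, M):
    """Minimum double-sided Nyquist bandwidth, baud, and bandwidth
    efficiency for an M-ary PSK modulator with input bit rate fb."""
    N = int(math.log2(M))            # bits per symbol
    f_nyquist = fb_bps / N           # minimum Nyquist bandwidth, Hz
    baud = fb_bps / N                # symbol rate, symbols/s
    efficiency = fb_bps / f_nyquist  # bps/Hz, equals N for PSK
    return f_nyquist, baud, efficiency

# The 8-PSK sample problem above: 25 Mbps input
fN, fB, eff = mpsk_parameters(25e6, 8)
print(f"fN = {fN/1e6:.2f} MHz, baud = {fB/1e6:.2f} MBaud, {eff:.0f} bps/Hz")
```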
1. Coherent Systems with Additive White Gaussian Noise (AWGN) Channels
Pe = Q(√(2Eb/No))
2. Non-Coherent Systems with Additive White Gaussian Noise (AWGN) Channels
Pe = 0.5e-(Eb/2No)
Where: Eb = energy per bit in J (carrier power divided by bit rate)
No = noise density in W/Hz (noise power divided by bandwidth)
Sample Problem:
Calculate the probability of error for a non-coherent FSK system if the carrier power is
10^-13 W, the bit rate is 30 kbps, the bandwidth is 60 kHz, and the noise power is 10^-14 W.
Solution:
Eb = C / fb = 10^-13 / 30,000 = 3.33 x 10^-18 J
No = N / BW = 10^-14 / 60,000 = 1.67 x 10^-19 W/Hz
Eb/No = 20, so Pe = 0.5e^-(20/2) = 0.5e^-10 = 2.27 x 10^-5
This probability of error means that about 23 bits will be expected to be corrupted (in
error) for every 1 million bits transmitted.
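The non-coherent FSK error probability can be checked numerically. A sketch using the Eb = C/fb and No = N/BW relations above (function name is ours):

```python
import math

def pe_noncoherent_fsk(carrier_w, bit_rate_bps, noise_w, bw_hz):
    """Pe = 0.5 * exp(-Eb / (2*No)) for non-coherent FSK in AWGN,
    with Eb = C / fb and No = N / BW."""
    eb = carrier_w / bit_rate_bps  # energy per bit, J
    no = noise_w / bw_hz           # noise density, W/Hz
    return 0.5 * math.exp(-eb / (2 * no))

pe = pe_noncoherent_fsk(1e-13, 30e3, 1e-14, 60e3)
print(f"Pe = {pe:.2e}")  # about 2.27e-05, i.e. ~23 errors per million bits
```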
L. ERROR DETECTIONS
1. Redundancy
Redundancy involves transmitting each character twice. If the same character
is not received twice in succession, a transmission error has occurred.
2. Echoplex
Echoplex involves the receiving device echoing the received data back to the
transmitting device. The transmitting operator can view the data, as received and
echoed, making corrections as appropriate.
3. Exact-count encoding
The number of 1s in each character is the same, and therefore a simple count
of the number of 1s received in each character can determine whether a transmission
error has occurred.
4. Parity Checking
Parity checking is by far the most commonly used method for error detection,
as it is used in asynchronous devices such as PCs. Parity involves the transmitting
terminal appending one or more parity bits to the data set in order to create odd
parity or even parity.
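Parity generation is just a count of 1s. A minimal sketch (the function name is ours):

```python
def parity_bit(data_bits, even=True):
    """Parity bit appended by the transmitter: for even parity the total
    count of 1s (data + parity) is even; for odd parity it is odd."""
    bit = sum(data_bits) % 2       # 0 if the data already has an even count
    return bit if even else bit ^ 1

# ASCII 'A' = 1000001 has two 1s: even parity appends 0, odd parity appends 1
assert parity_bit([1, 0, 0, 0, 0, 0, 1], even=True) == 0
assert parity_bit([1, 0, 0, 0, 0, 0, 1], even=False) == 1
```

The receiver recomputes the same count over data plus parity; a mismatch flags an error, though any even number of flipped bits goes undetected.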
Name        Used In
CRC-8       ATM header error check
CRC-10      ATM CRC
CRC-16      Bisync
CCITT-16    HDLC, XMODEM, V.41
CCITT-32    IEEE 802, V.32
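As an illustration of CRC computation, here is a bitwise sketch of one common CCITT-16 variant (polynomial x^16 + x^12 + x^5 + 1, i.e. 0x1021, with initial value 0xFFFF, often catalogued as CRC-16/CCITT-FALSE). The protocols listed in the table use the same polynomial but differ in initial value and bit ordering, so this is a representative example rather than a drop-in implementation of any one of them.

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with polynomial 0x1021, MSB-first, init 0xFFFF."""
    for byte in data:
        crc ^= byte << 8                       # bring the next byte into the high bits
        for _ in range(8):
            if crc & 0x8000:                   # top bit set: shift and subtract the poly
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Standard check value for this variant over the ASCII string "123456789"
print(hex(crc16_ccitt(b"123456789")))  # 0x29b1
```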
M. ERROR CORRECTIONS
1. Symbol Substitution
With symbol substitution, if a character is received in error, rather than revert
to a higher level of error correction or display the incorrect character, the receiving
device substitutes a special error character in its place.
2. Retransmission (ARQ)
Retransmission, as the name implies, means resending a message when it is
received in error; the receive terminal automatically calls for retransmission of the
entire message.
Hamming Distance
The number of bit positions in which two code words differ is called the Hamming
distance.
Sample Problem:
Calculate the Hamming distance to detect and correct 3 single-bit errors that occurred
during transmission. Also compute the number of Hamming bits for a 23-bit data
string.
Solution:
To detect 3 errors: Hd = d + 1 = 3 + 1 = 4
To detect and correct 3 errors: Hd = 2d + 1 = (2 x 3) + 1 = 7
Number of Hamming bits n for an m-bit data string: 2^n ≥ m + n + 1
For n = 4: 2^4 = 16 < 23 + 4 + 1 = 28 (not enough)
For n = 5: 2^5 = 32 ≥ 23 + 5 + 1 = 29 (sufficient)
Therefore, n = 5 Hamming bits are required.
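Both calculations above can be sketched in a few lines (helper names are ours):

```python
def hamming_bits(m):
    """Smallest n satisfying 2**n >= m + n + 1 (n Hamming bits for m data bits)."""
    n = 1
    while 2 ** n < m + n + 1:
        n += 1
    return n

def hamming_distance_required(d, correct=False):
    """Hd = d + 1 to detect d errors; Hd = 2d + 1 to also correct them."""
    return 2 * d + 1 if correct else d + 1

print(hamming_bits(23))                            # 5
print(hamming_distance_required(3))                # 4 (detect 3 errors)
print(hamming_distance_required(3, correct=True))  # 7 (detect and correct)
```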