
Unit 4 - Waveform Coding

Low pass sampling - Aliasing - Signal reconstruction - Quantization - Uniform & non-uniform quantization - Quantization noise - Logarithmic companding - PCM, DPCM, ADPCM - Delta modulation and ADM principles - Linear predictive coding

SAMPLING PROCESS:

When a message signal happens to be analog in nature, as in a speech signal or a
video signal, then it has to be converted into digital form before it can be transmitted
by digital means.
Sampling is the first process performed in analog-to-digital conversion; two other processes, quantizing and encoding, are also involved in this conversion.
In the sampling process a continuous-time signal is converted into a discrete-time signal by measuring the signal at periodic instants of time.
For sampling process to be of practical utility, we should choose proper sampling rate
so that the discrete time signal resulting from the process uniquely defines the
original continuous time signal.
Sampling theorem - statement for low pass signals:
1) A band limited signal of finite energy which has no frequency components
higher than W Hz is completely described by specifying the values of the signal at
instants of time separated by 1/2W secs.
2) A band limited signal of finite energy which has no frequency components
higher than W Hz may be completely recovered from a knowledge of its samples
taken at the rate of 2W samples per sec.
Sampling theorem for Band pass signals:
The Band pass signal x(t) whose maximum bandwidth is 2W can be completely
represented into and recovered from its samples if it is sampled at the minimum rate
of twice the bandwidth.

Proof:

Let g(t) be an arbitrary signal of finite energy, sampled uniformly every Ts seconds to give the sequence g(nTs), where 'n' is an integer.
Ts = sampling period ; fs = 1/Ts = sampling rate.
Sampling with an ideal impulse train in this way is called instantaneous or ideal sampling.

The impulse train is represented by

$$\delta_{T_s}(t) = \sum_{n=-\infty}^{\infty} \delta(t - nT_s) \qquad (1.1)$$

where δ(t − nTs) is the delta function positioned at time t = nTs.

The signal obtained after sampling is

$$g_\delta(t) = \sum_{n=-\infty}^{\infty} g(t)\,\delta(t - nT_s) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s) \qquad (1.2)$$

where g(nTs) is the instantaneous amplitude of g(t) at the instant t = nTs.



Taking the Fourier transform of equation 1.2 we get

$$G_\delta(f) = F.T.\left[\,g(t)\sum_{n=-\infty}^{\infty}\delta(t - nT_s)\right]$$

Since a product in the time domain corresponds to convolution in the frequency domain,

$$G_\delta(f) = F.T.[\,g(t)\,] * F.T.\left[\sum_{n=-\infty}^{\infty}\delta(t - nT_s)\right] \qquad (1.3)$$

The Fourier transform of the impulse train is

$$F.T.\left[\sum_{n=-\infty}^{\infty}\delta(t - nT_s)\right] = f_s \sum_{n=-\infty}^{\infty}\delta(f - nf_s) \qquad (1.4)$$

where fs = 1/Ts. The Fourier transform of g(t) is G(f). Hence the Fourier transform of the sampled version is

$$G_\delta(f) = G(f) * f_s\sum_{n=-\infty}^{\infty}\delta(f - nf_s) = f_s\sum_{n=-\infty}^{\infty} G(f) * \delta(f - nf_s)$$

Applying the sifting (shifting) property of the impulse function we get

$$G_\delta(f) = f_s\sum_{n=-\infty}^{\infty} G(f - nf_s) \qquad (1.5)$$

Eqn. 1.5 states that the process of uniformly sampling a continuous time signal of
finite energy results in a periodic spectrum with a period equal to the sampling rate.

G(f) is the Fourier Transform of original signal g(t).


G(f − nfs) = G(f) at f = 0, ±fs, ±2fs, ±3fs, ...
i.e. the same spectrum appears at f = 0, ±fs, ±2fs, ±3fs, ..., which means a periodic spectrum with period equal to fs is generated in the frequency domain because of sampling g(t) in the time domain.

Eqn. 1.5 can be written as

$$G_\delta(f) = f_s G(f) + f_s G(f \pm f_s) + f_s G(f \pm 2f_s) + f_s G(f \pm 3f_s) + f_s G(f \pm 4f_s) + \cdots \qquad (1.6)$$

i.e. every term in the sum is the same spectrum shifted to a multiple of the sampling frequency fs.

Fig. a) Spectrum of a strictly bandlimited signal g(t). b) Spectrum of the sampled version of g(t) for a sampling period Ts = 1/2W.
Eqn. 1.5 can also be written as

$$G_\delta(f) = f_s G(f) + f_s\sum_{\substack{n=-\infty \\ n\ne 0}}^{\infty} G(f - nf_s) \qquad (1.7)$$

The first term is the spectrum without sampling (scaled by fs) and the terms under the summation represent spectra repeated at multiples of the sampling frequency fs.

The Fourier transform of the continuous-time signal is

$$G(f) = \int_{-\infty}^{\infty} g(t)\, e^{-j2\pi f t}\, dt$$

The Fourier transform of the discrete-time (sampled) signal is

$$G_\delta(f) = \sum_{n=-\infty}^{\infty} g(nT_s)\, e^{-j2\pi f nT_s} \qquad (1.8)$$

If the signal g(t) is strictly bandlimited such that there are no frequency components higher than W Hz, then

$$G(f) = 0 \quad \text{for } |f| \ge W$$

Let fs = 2W, or Ts = 1/2W, where W is the maximum frequency.
From eqn. 1.7, G(f) is reproduced at f = 0, ±fs, ±2fs, ±3fs, etc.
Since fs = 2W, fs − W = W and fs + W = 3W, i.e. the periodic spectra G(f) just touch each other.

Two assumptions made are:

1) G(f) = 0 for |f| ≥ W
2) fs = 2W (sampling rate)

The equation

$$G_\delta(f) = f_s G(f) + f_s\sum_{\substack{n=-\infty \\ n\ne 0}}^{\infty} G(f - nf_s)$$

can be written as

$$f_s G(f) = G_\delta(f) - f_s\sum_{\substack{n=-\infty \\ n\ne 0}}^{\infty} G(f - nf_s)$$

$$G(f) = \frac{1}{f_s}\, G_\delta(f) - \sum_{\substack{n=-\infty \\ n\ne 0}}^{\infty} G(f - nf_s) \qquad (1.9)$$

With fs = 2W,

$$G(f) = \frac{1}{2W}\, G_\delta(f) - \sum_{\substack{n=-\infty \\ n\ne 0}}^{\infty} G(f - nf_s) \qquad (1.10)$$

Within the band −W ≤ f ≤ W the shifted spectra G(f − nfs), n ≠ 0, do not overlap the baseband spectrum, so

$$G(f) = \frac{1}{2W}\, G_\delta(f) \quad \text{for } -W \le f \le W$$

Using the Fourier transform of the sampled signal from eqn. 1.8,

$$G(f) = \frac{1}{2W}\sum_{n=-\infty}^{\infty} g(nT_s)\, e^{-j2\pi f nT_s}$$

With Ts = 1/fs = 1/2W,

$$G(f) = \frac{1}{2W}\sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) e^{-j2\pi f n/2W} = \frac{1}{2W}\sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) e^{-j\pi f n/W} \quad \text{for } -W \le f \le W \qquad (1.11)$$

g(t) can be obtained by taking the inverse Fourier transform (IFT) of G(f). Hence

$$g(t) = \text{IFT}[G(f)] = \text{IFT}\left[\frac{1}{2W}\sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) e^{-j\pi f n/W}\right] \qquad (1.12)$$

Eqn. 1.12 says that the continuous-time signal g(t) is completely represented by the sample values g(nTs) for −∞ < n < ∞.

Reconstruction of the signal from its samples:

$$g(t) = \text{IFT}\left[\frac{1}{2W}\sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) e^{-j\pi f n/W}\right]$$

By the definition of the IFT, and since the spectrum is confined to −W ≤ f ≤ W,

$$g(t) = \int_{-W}^{W} \frac{1}{2W}\sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) e^{-j\pi f n/W}\, e^{\,j2\pi f t}\, df \qquad (1.13)$$

Interchanging the order of summation and integration,

$$g(t) = \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \frac{1}{2W}\int_{-W}^{W} e^{\,j2\pi f\left(t - \frac{n}{2W}\right)}\, df$$

$$g(t) = \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \frac{1}{2W}\left[\frac{e^{\,j2\pi f\left(t - \frac{n}{2W}\right)}}{j2\pi\!\left(t - \frac{n}{2W}\right)}\right]_{-W}^{W} = \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \frac{1}{2W}\cdot\frac{e^{\,j2\pi W\left(t - \frac{n}{2W}\right)} - e^{-j2\pi W\left(t - \frac{n}{2W}\right)}}{j2\pi\!\left(t - \frac{n}{2W}\right)}$$

$$g(t) = \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \frac{\sin(2\pi Wt - n\pi)}{2\pi Wt - n\pi} = \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \frac{\sin \pi(2Wt - n)}{\pi(2Wt - n)}$$

Hence the reconstructed signal is

$$g(t) = \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \mathrm{sinc}(2Wt - n) \quad \text{for } -\infty < n < \infty \qquad (1.14)$$
The above equation is called the interpolation formula for reconstructing g(t) from its samples g(nTs) or g(n/2W):

$$g(t) = g(0)\,\mathrm{sinc}(2Wt) + g\!\left(\tfrac{1}{2W}\right)\mathrm{sinc}(2Wt - 1) + g\!\left(\tfrac{1}{W}\right)\mathrm{sinc}(2Wt - 2) + \cdots$$

Thus each delayed sinc function is weighted by the amplitude of the sample value at the corresponding instant.
The sampling rate of 2W samples per second for a signal of bandwidth W Hz is called the Nyquist rate, and 1/2W (in seconds) is called the Nyquist interval.
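As a concrete illustration of the interpolation formula, here is a short sketch (assuming NumPy and an arbitrary 60 Hz test tone, neither of which come from the notes) that rebuilds a bandlimited signal from its Nyquist-rate samples:

```python
import numpy as np

W = 100.0           # assumed signal bandwidth in Hz (highest frequency component)
fs = 2 * W          # Nyquist rate (samples per second)
Ts = 1 / fs         # Nyquist interval

# Example band-limited signal: a 60 Hz cosine (below W)
g = lambda t: np.cos(2 * np.pi * 60.0 * t)

n = np.arange(-50, 51)          # sample indices
samples = g(n * Ts)             # g(n/2W)

def reconstruct(t):
    """Interpolation formula (eq. 1.14): sum of sample-weighted, delayed sinc pulses."""
    # np.sinc(x) = sin(pi*x)/(pi*x), which matches sinc(2Wt - n) as used above
    return np.sum(samples * np.sinc(2 * W * t - n))

t0 = 0.0123                     # arbitrary instant between sample points
print(g(t0), reconstruct(t0))   # the two values should be close (finite sum truncation)
```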
Limitations of sampling theorem - Aliasing:
In practice g(t) is not strictly bandlimited, so when fs < 2W aliasing is produced. When the sampling rate is less than 2W, a high-frequency component in the spectrum takes on the identity of a low-frequency component. This interference of high-frequency components with low-frequency components in the spectrum of the sampled version is aliasing.
To avoid aliasing:
Low-pass antialiasing filter is used prior to sampling to attenuate the
high frequency components.
The filtered signal is sampled at a rate higher than the Nyquist rate. When fs > 2W, there is no aliasing and the signal can be reconstructed faithfully. The reconstruction filter is a low-pass filter whose pass band extends from −W to W.

Fig. Spectrum of a signal under various sampling conditions

Aliasing: When the sampling rate is less than 2W, a high-frequency component in the spectrum takes on the identity of a low-frequency component. This interference of high-frequency components with low-frequency components in the spectrum of the sampled version is called aliasing.
To avoid aliasing, there are 2 ways in which anti-aliasing filter can be used.

Fig. Prefiltering: a) signal b) sampled signal

i) Pre-filtering antialiasing filter: In this method the analog signal is prefiltered
so that the new maximum frequency f’m is reduced to fs/2 or less. Therefore the
sampled spectrum does not show aliasing.
ii) Post-filtering antialiasing filter: In this method, the aliased terms are
eliminated after sampling with the help of a low-pass filter operating on the
sampled data. The cut-off frequency f’’m for this filter should be less than (fs – fm).
Disadvantage of both antialiasing filters is that some information is always lost
due to filtering.
All realizable filters require a non-zero bandwidth for transition between Passband
and stopband which is known as Transition Bandwidth.
If the sampling rate is decreased, a very narrow transition bandwidth is required. A better trade-off is to make the transition bandwidth between 10% and 20% of the signal bandwidth, which gives a practical Nyquist sampling rate of

$$f_s \ge 2.2 f_m$$

Aliasing can also be avoided by oversampling the signal and then filtering with digital filters instead of analog filters, because analog equipment is more expensive.
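The effect described above can also be seen numerically. A minimal sketch, assuming NumPy and an arbitrary 700 Hz tone sampled at 1000 Hz (values chosen only for illustration):

```python
import numpy as np

f_signal = 700.0          # tone frequency in Hz
fs = 1000.0               # sampling rate in Hz: below the Nyquist rate (2 * 700 = 1400 Hz)

n = np.arange(16)
samples = np.cos(2 * np.pi * f_signal * n / fs)

# The alias frequency folds back into [0, fs/2]: |f_signal - fs| = 300 Hz here
alias = np.cos(2 * np.pi * abs(f_signal - fs) * n / fs)

print(np.allclose(samples, alias))   # True: the 700 Hz tone is indistinguishable from 300 Hz
```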
Problems
1. Consider the analog signal x(t) = 3 cos 50πt + 10 sin 300πt − cos 100πt. What is the Nyquist rate for this signal?
Soln:
2πf₁t = 50πt ; f₁ = 25 Hz
2πf₂t = 300πt ; f₂ = 150 Hz
2πf₃t = 100πt ; f₃ = 50 Hz
Since the maximum frequency present in x(t) is f₂ = 150 Hz,
Nyquist rate = 2W = 2 × 150 = 300 Hz
The sampling frequency fs should be greater than the Nyquist rate, i.e., fs > 300 Hz.
2. Find the Nyquist rate and Nyquist interval for the following signals.

a) m(t) = (1/2) cos(4000πt) cos(1000πt)

m(t) is generated by the multiplication of two signals. Using
cos α cos β = (1/2)[cos(α − β) + cos(α + β)]
with α = 4000πt and β = 1000πt,
m(t) = (1/2)(1/2)[cos(4000πt − 1000πt) + cos(4000πt + 1000πt)]
     = (1/4)[cos(3000πt) + cos(5000πt)]
ω₁t = 3000πt ⇒ 2πf₁ = 3000π ; f₁ = 1500 Hz
ω₂t = 5000πt ⇒ 2πf₂ = 5000π ; f₂ = 2500 Hz
Nyquist interval = 1/(2W) = 1/(2 × 2500) = 0.2 msec
Nyquist rate = 2W = 2 × 2500 = 5000 Hz

b) m(t) = (1/πt) sin(500πt)
The signal contains only one frequency: ωt = 500πt
2πf = 500π ; f = 250 Hz
Nyquist interval = 1/(2W) = 1/(2 × 250) = 2 msec
Nyquist rate = 2W = 2 × 250 = 500 Hz
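The two worked problems can be checked with a small helper; the function name and the idea of passing the already-identified component frequencies are illustrative assumptions only:

```python
def nyquist(frequencies_hz):
    """Return (Nyquist rate in Hz, Nyquist interval in seconds) for a signal
    whose highest frequency component is max(frequencies_hz)."""
    w = max(frequencies_hz)
    return 2 * w, 1 / (2 * w)

# Problem 1: components at 25 Hz, 150 Hz and 50 Hz
print(nyquist([25, 150, 50]))      # (300, 0.00333...)  -> 300 Hz, ~3.33 ms

# Problem 2a: product of cosines expands to 1500 Hz and 2500 Hz components
print(nyquist([1500, 2500]))       # (5000, 0.0002)     -> 5000 Hz, 0.2 ms
```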
Types of sampling:
1) Ideal / instantaneous / Impulse sampling
2) Natural / Chopper sampling
3) Flat top / rectangular pulse sampling
NATURAL SAMPLING/CHOPPER SAMPLING
A more practical model of sampling is to use rectangular pulses of finite width instead of ideal impulses. This type of sampling is called natural sampling because the top of each pulse in the sampled sequence xs(t) follows (retains) the shape of the original signal during that pulse interval.
Choose Ts ≤ 1/(2W) to satisfy the Nyquist criterion.
SAMPLER IMPLEMENTATION:
The implementation of sampler is done with a sample and hold circuit in which a
switch and storage mechanism is done by using transistor and capacitor. If under
sampling is done i.e., fs < fNyq, then aliasing occurs.

Implementation is done with a sample-and-hold circuit using n-channel Field Effect Transistors (FETs) and a capacitor, as shown in the figure. The input signal is applied to the source S of FET T1 and the holding capacitor is connected to the drain D of T1; the sampling pulse is applied to the gate G of T1. During the positive pulse the switch T1 is closed and the capacitor charges up during the sampling period. During the negative pulse the switch is opened and the value stored in the capacitor appears at the output. The sampling pulse applied to the gate G of T1 is also coupled through the FET to the capacitor and appears as a low-amplitude dc offset on the output line. To compensate for this, FET T2 is provided, which produces a similar low-amplitude dc offset that is cancelled by applying a sampling pulse of opposite polarity.

The flat-top sampled signal generated can be expressed as

$$s(t) = \sum_{n=-\infty}^{\infty} x(nT_s)\, h(t - nT_s)$$

where h(t) is a rectangular pulse of finite duration.

RECONSTRUCTION OF MESSAGE SIGNAL FROM THE SAMPLED SIGNAL


In order to recover the original signal without distortion, the sampled signal is passed through a sample-and-hold circuit, a low-pass filter designed to remove the components of the spectrum at multiples of the sampling rate fs, and an equalizer that compensates for the amplitude distortion introduced by the hold operation, as shown in the figure below.

Aperture Effect
The use of flat-top samples introduces amplitude distortion. This distortion is called the aperture effect, and it is compensated by the use of the equalizer at the receiver.

QUANTISATION: (rounding off)
The sampled signal is still analog though discretised in time. Therefore it has to be
passed through quantiser to be discretised in amplitude.
Amplitude quantization is defined as the process of transforming the sample amplitudes of a message signal into discrete amplitudes taken from a finite set of possible amplitudes. Each input sample value is quantized to the nearest digital level.

Types of quantization:

Uniform quantization: step or difference between two quantization levels remains


constant over the complete amplitude range.

Further, the quantizer can be of either the midtread or the midrise type, as shown in the figure below. The midtread type is so called because the origin lies in the middle of a tread of the staircase-like graph. The midrise type is so called because the origin lies in the middle of a rising part of the staircase-like graph.

Fig. Midtread and midrise quantizer characteristics
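A minimal sketch of a uniform mid-rise quantizer, assuming NumPy and an input normalized to (−Vp, +Vp); the function name and parameter choices are illustrative:

```python
import numpy as np

def midrise_quantize(x, vp=1.0, n_bits=3):
    """Uniform mid-rise quantizer: L = 2**n_bits levels, step q = 2*vp / L."""
    L = 2 ** n_bits
    q = 2 * vp / L
    # Clip to the quantizer range, then map each sample to the centre of its interval
    x = np.clip(x, -vp, vp - 1e-12)
    return (np.floor(x / q) + 0.5) * q

x = np.linspace(-1, 1, 9)
print(midrise_quantize(x))          # quantized values lie midway between level boundaries
```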

Non-uniform quantization: the assigned step size varies over the amplitude range. The use of a non-uniform quantizer is equivalent to passing the baseband signal through a compressor and then applying the compressed signal to a uniform quantizer; i.e. the "compressed" signal is quantized uniformly.
At the receiver, an inverse compression characteristic, called "expansion", is employed to avoid signal distortion. The entire process is called 'COMPANDING'.
Companding
The variation of the input signal cannot be predicted, hence it becomes difficult to implement PCM using uniform quantization. Therefore the signal is amplified at low signal levels and attenuated at high signal levels. After this process uniform quantization is used, which is equivalent to having a larger step size at high signal levels and a smaller step size at low signal levels. At the receiver, the reverse process is used. The compression of the signal at the transmitter and its expansion at the receiver is called companding.

Types of companding
1) μ-law companding
2) A-law companding

For the compression, two laws are adopted: the μ-law in North America and the A-law in Europe.

μ-law:

$$|v| = \frac{\log_e(1 + \mu|m|)}{\log_e(1 + \mu)}$$

The μ-law is approximately linear at low input levels, corresponding to μ|m| << 1, and approximately logarithmic at high input levels, corresponding to μ|m| >> 1.

A-law:

$$|v| = \begin{cases} \dfrac{A|m|}{1 + \log_e A}, & 0 \le |m| \le \dfrac{1}{A} \\[2ex] \dfrac{1 + \log_e(A|m|)}{1 + \log_e A}, & \dfrac{1}{A} \le |m| \le 1 \end{cases}$$

For a given value of μ (or A), the reciprocal slope of the compression curve, which defines the quantum step, is given by the derivative of |m| with respect to |v|.

The characteristics of these two laws are shown in the figure. The typical values used in practice are μ = 255 and A = 87.6. After quantization, the different quantized levels have to be represented in a form suitable for transmission. This is done via an encoding process.
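A brief sketch of μ-law companding, assuming NumPy and inputs normalized to [−1, 1] (the helper names are illustrative; only the μ = 255 value comes from the text):

```python
import numpy as np

MU = 255.0   # standard North American value

def mu_law_compress(m):
    """|v| = ln(1 + mu*|m|) / ln(1 + mu), sign preserved."""
    return np.sign(m) * np.log1p(MU * np.abs(m)) / np.log1p(MU)

def mu_law_expand(v):
    """Inverse characteristic applied at the receiver (expansion)."""
    return np.sign(v) * ((1 + MU) ** np.abs(v) - 1) / MU

m = np.array([-1.0, -0.1, -0.01, 0.0, 0.01, 0.1, 1.0])
v = mu_law_compress(m)
print(np.round(v, 3))                      # small amplitudes are boosted before quantization
print(np.allclose(mu_law_expand(v), m))    # True: companding alone is lossless; loss comes from quantizing v
```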

Derivation of Quantization error / Noise power:

The difference between the input signal and the output signal is called quantization
error or quantization noise.
Consider an input signal ‘m’ of continuous amplitude in the range
(-mmax, mmax).

The step size of the quantizer is

$$q = \frac{V_p - (-V_p)}{L} = \frac{2V_p}{L}$$

where L is the number of levels of the quantizer.
If m(t) is normalized so that its minimum and maximum values are ±1, then Vp = +1, −Vp = −1 and the step size is q = 2/L.
If the step size is small, the number of levels is large, and the quantization error 'P' can then be assumed to be a uniformly distributed random variable.

A random variable X uniformly distributed over an interval (a, b) has the PDF

$$f_X(x) = \begin{cases} 0, & x \le a \\ \dfrac{1}{b-a}, & a < x \le b \\ 0, & x > b \end{cases}$$

Similarly, the probability density function of the quantization error P (for a step size q) can be written as

$$f_P(p) = \begin{cases} 0, & p \le -q/2 \\ \dfrac{1}{q}, & -q/2 < p \le q/2 \\ 0, & p > q/2 \end{cases}$$
 For this to be true, the incoming signal should not overload the quantizer.

The mean of the quantization error is zero, and its variance σP² equals the mean-square value:

$$\sigma_P^2 = E[P^2] = \int_{-q/2}^{q/2} p^2 f_P(p)\, dp$$

Substituting the value of fP(p),

$$\sigma_P^2 = \frac{1}{q}\int_{-q/2}^{q/2} p^2\, dp = \frac{q^2}{12}$$

Hence the quantization noise power is

$$N_q = \sigma_P^2 = \frac{q^2}{12}$$
Derivation for maximum signal to quantization noise ratio:

The number of bits per sample R and the number of quantization levels L are related as L = 2^R, i.e. R = log₂L. Hence

$$q = \frac{2V_p}{L} = \frac{2V_p}{2^R}$$

The quantization noise power is

$$N_q = \frac{q^2}{12} = \frac{4V_p^2}{12 L^2} = \frac{V_p^2}{3L^2}$$

Since q = 2Vp/L, we have q² = 4Vp²/L², i.e. Vp² = L²q²/4.
Taking the signal power of a full-range signal as Vp² = L²q²/4, the maximum signal-to-quantization-noise ratio is

$$SNR_q = \frac{\text{Signal power}}{\text{Noise power}} = \frac{L^2 q^2/4}{q^2/12} = 3L^2$$
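A quick numeric check of the SNR_q = 3L² result above, expressed in dB (the chosen bit depths are arbitrary examples):

```python
import math

def max_snr_db(n_bits):
    """Maximum signal-to-quantization-noise ratio SNR_q = 3 * L**2, expressed in dB."""
    L = 2 ** n_bits
    return 10 * math.log10(3 * L ** 2)

for r in (8, 10, 12, 16):
    print(r, "bits ->", round(max_snr_db(r), 1), "dB")
# 8 bits -> 52.9 dB, 16 bits -> 101.1 dB: roughly 6.02 dB gained per additional bit
```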

TEMPORAL WAVEFORM ENCODING
Pulse Code Modulation (PCM)
In PCM the message signal is represented in the form of coded pulses by
representing in discrete time and amplitude.
Basic operations involved in PCM: Sampling, Quantizing, Encoding at the
transmitter side and Regeneration, decoding and filtering at the receiver side
Sampling
 Sampling is done with a train of narrow rectangular pulses.
 Sampling rate should satisfy Nyquist criteria fs>2W.
 Continuously varying signal of some finite duration is reduced to a limited no. of
discrete values per second.
 LPF: acts as antialiasing filter which removes frequencies greater than ‘W’ Hz.
 Signal obtained after sampling is discrete in time.

The Elements of PCM system


Quantizing
The sampled signal is quantized so that the resulting signal is discrete in both time and amplitude.
Companding in PCM
The variation of the input signal cannot be predicted, hence it becomes difficult to implement PCM using uniform quantization. Therefore the signal is amplified at low signal levels and attenuated at high signal levels. After this process uniform quantization is used, which is equivalent to having a larger step size at high signal levels and a smaller step size at low signal levels. At the receiver, the reverse process is used. The compression of the signal at the transmitter and its expansion at the receiver is called companding.
Types of companding
1. μ-law companding
2. A-law companding

For the compression, two laws are adopted: the μ-law in North America and the A-law in Europe.

μ-law:

$$|v| = \frac{\log_e(1 + \mu|m|)}{\log_e(1 + \mu)}$$

The μ-law is approximately linear at low input levels, corresponding to μ|m| << 1, and approximately logarithmic at high input levels, corresponding to μ|m| >> 1.

A-law:

$$|v| = \begin{cases} \dfrac{A|m|}{1 + \log_e A}, & 0 \le |m| \le \dfrac{1}{A} \\[2ex] \dfrac{1 + \log_e(A|m|)}{1 + \log_e A}, & \dfrac{1}{A} \le |m| \le 1 \end{cases}$$

For a given value of μ (or A), the reciprocal slope of the compression curve, which defines the quantum step, is given by the derivative of |m| with respect to |v|.
The characteristics of these two laws are shown in the figure. The typical values used in practice are μ = 255 and A = 87.6. After quantization, the different quantized levels have to be represented in a form suitable for transmission. This is done via an encoding process.
Encoding:
Encoding makes the transmitted signal more robust to noise and interference and puts it in a form suitable for transmission. It translates the discrete set of sample values into a more appropriate form of signal; the presence or absence of a pulse is a symbol.
Each discrete level is represented by a code element (symbol). In a binary code, we have only two symbols, representing two levels of the signal.
The electrical representation of a code is done by assigning a waveform (or a pulse) to each symbol, as shown in Figure 1.15 for the binary case.

 Some well-known line codes that can be used for the electrical representation of a
binary data stream, are: (a) Unipolar NRZ. (b) Polar NRZ signaling. (c) Unipolar RZ
signaling. (d) Bipolar RZ signaling.
(e) Split-phase or Manchester code. NRZ: non-return to zero. RZ: return to
zero.
a) Unipolar NRZ (Non Return to Zero):
 This line code is also known as on-off signalling.

 Transmitting a pulse of amplitude A for the duration of the symbol represents
binary ‘1’
 Switching off the pulse represents binary ‘0’.
 Disadvantage: wastage of power due to transmitted DC level.
b) Polar NRZ (Non Return to Zero):
 Transmitting a pulse of amplitude +A represents binary ‘1’.
 Transmitting a pulse of amplitude –A represents binary ‘0’.
 Disadvantage: power spectrum of the signal is large near zero frequency.
c) Unipolar RZ (Return to Zero):
 Transmitting a pulse of amplitude A for half-symbol width represents binary
‘1’.
 Switching off the pulse represents binary ‘0’.
 Disadvantage: It requires 3dB more power than polar return to zero signalling.
d) Bipolar RZ (Return to Zero):
Transmitting positive and negative pulses of equal amplitude +A and –A
respectively for half-symbol width represents symbol ‘1’.
Switching off the pulse represents binary ‘0’.
Also called as Alternate Mark Inversion (AMI) signaling.
Power spectrum of the transmitted signal has no DC component.
e) Split phase (Manchester code):
 Symbol ‘1’ is represented by a positive pulse of amplitude ‘A’ followed by a
negative pulse of amplitude –A, with both pulses being half-symbol wide.
 Symbol ‘0’ is represented by reversing the polarities of these two pulses.
 This code suppresses the DC component.
Differential encoding method is used to encode information in terms
of signal transitions. In practice a transition designates symbol 0,
while no transition designates symbol 1 (see Figure 1.17). The
original binary information is recovered simply by comparing the
polarity of adjacent binary symbols to establish whether or
not a transition has occurred.
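A compact sketch, assuming NumPy, amplitude A = 1 and two samples per bit (so that RZ and Manchester transitions are visible), that generates a few of the line codes listed above for an arbitrary bit pattern:

```python
import numpy as np

def line_code(bits, kind):
    """Return a waveform with 2 samples per bit for a few of the codes above."""
    out = []
    last_one_positive = False          # used for bipolar RZ (AMI) alternation
    for b in bits:
        if kind == "unipolar_nrz":
            out += [b, b]
        elif kind == "polar_nrz":
            out += [1, 1] if b else [-1, -1]
        elif kind == "unipolar_rz":
            out += [1, 0] if b else [0, 0]
        elif kind == "bipolar_rz":     # AMI: successive 1s alternate in polarity
            if b:
                last_one_positive = not last_one_positive
                out += [1 if last_one_positive else -1, 0]
            else:
                out += [0, 0]
        elif kind == "manchester":     # 1: +A then -A; 0: -A then +A
            out += [1, -1] if b else [-1, 1]
    return np.array(out)

bits = [1, 0, 1, 1, 0, 1]
for k in ("unipolar_nrz", "polar_nrz", "unipolar_rz", "bipolar_rz", "manchester"):
    print(f"{k:13s}", line_code(bits, k))
```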

Regeneration
 A regenerative repeater (see Figure below) consists of (1) an equalizer, (2) a timing
circuit, and (3) a decision-making device. Regenerative repeaters are used to
reconstruct the PCM signal.

The equalizer shapes the received pulses so as to undo the effects of the transmission channel, removing the amplitude and phase distortions introduced by the channel characteristics and restoring the pulses to their original shape.
The timing circuit is used to recover the clock of the transmitted symbols (pulses),
thus providing a periodic pulse train which is then used in the decision-making
process.
The function of the decision-making device is to detect the different pulses based
on some threshold information. Each sample is compared with a threshold value. If
it exceeds threshold, ‘1’ is transmitted otherwise ‘0’ is transmitted thereby removing
distortion and noise.
The purpose of a regenerative repeater is to clean the PCM signal during its
transmission through a channel.
The regenerated signal differs from original signal for 2 reasons.
1) Due to presence of channel noise and interference, wrong decisions are
made and bit errors are introduced into the regenerated signal.
2) If the spacing between received pulses deviates from its assigned value, a
jitter is introduced into the regenerated pulse position, thereby causing
distortion.

Decoding:
After regeneration i.e reshape and clean up, the clean pulses are regrouped into
code words and decoded into a quantized PAM signal.
The decoding process generates a pulse, the amplitude of which is the linear sum of
all pulses in the code word.
Filtering:
The message signal is recovered by passing the decoder output to Low pass
reconstruction filter whose cut-off frequency is equal to message bandwidth ‘W’.
Virtues, Limitations and Modifications of PCM
Advantages:
1) Robustness to channel noise and interference.
2) Efficient regeneration of the coded signal along the transmission path.
3) Efficient exchange of increased channel bandwidth for improved signal-to-noise ratio.
4) Easy multiplexing and
5) Secured communication by using special modulation schemes or encryption.
Disadvantage:
1) PCM system is complex.
Modifications of PCM:
1) It can be modified to delta modulation.
2) With the help of data compression, redundancy can be removed. This is used
in DPCM.
Differential Pulse Code Modulation (DPCM)
Reason to Use DPCM
In PCM the samples of a signal are highly correlated with each other, because a signal does not change rapidly: the value from the present sample to the next sample does not differ by a large amount. Adjacent samples of the signal therefore carry almost the same information, with only a small difference. When these samples are encoded by a standard PCM system, the resulting encoded signal contains redundant information.

In the figure shown above, the sample at 4Ts is represented by the level 100 and the sample at 5Ts by the level 101. The difference between these samples lies only in the last bit; the first 2 bits are redundant. Reducing these redundant bits reduces the overall bit rate and increases the transmission efficiency. This type of modulation scheme is called DPCM.

DIFFERENTIAL PULSE CODE MODULATION


Transmitter of DPCM

DPCM works on the principle of prediction. The value of the present sample is predicted from the past samples. The prediction may not be exact, but it is very close to the actual sample.
The figure above shows the block diagram of the DPCM transmitter. The sampled signal is denoted by x(nTs) and the predicted value by x̂(nTs).
The comparator finds the difference between the actual sample value x(nTs) and the predicted sample value x̂(nTs). This is known as the prediction error, denoted by e(nTs), and is given by

$$e(nT_s) = x(nT_s) - \hat{x}(nT_s) \qquad (1)$$

The predicted value is produced by a prediction filter. The quantizer output signal and the previous prediction are added and given as the input to the prediction filter; this signal is called xq(nTs). This makes the prediction track the actual sampled signal more and more closely.
The quantizer output can be written as

$$e_q(nT_s) = e(nT_s) + q(nT_s)$$

where q(nTs) is the quantization error. From the figure we have

$$x_q(nT_s) = \hat{x}(nT_s) + e_q(nT_s)$$

Applying the value of eq(nTs) in the above equation we get

$$x_q(nT_s) = \hat{x}(nT_s) + e(nT_s) + q(nT_s)$$

Substituting the value of e(nTs) from equation (1) in the above equation we get

$$x_q(nT_s) = \hat{x}(nT_s) + x(nT_s) - \hat{x}(nT_s) + q(nT_s) = x(nT_s) + q(nT_s)$$

Hence the quantized version of the signal is the sum of the original sample value and the quantization error. The quantization error can be positive or negative. Irrespective of the properties of the predictor, the quantized signal at the predictor input differs from the original input only by the quantization error.
Accordingly, if the prediction is good, the variance of the prediction error will be smaller than the variance of x(nTs), so that a quantizer with a given number of representation levels can be adjusted to produce a quantizing error with a smaller variance than would be possible if the input sample x(nTs) were quantized directly.
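A minimal DPCM sketch, assuming NumPy, a first-order predictor x̂(nTs) = xq((n−1)Ts) (a simplification of the general prediction filter in the notes) and a uniform quantizer for the prediction error:

```python
import numpy as np

def quantize(e, step=0.05):
    """Uniform quantizer applied to the prediction error."""
    return step * np.round(e / step)

def dpcm_encode_decode(x, step=0.05):
    """First-order DPCM: predict each sample from the previous reconstructed sample."""
    x_hat = 0.0                      # predicted value (initial prediction)
    eq = np.empty_like(x)            # quantized prediction errors (what is transmitted)
    xq = np.empty_like(x)            # reconstructed samples (receiver output)
    for n in range(len(x)):
        e = x[n] - x_hat             # e(nTs) = x(nTs) - x_hat(nTs)
        eq[n] = quantize(e, step)    # eq(nTs) = e(nTs) + q(nTs)
        xq[n] = x_hat + eq[n]        # xq(nTs) = x_hat(nTs) + eq(nTs) = x(nTs) + q(nTs)
        x_hat = xq[n]                # next prediction uses the reconstructed sample
    return eq, xq

t = np.arange(200) / 8000.0                       # oversampled 200 Hz tone
x = np.sin(2 * np.pi * 200.0 * t)
eq, xq = dpcm_encode_decode(x)
print(np.max(np.abs(x - xq)))                     # reconstruction error stays within ~step/2
```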

Receiver of DPCM

The decoder first reconstructs the quantized error signal. The predictor output and the quantized error are summed to give the quantized version of the original signal.
The advantages of DPCM over PCM are that the redundant bits are reduced and hence the required transmission bandwidth is lower.
SNR of DPCM:
Consider a signal x(t) of peak amplitude Vp applied to both PCM and DPCM
systems. The signal power in both the systems are assumed to be same.
The quantizer in PCM quantizes the signal x(t) whereas the quantizer in DPCM
quantizes the difference signal d(t) and not x(t).
Let dp be the peak amplitude of the difference signal d(t).
If the number of levels L is the same in both cases, then the step size in DPCM is reduced by a factor of Vp/dp. So the quantization noise reduces by a factor of (Vp/dp)², and the SNR of DPCM increases by the same factor over the PCM SNR.

The processing gain of DPCM is

$$G_p = \left(\frac{V_p}{d_p}\right)^2$$
Slope-overload Noise in DPCM:
Slope overload can be prevented if the sampling frequency is chosen to be greater than the threshold value

$$f_s \ge \frac{2\pi A_m f_m}{\Delta\,(2^N - 1)}$$

where Am and fm are the peak amplitude and frequency of the input, Δ is the step size and N is the number of bits per sample.

Adaptive DPCM (ADPCM)


In PCM, the standard bit rate is 64 kbits/s. The aim of all the variants of PCM
is to reduce the number of bits used in the encoding process by removing
redundancies. Adaptive DPCM (ADPCM) is a scheme that permits the coding of
speech (voice) signals at 32 kbits/s through the combined use of adaptive
quantization and adaptive prediction.
The coding technique which uses both adaptive quantization and adaptive prediction is called ADPCM.
Adaptive quantization is obtained by varying the step size according to the rms value of the input signal. The step size of the adaptive quantizer is given by

$$\Delta(nT_s) = \varphi\, \hat{\sigma}_x(nT_s)$$

where φ is a constant and σ̂x(nTs) is an estimate of the standard deviation σx(nTs) of the input.
ADAPTIVE QUANTIZATION
In ADPCM adaptive quantization can be performed using adaptive quantization
with forward estimation (AQF) or adaptive quantization with backward estimation
(AQB).

Adaptive Quantisation with Forward Estimation[AQF]

In AQF, unquantized samples of the input message signal are used to derive the estimate σ̂x(nTs). The samples are first applied to a buffer, and the buffer releases them after the estimate has been obtained. The estimate is therefore free from quantization noise.
The major disadvantage of AQF is that it requires transmission of the level estimate through a separate channel and it introduces a processing delay. To eliminate these disadvantages we go for AQB.
Adaptive Quantization with Backward Estimation[AQB]

The diagram above shows AQB. Here the samples of the quantizer output are used to derive the backward estimate. The estimate therefore contains quantization error, and the scheme involves a nonlinear feedback system.

ADAPTIVE PREDICTION:
In ADPCM adaptive prediction can be performed using adaptive prediction with
forward estimation (APF) or adaptive prediction with backward estimation (APB).
Adaptive Prediction with Forward Estimation[APF]

Unquantized samples of the input speech signal are buffered and then released after the computation of the M predictor coefficients. The main disadvantage of this system is that it needs a separate channel for transmitting the calculated estimate; to eliminate this disadvantage we go for APB.

Adaptive Prediction with Backward Estimation[APB]

In APB the quantized samples are used to derive the backward Estimate.

Delta Modulation
PCM transmits all the bits which are used to code a sample, hence the signaling rate and transmission channel bandwidth are large. To overcome this, delta modulation is used, in which only one bit is transmitted per sample.
In DM, the message signal is over-sampled to purposely increase the correlation between adjacent samples.

Principle of DELTA MODULATION:


The difference between the current sample and the previous sample is quantized to only two levels, namely +Δ or −Δ. If the current sample is greater than the previous sample, a step of +Δ is assigned; otherwise −Δ is assigned.
In DM the quantization levels are represented by two symbols: 0 for −Δ and 1 for +Δ, i.e. when the step is decreased '0' is transmitted and when the step is increased '1' is transmitted. Thus for each sample only one binary bit is transmitted. In fact the coding process is performed on the error signal.

The main advantage of DM is its simplicity as shown by Figure below
Transmitter of DM:
The transmitter of a DM system (Figure a) is given by a comparator, a one-bit
quantizer, an accumulator, and an encoder.

 Comparator computes the difference between its two inputs. The sampled input
signal x[nTs] and staircase approximated signal x[(n-1)Ts] are subtracted to get
error signal e[nTs].
e[nTs] = x[nTs] − x[(n−1)Ts]
where e[nTs] = error at the present sample,
x[nTs] = sampled value of x(t),
x[(n−1)Ts] = last sample approximation of the staircase waveform.
e[nTs] is quantized into only two levels, ±Δ. The quantizer output is given by eq[nTs] = Δ·sgn(e[nTs]), where sgn is the sign function. Depending on the sign of e[nTs], the one-bit quantizer produces an output step of +Δ or −Δ. The quantizer output is fed to the accumulator. The previous sample approximation x[(n−1)Ts] is restored by delaying one sample period Ts. The quantity eq[nTs] is then used to compute the new quantized level xq[nTs] = x[(n−1)Ts] + eq[nTs].
The encoder assigns 0 for −Δ and 1 for +Δ.
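A minimal sketch of the DM transmitter and receiver described above, assuming NumPy and an oversampled test tone; the receiver's low-pass filter is omitted for brevity:

```python
import numpy as np

def dm_encode(x, delta):
    """One-bit delta modulation: transmit 1 for +delta, 0 for -delta."""
    bits = np.empty(len(x), dtype=int)
    approx = 0.0                        # staircase approximation x[(n-1)Ts]
    for n in range(len(x)):
        e = x[n] - approx               # e[nTs] = x[nTs] - x[(n-1)Ts]
        bits[n] = 1 if e >= 0 else 0    # encoder: 1 for +delta, 0 for -delta
        approx += delta if bits[n] else -delta
    return bits

def dm_decode(bits, delta):
    """Accumulator at the receiver rebuilds the staircase waveform."""
    steps = np.where(bits == 1, delta, -delta)
    return np.cumsum(steps)

fs = 8000.0                             # oversampling raises correlation between samples
t = np.arange(400) / fs
x = 0.5 * np.sin(2 * np.pi * 100.0 * t)
bits = dm_encode(x, delta=0.05)
xq = dm_decode(bits, delta=0.05)
print(np.max(np.abs(x - xq)))           # small if delta avoids slope overload and granularity
```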

Receiver of DM
The receiver of a DM system, shown in the above diagram, consists of a decoder, an accumulator, and a low-pass filter. The accumulator generates the staircase approximated output, which is delayed by one sampling period Ts and then added to the input. If the input bit is '1', a step of +Δ is added to the previous (delayed) output; if the input bit is '0', one step of Δ is subtracted from the delayed signal. The out-of-band quantization noise in the high-frequency range is rejected by the LPF.

DM is subject to two types of quantization error: Slope overload distortion


and granular noise (hunting) (see Figure below).
Slope overload distortion is due to the fact that the staircase approximation xq(t) cannot closely follow the actual curve of the message signal x(t). In order for xq(t) to follow x(t) closely, it is required that

$$\frac{\Delta}{T_s} \ge \max\left|\frac{dx(t)}{dt}\right|$$

be satisfied. Otherwise, the step size Δ is too small for the staircase approximation xq(t) to follow x(t).

In contrast to slope-overload distortion, granular noise occurs when Δ is too large relative to the local slope characteristics of x(t). Granular noise is similar to quantization noise in PCM.
A large Δ is needed to accommodate rapid variations of x(t) and so reduce slope-overload distortion, while a small Δ is needed for slowly varying x(t) to reduce granular noise. The optimum Δ can only be a compromise between the two cases.
To satisfy both cases, adaptive DM is needed, where the step size Δ can be adjusted in accordance with the input signal x(t).
Advantages:
1) Signalling rate and transmission bandwidth is small.
2) Transmitter and receiver implementation is simple.
3) No A/D converter is used.
SNR Calculation of DM:

In DM, the modulator output changes by ±Δ only. So the maximum amplitude of a sinusoid that can be tracked without slope overload is

$$A_{max} = \frac{\Delta f_s}{2\pi f_m}$$

The error caused by granular noise alone lies in the range (−Δ, Δ), which is similar to the quantization noise in a uniform quantizer where the error varies from −q/2 to q/2.
If e(t) is the error due to granular noise, then

$$e(t) = m(t) - \hat{m}_q(t), \qquad \overline{e^2} = \frac{\Delta^2}{3}$$

The power spectrum of e(t) is

$$G_e(f) = \frac{\overline{e^2}}{2 f_s}, \qquad |f| \le f_s$$

This granular noise, after passing through a low-pass filter of bandwidth W, gives rise to a noise power Ng:

$$N_g = \int_{-W}^{W} G_e(f)\, df = \frac{\Delta^2 W}{3 f_s}$$

The SNR for a DM system suffering from only granular noise is

$$SNR_g = \frac{S_m}{N_g} = \frac{3 f_s}{\Delta^2 W}\, S_m$$

where Sm is the power of the message signal m(t).
The mean-square slope of the signal, in terms of its rms value σm and rms bandwidth Wrms, is

$$\overline{\dot{m}(t)^2} = (2\pi \sigma_m W_{rms})^2$$

The slope loading factor is

$$s = \frac{\text{maximum DM slope}}{\text{rms signal slope}} = \frac{\Delta f_s}{2\pi W_{rms}\, \sigma_m}$$

so that

$$SNR = \frac{3 f_s}{\Delta^2 W}\, \sigma_m^2 = \frac{3 f_s}{W}\cdot\frac{\sigma_m^2}{\left(\dfrac{2\pi W_{rms}\,\sigma_m\, s}{f_s}\right)^{2}} = \frac{3 f_s^3}{4\pi^2 s^2 W_{rms}^2 W}$$

The minimum transmission bandwidth is

$$B_{T,min} = \frac{f_s}{2}$$

The bandwidth expansion factor is

$$b = \frac{\text{transmission bandwidth}}{\text{signal bandwidth}} = \frac{B_T}{W} = \frac{f_s}{2W}$$

so that

$$SNR = \frac{6}{\pi^2}\left(\frac{W}{W_{rms}}\right)^2 \frac{b^3}{s^2}$$

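A brief worked evaluation of the final SNR expression, assuming illustrative values that do not come from the notes (W = 4 kHz, Wrms = 1.5 kHz, fs = 64 kHz, s = 1):

```python
import math

W, W_rms = 4000.0, 1500.0      # assumed signal bandwidth and rms bandwidth (Hz)
fs, s = 64000.0, 1.0           # assumed sampling rate (Hz) and slope loading factor

b = fs / (2 * W)               # bandwidth expansion factor
snr = (6 / math.pi ** 2) * (W / W_rms) ** 2 * b ** 3 / s ** 2
print(b, round(10 * math.log10(snr), 1), "dB")   # b = 8 -> about 33.5 dB
```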
Adaptive Delta Modulation(ADM)


The performance of the delta modulator is improved by making the step size of the modulator assume a time-varying form. During steep segments of the input the step size is increased; when the input signal is varying slowly the step size is reduced.
Thus the step size is adapted to the input signal, resulting in ADM. Adaptive delta modulation (ADM) is a modification of DM in which the step size is adapted to the slope (variation) of the message signal.
Transmitter: The step size δ(nTs) is constrained to lie between a minimum and a maximum value:

$$\delta_{min} \le \delta(nT_s) \le \delta_{max}$$

Figure: Transmitter of ADM


The upper limit δmax controls slope overload distortion and the lower limit δmin controls granular noise. The adaptation rule for δ(nTs) is expressed as

$$\delta(nT_s) = g(nT_s)\,\delta(nT_s - T_s)$$

The time-varying multiplier g(nTs) depends on the present binary output b(nTs) and its M previous values. The algorithm starts with the initial step size δstart = δmin.
Depending on the one-bit quantizer's output, the logic for step-size control increases or decreases the step size according to a certain rule: if the one-bit quantizer's output is high (1), the step size is doubled for the next sample, otherwise it is reduced, as sketched below.
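A minimal ADM sketch, assuming NumPy and the common adaptation variant in which the step doubles when two successive output bits agree and halves when they differ; the notes describe the rule only loosely, so treat this as one possible realization:

```python
import numpy as np

def adm_encode(x, d_min=0.01, d_max=0.5):
    """Adaptive delta modulation with a double/halve step-size rule."""
    bits = np.empty(len(x), dtype=int)
    approx, delta, prev_bit = 0.0, d_min, 1
    for n in range(len(x)):
        bits[n] = 1 if x[n] >= approx else 0
        # Adapt: consecutive equal bits suggest slope overload -> double the step;
        # alternating bits suggest the granular region -> halve the step.
        delta = delta * 2 if bits[n] == prev_bit else delta / 2
        delta = min(max(delta, d_min), d_max)        # clamp to [d_min, d_max]
        approx += delta if bits[n] else -delta
        prev_bit = bits[n]
    return bits

t = np.arange(500) / 8000.0
x = np.sin(2 * np.pi * 200.0 * t)                    # a tone a fixed-step DM may struggle with
print(adm_encode(x)[:20])
```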
Receiver:

The first part generates the step size from each incoming bit in the same way as the transmitter; the previous and present input bits decide the step size. It is then given to an accumulator which builds up the staircase waveform. The LPF then smoothens out the staircase waveform to reconstruct the signal.
Advantages of ADM. (Salient Features)
1. The signal to noise ratio becomes better than ordinary delta modulation.
2. Slope overload distortion is reduced.
3. Because of the variable step size, the dynamic range of ADM is wider than simple
DM.
4. Utilization of bandwidth is better than Delta Modulation.

Linear Predictive Coding (LPC)


Linear prediction is a signal processing function performed by a finite-duration
impulse response (FIR) discrete-time filter, as shown by Figure below.
Linear prediction consists of estimating the current sample of a signal from a
certain number of previous samples. This is always possible when the signal
samples are correlated.

Linear predictor involves the use of three functional blocks (1) a set of delay
units, (2) a set of multipliers, and (3) a set of adders.
For a linear predictor, the output is given by

$$\hat{x}(n) = \sum_{k=1}^{p} w_k\, x(n-k)$$

where p is the prediction order and the wk's are the predictor coefficients.


Since linear prediction is an estimation, it results in an error called the prediction error, given by e(n) = x(n) − x̂(n).
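A small sketch, assuming NumPy, that estimates the predictor coefficients wk by a least-squares fit on a block of samples (practical LPC usually solves the equivalent normal equations via the autocorrelation method and Levinson-Durbin; this is just an illustration):

```python
import numpy as np

def lpc_coefficients(x, p):
    """Least-squares fit of x(n) ~ sum_k w_k * x(n-k) for k = 1..p."""
    # Build the matrix of past samples: row n holds [x(n-1), ..., x(n-p)]
    X = np.column_stack([x[p - k:-k] for k in range(1, p + 1)])
    target = x[p:]
    w, *_ = np.linalg.lstsq(X, target, rcond=None)
    return w

def predict(x, w):
    p = len(w)
    X = np.column_stack([x[p - k:-k] for k in range(1, p + 1)])
    return X @ w

x = np.sin(2 * np.pi * 0.03 * np.arange(400)) + 0.01 * np.random.randn(400)
w = lpc_coefficients(x, p=4)
e = x[4:] - predict(x, w)               # prediction error e(n) = x(n) - x_hat(n)
print(w, np.var(e) / np.var(x))         # error variance is far below the signal variance
```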
Speech Synthesizer:
 LPC is used in encoding of speech.
 The process done at the transmitter end are:
 Determination of the type of speech signal observed, i.e., whether it is voiced or
unvoiced.
 Then the corresponding pitch frequency is determined if it is voiced.
 Next filter co-efficients and gain G are determined.

At last, they are encoded into a sequence of binary digits and transmitted to the receiver.

The term 'voiced sounds' refers to sounds generated by the vibration of the vocal cords. 'Unvoiced' sounds are produced during the pronunciation of letters such as 'p' or 'f'; these sounds are produced by expelling air through the lips and teeth.
In the diagram shown above, the voiced / unvoiced type information bit activates
the voiced / unvoiced switch which connects either of the impulse generator or noise
generator to the prediction filter H(z). The selected voiced or unvoiced sound is then
passed through a filter, which simulates the effect of the mouth, throat and nasal
passage of the speaker on the generated sound. The input signal is filtered by this
filter in such a way that the required letter is pronounced .The output then generates
the sequence xˆ[ n ] which is the synthesised estimate of the input speech sequence
x[n].

Transmitter of LPC

 In the LPC transmitter shown above, the speech analyzer is a signal processing
block whose output is the set of all the parameter values required to
characterize the speech under consideration.
 These parameters are fed to an encoder for onward transmission to the receiver
and also fed to a speech synthesizer block for locally generating the estimate of
the speech that the receiver would synthesize.

 The error between the actual speech input and the locally synthesized speech
is also fed to the encoder for onward transmission to the receiver. The bit rate
of the LPC encoder is typically 2.4 – 4.8 kbps.

The decoder decodes the parameter values and gives them to the speech synthesiser block. The synthesiser block then produces the synthesised speech sequence x̂[n].
 The received error e[n] is added to this synthesized sequence xˆ[ n ] by the
receiver to obtain x[n].
 The analog speech signal x(t) is reconstructed by passing this sequence
through an analog filter which performs the function of interpolating the signal
between sample points.
The difference equation of a linear filter model which contains both poles and zeroes is given by

$$x[n] = \sum_{k=1}^{p} a_k\, x[n-k] + \sum_{k=0}^{q} b_k\, v[n-k]$$

The coefficients {ak} and {bk} are obtained from the output sequence x[n] by applying the MMSE criterion.

Comparison of Digital Pulse Modulation Methods:


| S.No | Parameter | PCM | DM | ADM | DPCM |
|---|---|---|---|---|---|
| 1 | Number of bits | Can use 4, 8 or 16 bits per sample | Uses only one bit per sample | Only one bit is used to encode one sample | More than one bit, but fewer than PCM |
| 2 | Levels, step size | Number of levels depends on the number of bits; level size is fixed | Step size is fixed and cannot be varied | Step size varies according to the signal variation | Fixed number of levels is used |
| 3 | Quantization error and distortion | Quantization error depends on the number of levels used | Slope overload distortion and granular noise are present | Quantization error is present, but other errors are absent | Slope overload distortion and quantization noise are present |
| 4 | Bandwidth of transmission channel | Highest bandwidth required, since the number of bits is high | Lowest bandwidth required | Lowest bandwidth required | Bandwidth required is lower than PCM |
| 5 | Feedback | No feedback in the transmitter or receiver | Feedback exists in the transmitter | Feedback exists | Feedback exists |
| 6 | Complexity of implementation | System is complex | Simple | Simple | Simple |
| 7 | Signal to noise ratio | Good | Poor | Better than DM | Fair |
| 8 | Area of applications | Audio and video; telephony | Speech and images | Speech and images | Speech and video |
