ECT305 Analog and Digital Communication

Module 3 – Differential Pulse Code Modulation, Linear Prediction & Delta Modulation

Lakshmi V.S.
Assistant Professor
Electronics & Communication Department
Sree Chitra Thirunal College of Engineering, Trivandrum
Differential Pulse Code Modulation
• Differential pulse-code modulation (DPCM) aims to reduce the bandwidth requirement at the expense of increased system complexity.

• When a video or voice signal is sampled at a rate slightly higher than the Nyquist rate, the resulting
sampled signal will have a high degree of correlation between adjacent samples. i.e., the signal does
not change rapidly from one sample to the next.

• When such samples are encoded using standard PCM, the resulting encoded signal contains
redundant information.

• By removing this redundancy before encoding, a more efficient coded signal can be obtained. This is the idea behind DPCM.

• A way to remove the redundancy is to use linear prediction.

Linear Prediction
• Consider a finite-duration impulse response (FIR) discrete-time filter, which consists of three blocks:
1. A set of p unit-delay elements, each represented by $z^{-1}$.
2. A set of multipliers involving the filter coefficients $w_1, w_2, \ldots, w_p$ (p is the prediction order).
3. A set of adders used to sum the scaled versions of the delayed inputs $x[n-1], x[n-2], \ldots, x[n-p]$ to produce the prediction $\hat{x}[n]$.

$$\hat{x}[n] = \sum_{k=1}^{p} w_k\, x[n-k]$$

[Figure: tapped-delay-line FIR predictor. The input x[n] passes through p unit delays z⁻¹; the delayed samples x[n−1], …, x[n−p] are weighted by w₁, …, w_p and summed to form the prediction x̂[n].]
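As a small numerical illustration (a sketch assuming Python with NumPy; the coefficients and sample values below are arbitrary, not optimised), the predictor output is simply an inner product of the coefficient vector with the p most recent past samples:

```python
import numpy as np

# Third-order (p = 3) predictor with arbitrary illustrative coefficients.
w = np.array([0.7, 0.2, 0.05])            # w1, w2, w3

x = np.array([0.0, 0.4, 0.7, 0.9, 1.0])   # samples ..., x[n-1], with x[n] last

p = len(w)
past = x[-2::-1][:p]                      # x[n-1], x[n-2], x[n-3]
x_hat = np.dot(w, past)                   # x_hat[n] = sum_k w_k * x[n-k]
e = x[-1] - x_hat                         # prediction error e[n] = x[n] - x_hat[n]

print(x_hat, e)                           # approximately 0.79 and 0.21 here
```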
Linear Prediction
• Let $x[n] = x(nT_s)$, $n = 0, \pm 1, \pm 2, \ldots$ be the actual sample of $x(t)$ at time $t = nT_s$.

• The prediction error $e[n]$ is the difference between $x[n]$ and the prediction $\hat{x}[n]$: $e[n] = x[n] - \hat{x}[n]$.

• The design objective of the linear predictor is to choose the filter coefficients $w_1, w_2, \ldots, w_p$ so as to minimize the mean-square error $J = E\{e^2[n]\}$.

Linear Prediction
Let $x[n]$ be a stationary process with autocorrelation function $R_X[k]$.

$$J = E\{e^2[n]\} = E\{(x[n] - \hat{x}[n])^2\}$$

$$J = E\left\{\left(x[n] - \sum_{k=1}^{p} w_k\, x[n-k]\right)^{\!2}\right\}$$

$$= E\{x^2[n]\} - 2\sum_{k=1}^{p} w_k\, E\{x[n]\, x[n-k]\} + \sum_{j=1}^{p}\sum_{k=1}^{p} w_j w_k\, E\{x[n-j]\, x[n-k]\}$$

$$= R_X[0] - 2\sum_{k=1}^{p} w_k R_X[k] + \sum_{j=1}^{p}\sum_{k=1}^{p} w_j w_k R_X[k-j]$$

To find the optimum filter coefficients $w_k$ that minimize $J$, take the derivative of $J$ with respect to each $w_k$ and equate it to zero:
$$\frac{\partial J}{\partial w_k} = -2 R_X[k] + 2\sum_{j=1}^{p} w_j R_X[k-j] = 0, \qquad k = 1, 2, \ldots, p$$
Linear Prediction
$$-2 R_X[k] + 2\sum_{j=1}^{p} w_j R_X[k-j] = 0 \;\Longrightarrow\; \sum_{j=1}^{p} w_j R_X[k-j] = R_X[k], \qquad 1 \le k \le p$$

• The above optimality equations are called the Wiener-Hopf equations for linear prediction.
• Using the even-symmetry property of the autocorrelation function, $R_X[k-j] = R_X[j-k]$, they can be rewritten in matrix form as:

$$\begin{bmatrix} R_X[0] & R_X[1] & \cdots & R_X[p-1] \\ R_X[1] & R_X[0] & \cdots & R_X[p-2] \\ \vdots & \vdots & \ddots & \vdots \\ R_X[p-1] & R_X[p-2] & \cdots & R_X[0] \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_p \end{bmatrix} = \begin{bmatrix} R_X[1] \\ R_X[2] \\ \vdots \\ R_X[p] \end{bmatrix}$$

that is, $\mathbf{R}_X \mathbf{w}_o = \mathbf{r}_X$, where $\mathbf{R}_X$ is the $p \times p$ autocorrelation matrix, $\mathbf{w}_o = [w_1 \; w_2 \; \cdots \; w_p]^T$ is the $p \times 1$ optimum coefficient vector, and $\mathbf{r}_X = [R_X[1] \; R_X[2] \; \cdots \; R_X[p]]^T$ is the $p \times 1$ autocorrelation vector.

$$\mathbf{R}_X \mathbf{w}_o = \mathbf{r}_X \;\Longrightarrow\; \text{optimal solution: } \mathbf{w}_o = \mathbf{R}_X^{-1}\, \mathbf{r}_X$$
• This assumes that the autocorrelation matrix $\mathbf{R}_X$ is non-singular, so that its inverse exists.
Linear Prediction
$$\mathbf{R}_X \mathbf{w}_o = \mathbf{r}_X \;\Longrightarrow\; \mathbf{w}_o = \mathbf{R}_X^{-1}\, \mathbf{r}_X, \qquad \sum_{j=1}^{p} w_j R_X[k-j] = R_X[k], \quad 1 \le k \le p$$

• By substituting the optimum filter coefficients into the expression for $J$, we obtain the minimum mean-square error, i.e., the minimum variance of the prediction error:

$$J = R_X[0] - 2\sum_{k=1}^{p} w_k R_X[k] + \sum_{j=1}^{p}\sum_{k=1}^{p} w_j w_k R_X[k-j]$$

$$J = R_X[0] - 2\sum_{k=1}^{p} w_k R_X[k] + \sum_{k=1}^{p} w_k R_X[k] \qquad \text{since } \sum_{j=1}^{p} w_j R_X[k-j] = R_X[k]$$

$$J = R_X[0] - \sum_{k=1}^{p} w_k R_X[k]$$

Since $\sum_{k=1}^{p} w_k R_X[k] = \mathbf{r}_X^T \mathbf{w}_o = \mathbf{r}_X^T \mathbf{R}_X^{-1} \mathbf{r}_X$,

$$J_{\min} = R_X[0] - \mathbf{r}_X^T \mathbf{w}_o = R_X[0] - \mathbf{r}_X^T \mathbf{R}_X^{-1} \mathbf{r}_X$$
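As a concrete sketch (assuming Python with NumPy), the optimum coefficients and the minimum mean-square error follow directly from $\mathbf{w}_o = \mathbf{R}_X^{-1}\mathbf{r}_X$ and $J_{\min} = R_X[0] - \mathbf{r}_X^T\mathbf{w}_o$; the autocorrelation values below are the ones used in Problem 1 of the DPCM problems at the end of these notes:

```python
import numpy as np

# Autocorrelation values R_X[0..p] (from Problem 1 of the DPCM problems).
r = np.array([1.0, 0.8, 0.6, 0.4])
p = len(r) - 1                                   # prediction order (3 here)

# p x p Toeplitz autocorrelation matrix, entry (i, j) = R_X[|i - j|]
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
r_vec = r[1:p + 1]                               # [R_X[1], ..., R_X[p]]

w_o = np.linalg.solve(R, r_vec)                  # Wiener-Hopf solution w_o = R_X^-1 r_X
J_min = r[0] - r_vec @ w_o                       # minimum mean-square prediction error

print("optimum coefficients:", w_o)
print("minimum MSE:", J_min)
```

The same routine with r = [1.0, 0.8, 0.6] gives the two-tap predictor asked for in Problem 2 of the DPCM problems.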
DPCM - Transmitter
• Differential quantization is used in DPCM: the future values of $m(t)$ are predicted using a linear predictor.
• Let $m[n] = m(nT_s)$, $n = 0, \pm 1, \pm 2, \ldots$ be the sample of $m(t)$ at time $t = nT_s$.
• The input signal to the quantizer is the prediction error $e[n] = m[n] - \hat{m}[n]$.
• The quantizer output is $e_q[n] = e[n] + q[n]$, where $q[n]$ is the quantization noise introduced by the quantizer.
• The quantizer output is then added to the predicted value $\hat{m}[n]$ to produce the prediction-filter input $m_q[n]$:

$$m_q[n] = \hat{m}[n] + e_q[n] = \hat{m}[n] + e[n] + q[n] = m[n] + q[n] \qquad (\because\; e[n] = m[n] - \hat{m}[n])$$

• The quantizer output $e_q[n]$ is then encoded to generate the DPCM wave.
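The loop above can be sketched in a few lines of Python/NumPy. This is only an illustrative simulation: it assumes a fixed first-order predictor $\hat{m}[n] = w_1 m_q[n-1]$ and a simple uniform mid-rise quantizer, whereas the predictor order, coefficients, and quantizer in a real system are design choices.

```python
import numpy as np

def dpcm(m, w1=0.9, step=0.05, levels=16):
    """First-order DPCM: returns quantized errors e_q[n] and reconstruction m_q[n]."""
    e_q = np.zeros_like(m)
    m_q = np.zeros_like(m)
    prev = 0.0                                   # m_q[n-1], predictor memory
    half = levels // 2
    for n, sample in enumerate(m):
        m_hat = w1 * prev                        # predicted value m_hat[n]
        e = sample - m_hat                       # prediction error e[n]
        idx = np.clip(np.floor(e / step), -half, half - 1)
        e_q[n] = (idx + 0.5) * step              # e_q[n] = e[n] + q[n] (uniform mid-rise)
        m_q[n] = m_hat + e_q[n]                  # prediction-filter input, = m[n] + q[n]
        prev = m_q[n]
    return e_q, m_q                              # e_q[n] is what gets encoded and sent

# Slowly varying test signal sampled well above its Nyquist rate
t = np.arange(0.0, 1.0, 1e-3)
m = np.sin(2 * np.pi * 2 * t)
e_q, m_q = dpcm(m)
print("reconstruction error variance:", np.var(m - m_q))
```

The receiver repeats only the prediction and accumulation steps of the loop on the decoded $e_q[n]$, which is why transmitter and receiver stay in step.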
DPCM - Receiver
• The receiver consists of a decoder to reconstruct the quantized error signal.
• The quantized version of the original input is reconstructed from the decoder output, using the same prediction filter as in the transmitter: $m_q[n] = e_q[n] + \hat{m}[n]$.
• The corresponding receiver output is $m_q[n]$, which differs from the original input $m[n]$ only by the quantization error $q[n]$ incurred as a result of quantizing the prediction error.

[Figure: DPCM receiver. The decoded $e_q[n]$ is added to the prediction $\hat{m}[n]$ to reconstruct $m_q[n]$.]
Processing Gain
• The output signal-to-noise ratio of the DPCM system is defined as
$$\mathrm{SNR}_O = \frac{\sigma_M^2}{\sigma_Q^2} \qquad (\because\; m_q[n] = m[n] + q[n])$$
where $\sigma_M^2$ is the variance of the original input sample $m[n]$ and $\sigma_Q^2$ is the variance of the quantization error $q[n]$.

• This can be rewritten as the product of two factors:
$$\mathrm{SNR}_O = \frac{\sigma_M^2}{\sigma_E^2} \cdot \frac{\sigma_E^2}{\sigma_Q^2} = G_p \cdot \mathrm{SNR}_Q$$
where $\sigma_E^2$ is the variance of the prediction error $e[n] = m[n] - \hat{m}[n]$, the processing gain is $G_p = \sigma_M^2 / \sigma_E^2$, and the signal-to-quantization-noise ratio is $\mathrm{SNR}_Q = \sigma_E^2 / \sigma_Q^2$.

• $G_p > 1$ ⇒ gain in SNR due to differential quantization.
• Better prediction ⇒ larger $G_p$ ⇒ larger $\mathrm{SNR}_O$.
Processing Gain
𝐺𝑝 > 1 ⇒ gain in SNR due to differential quantization
Better prediction ⇒ larger Gp ⇒ larger 𝑆𝑁𝑅𝑂

✓ Comparing DPCM with PCM for voice signals, the improvement is around 4–11 dB, depending on the prediction order.

✓ The greatest improvement occurs in going from no prediction to first-order prediction, with some additional gain resulting from increasing the prediction order up to 4 or 5; beyond that, little additional gain is obtained.

✓ For the same sampling rate (8 kHz) and signal quality, DPCM may provide a saving of about 8–16 kbps compared to standard PCM (64 kbps).
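As an illustrative check (an assumed example, not taken from the slides): for an optimum first-order predictor acting on a stationary input with adjacent-sample correlation coefficient $\rho = R_X[1]/R_X[0]$, the Wiener–Hopf equation gives $w_1 = \rho$, so

$$
\sigma_E^2 = J_{\min} = R_X[0] - w_1 R_X[1] = \sigma_M^2\,(1 - \rho^2),
\qquad
G_p = \frac{\sigma_M^2}{\sigma_E^2} = \frac{1}{1 - \rho^2}.
$$

For example, $\rho = 0.8$ gives $G_p \approx 2.78$, i.e. about 4.4 dB, consistent with the lower end of the range quoted above; oversampling (which increases $\rho$) and higher prediction orders push the gain further up.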

Delta Modulation
• Delta modulation (DM) aims to reduce the system complexity.
• In DM, the message is oversampled (at a rate much higher than the Nyquist rate) to purposely
increase the correlation between adjacent samples.
• Then, the difference between adjacent samples is encoded instead of the sample value itself.
• DM provides a staircase approximation to the oversampled version of the message signal.
• The difference between the input and the approximation is quantized into only two levels, ±Δ, corresponding to positive and negative differences (a one-bit quantizer).

[Figure: message m(t) and its staircase approximation m_q(t); in each sampling interval T_s the staircase moves up by Δ (output bit 1) or down by Δ (output bit 0).]
DM Transmitter
• The principal virtue of delta modulation is its simplicity.
• It requires only a comparator, a quantizer, and an accumulator.
• Single-bit quantizer: the simplest quantizing strategy; the quantizer acts as a hard limiter with only two output levels, namely ±Δ.
• Single unit-delay element: the most primitive form of predictor; the block z⁻¹ (a unit delay equal to one sampling period T_s) acts as an accumulator.

DM Transmitter
• Let $m[n] = m(nT_s)$, $n = 0, \pm 1, \pm 2, \ldots$ be the sample of $m(t)$ at time $t = nT_s$.
• The error signal $e[n]$ is the difference between the present sample $m[n]$ and its latest approximation $m_q[n-1]$.
• $e_q[n]$ is the quantized version of $e[n]$.
• The quantized output $e_q[n]$ is coded to produce the DM signal.

$$e[n] = m[n] - m_q[n-1]$$
$$e_q[n] = \Delta \cdot \operatorname{sgn}(e[n]) = \begin{cases} +\Delta & \text{if } e[n] > 0 \\ -\Delta & \text{if } e[n] < 0 \end{cases}$$
$$m_q[n] = m_q[n-1] + e_q[n]$$
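A minimal simulation sketch of this loop (assuming Python with NumPy; the test frequency, sampling rate, and step size below are illustrative values chosen so that the slope-overload condition discussed on the following slides is met):

```python
import numpy as np

def delta_modulate(m, delta):
    """One-bit DM encoder: returns the bit stream and the staircase m_q[n]."""
    bits = np.zeros(len(m), dtype=int)
    m_q = np.zeros(len(m))
    prev = 0.0                              # m_q[n-1], accumulator state
    for n, sample in enumerate(m):
        e = sample - prev                   # e[n] = m[n] - m_q[n-1]
        e_q = delta if e > 0 else -delta    # e_q[n] = delta * sgn(e[n])
        bits[n] = 1 if e > 0 else 0
        m_q[n] = prev + e_q                 # m_q[n] = m_q[n-1] + e_q[n]
        prev = m_q[n]
    return bits, m_q

def delta_demodulate(bits, delta):
    """Receiver accumulator; a low-pass filter would follow in practice."""
    return np.cumsum(np.where(bits == 1, delta, -delta))

# 1 kHz sinusoid oversampled at 64 kHz with step size 0.1
fs, fm, delta = 64e3, 1e3, 0.1
t = np.arange(0.0, 5e-3, 1.0 / fs)
m = np.sin(2 * np.pi * fm * t)
bits, m_q = delta_modulate(m, delta)
print("max reconstruction error:", np.max(np.abs(m - delta_demodulate(bits, delta))))
```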

DM Receiver
• In the receiver, the staircase approximation $m_q(t)$ is reconstructed by passing the sequence of positive and negative pulses produced at the decoder output through an accumulator similar to that in the transmitter.
• The out-of-band quantization noise in the high-frequency staircase waveform $m_q(t)$ is rejected by passing it through a low-pass filter with bandwidth equal to the original message bandwidth.
$$m_q[n] = m_q[n-1] + e_q[n]$$

[Figure: DM receiver. The decoded pulses $e_q[n]$ drive an accumulator that produces $m_q[n]$ from $m_q[n-1]$; a low-pass filter then yields the reconstructed waveform $m_q(t)$.]



Delta Modulation – Quantization Errors
Distortions due to delta modulation:
✓ Slope overload distortion
✓ Granular noise



Delta Modulation – Quantization Errors
• If we consider the maximum slope of the original message signal m(t), it is clear that, in order for the sequence of samples m_q[n] to increase as fast as the sequence of message samples m[n] in a region of maximum slope of m(t), the required condition is

$$\frac{\Delta}{T_s} \ge \max\left|\frac{dm(t)}{dt}\right| \qquad \text{(condition to avoid slope overload)}$$
• Slope overload distortion – If the step-
size Δ is too small for the staircase
approximation 𝑚𝑞 (𝑡) to follow a steep
segment of the message signal 𝑚(𝑡) , the
result is that 𝑚𝑞 (𝑡) falls behind 𝑚 𝑡 .
• This condition is called slope overload,
and the resulting quantization error is
called slope-overload distortion (noise).
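For a sinusoidal test signal $m(t) = A\sin(2\pi f_m t)$, the case used in the DM problems at the end of these notes, the slope-overload condition gives an explicit bound on the amplitude:

$$
\max\left|\frac{dm(t)}{dt}\right| = 2\pi f_m A \;\le\; \frac{\Delta}{T_s} = \Delta f_s
\quad\Longrightarrow\quad
A_{\max} = \frac{\Delta f_s}{2\pi f_m}.
$$

For instance, with $\Delta = 100$ mV, $f_s = 10 \times 6.8$ kHz $= 68$ kHz and $f_m = 1$ kHz (the values of DM Problem 2), $A_{\max} = 0.1 \times 68000 / (2\pi \times 1000) \approx 1.08$ V.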
Delta Modulation – Quantization Errors
• Granular Noise - In contrast to slope-overload distortion, granular noise occurs when the step size Δ is
too large relative to the local slope characteristics of the message signal 𝑚 𝑡 , thereby causing the
staircase approximation 𝑚𝑞 (𝑡) to hunt around a relatively flat segment of 𝑚 𝑡 .



DPCM vs DM
• A DM system can be treated as a special case of DPCM.
• DPCM and DM are basically similar, except for the use of a one-bit quantizer in DM and the replacement of the prediction filter by a single delay element.
• Unlike a standard PCM system, the transmitters of both DPCM and DM use feedback.

[Figures: DPCM transmitter and DM transmitter block diagrams.]



Comparison
| Sl No. | Parameter | PCM | DPCM | DM |
|--------|-----------|-----|------|----|
| 1 | Number of bits | Uses 4, 8 or 16 bits per sample. | Uses more than one bit per sample, but fewer than PCM. | Uses one bit per sample. |
| 2 | Levels and step size | Number of levels depends on the number of bits; level size is fixed. | Number of levels is fixed. | Step size is kept fixed and cannot be varied. |
| 3 | Quantization error & distortion | Quantization error depends on the number of levels. | Slope-overload distortion and quantization noise present. | Slope-overload distortion and granular noise present. |
| 4 | Signal-to-noise ratio | Good | Moderate | Poor |
| 5 | Bandwidth | Highest bandwidth required, since the number of bits is high. | Bandwidth required is less than PCM. | Lowest bandwidth required. |
| 6 | Feedback | No feedback in the transmitter or receiver. | Feedback exists in the transmitter. | Feedback exists in the transmitter. |
| 7 | Complexity | Complex system to implement. | Simple to implement. | Simplest to implement. |
| 8 | Applications | Audio and video telephony. | Mostly used for video and speech. | Generally used for speech and images. |
Problems – PCM
1. A speech signal has a total duration of 10 s. It is sampled at the rate of 8 kHz and then encoded. The signal-to-(quantization) noise ratio is required to be 40 dB. Calculate the minimum storage capacity needed to accommodate this digitized speech signal. (problem 3.16 – Simon Haykins)
2. A PCM system uses a uniform quantizer followed by a 7-bit binary encoder. The bit rate of the system is equal to 50 × 10⁶ bits/s.
   a. What is the maximum message bandwidth for which the system operates satisfactorily?
   b. Determine the output signal-to-(quantization) noise ratio when a full-load sinusoidal modulating wave of frequency 1 MHz is applied to the input. (problem 3.18 – Simon Haykins)
3. Compute the A-law and μ-law quantized values of a signal that is normalized to 0.8, with A = 32 and μ = 255.
4. A PCM system uses a uniform quantizer followed by an 8-bit encoder. If the bit rate of the system is 10⁸ bps, what is the maximum bandwidth of the low-pass message signal for which the system operates satisfactorily?

Problems – DPCM
1. A stationary process X(t) has the following values of its autocorrelation function: R_X(0) = 1, R_X(1) = 0.8, R_X(2) = 0.6 and R_X(3) = 0.4. Calculate the coefficients of an optimum linear predictor involving the use of three unit delays. Calculate the variance of the resulting prediction error. (problem 3.32 – Simon Haykins)
2. Repeat the calculations of above problem, but this time use a linear predictor with two unit-time
delays. Compare the performance of this second optimum linear predictor with that considered in
Problem 1. (problem 3.33 – Simon Haykins)
3. Design a 3-tap linear predictor for speech signals with the autocorrelation vector [0.95,0.85,0.7,0.6],
based on Wiener-Hopf equation. Compute the minimum mean square error.

Problems – DM
1. The input to the delta modulator is a sinusoidal signal whose frequency varies from 500 Hz to 5000
Hz. The sampling rate is 4 times the Nyquist rate. The signal peak amplitude is 1 V. Determine the
step size when the sampling frequency is 1000 Hz.
2. A linear delta modulator is designed to operate on speech signals limited to 3.4 kHz. The specifications of the modulator are as follows: sampling rate = 10 × f_Nyquist, where f_Nyquist is the Nyquist rate of the speech signal; step size Δ = 100 mV.
   The modulator is tested with a 1 kHz sinusoidal signal. Determine the maximum amplitude of this test signal required to avoid slope-overload distortion. (problem 3.27 – Simon Haykins)
3. Consider a DM system designed to accommodate analog message signals limited to bandwidth W = 5
kHz. A sinusoidal test signal of amplitude A = 1V and frequency fm = 1 kHz is applied to the system.
The sampling rate of the system is 50 kHz. Calculate the step size Δ required to minimize slope
overload distortion. (problem 3.29 – Simon Haykins)
4. Consider a low-pass signal with a bandwidth of 3 kHz. A linear DM system with step size Δ = 0.1 V is used to process this signal at a sampling rate 10 times the Nyquist rate. Evaluate the maximum amplitude of a test sinusoidal signal of frequency 1 kHz which can be processed by the system without slope-overload distortion. (problem 3.30 – Simon Haykins)
THANK YOU…
