Digital Signal Processing: IV B.Tech I Sem - R19
(19A04704b)
By
Mr.P.Srinivasulu
Assistant Professor
Department of ECE
VEMU INSTITUTE OF TECHNOLOGY
1
UNIT-1
Introduction
2
Signal Processing
• Humans are the most advanced signal processors
– speech and pattern recognition, speech synthesis,…
3
Limitations of Analog Signal Processing
• Accuracy limitations due to
– Component tolerances
– Undesired nonlinearities
• Limitations of digital signal processing
– Sampling causes loss of information
– A/D and D/A requires mixed-signal hardware
– Limited speed of processors
– Quantization and round-off errors
6
Analog, digital, mixed signal
processing
7
Digital Signal Processing
8
Sampling and reconstruction
9
Sample and hold (S/H)circuit
10
A/D converter
11
A/D converter
12
Quantization noise
13
D/A conversion
14
D/A conversion
15
Reconstruction
16
Reconstruction
17
Reconstruction
18
Reconstruction
19
Signals
• Continuous-time signals are functions of a real argument
x(t) where t can take any real value
x(t) may be 0 for a given range of values of t
• Discrete-time signals are functions of an argument that
takes values from a discrete set
x[n] where n ∈ {..., -3, -2, -1, 0, 1, 2, 3, ...}
Integer index n instead of time t for discrete-time systems
• x may be an array of values (a multichannel signal)
• Values of x may be real or complex
20
Discrete-time Signals and Systems
• Continuous-time signals are defined over a
continuum of times and thus are represented by a
continuous independent variable.
• Discrete-time signals are defined at discrete times
and thus the independent variable has discrete
values.
• Analog signals are those for which both time and
amplitude are continuous.
• Digital signals are those for which both time and
amplitude are discrete.
21
Analog vs. Digital
• The amplitude of an analog signal can take any real or complex value at each
time/sample
22
Periodic (Uniform) Sampling
• Sampling is a continuous to discrete-time conversion
x[n] = x_c(nT),  n ∈ {..., -2, -1, 0, 1, 2, ...}   (T is the sampling period)
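As a small illustration (not from the slides), the sketch below samples a 50 Hz continuous-time cosine at an assumed rate of 1 kHz; the names and values are arbitrary.

import numpy as np

Fs = 1000.0          # assumed sampling rate in Hz
T = 1.0 / Fs         # sampling period
f0 = 50.0            # frequency of the continuous-time cosine
n = np.arange(16)    # discrete-time index

# x[n] = x_c(nT): sample the continuous-time signal at t = nT
x = np.cos(2 * np.pi * f0 * n * T)
print(x)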
23
Periodic Sampling
• Sampling is, in general, not reversible
• Given a sampled signal, one could fit infinitely many continuous-time signals
through the same samples
24
Representation of Sampling
• Mathematically convenient to represent in two stages
– Impulse train modulator
– Conversion of impulse train to a sequence
(Block diagram: x_c(t) is multiplied by the impulse train s(t); the resulting impulse train is then converted to the discrete-time sequence x[n] = x_c(nT).)
25
Unit Sample Sequence
δ[n] = 0,  n ≠ 0
     = 1,  n = 0
The unit sample sequence plays the same role for discrete-time sequences and
systems that the unit impulse (Dirac delta function) does for continuous-time
signals and systems.
26
Impulse Function
The impulse function, also known as Dirac’s delta function, is used to
represented quantities that are highly localized in space. Examples include
point optical sources and electrical charges.
27
Definition of Impulse Function
The impulse function may be defined from its basic properties.
δ(x - x₀) = 0  for  x ≠ x₀
∫_{x₁}^{x₂} f(x) δ(x - x₀) dx = f(x₀),   x₁ < x₀ < x₂
28
Graphical Representation
On graphs we will represent δ(x - x₀) as a spike of unit height located at the point x₀.
29
Sampling Operation
The delta function samples the function f(x).
f(x) δ(x - x₀) = f(x₀) δ(x - x₀)
The function f(x) δ(x - x₀) is graphed as a spike of height f(x₀) located at the point x₀.
30
Unit Step Sequence
u[n] = 1,  n ≥ 0
     = 0,  n < 0
The unit step is the running sum of the unit sample:
u[n] = δ[n] + δ[n-1] + δ[n-2] + ... = Σ_{k=0}^{∞} δ[n - k]
or, equivalently,  u[n] = Σ_{k=-∞}^{n} δ[k]
Conversely, the impulse sequence can be expressed as the first backward difference of the unit step sequence:
δ[n] = u[n] - u[n-1]
31
Exponential Sequence
x[n] = A αⁿ
If we want an exponential sequence that is zero for n < 0, we can write this as:
x[n] = A αⁿ u[n]
32
Geometric Series
A one-sided exponential sequence of the form αⁿ u[n] gives a geometric series.
The sum of a finite number N of terms is
Σ_{n=0}^{N-1} αⁿ = (1 - α^N) / (1 - α)
and, for |α| < 1, the infinite sums are
Σ_{n=0}^{∞} αⁿ = 1 / (1 - α),    Σ_{n=N}^{∞} αⁿ = α^N / (1 - α)
33
Sinusoidal Sequence
x[n] = A cos(ω₀ n + φ)
34
Sequence as a sum of scaled, delayed
impulses
p[n] = a₋₃ δ[n + 3] + a₁ δ[n - 1] + a₂ δ[n - 2] + a₇ δ[n - 7]
(The figure shows the samples a₋₃, a₁, a₂, a₇ located at n = -3, 1, 2, 7.)
35
Sequence Operations
• The product and sum of two sequences are defined as the sample-by-
sample product and sum, respectively.
• Multiplication of a sequence by a number is defined as multiplication
of each sample value by this number.
• A sequence y[n] is said to be a delayed or shifted version of a sequence
x[n] if
y[n] = x[n – nd]
where nd is an integer.
• Combination of Basic Sequences
Ex. 1:  x[n] = K αⁿ for n ≥ 0, and 0 for n < 0;  equivalently  x[n] = K αⁿ u[n].
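A short NumPy sketch (illustrative only) of the basic sequences and operations above; K, alpha and the delay are arbitrary example values.

import numpy as np

n = np.arange(-5, 11)                      # index range for the examples
delta = (n == 0).astype(float)             # unit sample delta[n]
u = (n >= 0).astype(float)                 # unit step u[n]
K, alpha = 2.0, 0.8                        # arbitrary constants
x = K * alpha**n * u                       # x[n] = K alpha^n u[n]

nd = 3                                     # delay amount
y = np.where(n - nd >= 0, K * alpha**(n - nd), 0.0)   # y[n] = x[n - nd]

s = x + y                                  # sample-by-sample sum
p = x * y                                  # sample-by-sample product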
36
Systems
37
Systems
A discrete-time system is a transformation T{·} that maps an input sequence x[n] into an output sequence y[n]:
y[n] = T{x[n]}   (block diagram: x[n] → T{·} → y[n])
System Characteristics
40
Linearity
A linear system is one that obeys the principle of superposition:
T{a₁ x₁[n] + a₂ x₂[n]} = a₁ y₁[n] + a₂ y₂[n]
where y₁[n] = T{x₁[n]} and y₂[n] = T{x₂[n]}.
• Example
– Ideal Delay System:  y[n] = x[n - n₀]
T{x₁[n] + x₂[n]} = x₁[n - n₀] + x₂[n - n₀] = T{x₁[n]} + T{x₂[n]}
T{a x₁[n]} = a x₁[n - n₀] = a T{x₁[n]}
so the ideal delay is linear.
42
Time (Shift) Invariance
A system is said to be shift invariant if the only effect caused by
a shift in the position of the input is an equal shift in the position
of the output, that is
T{x[n - n₀]} = y[n - n₀]
The magnitude and shape of the output are unchanged, only the
location of the output is changed.
43
Time-Invariant Systems
• Time-Invariant (shift-invariant) Systems
– A time shift at the input causes corresponding time-shift at output
y[n] = T{x[n]}  ⇒  y[n - n₀] = T{x[n - n₀]}
• Example
– Square:  y[n] = (x[n])²
Delay the input: the output is y₁[n] = (x[n - n₀])²
Delay the output: y[n - n₀] = (x[n - n₀])²
These are equal, so the squarer is time-invariant.
• Counter Example
– Compressor System:  y[n] = x[Mn]
Delay the input: the output is y₁[n] = x[Mn - n₀]
Delay the output: y[n - n₀] = x[M(n - n₀)] = x[Mn - Mn₀]
These differ, so the compressor is not time-invariant.
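A numerical check of the two cases above (a sketch under the stated definitions, with arbitrary test data): the squarer passes the shift test, the compressor does not.

import numpy as np

def shifted(x, n0):
    # delay a finite-length sequence by n0 samples (zero-padded at the start)
    return np.concatenate([np.zeros(n0), x[:len(x) - n0]])

x = np.random.randn(32)
n0, M = 2, 2

square = lambda s: s**2
compress = lambda s: s[::M]          # y[n] = x[Mn]

# Square: T{x[n - n0]} equals y[n - n0]  -> time-invariant
print(np.allclose(square(shifted(x, n0)), shifted(square(x), n0)))               # True

# Compressor: T{x[n - n0]} differs from y[n - n0] -> not time-invariant
print(np.allclose(compress(shifted(x, n0))[:14], shifted(compress(x), n0)[:14])) # False (in general)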
44
Impulse Response
When the input to a system is a single impulse, the output is called
the impulse response. Let h[n] be the impulse response, given by
T{δ[n]} = h[n]
For continuous-time signals:  f(x) = f(x) * δ(x) = ∫ f(u) δ(x - u) du
For discrete-time sequences:  f[n] = f[n] * δ[n] = Σ_{k=-∞}^{∞} f[k] δ[n - k]
45
Linear Shift-Invariant Systems
Suppose that T{} is a linear, shift-invariant system with h[n] as its
impulse response.
Then, using the principle
of superposition,
T{s[n]} = T{ Σ_k s[k] δ[n - k] } = Σ_k s[k] T{δ[n - k]}
        = Σ_k s[k] h[n - k]
T{s[n]} = s[n] * h[n]
This very important result says that the output of any linear, shift-
invariant system is given by the convolution of the input with the
impulse response of the system. 46
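A quick sketch comparing the convolution sum written out directly with NumPy's built-in convolution; the sequences are arbitrary examples.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 0.5])      # input s[n]
h = np.array([1.0, -1.0, 0.25])         # impulse response h[n]

# direct convolution sum: y[n] = sum_k x[k] h[n-k]
y = np.zeros(len(x) + len(h) - 1)
for n in range(len(y)):
    for k in range(len(x)):
        if 0 <= n - k < len(h):
            y[n] += x[k] * h[n - k]

print(np.allclose(y, np.convolve(x, h)))   # True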
Causality
A system is causal if, for every choice of n₀, the output sequence value at index n = n₀ depends only on the input sequence values for n ≤ n₀.
47
Causal System
• Causality
– A system is causal it’s output is a function of only the current and
previous samples
• Examples
– Backward Difference
y[n] x[n] x[n 1]
• Counter Example
– Forward Difference
y[n] x[n 1] x[n]
48
Stability
A system is stable in the bounded-input, bounded-output (BIBO) sense if and only if every bounded input produces a bounded output sequence.
The input x[n] is bounded if there exists a fixed positive finite value Bₓ such that |x[n]| ≤ Bₓ for all n.
Stability requires that for every such input there exists a fixed positive finite value B_y such that |y[n]| ≤ B_y for all n.
49
Stable System
• Stability (in the sense of bounded-input bounded-output BIBO)
– A system is stable if and only if every bounded input produces a bounded
output
|x[n]| ≤ Bₓ  ⇒  |y[n]| ≤ B_y
• Example
– Square:  y[n] = (x[n])²  is stable, since |y[n]| ≤ Bₓ²
• Counter Example
– Log:  y[n] = log₁₀(|x[n]|)  is not stable even if the input is bounded by |x[n]| ≤ Bₓ; for x[n] = 0 the output is unbounded.
50
Memory (State)
51
Memoryless System
• Memoryless System
– A system is memoryless if the output y[n] at every value of n depends
only on the input x[n] at the same value of n
• Example
– Sign:  y[n] = sign(x[n])
• Counter Example
– Ideal Delay System:  y[n] = x[n - n₀]
52
Invertible System
A system is invertible if for each output sequence we can find a
unique input sequence. If two or more input sequences produce
the same output sequence, the system is not invertible.
53
Passive and Lossless Systems
A system is said to be passive if, for every finite energy input
sequence x[n], the output sequence has at most the same energy:
Σ_{n=-∞}^{∞} |y[n]|² ≤ Σ_{n=-∞}^{∞} |x[n]|² < ∞
If the inequality holds with equality for every finite-energy input, the system is said to be lossless.
54
Examples of Systems
Moving Average System:   y[n] = (1/(M₁ + M₂ + 1)) Σ_{k=-M₁}^{M₂} x[n - k]
Accumulator System:      y[n] = Σ_{k=-∞}^{n} x[k]
Compressor System:       y[n] = x[Mn],   where M is a positive integer
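Illustrative NumPy versions of the three example systems (M₁, M₂, M are arbitrary choices, not values from the slides).

import numpy as np

x = np.random.randn(100)
M1, M2, M = 2, 2, 3          # arbitrary example parameters

# Moving average: y[n] = 1/(M1+M2+1) * sum_{k=-M1}^{M2} x[n-k]
h = np.ones(M1 + M2 + 1) / (M1 + M2 + 1)
y_ma = np.convolve(x, h)

# Accumulator: y[n] = sum of x[k] for k <= n
y_acc = np.cumsum(x)

# Compressor (downsampler): y[n] = x[Mn]
y_comp = x[::M]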
55
Impulse Response of LTI Systems
56
Impulse Response for Examples
Find the impulse response by computing the response to δ[n]:
Ideal Delay System:    h[n] = δ[n - n_d]   (FIR)
Moving Average System: h[n] = 1/(M₁ + M₂ + 1) for -M₁ ≤ n ≤ M₂, and 0 otherwise   (FIR)
Accumulator System:    h[n] = Σ_{k=-∞}^{n} δ[k] = 1 for n ≥ 0, 0 for n < 0,  i.e.  h[n] = u[n]   (IIR)
57
Stability Condition for LTI Systems
An LTI system is BIBO stable if and only if its impulse response
is absolutely summable, that is
S = Σ_{k=-∞}^{∞} |h[k]| < ∞
58
Stable and Causal LTI Systems
• An LTI system is (BIBO) stable if and only if
– The impulse response is absolutely summable:
Σ_k |h[k]| < ∞
– This is sufficient because, for a bounded input |x[n]| ≤ Bₓ,
|y[n]| = |Σ_k h[k] x[n - k]| ≤ Σ_k |h[k]| |x[n - k]| ≤ Bₓ Σ_k |h[k]|
• An LTI system is causal if and only if h[n] = 0 for n < 0.
59
Difference Equations
An important subclass of LTI systems is defined by an Nth-order linear constant-coefficient difference equation:
Σ_{k=0}^{N} aₖ y[n-k] = Σ_{m=0}^{M} b_m x[n-m]
With a₀ = 1 this can be rewritten in recursive form:
y[n] = -Σ_{k=1}^{N} aₖ y[n-k] + Σ_{m=0}^{M} b_m x[n-m]
Example (accumulator):
y[n] = Σ_{k=-∞}^{n} x[k] = x[n] + Σ_{k=-∞}^{n-1} x[k] = x[n] + y[n-1]
so b₀ = 1 and the output is fed back through a one-sample delay with unit gain (a₁ = -1 in the standard form above).
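As a sketch, a linear constant-coefficient difference equation can be evaluated with scipy.signal.lfilter; in the convention sum_k a_k y[n-k] = sum_m b_m x[n-m], the accumulator y[n] = x[n] + y[n-1] corresponds to b = [1], a = [1, -1].

import numpy as np
from scipy.signal import lfilter

x = np.ones(10)                       # x[n] = u[n] on a finite range
b, a = [1.0], [1.0, -1.0]             # accumulator: y[n] - y[n-1] = x[n]
y = lfilter(b, a, x)
print(y)                              # 1, 2, 3, ... (running sum)
print(np.allclose(y, np.cumsum(x)))   # True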
61
Total Solution Calculation
Σ_{k=0}^{N} aₖ y[n-k] = Σ_{m=0}^{M} b_m x[n-m]
The total solution is y[n] = y_h[n] + y_p[n], where the homogeneous solution y_h[n] is obtained from the homogeneous equation:
Σ_{k=0}^{N} aₖ y_h[n-k] = 0
62
Homogeneous Solution
Given the homogeneous equation:
Σ_{k=0}^{N} aₖ y_h[n-k] = 0
assume a solution of the form y_h[n] = λⁿ. Then
Σ_{k=0}^{N} aₖ λ^{n-k} = λ^{n-N} (a₀ λ^N + a₁ λ^{N-1} + ... + a_N) = 0
The characteristic polynomial has N roots λ₁, ..., λ_N, and (if the roots are all distinct)
y_h[n] = Σ_{m=1}^{N} A_m λ_mⁿ
The coefficients A_m may be found from the initial conditions. 63
Particular Solution
The particular solution is required to satisfy the difference equation for a specific
input signal x[n], n ≥ 0.
N M
a k y [ n k ] b m x[n m ]
k 0 m 0
To find the particular solution we assume for the solution yp[n] a form that depends
on the form of the specific input signal x[n].
y[n] = y_h[n] + y_p[n]
64
General Form of Particular Solution
In general y_p[n] takes the same functional form as x[n]; for example, for an input x[n] = A sin(ω₀ n) the particular solution is assumed of the form y_p[n] = K₁ cos(ω₀ n) + K₂ sin(ω₀ n).
65
Example
Determine the homogeneous solution for
y[n] + y[n-1] - 6y[n-2] = 0
Substitute y_h[n] = λⁿ:
λⁿ + λⁿ⁻¹ - 6λⁿ⁻² = λⁿ⁻² (λ² + λ - 6) = λⁿ⁻² (λ + 3)(λ - 2) = 0
so λ₁ = -3 and λ₂ = 2, and
y_h[n] = A₁ λ₁ⁿ + A₂ λ₂ⁿ = A₁ (-3)ⁿ + A₂ (2)ⁿ
66
Example
Determine the particular solution for
y[n] + y[n-1] - 6y[n-2] = x[n],   with x[n] = 8u[n]
Assume y_p[n] = K (a constant). Substituting into the difference equation gives
K + K - 6K = 8
which is satisfied by K = -2.
67
Example
Determine the total solution for
y[n] + y[n-1] - 6y[n-2] = x[n],   x[n] = 8u[n],   y[-1] = 1,   y[-2] = -1
y[n] = y_h[n] + y_p[n] = A₁ (-3)ⁿ + A₂ (2)ⁿ - 2
Applying the initial conditions:
y[-1] = -A₁/3 + A₂/2 - 2 = 1
y[-2] = A₁/9 + A₂/4 - 2 = -1
gives A₁ = -1.8 and A₂ = 4.8, so
y[n] = -1.8 (-3)ⁿ + 4.8 (2)ⁿ - 2,   n ≥ 0
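A quick numerical check (a sketch, assuming the reconstructed equation y[n] + y[n-1] - 6y[n-2] = 8u[n] with y[-1] = 1 and y[-2] = -1) that the recursion matches the closed-form total solution.

import numpy as np

N = 8
y = np.zeros(N)
y_m1, y_m2 = 1.0, -1.0              # y[-1], y[-2] (assumed initial conditions)
for n in range(N):
    yp1 = y[n - 1] if n >= 1 else y_m1
    yp2 = y[n - 2] if n >= 2 else y_m2
    y[n] = -yp1 + 6.0 * yp2 + 8.0   # y[n] = -y[n-1] + 6 y[n-2] + x[n], x[n] = 8

n = np.arange(N)
closed = -1.8 * (-3.0)**n + 4.8 * 2.0**n - 2.0
print(np.allclose(y, closed))        # True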
68
Initial-Rest Conditions
69
Zero-input, Zero-state Response
An alternate approach for determining the total solution y[n] of a difference equation
is by computing its zero-input response yzi[n], and zero-state response yzs[n]. Then
the total solution is given by y[n] = yzi[n] + yzs[n].
The zero-input response is obtained by setting the input x[n] = 0 and satisfying the
initial conditions. The zero-state response is obtained by applying the specified input
with all initial conditions set to zero.
70
Example (1/2)
Given y[n] + y[n-1] - 6y[n-2] = x[n], with y[-1] = 1 and y[-2] = -1.
y_h[n] = A₁ (-3)ⁿ + A₂ (2)ⁿ
Zero-input response (set x[n] = 0):
y_zi[0] = -y[-1] + 6y[-2] = -1 - 6 = -7 = A₁ + A₂
y_zi[1] = -y_zi[0] + 6y[-1] = 7 + 6 = 13 = -3A₁ + 2A₂
⇒ A₁ = -27/5 = -5.4,   A₂ = -8/5 = -1.6
Zero-state response: y[n] + y[n-1] - 6y[n-2] = x[n] with x[n] = 8u[n] and zero initial conditions:
y_zs[0] = x[0] = 8 = A₁ + A₂ - 2
y_zs[1] = -y_zs[0] + x[1] = -8 + 8 = 0 = -3A₁ + 2A₂ - 2
⇒ A₁ = 18/5 = 3.6,   A₂ = 32/5 = 6.4
71
Example (2/2) (Mitra)
Zero-input response:
y_zi[n] = -5.4 (-3)ⁿ - 1.6 (2)ⁿ
Zero-state response:
y_zs[n] = 3.6 (-3)ⁿ + 6.4 (2)ⁿ - 2
Total solution:
y[n] = y_zi[n] + y_zs[n] = -1.8 (-3)ⁿ + 4.8 (2)ⁿ - 2
72
Impulse Response
73
Example
Determine the impulse response for
y[n] + y[n-1] - 6y[n-2] = x[n]
With x[n] = δ[n] and zero initial conditions, for n ≥ 0 the response has the homogeneous form
h[n] = A₁ (-3)ⁿ + A₂ (2)ⁿ
For n = 0:  h[0] = δ[0] = 1  ⇒  A₁ + A₂ = 1
For n = 1:  h[1] + h[0] = δ[1] = 0  ⇒  -3A₁ + 2A₂ + 1 = 0
Solving gives A₁ = 3/5 and A₂ = 2/5, so
h[n] = (3/5)(-3)ⁿ + (2/5)(2)ⁿ,   n ≥ 0
74
DSP Applications
• Image Processing: pattern recognition, robotic vision, image enhancement, facsimile, satellite weather maps, animation
• Instrumentation/Control: spectrum analysis, position and rate control, noise reduction, data compression
• Speech/Audio: speech recognition/synthesis, text to speech, digital audio, equalization
• Military: secure communication, radar processing, sonar processing, missile guidance
• Telecommunications: echo cancellation, adaptive equalization, ADPCM transcoders, spread spectrum, video conferencing, data communication
• Biomedical: patient monitoring, scanners, EEG brain mappers, ECG analysis, X-ray storage/enhancement
75
76
Image enhancement
77
More Examples of Applications
• Sound Recording Applications: compressors and limiters, expanders and noise gates, equalizers and filters, noise reduction systems, delay and reverberation systems, special effect circuits
• Speech Processing: speech recognition, speech communication
• Telephone Dialing Applications
• FM Stereo Applications
• Electronic Music Synthesis: subtractive synthesis, additive synthesis
• Echo Cancellation in Telephone Networks
• Interference Cancellation in Wireless Telecommunication
78
Reasons for Using DSP
• Signals and data of many types are increasingly stored in digital computers and transmitted in digital form from place to place; in many cases it makes sense to process them digitally as well.
• Digital processing is inherently stable and reliable. It also offers certain technical possibilities not available with analog methods.
• Rapid advances in IC design and manufacture are producing ever more powerful DSP devices at decreasing cost.
• Flexibility in reconfiguring.
• Better control of accuracy requirements.
• Easily transportable; offline processing is possible.
• Cheaper hardware in some cases.
• In many cases DSP is used to process a number of signals simultaneously. This may be done by interlacing samples obtained from the various signal channels (a technique known as time-division multiplexing, TDM).
• Remaining limitations: processing speed, and the larger bandwidth required by digital transmission.
79
80
System Analysis
81
Frequency Response
82
Ideal lowpass filter-example
83
Non causal to causal
84
Phase distortion and delay
85
Phase delay
86
Phase delay
87
Group delay
88
Group delay
89
The Z-Transform
90
Z-Transform Definition
-Counterpart of the Laplace transform for discrete-time signals
-Generalization of the Fourier Transform
-Fourier Transform does not exist for all signals
Z{x[n]} = Σ_{n=-∞}^{∞} x[n] z⁻ⁿ = X(z)
The inverse z-transform is given by the contour integral
x[n] = (1/(2πj)) ∮_C X(z) z^{n-1} dz
For z = e^{jω} the z-transform reduces to the Fourier transform:
X(e^{jω}) = Σ_{n=-∞}^{∞} x[n] e^{-jωn}
More generally, for z = r e^{jω}:
X(r e^{jω}) = Σ_n x[n] (r e^{jω})⁻ⁿ = Σ_n (x[n] r⁻ⁿ) e^{-jωn}
so the z-transform of x[n] is the Fourier transform of x[n] r⁻ⁿ.
93
Unit Circle in Complex Z-Plane
-The z-transform is a function of the complex z variable
-Convenient to describe on the complex z-plane
-If we plot z=ej for =0 to 2 we get the unit circle
Im
j
ze
Unit Circle
Re
1
z-plane
94
Region of Convergence (ROC)
For any given sequence, the set of value of z for which the z-transform converges
is called the region of convergence, ROC. The criterion for convergence is that
the z-transform be absolutely summable:
Σ_{n=-∞}^{∞} |x[n] z⁻ⁿ| < ∞
If some value of z, say, z = z1, is in the ROC, then all values of z on the circle defined
by |z| = |z1| will also be in the ROC. So, the ROC will consist of a ring in the z-plane
centered about the origin. Its outer boundary will be a circle (or the ROC may extend
outward to infinity), and its inner boundary will be a circle (or it may extend inward
to include the origin).
95
Region of Convergence
Im
r1
r2
Re
z-plane
The region of convergence (ROC) as a ring in the z-plane. For specific cases, the inner boundary
can extend inward to the origin, and the ROC becomes a disk. In other cases, the outer boundary
can extend outward to infinity. 96
Laurent Series
X ( z ) x [ n ] z n
n
A power series of this form is a Laurent series. A Laurent series, and therefore
a z-transform, represents an analytic function at every point within the region
of convergence. This means that the z-transform and all its derivatives must be
continuous functions of z within the region of convergence.
P(z)
X ( z )
Q (z)
Among the most useful z-transforms are those for which X(z) is a rational
function inside the region of convergence, for example where P(z) and Q(z)
are polynomials. The values of z for which X(z) are zero are the zeros of X(z)
and the values for which X(z) is infinite are the poles of X(z).
97
Properties of the ROC
1. The ROC is a ring or disk in the z-plane centered at the origin
0 rR z rL
2. The Fourier transform of x[n] converges absolutely if and only if the ROC of the z-
transform of x[n] includes the unit circle.
3. The ROC cannot contain any poles.
4. If x[n] is a finite-duration sequence, then the ROC is the entire z-plane except possibly
z=0 or z=.
5. If x[n] is a right-handed sequence (zero for n < N1 < ), the ROC extends outward from
the outermost (largest magnitude) finite pole in X(z) to (and possibly including infinity).
6. If x[n] is a left-handed sequence (zero for n > N2 > - ), the ROC extends inward from
the innermost (smallest magnitude) nonzero pole in X(z) to (and possibly including) zero.
7. A two-sided sequence is an infinite-duration sequence that is neither right-sided or left-
sided. If x[n] is a two-sided sequence, the ROC will consist of a ring in the z-plane,
bounded on the interior and exterior by a pole and , consistent with property 3, not
containing any poles.
8. The ROC must be a connected region.
98
Properties of ROC Shown Graphically
Finite-Duration Signals
Infinite-Duration Signals
Causal
|z| > r2
Two-sided
r2 < |z| < r1
99
Example: Right-Sided Sequence
x[n] = aⁿ u[n]
X(z) = Σ_{n=-∞}^{∞} x[n] z⁻ⁿ = Σ_{n=0}^{∞} (a z⁻¹)ⁿ = 1 / (1 - a z⁻¹)
ROC:  |a z⁻¹| < 1,   or   |z| > |a|
100
Example: Left-Sided Sequence
x [ n ] a n u [ n 1]
nonzero for n 1
X ( z ) x [ n ] z n
n
1
az 1 a 1 z
n n
X ( z ) 1
n n 0
1 1
1
1 1
1 a z 1 az
ROC a 1 z 1
or za
101
Example: Sum of Two Exponential
Sequences
n n
1 1
x [ n ] u [ n ] u [ n ]
2 3
X (z) x [ n ] z n
n
1 1
X ( z )
1 1
1 1
2
z 1 1
3
z
ROC 1
2
z 1 and 13 z 1
z 1
2
2 1
z 1 1 1 2 zz
X ( z )
1
z
1 z 1 1 z 1
3 2 12
1 1
z 1 z 1
2 3 2 3
1 1
n
x [ n ] u [ n ] u [ n 1]
3 2
1 n 1 1
u [ n ] 1 z
3 1 1
3
z 3
n
1 1 1
u [ n 1] z
2 1 12 z 1 2
2 zz 1
X (z) 12
z 1 z 1
2 3
1 1
ROC z
3 2
103
Example: Finite Length Sequence
a n 0 n N 1
x [ n ]
otherwise
0
X (z) x [ n ] z n
n
1 az
n 1 N
N 1
az
1
X (z) 1
n 0 1 az
ROC a and z 0
Pole-zero plot for N = 16
104
Z-transforms with the same pole-zero locations
illustrating the different possibilities for the ROC.
Each ROC corresponds to a different sequence.
e) A two-sided sequence
d) A two-sided sequence 105
Common Z-Transform Pairs
Sequence Transform ROC
[n ] 1 All z
Sequence Transform ROC
1 1 [cos ] z 1
u[n ] z1 [cos n ]u [ n ]
1 z 1 0 0
z1
1 [ 2 cos ] z 1 z 2
0
1
u [ n 1] z1 [sin ] z 1
1 z 1 [sin n ]u [ n ] 0
z1
1 [ 2 cos ] z 1 z 2
0
m 0
[n m ] z * 1 [ r cos ] z 1
[cos n ]u [ n ] 0
z r
1
1 [ 2 r cos ] z 1
r 2 z 2
0
n
a u[n ] za 0
1 az 1
[ r sin ] z 1
1 [sin n ]u [ n ] 0
z r
1 [ 2 r cos ] z 1 r 2 z 2
0
za
a n u [ n 1] 1 az 1 0
az
1 an, 0 n N 1 1 a N zN
na n u [ n ] z 0
za 1 az 1
1 az 1 0, otherwise
2
1
az
na n u [ n 1] za
1 az
2
1
106
*All z except 0 (if m > 0) or (if m<0)
Z-Transform Properties (1/2)
Linearity
Time Shifting
x [ n n 0 ] Z z
n0
X (z) ROC = Rx (except for possible addition
or deletion of 0 or )
n Z z
z x [ n ] X ROC z 0 R x
0
z 0
Differentiation of X(z)
Z dX ( z )
nx [ n ] z ROC R x
dz 107
Z-Transform Properties (2/2
Conjugation of a Complex Sequence
x [ n ] Z X *
( z* ) ROC Rx
Time Reversal
1 * 1
x [ n ] X *
* Z ROC
z Rx
Convolution of Sequences
Initial-Value Theorem
x [ 0 ] lim X ( z ) provided that x[n] is zero for n < 0, i.e. that x[n] is causal.
z 108
Inverse z-Transform
Method of inspection – recognize certain transform pairs.
z 1
a k z
k
k
k 1
k 0
A k 1 d k z 1 X ( z )
N
X (z) Ak
1
where
1 d z zdk
k 0 k
109
Example: Second-Order Z-Transform
1
X ( z ) 1
1 1
4
z 1
1 1
2
z 1
z
2
A1 A2
X (z) 1
1
1 z
1 1 12
4
z
1
A 1
14 1
1
1 1
2
1
A 2
12 1
2
1 1
4
1 2
X (z)
1 1
1 1
4
z 1 1
2
z
110
Partial Fraction Expansion
M N N
Ak
If M > N X (z) B z r
r
1 d z
1
r0 k 1 k
If M > N and X(z) has multiple-order poles, specifically a pole of order s at z=di
M N
X ( z ) B z r
N s
Ak Cm
1 d
r 1 m
r0 k 1,k i 1 dkz m 1 z 1
i
1 d sm
s
1
C m sm sm 1 d iw X w
( s m )! ( d i ) dw wd 1
111
Example
X(z) = (1 + z⁻¹)² / ((1 - ½ z⁻¹)(1 - z⁻¹)),   ROC |z| > 1 (right-sided sequence)
Since M = N = 2 and both poles are first order:
X(z) = B₀ + A₁ / (1 - ½ z⁻¹) + A₂ / (1 - z⁻¹)
B₀ = 2  (the ratio of the z⁻² coefficients)
A₁ = (1 + z⁻¹)² / (1 - z⁻¹) evaluated at z⁻¹ = 2, giving A₁ = -9
A₂ = (1 + z⁻¹)² / (1 - ½ z⁻¹) evaluated at z⁻¹ = 1, giving A₂ = 8
X(z) = 2 - 9 / (1 - ½ z⁻¹) + 8 / (1 - z⁻¹)
x[n] = 2 δ[n] - 9 (½)ⁿ u[n] + 8 u[n]
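The partial-fraction expansion above can be cross-checked with scipy.signal.residuez (a sketch; residuez returns the residues, poles, and direct terms of a ratio of polynomials in z⁻¹).

import numpy as np
from scipy.signal import residuez

b = [1.0, 2.0, 1.0]            # numerator (1 + z^-1)^2
a = [1.0, -1.5, 0.5]           # denominator (1 - 0.5 z^-1)(1 - z^-1)
r, p, k = residuez(b, a)
print(r, p, k)                 # residues ~ {8 at pole 1, -9 at pole 0.5}; direct term k ~ [2]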
112
Power Series Expansion
X ( z ) x [ n ] z n
n
Example :
X ( z ) z 2 1 z 1 1 z 2 z 2
1
1
2
1
2
z 1 1
2
z
x[n ] 2 [n 2 ] 1
2
[ n 1] [ n ] 1
[ n 1]
2
113
Example
X ( z ) log 1 az 1
z a
1
( 1)
n 1 n
a z
n
log 1 az
n 1 n
( 1) n 1 a n ,
n 1
x[n ] n
0, n0
114
Contour Integration
Cauchy integral theorem
1, k 1
z
k
1
2 j
dz
C
0. k 1
115
Residue Calculation
If X(z) is a rational function of z, we can write
X z z n 1 (z)
z d0
s
X
1
Res (z)z n 1
at z d 0 d
s 1 !
0
116
Quick Review of LTI Systems
• LTI systems are uniquely determined by their impulse response:
y[n] = Σ_k x[k] h[n-k] = Σ_k h[k] x[n-k]
• We can write the input-output relation also in the z-domain
Y(z) = H(z) X(z)
Y(e^{jω}) = H(e^{jω}) X(e^{jω}),   i.e.   H(e^{jω}) = Y(e^{jω}) / X(e^{jω})
117
Phase Distortion and Delay
• Remember the ideal delay system
hi n n n d
DTFT
H i e
j e
j n d
H id e j n d
a y nk
k b x n k
k
k 0 k 0
a z k
Y z b z k X z
k k
k0 k0
N k M k
ak z Y z b k z X z
k 0 k 0
119
System Function
• Systems described as difference equations have system functions of the
form
1
M
M
Y z bk
z
k b c z 1
k
H z
X z 0
kN
0 k 1
N
a
a k z
k 0
1 d k
z 1
k0 k 1
• Example
H z 1 z
1 2
1 2 z 1 z 2
Y z
3 1 1 1 z 1 3 z 2 X z
1 1
z 1 z
1
2 4 4 8
1
1 1 3 2
z z
Y z 1
2 z 1 z 2 X z
4 8
y n y n 1 y n 2 x n 2 x n 1 x n 2
1 3
4 8
120
Stability and Causality
• A system function does not uniquely specify a system
– Need to know the ROC
• Properties of system gives clues about the ROC
• Causal systems must be right sided
– ROC is outside the outermost pole
• Stable system requires absolute summable impulse response
h n
k
y n y n 1 y n 2 x n
5
2
• System function can be written as
H z
1 1
1
1 z
1 2 z 1
2
123
Introduction
124
Block Diagram Representation of Linear Constant-coefficient
Difference Equations
+
Addition of two sequences
x1[n] x1[n] + x2[n]
x2[n]
125
Block Diagram Representation of Linear Constant-coefficient
Difference Equations
M
bk z k
Y (z)
H (z) k 0
N X (z)
z k
1 a
k 1
k
1
1
M k
H (z) H 2 ( z)H 1( z) b z
k 0 k
N
1 a k
z k
k 1
V ( z) H 1( z) X (z)
Y (z) H 2 ( z )V ( z )
2
M k 1
H ( z ) H 1 ( z ) H 2 ( z ) bk z
k 0 1 N
k
akz
k 1
W ( z ) H 2 ( z ) X ( z )
Y ( z ) H 1 ( z )W ( z )
126
Block diagram representation for a general Nth-order
difference equation:
Direct Form 127
x[n] b0 v[n] y[n]
+ +
z-1 z-1
b1 a1
x[n-1]
+ + y[n-1]
z-1 z-1
x[n-2] y[n-2]
bM-1 aN-1
z-1 + +
x[n-M] z-1
bM aN
y[n-N]
127
Block diagram representation for a general Nth-order
difference equation:
Direct Form 128
128
Combination of delay units (in case N = M)
aN-1 bN-1
+ +
z-1
aN bN
129
Block Diagram Representation of Linear Constant-
coefficient Difference Equations 2
• An implementation with the minimum number of delay elements is
commonly referred to as a canonic form implementation.
• The direct form I is a direct realization of the difference equation
satisfied by the input x[n] and the output y[n], which in turn can be
written directly from the system function by inspection.
• The direct form II or canonic direct form is an rearrangement of the
direct form I in order to combine the delay units together.
130
Signal Flow Graph Representation of Linear Constant-
coefficient Difference Equations
a
Attenuator
x[n] d e y[n]
a z-1
z-1 c
Delay Unit b
z-1
131
Basic Structures for IIR Systems
• Direct Forms
• Cascade Form
• Parallel Form
• Feedback in IIR Systems
132
Basic Structures for IIR Systems
• Direct Forms
N M
y [ n ] a k y [ n k ] b k x[n k ]
k 1 k 0
k
bk z
k 0
H (z) N
a
k
1 k
z
k 1
133
Direct Form I (M = N)
134
Direct Form II (M = N)
135
Direct Form II
136
x[n] y[n]
+ +
z-1 z-1
2 0.75
+ +
z-1 z-1
-0.125
x[n] y[n]
+ +
z-1
0.75 2
+ +
1 2 z 1 z 2
z-1 H ( z )
-0.125 1 0 .75 z 1 0 .125 z 2
137
x[n] y[n]
z-1 z-1
-0.125
x[n] y[n]
z-1
0.75 2
z-1
-0.125
138
Basic Structures for IIR Systems 2
• Cascade Form
(1 g z 1 )
M 1 M 2
(1 h k z 1 )( 1 h kz 1 )
k
H (z) A k 1 k 1
N1
(1 c z 1 ) N 2
(1 d z 1 )( 1 d z 1 )
k 1
k k 1
k k
k 1 1 a 1 k z 1 a 2 k z
where Ns is the largest integer contained in (N+1)/2.
139
w1[n] y1[n] w2[n] y2[n] w3[n] y3[n]
x[n] b01 b02 b03 y[n]
1 2 z 1 z 2 (1 z 1 )( 1 z 1 )
H (z)
1 0 .75 z 1 0 .125 z 2 (1 0 . 5 z 1 )( 1 0 .25 z 1 )
x[n] y[n]
z-1
z-1 0.5 z-1 z-1 0.25
x[n] y[n]
z-1 z-1
0.5 0.25
140
Basic Structures for IIR Systems 3
• Parallel Form
B (1 e z )
NP N1 N2 1
A
H (z) C k
z
k
1 c kz 1
k k
(1 d z 1 )( 1 d z )
1
k0 k 1 k k 1 k k
k 0 k 1 1 a 1k z a 2k z
141
C0
w1[n] b y1[n]
01
a11 z-1 b
11
a21 z-1
a12 z-1 b
12
a22 z-1
w3[n] b y3[n]
03
a13 z-1 b
13
142
1 2
H (z) 1 8 7 8 z 1
z2z 2
1 0 .75 z 1 0 .125 z 2 1 0 .75 z 1 0 .125 z
18 25
8
1 0 .5 z 1
1 0 .25 z 1
x[n] y[n]
-7 8
z-1
0.75 8 x[n] 18 y[n]
z-1 z-1
-0.125 0.5
-25
0.25 z-1
143
Transposed Forms
z-1 z-1
a a
x[n] y[n]
z-1 a
144
x[n] b0 y[n]
z-1 b1 a1 z-1
z-1
z-1 b2 a2
bN-1 aN-1
z-1
z-1 bN aN
x[n] b0 y[n]
z-1 a1 b1 z-1
z-1
z-1 a2 b2
aN-1 bN-1
z-1
z-1 aN bN
145
x[n] b0 y[n]
z-1
a1 b1
z-1
a2 b2
aN-1 bN-1
bN-1 aN-1
z-1
bN aN
146
Basic Network Structures for FIR Systems
• Direct Form
– It is also referred to as a tapped delay line structure or a transversal filter
structure.
• Transposed Form
• Cascade Form
M M S
1 2
H ( z ) h [ n ] z n ( b0 k b1 k z b2k z
)
n0 k 1
147
Direct Form
• For causal FIR system, the system function has only zeros (except for
poles at z = 0) with the difference equation:
y[n] = SMk=0 bkx[n-k]
• It can be interpreted as the discrete convolution of x[n] with the
impulse response
h[n] = bn , n = 0, 1, …, M,
0 , otherwise.
148
Direct Form (Tapped Delay Line or Transversal Filter)
x[n] b0 y[n]
z-1 b1
z-1 b2
bN-1
z-1 bN
149
Transposed Form of FIR Network
x[n]
150
Cascade Form Structure of a FIR System
151
Structures for Linear-Phase FIR Systems
152
Direct form structure for an FIR linear-phase when M is even.
z-1 z-1
z-1
153
Lattice Structures
154
Lattice Structures 2
• FIR Lattice
Y (z) N m
H ( z ) A ( z ) 1 a m z
X (z) m 1
155
Reflection coefficients or PARCOR coefficients structure
156
A recurrence formula for computing A(z) = H(z) = Y(z)/X(z) can be
obtained in terms of intermediate system functions:
a
Ei (z) i
A (z) 1 (i)
m m
m 1
z
i
O
E (z)
By recursive technique:
a (i) = k ,
i i
am(i) = am(i-1) - ki ai-m(i-1) ,
m = 1, 2, ..., (i-1)
157
Example:
158
x[n] y[n]
-0.6728 +0.182 -0.576
-0.6728 +0.182 -0.576
159
All-Pole Lattice
kN-2
kN k1
-kN -k -k1
z-1 N-2
z-1 z-1
e~N[n] e~N-1[n] e~1[n] e~ [n] 160
Basic all-pole lattice structures
• Three-multiplier form
• Four-multiplier, normalized form
N
cos i
i 1
H (z)
A( z)
(1 k i )
H (z) i 1
A( z)
161
ei[n] ei-1[n]
-ki ki
Three-multiplier form
e’i[n] (1 - ki2) e’i-1[n]
-sin qi sin qi
Four-multiplier, normalized
form
e’i[n] cos qi e’i-1[n]
162
Lattice Systems with Poles and Zeros
N i 1
Y (z) c z A (z ) B(z)
H (z)
i i
X (z) i0 A( z) A( z )
N
m
B ( z ) bm z
m 0
(i)
bm c m ci a i m
i m 1
cN cN-1 cN-2 c1 c0
y[n]
163
Example of lattice IIR filter with poles and zeros
x[n] y[n]
0.9
z-1 3
-0.64
z-1 3
0.576
z-1
x[n]
y[n]
164
UNIT-2
DFS , DFT & FFT
165
Fourier representation of signals
166
Fourier representation of signals
167
Fourier representation of signals
168
Discrete complex exponentials
169
Discrete Fourier Series
• Given a periodic sequence x̃[n] with period N, so that
x̃[n] = x̃[n + rN] for any integer r
170
Discrete Fourier Series Pair
• A periodic sequence can be represented in terms of its Fourier series coefficients:
x̃[n] = (1/N) Σ_{k=0}^{N-1} X̃[k] e^{j(2π/N)kn}
X̃[k] = Σ_{n=0}^{N-1} x̃[n] e^{-j(2π/N)kn}
With W_N = e^{-j(2π/N)}:
• Analysis equation:   X̃[k] = Σ_{n=0}^{N-1} x̃[n] W_N^{kn}
• Synthesis equation:  x̃[n] = (1/N) Σ_{k=0}^{N-1} X̃[k] W_N^{-kn}
171
Fourier series for discrete-time periodic
signals
172
Discrete-time Fourier series
(DTFS)
173
Fourier representation of aperiodic
signals
174
Discrete-time Fourier transform
(DTFT)
175
Discrete Fourier Transform
176
Discrete Fourier Transform
177
Discrete Fourier Transform
178
Discrete Fourier Transform
179
Discrete Fourier Transform
180
Summary of properties
181
DFT Pair & Properties
• The Discrete Fourier Transform pair
X[k] = Σ_{n=0}^{N-1} x[n] e^{-j(2π/N)kn},   k = 0, 1, ..., N-1
x[n] = (1/N) Σ_{k=0}^{N-1} X[k] e^{j(2π/N)kn},   n = 0, 1, ..., N-1
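A direct implementation of the DFT pair above, checked against NumPy's FFT (a small sketch with an arbitrary test signal).

import numpy as np

N = 8
x = np.random.randn(N) + 1j * np.random.randn(N)

n = np.arange(N)
k = n.reshape(-1, 1)
W = np.exp(-2j * np.pi * k * n / N)      # DFT matrix, W[k, n] = e^{-j 2 pi k n / N}

X = W @ x                                # analysis equation
x_rec = (W.conj().T @ X) / N             # synthesis equation

print(np.allclose(X, np.fft.fft(x)))     # True
print(np.allclose(x_rec, x))             # True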
182
Circular convolution
183
Modulo Indices and Periodic
Repetition
184
Overlap During Periodic
Repetition
185
Periodic repetition: N=4
186
Periodic repetition: N=4
187
Modulo Indices and the Periodic
Repetition
188
Modulo Indices and the Periodic
Repetition
189
Modulo Indices and the Periodic
Repetition
190
Modulo Indices and the Periodic
Repetition
191
Circular convolution
192
Circular convolution
193
Circular convolution-another
interpretation
194
Using DFT for Linear
Convolution
195
Using DFT for Linear Convolution
196
Using DFT for Linear Convolution
197
Using DFT for Linear
Convolution
198
Using DFT for Linear
Convolution
199
Using DFT for circular
Convolution
200
Using DFT for circular
Convolution
201
Using DFT for circular
Convolution
202
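A sketch of N-point circular convolution computed two ways: by the modulo-N sum, and by multiplying DFTs (the circular convolution property); the sequences are arbitrary examples.

import numpy as np

def circ_conv(x, h):
    # y[n] = sum_m x[m] h[(n - m) mod N]
    N = len(x)
    return np.array([sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)])

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 0.0, -1.0, 0.5])

y_time = circ_conv(x, h)
y_dft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))   # DFT-domain multiplication
print(np.allclose(y_time, y_dft))                              # True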
Filtering of Long Data Sequences
203
Filtering of Long Data Sequences
204
Over-lap Add
205
Over-lap Add
206
Over-lap Add
207
Over-lap Add
208
Over-lap Add
209
Over-lap Add
210
Over-lap Add
211
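A minimal overlap-add sketch for filtering a long data sequence block by block; the block length L and FFT size are illustrative choices, not values from the slides.

import numpy as np

def overlap_add(x, h, L=64):
    # filter long x with FIR h using L-sample input blocks and N-point FFTs
    M = len(h)
    N = L + M - 1                          # linear convolution length per block
    H = np.fft.rfft(h, N)
    y = np.zeros(len(x) + M - 1)
    for start in range(0, len(x), L):
        block = x[start:start + L]
        Y = np.fft.rfft(block, N) * H      # length-N circular conv == linear conv here
        y[start:start + N] += np.fft.irfft(Y, N)[:len(block) + M - 1]
    return y

x = np.random.randn(1000)
h = np.ones(16) / 16.0
print(np.allclose(overlap_add(x, h), np.convolve(x, h)))   # True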
Over-lap save
212
Over-lap save
213
Over-lap save
214
Over-lap save input segment stage
215
Over-lap save input segment stage
216
Over-lap save input segment stage
217
Over-lap save filtering stage
218
Over-lap save output blocks
219
Over-lap save output blocks
220
Over-lap save output blocks
221
Over-lap save
222
Over-lap save
223
Relationships between CTFT,
DTFT, & DFT
224
Relationships between CTFT,
DTFT, & DFT
225
Relationships between CTFT, DTFT,
& DFT
226
Fast Fourier Transform
227
Discrete Fourier Transform
• The DFT pair was given as
X[k] = Σ_{n=0}^{N-1} x[n] e^{-j(2π/N)kn}      x[n] = (1/N) Σ_{k=0}^{N-1} X[k] e^{j(2π/N)kn}
– Periodicity in n and k:
e^{-j(2π/N)kn} = e^{-j(2π/N)k(n+N)} = e^{-j(2π/N)(k+N)n}
228
Direct computation of DFT
229
Direct computation of DFT
230
231
FFT
232
Decimation-In-Time FFT Algorithms
• Makes use of both symmetry and periodicity
• Consider special case of N an integer power of 2
• Separate x[n] into two sequence of length N/2
– Even indexed samples in the first sequence
– Odd indexed samples in the other sequence
X[k] = Σ_{n=0}^{N-1} x[n] e^{-j(2π/N)kn} = Σ_{n even} x[n] e^{-j(2π/N)kn} + Σ_{n odd} x[n] e^{-j(2π/N)kn}
X[k] = Σ_{r=0}^{N/2-1} x[2r] W_N^{2rk} + Σ_{r=0}^{N/2-1} x[2r+1] W_N^{(2r+1)k}
     = Σ_{r=0}^{N/2-1} x[2r] W_{N/2}^{rk} + W_N^k Σ_{r=0}^{N/2-1} x[2r+1] W_{N/2}^{rk}
     = G[k] + W_N^k H[k]
• G[k] and H[k] are the N/2-point DFTs of the even- and odd-indexed subsequences
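A compact recursive radix-2 decimation-in-time FFT following the G[k] + W_N^k H[k] split above (a sketch; the input length must be a power of two).

import numpy as np

def fft_dit(x):
    # recursive radix-2 decimation-in-time FFT; len(x) must be a power of 2
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    G = fft_dit(x[0::2])                               # DFT of even-indexed samples
    H = fft_dit(x[1::2])                               # DFT of odd-indexed samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)    # twiddle factors W_N^k
    return np.concatenate([G + W * H, G - W * H])      # butterfly combinations

x = np.random.randn(8)
print(np.allclose(fft_dit(x), np.fft.fft(x)))          # True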
233
Decimation In Time
234
Decimation In Time Cont’d
• After two steps of decimation in time
235
Decimation-In-Time FFT Algorithm
• Final flow graph for 8-point decimation in time
• Complexity:
– Nlog2N complex multiplications and additions
236
Butterfly Computation
• Flow graph constitutes of butterflies
237
In-Place Computation
• Decimation-in-time flow graphs require two sets of registers
– Input and output for each stage
• Note the arrangement of the input indices
– Bit reversed indexing
X₀[0] = x[0]    (000 ← 000)
X₀[1] = x[4]    (001 ← 100)
X₀[2] = x[2]    (010 ← 010)
X₀[3] = x[6]    (011 ← 110)
X₀[4] = x[1]    (100 ← 001)
X₀[5] = x[5]    (101 ← 101)
X₀[6] = x[3]    (110 ← 011)
X₀[7] = x[7]    (111 ← 111)
238
Decimation-In-Frequency FFT Algorithm
• The DFT equation
N 1
X[k] = Σ_{n=0}^{N-1} x[n] W_N^{nk}
• Split the DFT equation into even and odd frequency indexes:
X[2r] = Σ_{n=0}^{N-1} x[n] W_N^{2rn} = Σ_{n=0}^{N/2-1} x[n] W_N^{2rn} + Σ_{n=N/2}^{N-1} x[n] W_N^{2rn} = Σ_{n=0}^{N/2-1} (x[n] + x[n + N/2]) W_{N/2}^{rn}
X[2r+1] = Σ_{n=0}^{N/2-1} (x[n] - x[n + N/2]) W_N^{n(2r+1)}
239
Decimation-In-Frequency FFT Algorithm
• Final flow graph for 8-point decimation in frequency
240
UNIT-3
IIR filters
241
Filter Design Techniques
• Any discrete-time system that modifies certain frequencies
• Frequency-selective filters pass only certain frequencies
• Filter Design Steps
– Specification
• Problem or application specific
– Approximation of specification with a discrete-time system
• Our focus is to go from spec to discrete-time system
– Implementation
• Realization of discrete-time systems depends on target technology
• We already studied the use of discrete-time systems to implement a
continuous-time system
– If our specifications are given in continuous time we can use
x n y n
xc(t) C/D H(ej) D/C yr(t)
H e j H c j / T
242
Digital Filter Specifications
243
Digital Filter Specifications
• These filters are unrealisable because (any one of the following is sufficient):
– their impulse responses are infinitely long and non-causal
– their amplitude responses cannot be equal to a constant over a band of frequencies
Another perspective that provides some
understanding can be obtained by looking at the
ideal amplitude squared.
244
Digital Filter Specifications
• The realisable squared amplitude response transfer
function (and its differential) is continuous in
• Such functions
– if IIR can be infinite at point but around that
point cannot be zero.
– if FIR cannot be infinite anywhere.
• Hence previous differential of ideal response is
unrealisable
245
Digital Filter Specifications
• For example the magnitude response of a digital
lowpass filter may be given as indicated below
246
Digital Filter Specifications
• In the passband we require |G(e^{jω})| ≈ 1 with a deviation ±δ_p:
1 - δ_p ≤ |G(e^{jω})| ≤ 1 + δ_p,   |ω| ≤ ω_p
• In the stopband we require |G(e^{jω})| ≈ 0 with a deviation δ_s:
|G(e^{jω})| ≤ δ_s,   ω_s ≤ |ω| ≤ π
247
248
Digital Filter Specifications
249
Digital Filter Specifications
251
IIR Digital Filter Design
Standard approach
(1) Convert the digital filter specifications into
an analogue prototype lowpass filter
specifications
(2) Determine the analogue lowpass filter
transfer function H a ( s )
(3) Transform H a ( s ) by replacing the complex
variable to the digital transfer function
G (z)
252
IIR Digital Filter Design
• This approach has been widely used for the
following reasons:
(1) Analogue approximation techniques are
highly advanced
(2) They usually yield closed-form
solutions
(3) Extensive tables are available for
analogue filter design
(4) Very often applications require digital
simulation of analogue systems
253
IIR Digital Filter Design
255
Specification for effective frequency response of a continuous-time lowpass
filter and its corresponding specifications for discrete-time system.
dp or d1 passband ripple
ds or d2 stopband ripple
Wp, wp passband edge frequency
Ws, ws stopband edge frequency
e2 passband ripple parameter
1 – dp = 1/1 + e2
BW bandwidth = wu – wl
wc 3-dB cutoff frequency
wu, wl upper and lower 3-dB cutoff
frequensies
Dw transition band = |wp – ws|
Ap passband ripple in dB
= 20log10(1 dp)
As stopband attenuation in dB
= -20log10(ds)
256
Design of Discrete-Time IIR Filters
257
Reasons of Design of Discrete-Time IIR Filters from
Continuous-Time Filters
258
Characteristics of Commonly Used Analog Filters
• Butterworth Filter
• Chebyshev Filter
– Chebyshev Type I
– Chebyshev Type II of Inverse Chebyshev Filter
259
Butterworth Filter
• Lowpass Butterworth filters are all-pole filters characterized by the magnitude-squared
frequency response
where N is the order of the filter, Wc is its – 3-dB frequency (cutoff frequency), Wp is
the bandpass edge frequency, and 1/(1 + e2) is the band-edge value of |H(W)|2.
• Thus the Butterworth filter is completely characterized by the parameters N, d2, e, and
the ratio Ws/Wp.
260
Butterworth Lowpass Filters
• Passband is designed to be maximally flat
• The magnitude-squared function is of the form
|H_c(jΩ)|² = 1 / (1 + (Ω/Ω_c)^{2N})
H_c(s) H_c(-s) = 1 / (1 + (s/(jΩ_c))^{2N})
• The 2N poles of H_c(s)H_c(-s) are
s_k = (-1)^{1/(2N)} (jΩ_c) = Ω_c e^{j(π/(2N))(2k + N + 1)},   k = 0, 1, ..., 2N-1
261
Frequency response of lowpass Butterworth filters
262
Chebyshev Filters
• The magnitude squared response of the analog lowpass Type I Chebyshev
filter of Nth order is given by:
263
Chebyshev Filters
• Equiripple in the passband and monotonic in the stopband
• Or equiripple in the stopband
2 1
and monotonic in the passband
Hc j V N x cos N cos 1
x
1 V N /
2 2
c
264
Frequency response of
lowpass Type I Chebyshev filter
Frequency response of
lowpass Type II Chebyshev filter
265
N = log10[( 1 - d 2 + 1 – d 2(1 + e2))/ed ]/log [(W /W ) + (W /W )2 – 1 ]
2 2 2 10 s p s p
= [cosh-1(d/e)]/[cosh-1(Ws/Wp)]
• The poles of a Type I Chebyshev filter lie on an ellipse in the s-plane with major
axis r1 = Wp{(b2 + 1)/2b] and minor axis r1 = Wp{(b2 - 1)/2b] where b is related to
e according to
b = {[ 1 + e2 + 1]/e}1/N
• The zeros of a Type II Chebyshev filter are located on the imaginary axis.
266
Type I: pole positions are
xk = r2cosfk
yk = r1sinfk
fk = [p/2] + [(2k + 1)p/2N]
r1 = Wp[b2 + 1]/2b
r2 = Wp[b2 – 1]/2b
b = {[ 1 + e2 + 1]/e}1/N
vk = Wsxk/ xk 2 + yk 2
wk = Wsyk/ xk2 + y 2k
d²y(t)/dt² |_{t=nT} ≈ (y[n] - 2y[n-1] + y[n-2]) / T²
for which s² maps to [(1 - z⁻¹)/T]². So, for the kth derivative of y(t), s^k maps to [(1 - z⁻¹)/T]^k.
268
Approximation of Derivative Method
• Hence, the system function for the digital IIR filter obtained as a result of the
approximation of the derivatives by finite difference is
H(z) = Ha(s)|s=(z-1)/Tz
• It is clear that points in the LHP of the s-plane are mapped into the
corresponding points inside the unit circle in the z-plane and points in the
RHP of the s-plane are mapped into points outside this circle.
– Consequently, a stable analog filter is transformed into a stable digital filter due
to this mapping property.
jW
Unit circle
s-plane
z-plane
269
Example: Approximation of derivative method
T 2
H ( z ) 1 0 .2 T 9 .01 T
1
2
1
2 ( 1 0 .1 T )
z 1
z
2
1 0 .2 T 9 .01 T 2
1 0 .2 T 9 .01 T 2
270
Filter Design by Impulse Invariance
• Remember impulse invariance
– Mapping a continuous-time impulse response to discrete-time
– Mapping a continuous-time frequency response to discrete-time
h[n] = T_d h_c(nT_d)
H(e^{jω}) = Σ_{k=-∞}^{∞} H_c( j ω/T_d + j 2πk/T_d )
271
Impulse Invariance of System Functions
• Develop impulse invariance relation between system functions
• Partial fraction expansion of transfer function
H_c(s) = Σ_{k=1}^{N} A_k / (s - s_k)
• Corresponding impulse response:
h_c(t) = Σ_{k=1}^{N} A_k e^{s_k t} for t ≥ 0, and 0 for t < 0
• Impulse response of the discrete-time filter:
h[n] = T_d h_c(nT_d) = Σ_{k=1}^{N} T_d A_k e^{s_k T_d n} u[n] = Σ_{k=1}^{N} T_d A_k (e^{s_k T_d})ⁿ u[n]
• System function:
H(z) = Σ_{k=1}^{N} T_d A_k / (1 - e^{s_k T_d} z⁻¹)
• A pole at s = s_k in the s-domain transforms into a pole at z = e^{s_k T_d}
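A sketch of the impulse-invariance mapping for a single real pole H_c(s) = A/(s - s_k): the sampled, scaled impulse response matches that of the digital filter T_d·A/(1 - e^{s_k T_d} z⁻¹). The residue, pole, and T_d values are arbitrary.

import numpy as np
from scipy.signal import lfilter

A, sk, Td = 1.0, -2.0, 0.1                 # example residue, pole, and sampling period
n = np.arange(50)

# h[n] = Td * h_c(n Td), with h_c(t) = A e^{sk t} u(t)
h_sampled = Td * A * np.exp(sk * n * Td)

# equivalent digital filter: H(z) = Td*A / (1 - e^{sk Td} z^-1)
b = [Td * A]
a = [1.0, -np.exp(sk * Td)]
delta = np.zeros(50); delta[0] = 1.0
h_filter = lfilter(b, a, delta)

print(np.allclose(h_sampled, h_filter))    # True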
272
Impulse Invariant Algorithm
273
Example: Impulse invariant method
The analog filter has a zero at s = - 0.1 and a pair of complex conjugate poles at pk = - 0.1 j3.
Thus,
H s
1 1
a
2
2
s 0 .1 j 3 s 0 .1 j 3
H z
1 1
Then
2
0 .1 T 1 2
0 .1 T j 3T 1
1 e 1 e
j 3T
e z e z
274
Frequency response
of digital filter.
Frequency response
of analog filter.
275
Disadvantage of previous
techniques: frequency
warping aliasing effect
and error in specifications
of designed filter (frequencies)
So, prewarping of frequency
is considered.
276
Example
• Impulse invariance applied to Butterworth
0 . 89125
H e j 1 0 0 . 2
H e j
0.17783 0 . 3
0 . 89125 H j 1 0 0 . 2
H j 0.17783 0 . 3
H
c
j
2
1 j
1
/ j
c
2 N
• Determine N and c to satisfy these conditions
277
Example Cont’d
• Satisfy both constrains
0 .2 2 N 1 2 0 .3 2 N 1 2
1 and 1
c 0 . 89125 c 0 . 17783
279
Filter Design by Bilinear Transformation
• Get around the aliasing problem of impulse invariance
• Map the entire s-plane onto the unit-circle in the z-plane
– Nonlinear transformation
– Frequency response subject to warping
• Bilinear transformation
s = (2/T_d) (1 - z⁻¹) / (1 + z⁻¹)
• Transformed system function
H(z) = H_c( (2/T_d) (1 - z⁻¹)/(1 + z⁻¹) )
• Again T_d cancels out so we can ignore it
• We can solve the transformation for z as
z = (1 + (T_d/2)s) / (1 - (T_d/2)s) = (1 + σT_d/2 + jΩT_d/2) / (1 - σT_d/2 - jΩT_d/2),   s = σ + jΩ
280
Bilinear Transformation
• On the unit circle the transform becomes
s = (2/T_d) (1 - e^{-jω}) / (1 + e^{-jω}) = (2j/T_d) tan(ω/2)
• Which yields
Ω = (2/T_d) tan(ω/2)    or    ω = 2 arctan(Ω T_d / 2)
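A bilinear-design sketch using SciPy: prewarp the desired digital cutoff with Ω = (2/T_d) tan(ω/2), design an analog Butterworth prototype, then apply scipy.signal.bilinear (which uses s = 2·f_s·(z-1)/(z+1)). The order and cutoff are arbitrary example values.

import numpy as np
from scipy import signal

Td = 1.0
wc_digital = 0.2 * np.pi                      # desired digital cutoff (rad/sample), example value
Wc = (2.0 / Td) * np.tan(wc_digital / 2.0)    # prewarped analog cutoff (rad/s)

b_a, a_a = signal.butter(4, Wc, analog=True)  # analog Butterworth prototype, order 4 (example)
b_d, a_d = signal.bilinear(b_a, a_a, fs=1.0 / Td)

w, H = signal.freqz(b_d, a_d)
# because of the prewarping, the -3 dB point of |H| lands near wc_digital
print(w[np.argmin(np.abs(np.abs(H) - 1 / np.sqrt(2)))], wc_digital)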
281
Bilinear Transformation
282
Example
• Bilinear transform applied to Butterworth
0 . 89125 H e j 1 0 0 . 2
H e j 0.17783 0 . 3
• Apply bilinear transformation to specifications
2 0 .2
0 . 89125 H j 1 0 tan
Td 2
2 0 .3
H j 0.17783 tan
Td 2
• We can assume Td=1 and apply the specifications to
2 1
H j
1 /
2N
• To get c
c
283
• Example Cont’d
Solve N and c
1
2
2
log 1 1
1 0 . 89125
0 . 17783
0 . 766
N 5 . 305 6 c
2 log tan 0 . 15 tan 0 . 1
• Resulting in
0 . 20238
H s
c
s 2
0 . 3996 s 0 . 5871 s 2
1 . 0836 s 0 . 5871 s 2
1 . 4802 s 0 . 5871
H z
1 1 . 2686 z
1
0 . 7051 z 2 1 1 . 0106 z 1 0 . 3583 z 2
1
1 0 . 9044 z
1
0 . 2155 z 2
284
Example Cont’d
285
IIR Digital Filter: The bilinear
transformation
286
Bilinear Transformation
• Mapping of s-plane into the z-plane
287
Bilinear Transformation
• For z e j
with unity scalar we have
1 e j
j j tan( / 2 )
j
1 e
or tan( / 2 )
288
Bilinear Transformation
• Mapping is highly nonlinear
• Complete negative imaginary axis in the s-
plane from to 0 is mapped into
the lower half of the unit circle in the z-plane
from z 1 to z 1
• Complete positive imaginary axis in the s-
plane from 0 to is mapped into the
upper half of the unit circle in the z-plane
from z 1 toz 1
289
Bilinear Transformation
• Nonlinear mapping introduces a distortion
in the frequency axis called frequency
warping
• Effect of warping shown below
290
Spectral Transformations
• To transform G L ( z ) a given lowpass transfer
function to another transfer function G D ( zˆ)
that may be a lowpass, highpass, bandpass or
bandstop filter (solutions given by
Constantinides)
• 1 has been used to denote the unit delay in
z 1
the prototype lowpass filter G L ( z ) and zˆ
to denote the unit delay in the transformed
filter G D ( zˆ) to avoid confusion
291
Spectral Transformations
• Unit circles in z- and zˆ -planes defined by
z e j , zˆ e jˆ
• Transformation from z-domain to
zˆ-domain given by
• Then
z F ( zˆ)
G D ( zˆ) G L { F ( zˆ)}
292
Spectral Transformations
• From z F ( zˆ) , thusz F ( zˆ) , hence
1, if z 1
F ( zˆ) 1, if z 1
1, if z 1
• Therefore1 / F ( zˆ) must be a stable allpass function
1 L 1 * zˆ
𝑙
, 𝑙 1
F ( zˆ) 𝑙 1 zˆ 𝑙
293
Lowpass-to-Lowpass
Spectral Transformation
• To transform a lowpass filterG L ( z ) with a cutoff
frequency c to another lowpass filter G D ( zˆ)
with a cutoff frequency ˆc, the transformation is
1 1 1 zˆ
z
F ( zˆ) zˆ
sin ( c ˆc ) / 2
• Example - Consider the lowpass digital filter
0 .0662 (1 z 1 ) 3
G L ( z ) 1 1 2
(1 0 .2593 z )( 1 0 .6763 z 0 .3917 z )
which has a passband from dc to 0 . 25 with
a 0.5 dB ripple
• Redesign the above filter to move the
passband edge to
0 . 35 295
Lowpass-to-Lowpass
Spectral Transformation
• Here sin( 0 .05 )
0 .1934
sin( 0 .3 )
•
Hence, the desired lowpass transfer function is
( zˆ) G L ( z )
1
GD 1 zˆ 0 .1934
z 1
1 0 .1934 zˆ
-10
Gain, dB
G (z) G (z)
L D
-20
-30
-40
0 0.2 0.4 0.6 0.8 1
Lowpass-to-Lowpass
Spectral Transformation
296
Lowpass-to-Lowpass
Spectral Transformation
• The lowpass-to-lowpass transformation
1 1 1 zˆ
z
F ( zˆ) zˆ
297
Lowpass-to-Highpass
Spectral Transformation
• Desired transformation
1
z 1 zˆ
1
1 zˆ
• The transformation parameter is given by
cos ( c ˆc ) / 2
cos ( c ˆc ) / 2
where c is the cutoff frequency of the lowpass
filter and ˆc is the cutoff frequency of the desired
highpass filter
298
Lowpass-to-Highpass
Spectral Transformation
• Example - Transform the lowpass filter
0 .0662 (1 z 1 ) 3
G L ( z ) 1 1 2
(1 0 .2593 z )( 1 0 .6763 z 0 .3917 z )
Gain, dB
Normalized frequency
300
Lowpass-to-Highpass
Spectral Transformation
• The lowpass-to-highpass transformation can
also be used to transform a highpass filter with
a cutoff at c to a lowpass filter with a cutoff
at ˆc
• and transform a bandpass filter with a center
frequency at o to a bandstop filter with a
center frequency at ˆo
301
Lowpass-to-Bandpass
Spectral Transformation
• Desired transformation
2 2 1
1
zˆ zˆ
z 1 1
1 1
2
2 1
zˆ 1
zˆ
1 1
302
Lowpass-to-Bandpass
Spectral Transformation
• The parameters and are given by
303
Lowpass-to-Bandpass
Spectral Transformation
• Special Case - The transformation can be
simplified if c ˆc 2 ˆc 1
• Then the transformation reduces to
1
zˆ
z 1 zˆ1
1
1 zˆ
where cos ˆo with ˆo
denoting the
desired center frequency of the bandpass filter
304
Lowpass-to-Bandstop
Spectral Transformation
• Desired transformation
2 2 1 1
zˆ zˆ
1 1 1
z
1 2 2 1
zˆ 1
zˆ
1 1
305
Lowpass-to-Bandstop
Spectral Transformation
• The parameters and are given by
cos (ˆ c 2 ˆc 1 ) / 2
cos (ˆ c 2 ˆc 1 ) / 2
306
UNIT-4
FIR Filters
307
Selection of Filter Type
h[ n ] h[ N n ]
309
Selection of Filter Type
• Advantages in using an FIR filter -
(1) Can be designed with exact linear phase
(2) Filter structure always stable with quantised
coefficients
• Disadvantages in using an FIR filter - Order of an
FIR filter is considerably higher than that of an
equivalent IIR filter meeting the same
specifications; this leads to higher computational
complexity for FIR
310
FIR Filter Design
Digital filters with finite-duration impulse response (all-zero, or FIR filters)
have both advantages and disadvantages compared to infinite-duration
impulse response (IIR) filters.
FIR filters have the following primary advantages: they can be designed with exactly linear phase, and the filter structure is always stable, even with quantized coefficients.
The primary disadvantage of FIR filters is that they often require a much
higher filter order than IIR filters to achieve a given level of performance.
Correspondingly, the delay of these filters is often much greater than for an
equal performance IIR filter.
FIR Design
FIR Digital Filter Design
Three commonly used approaches to FIR
filter design -
(1) Windowed Fourier series approach
(2) Frequency sampling approach
(3) Computer-based optimization methods
312
Finite Impulse Response Filters
• The transfer function is given by
N 1
n
H ( z ) h ( n ). z
n0
313
FIR: Linear phase
314
Linear Phase
• What is linear phase?
• Ans: The phase is a straight line in the passband of
the system.
• Example: linear phase (all pass system)
• I Group delay is given by the negative of the slope
of the line
315
Linear phase
• linear phase (low pass system)
• Linear characteristics only need to pertain to
the passband frequencies only.
316
FIR: Linear phase
• For Linear Phase t.f. (order N-1)
• h (n) h( N 1 n )
H ( z ) h ( n ). z n h ( n ). z n
n0 n N
2
N 1 N 1
2 2
h ( n ). z n
h ( N 1 n ). z ( N 1 n )
n0 n0
N 1
2
h ( n ) z n
z m m N 1 n
n0 317
FIR: Linear phase
• for N odd:
N 2 1
1
N 1 N 2 1
n
H ( z ) h ( n ). z z m h z
n0 2
n0 2
318
FIR: Linear phase
• II) While for –ve sign
j T j T N 1 N 2 1
H (e ) e 2
j 2 h ( n ). sin T n N 1
.
n0 2
H (e j T
2
) e h
2
N 3
2 N 1
2 h ( n ). cos T n
n0 2 319
FIR: Linear phase
• IV) While with a –ve sign
j T j T N 1
2 N 2 3
H (e ) j .h ( n ). sin N 1
2 T n
e
n0 2
321
Summary of Properties
K
H F a k cos k
j 0 j N / 2
e e
k0
Type I II III IV
Order N even odd even odd
Symmetry symmetric symmetric anti-symmetric anti-symmetric
Period 2 4 2 4
0 0 0 /2 /2
F() 1 cos(/2) sin() sin(/2)
K N/2 (N-1)/2 (N-2)/2 (N-1)/2
H(0) arbitrary arbitrary 0 0
H() arbitrary 0 0 arbitrary
323
Design of FIR filters: Windows
• Simplest way of designing FIR filters
• Method is all discrete-time no continuous-time involved
• Start with ideal frequency response
H_d(e^{jω}) = Σ_{n=-∞}^{∞} h_d[n] e^{-jωn}      h_d[n] = (1/2π) ∫_{-π}^{π} H_d(e^{jω}) e^{jωn} dω
• Then truncate the (generally infinite, non-causal) ideal impulse response with a window:
h[n] = h_d[n] w[n],   where   w[n] = 1 for 0 ≤ n ≤ M, and 0 otherwise (rectangular case)
324
Properties of Windows
• Prefer windows that concentrate around DC in frequency
– Less smearing, closer approximation
• Prefer window that has minimal span in time
– Less coefficient in designed filter, computationally efficient
• So we want concentration in time and in frequency
– Contradictory requirements
• Example: Rectangular window
W(e^{jω}) = Σ_{n=0}^{M} e^{-jωn} = (1 - e^{-jω(M+1)}) / (1 - e^{-jω}) = e^{-jωM/2} sin(ω(M+1)/2) / sin(ω/2)
325
Windowing distortion
• increasing window length generally reduces the
width of the main lobe
• peak of sidelobes is generally independent of M
326
Windows
Commonly used windows
• Rectangular:  w[n] = 1
• Bartlett (triangular):  w[n] = 1 - 2|n - (N-1)/2| / (N-1)
• Hanning:  w[n] = (1/2) (1 - cos(2πn/(N-1)))
• Hamming:  w[n] = 0.54 - 0.46 cos(2πn/(N-1))
• Blackman:  w[n] = 0.42 - 0.5 cos(2πn/(N-1)) + 0.08 cos(4πn/(N-1))
• Kaiser:  w[n] = I₀( β √(1 - (2n/(N-1) - 1)²) ) / I₀(β)
(all defined for 0 ≤ n ≤ N-1 and zero otherwise)
327
Rectangular Window
• Narrowest main lob
– 4/(M+1)
– Sharpest transitions at
discontinuities in frequency
328
Bartlett (Triangular) Window
• Medium main lob
– 8/M
• Side lobs
– -25 dB
• Hamming window
performs better
• Simple equation
2n / M 0 n M /2
w n 2 2 n / M M /2 n M
else
0
329
Hanning Window
• Medium main lob
– 8/M
• Side lobs
– -31 dB
• Same complexity as
Hamming
1 2 n
w n 1 cos 0 n M
2 M
0 else
330
Hamming Window
• Medium main lob
– 8/M
331
Blackman Window
• Large main lob
– 12/M
• Complex equation
2 n 4 n
0 . 42 0 . 5 cos 0 . 08 cos 0 n M
w n M M
0 else
332
Kaiser Window Filter Design Method
• Parameterized equation
forming a set of windows
– Parameter to change main-lob
width and side-lob area trade-off
2
n M / 2
I 1
w n 0
M /2
0 n M
I
0
0 else
33
Comparison of windows
334
Kaiser window
• Kaiser window
β Transition Min. stop
width (Hz) attn dB
2.12 1.5/N 30
4.54 2.9/N 50
6.76 4.3/N 70
8.96 5.7/N 90
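A windowed-sinc lowpass design sketch: truncate the ideal impulse response with a window from the list above (Hamming here) and compare against SciPy's firwin. Filter order and cutoff are arbitrary example values.

import numpy as np
from scipy import signal

M = 50                                   # filter order (length M+1), example value
wc = 0.5 * np.pi                         # cutoff frequency in rad/sample

n = np.arange(M + 1)
# ideal lowpass impulse response delayed by M/2: h_d[n] = sin(wc (n - M/2)) / (pi (n - M/2))
hd = (wc / np.pi) * np.sinc((wc / np.pi) * (n - M / 2))
h = hd * np.hamming(M + 1)               # apply the Hamming window

# firwin performs the same windowed-sinc design (cutoff normalized to Nyquist)
h_ref = signal.firwin(M + 1, wc / np.pi, window='hamming', scale=False)
print(np.allclose(h, h_ref))             # True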
335
Example
• Lowpass filter of length 51 and ω_c = π/2
(Figures: magnitude responses in dB of lowpass filters designed using the Hann and Hamming windows.)
Kaiser’s Formula:
N ≅ 1 + (-20 log₁₀(√(δ_p δ_s)) - 13) / (14.6 (ω_s - ω_p)/(2π))
338
UNIT-5
Multirate signal processing &
Finite Word length Effects
339
Single vs Multirate Processing
340
Basic Multirate operations: Decimation
and Interpolation
341
M-fold Decimator
342
Sampling Rate Reduction by an Integer Factor:
Downsampling
• We reduce the sampling rate of a sequence by “sampling” it
x_d[n] = x[nM] = x_c(nMT)
• This is accomplished with a sampling rate compressor
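A decimation sketch: lowpass filter to π/M, then keep every Mth sample; scipy.signal.decimate bundles both steps (with its own default filter). The factor M and test signal are arbitrary.

import numpy as np
from scipy import signal

M = 4                                     # downsampling factor (example)
n = np.arange(1000)
x = np.cos(2 * np.pi * 0.01 * n) + 0.3 * np.cos(2 * np.pi * 0.4 * n)   # low + high frequency

# anti-aliasing filter with cutoff pi/M, then compressor x_d[n] = x_f[nM]
h = signal.firwin(101, 1.0 / M)           # cutoff normalized to Nyquist
x_f = signal.lfilter(h, 1.0, x)
x_d = x_f[::M]

# library equivalent (uses its own default FIR and zero-phase filtering)
x_d_ref = signal.decimate(x, M, ftype='fir')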
343
Frequency Domain Representation of Downsampling
• Recall the DTFT of x[n] = x_c(nT):
X(e^{jω}) = (1/T) Σ_{k=-∞}^{∞} X_c( j(ω/T - 2πk/T) )
• The DTFT of the downsampled signal (sampling period T' = MT) can similarly be written as
X_d(e^{jω}) = (1/T') Σ_{r=-∞}^{∞} X_c( j(ω/T' - 2πr/T') ) = (1/(MT)) Σ_r X_c( j(ω/(MT) - 2πr/(MT)) )
• Writing r = i + kM, with -∞ < k < ∞ and 0 ≤ i ≤ M-1:
X_d(e^{jω}) = (1/M) Σ_{i=0}^{M-1} [ (1/T) Σ_k X_c( j(ω - 2πi)/(MT) - j2πk/T ) ]
• And finally
X_d(e^{jω}) = (1/M) Σ_{i=0}^{M-1} X( e^{j(ω - 2πi)/M} )
344
Frequency Domain Representation of Downsampling
345
Aliasing
346
Frequency Domain Representation of Downsampling w/ Prefilter
347
Decimation filter
348
L-fold Interpolator
349
Increasing the Sampling Rate by an Integer Factor:
Upsampling
• We increase the sampling rate of a sequence by interpolating it:
x_i[n] = x[n/L] = x_c(nT/L)
• This is done in two steps. First the expander inserts L-1 zeros between samples:
x_e[n] = x[n/L] for n = 0, ±L, ±2L, ..., and 0 otherwise
       = Σ_{k=-∞}^{∞} x[k] δ[n - kL]
– Then an interpolating (lowpass) filter fills in the missing samples.
350
Frequency Domain Representation of Expander
• The DTFT of xe[n]can be written as
X_e(e^{jω}) = Σ_n x_e[n] e^{-jωn} = Σ_k x[k] e^{-jωLk} = X(e^{jωL})
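An expander-plus-interpolation-filter sketch for increasing the rate by L: insert L-1 zeros between samples, then lowpass filter with gain L and cutoff π/L. The factor L and filter length are arbitrary; resample_poly is shown only as the library counterpart.

import numpy as np
from scipy import signal

L = 3                                       # upsampling factor (example)
x = np.cos(2 * np.pi * 0.05 * np.arange(200))

# expander: x_e[n] = x[n/L] for n a multiple of L, 0 otherwise
x_e = np.zeros(L * len(x))
x_e[::L] = x

# interpolation filter: gain L, cutoff pi/L
h = L * signal.firwin(121, 1.0 / L)
x_i = signal.lfilter(h, 1.0, x_e)

# library counterpart (uses its own internal filter)
x_ref = signal.resample_poly(x, L, 1)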
351
Input-output relation on the Spectrum
352
Periodicity and spectrum images
353
Frequency Domain Representation of Interpolator
• The DTFT of the desired interpolated signals is
354
Interpolation filters
355
Fractional sampling rate convertor
356
Fractional sampling rate convertor
357
Changing the Sampling Rate by Non-Integer Factor
Lowpass filter
x[n] xe[n] Gain = L xo[n] xd[n]
L Cutoff = M
min(p/L, p/M)
h in 0 n ∓ L, ∓ 2L,...
359
Sampling of bandpass signals
360
Sampling of bandpass signals
361
Over sampling -ADC
362
363
364
365
Sub band coding
366
Sub band coding
367
Digital filter banks
368
Finite Word length Effects
369
Finite Wordlength Effects
• Finite register lengths and A/D converters
cause errors in:-
(i) Input quantisation.
(ii) Coefficient (or multiplier)
quantisation
(iii) Products of multiplication truncated
or rounded due to machine length
370
Finite Wordlength Effects
• Quantisation
Output
eo ( k )
Q
ei ( k )
Input
Q Q
ei , o ( k )
2 2
371
Finite Wordlength Effects
• The pdf for e using rounding
p(e) = 1/Q for -Q/2 ≤ e ≤ Q/2, and 0 otherwise
• Noise power
σ_e² = E{e²} = ∫_{-Q/2}^{Q/2} e² p(e) de = Q²/12
372
Finite Wordlength Effects
• Let the input signal be sinusoidal with unity amplitude; then the total signal power is P = 1/2.
• With b-bit quantization of a ±1 range, Q = 2^{1-b} and σ² = 2^{-2b}/3, hence
P/σ² = (3/2) 2^{2b},   or   SNR ≈ 1.8 + 6b dB
373
Finite Wordlength Effects
• Consider a simple example of finite
precision on the coefficients a,b of second
order system with poles e j
1
H ( z )
1 2
1 az bz
1
H ( z )
1 2 cos . z 1 2 . z 2
• where a 2 cos b 2
374
Finite Wordlength Effects
bit pattern 2 cos , 2
000 0 0
• 001 0.125 0.354
010 0.25 0.5
011 0.375 0.611
100 0.5 0.707
101 0.625 0.791
110 0.75 0.866
111 0.875 0.935
1.0 1.0 1.0
375
Finite Wordlength Effects
• Finite wordlength computations
INPUT OUTPU
T
+
376
Limit-cycles; "Effective Pole"
Model; Deadband
378
Finite Wordlength Effects
• With rounding, therefore we have
b 2 y ( n 2 ) 0 .5 y (n 2)
are indistinguishable (for integers)
or b 2 y ( n 2 ) 0 .5 y ( n 2 )
• Hence 0 .5
y ( n 2 )
1 b2
y k x k b1 y k 1 b 2 y k 2
hk . sin ( k 1)
sin
381
Finite Wordlength Effects
cos 1
• Where b b1
2 2 b 2
• If b 2 1 then the response is sinusiodal
with frequency
1 b
cos 1
1
T 2
382
Finite Wordlength Effects
383
Finite Wordlength Effects
• Assume e1 ( k ) ,e 2 ( k ) ….. are not
correlated, random processes etc.
2
2
e
2
hi 2 ( k ) Q
e
2
0i
k 0
12
Hence total output noise power
2b
2 2
2
2. 2
2k sin 2
( k 1)
0 01 02 .
12 k 0 sin 2
b
• Where Q 2 and
k sin ( k 1)
h1 ( k ) h 2 ( k ) . ; k 0
sin
384
Finite Wordlength Effects
• ie
2b
2 2 1 2
. 1
0 2
6 1 1 4
2 2 cos 2
385
Finite Wordlength Effects
B(n+1)
• For FFT A(n)
B(n) -
B(n+1)
W(n)
A ( n 1) A ( n ) W ( n ). B ( n )
B ( n 1) A ( n ) W ( n ). B ( n )
A(n)
A(n+1)
B(n+1)
B(n) B(n)W(n)
386
Finite Wordlength Effects
• FFT
2 2
A ( n 1) B ( n 1) 2
2 2
A ( n 1) 2 A(n)
A(n ) 2 A(n)
387
Finite Wordlength Effects
IMAG 1.0
• FFT
-1.0 1.0
REAL
-1.0
A x ( n 1) A x ( n ) B x ( n ) C ( n ) B y (n )S (n )
A x ( n 1) A x ( n ) B x ( n ) C ( n ) B y (n) S (n )
A x ( n 1)
1 . 0 C ( n ) S ( n ) 2 . 414 ....
Ax (n )
• Modelled as
x(n) + ~
x (n) x(n) q (n)
q(n)
389
Finite Wordlength Effects
• For rounding operations q(n) is uniform
distributed between Q
2 , and where Q is
Q
2
390
Finite Wordlength Effects
q1 ( n )... q 2 ( n )... q p ( n )
x ( n ) h(n) y ( n )
• Then
p
r)
y ( n ) x ( r ). h ( n r ) q ( r ). h ( n
r0 1 r 0
391
Finite Wordlength Effects
• For zero input i.e. x ( n ) 0 , n we can write
p
y ( n ) qˆ . h ( n r )
1 r0
392
Finite Wordlength Effects
• However
h ( n ) h ( n )
n0 n0
• And hence
pQ
y ( n ) . h ( n )
2 n0
394
Quantization in Implementing Systems
• Consider the following system
395
Effects of Coefficient Quantization in IIR Systems
396
M
Effects on Roots M
bkz k
Quantiza b̂ k z k
H z k 0
N Ĥ z k 0
N
a tion 1 â
k
1 k z k
z k
k 1 k 1
397
Poles of Quantized Second-Order Sections
• Consider a 2nd order system with complex-conjugate pole pair
3-bits
7-bits
398
Coupled-Form Implementation of Complex-Conjugate Pair
399
Effects of Coefficient Quantization in FIR Systems
• No poles to worry about only zeros
• Direct form is commonly used for FIR systems
M
H z h n z n
n0
Ĥ z ĥ n z n H z H z H z h n z n
n0
n0
400
Round-Off Noise in Digital Filters
• Difference equations
implemented with finite-
precision arithmetic are
non-linear systems
• Second order direct form I
system
• Model with quantization
effect
• Density function error
terms for rounding
401
Analysis of Quantization Error
• Combine all error terms to single location to get
e n e 0 n e 1 n
e 2 n e 3n e 4n
2B
2
• The variance of e[n] in the general case is 2
e M 1 N
12
N
402
Round-Off Noise in a First-Order System
• Suppose we want to implement the following stable system
b
H z a 1
1
1 az
• The quantization error noise variance is
2B 2 2B 2B
2 M 1 N 2
h n 2 2 2 2 1
2n
a
f ef
12 12 2
12 n n0
1 a
• Noise variance increases as |a| gets closer to the unit circle
• As |a| gets closer to 1 we have to use more bits to compensate for the
increasing error
403
Zero-Input Limit Cycles in Fixed-Point Realization of IIR Filters
• For stable IIR systems the output will decay to zero when the input
becomes zero
• A finite-precision implementation, however, may continue to oscillate
indefinitely
• Nonlinear behaviour is very difficult to analyze, so we will study it by example
• Example: Limit Cycle Behavior in First-Order Systems
y[n] = a y[n-1] + x[n],   |a| < 1
404
Example Cont’d
y[n] = a y[n-1] + x[n],   with a = 1/2
x[n] = (7/8) δ[n] = 0.111_b δ[n]
n y[n] Q(y[n])
0 7/8=0.111b 7/8=0.111b
1 7/16=0.011100b 1/2=0.100b
2 1/4=0.010000b 1/4=0.010b
3 1/8=0.001000b 1/8=0.001b
4 1/16=0.00010b 1/8=0.001b
405
Example: Limit Cycles due to Overflow
• Consider a second-order system realized by
ŷ n x n Q a 1 ŷ n 1 Q a 2 ŷ n 2
– Where Q() represents two’s complement rounding
– Word length is chosen to be 4 bits
• Assume a1=3/4=0.110b and a2=-3/4=1.010b
• Also assume
ŷ 1 3 / 4 0 . 110 b and ŷ 2 3 / 4 1 . 010 b
• Binary carry overflows into the sign bit changing the sign
• When repeated for n=1
ŷ 0 1.010b 1.010b 0 . 110 3 / 4
406
Avoiding Limit Cycles
• Desirable to get zero output for zero input: Avoid limit-cycles
• Generally adding more bits would avoid overflow
• Using double-length accumulators at addition points would
decrease likelihood of limit cycles
• Trade-off between limit-cycle avoidance and complexity
• FIR systems cannot support zero-input limit cycles
407