dsp2 Text Slides
D. Sundararajan
© Springer 2024
1. Chapter 1 3
2. Chapter 2 35
3. Chapter 3 53
4. Chapter 4 88
5. Chapter 5 127
6. Chapter 6 160
7. Chapter 7 184
8. Chapter 8 195
9. Chapter 9 202
10. Chapter 10 229
11. Chapter 11 244
Chapter 1 Discrete-Time Signals
1. Signal Classification
2. Basic Signals
3. Signal Operations
1
Signals are used for communication of infor-
mation about some behavior or nature of some
physical phenomenon, such as temperature and
pressure.
Conditioning the signal and extracting the in-
formation content in it is signal processing.
Basically, in signal processing, these signals are
suitably approximated by a few basis signals so
that signals are represented compactly and the
processing becomes simpler and more efficient.
2
1. Processing in the time-domain with the
time as the independent variable
2. Processing in the frequency-domain with
the frequency as the independent variable.
We use the type that is more advantageous for
the given signal processing task.
3
Depending on the nature of sampling, quan-
tization and other characteristics, signals are
classified.
Classification helps to choose the most appro-
priate processing for a given signal. Unneces-
sary work and possible errors can be avoided,
if the signal characteristics are known.
4
A continuous-time signal, also called an analog signal, is characterized by the continuous nature of its dependent and independent variables.
It can assume any real or complex value at
each instant. While all practical signals are
real-valued, complex-valued signals are neces-
sary and often used as intermediaries in signal
processing.
5
The values of a discrete-time signal, also called a discrete signal, are available only at discrete intervals.
Any practical signal, with sufficiently short sam-
pling interval, can be represented by a finite
number of samples and reconstructed after pro-
cessing with adequate accuracy.
6
The sample values of a quantized continuous-
time signal are quantized to certain levels. The
values can be truncated or rounded.
By quantizing the signal, the wordlength to
represent the signal is reduced. In sampling,
the number of samples is reduced. These two
steps, while they may introduce some errors,
are unavoidable to process a signal by DSP.
DSP devices can handle, due to their digital nature, only a finite number of finite-wordlength signals.
7
In a digital signal, both the dependent and in-
dependent variables are sampled values. The
importance of this type is that practical digital
devices can work only with signals of this form.
8
Representing a sequence.
{x(0)=1.1, x(1)=2.1, x(2)=3.1, x(3)=4.1}
{x(n), n = 0, 1, 2, 3} = {1.1, 2.1, 3.1, 4.1}
{ˇ1.1, 2.1, 3.1, 4.1}
x(n) = n + 1.1 for 0 ≤ n ≤ 3 and x(n) = 0 otherwise,
or graphically. The check symbol ˇ indicates
that the index of that element is 0 and the
samples to the right have positive indices and
those to the left have negative indices.
9
Periodic and Aperiodic Signal
A discrete signal x(n), defined for all n, is periodic with period N, if x(n + N) = x(n) for all n, where N is the smallest such positive integer.
10
Even and Odd Signals
A signal is said to be even if x(−n) = x(n).
An even signal is symmetric with respect to
the vertical axis at n = 0. A signal is said to
be odd if −x(−n) = x(n). An odd signal is
antisymmetric with respect to the vertical axis
at n = 0.
11
Every signal can be expressed as the sum of
its even and odd components.
12
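The even-odd decomposition can be sketched in a few lines; the dict-based signal representation (index n mapping to x(n), with absent indices taken as zero) is an illustrative choice, not from the slides.

```python
# Even/odd decomposition: xe(n) = (x(n) + x(-n))/2, xo(n) = (x(n) - x(-n))/2.
def even_odd(x):
    idx = set(x) | {-n for n in x}
    xe = {n: 0.5 * (x.get(n, 0) + x.get(-n, 0)) for n in idx}
    xo = {n: 0.5 * (x.get(n, 0) - x.get(-n, 0)) for n in idx}
    return xe, xo

# The example sequence from the slides, {1.1, 2.1, 3.1, 4.1} starting at n = 0
x = {0: 1.1, 1: 2.1, 2: 3.1, 3: 4.1}
xe, xo = even_odd(x)
```

The sum xe(n) + xo(n) reconstructs x(n), xe is even, and xo is odd.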
Circular even and odd symmetries
For periodic signals, the definitions are the same as those for aperiodic signals, with the difference that the indices of the sample points are computed by the mod operation. A periodic signal x(n) with period N is even symmetric, if it satisfies the condition x((−n) mod N) = x(n) for all n. The periodic extensions of the sequences, with even (N = 10) and odd (N = 9) lengths,
{x(n), n = 0, 1, . . . , 9} = {3, 1, 5, −3, 4, 5, 4, −3, 5, 1}
{x(n), n = 0, 1, . . . , 8} = {7, 1, −5, 6, 3, 3, 6, −5, 1}
are even.
13
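The circular even-symmetry condition x((−n) mod N) = x(n) can be checked directly; a minimal sketch using the odd-length (N = 9) sequence from the slide:

```python
# Check circular even symmetry via the mod operation on the indices.
def is_circular_even(x):
    N = len(x)
    return all(x[(-n) % N] == x[n] for n in range(N))

x9 = [7, 1, -5, 6, 3, 3, 6, -5, 1]   # N = 9 example from the slide
```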
A periodic signal x(n) with period N is odd symmetric, if it satisfies the condition x((−n) mod N) = −x(n) for all n.
14
Energy and power signals
The energy of a real or complex valued discrete signal x(n) is defined as the sum of the squared magnitudes of its values,
E = Σ_{n=−∞}^{∞} |x(n)|²
If the summation is finite, the signal is an energy signal.
The average power is defined as
P = lim_{N→∞} (1/(2N + 1)) Σ_{n=−N}^{N} |x(n)|²
If the average power is finite and nonzero, the signal is a power signal.
15
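The two definitions can be computed directly from the sums; a minimal sketch, where the truncation length 200 and the example signals are arbitrary choices for illustration:

```python
# E = sum of |x(n)|^2 over the given samples
def energy(x):
    return sum(abs(v) ** 2 for v in x)

# P ~ (1/(2N+1)) * sum_{n=-N}^{N} |x(n)|^2, approximating the limit
def average_power(x_of_n, N):
    return sum(abs(x_of_n(n)) ** 2 for n in range(-N, N + 1)) / (2 * N + 1)

# x(n) = 0.6^n u(n) is an energy signal: E = 1/(1 - 0.36) = 1.5625
E = energy([0.6 ** n for n in range(200)])

# The DC signal x(n) = 1 is a power signal with average power 1
P = average_power(lambda n: 1.0, 1000)
```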
Causal and Noncausal Signals
Signals start at some finite time, usually cho-
sen as n = 0 and assumed to be zero for n < 0.
Such signals are called causal signals.
The causality condition restricts the limits of the operations of signal processing. A noncausal signal is such that x(n) ≠ 0 for some n < 0. A periodic signal is noncausal.
16
Deterministic and Random Signals
A signal that is specified for all time is a de-
terministic signal.
The future values of a random signal are not exactly
known. They are characterized by some aver-
age value.
17
Bounded Signals
A discrete signal x(n) is bounded if the ab-
solute value of all its samples is less than or
equal to a finite positive number. For example, 0.6^n u(n) is bounded, while 2^n u(n) is unbounded.
18
Absolutely Summable Signals
A signal x(n) is absolutely summable, if
Σ_{n=−∞}^{∞} |x(n)| < ∞
19
Absolutely Square Summable Signals
A signal x(n) is absolutely square summable, if
Σ_{n=−∞}^{∞} |x(n)|² < ∞
20
The discrete unit-impulse signal δ(n) is defined as
δ(n) = 1 for n = 0 and δ(n) = 0 for n ≠ 0
21
The discrete sinusoidal waveform x(n) can be expressed in two equivalent forms,
x(n) = a cos(ωn) + b sin(ωn) = A cos(ωn + θ),
where
A = √(a² + b²),  θ = tan⁻¹(−b/a),
a = A cos(θ)  and  b = −A sin(θ)
The first and second forms are, respectively, the rectangular and polar representations of the discrete sinusoidal waveform.
22
The Sum of Sinusoids of the Same Frequency
23
The Complex Representation of Sinusoids
x(n) = (A/2)(e^{j(ωn+θ)} + e^{−j(ωn+θ)}) = A cos(ωn + θ)
due to Euler’s identity. The sum of the two complex conjugate exponentials e^{jωn} and e^{−jωn}, with the conjugate coefficients (A/2)e^{jθ} and (A/2)e^{−jθ}, represents a real sinusoid. The redundancy is not a problem in practical implementation.
24
With an even N, the aliasing effect for real-valued signals cos((2π/N)kn + θ) is characterized by three formulas.
cos((2π/N)(k + pN)n + θ) = cos((2π/N)kn + θ), k = 0, 1, . . . , N/2 − 1
cos((2π/N)(k + pN)n + θ) = cos(θ) cos((2π/N)kn), k = N/2
cos((2π/N)(pN − k)n + θ) = cos((2π/N)kn − θ), k = 1, 2, . . . , N/2 − 1
where N and the index p are positive integers. In the first case, there is no aliasing. In the second case, there is no aliasing, but the sine component is lost. All the sinusoids in the third case get aliased.
25
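The first and third aliasing relations can be verified numerically; a small check, with N = 8, p = 2, and θ = 0.5 chosen arbitrarily:

```python
import math

N, p, theta = 8, 2, 0.5
for n in range(N):
    for k in range(1, N // 2):
        # case 1: frequency index k + pN aliases to k with the phase intact
        assert abs(math.cos(2 * math.pi / N * (k + p * N) * n + theta)
                   - math.cos(2 * math.pi / N * k * n + theta)) < 1e-9
        # case 3: frequency index pN - k aliases to k with the phase negated
        assert abs(math.cos(2 * math.pi / N * (p * N - k) * n + theta)
                   - math.cos(2 * math.pi / N * k * n - theta)) < 1e-9
```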
Linear Time Shifting
If we replace the independent variable n in x(n)
by n − k, with k positive, then the origin of the
signal is shifted to n = k (delayed). That is,
the locations of the samples become n + k.
With k negative, the samples of x(n) appear at n − |k| (advanced).
26
Circular Time Shifting
Both the linear and the circular shifting are ba-
sically the change of the location of the sam-
ples of a signal. The difference is due to the
nature of aperiodic and periodic signals. There-
fore, the definitions are the same with the loca-
tions of the samples of a periodic signal com-
puted using the mod function.
28
Time Reversal
The replacement of the independent variable n
of a signal x(n) by −n to get x(−n) is the time
reversal operation. The values of x(n) appear
at −n in its time-reversed version x(−n), which
is the mirror image of x(n) about the vertical
axis at the origin.
29
Circular Time Reversal
This operation results in plotting the samples
of x(n) in the other direction on the unit circle.
That is, the indexing is mod N and, hence the
shift is circular.
30
Downsampling
When the independent variable n in x(n) is replaced by Kn to get x(Kn) = xd(n), called downsampling the signal by an integer factor K, the sample at index Kn in x(n) appears at index n in the downsampled signal xd(n). That is, xd(n) = x(Kn). At n = 0, xd(0) = x(0); both signals have the same sample value at n = 0. On either side, take every Kth sample of x(n) to find xd(n). The number of samples in xd(n) is 1/K of that in x(n).
31
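For a list indexed from n = 0, downsampling is just a slice with step K; a minimal sketch with arbitrary example values:

```python
# Downsampling by an integer factor K: keep every Kth sample, xd(n) = x(Kn).
def downsample(x, K):
    return x[::K]

xd = downsample([0, 1, 2, 3, 4, 5, 6, 7], 2)  # -> [0, 2, 4, 6]
```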
Upsampling
When the independent variable n in x(n) is re-
placed by n/K to get x(n/K) = xu(n), called
upsampling by an integer factor of K, the sample at index n in x(n) appears at index nK in the up-sampled signal xu(n). K − 1 zeros are
inserted between every two adjacent samples
of x(n).
32
That is,
xu(n) = x(n/K) for n = 0, ±K, ±2K, . . . and xu(n) = 0 otherwise
At n = 0, xu(0) = x(0); both signals have the same sample value at n = 0. On either side, each sample of x(n) is located in xu(n) at index nK. The in-between samples are zero-valued. The number of samples in xu(n) is K times that in x(n).
33
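Zero insertion can be written compactly with a strided slice assignment; a minimal sketch for causal sequences stored as lists:

```python
# Upsampling by an integer factor K: xu(Kn) = x(n), with K - 1 zeros
# inserted between adjacent samples, giving K times as many samples.
def upsample(x, K):
    xu = [0] * (K * len(x))
    xu[::K] = x
    return xu
```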
Discrete-Time Systems
A system is an interconnection of components,
hardware and/or software, that performs an
action or produces desired output signals in
response to input signals.
While practical input and output signals are
mostly analog, DSP systems are mostly used
for processing by converting analog signals into
digital and converting them back after process-
ing.
The response of systems is characterized in terms of basic signals, such as the impulse and the sinusoid.
34
First-order differential equation
dy(t)/dt + τ y(t) = τ x(t), τ = 1/(RC)    (1)
Approximating a first-order differential equation by a first-order difference equation.
35
System response
The complete output of a system can be con-
sidered as the sum of two independent com-
ponents. One component is the zero-input re-
sponse. This response is due to initial condi-
tions alone, assuming that the input is zero.
The second component is the zero-state re-
sponse. This response is due to the input
alone, assuming that the initial conditions are
zero.
The complete response y(n) is
y(n) = yzi(n) + yzs(n)
36
Transient and steady-state responses
The complete response of a system can also
be expressed in terms of transient and steady-
state components. The component of the re-
sponse of a system due to its natural modes
only is its transient response. The transient
response of a stable system always decays to
insignificant levels in a finite time. The transient response is the natural response of the system, while the steady-state response is due to the modes of the excitation. The complete response y(t) is
y(t) = ytr (t) + yss(t)
37
System models - Convolution
Systems are characterized by their responses
to basic signals, such as the impulse and the
sinusoid. Then, using the linearity property of
systems, the response of systems to arbitrary
input signals can be found systematically. In
the convolution model, we find the response,
called the impulse response, of a relaxed sys-
tem to the unit-impulse input.
38
Then, the response of the system for arbi-
trary input signals can be found by representing
them in terms of a sum of scaled and shifted
impulses. Since the impulse response is de-
termined assuming that the system is relaxed
(zero initial conditions), the response computed
using the convolution is the zero-state response.
39
The convolution of sequences x(n) and h(n), resulting in the output sequence y(n), is defined as
y(n) = Σ_{k=−∞}^{∞} x(k)h(n − k) = Σ_{k=−∞}^{∞} h(k)x(n − k) = x(n) ∗ h(n)
40
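The convolution sum can be evaluated directly for finite causal sequences; a minimal sketch with lists indexed from n = 0:

```python
# Direct implementation of the linear convolution sum.
def convolve(x, h):
    y = [0] * (len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        for m, hm in enumerate(h):
            y[k + m] += xk * hm  # accumulate x(k) h(n - k) at n = k + m
    return y
```

For example, convolving {1, 2} with {1, 1, 1} gives {1, 3, 3, 2}.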
Discrete Systems
The difference equation of an Nth-order discrete system, relating the output y(n) to the input x(n), is given by
41
This equation represents an Nth-order discrete system, since the present output y(n) depends on N previous values of the output. A first-order system depends on one previous output, y(n − 1). The difference equation can be concisely written as
y(n) = Σ_{k=0}^{M} b_k x(n − k) − Σ_{k=1}^{N} a_k y(n − k)
42
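The concise difference equation can be evaluated recursively; a minimal sketch, assuming a relaxed system (zero initial conditions) and a[0] = 1:

```python
# y(n) = sum_{k=0}^{M} b[k] x(n-k) - sum_{k=1}^{N} a[k] y(n-k)
def difference_eq(b, a, x):
    y = []
    for n in range(len(x)):
        s = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        s -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(s)
    return y

# First-order example y(n) = x(n) + 0.5 y(n-1): the impulse response is 0.5^n
h = difference_eq([1.0], [1.0, -0.5], [1.0, 0.0, 0.0, 0.0])
```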
Linear Systems
Let y1(n) be the response of a system to an
input signal x1(n). Let y2(n) be the response
of the system to an input signal x2(n). Then,
the system is linear if the response to the input
ax1(n)+bx2(n) is ay1(n)+by2(n). That is, with
x1(n) → y1(n) and x2(n) → y2(n),
43
Time-invariant Systems
A system is time-invariant, if
44
LTI systems
The linearity and time-invariance properties of systems enable the use of operations such as convolution in the time domain and linear transforms in the frequency domain, facilitating system analysis. Such systems are called linear and time-invariant (LTI) systems.
45
Instantaneous and Dynamic Systems
A system is instantaneous, if its response depends only on the present input values. Such systems have no memory. For example, the discrete model of a circuit with resistors only is an instantaneous system, since the signal at any part of the circuit depends on the present inputs only. Such systems are characterized by the difference equation
y(n) = Kx(n),
where K is a constant.
46
On the other hand, circuits composed of induc-
tors and capacitors are dynamic systems and
their outputs depend on past input values also.
Such systems require memory. Obviously, instantaneous systems are a specific case of dynamic systems. The difference equations that characterize systems with finite and infinite memory are, respectively,
y(n) = Σ_{k=0}^{N} x(n − k)  and  y(n) = Σ_{k=0}^{∞} x(n − k)
47
Inverse Systems
Systems with a unique output for each unique input are invertible. For example, y(n) = x²(n) is not invertible. Consider the difference equation y(n) = 0.5x(n). Its impulse response is 0.5δ(n). Its inverse system is x(n) = 2y(n), with impulse response 2δ(n). When a system and its
inverse are connected in cascade, we get back
the input. That is, the impulse response of
the cascade system must be δ(n). The con-
volution output of the impulse responses of a
system and its inverse is δ(n).
48
Response to Complex Exponential Input
Let the impulse response of a stable system be h(n). The system response to a complex exponential e^{jω0 n}, −∞ < n < ∞, with frequency ω0, is the convolution of the impulse response and the exponential. That is,
y(n) = Σ_{k=−∞}^{∞} h(k)e^{jω0(n−k)} = e^{jω0 n} Σ_{k=−∞}^{∞} h(k)e^{−jω0 k} = H(e^{jω0})e^{jω0 n}
49
If the complex amplitude of the exponential is X(e^{jω0}), then the coefficient of the exponential at the output is H(e^{jω0})X(e^{jω0}). In the steady-state output for an input complex exponential or a real sinusoid, the magnitude of the input is multiplied by the magnitude of H(e^{jω0}), and the phase of H(e^{jω0}) is added to its phase. That is,
e^{jω0 n} → |H(e^{jω0})| e^{j(ω0 n + ∠H(e^{jω0}))}
and
A cos(ω0 n + θ) → A|H(e^{jω0})| cos(ω0 n + θ + ∠H(e^{jω0}))
50
BIBO system stability
The output of a system is given by the convolution of the input and the impulse response. The basic operation in convolution is a sum of products. Assuming that the input is bounded, |x(n)| < ∞, for the sum of the products to be bounded, the impulse response h(n) must be absolutely summable.
Σ_{n=−∞}^{∞} |h(n)| < ∞
This stability criterion ensures a stable zero-state response.
51
Discrete Fourier Transform
Most naturally occurring signals have arbitrary amplitude profiles. As such, they are difficult to process.
in signal processing is appropriate signal rep-
resentation. Fourier analysis and the related
transforms represent arbitrary signals in terms
of complex exponentials. The difference be-
tween the transforms is that each one is more
suitable for certain class of signals, such as pe-
riodic or aperiodic and continuous or sampled.
62
Orthogonality
Let the two discrete complex exponential signals be e^{j(2π/N)kn} and e^{j(2π/N)ln} with the fundamental frequency 2π/N. Then, the orthogonality condition is given by
Σ_{n=0}^{N−1} e^{j(2π/N)kn} (e^{j(2π/N)ln})* = Σ_{n=0}^{N−1} e^{j(2π/N)(k−l)n} = N for k = l and 0 for k ≠ l,
where k, l = 0, 1, . . . , N − 1.
63
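The orthogonality condition can be verified numerically; a small sketch, with N = 8 chosen arbitrarily:

```python
import cmath

# sum_{n=0}^{N-1} e^{j2πkn/N} (e^{j2πln/N})* = N for k = l, 0 otherwise
def inner(N, k, l):
    return sum(cmath.exp(2j * cmath.pi * (k - l) * n / N) for n in range(N))
```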
DFT and IDFT
In the DFT, an arbitrary aperiodic signal, defined as x(n), n = 0, 1, . . . , N − 1 and periodically extended, can be expressed by a complex exponential polynomial of order N − 1 as
x(n) = (1/N)(X(0)e^{j(2π/N)(0)n} + X(1)e^{j(2π/N)(1)n} + · · · + X(N − 1)e^{j(2π/N)(N−1)n})
64
This summation of the complex exponentials e^{jk(2π/N)n}, multiplied by their respective DFT coefficients, gets back the time-domain samples.
65
Therefore, the inverse DFT (IDFT) equation, the Fourier reconstruction or synthesis of the input signal, is defined as
x(n) = (1/N) Σ_{k=0}^{N−1} X(k)e^{jk(2π/N)n}, n = 0, 1, 2, . . . , N − 1
Due to the constant N in the orthogonality property, the DFT coefficients are scaled by N and, therefore, the factor 1/N appears in the IDFT definition.
66
The DFT of a sequence x(n) of length N is defined as
X(k) = Σ_{n=0}^{N−1} x(n)e^{−jk(2π/N)n}, k = 0, 1, 2, . . . , N − 1
The complex exponential is often written in its abbreviated form as
e^{−j(2π/N)kn} = W_N^{kn}, where W_N = e^{−j(2π/N)}
67
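The two definitions translate directly into code; a minimal O(N²) sketch (not an FFT), using the 2-point input {−2, 5} that appears on a later slide:

```python
import cmath

# Direct implementations of the DFT and IDFT definitions.
def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

X = dft([-2, 5])   # gives {3, -7}, matching the slide example
```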
Least-squares error criterion of reconstruction
That is, the sum of the squared magnitudes of the difference between the input and reconstructed signals must be minimum. If an N-point waveform x(n) is reconstructed as xa(n), using fewer than N DFT coefficients, then the error E is given by
E = Σ_{n=0}^{N−1} |x(n) − xa(n)|²
68
Center-zero format of the DFT and IDFT
X(k) = Σ_{n=−N/2}^{N/2−1} x(n)e^{−jk(2π/N)n}, k = −N/2, −(N/2 − 1), . . . , N/2 − 1
x(n) = (1/N) Σ_{k=−N/2}^{N/2−1} X(k)e^{jk(2π/N)n}, n = −N/2, −(N/2 − 1), . . . , N/2 − 1
69
Matrix Formulation of the DFT
The matrix formulation is a symbolic represen-
tation of a set of simultaneous linear equations.
Then, all the N coefficients of the N-point
DFT can be computed simultaneously using
the matrix formulation of the DFT. The val-
ues of the input data and those of the DFT
coefficients are stored in N × 1 matrices (col-
umn vectors).
70
To find the DFT, we multiply the input data by the samples of the complex exponential of the form e^{−j(2π/N)kn} for each k with n = 0, 1, . . . , N − 1. These samples are located equidistantly on the unit circle and, hence, the samples are periodic. They are the Nth roots of unity. Let us construct the N × N transform matrix of these samples, called twiddle factors.
71
With N = 2, the DFT and IDFT definitions for the input {x(0) = −2, x(1) = 5} are (with matrix rows separated by semicolons)
[X(0); X(1)] = [1 1; 1 −1][−2; 5] = [3; −7],
where X(0) = 3 and X(1) = −7 are the corresponding DFT coefficients. The IDFT gets the input back from the coefficients.
[x(0); x(1)] = (1/2)[1 1; 1 −1][3; −7] = [−2; 5]
The samples of a complex signal are {x(0) = 1 − j1, x(1) = 2 + j1, x(2) = 3 + j1, x(3) = 1 − j2}. Its DFT is
[X(0); X(1); X(2); X(3)] = [1 1 1 1; 1 −j −1 j; 1 −1 1 −1; 1 j −1 −j][1 − j1; 2 + j1; 3 + j1; 1 − j2] = [7 − j1; 1 − j3; 1 + j1; −5 − j1]
73
Let us compute the IDFT of the spectrum.
[x(0); x(1); x(2); x(3)] = (1/4)[1 1 1 1; 1 j −1 −j; 1 −1 1 −1; 1 −j −1 j][7 − j1; 1 − j3; 1 + j1; −5 − j1] = [1 − j1; 2 + j1; 3 + j1; 1 − j2]
74
Linearity
Signal and system analysis using the DFT is
based on the linearity property.
x(n) ↔ X(k), y(n) ↔ Y(k) → ax(n) + by(n) ↔ aX(k) + bY(k)
75
Periodicity
With x(n) ↔ X(k), both x(n) and X(k) are periodic with period N. That is,
x(n + N) = x(n) and X(k + N) = X(k) for all n and k
76
Circular time shifting
In shifting a signal, the magnitude spectrum remains the same, while the phase spectrum gets changed. Let x(n) be the signal with period N and X(k) its spectrum. If we shift a frequency component with frequency index k by n0 sample intervals, the phase change is 2πkn0/N radians.
x(n ± n0) ↔ e^{±j(2π/N)kn0} X(k)
77
Circular frequency shifting
Let x(n) be the signal with period N and X(k) its spectrum. In shifting a spectrum by replacing the frequency index k by k ∓ k0, the spectrum just gets shifted, where k0 is an arbitrary number of sampling intervals. To shift a spectrum by k0, the time-domain signal has to be multiplied by a complex exponential with frequency index k0.
e^{±j(2π/N)k0 n} x(n) ↔ X(k ∓ k0)
78
Circular Time Reversal
Let the N-point DFT of x(n) be X(k). The DFT of the time reversal x(N − n) of x(n) is X(N − k). For N = 4,
[X(0); X(3); X(2); X(1)] = [1 1 1 1; 1 −j −1 j; 1 −1 1 −1; 1 j −1 −j][x(0); x(3); x(2); x(1)]
x(N − n) ↔ X(N − k)
79
Duality
Let the DFT of x(n) be X(k) with period
N . Then, due to the dual nature of the def-
initions, we can interchange the independent
variables and make the time-domain function
as the DFT of the frequency-domain function
with some minor changes. That is, if we com-
pute the DFT of X(n), then we get N x(N −k).
X(n) ↔ N x(N − k)
80
Transform of Complex Conjugates
Let the DFT of x(n) be X(k) with period N .
Conjugating both sides of the DFT definition,
we get
X*(k) = Σ_{n=0}^{N−1} x*(n)e^{j(2π/N)nk} = Σ_{n=0}^{N−1} x*(N − n)e^{−j(2π/N)nk} →
x*(N − n) ↔ X*(k)
81
Conjugating both sides of the IDFT definition,
we get
x*(n) = (1/N) Σ_{k=0}^{N−1} X*(k)e^{−jk(2π/N)n} = (1/N) Σ_{k=0}^{N−1} X*(N − k)e^{jk(2π/N)n} →
x*(n) ↔ X*(N − k)
82
Circular convolution in the time domain
As the sequences are considered periodic in
DFT, the convolution of two signals using the
DFT is circular or periodic convolution. The
circular convolution of two time-domain se-
quences x(n) and h(n), resulting in the output
sequence y(n), n = 0, 1, . . . , N − 1, is defined as
y(n) = Σ_{m=0}^{N−1} x(m)h((n − m) mod N) = Σ_{m=0}^{N−1} h(m)x((n − m) mod N)
83
Let x(n) ↔ X(k) and h(n) ↔ H(k), both with
period N . Then,
y(n) = (1/N) Σ_{k=0}^{N−1} X(k)H(k)e^{j(2π/N)nk}
That is, the IDFT of X(k)H(k) is the convo-
lution of x(n) and h(n).
84
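The property can be checked by computing the circular convolution two ways, directly from the definition and as the IDFT of the product of the DFTs; the example sequences are arbitrary:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# Direct definition, with the indices computed mod N
def circular_convolve(x, h):
    N = len(x)
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

x, h = [1, 2, 3, 4], [1, 1, 0, 0]
direct = circular_convolve(x, h)
via_dft = idft([Xk * Hk for Xk, Hk in zip(dft(x), dft(h))])
```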
Linear Convolution using the DFT
In practice, linear convolution is more often
required. To speed up this computation, we
use the DFT with zero-padded sequences.
85
Circular convolution in the frequency domain
Let x(n) ↔ X(k) and h(n) ↔ H(k) with period
N . Then, using the duality property of the
DFT and the time-domain convolution prop-
erty, we get
x(n)h(n) ↔ (1/N)(X(k) ∗ H(k))
86
Circular correlation in the time domain
Let x(n) ↔ X(k) and h(n) ↔ H(k), both with
period N . The circular cross-correlation of
x(n) and h(n) in the time- and frequency-domains
is given by
r_xh(n) = Σ_{p=0}^{N−1} x(p)h*((p − n) mod N), n = 0, 1, . . . , N − 1 ↔ X(k)H*(k)
87
Sum and difference of sequences
X(0) = Σ_{n=0}^{N−1} x(n),  X(N/2) = Σ_{n=0,2,...}^{N−2} x(n) − Σ_{n=1,3,...}^{N−1} x(n)
x(0) = (1/N) Σ_{k=0}^{N−1} X(k),  x(N/2) = (1/N) (Σ_{k=0,2,...}^{N−2} X(k) − Σ_{k=1,3,...}^{N−1} X(k))
88
Upsampling of a sequence
With x(n) ↔ X(k), n, k = 0, 1, . . . , N − 1, and a positive integer upsampling factor L,
xu(n) = x(n/L) for n = 0, L, 2L, . . . , L(N − 1) and xu(n) = 0 otherwise
89
Zero Padding
With x(n) ↔ X(k), n, k = 0, 1, . . . , N − 1,
xz(n) = x(n) for n = 0, 1, . . . , N − 1 and xz(n) = 0 for n = N, N + 1, . . . , LN − 1
↔ Xz(Lk) = X(k), k = 0, 1, . . . , N − 1
90
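The zero-padding property Xz(Lk) = X(k) can be checked numerically; the sequence and the factor L = 3 are arbitrary choices:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [1, 2, 3, 4]
L = 3
xz = x + [0] * ((L - 1) * len(x))   # zero-pad to length L*N
X, Xz = dft(x), dft(xz)
# the original DFT values reappear at the indices Lk in the longer spectrum
```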
Real-valued Signals
X(k) = X*(N − k)
Remember that the DFT spectrum is N-periodic. In terms of real and imaginary parts, we get
Xr(k) = Xr(N − k) and Xi(k) = −Xi(N − k), or
Xr(N/2 − k) = Xr(N/2 + k) and Xi(N/2 − k) = −Xi(N/2 + k)
x(n) real ↔ X(k) Hermitian
91
Even and odd symmetry
The sine function is odd, while the cosine function is
even. Therefore, the sum of the product of
an even function with the sine function over
an integral number of cycles is zero and the
DFT of an even function is even with cosine
components only. For example,
x(n) real and even ↔ X(k) real and even
92
Real and odd
The sum of the product of an odd function
with the cosine function over an integral num-
ber of cycles is zero. Therefore, the DFT of
an odd function is odd with sine components
only.
93
Half-wave symmetry
A signal is odd half-wave symmetric, if
x(n ± N/2) = −x(n)
Then, its even-indexed DFT coefficients are zero, as the sum of the product of it with the even frequency components is zero.
94
A signal is even half-wave symmetric, if
x(n ± N/2) = x(n)
Then, its odd-indexed DFT coefficients are zero, as the sum of the product of it with the odd frequency components is zero.
95
Parseval’s Theorem
Σ_{n=0}^{N−1} |x(n)|² = (1/N) Σ_{k=0}^{N−1} |X(k)|²
96
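Parseval's theorem for the DFT can be verified numerically; the 4-point sequence is an arbitrary choice:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [3, 1, 5, -3]
time_energy = sum(abs(v) ** 2 for v in x)            # sum |x(n)|^2
freq_energy = sum(abs(Xk) ** 2 for Xk in dft(x)) / len(x)  # (1/N) sum |X(k)|^2
```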
Discrete-Time Fourier Transform
X(e^{jω}) = Σ_{n=−∞}^{∞} x(n)e^{−jωn}
x(n) = (1/2π) ∫_{−π}^{π} X(e^{jω})e^{jωn} dω, n = 0, ±1, ±2, . . .
97
Gibbs phenomenon
As always, at discontinuities, the Fourier reconstructed continuous waveform converges to the average value at each discontinuity, in both the frequency and time domains. In addition, oscillations around the discontinuities, called the Gibbs phenomenon, also occur. The largest magnitude of the oscillations converges to about 1.0895 for a unit discontinuity, even with a relatively small number of harmonics used.
98
Some values of the DTFT and the inverse DTFT can be computed easily and may be used to verify closed-form solutions.
X(e^{j0}) = Σ_{n=−∞}^{∞} x(n),  X(e^{jπ}) = Σ_{n=−∞}^{∞} (−1)^n x(n),
x(0) = (1/2π) ∫_{−π}^{π} X(e^{jω}) dω
The summations can be verified numerically over a large range of n.
99
DTFT definitions with the sampling interval Ts
The DTFT and the inverse DTFT definitions were presented assuming that the sampling interval is 1 second. There are two ways to get the spectrum for other sampling intervals. One is to rescale the frequency axis.
100
Another way is to redefine the definitions by including the sampling interval as
X(e^{jωTs}) = Σ_{n=−∞}^{∞} x(nTs)e^{−jnωTs}
x(nTs) = (1/ωs) ∫_{−ωs/2}^{ωs/2} X(e^{jωTs})e^{jnωTs} dω, n = 0, ±1, ±2, . . .
where ωs = 2π/Ts.
101
The DTFT spectrum is continuous, and the magnitude of each of the infinite frequency components is infinitesimal. The amplitudes of the components are given by
(1/2π) X(e^{jω}) dω
As the amplitudes are immeasurably small, the spectral density X(e^{jω}) represents x(n) in the frequency domain as a relative amplitude spectrum.
102
Convergence of the DTFT
If the signal is continuous, the Fourier recon-
structed signal absolutely converges to it.
If the signal is discontinuous, the Fourier re-
constructed signal converges to it in the least-
squares error sense.
103
Relation between the DFT and the DTFT
The DFT spectrum consists of the samples of the DTFT spectrum at equal intervals of ω0 = 2π/N. That is,
X(k) = X(e^{jω})|_{ω=(2π/N)k}, k = 0, 1, . . . , N − 1
104
DTFT of the discrete unit impulse, x(n) = δ(n)
X(e^{jω}) = Σ_{n=−∞}^{∞} δ(n)e^{−jωn} = 1·e^{−jω·0} = 1
δ(n) ↔ 1, for all ω
105
The nonzero samples of x(n) are {x(−1) = 0.5, x(1) = −0.5}. Find the DTFT of x(n).
X(e^{jω}) = 0.5(e^{jω} − e^{−jω}) = j sin(ω)
The inverse DTFT is
x(n) = (1/2π) ∫_{−π}^{π} (0.5e^{jω} − 0.5e^{−jω})e^{jωn} dω, giving back the nonzero samples {0.5, −0.5}, since
(1/2π) ∫_{−π}^{π} e^{jωn} dω = 1 for n = 0 and 0 for n = ±1, ±2, . . .
106
Consider the DTFT transform pair
sin(πn/5)/(πn), −∞ < n < ∞ ↔ u(ω + π/5) − u(ω − π/5), −π < ω ≤ π
As the spectrum is even-symmetric, taking the inverse DTFT,
x(n) = (1/π) ∫_0^{π/5} cos(ωn) dω = sin(πn/5)/(πn), −∞ < n < ∞
107
DTFT pair for the DC signal
By a limit process,
1, −∞ < n < ∞ ↔ 2πδ(ω), −π < ω ≤ π
108
DTFT of sinusoids
cos(ω0 n + θ) = 0.5(e^{jθ}e^{jω0 n} + e^{−jθ}e^{−jω0 n}) ↔ π(e^{jθ}δ(ω − ω0) + e^{−jθ}δ(ω + ω0)), −π < ω ≤ π
109
DTFT of the exponential x(n) = a^n u(n), |a| < 1
X(e^{jω}) = Σ_{n=0}^{∞} a^n e^{−jωn} = Σ_{n=0}^{∞} (ae^{−jω})^n = 1/(1 − ae^{−jω}), |a| < 1
The defining summation is a geometric progression with a common ratio of ae^{−jω}, which converges if |ae^{−jω}| < 1. This implies |a| < 1, as |e^{−jω}| = 1.
110
By a limit process, the DTFT of the unit-step signal is
u(n) ↔ πδ(ω) + 1/(1 − e^{−jω})
The sign or signum function sgn(n) is defined as
sgn(n) = 1 for n ≥ 0 and −1 for n < 0
Its DTFT is 2/(1 − e^{−jω})
111
The Relation Between the DTFT and DFT of
a Discrete Periodic Signal
Let x(n) be a periodic signal of period N with
its DFT X(k). The IDFT of X(k) resulting in
x(n) is given by
x(n) = (1/N) Σ_{k=0}^{N−1} X(k)e^{jkω0 n},  ω0 = 2π/N
112
The inverse DTFT of X(e^{jω}) resulting in x(n) is given by
x(n) = (1/2π) ∫_{−π}^{π} X(e^{jω})e^{jωn} dω = (1/2π) ∫_{−π}^{π} Σ_k (2π/N)X(k)δ(ω − kω0)e^{jωn} dω, n = 0, ±1, ±2, . . .
The DTFT of a periodic signal is, over one period of 2π, a set of impulses with strength (2π/N)X(k) at ω = (2π/N)k.
113
Linearity
Let a signal be a linear combination of two
signals. Then, the DTFT of the signal is the
same linear combination of the DTFT of their
components.
x(n) ↔ X(e^{jω}), y(n) ↔ Y(e^{jω}) → ax(n) + by(n) ↔ aX(e^{jω}) + bY(e^{jω})
114
Time-shifting
x(n) ↔ X(ejω )
then
x(n ± n0) ↔ e±jωn0 X(ejω )
115
Frequency-shifting
x(n) ↔ X(ejω )
then
x(n)e±jω0n ↔ X(ej(ω∓ω0 ))
116
Convolution in the Time-domain
x(n) ↔ X(ejω ) and h(n) ↔ H(ejω )
Then, the convolution theorem states that
Σ_{m=−∞}^{∞} x(m)h(n − m) = (1/2π) ∫_{−π}^{π} X(e^{jω})H(e^{jω})e^{jωn} dω ↔ X(e^{jω})H(e^{jω})
That is, we find the individual DTFT of the
two signals, multiply them and find the inverse
DTFT to find their convolution output.
117
Convolution in the Frequency Domain
The convolution integral of the DTFT of two
signals in the frequency domain corresponds
to the multiplication of the inverse DTFT of
the individual signals in the time-domain with
a scale factor.
x(n)y(n) ↔ Σ_{n=−∞}^{∞} x(n)y(n)e^{−jωn} = (1/2π) ∫_0^{2π} X(e^{ju})Y(e^{j(ω−u)}) du
118
Time-reversal
Let x(n) ↔ X(ejω ). Then, x(−n) ↔ X(e−jω ).
Conjugation
119
The DTFT of real signals is conjugate or Her-
mitian symmetric. That is,
X ∗(e−jω ) = X(ejω )
120
DTFT of real-valued and even-symmetric sig-
nals
If a signal is real and even, then its spectrum
also is real and even, since only cosine compo-
nents can be used to synthesize it.
X(e^{jω}) = x(0) + 2 Σ_{n=1}^{∞} x(n) cos(ωn)
x(n) = (1/π) ∫_0^{π} X(e^{jω}) cos(ωn) dω
121
DTFT of real-valued and odd-symmetric sig-
nals
If a signal is real and odd, then its spectrum is
imaginary and odd, since only sine components
can be used to synthesize it.
X(e^{jω}) = −j2 Σ_{n=1}^{∞} x(n) sin(ωn)
x(n) = (j/π) ∫_0^{π} X(e^{jω}) sin(ωn) dω
122
From the fact that the DTFT of a real and
even signal is real and even and that of a real
and odd is imaginary and odd, the real part
of the DTFT, Re(X(ejω )), of an arbitrary real
signal x(n) is the transform of its even compo-
nent xe(n) and j Im(X(ejω )) is that of its odd
component xo(n), where x(n) = xe(n) + xo(n).
123
Frequency-differentiation
(−jn)^k x(n) ↔ d^k X(e^{jω})/dω^k  or  n^k x(n) ↔ (j)^k d^k X(e^{jω})/dω^k
This property is applicable only if the resulting signals still fulfill the conditions for DTFT representation.
124
Summation
s(n) = Σ_{m=−∞}^{n} x(m) ↔ S(e^{jω}) = X(e^{jω})/(1 − e^{−jω}) + πX(e^{j0})δ(ω), −π < ω ≤ π
This property is applicable on the condition that the summation is bounded. That is, its DTFT exists.
125
Parseval’s theorem
E = Σ_{n=−∞}^{∞} |x(n)|² = (1/2π) ∫_0^{2π} |X(e^{jω})|² dω
The energy of a signal can also be computed in the frequency domain, as this representation is an equivalent representation. Since (1/2π)|X(e^{jω})|² dω is the signal energy over the infinitesimal frequency band dω, |X(e^{jω})|² is called the energy spectral density of the signal.
126
The Transfer Function
With x(n), h(n), and y(n) respectively, the sys-
tem input, impulse response, and output in the
time domain, and X(ejω ), H(ejω ), and Y (ejω )
their respective DTFT, the stable LTI system
output is given, in the two domains, as
y(n) = Σ_{m=−∞}^{∞} x(m)h(n − m)  and  Y(e^{jω}) = X(e^{jω})H(e^{jω})
127
Therefore, the transfer function, with respect
to the DTFT, is given by
H(e^{jω}) = Y(e^{jω})/X(e^{jω}),
provided |X(e^{jω})| ≠ 0 for all frequencies and
the system is initially relaxed.
The transfer function is also the frequency re-
sponse, since it represents the filtering char-
acteristic (response to real sinusoids) of the
system.
128
Digital differentiator
The input to the differentiator is the samples
of the continuous signal x(t), whose derivative
is being approximated. The output is the sam-
ples of the derivative. That is,
y(t) = dx(t)/dt
in the continuous time domain.
129
In the discrete time domain,
y(n) = dx(t)/dt |_{t=n}
The periodic frequency response, over one period, of the ideal digital differentiator is defined as
H(e^{jω}) = jω, −π < ω ≤ π
130
The impulse response of the ideal differentiator is obtained by finding the inverse DTFT of its frequency response.
h(n) = (1/2π) ∫_{−π}^{π} jω e^{jωn} dω = cos(πn)/n = (−1)^n/n for n ≠ 0 and 0 for n = 0, −∞ < n < ∞
131
Hilbert transform
An ideal Hilbert transformer is an all-pass filter, which imparts a −90° phase shift on all the real frequency components of the input signal.
132
The periodic frequency response over one pe-
riod of the ideal Hilbert transformer is defined
as
H(e^{jω}) = −j for 0 < ω ≤ π  and  j for −π < ω < 0
133
The impulse response of the ideal Hilbert trans-
former is obtained by finding the inverse DTFT
of its frequency response.
h(n) = (1/2π) ∫_0^{π} (−j)e^{jωn} dω + (1/2π) ∫_{−π}^{0} j e^{jωn} dω = 2 sin²(πn/2)/(πn) for n ≠ 0 and 0 for n = 0, −∞ < n < ∞
134
The DTFT spectrum and its inverse can be
adequately approximated by the DFT.
135
Chapter 5 Power Spectral Density
143
The power spectral density (PSD) or power
spectrum of a signal shows how much power is
contained in each of its frequency components.
144
PSD of
x(t) = 0.6 + cos(2πt − π/4)
Parseval’s theorem
Σ_{n=0}^{N−1} |x(n)|² = (1/N) Σ_{k=0}^{N−1} |X(k)|²
145
Take 8 samples of x(t), over one cycle of the sinusoid, from t = 0, with sampling interval 0.125 s. The DFT of the samples is
{4.8, 2.8284 − j2.8284, 0, 0, 0, 0, 0, 2.8284 + j2.8284}
146
The squared absolute values are {23.04, 16, 0, 0, 0, 0, 0, 16}.
147
PSD of a signal can also be defined as the
Fourier transform of its autocorrelation. That
is, the autocorrelation function of a signal and
its PSD are a Fourier transform pair. The au-
tocorrelation operation evaluates the integral of the product of a signal and its shifted version for various shifts. The average power is obtained when the shift is zero.
148
Autocorrelation rxx(τ ) of a power signal x(t) with period T is defined as
rxx(τ ) = lim_{T→∞} (1/T) ∫_{−0.5T}^{0.5T} x(t)x(t + τ ) dt,
where t is a dummy variable.
149
x(t) = A cos(2πf0 t + θ),  T = 1/f0
rxx(τ ) = lim_{T→∞} (1/T) ∫_{−0.5T}^{0.5T} A cos(2πf0 t + θ) A cos(2πf0 (t + τ ) + θ) dt
        = lim_{T→∞} (A^2/(2T)) ∫_{−0.5T}^{0.5T} (cos(2πf0 τ ) + cos(4πf0 (t + 0.5τ ) + 2θ)) dt
        = (A^2/2) cos(2πf0 τ )
150
At τ = 0, we get the average power as 0.5A^2, as derived earlier. Taking the Fourier transform of
(A^2/2) cos(2πf0 τ ),
we get the PSD of a sinusoid as
(A^2/4)(δ(f − f0 ) + δ(f + f0 ))
151
Welch’s method of estimating PSD
Estimating the PSD requires additional steps beyond computing the DFT: (i) partitioning the input data into overlapping segments; (ii) applying the selected window to the segments; (iii) computing the DFT of the windowed segments; and (iv) averaging the spectra.
152
First, we have to sample the input continuous
waveform, whose PSD has to be estimated,
and take N samples
153
1. Divide the data sequence into k segments and compute the DFT of each windowed segment:
Xk(m) = Σ_n x(n)w(n)e^{−j2πmn/M},
154
where n = (k − 1)P, . . . M + (k − 1)P − 1 and
w(n) is the window function.
155
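The four steps can be sketched directly in numpy; a minimal illustrative implementation (the segment length, overlap, window and test signal are assumptions, not the book's choices):

```python
import numpy as np

def welch_psd(x, M=64, overlap=0.5):
    """Average the periodograms of overlapping windowed segments."""
    P = int(M * (1 - overlap))                 # hop between segment starts
    w = np.hanning(M)                          # (ii) the selected window
    U = np.sum(w ** 2)                         # window power normalization
    starts = range(0, len(x) - M + 1, P)       # (i) overlapping segments
    psds = [np.abs(np.fft.fft(x[s:s + M] * w)) ** 2 / U
            for s in starts]                   # (iii) DFT of each segment
    return np.mean(psds, axis=0)               # (iv) average the spectra

rng = np.random.default_rng(0)
n = np.arange(1024)
x = np.sin(2 * np.pi * 0.1 * n) + 0.1 * rng.standard_normal(1024)
psd = welch_psd(x)    # peak near bin 0.1 * 64 = 6.4
```

Averaging trades frequency resolution for a reduced variance of the estimate.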
biased autocorrelation
rxx(τ ) = (1/N) Σ_{n=−∞}^{∞} x(n)x*(n + τ ),  −∞ ≤ τ ≤ ∞
The biased autocorrelation values are obtained by dividing the raw autocorrelation output {4, 11, 20, 30, 20, 11, 4} by the length of the sequence, which is N = 4 for the example, to get {1, 2.75, 5, 7.5, 5, 2.75, 1}.
156
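A numpy sketch of this example (the sequence {1, 2, 3, 4} is from the text):

```python
import numpy as np

# Biased autocorrelation of {1, 2, 3, 4}: raw autocorrelation divided by N = 4.
x = np.array([1.0, 2.0, 3.0, 4.0])
N = len(x)
raw = np.correlate(x, x, mode="full")   # {4, 11, 20, 30, 20, 11, 4}
biased = raw / N                        # {1, 2.75, 5, 7.5, 5, 2.75, 1}
```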
Blackman and Tukey Method
The equation governing the Blackman and Tukey
estimation of the PSD is
Rxx^BT (f ) = Σ_{k=−M+1}^{M−1} rxx(k)w(k)e^{−j2πf k},
where w is a symmetric window function of length 2M − 1 and is zero otherwise.
157
Parametric Methods of Estimating the PSD
Autoregressive Process
An AR(1) process is given by
x(n) + a(1)x(n − 1) = w(n)
158
w(n) is the difference between the predicted value and the correct value, which is considered as zero-mean white noise with variance σ^2.
159
An AR(N ) model predicts the next value in
a sequence as a linear combination of N past
values. That is
x(n) + Σ_{k=1}^{N} a(k)x(n − k) = w(n)
Taking the z-transform, we get
X(z) = W(z) / (1 + Σ_{k=1}^{N} a(k)z^{−k})
160
Let z = ejω , where ω is the radian frequency.
Therefore, the power of the AR process x(n)
is characterized in the frequency-domain as
Px(e^{jω}) = σ^2 / |1 + Σ_{k=1}^{N} a(k)e^{−jkω}|^2,
161
Yule-Walker Method of estimating PSD
[ rx(0)     rx(−1)    · · ·  rx(1 − N) ] [ a(1) ]     [ rx(1) ]
[ rx(1)     rx(0)     · · ·  rx(2 − N) ] [ a(2) ]     [ rx(2) ]
[   ..        ..                 ..    ] [  ..  ] = − [   ..  ]
[ rx(N − 1) rx(N − 2) · · ·  rx(0)     ] [ a(N) ]     [ rx(N) ]
162
where
rx(k) = Σ_{n=0}^{M−1−k} x(k + n)x*(n),  k = 0, 1, . . . , N
The minimum modeling error for the all-pole model is
|b(0)|^2 = σ^2 = rx(0) + Σ_{k=1}^{N} a(k)rx*(k)
163
Let the data sequence be {1, 2, 3, 4}. Let the
order of the all-pole (autoregressive) signal mod-
eling be 2. Determine the coefficients of the
all-pole model using the autocorrelation method.
The matrix equation is
" #" # " #
rx(0) rx(−1) a(1) rx(1)
=−
rx(1) rx(0) a(2) rx(2)
raw autocorrelation of the given sequence is
ˆ 20, 11, 4}
{4, 11, 20, 30,
164
Substituting these values in the matrix equa-
tion, we get
" #" # " #
30 20 a(1) 20
=−
20 30 a(2) 11
Solving, we get a(1) = −0.76 and a(2) = 0.14.
minimum modeling error is
165
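The Yule-Walker example above can be reproduced with numpy (a sketch; the variable names are illustrative):

```python
import numpy as np

# Yule-Walker AR(2) fit for {1, 2, 3, 4}; rx(k) from the raw autocorrelation.
x = np.array([1.0, 2.0, 3.0, 4.0])
r = np.correlate(x, x, mode="full")[len(x) - 1:]   # rx(0..3) = 30, 20, 11, 4

R = np.array([[r[0], r[1]],                  # rx(-1) = rx(1) for real data
              [r[1], r[0]]])
a = np.linalg.solve(R, -r[1:3])              # a(1) = -0.76, a(2) = 0.14
err = r[0] + a[0] * r[1] + a[1] * r[2]       # minimum modeling error
```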
Covariance Method of Estimating PSD
[ rx(1, 1) rx(1, 2) rx(1, 3) · · · rx(1, p) ] [ a(1) ]
[ rx(2, 1) rx(2, 2) rx(2, 3) · · · rx(2, p) ] [ a(2) ]
[ rx(3, 1) rx(3, 2) rx(3, 3) · · · rx(3, p) ] [ a(3) ]
[    ..       ..       ..            ..     ] [  ..  ]
[ rx(p, 1) rx(p, 2) rx(p, 3) · · · rx(p, p) ] [ a(p) ]
166
    [ rx(1, 0) ]
    [ rx(2, 0) ]
= − [ rx(3, 0) ]
    [    ..    ]
    [ rx(p, 0) ]
rx(k, l) = Σ_{n=p}^{N} x(n − l)x*(n − k),  k, l ≥ 0
where p is the order of the AR model and
{x(n), n = 0, 1, . . . , N }.
167
The covariance modeling error is
|b(0)|^2 = rx(0, 0) + Σ_{k=1}^{p} a(k)rx(0, k)
Let the data sequence be {1, 2, 3, 4}. Let the
order of the all-pole (autoregressive) signal mod-
eling be 2. Determine the coefficients of the
all-pole model using the covariance method.
168
rx(1, 1) = 2^2 + 3^2 = 13,  rx(2, 2) = 1^2 + 2^2 = 5
rx(1, 2) = 2 × 1 + 3 × 2 = 8 = rx(2, 1)
rx(1, 0) = 2 × 3 + 3 × 4 = 18,  rx(2, 0) = 1 × 3 + 2 × 4 = 11
rx(0, 0) = 3^2 + 4^2 = 25
[ 13 8 ] [ a(1) ]     [ 18 ]
[ 8  5 ] [ a(2) ] = − [ 11 ]
Solving, we get a(1) = −2 and a(2) = 1.
169
The minimum modeling error for the all-pole
model is
|b(0)|^2 = rx(0, 0) + Σ_{k=1}^{p} a(k)rx*(0, k)
For the example,
|b(0)|^2 = 25 + (−2)(18) + (1)(11) = 0
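A numpy sketch of the covariance-method example (the helper rx is an illustrative implementation of the definition above, for real data):

```python
import numpy as np

# Covariance-method AR(2) fit for {1, 2, 3, 4}.
x = np.array([1.0, 2.0, 3.0, 4.0])
N, p = len(x), 2

def rx(k, l):
    # rx(k, l) = sum over n = p..N-1 of x(n - l) x(n - k)
    return sum(x[n - l] * x[n - k] for n in range(p, N))

R = np.array([[rx(1, 1), rx(1, 2)],
              [rx(2, 1), rx(2, 2)]])                       # [[13, 8], [8, 5]]
a = np.linalg.solve(R, -np.array([rx(1, 0), rx(2, 0)]))    # a = [-2, 1]
err = rx(0, 0) + a[0] * rx(0, 1) + a[1] * rx(0, 2)         # 0: the model is exact
```

The zero error reflects that {1, 2, 3, 4} exactly satisfies x(n) = 2x(n − 1) − x(n − 2).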
The Modified Covariance Method
AR parameters are obtained by solving the nor-
mal equations of the covariance method. But,
the autocorrelation estimate is found using
rx(k, l) = Σ_{n=p}^{N−1} (x(n − l)x*(n − k) + x(n − p + l)x*(n − p + k))
This equation is derived by minimizing the sum
of the squares of the forward and backward
prediction errors.
171
Let the data sequence be {1, 2, 1, 4}. Let the
order of the all-pole (autoregressive) signal mod-
eling be 2. Determine the coefficients of the
all-pole model using the modified covariance
method.
172
The normal equations are
" #" # " #
10 10 a(1) 10
=−
10 22 a(2) 18
Solving, we get a(1) = −0.3333 and a(2) =
−0.6667. The forward prediction error is
= 6.6667
The backward prediction error is also the same
173
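A numpy sketch of the modified covariance example (the helper rx implements the forward-plus-backward sum above):

```python
import numpy as np

# Modified covariance AR(2) fit for {1, 2, 1, 4}: forward plus backward sums.
x = np.array([1.0, 2.0, 1.0, 4.0])
N, p = len(x), 2

def rx(k, l):
    return sum(x[n - l] * x[n - k] + x[n - p + l] * x[n - p + k]
               for n in range(p, N))

R = np.array([[rx(1, 1), rx(1, 2)],
              [rx(2, 1), rx(2, 2)]])                       # [[10, 10], [10, 22]]
a = np.linalg.solve(R, -np.array([rx(1, 0), rx(2, 0)]))    # [-1/3, -2/3]
err = rx(0, 0) + a[0] * rx(0, 1) + a[1] * rx(0, 2)         # 20/3 = 6.6667
```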
Frequency Estimation
In the earlier sections, the model assumed was
the output of a LTI filter driven by white noise.
In another important model, x(n) is consid-
ered as a sum of complex exponentials in white
noise. That is,
x(n) = Σ_{k=1}^{p} Ak e^{jωk n} + w(n)
k=1
174
z-transform
By multiplying signals with an appropriate exponential as a preprocessing step, we are able to extend the range of signals that can be convolved. This is possible because multiplication by an exponential commutes with convolution. This procedure, called the z-transform, is basically an extension of Fourier analysis.
178
The defining equation of the one-sided or uni-
lateral z-transform of x(n) is
X(z) = Σ_{n=0}^{∞} x(n)z^{−n}
Explicitly writing the first few terms of the in-
finite summation, we get
X(z) = x(0)+x(1)z −1 +x(2)z −2 +x(3)z −3 +· · ·
where z is a complex variable.
179
X(z) = X(re^{jω}) is the DTFT of x(n)r^{−n} for all values of r for which Σ_{n=0}^{∞} |x(n)r^{−n}| < ∞.
180
The area in the z-plane, where the z-transform
of a function is defined, is the area outside
of a circle with r as the radius. This area
is called the region of convergence (ROC) of
the z-transform. A circle in the z-plane with
center at the origin and radius r is the border
between the ROC and region of divergence.
The condition such as |z| > r for ROC specifies
the region outside the circle with center at the
origin in the z-plane with radius r.
181
z-transform of the unit-impulse signal, δ(n): as δ(n) is nonzero only at n = 0 with value 1, X(z) = 1, and the ROC is the entire z-plane.
182
The Inverse z-Transform
x(n) = (1/(2πj)) ∮_C X(z)z^{n−1} dz
with the integral evaluated, in the counter-
clockwise direction, along any simply connected
closed contour C, encircling the origin, that lies
in the ROC of X(z). The contour of integration in evaluating the inverse z-transform can lie anywhere in the z-plane, as long as it is completely in the ROC, satisfying the requirements.
183
Inverse z-transform by Partial-Fraction
Most of the z-transforms of practical interest
are rational functions (a ratio of two polyno-
mials in z). The denominator polynomial can
be factored into a product of first- or second-
order terms. Such z-transforms can be expressed as the sum of partial fractions, with each denominator factor forming a fraction.
184
The inverse z-transforms of the individual frac-
tions can be easily found from a short table of
z-transforms, such as those of δ(n), a^n u(n), and na^n u(n). The sum of the individual inverses is the inverse of the given z-transform.
185
Y (z) = z(5z − 7)/(z^2 − 3z + 2) = 2z/(z − 1) + 3z/(z − 2)
Finding the inverse z-transform of each term, we get the inverse of Y (z), that is, the time-domain sequence y(n) corresponding to Y (z),
y(n) = (2 + 3(2)^n)u(n)
186
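The result can be cross-checked by iterating the difference equation implied by Y (z); a plain-Python sketch:

```python
# Cross-check of y(n) = (2 + 3*2^n)u(n): dividing the numerator and
# denominator of Y(z) by z^2 gives Y(z)(1 - 3z^-1 + 2z^-2) = 5 - 7z^-1,
# i.e., the recursion y(n) = 3y(n-1) - 2y(n-2) + 5d(n) - 7d(n-1).
y = []
for n in range(8):
    d0 = 5.0 if n == 0 else 0.0        # 5*d(n)
    d1 = -7.0 if n == 1 else 0.0       # -7*d(n-1)
    y1 = y[n - 1] if n >= 1 else 0.0
    y2 = y[n - 2] if n >= 2 else 0.0
    y.append(3 * y1 - 2 * y2 + d0 + d1)

closed = [2.0 + 3.0 * 2 ** n for n in range(8)]   # 5, 8, 14, 26, ...
```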
Linearity
If x(n) ↔ X(z) and y(n) ↔ Y (z), then, for arbitrary constants a and b,
ax(n) + by(n) ↔ aX(z) + bY (z)
187
Right Shift of a Sequence
x(n − m)u(n) ↔ z^{−m}X(z) + z^{−m} Σ_{n=1}^{m} x(−n)z^n
For example, with m = 1 and m = 2, we get
x(n − 1)u(n) ↔ z^{−1}X(z) + x(−1)
x(n − 2)u(n) ↔ z^{−2}X(z) + z^{−1}x(−1) + x(−2)
188
The Transfer Function
The transfer function is defined as the ratio
of the transforms of the output and input, as-
suming that the system is initially relaxed.
H(z) = Y (z)/X(z)
By multiplying X(z) with H(z), we get the z-
transform of the zero-state response. Taking
the inverse z-transform, we get the zero-state
response in the time domain.
189
System Stability
1. An LTI discrete system is asymptotically stable if, and only if, all the roots (simple or repeated) of the denominator polynomial of the transfer function are located inside the unit-circle.
2. An LTI discrete system is unstable if, and only if, one or more of the roots of the denominator polynomial of the transfer function are located outside the unit-circle or repeated roots are located on the unit-circle.
190
3. An LTI discrete system is marginally stable if, and only if, no roots of the denominator polynomial of the transfer function are located outside the unit-circle and some unrepeated roots are located on the unit-circle.
191
Convolution
Let x(n)u(n) ↔ X(z) and h(n)u(n) ↔ H(z).
The convolution of the sequences in the trans-
form domain is the product of their z-transforms
Y (z) = X(z)H(z). Let
x(n) = (1/4)^n u(n) ↔ X(z) = z/(z − 1/4)
h(n) = (1/5)^n u(n) ↔ H(z) = z/(z − 1/5)
192
Expanding Y (z) into partial fractions,
Y (z) = X(z)H(z) = (z/(z − 1/4))(z/(z − 1/5)) = 5z/(z − 1/4) − 4z/(z − 1/5)
Taking the inverse transform of Y (z),
y(n) = (5(1/4)^n − 4(1/5)^n)u(n)
193
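The convolution property can be checked numerically; a numpy sketch comparing the direct convolution with the closed form obtained above:

```python
import numpy as np

# conv((1/4)^n u(n), (1/5)^n u(n)) compared with y(n) = (5(1/4)^n - 4(1/5)^n)u(n).
n = np.arange(20)
x = 0.25 ** n
h = 0.2 ** n
y_conv = np.convolve(x, h)[:20]            # first 20 samples of x(n)*h(n)
y_closed = 5 * 0.25 ** n - 4 * 0.2 ** n
```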
Left Shift of a Sequence
Let x(n)u(n) ↔ X(z) and m be a positive integer. Due to the left shifting, some samples are lost, and they have to be taken care of in expressing the transform in terms of X(z).
x(n + m)u(n) ↔ z^m X(z) − z^m Σ_{n=0}^{m−1} x(n)z^{−n}
194
Multiplication by n
Let x(n)u(n) ↔ X(z). Then,
nx(n)u(n) ↔ −z dX(z)/dz
195
Multiplication by an
This property allows us to find the transforms of signals a^n x(n), which are the product of x(n) and a^n, in terms of the transform X(z) of x(n), generating a large number of z-transforms. Let x(n)u(n) ↔ X(z). Then,
a^n x(n)u(n) ↔ X(z/a),
for any constant a, real or complex.
196
Summation
Let x(n)u(n) ↔ X(z). Then, by summation
property,
s(n) = Σ_{m=0}^{n} x(m) ↔ S(z) = (z/(z − 1))X(z)
197
Initial Value
Without finding the inverse z-transform, we can determine the initial term x(0) of a sequence x(n) from its z-transform X(z) directly. Let x(n)u(n) ↔ X(z). Then, the identity
x(0) = lim_{z→∞} X(z)
is called the initial value property.
198
Final Value
Without finding the inverse z-transform, we
can determine the limit of a sequence x(n),
as n tends to ∞ from its z-transform X(z) di-
rectly. Let x(n)u(n) ↔ X(z). Then, the iden-
tity
lim_{n→∞} x(n) = lim_{z→1} (z − 1)X(z)
is called the final value property, if the ROC of
(z − 1)X(z) includes the unit-circle.
199
Transform of Semiperiodic Functions
The z-transform X(z) of a semiperiodic function with period N , in terms of the z-transform X0(z) of its first period, is
X(z) = (z^N /(z^N − 1))X0(z)
200
For example, let x0(n) = nu(n), n = 0, 1, 2 with
period N = 3. The transform of the first period samples is
X0(z) = z^{−1} + 2z^{−2} = (z + 2)/z^2
From the property,
X(z) = (z^3/(z^3 − 1))((z + 2)/z^2) = z(z + 2)/(z^3 − 1)
201
A filter removes something from what passes through it. A coffee filter removes coffee grounds from the coffee extract, allowing the decoction to pass. Water filters are used to remove salts and destroy bacteria. An electrical filter modifies the spectrum of signals passing through it in a desired way. Filters are commonly used for several purposes. An example of the usage of electrical filters is to reduce the noise mixed up with the signals.
206
The impulse response of FIR filters is of finite length and, therefore, implemented with a finite number of coefficients. The output of FIR filters is an exclusive function of past and present input samples only, without feedback. That means there is no inherent stability problem.
207
Overview of FIR filter design using windows
The infinite-extent impulse response of the ideal filter that meets the specification is first found by taking the inverse DTFT of the
frequency response. The impulse response is
made realizable by shifting and truncation of
the ideal impulse response. The truncation is
carried out by multiplying the impulse response
by a window of suitable length and character-
istics. The filter order is approximately found
for each window.
208
In the time-domain, the causal FIR filter is
characterized by a difference equation that is
a linear combination of past and present in-
put samples x(n) multiplied by the coefficients
h(n). For example, the difference equation of
a second-order filter is
y(n) = h(0)x(n) + h(1)x(n − 1) + h(2)x(n − 2)
The output is y(n). The filter length M is 3,
which is the number of terms of the impulse
response.
209
The transfer function with respect to the DTFT
or the frequency response of a second-order
FIR filter is obtained by replacing z by ejω in
H(z) as
H(e^{jω}) = Y (e^{jω})/X(e^{jω}) = h(0) + h(1)e^{−jω} + h(2)e^{−j2ω}
210
Type I Lowpass
The passband and stopband edge frequencies of a lowpass filter are specified as ωc = 0.3π radians and ωs = 0.45π radians, respectively. The minimum attenuation required in the stopband is 18 dB. Design the lowpass filter using the rectangular window. The sampling frequency is fs = 1024 Hz.
211
Frequency 0.3π corresponds to (0.3π/(2π))1024 = 153.6 Hz. Frequency 0.45π corresponds to (0.45π/(2π))1024 = 230.4 Hz. The maximum attenuation provided by filters using the rectangular window is about 21 dB. Therefore, the specification of 18 dB is realizable. Now, we find the order of the filter as
N ≥ 0.9 (2π/(0.45π − 0.3π)) = 12
212
The cutoff frequency of the corresponding ideal
filter is computed as
ωci = (ωc + ωs)/2 = (0.3π + 0.45π)/2 = 0.375π
The shifted impulse response of the filter is given by
h(n) = sin(ωci(N/2 − n))/(π(N/2 − n)) = sin(0.375π(6 − n))/(π(6 − n)),
n = 0, 1, . . . , 12
213
The shifted impulse response values, with a
precision of four digits after the decimal point,
are
{hs(n), n = 0, 1, . . . , 12} = {0.0375, −0.0244,
214
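A numpy sketch that evaluates the shifted impulse response above (the n = 6 sample is the limit ωci/π = 0.375):

```python
import numpy as np

# Shifted impulse response h(n) = sin(0.375*pi*(6 - n)) / (pi*(6 - n)),
# n = 0..12, with the n = 6 sample taken as the limit 0.375.
n = np.arange(13)
hs = np.where(n == 6, 0.375,
              np.sin(0.375 * np.pi * (6 - n)) /
              (np.pi * np.where(n == 6, 1, 6 - n)))
# hs[0] is about 0.0375 and hs[1] about -0.0244, matching the listed values
```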
Optimal Equiripple Linear-Phase FIR filters
In the design of this type of filters, the weighted
difference between frequency responses of the
desired and the designed filters is minimized
iteratively. These filters are characterized by:
(i) satisfying a given specification by the low-
est order; (ii) having equiripple in all the bands
except in the transition bands. The equiripples
in the bands could be of different sizes;
215
(iii) having specified cutoff band edge frequencies; and (iv) requiring only one algorithm for various types of filters, such as lowpass and highpass. The design procedure for these filters is better presented through examples.
216
While analog filters are not much used due to
the obsolescence of analog devices, the design
formulas of well-established families of analog
filters, such as the Butterworth filter, can still
be used. That is, design an analog filter to the
required specifications and then use suitable
transformation formulas, from the Laplace do-
main to the z-transform domain, to get the
required transfer function of the digital filter.
234
Butterworth Filters
The Laplace domain representation of the resistor-capacitor lowpass filter circuit is a first-order Butterworth filter. The transfer function is
H(jω) = 1/(1 + jω)  and  |H(jω)| = 1/√(1 + ω^2)
The poles of the filter are equally spaced around
the left half of the unit circle. There is a pole
on the real axis for N odd.
235
Chebyshev Filters
The response of Chebyshev filters has ripples.
The poles of the filter lie on an ellipse. For the
same order and passband attenuation, due to equiripple in the passband, a Chebyshev filter has a steeper transition-band response than a Butterworth filter. That is, for a given specification, the required order of the Chebyshev filter is lower than that of the Butterworth filter.
236
We derived the transfer functions of the nor-
malized analog lowpass Butterworth and Cheby-
shev filters. Using transformations, we can get
the transfer functions of analog lowpass, high-
pass, bandpass and bandstop filters with arbi-
trary passband cutoff frequencies from that of
the lowpass filter by simple transformations
237
The Bilinear Transformation. The analog trans-
fer function has to be transformed to the z do-
main by a suitable transformation for replacing
the variable s of the analog filter transfer func-
tion by the digital variable z. While there are several methods, the transformation most often used in practice is the bilinear transformation. It is based on transforming a differential equation into a difference equation by the trapezoidal formula for numerical integration.
238
We have to make the substitution
s = (2/Ts)((z − 1)/(z + 1))
in the transfer function of the filter to get an
equivalent discrete transfer function H(z).
239
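A plain-Python sketch of the substitution for the first-order filter H(s) = 1/(s + 1) (the sampling interval Ts is an illustrative choice; the H(z) coefficients follow by algebra from the substitution):

```python
# Bilinear transformation of H(s) = 1/(s + 1): with c = 2/Ts, substituting
# s = c(z - 1)/(z + 1) gives H(z) = (z + 1) / ((c + 1)z + (1 - c)).
Ts = 0.5               # illustrative sampling interval
c = 2.0 / Ts
bz = [1.0, 1.0]        # numerator of H(z), descending powers of z
az = [c + 1.0, 1.0 - c]

dc_gain = sum(bz) / sum(az)        # z = 1: the analog dc gain of 1 is kept
nyq_num = bz[0] * (-1) + bz[1]     # z = -1: numerator 0, as s -> infinity
```

The mapping preserves the gains at dc (s = 0, z = 1) and at infinite analog frequency (z = −1), which is the essence of the bilinear frequency compression.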
Frequency warping
The frequency range in the s-plane is infinite. That is, −∞ < ωa < ∞. The effective frequency range in the z-plane is finite and periodic. That is, −π < ωd Ts ≤ π. We have to find the relationship between ωa and ωd in the filter design. For the bilinear transformation,
ωa = (2/Ts) tan(ωd Ts /2),
which is the frequency warping that is compensated by prewarping the critical frequencies before the analog design.
240
Multirate Digital Signal Processing
In addition to digital systems with a single sam-
pling rate from input to output, there are ap-
plications in which it is advantageous to use
different sampling rates for different tasks. For
example, reconstruction of signals is easier with a higher sampling rate. For convolution, a sampling rate just above the Nyquist rate is adequate, resulting in faster execution. A system that uses different sampling rates is called a multirate system.
258
In addition to adders, multipliers and delay units
used in implementing single sampling rate dig-
ital systems, two more basic components, the
up-sampler and down-sampler, are required in
the implementation of multirate systems.
259
Down-sampler
The output of the down-sampler retains every
M th sample, starting from the index n = 0,
and discarding the rest. That is,
xd(n) = x(nM )
260
Down-sampler in the frequency-domain
Xd(e^{jω}) = (1/M) Σ_{k=0}^{M−1} X(e^{j(ω−2πk)/M})
261
Decimator
Downsampling the signal by a factor of M reduces the sampling rate to fs/M . Only frequency components with frequencies from zero to fs/(2M ) can be properly represented. Therefore, the high-frequency components, which are assumed to be of no interest, have to be filtered out. A signal that is filtered first by a filter with cutoff frequency fs/(2M ) and then downsampled by a factor of M is a signal decimated by M .
262
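A numpy sketch of a decimator (the windowed-sinc filter length and the test signal are illustrative assumptions, not from the text):

```python
import numpy as np

# Decimation by M = 4: windowed-sinc lowpass with cutoff fs/(2M), then keep
# every Mth sample; filter length and test signal are illustrative.
M, taps = 4, 41
k = np.arange(taps) - taps // 2
h = np.sinc(k / M) / M * np.hamming(taps)   # lowpass, dc gain close to 1

n = np.arange(400)
x = np.sin(2 * np.pi * 0.01 * n)            # well below the fs/(2M) cutoff
xd = np.convolve(x, h, mode="same")[::M]    # filter first, then downsample
```

Since the input frequency is inside the new band, the decimated signal remains a full-amplitude sinusoid, now at 0.04 cycles per sample.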
Interpolation
The sampling rate of a signal x(n) with sam-
pling rate fs can be increased by a factor of L
by inserting L − 1 zeros (upsampling) between
successive values of the signal. The upsampled
signal xu(n) is defined in terms of x(n) and L
as
xu(n) = x(n/L), for n = 0, ±L, ±2L, . . .
       = 0, otherwise
263
The sampling rate of xu(n) becomes Lfs , L
times that of x(n). The effect of zero-insertion
is that the spectrum of xu(n) is L-fold periodic
replication of that of x(n).
264
The additional frequencies, called image fre-
quencies, created by replication are required
to construct the upsampled signal. These un-
wanted components have to be filtered out by
a lowpass filter with cutoff frequency fs /(2L)
in order to get the interpolated signal xi(n).
Therefore, the interpolation operation requires
upsampling and lowpass filtering.
265
Upsampling in the Frequency-Domain
Let x(n) ↔ X(e^{jω}). The signal x(n) is made bumpy by the insertion of zeros in upsampling it. Therefore, the upsampled signal requires additional high-frequency components to reconstruct it. These high-frequency components, called image frequencies, are created by the replication of the original spectrum. Let the upsampling factor be L = 2.
266
The spectrum of the upsampled signal is obtained by replacing ω by 2ω, which is a spectral compression of the original spectrum. That is, the spectral contents over two periods are placed over the range 0 to 2π.
xu(n) = x(n/2), for n = 0, ±2, ±4, . . .
       = 0, otherwise
267
In general, with the upsampling factor L,
Xu(e^{jω}) = X(e^{jLω})
268
Rational Sampling Rate Converters
There are different formats used in recording audio and video signals. The sampling frequency used for recording the signals also varies. Therefore, it is often necessary to use rational sampling rate converters, with conversion factor L/M , to convert signals from one format to another. Let us say we want to increase the sampling rate of a signal x(n) with sampling frequency fs by a factor L/M > 1.
269
The first stage of the interpolator is an upsampler by a factor L followed by a digital anti-imaging filter with gain L and cutoff frequency fs/(2L). The first stage of the decimator is a digital anti-aliasing filter with gain 1 and cutoff frequency fs/(2M ) followed by a downsampler by a factor M . These filters can be combined into one with a gain of L and cutoff frequency F0 that is the minimum of fs/(2L) and fs/(2M ).
270
Multistage Converters
If L or M or both are large, then the required lowpass filter becomes a narrowband filter, making the converter difficult to implement. In order to make the design of the converter simpler, a set of cascaded converters can be used with L/M factored, where the factors Lk and Mk are relatively small. That is,
L/M = (L1/M1)(L2/M2) · · · (LN /MN )
271
Narrowband filters
Narrowband filters have very small relative (fractional) bandwidths. Typically, a bandwidth of 1% is narrow. Multirate implementations use a set of lower-order filters, while the fixed-rate implementation uses one higher-order filter. High-order filters are difficult to design and implement in practice. In the multirate implementation, the filtering task is carried out in three stages: a decimation stage, a filter stage and an interpolation stage.
272
The sampling rate is reduced by the decima-
tor by a factor of M . This results in mak-
ing the narrowband wider. The filtering prob-
lem is now easier. This filter stage produces
the downsampled version of the desired signal.
The interpolation stage upsamples the filter
output to get back the desired signal at the
sampling rate of the input signal. This way of implementing narrowband filters can reduce the total filtering requirements considerably.
273
Polyphase Implementation of the Decimator
Let us say the downsampling factor is 2 and the FIR filter has 4 coefficients. Only the even-indexed output values of the decimator are required. The governing equations producing the output at some consecutive even indexes are
y(8) = (h(0)x(8) + h(2)x(6)) + (h(1)x(7) + h(3)x(5))
y(10) = (h(0)x(10) + h(2)x(8)) + (h(1)x(9) + h(3)x(7))
274
Because the odd-indexed output values are not
required, we can divide the input sequence into
two subsequences, odd-indexed sequence and
even-indexed sequence. The filter coefficients
can also be divided into two sets, one set con-
taining the even-indexed coefficients and the
other set containing the odd-indexed coeffi-
cients. As odd-indexed input values convolve
only with odd-indexed coefficients and
275
even-indexed input values convolve only with
even-indexed coefficients, we have partitioned
the 4-coefficient filter into two subfilters, each one working on its own input sequence at one-half of the original sampling rate fs. The subfilters
are called polyphase filters. Therefore, the fil-
ter structure consists of two filters with two
input sequences. The outputs of the subfil-
ters are added to produce the required output
sequence.
276
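The structure described above can be verified numerically; a numpy sketch comparing the direct and polyphase decimators (the filter coefficients and input are illustrative):

```python
import numpy as np

# Polyphase decimation by 2 with a 4-coefficient FIR filter: subfilter
# {h(0), h(2)} runs on even-indexed inputs, {h(1), h(3)} on odd-indexed
# inputs (delayed one low-rate sample), and the outputs are added.
h = np.array([0.4, 0.3, 0.2, 0.1])
x = np.arange(1.0, 21.0)                     # x(0..19)

# Direct method: filter at the full rate, keep even-indexed outputs.
y_direct = np.convolve(x, h)[:len(x)][::2]

# Polyphase method: two half-rate convolutions.
xe, xo = x[::2], x[1::2]
ye = np.convolve(xe, h[::2])[:len(xe)]
yo = np.convolve(xo, h[1::2])[:len(xo)]
y_poly = ye + np.concatenate(([0.0], yo[:-1]))   # odd branch delayed by one
```

Both branches run at half the input rate, which is the computational advantage of the polyphase structure.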
Polyphase implementation of interpolation
In the polyphase implementation, the filters
work at a lower rate. Let the input signal be x(n) = {x(0), x(1), x(2), . . .} and it has to be interpolated by a factor of L = 2. By upsampling with L = 2, we get
xu(n) = {x(0), 0, x(1), 0, x(2), 0, . . .}
Let the FIR filter coefficients be
h(n) = {h(0), h(1), h(2), h(3)}
Then, the convolution of xu(n) and h(n) yields typical
277
outputs such as
y(2) = h(0)x(1) + h(1)(0) + h(2)x(0) + h(3)(0)
y(3) = h(0)(0) + h(1)x(1) + h(2)(0) + h(3)x(0)
y(4) = h(0)x(2) + h(1)(0) + h(2)x(1) + h(3)(0)
y(5) = h(0)(0) + h(1)x(2) + h(2)(0) + h(3)x(1)
Dropping the terms with zero inputs, we get
y(2) = h(0)x(1) + h(2)x(0)
y(3) = h(1)x(1) + h(3)x(0)
y(4) = h(0)x(2) + h(2)x(1)
y(5) = h(1)x(2) + h(3)x(1)
278
The even-indexed output samples are computed
by convolving the input x(n) with the subfilter
consisting of the even-indexed values of the fil-
ter coefficients. The odd-indexed output sam-
ples are computed by convolving the input x(n)
with the subfilter consisting of the odd-indexed
values of the filter coefficients. At each cycle,
L outputs are produced. By upsampling and
delaying the subfilter outputs, the proper in-
terleaved output is produced.
279
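A numpy sketch comparing direct upsample-then-filter interpolation with the polyphase form (the coefficients and input are illustrative):

```python
import numpy as np

# Polyphase interpolation by L = 2: subfilter {h(0), h(2)} produces the
# even-indexed outputs, {h(1), h(3)} the odd-indexed outputs; the two
# low-rate outputs are interleaved.
h = np.array([0.5, 1.0, 0.5, 0.25])
x = np.array([1.0, 2.0, 3.0, 4.0])

# Direct method: upsample by 2 (insert zeros), then filter.
xu = np.zeros(2 * len(x))
xu[::2] = x
y_direct = np.convolve(xu, h)[:2 * len(x)]

# Polyphase method: two convolutions at the low rate, interleaved.
ye = np.convolve(x, h[::2])[:len(x)]
yo = np.convolve(x, h[1::2])[:len(x)]
y_poly = np.zeros(2 * len(x))
y_poly[::2], y_poly[1::2] = ye, yo
```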
Discrete Wavelet Transform
Fourier analysis represents a signal in terms of sinusoidal basis functions. It was found that, for certain applications, it is necessary to represent the signal in the time-frequency domain, rather than exclusively in the frequency-domain. The simplest, and also practically useful, set of such basis functions was found by Haar.
280
Using the Haar transform matrix, find the DWT
of x(n). Verify that x(n) is reconstructed by
computing the IDWT. Verify Parseval’s theo-
rem. Let {x(0) = 2, x(1) = −3}.
[ Xφ(0, 0) ]           [ 1  1 ] [  2 ]   [ −1/√2 ]
[ Xψ(0, 0) ] = (1/√2)  [ 1 −1 ] [ −3 ] = [  5/√2 ]
2
As in common with other transforms, the DWT
output for an input is a set of coefficients that
express input in terms of the basis functions.
281
Each basis function, during its existence, con-
tributes to the value of the time-domain sig-
nal at each sample point. Therefore, multiply-
ing each basis function by the corresponding
DWT coefficient and summing the products
gets back the time-domain signal.
For the example,
x(0) = (1/√2)(−1/√2) + (1/√2)(5/√2) = 2
x(1) = (1/√2)(−1/√2) − (1/√2)(5/√2) = −3
282
The IDWT gets back the input samples.
[ x(0) ]           [ 1  1 ] [ −1/√2 ]   [  2 ]
[ x(1) ] = (1/√2)  [ 1 −1 ] [  5/√2 ] = [ −3 ]
Parseval’s theorem states that the sum of the
squared-magnitude of a time-domain sequence
(energy) equals the sum of the squared-magnitude
of the corresponding transform coefficients. For
the example sequence,
2^2 + (−3)^2 = 13 = (−1/√2)^2 + (5/√2)^2
283
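A numpy sketch of this example:

```python
import numpy as np

# 2-point Haar DWT of {2, -3}, its inverse, and Parseval's theorem.
W = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
x = np.array([2.0, -3.0])

X = W @ x            # [-1/sqrt(2), 5/sqrt(2)]
x_back = W.T @ X     # W is orthogonal, so W^{-1} = W^T

energy_time = np.sum(x ** 2)   # 13
energy_dwt = np.sum(X ** 2)    # also 13
```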
1-level (scale 1) 4-point Haar DWT
[ Xφ(1, 0) ]           [ 1  1  0  0 ] [ x(0) ]
[ Xφ(1, 1) ] = (1/√2)  [ 0  0  1  1 ] [ x(1) ]
[ Xψ(1, 0) ]           [ 1 −1  0  0 ] [ x(2) ]
[ Xψ(1, 1) ]           [ 0  0  1 −1 ] [ x(3) ]
The inverse DWT, with W4,1^{−1} = W4,1^T , is
[ x(0) ]           [ 1  0  1  0 ] [ Xφ(1, 0) ]
[ x(1) ] = (1/√2)  [ 1  0 −1  0 ] [ Xφ(1, 1) ]
[ x(2) ]           [ 0  1  0  1 ] [ Xψ(1, 0) ]
[ x(3) ]           [ 0  1  0 −1 ] [ Xψ(1, 1) ]
284
Fast Computation of the DFT
It is the availability of fast algorithms to compute the DFT that, in addition to its theoretical importance, makes Fourier analysis indispensable in practical applications. In turn, the other versions of Fourier analysis can be adequately approximated by the DFT.
294
The algorithm is based on the classical divide-
and-conquer strategy of developing fast algo-
rithms. A problem is divided into two smaller
problems of half the size. Each smaller prob-
lem is solved separately and the solution to
the original problem is found by appropriately
combining the solutions of the smaller prob-
lems. This process is continued recursively un-
til the smaller problems reduce to trivial cases.
Therefore, the DFT is never computed using
its definition.
295
The order of computational complexity of all
of the practically used algorithms is the same,
O(N log2 N ) to compute an N -point DFT against O(N^2) from its definition. It is this reduction in
computational complexity by an order that has
resulted in the widespread use of the Fourier
analysis in practical applications of science and
engineering.
296
The algorithm is developed using the half-wave
symmetry of periodic waveforms. This ap-
proach gives a better viewpoint of the algo-
rithm than other approaches such as matrix
factorization. The DFT is defined for any
length. However, the practically most useful
DFT algorithms are of length that is an inte-
gral power of 2. That is N = 2^M , where M is
a positive integer. If necessary, zero padding
can be employed to the sequences so that they
satisfy this constraint.
297
Half-Wave Symmetry of Periodic Waveforms
Any practical signal can be decomposed into a
set of sinusoidal components of even- and odd-
indexed frequencies. The algorithm for fast
computation of the DFT, which is of prime
importance in practical applications, is based
on separating the signal components into two
sets: one set containing the odd-indexed fre-
quency components and the other containing
the even-indexed frequency components.
299
An N -point DFT becomes two N/2-point DFTs. This process is recursively continued until the subproblems become 1-point DFTs.
A physical understanding of separating the even
and odd frequency components, can be ob-
tained using the half-wave symmetry of peri-
odic waveforms.
300
The PM DIF DFT Algorithm
A given waveform x(n) can be expressed as the
sum of N frequency components (for example,
with N = 8),
x(n) = X(0)e^{j0(2π/8)n} + X(1)e^{j1(2π/8)n} + X(2)e^{j2(2π/8)n}
     + X(3)e^{j3(2π/8)n} + X(4)e^{j4(2π/8)n} + X(5)e^{j5(2π/8)n}
     + X(6)e^{j6(2π/8)n} + X(7)e^{j7(2π/8)n},  n = 0, 1, . . . , 7
301
For the most compact algorithms, the input
sequence x(n) has to be expressed as 2-element
vectors
a0(n) = {a00(n), a01(n)} = 2{xeh(n), xoh(n)},
where n = 0, 1, . . . , N/2 − 1.
302
The first and second elements of the vectors
are, respectively, the scaled even and odd half-
wave symmetric components xeh(n) and xoh(n)
of x(n). The division by 2 in finding the sym-
metric components is deferred. The DFT ex-
pression is reformulated into two equations.
303
X(k) = Σ_{n=0}^{N−1} x(n)e^{−j(2π/N)kn},  k = 0, 1, . . . , N − 1
     = Σ_{n=0}^{(N/2)−1} (x(n) + x(n + N/2)) e^{−j(2π/N)kn},  k even
     = Σ_{n=0}^{(N/2)−1} (x(n) − x(n + N/2)) e^{−j(2π/N)kn},  k odd
304
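The equations above keep e^{−j(2π/N)kn} with k odd; writing k = 2m + 1 makes the twiddle factor e^{−j2πn/N} explicit, and one stage of the decomposition can then be checked with numpy (a sketch with random data):

```python
import numpy as np

# One DIF stage for N = 8: even-indexed X(k) are the 4-point DFT of
# x(n) + x(n+4); odd-indexed X(k) are the 4-point DFT of
# (x(n) - x(n+4)) e^{-j2*pi*n/8}.
N = 8
rng = np.random.default_rng(1)
x = rng.standard_normal(N)
X = np.fft.fft(x)

s = x[:N // 2] + x[N // 2:]
d = (x[:N // 2] - x[N // 2:]) * np.exp(-2j * np.pi * np.arange(N // 2) / N)

X_even = np.fft.fft(s)   # X(0), X(2), X(4), X(6)
X_odd = np.fft.fft(d)    # X(1), X(3), X(5), X(7)
```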
In a decimation-in-frequency (DIF) algorithm, the transform sequence, X(k), is successively divided into smaller subsequences. For example, in the beginning of the first stage, the computation of an N -point DFT is decomposed into two problems: (i) computing the (N/2) even-indexed X(k) and (ii) computing the (N/2) odd-indexed X(k).
305
The PM DIT DFT Algorithm
In a decimation-in-time (DIT) algorithm, the data sequence, x(n), is successively divided into smaller subsequences. For example, in the beginning of the last stage, the computation of an N -point DFT is decomposed into two problems: (i) computing the (N/2)-point DFT of the even-indexed x(n) and (ii) computing the (N/2)-point DFT of the odd-indexed x(n). The DIT DFT algorithm is based on zero-padding, time-shifting, and spectral redundancy.
306
The DIT DFT algorithms can be considered
as the algorithms obtained by transposing the
signal-flow graph of the corresponding DIF al-
gorithms, that is by reversing the direction of
(signal flow) all the arrows and interchanging
the input and the output.
307
Efficient Computation of the DFT of Real Data
It is the reduction of the computational complexity from O(N^2) to O(N log2 N ) that is most important in most of the applications. However, if it is essential, the computational complexity and the storage requirements can be further reduced by a factor of about 2 for computing the DFT of real data. One algorithm computes the DFT of a single real-valued data set. Another algorithm computes the DFTs of two real-valued data sets simultaneously.
308
Effects of Finite Wordlength
Several signals encountered in practice are of
random nature. Their future values are un-
predictable and represented by some averages
such as correlation in the time-domain and
power spectrum in the frequency-domain.
322
Quantization Noise Model
The effect of arithmetic noise is analyzed us-
ing a linear noise model by placing an additive
noise source at the output of the multipliers
and after quantization.
323
In the finite wordlength representation of an
infinite precision number, there is always some
uncertainty about the actual amplitude of that
number. The corresponding signal due to this
uncertainty is called quantization noise. It
has to be ensured that the power of this noise
is sufficiently small by providing adequate wordlength
so that a stable system remains stable and the
signal-to-noise ratio (SNR) is high.
324
The variance σq² of the quantization error, assuming
the error e is uniformly distributed with mean
zero over (−q/2, q/2), is given as

σq² = (1/q) ∫_{−q/2}^{q/2} e² de = q²/12 = 2^(−2b)/12

where q = 2^(−b) is the quantization step.
325
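The q²/12 result is easy to verify numerically. A minimal sketch, in which the test signal and the number of fractional bits b are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
b = 8                               # number of fractional bits
q = 2.0 ** -b                       # quantization step
x = rng.uniform(-1.0, 1.0, 200000)  # test signal
e = np.round(x / q) * q - x         # rounding error, lies in (-q/2, q/2]
print(np.var(e), q * q / 12)        # empirical variance vs q^2/12
```

The empirical variance of the rounding error agrees closely with q²/12 = 2^(−2b)/12.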
Quantization Effects in DFT Computation
The immense practical importance of the power-of-2
DFT algorithms is due not only to their speedup
of the DFT computation but also to their
reduction of the quantization effects.
The major problems with finite wordlength are
arithmetic overflow, round-off errors and
twiddle-factor quantization.
326
Quantization errors in computing the DFT
using power-of-2 algorithms
In the fast power-of-2 DFT algorithms, the
computation is carried out over log₂ N stages,
sharing as many partial results as possible.
This computation over stages enables distributed
scaling, in which the noise produced by the
preceding stage is scaled by the current
stage along with the partial results. This is the
reason for the reduction of roundoff noise in
fast DFT algorithms.
327
The SNR is then inversely proportional to N,
compared with N² when all the scaling is carried
out at the input. For each quadrupling of N, the
wordlength has to be increased by 1 bit to
maintain the same SNR.
328
Quantization Effects in Digital Filters
Of necessity, we have to implement digital fil-
ters using fixed-point digital systems. Con-
sequently, filter performance deteriorates from
the desired one. There are four aspects we
have to look into. The structure of the fil-
ter has a significant effect on its sensitivity to
coefficient quantization. Effects of rounding
or truncation of signal samples are modeled by a
random noise source inserted at the point of
quantization. Appropriate scaling to prevent overflows
has to be taken care of.
329
The quantization effects introduce nonlineari-
ties in the linear system, which may result in
oscillations, called limit cycles, even with zero
or constant input. Remember that no new fre-
quencies other than the input frequency com-
ponents can exist in a linear system. Due
to quantization, some of the effects are: (i)
uniqueness of output may be lost; (ii) true
output may be lost; (iii) appearance of limit
cycles; and (iv) loss of stability in IIR systems.
The wordlength has to be sufficient so that the
system response is acceptable.
330
Coefficient Quantization
The quantized coefficients of the filter will be
different from the infinite-precision coefficients,
depending on the quantization step. Therefore,
the poles and zeros and, consequently, the fre-
quency response will be different from the de-
sired one. Stability of IIR filters may be af-
fected due to the quantization of the coeffi-
cients. The coefficient sensitivity of a filter de-
pends on the distance between its poles. The
poles of higher-order filters tend to occur in
clusters.
331
Therefore, in order to increase the distance
between poles, it is a common practice to de-
compose higher-order filters into second-order
sections connected in cascade or parallel. The
distance between the pair of poles in each section
is usually longer. There are special filter
structures that are designed for low coefficient
sensitivity. Poles located close to the unit-circle
affect the frequency response more. Further,
the design of the filter coefficients may limit
the order of the filter, which results in the
decomposition of longer filters into a set of
smaller ones.
332
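The higher coefficient sensitivity of clustered poles in a direct-form realization, compared with a cascade of second-order sections, can be illustrated numerically. The pole locations and the 10-bit wordlength below are hypothetical choices for the sketch:

```python
import numpy as np

def quantize(c, b):
    """Round coefficients to b fractional bits (step 2**-b)."""
    return np.round(np.asarray(c) * 2.0 ** b) / 2.0 ** b

def max_pole_shift(true_poles, roots):
    """Largest distance from any true pole to its nearest quantized root."""
    return max(min(abs(p - r) for r in roots) for p in true_poles)

# hypothetical cluster of poles near the unit circle (radius 0.95)
upper = [0.95 * np.exp(1j * w) for w in (0.20, 0.25, 0.30)]
poles = np.array(upper + [p.conjugate() for p in upper])

# direct form: quantize the single 6th-order polynomial
a_q = quantize(np.poly(poles).real, 10)
shift_direct = max_pole_shift(poles, np.roots(a_q))

# cascade form: quantize each second-order section separately
cascade_roots = []
for p in upper:
    sec_q = quantize(np.poly([p, p.conjugate()]).real, 10)
    cascade_roots.extend(np.roots(sec_q))
shift_cascade = max_pole_shift(poles, cascade_roots)

print(shift_direct, shift_cascade)  # expect the cascade shift to be far smaller
```

The pole displacement of each quantized second-order section depends only on the distance between its own pole pair, while the direct-form displacement is amplified by the small distances between all the clustered poles.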
Round-off noise
Let a first-order system be characterized by the
difference equation
y(n) = x(n) + a y(n − 1), |a| < 1
with no noise sources. With noise sources, we
get an additional equation
q(n) = e(n) + a q(n − 1)
where q(n) is the quantization noise and the
average of e(n) is zero. The impulse response
due to the noise model alone is h(n) = a^n u(n).
333
The finite wordlength effects turn the linear
system into a nonlinear one, the analysis of which
is extremely difficult. In such cases, an
often-used method is to make the systems
linear with some assumptions. While the
results are indicative of the effects of finite
wordlength, the best analysis of finite wordlength
effects is the simulation of the filter.
334
The round-off noise produced in the multipli-
cation is a random signal and it is assumed to
be uncorrelated with the input signal and ad-
ditive. Further, the noise is assumed to be a
stationary white noise sequence, e(n), with its
autocorrelation σq² δ(n).
335
Considering the noise alone as the input signal,
the output of the filter is given by convolution
as

y(n) = Σ_{k=0}^{n} h(k) e(n − k)

The output noise power is given by

σqo²(n) = Σ_{k=0}^{n} Σ_{l=0}^{n} h(k) h(l) E(e(n − k) e(n − l))

where E stands for 'expected value of'.
336
As its autocorrelation is nonzero only at index
zero, the noise product is nonzero only when
l = k. Therefore,

σqo²(n) = σq² Σ_{k=0}^{n} (h(k))² = (2^(−2b)/12) Σ_{k=0}^{n} (h(k))²

The impulse response, h(n), of stable systems
tends to zero as n → ∞. Consequently, the
noise power reaches a steady-state value σqo².
337
For the first-order filter, the output noise power
is

σqo² = (2^(−2b)/12) Σ_{k=0}^{∞} a^(2k) = (2^(−2b)/12) (1/(1 − a²))

Therefore, the power gain of the filter is 1/(1 − a²).
338
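The steady-state formula can be checked by simulating the first-order recursion with the product rounded to b fractional bits. A sketch; the values of a and b are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 0.9, 12                      # pole location and fractional bits
q = 2.0 ** -b                       # quantization step
x = rng.uniform(-0.5, 0.5, 100000)

y_exact, y_quant = 0.0, 0.0
noise = np.empty_like(x)
for n in range(len(x)):
    y_exact = x[n] + a * y_exact                    # ideal recursion
    y_quant = x[n] + np.round(a * y_quant / q) * q  # product rounded
    noise[n] = y_quant - y_exact

predicted = (q * q / 12) / (1 - a * a)  # sigma_q^2 times the power gain
print(np.var(noise), predicted)         # the two should roughly agree
```

The measured output noise power matches the predicted value σq²/(1 − a²) to within a few percent, supporting the white-noise model of the rounding error.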
Limit Cycles Oscillations in Recursive Filters
Limit cycles are oscillations at the output of
a system, for constant or zero input. As any
linear system cannot produce a frequency com-
ponent other than present in the input, these
oscillations are nonlinearities created due to
quantization effects. The main causes are round-
off errors in multiplication and overflow errors
in addition.
339
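A zero-input limit cycle is easy to reproduce in the first-order filter with a rounded product. The coarse wordlength below is chosen deliberately to make the effect visible; an infinite-precision recursion would decay to zero:

```python
import numpy as np

a, b = -0.9, 4             # pole and a deliberately coarse wordlength
q = 2.0 ** -b              # quantization step 1/16
y = 0.5                    # nonzero initial condition
out = []
for n in range(30):        # zero input throughout
    y = np.round(a * y / q) * q  # product rounded to the grid
    out.append(y)
print(out[:6], out[-4:])   # the output settles into a sustained +/-0.25 cycle
```

Instead of decaying as 0.5(−0.9)^n, the quantized output gets trapped in a sustained oscillation between +0.25 and −0.25, a zero-input limit cycle.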
Scaling
To ensure a high SNR, the internal signals in
the stages of a filter should be kept as high
as possible without the possibility of overflow.
This requires appropriate scaling of the val-
ues of the signals at various stages of a filter.
For stable systems, the maximum magnitude of the
output cannot exceed the maximum magnitude of
the input multiplied by the factor
Σ_{n=0}^{∞} |h(n)|, where h(n) is
the impulse response of the system.
340
If the maximum magnitude of the input, say
M, is scaled to be less than

M / Σ_{m=0}^{∞} |h(m)|

then overflow is effectively prevented. How-
ever, the reduction in signal strength can result
in a corresponding reduction in SNR.
341
A more effective scale factor that is based on
the signal energy and commonly used is

M / √(Σ_{m=0}^{∞} |h(m)|²)

With this scaling, overflow will occur some-
times, but a higher SNR is achieved. The SNR
deteriorates more with poles located closer
to the unit-circle.
342
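For the first-order filter h(n) = a^n u(n), the two scale factors can be compared directly. A sketch with arbitrary a and M:

```python
import numpy as np

a, M = 0.9, 1.0                   # first-order example: h(n) = a**n u(n)
h = a ** np.arange(5000)          # impulse response, truncated
s_l1 = M / np.sum(np.abs(h))      # worst-case bound: overflow never occurs
s_l2 = M / np.sqrt(np.sum(h**2))  # energy bound: rare overflow, higher SNR
print(s_l1, s_l2)
```

Here s_l1 = M(1 − a) = 0.1 while s_l2 = M√(1 − a²) ≈ 0.44, so the energy-based bound attenuates the signal far less, at the cost of occasional overflow.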
Lattice Structure
In the lattice realization of digital filters, a cascade
of m stages of the same structure produces the
same output as that of the FIR filter charac-
terized by an mth-order difference equation, but
with fewer quantization errors.
343
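One common FIR lattice form, together with the step-down conversion from direct-form coefficients to reflection coefficients, can be sketched as follows. Sign conventions vary across texts, so this is an illustrative version rather than necessarily the book's:

```python
import numpy as np

def to_reflection(a):
    """Step-down recursion: convert direct-form FIR coefficients
    [1, a1, ..., am] into lattice reflection coefficients k1..km."""
    a = list(np.asarray(a, dtype=float))
    ks = []
    for m in range(len(a) - 1, 0, -1):
        k = a[m]
        ks.append(k)
        a = [(a[i] - k * a[m - i]) / (1 - k * k) for i in range(m)]
    return ks[::-1]

def lattice_fir(ks, x):
    """FIR lattice: f_i(n) = f_{i-1}(n) + k_i g_{i-1}(n-1),
    g_i(n) = k_i f_{i-1}(n) + g_{i-1}(n-1); the output is f_m(n)."""
    M = len(ks)
    gd = [0.0] * M            # gd[i] holds the delayed g_i(n-1)
    y = []
    for xn in x:
        f = xn                # f_0(n) = g_0(n) = x(n)
        g_new = [xn]
        for i, k in enumerate(ks):
            f, g = f + k * gd[i], k * f + gd[i]
            g_new.append(g)
        gd = g_new[:M]        # g_0(n)..g_{M-1}(n) become the next delays
        y.append(f)
    return y
```

With m identical stages, the last forward output reproduces the direct-form FIR output; for example, the coefficients [1, −0.4, −0.17, 0.06] (a hypothetical minimum-phase choice) give a three-stage lattice whose output matches direct convolution.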