DSP Notes
LECTURE NOTES
B.TECH
(2020-21)
Prepared by:
Course Objective: This course introduces the processing of Discrete time signals using various transforming
techniques and structures of IIR and FIR filters and also the concept of Multi-rate Digital signal Processing.
MODULE I: Discrete Time Signals, Systems and Discrete Fourier Series [8 Periods]
Discrete Time Signals, Systems: Discrete time signals & discrete time systems, Analysis of Discrete time Linear
time invariant Systems, Discrete time systems described by difference equations. Convolution of Discrete Time
Signals and sequences Discrete Fourier Series: DFS Representation of periodic sequences, Properties of
Discrete Fourier Series,
TEXT BOOKS:
1. John G. Proakis, Dimitris G. Manolakis, “Digital Signal Processing, Principles, Algorithms,
and Applications”, Pearson Education / PHI, 4th Edition, 2007.
2. A. Nagoorkani, “Digital Signal Processing”, Tata McGraw Hill, 2nd Edition, 2012.
3. Avtar Singh and S. Srinivasan, “Digital Signal Processing Implementations Using DSP Microprocessors – with Examples from TMS320C54xx”, CENGAGE Learning, India, 1st Edition, 2008.
REFERENCES:
1. Shalivahana, Vallava Raju, Gnana Priya, “Digital Signal Processing”, Tata McGraw Hill, 2nd Edition, 2010.
2. Alan V. Oppenheim, Ronald W. Schafer, “Digital Signal Processing”, PHI Education, 2006.
Course Outcomes: At the end of the course, students should be able to:
1. Explain discrete time signals and systems.
2. Interpret Discrete Fourier Transform and Fast Fourier Transform algorithms
3. Design Infinite Impulse Response (IIR) filters for the given specifications.
4. Develop Finite Impulse Response (FIR) Digital filters for real time applications.
5. Analyze multirate digital signal processing applications.
1. INTRODUCTION
Signals constitute an important part of our daily life. Anything that carries some information is
called a signal. A signal is defined as a single-valued function of one or more independent variables
which contain some information. A signal is also defined as a physical quantity that varies with
time, space or any other independent variable. A signal may be represented in time domain or
frequency domain. Human speech is a familiar example of a signal. Electric current and voltage are
also examples of signals. A signal can be a function of one or more independent variables. A signal
may be a function of time, temperature, position, pressure, distance etc. If a signal depends on only
one independent variable, it is called a one-dimensional signal, and if a signal depends on two
independent variables, it is called a two-dimensional signal.
A system is defined as an entity that acts on an input signal and transforms it into an output
signal. A system is also defined as a set of elements or fundamental blocks which are connected
together and produce an output in response to an input signal. It is a cause-and-effect relation
between two or more signals. The actual physical structure of the system determines the exact
relation between the input x(n) and the output y(n), and specifies the output for every input.
Systems may be single-input and single-output systems or multi-input and multi-output systems.
Signal processing is a method of extracting information from the signal which in turn depends
on the type of signal and the nature of information it carries. Thus signal processing is concerned
with representing signals in mathematical terms and extracting information by carrying out
algorithmic operations on the signal. Digital signal processing has many advantages over analog
signal processing. Some of these are as follows:
Digital circuits do not depend on precise values of digital signals for their operation. Digital
circuits are less sensitive to changes in component values. They are also less sensitive to variations
in temperature, ageing and other external parameters.
In a digital processor, the signals and system coefficients are represented as binary words. This
enables one to choose any accuracy by increasing or decreasing the number of bits in the binary
word.
Digital processing of a signal facilitates the sharing of a single processor among a number of
signals by time sharing. This reduces the processing cost per signal.
Digital implementation of a system allows easy adjustment of the processor characteristics
during processing.
Linear phase characteristics can be achieved only with digital filters. Also multirate processing
is possible only in the digital domain. Digital circuits can be connected in cascade without any
loading problems, whereas this cannot be easily done with analog circuits.
Storage of digital data is very easy. Signals can be stored on various storage media such as
magnetic tapes, disks and optical disks without any loss. On the other hand, stored analog signals
deteriorate rapidly as time progresses and cannot be recovered in their original form.
Digital processing is more suited for processing very low frequency signals such as seismic
signals.
Though the advantages are many, there are some drawbacks associated with processing a
signal in digital domain. Digital processing needs ‘pre’ and ‘post’ processing devices like analog-to-
digital and digital-to-analog converters and associated reconstruction filters. This increases the
complexity of the digital system. Also, digital techniques suffer from frequency limitations. Digital
systems are constructed using active devices which consume power whereas analog processing
algorithms can be implemented using passive devices which do not consume power. Moreover,
active devices are less reliable than passive components. But the advantages of digital processing
techniques outweigh the disadvantages in many applications. Also the cost of DSP hardware is
decreasing continuously. Consequently, the applications of digital signal processing are increasing
rapidly.
1
The digital signal processor may be a large programmable digital computer or a small
microprocessor programmed to perform the desired operations on the input signal. It may also be a
hardwired digital processor configured to perform a specified set of operations on the input signal.
DSP has many applications. Some of them are: Speech processing, Communication, Biomedical,
Consumer electronics, Seismology and Image processing.
The block diagram of a DSP system is shown in Figure 1.1.
In these notes we discuss only discrete one-dimensional signals and consider only single-input
and single-output discrete-time systems. In this chapter, we discuss the various basic
discrete-time signals, the various operations on discrete-time signals and the classification of
discrete-time signals and discrete-time systems.
1.2 REPRESENTATION OF DISCRETE-TIME SIGNALS
Discrete-time signals are signals which are defined only at discrete instants of time. For such
signals, the amplitude between two time instants is simply not defined. For a discrete-time signal the
independent variable is the integer time index n, and the signal is represented by x(n).
There are following four ways of representing discrete-time signals:
1. Graphical representation
2. Functional representation
3. Tabular representation
4. Sequence representation
In tabular representation, the sample values are tabulated against the corresponding values of n:

n    : –2  –1   0   1   2   3
x(n) :  3   2   0   3   1   2
1.2.4 Sequence Representation
A finite duration sequence given in section 1.2.1 can be represented as follows:
x(n) = {3, 2, 0, 3, 1, 2}
               ↑
The arrow mark denotes the n = 0 term. When no arrow is indicated, the first term corresponds
to n = 0.
So a finite duration sequence, that satisfies the condition x(n) = 0 for n < 0 can be represented as:
x(n) = {3, 5, 2, 1, 4, 7}
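The sequence representation above can be mirrored in code. A minimal sketch (the function names and the zero-outside-support convention are our own): a finite sequence is stored as a list of samples together with the list position of the n = 0 term, i.e. the position of the arrow.

```python
def make_sequence(samples, origin=0):
    """Return a function x(n) for a finite-duration sequence.

    `origin` is the list index of the n = 0 sample (the arrow position).
    Outside the stored support the sequence is taken as zero.
    """
    def x(n):
        i = n + origin  # map signal index n to list position
        return samples[i] if 0 <= i < len(samples) else 0
    return x

# x(n) = {3, 5, 2, 1, 4, 7}: no arrow, so the first term corresponds to n = 0
x = make_sequence([3, 5, 2, 1, 4, 7])
print(x(0), x(3), x(-1), x(6))   # 3 1 0 0
```

The same helper also covers sequences with an arrow elsewhere, e.g. `make_sequence([3, 2, 0, 3, 1, 2], origin=2)` places n = 0 at the third sample.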
The discrete-time unit step sequence u(n) is defined as:

u(n) = 1 for n ≥ 0
     = 0 for n < 0

The shifted version of the discrete-time unit step sequence u(n – k) is defined as:

u(n – k) = 1 for n ≥ k
         = 0 for n < k
Figure 1.3 Discrete–time (a) Unit step function (b) Shifted unit step function
The discrete-time unit ramp sequence r(n) is that sequence which starts at n = 0 and increases
linearly with time and is defined as:

r(n) = n for n ≥ 0        or equivalently    r(n) = n u(n)
     = 0 for n < 0
Figure 1.4 Discrete–time (a) Unit ramp sequence (b) Shifted ramp sequence.
Figure 1.5 Discrete–time (a) Parabolic sequence (b) Shifted parabolic sequence.
The discrete-time unit sample (or unit impulse) sequence δ(n) is defined as:

δ(n) = 1 for n = 0
     = 0 for n ≠ 0

This means that the unit sample sequence is a signal that is zero everywhere, except at n = 0, where its
value is unity. It is the most widely used elementary signal for the analysis of signals and systems.
The graphical representation of δ(n) and δ(n – k) is shown in Figure 1.6[(a) and (b)].
Figure 1.6 Discrete–time (a) Unit sample sequence (b) Delayed unit sample sequence.
Relation Between The Unit Sample Sequence And The Unit Step Sequence
The unit sample sequence δ(n) and the unit step sequence u(n) are related as:

δ(n) = u(n) – u(n – 1)        and        u(n) = Σ (m = –∞ to n) δ(m)
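Both relations between the unit sample and the unit step are easy to verify numerically; a short sketch (function names are our own):

```python
def u(n):
    """Unit step sequence."""
    return 1 if n >= 0 else 0

def delta(n):
    """Unit sample (impulse) sequence."""
    return 1 if n == 0 else 0

# delta(n) = u(n) - u(n-1), and u(n) is the running sum of delta(m) for m <= n
for n in range(-5, 6):
    assert delta(n) == u(n) - u(n - 1)
    assert u(n) == sum(delta(m) for m in range(-10, n + 1))
print("relations verified")
```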
Sinusoidal Sequence
The discrete-time sinusoidal sequence is given by
x(n) = A sin(ω₀n + φ)

where A is the amplitude, ω₀ is the angular frequency, φ is the phase angle in radians and n is an integer.
The period of the discrete-time sinusoidal sequence is:

N = 2πm/ω₀

where N and m are integers.
All continuous-time sinusoidal signals are periodic, but discrete-time sinusoidal sequences may
or may not be periodic depending on the value of ω₀.
For a discrete-time signal to be periodic, the angular frequency ω₀ must be a rational multiple of 2π.
The graphical representation of a discrete-time sinusoidal signal is shown in Figure 1.7.
Figure 1.8 Discrete-time exponential signal an for (a) a > 1 (b) 0 < a < 1 (c) a < -1 (d) -1 < a < 0.
1.3.7 Complex Exponential Sequence
The discrete-time complex exponential sequence is defined as:
x(n) = aⁿ e^(j(ω₀n + φ))

Figure 1.9 Complex exponential sequence x(n) = aⁿ e^(jω₀n) for (a) a > 1 (b) a < 1.
When we process a sequence, this sequence may undergo several manipulations involving the
independent variable or the amplitude of the signal.
The basic operations on sequences are as follows:
1. Time shifting
2. Time reversal
3. Time scaling
4. Amplitude scaling
5. Signal addition
6. Signal multiplication
The first three operations correspond to transformation in independent variable n of a signal.
The last three operations correspond to transformation on amplitude of a signal.
1.4.1 Time Shifting
The time shifting of a signal may result in time delay or time advance. The time shifting operation
of a discrete-time signal x(n) can be represented by
y(n) = x(n – k)
This shows that the signal y (n) can be obtained by time shifting the signal x(n) by k units. If k is
positive, it is delay and the shift is to the right, and if k is negative, it is advance and the shift is to
the left.
An arbitrary signal x(n) is shown in Figure 1.10(a). x(n – 3) which is obtained by shifting x(n)
to the right by 3 units (i.e. delay x(n) by 3 units) is shown in Figure 1.10(b). x(n + 2) which is
obtained by shifting x(n) to the left by 2 units (i.e. advancing x(n) by 2 units) is shown in
Figure 1.10(c).
Figure 1.10 (a) Sequence x(n) (b) x(n – 3) (c) x(n + 2).
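The shifting rule can be sketched in code (the sample sequence used here is assumed purely for illustration):

```python
def shift(x, k):
    """Return y with y(n) = x(n - k): k > 0 delays (shift right), k < 0 advances."""
    return lambda n: x(n - k)

# an assumed finite sequence: x(n) = n for 0 <= n <= 3, zero elsewhere
x = lambda n: n if 0 <= n <= 3 else 0

y = shift(x, 3)                      # y(n) = x(n - 3), delayed by 3 units
print([y(n) for n in range(0, 8)])   # [0, 0, 0, 0, 1, 2, 3, 0]
```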
1.4.2 Time Reversal
The time reversal, also called time folding, of a discrete-time signal x(n) can be obtained by
folding the sequence about n = 0. The time reversed signal is the reflection of the original signal.
It is obtained by replacing the independent variable n by –n. Figure 1.11(a) shows an arbitrary
discrete-time signal x(n), and its time reversed version x(–n) is shown in Figure 1.11(b).
Figure 1.11[(c) and (d)] shows the delayed and advanced versions of reversed signal x(–n).
The signal x(–n + 3) is obtained by delaying (shifting to the right) the time reversed signal
x(–n) by 3 units of time. The signal x(–n – 3) is obtained by advancing (shifting to the left) the time
reversed signal x(–n) by 3 units of time.
Figure 1.12 shows other examples for time reversal of signals
EXAMPLE 1.2 Sketch the following signals:
(a) x(n) = u(n + 2) u(–n + 3) (b) x(n) = u(n + 4) – u(n – 2)
Solutions:
(a) Given x(n)=u(n+2) u(-n+3)
The signal u (n + 2) u(–n + 3) can be obtained by first drawing the signal u(n + 2) as shown in
Figure 1.13(a), then drawing u (–n + 3) as shown in Figure 1.13(b),
Figure 1.11 (a) Original signal x(n) (b) Time reversed signal x(-n) (c) Time reversed and delayed
signal x(-n+3) (d) Time reversed and advanced signal x(-n-3).
and then multiplying these sequences element by element to obtain u(n + 2) u(–n + 3) as
shown in Figure 1.13(c).
x(n) = 0 for n < –2 and n > 3; x(n) = 1 for –2 ≤ n ≤ 3
(b) Given x(n) = u(n + 4) – u(n – 2)
The signal u(n + 4) – u(n – 2) can be obtained by first plotting u(n + 4) as shown in Figure
1.14(a), then plotting u(n – 2) as shown in Figure 1.14(b), and then subtracting each
element of u(n – 2) from the corresponding element of u(n + 4) to obtain the result shown
in Figure 1.14(c).
Figure 1.13 Plots of (a) u(n + 2) (b) u(–n + 3) (c) u(n + 2) u(–n + 3).
Figure 1.14 Plots of (a) u(n + 4) (b) u(n – 2) (c) u(n + 4) – u(n – 2).
Time scaling may be time expansion or time compression. The time scaling of a discrete-time
signal x(n) can be accomplished by replacing n by an in it. Mathematically, it can be expressed as:

y(n) = x(an)

When a > 1, it is time compression, and when 0 < a < 1, it is time expansion.
Let x(n) be a sequence as shown in Figure 1.16(a). If a = 2, y(n) = x(2n). Then
y(0) = x(0) = 1
y(–1) = x(–2) = 3
y(–2) = x(–4) = 0
y(1) = x(2) = 3
y(2) = x(4) = 0
and so on.
So to plot x(2n) we have to skip odd numbered samples in x(n).
We can plot the time scaled signal y(n) = x(2n) as shown in Figure 1.16(b). Here the signal is
compressed by a factor of 2.
If a = (1/2), y(n) = x(n/2), then
y(0) = x(0) = 1
y(2) = x(1) = 2
y(4) = x(2) = 3
y(6) = x(3) = 4
y(8) = x(4) = 0
y(–2) = x(–1) = 2
y(–4) = x(–2) = 3
y(–6) = x(–3) = 4
y(–8) = x(– 4) = 0
We can plot y(n) = x(n/2) as shown in Figure 1.16(c). Here the signal is expanded by a factor of 2.
All odd components in x(n/2) are zero because x(n) does not have any value in between the sampling
instants.
Figure 1.16 Discrete–time scaling (a) Plot of x(n) (b) Plot of x(2n) (c) Plot of x(n/2)
Time scaling is very useful when data is to be fed at some rate and is to be taken out at a different
rate.
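The two scaling cases above can be sketched as follows (sample values are assumed for illustration; only integer indices are considered, matching the skip-samples and insert-zeros behaviour described in the text):

```python
def compress(x, a):
    """y(n) = x(a*n): keep every a-th sample (time compression for integer a > 1)."""
    return lambda n: x(a * n)

def expand(x, a):
    """y(n) = x(n/a): defined only where n is a multiple of a, zero in between."""
    return lambda n: x(n // a) if n % a == 0 else 0

# assumed sample values: x(0)=1, x(1)=2, x(2)=3, x(3)=4, zero elsewhere
x = lambda n: {0: 1, 1: 2, 2: 3, 3: 4}.get(n, 0)

print([compress(x, 2)(n) for n in range(0, 4)])   # [1, 3, 0, 0]
print([expand(x, 2)(n) for n in range(0, 8)])     # [1, 0, 2, 0, 3, 0, 4, 0]
```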
The sum of two discrete-time periodic sequences is always periodic.
Some examples of discrete-time periodic/non-periodic signals are shown in Figure 1.19.
Figure 1.19 Example of discrete-time: (a) Periodic and (b) Non-periodic signals
EXAMPLE 1.4 Show that the complex exponential sequence x(n) = e^(jω₀n) is periodic only if
ω₀/2π is a rational number.
Solution: If x(n) is periodic with period N, then x(n + N) = x(n), i.e.

e^(jω₀(n + N)) = e^(jω₀n)

This is possible only if e^(jω₀N) = 1. This is true only if

ω₀N = 2πk

where k is an integer, i.e. ω₀/2π = k/N, a rational number.
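The periodicity condition can be turned into a small calculation. Assuming ω₀/2π is given as a rational number k/m, the fundamental period is the denominator of k/m in lowest terms (a sketch; function name is our own):

```python
from fractions import Fraction

def fundamental_period(k, m):
    """For x(n) = exp(j*w0*n) with w0/(2*pi) = k/m rational, the smallest N
    with w0*N = 2*pi*(integer) is the denominator of k/m in lowest terms."""
    return Fraction(k, m).denominator

# w0 = 3*pi/5: w0/(2*pi) = 3/10, so N = 10
print(fundamental_period(3, 10))   # 10
# w0 = pi/2: w0/(2*pi) = 1/4, so N = 4
print(fundamental_period(1, 4))    # 4
```

Note that k/m must first be reduced: for ω₀/2π = 2/4 the period is 2, not 4, which `Fraction` handles automatically.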
1.5.3 Energy Signals And Power Signals
Signals may also be classified as energy signals and power signals. However there are some
signals which can neither be classified as energy signals nor power signals.
The total energy E of a discrete-time signal x(n) is defined as:

E = Σ (n = –∞ to ∞) |x(n)|²

and the average power P of the signal is defined as:

P = lim (N → ∞) [1/(2N + 1)] Σ (n = –N to N) |x(n)|²

A signal is an energy signal if E is finite (and P = 0), and a power signal if P is finite and non-zero.
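A small numerical illustration of the energy and average-power definitions (the sequence values are assumed, and the limit in the power definition is approximated with a large finite N):

```python
def energy(samples):
    """Total energy E = sum of |x(n)|^2 for a finite-duration sequence."""
    return sum(abs(v) ** 2 for v in samples)

def average_power(x, N=10**5):
    """P ~ (1/(2N+1)) * sum over n = -N..N of |x(n)|^2, for large N."""
    return sum(abs(x(n)) ** 2 for n in range(-N, N + 1)) / (2 * N + 1)

# assumed finite sequence: an energy signal
print(energy([3, 5, 2, 1, 4, 7]))        # 9+25+4+1+16+49 = 104

# u(n) has infinite energy but finite power ~ 1/2: a power signal
u = lambda n: 1 if n >= 0 else 0
print(round(average_power(u), 3))        # 0.5
```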
A discrete-time signal x(n) is said to be causal if x(n) = 0 for n < 0, otherwise the signal is non-
causal. A discrete-time signal x(n) is said to be anti-causal if x(n) = 0 for n > 0.
A causal signal does not exist for negative time and an anti-causal signal does not exist for
positive time. A signal which exists in positive as well as negative time is called a non-causal signal.
u(n) is a causal signal and u(– n) an anti-causal signal, whereas x(n) = 1 for – 2 ≤ n ≤ 3 is a
non-causal signal.
Any signal x(n) can be expressed as sum of even and odd components. That is
x(n) = xe(n) + xo(n)
where xe(n) is even components and xo(n) is odd components of the signal.
The relation between the input x(n) and the output y(n) of a system has the form:
y(n) = Operation on x(n)
Mathematically,
y(n) = T[x(n)]
which represents that x(n) is transformed to y(n). In other words, y(n) is the transformed
version of x(n).
Figure 1.22 Block diagram of discrete–time system.
A system is said to be static or memoryless if the response is due to present input alone, i.e., for
a static or memoryless system, the output at any instant n depends only on the input applied at
that instant n but not on the past or future values of input or past values of output.
For example, the systems defined below are static or memoryless systems.
y(n) = x(n), y(n) = 2x²(n)
In contrast, a system is said to be dynamic or memory system if the response depends upon past
or future inputs or past outputs. A summer or accumulator, a delay element is a discrete- time
system with memory.
For example, the systems defined below are dynamic or memory systems.
y(n) = x(2n)
y(n) = x(n) + x(n – 2)
y(n) + 4y(n – 1) + 4y(n – 2) = x(n)
Any discrete-time system described by a difference equation is a dynamic system.
A purely resistive electrical circuit is a static system, whereas an electric circuit having
inductors and/or capacitors is a dynamic system.
A discrete-time LTI system is memoryless (static) if its impulse response h(n) is zero
for n ≠ 0. If the impulse response is not identically zero for n ≠ 0, then the system is called a
dynamic system or a system with memory.
EXAMPLE 1.12 Find whether the following systems are dynamic or not:
(a) y(n) = x(n + 2) (b) y(n) = x²(n)
(c) y(n) = x(n – 2) + x(n)
Solution:
(a) Given y(n) = x(n + 2)
The output depends on the future value of input. Therefore, the system is dynamic.
(b) Given y(n) = x²(n)
The output depends on the present value of input alone. Therefore, the system is
static.
(c) Given y(n) = x(n – 2) + x(n)
The system is described by a difference equation. Therefore, the system is dynamic.
Causal and Non-causal Systems
A system is said to be causal (or non-anticipative) if the output of the system at any instant n
depends only on the present and past values of the input but not on future inputs, i.e., for a
causal system, the impulse response or output does not begin before the input function is applied,
i.e., a causal system is non anticipatory.
Causal systems are real time systems. They are physically realizable.
The impulse response of a causal system is zero for n < 0, since δ(n) exists only at n = 0,
i.e. h(n) = 0 for n < 0
The examples for causal systems are:
y(n) =x(n), y(n) = x(n – 2) + x(n – 1) + x(n)
A system is said to be non-causal (anticipative) if the output of the system at any instant n
depends on future inputs. They are anticipatory systems. They produce an output even before the
input is given. They do not exist in real time. They are not physically realizable.
A delay element is a causal system, whereas an image processing system is a non-causal
system.
The examples for non-causal systems are: y(n) = x(n) + x(2n)
y(n) = x²(n) + 2x(n + 2)
EXAMPLE 1.13 Check whether the following systems are causal or not:
(a) y(n) = x(n) + x(n – 2) (b) y(n) = x(2n)
(c) y(n) = sin[x(n)] (d) y(n) = x(–n)
Solution:
(a) Given y(n) = x(n) + x(n – 2)
For n = –2, y(–2) = x(–2) + x(–4)
For n = 0, y(0) = x(0) + x(–2)
For n = 2, y(2) = x(2) + x(0)
For all values of n, the output depends only on the present and past inputs.
Therefore, the system is causal.
(b) Given y(n) = x(2n)
For n = –2, y(–2) = x(–4)
For n = 0, y(0) = x(0)
For n = 2, y(2) = x(4)
For positive values of n, the output depends on the future values of input.
Therefore, the system is non-causal.
(c) Given y(n) = sin[x(n)]
For n = –2, y(–2) = sin[x(–2)]
For n = 0, y(0) = sin[x(0)]
For n = 2, y(2) = sin[x(2)]
For all values of n, the output depends only on the present value of input. Therefore,
the system is causal.
(d) Given y(n) = x(–n)
For n = –2, y(–2) = x(2)
For n = 0, y(0) = x(0)
For n = 2, y(2) = x(–2)
For negative values of n, the output depends on the future values of input.
Therefore, the system is non-causal.
A system which obeys the principle of superposition and principle of homogeneity is called a
linear system and a system which does not obey the principle of superposition and homogeneity
is called a non-linear system.
Homogeneity property means a system which produces an output y(n) for an input x(n)
must produce an output ay(n) for an input ax(n).
Superposition property means a system which produces an output y1(n) for an input x1(n)
and an output y2(n) for an input x2(n) must produce an output y1(n) + y2(n) for an input x1(n) +
x2(n).
Combining them we can say that a system is linear if an arbitrary input x1(n) produces an
output y1(n) and an arbitrary input x2(n) produces an output y2(n), then the weighted sum of
inputs ax1(n) + bx2(n) where a and b are constants produces an output ay1(n) + by2(n) which is
the sum of weighted outputs.
T[ax1(n) + bx2(n)] = aT[x1(n)] + bT[x2(n)]
Simply we can say that a system is linear if the output due to weighted sum of inputs is equal
to the weighted sum of outputs.
In general, if the describing equation contains square or higher order terms of input and/or
output and/or product of input/output and its difference or a constant, the system will definitely
be non-linear.
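The superposition test above can be applied numerically. A sketch (a pass on a few sample inputs is only evidence of linearity, not a proof, while a single failure does prove non-linearity):

```python
def is_linear(T, x1, x2, a=2.0, b=-3.0, n_range=range(-5, 6)):
    """Check T[a*x1 + b*x2] == a*T[x1] + b*T[x2] on sample points."""
    combo = lambda n: a * x1(n) + b * x2(n)
    return all(abs(T(combo)(n) - (a * T(x1)(n) + b * T(x2)(n))) < 1e-9
               for n in n_range)

x1 = lambda n: float(n)
x2 = lambda n: float(n * n)

scale = lambda x: (lambda n: 2 * x(n))       # y(n) = 2x(n): linear
square = lambda x: (lambda n: x(n) ** 2)     # y(n) = x^2(n): non-linear
print(is_linear(scale, x1, x2))    # True
print(is_linear(square, x1, x2))   # False
```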
A bounded signal is a signal whose magnitude is always a finite value, i.e. |x(n)| ≤ M, where
M is a positive real finite number. For example, a sine wave is a bounded signal. A system is said
to be bounded-input, bounded-output (BIBO) stable, if and only if every bounded input produces
a bounded output. The output of such a system does not diverge or does not grow unreasonably
large.
Let the input signal x(n) be bounded (finite), i.e.,
|x(n)| ≤ Mx < ∞ for all n

where Mx is a positive real number. If

|y(n)| ≤ My < ∞ for all n
i.e. if the output y(n) is also bounded, then the system is BIBO stable. Otherwise, the system is
unstable. That is, we say that a system is unstable even if one bounded input produces an
unbounded output.
It is very important to know about the stability of the system. Stability indicates the
usefulness of the system. The stability can be found from the impulse response of the system
which is nothing but the output of the system for a unit impulse input. If the impulse response is
absolutely summable for a discrete-time system, then the system is stable.
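The absolute-summability test can be approximated numerically over a long window (a sketch; the window length and the threshold used to declare divergence are arbitrary choices, not part of the theory):

```python
def absolutely_summable(h, N=10**4, bound=10**6):
    """Partial sum of |h(n)| over n = -N..N; a huge partial sum is treated
    as divergence. A finite value suggests BIBO stability of the LTI system."""
    total = 0.0
    for n in range(-N, N + 1):
        total += abs(h(n))
        if total > bound:
            return None          # diverging: unstable
    return total

stable = lambda n: 0.5 ** n if n >= 0 else 0     # h(n) = (1/2)^n u(n)
unstable = lambda n: 2.0 ** n if n >= 0 else 0   # h(n) = 2^n u(n)
print(absolutely_summable(stable))     # 2.0  => stable
print(absolutely_summable(unstable))   # None => unstable
```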
To solve the difference equation, first it is converted into algebraic equation by taking its Z-
transform. The solution is obtained in z-domain and the time domain solution is obtained by
taking its inverse Z-transform. The system response has two components. The source free
response and the forced response. The response of the system due to input alone when the initial
conditions are neglected is called the forced response of the system. It is also called the steady
state response of the system. It represents the component of the response due to the driving
force. The response of the system due to initial conditions alone when the input is neglected is
called the free or natural response of the system. It is also called the transient response of the
system. It represents the component of the response when the driving function is made zero. The
response due to input and initial conditions considered simultaneously is called the total response
of the system. For a stable system, the source free component always decays with time. In fact a
stable system is one whose source free component decays with time. For this reason the source
free component is also designated as the transient component and the component due to source is
called the steady state component. When input is a unit impulse input, the response is called the
impulse response of the system and when the input is a unit step input, the response is called the
step response of the system.
Solution:
(a) The natural response is the response due to initial conditions only. So make x(n) = 0 in
the difference equation.
We know that the forced response is due to the input alone, so for the forced response the
initial conditions are neglected. The difference equation is:

y(n) – (3/4)y(n – 1) + (1/8)y(n – 2) = u(n) + u(n – 1)

Taking the Z-transform on both sides and neglecting the initial conditions, we have

Y(z) – (3/4)z⁻¹Y(z) + (1/8)z⁻²Y(z) = U(z) + z⁻¹U(z)
Taking partial fractions of Y(z)/z, we have
Taking the inverse Z-transform on both sides, we have the forced response for a
step input.
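As a numerical cross-check, the forced (step) response can also be obtained by iterating the difference equation directly. This sketch assumes the system y(n) – (3/4)y(n – 1) + (1/8)y(n – 2) = u(n) + u(n – 1) with zero initial conditions:

```python
def step_response(num_samples):
    """Iterate y(n) = 0.75*y(n-1) - 0.125*y(n-2) + u(n) + u(n-1),
    with zero initial conditions, for n = 0 .. num_samples-1."""
    y = []
    for n in range(num_samples):
        y1 = y[n - 1] if n >= 1 else 0.0   # y(n-1)
        y2 = y[n - 2] if n >= 2 else 0.0   # y(n-2)
        x = 1.0                            # u(n) = 1 for n >= 0
        x1 = 1.0 if n >= 1 else 0.0        # u(n-1)
        y.append(0.75 * y1 - 0.125 * y2 + x + x1)
    return y

print([round(v, 4) for v in step_response(5)])
# [1.0, 2.75, 3.9375, 4.6094, 4.9648]  -> approaching the steady state 2/0.375
```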
EXAMPLE 2 (a) Determine the free response of the system described by the difference equation
Solution:
(a) The free response, also called the natural response or transient response is the
response due to initial conditions only [i.e. make x(n) = 0]. So, the difference
equation is:
Taking inverse Z-transform on both sides, we get the free response of the system as:
(b) To determine the forced response, i.e. the steady state response, the initial
conditions are to be neglected.
The given difference equation is:
Taking Z-transform on both sides and neglecting the initial conditions, we have
Partial fraction expansion of Y(z)/z gives
Multiplying both sides by z, we get
Taking inverse Z-transform on both sides, the forced response of the system is:
EXAMPLE 6 Using the Z-transform, determine the response of the LTI system described by
y(n) – 2r cos ω₀ y(n – 1) + r² y(n – 2) = x(n) to an excitation x(n) = aⁿu(n).
EXAMPLE 7 Determine the step response of an LTI system whose impulse response
h(n) is given by h(n) = aⁿu(n); 0 < a < 1.
Figure 1.2.1 (a) Adder (b) Constant multiplier and (c) Unit delay element.
Adder: An adder is used to add two or more signals. The output of adder is equal to the sum
of all incoming signals.
Constant multiplier: A constant multiplier is used to multiply the signals by a constant. The
output of the multiplier is equal to the product of the input signal and the constant of the
multiplier.
Unit delay element: A unit delay element is used to delay the signal passing through it by one
sampling time.
EXAMPLE 1.2.1 Construct the block diagram for the discrete-time systems whose input- output
relations are described by the following difference equations:
(a) y(n) = 0.7x(n) + 0.3x(n – 1), (b) y(n) = 0.5y(n – 1) + 0.8x(n) + 0.4x(n – 1)
Solution:
(a) Given y(n) = 0.7x(n) + 0.3x(n – 1)
The system may be realized by using the difference equation directly or by using the
Z-transformed version of that. The individual terms of the given difference equation
are 0.7x(n) and 0.3x(n – 1). They are represented by the basic elements as shown in
Figure 4.2.
Alternatively
Taking Z-transform on both sides of the given difference equation, we have
Y(z) = 0.7X(z) + 0.3z⁻¹X(z)

The individual terms of the above equation are 0.7X(z) and 0.3z⁻¹X(z).
They are represented by the basic elements as shown in Figure 1.2.2.
Figure 1.2.2 Block diagram representation of (a) 0.7X(z) and (b) 0.3z⁻¹X(z).
The input to the system is X(z) [or x(n)] and the output of the system is Y(z) [or y(n)].
The above elements are connected as shown in Figure 4.3 to get the output Y(z) [or
y(n)].
(b) Given y(n) = 0.5y(n – 1) + 0.8x(n) + 0.4x(n – 1)
The individual terms of the above equation are 0.5y(n – 1), 0.8x(n) and 0.4x(n – 1).
They are represented by the basic elements as shown in Figure 4.4.
Alternatively
Taking Z-transform on both sides of the given difference equation, we have
Y(z) = 0.5z⁻¹Y(z) + 0.8X(z) + 0.4z⁻¹X(z)

The individual terms of the above equation are 0.5z⁻¹Y(z), 0.8X(z) and
0.4z⁻¹X(z). They are represented by the basic elements as shown in Figure 1.2.4.
Figure 1.2.4 Block diagram representation of (a) 0.8X(z) (b) 0.4z⁻¹X(z) and (c) 0.5z⁻¹Y(z).
The input to the system is X(z)[or x(n)] and the output of the system is Y(z)[or y(n)].
The above elements are connected as shown in Figure 4.5 to get the output Y(z)[or y(n)].
Figure 1.2.5 Realization of the system described by y(n) = 0.5y(n–1) + 0.8x(n) + 0.4x(n–1).
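The same realization can be sketched in code, with each unit delay element becoming a state variable (a minimal sketch, not from the text):

```python
def realize(x_samples):
    """Run y(n) = 0.5*y(n-1) + 0.8*x(n) + 0.4*x(n-1) sample by sample."""
    y_prev = 0.0   # contents of the delay element on the output path
    x_prev = 0.0   # contents of the delay element on the input path
    out = []
    for x in x_samples:
        y = 0.5 * y_prev + 0.8 * x + 0.4 * x_prev   # the adder combines the taps
        out.append(y)
        x_prev, y_prev = x, y                        # clock the delay elements
    return out

# feeding a unit impulse yields the impulse response of the system
print(realize([1.0, 0.0, 0.0, 0.0]))   # [0.8, 0.8, 0.4, 0.2]
```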
Discrete-time LTI systems may be divided into two types: IIR systems (those that have an
infinite duration impulse response) and FIR systems (those that have a finite duration impulse
response).
Since this weighted sum involves the present and all the past input samples, we can say that
the IIR system has an infinite memory.
A system whose output y(n) at time n depends on the present input and any number of
past values of input and output is called a recursive system. The past outputs are
y(n – 1), y(n – 2), y(n – 3), ...
Hence, for recursive system, the output y(n) is given by
y(n) = F[y(n – 1), y(n – 2), ..., y(n – N), x(n), x(n – 1), ..., x(n – M)]
In recursive system, in order to compute y(n0), we need to compute all the previous values y(0),
y(1), y(2), ..., y(n0 – 1) before calculating y(n0). Hence, output of recursive system has to be
computed in order [y(0), y(1), y(2), ... ].
The system function or the transfer function of the IIR system is:

H(z) = Y(z)/X(z) = (b0 + b1 z⁻¹ + ... + bM z⁻ᴹ) / (1 + a1 z⁻¹ + ... + aN z⁻ᴺ)
The above equations for Y(z) and H(z) can be viewed as a computational procedure (or
algorithm) to determine the output sequence y(n) from the input sequence x(n). The computations
in the above equation can be arranged into various equivalent sets of difference equations with
each set of equations defining a computational procedure or algorithm for implementing the
system.
For each set of equations, we can construct a block diagram consisting of delays, adders
and multipliers. Such block diagrams are referred to as realization of the system or equivalently
as structure for realizing the system.
The main advantage of re-arranging the sets of difference equations (i.e. the main criteria
for selecting a particular structure) is to reduce the computational complexity, memory
requirements and finite word length effects in computations.
So the factors that influence the choice of structure for realization of LTI system are:
computational complexity, memory requirements and finite word length effects in computations.
Computational complexity refers to the number of arithmetic operations required to
compute the output value y(n) for the system.
Memory requirements refer to the number of memory locations required to store the
system parameters, past inputs and outputs and any intermediate computed values.
Finite-word-length effects or finite precision effects refer to the quantization effects that
are inherent in any digital implementation of the system either in hardware or in software.
Although the above three factors play a major role in influencing our choice of the
realization of the system, other factors such as whether the structure lends itself to parallel
processing or whether the computations can be pipelined may play a role in selecting a specific
structure.
The different types of structures for realizing IIR systems are:
1. Direct form-I structure 2. Direct form-II structure
3. Transposed form structure 4. Cascade form structure
5. Parallel form structure 6. Lattice structure
7. Ladder structure
The equation for Y(z) [or y(n)] can be directly represented by a block diagram as shown
in Figure 4.6 and this structure is called Direct form-I structure. This structure uses separate
delays (z–1) for input and output samples. Hence, for realizing this structure more memory is
required. The direct form structure provides a direct relation between time domain and z-domain
equations.
The structure shown in Figure 4.6 is called a non-canonical structure because the number
of delay elements used is more than the order of the difference equation.
If the IIR system is more complex, that is, of higher order, then introduce an intermediate
variable W(z) so that

W(z) = X(z) / (1 + a1 z⁻¹ + ... + aN z⁻ᴺ)    and    Y(z) = (b0 + b1 z⁻¹ + ... + bM z⁻ᴹ) W(z)
So, the direct form-I structure is in two parts. The first part contains only zeros [that is, the input
components either x(n) or X(z)] and the second part contains only poles [that is, the output
components either y(n) or Y(z)]. In direct form-I, the zeros are realized first and poles are
realized second.
On taking Z-transform of the above equation and neglecting initial conditions, we get
Since the number of delay elements used in direct form-II is the same as that of the
order of the difference equation, direct form-II is called a canonical structure.
The comparison of direct form-I and direct form-II structures is given in Table 4.1
Figure 1.2.8 (a) Direct form–I structure as cascade of two systems (b) Direct form–I structure after
interchanging the order of cascading.
In Figure 4.8(b), we can observe that the inputs to the delay elements in H1(z) and H2(z) are
the same, and so the outputs of the delay elements in H1(z) and H2(z) are also the same. Therefore, instead
of having separate delays for H1(z) and H2(z), a single set of delays can be used. Hence, the delays
can be merged to combine the cascaded systems to a single system. The resultant structure will be
direct form-II structure as that of Figure 1.2.7. The process of converting direct form-I structure to
direct form-II structure is shown in Figure 1.2.9.
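The equivalence of the two structures can be sketched in code. This is an illustrative sketch, not taken from the text: the coefficients are hypothetical, and the function names are ours. It shows that direct form-I (separate delay lines for input and output) and direct form-II (one shared delay line) compute the same output, while form-II needs fewer delays.

```python
# Illustrative sketch: y(n) = b0*x(n) + b1*x(n-1) - a1*y(n-1) realized two ways.

def direct_form_1(b, a, x):
    """Direct form-I: zeros realized first, then poles; two delay lines."""
    xd = [0.0] * len(b)                 # delayed input samples x(n-i)
    yd = [0.0] * len(a)                 # delayed output samples y(n-i)
    y = []
    for s in x:
        xd = [s] + xd[:-1]
        acc = sum(b[i] * xd[i] for i in range(len(b)))
        acc -= sum(a[i] * yd[i - 1] for i in range(1, len(a)))
        yd = [acc] + yd[:-1]
        y.append(acc)
    return y

def direct_form_2(b, a, x):
    """Direct form-II (canonical): one shared delay line w(n)."""
    w = [0.0] * max(len(b), len(a))     # w[i-1] holds w(n-i)
    y = []
    for s in x:
        w0 = s - sum(a[i] * w[i - 1] for i in range(1, len(a)))
        out = b[0] * w0 + sum(b[i] * w[i - 1] for i in range(1, len(b)))
        w = [w0] + w[:-1]
        y.append(out)
    return y
```

With the hypothetical coefficients b = [1, 2], a = [1, 0.5], both routines produce the same impulse response, but direct form-II uses only a single set of delays, which is why it is called canonical.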
Direct form-II
EXAMPLE 1.2.4 Find the direct form-I and direct form-II realizations of a discrete-time
system represented by the transfer function
The direct form-I structure of the above equation for Y(z) can be obtained as shown in Figure 1.2.13
Direct form
The direct form-I digital network can be realized using the above equation for Y(z) as shown
in Figure 1.2.15
On rearranging the equation for Y(z), we get
Figure 1.2.15 Realization of IIR system through direct form–I
Figure 1.2.16 Realization of IIR system through direct form–II
Linear time-invariant discrete-time systems can be classified according to the type of impulse
response. If the impulse response sequence is of finite duration, the system is called a finite
impulse response (FIR) system, and if the impulse response sequence is of infinite duration,
the system is called an infinite impulse response (IIR) system.
UNIT II
INTRODUCTION :
The DFT of a discrete-time signal x(n) is a finite duration discrete frequency sequence. The DFT
sequence is denoted by X(k). The DFT is obtained by sampling one period of the
Fourier transform X(ω) of the signal x(n) at a finite number of frequency points. This sampling is
conventionally performed at N equally spaced points in the period 0 ≤ ω ≤ 2π, i.e. at ωk = 2πk/N;
0 ≤ k ≤ N – 1. We can say that the DFT is used for transforming a discrete-time sequence x(n) of finite length
into a discrete frequency sequence X(k) of finite length. The DFT is important for two reasons. First, it
allows us to determine the frequency content of a signal, that is, to perform spectral analysis. The
second application of the DFT is to perform filtering operations in the frequency domain. Let x(n) be a
discrete-time sequence with Fourier transform X(ω); then the DFT of x(n), denoted by X(k), is defined
as a sequence consisting of N samples of X(ω). The DFT sequence starts at k = 0,
corresponding to ω = 0, but does not include k = N corresponding to ω = 2π (since the sample at ω = 2π
is the same as the sample at ω = 0). Generally, the DFT is defined as
X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πnk/N}, k = 0, 1, 2, ..., N - 1
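The defining sum X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πnk/N} can be sketched directly in Python. This is a minimal illustrative implementation (the name `dft` is ours, not from the text); it costs O(N²) complex multiplications, which motivates the FFT discussed later.

```python
# Direct evaluation of the DFT definition, one output sample per k.
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]
```

For example, dft([1, -1, 2, -2]) evaluates the 4-point DFT of the sequence in Example 2.1; the result can be checked against a hand computation.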
EXAMPLE 2.1 (a) Find the 4-point DFT of x(n) = {1, –1, 2, –2} directly.
(b) Find the IDFT of X(k) = {4, 2, 0, 4} directly.
Solution:
(a) Given sequence is x(n) = {1, –1, 2, –2}. Here the DFT X(k) to be found is 4-point (N = 4)
and the length of the sequence is L = 4, so no padding of zeros is required. We know that the DFT
of x(n) is given by
EXAMPLE 2.2 (a) Find the 4-point DFT of x(n) = {1, –2, 3, 2}.
(b) Find the IDFT of X(k) = {1, 0, 1, 0}.
EXAMPLE 2.3 Compute the DFT of the 3-point sequence x(n) = {2, 1, 2}. Using the same sequence,
compute the 6-point DFT and compare the two DFTs.
To compute the 6-point DFT, convert the 3-point sequence x(n) into a 6-point sequence by padding with zeros.
WN is called the IDFT matrix. We may also obtain x directly from the IDFT relation in matrix form,
where the change of index from n to k and the change in the sign of the exponent in e^{j(2π/N)nk} lead to
the conjugate transpose of WN. We then have
EXAMPLE 2.5 Find the 4-point DFT of x(n) = {1, –2, 3, 2}.
Solution: Given x(n) = {1, –2, 3, 2}, the 4-point DFT{x(n)} = X(k) is determined using matrix as
shown below.
EXAMPLE 2.6 Find the IDFT of X(k)={4, –j2, 0, j2} using DFT.
Solution: Given X(k) = {4, –j2, 0, j2}, so X*(k) = {4, j2, 0, –j2}
The IDFT of X(k) is determined using matrix as shown below.
To find IDFT of X(k) first find X*(k), then find DFT of X*(k), then take conjugate of DFT {X*(k)} and
divide by N.
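The procedure just described, IDFT{X(k)} = (1/N)[DFT{X*(k)}]*, can be sketched in code. This is an illustrative sketch (`dft` and `idft_via_dft` are our names); it reuses one forward DFT routine for both directions.

```python
# IDFT computed through the forward DFT: conjugate, transform, conjugate, / N.
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def idft_via_dft(X):
    N = len(X)
    y = dft([v.conjugate() for v in X])     # DFT of X*(k)
    return [v.conjugate() / N for v in y]   # conjugate the result, divide by N
```

Applied to X(k) = {4, –j2, 0, j2} from the example above, it recovers the corresponding time sequence.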
PROPERTIES OF DFT
Like the Fourier and Z-transforms, the DFT has several important properties that are used to
process the finite duration sequences. Some of those properties are discussed as follows
Periodicity:
If a sequence x(n) is periodic with periodicity of N samples, then the N-point DFT of the sequence, X(k), is
also periodic with periodicity of N samples.
Hence, if x(n) and X(k) are an N-point DFT pair, then
x(n + N) = x(n) for all n, and X(k + N) = X(k) for all k
Linearity
Circular Time Reversal
The circular time reversal of an N-point sequence is obtained by plotting x(n)
around the circle in the clockwise direction. It is denoted as x[(–n), mod N].
Circular Frequency Shift
Complex Conjugate
Property
DFT of Real Valued Sequences
Multiplication of Two Sequences
Parseval's Theorem
Parseval's theorem says that the DFT is an energy-conserving transformation and allows us
to find the signal energy either from the signal or its spectrum. This implies that the sum of
squares of the signal samples is related to the sum of squares of the magnitude of the DFT
samples.
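The energy relation Σ_n |x(n)|² = (1/N) Σ_k |X(k)|² can be verified numerically. A small sketch (the sequence chosen here is our own illustrative example, not from the text):

```python
# Numerical check of Parseval's theorem for the DFT.
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

x = [1, -1, 2, -2]                                 # illustrative sequence
X = dft(x)
energy_time = sum(abs(v) ** 2 for v in x)          # sum of squared samples
energy_freq = sum(abs(v) ** 2 for v in X) / len(x) # (1/N) * sum of |X(k)|^2
```

Both sums come out equal (10 for this sequence), as Parseval's theorem requires.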
Circular Correlation
Linear Convolution using DFT
The DFT supports only circular convolution. When two N-point sequences are circularly
convolved, the result is another N-point sequence. For circular convolution, one of the sequences should
be periodically extended, and the resultant sequence is periodic with period N. The linear convolution
of two sequences of length N1 and N2 produces an output sequence of length N1 + N2 – 1. To perform
linear convolution using the DFT, both sequences should be converted to length N1 + N2 – 1 by
padding with zeros. Then take the N1 + N2 – 1 point DFT of both sequences and determine the product
of their DFTs. The resultant sequence is given by the IDFT of the product of the DFTs. [Actually the
response is given by the circular convolution of the N1 + N2 – 1 point sequences.] Let x(n) be an N1-point
sequence and h(n) be an N2-point sequence. The linear convolution of x(n) and h(n) produces a
sequence y(n) of length N1 + N2 – 1. So pad x(n) with N2 – 1 zeros and h(n) with N1 – 1 zeros and make
both of them of length N1 + N2 – 1. Let X(k) be an N1 + N2 – 1-point DFT of x(n), and H(k) be an N1 +
N2 – 1- point DFT of h(n). Now, the sequence y(n) is given by the inverse DFT of the product X(k)
H(k).
y(n) = IDFT {X(k)H(k)}
This technique of convolving two finite duration sequences using DFT techniques is called fast
convolution. The convolution of two sequences by the convolution sum formula
y(n) = Σ_k x(k) h(n – k)
is called direct convolution or slow convolution. The term fast is used because the DFT can be
evaluated rapidly and efficiently using any of a large class of algorithms called Fast Fourier Transform
(FFT). In a practical sense, the size of the DFTs need not be restricted to N1 + N2 – 1 point transforms.
Any number L can be used for the transform size subject to the restriction L ≥ (N1 + N2 – 1). If
L > (N1 + N2 – 1), then y(n) will have zero valued samples at the end of the period.
EXAMPLE 2.1 Find the linear convolution of the sequences x(n) and h(n) using DFT.
x(n) = {1, 2}, h(n) = {2, 1}
Solution: Let y(n) be the linear convolution of x(n) and h(n). x(n) and h(n) are of length 2 each. So the
linear convolution of x(n) and h(n) will produce a 3 sample sequence (2 + 2 – 1 = 3). To avoid time
aliasing, we convert the 2 sample input sequences into 3 sample sequences by padding with zeros.
EXAMPLE 2.2 Find the linear convolution of the sequences x(n) and h(n) using DFT.
x(n) = {1, 0, 2}, h(n) = {1, 1}
Solution: Let y(n) be the linear convolution of x(n) and h(n). x(n) is of length 3 and h(n) is
of length 2. So the linear convolution of x(n) and h(n) will produce a 4-sample sequence
(3 + 2 – 1 = 4). To avoid time aliasing, we convert the 2-sample and 3-sample sequences
into 4-sample sequences by padding with zeros.
x(n) = {1, 0, 2, 0} and h(n) = {1, 1, 0, 0}
By the definition of N-point DFT, the 4-point DFT of x(n) is:
Therefore, the linear convolution of x(n) and h(n) is:
y(n) = x(n) * h(n) = {1, 1, 2, 2}
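The full fast-convolution recipe (zero pad both sequences to N1 + N2 – 1, multiply the DFTs, take the IDFT) can be sketched end to end. The names `dft` and `fast_convolve` are ours, and a plain O(N²) DFT stands in for the FFT that would be used in practice.

```python
# Linear convolution via DFT: pad to N1+N2-1, multiply transforms, invert.
import cmath

def dft(x, inverse=False):
    N = len(x)
    s = 2j if inverse else -2j
    X = [sum(x[n] * cmath.exp(s * cmath.pi * n * k / N) for n in range(N))
         for k in range(N)]
    return [v / N for v in X] if inverse else X

def fast_convolve(x, h):
    L = len(x) + len(h) - 1                 # output length N1 + N2 - 1
    X = dft(list(x) + [0] * (L - len(x)))   # zero pad both sequences to L
    H = dft(list(h) + [0] * (L - len(h)))
    y = dft([a * b for a, b in zip(X, H)], inverse=True)
    return [v.real for v in y]              # result is real for real inputs
```

For x(n) = {1, 0, 2} and h(n) = {1, 1}, this reproduces the result y(n) = {1, 1, 2, 2} of Example 2.2.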
The linear convolution of x(n) = {1, 0, 2} and h(n) = {1, 1} can also be obtained using the tabular method.
OVERLAP-ADD METHOD :
In overlap-add method, the longer sequence x(n) of length L is split into m number of smaller
sequences of length N equal to the size of the smaller sequence h(n). (If required zero padding may be
done to L so that L = mN). The linear convolution of each section (of length N) of longer sequence with
the smaller sequence of length N is performed. This gives an output sequence of length 2N – 1.
In this method, the last N – 1 samples of each output sequence overlap with the first N – 1
samples of the next section. While combining the output sequences of the various sectioned convolutions,
the corresponding samples of overlapped regions are added and the samples of non-overlapped regions
are retained as such. If the linear convolution is to be performed by DFT (or FFT), since DFT supports
only circular convolution and not linear convolution directly, we have to pad each section of the longer
sequence (of length N) and also the smaller sequence (of length N) with N – 1 zeros before computing
the circular convolution of each section with the smaller sequence. The steps for this fast convolution
by overlap-add method are as follows:
Step 1: N – 1 zeros are padded at the end of the impulse response sequence h(n), which is of length N,
and a sequence of length 2N – 1 is obtained. Then the 2N – 1 point FFT is performed and the output
values are stored.
Step 2: Split the data, i.e. x(n), into m blocks each of length N and pad N – 1 zeros to each block to
make them 2N – 1 point blocks, and find the FFT of each block.
Step 3: The stored frequency response of the filter, i.e. the FFT output sequence obtained
in Step 1 is multiplied by the FFT output sequence of each of the selected block in
Step 2.
Step 4: A 2N – 1 point inverse FFT is performed on each product sequence obtained in Step 3.
Step 5: The first N – 1 IFFT values obtained in Step 4 for each block overlap with the last N – 1
values of the previous block. Therefore, add the overlapping values and keep the non-overlapping
values as they are. The result is the linear convolution of x(n) and h(n).
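The sectioning logic of these five steps can be sketched as follows. For brevity the per-block convolutions are done directly rather than with 2N – 1 point FFTs, which does not change the overlap-add bookkeeping; `conv` and `overlap_add` are our own names.

```python
# Overlap-add: convolve each length-N block of x with h, add overlapped tails.

def conv(x, h):
    """Direct linear convolution (the 'slow' reference)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xv in enumerate(x):
        for j, hv in enumerate(h):
            y[i + j] += xv * hv
    return y

def overlap_add(x, h):
    N = len(h)                               # section length = length of h(n)
    L = len(x)
    if L % N:
        x = list(x) + [0.0] * (N - L % N)    # zero pad so that L = m*N
    y = [0.0] * (len(x) + N - 1)
    for start in range(0, len(x), N):
        block = conv(x[start:start + N], h)  # 2N - 1 samples per section
        for i, v in enumerate(block):
            y[start + i] += v                # add the overlapped N - 1 samples
    return y[:L + N - 1]
```

The result agrees with a single direct convolution of the whole input, which is the point of the method.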
OVERLAP-SAVE METHOD
In overlap-save method, the results of linear convolution of the various sections are obtained
using circular convolution. Let x(n) be a longer sequence of length L and h(n) be a smaller
sequence of length N. The regular convolution of sequences of length L and N has L + N – 1
samples. If L > N, we have to zero pad the second sequence h(n) to length L. So their linear
convolution will have 2L – 1 samples. Its first N – 1 samples are contaminated by
wraparound and the rest corresponds to the regular convolution. To understand this, let L =
12 and N = 5. If we pad h(n) with 7 zeros (to length L = 12), the regular convolution of the two
length-12 sequences has 23 (or 2L – 1) samples with 7 trailing zeros (L – N = 7). For a 12-point
periodic convolution, 11 samples (L – 1 = 11) are wrapped around. Since the last 7 (or L – N) samples
are zeros, only the first (2L – 1) – (L) – (L – N) = N – 1 = 5 – 1 = 4 samples of the periodic
convolution are contaminated by wraparound.
This idea is the basis of overlap-save method. First, we add N – 1 leading zeros to the longer
sequence x(n) and section it into k overlapping (by N – 1) segments of length M. Typically
we choose M = 2N. Next, we zero pad h(n) (with trailing zeros) to length M, and find the periodic
convolution of h(n) with each section of x(n). Finally, we discard the first N – 1 (contaminated)
samples from each convolution and glue (concatenate) the results to give the required convolution.
Step 1: N zeros are padded at the end of the impulse response h(n) which is of length N and a sequence
of length M = 2N is obtained. Then the 2N point FFT is performed
and the output values are stored.
Step 2: A 2N point FFT on each selected data block is performed. Here each data block begins with the
last N – 1 values in the previous data block, except the first data
block which begins with N – 1 zeros.
Step 3: The stored frequency response of the filter, i.e. the FFT output sequence obtained in Step 1 is
multiplied by the FFT output sequence of each of the selected blocks
obtained in Step 2.
Step 4: A 2N point inverse FFT is performed on each of the product sequences obtained in
Step 3.
Step 5: The first N – 1 values from the output of each block are discarded and the remaining values are
stored. That gives the response y(n).
In either of the above two methods, the FFT of the shorter sequence need be found only once, stored,
and reused for all subsequent partial convolutions. Both methods allow online implementation if we
can tolerate a small processing delay that equals the time required for each section of the long sequence
to arrive at the processor
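The overlap-save steps can be sketched as follows. As with the overlap-add sketch earlier, the per-section circular convolution is computed directly from its definition rather than through a 2N-point FFT; `circular_convolve` and `overlap_save` are our own names.

```python
# Overlap-save: overlapping sections of x, circular convolution with padded h,
# discard the first N-1 (wrapped-around) outputs of each section.

def circular_convolve(a, b):
    """M-point circular convolution computed directly from its definition."""
    M = len(a)
    return [sum(a[i] * b[(n - i) % M] for i in range(M)) for n in range(M)]

def overlap_save(x, h):
    N = len(h)
    M = 2 * N                                # typical section length M = 2N
    step = M - (N - 1)                       # fresh samples consumed per section
    hp = list(h) + [0.0] * (M - N)           # h(n) zero padded to length M
    buf = [0.0] * (N - 1) + list(x)          # N - 1 leading zeros before x(n)
    out_len = len(x) + N - 1
    y = []
    start = 0
    while len(y) < out_len:
        block = buf[start:start + M]
        block = block + [0.0] * (M - len(block))        # pad the last section
        y.extend(circular_convolve(block, hp)[N - 1:])  # keep only valid part
        start += step
    return y[:out_len]
```

Again the output matches the ordinary linear convolution of x(n) with h(n), with the contaminated samples of each section thrown away instead of added.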
2.2 INTRODUCTION
The N-point DFT of a sequence x(n) converts the time domain N-point sequence x(n) to a
frequency domain N-point sequence X(k). The direct computation of an N-point DFT requires
N × N = N² complex multiplications and N(N – 1) complex additions. Many methods were developed
for reducing the number of calculations involved. The most popular of these is the Fast Fourier
Transform (FFT), a method developed by Cooley and Tukey. The FFT may be defined as an
algorithm (or a method) for computing the DFT efficiently (with reduced number of
calculations). The computational efficiency is achieved by adopting a divide and conquer
approach. This approach is based on the decomposition of an N-point DFT into
successively smaller DFTs and then combining them to give the total transform. Based on this
basic approach, a family of computational algorithms was developed, and they are collectively
known as FFT algorithms. Basically there are two FFT algorithms: the Decimation-in-time (DIT)
FFT algorithm and Decimation-in-frequency (DIF) FFT algorithm. In this chapter, we discuss
DIT FFT and DIF FFT algorithms and the computation of DFT by these methods.
X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πnk/N}, k = 0, 1, 2, ..., N - 1
Let WN be the complex valued phase factor, which is an Nth root of unity, given by
W_N = e^{-j2π/N}
Thus, X(k) becomes
X(k) = Σ_{n=0}^{N-1} x(n) W_N^{nk}, k = 0, 1, 2, ..., N - 1
From the above equations for X(k) and x(n), it is clear that for each value of k, the direct
computation of X(k) involves N complex multiplications (4N real multiplications) and N – 1
complex additions (4N – 2 real additions). Therefore, to compute all N values of DFT, N2
complex multiplications and N(N – 1) complex additions are required. In fact the DFT and
IDFT involve the same type of computations.
If x(n) is a complex-valued sequence, then the N-point DFT given in equation for X(k)
can be expressed as
X(k) = XR(k) + jXI(k)
The direct computation of the DFT needs 2N² evaluations of trigonometric functions, 4N² real
multiplications and 4N(N – 1) real additions. This is inefficient primarily because it cannot
exploit the symmetry and periodicity properties of the phase factor WN, which are
Symmetry property: W_N^{k + N/2} = -W_N^k
Periodicity property: W_N^{k + N} = W_N^k
FFT algorithm exploits the two symmetry properties and so is an efficient algorithm for DFT
computation.
By adopting a divide and conquer approach, a computationally efficient algorithm can be
developed. This approach depends on the decomposition of an N-point DFT into successively
smaller size DFTs. If N can be expressed as N = r1r2r3 ... rm, where r1 = r2 = r3 = ... = rm = r,
then N = r^m, and the N-point sequence can be decimated into r-point sequences. For each r-point
sequence, an r-point DFT can be computed. The number r is called the radix of the
FFT algorithm and the number m indicates the number of stages in the computation. From the results
of the r-point DFTs, the r²-point DFTs are computed. From the results of the r²-point DFTs, the r³-point
DFTs are computed, and so on, until we get the r^m-point DFT. If r = 2, it is called radix-2 FFT.
Given sequence x(n): x(0), x(1), x(2), ..., x(N/2 - 1), ..., x(N - 1)
We know that the transform X(k) of the N-point sequence x(n) is given by
X(k) = Σ_{n=0}^{N-1} x(n) W_N^{nk}, k = 0, 1, 2, ..., N - 1
Breaking the sum into two parts, one for the even and one for the odd indexed values, gives
X(k) = Σ_{n even} x(n) W_N^{nk} + Σ_{n odd} x(n) W_N^{nk}, k = 0, 1, 2, ..., N - 1
When n is replaced by 2n, the even numbered samples are selected and when n is replaced by
2n + 1, the odd numbered samples are selected. Hence,
X(k) = Σ_{n=0}^{N/2-1} x(2n) W_N^{2nk} + Σ_{n=0}^{N/2-1} x(2n + 1) W_N^{(2n+1)k}
Since W_N^2 = e^{-j2π(2)/N} = e^{-j2π/(N/2)} = W_{N/2}, we have
W_N^{2nk} = (W_N^2)^{nk} = W_{N/2}^{nk} and W_N^{(2n+1)k} = W_N^k W_{N/2}^{nk}
We can write
X(k) = Σ_{n=0}^{N/2-1} f1(n) W_{N/2}^{nk} + W_N^k Σ_{n=0}^{N/2-1} f2(n) W_{N/2}^{nk}
where f1(n) = x(2n) and f2(n) = x(2n + 1). Defining
F1(k) = Σ_{n=0}^{N/2-1} f1(n) W_{N/2}^{nk} and F2(k) = Σ_{n=0}^{N/2-1} f2(n) W_{N/2}^{nk}
we get
X(k) = F1(k) + W_N^k F2(k), k = 0, 1, 2, 3, ...
The implementation of this equation for X(k) is shown in the following figure. This first step in
the decomposition breaks the N-point transform into two (N/2)-point transforms, and the factor W_N^k
provides the N-point combining algebra. The DFT of a sequence is periodic with period given by
the number of points of the DFT. Hence, F1(k) and F2(k) will be periodic with period N/2.
F1(k + N/2) = F1(k) and F2(k + N/2) = F2(k)
Figure 2.1 Illustration of flow graph of the first stage DIT FFT algorithm for N = 8.
Having performed the decimation in time once, we can repeat the process for each of the
sequences f1(n) and f2(n). Thus f1(n) would result in two (N/4)-point sequences and f2(n) would
result in another two (N/4)-point sequences.
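The recursion X(k) = F1(k) + W_N^k F2(k), together with the periodicity F1(k + N/2) = F1(k) and F2(k + N/2) = F2(k), can be sketched as a short recursive routine. This is an illustrative sketch (`dit_fft` is our name; N must be a power of 2), not an in-place flow-graph implementation.

```python
# Recursive radix-2 DIT FFT: split into even/odd halves, combine with W_N^k.
import cmath

def dit_fft(x):
    N = len(x)
    if N == 1:
        return list(x)
    F1 = dit_fft(x[0::2])                  # N/2-point DFT of even samples
    F2 = dit_fft(x[1::2])                  # N/2-point DFT of odd samples
    X = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * F2[k]   # W_N^k * F2(k)
        X[k] = F1[k] + t
        X[k + N // 2] = F1[k] - t          # uses periodicity of F1 and F2
    return X
```

The butterfly pair X(k) = F1(k) + W_N^k F2(k) and X(k + N/2) = F1(k) - W_N^k F2(k) is exactly the basic computation drawn in the butterfly diagrams below.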
Let the given 8-sample sequence x(n) be {x(0), x(1), x(2), x(3), x(4), x(5), x(6), x(7)}. The
8-samples should be decimated into sequences of two samples. Before decimation they are
arranged in bit reversed order as shown in Table 2.1.
Figure 2.4 Illustration of complete flow graph obtained by combining all the three stages for N = 8.
The x(n) in bit reversed order is decimated into 4 numbers of 2-point sequences as
shown below.
(i) x(0) and x(4)
(ii) x(2) and x(6)
(iii) x(1) and x(5)
(iv) x(3) and x(7)
Using the decimated sequences as input, the 8-point DFT is computed. Figure 7.5 shows the
three stages of computation of an 8-point DFT.
The computation of 8-point DFT of an 8-point sequence in detail is given below. The 8-
point sequence is decimated into 4-point sequences and 2-point sequences as shown below.
Figure 7.6 (a)–(c) Flow graphs for implementation of the first, second and third stages of computation.
Butterfly Diagram
Observing the basic computations performed at each stage, we can arrive at the following
conclusions:
(i) In each computation, two complex numbers a and b are considered.
(ii) The complex number b is multiplied by a phase factor W_N^k.
(iii) The product bW_N^k is added to the complex number a to form a new complex number A = a + bW_N^k.
(iv) The product bW_N^k is subtracted from the complex number a to form a new complex number B = a - bW_N^k.
The above basic computation can be expressed by a signal flow graph shown in Figure 7.7.
The signal flow graph is also called butterfly diagram since it resembles a butterfly.
Figure 7.7 Basic butterfly diagram or flow graph of radix-2 DIT FFT.
The complete flow graph for 8-point DIT FFT considering periodicity, drawn in a way that is easy to
remember, is shown in Figure 7.8. In radix-2 FFT, N/2 butterflies per stage are required to
represent the computational process. In the butterfly diagram for 8-point DFT shown in Figure
7.8, for symmetry, W_2^0, W_4^0 and W_8^0 are shown on the graph even though they are unity.
The subscript 2 indicates that it is the first stage of computation. Similarly, subscripts 4 and 8
indicate the second and third stages of computation.
Figure 7.8 The signal flow graph or butterfly diagram for 8-point radix-2 DIT FFT.
DECIMATION-IN-FREQUENCY (DIF) FFT ALGORITHM
In the DIF FFT algorithm, the N-point time domain sequence is converted into two N/2-point
sequences. Then each N/2-point sequence is converted to two numbers of N/4-point sequences.
This process is continued until we get N/2 numbers of 2-point sequences. Finally, the 2-point
DFT of each 2-point sequence is computed. The 2-point DFTs of the N/2 numbers of 2-point
sequences will give N samples, which is the N-point DFT of the time domain sequence. Here the
equations for the N/2-point sequences, N/4-point sequences, etc., are obtained by decimation of the
frequency domain sequence. Hence this method is called DIF.
To derive the decimation-in-frequency form of the FFT algorithm for N a power of 2, we
can first divide the given input sequence x(n) = {x(0), x(1), x(2), x(3), x(4), x(5), x(6), x(7)} into
the first half and the last half of the points, so that its DFT X(k) is
X(k) = Σ_{n=0}^{N-1} x(n) W_N^{nk} = Σ_{n=0}^{N/2-1} x(n) W_N^{nk} + Σ_{n=N/2}^{N-1} x(n) W_N^{nk}
= Σ_{n=0}^{N/2-1} x(n) W_N^{nk} + Σ_{n=0}^{N/2-1} x(n + N/2) W_N^{(n + N/2)k}
It is important to observe that while the above equation for X(k) contains two summations
over N/2 points, each of these summations is not an N/2-point DFT, since W_N^{nk} rather than
W_{N/2}^{nk} appears in each of them.
X(k) = Σ_{n=0}^{N/2-1} x(n) W_N^{nk} + W_N^{(N/2)k} Σ_{n=0}^{N/2-1} x(n + N/2) W_N^{nk}
Since W_N^{(N/2)k} = e^{-jπk} = (-1)^k, this becomes
X(k) = Σ_{n=0}^{N/2-1} [x(n) + (-1)^k x(n + N/2)] W_N^{nk}
Let us split X(k) into even and odd numbered samples. For even values of k, the X(k) can be
written as
X(2k) = Σ_{n=0}^{N/2-1} [x(n) + x(n + N/2)] W_N^{2nk} = Σ_{n=0}^{N/2-1} [x(n) + x(n + N/2)] W_{N/2}^{nk}
For odd values of k,
X(2k + 1) = Σ_{n=0}^{N/2-1} [x(n) - x(n + N/2)] W_N^{(2k+1)n} = Σ_{n=0}^{N/2-1} {[x(n) - x(n + N/2)] W_N^n} W_{N/2}^{nk}
The above equations for X(2k) and X(2k + 1) can be recognized as N/2-point DFTs. X(2k) is
the DFT of the sum of the first half and last half of the input sequence, i.e. of
{x(n) + x(n + N/2)}, and X(2k + 1) is the DFT of the product of W_N^n with the difference of the first
half and last half of the input, i.e. of {x(n) - x(n + N/2)}W_N^n.
If we define new time domain sequences u1(n) and u2(n), each consisting of N/2 samples, such that
u1(n) = x(n) + x(n + N/2) and u2(n) = [x(n) - x(n + N/2)] W_N^n, 0 ≤ n ≤ N/2 - 1,
then the DFTs U1(k) = X(2k) and U2(k) = X(2k + 1) can be computed by first forming the
sequences u1(n) and u2(n), then computing the N/2-point DFTs of these two sequences to obtain
the even numbered output points and odd numbered output points respectively. The procedure
suggested above is illustrated in Figure 7.9 for the case of an 8-point sequence.
Figure 7.9 Flow graph of the DIF decomposition of an N-point DFT computation into two N/2-point
DFT computations, N = 8.
Now each of the N/2-point frequency domain sequences, U1(k) and U2(k) can be decimated
into two numbers of N/4-point sequences and four numbers of new N/4-point sequences can be
obtained from them.
Let the new sequences be v11(n), v12(n), v21(n), v22(n). On similar lines as discussed above,
we can get
This process is continued till we get only 2-point sequences. The DFT of those 2-point
sequences is the DFT of x(n), i.e. X(k) in bit reversed order.
The third stage of computation for N = 8 is shown in Figure 7.11.
The entire process of decimation involves m stages of decimation, where m = log2 N. The
computation of the N-point DFT via the DIF FFT algorithm requires (N/2) log2 N complex
multiplications and N log2 N complex additions (i.e. the total number of computations remains the
same in both DIF and DIT).
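The DIF decimation above, u1(n) = x(n) + x(n + N/2) feeding the even outputs X(2k) and u2(n) = [x(n) - x(n + N/2)]W_N^n feeding the odd outputs X(2k + 1), can be sketched recursively. This is an illustrative sketch (`dif_fft` is our name); the interleaved slice assignment quietly undoes the bit reversal that an in-place flow-graph implementation would exhibit at its output.

```python
# Recursive radix-2 DIF FFT: decimate the frequency index into even and odd.
import cmath

def dif_fft(x):
    N = len(x)
    if N == 1:
        return list(x)
    half = N // 2
    u1 = [x[n] + x[n + half] for n in range(half)]   # gives X(2k)
    u2 = [(x[n] - x[n + half]) * cmath.exp(-2j * cmath.pi * n / N)
          for n in range(half)]                      # gives X(2k+1)
    X = [0j] * N
    X[0::2] = dif_fft(u1)          # even numbered output points
    X[1::2] = dif_fft(u2)          # odd numbered output points
    return X
```

Note that the butterfly here adds/subtracts first and multiplies by the twiddle factor afterwards, which is the structural difference from DIT listed below.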
Observing the basic calculations, each stage involves N/2 butterflies of the type shown
in Figure 7.12.
The butterfly computation involves the following operations:
(i) In each computation, two complex numbers a and b are considered.
(ii) The sum of the two complex numbers is computed, which forms a new complex number A = a + b.
(iii) The complex number b is subtracted from a to get the term (a - b). The difference term
(a - b) is multiplied by the phase factor or twiddle factor W_N^n to form a new
complex number B = (a - b)W_N^n.
The signal flow graph or butterfly diagram of all the three stages together is shown in Figure
7.13.
Figure 7.13 Signal flow graph or butterfly diagram for the 8-point radix-2 DIF FFT algorithm.
Figure 7.14 (a)–(c) The first, second and third stages of computation of 8-point DFT by radix-2 DIF FFT.
Differences
1. For DIT, the input is in bit reversed order while the output is in normal order, whereas
for DIF, the input is in normal order while the output is in bit reversed order.
2. Considering the butterfly diagram, in DIT, the complex multiplication takes place
before the add/subtract operation, while in DIF, the complex multiplication takes place
after the add/subtract operation.
Similarities
1. Both algorithms require the same number of operations to compute DFT.
2. Both algorithms require bit reversal at some place during computation.
The term inside the square brackets in the above equation for x(n) is the same as the DFT
computation of the sequence X*(k) and may be computed using any FFT algorithm. So we can say
that the IDFT of X(k) can be obtained by finding the DFT of X*(k), taking the conjugate of that
DFT and dividing by N. Hence, to compute the IDFT of X(k), the following procedure can be
followed:
1. Take conjugate of X(k), i.e. determine X*(k).
2. Compute the N-point DFT of X*(k) using radix-2 FFT.
3. Take conjugate of the output sequence of FFT.
4. Divide the sequence obtained in step-3 by N.
The resultant sequence is x(n).Thus, a single FFT algorithm serves the evaluation of both direct and
inverse DFTs.
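Steps 1 to 4 can be sketched on top of any FFT routine; here a recursive radix-2 DIT FFT is used (the helper names are ours, and this is an illustrative sketch rather than an in-place implementation).

```python
# Inverse DFT through the forward FFT: conjugate, FFT, conjugate, divide by N.
import cmath

def dit_fft(x):
    N = len(x)
    if N == 1:
        return list(x)
    F1, F2 = dit_fft(x[0::2]), dit_fft(x[1::2])
    X = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * F2[k]
        X[k], X[k + N // 2] = F1[k] + t, F1[k] - t
    return X

def ifft_via_fft(X):
    N = len(X)
    y = dit_fft([v.conjugate() for v in X])   # steps 1 and 2
    return [v.conjugate() / N for v in y]     # steps 3 and 4
```

Applied to X(k) = {12, 0, 0, 0, 4, 0, 0, 0}, this recovers x(n) = {2, 1, 2, 1, 2, 1, 2, 1}, in agreement with Example 7.20 below.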
EXAMPLE 1 Draw the butterfly line diagram for 8-point FFT calculation and briefly explain.
Use decimation-in-time algorithm.
Solution: The butterfly line diagram for the 8-point DIT FFT algorithm is shown in the following figure.
1. The input sequence is x(n) = {x(0), x(1), x(2), x(3), x(4), x(5), x(6), x(7)}.
2. The input is taken in bit reversed order, i.e. as xr(n) = {x(0), x(4), x(2), x(6), x(1), x(5), x(3), x(7)}.
3. Since N = 2^m = 2^3, the 8-point DFT computation by radix-2 FFT involves 3 stages of
computation, each stage involving 4 butterflies. The output X(k) will be in normal order.
4. In the first stage, four 2-point DFTs are computed. In the second stage they are combined
into two 4-point DFTs. In the third stage, the two 4-point DFTs are combined into one 8-
point DFT.
5. The 8-point FFT calculation requires N log2 N = 8 log2 8 = 24 complex additions and
(N/2) log2 N = (8/2) log2 8 = 12 complex multiplications.
Figure: Butterfly line diagram for 8-point DIT FFT algorithm for N = 8.
Figure 7.16 Butterfly line diagram for 8-point radix-2 DIF FFT algorithm.
EXAMPLE 7.4 What is FFT? Calculate the number of multiplications needed in the calculation
of DFT using FFT algorithm with 32-point sequence.
Solution: The FFT, i.e. Fast Fourier transform is a method (or algorithm) for computing the
DFT with reduced number of calculations. The computational efficiency is achieved by adopting
a divide and conquer approach. This approach is based on the decomposition of an N- point DFT
into successively smaller DFTs. This basic approach leads to a family of efficient computational
algorithms known as FFT algorithms. Basically there are two FFT algorithms. (i) DIT FFT
algorithm and (ii) DIF FFT algorithm. If the length of the sequence is N = 2^m, 2 indicates the radix
and m indicates the number of stages in the computation. In radix-2 FFT, the N-point sequence is
decimated into two N/2-point sequences, each N/2-point sequence is decimated into two N/4-
point sequences and so on till we get two point sequences. The DFTs of two point sequences are
computed and DFTs of two 2-point sequences are combined into DFT of one 4-point sequence,
DFTs of two 4-point sequences are combined into DFT of one 8-point sequence and so on till we
get the N-point DFT.
The number of complex multiplications needed in the computation of the DFT using the FFT algorithm
with an N = 32-point sequence is
(N/2) log2 N = (32/2) log2 32 = 16 × 5 = 80
The number of complex additions is
N log2 N = 32 log2 32 = 32 × 5 = 160
EXAMPLE 7.5 Explain the inverse FFT algorithm to compute inverse DFT of a 8-point
DFT. Draw the flow graph for the same.
Solution: The IDFT of an 8-point sequence {X(k), k = 0, 1, 2, ..., 7} is defined as
The term inside the square brackets in the RHS of the above expression for x(n) is the 8- point DFT
of X *(k). Hence, in order to compute the IDFT of X(k) the following procedure can be followed:
1. Given X(k), take conjugate of X (k) i.e. determine X *(k).
2. Compute the DFT of X*(k) using radix-2 DIT or DIF FFT, [This gives 8x*(n)]
From Figure 7.18, we get the 8-point DFT of X*(k) by DIT FFT as
8x*(n) = {8x* (0), 8x*(1), 8x* (2), 8x* (3), 8x*(4), 8x* (5), 8x* (6), 8x* (7)}
x(n) = (1/8) {8x*(0), 8x*(1), 8x*(2), 8x*(3), 8x*(4), 8x*(5), 8x*(6), 8x*(7)}*
EXAMPLE 7.11 Compute the DFT of the sequence x(n) = {1, 0, 0, 0, 0, 0, 0, 0} (a) directly,
(b) by FFT.
Solution: (a) Direct computation of DFT
The given sequence is x(n) = {1, 0, 0, 0, 0, 0, 0, 0}. We have to compute 8-point DFT. So
N = 8.
DFT {x(n)} = X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πnk/N} = Σ_{n=0}^{7} x(n) W_8^{nk}
= x(0)W_8^0 + x(1)W_8^k + x(2)W_8^{2k} + x(3)W_8^{3k} + x(4)W_8^{4k} + x(5)W_8^{5k} + x(6)W_8^{6k} + x(7)W_8^{7k}
= (1)(1) + 0 + 0 + 0 + 0 + 0 + 0 + 0 = 1
X(k) = 1 for all k
X(0) = 1, X(1) = 1, X(2) = 1, X(3) = 1, X(4) = 1, X(5) = 1, X(6) = 1, X(7) = 1
X(k) = {1, 1, 1, 1, 1, 1, 1, 1}
(b) Computation by FFT. Here N = 8 = 2^3.
The computation of 8-point DFT of x(n) = {1, 0, 0, 0, 0, 0, 0, 0} by radix-2 DIT FFT
algorithm is shown in Figure 7.31. x(n) in bit reverse order is
xr(n) = {x(0), x(4), x(2), x(6), x(1), x(5), x(3), x(7)}
= {1, 0, 0, 0, 0, 0, 0, 0}
For DIT FFT input is in bit reversed order and output is in normal order.
From Figure 7.31, the 8-point DFT of the given x(n) is X(k) = {1, 1, 1, 1, 1, 1, 1, 1}
From Figure 7.32, we get the 8-point DFT of x(n) as
X(k) = {12, 1 - j2.414, 0, 1 - j0.414, 0, 1 + j0.414, 0, 1 + j2.414}
Figure 7.33 Computation of 8-point DFT of x(n) by radix-2 DIF FFT algorithm.
From Figure 7.33, we observe that the 8-point DFT in bit reversed order is
Xr (k) = {X(0), X(4), X(2), X(6), X(1), X(5), X(3), X(7)}
= {12, 0, 0, 0, 1 - j2.414, 1 + j0.414, 1 - j0.414, 1 + j2.414}
The 8-point DFT in normal order is
The magnitude and phase spectrum are shown in Figures 7.34(a) and (b).
EXAMPLE 7.13 Find the 8-point DFT by radix-2 DIT FFT algorithm.
x(n) = {2, 1, 2, 1, 2, 1, 2, 1}
Solution: The given sequence is x(n) = {x(0), x(1), x(2), x(3), x(4), x(5), x(6), x(7)}
= {2, 1, 2, 1, 2, 1, 2, 1}
For DIT FFT computation, the input sequence must be in bit reversed order and the output
sequence will be in normal order.
x(n) in bit reverse order is
EXAMPLE 7.14 Compute the DFT for the sequence x(n) = {1, 1, 1, 1, 1, 1, 1, 1}.
Solution: The given sequence is x(n) = {x(0), x(1), x(2), x(3), x(4), x(5), x(6), x(7)}
= {1, 1, 1, 1, 1, 1, 1, 1}
The computation of 8-point DFT of x(n), i.e. X(k) by radix-2, DIT FFT algorithm is shown
in Figure 7.36.
The given sequence in bit reversed order is
xr(n) = {x(0), x(4), x(2), x(6), x(1), x(5), x(3), x(7)}
= {1, 1, 1, 1, 1, 1, 1, 1}
For DIT FFT, the input is in bit reversed order and output is in normal order.
From Figure 7.36, we get the 8-point DFT of x(n) as X(k) = {8, 0, 0, 0, 0, 0, 0, 0}.
EXAMPLE 7.15 Given a sequence x(n) = {1, 2, 3, 4, 4, 3, 2, 1}, determine X(k) using DIT
FFT algorithm.
Solution: The given sequence is x(n) = {x(0), x(1), x(2), x(3), x(4), x(5), x(6), x(7)}
= {1, 2, 3, 4, 4, 3, 2, 1}
The computation of 8-point DFT of x(n), i.e. X(k) by radix-2, DIT FFT algorithm is shown in
Figure 7.37. For DIT FFT, the input is in bit reversed order and the output is in normal order.
The given sequence in bit reverse order is
xr(n) = {x(0), x(4), x(2), x(6), x(1), x(5), x(3), x(7)} = {1, 4, 3, 2, 2, 3, 4, 1}
EXAMPLE 7.16 Given a sequence x(n) = {0, 1, 2, 3, 4, 5, 6, 7}, determine X(k) using
DIT FFT algorithm.
Solution: The given sequence is x(n) = {x(0), x(1), x(2), x(3), x(4), x(5), x(6), x(7)}
= {0, 1, 2, 3, 4, 5, 6, 7}
The computation of 8-point DFT of x(n), i.e. X(k) by radix-2, DIT FFT algorithm is shown
in Figure 7.38. For DIT FFT, the input is in bit reversed order and output is in normal order.
The given sequence in bit reverse order is
xr(n) = {x(0), x(4), x(2), x(6), x(1), x(5), x(3), x(7)}
= {0, 4, 2, 6, 1, 5, 3, 7}
Figure 7.38 Computation of 8–point DF† of x(n) by radix–2, DI† FF†.
Solution: The IDFT x(n) of the given sequence X(k) can be obtained by finding X*(k), the conjugate of X(k), computing the 8-point DFT of X*(k) using the radix-2 DIT FFT algorithm to get 8x*(n), taking the conjugate of that to get 8x(n), and then dividing by 8 to get x(n). For DIT FFT, the input X*(k) must be in bit reversed order and the output 8x*(n) will be in normal order. For the given X(k), the DFT of X*(k) is computed as shown in Figure 7.42.
EXAMPLE 7.20 Compute the IDFT of the square wave sequence X(k) = {12, 0, 0, 0, 4, 0,
0, 0} using DIF algorithm.
Solution: The IDFT x(n) of the given sequence X(k) can be obtained by finding X*(k), the
conjugate of X(k), finding the 8-point DFT of X*(k) using DIF algorithm to get 8x*(n) taking
the conjugate of that to get 8x(n) and then dividing the result by 8 to get x(n). For DIF
algorithm, the input X*(k) must be in normal order and the output 8x*(n) will be in bit reversed
order.
For the given X(k)
X*(k) = {12, 0, 0, 0, 4, 0, 0, 0}
The 8-point DFT of X*(k) using radix-2, DIF FFT algorithm is computed as shown in Figure
7.44.
8x*r(n) = {16, 16, 16, 16, 8, 8, 8, 8}* = {16, 16, 16, 16, 8, 8, 8, 8}
Rearranging in normal order, 8x(n) = {16, 8, 16, 8, 16, 8, 16, 8}
Therefore, x(n) = (1/8) {16, 8, 16, 8, 16, 8, 16, 8} = {2, 1, 2, 1, 2, 1, 2, 1}
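The IDFT-via-DFT procedure above can be verified with a short NumPy sketch:

```python
import numpy as np

# IDFT via the conjugation trick used above:
#   x(n) = (1/N) * conj( DFT( conj(X(k)) ) )
X = np.array([12, 0, 0, 0, 4, 0, 0, 0], dtype=complex)
x = np.conj(np.fft.fft(np.conj(X))) / len(X)

print(np.round(x.real).astype(int).tolist())   # [2, 1, 2, 1, 2, 1, 2, 1]
```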
UNIT-III
IIR DIGITAL FILTERS
Introduction
Filters are of two types—FIR and IIR. The types of filters which make use of feedback
connection to get the desired filter implementation are known as recursive filters. Their impulse
response is of infinite duration. So they are called IIR filters. The type of filters which do not
employ any kind of feedback connection are known as non-recursive filters. Their impulse
response is of finite duration. So they are called FIR filters. IIR filters are designed by
considering all the infinite samples of the impulse response. The impulse response is obtained by
taking the inverse Fourier transform of the ideal frequency response. There are several techniques available for the design of digital filters having an infinite duration unit impulse response. The popular methods for such filter design use the technique of first designing the filter in the analog domain and then transforming the analog filter into an equivalent digital filter, because analog filter design techniques are well developed. Various methods of transforming an analog filter into a digital filter and methods of designing digital filters are discussed below.
Ha(s) = Y(s)/X(s) = [Σ (k=0 to M) bk s^k] / [Σ (k=0 to N) ak s^k]
where {ak} and {bk} are filter coefficients.
The impulse response ha(t) of this filter is related to Ha(s) by the Laplace transform.
The analog filter having the rational system function Ha(s) is expressed by a linear constant
coefficient differential equation.
Σ (k=0 to N) ak [d^k y(t)/dt^k] = Σ (k=0 to M) bk [d^k x(t)/dt^k]
where x(t) is the input signal and y(t) is the output of the filter.
The above three equivalent characterizations of an analog filter lead to three alternative
methods for transforming the analog filter into digital domain. The restriction on the design is
that the filters should be realizable and stable.
For stability and causality of analog filter, the analog transfer function should satisfy the
following requirements:
1. The Ha(s) should be a rational function of s, and the coefficients of s should be real.
2. The poles should lie on the left half of s-plane.
3. The number of zeros should be less than or equal to the number of poles.
For stability and causality of digital filter, the digital transfer function should satisfy the
following requirements:
1. The H(z) should be a rational function of z and the coefficients of z should be real.
2. The poles should lie inside the unit circle in z-plane.
3. The number of zeros should be less than or equal to the number of poles.
We know that the analog filter with transfer function Ha(s) is stable if all its poles lie in
the left half of the s-plane. Consequently for the conversion technique to be effective, it should
possess the following desirable properties:
1. The imaginary axis in the s-plane should map into the unit circle in the z-plane. Thus,
there will be a direct relationship between the two frequency variables in the two
domains.
2. The left half of the s-plane should map into the interior of the unit circle centered at the
origin in z-plane. Thus, a stable analog filter will be converted to a stable digital filter.
The physically realizable and stable IIR filter cannot have a linear phase. For a filter to
have a linear phase, the condition to be satisfied is h(n) = h(N – 1 – n) where N is the length of
the filter and the filter would have a mirror image pole outside the unit circle for every pole
inside the unit circle. This results in an unstable filter. As a result, a causal and stable IIR filter
cannot have linear phase. In the design of IIR filters, only the desired magnitude response is
specified and the phase response that is obtained from the design methodology is accepted.
The comparison of digital and analog filters is given below.
Advantages of digital filters
1. The values of resistors, capacitors and inductors used in analog filters change with
temperature. Since the digital filters do not have these components, they have high
thermal stability.
2. In digital filters, the precision of the filter depends on the length (or size) of the
registers used to store the filter coefficients. Hence by increasing the register bit length
(in hardware) the performance characteristics of the filter like accuracy, dynamic
range, stability and frequency response tolerance, can be enhanced.
3. The digital filters are programmable. Hence the filter coefficients can be changed any
time to implement adaptive features.
4. A single filter can be used to process multiple signals by using the techniques of
multiplexing.
The differential equation describing the analog filter with system function
Ha(s) = Y(s)/X(s) = b/(s + a)
can be obtained from
sY(s) + aY(s) = bX(s), i.e., dy(t)/dt + a y(t) = b x(t)
Integrating the above equation between the limits (nT – T) and nT, and approximating the integral by the trapezoidal rule, we get
y(nT) – y(nT – T) = (T/2) [y'(nT) + y'(nT – T)]
Taking the Z-transform of the resulting difference equation gives the system function of the digital filter. Comparing this with the analog filter system function Ha(s), we get
s = (2/T) (1 – z^–1)/(1 + z^–1)
This is the relation between analog and digital poles in the bilinear transformation. So to convert an analog filter function into an equivalent digital filter function, just put s = (2/T) (1 – z^–1)/(1 + z^–1) in Ha(s).
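Solving the bilinear substitution s = (2/T)(1 – z^–1)/(1 + z^–1) for z gives z = (2/T + s)/(2/T – s), so each analog pole maps directly to a digital pole. A small sketch of this mapping:

```python
# Bilinear mapping between the s-plane and the z-plane. Solving
# s = (2/T)(1 - z**-1)/(1 + z**-1) for z gives z = (2/T + s)/(2/T - s).
def s_to_z(s, T=1.0):
    return (2.0 / T + s) / (2.0 / T - s)

# A stable analog pole (left half of the s-plane) lands inside the unit circle:
p = complex(-0.5, 3.0)
z = s_to_z(p)
print(abs(z) < 1)                              # True: stability is preserved

# A point on the imaginary axis lands on the unit circle:
print(abs(abs(s_to_z(3j)) - 1.0) < 1e-12)      # True
```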
The general characteristic of the mapping z = e^sT may be obtained by putting s = σ + jΩ and expressing the complex variable z in the polar form z = re^jω. Thus,
re^jω = e^(σ + jΩ)T
so that r = e^σT and ω = ΩT.
From the above equation for , we observe that if r < 1 then σ < 0 and if r > 1, then σ >
0, and if r = 1, then σ = 0. Hence the left half of the s-plane maps into points inside
the unit circle in the z-plane, the right half of the s-plane maps into points outside the unit
circle in the z-plane and the imaginary axis of s-plane maps into the unit circle in the z-plane.
This transformation results in a stable digital system.
The above relation between analog and digital frequencies shows that the entire range in Ω
is mapped only once into the range –π ≤ ω ≤ π. The entire negative imaginary axis in the s-plane (from Ω = –∞ to 0) is mapped into the lower half of the unit circle in the z-plane (from ω = –π to 0), and the entire positive imaginary axis in the s-plane (from Ω = 0 to ∞) is mapped into the upper half of the unit circle in the z-plane (from ω = 0 to +π).
But as seen in Figure 1, the mapping is non-linear and the lower frequencies in analog
domain are expanded in the digital domain, whereas the higher frequencies are
Figure 1 Mapping between Ω and ω in bilinear transformation.
compressed. This is due to the nonlinearity of the arctangent function and usually known as
frequency warping.
The effect of warping on the magnitude response can be explained by considering an analog filter with a number of passbands as shown in Figure 2(a). The corresponding digital filter will have the same number of passbands, but with disproportionate bandwidths, as shown in the same figure.
In designing digital filter using bilinear transformation, the effect of warping on amplitude
response can be eliminated by prewarping the analog filter. In this method, the specified digital
frequencies are converted to their analog equivalents using the equation
Ω = (2/T) tan(ω/2)
These analog frequencies are called prewarp frequencies. Using the prewarp
frequencies, the analog filter transfer function is designed, and then it is transformed to digital
filter transfer function.
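The prewarping relation Ω = (2/T) tan(ω/2) and its inverse ω = 2 tan⁻¹(ΩT/2) can be sketched as follows (T = 1 s assumed, as in the worked examples):

```python
import math

# Prewarping sketch: a specified digital edge frequency w (in rad) is first
# converted to the analog design frequency Omega = (2/T) tan(w/2); after the
# bilinear transformation the digital filter edge falls exactly at w again,
# since the transformation maps Omega back through w = 2 atan(Omega T / 2).
def prewarp(w, T=1.0):
    return (2.0 / T) * math.tan(w / 2.0)

w = 0.6 * math.pi                      # digital cutoff, as in Example 7
Omega = prewarp(w)                     # prewarped analog cutoff
w_back = 2.0 * math.atan(Omega / 2.0)  # frequency after bilinear mapping, T = 1
print(abs(w_back - w) < 1e-12)         # True: the round trip is exact
```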
This effect of warping on the phase response can be explained by considering an analog
filter with linear phase response as shown in Figure 2(b). The phase response of corresponding
digital filter will be nonlinear.
Figure 2 The warping effect on (a) magnitude response and (b) phase response.
It can be stated that the bilinear transformation preserves the magnitude response of an
analog filter only if the specification requires piecewise constant magnitude, but the phase
response of the analog filter is not preserved. Therefore, the bilinear transformation can be used
only to design digital filters with prescribed magnitude response with piecewise constant
values. A linear phase analog filter cannot be transformed into a linear phase digital filter
using the bilinear transformation.
EXAMPLE 1 Convert the given analog filter into a digital IIR filter by using the bilinear transformation. The digital IIR filter should have a resonant frequency of ωr = π/2.
Solution: From the transfer function, we observe that Ωc = 3. The sampling period
T can be determined using the equation:
Using the bilinear transformation, the digital filter system function is:
EXAMPLE 2
Convert the analog filter with system function
Ha(s) = (s + 0.5) / [(s + 0.5)² + 16]
into a digital IIR filter using the bilinear transformation. The digital filter should have a
resonant frequency of ω r =π/2.
Solution: From the system function, we observe that Ωc = 4. The sampling period T can be determined from the resonant frequency, i.e.
Using the bilinear transformation, the digital filter system function is:
EXAMPLE 3
and T = 0.5 s
To obtain H(z) using the bilinear transformation in Ha(s) , replace s by
EXAMPLE 4
and T = 1 s.
Given T = 1 s,
EXAMPLE 6
A digital filter with a 3 dB bandwidth of 0.4π is to be designed from the
analog filter whose system response is:
EXAMPLE 7
The normalized transfer function of an analog filter is given by
Convert the analog filter to a digital filter with a cutoff frequency of 0.6π, using the bilinear transformation.
Solution: The prewarping of analog filter has to be performed to preserve the magnitude
response. For this the analog cutoff frequency is determined using the bilinear transformation,
and the analog transfer function is unnormalized using this analog cutoff frequency. Then the
analog transfer function is converted to digital transfer function using the bilinear transformation.
Given that, the digital cutoff frequency ωc = 0.6π rad. Let T = 1 s.
In the bilinear transformation,
Analog cutoff frequency
Normalized analog transfer function
The analog transfer function is unnormalized by replacing sn by s/Ωc.
Therefore, unnormalized analog filter transfer function is given by
Figure 3 Magnitude response of low–pass filter (a) Gain vs ω and (b) Attenuation vs ω.
Let ω1 = Passband frequency in rad/s.
ω2 = Stopband frequency in rad/s.
Let the gain at the passband frequency ω1 be A1 and the gain at the stopband frequency
ω2 be A2, i.e.
The filter may be expressed in terms of the gain or attenuation at the edge frequencies. Let
α1 be the attenuation at the passband edge frequency ω1, and α 2 be the attenuation at the
stopband edge frequency ω2.
Another popular unit that is used for filter specification is dB. When the gain is
expressed in dB, it will be a negative dB. When the attenuation is expressed in dB, it will be a
positive dB.
Let k1 = Gain in dB at a passband frequency ω 1
k2 = Gain in dB at a stopband frequency ω 2
Figure 4 Magnitude response of low–pass filter (a) dB–Gain vs ω and (b) dB–attenuation vs ω .
Sometimes the specifications are given in terms of the passband ripple and the stopband ripple. In this case, the dB gain and attenuation can be estimated as follows: if the ripples are specified in dB, then the minimum passband dB gain equals k1 and the negative of the maximum stopband attenuation equals k2.
Figure 5 Magnitude response of Butterworth low-pass filter for various values of N.
Order of the filter
Since the frequency response of the filter depends on its order N, the order N has to be
estimated to satisfy the given specifications.
Usually the specifications of the filter are given in terms of gain A or attenuation at
a passband or stopband frequency as given below:
A1² = 1 / [1 + (Ω1/Ωc)^2N] at the passband edge Ω1, and A2² = 1 / [1 + (Ω2/Ωc)^2N] at the stopband edge Ω2.
Assuming equality, we can obtain the filter order N and the 3 dB cutoff frequency Ωc. Rearranging,
(Ω1/Ωc)^2N = (1/A1²) – 1 and (Ω2/Ωc)^2N = (1/A2²) – 1
Dividing the first equation by the second, we have
(Ω1/Ω2)^2N = [(1/A1²) – 1] / [(1/A2²) – 1]
Taking logarithms on both sides and solving for N,
N = log {[(1/A2²) – 1] / [(1/A1²) – 1]} / [2 log (Ω2/Ω1)]
Similarly, the 3 dB cutoff frequency Ωc is given by
Ωc = Ω1 / [(1/A1²) – 1]^(1/2N)
In fact, when the gains are specified in dB, k1 = 20 log A1 and k2 = 20 log A2, i.e. A1 = 10^(k1/20) and A2 = 10^(k2/20).
Butterworth low-pass filter transfer function
The unnormalized transfer function of the Butterworth filter is usually written in factored
form as:
Where
If s/Ωc (where Ωc is the 3 dB cutoff frequency of the low-pass filter) is replaced by sn, then the normalized Butterworth filter transfer function is given by
Step 3 Decide the order N of the filter. The order N should be such that
Choose N such that it is an integer just greater than or equal to the value obtained above.
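Step 3 can be sketched as a small order-estimation helper; the edge gains and frequency ratio used below are illustrative values, not taken from a specific example:

```python
import math

# Butterworth order estimate from the linear edge gains A1 (passband) and
# A2 (stopband) at edge frequencies Omega1 < Omega2:
#   N >= log[(1/A2^2 - 1) / (1/A1^2 - 1)] / (2 log(Omega2/Omega1))
def butterworth_order(A1, A2, ratio):
    num = (1.0 / A2 ** 2 - 1.0) / (1.0 / A1 ** 2 - 1.0)
    return math.ceil(math.log10(num) / (2.0 * math.log10(ratio)))

# Illustrative values: 3 dB at the passband edge, 20 dB at twice that frequency.
A1 = 10 ** (-3 / 20)    # ~0.707
A2 = 10 ** (-20 / 20)   # 0.1
print(butterworth_order(A1, A2, 2.0))   # 4
```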
Step 4 Calculate the analog cutoff frequency
When the order N is odd, for unity dc gain filter, Ha(s) is given by
The transfer function of the above equation will have 2N poles, which are given by the roots of the denominator polynomial. It can be shown that the poles of the transfer function lie symmetrically on a circle in the s-plane (the unit circle for the normalized filter) with an angular spacing of π/N.
For a stable and causal filter the poles should lie on the left half of the s-plane. Hence
the desired filter transfer function is formed by choosing the N-number of left half poles. When
N is even, all the poles are complex and exist in conjugate pairs. When N is odd, one of the
pole is real and all other poles are complex and exist as conjugate pairs. Therefore, the transfer
function of Butterworth filters will be a product of second order factors.
The poles of the Butterworth polynomial lie on a circle whose radius is Ωc. To determine the number of poles of the Butterworth filter and the angle between them, we use the following rules: the 2N poles are uniformly spaced, with angle θ = 360°/2N between adjacent poles.
If the order of the filter N is odd, then the location of the first pole is on the X-axis, and the locations of the subsequent poles are at θ, 2θ, ..., (360° – θ), with the angle measured in the counter-clockwise direction. If N is even, the first pole is located at θ/2 from the X-axis.
If φ is the angle of a valid (left half plane) pole w.r.t. the X-axis, then the pole and its conjugate are located at Ωc ∠ ±φ.
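The pole-placement rules above can be sketched as follows (Ωc = 1 for the normalized filter; only the left-half-plane poles are returned):

```python
import cmath
import math

# Left-half-plane poles of an N-th order Butterworth filter with 3 dB cutoff
# omega_c: the 2N roots of the magnitude-squared denominator lie on a circle
# of radius omega_c with angular spacing pi/N; the N poles with negative real
# part are the ones kept for a stable, causal filter.
def butterworth_poles(N, omega_c=1.0):
    return [omega_c * cmath.exp(1j * math.pi * (2 * k + N + 1) / (2 * N))
            for k in range(N)]

poles = butterworth_poles(3)
print(all(p.real < 0 for p in poles))          # True: stable filter
print(any(abs(p + 1) < 1e-9 for p in poles))   # True: odd N gives one real pole
```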
EXAMPLE 8
Design a Butterworth digital filter using the bilinear transformation. The specifications of the desired low-pass filter are as given, with T = 1 s.
Solution: The Butterworth digital filter is designed as per the following steps.
From the given specification, we have
Step 5 Determination of the transfer function of the analog Butterworth filter Ha(s)
For odd N, we have
where
For N = 3, we have
where
EXAMPLE 9
Design a low-pass Butterworth digital filter to give response of 3 dB or less for
frequencies upto 2 kHz and an attenuation of 20 dB or more beyond 4 kHz. Use the bilinear
transformation technique and obtain H(z) of the desired filter.
Solution: The specifications of the desired filter are given in terms of dB attenuation and
frequency in Hz. First the gain is to be expressed as a numerical value and frequency in rad/s.
Here the attenuation at the passband edge frequency ω1 is 3 dB, so the gain at ω1 is k1 = –3 dB; the attenuation at the stopband edge frequency ω2 is 20 dB, so k2 = –20 dB. Taking the sampling frequency fs = 10,000 Hz,
Normalized ω1 = 2π f1/fs = 2π × 2000/10000 = 0.4π rad
Normalized ω2 = 2π f2/fs = 2π × 4000/10000 = 0.8π rad
Step 1 Bilinear transformation is chosen
Step 2 Ratio of analog filter edge frequencies
EXAMPLE 10
Design a low-pass Butterworth filter using the bilinear transformation
method for satisfying the following constraints:
Passband: 0–400 Hz Stopband: 2.1– 4 kHz
Passband ripple: 2 dB Stopband attenuation: 20
dB Sampling frequency: 10 kHz
Solution: Given
α1 = 2 dB, k1 = –2 dB and A1 = 10^(k1/20) = 10^(–2/20) = 0.794
α2 = 20 dB, k2 = –20 dB and A2 = 10^(k2/20) = 10^(–20/20) = 0.1
Step 1 Type of transformation
EXAMPLE 11
A digital low-pass filter is required to meet the following specifications.
Passband attenuation ≤ 1 dB Passband edge = 4 kHz
Stopband attenuation 40 dB Stopband edge = 8 kHz
Sampling rate = 24 kHz
The filter is to be designed by performing the bilinear transformation on an analog system
function. Design the Butterworth filter.
Given f1 = 4 kHz, f2 = 8 kHz and sampling rate fs = 24 kHz,
ω1 = 2π f1/fs = 2π × 4000/24000 = 1.047 rad
ω2 = 2π f2/fs = 2π × 8000/24000 = 2.094 rad
The Butterworth filter is designed as follows:
Step 1 Type of transformation
Bilinear transformation is already specified.
Step 4 The cutoff frequency
Step 5
EXAMPLE 12
Design a digital IIR low-pass filter with passband edge at 1000 Hz and stopband edge at
1500 Hz for a sampling frequency of 5000 Hz. The filter is to have a passband ripple of 0.5 dB
and a stopband ripple below 30 dB. Design a Butterworth filter using the bilinear transformation.
Solution: Given fs = 5000 Hz, the normalized frequencies are given as:
The Butterworth filter is designed as follows:
Step 1 Type of transformation.
Bilinear transformation is to be used.
On substituting the given values, the order works out to 6.16. Choosing the next integer, the order of the low-pass Butterworth filter is N = 7.
DESIGN OF LOW-PASS CHEBYSHEV FILTER
For designing a Chebyshev IIR digital filter, first an analog filter is designed
using the given specifications. Then the analog filter transfer function is transformed to digital
filter transfer function by using either impulse invariant transformation or bilinear
transformation.
The analog Chebyshev filter is designed by approximating the ideal frequency response
using an error function. There are two types of Chebyshev approximations.
In type-1 approximation, the error function is selected such that the magnitude response is
equiripple in the passband and monotonic in the stopband.
In type-2 approximation, the error function is selected such that the magnitude function is
monotonic in the passband and equiripple in the stopband. The type-2 magnitude response is also
called inverse Chebyshev response. The type-1 design is discussed.
where A1 is the gain at the passband edge frequency ω1 and CN(x) is the Chebyshev polynomial of the first kind of degree N, given by
CN(x) = cos (N cos^–1 x) for |x| ≤ 1
CN(x) = cosh (N cosh^–1 x) for |x| > 1
Figure 6 Magnitude response of type–I Chebyshev filter.
The design parameters of the Chebyshev filter are obtained by considering the low-pass
filter with the desired specifications as given below.
The corresponding analog magnitude response is to be obtained in the design process.
We have
The order N of the analog filter can be determined from the design inequality obtained from the above expressions. Choose N to be the next nearest integer to the value so obtained. The values of Ω1 and Ω2 are determined from ω1 and ω2 using either the impulse invariant transformation or the bilinear transformation.
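The order estimate described above can be sketched as follows; the ripple, attenuation, and frequency-ratio values used are illustrative, not tied to a particular example:

```python
import math

# Type-1 Chebyshev order estimate. With linear passband gain A1 at Omega1 and
# stopband gain A2 at Omega2 > Omega1:
#   epsilon = sqrt(1/A1^2 - 1)
#   N >= acosh( sqrt(1/A2^2 - 1) / epsilon ) / acosh(Omega2/Omega1)
def chebyshev_order(A1, A2, ratio):
    eps = math.sqrt(1.0 / A1 ** 2 - 1.0)
    return math.ceil(math.acosh(math.sqrt(1.0 / A2 ** 2 - 1.0) / eps)
                     / math.acosh(ratio))

# Illustrative values: 2 dB ripple, 50 dB attenuation, Omega2/Omega1 = 2.
A1 = 10 ** (-2 / 20)
A2 = 10 ** (-50 / 20)
print(chebyshev_order(A1, A2, 2.0))   # 6
```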
The transfer function of Chebyshev filters are usually written in the factored form as
given below.
For odd values of N and unity dc gain filter, the parameters Bk are evaluated using the equation:
Poles of a NORMALIZED Chebyshev filter
The transfer function of the analog system can be obtained from the equation for the
magnitude squared response as:
The normalized poles in the s-domain can be obtained by equating the denominator of the above equation to zero. The solution of the resulting equation gives the 2N poles of the filter.
The unnormalized poles, s’n can be obtained from the normalized poles as shown below.
The normalized poles lie on an ellipse in s-plane. Since for a stable filter all the poles
should lie in the left half of s-plane, only the N poles on the ellipse which are in the left half of s-
plane are considered.
For N even, all the poles are complex and exist in conjugate pairs. For N odd, one pole is
real and all other poles are complex and occur in conjugate pairs.
For even values of N and unity dc gain filter, the gain constant is found such that Ha(0) = 1/√(1 + ε²).
For odd values of N and unity dc gain filter, the gain constant is found such that Ha(0) = 1.
Step 7 Using the chosen transformation, transform Ha(s) to H(z), where H(z) is
the transfer function of the digital filter.
[The high-pass, band pass and band stop filters are obtained from low-
pass filter design by frequency transformation].
EXAMPLE 14
0.707 ≤ |H(ω)| ≤ 1, 0 ≤ ω ≤ 0.2π
|H(ω)| ≤ 0.1, 0.5π ≤ ω ≤ π
using bilinear transformation and assuming T = 1 s.
Solution: Given
A1 = 0.707, ω1 = 0.2π
A2 = 0.1, ω2 = 0.5π
T = 1 s and bilinear transformation is to be used. The low-pass Chebyshev IIR digital filter is
designed as follows:
Step 1 Type of transformation
Here bilinear transformation is to be used.
Step 2 Attenuation constant
Step 3
On simplifying, we get
Attenuation constant
Ratio of analog frequencies
Order of filter
Analog cutoff frequency
For N odd
Using bilinear transformation, H(z) is given by
EXAMPLE 16
The specification of the desired low-pass filter is:
0.9 ≤ |H(ω)| ≤ 1, 0 ≤ ω ≤ 0.3π
|H(ω)| ≤ 0.15, 0.5π ≤ ω ≤ π
Design a Chebyshev digital filter using the bilinear transformation.
Solution: Given
A1 = 0.9, ω 1 = 0.3 π
A2 = 0.15, ω 2 = 0.5 π
The Chebyshev filter is designed as per the following steps:
Step 1 The bilinear transformation is used.
Step 2 Attenuation constant
When s = 0, for unity dc gain the numerator constant is evaluated from the denominator product 0.577 × 1.29 = 0.744. Therefore,
Ha(s) = 0.744 / [(s + 0.577) (s² + 0.577s + 1.29)]
Step 7 Digital transfer function
EXAMPLE 17
Determine the system function of the lowest order Chebyshev digital
filter that meets the following specifications.
2 dB ripple in the passband 0 ≤ ω ≤ 0.25π
At least 50 dB attenuation in the stopband 0.4π ≤ ω ≤ π
Solution: Given
Ripple in passband = 2 dB, i.e. k1 = –2 dB, so A1 = 10^(k1/20) = 10^(–2/20) = 0.794
Attenuation in stopband = 50 dB, i.e. k2 = –50 dB, so A2 = 10^(k2/20) = 10^(–50/20) = 0.0032
Therefore, A1 = 0.794 at ω1 = 0.25π and A2 = 0.0032 at ω2 = 0.4π
EXAMPLE 18 Determine the lowest order of Chebyshev filter that meets the following specifications:
(i) 1 dB ripple in the passband 0 ≤ ω ≤ 0.3π
(ii) At least 60 dB attenuation in the stopband 0.35π ≤ ω ≤ π
Use the bilinear transformation.
Solution: Given ω1 = 0.3π and ω2 = 0.35π. For 1 dB passband ripple, k1 = –1 dB and A1 = 10^(–1/20) = 0.891. For 60 dB stopband attenuation, α2 = 60 dB, i.e. k2 = –60 dB.
Step 1 Bilinear transformation is to be used.
Step 2 Attenuation constant
INTRODUCTION
A filter is a frequency selective system. Digital filters are classified as finite duration unit
impulse response (FIR) filters or infinite duration unit impulse response (IIR) filters, depending
on the form of the unit impulse response of the system. In the FIR system, the impulse response
sequence is of finite duration, i.e., it has a finite number of non-zero terms. The IIR system has
an infinite number of non-zero terms, i.e., its impulse response sequence is of infinite duration.
IIR filters are usually implemented using recursive structures (feedback: poles and zeros) and FIR filters are usually implemented using non-recursive structures (no feedback: only zeros). The
response of the FIR filter depends only on the present and past input samples, whereas for the
IIR filter, the present response is a function of the present and past values of the excitation as
well as past values of the response.
Advantages of FIR filter over IIR filters:
1. FIR filters are always stable.
2. FIR filters with exactly linear phase can easily be designed.
3. FIR filters can be realized in both recursive and non-recursive structures.
4. FIR filters are free of limit cycle oscillations, when implemented on a finite
word length digital system.
5. Excellent design methods are available for various kinds of FIR filters.
Disadvantages of FIR filters:
1. The implementation of narrow transition band FIR filters is very costly, as it requires
considerably more arithmetic operations and hardware components such as multipliers,
adders and delay elements.
2. Memory requirement and execution time are very high.
FIR filters are employed in filtering problems where linear phase characteristics within the
pass band of the filter are required. If this is not required, either an FIR or an IIR filter may be
employed. An IIR filter has a smaller number of side lobes in the stop band than an FIR filter with the same number of parameters. For this reason, if some phase distortion is tolerable, an IIR filter is preferable. Also, the implementation of an IIR filter involves fewer parameters, less memory and lower computational complexity.
H(z) = Σ (n=0 to N–1) h(n) z^–n
where h(n) is the impulse response of the filter. The frequency response [Fourier transform of
h(n)] is given by
H(ω) = Σ (n=0 to N–1) h(n) e^–jωn
which is periodic in frequency with period 2π, i.e.,
H(ω) = H(ω + 2πk), k = 0, 1, 2, ...
Since H(ω) is complex, it can be expressed in terms of the magnitude and phase of the filter as:
H(ω) = |H(ω)| e^jθ(ω)
For constant phase and group delays, the phase must be linear, θ(ω) = –αω. We have
Σ (n=0 to N–1) h(n) e^–jωn = |H(ω)| e^–jαω
Equating real and imaginary parts and cross multiplying, we get
Σ (n=0 to N–1) h(n) [sin ωn cos αω – cos ωn sin αω] = 0
i.e. Σ (n=0 to N–1) h(n) sin (α – n)ω = 0
This will be zero when
α = (N – 1)/2 and h(n) = h(N – 1 – n), 0 ≤ n ≤ N – 1
This shows that FIR filters will have constant phase and group delays when the impulse
response is symmetrical about α= (N – 1)/2.
The impulse response satisfying the symmetry condition h(n) = h(N – 1 – n) for odd and even values of N is shown in Figure 1. When N = 9, the centre of symmetry of the sequence occurs at the fourth sample, and when N = 8, the filter delay is 3 1/2 samples.
Figure 1 Impulse response sequence of symmetrical sequences for (a) N odd (b) N even.
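The linear-phase condition derived above can be checked numerically: removing the linear phase term e^–jωα from H(ω) of a symmetric h(n) must leave a purely real amplitude function. A sketch, using an arbitrary symmetric sequence:

```python
import numpy as np

# For a symmetric impulse response h(n) = h(N - 1 - n), the frequency response
# factors as H(w) = e^{-j w alpha} * A(w) with alpha = (N - 1)/2 and A(w) real,
# so multiplying H(w) by e^{+j w alpha} must leave a purely real function.
h = np.array([1.0, 2.0, 3.0, 2.0, 1.0])      # symmetric, N = 5, alpha = 2
alpha = (len(h) - 1) / 2

w = np.linspace(0, np.pi, 64)
n = np.arange(len(h))
H = np.array([np.sum(h * np.exp(-1j * wi * n)) for wi in w])
A = H * np.exp(1j * w * alpha)               # strip the linear phase term

print(np.max(np.abs(A.imag)) < 1e-12)        # True: the phase is exactly linear
```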
If only constant group delay is required and not constant phase delay, we can write
θ(ω) = β – αω
Following the same steps as above, cross multiplying and rearranging leads to the conditions
β = ±π/2, α = (N – 1)/2 and h(n) = –h(N – 1 – n)
This shows that FIR filters have constant group delay τg, and not constant phase delay, when the impulse response is antisymmetrical about α = (N – 1)/2.
The impulse response satisfying the antisymmetry condition is shown in Figure 2. When N = 9, the centre of antisymmetry occurs at the fourth sample, and when N = 8, the centre of antisymmetry occurs at 3 1/2 samples. From Figure 2, we find that h[(N – 1)/2] = 0 for an antisymmetric odd-length sequence.
Figure 2 Impulse response sequence of antisymmetric sequences for (a) N odd (b) N even.
EXAMPLE 1 The length of an FIR filter is 7. If this filter has a linear phase, show that the symmetry condition h(n) = h(N – 1 – n) is satisfied.
Solution: The length of the filter is N = 7. Therefore, for linear phase, h(n) = h(6 – n), i.e. h(0) = h(6), h(1) = h(5), h(2) = h(4), with the centre of symmetry at α = (N – 1)/2 = 3.
EXAMPLE 2
The following transfer function characterizes an FIR filter (N = 9).
Determine the magnitude response and show that the phase and group delays are constant.
Thus, the phase delay and the group delay are the same and are constants.
Rectangular Window
The weighting function (window function) for an N-point rectangular window is given by
In the magnitude response of the triangular window, the side lobe level is smaller than that of the rectangular window, being reduced from –13 dB to –25 dB. However, the main lobe width is now 8π/N, or twice that of the rectangular window.
The triangular window produces a smooth magnitude response in both the pass band and the stop band, but it has the following disadvantages when compared to the magnitude response obtained by using the rectangular window:
1. The transition region is wider.
2. The attenuation in the stop band is lower.
Because of these characteristics, the triangular window is not usually a good choice.
The raised cosine window multiplies the central Fourier coefficients by approximately unity and
smoothly truncates the Fourier coefficients toward the ends of the filter. The smoother ends and
broader middle section produces less distortion of hd(n) around n = 0. It is also called generalized
Hamming window.
The window sequence is of the form:
Hanning Window
The width of the main lobe is 8π/N, i.e., twice that of the rectangular window, which results in
doubling of the transition region of the filter. The peak of the first side lobe is –32 dB relative
to the maximum value. This results in smaller ripples in both pass band and stop band of the low-
pass filter designed using Hanning window. The minimum stop band attenuation of the filter is
44 dB. At higher frequencies the stop band attenuation is even greater. When compared to
triangular window, the main lobe width is same, but the magnitude of the side lobe is reduced,
hence the Hanning window is preferable to triangular window.
Hamming Window
The Hamming window function is given by
In the magnitude response for N = 31, the magnitude of the first side lobe is down about 41dB
from the main lobe peak, an improvement of 10 dB relative to the Hanning window. But this
improvement is achieved at the expense of the side lobe magnitudes at higher frequencies, which
are almost constant with frequency. The width of the main lobe is 8π/N. In the magnitude response of a low-pass filter designed using the Hamming window, the first side lobe peak is –51 dB, which is about 7 dB lower than that of the Hanning window filter. However, at higher frequencies,
the stop band attenuation is low when compared to that of Hanning window. Because the
Hamming window generates lesser oscillations in the side lobes than the Hanning window for
the same main lobe width, the Hamming window is generally preferred.
Blackman Window
The Blackman window function is another type of cosine window and given by the equation
In the magnitude response, the width of the main lobe is 12π/N, which is the widest among the windows discussed. The peak of the first side lobe is at –58 dB and the side lobe magnitude decreases with
frequency. This desirable feature is achieved at the expense of increased main lobe width.
However, the main lobe width can be reduced by increasing the value of N. The side lobe
attenuation of a low-pass filter using Blackman window is –78 dB.
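The raised-cosine window formulas discussed above can be written out explicitly and cross-checked against NumPy's built-in window functions (N = 31, as in the Hamming discussion; `numpy` assumed available):

```python
import numpy as np

# Explicit raised-cosine window definitions, verified against NumPy's
# built-in versions (N is the window length, n = 0 ... N-1):
N = 31
n = np.arange(N)
hanning  = 0.5 - 0.5 * np.cos(2 * np.pi * n / (N - 1))
hamming  = 0.54 - 0.46 * np.cos(2 * np.pi * n / (N - 1))
blackman = (0.42 - 0.5 * np.cos(2 * np.pi * n / (N - 1))
            + 0.08 * np.cos(4 * np.pi * n / (N - 1)))

print(np.allclose(hanning, np.hanning(N)),
      np.allclose(hamming, np.hamming(N)),
      np.allclose(blackman, np.blackman(N)))   # True True True
```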
Table 1 gives the important frequency domain characteristics of some window functions.
EXAMPLE 3
Design an ideal low-pass filter with N = 11 with a frequency response
We have
Therefore, the designed filter coefficients are given as
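The window-method procedure itself can be sketched as follows; the cutoff ωc = 0.25π is an assumed illustrative value rather than the example's own specification, and the Hamming window is an arbitrary choice:

```python
import numpy as np

# Window-method FIR design sketch for N = 11. The cutoff wc = 0.25*pi and the
# Hamming window are assumed illustrative choices, not the example's own data.
# Ideal low-pass response: hd(n) = sin(wc (n - alpha)) / (pi (n - alpha)),
# with hd(alpha) = wc / pi at the centre of symmetry alpha = (N - 1)/2.
N = 11
wc = 0.25 * np.pi
alpha = (N - 1) / 2
m = np.arange(N) - alpha
hd = (wc / np.pi) * np.sinc(wc * m / np.pi)   # np.sinc(x) = sin(pi x)/(pi x)
h = hd * np.hamming(N)                        # windowed (truncated) design

print(np.allclose(h, h[::-1]))                # True: symmetric, hence linear phase
```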
5.0. INTRODUCTION
Discrete-time systems may be single-rate systems or multi-rate systems. The systems that use
single sampling rate from A/D converter to D/A converter are known as single-rate systems and
the discrete-time systems that process data at more than one sampling rate are known as multi-
rate systems. In digital audio, the different sampling rates used are 32 kHz for broadcasting, 44.1
kHz for compact disc and 48 kHz for audio tape. In digital video, the sampling rates for
composite video signals are 14.3181818 MHz and 17.734475 MHz for NTSC and PAL
respectively. But the sampling rates for digital component of video signals are 13.5 MHz and
6.75 MHz for luminance and colour difference signal. Different sampling rates can be obtained
using an up sampler and down sampler. The basic operations in multirate processing to achieve
this are decimation and interpolation. Decimation is for reducing the sampling rate and
interpolation is for increasing the sampling rate. There are many cases where multi-rate signal
processing is used. Few of them are as follows:
1. In high quality data acquisition and storage systems
2. In audio signal processing
3. In video
4. In speech processing
5. In transmultiplexers
6. For narrow band filtering
The various advantages of multirate signal processing are as follows:
1. Computational requirements are less.
2. Storage for filter coefficients is less.
3. Finite arithmetic effects are less.
4. Filter order required in multirate application is low.
5. Sensitivity to filter coefficient lengths is less.
While designing multi-rate systems, effects of aliasing for decimation and pseudoimages
for interpolators should be avoided.
5.1 SAMPLING
A continuous-time signal x(t) can be converted into a discrete-time signal x(nT) by sampling
it at regular intervals of time with sampling period T. The sampled signal x(nT) is given by
x(nT) = x(t)|t=nT, n = 0, ±1, ±2, ...
A sampling process can also be interpreted as a modulation or multiplication process.
SAMPLING THEOREM
Sampling theorem states that a band limited signal x(t) having finite energy, which has no
spectral components higher than fh hertz can be completely reconstructed from its samples taken
at the rate of 2fh or more samples per second.
The sampling rate of 2fh samples per second is called the Nyquist rate, and its reciprocal 1/(2fh) is the Nyquist period.
Figure 5.2 Plots of (a) x(n), (b) x(2n) and (c) x(3n).
The block diagram of the decimator is shown in Figure 5.3. The decimator comprises two
blocks: an anti-aliasing filter and a down sampler. The anti-aliasing filter is a low-pass
filter that band limits the input signal so that the aliasing problem is eliminated, and the
down sampler reduces the sampling rate by keeping every Dth sample and removing the
D – 1 samples in between.
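The filter-then-down-sample structure can be sketched as follows. A simple moving average is used here as a stand-in for the anti-aliasing filter (a real decimator would use a properly designed low-pass FIR); the down sampler then keeps every Dth sample.

```python
def moving_average(x, M):
    """Crude length-M low-pass FIR, a stand-in for the anti-aliasing filter."""
    return [sum(x[max(0, n - M + 1):n + 1]) / M for n in range(len(x))]

def decimate(x, D):
    """Decimator: anti-aliasing filter followed by a factor-D down sampler."""
    return moving_average(x, D)[::D]

x = list(range(12))          # a short test ramp
print(decimate(x, 3))        # [0.0, 2.0, 5.0, 8.0]
```

Filtering before down sampling is essential: discarding samples of an unfiltered wide-band signal would fold high-frequency content into the retained band.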
SPECTRUM OF DOWN SAMPLED SIGNAL
Let T be the sampling period of the input signal x(n), and let F be its sampling rate or
frequency. When the signal is down sampled by D, let T' be its new sampling period and
F' its new sampling frequency. Then
T' = DT  and  F' = F/D
Let us derive the spectrum of a down sampled signal x(Dn) and compare it with the
spectrum of the input signal x(n). The Z-transform of the signal x(n) is given by
X(z) = Σ (n = –∞ to ∞) x(n) z^(–n)
The down sampled signal y(n) is obtained by multiplying the sequence x(n) with a periodic train
of impulses p(n) with period D and then leaving out the D – 1 zeros between each pair of
samples. The periodic train of impulses is given by
p(n) = (1/D) Σ (k = 0 to D – 1) e^(j2πkn/D)
which equals 1 when n is a multiple of D and 0 otherwise. Let x'(n) = x(n) p(n). If we leave
out the D – 1 zeros between each pair of samples, we get the output of the down sampler
y(n) = x'(nD) = x(nD) p(nD) = x(nD),  since p(nD) = 1
The Z-transform of the output sequence is given by
Y(z) = (1/D) Σ (k = 0 to D – 1) X(z^(1/D) e^(–j2πk/D))
Substituting z = e^(jω), we get the frequency response
Y(e^(jω)) = (1/D) Σ (k = 0 to D – 1) X(e^(j(ω – 2πk)/D))
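The down-sampled spectrum relation Y(e^(jω)) = (1/D) Σ (k = 0 to D – 1) X(e^(j(ω – 2πk)/D)) can be checked numerically for a finite sequence by evaluating both sides of the DTFT directly; the sequence and test frequency below are arbitrary choices for illustration.

```python
import cmath

def dtft(x, w):
    """Evaluate X(e^{jw}) = sum_n x(n) e^{-jwn} for a finite sequence x."""
    return sum(xn * cmath.exp(-1j * w * n) for n, xn in enumerate(x))

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
D = 2
y = x[::D]                  # down sampled sequence y(n) = x(Dn)
w = 1.3                     # arbitrary test frequency in radians/sample

lhs = dtft(y, w)
rhs = sum(dtft(x, (w - 2 * cmath.pi * k) / D) for k in range(D)) / D
print(abs(lhs - rhs))       # ~0.0: both sides of the spectrum formula agree
```

The D spectral copies summed on the right-hand side are exactly the stretched and shifted images that cause aliasing when x(n) is not band limited to π/D.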
Figure 5.7 (a) Input signal x(n), (b) Output of 2-fold up sampler yf(n) = x(n/2), (c) Output of
interpolator y2(n) = x(n/2).
where ω = 2πfT. The spectrum of the signal w(n) contains the images of the base band placed at the
harmonics of the sampling frequency ±2π/I, ±4π/I, ... To remove the images, an anti-imaging
filter is used. The ideal characteristic of the low-pass filter is given by
H(e^(jω)) = I for |ω| ≤ π/I, and 0 otherwise.
ANTI-IMAGING FILTER
The low-pass filter placed after the up sampler to remove the images created due to up
sampling is called the anti-imaging filter.
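A small sketch of the complete interpolator: zero stuffing followed by an anti-imaging low-pass. The linear-interpolation kernel h = [0.5, 1, 0.5] is used here as a simple stand-in for the ideal filter above (for I = 2 it fills each inserted zero with the average of its neighbours).

```python
def upsample(x, I):
    """Insert I - 1 zeros between consecutive samples."""
    y = [0.0] * (len(x) * I)
    y[::I] = x
    return y

def convolve(x, h):
    """Direct-form FIR filtering (full linear convolution)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

h = [0.5, 1.0, 0.5]          # linear-interpolation kernel, stand-in anti-imaging filter
x = [1.0, 2.0, 3.0, 4.0]
y = convolve(upsample(x, 2), h)
print(y)  # [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 2.0, 0.0]
```

Apart from edge transients, the output carries the original samples at even positions and their midpoints in between, i.e. the images created by zero stuffing have been smoothed out.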
EXAMPLE 5.1 Show that the up sampler and down sampler are time-variant systems.
Solution: Consider a factor-of-I up sampler defined by y(n) = x(n/I) for n = 0, ±I, ±2I, ...
and y(n) = 0 otherwise. The response to the delayed input x(n – k) is x(n/I – k), whereas the
delayed output is y(n – k) = x((n – k)/I). Since these differ in general, the up sampler is
time-variant. Similarly, for a factor-of-D down sampler y(n) = x(Dn), the response to the
delayed input is x(Dn – k), whereas the delayed output is y(n – k) = x(Dn – Dk); these are
not equal for all k, so the down sampler is also time-variant.
EXAMPLE 5.2 Sketch the decimated version x(3n) and the interpolated version x(n/3) of the
unit step sequence x(n) = u(n).
Solution: Given that x(n) = u(n) is the unit step sequence, defined as u(n) = 1 for n ≥ 0 and
u(n) = 0 for n < 0.
The output signal y(n) is shown in Figure 5.10(c). It is obtained by inserting two
zeros between each pair of consecutive samples.
Figure 5.10 Plots of (a) x(n) = u(n), (b) y(n) = x(3n) and (c) y(n) = x(n/3).
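The three sequences of Figure 5.10 can be generated directly, which makes the result easy to check: decimating the unit step still gives all ones, while zero stuffing spreads the ones to multiples of 3.

```python
u = [1] * 10                     # u(n) for n = 0..9
dec = u[::3]                     # x(3n): every third sample of the step
interp = [0] * (4 * 3)
interp[::3] = u[:4]              # x(n/3): ones at multiples of 3, zeros between

print(dec)     # [1, 1, 1, 1]
print(interp)  # [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
```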
EXAMPLE 5.3 Consider a ramp sequence r(n) = nu(n) and sketch its interpolated and decimated
versions with a factor of 3.
Solution: The decimated version is y(n) = r(3n) = 3n u(n), and the interpolated (zero-stuffed)
version is y(n) = r(n/3) for n = 0, ±3, ±6, ... and y(n) = 0 otherwise.
Figure 5.11 Plots of (a) r(n) = nu(n), (b) y(n) = r(3n) and (c) y(n) = r(n/3).
EXAMPLE 5.4 Consider a signal x(n) = sin(n) u(n).
(i) Obtain a signal with a decimation factor 2.
(ii) Obtain a signal with an interpolation factor 2.
Solution: The given signal is x(n) = sin(n) u(n). It is shown in Figure 5.12(a).
(i) Signal with decimation factor 2. The decimated signal is y(n) = x(2n) = sin(2n) u(n),
shown in Figure 5.12(b).
(ii) Signal with interpolation factor 2. The interpolated signal is y(n) = x(n/2) = sin(n/2) u(n)
for even n and y(n) = 0 for odd n, shown in Figure 5.12(c).
Figure 5.12 Plots of (a) x(n) = sin(n) u(n), (b) y(n) = sin(2n) u(n) and (c) y(n) = sin(n/2) u(n).
This shows that the cascade of an up sampler and a down sampler is not interchangeable when D
and I are not co-prime, i.e., when D and I have a common factor.
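This co-primality condition is easy to verify numerically: cascading the two samplers in both orders gives identical outputs for co-prime factors (e.g. D = 2, I = 3) but different outputs when the factors share a common divisor (e.g. D = I = 2).

```python
def downsample(x, D):
    return x[::D]

def upsample(x, I):
    y = [0] * (len(x) * I)
    y[::I] = x
    return y

x = list(range(1, 13))

# D = 2 and I = 3 are co-prime: the two cascade orders agree.
a = downsample(upsample(x, 3), 2)
b = upsample(downsample(x, 2), 3)
print(a == b)   # True

# D = I = 2 share a common factor: the cascade orders differ.
c = downsample(upsample(x, 2), 2)
d = upsample(downsample(x, 2), 2)
print(c == d)   # False
```

In the second case, up-then-down recovers x(n) unchanged, while down-then-up has already discarded the odd-indexed samples.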
To meet the desired specifications of a narrow band LPF, the filters h1(n) and h2(n) should be identical,
with the same pass band ripple δp/2 and the same stop band ripple δs.
2. Filter banks. Filter banks are usually classified into two types:
(i) Analysis filter bank and (ii) Synthesis filter bank
The D-channel synthesis filter bank shown in Figure 10.69 is the dual of the analysis filter bank. In this case,
each Vd(z) is fed to an up sampler. The up-sampling process produces the signal Vd(z^D). These signals are
applied to the filters Gd(z) and finally added to get the output signal X̂(z). The filters G0(z) to GD–1(z)
have the same characteristics as the analysis filters H0(z) to HD–1(z).
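The analysis/synthesis duality can be illustrated with the simplest possible 2-channel case, the so-called "lazy" filter bank with trivial filters H0(z) = 1 and H1(z) = z^(-1): the analysis side down samples the input into its even and odd samples, and the synthesis side up samples and re-interleaves them, reconstructing the input exactly. This is a sketch of the structure only, not of any particular filter design from the text.

```python
def analysis(x):
    """2-channel analysis bank with trivial filters: split even/odd samples."""
    v0 = x[::2]     # channel 0: even-indexed samples (H0(z) = 1, then down by 2)
    v1 = x[1::2]    # channel 1: odd-indexed samples (H1(z) = z^-1, then down by 2)
    return v0, v1

def synthesis(v0, v1):
    """Dual synthesis bank: up sample each channel and merge (add) them."""
    y = [0] * (len(v0) + len(v1))
    y[::2] = v0
    y[1::2] = v1
    return y

x = [3, 1, 4, 1, 5, 9, 2, 6]
v0, v1 = analysis(x)
print(synthesis(v0, v1) == x)   # True: perfect reconstruction
```

Practical banks replace these trivial filters with frequency-selective H0(z), ..., HD–1(z), and the synthesis filters must then be chosen so that the aliasing introduced by the down samplers cancels in the final sum.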