
Digital Signal Analysis & Processing Ch. 1

Chapter 1: Discrete Signals and Systems


Signal:
We know that a signal can be a rather abstract notion, such as a flashing light on our bike (a turn
signal), a referee's whistle indicating the start, halt or end of a football match, a doorbell ring, etc.
Signal can be defined as a detectable physical quantity or impulse (as a voltage, current or
magnetic field strength) by which messages or information can be transmitted.
A signal is a source of information, generally a physical quantity which varies with respect to an
independent variable such as time, space or temperature. In other words, a signal is a function of
independent variables that carries some information.
E.g. x(t) = 10t + 7t². A signal of one independent variable (i.e. time).
s(x, y) = 3x + 2xy + 10y². A signal of two independent variables (i.e. x and y).
Speech, electrocardiogram (ECG), electroencephalogram (EEG), electromyogram (EMG), etc
signals are examples of information-bearing signals that evolve as functions of a single
independent variable, namely, time. An example of signal that is function of two independent
variables is an image signal.

System:
A system may be defined as a physical device that performs an operation on a signal. More
specifically, a system is something that can manipulate, change, record, or transmit signals. In
Digital Signal Processing, a system may be defined by an algorithm. For example, a filter used to
reduce the noise and interference corrupting a desired information-bearing signal is called a
system. In this case the filter performs some operation(s) on the signal, which has the effect of
reducing (filtering) the noise and interference from the desired information-bearing signal.

Signal Processing vs Signal Analysis:


When we pass a signal through a system, as in filtering, we say that we have processed the signal.
In this modern world we are surrounded by all kinds of signals in various forms. Some of the
signals are natural, but most of the signals are manmade. Some signals are necessary (speech),
some are pleasant (music), while many are unwanted or unnecessary in a given situation. In an
engineering context, signals are carriers of information, both useful and unwanted. Therefore,
extracting or enhancing the useful information from a mix of conflicting information is the simplest
form of signal processing. More generally, signal processing is an operation designed for
extracting, enhancing, storing, and transmitting useful information. The distinction between useful
and unwanted information is often subjective as well as objective. Hence signal processing tends
to be application dependent.
Signal analysis is something we do to design or understand something or a phenomenon. It
usually involves mathematical tools as well as devices such as spectrum analyzers, data analysis
tools, and analytical tools such as the FFT, MATLAB, etc. For example, while designing a link, you
will examine its spectral spreading and power levels. This is what an engineer will do. But once
the design is done, the device itself will just do "signal processing".

Digital Signal Processing refers to methods of filtering and analyzing time-varying signals based
on the assumption that the signal amplitudes can be represented by a finite set of integers
corresponding to the amplitude of the signal at a finite number of points in time. Digital Signal
Processing is distinguished from other areas in computer science by the unique type of data it

Page 1 of 26
Compiled by: Rupesh D. Shrestha[nec]
uses: signals. In most cases, these signals originate as sensory data from the real world: seismic
vibrations, visual images, sound waves, etc. DSP is the mathematics, the algorithms, and the
techniques used to manipulate these signals after they have been converted into a digital form.
This includes a wide variety of goals, such as: enhancement of visual images, recognition and
generation of speech, compression of data for storage and transmission, etc.
Digital signal processing is the study of signals in a digital representation and the processing
methods of these signals. DSP includes subfields like: audio signal processing, control
engineering, digital image processing and speech processing. RADAR Signal processing and
communications signal processing are two other important subfields of DSP.
Since the goal of DSP is usually to measure or filter continuous real-world analog signals, the first
step is usually to convert the signal from an analog to a digital form, by using an analog to digital
converter. Often, the required output signal is another analog output signal, which requires a
digital to analog converter.
The algorithms required for DSP are sometimes performed using specialized computers, which
make use of specialized microprocessors called digital signal processors (also abbreviated DSP).
These process signals in real time and are generally purpose-designed application-specific
integrated circuits (ASICs). When flexibility and rapid development are more important than unit
costs at high volume, DSP algorithms may also be implemented using field-programmable gate
arrays (FPGAs).

Basic Elements of a Digital Signal Processing System:


The signals that we encounter in practice are mostly analog signals. These signals, which
vary continuously in time and amplitude, are processed using electrical networks containing active
and passive circuit elements. This approach is known as analog signal processing (ASP), for
example, radio and television receivers.

Figure 1: Analog Signal Processor.


They can also be processed using digital hardware containing adders, multipliers, and logic
elements or using special-purpose microprocessors. However, one needs to convert analog signals
into a form suitable for digital hardware. This form of the signal is called a digital signal. It takes
on a finite number of values at specific instants in time, and hence it can be represented by
binary numbers, or bits. The processing of digital signals is called DSP; in block diagram form it
is represented by

Figure 2: Basic Elements of Digital Signal Processing.


The input to and output from the systems are analog in nature. The various block elements are
discussed below:


Low-Pass Filter (anti-alias): This is a prefilter or an antialiasing filter, which conditions the
analog signal to prevent aliasing.
Analog to Digital Converter (ADC): ADC produces a stream of binary numbers from analog
signals. It has three sub-elements as sampling, quantization and encoding.
Digital Signal Processor: This is the heart of DSP and can represent a general-purpose computer
or a special-purpose processor, or digital hardware, and so on. The basic elements of DSP are
adder and multiplier.
Digital to Analog Converter (DAC): This is the inverse operation to the ADC, which produces
a staircase waveform from a sequence of binary numbers, a first step towards producing an analog
signal.
Low-Pass Filter (dequantisation): This is a postfilter to smooth out the staircase waveform into the
desired analog signal. It is simply a low-pass filter which smooths out (or interpolates) the
sequence obtained from the DAC.
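The chain described above can be sketched end to end. A minimal illustration, where the 5 Hz test tone, Fs = 100 Hz, the 8-bit word length, and the 3-point moving average standing in for the digital signal processor are all assumed example choices:

```python
import math

Fs = 100                 # sampling frequency, Hz (assumed for illustration)
B = 8                    # quantizer word length, bits (assumed)
x_analog = lambda t: math.cos(2 * math.pi * 5 * t)   # 5 Hz test tone

# ADC step 1: sampling -> discrete-time, continuous-value sequence
x = [x_analog(n / Fs) for n in range(100)]

# ADC steps 2-3: quantization + coding to B-bit integer codes on [-1, 1]
levels = 2 ** B
xq = [round((s + 1) / 2 * (levels - 1)) for s in x]   # integer codes 0..255

# Digital Signal Processor: a 3-point moving average (adders + multipliers only)
y = [(xq[n - 1] + xq[n] + xq[n + 1]) / 3 for n in range(1, len(xq) - 1)]

# A DAC plus smoothing filter would map y back to an analog voltage; here we
# just decode the codes and check the round-trip error stays within half a step
delta = 2 / (levels - 1)                              # quantization step size
err = max(abs((2 * c / (levels - 1) - 1) - s) for c, s in zip(xq, x))
```

The round-trip error `err` never exceeds delta/2, which is exactly the quantization-error bound discussed later in the ADC section.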

Performance of these systems is usually limited by the performance (i.e. speed, resolution and
linearity) of the analog-to-digital converter. However, when using a DSP we should never forget
two facts:
 If information was not present in the sampled signal to start with, no amount of digital
manipulation will extract it.
 Real signals come with noise.

Advantages of DSP over ASP:


A major drawback of ASP is its limited scope for performing complicated signal
processing applications. This translates into nonflexibility in processing and complexity in system
designs. All of these generally lead to expensive products. On the other hand, using DSP approach,
it is possible to convert an inexpensive personal computer into a powerful signal processor. Some
important advantages of DSP are:
1. Systems using the DSP approach can be developed using software running on a general
purpose computer. Therefore DSP is relatively convenient to develop and test, and the
software is portable.
2. DSP operations are based solely on additions and multiplications, leading to extremely stable
processing capability – for example, stability independent of temperature.
3. Digital signals are easily stored on magnetic or other storable media.
4. The digital signals become transportable and can be processed off-line in a remote laboratory.
5. DSP operations can easily be modified in real time, often by simple programming changes, or
by reloading of registers.
6. It is difficult to perform precise mathematical operations on signals in analog form but these
same operations can be routinely implemented on a digital computer using software.
7. DSP has lower cost due to VLSI technology, which reduces costs of memories, gates,
microprocessors, and so forth.

The principal disadvantage of DSP is its limited speed of operation: very high frequencies and wide
bandwidths require fast-sampling-rate A/D converters and fast digital signal
processors. Primarily due to the above advantages, DSP is now
becoming a first choice in many technologies and applications, such as consumer electronics,
communications, wireless telephones, and medical imaging.


Applications of Digital Signal Processing:


There are various application areas of digital signal processing (DSP) due to the
availability of high-resolution spectral analysis, which requires a high-speed processor to implement
the Fast Fourier Transform. Some of these areas are listed below:
1. Speech Processing
Speech is a one dimensional signal. Digital processing of speech is applied to a wide range of
speech problems such as speech spectrum analysis, channel vocoders, etc. DSP is applied to
speech coding, speech enhancement, speech analysis and synthesis, speech recognition and
speaker recognition.
2. Image Processing
Any two-dimensional pattern is called an image. Digital processing of images requires two-
dimensional DSP tools such as Discrete Fourier Transform (DFT), Fast Fourier Transform (FFT)
algorithms and z-transforms. Processing of electrical signals extracted from images by digital
techniques include image formation and recording, image compression, image restoration, image
reconstruction and image enhancement.
3. Radar Signal Processing
Radar stands for "Radio Detection and Ranging". Improvement in signal processing is
possible by digital technology. Development of DSP has led to greater sophistication of radar
tracking algorithms. Radar systems consist of transmit-receive antenna, digital processing system
and control unit.
4. Digital Communications
Application of DSP in digital communication, especially telecommunications, comprises
digital transmission using pulse code modulation (PCM), digital switching using Time Division
Multiplexing (TDM), echo control and digital tape recorders. DSP in telecommunication systems
are found to be cost effective due to availability of medium and large scale digital ICs. These ICs
have desirable properties such as small size, low cost, low power, immunity to noise and
reliability.
5. Spectral Analysis
Frequency-domain analysis is easily and effectively possible in digital signal processing using
fast Fourier transform (FFT) algorithms. These algorithms reduce computational complexity and
also reduce the computational time.
6. Sonar Signal Processing
Sonar stands for “Sound Navigation and Ranging”. Sonar is used to determine the range,
velocity and direction of targets that are remote from the observer. Sonar uses sound waves at
lower frequencies to detect objects under water.
7. Aviation
8. Astronomy
9. Telecommunication networks
10. Satellite communication
11. Microprocessor systems
12. Industrial noise control.

Types of Signals:
1. Continuous time Vs. Discrete-time signal
As the names suggest, this classification is determined by whether the time axis (x-axis)
is discrete (countable) or continuous. Continuous-time signals are represented by x(t), where t
denotes continuous time, and discrete-time signals or sequences are represented by x[n], where
the integer n denotes discrete time.

2. Continuous value Vs. Discrete-value Signal


In discrete-value signals, there are finitely many values on the y-axis. For example, if we look for
values between 0 and 1 V, there are infinitely many values (continuous value). But if we fix the
number of values to, say, 10 (0, 0.1, 0.2, …) or 5 (0, 0.2, 0.4, 0.6, …), then there are finitely
many values on the y-axis, and the signal is known as a discrete-value signal.
3. Periodic Vs. Non-Periodic Signal
Periodic signals repeat with some period T, while aperiodic or nonperiodic signals do not. We
can define a periodic function through the following mathematical expression, where t can be
any number and T is a positive constant:
x(t) = x(t + T)

where T is the fundamental period (the smallest positive value of T for which the above equation holds).
For discrete time signal
𝑥[𝑛] = 𝑥[𝑛 + 𝑁]
Here, N is the period, which is an integer. That means the signal repeats every N samples.
4. Causal Vs. Anticausal Vs. Noncausal
Causal signals are signals that are zero for all negative time, while anticausal are signals that
are zero for all positive time. Noncausal signals are signals that have nonzero values in both
positive and negative time.
5. Even Vs. Odd
An even signal is any signal x(t) such that x(t) = x(-t). Even signals can be easily spotted as
they are symmetric around the vertical axis. An odd signal, on the other hand, is a signal that
satisfies x(t) = -x(-t) (also known as an anti-symmetric signal).
Using the definitions of even and odd signals, we can show that any signal can be written as a
combination of an even and an odd signal. That is, every signal has an even-odd decomposition.

x(t) = xe(t) + xo(t)   (i.e. even part + odd part)

xe(t) = [x(t) + x(-t)]/2   and   xo(t) = [x(t) - x(-t)]/2
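The decomposition can be checked numerically on a finite-length discrete sequence (the sample values below are made up purely for illustration):

```python
# Even-odd decomposition of a discrete-time signal x[n], n = -3..3.
# xe[n] = (x[n] + x[-n]) / 2 and xo[n] = (x[n] - x[-n]) / 2, sample by sample.
x = {-3: 0.0, -2: 1.0, -1: 2.0, 0: 4.0, 1: -1.0, 2: 0.5, 3: 2.0}

xe = {n: (x[n] + x[-n]) / 2 for n in x}   # even part
xo = {n: (x[n] - x[-n]) / 2 for n in x}   # odd part

recombined_ok = all(abs(xe[n] + xo[n] - x[n]) < 1e-12 for n in x)
even_ok = all(abs(xe[n] - xe[-n]) < 1e-12 for n in x)   # xe[n] = xe[-n]
odd_ok = all(abs(xo[n] + xo[-n]) < 1e-12 for n in x)    # xo[n] = -xo[-n]
```

The three checks confirm that the two parts recombine to x[n] and have the claimed symmetries.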
6. Deterministic Vs. Random Signal
A deterministic signal is a signal in which each value of the signal is fixed and can be
determined by a mathematical expression, rule, or table. Because of this the future values of
the signal can be calculated from past values with complete confidence. On the other hand, a
random signal has a lot of uncertainty about its behavior. The future values of a random signal
cannot be accurately predicted and can usually only be guessed based on the averages of
sets of signals.
7. Energy Vs. Power Signal
In electrical systems, a signal may represent a voltage or current. Consider a voltage v(t)
developed across a resistor R, producing a current i(t). The instantaneous power dissipated in
this resistor is defined by
p(t) = v²(t)/R

or equivalently, p(t) = Ri²(t)
In both cases, the instantaneous power p(t) is proportional to the squared amplitude of the signal.
Furthermore, for a resistance R of 1Ω, we see that above equations take on the same mathematical
form. Accordingly, in signal analysis it is customary to define power in terms of a 1Ω resistor, so

that, regardless of whether a given signal x(t) represents a voltage or a current, we may express
the instantaneous power of the signal as,
p(t) = x²(t)

Based on this convention, we define the total energy of the continuous-time signal x(t) as

E = lim_{T→∞} ∫_{-T/2}^{T/2} x²(t) dt = ∫_{-∞}^{∞} x²(t) dt

And its average power as

P = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} x²(t) dt
In the case of a discrete-time signal x[n], the integrals are replaced by corresponding sums. Thus
the total energy of x[n] is defined by

E = ∑_{n=-∞}^{∞} |x[n]|²

And its average power is defined by

If the signal is aperiodic, P = lim_{N→∞} [1/(2N+1)] ∑_{n=-N}^{N} |x[n]|²

If the signal is periodic, P = (1/N) ∑_{n=0}^{N-1} |x[n]|²
The second expression is the average power in a periodic signal x[n] with fundamental period N.
If x[n] is real we can take |x[n]|² = x²[n].
A signal is referred to as an Energy signal, if and only if the total energy of the signal satisfies the
condition 0 < E < ∞.
On the other hand, it is referred to as a Power signal, if and only if the average power of the signal
satisfies the condition, 0 < P < ∞.
The energy and power classifications of signals are mutually exclusive. In particular, an energy
signal has zero average power, whereas a power signal has infinite energy. It is also of interest to
note that periodic signals and random signals are usually viewed as power signals, whereas signals
that are both deterministic and non-periodic are energy signals.
# Find the energy or power of the following signals:

# x[n] = (1/2)ⁿ u[n]

The given signal is an aperiodic signal and we will find energy as,

E = ∑_{n=-∞}^{∞} |x[n]|² = ∑_{n=-∞}^{∞} |(1/2)ⁿ u[n]|² = ∑_{n=0}^{∞} (1/2)^{2n} = 1/(1 - (1/2)²) = 4/3
# x[n] = cos(πn/4)

The given signal is a periodic signal with period N = 8 and we will find power as,

P = (1/N) ∑_{n=0}^{N-1} x²[n] = (1/8) ∑_{n=0}^{7} cos²(πn/4)

= (1/8) ∑_{n=0}^{7} [1 + cos(πn/2)]/2 = (1/8) ∑_{n=0}^{7} 1/2 + (1/16) ∑_{n=0}^{7} cos(πn/2) = (1/8)(8/2) + (1/16)(0) = 1/2
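Both results can be confirmed numerically. In this sketch the infinite energy sum is truncated at 200 terms (an arbitrary choice; the geometric tail is negligible):

```python
import math

# Energy of x[n] = (1/2)^n u[n]: partial sums of (1/4)^n converge to 4/3
E = sum((0.5 ** n) ** 2 for n in range(200))

# Average power of the periodic x[n] = cos(pi*n/4) over one period N = 8
N = 8
P = sum(math.cos(math.pi * n / 4) ** 2 for n in range(N)) / N
```

E comes out as 4/3 (an energy signal) and P as 1/2 (a power signal), matching the closed forms above.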

Continuous-Time Sinusoidal Signals


A simple harmonic oscillation is mathematically described by the following continuous-time
sinusoidal signal:
xa(t) = Acos(Ωt + θ), -∞ < t < ∞
This signal is completely characterized by three parameters: amplitude A of the sinusoid,
frequency Ω in radians per second, and phase θ in radians. Instead of Ω, we often use the frequency

F in cycles per second or hertz (Hz), where Ω = 2πF.

In terms of F the sinusoid becomes: xa(t) = Acos(2πFt + θ), -∞ < t<∞


The analog sinusoidal signal is characterized by the following properties:

1. For every fixed value of the frequency F, xa(t) is periodic: xa(t + Tp) = xa(t),
where Tp = 1/F is the fundamental period of the sinusoidal signal.
2. Continuous-time sinusoidal signals with distinct frequencies are themselves distinct.
3. Increasing the frequency F results in an increase in the rate of oscillation of the signal, in
the sense that more periods are included in a given time interval.

The relationships we have described for sinusoidal signals carry over to the class of complex
exponential signals

xa(t) = Ae^{j(Ωt + θ)}
By definition, frequency is an inherently positive physical quantity. This is obvious if we interpret
frequency as the number of cycles per unit time in a periodic signal. However, in many cases, purely
for mathematical convenience, we introduce negative frequencies.

Hence the frequency range for analog sinusoids is -∞ < F < ∞.


Discrete-Time Sinusoidal Signals
A discrete-time sinusoidal signal may be expressed as:
x[n] = Acos(ωn + θ), -∞ < n < ∞
Where n is an integer variable, called the sample number, A is the amplitude of the sinusoid, ω is
the frequency in radians per sample, and θ is the phase in radians. Instead of ω, we often use the

frequency variable f defined by ω = 2πf.



In terms of f the sinusoid becomes: x[n] = Acos(2πfn + θ), -∞ < n < ∞


The frequency f has dimensions of cycles per sample.
In contrast to continuous-time sinusoids, the discrete-time sinusoids are characterized by the
following properties:

1. A discrete-time sinusoid is periodic only if its frequency f is a rational number:


By definition, a discrete-time signal x[n] is periodic with period N (>0) iff
x[n + N] = x[n] for all n
The smallest value of N for which the above equation is true is called the fundamental period.
Proof: For a sinusoid with frequency f0 to be periodic, we should have
cos[2πf0(n + N) + θ] = cos[2πf0n + θ]
This relation is true if and only if there exists an integer k such that
2πf0N = 2kπ => f0 = k/N
Hence, a discrete-time sinusoidal signal is periodic only if its frequency f0 can be expressed as the
ratio of two integers (i.e. f0 is rational). To determine the fundamental period N of a periodic
sinusoid, we express its frequency f0 as the ratio of two integers and cancel common factors so
that k and N are relatively prime. Then the fundamental period of the sinusoid is equal to N.
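This test is easy to automate: reducing f0 = k/N to lowest terms gives the fundamental period directly. A small sketch using Python's Fraction, with f0 = 3/10 taken from the cos(0.6πn) example below:

```python
import math
from fractions import Fraction

def fundamental_period(f0: Fraction) -> int:
    """Fundamental period N of cos(2*pi*f0*n): write f0 = k/N in lowest terms."""
    return f0.denominator          # Fraction cancels common factors on creation

f0 = Fraction(3, 10)               # e.g. x[n] = cos(0.6*pi*n) has f0 = 3/10
N = fundamental_period(f0)

# Check x[n + N] = x[n] over a few samples
periodic = all(
    abs(math.cos(2 * math.pi * float(f0) * (n + N))
        - math.cos(2 * math.pi * float(f0) * n)) < 1e-9
    for n in range(20)
)
```

For f0 = 3/10 the fundamental period is N = 10, as claimed in the example.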

2. Discrete-time sinusoids whose frequencies are separated by an integer multiple of 2π are


identical.
Proof: Let us consider the sinusoid cos[ω0n + θ]. It easily follows that
cos[(ω0 + 2π)n + θ] = cos[ω0n + 2πn + θ] = cos[ω0n + θ]
As a result, all sinusoidal sequences
xk[n] = Acos[ωkn + θ] k = 0, 1, 2, 3, ……
where ωk = ω0 + 2πk, -π ≤ ω0 ≤ π
are indistinguishable (i.e. identical). On the other hand, the sequences of any two sinusoids with
frequencies in the range -π ≤ ω ≤ π, or -½ ≤ f ≤ ½, are distinct. Consequently,
discrete-time sinusoidal signals with frequencies |ω| ≤ π or |f| ≤ ½ are unique. Any sequence
resulting from a sinusoid with a frequency |ω| > π, or |f| > ½, is identical to a sequence obtained
from a sinusoidal signal with frequency |ω| < π. Thus we regard frequencies in the range -π ≤ ω ≤
π, or -½ ≤ f ≤ ½, as unique and all frequencies |ω| > π, or |f| > ½, as aliases1.

# x[n] = cos(0.6πn) => f=0.3=3/10, rational number hence periodic.


# x[n] = cos(0.8n) => f = 0.4/π, irrational number hence aperiodic.

1In signal processing and related disciplines, aliasing refers to an effect that causes different signals to become indistinguishable
(or aliases of one another) when sampled. It also refers to the distortion or artifact that results when the signal reconstructed from
samples is different from the original continuous signal.


3. The highest rate of oscillation in a discrete-time sinusoid is attained when ω = π (or ω = -π)
or, equivalently, f = ½ (or f = -½).
To illustrate this property, let us investigate the characteristics of the sinusoidal signal sequence
x[n] = cosω0n
when the frequency varies from 0 to π. To simplify the argument, we take values of ω0 = 0, π/8,
π/4, π/2, π corresponding to f = 0, 1/16, 1/8, ¼, ½, which result in periodic sequences having periods
N = ∞, 16, 8, 4, 2. We note that the period of the sinusoid decreases (or the rate of oscillation
increases) as the frequency increases.

To see what happens for π ≤ ω0 ≤ 2π, we consider the sinusoids with frequencies ω1 = ω0 and ω2
= 2π – ω0. Note that as ω1 varies from π to 2π, ω2 varies from π to 0. It can be easily seen that

x1[n] = Acosω1n = Acosω0n


x2[n] = Acosω2n = Acos(2π -ω0)n = Acos(-ω0n) = x1[n]

Hence ω2 is an alias of ω1. If we had used a sine function instead of a cosine function, the result
would basically be the same, except for a 180° phase difference between the sinusoids x1[n] and
x2[n]. In any case, as we increase the relative frequency ω0 of a discrete-time sinusoid from π to
2π, its rate of oscillation decreases. For ω0 = 2π the result is a constant signal, as in the case of ω0
= 0. Obviously, for ω0 = π (or f = ½) we have the highest rate of oscillation.

Since discrete-time sinusoidal signals with frequencies that are separated by an integer multiple
of 2π are identical, it follows that the frequencies in any interval ω1 ≤ ω ≤ ω1 + 2π constitute all
the existing discrete-time sinusoids or complex exponentials. Hence the frequency range for
discrete-time sinusoids is finite with duration 2π. Usually, we choose the range 0 ≤ ω ≤ 2π or –π
≤ ω ≤ π (0 ≤ f ≤ 1, -½ ≤ f ≤ ½), which we call the fundamental range.

Harmonically Related Complex Exponentials

These are sets of periodic complex exponentials with fundamental frequencies that are multiple
of a single positive frequency.

sk(t) = e^{jkΩ0t} = e^{jk2πF0t},   k = 0, ±1, ±2, …

Fundamental period = 1/(kF0) = Tp/k

Similarly for discrete-time, sk[n] = e^{j2πkf0n}


Analog to Digital Convertor (ADC):

1. Sampling: Continuous-time signal to discrete-time signal

x(t) ===> x(nT) ≡ x[n], T is the sampling interval.

2. Quantization: Discrete-time continuous value signal to discrete-time discrete-value signal.

x[n] ===> xq[n]. Quantization error, qe = x[n] - xq[n].

3. Coding: In the coding process, each discrete value xq[n] is represented by a b-bit binary
sequence.
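The three steps can be sketched for a short sinusoid. The signal, the sampling rate Fs = 1000 Hz, and the 4-bit word length are assumed example values:

```python
import math

# The three ADC steps on x(t) = 0.9*cos(2*pi*50*t)
Fs, b = 1000, 4
T = 1 / Fs
levels = 2 ** b
delta = 2 / (levels - 1)                       # step size over the range [-1, 1]

# 1. Sampling: x[n] = x(nT)
x = [0.9 * math.cos(2 * math.pi * 50 * n * T) for n in range(32)]

# 2. Quantization: round each sample to the nearest of 2^b levels
xq = [round((s + 1) / delta) * delta - 1 for s in x]
qe = [s - q for s, q in zip(x, xq)]            # quantization error qe = x[n] - xq[n]

# 3. Coding: each quantized value as a b-bit binary word
codes = [format(round((q + 1) / delta), f"0{b}b") for q in xq]
```

The quantization error never exceeds half a step, |qe| ≤ Δ/2, and every code word is exactly b bits long.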

Sampling of Analog Signal:

We limit our discussion to periodic or uniform sampling.

𝑥(𝑡) = 𝐴𝑐𝑜𝑠(2𝜋𝐹𝑡 + 𝜃)

𝑥[𝑛] ≅ 𝑥(𝑛𝑇), −∞ < 𝑛 < ∞


t = nT = n/Fs

Sampling periodically at a rate Fs = 1/T samples/sec

x(nT) ≅ x[n] = Acos(2πFnT + θ) = Acos(2π(F/Fs)n + θ)

If we compare with discrete-time sinusoid 𝑥[𝑛] = 𝐴𝑐𝑜𝑠(2𝜋𝑓𝑛 + 𝜃) we get,

The quantity

f = F/Fs,   or   ω = ΩT,

is called the relative or normalized frequency. We can use f to determine the frequency F in hertz only
if the sampling frequency Fs is known.

Recall,

-∞ < F < ∞   and   -1/2 ≤ f ≤ 1/2 cycles/sample


-∞ < Ω < ∞   and   -π ≤ ω ≤ π radians/sample

We find that the frequency of the continuous-time sinusoid when sampled at a rate Fs= 1/T must
fall in the range

-1/(2T) = -Fs/2 ≤ F ≤ Fs/2 = 1/(2T)

This maps the infinite frequency range of F onto the finite frequency range of the variable f. Since
the highest frequency in a discrete-time signal is ω = π, or f = 1/2, the corresponding highest value
of F at sampling rate Fs is

Fmax = Fs/2 = 1/(2T)

Some terms related to sampling of analog signals:

Sampling Frequency, Fs: It is the number of samples per second while converting continuous-
time signal to discrete-time signal.

Nyquist Criteria, Fs ≥ 2Fmax: It is the criteria that has to be fulfilled for the reconstruction of
signal from discrete-time signal. Fs is the sampling frequency and Fmax is the maximum frequency
contained in continuous-time signal.

Nyquist Rate, FN = 2Fmax: It is the minimum sampling frequency required for the proper
reconstruction of the signal.

Folding Frequency (or Nyquist Frequency), Fs/2: The highest frequency that can be
reconstructed or measured using discretely sampled data. It is half the sampling frequency.
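These definitions reduce to two one-line helpers; the 50 Hz value used to exercise them is Fmax for the signal x(t) = 3cos(100πt) discussed below:

```python
def nyquist_rate(F_max):
    """Minimum sampling frequency for proper reconstruction: FN = 2*Fmax."""
    return 2 * F_max

def folding_frequency(Fs):
    """Highest frequency representable at sampling rate Fs: Fs/2."""
    return Fs / 2

FN = nyquist_rate(50)        # Fmax = 50 Hz -> Nyquist rate 100 Hz
Ff = folding_frequency(200)  # sampling at 200 Hz -> folding frequency 100 Hz
```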

# Consider the two analog sinusoidal signals

𝑥1 (𝑡) = cos(2𝜋(10)𝑡) &𝑥2 (𝑡) = cos(2𝜋(50)𝑡)

Sample the two signals at a rate Fs = 40 Hz and find the discrete-time signals obtained.

[Hint: The two sinusoidal signals are identical & consequently indistinguishable. We say that the
frequency F2 = 50 Hz is an alias of the frequency F1 = 10 Hz at the sampling rate of 40 Hz]
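The hint can be verified numerically by sampling both sinusoids and comparing the sequences:

```python
import math

Fs = 40                                                  # sampling rate, Hz
n = range(16)
x1 = [math.cos(2 * math.pi * 10 * k / Fs) for k in n]    # F1 = 10 Hz
x2 = [math.cos(2 * math.pi * 50 * k / Fs) for k in n]    # F2 = 50 Hz

# f1 = 10/40 = 1/4; f2 = 50/40 = 5/4, which aliases to 5/4 - 1 = 1/4,
# so the two sampled sequences are identical
identical = all(abs(a - b) < 1e-9 for a, b in zip(x1, x2))
```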

# Consider the analog signal

𝑥(𝑡) = 3𝑐𝑜𝑠100𝜋𝑡

a. Determine the minimum sampling rate required to avoid aliasing.


b. Suppose that the signal is sampled at Fs = 200 Hz, What is the discrete-time signal obtained
after sampling?
c. Suppose that the signal is sampled at the rate Fs = 75 Hz, what is the discrete-time signal
obtained after sampling?
d. What is the frequency 0 < F < Fs/2 of a sinusoid that yields samples identical to those
obtained in part(c) ?

# Consider the analog signal

𝑥(𝑡) = 3 cos 2000𝜋𝑡 + 5 sin 6000𝜋𝑡 + 10 cos 12000𝜋𝑡

a. What is the Nyquist rate for this signal?


b. Assume now that we sample this signal using a sampling rate Fs = 5000 samples/s. What
is the discrete-time signal obtained after sampling?
What is the analog signal y(t) we can reconstruct from the samples if we use ideal interpolation?
# A digital communication link carries binary-coded words representing samples of an input
signal,
𝑥(𝑡) = 3𝑐𝑜𝑠600𝜋𝑡 + 2𝑐𝑜𝑠1800𝜋𝑡
The link is operated at 10,000 bits/s and each input sample is quantized into 1024 different voltage
levels.
i. What is the sampling frequency & folding frequency?
ii. What is the Nyquist rate for the signal x(t) ?
iii. What are the frequencies in the resulting discrete time signal x[n] ?
iv. What is the resolution ∆ ?
Solution: 

As the link is operated at 10,000 bits/s and each input sample is quantized into 1024 different voltage
levels, each sampled value is represented by log₂1024 = 10 bits/sample.

i. Maximum sampling frequency, Fs = (10,000 bits/s)/(10 bits/sample) = 1000 samples/s.

Folding frequency (the maximum frequency that can be represented uniquely by the
sampled signal), Fs/2 = 500 Hz.

ii. 𝑥(𝑡) = 3𝑐𝑜𝑠600𝜋𝑡 + 2𝑐𝑜𝑠1800𝜋𝑡

Here, F1 = 300 Hz and F2 = 900 Hz. Thus Fmax = 900 Hz.

The Nyquist rate, FN = 2Fmax = 1800 Hz.

iii. For Fs = 1000 Hz,



x[n] ≅ x(nT) = x(n/Fs) = 3cos2π(300/1000)n + 2cos2π(900/1000)n

= 3cos2π(3/10)n + 2cos2π(9/10)n

= 3cos2π(3/10)n + 2cos2π(1 - 1/10)n

= 3cos2π(3/10)n + 2cos2π(1/10)n

∴ f1 = 3/10 and f2 = 1/10

Here both frequencies f1 and f2 lie in the interval -1/2 ≤ f ≤ 1/2.

iv. ADC resolution = 10 bits

Voltage resolution, Δ = (xmax - xmin)/(L - 1) = (5 - (-5))/(1024 - 1) = 9.76 mV
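Parts (iii) and (iv) can be checked numerically; the ±5 V input range is the one assumed in the Δ computation:

```python
import math

# Part (iii): at Fs = 1000 Hz the 900 Hz component aliases to f = 1/10
n = range(20)
orig = [2 * math.cos(2 * math.pi * (9 / 10) * k) for k in n]
alias = [2 * math.cos(2 * math.pi * (1 / 10) * k) for k in n]
alias_ok = all(abs(a - b) < 1e-9 for a, b in zip(orig, alias))

# Part (iv): resolution for L = 1024 levels over a +/-5 V range
delta = (5 - (-5)) / (1024 - 1)          # ~9.76 mV
```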

Discrete-time signals:
Digital signals are discrete in both time (the independent variable) and amplitude (the dependent
variable). Signals that are discrete in time but continuous in amplitude are referred to as discrete-
time signals. Discrete-time signals are data sequences. A sequence of data is denoted by {x[n]} or
simply x[n] when the meaning is clear. The elements of the sequence are called samples. The
index n associated with each sample is an integer.
A discrete-time (D.T.) signal x[n] is a function of an independent variable that is an integer. It is
important to note that a D.T. signal is not defined at instants between two successive samples.
Also, it is incorrect to think that x[n] is equal to zero if n is not an integer; simply, the signal x[n]
is not defined for non-integer values of n.

Representation of D.T. Signals:


1) Tabular representation

n    | -1 | 0 | 1 | 2 | 3
x[n] |  1 | 2 | 1 | 3 | 2
2) Graphical representation


3) Functional representation
         1  for n = −1, 1
x[n] =   2  for n = 0, 3
         3  for n = 2
         0  elsewhere
4) Sequence representation
x[n] = {1, 2, 1, 3, 2}, where an arrow "↑" is placed under the sample at n = 0 (here the second
sample, x[0] = 2). The arrow is often omitted if it is clear from the context which sample is x[0].
Sample values can either be real or complex. The terms “discrete-time signals” and
“sequences” are used interchangeably.

Some Elementary D.T. signals:


Elementary discrete-time signals are signals that can be used as building blocks to construct
more complex signals and are normally used in the study of signals and systems. Conversely, we
can decompose complex signals into elementary signals and analyze the behavior of signals and
systems. The following are the discrete-time elementary signals we study in this course:
1) The unit sample sequence, also called the unit impulse or Kronecker delta, is denoted δ[n]
and is defined as

   δ[n] = 1, for n = 0
          0, for n ≠ 0
Note: The analog signal δ(t) or Dirac delta is defined to be zero everywhere except at t=0 and
has unit area.

2) The unit step signal is denoted as u[n] and is defined as

   u[n] = 1, for n ≥ 0
          0, for n < 0

# Prove that

i.  u[n] = Σ (k = 0 to ∞) δ[n − k]
ii. δ[n] = u[n] − u[n − 1]
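Both identities can also be checked numerically on a finite range. A small Python sketch (not part of the original notes; the infinite sum is truncated at k = 100):

```python
def delta(n):
    # Unit sample (Kronecker delta)
    return 1 if n == 0 else 0

def u(n):
    # Unit step
    return 1 if n >= 0 else 0

for n in range(-5, 6):
    # i.  u[n] = sum_{k=0}^{inf} delta[n-k]  (truncated at k = 100)
    assert sum(delta(n - k) for k in range(100)) == u(n)
    # ii. delta[n] = u[n] - u[n-1]
    assert delta(n) == u(n) - u(n - 1)
```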

3) The unit ramp signal is denoted as ur[n] and is defined as

   ur[n] = n, for n ≥ 0
           0, for n < 0

Fig: (a) Unit impulse (b) Unit Step (c) Unit Ramp sequences.

4) Sinusoidal Signal
x[n] = Acos(ωn + θ), −∞ < n < ∞
where n is an integer variable called the sample number, A is the amplitude of the
sinusoid, ω is the frequency in radians per sample, and θ is the phase in radians.
5) Exponential Signal
The exponential signal is a sequence of the form
x[n] = an for all n.
If the parameter a is real, then x[n] is a real signal. When the parameter a is complex-valued,
write
    a = re^(jθ)    (r and θ are now the parameters)
Hence, x[n] = r^n e^(jθn) = r^n (cos θn + j sin θn)
The real part is xR[n] = r^n cos θn and the imaginary part is xI[n] = r^n sin θn.
Alternatively, the complex signal x[n] can be represented by its amplitude and phase functions:
    |x[n]| = A(n) = r^n
    ∠x[n] = φ(n) = θn
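A quick numerical check of the amplitude and phase functions; the parameter values r = 0.9 and θ = π/6 are assumptions for illustration:

```python
import cmath
import math

r, theta = 0.9, math.pi / 6       # assumed example parameters

def x(n):
    # x[n] = r^n * e^(j*theta*n)
    return (r ** n) * cmath.exp(1j * theta * n)

n = 4
s = x(n)
# Real and imaginary parts: r^n cos(theta*n) and r^n sin(theta*n)
assert abs(s.real - r ** n * math.cos(theta * n)) < 1e-12
assert abs(s.imag - r ** n * math.sin(theta * n)) < 1e-12
# Amplitude |x[n]| = r^n; phase = theta*n (here within (-pi, pi])
assert abs(abs(s) - r ** n) < 1e-12
assert abs(cmath.phase(s) - theta * n) < 1e-12
```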
6) Sigmoid Function
The sigmoid function is one of the most commonly used activation functions. It is denoted by
σ(x) and given as
σ(x) = 1/(1 + e^(−x)) = e^x/(1 + e^x) = 1 − σ(−x)

This function maps any real number into the range (0, 1). It is commonly used in neural
networks as an activation function, where small input values produce outputs close to 0 and
large input values produce outputs close to 1. It is also known as the logistic function.
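A minimal Python sketch of the logistic function and the properties stated above:

```python
import math

def sigmoid(x):
    # sigma(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

assert abs(sigmoid(0.0) - 0.5) < 1e-12
# Identity sigma(x) = 1 - sigma(-x)
for v in (-3.0, -0.5, 1.0, 4.0):
    assert abs(sigmoid(v) - (1.0 - sigmoid(-v))) < 1e-12
# Small inputs -> near 0, large inputs -> near 1
assert sigmoid(-10.0) < 1e-3 and sigmoid(10.0) > 1 - 1e-3
```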


Classification of Discrete-time signals:


1) Energy signals & power signals:

The energy of a signal x[n] is defined as

    E = Σ (n = −∞ to ∞) |x[n]|²

If E is finite (i.e. 0 < E < ∞), then x[n] is an energy signal.

Many signals that possess infinite energy have a finite average power. The average power of a
periodic sequence with a period of N samples is defined as

    P = (1/N) Σ (n = 0 to N−1) |x[n]|²

For non-periodic sequences, it is defined in terms of the following limit, if it exists:

    P = lim (N→∞) 1/(2N+1) Σ (n = −N to N) |x[n]|²

A signal with finite average power is called a power signal.


# Find the average power of the unit step sequence u[n].
The unit step sequence is non-periodic, therefore the average power is

    P = lim (N→∞) 1/(2N+1) Σ (n = −N to N) u²[n]

      = lim (N→∞) (N + 1)/(2N + 1) = 1/2

Therefore, the unit step sequence is a power signal. Note that its energy is infinite and so it is
not an energy signal.
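The convergence of (N+1)/(2N+1) toward 1/2 can be observed numerically. A short Python sketch (illustrative, not from the notes):

```python
def avg_power(x, N):
    # P ≈ 1/(2N+1) * sum_{n=-N}^{N} |x[n]|^2
    return sum(abs(x(n)) ** 2 for n in range(-N, N + 1)) / (2 * N + 1)

u = lambda n: 1 if n >= 0 else 0

for N in (10, 100, 10_000):
    # The truncated average equals (N+1)/(2N+1), which approaches 1/2
    assert abs(avg_power(u, N) - (N + 1) / (2 * N + 1)) < 1e-12

assert abs(avg_power(u, 10_000) - 0.5) < 1e-4
```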
2) Periodic signals & Aperiodic signals:
The sinusoidal signal of the form x[n] = A sin(2πf0 n) is periodic when f0 is a rational number,
that is, when f0 can be expressed as f0 = k/N, where k and N are integers; N is the fundamental
period (provided k and N have no common factors).
3) Symmetric(even) and antisymmetric(odd) signals:
Even: x[−n] = x[n]     Even part: xe[n] = ½ {x[n] + x[−n]}
Odd:  x[−n] = −x[n]    Odd part:  xo[n] = ½ {x[n] − x[−n]}
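The even/odd decomposition can be sketched in Python for a finite-support sequence stored as a dict n → x[n] (this representation, and the test sequence, are assumptions for illustration):

```python
def even_odd(x):
    # Split x into xe[n] = (x[n] + x[-n])/2 and xo[n] = (x[n] - x[-n])/2
    support = set(x) | {-n for n in x}     # make the index set symmetric
    get = lambda n: x.get(n, 0)
    xe = {n: 0.5 * (get(n) + get(-n)) for n in support}
    xo = {n: 0.5 * (get(n) - get(-n)) for n in support}
    return xe, xo

x = {-1: 1, 0: 2, 1: 1, 2: 3}              # a short test sequence
xe, xo = even_odd(x)
for n in xe:
    assert xe[n] == xe[-n]                 # even symmetry
    assert xo[n] == -xo[-n]                # odd antisymmetry
    assert xe[n] + xo[n] == x.get(n, 0)    # x = xe + xo
```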

Transformation of the independent variable (time)


1) Shifting:
A signal x[n] may be shifted in time by replacing the independent variable n by n − k, where k
is an integer. If k is positive, the shift corresponds to a delay of the signal by k units of time;
if k is negative, it corresponds to an advance of the signal by k units in time.
2) Folding or a reflection of the signal about the time origin n=0:

A signal x[n] may be folded by replacing the independent variable n by –n.


# Prove that the operations of folding and time shifting a signal are not commutative.
Let TDk denote a time delay by k samples and FD denote folding. Then
TDk[x[n]] = x[n−k], k > 0
FD[x[n]] = x[−n]
Now, TDk[FD{x[n]}] = TDk[x[−n]] = x[−n+k]
whereas FD[TDk{x[n]}] = FD{x[n−k]} = x[−n−k]
Since x[−n+k] ≠ x[−n−k] in general, the two operations do not commute.
Note: shifting the folded sequence x[−n] gives x[−n+k]; if k is positive this is a delay, and
if k is negative it is an advance.
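The non-commutativity is easy to see numerically. A Python sketch over a small test sequence (sequence and names are illustrative):

```python
x = {0: 1, 1: 2, 2: 3}            # x[n], zero elsewhere
get = lambda n: x.get(n, 0)
k = 2                             # delay amount

# Fold first, then delay by k:  x[-n] -> x[-(n-k)] = x[-n+k]
fold_then_shift = {n: get(-n + k) for n in range(-6, 7)}
# Delay first, then fold:       x[n-k] -> x[-n-k]
shift_then_fold = {n: get(-n - k) for n in range(-6, 7)}

assert fold_then_shift != shift_then_fold   # the operations do not commute
```

Here fold-then-shift places the nonzero samples at n = 0, 1, 2, while shift-then-fold places them at n = −2, −3, −4.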
3) Time scaling or down-sampling:
Replace n by µn, where µ is an integer.
If the signal x[n] was originally obtained by sampling an analog signal xa(t), then x[n] = xa(nT),
where T is the sampling interval. Now, y[n] = x[2n] = xa(2Tn). Hence the time-scaling
operation is equivalent to changing the sampling rate from 1/T to 1/(2T), i.e. decreasing the rate
by a factor of 2. This is a downsampling operation. Upsampling by this transformation alone is
not possible, as we cannot obtain y[n] = x[n/2] from the signal x[n].
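Downsampling by 2 can be sketched in one line of Python (illustrative):

```python
x = list(range(10))                          # x[n] = n, for n = 0..9
y = [x[2 * n] for n in range(len(x) // 2)]   # y[n] = x[2n]
assert y == [0, 2, 4, 6, 8]                  # every second sample kept
```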

Note: Amplitude modification includes addition, multiplication, and scaling of D.T. signals.

Discrete-time systems:
In many applications of digital signal processing we wish to design a device or an algorithm that
performs some prescribed operation on a discrete-time signal. Such a device or algorithm is called
a discrete-time system.
A discrete-time system is a device or algorithm that operates on a discrete-time signal, called the
input or excitation, according to some well-defined rule, to produce another discrete-time signal
called the output or response of the system.
y[n] = T{x[n]}

We say that the input signal x[n] is transformed by the system into a signal y[n].

x[n] ──► Discrete-time System ──► y[n]
(input signal or excitation)      (output signal or response)

Block Diagram Representation of DT Systems:


(Basic building blocks that can be interconnected to form complex systems)

 An adder: y[n] = x1[n] + x2[n]

 A constant multiplier: y[n] = a x[n]

 A signal multiplier: y[n] = x1[n] x2[n]

 A unit delay element (z^−1): y[n] = x[n − 1]

 A unit advance element (z): y[n] = x[n + 1]

# Use basic building blocks to represent the following DT system described by the input-output
relation:
y[n] = 0.25y[n−1] + 0.5x[n] + 0.5x[n−1]
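The interconnection of adders, multipliers, and unit delays computes exactly this relation sample by sample. A Python sketch (relaxed initial conditions assumed, i.e. y[−1] = x[−1] = 0):

```python
def run_system(x):
    # y[n] = 0.25*y[n-1] + 0.5*x[n] + 0.5*x[n-1], with y[-1] = x[-1] = 0
    y = []
    y_prev, x_prev = 0.0, 0.0
    for xn in x:
        yn = 0.25 * y_prev + 0.5 * xn + 0.5 * x_prev
        y.append(yn)
        y_prev, x_prev = yn, xn       # contents of the two unit delays
    return y

# First few samples of the impulse response (input = delta[n])
h = run_system([1.0, 0.0, 0.0, 0.0])
assert h[:3] == [0.5, 0.625, 0.15625]
```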

Classification of Discrete-time systems:

1) Static (memoryless) vs. dynamic (with memory) systems: -


If the output of a D.T. system at any instant n depends at most on the input sample at the same
instant, but not on past or future samples of the input, then it is known as a static system.
In any other case, the system is said to be dynamic.
y[n] = x[n − N], N ≥ 0
The system is said to have memory of duration N:

If N = 0, the system is static


0<N<∞, the system is said to have finite memory.
N = ∞, the system is said to have infinite memory.

2) Time-invariant Vs. time-variant system: -


A system is called time-invariant if its input-output characteristics do not change with time.
Theorem:
A relaxed system T is time-invariant or shift-invariant if and only if x[n] → y[n]
implies x[n−k] → y[n−k]
for every input signal x[n] and every time shift k.
To test this, we compute
y[n, k] = T{x[n−k]}, the response to the delayed input, and
y[n−k], the delayed response,
and check whether y[n, k] = y[n−k] for all possible values of k.
If this holds, the system is time-invariant;
if y[n, k] ≠ y[n−k], even for one value of k, the system is time-variant.
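The test above can be carried out mechanically. A Python sketch comparing y[n, k] with y[n−k] for one time-invariant and one time-variant system (both example systems are illustrative, not from the text):

```python
# System 1: y[n] = x[n] - x[n-1]  -> time-invariant
T1 = lambda x, n: x(n) - x(n - 1)
# System 2: y[n] = n * x[n]       -> time-variant
T2 = lambda x, n: n * x(n)

x = lambda n: 1 if 0 <= n <= 3 else 0    # a short test pulse
k = 2
xk = lambda n: x(n - k)                  # delayed input
ns = range(-5, 10)

for T, invariant in ((T1, True), (T2, False)):
    y_nk = [T(xk, n) for n in ns]        # y[n, k]: response to x[n-k]
    y_shift = [T(x, n - k) for n in ns]  # y[n-k]: delayed response
    assert (y_nk == y_shift) == invariant
```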

3) Linear Vs. nonlinear system: -


A linear system is one that satisfies the superposition principle, which states that a system is
linear if and only if
T{a1x1[n] + a2x2[n]} = a1T{x1[n]} + a2T{x2[n]}
for any arbitrary input sequences x1[n] and x2[n], and any arbitrary constants a1 and a2.

4) Causal Vs. noncausal system: -


Theorem: “A system is said to be causal if the output of the system at any time n (i.e. y[n])
depends only on present and past inputs but does not depend on future inputs.”
If a system does not satisfy this definition, it is called noncausal.

5) Stable Vs. unstable system: -


An arbitrary relaxed system is said to be bounded-input bounded-output (BIBO) stable if and
only if every bounded input produces a bounded output.
Mathematically: if |x[n]| ≤ Mx < ∞,
then there must be |y[n]| ≤ My < ∞ for all n for the system to be stable, where Mx and My are
finite numbers.

Interconnection of Discrete-time systems:


Discrete-time systems can be interconnected to form larger systems. There are two basic ways
in which systems can be interconnected:
1. Cascade (Series)
2. Parallel


Fig: (a) Cascade interconnection: x[n] → T1 → y1[n] → T2 → y[n]; the combined system is Tc.
(b) Parallel interconnection: x[n] drives T1 and T2 in parallel; their outputs y1[n] and y2[n]
are added to form y[n]; the combined system is Tp.

In the cascade interconnection, the output of the first system is

y1[n] = T1{x[n]}
and the output of the second system is
y[n] = T2{y1[n]} = T2[T1{x[n]}]
We observe that system T1 and T2 can be combined or consolidated into a single overall system
Tc ≡ T2T1
We can express the output of the combined system as,
y[n] = Tc{x[n]}
In general, for arbitrary systems,
T2T1 ≠ T1T2
However, if the systems T1 and T2 are linear and time-invariant, then (a) Tc is time invariant and
(b) T2T1 = T1T2
Proof of (a)
Suppose that T1 and T2 are time-invariant. Then
x[n−k] →(T1)→ y1[n−k]
y1[n−k] →(T2)→ y[n−k]
Thus, x[n−k] →(Tc)→ y[n−k],
and therefore Tc is time-invariant.
In the parallel interconnection, the output of the system T1 is y1[n] and the output of the system
T2 is y2[n]. Hence,
y[n] = y1[n] + y2[n] = T1{x[n]} + T2{x[n]} = (T1 + T2){x[n]} = Tp{x[n]}
where Tp ≡ T1 + T2.
In general, we can use parallel and cascade interconnection of systems to construct larger, more
complex systems. Conversely, we can take a larger system and break it down into smaller
subsystems for purposes of analysis and implementation.

Analysis of Discrete-time linear time-invariant(LTI) systems:


There are two basic methods for analyzing the behavior or response of a linear system to a given
input signal.

 Convolution method:
In this method, the input signal is first decomposed or resolved into a sum of elementary
signals. Then, using the linearity property of the system, the responses of the system to the
elementary signals are added to obtain the total response of the system to the given input
signal.

 Linear constant coefficient difference (LCCD) equation method:


This method is based on the direct solution of the input-output equation for the system, and
has the form
y[n] = F{y[n−1], y[n−2], …, y[n−N], x[n], x[n−1], x[n−2], …, x[n−M]}
Specifically, for an LTI system, the general form of the input-output relationship is

y[n] = −Σ (k = 1 to N) ak y[n − k] + Σ (k = 0 to M) bk x[n − k]
Where {ak} and {bk} are constant parameters that specify the system and are independent of
x[n] and y[n].

Resolution of a Discrete-time signal into impulses:


Suppose we have an arbitrary signal x[n] that we wish to resolve into a sum of unit sample
sequences. We select the elementary signals xk[n] to be
xk[n] = δ[n-k]
where k represents the delay of the unit sample sequence.
Now suppose that we multiply the two sequences x[n] and δ[n-k], since δ[n-k] is zero
everywhere except at n = k, where its value is unity, the result of this multiplication is another
sequence that is zero everywhere except at n = k, where its value is x[k]. Thus
x[n]δ[n-k] = x[k]δ[n-k]
If we were to repeat the multiplication of x[n] with δ[n-m] where m is another delay (m≠k), the
result will be a sequence that is zero everywhere except at n = m, where its value is x[m]. Hence,
x[n]δ[n-m] = x[m]δ[n-m]
Consequently, if we repeat this multiplication over all possible delays -∞ < k < ∞, and sum all
the product sequences, the result will be a sequence equal to the sequence x[n], that is,

x[n] = Σ (k = −∞ to ∞) x[k]δ[n − k]
The right-hand side of the equation gives the resolution, or decomposition, of any arbitrary
signal x[n] into a weighted (scaled) sum of shifted unit sample sequences.
Response of LTI systems to arbitrary inputs: The Convolution Sum
We know,

x[n] = Σ (k = −∞ to ∞) x[k]δ[n − k]
First, we denote the response y[n ,k] of the system to the input unit sample sequence at n = k by
the special symbol h[n, k]; -∞ < k < ∞, i.e.
y[n, k] = h[n, k] = T{δ[n-k]}
we note that n is the time index and k is a parameter showing the location of the input impulse.
If the impulse at the input is scaled by an amount Ck = x[k], the response of the system is the
corresponding scaled output, that is
Ckh[n, k] = x[k]h[n, k]
The response to any arbitrary input expressed as a sum of weighted impulses, as given above, is

y[n] = T{x[n]} = T{ Σ (k = −∞ to ∞) x[k]δ[n − k] }

     = Σ (k = −∞ to ∞) x[k] T{δ[n − k]}


     = Σ (k = −∞ to ∞) x[k]h[n, k]

where h[n, k] is the response to the unit impulse δ[n − k], −∞ < k < ∞.
If the response of the LTI system to the unit sample sequence δ[n] is denoted by h[n], i.e.
h[n] ≡ T{δ[n]}
then by the time-invariant property, the response of the system to the delayed unit sample
sequence δ[n-k] is,
h[n-k] = T{δ[n-k]}
then,

y[n] = Σ (k = −∞ to ∞) x[k]h[n − k]

The formula above, which gives the response y[n] of the LTI system in terms of the input
signal x[n] and the unit impulse response h[n], is called the convolution sum.
Methods to calculate Convolution Sum:
1. Mathematically (by using direct formula of convolution sum).
2. Graphically (by using following procedure)
3. Matrix Method

Procedure to compute the convolution sum:


Suppose that we wish to compute the output of the system at some time instant, say n = n0. Then

y[n0] = Σ (k = −∞ to ∞) x[k]h[n0 − k]
The procedures are:
Plot x[n] and h[n] as x[k] and h[k] then,
1) Folding: Fold h[k] about k = 0 to obtain h[-k].
2) Shifting: Shift h[-k] by n0 to the right (left) if n0 is positive (negative), to obtain h[n0-k].
3) Multiplication: Multiply x[k] by h[n0-k] to obtain the product sequence vn0(k) ≡ x[k]h[n0-k].
4) Summation: Sum all the values of the product sequence vn0(k) to obtain the value of the
output at time n = n0.
5) Repetition: Repeat steps 2 through 4, for all possible time shifts -∞ < n < ∞ to obtain overall
response.
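The five steps above amount to the direct evaluation of the convolution sum. A Python sketch for finite causal sequences stored as lists indexed from n = 0 (an assumed representation):

```python
def convolve(x, h):
    # y[n] = sum_k x[k] * h[n-k], for sequences starting at n = 0
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):       # h[n-k] exists in this range
                y[n] += x[k] * h[n - k]
    return y

x = [1, 2, 1, 3]
h = [1, 1, 1]
assert convolve(x, h) == [1, 3, 4, 6, 4, 3]
# Commutative law: x * h = h * x
assert convolve(x, h) == convolve(h, x)
```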

Properties of Convolution:
To simplify the notation, we denote the convolution operation as

y[n] = x[n] ∗ h[n] = Σ (k = −∞ to ∞) x[k]h[n − k]
1) Commutative Law: x[n] * h[n] = h[n] * x[n]
2) Associative Law: (x[n] * h1[n]) * h2[n] = x[n] * (h1[n] *h2[n])
3) Distributive law: x[n]* (h1[n] + h2[n]) = x[n] * h1[n] + x[n] *h2[n]

Causal Linear Time Invariant Systems:


Causal system: one whose output at time n depends only on present and past inputs but does
not depend on future inputs; i.e., the output of the system at instant n = n0 depends only on
values of x[n] for n ≤ n0.
Causal LTI System:

Let us consider an LTI system whose output at time n = n0 is given by

y[n0] = Σ (k = −∞ to ∞) h[k]x[n0 − k]

Suppose that we subdivide the sum into two sets of terms, one involving nonnegative values of
k and the other involving negative values of k:

y[n0] = Σ (k = 0 to ∞) h[k]x[n0 − k] + Σ (k = −∞ to −1) h[k]x[n0 − k]

      = {h[0]x[n0] + h[1]x[n0 − 1] + h[2]x[n0 − 2] + ⋯} + {h[−1]x[n0 + 1] + h[−2]x[n0 + 2] + ⋯}

We observe that the terms in the first sum involve x[n0], x[n0 − 1], …, which are the present and
past values of the input. On the other hand, the terms in the second sum involve the future input
values x[n0 + 1], x[n0 + 2], …. Now, if the output at time n = n0 is to depend only on the present
and past values, then it is clear that

h[n] = 0 for n < 0

It is both a necessary and a sufficient condition for causality. Hence “An LTI system is causal if
and only if its impulse response is zero for negative values of n.”
Thus, for a causal LTI system,

y[n] = Σ (k = 0 to ∞) h[k]x[n − k]

or, equivalently,

y[n] = Σ (k = −∞ to n) x[k]h[n − k]
Stability of Linear Time Invariant Systems:
Stable System: An arbitrary relaxed system is BIBO stable if and only if its output sequence y[n]
is bounded for every bounded input x[n].
For an LTI system:
Suppose we have an LTI system with

y[n] = Σ (k = −∞ to ∞) h[k]x[n − k]

Taking absolute values on both sides,

|y[n]| = | Σ (k = −∞ to ∞) h[k]x[n − k] |

       ≤ Σ (k = −∞ to ∞) |h[k]||x[n − k]|

If the input is bounded, there exists a finite number Mx such that |x[n]| ≤ Mx. Then

|y[n]| ≤ Mx Σ (k = −∞ to ∞) |h[k]|

The output is bounded if the impulse response of the system satisfies the condition

Sh ≡ Σ (n = −∞ to ∞) |h[n]| < ∞


“A linear time-invariant system is stable if its impulse response is absolutely summable”. This is
the necessary and sufficient condition for stable LTI system.
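For h[n] = aⁿu[n], the condition reads Σ|a|ⁿ < ∞, which holds if and only if |a| < 1. A Python sketch comparing partial sums for a stable and an unstable choice of a (the 100-term truncation and the parameter values are assumptions):

```python
def partial_sum(h, terms=100):
    # Partial sum of S_h = sum_n |h[n]| for a causal impulse response
    return sum(abs(h(n)) for n in range(terms))

a_stable, a_unstable = 0.5, 1.5
s_stable = partial_sum(lambda n: a_stable ** n)       # -> 1/(1-a) = 2
s_unstable = partial_sum(lambda n: a_unstable ** n)   # keeps growing

assert abs(s_stable - 1 / (1 - a_stable)) < 1e-9      # converged
assert s_unstable > 1e6                               # diverging
```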

Systems with Finite-Duration and Infinite Duration Impulse Response:


Finite Duration Impulse Response System (FIR system):
h[n] = 0 for n < 0 and n ≥ M, so that

y[n] = Σ (k = 0 to M−1) h[k]x[n − k]

An FIR system has a finite memory of length M samples.

Infinite Duration Impulse Response System (IIR system):

y[n] = Σ (k = 0 to ∞) h[k]x[n − k]

An IIR system has infinite memory.

Discrete-time system described by Difference Equations:


The convolution summation formula is given by

y[n] = Σ (k = −∞ to ∞) h[k]x[n − k]

This equation suggests a means for the realization of the system. In the case of FIR systems,
such a realization involves additions, multiplications, and a finite number of memory locations,
and is readily implemented directly.
If the system is IIR, however, its practical implementation as given by convolution is clearly
impossible, since it requires an infinite number of memory locations, multiplications, and
additions.
There is a practical and computationally efficient means for implementing a family of IIR
systems, within the general class of IIR systems; this family of discrete-time systems is more
conveniently described by difference equations.
Recursive & Non-Recursive Discrete-time systems:
If the response of a discrete-time system depends only on the input signal (i.e. on present and
past input samples), then the system is known as a non-recursive discrete-time system.
If we can express the output of the system not only in terms of the present and past values of
the input, but also in terms of past output values, then that system is known as a recursive system.
E.g., the cumulative average of a signal x[n] in the interval 0 ≤ k ≤ n is defined as

y[n] = 1/(n+1) Σ (k = 0 to n) x[k],  n = 0, 1, 2, …
The realization of above equation requires the storage of all the input samples. Since n is
increasing, our memory requirements grow linearly with time.
However, y[n] can be computed more efficiently by utilizing the previous output value y[n-1].
By a simple algebraic rearrangement, we obtain

(n + 1)y[n] = Σ (k = 0 to n−1) x[k] + x[n] = n y[n − 1] + x[n]

∴ y[n] = n/(n+1) y[n − 1] + 1/(n+1) x[n]


Now the cumulative average y[n] can be computed recursively by multiplying the previous
output value y[n-1] by n/(n+1), multiplying the present input x[n] by 1/(n+1), and adding the
two products.
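Both forms give identical results; a Python sketch comparing the direct (growing-memory) and recursive (constant-memory) computations on an assumed test input:

```python
def cum_avg_direct(x):
    # y[n] = 1/(n+1) * sum_{k=0}^{n} x[k]  -- stores all past inputs
    return [sum(x[: n + 1]) / (n + 1) for n in range(len(x))]

def cum_avg_recursive(x):
    # y[n] = n/(n+1) * y[n-1] + 1/(n+1) * x[n]  -- needs only y[n-1]
    y, y_prev = [], 0.0
    for n, xn in enumerate(x):
        y_prev = (n / (n + 1)) * y_prev + xn / (n + 1)
        y.append(y_prev)
    return y

x = [2.0, 4.0, 6.0, 8.0]
d, r = cum_avg_direct(x), cum_avg_recursive(x)
assert d == [2.0, 3.0, 4.0, 5.0]
assert all(abs(a - b) < 1e-12 for a, b in zip(d, r))
```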
This is an example of a recursive system. In general, a system whose output y[n] at time n
depends on any number of past output values y[n−1], y[n−2], … is called a recursive system.
The output of a causal and practically realizable recursive system can be expressed in general as
y[n] = F{y[n−1], y[n−2], …, y[n−N], x[n], x[n−1], …, x[n−M]}
If y[n] depends only on the present and past inputs, then
y[n] = F{x[n], x[n−1], …, x[n−M]}
Such a system is called non-recursive.
In a recursive system, we need to compute all the previous values y[0], y[1], y[2], …, y[n0 − 1]
before we can compute y[n0], but in a non-recursive system we can compute y[n0] immediately,
without needing y[n0 − 1], y[n0 − 2], …. This feature is desirable in some practical applications.
Difference Equations:
An LTI discrete-time system can also be described by a linear constant-coefficient difference
equation of the form

Σ (k = 0 to N) ak y[n − k] = Σ (k = 0 to M) bk x[n − k],  a0 ≡ 1
Or, equivalently,
y[n] = −Σ (k = 1 to N) ak y[n − k] + Σ (k = 0 to M) bk x[n − k]
If aN ≠ 0, then the difference equation is of order N. This equation describes a recursive
approach for computing the current output, given the input values & previously computed
output-values.
If the system described by difference equation has a constant coefficient (independent of time)
then it is known as linear constant coefficient difference (LCCD) equation.
Consider the first order system (i.e. N = 1) & M = 0.
y[n] = ay[n-1] + x[n]
Now,
y[0] = ay[−1] + x[0]
y[1] = ay[0] + x[1] = a²y[−1] + ax[0] + x[1]
y[2] = ay[1] + x[2] = a³y[−1] + a²x[0] + ax[1] + x[2]
⁞
y[n] = ay[n−1] + x[n] = a^(n+1)y[−1] + aⁿx[0] + a^(n−1)x[1] + ⋯ + x[n]

     = a^(n+1) y[−1] + Σ (k = 0 to n) aᵏ x[n − k],  n ≥ 0
The response contains two parts: the first part is the result of the initial condition y[−1] of the
system, and the second part is the response of the system to the input signal x[n].
If the system is initially relaxed at time n = 0, then its memory should be zero; hence y[−1] = 0
(the state is the output of the delay element).
In this case, we say that the system is at zero state, and its corresponding output is called the
zero-state response or forced response, denoted yzs[n]:

yzs[n] = Σ (k = 0 to n) aᵏ x[n − k],  n ≥ 0


We note that above equation is a convolution summation involving the input signal convolved
with the impulse response h[n] = anu[n].
We obtained the result that the relaxed recursive system described by the first order difference
equation is a linear time-invariant IIR system with impulse response given by h[n] = anu[n].
Now, suppose the system is initially non-relaxed (i.e. y[−1] ≠ 0) and the input x[n] = 0 for all n.
The output of the system with zero input is called the zero-input response or natural response,
denoted yzi[n]:

yzi[n] = a^(n+1) y[−1],  n ≥ 0

We observe that a recursive system with nonzero initial condition is non-relaxed in the sense
that it can produce an output without being excited.
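The decomposition of the total response into zero-input and zero-state parts can be verified numerically for the first-order system. A Python sketch with assumed values a = 0.8, y[−1] = 2, and an arbitrary input:

```python
a = 0.8
y_init = 2.0                      # y[-1], a nonzero initial condition
x = [1.0, 0.5, -1.0, 3.0]         # arbitrary input samples

# Iterate the difference equation y[n] = a*y[n-1] + x[n]
y, y_prev = [], y_init
for xn in x:
    y_prev = a * y_prev + xn
    y.append(y_prev)

# Compare with y[n] = yzi[n] + yzs[n]
for n in range(len(x)):
    yzi = a ** (n + 1) * y_init                         # zero-input part
    yzs = sum(a ** k * x[n - k] for k in range(n + 1))  # zero-state part
    assert abs(y[n] - (yzi + yzs)) < 1e-12
```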
Linearity, time invariance and Stability of the system described by LCCD equation
A system is linear if it satisfies the following three requirements:
1. The total response is equal to the sum of the zero-input and zero-state responses.
(i.e. y[n] = yzi[n] + yzs[n])
2. The principle of superposition applies to the zero-state response.
3. The principle of superposition applies to the zero-input response.

Else the system is non-linear.


In general, recursive systems described by constant-coefficient difference equations are linear
and time-invariant (because the coefficients ak and bk are constants).

Some Trigonometric Identities:


sin(2π + θ) = sin θ;  sin(2π − θ) = −sin θ

sin(π ± θ) = ∓sin θ

cos(π ± θ) = −cos θ

cos(2π ± θ) = cos θ

cos 2θ = cos²θ − sin²θ = 2cos²θ − 1 = 1 − 2sin²θ

sin(A ± B) = sin A cos B ± cos A sin B

cos(A ± B) = cos A cos B ∓ sin A sin B

References:
1. J. G. Proakis, D. G. Manolakis, “Digital Signal Processing: Principles, Algorithms and Applications”, 3rd Edition,
Prentice-Hall, 2000. Chapter 1.
2. S. Sharma, “Digital Signal Processing”, Third Revised Edition, S.K. Kataria & Sons, 2007.
3. Analog Devices, “Mixed Signal and DSP Design Techniques”, Prentice-Hall, 2000.
