1. Discrete Signals and Systems
System:
A system may be defined as a physical device that performs an operation on a signal. More
specifically, a system is something that can manipulate, change, record, or transmit signals. In
Digital Signal Processing, a system may also be defined by an algorithm. For example, a filter used to
reduce the noise and interference corrupting a desired information-bearing signal is called a
system. In this case the filter performs some operations on the signal, which have the effect of
reducing (filtering) the noise and interference in the desired information-bearing signal.
Digital Signal Processing refers to methods of filtering and analyzing time-varying signals based
on the assumption that the signal amplitudes can be represented by a finite set of integers
corresponding to the amplitude of the signal at a finite number of points in time. Digital Signal
Processing is distinguished from other areas in computer science by the unique type of data it
uses: signals. In most cases, these signals originate as sensory data from the real world: seismic
vibrations, visual images, sound waves, etc. DSP is the mathematics, the algorithms, and the
techniques used to manipulate these signals after they have been converted into a digital form.
This includes a wide variety of goals, such as: enhancement of visual images, recognition and
generation of speech, compression of data for storage and transmission, etc.
Digital signal processing is the study of signals in a digital representation and the processing
methods of these signals. DSP includes subfields like: audio signal processing, control
engineering, digital image processing and speech processing. RADAR Signal processing and
communications signal processing are two other important subfields of DSP.
Since the goal of DSP is usually to measure or filter continuous real-world analog signals, the first
step is usually to convert the signal from an analog to a digital form, by using an analog to digital
converter. Often, the required output signal is another analog output signal, which requires a
digital to analog converter.
The algorithms required for DSP are sometimes performed using specialized computers, which
make use of specialized microprocessors called digital signal processors (also abbreviated DSP).
These process signals in real time and are generally purpose-designed application-specific
integrated circuits (ASICs). When flexibility and rapid development are more important than unit
costs at high volume, DSP algorithms may also be implemented using field-programmable gate
arrays (FPGAs).
A typical digital signal processing system consists of the following elements:
Low-Pass Filter (anti-alias): This is a prefilter, or antialiasing filter, which conditions the
analog signal to prevent aliasing.
Analog to Digital Converter (ADC): ADC produces a stream of binary numbers from analog
signals. It has three sub-elements as sampling, quantization and encoding.
Digital Signal Processor: This is the heart of DSP and can represent a general-purpose computer
or a special-purpose processor, or digital hardware, and so on. The basic elements of DSP are
adder and multiplier.
Digital to Analog Converter (DAC): This is the inverse operation to the ADC, which produces
a staircase waveform from a sequence of binary numbers, a first step towards producing an analog
signal.
Low-Pass Filter (dequantisation): This is a postfilter that smooths the staircase waveform into the
desired analog signal. It is simply a low-pass filter which smooths out (or interpolates) the
sequence obtained from the DAC.
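The chain described above can be imitated numerically. The following sketch (Python/NumPy; the input cosine, the ±5 V range, the 3-point moving-average "processor", and all parameter values are assumptions for illustration, not taken from the text) samples a signal, quantizes and encodes it, applies a simple digital filter, and forms the zero-order-hold staircase that the postfilter would then smooth.

import numpy as np

# Minimal sketch of the DSP chain: sample an analog signal, quantize it,
# "process" it digitally, and rebuild a staircase output (before post-filtering).
Fs = 1000                        # assumed sampling rate, samples/s
t = np.arange(0, 0.01, 1/Fs)     # sample instants nT
x = 3*np.cos(2*np.pi*100*t)      # assumed analog input, F = 100 Hz < Fs/2

# ADC: sampling is done above; quantize to b bits over an assumed +/-5 V range, then encode.
b = 10
levels = 2**b
delta = 10/(levels - 1)                         # quantization step
codes = np.round((x + 5)/delta).astype(int)     # b-bit codes
xq = codes*delta - 5                            # quantized sample values

# Digital signal processor: here just a 3-point moving-average filter as a stand-in.
y = np.convolve(xq, np.ones(3)/3, mode='same')

# DAC: a zero-order hold gives the staircase waveform that the postfilter smooths.
staircase = np.repeat(y, 8)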
Performance of these systems is usually limited by the performance (i.e., speed, resolution and
linearity) of the analog-to-digital converter. However, when using a DSP we should never forget
two facts:
If information was not present in the sampled signal to start with, no amount of digital
manipulation will extract it.
Real signals come with noise.
The principal disadvantage of DSP is the limited speed of operation of analog-to-digital converters
and digital signal processors; applications at very high frequencies and wide bandwidths require
fast-sampling A/D converters and fast digital signal processors. Nevertheless, because its advantages
usually outweigh this limitation, DSP is now becoming a first choice in many technologies and
applications, such as consumer electronics, communications, wireless telephones, and medical imaging.
Types of Signals:
1. Continuous time Vs. Discrete-time signal
As the names suggest, this classification is determined by whether the time axis (x-axis)
is discrete (countable) or continuous. Continuous-time signals are represented by x(t), where t
denotes continuous time, and discrete-time signals or sequences are represented by x[n], where
the integer n denotes discrete time.
A periodic continuous-time signal satisfies x(t) = x(t + T), where T is the fundamental period (the smallest value of T for which this equation holds).
For a discrete-time signal,
x[n] = x[n + N]
Here, N is the period, which is an integer; that means the signal repeats after N samples.
4. Causal Vs. Anticausal Vs. Noncausal
Causal signals are signals that are zero for all negative time, while anticausal are signals that
are zero for all positive time. Noncausal signals are signals that have nonzero values in both
positive and negative time.
5. Even Vs. Odd
An even signal is any signal x(t) such that x(t) = x(-t). Even signals can be easily spotted as
they are symmetric around the vertical axis. An odd signal, on the other hand, is a signal that
satisfy x(t) = -x(-t) (Also known as Anti-symmetric signal).
Using the definitions of even and odd signals, we can show that any signal can be written as a
combination of an even and odd signal. That is, every signal has an odd-even decomposition.
x(t) = xe(t) + xo(t)   (i.e., even part + odd part)
xe(t) = [x(t) + x(−t)] / 2   and   xo(t) = [x(t) − x(−t)] / 2
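The same decomposition applies sample by sample to discrete-time signals. Below is a minimal Python/NumPy sketch (the sequence values and the symmetric index grid n = −3…3 are arbitrary assumptions) that forms xe[n] and xo[n] and checks the defining properties.

import numpy as np

# Even/odd decomposition of a finite sequence defined on n = -3..3.
n = np.arange(-3, 4)
x = np.array([0., 1., 2., 3., 4., 5., 6.])   # x[-3]..x[3] (assumed values)

x_rev = x[::-1]                      # x[-n] on the same index grid
xe = (x + x_rev)/2                   # even part: xe[n] = (x[n] + x[-n])/2
xo = (x - x_rev)/2                   # odd part:  xo[n] = (x[n] - x[-n])/2

assert np.allclose(x, xe + xo)       # x[n] = xe[n] + xo[n]
assert np.allclose(xe, xe[::-1])     # xe is even
assert np.allclose(xo, -xo[::-1])    # xo is odd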
6. Deterministic Vs. Random Signal
A deterministic signal is a signal in which each value of the signal is fixed and can be
determined by a mathematical expression, rule, or table. Because of this the future values of
the signal can be calculated from past values with complete confidence. On the other hand, a
random signal has a lot of uncertainty about its behavior. The future values of a random signal
cannot be accurately predicted and can be usually only be guessed based on the averages of
sets of signals.
7. Energy Vs. Power Signal
In electrical systems, a signal may represent a voltage or current. Consider a voltage v(t)
developed across a resistor R, producing a current i(t). The instantaneous power dissipated in
this resistor is defined by
p(t) = v²(t) / R,   or equivalently,   p(t) = R i²(t)
In both cases, the instantaneous power p(t) is proportional to the squared amplitude of the signal.
Furthermore, for a resistance R of 1Ω, we see that above equations take on the same mathematical
form. Accordingly, in signal analysis it is customary to define power in terms of a 1Ω resistor, so
that, regardless of whether a given signal x(t) represents a voltage or a current, we may express
the instantaneous power of the signal as,
𝑝(𝑡) = 𝑥 2 (𝑡)
Based on this convention, we define the total energy of the continuous-time signal x(t) as
E = lim_{T→∞} ∫_{−T/2}^{T/2} x²(t) dt = ∫_{−∞}^{∞} x²(t) dt
And its average power as
P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x²(t) dt
In the case of a discrete-time signal x[n], the integrals are replaced by corresponding sums. Thus
the total energy of x[n] is defined by
E = Σ_{n=−∞}^{∞} |x[n]|²
And its average power is defined by
If the signal is aperiodic:   P = lim_{N→∞} [1/(2N+1)] Σ_{n=−N}^{N} |x[n]|²
If the signal is periodic:    P = (1/N) Σ_{n=0}^{N−1} |x[n]|²
The second expression is the average power in a periodic signal x[n] with fundamental period N.
If x[n] is real we can take |x[n]|² = x²[n].
A signal is referred to as an Energy signal, if and only if the total energy of the signal satisfies the
condition 0 < E < ∞.
On the other hand, it is referred to as a Power signal, if and only if the average power of the signal
satisfies the condition, 0 < P < ∞.
The energy and power classifications of signals are mutually exclusive. In particular, an energy
signal has zero average power, whereas a power signal has infinite energy. It is also of interest to
note that periodic signals and random signals are usually viewed as power signals, whereas signals
that are both deterministic and non-periodic are energy signals.
# Find the energy or power of the following signals:

# x[n] = (1/2)^n u[n]
The given signal is aperiodic, so we find its energy:
E = Σ_{n=−∞}^{∞} |x[n]|² = Σ_{n=−∞}^{∞} |(1/2)^n u[n]|² = Σ_{n=0}^{∞} (1/2)^(2n) = 1 / (1 − (1/2)²) = 4/3
Since E is finite, x[n] is an energy signal.
# x[n] = cos(πn/4)
The given signal is periodic with period N = 8, so we find its average power:
P = (1/N) Σ_{n=0}^{N−1} x²[n] = (1/8) Σ_{n=0}^{7} cos²(πn/4)
= (1/8) Σ_{n=0}^{7} [1 + cos(πn/2)] / 2 = (1/8) Σ_{n=0}^{7} (1/2) + (1/16) Σ_{n=0}^{7} cos(πn/2) = (1/8)(8/2) + (1/16)(0) = 1/2
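The two results above can be checked numerically. The short Python sketch below (the truncation length of 200 terms for the infinite energy sum is an assumption) reproduces E = 4/3 for the first signal and P = 1/2 for the second.

import numpy as np

# Energy of x1[n] = (1/2)^n u[n], truncated to 200 terms (tail is negligible).
n = np.arange(0, 200)
x1 = 0.5**n
E = np.sum(np.abs(x1)**2)            # ~ 4/3 (energy signal)

# Average power of x2[n] = cos(pi*n/4), one fundamental period N = 8.
N = 8
n2 = np.arange(N)
x2 = np.cos(np.pi*n2/4)
P = np.sum(np.abs(x2)**2)/N          # = 1/2 (power signal)

print(E, 4/3)                        # 1.3333... vs 1.3333...
print(P, 0.5)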
A continuous-time sinusoidal signal xa(t) = A cos(2πFt + θ) has the following properties:
1. For every fixed value of the frequency F, xa(t) is periodic: xa(t + Tp) = xa(t),
where Tp = 1/F is the fundamental period of the sinusoidal signal.
2. Continuous-time sinusoidal signals with distinct frequencies are themselves distinct.
3. Increasing the frequency F results in an increase in the rate of oscillation of the signal, in
the sense that more periods are included in a given time interval.
The relationships we have described for sinusoidal signals carry over to the class of complex
exponential signals
xa(t) = A e^(j(Ωt + θ))
By definition, frequency is an inherently positive physical quantity. This is obvious if we interpret
frequency as the number of cycles per unit time in a periodic signal. However, in many cases, only
for mathematical convenience, we need to introduce negative frequencies.
Note: In signal processing and related disciplines, aliasing refers to an effect that causes different signals to become indistinguishable
(or aliases of one another) when sampled. It also refers to the distortion or artifact that results when the signal reconstructed from
samples differs from the original continuous signal.
To see what happens for π ≤ ω0 ≤ 2π, we consider the sinusoids with frequencies ω1 = ω0 and ω2 = 2π − ω0. Note that as ω1 varies from π to 2π, ω2 varies from π to 0. It can easily be seen that
cos(ω2 n) = cos((2π − ω0)n) = cos(2πn − ω0 n) = cos(ω0 n)
Hence ω2 is an alias of ω1. If we had used a sine function instead of a cosine function, the result
would basically be the same, except for a 180° phase difference between the sinusoids x1[n] and
x2[n]. In any case, as we increase the relative frequency ω0 of a discrete-time sinusoid from π to
2π, its rate of oscillation decreases. For ω0 = 2π the result is a constant signal, as in the case of ω0
= 0. Obviously, for ω0 = π (or f = ½) we have the highest rate of oscillation.
Since discrete-time sinusoidal signals with frequencies that are separated by an integer multiple
of 2π are identical, it follows that the frequencies in any interval ω1 ≤ ω ≤ ω1 + 2π constitute all
the existing discrete-time sinusoids or complex exponentials. Hence the frequency range for
discrete-time sinusoids is finite with duration 2π. Usually, we choose the range 0 ≤ ω ≤ 2π or –π
≤ ω ≤ π (0 ≤ f ≤ 1, -½ ≤ f ≤ ½), which we call the fundamental range.
Harmonically related complex exponentials are sets of periodic complex exponentials with fundamental frequencies that are multiples
of a single positive frequency F0. The k-th harmonic has
Fundamental period = 1/(kF0) = Tp/k
3. Coding: In the coding process, each discrete value xq[n] is represented by a b – bit binary
sequence.
x(t) = A cos(2πFt + θ)
x(nT) ≅ x[n] = A cos(2πFnT + θ) = A cos(2π(F/Fs)n + θ)
The quantity
f = F/Fs,   or, ω = ΩT
is called the relative or normalized frequency. We can use f to determine the frequency F in hertz only
if the sampling frequency Fs is known.
Recall,
−∞ < F < ∞   and   −½ ≤ f ≤ ½ cycles/sample
We find that the frequency of the continuous-time sinusoid, when sampled at a rate Fs = 1/T, must
fall in the range
−1/(2T) = −Fs/2 ≤ F ≤ Fs/2 = 1/(2T)
This is a mapping of the infinite frequency range of the variable F into the finite frequency range of the
variable f. Since the highest frequency in a discrete-time signal is ω = π, or f = ½, with a sampling rate Fs
the corresponding highest value of F is
Fmax = Fs/2 = 1/(2T)
2
Sampling Frequency, Fs: It is the number of samples per second while converting continuous-
time signal to discrete-time signal.
Nyquist Criterion, Fs ≥ 2Fmax: This is the condition that must be satisfied so that the signal can be
reconstructed from the discrete-time signal. Fs is the sampling frequency and Fmax is the maximum frequency
contained in the continuous-time signal.
Nyquist Rate, FN = 2Fmax: It is the minimum sampling frequency required for proper
reconstruction of the signal.
Folding Frequency (or Nyquist Frequency), Fs/2: The highest frequency that can be
reconstructed or measured using discretely sampled data. It is half of the sampling frequency.
Sample the two signals at a rate Fs = 40 Hz and find the discrete-time signals obtained.
[Hint: The two sinusoidal signals are identical & consequently indistinguishable. We say that the
frequency F2 = 50 Hz is an alias of the frequency F1 = 10 Hz at the sampling rate of 40 Hz]
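The aliasing claimed in the hint can be verified directly: sampling cosines of 10 Hz and 50 Hz at Fs = 40 Hz yields sample-for-sample identical sequences, since f2 = 50/40 = 5/4 reduces to 1/4 = f1. A minimal Python sketch (unit amplitudes and the number of samples shown are assumptions):

import numpy as np

Fs = 40
n = np.arange(0, 8)
x1 = np.cos(2*np.pi*10*n/Fs)         # f1 = 10/40 = 1/4
x2 = np.cos(2*np.pi*50*n/Fs)         # f2 = 50/40 = 5/4, aliases to 1/4

print(np.allclose(x1, x2))           # True: F2 = 50 Hz is an alias of F1 = 10 Hz at Fs = 40 Hz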
x(t) = 3 cos 100πt
As the link is operated at 10,000 bits/s and each input sample is quantized into 1024 different voltage
levels, each sampled value is represented by log₂1024 = 10 bits/sample.
i. The maximum sampling frequency is Fs = (10,000 bits/s) / (10 bits/sample) = 1000 samples/s.
The folding frequency (the maximum frequency that can be represented uniquely by the
sampled signal) is Fs/2 = 500 Hz.
x[n] ≅ x(nT) = x(n/Fs) = 3 cos 2π(300/1000)n + 2 cos 2π(900/1000)n
= 3 cos 2π(3/10)n + 2 cos 2π(9/10)n
= 3 cos 2π(3/10)n + 2 cos 2π(1 − 1/10)n
= 3 cos 2π(3/10)n + 2 cos 2π(1/10)n
∴ f1 = 3/10 and f2 = 1/10
Here both frequencies f1 and f2 lie in the interval −½ ≤ f ≤ ½.
iv. ADC resolution = 10 bits
Voltage resolution, Δ = (xmax − xmin)/(L − 1) = (5 − (−5))/(1024 − 1) ≈ 9.78 mV.
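The numbers in this example can be reproduced with a few lines of Python. This is only a sketch of the arithmetic above; the ±5 V input range is the one used in part iv.

import numpy as np

bit_rate = 10_000                             # bits/s on the link
levels = 1024
bits_per_sample = int(np.log2(levels))        # 10 bits/sample
Fs = bit_rate / bits_per_sample               # 1000 samples/s
folding = Fs / 2                              # 500 Hz

F1, F2 = 300, 900
f1, f2 = F1/Fs, F2/Fs                         # 0.3 and 0.9
f2_alias = f2 - 1                             # -0.1, i.e. the same cosine as f = 0.1

delta = (5 - (-5)) / (levels - 1)             # voltage resolution, ~9.78 mV
print(Fs, folding, f1, f2_alias, delta*1e3)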
Discrete-time signals:
Digital signals are discrete in both time (the independent variable) and amplitude (the dependent
variable). Signals that are discrete in time but continuous in amplitude are referred to as discrete-
time signals. Discrete-time signals are data sequences. A sequence of data is denoted by {x[n]} or
simply x[n] when the meaning is clear. The elements of the sequence are called samples. The
index n associated with each sample is an integer.
A discrete-time (D.T.) signal x[n] is a function of an independent variable that is an integer. It is
important to note that a D.T. signal is not defined at instants between two successive samples.
Also, it is incorrect to think that x[n] is equal to zero if n is not an integer. Simply, the signal x[n]
is not defined for non-integer values of n.
3) Functional representation
x[n] = { 1   for n = −1, 1
         2   for n = 0, 3
         3   for n = 2
         0   elsewhere
4) Sequence representation
x[n] = {1, 2, 1, 3, 2}, where the "↑" sign placed under a sample represents the position of n = 0 (here,
under the second sample, x[0] = 2). The arrow is often omitted if it is clear from the context which sample is x[0].
Sample values can either be real or complex. The terms “discrete-time signals” and
“sequences” are used interchangeably.
1) Unit Sample (Impulse) Sequence
δ[n] = { 1,   for n = 0
         0,   for n ≠ 0
Note: The analog signal δ(t) or Dirac delta is defined to be zero everywhere except at t=0 and
has unit area.
# Prove that
i.  u[n] = Σ_{k=0}^{∞} δ[n − k]
ii. δ[n] = u[n] − u[n − 1]
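A quick numerical check of these two identities can be written as follows; the finite index grid n = −10…10 and the truncation of the infinite sum to k = 0…20 are assumptions needed to work on a computer.

import numpy as np

n = np.arange(-10, 11)
delta = (n == 0).astype(float)                # delta[n]
u = (n >= 0).astype(float)                    # u[n]

# i.  u[n] = sum_{k>=0} delta[n-k]  (truncated to k = 0..20, enough for this grid)
u_from_delta = sum(((n - k) == 0).astype(float) for k in range(0, 21))
assert np.allclose(u, u_from_delta)

# ii. delta[n] = u[n] - u[n-1]
u_shifted = (n - 1 >= 0).astype(float)        # u[n-1]
assert np.allclose(delta, u - u_shifted)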
Fig: (a) Unit impulse (b) Unit Step (c) Unit Ramp sequences.
4) Sinusoidal Signal
x[n] = Acos(ωn + θ), -∞ < n < ∞
where n is an integer variable, called the sample number, A is the amplitude of the
sinusoid, ω is the frequency in radians per sample, and θ is the phase in radians.
5) Exponential Signal
The exponential signal is a sequence of the form
x[n] = a^n   for all n
If the parameter a is real, then x[n] is a real signal. When the parameter a is complex valued, it can be written as
a = r e^(jθ)   (r and θ are now the parameters)
Hence, x[n] = r^n e^(jθn) = r^n (cos θn + j sin θn)
The real part is x_R[n] = r^n cos θn and the imaginary part is x_I[n] = r^n sin θn.
Alternatively, the complex signal x[n] can be represented by the magnitude and phase functions
|x[n]| = A(n) = r^n
∠x[n] = φ(n) = θn
6) Sigmoid Function
The sigmoid function is one of the most commonly used activation functions. It is denoted by
σ(x) and given as
σ(x) = 1 / (1 + e^(−x)) = e^x / (1 + e^x) = 1 − σ(−x)
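A one-line numerical check of the identity σ(x) = 1 − σ(−x), as a small Python sketch (the sample points are arbitrary):

import numpy as np

def sigmoid(x):
    return 1.0/(1.0 + np.exp(-x))

x = np.linspace(-5, 5, 11)
assert np.allclose(sigmoid(x), 1 - sigmoid(-x))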
1) Energy signals and power signals:
The energy of a sequence x[n] is defined as
E = Σ_{n=−∞}^{∞} |x[n]|²
If E is finite (i.e., 0 < E < ∞), then x[n] is an energy signal.
Many signals that possess infinite energy have a finite average power. The average power of a
periodic sequence with a period of N samples is defined as
P = (1/N) Σ_{n=0}^{N−1} |x[n]|²
and for non-periodic sequences, it is defined in terms of the following limit, if it exists:
P = lim_{N→∞} [1/(2N+1)] Σ_{n=−N}^{N} |x[n]|²
For the unit step sequence, for example,
P = lim_{N→∞} [1/(2N+1)] Σ_{n=0}^{N} 1 = lim_{N→∞} (N+1)/(2N+1) = ½
Therefore, the unit step sequence is a power signal. Note that its energy is infinite, and so it is
not an energy signal.
2) Periodic signals & Aperiodic signals:
The sinusoidal signal of the form
x[n] = Asin2πf0n is periodic when f0 is a rational number, that is if f0 can be expressed as f0
= k/N ; where k and N are integers and N is the fundamental period.
3) Symmetric(even) and antisymmetric(odd) signals:
Even: x[−n] = x[n],   xe[n] = ½{x[n] + x[−n]}
Odd:  x[−n] = −x[n],   xo[n] = ½{x[n] − x[−n]}
Note: Amplitude modification includes addition, multiplication, and scaling of D.T. signals.
Discrete-time systems:
In many applications of digital signal processing we wish to design a device or an algorithm that
performs some prescribed operation on a discrete-time signal. Such a device or algorithm is called
a discrete-time system.
A discrete-time system is a device or algorithm that operates on a discrete-time signal, called the
input or excitation, according to some well-defined rule, to produce another discrete-time signal
called the output or response of the system.
y[n] = T{x[n]}
We say that the input signal x[n] is transformed by the system into a signal y[n].
Fig: A discrete-time system with input signal (excitation) x[n] and output signal (response) y[n].
Fig: Basic building blocks of discrete-time systems: a constant multiplier (y[n] = a x[n]) and a signal multiplier (y[n] = x1[n] x2[n]).
Fig: (a) Cascade interconnection Tc: x[n] passes through T1 to give y1[n], which passes through T2 to give y[n]. (b) Parallel interconnection Tp: y[n] = y1[n] + y2[n].
Convolution method:
In this method, to analyze the behavior of a linear system for a given input signal, we first
decompose or resolve the input signal into a sum of elementary signals. Then, using the
linearity property of the system, the responses of the system to the elementary signals are
added to obtain the total response of the system to the given input signal.
This second method is based on the direct solution of the input-output equation for the system, which
has the form
y[n] = F{y[n−1], y[n−2], …, y[n−N], x[n], x[n−1], x[n−2], …, x[n−M]}
Specifically, for an LTI system, the general form of the input-output relationship is
y[n] = −Σ_{k=1}^{N} a_k y[n−k] + Σ_{k=0}^{M} b_k x[n−k]
For the convolution method, the key observation is that any arbitrary signal x[n] can be resolved into impulses:
x[n] = Σ_{k=−∞}^{∞} x[k] δ[n−k]
The right-hand side of this equation gives the resolution, or decomposition, of any arbitrary
signal x[n] into a weighted (scaled) sum of shifted unit sample sequences.
Response of LTI systems to arbitrary inputs: The Convolution Sum
We know that
x[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k]
First, we denote the response y[n, k] of the system to the input unit sample sequence at n = k by
the special symbol h[n, k], −∞ < k < ∞, i.e.
y[n, k] = h[n, k] = T{δ[n − k]}
We note that n is the time index and k is a parameter showing the location of the input impulse.
If the impulse at the input is scaled by an amount Ck = x[k], the response of the system is the
corresponding scaled output, that is
Ckh[n, k] = x[k]h[n, k]
The response to any arbitrary input x[n], expressed as a sum of weighted impulses as given above, is then
y[n] = T{x[n]} = T{Σ_{k=−∞}^{∞} x[k] δ[n − k]}
= Σ_{k=−∞}^{∞} x[k] T{δ[n − k]}    (using the linearity of T)
= Σ_{k=−∞}^{∞} x[k] h[n, k]
where h[n, k] is the response to the unit impulse δ[n − k], −∞ < k < ∞.
If the response of the LTI system to the unit sample sequence δ[n] is denoted by h[n], i.e.
h[n] ≡ T{δ[n]}
then by the time-invariant property, the response of the system to the delayed unit sample
sequence δ[n-k] is,
h[n-k] = T{δ[n-k]}
then,
y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k]
The formula in the above equation, which gives the response y[n] of the LTI system as a function of
the input signal x[n] and the unit impulse response h[n], is called the convolution sum.
Methods to calculate Convolution Sum:
1. Mathematically (by using direct formula of convolution sum).
2. Graphically (by using following procedure)
3. Matrix Method
To compute the output at a particular time n = n0, we evaluate
y[n0] = Σ_{k=−∞}^{∞} x[k] h[n0 − k]
The procedures are:
Plot x[n] and h[n] as x[k] and h[k] then,
1) Folding: Fold h[k] about k = 0 to obtain h[-k].
2) Shifting: Shift h[-k] by n0 to the right (left) if n0 is positive (negative), to obtain h[n0-k].
3) Multiplication: Multiply x[k] by h[n0−k] to obtain the product sequence v_n0[k] ≡ x[k] h[n0−k].
4) Summation: Sum all the values of the product sequence v_n0[k] to obtain the value of the
output at time n = n0.
5) Repetition: Repeat steps 2 through 4, for all possible time shifts -∞ < n < ∞ to obtain overall
response.
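The convolution sum itself translates directly into code. The Python sketch below implements the formula with two nested loops for finite-length causal sequences and checks the result against numpy.convolve; the sequences x and h used here are arbitrary assumptions.

import numpy as np

def conv_sum(x, h):
    # Direct evaluation of y[n] = sum_k x[k] h[n-k] for finite causal sequences.
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k]*h[n - k]
    return y

x = np.array([1., 2., 1., 3.])
h = np.array([1., 0.5, 0.25])
assert np.allclose(conv_sum(x, h), np.convolve(x, h))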
Properties of Convolution:
To simplify the notation, we denote the convolution operation as
y[n] = x[n] ∗ h[n] = Σ_{k=−∞}^{∞} h[k] x[n − k]
In particular, the output at time n = n0 is
y[n0] = Σ_{k=−∞}^{∞} h[k] x[n0 − k]
Causality of LTI systems:
Suppose that we subdivide this sum into two sets of terms, one involving non-negative values of k and the
other involving negative values of k:
y[n0] = Σ_{k=0}^{∞} h[k] x[n0 − k] + Σ_{k=−∞}^{−1} h[k] x[n0 − k]
We observe that the terms in the first sum involve x[n0], x[n0 − 1], …, which are the present and
past values of the input. On the other hand, the terms in the second sum involve the future input values x[n0 + 1],
x[n0 + 2], …. Now, if the output at time n = n0 is to depend only on the present and past input values,
then it is clear that the impulse response of the system must satisfy
h[n] = 0   for n < 0
This is both a necessary and a sufficient condition for causality. Hence "An LTI system is causal if
and only if its impulse response is zero for negative values of n.”
Thus, for a causal LTI system,
y[n] = Σ_{k=0}^{∞} h[k] x[n − k]
or, equivalently,
y[n] = Σ_{k=−∞}^{n} x[k] h[n − k]
Stability of Linear Time Invariant Systems:
Stable System: An arbitrary relaxed system is BIBO stable if and only if its output sequence y[n]
is bounded for every bounded input x[n].
For an LTI system:
Suppose we have an LTI system,
y[n] = Σ_{k=−∞}^{∞} h[k] x[n − k]
Taking absolute values on both sides, we obtain
|y[n]| = |Σ_{k=−∞}^{∞} h[k] x[n − k]| ≤ Σ_{k=−∞}^{∞} |h[k]| |x[n − k]|
If the input is bounded, there exists a finite number Mx such that |x[n]| ≤ Mx
|y[n]| ≤ Mx Σ_{k=−∞}^{∞} |h[k]|
The output is bounded if the impulse response of the system satisfies the condition
Sh ≡ Σ_{k=−∞}^{∞} |h[k]| < ∞
“A linear time-invariant system is stable if its impulse response is absolutely summable”. This is
the necessary and sufficient condition for a stable LTI system.
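The condition Σ|h[k]| < ∞ can be probed numerically for an exponential impulse response h[n] = a^n u[n] of the kind used later in this chapter: the partial sums settle to a finite value for |a| < 1 and grow without bound for |a| > 1. A Python sketch (the values of a and the truncation length are assumptions):

import numpy as np

n = np.arange(0, 500)
for a in (0.5, 1.5):
    Sh = np.sum(np.abs(a**n))        # partial sum of |h[n]| = |a|^n
    print(a, Sh)                     # a = 0.5: ~2 (stable); a = 1.5: astronomically large (unstable)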
Finite Duration Impulse Response System (FIR System)
y[n] = Σ_{k=0}^{M−1} h[k] x[n − k]
An FIR system has a finite memory of length M samples.
Infinite Duration Impulse Response System (IIR System)
y[n] = Σ_{k=0}^{∞} h[k] x[n − k]
IIR has an infinite memory.
Recall,
y[n] = Σ_{k=−∞}^{∞} h[k] x[n − k]
The above equation suggests a means for realizing the system. In the case of FIR systems,
such a realization involves additions, multiplications, and a finite number of memory locations,
and so it can be implemented directly.
If the system is IIR, however, its practical implementation as given by convolution is clearly
impossible, since it requires an infinite number of memory locations, multiplications, and
additions.
Within the general class of IIR systems, however, there is a family of systems for which a practical
and computationally efficient implementation exists; this family of discrete-time systems is more
conveniently described by difference equations.
Recursive & Non-Recursive Discrete-time systems:
If the response of a discrete-time system depends only on the input signal (i.e., only on terms of the
input signal), then the system is known as a non-recursive discrete-time system.
If the output of the system is expressed not only in terms of the present and past values of the
input, but also in terms of past output values, then the system is known as a recursive system.
E.g., the cumulative average of a signal x[n] in the interval 0 ≤ k ≤ n is defined as
y[n] = [1/(n+1)] Σ_{k=0}^{n} x[k],   n = 0, 1, 2, …
The realization of above equation requires the storage of all the input samples. Since n is
increasing, our memory requirements grow linearly with time.
However, y[n] can be computed more efficiently by utilizing the previous output value y[n−1].
By a simple algebraic rearrangement, we obtain
(n+1) y[n] = Σ_{k=0}^{n−1} x[k] + x[n] = n y[n−1] + x[n]
and hence
y[n] = [n/(n+1)] y[n−1] + [1/(n+1)] x[n]
Now the cumulative average y[n] can be computed recursively by multiplying the previous
output value y[n-1] by n/(n+1), multiplying the present input x[n] by 1/(n+1), and adding the
two products.
This is an example of recursive system. In general whose output y[n] at time n depends on any
number of past output values y[n-1], y[n-2], ……, is called a recursive system.
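The equivalence of the non-recursive (direct) and recursive forms of the cumulative average is easy to confirm numerically. A minimal Python sketch (the input sequence is an arbitrary assumption):

import numpy as np

x = np.array([2., 4., 6., 8., 10.])

# Non-recursive: y[n] = (1/(n+1)) * sum_{k=0}^{n} x[k]
y_direct = np.cumsum(x)/np.arange(1, len(x) + 1)

# Recursive: y[n] = (n/(n+1)) y[n-1] + (1/(n+1)) x[n]
y_rec = np.zeros_like(x)
y_rec[0] = x[0]
for n in range(1, len(x)):
    y_rec[n] = (n/(n + 1))*y_rec[n - 1] + x[n]/(n + 1)

assert np.allclose(y_direct, y_rec)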
The output of a causal and practically realizable recursive system can be expressed in general as,
y[n] = F{y[n−1], y[n−2], …, y[n−N], x[n], x[n−1], …, x[n−M]}
If y[n] depends only on the present and past inputs, then
y[n] = F{x[n], x[n−1], …, x[n−M]}
Such a system is called non-recursive.
In a recursive system, we need to compute all the previous values y[0], y[1], y[2], …, y[n0 − 1] to
compute y[n0], but in a non-recursive system, we can compute y[n0] immediately without
having y[n0−1], y[n0−2], …. This feature is desirable in some practical applications.
Difference Equations:
An LTI discrete-time system can also be described by a linear constant-coefficient difference (LCCD) equation of the
form
Σ_{k=0}^{N} a_k y[n−k] = Σ_{k=0}^{M} b_k x[n−k],   a0 ≡ 1
or, equivalently,
y[n] = −Σ_{k=1}^{N} a_k y[n−k] + Σ_{k=0}^{M} b_k x[n−k]
We note that the above equation is a convolution summation involving the input signal convolved
with the impulse response h[n] = a^n u[n].
We thus obtain the result that the relaxed recursive system described by the first-order difference
equation y[n] = a y[n−1] + x[n] is a linear time-invariant IIR system with impulse response h[n] = a^n u[n].
Now suppose that the system is initially non-relaxed (i.e., y[−1] ≠ 0) and that the input is x[n] = 0 for all n.
The output of the system with zero input is called the zero-input response or natural response and
is denoted by yzi[n]. For the first-order system above,
yzi[n] = a^(n+1) y[−1],   n ≥ 0
We observe that a recursive system with a nonzero initial condition is non-relaxed, in the sense
that it can produce an output without being excited.
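Both responses can be generated by iterating the first-order difference equation y[n] = a y[n−1] + x[n] directly. The Python sketch below (the value a = 0.8 and the initial condition y[−1] = 2 are assumptions) confirms h[n] = a^n u[n] for the relaxed system and yzi[n] = a^(n+1) y[−1] for zero input.

import numpy as np

a, N = 0.8, 10
n = np.arange(N)

# Zero-state response to x[n] = delta[n], starting relaxed (y[-1] = 0): gives h[n].
y = np.zeros(N)
x = np.zeros(N); x[0] = 1.0
prev = 0.0
for i in range(N):
    y[i] = a*prev + x[i]
    prev = y[i]
assert np.allclose(y, a**n)               # h[n] = a^n u[n]

# Zero-input response with y[-1] = 2 and x[n] = 0.
yzi = np.zeros(N)
prev = 2.0
for i in range(N):
    yzi[i] = a*prev
    prev = yzi[i]
assert np.allclose(yzi, a**(n + 1)*2.0)   # yzi[n] = a^(n+1) * y[-1]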
Linearity, time invariance and Stability of the system described by LCCD equation
A system is linear if it satisfies the following three requirements:
1. The total response is equal to the sum of the zero-input and zero-state responses.
(i.e. y[n] = yzi[n] + yzs[n])
2. The principle of superposition applies to the zero-state response.
3. The principle of superposition applies to the zero-input response.
Some useful trigonometric identities:
sin(π ± θ) = ∓ sin θ
cos(π ± θ) = − cos θ
cos(2π ± θ) = cos θ