Chapter-1 Introduction to Signals and Systems 2020

Signal:
A signal is a set of information or data, for example a telephone signal or human speech. In these examples, the signals are functions of the independent variable time.
(Or)
A signal is defined as a physical quantity that varies with time, space, or any other independent variable.
A signal may depend on one or more independent variables. If a signal depends on only one variable, it is called a one-dimensional signal, for example an AC power supply waveform or a speech signal.

Signal Modelling:

The representation of a signal by a mathematical expression is known as signal modelling. If a signal can be represented by a mathematical expression, it is said to be a deterministic signal; otherwise it is known as a random signal.
System:
Signals may be processed further by systems, which may modify them or extract additional information from them.
A system is an entity that processes a set of signals (input values) to yield another set of signals (output values).
A system may be made up of physical components, as in electrical, mechanical, or hydraulic systems (hardware realizations), or it may be an algorithm that computes an output from the input signals (software realizations).
For example, if x(t) is the input to the system and y(t) is the output, the system can be described as y(t) = operations on [x(t)], which is written mathematically as
y(t) = T[x(t)]

This is represented by the block diagram: input signal x(t) → [System] → output signal y(t).

1.1 Characterization, classification, and representation/modelling of signals and systems
Characterization:

Any signal can be characterized by the following four parameters, which describe its behaviour and physical existence. They are

1. Amplitude
2. Time Period
3. Frequency
4. Phase Shift

Amplitude: The amplitude of a periodic variable is a measure of its change over a single period (such as a time or spatial period). There are various definitions of amplitude, all of which are functions of the magnitude of the differences between the variable's extreme values; in older texts the phase is sometimes called the amplitude. Common measures include peak amplitude, peak-to-peak amplitude, semi-amplitude, root-mean-square (RMS) amplitude, and pulse amplitude.
[Figure: a sinusoid annotated with (1) peak amplitude, (2) peak-to-peak amplitude, (3) RMS amplitude, and (4) the wave period (not an amplitude).]

Time Period: The amount of time taken by a signal to complete one full cycle. It is usually measured in seconds and denoted by the letter T.

Frequency: The number of cycles completed by a signal in one second. It is usually measured in hertz (Hz) and denoted by the letter f.

The relation between the frequency f and the period T of a repeating event or oscillation is given by f = 1/T.

Frequency:  1 mHz (10⁻³ Hz)   1 Hz (10⁰ Hz)   1 kHz (10³ Hz)   1 MHz (10⁶ Hz)   1 GHz (10⁹ Hz)   1 THz (10¹² Hz)
Period:     1 ks (10³ s)      1 s (10⁰ s)     1 ms (10⁻³ s)    1 µs (10⁻⁶ s)    1 ns (10⁻⁹ s)    1 ps (10⁻¹² s)
Phase shift: The difference φ(t) = φ_G(t) − φ_F(t) between the phases of two periodic signals F and G is called the phase difference of G relative to F. At values of t where the difference is zero, the two signals are said to be in phase; otherwise they are out of phase with each other.
[Figure: illustration of phase shift; the horizontal axis represents an angle (phase) that increases with time.]
Classification of Signals: Signals are classified according to their characteristics. Some of the classifications are

1. Continuous-time and Discrete-time signals


2. Analog and Digital signals
3. Periodic and Aperiodic signals
4. Even and Odd signals
5. Energy and Power signals
6. Causal and Non-causal signals
7. Deterministic and Random signals

Continuous-time and Discrete-time signals

Continuous-time signals: Signals that are defined for every instant of time are known as continuous-time signals. They are denoted by x(t).

Discrete-time signals: Signals that are defined only at discrete instants of time are known as discrete-time signals. Discrete-time signals are continuous in amplitude and discrete in time. They are denoted by x(n),

i.e., x(nT) = x(t)|_{t=nT}, n = 0, ±1, ±2, ±3, ...

where T is the time interval between two consecutive samples (the sampling period).

[Figure: a continuous-time sinusoidal signal with frequency 10 Hz (amplitude vs. time in seconds) and, below it, the corresponding discrete-time sinusoidal signal obtained by sampling it.]

Examples of discrete-time signals include the number of books sold in a year, the rainfall in a specific month, and the room temperature at the same hour every day for one week.
To distinguish between continuous-time and discrete-time signals we use:

 The symbol 't' to denote the continuous time variable and 'n' to denote the discrete time variable.
 For continuous-time signals we enclose the independent variable in parentheses (.), e.g. x(t); for discrete-time signals we enclose the independent variable in brackets [.], e.g. x[n], where n = 0, ±1, ±2, ±3, ... and x[n] = x(nT), where T is the sampling period.

Analog and Digital Signals:

A signal whose amplitude can take on any value in a continuous range is called an analog signal. This means that an analog signal's amplitude can take an infinite number of values.
A digital signal is one whose amplitude can take only a finite number of values.


 Continuous-time and analog signals are not the same; similarly, discrete-time and digital signals are not the same.
 The terms continuous-time and discrete-time qualify the nature of a signal along the time (horizontal) axis.
 On the other hand, the terms analog and digital qualify the nature of a signal along the amplitude (vertical) axis.
 It is clear that an analog signal is not necessarily continuous-time, and a digital signal need not be discrete-time.

Periodic and Aperiodic Signals:

A signal x(t) is periodic if it satisfies the condition x(t) = x(t + T₀) for all t, where T₀ is a positive constant. The smallest value of T₀ that satisfies the periodicity condition is the period of x(t).

From the above example of a periodic continuous signal, we can readily deduce that if x(t) is periodic with period T, then x(t) = x(t + nT) for all t and for every integer n. Thus, x(t) is also periodic with period 2T, 3T, 4T, ...; the fundamental period T₀ of x(t) is the smallest value of T for which the above equation holds.

A signal which is not periodic is referred to as an aperiodic signal.

Similarly, in discrete time, a discrete-time signal x(n) is periodic with period N, where N is an integer, if

x(n) = x(n + N) for all values of n.

If the above equation holds, then x(n) is also periodic with period 2N, 3N, 4N, ...; the fundamental period N₀ of x(n) is the smallest value of N for which the above equation holds.

Even and Odd Signals:

A signal x(t) or x(n) is referred to as an even signal if it is identical to its time-reversed counterpart, i.e., its reflection about the vertical axis. In continuous time a signal is even if it satisfies x(t) = x(−t), while a discrete-time signal is even if x(n) = x(−n); it is odd if −x(n) = x(−n) in discrete time and −x(t) = x(−t) in continuous time.

From the above relations we can compute the even and odd components of any signal from the following equations:

x_e(t) = ½[x(t) + x(−t)]   and   x_e(n) = ½[x(n) + x(−n)]

x_o(t) = ½[x(t) − x(−t)]   and   x_o(n) = ½[x(n) − x(−n)]

Even signals are symmetric about the vertical axis (time origin), and odd signals are antisymmetric about the time origin. As with discrete-time signals, most practical signals are neither purely even nor purely odd.
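As a quick illustration of the decomposition equations above, the sketch below (assuming NumPy is available; the signal and index range are arbitrary examples) splits a discrete-time signal defined on a symmetric index grid into its even and odd parts and verifies the symmetry relations.

```python
# A minimal sketch (assuming NumPy): even/odd decomposition of a discrete
# signal x[n] defined on symmetric indices n = -N..N.
import numpy as np

n = np.arange(-4, 5)                  # n = -4 ... 4
x = np.where(n >= 0, n, 0.0)          # example signal: a unit ramp, x[n] = n*u[n]

x_rev = x[::-1]                       # x[-n] on the same symmetric index grid
x_even = 0.5 * (x + x_rev)
x_odd  = 0.5 * (x - x_rev)

# x_even + x_odd reconstructs x; x_even is symmetric, x_odd is antisymmetric
assert np.allclose(x_even + x_odd, x)
assert np.allclose(x_even, x_even[::-1])
assert np.allclose(x_odd, -x_odd[::-1])
```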

Energy and Power Signals:

The terms signal energy and signal power are used to characterize a signal; they are not actual measures of physical energy and power. The definitions of signal energy and power apply to any signal x(t), including signals that take on complex values.

The signal energy in the signal x(t) is

E = ∫_{−∞}^{+∞} |x(t)|² dt

A necessary condition for the energy to be finite is that the signal amplitude tends to zero as |t| → ∞.
The signal power in the signal x(t) is

P = lim_{T→∞} (1/2T) ∫_{−T}^{+T} |x(t)|² dt

and for a discrete-time signal x(n),

P = lim_{N→∞} (1/(2N+1)) Σ_{n=−N}^{+N} |x(n)|²

If 0<E<∞, then the signal x(t) is called an energy signal. However, there are signals
where this condition is not satisfied. For such signals we consider the power. If 0<P<∞,
then the signal is called a power signal. Note that the power for an energy signal is zero
(P = 0) and that the energy for a power signal is infinite (E = ∞). Some signals are
neither energy nor power signals.
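These definitions can be checked numerically. The sketch below (assuming NumPy; the example signals and the finite window T_max are illustrative choices) approximates the energy and power integrals on a discrete time grid.

```python
# A rough numerical check (assuming NumPy): energy of a decaying pulse and
# power of a periodic cosine; T_max plays the role of the limit T -> infinity.
import numpy as np

dt = 1e-3
T_max = 100.0
t = np.arange(-T_max, T_max, dt)

x_energy = np.exp(-np.abs(t))          # decaying pulse: an energy signal
x_power  = np.cos(2 * np.pi * t)       # periodic cosine: a power signal

E = np.sum(np.abs(x_energy) ** 2) * dt                 # ~ 1 for e^{-|t|}
P = np.sum(np.abs(x_power) ** 2) * dt / (2 * T_max)    # ~ 0.5 for cos(2*pi*t)

print(E, P)
```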

Classification of Systems:

A system can be considered a physical entity which manipulates one or more input signals applied to it. For example, a microphone is a system which converts an input acoustic (voice or sound) signal into an electrical signal. A system is defined mathematically as a unique operator or transformation that maps an input signal into an output signal.

There are many classifications of systems, based on the parameter used to classify them. They are

a) Continuous time, discrete time systems


b) Lumped Parameter and Distributed Parameter systems
c) Static and Dynamic systems
d) Causal and Non causal systems
e) Linear, non linear systems
f) Time variant, time invariant systems
g) Linear Time Invariant and Linear Time Variant Systems
h) Stable, unstable systems
i) Invertible and noninvertible systems

Continuous time, discrete time systems


A system which deals with continuous time signals is known as continuous time
system. For such a system the outputs and inputs are continuous time signals.
This is defined as y(t) = T[x(t)] where x(t) is input signal, y(t) is output signal, T[ ]
is transformation that characterizes the system behaviour.
Discrete time system deals with discrete time signals. For such a system the outputs
and inputs are discrete time signals.
This is defined as y(n) = T[x(n)] where x(n) is input signal, y(n) is output signal.
Lumped Parameter and Distributed Parameter systems

A lumped system is one in which the dependent variables of interest are a


function of time alone. In general, this will mean solving a set of ordinary differential
equations (ODEs)

A distributed system is one in which all dependent variables are functions of
time and one or more spatial variables. In this case, we will be solving partial
differential equations (PDEs)

 The first system is a distributed system, consisting of an infinitely thin string,


supported at both ends; the dependent variable, the vertical position of the
string y(x,t) is indexed continuously in both space and time.
 The second system, a series of ``beads'' connected by massless string segments,
constrained to move vertically, can be thought of as a lumped system, perhaps an
approximation to the continuous string.
 For electrical systems, consider the difference between a lumped RLC network
and a transmission line
Static and Dynamic systems
A static system is memoryless, whereas a dynamic system is a system with memory.
In a static system the output at the present instant depends only on the present input. These systems are also called memoryless systems, since the system output at a given time depends only on the input at that same time.
Dynamic systems are those in which the output at the present instant depends on past inputs and past outputs. These are also called systems with memory, since the system needs to store information regarding past inputs or outputs.
Example 1: y(t) = 2 x(t)
For present value t=0, the system output is y(0) = 2x(0). Here, the output is only
dependent upon present input. Hence the system is memory less or static.
Example 2: y(t) = 2 x(t) + 3 x(t-3)
For present value t=0, the system output is y(0) = 2x(0) + 3x(-3).
Here x(-3) is past value for the present input for which the system requires memory to
get this output. Hence, the system is a dynamic system.

1. Find whether the following systems are static or dynamic:
a. y[n] = n x[n] + 6 x³[n]
b. y(t) = x(t − 2) + 3 x(t)
c. y[n] = Σ_{k=0}^{∞} x[n − k]
d. y[n] = Σ_{k=−∞}^{∞} x[n − k]

Causal and non-causal systems:

The principle of causality states that the output of a system always succeeds the input. A system for which the principle of causality holds is defined as a causal system. If an input is applied to a system at time t = 0 s, then the output of a causal system is zero for t < 0. If the output depends only on present and past inputs, the system is causal; otherwise it is non-causal. Causal systems are also referred to as non-anticipative, since the output does not anticipate future values of the input.
A system in which the output (response) precedes the input is known as a non-causal system. If an input is applied to a system at time t = 0 s, then the output of a non-causal system is non-zero for t < 0. Non-causal systems do not exist in practice.
OR
A system is said to be causal if its output depends upon present and past inputs, and does not depend upon future inputs (non-anticipative).
For a non-causal system, the output depends upon future inputs also (anticipative).
Example 1: y(t) = 2 x(t) + 3 x(t−3)
For present value t = 1, the system output is y(1) = 2x(1) + 3x(−2).
Here, the system output depends only upon present and past inputs. Hence, the system is causal.
Example 2: y(t) = 2 x(t) + 3 x(t−3) + 6 x(t+3)
For present value t = 1, the system output is y(1) = 2x(1) + 3x(−2) + 6x(4). Here, the system output depends upon a future input. Hence the system is non-causal.

2. Find whether the following systems are causal or non-causal:

a. y[n] = x[n] + 1/x[n+1]
b. y(t) = x²(t) + x(t − 2)
c. y[n] = x[n − 2] + x[2 − n]
d. y(t) = ∫ x(τ) dτ

Linear and non-linear systems
A linear system is one which satisfies the principles of superposition and homogeneity (scaling).
Consider a system characterized by the transformation operator T[ ]. Let x1, x2 be inputs applied to it and y1, y2 the corresponding outputs. Then for a linear system the following hold:
y1 = T[x1], y2 = T[x2]
Principle of homogeneity: T[a·x1] = a·y1, T[b·x2] = b·y2
Principle of superposition: T[x1 + x2] = y1 + y2
Linearity: T[a·x1 + b·x2] = a·y1 + b·y2
where a, b are constants.
Linearity ensures that only the input frequencies are regenerated at the output. Non-linearity leads to the generation of new frequencies in the output that are not present in the input. Much of control theory is devoted to linear systems.

The transform of a combination of signals must be equal to the same combination of the transforms of the individual signals:

T[a·x₁(t) + b·x₂(t)] = a·T[x₁(t)] + b·T[x₂(t)]

Example: Check whether the system y(t) = x²(t) is linear or non-linear.

Solution: y₁(t) = T[x₁(t)] = x₁²(t) and y₂(t) = T[x₂(t)] = x₂²(t)
T[a₁x₁(t) + a₂x₂(t)] = [a₁x₁(t) + a₂x₂(t)]², which is not equal to a₁y₁(t) + a₂y₂(t). Hence the system is non-linear.
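The same test can be run numerically. The sketch below (assuming NumPy; the inputs x1, x2 and constants a, b are arbitrary test choices) compares T[a·x1 + b·x2] with a·T[x1] + b·T[x2] for the system y(t) = x²(t) of this example.

```python
# A small numerical linearity check (assuming NumPy) for y(t) = x^2(t).
import numpy as np

def system(x):
    return x ** 2                      # y(t) = x^2(t)

t = np.linspace(0, 1, 1000)
x1, x2 = np.sin(2 * np.pi * t), np.cos(6 * np.pi * t)
a, b = 2.0, -3.0

lhs = system(a * x1 + b * x2)          # T[a*x1 + b*x2]
rhs = a * system(x1) + b * system(x2)  # a*T[x1] + b*T[x2]

print(np.allclose(lhs, rhs))           # False -> superposition fails, non-linear
```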

3. Find whether the following systems are linear or non-linear:

a. y[n] = 2x[n] + 1/x[n−1]
b. y(t) = A x(t) + B
c. dy(t)/dt + 2y(t) = x(t) dx(t)/dt
d. y(t) = ∫_{−∞}^{t} x(τ) dτ
e. y[n] = n x[n]

Time variant and time invariant systems

A system is said to be time variant if its input-output characteristics vary with time. Otherwise, the system is time invariant.

The condition for a time invariant system is y(n, t) = y(n − t), and the condition for a time variant system is y(n, t) ≠ y(n − t), where y(n, t) = T[x(n − t)] denotes the output due to a delayed input and y(n − t) denotes the delayed output.
OR
A system is said to be time variant if its response varies with time. If the system response to an input signal does not change with time, the system is termed time invariant. The behaviour and characteristics of a time invariant system are fixed over time.
In a time invariant system, if the input is delayed by time t0 the output also gets delayed by t0. Mathematically this is specified as
y(t − t0) = T[x(t − t0)]
For a discrete-time invariant system the condition for time invariance can be formulated mathematically by replacing t with n·Ts, giving
y(n − n0) = T[x(n − n0)]
where n0 is the time delay. Time invariance minimizes the complexity involved in the analysis of systems. Most systems in practice are time invariant.
Note: In describing discrete-time systems the sampling times n·Ts are written simply as n, i.e. a discrete signal x(n·Ts) is written for simplicity as x(n).
Step 1: Delay the input by K (or T) and find the corresponding output — eqn 1
Step 2: Replace t by (t − K) or (t − T) in the original output y(t) — eqn 2
Step 3: If eqn 1 = eqn 2, the system is time invariant; otherwise the system is time variant.

Example: y(n) = x(-n)


Solution: y(n, t) = T[x(n-t)] = x(-n-t)
y(n-t) = x(-(n-t)) = x(-n + t)
∴ y(n, t) ≠ y(n-t). Hence, the system is time variant.
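A rough numerical version of the same three-step test is sketched below (assuming NumPy; circular shifts via np.roll stand in for ideal delays on a finite record, and the random test signal is an arbitrary choice).

```python
# A discrete-time sketch (assuming NumPy) of the time-invariance test for
# the folding system y[n] = x[-n]: delay the input first, then compare with
# the delayed output.
import numpy as np

def system(x):
    return x[::-1]                        # y[n] = x[-n] (folding)

rng = np.random.default_rng(0)
x = rng.standard_normal(32)
k = 3                                     # delay in samples

y_from_delayed_input = system(np.roll(x, k))   # T{x[n-k]}
y_delayed = np.roll(system(x), k)              # y[n-k]

print(np.allclose(y_from_delayed_input, y_delayed))  # False -> time variant
```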

4. Find whether the following systems are time invariant or time variant:

a. y(t) = t x(t)
b. y[n] = x[n] + n x[n−1]
c. y(t) = x(t) cos(50πt)
d. y[n] = x²[n−1]
e. y(t) = x(t²)
f. y(t) = e^{x(t)}
g. y[n] = x[2n]
h. y(t) = x(−t)

Linear Time variant (LTV) and linear Time Invariant (LTI) Systems
If a system is both linear and time variant, then it is called linear time variant (LTV)
system. If a system is both linear and time Invariant then that system is called linear
time invariant (LTI) system.
Stable and Unstable Systems
Most of the control system theory involves estimation of stability of systems.
Stability is an important parameter which determines its applicability. Stability of a
system is formulated in bounded input bounded output sense i.e. a system is stable if
its response is bounded for a bounded input (bounded means finite).
An unstable system is one in which the output of the system is unbounded for
a bounded input. The response of an unstable system diverges to infinity.
OR
The system is said to be stable only when the output is bounded for bounded
input. For a bounded input, if the output is unbounded in the system then it is said to be
unstable.
Note: For a bounded signal, amplitude is finite.
Example 1: y(t) = x²(t)
Let the input be u(t) (the unit step, a bounded input); then the output is
y(t) = u²(t) = u(t), which is bounded. Hence, the system is stable.
Example 2: y(t) = ∫x(t)dt
Let the input be u(t) (the unit step, a bounded input); then the output y(t) = ∫u(t)dt is a ramp signal, which is unbounded (the amplitude of the ramp is not finite; it goes to infinity as t → ∞). Hence, the system is unstable.
Invertible and non-invertible systems
A system is said to be invertible if distinct inputs lead to distinct outputs. For
such a system there exists an inverse transformation (inverse system) denoted by T-1[ ]
which maps the outputs of original systems to the inputs applied. Accordingly we can
write
T T⁻¹ = T⁻¹ T = I
where I is the identity operator (I = 1 for single-input, single-output systems).
A non-invertible system is one in which distinct inputs leads to same outputs. For
such a system an inverse system will not exist.
OR


A system is said to be invertible if the input of the system can be recovered at the output. For a cascade of H₁(s) followed by H₂(s):
Y(s) = X(s) H₁(s) H₂(s)
Y(s) = X(s) H₁(s) · 1/H₁(s), since H₂(s) = 1/H₁(s)
∴ Y(s) = X(s) ⟹ y(t) = x(t); hence, the system is invertible.
If y(t) ≠ x(t), then the system is said to be non-invertible.
ELEMENTARY SIGNALS
A signal can be anything which conveys information. For example, a picture of a person gives you information regarding whether he is short or tall, fair or dark, etc. Mathematically, a signal is defined as a function of one or more independent variables that conveys information about the state of a system. For example:
a) A speech signal is a function of time. Here the independent variable is time and the dependent variable is the amplitude of the speech signal.
b) A picture of varying brightness is a function of two spatial variables. Here the independent variables are the spatial coordinates (X, Y) and the dependent variable is the brightness (amplitude) of the picture.
Here are a few basic signals:
Unit Step Function
The unit step function is denoted by u(t).
 It is widely used as a test signal.
 Its amplitude is unity for all t ≥ 0.
u(t) = 1 for t ≥ 0
     = 0 for t < 0
Delayed Unit Step Function:
The delayed unit step function is a step that begins at a point 'a' other than the origin; at all points before 'a' the signal is zero.
u(t − a) = 1 for t ≥ a
         = 0 for t < a
Unit Ramp Signal:
The ramp signal is defined as
r(t) = t for t ≥ 0
     = 0 for t < 0,   or equivalently r(t) = t·u(t)
The ramp function can be obtained by integrating the unit step function:
r(t) = ∫ u(t) dt = ∫ dt = t
In other words, the unit step function can be obtained by differentiating the unit ramp function:
u(t) = dr(t)/dt
Unit Parabolic Function:
The parabolic function is defined as
p(t) = t²/2 for t ≥ 0
     = 0 for t < 0,   or equivalently p(t) = (t²/2)·u(t)
The unit parabolic function can be obtained by integrating the ramp function:
p(t) = ∫ r(t) dt = ∫ t dt = t²/2 for t ≥ 0
In other words, the ramp function can be obtained by differentiating the parabolic function:
r(t) = dp(t)/dt
Real Exponential Signal:

x(t) = e^{at}: for a = 0 the signal is constant, for a > 0 it is a growing exponential, and for a < 0 it is a decaying exponential.

Sinusoidal Signal: It is defined as
x(t) = A sin(ωt + Φ)
where A is the amplitude, ω is the angular frequency, t is time and Φ is the phase.
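The elementary signals defined so far can be generated with a few lines of code; the sketch below (assuming NumPy and Matplotlib are available; the parameter values are arbitrary examples) is one possible way to do it.

```python
# A minimal generation sketch (assuming NumPy/Matplotlib) for the elementary
# signals defined above: unit step, ramp, parabola, exponential and sinusoid.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-2, 2, 1000)

u = (t >= 0).astype(float)            # unit step u(t)
r = t * u                             # unit ramp  r(t) = t u(t)
p = 0.5 * t ** 2 * u                  # unit parabola p(t) = (t^2/2) u(t)
x_exp = np.exp(-2 * t) * u            # decaying real exponential (a < 0 case)
x_sin = 1.5 * np.sin(2 * np.pi * t + np.pi / 4)   # A sin(wt + phi)

for sig, name in [(u, "u(t)"), (r, "r(t)"), (p, "p(t)"),
                  (x_exp, "exp(-2t)u(t)"), (x_sin, "A sin(wt+phi)")]:
    plt.plot(t, sig, label=name)
plt.legend(); plt.xlabel("t"); plt.show()
```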


Rectangular Function:
It is defined as
π(t) = 1 for |t| < 1/2
     = 0 otherwise
It can also be defined with the help of the unit step function as π(t) = u(t) − u(t − a), for a pulse of width a starting at the origin.
Triangular Function:
It is defined as
Δ(t) = 1 − |t|/a for |t| ≤ a
     = 0 for |t| > a

Signum Function:
The signum function is denoted as sgn(t):
sgn(t) = 1 for t > 0
       = 0 for t = 0
       = −1 for t < 0
This function can be expressed in terms of the unit step signal as sgn(t) = −1 + 2u(t).
SINC Function:
The sinc function is defined as sinc(t) = sin(πt)/(πt) for −∞ < t < +∞.

Impulse Function:
It is defined by
∫_{−∞}^{t} δ(τ) dτ = u(t) and δ(t) = 0 for t ≠ 0,
so that
δ(t) = du(t)/dt
That is, the impulse function has zero amplitude everywhere except at t = 0.
At t = 0 the amplitude is infinite, such that the area under the curve is equal to one.
Properties of the Impulse Function:

 ∫_{−∞}^{+∞} x(t) δ(t) dt = x(0)
 x(t) δ(t − t₀) = x(t₀) δ(t − t₀)
 ∫_{−∞}^{+∞} x(t) δ(t − t₀) dt = x(t₀)
 δ(at) = (1/|a|) δ(t)
 ∫_{−∞}^{+∞} x(τ) δ(t − τ) dτ = x(t)

Basic Operations on Signals: There are two variable parameters in general:

1. Amplitude
2. Time

and the operations are divided as follows:

1. Time shifting
2. Time reversal
3. Time scaling
4. Amplitude scaling
5. Signal multiplication
6. Signal addition

Time Shifting:
Given a signal x(t), time shifting may delay or advance the signal in time. Mathematically, this can be represented as y(t) = x(t − T).

If T is positive the shift delays the signal, and if T is negative the shift advances the signal. The same operation can be defined for discrete-time signals:

y[n] = x[n − K]


Time Reversal (Folding, Reflection, or Transpose):
The time reversal of a signal x(t) can be obtained by folding the signal about t = 0. It is denoted by x(−t).

Time Scaling: Time scaling is accomplished by replacing 't' by 'at' in x(t); if a > 1 the signal is compressed, and if a < 1 the signal is expanded.

Amplitude Scaling:
C·x(t) is an amplitude-scaled version of x(t) whose amplitude is scaled by a factor C.

Signal Addition: Addition of two signals is simply the addition of their corresponding amplitudes, instant by instant. This is best explained with an example.
Signal Multiplication: Multiplication of two signals is simply the multiplication of their corresponding amplitudes, as shown in the sketch below.
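The sketch below (assuming NumPy; the pulse signal and shift amount are arbitrary examples, and the helper `shift` is a hypothetical utility written here, not a library function) illustrates these basic operations on a discrete-time signal.

```python
# A sketch (assuming NumPy) of the basic operations on a discrete-time signal
# x[n]: shifting, folding, amplitude scaling, addition and multiplication.
import numpy as np

n = np.arange(-5, 6)
x = np.where((n >= 0) & (n <= 3), 1.0, 0.0)     # a short rectangular pulse

def shift(x, k):
    """y[n] = x[n-k]: delay by k samples (zero-filled at the edges)."""
    y = np.zeros_like(x)
    if k >= 0:
        y[k:] = x[:len(x) - k]
    else:
        y[:k] = x[-k:]
    return y

y_delayed = shift(x, 2)        # x[n-2]
y_folded  = x[::-1]            # x[-n] on the symmetric grid n = -5..5
y_scaled  = 3.0 * x            # amplitude scaling
y_sum     = x + y_delayed      # signal addition (sample by sample)
y_product = x * y_folded       # signal multiplication (sample by sample)
```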

Multiple Operations on Signals:

1. Given a signal x(t), y(t) = x(at − b)

2. Given a signal x(t), y(t) = x((t − b)/a)

3. Given a signal x(t), y(t) = x(−at − b)

4. Solve the following questions:

a. Perform the following and plot y(t):
   i. y(t) = x(3t + 2)
   ii. y(t) = x(2 − t/2)

b. For the given signal, plot
   y(t) = x(2t − 4)

Analogy between Vectors and Signals
When approximating a vector V1 along V2, Ve represents the error in the approximation. If the component of the vector V1 along V2 is drawn, then C12 indicates the similarity of the two vectors.
[Figure: V1 decomposed into its component C12·V2 along V2 plus an error vector Ve, shown for two different choices of the component.]
The component of vector V1 along V2 is given by V1 = C12·V2 + Ve.
Similarly, for two vectors A and B, the dot product is A·B = |A||B| cos θ, where A·B = B·A. Then
The component of A along B = |A| cos θ = (A·B)/|B|
The component of B along A = |B| cos θ = (A·B)/|A|
Component of vector V1 along V2: C12|V2| = (V1·V2)/|V2|
⟹ C12 = (V1·V2)/|V2|² = (V1·V2)/(V2·V2)

∴ If V1 and V2 are orthogonal to each other, then V1·V2 = 0 and ∴ C12 = 0,

i.e., for orthogonal vectors the similarity measure is C12 = 0.
ORTHOGONALITY IN SIGNALS:
Suppose f₁(t) and f₂(t) are two signals. The component of f₁(t) in terms of f₂(t) can be written as C12·f₂(t); the error in this approximation is fₑ(t), so that
f₁(t) = C12·f₂(t) + fₑ(t)
That is,
fₑ(t) = f₁(t) − C12·f₂(t)
The average of the error over (t₁, t₂) can be written as

(1/(t₂ − t₁)) ∫_{t₁}^{t₂} fₑ(t) dt = (1/(t₂ − t₁)) ∫_{t₁}^{t₂} [f₁(t) − C12·f₂(t)] dt

When averaging an error signal that takes large positive and negative values we may get zero as the average and wrongly conclude that the error is zero; that is why it is better to take the mean of the square of the error fₑ(t):

ε = (1/(t₂ − t₁)) ∫_{t₁}^{t₂} fₑ²(t) dt
  = (1/(t₂ − t₁)) ∫_{t₁}^{t₂} [f₁(t) − C12·f₂(t)]² dt
  = (1/(t₂ − t₁)) ∫_{t₁}^{t₂} [f₁²(t) + C12²·f₂²(t) − 2·C12·f₁(t)·f₂(t)] dt

To minimize the mean square error we differentiate ε with respect to C12 and set the result equal to zero, i.e.,
dε/dC12 = 0
dε/dC12 = (1/(t₂ − t₁)) ∫_{t₁}^{t₂} [ d/dC12 f₁²(t) + d/dC12 (C12² f₂²(t)) − d/dC12 (2 C12 f₁(t) f₂(t)) ] dt
In the first term, d/dC12 f₁²(t) contains no C12 component and so becomes zero. Then

dε/dC12 = (1/(t₂ − t₁)) [ ∫_{t₁}^{t₂} 0 dt + 2C12 ∫_{t₁}^{t₂} f₂²(t) dt − 2 ∫_{t₁}^{t₂} f₁(t) f₂(t) dt ] = 0

C12 ∫_{t₁}^{t₂} f₂²(t) dt = ∫_{t₁}^{t₂} f₁(t) f₂(t) dt

C12 = [∫_{t₁}^{t₂} f₁(t) f₂(t) dt] / [∫_{t₁}^{t₂} f₂²(t) dt]

If the two signals f₁(t) and f₂(t) are orthogonal, then C12 = 0 and therefore

∫_{t₁}^{t₂} f₁(t) f₂(t) dt = 0

Solve the following problems on the orthogonality of two signals using sine and cosine terms.

1. Show that sin(nω₀t) and sin(mω₀t) are mutually orthogonal to each other, where m and n are integers, over the interval (t₀, t₀ + 2π/ω₀).

Soln.
∫_{t₁}^{t₂} f₁(t) f₂(t) dt = ∫_{t₀}^{t₀+2π/ω₀} sin(nω₀t)·sin(mω₀t) dt
= (1/2) ∫_{t₀}^{t₀+2π/ω₀} 2 sin(nω₀t) sin(mω₀t) dt

From the formula 2 sin a sin b = cos(a − b) − cos(a + b), we can write the above relation as

= (1/2) ∫_{t₀}^{t₀+2π/ω₀} [cos((n − m)ω₀t) − cos((n + m)ω₀t)] dt
= (1/2) ∫_{t₀}^{t₀+2π/ω₀} cos((n − m)ω₀t) dt − (1/2) ∫_{t₀}^{t₀+2π/ω₀} cos((n + m)ω₀t) dt
= (1/2) [ sin((n − m)ω₀t)/((n − m)ω₀) − sin((n + m)ω₀t)/((n + m)ω₀) ] evaluated from t₀ to t₀ + 2π/ω₀

After taking ω₀ out of the two terms, this can be written as

= (1/2ω₀) [ sin((n − m)ω₀t)/(n − m) − sin((n + m)ω₀t)/(n + m) ] evaluated from t₀ to t₀ + 2π/ω₀

Applying the limits of integration,

= (1/2ω₀) [ sin((n − m)ω₀(t₀ + 2π/ω₀))/(n − m) − sin((n + m)ω₀(t₀ + 2π/ω₀))/(n + m) − sin((n − m)ω₀t₀)/(n − m) + sin((n + m)ω₀t₀)/(n + m) ]

Expanding the sine terms,

= (1/2ω₀) [ sin((n − m)2π + (n − m)ω₀t₀)/(n − m) − sin((n + m)2π + (n + m)ω₀t₀)/(n + m) − sin((n − m)ω₀t₀)/(n − m) + sin((n + m)ω₀t₀)/(n + m) ]

We know that sin(2π + Φ) = sin Φ; using this relation we can write the above equation as

= (1/2ω₀) [ sin((n − m)ω₀t₀)/(n − m) − sin((n + m)ω₀t₀)/(n + m) − sin((n − m)ω₀t₀)/(n − m) + sin((n + m)ω₀t₀)/(n + m) ]

In this expression the 1st and 3rd terms cancel, and similarly the 2nd and 4th terms cancel. The whole expression is therefore equal to 0, which proves that the given two signals sin(nω₀t) and sin(mω₀t) are mutually orthogonal to each other.

2. Show that sin(nω₀t) and cos(mω₀t) are mutually orthogonal to each other, where m and n are integers, over the interval (t₀, t₀ + 2π/ω₀).

3. Show that cos(nω₀t) and cos(mω₀t) are mutually orthogonal to each other, where m and n are integers, over the interval (t₀, t₀ + 2π/ω₀).

4. A rectangular function is given by
f(t) = 1 for 0 < t < π
     = −1 for π < t < 2π
Approximate this function by the waveform sin t over the interval 0 to 2π such that the mean square error is minimum.

Soln. The function f(t) is approximated over the interval 0 to 2π as f(t) ≈ C12 sin t. We shall find the optimum value of C12 which minimizes the mean square error in this approximation. To minimize the mean square error we use the relation

C12 = [∫_{t₁}^{t₂} f₁(t) f₂(t) dt] / [∫_{t₁}^{t₂} f₂²(t) dt]
    = [∫_{0}^{π} f₁(t) sin t dt + ∫_{π}^{2π} f₁(t) sin t dt] / [∫_{0}^{2π} sin²t dt]

From the given data we can write the above relation as

= [∫_{0}^{π} sin t dt + ∫_{π}^{2π} (−1) sin t dt] / [∫_{0}^{2π} (1 − cos 2t)/2 dt]

After integrating the terms and applying the limits we get

= [(−cos t)|₀^π − (−cos t)|_π^{2π}] / [(1/2)(t − sin 2t / 2)|₀^{2π}]
= [(−cos π + cos 0) + (cos 2π − cos π)] / [(1/2)(2π − 0)]
= [−(−1) + 1 + 1 − (−1)] / π = 4/π

f(t) ≈ (4/π) sin t
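The optimum coefficient found above can be cross-checked numerically; the sketch below (assuming NumPy; the grid resolution is an arbitrary choice) evaluates C12 for the same square wave and compares it with 4/π.

```python
# A numerical cross-check (assuming NumPy) of C12 for approximating the
# square wave by sin(t) over (0, 2*pi).
import numpy as np

t = np.linspace(0, 2 * np.pi, 200001)
dt = t[1] - t[0]
f = np.where(t < np.pi, 1.0, -1.0)        # the given rectangular function
g = np.sin(t)                             # the approximating waveform

C12 = (np.sum(f * g) * dt) / (np.sum(g * g) * dt)
print(C12, 4 / np.pi)                     # both approximately 1.2732
```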

Chapter-2 Laplace Transform 2020
The Laplace transform is similar to the Fourier transform. While the Fourier
transform of a function is a complex function of a real variable (frequency), the Laplace
transform of a function is a complex function of a complex variable. The Laplace
transform is usually restricted to transformation of functions of t with t ≥ 0.
The Laplace transform is invertible on a large class of functions. The inverse
Laplace transform takes a function of a complex variable s (often frequency) and yields
a function of a real variable t (often time). Given a simple mathematical or functional
description of an input or output to a system, the Laplace transform provides an
alternative functional description that often simplifies the process of analysing the
behaviour of the system, or in synthesizing a new system based on a set of
specifications.
Definition of the Laplace Transformation:
The two-sided or bilateral Laplace transform pair is defined as

ℒ{f(t)} = F(s) = ∫_{−∞}^{∞} f(t) e^{−st} dt          — (1)

ℒ⁻¹{F(s)} = f(t) = (1/2πj) ∫_{σ−jω}^{σ+jω} F(s) e^{st} ds          — (2)

where ℒ{f(t)} denotes the Laplace transform of the time function f(t), ℒ⁻¹{F(s)} denotes the inverse Laplace transform, and s is the complex variable whose real part is σ and imaginary part is ω, that is, s = σ + jω.
In most problems we are concerned with values of time greater than some reference time, say t = t₀ = 0, and since the initial conditions are generally known, the bilateral or two-sided Laplace transform pair of (1) and (2) simplifies to the unilateral or one-sided Laplace transform defined as

ℒ{f(t)} = F(s) = ∫_{t₀}^{∞} f(t) e^{−st} dt = ∫_{0}^{∞} f(t) e^{−st} dt          — (3)

ℒ⁻¹{F(s)} = f(t) = (1/2πj) ∫_{σ−jω}^{σ+jω} F(s) e^{st} ds          — (4)

The Laplace Transform of (3) has meaning only if the integral converges (reaches a
limit), that is, if

|∫_{0}^{∞} f(t) e^{−st} dt| < ∞          — (5)

To determine the conditions that ensure that the integral of (3) converges, we rewrite (5) as

|∫_{0}^{∞} f(t) e^{−σt} e^{−jωt} dt| < ∞          — (6)

The term e^{−jωt} in the integral of (6) has a magnitude of unity, i.e., |e^{−jωt}| = 1, and thus the condition for convergence becomes

|∫_{0}^{∞} f(t) e^{−σt} dt| < ∞          — (7)

We will denote transformation from the time domain to the complex frequency
domain, and vice versa, as 𝑓 (𝑡) ⇔ 𝐹 (𝑠) − − − (8)
The Dirichlet conditions are sufficient conditions for a real-valued, periodic
function 𝑓 (𝑡) to be equal to the sum of its Fourier series at each point
where 𝑓 (𝑡) is continuous. Moreover, the behaviour of the Fourier series at points of
discontinuity is determined as well (it is the midpoint of the values of the discontinuity).
The conditions are:
1. 𝑓(𝑡)must be absolutely integrable over a period.
2. 𝑓 (𝑡) must be of bounded variation in any given bounded interval.
3. 𝑓 (𝑡) must have a finite number of discontinuities in any given bounded interval,
and the discontinuities cannot be infinite.
Properties of the Laplace Transform
1. Linearity Property:
The linearity property states that if 𝑓1 (𝑡), 𝑓2 (𝑡), 𝑓3 (𝑡), … … … . 𝑓𝑛 (𝑡) have Laplace
Transforms 𝐹1 (𝑠), 𝐹2 (𝑠), 𝐹3 (𝑠), … … … 𝐹𝑛 (𝑠) respectively and 𝑐1 , 𝑐2 , 𝑐3 , … … … … 𝑐𝑛
are arbitrary constants, then
𝑐1 𝑓1 (𝑡) + 𝑐2 𝑓2 (𝑡) + ⋯ + 𝑐𝑛 𝑓𝑛 (𝑡) ⇔ 𝑐1 𝐹1 (𝑠) + 𝑐2 𝐹2 (𝑠) + ⋯ + 𝑐𝑛 𝐹𝑛 (𝑠) − −(9)
Proof:
We know that, as per the definition of the Laplace transform,

ℒ{f(t)} = F(s) = ∫_{t₀}^{∞} f(t) e^{−st} dt = ∫_{0}^{∞} f(t) e^{−st} dt          — (10)

ℒ{c₁f₁(t) + c₂f₂(t) + ⋯ + c_n f_n(t)} = ∫_{t₀}^{∞} [c₁f₁(t) + c₂f₂(t) + ⋯ + c_n f_n(t)] e^{−st} dt

= c₁ ∫_{t₀}^{∞} f₁(t) e^{−st} dt + c₂ ∫_{t₀}^{∞} f₂(t) e^{−st} dt + ⋯ + c_n ∫_{t₀}^{∞} f_n(t) e^{−st} dt

= c₁F₁(s) + c₂F₂(s) + ⋯ + c_nF_n(s)

2. Time Shifting Property

The time shifting property states that a right shift in the time domain by 'a' units corresponds to multiplication by e^{−as} in the complex frequency domain. Thus,
f(t − a) u(t − a) ⇔ e^{−as} F(s)          — (11)
Proof:
We know that, as per the definition of the Laplace transform,

ℒ{f(t)} = F(s) = ∫_{t₀}^{∞} f(t) e^{−st} dt = ∫_{0}^{∞} f(t) e^{−st} dt          — (12)

ℒ{f(t − a) u(t − a)} = ∫_{0}^{a} 0·e^{−st} dt + ∫_{a}^{∞} f(t − a) e^{−st} dt          — (13)

Now we let t − a = τ; then t = τ + a and dt = dτ. With these substitutions, the second integral on the right side of (13) becomes

∫_{0}^{∞} f(τ) e^{−s(τ+a)} dτ = e^{−as} ∫_{0}^{∞} f(τ) e^{−sτ} dτ = e^{−as} F(s)

3. Frequency Shifting Property

The frequency shifting property states that if we multiply a time domain function f(t) by an exponential e^{−at}, where a is an arbitrary positive constant, this multiplication produces a shift of the s variable in the complex frequency domain by a units. Thus,
e^{−at} f(t) ⇔ F(s + a)

Proof:
We know that, as per the definition of the Laplace transform,

ℒ{f(t)} = F(s) = ∫_{t₀}^{∞} f(t) e^{−st} dt = ∫_{0}^{∞} f(t) e^{−st} dt          — (14)

Then, ℒ{e^{−at} f(t)} = ∫_{0}^{∞} e^{−at} f(t) e^{−st} dt = ∫_{0}^{∞} f(t) e^{−(s+a)t} dt = F(s + a)

4. Scaling Property
Let a be an arbitrary positive constant; then the scaling property states that
f(at) ⇔ (1/a) F(s/a)

Proof:
We know that, as per the definition of the Laplace transform,

ℒ{f(t)} = F(s) = ∫_{t₀}^{∞} f(t) e^{−st} dt = ∫_{0}^{∞} f(t) e^{−st} dt          — (15)

ℒ{f(at)} = ∫_{0}^{∞} f(at) e^{−st} dt

Letting at = τ ⇒ t = τ/a ⇒ dt = d(τ/a); then

ℒ{f(at)} = ∫_{0}^{∞} f(τ) e^{−s(τ/a)} d(τ/a) = (1/a) ∫_{0}^{∞} f(τ) e^{−(s/a)τ} dτ = (1/a) F(s/a)

5. Differentiation in Time Domain

The differentiation in time domain property states that differentiation in the time domain corresponds to multiplication by s in the complex frequency domain, minus the initial value of f(t) at t = 0. Thus,
f′(t) = (d/dt) f(t) ⇔ s F(s) − f(0)
Proof:
We know that, as per the definition of the Laplace transform,

ℒ{f(t)} = F(s) = ∫_{t₀}^{∞} f(t) e^{−st} dt = ∫_{0}^{∞} f(t) e^{−st} dt          — (16)

ℒ{f′(t)} = ∫_{0}^{∞} f′(t) e^{−st} dt          — (17)

Using integration by parts, ∫ v du = uv − ∫ u dv          — (18)
with u = f(t) and v = e^{−st}, the relation in (17) can be rewritten as

ℒ{f′(t)} = e^{−st} f(t)|₀^∞ − ∫_{0}^{∞} (−s) e^{−st} f(t) dt          — (19)

⇒ (0 − f(0)) + s ∫_{0}^{∞} f(t) e^{−st} dt ⇒ s F(s) − f(0)

The time differentiation property can be extended to show that

(d²/dt²) f(t) ⇔ s² F(s) − s f(0) − f′(0)          — (20)
(d³/dt³) f(t) ⇔ s³ F(s) − s² f(0) − s f′(0) − f′′(0)          — (21)

In general, this relation can be extended to nth-order differentiation in time as

(dⁿ/dtⁿ) f(t) ⇔ sⁿ F(s) − sⁿ⁻¹ f(0) − sⁿ⁻² f′(0) − ⋯ − f⁽ⁿ⁻¹⁾(0)          — (22)

6. Differentiation in Complex Frequency Domain

This property states that differentiation in the complex frequency domain, with multiplication by minus one, corresponds to multiplication of f(t) by t in the time domain. In other words,
t f(t) ⇔ −(d/ds) F(s)
Proof:
We know that, as per the definition of the Laplace transform,

ℒ{f(t)} = F(s) = ∫_{t₀}^{∞} f(t) e^{−st} dt = ∫_{0}^{∞} f(t) e^{−st} dt          — (23)

Differentiating both sides of the above relation with respect to s, we get

(d/ds) F(s) = (d/ds) ∫_{0}^{∞} f(t) e^{−st} dt
⇒ ∫_{0}^{∞} f(t) (∂/∂s)(e^{−st}) dt ⇒ ∫_{0}^{∞} f(t)(−t) e^{−st} dt ⇒ ℒ{(−t) f(t)}

Since (−t)f(t) is the Laplace transform pair of (d/ds)F(s), this can further be written as

t f(t) ⇔ −(d/ds) F(s)          — (24)

In general, it can be written as

tⁿ f(t) ⇔ (−1)ⁿ (dⁿ/dsⁿ) F(s)          — (25)
7. Integration in Time Domain

This property states that integration in the time domain corresponds to F(s) divided by s:

ℒ{∫_{0}^{t} f(t) dt} ⇔ F(s)/s

Proof:
We know that, as per the definition of the Laplace transform,

ℒ{f(t)} = F(s) = ∫_{t₀}^{∞} f(t) e^{−st} dt = ∫_{0}^{∞} f(t) e^{−st} dt          — (26)

ℒ{∫_{0}^{t} f(t) dt} = ∫_{0}^{∞} [∫_{0}^{t} f(t) dt] e^{−st} dt

Using integration by parts with u = ∫_{0}^{t} f(t) dt and v = e^{−st}/(−s), the above relation becomes

⇒ [∫_{0}^{t} f(t) dt · (e^{−st}/(−s))]₀^∞ − ∫_{0}^{∞} f(t) (e^{−st}/(−s)) dt

The first term, [∫_{0}^{t} f(t) dt · (e^{−st}/(−s))]₀^∞, becomes zero (at t = 0 the integral is zero, and as t → ∞ the exponential decays), so only the second term remains:

⇒ (1/s) ∫_{0}^{∞} f(t) e^{−st} dt ⇒ F(s)/s

8. Integration in Complex Frequency Domain: This property states that integration in the complex frequency domain with respect to s corresponds to division of the time function f(t) by the variable t, provided that the limit lim_{t→0} f(t)/t exists. Thus,

ℒ{f(t)/t} = ∫_{s}^{∞} F(s) ds

Proof:
We know that, as per the definition of the Laplace transform,

ℒ{f(t)} = F(s) = ∫_{t₀}^{∞} f(t) e^{−st} dt = ∫_{0}^{∞} f(t) e^{−st} dt          — (27)

∫_{s}^{∞} F(s) ds = ∫_{s}^{∞} ∫_{0}^{∞} f(t) e^{−st} dt ds

In the above relation we can separate the terms into two integrals:

∫_{s}^{∞} F(s) ds = ∫_{0}^{∞} f(t) dt ∫_{s}^{∞} e^{−st} ds ⇒ ∫_{0}^{∞} f(t) dt [e^{−st}/(−t)]_{s}^{∞}

After applying the limits in the above equation we can rewrite it as

⇒ ∫_{0}^{∞} (f(t)/t) e^{−st} dt ⇒ ℒ{f(t)/t}

9. Time Periodicity:

The time periodicity property states that for a periodic function of time with period T, the Laplace transform equals the integral ∫_{0}^{T} f(t) e^{−st} dt divided by (1 − e^{−sT}) in the complex frequency domain. Thus, if we let f(t) be a periodic function with period T, that is, f(t) = f(t + nT) for n = 1, 2, 3, ..., we get the transform pair

f(t + nT) ⇔ [∫_{0}^{T} f(t) e^{−st} dt] / (1 − e^{−sT})

Proof:
We know that, as per the definition of the Laplace transform,

ℒ{f(t)} = F(s) = ∫_{t₀}^{∞} f(t) e^{−st} dt = ∫_{0}^{∞} f(t) e^{−st} dt          — (28)

The Laplace transform of a periodic function can be expressed as

ℒ{f(t)} = ∫_{0}^{∞} f(t) e^{−st} dt = ∫_{0}^{T} f(t) e^{−st} dt + ∫_{T}^{2T} f(t) e^{−st} dt + ∫_{2T}^{3T} f(t) e^{−st} dt + ⋯

In the first integral on the right side we let t = τ, in the second t = τ + T, in the third t = τ + 2T, and so on. The areas under each period of f(t) are equal, and thus the upper and lower limits of integration are the same for each integral. Then,

ℒ{f(t)} = ∫_{0}^{T} f(τ) e^{−sτ} dτ + ∫_{0}^{T} f(τ + T) e^{−s(τ+T)} dτ + ∫_{0}^{T} f(τ + 2T) e^{−s(τ+2T)} dτ + ⋯          — (29)

Since the function is periodic, i.e., f(τ) = f(τ + T) = f(τ + 2T) = ⋯ = f(τ + nT), we can write equation (29) as

ℒ{f(t)} = (1 + e^{−sT} + e^{−2sT} + ⋯) ∫_{0}^{T} f(τ) e^{−sτ} dτ          — (30)

By application of the binomial (geometric series) theorem, that is,

1 + a + a² + a³ + ⋯ = 1/(1 − a)          — (31)

we find that the relation in (30) reduces to

ℒ{f(t + nT)} = [∫_{0}^{T} f(τ) e^{−sτ} dτ] / (1 − e^{−sT})
10. Initial Value Theorem

It states that the initial value f(0) of the time function f(t) can be found from its Laplace transform multiplied by s, letting s → ∞. That is,
lim_{t→0} f(t) = lim_{s→∞} s F(s)

Proof: We know that the time differentiation property yields the relation
f′(t) = (d/dt) f(t) ⇔ s F(s) − f(0)
Then, applying the limit s → ∞ on both sides, we get

lim_{s→∞} ∫_{0}^{∞} f′(t) e^{−st} dt = lim_{s→∞} (s F(s) − f(0))

The term on the left-hand side becomes zero, since e^{−st} → 0 as s → ∞; the relation then reduces to
lim_{s→∞} (s F(s) − f(0)) = 0 ⟹ lim_{s→∞} s F(s) − f(0) = 0

⟹ lim_{s→∞} s F(s) = f(0)

⟹ lim_{s→∞} s F(s) = lim_{t→0} f(t)          — (32)

11. Final Value Theorem

It states that the final value f(∞) of the time function f(t) can be found from its Laplace transform multiplied by s, letting s → 0. That is,

lim_{t→∞} f(t) = lim_{s→0} s F(s) = f(∞)

Proof: We know that the time differentiation property yields the relation

f′(t) = (d/dt) f(t) ⇔ s F(s) − f(0)
Then, applying the limit s → 0 on both sides, we get

lim_{s→0} ∫_{0}^{∞} f′(t) e^{−st} dt = lim_{s→0} (s F(s) − f(0))

∫_{0}^{∞} f′(t) dt = lim_{s→0} (s F(s) − f(0))

f(t)|₀^∞ = lim_{s→0} (s F(s)) − f(0) ⟹ f(∞) − f(0) = lim_{s→0} (s F(s)) − f(0)

f(∞) = lim_{s→0} (s F(s)) ⟹ lim_{t→∞} f(t) = lim_{s→0} (s F(s))          — (33)
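Both the initial- and final-value theorems can be checked symbolically; the sketch below (assuming SymPy is available; the test function f(t) = 1 − e^{−2t} is an arbitrary example, not taken from the text) computes lim s·F(s) as s → ∞ and as s → 0.

```python
# A symbolic sketch (assuming SymPy) of the initial- and final-value theorems
# for f(t) = 1 - exp(-2t), whose transform is 1/s - 1/(s+2) = 2/(s(s+2)).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = 1 - sp.exp(-2 * t)
F = sp.laplace_transform(f, t, s, noconds=True)

initial = sp.limit(s * F, s, sp.oo)   # -> 0  (= f(0))
final   = sp.limit(s * F, s, 0)       # -> 1  (= f(t) as t -> infinity)
print(sp.simplify(F), initial, final)
```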

12. Convolution in Time Domain

Convolution in the time domain corresponds to multiplication in the complex frequency domain, that is,

f₁(t) ∗ f₂(t) ⇔ F₁(s) F₂(s)

Proof: We know that, as per the definition of the Laplace transform,

ℒ{f(t)} = F(s) = ∫_{t₀}^{∞} f(t) e^{−st} dt = ∫_{0}^{∞} f(t) e^{−st} dt          — (34)

and f₁(t) ⇔ F₁(s) and f₂(t) ⇔ F₂(s).

ℒ{f₁(t) ∗ f₂(t)} = ℒ[∫_{−∞}^{∞} f₁(τ) f₂(t − τ) dτ] = ∫_{0}^{∞} [∫_{−∞}^{∞} f₁(τ) f₂(t − τ) dτ] e^{−st} dt

By rearranging the terms inside the integrals according to the variables, we can rewrite this as

⟹ ∫_{0}^{∞} f₁(τ) [∫_{−∞}^{∞} f₂(t − τ) e^{−st} dt] dτ

We let t − τ = λ; then t = λ + τ and dt = dλ. By substitution into the above relation,

ℒ{f₁(t) ∗ f₂(t)} = ∫_{0}^{∞} f₁(τ) [∫_{0}^{∞} f₂(λ) e^{−s(λ+τ)} dλ] dτ = ∫_{0}^{∞} f₁(τ) e^{−sτ} dτ ∫_{0}^{∞} f₂(λ) e^{−sλ} dλ

⟹ F₁(s) F₂(s)

13. Convolution in Complex Frequency Domain: Convolution in the complex frequency domain, multiplied by 1/2πj, corresponds to multiplication in the time domain:

f₁(t) f₂(t) ⇔ (1/2πj) F₁(s) ∗ F₂(s)

Proof: We know that ℒ{f₁(t) f₂(t)} = ∫_{0}^{∞} f₁(t) f₂(t) e^{−st} dt          — (35)
and, recalling the inverse Laplace transform from equation (2),

ℒ⁻¹{F₁(s)} = f₁(t) = (1/2πj) ∫_{σ−jω}^{σ+jω} F₁(μ) e^{μt} dμ          — (36)

Substituting equation (36) into (35), we obtain

ℒ{f₁(t) f₂(t)} = ∫_{0}^{∞} [(1/2πj) ∫_{σ−jω}^{σ+jω} F₁(μ) e^{μt} dμ] f₂(t) e^{−st} dt

⇒ (1/2πj) ∫_{σ−jω}^{σ+jω} F₁(μ) [∫_{0}^{∞} f₂(t) e^{−(s−μ)t} dt] dμ

We observe that the bracketed integral is F₂(s − μ); therefore,

ℒ{f₁(t) f₂(t)} = (1/2πj) ∫_{σ−jω}^{σ+jω} F₁(μ) F₂(s − μ) dμ = (1/2πj) F₁(s) ∗ F₂(s)

The Laplace Transform Pairs for Common Functions:

S.No.  f(t)                  F(s)
1      u(t)                  1/s
2      t u(t)                1/s²
3      tⁿ u(t)               n!/sⁿ⁺¹
4      δ(t)                  1
5      δ(t − a)              e^{−as}
6      e^{−at} u(t)          1/(s + a)
7      e^{+at} u(t)          1/(s − a)
8      e^{+jωt} u(t)         1/(s − jω)
9      e^{−jωt} u(t)         1/(s + jω)
10     cos(ωt) u(t)          s/(s² + ω²)
11     sin(ωt) u(t)          ω/(s² + ω²)
12     e^{−at} cos(bt)       (s + a)/((s + a)² + b²)
13     e^{−at} sin(bt)       b/((s + a)² + b²)
14     e^{at} cos(bt)        (s − a)/((s − a)² + b²)
15     e^{at} sin(bt)        b/((s − a)² + b²)
16     cosh(at)              s/(s² − a²)
17     sinh(at)              a/(s² − a²)
18     e^{−at} cosh(bt)      (s + a)/((s + a)² − b²)
19     e^{−at} sinh(bt)      b/((s + a)² − b²)
20     e^{at} cosh(bt)       (s − a)/((s − a)² − b²)
21     e^{at} sinh(bt)       b/((s − a)² − b²)
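A few entries of the table can be verified symbolically; the sketch below (assuming SymPy is available; the selection of entries is arbitrary) computes the transforms with sympy.laplace_transform.

```python
# A quick symbolic check (assuming SymPy) of some table entries:
# t*u(t), e^{-at}u(t), sin(wt)u(t) and e^{-at}cos(bt).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, b, w = sp.symbols('a b omega', positive=True)

pairs = [t, sp.exp(-a * t), sp.sin(w * t), sp.exp(-a * t) * sp.cos(b * t)]
for f in pairs:
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f, '<->', sp.simplify(F))
# Expected: 1/s**2, 1/(s + a), omega/(s**2 + omega**2), (s + a)/((s + a)**2 + b**2)
```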
The Laplace Transform Pairs for Common Waveforms:

#Problem 1: Find the Laplace transform of the waveform f_P(t). The subscript 'P' stands for pulse waveform.

Solution: We first express the given waveform as a sum of unit step functions. Then,
f_P(t) = A[u(t) − u(t − a)]
From the time shifting property we get
f(t − a) u(t − a) ⇔ e^{−as} F(s)
From the table of Laplace transforms of common functions,
u(t) ⇔ 1/s
For this example, A u(t) ⇔ A/s and A u(t − a) ⇔ (A/s) e^{−as}.
Then, by the linearity property, the Laplace transform of the given pulse function is

A[u(t) − u(t − a)] ⇔ A/s − (A/s) e^{−as} = (A/s)(1 − e^{−as})
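The result of Problem 1 can be confirmed symbolically; the sketch below (assuming SymPy; A and a are kept as positive symbols) transforms the same pulse A[u(t) − u(t − a)].

```python
# A symbolic sketch (assuming SymPy) of the pulse A[u(t) - u(t-a)],
# confirming the result A(1 - exp(-a*s))/s.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
A, a = sp.symbols('A a', positive=True)

f_p = A * (sp.Heaviside(t) - sp.Heaviside(t - a))
F_p = sp.laplace_transform(f_p, t, s, noconds=True)
print(sp.simplify(F_p))           # A*(1 - exp(-a*s))/s
```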
#Problem 2: Find the Laplace transform of the waveform f_L(t). The subscript 'L' stands for linear segment.

Solution: We must first derive the equation of the linear segment. Then we express the given waveform in terms of the unit step function:
f_L(t) = (t − 1) u(t − 1)
From the time shifting property we get
f(t − a) u(t − a) ⇔ e^{−as} F(s)
From the table of Laplace transforms of common functions,
t u(t) ⇔ 1/s²
Therefore, the Laplace transform of the linear segment is

(t − 1) u(t − 1) ⇔ e^{−s}/s²
#Problem 3: Find the Laplace transform of the triangular waveform f_T(t).

[Figure 3.1 and Figure 3.2: the triangular waveform and its decomposition into linear segments.]

Solution: We must first derive the equations of the linear segments. Figure 3.1 can then be modified into Figure 3.2, and we express the given waveform in terms of the unit step function:
f_T(t) = t[u(t) − u(t − 1)] + (−t + 2)[u(t − 1) − u(t − 2)]
f_T(t) = t u(t) − t u(t − 1) − t u(t − 1) + t u(t − 2) + 2u(t − 1) − 2u(t − 2)
Collecting like terms, this can be written as
f_T(t) = t u(t) − 2(t − 1) u(t − 1) + (t − 2) u(t − 2)
From the time shifting property we get
f(t − a) u(t − a) ⇔ e^{−as} F(s)
From the table of Laplace transforms of common functions,
t u(t) ⇔ 1/s²
Then,
t u(t) − 2(t − 1) u(t − 1) + (t − 2) u(t − 2) ⇔ 1/s² − 2e^{−s}/s² + e^{−2s}/s² = (1/s²)(1 − 2e^{−s} + e^{−2s})
Therefore, the Laplace transform of the triangular waveform is given by

f_T(t) ⇔ (1/s²)(1 − e^{−s})²
#Problem 4: Find the Laplace transform of the rectangular periodic waveform f_R(t).

Solution: This is a periodic waveform with period T = 2a, and thus we can apply the time periodicity property

f(t + nT) ⇔ [∫_{0}^{T} f(t) e^{−st} dt] / (1 − e^{−sT})

where the denominator represents the periodicity of f(t). For this example,

ℒ{f_R(t)} = 1/(1 − e^{−2as}) ∫_{0}^{2a} f_R(t) e^{−st} dt = 1/(1 − e^{−2as}) [∫_{0}^{a} A e^{−st} dt + ∫_{a}^{2a} (−A) e^{−st} dt]

⇒ A/(1 − e^{−2as}) [ (e^{−st}/(−s))|₀^a − (e^{−st}/(−s))|_a^{2a} ]

Or,

ℒ{f_R(t)} = A/[s(1 − e^{−2as})] (1 − e^{−as} − e^{−as} + e^{−2as}) = A(1 − 2e^{−as} + e^{−2as})/[s(1 − e^{−2as})]

= A(1 − e^{−as})²/[s(1 − e^{−2as})] = A(1 − e^{−as})(1 − e^{−as})/[s(1 + e^{−as})(1 − e^{−as})]

= A(1 − e^{−as})/[s(1 + e^{−as})] = (A/s)·(e^{as/2} − e^{−as/2})/(e^{as/2} + e^{−as/2}) = (A/s)·sinh(as/2)/cosh(as/2)

ℒ{f_R(t)} = (A/s) tanh(as/2)
#Problem 5: find the Laplace transform of the half-wave rectified sine wave 𝑓𝐻𝑊 (𝑡).


Solution: This is a periodic waveform with period T = 2π. We will apply the time periodicity property

f(t + nT) ⇔ [∫_{0}^{T} f(t) e^{−st} dt] / (1 − e^{−sT})

where the denominator represents the periodicity of f(t). For this example,

ℒ{f_HW(t)} = 1/(1 − e^{−2πs}) ∫_{0}^{2π} f(t) e^{−st} dt = 1/(1 − e^{−2πs}) ∫_{0}^{π} sin t · e^{−st} dt

= 1/(1 − e^{−2πs}) · [e^{−st}(−s sin t − cos t)/(s² + 1)]₀^π = (1 + e^{−πs}) / [(s² + 1)(1 − e^{−2πs})]

ℒ{f_HW(t)} = (1 + e^{−πs}) / [(s² + 1)(1 + e^{−πs})(1 − e^{−πs})]

Or,

ℒ{f_HW(t)} = 1 / [(s² + 1)(1 − e^{−πs})]
#Practise Problems:
Find the Laplace Transform of the following Waveforms

Chapter-3 Inverse Laplace Transforms 2020
For any continuous-time function f(t), the two-sided or bilateral Laplace transform pair is defined as

ℒ{f(t)} = F(s) = ∫_{−∞}^{∞} f(t) e^{−st} dt          — (1)

ℒ⁻¹{F(s)} = f(t) = (1/2πj) ∫_{σ−jω}^{σ+jω} F(s) e^{st} ds          — (2)

where ℒ{f(t)} denotes the Laplace transform of the time function f(t), ℒ⁻¹{F(s)} denotes the inverse Laplace transform, and s is the complex variable whose real part is σ and imaginary part is ω, that is, s = σ + jω.
The inverse Laplace transform uses the one-to-one relationship between a signal and its unilateral Laplace transform. One method of computing it is the partial fraction method.
Let
F(s) = N(s)/D(s) = (b_m s^m + b_{m−1} s^{m−1} + ⋯ + b₁s + b₀)/(sⁿ + a_{n−1} s^{n−1} + ⋯ + a₁s + a₀)          — (3)
where m < n.
To find the partial fraction expansion, first we need to find the roots of the denominator polynomial D(s), say p₁, p₂, ..., p_k. The roots are also called the poles of F(s). Now we can express F(s) as
F(s) = N(s)/D(s) = (b_m s^m + b_{m−1} s^{m−1} + ⋯ + b₁s + b₀) / ∏_{k=1}^{N} (s − p_k)          — (4)

I. DISTINCT POLES:

If all the poles p_k are distinct, then we may express F(s) as a sum of single-pole terms:
F(s) = r₁/(s − p₁) + r₂/(s − p₂) + ⋯ + r_i/(s − p_i) + ⋯ + r_k/(s − p_k)          — (5)
Multiplying both sides of equation (5) by (s − p_i) and substituting s = p_i, we get
(s − p_i) F(s)|_{s=p_i} = [r₁(s − p_i)/(s − p₁) + r₂(s − p_i)/(s − p₂) + ⋯ + r_i + ⋯ + r_k(s − p_i)/(s − p_k)]|_{s=p_i} = r_i

That is, the coefficient or residue r_i in equation (5) can be found by

r_i = (s − p_i) F(s)|_{s=p_i}          — (6)
#1. Use the partial fraction expansion method to simplify F(s) in the given relation, and find the time domain function f(t) corresponding to F(s).
F(s) = 2/(s² − 3s + 2)
Solution: The given function F(s) can be factorized and written as
F(s) = 2/(s² − 3s + 2) = 2/[(s − 1)(s − 2)] = r₁/(s − 1) + r₂/(s − 2)
where
r₁ = (s − 1)F(s)|_{s=1} = (s − 1) · 2/[(s − 1)(s − 2)]|_{s=1} = −2
r₂ = (s − 2)F(s)|_{s=2} = (s − 2) · 2/[(s − 1)(s − 2)]|_{s=2} = 2
Therefore,
F(s) = −2/(s − 1) + 2/(s − 2)          — (7)
Applying the inverse Laplace transform to equation (7), we get the time domain function
f(t) = −2eᵗ u(t) + 2e^{2t} u(t)
#2. Use the partial fraction expansion method to simplify F(s) in the given relation, and find the time domain function f(t) corresponding to F(s).
F(s) = (3s + 2)/(s² + 3s + 2)
Solution: The given function F(s) can be factorized and written as
F(s) = (3s + 2)/(s² + 3s + 2) = (3s + 2)/[(s + 1)(s + 2)] = r₁/(s + 1) + r₂/(s + 2)
r₁ = (s + 1)F(s)|_{s=−1} = (s + 1) · (3s + 2)/[(s + 1)(s + 2)]|_{s=−1} = −1
r₂ = (s + 2)F(s)|_{s=−2} = (s + 2) · (3s + 2)/[(s + 1)(s + 2)]|_{s=−2} = 4
Therefore,
F(s) = −1/(s + 1) + 4/(s + 2)          — (8)
Applying the inverse Laplace transform to equation (8), we get the time domain function
f(t) = −e^{−t} u(t) + 4e^{−2t} u(t)
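Both worked examples can be reproduced symbolically; the sketch below (assuming SymPy is available) obtains the partial fractions with sympy.apart and the time functions with sympy.inverse_laplace_transform.

```python
# A symbolic check (assuming SymPy) of the two worked examples above.
import sympy as sp

t, s = sp.symbols('t s', positive=True)

F1 = 2 / (s**2 - 3*s + 2)
F2 = (3*s + 2) / (s**2 + 3*s + 2)

for F in (F1, F2):
    print(sp.apart(F, s))                          # residues r1, r2
    print(sp.inverse_laplace_transform(F, s, t))   # corresponding f(t)
```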

#Practise Problem 1:
Use the partial fraction expansion method to simplify F(s) and G(s) in the given relations, and find the time domain functions f(t) and g(t) corresponding to F(s) and G(s):
F(s) = (3s² + 2s + 5)/(s³ + 12s² + 44s + 48)
G(s) = (2s² + 5s + 5)/[(s + 1)²(s + 2)]
II. COMPLEX POLES:

If F(s) has complex poles, then the partial fraction expansion of F(s) can be expressed as
F(s) = r₁/(s − p₁) + r₁*/(s − p₁*)          — (9)

where r₁* is the complex conjugate of r₁ and p₁* is the complex conjugate of p₁. In other words, complex conjugate poles result in complex conjugate coefficients.

#3. Use the partial fraction expansion method to simplify F(s) in the given relation, and find the time domain function f(t) corresponding to F(s).
F(s) = s/(s² + 2s + 2)
Solution: The given function F(s) cannot be factorized over the reals; the complex roots of the denominator are found using the quadratic formula:
s₁,₂ = (−b ± √(b² − 4ac))/(2a) = (−2 ± √(2² − 4·1·2))/2 = −1 ± j1
Then F(s) can be represented in the form of partial fractions as
F(s) = r₁/(s + 1 + j1) + r₁*/(s + 1 − j1)
r₁ = (s + 1 + j1)F(s)|_{s=−1−j1} = (s + 1 + j1) · s/[(s + 1 + j1)(s + 1 − j1)]|_{s=−1−j1}

⇒ r₁ = (−1 − j1)/(−j2) = 0.5 − j0.5
r₁* = (s + 1 − j1)F(s)|_{s=−1+j1} = (s + 1 − j1) · s/[(s + 1 + j1)(s + 1 − j1)]|_{s=−1+j1}

⇒ r₁* = (−1 + j1)/(j2) = 0.5 + j0.5
F(s) = (0.5 − j0.5)/(s − (−1 − j)) + (0.5 + j0.5)/(s − (−1 + j))
Taking the inverse Laplace transform of the above relation, we get
f(t) = (0.5 − j0.5)e^{(−1−j)t} u(t) + (0.5 + j0.5)e^{(−1+j)t} u(t)
Expanding this relation, separating the real and imaginary components, and writing it in simplified form:
f(t) = (0.5 − j0.5)e^{−t} e^{−jt} u(t) + (0.5 + j0.5)e^{−t} e^{+jt} u(t)
⇒ 0.5e^{−t} e^{−jt} u(t) − j0.5 e^{−t} e^{−jt} u(t) + 0.5e^{−t} e^{+jt} u(t) + j0.5 e^{−t} e^{+jt} u(t)
⇒ 0.5e^{−t}(e^{+jt} + e^{−jt}) u(t) + j0.5 e^{−t}(e^{+jt} − e^{−jt}) u(t)
⇒ e^{−t} cos t · u(t) − e^{−t} sin t · u(t)
⇒ f(t) = e^{−t}(cos t − sin t) u(t)

#4. Use the partial fraction expansion method to simplify F(s) in the given relation, and find the time domain function f(t) corresponding to F(s).
F(s) = (s + 3)/(s³ + 5s² + 12s + 8)
Solution: The denominator factors as (s + 1)(s² + 4s + 8), so
F(s) = (s + 3)/[(s + 1)(s² + 4s + 8)]
The quadratic factor (s² + 4s + 8) cannot be factorized over the reals; its complex roots are found from the quadratic formula:
s₂,₃ = (−b ± √(b² − 4ac))/(2a) = (−4 ± √(4² − 4·1·8))/2 = −2 ± j2
With the help of the complex roots, F(s) can be represented in the form of partial fractions as
F(s) = r₁/(s + 1) + r₂/(s + 2 − 2j) + r₂*/(s + 2 + 2j)          — (4.1)
r₁ = (s + 1)F(s)|_{s=−1} = (s + 1) · (s + 3)/[(s + 1)(s² + 4s + 8)]|_{s=−1} = (−1 + 3)/((−1)² + 4(−1) + 8) = 2/5
r₂ = (s + 2 − 2j)F(s)|_{s=−2+2j} = (s + 2 − 2j) · (s + 3)/[(s + 1)(s + 2 − 2j)(s + 2 + 2j)]|_{s=−2+2j}
⇒ r₂ = ((−2 + 2j) + 3)/[(−2 + 2j + 1)(−2 + 2j + 2 + 2j)] = (1 + 2j)/[(−1 + 2j)(4j)] = (1 + 2j)/(−8 − 4j)

Rationalizing the value of r₂, we get

r₂ = (1 + 2j)/(−8 − 4j) · (−8 + 4j)/(−8 + 4j) = (−8 + 4j − 16j + 8j²)/(64 + 16) = (−16 − 12j)/80 = −4/20 − 3j/20 = −1/5 − 3j/20
Since r₂ and r₂* are complex conjugates, we can write the results as
r₂ = −1/5 − 3j/20 and r₂* = −1/5 + 3j/20
Substituting the r₁, r₂ and r₂* values in equation (4.1), we get
F(s) = (2/5)/(s + 1) + (−1/5 − 3j/20)/(s + 2 − 2j) + (−1/5 + 3j/20)/(s + 2 + 2j)
Applying the inverse Laplace transform,
ℒ⁻¹{F(s)} = ℒ⁻¹{(2/5)/(s + 1)} + ℒ⁻¹{(−1/5 − 3j/20)/(s + 2 − 2j) + (−1/5 + 3j/20)/(s + 2 + 2j)}          — (4.2)

From equation (4.2) we separate the last two terms and work them out on their own. Combining them over a common denominator,
(−1/5 − 3j/20)/(s + 2 − 2j) + (−1/5 + 3j/20)/(s + 2 + 2j) = (−2/5)·(s + 1/2)/(s² + 4s + 8) = (−2/5)·(s + 1/2)/((s + 2)² + 2²)

By adding and subtracting 3/2 in the numerator we can modify the relation as

⇒ (−2/5)·[(s + 2 + 1/2 − 2)/((s + 2)² + 2²)] = (−2/5)·[(s + 2 − 3/2)/((s + 2)² + 2²)]

⇒ (−2/5)·[(s + 2)/((s + 2)² + 2²)] + (3/5)·[1/((s + 2)² + 2²)]          — (4.3)

Recall that
e^{−at} cos ωt · u(t) ⇔ (s + a)/((s + a)² + ω²)          — (4.4)
e^{−at} sin ωt · u(t) ⇔ ω/((s + a)² + ω²)          — (4.5)
Using relations (4.4) and (4.5) in (4.3), we get
⇒ (−2/5) e^{−2t} cos 2t + (3/5)·(1/2)·[2/((s + 2)² + 2²)]
⇒ (−2/5) e^{−2t} cos 2t + (3/10)·[2/((s + 2)² + 2²)]
⇒ (−2/5) e^{−2t} cos 2t + (3/10) e^{−2t} sin 2t          — (4.6)

Equation (4.6) is the inverse Laplace transform of the last two components of (4.2), so the complete result can be written as

f(t) = (2/5) e^{−t} u(t) − (2/5) e^{−2t} cos 2t · u(t) + (3/10) e^{−2t} sin 2t · u(t)
#Practise Problem 2:
Use the partial fraction expansion method to simplify F₁(s), F₂(s) and F₃(s) in the given relations, and find the time domain functions f₁(t), f₂(t) and f₃(t) corresponding to F₁(s), F₂(s) and F₃(s):
F₁(s) = (2s + 3)/(s³ + 6s² + 16s + 16)
F₂(s) = (2s + 1)/(s³ + 3s² + 4s + 2)
F₃(s) = (4s² + 8s + 10)/[(s + 2)(s² + 2s + 5)]
III. MULTIPLE POLES / REPEATED POLES:
If F(s) has multiple poles, then the partial fraction expansion of F(s) can be expressed as
F(s) = N(s)/[(s − p₁)^m (s − p₂) ⋯ (s − p_{n−1})(s − p_n)]          — (10)
where F(s) has simple poles, but one of the poles, say p₁, has multiplicity m. Then, with residues r₁₁, r₁₂, r₁₃, ..., r₁ₘ corresponding to the multiple pole p₁, the partial fraction expansion of equation (10) is
F(s) = r₁₁/(s − p₁)^m + r₁₂/(s − p₁)^{m−1} + r₁₃/(s − p₁)^{m−2} + ⋯ + r₁ₘ/(s − p₁) + r₂/(s − p₂) + r₃/(s − p₃) + ⋯ + r_n/(s − p_n)          — (11)
For the simple poles p₁, p₂, ..., p_n we proceed as before, that is, we find the residues as

r_k = lim_{s→p_k} (s − p_k)F(s) = (s − p_k)F(s)|_{s=p_k}          — (12)

The residues of the repeated pole can be found from

r₁₁ = (s − p₁)^m F(s)|_{s=p₁}          — (13)
r₁₂ = (d/ds)[(s − p₁)^m F(s)]|_{s=p₁}          — (14)

Similarly, we find the remaining residues from the generalized relation

r₁ₖ = (1/(k − 1)!) (d^{k−1}/ds^{k−1})[(s − p₁)^m F(s)]|_{s=p₁}          — (15)
#5 . Use the partial fraction expansion method to simplify 𝐹(𝑠) of given relation, and
find the time domain function 𝑓 (𝑡) corresponding to 𝐹(𝑠).
F(s) = (s + 3)/[(s + 2)(s + 1)²]
Solution: From the given Laplace transform we observe a pole at s = −1 repeated two times. The partial fraction expansion can therefore be written as

F(s) = (s + 3)/[(s + 2)(s + 1)²] = r1/(s + 2) + r21/(s + 1)² + r22/(s + 1)
The residues are,
r1 = (s + 2)F(s)|s=−2 = (s + 2)·(s + 3)/[(s + 2)(s + 1)²] |s=−2 = 1

r21 = (s + 1)²F(s)|s=−1 = (s + 1)²·(s + 3)/[(s + 2)(s + 1)²] |s=−1 = 2

r22 = d/ds [(s + 1)²·(s + 3)/((s + 2)(s + 1)²)] |s=−1 = d/ds [(s + 3)/(s + 2)] |s=−1 = −1
* The value of r22 can also be found without differentiation. Substituting the already-known residues into the partial fraction expansion and letting s = 0, we get

F(s)|s=0 = (s + 3)/[(s + 2)(s + 1)²] |s=0 = 1/(s + 2)|s=0 + 2/(s + 1)²|s=0 + r22/(s + 1)|s=0

or

3/2 = 1/2 + 2 + r22   ⇒   r22 = −1
Then, we can have,
F(s) = (s + 3)/[(s + 2)(s + 1)²] = 1/(s + 2) + 2/(s + 1)² + (−1)/(s + 1)
Applying Inverse Laplace Transform on both sides, we get
ℒ⁻¹{F(s)} = ℒ⁻¹{1/(s + 2)} + 2ℒ⁻¹{1/(s + 1)²} − ℒ⁻¹{1/(s + 1)}

f(t) = e^(−2t) + 2t e^(−t) − e^(−t)
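As a quick symbolic check of this repeated-pole example (a sketch only, assuming SymPy is available), the inverse transform can also be obtained directly:

```python
# Symbolic verification of f(t) for F(s) = (s + 3)/((s + 2)(s + 1)^2)  (assumes SymPy).
from sympy import symbols, inverse_laplace_transform, simplify

s, t = symbols('s t', positive=True)
F = (s + 3) / ((s + 2) * (s + 1)**2)

f = inverse_laplace_transform(F, s, t)
print(simplify(f))   # expected: exp(-2*t) + 2*t*exp(-t) - exp(-t)   (times the unit step)
```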


#Practice Problem 3:
Use the partial fraction expansion method to simplify F1(s), F2(s), F3(s) and F4(s) below, and find the time-domain functions f1(t), f2(t), f3(t) and f4(t) corresponding to F1(s), F2(s), F3(s) and F4(s).

F1(s) = (s² + 3s + 1)/[(s + 1)³(s + 2)²]
F2(s) = (2s + 1)/(s + 2)³

F3(s) = (3s² + 8s + 6)/[(s + 2)(s² + 2s + 1)]

F4(s) = (s + 2)/[s(s + 1)²]
IV. Case m ≥ n (Improper Function):
If F(s) is an improper function, i.e. m ≥ n, we must first divide the numerator function N(s) by the denominator function D(s) to obtain an expression of the form

F(s) = k0 + k1·s + k2·s² + ⋯ + k_(m−n)·s^(m−n) + N(s)/D(s)                                ----(16)

where N(s)/D(s) is a proper rational function.

#6 . Derive the inverse Laplace Transform 𝑓(𝑡) of 𝐹(𝑠)


F(s) = (s² + 2s + 2)/(s + 1)
Solution: First, by the long division method we make the function proper:

F(s) = (s² + 2s + 2)/(s + 1) = 1/(s + 1) + 1 + s
Now we know that

1/(s + 1) ⇔ e^(−t)   and   1 ⇔ δ(t),   but   s ⇔ ?

We know that u′(t) = δ(t) and u′′(t) = δ′(t). From the time-differentiation property,
ℒ{u′′(t)} = s²·ℒ{u(t)} − s·u(0⁻) − u′(0⁻) = s²·(1/s) = s, since the initial-value terms are zero at t = 0⁻.

So s ⇔ δ′(t), and the inverse transform becomes

F(s) = (s² + 2s + 2)/(s + 1) = 1/(s + 1) + 1 + s ⇔ e^(−t) + δ(t) + δ′(t)
#Practice Problem 4:
Use an alternative method to solve the given function and find its inverse Laplace transform.

F(s) = (s³ + 8s² + 24s + 32)/(s² + 6s + 8)
V. Alternative Method / Clearing-Fractions Method:
Partial fractions can also be performed with the method of clearing the fractions. i.e.,
making the denominator of both sides the same, then equating the numerator.

➢ If 𝐹 (𝑠) is proper rational function we follow this procedure directly


➢ If 𝐹 (𝑠) is improper, then we first perform a long division, and then work with the
quotient and the remainder.
➢ Then F(s) can be expressed as

F(s) = N(s)/D(s) = r1/(s − a) + r2/(s − a)² + ⋯ + rm/(s − a)^m                            ----(17)
Let, 𝑠 2 + 𝛼𝑠 + 𝛽 be a quadratic factor 𝐷(𝑠) and suppose that (𝑠 2 + 𝛼𝑠 + 𝛽 )𝑛 is the
highest power of this factor that divides 𝐷(𝑠), now we perform the following steps.

1. To this factor, we assign the sum of ‘n’ partial fractions, that is


(r1·s + k1)/(s² + αs + β) + (r2·s + k2)/(s² + αs + β)² + ⋯ + (rn·s + kn)/(s² + αs + β)^n
2. We repeat the procedure with step 1 for each of the distinct linear and quadratic
factors of 𝐷(𝑠).
3. We set the given 𝐹(𝑠) equal to the sum of these partial fractions.
4. We clear the resulting expression of fractions and rearrange the terms in
decreasing powers of ‘s’.
5. We equate the coefficients of corresponding powers of “s”
6. We solve the resulting equations for residues.

#7. Express F(s) below as a sum of partial fractions using the method of clearing fractions.

F(s) = (−2s + 4)/[(s² + 1)(s − 1)²]

Solution: Applying steps 1 through 3 to the above function, we can write

F(s) = (−2s + 4)/[(s² + 1)(s − 1)²] = (r1·s + A)/(s² + 1) + r21/(s − 1)² + r22/(s − 1)
Using step 4, we obtain,
−2𝑠 + 4 = (𝑟1 𝑠 + 𝐴)(𝑠 − 1)2 + 𝑟21 (𝑠 2 + 1) + 𝑟22 (𝑠 − 1)(𝑠 2 + 1)
With the application of step 4 we get,
−2𝑠 + 4 = (𝑟1 + 𝑟22 )𝑠 3 + (−2𝑟1 + 𝐴 − 𝑟22 + 𝑟21 )𝑠 2 + (𝑟1 − 2𝐴 + 𝑟22 )𝑠
+ (𝐴 − 𝑟22 + 𝑟21 )

By Equating the powers of ‘s’, we get

0 = 𝑟1 + 𝑟22

0 = −2𝑟1 + 𝐴 − 𝑟22 + 𝑟21

−2 = 𝑟1 − 2𝐴 + 𝑟22

4 = 𝐴 − 𝑟22 + 𝑟21

From the above relations, we can conclude the value as,

𝑟1 = 2, 𝑟21 = 1, 𝑟22 = −2, 𝐴=1

F(s) = (−2s + 4)/[(s² + 1)(s − 1)²] = (2s + 1)/(s² + 1) + 1/(s − 1)² − 2/(s − 1)

F(s) = 2s/(s² + 1) + 1/(s² + 1) + 1/(s − 1)² − 2/(s − 1)
Applying Inverse Laplace Transform on both sides, we get
𝒇(𝒕) = 𝟐 𝐜𝐨𝐬 𝒕 𝒖(𝒕) + 𝐬𝐢𝐧 𝒕 𝒖(𝒕) + 𝒕𝒆𝒕 𝒖(𝒕) − 𝟐𝒆𝒕 𝒖(𝒕)

#Practice Problem 5. Express F(s) below as a sum of partial fractions using the method of clearing fractions.

F(s) = (s + 3)/(s³ + 5s² + 12s + 8)
Chapter 4 Impulse Response & Convolution

In signal processing, the impulse response, or impulse response function


(IRF), of a dynamic system is its output when presented with a brief input signal, called
an impulse. More generally, an impulse response is the reaction of any dynamic system
in response to some external change. In both cases, the impulse response describes the
reaction of the system as a function of time (or possibly as a function of some
other independent variable that parameterizes the dynamic behaviour of the system).
In all these cases, the dynamic system and its impulse response may be actual physical
objects, or may be mathematical systems of equations describing such objects.
Since the impulse function contains all frequencies, the impulse response defines the
response of a linear time-invariant system for all frequencies

The impulse response and frequency response are two attributes that are useful for
characterizing linear time-invariant (LTI) systems. They provide two different ways of
calculating what an LTI system's output will be for a given input signal. A continuous-
time LTI system is usually illustrated like this:

In general, the system H maps its input signal x(t) to a corresponding output signal y(t).
There are many types of LTI systems, and they can apply very different transformations
to the signals that pass through them. But they all share two key characteristics:
• The system is linear, so it obeys the principle of superposition. Stated simply, if you
linearly combine two signals and input them to the system, the output is the same
linear combination of what the outputs would have been had the signals been
passed through individually. That is, if 𝑥1 (𝑡) maps to an output
of 𝑦1 (𝑡) and 𝑥2 (𝑡) maps to an output of 𝑦2 (𝑡), then for all values of a1 and a2,
𝐻{𝑎1 𝑥1 (𝑡) + 𝑎2 𝑥2 (𝑡)} = 𝑎1 𝑦1 (𝑡) + 𝑎2 𝑦2 (𝑡)
• The system is time-invariant, so its characteristics do not change with time. If you
add a delay to the input signal, then you simply add the same delay to the output. For an
input signal x(t) that maps to an output signal y(t), then for all values of τ,
𝐻 {𝑥(𝑡 − 𝜏)} = 𝑦(𝑡 − 𝜏)
Discrete-time LTI systems have the same properties; the notation is different
because of the discrete-versus-continuous difference, but they are a lot alike. These
characteristics allow the operation of the system to be straightforwardly characterized


using its impulse and frequency responses. They provide two perspectives on the
system that can be used in different contexts.
Impulse Response:
The impulse that is referred to in the term impulse response is generally a short-
duration time-domain signal. For continuous-time systems, this is the Dirac delta
function δ(t), while for discrete-time systems, the Kronecker delta function δ[n] is
typically used. A system's impulse response (often annotated as h(t) for continuous-
time systems or h[n] for discrete-time systems) is defined as the output signal that
results when an impulse is applied to the system input.
Why is this useful?
It allows us to predict what the system's output will look like in the time domain.
Remember the linearity and time-invariance properties mentioned above? If we can
decompose the system's input signal into a sum of a bunch of components, then the
output is equal to the sum of the system outputs for each of those components. What if
we could decompose our input signal into a sum of scaled and time-shifted impulses?
Then, the output would be equal to the sum of copies of the impulse response, scaled
and time-shifted in the same way.
For discrete-time systems, this is possible, because you can write any signal x[n] as a
sum of scaled and time-shifted Kronecker delta functions:
x[n] = Σ_(k=0)^(∞) x[k]·δ[n − k]

Each term in the sum is an impulse scaled by the value of x[n] at that time instant. What
would we get if we passed x[n] through an LTI system to yield y[n]? Simple: each scaled
and time-delayed impulse that we put in yields a scaled and time-delayed copy of the
impulse response at the output. That is:
y[n] = Σ_(k=0)^(∞) x[k]·h[n − k]

where h[n] is the system's impulse response. The above equation is the convolution
theorem for discrete-time LTI systems. That is, for any signal x[n] that is input to an LTI
system, the system's output y[n] is equal to the discrete convolution of the input signal
and the system's impulse response.
For continuous-time systems, the above straightforward decomposition isn't


possible in a strict mathematical sense (the Dirac delta has zero width and infinite
height), but at an engineering level, it's an approximate, intuitive way of looking at the
problem. A similar convolution theorem holds for these systems:
y(t) = ∫_(−∞)^(+∞) x(τ)·h(t − τ) dτ

where, again, h(t) is the system's impulse response. There are a number of ways of
deriving this relationship (I think you could make a similar argument as above by
claiming that Dirac delta functions at all time shifts make up an orthogonal basis for
the L2 Hilbert space, noting that you can use the delta function's sifting property to
project any function in L2 onto that basis, therefore allowing you to express system
outputs in terms of the outputs associated with the basis (i.e. time-shifted impulse
responses), but I'm not a licensed mathematician, so I'll leave that aside).
In summary: For both discrete- and continuous-time systems, the impulse response is
useful because it allows us to calculate the output of these systems for any input signal;
the output is simply the input signal convolved with the impulse response function.
Frequency response:
An LTI system's frequency response provides a similar function: it allows you to
calculate the effect that a system will have on an input signal, except those effects are
illustrated in the frequency domain. Recall the definition of the Fourier transform:
X(f) = ∫_(−∞)^(+∞) x(t) e^(−j2πft) dt

More importantly for the sake of this illustration, look at its inverse:
x(t) = ∫_(−∞)^(+∞) X(f) e^(j2πft) df

In essence, this relation tells us that any time-domain signal x(t) can be broken
up into a linear combination of many complex exponential functions at varying
frequencies (there is an analogous relationship for discrete-time signals called
the discrete-time Fourier transform; I only treat the continuous-time case below for
simplicity). For a time-domain signal x(t), the Fourier transform yields a corresponding
function X(f) that specifies, for each frequency f, the scaling factor to apply to the
complex exponential at frequency f in the aforementioned linear combination. These
scaling factors are in general complex numbers. One way of looking at complex numbers
is in amplitude/phase format, that is:
𝑋(𝑓) = 𝐴(𝑓)𝑒 𝑗∅(𝑓)
Looking at it this way, then, x(t) can be written as a linear combination of many
complex exponential functions, each scaled in amplitude by the function A(f) and shifted
in phase by the function ϕ(f). This lines up well with the LTI system properties that we
discussed previously; if we can decompose our input signal x(t) into a linear
combination of a bunch of complex exponential functions, then we can write the output
of the system as the same linear combination of the system response to those complex
exponential functions.
Here's where it gets better: exponential functions are the eigen functions of
linear time-invariant systems. The idea is, similar to eigenvectors in linear algebra, if
you put an exponential function into an LTI system, you get the same exponential
function out, scaled by a (generally complex) value. This has the effect of changing the
amplitude and phase of the exponential function that you put in.
This is immensely useful when combined with the Fourier-transform-based
decomposition discussed above. As we said before, we can write any signal x(t) as a
linear combination of many complex exponential functions at varying frequencies. If we
pass x(t) into an LTI system, then (because those exponentials are eigenfunctions of the
system), the output contains complex exponentials at the same frequencies, only scaled
in amplitude and shifted in phase. These effects on the exponentials' amplitudes and
phases, as a function of frequency, are the system's frequency response. That is, for an
input signal with Fourier transform X(f) passed into system H to yield an output with a
Fourier transform Y(f),
𝑌(𝑓) = 𝐻 (𝑓). 𝑋(𝑓) = 𝐴(𝑓)𝑒 𝑗𝜑(𝑓) 𝑋(𝑓)
In summary: So, if we know a system's frequency response H(f) and the Fourier
transform of the signal that we put into it X(f), then it is straightforward to calculate the
Fourier transform of the system's output; it is merely the product of the frequency
response and the input signal's transform. For each complex exponential frequency that
is present in the spectrum X(f), the system has the effect of scaling that exponential in
amplitude by A(f) and shifting the exponential in phase by ϕ(f) radians.
Bringing them together:
An LTI system's impulse response and frequency response are intimately


related. The frequency response is simply the Fourier transform of the system's impulse
response (this is a standard result that follows from the convolution property of the Fourier transform). So, for
a continuous-time system:
H(f) = ∫_(−∞)^(+∞) h(t) e^(−j2πft) dt

So, given either a system's impulse response or its frequency response, you can
calculate the other. Either one is sufficient to fully characterize the behaviour of the
system; the impulse response is useful when operating in the time domain and the
frequency response is useful when analyzing behaviour in the frequency domain.
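The relationship "frequency response = Fourier transform of the impulse response" can also be illustrated numerically. The sketch below (an assumption: NumPy is available; the first-order example h(t) = e^(−t)u(t) is chosen only for illustration) approximates the Fourier integral with an FFT and compares it against the known transform 1/(1 + j2πf).

```python
# Hedged numeric illustration: H(f) obtained as the FFT of h(t)   (assumes NumPy).
import numpy as np

dt = 1e-3
t = np.arange(0, 20, dt)
h = np.exp(-t)                               # impulse response h(t) = e^{-t} u(t)

f = np.fft.rfftfreq(len(t), dt)              # frequency grid in Hz
H_fft = np.fft.rfft(h) * dt                  # Riemann-sum approximation of the Fourier integral
H_exact = 1.0 / (1.0 + 1j * 2 * np.pi * f)   # analytic Fourier transform of e^{-t} u(t)

print(np.max(np.abs(H_fft - H_exact)))       # small, limited only by dt and truncation of the tail
```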
TRANSFER FUNCTION:
A transfer function is the frequency-dependent ratio of a forced function to a
forcing function (or of an output to an input). The idea of a transfer function was
implicit when we used the concepts of impedance and admittance to relate voltage and
current.
The transfer function H(ω) of a circuit is the frequency-dependent ratio of a
phasor output Y(ω) (an element voltage or current) to a phasor input X(ω) (source
voltage or current).
H(ω) = Y(ω)/X(ω)
The impulse response of any system or network can be computed by the following methods:
1. Using Inverse Laplace and Inverse Fourier Transforms
2. Using Convolution of two signals
3. Using State space Representations
01. Using Laplace Transform:
➢ The impulse response h(t) of a continuous-time LTI system is defined to be the response of the
system when the input is δ(t), i.e., h(t) = Response of the system to {δ(t)}.
➢ For any system if 𝑥(𝑡) is an input and its corresponding Laplace Transform is 𝑋(𝑠)
and its Fourier Transform is 𝑋(𝐹).
➢ Similarly, the output of the system is 𝑦(𝑡), then 𝑌(𝑠) and 𝑌(𝐹) are its Laplace and
Fourier Transforms respectively.
➢ If h(t) is the system (impulse response) function and H(s) is its Laplace Transform, then
Laplace Transform                                Fourier Transform
x(t) → X(s)                                      x(t) → X(F)
y(t) → Y(s)                                      y(t) → Y(F)
h(t) → H(s)                                      h(t) → H(F)
y(t) = x(t) ∗ h(t) ⟶ Y(s) = X(s)·H(s)            y(t) = x(t) ∗ h(t) ⟶ Y(F) = X(F)·H(F)
H(s) = Y(s)/X(s)                                 H(F) = Y(F)/X(F)

Impulse Response:
It is the Inverse Laplace Transform (or) Inverse Fourier Transform of the Frequency Response (or) Transfer Function of the system.
i.e., the Frequency Response of the system is the ratio of the Laplace Transform (or) Fourier Transform of the output to the Laplace Transform (or) Fourier Transform of the input.

Impulse Response h(t) = L⁻¹[H(s)]   or   h(t) = F⁻¹[H(F)]

h(t) = L⁻¹[H(s)] = L⁻¹[Y(s)/X(s)]
or
h(t) = F⁻¹[H(F)] = F⁻¹[Y(F)/X(F)]
For any electrical network, the components R, L, C, 𝒱(t) and 𝒾(t) are represented in the Laplace domain as
R ⇒ R          𝒱(t) ⇒ V(s)
L ⇒ sL         𝒾(t) ⇒ I(s)
C ⇒ 1/(sC)
#Example 1:
Compute the impulse response of the series RC circuit of the figure in terms of the
constants R and C, where the response is considered to be the voltage across the
capacitor and 𝒱c(0⁻) = 0. Then compute the current through the capacitor.
Solution: The input voltage 𝒱i(t) = δ(t) = 𝒾(t)[R + Xc]

⇒ Vi(s) = I(s)[R + 1/(Cs)]                                                                ----(1.1)

The output voltage 𝒱o(t) = 𝒾(t)[Xc]

⇒ Vo(s) = I(s)[1/(Cs)]                                                                    ----(1.2)

From the frequency response relation we have

H(s) = Vo(s)/Vi(s) = I(s)·[1/(Cs)] / ( I(s)·[R + 1/(Cs)] )

⇒ [1/(Cs)] / [R + 1/(Cs)] ⇒ 1/(1 + sRC) ⇒ (1/(RC))·1/[s + 1/(RC)]                         ----(1.3)
Applying the Inverse Laplace Transform to 1.3 we get the following relation:

L⁻¹[H(s)] = (1/(RC))·L⁻¹{ 1/[s + 1/(RC)] }

h(t) = (1/(RC)) e^(−t/(RC)) u(t)                                                          ----(1.4)
The current through the capacitor 𝒾c can now be computed as

𝒾c = C·d𝒱c/dt,   thus   𝒾c = C·d/dt[h(t)]

𝒾c = C·d/dt[ (1/(RC)) e^(−t/(RC)) u(t) ]

Using the product rule d/dt[uv], where u = (1/(RC)) e^(−t/(RC)) and v = u(t),

𝒾c = −(1/(R²C)) e^(−t/(RC)) u(t) + (1/R) e^(−t/(RC)) δ(t)

Using the sampling property of the delta function, we get

𝒾c = (1/R) δ(t) − (1/(R²C)) e^(−t/(RC)) u(t)
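A numerical sketch of this result is given below (assumptions: SciPy/NumPy are available, and the R and C values are example values, not taken from the figure). It builds H(s) = (1/RC)/(s + 1/RC) and checks that the computed impulse response matches (1/RC)e^(−t/RC).

```python
# Hedged numeric check of h(t) = (1/RC) e^{-t/RC} u(t)   (assumes SciPy and NumPy).
import numpy as np
from scipy.signal import impulse

R, C = 1.0e3, 1.0e-6                      # example values (1 kOhm, 1 uF), not from the figure
num = [1.0 / (R * C)]                     # H(s) = (1/RC) / (s + 1/RC)
den = [1.0, 1.0 / (R * C)]

t, h = impulse((num, den), T=np.linspace(0, 5 * R * C, 500))
print(np.allclose(h, (1.0 / (R * C)) * np.exp(-t / (R * C)), rtol=1e-3))   # True
```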
#Example 2:
For the circuit, compute the impulse response ℎ(𝑡) = 𝒱𝑐 (𝑡) given that the initial
conditions are zero. That is, 𝒾𝐿 (0− ) = 0 𝑎𝑛𝑑 𝒱𝑐 (0− ) = 0.

Solution: The input voltage 𝒱i(t) = δ(t) = 𝒾(t)[R + Xc + XL]

⇒ Vi(s) = I(s)[R + sL + 1/(Cs)]                                                           ----(2.1)

The output voltage h(t) = 𝒱c(t) = 𝒾(t)[Xc]

⇒ Vc(s) = I(s)[1/(Cs)]                                                                    ----(2.2)

From the frequency response relation we have

H(s) = Vo(s)/Vi(s) = I(s)·[1/(Cs)] / ( I(s)·[R + sL + 1/(Cs)] )

H(s) ⇒ [1/(Cs)] / [R + sL + 1/(Cs)] ⇒ 1/(s²LC + sRC + 1)                                  ----(2.3)
Substituting the values of R, L and C from the figure in equation 2.3, we get

H(s) ⇒ 1/( s²·(1/3) + s·(4/3) + 1 ) ⇒ 3/(s² + 4s + 3)

Then applying Inverse Laplace Transform, we get


𝒉(𝒕) = 𝟏. 𝟓(𝒆−𝒕 − 𝒆−𝟑𝒕 )
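The same result can be verified symbolically (a sketch, assuming SymPy is available) by expanding H(s) = 3/(s² + 4s + 3) into partial fractions and inverting:

```python
# Hedged symbolic check of h(t) = 1.5(e^{-t} - e^{-3t})   (assumes SymPy).
from sympy import symbols, apart, inverse_laplace_transform

s, t = symbols('s t', positive=True)
H = 3 / (s**2 + 4*s + 3)

print(apart(H, s))                          # 3/(2*(s + 1)) - 3/(2*(s + 3))
print(inverse_laplace_transform(H, s, t))   # 3*exp(-t)/2 - 3*exp(-3*t)/2, i.e. 1.5(e^{-t} - e^{-3t})
```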
# Practice Problem 1
Compute the impulse response of the circuit considering zero initial values
EVEN AND ODD FUNCTIONS OF TIME:


A function f(t) is an even function of time if the relation f(−t) = f(t) holds; that is, for an even function, replacing 't' with '−t' does not change the function. A function f(t) is odd if it satisfies f(t) = −f(−t).

A function x(t) that is neither even nor odd can still be expressed in terms of an even part and an odd part as

xe(t) = (1/2)[x(t) + x(−t)]   and   xe(n) = (1/2)[x(n) + x(−n)]

xo(t) = (1/2)[x(t) − x(−t)]   and   xo(n) = (1/2)[x(n) − x(−n)]
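A minimal numeric sketch of this decomposition is shown below (assuming NumPy; the causal exponential is only an example signal). On a time grid that is symmetric about t = 0, x(−t) is simply the reversed array.

```python
# Hedged sketch of the even/odd decomposition   (assumes NumPy).
import numpy as np

t = np.linspace(-5, 5, 1001)            # symmetric grid, so x(-t) is x reversed
x = np.exp(-t) * (t >= 0)               # example signal: causal, neither even nor odd

xe = 0.5 * (x + x[::-1])                # x_e(t) = [x(t) + x(-t)] / 2
xo = 0.5 * (x - x[::-1])                # x_o(t) = [x(t) - x(-t)] / 2
print(np.allclose(xe + xo, x))          # True: the two parts reconstruct x(t)
```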

#Example 3:
Determine whether the delta function is an even or an odd function of time.
Solution: Let f(t) be an arbitrary function of time that is continuous at t=t0. Then, by the
shifting property of the delta function, we get
∫_(−∞)^(+∞) f(t)δ(t − t0) dt = f(t0)   and, for t0 = 0,   ∫_(−∞)^(+∞) f(t)δ(t) dt = f(0)

Also,

∫_(−∞)^(+∞) fe(t)δ(t) dt = fe(0)   and   ∫_(−∞)^(+∞) fo(t)δ(t) dt = fo(0)
As stated earlier, an odd function 𝑓𝑜 (𝑡) evaluated at t=0 is zero. That is 𝑓𝑜 (0) = 0.
Therefore, from the relation above will become
∫_(−∞)^(+∞) fo(t)δ(t) dt = fo(0) = 0

And this indicates that the product 𝑓𝑜 (𝑡)𝛿(𝑡) is an odd function of ‘t’. then since 𝑓𝑜 (𝑡) is
odd, it follows that 𝛿(𝑡) must be an even function of ‘t’ for the above relation to hold.

02. Using Convolution of Two signals:


CONVOLUTION:
Consider a network whose input is either δ(t) or x(t); the corresponding output is the
impulse response h(t) or y(t), respectively.

Or, in general, an arbitrary signal x(t) can be represented as

x(t) = ∫_(−∞)^(+∞) x(τ)δ(t − τ) dτ                                                        ----(1)

The system output is given by y(t) = T[x(t)]                                              ----(2)

Substituting (1) in (2) yields   y(t) = T[ ∫_(−∞)^(+∞) x(τ)δ(t − τ) dτ ]                  ----(3)

For linear systems, the operator can be moved inside the integral:

y(t) = ∫_(−∞)^(+∞) x(τ)·T[δ(t − τ)] dτ                                                    ----(4)

If the response of the system due to impulse 𝛿 (𝑡) is ℎ(𝑡) then the response of the
system due to delayed impulse is
ℎ(𝑡, 𝜏) = 𝑇[𝛿(𝑡 − 𝜏)] − − − −(5)
substituting equation (5) in equation (4) we get,
y(t) = ∫_(−∞)^(+∞) x(τ)h(t, τ) dτ                                                         ----(6)

For a time-invariant system, the output due to delayed input by ‘𝜏’ is equal to delayed
output by ‘𝜏’. That is ℎ(𝑡, 𝜏) = ℎ(𝑡 − 𝜏) − − − (7)
substituting equation (7) in equation (6) we get,
y(t) = ∫_(−∞)^(+∞) x(τ)h(t − τ) dτ                                                        ----(8)

This is called Convolution Integral or simply Convolution. The convolution of two


signals 𝑥 (𝑡) and ℎ(𝑡) can be represented by
𝑦(𝑡) = 𝑥(𝑡) ∗ ℎ(𝑡)
Properties of Convolution:

1. Commutative Property: 𝑥1 (𝑡) ∗ 𝑥2 (𝑡) = 𝑥2 (𝑡) ∗ 𝑥1 (𝑡)


2. Distributive Property: 𝑥1 (𝑡) ∗ [𝑥2 (𝑡) + 𝑥3 (𝑡)] = [𝑥1 (𝑡) ∗ 𝑥2 (𝑡) + 𝑥1 (𝑡) ∗ 𝑥3 (𝑡)]
3. Associative Property: 𝑥1 (𝑡) ∗ [𝑥2 (𝑡) ∗ 𝑥3 (𝑡)] = 𝑥1 (𝑡) ∗ 𝑥2 (𝑡) ∗ 𝑥3 (𝑡)
4. Shift Property: If 𝑥1 (𝑡) ∗ 𝑥2 (𝑡) = 𝑧(𝑡), then 𝑥1 (𝑡) ∗ 𝑥2 (𝑡 − 𝑇) = 𝑧(𝑡 − 𝑇)
5. Convolution with an Impulse: 𝑥(𝑡) ∗ 𝛿(𝑡) = 𝑥(𝑡)
6. Convolution with Shifted Impulse: 𝑥(𝑡) ∗ 𝛿(𝑡 − 𝑡0 ) = 𝑥(𝑡 − 𝑡0 )
7. Convolution with Unit Step: x(t) ∗ u(t) = ∫_(−∞)^(t) x(τ) dτ
8. Convolution with Shifted Unit Step: x(t) ∗ u(t − t0) = ∫_(−∞)^(t−t0) x(τ) dτ
9. Width Property: Let duration of 𝑥1 (𝑡) is 𝑇1 and 𝑥2 (𝑡) is 𝑇2, then resultant signal
duration is 𝑇1 + 𝑇2

GRAPHICAL EVALUATION OF CONVOLUTION INTEGRAL


The convolution of two signals can be performed using graphical method. The
procedure is summarized as follows:

1. For the given signals 𝑥(𝑡) and ℎ(𝑡), graph the signals 𝑥(𝜏) and ℎ(𝜏) as a function
of independent variable ′𝜏′.
2. Obtain the signal ℎ(𝑡 − 𝜏) by folding ℎ(𝜏) about 𝜏 = 0 and then time shifting by
time ‘t’.
3. Graph both the signals x(τ) and h(t − τ) on the same τ-axis, beginning with a very
large negative time shift t.

11 | P a g e Nalla N
Chapter 4 Impulse Response & Convolution

4. Multiply the two signals 𝑥(𝜏) and ℎ(𝑡 − 𝜏) and integrate over the overlapping
intervals of two signals to obtain 𝑦(𝑡).
5. Increase the time shift ‘t’ and take the new interval whenever the function of
either 𝑥(𝜏) and ℎ(𝑡 − 𝜏) changes. The value of ‘t’ at which the change occurs
defines the end of the current interval and the beginning of the new interval.
Then calculate 𝑦(𝑡) using step 4.
6. Repeat steps 4 and 5 for all intervals.

#Problems on Convolution Integral:


1. Find the convolution of 𝑥1 (𝑡) and 𝑥2 (𝑡) for the following signals
𝑥1 (𝑡) = 𝑒 −𝑎𝑡 𝑢(𝑡) and 𝑥2 (𝑡) = 𝑒 −𝑏𝑡 𝑢(𝑡)

Soln: we know that
x1(t) ∗ x2(t) = ∫_(−∞)^(+∞) x1(τ)x2(t − τ) dτ

A signal x(t) is said to be causal if x(t) = 0 for t < 0. In this case both signals
x1(t) and x2(t) are causal. Therefore, for the convolution of causal signals, we have

x1(t) ∗ x2(t) = ∫_0^t x1(τ)x2(t − τ) dτ

⇒ ∫_0^t e^(−aτ) e^(−b(t−τ)) dτ ⇒ e^(−bt) ∫_0^t e^(−(a−b)τ) dτ ⇒ [e^(−bt)/(−(a − b))]·e^(−(a−b)τ) |_0^t

⇒ e^(−bt)·[e^(−(a−b)t) − 1]/(b − a)   for t ≥ 0

y(t) = [(e^(−at) − e^(−bt))/(b − a)]·u(t)
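The closed-form answer can be checked against a discrete approximation of the convolution integral (a sketch, assuming NumPy; a = 1 and b = 2 are arbitrary example values):

```python
# Hedged numeric check of y(t) = (e^{-at} - e^{-bt})/(b - a)   (assumes NumPy).
import numpy as np

a, b, dt = 1.0, 2.0, 1e-3
t = np.arange(0, 10, dt)
x1 = np.exp(-a * t)
x2 = np.exp(-b * t)

y_num = np.convolve(x1, x2)[:len(t)] * dt              # Riemann-sum approximation of the integral
y_exact = (np.exp(-a * t) - np.exp(-b * t)) / (b - a)
print(np.max(np.abs(y_num - y_exact)))                 # small, on the order of dt
```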

2. Find the convolution of 𝑥1 (𝑡) and 𝑥2 (𝑡) for the following signals
𝑥1 (𝑡) = 𝑢(𝑡) and 𝑥2 (𝑡) = 𝑢(𝑡)

Solution: we know that

x1(t) ∗ x2(t) = ∫_(−∞)^(+∞) x1(τ)x2(t − τ) dτ = ∫_0^t dτ = τ|_0^t = t   for t ≥ 0,   so   y(t) = t·u(t)

because u(τ) = 1 for τ > 0 and u(t − τ) = 1 for t > τ.
3. Find the convolution of 𝑥1 (𝑡) and 𝑥2 (𝑡) for the following signals
𝑥1 (𝑡) = 𝑡𝑢(𝑡) and 𝑥2 (𝑡) = 𝑢(𝑡)

Solution: we know that

x1(t) ∗ x2(t) = ∫_(−∞)^(+∞) x1(τ)x2(t − τ) dτ = ∫_0^t τ dτ = τ²/2 |_0^t = t²/2   for t ≥ 0

y(t) = (t²/2)·u(t)

4. Find the convolution of 𝑥1 (𝑡) and 𝑥2 (𝑡) for the following signals
𝑥1 (𝑡) = sin 𝑡 𝑢(𝑡) and 𝑥2 (𝑡) = 𝑢(𝑡)
𝑡
Solution: 𝑥1 (𝑡) ∗ 𝑥2 (𝑡) = ∫0 sin 𝜏 𝑑𝜏 = −cos 𝜏|𝑡0 = 1 − cos 𝑡 𝑓𝑜𝑟 𝑡 ≥ 0
𝒚(𝒕) = (𝟏 − 𝐜𝐨𝐬 𝒕)𝒖(𝒕)
# Practice Problem 2
Find the convolution of x1(t) and x2(t) for the following signals:

1. x1(t) = r(t) and x2(t) = u(t)                 Ans: (t²/2)·u(t)
2. x1(t) = r(t) and x2(t) = e^(−2t) u(t)         Ans: t/2 − (1/4)(1 − e^(−2t))

Problems using Graphical Evaluation

1. The signals ℎ(𝑡) and 𝑢(𝑡) are shown in the figure. Compute ℎ(𝑡) ∗ 𝑢(𝑡) using the
graphical evaluation.

Solution: the convolution integral states that

h(t) ∗ u(t) = ∫_(−∞)^(+∞) u(t − τ)h(τ) dτ

➢ where 𝜏 is a dummy variable i.e., 𝑢(𝜏) & ℎ(𝜏)


➢ First, we form 𝑢(𝑡 − 𝜏) by constructing the image of 𝑢(𝜏) i.e. by shifting the
𝑢(−𝜏) to the right by some value ‘t’
➢ Next, we multiply u(t − τ) by h(τ) for each value of 't' and compute the area of the
product from −∞ to +∞ as point 'A' moves to the right.
➢ Case 1: At t=0, we observe that 𝑢(𝑡 − 𝜏)|𝑡=0 = 0 = 𝑢(−𝜏)

i.e., 𝑢(𝑡 − 𝜏) ∗ ℎ(𝜏) = 0 for 𝑡 = 0 − − − − − (1)

➢ Case 2: At t>0 and t<1, shifting 𝑢(𝑡 − 𝜏) to the right so that t>0, i.e.,
∫_0^t u(t − τ)h(τ) dτ = ∫_0^t (1)(−τ + 1) dτ = [τ − τ²/2]_0^t = t − t²/2                  ----(2)

➢ Case 3: At t=1, the maximum area is obtained when point A reaches to t=1, i.e.,
the graphical representation shows that , 𝑢(𝜏) ∗ ℎ(𝜏) increases during the
interval 0<t<1. Then At t=1, we get
∫_0^1 u(t − τ)h(τ) dτ = ∫_0^1 (1)(−τ + 1) dτ = [τ − τ²/2]_0^1 = 1/2                        ----(3)
➢ Case 4: 1<t<2, using the convolution integral, we find that the area for the
interval 1<t<2 i.e., from t-1 to 1
∫_(t−1)^1 u(t − τ)h(τ) dτ = ∫_(t−1)^1 (1)(−τ + 1) dτ = [τ − τ²/2]_(t−1)^1 = 1 − 1/2 − (t − 1) + (t² − 2t + 1)/2

u(t) ∗ h(t) = t²/2 − 2t + 2                                                                ----(4)

➢ Case 5:At t=2, we find that 𝑢(𝜏) ∗ ℎ(𝜏) = 0. For t>2, the product 𝑢(𝑡 − 𝜏) ∗ ℎ(𝜏)
is zero since there is no overlap between the two signals. Then the convolution
between these signals for 0 ≤ 𝑡 ≤ 2, is shown in the figure as,

2. The signals ℎ(𝑡) and 𝑢(𝑡) are shown in the figure. Compute ℎ(𝑡) ∗ 𝑢(𝑡) using the
graphical evaluation.
Answer:
y(t) = 0                          for t ≤ 0
y(t) = 1 − e^(−t)                 for 0 < t ≤ 1,   with y(1) = 1 − e^(−1) = 0.632
y(t) = e^(−t)(e − 1)              for t ≥ 1,       with y(2) = 0.233

3. Perform the convolution f(t) ∗ g(t).

Answer:
y(t) = f(t) ∗ g(t) =
   0                        for t < −2        (no overlap)
   −(3/2)t² + 6             for −2 ≤ t < 0    (partial overlap)
   6                        for 0 ≤ t < 2     (complete overlap)
   (3/2)t² − 12t + 24       for 2 ≤ t < 4     (partial overlap)
   0                        for t ≥ 4         (no overlap)
CIRCUIT ANALYSIS WITH THE CONVOLUTION INTEGRAL


We can use the convolution integral in circuit analysis as illustrated by the following
example.
1. For the given circuit, use the convolution integral to find the capacitor voltage
when the input is the unit step function u(t) and 𝒱𝑐 (0− ) = 0. Where Resistor
R=1Ω and Capacitor C=1F

Solution: From equation 1.4, we know that

h(t) = (1/(RC)) e^(−t/(RC)) u(t)

and since R = 1 Ω and C = 1 F, the relation becomes h(t) = e^(−t) u(t). The signals
h(t) and u(t) are shown in the figure.

Formation of ℎ(𝜏) and 𝑢(𝑡 − 𝜏) are shown as,

➢ Case 1: At t<0, there is no overlapping of the two signals and convolution is zero
y(t) = ∫_(−∞)^(+∞) h(τ)u(t − τ) dτ = 0
➢ Case 2: At t = 0, there is still no overlapping of the two signals and the convolution is zero

y(t) = ∫_(−∞)^(+∞) h(τ)u(t − τ) dτ = ∫_0^(t=0) e^(−τ) dτ = 0

➢ Case 3: For 0 < t < ∞, the two signals overlap over 0 ≤ τ ≤ t

y(t) = ∫_(−∞)^(+∞) h(τ)u(t − τ) dτ = ∫_0^t (1)e^(−τ) dτ = −e^(−τ)|_0^t = (1 − e^(−t))u(t)

The convolution of ℎ(𝑡) ∗ 𝑢(𝑡) is shown in the figure as

Question 2: For the problems given in Impulse Response topic in Example 2 and
practice problem 1, find out the convolution values.
Chapter 5 Fourier Series

This chapter is an introduction to Fourier series. We begin with the definition of sinusoids that
are harmonically related and the procedure for determining the coefficients of the trigonometric
form of the series. Then, we discuss the different types of symmetry and how they can be used to
predict the terms that may be present. Several examples are presented to illustrate the approach.
The alternate trigonometric and the exponential forms are also presented.

7.1 Wave Analysis


The French mathematician Fourier found that any periodic waveform, that is, a waveform that
repeats itself after some time, can be expressed as a series of harmonically related sinusoids, i.e., sinu-
soids whose frequencies are multiples of a fundamental frequency (or first harmonic). For example, a
series of sinusoids with frequencies 1 MHz , 2 MHz , 3 MHz , and so on, contains the fundamental
frequency of 1 MHz , a second harmonic of 2 MHz , a third harmonic of 3 MHz , and so on. In gen-
eral, any periodic waveform f ( t ) can be expressed as
f(t) = (1/2)a0 + a1 cos ωt + a2 cos 2ωt + a3 cos 3ωt + a4 cos 4ωt + …
       + b1 sin ωt + b2 sin 2ωt + b3 sin 3ωt + b4 sin 4ωt + …                             (7.1)

or

f(t) = (1/2)a0 + Σ_(n=1)^(∞) ( an cos nωt + bn sin nωt )                                  (7.2)

where the first term a 0 ⁄ 2 is a constant, and represents the DC (average) component of f ( t ) . Thus,
if f ( t ) represents some voltage v ( t ) , or current i ( t ) , the term a 0 ⁄ 2 is the average value of v ( t ) or
i(t) .

The terms with the coefficients a 1 and b 1 together, represent the fundamental frequency compo-
nent ω *. Likewise, the terms with the coefficients a 2 and b 2 together, represent the second har-
monic component 2ω , and so on.
Since any periodic waveform f ( t ) can be expressed as a Fourier series, it follows that the sum of the

* We recall that k 1 cos ωt + k 2 sin ωt = k cos ( ωt + θ ) where θ is a constant.
DC , the fundamental, the second harmonic, and so on, must produce the waveform f ( t ) . Generally,
the sum of two or more sinusoids of different frequencies produce a waveform that is not a sinusoid
as shown in Figure 7.1.

Figure 7.1. Summation of a fundamental, second and third harmonic

7.2 Evaluation of the Coefficients


Evaluation of the a_i and b_i coefficients of (7.1) is not a difficult task because the sine and cosine are
orthogonal functions, that is, the product of the sine and cosine functions under the integral evaluated
from 0 to 2π is zero. This will be shown shortly.
Let us consider the functions sin mt and cos nt, where m and n are any integers, and for convenience,
we have assumed that ω = 1. Then,

∫_0^(2π) sin mt dt = 0                      (7.3)

∫_0^(2π) cos mt dt = 0                      (7.4)

∫_0^(2π) (sin mt)(cos nt) dt = 0            (7.5)

The integrals of (7.3) and (7.4) are zero since the net area over the interval 0 to 2π is zero. The integral
of (7.5) is also zero since

sin x cos y = (1/2)[ sin(x + y) + sin(x − y) ]
This is also obvious from the plot of Figure 7.2, where we observe that the net shaded area above
and below the time axis is zero.

Figure 7.2. Graphical proof of ∫_0^(2π) (sin mt)(cos nt) dt = 0

Moreover, if m and n are different integers, then

∫_0^(2π) (sin mt)(sin nt) dt = 0            (7.6)

since

(sin x)(sin y) = (1/2)[ cos(x − y) − cos(x + y) ]

The integral of (7.6) can also be confirmed graphically as shown in Figure 7.3, where m = 2 and
n = 3 . We observe that the net shaded area above and below the time axis is zero.

Figure 7.3. Graphical proof of ∫_0^(2π) (sin mt)(sin nt) dt = 0 for m = 2 and n = 3

Also, if m and n are different integers, then,

∫_0^(2π) (cos mt)(cos nt) dt = 0            (7.7)

since

(cos x)(cos y) = (1/2)[ cos(x + y) + cos(x − y) ]

The integral of (7.7) can also be confirmed graphically as shown in Figure 7.4, where m = 2 and
n = 3 . We observe that the net shaded area above and below the time axis is zero.

Figure 7.4. Graphical proof of ∫_0^(2π) (cos mt)(cos nt) dt = 0 for m = 2 and n = 3

However, if in (7.6) and (7.7), m = n , then,


∫_0^(2π) (sin mt)² dt = π                   (7.8)

and

∫_0^(2π) (cos mt)² dt = π                   (7.9)

The integrals of (7.8) and (7.9) can also be seen to be true graphically with the plots of Figures 7.5
and 7.6.
It was stated earlier that the sine and cosine functions are orthogonal to each other. The simplifica-
tion obtained by application of the orthogonality properties of the sine and cosine functions,
becomes apparent in the discussion that follows.

Figure 7.5. Graphical proof of ∫_0^(2π) (sin mt)² dt = π

Figure 7.6. Graphical proof of ∫_0^(2π) (cos mt)² dt = π

In (7.1), for simplicity, we let ω = 1 . Then,


f(t) = (1/2)a0 + a1 cos t + a2 cos 2t + a3 cos 3t + a4 cos 4t + …
       + b1 sin t + b2 sin 2t + b3 sin 3t + b4 sin 4t + …                                 (7.10)

To evaluate any coefficient, say b 2 , we multiply both sides of (7.10) by sin 2t . Then,

f(t) sin 2t = (1/2)a0 sin 2t + a1 cos t sin 2t + a2 cos 2t sin 2t + a3 cos 3t sin 2t + a4 cos 4t sin 2t + …
              + b1 sin t sin 2t + b2 (sin 2t)² + b3 sin 3t sin 2t + b4 sin 4t sin 2t + …

Next, we multiply both sides of the above expression by dt , and we integrate over the period 0 to
2π . Then,

∫_0^(2π) f(t) sin 2t dt = (1/2)a0 ∫_0^(2π) sin 2t dt + a1 ∫_0^(2π) cos t sin 2t dt + a2 ∫_0^(2π) cos 2t sin 2t dt
    + a3 ∫_0^(2π) cos 3t sin 2t dt + a4 ∫_0^(2π) cos 4t sin 2t dt + …
    + b1 ∫_0^(2π) sin t sin 2t dt + b2 ∫_0^(2π) (sin 2t)² dt + b3 ∫_0^(2π) sin 3t sin 2t dt
    + b4 ∫_0^(2π) sin 4t sin 2t dt + …                                                    (7.11)

We observe that every term on the right side of (7.11), except the term

b2 ∫_0^(2π) (sin 2t)² dt,

is zero as we found in (7.6) and (7.7). Therefore, (7.11) reduces to

∫_0^(2π) f(t) sin 2t dt = b2 ∫_0^(2π) (sin 2t)² dt = b2 π

or

b2 = (1/π) ∫_0^(2π) f(t) sin 2t dt

and thus we can evaluate this integral for any given function f ( t ) . The remaining coefficients can be
evaluated similarly.
The coefficients a 0 , a n , and b n are found from the following relations.


(1/2)a0 = (1/(2π)) ∫_0^(2π) f(t) dt                     (7.12)

an = (1/π) ∫_0^(2π) f(t) cos nt dt                      (7.13)

bn = (1/π) ∫_0^(2π) f(t) sin nt dt                      (7.14)

The integral of (7.12) yields the average ( DC ) value of f ( t ) .
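Relations (7.12) through (7.14) translate directly into a short numeric sketch (assuming NumPy; the square waveform of Figure 7.7 with A = 1 is used here only as an example):

```python
# Hedged numeric evaluation of (7.12)-(7.14) for the square wave of Figure 7.7   (assumes NumPy).
import numpy as np

A, N = 1.0, 200000
t = np.arange(N) * (2 * np.pi / N)                 # one period, 0 <= t < 2*pi
f = np.where(t < np.pi, A, -A)                     # square waveform: +A then -A
dt = 2 * np.pi / N

an = lambda n: np.sum(f * np.cos(n * t)) * dt / np.pi    # (7.13); n = 0 gives a0
bn = lambda n: np.sum(f * np.sin(n * t)) * dt / np.pi    # (7.14)

print(round(an(0) / 2, 4), round(an(3), 4), round(bn(3), 4))   # ~0 (DC), ~0, ~4A/(3*pi) = 0.4244
```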

7.3 Symmetry
With a few exceptions such as the waveform of Example 7.6, the most common waveforms that are
used in science and engineering, do not have the average, cosine, and sine terms all present. Some
waveforms have cosine terms only, while others have sine terms only. Still other waveforms may or
may not have a DC component. Fortunately, it is possible to predict which terms will be present in the
trigonometric Fourier series, by observing whether or not the given waveform possesses some kind
of symmetry.
We will discuss three types of symmetry that can be used to facilitate the computation of the trigo-
nometric Fourier series form. These are:
1. Odd symmetry − If a waveform has odd symmetry, that is, if it is an odd function, the series will
consist of sine terms only. In other words, if f ( t ) is an odd function, all the a i coefficients includ-
ing a 0 , will be zero.
2. Even symmetry − If a waveform has even symmetry, that is, if it is an even function, the series will
consist of cosine terms only, and a 0 may or may not be zero. In other words, if f ( t ) is an even
function, all the b i coefficients will be zero.

3. Half-wave symmetry − If a waveform has half-wave symmetry (to be defined shortly), only odd
(odd cosine and odd sine) harmonics will be present. In other words, all even (even cosine and
even sine) harmonics will be zero.
We defined odd and even functions in Chapter 6. We recall that odd functions are those for which
–f ( –t ) = f ( t ) (7.15)
and even functions are those for which
f ( –t ) = f ( t ) (7.16)
Examples of odd and even functions were given in Chapter 6. Generally, an odd function has odd
powers of the independent variable t , and an even function has even powers of the independent
variable t . Thus, the product of two odd functions or the product of two even functions will result
in an even function, whereas the product of an odd function and an even function will result in an
odd function. However, the sum (or difference) of an odd and an even function will yield a function
which is neither odd nor even.
To understand half-wave symmetry, we recall that any periodic function with period T , is expressed
as
f(t) = f(t + T) (7.17)
that is, the function with value f ( t ) at any time t , will have the same value again at a later time t + T .

A periodic waveform with period T , has half-wave symmetry if


–f ( t + T ⁄ 2 ) = f ( t ) (7.18)
that is, the shape of the negative half-cycle of the waveform is the same as that of the positive half-
cycle, but inverted.
We will test the waveforms of Figures 7.7 through 7.11 for any of the three types of symmetry.
1. Square waveform
For the waveform of Figure 7.7, the average value over one period T is zero, and therefore,
a 0 = 0 . It is also an odd function and has half-wave symmetry since – f ( – t ) = f ( t ) and
–f ( t + T ⁄ 2 ) = f ( t ) .

Figure 7.7. Square waveform test for symmetry


Note
An easy method to test for half-wave symmetry is to choose any half-period T ⁄ 2 length on the time
axis as shown in Figure 7.7, and observe the values of f ( t ) at the left and right points on the time
axis, such as f ( a ) and f ( b ) . If there is half-wave symmetry, these will always be equal but will have
opposite signs as we slide the half-period T ⁄ 2 length to the left or to the right on the time axis at
non-zero values of f ( t ) .
2. Square waveform with ordinate axis shifted
If we shift the ordinate axis π ⁄ 2 radians to the right, as shown in Figure 7.8, we see that the
square waveform now becomes an even function and has half-wave symmetry since f ( – t ) = f ( t )
and – f ( t + T ⁄ 2 ) = f ( t ) . Also, a 0 = 0 .

Obviously, if the ordinate axis is shifted by any other value other than an odd multiple of π ⁄ 2 ,
the waveform will have neither odd nor even symmetry.

Figure 7.8. Square waveform with ordinate shifted by π ⁄ 2

3. Sawtooth waveform
Figure 7.9. Sawtooth waveform test for symmetry


For the sawtooth waveform of Figure 7.9, the average value over one period T is zero and there-
fore, a 0 = 0 . It is also an odd function because – f ( – t ) = f ( t ) , but has no half-wave symmetry
since – f ( t + T ⁄ 2 ) ≠ f ( t )
4. Triangular waveform

Figure 7.10. Triangular waveform test for symmetry

For this triangular waveform of Figure 7.10, the average value over one period T is zero and
therefore, a 0 = 0 . It is also an odd function since – f ( – t ) = f ( t ) . Moreover, it has half-wave
symmetry because – f ( t + T ⁄ 2 ) = f ( t )
5. Fundamental, Second and Third Harmonics of a Sinusoid
Figure 7.11 shows a fundamental, second, and third harmonic of a typical sinewave.

Figure 7.11. Fundamental, second, and third harmonic test for symmetry

In Figure 7.11, the half period T ⁄ 2 , is chosen as the half period of the period of the fundamental
frequency. This is necessary in order to test the fundamental, second, and third harmonics for half-
wave symmetry. The fundamental has half-wave symmetry since the a and – a values, when sepa-
rated by T ⁄ 2 , are equal and opposite. The second harmonic has no half-wave symmetry because the
ordinates b on the left and b on the right, although are equal, there are not opposite in sign. The
third harmonic has half-wave symmetry since the c and – c values, when separated by T ⁄ 2 are
equal and opposite. These waveforms can be either odd or even depending on the position of the
ordinate. Also, all three waveforms have zero average value unless the abscissa axis is shifted up or
down.
In the expressions of the integrals in (7.12) through (7.14), the limits of integration for the coeffi-
cients a n and b n are given as 0 to 2π , that is, one period T . Of course, we can choose the limits of
integration as – π to +π . Also, if the given waveform is an odd function, or an even function, or has
half-wave symmetry, we can compute the non-zero coefficients a n and b n by integrating from 0 to
π only, and multiply the integral by 2 . Moreover, if the waveform has half-wave symmetry and is
also an odd or an even function, we can choose the limits of integration from 0 to π ⁄ 2 and multiply
the integral by 4 . The proof is based on the fact that, the product of two even functions is another
even function, and also that the product of two odd functions results also in an even function. How-
ever, it is important to remember that when using these shortcuts, we must evaluate the coefficients
a n and b n for the integer values of n that will result in non-zero coefficients. This point will be
illustrated in Example 7.2.

We will now derive the trigonometric Fourier series of the most common periodic waveforms.

7.4 Waveforms in Trigonometric Form of Fourier Series


Example 7.1
Compute the trigonometric Fourier series of the square waveform of Figure 7.12.

Figure 7.12. Square waveform for Example 7.1

Solution:
The trigonometric series will consist of sine terms only because, as we already know, this waveform
is an odd function. Moreover, only odd harmonics will be present since this waveform has half-wave
symmetry. However, we will compute all coefficients to verify this. Also, for brevity, we will assume
that ω = 1 .
The a i coefficients are found from
an = (1/π) ∫_0^(2π) f(t) cos nt dt = (1/π)[ ∫_0^π A cos nt dt + ∫_π^(2π) (−A) cos nt dt ] = (A/(nπ))( sin nt |_0^π − sin nt |_π^(2π) )
   = (A/(nπ))( sin nπ − 0 − sin 2nπ + sin nπ ) = (A/(nπ))( 2 sin nπ − sin 2nπ )           (7.19)

and since n is an integer (positive or negative) or zero, the terms inside the parentheses on the sec-
ond line of (7.19) are zero and therefore, all a i coefficients are zero, as expected since the square
waveform has odd symmetry. Also, by inspection, the average ( DC ) value is zero, but if we attempt
to verify this using (7.19), we will get the indeterminate form 0 ⁄ 0 . To work around this problem, we
will evaluate a 0 directly from (7.12). Then,
a0 = (1/π)[ ∫_0^π A dt + ∫_π^(2π) (−A) dt ] = (A/π)( π − 0 − 2π + π ) = 0                 (7.20)

The b_i coefficients are found from (7.14), that is,

bn = (1/π) ∫_0^(2π) f(t) sin nt dt = (1/π)[ ∫_0^π A sin nt dt + ∫_π^(2π) (−A) sin nt dt ] = (A/(nπ))( −cos nt |_0^π + cos nt |_π^(2π) )
   = (A/(nπ))( −cos nπ + 1 + cos 2nπ − cos nπ ) = (A/(nπ))( 1 − 2 cos nπ + cos 2nπ )      (7.21)

For n = even, (7.21) yields

bn = (A/(nπ))( 1 − 2 + 1 ) = 0
as expected, since the square waveform has half-wave symmetry.
For n = odd, (7.21) reduces to

bn = (A/(nπ))( 1 + 2 + 1 ) = 4A/(nπ)

and thus

b1 = 4A/π,   b3 = 4A/(3π),   b5 = 4A/(5π),

and so on.
Therefore, the trigonometric Fourier series for the square waveform with odd symmetry is

f(t) = (4A/π)( sin ωt + (1/3) sin 3ωt + (1/5) sin 5ωt + … ) = (4A/π) Σ_(n = odd) (1/n) sin nωt        (7.22)

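A partial-sum sketch of (7.22) shows how the sine series builds up the square waveform as harmonics are added (an illustration only, assuming NumPy; A = 1 and ω = 1 are example values):

```python
# Hedged partial sums of (7.22): f(t) = (4A/pi) * sum over odd n of (1/n) sin(n*w*t)   (assumes NumPy).
import numpy as np

A, w = 1.0, 1.0
t = np.linspace(0, 2 * np.pi, 2001)

def partial_sum(n_max):
    n = np.arange(1, n_max + 1, 2)                                   # odd harmonics only
    return (4 * A / np.pi) * np.sum(np.sin(np.outer(n, w * t)) / n[:, None], axis=0)

for n_max in (1, 5, 51):
    err = np.max(np.abs(partial_sum(n_max)[50:950] - A))             # compare with +A away from the jumps
    print(n_max, round(err, 3))                                      # error shrinks as harmonics are added
```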
It was stated above that, if the given waveform has half-wave symmetry, and it is also an odd or an
even function, we can integrate from 0 to π ⁄ 2 , and multiply the integral by 4 . We will apply this
property to the following example.

Example 7.2
Compute the trigonometric Fourier series of the square waveform of Example 1 by integrating from
0 to π ⁄ 2 , and multiplying the result by 4 .
Solution:
Since the waveform is an odd function and has half-wave symmetry, we are only concerned with the
odd b n coefficients. Then,

bn = 4·(1/π) ∫_0^(π/2) f(t) sin nt dt = (4A/(nπ))( −cos nt |_0^(π/2) ) = (4A/(nπ))( −cos(nπ/2) + 1 )     (7.23)

For n = odd , (7.23) becomes


bn = (4A/(nπ))( −0 + 1 ) = 4A/(nπ)                      (7.24)
as before, and thus the series is the same as in Example 1.

Example 7.3
Compute the trigonometric Fourier series of the square waveform of Figure 7.13.
Solution:
This is the same waveform as in Example 7.1, except that the ordinate has been shifted to the right
by π ⁄ 2 radians, and has become an even function. However, it still has half-wave symmetry. There-
fore, the trigonometric Fourier series will consist of odd cosine terms only.
Since the waveform has half-wave symmetry and is an even function, it will suffice to integrate from
0 to π ⁄ 2 , and multiply the integral by 4 . The a n coefficients are found from

an = 4·(1/π) ∫_0^(π/2) f(t) cos nt dt = (4/π) ∫_0^(π/2) A cos nt dt = (4A/(nπ))( sin nt |_0^(π/2) ) = (4A/(nπ)) sin(nπ/2)     (7.25)

We observe that for n = even , all a n coefficients are zero, and thus all even harmonics are zero as
expected. Also, by inspection, the average ( DC ) value is zero.

Figure 7.13. Waveform for Example 7.3

For n = odd, we observe from (7.25) that sin(nπ/2) will alternate between +1 and −1 depending on
the odd integer assigned to n. Thus,

an = ±4A/(nπ)                                           (7.26)

For n = 1, 5, 9, 13, and so on, (7.26) becomes

an = 4A/(nπ)

and for n = 3, 7, 11, 15, and so on, it becomes

an = −4A/(nπ)
Then, the trigonometric Fourier series for the square waveform with even symmetry is

f(t) = (4A/π)( cos ωt − (1/3) cos 3ωt + (1/5) cos 5ωt − … ) = (4A/π) Σ_(n = odd) (−1)^((n−1)/2) (1/n) cos nωt     (7.27)

Alternate Solution:
Since the waveform of Example 7.3 is the same as that of Example 7.1, but shifted to the right by π ⁄ 2
radians, we can use the result of Example 7.1, i.e.,

f(t) = (4A/π)( sin ωt + (1/3) sin 3ωt + (1/5) sin 5ωt + … )                               (7.28)

and substitute ωt with ωt + π ⁄ 2 , that is, we let ωt = ωτ + π ⁄ 2 . With this substitution, relation
(7.28) becomes

f(τ) = (4A/π)[ sin(ωτ + π/2) + (1/3) sin 3(ωτ + π/2) + (1/5) sin 5(ωτ + π/2) + … ]
     = (4A/π)[ sin(ωτ + π/2) + (1/3) sin(3ωτ + 3π/2) + (1/5) sin(5ωτ + 5π/2) + … ]        (7.29)
π ⎝ 2⎠ 3 ⎝ 2⎠ 5 ⎝ 2⎠

and using the identities sin ( x + π ⁄ 2 ) = cos x , sin ( x + 3π ⁄ 2 ) = – cos x , and so on, we rewrite (7.29)
as

f(τ) = (4A/π)( cos ωτ − (1/3) cos 3ωτ + (1/5) cos 5ωτ − … )                               (7.30)

and this is the same as (7.27).


Therefore, if we compute the trigonometric Fourier series with reference to one ordinate, and after-
wards we want to recompute the series with reference to a different ordinate, we can use the above
procedure to save time.

Example 7.4
Compute the trigonometric Fourier series of the sawtooth waveform of Figure 7.14.
Figure 7.14. Sawtooth waveform

Solution:
This waveform is an odd function but has no half-wave symmetry; therefore, it contains sine terms
only with both odd and even harmonics. Accordingly, we only need to evaluate the b n coefficients.

By inspection, the DC component is zero. As before, we will assume that ω = 1 .


If we choose the limits of integration from 0 to 2π , we will need to perform two integrations since

f(t) = (A/π)t          for 0 < t < π
f(t) = (A/π)t − 2A     for π < t < 2π

However, we can choose the limits from −π to +π, and thus we will only need one integration since

f(t) = (A/π)t          for −π < t < π

Better yet, since the waveform is an odd function, we can integrate from 0 to π , and multiply the
integral by 2 ; this is what we will do.
From tables of integrals,
∫ x sin ax dx = (1/a²) sin ax − (x/a) cos ax                                              (7.31)

Then,

bn = (2/π) ∫_0^π (A/π) t sin nt dt = (2A/π²) ∫_0^π t sin nt dt = (2A/π²)[ (1/n²) sin nt − (t/n) cos nt ]_0^π
   = (2A/(n²π²))( sin nt − nt cos nt )|_0^π = (2A/(n²π²))( sin nπ − nπ cos nπ )           (7.32)
We observe that:
1. If n = even, sin nπ = 0 and cos nπ = 1. Then, (7.32) reduces to

bn = (2A/(n²π²))( −nπ ) = −2A/(nπ)

that is, the even harmonics have negative coefficients.
2. If n = odd, sin nπ = 0 and cos nπ = −1. Then,

bn = (2A/(n²π²))( nπ ) = 2A/(nπ)

that is, the odd harmonics have positive coefficients.
Thus, the trigonometric Fourier series for the sawtooth waveform with odd symmetry is

f(t) = (2A/π)( sin ωt − (1/2) sin 2ωt + (1/3) sin 3ωt − (1/4) sin 4ωt + … ) = (2A/π) Σ_n (−1)^(n−1) (1/n) sin nωt     (7.33)
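The coefficients in (7.33) are easy to confirm numerically (a sketch, assuming NumPy; one period of the sawtooth of Figure 7.14 with A = 1):

```python
# Hedged numeric check of b_n = (2A/(n*pi)) * (-1)^(n-1) for the sawtooth   (assumes NumPy).
import numpy as np

A, N = 1.0, 200000
t = np.arange(N) * (2 * np.pi / N) - np.pi          # one period, -pi <= t < pi
f = (A / np.pi) * t                                  # the sawtooth of Figure 7.14
dt = 2 * np.pi / N

for n in (1, 2, 3, 4):
    bn = np.sum(f * np.sin(n * t)) * dt / np.pi
    print(n, round(bn, 4), round((2 * A / (n * np.pi)) * (-1) ** (n - 1), 4))   # numeric vs (7.33)
```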

Example 7.5
Find the trigonometric Fourier series of the triangular waveform of Figure 7.15. Assume ω = 1 .

Figure 7.15. Triangular waveform for Example 7.5


Solution:
This waveform is an odd function and has half-wave symmetry; then, the trigonometric Fourier
series will contain sine terms only with odd harmonics. Accordingly, we only need to evaluate the b n
coefficients. We will choose the limits of integration from 0 to π ⁄ 2 , and will multiply the integral
by 4 .

By inspection, the DC component is zero. From tables of integrals,

∫ x sin ax dx = (1/a²) sin ax − (x/a) cos ax                                              (7.34)
Then,
bn = (4/π) ∫_0^(π/2) (2A/π) t sin nt dt = (8A/π²) ∫_0^(π/2) t sin nt dt = (8A/π²)[ (1/n²) sin nt − (t/n) cos nt ]_0^(π/2)
   = (8A/(n²π²))( sin nt − nt cos nt )|_0^(π/2) = (8A/(n²π²))( sin(nπ/2) − (nπ/2) cos(nπ/2) )           (7.35)

We are only interested in the odd integers of n, and we observe that for odd n,

cos(nπ/2) = 0

For odd integers of n, the sine term yields

sin(nπ/2) = +1   for n = 1, 5, 9, …    and then   bn = 8A/(n²π²)
sin(nπ/2) = −1   for n = 3, 7, 11, …   and then   bn = −8A/(n²π²)
Thus, the trigonometric Fourier series for the triangular waveform with odd symmetry is

f(t) = (8A/π²)( sin ωt − (1/9) sin 3ωt + (1/25) sin 5ωt − (1/49) sin 7ωt + … ) = (8A/π²) Σ_(n = odd) (−1)^((n−1)/2) (1/n²) sin nωt     (7.36)

Example 7.6
The circuit of Figure 7.16 is a half-wave rectifier whose input is the sinusoid v in ( t ) = sin ωt , and its
output v out ( t ) is defined as

v_out(t) = sin ωt    for 0 < ωt < π
v_out(t) = 0         for π < ωt < 2π                                                      (7.37)

Express v out ( t ) as a trigonometric Fourier series. Assume ω = 1 .

Solution:
The input and output waveforms for this example are shown in Figures 7.17 and 7.18.

Figure 7.16. Circuit for half-wave rectifier

Figure 7.17. Input v in ( t ) for the circuit of Figure 7.16

Figure 7.18. Output v out ( t ) for the circuit of Figure 7.16

We choose the ordinate as shown in Figure 7.19.

Figure 7.19. Half-wave rectifier waveform for the circuit of Figure 7.16

By inspection, the average is a non-zero value, and the waveform has neither odd nor even symme-
try. Therefore, we expect all terms to be present.

The a n coefficients are found from



1
a n = ---
π ∫0 f ( t ) cos nt dt

For this example,


π 2π
A A
a n = ---
π ∫0 sin t cos nt dt + ---
π ∫π 0 cos nt dt

and from tables of integrals


cos ( m – n )x cos ( m + n )x
∫ ( sin mx ) ( cos nx ) dx
2 2
= – ------------------------------ – ------------------------------ ( m ≠ n )
2(m – n) 2(m + n)

Then,
π
A ⎧ 1 cos ( 1 – n )t cos ( 1 + n )t ⎫
a n = --- ⎨ – --- --------------------------- + ---------------------------- ⎬
π ⎩ 2 1–n 1+n
0⎭
(7.38)
A- ⎧ ----------------------------
= – ----- cos ( π + nπ )- – -----------
cos ( π – nπ -) + ----------------------------- 1 - ⎫
1 - + -----------
⎨ ⎬
2π ⎩ 1–n 1+n 1–n n+1 ⎭

Using the trigonometric identities


cos ( x – y ) = cos x cos y + sin xsiny
and
cos ( x + y ) = cos x cos y – sin x sin y
we get
cos ( π – nπ ) = cos π cos nπ + sin π sin nπ = – cos nπ
and
cos ( π + nπ ) = cos π cos nπ – sin π sin nπ = – cos nπ

Then, by substitution into (7.38),

A ⎧ – cos nπ – cos nπ 2 ⎫ A ⎧ cos nπ cos nπ 2 ⎫


a n = – ------ ⎨ ------------------ + ------------------ – --------------2 ⎬ = ------ ⎨ --------------- + --------------- + --------------2 ⎬
2π ⎩ 1 – n 1+n 1–n ⎭ 2π ⎩ 1 – n 1+n 1–n ⎭
(7.39)
A cos nπ + n cos nπ + cos nπ – n cos nπ cos nπ + 1- ⎞
= ------ ⎛ --------------------------------------------------------------------------------------- 2 -⎞ = A
+ ------------- --- ⎛ ------------------------ n≠1
2π ⎝ 1–n
2
1–n
2⎠ π ⎝ (1 – n ) ⎠
2

Next, we can evaluate all the a n coefficients, except a 1 , from (7.39).

First, we will evaluate a 0 to obtain the DC value. By substitution of n = 0 , we get

a0 = 2A ⁄ π

Therefore, the DC value is

(1/2)a0 = A/π                                           (7.40)

We cannot use (7.39) to obtain the value of a 1 ; therefore, we will evaluate the integral
π
A
a 1 = ---
π ∫0 sin t cos t dt
From tables of integrals,
1
∫ ( sin ax ) ( cos ax ) dx
2
= ------ ( sin ax )
2a
and thus,
π
A 2
a 1 = ------ ( sin t ) = 0 (7.41)
2π 0

From (7.39) with n = 2, 3, 4, 5, …, we get

a2 = (A/π)·(cos 2π + 1)/(1 − 2²) = −2A/(3π)             (7.42)

a3 = A(cos 3π + 1)/(π(1 − 3²)) = 0                      (7.43)

We see that for odd integers of n, an = 0. However, for n = even, we get

a4 = A(cos 4π + 1)/(π(1 − 4²)) = −2A/(15π)              (7.44)

a6 = A(cos 6π + 1)/(π(1 − 6²)) = −2A/(35π)              (7.45)

a8 = A(cos 8π + 1)/(π(1 − 8²)) = −2A/(63π)              (7.46)
and so on.
Now, we need to evaluate the bn coefficients. For this example,
2π π 2π
1 A A
b n = A ---
π ∫0 f ( t ) sin nt dt = ---
π ∫0 sin t sin nt dt + ---
π ∫π 0 sin nt dt

and from tables of integrals,


sin ( m + n )x
sin ( m – n )x- – ----------------------------
∫ ( sin mx ) ( sin nx ) dx = ----------------------------
2( m – n) 2( m + n)
- ( m2 ≠ n2 )

Therefore,

b_n = (A/π) · (1/2) { sin((1 − n)t)/(1 − n) − sin((1 + n)t)/(1 + n) } |₀^π
    = (A/2π) [ sin((1 − n)π)/(1 − n) − sin((1 + n)π)/(1 + n) − 0 + 0 ] = 0    (n ≠ 1)

that is, all the b_n coefficients, except b_1, are zero.

We will find b_1 by direct substitution into (7.14) for n = 1. Thus,

b_1 = (A/π) ∫₀^π sin² t dt = (A/π) [ t/2 − (sin 2t)/4 ] |₀^π = (A/π) ( π/2 − (sin 2π)/4 ) = A/2    (7.47)

Combining (7.40) and (7.42) through (7.47), we find that the trigonometric Fourier series for the half-wave rectifier with no symmetry is

f(t) = A/π + (A/2) sin t − (A/π) ( cos 2t/3 + cos 4t/15 + cos 6t/35 + cos 8t/63 + … )    (7.48)
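The coefficients in (7.40) through (7.47) are easy to verify numerically. The following short Python sketch (assuming numpy and scipy are available, with A set to 1) evaluates the defining integrals for the half-wave rectified sinusoid and compares them with the closed forms above.

import numpy as np
from scipy.integrate import quad

A = 1.0
f = lambda t: A*np.sin(t) if t < np.pi else 0.0        # one period, 0 <= t < 2*pi

def a(n):
    return quad(lambda t: f(t)*np.cos(n*t), 0, 2*np.pi, points=[np.pi])[0]/np.pi

def b(n):
    return quad(lambda t: f(t)*np.sin(n*t), 0, 2*np.pi, points=[np.pi])[0]/np.pi

print(a(0)/2, A/np.pi)                                 # DC value, compare with (7.40)
print(b(1), A/2)                                       # compare with (7.47)
for n in (2, 4, 6, 8):                                 # compare with (7.42), (7.44)-(7.46)
    print(n, a(n), -2*A/((n**2 - 1)*np.pi))

Each printed pair should agree to within the quadrature tolerance.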

Example 7.7
Figure 7.20 shows a full-wave rectifier circuit with input the sinusoid v_in(t) = A sin ωt. The output of that circuit is v_out(t) = A |sin ωt|. Express v_out(t) as a trigonometric Fourier series. Assume ω = 1.

Figure 7.20. Full-wave rectifier circuit


Solution:
The input and output waveforms are shown in Figures 7.21 and 7.22. We choose the ordinate as shown in Figure 7.23. By inspection, the average is a non-zero value. We choose the period of the input sinusoid so that the output will be expressed in terms of the fundamental frequency. Choosing the limits of integration as −π and +π, we observe that the waveform has even symmetry.


Therefore, we expect only cosine terms to be present.

Figure 7.21. Input sinusoid for the circuit of Example 7.7

Figure 7.22. Output waveform for the circuit of Example 7.7

Figure 7.23. Full-wave rectified waveform with even symmetry

The a_n coefficients are found from


a_n = (1/π) ∫₀^{2π} f(t) cos nt dt

where for this example,

a_n = (1/π) ∫_{−π}^{π} A sin t cos nt dt = (2A/π) ∫₀^π sin t cos nt dt    (7.49)

and from tables of integrals,


∫ (sin mx)(cos nx) dx = cos((m − n)x)/(2(n − m)) − cos((m + n)x)/(2(m + n))    (m² ≠ n²)

Since
cos(x − y) = cos(y − x) = cos x cos y + sin x sin y
we express (7.49) as

a_n = (2A/π) · (1/2) { cos((n − 1)t)/(n − 1) − cos((n + 1)t)/(n + 1) } |₀^π
    = (A/π) { cos((n − 1)π)/(n − 1) − cos((n + 1)π)/(n + 1) − 1/(n − 1) + 1/(n + 1) }    (7.50)
    = (A/π) [ (1 − cos(nπ + π))/(n + 1) + (cos(nπ − π) − 1)/(n − 1) ]

To simplify the last expression in (7.50), we make use of the trigonometric identities
cos(nπ + π) = cos nπ cos π − sin nπ sin π = −cos nπ
and
cos(nπ − π) = cos nπ cos π + sin nπ sin π = −cos nπ
Then, (7.50) simplifies to

a_n = (A/π) [ (1 + cos nπ)/(n + 1) − (1 + cos nπ)/(n − 1) ] = (A/π) ( (−2 + (n − 1) cos nπ − (n + 1) cos nπ)/(n² − 1) )
    = −2A (cos nπ + 1)/(π(n² − 1))    n ≠ 1    (7.51)

Now, we can evaluate all the a_n coefficients, except a_1, from (7.51). First, we will evaluate a_0 to obtain the DC value. By substitution of n = 0, we get

a_0 = 4A/π

Therefore, the DC value is

(1/2) a_0 = 2A/π    (7.52)

From (7.51) we observe that for all n = odd , other than n = 1 , a n = 0 .

To obtain the value of a 1 , we must evaluate the integral


a_1 = (1/π) ∫₀^π sin t cos t dt

From tables of integrals,

∫ (sin ax)(cos ax) dx = (1/2a) sin² ax

and thus,

a_1 = (1/2π) (sin² t) |₀^π = 0    (7.53)

For n = even, from (7.51) we get

a_2 = −2A(cos 2π + 1)/(π(2² − 1)) = −4A/(3π)    (7.54)

a_4 = −2A(cos 4π + 1)/(π(4² − 1)) = −4A/(15π)    (7.55)

a_6 = −2A(cos 6π + 1)/(π(6² − 1)) = −4A/(35π)    (7.56)

a_8 = −2A(cos 8π + 1)/(π(8² − 1)) = −4A/(63π)    (7.57)
and so on. Then, combining the terms of (7.52) and (7.54) through (7.57) we get

f(t) = 2A/π − (4A/π) { cos 2ωt/3 + cos 4ωt/15 + cos 6ωt/35 + cos 8ωt/63 + … }    (7.58)

Therefore, the trigonometric form of the Fourier series for the full-wave rectifier with even symmetry is

f(t) = 2A/π − (4A/π) Σ_{n = 2, 4, 6, …} cos nωt/(n² − 1)    (7.59)

The series of (7.59) shows that there is no component at the input (fundamental) frequency. This is because we chose the period to be from −π to +π. Generally, the period is defined as the shortest period of repetition. In any waveform where the period is chosen appropriately, it is very unlikely that a Fourier series will consist of even harmonic terms only.
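As a quick check of (7.59), the following Python sketch (assuming numpy is available; A = 1 and ω = 1) sums the even-harmonic series and compares the partial sums with A|sin ωt|; the maximum error decreases as more terms are added.

import numpy as np

A, w = 1.0, 1.0
t = np.linspace(-np.pi, np.pi, 1001)
exact = A*np.abs(np.sin(w*t))

def partial_sum(t, N):
    s = (2*A/np.pi)*np.ones_like(t)
    for n in range(2, N + 1, 2):                       # n = 2, 4, 6, ...
        s -= (4*A/np.pi)*np.cos(n*w*t)/(n**2 - 1)
    return s

for N in (2, 8, 32):
    print(N, np.max(np.abs(partial_sum(t, N) - exact)))   # error shrinks as N grows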

7.5 Gibbs Phenomenon


In Example 7.1, we found that the trigonometric form of the Fourier series of the square waveform


is

f(t) = (4A/π) ( sin ωt + (1/3) sin 3ωt + (1/5) sin 5ωt + … ) = (4A/π) Σ_{n = odd} (1/n) sin nωt

Figure 7.24 shows the first 11 harmonics and their sum. We observe that as we add more and more harmonics, the sum looks more and more like the square waveform. However, the crests do not get flattened; this is known as Gibbs phenomenon and it occurs because of the discontinuity of the perfect square waveform as it changes from +A to −A.

Figure 7.24. Gibbs phenomenon (sum of the first 11 harmonics; the crest shows the overshoot)
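The overshoot can be observed numerically. The Python sketch below (assuming numpy, with A = 1) evaluates partial sums of the square-wave series for an increasing number of odd harmonics; the peak value stays near 1.09A rather than settling to A, which is the Gibbs phenomenon.

import numpy as np

A = 1.0
t = np.linspace(0.001, np.pi - 0.001, 20001)

def square_wave_sum(t, N):
    s = np.zeros_like(t)
    for n in range(1, N + 1, 2):                       # odd harmonics only
        s += (4*A/np.pi)*np.sin(n*t)/n
    return s

for N in (11, 51, 201):
    print(N, square_wave_sum(t, N).max())              # peak stays near 1.09*A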

7.6 Alternate Forms of the Trigonometric Fourier Series


We recall that the trigonometric Fourier series is expressed as

f(t) = (1/2) a_0 + a_1 cos ωt + a_2 cos 2ωt + a_3 cos 3ωt + a_4 cos 4ωt + …
       + b_1 sin ωt + b_2 sin 2ωt + b_3 sin 3ωt + b_4 sin 4ωt + …    (7.60)

If a given waveform does not have any kind of symmetry, it may be advantageous to use the alternate form of the trigonometric Fourier series, where the cosine and sine terms of the same frequency are grouped together and the sum is combined into a single term, either cosine or sine. However, we still need to compute the a_n and b_n coefficients separately.

We use the triangle shown in Figure 7.25 for the derivation of the alternate forms. In that triangle,

c_n = √(a_n² + b_n²),   cos θ_n = a_n/√(a_n² + b_n²) = a_n/c_n,   sin θ_n = b_n/√(a_n² + b_n²) = b_n/c_n

cos θ_n = sin φ_n,   θ_n = atan(b_n/a_n),   φ_n = atan(a_n/b_n)

Figure 7.25. Derivation of the alternate form of the trigonometric Fourier series


We assume ω = 1, and for n = 1, 2, 3, …, we rewrite (7.60) as

f(t) = (1/2) a_0 + c_1 ( (a_1/c_1) cos t + (b_1/c_1) sin t ) + c_2 ( (a_2/c_2) cos 2t + (b_2/c_2) sin 2t ) + …
       + c_n ( (a_n/c_n) cos nt + (b_n/c_n) sin nt )

     = (1/2) a_0 + c_1 ( cos θ_1 cos t + sin θ_1 sin t ) + c_2 ( cos θ_2 cos 2t + sin θ_2 sin 2t ) + … + c_n ( cos θ_n cos nt + sin θ_n sin nt )

     = (1/2) a_0 + c_1 cos(t − θ_1) + c_2 cos(2t − θ_2) + … + c_n cos(nt − θ_n)

and, in general, for ω ≠ 1, we get

f(t) = (1/2) a_0 + Σ_{n=1}^{∞} c_n cos(nωt − θ_n) = (1/2) a_0 + Σ_{n=1}^{∞} c_n cos(nωt − atan(b_n/a_n))    (7.61)

Similarly,

f(t) = (1/2) a_0 + c_1 ( sin φ_1 cos t + cos φ_1 sin t ) + c_2 ( sin φ_2 cos 2t + cos φ_2 sin 2t ) + … + c_n ( sin φ_n cos nt + cos φ_n sin nt )
     = (1/2) a_0 + c_1 sin(t + φ_1) + c_2 sin(2t + φ_2) + … + c_n sin(nt + φ_n)

and, in general, for ω ≠ 1, we get

f(t) = (1/2) a_0 + Σ_{n=1}^{∞} c_n sin(nωt + φ_n) = (1/2) a_0 + Σ_{n=1}^{∞} c_n sin(nωt + atan(a_n/b_n))    (7.62)

When used in circuit analysis, (7.61) and (7.62) can be expressed as phasors. Since it is customary to use the cosine function in the time-domain to phasor transformation, we choose to use the transformation of (7.63) below.

(1/2) a_0 + Σ_{n=1}^{∞} c_n cos(nωt − atan(b_n/a_n))  ⇔  (1/2) a_0 + Σ_{n=1}^{∞} c_n ∠−atan(b_n/a_n)    (7.63)

Example 7.8
Find the first 5 terms of the alternate form of the trigonometric Fourier series for the waveform of
Figure 7.26.


Figure 7.26. Waveform for Example 7.8: f(t) = 3 for 0 < t < π/2 and f(t) = 1 for π/2 < t < 2π, with ω = 1

Solution:
The given waveform has no symmetry; thus, we expect both cosine and sine functions with odd and even terms present. Also, by inspection the DC value is not zero.
We will compute the a_n and b_n coefficients and the DC value, and we will combine them to get an expression in the form of (7.63). Then,

a_n = (1/π) ∫₀^{π/2} 3 cos nt dt + (1/π) ∫_{π/2}^{2π} 1 cos nt dt = (3/nπ) sin nt |₀^{π/2} + (1/nπ) sin nt |_{π/2}^{2π}
    = (3/nπ) sin(nπ/2) + (1/nπ) sin 2nπ − (1/nπ) sin(nπ/2) = (2/nπ) sin(nπ/2)    (7.64)

We observe that for n = even, a_n = 0. For n = odd,

a_1 = 2/π    (7.65)

and

a_3 = −2/(3π)    (7.66)

The DC value is

(1/2) a_0 = (1/2π) ∫₀^{π/2} 3 dt + (1/2π) ∫_{π/2}^{2π} 1 dt = (1/2π) ( 3t |₀^{π/2} + t |_{π/2}^{2π} )
          = (1/2π) ( 3π/2 + 2π − π/2 ) = (1/2π) ( π + 2π ) = 3/2    (7.67)

The b n coefficients are


b_n = (1/π) ∫₀^{π/2} 3 sin nt dt + (1/π) ∫_{π/2}^{2π} 1 sin nt dt = (−3/nπ) cos nt |₀^{π/2} + (−1/nπ) cos nt |_{π/2}^{2π}
    = (−3/nπ) cos(nπ/2) + 3/nπ − (1/nπ) cos 2nπ + (1/nπ) cos(nπ/2) = (1/nπ)(3 − cos 2nπ) = 2/nπ    (7.68)
Then,
b1 = 2 ⁄ π (7.69)

b2 = 1 ⁄ π (7.70)

b 3 = 2 ⁄ 3π (7.71)

b 4 = 1 ⁄ 2π (7.72)
From (7.63),

(1/2) a_0 + Σ_{n=1}^{∞} c_n cos(nωt − atan(b_n/a_n))  ⇔  (1/2) a_0 + Σ_{n=1}^{∞} c_n ∠−atan(b_n/a_n)

where

c_n ∠−atan(b_n/a_n) = √(a_n² + b_n²) ∠−atan(b_n/a_n) = √(a_n² + b_n²) ∠−θ_n = a_n − j b_n    (7.73)

Thus, for n = 1, 2, 3, and 4, we get:

a_1 − jb_1 = 2/π − j 2/π = √((2/π)² + (2/π)²) ∠−45° = (2√2/π) ∠−45° ⇔ (2√2/π) cos(ωt − 45°)    (7.74)

Similarly,

a_2 − jb_2 = 0 − j(1/π) = (1/π) ∠−90° ⇔ (1/π) cos(2ωt − 90°)    (7.75)

a_3 − jb_3 = −2/(3π) − j 2/(3π) = (2√2/(3π)) ∠−135° ⇔ (2√2/(3π)) cos(3ωt − 135°)    (7.76)

and

a_4 − jb_4 = 0 − j(1/(2π)) = (1/(2π)) ∠−90° ⇔ (1/(2π)) cos(4ωt − 90°)    (7.77)
Combining the terms of (7.67) and (7.74) through (7.77), we find that the alternate form of the trig-
onometric Fourier series representing the waveform of this example is


f(t) = 3/2 + (1/π) [ 2√2 cos(ωt − 45°) + cos(2ωt − 90°) + (2√2/3) cos(3ωt − 135°) + (1/2) cos(4ωt − 90°) + … ]    (7.78)
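The conversion used in (7.74) through (7.77) is simply the complex number a_n − j b_n of (7.73). A small Python sketch (assuming numpy), applied to the (a_n, b_n) pairs quoted in this example, reproduces the magnitudes and phase angles above.

import numpy as np

pi = np.pi
pairs = {1: (2/pi, 2/pi), 2: (0.0, 1/pi), 3: (-2/(3*pi), 2/(3*pi)), 4: (0.0, 1/(2*pi))}

for n, (a_n, b_n) in pairs.items():
    c = a_n - 1j*b_n                                   # (7.73)
    print(n, abs(c), np.degrees(np.angle(c)))          # magnitude c_n and its phase in degrees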

7.7 Circuit Analysis with Trigonometric Fourier Series


When the excitation of an electric circuit is a non-sinusoidal waveform such as those we discussed thus far, we can use Fourier series to determine the response of the circuit. The procedure is illustrated with the examples that follow.

Example 7.9
The input to the series RC circuit of Figure 7.27 is the square waveform of Figure 7.28. Compute the voltage v_C(t) across the capacitor. Consider only the first three terms of the series, and assume ω = 1.

Figure 7.27. Circuit for Example 7.9 (series R = 1 Ω and C = 1 F; input v_in(t), output v_C(t) across the capacitor)

Figure 7.28. Input waveform for the circuit of Figure 7.27 (square wave of amplitude ±A and period T)


Solution:
In Example 7.1, we found that the above square waveform can be represented by the trigonometric Fourier series as

f(t) = (4A/π) ( sin ωt + (1/3) sin 3ωt + (1/5) sin 5ωt + … )    (7.79)


Since this series is the sum of sinusoids, we will use phasor analysis to obtain the solution.
The equivalent phasor circuit is shown in Figure 7.29.

Figure 7.29. Phasor circuit for Example 7.9 (R = 1 in series with the capacitor impedance 1/jω; output V_C)


We let n represent the number of terms in the Fourier series. For this example, we are only interested in the first three terms, and thus n = 1, 2, and 3.
By the voltage division expression,

V_Cn = [ (1/(jnω)) / (1 + 1/(jnω)) ] V_inn = ( 1/(1 + jn) ) V_inn    (7.80)

With reference to (7.79), the phasors of the first 3 terms of (7.80) are

(4A/π) sin t = (4A/π) cos(t − 90°) ⇔ V_in1 = (4A/π) ∠−90°    (7.81)

(4A/π)(1/3) sin 2t = (4A/π)(1/3) cos(2t − 90°) ⇔ V_in2 = (4A/π)(1/3) ∠−90°    (7.82)

(4A/π)(1/5) sin 3t = (4A/π)(1/5) cos(3t − 90°) ⇔ V_in3 = (4A/π)(1/5) ∠−90°    (7.83)
By substitution of (7.81) through (7.83) into (7.80), we obtain the phasor and time-domain voltages indicated in (7.84) through (7.86) below.

V_C1 = (1/(1 + j)) · (4A/π) ∠−90° = (1/(√2 ∠45°)) · (4A/π) ∠−90° = (4A/π)(√2/2) ∠−135° ⇔ (4A/π)(√2/2) cos(t − 135°)    (7.84)

V_C2 = (1/(1 + j2)) · (4A/π)(1/3) ∠−90° = (1/(√5 ∠63.4°)) · (4A/π)(1/3) ∠−90° = (4A/π)(√5/15) ∠−153.4° ⇔ (4A/π)(√5/15) cos(2t − 153.4°)    (7.85)


V_C3 = (1/(1 + j3)) · (4A/π)(1/5) ∠−90° = (1/(√10 ∠71.6°)) · (4A/π)(1/5) ∠−90° = (4A/π)(√10/50) ∠−161.6° ⇔ (4A/π)(√10/50) cos(3t − 161.6°)    (7.86)

Thus, the capacitor voltage in the time domain is

v_C(t) = (4A/π) [ (√2/2) cos(t − 135°) + (√5/15) cos(2t − 153.4°) + (√10/50) cos(3t − 161.6°) + … ]    (7.87)
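The phasor arithmetic of (7.84) through (7.86) can be reproduced with a few lines of Python (a sketch only, assuming numpy; A = 1 and ω = 1 as in the example). Each line prints the magnitude of V_Cn as a multiple of 4A/π together with its phase angle in degrees.

import numpy as np

A = 1.0
amps = {1: 1.0, 2: 1/3, 3: 1/5}                        # amplitudes used in (7.81)-(7.83)
for n, amp in amps.items():
    Vin = (4*A/np.pi)*amp*np.exp(-1j*np.pi/2)          # (4A/pi)*amp at -90 degrees
    Vc = Vin/(1 + 1j*n)                                # voltage division, (7.80) with w = 1
    print(n, abs(Vc)/(4*A/np.pi), np.degrees(np.angle(Vc)))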

7.8 The Exponential Form of the Fourier Series


The Fourier series are often expressed in exponential form. The advantage of the exponential form
is that we only need to perform one integration rather than two, one for the a n , and another for the
b n coefficients in the trigonometric form of the series. Moreover, in most cases the integration is
simpler.
The exponential form is derived from the trigonometric form by substitution of

cos ωt = (e^{jωt} + e^{−jωt}) / 2    (7.88)

sin ωt = (e^{jωt} − e^{−jωt}) / j2    (7.89)

into f(t). Thus,

f(t) = (1/2) a_0 + a_1 ( (e^{jωt} + e^{−jωt})/2 ) + a_2 ( (e^{j2ωt} + e^{−j2ωt})/2 ) + …
       + b_1 ( (e^{jωt} − e^{−jωt})/j2 ) + b_2 ( (e^{j2ωt} − e^{−j2ωt})/j2 ) + …    (7.90)

and grouping terms with the same exponents, we get

f(t) = … + (a_2/2 − b_2/j2) e^{−j2ωt} + (a_1/2 − b_1/j2) e^{−jωt} + (1/2) a_0 + (a_1/2 + b_1/j2) e^{jωt} + (a_2/2 + b_2/j2) e^{j2ωt} + …    (7.91)

The terms of (7.91) in parentheses are usually denoted as

C_{−n} = (1/2) ( a_n − b_n/j ) = (1/2) ( a_n + j b_n )    (7.92)

C_n = (1/2) ( a_n + b_n/j ) = (1/2) ( a_n − j b_n )    (7.93)


C_0 = (1/2) a_0    (7.94)

Then, (7.91) is written as

f(t) = … + C_{−2} e^{−j2ωt} + C_{−1} e^{−jωt} + C_0 + C_1 e^{jωt} + C_2 e^{j2ωt} + …    (7.95)

We must remember that the C_i coefficients, except C_0, are complex and occur in complex conjugate pairs, that is,

C_{−n} = C_n*    (7.96)
We can derive a general expression for the complex coefficients C_n by multiplying both sides of (7.95) by e^{−jnωt} and integrating over one period, as we did in the derivation of the a_n and b_n coefficients of the trigonometric form. Then, with ω = 1,

∫₀^{2π} f(t) e^{−jnt} dt = … + ∫₀^{2π} C_{−2} e^{−j2t} e^{−jnt} dt + ∫₀^{2π} C_{−1} e^{−jt} e^{−jnt} dt
                            + ∫₀^{2π} C_0 e^{−jnt} dt + ∫₀^{2π} C_1 e^{jt} e^{−jnt} dt
                            + ∫₀^{2π} C_2 e^{j2t} e^{−jnt} dt + … + ∫₀^{2π} C_n e^{jnt} e^{−jnt} dt    (7.97)

We observe that all the integrals on the right side of (7.97) are zero except the last one. Therefore,

∫₀^{2π} f(t) e^{−jnt} dt = ∫₀^{2π} C_n e^{jnt} e^{−jnt} dt = ∫₀^{2π} C_n dt = 2π C_n

or

C_n = (1/2π) ∫₀^{2π} f(t) e^{−jnt} dt

and, in general, for ω ≠ 1,

C_n = (1/2π) ∫₀^{2π} f(t) e^{−jnωt} d(ωt)    (7.98)

or

C_n = (1/T) ∫₀^{T} f(t) e^{−jnωt} dt    (7.99)

We can derive the trigonometric Fourier series from the exponential series by addition and subtrac-


tion of the exponential form coefficients C_n and C_{−n}. Thus, from (7.92) and (7.93),

C_n + C_{−n} = (1/2) ( a_n − jb_n + a_n + jb_n )

or

a_n = C_n + C_{−n}    (7.100)

Similarly,

C_n − C_{−n} = (1/2) ( a_n − jb_n − a_n − jb_n )    (7.101)

or

b_n = j ( C_n − C_{−n} )    (7.102)
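Relations (7.92), (7.93), (7.100) and (7.102) can be confirmed for any real pair (a_n, b_n); the following Python sketch (an illustration only, with arbitrarily chosen numbers) shows the round trip.

a_n, b_n = 0.8, -0.3                       # any real pair
C_plus  = (a_n - 1j*b_n)/2                 # C_n, from (7.93)
C_minus = (a_n + 1j*b_n)/2                 # C_{-n}, from (7.92)
print(C_plus + C_minus, a_n)               # (7.100): a_n = C_n + C_{-n}
print(1j*(C_plus - C_minus), b_n)          # (7.102): b_n = j(C_n - C_{-n})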

Symmetry in Exponential Series

1. For even functions, all coefficients C i are real

We recall from (7.92) and (7.93) that

C_{−n} = (1/2) ( a_n − b_n/j ) = (1/2) ( a_n + j b_n )    (7.103)

and

C_n = (1/2) ( a_n + b_n/j ) = (1/2) ( a_n − j b_n )    (7.104)
Since even functions have no sine terms, the b n coefficients in (7.103) and (7.104) are zero.
Therefore, both C –n and C n are real.
2. For odd functions, all coefficients C i are imaginary

Since odd functions have no cosine terms, the a n coefficients in (7.103) and (7.104) are zero.
Therefore, both C –n and C n are imaginary.

3. If there is half-wave symmetry, C n = 0 for n = even

We recall from the trigonometric Fourier series that if there is half-wave symmetry, all even har-
monics are zero. Therefore, in (7.103) and (7.104) the coefficients a n and b n are both zero for
n = even , and thus, both C – n and C n are also zero for n = even .

4. If there is no symmetry, f ( t ) is complex.


5. C –n = C n∗ always

This can be seen in (7.103) and (7.104)


Example 7.10
Compute the exponential Fourier series for the square waveform of Figure 7.30 below. Assume that ω = 1.

Figure 7.30. Waveform for Example 7.10 (square wave of amplitude A: +A for 0 < ωt < π, −A for π < ωt < 2π, period T)


Solution:
This is the same waveform as in Example 7.1, and as we know, it is an odd function, has half-wave symmetry, and its DC component is zero. Therefore, the C_n coefficients will be imaginary, C_n = 0 for n = even, and C_0 = 0. Using (7.98) with ω = 1, we get

C_n = (1/2π) ∫₀^{2π} f(t) e^{−jnt} dt = (1/2π) ∫₀^π A e^{−jnt} dt + (1/2π) ∫_π^{2π} (−A) e^{−jnt} dt

and for n = 0,

C_0 = (1/2π) [ ∫₀^π A e^{−0} dt + ∫_π^{2π} (−A) e^{−0} dt ] = (A/2π) ( π − 2π + π ) = 0

as expected.

For n ≠ 0,

C_n = (1/2π) [ ∫₀^π A e^{−jnt} dt + ∫_π^{2π} (−A) e^{−jnt} dt ] = (1/2π) [ (A/(−jn)) e^{−jnt} |₀^π + (−A/(−jn)) e^{−jnt} |_π^{2π} ]
    = (1/2π) [ (A/(−jn)) ( e^{−jnπ} − 1 ) + (A/(jn)) ( e^{−jn2π} − e^{−jnπ} ) ] = (A/(2jπn)) ( 1 − e^{−jnπ} + e^{−jn2π} − e^{−jnπ} )
    = (A/(2jπn)) ( 1 + e^{−jn2π} − 2 e^{−jnπ} ) = (A/(2jπn)) ( e^{−jnπ} − 1 )²    (7.105)

For n = even, e^{−jnπ} = 1; then,

C_n |_{n = even} = (A/(2jπn)) ( e^{−jnπ} − 1 )² = (A/(2jπn)) ( 1 − 1 )² = 0    (7.106)

as expected.

For n = odd, e^{−jnπ} = −1. Therefore,

C_n |_{n = odd} = (A/(2jπn)) ( e^{−jnπ} − 1 )² = (A/(2jπn)) ( −1 − 1 )² = (A/(2jπn)) (4) = 2A/(jπn)    (7.107)
Using (7.95), that is,

f(t) = … + C_{−2} e^{−j2ωt} + C_{−1} e^{−jωt} + C_0 + C_1 e^{jωt} + C_2 e^{j2ωt} + …

we obtain the exponential Fourier series for the square waveform with odd symmetry as

f(t) = (2A/jπ) ( … − (1/3) e^{−j3ωt} − e^{−jωt} + e^{jωt} + (1/3) e^{j3ωt} + … ) = (2A/jπ) Σ_{n = odd} (1/n) e^{jnωt}    (7.108)

The minus (−) sign of the first two terms within the parentheses results from the fact that C_{−n} = C_n*. For instance, since C_3 = 2A/(j3π), it follows that C_{−3} = C_3* = −2A/(j3π). We observe that the C_n coefficients are purely imaginary, as expected, since the waveform is an odd function.
To prove that (7.108) and (7.22) are the same, we group the two terms inside the parentheses of (7.108) for which n = 1; this will produce the fundamental frequency sin ωt. Then, we group the two terms for which n = 3, and this will produce the third harmonic sin 3ωt, and so on.
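As a numerical check of (7.107), the Python sketch below (assuming numpy and scipy, with A = 1) evaluates (7.98) directly for the square waveform of Figure 7.30 and compares the result with 2A/(jπn) for odd n and 0 for even n.

import numpy as np
from scipy.integrate import quad

A = 1.0
f = lambda t: A if t < np.pi else -A                   # one period, 0 <= t < 2*pi

def C(n):
    re = quad(lambda t:  f(t)*np.cos(n*t), 0, 2*np.pi, points=[np.pi])[0]
    im = quad(lambda t: -f(t)*np.sin(n*t), 0, 2*np.pi, points=[np.pi])[0]
    return (re + 1j*im)/(2*np.pi)

for n in range(1, 6):
    print(n, C(n), 2*A/(1j*np.pi*n) if n % 2 else 0.0)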

7.9 Line Spectra


When the Fourier series are known, it is useful to plot the amplitudes of the harmonics on a frequency scale that shows the first (fundamental frequency) harmonic and the higher harmonics times the amplitude of the fundamental. Such a plot is known as a line spectrum and shows the spectral lines that would be displayed by a spectrum analyzer*.
Figure 7.31 shows the line spectrum of the square waveform of Example 7.1.
Figure 7.32 shows the line spectrum for the half-wave rectifier of Example 7.6.
The line spectra of other waveforms can be easily constructed from the Fourier series.

* An instrument that displays the spectral lines of a waveform.


1. Theory

● A graph of a periodic function f(x) that has period L exhibits the same pattern every L units along the x-axis, so that f(x + L) = f(x) for every value of x. If we know what the function looks like over one complete period, we can thus sketch a graph of the function over a wider interval of x (that may contain many periods)

(Figure: a periodic f(x) repeating with PERIOD = L)

● This property of repetition defines a fundamental spatial frequency k = 2π/L that can be used to give a first approximation to the periodic pattern f(x):

f(x) ≈ c₁ sin(kx + α₁) = a₁ cos(kx) + b₁ sin(kx),

where symbols with subscript 1 are constants that determine the amplitude and phase of this first approximation

● A much better approximation of the periodic pattern f(x) can be built up by adding an appropriate combination of harmonics to this fundamental (sine-wave) pattern. For example, adding

c₂ sin(2kx + α₂) = a₂ cos(2kx) + b₂ sin(2kx)   (the 2nd harmonic)

c₃ sin(3kx + α₃) = a₃ cos(3kx) + b₃ sin(3kx)   (the 3rd harmonic)

Here, symbols with subscripts are constants that determine the amplitude and phase of each harmonic contribution

● One can even approximate a square-wave pattern with a suitable sum that involves a fundamental sine-wave plus a combination of harmonics of this fundamental frequency. This sum is called a Fourier series

(Figure: the fundamental alone, and the fundamental plus 2, 5, and 20 harmonics, approximating the square-wave pattern over one PERIOD = L)

● In this Tutorial, we consider working out Fourier series for functions f(x) with period L = 2π. Their fundamental frequency is then k = 2π/L = 1, and their Fourier series representations involve terms like

a1 cos x , b1 sin x
a2 cos 2x , b2 sin 2x
a3 cos 3x , b3 sin 3x

● We also include a constant term a0/2 in the Fourier series. This allows us to represent functions that are, for example, entirely above the x-axis. With a sufficient number of harmonics included, our approximate series can exactly represent a given function f(x)

f(x) = a0/2 + a1 cos x + a2 cos 2x + a3 cos 3x + ... + b1 sin x + b2 sin 2x + b3 sin 3x + ...

● A more compact way of writing the Fourier series of a function f(x), with period 2π, uses the variable subscript n = 1, 2, 3, . . .

f(x) = a0/2 + Σ_{n=1}^{∞} [ an cos nx + bn sin nx ]

● We need to work out the Fourier coefficients (a0, an and bn) for given functions f(x). This process is broken down into three steps; a short worked sketch follows below

STEP ONE:    a0 = (1/π) ∫ f(x) dx

STEP TWO:    an = (1/π) ∫ f(x) cos nx dx

STEP THREE:  bn = (1/π) ∫ f(x) sin nx dx

where integrations are over a single interval in x of L = 2π
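For instance, the three STEPs can be carried out symbolically. The Python/sympy sketch below (an illustration, assuming sympy is available) does so for the sample pattern f(x) = x on −π < x < π, the function of Exercise 6.

import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', integer=True, positive=True)
f = x                                                  # sample pattern of period 2*pi

a0 = sp.integrate(f, (x, -sp.pi, sp.pi))/sp.pi                              # STEP ONE
an = sp.simplify(sp.integrate(f*sp.cos(n*x), (x, -sp.pi, sp.pi))/sp.pi)     # STEP TWO
bn = sp.simplify(sp.integrate(f*sp.sin(n*x), (x, -sp.pi, sp.pi))/sp.pi)     # STEP THREE
print(a0, an, bn)        # compare with Exercise 6: a0 = 0, an = 0, bn = -(2/n)*(-1)**n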


● Finally, specifying a particular value of x = x₁ in a Fourier series gives a series of constants that should equal f(x₁). However, if f(x) is discontinuous at this value of x, then the series converges to a value that is half-way between the two possible function values

" V e rtic a l ju m p " /d is c o n tin u ity


in th e fu n c tio n re p re s e n te d f(x )

F o u rie r s e rie s
c o n v e rg e s to
h a lf-w a y p o in t

Toc JJ II J I Back

2. Exercises
Full worked solutions are given at the end of this tutorial (7 exercises in total).
Exercise 1.
Let f(x) be a function of period 2π such that

f(x) = 1 for −π < x < 0,  and  f(x) = 0 for 0 < x < π.

a) Sketch a graph of f(x) in the interval −2π < x < 2π

b) Show that the Fourier series for f(x) in the interval −π < x < π is

1/2 − (2/π) [ sin x + (1/3) sin 3x + (1/5) sin 5x + ... ]

c) By giving an appropriate value to x, show that

π/4 = 1 − 1/3 + 1/5 − 1/7 + ...


Exercise 2.
Let f(x) be a function of period 2π such that

f(x) = 0 for −π < x < 0,  and  f(x) = x for 0 < x < π.

a) Sketch a graph of f(x) in the interval −3π < x < 3π

b) Show that the Fourier series for f(x) in the interval −π < x < π is

π/4 − (2/π) [ cos x + (1/3²) cos 3x + (1/5²) cos 5x + ... ] + [ sin x − (1/2) sin 2x + (1/3) sin 3x − ... ]

c) By giving appropriate values to x, show that

(i) π/4 = 1 − 1/3 + 1/5 − 1/7 + ...   and   (ii) π²/8 = 1 + 1/3² + 1/5² + 1/7² + ...


Exercise 3.
Let f(x) be a function of period 2π such that

f(x) = x for 0 < x < π,  and  f(x) = π for π < x < 2π.

a) Sketch a graph of f(x) in the interval −2π < x < 2π

b) Show that the Fourier series for f(x) in the interval 0 < x < 2π is

3π/4 − (2/π) [ cos x + (1/3²) cos 3x + (1/5²) cos 5x + . . . ] − [ sin x + (1/2) sin 2x + (1/3) sin 3x + . . . ]

c) By giving appropriate values to x, show that

(i) π/4 = 1 − 1/3 + 1/5 − 1/7 + ...   and   (ii) π²/8 = 1 + 1/3² + 1/5² + 1/7² + ...


Exercise 4.
Let f(x) be a function of period 2π such that

f(x) = x/2 over the interval 0 < x < 2π.

a) Sketch a graph of f(x) in the interval 0 < x < 4π

b) Show that the Fourier series for f(x) in the interval 0 < x < 2π is

π/2 − [ sin x + (1/2) sin 2x + (1/3) sin 3x + . . . ]

c) By giving an appropriate value to x, show that

π/4 = 1 − 1/3 + 1/5 − 1/7 + 1/9 − ...


Exercise 5.
Let f(x) be a function of period 2π such that

f(x) = π − x for 0 < x < π,  and  f(x) = 0 for π < x < 2π.

a) Sketch a graph of f(x) in the interval −2π < x < 2π

b) Show that the Fourier series for f(x) in the interval 0 < x < 2π is

π/4 + (2/π) [ cos x + (1/3²) cos 3x + (1/5²) cos 5x + . . . ] + [ sin x + (1/2) sin 2x + (1/3) sin 3x + (1/4) sin 4x + . . . ]

c) By giving an appropriate value to x, show that

π²/8 = 1 + 1/3² + 1/5² + ...


Exercise 6.
Let f(x) be a function of period 2π such that

f(x) = x in the range −π < x < π.

a) Sketch a graph of f(x) in the interval −3π < x < 3π

b) Show that the Fourier series for f(x) in the interval −π < x < π is

2 [ sin x − (1/2) sin 2x + (1/3) sin 3x − . . . ]

c) By giving an appropriate value to x, show that

π/4 = 1 − 1/3 + 1/5 − 1/7 + ...


Exercise 7.
Let f(x) be a function of period 2π such that

f(x) = x² over the interval −π < x < π.

a) Sketch a graph of f(x) in the interval −3π < x < 3π

b) Show that the Fourier series for f(x) in the interval −π < x < π is

π²/3 − 4 [ cos x − (1/2²) cos 2x + (1/3²) cos 3x − . . . ]

c) By giving an appropriate value to x, show that

π²/6 = 1 + 1/2² + 1/3² + 1/4² + ...


3. Answers

The sketches asked for in part (a) of each exercise are given within the full worked solutions

The answers below are suggested values of x to get the series of constants quoted in part (c) of each exercise

1. x = π/2,
2. (i) x = π/2, (ii) x = 0,
3. (i) x = π/2, (ii) x = 0,
4. x = π/2,
5. x = 0,
6. x = π/2,
7. x = π.

4. Integrals

Formula for integration by parts: ∫ₐᵇ u (dv/dx) dx = [uv]ₐᵇ − ∫ₐᵇ (du/dx) v dx

f(x)             ∫ f(x) dx
xⁿ               xⁿ⁺¹/(n+1)   (n ≠ −1)
1/x              ln |x|
eˣ               eˣ
sin x            −cos x
cos x            sin x
tan x            −ln |cos x|
cosec x          ln |tan(x/2)|
sec x            ln |sec x + tan x|
sec² x           tan x
cot x            ln |sin x|
sin² x           x/2 − (sin 2x)/4
cos² x           x/2 + (sin 2x)/4

f(x)             ∫ f(x) dx
[g(x)]ⁿ g′(x)    [g(x)]ⁿ⁺¹/(n+1)   (n ≠ −1)
g′(x)/g(x)       ln |g(x)|
aˣ               aˣ/ln a   (a > 0)
sinh x           cosh x
cosh x           sinh x
tanh x           ln cosh x
cosech x         ln |tanh(x/2)|
sech x           2 tan⁻¹(eˣ)
sech² x          tanh x
coth x           ln |sinh x|
sinh² x          (sinh 2x)/4 − x/2
cosh² x          (sinh 2x)/4 + x/2

f(x)             ∫ f(x) dx
1/(a² + x²)      (1/a) tan⁻¹(x/a)   (a > 0)
1/(a² − x²)      (1/2a) ln |(a + x)/(a − x)|   (0 < |x| < a)
1/(x² − a²)      (1/2a) ln |(x − a)/(x + a)|   (|x| > a > 0)
1/√(a² − x²)     sin⁻¹(x/a)   (−a < x < a)
1/√(a² + x²)     ln [ (x + √(a² + x²))/a ]   (a > 0)
1/√(x² − a²)     ln [ (x + √(x² − a²))/a ]   (x > a > 0)
√(a² − x²)       (a²/2) [ sin⁻¹(x/a) + x√(a² − x²)/a² ]
√(a² + x²)       (a²/2) [ sinh⁻¹(x/a) + x√(a² + x²)/a² ]
√(x² − a²)       (a²/2) [ −cosh⁻¹(x/a) + x√(x² − a²)/a² ]

5. Useful trig results

When calculating the Fourier coefficients an and bn, for which n = 1, 2, 3, . . . , the following trig. results are useful. Each of these results, which are also true for n = 0, −1, −2, −3, . . . , can be deduced from the graph of sin x or that of cos x

● sin nπ = 0

● cos nπ = (−1)ⁿ
● sin(nπ/2) = 0 for n even; = 1 for n = 1, 5, 9, ...; = −1 for n = 3, 7, 11, ...

● cos(nπ/2) = 0 for n odd; = 1 for n = 0, 4, 8, ...; = −1 for n = 2, 6, 10, ...

Areas cancel when integrating over whole periods:

● ∫ sin nx dx = 0

● ∫ cos nx dx = 0

6. Alternative notation

● For a waveform f(x) with period L = 2π/k,

f(x) = a0/2 + Σ_{n=1}^{∞} [ an cos nkx + bn sin nkx ]

The corresponding Fourier coefficients are

STEP ONE:    a0 = (2/L) ∫_L f(x) dx

STEP TWO:    an = (2/L) ∫_L f(x) cos nkx dx

STEP THREE:  bn = (2/L) ∫_L f(x) sin nkx dx

and integrations are over a single interval in x of L


● For a waveform f(x) with period 2L = 2π/k, we have that

k = 2π/(2L) = π/L   and   nkx = nπx/L

f(x) = a0/2 + Σ_{n=1}^{∞} [ an cos(nπx/L) + bn sin(nπx/L) ]

The corresponding Fourier coefficients are

STEP ONE:    a0 = (1/L) ∫_{2L} f(x) dx

STEP TWO:    an = (1/L) ∫_{2L} f(x) cos(nπx/L) dx

STEP THREE:  bn = (1/L) ∫_{2L} f(x) sin(nπx/L) dx

and integrations are over a single interval in x of 2L


● For a waveform f(t) with period T = 2π/ω,

f(t) = a0/2 + Σ_{n=1}^{∞} [ an cos nωt + bn sin nωt ]

The corresponding Fourier coefficients are

STEP ONE:    a0 = (2/T) ∫_T f(t) dt

STEP TWO:    an = (2/T) ∫_T f(t) cos nωt dt

STEP THREE:  bn = (2/T) ∫_T f(t) sin nωt dt

and integrations are over a single interval in t of T


7. Tips on using solutions

● When looking at the THEORY, ANSWERS, INTEGRALS, TRIG or NOTATION pages, use the Back button (at the bottom of the page) to return to the exercises

● Use the solutions intelligently. For example, they can help you get started on an exercise, or they can allow you to check whether your intermediate results are correct

● Try to make less use of the full solutions as you work your way through the Tutorial


Full worked solutions


23
Exercise 1.

f(x) = 1 for −π < x < 0, f(x) = 0 for 0 < x < π, and has period 2π

a) Sketch a graph of f(x) in the interval −2π < x < 2π

(Sketch: the value 1 on (−π, 0) and 0 on (0, π), repeated with period 2π)

b) Fourier series representation of f (x)


24

STEP ONE
π 0
1 π
Z Z Z
1 1
a0 = f (x)dx = f (x)dx + f (x)dx
π −π π −π π 0

1 0 1 π
Z Z
= 1 · dx + 0 · dx
π −π π 0

1 0
Z
= dx
π −π
1 0
= [x]
π −π
1
= (0 − (−π))
π
1
= · (π)
π
i.e. a0 = 1 .


STEP TWO
25
Z π Z 0 Z π
1 1 1
an = f (x) cos nx dx = f (x) cos nx dx + f (x) cos nx dx
π −π π −π π 0
Z 0 Z π
1 1
= 1 · cos nx dx + 0 · cos nx dx
π −π π 0
Z 0
1
= cos nx dx
π −π
 0
1 sin nx 1 0
= = [sin nx]−π
π n −π nπ
1
= (sin 0 − sin(−nπ))

1
= (0 + sin nπ)

1
i.e. an = (0 + 0) = 0.


STEP THREE
26
Z π
1
bn = f (x) sin nx dx
π −π
Z 0 Z π
1 1
= f (x) sin nx dx + f (x) sin nx dx
π −π π 0
Z 0 Z π
1 1
= 1 · sin nx dx + 0 · sin nx dx
π −π π 0

0  0
1 − cos nx
Z
1
i.e. bn = sin nx dx =
π −π π n −π
1 1
= − [cos nx]0−π = − (cos 0 − cos(−nπ))
nπ nπ
1 1
= − (1 − cos nπ) = − (1 − (−1)n ) , see Trig
 nπ nπ 
0 , n even n 1 , n even
i.e. bn = 2 , since (−1) =
− nπ , n odd −1 , n odd


We now have that


27

a0 X
f (x) = + [an cos nx + bn sin nx]
2 n=1
with the three steps giving

0 , n even
a0 = 1, an = 0 , and bn = 2
− nπ , n odd
It may be helpful to construct a table of values of bn

n 1 2 3  4 5 
bn − π2 0 − π2 13 0 − π2 15

Substituting our results now gives the required series

f(x) = 1/2 − (2/π) [ sin x + (1/3) sin 3x + (1/5) sin 5x + . . . ]
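A quick numerical check of this series (a Python sketch, assuming numpy is available): its partial sums should approach f(−π/2) = 1 and f(π/2) = 0 as more odd harmonics are included.

import numpy as np

def partial_sum(x, N):
    s = 0.5
    for k in range(1, N + 1, 2):                       # odd harmonics only
        s -= (2/np.pi)*np.sin(k*x)/k
    return s

for N in (5, 25, 101, 1001):
    # first value approaches f(-pi/2) = 1, second approaches f(pi/2) = 0
    print(N, partial_sum(-np.pi/2, N), partial_sum(np.pi/2, N))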


c) Pick an appropriate value of x, to show that


28

π 1 1 1
4 =1− 3 + 5 − 7 + ...

Comparing this series with


 
1 2 1 1
f (x) = − sin x + sin 3x + sin 5x + . . . ,
2 π 3 5
we need to introduce a minus sign in front of the constants 13 , 17 ,. . .

So we need sin x = 1, sin 3x = −1, sin 5x = 1, sin 7x = −1, etc

π
The first condition of sin x = 1 suggests trying x = 2.

This choice gives sin π2 + 1


3 sin 3 π2 + 1
5 sin 5 π2 + 1
7 sin 7 π2
1 1 1
i.e. 1 − 3 + 5 − 7
Looking at the graph of f (x), we also have that f ( π2 ) = 0.


Picking x = π
thus gives
29
2
h
0 = 21 − π2 sin π2 + 1
3 sin 3π
2 +
1
5 sin 5π
2 i
1
+ 7 sin 7π
2 + ...

h
1 2 1 1
i.e. 0 = 2 − π 1 − 3 + 5 i
1
− 7 + ...

A little manipulation then gives a series representation of π4


 
2 1 1 1 1
1 − + − + ... =
π 3 5 7 2
1 1 1 π
1 − + − + ... = .
3 5 7 4
Return to Exercise 1


Exercise 2.
30

0, −π < x < 0
f (x) =
x, 0 < x < π, and has period 2π

a) Sketch a graph of f (x) in the interval −3π < x < 3π

f(x )
π

−3π −2π −π π 2π 3π
x


b) Fourier series representation of f (x)


31

STEP ONE

Z π Z 0 Z π
1 1 1
a0 = f (x)dx = f (x)dx + f (x)dx
π −π π −π π 0
Z 0 Z π
1 1
= 0 · dx + x dx
π −π π 0
 π
1 x2
=
π 2 0

1 π2
 
= −0
π 2
π
i.e. a0 = .
2


STEP TWO
32
Z π Z 0 Z π
1 1 1
an = f (x) cos nx dx = f (x) cos nx dx + f (x) cos nx dx
π −π π −π π 0
Z 0 Z π
1 1
=0 · cos nx dx + x cos nx dx
−π π π 0
Z π  π Z π 
1 1 sin nx sin nx
i.e. an = x cos nx dx = x − dx
π 0 π n 0 0 n
(using integration by parts)
  
1 sin nπ 1 h cos nx iπ
i.e. an = π −0 − −
π n n n 0
 
1 1
= ( 0 − 0) + 2 [cos nx]π0
π n
1 1
= 2
{cos nπ − cos 0} = 2
{(−1)n − 1}
πn
 πn
0 , n even
i.e. an = , see Trig.
− πn2 2 , n odd

STEP THREE
33
π 0
1 π
Z Z Z
1 1
bn = f (x) sin nx dx = f (x) sin nx dx + f (x) sin nx dx
π −π π −π π 0
1 0 1 π
Z Z
= 0 · sin nx dx + x sin nx dx
π −π π 0
Z π  Z π 
1 1 h  cos nx iπ cos nx 
i.e. bn = x sin nx dx = x − − − dx
π 0 π n 0 0 n
(using integration by parts)
1 π
 Z 
1 1 π
= − [x cos nx]0 + cos nx dx
π n n 0
  π 
1 1 1 sin nx
= − (π cos nπ − 0) +
π n n n 0
1 1
= − (−1)n + (0 − 0), see Trig
n πn2
1
= − (−1)n
n

(
− n1 , n even 34
i.e. bn =
+ n1 , n odd

We now have

a0 X
f (x) = + [an cos nx + bn sin nx]
2 n=1

− n1
( (
π 0 , n even , n even
where a0 = , an = , bn =
2 − πn2 2 , n odd 1
, n odd
n

Constructing a table of values gives

n 1 2 3 4 5
an − π2 0 − π2 · 1
32 0 − π2 · 1
52

bn 1 − 12 1
3 − 14 1
5


This table of coefficients gives


35
 
1 π 2
f (x) = + − cos x + 0 · cos 2x
2 2 π
 
2 1
+ − · 2 cos 3x + 0 · cos 4x
π 3
 
2 1
+ − · 2 cos 5x + ...
π 5
1 1
+ sin x − sin 2x + sin 3x − ...
2 3
 
π 2 1 1
i.e. f (x) = − cos x + 2 cos 3x + 2 cos 5x + ...
4 π 3 5
 
1 1
+ sin x − sin 2x + sin 3x − ...
2 3

and we have found the required series!


c) Pick an appropriate value of x, to show that


36

π 1 1 1
(i) 4 =1− 3 + 5 − 7 + ...

Comparing this series with


 
π 2 1 1
f (x) = − cos x + 2 cos 3x + 2 cos 5x + ...
4 π 3 5
 
1 1
+ sin x − sin 2x + sin 3x − ... ,
2 3
the required series of constants does not involve terms like 312 , 512 , 712 , ....
So we need to pick a value of x that sets the cos nx terms to zero.
The Trig section shows that cos n π2 = 0 when n is odd, and note also
that cos nx terms in the Fourier series all have odd n
π
i.e. cos x = cos 3x = cos 5x = ... = 0 when x = 2,

i.e. cos π2 = cos 3 π2 = cos 5 π2 = ... = 0


Setting x = π2 in the series for f (x) gives


37
π  
π 2 π 1 3π 1 5π
f = − cos + 2 cos + 2 cos + ...
2 4 π 2 3 2 5 2
 
π 1 2π 1 3π 1 4π 1 5π
+ sin − sin + sin − sin + sin − ...
2 2 2 3 2 4 2 5 2
π 2
= − [0 + 0 + 0 + ...]
4 π 
1 1 1 1
+ 1 − sin π + · (−1) − sin 2π + · (1) − ...
2 | {z } 3 4 | {z } 5
=0 =0

π π

The graph of f (x) shows that f 2 = 2, so that

π π 1 1 1
= + 1 − + − + ...
2 4 3 5 7
π 1 1 1
i.e. = 1 − + − + ...
4 3 5 7


Pick an appropriate value of x, to show that


38

π2 1 1 1
(ii) 8 =1+ 32 + 52 + 72 + ...

Compare this series with


 
π 2 1 1
f (x) = − cos x + 2 cos 3x + 2 cos 5x + ...
4 π 3 5
 
1 1
+ sin x − sin 2x + sin 3x − ... .
2 3
This time, we want to use the coefficients of the cos nx terms, and
the same choice of x needs to set the sin nx terms to zero

Picking x = 0 gives
sin x = sin 2x = sin 3x = 0 and cos x = cos 3x = cos 5x = 1

Note also that the graph of f (x) gives f (x) = 0 when x = 0


So, picking x = 0 gives


39
 
π 2 1 1 1
0 = − cos 0 + 2 cos 0 + 2 cos 0 + 2 cos 0 + ...
4 π 3 5 7
sin 0 sin 0
+ sin 0 − + − ...
 2 3 
π 2 1 1 1
i.e. 0 = − 1 + 2 + 2 + 2 + ... + 0 − 0 + 0 − ...
4 π 3 5 7

We then find that

 
2 1 1 1 π
1 + 2 + 2 + 2 + ... =
π 3 5 7 4
1 1 1 π2
and 1 + 2 + 2 + 2 + ... = .
3 5 7 8

Return to Exercise 2

Exercise 3.
40

x, 0 < x < π
f (x) =
π, π < x < 2π, and has period 2π

a) Sketch a graph of f (x) in the interval −2π < x < 2π

f(x )
π

−2π −π 0 π 2π
x


b) Fourier series representation of f (x)


41

STEP ONE
2π π
1 2π
Z Z Z
1 1
a0 = f (x)dx = f (x)dx + f (x)dx
π 0 π 0 π π

1 π 1 2π
Z Z
= xdx + π · dx
π 0 π π
 π  2π
1 x2 π
= + x
π 2 0 π π
 2   
1 π
= − 0 + 2π − π
π 2
π
= +π
2

i.e. a0 = .
2

STEP TWO
42
Z 2π
1
an = f (x) cos nx dx
π 0
Z π Z 2π
1 1
= x cos nx dx + π · cos nx dx
π 0 π π
" π # 2π
Z π 
1 sin nx sin nx π sin nx
= x − dx +
π n 0 0 n π n π
| {z }
using integration by parts
"    π #
1 1 − cos nx
= π sin nπ − 0 · sin n0 −
π n n2 0

1
+ (sin n2π − sin nπ)
n

43
"    #  
1 1 cos nπ cos 0 1
i.e. an = 0−0 + − 2 + 0−0
π n n2 n n

1
= (cos nπ − 1), see Trig
n2 π
1
(−1)n − 1 ,

= 2
n π

− n22 π , n odd
(
i.e. an =
0 , n even.


STEP THREE
44
Z 2π
1
bn = f (x) sin nx dx
π 0
Z π Z 2π
1 1
= x sin nx dx + π · sin nx dx
π 0 π π
" # 2π
1 h  cos nx iπ Z π  − cos nx  
π − cos nx
= x − − dx +
π n 0 0 n π n π
| {z }
using integration by parts
"   π #
1 −π cos nπ sin nx 1
= +0 + − (cos 2nπ − cos nπ)
π n n2 0 n
" #
1 −π(−1)n

sin nπ − sin 0 1
1 − (−1)n

= + 2

π n n n
1 1
− (−1)n + 1 − (−1)n

= 0 −
n n

1 1 1 45
i.e. bn = − (−1)n − + (−1)n
n n n
1
i.e. bn = − .
n
We now have

a0 X
f (x) = + [an cos nx + bn sin nx]
2 n=1
(
3π 0 , n even
where a0 = 2 , an = , bn = − n1
− n22 π , n odd

Constructing a table of values gives

n 1 2 3  4 5 
an − π2 0 − π2 312 0 − π2 512
bn −1 − 21 − 13 − 14 − 15


This table of coefficients gives


46

   h i
1 3π 2 1
f (x) = 2 2 + − π cos x + 0 · cos 2x + 32 cos 3x + . . .
  h i
1 1
+ −1 sin x + 2 sin 2x + 3 sin 3x + . . .

h i
3π 2 1 1
i.e. f (x) = 4 − π cos x + 32 cos 3x + 52 cos 5x + . . .
h i
1 1
− sin x + 2 sin 2x + 3 sin 3x + . . .

and we have found the required series.


c) Pick an appropriate value of x, to show that


47

π 1 1 1
(i) 4 =1− 3 + 5 − 7 + ...

Compare this series with


 
3π 2 1 1
f (x) = − cos x + 2 cos 3x + 2 cos 5x + . . .
4 π 3 5
 
1 1
− sin x + sin 2x + sin 3x + . . .
2 3
Here, we want to set the cos nx terms to zero (since their coefficients
are 1, 312 , 512 , . . .). Since cos n π2 = 0 when n is odd, we will try setting
x = π2 in the series. Note also that f ( π2 ) = π2

This gives
π 3π 2
cos π2 + 1
cos 3 π2 + 1
cos 5 π2 + . . .
 
2 = 4 − π 32 52

sin π2 + 1
sin 2 π2 + 1
sin 3 π2 + 1
sin 4 π2 + 1
sin 5 π2 + . . .
 
− 2 3 4 5
48
and
π 3π 2
2 = 4 − π [0 + 0 + 0 + . . .]

1 1 1 1
 
− (1) + 2 · (0) + 3 · (−1) + 4 · (0) + 5 · (1) + . . .
then

π 3π 1 1 1

2 = 4 − 1− 3 + 5 − 7 + ...

1 1 1 3π π
1− 3 + 5 − 7 + ... = 4 − 2

1 1 1 π
1− 3 + 5 − 7 + ... = 4, as required.

To show that
π2 1 1 1
(ii) 8 =1+ 32 + 52 + 72 + ... ,

We want zero sin nx terms and to use the coefficients of cos nx



Setting x = 0 eliminates the sin nx terms from the series,


49
and also
gives
1 1 1 1 1 1
cos x + 2 cos 3x + 2 cos 5x + 2 cos 7x + . . . = 1 + 2 + 2 + 2 + . . .
3 5 7 3 5 7
(i.e. the desired series).
The graph of f (x) shows a discontinuity (a “vertical jump”) at x = 0

The Fourier series converges to a value that is half-way between the


two values of f (x) around this discontinuity. That is the series will
converge to π2 at x = 0
 
π 3π 2 1 1 1
i.e. = − cos 0 + 2 cos 0 + 2 cos 0 + 2 cos 0 + . . .
2 4 π 3 5 7
 
1 1
− sin 0 + sin 0 + sin 0 + . . .
2 3
 
π 3π 2 1 1 1
and = − 1 + 2 + 2 + 2 + . . . − [0 + 0 + 0 + . . .]
2 4 π 3 5 7
50
Finally, this gives
 
π 2 1 1 1
− = − 1+
+ + + . . .
4 π 32 52 72
π2 1 1 1
and = 1 + 2 + 2 + 2 + ...
8 3 5 7

Return to Exercise 3


Exercise 4.
51

f (x) = x2 , over the interval 0 < x < 2π and has period 2π

a) Sketch a graph of f (x) in the interval 0 < x < 4π

f(x )
π

0 π 2π 3π 4π
x


b) Fourier series representation of f (x)


52

STEP ONE

Z 2π
1
a0 = f (x) dx
π 0
Z 2π
1 x
= dx
π 0 2
 2 2π
1 x
=
π 4 0

(2π)2
 
1
= −0
π 4

i.e. a0 = π.


STEP TWO
53
Z 2π
1
an = f (x) cos nx dx
π 0
Z 2π
1 x
= cos nx dx
π 0 2
( 2π )
Z 2π
1 sin nx 1
= x − sin nx dx
2π n 0 n 0
| {z }
using integration by parts
(  )
1 sin n2π sin n · 0 1
= 2π −0· − ·0
2π n n n
( )
1 1
= (0 − 0) − · 0 , see Trig
2π n
i.e. an = 0.


STEP THREE
54
Z 2π Z 2π
1 1 x
bn = f (x) sin nx dx = sin nx dx
π 0 π 0 2
Z 2π
1
= x sin nx dx
2π 0
(  2π Z 2π   )
1 − cos nx − cos nx
= x − dx
2π n 0 0 n
| {z }
using integration by parts
( )
1 1 1
= (−2π cos n2π + 0) + · 0 , see Trig
2π n n
−2π
= cos(n2π)
2πn
1
= − cos(2nπ)
n
1
i.e. bn = − , since 2n is even (see Trig)
n

We now have
55

a0 X
f (x) = + [an cos nx + bn sin nx]
2 n=1

where a0 = π, an = 0, bn = − n1

These Fourier coefficients give

∞  
π X 1
f (x) = + 0 − sin nx
2 n=1 n
 
π 1 1
i.e. f (x) = − sin x + sin 2x + sin 3x + . . . .
2 2 3


c) Pick an appropriate value of x, to show that


56

π 1 1 1 1
4 =1− 3 + 5 − 7 + 9 − ...

π π
Setting x = 2 gives f (x) = 4 and
 
π π 1 1
= − 1 + 0 − + 0 + + 0 − ...
4 2 3 5
 
π π 1 1 1 1
= − 1 − + − + − ...
4 2 3 5 7 9
 
1 1 1 1 π
1 − + − + − ... =
3 5 7 9 4
1 1 1 1 π
i.e. 1 − + − + − . . . = .
3 5 7 9 4
Return to Exercise 4


Exercise 5.
57

π−x , 0<x<π
f (x) =
0 , π < x < 2π, and has period 2π

a) Sketch a graph of f (x) in the interval −2π < x < 2π

f(x )
π

−2π −π 0 π 2π
x


b) Fourier series representation of f (x)


58

STEP ONE
Z 2π
1
a0 = f (x) dx
π 0
Z π Z 2π
1 1
= (π − x) dx + 0 · dx
π 0 π π
 π
1 1
= πx − x2 + 0
π 2 0
2
 
1 π
= π2 − −0
π 2
π
i.e. a0 = .
2


STEP TWO
59
Z 2π
1
an = f (x) cos nx dx
π 0
1 π 1 2π
Z Z
= (π − x) cos nx dx + 0 · dx
π 0 π π
 π Z π 
1 sin nx sin nx
i.e. an = (π − x) − (−1) · dx +0
π n 0 0 n
| {z }
using integration by parts
 Z π 
1 sin nx
= (0 − 0) + dx , see Trig
π 0 n
 π
1 − cos nx
=
πn n 0
1
= − 2 (cos nπ − cos 0)
πn
1
i.e. an = − 2 ((−1)n − 1) , see Trig
πn
60

 0 , n even
i.e. an =
2
, n odd

πn2

STEP THREE
Z 2π
1
bn = f (x) sin nx dx
π 0
Z π Z 2π
1
= (π − x) sin nx dx + 0 · dx
π 0 π

1
h  cos nx iπ Z π  cos nx  
= (π − x) − − (−1) · − dx + 0
π n 0 0 n
  π  1 
1 
= 0− − − · 0 , see Trig
π n n
1
i.e. bn = .
n

In summary, a0 = π
and a table of other Fourier cofficients is
61
2

n 1 2 3 4 5

2 2 2 1 2 1
an = πn2 (when n is odd) π 0 π 32 0 π 52

1 1 1 1 1
bn = n 1 2 3 4 5


a0 X
∴ f (x) = + [an cos nx + bn sin nx]
2 n=1
π 2 2 1 2 1
= + cos x + cos 3x + cos 5x + . . .
4 π π 32 π 52
1 1 1
+ sin x + sin 2x + sin 3x + sin 4x + . . .
 2 3 4 
π 2 1 1
i.e. f (x) = + cos x + 2 cos 3x + 2 cos 5x + . . .
4 π 3 5
1 1 1
+ sin x + sin 2x + sin 3x + sin 4x + . . .
2 3 4
π2 1 1
62
c) To show that 8 =1+ 32 + 52 + ... ,

π
note that, as x → 0 , the series converges to the half-way value of 2,

 
π π 2 1 1
and then = + cos 0 + cos 0 + cos 0 + . . .
2 4 π 32 52
1 1
+ sin 0 + sin 0 + sin 0 + . . .
2 3
 
π π 2 1 1
= + 1 + 2 + 2 + ... + 0
2 4 π 3 5
 
π 2 1 1
= 1 + 2 + 2 + ...
4 π 3 5
π2 1 1
giving = 1 + 2 + 2 + ...
8 3 5
Return to Exercise 5


Exercise 6.
63

f (x) = x, over the interval −π < x < π and has period 2π

a) Sketch a graph of f (x) in the interval −3π < x < 3π

f(x )
π

x
−3π −2π −π 0 π 2π 3π

−π


b) Fourier series representation of f (x)


64

STEP ONE
Z π
1
a0 = f (x) dx
π −π
Z π
1
= x dx
π −π
 π
1 x2
=
π 2 −π

1 π2 π2
 
= −
π 2 2

i.e. a0 = 0.


STEP TWO
65
1 π
Z
an = f (x) cos nx dx
π −π
Z π
1
= x cos nx dx
π −π
( π Z π   )
1 sin nx sin nx
= x − dx
π n −π −π n
| {z }
using integration by parts

1 π
 Z 
1 1
i.e. an = (π sin nπ − (−π) sin(−nπ)) − sin nx dx
π n n −π
 
1 1 1
= (0 − 0) − · 0 ,
π n n
Z
since sin nπ = 0 and sin nx dx = 0,

i.e. an = 0.


STEP THREE
66
1 π
Z
bn = f (x) sin nx dx
π −π
Z π
1
= x sin nx dx
π −π
( π Z π   )
1 −x cos nx − cos nx
= − dx
π n −π −π n
1 π
 Z 
1 1 π
= − [x cos nx]−π + cos nx dx
π n n −π
 
1 1 1
= − (π cos nπ − (−π) cos(−nπ)) + · 0
π n n
π
= − (cos nπ + cos nπ)

1
= − (2 cos nπ)
n
2
i.e. bn = − (−1)n .
n

We thus have
67

a0 X h i
f (x) = + an cos nx + bn sin nx
2 n=1

with a0 = 0, an = 0, bn = − n2 (−1)n

and
n 1 2 3

2
bn 2 −1 3

Therefore
f (x) = b1 sin x + b2 sin 2x + b3 sin 3x + . . .
 
1 1
i.e. f (x) = 2 sin x − sin 2x + sin 3x − . . .
2 3

and we have found the required Fourier series.


c) Pick an appropriate value of x, to show that


68

π 1 1 1
4 =1− 3 + 5 − 7 + ...

Setting x = π2 gives f (x) = π2 and


 
π π 1 2π 1 3π 1 4π 1 5π
= 2 sin − sin + sin − sin + sin − ...
2 2 2 2 3 2 4 2 5 2

This gives  
π 1 1 1
= 2 1 + 0 + · (−1) − 0 + · (1) − 0 + · (−1) + . . .
2 3 5 7
 
π 1 1 1
= 2 1 − + − + ...
2 3 5 7
π 1 1 1
i.e. = 1 − + − + ...
4 3 5 7

Return to Exercise 6

Exercise 7.
69

f (x) = x2 , over the interval −π < x < π and has period 2π

a) Sketch a graph of f (x) in the interval −3π < x < 3π

f(x )
π
2

−3π −2π −π 0 π 2π 3π
x


b) Fourier series representation of f (x)


70

STEP ONE
Z π Z π
1 1
a0 = f (x)dx = x2 dx
π −π π −π
 π
1 x3
=
π 3 −π

1 π3 π3
  
= − −
π 3 3

1 2π 3
 
=
π 3

2π 2
i.e. a0 = .
3


STEP TWO
71
1 π
Z
an = f (x) cos nx dx
π −π
Z π
1
= x2 cos nx dx
π −π
( π Z π   )
1 sin nx sin nx
= x2 − 2x dx
π n −π −π n
| {z }
using integration by parts
( )
 2 π
Z
1 1 2 2
= π sin nπ − π sin(−nπ) − x sin nx dx
π n n −π
( )
2 π
Z
1 1
= (0 − 0) − x sin nx dx , see Trig
π n n −π
Z π
−2
= x sin nx dx
nπ −π

72
(  π Z π   )
−2 − cos nx − cos nx
i.e. an = x − dx
nπ n −π −π n
| {z }
using integration by parts again
( )
−2 1 π
Z
1 π
= − [x cos nx]−π + cos nx dx
nπ n n −π
(   )
−2 1 1
= − π cos nπ − (−π) cos(−nπ) + · 0
nπ n n
(   )
−2 1
= − π(−1)n + π(−1)n
nπ n
( )
−2 −2π
= (−1)n
nπ n

73
( )
−2 2π
i.e. an = − (−1)n
nπ n

+4π
= (−1)n
πn2
4
= (−1)n
n2

4
, n even
(
n2
i.e. an =
−4
n2 , n odd.


STEP THREE
74
π
1 π 2
Z Z
1
bn = f (x) sin nx dx = x sin nx dx
π −π π −π
(  π Z π   )
1 − cos nx − cos nx
= x2 − 2x · dx
π n −π −π n
| {z }
using integration by parts
( )
Z π
1 1 2 π 2
= − x cos nx −π + x cos nx dx
π n n −π
( Z π )
1 1 2 2
π cos nπ − π 2 cos(−nπ) +

= − x cos nx dx
π n n −π
( )
 2 π
Z
1 1 2 2
= − π cos nπ − π cos(nπ) + x cos nx dx
π n| {z } n −π
=0
Z π
2
= x cos nx dx
πn −π
75
( π )
Z π
2 sin nx sin nx
i.e. bn = x − dx
πn n −π −π n
| {z }
using integration by parts
( )
Z π
2 1 1
= (π sin nπ − (−π) sin(−nπ)) − sin nx dx
πn n n −π
( )
1 π
Z
2 1
= (0 + 0) − sin nx dx
πn n n −π

−2 π
Z
= sin nx dx
πn2 −π

i.e. bn = 0.

76

a0 X
∴ f (x) = + [an cos nx + bn sin nx]
2 n=1
(
4
2π 2 n2 , n even
where a0 = 3 , an = −4 , bn = 0
n2 , n odd

n 1 2 3 4
1 1 1
  
an −4(1) 4 22 −4 32 4 42

2π 2
   
1 1 1 1
i.e. f (x) = − 4 cos x − 2 cos 2x + 2 cos 3x − 2 cos 4x . . .
2 3 2 3 4

+ [0 + 0 + 0 + . . .]

π2
 
1 1 1
i.e. f (x) = − 4 cos x − 2 cos 2x + 2 cos 3x − 2 cos 4x + . . . .
3 2 3 4

π2
77
c) To show that = 1 + 212 + 312 + 412 + . . . ,
6
(
1 , n even
use the fact that cos nπ =
−1 , n odd

1 1 1
i.e. cos x − 22 cos 2x + 32 cos 3x − 42 cos 4x + . . . with x = π

1 1 1
gives cos π − 22 cos 2π + 32 cos 3π − 42 cos 4π + . . .

1 1 1
i.e. (−1) − 22 · (1) + 32 · (−1) − 42 · (1) + . . .

1 1 1
i.e. −1 − 22 − 32 − 42 +...

 
1 1 1
= −1 · 1 + 2 + 2 + 2 + . . .
2 3 4
| {z }
(the desired series)


The graph of f (x)


78
gives that f (π) = π 2 and the series converges to
this value.

Setting x = π in the Fourier series thus gives


π2
 
2 1 1 1
π = − 4 cos π − 2 cos 2π + 2 cos 3π − 2 cos 4π + . . .
3 2 3 4
2
 
π 1 1 1
π2 = − 4 −1 − 2 − 2 − 2 − . . .
3 2 3 4
π2
 
2 1 1 1
π = + 4 1 + 2 + 2 + 2 + ...
3 2 3 4
2
 
2π 1 1 1
= 4 1 + 2 + 2 + 2 + ...
3 2 3 4
π2 1 1 1
i.e. = 1 + 2 + 2 + 2 + ...
6 2 3 4
Return to Exercise 7

Chapter 8

The Fourier Transform

This chapter introduces the Fourier Transform, also known as the Fourier Integral. The definition, theorems, and properties are presented and proved. The Fourier transforms of the most common functions are derived, the system function is defined, and several examples are given to illustrate its application in circuit analysis.

8.1 Definition and Special Forms


We recall that the Fourier series for periodic functions of time, such as those we discussed in the previous chapter, produce discrete line spectra with non-zero values only at specific frequencies referred to as harmonics. However, other functions of interest such as the unit step, the unit impulse, the unit ramp, and a single rectangular pulse are not periodic functions. The frequency spectra for these functions are continuous, as we will see later in this chapter.
We may think of a non-periodic signal as one arising from a periodic signal in which the period extends from −∞ to +∞. Then, for a signal that is a function of time with period from −∞ to +∞, we form the integral

F(ω) = ∫_{−∞}^{∞} f(t) e^{−jωt} dt    (8.1)

and assuming that it exists for every value of the radian frequency ω, we call the function F(ω) the Fourier transform or the Fourier integral.
The Fourier transform is, in general, a complex function. We can express it as the sum of its real and imaginary components, or in exponential form, that is, as

F(ω) = Re{F(ω)} + j Im{F(ω)} = |F(ω)| e^{jφ(ω)}    (8.2)

The Inverse Fourier transform is defined as

f(t) = (1/2π) ∫_{−∞}^{∞} F(ω) e^{jωt} dω    (8.3)

We will often use the following notation to express the Fourier transform and its inverse.

F {f(t)} = F(ω)    (8.4)

and

F⁻¹{F(ω)} = f(t)    (8.5)
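As a simple illustration of (8.1), the Python sketch below (assuming numpy and scipy are available) evaluates the transform of a unit-amplitude rectangular pulse that is 1 for |t| < 1 and 0 elsewhere, and compares it with the closed form 2 sin ω / ω.

import numpy as np
from scipy.integrate import quad

def F(w):
    re = quad(lambda t:  np.cos(w*t), -1, 1)[0]        # pulse equals 1 on (-1, 1), 0 elsewhere
    im = quad(lambda t: -np.sin(w*t), -1, 1)[0]
    return re + 1j*im

for w in (0.5, 1.0, 2.0, 3.0):
    print(w, F(w), 2*np.sin(w)/w)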


8.2 Special Forms of the Fourier Transform


The time function f(t) is, in general, complex, and thus we can express it as the sum of its real and imaginary parts, that is, as

f(t) = f_Re(t) + j f_Im(t)    (8.6)

The subscripts Re and Im will often be used to denote the real and imaginary parts respectively. These notations have the same meaning as Re{f(t)} and Im{f(t)}.
By substitution of (8.6) into the Fourier integral of (8.1), we get

F(ω) = ∫_{−∞}^{∞} f_Re(t) e^{−jωt} dt + j ∫_{−∞}^{∞} f_Im(t) e^{−jωt} dt    (8.7)

and by Euler's identity

F(ω) = ∫_{−∞}^{∞} [ f_Re(t) cos ωt + f_Im(t) sin ωt ] dt − j ∫_{−∞}^{∞} [ f_Re(t) sin ωt − f_Im(t) cos ωt ] dt    (8.8)

From (8.2), we see that the real and imaginary parts of F(ω) are

F_Re(ω) = ∫_{−∞}^{∞} [ f_Re(t) cos ωt + f_Im(t) sin ωt ] dt    (8.9)

and

F_Im(ω) = −∫_{−∞}^{∞} [ f_Re(t) sin ωt − f_Im(t) cos ωt ] dt    (8.10)

We can derive similar forms for the Inverse Fourier transform as follows:
Substitution of (8.2) into (8.3) yields

f(t) = (1/2π) ∫_{−∞}^{∞} [ F_Re(ω) + j F_Im(ω) ] e^{jωt} dω    (8.11)

and by Euler's identity,

f(t) = (1/2π) ∫_{−∞}^{∞} [ F_Re(ω) cos ωt − F_Im(ω) sin ωt ] dω
       + j (1/2π) ∫_{−∞}^{∞} [ F_Re(ω) sin ωt + F_Im(ω) cos ωt ] dω    (8.12)

Therefore, the real and imaginary parts of f(t) are

f_Re(t) = (1/2π) ∫_{−∞}^{∞} [ F_Re(ω) cos ωt − F_Im(ω) sin ωt ] dω    (8.13)


and

f_Im(t) = (1/2π) ∫_{−∞}^{∞} [ F_Re(ω) sin ωt + F_Im(ω) cos ωt ] dω    (8.14)

Now, we will use the above relations to determine the time to frequency domain correspondence for real, imaginary, even, and odd functions in both the time and the frequency domains. We will show these in tabular form, as indicated in Table 8.1.

TABLE 8.1 Time Domain and Frequency Domain Correspondence (Refer to Tables 8.2 - 8.7)

f(t)                      F(ω):  Real   Imaginary   Complex   Even   Odd
Real
Real and Even
Real and Odd
Imaginary
Imaginary and Even
Imaginary and Odd

1. Real Time Functions


If f ( t ) is real, (8.9) and (8.10) reduce to

    F_Re(ω) = ∫_{-∞}^{∞} f_Re(t) cos ωt dt    (8.15)
and
    F_Im(ω) = −∫_{-∞}^{∞} f_Re(t) sin ωt dt    (8.16)

Conclusion: If f ( t ) is real, F ( ω ) is, in general, complex. We indicate this result with a check mark
in Table 8.2.
We know that any function f ( t ) , can be expressed as the sum of an even and an odd function.
Therefore, we will also consider the cases when f ( t ) is real and even, and when f ( t ) is real and
odd*.

* In our subsequent discussion, we will make use of the fact that the cosine is an even function, while the sine is an
odd function. Also, the product of two odd functions or the product of two even functions will result in an even
function, whereas the product of an odd function and an even function will result in an odd function.


TABLE 8.2 Time Domain and Frequency Domain Correspondence (Refer also to Tables 8.3 - 8.7)

 f(t)                  F(ω):   Real   Imaginary   Complex   Even   Odd
 Real                                                ✓
 Real and Even
 Real and Odd
 Imaginary
 Imaginary and Even
 Imaginary and Odd

a. f Re ( t ) is even

If f_Re(−t) = f_Re(t), the product f_Re(t) cos ωt, with respect to t, is even, while the product
f_Re(t) sin ωt is odd. In this case, (8.15) and (8.16) reduce to:

    F_Re(ω) = 2 ∫_0^∞ f_Re(t) cos ωt dt        f_Re(t) = even    (8.17)
and
    F_Im(ω) = −∫_{-∞}^{∞} f_Re(t) sin ωt dt = 0    f_Re(t) = even    (8.18)

Therefore, if f Re ( t ) = even , F ( ω ) is real as seen in (8.17).

To determine whether F ( ω ) is even or odd when f Re ( t ) = even , we must perform a test for
evenness or oddness with respect to ω . Thus, substitution of – ω for ω in (8.17), yields
    F_Re(−ω) = 2 ∫_0^∞ f_Re(t) cos(−ω)t dt = 2 ∫_0^∞ f_Re(t) cos ωt dt = F_Re(ω)    (8.19)

Conclusion: If f ( t ) is real and even, F ( ω ) is also real and even. We indicate this result in Table 8.3.

b. f Re ( t ) is odd

If – f Re ( – t ) = f Re ( t ) , the product f Re ( t ) cos ωt , with respect to t , is odd, while the product


f Re ( t ) ( sin ω t ) is even. In this case, (8.15) and (8.16) reduce to:


TABLE 8.3 Time Domain and Frequency Domain Correspondence (Refer also to Tables 8.4 - 8.7)

f(t) F(ω)

Real Imaginary Complex Even Odd


Real 
Real and Even  
Real and Odd
Imaginary
Imaginary and Even
Imaginary and Odd


    F_Re(ω) = ∫_{-∞}^{∞} f_Re(t) cos ωt dt = 0        f_Re(t) = odd    (8.20)
and
    F_Im(ω) = −2 ∫_0^∞ f_Re(t) sin ωt dt        f_Re(t) = odd    (8.21)

Therefore, if f Re ( t ) = odd , F ( ω ) is imaginary.

To determine whether F ( ω ) is even or odd when f Re ( t ) = odd , we perform a test for evenness
or oddness with respect to ω . Thus, substitution of – ω for ω in (8.21), yields
    F_Im(−ω) = −2 ∫_0^∞ f_Re(t) sin(−ω)t dt = 2 ∫_0^∞ f_Re(t) sin ωt dt = −F_Im(ω)    (8.22)

Conclusion: If f ( t ) is real and odd, F ( ω ) is imaginary and odd. We indicate this result in Table 8.4.
2. Imaginary Time Functions
If f ( t ) is imaginary, (8.9) and (8.10) reduce to

    F_Re(ω) = ∫_{-∞}^{∞} f_Im(t) sin ωt dt    (8.23)
and
    F_Im(ω) = ∫_{-∞}^{∞} f_Im(t) cos ωt dt    (8.24)


TABLE 8.4 Time Domain and Frequency Domain Correspondence (Refer also to Tables 8.5 - 8.7)

f( t) F(ω)

Real Imaginary Complex Even Odd


Real 
Real and Even  
Real and Odd  
Imaginary
Imaginary and Even
Imaginary and Odd

Conclusion: If f ( t ) is imaginary, F ( ω ) is, in general, complex. We indicate this result in Table 8.5.

TABLE 8.5 Time Domain and Frequency Domain Correspondence (Refer also to Tables 8.6 - 8.7)

f( t) F( ω )

Real Imaginary Complex Even Odd


Real 
Real and Even  
Real and Odd  
Imaginary 
Imaginary and Even
Imaginary and Odd

Next, we will consider the cases where f ( t ) is imaginary and even, and f ( t ) is imaginary and odd.
a. f Im ( t ) is even

If f_Im(−t) = f_Im(t), the product f_Im(t) cos ωt, with respect to t, is even, while the product
f_Im(t) sin ωt is odd. In this case, (8.23) and (8.24) reduce to:

    F_Re(ω) = ∫_{-∞}^{∞} f_Im(t) sin ωt dt = 0        f_Im(t) = even    (8.25)
and
    F_Im(ω) = 2 ∫_0^∞ f_Im(t) cos ωt dt        f_Im(t) = even    (8.26)


Therefore, if f Im ( t ) = even , F ( ω ) is imaginary.

To determine whether F ( ω ) is even or odd when f Im ( t ) = even , we perform a test for evenness or
oddness with respect to ω . Thus, substitution of – ω for ω in (8.26) yields

    F_Im(−ω) = 2 ∫_0^∞ f_Im(t) cos(−ω)t dt = 2 ∫_0^∞ f_Im(t) cos ωt dt = F_Im(ω)    (8.27)

Conclusion: If f ( t ) is imaginary and even, F ( ω ) is also imaginary and even. We indicate this result in
Table 8.6.
TABLE 8.6 Time Domain and Frequency Domain Correspondence (Refer also to Table 8.7)

f(t) F(ω)

Real Imaginary Complex Even Odd


Real 
Real and Even  
Real and Odd  
Imaginary 
Imaginary and Even  
Imaginary and Odd

b. f Im ( t ) is odd

If −f_Im(−t) = f_Im(t), the product f_Im(t) cos ωt, with respect to t, is odd, while the product
f_Im(t) sin ωt is even. In this case, (8.23) and (8.24) reduce to

    F_Re(ω) = ∫_{-∞}^{∞} f_Im(t) sin ωt dt = 2 ∫_0^∞ f_Im(t) sin ωt dt        f_Im(t) = odd    (8.28)
and
    F_Im(ω) = ∫_{-∞}^{∞} f_Im(t) cos ωt dt = 0        f_Im(t) = odd    (8.29)

Therefore, if f Im ( t ) = odd , F ( ω ) is real.


To determine whether F ( ω ) is even or odd when f Im ( t ) = odd , we perform a test for evenness
or oddness with respect to ω . Thus, substitution of – ω for ω in (8.28) yields
    F_Re(−ω) = 2 ∫_0^∞ f_Im(t) sin(−ω)t dt = −2 ∫_0^∞ f_Im(t) sin ωt dt = −F_Re(ω)    (8.30)

Conclusion: If f ( t ) is imaginary and odd, F ( ω ) is real and odd. We indicate this result in Table
8.7.

TABLE 8.7 Time Domain and Frequency Domain Correspondence (Completed Table)

 f(t)                  F(ω):   Real   Imaginary   Complex   Even   Odd
 Real                                                ✓
 Real and Even                  ✓                              ✓
 Real and Odd                          ✓                              ✓
 Imaginary                                           ✓
 Imaginary and Even                    ✓                       ✓
 Imaginary and Odd              ✓                                     ✓

Table 8.7 is now complete and shows that if f ( t ) is real (even or odd), the real part of F ( ω ) is even,
and the imaginary part is odd. Then,
F Re ( – ω ) = F Re ( ω ) f ( t ) = Real (8.31)
and
F Im ( – ω ) = – F Im ( ω ) f ( t ) = Real (8.32)
Since,
F ( ω ) = F Re ( ω ) + jF Im ( ω ) (8.33)
it follows that
F ( – ω ) = F Re ( – ω ) + jF Im ( – ω ) = F Re ( ω ) – jF Im ( ω )
or
F ( – ω ) = F∗ ( ω ) f ( t ) = Real (8.34)

Now, if F ( ω ) of some function of time f ( t ) is known, and F ( ω ) is such that F ( – ω ) = F∗ ( ω ) , can


we conclude that f ( t ) is real? The answer is yes; we can verify this with (8.14) which is repeated
below for convenience.


    f_Im(t) = (1/(2π)) ∫_{-∞}^{∞} [F_Re(ω) sin ωt + F_Im(ω) cos ωt] dω    (8.35)

We observe that the integral in (8.35) is zero, since the integrand is an odd function with respect to ω:
both products inside the brackets are odd functions*.
Therefore, f Im ( t ) = 0 , that is, f ( t ) is real.

We can state then, that a necessary and sufficient condition for f ( t ) to be real, is that F ( – ω ) = F∗ ( ω ) .
Also, if it is known that f ( t ) is real, the Inverse Fourier transform of (8.3) can be simplified as fol-
lows:
From (8.13),

    f_Re(t) = (1/(2π)) ∫_{-∞}^{∞} [F_Re(ω) cos ωt − F_Im(ω) sin ωt] dω    (8.36)

and since the integrand is an even function with respect to ω , we rewrite (8.36) as

1
f Re ( t ) = 2 ------
2π ∫0 [ F Re ( ω ) cos ωt – F Im ( ω ) sin ωt ] dω
(8.37)
∞ ∞
1 1 j [ ωt + ϕ ( ω ) ]
= ---
π ∫0 A ( ω ) cos [ ωt + ϕ ( ω ) ] dω = --- Re
π ∫0 F ( ω )e dω

8.3 Properties and Theorems of the Fourier Transform


1. Linearity
If F 1 ( ω ) is the Fourier transform of f 1 ( t ) , F 2 ( ω ) is the transform of f 2 ( t ) , and so on, the lin-
earity property of the Fourier transform states that

a1 f1 ( t ) + a2 f2 ( t ) + … + an fn ( t ) ⇔ a1 F1 ( ω ) + a2 F2 ( ω ) + … + an Fn ( ω ) (8.38)

where a i is some arbitrary real constant.

Proof:
The proof is easily obtained from (8.1), that is, the definition of the Fourier transform. The proce-
dure is the same as for the linearity property of the Laplace transform in Chapter 2.

* In (8.31) and (8.32), we determined that F Re ( ω ) is even and F Im ( ω ) is odd.


2. Symmetry
If F ( ω ) is the Fourier transform of f ( t ) , the symmetry property of the Fourier transform states that

F ( t ) ⇔ 2πf ( – ω ) (8.39)

that is, if in F ( ω ) , we replace ω with t , we get the Fourier transform pair of (8.39).

Proof:

Since

1
∫–∞ F ( ω )e
jωt
f ( t ) = ------ dω

then,

– j ωt
2πf ( – t ) = ∫–∞ F ( ω )e dω

Interchanging t and ω , we get



– j ωt
2πf ( – ω ) = ∫–∞ F ( t )e dt

and (8.39) follows.


3. Time Scaling
If a is a real constant, and F ( ω ) is the Fourier transform of f ( t ) , then,

    f(at) ⇔ (1/|a|) F(ω/a)    (8.40)

that is, the time scaling property of the Fourier transform states that if we replace the variable t in
the time domain by at , we must replace the variable ω in the frequency domain by ω ⁄ a , and
divide F ( ω ⁄ a ) by the absolute value of a .

Proof:
We must consider both cases a > 0 and a < 0 .
For a > 0 ,

– jωt
F { f ( at ) } = ∫–∞ f ( at )e dt (8.41)

We let at = τ ; then, t = τ ⁄ a , and (8.41) becomes


τ ω
∞ – jω ⎛ --- ⎞ ∞ – j ⎛ ----⎞ τ
⎝a⎠ τ ⎝ a⎠ 1 ω
d ⎛ --- ⎞ = --- dτ = --- F ⎛ ----⎞
1
F {f(τ)} = ∫–∞ f ( τ )e ⎝ a⎠ a ∫–∞ f ( τ )e a ⎝ a⎠
For a < 0 ,

– jωt
F { f ( –at ) } = ∫–∞ f ( –at )e dt

and making the above substitutions, we find that the multiplying factor is – 1 ⁄ a . Therefore, for
1 ⁄ a we obtain (8.40).
4. Time Shifting
If F ( ω ) is the Fourier transform of f ( t ) , then,

    f(t − t_0) ⇔ F(ω) e^{-jωt_0}    (8.42)

that is, the time shifting property of the Fourier transform states that if we shift the time function
f ( t ) by a constant t 0 , the Fourier transform magnitude does not change, but the term ωt 0 is
added to its phase angle.
Proof:

– jωt
F { f ( t – t0 ) } = ∫–∞ f ( t – t )e 0 dt

We let t – t 0 = τ ; then, t = τ + t 0 , dt = dτ , and thus


∞ – jω ( τ + t 0 ) – jωt 0 ∞
– jω ( τ )
F { f ( t – t0 ) } = ∫– ∞ f ( τ )e dτ = e ∫–∞ f ( τ )e dτ

or
– jωt 0
F { f ( t – t0 ) } = e F(ω)

5. Frequency Shifting
If F ( ω ) is the Fourier transform of f ( t ) , then,

    e^{jω_0 t} f(t) ⇔ F(ω − ω_0)    (8.43)

that is, multiplication of the time function f(t) by e^{jω_0 t}, where ω_0 is a constant, results in shifting
the Fourier transform by ω_0.


Proof:

F { e jω t f ( t ) }0
= ∫– ∞ e
jω 0 t
f ( t )e
– jωt
dt

or
jω 0 t ∞ –j ( ω – ω0 )
F {e f(t)} = ∫–∞ f ( t )e dt = F ( ω – ω 0 )

Also, from (8.40) and (8.43)

    e^{jω_0 t} f(at) ⇔ (1/|a|) F((ω − ω_0)/a)    (8.44)

Property 5, that is, (8.43), is also used to derive the Fourier transform of the modulated signals
f(t) cos ω_0 t and f(t) sin ω_0 t. Thus, from
    e^{jω_0 t} f(t) ⇔ F(ω − ω_0)
and
    cos ω_0 t = (e^{jω_0 t} + e^{-jω_0 t}) / 2
we get
    f(t) cos ω_0 t ⇔ [F(ω − ω_0) + F(ω + ω_0)] / 2    (8.45)
Similarly,
    f(t) sin ω_0 t ⇔ [F(ω − ω_0) − F(ω + ω_0)] / (j2)    (8.46)
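As an added illustration (not part of the original text), the modulation pair (8.45) can be observed numerically with MATLAB's fft. The pulse width, sampling rate, and carrier frequency below are arbitrary choices for the sketch; the spectrum of f(t) cos ω_0 t appears as two half-amplitude copies of the pulse spectrum centered at ±ω_0.

    Fs = 1000;  t = -1:1/Fs:1-1/Fs;    % sampling rate (Hz) and time grid, N = 2000 samples
    N  = length(t);
    f  = double(abs(t) <= 0.2);        % rectangular pulse
    w0 = 2*pi*50;                      % carrier frequency, rad/s
    g  = f .* cos(w0*t);               % modulated signal f(t)*cos(w0*t)
    F  = fftshift(abs(fft(f)))/Fs;     % approximate |F(w)|
    G  = fftshift(abs(fft(g)))/Fs;     % two half-size copies of |F| shifted to +/-50 Hz
    fr = (-N/2:N/2-1)*(Fs/N);          % frequency axis in Hz
    plot(fr, F, fr, G), xlim([-100 100])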

6. Time Differentiation
If F ( ω ) is the Fourier transform of f ( t ) , then,
    d^n f(t)/dt^n ⇔ (jω)^n F(ω)    (8.47)

that is, the Fourier transform of d^n f(t)/dt^n, if it exists, is (jω)^n F(ω).
dt
Proof:
Differentiating the Inverse Fourier transform, we get

n n ∞ ∞ n
-------- f ( t ) = -------- ⎛ ------ dω⎞ = ------
d d 1 1 d
∫– ∞ ∫
jωt jωt
n ⎝ 2π
F ( ω )e F ( ω ) -------n- e dω
n ⎠ 2π –∞
dt dt dt
∞ ∞
dω = ( jω ) ⎛ ------ dω⎞
1 n 1
∫– ∞ ∫–∞ F ( ω )e
n jωt jωt
= ------ F ( ω ) ( jω ) e
2π ⎝ 2π ⎠

and (8.47) follows.


7. Frequency Differentiation
If F ( ω ) is the Fourier transform of f ( t ) , then,
    (−jt)^n f(t) ⇔ d^n F(ω)/dω^n    (8.48)

Proof:
Using the Fourier transform definition, we get
n n ∞ ∞ n
--------- F ( ω ) = --------- ⎛ – j ωt
dt⎞ =
d d d – j ωt

n

n⎝ ∫– ∞ f ( t )e
⎠ ∫ –∞
f ( t ) ---------n e

dt

∞ ∞
n – j ωt – j ωt
∫– ∞ ∫–∞ f ( t )e
n
= f ( t ) ( –j t ) e dt = ( – j t ) dt

and (8.48) follows.


8. Time Integration
If F ( ω ) is the Fourier transform of f ( t ) , then,
    ∫_{-∞}^{t} f(τ) dτ ⇔ F(ω)/(jω) + π F(0) δ(ω)    (8.49)

Proof:
We postpone the proof of this property until we derive the Fourier transform of the unit step
function u_0(t) in the next section. In the special case where F(0) = 0 in (8.49), then,

    ∫_{-∞}^{t} f(τ) dτ ⇔ F(ω)/(jω)    (8.50)

and this is easily proved by integrating both sides of the Inverse Fourier transform.
9. Conjugate Time and Frequency Functions
If F ( ω ) is the Fourier transform of the complex function f ( t ) , then,


f∗ ( t ) ⇔ F ∗ ( – ω ) (8.51)

that is, if the Fourier transform of f(t) = f_Re(t) + j f_Im(t) is F(ω), then the Fourier transform of
f*(t) = f_Re(t) − j f_Im(t) is F*(−ω).

Proof:
∞ ∞
– jωt – jωt
F(ω) = ∫– ∞ f ( t )e dt = ∫–∞ [ fRe ( t ) + jfIm ( t ) ]e dt

∞ ∞
– jωt – jωt
= ∫– ∞ f Re ( t )e dt + j ∫–∞ fIm ( t )e dt

Then,
∞ ∞
F∗ ( ω ) = ∫– ∞ ∫–∞ fIm ( t )e
jωt jωt
f Re ( t )e dt – j dt

Replacing ω with – ω , we get



– j ωt
F∗ ( – ω ) = ∫–∞ [ fRe ( t ) – jfIm ( t ) ]e dt

and (8.51) follows.


10. Time Convolution
If F 1 ( ω ) is the Fourier transform of f 1 ( t ) , and F 2 ( ω ) is the Fourier transform of f 2 ( t ) , then,

f 1 ( t )∗ f 2 ( t ) ⇔ F 1 ( ω )F 2 ( ω ) (8.52)

that is, convolution in the time domain, corresponds to multiplication in the frequency domain.
Proof:
∞ ∞
– jωt
F { f1 ( t )∗ f2 ( t ) } = ∫–∞ ∫–∞ f ( τ )f2 ( t – τ ) dτ
1 e dt
(8.53)
∞ ∞
– jωt
= ∫–∞ f ( τ ) ∫–∞ f ( t – τ )e
1 2 dt dτ

and letting t – τ = σ , then, dt = dσ , and by substitution into (8.53),


∞ ∞ ∞ ∞
– jωτ – jωσ – jωτ – jωσ
F { f 1 ( t )∗ f 2 ( t ) } = ∫– ∞ f1 ( τ ) ∫– ∞ f 2 ( σ )e e dσ dτ = ∫– ∞ f 1 ( τ )e dτ ∫–∞ f ( σ )e
2 dσ

The first integral above is F 1 ( ω ) while the second is F 2 ( ω ) , and thus (8.52) follows.


Alternate Proof:
– jωt 0
We can apply the time shifting property f ( t – t 0 ) ⇔ F ( ω )e into the bracketed integral of
– jωt 0
(8.53); then, replacing it with F 2 ( ω )e , we get
∞ ∞ ∞ – jωt 0
– jωt
F { f 1 ( t )∗ f 2 ( t ) } = ∫– ∞ f1 ( τ ) ∫– ∞ f 2 ( t – τ )e dt dτ = ∫–∞ f ( τ ) dτF
1 2 ( ω )e


– jωt
= ∫–∞ f ( τ )e
1 dτF 2 ( ω ) = F 1 ( ω )F 2 ( ω )
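The time convolution property also underlies FFT-based (fast) convolution of sampled signals. The following minimal numerical sketch, added here with two arbitrary short sequences, shows that convolving in time and multiplying DFTs of zero-padded sequences give the same result; it is a discrete analogue of (8.52), not part of the original derivation.

    x = [1 2 3 4];  h = [1 -1 2];          % two arbitrary sequences
    y_time = conv(x, h);                   % time-domain convolution, length 6
    N = length(x) + length(h) - 1;
    y_freq = ifft(fft(x, N) .* fft(h, N)); % multiply DFTs of zero-padded sequences
    max(abs(y_time - y_freq))              % agreement to round-off error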

11. Frequency Convolution


If F 1 ( ω ) is the Fourier transform of f 1 ( t ) , and F 2 ( ω ) is the Fourier transform of f 2 ( t ) , then,

    f_1(t) f_2(t) ⇔ (1/(2π)) F_1(ω) ∗ F_2(ω)    (8.54)

that is, multiplication in the time domain corresponds to convolution in the frequency domain
divided by the constant 2π.
Proof:
∞ ∞ ∞
– jωt 1- – jωt
∫– ∞ ∫ ∫–∞ F ( χ )e
jχt
F { f1 ( t )f2 ( t ) } = [ f 1 ( t ) f 2 ( t ) ]e dt = -----
2π 1 dχ f 2 ( t )e dt
–∞
∞ ∞ ∞
1 – j ( ω – χ )t 1
= ------
2π ∫– ∞ F1 ( χ ) ∫– ∞ f 2 ( t )e dt dχ = ------
2π ∫–∞ F ( χ )F ( ω – χ ) dχ
1 2

and (8.54) follows.


12. Area under f ( t )
If F ( ω ) is the Fourier transform of f ( t ) , then,

    F(0) = ∫_{-∞}^{∞} f(t) dt    (8.55)

that is, the area under a time function f ( t ) is equal to the value of its Fourier transform evaluated
at ω = 0 .
Proof:
Using the definition of F(ω) and that e^{-jωt}|_{ω=0} = 1, we see that (8.55) follows.


13. Area under F ( ω )


If F ( ω ) is the Fourier transform of f ( t ) , then,

    f(0) = (1/(2π)) ∫_{-∞}^{∞} F(ω) dω    (8.56)

that is, the value of the time function f ( t ) , evaluated at t = 0 , is equal to the area under its Fou-
rier transform F ( ω ) times 1 ⁄ 2π .
Proof:
In the Inverse Fourier transform of (8.3), we let e^{jωt}|_{t=0} = 1, and (8.56) follows.

14. Parseval’s Theorem


If F ( ω ) is the Fourier transform of f ( t ) , Parseval’s theorem states that
    ∫_{-∞}^{∞} |f(t)|² dt = (1/(2π)) ∫_{-∞}^{∞} |F(ω)|² dω    (8.57)

that is, if the time function f ( t ) represents the voltage across, or the current through an 1 Ω resis-
tor, the instantaneous power absorbed by this resistor is either v 2 ⁄ R , v 2 ⁄ 1 , v 2 , or i 2 R , i 2 . Then,
the integral of the magnitude squared, represents the energy (in watt-seconds or joules) dissipated
by the resistor. For this reason, the integral is called the energy of the signal. Relation (8.57) then,
states that if we do not know the energy of a time function f ( t ) , but we know the Fourier trans-
form of this function, we can compute the energy without the need to evaluate the Inverse Fou-
rier transform.
Proof:
From the frequency convolution property,
1
f 1 ( t )f 2 ( t ) ⇔ ------ F 1 ( ω )∗ F 2 ( ω )

or
∞ ∞
– jωt 1
F { f1 ( t )f2 ( t ) } = ∫– ∞ [ f 1 ( t )f 2 ( t ) ]e dt = ------
2π ∫–∞ F ( χ )F ( ω – χ ) dχ
1 2 (8.58)

Since (8.58) must hold for all values of ω , it must also be true for ω = 0 , and under this condi-
tion, it reduces to
∞ ∞
1
∫ –∞
[ f 1 ( t )f 2 ( t ) ] dt = ------
2π ∫–∞ F ( χ )F ( –χ ) dχ
1 2 (8.59)


For the special case where f 2 ( t ) = f 1∗ ( t ) , and the conjugate functions property f∗ ( t ) ⇔ F∗ ( – ω ) ,
by substitution into (8.59), we get:
    ∫_{-∞}^{∞} [f(t) f*(t)] dt = (1/(2π)) ∫_{-∞}^{∞} F(ω) F*[−(−ω)] dω = (1/(2π)) ∫_{-∞}^{∞} F(ω) F*(ω) dω

Since f(t) f*(t) = |f(t)|² and F(ω) F*(ω) = |F(ω)|², Parseval's theorem is proved.
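As a quick numerical illustration added here (not part of the proof), the following MATLAB sketch evaluates both sides of (8.57) for f(t) = e^{-|t|}, whose transform 2/(ω² + 1) follows from Example 8.3 with a = 1; both integrals evaluate to 1 within truncation error.

    dt = 1e-3;  t = -20:dt:20;                 % exp(-|t|) is negligible beyond |t| = 20
    f  = exp(-abs(t));
    E_time = trapz(t, abs(f).^2)               % analytically equal to 1
    F  = @(w) 2 ./ (1 + w.^2);                 % Fourier transform of exp(-|t|)
    w  = -2000:0.01:2000;
    E_freq = trapz(w, abs(F(w)).^2) / (2*pi)   % also 1, within truncation error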

The Fourier transform properties and theorems are summarized in Table 8.8.

8.4 Fourier Transform Pairs of Common Functions


In this section, we will derive the Fourier transforms of common time functions.
1.
δ(t) ⇔ 1 (8.60)
Proof:
The sifting theorem of the delta function states that

    ∫_{-∞}^{∞} f(t) δ(t − t_0) dt = f(t_0)
and if f(t) is defined at t = 0, then,
    ∫_{-∞}^{∞} f(t) δ(t) dt = f(0)

By the definition of the Fourier transform



    F(ω) = ∫_{-∞}^{∞} δ(t) e^{-jωt} dt = e^{-jωt}|_{t=0} = 1

and (8.60) follows.


We will use the notation f ( t ) ↔ F ( ω ) to show the time domain to frequency domain correspon-
dence. Thus, (8.60) may also be denoted as in Figure 8.1.
Figure 8.1. The Fourier transform of the delta function


TABLE 8.8 Fourier Transform Properties and Theorems

 Property                     f(t)                           F(ω)
 Linearity                    a_1 f_1(t) + a_2 f_2(t) + …    a_1 F_1(ω) + a_2 F_2(ω) + …
 Symmetry                     F(t)                           2π f(−ω)
 Time Scaling                 f(at)                          (1/|a|) F(ω/a)
 Time Shifting                f(t − t_0)                     F(ω) e^{-jωt_0}
 Frequency Shifting           e^{jω_0 t} f(t)                F(ω − ω_0)
 Time Differentiation         d^n f(t)/dt^n                  (jω)^n F(ω)
 Frequency Differentiation    (−jt)^n f(t)                   d^n F(ω)/dω^n
 Time Integration             ∫_{-∞}^{t} f(τ) dτ             F(ω)/(jω) + π F(0) δ(ω)
 Conjugate Functions          f*(t)                          F*(−ω)
 Time Convolution             f_1(t) ∗ f_2(t)                F_1(ω) ⋅ F_2(ω)
 Frequency Convolution        f_1(t) ⋅ f_2(t)                (1/(2π)) F_1(ω) ∗ F_2(ω)
 Area under f(t)              F(0) = ∫_{-∞}^{∞} f(t) dt
 Area under F(ω)              f(0) = (1/(2π)) ∫_{-∞}^{∞} F(ω) dω
 Parseval's Theorem           ∫_{-∞}^{∞} |f(t)|² dt = (1/(2π)) ∫_{-∞}^{∞} |F(ω)|² dω

Likewise, the Fourier transform for the shifted delta function δ ( t – t 0 ) is

    δ(t − t_0) ⇔ e^{-jωt_0}    (8.61)


2.
1 ⇔ 2πδ ( ω ) (8.62)
Proof:
    F^{-1}{2π δ(ω)} = (1/(2π)) ∫_{-∞}^{∞} 2π δ(ω) e^{jωt} dω = ∫_{-∞}^{∞} δ(ω) e^{jωt} dω = e^{jωt}|_{ω=0} = 1

and (8.62) follows.


The f ( t ) ↔ F ( ω ) correspondence is also shown in Figure 8.2.

Figure 8.2. The Fourier transform of unity

Also, by direct application of the Inverse Fourier transform, or the frequency shifting property
and (8.62), we derive the transform
    e^{jω_0 t} ⇔ 2π δ(ω − ω_0)    (8.63)

The transform pairs of (8.62) and (8.63) can also be derived from (8.60) and (8.61) by using the
symmetry property F ( t ) ⇔ 2πf ( – ω )
3.

    cos ω_0 t = (1/2)(e^{jω_0 t} + e^{-jω_0 t}) ⇔ π δ(ω − ω_0) + π δ(ω + ω_0)    (8.64)

Proof:
This transform pair follows directly from (8.63). The f ( t ) ↔ F ( ω ) correspondence is also shown
in Figure 8.3.
Figure 8.3. The Fourier transform of f(t) = cos ω_0 t


We know that cos ω 0 t is real and even function of time, and we found out that its Fourier trans-
form is a real and even function of frequency. This is consistent with the result in Table 8.7.
4.

    sin ω_0 t = (1/(j2))(e^{jω_0 t} − e^{-jω_0 t}) ⇔ jπ δ(ω + ω_0) − jπ δ(ω − ω_0)    (8.65)

Proof:
This transform pair also follows directly from (8.63). The f ( t ) ↔ F ( ω ) correspondence is also
shown in Figure 8.4.

Figure 8.4. The Fourier transform of f(t) = sin ω_0 t

We know that sin ω 0 t is real and odd function of time, and we found out that its Fourier trans-
form is an imaginary and odd function of frequency. This is consistent with the result in Table
8.7.
5.
    sgn(t) = u_0(t) − u_0(−t) ⇔ 2/(jω)    (8.66)
where sgn ( t ) denotes the signum function shown in Figure 8.5.


Figure 8.5. The signum function


Proof:
To derive the Fourier transform of the sgn ( t ) function, it is convenient to express it as an expo-
nential that approaches a limit as shown in Figure 8.6.


Figure 8.6. The signum function as an exponential approaching a limit

Then,
    sgn(t) = lim_{a→0} [ e^{-at} u_0(t) − e^{at} u_0(−t) ]    (8.67)
and
    F{sgn(t)} = lim_{a→0} [ ∫_{-∞}^{0} (−e^{at}) e^{-jωt} dt + ∫_{0}^{∞} e^{-at} e^{-jωt} dt ]
              = lim_{a→0} [ ∫_{-∞}^{0} (−e^{(a − jω)t}) dt + ∫_{0}^{∞} e^{-(a + jω)t} dt ]    (8.68)
              = lim_{a→0} [ −1/(a − jω) + 1/(a + jω) ] = 1/(jω) + 1/(jω) = 2/(jω)

The f ( t ) ↔ F ( ω ) correspondence is also shown in Figure 8.7.

Figure 8.7. The Fourier transform of sgn(t)


We now know that sgn ( t ) is real and odd function of time, and we found out that its Fourier
transform is an imaginary and odd function of frequency. This is consistent with the result in
Table 8.7.
6.
    u_0(t) ⇔ π δ(ω) + 1/(jω)    (8.69)


Proof:
If we attempt to verify the transform pair of (8.69) by direct application of the Fourier transform
definition, we will find that

    F(ω) = ∫_{-∞}^{∞} f(t) e^{-jωt} dt = ∫_{0}^{∞} e^{-jωt} dt = [e^{-jωt}/(−jω)]_{0}^{∞}    (8.70)

– jωt – jωt
but we cannot say that e approaches 0 as t → ∞ , because e = 1 ∠– ωt , that is, the magni-
– jωt
tude of e is always unity, and its angle changes continuously as t assumes greater and greater
values. Since the upper limit cannot be evaluated, the integral of (8.70) does not converge.
To work around this problem, we will make use of the sgn ( t ) function which we express as

sgn ( t ) = 2u 0 ( t ) – 1 (8.71)

This expression is derived from the waveform of Figure 8.8 below.

Figure 8.8. Alternate expression for the signum function

We rewrite (8.71) as
    u_0(t) = (1/2)(1 + sgn(t)) = 1/2 + (1/2) sgn(t)    (8.72)

and since we know that 1 ⇔ 2π δ ( ω ) and sgn ( t ) ⇔ 2 ⁄ ( jω ) , by substitution of these into (8.72)
we get
    u_0(t) ⇔ π δ(ω) + 1/(jω)
and this is the same as (8.69). This is a complex function in the frequency domain whose real part
is π δ(ω) and imaginary part −1/ω.
The f ( t ) ↔ F ( ω ) correspondence is also shown in Figure 8.9.


Figure 8.9. The Fourier transform of the unit step function

Since u 0 ( t ) is real but neither even nor odd function of time, its Fourier transform is a complex
function of frequency as shown in (8.69). This is consistent with the result in Table 8.7.
Now, we will prove the time integration property of (8.49), that is,
    ∫_{-∞}^{t} f(τ) dτ ⇔ F(ω)/(jω) + π F(0) δ(ω)
as follows:
By the convolution integral,
    u_0(t) ∗ f(t) = ∫_{-∞}^{∞} f(τ) u_0(t − τ) dτ
and since u_0(t − τ) = 1 for t > τ, and it is zero otherwise, the above integral reduces to
    u_0(t) ∗ f(t) = ∫_{-∞}^{t} f(τ) dτ
Next, by the time convolution property,
    u_0(t) ∗ f(t) ⇔ U_0(ω) ⋅ F(ω)
and since
    U_0(ω) = π δ(ω) + 1/(jω)
using these results and the sampling property of the delta function, we get
    U_0(ω) ⋅ F(ω) = (π δ(ω) + 1/(jω)) F(ω) = π δ(ω) F(ω) + F(ω)/(jω) = π F(0) δ(ω) + F(ω)/(jω)

Thus, the time integration property is proved.


7.

    e^{jω_0 t} u_0(t) ⇔ π δ(ω − ω_0) + 1/(j(ω − ω_0))    (8.73)

Proof:
From the Fourier transform of the unit step function,
    u_0(t) ⇔ π δ(ω) + 1/(jω)
and the frequency shifting property,
    e^{jω_0 t} f(t) ⇔ F(ω − ω_0)
we obtain (8.73).

8.
    u_0(t) cos ω_0 t ⇔ (π/2) [δ(ω − ω_0) + δ(ω + ω_0)] + 1/(2j(ω − ω_0)) + 1/(2j(ω + ω_0))
                     ⇔ (π/2) [δ(ω − ω_0) + δ(ω + ω_0)] + jω/(ω_0² − ω²)    (8.74)

Proof:
We first express the cosine function as
    cos ω_0 t = (1/2)(e^{jω_0 t} + e^{-jω_0 t})
From (8.73),
    e^{jω_0 t} u_0(t) ⇔ π δ(ω − ω_0) + 1/(j(ω − ω_0))
and
    e^{-jω_0 t} u_0(t) ⇔ π δ(ω + ω_0) + 1/(j(ω + ω_0))
Now, using
    u_0(t) ⇔ π δ(ω) + 1/(jω)
we get (8.74).
9.
    u_0(t) sin ω_0 t ⇔ (π/(j2)) [δ(ω − ω_0) − δ(ω + ω_0)] + ω_0/(ω_0² − ω²)    (8.75)


Proof:
We first express the sine function as
    sin ω_0 t = (1/(j2))(e^{jω_0 t} − e^{-jω_0 t})
From (8.73),
    e^{jω_0 t} u_0(t) ⇔ π δ(ω − ω_0) + 1/(j(ω − ω_0))
and
    e^{-jω_0 t} u_0(t) ⇔ π δ(ω + ω_0) + 1/(j(ω + ω_0))
Using
    u_0(t) ⇔ π δ(ω) + 1/(jω)
we obtain (8.75).

8.5 Finding the Fourier Transform from Laplace Transform


If a time function f ( t ) is zero for t ≤ 0 , we can obtain the Fourier transform of f ( t ) from the one-
sided Laplace transform of f ( t ) by substitution of s with jω .

Example 8.1
It is known that L[e^{-αt} u_0(t)] = 1/(s + α). Compute F{e^{-αt} u_0(t)}.
Solution:

    F{e^{-αt} u_0(t)} = L[e^{-αt} u_0(t)]|_{s = jω} = 1/(s + α)|_{s = jω} = 1/(jω + α)

Thus, we have obtained the following Fourier transform pair.

    e^{-αt} u_0(t) ⇔ 1/(jω + α)    (8.76)

Example 8.2
It is known that
    L[(e^{-αt} cos ω_0 t) u_0(t)] = (s + α)/((s + α)² + ω_0²)


Compute F{(e^{-αt} cos ω_0 t) u_0(t)}

Solution:

    F{(e^{-αt} cos ω_0 t) u_0(t)} = L[(e^{-αt} cos ω_0 t) u_0(t)]|_{s = jω}
                                  = (s + α)/((s + α)² + ω_0²)|_{s = jω} = (jω + α)/((jω + α)² + ω_0²)

Thus, we have obtained the following Fourier transform pair.

    (e^{-αt} cos ω_0 t) u_0(t) ⇔ (jω + α)/((jω + α)² + ω_0²)    (8.77)

We can also find the Fourier transform of a time function f ( t ) that has non-zero values for t < 0 ,
and it is zero for all t > 0 . But because the one-sided Laplace transform does not exist for t < 0 , we
must first express the negative time function in the t > 0 domain, and compute the one-sided
Laplace transform. Then, the Fourier transform of f ( t ) can be found by substituting s with – jω . In
other words, when f ( t ) = 0 for t ≥ 0 , and f ( t ) ≠ 0 for t < 0 , we use the substitution

    F{f(t)} = L[f(−t)]|_{s = −jω}    (8.78)

Example 8.3
Compute the Fourier transform of f(t) = e^{-a|t|}
a. using the Fourier transform definition
b. by substitution into the Laplace transform equivalent
Solution:
a. Using the Fourier transform definition, we get

       F{e^{-a|t|}} = ∫_{-∞}^{0} e^{at} e^{-jωt} dt + ∫_{0}^{∞} e^{-at} e^{-jωt} dt = ∫_{-∞}^{0} e^{(a − jω)t} dt + ∫_{0}^{∞} e^{-(a + jω)t} dt
                    = 1/(a − jω) + 1/(a + jω) = 2a/(ω² + a²)

   and thus we have the transform pair


    e^{-a|t|} ⇔ 2a/(ω² + a²)    (8.79)

b. By substitution into the Laplace transform equivalent, we get

       F{e^{-a|t|}} = L[e^{-at}]|_{s = jω} + L[e^{-at}]|_{s = −jω} = 1/(s + a)|_{s = jω} + 1/(s + a)|_{s = −jω}
                    = 1/(jω + a) + 1/(−jω + a) = 2a/(ω² + a²)

   and this result is the same as (8.79).


We observe that since f ( t ) is real and even, F ( ω ) is also real and even.
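The substitution used in this section can also be carried out symbolically. The sketch below, assuming the Symbolic Math Toolbox, repeats Example 8.3: it compares the direct fourier result with the sum of the two one-sided Laplace transforms evaluated at s = jω and s = −jω.

    syms t w s real
    syms a positive
    F_direct  = fourier(exp(-a*abs(t)), t, w)            % 2*a/(a^2 + w^2)
    F_laplace = subs(laplace(exp(-a*t), t, s), s, 1i*w) ...
              + subs(laplace(exp(-a*t), t, s), s, -1i*w);
    simplify(F_laplace)                                  % also 2*a/(a^2 + w^2)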

8.6 Fourier Transforms of Common Waveforms


In this section, we will derive the Fourier transform of some common time domain waveforms.

Example 8.4
Derive the Fourier transform of the pulse
f ( t ) = A [ u0 ( t + T ) – u0 ( t – T ) ] (8.80)
Solution:
The pulse of (8.80) is shown in Figure 8.10.

Figure 8.10. Pulse for Example 8.4

Using the definition of the Fourier transform, we get

    F(ω) = ∫_{-∞}^{∞} f(t) e^{-jωt} dt = ∫_{-T}^{T} A e^{-jωt} dt = [A e^{-jωt}/(−jω)]_{-T}^{T}
         = A(e^{jωT} − e^{-jωT})/(jω) = 2A (sin ωT)/ω = 2AT (sin ωT)/(ωT)


We observe that the transform of this pulse has the ( sin x ) ⁄ x form, and has its maximum value 2AT
at ωT = 0 *.
Thus, we have the waveform pair

    A[u_0(t + T) − u_0(t − T)] ⇔ 2AT (sin ωT)/(ωT)    (8.81)

The f ( t ) ↔ F ( ω ) correspondence is also shown in Figure 8.11, where we observe that the ω axis
crossings occur at values of ωT = ± nπ where n is an integer.

Figure 8.11. Fourier transform of f(t) = A[u_0(t + T) − u_0(t − T)]

We also observe that since f ( t ) is real and even, F ( ω ) is also real and even.
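A short numerical check of (8.81), added here with arbitrary values A = 1 and T = 0.5: the closed-form expression 2AT sin(ωT)/(ωT) is compared against direct numerical evaluation of the defining integral (8.1) for the pulse.

    A = 1;  T = 0.5;
    w = linspace(-40, 40, 2001);
    F_formula = 2*A*T*ones(size(w));                 % value at w = 0
    nz = w ~= 0;
    F_formula(nz) = 2*A*sin(w(nz)*T)./w(nz);         % 2AT*sin(wT)/(wT)
    F_numeric = zeros(size(w));
    for k = 1:numel(w)
        F_numeric(k) = integral(@(t) A*exp(-1j*w(k)*t), -T, T);
    end
    max(abs(F_formula - F_numeric))                  % agreement to integration tolerance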

Example 8.5
Derive the Fourier transform of the pulse of Figure 8.12.

Figure 8.12. Pulse for Example 8.5

Solution:
The expression for the given pulse is
f ( t ) = A [ u 0 ( t ) – u 0 ( t – 2T ) ] (8.82)

sin x- = 1
* We recall that lim ---------
x→0 x


Using the definition of the Fourier transform, we get

    F(ω) = ∫_{-∞}^{∞} f(t) e^{-jωt} dt = ∫_{0}^{2T} A e^{-jωt} dt = [A e^{-jωt}/(−jω)]_{0}^{2T} = A(1 − e^{-jω2T})/(jω)

and making the substitutions
    1 = e^{jωT} ⋅ e^{-jωT}
    e^{-jω2T} = e^{-jωT} ⋅ e^{-jωT}    (8.83)
we get
    F(ω) = A e^{-jωT} (e^{jωT} − e^{-jωT})/(jω) = 2A e^{-jωT} (sin ωT)/ω = 2AT e^{-jωT} (sin ωT)/(ωT)    (8.84)

Alternate Solution:
We can obtain the Fourier transform of (8.82) using the time shifting property, i.e.,
    f(t − t_0) ⇔ F(ω) e^{-jωt_0}
and the result of Example 8.4. Thus, multiplying 2AT (sin ωT)/(ωT) by e^{-jωT}, we obtain (8.84).

We observe that F ( ω ) is complex* since f ( t ) is neither an even nor an odd function.

Example 8.6
Derive the Fourier transform of the waveform of Figure 8.13.
Figure 8.13. Waveform for Example 8.6
Solution:
The given waveform can be expressed as
f ( t ) = A [ u 0 ( t + T ) + u 0 ( t ) – u 0 ( t – T ) – u 0 ( t – 2T ) ] (8.85)

* We recall that e –j ω T consists of a real and an imaginary part.


and this is precisely the sum of the waveforms of Examples 8.4 and 8.5. We also observe that this
waveform is obtained by the graphical addition of the waveforms of Figures 8.10 and 8.12. There-
fore, we will apply the linearity property to obtain the Fourier transform of this waveform.
We denote the transforms of Examples 8.4 and 8.5 as F_1(ω) and F_2(ω) respectively, and we get

    F(ω) = F_1(ω) + F_2(ω) = 2AT (sin ωT)/(ωT) + 2AT e^{-jωT} (sin ωT)/(ωT)
         = 2AT (1 + e^{-jωT}) (sin ωT)/(ωT) = 2AT e^{-jωT/2} (e^{jωT/2} + e^{-jωT/2}) (sin ωT)/(ωT)    (8.86)
         = 4AT e^{-jωT/2} cos(ωT/2) (sin ωT)/(ωT)
We observe that F ( ω ) is complex since f ( t ) of (8.85) is neither an even nor an odd function.

Example 8.7
Derive the Fourier transform of
f ( t ) = A cos ω 0 t [ u 0 ( t + T ) – u 0 ( t – T ) ] (8.87)

Solution:
From (8.45),
    f(t) cos ω_0 t ⇔ [F(ω − ω_0) + F(ω + ω_0)] / 2
and from (8.81),
    A[u_0(t + T) − u_0(t − T)] ⇔ 2AT (sin ωT)/(ωT)
Then,

    A cos ω_0 t [u_0(t + T) − u_0(t − T)] ⇔ AT [ sin[(ω − ω_0)T]/((ω − ω_0)T) + sin[(ω + ω_0)T]/((ω + ω_0)T) ]    (8.88)

We also observe that since f ( t ) is real and even, F ( ω ) is also real and even*.

Example 8.8
Derive the Fourier transform of a periodic time function with period T .

* The sin x ⁄ x is an even function.


Solution:
From the definition of the exponential Fourier series,

    f(t) = Σ_{n=−∞}^{∞} C_n e^{jnω_0 t}    (8.89)

where ω_0 = 2π/T, and recalling that
    e^{jω_0 t} ⇔ 2π δ(ω − ω_0)
we get
    C_1 e^{jω_0 t}  ⇔ 2π C_1 δ(ω − ω_0)
    C_2 e^{j2ω_0 t} ⇔ 2π C_2 δ(ω − 2ω_0)
    …
    C_n e^{jnω_0 t} ⇔ 2π C_n δ(ω − nω_0)    (8.90)

Taking the Fourier transform of (8.89), and applying the linearity property for the transforms of
(8.90), we get

    F{f(t)} = F{ Σ_{n=−∞}^{∞} C_n e^{jnω_0 t} } = Σ_{n=−∞}^{∞} C_n F{e^{jnω_0 t}} = 2π Σ_{n=−∞}^{∞} C_n δ(ω − nω_0)    (8.91)

The line spectrum of the Fourier transform of (8.91) is shown in Figure 8.14.

Figure 8.14. Line spectrum for relation (8.91)

The line spectrum of Figure 8.14 reveals that the Fourier transform of a periodic time function, con-
sists of a train of equally spaced delta functions. The strength of each δ ( ω – nω 0 ) is equal to 2π
times the coefficient C n .

Example 8.9
Derive the Fourier transform of the periodic time function

Chapter 7
Discrete Time Systems and the Z Transform

This chapter is devoted to discrete time systems, and introduces the one-sided Z Transform.
The definition, theorems, and properties are discussed, and the Z transforms of the most
common discrete time functions are derived. The discrete transfer function is also defined,
and several examples are given to illustrate its application. The Inverse Z transform, and the
methods available for finding it, are also discussed.

7.1 Definition and Special Forms


The Z transform performs the transformation from the domain of discrete time signals, to another
domain which we call z – domain . It is used with discrete time signals, the same way the Laplace and
Fourier transforms are used with continuous time signals. The Z transform yields a frequency
domain description for discrete time signals, and forms the basis for the design of digital systems,
such as digital filters. Like the Laplace transform, there is the one-sided, and the two-sided Z trans-
form. We will restrict our discussion to the one-sided Z transform F(z) of a discrete time function
fn defined as


 f nz
–n
F(z) = (7.1)
n =0

and the Inverse Z transform is defined as

    f[n] = (1/(j2π)) ∮ F(z) z^{n−1} dz    (7.2)

We can obtain a discrete time waveform from an analog (continuous or with a finite number of
discontinuities) signal, by multiplying it by a train of impulses. We denote the continuous signal as f(t),
and the impulses as

    δ[n] = Σ_{n=0}^{∞} δ(t − nT)    (7.3)

Multiplication of f(t) by δ[n] produces the signal g(t) defined as

    g(t) = f(t) ⋅ δ[n] = f(t) Σ_{n=0}^{∞} δ(t − nT)    (7.4)

These signals are shown in Figure 7.1.


Figure 7.1. Formation of discrete time signals


Of course, after multiplication by n , the only values of f(t) which are not zero, are those for
which t = nT , and thus we can express (7.4) as
 
g(t) = f nT

n =0
t – nT = 
n=0
f nTt – nT (7.5)

Next, we recall from Chapter 2, that the t – domain to s – domain transform pairs for the delta
function are (t)  1 and (t – T)  e . Therefore, taking the Laplace transform of both sides of
–sT

(7.5), and, for simplicity, letting f nT = f n , we get

    
G(s) = L  f n

 t – nT  = f n

 e–nsT =  f ne
–nsT
(7.6)
n =0 n =0 n=0


Relation (7.6), with the substitution z = e^{sT}, becomes the same as (7.1), and like s, z is also a
complex variable.
The Z and Inverse Z transforms are denoted as

F(z) = Z f n (7.7)

and

–1
f n = Z F(z) (7.8)

The function F(z), as defined in (7.1), is a series of complex numbers and converges outside the
circle of radius R, that is, it converges (approaches a limit) when |z| > R. In complex variables theory,
the radius R is known as the radius of absolute convergence.
In the complex z-plane the region of convergence is the set of z for which the magnitude of F(z)
is finite, and the region of divergence is the set of z for which the magnitude of F(z) is infinite.
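As with the Fourier and Laplace transforms, Z transform pairs can be checked symbolically in MATLAB. A minimal sketch, assuming the Symbolic Math Toolbox functions ztrans and iztrans, is shown below; the pair it produces is derived analytically in Example 7.1.

    syms n z a
    F = ztrans(a^n, n, z)      % returns z/(z - a)
    f = iztrans(F, z, n)       % recovers a^n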

7.2 Properties and Theorems of the Z Transform


The properties and theorems of the Z transform are similar to those of the Laplace transform. In
this section, we will state and prove the most common Z transforms.
1. Linearity

af1n + bf2n + cf3n +   aF1(z) + bF2(z) + cF3(z) +  (7.9)

where a b c  are arbitrary real or complex constants.


Proof:
The proof is easily obtained by application of the definition of the Z transform to each term on
the left side.
In our subsequent discussion, we will denote the discrete unit step function as u0n .

2. Shift of fnu0 n in the Discrete Time Domain

–m
f n – mu 0 n – m  z F(z) (7.10)


Proof:
Applying the definition of the Z transform, we get

Z  f n – mu0 n – m = f n – mu0 n – mz–n

n =0

and since u0 n – m = 0 for n  m and u 0 n – m = 1 for n  m , the above expression reduces to



Z  f n – m =  f n – mz –n

n=0

Now, we let n – m = k ; then, n = k + m , and when n – m = 0 or n = m , k = 0 . Therefore,


  

 f k z
–(k + m)
Z  f n – m = z–mF(z)
 f kz  f kz
–k –m
= z = z–m –k
=
k=0 k=0 k=0

3. Right Shift in the Discrete Time Domain

This property is a generalization of the previous property, and allows use of non-zero values for
n < 0. The transform pair is

    f[n − m] ⇔ z^{-m} F(z) + Σ_{n=0}^{m−1} f[n − m] z^{-n}    (7.11)
Proof:
By application of the definition of the Z transform, we get

Z  f n – m =  f n – mz –n

n=0

We let n – m = k ; then, n = k + m , and when n = 0 , k = –m . Therefore,


  
  
–(k + m) –k –m –m –k
Z  f n – m = f k z = f k z z = z f k z
k = –m k = –m k = –m
–1  –1

 
–m –m
f kz–k + f kz–k

=z f kz –k =z F(z) +
k = –m k=0 k = –m

When k = –m , n = 0 , and when k = –1 , n = m – 1 . Then, by substitution into the last summa-


tion term above, we get


Z   –m
()
m–1
  m–n –m
()
m–1
  –n
f n– m = z F z +
n=0
 fn–mz = z F z +

n=0
fn–mz

and this is the same as (7.11).


For m = 1, (7.11) reduces to
    f[n − 1] ⇔ z^{-1} F(z) + f[−1]    (7.12)
and for m = 2, reduces to
    f[n − 2] ⇔ z^{-2} F(z) + f[−2] + z^{-1} f[−1]    (7.13)

4. Left Shift in the Discrete Time Domain

    f[n + m] ⇔ z^{m} F(z) − Σ_{n=−m}^{−1} f[n + m] z^{-n}    (7.14)

that is, if f[n] is a discrete time signal, and m is a positive integer, the mth left shift of f[n] is
f[n + m].

Proof:

Z  f n + m =  f n + mz –n

n=0

We let n + m = k ; then, n = k – m , and when n = 0 , k = m . Then,


 

 f k z
–(k – m)
Z  f n + m =  f kz
–k m
= z
k =m k=m
 m–1 m–1
F(z) –
 f k z
m m
 f k z  f k z
=z –k
– –k =z k

k=0 k=0 k=0

When k = 0 , n = –m , and when k = m – 1 , n = –1 . Then, by substitution into the last sum-


mation term of the above expression, we get
–1 –1
Z f n + m = z m
F(z) +  f n + m z –(n + m) m
= z F(z) +  f n + mz–n
n = –m n = –m

and this is the same as (7.14).


For m = 1, the above expression reduces to
    Z{f[n + 1]} = z F(z) − f[0] z    (7.15)
and for m = 2, reduces to
    Z{f[n + 2]} = z² F(z) − f[0] z² − f[1] z    (7.16)

5. Multiplication by a^n in the Discrete Time Domain

    a^n f[n] ⇔ F(z/a)    (7.17)

Proof:
    Z{a^n f[n]} = Σ_{n=0}^{∞} a^n f[n] z^{-n} = Σ_{n=0}^{∞} f[n] (z/a)^{-n} = F(z/a)

6. Multiplication by e^{-naT} in the Discrete Time Domain

    e^{-naT} f[n] ⇔ F(e^{aT} z)    (7.18)

Proof:
    Z{e^{-naT} f[n]} = Σ_{n=0}^{∞} e^{-naT} f[n] z^{-n} = Σ_{n=0}^{∞} f[n] (e^{aT} z)^{-n} = F(e^{aT} z)

7. Multiplication by n and n² in the Discrete Time Domain

    n f[n] ⇔ −z (d/dz) F(z)
    n² f[n] ⇔ z (d/dz) F(z) + z² (d²/dz²) F(z)    (7.19)

Proof:
By definition,

f nz –n
F(z) =

n=0


and taking the first derivative of both sides with respect to z we get
 
 nf nz
d –z –1
(–n)f nz– n – 1 = –n

-----F(z) =
dz
n=0 n=0

Multiplication of both sides by –z yields



d
 nf nz = – z-- ---(z)
–n
F
dz
n=0

Differentiating one more time, we get the second pair in (7.19).


8. Summation in the Discrete Time Domain

    Σ_{m=0}^{n} f[m] ⇔ (z/(z − 1)) F(z)    (7.20)

that is, the Z transform of the sum of the values of a signal is equal to z/(z − 1) times the Z
transform of the signal. This property is equivalent to time integration in the continuous time
domain, since integration in the discrete time domain is summation. We will see in the next sec-
tion that the term z/(z − 1) is the Z transform of the discrete unit step function u_0[n], and
recalling that in the s-domain
    u_0(t) ⇔ 1/s
and
    ∫_{0}^{t} f(τ) dτ ⇔ F(s)/s
then, the similarity of the Laplace and Z transforms becomes apparent.


Proof:
Let
n
yn =
 xm
m=0
(7.21)

and let us express (7.21) as


n–1
yn =
m=0
 xm + xn (7.22)

Since the summation symbol in (7.21) is yn , then the summation symbol in (7.22) is yn – 1 ,
and thus we can write (7.22) as


yn = yn – 1 + xn (7.23)


Next, we take the Z transform of both sides of (7.23), and using the property

xn – mu0 n – m  z–mX(z)


we get
Y(z) = z–1Y(z) + X(z)
or
1 z-------------
Y(z) = --------------- X(z) = ----- X(z)
1 – z–1 z–1

and this relation is the same as (7.20).


9. Convolution in the Discrete Time Domain
Let the impulse response of a discrete time system be denoted as hn , that is, an impulse n ,
produces a response hn . Likewise, a delayed impulse n – m produces a delayed response
hn – m , and so on. Therefore, any discrete time input signal can be considered as an impulse
train, in which each impulse has a weight equal to its corresponding sampled value. Then, for any
other input x0 x1 x2  xm , we get
x00 → x0hn
x1n – 1 → x1hn – 1
x2n – 2 → x2hn – 2

xmn – m → xmhn – m
and the response at any arbitrary value m, is obtained by summing all the components that have
occurred up to that point, that is, if yn is the output due to the input xm convolved with
hn , then,
n
yn =
 xmhn – m
m=0
(7.24)

or
n
yn =
 hn – mxm
m=0
(7.25)

We will now prove that convolution in the discrete time domain corresponds to multiplication in
the Z domain, that is,

f1nf2n  F1(z)  F2(z) (7.26)


Proof:
Taking the Z transform of both sides of (7.24), we get
  
 
Y(z) = Z

xmhn – m  =
   xmhn – m z–n
 m=0  n =0 m=0

and interchanging the order of the summation, we get


   

   hn – m z
–n –n
xmhn – mz

Y(z) = = xm
m =0 n = 0 m =0 n =0

Next, we let k = n – m , then, n = k + m , and thus,


   
Y(z) =  xm  hkz–(k + m) =  xmz–m  hkz –k

m=0 n =0 m=0 n =0
or
    Y(z) = X(z) ⋅ H(z)    (7.27)
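A quick numerical illustration of (7.26) and (7.27), added here with arbitrary sequences: MATLAB's conv implements the sum (7.24) directly, while filter applies the transfer function H(z) built from the same impulse response; the two outputs agree.

    h = [1 0.5 0.25];                          % impulse response h[0..2]
    x = [1 2 3 4];                             % input x[0..3]
    y_conv = conv(x, h);                       % y[n] from (7.24), length 6
    y_filt = filter(h, 1, [x zeros(1, 2)]);    % same output via H(z) = h(1) + h(2)z^-1 + h(3)z^-2
    max(abs(y_conv - y_filt))                  % zero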
10. Convolution in the Discrete Frequency Domain
If f_1[n] and f_2[n] are two sequences with Z transforms F_1(z) and F_2(z) respectively, then,

    f_1[n] ⋅ f_2[n] ⇔ (1/(j2π)) ∮ F_1(v) F_2(z/v) v^{-1} dv    (7.28)

where v is a dummy variable, and the integration is over a closed contour inside the overlap of the
convergence regions for F_1(v) and F_2(z/v). The proof requires contour integration; it will not be provided here.

11. Initial Value Theorem

    f[0] = lim_{z→∞} F(z)    (7.29)

Proof:
For all n  1 , as z → 
1
z –n = --- → 0
zn

and under these conditions f nz–n → 0 also. Taking the limit as z →  in the expression



f n z –n
F(z) =

n=0

we observe that the only non-zero value in the summation is that of n = 0 . Then,

 f nz –n
= f 0 z–0 = f 0
n=0
Therefore,
lim F(z) = f 0
z→

12. Final Value Theorem


This theorem states that if f n approaches a limit as n →  , we can find that limit, if it exists, by
multiplying the Z transform of f n by (z – 1) , and taking the limit of the product as z → 1 .
That is,

lim f n = lim (z – 1)F(z) (7.30)


n→ z→1

Proof:
Let us consider the Z transform of the sequence f n + 1 – f n , i.e.,

Z  f n + 1 – f n n + 1 – f n)z–n
= 
n=0
(f

We replace the upper limit of the summation with k, and we let k →  . Then,
k
Z  f n + 1 – f n =  ( f n + 1 – f n)z
–n
lim (7.31)
k→
n=0

From (7.15),
Z  f n + 1 = zF(z) – f 0z (7.32)
and by substitution of (7.32) into (7.31), we get
k
–n
zF(z) – f 0z – F(z)
 ( f n + 1 – f n)z
= lim
k→
n=0

Taking the limit as z → 1 on both sides, we get


 k 
 ( f n + 1 – f n)z
–n
lim (z – 1)F(z) – f 0z = lim  lim 
z→1 z → 1 k →  
n=0
k

 zlim ( f n + 1 – f n)z
= lim –n
k→ →1
n =0

k
lim (z – 1)F(z) – lim
z→1 z→1
f 0z = lim
k→
 zlim
→1
 f n + 1 – f n z –n

n=0

k
lim (z – 1)F(z) – f 0
 f n + 1 – f n
= lim
z→1 k→
n=0

= lim f k – f 0 = lim f k – f 0


k→ k→

lim f k = lim (z – 1)F(z)


k→ z→1

We must remember, however, that if the sequence f n does not approach a limit, the final value
theorem is invalid. The right side of (7.30) may exist even though f n does not approach a limit.
In instances where we cannot determine whether f n exists or not, we can be certain that it
exists, if X(z) can be expressed in a proper rational form as
A(z )
X(z) = -----------
B(z)

where A(z) and B(z) are polynomials with real coefficients.
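As a quick symbolic check of (7.30), added here and assuming the Symbolic Math Toolbox: for f[n] = 1 − (1/2)^n, which clearly approaches 1, the limit of (z − 1)F(z) as z → 1 also evaluates to 1.

    syms n z
    f = 1 - (1/2)^n;
    F = ztrans(f, n, z);         % z/(z - 1) - z/(z - 1/2)
    limit((z - 1)*F, z, 1)       % returns 1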


We summarize the properties and theorems of the Z transform in Table 7.1.

7.3 The Z Transform of Common Discrete Time Functions


In this section we will provide several examples to find the Z transform of some discrete time func-
tions.

Example 7.1
Find the Z transform of the geometric sequence defined as

0 n = –1 –2 –3 


f n =  n (7.33)
a n = 0 1 2 3 


TABLE 7.1 Properties and Theorems of the Z transform

 Property / Theorem          Time Domain                  Z transform
 Linearity                   a f_1[n] + b f_2[n] + …      a F_1(z) + b F_2(z) + …
 Shift of f[n] u_0[n]        f[n − m] u_0[n − m]          z^{-m} F(z)
 Right Shift                 f[n − m]                     z^{-m} F(z) + Σ_{n=0}^{m−1} f[n − m] z^{-n}
 Left Shift                  f[n + m]                     z^{m} F(z) − Σ_{n=−m}^{−1} f[n + m] z^{-n}
 Multiplication by a^n       a^n f[n]                     F(z/a)
 Multiplication by e^{-naT}  e^{-naT} f[n]                F(e^{aT} z)
 Multiplication by n         n f[n]                       −z (d/dz) F(z)
 Multiplication by n²        n² f[n]                      z (d/dz) F(z) + z² (d²/dz²) F(z)
 Summation in Time           Σ_{m=0}^{n} f[m]             (z/(z − 1)) F(z)
 Time Convolution            f_1[n] ∗ f_2[n]              F_1(z) ⋅ F_2(z)
 Frequency Convolution       f_1[n] ⋅ f_2[n]              (1/(j2π)) ∮ F_1(v) F_2(z/v) v^{-1} dv
 Initial Value Theorem       f[0] = lim_{z→∞} F(z)
 Final Value Theorem         lim_{n→∞} f[n] = lim_{z→1} (z − 1) F(z)

Solution:
From the definition of the Z transform,

    F(z) = Σ_{n=0}^{∞} f[n] z^{-n} = Σ_{n=0}^{∞} a^n z^{-n} = Σ_{n=0}^{∞} (a z^{-1})^n    (7.34)


To evaluate this infinite summation, we form a truncated version of F(z) which contains the first k
terms of the series. We denote this truncated version as Fk(z) . Then,
k– 1
n –n –1 2 –2 k – 1 –(k – 1)
F k (z) = a z = 1 + az + a z +  + a z (7.35)
n=0

and we observe that as k →  , (7.35) becomes the same as (7.34).


To express (7.35) in a closed form, we multiply both sides by az–1 . Then,

az–1Fk (z) = az–1 + a2z–2 + a3z–3 +  + akz–k (7.36)

Subtracting (7.36) from (7.35), we get

F k(z) – az–1F (z)


k = 1 – akz–k
or

k –k –1 k
1–az = – (az )
1--------------------------
F k (z) = -------------------- (7.37)
1 – az–1 1 – az–1

for az–1  1
k
To determine F(z) from F k(z) , we examine the behavior of the term (az–1) in the numerator of
k –1 j
(7.37). We write the terms az–1 and (az–1) in polar form, that is, az–1 = az e and
k –1 k jk
(az –1 ) = az e (7.38)
–1
From (7.38) we observe that, for the values of z for which az  1 , the magnitude of the complex
k
number (az–1) → 0 as k →  and therefore,
z
F(z) = lim Fk(z) = ---------1----------= ----- ----- (7.39)
k→ 1 – az–1 z–a

for az–1  1
–1 k
For the values of z for which az  1 , the magnitude of the complex number (az–1) becomes
unbounded as k →  , and therefore, F(z) = lim F (z) is unbounded for az–1  1 .
k
k→

In summary,
 n
(az–1)

F(z) =
n=0


–1 –1
converges to the complex number z  (z – a) for az  1 , and diverges for az  1 .
Also, since
    |a z^{-1}| = |a|/|z|
then, |a z^{-1}| < 1 implies that |z| > |a|, while |a z^{-1}| > 1 implies |z| < |a|, and thus,

    Z{a^n u_0[n]} = Σ_{n=0}^{∞} a^n z^{-n} = z/(z − a)   for |z| > |a|,   unbounded for |z| < |a|    (7.40)

The regions of convergence and divergence for the sequence of (7.40) are shown in Figure 7.2.

Figure 7.2. Regions of convergence and divergence for the geometric sequence a^n
To determine whether the circumference of the circle, where z = a |, lies in the region of conver-
gence or divergence, we evaluate the sequence Fk(z) at z = a . Then,
k–1

n –n
Fk(z) = a z = 1 + az–1 + a2 z–2 +  + ak – 1 z–(k – 1)
n =0 z=a (7.41)
=1+1+1++1=k

We see that this sequence becomes unbounded as k → ∞, and therefore, the circumference of the
circle lies in the region of divergence.
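The convergence behavior just described can be seen numerically. The following sketch, with arbitrarily chosen values, evaluates the truncated sum F_k(z) of (7.35) at a point outside and a point inside the circle |z| = |a|.

    a = 0.8;  k = 200;  n = 0:k-1;
    Fk = @(z) sum((a./z).^n);            % F_k(z), the first k terms of the series
    [Fk(1.5), 1.5/(1.5 - a)]             % |z| > |a|: partial sum is already close to z/(z-a)
    Fk(0.5)                              % |z| < |a|: grows without bound as k increases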

Example 7.2
Find the Z transform of the discrete unit step function u0 n shown in Figure 7.3.


u0n
0 n0 1
u 0 n = 
1 n0 ....
0
n

Figure 7.3. The discrete unit step function u0n

Solution:
From the definition of the Z transform,
 
F(z) =  f nz –n
=  1z –n
(7.42)
n=0 n=0

As in the previous example, to evaluate this infinite summation, we form a truncated version of
F(z) which contains the first k terms of the series, and we denote this truncated version as F k (z) .
Then,
k–1
z
–n
Fk(z) = = 1 + z–1 + z–2 +  + z–(k – 1) (7.43)
n=0

and we observe that as k →  , (7.43) becomes the same as (7.42). To express (7.43) in a closed
form, we multiply both sides by z–1 and we get

k (z) = z + z + z +  + z
z–1F –1 –2 –3 –k
(7.44)

Subtracting (7.44) from (7.43), we get

kF (z) – z–1Fk (z) = 1 – z–k


or
–k –1 k
– (z )
1 – z = 1----------------------
F k (z) = --------------- (7.45)
–1
1 –z 1 – z–1

for z–1  1
k k k
Since (z–1) = z–1 ejk , as k →  , (z–1) → 0 . Therefore,
    F(z) = lim_{k→∞} F_k(z) = 1/(1 − z^{-1}) = z/(z − 1)    (7.46)

for |z| > 1, and the region of convergence lies outside the unit circle.


Alternate Solution:
The discrete unit step u_0[n] is a special case of the sequence a^n with a = 1, and since 1^n = 1, by
substitution into (7.40) we get

    Z{u_0[n]} = Σ_{n=0}^{∞} 1 ⋅ z^{-n} = z/(z − 1)   for |z| > 1,   unbounded for |z| < 1    (7.47)

Example 7.3
Find the Z transform of the discrete exponential sequence f[n] = e^{-naT}
Solution:

    F(z) = Σ_{n=0}^{∞} e^{-naT} z^{-n} = 1 + e^{-aT} z^{-1} + e^{-2aT} z^{-2} + e^{-3aT} z^{-3} + …

and this is a geometric sequence which can be expressed in closed form as

    Z{e^{-naT}} = 1/(1 − e^{-aT} z^{-1}) = z/(z − e^{-aT})    (7.48)

for |e^{-aT} z^{-1}| < 1

Example 7.4
Find the Z transform of the discrete time functions f1n = cosnaT and f2n = sin naT

Solution:
From (7.48) of Example 7.3,
z
e–naT  --------- --------
z – e–aT
and replacing –naT with jnaT we get
z
Z e jnaT  = Z  cos naT + j sin naT = --------- ----------
ja T
z–e
z z – e–jaT
= Z  cos naT  + jZ  sin naT  = ------------------  -------------------
z – e jaT z – e–jaT
2
z – zcosaT + jz sinaT
= Z  cos naT  + jZ  sin naT  = -----------------------------------------------------
z2 – 2zcosaT + 1


Equating real and imaginary parts, we get the transform pairs

    cos naT ⇔ (z² − z cos aT)/(z² − 2z cos aT + 1)    (7.49)
and
    sin naT ⇔ (z sin aT)/(z² − 2z cos aT + 1)    (7.50)

To define the regions of convergence and divergence, we express the denominator of (7.49) or
(7.50) as

(z – e jaT)  (z – e–jaT) (7.51)

We see that both pairs of (7.49) and (7.50) have two poles, one at z = e jaT and the other at
z = e–jaT , that is, the poles lie on the unity circle as shown in Figure 7.4.

Figure 7.4. Regions of convergence and divergence for cos naT and sin naT
From Figure 7.4, we see that the poles separate the regions of convergence and divergence. Also,
since the circumference of the circle lies in the region of divergence, as we have seen before, the
poles lie in the region of divergence. Therefore, for the discrete time cosine and sine functions we
have the pairs

$$\cos naT \Leftrightarrow \frac{z^2 - z\cos aT}{z^2 - 2z\cos aT + 1} \quad \text{for } |z| > 1 \qquad (7.52)$$

and

$$\sin naT \Leftrightarrow \frac{z\sin aT}{z^2 - 2z\cos aT + 1} \quad \text{for } |z| > 1 \qquad (7.53)$$
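The pair (7.52) can be verified numerically in the same way (an illustrative sketch, not part of the original text; the constants $a$, $T$ and the test point are arbitrary choices):

```python
import numpy as np

a, T = 2.0, 0.1                     # arbitrary constants
z = 1.8 * np.exp(1j * 0.7)          # arbitrary test point with |z| > 1
n = np.arange(5000)

lhs = np.sum(np.cos(n * a * T) * z**(-n))                                # truncated sum of cos(naT) z^{-n}
rhs = (z**2 - z*np.cos(a*T)) / (z**2 - 2*z*np.cos(a*T) + 1)              # closed form (7.52)

print(np.isclose(lhs, rhs))   # expected: True
```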


It is shown in complex variables theory that if F(z) is a proper rational function*, all poles lie outside
the region of convergence, but the zeros can lie anywhere on the z -plane.
Example 7.5
Find the Z transform of the discrete unit ramp $f[n] = nu_0[n]$.

Solution:

$$\mathcal{Z}\{nu_0[n]\} = \sum_{n=0}^{\infty} nz^{-n} = 0 + z^{-1} + 2z^{-2} + 3z^{-3} + \dots \qquad (7.54)$$

We can express (7.54) in closed form using the discrete unit step function transform pair

$$\mathcal{Z}\{u_0[n]\} = \sum_{n=0}^{\infty} (1)z^{-n} = \frac{z}{z-1} \quad \text{for } |z| > 1 \qquad (7.55)$$

Differentiating both sides of (7.55) with respect to $z$, we get

$$\frac{d}{dz}\sum_{n=0}^{\infty} (1)z^{-n} = \frac{d}{dz}\left(\frac{z}{z-1}\right)$$

or

$$\sum_{n=0}^{\infty} -nz^{-n-1} = \frac{-1}{(z-1)^2}$$

Multiplication by $-z$ yields

$$\sum_{n=0}^{\infty} nz^{-n} = \sum_{n=0}^{\infty} n(1)z^{-n} = \frac{z}{(z-1)^2}$$

and thus we have the transform pair

$$nu_0[n] \Leftrightarrow \frac{z}{(z-1)^2} \qquad (7.56)$$

We summarize the transform pairs we have derived, and others, given as exercises at the end of this
chapter, in Table 7.2.

* This was defined in Chapter 3, page 3-1.


TABLE 7.2 The Z Transform of Common Discrete Time Functions

    f[n]                        F(z)
    δ[n]                        1
    δ[n−m]                      z^{−m}
    a^n u_0[n]                  z/(z−a),                                  |z| > |a|
    u_0[n]                      z/(z−1),                                  |z| > 1
    e^{−naT} u_0[n]             z/(z−e^{−aT}),                            |e^{−aT}z^{−1}| < 1
    (cos naT) u_0[n]            (z² − z cos aT)/(z² − 2z cos aT + 1),     |z| > 1
    (sin naT) u_0[n]            (z sin aT)/(z² − 2z cos aT + 1),          |z| > 1
    (a^n cos naT) u_0[n]        (z² − az cos aT)/(z² − 2az cos aT + a²),  |z| > a
    (a^n sin naT) u_0[n]        (az sin aT)/(z² − 2az cos aT + a²),       |z| > a
    u_0[n] − u_0[n−m]           (z^m − 1)/(z^{m−1}(z − 1))
    n u_0[n]                    z/(z−1)²
    n² u_0[n]                   z(z+1)/(z−1)³
    (n+1) u_0[n]                z²/(z−1)²
    a^n n u_0[n]                az/(z−a)²
    a^n n² u_0[n]               az(z+a)/(z−a)³
    a^n n(n+1) u_0[n]           2az²/(z−a)³


7.4 Computation of the Z Transform with Contour Integration*


Let $F(s)$ be the Laplace transform of a continuous time function $f(t)$, and $F^{*}(s)$ the transform of
the sampled time function $f(t)$, which we denote as $f^{*}(t)$. It is shown in complex variables theory
that $F^{*}(s)$ can be derived from $F(s)$ by the use of the contour integral

$$F^{*}(s) = \frac{1}{j2\pi}\oint_C \frac{F(v)}{1 - e^{-sT}e^{vT}}\,dv \qquad (7.57)$$

where $C$ is a contour enclosing all singularities (poles) of $F(s)$, and $v$ is a dummy variable for $s$. We
can compute the Z transform of a discrete time function $f[n]$ using the transformation

$$F(z) = F^{*}(s)\big|_{z = e^{sT}} \qquad (7.58)$$

By substitution of (7.58) into (7.57), and replacing $v$ with $s$, we get

$$F(z) = \frac{1}{j2\pi}\oint_C \frac{F(s)}{1 - z^{-1}e^{sT}}\,ds \qquad (7.59)$$

Next, we use Cauchy's Residue Theorem to express (7.59) as

$$F(z) = \sum_k \text{Res}\left[\frac{F(s)}{1 - z^{-1}e^{sT}}\right]_{s = p_k}
      = \sum_k \lim_{s \to p_k}(s - p_k)\frac{F(s)}{1 - z^{-1}e^{sT}} \qquad (7.60)$$

Example 7.6
Derive the Z transform of the discrete unit step function $u_0[n]$ using the residue theorem.
Solution:
We learned in Chapter 2, that

$$\mathcal{L}\{u_0(t)\} = 1/s$$

Then, by the residue theorem of (7.60),

$$F(z) = \lim_{s \to p_k}(s - p_k)\frac{F(s)}{1 - z^{-1}e^{sT}} = \lim_{s \to 0}(s - 0)\frac{1/s}{1 - z^{-1}e^{sT}}
= \lim_{s \to 0}\frac{1}{1 - z^{-1}e^{sT}} = \frac{1}{1 - z^{-1}} = \frac{z}{z - 1}$$

* This section may be skipped without loss of continuity. It is intended for readers who have prior knowledge of
complex variables theory. However, the following examples will show that this procedure is not difficult.

for z  1 , and this is the same as (7.47).

Example 7.7

Derive the Z transform of the discrete exponential function $e^{-naT}u_0[n]$ using the residue theorem.

Solution:
From Chapter 2,

$$\mathcal{L}\{e^{-at}u_0(t)\} = \frac{1}{s + a}$$

Then, by the residue theorem of (7.60),

$$F(z) = \lim_{s \to p_k}(s - p_k)\frac{F(s)}{1 - z^{-1}e^{sT}} = \lim_{s \to -a}(s + a)\frac{1/(s + a)}{1 - z^{-1}e^{sT}}
= \lim_{s \to -a}\frac{1}{1 - z^{-1}e^{sT}} = \frac{1}{1 - z^{-1}e^{-aT}} = \frac{z}{z - e^{-aT}}$$

for $|e^{-aT}z^{-1}| < 1$, and this is the same as (7.48).

Example 7.8
Derive the Z transform of the discrete unit ramp function $nu_0[n]$ using the residue theorem.
Solution:
From Chapter 2,

$$\mathcal{L}\{tu_0(t)\} = 1/s^2$$

Since $F(s)$ has a second order pole at $s = 0$, we need to apply the residue theorem applicable to a
pole of order $n$. This theorem states that

$$F(z) = \sum_k \lim_{s \to p_k}\frac{1}{(n-1)!}\,\frac{d^{\,n-1}}{ds^{\,n-1}}\left[(s - p_k)^n\,\frac{F(s)}{1 - z^{-1}e^{sT}}\right] \qquad (7.61)$$

Thus, for this example,

$$F(z) = \lim_{s \to 0}\frac{d}{ds}\left[s^2\,\frac{1/s^2}{1 - z^{-1}e^{sT}}\right]
       = \lim_{s \to 0}\frac{d}{ds}\left[\frac{1}{1 - z^{-1}e^{sT}}\right] = \frac{z}{(z - 1)^2}$$

for $|z| > 1$, and this is the same as (7.56).


7.5 Transformation Between s and z Domains


It is shown in complex variables textbooks that every function of a complex variable maps (transforms)
a plane $xy$ to another plane $uv$. In this section, we will investigate the mapping of the plane
of the complex variable $s$ into the plane of the complex variable $z$.
Let us reconsider expressions (7.6) and (7.1) which are repeated here for convenience.

$$G(s) = \sum_{n=0}^{\infty} f[n]e^{-nsT} \qquad (7.62)$$

and

$$F(z) = \sum_{n=0}^{\infty} f[n]z^{-n} \qquad (7.63)$$

By comparison of (7.62) with (7.63),

$$G(s) = F(z)\big|_{z = e^{sT}} \qquad (7.64)$$

Thus, the variables $s$ and $z$ are related as

$$z = e^{sT} \qquad (7.65)$$

and

$$s = \frac{1}{T}\ln z \qquad (7.66)$$

Therefore,

$$F(z) = G(s)\big|_{s = \frac{1}{T}\ln z} \qquad (7.67)$$

Since $s$ and $z$ are both complex variables, relation (7.67) allows the mapping (transformation) of
regions of the $s$-plane into the $z$-plane. We find this transformation by recalling that $s = \sigma + j\omega$ and
therefore, expressing $z$ in magnitude-phase form and using (7.65), we get

$$z = |z|e^{j\theta} = e^{\sigma T}e^{j\omega T} \qquad (7.68)$$

where,

$$|z| = e^{\sigma T} \qquad (7.69)$$

and

$$\theta = \omega T \qquad (7.70)$$

Since

$$T = 1/f_s$$

the period $T$ defines the sampling frequency $f_s$. Then, $\omega_s = 2\pi f_s$ or $f_s = \omega_s/2\pi$, and

$$T = 2\pi/\omega_s$$

Therefore, we express (7.70) as

$$\theta = \omega\,\frac{2\pi}{\omega_s} = 2\pi\,\frac{\omega}{\omega_s} \qquad (7.71)$$

and by substitution of (7.69) and (7.71) into (7.68), we get

$$z = e^{\sigma T}e^{j2\pi(\omega/\omega_s)} \qquad (7.72)$$

The quantity $e^{j2\pi(\omega/\omega_s)}$ in (7.72) defines the unit circle; therefore, let us examine the behavior of
$z$ when $\sigma$ is negative, zero, or positive.

Case I, $\sigma < 0$: When $\sigma$ is negative, from (7.69), we see that $|z| < 1$, and thus the left half of the $s$-plane
maps inside the unit circle of the $z$-plane, and for different negative values of $\sigma$, we get concentric
circles with radius less than unity.

Case II, $\sigma > 0$: When $\sigma$ is positive, from (7.69), we see that $|z| > 1$, and thus the right half of the $s$-plane
maps outside the unit circle of the $z$-plane, and for different positive values of $\sigma$ we get concentric
circles with radius greater than unity.

Case III, $\sigma = 0$: When $\sigma$ is zero, from (7.72), we see that $z = e^{j2\pi(\omega/\omega_s)}$ and all values of $\omega$ lie
on the circumference of the unit circle. For illustration purposes, we have mapped several fractional
values of the sampling radian frequency $\omega_s$, and these are shown in Table 7.3.

From Table 7.3, we see that the portion of the $j\omega$ axis for the interval $0 \le \omega \le \omega_s$ in the $s$-plane
maps on the circumference of the unit circle in the $z$-plane as shown in Figure 7.5.
The mapping from the $z$-plane to the $s$-plane is a multi-valued transformation since, as we have
seen, $s = \frac{1}{T}\ln z$, and it is shown in complex variables textbooks that

$$\ln z = \ln z + j2n\pi, \quad n = 0, \pm 1, \pm 2, \dots \qquad (7.73)$$


TABLE 7.3 Mapping of multiples of sampling frequency

    ω          |z|     θ
    0           1      0
    ωs/8        1      π/4
    ωs/4        1      π/2
    3ωs/8       1      3π/4
    ωs/2        1      π
    5ωs/8       1      5π/4
    3ωs/4       1      3π/2
    7ωs/8       1      7π/4
    ωs          1      2π

j s-plane Imz z-plane

 = 0.25s
s x

x x 0.125s
0.875s
0.75s 0.375s z =1
0.625s  = 0.5s x =0
x Rez
0.5s   = s
0.375s
0.25s x
0.625 sx 0.875s
0.125s x
  = 0.75s
0 =0 0

Figure 7.5. Mapping of the s-plane to z-plane
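The mapping $z = e^{sT}$ of (7.65) is easy to explore numerically. The short sketch below (an illustration, not part of the original text, with an arbitrary sampling period) confirms the three cases: left-half-plane points map inside the unit circle, jω-axis points map onto it, and right-half-plane points map outside it.

```python
import numpy as np

T = 1e-3                               # arbitrary sampling period
ws = 2 * np.pi / T                     # sampling radian frequency

for sigma, label in [(-500.0, "LHP"), (0.0, "jw axis"), (500.0, "RHP")]:
    s = sigma + 1j * np.linspace(0, ws, 9)   # a few points with 0 <= w <= ws
    z = np.exp(s * T)                        # the mapping z = e^{sT}, (7.65)
    print(label, np.round(np.abs(z), 3))     # |z| < 1, == 1, > 1 respectively
```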

7.6 The Inverse Z Transform


The Inverse Z transform enables us to extract $f[n]$ from $F(z)$. It can be found by any of the following three methods:
a. Partial Fraction Expansion
b. The Inversion Integral
c. Long Division of polynomials


We will discuss these methods, and we will illustrate with examples.


a. Partial Fraction Expansion
This method is very similar to the partial fraction expansion method that we used in finding the
Inverse Laplace transform, that is, we expand F(z) into a summation of terms whose inverse is
known. These terms have the form
$$k, \quad \frac{r_1 z}{z - p_1}, \quad \frac{r_2 z}{(z - p_1)^2}, \quad \frac{r_3 z}{z - p_2}, \ \dots \qquad (7.74)$$

where $k$ is a constant, and $r_i$ and $p_i$ represent the residues and poles respectively; these can be real
or complex.
Before we expand $F(z)$ into partial fractions, we must express it as a proper rational function. This
is done by expanding $F(z)/z$ instead of $F(z)$, that is, we expand it as

$$\frac{F(z)}{z} = \frac{k}{z} + \frac{r_1}{z - p_1} + \frac{r_2}{z - p_2} + \dots \qquad (7.75)$$

and after the residues are found from

$$r_k = \lim_{z \to p_k}(z - p_k)\frac{F(z)}{z} = \left[(z - p_k)\frac{F(z)}{z}\right]_{z = p_k} \qquad (7.76)$$

we rewrite (7.75) as

$$F(z) = k + \frac{r_1 z}{z - p_1} + \frac{r_2 z}{z - p_2} + \dots \qquad (7.77)$$

Example 7.9
Use the partial fraction expansion method to compute the Inverse Z transform of
$$F(z) = \frac{1}{(1 - 0.5z^{-1})(1 - 0.75z^{-1})(1 - z^{-1})} \qquad (7.78)$$

Solution:
We multiply both numerator and denominator by $z^3$ to eliminate the negative powers of $z$. Then,

$$F(z) = \frac{z^3}{(z - 0.5)(z - 0.75)(z - 1)}$$

Next, we form $F(z)/z$, and we expand in partial fractions as


$$\frac{F(z)}{z} = \frac{z^2}{(z - 0.5)(z - 0.75)(z - 1)} = \frac{r_1}{z - 0.5} + \frac{r_2}{z - 0.75} + \frac{r_3}{z - 1}$$

The residues are

$$r_1 = \frac{z^2}{(z - 0.75)(z - 1)}\bigg|_{z = 0.5} = \frac{(0.5)^2}{(0.5 - 0.75)(0.5 - 1)} = 2$$

$$r_2 = \frac{z^2}{(z - 0.5)(z - 1)}\bigg|_{z = 0.75} = \frac{(0.75)^2}{(0.75 - 0.5)(0.75 - 1)} = -9$$

$$r_3 = \frac{z^2}{(z - 0.5)(z - 0.75)}\bigg|_{z = 1} = \frac{1^2}{(1 - 0.5)(1 - 0.75)} = 8$$

Then,

$$\frac{F(z)}{z} = \frac{z^2}{(z - 0.5)(z - 0.75)(z - 1)} = \frac{2}{z - 0.5} + \frac{-9}{z - 0.75} + \frac{8}{z - 1}$$

and multiplication of both sides by $z$ yields

$$F(z) = \frac{z^3}{(z - 0.5)(z - 0.75)(z - 1)} = \frac{2z}{z - 0.5} + \frac{-9z}{z - 0.75} + \frac{8z}{z - 1} \qquad (7.79)$$

To find the Inverse Z transform of (7.77), we recall that

$$a^n \Leftrightarrow \frac{z}{z - a}$$

for $|z| > |a|$. Therefore, the discrete time sequence is

$$f[n] = 2(0.5)^n - 9(0.75)^n + 8 \qquad (7.80)$$

Example 7.10
Use the partial fraction expansion method to compute the Inverse Z transform of
$$F(z) = \frac{12z}{(z + 1)(z - 1)^2} \qquad (7.81)$$
Solution:
Division of both sides by $z$ and partial fraction expansion yields


$$\frac{F(z)}{z} = \frac{12}{(z + 1)(z - 1)^2} = \frac{r_1}{z + 1} + \frac{r_2}{(z - 1)^2} + \frac{r_3}{z - 1}$$

The residues are

$$r_1 = \frac{12}{(z - 1)^2}\bigg|_{z = -1} = \frac{12}{(-1 - 1)^2} = 3$$

$$r_2 = \frac{12}{z + 1}\bigg|_{z = 1} = \frac{12}{1 + 1} = 6$$

$$r_3 = \frac{d}{dz}\left(\frac{12}{z + 1}\right)\bigg|_{z = 1} = \frac{-12}{(z + 1)^2}\bigg|_{z = 1} = -3$$

Then,

$$\frac{F(z)}{z} = \frac{12}{(z + 1)(z - 1)^2} = \frac{3}{z + 1} + \frac{6}{(z - 1)^2} + \frac{-3}{z - 1}$$

or

$$F(z) = \frac{12z}{(z + 1)(z - 1)^2} = \frac{3z}{z - (-1)} + \frac{6z}{(z - 1)^2} + \frac{-3z}{z - 1}$$

Now, we recall that

$$u_0[n] \Leftrightarrow \frac{z}{z - 1} \quad \text{and} \quad nu_0[n] \Leftrightarrow \frac{z}{(z - 1)^2}$$

for $|z| > 1$. Therefore, the discrete time sequence is

$$f[n] = 3(-1)^n + 6n - 3 \qquad (7.82)$$


Example 7.11
Use the partial fraction expansion method to compute the Inverse Z transform of
$$F(z) = \frac{z + 1}{(z - 1)(z^2 + 2z + 2)} \qquad (7.83)$$


Solution:
Dividing both sides by z and performing partial fraction expansion, we get
$$\frac{F(z)}{z} = \frac{z + 1}{z(z - 1)(z^2 + 2z + 2)} = \frac{r_1}{z} + \frac{r_2}{z - 1} + \frac{r_3}{z + 1 - j} + \frac{r_4}{z + 1 + j} \qquad (7.84)$$

The residues are

$$r_1 = \frac{z + 1}{(z - 1)(z^2 + 2z + 2)}\bigg|_{z = 0} = \frac{1}{-2} = -0.5$$

$$r_2 = \frac{z + 1}{z(z^2 + 2z + 2)}\bigg|_{z = 1} = \frac{2}{5} = 0.4$$

$$r_3 = \frac{z + 1}{z(z - 1)(z + 1 + j)}\bigg|_{z = -1 + j} = \frac{j}{(-1 + j)(-2 + j)(j2)} = 0.05 + j0.15$$

$$r_4 = r_3^{*} = 0.05 - j0.15$$

Then,

$$\frac{F(z)}{z} = \frac{z + 1}{z(z - 1)(z^2 + 2z + 2)} = \frac{-0.5}{z} + \frac{0.4}{z - 1} + \frac{0.05 + j0.15}{z + 1 - j} + \frac{0.05 - j0.15}{z + 1 + j}$$

or

$$F(z) = -0.5 + \frac{0.4z}{z - 1} + \frac{(0.05 + j0.15)z}{z - (-1 + j)} + \frac{(0.05 - j0.15)z}{z - (-1 - j)}
      = -0.5 + \frac{0.4z}{z - 1} + \frac{(0.05 + j0.15)z}{z - \sqrt{2}\,e^{j135^\circ}} + \frac{(0.05 - j0.15)z}{z - \sqrt{2}\,e^{-j135^\circ}}$$


Recalling that

$$\delta[n] \Leftrightarrow 1 \quad \text{and} \quad a^n u_0[n] \Leftrightarrow \frac{z}{z - a}$$

for $|z| > |a|$, we find that the discrete time sequence is

$$f[n] = -0.5\delta[n] + 0.4(1)^n + (0.05 + j0.15)\left(\sqrt{2}\,e^{j135^\circ}\right)^n + (0.05 - j0.15)\left(\sqrt{2}\,e^{-j135^\circ}\right)^n$$

$$= -0.5\delta[n] + 0.4 + 0.05\left(\sqrt{2}\right)^n\left(e^{jn135^\circ} + e^{-jn135^\circ}\right) + j0.15\left(\sqrt{2}\right)^n\left(e^{jn135^\circ} - e^{-jn135^\circ}\right)$$

or

$$f[n] = -0.5\delta[n] + 0.4 + \frac{\left(\sqrt{2}\right)^n}{10}\cos n135^\circ - \frac{3\left(\sqrt{2}\right)^n}{10}\sin n135^\circ \qquad (7.85)$$
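A quick numerical comparison (an illustrative sketch, not part of the original text) of the complex conjugate pair form and the real form (7.85) over the first few samples:

```python
import numpy as np

n = np.arange(10)
p = -1 + 1j                                  # pole sqrt(2) * exp(j*135 deg)
r = 0.05 + 0.15j                             # residue at that pole

complex_form = -0.5*(n == 0) + 0.4 + r*p**n + np.conj(r)*np.conj(p)**n
real_form = (-0.5*(n == 0) + 0.4
             + (np.sqrt(2)**n/10)*np.cos(np.deg2rad(135*n))
             - (3*np.sqrt(2)**n/10)*np.sin(np.deg2rad(135*n)))

print(np.allclose(complex_form.real, real_form), np.allclose(complex_form.imag, 0))
```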

b. The Inversion Integral*


The inversion integral states that

$$f[n] = \frac{1}{j2\pi}\oint_C F(z)z^{n-1}\,dz \qquad (7.86)$$

where $C$ is a closed curve that encloses all poles of the integrand, and by Cauchy's residue theorem,
this integral can be expressed as

$$f[n] = \sum_k \text{Res}\left[F(z)z^{n-1}\right]_{z = p_k} \qquad (7.87)$$

where $p_k$ represents a pole of $F(z)z^{n-1}$ and $\text{Res}\left[F(z)z^{n-1}\right]$ represents a residue at $z = p_k$.

Example 7.12
Use the inversion integral method to find the Inverse Z transform of
$$F(z) = \frac{1 + 2z^{-1} + z^{-3}}{(1 - z^{-1})(1 - 0.75z^{-1})} \qquad (7.88)$$

Solution:

Multiplication of the numerator and denominator by $z^3$ yields

$$F(z) = \frac{z^3 + 2z^2 + 1}{z(z - 1)(z - 0.75)} \qquad (7.89)$$

and by application of (7.87),

$$f[n] = \sum_k \text{Res}\left[\frac{(z^3 + 2z^2 + 1)z^{n-1}}{z(z - 1)(z - 0.75)}\right]_{z = p_k}
      = \sum_k \text{Res}\left[\frac{(z^3 + 2z^2 + 1)z^{n-2}}{(z - 1)(z - 0.75)}\right]_{z = p_k} \qquad (7.90)$$

We are interested in the values $f[0], f[1], f[2], \dots$, that is, values of $n = 0, 1, 2, \dots$

For $n = 0$, (7.90) becomes

$$f[0] = \sum_k \text{Res}\left[\frac{z^3 + 2z^2 + 1}{z^2(z - 1)(z - 0.75)}\right]_{z = p_k}
= \text{Res}\left[\frac{z^3 + 2z^2 + 1}{z^2(z - 1)(z - 0.75)}\right]_{z = 0}
+ \text{Res}\left[\frac{z^3 + 2z^2 + 1}{z^2(z - 1)(z - 0.75)}\right]_{z = 1}
+ \text{Res}\left[\frac{z^3 + 2z^2 + 1}{z^2(z - 1)(z - 0.75)}\right]_{z = 0.75} \qquad (7.91)$$

The first term on the right side of (7.91) has a pole of order 2 at $z = 0$; therefore, we must evaluate
the first derivative of

$$\frac{z^3 + 2z^2 + 1}{(z - 1)(z - 0.75)}$$

at $z = 0$. Thus, for $n = 0$, (7.91) reduces to

$$f[0] = \frac{d}{dz}\left[\frac{z^3 + 2z^2 + 1}{(z - 1)(z - 0.75)}\right]_{z = 0}
+ \left[\frac{z^3 + 2z^2 + 1}{z^2(z - 0.75)}\right]_{z = 1}
+ \left[\frac{z^3 + 2z^2 + 1}{z^2(z - 1)}\right]_{z = 0.75}
= \frac{28}{9} + 16 - \frac{163}{9} = 1 \qquad (7.92)$$

For $n = 1$, (7.90) becomes


$$f[1] = \sum_k \text{Res}\left[\frac{z^3 + 2z^2 + 1}{z(z - 1)(z - 0.75)}\right]_{z = p_k}
= \text{Res}\left[\frac{z^3 + 2z^2 + 1}{z(z - 1)(z - 0.75)}\right]_{z = 0}
+ \text{Res}\left[\frac{z^3 + 2z^2 + 1}{z(z - 1)(z - 0.75)}\right]_{z = 1}
+ \text{Res}\left[\frac{z^3 + 2z^2 + 1}{z(z - 1)(z - 0.75)}\right]_{z = 0.75} \qquad (7.93)$$

$$= \left[\frac{z^3 + 2z^2 + 1}{(z - 1)(z - 0.75)}\right]_{z = 0}
+ \left[\frac{z^3 + 2z^2 + 1}{z(z - 0.75)}\right]_{z = 1}
+ \left[\frac{z^3 + 2z^2 + 1}{z(z - 1)}\right]_{z = 0.75}
= \frac{4}{3} + 16 - \frac{163}{12} = \frac{15}{4}$$

For n  2 there are no poles at z = 0 , that is, the only poles are at z = 1 and z = 0.75 . Therefore,
3 2
(z + + 1)z n – 2
2z
----------------------------------------------
fn =
k Res (z – 1)(z – 0.75)
z = pk
3 2 3 2
(z + 2z + 1)z n–2
(z 2z + 1)z n – 2 (7.94)
+
= Res ---------------------------------------------- + Res ----------------------------------------------
(z – 1)(z – 0.75) (z – 1)(z – 0.75)
z=1 z = 0.75
3 2 3 2
(z + 2z + 1)z n–2
(z + 2z + 1)z n –2

= ---------------------------------------------- + ----------------------------------------------
(z – 0.75) (z – 1)
z=1 z = 0.75

for n  2 .
From (7.74), we observe that for all values of n  2 , the exponential factor zn – 2 is always unity for
z = 1 , but varies for values z  1 . Then,
(z3 + 2z2 + 1) (z3 + 2z2 + 1)zn – 2
fn = --------------------------------- + --------------------------------------------
(z – 0.75) z=1
(z – 1) z = 0.75

4
= --------- 0.75 + 2(0.75) + 1(0.75)
3 2 n–2
(7.95)
+ ------------------------------------------------------------------------------
0.25 –0.25
163
= (163  64)(0.75) n
------- n

16 + -----------------------------------------= 16 – (0.75)
(–0.25)(0.75)2 7

for n  2 .


We can express f n for all n  0 as


28 4 163
f n = ---- n + -- n – 1 + 16 – --- - (0.75)n (7.96)
7 3 7

where the coefficients of n and n – 1 are the residues that were found in (7.92) and (7.93) for
n = 0 and n = 1 respectively at z = 0 . The coefficient 28  7 is multiplied by n to emphasize
that this value exists only for n = 0 and coefficient 4  3 is multiplied by n – 1 to emphasize that
this value exists only for n = 1 .

Example 7.13
Use the inversion integral method to find the Inverse Z transform of
$$F(z) = \frac{1}{(1 - z^{-1})(1 - 0.75z^{-1})} \qquad (7.97)$$
Solution:

Multiplication of the numerator and denominator by $z^2$ yields


$$\frac{z^2}{(z - 1)(z - 0.75)} \qquad (7.98)$$

This function has no poles at $z = 0$. The poles are at $z = 1$ and $z = 0.75$. Then by (7.87),

$$f[n] = \sum_k \text{Res}\left[\frac{z^2 z^{n-1}}{(z - 1)(z - 0.75)}\right]_{z = p_k}
      = \sum_k \text{Res}\left[\frac{z^{n+1}}{(z - 1)(z - 0.75)}\right]_{z = p_k}$$

$$= \text{Res}\left[\frac{z^{n+1}}{(z - 1)(z - 0.75)}\right]_{z = 1}
+ \text{Res}\left[\frac{z^{n+1}}{(z - 1)(z - 0.75)}\right]_{z = 0.75} \qquad (7.99)$$

$$= \left[\frac{z^{n+1}}{z - 0.75}\right]_{z = 1} + \left[\frac{z^{n+1}}{z - 1}\right]_{z = 0.75}
= \frac{1^{n+1}}{0.25} + \frac{(0.75)^{n+1}}{-0.25} = 4 - 3(0.75)^n$$
c. Long Division of Polynomials
To apply this method, F(z) must be a rational function, and the numerator and denominator must be
polynomials arranged in descending powers of z .

Example 7.14
Use the long division method to determine f n for n = 0 1 and 2 , given that
–1 –2 –3
1 + z + 2z + 3z (7.100)
F(z) = --------------------------------------------------------------------------------------------
(1 – 0.25z–1 )(1 – 0.5z–1 )(1 – 0.75z–1 )
Solution:

First, we multiply numerator and denominator by z3 , expand the denominator to a polynomial, and
arrange the numerator and denominator polynomials in descending powers of z . Then,
z3 + z2 + 2z + 3
F(z) = -------------------------------------------------------------------
(z – 0.25)(z – 0.5)(z – 0.75)

$$F(z) = \frac{z^3 + z^2 + 2z + 3}{z^3 - \frac{3}{2}z^2 + \frac{11}{16}z - \frac{3}{32}} \qquad (7.101)$$

Now we perform the long division as shown in Figure 7.10: dividing the dividend $z^3 + z^2 + 2z + 3$ by the
divisor $z^3 - \frac{3}{2}z^2 + \frac{11}{16}z - \frac{3}{32}$, the first quotient term is $1$; the first remainder has leading term
$\frac{5}{2}z^2$, which gives the next quotient term $\frac{5}{2}z^{-1}$; and the second remainder has leading term
$\frac{81}{16}z$, which gives the quotient term $\frac{81}{16}z^{-2}$, and so on.

Figure 7.10. Long division for the polynomials of Example 7.14
We find that the quotient $Q(z)$ is

$$Q(z) = 1 + \frac{5}{2}z^{-1} + \frac{81}{16}z^{-2} + \dots \qquad (7.102)$$

By definition of the Z transform,

$$F(z) = \sum_{n=0}^{\infty} f[n]z^{-n} = f[0] + f[1]z^{-1} + f[2]z^{-2} + \dots \qquad (7.103)$$

Equating like terms in (7.102) and (7.103), we get

$$f[0] = 1, \quad f[1] = 5/2, \quad f[2] = 81/16 \qquad (7.104)$$

7.7 The Transfer Function of Discrete Time Systems


The discrete time system of Figure 7.12 can be described by the linear difference equation

Figure 7.12. Block diagram for discrete time system: input $x[n]$, Linear Discrete Time System, output $y[n]$

$$y[n] + b_1 y[n-1] + b_2 y[n-2] + \dots + b_k y[n-k] = a_0 x[n] + a_1 x[n-1] + a_2 x[n-2] + \dots + a_k x[n-k] \qquad (7.105)$$

TABLE 7.4 Methods of Evaluation of the Inverse Z transform

Partial Fraction Expansion
  Advantages: Most familiar; can use the MATLAB residue function.
  Disadvantages: Requires that F(z) is a proper rational function.

Inversion Integral
  Advantages: Can be used whether F(z) is a rational function or not.
  Disadvantages: Requires familiarity with the Residues theorem.

Long Division
  Advantages: Practical when only a small sequence of numbers is desired; useful when the Inverse Z transform has no closed form solution; can use the MATLAB dimpulse function for a large sequence of numbers.
  Disadvantages: Requires that F(z) is a proper rational function; division may be endless.

where $a_i$ and $b_i$ are constant coefficients. In a compact form,

$$y[n] = \sum_{i=0}^{k} a_i x[n-i] - \sum_{i=1}^{k} b_i y[n-i] \qquad (7.106)$$

Assuming that all initial conditions are zero, taking the Z transform of both sides of (7.106), and
using the Z transform pair

$$f[n-m] \Leftrightarrow z^{-m}F(z)$$

we get

$$Y(z) + b_1 z^{-1}Y(z) + b_2 z^{-2}Y(z) + \dots + b_k z^{-k}Y(z)
= a_0 X(z) + a_1 z^{-1}X(z) + a_2 z^{-2}X(z) + \dots + a_k z^{-k}X(z) \qquad (7.107)$$

$$\left(1 + b_1 z^{-1} + b_2 z^{-2} + \dots + b_k z^{-k}\right)Y(z)
= \left(a_0 + a_1 z^{-1} + a_2 z^{-2} + \dots + a_k z^{-k}\right)X(z) \qquad (7.108)$$

$$Y(z) = \frac{a_0 + a_1 z^{-1} + a_2 z^{-2} + \dots + a_k z^{-k}}{1 + b_1 z^{-1} + b_2 z^{-2} + \dots + b_k z^{-k}}\,X(z) \qquad (7.109)$$


We define the discrete time system transfer function $H(z)$ as

$$H(z) = \frac{N(z)}{D(z)} = \frac{a_0 + a_1 z^{-1} + a_2 z^{-2} + \dots + a_k z^{-k}}{1 + b_1 z^{-1} + b_2 z^{-2} + \dots + b_k z^{-k}} \qquad (7.110)$$

and by substitution of (7.110) into (7.109), we get

$$Y(z) = H(z)X(z) \qquad (7.111)$$

The discrete impulse response $h[n]$ is the response to the input $x[n] = \delta[n]$, and since

$$\mathcal{Z}\{\delta[n]\} = \sum_{n=0}^{\infty} \delta[n]z^{-n} = 1$$

we can find the discrete time impulse response $h[n]$ by taking the Inverse Z transform of the discrete
transfer function $H(z)$, that is,

$$h[n] = \mathcal{Z}^{-1}\{H(z)\} \qquad (7.112)$$

Example 7.15
The difference equation describing the input-output relationship of a discrete time system with zero
initial conditions, is
yn – 0.5yn – 1 + 0.125yn – 2 = xn + xn – 1 (7.113)
Compute:
a. The transfer function H(z)
b. The discrete time impulse response hn
c. The response when the input is the discrete unit step u0n

Solution:
a. Taking the Z transform of both sides of (7.113), we get

$$Y(z) - 0.5z^{-1}Y(z) + 0.125z^{-2}Y(z) = X(z) + z^{-1}X(z)$$

and thus

$$H(z) = \frac{Y(z)}{X(z)} = \frac{1 + z^{-1}}{1 - 0.5z^{-1} + 0.125z^{-2}} = \frac{z^2 + z}{z^2 - 0.5z + 0.125} \qquad (7.114)$$


b. To obtain the discrete time impulse response hn , we need to compute the Inverse Z transform
of (7.114). We first divide both sides by z and we get:

$$\frac{H(z)}{z} = \frac{z + 1}{z^2 - 0.5z + 0.125} \qquad (7.115)$$

and thus,

$$\frac{H(z)}{z} = \frac{0.5 - j2.5}{z - 0.25 - j0.25} + \frac{0.5 + j2.5}{z - 0.25 + j0.25}$$

or

$$H(z) = \frac{(0.5 - j2.5)z}{z - (0.25 + j0.25)} + \frac{(0.5 + j2.5)z}{z - (0.25 - j0.25)}
      = \frac{(0.5 - j2.5)z}{z - 0.25\sqrt{2}\,e^{j45^\circ}} + \frac{(0.5 + j2.5)z}{z - 0.25\sqrt{2}\,e^{-j45^\circ}} \qquad (7.116)$$

Recalling that

$$a^n u_0[n] \Leftrightarrow \frac{z}{z - a}$$

for $|z| > |a|$, the discrete impulse response sequence $h[n]$ is

$$h[n] = (0.5 - j2.5)\left(0.25\sqrt{2}\,e^{j45^\circ}\right)^n + (0.5 + j2.5)\left(0.25\sqrt{2}\,e^{-j45^\circ}\right)^n$$

$$= 0.5\left(0.25\sqrt{2}\right)^n\left(e^{jn45^\circ} + e^{-jn45^\circ}\right) - j2.5\left(0.25\sqrt{2}\right)^n\left(e^{jn45^\circ} - e^{-jn45^\circ}\right)$$

or


hn = -----2- n cosn45 + 5 sin n45 (7.117)


(  )
 4 
c. From $Y(z) = H(z)X(z)$, the transform pair $u_0[n] \Leftrightarrow \dfrac{z}{z-1}$, and using the result of part (a), we get

$$Y(z) = \frac{z^2 + z}{z^2 - 0.5z + 0.125}\cdot\frac{z}{z - 1} = \frac{z(z^2 + z)}{(z^2 - 0.5z + 0.125)(z - 1)}$$

or

$$\frac{Y(z)}{z} = \frac{z^2 + z}{(z^2 - 0.5z + 0.125)(z - 1)} \qquad (7.118)$$

Then,

$$\frac{Y(z)}{z} = \frac{z^2 + z}{z^3 - (3/2)z^2 + (5/8)z - 1/8} \qquad (7.119)$$

Now, we compute the residues and poles.

With these values, we express (7.119) as

$$\frac{Y(z)}{z} = \frac{z^2 + z}{z^3 - (3/2)z^2 + (5/8)z - 1/8}
= \frac{3.2}{z - 1} + \frac{-1.1 + j0.3}{z - 0.25 - j0.25} + \frac{-1.1 - j0.3}{z - 0.25 + j0.25} \qquad (7.120)$$

or

$$Y(z) = \frac{3.2z}{z - 1} + \frac{(-1.1 + j0.3)z}{z - 0.25 - j0.25} + \frac{(-1.1 - j0.3)z}{z - 0.25 + j0.25}
      = \frac{3.2z}{z - 1} + \frac{(-1.1 + j0.3)z}{z - 0.25\sqrt{2}\,e^{j45^\circ}} + \frac{(-1.1 - j0.3)z}{z - 0.25\sqrt{2}\,e^{-j45^\circ}} \qquad (7.121)$$

Recalling that

$$a^n u_0[n] \Leftrightarrow \frac{z}{z - a}$$

for $|z| > |a|$, we find that the discrete output response sequence is

$$y[n] = 3.2 + (-1.1 + j0.3)\left(0.25\sqrt{2}\,e^{j45^\circ}\right)^n + (-1.1 - j0.3)\left(0.25\sqrt{2}\,e^{-j45^\circ}\right)^n$$

$$= 3.2 - 1.1\left(0.25\sqrt{2}\right)^n\left(e^{jn45^\circ} + e^{-jn45^\circ}\right) + j0.3\left(0.25\sqrt{2}\right)^n\left(e^{jn45^\circ} - e^{-jn45^\circ}\right)$$

or

$$y[n] = 3.2 - 2.2\left(\frac{\sqrt{2}}{4}\right)^n\cos n45^\circ - 0.6\left(\frac{\sqrt{2}}{4}\right)^n\sin n45^\circ
      = 3.2 - \left(\frac{\sqrt{2}}{4}\right)^n\left(2.2\cos n45^\circ + 0.6\sin n45^\circ\right) \qquad (7.122)$$

The plots for the discrete time sequences $h[n]$ and $y[n]$ are shown in Figure 7.13.
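Both closed forms can be checked by iterating the difference equation (7.113) directly (an illustrative sketch, not part of the original text):

```python
import numpy as np

def simulate(x):
    """Iterate y[n] = 0.5 y[n-1] - 0.125 y[n-2] + x[n] + x[n-1], zero initial conditions."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = (0.5*y[n-1] if n >= 1 else 0) - (0.125*y[n-2] if n >= 2 else 0) \
               + x[n] + (x[n-1] if n >= 1 else 0)
    return y

n = np.arange(20)
theta = np.deg2rad(45*n)
rho = (np.sqrt(2)/4)**n

impulse = (n == 0).astype(float)
h_closed = rho*(np.cos(theta) + 5*np.sin(theta))                 # (7.117)
y_closed = 3.2 - rho*(2.2*np.cos(theta) + 0.6*np.sin(theta))     # (7.122)

print(np.allclose(simulate(impulse), h_closed))                  # expected: True
print(np.allclose(simulate(np.ones(len(n))), y_closed))          # expected: True
```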

Chapter-8 Discrete Fourier Transform 2020
The discrete Fourier transform (DFT) converts a finite sequence of equally-
spaced samples of a function into a same-length sequence of equally-spaced samples of
the discrete-time Fourier transform (DTFT), which is a complex-valued function of
frequency. The interval at which the DTFT is sampled is the reciprocal of the duration of
the input sequence. An inverse DFT is a Fourier series, using the DTFT samples as
coefficients of complex sinusoids at the corresponding DTFT frequencies. It has the
same sample-values as the original input sequence. The DFT is therefore said to be
a frequency domain representation of the original input sequence. If the original
sequence spans all the non-zero values of a function, its DTFT is continuous (and
periodic), and the DFT provides discrete samples of one cycle. If the original sequence is
one cycle of a periodic function, the DFT provides all the non-zero values of one DTFT
cycle.
Definition of DFT
From the definition of the Z-Transform we have

$$F(z) = \sum_{n=0}^{\infty} f[n]z^{-n} \qquad (1)$$

If $z = e^{sT} = e^{j\omega T}$, then $F(z)$ can be rewritten as

$$F(e^{j\omega T}) = \sum_{n=0}^{\infty} f[n]e^{-jn\omega T} \qquad (2)$$

The N-point DFT of $x[n]$ is defined as

$$D\{x[n]\} = X(m) = \sum_{n=0}^{N-1} x[n]e^{-\frac{j2\pi mn}{N}} \qquad (3)$$

where N represents the number of points that are equally spaced in the interval 0 to $2\pi$; the
corresponding discrete radian frequencies are

$$\omega = \left(\frac{2\pi}{NT}\right)m, \quad m = 0, 1, 2, \dots, N-1$$

The inverse DFT is defined as

$$D^{-1}\{X[m]\} = x[n] = \frac{1}{N}\sum_{m=0}^{N-1} X[m]e^{\frac{j2\pi mn}{N}} \qquad (4)$$

for $n = 0, 1, 2, \dots, N-1$.
Let us consider the representation of the DFT using the twiddle factor, defined as

$$W_N = e^{-\frac{j2\pi}{N}} \quad \text{and} \quad W_N^{-1} = e^{\frac{j2\pi}{N}}$$

Using the above values in equations (3) and (4), we obtain

$$D\{x[n]\} = X(m) = \sum_{n=0}^{N-1} x[n]W_N^{mn} \qquad (5)$$

$$D^{-1}\{X[m]\} = x[n] = \frac{1}{N}\sum_{m=0}^{N-1} X[m]W_N^{-mn} \qquad (6)$$

In general, the DFT $X[m]$ is complex and thus we can express it as

$$X[m] = \text{Re}\{X[m]\} + j\,\text{Im}\{X[m]\} \qquad (7)$$

for $m = 0, 1, 2, \dots, N-1$.
By Euler's identity, the exponential can be written as

$$e^{-\frac{j2\pi mn}{N}} = \cos\left(\frac{2\pi mn}{N}\right) - j\sin\left(\frac{2\pi mn}{N}\right) \qquad (8)$$

Substituting equation (8) into equation (3), it becomes

$$X(m) = \sum_{n=0}^{N-1} x[n]\cos\left(\frac{2\pi mn}{N}\right) - j\sum_{n=0}^{N-1} x[n]\sin\left(\frac{2\pi mn}{N}\right) \qquad (9)$$

The $n = 0$ term of equation (9) contributes $x[0]$ (since $\cos 0 = 1$ and $\sin 0 = 0$); therefore,

$$\text{Re}\{X[m]\} = x[0] + \sum_{n=1}^{N-1} x[n]\cos\left(\frac{2\pi mn}{N}\right) \qquad (10)$$

for $m = 0, 1, 2, \dots, N-1$.
And the imaginary part of the DFT becomes

$$\text{Im}\{X[m]\} = -\sum_{n=1}^{N-1} x[n]\sin\left(\frac{2\pi mn}{N}\right) \qquad (11)$$

for $m = 0, 1, 2, \dots, N-1$.
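A direct implementation of definition (3)/(5) takes only a few lines; the sketch below (an illustration, not part of the original text) computes the N-point DFT with the twiddle factor $W_N$ and compares it against NumPy's FFT for a random sequence:

```python
import numpy as np

def dft(x):
    """N-point DFT computed directly from definition (3)/(5)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j*np.pi/N)                 # twiddle factor W_N
    return np.array([np.sum(x * W**(m*n)) for m in range(N)])

x = np.random.default_rng(0).standard_normal(8)
print(np.allclose(dft(x), np.fft.fft(x)))   # expected: True
```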
#Problem1:
A discrete time signal is defined by the following sequence. Compute the frequency
components 𝑋 [𝑚] if 𝑥 [0] = 1, 𝑥[1] = 2, 𝑥[2] = 2, 𝑥[3] = 1 and 𝑥[𝑛] = 0 for all other ‘n’
Solution: There are four discrete values of 𝑥[𝑛]. So we will use a 4-point DFT for this
example, N=4 with n=0,1,2, and 3.
$$\text{Re}\{X[m]\} = x[0] + \sum_{n=1}^{3} x[n]\cos\left(\frac{2\pi mn}{4}\right)$$

$$= x[0] + x[1]\cos\left(\frac{2\pi m(1)}{4}\right) + x[2]\cos\left(\frac{2\pi m(2)}{4}\right) + x[3]\cos\left(\frac{2\pi m(3)}{4}\right)$$

$$\text{Re}\{X[m]\} = 1 + 2\cos\left(\frac{\pi m}{2}\right) + 2\cos(\pi m) + \cos\left(\frac{3\pi m}{2}\right)$$

Now for m = 0, 1, 2, and 3 we evaluate the above relation and obtain

For m = 0:  Re{X[0]} = 1 + (2)(1) + (2)(1) + (1)(1) = 6
For m = 1:  Re{X[1]} = 1 + (2)(0) + (2)(−1) + (1)(0) = −1
For m = 2:  Re{X[2]} = 1 + (2)(−1) + (2)(1) + (1)(−1) = 0
For m = 3:  Re{X[3]} = 1 + (2)(0) + (2)(−1) + (1)(0) = −1
Similarly, for the imaginary part,

$$\text{Im}\{X[m]\} = -\sum_{n=1}^{3} x[n]\sin\left(\frac{2\pi mn}{4}\right)
= -x[1]\sin\left(\frac{2\pi m(1)}{4}\right) - x[2]\sin\left(\frac{2\pi m(2)}{4}\right) - x[3]\sin\left(\frac{2\pi m(3)}{4}\right)$$

$$\text{Im}\{X[m]\} = -2\sin\left(\frac{\pi m}{2}\right) - 2\sin(\pi m) - \sin\left(\frac{3\pi m}{2}\right)$$

Now for m = 0, 1, 2, and 3 we evaluate the above relation and obtain

For m = 0:  Im{X[0]} = −(2)(0) − (2)(0) − (1)(0) = 0
For m = 1:  Im{X[1]} = −(2)(1) − (2)(0) − (1)(−1) = −1
For m = 2:  Im{X[2]} = −(2)(0) − (2)(0) − (1)(0) = 0
For m = 3:  Im{X[3]} = −(2)(−1) − (2)(0) − (1)(1) = 1

As we know that X[m] = Re{X[m]} + jIm{X[m]}, we obtain

X[0] = 6 + j0 = 6,   X[1] = −1 − j,   X[2] = 0 + j0 = 0,   X[3] = −1 + j
#Problem2:
Use the results of Problem 1 as X[m], compute the IDFT, and find the sequence x[n].
Solution: The Inverse DFT is given by the following relation

$$D^{-1}\{X[m]\} = x[n] = \frac{1}{N}\sum_{m=0}^{N-1} X[m]e^{\frac{j2\pi mn}{N}}$$

Since this is a 4-point DFT, we can expand the relation as

$$x[n] = \frac{1}{4}\sum_{m=0}^{3} X[m]e^{\frac{j2\pi mn}{4}} = \frac{1}{4}\sum_{m=0}^{3} X[m]e^{\frac{j\pi mn}{2}}$$

$$x[n] = \frac{1}{4}\left[X[0] + X[1]e^{\frac{j\pi n}{2}} + X[2]e^{j\pi n} + X[3]e^{\frac{j3\pi n}{2}}\right]$$

The discrete time samples $x[n]$ for n = 0, 1, 2, and 3 are as follows:

$$x[0] = \frac{1}{4}\{X[0] + X[1] + X[2] + X[3]\} = \frac{1}{4}\{6 + (-1 - j) + 0 + (-1 + j)\} = 1$$

$$x[1] = \frac{1}{4}\{X[0] + X[1](j) + X[2](-1) + X[3](-j)\} = \frac{1}{4}\{6 + (-1 - j)(j) + 0 + (-1 + j)(-j)\} = 2$$

$$x[2] = \frac{1}{4}\{X[0] + X[1](-1) + X[2](1) + X[3](-1)\} = \frac{1}{4}\{6 + (-1 - j)(-1) + 0 + (-1 + j)(-1)\} = 2$$

$$x[3] = \frac{1}{4}\{X[0] + X[1](-j) + X[2](-1) + X[3](j)\} = \frac{1}{4}\{6 + (-1 - j)(-j) + 0 + (-1 + j)(j)\} = 1$$
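Both worked problems can be confirmed with NumPy (an illustrative check, not part of the original text):

```python
import numpy as np

x = np.array([1, 2, 2, 1])
X = np.fft.fft(x)
print(np.round(X, 10))                       # expected: [ 6.+0.j, -1.-1.j, 0.+0.j, -1.+1.j]  (Problem 1)
print(np.round(np.fft.ifft(X).real, 10))     # expected: [1. 2. 2. 1.]                        (Problem 2)
```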

8.1 Even and Odd properties:


The discrete time and frequency functions are defined as EVEN (or) ODD in accordance
with the following relations:

➢ Even Time Function 𝑓 [𝑁 − 𝑛] = 𝑓[𝑛]


➢ Odd Time Function 𝑓 [𝑁 − 𝑛] = −𝑓[𝑛]
➢ Even Frequency Function 𝐹 [𝑁 − 𝑚] = 𝐹[𝑚]
➢ Odd Frequency Function 𝐹 [𝑁 − 𝑚] = −𝐹 [𝑚]

8.2 Properties and Theorems of DFT:

1. Linearity Property:

The linearity property states that if $f_1[n], f_2[n], f_3[n], \dots, f_n[n]$ have DFTs
$F_1[m], F_2[m], F_3[m], \dots, F_n[m]$ respectively and $c_1, c_2, c_3, \dots, c_n$ are arbitrary constants, then

$$c_1 f_1[n] + c_2 f_2[n] + \dots + c_n f_n[n] \Leftrightarrow c_1 F_1[m] + c_2 F_2[m] + \dots + c_n F_n[m]$$

2. Time Shifting Property

$$x[n-k] \Leftrightarrow W_N^{km}X[m]$$

Proof: We know that the definition of the DFT is given by

$$D\{x[n]\} = X(m) = \sum_{n=0}^{N-1} x[n]W_N^{mn}$$

If $x[n]$ is shifted to the right by $k$, we must change the lower and upper limits of the
summation from 0 to k and from N−1 to N+k−1 respectively:

$$D\{x[n-k]\} = \sum_{n=k}^{N+k-1} x[n-k]W_N^{mn}$$

Let $n - k = \mu$; then $n = k + \mu$, and when $n = k$, $\mu = 0$, so

$$D\{x[n-k]\} = \sum_{\mu=0}^{N-1} x[\mu]W_N^{m(k+\mu)} = W_N^{km}X[m]$$
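For finite-length sequences the shift in this property is circular (indices taken modulo N); the sketch below (an illustration, not part of the original text) verifies it with NumPy:

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0, 1.0, 0.0, 3.0])
N, k = len(x), 2
m = np.arange(N)
WN = np.exp(-2j*np.pi/N)               # twiddle factor

lhs = np.fft.fft(np.roll(x, k))        # DFT of x[n-k] (circular shift by k)
rhs = WN**(k*m) * np.fft.fft(x)        # W_N^{km} X[m]
print(np.allclose(lhs, rhs))           # expected: True
```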

3. Frequency Shifting Property

$$W_N^{-kn}x[n] \Leftrightarrow X[m-k]$$

Proof: Take the left-hand side,

$$D\{W_N^{-kn}x[n]\} = \sum_{n=0}^{N-1} W_N^{-kn}x[n]W_N^{mn} = \sum_{n=0}^{N-1} x[n]W_N^{(m-k)n} = X[m-k]$$

4. Time Convolution Property:

$$x[n] * h[n] \Leftrightarrow X[m]\cdot H[m]$$

Proof: $x[n] * h[n] = \sum_{k=0}^{N-1} x[k]h[n-k]$, so

$$D\{x[n] * h[n]\} = \sum_{n=0}^{N-1}\left[\sum_{k=0}^{N-1} x[k]h[n-k]\right]W_N^{mn}$$

Interchanging the order of summation and adjusting the limits of the inner sum, we get

$$= \sum_{k=0}^{N-1} x[k]\left[\sum_{n=k}^{N+k-1} h[n-k]W_N^{mn}\right]$$

From the time shifting property, the inner sum is

$$\sum_{n=k}^{N+k-1} h[n-k]W_N^{mn} = W_N^{km}H[m]$$

and therefore

$$D\{x[n] * h[n]\} = \left[\sum_{k=0}^{N-1} x[k]W_N^{km}\right]H[m] = X[m]\cdot H[m]$$


5. Frequency Convolution Property:

$$x[n]\cdot y[n] \Leftrightarrow \frac{1}{N}\sum_{k=0}^{N-1} X[k]Y[m-k] = \frac{1}{N}X[m] * Y[m]$$

Proof: The proof is obtained by taking the Inverse DFT, changing the order of
summation, and letting $m - k = \mu$.
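For DFTs the convolution in these properties is the circular (length-N, modulo-N) convolution; the sketch below (an illustration, not part of the original text) checks the time convolution property numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
x, h = rng.standard_normal(N), rng.standard_normal(N)

# direct circular convolution: (x (*) h)[n] = sum_k x[k] h[(n-k) mod N]
circ = np.array([np.sum(x * h[(n - np.arange(N)) % N]) for n in range(N)])

lhs = np.fft.fft(circ)
rhs = np.fft.fft(x) * np.fft.fft(h)
print(np.allclose(lhs, rhs))   # expected: True
```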
8.3 The Sampling Theorem:
The sampling theorem specifies the minimum-sampling rate at which a
continuous-time signal needs to be uniformly sampled so that the original signal can be
completely recovered or reconstructed by these samples alone. This is usually referred
to as Shannon's sampling theorem in the literature.
It states that if a continuous-time signal $f(t)$ is band-limited, with its highest
frequency component less than $\omega$, then $f(t)$ can be completely recovered from its
sampled values if the sampling frequency is equal to or greater than $2\omega$.
Generally, the relation is represented as $f_s \ge 2f_m$, which is called the Nyquist
rate, where $f_s$ is the sampling frequency and $f_m$ is the message signal frequency.

Figure 8.1 Sampling of a sinusoidal signal of frequency f at different sampling rates $f_s$. The dashed lines show the alias frequencies, which occur when $f_s/f < 2$.
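A small numerical illustration of aliasing (not part of the original text): when $f_s < 2f$, the samples of a sinusoid at frequency $f$ are indistinguishable from the samples of its alias at $f - f_s$.

```python
import numpy as np

f, fs = 7.0, 10.0                 # signal frequency and a sampling rate below 2f
n = np.arange(20)
t = n / fs

samples = np.cos(2*np.pi*f*t)             # samples of the 7 Hz sinusoid
alias = np.cos(2*np.pi*(f - fs)*t)        # samples of the -3 Hz (i.e. 3 Hz) alias
print(np.allclose(samples, alias))        # expected: True
```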
