Chapter-1 Introduction To Signals and Systems: Signal
Signal Modelling:
And it is represented by diagram as
Input Signal x(t) System y(t) Output Signal
Any signal can be characterised by the following four parameters, which describe its behaviour and physical existence. They are
1. Amplitude
2. Time Period
3. Frequency
4. Phase Shift
The amplitude of a signal may be quantified as: 1. Peak amplitude, 2. Peak-to-peak amplitude, 3. Root-mean-square (RMS) amplitude. (The wave period is a time measure, not an amplitude.)
The relation between the frequency f and the period T of a repeating event or oscillation is given by f = 1/T.
Typical periods: 1 ks (10³ s), 1 s (10⁰ s), 1 ms (10⁻³ s), 1 µs (10⁻⁶ s), 1 ns (10⁻⁹ s), 1 ps (10⁻¹² s).
Phase shift: The difference φ(t) = φ_G(t) − φ_F(t) between the phases of two periodic signals F and G is called the phase difference of G relative to F. At values of t where the difference is zero the two signals are said to be in phase; otherwise they are out of phase with each other.
Illustration of phase shift is given in the figure. The
horizontal axis represents an angle (phase) that is
increasing with time.
Classifications of Signals: the signals are classified
according to their characteristics. Some of them are
Continuous-time signals: The signals that are defined for every instant of time are
known as continuous-time signals. They are denoted by x(t).
Discrete-time signals: The signals that are defined only at discrete instants of time are known as discrete-time signals. Discrete-time signals are continuous in amplitude and discrete in time. They are denoted by x(n).
Where, T= Time interval between two consecutive samples
[Figure: a continuous sinusoidal signal and a discrete (sampled) sinusoidal signal of frequency 10 Hz; amplitude versus time in seconds]
Examples of Discrete-time signals include book selling in a year, rain fall in a specific
month, room temperature at the same hour every day for one week etc.
To distinguish between Continuous-time and Discrete-time signals we use
The symbol ‘t’ to denote the continuous time variable and ‘n’ to denote discrete
time variable
For continuous time signals we will enclose the independent variable in
parenthesis (.), ex: x(t) and for discrete time variable we will enclose the
independent variable in [.] ex: x[n] where n=0,±1, ±2, ±3…. And x[n]=x(nT)
where T is sampling period.
Continuous Time and Analog signals are not the same. Similarly discrete time
and Digital signals are not same.
The terms Continuous time and discrete time qualify the nature of signals along
the time (horizontal axis).
On the other hand the terms analog and digital qualify the nature of signal along
Amplitude (Vertical Axis)
Note that an analog signal is not necessarily continuous-time, and a digital signal need not be discrete-time.
Periodic and Aperiodic Signals: A signal x(t) is periodic if it satisfies the condition x(t) = x(t + T₀) for all t, where T₀ is a positive constant. The smallest value of T₀ that satisfies the periodicity condition is the period of x(t).
From the above example of a periodic continuous-time signal, we can readily deduce that if x(t) is periodic with period T, then x(t) = x(t + nT) for all t and for every integer n. Thus x(t) is also periodic with period 2T, 3T, 4T, …; the fundamental period T₀ of x(t) is the smallest value of T for which the above equation holds.
Similarly, in discrete time, a discrete-time signal x(n) is periodic with period N if x(n) = x(n + N), where N is an integer.
If this equation holds, then x(n) is also periodic with period 2N, 3N, 4N, …; the fundamental period N₀ of x(n) is the smallest value of N for which the equation holds.
Even and Odd Signals: The signal x(t) or x(n) is referred to as an even signal if it is identical to its time-reversed counterpart, i.e. its reflection about the time origin. In continuous time a signal is even if it satisfies x(t) = x(−t), while a discrete-time signal is even if x(n) = x(−n). A signal is odd if x(−t) = −x(t) in continuous time and x(−n) = −x(n) in discrete time.
From these relations we can compute the even and odd components of any signal from the following equations:
x_e(t) = (1/2)[x(t) + x(−t)]   and   x_e(n) = (1/2)[x(n) + x(−n)]
x_o(t) = (1/2)[x(t) − x(−t)]   and   x_o(n) = (1/2)[x(n) − x(−n)]
Even signals are symmetric about the vertical axis (the time origin) and odd signals are antisymmetric about the time origin. As with discrete-time signals, most practical signals are neither purely even nor purely odd.
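The following short Python sketch (not part of the original notes; the example signal is an arbitrary choice) illustrates this even/odd decomposition numerically on a symmetric time grid.

```python
import numpy as np

# Sketch: even/odd decomposition of a sampled signal on a symmetric time grid.
# The example signal x(t) = t + cos(t) is an arbitrary illustrative choice.
t = np.linspace(-5, 5, 1001)           # symmetric grid, so x(-t) is x reversed
x = t + np.cos(t)

x_rev  = x[::-1]                       # samples of x(-t)
x_even = 0.5 * (x + x_rev)             # x_e(t) = [x(t) + x(-t)] / 2
x_odd  = 0.5 * (x - x_rev)             # x_o(t) = [x(t) - x(-t)] / 2

assert np.allclose(x_even + x_odd, x)  # the two parts reconstruct x(t)
assert np.allclose(x_even, np.cos(t))  # even part of t + cos(t) is cos(t)
assert np.allclose(x_odd, t)           # odd part is t
```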
Energy and Power Signals: The terms signal energy and signal power are used to characterize a signal. They are not actually measures of physical energy and power. The definitions of signal energy and power apply to any signal x(t), including signals that take on complex values. The energy of a signal x(t) is
E = ∫_{−∞}^{+∞} |x(t)|² dt
A necessary condition for the energy to be finite is that the signal amplitude tends to zero as |t| → ∞.
The power in the signal x(t) is
P = lim_{T→∞} (1/2T) ∫_{−T}^{+T} |x(t)|² dt
and, for a discrete-time signal x(n),
P = lim_{N→∞} (1/(2N+1)) Σ_{n=−N}^{+N} |x(n)|²
If 0<E<∞, then the signal x(t) is called an energy signal. However, there are signals
where this condition is not satisfied. For such signals we consider the power. If 0<P<∞,
then the signal is called a power signal. Note that the power for an energy signal is zero
(P = 0) and that the energy for a power signal is infinite (E = ∞). Some signals are
neither energy nor power signals.
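As a hedged numerical illustration (the example signals below are assumed, not taken from the notes), the energy and power definitions can be approximated by sampling:

```python
import numpy as np

# Sketch: numerical estimates of signal energy and power for assumed example signals.
dt = 1e-4

# Energy signal: x(t) = e^(-t) u(t); E = integral of x^2 = 1/2
t = np.arange(0, 50, dt)
x = np.exp(-t)
E = np.sum(np.abs(x)**2) * dt
print(E)                         # ~0.5 -> finite energy, zero average power

# Power signal: x(t) = A cos(w0 t); P = A^2 / 2
A, w0, T = 2.0, 2*np.pi*5, 50.0
tp = np.arange(-T, T, dt)
xp = A * np.cos(w0 * tp)
P = np.sum(np.abs(xp)**2) * dt / (2*T)
print(P)                         # ~2.0 (= A^2/2); its energy grows without bound
```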
Classification of Systems:
There are many classifications of systems based on parameter used to classify them.
They are
A distributed system is one in which all dependent variables are functions of
time and one or more spatial variables. In this case, we will be solving partial
differential equations (PDEs)
1. Find whether the following systems are static or dynamic:
a. y(n) = n x(n) + 6x³[n]
b. y(t) = x(t − 2) + 3x(t)
c. y(n) = Σ_{k=0}^{∞} x[n − k]
d. y(n) = Σ_{k=−∞}^{+∞} x[n − k]

b. y(t) = x²(t) + x(t − 2)
c. y(n) = x[n − 2] + x[2 − n]
d. y(t) = ∫ x(τ) dτ
Linear, non linear systems
A linear system is one which satisfies the principle of superposition and homogeneity
or scaling.
Consider a linear system characterized by the transformation operator T[ ]. Let x1, x2 are
the inputs applied to it and y1, y2 are the outputs. Then the following equations hold for
a linear system
y1 = T[x1], y2 = T[x2]
Principle of homogeneity (scaling): T[a·x1] = a·y1, T[b·x2] = b·y2
Principle of superposition: T[x1 + x2] = y1 + y2
Linearity (both combined): T[a·x1 + b·x2] = a·y1 + b·y2
Where a, b are constants.
Linearity ensures regeneration of input frequencies at output. Nonlinearity
leads to generation of new frequencies in the output different from input frequencies.
Most of the control theory is devoted to explore linear systems.
Determine whether the following systems are linear:
b. y(t) = A x(t) + B
c. dy(t)/dt + 2y(t) = x(t)·dx(t)/dt
d. y(t) = ∫_{−∞}^{t} x(τ) dτ
e. y(n) = n x(n)
Time-variant and time-invariant systems: The condition for a time-invariant system is y(n, t) = y(n − t), and the condition for a time-variant system is y(n, t) ≠ y(n − t), where y(n, t) = T[x(n − t)] is the output due to the delayed input and y(n − t) is the delayed output.
OR
A system is said to be a time-variant system if its response varies with time. If the system's response to an input signal does not change with time, the system is termed time-invariant. The behaviour and characteristics of a time-invariant system are fixed over time.
In time invariant systems if input is delayed by time t0 the output will also gets delayed
by t0. Mathematically it is specified as follows
y(t-t0) = T[x(t-t0)]
For a discrete time invariant system the condition for time invariance can be formulated
mathematically by replacing t as n*Ts is given as
y(n-n0) = T[x(n-n0)]
Where n0 is the time delay. Time invariance minimizes the complexity involved in the
analysis of systems. Most of the systems in practice are time invariant systems.
Note: In describing discrete time systems the sampling times n*Ts are mentioned as n
i.e. a discrete signal x(n*Ts) is indicated for simplicity as x(n).
Step 1: Delay the input signal x(t) by K (or T) and find the corresponding response — equation 1.
Step 2: In the output y(t), replace t by (t − K) (or (t − T)) — equation 2.
Step 3: If equation 1 equals equation 2, the system is time-invariant; otherwise the system is time-variant.
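A minimal Python sketch of this three-step check, applied to two assumed example discrete-time systems (one expected to be time-invariant, one time-variant):

```python
import numpy as np

# Sketch: numerical time-invariance test on sampled systems (assumed examples).
# System A: y[n] = x[n] - x[n-1]   (expected time-invariant)
# System B: y[n] = n * x[n]        (expected time-variant)
def sys_A(x):
    return x - np.concatenate(([0.0], x[:-1]))

def sys_B(x):
    return np.arange(len(x)) * x

def is_time_invariant(system, x, k=3):
    x_delayed = np.concatenate((np.zeros(k), x[:-k]))    # step 1: delay the input by k
    y_from_delayed_input = system(x_delayed)
    y_delayed = np.concatenate((np.zeros(k), system(x)[:-k]))  # step 2: delay the output
    return np.allclose(y_from_delayed_input, y_delayed)        # step 3: compare

x = np.random.default_rng(0).standard_normal(20)
print(is_time_invariant(sys_A, x))   # True
print(is_time_invariant(sys_B, x))   # False
```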
Linear Time variant (LTV) and linear Time Invariant (LTI) Systems
If a system is both linear and time variant, then it is called linear time variant (LTV)
system. If a system is both linear and time Invariant then that system is called linear
time invariant (LTI) system.
Stable and Unstable Systems
Most of the control system theory involves estimation of stability of systems.
Stability is an important parameter which determines its applicability. Stability of a
system is formulated in bounded input bounded output sense i.e. a system is stable if
its response is bounded for a bounded input (bounded means finite).
An unstable system is one in which the output of the system is unbounded for
a bounded input. The response of an unstable system diverges to infinity.
OR
The system is said to be stable only when the output is bounded for bounded
input. For a bounded input, if the output is unbounded in the system then it is said to be
unstable.
Note: For a bounded signal, amplitude is finite.
Example 1: y(t) = x²(t)
Let the input be u(t) (a bounded unit-step input); then the output y(t) = u²(t) = u(t) is bounded. Hence, the system is stable.
Example 2: y(t) = ∫ x(t) dt
Let the input be u(t) (a bounded unit-step input); then the output y(t) = ∫ u(t) dt is a ramp signal, which is unbounded because the amplitude of the ramp is not finite: it goes to infinity as t → ∞. Hence, the system is unstable.
Invertible and non-invertible systems
A system is said to be invertible if distinct inputs lead to distinct outputs. For
such a system there exists an inverse transformation (inverse system) denoted by T-1[ ]
which maps the outputs of original systems to the inputs applied. Accordingly we can
write
T T⁻¹ = T⁻¹ T = I
where I is the identity operator (equal to 1 for single-input, single-output systems).
A non-invertible system is one in which distinct inputs leads to same outputs. For
such a system an inverse system will not exist.
OR
A system is said to be invertible if the input of the system can be recovered at the output. Cascading a system H₁(s) with its inverse H₂(s):
Y(s) = X(s) H₁(s) H₂(s)
Y(s) = X(s) H₁(s) · (1/H₁(s))   since H₂(s) = 1/H₁(s)
∴ Y(s) = X(s) ⟹ y(t) = x(t); hence, the system is invertible.
If y(t) ≠ x(t), then the system is said to be non-invertible.
ELEMENTARY SIGNALS
A signal can be anything that conveys information. For example, a picture of a person gives you information about whether he is short or tall, fair or dark, etc.
Mathematically, a signal is defined as a function of one or more independent variables that conveys information about the state of a system. For example:
a) A speech signal is a function of time. Here the independent variable is time and the dependent variable is the amplitude of the speech signal.
b) A picture with varying brightness is a function of two spatial variables. Here the independent variables are the spatial coordinates (X, Y) and the dependent variable is the brightness or amplitude of the picture.
Here are a few basic signals:
Unit Step Function
Unit step function is denoted by u(t).
It is widely used as a test signal.
The amplitude of the unit step function is unity.
u(t) = 1 for t ≥ 0
     = 0 for t < 0
Delayed Unit Step Function:
The delayed unit step function is a step that starts at a point t = a rather than at the origin; at all earlier times the signal is zero.
u(t − a) = 1 for t ≥ a
         = 0 for t < a
Unit Ramp Signal:
The ramp signal is defined as
r(t) = t for t ≥ 0
     = 0 for t < 0,   or equivalently r(t) = t·u(t)
The ramp function can be obtained by applying the unit step function to an integrator:
r(t) = ∫ u(t) dt = ∫ dt = t
In other words, the unit step function can be obtained by differentiating the unit ramp function:
u(t) = dr(t)/dt
Unit Parabolic Function:
The parabolic function is defined as
p(t) = t²/2 for t ≥ 0
     = 0 for t < 0,   or equivalently p(t) = (t²/2)·u(t)
The unit parabolic function can be obtained by integrating the ramp function:
p(t) = ∫ r(t) dt = ∫ t dt = t²/2   for t ≥ 0
In other words, the ramp function can be obtained by differentiating the parabolic function:
r(t) = dp(t)/dt
Real Exponential Signal:
The real exponential signal has the form x(t) = A e^{at}, with A and a real; it grows with time for a > 0 and decays for a < 0.
Rectangular Function:
It is defined as
π(t) = 1 for |t| < 1/2
     = 0 otherwise
It can also be defined with the help of the unit step function as π(t) = u(t) − u(t − a).
Triangular Function:
It is defined as
Δ(t) = 1 − |t|/a for |t| ≤ a
     = 0 for |t| > a
Signum Function:
The signum function is denoted by sgn(t):
sgn(t) = 1 for t > 0
       = 0 for t = 0
       = −1 for t < 0
This function can be expressed in terms of the unit step signal as sgn(t) = −1 + 2u(t).
SINC Function:
The sinc function is defined as sinc(t) = sin(πt)/(πt) for −∞ < t < +∞.
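For reference, a small Python sketch (the sampling grid and the width a are assumed values) that builds sampled versions of these elementary signals:

```python
import numpy as np

# Sketch: sampled versions of the elementary signals defined above (illustrative only).
t = np.linspace(-2, 2, 401)
a = 1.0

u    = np.where(t >= 0, 1.0, 0.0)                      # unit step u(t)
r    = t * u                                           # unit ramp r(t) = t u(t)
p    = 0.5 * t**2 * u                                  # unit parabola p(t) = (t^2/2) u(t)
rect = np.where(np.abs(t) < 0.5, 1.0, 0.0)             # rectangular pulse of width 1
tri  = np.where(np.abs(t) <= a, 1 - np.abs(t)/a, 0.0)  # triangular pulse
sgn  = np.sign(t)                                      # signum function (0 at t = 0)
sinc = np.sinc(t)                                      # numpy's sinc(t) = sin(pi t)/(pi t)

# Spot checks against the definitions
assert np.allclose(r, np.maximum(t, 0.0))
assert np.allclose(sgn[t > 0], 1.0) and np.allclose(sgn[t < 0], -1.0)
```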
Impulse Function:
It is defined as
∫_{−∞}^{t} δ(τ) dτ = u(t)   and   δ(t) = 0 for t ≠ 0
δ(t) = du(t)/dt
That is, the impulse function has zero amplitude everywhere except at t = 0.
At t = 0 the amplitude is infinite, such that the area under the curve is equal to one.
Properties of the Impulse Function:
∫_{−∞}^{+∞} x(t) δ(t) dt = x(0)
x(t) δ(t − t₀) = x(t₀) δ(t − t₀)
∫_{−∞}^{+∞} x(t) δ(t − t₀) dt = x(t₀)
δ(at) = (1/|a|) δ(t)
∫_{−∞}^{+∞} x(τ) δ(t − τ) dτ = x(t)
Operations on Signals: Operations can be performed on a signal's
1. Amplitude
2. Time (the independent variable)
and the operations are divided as follows.
Time Shifting:
Given a signal x(t), time shifting may delay or advance the signal in time. Mathematically this is represented as
y(t) = x(t − T)   (continuous time)
y[n] = x[n − K]   (discrete time)
Time Scaling: Time scaling is accomplished by replacing t by at in x(t): if a > 1 the signal is compressed, and if a < 1 the signal is expanded.
Amplitude Scaling:
Cx(t) is a amplitude scaled version of x(t) whose amplitude is scaled by a factor C.
Signal Addition: Addition of two signals is nothing but addition of their corresponding
amplitudes. This can be best explained by using the following example
Signal Multiplication: Multiplication of two signals is nothing but multiplication of
their corresponding amplitudes. This can be best explained by the following example
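A short Python sketch of these operations on an assumed example signal (a rectangular pulse); the particular signal, shift and scale values are illustrative only:

```python
import numpy as np

# Sketch: amplitude and time operations on an assumed example signal x(t)
# (a rectangular pulse of width 1 starting at t = 0).
def x(t):
    return np.where((t >= 0) & (t < 1), 1.0, 0.0)

t = np.linspace(-3, 3, 601)
dt = t[1] - t[0]

y_shift = x(t - 0.5)          # time shifting:   y(t) = x(t - T), T = 0.5 (delay)
y_comp  = x(2.0 * t)          # time scaling, a = 2 > 1: pulse compressed to width 1/2
y_exp   = x(0.5 * t)          # time scaling, a = 0.5 < 1: pulse expanded to width 2
y_amp   = 3.0 * x(t)          # amplitude scaling by C = 3
y_sum   = x(t) + x(t - 0.5)   # signal addition (sample-by-sample)
y_prod  = x(t) * x(t - 0.5)   # signal multiplication (sample-by-sample)

print(np.sum(y_comp) * dt, np.sum(y_exp) * dt)   # pulse areas ~0.5 and ~2.0
```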
Multiple Operations on Signals:
1. Given a signal x(t), y(t) = x(at − b)
2. Given a signal x(t), y(t) = x((t − b)/a)
Analogy between Vectors and Signals
When approximating V₁ along V₂, Ve represents the error in this approximation. If the component of the vector V₁ along V₂ is drawn, then C₁₂ indicates the similarity of the two vectors.
[Figure: V₁ resolved into a component C₁₂V₂ along V₂ plus an error vector Ve]
The component of vector V₁ along V₂ is given by V₁ = C₁₂V₂ + Ve. Different choices of the coefficient give different error vectors (V₁ = C₁V₂ + Ve1, V₁ = C₂V₂ + Ve2); the best approximation is the choice that makes the error smallest.
Similarly, if we consider two vectors A and B, their dot product is A·B = |A||B| cos θ, where A·B = B·A. Then
the component of A along B = |A| cos θ = (A·B)/|B|
the component of B along A = |B| cos θ = (A·B)/|A|
Applying this to V₁ and V₂,
component of V₁ along V₂ = (V₁·V₂)/|V₂| = C₁₂|V₂|
⟹ (V₁·V₂)/|V₂|² = C₁₂
⟹ C₁₂ = (V₁·V₂)/(V₂·V₂)
In the same way, a signal f₁(t) can be approximated over an interval (t₁, t₂) by another signal f₂(t) as f₁(t) ≈ C₁₂ f₂(t), with error signal f_e(t) = f₁(t) − C₁₂ f₂(t). The average of the error over the interval is
(1/(t₂ − t₁)) ∫_{t₁}^{t₂} f_e(t) dt = (1/(t₂ − t₁)) ∫_{t₁}^{t₂} [f₁(t) − C₁₂ f₂(t)] dt
If the error signal takes large positive and negative values, its average may come out as zero, which could wrongly suggest that the error is zero; that is why it is better to take the mean of the square of the error signal f_e(t):
ε = (1/(t₂ − t₁)) ∫_{t₁}^{t₂} f_e²(t) dt
  = (1/(t₂ − t₁)) ∫_{t₁}^{t₂} [f₁(t) − C₁₂ f₂(t)]² dt
  = (1/(t₂ − t₁)) ∫_{t₁}^{t₂} [f₁²(t) + C₁₂² f₂²(t) − 2 C₁₂ f₁(t) f₂(t)] dt
To minimize the mean square error we differentiate ε with respect to C₁₂ and set the result equal to zero, i.e.
dε/dC₁₂ = 0
dε/dC₁₂ = (1/(t₂ − t₁)) ∫_{t₁}^{t₂} [ d/dC₁₂ f₁²(t) + d/dC₁₂ (C₁₂² f₂²(t)) − d/dC₁₂ (2 C₁₂ f₁(t) f₂(t)) ] dt
In the first term f₁²(t) contains no C₁₂, so its derivative is zero. Then
dε/dC₁₂ = (1/(t₂ − t₁)) [ ∫_{t₁}^{t₂} 0 dt + 2C₁₂ ∫_{t₁}^{t₂} f₂²(t) dt − 2 ∫_{t₁}^{t₂} f₁(t) f₂(t) dt ] = 0
C₁₂ ∫_{t₁}^{t₂} f₂²(t) dt = ∫_{t₁}^{t₂} f₁(t) f₂(t) dt
C₁₂ = [∫_{t₁}^{t₂} f₁(t) f₂(t) dt] / [∫_{t₁}^{t₂} f₂²(t) dt]
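A brief numerical sketch of this formula in Python; the choice f₁(t) = t approximated by f₂(t) = sin t over (0, π) is an assumed example, not from the notes:

```python
import numpy as np

# Sketch: numerically evaluating C12 = ∫ f1 f2 dt / ∫ f2^2 dt over (t1, t2).
t1, t2 = 0.0, np.pi
t = np.linspace(t1, t2, 100001)
dt = t[1] - t[0]

f1 = t              # assumed example signal to approximate
f2 = np.sin(t)      # assumed basis signal

C12 = np.sum(f1 * f2) * dt / (np.sum(f2**2) * dt)
print(C12)   # ~2.0, since ∫0..π t sin t dt = π and ∫0..π sin^2 t dt = π/2
```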
Problems on the orthogonality of two signals can be solved with the help of sine and cosine terms, as in the following examples.
1. Show that sin(nω₀t) and sin(mω₀t) are mutually orthogonal to each other over the interval (t₀, t₀ + 2π/ω₀), where m and n are integers (m ≠ n).
Soln.
∫_{t1}^{t2} f₁(t) f₂(t) dt = 0 ⟹ ∫_{t₀}^{t₀+2π/ω₀} sin(nω₀t) · sin(mω₀t) dt
= (1/2) ∫_{t₀}^{t₀+2π/ω₀} 2 sin(nω₀t) sin(mω₀t) dt
From the formula 2 sin a sin b = cos(a − b) − cos(a + b) we can write the above relation as
= (1/2) ∫_{t₀}^{t₀+2π/ω₀} [cos((n − m)ω₀t) − cos((n + m)ω₀t)] dt
= (1/2) ∫_{t₀}^{t₀+2π/ω₀} cos((n − m)ω₀t) dt − (1/2) ∫_{t₀}^{t₀+2π/ω₀} cos((n + m)ω₀t) dt
= (1/2) [ sin((n − m)ω₀t)/((n − m)ω₀) − sin((n + m)ω₀t)/((n + m)ω₀) ] evaluated from t₀ to t₀ + 2π/ω₀
After taking ω₀ out of the two terms, this can be written as
= (1/(2ω₀)) [ sin((n − m)ω₀t)/(n − m) − sin((n + m)ω₀t)/(n + m) ] evaluated from t₀ to t₀ + 2π/ω₀
Applying the limits of integration on the components of the above relation,
= (1/(2ω₀)) [ sin((n − m)ω₀(t₀ + 2π/ω₀))/(n − m) − sin((n + m)ω₀(t₀ + 2π/ω₀))/(n + m) − sin((n − m)ω₀t₀)/(n − m) + sin((n + m)ω₀t₀)/(n + m) ]
We know that sin(2π + φ) = sin φ; using this relation, the above equation becomes
= (1/(2ω₀)) [ sin((n − m)ω₀t₀)/(n − m) − sin((n + m)ω₀t₀)/(n + m) − sin((n − m)ω₀t₀)/(n − m) + sin((n + m)ω₀t₀)/(n + m) ]
In this relation the 1st and 3rd terms cancel, and similarly the 2nd and 4th terms. The total is therefore 0, which proves that the two given signals sin(nω₀t) and sin(mω₀t) are mutually orthogonal to each other.
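A quick numerical check of this orthogonality in Python (the values of ω₀, t₀, n and m below are arbitrary choices):

```python
import numpy as np

# Sketch: numerical check that sin(n w0 t) and sin(m w0 t), n != m, are orthogonal
# over one period (t0, t0 + 2*pi/w0). All parameter values are assumed examples.
w0, t0 = 2*np.pi*3, 0.7
n, m = 2, 5
t = np.linspace(t0, t0 + 2*np.pi/w0, 200001)
dt = t[1] - t[0]

inner = np.sum(np.sin(n*w0*t) * np.sin(m*w0*t)) * dt
print(inner)   # ~0 (orthogonal); with n == m it would instead equal pi/w0
```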
2. Show that sin(nω₀t) and cos(mω₀t) are mutually orthogonal to each other over the interval (t₀, t₀ + 2π/ω₀), where m, n are integers.
3. Show that cos(nω₀t) and cos(mω₀t) are mutually orthogonal to each other over the interval (t₀, t₀ + 2π/ω₀), where m, n are integers (m ≠ n).
As an example, consider a square-wave signal f₁(t), equal to 1 for 0 < t < π and −1 for π < t < 2π, approximated by f₂(t) = sin t over (0, 2π). Using
C₁₂ = [∫_{t₁}^{t₂} f₁(t) f₂(t) dt] / [∫_{t₁}^{t₂} f₂²(t) dt]
= [∫_0^π f₁(t) sin t dt + ∫_π^{2π} f₁(t) sin t dt] / ∫_0^{2π} sin²t dt
From the given data we can write the above relation as
= [∫_0^π sin t dt + ∫_π^{2π} (−1) sin t dt] / ∫_0^{2π} (1 − cos 2t)/2 dt
After integrating the terms and applying the limits on the above equation we get
= [(−cos t)|_0^π − (−cos t)|_π^{2π}] / [(1/2)(t − (sin 2t)/2)|_0^{2π}] = (2 + 2)/π = 4/π
so the best single-sinusoid approximation of the square wave is f₁(t) ≈ (4/π) sin t.
Chapter-2 Laplace Transform
The Laplace transform is similar to the Fourier transform. While the Fourier
transform of a function is a complex function of a real variable (frequency), the Laplace
transform of a function is a complex function of a complex variable. The Laplace
transform is usually restricted to transformation of functions of t with t ≥ 0.
The Laplace transform is invertible on a large class of functions. The inverse
Laplace transform takes a function of a complex variable s (often frequency) and yields
a function of a real variable t (often time). Given a simple mathematical or functional
description of an input or output to a system, the Laplace transform provides an
alternative functional description that often simplifies the process of analysing the
behaviour of the system, or in synthesizing a new system based on a set of
specifications.
Definition of the Laplace Transformation:
The two-sided or bilateral Laplace Transform pair is defined as
ℒ{f(t)} = F(s) = ∫_{−∞}^{∞} f(t) e^{−st} dt − − − (1)
ℒ⁻¹{F(s)} = f(t) = (1/(2πj)) ∫_{σ−jω}^{σ+jω} F(s) e^{st} ds − − − (2)
Where ℒ{𝑓 (𝑡)} denotes the Laplace Transform of the time function 𝑓 (𝑡), ℒ −1 {𝐹(𝑠)}
denotes inverse Laplace transform and 𝑠 is the complex variable whose real part is σ,
and imaginary part ω, that is, 𝑠 = 𝜎 + 𝑗𝜔.
In most problems, we are concerned with values of time greater than some
reference time, say 𝑡 = 𝑡0 = 0, and since the initial conditions are generally known, the
bilateral or two-sided Laplace transform pair of (1) and (2) simplifies to the unilateral
or one-sided Laplace transform defined as
ℒ{f(t)} = F(s) = ∫_{t₀}^{∞} f(t) e^{−st} dt = ∫_0^{∞} f(t) e^{−st} dt − − − (3)
ℒ⁻¹{F(s)} = f(t) = (1/(2πj)) ∫_{σ−jω}^{σ+jω} F(s) e^{st} ds − − − (4)
The Laplace Transform of (3) has meaning only if the integral converges (reaches a
limit), that is, if
∞
To determine the conditions that will ensure us that the integral of (3) converges, we
rewrite (5) as
∞
The term 𝑒 −𝑗𝜔𝑡 in the integral of (6) has magnitude of unity, i.e., |𝑒 −𝑗𝜔𝑡 | = 1, and thus
the condition for convergence becomes
∞
We will denote transformation from the time domain to the complex frequency
domain, and vice versa, as 𝑓 (𝑡) ⇔ 𝐹 (𝑠) − − − (8)
The Dirichlet conditions are sufficient conditions for a real-valued, periodic
function 𝑓 (𝑡) to be equal to the sum of its Fourier series at each point
where 𝑓 (𝑡) is continuous. Moreover, the behaviour of the Fourier series at points of
discontinuity is determined as well (it is the midpoint of the values of the discontinuity).
The conditions are:
1. 𝑓(𝑡)must be absolutely integrable over a period.
2. 𝑓 (𝑡) must be of bounded variation in any given bounded interval.
3. 𝑓 (𝑡) must have a finite number of discontinuities in any given bounded interval,
and the discontinuities cannot be infinite.
Properties of the Laplace Transform
1. Linearity Property:
The linearity property states that if 𝑓1 (𝑡), 𝑓2 (𝑡), 𝑓3 (𝑡), … … … . 𝑓𝑛 (𝑡) have Laplace
Transforms 𝐹1 (𝑠), 𝐹2 (𝑠), 𝐹3 (𝑠), … … … 𝐹𝑛 (𝑠) respectively and 𝑐1 , 𝑐2 , 𝑐3 , … … … … 𝑐𝑛
are arbitrary constants, then
𝑐1 𝑓1 (𝑡) + 𝑐2 𝑓2 (𝑡) + ⋯ + 𝑐𝑛 𝑓𝑛 (𝑡) ⇔ 𝑐1 𝐹1 (𝑠) + 𝑐2 𝐹2 (𝑠) + ⋯ + 𝑐𝑛 𝐹𝑛 (𝑠) − −(9)
Proof:-
We know that as per the definition of the Laplace Transform,
ℒ{f(t)} = F(s) = ∫_{t₀}^{∞} f(t) e^{−st} dt = ∫_0^{∞} f(t) e^{−st} dt − − − (10)
∞
∞ ∞ ∞
The time shifting property states that a right shift in the time domain by ‘a’ units
corresponds to multiplication by 𝑒 −𝑎𝑠 in the complex frequency domain. Thus,
𝒇(𝒕 − 𝒂)𝒖(𝒕 − 𝒂) ⇔ 𝒆−𝒂𝒔 𝑭(𝒔) − − − −(𝟏𝟏)
Proof:-
We know that as per the definition of the Laplace Transform,
∞ ∞
Now, we let 𝑡 − 𝑎 = 𝜏 ; then, 𝑡 = 𝜏 + 𝑎 and 𝑑𝑡 = 𝑑𝜏. With these substitutions, the second
integral on the right side of (13) becomes
∞ ∞
The frequency shifting property states that if we multiply some time domain function
𝑓 (𝑡) by an exponential function 𝑒 −𝑎𝑡 where a is an arbitrary positive constant, this
multiplication will produce a shift of the s variable in the complex frequency domain by
a units. Thus,
𝒆−𝒂𝒕 𝒇(𝒕) ⇔ 𝑭(𝒔 + 𝒂)
Proof:-
We know that as per the definition of the Laplace Transform,
∞ ∞
4. Scaling Property
Let a be an arbitrary positive constant; then, the scaling property states that
1 𝑠
𝑓(𝑎𝑡) ⇔ 𝐹( )
𝑎 𝑎
Proof:-
We know that as per the definition of the Laplace Transform,
∞ ∞
𝜏 𝜏
And letting 𝑎𝑡 = 𝜏, ⇒ 𝑡 = 𝑎 ⇒ 𝑑𝑡 = 𝑑(𝑎) then,
∞ ∞
𝜏 𝜏 1 𝑠 1 𝑠
ℒ {𝑓 (𝑎𝑡)} = ∫ 𝑓(𝜏)𝑒 −𝑠(𝑎) 𝑑 ( ) = ∫ 𝑓 (𝜏)𝑒 −(𝑎)𝜏 𝑑(𝜏) = 𝐹 ( )
𝑎 𝑎 𝑎 𝑎
0 0
The differentiation-in-time-domain property states that differentiation in the time domain corresponds to multiplication by s in the complex frequency domain, minus the initial value of f(t) at t = 0. Thus,
f′(t) = (d/dt) f(t) ⇔ s F(s) − f(0)
Proof:-
We know that as per the definition of the Laplace Transform,
∞ ∞
This integral is evaluated by parts, using ∫ v du = uv − ∫ u dv − − − (18)
with v = e^{−st} and du = f′(t) dt (so that u = f(t)). Then relation (18) gives
ℒ{f′(t)} = e^{−st} f(t)|_0^∞ − ∫_0^∞ e^{−st}(−s) f(t) dt − − − (19)
= −f(0) + s ∫_0^∞ f(t) e^{−st} dt = s F(s) − f(0)
6. Differentiation in Complex Frequency Domain:
This property states that differentiation in complex frequency domain and multiplication
by minus one, corresponds to multiplication of 𝑓 (𝑡) by t in the time domain. In other
words,
t f(t) ⇔ −(d/ds) F(s)
Proof:-
We know that as per the definition of the Laplace Transform,
∞ ∞
Since (−t) f(t) is the time function whose Laplace transform is (d/ds) F(s), this can be written as
t f(t) ⇔ −(d/ds) F(s) − − − (24)
In general,
tⁿ f(t) ⇔ (−1)ⁿ (dⁿ/dsⁿ) F(s) − − − (25)
7. Integration in Time Domain
This property states that integration in time domain corresponds to 𝑭(𝒔) divided by 𝒔.
ℒ{∫_0^t f(t) dt} = F(s)/s
Proof:-
We know that as per the definition of the Laplace Transform,
∞ ∞
It can be written with u = ∫_0^t f(t) dt and v = e^{−st}/(−s); then the above relation becomes
⇒ ∫_0^∞ [∫_0^t f(t) dt] · (d/dt)[e^{−st}/(−s)] dt
⇒ [∫_0^t f(t) dt] · (e^{−st}/(−s)) |_0^∞ − ∫_0^∞ f(t) · (e^{−st}/(−s)) dt
The first term becomes zero (it is zero at t = 0 by the initial conditions and zero as t → ∞ because of e^{−st}), thus only the second term remains:
⇒ (1/s) ∫_0^∞ f(t) e^{−st} dt ⇒ F(s)/s
8. Integration in Complex Frequency Domain:
This property states that integration in the complex frequency domain corresponds to division of a time function f(t) by the variable t, provided that the limit lim_{t→0} f(t)/t exists. Thus,
ℒ{f(t)/t} = ∫_s^∞ F(s) ds
Proof:-
We know that, as per the definition of the Laplace transform, F(s) = ∫_0^∞ f(t) e^{−st} dt. Integrating both sides with respect to s from s to ∞ and interchanging the order of integration,
∫_s^∞ F(s) ds = ∫_0^∞ f(t) dt ∫_s^∞ e^{−st} ds ⇒ ∫_0^∞ f(t) dt · (e^{−st}/(−t))|_s^∞ = ∫_0^∞ (f(t)/t) e^{−st} dt = ℒ{f(t)/t}
9. Time Periodicity:
The Time periodicity property states that a periodic function of time T corresponds to
𝑇
integral ∫0 𝑓 (𝑡)𝑒 −𝑠𝑡 𝑑𝑡 divided by (1 − 𝑒 −𝑠𝑇 ) in the complex frequency domain. Thus, if
we let 𝑓 (𝑡) be a periodic function with the period T, that is, 𝑓(𝑡) = 𝑓(𝑡 + 𝑛𝑇), for
n=1,2,3…… we get the transform pair,
f(t + nT) ⇔ [∫_0^T f(t) e^{−st} dt] / (1 − e^{−sT})
Proof:-
We know that as per the definition of the Laplace Transform,
∞ ∞
In the first integral of the right side, we let 𝑡 = 𝜏, in the second 𝑡 = 𝜏 + 𝑇, in the third 𝑡 =
𝜏 + 2𝑇, and so on. The areas under each period of f(t) are equal, and thus the upper and
lower limits of integration are the same for each integral. Then,
ℒ{f(t)} = ∫_0^T f(τ) e^{−sτ} dτ + ∫_0^T f(τ + T) e^{−s(τ+T)} dτ + ∫_0^T f(τ + 2T) e^{−s(τ+2T)} dτ + ⋯ − − − (29)
Since the function is periodic, i.e. f(τ) = f(τ + T) = f(τ + 2T) = ⋯ = f(τ + nT), we can write equation (29) as
ℒ{f(t)} = (1 + e^{−sT} + e^{−2sT} + ⋯) ∫_0^T f(τ) e^{−sτ} dτ − − − (30)
Summing the geometric series, 1 + e^{−sT} + e^{−2sT} + ⋯ = 1/(1 − e^{−sT}), so
ℒ{f(t + nT)} = [∫_0^T f(τ) e^{−sτ} dτ] / (1 − e^{−sT})
10. Initial Value Theorem
It states that the initial value f(0) of the time function f(t) can be found from its Laplace
transform multiplied by s and letting s→∞. that is
lim_{t→0} f(t) = lim_{s→∞} s F(s)
Proof: We know that, the time differentiation property yields the following relation
f′(t) = (d/dt) f(t) ⇔ s F(s) − f(0)
Then, Applying Limit S→∞ on both sides we get,
∞
The term on the left-hand side becomes zero, since e^{−st} → 0 as s → ∞; then the relation reduces to
lim_{s→∞} (s F(s) − f(0)) = 0 ⟹ f(0) = lim_{t→0} f(t) = lim_{s→∞} s F(s)
11. Final Value Theorem
It states that the final value f(∞) of the time function f(t) can be found from its Laplace transform multiplied by s, letting s → 0. That is,
lim_{t→∞} f(t) = lim_{s→0} s F(s)
Proof: We know that, the time differentiation property yields the following relation
f′(t) = (d/dt) f(t) ⇔ s F(s) − f(0)
Then, applying the limit s → 0 on both sides we get
lim_{s→0} ∫_0^∞ f′(t) e^{−st} dt = ∫_0^∞ f′(t) dt = f(t)|_0^∞ = lim_{s→0}(s F(s)) − f(0)
⟹ f(∞) − f(0) = lim_{s→0}(s F(s)) − f(0) ⟹ f(∞) = lim_{t→∞} f(t) = lim_{s→0} s F(s)
12. Convolution in Time Domain:
f₁(t) ∗ f₂(t) ⇔ F₁(s) F₂(s)
Proof: By definition, ℒ{f₁(t) ∗ f₂(t)} = ∫_0^∞ [∫_0^∞ f₁(τ) f₂(t − τ) dτ] e^{−st} dt; substituting t − τ = λ,
By rearranging the terms inside the integrals as per the variables, we can rewrite as,
∞ ∞
ℒ{𝑓1 (𝑡) ∗ 𝑓2 (𝑡)} = ∫ 𝑓1 (𝜏) [∫ 𝑓2 (𝜆)𝑒 −𝑠(𝜆+𝜏) 𝑑𝜆] 𝑑𝜏 = ∫ 𝑓1 (𝜏)𝑒 −𝑠𝜏 𝑑𝜏 ∫ 𝑓2 (𝜆)𝑒 −𝑠𝜆 𝑑𝜆
0 0 0 0
⟹ 𝐹1 (𝑠)𝐹2 (𝑠)
13. Convolution in Complex Frequency Domain:
f₁(t) f₂(t) ⇔ (1/(2πj)) F₁(s) ∗ F₂(s)
Proof: We know that ℒ{f₁(t) f₂(t)} = ∫_0^∞ f₁(t) f₂(t) e^{−st} dt − − − (35), and recalling the inverse Laplace transform from eqn (2),
ℒ⁻¹{F₁(s)} = f₁(t) = (1/(2πj)) ∫_{σ−jω}^{σ+jω} F₁(μ) e^{μt} dμ − − − (36)
Substituting (36) into (35),
ℒ{f₁(t) f₂(t)} = ∫_0^∞ [ (1/(2πj)) ∫_{σ−jω}^{σ+jω} F₁(μ) e^{μt} dμ ] f₂(t) e^{−st} dt
⇒ (1/(2πj)) ∫_{σ−jω}^{σ+jω} F₁(μ) [ ∫_0^∞ f₂(t) e^{−(s−μ)t} dt ] dμ
⇒ (1/(2πj)) ∫_{σ−jω}^{σ+jω} F₁(μ) F₂(s − μ) dμ = (1/(2πj)) F₁(s) ∗ F₂(s)
The Laplace Transforms Pairs for Common Functions:
S.No.  f(t)                 F(s)
1      u(t)                 1/s
2      t·u(t)               1/s²
3      tⁿ·u(t)              n!/s^{n+1}
4      δ(t)                 1
5      δ(t − a)             e^{−as}
6      e^{−at} u(t)         1/(s + a)
7      e^{+at} u(t)         1/(s − a)
8      e^{+jωt} u(t)        1/(s − jω)
9      e^{−jωt} u(t)        1/(s + jω)
10     cos ωt · u(t)        s/(s² + ω²)
11     sin ωt · u(t)        ω/(s² + ω²)
12     e^{−at} cos bt       (s + a)/((s + a)² + b²)
13     e^{−at} sin bt       b/((s + a)² + b²)
14     e^{at} cos bt        (s − a)/((s − a)² + b²)
15     e^{at} sin bt        b/((s − a)² + b²)
16     cosh at              s/(s² − a²)
17     sinh at              a/(s² − a²)
18     e^{−at} cosh bt      (s + a)/((s + a)² − b²)
19     e^{−at} sinh bt      b/((s + a)² − b²)
20     e^{at} cosh bt       (s − a)/((s − a)² − b²)
21     e^{at} sinh bt       b/((s − a)² − b²)
The Laplace Transforms Pairs for Common Waveforms:
#Problem 1: find the Laplace transform of the waveform 𝑓𝑝 (𝑡). The subscript ‘P’ stands
for pulse wave form.
Solution: We first express the given waveform as a sum of unit step functions. Then,
𝑓𝑝 (𝑡) = 𝐴[𝑢(𝑡) − 𝑢(𝑡 − 𝑎)]
From the Time shifting Property we get,
u(t) ⇔ 1/s
For this example, A·u(t) ⇔ A/s and A·u(t − a) ⇔ (A/s) e^{−as}
Then, by the linearity property, the Laplace transform of the given pulse function is
A[u(t) − u(t − a)] ⇔ A/s − (A/s) e^{−as} = (A/s)(1 − e^{−as})
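If SymPy is available, the same result can be checked symbolically; this is a minimal sketch, assuming symbolic positive A and a:

```python
import sympy as sp

# Sketch: symbolic check of the pulse transform. The pulse equals A on (0, a) and 0
# elsewhere, so its transform is the integral of A*exp(-s*t) over (0, a).
t, s, A, a = sp.symbols('t s A a', positive=True)

F_p = sp.integrate(A * sp.exp(-s*t), (t, 0, a))
print(sp.simplify(F_p))                  # A*(1 - exp(-a*s))/s

# Same answer via the Heaviside form A*(u(t) - u(t - a)):
f_p = A * (sp.Heaviside(t) - sp.Heaviside(t - a))
print(sp.laplace_transform(f_p, t, s, noconds=True))
```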
#Problem 2: find the Laplace transform of the waveform 𝑓𝐿 (𝑡). The subscript ‘L’ stands
for Linear Segment.
Solution: We must first derive the equation of the linear segment. Then, we express the
given waveform in terms of the unit step function.
𝑓𝐿 (𝑡) = (𝑡 − 1)𝑢(𝑡 − 1)
From the Time shifting Property we get,
t·u(t) ⇔ 1/s²
Therefore, the Laplace transform of the linear segment is
(t − 1)·u(t − 1) ⇔ (1/s²) e^{−s}
#Problem 3: find the Laplace transform of the Triangular waveform 𝑓𝑇 (𝑡).
Solution: The triangular waveform can be expressed as a sum of shifted ramps, f_T(t) = t·u(t) − 2(t − 1)·u(t − 1) + (t − 2)·u(t − 2). Using linearity and time shifting,
t·u(t) − 2(t − 1)·u(t − 1) + (t − 2)·u(t − 2) ⇔ (1/s²)(1 − 2e^{−s} + e^{−2s})
Therefore, the Laplace transform of the triangular waveform is given by
f_T(t) ⇔ (1/s²)(1 − e^{−s})²
#Problem 4: find the Laplace transform of the Rectangular Periodic waveform 𝑓𝑅 (𝑡).
Solution: This is a periodic waveform with period T=2a, and thus we can apply time
periodicity property
f(t + nT) ⇔ [∫_0^T f(t) e^{−st} dt] / (1 − e^{−sT})
where the denominator represents the periodicity of f(t). For this example,
ℒ{f_R(t)} = [1/(1 − e^{−2as})] ∫_0^{2a} f_R(t) e^{−st} dt = [1/(1 − e^{−2as})] [∫_0^a A e^{−st} dt + ∫_a^{2a} (−A) e^{−st} dt]
⇒ [A/(1 − e^{−2as})] [ (e^{−st}/(−s))|_0^a − (e^{−st}/(−s))|_a^{2a} ]
Or,
ℒ{f_R(t)} = [A/(s(1 − e^{−2as}))] (1 − 2e^{−as} + e^{−2as}) = A(1 − e^{−as})²/[s(1 + e^{−as})(1 − e^{−as})] = A(1 − e^{−as})/[s(1 + e^{−as})]
= (A/s) · (e^{as/2} − e^{−as/2}) e^{−as/2} / [(e^{as/2} + e^{−as/2}) e^{−as/2}] = (A/s) · sinh(as/2)/cosh(as/2)
ℒ{f_R(t)} = (A/s) tanh(as/2)
#Problem 5: find the Laplace transform of the half-wave rectified sine wave 𝑓𝐻𝑊 (𝑡).
Solution: This is a periodic waveform with period T = 2π. We will apply the time-periodicity property,
f(t + nT) ⇔ [∫_0^T f(t) e^{−st} dt] / (1 − e^{−sT})
where the denominator represents the periodicity of f(t). For this example,
ℒ{f_HW(t)} = [1/(1 − e^{−2πs})] ∫_0^{2π} f(t) e^{−st} dt = [1/(1 − e^{−2πs})] ∫_0^π sin t · e^{−st} dt
= [1/(1 − e^{−2πs})] [ e^{−st}(−s sin t − cos t)/(s² + 1) ]|_0^π = (1 + e^{−πs}) / [(s² + 1)(1 − e^{−2πs})]
ℒ{f_HW(t)} = (1 + e^{−πs}) / [(s² + 1)(1 + e^{−πs})(1 − e^{−πs})]
Or,
ℒ{f_HW(t)} = 1 / [(s² + 1)(1 − e^{−πs})]
#Practise Problems:
Find the Laplace Transform of the following Waveforms
Chapter-3 Inverse Laplace Transforms
For any continuous-time function f(t), the two-sided or bilateral Laplace transform pair is defined as
ℒ{f(t)} = F(s) = ∫_{−∞}^{∞} f(t) e^{−st} dt − − − (1)
ℒ⁻¹{F(s)} = f(t) = (1/(2πj)) ∫_{σ−jω}^{σ+jω} F(s) e^{st} ds − − − (2)
Where ℒ{𝑓 (𝑡)} denotes the Laplace Transform of the time function 𝑓 (𝑡), ℒ −1 {𝐹(𝑠)}
denotes inverse Laplace transform and 𝑠 is the complex variable whose real part is σ,
and imaginary part ω, that is, 𝑠 = 𝜎 + 𝑗𝜔.
The inverse Laplace transform is that, which uses one-to-one relationship
between a signal and its unilateral Laplace transform. One such method is Partial
Fractional Method.
Let,
F(s) = N(s)/D(s) = (b_m sᵐ + b_{m−1} s^{m−1} + ⋯ + b₁ s + b₀) / (sⁿ + a_{n−1} s^{n−1} + ⋯ + a₁ s + a₀) − − − (3)
where m < n.
To find the partial fraction expansion, we first find the roots of the denominator polynomial D(s), say p₁, p₂, …, p_k. These roots are also called the poles of F(s). Now we can express F(s) as
F(s) = N(s)/D(s) = (b_m sᵐ + b_{m−1} s^{m−1} + ⋯ + b₁ s + b₀) / ∏_{k=1}^{N} (s − p_k) − − − (4)
I.DISTINCT POLES:
If all the poles 𝑝𝑘 are distinct, then we may express 𝐹(𝑠) as a sum of single pole terms
given by
F(s) = r₁/(s − p₁) + r₂/(s − p₂) + ⋯ + r_i/(s − p_i) + ⋯ + r_k/(s − p_k) − − − (5)
Multiplying both sides of equation (5) by (s − p_i) and substituting s = p_i, we get
(s − p_i) F(s)|_{s=p_i} = [ (s − p_i)r₁/(s − p₁) + (s − p_i)r₂/(s − p₂) + ⋯ + r_i + ⋯ + (s − p_i)r_k/(s − p_k) ]|_{s=p_i} = r_i
#1. Use the partial fraction expansion method to simplify 𝐹 (𝑠) of given relation, and find
the time domain function 𝑓 (𝑡) corresponding to 𝐹(𝑠).
F(s) = 2/(s² − 3s + 2)
Solution: For the given function F(s) we can factorize and write it as
F(s) = 2/(s² − 3s + 2) = 2/[(s − 1)(s − 2)] = r₁/(s − 1) + r₂/(s − 2)
where
r₁ = (s − 1)F(s)|_{s=1} = [(s − 1) · 2/((s − 1)(s − 2))]|_{s=1} = −2
r₂ = (s − 2)F(s)|_{s=2} = [(s − 2) · 2/((s − 1)(s − 2))]|_{s=2} = 2
Therefore,
F(s) = −2/(s − 1) + 2/(s − 2) − − − (7)
Applying Inverse Laplace Transform on equation (7) we get the time domain
function as,
𝑓(𝑡) = −2𝑒 𝑡 𝑢(𝑡) + 2𝑒 2𝑡 𝑢(𝑡)
#2. Use the partial fraction expansion method to simplify 𝐹 (𝑠) of given relation, and find
the time domain function 𝑓 (𝑡) corresponding to 𝐹(𝑠).
F(s) = (3s + 2)/(s² + 3s + 2)
Solution: For the given function F(s) we can factorize and write it as
F(s) = (3s + 2)/(s² + 3s + 2) = (3s + 2)/[(s + 1)(s + 2)] = r₁/(s + 1) + r₂/(s + 2)
r₁ = (s + 1)F(s)|_{s=−1} = [(s + 1)(3s + 2)/((s + 1)(s + 2))]|_{s=−1} = −1
r₂ = (s + 2)F(s)|_{s=−2} = [(s + 2)(3s + 2)/((s + 1)(s + 2))]|_{s=−2} = 4
Therefore,
F(s) = −1/(s + 1) + 4/(s + 2) − − − (8)
Applying Inverse Laplace Transform on equation (8) we get the time domain
function as,
𝑓 (𝑡) = −𝑒 −𝑡 𝑢(𝑡) + 4𝑒 −2𝑡 𝑢(𝑡)
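The partial fraction expansion and the inverse transform of example #2 can be checked with SymPy; a minimal sketch:

```python
import sympy as sp

# Sketch: SymPy check of example #2 — partial fractions and the inverse transform.
s, t = sp.symbols('s t')

F = (3*s + 2) / (s**2 + 3*s + 2)
print(sp.apart(F, s))                  # 4/(s + 2) - 1/(s + 1)

f = sp.inverse_laplace_transform(F, s, t)
print(sp.simplify(f))                  # (4*exp(-2*t) - exp(-t))*Heaviside(t)
```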
#Practise Problem 1:
Use the partial fraction expansion method to simplify 𝐹(𝑠) and 𝐺(𝑠) of given relation,
and find the time domain function 𝑓(𝑡) and 𝑔(𝑡) corresponding to 𝐹(𝑠) and 𝐺(𝑠).
F(s) = (3s² + 2s + 5)/(s³ + 12s² + 44s + 48)
G(s) = (2s² + 5s + 5)/[(s + 1)²(s + 2)]
II. COMPLEX POLES:
If 𝐹 (𝑠) has complex poles, then partial fraction of the 𝐹(𝑠) can be expressed as,
F(s) = r₁/(s − p₁) + r₁*/(s − p₁*) − − − (9)
Where 𝑟1∗ is complex conjugate of 𝑟1 and 𝑝1∗ is complex conjugate of 𝑝1 . In other words,
complex conjugate poles result in complex conjugate coefficients.
#3. Use the partial fraction expansion method to simplify 𝐹 (𝑠) of given relation, and find
the time domain function 𝑓 (𝑡) corresponding to 𝐹(𝑠).
F(s) = s/(s² + 2s + 2)
Solution: For the given function 𝐹 (𝑠) we cannot factorize normally and write in the
form of complex roots by using the following formula
s_{1,2} = [−b ± √(b² − 4ac)]/(2a) = [−2 ± √(2² − 4·1·2)]/2 = −1 ± j1
Then, 𝐹(𝑠) can be represented in the form of partial fractions as,
F(s) = r₁/(s + 1 + j1) + r₁*/(s + 1 − j1)
r₁ = (s + 1 + j1)F(s)|_{s=−1−j1} = [(s + 1 + j1)·s/((s + 1 + j1)(s + 1 − j1))]|_{s=−1−j1}
⇒ r₁ = (−1 − j1)/(−j2) = 0.5 − j0.5
r₁* = (s + 1 − j1)F(s)|_{s=−1+j1} = [(s + 1 − j1)·s/((s + 1 + j1)(s + 1 − j1))]|_{s=−1+j1}
⇒ r₁* = (−1 + j1)/(j2) = 0.5 + j0.5
F(s) = (0.5 − j0.5)/(s − (−1 − j)) + (0.5 + j0.5)/(s − (−1 + j))
Taking Inverse Laplace Transform on the above relation we get,
𝑓(𝑡) = (0.5 − 𝑗0.5)𝑒 (−1−𝑗)𝑡 𝑢(𝑡) + (0.5 + 𝑗0.5)𝑒 (−1+𝑗)𝑡 𝑢(𝑡)
On expansion of the above relation and separating the real and imaginary
components and writing in simplified form as,
𝑓 (𝑡) = (0.5 − 𝑗0.5)𝑒 −𝑡 𝑒 −𝑗𝑡 𝑢(𝑡) + (0.5 + 𝑗0.5)𝑒 −𝑡 𝑒 +𝑗𝑡 𝑢(𝑡)
⇒ 0.5𝑒 −𝑡 𝑒 −𝑗𝑡 𝑢(𝑡) − 0.5𝑗𝑒 −𝑡 𝑒 −𝑗𝑡 𝑢(𝑡) + 0.5𝑒 −𝑡 𝑒 +𝑗𝑡 𝑢(𝑡) + 𝑗0.5𝑒 −𝑡 𝑒 +𝑗𝑡 𝑢(𝑡)
⇒ 0.5𝑒 −𝑡 𝑒 −𝑗𝑡 𝑢(𝑡) + 0.5𝑒 −𝑡 𝑒 +𝑗𝑡 𝑢(𝑡) − 0.5𝑗𝑒 −𝑡 𝑒 −𝑗𝑡 𝑢(𝑡) + 𝑗0.5𝑒 −𝑡 𝑒 +𝑗𝑡 𝑢(𝑡)
⇒ 0.5𝑒 −𝑡 (𝑒 +𝑗𝑡 + 𝑒 −𝑗𝑡 )𝑢(𝑡) + 𝑗0.5𝑒 −𝑡 (𝑒 +𝑗𝑡 − 𝑒 −𝑗𝑡 )𝑢(𝑡)
⇒ 𝑒 −𝑡 cos 𝑡 𝑢(𝑡) − 𝑒 −𝑡 sin 𝑡 𝑢(𝑡) = 𝑒 −𝑡 (cos 𝑡 − sin 𝑡)𝑢(𝑡)
⇒ 𝒇(𝒕) = 𝒆−𝒕 (𝐜𝐨𝐬 𝒕 − 𝐬𝐢𝐧 𝒕)𝒖(𝒕)
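A short SymPy check of this example (note that the numerator used throughout the worked residues is s, i.e. F(s) = s/(s² + 2s + 2)):

```python
import sympy as sp

# Sketch: SymPy check of example #3, F(s) = s / (s^2 + 2s + 2).
s, t = sp.symbols('s t')

F = s / (s**2 + 2*s + 2)
f = sp.inverse_laplace_transform(F, s, t)
print(sp.simplify(f))   # equivalent to exp(-t)*(cos(t) - sin(t))*Heaviside(t)
```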
#4. Use the partial fraction expansion method to simplify 𝐹 (𝑠) of given relation, and find
the time domain function 𝑓 (𝑡) corresponding to 𝐹(𝑠).
𝑠+3
𝐹 (𝑠 ) =
𝑠3 + 5𝑠 2 + 12𝑠 + 8
Solution: The given function F(s) cannot be completely factorized into real linear factors; we write it as
F(s) = (s + 3)/(s³ + 5s² + 12s + 8) = (s + 3)/[(s + 1)(s² + 4s + 8)]
The quadratic factor (s² + 4s + 8) cannot be factorized over the reals; its complex roots are found from
s_{2,3} = [−b ± √(b² − 4ac)]/(2a) = [−4 ± √(4² − 4·1·8)]/2 = −2 ± j2
With these complex roots, F(s) can be represented in the form of partial fractions as
F(s) = r₁/(s + 1) + r₂/(s + 2 − 2j) + r₂*/(s + 2 + 2j) − − − (4.1)
r₁ = (s + 1)F(s)|_{s=−1} = [(s + 1)(s + 3)/((s + 1)(s² + 4s + 8))]|_{s=−1} = (−1 + 3)/((−1)² + 4(−1) + 8) = 2/5
r₂ = (s + 2 − 2j)F(s)|_{s=−2+2j} = [(s + 2 − 2j)(s + 3)/((s + 1)(s + 2 − 2j)(s + 2 + 2j))]|_{s=−2+2j}
⇒ r₂ = [(−2 + 2j) + 3]/[(−2 + 2j + 1)(−2 + 2j + 2 + 2j)] = (1 + 2j)/[(−1 + 2j)(4j)] = (1 + 2j)/(−8 − 4j)
⇒ r₂ = (1 + 2j)(−8 + 4j)/[(−8 − 4j)(−8 + 4j)] = (−16 − 12j)/80 = −1/5 − j3/20, and r₂* = −1/5 + j3/20, so
F(s) = (2/5)/(s + 1) + (−1/5 − j3/20)/(s + 2 − 2j) + (−1/5 + j3/20)/(s + 2 + 2j) − − − (4.2)
From equation (4.2) we take the last two terms together and find their inverse transform. Recombining them over a common denominator,
(−1/5 − j3/20)/(s + 2 − 2j) + (−1/5 + j3/20)/(s + 2 + 2j) = (−1/5)(2s + 1)/(s² + 4s + 8)
⇒ (−2/5)[(s + 2)/((s + 2)² + 2²)] + (3/5)[1/((s + 2)² + 2²)] − − − (4.3)
Recalling the transform pairs
e^{−at} cos ωt · u(t) ⇔ (s + a)/((s + a)² + ω²) − − − (4.4)
e^{−at} sin ωt · u(t) ⇔ ω/((s + a)² + ω²) − − − (4.5)
and using (4.4) and (4.5) in (4.3) (writing 3/5 as (3/10)·2 to match the form ω/((s + a)² + ω²) with ω = 2), we get
⇒ (−2/5) e^{−2t} cos 2t + (3/10) e^{−2t} sin 2t − − − (4.6)
Equation 4.6 is the inverse Laplace Transform of last two components of 4.2, so it can be
re-written as,
f(t) = (2/5) e^{−t} u(t) − (2/5) e^{−2t} cos 2t · u(t) + (3/10) e^{−2t} sin 2t · u(t)
#Practise Problem 2:
Use the partial fraction expansion method to simplify F₁(s), F₂(s) and F₃(s) below, and find the time-domain functions f₁(t), f₂(t) and f₃(t) corresponding to F₁(s), F₂(s) and F₃(s).
F₁(s) = (2s + 3)/(s³ + 6s² + 16s + 16)
F₂(s) = (2s + 1)/(s³ + 3s² + 4s + 2)
F₃(s) = (4s² + 8s + 10)/[(s + 2)(s² + 2s + 5)]
III. MULTIPLE POLES/REPEATED POLES:
If 𝐹 (𝑠) has multiple poles then partial fractions of 𝐹 (𝑠) can be expressed as
F(s) = N(s)/[(s − p₁)ᵐ (s − p₂) ⋯ (s − p_{n−1})(s − p_n)] − − − (10)
where F(s) has simple poles, but one of the poles, say p₁, has multiplicity m. Denoting the coefficients corresponding to the multiple pole p₁ by r₁₁, r₁₂, r₁₃, …, r₁ₘ, the partial fraction expansion of equation (10) is
F(s) = r₁₁/(s − p₁)ᵐ + r₁₂/(s − p₁)^{m−1} + r₁₃/(s − p₁)^{m−2} + ⋯ + r₁ₘ/(s − p₁) + r₂/(s − p₂) + r₃/(s − p₃) + ⋯ + rₙ/(s − pₙ) − − − (11)
For the simple poles p₁, p₂, …, pₙ we proceed as before, that is, we find the residues as r_k = (s − p_k)F(s)|_{s=p_k}.
#5 . Use the partial fraction expansion method to simplify 𝐹(𝑠) of given relation, and
find the time domain function 𝑓 (𝑡) corresponding to 𝐹(𝑠).
F(s) = (s + 3)/[(s + 2)(s + 1)²]
Solution: From the given Laplace-transform function we observe that the pole at s = −1 is repeated twice. Its partial fraction expansion can therefore be expressed as
F(s) = (s + 3)/[(s + 2)(s + 1)²] = r₁/(s + 2) + r₂₁/(s + 1)² + r₂₂/(s + 1)
The residues are
r₁ = (s + 2)F(s)|_{s=−2} = [(s + 3)/(s + 1)²]|_{s=−2} = 1
r₂₁ = (s + 1)²F(s)|_{s=−1} = [(s + 3)/(s + 2)]|_{s=−1} = 2
r₂₂ = d/ds [(s + 1)²F(s)]|_{s=−1} = d/ds [(s + 3)/(s + 2)]|_{s=−1} = −1
* The value of r₂₂ can also be found without differentiation: substituting the already-known values into the partial fraction expansion and letting s = 0, we get
(s + 3)/[(s + 2)(s + 1)²]|_{s=0} = 1/(s + 2)|_{s=0} + 2/(s + 1)²|_{s=0} + r₂₂/(s + 1)|_{s=0}
or
3/2 = 1/2 + 2 + r₂₂ ⇒ r₂₂ = −1
Then we have
F(s) = (s + 3)/[(s + 2)(s + 1)²] = 1/(s + 2) + 2/(s + 1)² − 1/(s + 1)
Applying the inverse Laplace transform on both sides, we get
ℒ⁻¹{F(s)} = ℒ⁻¹{1/(s + 2)} + 2ℒ⁻¹{1/(s + 1)²} − ℒ⁻¹{1/(s + 1)}
f(t) = e^{−2t} u(t) + 2t e^{−t} u(t) − e^{−t} u(t)
F₂(s) = (2s + 1)/(s + 2)³
F₃(s) = (3s² + 8s + 6)/[(s + 2)(s² + 2s + 1)]
F₄(s) = (s + 2)/[s(s + 1)²]
III. Case 𝒎 ≥ 𝒏 Improper Function:
If 𝐹(𝑠) is an improper function, i.e. 𝒎 ≥ 𝒏, we must first divide the numerator function
𝑁(𝑠) by the denominator function 𝐷(𝑠) to obtain an expression as,
F(s) = k₀ + k₁s + k₂s² + ⋯ + k_{m−n}s^{m−n} + N(s)/D(s) − − − (16)
where N(s)/D(s) is a proper rational function.
IV. Alternative Method /Clearing Fraction Method:
Partial fractions can also be performed with the method of clearing the fractions. i.e.,
making the denominator of both sides the same, then equating the numerator.
F(s) = N(s)/D(s) = r₁/(s − a) + r₂/(s − a)² + ⋯ + r_m/(s − a)ᵐ − − − (17)
Let, 𝑠 2 + 𝛼𝑠 + 𝛽 be a quadratic factor 𝐷(𝑠) and suppose that (𝑠 2 + 𝛼𝑠 + 𝛽 )𝑛 is the
highest power of this factor that divides 𝐷(𝑠), now we perform the following steps.
#6. Express 𝐹 (𝑠) below as a sum of partial fractions using the method of clearing
fractions.
F(s) = (−2s + 4)/[(s² + 1)(s − 1)²]
Assuming the expansion F(s) = (r₁s + A)/(s² + 1) + r₂₁/(s − 1)² + r₂₂/(s − 1), clearing the fractions and equating numerators gives
−2s + 4 = (r₁ + r₂₂)s³ + (−2r₁ + A − r₂₂ + r₂₁)s² + (r₁ − 2A + r₂₂)s + (A − r₂₂ + r₂₁)
Equating coefficients of like powers of s:
0 = r₁ + r₂₂
0 = −2r₁ + A − r₂₂ + r₂₁
−2 = r₁ − 2A + r₂₂
4 = A − r₂₂ + r₂₁
Solving, r₁ = 2, A = 1, r₂₁ = 1, r₂₂ = −2, so
F(s) = (−2s + 4)/[(s² + 1)(s − 1)²] = (2s + 1)/(s² + 1) + 1/(s − 1)² − 2/(s − 1)
F(s) = 2s/(s² + 1) + 1/(s² + 1) + 1/(s − 1)² − 2/(s − 1)
Applying Inverse Laplace Transform on both sides, we get
𝒇(𝒕) = 𝟐 𝐜𝐨𝐬 𝒕 𝒖(𝒕) + 𝐬𝐢𝐧 𝒕 𝒖(𝒕) + 𝒕𝒆𝒕 𝒖(𝒕) − 𝟐𝒆𝒕 𝒖(𝒕)
#Practise problem 5. Express 𝐹(𝑠) below as a sum of partial fractions using the
method of clearing fractions.
F(s) = (s + 3)/(s³ + 5s² + 12s + 8)
Chapter 4 Impulse Response & Convolution
The impulse response and frequency response are two attributes that are useful for
characterizing linear time-invariant (LTI) systems. They provide two different ways of
calculating what an LTI system's output will be for a given input signal. A continuous-
time LTI system is usually illustrated like this:
In general, the system H maps its input signal x(t) to a corresponding output signal y(t).
There are many types of LTI systems that can have apply very different transformations
to the signals that pass through them. But they all share two key characteristics:
• The system is linear, so it obeys the principle of superposition. Stated simply, if you
linearly combine two signals and input them to the system, the output is the same
linear combination of what the outputs would have been had the signals been
passed through individually. That is, if 𝑥1 (𝑡) maps to an output
of 𝑦1 (𝑡) and 𝑥2 (𝑡) maps to an output of 𝑦2 (𝑡), then for all values of a1 and a2,
𝐻{𝑎1 𝑥1 (𝑡) + 𝑎2 𝑥2 (𝑡)} = 𝑎1 𝑦1 (𝑡) + 𝑎2 𝑦2 (𝑡)
The system is time-invariant, so its characteristics do not change with time. If you
add a delay to the input signal, then you simply add the same delay to the output. For an
input signal x(t) that maps to an output signal y(t), then for all values of τ,
𝐻 {𝑥(𝑡 − 𝜏)} = 𝑦(𝑡 − 𝜏)
Discrete-time LTI systems have the same properties; the notation differs because of the discrete-versus-continuous distinction, but they are much alike. These properties allow any discrete-time input signal to be written as a weighted sum of shifted impulses:
x[n] = Σ_{k=0}^{∞} x[k] δ[n − k]
Each term in the sum is an impulse scaled by the value of x[n] at that time instant. What
would we get if we passed x[n] through an LTI system to yield y[n]? Simple: each scaled
and time-delayed impulse that we put in yields a scaled and time-delayed copy of the
impulse response at the output. That is:
y[n] = Σ_{k=0}^{∞} x[k] h[n − k]
where h[n] is the system's impulse response. The above equation is the convolution
theorem for discrete-time LTI systems. That is, for any signal x[n] that is input to an LTI
system, the system's output y[n] is equal to the discrete convolution of the input signal
and the system's impulse response.
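A small Python sketch of this convolution sum, using assumed example sequences, comparing a direct evaluation with numpy.convolve:

```python
import numpy as np

# Sketch: discrete convolution y[n] = sum_k x[k] h[n-k], direct sum vs. numpy.
x = np.array([1.0, 2.0, 3.0])          # example input (assumed)
h = np.array([1.0, 1.0, 1.0, 1.0])     # example impulse response (assumed)

y_np = np.convolve(x, h)               # library convolution

y = np.zeros(len(x) + len(h) - 1)      # direct evaluation of the convolution sum
for n in range(len(y)):
    for k in range(len(x)):
        if 0 <= n - k < len(h):
            y[n] += x[k] * h[n - k]

print(y)                       # [1. 3. 6. 6. 5. 3.]
print(np.allclose(y, y_np))    # True
```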
For a continuous-time LTI system the same reasoning gives the convolution integral
y(t) = ∫_{−∞}^{+∞} x(τ) h(t − τ) dτ
where, again, h(t) is the system's impulse response. There are a number of ways of
deriving this relationship (I think you could make a similar argument as above by
claiming that Dirac delta functions at all time shifts make up an orthogonal basis for
the L2 Hilbert space, noting that you can use the delta function's sifting property to
project any function in L2 onto that basis, therefore allowing you to express system
outputs in terms of the outputs associated with the basis (i.e. time-shifted impulse
responses), but I'm not a licensed mathematician, so I'll leave that aside).
In summary: For both discrete- and continuous-time systems, the impulse response is
useful because it allows us to calculate the output of these systems for any input signal;
the output is simply the input signal convolved with the impulse response function.
Frequency response:
An LTI system's frequency response provides a similar function: it allows you to
calculate the effect that a system will have on an input signal, except those effects are
illustrated in the frequency domain. Recall the definition of the Fourier transform:
X(f) = ∫_{−∞}^{+∞} x(t) e^{−j2πft} dt
More importantly for the sake of this illustration, look at its inverse:
x(t) = ∫_{−∞}^{+∞} X(f) e^{j2πft} df
In essence, this relation tells us that any time-domain signal x(t) can be broken
up into a linear combination of many complex exponential functions at varying
frequencies (there is an analogous relationship for discrete-time signals called
the discrete-time Fourier transform; I only treat the continuous-time case below for
simplicity). For a time-domain signal x(t), the Fourier transform yields a corresponding
function X(f) that specifies, for each frequency ff, the scaling factor to apply to the
complex exponential at frequency ff in the aforementioned linear combination. These
scaling factors are in general complex numbers. One way of looking at complex numbers
is in amplitude/phase format, that is:
𝑋(𝑓) = 𝐴(𝑓)𝑒 𝑗∅(𝑓)
Looking at it this way, then, x(t) can be written as a linear combination of many
complex exponential functions, each scaled in amplitude by the function A(f) and shifted
in phase by the function ϕ(f). This lines up well with the LTI system properties that we
discussed previously; if we can decompose our input signal x(t) into a linear
combination of a bunch of complex exponential functions, then we can write the output
of the system as the same linear combination of the system response to those complex
exponential functions.
Here's where it gets better: exponential functions are the eigen functions of
linear time-invariant systems. The idea is, similar to eigenvectors in linear algebra, if
you put an exponential function into an LTI system, you get the same exponential
function out, scaled by a (generally complex) value. This has the effect of changing the
amplitude and phase of the exponential function that you put in.
This is immensely useful when combined with the Fourier-transform-based
decomposition discussed above. As we said before, we can write any signal x(t) as a
linear combination of many complex exponential functions at varying frequencies. If we
pass x(t) into an LTI system, then (because those exponentials are eigenfunctions of the
system), the output contains complex exponentials at the same frequencies, only scaled
in amplitude and shifted in phase. These effects on the exponentials' amplitudes and
phases, as a function of frequency, is the system's frequency response. That is, for an
input signal with Fourier transform X(f) passed into system H to yield an output with a
Fourier transform Y(f),
𝑌(𝑓) = 𝐻 (𝑓). 𝑋(𝑓) = 𝐴(𝑓)𝑒 𝑗𝜑(𝑓) 𝑋(𝑓)
In summary: So, if we know a system's frequency response H(f) and the Fourier
transform of the signal that we put into it X(f), then it is straightforward to calculate the
Fourier transform of the system's output; it is merely the product of the frequency
response and the input signal's transform. For each complex exponential frequency that
is present in the spectrum X(f), the system has the effect of scaling that exponential in
amplitude by A(f) and shifting the exponential in phase by ϕ(f) radians.
Bringing them together:
So, given either a system's impulse response or its frequency response, you can
calculate the other. Either one is sufficient to fully characterize the behaviour of the
system; the impulse response is useful when operating in the time domain and the
frequency response is useful when analyzing behaviour in the frequency domain.
TRANSFER FUNCTION:
A transfer function is the frequency-dependent ratio of a forced function to a
forcing function (or of an output to an input). The idea of a transfer function was
implicit when we used the concepts of impedance and admittance to relate voltage and
current.
The transfer function H(ω) of a circuit is the frequency-dependent ratio of a
phasor output Y(ω) (an element voltage or current) to a phasor input X(ω) (source
voltage or current).
H(ω) = Y(ω)/X(ω)
Impulse Response of any system or network can be computer by following methods:
1. Using Inverse Laplace and Inverse Fourier Transforms
2. Using Convolution of two signals
3. Using State space Representations
01. Using Laplace Transform:
➢ The h(t) of a continuous time LTS System is defined to be the response of the
system when input is 𝛿(𝑡). i.e., ℎ(𝑡) = 𝑅𝑒𝑠𝑝𝑜𝑛𝑠𝑒𝑠 𝑜𝑓 {𝛿 (𝑡)}
➢ For any system if 𝑥(𝑡) is an input and its corresponding Laplace Transform is 𝑋(𝑠)
and its Fourier Transform is 𝑋(𝐹).
➢ Similarly, the output of the system is 𝑦(𝑡), then 𝑌(𝑠) and 𝑌(𝐹) are its Laplace and
Fourier Transforms respectively.
➢ If ℎ(𝑡) is the system function and 𝐻(𝑠) is it’s Laplace Transform. Then
Impulse Response:
It if the Inverse Laplace Transform (or) Inverse Fourier Transform of Frequency
Response (or) Transfer Function of the system.
i.e., Frequency Response of the system is the ratio of Laplace Transform (or)
Fourier Transform of Output to the Laplace Transform (or) Fourier Transform of the
input.
Impulse Response ℎ(𝑡) = 𝐿−1 [𝐻(𝑠)] 𝑜𝑟 ℎ(𝑡) = 𝐹 −1 [𝐻(𝐹 )]
h(t) = ℒ⁻¹[H(s)] = ℒ⁻¹[Y(s)/X(s)]
or
h(t) = F⁻¹[H(F)] = F⁻¹[Y(F)/X(F)]
For any electrical network, the components R, L, C and the signals v(t), i(t) are represented in the Laplace domain as
R ⇒ R      L ⇒ sL      C ⇒ 1/(sC)      v(t) ⇒ V(s)      i(t) ⇒ I(s)
#Example 1:
Compute the impulse response of the series RC circuit of the figure in terms of the
constrains R and C. where the response is considered to be the Voltage across the
capacitor, and 𝒱𝑐 (0− ) = 0. Then compute the current through the capacitor.
ℒ⁻¹[H(s)] = ℒ⁻¹{ (1/(RC)) · 1/[s + 1/(RC)] }
h(t) = (1/(RC)) e^{−t/(RC)} u(t) − − − (1.4)
The current through the capacitor i_c can now be computed as
i_c = C · dv_c/dt, thus i_c ⇒ C · d/dt[h(t)]
i_c ⇒ C · d/dt[ (1/(RC)) e^{−t/(RC)} u(t) ]
Using the product rule d/dt[uv], with u = (1/(RC)) e^{−t/(RC)} and v = u(t),
i_c = −(1/(R²C)) e^{−t/(RC)} u(t) + (1/R) e^{−t/(RC)} δ(t)
Using the sampling property of the delta function, we get
i_c = (1/R) δ(t) − (1/(R²C)) e^{−t/(RC)} u(t)
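A brief numerical sketch using SciPy, with assumed example values of R and C, comparing the transfer-function impulse response to the closed form h(t) = (1/RC) e^{−t/RC} u(t):

```python
import numpy as np
from scipy import signal

# Sketch: impulse response of the series RC circuit, H(s) = (1/RC)/(s + 1/RC).
# R and C below are assumed example values (1 kOhm, 1 uF -> RC = 1 ms).
R, C = 1e3, 1e-6
sys = signal.TransferFunction([1/(R*C)], [1, 1/(R*C)])

t = np.linspace(0, 5*R*C, 1000)
t, h = signal.impulse(sys, T=t)

h_exact = (1/(R*C)) * np.exp(-t/(R*C))
print(np.max(np.abs(h - h_exact)))   # small numerical error
```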
#Example 2:
For the circuit, compute the impulse response ℎ(𝑡) = 𝒱𝑐 (𝑡) given that the initial
conditions are zero. That is, 𝒾𝐿 (0− ) = 0 𝑎𝑛𝑑 𝒱𝑐 (0− ) = 0.
x_o(t) = (1/2)[x(t) − x(−t)]   and   x_o(n) = (1/2)[x(n) − x(−n)]
#Example 3:
Determine whether the delta function is and even or odd function of time?
Solution: Let f(t) be an arbitrary function of time that is continuous at t=t0. Then, by the
shifting property of the delta function, we get
+∞ +∞
Also,
+∞ +∞
As stated earlier, an odd function 𝑓𝑜 (𝑡) evaluated at t=0 is zero. That is 𝑓𝑜 (0) = 0.
Therefore, from the relation above will become
+∞
And this indicates that the product 𝑓𝑜 (𝑡)𝛿(𝑡) is an odd function of ‘t’. then since 𝑓𝑜 (𝑡) is
odd, it follows that 𝛿(𝑡) must be an even function of ‘t’ for the above relation to hold.
Or in general,
If the response of the system due to impulse 𝛿 (𝑡) is ℎ(𝑡) then the response of the
system due to delayed impulse is
ℎ(𝑡, 𝜏) = 𝑇[𝛿(𝑡 − 𝜏)] − − − −(5)
substituting equation (5) in equation (4) we get,
For a time-invariant system, the output due to delayed input by ‘𝜏’ is equal to delayed
output by ‘𝜏’. That is ℎ(𝑡, 𝜏) = ℎ(𝑡 − 𝜏) − − − (7)
substituting equation (7) in equation (6) we get,
y(t) = ∫_{−∞}^{+∞} x(τ) h(t − τ) dτ − − − (8)
Steps for graphical evaluation of the convolution integral:
1. For the given signals 𝑥(𝑡) and ℎ(𝑡), graph the signals 𝑥(𝜏) and ℎ(𝜏) as a function
of independent variable ′𝜏′.
2. Obtain the signal ℎ(𝑡 − 𝜏) by folding ℎ(𝜏) about 𝜏 = 0 and then time shifting by
time ‘t’.
3. Graph both the signals 𝑥(𝜏) and ℎ(𝑡 − 𝜏) on the same 𝜏-axis beginning with every
large negative time shift t.
4. Multiply the two signals 𝑥(𝜏) and ℎ(𝑡 − 𝜏) and integrate over the overlapping
intervals of two signals to obtain 𝑦(𝑡).
5. Increase the time shift ‘t’ and take the new interval whenever the function of
either 𝑥(𝜏) and ℎ(𝑡 − 𝜏) changes. The value of ‘t’ at which the change occurs
defines the end of the current interval and the beginning of the new interval.
Then calculate 𝑦(𝑡) using step 4.
6. Repeat step 5 and 4 for all intervals.
A signal x(t) is said to be causal if x(t) = 0 for t < 0.
1. Find the convolution of x₁(t) = e^{−at}u(t) and x₂(t) = e^{−bt}u(t).
Solution: Both signals x₁(t) and x₂(t) are causal. Therefore, from the convolution of causal signals, we have
y(t) = ∫_0^t x₁(τ) x₂(t − τ) dτ = ∫_0^t e^{−aτ} e^{−b(t−τ)} dτ = e^{−bt} ∫_0^t e^{−(a−b)τ} dτ = e^{−bt} · [e^{−(a−b)τ}/(−(a − b))]|_0^t
⇒ e^{−bt}[e^{−(a−b)t} − 1]/(b − a)   for t ≥ 0
y(t) = [(e^{−at} − e^{−bt})/(b − a)] u(t)
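A quick numerical check of this result in Python, for assumed example values a = 1 and b = 3:

```python
import numpy as np

# Sketch: numerical check of y(t) = (e^{-at} - e^{-bt})/(b - a) u(t) for the
# convolution of x1(t) = e^{-at}u(t) with x2(t) = e^{-bt}u(t). a, b are example values.
a, b = 1.0, 3.0
dt = 1e-3
t = np.arange(0, 10, dt)

x1 = np.exp(-a*t)
x2 = np.exp(-b*t)

y_num = np.convolve(x1, x2)[:len(t)] * dt          # discretized convolution integral
y_exact = (np.exp(-a*t) - np.exp(-b*t)) / (b - a)

print(np.max(np.abs(y_num - y_exact)))             # small discretization error
```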
2. Find the convolution of 𝑥1 (𝑡) and 𝑥2 (𝑡) for the following signals
𝑥1 (𝑡) = 𝑢(𝑡) and 𝑥2 (𝑡) = 𝑢(𝑡)
3. Find the convolution of 𝑥1 (𝑡) and 𝑥2 (𝑡) for the following signals
𝑥1 (𝑡) = 𝑡𝑢(𝑡) and 𝑥2 (𝑡) = 𝑢(𝑡)
4. Find the convolution of 𝑥1 (𝑡) and 𝑥2 (𝑡) for the following signals
𝑥1 (𝑡) = sin 𝑡 𝑢(𝑡) and 𝑥2 (𝑡) = 𝑢(𝑡)
𝑡
Solution: 𝑥1 (𝑡) ∗ 𝑥2 (𝑡) = ∫0 sin 𝜏 𝑑𝜏 = −cos 𝜏|𝑡0 = 1 − cos 𝑡 𝑓𝑜𝑟 𝑡 ≥ 0
𝒚(𝒕) = (𝟏 − 𝐜𝐨𝐬 𝒕)𝒖(𝒕)
# Practise Problem 2
Find the convolution of 𝑥1 (𝑡) and 𝑥2 (𝑡) for the following signals
1. x₁(t) = r(t) and x₂(t) = u(t)            Ans: (t²/2) u(t)
2. x₁(t) = r(t) and x₂(t) = e^{−2t} u(t)    Ans: t/2 − (1/4)(1 − e^{−2t})
1. The signals ℎ(𝑡) and 𝑢(𝑡) are shown in the figure. Compute ℎ(𝑡) ∗ 𝑢(𝑡) using the
graphical evaluation.
➢ Now evaluation of 𝑢(𝑡 − 𝜏) by ℎ(𝜏) for each value of ‘t’ and computation of area
from −∞ to + ∞. Then the product of 𝑢(𝑡 − 𝜏)ℎ(𝜏) as point ‘A’ moves to the
right.
➢ Case 1: At t=0, we observe that 𝑢(𝑡 − 𝜏)|𝑡=0 = 0 = 𝑢(−𝜏)
➢ Case 2: At t>0 and t<1, shifting 𝑢(𝑡 − 𝜏) to the right so that t>0, i.e.,
∫_0^t u(t − τ) h(τ) dτ = ∫_0^t (1)(−τ + 1) dτ = [τ − τ²/2]|_0^t = t − t²/2 − − − (2)
➢ Case 3: At t=1, the maximum area is obtained when point A reaches to t=1, i.e.,
the graphical representation shows that , 𝑢(𝜏) ∗ ℎ(𝜏) increases during the
interval 0<t<1. Then At t=1, we get
∫_0^1 u(t − τ) h(τ) dτ = ∫_0^1 (1)(−τ + 1) dτ = [τ − τ²/2]|_0^1 = 1/2 − − − (3)
➢ Case 4: 1<t<2, using the convolution integral, we find that the area for the
interval 1<t<2 i.e., from t-1 to 1
∫_{t−1}^{1} u(t − τ) h(τ) dτ = ∫_{t−1}^{1} (1)(−τ + 1) dτ = [τ − τ²/2]|_{t−1}^{1} = 1 − 1/2 − (t − 1) + (t² − 2t + 1)/2
u(t) ∗ h(t) = t²/2 − 2t + 2 − − − (4)
➢ Case 5:At t=2, we find that 𝑢(𝜏) ∗ ℎ(𝜏) = 0. For t>2, the product 𝑢(𝑡 − 𝜏) ∗ ℎ(𝜏)
is zero since there is no overlap between the two signals. Then the convolution
between these signals for 0 ≤ 𝑡 ≤ 2, is shown in the figure as,
2. The signals ℎ(𝑡) and 𝑢(𝑡) are shown in the figure. Compute ℎ(𝑡) ∗ 𝑢(𝑡) using the
graphical evaluation.
Answer:
y(t) = 0                 for t ≤ 0
y(t) = 1 − e^{−t}        for 0 < t ≤ 1,   with y(1) = 1 − e^{−1} = 0.632
y(t) = e^{−t}(e − 1)     for t ≥ 1,       with y(2) = 0.233
➢ Case 1: For t < 0 there is no overlap of the two signals and the convolution is zero.
➢ Case 2: At t = 0 there is still no overlap of the two signals and the convolution is zero.
Question 2: For the problems given in Impulse Response topic in Example 2 and
practice problem 1, find out the convolution values.
Chapter 5 Fourier Series
his chapter is an introduction to Fourier series. We begin with the definition of sinusoids that
T are harmonically related and the procedure for determining the coefficients of the trigonomet-
ric form of the series. Then, we discuss the different types of symmetry and how they can be
used to predict the terms that may be present. Several examples are presented to illustrate the
approach. The alternate trigonometric and the exponential forms are also presented.
or
∞
1
f ( t ) = --- a 0 +
2 ∑ ( a cos nωt + b sin nωt )
n n (7.2)
n=1
where the first term a 0 ⁄ 2 is a constant, and represents the DC (average) component of f ( t ) . Thus,
if f ( t ) represents some voltage v ( t ) , or current i ( t ) , the term a 0 ⁄ 2 is the average value of v ( t ) or
i(t) .
The terms with the coefficients a 1 and b 1 together, represent the fundamental frequency compo-
nent ω *. Likewise, the terms with the coefficients a 2 and b 2 together, represent the second har-
monic component 2ω , and so on.
Since any periodic waveform f ( t ) ) can be expressed as a Fourier series, it follows that the sum of the
DC , the fundamental, the second harmonic, and so on, must produce the waveform f ( t ) . Generally,
the sum of two or more sinusoids of different frequencies produce a waveform that is not a sinusoid
as shown in Figure 7.1.
Total
2nd Harmonic
Fundamental
3rd Harmonic
∫0 sin mt dt = 0 (7.3)
2π
∫0 cos m t dt = 0 (7.4)
2π
The integrals of (7.3) and (7.4) are zero since the net area over the 0 to 2π area is zero. The integral
of (7.5) is also is zero since
1
sin x cos y = --- [ sin ( x + y ) + sin ( x – y ) ]
2
This is also obvious from the plot of Figure 7.2, where we observe that the net shaded area above
and below the time axis is zero.
sin x cos x
sin x ⋅ cos x
2π
Figure 7.2. Graphical proof of ∫0 ( sin mt ) ( cos nt ) dt = 0
since
1
( sin x ) ( sin y ) = --- [ cos ( x – y ) – cos ( x – y ) ]
2
The integral of (7.6) can also be confirmed graphically as shown in Figure 7.3, where m = 2 and
n = 3 . We observe that the net shaded area above and below the time axis is zero.
2π
Figure 7.3. Graphical proof of ∫0 ( sin mt ) ( sin nt ) dt = 0 for m = 2 and n = 3
2π
since
1
( cos x ) ( cos y ) = --- [ cos ( x + y ) + cos ( x – y ) ]
2
The integral of (7.7) can also be confirmed graphically as shown in Figure 7.4, where m = 2 and
n = 3 . We observe that the net shaded area above and below the time axis is zero.
2π
Figure 7.4. Graphical proof of ∫0 ( cos m t ) ( cos nt ) dt = 0 for m = 2 and n = 3
∫0
2
( sin mt ) dt = π (7.8)
and
2π
∫0
2
( cos m t ) dt = π (7.9)
The integrals of (7.8) and (7.9) can also be seen to be true graphically with the plots of Figures 7.5
and 7.6.
It was stated earlier that the sine and cosine functions are orthogonal to each other. The simplifica-
tion obtained by application of the orthogonality properties of the sine and cosine functions,
becomes apparent in the discussion that follows.
2
sin x
sin x
2π
∫0
2
Figure 7.5. Graphical proof of ( sin mt ) dt = π
2
cos x
cos x
2π
∫0
2
Figure 7.6. Graphical proof of ( cos m t ) dt = π
To evaluate any coefficient, say b 2 , we multiply both sides of (7.10) by sin 2t . Then,
1
f ( t ) sin 2t = --- a 0 sin 2t + a 1 cos t sin 2t + a 2 cos 2t sin 2t + a 3 cos 3t sin 2t + a 4 cos 4t sin 2t + …
2
2
b 1 sin t sin 2t + b 2 ( sin 2t ) + b 3 sin 3t sin 2t + b 4 sin 4t sin 2t + …
Next, we multiply both sides of the above expression by dt , and we integrate over the period 0 to
2π . Then,
2π 2π 2π 2π
1
∫0 f ( t ) sin 2t dt = --- a 0
2 ∫0 sin 2t dt + a 1 ∫0 cos t sin 2t dt + a 2 ∫0 cos 2t sin 2t dt
2π 2π
+ a3 ∫0 cos 3t sin 2t dt + a 4 ∫0 cos 4t sin 2t dt + …
(7.11)
2π 2π 2π
∫0 ∫0 ∫0
2
+ b1 sin t sin 2t dt + b 2 ( sin 2t ) dt + b 3 sin 3t sin 2t dt
2π
+ b4 ∫0 sin 4t sin 2t dt + …
We observe that every term on the right side of (7.11) except the term
2π
∫0 ( sin 2t ) dt
2
b2
∫0 f ( t ) sin 2t dt = b 2 ∫0 ( sin 2t ) dt = b 2 π
2
or
2π
1
b 2 = ---
π ∫0 f ( t ) sin 2t dt
and thus we can evaluate this integral for any given function f ( t ) . The remaining coefficients can be
evaluated similarly.
The coefficients a 0 , a n , and b n are found from the following relations.
2π
1 1-
--- a 0 = -----
2 2π ∫0 f ( t ) dt (7.12)
2π
1
a n = ---
π ∫0 f ( t ) cos nt dt (7.13)
2π
1
b n = ---
π ∫0 f ( t ) sin nt dt (7.14)
7.3 Symmetry
With a few exceptions such as the waveform of Example 7.6, the most common waveforms that are
used in science and engineering, do not have the average, cosine, and sine terms all present. Some
waveforms have cosine terms only, while others have sine terms only. Still other waveforms have or
have not DC components. Fortunately, it is possible to predict which terms will be present in the
trigonometric Fourier series, by observing whether or not the given waveform possesses some kind
of symmetry.
We will discuss three types of symmetry that can be used to facilitate the computation of the trigo-
nometric Fourier series form. These are:
1. Odd symmetry − If a waveform has odd symmetry, that is, if it is an odd function, the series will
consist of sine terms only. In other words, if f ( t ) is an odd function, all the a i coefficients includ-
ing a 0 , will be zero.
2. Even symmetry − If a waveform has even symmetry, that is, if it is an even function, the series will
consist of cosine terms only, and a 0 may or may not be zero. In other words, if f ( t ) is an even
function, all the b i coefficients will be zero.
3. Half-wave symmetry − If a waveform has half-wave symmetry (to be defined shortly), only odd
(odd cosine and odd sine) harmonics will be present. In other words, all even (even cosine and
even sine) harmonics will be zero.
We defined odd and even functions in Chapter 6. We recall that odd functions are those for which
–f ( –t ) = f ( t ) (7.15)
and even functions are those for which
f ( –t ) = f ( t ) (7.16)
Examples of odd and even functions were given in Chapter 6. Generally, an odd function has odd
powers of the independent variable t , and an even function has even powers of the independent
variable t . Thus, the product of two odd functions or the product of two even functions will result
in an even function, whereas the product of an odd function and an even function will result in an
odd function. However, the sum (or difference) of an odd and an even function will yield a function
which is neither odd nor even.
To understand half-wave symmetry, we recall that any periodic function with period T , is expressed
as
f(t) = f(t + T) (7.17)
that is, the function with value f ( t ) at any time t , will have the same value again at a later time t + T .
T
A
T/2
f(b) π 2π
ωt
0
T/2
f(a)
−A
Obviously, if the ordinate axis is shifted by any other value other than an odd multiple of π ⁄ 2 ,
the waveform will have neither odd nor even symmetry.
−π/2 π/2 2π
−2π −π π ωt
0
T/2 T/2
−A
3. Sawtooth waveform
T
A
−2π −π π 2π
ωt
0
T/2 T/2
−A
−2π
−π ωt
0 π 2π
T/2
−A T/2
For this triangular waveform of Figure 7.10, the average value over one period T is zero and
therefore, a 0 = 0 . It is also an odd function since – f ( – t ) = f ( t ) . Moreover, it has half-wave
symmetry because – f ( t + T ⁄ 2 ) = f ( t )
5. Fundamental, Second and Third Harmonics of a Sinusoid
Figure 7.11 shows a fundamental, second, and third harmonic of a typical sinewave.
−a
−c
Figure 7.11. Fundamental, second, and third harmonic test for symmetry
In Figure 7.11, the half period T ⁄ 2 , is chosen as the half period of the period of the fundamental
frequency. This is necessary in order to test the fundamental, second, and third harmonics for half-
wave symmetry. The fundamental has half-wave symmetry since the a and – a values, when sepa-
rated by T ⁄ 2 , are equal and opposite. The second harmonic has no half-wave symmetry because the
ordinates b on the left and b on the right, although are equal, there are not opposite in sign. The
third harmonic has half-wave symmetry since the c and – c values, when separated by T ⁄ 2 are
equal and opposite. These waveforms can be either odd or even depending on the position of the
ordinate. Also, all three waveforms have zero average value unless the abscissa axis is shifted up or
down.
In the expressions of the integrals in (7.12) through (7.14), the limits of integration for the coeffi-
cients a n and b n are given as 0 to 2π , that is, one period T . Of course, we can choose the limits of
integration as – π to +π . Also, if the given waveform is an odd function, or an even function, or has
half-wave symmetry, we can compute the non-zero coefficients a n and b n by integrating from 0 to
π only, and multiply the integral by 2 . Moreover, if the waveform has half-wave symmetry and is
also an odd or an even function, we can choose the limits of integration from 0 to π ⁄ 2 and multiply
the integral by 4 . The proof is based on the fact that, the product of two even functions is another
even function, and also that the product of two odd functions results also in an even function. How-
ever, it is important to remember that when using these shortcuts, we must evaluate the coefficients
a n and b n for the integer values of n that will result in non-zero coefficients. This point will be
illustrated in Example 7.2.
We will now derive the trigonometric Fourier series of the most common periodic waveforms.
T
A
π 2π
ωt
0
−A
Solution:
The trigonometric series will consist of sine terms only because, as we already know, this waveform
is an odd function. Moreover, only odd harmonics will be present since this waveform has half-wave
symmetry. However, we will compute all coefficients to verify this. Also, for brevity, we will assume
that ω = 1 .
The a i coefficients are found from
2π π 2π
1 1 A π
∫0 ∫0 ∫π
2π
a n = --- f ( t ) cos nt dt = --- A cos nt dt + ( – A ) cos nt dt = ------ ( sin nt 0 – sin nt )
π π nπ π
(7.19)
A A
= ------ ( sin nπ – 0 – sin n2π + sin nπ ) = ------ ( 2 sin nπ – sin n2π )
nπ nπ
and since n is an integer (positive or negative) or zero, the terms inside the parentheses on the sec-
ond line of (7.19) are zero and therefore, all a i coefficients are zero, as expected since the square
waveform has odd symmetry. Also, by inspection, the average ( DC ) value is zero, but if we attempt
to verify this using (7.19), we will get the indeterminate form 0 ⁄ 0 . To work around this problem, we
will evaluate a 0 directly from (7.12). Then,
π 2π
1 A
a 0 = ---
π ∫0 A dt + ∫π ( – A ) dt = --- ( π – 0 – 2π + π ) = 0
π
(7.20)
4A
b 3 = -------
3π
4A
b 5 = -------
5π
and so on.
Therefore, the trigonometric Fourier series for the square waveform with odd symmetry is
It was stated above that, if the given waveform has half-wave symmetry, and it is also an odd or an
even function, we can integrate from 0 to π ⁄ 2 , and multiply the integral by 4 . We will apply this
property to the following example.
Example 7.2
Compute the trigonometric Fourier series of the square waveform of Example 1 by integrating from
0 to π ⁄ 2 , and multiplying the result by 4 .
Solution:
Since the waveform is an odd function and has half-wave symmetry, we are only concerned with the
odd b n coefficients. Then,
Example 7.3
Compute the trigonometric Fourier series of the square waveform of Figure 7.13.
Solution:
This is the same waveform as in Example 7.1, except that the ordinate has been shifted to the right
by π ⁄ 2 radians, and has become an even function. However, it still has half-wave symmetry. There-
fore, the trigonometric Fourier series will consist of odd cosine terms only.
Since the waveform has half-wave symmetry and is an even function, it will suffice to integrate from
0 to π ⁄ 2 , and multiply the integral by 4 . The a n coefficients are found from
π⁄2 π⁄2
π
= ------- ⎛ sin n ---⎞
1 4 4A π⁄2 4A
a n = 4 ---
π ∫0 f ( t ) cos nt dt = ---
π ∫0 A cos nt dt = ------- ( sin nt
nπ 0
)
nπ ⎝ 2⎠
(7.25)
We observe that for n = even , all a n coefficients are zero, and thus all even harmonics are zero as
expected. Also, by inspection, the average ( DC ) value is zero.
T
A
π/2 3π / 2
π
ωt
0 2π
−A
π
For n = odd , we observe from (7.25) that sin n --- , will alternate between +1 and – 1 depending on
2
the odd integer assigned to n . Thus,
4A
a n = ± ------- (7.26)
nπ
(n – 1)
----------------
f ( t ) = ------- ⎛ cos ω t – --- cos 3ωt + --- cos 5ωt – …⎞ = -------
4A 1 4A 2 1
∑
1
( –1 ) --- cos n ωt (7.27)
π⎝ 3 5 ⎠ π n
n = odd
Alternate Solution:
Since the waveform of Example 7.3 is the same as of Example 7.1, but shifted to the right by π ⁄ 2
radians, we can use the result of Example 7.11, i.e.,
and substitute ωt with ωt + π ⁄ 2 , that is, we let ωt = ωτ + π ⁄ 2 . With this substitution, relation
(7.28) becomes
and using the identities sin ( x + π ⁄ 2 ) = cos x , sin ( x + 3π ⁄ 2 ) = – cos x , and so on, we rewrite (7.29)
as
4A 1 1
f ( τ ) = ------- cos ωτ – --- cos 3ωτ + --- cos 5ωτ – … (7.30)
π 3 5
Example 7.4
Compute the trigonometric Fourier series of the sawtooth waveform of Figure 7.14.
T
−π π
ωt
−2π 0 2π
−A
Solution:
This waveform is an odd function but has no half-wave symmetry; therefore, it contains sine terms
only with both odd and even harmonics. Accordingly, we only need to evaluate the b n coefficients.
⎧ A --- t 0<t<π
⎪ π
f(t) = ⎨
⎪A--- t – 2A π < t < 2π
⎩π
However, we can choose the limits from – π to +π , and thus we will only need one integration since
A
f ( t ) = --- t –π < t < π
π
Better yet, since the waveform is an odd function, we can integrate from 0 to π , and multiply the
integral by 2 ; this is what we will do.
From tables of integrals,
1
∫ x sin ax dx = ----2- sin a x – --x- cos ax
a a
(7.31)
Then,
π π π
t sin nt dt = ------2- ⎛ ----2- sin nt – --- cos nt⎞
2 A
--- t sin nt dt = 2A 2A 1 t
b n = ---
π ∫0 π
-------
π2 ∫0 π ⎝n n ⎠
0 (7.32)
2A π 2A
- ( sin nt – nt cos nt )
= ---------- - ( sin nπ – nπ cos nπ )
= ----------
n2 π2 0 n2 π2
We observe that:
1. If n = even , sin nπ = 0 and cos nπ = 1 . Then, (7.32) reduces to
2A
- ( – nπ ) = – 2A
b n = ---------- -------
n2 π2 nπ
that is, the even harmonics have negative coefficients.
2. If n = odd , sin nπ = 0 , cos nπ = – 1 . Then,
2A
- ( nπ ) = 2A
b n = ---------- -------
n2 π2 nπ
that is, the odd harmonics have positive coefficients.
Thus, the trigonometric Fourier series for the sawtooth waveform with odd symmetry is
f ( t ) = ------- ⎛ sin ωt – --- sin 2ωt + --- sin 3ωt – --- sin 4ωt + …⎞ = -------
2A 1 1 1 2A n–1 1
π⎝ 2 3 4 ⎠ π ∑ ( –1 ) --- sin nωt
n
(7.33)
Example 7.5
Find the trigonometric Fourier series of the triangular waveform of Figure 7.15. Assume ω = 1 .
T
A
−2π
−π ωt
0 π/2 π 2π
−A
1
∫ x sin ax dx
x
= -----2 sin a x – --- cos ax (7.34)
a a
Then,
π⁄2 π⁄2 π⁄2
t sin nt dt = ------2- ⎛ -----2 sin nt – --- cos nt⎞
4 2A 8A 8A 1 t
b n = ---
π ∫0 ------- t sin nt dt = ------2-
π π ∫0 π n ⎝ n ⎠
0
(7.35)
π π π
= ----------2 ⎛ sin n --- – n --- cos n --- ⎞
8A π⁄2 8A
= ----------2 ( sin nt – nt cos nt ) 0 ⎝ 2 2 2⎠
n π n π
2 2
(n – 1)
----------------
f ( t ) = ------2- ⎛ sin ω t – 1
--- sin 3ωt + ------ sin 5ωt – ------ sin 7ωt + …⎞ = ------2-
8A 1 1 8A 1-
∑
2
( –1 ) ---- sin n ωt (7.36)
⎝ 25 49 ⎠ 2
π 9 π n = odd n
Example 7.6
The circuit of Figure 7.16 is a half-wave rectifier whose input is the sinusoid v in ( t ) = sin ωt , and its
output v out ( t ) is defined as
Solution:
The input and output waveforms for this example are shown in Figures 7.17 and 7.18.
+
+ v out ( t )
v in ( t ) R
− −
−2π −π 0 π 2π
Figure 7.19. Half-wave rectifier waveform for the circuit of Figure 7.16
By inspection, the average is a non-zero value, and the waveform has neither odd nor even symme-
try. Therefore, we expect all terms to be present.
Then,
π
A ⎧ 1 cos ( 1 – n )t cos ( 1 + n )t ⎫
a n = --- ⎨ – --- --------------------------- + ---------------------------- ⎬
π ⎩ 2 1–n 1+n
0⎭
(7.38)
A- ⎧ ----------------------------
= – ----- cos ( π + nπ )- – -----------
cos ( π – nπ -) + ----------------------------- 1 - ⎫
1 - + -----------
⎨ ⎬
2π ⎩ 1–n 1+n 1–n n+1 ⎭
a 0 = 2A ⁄ π
Therefore, the DC value is
1
--- a 0 = A
--- (7.40)
2 π
We cannot use (7.39) to obtain the value of a 1 ; therefore, we will evaluate the integral
π
A
a 1 = ---
π ∫0 sin t cos t dt
From tables of integrals,
1
∫ ( sin ax ) ( cos ax ) dx
2
= ------ ( sin ax )
2a
and thus,
π
A 2
a 1 = ------ ( sin t ) = 0 (7.41)
2π 0
A cos 2π + 1 ⎞
a 2 = --- ⎛ ------------------------
- = – 2A
------- (7.42)
π ⎝ ( 1 – 22 ) ⎠ 3π
A ( cos 3π + 1 ) = 0
a 3 = --------------------------------- (7.43)
2
π(1 – 3 )
A ( cos 4π + 1 ) 2A
a 4 = --------------------------------- = – --------- (7.44)
2 15π
π(1 – 4 )
A ( cos 6π + 1 ) 2A
a 6 = --------------------------------- = – --------- (7.45)
2 35π
π(1 – 6 )
A ( cos 8π + 1 ) 2A
a 8 = --------------------------------- = – --------- (7.46)
2 63π
π(1 – 8 )
and so on.
Now, we need to evaluate the bn coefficients. For this example,
2π π 2π
1 A A
b n = A ---
π ∫0 f ( t ) sin nt dt = ---
π ∫0 sin t sin nt dt + ---
π ∫π 0 sin nt dt
Therefore,
π
A 1 ⎧ sin ( 1 – n )t sin ( 1 + n )t ⎫
b n = --- ⋅ --- ⎨ -------------------------- – --------------------------- ⎬
π 2⎩ 1–n 1+n
0⎭
A sin ( 1 – n )π sin ( 1 + n )π
= ------ ---------------------------- – ---------------------------- – 0 + 0 = 0 ( n ≠ 1 )
2π 1–n 1+n
Combining (7.40), and (7.42) through (7.47), we find that the trigonometric Fourier series for the
half-wave rectifier with no symmetry is
Example 7.7
Figure 7.20 shows a full-wave rectifier circuit with input the sinusoid v in ( t ) = A sin ωt . The output
of that circuit is v out ( t ) = A sin ωt . Express v out ( t ) as a trigonometric Fourier series. Assume
ω = 1.
+ R
v in ( t ) − +
− +
v out ( t )
−
Therefore, we expect only cosine terms to be present. The a n coefficients are found from
vIN(t)
π 2π
-A
A
vOUT(t)
π 2π
−2π −π 0 π 2π
2π
1
a n = ---
π ∫0 f ( t ) cos nt dt
π π
1 2A
a n = ---
π ∫ –π
A sin t cos nt dt = -------
π ∫0 sin t cos nt dt (7.49)
cos ( m + n )x
cos ( m – n )x- – -----------------------------
∫ ( sin mx ) ( cos nx ) dx = -----------------------------
2(n – m) 2(m + n)
- ( m2 ≠ n2 )
Since
cos ( x – y ) = cos ( y – x ) = cos x cos y + sin xsiny
we express (7.49) as
π
2A 1 ⎧ cos ( n – 1 )t cos ( n + 1 )t ⎫
a n = ------- ⋅ --- ⎨ --------------------------- – ---------------------------- ⎬
π 2⎩ n–1 n+1
0⎭
A 1 – cos ( nπ + π ) cos ( nπ – π ) – 1
= --- --------------------------------------- + --------------------------------------
π n+1 n–1
To simplify the last expression in (7.50), we make use of the trigonometric identities
cos ( nπ + π ) = cos nπ cos π – sin nπsinπ = – cos nπ
and
cos ( nπ – π ) = cos nπ cos π + sin nπsinπ = – cos nπ
Then, (7.50) simplifies to
A 1 + cos nπ 1 + cos nπ A – 2 + ( n – 1 ) cos nπ – ( n + 1 ) cos nπ
- – ------------------------- = --- -------------------------------------------------------------------------------------
a n = --- ------------------------
π n+1 n–1 π 2
n –1
(7.51)
– 2A ( cos nπ + 1 -)
= ---------------------------------------
2
n≠1
π(n – 1)
Now, we can evaluate all the a n coefficients, except a 1 , from (7.51). First, we will evaluate a 0 to
obtain the DC value. By substitution of n = 0 , we get
4A
a 0 = -------
π
Therefore, the DC value is
1
--- a 0 = 2A
------- (7.52)
2 π
π
1
a 1 = ---
π ∫0 sin t cos t dt
From tables of integrals,
1
∫ ( sin ax ) ( cos ax ) dx
2
= ------ ( sin ax )
2a
and thus,
π
1 2
a 1 = ------ ( sin t ) = 0 (7.53)
2π 0
2A ( cos 4π + 1 )- = – --------
a 4 = –--------------------------------------- 4A- (7.55)
2 15π
π(4 – 1)
2A ( cos 6π + 1 )- = – --------
a 6 = –--------------------------------------- 4A- (7.56)
2 35π
π(6 – 1)
– 2A ( cos 8π + 1 ) 4A
a 8 = ---------------------------------------- = – --------- (7.57)
2 63π
π(8 – 1)
and so on. Then, combining the terms of (7.52) and (7.54) through (7.57) we get
Therefore, the trigonometric form of the Fourier series for the full-wave rectifier with even symmetry
is
∞
4A 1 -
f ( t ) = 2A
------- – -------
π π ∑ ------------------
( n
2
– 1 )
cos nωt (7.59)
n = 2 , 4, 6 , …
This series of (7.59) shows that there is no component of the input (fundamental) frequency. This is
because we chose the period to be from – π and +π . Generally, the period is defined as the shortest
period of repetition. In any waveform where the period is chosen appropriately, it is very unlikely
that a Fourier series will consist of even harmonic terms only.
is
f ( t ) = ------- ⎛ sin ωt + --- sin 3ωt + --- sin 5ωt + …⎞ = -------
4A 1 1 4A 1
π⎝ 3 5 ⎠ π ∑ --- sin nωt
n
n = odd
Figure 7.24 shows the first 11 harmonics and their sum. We observe that as we add more and more
harmonics, the sum looks more and more like the square waveform. However, the crests do not get
flattened; this is known as Gibbs phenomenon and it occurs because of the discontinuity of the per-
fect square waveform as it changes from +A to – A .
Crest (Gibbs Phenomenon)
If a given waveform does not have any kind of symmetry, it may be advantageous of using the alter-
nate form of the trigonometric Fourier series where the cosine and sine terms of the same frequency
are grouped together, and the sum is combined to a single term, either cosine or sine. However, we
still need to compute the a n and b n coefficients separately.
We use the triangle shown in Figure 7.25 for the derivation of the alternate forms.
cn = an + bn
ϕn
cn an a bn b
bn cos θ n = --------------------
- = ----n- sin θ n = --------------------
- = ----n-
an + bn cn an + bn cn
θn
an
b a
cos θ n = sin ϕ n θ n = atan ----n- ϕ n = atan ----n-
an bn
Figure 7.25. Derivation of the alternate form of the trigonometric Fourier series
⎧
⎪
⎪
⎪
⎪
⎨
⎪
⎪
⎪
⎪
⎩
2 ⎝ cos ( t – θ 1 ) ⎠ ⎝ cos ( 2t – θ 2 ) ⎠
cos θ n cos nt + sin θ n sin nt
+ cn ⎛ ⎞
⎧
⎪
⎪
⎪
⎪
⎨
⎪
⎪
⎪
⎪
⎩
⎝ cos ( nt – θ n ) ⎠
∞ ∞ bn
∑ cn cos ⎛⎝ nωt – atan ----
-⎞
1 1
f ( t ) = --- a 0 +
2 ∑ cn cos ( nωt – θn ) = --- a 0 +
2 a n⎠
(7.61)
n=1 n=1
Similarly,
1 sin ϕ 1 cos t + cos ϕ 1 sin t
f ( t ) = --- a 0 + c 1 ⎛ ⎞
⎧
⎪
⎪
⎪
⎪
⎨
⎪
⎪
⎪
⎪
⎩
2 ⎝ sin ( t + ϕ 1 ) ⎠
sin ϕ 2 cos 2t + cos ϕ 2 sin 2t sin ϕ n cos nt + cos ϕ n sin nt
c2 ⎛ ⎞ + … + cn ⎛ ⎞
⎧
⎪
⎪
⎪
⎪
⎨
⎪
⎪
⎪
⎪
⎩
⎧
⎪
⎪
⎪
⎪
⎨
⎪
⎪
⎪
⎪
⎩
⎝ sin ( 2t + ϕ 2 ) ⎠ ⎝ sin ( nt + ϕ n ) ⎠
∞ ∞ an
∑ cn sin ⎛⎝ nωt + atan ----
-⎞
1 1
f ( t ) = --- a 0 +
2 ∑ c n sin ( nωt + ϕ n ) = --- a 0 +
2 b n⎠
(7.62)
n=1 n=1
When used in circuit analysis, (7.61) and (7.62) can be expressed as phasors. Since it is customary to
use the cosine function in the time domain to phasor transformation, we choose to use the transfor-
mation of (7.63) below.
∞ b ∞ bn
c n cos ⎛ nωt – atan ----n-⎞ ⇔ --- a 0 +
1 1
--- a 0 +
2 ∑ ⎝ an ⎠ 2 ∑ cn ∠– atan ----
an
- (7.63)
n=1 n=1
Example 7.8
Find the first 5 terms of the alternate form of the trigonometric Fourier series for the waveform of
Figure 7.26.
f( t)
3
1
ω = 1
t
π/2 π 3π/2 2π
Solution:
The given waveform has no symmetry; thus, we expect both cosine and sine functions with odd and
even terms present. Also, by inspection the DC value is not zero.
We will compute the a n and b n coefficients, the DC value, and we will combine them to get an
expression in the form of (7.63). Then,
π⁄2 2π π⁄2 2π
1 1 3 1
a n = ---
π ∫0 ( 3 ) cos nt dt + ---
π ∫ π⁄2
( 1 ) cos nt dt = ------ sin nt
nπ 0
+ ------ sin nt
nπ π⁄2 (7.64)
3 π 1 1 π 2 π
= ------ sin n --- + ------ sin n2π – ------ sin n --- = ------ sin n ---
nπ 2 nπ nπ 2 nπ 2
2
a 1 = --- (7.65)
π
and
2
a 3 = – ------ (7.66)
3π
The DC value is
π⁄2 2π
1 1- 1 1 π⁄2
∫0 ∫π ⁄ 2 ( 1 ) d t
2π
--- a 0 = ----- ( 3 ) dt + ------ = ------ ( 3t 0
+t π⁄2
)
2 2π 2π 2π
(7.67)
π
= ------ ⎛ ------ + 2π – ---⎞ = ------ ( π + 2π ) = ---
1 3π 1 3
2π ⎝ 2 2⎠ 2π 2
π⁄2 2π π⁄2 2π
1 1 –3 –1
b n = ---
π ∫0 ( 3 ) sin nt dt + ---
π ∫ π⁄2
( 1 ) sin nt dt = ------ cos nt
nπ 0
+ ------ cos nt
nπ π⁄2 (7.68)
–3 π 3 –1 1 π 1 2
= ------ cos n --- + ------ + ------ cos n2π + ------ cos n --- = ------ ( 3 – cos n2π ) = ------
nπ 2 nπ nπ nπ 2 nπ nπ
Then,
b1 = 2 ⁄ π (7.69)
b2 = 1 ⁄ π (7.70)
b 3 = 2 ⁄ 3π (7.71)
b 4 = 1 ⁄ 2π (7.72)
From (7.63),
∞ b ∞ bn
c n cos ⎛ nωt – atan ----n-⎞ ⇔ --- a 0 +
1 1
--- a 0 +
2 ∑ ⎝ an ⎠ 2 ∑ cn ∠– atan ----
an
-
n=1 n=1
where
b 2 2 b 2 2
c n ∠– atan ----n- = a n + b n ∠– atan ----n- = a n + b n ∠– θ n = a n – jb n (7.73)
an an
2 2 2 2 2 2
a 3 – jb 3 = – ------ – j ------ = ---------- ∠– 135° ⇔ ---------- cos ( 3ωt – 135° ) (7.76)
3π 3π 3π 3π
and
1 1 1
a 4 – jb 4 = 0 – j ------ = ------ ∠– 90° ⇔ ------ cos ( 4ωt – 90° ) (7.77)
2π 2π 2π
Combining the terms of (7.67) and (7.74) through (7.77), we find that the alternate form of the trig-
onometric Fourier series representing the waveform of this example is
3 1
f ( t ) = --- + ---
2 π
[2 2 cos ( ωt – 45° ) + cos ( 2ωt – 90° )
(7.78)
2 2 1
+ ---------- cos ( 3ωt – 135° ) + --- cos ( 4ωt – 90° ) + …
3 2
]
Example 7.9
The input to the series RC circuit of Figure 7.27, is the square waveform of Figure 7.28. Compute
the voltage v c ( t ) across the capacitor. Consider only the first three terms of the series, and assume
ω = 1.
R=1Ω
+
+ vC ( t )
− −
v in ( t ) C =1F
v in ( t ) T
A
π 2π ωt
0
−A
Since this series is the sum of sinusoids, we will use phasor analysis to obtain the solution.
The equivalent phasor circuit is shown in Figure 7.29.
R=1
C +
+ VC
− 1- −
-----
V in jω
1 ⁄ ( jnω ) 1
V C n = ------------------------------ V in n = -------------- V in n (7.80)
1 + 1 ⁄ ( jnω ) 1 + jn
With reference to (7.79) the phasors of the first 3 terms of (7.80) are
4A 4A 4A
------- sin t = ------- cos ( t – 90° ) ⇔ V in1 = ------- ∠– 90° (7.81)
π π π
4A 1 4A 1 4A 1
------- ⋅ --- sin 2t = ------- ⋅ --- cos ( 2t – 90° ) ⇔ V = ------- ⋅ --- ∠– 90° (7.82)
π 3 π 3 in 2 π 3
------- ⋅ 1
4A ------- ⋅ 1
--- sin 3t = 4A --- cos ( 3t – 90° ) ⇔ V 4A 1
= ------- ⋅ --- ∠– 90° (7.83)
π 5 π 5 in 3 π 5
By substitution of (7.81) through (7.83) into (7.80), we obtain the phasor and time domain voltages
indicated in (7.84) through (7.86) below.
1 4A 1 4A
V C 1 = ----------- ⋅ ------- ∠– 90° = -------------------- ⋅ ------- ∠– 90°
1+j π 2 ∠45° π
(7.84)
4A 2 4A 2
= ------- ⋅ ------- ∠– 135° ⇔ ------- ⋅ ------- cos ( t – 135° )
π 2 π 2
1 4A 1 1 4A 1
V C 2 = -------------- ⋅ ------- ⋅ --- ∠– 90° = ------------------------- ------- ⋅ --- ∠– 90°
1 + j2 π 3 5 ∠63.4° π 3
(7.85)
4A 5 4A 5
= ------- ⋅ ------- ∠– 153.4° ⇔ ------- ⋅ ------- cos ( 2t – 153.4° )
π 15 π 15
1 4A 1 1 4A 1
V C 3 = -------------- ⋅ ------- ⋅ --- ∠– 90° = ---------------------------- ------- ⋅ --- ∠– 90°
1 + j3 π 5 10 ∠71.6° π 5
(7.86)
4A 10 4A 10
= ------- ⋅ ---------- ∠– 161.6° ⇔ ------- ⋅ ---------- cos ( 3t – 161.6° )
π 50 π 50
Thus, the capacitor voltage in the time domain is
4A 2 5 10
v C ( t ) = ------- ------- cos ( t – 135° ) + ------- cos ( 2t – 153.4° ) + ---------- cos ( 3t – 161.6° ) + … (7.87)
π 2 15 50
into f ( t ) . Thus,
jωt – jωt j2ωt – j2ωt
+e +e
f ( t ) = --- a 0 + a 1 ⎛ ---------------------------- ⎞ + a 2 ⎛ --------------------------------- ⎞ +
1 e e
⎝ ⎠ ⎝ ⎠
(7.90)
2 2 2
jωt – jωt j2ωt – j2ωt
–e –e
… + b 1 ⎛ ---------------------------⎞ + b 2 ⎛ --------------------------------⎞ + …
e e
⎝ j2 ⎠ ⎝ j2 ⎠
f ( t ) = … + ⎛ -----2 – -----2 ⎞ e
– j2ωt ⎛ a 1 b 1 ⎞ – jωt 1
+ --- a 0 + ⎛ -----1 + -----1⎞ e + ⎛ -----2 + -----2⎞ e
a b a b jωt a b j2ωt
+ ----- – ----- e (7.91)
⎝ 2 j2 ⎠ ⎝ 2 j2 ⎠ 2 ⎝ 2 j2 ⎠ ⎝ 2 j2⎠
b
C n = --- ⎛ a n + ----n-⎞ = --- ( a n – j b n )
1 1
(7.93)
2⎝ j⎠ 2
1
C 0 = --- a 0 (7.94)
2
Then, (7.91) is written as
We must remember that the C i coefficients, except C 0 , are complex and occur in complex conju-
gate pairs, that is,
C –n = C n∗ (7.96)
We can derive a general expression for the complex coefficients C n , by multiplying both sides of
– jnωt
(7.95) by e and integrating over one period, as we did in the derivation of the a n and b n coeffi-
cients of the trigonometric form. Then, with ω = 1 ,
2π 2π 2π
– jnt – j2t – jnt – jt – jnt
∫0 f ( t )e dt = … + ∫0 C–2 e e dt + ∫0 C–1 e e dt (7.97)
2π 2π
– jnt jt – jnt
+ ∫0 C0 e dt + ∫0 C1 e e dt
2π 2π
j2t – jnt jnt – jnt
+ ∫0 C2 e e dt + … + ∫0 Cn e e dt
We observe that all the integrals on the right side of (7.97) are zero except the last one. Therefore,
2π 2π 2π
– jnt jnt – jnt
∫0 f ( t )e dt = ∫0 Cn e e dt = ∫0 C n dt = 2πC n
or
2π
1 – jnt
C n = ------
2π ∫0 f ( t )e dt
2π
1 – jnωt
C n = ------
2π ∫0 f ( t )e d( ωt ) (7.98)
or
T
1 – jnωt
C n = ---
T ∫0 f ( t )e d( ωt ) (7.99)
We can derive the trigonometric Fourier series from the exponential series by addition and subtrac-
tion of the exponential form coefficients C n and C –n . Thus, from (7.92) and (7.93),
1
C n + C – n = --- ( a n – jb n + a n + jb n )
2
or
a n = C n + C –n (7.100)
Similarly,
1
C n – C – n = --- ( a n – jb n – a n – j b n ) (7.101)
2
or
b n = j ( Cn – C–n ) (7.102)
Since odd functions have no cosine terms, the a n coefficients in (7.103) and (7.104) are zero.
Therefore, both C –n and C n are imaginary.
We recall from the trigonometric Fourier series that if there is half-wave symmetry, all even har-
monics are zero. Therefore, in (7.103) and (7.104) the coefficients a n and b n are both zero for
n = even , and thus, both C – n and C n are also zero for n = even .
5. C –n = C n∗ always
π 2π
ωt
0
−A
and for n = 0 ,
π 2π
1 –0 –0 A
C 0 = ------
2π ∫0 Ae dt + ∫π ( – A )e dt = ------ ( π – 2π + π ) = 0
2π
as expected.
For n ≠ 0 ,
π 2π π 2π
1 – jnt – jnt 1 A – jnt – A – jnt
C n = ------
2π ∫0 Ae dt + ∫π –A e dt = ------ -------- e
2π – jn 0
+ -------- e
– jn π
Cn A – jnπ 2 A 2
= ------------ ( e – 1 ) = ------------ ( 1 – 1 ) = 0 (7.106)
n = even 2jπn 2jπn
as expected.
– jnπ
For n = odd , e = – 1 . Therefore,
A – jnπ 2 A 2 A 2 2A
Cn = ------------ ( e – 1 ) = ------------ ( – 1 – 1 ) = ------------ ( – 2 ) = -------- (7.107)
n = odd 2jπn 2jπn 2jπn jπn
Using (7.95), that is,
– j2ωt – jωt jωt j2ωt
f ( t ) = … + C–2 e + C–1 e + C0 + C1 e + C2 e +…
we obtain the exponential Fourier series for the square waveform with odd symmetry as
f ( t ) = ------- ⎛ … – --- e
2A 1 – j3ωt – jωt jωt 1 j3ωt ⎞ 2A 1
∑
jnωt
–e +e + --- e = ------- --- e (7.108)
jπ ⎝ 3 3 ⎠ jπ n
n = odd
The minus ( −) sign of the first two terms within the parentheses results from the fact that
C –n = C n∗ . For instance, since C 3 = 2A ⁄ j3π , it follows that C – 3 = C 3∗ = – 2A ⁄ j3π . We observe
that f ( t ) is purely imaginary, as expected, since the waveform is an odd function.
To prove that (7.108) and (7.22) are the same, we group the two terms inside the parentheses of
(7.108) for which n = 1 ; this will produce the fundamental frequency sin ωt . Then, we group the
two terms for which n = 3 , and this will produce the third harmonic sin 3ωt , and so on.
1. Theory
1
f(x )
P E R IO D = L
Toc JJ II J I Back
Section 1: Theory 4
where symbols with subscript 1 are constants that determine the am-
plitude and phase of this first approximation
Here, symbols with subscripts are constants that determine the am-
plitude and phase of each harmonic contribution
Toc JJ II J I Back
Section 1: Theory 5
3
One can even approximate a square-wave pattern with a suitable sum
that involves a fundamental sine-wave plus a combination of harmon-
ics of this fundamental frequency. This sum is called a Fourier series
F u n d a m e n ta l
F u n d a m e n ta l + 2 h a rm o n ic s
F u n d a m e n ta l + 5 h a rm o n ic s P E R IO D = L
F u n d a m e n ta l + 2 0 h a rm o n ic s
Toc JJ II J I Back
Section 1: Theory 6
● In this Tutorial,
4
we consider working out Fourier series for func-
tions f (x) with period L = 2π. Their fundamental frequency is then
k = 2π
L = 1, and their Fourier series representations involve terms like
a1 cos x , b1 sin x
a2 cos 2x , b2 sin 2x
a3 cos 3x , b3 sin 3x
Toc JJ II J I Back
Section 1: Theory 7
5
A more compact way of writing the Fourier series of a function f (x),
with period 2π, uses the variable subscript n = 1, 2, 3, . . .
∞
a0 X
f (x) = + [an cos nx + bn sin nx]
2 n=1
Toc JJ II J I Back
Section 1: Theory 8
F o u rie r s e rie s
c o n v e rg e s to
h a lf-w a y p o in t
Toc JJ II J I Back
Section 2: Exercises 9
2. Exercises
7
Click on Exercise links for full worked solutions (7 exercises in total).
Exercise 1.
Let f (x) be a function of period 2π such that
1, −π < x < 0
f (x) =
0, 0 < x < π .
b) Show that the Fourier series for f (x) in the interval −π < x < π is
1 2 1 1
− sin x + sin 3x + sin 5x + ...
2 π 3 5
c) By giving an appropriate value to x, show that
π 1 1 1
= 1 − + − + ...
4 3 5 7
Exercise 2.
8
Let f (x) be a function of period 2π such that
0, −π < x < 0
f (x) =
x, 0 < x < π .
Exercise 3.
9
Let f (x) be a function of period 2π such that
x, 0 < x < π
f (x) =
π, π < x < 2π .
b) Show that the Fourier series for f (x) in the interval 0 < x < 2π is
3π 2 1 1
− cos x + 2 cos 3x + 2 cos 5x + . . .
4 π 3 5
1 1
− sin x + sin 2x + sin 3x + . . .
2 3
c) By giving appropriate values to x, show that
π 1 1 1 π2 1 1 1
(i) 4 = 1− 3 + 5 − 7 +... and (ii) 8 = 1+ 32 + 52 + 72 +...
Exercise 4.
10
Let f (x) be a function of period 2π such that
x
f (x) = over the interval 0 < x < 2π.
2
b) Show that the Fourier series for f (x) in the interval 0 < x < 2π is
π 1 1
− sin x + sin 2x + sin 3x + . . .
2 2 3
c) By giving an appropriate value to x, show that
π 1 1 1 1
= 1 − + − + − ...
4 3 5 7 9
Exercise 5.
11
Let f (x) be a function of period 2π such that
π − x, 0 < x < π
f (x) =
0, π < x < 2π
b) Show that the Fourier series for f (x) in the interval 0 < x < 2π is
π 2 1 1
+ cos x + 2 cos 3x + 2 cos 5x + . . .
4 π 3 5
1 1 1
+ sin x + sin 2x + sin 3x + sin 4x + . . .
2 3 4
c) By giving an appropriate value to x, show that
π2 1 1
= 1 + 2 + 2 + ...
8 3 5
Exercise 6.
12
Let f (x) be a function of period 2π such that
b) Show that the Fourier series for f (x) in the interval −π < x < π is
1 1
2 sin x − sin 2x + sin 3x − . . .
2 3
c) By giving an appropriate value to x, show that
π 1 1 1
= 1 − + − + ...
4 3 5 7
Exercise 7.
13
Let f (x) be a function of period 2π such that
b) Show that the Fourier series for f (x) in the interval −π < x < π is
π2
1 1
− 4 cos x − 2 cos 2x + 2 cos 3x − . . .
3 2 3
c) By giving an appropriate value to x, show that
π2 1 1 1
= 1 + 2 + 2 + 2 + ...
6 2 3 4
3. Answers
14
The sketches asked for in part (a) of each exercise are given within
the full worked solutions – click on the Exercise links to see these
solutions
Toc JJ II J I Back
Section 4: Integrals 17
4. Integrals
15
R b dv b Rb du
Formula for integration by parts: a u dx dx = [uv]a − a dx v dx
R R
f (x) f (x)dx f (x) f (x)dx
xn+1 n [g(x)]n+1
xn n+1 (n 6= −1) [g (x)] g 0 (x) n+1 (n 6= −1)
0
1 g (x)
x ln |x| g(x) ln |g (x)|
x ax
e ex ax
ln a (a > 0)
sin x − cos x sinh x cosh x
cos x sin x cosh x sinh x
tan x − ln
|cos x| tanh x ln cosh x
cosec x ln tan x2 cosech x ln tanh x2
sec x ln |sec x + tan x| sech x 2 tan−1 ex
sec2 x tan x sech2 x tanh x
cot x ln |sin x| coth x ln |sinh x|
sin2 x x
2 −
sin 2x
4 sinh2 x sinh 2x
4 − x2
cos2 x x
2 +
sin 2x
4 cosh2 x sinh 2x
4 + x2
Toc JJ II J I Back
Section 4: Integrals 18
R R
16
f (x) f (x) dx f (x) f (x) dx
1 1
tan−1 x 1 1 a+x
a2 +x2 a a a2 −x2 2a ln a−x (0 < |x| < a)
1 1 x−a
(a > 0) x2 −a2 2a ln x+a (|x| > a > 0)
√
2 2
√ 1 sin−1 x √ 1 ln x+ aa +x (a > 0)
a2 −x2 a a2 +x2
√
2 2
(−a < x < a) √ 1 ln x+ xa −a (x > a > 0)
x2 −a2
√ a2
√ a2
h √ i
sinh−1 x a2 +x2
−1 x
x
a2 − x2 2 sin a a2 +x2 2 a + a2
√ i √ h √ i
a2
a2 −x2
− cosh−1
2 2
+x x
+ x xa2−a
a2 x2 −a2 2 a
Toc JJ II J I Back
Section 5: Useful trig results 19
● sin nπ = 0 x
−3π −2π −π 0 π 2π 3π
−1
c o s(x )
1
● cos nπ = (−1)n x
−3π −2π −π 0 π 2π 3π
−1
Toc JJ II J I Back
Section 5: Useful trig results 20
18
s in (x ) c o s(x )
1 1
x x
−3π −2π −π 0 π 2π 3π −3π −2π −π 0 π 2π 3π
−1 −1
0 , n even 0 , n odd
π π
● sin n = 1 , n = 1, 5, 9, ... ● cos n = 1 , n = 0, 4, 8, ...
2 2
−1 , n = 3, 7, 11, ... −1 , n = 2, 6, 10, ...
Toc JJ II J I Back
Section 6: Alternative notation 21
6. Alternative notation
19
2π
● For a waveform f (x) with period L = k
∞
a0 X
f (x) = + [an cos nkx + bn sin nkx]
2 n=1
The corresponding Fourier coefficients are
Z
STEP ONE 2
a0 = f (x) dx
L
L
Z
STEP TWO 2
an = f (x) cos nkx dx
L
L
Z
STEP THREE 2
bn = f (x) sin nkx dx
L
L
and integrations are over a single interval in x of L
Toc JJ II J I Back
Section 6: Alternative notation 22
Toc JJ II J I Back
Section 6: Alternative notation 23
Toc JJ II J I Back
Section 7: Tips on using solutions 24
● Use the solutions intelligently. For example, they can help you get
started on an exercise, or they can allow you to check whether your
intermediate results are correct
● Try to make less use of the full solutions as you work your way
through the Tutorial
Toc JJ II J I Back
Solutions to exercises 25
f(x )
1
−2π −π 0 π 2π
x
Toc JJ II J I Back
Solutions to exercises 26
STEP ONE
π 0
1 π
Z Z Z
1 1
a0 = f (x)dx = f (x)dx + f (x)dx
π −π π −π π 0
1 0 1 π
Z Z
= 1 · dx + 0 · dx
π −π π 0
1 0
Z
= dx
π −π
1 0
= [x]
π −π
1
= (0 − (−π))
π
1
= · (π)
π
i.e. a0 = 1 .
Toc JJ II J I Back
Solutions to exercises 27
STEP TWO
25
Z π Z 0 Z π
1 1 1
an = f (x) cos nx dx = f (x) cos nx dx + f (x) cos nx dx
π −π π −π π 0
Z 0 Z π
1 1
= 1 · cos nx dx + 0 · cos nx dx
π −π π 0
Z 0
1
= cos nx dx
π −π
0
1 sin nx 1 0
= = [sin nx]−π
π n −π nπ
1
= (sin 0 − sin(−nπ))
nπ
1
= (0 + sin nπ)
nπ
1
i.e. an = (0 + 0) = 0.
nπ
Toc JJ II J I Back
Solutions to exercises 28
STEP THREE
26
Z π
1
bn = f (x) sin nx dx
π −π
Z 0 Z π
1 1
= f (x) sin nx dx + f (x) sin nx dx
π −π π 0
Z 0 Z π
1 1
= 1 · sin nx dx + 0 · sin nx dx
π −π π 0
0 0
1 − cos nx
Z
1
i.e. bn = sin nx dx =
π −π π n −π
1 1
= − [cos nx]0−π = − (cos 0 − cos(−nπ))
nπ nπ
1 1
= − (1 − cos nπ) = − (1 − (−1)n ) , see Trig
nπ nπ
0 , n even n 1 , n even
i.e. bn = 2 , since (−1) =
− nπ , n odd −1 , n odd
Toc JJ II J I Back
Solutions to exercises 29
n 1 2 3 4 5
bn − π2 0 − π2 13 0 − π2 15
Toc JJ II J I Back
Solutions to exercises 30
π 1 1 1
4 =1− 3 + 5 − 7 + ...
π
The first condition of sin x = 1 suggests trying x = 2.
Toc JJ II J I Back
Solutions to exercises 31
Picking x = π
thus gives
29
2
h
0 = 21 − π2 sin π2 + 1
3 sin 3π
2 +
1
5 sin 5π
2 i
1
+ 7 sin 7π
2 + ...
h
1 2 1 1
i.e. 0 = 2 − π 1 − 3 + 5 i
1
− 7 + ...
Toc JJ II J I Back
Solutions to exercises 32
Exercise 2.
30
0, −π < x < 0
f (x) =
x, 0 < x < π, and has period 2π
f(x )
π
−3π −2π −π π 2π 3π
x
Toc JJ II J I Back
Solutions to exercises 33
STEP ONE
Z π Z 0 Z π
1 1 1
a0 = f (x)dx = f (x)dx + f (x)dx
π −π π −π π 0
Z 0 Z π
1 1
= 0 · dx + x dx
π −π π 0
π
1 x2
=
π 2 0
1 π2
= −0
π 2
π
i.e. a0 = .
2
Toc JJ II J I Back
Solutions to exercises 34
STEP TWO
32
Z π Z 0 Z π
1 1 1
an = f (x) cos nx dx = f (x) cos nx dx + f (x) cos nx dx
π −π π −π π 0
Z 0 Z π
1 1
=0 · cos nx dx + x cos nx dx
−π π π 0
Z π π Z π
1 1 sin nx sin nx
i.e. an = x cos nx dx = x − dx
π 0 π n 0 0 n
(using integration by parts)
1 sin nπ 1 h cos nx iπ
i.e. an = π −0 − −
π n n n 0
1 1
= ( 0 − 0) + 2 [cos nx]π0
π n
1 1
= 2
{cos nπ − cos 0} = 2
{(−1)n − 1}
πn
πn
0 , n even
i.e. an = , see Trig.
− πn2 2 , n odd
Toc JJ II J I Back
Solutions to exercises 35
STEP THREE
33
π 0
1 π
Z Z Z
1 1
bn = f (x) sin nx dx = f (x) sin nx dx + f (x) sin nx dx
π −π π −π π 0
1 0 1 π
Z Z
= 0 · sin nx dx + x sin nx dx
π −π π 0
Z π Z π
1 1 h cos nx iπ cos nx
i.e. bn = x sin nx dx = x − − − dx
π 0 π n 0 0 n
(using integration by parts)
1 π
Z
1 1 π
= − [x cos nx]0 + cos nx dx
π n n 0
π
1 1 1 sin nx
= − (π cos nπ − 0) +
π n n n 0
1 1
= − (−1)n + (0 − 0), see Trig
n πn2
1
= − (−1)n
n
Toc JJ II J I Back
Solutions to exercises 36
(
− n1 , n even 34
i.e. bn =
+ n1 , n odd
We now have
∞
a0 X
f (x) = + [an cos nx + bn sin nx]
2 n=1
− n1
( (
π 0 , n even , n even
where a0 = , an = , bn =
2 − πn2 2 , n odd 1
, n odd
n
n 1 2 3 4 5
an − π2 0 − π2 · 1
32 0 − π2 · 1
52
bn 1 − 12 1
3 − 14 1
5
Toc JJ II J I Back
Solutions to exercises 37
Toc JJ II J I Back
Solutions to exercises 38
π 1 1 1
(i) 4 =1− 3 + 5 − 7 + ...
Toc JJ II J I Back
Solutions to exercises 39
π π
The graph of f (x) shows that f 2 = 2, so that
π π 1 1 1
= + 1 − + − + ...
2 4 3 5 7
π 1 1 1
i.e. = 1 − + − + ...
4 3 5 7
Toc JJ II J I Back
Solutions to exercises 40
π2 1 1 1
(ii) 8 =1+ 32 + 52 + 72 + ...
Picking x = 0 gives
sin x = sin 2x = sin 3x = 0 and cos x = cos 3x = cos 5x = 1
Toc JJ II J I Back
Solutions to exercises 41
2 1 1 1 π
1 + 2 + 2 + 2 + ... =
π 3 5 7 4
1 1 1 π2
and 1 + 2 + 2 + 2 + ... = .
3 5 7 8
Return to Exercise 2
Toc JJ II J I Back
Solutions to exercises 42
Exercise 3.
40
x, 0 < x < π
f (x) =
π, π < x < 2π, and has period 2π
f(x )
π
−2π −π 0 π 2π
x
Toc JJ II J I Back
Solutions to exercises 43
STEP ONE
2π π
1 2π
Z Z Z
1 1
a0 = f (x)dx = f (x)dx + f (x)dx
π 0 π 0 π π
1 π 1 2π
Z Z
= xdx + π · dx
π 0 π π
π 2π
1 x2 π
= + x
π 2 0 π π
2
1 π
= − 0 + 2π − π
π 2
π
= +π
2
3π
i.e. a0 = .
2
Toc JJ II J I Back
Solutions to exercises 44
STEP TWO
42
Z 2π
1
an = f (x) cos nx dx
π 0
Z π Z 2π
1 1
= x cos nx dx + π · cos nx dx
π 0 π π
" π # 2π
Z π
1 sin nx sin nx π sin nx
= x − dx +
π n 0 0 n π n π
| {z }
using integration by parts
" π #
1 1 − cos nx
= π sin nπ − 0 · sin n0 −
π n n2 0
1
+ (sin n2π − sin nπ)
n
Toc JJ II J I Back
Solutions to exercises 45
43
" #
1 1 cos nπ cos 0 1
i.e. an = 0−0 + − 2 + 0−0
π n n2 n n
1
= (cos nπ − 1), see Trig
n2 π
1
(−1)n − 1 ,
= 2
n π
− n22 π , n odd
(
i.e. an =
0 , n even.
Toc JJ II J I Back
Solutions to exercises 46
STEP THREE
44
Z 2π
1
bn = f (x) sin nx dx
π 0
Z π Z 2π
1 1
= x sin nx dx + π · sin nx dx
π 0 π π
" # 2π
1 h cos nx iπ Z π − cos nx
π − cos nx
= x − − dx +
π n 0 0 n π n π
| {z }
using integration by parts
" π #
1 −π cos nπ sin nx 1
= +0 + − (cos 2nπ − cos nπ)
π n n2 0 n
" #
1 −π(−1)n
sin nπ − sin 0 1
1 − (−1)n
= + 2
−
π n n n
1 1
− (−1)n + 1 − (−1)n
= 0 −
n n
Toc JJ II J I Back
Solutions to exercises 47
1 1 1 45
i.e. bn = − (−1)n − + (−1)n
n n n
1
i.e. bn = − .
n
We now have
∞
a0 X
f (x) = + [an cos nx + bn sin nx]
2 n=1
(
3π 0 , n even
where a0 = 2 , an = , bn = − n1
− n22 π , n odd
n 1 2 3 4 5
an − π2 0 − π2 312 0 − π2 512
bn −1 − 21 − 13 − 14 − 15
Toc JJ II J I Back
Solutions to exercises 48
h i
1 3π 2 1
f (x) = 2 2 + − π cos x + 0 · cos 2x + 32 cos 3x + . . .
h i
1 1
+ −1 sin x + 2 sin 2x + 3 sin 3x + . . .
h i
3π 2 1 1
i.e. f (x) = 4 − π cos x + 32 cos 3x + 52 cos 5x + . . .
h i
1 1
− sin x + 2 sin 2x + 3 sin 3x + . . .
Toc JJ II J I Back
Solutions to exercises 49
π 1 1 1
(i) 4 =1− 3 + 5 − 7 + ...
This gives
π 3π 2
cos π2 + 1
cos 3 π2 + 1
cos 5 π2 + . . .
2 = 4 − π 32 52
sin π2 + 1
sin 2 π2 + 1
sin 3 π2 + 1
sin 4 π2 + 1
sin 5 π2 + . . .
− 2 3 4 5
Toc JJ II J I Back
Solutions to exercises 50
48
and
π 3π 2
2 = 4 − π [0 + 0 + 0 + . . .]
1 1 1 1
− (1) + 2 · (0) + 3 · (−1) + 4 · (0) + 5 · (1) + . . .
then
π 3π 1 1 1
2 = 4 − 1− 3 + 5 − 7 + ...
1 1 1 3π π
1− 3 + 5 − 7 + ... = 4 − 2
1 1 1 π
1− 3 + 5 − 7 + ... = 4, as required.
To show that
π2 1 1 1
(ii) 8 =1+ 32 + 52 + 72 + ... ,
Return to Exercise 3
Toc JJ II J I Back
Solutions to exercises 53
Exercise 4.
51
f(x )
π
0 π 2π 3π 4π
x
Toc JJ II J I Back
Solutions to exercises 54
STEP ONE
Z 2π
1
a0 = f (x) dx
π 0
Z 2π
1 x
= dx
π 0 2
2 2π
1 x
=
π 4 0
(2π)2
1
= −0
π 4
i.e. a0 = π.
Toc JJ II J I Back
Solutions to exercises 55
STEP TWO
53
Z 2π
1
an = f (x) cos nx dx
π 0
Z 2π
1 x
= cos nx dx
π 0 2
( 2π )
Z 2π
1 sin nx 1
= x − sin nx dx
2π n 0 n 0
| {z }
using integration by parts
( )
1 sin n2π sin n · 0 1
= 2π −0· − ·0
2π n n n
( )
1 1
= (0 − 0) − · 0 , see Trig
2π n
i.e. an = 0.
Toc JJ II J I Back
Solutions to exercises 56
STEP THREE
54
Z 2π Z 2π
1 1 x
bn = f (x) sin nx dx = sin nx dx
π 0 π 0 2
Z 2π
1
= x sin nx dx
2π 0
( 2π Z 2π )
1 − cos nx − cos nx
= x − dx
2π n 0 0 n
| {z }
using integration by parts
( )
1 1 1
= (−2π cos n2π + 0) + · 0 , see Trig
2π n n
−2π
= cos(n2π)
2πn
1
= − cos(2nπ)
n
1
i.e. bn = − , since 2n is even (see Trig)
n
Toc JJ II J I Back
Solutions to exercises 57
We now have
55
∞
a0 X
f (x) = + [an cos nx + bn sin nx]
2 n=1
where a0 = π, an = 0, bn = − n1
∞
π X 1
f (x) = + 0 − sin nx
2 n=1 n
π 1 1
i.e. f (x) = − sin x + sin 2x + sin 3x + . . . .
2 2 3
Toc JJ II J I Back
Solutions to exercises 58
π 1 1 1 1
4 =1− 3 + 5 − 7 + 9 − ...
π π
Setting x = 2 gives f (x) = 4 and
π π 1 1
= − 1 + 0 − + 0 + + 0 − ...
4 2 3 5
π π 1 1 1 1
= − 1 − + − + − ...
4 2 3 5 7 9
1 1 1 1 π
1 − + − + − ... =
3 5 7 9 4
1 1 1 1 π
i.e. 1 − + − + − . . . = .
3 5 7 9 4
Return to Exercise 4
Toc JJ II J I Back
Solutions to exercises 59
Exercise 5.
57
π−x , 0<x<π
f (x) =
0 , π < x < 2π, and has period 2π
f(x )
π
−2π −π 0 π 2π
x
Toc JJ II J I Back
Solutions to exercises 60
STEP ONE
Z 2π
1
a0 = f (x) dx
π 0
Z π Z 2π
1 1
= (π − x) dx + 0 · dx
π 0 π π
π
1 1
= πx − x2 + 0
π 2 0
2
1 π
= π2 − −0
π 2
π
i.e. a0 = .
2
Toc JJ II J I Back
Solutions to exercises 61
STEP TWO
59
Z 2π
1
an = f (x) cos nx dx
π 0
1 π 1 2π
Z Z
= (π − x) cos nx dx + 0 · dx
π 0 π π
π Z π
1 sin nx sin nx
i.e. an = (π − x) − (−1) · dx +0
π n 0 0 n
| {z }
using integration by parts
Z π
1 sin nx
= (0 − 0) + dx , see Trig
π 0 n
π
1 − cos nx
=
πn n 0
1
= − 2 (cos nπ − cos 0)
πn
1
i.e. an = − 2 ((−1)n − 1) , see Trig
πn
Toc JJ II J I Back
Solutions to exercises 62
60
0 , n even
i.e. an =
2
, n odd
πn2
STEP THREE
Z 2π
1
bn = f (x) sin nx dx
π 0
Z π Z 2π
1
= (π − x) sin nx dx + 0 · dx
π 0 π
1
h cos nx iπ Z π cos nx
= (π − x) − − (−1) · − dx + 0
π n 0 0 n
π 1
1
= 0− − − · 0 , see Trig
π n n
1
i.e. bn = .
n
Toc JJ II J I Back
Solutions to exercises 63
In summary, a0 = π
and a table of other Fourier cofficients is
61
2
n 1 2 3 4 5
2 2 2 1 2 1
an = πn2 (when n is odd) π 0 π 32 0 π 52
1 1 1 1 1
bn = n 1 2 3 4 5
∞
a0 X
∴ f (x) = + [an cos nx + bn sin nx]
2 n=1
π 2 2 1 2 1
= + cos x + cos 3x + cos 5x + . . .
4 π π 32 π 52
1 1 1
+ sin x + sin 2x + sin 3x + sin 4x + . . .
2 3 4
π 2 1 1
i.e. f (x) = + cos x + 2 cos 3x + 2 cos 5x + . . .
4 π 3 5
1 1 1
+ sin x + sin 2x + sin 3x + sin 4x + . . .
2 3 4
Toc JJ II J I Back
Solutions to exercises 64
π2 1 1
62
c) To show that 8 =1+ 32 + 52 + ... ,
π
note that, as x → 0 , the series converges to the half-way value of 2,
π π 2 1 1
and then = + cos 0 + cos 0 + cos 0 + . . .
2 4 π 32 52
1 1
+ sin 0 + sin 0 + sin 0 + . . .
2 3
π π 2 1 1
= + 1 + 2 + 2 + ... + 0
2 4 π 3 5
π 2 1 1
= 1 + 2 + 2 + ...
4 π 3 5
π2 1 1
giving = 1 + 2 + 2 + ...
8 3 5
Return to Exercise 5
Toc JJ II J I Back
Solutions to exercises 65
Exercise 6.
63
f(x )
π
x
−3π −2π −π 0 π 2π 3π
−π
Toc JJ II J I Back
Solutions to exercises 66
STEP ONE
Z π
1
a0 = f (x) dx
π −π
Z π
1
= x dx
π −π
π
1 x2
=
π 2 −π
1 π2 π2
= −
π 2 2
i.e. a0 = 0.
Toc JJ II J I Back
Solutions to exercises 67
STEP TWO
65
1 π
Z
an = f (x) cos nx dx
π −π
Z π
1
= x cos nx dx
π −π
( π Z π )
1 sin nx sin nx
= x − dx
π n −π −π n
| {z }
using integration by parts
1 π
Z
1 1
i.e. an = (π sin nπ − (−π) sin(−nπ)) − sin nx dx
π n n −π
1 1 1
= (0 − 0) − · 0 ,
π n n
Z
since sin nπ = 0 and sin nx dx = 0,
2π
i.e. an = 0.
Toc JJ II J I Back
Solutions to exercises 68
STEP THREE
66
1 π
Z
bn = f (x) sin nx dx
π −π
Z π
1
= x sin nx dx
π −π
( π Z π )
1 −x cos nx − cos nx
= − dx
π n −π −π n
1 π
Z
1 1 π
= − [x cos nx]−π + cos nx dx
π n n −π
1 1 1
= − (π cos nπ − (−π) cos(−nπ)) + · 0
π n n
π
= − (cos nπ + cos nπ)
nπ
1
= − (2 cos nπ)
n
2
i.e. bn = − (−1)n .
n
Toc JJ II J I Back
Solutions to exercises 69
We thus have
67
∞
a0 X h i
f (x) = + an cos nx + bn sin nx
2 n=1
with a0 = 0, an = 0, bn = − n2 (−1)n
and
n 1 2 3
2
bn 2 −1 3
Therefore
f (x) = b1 sin x + b2 sin 2x + b3 sin 3x + . . .
1 1
i.e. f (x) = 2 sin x − sin 2x + sin 3x − . . .
2 3
Toc JJ II J I Back
Solutions to exercises 70
π 1 1 1
4 =1− 3 + 5 − 7 + ...
This gives
π 1 1 1
= 2 1 + 0 + · (−1) − 0 + · (1) − 0 + · (−1) + . . .
2 3 5 7
π 1 1 1
= 2 1 − + − + ...
2 3 5 7
π 1 1 1
i.e. = 1 − + − + ...
4 3 5 7
Return to Exercise 6
Toc JJ II J I Back
Solutions to exercises 71
Exercise 7.
69
f(x )
π
2
−3π −2π −π 0 π 2π 3π
x
Toc JJ II J I Back
Solutions to exercises 72
STEP ONE
Z π Z π
1 1
a0 = f (x)dx = x2 dx
π −π π −π
π
1 x3
=
π 3 −π
1 π3 π3
= − −
π 3 3
1 2π 3
=
π 3
2π 2
i.e. a0 = .
3
Toc JJ II J I Back
Solutions to exercises 73
STEP TWO
71
1 π
Z
an = f (x) cos nx dx
π −π
Z π
1
= x2 cos nx dx
π −π
( π Z π )
1 sin nx sin nx
= x2 − 2x dx
π n −π −π n
| {z }
using integration by parts
( )
2 π
Z
1 1 2 2
= π sin nπ − π sin(−nπ) − x sin nx dx
π n n −π
( )
2 π
Z
1 1
= (0 − 0) − x sin nx dx , see Trig
π n n −π
Z π
−2
= x sin nx dx
nπ −π
Toc JJ II J I Back
Solutions to exercises 74
72
( π Z π )
−2 − cos nx − cos nx
i.e. an = x − dx
nπ n −π −π n
| {z }
using integration by parts again
( )
−2 1 π
Z
1 π
= − [x cos nx]−π + cos nx dx
nπ n n −π
( )
−2 1 1
= − π cos nπ − (−π) cos(−nπ) + · 0
nπ n n
( )
−2 1
= − π(−1)n + π(−1)n
nπ n
( )
−2 −2π
= (−1)n
nπ n
Toc JJ II J I Back
Solutions to exercises 75
73
( )
−2 2π
i.e. an = − (−1)n
nπ n
+4π
= (−1)n
πn2
4
= (−1)n
n2
4
, n even
(
n2
i.e. an =
−4
n2 , n odd.
Toc JJ II J I Back
Solutions to exercises 76
STEP THREE
74
π
1 π 2
Z Z
1
bn = f (x) sin nx dx = x sin nx dx
π −π π −π
( π Z π )
1 − cos nx − cos nx
= x2 − 2x · dx
π n −π −π n
| {z }
using integration by parts
( )
Z π
1 1 2 π 2
= − x cos nx −π + x cos nx dx
π n n −π
( Z π )
1 1 2 2
π cos nπ − π 2 cos(−nπ) +
= − x cos nx dx
π n n −π
( )
2 π
Z
1 1 2 2
= − π cos nπ − π cos(nπ) + x cos nx dx
π n| {z } n −π
=0
Z π
2
= x cos nx dx
πn −π
Toc JJ II J I Back
Solutions to exercises 77
75
( π )
Z π
2 sin nx sin nx
i.e. bn = x − dx
πn n −π −π n
| {z }
using integration by parts
( )
Z π
2 1 1
= (π sin nπ − (−π) sin(−nπ)) − sin nx dx
πn n n −π
( )
1 π
Z
2 1
= (0 + 0) − sin nx dx
πn n n −π
−2 π
Z
= sin nx dx
πn2 −π
i.e. bn = 0.
Toc JJ II J I Back
Solutions to exercises 78
76
∞
a0 X
∴ f (x) = + [an cos nx + bn sin nx]
2 n=1
(
4
2π 2 n2 , n even
where a0 = 3 , an = −4 , bn = 0
n2 , n odd
n 1 2 3 4
1 1 1
an −4(1) 4 22 −4 32 4 42
2π 2
1 1 1 1
i.e. f (x) = − 4 cos x − 2 cos 2x + 2 cos 3x − 2 cos 4x . . .
2 3 2 3 4
+ [0 + 0 + 0 + . . .]
π2
1 1 1
i.e. f (x) = − 4 cos x − 2 cos 2x + 2 cos 3x − 2 cos 4x + . . . .
3 2 3 4
Toc JJ II J I Back
Solutions to exercises 79
π2
77
c) To show that = 1 + 212 + 312 + 412 + . . . ,
6
(
1 , n even
use the fact that cos nπ =
−1 , n odd
1 1 1
i.e. cos x − 22 cos 2x + 32 cos 3x − 42 cos 4x + . . . with x = π
1 1 1
gives cos π − 22 cos 2π + 32 cos 3π − 42 cos 4π + . . .
1 1 1
i.e. (−1) − 22 · (1) + 32 · (−1) − 42 · (1) + . . .
1 1 1
i.e. −1 − 22 − 32 − 42 +...
1 1 1
= −1 · 1 + 2 + 2 + 2 + . . .
2 3 4
| {z }
(the desired series)
Toc JJ II J I Back
Solutions to exercises 80
Toc JJ II J I Back
Chapter 86
Chapter Chapter 6
Chapter 6
The Fourier Transform
his chapter introduces the Fourier Transform, also known as the Fourier Integral. The defini-
T tion, theorems, and properties are presented and proved. The Fourier transforms of the most
common functions are derived, the system function is defined, and several examples are given
to illustrate its application in circuit analysis.
and assuming that it exists for every value of the radian frequency ω , we call the function F ( ω ) the
Fourier transform or the Fourier integral.
The Fourier transform is, in general, a complex function. We can express it as the sum of its real and
imaginary components, or in exponential form, that is, as
jϕ ( ω )
F ( ω ) = Re { F ( ω ) } + jIm { F ( ω ) } = F ( ω ) e (8.2)
The Inverse Fourier transform is defined as
∞
1
∫–∞ F ( ω )e
jωt
f ( t ) = ------ dω (8.3)
2π
We will often use the following notation to express the Fourier transform and its inverse.
The subscripts Re and Im will be used often to denote the real and imaginary parts respectively.
These notations have the same meaning as Re { f ( t ) } and Im { f ( t ) } .
By substitution of (8.6) into the Fourier integral of (8.1), we get
∞ ∞
– jωt – jωt
F(ω) = ∫– ∞ f Re ( t )e dt + j ∫–∞ fIm ( t )e dt (8.7)
From (8.2), we see that the real and imaginary parts of F ( ω ) are
∞
F Re ( ω ) = ∫–∞ [ fRe ( t ) cos ωt + fIm ( t ) sin ωt ] dt (8.9)
and
∞
F Im ( ω ) = – ∫–∞ [ fRe ( t ) sin ωt –fIm ( t ) cos ω t ] dt (8.10)
We can derive similar forms for the Inverse Fourier transform as follows:
Substitution of (8.2) into (8.3) yields
∞
1
∫–∞ [ FRe ( ω ) + jFIm ( ω ) ]e
jωt
f ( t ) = ------ dω (8.11)
2π
and by Euler’s identity,
∞
1
f ( t ) = ------
2π ∫–∞ [ FRe ( ω ) cos ωt –FIm ( ω ) sin ωt ] dω (8.12)
∞
1
+ j ------
2π ∫–∞ [ FRe ( ω ) sin ωt + FIm ( ω ) cos ω t ] dω
Therefore, the real and imaginary parts of f ( t ) are
∞
1
f Re ( t ) = ------
2π ∫–∞ [ FRe ( ω ) cos ωt –FIm ( ω ) sin ωt ] dω (8.13)
and
∞
1
f Im ( t ) = ------
2π ∫–∞ [ FRe ( ω ) sin ωt + FIm ( ω ) cos ω t ] dω (8.14)
Now, we will use the above relations to determine the time to frequency domain correspondence for
real, imaginary, even, and odd functions in both the time and the frequency domains. We will show
these in tabular form, as indicated in Table 8.1.
TABLE 8.1 Time Domain and Frequency Domain Correspondence (Refer to Tables 8.2 - 8.7)
f(t) F(ω)
and
∞
F Im ( ω ) = – ∫–∞ fRe ( t ) sin ωt dt (8.16)
Conclusion: If f ( t ) is real, F ( ω ) is, in general, complex. We indicate this result with a check mark
in Table 8.2.
We know that any function f ( t ) , can be expressed as the sum of an even and an odd function.
Therefore, we will also consider the cases when f ( t ) is real and even, and when f ( t ) is real and
odd*.
* In our subsequent discussion, we will make use of the fact that the cosine is an even function, while the sine is an
odd function. Also, the product of two odd functions or the product of two even functions will result in an even
function, whereas the product of an odd function and an even function will result in an odd function.
TABLE 8.2 Time Domain and Frequency Domain Correspondence (Refer also to Tables 8.3 - 8.7)
f( t) F(ω)
a. f Re ( t ) is even
∞
F Re ( ω ) = 2 ∫0 f Re ( t ) cos ωt dt f Re ( t ) = even (8.17)
and
∞
F Im ( ω ) = – ∫–∞ fRe ( t ) sin ωt dt = 0 f Re ( t ) = even (8.18)
To determine whether F ( ω ) is even or odd when f Re ( t ) = even , we must perform a test for
evenness or oddness with respect to ω . Thus, substitution of – ω for ω in (8.17), yields
∞ ∞
F Re ( – ω ) = 2 ∫0 f Re ( t ) cos ( – ω )t dt = 2 ∫0 f Re ( t ) cos ωt dt = F Re ( ω ) (8.19)
Conclusion: If f ( t ) is real and even, F ( ω ) is also real and even. We indicate this result in Table 8.3.
b. f Re ( t ) is odd
TABLE 8.3 Time Domain and Frequency Domain Correspondence (Refer also to Tables 8.4 - 8.7)
f(t) F(ω)
∞
F Re ( ω ) = ∫–∞ fRe ( t ) cos ωt dt = 0 f Re ( t ) = odd (8.20)
and
∞
F Im ( ω ) = – 2 ∫0 f Re ( t ) sin ωt dt f Re ( t ) = odd (8.21)
To determine whether F ( ω ) is even or odd when f Re ( t ) = odd , we perform a test for evenness
or oddness with respect to ω . Thus, substitution of – ω for ω in (8.21), yields
∞ ∞
F Im ( – ω ) = – 2 ∫0 f Re ( t ) sin ( – ω )t dt = 2 ∫0 f Re ( t ) sin ωt dt = – F Im ( ω ) (8.22)
Conclusion: If f ( t ) is real and odd, F ( ω ) is imaginary and odd. We indicate this result in Table 8.4.
2. Imaginary Time Functions
If f ( t ) is imaginary, (8.9) and (8.10) reduce to
∞
F Re ( ω ) = ∫–∞ fIm ( t ) sin ω t dt (8.23)
and
∞
F Im ( ω ) = ∫–∞ fIm ( t ) cos ω t dt (8.24)
TABLE 8.4 Time Domain and Frequency Domain Correspondence (Refer also to Tables 8.5 - 8.7)
f( t) F(ω)
Conclusion: If f ( t ) is imaginary, F ( ω ) is, in general, complex. We indicate this result in Table 8.5.
TABLE 8.5 Time Domain and Frequency Domain Correspondence (Refer also to Tables 8.6 - 8.7)
f( t) F( ω )
Next, we will consider the cases where f ( t ) is imaginary and even, and f ( t ) is imaginary and odd.
a. f Im ( t ) is even
∞
F Re ( ω ) = ∫–∞ fIm ( t ) sin ω t dt = 0 f Im ( t ) = even (8.25)
and
∞
F Im ( ω ) = 2 ∫0 f Im ( t ) cos ω t dt f Im ( t ) = even (8.26)
To determine whether F ( ω ) is even or odd when f Im ( t ) = even , we perform a test for evenness or
oddness with respect to ω . Thus, substitution of – ω for ω in (8.26) yields
∞
F Im ( – ω ) = 2 ∫0 f Im ( t ) cos ( – ω )t dt
(8.27)
∞
= 2 ∫0 f Im ( t ) cos ωt dt = F Im ( ω )
Conclusion: If f ( t ) is imaginary and even, F ( ω ) is also imaginary and even. We indicate this result in
Table 8.6.
TABLE 8.6 Time Domain and Frequency Domain Correspondence (Refer also to Table 8.7)
f(t) F(ω)
b. f Im ( t ) is odd
∞ ∞
F Re ( ω ) = ∫– ∞ f Im ( t ) sin ω t dt = 2 ∫0 f Im ( t ) sin ω t dt f Im ( t ) = odd (8.28)
and
∞
F Im ( ω ) = ∫–∞ fIm ( t ) cos ω t dt = 0 f Im ( t ) = odd (8.29)
To determine whether F ( ω ) is even or odd when f Im ( t ) = odd , we perform a test for evenness
or oddness with respect to ω . Thus, substitution of – ω for ω in (8.28) yields
∞ ∞
F Re ( – ω ) = 2 ∫0 f Im ( t ) sin ( – ω )t dt = – 2 ∫0 f Im ( t ) sin ωt dt = – F Re ( ω ) (8.30)
Conclusion: If f ( t ) is imaginary and odd, F ( ω ) is real and odd. We indicate this result in Table
8.7.
TABLE 8.7 Time Domain and Frequency Domain Correspondence (Completed Table)
f(t) F(ω)
Table 8.7 is now complete and shows that if f(t) is real (even or odd), the real part of F(ω) is even,
and the imaginary part is odd. Then,

F_Re(−ω) = F_Re(ω)        f(t) = real    (8.31)

and

F_Im(−ω) = −F_Im(ω)        f(t) = real    (8.32)

Since

F(ω) = F_Re(ω) + jF_Im(ω)    (8.33)

it follows that

F(−ω) = F_Re(−ω) + jF_Im(−ω) = F_Re(ω) − jF_Im(ω)

or

F(−ω) = F*(ω)        f(t) = real    (8.34)

Conversely, if F(−ω) = F*(ω), then F_Re(ω) is even and F_Im(ω) is odd, and the imaginary part of the
Inverse Fourier transform is

f_Im(t) = (1/2π) ∫_{−∞}^{∞} [F_Re(ω) sin ωt + F_Im(ω) cos ωt] dω    (8.35)

We observe that the integrand of (8.35) is zero, since it is an odd function with respect to ω because
both products inside the brackets are odd functions. Therefore, f_Im(t) = 0, that is, f(t) is real.

We can state then, that a necessary and sufficient condition for f(t) to be real is that F(−ω) = F*(ω).
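As a quick numerical illustration (not part of the derivation above), the Python sketch below samples an arbitrary real signal and checks that its discrete spectrum obeys F(−ω) = F*(ω); NumPy is assumed to be available, and the signal and sampling grid are arbitrary choices.

import numpy as np

# Sample a real (neither even nor odd) signal
N, T = 256, 0.01                       # assumed number of samples and sampling period
t = np.arange(N) * T
f = np.exp(-2 * t) * np.cos(30 * t)    # a real time function

F = np.fft.fft(f)                      # discrete approximation of F(w)

# For a real f, the spectrum at -w (index N-k) must equal the conjugate at +w (index k)
k = np.arange(1, N // 2)
print(np.allclose(F[N - k], np.conj(F[k])))   # True: F(-w) = F*(w)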
Also, if it is known that f(t) is real, the Inverse Fourier transform of (8.3) can be simplified as
follows:

From (8.13),

f_Re(t) = (1/2π) ∫_{−∞}^{∞} [F_Re(ω) cos ωt − F_Im(ω) sin ωt] dω    (8.36)

and since the integrand is an even function with respect to ω, we rewrite (8.36) as

f_Re(t) = 2·(1/2π) ∫_0^∞ [F_Re(ω) cos ωt − F_Im(ω) sin ωt] dω
        = (1/π) ∫_0^∞ A(ω) cos[ωt + φ(ω)] dω = (1/π) Re ∫_0^∞ A(ω) e^{j[ωt + φ(ω)]} dω    (8.37)

where F(ω) = A(ω) e^{jφ(ω)}.
1. Linearity

If F_1(ω), F_2(ω), …, F_n(ω) are the Fourier transforms of f_1(t), f_2(t), …, f_n(t) respectively, and
a_1, a_2, …, a_n are arbitrary constants, then

a_1 f_1(t) + a_2 f_2(t) + … + a_n f_n(t) ⇔ a_1 F_1(ω) + a_2 F_2(ω) + … + a_n F_n(ω)    (8.38)

Proof:
The proof is easily obtained from (8.1), that is, the definition of the Fourier transform. The procedure
is the same as for the linearity property of the Laplace transform in Chapter 2.
2. Symmetry
If F ( ω ) is the Fourier transform of f ( t ) , the symmetry property of the Fourier transform states that
F ( t ) ⇔ 2πf ( – ω ) (8.39)
that is, if in F ( ω ) , we replace ω with t , we get the Fourier transform pair of (8.39).
Proof:

Since

f(t) = (1/2π) ∫_{−∞}^{∞} F(ω) e^{jωt} dω

then,

2πf(−t) = ∫_{−∞}^{∞} F(ω) e^{−jωt} dω

Interchanging t and ω in the last expression yields (8.39).
3. Time Scaling

If a is a real constant and F(ω) is the Fourier transform of f(t), then,

f(at) ⇔ (1/|a|) F(ω/a)    (8.40)

that is, the time scaling property of the Fourier transform states that if we replace the variable t in
the time domain by at, we must replace the variable ω in the frequency domain by ω/a, and
divide F(ω/a) by the absolute value of a.

Proof:

We must consider both cases a > 0 and a < 0.

For a > 0,

F{f(at)} = ∫_{−∞}^{∞} f(at) e^{−jωt} dt    (8.41)

Letting at = τ, so that t = τ/a and dt = dτ/a, (8.41) becomes

F{f(τ)} = ∫_{−∞}^{∞} f(τ) e^{−jω(τ/a)} d(τ/a) = (1/a) ∫_{−∞}^{∞} f(τ) e^{−j(ω/a)τ} dτ = (1/a) F(ω/a)

For a < 0,

F{f(−at)} = ∫_{−∞}^{∞} f(−at) e^{−jωt} dt

and making the above substitutions, we find that the multiplying factor is −1/a. Therefore, taking the
two cases together, the factor is 1/|a| and we obtain (8.40).
4. Time Shifting

If F(ω) is the Fourier transform of f(t), then,

f(t − t_0) ⇔ F(ω) e^{−jωt_0}    (8.42)

that is, the time shifting property of the Fourier transform states that if we shift the time function
f(t) by a constant t_0, the Fourier transform magnitude does not change, but the term ωt_0 is
added to its phase angle.

Proof:

F{f(t − t_0)} = ∫_{−∞}^{∞} f(t − t_0) e^{−jωt} dt

Letting t − t_0 = τ, so that t = τ + t_0 and dt = dτ, we get

F{f(t − t_0)} = ∫_{−∞}^{∞} f(τ) e^{−jω(τ + t_0)} dτ = e^{−jωt_0} ∫_{−∞}^{∞} f(τ) e^{−jωτ} dτ

or

F{f(t − t_0)} = e^{−jωt_0} F(ω)
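A hedged numerical sketch of the time shifting property: for a sampled pulse, a circular delay leaves the FFT magnitude unchanged and adds the linear phase −ωt_0. NumPy is assumed; the pulse shape, grid, and shift n0 are arbitrary illustrative choices.

import numpy as np

N, T = 512, 1.0 / 512                         # assumed sample count and spacing
t = np.arange(N) * T
f = np.exp(-((t - 0.25) ** 2) / 0.001)        # a smooth pulse
n0 = 64                                       # shift by t0 = n0*T samples
f_shift = np.roll(f, n0)                      # circular shift approximates f(t - t0)

F  = np.fft.fft(f)
Fs = np.fft.fft(f_shift)
w  = 2 * np.pi * np.fft.fftfreq(N, d=T)       # angular frequencies of the DFT bins

print(np.allclose(np.abs(Fs), np.abs(F)))                 # magnitude unchanged
print(np.allclose(Fs, F * np.exp(-1j * w * n0 * T)))      # phase shifted by -w*t0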
5. Frequency Shifting

If F(ω) is the Fourier transform of f(t), then,

e^{jω_0 t} f(t) ⇔ F(ω − ω_0)    (8.43)

that is, multiplication of the time function f(t) by e^{jω_0 t}, where ω_0 is a constant, results in shifting
the Fourier transform by ω_0.

Proof:

F{e^{jω_0 t} f(t)} = ∫_{−∞}^{∞} e^{jω_0 t} f(t) e^{−jωt} dt

or

F{e^{jω_0 t} f(t)} = ∫_{−∞}^{∞} f(t) e^{−j(ω − ω_0)t} dt = F(ω − ω_0)

Combining the frequency shifting and time scaling properties, we also obtain

e^{jω_0 t} f(at) ⇔ (1/|a|) F((ω − ω_0)/a)    (8.44)
Property 5, that is, (8.43) is also used to derive the Fourier transform of the modulated signals
f(t) cos ω_0 t and f(t) sin ω_0 t. Thus, from

e^{jω_0 t} f(t) ⇔ F(ω − ω_0)

and

cos ω_0 t = (e^{jω_0 t} + e^{−jω_0 t}) / 2

we get

f(t) cos ω_0 t ⇔ [F(ω − ω_0) + F(ω + ω_0)] / 2    (8.45)

Similarly,

f(t) sin ω_0 t ⇔ [F(ω − ω_0) − F(ω + ω_0)] / (j2)    (8.46)
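The modulation pair (8.45) can be illustrated numerically as sketched below; this is only an illustration under assumed parameters (NumPy, a 1-second record of N samples, and a carrier chosen to fall exactly on DFT bin k0), not a derivation.

import numpy as np

N = 1024
T = 1.0 / N                               # assumed: total record length N*T = 1 s
t = (np.arange(N) - N // 2) * T
f = np.sinc(50 * t)                       # a baseband test signal
k0 = 200                                  # carrier at 200 Hz -> exactly DFT bin k0
carrier = np.cos(2 * np.pi * k0 * t)      # cos(w0*t) with w0 = 2*pi*200

F  = np.fft.fft(f)
Fm = np.fft.fft(f * carrier)

# (8.45): spectrum of f(t)cos(w0 t) is half of F shifted to +w0 plus half shifted to -w0
print(np.allclose(Fm, 0.5 * (np.roll(F, k0) + np.roll(F, -k0))))   # True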
6. Time Differentiation

If F(ω) is the Fourier transform of f(t), then,

d^n f(t)/dt^n ⇔ (jω)^n F(ω)    (8.47)

that is, the Fourier transform of d^n f(t)/dt^n, if it exists, is (jω)^n F(ω).

Proof:

Differentiating the Inverse Fourier transform n times with respect to t, we get

d^n f(t)/dt^n = (d^n/dt^n) (1/2π) ∫_{−∞}^{∞} F(ω) e^{jωt} dω = (1/2π) ∫_{−∞}^{∞} (jω)^n F(ω) e^{jωt} dω

and (8.47) follows.

7. Frequency Differentiation

If F(ω) is the Fourier transform of f(t), then,

(−jt)^n f(t) ⇔ d^n F(ω)/dω^n

Proof:

Differentiating the Fourier transform n times with respect to ω, we get

d^n F(ω)/dω^n = (d^n/dω^n) ∫_{−∞}^{∞} f(t) e^{−jωt} dt = ∫_{−∞}^{∞} [(−jt)^n f(t)] e^{−jωt} dt

8. Time Integration

If F(ω) is the Fourier transform of f(t), then,

∫_{−∞}^{t} f(τ) dτ ⇔ F(ω)/(jω) + πF(0)δ(ω)    (8.49)

Proof:

We postpone the proof of this property until we derive the Fourier transform of the unit step
function u_0(t) in the next section. In the special case where F(0) = 0 in (8.49),

∫_{−∞}^{t} f(τ) dτ ⇔ F(ω)/(jω)    (8.50)

and this is easily proved by integrating both sides of the Inverse Fourier transform.
9. Conjugate Time and Frequency Functions

If F(ω) is the Fourier transform of the complex function f(t), then,

f*(t) ⇔ F*(−ω)    (8.51)

Proof:

F(ω) = ∫_{−∞}^{∞} f(t) e^{−jωt} dt = ∫_{−∞}^{∞} [f_Re(t) + j f_Im(t)] e^{−jωt} dt
     = ∫_{−∞}^{∞} f_Re(t) e^{−jωt} dt + j ∫_{−∞}^{∞} f_Im(t) e^{−jωt} dt

Then,

F*(ω) = ∫_{−∞}^{∞} f_Re(t) e^{jωt} dt − j ∫_{−∞}^{∞} f_Im(t) e^{jωt} dt

Replacing ω with −ω, we get

F*(−ω) = ∫_{−∞}^{∞} [f_Re(t) − j f_Im(t)] e^{−jωt} dt = F{f*(t)}

and (8.51) follows.
10. Time Convolution

If F_1(ω) and F_2(ω) are the Fourier transforms of f_1(t) and f_2(t) respectively, then,

f_1(t) ∗ f_2(t) ⇔ F_1(ω) F_2(ω)    (8.52)

that is, convolution in the time domain corresponds to multiplication in the frequency domain.

Proof:

F{f_1(t) ∗ f_2(t)} = ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} f_1(τ) f_2(t − τ) dτ ] e^{−jωt} dt
                  = ∫_{−∞}^{∞} f_1(τ) [ ∫_{−∞}^{∞} f_2(t − τ) e^{−jωt} dt ] dτ    (8.53)

Letting t − τ = σ in the bracketed integral, the double integral separates into
∫_{−∞}^{∞} f_1(τ) e^{−jωτ} dτ · ∫_{−∞}^{∞} f_2(σ) e^{−jωσ} dσ. The first integral is F_1(ω) while the second is
F_2(ω), and thus (8.52) follows.

Alternate Proof:

We can apply the time shifting property f(t − t_0) ⇔ F(ω) e^{−jωt_0} to the bracketed integral of
(8.53); then, replacing it with F_2(ω) e^{−jωτ}, we get

F{f_1(t) ∗ f_2(t)} = ∫_{−∞}^{∞} f_1(τ) [ ∫_{−∞}^{∞} f_2(t − τ) e^{−jωt} dt ] dτ = ∫_{−∞}^{∞} f_1(τ) F_2(ω) e^{−jωτ} dτ
                  = [ ∫_{−∞}^{∞} f_1(τ) e^{−jωτ} dτ ] F_2(ω) = F_1(ω) F_2(ω)
11. Frequency Convolution

If F_1(ω) and F_2(ω) are the Fourier transforms of f_1(t) and f_2(t) respectively, then,

f_1(t) f_2(t) ⇔ (1/2π) F_1(ω) ∗ F_2(ω)    (8.54)

that is, multiplication in the time domain corresponds to convolution in the frequency domain
divided by the constant 2π.

Proof:

F{f_1(t) f_2(t)} = ∫_{−∞}^{∞} [f_1(t) f_2(t)] e^{−jωt} dt
               = ∫_{−∞}^{∞} [ (1/2π) ∫_{−∞}^{∞} F_1(χ) e^{jχt} dχ ] f_2(t) e^{−jωt} dt
               = (1/2π) ∫_{−∞}^{∞} F_1(χ) [ ∫_{−∞}^{∞} f_2(t) e^{−j(ω − χ)t} dt ] dχ
               = (1/2π) ∫_{−∞}^{∞} F_1(χ) F_2(ω − χ) dχ

and (8.54) follows.
12. Area Under f(t)

If F(ω) is the Fourier transform of f(t), then,

F(0) = ∫_{−∞}^{∞} f(t) dt    (8.55)

that is, the area under a time function f(t) is equal to the value of its Fourier transform evaluated
at ω = 0.

Proof:

Using the definition of F(ω) and observing that e^{−jωt}|_{ω=0} = 1, we see that (8.55) follows.

13. Area Under F(ω)

If F(ω) is the Fourier transform of f(t), then,

f(0) = (1/2π) ∫_{−∞}^{∞} F(ω) dω    (8.56)

that is, the value of the time function f(t), evaluated at t = 0, is equal to the area under its Fourier
transform F(ω) times 1/2π.

Proof:

In the Inverse Fourier transform of (8.3), we let e^{jωt}|_{t=0} = 1, and (8.56) follows.
14. Parseval's Theorem

If F(ω) is the Fourier transform of f(t), Parseval's theorem states that

∫_{−∞}^{∞} |f(t)|^2 dt = (1/2π) ∫_{−∞}^{∞} |F(ω)|^2 dω    (8.57)

that is, if the time function f(t) represents the voltage across, or the current through, a 1 Ω resistor,
the instantaneous power absorbed by this resistor is v^2/R = v^2/1 = v^2, or i^2 R = i^2. Then, the
integral of the magnitude squared represents the energy (in watt-seconds or joules) dissipated by
the resistor. For this reason, the integral is called the energy of the signal. Relation (8.57) then
states that if we do not know the energy of a time function f(t), but we know the Fourier transform
of this function, we can compute the energy without the need to evaluate the Inverse Fourier
transform.

Proof:

From the frequency convolution property,

f_1(t) f_2(t) ⇔ (1/2π) F_1(ω) ∗ F_2(ω)

or

F{f_1(t) f_2(t)} = ∫_{−∞}^{∞} [f_1(t) f_2(t)] e^{−jωt} dt = (1/2π) ∫_{−∞}^{∞} F_1(χ) F_2(ω − χ) dχ    (8.58)

Since (8.58) must hold for all values of ω, it must also be true for ω = 0, and under this condition
it reduces to

∫_{−∞}^{∞} [f_1(t) f_2(t)] dt = (1/2π) ∫_{−∞}^{∞} F_1(χ) F_2(−χ) dχ    (8.59)

For the special case where f_2(t) = f_1*(t), using the conjugate functions property f*(t) ⇔ F*(−ω)
and substituting into (8.59), we get

∫_{−∞}^{∞} [f(t) f*(t)] dt = (1/2π) ∫_{−∞}^{∞} F(ω) F*[−(−ω)] dω = (1/2π) ∫_{−∞}^{∞} F(ω) F*(ω) dω

Since f(t) f*(t) = |f(t)|^2 and F(ω) F*(ω) = |F(ω)|^2, Parseval's theorem is proved.
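Below is a small numerical check of Parseval's theorem (8.57) for a sampled energy signal; NumPy is assumed, and the grid and signal are arbitrary. In this discrete approximation the 1/2π factor enters through the frequency spacing dω = 2π/(NT).

import numpy as np

N, T = 2048, 1e-3                        # assumed sampling grid
t = np.arange(N) * T
f = np.exp(-5 * t) * np.sin(40 * t)      # an energy signal

energy_time = np.sum(np.abs(f) ** 2) * T           # integral of |f(t)|^2 dt

F = np.fft.fft(f) * T                    # scale FFT to approximate F(w)
dw = 2 * np.pi / (N * T)                 # frequency spacing in rad/s
energy_freq = np.sum(np.abs(F) ** 2) * dw / (2 * np.pi)

print(energy_time, energy_freq)          # the two values agree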
The Fourier transform properties and theorems are summarized in Table 8.8.
TABLE 8.8  Fourier Transform Properties and Theorems

Property                     f(t)                            F(ω)
Linearity                    a_1 f_1(t) + a_2 f_2(t) + …     a_1 F_1(ω) + a_2 F_2(ω) + …
Symmetry                     F(t)                            2π f(−ω)
Time Scaling                 f(at)                           (1/|a|) F(ω/a)
Time Shifting                f(t − t_0)                      F(ω) e^{−jωt_0}
Frequency Shifting           e^{jω_0 t} f(t)                 F(ω − ω_0)
Time Differentiation         d^n f(t)/dt^n                   (jω)^n F(ω)
Frequency Differentiation    (−jt)^n f(t)                    d^n F(ω)/dω^n
Time Integration             ∫_{−∞}^{t} f(τ) dτ              F(ω)/(jω) + πF(0)δ(ω)
Conjugate Functions          f*(t)                           F*(−ω)
Time Convolution             f_1(t) ∗ f_2(t)                 F_1(ω)·F_2(ω)
Frequency Convolution        f_1(t)·f_2(t)                   (1/2π) F_1(ω) ∗ F_2(ω)
Area under f(t)              F(0) = ∫_{−∞}^{∞} f(t) dt
Area under F(ω)              f(0) = (1/2π) ∫_{−∞}^{∞} F(ω) dω
Parseval's Theorem           ∫_{−∞}^{∞} |f(t)|^2 dt = (1/2π) ∫_{−∞}^{∞} |F(ω)|^2 dω

We next derive the Fourier transforms of the most common time functions.

1.
δ(t) ⇔ 1    (8.60)

Proof:

By the sampling property of the delta function, ∫_{−∞}^{∞} f(t) δ(t − t_0) dt = f(t_0) and
∫_{−∞}^{∞} f(t) δ(t) dt = f(0). Therefore,

F{δ(t)} = ∫_{−∞}^{∞} δ(t) e^{−jωt} dt = e^{−jωt}|_{t=0} = 1

and, more generally,

F{δ(t − t_0)} = ∫_{−∞}^{∞} δ(t − t_0) e^{−jωt} dt = e^{−jωt_0}

so that

δ(t − t_0) ⇔ e^{−jωt_0}    (8.61)
2.
1 ⇔ 2πδ(ω)    (8.62)

Proof:

F⁻¹{2πδ(ω)} = (1/2π) ∫_{−∞}^{∞} 2πδ(ω) e^{jωt} dω = ∫_{−∞}^{∞} δ(ω) e^{jωt} dω = e^{jωt}|_{ω=0} = 1

[Figure: the constant f(t) = 1 in the time domain and its transform F(ω) = 2πδ(ω) in the frequency domain]
Also, by direct application of the Inverse Fourier transform, or the frequency shifting property
and (8.62), we derive the transform

e^{jω_0 t} ⇔ 2πδ(ω − ω_0)    (8.63)

The transform pairs of (8.62) and (8.63) can also be derived from (8.60) and (8.61) by using the
symmetry property F(t) ⇔ 2πf(−ω).

3.
cos ω_0 t = (1/2)(e^{jω_0 t} + e^{−jω_0 t}) ⇔ πδ(ω − ω_0) + πδ(ω + ω_0)    (8.64)

Proof:

This transform pair follows directly from (8.63). The f(t) ↔ F(ω) correspondence is also shown
in Figure 8.3.

[Figure 8.3: cos ω_0 t in the time domain and its real transform F_Re(ω), two impulses of strength π at ω = ±ω_0]
We know that cos ω_0 t is a real and even function of time, and we found that its Fourier transform
is a real and even function of frequency. This is consistent with the result in Table 8.7.

4.
sin ω_0 t = (1/j2)(e^{jω_0 t} − e^{−jω_0 t}) ⇔ jπδ(ω + ω_0) − jπδ(ω − ω_0)    (8.65)

Proof:

This transform pair also follows directly from (8.63). The f(t) ↔ F(ω) correspondence is also
shown in Figure 8.4.

[Figure 8.4: sin ω_0 t in the time domain and its imaginary transform F_Im(ω), with value +π at ω = −ω_0 and −π at ω = +ω_0]

We know that sin ω_0 t is a real and odd function of time, and we found that its Fourier transform
is an imaginary and odd function of frequency. This is consistent with the result in Table 8.7.
5.
sgn(t) = u_0(t) − u_0(−t) ⇔ 2/(jω)    (8.66)

Proof:

The signum function can be expressed as the limit of the decaying exponentials e^{−at} u_0(t) and
−e^{at} u_0(−t) as a → 0, as illustrated below.

[Figure: e^{−at} u_0(t) approaching +1 for t > 0 and −e^{at} u_0(−t) approaching −1 for t < 0]

Then,

sgn(t) = lim_{a→0} [ e^{−at} u_0(t) − e^{at} u_0(−t) ]    (8.67)

and

F{sgn(t)} = lim_{a→0} [ ∫_{−∞}^{0} −e^{at} e^{−jωt} dt + ∫_{0}^{∞} e^{−at} e^{−jωt} dt ]
          = lim_{a→0} [ ∫_{−∞}^{0} −e^{(a − jω)t} dt + ∫_{0}^{∞} e^{−(a + jω)t} dt ]    (8.68)
          = lim_{a→0} [ −1/(a − jω) + 1/(a + jω) ] = 1/(jω) + 1/(jω) = 2/(jω)

[Figure: sgn(t) in the time domain and its imaginary transform F_Im(ω) = −2/ω]

6.
u_0(t) ⇔ πδ(ω) + 1/(jω)    (8.69)
Proof:

If we attempt to verify the transform pair of (8.69) by direct application of the Fourier transform
definition, we find that

F(ω) = ∫_{−∞}^{∞} f(t) e^{−jωt} dt = ∫_{0}^{∞} e^{−jωt} dt = [ e^{−jωt} / (−jω) ]_{0}^{∞}    (8.70)

but we cannot say that e^{−jωt} approaches 0 as t → ∞, because e^{−jωt} = 1∠−ωt, that is, the
magnitude of e^{−jωt} is always unity, and its angle changes continuously as t assumes greater and
greater values. Since the upper limit cannot be evaluated, the integral of (8.70) does not converge.

To work around this problem, we make use of the sgn(t) function, which we express as

sgn(t) = 2u_0(t) − 1    (8.71)

This alternate expression for the signum function is shown in Figure 8.8.

[Figure 8.8: alternate expression of the signum function as 2u_0(t) − 1]

We rewrite (8.71) as

u_0(t) = (1/2)(1 + sgn(t)) = 1/2 + (1/2) sgn(t)    (8.72)

and since we know that 1 ⇔ 2πδ(ω) and sgn(t) ⇔ 2/(jω), by substitution of these into (8.72)
we get

u_0(t) ⇔ πδ(ω) + 1/(jω)

and this is the same as (8.69). This is a complex function in the frequency domain whose real part
is πδ(ω) and whose imaginary part is −1/ω.
The f(t) ↔ F(ω) correspondence is also shown in Figure 8.9.

[Figure 8.9: the unit step u_0(t) and its transform, with real part πδ(ω) and imaginary part −1/ω]

Since u_0(t) is real but neither an even nor an odd function of time, its Fourier transform is a
complex function of frequency, as shown in (8.69). This is consistent with the result in Table 8.7.
Now, we will prove the time integration property of (8.49), that is,

∫_{−∞}^{t} f(τ) dτ ⇔ F(ω)/(jω) + πF(0)δ(ω)

as follows:

By the convolution integral,

u_0(t) ∗ f(t) = ∫_{−∞}^{∞} f(τ) u_0(t − τ) dτ

and since u_0(t − τ) = 1 for t > τ, and is zero otherwise, the above integral reduces to

u_0(t) ∗ f(t) = ∫_{−∞}^{t} f(τ) dτ

Next, by the time convolution property,

u_0(t) ∗ f(t) ⇔ U_0(ω) · F(ω)

and since

U_0(ω) = πδ(ω) + 1/(jω)

using these results and the sampling property of the delta function, we get

U_0(ω) · F(ω) = [πδ(ω) + 1/(jω)] F(ω) = πδ(ω)F(ω) + F(ω)/(jω) = πF(0)δ(ω) + F(ω)/(jω)
7.
e^{jω_0 t} u_0(t) ⇔ πδ(ω − ω_0) + 1/(j(ω − ω_0))    (8.73)

Proof:

From the Fourier transform of the unit step function,

u_0(t) ⇔ πδ(ω) + 1/(jω)

and the frequency shifting property,

e^{jω_0 t} f(t) ⇔ F(ω − ω_0)

we obtain (8.73).

8.
u_0(t) cos ω_0 t ⇔ (π/2)[δ(ω − ω_0) + δ(ω + ω_0)] + 1/(2j(ω − ω_0)) + 1/(2j(ω + ω_0))
               ⇔ (π/2)[δ(ω − ω_0) + δ(ω + ω_0)] + jω/(ω_0^2 − ω^2)    (8.74)

Proof:

We first express the cosine function as

cos ω_0 t = (1/2)(e^{jω_0 t} + e^{−jω_0 t})

From (8.73),

e^{jω_0 t} u_0(t) ⇔ πδ(ω − ω_0) + 1/(j(ω − ω_0))

and

e^{−jω_0 t} u_0(t) ⇔ πδ(ω + ω_0) + 1/(j(ω + ω_0))

Adding these two transforms and dividing by 2, we obtain (8.74).
9.
u_0(t) sin ω_0 t ⇔ (π/j2)[δ(ω − ω_0) − δ(ω + ω_0)] + ω_0/(ω_0^2 − ω^2)    (8.75)

Proof:

We first express the sine function as

sin ω_0 t = (1/j2)(e^{jω_0 t} − e^{−jω_0 t})

From (8.73),

e^{jω_0 t} u_0(t) ⇔ πδ(ω − ω_0) + 1/(j(ω − ω_0))

and

e^{−jω_0 t} u_0(t) ⇔ πδ(ω + ω_0) + 1/(j(ω + ω_0))

Subtracting the second transform from the first and dividing by j2, we obtain (8.75).
Example 8.1

It is known that L[e^{−αt} u_0(t)] = 1/(s + α). Compute F{e^{−αt} u_0(t)}.

Solution:

F{e^{−αt} u_0(t)} = L[e^{−αt} u_0(t)]|_{s = jω} = 1/(s + α)|_{s = jω} = 1/(jω + α)

Thus, we have obtained the transform pair

e^{−αt} u_0(t) ⇔ 1/(jω + α)    (8.76)
Example 8.2

It is known that

L[(e^{−αt} cos ω_0 t) u_0(t)] = (s + α) / ((s + α)^2 + ω_0^2)

Compute F{(e^{−αt} cos ω_0 t) u_0(t)}.

Solution:

Replacing s with jω, we obtain the transform pair

(e^{−αt} cos ω_0 t) u_0(t) ⇔ (jω + α) / ((jω + α)^2 + ω_0^2)    (8.77)
We can also find the Fourier transform of a time function f ( t ) that has non-zero values for t < 0 ,
and it is zero for all t > 0 . But because the one-sided Laplace transform does not exist for t < 0 , we
must first express the negative time function in the t > 0 domain, and compute the one-sided
Laplace transform. Then, the Fourier transform of f ( t ) can be found by substituting s with – jω . In
other words, when f ( t ) = 0 for t ≥ 0 , and f ( t ) ≠ 0 for t < 0 , we use the substitution
F{f(t)} = L[f(−t)]|_{s = −jω}    (8.78)
Example 8.3

Compute the Fourier transform of f(t) = e^{−a|t|}

a. using the Fourier transform definition
b. by substitution into the Laplace transform equivalent

Solution:

a. Using the Fourier transform definition, we get

F{e^{−a|t|}} = ∫_{−∞}^{0} e^{at} e^{−jωt} dt + ∫_{0}^{∞} e^{−at} e^{−jωt} dt
            = ∫_{−∞}^{0} e^{(a − jω)t} dt + ∫_{0}^{∞} e^{−(a + jω)t} dt
            = 1/(a − jω) + 1/(a + jω) = 2a/(ω^2 + a^2)

and thus we have the transform pair

e^{−a|t|} ⇔ 2a/(ω^2 + a^2)    (8.79)

b. By substitution into the Laplace transform equivalent,

F{e^{−a|t|}} = L[e^{−at}]|_{s = jω} + L[e^{at}]|_{s = −jω} = 1/(s + a)|_{s = jω} + 1/(s + a)|_{s = −jω}
            = 1/(jω + a) + 1/(−jω + a) = 2a/(ω^2 + a^2)
Example 8.4

Derive the Fourier transform of the pulse

f(t) = A[u_0(t + T) − u_0(t − T)]    (8.80)

Solution:

The pulse of (8.80) is shown in Figure 8.10.

[Figure 8.10: rectangular pulse of amplitude A extending from −T to T]

F(ω) = ∫_{−∞}^{∞} f(t) e^{−jωt} dt = ∫_{−T}^{T} A e^{−jωt} dt = [A e^{−jωt} / (−jω)]_{−T}^{T}
     = A (e^{jωT} − e^{−jωT}) / (jω) = 2A (sin ωT)/ω = 2AT (sin ωT)/(ωT)

We observe that the transform of this pulse has the (sin x)/x form, and has its maximum value 2AT
at ωT = 0, since lim_{x→0} (sin x)/x = 1.

Thus, we have the waveform pair

A[u_0(t + T) − u_0(t − T)] ⇔ 2AT (sin ωT)/(ωT)    (8.81)
The f(t) ↔ F(ω) correspondence is also shown in Figure 8.11, where we observe that the ω-axis
crossings occur at values of ωT = ±nπ, where n is an integer.

[Figure 8.11: the pulse f(t) and its transform F(ω) = 2AT sin(ωT)/(ωT), with zero crossings at ωT = ±nπ]

We also observe that since f(t) is real and even, F(ω) is also real and even.
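As a sanity check of (8.81), the sketch below evaluates the Fourier integral of the pulse by a simple Riemann sum and compares it with 2AT sin(ωT)/(ωT); NumPy is assumed, and the values of A and T are arbitrary.

import numpy as np

A, T = 2.0, 0.5                       # assumed pulse amplitude and half-width
dt = 1e-4
t = np.arange(-T, T, dt)              # the pulse is zero outside [-T, T]
w = np.linspace(-40, 40, 801)         # frequencies at which to evaluate F(w)

# Numerical Fourier integral of the pulse: F(w) = integral of A*exp(-j*w*t) dt over [-T, T]
F_num = np.array([np.sum(A * np.exp(-1j * wi * t)) * dt for wi in w])

# Analytic result (8.81): 2AT*sin(wT)/(wT), i.e. 2AT*sinc(wT/pi) with numpy's normalized sinc
F_ana = 2 * A * T * np.sinc(w * T / np.pi)

print(np.max(np.abs(F_num - F_ana)))   # small discretization error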
Example 8.5

Derive the Fourier transform of the pulse of Figure 8.12.

[Figure 8.12: rectangular pulse of amplitude A extending from 0 to 2T]

Solution:

The expression for the given pulse is

f(t) = A[u_0(t) − u_0(t − 2T)]    (8.82)

Alternate Solution:

We can obtain the Fourier transform of (8.82) using the time shifting property, i.e.,

f(t − t_0) ⇔ F(ω) e^{−jωt_0}

and the result of Example 8.4. Thus, multiplying 2AT (sin ωT)/(ωT) by e^{−jωT}, we obtain (8.84).
Example 8.6

Derive the Fourier transform of the waveform of Figure 8.13.

[Figure 8.13: waveform equal to A for −T < t < 0, 2A for 0 < t < T, and A for T < t < 2T]

Solution:

The given waveform can be expressed as

f(t) = A[u_0(t + T) + u_0(t) − u_0(t − T) − u_0(t − 2T)]    (8.85)

and this is precisely the sum of the waveforms of Examples 8.4 and 8.5. We also observe that this
waveform is obtained by the graphical addition of the waveforms of Figures 8.10 and 8.12. Therefore,
we will apply the linearity property to obtain the Fourier transform of this waveform.

We denote the transforms of Examples 8.4 and 8.5 as F_1(ω) and F_2(ω) respectively, and we add
them to obtain F(ω) = F_1(ω) + F_2(ω).

We observe that F(ω) is complex, since f(t) of (8.85) is neither an even nor an odd function.
Example 8.7

Derive the Fourier transform of

f(t) = A cos ω_0 t [u_0(t + T) − u_0(t − T)]    (8.87)

Solution:

From (8.45),

f(t) cos ω_0 t ⇔ [F(ω − ω_0) + F(ω + ω_0)] / 2

and from (8.81),

A[u_0(t + T) − u_0(t − T)] ⇔ 2AT (sin ωT)/(ωT)

Then,

A cos ω_0 t [u_0(t + T) − u_0(t − T)] ⇔ AT [ sin((ω − ω_0)T)/((ω − ω_0)T) + sin((ω + ω_0)T)/((ω + ω_0)T) ]    (8.88)

We also observe that since f(t) is real and even, F(ω) is also real and even.
Example 8.8

Derive the Fourier transform of a periodic time function with period T.

Solution:

From the definition of the exponential Fourier series,

f(t) = Σ_{n=−∞}^{∞} C_n e^{jnω_0 t}    (8.89)

where ω_0 = 2π/T, and, from (8.63),

e^{jnω_0 t} ⇔ 2πδ(ω − nω_0)    (8.90)

Taking the Fourier transform of (8.89), and applying the linearity property for the transforms of
(8.90), we get

F{f(t)} = F{ Σ_{n=−∞}^{∞} C_n e^{jnω_0 t} } = Σ_{n=−∞}^{∞} C_n F{e^{jnω_0 t}} = 2π Σ_{n=−∞}^{∞} C_n δ(ω − nω_0)    (8.91)

The line spectrum of the Fourier transform of (8.91) is shown in Figure 8.14.

[Figure 8.14: line spectrum consisting of impulses of strength 2πC_n at ω = nω_0, n = 0, ±1, ±2, …]

The line spectrum of Figure 8.14 reveals that the Fourier transform of a periodic time function
consists of a train of equally spaced delta functions. The strength of each δ(ω − nω_0) is equal to
2π times the coefficient C_n.
Example 8.9
Derive the Fourier transform of the periodic time function
Chapter 7  Discrete Time Systems and the Z Transform

This chapter is devoted to discrete time systems, and introduces the one-sided Z transform. The
definition, theorems, and properties are discussed, and the Z transforms of the most common
discrete time functions are derived. The discrete transfer function is also defined, and several
examples are given to illustrate its application. The Inverse Z transform, and the methods available
for finding it, are also discussed.
The Z transform of a discrete time sequence f[n] is defined as

F(z) = Σ_{n=0}^{∞} f[n] z^{−n}    (7.1)

and the Inverse Z transform is defined as

f[n] = (1/j2π) ∮ F(z) z^{n−1} dz    (7.2)

We can obtain a discrete time waveform from an analog (continuous, or with a finite number of
discontinuities) signal by multiplying it by a train of impulses. We denote the continuous signal as
f(t), and the impulse train as

Σ_{n=0}^{∞} δ(t − nT)    (7.3)
[Figure: (a) the continuous signal f(t), (b) the train of impulses, and (c) their product, the sampled signal]
Next, we recall from Chapter 2 that the t-domain to s-domain transform pairs for the delta
function are δ(t) ⇔ 1 and δ(t − T) ⇔ e^{−sT}. Therefore, taking the Laplace transform of the
sampled signal, we get

G(s) = L{ Σ_{n=0}^{∞} f[n] δ(t − nT) } = Σ_{n=0}^{∞} f[n] e^{−nsT}    (7.6)
Relation (7.6), with the substitution z = e^{sT}, becomes the same as (7.1), and like s, z is also a
complex variable.

The Z and Inverse Z transforms are denoted as

F(z) = Z{f[n]}    (7.7)

and

f[n] = Z⁻¹{F(z)}    (7.8)

The function F(z), as defined in (7.1), is a series of complex numbers and converges outside the
circle of radius R, that is, it converges (approaches a limit) when |z| > R. In complex variables
theory, the radius R is known as the radius of absolute convergence.

In the complex z-plane the region of convergence is the set of z for which the magnitude of F(z)
is finite, and the region of divergence is the set of z for which the magnitude of F(z) is infinite.

Properties and Theorems of the Z Transform

Shifting of f[n]u_0[n] (right shift in the discrete time domain): if f[n] is a discrete time signal and m
is a positive integer, then

f[n − m] u_0[n − m] ⇔ z^{−m} F(z)    (7.10)
Proof:

Applying the definition of the Z transform, we get

Z{f[n − m] u_0[n − m]} = Σ_{n=0}^{∞} f[n − m] u_0[n − m] z^{−n}

and letting n − m = k, so that n = k + m,

Z{f[n − m]} = Σ_{k=0}^{∞} f[k] z^{−(k + m)} = z^{−m} Σ_{k=0}^{∞} f[k] z^{−k} = z^{−m} F(z)

Right shift in the discrete time domain: when the sequence is not multiplied by u_0[n − m], the right
shift property becomes

f[n − m] ⇔ z^{−m} F(z) + z^{−m} Σ_{k=−m}^{−1} f[k] z^{−k}

Proof:

By application of the definition of the Z transform, we get

Z{f[n − m]} = Σ_{n=0}^{∞} f[n − m] z^{−n}

and letting n − m = k,

Z{f[n − m]} = Σ_{k=−m}^{∞} f[k] z^{−(k + m)} = z^{−m} [ Σ_{k=−m}^{−1} f[k] z^{−k} + Σ_{k=0}^{∞} f[k] z^{−k} ]
            = z^{−m} F(z) + z^{−m} Σ_{k=−m}^{−1} f[k] z^{−k}
Thus, the right shift property can also be written as

Z{f[n − m]} = z^{−m} F(z) + Σ_{n=0}^{m−1} f[n − m] z^{−n}

Left shift in the discrete time domain:

f[n + m] ⇔ z^{m} F(z) − z^{m} Σ_{k=0}^{m−1} f[k] z^{−k}

that is, if f[n] is a discrete time signal and m is a positive integer, the mth left shift of f[n] is
f[n + m].

Proof:

Z{f[n + m]} = Σ_{n=0}^{∞} f[n + m] z^{−n}

and letting n + m = k, so that n = k − m,

Z{f[n + m]} = Σ_{k=m}^{∞} f[k] z^{−(k − m)} = z^{m} Σ_{k=m}^{∞} f[k] z^{−k} = z^{m} [ F(z) − Σ_{k=0}^{m−1} f[k] z^{−k} ]
Multiplication by a^n:

a^n f[n] ⇔ F(z/a)    (7.17)

Proof:

Z{a^n f[n]} = Σ_{n=0}^{∞} a^n f[n] z^{−n} = Σ_{n=0}^{∞} f[n] (z/a)^{−n} = F(z/a)

Multiplication by e^{−naT}:

e^{−naT} f[n] ⇔ F(e^{aT} z)

Proof:

Z{e^{−naT} f[n]} = Σ_{n=0}^{∞} e^{−naT} f[n] z^{−n} = Σ_{n=0}^{∞} f[n] (e^{aT} z)^{−n} = F(e^{aT} z)

Multiplication by n and n²:

n f[n] ⇔ −z (d/dz) F(z)
n² f[n] ⇔ z (d/dz) F(z) + z² (d²/dz²) F(z)    (7.19)
Proof:

By definition,

F(z) = Σ_{n=0}^{∞} f[n] z^{−n}

and taking the first derivative of both sides with respect to z, we get

(d/dz) F(z) = Σ_{n=0}^{∞} (−n) f[n] z^{−n−1} = −z^{−1} Σ_{n=0}^{∞} n f[n] z^{−n}

Multiplying both sides by −z yields Z{n f[n]} = −z (d/dz) F(z).

Summation in the discrete time domain:

Σ_{m=0}^{n} f[m] ⇔ [z/(z − 1)] F(z)

that is, the Z transform of the sum of the values of a signal is equal to z/(z − 1) times the Z
transform of the signal. This property is equivalent to time integration in the continuous time
domain, since integration in the discrete time domain is summation. We will see in the next
section that the term z/(z − 1) is the Z transform of the discrete unit step function u_0[n], and
we recall that in the s-domain

u_0(t) ⇔ 1/s

and

∫_0^t f(τ) dτ ⇔ F(s)/s

Since the summation symbol in (7.21) is y[n], the summation symbol in (7.22) is y[n − 1], and
thus we can write (7.22) as
or

y[n] = Σ_{m=0}^{n} h[n − m] x[m]    (7.25)

We will now prove that convolution in the discrete time domain corresponds to multiplication in
the Z domain.
Proof:

Taking the Z transform of both sides of (7.24), we get

Y(z) = Z{ Σ_{m=0}^{∞} x[m] h[n − m] } = Σ_{n=0}^{∞} [ Σ_{m=0}^{∞} x[m] h[n − m] ] z^{−n}

and interchanging the order of summation,

Y(z) = Σ_{m=0}^{∞} x[m] Σ_{n=0}^{∞} h[n − m] z^{−n} = Σ_{m=0}^{∞} x[m] z^{−m} H(z)

or

Y(z) = X(z) · H(z)    (7.27)

10. Convolution in the Discrete Frequency Domain

If f_1[n] and f_2[n] are two sequences with Z transforms F_1(z) and F_2(z) respectively, then,

f_1[n] · f_2[n] ⇔ (1/j2π) ∮ F_1(v) F_2(z/v) v^{−1} dv    (7.28)

Initial Value Theorem: f[0] = lim_{z→∞} F(z)

Proof:

For all n ≥ 1, as z → ∞,

z^{−n} = 1/z^n → 0

and under these conditions f[n] z^{−n} → 0 also. Taking the limit as z → ∞ in the expression
F(z) = Σ_{n=0}^{∞} f[n] z^{−n}

we observe that the only non-zero value in the summation is that of n = 0. Then,

Σ_{n=0}^{∞} f[n] z^{−n} = f[0] z^{0} = f[0]

Therefore,

lim_{z→∞} F(z) = f[0]

Final Value Theorem: if f[n] approaches a limit as n → ∞, then

lim_{n→∞} f[n] = lim_{z→1} (z − 1) F(z)    (7.30)

Proof:

Let us consider the Z transform of the sequence f[n + 1] − f[n], i.e.,

Z{f[n + 1] − f[n]} = Σ_{n=0}^{∞} (f[n + 1] − f[n]) z^{−n}

We replace the upper limit of the summation with k, and we let k → ∞. Then,

Z{f[n + 1] − f[n]} = lim_{k→∞} Σ_{n=0}^{k} (f[n + 1] − f[n]) z^{−n}    (7.31)

From (7.15),

Z{f[n + 1]} = z F(z) − f[0] z    (7.32)

and by substitution of (7.32) into (7.31), we get

z F(z) − f[0] z − F(z) = lim_{k→∞} Σ_{n=0}^{k} (f[n + 1] − f[n]) z^{−n}

Taking the limit as z → 1 on both sides,

lim_{z→1} [(z − 1) F(z) − f[0] z] = lim_{z→1} lim_{k→∞} Σ_{n=0}^{k} (f[n + 1] − f[n]) z^{−n}
                                 = lim_{k→∞} Σ_{n=0}^{k} lim_{z→1} (f[n + 1] − f[n]) z^{−n}

lim_{z→1} (z − 1) F(z) − f[0] = lim_{k→∞} Σ_{n=0}^{k} (f[n + 1] − f[n]) = lim_{k→∞} f[k + 1] − f[0]

and therefore lim_{z→1} (z − 1) F(z) = lim_{n→∞} f[n], which is (7.30).

We must remember, however, that if the sequence f[n] does not approach a limit, the final value
theorem is invalid. The right side of (7.30) may exist even though f[n] does not approach a limit.
In instances where we cannot determine whether the limit of f[n] exists or not, we can be certain
that it exists if X(z) can be expressed in a proper rational form as

X(z) = A(z) / B(z)
Example 7.1

Find the Z transform of the geometric sequence defined as

f[n] = 0 for n < 0, and f[n] = a^n for n = 0, 1, 2, 3, …

Solution:

From the definition of the Z transform,

F(z) = Σ_{n=0}^{∞} f[n] z^{−n} = Σ_{n=0}^{∞} a^n z^{−n} = Σ_{n=0}^{∞} (a z^{−1})^n    (7.34)
To evaluate this infinite summation, we form a truncated version of F(z) which contains the first k
terms of the series. We denote this truncated version as F_k(z). Then,

F_k(z) = Σ_{n=0}^{k−1} a^n z^{−n} = 1 + a z^{−1} + a^2 z^{−2} + … + a^{k−1} z^{−(k−1)}    (7.35)

Multiplying both sides by a z^{−1}, subtracting, and solving for F_k(z), we get

F_k(z) = (1 − a^k z^{−k}) / (1 − a z^{−1}) = 1/(1 − a z^{−1}) − (a z^{−1})^k / (1 − a z^{−1})    (7.37)

for a z^{−1} ≠ 1.

To determine F(z) from F_k(z), we examine the behavior of the term (a z^{−1})^k in the numerator of
(7.37). We write the terms a z^{−1} and (a z^{−1})^k in polar form, that is, a z^{−1} = |a z^{−1}| e^{jθ} and

(a z^{−1})^k = |a z^{−1}|^k e^{jkθ}    (7.38)

From (7.38) we observe that, for the values of z for which |a z^{−1}| < 1, the magnitude of the complex
number (a z^{−1})^k → 0 as k → ∞, and therefore,

F(z) = lim_{k→∞} F_k(z) = 1/(1 − a z^{−1}) = z/(z − a)    (7.39)

for |a z^{−1}| < 1.

For the values of z for which |a z^{−1}| > 1, the magnitude of the complex number (a z^{−1})^k becomes
unbounded as k → ∞, and therefore F(z) = lim_{k→∞} F_k(z) is unbounded for |a z^{−1}| > 1.

In summary,

F(z) = Σ_{n=0}^{∞} (a z^{−1})^n

converges to the complex number z/(z − a) for |a z^{−1}| < 1, and diverges for |a z^{−1}| > 1.

Also, since

|a z^{−1}| = |a| / |z|

then |a z^{−1}| < 1 implies that |z| > |a|, while |a z^{−1}| > 1 implies |z| < |a|, and thus,

Z{a^n u_0[n]} = Σ_{n=0}^{∞} a^n z^{−n} = z/(z − a) for |z| > |a|, and is unbounded for |z| < |a|    (7.40)

The regions of convergence and divergence for the sequence of (7.40) are shown in Figure 7.2.

[Figure 7.2: z-plane showing the region of divergence |z| < |a| inside the circle of radius |a|, and the
region of convergence |z| > |a| outside it, where F(z) = z/(z − a)]
To determine whether the circumference of the circle, where |z| = |a|, lies in the region of
convergence or divergence, we evaluate the sequence F_k(z) at z = a. Then,

F_k(z)|_{z=a} = Σ_{n=0}^{k−1} a^n z^{−n} |_{z=a} = 1 + 1 + 1 + … + 1 = k    (7.41)

We see that this sequence becomes unbounded as k → ∞, and therefore the circumference of the
circle lies in the region of divergence.
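A brief numerical illustration of (7.40), assuming NumPy and an arbitrary value of a: the partial sums of aⁿz⁻ⁿ settle to z/(z − a) for |z| > |a| and grow without bound for |z| < |a|.

import numpy as np

a = 0.8                                   # assumed value of a
n = np.arange(200).astype(float)          # first 200 terms of the series

def F_partial(z):
    # Partial sum of a^n z^-n (truncated Z transform of a^n u0[n])
    return np.sum(a ** n * z ** (-n))

z_in = 1.5 + 0.5j                         # |z| > |a|: region of convergence
print(F_partial(z_in), z_in / (z_in - a)) # partial sum is close to z/(z - a)

z_out = 0.5                               # |z| < |a|: region of divergence
print(abs(F_partial(z_out)))              # partial sum grows without bound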
Example 7.2

Find the Z transform of the discrete unit step function u_0[n] shown in Figure 7.3, where

u_0[n] = 0 for n < 0, and u_0[n] = 1 for n ≥ 0

Solution:

From the definition of the Z transform,

F(z) = Σ_{n=0}^{∞} f[n] z^{−n} = Σ_{n=0}^{∞} (1) z^{−n}    (7.42)

As in the previous example, to evaluate this infinite summation, we form a truncated version of
F(z) which contains the first k terms of the series, and we denote this truncated version as F_k(z).
Then,

F_k(z) = Σ_{n=0}^{k−1} z^{−n} = 1 + z^{−1} + z^{−2} + … + z^{−(k−1)}    (7.43)

and we observe that as k → ∞, (7.43) becomes the same as (7.42). To express (7.43) in closed
form, we multiply both sides by z^{−1} and we get

z^{−1} F_k(z) = z^{−1} + z^{−2} + z^{−3} + … + z^{−k}    (7.44)

Subtracting (7.44) from (7.43) and solving for F_k(z), we get

F_k(z) = (1 − z^{−k}) / (1 − z^{−1})

for z^{−1} ≠ 1.

Since (z^{−1})^k = |z^{−1}|^k e^{jkθ}, as k → ∞, (z^{−1})^k → 0 for |z| > 1. Therefore,

F(z) = lim_{k→∞} F_k(z) = 1/(1 − z^{−1}) = z/(z − 1)    (7.46)

for |z| > 1, and the region of convergence lies outside the unit circle.

Alternate Solution:

The discrete unit step u_0[n] is a special case of the sequence a^n with a = 1, and since 1^n = 1, by
substitution into (7.40) we get

Z{u_0[n]} = Σ_{n=0}^{∞} (1) z^{−n} = z/(z − 1) for |z| > 1, and is unbounded for |z| < 1    (7.47)
Example 7.3

Find the Z transform of the discrete exponential sequence f[n] = e^{−naT}

Solution:

F(z) = Σ_{n=0}^{∞} e^{−naT} z^{−n} = 1 + e^{−aT} z^{−1} + e^{−2aT} z^{−2} + e^{−3aT} z^{−3} + …

and, summing the geometric series as before,

Z{e^{−naT}} = 1/(1 − e^{−aT} z^{−1}) = z/(z − e^{−aT})    (7.48)

for |e^{−aT} z^{−1}| < 1.
Example 7.4

Find the Z transform of the discrete time functions f_1[n] = cos naT and f_2[n] = sin naT

Solution:

From (7.48) of Example 7.3,

e^{−naT} ⇔ z/(z − e^{−aT})

and replacing −naT with jnaT we get

Z{e^{jnaT}} = Z{cos naT + j sin naT} = z/(z − e^{jaT})

Multiplying numerator and denominator by the conjugate of the denominator,

Z{cos naT} + jZ{sin naT} = [z/(z − e^{jaT})] · [(z − e^{−jaT})/(z − e^{−jaT})]
                         = (z^2 − z cos aT + jz sin aT) / (z^2 − 2z cos aT + 1)

Equating real and imaginary parts, we obtain

cos naT ⇔ (z^2 − z cos aT) / (z^2 − 2z cos aT + 1)    (7.49)

and

sin naT ⇔ (z sin aT) / (z^2 − 2z cos aT + 1)    (7.50)

To define the regions of convergence and divergence, we express the denominator of (7.49) or
(7.50) as

(z − e^{jaT})(z − e^{−jaT})

We see that both pairs of (7.49) and (7.50) have two poles, one at z = e^{jaT} and the other at
z = e^{−jaT}, that is, the poles lie on the unit circle, as shown in Figure 7.4.

[Figure 7.4: z-plane with the two poles on the unit circle; the region of convergence lies outside the
unit circle and the region of divergence inside]

From Figure 7.4, we see that the poles separate the regions of convergence and divergence. Also,
since the circumference of the circle lies in the region of divergence, as we have seen before, the
poles lie in the region of divergence. Therefore, for the discrete time cosine and sine functions we
have the pairs

cos naT ⇔ (z^2 − z cos aT) / (z^2 − 2z cos aT + 1)  for |z| > 1    (7.52)

and

sin naT ⇔ (z sin aT) / (z^2 − 2z cos aT + 1)  for |z| > 1    (7.53)

It is shown in complex variables theory that if F(z) is a proper rational function, all poles lie outside
the region of convergence, but the zeros can lie anywhere on the z-plane.
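The pair (7.52) can be checked numerically as sketched below (NumPy assumed; the values of aT and the test point z are arbitrary, with |z| > 1 so the series converges).

import numpy as np

aT = 0.3                                   # assumed value of the product a*T
z = 1.2 * np.exp(1j * 0.7)                 # a test point with |z| > 1
n = np.arange(5000).astype(float)

partial = np.sum(np.cos(n * aT) * z ** (-n))                          # truncated Z transform
closed  = (z**2 - z * np.cos(aT)) / (z**2 - 2 * z * np.cos(aT) + 1)   # (7.52)
print(partial, closed)                      # the two values agree closely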
Example 7.5

Find the Z transform of the discrete unit ramp f[n] = n u_0[n].

Solution:

Z{n u_0[n]} = Σ_{n=0}^{∞} n z^{−n} = 0 + z^{−1} + 2z^{−2} + 3z^{−3} + …    (7.54)

We can express (7.54) in closed form using the discrete unit step function transform pair

Z{u_0[n]} = Σ_{n=0}^{∞} (1) z^{−n} = z/(z − 1)  for |z| > 1    (7.55)

Differentiating both sides of (7.55) with respect to z and multiplying by −z (the multiplication-by-n
property), we obtain

Z{n u_0[n]} = z/(z − 1)^2
We summarize the transform pairs we have derived, and others given as exercises at the end of this
chapter, in Table 7.2.

TABLE 7.2  Common Z transform pairs

f[n]                          F(z)
δ[n]                          1
δ[n − m]                      z^{−m}
a^n u_0[n]                    z/(z − a),  |z| > |a|
u_0[n]                        z/(z − 1),  |z| > 1
(e^{−naT}) u_0[n]             z/(z − e^{−aT}),  |e^{−aT} z^{−1}| < 1
(cos naT) u_0[n]              (z^2 − z cos aT)/(z^2 − 2z cos aT + 1),  |z| > 1
(sin naT) u_0[n]              (z sin aT)/(z^2 − 2z cos aT + 1),  |z| > 1
(a^n cos naT) u_0[n]          (z^2 − az cos aT)/(z^2 − 2az cos aT + a^2),  |z| > a
(a^n sin naT) u_0[n]          (az sin aT)/(z^2 − 2az cos aT + a^2),  |z| > a
u_0[n] − u_0[n − m]           (z^m − 1)/(z^{m−1}(z − 1))
n u_0[n]                      z/(z − 1)^2
(n + 1) u_0[n]                z^2/(z − 1)^2
a^n n u_0[n]                  az/(z − a)^2
a^n n^2 u_0[n]                az(z + a)/(z − a)^3
where C is a contour enclosing all singularities (poles) of F(s), and v is a dummy variable for s. We
can compute the Z transform of a discrete time function f[n] using the transformation

F(z) = Σ_k Res [ F(s) / (1 − z^{−1} e^{sT}) ]_{s = p_k} = Σ_k lim_{s → p_k} (s − p_k) · F(s) / (1 − z^{−1} e^{sT})    (7.60)

This section may be skipped without loss of continuity; it is intended for readers who have prior
knowledge of complex variables theory. However, the following examples show that this procedure
is not difficult.

Example 7.6

Derive the Z transform of the discrete unit step function u_0[n] using the residue theorem.

Solution:

We learned in Chapter 2 that

L{u_0(t)} = 1/s

Then, by the residue theorem of (7.60),

F(z) = lim_{s → p_k} (s − p_k) · F(s) / (1 − z^{−1} e^{sT}) = lim_{s → 0} (s − 0) · (1/s) / (1 − z^{−1} e^{sT})
     = lim_{s → 0} 1 / (1 − z^{−1} e^{sT}) = 1 / (1 − z^{−1}) = z / (z − 1)
Example 7.7

Derive the Z transform of the discrete exponential sequence e^{−naT} u_0[n] using the residue theorem.

Solution:

From Chapter 2,

L{e^{−at} u_0(t)} = 1/(s + a)

Then, by the residue theorem of (7.60),

F(z) = lim_{s → p_k} (s − p_k) · F(s) / (1 − z^{−1} e^{sT}) = lim_{s → −a} (s + a) · [1/(s + a)] / (1 − z^{−1} e^{sT})
     = lim_{s → −a} 1 / (1 − z^{−1} e^{sT}) = 1 / (1 − z^{−1} e^{−aT}) = z / (z − e^{−aT})
Example 7.8

Derive the Z transform of the discrete unit ramp function n u_0[n] using the residue theorem.

Solution:

From Chapter 2,

L{t u_0(t)} = 1/s^2

Since F(s) has a second order pole at s = 0, we need to apply the residue theorem applicable to a
pole of order n. This theorem states that

F(z) = lim_{s → p_k} [1/(n − 1)!] · (d^{n−1}/ds^{n−1}) [ (s − p_k)^n F(s) / (1 − z^{−1} e^{sT}) ]    (7.61)

Thus, for this example,

F(z) = lim_{s → 0} (d/ds) [ s^2 · (1/s^2) / (1 − z^{−1} e^{sT}) ] = lim_{s → 0} (d/ds) [ 1 / (1 − z^{−1} e^{sT}) ] = Tz / (z − 1)^2

which corresponds to the samples f[n] = nT; for the unit ramp f[n] = n (i.e., T = 1) this is z/(z − 1)^2,
in agreement with Table 7.2.
and

F(z) = Σ_{n=0}^{∞} f[n] z^{−n}    (7.63)

The two transforms are related by the substitution

z = e^{sT}    (7.65)

and

s = (1/T) ln z    (7.66)

Therefore,

F(z) = G(s)|_{s = (1/T) ln z}    (7.67)

Since s and z are both complex variables, relation (7.67) allows the mapping (transformation) of
regions of the s-plane into the z-plane. We find this transformation by recalling that s = σ + jω and,
therefore, expressing z in magnitude-phase form and using (7.65), we get

z = |z| e^{jθ} = e^{σT} e^{jωT}    (7.68)

where

|z| = e^{σT}    (7.69)

and

θ = ωT    (7.70)

Since the sampling radian frequency is ω_s = 2π/T, we express (7.70) as

θ = ωT = 2π (ω/ω_s)    (7.71)

and thus

z = e^{σT} e^{j2π(ω/ω_s)}    (7.72)

The quantity e^{j2π(ω/ω_s)} in (7.72) defines the unit circle; therefore, let us examine the behavior of
z when σ is negative, zero, or positive.

Case I, σ < 0: When σ is negative, from (7.69) we see that |z| < 1, and thus the left half of the
s-plane maps inside the unit circle of the z-plane; for different negative values of σ we get
concentric circles with radius less than unity.

Case II, σ > 0: When σ is positive, from (7.69) we see that |z| > 1, and thus the right half of the
s-plane maps outside the unit circle of the z-plane; for different positive values of σ we get
concentric circles with radius greater than unity.

Case III, σ = 0: When σ is zero, from (7.72) we see that z = e^{j2π(ω/ω_s)}, and all such values lie on
the circumference of the unit circle. For illustration purposes, several fractional values of the
sampling radian frequency ω_s are mapped in Table 7.3.

TABLE 7.3  Mapping of the jω axis (σ = 0) onto the unit circle

ω           |z|    θ
0           1      0
ω_s/8       1      π/4
ω_s/4       1      π/2
3ω_s/8      1      3π/4
ω_s/2       1      π
5ω_s/8      1      5π/4
3ω_s/4      1      3π/2
7ω_s/8      1      7π/4
ω_s         1      2π

From Table 7.3, we see that the portion of the jω axis for the interval 0 ≤ ω ≤ ω_s in the s-plane
maps on the circumference of the unit circle in the z-plane, as shown in Figure 7.5.

[Figure 7.5: mapping of the jω axis of the s-plane (ω = 0 to ω_s) onto the unit circle of the z-plane]

The mapping from the z-plane to the s-plane is a multi-valued transformation since, as we have
seen, s = (1/T) ln z, and it is shown in complex variables textbooks that the logarithm of a complex
variable is multi-valued, its values differing from one another by integer multiples of j2π.
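A small sketch of the mapping z = e^{sT} of (7.65), assuming NumPy and an arbitrary sampling period T: points on the jω axis between 0 and ω_s land on the unit circle, and a point with σ < 0 lands inside it.

import numpy as np

T = 1e-3                                   # assumed sampling period
ws = 2 * np.pi / T                         # sampling radian frequency

# Points on the j*omega axis (sigma = 0) from 0 to ws map onto the unit circle
w = np.linspace(0, ws, 9)
z = np.exp(1j * w * T)                     # z = e^{sT} with s = j*w
print(np.allclose(np.abs(z), 1.0))         # True: all on the unit circle
print(np.angle(z[1]))                      # ws/8 maps to angle pi/4

# A point in the left half s-plane (sigma < 0) maps inside the unit circle
s = -500 + 1j * 0.3 * ws
print(np.abs(np.exp(s * T)) < 1)           # True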
The Inverse Z Transform

a. Partial Fraction Expansion

With this method we expand F(z) into a sum of simple terms whose Inverse Z transforms are found
in Table 7.2, where k is a constant, and r_i and p_i represent the residues and poles respectively;
these can be real or complex.

Before we expand F(z) into partial fractions, we must express it as a proper rational function. This
is done by expanding F(z)/z instead of F(z), that is, we expand it as

F(z)/z = k/z + r_1/(z − p_1) + r_2/(z − p_2) + …    (7.75)

and, after the residues are found, we rewrite (7.75) as

F(z) = k + r_1 z/(z − p_1) + r_2 z/(z − p_2) + …    (7.77)
Example 7.9

Use the partial fraction expansion method to compute the Inverse Z transform of

F(z) = 1 / [(1 − 0.5z^{−1})(1 − 0.75z^{−1})(1 − z^{−1})]    (7.78)

Solution:

We multiply both numerator and denominator by z^3 to eliminate the negative powers of z. Then,

F(z) = z^3 / [(z − 0.5)(z − 0.75)(z − 1)]

Next, we form F(z)/z and expand it in partial fractions:

F(z)/z = z^2 / [(z − 0.5)(z − 0.75)(z − 1)] = r_1/(z − 0.5) + r_2/(z − 0.75) + r_3/(z − 1)

The residues are

r_1 = z^2 / [(z − 0.75)(z − 1)] |_{z=0.5} = 0.25 / [(−0.25)(−0.5)] = 2
r_2 = z^2 / [(z − 0.5)(z − 1)] |_{z=0.75} = 0.5625 / [(0.25)(−0.25)] = −9
r_3 = z^2 / [(z − 0.5)(z − 0.75)] |_{z=1} = 1 / [(0.5)(0.25)] = 8

Then,

F(z) = z^3 / [(z − 0.5)(z − 0.75)(z − 1)] = 2z/(z − 0.5) − 9z/(z − 0.75) + 8z/(z − 1)    (7.79)

and, using the pair a^n u_0[n] ⇔ z/(z − a) from Table 7.2, the Inverse Z transform is

f[n] = 2(0.5)^n − 9(0.75)^n + 8,  n ≥ 0
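For readers who want to verify Example 7.9 numerically, the sketch below applies scipy.signal.residuez to the z⁻¹ form (7.78); SciPy and NumPy are assumed to be available. Each term rᵢ/(1 − pᵢz⁻¹) inverts to rᵢpᵢⁿu₀[n] by Table 7.2, so the computed residues reproduce f[n].

import numpy as np
from scipy.signal import residuez

# F(z) = 1 / [(1 - 0.5 z^-1)(1 - 0.75 z^-1)(1 - z^-1)]   -- eq. (7.78)
b = [1.0]
a = [1.0, -2.25, 1.625, -0.375]        # expanded denominator in powers of z^-1

r, p, k = residuez(b, a)               # F(z) = sum r_i / (1 - p_i z^-1)
print(np.sort_complex(p))              # poles 0.5, 0.75, 1
print(r)                               # residues 2, -9, 8 (in some order)

# Inverse transform f[n] = sum r_i * p_i^n, compared against the closed form
n = np.arange(10)
f = sum(ri * pi ** n for ri, pi in zip(r, p)).real
print(np.allclose(f, 2 * 0.5 ** n - 9 * 0.75 ** n + 8))    # True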
Example 7.10

Use the partial fraction expansion method to compute the Inverse Z transform of

F(z) = 12z / [(z + 1)(z − 1)^2]    (7.81)

Solution:

Division of both sides by z and partial fraction expansion yields

F(z)/z = 12 / [(z + 1)(z − 1)^2] = r_1/(z + 1) + r_2/(z − 1)^2 + r_3/(z − 1)
Example 7.11

Use the partial fraction expansion method to compute the Inverse Z transform of

F(z) = (z + 1) / [(z − 1)(z^2 + 2z + 2)]

Solution:

Dividing both sides by z and performing partial fraction expansion, we get

F(z)/z = (z + 1) / [z(z − 1)(z^2 + 2z + 2)] = r_1/z + r_2/(z − 1) + r_3/(z + 1 − j) + r_4/(z + 1 + j)    (7.84)
Recalling that

δ[n] ⇔ 1

and

a^n u_0[n] ⇔ z/(z − a)

and noting that the complex poles are −1 ± j = √2 e^{±j135°} with residues 0.05 ± j0.15, we combine
the conjugate terms and obtain

f[n] = −0.5δ[n] + 0.4 + (0.05 + j0.15)(√2 e^{j135°})^n + (0.05 − j0.15)(√2 e^{−j135°})^n

or

f[n] = −0.5δ[n] + 0.4 + ((√2)^n / 10) cos(135°·n) − (3(√2)^n / 10) sin(135°·n)    (7.85)
b. The Inversion Integral

The inversion integral states that

f[n] = (1/j2π) ∮_C F(z) z^{n−1} dz    (7.87)

where C is a closed curve that encloses all poles of the integrand; by Cauchy's residue theorem,
this integral can be expressed as

f[n] = Σ_k Res [ F(z) z^{n−1} ]_{z = p_k}

Example 7.12

Use the inversion integral method to find the Inverse Z transform of

F(z) = (1 + 2z^{−1} + z^{−3}) / [(1 − z^{−1})(1 − 0.75z^{−1})]    (7.88)
Solution:

Multiplication of the numerator and denominator by z^3 yields

F(z) = (z^3 + 2z^2 + 1) / [z(z − 1)(z − 0.75)]    (7.89)

We are interested in the values f[0], f[1], f[2], …, that is, in n = 0, 1, 2, …

For n = 0, (7.87) becomes

f[0] = Σ_k Res [ (z^3 + 2z^2 + 1) / (z^2 (z − 1)(z − 0.75)) ]_{z = p_k}
     = Res[·]_{z=0} + Res[·]_{z=1} + Res[·]_{z=0.75}    (7.91)

The first term on the right side of (7.91) has a pole of order 2 at z = 0; therefore, we must evaluate
the first derivative of

(z^3 + 2z^2 + 1) / [(z − 1)(z − 0.75)]

at z = 0. Carrying out the evaluations, the three residues are 28/9, 16, and −163/9, so that

f[0] = 28/9 + 16 − 163/9 = 1    (7.92)
Similarly, for n = 1,

f[1] = Σ_k Res [ (z^3 + 2z^2 + 1) / (z(z − 1)(z − 0.75)) ]_{z = p_k}
     = (z^3 + 2z^2 + 1)/[(z − 1)(z − 0.75)] |_{z=0} + (z^3 + 2z^2 + 1)/[z(z − 0.75)] |_{z=1}
       + (z^3 + 2z^2 + 1)/[z(z − 1)] |_{z=0.75}    (7.93)
     = 4/3 + 16 − 163/12 = 3.75

For n ≥ 2 there are no poles at z = 0, that is, the only poles are at z = 1 and z = 0.75. Therefore,

f[n] = Σ_k Res [ (z^3 + 2z^2 + 1) z^{n−2} / ((z − 1)(z − 0.75)) ]_{z = p_k}
     = (z^3 + 2z^2 + 1) z^{n−2} / (z − 0.75) |_{z=1} + (z^3 + 2z^2 + 1) z^{n−2} / (z − 1) |_{z=0.75}    (7.94)

for n ≥ 2.

From (7.94), we observe that for all values of n ≥ 2 the exponential factor z^{n−2} is always unity for
z = 1, but varies for values z ≠ 1. Then,

f[n] = (z^3 + 2z^2 + 1)/(z − 0.75) |_{z=1} + (z^3 + 2z^2 + 1) z^{n−2} / (z − 1) |_{z=0.75}
     = 4/0.25 + [(0.75)^3 + 2(0.75)^2 + 1](0.75)^{n−2} / (−0.25)
     = 16 + (163/64)(0.75)^n / [(−0.25)(0.75)^2] = 16 − (163/9)(0.75)^n    (7.95)

for n ≥ 2.

Combining the values for n = 0, n = 1, and n ≥ 2 into a single expression,

f[n] = (28/9)δ[n] + (4/3)δ[n − 1] + 16 − (163/9)(0.75)^n

where the coefficients of δ[n] and δ[n − 1] are the residues that were found in (7.92) and (7.93) for
n = 0 and n = 1 respectively at z = 0. The coefficient 28/9 is multiplied by δ[n] to emphasize that
this value exists only for n = 0, and the coefficient 4/3 is multiplied by δ[n − 1] to emphasize that
this value exists only for n = 1.
Example 7.13

Use the inversion integral method to find the Inverse Z transform of

F(z) = 1 / [(1 − z^{−1})(1 − 0.75z^{−1})]    (7.97)

Solution:

Multiplication of the numerator and denominator by z^2 yields

F(z) = z^2 / [(z − 1)(z − 0.75)]    (7.98)

This function has no poles at z = 0. The poles are at z = 1 and z = 0.75. Then, by (7.87),

f[n] = Σ_k Res [ z^2 · z^{n−1} / ((z − 1)(z − 0.75)) ]_{z = p_k} = Σ_k Res [ z^{n+1} / ((z − 1)(z − 0.75)) ]_{z = p_k}
     = z^{n+1}/(z − 0.75) |_{z=1} + z^{n+1}/(z − 1) |_{z=0.75}    (7.99)
     = 1/0.25 + (0.75)^{n+1}/(−0.25) = 4 − 4(0.75)^{n+1} = 4 − 3(0.75)^n
c. Long Division of Polynomials

To apply this method, F(z) must be a rational function, and the numerator and denominator must be
polynomials arranged in descending powers of z.

Example 7.14

Use the long division method to determine f[n] for n = 0, 1, and 2, given that

F(z) = (1 + z^{−1} + 2z^{−2} + 3z^{−3}) / [(1 − 0.25z^{−1})(1 − 0.5z^{−1})(1 − 0.75z^{−1})]    (7.100)

Solution:

First, we multiply numerator and denominator by z^3, expand the denominator to a polynomial, and
arrange the numerator and denominator polynomials in descending powers of z. Then,

F(z) = (z^3 + z^2 + 2z + 3) / [(z − 0.25)(z − 0.5)(z − 0.75)]

or

F(z) = (z^3 + z^2 + 2z + 3) / [z^3 − (3/2)z^2 + (11/16)z − 3/32]    (7.101)

Performing the long division shown in Figure 7.10, we find that the quotient Q(z) is

Q(z) = 1 + (5/2)z^{−1} + (81/16)z^{−2} + …    (7.102)

[Figure 7.10: long division of the dividend z^3 + z^2 + 2z + 3 by the divisor z^3 − (3/2)z^2 + (11/16)z − 3/32,
yielding the quotient 1 + (5/2)z^{−1} + (81/16)z^{−2} + …]

By definition of the Z transform,

F(z) = Σ_{n=0}^{∞} f[n] z^{−n} = f[0] + f[1] z^{−1} + f[2] z^{−2} + …    (7.103)

Equating like terms of (7.102) and (7.103), we obtain

f[0] = 1,  f[1] = 5/2,  and  f[2] = 81/16
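The long division of Example 7.14 can also be carried out programmatically by the equivalent power-series recursion sketched below (plain Python, no extra libraries needed); it reproduces f[0] = 1, f[1] = 5/2, f[2] = 81/16. The coefficient lists simply restate (7.100)-(7.101).

# Example 7.14: F(z) = (1 + z^-1 + 2 z^-2 + 3 z^-3) / (1 - 1.5 z^-1 + (11/16) z^-2 - (3/32) z^-3)
b = [1.0, 1.0, 2.0, 3.0]                  # numerator coefficients of z^0, z^-1, z^-2, z^-3
a = [1.0, -1.5, 11.0 / 16, -3.0 / 32]     # denominator coefficients (expanded form, eq. 7.101)

# Long division of the power series: f[n] is the nth coefficient of the quotient
f = []
for n in range(5):
    bn = b[n] if n < len(b) else 0.0
    acc = sum(a[k] * f[n - k] for k in range(1, min(n, len(a) - 1) + 1))
    f.append((bn - acc) / a[0])

print(f[:3])      # [1.0, 2.5, 5.0625]  ->  f[0] = 1, f[1] = 5/2, f[2] = 81/16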
TABLE 7.4  Methods of Evaluation of the Inverse Z Transform

a. Partial Fraction Expansion
b. Inversion Integral
c. Long Division of Polynomials
The Transfer Function of Discrete Time Systems

Assuming that all initial conditions are zero, taking the Z transform of both sides of (7.106), and
using the Z transform pair

f[n − m] ⇔ z^{−m} F(z)

we get

Y(z) + b_1 z^{−1} Y(z) + b_2 z^{−2} Y(z) + … + b_k z^{−k} Y(z)
    = a_0 X(z) + a_1 z^{−1} X(z) + a_2 z^{−2} X(z) + … + a_k z^{−k} X(z)    (7.107)
The discrete impulse response h[n] is the response to the input x[n] = δ[n], and since

Z{δ[n]} = Σ_{n=0}^{∞} δ[n] z^{−n} = 1

we can find the discrete time impulse response h[n] by taking the Inverse Z transform of the
discrete transfer function H(z), that is,

h[n] = Z⁻¹{H(z)}    (7.112)
Example 7.15
The difference equation describing the input-output relationship of a discrete time system with zero
initial conditions is

y[n] − 0.5y[n − 1] + 0.125y[n − 2] = x[n] + x[n − 1]    (7.113)

Compute:

a. The transfer function H(z)
b. The discrete time impulse response h[n]
c. The response when the input is the discrete unit step u_0[n]
Solution:
a. Taking the Z transform of both sides of (7.113), we get

Y(z) − 0.5z^{−1} Y(z) + 0.125z^{−2} Y(z) = X(z) + z^{−1} X(z)

and thus

H(z) = Y(z)/X(z) = (1 + z^{−1}) / (1 − 0.5z^{−1} + 0.125z^{−2}) = (z^2 + z) / (z^2 − 0.5z + 0.125)    (7.114)

b. To obtain the discrete time impulse response h[n], we need to compute the Inverse Z transform
of (7.114). We first divide both sides by z and we get

H(z)/z = (z + 1) / (z^2 − 0.5z + 0.125)    (7.115)

Expanding in partial fractions about the complex poles z = 0.25 ± j0.25,

H(z)/z = (0.5 − j2.5)/(z − 0.25 − j0.25) + (0.5 + j2.5)/(z − 0.25 + j0.25)

or

H(z) = (0.5 − j2.5)z / (z − 0.25 − j0.25) + (0.5 + j2.5)z / (z − 0.25 + j0.25)
     = (0.5 − j2.5)z / (z − 0.25√2 e^{j45°}) + (0.5 + j2.5)z / (z − 0.25√2 e^{−j45°})    (7.116)

Recalling that

a^n u_0[n] ⇔ z/(z − a)

we obtain

h[n] = (0.5 − j2.5)(0.25√2)^n e^{jn45°} + (0.5 + j2.5)(0.25√2)^n e^{−jn45°}
     = (0.25√2)^n [cos(n·45°) + 5 sin(n·45°)]
c. For the discrete unit step input, X(z) = z/(z − 1) and Y(z) = H(z)X(z). Then,

Y(z)/z = (z^2 + z) / [(z − 1)(z^2 − 0.5z + 0.125)] = (z^2 + z) / [z^3 − (3/2)z^2 + (5/8)z − 1/8]    (7.119)

Now, we compute the residues and poles. The poles are at z = 1 and z = 0.25 ± j0.25, with
corresponding residues 3.2 and −1.1 ± j0.3, so that

Y(z) = 3.2z/(z − 1) + (−1.1 + j0.3)z/(z − 0.25 − j0.25) + (−1.1 − j0.3)z/(z − 0.25 + j0.25)
     = 3.2z/(z − 1) + (−1.1 + j0.3)z/(z − 0.25√2 e^{j45°}) + (−1.1 − j0.3)z/(z − 0.25√2 e^{−j45°})    (7.121)

Recalling that

a^n u_0[n] ⇔ z/(z − a)

we obtain

y[n] = 3.2 + (−1.1 + j0.3)(0.25√2)^n e^{jn45°} + (−1.1 − j0.3)(0.25√2)^n e^{−jn45°}
     = 3.2 − (0.25√2)^n [2.2 cos(n·45°) + 0.6 sin(n·45°)]

The plots for the discrete time sequences h[n] and y[n] are shown in Figure 7.13.
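A hedged numerical check of Example 7.15: iterating the difference equation (7.113) directly (NumPy assumed) reproduces the closed forms for h[n] and y[n] derived above.

import numpy as np

# Example 7.15: y[n] - 0.5 y[n-1] + 0.125 y[n-2] = x[n] + x[n-1], zero initial conditions
def simulate(x):
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = (x[n] + (x[n - 1] if n >= 1 else 0.0)
                + 0.5 * (y[n - 1] if n >= 1 else 0.0)
                - 0.125 * (y[n - 2] if n >= 2 else 0.0))
    return y

n = np.arange(20)
r, th = 0.25 * np.sqrt(2), np.pi / 4            # pole radius and angle (45 degrees)

h = simulate(np.where(n == 0, 1.0, 0.0))        # impulse response
print(np.allclose(h, r**n * (np.cos(th * n) + 5 * np.sin(th * n))))                 # True

y = simulate(np.ones(len(n)))                   # unit step response
print(np.allclose(y, 3.2 - r**n * (2.2 * np.cos(th * n) + 0.6 * np.sin(th * n))))   # True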
Chapter-8 Discrete Fourier Transform 2020
The discrete Fourier transform (DFT) converts a finite sequence of equally-
spaced samples of a function into a same-length sequence of equally-spaced samples of
the discrete-time Fourier transform (DTFT), which is a complex-valued function of
frequency. The interval at which the DTFT is sampled is the reciprocal of the duration of
the input sequence. An inverse DFT is a Fourier series, using the DTFT samples as
coefficients of complex sinusoids at the corresponding DTFT frequencies. It has the
same sample-values as the original input sequence. The DFT is therefore said to be
a frequency domain representation of the original input sequence. If the original
sequence spans all the non-zero values of a function, its DTFT is continuous (and
periodic), and the DFT provides discrete samples of one cycle. If the original sequence is
one cycle of a periodic function, the DFT provides all the non-zero values of one DTFT
cycle.
Definition of DFT

From the definition of the Z-transform we have

X(z) = Σ_{n=0}^{∞} x[n] z^{−n} − − − − − (1)

Evaluating X(z) on the unit circle at N equally spaced points z = e^{jωT}, where

ω = (2π/NT)·m,  m = 0, 1, 2, …, N−1 − − − − − (2)

and N represents the number of points that are equally spaced in the interval 0 to 2π, the DFT of
the N-point sequence x[n] becomes

X[m] = Σ_{n=0}^{N−1} x[n] e^{−j2πmn/N} − − − − − (3)

for m = 0, 1, 2, …, N−1.

The inverse DFT is defined as

D⁻¹{X[m]} = x[n] = (1/N) Σ_{m=0}^{N−1} X[m] e^{j2πmn/N} − − − − − (4)

for n = 0, 1, 2, …, N−1.

Let us consider the representation of the DFT by using the twiddle factor, defined as
W_N = e^{−j2π/N}    and    W_N^{−1} = e^{j2π/N}
In general, the DFT X[m] is complex, and thus we can express it as

X[m] = Re{X[m]} + j Im{X[m]} − − − − − (7)

for m = 0, 1, 2, …, N−1.

From Euler's identity, we can write the exponential as

e^{−j2πmn/N} = cos(2πmn/N) − j sin(2πmn/N) − − − − − (8)

Substituting equation 8 into equation 3, it becomes

X[m] = Σ_{n=0}^{N−1} x[n] cos(2πmn/N) − j Σ_{n=0}^{N−1} x[n] sin(2πmn/N) − − − − − (9)

for m = 0, 1, 2, …, N−1.

Hence the real part of the DFT is

Re{X[m]} = x[0] + Σ_{n=1}^{N−1} x[n] cos(2πmn/N) − − − − − (10)

and the imaginary part of the DFT becomes

Im{X[m]} = − Σ_{n=1}^{N−1} x[n] sin(2πmn/N) − − − − − (11)

for m = 0, 1, 2, …, N−1.
#Problem 1:

A discrete time signal is defined by the following sequence: x[0] = 1, x[1] = 2, x[2] = 2, x[3] = 1, and
x[n] = 0 for all other n. Compute the frequency components X[m].

Solution: There are four discrete values of x[n], so we will use a 4-point DFT for this example, i.e.,
N = 4 with n = 0, 1, 2, and 3.

Re{X[m]} = x[0] + Σ_{n=1}^{3} x[n] cos(2πmn/4)
         = x[0] + x[1] cos(2πm(1)/4) + x[2] cos(2πm(2)/4) + x[3] cos(2πm(3)/4)
         = 1 + 2 cos(πm/2) + 2 cos(πm) + cos(3πm/2)

Now, for m = 0, 1, 2, and 3 we calculate from the above relation:

For m = 0:  Re{X[0]} = 1 + (2)(1) + (2)(1) + (1)(1) = 6
For m = 1:  Re{X[1]} = 1 + (2)(0) + (2)(−1) + (1)(0) = −1
For m = 2:  Re{X[2]} = 1 + (2)(−1) + (2)(1) + (1)(−1) = 0
For m = 3:  Re{X[3]} = 1 + (2)(0) + (2)(−1) + (1)(0) = −1
Similarly, for the imaginary part,

Im{X[m]} = − Σ_{n=1}^{3} x[n] sin(2πmn/4) = −[2 sin(πm/2) + 2 sin(πm) + sin(3πm/2)]

For m = 0:  Im{X[0]} = 0
For m = 1:  Im{X[1]} = −[2(1) + 2(0) + (−1)] = −1
For m = 2:  Im{X[2]} = −[2(0) + 2(0) + (0)] = 0
For m = 3:  Im{X[3]} = −[2(−1) + 2(0) + (1)] = 1

Therefore, the frequency components are X[0] = 6, X[1] = −1 − j, X[2] = 0, and X[3] = −1 + j.

To recover x[n] from X[m] we use the 4-point inverse DFT, which we can split as follows:

x[n] = (1/4) Σ_{m=0}^{3} X[m] e^{j2πmn/4} = (1/4) Σ_{m=0}^{3} X[m] e^{jπmn/2}
x[n] = (1/4)[X[0] + X[1] e^{jπn/2} + X[2] e^{jπn} + X[3] e^{j3πn/2}]

The discrete time components x[n] for n = 0, 1, 2, and 3 are as follows:

x[0] = (1/4){X[0] + X[1] + X[2] + X[3]} = (1/4){6 + (−1 − j) + 0 + (−1 + j)} = 1

x[1] = (1/4){X[0] + X[1](j) + X[2](−1) + X[3](−j)}
     = (1/4){6 + (−1 − j)(j) + 0(−1) + (−1 + j)(−j)} = 2

x[2] = (1/4){X[0] + X[1](−1) + X[2](1) + X[3](−1)}
     = (1/4){6 + (−1 − j)(−1) + 0(1) + (−1 + j)(−1)} = 2

x[3] = (1/4){X[0] + X[1](−j) + X[2](−1) + X[3](j)}
     = (1/4){6 + (−1 − j)(−j) + 0(−1) + (−1 + j)(j)} = 1

These are the original sample values, as expected.
1. Linearity Property:

The linearity property states that if f_1[n], f_2[n], f_3[n], …, f_N[n] have DFTs
F_1[m], F_2[m], F_3[m], …, F_N[m] respectively, and c_1, c_2, c_3, …, c_N are arbitrary constants, then

c_1 f_1[n] + c_2 f_2[n] + … + c_N f_N[n] ⇔ c_1 F_1[m] + c_2 F_2[m] + … + c_N F_N[m]
2. Time Shifting Property:

If x[n] ⇔ X[m], then a right shift of the sequence by k samples gives

x[n − k] ⇔ e^{−j2πmk/N} X[m] = W_N^{mk} X[m]

Proof: We know that the definition of the DFT is given by

X[m] = Σ_{n=0}^{N−1} x[n] e^{−j2πmn/N}

If x[n] is shifted to the right by k samples, we must change the lower and upper limits of the
summation from 0 to k and from N − 1 to N + k − 1 respectively:

D{x[n − k]} = Σ_{n=k}^{N+k−1} x[n − k] e^{−j2πmn/N}

Letting n − k = μ and rearranging the summation and the limits, we get

D{x[n − k]} = Σ_{μ=0}^{N−1} x[μ] e^{−j2πm(μ + k)/N} = e^{−j2πmk/N} Σ_{μ=0}^{N−1} x[μ] e^{−j2πmμ/N} = e^{−j2πmk/N} X[m]
3. Frequency Shifting Property:

Similarly, multiplication of the sequence by e^{j2πkn/N} shifts the DFT:

e^{j2πkn/N} x[n] ⇔ X[m − k]

Proof: The proof is obtained by taking the Inverse DFT, changing the order of summation, and
letting m − k = μ.
8.3 The Sampling Theorem:
The sampling theorem specifies the minimum-sampling rate at which a
continuous-time signal needs to be uniformly sampled so that the original signal can be
completely recovered or reconstructed by these samples alone. This is usually referred
to as Shannon's sampling theorem in the literature.
It states that if a continuous time signal f(t) is band-limited with its highest frequency component
less than ω_m, then f(t) can be completely recovered from its sampled values if the sampling
frequency is equal to or greater than 2ω_m.

Generally, the relation is represented as f_s ≥ 2f_m, which is called the Nyquist rate, where f_s is the
sampling frequency and f_m is the highest frequency of the message signal.

Figure 8.1  Sampling of a sinusoidal signal of frequency f at different sampling rates f_s. The dashed
lines show the alias frequencies, which occur when f_s/f < 2.
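A short illustration of the sampling theorem (NumPy assumed; the 10 Hz tone and the two sampling rates are arbitrary choices): sampling below the Nyquist rate makes the tone indistinguishable from a lower-frequency alias, while sampling above it identifies the frequency correctly.

import numpy as np

f = 10.0                                   # signal frequency (Hz)
fs_low, fs_ok = 12.0, 50.0                 # sampling below and above the Nyquist rate 2*f

# Sampling at fs = 12 Hz (< 2f): the 10 Hz cosine and a 2 Hz cosine give identical samples
n = np.arange(0, 1, 1 / fs_low)
print(np.allclose(np.cos(2 * np.pi * f * n), np.cos(2 * np.pi * (fs_low - f) * n)))  # True: alias at 2 Hz

# Sampling at fs = 50 Hz (>= 2f): the DFT of one second of samples peaks at the true 10 Hz
n = np.arange(0, 1, 1 / fs_ok)
X = np.abs(np.fft.rfft(np.cos(2 * np.pi * f * n)))
print(np.fft.rfftfreq(len(n), d=1 / fs_ok)[np.argmax(X)])    # 10.0 Hz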