
SIGNALS AND SYSTEMS

CHAPTER – 1
Signal: - A signal is a function representing a physical quantity or variable
Classification of Signals:-
A. Continuous time and Discrete – Time Signals:-
A signal x(t) is a continuous-time signal if t is a continuous variable. If t is a discrete variable, that is, if x is defined only at discrete times, then it is a discrete-time signal, written x[n].
B. Analog and Digital Signals:-
Analog:- A signal that can take on an infinite number of distinct values in amplitude.
Digital:- A signal that can take on only a finite number of distinct values in amplitude.

(Figure: the four combinations: analog continuous, analog discrete, digital continuous, digital discrete. Sampling a continuous signal x(t) at intervals Ts gives the sequence x[n] = x(nTs).)
C. Real and Complex Signals:-
A signal x(t) is a real signal if its value is a real number; if its value is a complex number, then x(t) is a complex signal.
x(t) = x1(t) + jx2(t)
where x1(t) is the real part of x(t) and x2(t) is the imaginary part of x(t).

D. Deterministic and Random Signals:-


Deterministic:- Its values are completely specified for any given time (by an equation or formula).
Random Signal:- Its values at any given time are random.
E. Even and Odd Signal:-
A signal x(t) or x[n] is an even signal if
x(-t) = x(t)
x[-n] = x[n]
and an odd signal if
x(-t) = -x(t)
x[-n] = -x[n]
Any signal x(t) or x[n] can be expressed as
x(t) = xe(t) + x0(t)
x[n] = xe[n] + x0[n]
xe(t) = (1/2)[x(t) + x(-t)]  (even part)
xe[n] = (1/2)[x[n] + x[-n]]  (even part)
x0(t) = (1/2)[x(t) - x(-t)]  (odd part)
x0[n] = (1/2)[x[n] - x[-n]]  (odd part)
Note:- The product of two even signals or of two odd signals is an even signal; the product of an even signal and an odd signal is an odd signal.
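The even/odd decomposition above is easy to check numerically. Below is a minimal Python sketch (not part of the original notes; the sequence values are illustrative) that splits a finite sequence, stored as a dict from index n to value, into even and odd parts and verifies the defining symmetries.

```python
def even_odd_parts(x):
    """Split x (a dict n -> value) into even part xe and odd part xo."""
    support = set(x) | {-n for n in x}
    xe = {n: 0.5 * (x.get(n, 0) + x.get(-n, 0)) for n in support}
    xo = {n: 0.5 * (x.get(n, 0) - x.get(-n, 0)) for n in support}
    return xe, xo

x = {0: 1.0, 1: 2.0, 2: 3.0}          # x[n] on n = 0, 1, 2 (zero elsewhere)
xe, xo = even_odd_parts(x)
# the parts recombine to the original signal: xe[n] + xo[n] = x[n]
assert all(abs(xe[n] + xo[n] - x.get(n, 0)) < 1e-12 for n in xe)
# even part is symmetric, odd part antisymmetric
assert all(abs(xe[n] - xe[-n]) < 1e-12 for n in xe)
assert all(abs(xo[n] + xo[-n]) < 1e-12 for n in xo)
```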

(Figure: a continuous signal x(t) and a sequence x[n] used in the exercise below.)

Q.4 (1) Sketch (a) x(1 - t) (b) x(1 - t) + x(2 - t) (c) x(3t) (d) x(t/3)

(2) Sketch (a) x[n - 3] (b) x[n + 4] (c) x[-n] (d) x[-n + 2] (e) x[-n - 2]

Q. Sketch the even and odd components of the following signals:
(Figures: a signal x(t) with peak value 4 on 0 ≤ t ≤ 5, the signal x(t) = 4e^(-0.5t) for t ≥ 0, and two discrete sequences x[n].)

F. Periodic and Non periodic Signals: -


A continuous-time signal x(t) is said to be periodic with period T if
x(t ± T) = x(t) for all t
A discrete-time signal is periodic with period N if
x[n ± N] = x[n] for all n
Otherwise the signal is non-periodic (aperiodic).

The minimum value of T after which f(t) repeats is called the fundamental period.
If f(t) = f1(t) ± f2(t) ± f3(t), where f1, f2, f3 have periods T1, T2, T3, then f(t) is periodic only if the ratios of the periods (e.g. T1/T2) are rational, and
T = LCM [T1, T2, T3]
Example: f1(t) = A sin t has T = 2π/1 = 2π.
Q. For each signal below, check whether it is periodic or aperiodic; if periodic, find the fundamental period.
(a) 2 sin t + 4 sin 3t (b) sin 5πt + sin 7t (c) sin πt + sin t
(d) 2 + 3 cos πt (e) je^(j10t) (f) e^((-1+j)t)
(g) e^(j7πn) (h) 3e^(j3π(n+1/2)/5) (i) 3e^(j(3/5)(n+1/2))
(j) 2 cos(10t + 1) - sin(4t - 1) (k) e^(j[(π/2)t - 1]) (l) cos(2πn/8)
(m) u[n] + u[-n] (n) Σk [δ(n - 4k) - δ(n - 1 - 4k)], sum over all integers k

For a discrete-time signal, periodicity means x[n] = x[n ± N], N a positive integer.
For x[n] = e^(jΩ0 n):
x[n + N] = e^(jΩ0 (n + N)) = e^(jΩ0 n) e^(jΩ0 N)
This equals e^(jΩ0 n) only if e^(jΩ0 N) = 1 = e^(j2πm), i.e. Ω0 N = 2πm, so
N = (2π/Ω0) m, with m the smallest integer that makes (2π/Ω0)m an integer.
If x[n] = x1[n] ± x2[n] ± x3[n] with periods N1, N2, N3, then
N = LCM [N1, N2, N3]
Note:- The sum or difference of discrete-time periodic signals is always periodic, but the sum or difference of continuous-time periodic signals is periodic only under certain conditions (rational ratios of the periods).
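The condition N = (2π/Ω0)m can be tested by brute force: try successive N until Ω0N/2π is (numerically) an integer. A small Python sketch; the search bound and the 1e-9 tolerance are arbitrary choices, not from the notes.

```python
import math

def fundamental_period(omega0, max_n=10_000):
    """Smallest N with omega0*N = 2*pi*m for some integer m; None if aperiodic."""
    for N in range(1, max_n + 1):
        m = omega0 * N / (2 * math.pi)
        if abs(m - round(m)) < 1e-9:   # m is (numerically) an integer
            return N
    return None

assert fundamental_period(3 * math.pi / 5) == 10   # Omega0/2pi = 3/10 -> N = 10
assert fundamental_period(math.pi / 2) == 4        # Omega0/2pi = 1/4  -> N = 4
assert fundamental_period(1.0) is None             # Omega0/2pi irrational -> aperiodic
```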
G. Energy and power signals : -
 

E = ∫ from -∞ to ∞ of |x(t)|² dt,   E = Σ from n = -∞ to ∞ of |x[n]|²

P = lim T→∞ (1/T) ∫ from -T/2 to T/2 of |x(t)|² dt,   P = lim N→∞ (1/(2N+1)) Σ from n = -N to N of |x[n]|²

x is an energy signal if and only if 0 < E < ∞ (and then P = 0); a power signal if and only if 0 < P < ∞ (and then E = ∞); otherwise it is neither an energy signal nor a power signal.
Q. Determine the values of P and E for each of the following signals:
(a) x(t) = e^(-at) u(t), a > 0
(b) x(t) = A cos(ω0 t + θ)
(c) x(t) = t u(t)
(d) x(t) = e^(j(2t + π/4))
(e) x[n] = (1/2)^n u[n]
(f) x[n] = e^(j(πn/2 + π/8))
(g) x[n] = cos(πn/4)
(h) x[n] = A
(i) x[n] = δ[n]
All periodic signals are power signals, but the converse is not true.
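A rough numerical check of the E and P definitions, using a plain Riemann sum over a long finite window; the window length and step size below are arbitrary choices for illustration.

```python
import math

def energy_power(x, T=100.0, dt=1e-3):
    """Approximate E = integral of |x|^2 and P = E/T over the window [-T/2, T/2]."""
    n = int(T / dt)
    e = sum(abs(x(-T / 2 + k * dt)) ** 2 for k in range(n)) * dt
    return e, e / T

# (a) x(t) = e^{-at} u(t), a > 0: an energy signal, E = 1/(2a), P -> 0
a = 1.0
E, P = energy_power(lambda t: math.exp(-a * t) if t >= 0 else 0.0)
assert abs(E - 1 / (2 * a)) < 1e-2 and P < 1e-2

# (b) x(t) = A cos(w0 t): a power signal, P = A^2/2, E grows with the window
A, w0 = 2.0, 2 * math.pi
E, P = energy_power(lambda t: A * math.cos(w0 * t))
assert abs(P - A ** 2 / 2) < 1e-2
```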
Basic Signals:-
1) unit step function
u(t) = 1 for t > 0, 0 for t < 0   (discontinuous at t = 0; the value at t = 0 is undefined)
u(t - t0) = 1 for t > t0, 0 for t < t0   (discontinuous at t = t0)
(Figure: u(t) and the shifted step u(t - t0).)

2) Unit Impulse Function or Dirac Delta Function


δ(t) = 0 for t ≠ 0, ∞ for t = 0
∫ from -∞ to ∞ of δ(t) dt = 1

2) a. ∫ from -∞ to ∞ of d(t) δ(t) dt = d(0), where d(t) is any regular function continuous at t = 0.
∫ from a to b of d(t) δ(t) dt = d(0) if a < 0 < b; = 0 if a < b < 0 or 0 < a < b; undefined if a = 0 or b = 0.


2) b. ∫ from -∞ to ∞ of d(t) δ(t - t0) dt = d(t0)

3). δ(at) = (1/|a|) δ(t);  δ(-t) = δ(t), an even function
a
4). x(t)  (t) = x(0)  (t) if x(t) is continuous at t = 0
x(t)  (t – t0) = x(t0)  (t – t0) if x(t) is continuous at t = t0
Any continuous signal x(t) can be expressed as
x(t) = ∫ from -∞ to ∞ of x(τ) δ(t - τ) dτ
5). For the nth derivative g^(n)(t):
∫ d(t) g^(n)(t) dt = (-1)^n ∫ d^(n)(t) g(t) dt

6). δ(t) = du(t)/dt
∫ from -∞ to ∞ of d(t) δ'(t) dt = -d'(0)


c). Exponential Signal
x(t) = e^(at)
if a > 0, a growing exponential;
if a < 0, a decaying exponential.
D. Sinusoidal Signal
x(t) = A cos(ω0 t + θ), where A is the amplitude, ω0 the angular frequency, and θ the phase.
Fundamental period T0 = 2π/ω0
E. Unit step sequence
u[n] = 1 for n ≥ 0, 0 for n < 0
u[n - k] = 1 for n ≥ k, 0 for n < k
(Figure: (a) u[n], (b) u[n - k].)

F. The unit impulse sequence / sample sequence


δ[n] = 1 for n = 0, 0 for n ≠ 0
δ[n - k] = 1 for n = k, 0 for n ≠ k

2. x[n] δ[n] = x[0] δ[n]
x[n] δ[n - k] = x[k] δ[n - k]

3. δ[n] = u[n] - u[n - 1]
u[n] = Σ from k = -∞ to n of δ[k]

4. Any sequence x[n] can be expressed as
x[n] = Σ from k = -∞ to ∞ of x[k] δ[n - k]
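Properties 2-4 above are easy to verify for a concrete finite sequence; a short Python sketch (the sequence values are illustrative):

```python
def delta(n):
    """Unit impulse sequence."""
    return 1 if n == 0 else 0

def u(n):
    """Unit step sequence."""
    return 1 if n >= 0 else 0

x = [3, -1, 4, 1, 5]                 # x[n] on n = 0..4, zero elsewhere
# property 4: x[n] = sum over k of x[k] * delta[n - k]
rebuilt = [sum(x[k] * delta(n - k) for k in range(len(x))) for n in range(len(x))]
assert rebuilt == x
# property 3: delta[n] = u[n] - u[n-1] and u[n] = sum over k <= n of delta[k]
assert all(delta(n) == u(n) - u(n - 1) for n in range(-5, 6))
assert all(u(n) == sum(delta(k) for k in range(-10, n + 1)) for n in range(-5, 6))
```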
C. Complex exponential sequences
x[n] = e j 0 n = cos  0 n + j sin  0 n
The sequence e^(jΩ0 n) is periodic only if Ω0/2π = m/N0 is a rational number (m a positive integer), with fundamental period
N0 = m (2π/Ω0)
Continuous case: e^(jω0 t) is periodic for every value of ω0.
Discrete case: e^(j(Ω0 + 2πk) n) = e^(jΩ0 n), since e^(j2πkn) = 1. An exponential sequence at frequency Ω0 is therefore the same as at frequencies Ω0 ± 2π, Ω0 ± 4π, and so on, so distinct frequencies lie in 0 ≤ Ω0 < 2π or -π < Ω0 ≤ π.

System & Classification of system


A system is a mathematical model of a physical process that relates the input (or excitation) signal to the output (or response) signal.
A. Continuous-time and discrete-time systems:
x(t) → [System T] → y(t)   and   x[n] → [System T] → y[n]

B. System with memory and without Memory: -


A system is said to be memoryless if the output at any time depends only on the input at that same time.
y(t) = R x(t)   (memoryless)
y(t) = ∫ from -∞ to t of x(τ) dτ   (with memory)
y[n] = Σ from k = -∞ to n of x[k]   (with memory)

C. Causal & Non causal system : -


A system is called causal if its output y(t) at an arbitrary time t = t0 depends only on the input x(t) for t ≤ t0. Otherwise it is non-causal.
Non-causal examples: y(t) = x(t + 1); y[n] = x[-n]
Causal example: y(t) = x(t) cos(t + 1)
Note:- All memoryless systems are causal, but not vice versa.
D. Linear System & Non linear system : -
Linearity requires two properties:
1) Superposition (additivity)
2) Homogeneity (scaling)
Superposition: if y1 = T[x1] and y2 = T[x2], then T[x1 + x2] = y1 + y2; combined with scaling, T[a1 x1 + a2 x2] = a1 y1 + a2 y2.
(In particular, a zero input yields a zero output.)
Otherwise the system is non-linear.
E. Time-Invariant and Time-Varying Systems:-
A system is called time-invariant if a time shift (delay or advance) in the input signal causes the same time shift in the output signal:
T[x(t - τ)] = y(t - τ)
T[x(n - k)] = y[n - k]
Otherwise the system is time-varying.
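The linearity and time-invariance definitions translate directly into numerical checks: feed in scaled and shifted inputs and compare outputs on a grid of sample points. A Python sketch; the three example systems, test signals, and tolerances are illustrative choices, not from the notes.

```python
import math

def is_linear(T, x1, x2, pts=range(-20, 21), a=2.0, b=-3.0):
    """Check T[a*x1 + b*x2] == a*T[x1] + b*T[x2] on the sample points."""
    lhs = T(lambda n: a * x1(n) + b * x2(n))
    return all(abs(lhs(n) - (a * T(x1)(n) + b * T(x2)(n))) < 1e-9 for n in pts)

def is_time_invariant(T, x, k=3, pts=range(-20, 21)):
    """Check that shifting the input by k shifts the output by k."""
    shifted_in = T(lambda n: x(n - k))
    return all(abs(shifted_in(n) - T(x)(n - k)) < 1e-9 for n in pts)

x1 = lambda n: math.sin(0.3 * n)
x2 = lambda n: float(n == 0)

sys_a = lambda x: (lambda n: 2 * x(n) + x(n - 1))   # linear and time-invariant
sys_b = lambda x: (lambda n: x(n) ** 2)             # time-invariant, not linear
sys_c = lambda x: (lambda n: n * x(n))              # linear, not time-invariant

assert is_linear(sys_a, x1, x2) and is_time_invariant(sys_a, x1)
assert not is_linear(sys_b, x1, x2) and is_time_invariant(sys_b, x1)
assert is_linear(sys_c, x1, x2) and not is_time_invariant(sys_c, x1)
```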

G. LTI System = Linear + Time-Invariant System

Stable system
A system is bounded-input/bounded-output (BIBO) stable if every bounded input produces a bounded output:
|x(n)| ≤ k1 implies |y(n)| ≤ k2
Invertible : -
A system is said to be invertible if distinct inputs lead to distinct outputs.

x(n) → [System] → y(n) → [Inverse System] → w(n) = x(n)
Example: the accumulator y(n) = Σ from k = -∞ to n of x(k) is invertible, with inverse w(n) = y(n) - y(n - 1) = x(n).

Properties of δ(t)

1. ∫ from -∞ to ∞ of δ(t) dt = 1
2. ∫ from -∞ to ∞ of x(t) δ(t) dt = x(0)
3. ∫ from -∞ to ∞ of x(t) δ(t - t0) dt = x(t0)
4. ∫ from -∞ to ∞ of x(τ) δ(t - τ) dτ = x(t)
5. δ(at) = (1/|a|) δ(t)
6. x(t) δ(t - t0) = x(t0) δ(t - t0)
7. ∫ from t1 to t2 of x(t) δ^(n)(t - t0) dt = (-1)^n x^(n)(t0), for t1 < t0 < t2
CHAPTER-2
TRIGONOMETRIC FOURIER SERIES
A periodic function f(t) can be expressed in the form of a trigonometric series as
f(t) = (1/2)a0 + a1 cos ω0t + a2 cos 2ω0t + a3 cos 3ω0t + ...
     + b1 sin ω0t + b2 sin 2ω0t + b3 sin 3ω0t + ...   (2.1)
where ω0 = 2πf = 2π/T, f is the frequency, and the a's and b's are the coefficients. The Fourier series exists only when the function f(t) satisfies the following three conditions, called Dirichlet's conditions.
(i) f(t) is well defined and single-valued, except possibly at a finite number of points.
(ii) f(t) must possess only a finite number of discontinuities in the period T.
(iii) f(t) must have a finite number of positive and negative maxima in the period T.
Equation 2.1 may be expressed as the Fourier series
f(t) = (1/2)a0 + Σ from n = 1 to ∞ of an cos nω0t + Σ from n = 1 to ∞ of bn sin nω0t   (2.2)
where an and bn are the coefficients to be evaluated.


Integrating Eq.2.2 for a full period, we get
∫ from -T/2 to T/2 of f(t) dt = (1/2)a0 ∫ from -T/2 to T/2 dt + Σ from n = 1 to ∞ of ∫ from -T/2 to T/2 of (an cos nω0t + bn sin nω0t) dt

Integration of cosine or sine function for a complete period is zero.


Therefore, ∫ from -T/2 to T/2 of f(t) dt = (1/2) a0 T
Hence, a0 = (2/T) ∫ from -T/2 to T/2 of f(t) dt
or, equivalently, a0 = (2/T) ∫ from 0 to T of f(t) dt
Multiplying both sides of Eq. 2.2 by cos mω0t and integrating over a period gives
∫ from -T/2 to T/2 of f(t) cos mω0t dt = (1/2)a0 ∫ from -T/2 to T/2 of cos mω0t dt
  + Σn ∫ from -T/2 to T/2 of an cos nω0t cos mω0t dt + Σn ∫ from -T/2 to T/2 of bn sin nω0t cos mω0t dt
Here, (1/2)a0 ∫ from -T/2 to T/2 of cos mω0t dt = 0
∫ from -T/2 to T/2 of an cos nω0t cos mω0t dt = (an/2) ∫ from -T/2 to T/2 of [cos(m - n)ω0t + cos(m + n)ω0t] dt = 0 for m ≠ n, and (T/2)an for m = n
∫ from -T/2 to T/2 of bn sin nω0t cos mω0t dt = (bn/2) ∫ from -T/2 to T/2 of [sin(m + n)ω0t - sin(m - n)ω0t] dt = 0
Therefore, ∫ from -T/2 to T/2 of f(t) cos nω0t dt = T an / 2, for m = n
Hence, an = (2/T) ∫ from -T/2 to T/2 of f(t) cos nω0t dt   (2.4)
or, equivalently, an = (2/T) ∫ from 0 to T of f(t) cos nω0t dt

Similarly, multiplying both sides of Eq. 2.2 by sin m 0t and integrating, we get
∫ from -T/2 to T/2 of f(t) sin mω0t dt = (1/2)a0 ∫ from -T/2 to T/2 of sin mω0t dt
  + Σn ∫ from -T/2 to T/2 of an cos nω0t sin mω0t dt + Σn ∫ from -T/2 to T/2 of bn sin nω0t sin mω0t dt
Here, (1/2)a0 ∫ from -T/2 to T/2 of sin mω0t dt = 0
∫ from -T/2 to T/2 of an cos nω0t sin mω0t dt = 0
∫ from -T/2 to T/2 of bn sin nω0t sin mω0t dt = 0 for m ≠ n, and (T/2)bn for m = n
Therefore, ∫ from -T/2 to T/2 of f(t) sin mω0t dt = (T/2) bn, for m = n
Hence, bn = (2/T) ∫ from -T/2 to T/2 of f(t) sin nω0t dt   (2.5)
or, equivalently, bn = (2/T) ∫ from 0 to T of f(t) sin nω0t dt

The numbers n = 1, 2, 3, ... give the harmonic frequencies.
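Equations 2.4 and 2.5 can be evaluated numerically with a midpoint Riemann sum over one period. The sketch below (illustrative, not from the original text) recovers the coefficients of a known trigonometric polynomial.

```python
import math

def fourier_coeffs(f, T, n, m=20_000):
    """a_n and b_n of Eqs 2.4/2.5 via a midpoint Riemann sum over one period."""
    w0 = 2 * math.pi / T
    dt = T / m
    ts = [-T / 2 + (k + 0.5) * dt for k in range(m)]
    an = (2 / T) * sum(f(t) * math.cos(n * w0 * t) for t in ts) * dt
    bn = (2 / T) * sum(f(t) * math.sin(n * w0 * t) for t in ts) * dt
    return an, bn

# sanity check on a known trigonometric polynomial (illustrative choice)
T = 2.0
w0 = 2 * math.pi / T
f = lambda t: 3 * math.cos(w0 * t) + 4 * math.sin(2 * w0 * t)
a1, b1 = fourier_coeffs(f, T, 1)
a2, b2 = fourier_coeffs(f, T, 2)
assert abs(a1 - 3) < 1e-6 and abs(b1) < 1e-6
assert abs(a2) < 1e-6 and abs(b2 - 4) < 1e-6
```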


Symmetry Conditions
(i) If the function f(t) is even, then f(-t) = f(t). For example, cos t, t², and t sin t are all even. The cosine is an even function, since it may be expressed as the power series
cos t = 1 - t²/2! + t⁴/4! - t⁶/6! + ...
The waveforms representing even functions of t are shown in Fig. 2.2. Geometrically, the graph of an even function is symmetrical with respect to the y-axis, and only cosine terms are present (a d.c. term is optional). When f(t) is even,
∫ from -a to a of f(t) dt = 2 ∫ from 0 to a of f(t) dt

The sum of product of two or more even functions is an even function.


(ii) If the function f(t) is odd, then f(-t) = -f(t), and only sine terms are present (a d.c. term is optional). For example, sin t, t³, and t cos t are all odd. The waveforms shown in Fig. 2.3 represent odd functions of t. The graph of an odd function is symmetrical about the origin. If f(t) is odd,
∫ from -a to a of f(t) dt = 0
The sum of two or more odd functions is an odd function, and the product of two odd functions is an even function.
(iii) If f(t + T/2) = f(t), only even harmonics are present.
(iv) If f(t + T/2) = -f(t), only odd harmonics are present, and the waveform has half-wave symmetry.

Example 2.1 Obtain the Fourier components of the periodic square wave signal (amplitude ±A, period T), which is symmetrical with respect to the vertical axis at time t = 0, as shown in the figure.
Solution Since the given waveform is symmetrical about the horizontal axis, the average area is zero and hence the d.c. term a0 = 0. In addition, f(t) = f(-t), so only cosine terms are present, i.e., bn = 0.
Now, an = (2/T) ∫ from -T/2 to T/2 of f(t) cos nω0t dt
where f(t) = -A for -T/2 < t < -T/4; A for -T/4 < t < T/4; -A for T/4 < t < T/2.
Therefore,
an = (2A/T) [ -∫ from -T/2 to -T/4 of cos nω0t dt + ∫ from -T/4 to T/4 of cos nω0t dt - ∫ from T/4 to T/2 of cos nω0t dt ]
Carrying out the integrations,
an = (8A/nω0T) sin(nω0T/4) - (4A/nω0T) sin(nω0T/2)
When ω0T = 2π, the second term is zero for all integer values of n. Hence,
an = (8A/2πn) sin(nπ/2) = (4A/nπ) sin(nπ/2)
a0 = 0 (d.c. term)
a1 = (4A/π) sin(π/2) = 4A/π
a2 = (4A/2π) sin(π) = 0
a3 = (4A/3π) sin(3π/2) = -4A/3π
...

Substituting the values of the coefficients in Eq. 2.2, we get
f(t) = (4A/π) [ cos ω0t - (1/3) cos 3ω0t + (1/5) cos 5ω0t - ... ]
Example 2.2 Obtain the Fourier components of the periodic rectangular waveform (amplitude A, period T) shown in the figure.
Solution The given waveform for one period can be written as
f(t) = 0 for -T/2 < t < -T/4; A for -T/4 < t < T/4; 0 for T/4 < t < T/2
For the given waveform, f(-t) = f(t), so it is an even function and has bn = 0.
The value of the d.c. term is
a0 = (2/T) ∫ from -T/2 to T/2 of f(t) dt = (2/T) ∫ from -T/4 to T/4 of A dt = (2A/T)(T/2) = A
an = (2/T) ∫ from -T/2 to T/2 of f(t) cos nω0t dt = (2/T) ∫ from -T/4 to T/4 of A cos nω0t dt
   = (2A/T) [ sin nω0t / nω0 ] evaluated from -T/4 to T/4
   = (4A/nω0T) sin(nω0T/4)
When ω0T = 2π, we have
an = (2A/nπ) sin(nπ/2)
   = 0 for n = 2, 4, 6, ...
   = 2A/nπ for n = 1, 5, 9, 13, ...
   = -2A/nπ for n = 3, 7, 11, 15, ...
Substituting the values of the coefficients in Eq. 2.2, we obtain
f(t) = A/2 + (2A/π) [ cos ω0t - (1/3) cos 3ω0t + (1/5) cos 5ω0t - ... ]

Example: Obtain the trigonometric Fourier series for the half-wave rectified sine wave shown in the figure.
Solution As the waveform shows no symmetry, the series may contain both sine and cosine terms. Here, f(t) = A sin ω0t for 0 ≤ t ≤ T/2 and f(t) = 0 for T/2 ≤ t ≤ T.
To evaluate a0:
a0 = (2/T) ∫ from 0 to T of f(t) dt = (2/T) ∫ from 0 to T/2 of A sin ω0t dt
   = (2A/ω0T) [ -cos ω0t ] evaluated from 0 to T/2 = (2A/ω0T) [ 1 - cos(ω0T/2) ]
Substituting ω0T = 2π, we have a0 = 2A/π.
To evaluate an:
an = (2/T) ∫ from 0 to T of f(t) cos nω0t dt = (2/T) ∫ from 0 to T/2 of A sin ω0t cos nω0t dt
   = (2A/ω0T) (cos nπ + 1)/(1 - n²)   (n ≠ 1)
Substituting ω0T = 2π, we have
an = (A/π) (cos nπ + 1)/(1 - n²)
Hence, an = 2A/(π(1 - n²)) for n even, and an = 0 for n odd (n ≠ 1).
For n = 1, this expression is indeterminate, so we evaluate a1 separately:
a1 = (2/T) ∫ from 0 to T/2 of A sin ω0t cos ω0t dt = (A/T) ∫ from 0 to T/2 of sin 2ω0t dt
   = (A/2ω0T) [ -cos 2ω0t ] evaluated from 0 to T/2
When ω0T = 2π, we have a1 = 0.
To find bn:
bn = (2/T) ∫ from 0 to T of f(t) sin nω0t dt = (2/T) ∫ from 0 to T/2 of A sin ω0t sin nω0t dt
   = (2A/ω0T) [ (n sin ω0t cos nω0t - cos ω0t sin nω0t)/(1 - n²) ] evaluated from 0 to T/2   (n ≠ 1)
When ω0T = 2π, we have bn = 0.
For n = 1, the expression is indeterminate, so b1 has to be calculated separately:
b1 = (2A/T) ∫ from 0 to T/2 of sin² ω0t dt
   = (2A/ω0T) [ ω0t/2 - (sin 2ω0t)/4 ] evaluated from 0 to T/2
When ω0T = 2π, we have b1 = A/2.
Substituting the values of the coefficients in Eq. 2.2, we get
f(t) = (A/π) [ 1 + (π/2) sin ω0t - (2/3) cos 2ω0t - (2/15) cos 4ω0t - ... ]
COMPLEX OR EXPONENTIAL FORM OF FOURIER SERIES
From Eq. 2.2, the trigonometric form of the Fourier series is
f(t) = (1/2)a0 + Σ from n = 1 to ∞ of (an cos nω0t + bn sin nω0t)
An alternative but convenient way of writing the periodic function f(t) is in exponential form with complex quantities. Since
cos nω0t = (e^(jnω0t) + e^(-jnω0t)) / 2
sin nω0t = (e^(jnω0t) - e^(-jnω0t)) / 2j
Substituting these quantities in the expression for the Fourier series gives
f(t) = (1/2)a0 + Σ from n = 1 to ∞ of [ (an - jbn)/2 ] e^(jnω0t) + Σ from n = 1 to ∞ of [ (an + jbn)/2 ] e^(-jnω0t)
Here, taking
cn = (1/2)(an - jbn)
c-n = (1/2)(an + jbn)
c0 = (1/2)a0
where c-n is the complex conjugate of cn. Substituting the expressions for the coefficients an and bn from Eqs 2.4 and 2.5 gives
cn = (1/T) ∫ from -T/2 to T/2 of f(t) [cos nω0t - j sin nω0t] dt = (1/T) ∫ from -T/2 to T/2 of f(t) e^(-jnω0t) dt
c-n = (1/T) ∫ from -T/2 to T/2 of f(t) [cos nω0t + j sin nω0t] dt = (1/T) ∫ from -T/2 to T/2 of f(t) e^(jnω0t) dt
 1
with f(t) = c0 +  c n e j n 0 t 
n 1
c e
n  
n
j n 0 t

where the values of n are negative in the last term and are included under the  sign. Also, c0
may be included under the  sign by using the value of n = 0. Therefore,

f(t) = c e
n  
n
j n 0 t
PARSEVAL’S IDENTITY FOR FOURIER SERIES
A periodic function f(t) with a period T is expressed by the Fourier series as

f(t) = (1/2)a0 + Σ from n = 1 to ∞ of (an cos nω0t + bn sin nω0t)
Now, [f(t)]² = (1/2)a0 f(t) + Σ from n = 1 to ∞ of [an f(t) cos nω0t + bn f(t) sin nω0t]
Therefore,
(1/T) ∫ from -T/2 to T/2 of [f(t)]² dt = (a0/2)(1/T) ∫ from -T/2 to T/2 of f(t) dt
  + (1/T) Σ from n = 1 to ∞ of [ an ∫ from -T/2 to T/2 of f(t) cos nω0t dt + bn ∫ from -T/2 to T/2 of f(t) sin nω0t dt ]
From the expressions for the coefficients (Eqs 2.4 and 2.5), we have
a0 = (2/T) ∫ from -T/2 to T/2 of f(t) dt
an = (2/T) ∫ from -T/2 to T/2 of f(t) cos nω0t dt
bn = (2/T) ∫ from -T/2 to T/2 of f(t) sin nω0t dt

Therefore, substituting all these values, we get
(1/T) ∫ from -T/2 to T/2 of [f(t)]² dt = (a0/2)² + (1/2) Σ from n = 1 to ∞ of (an² + bn²)
This is Parseval's identity.


POWER SPECTRUM OF A PERIODIC FUNCTION
The power of a periodic signal f(t) in the time domain is defined as
P = (1/T) ∫ from -T/2 to T/2 of [f(t)]² dt
The Fourier series for the signal f(t) is
f(t) = Σ from n = -∞ to ∞ of cn e^(jnω0t)

According to Parseval's relation, we have
Pav = (1/T) ∫ from -T/2 to T/2 of [f(t)]² dt
    = (1/T) ∫ from -T/2 to T/2 of f(t) Σ from n = -∞ to ∞ of cn e^(jnω0t) dt
    = Σ from n = -∞ to ∞ of cn (1/T) ∫ from -T/2 to T/2 of f(t) e^(jnω0t) dt
    = Σ from n = -∞ to ∞ of cn c-n
    = Σ from n = -∞ to ∞ of |cn|², watts
From Parseval's identity, the above equation becomes
(a0/2)² + (1/2) Σ from n = 1 to ∞ of (an² + bn²) = Σ from n = -∞ to ∞ of |cn|²
Here, c0 = a0/2 and |cn| = (1/2)√(an² + bn²) for n ≥ 1.
Thus the power in f(t) is
P = ... + |c-n|² + ... + |c-1|² + |c0|² + |c1|² + ... + |cn|² + ...
P = |c0|² + 2|c1|² + 2|c2|² + ... + 2|cn|² + ...
Hence, for a periodic function, the power in a time waveform f(t) can be evaluated by adding together the powers contained in each harmonic, i.e., frequency component, of the signal f(t). The power for the nth harmonic component at nω0 radians per second is |cn|², and that at -nω0 is |c-n|². For a single real harmonic, we have to consider both frequency components ±nω0. Here c-n = cn* and hence |cn|² = |c-n|². The power for the nth real harmonic of f(t) is
Pn = |cn|² + |c-n|² = 2|cn|²
The effective or RMS value of f(t)
Using Parseval's identity, the RMS value of the function f(t) expressed by Eq. 2.1 is
Frms = √[ (a0/2)² + (1/2)a1² + (1/2)a2² + ... + (1/2)b1² + (1/2)b2² + ... ]
     = √[ c0² + 2|c1|² + 2|c2|² + ... ]
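Parseval's identity equates the time-average power to the coefficient sums in both trigonometric and exponential form; a quick numeric check on a three-term signal (the signal is an illustrative choice):

```python
import math

T = 2 * math.pi
f = lambda t: 1.0 + 2.0 * math.cos(t) + 3.0 * math.sin(2 * t)  # a0/2 = 1, a1 = 2, b2 = 3

# time-average power by a midpoint Riemann sum over one period
m = 20_000
dt = T / m
P_time = sum(f((k + 0.5) * dt) ** 2 for k in range(m)) * dt / T

# trigonometric form: P = (a0/2)^2 + (1/2)(a1^2 + b2^2)
P_coeff = 1.0 ** 2 + 0.5 * (2.0 ** 2 + 3.0 ** 2)
assert abs(P_time - P_coeff) < 1e-9
# exponential form: |c0|^2 + 2|c1|^2 + 2|c2|^2 with c0 = 1, |c1| = 1, |c2| = 1.5
assert abs(P_coeff - (1.0 + 2 * 1.0 ** 2 + 2 * 1.5 ** 2)) < 1e-12
```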

PROPERTIES OF FOURIER TRANSFORM


Table 2.1 Important properties of the Fourier transform

Operation | f(t) | F(jω)
Transform | f(t) | F(jω) = ∫ from -∞ to ∞ of f(t) e^(-jωt) dt
Inverse transform | f(t) = (1/2π) ∫ from -∞ to ∞ of F(jω) e^(jωt) dω | F(jω)
Linearity | a f1(t) + b f2(t) | a F1(jω) + b F2(jω)
Time-reversal | f(-t) | F(-jω) = F*(jω), f(t) real
Time-shifting (delay) | f(t - t0) | F(jω) e^(-jωt0)
Time-scaling | f(at) | (1/|a|) F(jω/a)
Time-differentiation | d^n f(t)/dt^n | (jω)^n F(jω)
Frequency-differentiation | (-jt) f(t) | dF(jω)/dω
Time-integration | ∫ from -∞ to t of f(τ) dτ | (1/jω) F(jω) + π F(0) δ(ω)
Frequency-integration | (1/(-jt)) f(t) | ∫ from -∞ to ω of F(jω') dω'
Time convolution | f1(t) * f2(t) = ∫ from -∞ to ∞ of f1(τ) f2(t - τ) dτ | F1(jω) F2(jω)
Frequency convolution (modulation) | f(t) e^(jω0t) | F(j(ω - ω0))
Symmetry | F(jt) | 2π f(-ω)
Real time function | f(t) | F(jω) = F*(-jω); Re[F(jω)] = Re[F(-jω)]; Im[F(jω)] = -Im[F(-jω)]; |F(jω)| = |F(-jω)|; ∠F(jω) = -∠F(-jω)
Parseval's theorem | E = ∫ from -∞ to ∞ of |f(t)|² dt | E = (1/2π) ∫ from -∞ to ∞ of |F(jω)|² dω
Duality | If f(t) ↔ g(jω), then g(t) ↔ 2π f(-jω)
Table 2.2 Fourier transform of some important signals

Sl. | Time domain f(t) | Frequency domain F(jω)
1. | δ(t - t0) | e^(-jωt0)
2. | e^(-jω0t) | 2π δ(ω + ω0)
3. | cos ω0t (eternal cosine) | π [δ(ω - ω0) + δ(ω + ω0)]
4. | sin ω0t (eternal sine) | (π/j) [δ(ω - ω0) - δ(ω + ω0)]
5. | 1 | 2π δ(ω)
6. | sgn(t) | 2/(jω)
7. | u(t) | π δ(ω) + 1/(jω)
8. | e^(-at) u(t) | 1/(a + jω)
9. | e^(-a|t|) (double exponential) | 2a/(a² + ω²)
10. | e^(-at) cos ω0t u(t) | (a + jω)/((a + jω)² + ω0²)
11. | A[u(t + T/2) - u(t - T/2)] (rectangular pulse) | AT sin(ωT/2)/(ωT/2) = AT sinc(ωT/2)
12. | f(t) = 1 for |t| < T, 0 otherwise; i.e. f(t) = u(t + T) - u(t - T) | 2T sin(ωT)/(ωT) = 2T sinc(ωT)
13. | f(t) = A(1 - |t|/T) for |t| ≤ T, 0 elsewhere (triangular pulse) | AT [sin(ωT/2)/(ωT/2)]² = AT sinc²(ωT/2)
14. | (ω0/π) sinc(ω0t) (sinc pulse) | u(ω + ω0) - u(ω - ω0)
15. | Σ from n = -∞ to ∞ of δ(t - nT) (impulse train) | (2π/T) Σ from n = -∞ to ∞ of δ(ω - 2πn/T)
16. | exp(-at²) (Gaussian pulse) | √(π/a) exp(-ω²/4a)
17. | t exp(-at) u(t) | 1/(a + jω)²
18. | (t^(n-1)/(n-1)!) exp(-at) u(t) | 1/(a + jω)^n
19. | A exp(-at) sin(ω0t) u(t) | A ω0 / ((a + jω)² + ω0²)
20. | A exp(-at) cos(ω0t) u(t) | A (a + jω) / ((a + jω)² + ω0²)
21. | cos ω0t [u(t + T) - u(t - T)] | T [ sin((ω - ω0)T)/((ω - ω0)T) + sin((ω + ω0)T)/((ω + ω0)T) ]
The Discrete-Time Fourier Transform: Few Points

As discussed in the last article:
(i) The discrete-time Fourier transform (DTFT) X(e^(jΩ)) of a discrete-time signal x(n) is expressed as
X(e^(jΩ)) = Σ from n = -∞ to ∞ of x(n) e^(-jΩn)
or DTFT[x(n)] = X(e^(jΩ))
The inverse discrete-time Fourier transform (IDTFT) is expressed as
x(n) = (1/2π) ∫ from -π to π of X(e^(jΩ)) e^(jΩn) dΩ
or IDTFT[X(e^(jΩ))] = x(n)
(ii) From equations (4.15) and (4.17), it is clear that x(n) and X(e^(jΩ)) form a transform pair. Symbolically, this may be expressed as
x(n) ↔ X(e^(jΩ))

Table 4.1 Properties of the discrete-time Fourier transform

S.No. | Name of property | Time-domain expression | Frequency-domain expression
1 | Notation | x(n) | X(e^(jΩ))
2 | Linearity | a1 x1(n) + a2 x2(n) | a1 X1(e^(jΩ)) + a2 X2(e^(jΩ))
3 | Time-shifting | x(n - n0) | e^(-jΩn0) X(e^(jΩ))
4 | Frequency-shifting | e^(jΩ0 n) x(n) | X(e^(j(Ω - Ω0)))
5 | Frequency-differentiation | n x(n) | j dX(e^(jΩ))/dΩ
6 | Conjugation | x*(n) | X*(e^(-jΩ))
7 | Time-reversal | x(-n) | X(e^(-jΩ))
8 | Parseval's theorem | Σ from n = -∞ to ∞ of x1(n) x2*(n) | (1/2π) ∫ over 2π of X1(e^(jΩ)) X2*(e^(jΩ)) dΩ
9 | Multiplication | x1(n) x2(n) | (1/2π) ∫ over 2π of X1(e^(jθ)) X2(e^(j(Ω - θ))) dθ
10 | Modulation | x(n) cos Ω0n | (1/2) X(e^(j(Ω - Ω0))) + (1/2) X(e^(j(Ω + Ω0)))
11 | Scaling | x(pn) | X(e^(jΩ/p))
12 | Convolution | x(n) * y(n) | X(e^(jΩ)) Y(e^(jΩ))
Table 4.2 Few useful discrete-time Fourier transform pairs

S.No. | Discrete-time signal x[n] | Discrete-time Fourier transform X(e^(jΩ))
1 | x(n) = δ(n) | X(e^(jΩ)) = 1
2 | x(n) = A for -L ≤ n ≤ L, 0 otherwise (rectangular pulse) | X(e^(jΩ)) = A sin[Ω(2L + 1)/2] / sin(Ω/2)
3 | x(n) = sin(Ωc n)/(πn) | X(e^(jΩ)) = 1 for |Ω| ≤ Ωc, 0 for Ωc < |Ω| ≤ π (ideal lowpass)
4 | x(n) = a^n for n ≥ 0, 0 for n < 0 (|a| < 1) | X(e^(jΩ)) = 1/(1 - a e^(-jΩ))
CHAPTER-3
Table 3.1 Laplace transform pairs

S.No. | f(t) | F(s)
1. | δ(t) | 1
2. | δ(t - a) | e^(-as)
3. | u(t) | 1/s
4. | u(t - a) | e^(-as)/s
5. | (t^n/n!) u(t), n a positive integer | 1/s^(n+1)
6. | e^(-at) u(t) | 1/(s + a)
7. | (t^n e^(-at)/n!) u(t) | 1/(s + a)^(n+1)
8. | sin(ω0t) u(t) | ω0/(s² + ω0²)
9. | cos(ω0t) u(t) | s/(s² + ω0²)
10. | t cos(ω0t) u(t) | (s² - ω0²)/(s² + ω0²)²
11. | t sin(ω0t) u(t) | 2ω0s/(s² + ω0²)²
12. | e^(-at) sin(ω0t) u(t) | ω0/((s + a)² + ω0²)
13. | e^(-at) cos(ω0t) u(t) | (s + a)/((s + a)² + ω0²)
14. | sinh ω0t | ω0/(s² - ω0²)
15. | cosh ω0t | s/(s² - ω0²)
16. | e^(-at) sinh ω0t | ω0/((s + a)² - ω0²)
17. | e^(-at) cosh ω0t | (s + a)/((s + a)² - ω0²)
18. | sin(ω0t + θ) | (s sin θ + ω0 cos θ)/(s² + ω0²)
19. | cos(ω0t + θ) | (s cos θ - ω0 sin θ)/(s² + ω0²)

Table 3.2 Properties of Laplace transform


S.No. | Property | Time domain | Frequency domain
1. | Linearity | a f1(t) ± b f2(t), a and b constants | a F1(s) ± b F2(s)
2. | Scalar multiplication | k f(t) | k F(s)
3. | Scale change | f(at), a > 0 | (1/a) F(s/a)
4. | Time delay | f(t - a), a > 0 | e^(-as) F(s)
5. | s-shift | e^(-at) f(t) | F(s + a)
6. | Multiplication by t^n | t^n f(t), n = 1, 2, ... | (-1)^n d^n F(s)/ds^n
7. | Time differentiation | f'(t) | s F(s) - f(0)
   |   | f''(t) | s² F(s) - s f(0) - f'(0)
   |   | f^(n)(t) | s^n F(s) - s^(n-1) f(0) - s^(n-2) f'(0) - ... - f^(n-1)(0)
8. | Time integration | ∫ from 0 to t of ((t - u)^(n-1)/(n-1)!) f(u) du | F(s)/s^n
9. | Frequency differentiation | (-t)^n f(t) | F^(n)(s)
   |   | -t f(t) | F'(s) = dF(s)/ds
   |   | t² f(t) | F''(s)
10. | Frequency integration | f(t)/t | ∫ from s to ∞ of F(s) ds
11. | Convolution | f1(t) * f2(t) = ∫ from 0 to t of f1(τ) f2(t - τ) dτ | F1(s) F2(s)
12. | Final value | f(∞) = lim as t→∞ of f(t) | lim as s→0 of s F(s)
13. | Initial value | f(0+) = lim as t→0 of f(t) | lim as s→∞ of s F(s)
14. | Time periodicity | f(t) = f(t + nT), n = 1, 2, ... | F1(s)/(1 - e^(-sT)), where F1(s) = ∫ from 0 to T of f(t) e^(-st) dt
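Entries of Table 3.1 can be spot-checked by numerically integrating F(s) = ∫ from 0 to ∞ of f(t)e^(-st)dt over a truncated interval; the truncation point and grid size below are arbitrary choices.

```python
import math

def laplace(f, s, T=60.0, n=120_000):
    """F(s) = integral from 0 to infinity of f(t) e^{-st} dt, truncated to [0, T]."""
    dt = T / n
    return sum(f((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt) for k in range(n)) * dt

a, w0, s = 2.0, 3.0, 1.0
# entry 6 of Table 3.1: e^{-at} u(t) <-> 1/(s + a)
assert abs(laplace(lambda t: math.exp(-a * t), s) - 1 / (s + a)) < 1e-6
# entry 8 of Table 3.1: sin(w0 t) u(t) <-> w0/(s^2 + w0^2)
assert abs(laplace(lambda t: math.sin(w0 * t), s) - w0 / (s ** 2 + w0 ** 2)) < 1e-6
```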

CHAPTER- 4
The analysis of any sampled signal or sampled-data system in the frequency domain is extremely difficult using the s-plane representation, because the signal or system equations will contain infinitely long polynomials due to the characteristic infinite number of poles and zeros. Fortunately, this problem may be overcome by using the z-transform, which reduces the poles and zeros to a finite number in the z-plane.
The purpose of the z-transform is to map (transform) any point s = σ + jω in the s-plane to a corresponding point z = re^(jθ) in the z-plane by the relationship z = e^(sT), i.e. s = (ln z)/T, where T is the sampling period (seconds).
Under this mapping, the imaginary axis σ = 0 maps onto the unit circle |z| = 1 in the z-plane. Also, the left-hand half-plane σ < 0 corresponds to the interior of the unit circle |z| = 1 in the z-plane. This correspondence is shown in the figure.
(Figure: mapping of the s-plane to the z-plane under z = e^(sT); the jω-axis maps onto the unit circle |z| = 1, and the strip -π/T < ω ≤ π/T covers the whole z-plane.)
Considering that the real part of s is zero, i.e. σ = 0, we have z = e^(jωT) = 1∠ωT, which gives the values of z on the unit circle.
We know that the Laplace transform gives

x * (t )  X (s) =  x(nT )e  nsT

n0
But we have z = e for the z – transform and hence z-n = e-nsT, which corresponds to a delay of n
sT

sampling periods. Therefore, the z – transform of x*(t) is



Z[x*(t)] = X(z) =  x(nT )z
n0
n

In order to simplify the notation, the nth sample may be written as x(n) and hence the above equation
becomes

X(z) =  x ( n)z  n
n 0

Evaluating X(z) at the complex number z = re^(jΩ) gives
X(re^(jΩ)) = Σn x(n) (re^(jΩ))^(-n) = Σn [x(n) r^(-n)] e^(-jΩn)
Therefore, if r = 1, the z-transform evaluated on the unit circle gives the Fourier transform of the sequence x(n).
The z – transform method is used to analyse discrete –time systems for finding the transfer
function, stability and digital network realizations of the system.
Example 4.1 Determine the z-transform for the sampled version of the analog input signal x(t) = e^(-at) applied to a digital filter.
Solution We know that
x*(t) = Σ from n = -∞ to ∞ of x(t) δ(t - nT)
For t = nT, the sampled signal sequence is
x*(nT) = [e^0, e^(-aT), e^(-2aT), e^(-3aT), ...]
By applying Eq. 4.1, X(z) = Σ from n = 0 to ∞ of x(n) z^(-n), we get
X(z) = 1 + e^(-aT) z^(-1) + e^(-2aT) z^(-2) + e^(-3aT) z^(-3) + ...
     = Σ from n = 0 to ∞ of e^(-anT) z^(-n) = Σ from n = 0 to ∞ of (e^(-aT) z^(-1))^n
Using the infinite summation formula Σ from k = 0 to ∞ of a^k = 1/(1 - a), for |a| < 1, we get
X(z) = 1 / (1 - e^(-aT) z^(-1))

Region of Convergence (ROC)


Equation 4.1 gives

X(re^(jΩ)) = Σ from n = -∞ to ∞ of x(n) r^(-n) e^(-jΩn)
which is the Fourier transform of the modified sequence [x(n) r^(-n)]. If r = 1, i.e. |z| = 1, X(z) reduces to its Fourier transform. The series above converges if x(n) r^(-n) is absolutely summable, i.e.
Σ from n = -∞ to ∞ of |x(n) r^(-n)| < ∞
If the output signal magnitude of the digital system, x(n), is to be finite, then the magnitude of its z-transform X(z) must be finite. The set of z values in the z-plane for which the magnitude of X(z) is finite is called the region of convergence (ROC). That is, convergence of
Σ from n = 0 to ∞ of |x(n) r^(-n)| < ∞
guarantees convergence of Eq. 4.1. For the unit step sequence, the condition for X(z) to be finite is |z| > 1; in other words, the ROC for X(z) is the area outside the unit circle in the z-plane.
The ROC of a rational z-transform is bounded by the locations of its poles. For example, the z-transform of the unit step sequence u(n) is X(z) = z/(z - 1), which has a zero at z = 0 and a pole at z = 1; the ROC is |z| > 1, extending all the way to ∞, as shown in the figure.
(Fig: Pole-zero plot and ROC of the unit-step response u(n): pole at z = 1, zero at z = 0, ROC the region outside the unit circle.)

Important Properties of the ROC for the z –transform


(i) X(z) converges uniformly if and only if the ROC of the z-transform X(z) of the sequence includes the unit circle. The ROC of X(z) consists of a ring in the z-plane centered about the origin. That is, the ROC of the z-transform of x(n) consists of the values of z for which x(n) r^(-n) is absolutely summable:
Σ from n = -∞ to ∞ of |x(n) r^(-n)| < ∞
(ii) The ROC does not contain any poles.


(iii) When x(n) is of finite duration, then the ROC is the entire z – plane, except possibly z = 0
and /or z =  .
(iv) If x(n) is a right-sided sequence, the ROC will not include infinity.
(v) If x(n) is a left – sided sequence, the ROC will not include z = 0. However, if x(n) = 0 for
all n > 0, the ROC will include z = 0.
(vi) If x(n) is two-sided, and if the circle |z| = r0 is in the ROC, then the ROC will consist of a ring in the z-plane that includes the circle |z| = r0. That is, the ROC includes the intersection of the ROCs of the components.
(vii) If X(z) is rational, then the ROC is bounded by poles or extends to infinity.
(viii) If x(n) is causal, then the ROC includes z = ∞.
(ix) If x(n) is anti – causal, then the ROC includes z = 0.
To determine the ROC for the two-sided z-transform, the defining sum, Eq. 4.2, can be split as
X(z) = Σ from n = -∞ to ∞ of x(n) z^(-n) = Σ from n = -∞ to -1 of x(n) z^(-n) + Σ from n = 0 to ∞ of x(n) z^(-n)

Table 4.3 Some important z-transform pairs
(Each entry lists: signal x(t); sequence x(n); Laplace transform X(s); z-transform X(z); ROC. A dash marks entries that exist only in discrete time.)

1. δ(t); δ(n); 1; 1; all z-plane
2. δ(t − k); δ(n − k); e^{−ks}; z^{−k}; |z| > 0 for k > 0, |z| < ∞ for k < 0
3. u(t); u(n); 1/s; 1/(1 − z^{−1}) = z/(z − 1); |z| > 1
4. −u(−t); −u(−n − 1); 1/s; 1/(1 − z^{−1}) = z/(z − 1); |z| < 1
5. t u(t); n u(n); 1/s²; z^{−1}/(1 − z^{−1})² = z/(z − 1)²; |z| > 1
6. —; aⁿ u(n); —; 1/(1 − a z^{−1}) = z/(z − a); |z| > |a|
7. —; −aⁿ u(−n − 1); —; 1/(1 − a z^{−1}) = z/(z − a); |z| < |a|
8. —; n aⁿ u(n); —; a z/(z − a)²; |z| > |a|
9. —; −n aⁿ u(−n − 1); —; a z/(z − a)²; |z| < |a|
10. e^{−at} u(t); e^{−an} u(n); 1/(s + a); 1/(1 − e^{−a} z^{−1}) = z/(z − e^{−a}); |z| > e^{−a}
11. t² u(t); n² u(n); 2/s³; z^{−1}(1 + z^{−1})/(1 − z^{−1})³ = z(z + 1)/(z − 1)³; |z| > 1
12. t e^{−at} u(t); n e^{−an} u(n); 1/(s + a)²; e^{−a} z^{−1}/(1 − e^{−a} z^{−1})² = z e^{−a}/(z − e^{−a})²; |z| > e^{−a}
13. sin ω0t; sin ω0n; ω0/(s² + ω0²); z sin ω0 / (z² − 2z cos ω0 + 1); |z| > 1
14. cos ω0t; cos ω0n; s/(s² + ω0²); z(z − cos ω0) / (z² − 2z cos ω0 + 1); |z| > 1
15. sinh ω0t; sinh ω0n; ω0/(s² − ω0²); z sinh ω0 / (z² − 2z cosh ω0 + 1); |z| > 1
16. cosh ω0t; cosh ω0n; s/(s² − ω0²); z(z − cosh ω0) / (z² − 2z cosh ω0 + 1); |z| > 1
17. e^{−at} sin ω0t; e^{−an} sin ω0n; ω0/((s + a)² + ω0²); z e^{−a} sin ω0 / (z² − 2z e^{−a} cos ω0 + e^{−2a}); |z| > e^{−a}
18. e^{−at} cos ω0t; e^{−an} cos ω0n; (s + a)/((s + a)² + ω0²); z(z − e^{−a} cos ω0) / (z² − 2z e^{−a} cos ω0 + e^{−2a}); |z| > e^{−a}
19. —; aⁿ sin ω0n; —; z a sin ω0 / (z² − 2z a cos ω0 + a²); |z| > |a|
20. —; aⁿ cos ω0n; —; z(z − a cos ω0) / (z² − 2z a cos ω0 + a²); |z| > |a|
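A pair from Table 4.3 can be spot-checked numerically by summing the defining series at a point inside the ROC. The sketch below does this for pair 6, aⁿ u(n) ↔ z/(z − a); the values a = 0.5 and z = 2 + 0.5j are assumed examples.

```python
import numpy as np

# Spot-check of pair 6, a^n u(n) <-> z/(z - a): evaluate the defining series
# numerically at one point with |z| > |a| and compare with the closed form.
# a and z are assumed example values.
a = 0.5
z = 2.0 + 0.5j                       # |z| ~ 2.06, inside the ROC |z| > 0.5
n = np.arange(0, 500)
series = np.sum(a**n * z**(-n))      # truncated sum of a^n z^(-n)
closed = z / (z - a)
print(abs(series - closed))          # ~ 0
```

With |a/z| ≈ 0.24, the truncated tail after 500 terms is far below machine precision.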

Table 4.4 Properties of the z-transform

1. Transformation: x(n) → X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n}
2. Inverse transformation: x(n) = (1/2πj) ∮_C X(z) z^{n−1} dz
3. Linearity: a1 x1(n) + a2 x2(n) → a1 X1(z) + a2 X2(z)
4. Time reversal: x(−n) → X(z^{−1})
5. Time shifting: (i) x(n − k) → z^{−k} X(z); (ii) x(n + k) → z^{k} X(z)
6. Convolution: x1(n) * x2(n) → X1(z) X2(z)
7. Correlation: r_{x1x2}(l) = Σ_{n=−∞}^{∞} x1(n) x2(n − l) → R_{x1x2}(z) = X1(z) X2(z^{−1})
8. Scaling: aⁿ x(n) → X(a^{−1} z)
9. Differentiation in the z-domain: n x(n) → −z dX(z)/dz
10. Time differencing: x(n) − x(n − 1) → (1 − z^{−1}) X(z)
11. Time accumulation: Σ_{k=−∞}^{n} x(k) → [z/(z − 1)] X(z)
12. Initial value theorem: x(0) = lim_{z→∞} X(z)
13. Final value theorem: lim_{n→∞} x(n) = lim_{z→1} [(z − 1)/z] X(z)
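The time-shifting property in Table 4.4 can be verified numerically for a finite sequence, since the series defining the transform can then be evaluated exactly. The short sequence, evaluation point, and shift below are assumed example values.

```python
import numpy as np

# Spot-check of the time-shifting property Z{x(n-k)} = z^(-k) X(z) on a short
# assumed sequence; zt() evaluates the z-transform sum at one point.
def zt(x, n0, z):
    """Evaluate sum_n x[n] z^(-n) for a finite sequence whose first sample
    sits at index n0."""
    n = n0 + np.arange(len(x))
    return np.sum(np.asarray(x, dtype=complex) * z**(-n))

x = [1.0, 2.0, 3.0]          # x(n) supported on n = 0, 1, 2 (assumed values)
z = 1.5 + 0.5j               # an arbitrary evaluation point in the ROC
k = 2
lhs = zt(x, k, z)            # z-transform of the shifted sequence x(n - k)
rhs = z**(-k) * zt(x, 0, z)  # z^(-k) X(z)
print(abs(lhs - rhs))        # ~ 0
```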
CHAPTER-5
“A continuous-time signal may be completely represented by its samples and recovered back if the sampling frequency fs ≥ 2fm. Here fs is the sampling frequency and fm is the maximum frequency present in the signal.”
Let us consider a continuous-time signal x(t) whose spectrum is band-limited to fm Hz. This means that the signal x(t) has no frequency components beyond fm Hz. Therefore, X(jω) is zero for |ω| > ωm, i.e., X(jω) = 0 for |ω| > ωm, where ωm = 2πfm.
Fig. (a) A continuous-time signal x(t), (b) Spectrum of the continuous-time signal, (c) Impulse train as sampling function, (d) Multiplier, (e) Sampled signal g(t), (f) Spectrum of the sampled signal.
Nyquist Rate and Nyquist Interval
When the sampling rate becomes exactly equal to 2fm samples per second, it is called the Nyquist rate. The Nyquist rate is also the minimum sampling rate. It is given by
fs = 2fm
Similarly, the maximum sampling interval is called the Nyquist interval. It is given by
Nyquist Interval Ts = 1/(2fm) seconds
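The two definitions above amount to simple arithmetic; the sketch below works them out for an assumed band limit of 4 kHz (roughly telephone-quality speech, not a value from the text).

```python
# Arithmetic sketch of the Nyquist rate and Nyquist interval. The 4 kHz band
# limit is an assumed example value, not from the text.
f_m = 4000.0                 # maximum frequency present in the signal, Hz
f_nyq = 2 * f_m              # Nyquist rate = minimum sampling rate, samples/s
T_nyq = 1 / (2 * f_m)        # Nyquist interval = maximum sampling interval, s
print(f_nyq, T_nyq)          # 8000.0 0.000125
```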
Therefore, the original spectrum X(jω) can be recovered from the sampled spectrum by using a low-pass filter with a cut-off frequency ωm.

Fig: Sampled spectrum at Nyquist rate



Reconstruction Filter (Low-Pass Filter)
A low-pass filter is a filter which passes only low frequencies up to a specified cut-off frequency and rejects all frequencies above the cut-off frequency. Because such an ideal brick-wall response would require a non-causal impulse response, an ideal low-pass filter is not physically realizable.
Fig: Ideal low-pass filter
Fig: (a) Spectrum of original signal
(b) Spectrum of sample signal
(c) Amplitude response of practical low-pass filter.
The expression for the sampled signal is written as
g(t) = x(t) · δTs(t)
or g(t) = (1/Ts) [x(t) + 2x(t) cos ωst + 2x(t) cos 2ωst + ………]
From the above equation, it may be observed that the sampled signal contains the component (1/Ts) x(t).
To recover x(t) or X(j  ), the sampled signal must be passed through an ideal low-pass filter of
bandwidth of fm Hz and gain Ts.
Therefore, the reconstruction or interpolating filter transfer function may be expressed as
H(jω) = Ts rect(ω / 4πfm)
The impulse response h(t) of this filter is the inverse Fourier transform of H(jω):
h(t) = F⁻¹[H(jω)] = F⁻¹[Ts rect(ω / 4πfm)] = 2fm Ts sinc(2πfm t)     …(7.19)
where sinc x = (sin x)/x.
Assuming that sampling is done at the Nyquist rate, then
Ts = 1/(2fm)
so that 2fm Ts = 1.
Putting this value of 2fm Ts in equation (7.19), we have
h(t) = 1 · sinc(2πfm t) = sinc(2πfm t)
Figure (b) shows the graph of h(t).
Fig: (a) Transfer function H(jω) of the reconstruction filter, (b) Impulse response h(t), (c) Sampled signal g(t) and reconstructed signal x(t)
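The reconstruction described above can be sketched numerically: each sample is weighted by a shifted copy of h(t) and the copies are summed (the interpolation formula). The band limit, test tone, and evaluation instant below are assumed example values; note that numpy's `np.sinc(u)` is sin(πu)/(πu), which equals the sinc(2πfm t) kernel above when u = 2fm t.

```python
import numpy as np

# Sketch of ideal reconstruction via the interpolation formula
#   x(t) = sum_n x(nTs) sinc(2 fm (t - n Ts)),
# where np.sinc(u) = sin(pi u)/(pi u) is the normalized kernel. The band limit,
# test tone, and evaluation instant are assumed example values.
fm = 10.0                         # band limit of the test signal, Hz
fs = 2 * fm                       # sample exactly at the Nyquist rate
Ts = 1 / fs
n = np.arange(-2000, 2001)        # enough terms that truncation error is tiny
samples = np.cos(2 * np.pi * 3.0 * n * Ts)   # a 3 Hz tone, well inside the band

t = 0.123                         # an arbitrary off-grid instant
x_rec = np.sum(samples * np.sinc(2 * fm * (t - n * Ts)))
print(x_rec, np.cos(2 * np.pi * 3.0 * t))    # the two agree closely
```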

Effect of under sampling: Aliasing


When a continuous-time band-limited signal is sampled at a rate lower than the Nyquist rate, i.e., fs < 2fm, the successive cycles of the spectrum G(jω) of the sampled signal g(t) overlap with each other. This overlap is called aliasing.
Fig: Spectrum of the sampled signal for the case fs < 2fm
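Aliasing can be seen directly in the time domain: once the spectra overlap, a high-frequency tone becomes indistinguishable from a lower one. The frequencies below are assumed example values.

```python
import numpy as np

# Aliasing sketch with assumed values: a 7 Hz tone sampled at fs = 10 Hz
# (below its Nyquist rate of 14 Hz) produces exactly the same samples as a
# 3 Hz tone, since 7 Hz folds down to fs - 7 = 3 Hz.
fs = 10.0
n = np.arange(50)
t = n / fs
high = np.cos(2 * np.pi * 7.0 * t)    # under-sampled tone
alias = np.cos(2 * np.pi * 3.0 * t)   # its alias inside [0, fs/2]
print(np.max(np.abs(high - alias)))   # ~ 0: the samples are indistinguishable
```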

Energy Signal
An energy signal is one which has finite energy and zero average power. Hence, x(t) is an energy signal if
0 < E < ∞ and P = 0
Parseval’s Theorem for Energy Signals
Parseval's theorem states that the energy of a signal may be obtained with the help of its Fourier transform:
E = ∫_{−∞}^{∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} |X(jω)|² dω = ∫_{−∞}^{∞} |X(f)|² df
Above equation is known as Parseval’s theorem for energy signals.
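A discrete analogue of Parseval's theorem holds for the DFT and is easy to check numerically: the energy of a sequence equals the (suitably normalized) energy of its spectrum. The random test vector below is an assumed example.

```python
import numpy as np

# Discrete sketch of Parseval's theorem: for the DFT,
#   sum |x(n)|^2 = (1/N) sum |X(k)|^2,
# mirroring E = int |x(t)|^2 dt = int |X(f)|^2 df. The test vector is random.
rng = np.random.default_rng(0)
x = rng.standard_normal(256)
X = np.fft.fft(x)
time_energy = np.sum(np.abs(x)**2)
freq_energy = np.sum(np.abs(X)**2) / len(x)
print(time_energy, freq_energy)   # equal to machine precision
```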
Energy Spectral Density
Let us consider a signal x(t) which is applied to an ideal low-pass filter as shown in figure 3.88. Then
Y(jω) = H(jω) X(jω)
Here X(jω) = Fourier transform of x(t)
and Y(jω) = Fourier transform of y(t)
Also, the energy E0 of the output signal y(t) may be expressed as (using Parseval's theorem)
E0 = (1/2π) ∫_{−∞}^{∞} |Y(jω)|² dω
or E0 = (1/2π) ∫_{−∞}^{∞} |H(jω)|² |X(jω)|² dω
It may be observed that H(jω) = 0 for all frequencies except for the narrow band −ωm to ωm, for which it is unity. Therefore,
E0 = (1/2π) ∫_{−ωm}^{ωm} 1 · |X(jω)|² dω = (1/2π) ∫_{−ωm}^{ωm} |X(jω)|² dω
Further, we may assume that the Fourier transform X(jω) is constant over the narrow band −ωm to ωm. Therefore, the energy of the signal over this narrow band Δω = 2ωm will be
E0 = (1/2π) |X(jω)|² ∫_{−ωm}^{ωm} dω = (1/2π) |X(jω)|² (2ωm)
E0 = (1/2π) |X(jω)|² (Δω) = |X(jω)|² (Δf)
Therefore, the energy contribution per unit bandwidth will be
E0 / Δf = |X(jω)|²
Hence, |X(jω)|² represents energy per unit bandwidth and is known as the Energy Spectral Density or Energy Density Spectrum. It is generally denoted by ψ(ω).
Hence ψ(ω) = |X(jω)|²
and E = (1/2π) ∫_{−∞}^{∞} ψ(ω) dω

Now, we can find the relationship between the energy densities of the input and the output (response) as under:
Y(jω) = H(jω) X(jω)
Therefore, |Y(jω)|² = |H(jω) X(jω)|² = |H(jω)|² |X(jω)|²
Now, let ψy(ω) be the energy spectral density of the output y(t) and ψx(ω) be the energy spectral density of the input x(t). Then
ψy(ω) = |Y(jω)|² and ψx(ω) = |X(jω)|²
so that ψy(ω) = |H(jω)|² ψx(ω)
This is the relationship between the input and output energy spectral densities.
Obtain the energy spectral density of a gate function of width τ and amplitude A.
x(t) = A for |t| ≤ τ/2, and 0 otherwise
The Fourier transform is expressed as
X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt
For the gate function, we have
X(jω) = ∫_{−τ/2}^{τ/2} A e^{−jωt} dt = (A/jω) [e^{jωτ/2} − e^{−jωτ/2}]
or X(jω) = Aτ [sin(ωτ/2) / (ωτ/2)] = Aτ Sa(ωτ/2)
Therefore ψ(ω) = |X(ω)|² = A²τ² Sa²(ωτ/2)

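The result for the gate function can be cross-checked numerically: integrating the energy spectral density over frequency must recover the pulse energy A²τ. The amplitude and width below are assumed example values.

```python
import numpy as np

# Numerical sketch: integrating the ESD of the gate pulse recovers its energy,
# E = (1/2pi) int psi(w) dw = A^2 * tau. A and tau are assumed example values.
A, tau = 2.0, 1.0
w = np.linspace(-2000.0, 2000.0, 400001)
dw = w[1] - w[0]
Sa = np.sinc(w * tau / (2 * np.pi))     # np.sinc(x) = sin(pi x)/(pi x), so this is Sa(w*tau/2)
psi = A**2 * tau**2 * Sa**2             # energy spectral density of the gate pulse
E = (np.sum(psi) - 0.5 * (psi[0] + psi[-1])) * dw / (2 * np.pi)   # trapezoid rule
print(E)                                # ~ A^2 * tau = 4 (small tail truncation)
```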
Power Signals
A power signal is one which has finite average power and infinite energy. Hence, x(t) is a power signal if 0 < P < ∞ and E = ∞, where P is the average power and E is the energy of the signal.
For a periodic signal x(t) with exponential Fourier-series coefficients Cn, Parseval's power theorem gives
P = (1/T) Σ_{n=−∞}^{∞} Cn (T Cn*) = Σ_{n=−∞}^{∞} Cn Cn* = Σ_{n=−∞}^{∞} |Cn|²
which is the average power P of the function x(t).
Therefore,
P = lim_{τ→∞} ∫_{−∞}^{∞} (|Xτ(jω)|² / τ) df
In the limit τ → ∞, the ratio |Xτ(jω)|²/τ may approach a finite value. Let this finite value be S(ω), so that
S(ω) = lim_{τ→∞} |Xτ(jω)|² / τ
Then P = ∫_{−∞}^{∞} S(ω) df = (1/2π) ∫_{−∞}^{∞} S(ω) dω
Relationship between input and output power spectral densities


This relationship, like that for energy spectral density, is expressed as
Sr(ω) = |H(jω)|² Sx(ω)
where Sr(ω) = power spectral density of the response r(t)
Sx(ω) = power spectral density of the input x(t)
H(jω) = transfer function of the system

Table 3.1: Comparison of Energy spectral density and Power spectral density functions
1. The energy spectral density ψ(ω) gives the distribution of the energy of a signal in the frequency domain; the power spectral density S(ω) gives the distribution of the power of a signal in the frequency domain.
2. The energy spectral density is given as ψ(ω) = |X(jω)|²; the power spectral density is given as S(ω) = lim_{τ→∞} |Xτ(jω)|²/τ.

DISTRIBUTION FUNCTIONS AND PROBABILITY DENSITY FUNCTIONS


CUMULATIVE DISTRIBUTION FUNCTION (cdf)
The cumulative distribution function FX(x) of a random variable X gives the probability that X takes a value less than or equal to x; i.e.
FX(x) = P(X ≤ x)
Properties of cumulative distribution function Fx(x)
(1) Fx(x) ≥ 0
(2) Fx(∞) = 1
(3) Fx(-∞) = 0
(4) Fx (x) is a non-decreasing function, i.e.
Fx(x1) ≤ Fx(x2) for x1 ≤ x2
PROBABILITY DENSITY FUNCTION (pdf)
The distribution function of a random variable completely characterizes the random variable, but sometimes it is more convenient to work with another function known as the probability density function.
It is related to distribution function as
Probability density function = d/dx (Probability distribution function)
Or fX(x) = dFX(x)/dx …………(i)
The probability density function fx(x) is the first derivative of the probability distribution
function Fx(x). The first derivative of the probability distribution function may not exist at all
points because the probability distribution function may be a discontinuous function for
discrete random variables. Here we assume that Fx(x) is a continuous function. The probability
density function (pdf) of a random variable completely characterizes the random variable, i.e.
FX(x) = ∫_{−∞}^{x} fX(x) dx …………(ii)
From the property of probability distribution function


Fx(x) = P(X ≤ x) ………….(iii)
From equations (ii) and (iii), we get
P(X ≤ x) = ∫_{−∞}^{x} fX(x) dx
This equation relates probability distribution function and probability density function.
Integrating equation (i) in the limits x1 to x2, we get
∫_{x1}^{x2} fX(x) dx = FX(x) |_{x1}^{x2} = FX(x2) − FX(x1) …………(iv)
Also, we know that
FX(x2) − FX(x1) = P(x1 ≤ X ≤ x2) ………….(v)
From Eqs. (iv) and (v) we get
P(x1 ≤ X ≤ x2) = ∫_{x1}^{x2} fX(x) dx ………….(vi)
From equations (ii) and (vi), the pdf completely characterizes the random variable. If the probability density function of the random variable is known, then the probability of the event (x1 ≤ X ≤ x2) is the area under the pdf curve, as shown in the figure.
Fig. Probability of an event from its probability density function (pdf) fX(x)

Properties of Probability Density Function.


(1) The probability density function (pdf) is a non-negative function:
fX(x) ≥ 0, for all x
Proof: By the definition of the probability density function (pdf),
fX(x) = dFX(x)/dx
Since FX(x) is a non-decreasing function, fX(x) ≥ 0 for all x.
(2) The area under the pdf curve is unity:
∫_{−∞}^{∞} fX(x) dx = 1
(3) For a continuous random variable, the probability of the event (X = x) is zero:
P(X = x) = 0, for any real number x
Some Common pdfs
If the probability density function of a random variable defined over a given random
experiment is known, then the probability of an event associated with the experiment can be
determined.
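The probability-as-area idea can be sketched numerically: integrating a pdf between two limits gives the probability of the corresponding event. The uniform density and event limits below are assumed example values.

```python
import numpy as np

# Sketch of P(x1 <= X <= x2) as the area under the pdf, for a uniform pdf on
# [a, b] = [0, 4] and the event {1 <= X <= 3}; these are assumed example values.
a, b = 0.0, 4.0

def f_uniform(x):
    # fX(x) = 1/(b - a) on [a, b], zero elsewhere
    return np.where((x >= a) & (x <= b), 1.0 / (b - a), 0.0)

x = np.linspace(1.0, 3.0, 200001)
vals = f_uniform(x)
p = (np.sum(vals) - 0.5 * (vals[0] + vals[-1])) * (x[1] - x[0])  # trapezoid rule
print(p)   # (3 - 1)/(4 - 0) = 0.5
```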

Some Commonly occurring pdfs


(1) Uniform pdf
(2) Gaussian or normal pdf
(3) Exponential pdf.
(1) Uniform pdf. The pdf of a uniform random variable X with parameters a, b ∈ R, having sample space S = (a, b), i.e., a ≤ X ≤ b, is given by
fX(x) = 1/(b − a), x ∈ [a, b]
The uniform pdf is drawn in the figure.
Fig. Uniform pdf
(2) Gaussian or Normal pdf. The Gaussian random variable X with parameters µ, σ has sample space S = (−∞, ∞). Its pdf is given by
fX(x) = (1 / √(2π) σ) e^{−(x − µ)² / 2σ²}, x ∈ (−∞, ∞)
The Gaussian probability density function is the most commonly used model for the description of noise in communication channels. The Gaussian pdf is drawn in the figure.
Fig. Gaussian pdf

(3) Exponential pdf. The exponential random variable X with parameter γ has sample space S = [0, ∞), where γ is a positive number. The exponential pdf is given by
fX(x) = γ e^{−γx}, x ∈ [0, ∞)
This is a very widely used model in the analysis and design of computer network systems. The exponential pdf is drawn in the figure.
Fig. Exponential pdf for γ = 2
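As a numerical sketch of pdf property (2), the Gaussian and exponential densities above each integrate to unity; the parameter values below (µ = 0, σ = 1, γ = 2) are assumed example choices.

```python
import numpy as np

# Numerical check of pdf property (2) for the Gaussian (mu = 0, sigma = 1)
# and exponential (gamma = 2) densities: each integrates to unity.
x = np.linspace(-10.0, 10.0, 200001)
gauss = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
area_gauss = (np.sum(gauss) - 0.5 * (gauss[0] + gauss[-1])) * (x[1] - x[0])

t = np.linspace(0.0, 20.0, 200001)
gamma = 2.0
expo = gamma * np.exp(-gamma * t)
area_expo = (np.sum(expo) - 0.5 * (expo[0] + expo[-1])) * (t[1] - t[0])

print(area_gauss, area_expo)   # both ~ 1
```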
Joint Distribution and Probability Density Functions
Joint Distribution Function. The joint distribution function of the two random variables X
and Y is denoted by FXY and defined by
FXY(x, y) = P[X ≤ x, Y ≤ y]
where the event [X ≤ x, Y ≤ y] = [X ≤ x] ∩ [Y ≤ y]
The joint distribution function of two random variables is a two-dimensional function; it depends upon both x and y.

Properties of Joint Distribution Function


(i) The joint distribution function of the two random variables completely characterizes the two
random variables. It means that both random variables X and Y can be determined from the
joint distribution function FXY(x, y)
(ii) The definition of the joint distribution function can be easily extended to a random vector
consisting of N random variables.
FX1,X2,…,XN(x1, x2, x3, ……, xN) = P[X1 ≤ x1, X2 ≤ x2, X3 ≤ x3, ……, XN ≤ xN]

Joint Probability Density Function.


The joint probability density function of two random variables X and Y is defined as
fXY(x, y) = ∂²FXY(x, y) / ∂x ∂y ……….(i)

Properties of Joint Probability Density Function


(i) Integrating both sides of equation (i), we can get another relation between fXY(x, y) and FXY(x, y):
FXY(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} fXY(x, y) dx dy

(ii) The joint probability density function of a random vector consisting of N random variables is given by
fX1,X2,…,XN(x1, x2, ……, xN) = ∂ᴺ FX1,X2,…,XN(x1, x2, ……, xN) / ∂x1 ∂x2 … ∂xN
(iii) The joint probability density function can be used to determine probabilities in the following way:
P[X ≤ x, Y ≤ y] = ∫_{−∞}^{x} ∫_{−∞}^{y} fXY(x, y) dx dy
P[x1 ≤ X ≤ x2, y1 ≤ Y ≤ y2] = ∫_{x1}^{x2} ∫_{y1}^{y2} fXY(x, y) dx dy
The above equations can also be extended to the case of N random variables.


(iv) The area under the joint probability density function is unity (same as for a single random variable):
∫_{−∞}^{∞} ∫_{−∞}^{∞} fXY(x, y) dx dy = 1

(v) fXY(x, y) completely characterizes the two random variables X and Y.


MARGINAL DISTRIBUTION AND PROBABIILTY DENSITY FUNCTION
Marginal Distribution Function.
It can be obtained from the joint distribution in the following way.
We know, P[X ≤ x, Y ≤ y] = FXY(x, y) ………(i)
P[X ≤ x, Y ≤ ∞] = FXY(x, ∞)
Now, [X ≤ x, Y ≤ ∞] = [X ≤ x] ∩ [Y ≤ ∞] = [X ≤ x]
Thus we can write equation (i) as
P[X ≤ x] = FXY(x, ∞), i.e., FX(x) = FXY(x, ∞)
Similarly, FY(y) = FXY(∞, y)
Marginal Probability Density Functions
These can be obtained from the following relations:
FX(x) = ∫_{−∞}^{x} ∫_{−∞}^{∞} fXY(x, y) dy dx
and FY(y) = ∫_{−∞}^{y} ∫_{−∞}^{∞} fXY(x, y) dx dy
so that fX(x) = ∫_{−∞}^{∞} fXY(x, y) dy
and fY(y) = ∫_{−∞}^{∞} fXY(x, y) dx

INDEPENDENT RANDOM VARIABLES


From the basic theory of probability, two events A and B are independent if the occurrence of one does not affect the probability of occurrence of the other.
Mathematically, if P(A ∩ B) = P(A) P(B), then events A and B are independent.
If the events [X ≤ x] and [Y ≤ y] are independent for any x and y, then the two random variables X and Y are called independent random variables:
P[X ≤ x, Y ≤ y] = P[X ≤ x] P[Y ≤ y] for all x and y
Or FXY(x, y) = FX(x) FY(y)
Differentiating with respect to x and y, we get
∂²FXY(x, y) / ∂x ∂y = [dFX(x)/dx] [dFY(y)/dy]
Or fXY(x, y) = fX(x) fY(y)
i.e., the two random variables X and Y are independent if the product of their marginal distribution (probability density) functions is equal to their joint distribution (probability density) function.
AUTOCORRELATION
AUTOCORRELATION FUNCTION
The autocorrelation function is defined as a measure of the similarity between a signal or process and its replica shifted in time by a variable amount.
The autocorrelation function of a stationary process X(t) is defined as
Rx(tj – ti) = E[X(tj) X(ti)] for any tj and ti ……..(i)
Where X(tj) and X(ti) are the random variables obtained by observing the process X(t) at times
tj and ti, respectively.
The autocorrelation function depends only on the time difference (t j – ti).
Using τ = tj – ti in equation (i) we get
Rx(τ) = E[X(t) X(t – τ)]
Here X(t) and X(t − τ) are considered as random variables. The variable τ is known as the time-lag or time-delay parameter. The autocorrelation function of a stationary random process is independent of a shift of the time origin.
Properties of Autocorrelation Function RX(τ) of a Wide-Sense Stationary (WSS) Process
(1) The autocorrelation function of WSS process X(t) is an even function of time-lag.
Autocorrelation function RX(τ) satisfies the following mathematical relationship.
RX(τ) = RX(-τ)
(2) The mean square value of a WSS process is equal to the autocorrelation function of the
random process for zero time-lag.
RX(0) = RX(τ) |τ = 0
= E[X(t) X(t – τ)] |τ = 0
= E[X(t) X(t – 0)]
= E[X2(t)]
Or RX(0) = E[X2(t)]
(3) The autocorrelation function of a WSS random process has its maximum magnitude at zero time-lag (τ = 0):
|RX(τ)| ≤ RX(0)
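Properties (2) and (3) can be sketched empirically from a single realization: for unit-variance white noise (an assumed test process), the sample autocorrelation at zero lag estimates E[X²] = 1 and dominates every other lag.

```python
import numpy as np

# Empirical sketch of properties (2) and (3): for a white-noise realization,
# the sample autocorrelation has R(0) ~ E[X^2] = 1, and |R(k)| << R(0) elsewhere.
rng = np.random.default_rng(1)
x = rng.standard_normal(100000)

def R_hat(k):
    """Sample autocorrelation at non-negative integer lag k."""
    return float(np.mean(x[k:] * x[:len(x) - k])) if k > 0 else float(np.mean(x * x))

print(R_hat(0))                                    # ~ 1
print(max(abs(R_hat(k)) for k in range(1, 20)))    # ~ 0, far below R_hat(0)
```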
CROSS-CORRELATION FUNCTIONS
Autocorrelation function is determined for single random process but cross correlation function
is determined for two random processes.
Consider two random processes X(t) and Y(t) with autocorrelation function R X(t, u) and RY(t,
u), respectively. There will be two cross-correlation functions between two random processes
X(t) and Y(t). These cross-correlations are defined as
RXY(t, u) = E[X(t) Y(u)]
and RYX(t, u) = E[Y(t) X(u)]
where t and u are the two values of time at which the processes are observed.
The cross-correlation functions of two random processes X(t) and Y(t) can be given conveniently in matrix form as
R(t, u) = | RX(t, u)    RXY(t, u) |
          | RYX(t, u)   RY(t, u)  |        ………(i)
R(t, u) is called the correlation matrix of the random processes X(t) and Y(t).
Or R(t, u) = | RX(t − u)    RXY(t − u) |
             | RYX(t − u)   RY(t − u)  |   ………(ii)
If both random processes X(t) and Y(t) are WSS processes, then equation (i) can be written as equation (ii). In addition, both random processes X(t) and Y(t) are then said to be jointly wide-sense stationary.
Generally, the cross-correlation function is not an even function of τ like the autocorrelation function. It has the following symmetry relation:
RXY(τ) = RYX(−τ)
Moreover, cross-correlation does not have a maximum at the origin like the autocorrelation
function. The two random processes X(t) and Y(t) are said to be incoherent or orthogonal if
cross-correlation function of X(t) and Y(t) is zero.
Rxy(τ) = 0
The two random processes are said to be non-correlated if their cross-correlation function RXY(τ) is equal to the product of their mean values:
RXY(τ) = E[X(t)] E[Y(t)]
The incoherent or orthogonal processes are non-correlated processes with E[X(t)] = 0 and/or E[Y(t)] = 0.
SPECTRAL DENSITIES
Power Spectral Density (psd)
Power spectral density SX(ω) of a wide-sense stationary (WSS) random process X(t) is defined
as
SX(ω) = Fourier transform of RX(τ) = ∫_{−∞}^{∞} RX(τ) e^{−jωτ} dτ
where RX(τ) = autocorrelation function of the random process X(t),
i.e., the Fourier transform of the autocorrelation function RX(τ) of a random process X(t) is called the power spectral density.
Cross Power Spectral Density (cpsd)
The cross power spectral density of two jointly WSS random processes X(t) and Y(t) is defined
as

SXY(ω) = Fourier transform of RXY(τ) = ∫_{−∞}^{∞} RXY(τ) e^{−jωτ} dτ
where RXY(τ) = cross-correlation function of the random processes X(t) and Y(t)


Properties of Power Spectral Density
(i) Power spectral density of a random process is a real function of frequency ω.
(ii) Power spectral density of a random process X(t) is an even function of frequency ω.
SX(ω) = SX(-ω)
(iii) Power spectral density of a random process X(t) is a non-negative function of ω, i.e.,
SX(ω) ≥ 0 for all ω
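Properties (i) to (iii) can be sketched numerically in discrete time. The autocorrelation RX(k) = a^|k| used below (with a = 0.6) is an assumed example of a valid WSS autocorrelation; its truncated Fourier sum comes out real, even, and strictly positive.

```python
import numpy as np

# Numerical sketch of properties (i)-(iii) in discrete time: take the assumed
# autocorrelation R_X(k) = a^|k| (valid for |a| < 1) and form
#   S_X(w) = sum_k R_X(k) e^(-jwk)   (truncated; a^200 is negligible).
a = 0.6
k = np.arange(-200, 201)
R = a ** np.abs(k)
w = np.linspace(-np.pi, np.pi, 1001)
S = np.array([np.sum(R * np.exp(-1j * wi * k)) for wi in w])

print(np.max(np.abs(S.imag)))        # ~ 0  : S_X(w) is real
print(np.max(np.abs(S - S[::-1])))   # ~ 0  : S_X(w) = S_X(-w)
print(np.min(S.real))                # > 0  : S_X(w) is non-negative
```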
Energy Spectral Density (esd)
The energy spectral density ψx(ω) is a measure of density of the energy contained in random
process X(t) in Joules per Hertz. Since the amplitude spectrum of a real-valued random process
X(t) is an even function of ω, the energy spectral density of such a signal is symmetrical about
the vertical axis passing through the origin.
The total energy of the random process X(t) is defined as
E = (1/2π) ∫_{−∞}^{∞} ψx(ω) dω