The cross-correlation function between two signals x(t) and y(t) is defined by the integral:

$R_{xy}(\tau) = \int_{-\infty}^{\infty} x(t)\,y(t + \tau)\,dt.$

Example: the signal $x(t) = 7e^{-t/2}\cos(3t)$ and its autocorrelation are shown below.
Hence, the correlation integral between two signals will have a large maximum value at τ = 0 if the signals are exactly the same (auto-correlation), a finite non-zero value over some range of τ when the two signals are somewhat similar, and approximately zero if the two signals are dissimilar (e.g., a deterministic signal and random noise). This is the basis for signal detection in noise, which will be treated later.
[Figure: the signal x(t) vs. time t (sec), and its autocorrelation $R_x(\tau)$ vs. time-delay τ (sec).]
1.2.5 Signal Power and Energy
If x(t) is a signal, then its instantaneous power at any time t, denoted by p(t), is defined as the power dissipated in a 1 Ω resistor by a voltage of amplitude x(t) volts (or, equivalently, by a current of x(t) amperes), which is given by:

$p(t) = |x(t)|^2,$

where the absolute value is used to include complex signals.
Since power is the rate of energy delivery, the total energy in the signal is obtained by integrating the instantaneous power over all time:

$E = \int_{-\infty}^{\infty} p(t)\,dt = \int_{-\infty}^{\infty} |x(t)|^2\,dt.$
1.2.5.1 Power in Periodic Signals

If x(t) is a periodic signal with period $T_o$, then the total energy in this signal is:

$E = \lim_{T \to \infty} \int_{-T/2}^{T/2} |x(t)|^2\,dt \to \infty,$

but the signal power is finite (hence, periodic signals are power signals), and can be expressed as:

$P = \lim_{T \to \infty} \frac{1}{T}\int_{-T/2}^{T/2} |x(t)|^2\,dt = \frac{1}{T_o}\int_{0}^{T_o} |x(t)|^2\,dt.$
Example: If $x(t) = A\sin(\omega_o t)$, then the signal power is given by:

$P = \frac{1}{T_o}\int_0^{T_o} A^2 \sin^2(\omega_o t)\,dt = \frac{A^2}{\omega_o T_o}\int_0^{T_o} \left[\tfrac{1}{2} - \tfrac{1}{2}\cos(2\omega_o t)\right]\omega_o\,dt.$

Substituting $\phi = \omega_o t$ (so that $\omega_o T_o = 2\pi$):

$P = \frac{A^2}{2\pi}\int_0^{2\pi} \left[\tfrac{1}{2} - \tfrac{1}{2}\cos(2\phi)\right]d\phi = \frac{A^2}{2\pi}\left[\frac{\phi}{2} - \frac{1}{4}\sin(2\phi)\right]_0^{2\pi} = \frac{A^2}{2}.$

The same result is obtained for the signal $x(t) = A\cos(\omega_o t)$.
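As a quick numerical check of this result, here is a minimal Python/NumPy sketch (the amplitude and frequency values are arbitrary choices, not from the text); the time-averaged power over one period comes out to $A^2/2$:

```python
import numpy as np

A, fo = 2.0, 5.0                       # arbitrary amplitude and frequency (Hz)
To = 1 / fo                            # fundamental period (s)
t = np.linspace(0, To, 10_000, endpoint=False)
x = A * np.sin(2 * np.pi * fo * t)

# P = (1/To) * integral of |x(t)|^2 over one period, here a sample mean
P = np.mean(x**2)
print(P, A**2 / 2)                     # both ~2.0
```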
1.2.5.2 Parseval’s Theorem

The power in periodic signals and the energy in non-periodic signals can equivalently be obtained from the frequency domain as follows:

1. For periodic signals: $P = \sum_{k=-\infty}^{\infty} |X_k|^2$, where $X_k$ are the FS coefficients.

2. For non-periodic signals: $E = \int_{-\infty}^{\infty} |X(f)|^2\,df$, where $X(f)$ is the FT of the signal.
The function $|X_k|^2$ vs. $f$ (defined at the discrete frequencies $f = k f_o$, $f_o$ being the fundamental frequency) is called the power spectrum of the signal. Hence, $|X_k|^2$ is called the power spectral density (PSD).

The function $|X(f)|^2$ vs. $f$ is called the energy spectrum of the signal, and $|X(f)|^2$ is called the energy spectral density (ESD).
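The non-periodic case can be checked numerically with an FFT. The following is a minimal Python/NumPy sketch (assumptions: the decaying-cosine pulse from the earlier example as the test signal, $X(f)$ approximated by dt·FFT(x), and arbitrary sampling parameters):

```python
import numpy as np

dt = 0.001
t = np.arange(0, 20, dt)
x = 7 * np.exp(-t / 2) * np.cos(3 * t)   # decaying-cosine pulse (energy signal)

# Energy in the time domain: E = integral of |x(t)|^2 dt
E_time = np.sum(np.abs(x)**2) * dt

# Energy in the frequency domain: E = integral of |X(f)|^2 df,
# with X(f) ~ dt * FFT(x) and frequency step df = 1/(N*dt)
X = dt * np.fft.fft(x)
df = 1 / (len(x) * dt)
E_freq = np.sum(np.abs(X)**2) * df

print(E_time, E_freq)                    # the two estimates agree
```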
1.3 Random Signals
1.3.1 Definition
A random signal is a signal whose values cannot be predicted or known a priori, but follow a probability law.
Probability: the relative frequency with which an outcome occurs when an experiment is repeated a large number of times.

Sample space: the set of all possible outcomes of an experiment.

Example 1: In a coin-tossing experiment, the outcomes are either “heads” or “tails”, hence the sample space is S = {h, t}. If the tossing is repeated a large number of times, then we get approximately 50% heads and 50% tails. We say: p(h) = 0.5, p(t) = 0.5.
Note: if $S = \{e_k \mid k = 1, \ldots, N\}$, then $\sum_{k=1}^{N} p(e_k) = 1$, i.e., the summation of the probabilities of all possible events (outcomes) should be 1 (100%).
Example 2: The die-tossing outcomes are S = {1-dot, 2-dots, 3-dots, 4-dots, 5-dots, 6-dots}. If a die is tossed a large number of times N, then each face appears ≈ N/6 times, i.e., each outcome has probability 1/6.
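This law-of-large-numbers behaviour is easy to simulate; below is a minimal Python/NumPy sketch (the trial count N is an arbitrary choice):

```python
import numpy as np

N = 600_000
rolls = np.random.randint(1, 7, size=N)   # fair die: integer outcomes 1..6

for k in range(1, 7):
    print(k, np.mean(rolls == k))         # each relative frequency ~ 1/6
```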
A random variable is a real-valued function whose domain is the set of all possible events (i.e., the sample space S) of a repeating experiment:

$X : S \to R, \quad R \subseteq \Re$ ($R$ is a subset of $\Re$).

Example: In the coin-tossing experiment, if we define X(h) = 1, X(t) = −1, then X : {h, t} → {1, −1} is a random variable.
Notes:
1) Random variables can be discrete (as in the above example) or continuous (as the amplitude of noise).

2) In the case of noise, we can define the random variable as the noise amplitude itself, i.e., $X : R \to R \mid X(r) = r \ \forall r \in R$.
1.3.2.3 Joint Probability
If we have A and B as two events (outcomes), then the joint probability of A and B, denoted by $P(A \cap B)$, is the probability that A would be the outcome of experiment 1 and B the outcome of experiment 2.

Example: If n(t) is noise, and we define the events $A = \{n(t_1) > 0.1\}$ and $B = \{n(t_2) > 0.5\}$, then

$P(A \cap B) = P\{n(t_1) > 0.1 \text{ and } n(t_2) > 0.5\}.$
1.3.2.4 Conditional Probability

The conditional probability of A given B is defined as $P(A \mid B) = P(A \cap B)/P(B)$. The events A and B are independent when $P(A \mid B) = P(A)$. Hence, using the above formula: $P(A \cap B) = P(A) \cdot P(B)$.
Example 1: A box contains 20 cubes: 15 red and 5 blue. Two cubes are to be selected randomly (without replacement). What is the probability that the 1st is red and the 2nd is blue? Here the two draws are dependent: $P(R_1 \cap B_2) = P(R_1)\,P(B_2 \mid R_1) = \frac{15}{20}\cdot\frac{5}{19} \approx 0.197$.
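A Monte Carlo check of this answer (a minimal sketch; the trial count is arbitrary):

```python
import numpy as np

rng = np.random.default_rng()
box = np.array(['r'] * 15 + ['b'] * 5)     # 15 red cubes, 5 blue cubes

N = 100_000
hits = 0
for _ in range(N):
    pick = rng.choice(box, size=2, replace=False)   # draw 2 without replacement
    hits += (pick[0] == 'r') and (pick[1] == 'b')

print(hits / N, (15 / 20) * (5 / 19))      # both ~0.197
```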
1.3.2.5 Probability Density Function (pdf)

The pdf of a random variable X, denoted $P_X(x)$, is a non-negative function (with a total area of 1) that shows how the values of X are distributed after a large number of experiments (trials), i.e.,

$P_X(x) \ge 0, \qquad \int_{-\infty}^{\infty} P_X(x)\,dx = 1,$

$P(x_1 \le X \le x_2) = \int_{x_1}^{x_2} P_X(x)\,dx.$

Note that $x$, $x_1$ and $x_2$ are values attained by the random variable X.
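As a numerical illustration of these two properties, here is a minimal Python/NumPy sketch (assumptions: the Gaussian pdf of Subsection 1.3.2.8 below with m = 0, σ = 1, and arbitrary limits x1 = −1, x2 = 1):

```python
import numpy as np

m, sigma = 0.0, 1.0
dx = 1e-4
x = np.arange(-8, 8, dx)
p = np.exp(-(x - m)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

print(np.sum(p) * dx)                      # total area ~ 1
mask = (x >= -1.0) & (x <= 1.0)
print(np.sum(p[mask]) * dx)                # P(-1 <= X <= 1) ~ 0.683
```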
The statistical mean of X is $m_X = E(X) = \int_{-\infty}^{\infty} x\,P_X(x)\,dx$, and its variance is $\sigma_X^2 = \mathrm{var}(X) = E\{(X - m_X)^2\}$. The quantity $\sigma_X = \sqrt{\mathrm{var}(X)}$ is called the standard deviation of X. The variance indicates how far the values of X are spread around the mean. Hence, the variance gives a measure of the randomness of a random signal.
Note:

$\sigma_X^2 = E\{X^2 - 2 m_X X + m_X^2\} = E(X^2) - 2 m_X E(X) + m_X^2 = E(X^2) - m_X^2.$
1.3.2.8 The Gaussian pdf

This is an important probability density function often encountered in applications. A random variable X is said to be Gaussian if its pdf is given by:

$p(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x - m)^2 / 2\sigma^2},$

where $m$ is the statistical mean and $\sigma^2$ is the variance. Plots of this pdf for different values of mean and variance are shown in Fig.(1.3.1).
[Fig.(1.3.1): Gaussian pdfs $p_1(x)$ and $p_2(x)$. Left: $m_1 = m_2 = 0$, with $\sigma_1 = 3$ and $\sigma_2 = 5$. Right: $\sigma_1 = \sigma_2 = 3$, with $m_1 = 0$ and $m_2 = 2$.]
Noise that is mostly encountered in electrical systems has a Gaussian pdf with zero mean, $m = 0$, as follows:

$p(n) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-n^2 / 2\sigma^2}.$

Note that the Gaussian noise power $E(n^2) = E\{(n - m_n)^2\}$ (since $m_n = 0$) = noise variance = $\sigma^2$. (Prove!)
2 2
If there are two noise signals, n1 and n2 , with variances such that σ 2 > σ 1 , then pdf2
has a wider spread around its mean than pdf1 (see Fig.1.3.1 left), and the second signal
has more power than the first.
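The claim that the power of zero-mean Gaussian noise equals its variance is easy to verify by simulation (a minimal sketch; σ and the sample count are arbitrary choices):

```python
import numpy as np

sigma = 2.0
n = np.random.normal(0.0, sigma, size=1_000_000)   # zero-mean Gaussian noise

power = np.mean(n**2)        # E(n^2), the noise power
print(power, sigma**2)       # both ~4.0
```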
1.3.3.3 Power Spectral Density of Random Signals

A random signal n(t) can be classified as a power signal; hence, like other power signals, it has a PSD (denoted normally by $G_n(f)$), which is defined in this case as follows:

$G_n(f) = \lim_{T \to \infty} \frac{1}{T}\, E\left\{\left|F\{n(t)\,\Pi_T(t)\}\right|^2\right\} = \lim_{T \to \infty} \frac{1}{T}\, E\left\{|N_T(f)|^2\right\},$

where E denotes the statistical mean. By the Wiener–Khinchin theorem (WKT):

$G_x(f) \overset{F}{\longleftrightarrow} R_x(\tau),$

where $G_x(f)$ is the PSD of the signal and $R_x(\tau)$ is its autocorrelation function as given in Subsection (1.3.3.5), i.e., $E\{x(t)\,x(t + \tau)\}$.
1.3.3.8 White Noise

A common kind of noise encountered in nature is thermal noise, which has a constant PSD and Gaussian-distributed values. Theoretically, its PSD is constant over all frequencies (hence the name “white”); consequently, its autocorrelation function is a weighted delta function (by the WKT) [see Fig.(1.3.2)]. This means that its samples (in the time domain) are uncorrelated with each other. Practically, noise is band-limited (PSD = $(\eta/2)\,\Pi_{2B}(f)$, $\eta$ being a constant). The use of $\eta/2$ rather than $\eta$ indicates a double-sided PSD (i.e., both positive and negative frequencies are considered).
[Fig.(1.3.2): PSD of white noise with its autocorrelation function: $G_n(f) = \eta/2$ for all $f$, and $R_n(\tau) = (\eta/2)\,\delta(\tau)$.]
[Fig.(1.3.4): Autocorrelation functions for AWGN and its low-pass filtered version. White noise n(t) with $G_n(f) = \eta/2$ and $R_n(\tau) = (\eta/2)\,\delta(\tau)$ is passed through an ideal LPF $H(f)$ of bandwidth B, giving output noise $n_o(t)$ with $G_{n_o}(f) = (\eta/2)\,\Pi_{2B}(f)$ and $R_{n_o}(\tau) = \eta B\,\mathrm{sinc}(2B\tau)$.]
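A qualitative simulation of this figure (a sketch only: a finite FIR low-pass from scipy.signal.firwin stands in for the ideal LPF, and all parameters are arbitrary). The autocorrelation estimate of the white samples is a spike at lag 0, while that of the filtered noise spreads out in a sinc-like shape:

```python
import numpy as np
from scipy import signal

fs = 1000.0                          # sampling rate (Hz)
N = 200_000
n = np.random.normal(0, 1, N)        # white Gaussian noise samples

B = 50.0                             # LPF bandwidth (Hz)
h = signal.firwin(401, B, fs=fs)     # FIR low-pass approximating H(f)
no = signal.lfilter(h, 1.0, n)       # band-limited output noise

def autocorr(x, max_lag):
    # biased estimate of R(k) = E{x(t) x(t + k)}
    return np.array([np.mean(x[:len(x) - k] * x[k:]) for k in range(max_lag)])

print(autocorr(n, 5))                # ~[1, 0, 0, 0, 0]: spike at lag 0
print(autocorr(no, 5))               # comparable values over many lags
```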
1.4 Applications of Signal Analysis
In Section (1.2.4) we defined the autocorrelation function of deterministic periodic and non-periodic signals. The autocorrelation function of random signals was defined in Subsection (1.3.3.5), but this definition requires knowledge of the pdf of the signal. In applications, most random signals (e.g., noise) are ergodic, i.e., their statistical averages equal their time averages. Hence, the autocorrelation function of an ergodic random signal x(t) is defined by a relation similar to that of periodic signals as follows:

$R_x(\tau) = \lim_{T \to \infty} \frac{1}{T}\int_{-T/2}^{T/2} x(\lambda)\,x(\tau + \lambda)\,d\lambda,$
while the cross-correlation between two random signals x(t) and y(t) is defined as follows:

$R_{xy}(\tau) = R_{yx}(-\tau) = \lim_{T \to \infty} \frac{1}{T}\int_{-T/2}^{T/2} x(\lambda)\,y(\tau + \lambda)\,d\lambda.$
Natural noise is uncorrelated with deterministic signals. It is even uncorrelated with itself, as its autocorrelation function is almost a delta function (a spike at the origin), as we saw in Subsection (1.3.3.8). This is the basic theory behind signal detection in noise. If a deterministic signal x(t) is transmitted and a corrupted signal y(t) = x(t) + n(t) is received, we can decide whether there is a message inside the received random signal or not. We correlate y(t) with itself, where its autocorrelation is given by:

$R_y(\tau) = \lim_{T \to \infty} \frac{1}{T}\int_{-T/2}^{T/2} [x(\lambda) + n(\lambda)][x(\tau + \lambda) + n(\tau + \lambda)]\,d\lambda = R_x(\tau) + R_n(\tau) + 2R_{xn}(\tau).$

Since $R_n(\tau)$ is a spike and $R_{xn}(\tau)$ is approximately zero, it is $R_x(\tau)$ only that gives a symmetric shape to $R_y(\tau)$, from which we can decide that there is a message inside the noise.
Example: Consider the sinusoidal signal $x(t) = \sin(\omega_o t)$, with frequency $f_o = 0.1$ Hz. The power of this signal is $1^2/2 = 0.5$ watts. The signal and the absolute value of its autocorrelation $R_x(\tau)$ are shown in Fig.(1.4.1). A random noise signal n(t) with noise power 5 dB (= 3.1623 watts) is considered to corrupt the sinusoid, giving a noisy signal y(t) = x(t) + n(t). Hence, the signal-to-noise ratio is SNR = 0.1581, or −8.0103 dB, which represents strong noise interference. However, when we correlate the noisy signal with itself, we can still distinguish a symmetric non-random pattern with an absolute maximum around the origin. This indicates that there is a deterministic signal embedded in the noise.

This example can be simulated in MATLAB using the function xcorr(x,y)*Ts, where Ts is the time increment.
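An equivalent sketch in Python (using scipy.signal.correlate and correlation_lags in place of MATLAB's xcorr; parameters follow the example above):

```python
import numpy as np
from scipy import signal

Ts = 0.01                                   # time increment (s)
t = np.arange(-10, 10, Ts)
fo = 0.1
x = np.sin(2 * np.pi * fo * t)              # signal power 0.5 W
n = np.random.normal(0, np.sqrt(3.1623), len(t))   # 5 dB (3.1623 W) noise
y = x + n

Ry = signal.correlate(y, y, mode='full') * Ts      # autocorrelation of y(t)
lags = signal.correlation_lags(len(y), len(y)) * Ts

# |Ry| shows a symmetric, periodic pattern with its maximum near lag 0
print(lags[np.argmax(np.abs(Ry))])          # ~0
```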
[Fig.(1.4.1): The signal x(t), the noise n(t), and the noisy signal y(t), with $|R_x(\tau)|$, $|R_n(\tau)|$ and $|R_y(\tau)|$ vs. delay τ (sec); $|R_y(\tau)|$ retains a symmetric pattern with its maximum near τ = 0.]
[Fig.(1.4.2): Matched filter impulse response with the symbol waveform: s(t) on 0 ≤ t ≤ T, its reflection s(−t), and h(t) = s(T − t).]
The Matched Filter is a Correlator
After convolution, the output of the matched filter will be:

$y(t) = \int_0^T r(\lambda)\,h(t - \lambda)\,d\lambda = \int_0^T r(\lambda)\,s(T - t + \lambda)\,d\lambda = R_{rs}(T - t),$

where $R_{rs}(\tau)$ is the cross-correlation between the symbol and the received version of it. Hence the matched filter is essentially a correlator. If r = s (the noise-free condition), $R(\tau)$ will have a symmetric shape with its maximum at τ = 0, i.e., T − t = 0, or t = T.
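A discrete-time sketch of this result (assuming a rectangular symbol, which the text does not specify): the matched-filter output peaks at t = T.

```python
import numpy as np

Ts = 0.001
T = 0.1                                    # symbol duration (s)
s = np.ones(int(T / Ts))                   # rectangular symbol s(t), 0 <= t < T
r = s + np.random.normal(0, 0.5, len(s))   # received symbol plus noise

h = s[::-1]                                # matched filter h(t) = s(T - t)
y = np.convolve(r, h) * Ts                 # filter output y(t)

print(np.argmax(y) * Ts)                   # ~T: maximum at t = T
```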
1.4.3 Range Estimation by Radar:
A narrow pulse s (t ) is transmitted by the radar station towards the airplane. The pulse will
hit the plane and reflect back to the radar, where a matched filter (correlator) is utilized to
estimate the distance between the plane and the radar. Explain how the correlator is used
for range estimation.
[Figure: Radar at distance d from the plane. Transmitted pulse s(t) of width a; received echo r(t) = s(t − t_o); the correlator with h(t) = s(−t) produces an output peak $R_s(t - t_o)$ spanning t_o − a to t_o + a, with its maximum at t = t_o.]
Sol.: The received signal r(t) is a delayed version of the transmitted signal s(t), which is also corrupted by noise. It is given by:

$r(t) = s(t - t_o) + n(t).$

The correlator is a matched filter with impulse response given by:

$h(t) = s(-t)$ [a reflected version of s(t)].
$y(t) = r(t) * h(t) = \int_{-\infty}^{\infty} r(\lambda)\,h(t - \lambda)\,d\lambda = \int_{-\infty}^{\infty} r(\lambda)\,s(\lambda - t)\,d\lambda = \int_{-\infty}^{\infty} [s(\lambda - t_o) + n(\lambda)]\,s(\lambda - t)\,d\lambda.$

Substituting $v = \lambda - t$:

$y(t) = \int_{-\infty}^{\infty} s(v + t - t_o)\,s(v)\,dv + \int_{-\infty}^{\infty} n(v + t)\,s(v)\,dv = R_s(t - t_o) + R_{ns}(t).$

Since $R_{ns}(t) \approx 0$, the output y(t) peaks at $t = t_o$; the round-trip delay $t_o$ then gives the range as $d = c\,t_o/2$, c being the speed of light.
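A simulation sketch of this range estimator (the pulse shape, delay, and noise level are arbitrary assumptions):

```python
import numpy as np
from scipy import signal

Ts = 1e-7                                   # sampling step (s)
a = 2e-5                                    # pulse width (s)
t = np.arange(0, 1e-3, Ts)
s = ((t >= 0) & (t < a)).astype(float)      # transmitted pulse s(t)

to = 4e-4                                   # true round-trip delay (s)
r = ((t >= to) & (t < to + a)).astype(float)   # echo s(t - to)
r += np.random.normal(0, 0.3, len(t))          # plus noise n(t)

y = signal.correlate(r, s, mode='full')        # correlator output ~ Rs(t - to)
lags = signal.correlation_lags(len(r), len(s)) * Ts
to_hat = lags[np.argmax(y)]                    # peak locates the delay

c = 3e8                                        # speed of light (m/s)
print(to_hat, c * to_hat / 2)                  # estimated delay and range d
```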