
1.2.4 Signal Correlation and Its Applications


The correlation between two deterministic energy signals s(t) and r(t), called the cross-correlation function, is defined by the integral:

R_sr(τ) = ∫_{−∞}^{∞} s(λ) r(τ + λ) dλ.

This is similar to the convolution integral, but with a "+" sign instead of "−", and the time delay τ is used instead of the true time t. Note that R_sr(τ) = R_rs(−τ).
Hence, the correlation gives an idea of how samples of the two signals are "related" to each other when the time difference between them is τ; in other words, it is a measure of similarity between two signals. If r(t) equals s(t), the correlation is called the auto-correlation function of the signal s(t), denoted by R_s(τ). For a periodic deterministic signal x(t) with period T_o, the auto-correlation function is defined as:

R_x(τ) = (1/T_o) ∫_{−T_o/2}^{T_o/2} x(λ) x(τ + λ) dλ.

There are correlation formulas for random signals that will be defined later.
It can be shown that the auto-correlation function R_x(τ) has the following properties:
P1: It is even (symmetric about the time-delay axis), R_x(−τ) = R_x(τ).
P2: It always has its absolute maximum at the origin, τ = 0.

Fig.(1.2.14) shows the autocorrelation of the non-periodic signal x(t) = 7e^{−t/2} cos(3t).
Hence, the correlation integral between two signals will have a large maximum value at τ = 0 if the signals are exactly the same (auto-correlation), a finite non-zero value over some range of τ when the two signals are somewhat similar, and approximately zero if the two signals are dissimilar (e.g., a deterministic signal and random noise). This is the basis for signal detection in noise, which will be handled later.

Fig.(1.2.14): The signal x(t) = 7e^{−t/2} cos(3t) and its autocorrelation function R_x(τ).
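As a quick numerical check (an added illustration, not part of the original notes), the following MATLAB/Octave sketch approximates the correlation integral for the signal of Fig.(1.2.14) and verifies properties P1 and P2. The time step Ts and the 20-second span are arbitrary choices; xcorr is the Signal Processing Toolbox function referred to later in these notes, and multiplying the correlation sum by Ts turns it into an approximation of the integral.

% Numerical autocorrelation of x(t) = 7*exp(-t/2)*cos(3t), t >= 0
Ts = 0.01;                        % time increment (arbitrary choice)
t  = 0:Ts:20;                     % the signal has decayed to ~0 by t = 20 s
x  = 7*exp(-t/2).*cos(3*t);

[Rx, lags] = xcorr(x, x);         % correlation sum
Rx  = Rx*Ts;                      % scale the sum into an integral approximation
tau = lags*Ts;

plot(tau, Rx), xlabel('\tau (sec)'), ylabel('R_x(\tau)')

max(abs(Rx - fliplr(Rx)))         % ~0: P1, even symmetry
[~, k] = max(Rx); tau(k)          % ~0: P2, absolute maximum at the origin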

1.2.5 Signal Power and Energy

If x(t) is a signal, then its instantaneous power at any time t, denoted by p(t), is defined as the power dissipated in a 1 Ω resistor by a voltage of amplitude x(t) volts (or, equivalently, by a current of x(t) amperes), which is given by:

p(t) = |x(t)|²,

where the absolute value is used to include complex signals.

Since power is the rate at which energy is delivered, the energy in an infinitesimal time increment dt is:

e(t) = p(t) dt = |x(t)|² dt,

where dt is the time increment. Hence, the total energy E of the signal is given by:
E = lim_{T→∞} ∫_{−T/2}^{T/2} |x(t)|² dt,

while the total power is given as the time average of the total energy as follows:

P = lim_{T→∞} E/T = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt.
Signals are classified as power signals, energy signals, or neither. Power signals have
finite power (hence, infinite energy, by using the above definitions). Energy signals have
finite energy (hence, zero power). Most signals in practice are either power or energy
signals.

1.2.5.1 Power in Periodic Signals

If x(t) is a periodic signal with period T_o, then the total energy in this signal is:

E = lim_{T→∞} ∫_{−T/2}^{T/2} |x(t)|² dt → ∞,

but the signal power is finite (hence, periodic signals are power signals), and can be expressed as:

P = (1/T_o) ∫_{−T_o/2}^{T_o/2} |x(t)|² dt = (1/T_o) ∫_{0}^{T_o} |x(t)|² dt.
Example: If x(t) = A sin(ω_o t), then the signal power is given by:

P = (1/T_o) ∫_{0}^{T_o} A² sin²(ω_o t) dt = (A²/(T_o ω_o)) ∫_{0}^{T_o} [1/2 − (1/2) cos(2ω_o t)] ω_o dt.

Substituting φ = ω_o t and using ω_o T_o = 2π:

P = (A²/2π) ∫_{0}^{2π} [1/2 − (1/2) cos(2φ)] dφ = (A²/2π) [φ/2 − (1/4) sin(2φ)]_{0}^{2π} = A²/2.

The same result is obtained for the signal x(t) = A cos(ω_o t).
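The A²/2 result can be confirmed numerically with a short MATLAB sketch (an added illustration, not from the notes); the amplitude and frequency below are arbitrary choices, and averaging |x(t)|² over exactly one period gives the power.

% Numerical check that the power of A*sin(wo*t) equals A^2/2
A  = 3;  fo = 5;  To = 1/fo;      % arbitrary amplitude and frequency
Ts = To/1000;
t  = 0:Ts:To-Ts;                  % exactly one period
x  = A*sin(2*pi*fo*t);

P = mean(abs(x).^2)               % ~4.5 = A^2/2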
1.2.5.2 Parseval’s Theorem

The power in periodic signals and the energy in non-periodic signals can equivalently be obtained from the frequency domain as follows:

1. For periodic signals: P = Σ_{k=−∞}^{∞} |X_k|², where X_k are the FS coefficients.
2. For non-periodic signals: E = ∫_{−∞}^{∞} |X(f)|² df, where X(f) is the FT of the signal.

The function |X_k|² vs. f (defined at the discrete frequencies f = k f_o, f_o being the fundamental frequency) is called the power spectrum of the signal. Hence, |X_k|² is called the power spectral density (PSD).

The function |X(f)|² vs. f is called the energy spectrum of the signal, and |X(f)|² is called the energy spectral density (ESD).
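As an added numerical illustration of Parseval's theorem for an energy signal (using the decaying signal of Fig.(1.2.14) as an example, not a prescription from the notes): the FFT samples scaled by Ts approximate X(f) on a grid of spacing 1/(N·Ts), so the two energy computations below agree.

% Parseval check for the energy signal x(t) = 7*exp(-t/2)*cos(3t), t >= 0
Ts = 1e-3;  t = 0:Ts:40;          % long enough for the signal to die out
x  = 7*exp(-t/2).*cos(3*t);
N  = numel(x);

E_time = sum(abs(x).^2)*Ts;       % E from the time domain

X  = fft(x)*Ts;                   % approximate samples of X(f)
df = 1/(N*Ts);                    % frequency-grid spacing
E_freq = sum(abs(X).^2)*df;       % E from the energy spectral density

[E_time, E_freq]                  % the two values agree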

1.2.5.3 Wiener-Khinchin Theorem

For a periodic signal x(t), its PSD is the FT of its autocorrelation:

R_x(τ) ↔ |X_k|²   (FT pair; the PSD).

For a non-periodic signal x(t), its ESD is the FT of its autocorrelation:

R_x(τ) ↔ |X(f)|²   (FT pair; the ESD).

A similar relation for random signals will be studied later.

1.3 Random Signals
1.3.1 Definition

A random signal is a signal whose values cannot be predicted or known a priori, but which follow a probability distribution.

To understand such signals, we need to study probability and statistics.

1.3.2 Overview of Probability and Statistics


1.3.2.1 Probability and Sample Space

Probability:

It is a mathematical function that describes the statistical regularity of the outcomes of a repeated situation or experiment.

Sample space:

It is the set of all possible outcomes of a repeating experiment.

Example 1: In a coin-tossing experiment, the outcomes are either "heads" or "tails", hence the sample space is S = {h, t}. If the tossing is repeated a large number of times, then we get approximately 50% heads and 50% tails. We say: p(h) = 0.5, p(t) = 0.5.

Note: if S = {e_k | k = 1, …, N}, then Σ_{k=1}^{N} p(e_k) = 1, i.e., the sum of the probabilities of all possible events (outcomes) should be 1 (100%).

Example 2: The die-tossing outcomes are S = {1-dot, 2-dots, 3-dots, 4-dots, 5-dots, 6-dots}. If a die is tossed a large number of times N, then the number of occurrences of each face is ≈ N/6, i.e., the probability of each outcome is 1/6.

Example 3: If we consider noise in signals, the outcomes are S = {changes in amplitude}, and the probability of each change depends on the probability function (distribution) of the noise. In this case we have S = ℜ (the set of real numbers).

1.3.2.2 Random Variables

A random variable is a real-valued function whose domain is the set of all possible events (i.e., the sample space S) of a repeating experiment:

X : S → R, R ⊆ ℜ (R is a subset of ℜ).

Example: In a coin-tossing experiment, if we define X(h) = 1 and X(t) = −1, then X : {h, t} → {1, −1} is a random variable.

Notes:
1) Random variables can be discrete (as in the above example) or continuous (as the
amplitude of noise).
2) In case of noise, we can define the random variable as noise itself, i.e.,
X : R → R | X (r ) = r ∀r ∈ R .
1.3.2.3 Joint Probability

If we have A and B as two events (outcomes), then the joint probability of A and B ,
denoted by P( A ∩ B) , is the probability that A would be the outcome of experiment 1
and B the outcome of experiment 2.

Example: If n(t) is noise, and we define the events A = {n(t_1) > 0.1} and B = {n(t_2) > 0.5}, then

P(A ∩ B) = P{n(t_1) > 0.1 and n(t_2) > 0.5}.
1.3.2.4 Conditional Probability

It is the probability of an event A given that an event B has already occurred, defined as:

P(A | B) = P(A ∩ B) / P(B).
1.3.2.5 Independent Events

The events A and B are independent when P(A | B) = P(A). Hence, using the above formula: P(A ∩ B) = P(A) P(B).
Example 1: A box contains 20 cubes: 15 red and 5 blue. Two cubes are to be selected
randomly (without replacement). What is the probability that the 1st is red and the 2nd is
blue?

Sol. Let R = "1st cube is red" and B = "2nd cube is blue".

P(R) = 15/20 = 3/4;  P(B | R) = 5/19.

P(B ∩ R) = P(R) · P(B | R) = (3/4)(5/19) = 15/76.
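A quick Monte-Carlo check of this result (an added illustration, not part of the notes); randperm(20, 2) models drawing two cubes without replacement.

% Monte-Carlo check of P(1st red AND 2nd blue) = 15/76 ~ 0.197
box = [ones(1,15), zeros(1,5)];   % 1 = red cube, 0 = blue cube
Ntrials = 1e5;  hits = 0;
for k = 1:Ntrials
    idx = randperm(20, 2);        % draw two cubes without replacement
    hits = hits + (box(idx(1)) == 1 && box(idx(2)) == 0);
end
hits/Ntrials                      % ~0.197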
Example 2: Two coins are tossed. What is the probability that the first one is a head and
the second is a tail?
Sol. P(H ∩ T) = P(H) P(T) = (1/2)(1/2) = 1/4 (because they are independent events). If the order is not important, this probability becomes 2(1/4) = 1/2.

1.3.2.6 Probability Density Function (pdf)

The pdf of a random variable X, denoted P_X(x), is a non-negative function (with a total area of 1) that shows how the values of X are distributed after a large number of experiments (trials), i.e.,

P_X(x) ≥ 0,  ∫_{−∞}^{∞} P_X(x) dx = 1,

P(x_1 ≤ X ≤ x_2) = ∫_{x_1}^{x_2} P_X(x) dx.

Note that x, x_1 and x_2 are values attained by the random variable X.

1.3.2.7 Statistical Mean

The statistical mean (or expected value) of a random variable X is defined as

m_X = E(X) = ∫_{−∞}^{∞} x P_X(x) dx.

It represents the center around which the values of X are expected to lie.

1.3.2.8 The Second Moment

The second moment of a random variable X is defined as

m_X^(2) = E(X²) = ∫_{−∞}^{∞} x² P_X(x) dx.

It represents the mean-square value of X (the average of X²).
1.3.2.9 The Variance

The second central moment (or variance) of a random variable X is defined as

σ_X² = var(X) = E{(X − m_X)²} = ∫_{−∞}^{∞} (x − m_X)² P_X(x) dx.

The quantity σ_X = √var(X) is called the standard deviation of X. The variance indicates how far the values of X are spread around the mean. Hence, the variance gives a measure of the randomness of a random signal.

Note:
σ_X² = E{X² − 2 m_X X + m_X²} = E(X²) − 2 m_X E(X) + m_X² = E(X²) − m_X².
1.3.2.10 The Gaussian pdf

It is an important probability density function often encountered in applications. A random variable X is said to be Gaussian if its pdf is given by:

p(x) = (1/(σ√(2π))) e^{−(x − m)²/(2σ²)},

where m is the statistical mean and σ² is the variance. Plots of this pdf for different values of the mean and variance are shown in Fig.(1.3.1).

Fig.(1.3.1): Gaussian pdfs with different means and variances. Left: two zero-mean pdfs with σ = 3 and σ = 5. Right: two pdfs with σ = 3 and means m = 0 and m = 2.
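The following MATLAB sketch (an added illustration) draws samples from a Gaussian random variable and checks the mean, the variance, and one interval probability against the pdf formula above; the values m = 2 and σ = 3 are arbitrary, and the built-in integral function is used for the numerical integration.

% Empirical check of the Gaussian pdf, its mean, variance, and an interval probability
m = 2;  sigma = 3;                            % arbitrary mean and standard deviation
X = m + sigma*randn(1, 1e6);                  % 10^6 Gaussian samples

[mean(X), var(X)]                             % ~[2, 9] = [m, sigma^2]

x1 = 0;  x2 = 5;
p_emp = mean(X >= x1 & X <= x2)               % fraction of samples in [x1, x2]
p_pdf = integral(@(x) exp(-(x-m).^2/(2*sigma^2))/(sigma*sqrt(2*pi)), x1, x2)
                                              % both ~0.59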

1.3.3 Signals in Noise


1.3.3.1 Gaussian Noise

Noise encountered in electrical systems mostly has a Gaussian pdf with zero mean, m = 0, as follows:

p(n) = (1/(σ√(2π))) e^{−n²/(2σ²)}.

Note that the Gaussian noise power is E(n²) = E{(n − m_n)²} (since m_n = 0) = the noise variance = σ². (Prove!)

If there are two noise signals, n_1 and n_2, with variances such that σ_2² > σ_1², then pdf_2 has a wider spread around its mean than pdf_1 [see Fig.(1.3.1), left], and the second signal has more power than the first.

1.3.3.2 Signals in Gaussian Noise

If s(t) is a deterministic signal and n(t) is noise, then z(t) = s(t) + n(t) is a random signal. First we consider the case s(t) = a (a constant). If n(t) is Gaussian noise with zero mean and variance σ², then the random variable z is also Gaussian, with mean and variance given at any time t by:

m_z = E(z) = E(a + n) = E(a) + E(n) = a + 0 = a, and

var(z) = E{(z − m_z)²} = E{n²} = σ².

Hence, the pdf of the signal z(t) at any time t is given by:

p(z) = (1/(σ√(2π))) e^{−(z − a)²/(2σ²)}.

This result generalizes to any deterministic signal s(t). For example, if s(t) = sin(ω_o t), then m_z = s(t) = sin(ω_o t), while the variance is still σ².
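A small simulation (added here as an illustration, not from the notes): many realizations of z = s(t) + n at a single time instant should have sample mean ≈ s(t) and sample variance ≈ σ². The chosen signal, time instant, and σ are arbitrary.

% z = s(t) + n: at a fixed time t, z is Gaussian with mean s(t) and variance sigma^2
sigma = 2;  Nreal = 1e5;                  % noise std and number of realizations
t  = 1.3;  s = sin(2*pi*0.1*t);           % deterministic signal value at this instant
z  = s + sigma*randn(1, Nreal);           % realizations of z(t)

[mean(z), var(z), s, sigma^2]             % sample mean ~ s(t), sample variance ~ sigma^2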

1.3.3.3 Power Spectral Density of Random Signals

A random signal n(t) can be classified as a power signal; hence, like other power signals, it has a PSD (normally denoted by G_n(f)), which is defined in this case as follows:

G_n(f) = lim_{T→∞} (1/T) E|F{n(t) Π_T(t)}|² = lim_{T→∞} (1/T) E|N_T(f)|²,

where E denotes the statistical mean.

1.3.3.4 Stationary Random Signals


Signals whose statistics (i.e., mean & variance) do not change with time are called
stationary.

1.3.3.5 The Autocorrelation Function of Random Signals


The autocorrelation function of a random signal x(t ) is given by:

Rx (t1 , t 2 ) = E{x(t1 ) x(t 2 )} .


It gives an idea of how the signal values at two time instants are related.

1.3.3.6 Wide-Sense Stationary (WSS) Signals


A random signal is WSS if its mean and autocorrelation are time-invariant, i.e.,

E{x(t)} = m_x = constant, and

R_x(t_1, t_2) = R_x(t_1 − t_2) = R_x(τ), where τ = t_1 − t_2.

Every stationary signal is WSS, but the converse is not true.

1.3.3.7 Wiener-Khinchin Theorem (WKT) for Random Signals

If x(t) is a WSS random signal, then

R_x(τ) ↔ G_x(f)   (FT pair),

where G_x(f) is the PSD of the signal and R_x(τ) is its autocorrelation function as given in Section (1.3.3.5), i.e., E{x(t) x(t + τ)}.
1.3.3.8 White Noise

A common kind of noise encountered in nature is thermal noise, which has a constant PSD and Gaussian-distributed values. Theoretically, its PSD is modeled as constant over all frequencies (hence the name "white"); consequently, its autocorrelation function is a weighted delta function (by the WKT) [see Fig.(1.3.2)], which means that its samples (in the time domain) are uncorrelated with each other. Practically, noise is band-limited (PSD = (η/2) Π_{2B}(f), η being a constant). The use of η/2 rather than η indicates the use of a double-sided PSD (i.e., both positive and negative frequencies are considered).

Fig.(1.3.2): PSD of white noise, G_n(f) = η/2, and its autocorrelation function, R_n(τ) = (η/2)δ(τ) (related by the WKT).

1.3.3.9 Effect of an Ideal LPF on White Noise

When a random signal enters a system H(f), the output signal is also random, with an output PSD equal to the input PSD multiplied by the power transfer function of the system, |H(f)|². Now assume white noise n(t) with constant PSD enters an ideal LPF of bandwidth B as shown in Fig.(1.3.3). The output noise PSD is given by:

G_no(f) = |H(f)|² G_n(f) = (η/2) Π_{2B}(f).

Hence, using the WKT, the autocorrelation function of the output noise is given by:

R_no(τ) = F^{−1}{G_no(f)} = (η/2)[2B sinc(2Bτ)] = ηB sinc(2Bτ). [See Fig.(1.3.4)]

Therefore, the output noise samples are no longer uncorrelated, except when τ = k/(2B), k being a nonzero integer.

Fig.(1.3.3): Ideal LP filtering of white noise: input noise n(t) with PSD G_n(f) = η/2, filter H(f) with |H(f)| = 1 for |f| ≤ B, and output noise n_o(t) with PSD G_no(f) = (η/2)Π_{2B}(f).

Fig.(1.3.4): Autocorrelation functions for AWGN, R_n(τ) = (η/2)δ(τ), and for its low-pass filtered version, R_no(τ) = ηB sinc(2Bτ), which is zero at τ = k/(2B) for nonzero integers k.
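The sinc-shaped autocorrelation can be reproduced numerically. The sketch below (an added illustration, not from the notes) filters discrete white Gaussian noise with a brick-wall LPF built in the frequency domain; for unit-variance samples at rate fs, the double-sided PSD level is η/2 = σ²·Ts, so the theoretical curve is ηB sinc(2Bτ). xcorr and sinc are Signal Processing Toolbox functions, and the sampling rate, bandwidth, and record length are arbitrary choices.

% White noise through an ideal LPF: autocorrelation becomes eta*B*sinc(2*B*tau)
fs = 1000;  Ts = 1/fs;  N = 2^18;         % sampling rate and record length
B  = 50;                                  % LPF bandwidth, Hz
n  = randn(1, N);                         % white Gaussian noise, sigma^2 = 1
eta = 2*Ts;                               % double-sided PSD level: eta/2 = sigma^2*Ts

f = (0:N-1)*fs/N;  f(f >= fs/2) = f(f >= fs/2) - fs;   % FFT frequency grid
H = double(abs(f) <= B);                  % brick-wall (ideal) LPF
no = real(ifft(fft(n).*H));               % band-limited output noise

[R, lags] = xcorr(no, 200, 'unbiased');   % estimated autocorrelation
tau = lags*Ts;
plot(tau, R, tau, eta*B*sinc(2*B*tau), '--')    % estimate vs. eta*B*sinc(2*B*tau)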

1.4 Applications of Signal Analysis

1.4.1 Signal Detection in Noise

In Section (1.2.4) we defined the autocorrelation function of deterministic periodic and non-periodic signals. The autocorrelation function of random signals was defined in Subsection (1.3.3.5), but that definition requires knowledge of the pdf of the signal. In applications, most random signals (e.g., noise) are ergodic, i.e., their statistical averages equal their time averages. Hence, the auto-correlation function of an ergodic random signal x(t) is defined by a relation similar to that of periodic signals as follows:

R_x(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(λ) x(τ + λ) dλ,

while the cross-correlation between two random signals x(t) and y(t) is defined as follows:

R_xy(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(λ) y(τ + λ) dλ = R_yx(−τ).
Natural noise is uncorrelated with deterministic signals. It is even (nearly) uncorrelated with itself: its autocorrelation function is almost a delta function (a spike at the origin), as we saw in Section (1.3.3.8). This is the basic theory behind signal detection in noise. If a deterministic signal x(t) is transmitted and a corrupted signal y(t) = x(t) + n(t) is received, we can decide whether or not there is a message inside the received random signal. We correlate y(t) with itself, where its autocorrelation is given by:

R_y(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} [x(λ) + n(λ)][x(τ + λ) + n(τ + λ)] dλ
       = R_x(τ) + R_n(τ) + 2R_xn(τ).

Since R_n(τ) is only a spike at the origin and R_xn(τ) is approximately zero, it is R_x(τ) alone that gives the broad symmetric shape of R_y(τ), from which we can decide that there is a message inside the noise.

Example: Consider the sinusoidal signal x(t) = sin(ω_o t), with frequency f_o = 0.1 Hz. The power of this signal is 1²/2 = 0.5 W. The signal and the absolute value of its autocorrelation, |R_x(τ)|, are shown in Fig.(1.4.1). A random noise signal n(t) with noise power 5 dBW (= 3.1623 W) is added to corrupt the sinusoid, giving a noisy signal y(t) = x(t) + n(t). Hence, the signal-to-noise ratio is SNR = 0.1581, or −8.0103 dB, which represents strong noise interference. However, when we correlate the noisy signal with itself, we can still distinguish a symmetric non-random pattern with an absolute maximum around the origin. This indicates that there is a deterministic signal embedded in the noise.

This example can be simulated in MATLAB using the function xcorr(x,y)*Ts, where Ts is the time increment.
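A possible MATLAB realization of this simulation (the exact script is not given in the notes, so the following is a sketch using the parameters of the example and the xcorr scaling mentioned above):

% Detecting a sinusoid buried in strong Gaussian noise via its autocorrelation
fo = 0.1;  Ts = 0.01;  t = 0:Ts:100;      % ten periods of the 0.1 Hz sinusoid
x  = sin(2*pi*fo*t);                      % signal power = 0.5 W
n  = sqrt(3.1623)*randn(size(t));         % noise power ~3.16 W (5 dBW)
y  = x + n;                               % SNR ~ -8 dB

[Ry, lags] = xcorr(y, y);   Ry = Ry*Ts;   % autocorrelation of the noisy signal
tau = lags*Ts;

plot(tau, abs(Ry))                        % broad symmetric pattern reveals the signal
xlabel('\tau (sec)'), ylabel('|R_y(\tau)|')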

Fig.(1.4.1): Signal detection in noise using a correlator. Top row: the signal x(t), the noise n(t), and the noisy signal y(t) = x(t) + n(t). Bottom rows: |R_x(τ)|, |R_n(τ)|, |R_xn(τ)|, and |R_y(τ)| versus the delay τ.

1.4.2 The Matched Filter (MF)


The matched filter is a linear filter that gives optimal detection for a given symbol s(t) of finite duration T. The impulse response h(t) of this filter depends on the symbol, hence the name matched filter. The impulse response of the matched filter is given by:

h(t) = s(T − t).

That is, the impulse response of a filter matched to a symbol is a time-reversed (and delayed by T) version of the symbol itself. See Fig.(1.4.2).

Fig.(1.4.2): Matched filter impulse response with the symbol waveform: the symbol s(t) on 0 ≤ t ≤ T, its time reversal s(−t), and h(t) = s(T − t).

The Matched Filter is a Correlator

After convolution, the output of the matched filter for a received signal r(t) is:

y(t) = ∫_{0}^{T} r(λ) h(t − λ) dλ = ∫_{0}^{T} r(λ) s(T − t + λ) dλ = R_rs(T − t),

where R_rs(τ) is the cross-correlation between the symbol and the received version of it. Hence, the matched filter is essentially a correlator. If r = s (the noise-free condition), R_rs(τ) will have a symmetric shape with a maximum at τ = 0, i.e., at T − t = 0, or t = T.
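A minimal sketch (added illustration, not from the notes) of the matched filter acting as a correlator, using a rectangular pulse as an example symbol; in the noise-free case the output peaks at t = T, as stated above.

% Matched filter h(t) = s(T - t): the output peaks at t = T (noise-free case)
Ts = 1e-3;  T = 1;  t = 0:Ts:T;
s  = ones(size(t));                       % example symbol: rectangular pulse on [0, T]
h  = fliplr(s);                           % h(t) = s(T - t)

r  = s;                                   % received symbol (no noise)
y  = conv(r, h)*Ts;                       % filter output y(t) = Rrs(T - t)
ty = (0:numel(y)-1)*Ts;

[~, k] = max(y);  ty(k)                   % maximum occurs at t = T = 1 s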

The Optimal Receiver

If there is a finite set of M symbols to be transmitted, {s_i(t) | i = 1, 2, 3, …, M}, and we want optimal reception of these symbols, we should use a bank of M filters, each matched to one symbol only [see Fig.(1.4.3)]. If the i-th symbol s_i(t) was transmitted, then the i-th matched filter will have a maximum at the time of optimal reception, while the other filters have small values. Hence, the receiver will decide that s_i(t) was transmitted. In binary communication systems (such as computer networks), we need only two filters, matched to the two symbols that represent logic "0" and logic "1".
Fig.(1.4.3): The optimal receiver for a finite set of symbols. The received signal r(t) is fed to a bank of M matched filters; each output y_i(t) is sampled at t = T to give Y_i = y_i(T), and the comparator decides that s_k was sent, where k = arg max_i {Y_i}.
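A compact sketch of the receiver of Fig.(1.4.3) (an added illustration; the three sinusoidal symbols and the noise level are arbitrary choices). Sampling each matched-filter output at t = T is equivalent to correlating r(t) with each symbol over [0, T], which is what the matrix product below computes.

% Bank of matched filters: decide which of M = 3 symbols was transmitted
Ts = 1e-3;  T = 1;  t = 0:Ts:T-Ts;
S  = [sin(2*pi*1*t);                      % s1(t)
      sin(2*pi*2*t);                      % s2(t)
      sin(2*pi*3*t)];                     % s3(t)

i_tx = 2;                                 % index of the transmitted symbol
r = S(i_tx,:) + 0.5*randn(size(t));       % received symbol in noise

Y = S*r.'*Ts;                             % Yi = integral of r(t)*si(t) dt, i = 1..M
[~, k] = max(Y);  k                       % decision: k = 2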

1.4.3 Range Estimation by Radar:
A narrow pulse s (t ) is transmitted by the radar station towards the airplane. The pulse will
hit the plane and reflect back to the radar, where a matched filter (correlator) is utilized to
estimate the distance between the plane and the radar. Explain how the correlator is used
for range estimation.

[Figure: the transmitted pulse s(t) of width a and amplitude b, the delayed echo r(t) = s(t − t_o), the correlator with impulse response h(t) = s(−t), and its output y(t) ≈ R_s(t − t_o), which peaks at t = t_o.]

Sol. The received signal r(t) is a delayed version of the transmitted signal s(t), which is also corrupted by noise. It is given by:

r(t) = s(t − t_0) + n(t).

The correlator is a matched filter with impulse response given by:

h(t) = s(−t)  [a time-reversed version of s(t)].

y(t) = r(t) ∗ h(t) = ∫_{−∞}^{∞} r(λ) h(t − λ) dλ = ∫_{−∞}^{∞} r(λ) s(λ − t) dλ = ∫_{−∞}^{∞} [s(λ − t_0) + n(λ)] s(λ − t) dλ.

Substituting v = λ − t:

y(t) = ∫_{−∞}^{∞} s(v + t − t_0) s(v) dv + ∫_{−∞}^{∞} n(v + t) s(v) dv = R_s(t − t_0) + R_ns(t).

Note: since h(t) = s(−t), the system convolution is essentially a correlation.

Since the correlation is a measure of similarity, R_ns(t) ≈ 0.

From Tutorial 21 we know that the autocorrelation R_s(t − t_0) of the square pulse s(t) is a triangular function with a maximum at t − t_0 = 0, or t = t_0. Hence, if the pulse width a is very small, we observe essentially a single narrow peak at the time instant t = t_0 at the receiver, from which we can determine the time delay t_0.

Now 2d = c t_0 (where c is the velocity of signal propagation, approximately 3 × 10^8 m/s for electromagnetic waves).

Hence, the distance to the airplane is d = c t_0 / 2.
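The range-estimation procedure can be sketched numerically as follows (an added illustration; the pulse width, delay, and noise level are arbitrary choices, and xcorr is used to locate the peak of R_s(t − t_0)):

% Range estimation: locate the peak of the correlator output to estimate t0
c  = 3e8;  Ts = 1e-8;                     % propagation speed and sample period
a  = 2e-7;  t0 = 6e-6;                    % pulse width and true round-trip delay
t  = 0:Ts:1e-5;

s = double(t <= a);                       % transmitted pulse s(t)
r = double(t >= t0 & t <= t0 + a) + 0.1*randn(size(t));   % delayed, noisy echo

[y, lags] = xcorr(r, s);                  % correlator output ~ Rs(t - t0)
[~, k] = max(y);
t0_hat = lags(k)*Ts                       % ~6e-6 s
d_hat  = c*t0_hat/2                       % estimated range ~900 m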

