ECE438 - Laboratory 7:
Discrete-Time Random Processes (Week 2)
October 6, 2010
1 Bivariate Distributions
In this section, we will study the concept of a bivariate distribution. We will see that bivariate
distributions characterize how two random variables are related to each other. We will also
see that correlation and covariance are two simple measures of the dependencies between
random variables, which can be very useful for analyzing both random variables and random
processes.
If the joint CDF is sufficiently “smooth”, we can define a joint probability density function,

f_{X,Y}(x, y) = \frac{\partial^2}{\partial x \, \partial y} F_{X,Y}(x, y).   (2)
Conversely, the joint probability density function may be used to calculate the joint CDF:

F_{X,Y}(x, y) = \int_{-\infty}^{y} \int_{-\infty}^{x} f_{X,Y}(s, t) \, ds \, dt.   (3)
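As a quick sanity check of equation (3), the following Python sketch (not part of the original lab) numerically approximates the double integral for a pair of independent Uniform(0,1) random variables, whose joint PDF is f(s, t) = 1 on the unit square and whose joint CDF is F(x, y) = xy:

```python
# Numerically approximate F(x, y) = ∫∫ f(s, t) ds dt for independent
# Uniform(0, 1) variables, where f(s, t) = 1 on the unit square.
def joint_cdf(x, y, steps=500):
    ds = x / steps
    dt = y / steps
    total = 0.0
    for i in range(steps):
        for j in range(steps):
            s = (i + 0.5) * ds      # midpoint rule in s
            t = (j + 0.5) * dt      # midpoint rule in t
            total += 1.0 * ds * dt  # f(s, t) = 1 on the unit square
    return total

print(joint_cdf(0.5, 0.8))  # should be very close to 0.5 * 0.8 = 0.4
```

Any separable joint PDF could be substituted for the constant integrand; the uniform case is used only because its CDF is easy to verify by hand.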
Questions or comments concerning this laboratory should be directed to Prof. Charles A. Bouman,
School of Electrical and Computer Engineering, Purdue University, West Lafayette IN 47907; (765) 494-
0340; bouman@ecn.purdue.edu
Purdue University: ECE438 - Digital Signal Processing with Applications 2
The random variables X and Y are said to be independent if and only if their joint CDF
(or PDF) is a separable function, which means

F_{X,Y}(x, y) = F_X(x) F_Y(y)   (4)

f_{X,Y}(x, y) = f_X(x) f_Y(y)   (5)

Informally, independence between random variables means that one random variable does
not tell you anything about the other. As a consequence of the definition, if X and Y are
independent, then the product of their expectations is the expectation of their product,
E[XY] = E[X] E[Y].
While the joint distribution contains all the information about X and Y , it can be very
complex and is often difficult to calculate. In many applications, a simple measure of the
dependencies of X and Y can be very useful. Three such measures are the correlation,
covariance, and the correlation coefficient.
• Correlation

E[XY] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x y \, f_{X,Y}(x, y) \, dx \, dy   (6)

• Covariance

E[(X - \mu_X)(Y - \mu_Y)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x - \mu_X)(y - \mu_Y) \, f_{X,Y}(x, y) \, dx \, dy   (7)

• Correlation coefficient

\rho_{XY} = \frac{E[(X - \mu_X)(Y - \mu_Y)]}{\sigma_X \sigma_Y} = \frac{E[XY] - \mu_X \mu_Y}{\sigma_X \sigma_Y}   (8)
If the correlation coefficient is 0, then X and Y are said to be uncorrelated. Notice that
independence implies uncorrelatedness; however, the converse is not true.
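To see concretely that uncorrelated does not imply independent, here is a short Python illustration (ours, not part of the lab): with X symmetric about zero, Y = X² is completely determined by X, yet the two are uncorrelated because E[X·X²] = E[X³] = 0.

```python
import random

random.seed(438)  # arbitrary seed of our choosing, for reproducibility

def corr_coeff(xs, ys):
    """Sample estimate of the correlation coefficient in equation (8)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

X = [random.gauss(0, 1) for _ in range(10000)]
Y = [x * x for x in X]  # fully dependent on X, yet uncorrelated with it

print(corr_coeff(X, Y))  # close to 0 despite total dependence
```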
Let X and Y be independent Gaussian random variables, each with zero mean and unit
variance. Consider the following four choices for a new random variable Z:

1. Z = Y

2. Z = (X + Y)/2

3. Z = (4X + Y)/5

4. Z = (99X + Y)/100
Notice that since Z is a linear combination of two Gaussian random variables, Z will also
be Gaussian.
Use Matlab to generate 1000 i.i.d. samples of X, denoted as X1 , X2 , . . . , X1000 . Next,
generate 1000 i.i.d. samples of Y , denoted as Y1 , Y2 , . . . , Y1000 . For each of the four choices
of Z, perform the following tasks:
1. Use equation (8) to analytically calculate the correlation coefficient ρXZ between X and
Z. Show all of your work. Remember that independence between X and Y implies that
E[XY ] = E[X]E[Y ]. Also remember that X and Y are zero-mean and unit variance.

2. Use your samples of X and Z to compute a numerical estimate ρ̂XZ of the correlation
coefficient.
3. Generate a scatter plot of the ordered pair of samples (Xi , Zi ). Do this by plotting
points (X1 , Z1 ), (X2 , Z2 ), . . . , (X1000 , Z1000 ). In order to plot points without connecting
them with lines, use the plot command with the ’.’ format.
plot(X,Z,’.’)
Use the command subplot(2,2,n) (n=1,2,3,4) to plot the four cases for Z in the same
figure. Be sure to label each plot using the title command.
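A Python analogue of this Matlab workflow (illustrative only; the variable names and the fixed seed are our own choices) might look like the following, which generates the samples and prints the numerical estimate ρ̂XZ for each of the four cases:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible (our choice)

N = 1000
X = [random.gauss(0, 1) for _ in range(N)]
Y = [random.gauss(0, 1) for _ in range(N)]

# The four choices of Z from the list above.
cases = {
    "Z = Y":             list(Y),
    "Z = (X + Y)/2":     [(x + y) / 2 for x, y in zip(X, Y)],
    "Z = (4X + Y)/5":    [(4 * x + y) / 5 for x, y in zip(X, Y)],
    "Z = (99X + Y)/100": [(99 * x + y) / 100 for x, y in zip(X, Y)],
}

def rho_hat(a, b):
    """Numerical estimate of equation (8) from paired samples."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n
    sa = (sum((u - ma) ** 2 for u in a) / n) ** 0.5
    sb = (sum((v - mb) ** 2 for v in b) / n) ** 0.5
    return cov / (sa * sb)

for label, Z in cases.items():
    print(f"{label}: rho_hat = {rho_hat(X, Z):+.3f}")
```

The scatter plots themselves would be produced exactly as the lab describes, with Matlab's `plot(X,Z,'.')` and `subplot(2,2,n)`.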
INLAB REPORT:
1. Hand in your derivations of the correlation coefficient ρXZ along with your numerical
estimates of the correlation coefficient ρ̂XZ .
3. Hand in your scatter plots of (Xi , Zi ) for the four cases. Note the theoretical correlation
coefficient ρXZ on each plot.
2.1 Background
A discrete-time random process Xn is simply a sequence of random variables. So for each n,
Xn is a random variable.
The autocorrelation is an important function for characterizing the behavior of random
processes. If X is a wide-sense stationary (WSS) random process, the autocorrelation is
defined by
r_XX(m) = E[X_n X_{n+m}],   m = ..., −1, 0, 1, ...   (9)
Note that for a WSS random process, the autocorrelation does not vary with n. Also, since
E[Xn Xn+m ] = E[Xn+m Xn ], the autocorrelation is an even function of the “lag” value m.
Intuitively, the autocorrelation determines how strong a relation there is between samples
separated by a lag value of m. For example, if X is a sequence of independent identically
distributed (i.i.d.) random variables each with zero mean and variance σ_X^2, then the
autocorrelation is given by

r_XX(m) = σ_X^2 δ(m).   (10)
We use the term white or white noise to describe this type of random process. More precisely,
a random process is called white if its values Xn and Xn+m are uncorrelated for every m ≠ 0.
Figure 1: A random process x(n) passed through an LTI filter H(e^{jω}) to produce the output y(n).
If we run a white random process Xn through an LTI filter as in Figure 1, the output
random variables Yn may become correlated. In fact, it can be shown that the output
autocorrelation r_YY(m) is related to the input autocorrelation r_XX(m) through the filter's
impulse response h(m):

r_YY(m) = h(m) ∗ h(−m) ∗ r_XX(m)   (11)
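Relation (11) can be checked numerically. The Python sketch below (our own example; it deliberately uses a different filter than the lab's, a two-tap moving average y(n) = 0.5x(n) + 0.5x(n−1), so as not to give away the exercise) drives the filter with unit-variance white noise and compares the sample autocorrelation of the output against the theoretical values r_YY(0) = 0.5, r_YY(±1) = 0.25, and r_YY(m) = 0 otherwise:

```python
import random

random.seed(1)  # our choice, for reproducibility

N = 200_000
x = [random.gauss(0, 1) for _ in range(N)]

# Two-tap moving-average filter: y(n) = 0.5*x(n) + 0.5*x(n-1)
y = [0.5 * x[n] + 0.5 * x[n - 1] for n in range(1, N)]

def sample_autocorr(s, m):
    """Average of s(n)*s(n+m): estimates r_SS(m) for a zero-mean WSS process."""
    return sum(s[n] * s[n + m] for n in range(len(s) - m)) / (len(s) - m)

# Theory from (11) with h = [0.5, 0.5] and r_XX(m) = δ(m):
# r_YY(0) = 0.5, r_YY(±1) = 0.25, r_YY(m) = 0 otherwise.
for m in range(4):
    print(f"r_YY({m}) ≈ {sample_autocorr(y, m):+.3f}")
```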
2.2 Experiment
Consider a white Gaussian random process Xn with mean 0 and variance 1 as input to the
following filter.
y(n) = x(n) − x(n − 1) + x(n − 2) (12)
Calculate the theoretical autocorrelation of Yn using (10) and (11). Show all of your work.
INLAB REPORT:
For the filter in equation (12),
2. Hand in the four scatter plots. Label each plot with the corresponding theoretical
correlation, using rY Y (m). What can you conclude about the output random process
from these plots?
3. Hand in your plots of r_YY(m) and r'_YY(m) versus m. Does equation (13) produce
a reasonable approximation of the true autocorrelation? For what value of m does
r_YY(m) reach its maximum? For what value of m does r'_YY(m) reach its maximum?
Similar to the definition of the sample autocorrelation introduced in the previous section, we
can define the sample cross-correlation for a pair of data sets. The sample cross-correlation
between two finite random sequences Xn and Yn is defined as

c'_{XY}(m) = \frac{1}{N - m} \sum_{n=0}^{N-m-1} X(n) Y(n + m),   0 ≤ m ≤ N − 1   (15)

c'_{XY}(m) = \frac{1}{N - |m|} \sum_{n=|m|}^{N-1} X(n) Y(n + m),   1 − N ≤ m < 0   (16)

where N is the number of samples in each sequence. Notice that the cross-correlation is not
an even function of m. Hence a two-sided definition is required.
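A direct Python transcription of (15) and (16) (ours, for illustration; the lab itself asks for a Matlab function) could read:

```python
def cross_corr(X, Y, m):
    """Sample cross-correlation c'_XY(m) per equations (15) and (16)."""
    N = len(X)
    assert len(Y) == N and -N < m < N
    if m >= 0:
        # Equation (15): average X(n)Y(n+m) over n = 0 .. N-m-1
        return sum(X[n] * Y[n + m] for n in range(N - m)) / (N - m)
    else:
        # Equation (16): average X(n)Y(n+m) over n = |m| .. N-1
        k = -m
        return sum(X[n] * Y[n + m] for n in range(k, N)) / (N - k)

# Small check: cross-correlating a sequence with a delayed copy of itself.
X = [1.0, 2.0, 3.0, 4.0]
Y = [0.0, 1.0, 2.0, 3.0]    # Y(n) = X(n - 1)
print(cross_corr(X, Y, 1))  # lag m = 1 lines X(n) up with Y(n+1) = X(n)
```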
Cross-correlation of signals is often used in applications of sonar and radar, for example to
estimate the distance to a target. In a basic radar set-up, a zero-mean signal X(n) is trans-
mitted, which then reflects off a target after traveling for D/2 seconds. The reflected signal
is received, amplified, and then digitized to form Y (n). If we summarize the attenuation and
amplification of the received signal by the constant α, then

Y(n) = α X(n − D) + W(n)   (17)
where W (n) is additive noise from the environment and receiver electronics.
In order to compute the distance to the target, we must estimate the delay D. We can do
this using the cross-correlation. The cross-correlation cXY can be calculated by substituting
(17) into (14):

c_XY(m) = E[X(n) Y(n + m)] = E[X(n) (α X(n + m − D) + W(n + m))] = α E[X(n) X(n + m − D)].

Here we have used the assumptions that X(n) and W(n+m) are uncorrelated and zero-mean.
By applying the definition of autocorrelation, we see that

c_XY(m) = α r_XX(m − D).   (18)
Because rXX (m − D) reaches its maximum when m = D, we can find the delay D by
searching for a peak in the cross correlation cXY (m). Usually the transmitted signal X(n)
is designed so that rXX (m) has a large peak at m = 0.
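The following Python sketch (our own construction; the delay D = 25, attenuation α = 0.5, noise level, and seed are all invented for illustration, not the lab's provided signals) simulates this set-up with a white transmitted signal and recovers D from the peak of the sample cross-correlation:

```python
import random

random.seed(7)  # our choice, for reproducibility

N, D, alpha = 5000, 25, 0.5
X = [random.gauss(0, 1) for _ in range(N)]    # transmitted white signal
W = [random.gauss(0, 0.5) for _ in range(N)]  # receiver/environment noise
# Received signal per equation (17), with X(n) = 0 before transmission starts.
Y = [alpha * (X[n - D] if n >= D else 0.0) + W[n] for n in range(N)]

def cross_corr(X, Y, m):
    """Sample cross-correlation for lag m >= 0, as in equation (15)."""
    N = len(X)
    return sum(X[n] * Y[n + m] for n in range(N - m)) / (N - m)

# The peak of c'_XY(m) should sit at m = D, since c_XY(m) = alpha * r_XX(m - D).
lags = range(0, 100)
est_D = max(lags, key=lambda m: cross_corr(X, Y, m))
print("estimated delay:", est_D)
```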
3.2 Experiment
Using (15) and (16), write a Matlab function C=CorR(X,Y,m) to compute the sample
cross-correlation between two discrete-time random processes, X and Y , for a single lag
value m.
To test your function, generate two length 1000 sequences of zero-mean Gaussian random
variables, denoted as Xn and Zn . Then compute the new sequence Yn = Xn + Zn . Use CorR
to calculate the sample cross-correlation between X and Y for lags −10 ≤ m ≤ 10. Plot
your cross-correlation function.
INLAB REPORT:
1. Submit your plot for the cross-correlation between X and Y . Label the m-axis with
the corresponding lag values.
INLAB REPORT:
1. Plot the transmitted signal and the received signal on a single figure using subplot.
Can you estimate the delay D by a visual inspection of the received signal?
2. Plot the sample autocorrelation of the transmitted signal, r'_XX(m) vs. m, for
−100 ≤ m ≤ 100.
3. Plot the sample cross-correlation of the transmitted signal and the received signal,
c'_XY(m) vs. m, for −100 ≤ m ≤ 100.
4. Determine the delay D from the sample correlation. How did you determine this?