Lecture 12
Expectation
Reza Abdolmaleki
Product Moments
Definition. The $r$-th and $s$-th product moment about the origin of the random variables $X$ and $Y$, denoted by $\mu'_{r,s}$, is the expected value of $X^r Y^s$; symbolically,
$$\mu'_{r,s} = E(X^r Y^s) = \sum_x \sum_y x^r y^s \, f(x,y)$$
for $r = 0, 1, 2, \ldots$ and $s = 0, 1, 2, \ldots$ when $X$ and $Y$ are discrete, and
$$\mu'_{r,s} = E(X^r Y^s) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^r y^s \, f(x,y)\, dx\, dy$$
when $X$ and $Y$ are continuous. In the discrete case, the double summation extends over the entire joint range of the two random variables. Note that $\mu'_{1,0} = E(X)$, which we denote here by $\mu_X$, and that $\mu'_{0,1} = E(Y)$, which we denote here by $\mu_Y$.
Analogous to the previous definition, let us now state the following definition of product moments about the respective means.
Definition. The $r$-th and $s$-th product moment about the means of the random variables $X$ and $Y$, denoted by $\mu_{r,s}$, is the expected value of $(X - \mu_X)^r (Y - \mu_Y)^s$; symbolically,
$$\mu_{r,s} = E\big[(X - \mu_X)^r (Y - \mu_Y)^s\big] = \sum_x \sum_y (x - \mu_X)^r (y - \mu_Y)^s \, f(x,y)$$
when $X$ and $Y$ are discrete, and
$$\mu_{r,s} = E\big[(X - \mu_X)^r (Y - \mu_Y)^s\big] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x - \mu_X)^r (y - \mu_Y)^s \, f(x,y)\, dx\, dy$$
when $X$ and $Y$ are continuous.
In statistics, $\mu_{1,1}$ is of special importance because it is indicative of the relationship, if any, between the values of $X$ and $Y$; thus, it is given a special symbol and a special name.
Definition. $\mu_{1,1}$ is called the covariance of $X$ and $Y$, and it is denoted by $\sigma_{XY}$, $\mathrm{cov}(X, Y)$, or $C(X, Y)$.
Observe that if there is a high probability that large values of $X$ will go with large values of $Y$ and small values of $X$ with small values of $Y$, the covariance will be positive; if there is a high probability that large values of $X$ will go with small values of $Y$, and vice versa, the covariance will be negative. It is in this sense that the covariance measures the relationship, or association, between the values of $X$ and $Y$.
Let us now prove the following result, which is useful in actually determining covariances.
Theorem 1. $\sigma_{XY} = \mu'_{1,1} - \mu_X \mu_Y = E(XY) - E(X)\,E(Y)$.
Proof.
$$\sigma_{XY} = E\big[(X - \mu_X)(Y - \mu_Y)\big] = E(XY - X\mu_Y - Y\mu_X + \mu_X\mu_Y) = E(XY) - \mu_Y E(X) - \mu_X E(Y) + \mu_X\mu_Y = E(XY) - \mu_X\mu_Y = \mu'_{1,1} - \mu_X\mu_Y.$$
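The theorem is easy to check numerically. The following Python sketch is an addition, not part of the lecture; the small joint distribution in it is made up purely for illustration. It computes the covariance both from the definition and from the shortcut formula of Theorem 1 and obtains the same value.

```python
from fractions import Fraction

# Hypothetical joint pmf of (X, Y), chosen only to make the check easy to follow.
f = {(0, 0): Fraction(1, 8), (0, 1): Fraction(3, 8),
     (1, 0): Fraction(3, 8), (1, 1): Fraction(1, 8)}

mu_x = sum(x * p for (x, y), p in f.items())
mu_y = sum(y * p for (x, y), p in f.items())

lhs = sum((x - mu_x) * (y - mu_y) * p for (x, y), p in f.items())  # definition of covariance
rhs = sum(x * y * p for (x, y), p in f.items()) - mu_x * mu_y      # shortcut from Theorem 1
print(lhs, rhs)   # both -1/8
```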
Example 1. The joint and marginal probabilities of $X$ and $Y$, the numbers of aspirin and sedative caplets among two caplets drawn at random from a bottle containing three aspirin, two sedative, and four laxative caplets, are recorded as follows:

f(x, y)        y = 0    y = 1    y = 2    g(x)
x = 0           6/36     8/36     1/36   15/36
x = 1          12/36     6/36      0     18/36
x = 2           3/36      0        0      3/36
h(y)           21/36    14/36     1/36      1

These probabilities follow from $f(x,y) = \binom{3}{x}\binom{2}{y}\binom{4}{2-x-y}\big/\binom{9}{2}$. Using the marginal totals, we get
$$\mu_X = E(X) = 0\cdot\tfrac{15}{36} + 1\cdot\tfrac{18}{36} + 2\cdot\tfrac{3}{36} = \tfrac{2}{3} \qquad \text{and} \qquad \mu_Y = E(Y) = 0\cdot\tfrac{21}{36} + 1\cdot\tfrac{14}{36} + 2\cdot\tfrac{1}{36} = \tfrac{4}{9},$$
and, since the only nonzero product $xy$ occurs at $x = y = 1$,
$$\mu'_{1,1} = E(XY) = 1\cdot 1\cdot \tfrac{6}{36} = \tfrac{1}{6}.$$
Therefore, by Theorem 1,
$$\sigma_{XY} = \mu'_{1,1} - \mu_X\mu_Y = \tfrac{1}{6} - \tfrac{2}{3}\cdot\tfrac{4}{9} = -\tfrac{7}{54}.$$
The negative result suggests that the more aspirin caplets we get the fewer sedative caplets we will get, and vice versa, and this, of course, makes sense.
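As a quick check of this example (an addition, not part of the lecture), one can enumerate all 36 equally likely pairs of caplets and compute the covariance exactly with fractions:

```python
from itertools import combinations
from fractions import Fraction

# Bottle of Example 1: 3 aspirin (A), 2 sedative (S), 4 laxative (L); draw 2 caplets.
bottle = ["A"] * 3 + ["S"] * 2 + ["L"] * 4
pairs = list(combinations(range(len(bottle)), 2))   # 36 equally likely draws

E_X = E_Y = E_XY = Fraction(0)
for pair in pairs:
    x = sum(bottle[i] == "A" for i in pair)   # number of aspirin caplets drawn
    y = sum(bottle[i] == "S" for i in pair)   # number of sedative caplets drawn
    p = Fraction(1, len(pairs))
    E_X += x * p
    E_Y += y * p
    E_XY += x * y * p

print(E_X, E_Y, E_XY)     # 2/3 4/9 1/6
print(E_XY - E_X * E_Y)   # -7/54, matching the value found above
```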
Example 2. Find the covariance of the random variables $X$ and $Y$ whose joint probability density is given by $f(x, y)$.
Solution.
Evaluating the necessary integrals, we get
$$\mu_X = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x\,f(x,y)\,dx\,dy \qquad \text{and} \qquad \mu_Y = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} y\,f(x,y)\,dx\,dy,$$
and
$$\mu'_{1,1} = E(XY) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\,f(x,y)\,dx\,dy.$$
Thus, by Theorem 1, $\sigma_{XY} = \mu'_{1,1} - \mu_X\mu_Y$.
As far as the relationship between $X$ and $Y$ is concerned, observe that if $X$ and $Y$ are independent, their covariance is zero; symbolically, we have the following theorem.
Theorem. If $X$ and $Y$ are independent, then $E(XY) = E(X)\,E(Y)$ and $\sigma_{XY} = 0$.
Proof. For the continuous case (the discrete case is analogous): since $X$ and $Y$ are independent, we can write $f(x,y) = g(x)\,h(y)$, where $g(x)$ and $h(y)$ are the values of the marginal distributions of $X$ and $Y$, and we get
$$E(XY) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\,g(x)\,h(y)\,dx\,dy = \left[\int_{-\infty}^{\infty} x\,g(x)\,dx\right]\left[\int_{-\infty}^{\infty} y\,h(y)\,dy\right] = E(X)\,E(Y).$$
Hence,
$$\sigma_{XY} = \mu'_{1,1} - \mu_X\mu_Y = E(X)\,E(Y) - \mu_X\mu_Y = 0.$$
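A simulation can illustrate (though of course not prove) this theorem. In the sketch below, which is an addition to the lecture, the two distributions are arbitrary choices of mine; because the samples are generated independently, the sample covariance comes out near zero.

```python
import numpy as np

# Independent samples -> sample covariance close to zero (up to sampling error).
rng = np.random.default_rng(0)
x = rng.uniform(size=1_000_000)       # X ~ Uniform(0, 1)
y = rng.exponential(size=1_000_000)   # Y ~ Exponential(1), generated independently of X
print(np.cov(x, y)[0, 1])             # approximately 0
```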
It is of interest to note that the independence of two random variables implies a zero covariance, but a
zero covariance does not necessarily imply their independence. This is illustrated by the following
example:
Example 3. For the two random variables $X$ and $Y$ whose joint probability distribution is given, show that their covariance is zero even though the two random variables are not independent.
Solution.
Using the probabilities shown in the margins, we compute $\mu_X$, $\mu_Y$, and $\mu'_{1,1} = E(XY)$, and we find that $\mu'_{1,1} = \mu_X\mu_Y$. Therefore,
$$\sigma_{XY} = \mu'_{1,1} - \mu_X\mu_Y = 0:$$
the covariance is zero, but the two random variables are not independent, since $f(x,y) \neq g(x)\,h(y)$ for at least one pair of values $x$ and $y$.
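A standard counterexample of the same kind (my own choice of distribution, not the one used in the lecture) takes $X$ uniform on $\{-1, 0, 1\}$ and $Y = X^2$: the covariance is zero even though $Y$ is completely determined by $X$.

```python
from fractions import Fraction

# X uniform on {-1, 0, 1}, Y = X**2.  E(XY) = E(X**3) = 0 = E(X)E(Y), so cov(X, Y) = 0,
# yet Y is a function of X, so the two variables are clearly dependent.
support = [-1, 0, 1]
p = Fraction(1, 3)

E_X  = sum(x * p for x in support)          # 0
E_Y  = sum(x**2 * p for x in support)       # 2/3
E_XY = sum(x * x**2 * p for x in support)   # 0
print(E_XY - E_X * E_Y)                     # 0  -> zero covariance

# Dependence: P(Y = 0 | X = 0) = 1, while P(Y = 0) = 1/3.
```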
Product moments can also be defined for the case where there are more than two random variables. Here let us merely state the important result, in the following theorem, which is a generalization of the previous theorem.
Theorem 4. If $X_1, X_2, \ldots, X_n$ are random variables and
$$Y = \sum_{i=1}^{n} a_i X_i,$$
where $a_1, a_2, \ldots, a_n$ are constants, then
$$E(Y) = \sum_{i=1}^{n} a_i\,E(X_i)$$
and
$$\mathrm{var}(Y) = \sum_{i=1}^{n} a_i^2\,\mathrm{var}(X_i) + 2\sum\sum_{i<j} a_i a_j\,\mathrm{cov}(X_i, X_j),$$
where the double summation extends over all values of $i$ and $j$, from 1 to $n$, for which $i < j$.
Proof.
From Theorem 5 of Lecture 10 with $g_i(X_1, X_2, \ldots, X_n) = X_i$ for $i = 1, 2, \ldots, n$, it follows immediately that
$$E(Y) = E\Big(\sum_{i=1}^{n} a_i X_i\Big) = \sum_{i=1}^{n} a_i\,E(X_i),$$
and this proves the first part of the theorem. To obtain the expression for the variance of $Y$, let us write $\mu_i$ for $E(X_i)$, so that we get
$$\mathrm{var}(Y) = E\big[(Y - E(Y))^2\big] = E\Big[\Big(\sum_{i=1}^{n} a_i X_i - \sum_{i=1}^{n} a_i\mu_i\Big)^{\!2}\Big] = E\Big[\Big(\sum_{i=1}^{n} a_i (X_i - \mu_i)\Big)^{\!2}\Big].$$
Then, expanding by means of the multinomial theorem, according to which $(a+b+c+d)^2$, for example, equals $a^2 + b^2 + c^2 + d^2 + 2ab + 2ac + 2ad + 2bc + 2bd + 2cd$, and again referring to Theorem 5 of Lecture 10, we get
$$\mathrm{var}(Y) = \sum_{i=1}^{n} a_i^2\,E\big[(X_i - \mu_i)^2\big] + 2\sum\sum_{i<j} a_i a_j\,E\big[(X_i - \mu_i)(X_j - \mu_j)\big] = \sum_{i=1}^{n} a_i^2\,\mathrm{var}(X_i) + 2\sum\sum_{i<j} a_i a_j\,\mathrm{cov}(X_i, X_j).$$
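Theorem 4 can also be checked numerically: the variance formula is the quadratic form $a^{\mathsf T}\Sigma a$ in the covariance matrix $\Sigma$ of the $X_i$. In the sketch below (an addition to the lecture) the coefficients and the covariance matrix are arbitrary illustrative numbers of my own choosing.

```python
import numpy as np

# var(sum_i a_i X_i) = sum_i a_i^2 var(X_i) + 2 sum_{i<j} a_i a_j cov(X_i, X_j),
# which is the same number as the quadratic form a' Sigma a.
a = np.array([3.0, -2.0, 1.0])
Sigma = np.array([[ 1.0, -0.2, 0.3],
                  [-0.2,  2.0, 0.5],
                  [ 0.3,  0.5, 1.5]])   # any symmetric positive semi-definite matrix

theorem_4 = sum(a[i]**2 * Sigma[i, i] for i in range(3)) \
          + 2 * sum(a[i] * a[j] * Sigma[i, j] for i in range(3) for j in range(i + 1, 3))
print(theorem_4, a @ Sigma @ a)   # the two agree (up to float rounding)
```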
Example 4. If the random variables $X_1$, $X_2$, and $X_3$ have the means $\mu_1 = 2$, $\mu_2 = -3$, and $\mu_3 = 4$, together with given variances $\mathrm{var}(X_i)$ and covariances $\mathrm{cov}(X_i, X_j)$, find the mean and the variance of a linear combination $Y = a_1 X_1 + a_2 X_2 + a_3 X_3$.
Solution.
By Theorem 4, we get
$$E(Y) = a_1\mu_1 + a_2\mu_2 + a_3\mu_3 = 2a_1 - 3a_2 + 4a_3$$
and
$$\mathrm{var}(Y) = a_1^2\,\mathrm{var}(X_1) + a_2^2\,\mathrm{var}(X_2) + a_3^2\,\mathrm{var}(X_3) + 2a_1a_2\,\mathrm{cov}(X_1,X_2) + 2a_1a_3\,\mathrm{cov}(X_1,X_3) + 2a_2a_3\,\mathrm{cov}(X_2,X_3),$$
into which the given variances, covariances, and coefficients are substituted.
The following is another important theorem about linear combinations of random variables; it concerns the covariance of two linear combinations of $n$ random variables.
Theorem 5. If $X_1, X_2, \ldots, X_n$ are random variables and
$$Y_1 = \sum_{i=1}^{n} a_i X_i \qquad \text{and} \qquad Y_2 = \sum_{i=1}^{n} b_i X_i,$$
where $a_1, \ldots, a_n$ and $b_1, \ldots, b_n$ are constants, then
$$\mathrm{cov}(Y_1, Y_2) = \sum_{i=1}^{n} a_i b_i\,\mathrm{var}(X_i) + \sum\sum_{i<j} (a_i b_j + a_j b_i)\,\mathrm{cov}(X_i, X_j).$$
Example 5. If the random variables $X_1$, $X_2$, and $X_3$ have the means $\mu_1$, $\mu_2 = 5$, and $\mu_3$, together with given variances and covariances, find the covariance of two linear combinations $Y_1 = a_1 X_1 + a_2 X_2 + a_3 X_3$ and $Y_2 = b_1 X_1 + b_2 X_2 + b_3 X_3$.
Solution.
By Theorem 5, we get
$$\mathrm{cov}(Y_1, Y_2) = \sum_{i=1}^{3} a_i b_i\,\mathrm{var}(X_i) + \sum_{i<j} (a_i b_j + a_j b_i)\,\mathrm{cov}(X_i, X_j),$$
into which the given variances and covariances are substituted; note that the means are not needed for the covariance.
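Theorem 5 admits the same kind of numerical check, since the stated sum equals the bilinear form $a^{\mathsf T}\Sigma b$; the numbers below are again illustrative choices of mine, not values from the lecture.

```python
import numpy as np

# cov(a'X, b'X) = sum_i a_i b_i var(X_i) + sum_{i<j} (a_i b_j + a_j b_i) cov(X_i, X_j)
#              = a' Sigma b, with Sigma the covariance matrix of X.
a = np.array([2.0, -1.0, 3.0])
b = np.array([1.0, 4.0, -2.0])
Sigma = np.array([[ 1.0, -0.2, 0.3],
                  [-0.2,  2.0, 0.5],
                  [ 0.3,  0.5, 1.5]])

theorem_5 = sum(a[i] * b[i] * Sigma[i, i] for i in range(3)) \
          + sum((a[i] * b[j] + a[j] * b[i]) * Sigma[i, j]
                for i in range(3) for j in range(i + 1, 3))
print(theorem_5, a @ Sigma @ b)   # the two agree (up to float rounding)
```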
Conditional Expectations
Conditional probabilities are obtained by adding the values of conditional probability distributions, or
integrating the values of conditional probability densities. Conditional expectations of random variables
are likewise defined in terms of their conditional distributions.
Definition. If $X$ is a discrete random variable and $f(x\mid y)$ is the value of the conditional probability distribution of $X$ given $Y = y$ at $x$, the conditional expectation of $u(X)$ given $Y = y$ is
$$E\big[u(X)\mid y\big] = \sum_x u(x)\, f(x\mid y).$$
If we let $u(X) = X$ in the previous definition, we obtain the conditional mean of the random variable $X$ given $Y = y$, which we denote by
$$\mu_{X\mid y} = E(X\mid y).$$
Correspondingly, the conditional variance of $X$ given $Y = y$ is
$$\sigma^2_{X\mid y} = E\big[(X - \mu_{X\mid y})^2 \mid y\big] = E(X^2\mid y) - \mu_{X\mid y}^2,$$
where $E(X^2\mid y)$ is given by the previous definition with $u(X) = X^2$. The reader should not find it difficult to generalize the previous definition for conditional expectations involving more than two random variables.
To illustrate, consider again the caplet distribution of Example 1 and the conditional distribution of $X$ given $Y = 1$. Since $h(1) = \tfrac{14}{36}$, we have
$$f(0\mid 1) = \frac{8/36}{14/36} = \tfrac{4}{7} \qquad \text{and} \qquad f(1\mid 1) = \frac{6/36}{14/36} = \tfrac{3}{7},$$
so that the conditional mean is given by
$$\mu_{X\mid 1} = E(X\mid 1) = 0\cdot\tfrac{4}{7} + 1\cdot\tfrac{3}{7} = \tfrac{3}{7}.$$
Next we find
$$E(X^2\mid 1) = 0^2\cdot\tfrac{4}{7} + 1^2\cdot\tfrac{3}{7} = \tfrac{3}{7},$$
so that the conditional variance is $\sigma^2_{X\mid 1} = \tfrac{3}{7} - \big(\tfrac{3}{7}\big)^2 = \tfrac{12}{49}$.
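To make the definitions concrete in code (an addition, not part of the lecture), the sketch below reuses the joint distribution of Example 1 and computes the conditional mean and conditional variance of $X$ given $Y = 1$ exactly.

```python
from fractions import Fraction

# Joint pmf of Example 1 (X = aspirin count, Y = sedative count among 2 caplets drawn).
f = {(0, 0): Fraction(6, 36), (1, 0): Fraction(12, 36), (2, 0): Fraction(3, 36),
     (0, 1): Fraction(8, 36), (1, 1): Fraction(6, 36),  (0, 2): Fraction(1, 36)}

def conditional_mean_var(y):
    h_y = sum(p for (x, yy), p in f.items() if yy == y)         # marginal h(y)
    cond = {x: p / h_y for (x, yy), p in f.items() if yy == y}  # conditional pmf f(x | y)
    mean = sum(x * p for x, p in cond.items())                  # E(X | y)
    second = sum(x**2 * p for x, p in cond.items())             # E(X^2 | y)
    return mean, second - mean**2                               # conditional mean, variance

m, v = conditional_mean_var(1)
print(m, v)   # 3/7 12/49, matching the values computed above
```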