
Mathematical Expectation
Reza Abdolmaleki

Probability + Probability Honors

Lecture 12
Product Moments
Definition. The r-th and s-th product moment about the origin of the random variables X and Y,
denoted by \mu'_{r,s}, is the expected value of X^r Y^s; symbolically,

\mu'_{r,s} = E(X^r Y^s) = \sum_x \sum_y x^r y^s f(x, y)

for r = 0, 1, 2, ... and s = 0, 1, 2, ... when X and Y are discrete, and

\mu'_{r,s} = E(X^r Y^s) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x^r y^s f(x, y) \, dx \, dy

when X and Y are continuous.

In the discrete case, the double summation extends over the entire joint range of the two random
variables. Note that \mu'_{1,0} = E(X), which we denote here by \mu_X, and that \mu'_{0,1} = E(Y), which we
denote here by \mu_Y.
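
To see the discrete formula in action, here is a short Python sketch (an addition to these notes, not part of the original lecture); the joint pmf used is a made-up illustration, and product_moment is a hypothetical helper name.

    from fractions import Fraction as F

    # A hypothetical joint pmf f(x, y), stored as {(x, y): probability}.
    # Any table of nonnegative values summing to 1 would do.
    joint = {(0, 0): F(1, 4), (0, 1): F(1, 4),
             (1, 0): F(1, 4), (1, 1): F(1, 4)}

    def product_moment(joint, r, s):
        """mu'_{r,s} = E(X^r Y^s) = sum of x^r * y^s * f(x, y) over the joint range."""
        return sum(x ** r * y ** s * p for (x, y), p in joint.items())

    # mu'_{1,0} = E(X), mu'_{0,1} = E(Y), and mu'_{1,1} = E(XY)
    print(product_moment(joint, 1, 0),
          product_moment(joint, 0, 1),
          product_moment(joint, 1, 1))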
Analogous to the previous definition, let us now state the following definition of product moments
about the respective means.

Definition. The r-th and s-th product moment about the means of the random variables X and Y,
denoted by \mu_{r,s}, is the expected value of (X - \mu_X)^r (Y - \mu_Y)^s; symbolically,

\mu_{r,s} = E[(X - \mu_X)^r (Y - \mu_Y)^s] = \sum_x \sum_y (x - \mu_X)^r (y - \mu_Y)^s f(x, y)

for r = 0, 1, 2, ... and s = 0, 1, 2, ... when X and Y are discrete, and

\mu_{r,s} = E[(X - \mu_X)^r (Y - \mu_Y)^s] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x - \mu_X)^r (y - \mu_Y)^s f(x, y) \, dx \, dy

when X and Y are continuous.

In statistics, \mu_{1,1} is of special importance because it is indicative of the relationship, if any, between
the values of X and Y; thus, it is given a special symbol and a special name.

Definition. \mu_{1,1} is called the covariance of X and Y, and it is denoted by \sigma_{XY}, cov(X, Y), or C(X, Y).

Observe that if there is a high probability that large values of X will go with large values of Y and small
values of X with small values of Y, the covariance will be positive; if there is a high probability that large
values of X will go with small values of Y and vice versa, the covariance will be negative. It is in this sense
that the covariance measures the relationship, or association, between the values of X and Y.

Let us now prove the following result, which is useful in actually determining covariances.

Theorem 1. \sigma_{XY} = \mu'_{1,1} - \mu_X \mu_Y

Proof.
Using the linearity of the expected value, we can write

\sigma_{XY} = E[(X - \mu_X)(Y - \mu_Y)]
            = E(XY - X \mu_Y - Y \mu_X + \mu_X \mu_Y)
            = E(XY) - \mu_Y E(X) - \mu_X E(Y) + \mu_X \mu_Y
            = \mu'_{1,1} - \mu_X \mu_Y.
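
As a quick sanity check of Theorem 1 (again an addition, not part of the lecture), the covariance can be computed both from its definition and from the shortcut formula for any small joint table; the pmf below is illustrative only.

    from fractions import Fraction as F

    joint = {(0, 0): F(1, 6), (0, 1): F(1, 3),   # illustrative joint pmf
             (1, 0): F(1, 3), (1, 1): F(1, 6)}

    E = lambda g: sum(g(x, y) * p for (x, y), p in joint.items())
    mu_x, mu_y = E(lambda x, y: x), E(lambda x, y: y)

    by_definition = E(lambda x, y: (x - mu_x) * (y - mu_y))   # E[(X - mu_X)(Y - mu_Y)]
    by_theorem_1  = E(lambda x, y: x * y) - mu_x * mu_y       # mu'_{1,1} - mu_X mu_Y
    print(by_definition, by_theorem_1)                        # both equal -1/12 here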
Example 1. The joint and marginal probabilities of X and Y, the numbers of aspirin and sedative caplets
among two caplets drawn at random from a bottle containing three aspirin, two sedative, and four laxative
caplets, are recorded as follows:

                     x
               0       1       2      h(y)
    y   0    6/36   12/36    3/36    21/36
        1    8/36    6/36            14/36
        2    1/36                     1/36
      g(x)  15/36   18/36    3/36

Find the covariance of X and Y.


Solution.
Referring to the joint probabilities given here, we get

\mu'_{1,1} = E(XY) = (1)(1)(6/36) = 1/6

(every other term vanishes because x = 0 or y = 0), and using the marginal probabilities, we get

\mu_X = E(X) = (0)(15/36) + (1)(18/36) + (2)(3/36) = 2/3

and

\mu_Y = E(Y) = (0)(21/36) + (1)(14/36) + (2)(1/36) = 4/9.

Therefore,

\sigma_{XY} = \mu'_{1,1} - \mu_X \mu_Y = 1/6 - (2/3)(4/9) = -7/54.

The negative result suggests that the more aspirin caplets we get, the fewer sedative caplets we will get,
and vice versa, and this, of course, makes sense.
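
The arithmetic above can be reproduced in a few lines of Python (a sketch added here, not part of the lecture), building the joint pmf of Example 1 directly from the counts of caplets:

    from fractions import Fraction as F
    from math import comb

    # Joint pmf of X (aspirin) and Y (sedative) among two caplets drawn at random
    # from 3 aspirin, 2 sedative, and 4 laxative caplets: C(9, 2) = 36 equally
    # likely pairs.
    joint = {(x, y): F(comb(3, x) * comb(2, y) * comb(4, 2 - x - y), comb(9, 2))
             for x in range(3) for y in range(3) if x + y <= 2}

    E = lambda g: sum(g(x, y) * p for (x, y), p in joint.items())
    mu_x, mu_y = E(lambda x, y: x), E(lambda x, y: y)
    cov = E(lambda x, y: x * y) - mu_x * mu_y

    print(mu_x, mu_y, cov)   # 2/3, 4/9, -7/54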
Example 2. Find the covariance of the random variables X and Y whose joint probability density is given by

f(x, y) = 2 for x > 0, y > 0, x + y < 1, and f(x, y) = 0 elsewhere.

Solution.
Evaluating the necessary integrals, we get

\mu_X = \int_0^1 \int_0^{1-x} 2x \, dy \, dx = 1/3,   \mu_Y = \int_0^1 \int_0^{1-x} 2y \, dy \, dx = 1/3,

and

\mu'_{1,1} = E(XY) = \int_0^1 \int_0^{1-x} 2xy \, dy \, dx = 1/12.

Thus,

\sigma_{XY} = \mu'_{1,1} - \mu_X \mu_Y = 1/12 - (1/3)(1/3) = -1/36.
As far as the relationship between X and Y is concerned, observe that if X and Y are independent, their
covariance is zero; symbolically, we have the following theorem:

Theorem 2. If X and Y are independent, then E(XY) = E(X) E(Y) and \sigma_{XY} = 0.

Proof.
For the discrete case we have, by definition,

E(XY) = \sum_x \sum_y xy f(x, y).

Since X and Y are independent, we can write f(x, y) = g(x) h(y), where g(x) and h(y) are the values of the
marginal distributions of X and Y, and we get

E(XY) = \sum_x \sum_y xy g(x) h(y) = \Big[ \sum_x x g(x) \Big] \Big[ \sum_y y h(y) \Big] = E(X) E(Y).

Hence,

\sigma_{XY} = \mu'_{1,1} - \mu_X \mu_Y = E(X) E(Y) - \mu_X \mu_Y = 0.
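
The following sketch (added here, not part of the lecture) illustrates Theorem 2: the joint pmf is built as a product of two arbitrary marginals, so X and Y are independent by construction, and the covariance comes out to zero.

    from fractions import Fraction as F

    g = {0: F(1, 4), 1: F(3, 4)}               # marginal of X (illustrative)
    h = {0: F(2, 5), 1: F(2, 5), 2: F(1, 5)}   # marginal of Y (illustrative)
    joint = {(x, y): g[x] * h[y] for x in g for y in h}   # f(x, y) = g(x) h(y)

    E = lambda fn: sum(fn(x, y) * p for (x, y), p in joint.items())
    print(E(lambda x, y: x * y))                              # E(XY)
    print(E(lambda x, y: x) * E(lambda x, y: y))              # E(X) E(Y): the same value
    print(E(lambda x, y: x * y)
          - E(lambda x, y: x) * E(lambda x, y: y))            # covariance: 0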
It is of interest to note that the independence of two random variables implies a zero covariance, but a
zero covariance does not necessarily imply their independence. This is illustrated by the following
example:

Example 3. If the joint probability distribution of X and Y is given by

show that their covariance is zero even though the two random variables are not independent.
Solution.
Using the joint probabilities, we get \mu'_{1,1} = E(XY), and using the marginal probabilities shown in the
margins of the table, we get \mu_X = E(X) and \mu_Y = E(Y). Therefore,

\sigma_{XY} = \mu'_{1,1} - \mu_X \mu_Y = 0;

the covariance is zero, but the two random variables are not independent, since f(x, y) \neq g(x) h(y) for at
least one pair of values x and y.
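
Since the joint table of Example 3 is not reproduced in this transcript, here is a different but standard construction with the same property, added as an illustration: take X uniform on {-1, 0, 1} and Y = X^2, so that Y is a function of X, yet cov(X, Y) = 0.

    from fractions import Fraction as F

    # X uniform on {-1, 0, 1}; Y = X^2 is completely determined by X.
    joint = {(x, x * x): F(1, 3) for x in (-1, 0, 1)}

    E = lambda fn: sum(fn(x, y) * p for (x, y), p in joint.items())
    cov = E(lambda x, y: x * y) - E(lambda x, y: x) * E(lambda x, y: y)
    print(cov)   # 0, because E(XY) = E(X^3) = 0 and E(X) = 0

    # Yet X and Y are not independent: f(0, 1) = 0 while g(0) h(1) = (1/3)(2/3).
    g0 = sum(p for (x, y), p in joint.items() if x == 0)   # P(X = 0) = 1/3
    h1 = sum(p for (x, y), p in joint.items() if y == 1)   # P(Y = 1) = 2/3
    print(joint.get((0, 1), F(0)), g0 * h1)                # 0 versus 2/9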
Product moments can also be defined for the case where there are more than two random
variables. Here let us merely state, in the following theorem, an important result that is a
generalization of the previous theorem:

Theorem 3. If X_1, X_2, ..., X_n are independent, then

E(X_1 X_2 \cdots X_n) = E(X_1) E(X_2) \cdots E(X_n).
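
A small sketch of Theorem 3 (an addition, not from the lecture): three pmfs are combined into a joint pmf by multiplication, which forces independence, and E(X_1 X_2 X_3) then factors into the product of the individual expectations. The pmfs are illustrative values only.

    from fractions import Fraction as F
    from itertools import product

    # Three hypothetical marginal pmfs; taking their product as the joint pmf
    # makes X1, X2, X3 independent by construction.
    pmfs = [{0: F(1, 2), 1: F(1, 2)},
            {1: F(1, 3), 2: F(2, 3)},
            {0: F(1, 4), 5: F(3, 4)}]

    # E(X1 X2 X3) computed from the joint pmf ...
    lhs = sum(x1 * x2 * x3 * p1 * p2 * p3
              for (x1, p1), (x2, p2), (x3, p3) in product(*(pmf.items() for pmf in pmfs)))

    # ... equals E(X1) E(X2) E(X3).
    rhs = F(1)
    for pmf in pmfs:
        rhs *= sum(x * p for x, p in pmf.items())

    print(lhs, rhs)   # identical values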

Moments of Linear Combinations of Random Variables
In this section we shall derive expressions for the mean and the variance of a linear combination of n
random variables and the covariance of two linear combinations of random variables. Applications of
these results will be important in sampling theory and problems of statistical inference.

Theorem 4. If X_1, X_2, ..., X_n are random variables and

Y = \sum_{i=1}^{n} a_i X_i,

where a_1, a_2, ..., a_n are constants, then

E(Y) = \sum_{i=1}^{n} a_i E(X_i)

and

var(Y) = \sum_{i=1}^{n} a_i^2 var(X_i) + 2 \sum\sum_{i<j} a_i a_j cov(X_i, X_j),

where the double summation extends over all values of i and j, from 1 to n, for which i < j.
Proof.
From Theorem 5 of Lecture 10 with g_i(X_1, X_2, ..., X_n) = X_i for i = 1, 2, ..., n, it follows immediately that

E(Y) = E\Big( \sum_{i=1}^{n} a_i X_i \Big) = \sum_{i=1}^{n} a_i E(X_i),

and this proves the first part of the theorem. To obtain the expression for the variance of Y, let us write \mu_i
for E(X_i), so that we get

var(Y) = E\{ [Y - E(Y)]^2 \} = E\Big\{ \Big[ \sum_{i=1}^{n} a_i X_i - \sum_{i=1}^{n} a_i \mu_i \Big]^2 \Big\} = E\Big\{ \Big[ \sum_{i=1}^{n} a_i (X_i - \mu_i) \Big]^2 \Big\}.

Then, expanding by means of the multinomial theorem, according to which (a + b + c + d)^2, for example, equals
a^2 + b^2 + c^2 + d^2 + 2ab + 2ac + 2ad + 2bc + 2bd + 2cd, and again referring to Theorem 5 of Lecture 10, we get

var(Y) = \sum_{i=1}^{n} a_i^2 E[(X_i - \mu_i)^2] + 2 \sum\sum_{i<j} a_i a_j E[(X_i - \mu_i)(X_j - \mu_j)]
       = \sum_{i=1}^{n} a_i^2 var(X_i) + 2 \sum\sum_{i<j} a_i a_j cov(X_i, X_j).

Note that we have tacitly made use of the fact that cov(X_i, X_j) = cov(X_j, X_i).
Since cov(X_i, X_j) = 0 when X_i and X_j are independent, we obtain the following corollary:

Corollary 1. If the random variables X_1, X_2, ..., X_n are independent and Y = \sum_{i=1}^{n} a_i X_i, then

var(Y) = \sum_{i=1}^{n} a_i^2 var(X_i).
Example 4. If the random variables X, Y, and Z have the means \mu_X = 2, \mu_Y = -3, and \mu_Z = 4, the
variances \sigma_X^2 = 1, \sigma_Y^2 = 5, and \sigma_Z^2 = 2, and the covariances cov(X, Y) = -2, cov(X, Z) = -1,
and cov(Y, Z) = 1, find the mean and the variance of W = 3X - Y + 2Z.

Solution.
By Theorem 4, we get

E(W) = 3 E(X) - E(Y) + 2 E(Z) = 3(2) - (-3) + 2(4) = 17

and

var(W) = 9 var(X) + var(Y) + 4 var(Z) - 6 cov(X, Y) + 12 cov(X, Z) - 4 cov(Y, Z)
       = 9(1) + 5 + 4(2) - 6(-2) + 12(-1) - 4(1)
       = 18.
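
The same calculation can be organized with a covariance matrix: in matrix form, Theorem 4 says E(W) = a^T \mu and var(W) = a^T \Sigma a. The numpy sketch below (an addition, not from the lecture) uses the values assumed in Example 4 above.

    import numpy as np

    mu = np.array([2.0, -3.0, 4.0])            # E(X), E(Y), E(Z)
    Sigma = np.array([[ 1.0, -2.0, -1.0],      # variances on the diagonal,
                      [-2.0,  5.0,  1.0],      # covariances off the diagonal
                      [-1.0,  1.0,  2.0]])
    a = np.array([3.0, -1.0, 2.0])             # coefficients of W = 3X - Y + 2Z

    print(a @ mu)           # E(W)   = 17.0
    print(a @ Sigma @ a)    # var(W) = 18.0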
The following is another important theorem about linear combinations of random variables; it concerns
the covariance of two linear combinations of n random variables.

Theorem 5. If X_1, X_2, ..., X_n are random variables and

Y_1 = \sum_{i=1}^{n} a_i X_i,    Y_2 = \sum_{i=1}^{n} b_i X_i,

where a_1, a_2, ..., a_n and b_1, b_2, ..., b_n are constants, then

cov(Y_1, Y_2) = \sum_{i=1}^{n} a_i b_i var(X_i) + \sum\sum_{i<j} (a_i b_j + a_j b_i) cov(X_i, X_j).

The proof of this theorem, which is very similar to that of Theorem 4, will be left as an exercise.

Since cov(X_i, X_j) = 0 when X_i and X_j are independent, we obtain the following corollary:

Corollary 2. If the random variables X_1, X_2, ..., X_n are independent, Y_1 = \sum_{i=1}^{n} a_i X_i, and
Y_2 = \sum_{i=1}^{n} b_i X_i, then

cov(Y_1, Y_2) = \sum_{i=1}^{n} a_i b_i var(X_i).
Example 5. If the random variables X, Y, and Z have the means \mu_X = 3, \mu_Y = 5, and \mu_Z = 2, the
variances \sigma_X^2 = 8, \sigma_Y^2 = 12, and \sigma_Z^2 = 18, and the covariances cov(X, Y) = 1, cov(X, Z) = -3,
and cov(Y, Z) = 2, find the covariance of U = X + 4Y + 2Z and V = 3X - Y - Z.

Solution.
By Theorem 5, we get

cov(U, V) = (1)(3) var(X) + (4)(-1) var(Y) + (2)(-1) var(Z)
            + [(1)(-1) + (4)(3)] cov(X, Y) + [(1)(-1) + (2)(3)] cov(X, Z) + [(4)(-1) + (2)(-1)] cov(Y, Z)
          = 3(8) - 4(12) - 2(18) + 11(1) + 5(-3) - 6(2)
          = -76.
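
Theorem 5 has the analogous matrix form cov(Y_1, Y_2) = a^T \Sigma b. The sketch below (an addition, not from the lecture) checks the result of Example 5 with the values assumed there.

    import numpy as np

    Sigma = np.array([[ 8.0,  1.0, -3.0],      # variances and covariances of X, Y, Z
                      [ 1.0, 12.0,  2.0],
                      [-3.0,  2.0, 18.0]])
    a = np.array([1.0,  4.0,  2.0])            # U = X + 4Y + 2Z
    b = np.array([3.0, -1.0, -1.0])            # V = 3X - Y - Z

    print(a @ Sigma @ b)    # cov(U, V) = -76.0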
Conditional Expectations
Conditional probabilities are obtained by adding the values of conditional probability distributions, or
integrating the values of conditional probability densities. Conditional expectations of random variables
are likewise defined in terms of their conditional distributions.

Definition. If X is a discrete random variable and f(x|y) is the value of the conditional probability
distribution of X given Y = y at x, the conditional expectation of u(X) given Y = y is

E[u(X) | y] = \sum_x u(x) f(x|y).

Correspondingly, if X is a continuous random variable and f(x|y) is the value of the conditional probability
density of X given Y = y at x, the conditional expectation of u(X) given Y = y is

E[u(X) | y] = \int_{-\infty}^{\infty} u(x) f(x|y) \, dx.

Similar expressions based on the conditional probability distribution or density of Y given X = x define the
conditional expectation of Y given X = x.

If we let u(X) = X in the previous definition, we obtain the conditional mean of the random variable X given
Y = y, which we denote by

\mu_{X|y} = E(X | y).

Correspondingly, the conditional variance of X given Y = y is

\sigma_{X|y}^2 = E[(X - \mu_{X|y})^2 | y] = E(X^2 | y) - \mu_{X|y}^2,

where E(X^2 | y) is given by the previous definition with u(X) = X^2. The reader should not find it difficult to
generalize the previous definition for conditional expectations involving more than two random variables.
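
For the discrete case, the definitions above translate directly into code. The Python sketch below (an addition, not from the lecture) computes a conditional pmf, a conditional mean, and a conditional variance from an illustrative joint table; conditional_pmf is a hypothetical helper name.

    from fractions import Fraction as F

    # A hypothetical joint pmf f(x, y) (illustrative values only).
    joint = {(0, 0): F(1, 8), (1, 0): F(3, 8),
             (0, 1): F(2, 8), (1, 1): F(2, 8)}

    def conditional_pmf(joint, y):
        """f(x | y) = f(x, y) / h(y), where h(y) is the marginal of Y at y."""
        h_y = sum(p for (x, yy), p in joint.items() if yy == y)
        return {x: p / h_y for (x, yy), p in joint.items() if yy == y}

    f_cond = conditional_pmf(joint, 1)
    cond_mean = sum(x * p for x, p in f_cond.items())                     # E(X | Y = 1)
    cond_var  = sum((x - cond_mean) ** 2 * p for x, p in f_cond.items())  # sigma^2_{X|1}
    print(cond_mean, cond_var)   # 1/2 and 1/4 for this table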

Example 6. If the joint probability density of X and Y is given by

f(x, y) = (2/3)(x + 2y) for 0 < x < 1, 0 < y < 1, and f(x, y) = 0 elsewhere,

find the conditional mean and the conditional variance of X given Y = 1/2.

Solution.
For these random variables the conditional density of X given Y = y is

f(x|y) = f(x, y) / h(y) = (2/3)(x + 2y) / [(2/3)(1/2 + 2y)] = (x + 2y) / (1/2 + 2y) for 0 < x < 1,

where h(y) = \int_0^1 (2/3)(x + 2y) \, dx = (2/3)(1/2 + 2y) is the marginal density of Y, so that

f(x|1/2) = (x + 1) / (3/2) = (2/3)(x + 1) for 0 < x < 1.

Thus, \mu_{X|1/2} is given by

E(X | 1/2) = \int_0^1 x (2/3)(x + 1) \, dx = 5/9.

Next we find

E(X^2 | 1/2) = \int_0^1 x^2 (2/3)(x + 1) \, dx = 7/18,

and it follows that

\sigma_{X|1/2}^2 = E(X^2 | 1/2) - \mu_{X|1/2}^2 = 7/18 - (5/9)^2 = 13/162.
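
As a numerical cross-check of Example 6 (an addition, not from the lecture, and relying on the density assumed above), a simple midpoint-rule integration over x recovers the conditional mean and variance:

    # Midpoint-rule check of E(X | Y = 1/2) and var(X | Y = 1/2) for the density
    # f(x, y) = (2/3)(x + 2y) on the unit square (the density assumed in Example 6).
    n = 100_000
    dx = 1.0 / n
    xs = [(i + 0.5) * dx for i in range(n)]

    f = lambda x, y: (2.0 / 3.0) * (x + 2.0 * y)
    y0 = 0.5
    h_y0 = sum(f(x, y0) for x in xs) * dx          # marginal density h(1/2)

    cond = lambda x: f(x, y0) / h_y0               # conditional density f(x | 1/2)
    mean = sum(x * cond(x) for x in xs) * dx       # ~ 5/9  = 0.5555...
    ex2  = sum(x * x * cond(x) for x in xs) * dx   # ~ 7/18 = 0.3888...
    print(mean, ex2 - mean ** 2)                   # variance ~ 13/162 = 0.0802...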