
Engineering Statistics 03 March, 2022

Lecture No. 7
Resource Person: Dr. Absar Ul Haq, Department of Mechanical Engineering (Narowal Campus).

7.1 Mathematical Expectation

7.1.1 Mean of a Random Variable

In Lecture 1, we discussed the sample mean, which is the arithmetic mean of the data. Now consider the
following. If two coins are tossed 16 times and X is the number of heads that occur per toss, then the values
of X are 0, 1, and 2. Suppose that the experiment yields no heads, one head, and two heads a total of 4, 7,
and 5 times, respectively. The average number of heads per toss of the two coins is then

[(0)(4) + (1)(7) + (2)(5)] / 16 = 1.06.
This is an average value of the data and yet it is not a possible outcome of {0, 1, 2}. Hence, an average is
not necessarily a possible outcome for the experiment. For instance, a salesman’s average monthly income
is not likely to be equal to any of his monthly paychecks.
Let us now restructure our computation for the average number of heads so as to have the following equivalent
form:

(0)(4/16) + (1)(7/16) + (2)(5/16) = 1.06.
The numbers 4/16, 7/16, and 5/16 are the fractions of the total tosses resulting in 0, 1, and 2 heads,
respectively. These fractions are also the relative frequencies for the different values of X in our experiment.
In fact, then, we can calculate the mean, or average, of a set of data by knowing the distinct values that
occur and their relative frequencies, without any knowledge of the total number of observations in our set of
data. Therefore, if 4/16, or 1/4, of the tosses result in no heads, 7/16 of the tosses result in one head, and
5/16 of the tosses result in two heads, the mean number of heads per toss would be 1.06 no matter whether
the total number of tosses were 16, 1000, or even 10,000.
This method of relative frequencies is used to calculate the average number of heads per toss of two coins
that we might expect in the long run. We shall refer to this average value as the mean of the random variable
X or the mean of the probability distribution of X and write it as µ_X or simply as µ when it is clear to which
random variable we refer. It is also common among statisticians to refer to this mean as the mathematical
expectation, or the expected value of the random variable X, and denote it as E(X). Assuming that 1 fair
coin was tossed twice, we find that the sample space for our experiment is

S = {HH, HT, TH, TT}.

Since the 4 sample points are all equally likely, it follows that
P(X = 0) = P(TT) = 1/4,   P(X = 1) = P(TH) + P(HT) = 1/2,

and

P(X = 2) = P(HH) = 1/4,


where a typical element, say TH, indicates that the first toss resulted in a tail followed by a head on the
second toss. Now, these probabilities are just the relative frequencies for the given events in the long run.
Therefore,

µ = E(X) = (0)(1/4) + (1)(1/2) + (2)(1/4) = 1.
This result means that a person who tosses 2 coins over and over again will, on the average, get 1 head per
toss.
The method described above for calculating the expected number of heads per toss of 2 coins suggests
that the mean, or expected value, of any discrete random variable may be obtained by multiplying each of
the values x1 , x2 , · · ·, xn of the random variable X by its corresponding probability f (x1 ), f (x2 ), · · ·, f (xn )
and summing the products. This is true, however, only if the random variable is discrete. In the case of
continuous random variables, the definition of an expected value is essentially the same, with summations replaced by integrations.

Definition 7.1 Let X be a random variable with probability distribution f(x). The mean, or expected value, of X is

µ = E(X) = Σ_x x f(x)

if X is discrete, and

µ = E(X) = ∫_{−∞}^{∞} x f(x) dx

if X is continuous.

The reader should note that the way to calculate the expected value, or mean, shown here is different from
the way to calculate the sample mean described in lecture 1, where the sample mean is obtained by using
data. In mathematical expectation, the expected value is calculated by using the probability distribution.
However, the mean is usually understood as a "center" value of the underlying distribution if we use the expected value, as in Definition 7.1.
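Definition 7.1 translates directly into code for the discrete case. Here is a minimal Python sketch using the two-coin distribution worked out above:

```python
# Expected value of a discrete random variable, per Definition 7.1:
# mu = sum over x of x * f(x).  The two-coin head-count distribution.
f = {0: 1/4, 1: 1/2, 2: 1/4}

mu = sum(x * p for x, p in f.items())
print(mu)   # 1.0
```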

• A lot containing 7 components is sampled by a quality inspector; the lot contains 4 good components and 3 defective components. A sample of 3 is taken by the inspector. Find the expected value of the number of good components in this sample.
Solution: Let X represent the number of good components in the sample. The probability distribution of X is

f(x) = C(4, x) C(3, 3 − x) / C(7, 3),   x = 0, 1, 2, 3.

Simple calculations yield f(0) = 1/35, f(1) = 12/35, f(2) = 18/35, and f(3) = 4/35. Therefore,

µ = E(X) = (0)(1/35) + (1)(12/35) + (2)(18/35) + (3)(4/35) = 12/7 ≈ 1.7.
Thus, if a sample of size 3 is selected at random over and over again from a lot of 4 good components
and 3 defective components, it will contain, on average, 1.7 good components.
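This hypergeometric calculation is easy to verify numerically; a short Python check using the standard-library binomial coefficient:

```python
from math import comb

# Hypergeometric distribution of good components in the sample:
# f(x) = C(4, x) * C(3, 3 - x) / C(7, 3),  x = 0, 1, 2, 3.
f = {x: comb(4, x) * comb(3, 3 - x) / comb(7, 3) for x in range(4)}

mu = sum(x * p for x, p in f.items())
print(round(mu, 4))   # 1.7143  (= 12/7)
```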
• A salesperson for a medical device company has two appointments on a given day. At the first appointment, he believes that he has a 70% chance to make the deal, from which he can earn $1000 commission if successful. On the other hand, he thinks he only has a 40% chance to make the deal at the second appointment, from which, if successful, he can make $1500. What is his expected commission based on his own probability belief? Assume that the appointment results are independent of each other.
Solution: First, we know that the salesperson, for the two appointments, can have 4 possible commission totals: $0, $1000, $1500, and $2500. We then need to calculate their associated probabilities. By independence, we obtain

f($0) = (1 − 0.7)(1 − 0.4) = 0.18,   f($2500) = (0.7)(0.4) = 0.28,
f($1000) = (0.7)(1 − 0.4) = 0.42,   f($1500) = (1 − 0.7)(0.4) = 0.12.

Therefore, the expected commission for the salesperson is

E(X) = ($0)(0.18) + ($1000)(0.42) + ($1500)(0.12) + ($2500)(0.28) = $1300.
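The same answer falls out of enumerating the four independent outcomes by machine; a small Python sketch (probabilities and payouts restated from the example):

```python
from itertools import product

# Enumerate the four outcomes of two independent appointments, weighting
# each commission total by its probability.
deals = [(0.7, 1000), (0.4, 1500)]   # (success probability, commission)

expected = 0.0
for outcome in product([True, False], repeat=2):
    prob, total = 1.0, 0
    for hit, (p, pay) in zip(outcome, deals):
        prob *= p if hit else 1 - p
        total += pay if hit else 0
    expected += prob * total
print(round(expected, 2))   # 1300.0
```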

• Let X be the random variable that denotes the life in hours of a certain electronic device. The probability density function is

f(x) = 20,000/x³ for x > 100, and f(x) = 0 elsewhere.

Find the expected life of this type of device.
Solution: Using Definition 7.1, we have

µ = E(X) = ∫_{100}^{∞} x (20,000/x³) dx = ∫_{100}^{∞} (20,000/x²) dx = 200.

Therefore, we can expect this type of device to last, on average, 200 hours.
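The improper integral can be sanity-checked with simple numerical quadrature; the sketch below assumes the density f(x) = 20,000/x³ for x > 100 (recovered from the worked integral) and truncates the infinite upper limit:

```python
# Numeric check of E(X) for the device-life density, assumed (from the
# worked integral) to be f(x) = 20000/x**3 for x > 100, 0 elsewhere.
def f(x):
    return 20000 / x**3

def simpson(g, a, b, n):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(g(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# E(X) = integral of x * f(x).  The tail beyond 10**5 contributes
# 20000/10**5 = 0.2, so the truncated answer is 199.8 vs. the exact 200.
mean = simpson(lambda x: x * f(x), 100, 1e5, 200_000)
print(round(mean, 1))   # 199.8
```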

Now let us consider a new random variable g(X), which depends on X; that is, each value of g(X) is
determined by the value of X. For instance, g(X) might be X 2 or 3X − 1, and whenever X assumes the
value 2, g(X) assumes the value g(2). In particular, if X is a discrete random variable with probability
distribution f (x), for x = −1, 0, 1, 2, and g(X) = X 2 , then

P [g(X) = 0] = P (X = 0) = f (0),

P [g(X) = 1] = P (X = −1) + P (X = 1) = f (−1) + f (1),


P [g(X) = 4] = P (X = 2) = f (2),
and so the probability distribution of g(X) may be written

g(x):  0      1              4
P:     f(0)   f(−1) + f(1)   f(2)

By the definition of the expected value of a random variable, we obtain

µ_{g(X)} = E[g(X)] = 0f(0) + 1[f(−1) + f(1)] + 4f(2)
= (−1)²f(−1) + (0)²f(0) + (1)²f(1) + (2)²f(2)
= Σ_x g(x) f(x).

This result is generalized in Theorem 7.2 for both discrete and continuous random variables.

Theorem 7.2 Let X be a random variable with probability distribution f(x). The expected value of the random variable g(X) is

µ_{g(X)} = E[g(X)] = Σ_x g(x) f(x)

if X is discrete, and

µ_{g(X)} = E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx

if X is continuous.

• Suppose that the number of cars X that pass through a car wash between 4:00 P.M. and 5:00 P.M. on any sunny Friday has the following probability distribution:

x:      4     5     6    7    8    9
f(x):  1/12  1/12  1/4  1/4  1/6  1/6

Let g(X) = 2X − 1 represent the amount of money, in dollars, paid to the attendant by the manager. Find the attendant's expected earnings for this particular time period.
Solution: By Theorem 7.2, the attendant can expect to receive

E[g(X)] = E(2X − 1) = Σ_{x=4}^{9} (2x − 1) f(x)
= (7)(1/12) + (9)(1/12) + (11)(1/4) + (13)(1/4) + (15)(1/6) + (17)(1/6)
= $12.67.
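The sum E[g(X)] = Σ (2x − 1) f(x) can be reproduced in a couple of lines; the distribution values below are restated from the worked sum (x = 4, …, 9 with probabilities 1/12, 1/12, 1/4, 1/4, 1/6, 1/6):

```python
# E[g(X)] for g(x) = 2x - 1 over the car-wash distribution
# (probabilities restated from the worked example).
f = dict(zip(range(4, 10), [1/12, 1/12, 1/4, 1/4, 1/6, 1/6]))

earnings = sum((2 * x - 1) * p for x, p in f.items())
print(round(earnings, 2))   # 12.67
```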

• Let X be a random variable with density function

f(x) = x²/3 for −1 < x < 2, and f(x) = 0 elsewhere.

Find the expected value of g(X) = 4X + 3.
Solution: By Theorem 7.2, we have

E(4X + 3) = ∫_{−1}^{2} (4x + 3)(x²/3) dx = (1/3) ∫_{−1}^{2} (4x³ + 3x²) dx = 8.

We shall now extend our concept of mathematical expectation to the case of two random variables X and Y with joint probability distribution f(x, y).

Definition 7.3 Let X and Y be random variables with joint probability distribution f(x, y). The mean, or expected value, of the random variable g(X, Y) is

µ_{g(X,Y)} = E[g(X, Y)] = Σ_x Σ_y g(x, y) f(x, y)

if X and Y are discrete, and

µ_{g(X,Y)} = E[g(X, Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) f(x, y) dx dy

if X and Y are continuous.

• Let X and Y be the random variables with joint probability distribution indicated in the table of Example 1 of Lecture 6. Find the expected value of g(X, Y) = XY. The table is reprinted here for convenience:

f(x, y)   x = 0   x = 1   x = 2
y = 0     3/28    9/28    3/28
y = 1     3/14    3/14    0
y = 2     1/28    0       0

Solution: By Definition 7.3, we write

E(XY) = Σ_{x=0}^{2} Σ_{y=0}^{2} x y f(x, y)
= (0)(0)f(0, 0) + (0)(1)f(0, 1) + (1)(0)f(1, 0) + (1)(1)f(1, 1) + (2)(0)f(2, 0)
= f(1, 1) = 3/14.

• Find E(Y/X) for the density function

f(x, y) = x(1 + 3y²)/4 for 0 < x < 2, 0 < y < 1, and f(x, y) = 0 elsewhere.

Solution: We have

E(Y/X) = ∫_{0}^{1} ∫_{0}^{2} (y/x) · x(1 + 3y²)/4 dx dy = ∫_{0}^{1} ∫_{0}^{2} y(1 + 3y²)/4 dx dy = ∫_{0}^{1} (y + 3y³)/2 dy = 5/8.

7.2 Variance and Covariance of Random Variables

The mean, or expected value, of a random variable X is of special importance in statistics because it
describes where the probability distribution is centered. By itself, however, the mean does not give an
adequate description of the shape of the distribution. We also need to characterize the variability in the
distribution. In Figure 7.1, we have the histograms of two discrete probability distributions that have the
same mean, µ = 2, but differ considerably in variability, or the dispersion of their observations about the
mean. The most important measure of variability of a random variable X is obtained by applying Theorem 7.2 with g(X) = (X − µ)². The quantity E[(X − µ)²] is referred to as the variance of the random variable X, or the variance of the probability distribution of X, and is denoted by Var(X), by the symbol σ²_X, or simply by σ² when it is clear to which random variable we refer.

Figure 7.1: Distributions with equal means and unequal dispersions.

Definition 7.4 Let X be a random variable with probability distribution f(x) and mean µ. The variance of X is

σ² = E[(X − µ)²] = Σ_x (x − µ)² f(x)

if X is discrete, and

σ² = E[(X − µ)²] = ∫_{−∞}^{∞} (x − µ)² f(x) dx

if X is continuous. The positive square root of the variance, σ, is called the standard deviation of X.

The quantity x − µ in Definition 7.4 is called the deviation of an observation from its mean. Since the
deviations are squared and then averaged, σ 2 will be much smaller for a set of x values that are close to µ
than it will be for a set of values that vary considerably from µ.

• Let the random variable X represent the number of automobiles that are used for official business purposes on any given workday. The probability distribution for company A [Figure 7.1(a)] is

x:      1    2    3
f(x):  0.3  0.4  0.3

and that for company B [Figure 7.1(b)] is

x:      0    1    2    3    4
f(x):  0.2  0.1  0.3  0.3  0.1

Show that the variance of the probability distribution for company B is greater than that for company A.
Solution: For company A, we find that

µ_A = E(X) = (1)(0.3) + (2)(0.4) + (3)(0.3) = 2.0,

and then

σ²_A = Σ_{x=1}^{3} (x − 2)² f(x) = (1 − 2)²(0.3) + (2 − 2)²(0.4) + (3 − 2)²(0.3) = 0.6.

For company B, we have

µ_B = E(X) = (0)(0.2) + (1)(0.1) + (2)(0.3) + (3)(0.3) + (4)(0.1) = 2.0,

and then

σ²_B = Σ_{x=0}^{4} (x − 2)² f(x)
= (0 − 2)²(0.2) + (1 − 2)²(0.1) + (2 − 2)²(0.3) + (3 − 2)²(0.3) + (4 − 2)²(0.1) = 1.6.

Clearly, the variance of the number of automobiles that are used for official business purposes is greater for
company B than for company A.
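Definition 7.4 can be wrapped in a small helper to compare the two companies directly; the distribution values are restated from the example:

```python
# Variance via Definition 7.4: sigma^2 = sum of (x - mu)^2 * f(x).
def variance(f):
    mu = sum(x * p for x, p in f.items())
    return sum((x - mu) ** 2 * p for x, p in f.items())

A = {1: 0.3, 2: 0.4, 3: 0.3}                      # company A
B = {0: 0.2, 1: 0.1, 2: 0.3, 3: 0.3, 4: 0.1}     # company B
print(round(variance(A), 2), round(variance(B), 2))   # 0.6 1.6
```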
An alternative and preferred formula for finding σ 2 , which often simplifies the calculations, is stated in the
following theorem.

Theorem 7.5 The variance of a random variable X is

σ 2 = E(X 2 ) − µ2 .

• Let the random variable X represent the number of defective parts for a machine when 3 parts are sampled from a production line and tested. The following is the probability distribution of X:

x:      0     1     2     3
f(x):  0.51  0.38  0.10  0.01

Using Theorem 7.5, calculate σ².
Solution: First, we compute

µ = (0)(0.51) + (1)(0.38) + (2)(0.10) + (3)(0.01) = 0.61.

Now,

E(X²) = (0)(0.51) + (1)(0.38) + (4)(0.10) + (9)(0.01) = 0.87.

Therefore,

σ² = 0.87 − (0.61)² = 0.4979.
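The shortcut of Theorem 7.5 takes one pass for each moment; a quick Python check with the defective-parts table:

```python
# Theorem 7.5: sigma^2 = E(X^2) - mu^2, for the defective-parts table.
f = {0: 0.51, 1: 0.38, 2: 0.10, 3: 0.01}

mu = sum(x * p for x, p in f.items())
ex2 = sum(x ** 2 * p for x, p in f.items())
print(round(ex2 - mu ** 2, 4))   # 0.4979
```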

• The weekly demand for a drinking-water product, in thousands of liters, from a local chain of efficiency stores is a continuous random variable X having the probability density

f(x) = 2(x − 1) for 1 < x < 2, and f(x) = 0 elsewhere.

Find the mean and variance of X.
Solution: Calculating E(X) and E(X²), we have

µ = E(X) = 2 ∫_{1}^{2} x(x − 1) dx = 5/3

and

E(X²) = 2 ∫_{1}^{2} x²(x − 1) dx = 17/6.

Therefore,

σ² = 17/6 − (5/3)² = 1/18.

At this point, the variance or standard deviation has meaning only when we compare two or more dis-
tributions that have the same units of measurement. Therefore, we could compare the variances of the
distributions of contents, measured in liters, of bottles of orange juice from two companies, and the larger
value would indicate the company whose product was more variable or less uniform. It would not be mean-
ingful to compare the variance of a distribution of heights to the variance of a distribution of aptitude scores.
We shall later show how the standard deviation can be used to describe a single distribution of observations. We shall now extend our concept of the variance of a random variable X to include random variables related to X. For the random variable g(X), the variance is denoted by σ²_{g(X)} and is calculated by means of the following theorem.

Theorem 7.6 Let X be a random variable with probability distribution f(x). The variance of the random variable g(X) is

σ²_{g(X)} = E{[g(X) − µ_{g(X)}]²} = Σ_x [g(x) − µ_{g(X)}]² f(x)

if X is discrete, and

σ²_{g(X)} = E{[g(X) − µ_{g(X)}]²} = ∫_{−∞}^{∞} [g(x) − µ_{g(X)}]² f(x) dx

if X is continuous.

• Calculate the variance of g(X) = 2X + 3, where X is a random variable with probability distribution

x:      0    1    2    3
f(x):  1/4  1/8  1/2  1/8

Solution: First, we find the mean of the random variable 2X + 3. According to Theorem 7.2,

µ_{2X+3} = E(2X + 3) = Σ_{x=0}^{3} (2x + 3) f(x) = 6.

Now, using Theorem 7.6, we have

σ²_{2X+3} = E{[(2X + 3) − µ_{2X+3}]²} = E[(2X + 3 − 6)²]
= E(4X² − 12X + 9) = Σ_{x=0}^{3} (4x² − 12x + 9) f(x) = 4.

• Let X be a random variable having the density function given in Example 5. Find the variance of the random variable g(X) = 4X + 3.
Solution: In Example 5, we found that µ_{4X+3} = 8. Now, using Theorem 7.6,

σ²_{4X+3} = E{[(4X + 3) − 8]²} = E[(4X − 5)²]
= ∫_{−1}^{2} (4x − 5)² (x²/3) dx = (1/3) ∫_{−1}^{2} (16x⁴ − 40x³ + 25x²) dx
= 51/5.

Definition 7.7 Let X and Y be random variables with joint probability distribution f(x, y). The covariance of X and Y is

σ_XY = E[(X − µ_X)(Y − µ_Y)] = Σ_x Σ_y (x − µ_X)(y − µ_Y) f(x, y)

if X and Y are discrete, and

σ_XY = E[(X − µ_X)(Y − µ_Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x − µ_X)(y − µ_Y) f(x, y) dx dy

if X and Y are continuous.

Theorem 7.8 The covariance of two random variables X and Y with means µX and µY , respectively, is
given by
σ_XY = E(XY) − µ_X µ_Y.

• In Lecture 6, we described a situation involving the number of blue refills X and the number of red refills Y. Two refills for a ballpoint pen are selected at random from a certain box, and the following is the joint probability distribution:

f(x, y)   x = 0   x = 1   x = 2
y = 0     3/28    9/28    3/28
y = 1     3/14    3/14    0
y = 2     1/28    0       0

Find the covariance of X and Y.
Solution: From Example 6, we see that E(XY) = 3/14. Now

µ_X = Σ_{x=0}^{2} x g(x) = (0)(5/14) + (1)(15/28) + (2)(3/28) = 3/4,

and

µ_Y = Σ_{y=0}^{2} y h(y) = (0)(15/28) + (1)(3/7) + (2)(1/28) = 1/2.

Therefore,

σ_XY = E(XY) − µ_X µ_Y = 3/14 − (3/4)(1/2) = −9/56.

• The fraction X of male runners and the fraction Y of female runners who compete in marathon races are described by the joint density function

f(x, y) = 8xy for 0 ≤ y ≤ x ≤ 1, and f(x, y) = 0 elsewhere.

Find the covariance of X and Y.
Solution: We first compute the marginal density functions. They are

g(x) = 4x³ for 0 ≤ x ≤ 1, and h(y) = 4y(1 − y²) for 0 ≤ y ≤ 1.

From these marginal density functions, we compute

µ_X = E(X) = ∫_{0}^{1} 4x⁴ dx = 4/5

and

µ_Y = ∫_{0}^{1} 4y²(1 − y²) dy = 8/15.

From the joint density function given above, we have

E(XY) = ∫_{0}^{1} ∫_{y}^{1} 8x²y² dx dy = 4/9.

Then

σ_XY = E(XY) − µ_X µ_Y = 4/9 − (4/5)(8/15) = 4/225.

Although the covariance between two random variables does provide information regarding the nature of
the relationship, the magnitude of σXY does not indicate anything regarding the strength of the relationship,
since σXY is not scale-free. Its magnitude will depend on the units used to measure both X and Y . There
is a scale-free version of the covariance called the correlation coefficient that is used widely in statistics.

Theorem 7.9 Let X and Y be random variables with covariance σ_XY and standard deviations σ_X and σ_Y, respectively. The correlation coefficient of X and Y is

ρ_XY = σ_XY / (σ_X σ_Y).

• Find the correlation coefficient between X and Y in Example 13.
Solution: Since

E(X²) = (0²)(5/14) + (1²)(15/28) + (2²)(3/28) = 27/28

and

E(Y²) = (0²)(15/28) + (1²)(3/7) + (2²)(1/28) = 4/7,

we obtain

σ²_X = 27/28 − (3/4)² = 45/112

and

σ²_Y = 4/7 − (1/2)² = 9/28.

Therefore, the correlation coefficient between X and Y is

ρ_XY = σ_XY/(σ_X σ_Y) = (−9/56)/√((45/112)(9/28)) = −1/√5.

• Find the correlation coefficient of X and Y in Example 14.
Solution: Because

E(X²) = ∫_{0}^{1} 4x⁵ dx = 2/3

and

E(Y²) = ∫_{0}^{1} 4y³(1 − y²) dy = 1 − 2/3 = 1/3,

we conclude that

σ²_X = 2/3 − (4/5)² = 2/75

and

σ²_Y = 1/3 − (8/15)² = 11/225.

Hence,

ρ_XY = (4/225)/√((2/75)(11/225)) = 4/√66.

Note that although the covariance in Example 15 is larger in magnitude (disregarding the sign) than that
in Example 16, the relationship of the magnitudes of the correlation coefficients in these two examples is
just the reverse. This is evidence that we cannot look at the magnitude of the covariance to decide on how
strong the relationship is.

