

Student Solutions Manual

Probability and Statistics for Engineering and the Sciences

NINTH EDITION

Jay L. Devore
California Polytechnic State University

Prepared by

Matt Carlton
California Polytechnic State University

*" ~ CENGAGE
t_ Learning

Australia Brazil' Mexico' Singapore' United Kingdom' United Slales



© 2016 Cengage Learning     ISBN: 978-1-305-26059-7
WCN: 01-100-101

ALL RIGHTS RESERVED. No part of this work covered by the copyright herein may be reproduced, transmitted, stored, or used in any form or by any means graphic, electronic, or mechanical, including but not limited to photocopying, recording, scanning, digitizing, taping, Web distribution, information networks, or information storage and retrieval systems, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the publisher.

For product information and technology assistance, contact us at Cengage Learning Customer & Sales Support, 1-800-354-9706. For permission to use material from this text or product, submit all requests online at www.cengage.com/permissions. Further permissions questions can be emailed to permissionrequest@cengage.com.

Cengage Learning
20 Channel Center Street
Boston, MA 02210
USA

Cengage Learning is a leading provider of customized learning solutions with office locations around the globe, including Singapore, the United Kingdom, Australia, Mexico, Brazil, and Japan. Locate your local office at: www.cengage.com/global.

Cengage Learning products are represented in Canada by Nelson Education, Ltd.

To learn more about Cengage Learning Solutions, visit www.cengage.com. Purchase any of our products at your local college store or at our preferred online store www.cengagebrain.com.

Printed in the United States of America

Print Number: 01     Print Year: 2014
CONTENTS

Chapter 1 Overview and Descriptive Statistics

Chapter 2 Probability 22

Chapter 3 Discrete Random Variables and Probability Distributions 41

Chapter 4 Continuous Random Variables and Probability Distributions 57
Chapter 5 Joint Probability Distributions and Random Samples 79

Chapter 6 Point Estimation 94

Chapter 7 Statistical Intervals Based on a Single Sample 100

Chapter 8 Tests of Hypotheses Based on a Single Sample 108

Chapter 9 Inferences Based on Two Samples 119

Chapter 10 The Analysis of Variance 135

Chapter 11 Multifactor Analysis of Variance 142

Chapter 12 Simple Linear Regression and Correlation 157

Chapter 13 Nonlinear and Multiple Regression 174

Chapter 14 Goodness-of-Fit Tests and Categorical Data Analysis 196

Chapter 15 Distribution-Free Procedures 205

Chapter 16 Quality Control Methods 209


CHAPTER 1

Section 1.1

1.
a. Los Angeles Times, Oberlin Tribune, Gainesville Sun, Washington Post

b. Duke Energy, Clorox, Seagate, Neiman Marcus

c. Vince Correa, Catherine Miller, Michael Cutler, Ken Lee

d. 2.97, 3.56, 2.20, 2.97

3.
a. How likely is it that more than half of the sampled computers will need or have needed
warranty service? What is the expected number among the 100 that need warranty
service? How likely is it that the number needing warranty service will exceed the
expected number by more than 10?

b. Suppose that 15 of the 100 sampled needed warranty service. How confident can we be
that the proportion of all such computers needing warranty service is between .08 and
.22? Does the sample provide compelling evidence for concluding that more than 10% of
all such computers need warranty service?

5.
a. No. All students taking a large statistics course who participate in an SI program of this
sort.

b. The advantage to randomly allocating students to the two groups is that the two groups
should then be fairly comparable before the study. If the two groups perform differently
in the class, we might attribute this to the treatments (SI and control). If it were left to
students to choose, stronger or more dedicated students might gravitate toward SI,
confounding the results.

c. If all students were put in the treatment group, there would be no firm basis for assessing
the effectiveness of SI (nothing to which the SI scores could reasonably be compared).

7. One could generate a simple random sample of all single-family homes in the city, or a
stratified random sample by taking a simple random sample from each of the 10 district
neighborhoods. From each of the selected homes, values of all desired variables would be
determined. This would be an enumerative study because there exists a finite, identifiable
population of objects from which to sample.


9.
a. There could be several explanations for the variability of the measurements. Among
them could be measurement error (due to mechanical or technical changes across
measurements), recording error, differences in weather conditions at time of
measurements, etc.

b. No, because there is no sampling frame.

Section 1.2

11.
3L  1
3H  56678
4L  000112222234
4H  5667888          stem: tenths
5L  144              leaf: hundredths
5H  58
6L  2
6H  6678
7L
7H  5

The stem-and-leaf display shows that .45 is a good representative value for the data. In
addition, the display is not symmetric and appears to be positively skewed. The range of the
data is .75 − .31 = .44, which is comparable to the typical value of .45. This constitutes a
reasonably large amount of variation in the data. The data value .75 is a possible outlier.

13.
a.

12 2 stem: tens
12 445 leaf: ones
12 6667777
12 889999
13 00011111111
13 2222222222333333333333333
13 44444444444444444455555555555555555555
13 6666666666667777777777
13 888888888888999999
14 0000001111
14 2333333
14 444
14 77

The observations are highly concentrated at around 134 or 135, where the display
suggests the typical value falls.


b.

[Histogram: strength (ksi) on the horizontal axis (roughly 124 to 148), frequency on the vertical axis.]

The histogram of ultimate strengths is symmetric and unimodal, with the point of
symmetry at approximately 135 ksi. There is a moderate amount of variation, and there
are no gaps or outliers in the distribution.

15.
      American            French
                   8 | 1
      755543211000 9 | 00234566
              9432 10 | 2356
              6630 11 | 1369
               850 12 | 223558
                 8 13 | 7
                   14 |
                   15 | 8
                 2 16 |

American movie times are unimodal and strongly positively skewed, while French movie times
appear to be bimodal. A typical American movie runs about 95 minutes, while French movies
are typically either around 95 minutes or around 125 minutes. American movies are generally
shorter than French movies and are less variable in length. Finally, both American and French
movies occasionally run very long (outliers at 162 minutes and 158 minutes, respectively, in
the samples).


17. The sample size for this data set is n = 7 + 20 + 26 + ... + 3 + 2 = 108.
a. "At most five hidders" means 2, 3,4, or 5 hidders. The proportion of contracts that
involved at most 5 bidders is (7 + 20 + 26 + 16)/108 ~ 69/108 ~ .639.
Similarly, the proportion of contracts that involved at least 5 bidders (5 through II) is
equal to (16 + II + 9 + 6 + 8 + 3 + 2)/1 08 ~ 55/108 ~ .509,

b. The number of contracts with between 5 and 10 bidders, inclusive, is 16 + 11 + 9 + 6 + 8
+ 3 = 53, so the proportion is 53/108 = .491. "Strictly" between 5 and 10 means 6, 7, 8, or
9 bidders, for a proportion equal to (11 + 9 + 6 + 8)/108 = 34/108 = .315.

c. The distribution of the number of bidders is positively skewed, ranging from 2 to 11 bidders,
with a typical value of around 4-5 bidders.
[Histogram: number of bidders on the horizontal axis, frequency on the vertical axis.]

19.
a. From this frequency distribution, the proportion of wafers that contained at least one
particle is (100 − 1)/100 = .99, or 99%. Note that it is much easier to subtract 1 (which is
the number of wafers that contain 0 particles) from 100 than it would be to add all the
frequencies for 1, 2, 3, … particles. In a similar fashion, the proportion containing at least
5 particles is (100 − 1 − 2 − 3 − 12 − 11)/100 = 71/100 = .71, or 71%.

b. The proportion containing between 5 and 10 particles is (15 + 18 + 10 + 12 + 4 + 5)/100 =
64/100 = .64, or 64%. The proportion that contain strictly between 5 and 10 (meaning
strictly more than 5 and strictly less than 10) is (18 + 10 + 12 + 4)/100 = 44/100 = .44, or
44%.

c. The following histogram was constructed using Minitab. The histogram is almost
symmetric and unimodal; however, the distribution has a few smaller modes and a
very slight positive skew.

'"
r-'

s ~

o
~r-- r-r-
t-

,
t-r-

,1.-rl
o 1 2 J 4 5 6 7 8 9 10
Il---n,
II 12 13 I~
Nurmer Or.onl,rrin.lirlg po";ol


21.
a. A histogram of the y data appears below. From this histogram, the proportion of
subdivisions having no cul-de-sacs (i.e., y = 0) is 17/47 = .362, or 36.2%. The proportion
having at least one cul-de-sac (y ≥ 1) is (47 − 17)/47 = 30/47 = .638, or 63.8%. Note that
subtracting the number of subdivisions with y = 0 from the total, 47, is an easy way to find
the number of subdivisions with y ≥ 1.

[Histogram: number of culs-de-sac (y = 0 through 5) versus frequency.]

b. A histogram of the z data appears below. From this histogram, the proportion of
subdivisions with at most 5 intersections (i.e., z ≤ 5) is 42/47 = .894, or 89.4%. The
proportion having fewer than 5 intersections (i.e., z < 5) is 39/47 = .830, or 83.0%.

[Histogram: number of intersections (z) versus frequency.]


23. Note: since the class intervals have unequal length, we must use a density scale.

[Density histogram of tantrum durations (minutes); class boundaries at 0, 2, 4, 11, 20, 30, 40.]

The distribution of tantrum durations is unimodal and heavily positively skewed. Most
tantrums last between 0 and 11 minutes, but a few last more than half an hour! With such
heavy skewness, it's difficult to give a representative value.

25. The transformation creates a much more symmetric, mound-shaped histogram.

Histogram of original data:

[Histogram of the original IDT data.]

6
C 2016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part

7 '-
Chapter 1: Overview and Descriptive Statistics

Histogram of transformed data:

[Histogram of the transformed data, log10(IDT).]

27.
a. The endpoints of the class intervals overlap. For example, the value 50 falls in both of
the intervals 0-50 and 50-100.

b. The lifetime distribution is positively skewed. A representative value is around 100.


There is a great deal of variability in lifetimes and several possible candidates for
outliers.

Class Interval    Frequency    Relative Frequency
0–<50             9            0.18
50–<100           19           0.38
100–<150          11           0.22
150–<200          4            0.08
200–<250          2            0.04
250–<300          2            0.04
300–<350          1            0.02
350–<400          1            0.02
400–<450          0            0.00
450–<500          0            0.00
500–<550          1            0.02
                  50           1.00


[Histogram: lifetime (0 to 500+) versus frequency.]

c. There is much more symmetry in the distribution of the transformed values than in the
values themselves, and less variability. There are no longer gaps or obvious outliers.

Class Interval    Frequency    Relative Frequency
2.25–<2.75        2            0.04
2.75–<3.25        2            0.04
3.25–<3.75        3            0.06
3.75–<4.25        8            0.16
4.25–<4.75        18           0.36
4.75–<5.25        10           0.20
5.25–<5.75        4            0.08
5.75–<6.25        3            0.06

[Histogram: ln(lifetime) versus frequency.]

d. The proportion of lifetime observations in this sample that are less than 100 is .18 + .38 =
.56, and the proportion that is at least 200 is .04 + .04 + .02 + .02 + .02 = .14.


29.
Physical Activity    Frequency    Relative Frequency
A                    28           .28
B                    19           .19
C                    18           .18
D                    17           .17
E                    9            .09
F                    9            .09
                     100          1.00

[Bar chart: type of physical activity (A–F) versus frequency.]

31.
Class Frequency Cum. Freq. Cum. ReI. Freq.
0.0-<4.0 2 2 0.050
4.0-<8.0 14 16 0.400
8.0-<12.0 11 27 0.675
12.0-<16.0 8 35 0.875
16.0-<20.0 4 39 0.975
20.0-<24.0 0 39 0.975
24.0-<28.0 1 40 1.000


Section 1.3

33.
a. Using software, x̄ = 640.5 ($640,500) and x̃ = 582.5 ($582,500). The average sale price
for a home in this sample was $640,500. Half the sales were for less than $582,500, while
half were for more than $582,500.

b. Changing that one value lowers the sample mean to 610.5 ($610,500) but has no effect on
the sample median.

c. After removing the two largest and two smallest values, x̄tr(20) = 591.2 ($591,200).

d. A 10% trimmed mean from removing just the highest and lowest values is x̄tr(10) = 596.3.
To form a 15% trimmed mean, take the average of the 10% and 20% trimmed means to
get x̄tr(15) = (591.2 + 596.3)/2 = 593.75 ($593,750).

35. The sample size is n = 15.

a. The sample mean is x̄ = 18.55/15 = 1.237 μg/g and the sample median is x̃ = the 8th
ordered value = .56 μg/g. These values are very different due to the heavy positive
skewness in the data.

b. A 1/15 trimmed mean is obtained by removing the largest and smallest values and
averaging the remaining 13 numbers: (.22 + … + 3.07)/13 = 1.162. Similarly, a 2/15
trimmed mean is the average of the middle 11 values: (.25 + … + 2.25)/11 = 1.074. Since
the average of 1/15 and 2/15 is .1 (10%), a 10% trimmed mean is given by the midpoint
of these two trimmed means: (1.162 + 1.074)/2 = 1.118 μg/g.

c. The median of the data set will remain .56 so long as that's the 8th ordered observation.
Hence, the value .20 could be increased to as high as .56 without changing the fact that
the 8th ordered observation is .56. Equivalently, .20 could be increased by as much as .36
without affecting the value of the sample median.
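The interpolation idea in part (b) is easy to mechanize. The sketch below is illustrative only (the function names are hypothetical, and the full data set is not reproduced here); it averages the 1/15- and 2/15-trimmed means to approximate a 10% trimmed mean:

```python
def trimmed_mean(data, k):
    """Mean after removing the k smallest and k largest observations."""
    xs = sorted(data)
    kept = xs[k:len(xs) - k]          # middle n - 2k observations
    return sum(kept) / len(kept)

def trimmed_mean_10pct_n15(data):
    """For n = 15, average the 1/15- and 2/15-trimmed means (as in part b)."""
    return (trimmed_mean(data, 1) + trimmed_mean(data, 2)) / 2

# With the 15 concentrations of this exercise, trimmed_mean_10pct_n15(...) returns 1.118.
```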

37. x̄ = 12.01, x̃ = 11.35, x̄tr(10) = 11.46. The median or the trimmed mean would be better
choices than the mean because of the outlier 21.9.

39.

a. Σxᵢ = 16.475, so x̄ = 16.475/16 = 1.0297. The median is the average of the two middle
values: x̃ = (1.007 + 1.011)/2 = 1.009.

b. 1.394 can be decreased until it reaches 1.011 (i.e., by 1.394 − 1.011 = 0.383), the larger
of the two middle values. If it is decreased by more than 0.383, the median will change.


41.

b. x̄ = .70 = the sample proportion of successes.

c. To have x/n equal .80 requires x/25 = .80 or x = (.80)(25) = 20. There are 7 successes (S)
already, so another 20 − 7 = 13 would be required.

43. The median and certain trimmed means can be calculated, while the mean cannot - the exact
values of the "100+" observations are required to calculate the mean. x̃ = (57 + 79)/2 = 68.0,
x̄tr(20) = 66.2, x̄tr(30) = 67.5.

Section 1.4

45.
a. x̄ = 115.58. The deviations from the mean are 116.4 − 115.58 = .82, 115.9 − 115.58 =
.32, 114.6 − 115.58 = −.98, 115.2 − 115.58 = −.38, and 115.8 − 115.58 = .22. Notice that
the deviations from the mean sum to zero, as they should.

b. s² = [(.82)² + (.32)² + (−.98)² + (−.38)² + (.22)²]/(5 − 1) = 1.928/4 = .482, so s = .694.

c. Σxᵢ² = 66795.61, so s² = Sxx/(n − 1) = (Σxᵢ² − (Σxᵢ)²/n)/(n − 1) =
(66795.61 − (577.9)²/5)/4 = 1.928/4 = .482.

d. The new sample values are: 16.4, 15.9, 14.6, 15.2, 15.8. While the new mean is 15.58,
all the deviations are the same as in part (a), and the variance of the transformed data is
identical to that of part (b).
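Both variance formulas, and the invariance under shifting in part (d), can be checked in a few lines of Python (a sketch, not part of the original solution; the five observations are the ones whose deviations appear above):

```python
xs = [116.4, 115.9, 114.6, 115.2, 115.8]
n = len(xs)
xbar = sum(xs) / n                                                   # 115.58

# Definitional formula: sum of squared deviations divided by n - 1
s2_def = sum((x - xbar) ** 2 for x in xs) / (n - 1)                  # .482

# Shortcut formula: (sum of x^2 - (sum of x)^2 / n) / (n - 1)
s2_short = (sum(x ** 2 for x in xs) - sum(xs) ** 2 / n) / (n - 1)    # .482

# Subtracting 100 from every observation (part d) leaves the variance unchanged
ys = [x - 100 for x in xs]
ybar = sum(ys) / n
s2_shifted = sum((y - ybar) ** 2 for y in ys) / (n - 1)              # .482
print(round(s2_def, 3), round(s2_short, 3), round(s2_shifted, 3))
```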

47.
a. From software, x̃ = 14.7% and x̄ = 14.88%. The sample average alcohol content of
these 10 wines was 14.88%. Half the wines have alcohol content below 14.7% and half
are above 14.7% alcohol.

b. Working long-hand, Σ(xᵢ − x̄)² = (14.8 − 14.88)² + … + (15.0 − 14.88)² = 7.536. The
sample variance equals s² = Σ(xᵢ − x̄)²/(n − 1) = 7.536/(10 − 1) = 0.837.

c. Subtracting 13 from each value will not affect the variance. The 10 new observations are
1.8, 1.5, 3.1, 1.2, 2.9, 0.7, 3.2, 1.6, 0.8, and 2.0. The sum and sum of squares of these 10
new numbers are Σyᵢ = 18.8 and Σyᵢ² = 42.88. Using the sample variance shortcut, we
obtain s² = [42.88 − (18.8)²/10]/(10 − 1) = 7.536/9 = 0.837 again.

49.
a. Σxᵢ = 2.75 + … + 3.01 = 56.80, Σxᵢ² = 2.75² + … + 3.01² = 197.8040.

b. s² = [197.8040 − (56.80)²/17]/16 = 8.0252/16 = .5016, so s = .708.


51.
a. From software, s² = 1264.77 min² and s = 35.56 min. Working by hand, Σx = 2563 and
Σx² = 368,501, so
s² = [368,501 − (2563)²/19]/(19 − 1) = 1264.766 and s = √1264.766 = 35.564.

b. If y = time in hours, then y = cx where c = 1/60. So, s²_y = c²·s²_x = (1/60)²(1264.766) = .351 hr²
and s_y = c·s_x = (1/60)(35.564) = .593 hr.
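A quick numeric check of the rescaling rules s²_y = c²·s²_x and s_y = c·s_x (a sketch only, not from the text):

```python
c = 1 / 60                       # converts minutes to hours
s_x = 35.564                     # sd in minutes, from part (a)
s2_x = 1264.766                  # variance in minutes^2, from part (a)

s2_y = c ** 2 * s2_x             # about 0.351 hr^2
s_y = c * s_x                    # about 0.593 hr
print(round(s2_y, 3), round(s_y, 3))
```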

53.
a. Using software, for the sample of balanced funds we have x̄ = 1.121, x̃ = 1.050, s = 0.536;
for the sample of growth funds we have x̄ = 1.244, x̃ = 1.100, s = 0.448.

b. The distribution of expense ratios for this sample of balanced funds is fairly symmetric,
while the distribution for growth funds is positively skewed. These balanced and growth
mutual funds have similar median expense ratios (1.05% and 1.10%, respectively), but
expense ratios are generally higher for growth funds. The lone exception is a balanced
fund with a 2.86% expense ratio. (There is also one unusually low expense ratio in the
sample of balanced funds, at 0.09%.)

[Comparative boxplot of expense ratios (%) for the balanced and growth fund samples.]
55.
a. Lower half of the data set: 325 325 334 339 356 356 359 359 363 364 364
366 369, whose median, and therefore the lower fourth, is 359 (the 7th observation in the
sorted list).

Upper half of the data set: 370 373 373 374 375 389 392 393 394 397 402
403 424, whose median, and therefore the upper fourth, is 392.

So, fs = 392 − 359 = 33.


b. inner fences: 359 − 1.5(33) = 309.5, 392 + 1.5(33) = 441.5

To be a mild outlier, an observation must be below 309.5 or above 441.5. There are none
in this data set. Clearly, then, there are also no extreme outliers.

c. A boxplot of this data appears below. The distribution of escape times is roughly
symmetric with no outliers. Notice the boxplot "hides" the fact that the distribution
contains two gaps, which can be seen in the stem-and-leaf display.

[Boxplot: escape time (sec), roughly 320 to 425.]

d. Not until the value x = 424 is lowered below the upper fourth value of 392 would there be
any change in the value of the upper fourth (and, thus, of the fourth spread). That is, the
value x = 424 could not be decreased by more than 424 - 392 = 32 seconds.
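The fourths, fourth spread, and fences in parts (a) and (b) can be reproduced with a short Python sketch (not part of the original solution); it uses the same median-of-each-half convention as the text:

```python
escape_times = [325, 325, 334, 339, 356, 356, 359, 359, 363, 364, 364, 366, 369,
                370, 373, 373, 374, 375, 389, 392, 393, 394, 397, 402, 403, 424]

def median(values):
    vs = sorted(values)
    n = len(vs)
    mid = n // 2
    return vs[mid] if n % 2 else (vs[mid - 1] + vs[mid]) / 2

xs = sorted(escape_times)
half = len(xs) // 2
lower_half, upper_half = xs[:half], xs[half:]     # n = 26, so each half has 13 values

lower_fourth = median(lower_half)                 # 359
upper_fourth = median(upper_half)                 # 392
fourth_spread = upper_fourth - lower_fourth       # 33

# Inner fences used in part (b) to flag mild outliers
fences = (lower_fourth - 1.5 * fourth_spread, upper_fourth + 1.5 * fourth_spread)
print(lower_fourth, upper_fourth, fourth_spread, fences)   # 359 392 33 (309.5, 441.5)
```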

57.
a. fs = 216.8 − 196.0 = 20.8
inner fences: 196 − 1.5(20.8) = 164.8, 216.8 + 1.5(20.8) = 248.0
outer fences: 196 − 3(20.8) = 133.6, 216.8 + 3(20.8) = 279.2
Of the observations listed, 125.8 is an extreme low outlier and 250.2 is a mild high
outlier.

b. A boxplot of this data appears below. There is a bit of positive skew to the data but,
except for the two outliers identified in part (a), the variation in the data is relatively
small.

[Boxplot of the data (scale roughly 120 to 260), with the two outliers from part (a) marked.]


59.
a. If you aren't using software, don't forget to sort the data first!
ED: median = .4, lower fourth = (.1 + .1)/2 = .1, upper fourth = (2.7 + 2.8)/2 = 2.75,
fourth spread = 2.75 − .1 = 2.65.

Non-ED: median = (1.5 + 1.7)/2 = 1.6, lower fourth = .3, upper fourth = 7.9,
fourth spread = 7.9 − .3 = 7.6.

b. ED: mild outliers are less than .1 − 1.5(2.65) = −3.875 or greater than 2.75 + 1.5(2.65) =
6.725. Extreme outliers are less than .1 − 3(2.65) = −7.85 or greater than 2.75 + 3(2.65) =
10.7. So, the two largest observations (11.7, 21.0) are extreme outliers and the next two
largest values (8.9, 9.2) are mild outliers. There are no outliers at the lower end of the
data.

Non-ED: mild outliers are less than .3 − 1.5(7.6) = −11.1 or greater than 7.9 + 1.5(7.6) =
19.3. Note that there are no mild outliers in the data, hence there cannot be any extreme
outliers, either.

c. A comparative boxplot appears below. The outliers in the ED data are clearly visible.
There is noticeable positive skewness in both samples; the Non-ED sample has more
variability than the ED sample; the typical values of the ED sample tend to be smaller
than those for the Non-ED sample.

[Comparative boxplot: cocaine concentration (mg/L) for the ED and Non-ED samples.]

61. Outliers occur in the 6 a.m. data. The distributions at the other times are fairly symmetric.
Variability and the "typical" gasoline-vapor coefficient values increase somewhat until 2 p.m.,
then decrease slightly.


Supplementary Exercises

63. As seen in the histogram below, this noise distribution is bimodal (but close to unimodal) with
a positive skew and no outliers. The mean noise level is 64.89 dB and the median noise level
is 64.7 dB. The fourth spread of the noise measurements is about 70.4 - 57.8 = 12.6 dB.

[Histogram: noise level (dB) versus frequency.]

65.
a. The histogram appears below. A representative value for this data would be around 90
MPa. The histogram is reasonably symmetric, unimodal, and somewhat bell-shaped with
a fair amount of variability (s ≈ 3 or 4 MPa).

[Histogram: fracture strength (MPa), classes from roughly 81 to 97, versus frequency.]

b. The proportion of the observations that are at least 85 is 1 − (6 + 7)/169 = .9231. The
proportion less than 95 is 1 − (13 + 3)/169 = .9053.


c. 90 is the midpoint of the class 89-<91, which contains 43 observations (a relative
frequency of 43/169 = .2544). Therefore about half of this frequency, .1272, should be
added to the relative frequencies for the classes to the left of x = 90. That is, the
approximate proportion of observations that are less than 90 is .0355 + .0414 + .1006 +
.1775 + .1272 = .4822.

67.
a. Aortic root diameters for males have mean 3.64 cm, median 3.70 cm, standard deviation
0.269 cm, and fourth spread 0.40. The corresponding values for females are x̄ = 3.28 cm,
x̃ = 3.15 cm, s = 0.478 cm, and fs = 0.50 cm. Aortic root diameters are typically (though
not universally) somewhat smaller for females than for males, and females show more
variability. The distribution for males is negatively skewed, while the distribution for
females is positively skewed (see graphs below).

[Comparative boxplot of aortic root diameters (cm) for males (M) and females (F).]

b. For females (n = 10), the 10% trimmed mean is the average of the middle 8 observations:
x̄tr(10) = 3.24 cm. For males (n = 13), the 1/13 trimmed mean is 40.2/11 = 3.6545, and
the 2/13 trimmed mean is 32.8/9 = 3.6444. Interpolating, the 10% trimmed mean is
x̄tr(10) = 0.7(3.6545) + 0.3(3.6444) = 3.65 cm. (10% is three-tenths of the way from 1/13
to 2/13.)

69.
a. ȳ = Σyᵢ/n = Σ(axᵢ + b)/n = a·x̄ + b.

s²_y = Σ(yᵢ − ȳ)²/(n − 1) = Σ(axᵢ + b − (a·x̄ + b))²/(n − 1) = Σ(axᵢ − a·x̄)²/(n − 1)
= a²·Σ(xᵢ − x̄)²/(n − 1) = a²·s²_x.


b. x = °C, y = °F, so y = (9/5)x + 32:

ȳ = (9/5)(87.3) + 32 = 189.14°F

s_y = √(s²_y) = √[(9/5)²(1.04)²] = √3.5044 = 1.872°F
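A numeric check of the transformation rules from part (a), applied to this Celsius-to-Fahrenheit conversion (a sketch only):

```python
a, b = 9 / 5, 32                 # y = (9/5)x + 32 converts degrees C to degrees F
xbar_c, s_c = 87.3, 1.04         # sample mean and sd in degrees C

ybar_f = a * xbar_c + b          # 189.14 degrees F
s_f = abs(a) * s_c               # 1.872 degrees F
print(round(ybar_f, 2), round(s_f, 3))
```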

71.
a. The mean, median, and trimmed mean are virtually identical, which suggests symmetry.
If there are outliers, they are balanced. The range of values is only 25.5, but half of the
values are between 132.95 and 138.25.

b. See the comments for (a). In addition, using 1.5(Q3 − Q1) as a yardstick, the two largest
and three smallest observations are mild outliers.

[Boxplot: strength, on a scale from 120 to 150.]


73. From software, x̄ = .9255, s = .0809; x̃ = .93, fs = .1. The cadence observations are slightly
skewed (mean = .9255 strides/sec, median = .93 strides/sec) and show a small amount of
variability (standard deviation = .0809, fourth spread = .1). There are no apparent outliers in
the data.

7  | 8              stem = tenths
8  | 11556          leaf = hundredths
9  | 2233335566
10 | 0566

[Boxplot: cadence (strides per second), roughly 0.75 to 1.05.]

75.
a. The median is the same (371) in each plot and all three data sets are very symmetric. In
addition, all three have the same minimum value (350) and same maximum value (392).
Moreover, all three data sets have the same lower (364) and upper quartiles (378). So, all
three boxplots will be identical. (Slight differences in the boxplots below are due to the
way Minitab software interpolates to calculate the quartiles.)

[Boxplots of fatigue limit (MPa) for Types 1, 2, and 3, on a scale from 350 to 392; all three are identical.]


b. A comparative dotplot is shown below. These graphs show that there are differences in
the variability of the three data sets. They also show differences in the way the values are
distributed in the three data sets, especially big differences in the presence of gaps and
clusters.

[Comparative dotplots of fatigue limit (MPa) for Types 1, 2, and 3.]

c. The boxplot in (a) is not capable of detecting the differences among the data sets. The
primary reason is that boxplots give up some detail in describing data because they use
only five summary numbers for comparing data sets.

77.
a.

0 | 444444444577888999            stem: 1.0
1 | 00011111111124455669999       leaf: 0.1
2 | 1234457
3 | 11355
4 | 17
5 | 3
6 |
7 | 67
8 | 1

HI: 10.44, 13.41


b. Since the intervals have unequal width, you must use a density scale.

[Density histogram: pitting depth (mm).]

c. Representative depths are quite similar for the three types of soils - between 1.5 and 2.
Data from the C and CL soils shows much more variability than for the other two types.
The boxplots for the first three types show substantial positive skewness both in the
middle 50% and overall. The boxplot for the SYCL soil shows negative skewness in the
middle 50% and mild positive skewness overall. Finally, there are multiple outliers for
the first three types of soils, including extreme outliers.

79.

a. x̄ₙ₊₁ = Σᵢ₌₁ⁿ⁺¹ xᵢ/(n + 1) = (Σᵢ₌₁ⁿ xᵢ + xₙ₊₁)/(n + 1) = (n·x̄ₙ + xₙ₊₁)/(n + 1).
b. In the second line below, we artificially add and subtract n·x̄ₙ² to create the term needed
for the sample variance:

n·s²ₙ₊₁ = Σᵢ₌₁ⁿ⁺¹ (xᵢ − x̄ₙ₊₁)² = Σᵢ₌₁ⁿ⁺¹ xᵢ² − (n + 1)·x̄²ₙ₊₁
= Σᵢ₌₁ⁿ xᵢ² + x²ₙ₊₁ − (n + 1)·x̄²ₙ₊₁ = [Σᵢ₌₁ⁿ xᵢ² − n·x̄ₙ²] + n·x̄ₙ² + x²ₙ₊₁ − (n + 1)·x̄²ₙ₊₁
= (n − 1)·s²ₙ + {x²ₙ₊₁ + n·x̄ₙ² − (n + 1)·x̄²ₙ₊₁}

Substitute the expression for x̄ₙ₊₁ from part (a) into the expression in braces, and it
simplifies to n·(xₙ₊₁ − x̄ₙ)²/(n + 1), as desired.

c. First, x̄₁₆ = [15(12.58) + 11.8]/16 = 200.5/16 = 12.53. Then, solving (b) for s²ₙ₊₁ gives

s²ₙ₊₁ = [(n − 1)/n]·s²ₙ + [1/(n + 1)]·(xₙ₊₁ − x̄ₙ)² = (14/15)(.512)² + (1/16)(11.8 − 12.58)² = .283.

Finally, the standard deviation is s₁₆ = √.283 = .532.
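A short Python sketch (not from the text) that verifies the update formulas in parts (a) and (b) with the numbers of part (c):

```python
n = 15
xbar_n, s_n = 12.58, 0.512       # mean and sd of the first 15 observations
x_new = 11.8                     # the 16th observation

# Part (a): updated sample mean
xbar_new = (n * xbar_n + x_new) / (n + 1)                                   # 12.53

# Part (b): n * s^2_{n+1} = (n-1)s^2_n + n/(n+1) * (x_new - xbar_n)^2
s2_new = ((n - 1) * s_n ** 2 + n / (n + 1) * (x_new - xbar_n) ** 2) / n     # about .283
print(round(xbar_new, 2), round(s2_new, 3), round(s2_new ** 0.5, 3))        # sd about .532
```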


81. Assuming that the histogram is unimodal, then there is evidence of positive skewness in the
data since the median lies to the left of the mean (for a symmetric distribution, the mean and
median would coincide).

For more evidence of skewness, compare the distances of the 5th and 95th percentiles from the
median: median − 5th %ile = 500 − 400 = 100, while 95th %ile − median = 720 − 500 = 220.
Thus, the largest 5% of the values (above the 95th percentile) are further from the median
than are the lowest 5%. The same skewness is evident when comparing the 10th and 90th
percentiles to the median, or comparing the maximum and minimum to the median.

83.
a. When there is perfect symmetry, the smallest observation y₁ and the largest observation
yₙ will be equidistant from the median, so yₙ − x̃ = x̃ − y₁. Similarly, the second-smallest
and second-largest will be equidistant from the median, so yₙ₋₁ − x̃ = x̃ − y₂, and so on.
Thus, the first and second numbers in each pair will be equal, so that each point in the
plot will fall exactly on the 45° line.

When the data is positively skewed, yₙ will be much further from the median than is y₁,
so yₙ − x̃ will considerably exceed x̃ − y₁ and the point (yₙ − x̃, x̃ − y₁) will fall
considerably below the 45° line, as will the other points in the plot.

b. The median of these n = 26 observations is 221.6 (the midpoint of the 13th and 14th
ordered values). The first point in the plot is (2745.6 − 221.6, 221.6 − 4.1) = (2524.0,
217.5). The others are: (1476.2, 213.9), (1434.4, 204.1), (756.4, 190.2), (481.8, 188.9),
(267.5, 181.0), (208.4, 129.2), (112.5, 106.3), (81.2, 103.3), (53.1, 102.6), (53.1, 92.0),
(33.4, 23.0), and (20.9, 20.9). The first number in each of the first seven pairs greatly
exceeds the second number, so each of those points falls well below the 45° line. A
substantial positive skew (stretched upper tail) is indicated.

[Plot of the pairs (yₙ₊₁₋ᵢ − x̃, x̃ − yᵢ); the points fall below the 45° line.]

CHAPTER 2

Section 2.1

a. J"= {1324, 1342, 1423,1432,2314,2341,2413,2431,3124,3142,4123, 4132,3214, 3241,4213,


4231 I
b. Event A contains the outcomes where 1 is first in the list:
A = (1324, 1342, 1423,14321

c. Event B contains the outcomes where 2 is first or second:


B = {2314, 2341, 2413, 2431,3214,3241,4213,4231}.

d. The event AuB contains the outcomes in A or B or both:


AuB ~ {1324, 1342, 1423, 1432,2314,2341,2413,2431,3214,3241,4213,4231 I
AnE = 0, since I and 2 can't both get into the championship game.
A' = J"- A ~ (2314, 2341, 2413, 2431,3124,3142,4123,4132,3214,3241,4213, 4231 I

3.
a. A = {SSF, SFS, FSS}.

b. B = {SSS, SSF, SFS, FSS}.


c. For event C to occur, the system must have component 1 working (S in the first position), then at least
one of the other two components must work (at least one S in the second and third positions): C =
{SSS, SSF, SFS}.

d. C′ = {SFF, FSS, FSF, FFS, FFF}.

A∪C = {SSS, SSF, SFS, FSS}.
A∩C = {SSF, SFS}.
B∪C = {SSS, SSF, SFS, FSS}. Notice that B contains C, so B∪C = B.
B∩C = {SSS, SSF, SFS}. Since B contains C, B∩C = C.


5.
a. The 3³ = 27 possible outcomes are numbered below for later reference.

Outcome Outcome
Number Outcome Number Outcome
1 111 15 223
2 112 16 231
3 113 17 232
4 121 18 233
5 122 19 311
6 123 20 312
7 131 21 313
8 132 22 321
9 133 23 322
10 211 24 323
11 212 25 331
12 213 26 332
13 221 27 333
14 222

b. Outcome numbers 1, 14, 27 above.

c. Outcome numbers 6, 8, 12, 16, 20, 22 above.

d. Outcome numbers 1,3,7,9,19,21,25,27 above.

7.
a. S = {BBBAAAA, BBABAAA, BBAABAA, BBAAABA, BBAAAAB, BABBAAA, BABABAA, BABAABA,
BABAAAB, BAABBAA, BAABABA, BAABAAB, BAAABBA, BAAABAB, BAAAABB, ABBBAAA,
ABBABAA,ABBAABA,ABBAAAB,ABABBAA,ABABABA,ABABAAB,ABAABBA,ABAABAB,
ABAAABB, AABBBAA, AABBABA, AABBAAB, AABABBA, AABABAB, AABAABB, AAABBBA,
AAABBAB, AAABABB, AAAABBB}.

b. AAAABBB, AAABABB, AAABBAB, AABAABB, AABABAB.

9.
a. In the diagram on the left, the shaded area is (A∪B)′. On the right, the shaded area is A′, the striped
area is B′, and the intersection A′∩B′ occurs where there is both shading and stripes. These two
diagrams display the same area.

[Venn diagrams omitted.]

b. In the diagram below, the shaded area represents (A∩B)′. Using the right-hand diagram from (a), the
union of A′ and B′ is represented by the areas that have either shading or stripes (or both). Both of the
diagrams display the same area.

Section 2.2

11.
a. .07.

b. .15 + .10 + .05 = .30.

c. Let A = the selected individual owns shares in a stock fund. Then P(A) = .18 + .25 = .43. The desired
probability, that a selected customer does not own shares in a stock fund, equals P(A′) = 1 − P(A) = 1 − .43
= .57. This could also be calculated by adding the probabilities for all the funds that are not stocks.

13.
a. A1 ∪ A2 = "awarded either #1 or #2 (or both)": from the addition rule,
P(A1 ∪ A2) = P(A1) + P(A2) − P(A1 ∩ A2) = .22 + .25 − .11 = .36.

b. A1′ ∩ A2′ = "awarded neither #1 nor #2": using the hint and part (a),
P(A1′ ∩ A2′) = P((A1 ∪ A2)′) = 1 − P(A1 ∪ A2) = 1 − .36 = .64.

c. A1 ∪ A2 ∪ A3 = "awarded at least one of these three projects": using the addition rule for 3 events,
P(A1 ∪ A2 ∪ A3) = P(A1) + P(A2) + P(A3) − P(A1 ∩ A2) − P(A1 ∩ A3) − P(A2 ∩ A3) + P(A1 ∩ A2 ∩ A3) =
.22 + .25 + .28 − .11 − .05 − .07 + .01 = .53.

d. A1′ ∩ A2′ ∩ A3′ = "awarded none of the three projects":
P(A1′ ∩ A2′ ∩ A3′) = 1 − P(awarded at least one) = 1 − .53 = .47.


e. A1′ ∩ A2′ ∩ A3 = "awarded #3 but neither #1 nor #2": from a Venn diagram,
P(A1′ ∩ A2′ ∩ A3) = P(A3) − P(A1 ∩ A3) − P(A2 ∩ A3) + P(A1 ∩ A2 ∩ A3) =
.28 − .05 − .07 + .01 = .17. The last term addresses the "double counting" of the two subtractions.

f. (A1′ ∩ A2′) ∪ A3 = "awarded neither of #1 and #2, or awarded #3": from a Venn diagram,
P((A1′ ∩ A2′) ∪ A3) = P(none awarded) + P(A3) = .47 (from d) + .28 = .75.
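These inclusion-exclusion computations can also be double-checked in a few lines of Python (a sketch, not part of the original solution; the probabilities are the ones given in the exercise):

```python
pA1, pA2, pA3 = .22, .25, .28
pA1A2, pA1A3, pA2A3 = .11, .05, .07
pA1A2A3 = .01

p_union12 = pA1 + pA2 - pA1A2                                          # (a) .36
p_neither12 = 1 - p_union12                                            # (b) .64
p_union123 = (pA1 + pA2 + pA3
              - pA1A2 - pA1A3 - pA2A3 + pA1A2A3)                       # (c) .53
p_none = 1 - p_union123                                                # (d) .47
p_only3 = pA3 - pA1A3 - pA2A3 + pA1A2A3                                # (e) .17
p_f = p_none + pA3                                                     # (f) .75
print(round(p_union12, 2), round(p_neither12, 2), round(p_union123, 2),
      round(p_none, 2), round(p_only3, 2), round(p_f, 2))
```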

Alternatively, answers to a-f can be obtained from probabilities on the accompanying Venn diagram:


15.
a. Let E be the event that at most one purchases an electric dryer. Then E′ is the event that at least two
purchase electric dryers, and P(E′) = 1 − P(E) = 1 − .428 = .572.

b. Let A be the event that all five purchase gas, and let B be the event that all five purchase electric. All
other possible outcomes are those in which at least one of each type of clothes dryer is purchased.
Thus, the desired probability is 1 − [P(A) + P(B)] = 1 − [.116 + .005] = .879.

17.
a. The probabilities do not add to 1 because there are other software packages besides SPSS and SAS for
which requests could be made.

b. P(A′) = 1 − P(A) = 1 − .30 = .70.

c. Since A and B are mutually exclusive events, P(A ∪ B) = P(A) + P(B) = .30 + .50 = .80.

d. By deMorgan's law, P(A′ ∩ B′) = P((A ∪ B)′) = 1 − P(A ∪ B) = 1 − .80 = .20.

In this example, deMorgan's law says the event "neither A nor B" is the complement of the event
"either A or B." (That's true regardless of whether they're mutually exclusive.)

19. Let A be the event that the selected joint was found defective by inspector A, so P(A) = 724/10,000. Let B
be analogous for inspector B, so P(B) = 751/10,000. The event "at least one of the inspectors judged a joint
to be defective" is A∪B, so P(A∪B) = 1159/10,000.

a. By deMorgan's law, P(neither A nor B) = P(A′ ∩ B′) = 1 − P(A∪B) = 1 − 1159/10,000 = 8841/10,000
= .8841.

b. The desired event is B∩A′. From a Venn diagram, we see that P(B∩A′) = P(B) − P(A∩B). From the
addition rule, P(A∪B) = P(A) + P(B) − P(A∩B) gives P(A∩B) = .0724 + .0751 − .1159 = .0316.
Finally, P(B∩A′) = P(B) − P(A∩B) = .0751 − .0316 = .0435.

21. In what follows, the first letter refers to the auto deductible and the second letter refers to the homeowner's
deductible.
a. P(MH) = .10.

b. P(low auto deductible) = P({LN, LL, LM, LH}) = .04 + .06 + .05 + .03 = .18. Following a similar
pattern, P(low homeowner's deductible) = .06 + .10 + .03 = .19.

c. P(same deductible for both) = P({LL, MM, HH}) = .06 + .20 + .15 = .41.

d. P(deductibles are different) = 1 − P(same deductible for both) = 1 − .41 = .59.

e. P(at least one low deductible) = P({LN, LL, LM, LH, ML, HL}) = .04 + .06 + .05 + .03 + .10 + .03 =
.31.

f. P(neither deductible is low) = 1 − P(at least one low deductible) = 1 − .31 = .69.


23. Assume that the computers are numbered 1-6 as described and that computers 1 and 2 are the two laptops.
There are 15 possible outcomes: (1,2) (1,3) (1,4) (1,5) (1,6) (2,3) (2,4) (2,5) (2,6) (3,4) (3,5) (3,6) (4,5)
(4,6) and (5,6).
a. P(both are laptops) = P({(1,2)}) = 1/15 = .067.

b. P(both are desktops) = P({(3,4), (3,5), (3,6), (4,5), (4,6), (5,6)}) = 6/15 = .40.

c. P(at least one desktop) = 1 − P(no desktops) = 1 − P(both are laptops) = 1 − .067 = .933.

d. P(at least one of each type) = 1 − P(both are the same) = 1 − [P(both are laptops) + P(both are
desktops)] = 1 − [.067 + .40] = .533.

25. By rearranging the addition rule, P(A ∩ B) = P(A) + P(B) − P(A∪B) = .40 + .55 − .63 = .32. By the same
method, P(A ∩ C) = .40 + .70 − .77 = .33 and P(B ∩ C) = .55 + .70 − .80 = .45. Finally, rearranging the
addition rule for 3 events gives
P(A ∩ B ∩ C) = P(A ∪ B ∪ C) − P(A) − P(B) − P(C) + P(A ∩ B) + P(A ∩ C) + P(B ∩ C) = .85 − .40 − .55
− .70 + .32 + .33 + .45 = .30.
These probabilities are reflected in the Venn diagram below.

[Venn diagram showing the probabilities of the individual regions for A, B, and C.]
a. P(A ∪ B ∪ C) = .85, as given.

b. P(none selected) = 1 − P(at least one selected) = 1 − P(A ∪ B ∪ C) = 1 − .85 = .15.

c. From the Venn diagram, P(only automatic transmission selected) = .22.

d. From the Venn diagram, P(exactly one of the three) = .05 + .08 + .22 = .35.

27. There are 10 equally likely outcomes: {A, B}, {A, Co}, {A, Cr}, {A, F}, {B, Co}, {B, Cr}, {B, F}, {Co, Cr},
{Co, F}, and {Cr, F}.
a. P({A, B}) = 1/10 = .1.

b. P(at least one C) = P({A, Co} or {A, Cr} or {B, Co} or {B, Cr} or {Co, Cr} or {Co, F} or {Cr, F}) =
7/10 = .7.

c. Replacing each person with his/her years of experience, P(at least 15 years) = P({3, 14} or {6, 10} or
{6, 14} or {7, 10} or {7, 14} or {10, 14}) = 6/10 = .6.

Section 2.3

29.
a. There are 26 letters, so allowing repeats there are (26)(26) = (26)² = 676 possible 2-letter domain
names. Add in the 10 digits, and there are 36 characters available, so allowing repeats there are
(36)(36) = (36)² = 1296 possible 2-character domain names.

b. By the same logic as part a, the answers are (26)³ = 17,576 and (36)³ = 46,656.

c. Continuing, (26)⁴ = 456,976; (36)⁴ = 1,679,616.

d. P(4-character sequence is already owned) = 1 − P(4-character sequence still available) = 1 −
97,786/(36)⁴ = .942.

31.
a. Use the Fundamental Counting Principle: (9)(5) = 45.

b. By the same reasoning, there are (9)(5)(32) = 1440 such sequences, so such a policy could be carried
out for 1440 successive nights, or almost 4 years, without repeating exactly the same program.

33.
a. Since there are 15 players and 9 positions, and order matters in a line-up (catcher, pitcher, shortstop,
etc. are different positions), the number of possibilities is P9,15 = (15)(14)⋯(7) = 15!/(15 − 9)! =
1,816,214,400.

b. For each of the starting line-ups in part (a), there are 9! possible batting orders. So, multiply the answer
from (a) by 9! to get (1,816,214,400)(362,880) = 659,067,881,472,000.

c. Order still matters: There are P3,5 = 60 ways to choose three left-handers for the outfield and P6,10 =
151,200 ways to choose six right-handers for the other positions. The total number of possibilities is
(60)(151,200) = 9,072,000.

35.
a. There are (10 choose 5) = 252 ways to select 5 workers from the day shift. In other words, of all the ways to
select 5 workers from among the 24 available, 252 such selections result in 5 day-shift workers. Since
the grand total number of possible selections is (24 choose 5) = 42,504, the probability of randomly selecting 5
day-shift workers (and, hence, no swing or graveyard workers) is 252/42,504 = .00593.

b. Similar to a, there are (8 choose 5) = 56 ways to select 5 swing-shift workers and (6 choose 5) = 6 ways to select 5
graveyard-shift workers. So, there are 252 + 56 + 6 = 314 ways to pick 5 workers from the same shift.
The probability of this randomly occurring is 314/42,504 = .00739.

c. P(at least two shifts represented) = 1 − P(all from same shift) = 1 − .00739 = .99261.


d. There are several ways to approach this question. For example, let A1 = "day shift is unrepresented,"
A2 = "swing shift is unrepresented," and A3 = "graveyard shift is unrepresented." Then we want
P(A1 ∪ A2 ∪ A3).

N(A1) = N(day shift unrepresented) = N(all from swing/graveyard) = (14 choose 5) = 2002,
since there are 8 + 6 = 14 total employees in the swing and graveyard shifts. Similarly,
N(A2) = (16 choose 5) = 4368 and N(A3) = (18 choose 5) = 8568. Next, N(A1 ∩ A2) = N(all from graveyard) = 6
from b. Similarly, N(A1 ∩ A3) = 56 and N(A2 ∩ A3) = 252. Finally, N(A1 ∩ A2 ∩ A3) = 0, since at least
one shift must be represented. Now, apply the addition rule for 3 events:
P(A1 ∪ A2 ∪ A3) = (2002 + 4368 + 8568 − 6 − 56 − 252 + 0)/42,504 = 14,624/42,504 = .3441.
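All of the counting in this exercise can be reproduced with math.comb; the following Python sketch (not part of the original solution) mirrors parts (a)-(d):

```python
from math import comb

day, swing, grave = 10, 8, 6
total = comb(day + swing + grave, 5)                     # C(24,5) = 42,504

# (a) all five selected workers come from the day shift
p_a = comb(day, 5) / total                               # 252/42,504 = .00593

# (b) all five come from the same shift
same = comb(day, 5) + comb(swing, 5) + comb(grave, 5)    # 252 + 56 + 6 = 314
p_b = same / total                                       # .00739

# (c) at least two shifts represented
p_c = 1 - p_b                                            # .99261

# (d) at least one shift unrepresented, by inclusion-exclusion
unrep = (comb(swing + grave, 5) + comb(day + grave, 5) + comb(day + swing, 5)
         - comb(grave, 5) - comb(swing, 5) - comb(day, 5))
p_d = unrep / total                                      # 14,624/42,504 = .3441
print(round(p_a, 5), round(p_b, 5), round(p_c, 5), round(p_d, 4))
```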

37.
a. By the Fundamental Counting Principle, with n1 = 3, n2 = 4, and n3 = 5, there are (3)(4)(5) = 60 runs.

b. With n1 = 1 (just one temperature), n2 = 2, and n3 = 5, there are (1)(2)(5) = 10 such runs.

c. For each of the 5 specific catalysts, there are (3)(4) = 12 pairings of temperature and pressure. Imagine
we separate the 60 possible runs into those 5 sets of 12. The number of ways to select exactly one run
from each of these 5 sets of 12 is (12 choose 1)⁵ = 12⁵. Since there are (60 choose 5) ways to select the 5 runs overall,
the desired probability is 12⁵/(60 choose 5) = .0456.
39. In a-c, the size of the sample space is N = (5+6+4 choose 3) = (15 choose 3) = 455.

a. There are four 23W bulbs available and 5 + 6 = 11 non-23W bulbs available. The number of ways to
select exactly two of the former (and, thus, exactly one of the latter) is (4 choose 2)(11 choose 1) = 6(11) = 66. Hence,
the probability is 66/455 = .145.

b. The number of ways to select three 13W bulbs is (5 choose 3) = 10. Similarly, there are (6 choose 3) = 20 ways to
select three 18W bulbs and (4 choose 3) = 4 ways to select three 23W bulbs. Put together, there are 10 + 20 + 4
= 34 ways to select three bulbs of the same wattage, and so the probability is 34/455 = .075.

c. The number of ways to obtain one of each type is (5 choose 1)(6 choose 1)(4 choose 1) = (5)(6)(4) = 120, and so the probability
is 120/455 = .264.


d. Rather than consider many different options (choose 1, choose 2, etc.), re-frame the problem this way:
at least 6 draws are required to get a 23W bulb iff a random sample of five bulbs fails to produce a
23W bulb. Since there are 11 non-23W bulbs, the chance of getting no 23W bulbs in a sample of size 5
is (11 choose 5)/(15 choose 5) = 462/3003 = .154.

41.
a. (10)(10)(10)(10) = 10⁴ = 10,000. These are the strings 0000 through 9999.

b. Count the number of prohibited sequences. There are (i) 10 with all digits identical (0000, 1111, …,
9999); (ii) 14 with sequential digits (0123, 1234, 2345, 3456, 4567, 5678, 6789, and 7890, plus these
same seven descending); (iii) 100 beginning with 19 (1900 through 1999). That's a total of 10 + 14 +
100 = 124 impermissible sequences, so there are a total of 10,000 − 124 = 9876 permissible sequences.
The chance of randomly selecting one is just 9876/10,000 = .9876.

c. All PINs of the form 8xx1 are legitimate, so there are (10)(10) = 100 such PINs. With someone
randomly selecting 3 such PINs, the chance of guessing the correct sequence is 3/100 = .03.

d. Of all the PINs of the form 1xx1, eleven are prohibited: 1111, and the ten of the form 19x1. That leaves
89 possibilities, so the chances of correctly guessing the PIN in 3 tries is 3/89 = .0337.

43. There are (52 choose 5) = 2,598,960 five-card hands. The number of 10-high straights is (4)(4)(4)(4)(4) = 4⁵ = 1024
(any of four 6s, any of four 7s, etc.). So, P(10-high straight) = 1024/2,598,960 = .000394. Next, there are ten "types"
of straight: A2345, 23456, …, 910JQK, 10JQKA. So, P(straight) = 10 × 1024/2,598,960 = .00394. Finally, there
are only 40 straight flushes: each of the ten sequences above in each of the 4 suits makes (10)(4) = 40. So,
P(straight flush) = 40/2,598,960 = .00001539.

Section 2.4

45.
a. P(A) = .106 + .141 + .200 = .447, P(C) = .215 + .200 + .065 + .020 = .500, and P(A ∩ C) = .200.

b. P(A | C) = P(A ∩ C)/P(C) = .200/.500 = .400. If we know that the individual came from ethnic group 3, the
probability that he has Type A blood is .40. P(C | A) = P(A ∩ C)/P(A) = .200/.447 = .447. If a person has Type A
blood, the probability that he is from ethnic group 3 is .447.

c. Define D = "ethnic group 1 selected." We are asked for P(D | B′). From the table, P(D ∩ B′) = .082 +
.106 + .004 = .192 and P(B′) = 1 − P(B) = 1 − [.008 + .018 + .065] = .909. So, the desired probability is
P(D | B′) = P(D ∩ B′)/P(B′) = .192/.909 = .211.


47.
a. Apply the addition rule for three events: P(A ∪ B ∪ C) = .6 + .4 + .2 − .3 − .15 − .1 + .08 = .73.

b. P(A ∩ B ∩ C′) = P(A ∩ B) − P(A ∩ B ∩ C) = .3 − .08 = .22.

c. P(B | A) = P(A ∩ B)/P(A) = .3/.6 = .50 and P(A | B) = P(A ∩ B)/P(B) = .3/.4 = .75. Half of students with Visa cards also
have a MasterCard, while three-quarters of students with a MasterCard also have a Visa card.

d. P(A ∩ B | C) = P([A ∩ B] ∩ C)/P(C) = P(A ∩ B ∩ C)/P(C) = .08/.2 = .40.

e. P(A ∪ B | C) = P([A ∪ B] ∩ C)/P(C) = P([A ∩ C] ∪ [B ∩ C])/P(C). Use a distributive law:
= [P(A ∩ C) + P(B ∩ C) − P([A ∩ C] ∩ [B ∩ C])]/P(C) = [P(A ∩ C) + P(B ∩ C) − P(A ∩ B ∩ C)]/P(C)
= (.15 + .1 − .08)/.2 = .85.

49.
a. P(small cup) = .14 + .20 = .34. P(decaf) = .20 + .10 + .10 = .40.

b. P(decaf | small) = P(small ∩ decaf)/P(small) = .20/.34 = .588. 58.8% of all people who purchase a small cup of
coffee choose decaf.

c. P(small | decaf) = P(small ∩ decaf)/P(decaf) = .20/.40 = .50. 50% of all people who purchase decaf coffee choose
the small size.

51.
a. Let A = child has a food allergy, and R = child has a history of severe reaction. We are told that P(A) =
.08 and P(R | A) = .39. By the multiplication rule, P(A ∩ R) = P(A) × P(R | A) = (.08)(.39) = .0312.

b. Let M = the child is allergic to multiple foods. We are told that P(M | A) = .30, and the goal is to find
P(M). But notice that M is actually a subset of A: you can't have multiple food allergies without
having at least one such allergy! So, apply the multiplication rule again:
P(M) = P(M ∩ A) = P(A) × P(M | A) = (.08)(.30) = .024.

53. P(B | A) = P(A ∩ B)/P(A) = P(B)/P(A) = .05/.60 = .0833 (since B is contained in A, A ∩ B = B).


55. Let A = {carries Lyme disease} and B = {carries HGE}. We are told P(A) = .16, P(B) = .10, and
P(A ∩ B | A ∪ B) = .10. From this last statement and the fact that A∩B is contained in A∪B,

.10 = P(A ∩ B)/P(A ∪ B) ⇒ P(A ∩ B) = .10·P(A ∪ B) = .10[P(A) + P(B) − P(A ∩ B)] = .10[.16 + .10 − P(A ∩ B)] ⇒
1.1·P(A ∩ B) = .026 ⇒ P(A ∩ B) = .02364.

Finally, the desired probability is P(A | B) = P(A ∩ B)/P(B) = .02364/.10 = .2364.

57. P(B | A) > P(B) iff P(B | A) + P(B′ | A) > P(B) + P(B′ | A) iff 1 > P(B) + P(B′ | A) by Exercise 56 (with the
letters switched). This holds iff 1 − P(B) > P(B′ | A) iff P(B′) > P(B′ | A), QED.

59. The required probabilities appear in the tree diagram below.

[Tree diagram; e.g., the top path gives .4 × .3 = .12 = P(A1 ∩ B) = P(A1)P(B | A1).]

a. P(A2 ∩ B) = .21.

b. By the law of total probability, P(B) = P(A1 ∩ B) + P(A2 ∩ B) + P(A3 ∩ B) = .455.

c. Using Bayes' theorem, P(A1 | B) = P(A1 ∩ B)/P(B) = .12/.455 = .264; P(A2 | B) = .21/.455 = .462; P(A3 | B) = 1 −
.264 − .462 = .274. Notice the three probabilities sum to 1.

61. The initial ("prior") probabilities of 0, 1, 2 defectives in the batch are .5, .3, .2. Now, let's determine the
probabilities of 0, 1, 2 defectives in the sample based on these three cases.
If there are 0 defectives in the batch, clearly there are 0 defectives in the sample.
P(0 def in sample | 0 def in batch) = 1.
If there is 1 defective in the batch, the chance it's discovered in a sample of 2 equals 2/10 = .2, and the
probability it isn't discovered is 8/10 = .8.
P(0 def in sample | 1 def in batch) = .8, P(1 def in sample | 1 def in batch) = .2.
If there are 2 defectives in the batch, the chance both are discovered in a sample of 2 equals
(2/10)(1/9) = .022; the chance neither is discovered equals (8/10)(7/9) = .622; and the chance exactly 1 is
discovered equals 1 − (.022 + .622) = .356.
P(0 def in sample | 2 def in batch) = .622, P(1 def in sample | 2 def in batch) = .356,
P(2 def in sample | 2 def in batch) = .022.

These calculations are summarized in the tree diagram below. Probabilities at the endpoints are
intersectional probabilities, e.g. P(2 def in batch ∩ 2 def in sample) = (.2)(.022) = .0044.

[Tree diagram with endpoint (joint) probabilities .50, .24, .06, .1244, .0712, .0044.]

a. Using the tree diagram and Bayes' rule,

P(0 def in batch | 0 def in sample) = .5/(.5 + .24 + .1244) = .578
P(1 def in batch | 0 def in sample) = .24/(.5 + .24 + .1244) = .278
P(2 def in batch | 0 def in sample) = .1244/(.5 + .24 + .1244) = .144

b. P(0 def in batch | 1 def in sample) = 0
P(1 def in batch | 1 def in sample) = .06/(.06 + .0712) = .457
P(2 def in batch | 1 def in sample) = .0712/(.06 + .0712) = .543
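The prior-times-likelihood bookkeeping behind the tree diagram can be mechanized; the sketch below (not part of the original solution) multiplies the priors by the sampling probabilities derived above and then normalizes, reproducing the posteriors in (a) and (b):

```python
# Prior probabilities of 0, 1, 2 defectives in the batch
prior = {0: .5, 1: .3, 2: .2}

# P(number of defectives in the sample of 2 | number in the batch), from the text
like = {0: {0: 1.0},
        1: {0: .8, 1: .2},
        2: {0: .622, 1: .356, 2: .022}}

# Joint probabilities P(batch = b and sample = s)
joint = {(b, s): prior[b] * p for b, row in like.items() for s, p in row.items()}

# Posterior P(batch = b | sample = s) = joint / marginal, for s = 0 and s = 1
for s in (0, 1):
    marginal = sum(p for (_, s2), p in joint.items() if s2 == s)
    post = {b: round(joint.get((b, s), 0) / marginal, 3) for b in prior}
    print(f"sample defectives = {s}: posterior = {post}")
```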

63.
a. [Tree diagram: P(A) = .75, P(B | A) = .9, P(B | A′) = .8; P(C | A∩B) = .8, P(C | A∩B′) = .6,
P(C | A′∩B) = .7, P(C | A′∩B′) = .3.]
b. From the top path of the tree diagram, P(A ∩ B ∩ C) = (.75)(.9)(.8) = .54.

c. Event B ∩ C occurs twice on the diagram: P(B ∩ C) = P(A ∩ B ∩ C) + P(A′ ∩ B ∩ C) = .54 +
(.25)(.8)(.7) = .68.

d. P(C) = P(A ∩ B ∩ C) + P(A′ ∩ B ∩ C) + P(A ∩ B′ ∩ C) + P(A′ ∩ B′ ∩ C) = .54 + .14 + .045 + .015 = .74.


e. Rewrite the conditional probability first: P(A | B ∩ C) = P(A ∩ B ∩ C)/P(B ∩ C) = .54/.68 = .7941.

65. A tree diagram can help. We know that P(day) = .2, P(1-night) = .5, P(2-night) = .3; also, P(purchase | day)
= .1, P(purchase | 1-night) = .3, and P(purchase | 2-night) = .2.

Apply Bayes' rule: e.g., P(day | purchase) = P(day ∩ purchase)/P(purchase) =
(.2)(.1)/[(.2)(.1) + (.5)(.3) + (.3)(.2)] = .02/.23 = .087.

Similarly, P(1-night | purchase) = (.5)(.3)/.23 = .652 and P(2-night | purchase) = .06/.23 = .261.

67. Let T denote the event that a randomly selected person is, in fact, a terrorist. Apply Bayes' theorem, using
P(T) = 1,000/300,000,000 = .0000033:

P(T | +) = P(T)P(+ | T)/[P(T)P(+ | T) + P(T′)P(+ | T′)] = (.0000033)(.99)/[(.0000033)(.99) + (1 − .0000033)(1 − .999)]
= .003289. That is to say, roughly 0.3% of all people "flagged" as terrorists would be actual terrorists in this scenario.
69. The tree diagram below summarizes the information in the exercise (plus the previous information in
Exercise 59). Probabilities for the branches corresponding to paying with credit are indicated at the far
right. ("extra" = "plus")

[Tree diagram: gasoline grade (regular, plus, premium), then fill/no fill, then pay with credit; the six
credit endpoint probabilities are .0840, .1400, .1260, .0700, .0625, .0500.]

a. P(plus ∩ fill ∩ credit) = (.35)(.6)(.6) = .1260.

b. P(premium ∩ no fill ∩ credit) = (.25)(.5)(.4) = .05.

c. From the tree diagram, P(premium ∩ credit) = .0625 + .0500 = .1125.

d. From the tree diagram, P(fill ∩ credit) = .0840 + .1260 + .0625 = .2725.

e. P(credit) = .0840 + .1400 + .1260 + .0700 + .0625 + .0500 = .5325.

f. P(premium | credit) = P(premium ∩ credit)/P(credit) = .1125/.5325 = .2113.


Section 2.5

71.
a. Since the events are independent, A′ and B′ are independent, too. (See the paragraph below
Equation 2.7.) Thus, P(B′ | A′) = P(B′) = 1 − .7 = .3.

b. Using the addition rule, P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = .4 + .7 − (.4)(.7) = .82. Since A and B are
independent, we are permitted to write P(A ∩ B) = P(A)P(B) = (.4)(.7).

c. P(AB′ | A ∪ B) = P(AB′ ∩ (A ∪ B))/P(A ∪ B) = P(AB′)/P(A ∪ B) = P(A)P(B′)/P(A ∪ B) = (.4)(1 − .7)/.82 =
.12/.82 = .146.

73. From a Venn diagram, P(B) = P(A′ ∩ B) + P(A ∩ B) ⇒ P(A′ ∩ B) = P(B) − P(A ∩ B). If A and B
are independent, then P(A′ ∩ B) = P(B) − P(A)P(B) = [1 − P(A)]P(B) = P(A′)P(B). Thus, A′ and B are
independent.

Alternatively, P(A′ | B) = P(A′ ∩ B)/P(B) = [P(B) − P(A ∩ B)]/P(B) = [P(B) − P(A)P(B)]/P(B) = 1 − P(A) = P(A′).

75. Let Eᵢ be the event that an error was signaled incorrectly at the i-th point.
We want P(at least one signaled incorrectly) = P(E₁ ∪ … ∪ E₁₀). To use independence, we need
intersections, so apply deMorgan's law: P(E₁ ∪ … ∪ E₁₀) = 1 − P(E₁′ ∩ … ∩ E₁₀′). P(E′) = 1 − .05 = .95,
so for 10 independent points, P(E₁′ ∩ … ∩ E₁₀′) = (.95)⋯(.95) = (.95)^10. Finally, P(E₁ ∪ E₂ ∪ … ∪ E₁₀) =
1 − (.95)^10 = .401. Similarly, for 25 points, the desired probability is 1 − (P(E′))^25 = 1 − (.95)^25 = .723.
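A one-line numeric check of this complement calculation (a sketch, not from the text):

```python
p_correct = 0.95                                 # probability a single point is signaled correctly
for n in (10, 25):
    p_at_least_one_error = 1 - p_correct ** n
    print(n, round(p_at_least_one_error, 3))     # .401 for n = 10, .723 for n = 25
```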

77. Let p denote the probability that a rivet is defective.

a. .15 = P(seam needs reworking) = 1 − P(seam doesn't need reworking) =
1 − P(no rivets are defective) = 1 − P(1st isn't def ∩ … ∩ 25th isn't def) =
1 − (1 − p)⋯(1 − p) = 1 − (1 − p)^25.
Solve for p: (1 − p)^25 = .85 ⇒ 1 − p = (.85)^(1/25) ⇒ p = 1 − .99352 = .00648.

b. The desired condition is .10 ≥ 1 − (1 − p)^25. Again, solve for p: (1 − p)^25 = .90 ⇒
p = 1 − (.90)^(1/25) = 1 − .99579 = .00421.

79. Let A1 = older pump fails, A2 = newer pump fails, and x = P(A1 ∩ A2). The goal is to find x. From the Venn
diagram below, P(A1) = .10 + x and P(A2) = .05 + x. Independence implies that x = P(A1 ∩ A2) = P(A1)P(A2)
= (.10 + x)(.05 + x). The resulting quadratic equation, x² − .85x + .005 = 0, has roots x = .0059 and x =
.8441. The latter is impossible, since the probabilities in the Venn diagram would then exceed 1.
Therefore, x = .0059.

[Venn diagram for A1 and A2.]
35
C 2016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted 10 a publicly accessible website, in whole or in part.
Chapter 2: Probability

81. Using the hints, let PtA,) ~ p, and x ~ p2 Following the solution provided in the example,
P(system lifetime exceeds to) = p' + p' - p' ~ 2p' - p' ~ 2x - x'. Now, set this equal to .99:
2x _ x' = .99 => x'- 2x + .99 = 0 =:> x = 0.9 or 1.1 => p ~ 1.049 or .9487. Since the value we want is a
probability and cannot exceed 1, the correct answer is p ~ .9487.

83. We'll need to know P(both detect the defect) = 1- P(at least one doesn't) = I - .2 = .8.
a. P(I" detects n 2" doesn't) = P(I" detects) - P(I" does n 2" does) = .9 - .8 = .1.
Similarly, P(l" doesn't n 2" does) = .1, so P(exactly one does)= .1 + .1= .2.

b. P(neither detects a defect) ~ 1- [P(both do) + P(exactly 1 does)] = 1 - [.8+.2] ~ O. That is, under this
model there is a 0% probability neither inspector detects a defect. As a result, P(all 3 escape) =
(0)(0)(0) = O.

85.
a. Let D1 := detection on 1SI fixation, D2:= detection on 2nd fixation.
P(detection in at most 2 fixations) ~ P(D,) + P(D; n D,) ; since the fixations are independent,
P(D,) + P(D;nD,) ~ P(D,) + P(D;) P(D,) = p + (I - p)p = p(2 - p).

b. Define D" D" ... .D; as in a. Then P(at mnst n fixations) ~


P(D,) + P(D;nD,) + P(D; nD; nD,)+ ... + P(D; nD; nnD;.1 nD,,)=
p + (l-p)p+ (l-p)'p + ... + (I -pr'p ~p[1 + (I -p) + (1- p)'+ ... + (I -p)""] ~

pl-(l-p)' I-(l-p)".
1-(1- p)
Alternatively, P(at most n fixations) = I - P(at least n+ 1 fixations are required) =
I-P(no detection in I" n fixations) = I - P(D; nD; nnD;)= I -(I -p)".

c. P(nodetection in 3 fixations) ~ (I - p)'.

d. P(passes inspection) = P({not flawed} u {flawed and passes})


~ P(not flawed) + P(flawed and passes)
= .9 + P(flawed) P(passes I flawed) = .9 + (.1)(1 - p)'.

e.
.
Borrowing from d, P(flawed I passed) = ~==-:..:..==:<.
n
P(flawed
P(passed)
passed) .1(1- p)'
.9+.I(I-p)
, . For p~ .5,

P(flawedlpassed)~ .1(1-.5)', .0137 .


.9+.1(1-.5)

36
02016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 2: Probability

87.
a. Use the information provided and the addition rule:
P(A, vA,) = peA,) + peA,) - P(A, n A,) => P(A, n A,) ~ peA,) + peA,) - P(A, u A,) = .55 + .65 - .80
= .40.

b. By definition, peA, I A,) = peA, n A,) = .40 = .5714. If a person likes vehicle #3, there's a 57.14%
P(A,) 70
chance s/he will also like vehicle #2.

c. No. From b, P(A, I A,) = .5714 * peA,) = .65. Therefore, A, and A] are not independent. Alternatively,
peA, n AJ) = .40 * P(A,)P(AJ) = (.65)(.70) = .455.

d. The goal is to find peA, u A, I A;) ,i.e. P([A, v A,l n A:J . The denominator is simply I - .55 ~ .45.
peA;)
There are several ways to calculate the numerator; the simplest approach using the information
provided is to draw a Venn diagram and observe that P([A, u A]l n A;) = Pi A, u A, vA,) - peA,) =

.88 - .55 = .33. Hence, PtA; v A] I A;) ~ 2:l.~ .7333 .


.45

89. The question asks for P(exactly one tag lost I at most one tag lost) = PC, nC;)v(C; nC,) I (C, nC,l').
Since the first event is contained in (a subset 01) the second event, this equals
PC,nC;)u(C;nC, P(C,nC;)+P(C;nC,) P(C,)P(C;)+P(C;)P(C')b . d d
Y 10 epen ence =
PC, nC,)') 1- PiC, nC,) 1- P(C,)P(C,)
".(1-".) + (1- ".)". 2".(1- ".) 2".
1_".' 1_".' 1+".

Supplementary Exercises

91.
a. P(line I) = 500 = .333;
1500
_.50--,(_50~O
),--+_.4--:-4::-:(
4--:-00~)_+_.4--,0
(,--60-,-,-0)666 = .444.
P(crack) =
1500 1500

b. This is one of the percentages provided: P(blemish I line 1) = .15 .

.IO( 500) + .08( 400)+ .15( 600) 172 .


c. P(surface defect)
1500 1500 '
.10(500) 50
P(lme I n surface defect) = ;
1500 1500
. 5011500 50
so, P(lme I I surface defect) = - = .29 L
17211500 172

37
C 2016 Cengage Learning. All Rights Reserved, May not be scanned, copied or duplicated, or posted 10 8 publicly accessible website, in whole or in part.
(

Chapter 2: Probability

93. Apply the addition rule: P(AuB) ~ peA) + P(B)- peA n B) => .626 = peA) + PCB) - .144. Apply
independence: peA n B) = P(A)P(B) = .144.
So, peA) + PCB)=770 and P(A)P(B) = .144.
Let x = peA) and y ~ PCB). Using the first equation, y = .77 - x, and substituting this into the second
equation yields x(.77 - x) = .144 or x' - .77x + .144 = O. Use the quadraticformula to solve:

x .77)(-.77)'-(4)(l)(.I44) .77.13 = .32 or. 45 . S'tnce x = P(A)' IS assume d to be t he Iarger


2(1) 2
probability, x = peA) ~ .45 and y = PCB) = .32.

95.
a. There are 5! = 120 possible orderings, so P(BCDEF) = ';0 = .0833.
b. The number of orderings in which F is third equals 4x3x I *x2x 1 = 24 (*because F must be here), so
P(F is third) ~ ,','0 ~ .2. Or more simply, since the five friends are ordered completely at random, there
is a ltf; chance F is specifically in position three.

e. Similarly P(F last) = 4x3x2xlxl .2.


, 120
4 4 (4J10
d. P(F hasn't heard after 10times)=P(noton#1 nnoton#2n .. , n noton#IO)= S"x ... x"5="5 =

.1074.

97. When three experiments are performed, there are 3 different ways in which detection can occur on exactly
2 of the experiments: (i) #1 and #2 and not #3; (ii) #1 and not #2 and #3; and (iii) not #1 and #2 and #3. If
the impurity is present, the probability of exactly 2 detections in three (independent) experiments is
(.8)(.8)(.2) + (.8)(.2)(.8) + (.2)(.8)(.8) = .384. If the impurity is absent, the analogous probability is
3(.1)(.1)(.9) ~ .027. Thus, applying Bayes' theorem, P(impurity is present I detected in exactly 2 out of3)
P(deteeted in exactly 2npresent) (.384)(.4)
.905.
P(detected in exactly 2) (.384)(.4)+(.027)(.6)

99. Refer to the tree diagram below.

,r------ .95

.95
good .024
.6
.05 good
ba .8good ~ .40
ba .016
.20
bad "---- .010

a. pcpass inspection) = pcpass initially u passes after recrimping) =


pcpass initially) + P(fails initially n goes to recrimping n is corrected after recrimping) ~
.95 + (.05)(.80)(.60) (following path "bad-good-good" on tree diagram) = .974.

38
(:I 2016 Cengegc Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part

l
I__ .....
L ,
Chapter 2: Probability

b.
..
P( nee de d no recnmpmg passe
I d . ion)
inspecnon =
P(passed initially)
.. ~=.9754.
P(passed inspection) .974

101. Let A = I" functions, B = 2"' functions, so PCB) = .9, peA u B) ~ .96, Pt A nB)=.75. Use the addition rule:
peA u B) = peA) + P(B)-P(A n B) => .96 ~ peA) +.9 - .75 => peA) = .81.
Therefore PCB I A) = PCB n A) = l2 = .926.
, peA) .81

103. A tree diagram can also heip here.


a. peel n L) ~ P(E,)P(L / EI) = (.40)(.02) ~ .008.

b. The law of total probability gives peL) = L P(E;)P(L / E;) = (.40)(.02) + (.50)(.01) + (.10)(.05) = .018.

P(E,)P(L' IE,) (.40)(.98)


c. peE' I L') = I-P(E I L') = 1- peE, nL') .601.
, I peL') 1- peL) 1-.018

105. This is the famous "Birthday Problem" in probability.


a. There are 36510 possible lists of birthdays, e.g. (Dec 10, Sep 27, Apr I, ...).Among those, the number
with zero matching birthdays is PIO"" (sampling ten birthdays without replacement from 365 days. So,
P(all different) = ~O.'6S (365)(364) .. (356) .883. P(at least two the same) ~ 1- .883 ~ .1 17.
36510 (365)10

. P.
b. The general formula IS P(atleastlwo the same) ~ I -
k.36;. By trial and error, this probability equals
365
.476 for k = 22 and equals .507 for k = 23. Therefore, the smallest k for which k people have atieast a
50-50 chance of a birthday match is 23.

c. There are 1000 possible 3-digit sequences to end a SS number (000 through 999). Using the idea from
. p.
a, Peat least two have the same SS ending) = I - 10.10':: ~ I - .956 = .044.
1000
Assuming birthdays and SS endings are independent, P(atleast one "coincidence") = P(birthday
coincidence u SS coincidence) ~ . I17 + .044 - (.117)(.044) = .156.

107. P(detection by the end of the nth glimpse) = I - P(not detected in first n glimpses) =
n
I -peG; nG; n ..nG;)= I - P(G;)P(G;)P(G;) ~ I - (I - p,)(I-p,) ... (I - p,,) ~ I - f](l- Pi)

109.

a. P(allincorrectroom)~ .!..=_I =.0417.


41 24

b. The 9 outcomes which yield completely incorrect assignments are: 2143,2341,2413,3142,3412,

3421,4123,4321, and 4312, so P(allincorrect) ~ ..2-. ~ .375.


24

39
C 2016 Cengage Learning, All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in pan.
Chapter 2: Probability

Note: s ~ 0 means that the very first candidate interviewed is hired. Each entry below is the candidate hired
IU.
for the given policy and outcome.

s=o s~1 s~2 s=3 Outcome s=o s=1 s=2 s= 3


Outcome
I 4 4 4 3124 3 I 4 4
1234
I 3 3 3 3142 3 I 4 2
1243
I 4 4 4 3214 3 2 I 4
1324
I 2 2 2 3241 3 2 I 1
1342
I 3 3 3 3412 3 1 I 2
1423
I 2 2 2 3421 3 2 2 1
1432
2 I 4 4 4123 4 I 3 3
2134
2 1 3 3 4132 4 I 2 2
2143
2 1 1 4 4213 4 2 I 3
2314
2341 2 I I 1 4231 4 2 I I
2 1 I 3 4312 4 3 I 2
2413
2 I 1 1 4321 4 3 2 1
2431

From the table, we derive the following probability distribution based on s:

s 0 2 3

P(hire #1) 6 II 10 6
-
24 24 24 24

Therefore s ~ I is the best policy,

lB. peAl) ~ P(draw slip I or 4) = y,; peA,) ~ P(draw slip 2 or 4) = y,;


peA,) ~ P(draw slip 3 or 4) = y,; P(A, n A,) = P(draw slip 4) = v..;
peA, n A,) ~ P(draw slip 4) = v..; peA, n A,) ~ P(draw slip 4) = v..
Hence peA, "A,) ~ P(A,)P(A,) = v.; peA, n A,) = P(A,)P(A,) = v.; and
peAl n A,) ~ P(A,)P(A,) = v... Thus, there exists pairwise independence. However,
peA, n A, "A,) ~ P(draw slip 4) = V. * Yo = P(A,)P(A,)P(A,), so the events are not mutually independent.

40
C 2016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part,

l l.
CHAPTER 3
Section 3.1

1.
S: FFF SFF FSF FFS FSS SFS SSF SSS

X: o 2 2 2 3

3. Examples include: M = the difference between the large and the smaller outcome with possible values 0, I,
2,3,4, or 5; T= I if the sum of the two resulting numbers is even and T~ 0 otherwise, a Bernoulli random
variable. See tbe back of tbe book for otber examples.

5. No. In the experiment in which a coin is tossed repeatedly until a H results, let Y ~ I if the experiment
terminates with at most 5 tosses and Y ~ Ootberwise. The sample space is infinite, yet Y has only two
possible values. See the back of the book for another example.

7.
a. Possible values of X are 0, 1, 2, ... , 12; discrete.

b. With n = # on the list, values of Yare 0, 1,2, ... ,N; discrete.

c. Possible values of U are I, 2, 3, 4, ". ; discrete.

d. Possible values of X are (0, 00) if we assume that a rattlesnake can be arbitrarily short or long; not
discrete.

e. Possible values of Z are all possible sales tax percentages for online purchases, but there are only
finitely-many of these. Since we could list these different percentages (ZI, Z2, . '" ZN), Z is discrete.

f. Since 0 is the smallest possible pH and 14 is the largest possible pH, possible values of Yare [0,14];
not discrete.

g. With m and M denoting the minimum and maximum possible tension, respectively, possible values of
Xare [m, M]; not discrete.

h. The number of possible tries is I, 2, 3, ... ; eacb try involves 3 racket spins, so possible values of X are
3,6,9, 12, 15, ".; discrete.

9.
a. Returns to 0 can occur only after an even number of tosses, so possible X values are 2, 4, 6, 8, ....
Because the values of X are enumerable, X is discrete.

b. Nowa return to 0 is possible after any number of tosses greater than I, so possible values are 2, 3, 4, 5,
.... Again, X is discrete.

41
C 20 J 6 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 3: Discrete Random Variables and Probability Distributions

Section 3.2

11.
a.
I I
.. ~
azs r-r-

~
7
'"
c: o.lS r-r-
r-r-
'"
,.
,. o , , ,

b. P(X? 2) = p(2) + p(3) + p(4) = .30 + .15 + .10 = .55, while P(X> 2) = .15 + .10 = .25.

c. P(1 ~X~ 3) =p(l) + p(2) + p(3) = .25 + .30+ .15 = .70.

d. Who knows? (This is just a little joke by the author.)

13.
a. P(X5, 3) = p(O)+ p(l) + p(2) + p(3) = .10+.15+.20+.25 = .70.

b. P(X < 3) = P(X 5, 2) = p(O) + p(l) + p(2) = .45.

c. P(X? 3) = p(3) + p(4) + p(5) + p(6) = .55.


d. P(2 5,X5, 5)=p(2) + p(3) + p(4) + p(5) = .71.
e. The number of lines not in use is 6 - X, and P(2 :s 6 - X:s 4) = P(-4 :s -X:S -2) =
P(2 :SX~ 4)=p(2) + p(3) + p(4) = .65.

f. P(6 -X;' 4) = P(X 5, 2) = .10 + .15 + .20 = .45.

15.
a. (1,2) (1,3) (1,4) (1,5) (2,3) (2,4) (2,5) (3,4) (3,5) (4,5)

b. X can only take on the values 0, 1, 2. p(O) = P(X = 0) = P( {(3,4) (3,5) (4,5)}) ~ 3/10 = .3;
p(2) =P(X=2) = P({(l,2)}) = 1110= .1;p(l) = P(X= 1) = 1- [P(O) + p(2)] = .60; and otherwisep(x)
= O.
c. F(O) = P(X 5, 0) = P(X = 0) = .30;
F(I) =P(X5, I) =P(X= 0 or 1) = .30 + .60= .90;
F(2) = P(X 52) = 1.

Therefore, the complete cdf of X is

42
C 2016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.

l.,
b
Chapter 3: Discrete Random Variables and Probability Distributions

0 x<O

F(x) =
j .30 O';x<1
.90 l';x<2
I 2,;x

17.
a. p(2) ~ P(Y ~ 2) ~ P(tirst 2 batteries are acceptable) = P(AA) ~ (.9)(.9) = .81.

b. p(3) ~ P(Y = 3) ~ P(UAA or AUA) = (.1)(.9)' + (.1)(.9)' = 2[(.1)(.9)'] = .162.

c. The fifth battery must be an A, and exactly one of the first four must also be an A.
Thus, p(5) ~ P(AUUUA or UAUUA or UUAUA or UUUAA) = 4[(.1)3(.9)'] ~ .00324.

d. p(y) ~ P(the y'h is anA and so is exactly one of the first y - I) ~ (y - 1)(.I)'~'(.9)', for y ~ 2, 3, 4, 5,

19. p(O) ~ P(Y ~ 0) ~ P(both arrive on Wed) = (.3)(.3) ~ .09;


p(l) = P(Y ~ I) ~ PW,Th) or (Th,W) or (Th,Th)) ~ (.3)(04) + (04)(.3) + (A)(A) = 040;
p(2) = P(Y~ 2) ~ PW,F) or (Th,F) or (F,W) or (F,Th) or (F,F)) ~ .32;
p(3) ~ I - [.09 + AO + .32] = .19.

21.
a. First, I + I/x > I for all x = I, ... ,9, so log(l + l/x) > O. Next, check that the probabilities sum to I:

Iog,,(l
x_j
+ I/ x) = i:log" (x+x I) = log" (3.)+
x.,1 I
log" (~) +...+ log" (!Q); using properties
2 9
of logs,

this equals log" (T x % x ... x I~) 10glO(l0) = = I.

b. Using the formula p(x) = 10glO(I + I/x) gives the foJlowing values: p(l) = .30 I, p(2) = .176, p(3) =
.125, p(4) = .097, p(5) = .079, p(6) ~ .067, p(7) = .058,p(8) ~ .05 l,p(9) = .046. The distribution
specified by Benford's Law is not uniform on these nine digits; rather, lower digits (such as I and 2)
are much 1110relikely to be the lead digit ofa number than higher digits (such as 8 and 9).

c. Thejumps in F(x) occur at 0, ... ,8. We display the cumulative probabilities here: F(I) = .301, F(2) =
0477, F(3) = .602, F(4) = .699, F(5) ~ .778, F(6) = .845, F(7) ~ .903, F(8) = .954, F(9) = I. So, F(x) =
Oforx< I;F(x)~.301 for I ::ox<2;
F(x) = 0477 for 2 ::Ox < 3; etc.

d. P(X::03)=F(3)~.602;P(X~5)~ I-P(X<5)= I-P(X::04)= I-F(4)~ 1-.699=.301.

23.
a. p(2) = P(X ~ 2) = F(3) - F(2) ~ .39 - .19 = .20.

b. P(X> 3) = I-P(X::o 3) ~ I -F(3) = 1- .67= .33.

c. P(2 ,;X,; 5) = F(5) -F(2-1) = F(5) - F(I) = .92 - .19 = .78.

d. P(2 <X < 5) = P(2 <X::o 4) = F(4) - F(2) ~ .92 -.39 ~ .53.

43
C 2016 Cengagc Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.

Chapter 3: Discrete Random Variables and Probability Distributions

25. p(O) ~ P( Y~ 0) ~ PCBfirst) ~ p;


p(I)=P(Y~ 1)~P(Gfirst, thenB)~(1-p)p;
p(2) = P(Y~ 2) ~ P(GGB) ~ (1- p)2p;
Cnntinuing, PlY) ~ Pry Gs and then a B) ~ (I - pIp for y = 0, I,2,3, ....

27.
a. The sample space consists of all possible permutations of the four numhers I, 2, 3, 4:

outcome x value outcome x value outcome x value


1234 4 2314 I 3412 0
1243 2 2341 0 3421 0
1324 2 2413 0 4132 I
1342 I 2431 1 4123 0
1423 I 3124 I 4213 I
1432 2 3142 0 4231 2
2134 2 3214 2 4312 0
2143 0 3241 1 4321 0

b. From the table in a, p(O) ~ P(X~ 0) ~ ,", ,p(1) = P(X= I) = 2~,p(2) ~ P(Y~ 2) ~ 2~'

p(3) ~P(X=3)~ 0, andp(4) =P(Y~4) = 2',.


Section 3.3

29.
a. E(X) = LXP(x) ~ 1(.05) + 2(.10) + 4(.35) + 8(.40) + 16(.10) = 6.45 OB.
alLt

b. VeX) = L(X- Ji)' p(x) = (1 - 6.45)'(.05) + (2 - 6.45)'(.1 0) + ... + (16 - 6.45)'(.10) = 15.6475.
allx

c. a> ~V(X) = J15.6475 ~ 3.956 OB.

d. E(X') = LX' p(x) = 1'(.05) + 2'(.10) + 4'(.35) + 82(.40) + 162(. I0) = 57.25. Using the shortcut
allx

formula, VeX)= E(x') - / = 57.25 - (6.45)' = 15.6475.

31. From the table in Exercise 12, E(Y) = 45(.05) + 46(.10) + ... + 55(.01) = 48.84; similarly,
E(Y') ~ 45'(.05) + 46'(.1 0) + '" + 55'(.01) = 2389.84; thus V(Y) = E(l") - [E(Y)]' = 2389.84 _ (48.84)2 ~
4.4944 and ay= J4.4944 = 212.
One standard deviation from the mean value of Y gives 48.84 2.12 = 46.72 to 50.96. So, the probability Y
is within one standard deviation of its mean value equals P(46.72 < Y < 50.96) ~ P(Y ~ 47, 48, 49, 50) =
.12 + .14 + .25 + .17 = .68.

44
02016 Cengege Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website. in whole or in part
Chapter 3: Discrete Random Variables and Probability Distributions

33.
I
a. E(X') = 2:X2.p(X)~O'(l-p)+ I'(P)=p.
-,",,0

b. V(X)=E(X')-[E(X)]' =p-fp]'=p(l-p).

c. E(X") ~ 0"(1 - p) + I"(P) = p. In fact, E(X') = P for any non-negative power n.

35. Let h,(X) and h,(X) equal the net revenue (sales revenue minus order cost) for 3 and 4 copies purchased,
respectively. [f3 magazines are ordered ($6 spent), net revenue is $4 - $6 ~ -$2 if X ~ 1,2($4) - $6 ~ $2
if X ~ 2, 3($4) - $6 = $6 if X ~ 3, and also $6 if X = 4, S, or 6 (since that additional demand simply isn't
met. The values of h,(X) can be deduced similarly. Both distributions are summarized below.

x 1 2 3 4 5 6
h,(x) -2 2 6 6 6 6
h, x -4 0 4 8 8 8
p(x) -.L. -L l 4 l -L
15 15 15 IS 15 15

6
Using the table, E[h,(X)) = 2:h, (x) p(x) ~ (-2)( 15) + ... + (6)( I~) ~ $4.93 .
..."'1
6
Similarly, E[h,(X)] ~ ~)4(X)' p(x)~ (-4)( 15) + ... + (8)( 1'5) ~ $S.33 .
... ",1

Therefore, ordering 4 copies gives slightly higher revenue, on the average.

37. Using the hint, (X) = i:x. (.1.)n =.1.n i:x = .1.[n(n+
...",1 n 2
I)] = n 2+ 1. Similarly,
.1'=1

E(X') = i:x' .(.1.) =.1. Ix' = .1.[n(n + 1)(2n + 1)] = (n + 1)(2n + I) , so


.1',,1 n n .-",1 n 6 6

V(X)=(n+I)(2n+l) ("+1)' =n'-l


6 2 12

39. From the table. E(X) ~ I xp(x) ~ 2.3, Eex') = 6.1, and VeX) ~ 6.1 - (2.3)' ~ .81. Each lot weighs 5 Ibs, so
the number nf pounds left = 100 - sx. Thus the expected weight left is (100 - 5X) = 100 - SE(X) ~
88.S lbs, and the variance of the weight left is V(IOO- SX) = V(-SX) ~ (-S)'V(X) = 2SV(X) = 20.25.

41. Use the hint: V(aX+b)= E[((aX+b)-E(aX+b))'] ~ 2:[ax+b-E(aX+b)]'p(x)=

2:[ax + 6 - (aJ1 +6)' p(x) = 2:[ax -aJ1]' p(x) = a'2:(x - 1')' p(x) = a'V(X).

43. With a ~ I and b ~-c,E(X -c) ~E(aX + b) = a E(X) + 6 ~E(X)-c.


= = = =
When c 1', E(X - 1') (X) - I' 1'- I' 0; i.e., the expected deviation from the mean is zero.

45
e 2016 Ccngage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in pan.
r

Chapter 3: Discrete Random Variables and Probability Distributions

45. a ~ X ~ b means that a ~x ~ b for all x in the range of X. Hence apex)~xp(x) :s bp(x) for all x, and
Lap(x)'; LXp(x)'; Lbp(x)
aLP(x)'; Lxp(x)';b LP(x)
aI,;E(X),;b1
a,;E(X),;b

Section 3.4

47.
a. 8(4;15,.7)=.001.

b. b(4; 15,.7) ~ 8(4; 15,.7) - B(3;15,.7) ~ .001 - .000 ~ .00 I.

c. Now p ~ 03 (multiple vehicles). b(6;15,o3) = 8(6; 15,.3) - B(5;15,.3) = .869 - .722 = .147.

d. P(2'; X,; 4) = B(4;15,.7) - B(I; 15,.7) ~ .00 I.

e. P(2';X) = I -P(X'; I) = I - 8(1;15,.7) ~ I - .000 ~ 1.

f. The information that II accidents involved multiple vehicles is redundant (since n = 15 and x ~ 4). So,
this is actually identical to b, and the answer is .001.

49. Let Xbe the number of "seconds," so X - Bin(6, .10).

a. P(X= I) = (: J p" (1- pro, ~ (~)c1)1(.9)5 = .3543


b. P(X~ 2) ~ I - [P(X = 0) + P(X = I)] = I - [( ~Je.I)O(.9)6 +( ~Je.l)I(.9)5] = I - [.5314 + .3543] =
.1143

c. Either 4 or 5 goblets must be selected.

Select 4 goblets with zero defects: P(X= 0) = (~}I)O(.9)4 = .6561.

Select 4 goblets, one of which has a defect, and the 5 is gOOd{


1h
(~)c 1)1(.9)3] x.9 = .26244

So, the desired probability is .6561 + .26244 = .91854.

51. LetXbe the number offaxes, so X - Bin(25, .25).


a. E(X) ~ np = 25(.25) = 6.25.

b. VeX) = np(l-p) = 25(.25)(.75) =4.6875, so SD(XJ = 2.165.

C. P(X> 6.25 + 2(2.165)) = P(X> 10.58) = I - P(X:S 10.58) = I - P(X:S 10) = I - B(10;25,.25) =030.

46
e 2016 Cengage Learning, All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.

b
Chapter 3: Discrete Random Variables and Probability Distributions

53. Let "success" = has at least one citation and define X = number of individuals with at least one citation.
Then X - Bin(n = 15, P ~ .4).
a. If at least 10 have no citations (failure), then at most 5 have had at least one (success):
P(X5, 5) = B(5; I 5,.40) = .403.

b. Half of 15 is 7.5, so less than half means 7 or fewer: P(X 5, 7) = B(7; 15,.40) = .787.

c. P(5 5,X5, 10) ~ P(X5, 10)-P(X5, 4) = .991- .217 = .774.

55. Let "success" correspond to a telephone that is submitted for service while under warranty and must be
replaced. Then p = P(success) = P(replaced I subrnittedj-Pfsubmitted) = (.40)(.20) = .08. Thus X, the
number among the company's 10 phones that must be replaced, has a binomial distribution with n = 10 and

p = .08, so P(X~ 2) = (1~}.08)2(.92)8 = .1478.

57. Let X = the number of flashlights that work, and let event B ~ {battery has acceptable voltage}.
Then P(flashlight works) ~ P(both batteries work) = P(B)P(B) ~ (.9)(.9) ~ .81. We have assumed here that
the batteries' voltage levels are independent.
Finally, X - Bin(10, .81), so P(X" 9) = P(X= 9) + P(X= 10) = .285 + .122 = .407.

59. In this example, X - Bin(25,p) withp unknown.


a. P(rejecting claim when p = .8) = P(X'S IS when p ~ .8) = B(l5; 25, .8) = .017.

b. P(not rejecting claim when p = .7) ~ P(X> 15 whenp = .7) = 1- P(X'S 15 when p = .7) =
= I-B(15;25,.7)=I-.189=.81J.
For p = .6, this probability is = I - B(l5; 25, .6) = 1 - .575 = .425.

c. The probability of rejecting the claim when p ~ .8 becomes B(14; 25, .8) = .006, smaller than in a
above. However, the probabilities of b above increase to .902 and .586, respectively. So, by changing
IS to 14, we're making it less likely that we will reject the claim when it's true (p really is" .8), but
more likely that we'll "fail" to reject the claim when it's false (p really is < .8).

61. [ftopic A is chosen, then n = 2. When n = 2, Peat least half received) ~ P(X? I) = I - P(X= 0) =

1-(~)c9).(.1)2~ .99.

If topic B is chosen, then n = 4. When n = 4, Peat least half received) ~ P(X? 2) ~ I - P(X 5, I) =

1- [( ~)c9)'(' 1)4+ (;}9)1 (.1)3 ] =9963

Thus topic B should be chosen if p = .9.

However, if p = .5, then the probabilities are. 75 for A and .6875 for B (using the same method as above),
so now A should be chosen.

63.

a. b(x;n, I -p)~ (:)(I-p)'(pr= C=JP)'-'(I-P)' ~b(n-x;n,p).

Conceptually, P(x S's when peS) = I - p) = P(n-x F's when P(F) ~ p), since the two events are
identical, but the labels Sand F are arbitrary and so can be interchanged (if peS) and P(F) are also
interchanged), yielding P(n-x S's when peS) = I - p) as desired.

47
(12016 Ccngngc Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
(

Chapter 3: Discrete Random Variables and Probability Distributions

b. Use the conceptual idea from a: R(x; n, I - p) = peat most x S's when peS) = I - p) =
Peat least II-X F's when P(F) ~ p), since these are the same event
= peat least II-X S'S when peS) = p), since the Sand F labels are arbitrary
= I _ Peat most II-x-l S's when peS) = p) = 1 - R(n-x-I; n, p).

c. Whenever p > .5, (1 _ p) <.5 so probabilities involving X can be calculated using the results a and b in
combination with tables giving probabilities only for p ,; .5.

65. a. Although there are three payment methods, we are only concerned with S = uses a debit card and F =
does not use a debit card. Thus we can use the binomial distribution. So, if X = the number of
customers who use a debit card, X - Bin(n = 100, P = .2). From this,
E(X) = lip = 100(.2) = 20, and VeX) = npq = 100(.2)( \ -.2) = \ 6.

h. With S = doesn't pay with cash, n = 100 and p = .7, so fl = lip = 100(.7) = 70, and V= 21.

67. Whenn ~ 20 andp ~.5, u> 10 and a> 2.236, so 20-= 4.472 and 30-= 6.708.
The inequality IX- 101 " 4.472 is satisfied if either X s 5 or X" \ 5, or
P(IX _ ,ul" 20-) = P(X'; 5 or X" 15) = .021 + .021 = .042. The inequality IX-I 012: 6.708 is satisfied if
either X <: 3 or X2: 17, so P(IA'- fll2: 30-) = P(X <: 3 or X2: 17) = .00 I + .001 = .002.

Section 3.5

69.
According to the problem description, X is hypergeometric with II = 6, N = 12, and M = 7.

a.

c:)
P(X=4)= (:)(~) 350

924
.379. P(X<:4) = I-P(X>4)= 1-[P(X=5)+P(X=6)]=

'l (ml""..
[ilij] 00"' , - iz: <.m

7 7
b. E(X) =11' ~ =6' 1 2= 3.5; VeX) = ('1~~~)6C72)(I- 1 2)= 0.795; 0" = 0.892. So,

P(X> fl + 0") = P(X> 3.5 + 0.892) = P(X> 4.392) = P(X = 5 or 6) = .121 (from part a).

c. We can approximate the hypergeometric distribution with tbe binomial if the population size and the
number of successes are large. Here, n = \ 5 and MIN = 40/400 = .1, so
hex; I 5, 40, 400) '" hex; I 5, .10). Using this approximation, P(X <: 5) '" R(5; 15, .10) = .998 from the
binomial tables. (This agrees with the exact answer to 3 decimal places.)

48
C 2016 Cengagc Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in pan.

L.,
Chapter 3: Discrete Random Variables and Probability Distributions

71.
a. Possible values of X are 5, 6, 7, 8, 9, 10. (In order to have less than 5 of the granite, there would have
to be more than 10 of the basaltic). X is hypergeometric, with n ~ 15, N = 20, and M = 10. So, the pmf
of X is

p(x)=h(x; 15, 10,20)~


C:)(I;~X)
-'..:.'-L;(2"-1~:'-)...::..L

The pmf is also provided in table form helow.

x 5 6 7 8 9 10
p(x) .0163 .1354 .3483 .3483 .1354 .0163

b. P(aIlIO of one kind or the other) = P(X~ 5) + P(X= ]0) = .0163 + .0163 = .0326.

c. M 10
1'= n-=15-=7.5 V(X)~ (20-15)
-- 15(10)(
- 1-- 10) =.9868'0-=.9934.
N 20' 20 -I 20 20 '

I' 0- = 7.5 .9934 = (6.5066, 8.4934), so we want P(6.5066 < X < 8.4934). That equals
P(X = 7) + P(X = 8) = .3483 + .3483 = .6966.

73.
a. The successes here are the top M = 10 pairs, and a sample of n = 10 pairs is drawn from among the N

...
= 20. The probability IS therefore h(x; 10, 10, 20) ~
(~)(I~~J
(~~)

b. Let X = the number among the top 5 who play east-west. (Now, M = 5.)
Then P(all of top 5 play the same direction) = P(X = 5) + P(X = 0) =

'(; ie.s, ")H(;,,,; 'O)~ G(gl ,(r:~IJ""

l~
c. Generalizing from earlier parts, we now have N = 2n; M = n. The probability distribution of X is

hypergeornetri W) ~'",' " '") ~ [: J <1 for x ~ 0, (," ,"0,

E(X)~ n 1
n-=-n and V(X)= (2n
-- - n) n_
n ( 1--n ) = n' .
2n 2 2n-1 2n 2/1 4(2/1-1)

49
C 2016 Cengege Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted 10a publicly accessible website, in whole or in pan.
r

Chapter 3: Discrete Random Variables and Probability Distributions

75. Let X = the number of boxes that do not contain a prize until you find 2 prizes. Then X - NB(2, .2).
0
a. With S = a female child and F = a male child, let X = the number of F's before the 2 ' S. Then

P(X=x)=nb~;2, .2)~ (X+2-1)(.2)'(I_.2Y=(X+ 1)(.2)'(.8)'.


2-1

b. P(4 boxes purchased) = P(2 boxes without prizes) = P(X = 2) = nb(2; 2, .2) = (2 + 1)(.2)'(.8)' = .0768.

2
c. Peatmost 4 boxes purchased) = P(X'; 2) = L,nb(x;2,.8) = .04 + .064 + .0768 = .1808.
,=<J

d. E(X) = r(l- p) 2(1- .2) = 8. The total number of boxes you expect to buy is 8 + 2 = 10.
P .2

This is identical to an experiment in which a single family has children until exactly 6 females have been
77.
bom (sincep =.5 for each of the three families). So,
6(1- .5) = 6' notice this is
p(x) = nb(x; 6, 5) ~ (x ;5)<.5)'(I-.5Y =(X;5)<.5 r .
Also, E(X) = r(1 ~ p) .5 '
just 2 + 2 + 2, the sum of the expected number of males born to each family.

Section 3.6
79. All these solutions are found using the cumulative Poisson table, F(x; Ji) = F(x; I).
a. P(X'; 5) = F(5; I) = .999.

e-112
b. P(X= 2) = -= .184. Or, P(X= 2) = F(2; I) - F(I; I) =.920- .736 = .184.
21

c. P(2,;X,;4)= P(X:o4)-P(X:o 1)=F(4; I)-F(I; 1)=.260.

d. For XPoisson, a> Ji, = I, so P(X> Ji + ,,) = P(X> 2) = I - P(X ~ 2) = 1 - F(2; I) = I - .920 = .080.

81. Let X - Poissonuz ~ 20).


a. P(X';1O)=F(10;20)=.011.

b. P(X> 20) = 1- F(20; 20) = 1- 559 = .441.

c. P( I0'; X,; 20) = F(20; 20) - F(9; 20) = .559 - .005 = .554;
P(10 <X< 20) = F(19; 20) - F(lO; 20) = .470 - .011 = .459.

d. E(X) = I' = 20, so o > flO = 4.472. Therefore, PCIi- 2" < X < Ji + 2,,) =
P(20 _ 8.944 < X < 20 + 8.944) = P(11.056 < X < 28.944) = P(X'; 28) - P(X'; II) =
F(28; 20) - F(ll; 20) = .966 - .021 = .945.

50
02016 Cengage Learning, All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 3: Discrete Random Variables and Probability Distributions

83. The exact distribution of X is binomial with n = 1000 and p ~ 1/200; we can approximate this distribution
by the Poisson distribution with I' = np = 5.
a. P(5'; X,; 8) ~ F(8; 5) - F(4; 5) = .492.

b. P(X?: 8) = I -P(X'; 7) = I -F(7; 5) ~ I - .867 = .133.

85.
e"8'
a. I' ~8 when t> I, so P(X~ 6) =61 ~.122; P(X?: 6) = I - F(5; 8) ~ .809; and

P(X?: 10) = I-F(9; 8) = .283.


b. I ~ 90 min = 1.5 hours, so p ~ 12; thus the expected number of arrivals is 12 and the standard deviation
is (J =.J12 ~ 3.464.

c. I = 2.5 hours implies that I'= 20. So, P(X?: 20) ~ I - F( 19; 20) = .530 and
P(X'; I 0) ~ F(lO; 20) = .011.

87.
a. For a two hour period the parameter oftbe distribution isl' ~ ot> (4)(2) ~ 8,
e-SgW
soP(X~ 10)= -- ~099.
10!

-220
b. For a 30minute period, at = (4)(.5) ~ 2, so P(X~ 0) ~ _e - = .135.
O!

c. The expected value is simply E(X) = at = 2.

89. In this example, a = rate of occurrence ~ I/(mean time between occurrences) = 1/.5 ~ 2.
a. For a two-year period, p = oi ~ (2)(2) = 4 loads.

b. Apply a Poisson model with I' = 4: P(X> 5) = I - P(X'; 5) = I - F(5; 4) = 1- .785 ~ .215.

c. For a = 2 and the value of I unknown, P(no loads occur during the period of length I) =
2
P(X = 0) = e' ' (21) e'''. Solve for t. e2':S.1 =:0 -21:S In(.I) => t? 1.1513 years.
O!

91.
a. For a quarter-acre (.25 acre) plot, the mean parameter is I'~(80)(.25) = 20, so P(X'; 16) = F( 16; 20) =
.221.

b. The expected number of trees is a(area) = 80 trees/acre (85,000 acres) = 6,800,000 trees.

c. The area of the circle is nr' = n(.1)2 = .01n = .031416 square miles, wbich is equivalent to
.031416(640) = 20.106 acres. Thus Xhas a Poisson distribution with parameter p ~ a(20.106) ~
80(20.1 06) ~ 1608.5. That is, the pmf of X is the function p(x; 1608.5).

51
02016 Cengagc Learning All Rights Reserved. May not be scanned, copied or duplicated, or posted to IIpublicly accessible website, in whole or in pan.
r

Chapter 3: Discrete Random Variables and Probability Distributions

93.
a. No events occur in the time interval (0, 1+ M) if and only if no events occur in (0, I) and no events
occur in (tl t + At). Since it's assumed the numbers of events in non-overlapping intervals are
independent (Assumption 3),
P(no events in (0, 1+ /11))~ P(no events in (0, I)) . P(no events in (I, I + M)) =>
Po(t + M) ~ poet) . P(no events in (t, t + t11)) ~ poet) . [I - abt - 0(t11)] by Assumption 2.

b. Rewrite a as poet + t11) = Po(l) - Po(I)[at11 + 0(/11)], so PoCt + /11)- Po(l) ~ -Po(I)[aM + 0(t11)] and
P(I+t1t)-P.(I) o(M). 0(/11) .
o -ap' (I) - P'(t) -- . Since -- --> 0 as M --+ 0 and the left-hand side of the
t1t t11 /11
. dP. (t) dP. (t)
equation converges to -'- as M --+ 0, we find that -'- = -aF.(t).
dt dt

c. Let poet) ~ e''". Then dP,(t) ~ !!.. [e-<U]= -ae~'


= --<J.Po(I),as desired. (This suggests that the
dt
dt
probahility of zero events in (0, t) for a process defined by Assumptions 1-3 is equal to e-<U.)

d.
. . . .
Similarly, the product rule implies -
d [e'"' (at)' ] -ae'a, (at)' kae,a, (at)'"
+="---"'>'::;"-'--
~ k! kl k!
e'"'(at)' e'a'(at)'"
-a +a ~ -aP,(t) + aP"I(t), as desired.
k! (k-I)!

Supplementary Exercises

95.
a. We'll find p(l) and p(4) first, since they're easiest, then p(2). We can then find p(3) by subtracting the
others from I.
p(l) ~ P(exactly one suit) = P(all ~) + P(all .,) + P(all +) + P(all +) ~

4 P(all ~) = 4 c: )e:) .00198, since there are 13 ~s and 39 other cards.

(5:)
p(4)=4'P(2-
-, , , -
I., 1+ 1")~4(':rn('ncn
(5:) . .
26375

p(2) ~ P(all "s and es, with;:: one of each) + .,. + P(all +s and +s with > one of each) =

(~). P(aU"s and ~s, with > one of each) =


6 [P(I ., and 4 ~) + P(2" and 3~) + P(3 ., and 2~) + P(4" and I +)] ~

6.[2 (l:rn
en c:)U)]
+2
(5;)
=6[18,590+44,616]
2,598,960
= .14592.

Finally,p(3) = I - [P(I) + p(2) + p(4)] = .58835.

52
C 2016 Ccngagc Learning. All Rights Reserved. May nor be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in pan.
Chapter 3: Discrete Random Variables and Probability Distributions

b. 11 ~ t,X'P(X)=3.114;a'=[t,X2p(X)]-(3.114)2=.405 ~(T=.636.

97.
a. From the description, X - Bin(15, .75). So, the pmf of X is b(x; 15, .75).

b. P(X> 10) = 1- P(X::: 10) ~ I -8(10;15, .75) ~ I - .314 = .686.

c. P(6 :::X::: 10) = 8(10; 15, .75) -8(5; 15, .75) ~ .314 - .001 = .313.

d. 11 = (15)(.75) = 11.75, a'= (15)(.75)(.25) = 2.81.

e. Requests can all be met if and only if X:<; 10, and 15 - X:<; 8, i.e. iff 7 :<;X:<; 10. So,
P(all requests met) ~ P(7 :<; X:<; 10) = 8(10; 15, .75) - 8(6; 15, .75) ~ .310.

99. Let X ~ the number of components out of 5 that function, so X - Bin(5, .9). Then a 3-out-of 5 system
works when X is at least 3, and P(X <! 3) = I- P(X :<; 2) ~ I - 8(2; 5, .9) ~ .991.

10],
a. X - Bin(n ~ 500,p = .005). Since n is large and p is small, X can be approximated by a Poisson
e-2.52.Y
distribution with u = np ~ 2.5. Tbe approximate pmf of X is p(x; 2.5)
xl

2
b. P(X= 5) = e- '2.5' .0668.
51

c. P(X<! 5) = I-P(X:::4) = I -p(4; 2.5) ~ I- .8912 = .1088.

103. Let Y denote the number oftests carried out.


For n = 3, possible Yvalues are I and 4. P(Y ~ I) ~ P(no one has the disease) ~ (.9)3 = .729 and P(Y= 4) ~
I - .729 = .271, so E(Y) ~ (I )(.729) + (4)(.271) ~ 1.813, as contrasted with the 3 tests necessary without
group testing.
For n ~ 5, possible values of Yare I and 6. P(Y ~ I) ~ P(no one has the disease) ~ (.9)' = .5905, so
P(Y= 6) = I - .5905 = .4095 and E(Y) = (1)(.5905) + (6)(.4095) = 3.0475, less than the 5 tests necessary
without group testing.

105. p(2) = P(X ~ 2) = P(SS) = p2, and p(3) = P(FSS) ~ (I - P )p'.

For x ~ 4, consider the first x - 3 trials and the last 3 trials separately. To have X ~ x, it must be the case
that the last three trials were FSS, and that two-successes-in-a-row was not already seen in the first x - 3
tries.

The probability of the first event is simply (I - p)p'.


The second event occurs if two-in-a-row hadn't occurred after 2 or 3 or ... or x - 3 tries. The probability of
this second event equals I - [p(2) + p(3) + ... + p(x - 3)]. (For x ~ 4, the probability in brackets is empty;
for x = 5, it's p(2); for x = 6, it's p(2) + p(3); and so on.)

Finally, since trials are independent, P(X = x) ~ (I - [p(2) + ... + p(x - 3)]) . (I - p)p'.

For p = .9, the pmfof Xup to x = 8 is shown below.

53
02016 Cengage Learning. A 11Rights Reserved. May not be scanned, copied or duplicated, or posted 10 a publicly accessible website, in whole or in part.
(( , '

Chapter 3: Discrete Random Variables and Probability Distributions

x 2 3 4 5 6 7 8
.81 .081 ,081 .0154 .0088 .0023 .0010
p(x)

So, P(X;; 8) = p(2) + ... + p(8) = .9995.

107.
a. Let event A ~ seed carries single spikelets, and event B = seed produces ears with single spikelets.
Then PtA n B) ~ P(A) . P(B I A) = (.40)(.29) = .116.
Next, let X ~ the number of seeds out of the 10 selected that meet the condition A n B. Then X -

Bin( I0, .116). So, P(X ~ 5) ~ 10)


5 (.116) , (.884) s = .002857 .
(

b. Fnr anyone seed, the event of interest is B = seed produces ears with single spikelets. Using the
law of total prohability, P(B) = PtA n B) + PtA' n B) ~ (.40)(.29) + (,60)(.26) ~ .272.
Next, let Y ~ the number out of the 10 seeds that meet condition B. Then Y - Bin( I 0, .272). P(Y = 5) =

(1;)(,272)'(1-.272)' ~ .0767, while

P(Y;; 5) ~ (10)(.272)'(1_ .272)10-,~ .041813 + .. + .076719 = .97024.


y=o y

109.
e-22
a. P(X ~ 0) = F(O; 2) or -- ~ 0.135.
01

b. Let S an operator who receives no requests. Then the number of operators that receive no requests
followsa Bin(n = 5, P ~ .135) distribution. So, P(4 S's in 5 trials) = b(4; 5, .135) =

(~)<.135)4(.865)1 = .00144.

c. For any non-negative integer x, P(all operators receive exactly x requests) ~


10
P(first operator receives x) ..... P(fifth operator receives x) ~ [P(x; 2)]' ~ [e-
2
2']'
x!
= e- 2:'
(xl)
Thea, P(all receive tbe same number) = P(all receive 0 requests) + P(all receive I request) + P(all
00 e-I025,l'

receive 2 requests) + .,. = 2:--,-.


,.0 (xl)
111. The number of magazine copies sold is X so long as X is no more than five; otherwise, all five copies are
00

sold. So, mathematically, the number sold is min(X, 5), and E[min(x, 5)] ~ ~:>nin(x,5)p(x;4) = Op(o; 4) +
x=o

Ip(l; 4) + 2p(2; 4) + 3p(3; 4) + 4p(4; 4) + i:5p(x;4) =


,-'
1.735 + 5t,P(X;4) = 1.735 + 5 [1- t, P(X;4)] = 1.735 + 5[1 - F(4; 4)] ~ 3.59.

54
C 2016 Cengage Learning. All Rights Reserved. May nOIbe scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 3: Discrete Random Variables and Probability Distributions

113.
a. No, since the probability of a "success" is not the same for all tests.

b. There are four ways exactly three could have positive results. Let D represent those with the disease
and D' represent those without the disease.

Combination Probability
D D'
o 3
[( ~}2)O(.8)']' [( ~}-9)J(.I)' ]

=(.32768)(.0729) = .02389

2
[(~Je2),(.8)' H G}-9),(.1)] ]

~(.4096)(.0081) ~ .00332

2
[(~Je2),(.8)] ]. [( ~)c.9)' (.1)']
=(.2048)(.00045) = .00009216

3 o
[(~J(.2)3 (.8)' ] . [ (~J (.9) (.1)' ]

=(.0512)(.00001) = .000000512

Adding up the probabilities associated with the four combinations yields 0.0273.

115.
a. Notice thatp(x; 1'" 1',) = .5 p(x; I',) + .5 p(x; 1',), where both terms p(x; 1',) are Poisson pmfs. Since both
pmfs are 2: 0, so is p(x; 1',,1'2)' That verifies the first requirement.

Next, i;p(X;I',,1'2) = .5i;p(x;1', ) + .5i>(x;,u,) = .5 + .5 = I, so the second requirement for a prof is


...",0 ...",0 >:=0

met. Therefore, p(x; 1'1,1'2) is a valid prof.

b. E(X) = i;xp(x, 1'" 1',) = i;x[.5 p(x;,u,) +.5p(x;,u,)] = .5i;xP(X;,u,) + .5i;X' P( X, 1/;) = .5E(X,) +
x..(l ...",0 .10"'0 .1'=0
.5E(X2), where Xi - Poisson(u,). Therefore, E(X) = .51' I + .51'2,

c. This requires using the variance shortcut. Using the same method as in b,

E(x') = .5i;x'p(x;I/,) + .5i;x" P( >; 1';) = .5E(X,') + .5E(X;). For any Poisson rv,
x=O .\"=0

E(x') = VeX) + [E(X) l' = I' + 1", so E(x') = .5(1', + 1',') + .5(1/2 + ,ui) .
Finally, veX) = .5(1', + 1",') + .5(1', +,ui) - [.51" + .51"]" which can be simplified to equal .51'1 + .51'2
+ .25(u, -1',)'.

d. Simply replace the weigbts .5 and .5 with .6 and .4, so p(x; 1',,1',) = .6 p(x; 1'1) + .4 p(x; 1'2)'

55
02016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted 10 a publicly accessible website, in whole or in part.
Chapter 3: Discrete Random Variables and Probability Distributions

10 10
117. P(X~ j) ~ LP(arrn on track i "X= J) ~ LP(X= j I arm on i) . Pi =
;=1 ;=1
10 10
LP(next seek at i +j + 1 or i - j - 1) Pi ~ L(Pi+j+, + Pi-j~,)Pi ,where in the summation we take P.~0
;=1 ;=\

ifk<Oork>IO.

119. Using the hint, L(x-,u)'pex)~


all.r
L
r.1x-.ul~0"
(x-I1)'P(X)~ L:
x:1x-,uI<!:kl1
(ku)'p(x)=k'u
2
L:
r.1x-,uj:!:ka
p(x).

The left-hand side is, by definition.rr'. On the other hand, the summation on the right-hand side represents
P(~ - ,u\ ;, ko).
So 0' ~ JC';. P(\X - ~ ;, ka), whence P(~ -,u\ ~ ka):S 11K.
,I
121.
a. LetA, ~ {voice}, A, ~ {data}, and X= duration of a caU. Then E(X) = EeX\A,)p(A,) + E(XJA,)P(A,) ~
3(.75) + 1(.25) ~ 2.5 minutes.

b. Let X ~ the number of chips in a cookie. Then E(X) = E(Ali = I )P(i = 1) + E(Al i = 2)P(i ~ 2) +
E(Al i ~ 3)P(i ~ 3). If X is Poisson, then its mean is the specified I' - that is, E(Ali) ~ i + 1. Therefore,
E(X) ~ 2(.20) + 3(.50) + 4(.30) ~ 3.1 chips.

II

56
C 2016 Cengege Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
CHAPTER 4

Section 4.1
I.
a. The pdf is the straight-line function graphed below on [3, 5]. The function is clearly non-negative; to
verify its integral equals I, compute:

J: (.075x + .2)dx = .0375x' + .2x J: = (.0375(5)' + .2(5 - (.0375(3)' +.2(3


~ 1.9375 - .9375 ~ 1

..
"
.,
3 OJ

"
"
" " "
.. u
"

b. P(X'; 4) = J: (.075x + .2)dx = .0375x' + .2x J: ~ (.0375(4)' + .2(4 - (.0375(3)' + .2(3


= 1.4 - .9375 = .4625. SinceXis a continuous rv, P(X< 4) ~ P(X<:4)= .4625 as well.

C. P(3.5<:X<:4.5)=
J4.5 (.075x+.2)dx=.0375x
15
2 +.2x J'"
l.5
="'=.5.

P(4.5<X)=P(4.5<:X)= J' (.075x+.2)dx=.0375x'+.2xJ'


4.5 4.5
= .. =.278125.

3.
a.
0.4.,-----------------,

0.3

..
" 0.2

0.'

0.0 L,-----c--"'-"'T--~-____,--_,J
-J -2 -t

57
Q 2016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in pan.
,,-

Chapter 4: Continuous Random Variables and Probability Distributions

b. P(X> 0)= f:09375(4-X')dx=.09375(4X-X:JI ;5.

This matches the symmetry of the pdf about x = O.

c. P(-I <X< 1)= (.09375(4-x')dx=.6875.

d. P(X < -.5 or X> .5) = I - P(-.5 -;X -; .5) = 1 - f,.09375(4 - x')dx = 1 - .3672; .6328.

5.

a. 1= Jm
j(x)dx= J a fa;'dx=- fa;3J8k
3
=-=,>k=-.
3
3
8
II ~ 0 0

",.
III
I[ u

~
;<0.8

c.s
M

e.a
ao
0.' 0.' '0 ,.s '.0

b. P(O-;X-;I)= f;tx'dx;tx3J~ =t=125

c. P(I-;X-;1.5)=
J t 'dx=tx3 Iu X J'"' =tt ()'
I -t(1) 3 ;*=.296875

d. P(X?: 1.5)= 1- J' tx'dx;txlJ' =t(2)3_i(1.5)3 =.578125


1.5 1.5

7.
1 1
a. fix) = for .20-;x s 4.25 and; 0 otherwise.
B-A 4.25 -.20 4.05

0<0

J,;--l_----,----,_----,----,_--, __ ---,.-J..J

4.25
b. P(X> 3) ;
f 3
--'-
4.05
dx ; .L1l.;
4.05
.309.

58
02016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 4: Continuous Random Variables and Probability Distributions

ll+1
C. Piu.- I ~ X ~ I' + I) = f ~I-I '
,t,-dx = fos
,
~ .494. (We don't actually need to know I' here, but it's clearly

the midpoint of2.225 mm by symmetry.)

d. Pea "X" a + I) ~ r'mdx=tr,~ .247.

9.
a. P(X" 5) = f,'.15e-"-I)dx = .15 f,'e-''''du (after the substitution u x - I)=
= -e-''''J4 o
= l_e-6", .451. P(X> 5) ~ 1 - P(X<
-
5) = 1- .451 = .549.

b. P(2"X,,5)= f:.I5e-"-I)dx~ r.15e-"'du=-e-"'J: ~.312.

Section 4.2
n.
a. P(X" I) = F(I) = '4I' = .25

I' .5'
b. P(.5"X" I) ~ F(I) - F(.5) ~ '4-4= .1875.

1.5'
c. P(X> 1.5) = I-P(X" 1.5) = I-F(1.5)~ 1--=.4375.
4

d.

e. lex) = P(x) = ~ for 0" x < 2, and ~ 0 otherwise.

f~x-f(x)dx= J'022x-dx=- 1 J2 x 2dx=-x


3

f. E(X)= x ]' 8
=-",1.333.
-eo 0 6 6 0

g E(X2)~ r ....:tJ
x'J(x)dx=J z x'~dx=!i
22
,4xJdx=~ ]' =2, so
80
V(X)~E(x')-[E(X)]2~

2- (8)''6 8 =36",222,andux= ";.222=.471.

h. From g, E(x') = 2.

59
C 2016 Cengugc Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in pan.
r-'

Chapter 4: Continuous Random Variables and Probability Distributions

13.

a. 1= rI
k
,dx=k
x
JO x~d.x=-X"k]"
I -3 I
=0- ( -k )
-3
(lr' =-=:ok=3.
k
3

b. For x z I,F(x)~ J" !(y)dy=J"';'dy=-y"I' =-x"+I=I--',.Forx< I,F(x)=Osincethe


--a:> 1 y I X

0 x< I
distribution begins at 1. Put together, F(x) = I
{ 1-- ls x
x'

c. P(X>2)= I-F(2) ~ 1-t=t or .125;


P(2 < X <3) = F(3)-F(2) = (I-+,)-(I-t) = .963-.875 = .088.

d. ThemeanisE(X)=J" x(';'
xl'
tx= r(l,
IxT iA.=-lx'I"2 =O+~=~ = 1.5. Next,

E(X') = J,"X'(:4}tx
1

= re,f= -3x"I~ =0+3=3,


1 22

so VeX) = 3 - (1.5)' ~ .75. Finally, the

standard deviation of X is (J ~ ~ = .866.

e. P(1.5 ~ .866 < X < 1.5 + .866) = P(.634 < X < 2.366) = F(2.366) - F(.634) ~ .9245 - 0 = .9245.

15.
a. SinceX is limited to the interval (0, I), F(x) = 0 for x:S 0 and F(x) = I for x;;, I.
For 0 <x < 1,
F(x) = [J(y)dy = J:90y'(l- y)dy = J: (90y' -90y')dr lOy' -9y"J: = lOx' _9x'o .

The graphs of the pdf and cdf of X appear below.

'.0

0.'

0.'

0.2

O'0L=:::::;=::=:::;==:::::-~~
0.0 0.2 0.'1 0.6 0.8 J.O
02 0.' 0.' 0.' '.0
II I
0.0

I

b. F(.5) = 10(.5)' - 9(.5)10 = .0107.

c. P(.25 <X'; .5) = F(.5) - F(.25) = .0107 - [10(.25)' - 9(.25)10] ~ .0107 - .0000 = .0107.
Since X is continuous, P(.25 :S X:S .5) = P(.25 < X,; .5) = .0107.

lO
d. The 75'h percentile is the value ofx for which F(x) = .75: lOx' - 9x ~ .75 =:0 x ~ .9036 using software.

60
o 2016 Ccngage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.

b
Chapter 4: Continuous Random Variables and Probability Distributions

e. E(X)= f" x.(x)dx=J,x.90x'(l-x)dx=J,(90x'-90x


~.
r'
0
r' )dx=9x lO
0
lO
--X"
90]'
II 0
=9-- 90 =-=.8182.
II
9
II

Similarly, E(X') ~ ex'. f(x)dx =f;X' 90x8(1-x)dx~ ... ~ .6818, from which V(X) ~ .6818-
(.8182)' ~ .0124 and O"X~.11134.

f. 11 0"= (.7068, .9295). Thus, P(j1- 0"<; X <; I' + 0")~ F(.9295) - F(.7068) = .8465 - .1602 = .6863, and
the probability X is more than I standard deviation from its mean value equals I - .6863 ~ 3137.

17.
a. To find the (lOOp)th percentile, set F(x) ~ p and solve for x:
x-A
-- <p zo x v A + (B-A)p.
B-A

8
b. E(X)=f x._I-dx= A+B , the midpoint of the intervaL Also,
A B-A 2

E(X') A' + ~B + B' , from which V(X) = E(X') _ [E(X)l' = = (B ~2A)' . Finally,

O"x= ~V(X)=B;;.
'112

c. E(X") = r A
x' ._I_dx =_1_
B-A B-A n+1
X'+1]8
A (n+I)(B-A)

19.
a. P(X<; 1)=F(1)=.25[1 +In(4)]=.597.

b. P(l <; X <; 3) = F(3) - F(l) = .966 - .597 ~ .369.

c. For x < 0 Drx> 4, the pdf isfl:x) ~ 0 since X is restricted to (0, 4). For 0 <x < 4, take the first derivative
of the cdf:

F(x) = ~[I In(~)]


4
+
x
=~x+
4
1n(4) x-~xln(x)
4 4
0=,

f(x) = F'(x) = ~+ In(4) -~In(x) -~x~ = In(4) -~ln(x) = .3466- .25In(x)


444 4x 4 4

21. E(area)=E(nR)= , f"_nr ,f(r)dr= ,3( 1-(lO-r)


J,r" nr"4 ') dr="=5n=314.79m.
501 ,

23. With X ~ temperature in C, the temperature in OFequals 1.8X + 32, so the mean and standard deviation in
OF are 1.81'x + 32 ~ 1.8(120) + 32 ~ 248F and Il.810"x= 1.8(2) = 3.6F. Notice that the additive constant,
32, affects the mean but does not affect the standard deviation.

61
Q 20 16 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted 10 a publicly accessible website, in whole or in part.
Chapter 4: Continuous Random Variables and Probability Distributions

25.
a. P(Y", 1.8 ii + 32) ~ P(1.8X + 32 '" 1.8 ii + 32) ~ P( X S ii ) = .5 since fi is the median of X. This
shows that 1.8 fi +32 is the median of Y.

h. The 90 percentile for Yequals 1.8T](.9) + 32, where T](.9) is the 90" percentile for X. To see this, P(Y
'" 1.8T](.9) + 32) = P(l.8X + 32 S 1.8T](.9) + 32) ~ P(X S T](.9)) ~ .9, since T](.9) is the 90 percentileof
X. This shows that 1.8T](.9) + 32 is the 90'h percentile of Y.

c. When Y ~ aX + b (i.e. a linear transformation of X) and the (I OOp)th percentile of the X distribution is
T](P), then the corresponding (I OOp)th percentile of the Y distribution is Q'T](P) + b. This can be
demonstrated using the same technique as in a and b above.

27. Since X is uniform on [0, 360], E(X) ~ 0 + 360 ~ 180 and (Jx~ 36~0 ~ 103.82. Using the suggested
2 ,,12
linear represeotation of Y, E(Y) ~ (2x/360)l'x- x = (21t1360)(l80) - rt ~ 0 radians, and (Jy= (21t1360)ax~
1.814 radians. (In fact, Y is uniform on [-x, x].)

Section 4.3

29.
a. .9838 is found in the 2.1 row and the .04 column of the standard normal table so c = 2.14.

b. P(O '" Z '" c) ~ .291 =:> <I>(c)- <1>(0)= .2910 =:> <I>(c) -.5 = .2910 => <1>(c) = .7910 => from the standard
normal table, c = .81.

c. P(c",Z)~.121 =:> l-P(Z<c)~.121 => 1-<1>(c)~.121 =><1>(c)=.879=:>c~ 1.17.

d. Pi-c '" Z", c) ~ <1>(c)- <I>(-c) = <I>(c)- (I - <1>(c)) = 2<1>(c)- I = .668 =:> <1>(c)~ .834 =>
c ~ 0.97.

e. P(c '" IZI) = I - P(IZ\ < c) ~ I - [<I>(c)- <1>(-c)] ~ 1 - [2<1>(c)- I] = 2 - 2<1>(c) = .016 => <I>(c)~ .992 =>
c=2.41.

31. By definition, z, satisfies a = P(Z? z,) = 1 - P(Z < z,) = I - <1>(z,),or <I>(z.) ~ 1 - a.
a. <I>(Z.OO55)= I - .0055 ~ .9945 => Z.OO55= 2.54.

b. <I>(Z.09)
= .91 =:> Z.09" 1.34.

c. <I>(Z.6631
= .337 =:> Z.633 " -.42.

33.

a. P(X s 50) ~ p(Z S 50 - 46.8) = P(Z s 1.83) ~ <1>(1.83) ~ .9664.


1.75

b. P(X? 48) ~ p( Z ? 481~~6.8) = P(Z? 0.69) = 1- <1>(0.69) ~ 1- .7549 ~ .2451.

c. The mean and standard deviation aren't important here. The probability a normal random variable is
within 1.5 standard deviations of its mean equals P(-1.5 '" Z '" 1.5) =
<1>(1.5)- <1>(-1.5) ~ .9332 - .0668 = .8664.

62
C 20 16 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 4: Continuous Random Variables and Probability Distributions

35.
a. P(X-c. J 0) = P(Z -c. .43) ~ I - (1)(.43) = 1 - .6664 ~ .3336.
Since Xis continuous, P(X> 10) = P(X-c. 10) = .3336.

b. P(X> 20) ~P(Z>4) '" O.

c. P(5 ,;X,; 10) = P(-1.36'; Z'; .43) ~ <1>(.43)


- <1>(-1.36) ~ .6664 - .0869 ~5795.

d. P(8.8 - c,;X,; 8.8 + c) = .98, so 8.8 - c and 8.8 + C are at the I" and the 99'" percentile of the given
distribution, respectively. The 99'" percentile of the standard normal distribution satisties <I>(z) ~ .99,
which corresponds to z = 2.33.
So, 8.8 + c = fl + 2.330'= 8.8 + 2.33(2.8) ~ c ~ 2.33(2.8) ~ 6.524.

e. From a, P(X> 10) = .3336, so P(X'S 10) = I - .3336 ~ .6664. For four independent selections,
Peat least one diameter exceeds 10) = I-P(none of the four exceeds 10) ~
I - P(tirst doesn't n ...fourth doesn't) ~ 1-(.6664)(.6664)(.6664)(.6664) by independence ~
I - (.6664)' ~ .8028.

37.
a. P(X = 105) = 0, since the normal distribution is continuous;
P(X < 105) ~ P(Z < 0.2) = P(Z 'S 0.2) ~ <1>(0.2)~ 5793;
P(X'S 105) ~ .5793 as well, sinceXis continuous.

b. No, the answer does not depend on l' or a. For any normal rv, P(~ - ~I
> a) ~ P(IZI > I) =
P(Z < -lor Z> I) ~ 2P(Z < -1) by symmetry ~ 2<1>(-1)= 2(.1587) = .3174.

c. From the table, <I>(z) ~ .1% ~ .00 I ~ z ~ -3.09 ~ x ~ 104 - 3.09(5) ~ 88.55 mmollL. The smallest
.J % of chloride concentration values nre those less than 88.55 mmol/L

39. !J.= 30 mm, 07= 7.8 mm


a. P(X'S 20) = P(ZS:-1.28) = .1003. SinceXis continuous, P(X< 20) ~ .1003 as well.

b. Set <I>(z) = .75 to find z" 0.67. That is, 0.67 is roughly the 75'" percentile of a standard normal
distribution. Thus, the 75'" percentile of X's distribution is!J. + 0.6707 = 30 + 0.67(7.8) ~ 35.226 mm.

c. Similarly, <I>(z) = .15 ~ a e -1.04 ~ ~(.15) ~ 30 - 1.04(7.8) = 21.888 mm.

d. Tbe values in question are the 10'" and 90'" percentiles of the distribution (in order to have 80% in the
middle). Mimicking band c, <I>(z) = .1 ~ z '" -1.28 & <I>(z) = .9 ~ z '" + 1.28, so the 10'" and 90'"
percentiles are 30 1.28(7.8) = 20.016 mm and 39.984 mm.

41. .
For a single drop, P(damage) ~ P(X < 100) = P ( Z < 100-200)
30 ~ P(Z < -3.33) = .0004. So, the

probability of no damage on any single drop is I - .0004 = .9996, and


Peat least one among five is damaged) ~ I - P(none damaged) = I - (.9996)' ~ I - .998 = .002.

63
02016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted 10 a publicly accessible website, in whole or in part.
Chapter 4: Continuous Random Variables and Probability Distributions

43.
a. Let I' and a denote the unknown mean and standard deviation. The given information provides

.05 = P(X < 39.12) = <l>e9.1~ -I') => 39.1~ -I' '"-1.645 => 39.12 -I' = -1.6450- and

.10 = P(X> 73.24) ~ 1 - <l>C ;-I')


3.2 => 73.2;-1' = <1>-'(.9) '" 1.28 0=> 73.24-1' = 1.280-.

Subtract the top equation from the bottom one to get 34.12 = 2.925<7,or <7'" 11.665 mpb. Then,
substitute back into either equation to get!, '" 58.309 mph.

b. P(50 ~X ~ 65) = <1>(.57)- <1>(-.72)~ .7157 - .2358 = .4799.

c. P(X> 70)= 1- <1>(1.00)= 1- .8413 = .1587.

45. With p= .500 inches, the acceptable range for the diameter is between .496 and .504 inches, so
unacceptable hearings will have diameters smaller than .496 or larger than .504.
The new distribution has 11 = .499 and a =.002.
P(X < .496 nr X>.504) ~ p(z
< .496-.499)+
.002
p(z
> .504-.499) ~ P(Z < -1.5) + P(Z > 2.5) =
.002
<1>(-1.5)+ [1- <1>(2.5)]= .073. 7.3% of the hearings will be unacceptable.

47. The stated condition implies that 99% of the area under the normal curve with 11~ 12 and a = 3.5 is tn the
left of c - I, so c - 1 is the 99'h percentile of the distribution. Since the 99'h percentile of the standard
normal distribution is z ~ 2.33, c - 1 = 11+ 2.33a ~ 20.155, and c = 21.155.

49.
a. p(X > 4000) = p(Z > 4000-3432) = p(Z > 1.18) = 1- <I>(1.l8) = 1- .8810 = .1190;
482
P(3000 < X < 4000) ~ p(3000 -3432 < Z < 4000 - 3432)= <1>(1.18)- <1>(-.90)~ .8810 - .1841 = .6969.
482 482

b. P(X <2000 or X> 5000) = p(Z < 2000-3432)+ p(Z > 5000-3432)
482 482
= <1>(-2.97)+ [1- <1>(3.25)]
= .0015 +0006 = .0021 .

c. We will use the conversion I Ib = 454 g, then 7 Ibs = 3178 grams, and we wish to find

p(X > 3178) = p(Z > 3178- 3432) = 1-<1>(-.53) = .7019 .


482

d. We need the top .0005 and the bottom .0005 of the distribution. Using the z table, both .9995 and
.0005 have multiple z values, so we will use a middle value, 3.295. Then 3432 3.295(482) = 1844
and 5020. The most extreme .1% of all birth weights are less than 1844 g,and more than 5020 g.

e. Converting to pounds yields a mean of7.5595 lbs and a standard deviation of 1.0608 Ibs. Then

P(X > 7) = j Z > 7 -75595) = I _ ql(-.53) = .7019. This yields the same answer as in part c.
,~ 1.0608

51. P(~ -Ill ~a) ~ I -P(~ -Ill < a) ~ I -P(Il-a <X< 11 + a) = I -P(-I S ZS 1) = .3174.
Similarly, P(~ - III :?: 2a) = I - P(-2 "Z" 2) = .0456 and P(IX - III :?: 3a) ~ .0026.
These are considerably less than the bounds I, .25, and .11 given by Chebyshev.

64
02016 Cengage Learning.All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part,
Chapter 4: Continuous Random Variables and Probability Distributions

53. P ~ .5 => fl = 12.5 & 0' = 6.25; p ~ .6 => fl = 15 & 02 = 6; p = .8 => fl = 20 and 0' = 4. These mean and
standard deviation values are used for the normal calculations below.

a. For the binomial calculation, P(l5 S; X S; 20) = B(20; 25,p) - B(l4; 25,p).
P P(15<X<20) P(14.5 < Normal < 20.5)
.5 -.212 = P(.80" Z" 3.20) = .21 12
.6 = .577 = P(-.20 "Z" 2.24) = .5668
.8 =573 ~ P(-2.75 "Z"
.25) = .5957

b. For the binomial calculation, P(XS; 15) = B(l5; 25,p).


p P(X< 15) P(Normal < 15.5)
.5 - .885 = P(Z" 1.20) = .8849
.6 = .575 = P(Z ".20) ~ .5793
.8 = .017 = P(Z" -2.25) = .0122

c. For the binomial calculation, P(X~ 20) ~ I - B(l9; 25,p).


p P(X> 20) P(Normal> 19.5)
.5 -.002 - P(Z ~ 2.80) - .0026
.6 =.029 ~ P(Z ~ 1.84) = .0329
.8 =.617 ~ P(Z ~ -0.25) = .5987

55. Use the normal approximation to the binomial, with a continuity correction. With p ~ .75 and n = 500,
II ~ np = 375, and
(J= 9.68. So, Bin(500, .75) "'N(375, 9.68).
a. P(360" X" 400) = P(359.5 "X" 400.5) = P(-1.60" Z" 2.58) = 11>(2.58)-11>(-1.60) = .9409.

b. P(X < 400) = P(X" 399.5) = P(Z" 2.53) = 11>(2.53)= .9943.

57.
a. For any a > 0, F_Y(y) = P(Y ≤ y) = P(aX + b ≤ y) = P(X ≤ (y − b)/a) = F_X((y − b)/a). This, in turn, implies
f_Y(y) = (d/dy)F_Y(y) = (d/dy)F_X((y − b)/a) = (1/a)·f_X((y − b)/a).

Now let X have a normal distribution. Applying this rule,
f_Y(y) = [1/(a√(2π)σ)]·exp(−[(y − b)/a − μ]²/(2σ²)) = [1/(√(2π)aσ)]·exp(−[y − (b + aμ)]²/(2a²σ²)). This is the pdf of a normal distribution. In particular, from the exponent we can read that the mean of Y is E(Y) = aμ + b and the variance of Y is V(Y) = a²σ². These match the usual rescaling formulas for mean and variance. (The same result holds when a < 0.)

b. Temperature in °F would also be normal, with a mean of 1.8(115) + 32 = 239°F and a variance of (1.8)²(2)² = 12.96 (i.e., a standard deviation of 3.6°F).


Section 4.4

59.
a. E(X) = 1/λ = 1.

b. σ = 1/λ = 1.

c. P(X ≤ 4) = 1 − e^(−(1)(4)) = 1 − e⁻⁴ = .982.

61. Note that a mean value of 2.725 for the exponential distribution implies λ = 1/2.725. Let X denote the duration of a rainfall event.

a. P(X ≥ 2) = 1 − P(X < 2) = 1 − P(X ≤ 2) = 1 − F(2; λ) = 1 − [1 − e^(−(1/2.725)(2))] = e^(−2/2.725) = .4800;
P(X ≤ 3) = F(3; λ) = 1 − e^(−(1/2.725)(3)) = .6674; P(2 ≤ X ≤ 3) = .6674 − .4800 = .1874.

b. For this exponential distribution, σ = μ = 2.725, so P(X > μ + 2σ) =
P(X > 2.725 + 2(2.725)) = P(X > 8.175) = 1 − F(8.175; λ) = e^(−(1/2.725)(8.175)) = e⁻³ = .0498.
On the other hand, P(X < μ − σ) = P(X < 2.725 − 2.725) = P(X < 0) = 0, since an exponential random variable is non-negative.
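
As a quick check (a sketch assuming scipy, not part of the original solution), the same exponential probabilities can be computed directly; scipy parameterizes the exponential by its mean, so scale = 1/λ = 2.725:

```python
from scipy import stats

X = stats.expon(scale=2.725)       # exponential with mean 2.725, i.e. lambda = 1/2.725

print(X.sf(2))                     # P(X >= 2)        ~ .4800
print(X.cdf(3))                    # P(X <= 3)        ~ .6674
print(X.cdf(3) - X.cdf(2))         # P(2 <= X <= 3)   ~ .1874
print(X.sf(2.725 + 2 * 2.725))     # P(X > mu + 2*sigma) ~ .0498
```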

63. a. If a customer's calls are typically short, the first calling plan makes more sense. If a customer's calls are somewhat longer, then the second plan makes more sense, viz. 99¢ for the first 20 minutes under the second (flat-rate) plan is less than (20 min)($.10/min) = $2 under the first plan.

b. h₁(X) = 10X, while h₂(X) = 99 for X ≤ 20 and 99 + 10(X − 20) for X > 20. With μ = 1/λ for the exponential distribution, it's obvious that E[h₁(X)] = 10E[X] = 10μ. On the other hand,
E[h₂(X)] = 99 + 10∫₂₀^∞ (x − 20)λe^(−λx) dx = 99 + (10/λ)e^(−20λ) = 99 + 10μe^(−20/μ).
When μ = 10, E[h₁(X)] = 100¢ = $1.00 while E[h₂(X)] = 99 + 100e⁻² ≈ $1.13.
When μ = 15, E[h₁(X)] = 150¢ = $1.50 while E[h₂(X)] = 99 + 150e^(−4/3) ≈ $1.39.
As predicted, the first plan is better when expected call length is lower, and the second plan is better when expected call length is somewhat higher.

65. a. From the mean and sd equations for the gamma distribution, αβ = 37.5 and αβ² = (21.6)² = 466.56.
Take the quotient to get β = 466.56/37.5 = 12.4416. Then α = 37.5/β = 37.5/12.4416 = 3.01408….

b. P(X > 50) = 1 − P(X ≤ 50) = 1 − F(50/12.4416; 3.014) = 1 − F(4.0187; 3.014). If we approximate this by 1 − F(4; 3), Table A.4 gives 1 − .762 = .238. Software gives the more precise answer of .237.

c. P(50 ≤ X ≤ 75) = F(75/12.4416; 3.014) − F(50/12.4416; 3.014) = F(6.026; 3.014) − F(4.0187; 3.014) ≈
F(6; 3) − F(4; 3) = .938 − .762 = .176.
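
The "software" answers quoted above can be reproduced with a gamma cdf routine. A minimal sketch (assuming scipy; the shape and scale values are the estimates from part a, not given elsewhere in the text):

```python
from scipy import stats

alpha, beta = 3.01408, 12.4416            # shape and scale from part (a)
X = stats.gamma(a=alpha, scale=beta)

print(X.sf(50))                    # P(X > 50)        ~ .237
print(X.cdf(75) - X.cdf(50))       # P(50 <= X <= 75) close to the tabled value .176
```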


67. Notice that μ = 24 and σ² = 144 ⇒ αβ = 24 and αβ² = 144 ⇒ β = 144/24 = 6 and α = 24/β = 4.
a. P(12 ≤ X ≤ 24) = F(4; 4) − F(2; 4) = .424.

b. P(X ≤ 24) = F(4; 4) = .567, so while the mean is 24, the median is less than 24, since P(X ≤ μ̃) = .5. This is a result of the positive skew of the gamma distribution.

c. We want a value x for which F(x/β; α) = F(x/6; 4) = .99. In Table A.4, we see F(10; 4) = .990. So x/6 = 10, and the 99th percentile is 6(10) = 60.

d. We want a value t for which P(X > t) = .005, i.e. P(X ≤ t) = .995. The left-hand side is the cdf of X, so we really want F(t/6; 4) = .995. In Table A.4, F(11; 4) = .995, so t/6 = 11 and t = 6(11) = 66. At 66 weeks, only .5% of all transistors would still be operating.

69.
a. {X ≥ t} = {the lifetime of the system is at least t}. Since the components are connected in series, this equals {all 5 lifetimes are at least t} = A₁ ∩ A₂ ∩ A₃ ∩ A₄ ∩ A₅.

b. Since the events Aᵢ are assumed to be independent, P(X ≥ t) = P(A₁ ∩ A₂ ∩ A₃ ∩ A₄ ∩ A₅) = P(A₁)·P(A₂)·P(A₃)·P(A₄)·P(A₅). Using the exponential cdf, for any i we have
P(Aᵢ) = P(component lifetime is ≥ t) = 1 − F(t) = 1 − [1 − e^(−.01t)] = e^(−.01t).
Therefore, P(X ≥ t) = (e^(−.01t))⋯(e^(−.01t)) = e^(−.05t), and F_X(t) = P(X ≤ t) = 1 − e^(−.05t).
Taking the derivative, the pdf of X is f_X(t) = .05e^(−.05t) for t ≥ 0. Thus X also has an exponential distribution, but with parameter λ = .05.

c. By the same reasoning, P(X ≤ t) = 1 − e^(−nλt), so X has an exponential distribution with parameter nλ.

71.
a. {X² ≤ y} = {−√y ≤ X ≤ √y}.

b. F_Y(y) = P(Y ≤ y) = P(X² ≤ y) = P(−√y ≤ X ≤ √y) = ∫ from −√y to √y of (1/√(2π))e^(−z²/2) dz. To find the pdf of Y, use the identity (Leibniz's rule):
f_Y(y) = (1/√(2π))e^(−(√y)²/2)·d(√y)/dy − (1/√(2π))e^(−(−√y)²/2)·d(−√y)/dy
= (1/√(2π))e^(−y/2)·1/(2√y) + (1/√(2π))e^(−y/2)·1/(2√y) = (1/√(2π))·y^(−1/2)·e^(−y/2).

This is valid for y > 0. We recognize this as the chi-squared pdf with ν = 1.


Section 4.5

73.
a. P(X ≤ 250) = F(250; 2.5, 200) = 1 − e^(−(250/200)^2.5) = 1 − e^(−1.75) = .8257.
P(X < 250) = P(X ≤ 250) = .8257.
P(X > 300) = 1 − F(300; 2.5, 200) = e^(−(300/200)^2.5) = .0636.

b. P(100 ≤ X ≤ 250) = F(250; 2.5, 200) − F(100; 2.5, 200) = .8257 − .162 = .6637.

c. The question is asking for the median, μ̃. Solve F(μ̃) = .5: .5 = 1 − e^(−(μ̃/200)^2.5) ⇒
e^(−(μ̃/200)^2.5) = .5 ⇒ (μ̃/200)^2.5 = −ln(.5) ⇒ μ̃ = 200(−ln(.5))^(1/2.5) = 172.727 hours.
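
A brief computational check of these Weibull results (a sketch assuming scipy, with shape α = 2.5 and scale β = 200; not part of the original solution):

```python
from scipy import stats

X = stats.weibull_min(c=2.5, scale=200)   # Weibull with alpha = 2.5, beta = 200

print(X.cdf(250))                  # P(X <= 250)        ~ .8257
print(X.sf(300))                   # P(X > 300)         ~ .0636
print(X.cdf(250) - X.cdf(100))     # P(100 <= X <= 250) ~ .6637
print(X.ppf(0.5))                  # median             ~ 172.7 hours
```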

75. Using the substitution y = (x/β)^α, we have dy = (α/β^α)x^(α−1) dx and x = βy^(1/α). Then
μ = ∫₀^∞ x·(α/β^α)x^(α−1)e^(−(x/β)^α) dx = ∫₀^∞ βy^(1/α)·e^(−y) dy = β∫₀^∞ y^(1/α)e^(−y) dy = β·Γ(1 + 1/α) by definition of the gamma function.

77.
a. E(X) = e^(μ+σ²/2) = e^(4.82) = 123.97.
V(X) = e^(2μ+σ²)·(e^(σ²) − 1) = 13,776.53 ⇒ σ_X = 117.373.

b. P(X ≤ 100) = Φ((ln(100) − 4.5)/.8) = Φ(0.13) = .5517.

c. P(X ≥ 200) = 1 − P(X < 200) = 1 − Φ((ln(200) − 4.5)/.8) = 1 − Φ(1.00) = 1 − .8413 = .1587. Since X is continuous,
P(X > 200) = .1587 as well.

79. Notice that μ_X and σ_X are the mean and standard deviation of the lognormal variable X in this example; they are not the parameters μ and σ, which usually refer to the mean and standard deviation of ln(X). We're given μ_X = 10,281 and σ_X/μ_X = .40, from which σ_X = .40μ_X = 4112.4.
a. To find the mean and standard deviation of ln(X), set the lognormal mean and variance equal to the appropriate quantities: 10,281 = E(X) = e^(μ+σ²/2) and (4112.4)² = V(X) = e^(2μ+σ²)(e^(σ²) − 1). Square the first equation: (10,281)² = e^(2μ+σ²). Now divide the variance by this amount:
(4112.4)²/(10,281)² = [e^(2μ+σ²)(e^(σ²) − 1)]/e^(2μ+σ²) ⇒ e^(σ²) − 1 = (.40)² = .16 ⇒ σ = √(ln(1.16)) = .38525.
That's the standard deviation of ln(X). Use this in the formula for E(X) to solve for μ:
10,281 = e^(μ+(.38525)²/2) = e^(μ+.0742) ⇒ μ = 9.164. That's E(ln(X)).

b. P(X ≤ 15,000) = P(Z ≤ (ln(15,000) − 9.164)/.38525) = P(Z ≤ 1.17) = Φ(1.17) = .8790.


c. P(X ≥ μ_X) = P(X ≥ 10,281) = P(Z ≥ (ln(10,281) − 9.164)/.38525) = P(Z ≥ .19) = 1 − Φ(0.19) = .4247. Even though the normal distribution is symmetric, the lognormal distribution is not a symmetric distribution. (See the lognormal graphs in the textbook.) So, the mean and the median of X aren't the same and, in particular, the probability X exceeds its own mean doesn't equal .5.

d. One way to check is to determine whether P(X < 17,000) = .95; this would mean 17,000 is indeed the 95th percentile. However, we find that P(X < 17,000) = Φ((ln(17,000) − 9.164)/.38525) = Φ(1.50) = .9332, so 17,000 is not the 95th percentile of this distribution (it's the 93.32nd percentile).

81.
a. V(X) = e^(2(2.05)+.06)·(e^(.06) − 1) = 3.96 ⇒ SD(X) = 1.99 months.

c. The mean of X is E(X) = e^(2.05+.06/2) = 8.00 months, so P(μ_X − σ_X < X < μ_X + σ_X) = P(6.01 < X < 9.99) =
Φ((ln(9.99) − 2.05)/√.06) − Φ((ln(6.01) − 2.05)/√.06) = Φ(1.03) − Φ(−1.05) = .8485 − .1469 = .7016.

d. .5 = F(x) = Φ((ln(x) − 2.05)/√.06) ⇒ (ln(x) − 2.05)/√.06 = Φ⁻¹(.5) = 0 ⇒ ln(x) − 2.05 = 0 ⇒ the median is given by x = e^(2.05) = 7.77 months.

e. Similarly, (ln(η.99) − 2.05)/√.06 = Φ⁻¹(.99) = 2.33 ⇒ η.99 = e^(2.62) = 13.75 months.

f. The probability of exceeding 8 months is P(X > 8) = 1 − Φ((ln(8) − 2.05)/√.06) = 1 − Φ(.12) = .4522, so the expected number that will exceed 8 months out of n = 10 is just 10(.4522) = 4.522.
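
Lognormal probabilities like those in Exercises 79 and 81 are easy to verify; note that scipy's lognorm takes s = σ (the sd of ln X) and scale = e^μ. A sketch under those assumptions (not part of the original solution), using the Exercise 81 parameters ln(X) ~ N(2.05, .06):

```python
import numpy as np
from scipy import stats

sigma = np.sqrt(0.06)
X = stats.lognorm(s=sigma, scale=np.exp(2.05))

print(X.mean(), X.std())     # ~ 8.00 and ~ 1.99 months
print(X.median())            # ~ 7.77 months
print(X.ppf(0.99))           # 99th percentile ~ 13.75 months
print(X.sf(8))               # P(X > 8) ~ .452
```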

83. Since the standard beta distribution lies on (0, 1), the point of symmetry must be ½, so we require that f(½ − u) = f(½ + u). Cancelling out the constants, this implies
(½ − u)^(α−1)(½ + u)^(β−1) = (½ + u)^(α−1)(½ − u)^(β−1), which (by matching exponents on both sides) in turn implies that α = β.

Alternatively, symmetry about ½ requires μ = ½, so α/(α + β) = .5. Solving for α gives α = β.


85. a. Notice from the definition of the standard beta pdf that, since a pdf must integrate to 1,
1 = ∫₀¹ [Γ(α+β)/(Γ(α)Γ(β))]·x^(α−1)(1 − x)^(β−1) dx ⇒ ∫₀¹ x^(α−1)(1 − x)^(β−1) dx = Γ(α)Γ(β)/Γ(α+β).

Using this, E(X) = ∫₀¹ x·[Γ(α+β)/(Γ(α)Γ(β))]·x^(α−1)(1 − x)^(β−1) dx = [Γ(α+β)/(Γ(α)Γ(β))]∫₀¹ x^α(1 − x)^(β−1) dx
= [Γ(α+β)/(Γ(α)Γ(β))]·[Γ(α+1)Γ(β)/Γ(α+1+β)] = [αΓ(α)·Γ(α+β)]/[Γ(α)·(α+β)Γ(α+β)] = α/(α+β).

b. Similarly, E[(1 − X)^m] = ∫₀¹ (1 − x)^m·[Γ(α+β)/(Γ(α)Γ(β))]·x^(α−1)(1 − x)^(β−1) dx =
[Γ(α+β)/(Γ(α)Γ(β))]∫₀¹ x^(α−1)(1 − x)^(m+β−1) dx = [Γ(α+β)/(Γ(α)Γ(β))]·[Γ(α)Γ(m+β)/Γ(α+m+β)] = [Γ(α+β)Γ(m+β)]/[Γ(α+m+β)Γ(β)].
If X represents the proportion of a substance consisting of an ingredient, then 1 − X represents the proportion not consisting of this ingredient. For m = 1 above,
E(1 − X) = [Γ(α+β)Γ(1+β)]/[Γ(α+1+β)Γ(β)] = [Γ(α+β)·βΓ(β)]/[(α+β)Γ(α+β)Γ(β)] = β/(α+β).

Section 4.6
87. The given probability plot is quite linear, and thus it is quite plausible that the tension distribution is normal.

89. The plot below shows the (observation, z percentile) pairs provided. Yes, we would feel comfortable using a normal probability model for this variable, because the normal probability plot exhibits a strong, linear pattern.

[Normal probability plot of the observations (roughly 24 to 31) against z percentiles (−2 to 2): the points fall close to a straight line.]


91. The (z percentile, observation) pairs are (−1.66, .736), (−1.32, .863), (−1.01, .865), (−.78, .913), (−.58, .915), (−.40, .937), (−.24, .983), (−.08, 1.007), (.08, 1.011), (.24, 1.064), (.40, 1.109), (.58, 1.132), (.78, 1.140), (1.01, 1.153), (1.32, 1.253), (1.86, 1.394).

The accompanying probability plot is straight, suggesting that an assumption of population normality is plausible.

[Normal probability plot of the observations (about 0.7 to 1.4) against z percentiles (−2 to 2): the pattern is quite straight.]

93. To check for plausibility of a lognormal population distribution for the rainfall data of Exercise 81 in
Chapter I, take the natural logs and construct a normal probability plot. This plot and a normal probability
plot for the original data appear below. Clearly the log transformation gives quite a straight plot, so
lognormality is plausible. The curvature in the plot for the original data implies a positively skewed
population distribution - like the lognormal distribution.

[Two probability plots: the normal probability plot of the original rainfall data shows pronounced curvature, while the plot of the log-transformed data is quite straight.]

71
C 2016 Cengagc Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
,f

Chapter 4: Continuous Random Variables and Probability Distributions

95. The pattern in the plot (below, generated by Minitab) is reasonably linear. By visual inspection alone, it is plausible that strength is normally distributed.

[Minitab normal probability plot of strength: Average = 134.902, StDev = 4.54186, N = 153; Anderson-Darling normality test: A-squared = 1.065, p-value = 0.008.]

97. The (100p)th percentile η(p) for the exponential distribution with λ = 1 is given by the formula η(p) = −ln(1 − p). With n = 16, we need η(p) for p = 1/32, 3/32, …, 31/32. These are .032, .098, .170, .247, .330, .421, .521, .633, .758, .901, 1.068, 1.269, 1.520, 1.856, 2.367, 3.466.

The accompanying plot of (percentile, failure time value) pairs exhibits substantial curvature, casting doubt on the assumption of an exponential population distribution.

[Plot of ordered failure times (0 to about 600) against the standard exponential percentiles (0 to 3.5): the pattern is clearly curved rather than linear.]

Because λ is a scale parameter (as is σ for the normal family), λ = 1 can be used to assess the plausibility of the entire exponential family. If we used a different value of λ to find the percentiles, the slope of the graph would change, but not its linearity (or lack thereof).
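
The percentiles and the plot itself can be generated programmatically. A sketch (assuming numpy/matplotlib; the data vector below is a simulated stand-in, since the 16 observed failure times from the exercise are not reproduced here):

```python
import numpy as np
import matplotlib.pyplot as plt

n = 16
p = (np.arange(1, n + 1) - 0.5) / n        # 1/32, 3/32, ..., 31/32
eta = -np.log(1 - p)                       # standard exponential (lambda = 1) percentiles

rng = np.random.default_rng(seed=1)
failure_times = np.sort(rng.gamma(shape=2, scale=100, size=n))  # stand-in data only

plt.scatter(eta, failure_times)            # curvature would indicate a poor exponential fit
plt.xlabel("standard exponential percentile")
plt.ylabel("ordered failure time")
plt.show()
```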


Supplementary Exercises

99.

a. FOrO"Y"25,F(y)~...!.-r(U-1{JdU=..!.-(1{-~J]'
24 0 12 24 2 36 0
=/
48
_L.
864
Thus

F(y)= 1:' _L
48 864
:::$12
I y > 12

b. P(Y" 4) = F(4) = .259. pry> 6) = I ~F(6) = .5.


P(4 "X" 6) = F(6) - F(4) ~.5 - .259 = .241.

c. E(Y)= J"Y..!.-Y(I_L)dy=..!.-f'(/-y'JdY=...!.-[y' _Y']" ~6inches.


o 24 12 24 0 12 24 3 48 0

E(Y') ~
24
r"(y' -
..!.-Jo Y' JdY
12
= 43.2 , so V(Y) ~ 43.2 - 36 = 7.2.

d. P(Y<40rY>8)= 1-P(4"Y,,8)= 1-[F(8)-F(4)] =.518.

e. The shorter segment has length equal to minCY, 12 - Y), and

E[min(Y, 12 - Y)] = f~';'in(y,12 - y). J(y)dy = f:mincy,12 - y). f(y)dy

+ r'';'in(y,12 _ y). J(y)dy = r~. J(y)dy + r'b2 - y). J(y)dy = 90 = 3.75inches.


J6 Jo J6 24

101.
0:5x<1

a. By differentiation,j(x) = 1~-2X
15.x<-
7
3
4 4
o otherwise
,----'"------------,
LO

0.8

0.'
""" 0.'
0.'


c. Using the pdf from a, E(X) = r\.x'dx+ J,r%x


J,
.(2._l )dx
4 4
x =!l!. = 1.213.
108

103.
a. P(X > 135) = 1 − Φ((135 − 137.2)/1.6) = 1 − Φ(−1.38) = 1 − .0838 = .9162.

b. With Y = the number among ten that contain more than 135 oz, Y ~ Bin(10, .9162).
So, P(Y ≥ 8) = b(8; 10, .9162) + b(9; 10, .9162) + b(10; 10, .9162) = .9549.

c. We want P(X > 135) = .95, i.e. 1 − Φ((135 − 137.2)/σ) = .95 or Φ((135 − 137.2)/σ) = .05. From the standard normal table, (135 − 137.2)/σ = −1.65 ⇒ σ = 1.33.

105. Let A = the cork is acceptable and B = the first machine was used. The goal is to find P(B|A), which can be obtained from Bayes' rule:
P(B|A) = P(B)P(A|B) / [P(B)P(A|B) + P(B′)P(A|B′)] = .6P(A|B) / [.6P(A|B) + .4P(A|B′)].
From Exercise 38, P(A|B) = P(machine 1 produces an acceptable cork) = .6826 and P(A|B′) = P(machine 2 produces an acceptable cork) = .9987. Therefore,
P(B|A) = .6(.6826) / [.6(.6826) + .4(.9987)] = .5062.

107.
a. [Graph of the pdf f(x) = (4 − x²)/9 for −1 ≤ x ≤ 2.]

b. F(x) = 0 for x < −1, and F(x) = 1 for x > 2. For −1 ≤ x ≤ 2, F(x) = ∫ from −1 to x of (1/9)(4 − y²) dy = (11 + 12x − x³)/27. This is graphed below.
[Graph of the cdf F(x), increasing from 0 at x = −1 to 1 at x = 2.]

c. The median is 0 iff F(0) = .5. Since F(0) = 11/27, this is not the case. Because 11/27 < .5, the median must be greater than 0. (Looking at the pdf in a, it's clear that the line x = 0 does not evenly divide the distribution, and that such a line must lie to the right of x = 0.)

d. Y is a binomial rv, with n = 10 and p = P(X > 1) = 1 − F(1) = 1 − 22/27 = 5/27.


109. Below, exp(u) is alternative notation for e^u.

a. P(X ≤ 150) = exp[−exp(−(150 − 150)/90)] = exp[−exp(0)] = exp(−1) = .368,
P(X ≤ 300) = exp[−exp(−1.6667)] = .828, and P(150 ≤ X ≤ 300) = .828 − .368 = .460.

b. The desired value c is the 90th percentile, so c satisfies .9 = exp[−exp(−(c − 150)/90)]. Taking the natural log of each side twice in succession yields −(c − 150)/90 = ln[−ln(.9)] = −2.250367, so c = 90(2.250367) + 150 = 352.53.

c. Use the chain rule:flx) = P(x) =exp[ -exp( -(x;a )JJ-exp(-(x;a)}- ~ =

~exp[ -exp( -(x;a)J- (x~a)J

d. We wish the value of x for whichflx) is a maximum; from calculus, this is the same as the value of x

for which In(f(x)] is a maximum, and Io(f(x)] = -lofJ-e-I,-aYP - (x-a) . The derivative ofln(f(x)] is
fJ
.!!--[-lnfJ-e-('-"'/P - (x-a)]=o+~e-('-"'/P _~ . set this equal to o and we get e-(,-al/P =1, so
~ fJ fJ fJ'

-( x - a) 0, which implies that x = a. Thus the mode is a.


fJ

e. E(X) = .5772fJ+ a = 201.95, whereas tbe mode is a = 150 and the median is the solution to F(x) = .5.
From b, this equals -9010[-10(.5)] + 150 = 182.99.
Since mode < median < mean, the distribution is positively skewed. A plot of the pdf appears below.

[Plot of the pdf f(x; α = 150, β = 90): a positively skewed curve peaking near x = 150.]


111.
a. From a graph of the normal pdf or by differentiation of the pdf, x ~ Ji.

b. No; the density function has constaot height for A :;; x s; B.

c. J(x; 1.) is largest for x ~ 0 (the derivative at 0 does not exist sinceJis not continuous there), so x = O.

d. In[J(x;a,p)] = -1n(P' )-In(r( a)) +(a -1)ln(x) -~


P
, and !!-In[f(x;a,p)J
dx
= a -I
x
-L
P
Setting this

equal to 0 gives the mode: x = (a-l)p.

e. The chi-squared distribution is the gamma distribution with a ~ v/2 and p = 2. From d,

X.=(~-1}2) = v-2.

113.

a. E(X) ~ J: x [pA.,e-" +(1- p)A-,e-i.,' Jdx ~ pS: xA.,e-.l,'dx+(1- p) J: xA-,e-.t"dx = ~ + (l ~p) (Each of

the two integrals represents the expected value of an exponential random variable, which is the
reciprocal of A.) Similarly, since the mean-square value of an exponential rv is E(Y') = V(Yl + [E(Yll' =

lJ}.' + [lI,lJ' = 2/A', E(X') = I~


x'J(x)dx
o
p
= ... ~ 2 , + 2(1- p) . From this,
A.,-<;

V(X)= 2p + 2(l-p) [E-+(I-P)]'.


J,' A.{ A., A-,

b. Forx>O,F(x;'<J,,,,,p)~S:f(y;A.,,A-,,p)dy = SJpA.,e-"+(I-p)A-,e-.t,']dy =
f:
p J,e-"dy+ (1- p) f: A-,e-.l,'dy~ p(1-e-") + (1- p)(1-e-.l,,). For x ~ 0, F(x) ~ O.

c. P(X> .01) = 1 - F(.OI) = 1 - [.5(1 _e-4O(ol) + (1- .5)(1 - e-20O(OI)] = .5e-Q4 + .5e-' ~ .403.

d. Using the expressions in a, I' = .015 and d' = .000425 ~ (J= .0206. Hence,
P(p. _(J < X < Ji + (J) ~ P( -.0056 < X < .0356) ~ P(X < .0356) because X can't be negative

= F(.0356) = ... = .879.

e. For an exponential rv, CV = '!. = 1I A = I. For X hyperexponential,


f.l II A

CV= (J ~E(X')-Ji' ~E(X')_I= 2plA.,'+2(1-p)/~ I


f.l Ji I"'~ [plA.,+(I-p)/A-,]

2( p-<;+ (1- p )A.,:) 1= ~2r _I , where r ~ p-<; + (1- P )A.,' . But straightforward algebra shows that
~ (pA-,+O-p)A.,) (pA-,+(I-p)A.,)'
r> I when A, "" A" so that CV> 1.

f. For the Erlang distribution, II =.': and a = j;; , so CV~ _1_ < 1 for n > I.
A A .[;;


115.

a. Since In(~J has a normal distribution, by definition Ie has a lognormal distribution.


o ~

c. E( ~: J = ,"0025" = 2.72 and V( ~: J = ".0025 (e OO25


-I) =0 185 .

117. F(y) = P(Y"y) ~ P(<YZ + Ii "y) ~ p( z" y:.u)= f7 ~e+' dz =:> by tbe fundamental theorem of

calculus, f(y) ~ F'(y) ~ --;'\.


I -" "pl' '-=--e
I I -"
'\.
"pj' , a normal pdf with parameters Ii and <Y.
.j2; <Y .}2;<Y

119.
a. Y=-ln(X) =:>x~e'Y~key), so k'(y) =-e'Y Thus sincej(x) = I,g(y) = I . [-e" I <e: forO<y<oo.
Y has an exponential distribution with parameter A = I.

b. y = <YZ+ Ii =:> z = key) ~ Y -/I and k'(y) = J., from which the result follows easily.
<Y <Y

c. y = hex) = cx c> x = key) = Land k'(y) ~~, from which the result follows easily.
c c

121.
a. Assuming the three birthdays are independent and that all 365 days of the calendar year are equally

likely, P(all 3 births occur on March II) ~(_1_)3


365

b. P(all 3 births on the same day) = P(a1l3 on Jan. I) + P(a1l3 on Jan. Z) + ... = (_1_)3 +(_1_)3 +
365 365 ...

365 (_I
365
)3 ~ (_I )'
365

c. Let X = deviation from due date, so X ~ N(0, 19.88). The baby due on March 15 was 4 days early, and
P(X = −4) ≈ P(−4.5 < X < −3.5) = Φ(−3.5/19.88) − Φ(−4.5/19.88) = Φ(−.18) − Φ(−.23) = .4286 − .4090 = .0196.
Similarly, the baby due on April 1 was 21 days early, and P(X = −21) ≈
Φ(−20.5/19.88) − Φ(−21.5/19.88) = Φ(−1.03) − Φ(−1.08) = .1515 − .1401 = .0114.


Finally, the baby due on April 4 was 24 days early, and P(X = −24) ≈ .0097.
Again assuming independence, P(all 3 births occurred on March 11) = (.0196)(.0114)(.0097) ≈ .0000022.
d. To calculate the probability of the three births happening on the same day (any day), we could make similar calculations as in part c for each possible day, and then add the probabilities.

123.
a. F(x) = P(X ≤ x) = P(−(1/λ)ln(1 − U) ≤ x) = P(ln(1 − U) ≥ −λx) = P(1 − U ≥ e^(−λx))
= P(U ≤ 1 − e^(−λx)) = 1 − e^(−λx), since the cdf of a uniform rv on [0, 1] is simply F(u) = u. Thus X has an exponential distribution with parameter λ.

b. By taking successive random numbers u₁, u₂, u₃, … and computing xᵢ = −(1/10)ln(1 − uᵢ) for each one, we obtain a sequence of values generated from an exponential distribution with parameter λ = 10.
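
Part b is exactly the inverse-cdf method of simulation. A minimal sketch of that recipe (assuming numpy; not part of the original solution):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
u = rng.uniform(size=10_000)          # U ~ Uniform(0, 1)
x = -np.log(1 - u) / 10               # X = -(1/lambda) ln(1 - U) with lambda = 10

print(x.mean())                       # should be close to 1/lambda = 0.1
```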

125. If g(x) is convex in a neighborhood of μ, then g(μ) + g′(μ)(x − μ) ≤ g(x). Replace x by X:
E[g(μ) + g′(μ)(X − μ)] ≤ E[g(X)] ⇒ E[g(X)] ≥ g(μ) + g′(μ)E[(X − μ)] = g(μ) + g′(μ)·0 = g(μ).
That is, if g(x) is convex, g(E(X)) ≤ E[g(X)].

127.
a. E(X)=150+(S50-150)-S- =710 and V(X)= (S50-150)'(S)(2) =7127.27=>SD(X)"'S4.423.
S+2 (S+2)'(S+2+1)
Using software, P(IX - 7101"084.423)= P(625.577 "OX'S 794.423) =

'94.423 I '(10) (X-150)'(S50-X)' dx= 684


J 625.577 700 ,(S)L(2) 700 700 .,

P(X> 750) = '50 I f(IO) (X-150)'(S50-X)' dx= .376. Again, the computation of the
b.
I 750 700 r(8)L(2)
--
700
---
700
requested integral requires a calculator or computer.

CHAPTER 5

Section 5.1

1.

b. P(X", I and Y", I) =p(O,O) +p(O,I) +p(I,O) +p(l,I)= 042.

c. At least one hose is in use at both islands. P(X", and Y", 0) = p(l,I) + p(l,2) + p(2, 1) + p(2,2) = .70.

d. By summing row probabilities, px(x) = .16, .34, .50 for x = 0, 1,2, By summing column probabilities,
p,(y) = .24, .38, .38 for y = 0, 1,2. P(X", I) = px(O) + px(l) ~ .50.

e. p(O,O) = .10, butpx(O) . p,(O) = (.16)(.24) = .0384 '" .10, so X and Yare not independent.

3.
a. p(l, I) = .15, the entry in the I" row and I" column of the joint probability table.

b. P(X, = X,) = p(O,O) + p(l,I) + p(2,2) + p(3,3) ~ .08 + .15 + .10 + .07 = 040.

c. A = (X, ;, 2 + X, U X, ;, 2 + X,), so peA) = p(2,0) + p(3,0) + p(4,0) + p(3, I) + p(4, I) + p(4,2) + p(0,2)
+ p(0,3) + p(l,3) =.22.

d. P(X, + X, = 4) =p(l,3) + p(2,2) + p(3,1) +p(4,0) ~ .17.


P(X, + X, ~ 4) = P(X, + X, = 4) + p(4,1) + p(4,2) + p(4,3) + p(3,2) + p(3,3) + p(2,3)=A6.

5.
a. p(3, 3) = P(X = 3, Y = 3) = P(3 customers, each with I package)
= P( each has I package 13 customers) . P(3 customers) = (.6)3 . (.25) = .054.

b. p(4, 11) = P(X = 4, Y = II) = P(total of 11 packages 14 customers) . P(4 customers).


Given that there are 4 customers, there are four different ways to have a total of 11 packages: 3,3,3,2
or 3, 3, 2, 3 or 3, 2, 3, 3 or 2,3,3, 3. Each way has probability (. 1)'(.3), so p(4, II) = 4(.1)'(.3)(.15) =
.00018.

7.
a. p(l,I)=.030.

b. P(X", I and Y", I) =p(O,O) + p(O,I) + p(l,O) + p(l,I) = .120.

c. P(X= 1)=p(l,O) + p(l,I) + p(l,2) = .100; P(Y= I) = p(O,I) + ... + p(5,1) = .300.

d. P(overflow) = P(X + 3Y > 5) = I - P(X + 3Y", 5) = I - PX,Y)=(O,O) or ... or (5,0) or (0,1) or (1,1) or
(2,1)) = 1- .620 = .380.


e. The marginal probabilities for X (row sums from the joint probability table) are px(O) ~ .05, px(l) ~
.10, px(2) ~ .25, Px(3) = .30, px(4) = .20,px(5) ~ .10; thnse for Y (column sums) are p,(O) ~ .5, p,(\) =
.3, p,(2) ~.2. It is now easily verified that for every (x,y), p(x,y) ~ px(x) . p,(y), so X and Yare
independent.

9.
a. 1 = ∫∫ f(x, y) dx dy = ∫₂₀³⁰∫₂₀³⁰ K(x² + y²) dy dx = K∫₂₀³⁰∫₂₀³⁰ x² dy dx + K∫₂₀³⁰∫₂₀³⁰ y² dx dy
= 10K∫₂₀³⁰ x² dx + 10K∫₂₀³⁰ y² dy = 20K·(19,000/3) ⇒ K = 3/380,000.

b. P(X < 26 and Y < 26) = ∫₂₀²⁶∫₂₀²⁶ K(x² + y²) dy dx = K∫₂₀²⁶ [x²y + y³/3] (evaluated from y = 20 to 26) dx
= K∫₂₀²⁶ (6x² + 3192) dx = K(38,304) = .3024.

c. The region of integration is labeled III below.
[Sketch: the band |x − y| ≤ 2 between the lines y = x − 2 and y = x + 2, inside the square 20 ≤ x, y ≤ 30; region I lies above the band and region II below it.]

P(|X − Y| ≤ 2) = ∫∫_III f(x, y) dx dy = 1 − ∫∫_I f(x, y) dx dy − ∫∫_II f(x, y) dx dy
= 1 − ∫₂₀²⁸∫ from x+2 to 30 of f(x, y) dy dx − ∫₂₂³⁰∫ from 20 to x−2 of f(x, y) dy dx = .3593 (after much algebra).

d. f_X(x) = ∫ f(x, y) dy = ∫₂₀³⁰ K(x² + y²) dy = 10Kx² + K[y³/3] (from 20 to 30) = 10Kx² + .05, for 20 ≤ x ≤ 30.

e. f_Y(y) can be obtained by substituting y for x in (d); clearly f(x, y) ≠ f_X(x)·f_Y(y), so X and Y are not independent.

11.
a. Since X and Y are independent, p(x, y) = p_X(x)·p_Y(y) = (e^(−μ₁)μ₁^x/x!)·(e^(−μ₂)μ₂^y/y!) = e^(−μ₁−μ₂)μ₁^x μ₂^y/(x!·y!)
for x = 0, 1, 2, …; y = 0, 1, 2, ….

b. P(X + Y ≤ 1) = p(0,0) + p(0,1) + p(1,0) = … = e^(−μ₁−μ₂)[1 + μ₁ + μ₂].

c. P(X + Y = m) = Σ (k = 0 to m) p(k, m − k) = e^(−μ₁−μ₂) Σ (k = 0 to m) μ₁^k μ₂^(m−k)/[k!(m − k)!]
= [e^(−(μ₁+μ₂))/m!] Σ (k = 0 to m) (m choose k) μ₁^k μ₂^(m−k) = e^(−(μ₁+μ₂))(μ₁ + μ₂)^m/m! by the binomial theorem. This is the pmf of a

Poisson random variable with parameter fl, + fl, . Therefore, the total number of errors, X + Y, also has
a Poisson distribution, with parameter J11 + f.J2 .

13.

a. f(x,y) ~ fx(x) -j.Jy) ~ {e-'-Y x;;, O,y;;' 0


o otherwise

b. By independence,P(X'; I and Y'; I) =P(X,; 1) P(Y,; 1) ~(I- e-I) (1- e-I) = .400.

d. P(X+Y~I)~ f;e-'[I-e-('-'l]d<=I-k'=.264,
so P(l ,;X + Y~ 2) = P(X + Y'; 2) -P(X + Y'; 1) ~ .594'- .264 ~ .330.

15.
a. Each X, has cdfF(x) = P(X, <ox) = l_e-.u. Using this, the cdfof Yis
F(y) = P(Y,;y) = P(X1 ,;y U [X,';y nX, ~y])
=P(XI ,;y) + P(X,';y nX, ,;y) - P(X, ~yn [X,';y nX, ,;y])
= (I-e-'Y)+(I-e-'Y)'-(l-e-'Y)' fory>O.

JAy
The pdf of Y isf(y) = F(y) = A'Y + 2(1-e-'Y)( ,ie-'Y)- 3(1-e-'Y)' (A'Y) ~ 4,ie-Uy -3;1.e-
fory> O.

b. E(l') ~ r- y.(4;1.e-UY _3;1.e-JAY\AY=2(_1 ) __ 1 =2-


Jo r 2;i. 3A 3;1.

17.
a. Let A denote the disk of radius RI2. Then P((X; Y) lies in A) = If f(x,y)d<dy
A

__
Jf-'-dxd Y __-I-JI ""dxdY __area of A :rr(R / 2)' -1 = .25 Notice that, since the J' OIntpdf of X
A JrR2 lrR2 A :;rR2 JrR2 4
and Y is a constant (i.e., (X; Y) is unifonn over the disk), it will be the case for any subset A that P((X; Y)
lies in A) ~ area o,fA
:rrR

b. By the same ratio-of-areas idea, P(−R/2 ≤ X ≤ R/2, −R/2 ≤ Y ≤ R/2) = R²/(πR²) = 1/π. This region is the square depicted in the graph below.


c. Similarly, P(−R/√2 ≤ X ≤ R/√2, −R/√2 ≤ Y ≤ R/√2) = 2R²/(πR²) = 2/π. This region is the slightly larger square depicted in the graph below, whose corners actually touch the circle.

[Sketch: the square inscribed in the circle of radius R, with its corners on the circle.]


d. f_X(x) = ∫ f(x, y) dy = ∫ from −√(R² − x²) to √(R² − x²) of (1/(πR²)) dy = 2√(R² − x²)/(πR²), for −R ≤ x ≤ R.

Similarly, f_Y(y) = 2√(R² − y²)/(πR²) for −R ≤ y ≤ R. X and Y are not independent, since the joint pdf is not the product of the marginal pdfs: 1/(πR²) ≠ [2√(R² − x²)/(πR²)]·[2√(R² − y²)/(πR²)].

Throughout these solutions, K = 3/380,000, as calculated in Exercise 9.

19.
a. f_{Y|X}(y | x) = f(x, y)/f_X(x) = K(x² + y²)/(10Kx² + .05) for 20 ≤ y ≤ 30.
f_{X|Y}(x | y) = f(x, y)/f_Y(y) = K(x² + y²)/(10Ky² + .05) for 20 ≤ x ≤ 30.
!xlY( y) = frey) 10Ky' +.05

b. P(Y,,25Ix=22)= r30fYlx(yl22)dy = SJO K22)'+ i) dy= .5559.


J" as I OK(22)' + .05
pen: 25) = SJO fr(y)dy = SJO (loKi + .05)dy = .75. So, given that the right tire pressure is 22 psi, it's
as as
much less likely that the left tire pressure is at least 25 psi.

c. E( YIX= 22) =

S_
y. !'Ix(y I 22)dy = SJO y.
20
K22)' + i)
,
IOK(22) + .05
dy= 25.373 psi.

E(Y'I X= 22) = SJO y'.


k22)' + i) dy = 652.03 ~
1 Ok(22)' + .05
20

V(Y I X = 22) = E(Y' I X = 22) - [E( Y I X = 22)]' = 652.03 - (25.373)' = 8.24 ~


SD(Y! X= 22) = 2.87 psi.


21.

a. I XlIX1.X2 ( X3 I X1,X2 ) I(x"x"x,) I


h
were I.
,\'1,X
2
(XllX2::= ) th e margma
. I"joint df
POI fX an dX' 2, i.e.
L,, (x"x,)
Ix I' x 2 (x"x,)= f" I(x"x"x,)dx,.
-<Q

b. I(x"x"x,)
lx, (x,)
, where Jx,(x,) = L:f: I(x" x"x,)dx,dx, , the marginal pdf of X,.

Section 5.2
4 ,

23. E(X, -X,) = LL(x, -x,), p(x"x,)=(0-0)(.08) + (0 -1)(.07)+ .. + (4 -3)(.06) = .15.


,l"1"'Ox1",0

Note: It can be shown that E(X, - X,) always equals E(X,) - E(X,), so in this case we could also work out
the means of X, and X, from their marginal distributions: E(X,) ~ 1.70 and E(X,) ~ 1.55, so E(X, - X,) =
E(X,) - E(X,) = 1.70 - 1.55 ~ .15.

25. The expected value of X, being uniform on [L - A, L + A], is simply the midpoint of the interval, L. Since Y
has the same distribution, E(Y) ~ L as well. Finally, since X and Yare independent,
E(area) ~ E(XY) ~ E(X) . E(Y) = L .L ~ L'.

27. The amount oftime Annie waits for Alvie, if Annie arrives first, is Y - X; similarly, the time Alvie waits for
Annie is X - Y. Either way, the amount of time the first person waits for the second person is
h(X, Y) = IX - Y]. Since X and Yare independent, their joint pdf is given byJx(x) . f,(y) ~ (3x')(2y) = 6x'y.
From these, the expected waiting time is
f'f'
E[h(X,y)] ~ JoJolx- Yl-f(x,y)dxdy = f'fl,
oJolx- yl6x ydxdy

=
f'f'
o 0
(x- y)6x'ydydx+ f'f'0 .,
(x- y)'6x2ydydx =-+-=-
I
6 12
I I hour, or 15 minutes.
4

2 2
29. Cov(X,Y) ~ -- and J'x = J'y =-.
75 5

E(x') = Jx
o
I z
Jx(x)dx =12f
'"
0
12 1
x (I-x dx)=-=-,so
60 5
I
V(X)= --
5 ( )'
2
-
5
=-.
25
1

'1 I "(Y) =-,soPXY=fll1


Sunuany.> I -f, 50
--=-- 2
25 . vis 'vi 75 3

31.
lO
a. E(X)= flOxfx(x)dx=J x[IOKx'+.05Jdx= 1925 =25.329=E(Y),
20 20 76

E(XY)~
30130, 24375
120 20 xy-Ki
x +y')dxdy=--=641.447=>
38
Cov(X, Y) = 641.447 - (25.329)2 =-.1082.

b. E(x') = 130x' [IOKx' + .05Jdx


20
= 37040 = 649.8246 = E(Y')
57
=>

VeX) = V(Y) = 649.8246 - (25.329)' = 8.2664 => P = -.1082 -.0 J3].


,,1(8.2664)(8.2664)


33. Since E(X) = E(X)' E(Y), Covex, Y) = E(X) - E(X) . E(Y) = E(X) E(Y) - E(X) . E(Y) = 0, and since

Corr(X, Y) = Cov(X,Y) , then Corr(X, Y) = O.


UXCiy

35.
a. Cov(aX + b, cY + d) = E[(aX + b)(cY + dj] - E(aX + b) . E(cY + dj
=E[acXY + adX + bcY + bd] - (aE(X) + b)(cE(Y) + dj
= acE(X) + adE(X) + bcE(Y) + bd - [acE(X)E(Y) + adE(X) + bcE(Y) + bd]
= acE(X) - acE(X)E(Y) = ac[E(X) - E(X)E(Y)] = acCov(X, Y).

ac
CorraX+b cY+ = Cov(aX+b,cY+d) acCov(X,Y) =-ICorr(X,Y). When a
b. ( , d) SD(aX+b)SD(cY+d) lal!c\SD(X)SD(Y) lac

and c have the same signs, ac = lacl, and we have


Corr(aX + b, cY + d) = COIT(%,Y)

c. When a and c differ in sign, lac/ = -ac, and we have COIT(aX + b, cY + d) = -COIT(X, Y).

Section 5.3
37. The joint prof of X, and X, is presented below. Each joint probability is calculated using the independence
of X, and X,; e.g., 1'(25,25) = P(X, = 25) . P(X, = 25) = (.2)(.2) = .04.
X,
P(XI, x,) 25 40 65
.04 .10 .06 .2
25
40 .10 .25 .15 .5
x,
65 .06 .15 .09 .3
.2 .5 .3

a. For each coordinate in the table above, calculate x . The six possible resulting x values and their
corresponding probabilities appear in the accompanying pmf table.

From the table, E(X) = (25)(.04) + 32.5(.20) + ... + 65(.09) = 44.5. From the original pmf, p = 25(.2) +

40(.5) + 65(.3) = 44.5. SO, E(X) = p.

h. For each coordinate in the joint prnftable above, calculate $' = _l_
2-1 ;=1
(Xi - x)' . The four possible

resulting s' values and their corresponding probabilities appear in the accompanying pmftable.

o 112.5 312.5 800


s'
.38 .20 .30 .12

From the table, E(5") = 0(.38) + ... + 800(.12) = 212.25. From the originalymf,
,; = (25 _ 44.5)'(.2) + (40 - 44.5)'(.5) + (65 - 44.5)'(.3) = 212.25. So, E(8') =,;


39. X is a binomial random variable with n = 15 and p = .8. The values of X, then X/n = X/15 along with the
corresponding probabilities b(x; 15, .8) are displayed in the accompanying pmftable.

x 0 I 2 3 4 5 6 7 8 9 10
x/IS 0 .067 .133 .2 .267 .333 .4 .467 .533 .6 .667
p(x/15) .000 .000 .000 .000 .000 .000 .001 .003 .014 .043 103

x II 12 13 14 IS
x/IS .733 .8 .867 .933 I
p(x/15) .188 .250 .231 .132 .035

41. The tables below delineate all 16 possible (x" x,) pairs, their probabilities, the value of x for that pair, and
the value of r for that pair. Probabilities are calculated using the independence of X, and X,.

(x" x,) 1,1 1,2 1,3 1,4 2,1 2,2 2,3 2,4
probability .16 .12 .08 .04 .12 .09 .06 .03
x I 1.5 2 2.5 1.5 2 2.5 3
r 0 I 2 3 I 0 I 2

(x" x,) 3,1 3,2 3,3 3,4 4,1 4,2 4,3 4,4
probability .08 .06 .04 .02 .04 .03 .02 .01
x 2 2.5 3 3.5 2.5 3 3.5 4
r 2 I 0 I 3 2 I 2

a. Collecting the x values from the table above yields the pmftable below.

__ ...::X'---I----'I'---"1.""5
_...:2'----'2"'.=-5_...::3'----=-3.""5
_--,4_
p(x) .16 .24 .25 .20 .1004 .01

b. P( X ~ 2.5) = .16 + .24 + .25 + .20 = .85.


c. Collecting the r values from the table above yields the pmf table below.
r o 2 3

per) .30 .40 .22 .08

d. With n = 4, there are numerous ways to get a sample average of at most 1.5, since X̄ ≤ 1.5 iff the sum of the Xᵢ is at most 6. Listing out all options, P(X̄ ≤ 1.5) = P(1,1,1,1) + P(2,1,1,1) + … + P(1,1,1,2) + P(1,1,2,2) + … + P(2,2,1,1) + P(3,1,1,1) + … + P(1,1,1,3)
= (.4)⁴ + 4(.4)³(.3) + 6(.4)²(.3)² + 4(.4)³(.2) = .2400.
43. The statistic of interest is the fourth spread, or the difference between the medians of the upper and lower halves of the data. The population distribution is uniform with A = 8 and B = 10. Use a computer to generate samples of sizes n = 5, 10, 20, and 30 from a uniform distribution with A = 8 and B = 10. Keep the number of replications the same (say 500, for example). For each replication, compute the upper and lower fourth, then compute the difference. Plot the sampling distributions on separate histograms for n = 5, 10, 20, and 30; a simulation along these lines is sketched below.
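
A sketch of that simulation (assuming numpy/matplotlib; this is one possible implementation, and the fourth spread is computed here as the difference between the medians of the lower and upper halves of each sorted sample, one common convention):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)

def fourth_spread(sample):
    s = np.sort(sample)
    half = len(s) // 2
    lower, upper = s[:half], s[-half:]          # lower and upper halves
    return np.median(upper) - np.median(lower)

for n in [5, 10, 20, 30]:
    spreads = [fourth_spread(rng.uniform(8, 10, size=n)) for _ in range(500)]
    plt.hist(spreads, bins=20)
    plt.title(f"Sampling distribution of the fourth spread, n = {n}")
    plt.show()
```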


45. Using Minitab to generate the necessary sampling distribution, we can see that as n increases, the
distribution slowly moves toward normality. However, even the sampling distribution for n = 50 is not yet
approximately normal.

[Minitab normal probability plots of the simulated sampling distribution for n = 10 and n = 50: both plots show clear curvature (Anderson-Darling p-value ≈ 0.000), so neither sampling distribution is approximately normal.]

Section 5.4

47.
a. In the previous exercise, we found E(X) = 70 and SD(X) = 0.4 when n = 16. If the diameter
distribution is normal, then X is also normal, so
P(6% X;; 71) = p(69 -70 s Z ;; 71-70)= P(-2.5 ;; Z;; 2.5) = <1>(2.5)- <1>(-2.5)~ .9938 - .0062 =
0.4 0.4
.9876.

b. Withn=25, E(X)=70but SD(X) = I~ =0.32GPa.So,P(X


,,25
>71)= P(Z> 71-70)=
0.32
1- <l>(3.125)= I - .9991 = .0009.

49.
a. II P.M. - 6:50 P.M. = 250 minutes. With To=X, + ... + X" = total grading time,
PT, =np=(40)(6)=240 and aT, =a'J;, =37.95, so P(To;;250) '"

p(z;; 250 - 240) = p( Z ;; .26) =6026.


37.95


b. The sports report begins 260 minutes after he begins grading papers.
260-240)
P(To>260)=P ( Z> 37.95
=P(Z>.53)=.2981.

51. Individual times are given by X - N(l 0,2). For day I, n = 5, and so

p(X';; II) =p( Z,;; 121/-~) =P(Z';; 112)=.8686.


For day 2, n = 6, and so

P(X,;; 1l)~P(l'';;II)=P(Z,;;~I/-;)=P(Z';;I.22)=.8888

Finally, assuming the results oftbe two days are independent (which seems reasonable), the probability the
sample average is at most 11 min on both days is (.8686)(.8888) ~ .7720.

53.
a. With the values provided,

P(l'? 51) = P(Z? 51-~) = P(Z? 2.5) = 1-.9938 =.0062.


1.2/ 9

b. Replace n = 9 by n = 40, and

P(l'? 51)= p(Z? 51-Jfo) = P(Z ?5.27) ",0.


1.2/ 40

55.
a. With Y = # of tickets, Y has approximately a normal distribution with μ = 50 and σ = √50 = 7.071. So, using a continuity correction from [35, 70] to [34.5, 70.5],
P(35 ≤ Y ≤ 70) ≈ P((34.5 − 50)/7.071 ≤ Z ≤ (70.5 − 50)/7.071) = P(−2.19 ≤ Z ≤ 2.90) = .9838.

b. Now μ = 5(50) = 250, so σ = √250 = 15.811.
Using a continuity correction from [225, 275] to [224.5, 275.5], P(225 ≤ Y ≤ 275) ≈
P((224.5 − 250)/15.811 ≤ Z ≤ (275.5 − 250)/15.811) = P(−1.61 ≤ Z ≤ 1.61) = .8926.

c. Using software, part (a) = Σ (y = 35 to 70) e^(−50)·50^y/y! = .9862 and part (b) = Σ (y = 225 to 275) e^(−250)·250^y/y! = .8934. Both of the approximations in (a) and (b) are correct to 2 decimal places.
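
The exact Poisson sums in part c are easy to reproduce, and they confirm the normal approximations. A sketch assuming scipy (not part of the original solution):

```python
from scipy import stats

Y1, Y5 = stats.poisson(50), stats.poisson(250)

exact_a = Y1.cdf(70) - Y1.cdf(34)       # P(35 <= Y <= 70)   ~ .9862
exact_b = Y5.cdf(275) - Y5.cdf(224)     # P(225 <= Y <= 275) ~ .8934

norm_a = stats.norm(50, 50 ** 0.5)
norm_b = stats.norm(250, 250 ** 0.5)
approx_a = norm_a.cdf(70.5) - norm_a.cdf(34.5)
approx_b = norm_b.cdf(275.5) - norm_b.cdf(224.5)

print(exact_a, approx_a)
print(exact_b, approx_b)
```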

57. With the parameters provided, E(X) = αβ = 100 and V(X) = αβ² = 200. Using a normal approximation,
P(X ≤ 125) ≈ P(Z ≤ (125 − 100)/√200) = P(Z ≤ 1.77) = .9616.

Section 5.5

59.
a. E(X, + X, + X,) ~ 180, V(X, + x, + X,) = 45, SD(X, + X, + X,) = J4s = 6.708.
200-180)
P(X, + X, + X, ~ 200) ~ P Z < = P(Z ~ 2.98) = .9986.
( 6.708
P(150~X, + X, + X, ~ 200) = P(-4.47 s Z ~ 2.98) '" .9986.

., _Jl5_
b. 1'_ =1'=60 and ax - ,- r;; -2.236, so
x -m -..;3

P(X ~ 55) = p(Z ~ 55 -60)= P(Z ~ -2.236) = .9875 aod


2.236
P(58~ X s 62) = p( -.8% Z s .89) = .6266.

c. E(X, - .5X, - .5X,) = u : S Il- .5 Il = 0, while


V(X,_ .5X, -.5X,) = ai' +.250'; + .25a; = 22.5 => SD(XI ' .5X, -.5X,) ~ 4.7434. Thus,

P(-IOSX,-.5X,-SXlS5)~P ,10-0
--~Z~-- 5-0) =p ( -2.11:Z~1.05 ) = .8531-.0174=
( 4.7434 4.7434
.8357.

d. E(X, + X, + X,) = 150, V(X, + X, + X,) = 36 => SD(X, + X, + X,) = 6, so

P(X, +X, + X, ~ 200) = p(Z < 160~150)= P(Z ~ 1.67)=.9525.

Next, we want P(X, + X, ~ 2X,), or, written another way, P(X, + X, - 2X,,, 0).
E(X, + X, _ 2X,) = 40 + 50-2(60)=-30 and V(X, +X,,2X,) = a,' +a; +4a; =78=>
SD(X, + X, ' 2X,) = 8.832, so
P(X, + X, _ 2X,,, 0) = p(Z> 0-(-30)) = P(Z" 3.40) = .0003.
8.832

61.
a. The marginal pmfs of X aod Yare given in the solution to Exercise 7, from which E(X) = 2.8,
E(Y) ~ .7, V(X) = 1.66, and V(Y)= .61. Thus, E(X + Y) = E(X) + E(Y) = 3.5,
V(X+ Y) ~ V(X) + V(Y)= 2.27, aod the standard deviatioo of X + Y is 1.5 1.

b. E(3X + lOY) = 3E(X) + 10E(y) = 15.4, V(3X + lOY) = 9V(X) + 100 V(Y) = 75.94, and the standard
deviation of revenue is 8.71.

63.
a. E(X,) = 1.70, E(X,) = 1.55, E(X,X,) = LLX,x,p(xl,x,) = .. , = 3.33 , so
Xl ""2

Cov(XIo X,) ~ E(X,X,) - E(X,) E(X,) = 3.33 - 2.635 = .695.

b. V(X, + X,) = V(X,) + V(X,) + 2Cov(X1oX,) = I S9 + 1.0875 + 2(.695) = 4.0675. This is much larger
than V(X,) + V(X,), since the two variahles are positively correlated.


65.
, ,
a. E(X - Y) = 0; V(X - Y) = ~ +~ = .0032 => ax y = J.0032 = .0566
25 25 ~
=> P(-I s X -Y ~ .1)= P(-I77 ~ Z ~ 1.77)= .9232.

- - a CT
, ,
b. V(X-y)=-+-=.0022222=>aj,' y=.0471
36 36 ~
=> P(-.I ~ X -Y ~ .1)", P(-2.12 ~Z s 2.12)= .9660. The normal curve calculations are still justified
here, even though the populations are not normal, by the Central Limit Theorem (36 is a sufficiently
"large" sample size).

67. Letting X" X" and X, denote the lengths of the three pieces, the total length is
X, + X, -X,. This has a normal distribution with mean value 20 + 15 -I = 34 and variance .25 + .16 + .01
~ .42 from which the standard deviation is .6481. Standardizing gives
P(34.5 ~X, + X, ~X, ~ 35) = P(.77 s Z~ 1.54) = .1588.

69.
a. E(X, + X, + X,) = 800 + 1000 + 600 ~ 2400.

b. Assuming independence of X" X" X,. V(X, + X, + X,) ~ (16)' + (25)' + (18)' ~ 1205.

c. E(X, + X, + X,) = 2400 as before, but now V(X, + X, + X,)


= V(X,) + V(X,) + V(X,) + 2Cov(X" X,) + 2Cov(X" X,) + 2Cov(X" X,) = 1745, from which the
standard deviation is 41 .77.

71.
8. M = Q1XI +Q2X2 +W f" xdx= Q1X1 +a2XZ + 72W, so
Jo
E(M) ~ (5)(2) + (10)(4) + (72)(1.5) ~ 158 and
a~ = (5)' (.5)' +(10)' (1)' + (72)' (.25)' = 430.25 => aM = 20.74.

b. P(M s 200) = p(z,; 200-158)


20.74
= P(Z ~ 2.03) = .9788.

73.
a. Both are approximately normal by the Central Limit Theorem.

b. The difference of two rvs is just an example of a linear combination, and a linear combination of
normal rvs has a normal distribution, so X- Y has approximately a normal distribution with I'x~y = 5

8' 6'
and aJi r = -+- = 1621.
~ 40 35

c. P(_I~X_Y~I)"'P(-1-5
16213
';Z~~) 16213
=P(-3.70';Z~-2.47)",.0068.


d. p(X -h:lo) '" p(Z ~ 10-5)= P(Z ~3.08) = .0010. This probability is quite small, so such an
1.6213
occurrence is unlikely if III - P, = 5 , and we would thus doubt this claim.

Supplementary Exercises

75.
a. px(x) is obtained by adding joint probabilities across the row labeled x, resulting inpX<x) = .2, .5,.3 for
x ~ 12, 15,20 respectively. Similarly, from column sums p,(y) ~ .1, .35, .55 for y ~ 12, 15,20
respectively.

b. P(X 5, 15 and Y 5, (5) ~ p(12,12) + p(12, 15) + p(15,12) + p(15, 15) = .25.

c. px(12). p,{12) = (.2)(.1)" .05 = p(12,12), so X and Yare not independent. (Almost any other (x, y) pair
yields the same conclusion).

d. E(X +Y) = LL(x+ y)p(x,y) =33.35 (or ~ E(X) + E(Y) ~ 33.35).

e. E(IX -YI)= LLlx- ylp(x,y)= .. =385.

77.
a. 1= f" f" J(x,y)dxdy = r20f30- kxydydx+ f30f30- kxydydx = 81,250. k => k =_3_ .
.- -<0 Jc 20-x 20 0 3 81,250

x+ y=30

x+ =20

'O-' kxydy = k(250x -lOx') o 5,x5, 20


b JX (x) = {fI,20-.
30-x
kxydy=k(450x-30x'+tx') 205,x:;;30

By symmetry,f,(y) is obtained by substitutingy for x inJX<x).


SinceJx<25) > 0 andJy(25) > 0, butJt2S, 25) = 0 Jx<x) -Jy(y) " Jtx,y) for all (x, y), so X and Yare not
independent.

c. P(X + Y 5, 25) = J" r:kxydydx+ J"1"-' kxydydx


o 20-x 20 0
=_3_.
81,250
230,625 = .355.
24

d. E(X +Y) = E(X)+ E(Y) = 2E(X)= 2\ I:'x- k(250x-lOx')dx


+ f:: x k( 450x - 30x' + tx' )dx) = 2k(35 1,666.67) = 25.969.


e. E(XY) =f" f" xy' f(x,y)dxdy


-<l;l -<0
rr:
= J() 20-x
kx'y'dydx

+flOrlO-,kx' 'd dx=~33,250,000 136.4103 so


"Jo y Y 3 3 '
Cov(X, Y) = 136.4103 - (12.9845)' ~ -32.19.
-32.19
E(x') = E(Y') ~ 204.6154, so ";' = ": = 204.6154 -(12.9845)' = 36.0182 and p 36.0182 -.894.

f. V(X + Y) = V(X) + V(Y) + 2Cov(X, Y) = 7.66.

79. E(X +Y +2)=500+900+2000=3400.


_ _ _ 50' 100' 180' - - -
V(X + Y +Z)=-+--+--=123.014=> SD(X + Y +Z)~ 11.09.
365 365 365
P(X + Y + 2:<; 3500) = P(Z:<; 9.0) "" 1.

81.
a. E(N) '" ~ (10)(40) = 400 minutes.

b. We expect 20 components to come in for repair during a 4 hour period,


so E(NJ . '" = (20)(3.5) = 70.

83. 0.95= P(i1-.02:<;X:<;i1+.02)=p( -.O~:<;Z:<; .02,--J=P(-.2Fn:<;Z:<;.2.,[;,) ; since


.vt-i n .1/"n
p( -1.96:<; Z:<;1.96) = .95, .2.,[;, = 1.96 => n = 97. The Central Limit Theorem justifies our use of the
normal distribution here.

85. The expected value and standard deviation of volume are 87,850 and 4370.37, respectively, so

P(volume:<; 100,000) = p(z < 100,000-87,850)


4370.37
= P(Z:<; 2.78) =.9973.

87.
12-13 15-13)
a. P(12<X<15)~P ( -4-<Z<-4- =P(-0.25<Z<0.5)~.6915-.4013=.2092.

b. Since individual times are normally distributed, X is also normal, with the same mean p = 13 but with
standard deviation "x = a / Fn = 4/ Jl6 = I. Thus,

P(12 < X- < 15)= P (12-13


-1-< 15-13)
Z <-1- ~P(-l < Z< 2) = .9772- .1587 ~ .8185.

c. The mean isp = 13. A sample mean X based on n ~ 16 observation is likely to be closer to the true
mean than is a single observation X. That's because both are "centered" at e, but the decreased
variability in X gives it less ability to vary significantly from u.

d. P(X> 20)= 1-<1>(7) "" I -I = O.


89.
a. V(aX + Y) ~ a'o-~ + 2aCov(X,Y) + 0"; ~ a'o-~ + 2ao-Xo-yp + 0-;.

Substituting a ~ ~ yields 0"; + 20";p + 0-; ~ 20-; (I + p) ~ O. This implies (1 + p) ~ 0, or p ~-1.


o-x

b. Tbe same argument as in a yields 20"; (1- p) ~ 0, from whichp:S I.

c. Suppose p ~ I. Then V (aX - Y) ~ 20-; (1- P ) ~ a , which implies that aX - Y is a constant. Solve for Y
and Y= aX - (constant), whicb is oftbe form aX + b.

91.
a. With Y=X, + X" Fy(Y) = J'{J""0 ZVl,,()'"r 1v /2
o J
Y~l
1 "'--, ~,-=
r ( vz/2 ).X' x,' e 'dx,
I
}dx.. Butthe

1 2 J e
inner integral can be shown to be equal to I y[(V +V )12 -J -Y/2 from which the result
2("+")"r(v, +v,)f2) ,

follows.

b. By 3, 2 2 + Z: is chi-squared with v = 2, so (212 + Z~)+ Z; is chi-squared with v = 3, etc., until


1

212+ ...+Z; is chi-squared withv=n.

c. Xi; J1 is standard normal, so [ Xi; J1 J is chi-squared with v ~ 1, so the sum is chi-squared witb

parameter v = n.

93.
a. V(X,)~V(W+E,)~o-~+O"~~V(W+E,)~V(X,) and Cov(X"X,)~
Cov(W + E"W + E,) ~ Cov(W,W) + Cov(W,E,) + Cov(E"W) + Cov(E"E,) ~

Cov(W, W)+O+O+O= V(W)~ o-,~.

Tbus, P ,0-'w ,.
o"w+o"

I
b. P ~.9999.
1+.0001

95. E(Y) '" h(J1" 1'" /-l" 1',) ~ 120[+0-+ +, ++0] ~ 26.

The partial derivatives of h(j1PJ12,J13,J.14) with respect to X], X2, X3, and X4 are _ x~,
x,
,' and
_ X4
x3

~+~+~, respectively. Substituting x, = 10, x, = 15, X3 = 20, and x, ~ 120 gives -1.2, -.5333, -.3000_
XI x2
X:J
and .2167, respectively, so V(Y) ~ (1)(_1.2)' + (I )(-.5333)' + (1.5)(-.3000)' + (4.0)(.2167)' ~ 2.6783, and
the approximate sd of Y is 1.64.


97. Since X and Yare standard normal, each has mean 0 and variance I.
a. Cov(X, U) ~ Cov(X, .6X + .81') ~ .6Cov(X,X) + .8Cov(X, 1') ~ .6V(X) + .8(0) ~ .6(1) ~ .6.
The covariance of X and Y is zero because X and Yare independent.
Also, V(U) ~ V(.6X + .81') ~ (.6)'V(X) + (.8)'V(1') ~ (.36)(1) + (.64)(1) ~ I. Therefore,

(.x; U) = Cov(X,U)
Carr
.6
r; r; =.
6, t he coe ffiicient
. X
on .
(J' x (J'U " "t!
b. Based on part a, for any specified p we want U ~ pX + bY, where the coefficient b on Y has the feature
that p' + b' ~ I (so that the variance of U equals I). One possible option for b is b ~ ~l- p' , from

which U~ pX + ~l- p' Y.

CHAPTER 6

Section 6.1

1.
a. We use the sample mean, x̄, to estimate the population mean μ: μ̂ = x̄ = Σxᵢ/n = 219.80/27 = 8.1407.

b. We use the sample median, x̃ = 7.7 (the middle observation when arranged in ascending order).

c. We use the sample standard deviation, s = √s² = √[(1860.94 − (219.8)²/27)/26] = 1.660.

d. With "success" = observation greater than 10, x = # of successes = 4, and p̂ = x/n = 4/27 = .1481.

e. We use the sample (std dev)/(mean): s/x̄ = 1.660/8.1407 = .2039.
3.
a. We use the sample mean, x̄ = 1.3481.

b. Because we assume normality, the mean = median, so we also use the sample mean x̄ = 1.3481. We could also easily use the sample median.

c. We use the 90th percentile of the sample: μ̂ + (1.28)σ̂ = x̄ + 1.28s = 1.3481 + (1.28)(.3385) = 1.7814.

d. Since we can assume normality,
P(X < 1.5) ≈ P(Z < (1.5 − x̄)/s) = P(Z < (1.5 − 1.3481)/.3385) = P(Z < .45) = .6736.

e. The estimated standard error of x̄ is σ̂/√n = s/√n = .3385/√16 = .0846.

5. Let θ = the total audited value. Three potential estimators of θ are θ̂₁ = N·X̄, θ̂₂ = T − N·D̄, and θ̂₃ = T·(X̄/Ȳ).

From the data, ȳ = 374.6, x̄ = 340.6, and d̄ = 34.0. Knowing N = 5,000 and T = 1,761,300, the three corresponding estimates are θ̂₁ = (5,000)(340.6) = 1,703,000, θ̂₂ = 1,761,300 − (5,000)(34.0) = 1,591,300, and θ̂₃ = 1,761,300(340.6/374.6) = 1,601,438.281.


7.

a.

b. f = 10,000 .u = 1,206,000.
c. 8 of 10 houses in the sample used at least 100 therms (the "successes"), so p=~ = .80.

d. The ordered sample values are 89, 99,103,109,118,122,125,138,147,156, from which the two

middle values are 118 and 122, so ft=i 118+122 = 120.0.


2

9.
a. E(X̄) = μ = E(X), so X̄ is an unbiased estimator for the Poisson parameter μ. Since n = 150,
μ̂ = x̄ = Σxᵢ/n = [(0)(18) + (1)(37) + … + (7)(1)]/150 = 317/150 = 2.11.

b. σ_X̄ = σ/√n = √μ/√n, so the estimated standard error is √x̄/√n = √2.11/√150 = .119.

ll.

-.!,(nIPlql)+-.!,-(n,p,q,)= Plql + p,q, , and the standard errar is the square root of this quantity.
nl n,z III nz

"
With PI = -,Xl" ql = 1- PI
""
, P, = -,Xz"" q, = 1- P" the estimated standar d error is
Plql + p,q, .
c.
til nz til nz

d. t z: _. )= 127 _ 176 =.635-.880=-.245


\1'1 P, 200 200

(.635)(.365) + (.880)(.120) = .041


e.
200 200

13. μ = E(X) = ∫ from −1 to 1 of x·½(1 + θx) dx = [x²/4 + θx³/6] (from −1 to 1) = θ/3 ⇒ θ = 3μ ⇒
θ̂ = 3X̄ ⇒ E(θ̂) = E(3X̄) = 3E(X̄) = 3μ = 3(θ/3) = θ.


15.
a. E(X²) = 2θ implies that E(X²/2) = θ. Consider θ̂ = ΣXᵢ²/(2n). Then
E(θ̂) = E(ΣXᵢ²)/(2n) = ΣE(Xᵢ²)/(2n) = Σ2θ/(2n) = 2nθ/(2n) = θ, implying that θ̂ is an unbiased estimator for θ.

b. Σxᵢ² = 1490.1058, so θ̂ = 1490.1058/20 = 74.505.

17,
a, E(p)~f
x=ox+r-}
1'-1 ,(X+/,-l)
x
p'(l-p)'

~p f (x+
-,,=,0
I'
x!(r - 2)!
r
-2)! p,-l ,(1_ P = p f(X+
..:=0
1'-2t,_1 (1- p)' = p fnb(x;/,-l,p)
x r x-o
=p ,

. ' 5-1
b. For the given sequence, x ~ 5, so P = -4 = .4 44'
5+5-1 9

19,
a, 2=.5p+.15=>2A.=p+,3,so p=22-,3 and P=2.i-,3~2(~)-.3; theestimateis

2G~)-3=2

b. E(p) ~E( 2.i -,3) = 2E(.i)-,3 = 22-,3 = P , as desired.

c. Here 2=,7p+(,3)(,3),
10
so p=-2--
9
and p=-
,10(Y)
- --,
9
7 70 7 n 70


Section 6.2

21.
a. E(X)=jJ-r(I+ ~) and E(X')=V(X)+[E(X)]'=jJ'r(l+ ~).sothe moment estimators a and

,
jJ are the solution to _ , ( I) I"
x s s: 1+-;:- '-L.,Xi' =jJ'r ,( 2)1+-;:- . Thus /3= , ( X ) ,so once a
a n a r I +.!-
a
has been determined r( I + l) is evaluated and jj then computed. Since X' = jj' -r' (I + ) ,

r ( a)2) ,
1+-
so this equation must be solved to obtain a.
r- ( I+~
a

r(l+i) 1 _ _ r'(I+) .
b. F rom a, ~(16,500) 2' = 105 ' so -- - .95 - ( ) ,and from the hint,
20 28.0 r- ( I+.!- ) 1.05 r I+~
a a
I ' x 28.0
a =.2 ~ a=5 Then jJ = r(l.2) = r(12)

23. Determine the joint pdf (aka the likelihood function), take a logarithm, and then use calculus:
f(x₁, …, xₙ; θ) = Π (i = 1 to n) (1/√(2πθ))e^(−xᵢ²/(2θ)) = (2πθ)^(−n/2)·e^(−Σxᵢ²/(2θ))
ℓ(θ) = ln[f(x₁, …, xₙ; θ)] = −(n/2)ln(2π) − (n/2)ln(θ) − Σxᵢ²/(2θ)
ℓ′(θ) = −n/(2θ) + Σxᵢ²/(2θ²) = 0 ⇒ −nθ + Σxᵢ² = 0
Solving for θ, the maximum likelihood estimator is θ̂ = (1/n)ΣXᵢ².
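
The closed-form answer θ̂ = Σxᵢ²/n can be checked by maximizing the log likelihood numerically. A sketch assuming scipy/numpy and a made-up data vector (hypothetical values, not from the text):

```python
import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([1.2, -0.4, 0.8, 2.1, -1.5])     # hypothetical sample

def neg_log_lik(theta):
    # model: X ~ N(0, theta), f(x; theta) = (2*pi*theta)^(-1/2) * exp(-x^2/(2*theta))
    n = len(x)
    return 0.5 * n * np.log(2 * np.pi * theta) + np.sum(x ** 2) / (2 * theta)

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 100), method="bounded")
print(res.x, np.mean(x ** 2))    # numerical maximizer matches sum(x_i^2)/n
```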

25.
a. μ̂ = x̄ = 384.4; s² = 395.16, so σ̂² = (1/n)Σ(xᵢ − x̄)² = (9/10)(395.16) = 355.64 and σ̂ = √355.64 = 18.86 (this is not s).

b. The 95th percentile is μ + 1.645σ, so the mle of this is (by the invariance principle) μ̂ + 1.645σ̂ = 415.42.

c. The mle of P(X ≤ 400) is, by the invariance principle, Φ((400 − μ̂)/σ̂) = Φ((400 − 384.4)/18.86) = Φ(0.83) = .7967.
.7967.


27.
( r,l ,u,lp
a. J(XI' ...,x,;a,fJ) ~ XIXp:~~' (:) , so the log likelihood is

(a-In)n(x)-~'>i
, fJ
-naln(fJ)-nln1(a). Equating both .s:
da
and ~
dB
to o yields

L;ln(x,)-nln(fJ)-1l :a r(a)~O and ~:i~n; ~O, a very difficult system of equations to solve.

b. From the second equation in a, L>


fJ
~na ~ x ~ afJ ~ J1 , so the mle of I' is jj ~ X.

29.
a. The joint pdf (likelihood function) is
.) _ {A'e'2l'I'"O) XI;' O,...,x, ;, 0
f ( xll,xlI,il..,B - .
o otherwise
Notice that XI ;, O,... ,x, ;, 0 iff min (Xi);' 0 , and that -AL( Xi-0) ~ -AUi + nAO.

Thus likelihood ~ {A' exp] -AU, )exp( nAO) min (x, );, 0
o min(x,)<O
Consider maximization with respect to e Because the exponent nAO is positive, increasing Owill
increase the likelihood provided that min (Xi) ;, 0; if we make 0 larger than min (x,) , the likelihood
drops to O. This implies that the mle of 0 is 0 ~ min (x,), The log likelihood is now
n
n In(A) - AL( Xi - 0). Equating the derivative W.r.t. A to 0 and solving yields .i ~ (n 0)
L x,-O

h. O~min(x,)~.64, and ui~55.80,so.i 10 .202


55.80-6.4

Supplementary Exercises

31. Substitute k ~ dO'y into Chebyshev's inequality to write P(IY - 1'r1 ~ '):S I/(dO'y)' ~ V(Y)le'. Since
E(X)~l'and V(X) ~ 0"1 n, we may then write p(lx _pl.,,)$ 0" ;n. As n -> 00, this fraction converges
e
to 0, hence P(lx - 1'1" &) -> 0, as desired.

33. Let XI ~ the time until the first birth, xx ~ the elapsed time between the first and second births, and so on.
Then J(x" ...,X,;A) ~ M' ....
' (ZA )e" ....' ...(IlA )e" .... ~ n!A'e'2l'h,. Thus the log likelihood is

In(n!)+nln(A)-ALkx,. Taking!'- and equating to 0 yields .i~_n_.


dA Lkx,
For the given sample, n ~ 6, Xl ~ 25.2, x, ~ 41. 7 - 25.2 ~ 16.5, x, ~ 9.5, X4 ~ 4.3, x, ~ 4.0, X6~ 2.3; so
6 6
L;kx, ~ (1)(25.2) + (2)(16.5) + ...+ (6)(2.3) ~ 137.7 and .i ~ -- ~ .0436.
,., 137.7


35.
Xi+X' 23.5 26.3 28.0 28.2 29.4 29.5 30.6 31.6 33.9 49.3
23.5 23.5 24.9 25.75 25.85 26.45 26.5 27.05 27.55 28.7 36.4
26.3 26.3 27.15 27.25 27.85 27.9 28.45 28.95 30.1 37.8
28.0 280 28.1 28.7 28.75 29.3 29.8 30.95 38.65
28.2 28.2 28.8 28.85 29.4 29.9 31.05 38.75
29.4 29.4 29.45 30.0 30.5 30.65 39.35
29.5 29.5 30.05 30.55 31.7 39.4
30.6 30.6 31.1 32.25 39.95
31.6 31.6 32.75 40.45
33.9 33.9 41.6
49.3 49.3

There are 55 averages, so the median is the 28'h in order of increasing magnitude. Therefore, jJ = 29.5.

37. Let c = 1('11. Then E(cS) ~ cE(S), and c cancels with the two 1 factors and the square root in E(S),
l(t) -ti
1(9.5) (8.5)(7.5) (.5)1(.5) (8.5)(7.5) .(.5).,J; 2
1eaving just o. When n ~ 20, c 12 12 12 = 1.013 .
l(lO)y~ (I0-1)!yf9 9!yf9

CHAPTER 7

Section 7.1

1.
a. z_{α/2} = 2.81 implies that α/2 = 1 − Φ(2.81) = .0025, so α = .005 and the confidence level is 100(1−α)% = 99.5%.

b. z_{α/2} = 1.44 implies that α = 2[1 − Φ(1.44)] = .15, and the confidence level is 100(1−α)% = 85%.

c. 99.7% confidence implies that α = .003, α/2 = .0015, and z.0015 = 2.96. (Look for cumulative area equal to 1 − .0015 = .9985 in the main body of Table A.3.) Or, just use z ≈ 3 by the empirical rule.

d. 75% confidence implies α = .25, α/2 = .125, and z.125 = 1.15.

3.
a. A 90% confidence interval will be narrower. The z critical value for a 90% confidence level is 1.645, smaller than the z of 1.96 for the 95% confidence level, thus producing a narrower interval.

b. Not a correct statement. Once an interval has been created from a sample, the mean μ is either enclosed by it, or not. We have 95% confidence in the general procedure, under repeated and independent sampling.

c. Not a correct statement. The interval is an estimate for the population mean, not a boundary for population values.

d. Not a correct statement. In theory, if the process were repeated an infinite number of times, 95% of the intervals would contain the population mean μ. We expect 95 out of 100 intervals will contain μ, but we don't know this to be true.

5.
a. 4.85 ± (1.96)(.75)/√20 = 4.85 ± .33 = (4.52, 5.18).

b. z_{α/2} = z_{.01} = 2.33, so the interval is 4.56 ± (2.33)(.75)/√16 = (4.12, 5.00).

c. n = [2(1.96)(.75)/.40]² = 54.02, so round up to n = 55.

d. Width w = 2(.2) = .4, so n = [2(2.58)(.75)/.4]² = 93.61, so round up to n = 94.
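A short Python sketch of the same interval and sample-size computations (SciPy is assumed to be available for the normal critical value); the numbers reproduce parts a and c above.

```python
from math import sqrt, ceil
from scipy.stats import norm

sigma, n, xbar = 0.75, 20, 4.85
z = norm.ppf(0.975)                            # about 1.96 for 95% confidence
half = z * sigma / sqrt(n)
print(xbar - half, xbar + half)                # about (4.52, 5.18)

w = 0.40                                       # desired interval width
print(ceil((2 * z * sigma / w) ** 2))          # 55
```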


7. If L = 2z_{α/2}σ/√n and we increase the sample size by a factor of 4, the new length is
L′ = 2z_{α/2}σ/√(4n) = [2z_{α/2}σ/√n](1/2) = L/2. Thus halving the length requires n to be increased fourfold. If n′ = 25n, then L′ = L/5, so the length is decreased by a factor of 5.

9.

a. (x̄ − 1.645σ/√n, ∞). From 5a, x̄ = 4.85, σ = .75, and n = 20; 4.85 − 1.645(.75)/√20 = 4.5741, so the interval is (4.5741, ∞).

b. (x̄ − z_α·σ/√n, ∞).

c. (−∞, x̄ + z_α·σ/√n). From 4a, x̄ = 58.3, σ = 3.0, and n = 25; 58.3 + 2.33(3.0)/√25 = 59.70, so the interval is (−∞, 59.70).

11. Y is a binomial rv with n = 1000 and p = .95, so E(Y) = np = 950, the expected number of intervals that capture μ, and σ_Y = √(npq) = 6.892. Using the normal approximation to the binomial distribution,
P(940 ≤ Y ≤ 960) = P(939.5 ≤ Y ≤ 960.5) ≈ P(−1.52 ≤ Z ≤ 1.52) = .9357 − .0643 = .8714.

Section 7.2

13.
a. x̄ ± z_{.025}·s/√n = 654.16 ± 1.96(164.43/√50) = (608.58, 699.74). We are 95% confident that the true average CO₂ level in this population of homes with gas cooking appliances is between 608.58 ppm and 699.74 ppm.

b. w = 50 = 2(1.96)(175)/√n ⇒ √n = 2(1.96)(175)/50 = 13.72 ⇒ n = (13.72)² = 188.24, which rounds up to 189.

15.
a. z_α = .84, and Φ(.84) = .7995 ≈ .80, so the confidence level is 80%.

b. z_α = 2.05, and Φ(2.05) = .9798 ≈ .98, so the confidence level is 98%.

c. z_α = .67, and Φ(.67) = .7486 ≈ .75, so the confidence level is 75%.


17. x̄ − z_α·s/√n = 135.39 − 2.33(4.59/√153) = 135.39 − .865 = 134.53. We are 99% confident that the true average ultimate tensile strength is greater than 134.53.

19. p̂ = 201/356 = .5646. We calculate a 95% confidence interval for the proportion of all dies that pass the probe:
[.5646 + (1.96)²/(2(356)) ± 1.96√(.5646(.4354)/356 + (1.96)²/(4(356)²))] / [1 + (1.96)²/356] = (.5700 ± .0518)/1.0108 = (.513, .615).
The simpler CI formula (7.11) gives .5646 ± 1.96√(.5646(.4354)/356) = (.513, .616), which is almost identical.

21. For a one-sided bound, we need z_α = z_{.05} = 1.645; p̂ = 250/1000 = .25; and p̃ = (.25 + 1.645²/2000)/(1 + 1.645²/1000) = .2507. The resulting 95% upper confidence bound for p, the true proportion of such consumers who never apply for a rebate, is
.2507 + 1.645√(.25(.75)/1000 + (1.645)²/(4(1000)²)) / (1 + (1.645)²/1000) = .2507 + .0225 = .2732.
Yes, there is compelling evidence the true proportion is less than 1/3 (.3333), since we are 95% confident this true proportion is less than .2732.

23.
a. With such a large sample size, we can use the "simplified" CI formula (7.11). With p̂ = .25, n = 2003, and z_{α/2} = z_{.005} = 2.576, the 99% confidence interval for p is
p̂ ± z_{α/2}√(p̂q̂/n) = .25 ± 2.576√(.25(.75)/2003) = .25 ± .025 = (.225, .275).

b. Using the "simplified" formula for sample size and p̂ = q̂ = .5,
n = 4z²p̂q̂/w² = 4(2.576)²(.5)(.5)/(.05)² = 2654.31.
So, a sample of size at least 2655 is required. (We use p̂ = q̂ = .5 here, rather than the values from the sample data, so that our CI has the desired width irrespective of what the true value of p might be. See the textbook discussion toward the end of Section 7.2.)

25.

a. n = [2(1.96)²(.25) − (1.96)²(.01) + √(4(1.96)⁴(.25)(.25 − .01) + .01(1.96)⁴)] / .01 ≈ 381

b. n = [2(1.96)²(⅓)(⅔) − (1.96)²(.01) + √(4(1.96)⁴(⅓)(⅔)((⅓)(⅔) − .01) + .01(1.96)⁴)] / .01 ≈ 339


27. Note that the midpoint of the new interval is (x + z²/2)/(n + z²), which is roughly (x + 2)/(n + 4) with a confidence level of 95% and approximating 1.96 ≈ 2. The variance of this quantity is np(1 − p)/(n + z²)², or roughly p(1 − p)/(n + 4). Now replacing p with (x + 2)/(n + 4), we have
(x + 2)/(n + 4) ± z_{α/2}√[ ((x + 2)/(n + 4))·(1 − (x + 2)/(n + 4)) / (n + 4) ].
For clarity, let x* = x + 2 and n* = n + 4; then p̃ = x*/n* and the formula reduces to p̃ ± z_{α/2}√(p̃q̃/n*), the desired conclusion. For further discussion, see the Agresti article.

Section 7.3

29.
a. t_{.025,10} = 2.228    d. t_{.005,50} = 2.678
b. t_{.025,20} = 2.086    e. t_{.01,25} = 2.485
c. t_{.005,20} = 2.845    f. −t_{.025,5} = −2.571

31.
a. t_{.05,10} = 1.812    d. t_{.01,4} = 3.747
b. t_{.05,15} = 1.753    e. t_{.02,24} ≈ t_{.025,24} = 2.064
c. t_{.01,15} = 2.602    f. t_{.01,37} ≈ 2.429

33.
a. The boxplot indicates a very slight positive skew, with no outliers. The data appears to center near
438.

[Boxplot of the polymer data (scale roughly 420 to 470); omitted.]

b. Based on a normal probability plot, it is reasonable to assume the sample observations came from a normal distribution.


c. With df = n − 1 = 16, the critical value for a 95% CI is t_{.025,16} = 2.120, and the interval is 438.29 ± (2.120)·s/√17 = 438.29 ± 7.785 = (430.51, 446.08). Since 440 is within the interval, 440 is a plausible value for the true mean. 450, however, is not, since it lies outside the interval.

35. n = 15, x̄ = 25.0, s = 3.5; t_{.025,14} = 2.145.

a. A 95% CI for the mean: 25.0 ± 2.145(3.5/√15) = (23.06, 26.94).

b. A 95% prediction interval: 25.0 ± 2.145(3.5)√(1 + 1/15) = (17.25, 32.75). The prediction interval is about 4 times wider than the confidence interval.
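A small Python sketch of the same two intervals from the summary statistics above (SciPy is assumed for the t critical value).

```python
from math import sqrt
from scipy.stats import t

n, xbar, s = 15, 25.0, 3.5
tc = t.ppf(0.975, df=n - 1)                                # about 2.145
print(xbar - tc * s / sqrt(n), xbar + tc * s / sqrt(n))    # CI, about (23.06, 26.94)
print(xbar - tc * s * sqrt(1 + 1/n), xbar + tc * s * sqrt(1 + 1/n))  # PI, about (17.25, 32.75)
```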

37.
a. A 95% CI: .9255 ± 2.093(.0181) = .9255 ± .0379 = (.8876, .9634).

b. A 95% P.I.: .9255 ± 2.093(.0809)√(1 + 1/20) = .9255 ± .1735 = (.7520, 1.0990).

c. A tolerance interval is requested, with k = 99, confidence level 95%, and n = 20. The tolerance critical value, from Table A.6, is 3.615. The interval is .9255 ± 3.615(.0809) = (.6330, 1.2180).

39.
a. Based on the plot, generated by Minitab, it is plausible that the population distribution is normal.
[Normal probability plot of volume: n = 13, mean 52.2308, StDev 14.8557; Anderson–Darling normality test A² = 0.360, P-value = 0.392.]

b. We require a tolerance interval. From Table A.6, with 95% confidence, k = 95, and n = 13, the tolerance critical value is 3.081. x̄ ± 3.081s = 52.231 ± 3.081(14.856) = 52.231 ± 45.771 = (6.460, 98.002).

c. A prediction interval, with t_{.025,12} = 2.179:
52.231 ± 2.179(14.856)√(1 + 1/13) = 52.231 ± 33.593 = (18.638, 85.824).


41. The 20 df row of Table A.5 shows that 1.725 captures upper-tail area .05 and 1.325 captures upper-tail area .10. The confidence level for each interval is 100(central area)%.
For the first interval, central area = 1 − sum of tail areas = 1 − (.25 + .05) = .70, and for the second and third intervals the central areas are 1 − (.20 + .10) = .70 and 1 − (.15 + .15) = .70. Thus each interval has confidence level 70%. The width of the first interval is s(.687 + 1.725)/√n = 2.412·s/√n, whereas the widths of the second and third intervals are 2.185 and 2.128 standard errors, respectively. The third interval, with symmetrically placed critical values, is the shortest, so it should be used. This will always be true for a t interval.

Section 7.4

43.
a. χ²_{.05,10} = 18.307

b. χ²_{.95,10} = 3.940

c. Since 10.982 = χ²_{.975,22} and 36.781 = χ²_{.025,22}, P(χ²_{.975,22} ≤ χ² ≤ χ²_{.025,22}) = .95.

d. Since 14.611 = χ²_{.95,25} and 37.652 = χ²_{.05,25}, P(χ² < 14.611 or χ² > 37.652) = 1 − P(χ² > 14.611) + P(χ² > 37.652) = (1 − .95) + .05 = .10.

45. For the n = 8 observations provided, the sample standard deviation is s = 8.2115. A 99% CI for the population variance σ² is given by
((n − 1)s²/χ²_{.005,n−1}, (n − 1)s²/χ²_{.995,n−1}) = (7(8.2115)²/20.276, 7(8.2115)²/0.989) = (23.28, 477.25).
Taking square roots, a 99% CI for σ is (4.82, 21.85). Validity of this interval requires that coating layer thickness be (at least approximately) normally distributed.
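The chi-squared interval above can be checked with a few lines of Python (SciPy assumed); the quantile calls supply the two critical values 20.276 and 0.989.

```python
from math import sqrt
from scipy.stats import chi2

n, s = 8, 8.2115
lo = (n - 1) * s**2 / chi2.ppf(0.995, df=n - 1)   # divide by the larger critical value
hi = (n - 1) * s**2 / chi2.ppf(0.005, df=n - 1)   # divide by the smaller critical value
print(lo, hi)                                     # variance CI, about (23.3, 477.3)
print(sqrt(lo), sqrt(hi))                         # CI for sigma, about (4.82, 21.85)
```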

Supplementary Exercises

47.
a. n = 48, x̄ = 8.079, s² = 23.7017, and s = 4.868.
A 95% CI for μ = the true average strength is
x̄ ± 1.96·s/√n = 8.079 ± 1.96(4.868/√48) = 8.079 ± 1.377 = (6.702, 9.456).

b. p̂ = 13/48 = .2708. A 95% CI for p is
[.2708 + 1.96²/(2(48)) ± 1.96√(.2708(.7292)/48 + 1.96²/(4(48)²))] / [1 + 1.96²/48] = (.3108 ± .1319)/1.0800 = (.166, .410).


49. The sample mean is the midpoint of the interval: x̄ = (60.2 + 70.6)/2 = 65.4 N. The 95% confidence margin of error for the mean must have been 5.2, so t·s/√n = 5.2. The 95% confidence margin of error for a prediction interval (i.e., for an individual) is t·s√(1 + 1/n) = √(n + 1)·(t·s/√n) = √(11 + 1)·(5.2) = 18.0. Thus, the 95% PI is 65.4 ± 18.0 = (47.4 N, 83.4 N). (You could also determine t from n and α, then s separately.)

51.
a. With p̂ = 31/88 = .352, p̃ = (.352 + 1.96²/(2(88)))/(1 + 1.96²/88) = .358, and the CI is
.358 ± 1.96√(.352(.648)/88 + 1.96²/(4(88)²)) / (1 + 1.96²/88) = (.260, .456). We are 95% confident that between 26.0% and 45.6% of all athletes under these conditions have an exercise-induced laryngeal obstruction.

b. Using the "simplified" formula, n = 4z²p̂q̂/w² = 4(1.96)²(.5)(.5)/(.04)² = 2401. So, roughly 2400 people should be surveyed to assure a width no more than .04 with 95% confidence. Using Equation (7.12) gives the almost identical n = 2398.

c. No. The upper bound in (a) uses a z-value of 1.96 = z_{.025}. So, if this is used as an upper bound (and hence .025 equals α rather than α/2), it gives a (1 − .025) = 97.5% upper bound. If we want a 95% confidence upper bound for p, 1.96 should be replaced by the critical value z_{.05} = 1.645.

53. The CI is
(X̄₁ + X̄₂ + X̄₃)/3 − X̄₄ ± z_{α/2}√[ (1/9)(s₁²/n₁ + s₂²/n₂ + s₃²/n₃) + s₄²/n₄ ].
For the given data, θ̂ = −.50 and σ̂_θ̂ = .1718, so the interval is −.50 ± 1.96(.1718) = (−.84, −.16).

55. The specified condition is that the interval be of length .2, so n = [2(1.96)(.8)/.2]² = 245.86, so n = 246.

57. Proceeding as in Example 7.5 with T_r replacing Σxᵢ, the CI for 1/λ is (2t_r/χ²_{α/2,2r}, 2t_r/χ²_{1−α/2,2r}), where t_r = y₁ + ⋯ + y_r + (n − r)y_r. In Example 6.7, n = 20, r = 10, and t_r = 1115. With df = 20, the necessary critical values are 9.591 and 34.170, giving the interval (65.3, 232.5). This is obviously an extremely wide interval. The censored experiment provides less information about 1/λ than would an uncensored experiment with n = 20.


59.
a. P((α/2)^(1/n) ≤ max(Xᵢ)/θ ≤ (1 − α/2)^(1/n)) = ∫ n·u^(n−1) du over [(α/2)^(1/n), (1 − α/2)^(1/n)] = uⁿ evaluated between those limits = (1 − α/2) − α/2 = 1 − α.
From the probability statement, (α/2)^(1/n) ≤ max(Xᵢ)/θ ≤ (1 − α/2)^(1/n) with probability 1 − α, so taking the reciprocal of each endpoint and interchanging gives the CI
( max(Xᵢ)/(1 − α/2)^(1/n), max(Xᵢ)/(α/2)^(1/n) ) for θ.

b. α^(1/n) ≤ max(Xᵢ)/θ ≤ 1 with probability 1 − α, so 1 ≤ θ/max(Xᵢ) ≤ 1/α^(1/n) with probability 1 − α, which yields the interval ( max(Xᵢ), max(Xᵢ)/α^(1/n) ).

c. It is easily verified that the interval of b is shorter: draw a graph of the pdf of U = max(Xᵢ)/θ and verify that the shortest interval which captures area 1 − α under the curve is the rightmost such interval, which leads to the CI of b. With α = .05, n = 5, and max(xᵢ) = 4.2, this yields (4.2, 7.65).
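A one-line numeric check of the interval in part (b), using the values α = .05, n = 5, max(xᵢ) = 4.2 quoted above.

```python
mx, alpha, n = 4.2, 0.05, 5
print(mx, mx / alpha ** (1 / n))   # about 4.2 and 7.65
```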

61. x̃ = 76.2, the lower and upper fourths are 73.5 and 79.7, respectively, and f_s = 6.2. The robust interval is
76.2 ± (1.93)(6.2/√22) = 76.2 ± 2.6 = (73.6, 78.8).
x̄ = 77.33, s = 5.037, and t_{.025,21} = 2.080, so the t interval is
77.33 ± (2.080)(5.037/√22) = 77.33 ± 2.23 = (75.1, 79.6). The t interval is centered at x̄, which is pulled out to the right of x̃ by the single mild outlier 93.7; the interval widths are comparable.


CHAPTER 8

Section 8.1

1.
a. Yes. It is an assertion about the value of a parameter.

b. No. The sample median i is not a parameter.

c. No. The sample standard deviation s is not a parameter.

d. Yes. The assertion is that the standard deviation of population #2 exceeds that of population # I.

e. No. X and Yare statistics rather than parameters, so they cannot appear in a hypothesis.

f. Yes. H is an assertion about the value of a parameter.

3. We reject Ho iff P-value:S a = .05.


a. Reject Ho b. Reject Ho c. Do not reject Ho d. Reject Ho e. Do not reject Ho

5. In this formulation, Ho states the welds do not conform to specification. This assertion will not be rejected
unless there is strong evidence to the contrary. Thus the burden of proof is on those who wish to assert that
the specification is satisfied. Using H,: /l < 100 results in the welds being believed in conformance unless
proved otherwise, so the burden of proof is on the non-conformance claim.

7. Let σ denote the population standard deviation. The appropriate hypotheses are Ho: σ = .05 v. Ha: σ < .05. With this formulation, the burden of proof is on the data to show that the requirement has been met (the sheaths will not be used unless Ho can be rejected in favor of Ha). Type I error: Conclude that the standard deviation is < .05 mm when it is really equal to .05 mm. Type II error: Conclude that the standard deviation is .05 mm when it is really < .05.

9. A type I error here involves saying that the plant is not in compliance when in fact it is. A type II error
occurs when we conclude that the plant is in compliance when in fact it isn't. Reasonable people may
disagree as to which of the two errors is more serious. If in your judgment it is the type II error, then the
reformulation Ho: J1 = 150 v. H,: /l < 150 makes the type I error more serious.

11.
a. A type I error consists of judging one of the two companies favored over the other when in fact there is a 50-50 split in the population. A type II error involves judging the split to be 50-50 when it is not.

b. We expect 25(.5) ~ 12.5 "successes" when Ho is true. So, any Xvalues less than 6 are at least as
contradictory to Ho as x = 6. But since the alternative hypothesis states p oF .5, X-values that are just as
far away on the high side are equally contradictory. Those are 19 and above.
So, values at least as contradictory to Ho as x ~ 6 are {0,1,2,3,4,5,6,19,20,21,22,23,24,25}.


c. When Ho is true, X has a binomial distribution with n = 25 and p = .5.
From part (b), P-value = P(X ≤ 6 or X ≥ 19) = B(6; 25, .5) + [1 − B(18; 25, .5)] = .014.

d. Looking at Table A.1, a two-tailed P-value of .044 (2 × .022) occurs when x = 7. That is, saying we'll reject Ho iff P-value ≤ .044 must be equivalent to saying we'll reject Ho iff X ≤ 7 or X ≥ 18 (the same distance from 12.5, but on the high side). Therefore, for any value of p ≠ .5, β(p) = P(do not reject Ho when X ~ Bin(25, p)) = P(7 < X < 18 when X ~ Bin(25, p)) = B(17; 25, p) − B(7; 25, p).
β(.4) = B(17; 25, .4) − B(7; 25, .4) = .845, while β(.3) = B(17; 25, .3) − B(7; 25, .3) = .488.
By symmetry (or re-computation), β(.6) = .845 and β(.7) = .488.

e. From part (c), the P-value associated with x = 6 is .014. Since .014 ≤ .044, the procedure in (d) leads us to reject Ho.

13.
a. Ho: μ = 10 v. Ha: μ ≠ 10.

b. Since the alternative is two-sided, values at least as contradictory to Ho as x̄ = 9.85 are not only those less than 9.85 but also those equally far from μ = 10 on the high side: i.e., x̄ values ≥ 10.15. When Ho is true, X̄ has a normal distribution with mean μ = 10 and sd σ/√n = .2/√25 = .04. Hence,
P-value = P(X̄ ≤ 9.85 or X̄ ≥ 10.15 when Ho is true) = 2P(X̄ ≤ 9.85 when Ho is true) by symmetry
= 2P(Z ≤ (9.85 − 10)/.04) = 2Φ(−3.75) ≈ 0. (Software gives the more precise P-value .00018.)
In particular, since P-value ≈ 0 < α = .01, we reject Ho at the .01 significance level and conclude that the true mean measured weight differs from 10 kg.

c. To determine β(μ) for any μ ≠ 10, we must first find the threshold between P-value ≤ α and P-value > α in terms of x̄. Parallel to part (b), proceed as follows:
.01 = P(reject Ho when Ho is true) = 2P(X̄ ≤ x̄ when Ho is true) = 2Φ((x̄ − 10)/.04)
⇒ Φ((x̄ − 10)/.04) = .005 ⇒ (x̄ − 10)/.04 = −2.58 ⇒ x̄ = 9.8968. That is, we'd reject Ho at the α = .01 level iff the observed value of X̄ is ≤ 9.8968 or, by symmetry, ≥ 10 + (10 − 9.8968) = 10.1032. Equivalently, we do not reject Ho at the α = .01 level if 9.8968 < X̄ < 10.1032.
Now we can determine the chance of a type II error:
β(10.1) = P(9.8968 < X̄ < 10.1032 when μ = 10.1) = P(−5.08 < Z < .08) = .5319.
Similarly, β(9.8) = P(9.8968 < X̄ < 10.1032 when μ = 9.8) = P(2.42 < Z < 7.58) = .0078.
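A short Python sketch of the type II error calculation in part (c): it finds the two-sided rejection cutoffs (using the exact normal quantile rather than the rounded 2.58) and then evaluates β at μ = 10.1 and μ = 9.8. SciPy is assumed to be available.

```python
from scipy.stats import norm

mu0, sd = 10, 0.04                       # sd of the sample mean: .2 / sqrt(25)
z = norm.ppf(1 - 0.01 / 2)               # about 2.576
lo, hi = mu0 - z * sd, mu0 + z * sd      # roughly 9.897 and 10.103

def beta(mu):
    # probability of landing in the "do not reject" region when the true mean is mu
    return norm.cdf(hi, loc=mu, scale=sd) - norm.cdf(lo, loc=mu, scale=sd)

print(beta(10.1), beta(9.8))             # about .53 and .008
```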


Section 8.2

15. In each case, the direction of Ha indicates that the P-value is P(Z ≥ z) = 1 − Φ(z).
a. P-value = 1 − Φ(1.42) = .0778.

b. P-value = 1 − Φ(0.90) = .1841.

c. P-value = 1 − Φ(1.96) = .0250.

d. P-value = 1 − Φ(2.48) = .0066.

e. P-value = 1 − Φ(−.11) = .5438.

17.
a. z = (30,960 − 30,000)/(1500/√16) = 2.56, so P-value = P(Z ≥ 2.56) = 1 − Φ(2.56) = .0052.
Since .0052 < α = .01, reject Ho.

b. z_α = z_{.01} = 2.33, so β(30,500) = Φ(2.33 + (30,000 − 30,500)/(1500/√16)) = Φ(1.00) = .8413.

c. z_α = z_{.01} = 2.33 and z_β = z_{.05} = 1.645. Hence, n = [1500(2.33 + 1.645)/(30,000 − 30,500)]² = 142.2, so use n = 143.

d. From (a), the P-value is .0052. Hence, the smallest α at which Ho can be rejected is .0052.
19.

a. Since the alternative hypothesis is two-sided, P-value = 2 -[1- <I{I~~~~~ IJ] = 2 . [I - <1>(2.27)]~

2(.0116) = .0232. Since .0232 > a = .0 I, we do not reject Ho at the .0 I significance level.

b. ZoI2=zoo,=2.58,SOP(94)=<I>(2.58+ 95-~)_<I>(_2.58+ 95-~) =<1>(5.91)-<1>(0.75)=


1.20/ 16 1.20/ 16
.2266.

1.20( 2.58 + 1.28)]'


c. zp = Z 1 = 128. Hence, n = = 21.46, so use n = 22.
. [ 95-94

21. The hypotheses are Ho: μ = 5.5 v. Ha: μ ≠ 5.5.

a. The P-value is 2·[1 − Φ(|z|)] = 2·[1 − Φ(3.33)] = .0008. Since the P-value is smaller than any reasonable significance level (.1, .05, .01, .001), we reject Ho.

b. The chance of detecting that Ho is false is the complement of the chance of a type II error. With z_{α/2} = z_{.005} = 2.58,
1 − β(5.6) = 1 − [Φ(2.58 + (5.5 − 5.6)√n/σ) − Φ(−2.58 + (5.5 − 5.6)√n/σ)] = 1 − Φ(1.25) + Φ(−3.91) = .1056.

c. n = [.3(2.58 + 2.33)/(5.5 − 5.6)]² = 216.97, so use n = 217.

23.
a. Using software, x̄ = 0.75, x̃ = 0.64, s = .3025, f_s = 0.48. These summary statistics, as well as a box plot (not shown), indicate substantial positive skewness, but no outliers.

b. No, it is not plausible from the results in part a that the variable ALD is normal. However, since n = 49, normality is not required for the use of z inference procedures.

c. We wish to test Ho: μ = 1.0 versus Ha: μ < 1.0. The test statistic is z = (0.75 − 1.0)/(.3025/√49) = −5.79, and so the P-value is P(Z ≤ −5.79) ≈ 0. At any reasonable significance level, we reject the null hypothesis. Therefore, yes, the data provides strong evidence that the true average ALD is less than 1.0.

d. x̄ + z_{.05}·s/√n = .75 + 1.645(.3025/√49) = .821.
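A minimal Python check of the lower-tailed z test in part (c), computed from the summary statistics quoted above (SciPy assumed).

```python
from math import sqrt
from scipy.stats import norm

n, xbar, s, mu0 = 49, 0.75, 0.3025, 1.0
z = (xbar - mu0) / (s / sqrt(n))
print(z, norm.cdf(z))   # about -5.79; P-value essentially 0
```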

25. Let μ denote the true average task time. The hypotheses of interest are Ho: μ = 2 v. Ha: μ < 2. Using z-based inference with the data provided, the P-value of the test is P(Z ≤ (1.95 − 2)/(.20/√52)) = Φ(−1.80) = .0359. Since .0359 > .01, at the α = .01 significance level we do not reject Ho. At the .01 level, we do not have sufficient evidence to conclude that the true average task time is less than 2 seconds.

27. β(μ₀ − Δ) = Φ(z_{α/2} + Δ√n/σ) − Φ(−z_{α/2} + Δ√n/σ) = 1 − Φ(−z_{α/2} − Δ√n/σ) − [1 − Φ(z_{α/2} − Δ√n/σ)] = Φ(z_{α/2} − Δ√n/σ) − Φ(−z_{α/2} − Δ√n/σ) = β(μ₀ + Δ).

Section 8.3

29. The hypotheses are Ho: μ = .5 versus Ha: μ ≠ .5. Since this is a two-sided test, we must double the one-tail area in each case to determine the P-value.
a. n = 13 ~ df= 13 - I ~ 12. Looking at column 12 of Table A.8, the area to the right of t = 1.6 is .068.
Doubling this area gives the two-tailed P-value of 2(.068) = .134. Since .134 > a = .05, we do not
reject Ho.

b. For a two-sided test, observing t ~ -1.6 is equivalent to observing I = 1.6. So, again the P-value is
2(.068) = .134, and again we do not reject Ho at a = .05.


c. df = n − 1 = 24; the area to the left of −2.6 = the area to the right of 2.6 = .008 according to Table A.8. Hence, the two-tailed P-value is 2(.008) = .016. Since .016 > .01, we do not reject Ho in this case.

d. Similar to part (c), Table A.8 gives a one-tail area of .000 for t = 3.9 at df= 24. Hence, the two-tailed
P-vaJue is 2(.000) = .000, and we reject Ho at any reasonable a level.

31. This is an upper-tailed test, so the P-value in each case is P(T? observed f).
a. P-value = P(T? 3.2 with df= 14) = .003 according to Table A.8. Since .003 ~ .05, we reject Ho.

b. P-value = P(T? 1.8 with df= 8) ~ .055. Since .055 > .01, do not reject Ho

c. P-value = P(T ≥ −.2 with df = 23) = 1 − P(T ≥ .2 with df = 23) by symmetry = 1 − .422 = .578. Since .578 is quite large, we would not reject Ho at any reasonable α level. (Note that the sign of the observed t statistic contradicts Ha, so we know immediately not to reject Ho.)

33.
a. It appears that the true average weight could be significantly off from the production specification of 200 lb per pipe. Most of the boxplot is to the right of 200.

b. Let μ denote the true average weight of a 200 lb pipe. The appropriate null and alternative hypotheses are Ho: μ = 200 and Ha: μ ≠ 200. Since the data are reasonably normal, we will use a one-sample t procedure. The test statistic is
t = (206.73 − 200)/(6.35/√30) = 6.73/1.16 ≈ 5.80, for a P-value ≈ 0. So, we reject Ho.
At the 5% significance level, the test appears to substantiate the statement in part a.

35.
a. The hypotheses are Ho: J1 = 200 versus H,: J1 > 200. With the data provided,
t= x-~ 249.7 -; 1.2; at df= 12 _ I = II, P-value = .128. Since .128 > .05, Ho is not rejected
s l-ln 145.11 12
at the a = .05 level. We have insufficient evidence to conclude that the true average repair time
exceeds 200 minutes.

b. With d J,uo - ,II


(J"
[200-300[
150
0.67 , df = 11, and a = .05, software calculates power > .70, so

P(300) '" .30.


37.
a. The accompanying normal probability plot is acceptably linear, which suggests that a normal population distribution is quite plausible.
[Normal probability plot of strength (MPa) with 95% CI, generated by Minitab; omitted.]

b. The parameter of interest is μ = the true average compression strength (MPa) for this type of concrete. The hypotheses are Ho: μ = 100 versus Ha: μ < 100.
Since the data come from a plausibly normal population, we will use the t procedure. The test statistic is
t = (x̄ − μ₀)/(s/√n) = (96.42 − 100)/(8.26/√10) = −1.37. The corresponding one-tailed P-value, at df = 10 − 1 = 9, is P(T ≤ −1.37) ≈ .102.
The P-value slightly exceeds .10, the largest α level we'd consider using in practice, so the null hypothesis Ho: μ = 100 should not be rejected. This concrete should be used.

39. Software provides x̄ = 1.243 and s = 0.448 for this sample.

a. The parameter of interest is μ = the population mean expense ratio (%) for large-cap growth mutual funds. The hypotheses are Ho: μ = 1 versus Ha: μ > 1.
We have a random sample, and a normal probability plot is reasonably linear, so the assumptions for a t procedure are met.
The test statistic is t = (1.243 − 1)/(0.448/√20) = 2.43, for a P-value of P(T ≥ 2.43 at df = 19) ≈ .013. Hence, we (barely) fail to reject Ho at the .01 significance level. There is insufficient evidence, at the α = .01 level, to conclude that the population mean expense ratio for large-cap growth mutual funds exceeds 1%.

b. A Type I error would be to incorrectly conclude that the population mean expense ratio for large-cap growth mutual funds exceeds 1% when, in fact, the mean is 1%. A Type II error would be to fail to recognize that the population mean expense ratio for large-cap growth mutual funds exceeds 1% when that's actually true.
Since we failed to reject Ho in (a), we potentially committed a Type II error there. If we later find out that, in fact, μ = 1.33, so Ha was actually true all along, then yes we have committed a Type II error.

c. With n = 20 so df = 19, d = (1.33 − 1)/.5 = .66, and α = .01, software provides power ≈ .66. (Note: it's purely a coincidence that power and d are the same decimal!) This means that if the true values of μ and σ are μ = 1.33 and σ = .5, then there is a 66% probability of correctly rejecting Ho: μ = 1 in favor of Ha: μ > 1 at the .01 significance level based upon a sample of size n = 20.
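One way to reproduce these figures in Python is sketched below (SciPy assumed): the t test of part (a), plus a noncentral-t power calculation for part (c). The noncentral-t approach is a standard method, not necessarily the one the software quoted above used, and it gives roughly .65, close to the ≈.66 quoted.

```python
from math import sqrt
from scipy.stats import t, nct

n, xbar, s, mu0 = 20, 1.243, 0.448, 1.0
tstat = (xbar - mu0) / (s / sqrt(n))
print(tstat, 1 - t.cdf(tstat, df=n - 1))        # about 2.43 and .013

# power against mu = 1.33 with sigma = .5, one-tailed alpha = .01
d = (1.33 - 1.0) / 0.5
tcrit = t.ppf(0.99, df=n - 1)
print(1 - nct.cdf(tcrit, df=n - 1, nc=d * sqrt(n)))   # roughly .65
```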


41. μ = true average reading. The hypotheses are Ho: μ = 70 v. Ha: μ ≠ 70, and
t = (x̄ − 70)/(s/√n) = (75.5 − 70)/(7/√6) = 5.5/2.86 = 1.92.
From Table A.8, df = 5, P-value = 2[P(T > 1.92)] ≈ 2(.058) = .116. At significance level .05, there is not enough evidence to conclude that the spectrophotometer needs recalibrating.

Section 8.4

43.
a. The parameter of interest is p = the proportion of the population of female workers that have BMIs of at least 30 (and, hence, are obese). The hypotheses are Ho: p = .20 versus Ha: p > .20.
With n = 541, np₀ = 541(.2) = 108.2 ≥ 10 and n(1 − p₀) = 541(.8) = 432.8 ≥ 10, so the "large-sample" z procedure is applicable.
From the data provided, p̂ = 120/541 = .2218, so z = (p̂ − p₀)/√(p₀(1 − p₀)/n) = (.2218 − .20)/√(.20(.80)/541) = 1.27, and P-value = P(Z ≥ 1.27) = 1 − Φ(1.27) = .1020. Since .1020 > .05, we fail to reject Ho at the α = .05 level. We do not have sufficient evidence to conclude that more than 20% of the population of female workers is obese.

b. A Type I error would be to incorrectly conclude that more than 20% of the population of female workers is obese, when the true percentage is 20%. A Type II error would be to fail to recognize that more than 20% of the population of female workers is obese when that's actually true.

c. The question is asking for the chance of committing a Type II error when the true value of p is .25, i.e., β(.25). Using the textbook formula,
β(.25) = Φ[ (.20 − .25 + 1.645√(.20(.80)/541)) / √(.25(.75)/541) ] = Φ(−1.166) ≈ .121.

45. Let p = the true proportion of all donors with type A blood. The hypotheses are Ho: p = .40 versus Ha: p ≠ .40. Using the one-proportion z procedure, the test statistic is
z = (82/150 − .40)/√(.40(.60)/150) = .1467/.04 = 3.667, and the corresponding P-value is 2P(Z ≥ 3.667) ≈ 0. Hence, we reject Ho at the .01 significance level. The data does suggest that the percentage of all donors with type A blood differs from 40%. Since the P-value is also less than .05, the conclusion would not change at that level.

47.
a. The parameter of interest is p = the proportion of all wine customers who would find screw tops acceptable. The hypotheses are Ho: p = .25 versus Ha: p < .25.
With n = 106, np₀ = 106(.25) = 26.5 ≥ 10 and n(1 − p₀) = 106(.75) = 79.5 ≥ 10, so the "large-sample" z procedure is applicable.
From the data provided, p̂ = 22/106 = .208, so z = (.208 − .25)/√(.25(.75)/106) = −1.01 and P-value = P(Z ≤ −1.01) = Φ(−1.01) = .1562.
Since .1562 > .10, we fail to reject Ho at the α = .10 level. We do not have sufficient evidence to suggest that less than 25% of all customers find screw tops acceptable. Therefore, we recommend that the winery should switch to screw tops.

b. A Type I error would be to incorrectly conclude that less than 25% of all customers find screw tops acceptable, when the true percentage is 25%. Hence, we'd recommend not switching to screw tops when their use is actually justified. A Type II error would be to fail to recognize that less than 25% of all customers find screw tops acceptable when that's actually true. Hence, we'd recommend (as we did in (a)) that the winery switch to screw tops when the switch is not justified. Since we failed to reject Ho in (a), we may have committed a Type II error.

49.
a. Let p = true proportion of current customers who qualify. The hypotheses are Ho: p = .05 v. Ha: p ≠ .05. The test statistic is z = (.08 − .05)/√(.05(.95)/n) = 3.07, and the P-value is 2·P(Z ≥ 3.07) = 2(.0011) = .0022. Since .0022 ≤ α = .01, Ho is rejected. The company's premise is not correct.

b. β(.10) = Φ[ (.05 − .10 + 2.58√(.05(.95)/500)) / √(.10(.90)/500) ] − Φ[ (.05 − .10 − 2.58√(.05(.95)/500)) / √(.10(.90)/500) ] ≈ Φ(−1.85) − 0 = .0322.

51. The hypotheses are Ho: p = .10 v. Ha: p > .10, and we reject Ho iff X ≥ c for some unknown c. The corresponding chance of a type I error is α = P(X ≥ c when p = .10) = 1 − B(c − 1; 10, .1), since the rv X has a Binomial(10, .1) distribution when Ho is true.
The values n = 10, c = 3 yield α = 1 − B(2; 10, .1) = .07, while α > .10 for c = 0, 1, 2. Thus c = 3 is the best choice to achieve α ≤ .10 and simultaneously minimize β. However, β(.3) = P(X < c when p = .3) = B(2; 10, .3) = .383, which has been deemed too high. So, the desired α and β levels cannot be achieved with a sample size of just n = 10.
The values n = 20, c = 5 yield α = 1 − B(4; 20, .1) = .043, but again β(.3) = B(4; 20, .3) = .238 is too high.
The values n = 25, c = 5 yield α = 1 − B(4; 25, .1) = .098 while β(.3) = B(4; 25, .3) = .090 ≤ .10, so n = 25 should be used. In that case, and with the rule that we reject Ho iff X ≥ 5, α = .098 and β(.3) = .090.
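The search above is easy to automate; here is a small Python sketch (SciPy assumed) that returns α and β(.3) for a given n and cutoff c, reproducing the three cases considered.

```python
from scipy.stats import binom

def alpha_beta(n, c):
    alpha = 1 - binom.cdf(c - 1, n, 0.10)   # P(X >= c | p = .10)
    beta = binom.cdf(c - 1, n, 0.30)        # P(X <  c | p = .30)
    return alpha, beta

print(alpha_beta(10, 3))   # about (.070, .383)
print(alpha_beta(20, 5))   # about (.043, .238)
print(alpha_beta(25, 5))   # about (.098, .090)
```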

Section 8.5

53.

a. The formula for β is 1 − Φ(−2.33 + (μ₀ − μ′)√n/σ), which gives .8888 for n = 100, .1587 for n = 900, and .0006 for n = 2500.

b. z = −5.3, which is "off the z table," so P-value < .0002; this value of z is quite statistically significant.

c. No. Even when the departure from Ho is insignificant from a practical point of view, a statistically significant result is highly likely to appear; the test is too likely to detect small departures from Ho.


55.
a. The chance of committing a type I error on a single test is .01. Hence, the chance of committing at least one type I error among m tests is P(at least one error) = 1 − P(no type I errors) = 1 − [P(no type I error)]^m by independence = 1 − .99^m. For m = 5, the probability is .049; for m = 10, it's .096.

b. Set the answer from (a) to .5 and solve for m: 1 − .99^m ≥ .5 ⇒ .99^m ≤ .5 ⇒ m ≥ log(.5)/log(.99) = 68.97. So, at least 69 tests must be run at the α = .01 level to have a 50-50 chance of committing at least one type I error.

Supplementary Exercises

57. Because n = 50 is large, we use a z test here. The hypotheses are Ho: μ = 3.2 versus Ha: μ ≠ 3.2. The computed z value is
z = (3.05 − 3.2)/(.34/√50) = −3.12, and the P-value is 2P(Z ≥ |−3.12|) = 2(.0009) = .0018. Since .0018 < .05, Ho should be rejected in favor of Ha.

59.
a. Ho: μ = .85 v. Ha: μ ≠ .85.

b. With a P-value of .30, we would fail to reject the null hypothesis at any reasonable significance level, which includes both .05 and .10.
61.
a. The parameter of interest is I' = the true average contamination level (Total Cu, in mg/kg) in this
region. The hypotheses are Ho: 11 = 20 versus H,: I' > 20. Using a one-sample I procedure, with x ~
45.31-20
45.31 and SEt x) = 5.26, the test statistic is I 3.86. That's a very large r-statistic;
5.26
however, at df'> 3 - I = 2, the P-value is P(T?: 3.86) '" .03. (Using the tables with t = 3.9 gives aP-
value of'" .02.) Since the P-value exceeds .01, we would fail to reject Ho at the a ~ .01 level.
This is quite surprising, given the large r-value (45.31 greatly exceeds 20), but it's a result of the very
small n.

b. We want the probability that we fail to reject Ho in part (a) when n = 3 and the true values of I' and a
are I' = 50 and a ~ 10, i.e. P(50). Using software, we get P(50) '" .57.

63. n = 47, x̄ = 215 mg, s = 235 mg, scope of values = 5 mg to 1,176 mg.

a. No, the distribution does not appear to be normal. It appears to be skewed to the right, since 0 is less than one standard deviation below the mean. It is not necessary to assume normality if the sample size is large enough, due to the central limit theorem. This sample size is large enough so we can conduct a hypothesis test about the mean.

b. The parameter of interest is μ = true daily caffeine consumption of adult women, and the hypotheses are Ho: μ = 200 versus Ha: μ > 200. The test statistic (using a z test) is
z = (215 − 200)/(235/√47) = .44, with a corresponding P-value of P(Z ≥ .44) = 1 − Φ(.44) = .33. We fail to reject Ho, because .33 > .10. The data do not provide convincing evidence that daily consumption of all adult women exceeds 200 mg.

65.
a. From Table A.17, when μ = 9.5, d = .625, and df = 9, β ≈ .60.
When μ = 9.0, d = 1.25, and df = 9, β ≈ .20.

b. From Table A.17, when β = .25 and d = .625, n ≈ 28.
67.
. Hsp = 1/75 v. H,: p '" 1/75, P = - 16 =. 0 2, z-,=;;;~~~=e-=
With I.645, and P-value = .10, we
a.
800

fail to reject the null hypothesis at the a = .05 level. There is no significant evidence that the incidence
rate among prisoners differs from that of the adult population.

The possible error we could have made is a type II.

b. P-value~2[1-<I>(1.645)]= 2[05]= .10. Yes, since .10 < .20, we could rejectHo

69. Even though the underlying distribution may not be normal, a z test can be used because n is large. The null hypothesis Ho: μ = 3200 should be rejected in favor of Ha: μ < 3200 if the P-value is less than .001.
The computed test statistic is z = (3107 − 3200)/(188/√45) = −3.32, and the P-value is Φ(−3.32) = .0005 < .001, so Ho should be rejected at level .001.
71. We wish to test Ho: μ = 4 versus Ha: μ > 4 using the test statistic z = (x̄ − 4)/√(4/n). For the given sample, n = 36 and x̄ = 160/36 = 4.444, so z = (4.444 − 4)/√(4/36) = 1.33.
The P-value is P(Z ≥ 1.33) = 1 − Φ(1.33) = .0918. Since .0918 > .02, Ho should not be rejected at this level. We do not have significant evidence at the .02 level to conclude that the true mean of this Poisson process is greater than 4.

73. The parameter of interest is p = the proportion of all college students who have maintained lifetime abstinence from alcohol. The hypotheses are Ho: p = .1, Ha: p > .1.
With n = 462, np₀ = 462(.1) = 46.2 ≥ 10 and n(1 − p₀) = 462(.9) = 415.8 ≥ 10, so the "large-sample" z procedure is applicable.
From the data provided, p̂ = 51/462 = .1104, so z = (.1104 − .1)/√(.1(.9)/462) = 0.74.
The corresponding one-tailed P-value is P(Z ≥ 0.74) = 1 − Φ(0.74) = .2296.
Since .2296 > .05, we fail to reject Ho at the α = .05 level (and, in fact, at any reasonable significance level). The data does not give evidence to suggest that more than 10% of all college students have completely abstained from alcohol use.


75. Since n is large, we'll use the one-sample z procedure. With μ = population mean Vitamin D level for infants, the hypotheses are Ho: μ = 20 v. Ha: μ > 20. The test statistic is
z = (21 − 20)/(11/√102) = 0.92, and the upper-tailed P-value is P(Z ≥ 0.92) = .1788. Since .1788 > .10, we fail to reject Ho. It cannot be concluded that μ > 20.

77. The 20 df row of Table A.7 shows that χ²_{.99,20} = 8.26 < 8.58 (Ho not rejected at level .01) and 8.58 < 9.591 = χ²_{.975,20} (Ho rejected at level .025). Thus .01 < P-value < .025, and Ho cannot be rejected at level .01 (the P-value is the smallest α at which rejection can take place, and this exceeds .01).
79.

a. When Hois true, 2A"IT, =~ LX,


has a chi-squared distribution with df> 2n. If the alternative is
1J0
H.: I' < 1'0, then we sbould reject Ho in favor of H. when the sample mean is small. Since is smallx x
exactly when Lx, is small, we'll reject Ho when the test statistic is small. In particular, the P-value
2
should be tbe area to the left of the observed value -Lx, .
1J0

b. The hypotheses are Ho: I' = 75 versus H.: I' < 75. The test statistic value is ~
lJo
LX, = ~(737)
75
=

19.65. At df'> 2(1 0) ~ 20, the P-value is tbe area to the left of 19.65 under the X;o curve. From
software, this is about .52, so Ho clearly sbould not be rejected (tbe P-value is very large). The sample
data do not suggest tbat true average lifetime is less than the previously claimed value.
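A two-line Python check of part (b), using SciPy's chi-squared cdf for the lower-tail area.

```python
from scipy.stats import chi2

n, mu0, sum_x = 10, 75, 737
stat = 2 * sum_x / mu0
print(stat, chi2.cdf(stat, df=2 * n))   # about 19.65 and P-value ≈ .52
```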


CHAPTER 9

Section 9.1
1.
a. E(X̄ − Ȳ) = E(X̄) − E(Ȳ) = 4.1 − 4.5 = −.4, irrespective of sample sizes.

b. V(X̄ − Ȳ) = V(X̄) + V(Ȳ) = σ₁²/m + σ₂²/n = (1.8)²/100 + (2.0)²/100 = .0724, and the SD of X̄ − Ȳ is √.0724 = .2691.

c. A normal curve with mean and sd as given in a and b (because m = n = 100, the CLT implies that both X̄ and Ȳ have approximately normal distributions, so X̄ − Ȳ does also). The shape is not necessarily that of a normal curve when m = n = 10, because the CLT cannot be invoked. So if the two lifetime population distributions are not normal, the distribution of X̄ − Ȳ will typically be quite complicated.

3. Let μ₁ = the population mean pain level under the control condition and μ₂ = the population mean pain level under the treatment condition.
a. The hypotheses of interest are Ho: μ₁ − μ₂ = 0 versus Ha: μ₁ − μ₂ > 0. With the data provided, the test statistic value is
z = ((5.2 − 3.1) − 0)/√(2.3²/43 + 2.3²/43) = 4.23. The corresponding P-value is P(Z ≥ 4.23) = 1 − Φ(4.23) ≈ 0.
Hence, we reject Ho at the α = .01 level (in fact, at any reasonable level) and conclude that the average pain experienced under treatment is less than the average pain experienced under control.

b. Now the hypotheses are Ho: μ₁ − μ₂ = 1 versus Ha: μ₁ − μ₂ > 1. The test statistic value is
z = ((5.2 − 3.1) − 1)/√(2.3²/43 + 2.3²/43) = 2.22, and the P-value is P(Z ≥ 2.22) = 1 − Φ(2.22) = .0132. Thus we would reject Ho at the α = .05 level and conclude that mean pain under the control condition exceeds that of the treatment condition by more than 1 point. However, we would not reach the same decision at the α = .01 level (because .0132 ≤ .05 but .0132 > .01).

5.
a. Ha says that the average calorie output for sufferers is more than 1 cal/cm²/min below that for non-sufferers.
√(σ₁²/m + σ₂²/n) = √((.2)²/10 + (.4)²/10) = .1414, so z = ((.64 − 2.05) − (−1))/.1414 = −2.90. The P-value for this one-sided test is P(Z ≤ −2.90) = .0019 < .01. So, at level .01, Ho is rejected.

b. z_α = z_{.01} = 2.33, so β(−1.2) = 1 − Φ(−2.33 + (−1 − (−1.2))/.1414) = 1 − Φ(−.92) = .8212.

c. m = n = (σ₁² + σ₂²)(z_α + z_β)²/(Δ′ − Δ₀)² = .2(2.33 + 1.28)²/(−.2)² = 65.15, so use 66.


7. Let μ₁ denote the true mean course GPA for all courses taught by full-time faculty, and let μ₂ denote the true mean course GPA for all courses taught by part-time faculty. The hypotheses of interest are Ho: μ₁ = μ₂ versus Ha: μ₁ ≠ μ₂; or, equivalently, Ho: μ₁ − μ₂ = 0 v. Ha: μ₁ − μ₂ ≠ 0.
The large-sample test statistic is z = ((x̄ − ȳ) − 0)/√(s₁²/m + s₂²/n) = (2.7186 − 2.8639)/√((.63342)²/125 + (.49241)²/88) = −1.88. The corresponding two-tailed P-value is P(|Z| ≥ |−1.88|) = 2[1 − Φ(1.88)] = .0602.
Since the P-value exceeds α = .01, we fail to reject Ho. At the .01 significance level, there is insufficient evidence to conclude that the true mean course GPAs differ for these two populations of faculty.

9.
a. Point estimate x̄ − ȳ = 19.9 − 13.7 = 6.2. It appears that there could be a difference.

b. Ho: μ₁ − μ₂ = 0, Ha: μ₁ − μ₂ ≠ 0, z = (19.9 − 13.7)/√(39.1²/60 + 15.8²/60) = 6.2/5.44 = 1.14, and the P-value = 2[P(Z > 1.14)] = 2(.1271) = .2542. The P-value is larger than any reasonable α, so we do not reject Ho. There is no statistically significant difference.

c. No. With a normal distribution, we would expect most of the data to be within 2 standard deviations of the mean, and the distribution should be symmetric. Two sd's above the mean is 98.1, but the distribution stops at zero on the left. The distribution is positively skewed.

d. We will calculate a 95% confidence interval for μ₁, the true average length of stay for patients given the treatment: 19.9 ± 1.96(39.1/√60) = 19.9 ± 9.9 = (10.0, 29.8).

11. (x̄ − ȳ) ± z_{α/2}√(σ₁²/m + σ₂²/n) = (x̄ − ȳ) ± z_{α/2}√((SE₁)² + (SE₂)²). Using α = .05 and z_{α/2} = 1.96 yields (5.5 − 3.8) ± 1.96√((0.3)² + (0.2)²) = (0.99, 2.41). We are 95% confident that the true average blood lead level for male workers is between 0.99 and 2.41 higher than the corresponding average for female workers.

13. 0", = 0", = .05, d ~ .04, a = .01,.B = .05, and the test is one-tailed ~

(.0025 + .0025)(2.33 + 1.645)'


n= 49.38, so use n = 50 .
.0016

15.
a. As either m or n increases, SD decreases, so (μ₁ − μ₂ − Δ₀)/SD increases (the numerator is positive), so z_α − (μ₁ − μ₂ − Δ₀)/SD decreases, and thus β = Φ(z_α − (μ₁ − μ₂ − Δ₀)/SD) decreases.

b. As β decreases, z_β increases, and since z_β is in the numerator of n, n increases also.


Section 9.2

17.
a. ν = (5²/10 + 6²/10)² / [(5²/10)²/9 + (6²/10)²/9] = 37.21/(.694 + 1.44) = 17.43, so use ν = 17.

b. ν = (5²/10 + 6²/15)² / [(5²/10)²/9 + (6²/15)²/14] = 24.01/(.694 + .411) = 21.7, so use ν = 21.

c. ν = (2²/10 + 6²/15)² / [(2²/10)²/9 + (6²/15)²/14] = 7.84/(.018 + .411) = 18.27, so use ν = 18.

d. ν = (5²/12 + 6²/24)² / [(5²/12)²/11 + (6²/24)²/23] = 12.84/(.395 + .098) = 26.05, so use ν = 26.
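The Welch degrees-of-freedom formula used in these four parts is easy to wrap in a small helper; the Python sketch below (no external libraries needed) reproduces the values above.

```python
def welch_df(s1, n1, s2, n2):
    """Welch-Satterthwaite df for two independent samples."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    return (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

print(welch_df(5, 10, 6, 10))   # about 17.4, rounded down to 17
print(welch_df(5, 10, 6, 15))   # about 21.7
print(welch_df(2, 10, 6, 15))   # about 18.3
print(welch_df(5, 12, 6, 24))   # about 26
```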

19. For the given hypotheses, the test statistic is
t = (115.7 − 129.3 + 10)/√(5.03²/6 + 5.38²/6) = −3.6/3.007 = −1.20, and the df is
ν = (4.2168 + 4.8241)² / [(4.2168)²/5 + (4.8241)²/5] = 9.96, so use df = 9. The P-value is P(T ≤ −1.20 when T ~ t₉) ≈ .130.
Since .130 > .01, we don't reject Ho.

21. Let μ₁ = the true average gap detection threshold for normal subjects, and μ₂ = the corresponding value for CTS subjects. The relevant hypotheses are Ho: μ₁ − μ₂ = 0 v. Ha: μ₁ − μ₂ < 0, and the test statistic is
t = (1.71 − 2.53)/√(.0351125 + .07569) = −.82/.3329 = −2.46. Using df ν = (.0351125 + .07569)² / [(.0351125)²/7 + (.07569)²/9] = 15.1, or 15, the P-value is P(T ≤ −2.46 when T ~ t₁₅) ≈ .013. Since .013 > .01, we fail to reject Ho at the α = .01 level. We have insufficient evidence to claim that the true average gap detection threshold for CTS subjects exceeds that for normal subjects.


23.
a. Using Minitab to generate normal probability plots, we see that both plots illustrate sufficient linearity.
Therefore, it is plausible that both samples have been selected from normal population distributions.

[Normal probability plots for the high-quality and poor-quality fabric samples (Minitab), both reasonably linear; omitted.]

b. The comparative boxplot does not suggest a difference between average extensibility for the two types of fabrics.

[Comparative boxplot of extensibility (%) for high-quality and poor-quality fabric; omitted.]

c. We test Ho: μ₁ − μ₂ = 0 v. Ha: μ₁ − μ₂ ≠ 0. With degrees of freedom ν = (.0433265)²/.00017906 = 10.5 (which we round down to 10) and test statistic t = −.08/√(.0433265) = −.38 ≈ −0.4, the P-value is 2(.349) = .698. Since the P-value is very large, we do not reject Ho. There is insufficient evidence to claim that the true average extensibility differs for the two types of fabrics.


25.
a. Normal probability plots of both samples (not shown) exhibit substantial linear patterns, suggesting
that the normality assumption is reasonable for both populations of prices.

b. The comparative boxplots (omitted here) suggest that the average price for a wine earning a ≥ 93 rating is much higher than the average price for a wine earning a ≤ 89 rating.
c. From the data provided, x̄ = 110.8, ȳ = 61.7, s₁ = 48.7, s₂ = 23.8, and ν ≈ 15. The resulting 95% CI for the difference of population means is
(110.8 − 61.7) ± t_{.025,15}√(48.7²/12 + 23.8²/14) = (16.1, 82.0). That is, we are 95% confident that wines rated ≥ 93 cost, on average, between $16.10 and $82.00 more than wines rated ≤ 89. Since the CI does not include 0, this certainly contradicts the claim that price and quality are unrelated.

27.
a. Let's construct a 99% CI for μ_AN, the true mean intermuscular adipose tissue (IAT) under the described AN protocol. Assuming the data come from a normal population, the CI is given by
x̄ ± t_{.005,n−1}·s/√n = .52 ± t_{.005,15}(.26/√16) = .52 ± 2.947(.26/√16) = (.33, .71). We are 99% confident that the true mean IAT under the AN protocol is between .33 kg and .71 kg.

b. Let's construct a 99% CI for μ_AN − μ_C, the difference between true mean AN IAT and true mean control IAT. Assuming the data come from normal populations, the CI is given by
(x̄ − ȳ) ± t_{.005,ν}√(s₁²/m + s₂²/n) = (.52 − .35) ± 2.831√((.26)²/16 + (.15)²/8) = .17 ± 2.831√((.26)²/16 + (.15)²/8) = (−.07, .41).
Since this CI includes zero, it's plausible that the difference between the two true means is zero (i.e., μ_AN − μ_C = 0). [Note: the df calculation ν = 21 comes from applying the formula in the textbook.]

29. Let μ₁ = the true average compression strength for strawberry drink and let μ₂ = the true average compression strength for cola. A lower-tailed test is appropriate. We test Ho: μ₁ − μ₂ = 0 v. Ha: μ₁ − μ₂ < 0.
The test statistic is t = −14/√(29.4 + 15) = −2.10; ν = (44.4)²/[(29.4)²/14 + (15)²/14] = 1971.36/77.8114 = 25.3, so use df = 25.
The P-value ≈ P(T < −2.10) = .023. This P-value indicates strong support for the alternative hypothesis. The data does suggest that the extra carbonation of cola results in a higher average compression strength.


31.
a. The most notable feature ofthese boxplots is the larger amount of variation present in the mid-range
data compared to the high-range data. Otherwise, both look reasonably symmetric witb no outliers
present.

[Comparative boxplot of the mid-range and high-range data; omitted.]

b. Using df = 23, a 95% confidence interval for μ_mid-range − μ_high-range is
(438.3 − 437.45) ± 2.069√(s₁²/m + s₂²/n) = .85 ± 8.69 = (−7.84, 9.54). Since plausible values for μ_mid-range − μ_high-range are both positive and negative (i.e., the interval spans zero), we would conclude that there is not sufficient evidence to suggest that the average value for mid-range and the average value for high-range differ.

33. Let μ₁ and μ₂ represent the true mean body mass decrease for the vegan diet and the control diet, respectively. We wish to test the hypotheses Ho: μ₁ − μ₂ ≤ 1 v. Ha: μ₁ − μ₂ > 1. The relevant test statistic is
t = ((5.8 − 3.8) − 1)/√(3.2²/32 + 2.8²/32) = 1.33, with estimated df ≈ 60 using the formula. Rounding to t = 1.3, Table A.8 gives a one-sided P-value of .098 (a computer will give the more accurate P-value of .094).
Since our P-value > α = .05, we fail to reject Ho at the 5% level. We do not have statistically significant evidence that the true average weight loss for the vegan diet exceeds the true average weight loss for the control diet by more than 1 kg.

35. There are two changes that must be made to the procedure we currently use. First, the equation used to compute the value of the t test statistic is t = (x̄ − ȳ)/(s_p√(1/m + 1/n)), where s_p is defined as in Exercise 34. Second, the degrees of freedom = m + n − 2. Assuming equal variances in the situation from Exercise 33, we calculate s_p as follows:
s_p = √[(7/16)(2.6)² + (9/16)(2.5)²] = 2.544. The value of the test statistic is, then,
t = ((32.8 − 40.5) − (−5)) / (2.544√(1/8 + 1/10)) = −2.24 ≈ −2.2 with df = 16, and the P-value is P(T < −2.2) = .021. Since .021 > .01, we fail to reject Ho.


Section 9.3

37.
a. This exercise calls for paired analysis. First, compute the difference between indoor and outdoor concentrations of hexavalent chromium for each of the 33 houses. These 33 differences are summarized as follows: n = 33, d̄ = −.4239, s_D = .3868, where d = (indoor value − outdoor value).
Then t_{.025,32} = 2.037, and a 95% confidence interval for the population mean difference between indoor and outdoor concentration is
−.4239 ± (2.037)(.3868/√33) = −.4239 ± .13715 = (−.5611, −.2868). We can be highly confident, at the 95% confidence level, that the true average concentration of hexavalent chromium outdoors exceeds the true average concentration indoors by between .2868 and .5611 nanograms/m³.

b. A 95% prediction interval for the difference in concentration for the 34th house is
d̄ ± t_{.025,32}·s_D√(1 + 1/n) = −.4239 ± (2.037)(.3868)√(1 + 1/33) = (−1.224, .3758). This prediction interval means that the indoor concentration may exceed the outdoor concentration by as much as .3758 nanograms/m³ and that the outdoor concentration may exceed the indoor concentration by as much as 1.224 nanograms/m³, for the 34th house. Clearly, this is a wide prediction interval, largely because of the amount of variation in the differences.
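A short Python sketch of the paired CI and prediction interval just computed, from the summary statistics of the 33 differences (SciPy assumed for the t critical value).

```python
from math import sqrt
from scipy.stats import t

n, dbar, sd = 33, -0.4239, 0.3868
tc = t.ppf(0.975, df=n - 1)                       # about 2.037

ci = tc * sd / sqrt(n)
pi = tc * sd * sqrt(1 + 1 / n)
print(dbar - ci, dbar + ci)                       # about (-.561, -.287)
print(dbar - pi, dbar + pi)                       # about (-1.224, .376)
```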

39.
a. The accompanying normal probability plot shows that the differences are consistent with a normal
population distribution.

[Normal probability plot of the differences; omitted.]

b. We want to test Ho: μ_D = 0 versus Ha: μ_D ≠ 0. The test statistic is
t = (d̄ − 0)/(s_D/√n) = (167.2 − 0)/(228/√14) = 2.74, and the two-tailed P-value is given by 2[P(T > 2.74)] ≈ 2[P(T > 2.7)] = 2[.009] = .018. Since .018 < .05, we reject Ho. There is evidence to support the claim that the true average difference between intake values measured by the two methods is not 0.


41.
a. Let μ_D denote the true mean change in total cholesterol under the aripiprazole regimen. A 95% CI for μ_D, using the "large-sample" method, is d̄ ± z_{.025}·SE(d̄) = 3.75 ± 1.96(3.878) = (−3.85, 11.35).

b. Now let μ_D denote the true mean change in total cholesterol under the quetiapine regimen. The hypotheses are Ho: μ_D = 0 versus Ha: μ_D > 0. Assuming the distribution of cholesterol changes under this regimen is normal, we may apply a paired t test:
t = (d̄ − 0)/(s_D/√n) = (9.05 − 0)/4.256 = 2.126 ⇒ P-value = P(T ≥ 2.126) ≈ P(T ≥ 2.1) = .02.
Our conclusion depends on our significance level. At the α = .05 level, there is evidence that the true mean change in total cholesterol under the quetiapine regimen is positive (i.e., there's been an increase); however, we do not have sufficient evidence to draw that conclusion at the α = .01 level.

c. Using the "large-sample" procedure again, the 95% CI is d̄ ± 1.96·SE(d̄). If this equals (7.38, 9.69), then midpoint = d̄ = 8.535 and width = 2(1.96·SE(d̄)) = 9.69 − 7.38 = 2.31 ⇒ SE(d̄) = 2.31/(2(1.96)) = .59. Now, use these values to construct a 99% CI (again, using a "large-sample" z method): d̄ ± 2.576·SE(d̄) = 8.535 ± 2.576(.59) = 8.535 ± 1.52 = (7.02, 10.06).

43.
a. Although there is a "jump" in the middle of the normal probability plot, the data follow a reasonably straight path, so there is no strong reason for doubting the normality of the population of differences.

b. A 95% lower confidence bound for the population mean difference is:
d̄ − t_{.05,14}(s_D/√n) = −38.60 − (1.761)(23.18/√15) = −38.60 − 10.54 = −49.14. We are 95% confident that the true mean difference between age at onset of Cushing's disease symptoms and age at diagnosis is greater than −49.14.

c. A 95% upper confidence bound for the corresponding reversed difference is 38.60 + 10.54 = 49.14.

45.
a. Yes, it's quite plausible that the population distribution of differences is normal, since the accompanying normal probability plot of the differences is quite linear. [Plot omitted.]


b. No. Since the data is paired, the sample means and standard deviations are not useful summaries for
inference. Those statistics would only be useful if we were analyzing two independent samples of data.
(We could deduce J by subtracting the sample means, but there's no way we could deduce So from the
separate sample standard deviations.)

c. The hypotheses corresponding to an upper-tailed test are Ho: I'D ~ 0 versus H,: I'D > O. From the data

proviid e d ,t he paire
. d t test statisnc
. .. IS J - ""'0 = 82.5 - r;-;"
t = ----r 0 3 66 Th e correspon diing PI'-va ue IS
sD/"n 87.4/,,15
P(T14 ~ 3.66)" P(T14 ~ 3.7) = .00 I. While the P-value stated in the article is inaccurate, the conclusion
remains the same: we have strong evidence to suggest that the mean difference in ER velocity and IR
velocity is positive. Since the measurements were negative (e.g. -130.6 deg/sec and -98.9 deg/sec),
this actually means that the magnitude ofiR velocity is significantly higher, on average, than the
magnitude ofER velocity, as the authors of the article concluded.

47. From the data, n ~ 12, d ~ -0.73, So ~ 2.81.


a. Let I'D = the true mean difference in strength between curing under moist conditions and laboratory
drying conditions. A 95% CI for I'D is J /'025,lISo!.[,; ~ -0.73 2.201 (2.81)/ JlO ~
(-2.52 MPa, 1.05 MPa). In particular, this interval estimate includes the value zero, suggesting that
true mean strength is not significantly different under these two conditions.

b. Since n ~ 12, we must check that the differences are plausibly from a normal population. The normal
probability plot below stron I substantiates that condition.
Normal Probability Plot of Differences
Normal
,,-r;------r--_r---r-~_r_;r___,

"-1I
,

-'--
"
"
- ~'-----t
,
~-". .~.
,
-1,
i..
,
.,
-- ~'-~,::-~~~
;- . r ",
I
I" -T ~ ...
- 1 i-
-- 1, T,

iu
e,
- 'jl t
1

-7.5 -5.0 -2.5 0.0 2.5 5.0


Differences

127
Cl2016 Ccngage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 9: Inferences Based on Two Samples

Section 9.4

49. Let p, denote the true proportion of correct responses to the first question; define p, similarly. The
hypotheses of interest are Hi: p, - p, = 0 versus H,: p, - p, > O. Summary statistics are ", <ni = 200,
p, = ~ ~ .82, p, = 140 = .70, and the pooled proportion is p ~ .76. Since the sample sizes are large, we
200 200
may apply the two-proportion z test procedure.
. . IS
Th e ca Icu Iated test stansnc . z ~ I (.82-.70)-0 . ,an d the PI'-va ue
~ 281 IS P (Z ?:.281) ~ .0025 .
,,(.76)(.24) [Too + foo]
Since .0025:S .05, we reject Ho at the 0: ~ .05 level and conclude that, indeed, the true proportion of correct
answers to the context-free question is higher than the proportion of right answers to the contextual one.

51. Let p, ~ the true proportion of patients that will experience erectile dysfunction when given no counseling,
and define p, similarly for patients receiving counseling about this possible side effect. The hypotheses of
interest are Ho: PI - P2 = 0 versus H; PI - P2 < O.
The actual data are 8 out of 52 for the first group and 24 out of 55 for the second group, for a pooled

proportion
. 0
f"p = ---
8+24
~ .299 . Th e two-proportion
. . ..
z test statistic IS
(.153-.436)-0
-.
320
, an
d
52+55 J(.299)(.70 I) [f, + +,]
the P-value is P(Z:5 -3.20) ~ .0007. Since .0007 < .05, we reject Ho and conclude that a higher proportion
of men will experience erectile dysfunction if told that it's a possible side effect of the BPR treatment, than
if they weren't told of this potential side effect.

53.
a. Let pi and Pi denote the true incidence rates of GI problems for the olestra and control groups,
respectively. We wish to test Ho: p, - /1, ~ 0 v. H,: p, - pi f. O. The pooled proportion is
"529(.176)+563(.158) fr hi ...
p = = .1667, om w ich the relevant test stanstic IS Z ~
529+563
.176 - .158 ~ 0.78. The two-sided P-value is 2P(Z?: 0.78) ~ .433 > 0: ~ .05,
~(.1667)(.8333)[529-' + 563"]
hence we fail to reject the null hypothesis. The data do not suggest a statistically significant difference
between the incidence rates of GI problems between the two groups.

(1.96~(.35)(1.65) / 2 + u8A 15)(.85) + (.2)(.8) )'


b. n = 1210.39, so a common sample size of m~n~
(.05)'
1211 would be required.

55.

a. A 95% large sample confidence interval formula for lo( 8) is lo( B) Zal2)m:x x + n:: . Taking the
antilogs of the upper and lower bounds gives the confidence interval for e itself.

128
o 2016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted 10a publicly accessible website, in whole or in part.
Chapter 9: Inferences Based on Two Samples

b. 0= -""-- =1.818, In 0 =.598,and


, ~ ( ') the standard deviation is
11,037

10,845 10,933
(11,034)(189) + (11,037)(104) .1213, so the CI for In(B) is .598l.96(.1213) = 060,.836).
Then taking the antilogs of the two bounds gives the CI for Oto be (1.43, 2.31). We are 95% confident
that people who do not take the aspirin treatment are between 1.43 and 2.31 times more likely to suffer
a heart attack than those who do. This suggests aspirin therapy may be effective in reducing the risk of
a heart attack.

57. p, = 154~7 = .550, p, = ~~ = .690, and the 95% CI is (.550 - .690) l.96(.106) = -.14 .21 = (-.35,.07).

Section 9.5

59.
a. From Table A.9, column 5, row 8, FOI,s,. = 3.69.

b. From column 8, row 5, FOI,s,s = 4.82.


I
c. F95,5,8 = -F-- = .207 .
.05,8,5

I
d. F95,8,5 =-F--=271
.05,5,8

e. FOI,IO,12 = 4,30
I I
f. ~99,IO,12
.212.
F0I,12,IO 4.71
g. F05.6,4 =6,16, so P(F "6.16) = ,95.

h. Since F99105=_1_=.177, P(.177"F"4.74)=P(F,,4.74)-P{F,,.177) =.95-.01=.94 .


. " 5.64

61. We test Ho: 0"' = 0"' V. H, : 0"' '" 0". The calculated test statistic is f = (2.75)', = .384. To use Table
" " (4.44)
A.9, take the reciprocal: l!f= 2.61. With numerator df'> m - I = 5 - I = 4 and denominator df= n - I =
10- I = 9 after taking the reciprocal, Table A.9 indicates the one-tailed probability is slightly more than
.10, and so the two-sided P-value is slightly more than 2(.10) = .20.
Since ,20> .10, we do not reject Ho at tbe a ~ .I level and conclude that there is no significant difference
between the two standard deviations.

63. Let a12:::: variance in weight gain for low-dose treatment, and a;:;::: variance in weight gain for control

condition. We wish to test H o: (T1


2
:;::: a; v. H.: al2 > a; . The test statistic is I> s~::::
5,
54: = 2.85. From
32
Table A.9 with df= (19,22) '" (20, 22), the P-value is approximately .01, and we reject Ho at level .05. The
data do suggest that there is more variability in the low-dose weight gains.

129
CI 2016 Cengagc Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
_ f __

Chapter 9: Inferences Based on Two Samples

65. p( F,-." .-',-I :> ~~ ~ :~ s F." ..,.,_,) = 1- a. The set of inequalities inside the parentheses is clearly

S'P. a S'F
equivalent to 2 l-(l"f~,m-I.n-l < 0'; ~ 2 al\m-I.n-I Substituting the sample values s~ and si yields the
Sl 0'1 Sl
,
confidence interval for a~ , and taking the square root of each endpoint yields the confidence interval for
a,

a, . Withm ~ n = 4, we need F0533 =9.28 and F'533 =_1_=.108. Then with" ~ .160 and s, = .074,
a, . .. ... 9.28

a'
the CI for --+
a,
is (.023,1.99), and for a, is (.15,1.41).
a,

Supplementary Exercises

67. We test Ho :f.L, -f.Lz =0 v. H. :f.LI-I'Z ,,0. The test statistic is

tJx-Ji)-A= 807-757 =~=~=3.22. Theapproximatedfis


"
~+~
27'
--+-
41' .J24i 15.524
m n 10 10

V= (241)' 15.6, which we round down to 15. The P-value for a two-tailed test is
+.L( I 6",,8.:..:,.1
_(72_.9_)' ),-'
9 9
approximately 2P(T > 3.22) = 2( .003) ~ .006. This small of a P-value gives strong support for the
alternative hypothesis. The data indicates a significant difference. Due to the small sample sizes (10 each),
we are assuming here that compression strengths for both fixed and floating test platens are normally
distributed. And, as always, we are assuming the data were randomly sampled from their respective
populations.

69. Let PI = true proportion of returned questionnaires that included no incentive; Pi = true proportion of
returned questionnaires that included an incentive. The hypotheses are Ho: P, - P, = 0 v. H. : P, - P, < 0 -

The test statistic is z p, - P,


Jpq(;+;,) .
p, = J..?... = .682 and p, = 66 = .673 ; at this point,
you might nonce that smce P, > p" the numerator of the
110 98
z statistic will be > 0, and since we have a lower tailed test, the P-value will be > .5. We fail to reject H o-
This data does not suggest that including an incentive increases the likelihood of a response.

130
C 2016 Ccngage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 9: Inferences Based on Two Samples

-473.3 + 1691.9 609.3.


71. The center of any confidence interval for IJI - f.i2 is always XI - Xl' SO X; - X2
2
id h .. . 1691.9-(-473.3)
Furt h ermore, ha If of the WI t of this interval is 1082.6. Equating this value to the
2

expression on the right ofthe 95% confidence interval formula, we find 5-+
", n,
s;
= 1082.6 = 552.35.
1.96
For a 90% interval, the associated z value is 1.645, so the 90% confidence interval is then
609.3 (1.645)( 552.35) = 609.3 908.6 = (-299.3,1517.9).

73. Let 1'1 and 1'2 denote the true mean zinc mass for Duracell and Energizer batteries, respectively. We want to
test the hypotheses Ho: 1'1 - 1'2 = 0 versus H,: 1'1 - 1'2 if' O. Assuming that both zinc mass distributions are
'II I th . .. (x-.y)-L'>o (138.52-149.07)-0 519
norma,I we use a two-samp e t test; e test statistic IS t= == = -. .
s,' s;
-+-
(7.76)'
---+---
(1.52)'
15 20 m"
The textbook's formula for df gives v = 14. The P-value is P(T'4:S -5.19)" O. Hence, we strongly reject Ho
and we conclude the mean zinc mass content for Duracell and Energizer batteries are not the same (they do
differ).

75. Since we can assume that the distributions from which the samples were taken are normal, we use the two-
sample t test. Let 1'1 denote the true mean headability rating for aluminum killed steel specimens and 1'2
denote the true mean headability rating for silicon killed steel. Then the hypotheses are H0 : Il, - Il, = 0 v.
. . . -.66 -.66 225 Th .
H a : Il, - Il," 0 . The test statistic IS t I I' . e approximate
".03888 + .047203 ".086083
(.086083)'
degrees of freedom are v 57.5 \, 57 . The two-tailed P-value '" 2(.014) ~ .028,
(.03888)' (.047203)'
+ -'------:~'--
29 29
which is less than the specified significance level, so we would reject Ho. The data supports the article's
authors' claim.

77.
a. The relevant hypotheses are Ho : 1', - Il, = 0 v. H, :JI, - Il, ,,0. Assuming both populations have
normal distributions, the two-sample t test is appropriate. m = II, x =98.1, s, ~ 14.2, "~ 15,
- 1292. ,s2~39.1.
y= T h e test stanstic
. . is t= I -31.1 I -31.1 -..284 Th e
,,18.3309+101.9207 ,,120.252
(120.252)2
approximate degrees of freedom v 18.64 \, 18. From Table A.8, the
(18.3309)' + ,,(1-,--01_.9_2_071..)'
10 14
two-tailed P-value '" 2(.006) = .012. No, obviously the results are different.

b. For the hypotheses Ho: Il, - Jl2 = -25 v. H.: JI, - JI, < -25, the test statistic changes to

t = -31.1-(-25) -.556. Witb df> 18, the P-value" P(T< -.6) = .278. Since the P-value is greater
"'120.252
than any sensible choice of a, we fail to reject Ho. There is insufficient evidence that the true average
strength for males exceeds that for females by more tban 25 N.

131
02016 Cengagc Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted 10 a publicly accessible website, in whole or in part.
Chapter 9: Inferences Based on Two Samples

79. To begin, we must find the % difference for each of the 10 meals! For the first meal, the % difference is
measured - stated 212 -180
.1778, or 17.78%. The other nine percentage differences are 45%,
stated 180
21.58%,33.04%,5.5%,16.49%,15.2%,10.42%,81.25%, and 26.67%.
*
We wish to test the hypotheses Ho: J1 ~ 0 versus H,: II 0, where J1 denotes the true average perceot
difference for all supermarket convenience meals. A normal probability plot of these 10 values shows some
noticeable deviation from linearity, so a z-test is actually of questionable validity here, but we'll proceed
just to illustrate the method.
27.29-0
For this sample, n = 10, XC~ 27.29%, and s = 22.12%, for a I statistic of t 22.12/ 3.90. J10
At df= n - I = 9, the P-value is 2P(T, 2: 3.90)" 2(.002) = .004. Since this is smaller than any reasonable
significance level, we reject Ho and conclude that the true average percent difference between meals' stated
energy values and their measured values is non-zero.

81. The normal probability plot below indicates the data for good visibility does not come from a nonrnal
distribution. Thus, a z-test is not appropriate for this small a sample size. (The plot for poor visibility isn't
as bad.) That is, a pooled t test should not be used here, nor should an "unpooled" two-sample I test be used
(since it relies on the same normality assumption).

, i
: , . i
.!
I .'..'-
I ,I,

'; ,

i
I
i
!~
I . ,
,

-
. +
:
,.
1,
'1

., c
....
, ,I
;

83. We wish to test Ho: )11 =)12 versus H,: )11'" )12
Unpooled:
With Ho: )11- /12 = 0 v. H,: )11-/12 '" 0, we will reject Ho if p - value < a .

v = ( (} + r)')' = 15.95 -I- 15, and the test statistic t 8.48 - 9.36 -.88 = -1.81 leads to a P-
.792 1.522 .792 + 1.522 .4869
14+ 12 14 12
13 II
value of about 2P(T15 > 1.8) =2(.046) = .092.
Pooled:
The degrees of freedom are v = III + n - 2 = 14 + 12 - 2 = 24 and the pooled variance

is (~~ )c79)2 + (~~ )1.52)2 = 1.3970, so s p = 1.181. The test statistic is

- 88 - 88
. = -' -" -1.89. The P-value = 2P(T24 > 1.9) = 2(.035) ~ .070.
I. 181 J..+J.. .465
14~ 12

132
C 2016 Cengage Learning. All Rights Reserved. May nor be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part

-~~~iiiiiiiiiiiiiiiiiiiliiiiiiiiiiiiiiiiiiiiiiiiliiiiii _
Chapter 9: Inferences Based on Two Samples

With the pooled method, there are more degrees of freedom, and the P-value is smaller thao with the
unpooled method. That is, if we are willing to assume equal variances (which might or might not be valid
here), the pooled test is more capable of detecting a significant difference between the sample means.

85.
900 400
a. With n denoting the second sample size, the first is m ~ 3n. We then wish 20 = 2(2.58) -+-,
3n n
which yields n =47, m = 141.

b. W e 'WiS. h to fiInd t Iie n W hirc h ...


mInImIZeS 2Zan 900 400
+ -- I
. Ient Iy, ten
or equiva h w hi c h rmrurruzes
'. '
400-n n
900
---+--. 400Taki ng t hee derivative with respect to n and eouati
envanve Wit equating to O'ld
yie s
400-n n
900(400- n r' -400n- 2 = 0, whence 91l' = 4( 400 -Il)' , or 5n' + 3200n-640,000 = O. The solution
is n = 160, and thus m = 400 - n = 240.

87. We want to test the hypothesis Hi: PI :s 1.5p2 v. H,: J.1.1 > 1.5p, - or, using the hint, Ho: fI:s 0 v. N,: fI> O.
2 2

Our point estimate of fI is 0 = X, -1.5X2, whose estimated standard error equals s(O) = .:'1..+ (1.5)2 !1- ,
nl nz

2 ,,'
using the factthat V(O) = ~ + (1.5) 2 -' . Plug in the values provided to get a test statistic t =
nl n2

22.63~15)-0 '" 0.83. A conservative dfestimate here is v ~ 50 - I = 49. Since P(T~ 0.83) '" .20
2.8975
and .20> .05, we fail to reject Ho at the 5% significance level. The data does not suggest that the average
tip after an introduction is more than 50% greater than the average tip without introduction.

89. ,lo=O, ",=",=IO,d=l, ,,=J200 =14.~2 ,so /3=$(1.645- ,In J,giVingp=.9015,.8264,


n 'ill 14.142
.0294, and .0000 for n = 25, 100,2500, and 10,000 respectively. If the PiS referred to true average IQs
resulting from two different conditions, J.1., - J.1., = I would have little practical significance, yet very large
sample sizes would yield statistical significance in this situation.

91. H 0 : PI = P, will be rejected at level" in favor of H, : PI > P, if z ~ Za With p, = f,- = .10 and
p, = f,k = .0668, P = .0834 = .0332 = 4.2 , so Ho is rejected at any reasonable"
and Z level. It appears
.0079
that a response is more likely for a white name than for a black name.

133
02016 Cenguge Learning. All Rights Reserved. May not be scanned, copied or duplicated. or posted to a publicly accessible website, in whole or in part.
, ~. II

Chapter 9: Inferences Based on Two Samples

93.
a. Let f./, and f./, denote the true average weights for operations I and 2, respectively. The relevant
hypotheses are No: f./, - f.I, = 0 v. H.: f./, - f./, '" O. The value of the test statistic is
(1402.24-1419.63) -17.39 -17.39 =-6.43.
(10.97)' (9.96)' ,)4.011363+3.30672 ,)7.318083
'1/~-=-30;:-'-+ -3-0 -

(7.318083)'
At df> v 57.5 '\. 57 , 2P(T~ --{j.43)'" 0, so we can reject Ho at level
(4.011363)' (3.30672)'
~--:::;-~- + ~---:c~~
: : 29 29
.05. The data indicates that there is a significant difference between the true mean weights of the
packages for the two operations.

b. Ho: III ~ 1400 will be tested against H,: Il, > 1400 using a one-sample t test with test statistic
x-1400I'
With
1
d egrees 0 fftreed om = 29, we reject
'"f H It=::
o 1.05,29 = 1.69 9. Th e test sratrstic
.. vaIue
s.l-Jm
1402.24 -1400 2.24 __1.1.
is t Because 1.1 < 1.699, Ho is not rejected. True average weight does
10.97/ ..J3O 2.00
not appear to exceed 1400.

95. A large-sample confidence interval fOrA,-A, is (i, -i,) Zal,~il + i2 , or (x - Y)Zal,Jx +2..
n In m n
II; With Z> 1.616 and jr e 2.557, the 95% confidence interval for A, -A, is-.94 1.96(.177) =-.94 .35 ~
(-1.29, -.59).

134
C 2016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.

I
CHAPTER 10

Section 10.1

MSTr. 2673.3
1. The computed value of F = -- IS f
= -- = 2.44. Degrees of freedom are I - I ~ 4 and I(J - I) =
MSE 1094.2
(5)(3) = 15. From Table A.9, Fos"'I> = 3.06 and FIO,4.15 = 2.36 ; since our computed value of 2.44 is
between those values, it can be said that .05 < P-value < .10. Therefore, Ho is not rejected at the a = .05
level. The data do not provide statistically significant evidence of a difference in the mean tensile strengths
of the different types of copper wires.

3. With 1', = true average lumen output for brand i bulbs, we wish to test Ho : 1', = 1', =}l, v. H,: at least two

I';'S are different. MSTr = 0-; = 591.2


2
= 295.60, MSE = o-~= 4773.3
21
= 227.30, so

f = MSTr = 295.60 = 1.30.


MSE 227.30
For finding the P-value, we need degrees of freedom I - I ~2 and I (J - I) = 21. In the 2"" row and 21"
column of Table A,9, we see that 1.30 < FIOI2l = 2.57 , so the P-value > .10. Since .10 is not < .05, we
cannot reject Ho, There are no statistically significant differences in the average lumen outputs amoog the
three brands of bulbs.

5. 1', ~ true mean modulus of elasticity for grade i (i = I, 2, 3). We test Ho : 1', = 1', = 1'3 vs. H,: at least two
I';'S are different. Grand mean = 1.5367,

MSTr = I~[(1,63 -1.5367)' + (1.56-1.5367)' + (1.42 -1.5367)' ] = .1143,

MSE = .!.[(,27)' +(.24)' + (.26)' ] = .0660, f = MSTr = .1143 = 1.73. At df= (2,27), 1.73 < 2.51 => the
3 MSE .0660
P-value is more than .10. Hence, we fail to reject Ho. The three grades do not appear to differ significantly.

7. Let 1', denote the true mean electrical resistivity for the ith mixture (i = I, ... , 6),
The hypotheses are Ho: 1'1 = ... = 1'6versus H,: at least two of the I','S are different.
There are I ~6 different mixtures and J ~ 26 measurements for each mixture. That information provides the
dfvalues in tbe table. Working backwards, SSE = I(J - I)MSE = 2089.350; SSTr ~ SST - SSE ~ 3575.065;
MSTr = SSTr/(I - I) = 715.013; and, finally,j= MSTrlMSE ~ 51.3.

Source <If SS MS r
Treatments 5 3575.065 715.013 51.3
Error 150 2089.350 13.929
Total 155 5664.415

The P-value is P(Fwo ~ 51.3) '" 0, and so Ho will be rejected at any reasonable significance level. There is
strong evidence that true mean electrical resistivity is not the same for all 6 mixtures.

135
102016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 10: The Analysis of Variance

9. The summary quantities are XI. = 34.3, x,. = 39.6, X, = 33.0, x. = 41.9, x = 148.8, n:.x~= 946.68, so
(148.8)' (34.3)' + ... +(41.9)'
CF = 922.56 , SST = 946.68 - 922.56 = 24.12, SSTr 922.56 = 8.98,
~ 6
SSE = 24.12-8.98 = 15.14.

Source df SS MS F
Treatments 3 8.98 2.99 3.95
Error 20 15.14 .757
Total 23 24.12

Since 3.10 = F".J,20 < 3.95 < 4.94 = FOl.l.20' .01 < P-value < .05, and Ho is rejected at level .05.

Section 10.2

11. Q".l,1S = 4.37, w = 4.37 F72.8


-4- = 36.09. The brands seem to divide into two groups: 1,3, and 4; and 2

and 5; with no significant differences within each group but all between group differences are significant.
3 1 4 2 5
437.5 462.0 469.3 512.8 532, I

13. Brand 1 does not differ significantly from 3 or 4, 2 does not differ significantly from 4 or 5, 3 does not
differ significantly froml, 4 does not differ significantly from I or 2,5 does not differ significantly from 2,
but all other differences (e.g., 1 with 2 and 5, 2 with 3, etc.) do appear to be significant.
3 I 4 2 5
427.5 462.0 469.3 502.8 532.1

15. In Exercise 10.7, 1= 6 and J = 26, so the critical value is Q.05,6,1" '" Q'05,6,120 = 4.10, and MSE = 13.929. SO,

W"'4.IOJI3~29 = 3.00. So, sample means less than 3.00 apart will belong to the sarne underscored set.

Three distinct groups emerge: the first mixture (in the above order), then mixtures 2-4, and finally mixtures
5-6.
14.18 17.94 18.00 18.00 25.74 27.67

17. e = l.C,fl, where ci = C, =.5 and cJ = -I , so iJ = .sx,.+ .5x,. - XJ.= -.527 and l.el' = 1.50. With
1.02527 = 2.052 and MSE = .0660, the desired CI is (from (10.5))

(.0660)( 150)
-.527(2.052) 10 = -.527,204 = (-.731,-.323).

136
e 2016 Ccngage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.

L
Chapter 10: The Analysis of Variance

140 1680
19. MSTr~ 140,errordf= 12,so f= -- and F".,.12 =3.89.
SSEIJ2 SSE

w= Q05] I2JMSE = 3.77 JSSE = .4867JSSE . Thus we wish 1680 > 3.89 (significant)) and
., J ~ ~
.4867.,JSSE > 10 (~20 - 10, the difference between the extreme X; 's, so no significant differences are
identified). These become 431.88 > SSE and SSE> 422.16, so SSE = 425 will work.

21.
a. Tbe hypotheses are Ho: 1'1 = .. , = 116 v. H,: at least two of the I1;'S are different. Grand mean = 222.167,
MSTr = 38,015.1333, MSE = 1,681.8333, andf= 22.6.
At df'> (5, 78) '" (5,60),22.62: 4.76 => P-value < .001. Hence, we reject Ho. The data indicate there is
a dependence on injection regimen.

b. Assume 1.005." ss2.645 .

i)

1,681.8333 (1.2)
=-67.4(2.645) 14 =(-99.16,-35.64).

ii) Confidence interval for HI', + p, + 1', + 1'5) - fl. :

1,681.8333(125) _ ( )
= 61.75 ( 2.645 ) 14 - 29.34,94.16

Section 10.3

23. JI = 5, J, ~ 4, J] = 4, J, = 5, X; = 58.28, x, = 55.40, x,. = 50.85, x,. = 45.50, MSE = 8.89.

.
WltbW,.=Q05'I'
v ..
--
2JJ
I
MSE ( -+- 1 J =4.11 8.89 (1-+-
--
2JJ
1 J,
I J I J

X; - x, w" = (2.88)( 5.81); X;. - x] w,] = (7.43) ( 5.81)"; x, -x, 11';, = (12.78) (5.48) *;
x,. - x] W" = (4.55) (6.13); x, - x, W" = (9.90) ( 5.81) ";X]. - x,. w" = (5.35) ( 5.81).
A" indicates an interval that doesn't include zero, corresponding to I1'S that are judged significantly
different. This underscoring pattern does not have a very straightforward interpretation.

4 3 2

25.
a. The distributions of the polyunsaturated fat percentages for each of the four regimens must be normal
with equal variances.

b. We have all the X;s , and we need the grand mean:


x 8(43.0)+13(42.4)+17(43.1)+14(43.5) 2236.9 =43.017'
52 52'

137
C 20 16 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to II publicly accessible website, in whole or in part.
r _.-

Chapter 10: The Analysis of Variance

SSTr= IJ,(x,. -x.J' =8(43.0-43.017)' +13(42.4-43.017)'

+17(43.1-43.017)2 +13(43.5-43.017f = 8.334 and MSTr = 8.334 =2.778


3

SSE = I(J; -1)5' = 7(1.5)' + 12(1.3)' + 16(1.2)' + 13(12)' = 7779 and MSE = 7:;9 = 1.621. Then

f = MSTr = 2.778 = 1.714


MSE 1.621
Since 1.714 < Flo 3 50 = 2.20 , we can say that the P-value is > .10. We do not reject the null
hypnthesis at significance level .10 (or any smaller), so we conclude that the data suggests no
difference in the percentages for the different regimens.

27.

a. Let u, ~ true average folacin content for specimens of brand i. The hypotheses to be tested are
Ho : fJ, = J.1, = I":J = u, vs. H,: at least two of the II;'S are different. ~r.x~= 1246.88 and

x' (168.4)' r.x' (57.9)' (37.5)' (38.1)' (34.9)'


-'-:-c-'~=1l8161 soSST~65.27- -' =---+--+---+--=1205.10,so
n 24 ' J, 7 5 6 6
SSTr = 1205.10 -1181.61 = 23.49.

Source df SS MS F
Treatments 3 23.49 7.83 3.75
Error 20 41.78 2.09
Total 23 65.27
With numerator df~ 3 and denominator df= 20, F".J.20 = 3.10 < 3.75 < Fou.20 = 4.94, so the P-value
is between .01 and .05. We reject Ho at the .05 level: at least one of the pairs of brands of green tea has
different average folacin content.

b. With:t;. = 8.27,7.50,6.35, and 5.82 for i = 1,2,3,4, we calculate the residuals Xu -:t;. for all
observations. A normal probability plot appears below and indicates that the distribution of residuals
could be normal, so the normality assumption is plausible. The sample standard deviations are 1.463,
1.681,1.060, and 1.551, so the equal variance assumption is plausible (since the largest sd is less than
twice the smallest sd).

Normal Probability Plot for ANOVA Residuals

,-
,- .
". ..
....
";:
.. .
0-
"
"
., -
."
., -
-a .' 0
prob

138
o 2016 Cengagc Learning. All Rights Reserved. May nOIbe scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in pan.
Chapter 10: The Analysis of Variance

c. Qo,"",o = 3,96 and Wlj = 3,96, 2,09 (~+~J, so the Modified Tukey intervals are:
2 J, J)

Pair Interval Pair Interval

1,2 ,77 2.37 2,3 I. 152A5a

1,3 1.92 2,25 2,4 1.682A5

1,4 2A52,25 * 3,4 .53 2,34

4 3 2

Only Brands I and 4 are significantly different from each other,

= E(7J,x,' -nX,,) V,E( X,~)-nE(X')


n
29. E(SSTr) =

= v,[v(x,)+( E(x,))']-n[v(x HE(X)J'] = LJ,[ ~: +,u,']-n[ :' +(V~p,

= I (j' + V, (,u+ a,)' - (j' -~[ V, (I' + a,)J' = (I -I)(j' + V,I" + 2j.li.!,a, + V,a,' -~[ nu + 0]'
n n
=(1 -l)(j' +,u'n+2pO+V,a; -nl" = (I-1)(j' +V,a,', from which E(MSTr) is obtained through
division by (!- I),

31. With o = I (any other a would yield tbe same ), al = -I, a, = a, = 0, a. = I,

a 1(5( _I)' +4(0)' +4(0)' +5(1)')


= 2.5, = 1.58, VI = 3, v, = 14, and power e ,65.
4

33. g(X)=X(I-':')=nU(I-U) where u=':',so h(x) = S[u(l-u)r'l2du, From a table of integrals, this
n n

gives hex) = arcsin (,[u") = arcsin ( ~J as the appropriate transformation,

139
C 2016 Cell gage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted 10 a publicly accessible website, in whole or in part.
Chapter 10: The Analysis of Variance

Supplementary Exercises

35.
a. The hypotheses are H, : A = fl, = f.', = 11. v. H,: at least two of the Il,'S are different. The calculated
test statistic isf~ 3.68. Since F.05.3~O = 3.10 < 3.68 < FOI320 = 4.94, the P-value is between .01 and
.05. Thus, we fail to reject Ho at a = .0 I. At the I % level, the means do not appear to differ
significantly.

b. We reject Ho when the P-value S a. Since .029 is not < .01, we still fail to reject Hi:

37. Let 1', ~ true average amount of motor vibration for each of five bearing brands. Then the hypothesesare
Ho : 1'1 = ... = 11, vs. H,: at least two of the I'is are different. The ANOY A table follows:
Source df SS MS F
Treatments 4 30.855 7.714 8.44
Error 25 22.838 0.914
Total 29 53.694
8.44> F.ooI"." ~ 6.49, so P-value < .001 < .05, so we reject Ho. At least two of the means differ from one
another. The Tukey multiple comparisons are appropriate. Q", 25 = 4.15 from Minitab output; or, using

Table A.IO, we can approximate with Q".",. =4.17. Wij =4.15,).914/6 =1.620.

Pair x; -Xi" Pair XI. -s;


1,2 -2.267* 2,4 1.217
1,3 0.016 2,5 2.867'
1,4 -1.050 3,4 -1.066
1,5 0.600 3,5 0.584
2,3 2.283* 4,5 1.650'

*Indicates significant pairs.

5 3 4 2

0=2.58 263+2.13+2.41+2.49
39. .165
4 . = 2.060, MSE = .108, and
102,,,

'Lc; = (I)' +( _.25)' + (-.25)' + (-.25)' +( _.25)' = 1.25 , so a 95% confidence interval for B is

(.108)(1.25) ( 'bl
.1652.060 = .165.309 = -.144,.474). This interval does include zero, so 0 is a plausi e
6
value for B.

41. This is a random effects situation. Ho : a; = 0 states that variation in laboratories doesn't contribute to
variation in percentage. SST = 86,078.9897 - 86,077.2224 ~ 1.7673, SSTr = 1.0559, and SSE ~ .7/14.
At df= (3, 8), 2.92 < 3.96 < 4.07 =:> .05 < P-value < .10, so Ho cannnt be rejected at level .05. Variation in
laboratories does not appear to be present.

140
C 2016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 10: The Analysis of Variance

43. ~(I-I)(MSE)( F05I_I,,_1 ) = J(2)(2.39)(3.63) = 4.166. For f.JI -1'" CI = I, c, = -I, and c, = 0, so

fi1 c' JRI


L.,-'-=
J,
-+-=.570,
8 5
..
Similarly.for f.J,-I'" fi1c' )HI
L.,-'-=
J,
-+-=.540;for
8 6
f.J,-f.J"

L' fi1'L ,-------:-

u: = JRl 5' 5' (_I)' = .498.


fi1 J,
-+-
5 6
= .606, and for .5f.J, +.5f.J, - f.J" zs: =
J,
.: -+-' -+--
8 5 6

Contrast Estimate Interval

25.59 - 26.92 = -1.33 (-1.33) (.570)(4.166) = ( -3. 70,1.04)


f.JI - f.J,
25.59 - 28.17 =-2.58 (-2.58) (.540)(4.166) = ( -4.83, -.33)
1'1 - f.J,
26.92 - 28.17 =-1.25 (- 125) ('606)(4.166) = ( -3. 77,1.27)
f.J, - f.J,

.51'2 + .5/1, - /i, -1.92 (-1.92) (.498)( 4.166) = (-3.99,0.15)

The contrast between f.JI and f.J" since the calculated interval is the only one that does not contain 0,

45. Y,j -Y' = c(Xy -X.) and Y,. -Y' =c(X,. -x..), so each sum of squares involving Ywill be the
2
corresponding sum of squares involving X multiplied by c2. Since F is a ratio of two sums of squares, c
appears in both the numerator and denominator. So c' cancels, and F computed from Yijs ~ F computed
fromXi/s.

141
e 2016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to IIpublicly accessible website. in whole or in part.
CHAPTER 11

Section 11.1

\.
... I. = --MSA ::::: SSA/(I-I) 442.0/(4-1) 716 C hi h
3. Th e test stansnc IS = . . ompare t IS to t e
A MSE SSE/(l-I)(J-I) 123.4/(4-1)(3-1)
F distribution with df> (4 -I, (4 - 1)(3 - 1 ~ (3, 6): 4.76 < 7.16 < 9.78 => .01 < P-value < .05. In
particular, we reject ROA at the .05 level and conclude that at least one of the factor A means is
different (equivalently, at least one of the a/s is not zero).

'1 I r SSB/(J-I) 428.6/(3-1) 1042 A df (26) 5 14 1042 1092


b S 11m ar y, JB=SSE/(I_I)(J_I) 123.4/(4-1)(3-1) ., t ~ , " < . < .
=> .01 < P-value < .05. In particular, we reject HOB at the .05 level and conclude that at least one ofthe
factor B means is different (equivalently, at least one of the (J/s is not zero).

3.
a. The entries of this ANOV A table were produced with software.
Source df SS MS F
Medium I 0.053220 0.0532195 18.77
Current 3 0.179441 0.0598135 21.10
Error 3 0.008505 0.0028350
Total 7 0.241165
To test HoA: ar = "2 ~ 0 (no liquid medium effect), the test statistic isf; = 18.77; at df= (1, 3), theP-
value is .023 from software (or between .0 I and .05 from Table A.9). Hence, we reject HOA and
conclude that medium (oil or water) affects mean material removal rate.

To test HOB: Pr ~ fh = (J, ~ (J4 = 0 (no current effect), the test statistic isfB = 21 .10; at df ~ (3, 3), the P-
value is .016 from software (or between .01 and .05 from Table A.9). Hence, we reject HOB and
conclude that working current affects mean material removal rate as well.

b. Using a .05 significance level, with J ~ 4 and error df= 3 we require Q.05.4" ~ 6.825. Then, the metric
for significant differences is w = 6.825JO.0028530 /2 ~ 0.257. The means happen to increase with
current; sample means and the underscore scheme appear below.

Current: 10 15 20 25
0.201 0.324 0.462 0.602
X.j :

142
e 2016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
I
Chapter II: Multifactor Analysis of Variance

I
5.
Source df SS MS F
Angle 3 58.16 19.3867 2.5565
Connector 4 246.97 61.7425 8.1419
Error 12 91.00 75833
Total 19 396.13

We're interested in Ho :a, = a, = a, = a, = 0 versus H,: at least one a, * O. fA = 2.5565 < FOll12 = 5.95 =>
P-value > .01, so we fail to reject Ho. The data fails to indicate any effect due to the angle of pull, at the .0 I
significance level.

7.
a. The entries of this ANOYA table were produced with software.
Source df SS MS F
Brand 2 22.8889 11.4444 8.96
Operator 2 27.5556 13.7778 10.78
Error 4 5.1111 1.2778
Total 8 55.5556

The calculated test statistic for the F-test on brand islA ~ 8.96. At df= (2, 4), the P-value is .033 from
software (or between .01 and .05 from Table A.9). Hence, we reject Ho at the .05 level and conclude
that lathe brand has a statistically significant effect on the percent of acceptable product.

b. The block-effect test statistic isf= 10.78, which is quite large (a P-value of .024 at df= (2, 4)). So,
yes, including this operator blocking variable was a good idea, because there is significant variation
due to different operators. If we had not controlled for such variation, it might have affected the
analysis and conclusions.

9. The entries of this ANOY A table were produced with software.


Source df SS MS F
Treatment 3 81.1944 27.0648 22.36
Block 8 66.5000 8.3125 6.87
Error 24 29.0556 1.2106
Total 35 176.7500
At df'> (3, 24),/= 22.36 > 7.55 => P-value < .001. Therefore, we strongly reject HOA and conclude that
there is an effect due to treatments. We follow up with Tukey's procedure:
Q.O'.4.24 = 3.90; w = 3.90.J1.21 06/9 = 1.43
1 4 3 2
8.56 9.22 10.78 12.44

143
C 20 16 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 11: Multifactor Analysis of Variance

11. The residual, percentile pairs are (-0.1225, -1.73), (-0.0992, -1.15), (-0.0825, -0.81), (-0.0758, -0.55),
(-0.0750, -0.32), (0.0117, -0.10), (0.0283, 0.10), (0.0350, 0.32), (0.0642, 0.55), (0.0708, 0.81),
(0.0875,1.15), (0.1575,1.73).

Normal Probability Plot

0.1 -

'"~ ..
" 0.0 -

..0.1 ~
..
-z ., 0
z-percenlile

The pattern is sufficiently linear, so normality is plausible.

13.
a. With Yij=Xij+d, Y,.=X;.+d and Yj=Xj+d and Y =X +d,soallquantitlesmsldethe
parentheses in (11.5) remain unchanged when the Y quantities are substituted for the correspondingX's
(e.g., Y, - y = X; - X .., etc.).

2
b. With Yij=cXij' each sum of squares for Yis the corresponding SS for X multiplied by e However,
when F ratios are formed the e' factors cancel, so all F ratios computed from Yare identical to those
computed from X. If Y;j = eX, + d , the conclusions reached from using the Y's will be identical to
tbose reached using tbe X's.

15.

a. r.ai = 24, so rf>'= (%)( ~:) = 1.125, = 1.06 , v, ~ 3, v, = 6, and from Figure 10.5, power .2.

For tbe second alternative, = 1.59, and power re .43.

b. rf>'=(7)L:~ =(~)(~~)=1.00,SO rf>=1.00,v,=4,v2~12,andpower ",.3.

144
C 2016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 11: Multifactor Analysis of Variance

Section 11.2
17.
a.
Source df SS MS F P-value
Sand 2 705 352.5 3.76 .065
Fiber 2 1,278 639.0 6.82 .016
Sand x Fiber 4 279 69.75 0.74 .585
Error 9 843 93.67
Total 17 3,105

P-values were obtained from software; approximations can also be acquired using Table A.9. There
appears to be an effect due to carbon fiber addition, but not due to any other effect (interaction effect or
sand addition main effect).

b.
Source df SS MS F P-value
Sand 2 106.78 53.39 6.54 .018
Fiber 2 87.11 43.56 533 .030
Sand x Fiber 4 8.89 2.22 0.27 .889
Error 9 73.50 8.17
Total 17 276.28
There appears to be an effect due to both sand and carbon fiber addition to casting hardness, but no
interaction effect.

c.
Sand% o 15 30 0 15 30 o 15 30

Fiber% o 0 0 0.25 0.25 0.25 0.5 0.5 0.5


62 68 69.5 69 71.5 73 68 71.5 74

The plot below indicates some effect due to sand and fiber addition with no significant interaction.
This agrees with the statistical analysis in part b.

Interaction Plot (data means) for Hardness


Carbon
.........
74

/_~<,,-':;-::'-:--- - /
----
"
..... ----
---.....
_
0.00
0.25
0.50
72

//
./ -:,,"
70 ......
-r;>
c
0

l
68 .-..." .... ,"

66

64

62
0 15 30
Sand

145
C 2016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part,
- <

Chapter 11: Multifactor Anal ysis of Variance

19.
Source df SS MS F
Farm Type 2 35.75 17.875 0.94
Tractor Maint. Metbod 5 861.20 172.240 907
Type x Method 10 603.51 60.351 3.18
Error 18 341.82 18.990
Total 35 1842.28

For the interaction effect,hB = 3.18 at df= (10, 18) gives P-value ~ .016 from software. Hence, we do not
reject HOAB at the .0 I level (although just barely). This allows us to proceed to the main effects.

For the factor A main effect.j', ~ 0.94 at df> (2, 18) gives P-value = AI from software. Hence, we clearly
fail to reject HOA at tbe .01 level- there is not statistically significant effect due to type of farm.

Finally,fB ~ 9.07 at df'> (5, 18) gives P-value < .0002 from software. Hence, we strongly reject HOB at the
.01 level - there is a statistically significant effect due to tractor maintenance method.

21. From the provided SS, SSAB = 64,95470 - [22,941.80 + 22,765.53 + 15,253.50] = 3993.87. This allows us
to complete tbe ANOV A table below.

Source df SS MS F
A 2 22,941.80 11,470.90 22.98
B 4 22,765.53 5691.38 11.40
AB 8 3993.87 499.23 .49
Error 15 15,253.50 1016.90
II Total 29 64,954.70

JAB = 049 is clearly not significant. Since 22.98" FOS,2" = 4046, the P-value for factor A is < .05 and HOA is
rejected. Since 11040" F as 4' = 3.84, the P-value for factor B is < .05 and HOB is also rejected. We
conclude that tbe different cement factors affect flexural strength differently and that batch variability
contributes to variation in flexural strength.

23. Summary quantities include xl.. =9410, x, ..=8835, xl.. =9234, XL =5432, x,. = 5684, x3. =5619,
x . = 5567, x,. = 5177 , x. = 27,479, CF = 16,779,898.69, Dc,' = 25 1,872,081, Dc~ = 151,180,459,
resulting in the accompanying ANOVA table.
Source df SS MS F
A 2 11,573.38 5786.69 /t'i:. =26.70
B 4 17,930.09 4482.52 ::::8 = 20.68
AS 8 1734.17 216.77 ~:;: = 1.38
Error 30 4716.67 157.22
Total 44 35,954.31
Since 1.38 < FOl",30 = 3.17 , the interaction P-value is> .0 I and HOG cannot be rejected. We continue:
26.70" FOl,2" = 8.65 => factor A P-value < .01 and 20.68" FOl,4" = 7.01 => factor B P-value < .01, so
both HOA and HOB are rejected. Both capping material and the different batches affect compressive strength
of concrete cylinders.

146
02016 Cengage Learning. All Rights Reserved. May no! be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 11: Multifactor Analysis of Variance

25. With B:::: a, - a,~, B:::: Xi .. - Xr .. ::::J~ 7 r(x ijk - X i'jk ) , and since i *" i' , X ijk and Xi1k are independent

') _ _ (1'2 (72 20'2


foreveryj,k. Thus, VB( =V(X , )+V(X, ,. )=-+-=-
JK JK JK (because V(5t ,I.)=V(e) and

V (eif' )= a') so a. = ~2MSE . The appropriate number ofdfis IJ(K-I), so the CI is


JK

(x, .. _ x;oJ lal2,IJ(K-I) ~2~:E . For the data of exercise 19, x2. ~ 8.192, X3. ~ 8.395, MSE = .0170,

t 025 9 = 2.262, J = 3, K = 2, so the 95% c.l. for a, - a] is (8.182 - 8.395) 2.26210340 ~ -0.203 0.170
. , 6
~ (-0.373, -0.033).

Section 11.3
27,
a. The last column will be used in part b.
Source df SS MS F F.OS,Dum dr, den df

A 2 14,144.44 7072.22 61.06 3.35


B 2 5,511.27 2755.64 23.79 3.35
C 2 244,696.39 122.348.20 1056.24 3.35
AB 4 1,069.62 267.41 2.31 2.73
AC 4 62.67 15.67 .14 2.73
BC 4 331.67 82.92 .72 2.73
ABC 8 1,080.77 135.10 1.17 2.31
Error 27 3,127.50 115.83
Total 53 270,024.33

b. The computed Fstatistics for all four interaction terms (2.31, .14, .72, 1.17) are less than the tabled
values for statistical significance at the level .05 (2.73 for AB/ACIBC, 2.31 for ABC). Hence, all four
P-values exceed .05. This indicates that none of the interactions are statistically significant.

c. The computed F-statistics for all three main effects (61.06, 23.79, 1056.24) exceed the tabled value for
significance at level .05 (3.35 ~ F05.',")' Hence, all three P-values are less than .05 (in fact, all three
P-values are less than .001), which indicates that all three main effects are statistically significant.

d, Since Q.05,3,27 is not tabled, use Q05,3,,' =3.53, w = 3.53 (~~X~)


= 8.95. All three levels differ

significantly from each other.

147
C 2016 Ccngagc Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted \0 a publicly accessible website, in whole or in part.
Chapter 11: Multifactor Analysis of Variance

29.
a.
Source df SS MS F P-value
A 2 1043.27 521.64 110.69 <.001
B I 112148.10 112148.10 23798.01 <.001
C 2 3020.97 1510.49 320.53 <.001
AB 2 373.52 186.76 39.63 <.001
AC 4 392.71 98.18 20.83 <.001
BC 2 145.95 72.98 15.49 <.001
ABC 4 54.13 13.53 2.87 .029
Error 72 339.30 4.71
Total 89 117517.95

P-values were obtained using software. At the .0 I significance level, all main and two-way interaction
effects are statistically significaot (in fact, extremely so), but the three-way interaction is not
statistically significant (.029 > .01).

b. The means provided allow us to construct an AB interaction plot and an AC interaction plot. Based on
the first plot, it's actually surprising that the AB interaction effect is significant: the "bends" of the two
paths (B = I, B ~ 2) are different but not that different. The AC interaction effect is more clear: the
effect of C ~ I on mean response decreases with A (= 1,2,3), while the pattern for C = 2 and C ~ 3 is
very different (a sharp up-dawn-up trend).

Imcraetion Plot Inlernetion l'lot


DOlfi Menns [)alaMo""s
,~r--=------------,
------- ---.--------. ~,
~,
un
'''' ~
'M
... '"
fi
" .,
100 ,
a
'00

"
.,
, .....
"'~--,-------"c-----,---' ,
A A

31.
a. The followiog ANOV A table was created with software.
Source df 88 MS F P-value
A 2 124.60 62.30 4.85 .042
B 2 20.61 1030 0.80 .481
C 2 356.95 178.47 13.89 .002
AB 4 57.49 14.37 1.12 .412
AC 4 61.39 15.35 1.19 .383
BC 4 11.06 2.76 0.22 .923
Error 8 102.78 12.85
Total 26 734.87

b. The P-values for the AB, AC, and BC interaction effects are provided in the table. All of them are much
greater than .1, so none of the interaction terms are statistically significant.

148
10 2016 Cengage Learning. AU Rights Reserved. May not be scanned. copied or duplicated, or posted 10 H publicly accessible website, in whole or in pan.

-I.
Chapter 11: Multifactor Analysis of Variance

c. According to the P-values, the factor A and C main effects are statistically significant at the .05 level.
The factor B main effect is not statistically significant.

d. The paste thickness (factor C) means are 38.356, 35.183, and 29.560 for thickness .2, .3, and .4,
respectively. Applying Tukey's method, Q.",),8 ~ 4.04 => w = 4.04,,112.85/9 = 4.83.

Thickness: .4 .3 .2
Mean: 29.560 35.183 38.356

33. The various sums of squares yield the accompanying ANOYA table.
Sonree df SS MS F
A 6 67.32 11.02
B 6 51.06 8.51
C 6 5.43 .91 .61
Error 30 44.26 1.48
Total 48 168.07

We're interested in factor C. At df> (6, 30), .61 < Fo56.)O = 2.42 => P-value > .05. Thus, we fail to reject
Hoc and conclude that heat treatment had no effect on aging.

35.
I 2 3 4 5

xr.. 40.68 30.04 44.02 32.14 33.21 I.x',.. = 6630.91


31.61 37.31 40.16 41.82 I.x3 = 6605.02
x.J 29.19

Xk 36.59 36.67 36.03 34.50 36.30 I.x'k = 6489.92

x .. = 180.09, CF = 1297.30, ~I.xt(k) = 1358.60


Source df SS MS F
A 4 28.89 7.22 10.71
B 4 23.71 5.93 8.79
C 4 0.63 0.16 0.23
Error 12 8.09 0.67
Total 24 61.30
F , 12 = 3.26, so the P-values for factor A and B effects are < .05 (10.71 > 3.26,8.79> 3.26), but the P-
05
value for the factor C effect is> .05 (0.23 <3.26). Both factor A (plant) and B(leafsize) appear to affect
moisture content, but factor C (time of weighing) does not.

149
C 2016 Cengagc Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible websilc, in whole or in pan.
Chapter 11: Multifactor Analysis of Variance

37. SST = (71)(93.621) = 6,647.091. Computing all other sums of squares and adding them up = 6,645.702.
Thus SSABCD = 6,647.091 - 6,645.702 = 1.389 and MSABCD = 1.389/4 = .347.

Source df MS F F.01,num dr, den dr*


A 2 2207.329 2259.29 5.39
B I 47.255 48.37 7.56
C 2 491.783 503.36 5.39
D I .044 .05 7.56
AB 2 15.303 15.66 5.39
AC 4 275.446 281.93 4.02
AD 2 .470 .48 5.39
BC 2 2.141 2.19 5.39
BD I .273 .28 7.56
CD 2 .247 .25 5.39
ABC 4 3.714 3.80 4.02
ABD 2 4.072 4.17 5.39
ACD 4 .767 .79 4.02
BCD 2 .280 .29 5.39
ABCD 4 .347 .355 4.02
Error 36 .977
Total 71
'Because denominator dffor 36 is not tabled, use df= 30.

To be significant at the .0 I level (P-value < .01), the calculated F statistic must be greater than the .01
critical value in the far right column. At level .01 the statistically significant main effects are A, B, C. The
interaction AB and AC are also statistically significant. No other interactions are statistically significant.

Section 11.4

39. Start by applying Yates' method. Each sum of squares is given by SS = (effect contrast)'124.

Total Effect
Condition Xiik I 2 Contrast SS
(1) 315 927 2478 5485
a 612 1551 3007 1307 SSA = 71,177.04
b 584 1163 680 1305 SSB = 70,959.38
ab 967 1844 627 199 SSAB = 1650.04
e 453 297 624 529 SSC = 11,660.04
ae 710 383 681 -53 SSAC = 117.04
be 737 257 86 57 SSBC = 135.38
abc 1107 370 113 27 SSABC = 30.38

a. Totals appear above. From these,


584+ 967 + 737 + 1107 -315 -612 -453 -710
Pt ~ X:2,. - x:... = 24
54.38 ;

r'AC
u =
315-612 +584-967 -453+ 710-737 + 1107
24
2.21 ; 9: = -rl~c= 2.21 .
l
C

150
C 2016 Cengage Learning. All Rights Reserved. May nOIbe scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 11: Multifactor Analysis of Variance

b. Factor sums of squares appear in the preceding table. From the original data, LLLL x:" = 1,411,889
and x .... = 5485, so SST ~ 1,411,889 - 5485'/24 = 158,337.96, from which SSE ~ 2608.7 (the
remainder).

df SS MS F P-value
Source
I 7 1,177.04 71 ,177.04 43565 <.001
A
70,959.38 70,959.38 435.22 <.001
B I
1 1650.04 1650.04 10.12 .006
AB
11,660.04 11,660.04 71.52 <.001
C 1
1 117.04 117.04 0.72 .409
AC
135.38 135.38 0.83 .376
BC 1
I 30.38 30.38 0.19 .672
ABC
Error 16 2608.7 163.04
Total 23 158,337.96

P-va1ues were obtained from software. Alternatively, a P-value less than .05 requires an F statistic
greater than F05,I,16 ~ 4.49. We see that the AB interaction and all the main effects are significant.

c. Yates' algorithm generates the 15 effect SS's in the ANOV A table; each SS is (effect contrast)'/48.
From the original data, LLLLLX~"m = 3,308,143 and x ..... = 11,956 ~ SST ~ 3,308,143 - 11,956'/48
328,607,98. SSE is the remaioder: SSE = SST - [sum of effect SS's] = ... = 4,339.33.

df SS MS F
Source
I 136,640.02 136,640.02 1007.6
A
I 139,644.19 139,644.19 1029.8
B
1 24,616.02 24,616.02 181.5
C
I 20,377.52 20,377.52 150.3
D
1 2,173.52 2,173.52 16.0
AB
I 2.52 2.52 0.0
AC
I 58.52 58.52 0.4
AD
I 165.02 165.02 1.2
BC
1 9.19 9.19 0.1
BD
1 17.52 17.52 0.1
CD
I 42.19 42.19 0.3
ABC
I 117.19 117.19 0.9
ABD
1 188.02 188.02 1.4
ACD
1 13.02 13.02 0.1
BCD
I 204.19 204.19 1.5
ABCD
Error 32 4,339.33 135.60
Total 47 328,607.98

In this case, a P-value less than .05 requires an F statistic greater than F05,I,J2 "" 4.15. Thus, all four
main effects and the AB interaction effect are statistically significant at the .05 level (and no other
effects are).

151
02016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted 10 a publicly accessible website, in whole Of in pan.
Chapter 11: Multifactor Analysis of Variance

41. The accompanying ANOV A table was created using software. All F statistics are quite large (some
extremely so) and all P-values are very small. So, in fact, all seven effects are statistically significant for
predicting quality.

Source df SS MS F P-value
A I .003906 .003906 25.00 .001
B I .242556 .242556 1552.36 <.001
C I .003906 .003906 25.00 .001
AB I .178506 .178506 1142.44 <.001
AC I .002256 .002256 14.44 .005
BC I .178506 .178506 1142.44 <.001
ABC I .002256 .002256 14.44 .005
Error 8 .000156 .000156
Total 15 .613144

43.
Conditionl SS = {conrrast)2
Conditionl SS _ (contrast)2
F
F - 16
Effect
Effect
(I)
" D 414.123 850.77
.436 <I AD .017 < 1
A
B .099 < I BD .456 < I
AB 003 <I ABD .990
C .109 < 1 CD 2.190 4.50
AC .078 < I ACD 1.020
BC 1.404 3.62 BCD .133
ABC .286 ABCD .004

SSE ~ .286 + .990 + 1.020 + .133 + .004 =2.433, df> 5, so MSE ~ .487, which forms the denominators of the F
values above. A P-value less tban .05 requires an F statistic greater than F05,1,5 = 6.61, so only the D main effect
is significant.

45.
a. The allocation of treatments to blocks is as given in the answer section (see back of book), with block
# 1 containing all treatments having an even number of letters in common with both ab and cd, block
#2 those having an odd number in common with ab and an even number with cd, etc.

16,898'
b. n:LLx~kfm= 9,035,054 and x ... = 16,898, so SST = 9,035,054 --3-2 - = 111,853.875. The eight

block-replication totals are 2091 (= 618 + 421 + 603 + 449, the sum of the four observations in block
#1 on replication #1),2092,2133,2145,2113,2080,2122, and 2122, so

S8BI=2091' + ... +2122' _16,898' =898.875. The effect SS's cao be computed via Yates'
4 4 32
algorithm; those we keep appear below. SSE is computed by SST - [sum of all other SS]. MSE =
5475.75/12 = 456.3125, which forms the denominator of the F ratios below. With FOl,I,I' =9.33, only
the A and B main effects are significant.

152
2016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in pan.
I

Chapter 11: Multifactor Analysis of Variance I

Source df SS F
A 1 12403.125 27.18 I
B I 92235.125 202.13
C I 3.125 001
D
AC
I
I
60.500
10.125
0.13
0.02 I
BC I 91.125 0.20
AD I 50.000 0.11
BC 1 420.500 0.92
ABC I 3.125 0.01
ABD I 0.500 0.00
ACD I 200.000 0.44
BCD 1 2.000 0.00
Block 7 898.875 0.28
Error 12 5475.750
Total 31 111853.875

47.
a. The third nonestirnable effect is (ABCDE)(CDEFG) ~ ABFG. Tbe treatments in tbe group containing (I)
are (1), ab, cd, ee, de, fg, acf. adf adg, aef aeg, aeg, beg, bcf bdf bdg, bef beg, abed, abee, abde, abfg,
edfg, eefg, defg, acdef aedeg, bcdef bedeg, abedfg, abeefg, abdefg. The alias groups of the seven main
effects are {A, BCDE, ACDEFG, BFG}, {B, ACDE, BCDEFG, AFG}, {C, ABDE, DEFG, ABCFG},
{D, ABCE, CEFG, ABDFG}, {E, ABCD, CDFG, ABEFG}, {F, ABCDEF, CDEG, ABG}, and
{G, ABCDEG, CDEF, ABF}.

b. I: (I),aef, beg, abed, abfg, edfg, aedeg, bcdef 2: ab, ed,fg, aeg, bef, acdef bedeg, abcdfg; 3: de, aeg, adf
bcf bdg, abee, eefg, abdefg; 4: ee. acf adg, beg, bdf abde, defg, abcefg.

49.
A B C D E AB AC AD AE BC BD BE CD CE DE
+ + + + + +
a 70.4 +
+ + + + + +
b 72.1 +
+ + + + + +
c 70.4 - +
+ + + + +
abc 73.8 + +
+ + + + + + +
d 67.4 -
+ + + + +
abd 67.0 + +
+ + + + + + +
acd 66.6
- + + + + + + +
bed 66.8
+ + + + + + +
e 68.0 -

+ + + + + + +
abe 67.8
+ + + + + +
ace 67.5 +
+ + + + + + +
bee 70.3 -
+ + + + + +
ade 64.0 +
+ + + + + +
bde 67.9 - +
+ + + + + + +
cde 65.9 -
+ + + + + + + + + + + + + +
abcde 68.0 +

153
Cl2016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 11: Multifactor Analysis ofYariance

Thus SSA ~ (70.4 - 72.1 - 70.4 + ...+ 68.0)' = 2.250, SSB ~ 7.840, SSC = .360, SSD ~ 52.563, SSE =
16
10.240, SSAB ~ 1.563, SSAC ~ 7.563, SSAD ~ .090, SSAE = 4.203, SSBC = 2.103, SSBD ~ .010, SSBE
~ .123, SSCD ~ .010, SSCE ~ .063, SSDE ~ 4.840, Error SS = sum of two factor SS's = 20.568, Error MS
= 2.057, FOl,I,lO= 10.04, so only the D main effect is significant.

Supplementary Exercises

51.
Source df SS MS F
A I 322,667 322.667 980.38
B 3 35.623 11.874 36.08
AB 3 8.557 2.852 8.67
Error 16 5.266 .329
Total 23 372.113

We first testthe null hypothesis of no interactions (Ho,. : Yij = 0 for all i,j). At df= (3, 16),5.29 < 8.67 <
9.01 => .01 < P-value < .001. Therefore, Ho is rejected. Because we have concluded that interaction is
present, tests for main effects are not appropriate.

53. Let A ~ spray volume, B = helt speed, C ~ brand. The Yates tahle and ANOV A table are below. At degrees
of freedom ~ (1,8), a P-value less than .05 requires F> 1".05,1,8 = 5.32. So all of the main effects are
significant at level .05, but none of the interactions are significant.
Condition Total 2 Contrast SS = (collll'astl
16
(I) 76 129 289 592 21,904.00
A 53 160 303 22 30.25
B 62 143 13 48 144.00
AB 98 160 9 134 1122.25
C 88 -23 31 14 12.25
AC 55 36 17 -4 1.00
BC 59 -33 59 -14 12.25
ABC 101 42 75 16 16.00

Effect df MS F
A I 30.25 6.72
B I 144.00 32.00
AB I 1122.25 249.39
C I 12.25 2.72
AC I 1.00 .22
BC I 12.25 2.72
ABC I 16.00 3.56
Error 8 4.50
Total 15

154
to 2016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
I[

Chapter 11: Multifactor Analysis of Variance


III
55.
a.
Effect
Effect %lron 1 2 3 Contrast SS
7 18 37 174 684
A 11 19 137 510 144 1296
B 7 62 169 50 36 81
AB 12 75 341 94 0 0
21 79 9 14 272 4624
C
41 90 41 22 32 64
AC
BC 27 165 47 2 12 9
48 176 47 -2 -4 I
ABC
D 28 4 1 100 336 7056
51 5 13 172 44 121
AD
33 20 11 32 8 4
BD
57 21 11 0 0 0
ABD
70 23 1 12 72 324
CD
95 24 I 0 -32 64
ACD
BCD 77 25 I 0 -12 9
99 22 -3 -4 -4 1
ABCD
We use estimate~ contrastrF when n ~ 1 to get ti, ~ 1~ ~ 144 ~ 9.00,p, ~ 36 ~ 2.25,
2 16 16
. 272
0, ~-~17.00,
16
r, ~-~21.00.
336
16
Similarly, (aft )" ~ 0, (a6)" ~ 2.00,(dy )" ~ 2.75,

(po)" ~.75, (pr)" ~.50, and (fir )',450.


b. The plot suggests main effects A, C, and D are quite important, and perhaps the interaction CD as well.
In fact, pooling the 4 three-factor interaction SS's and the four-factor interaction SS to obtain an SSE
based on 5 df and then constructing an ANOV A table suggests that these are the most important
effects.

20

~ 10 A

.. CD

.2 -1 0
z-percentile

155
Q 2016 Cengagc Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 11: Multifactor Analysis of Variance

57. The ANOVA table is:


Source df SS MS F F.Ol num df. den df

A 2 67553 33777 11.37 5.49


B 2 72361 36181 12.18 5.49
C 2 442111 221056 74.43 5.49
AB 4 9696 2424 0.82 4.11
AC 4 6213 1553 0.52 4.11
BC 4 34928 8732 2.94 4.11
ABC 8 33487 4186 1.41 3.26
Error 27 80192 2970
Total 53 746542

A P-value less than .01 requires an F statistic greater than the FOl value at the appropriate df (see the far
right column). All three main effects are statistically significant at the 1% level, but no interaction terms are
statistically significant at that level.

59. Based on the P-values in the ANOYA table, statistically significant factors at the level .01 are adhesive
type and cure time. The conductor material does not have a statistically significant effect on bond strength.
There are no significant interactions.

61. SSA ""'(X""


= L..L....
- X )' = .!..rr'
N'N
- X' , witb similar expressions for SSB, SSC, and SSD, each having
i j

N-I df.
z X'
SST = LL(Xif,", -X.) = LLX~,") -N with N' - I df, leaving N' -1-4(N -I) dffor error.
" j ! J

2 3 4 5 Ex'

x I...
482 446 464 468 434 1,053,916

x.J .. 470 451 440 482 451 1,053,626

x ..k . 372 429 484 528 481 1,066,826

x ...l . 340 417 466 537 534 1,080,170

Also, l:Ex~I"'= 220,378 , x .. = 2294 , and CF ~ 210,497.44.

Source df SS MS F
A 4 285.76 71.44 .594
B 4 227.76 56.94 .473
C 4 2867.76 716.94 5.958
D 4 5536.56 1384.14 11.502
Error 8 962.72 120.34
Total 24
At df> (4, 8), a P-value less than .05 requires an F-statistic greater than Fo, " = 3.84, HOA and HOB cannot
be rejected, while Hoc and HOD are rejected.

156
02016 Cengage Learning. All Rights Reserved. May nOI be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in pan.
CHAPTER 12

Section 12.1

I.
a. Stem and Leaf display of temp:

17 | 0              stem = tens
17 | 23             leaf = ones
17 | 445
17 | 67
17 |
18 | 0000011
18 | 2222
18 | 445
18 | 6
18 | 8

180 appears to be a typical value for this data. The distribution is reasonably symmetric in appearance
and somewhat bell-shaped. The variation in the data is fairly small since the range of values (188-
170 = 18) is fairly small compared to the typical value of 180.

0 | 889            stem = ones
1 | 0000           leaf = tenths
1 | 3
1 | 4444
1 | 66
1 | 8889
2 | 11
2 |
2 | 5
2 | 6
2 |
3 | 00

For the ratio data, a typical value is around 1.6 and the distribution appears to be positively skewed.
The variation in the data is large since the range of the data (3.08 - .84 = 2.24) is very large compared
to the typical value of 1.6. The two largest values could be outliers.

b. The efficiency ratio is not uniquely determined by temperature since there are several instances in the
data of equal temperatures associated with different efficiency ratios. For example, the five
observations with temperatures of 180 eacb bave different efficiency ratios.

157
C 2016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in pan.
Chapter 12: Simple Linear Regression and Correlation

c. A scatter plot of the data appears below. The points exhibit quite a bit of variation and do not appear
to fall close to any straight line or simple curve.

[Scatter plot of efficiency ratio versus temperature.]

3. A scatter plot of the data appears below. The points fall very close to a straight line with an intercept of
approximately 0 and a slope of about I. This suggests that the two methods are producing substantially the
same concentration measurements.

[Scatter plot of the two concentration measurements (y versus x); the points fall close to the line y = x.]

158
2016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted 10 a publicly accessible website, in whole or in part.
Chapter 12: Simple Linear Regression and Correlation

5.
a. The scatter plot with axes intersecting at (0,0) is shown below.

[Scatter plot of temperature (x) versus elongation (y), with the axes intersecting at (0, 0).]
b. The scatter plot with axes intersecting at (55, 100) is shown below.

[Scatter plot of temperature (x) versus elongation (y), with the axes intersecting at (55, 100).]

c. A parabola appears to provide a good fit to both graphs.

7.
a. μY·2500 = 1800 + 1.3(2500) = 5050

b. expected change = slope = β₁ = 1.3

c. expected change = 100β₁ = 130

d. expected change = -100β₁ = -130

9.
a. β₁ = expected change in flow rate (y) associated with a one-inch increase in pressure drop (x) = .095.

b. We expect flow rate to decrease by 5β₁ = .475.


c. μY·10 = -.12 + .095(10) = .83, and μY·15 = -.12 + .095(15) = 1.305.

d. P(Y > .835) = P(Z > (.835 - .830)/.025) = P(Z > .20) = .4207.
P(Y > .840) = P(Z > (.840 - .830)/.025) = P(Z > .40) = .3446.

e. Let Y₁ and Y₂ denote the flow rates observed at pressure drops of 10 and 11, respectively. Then μY·11 = .925, so
Y₁ - Y₂ has expected value .830 - .925 = -.095 and sd √((.025)² + (.025)²) = .035355. Thus
P(Y₁ > Y₂) = P(Y₁ - Y₂ > 0) = P(Z > (0 - (-.095))/.035355) = P(Z > 2.69) = .0036.

11.
a. β₁ = expected change for a one-degree increase = -.01, and 10β₁ = -.1 is the expected change for a 10-degree increase.

b. μY·200 = 5.00 - .01(200) = 3, and μY·250 = 2.5.

c. The probability that the first observation is between 2.4 and 2.6 is
P(2.4 ≤ Y ≤ 2.6) = P((2.4 - 2.5)/.075 ≤ Z ≤ (2.6 - 2.5)/.075) = P(-1.33 ≤ Z ≤ 1.33) = .8164. The probability that
any particular one of the other four observations is between 2.4 and 2.6 is also .8164, so the probability
that all five are between 2.4 and 2.6 is (.8164)⁵ = .3627.

d. Let Y₁ and Y₂ denote the times at the higher and lower temperatures, respectively. Then Y₁ - Y₂ has
expected value 5.00 - .01(x + 1) - (5.00 - .01x) = -.01. The standard deviation of Y₁ - Y₂ is
√((.075)² + (.075)²) = .10607. Thus P(Y₁ - Y₂ > 0) = P(Z > (0 - (-.01))/.10607) = P(Z > .09) = .4641.

Section 12.2

13. For this data, n = 4, Σxᵢ = 200, Σyᵢ = 5.37, Σxᵢ² = 12,000, Σyᵢ² = 9.3501, Σxᵢyᵢ = 333 ⇒
Sxx = 12,000 - (200)²/4 = 2000, SST = Syy = 9.3501 - (5.37)²/4 = 2.140875, Sxy = 333 - (200)(5.37)/4 = 64.5
⇒ β̂₁ = Sxy/Sxx = 64.5/2000 = .03225 ⇒ SSE = Syy - β̂₁Sxy = 2.140875 - (.03225)(64.5) = .060750. From these
calculations, r² = 1 - SSE/SST = 1 - .060750/2.140875 = .972. This is a very high value of r², which confirms the
authors' claim that there is a strong linear relationship between the two variables. (A scatter plot also shows
a strong, linear relationship.)
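As a quick check of the arithmetic above, here is a short Python sketch that reproduces the least squares quantities directly from the five summary sums (the variable names are ours):

```python
# Least squares quantities from summary statistics (Exercise 13).
n, Sx, Sy, Sxx_raw, Syy_raw, Sxy_raw = 4, 200, 5.37, 12_000, 9.3501, 333

Sxx = Sxx_raw - Sx**2 / n            # 2000
Syy = Syy_raw - Sy**2 / n            # 2.140875 (= SST)
Sxy = Sxy_raw - Sx * Sy / n          # 64.5

b1 = Sxy / Sxx                       # slope estimate, .03225
b0 = (Sy - b1 * Sx) / n              # intercept estimate
SSE = Syy - b1 * Sxy                 # .060750
r2 = 1 - SSE / Syy                   # .972

print(f"slope={b1:.5f}, intercept={b0:.5f}, SSE={SSE:.6f}, r^2={r2:.3f}")
```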


15.
a. The following stem and leaf display shows that: a typical value for this data is a number in the low
40's. There is some positive skew in the data. There are some potential outliers (79.5 and 80.0), and
there is a reasonably large amount of variation in the data (e.g., the spread 80.0-29.8 = 50.2 is large
compared with the typical values in the low 40's).

2 | 9                stem = tens
3 | 33               leaf = ones
3 | 5566677889
4 | 1223
4 | 56689
5 | 1
5 |
6 | 2
6 | 9
7 |
7 | 9
8 | 0

b. No, the strength values are not uniquely determined by the MoE values. For example, note that the
two pairs of observations having strength values of 42.8 have different MoE values.

c. The least squares line is ŷ = 3.2925 + .10748x. For a beam whose modulus of elasticity is x = 40, the
predicted strength would be ŷ = 3.2925 + .10748(40) = 7.59. The value x = 100 is far beyond the range
of the x values in the data, so it would be dangerous (i.e., potentially misleading) to extrapolate the
linear relationship that far.

d. From the output, SSE = 18.736, SST = 71.605, and the coefficient of determination is r² = .738 (or
73.8%). The r² value is large, which suggests that the linear relationship is a useful approximation to
the true relationship between these two variables.

17.
a. From software, the equation of the least squares line is ji = 118.91 - .905x. The accompanying fitted
line plot shows a very strong, linear association between unit weight and porosity. So, yes, we
anticipate the linear model. will explain a great deal of the variation iny.

[Fitted line plot: porosity = 118.9 - 0.9047(weight).]


b. The slope of the line is β̂₁ = -.905. A one-pcf increase in the unit weight of a concrete specimen is
associated with a .905 percentage point decrease in the specimen's predicted porosity. (Note: slope is
not ordinarily a percent decrease, but the units on porosity, y, are percentage points.)

c. When x = 135, the predicted porosity is ŷ = 118.91 - .905(135) = -3.265. That is, we get a negative
prediction for y, but in actuality y cannot be negative! This is an example of the perils of extrapolation;
notice that x = 135 is outside the scope of the data.

d. The first observation is (99.0, 28.8). So the actual value of y is 28.8, while the predicted value of y is
118.91 - .905(99.0) = 29.315. The residual for the first observation is y - ŷ = 28.8 - 29.315 = -.515 ≈
-.52. Similarly, for the second observation we have ŷ = 27.41 and residual = 27.9 - 27.41 = .49.

e. From software and the data provided, a point estimate of σ is s = .938. This represents the "typical"
size of a deviation from the least squares line. More precisely, predictions from the least squares line
are "typically" .938% off from the actual porosity percentage.

f. From software, r² = 97.4% or .974, the proportion of observed variation in porosity that can be
attributed to the approximate linear relationship between unit weight and porosity.

19. n = 14, Σxᵢ = 3300, Σyᵢ = 5010, Σxᵢ² = 913,750, Σyᵢ² = 2,207,100, Σxᵢyᵢ = 1,413,500.

a. β̂₁ = [14(1,413,500) - (3300)(5010)]/[14(913,750) - (3300)²] = 3,256,000/1,902,500 = 1.71143233 and
β̂₀ = -45.55190543, so the equation of the least squares line is roughly ŷ = -45.5519 + 1.7114x.

b. μ̂Y·225 = -45.5519 + 1.7114(225) = 339.51.

c. Estimated expected change = -50β̂₁ = -85.57.

d. No, the value 500 is outside the range of x values for which observations were available (the danger of
extrapolation).

21.
a. Yes - a scatter plot of the data shows a strong, linear pattern, and r² = 98.5%.

b. From the output, the estimated regression line is ŷ = 321.878 + 156.711x, where x = absorbance and y
= resistance angle. For x = .300, ŷ = 321.878 + 156.711(.300) = 368.89.

c. The estimated regression line serves as an estimate both for a single Y at a given x-value and for the
true average μY·x at a given x-value. Hence, our estimate for μY·.300 is also 368.89.

23.
a. Using the given ŷᵢ's from ŷ = -45.5519 + 1.7114x,
SSE = (150 - 125.6)² + ... + (670 - 639.0)² = 16,213.64. The computational formula gives
SSE = 2,207,100 - (-45.55190543)(5010) - (1.71143233)(1,413,500) = 16,205.45.

b. SST = 2,207,100 - (5010)²/14 = 414,235.71, so r² = 1 - 16,205.45/414,235.71 = .961.


25. Substitution of β̂₀ = (Σyᵢ - β̂₁Σxᵢ)/n and β̂₁ for b₀ and b₁ on the left-hand side of the first normal equation
yields n·(Σyᵢ - β̂₁Σxᵢ)/n + β̂₁Σxᵢ = Σyᵢ - β̂₁Σxᵢ + β̂₁Σxᵢ = Σyᵢ, which verifies they satisfy the first one. The same
substitution in the left-hand side of the second equation gives
(Σxᵢ)(Σyᵢ - β̂₁Σxᵢ)/n + (Σxᵢ²)β̂₁ = (Σxᵢ)(Σyᵢ)/n + β̂₁[nΣxᵢ² - (Σxᵢ)²]/n.
The last term in brackets, divided by n, is Sxx, so making that substitution along with Equation (12.2), β̂₁ = Sxy/Sxx, we have
(Σxᵢ)(Σyᵢ)/n + (Sxy/Sxx)(Sxx) = (Σxᵢ)(Σyᵢ)/n + Sxy. By the definition of Sxy, this last expression is exactly
Σxᵢyᵢ, which verifies that the slope and intercept formulas satisfy the second normal equation.

27. We wish to find b₁ to minimize f(b₁) = Σ(yᵢ - b₁xᵢ)². Setting f'(b₁) equal to 0 yields
Σ[2(yᵢ - b₁xᵢ)(-xᵢ)] = 0 ⇒ Σxᵢyᵢ = b₁Σxᵢ² ⇒ b₁ = Σxᵢyᵢ/Σxᵢ². The least squares
estimator of β₁ is thus β̂₁ = ΣxᵢYᵢ/Σxᵢ².
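A short numerical check of this closed-form solution, using a small made-up data set (the numbers below are purely illustrative, not from the exercise):

```python
# Regression through the origin: b1 = sum(x*y) / sum(x^2).
# Illustrative data only; any (x, y) pairs would do.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 8.1]

b1 = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi**2 for xi in x)

# Verify that this b1 minimizes the no-intercept sum of squares by
# comparing it with nearby slopes.
def sse(b):
    return sum((yi - b * xi)**2 for xi, yi in zip(x, y))

assert sse(b1) <= min(sse(b1 - 0.01), sse(b1 + 0.01))
print(f"b1 = {b1:.4f}, SSE at b1 = {sse(b1):.4f}")
```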

29. For data set #1, r² = .43 and s = 4.03; for #2, r² = .99 and s = 4.03; for #3, r² = .99 and s = 1.90. In general,
we hope for both large r² (large % of variation explained) and small s (indicating that observations don't
deviate much from the estimated line). Simple linear regression would thus seem to be most effective for
data set #3 and least effective for data set #1.

[Scatter plots of the three data sets.]

Section 12.3

31.
a. Software output from least squares regression on this data appears below. From the output, we see that
r² = 89.26% or .8926, meaning 89.26% of the observed variation in threshold stress (y) can be
attributed to the (approximate) linear model relationship with yield strength (x).

Regression Equation

y 211.655 - 0.175635 x

Coefficients

Coef SE Coef T p 95% CI


Term
Constant 211.655 15.0622 14.0521 0.000 (178.503, 244.807)
x -0.176 0.0184 -9.5618 0.000 ( -0.216, -0.135)

Summary of Model

S ~ 6.80578 R-Sq ~ 89.26% R-Sq(adj) = 88.28%

b. From the software output, β̂₁ = -0.176 and s_β̂₁ = 0.0184. Alternatively, the residual standard deviation
is s = 6.80578, and the sum of squared deviations of the x-values can be calculated to equal Sxx =
Σ(xᵢ - x̄)² = 138,095. From these, s_β̂₁ = s/√Sxx = .0183 (due to some slight rounding error).

c. From the software output, a 95% CI for β₁ is (-0.216, -0.135). This is a fairly narrow interval, so β₁
has indeed been precisely estimated. Alternatively, with n = 13 we may construct a 95% CI for β₁ as
β̂₁ ± t.025,11·s_β̂₁ = -0.176 ± 2.179(.0184) = (-0.216, -0.136).

33.
a. Error df = n - 2 = 25, t.025,25 = 2.060, and so the desired confidence interval is
β̂₁ ± t.025,25·s_β̂₁ = .10748 ± (2.060)(.01280) = (.081, .134). We are 95% confident that the true average
change in strength associated with a 1 GPa increase in modulus of elasticity is between .081 MPa and
.134 MPa.

b. We wish to test H₀: β₁ ≤ .1 versus Hₐ: β₁ > .1. The calculated test statistic is
t = (β̂₁ - .1)/s_β̂₁ = (.10748 - .1)/.01280 = .58, which yields a P-value of .277 at 25 df. Thus, we fail to reject H₀; i.e.,
there is not enough evidence to contradict the prior belief.

35.
a. We want a 95% CI for β₁. Using the given summary statistics, Sxx = 3056.69 - (222.1)²/17 = 155.019,
Sxy = 2759.6 - (222.1)(193)/17 = 238.112, and β̂₁ = Sxy/Sxx = 238.112/155.019 = 1.536. We need
β̂₀ = [193 - (1.536)(222.1)]/17 = -8.715 to calculate the SSE:
SSE = 2975 - (-8.715)(193) - (1.536)(2759.6) = 418.2494. Then s = √(418.2494/15) = 5.28 and
s_β̂₁ = 5.28/√155.019 = .424. With t.025,15 = 2.131, our CI is 1.536 ± 2.131(.424) = (.632, 2.440). With
95% confidence, we estimate that the change in reported nausea percentage for every one-unit change
in motion sickness dose is between .632 and 2.440.

b. We test the hypotheses H₀: β₁ = 0 versus Hₐ: β₁ ≠ 0, and the test statistic is t = 1.536/.424 = 3.6226. With
df = 15, the two-tailed P-value = 2P(T > 3.6226) ≈ 2(.001) = .002. With a P-value of .002, we would
reject the null hypothesis at most reasonable significance levels. This suggests that there is a useful
linear relationship between motion sickness dose and reported nausea.

c. No. A regression model is only useful for estimating values of nausea % when using dosages between
6.0 and 17.6, the range of values sampled.

d. Removing the point (6.0, 2.50), the new summary stats are: n = 16, Σxᵢ = 216.1, Σyᵢ = 191.5,
Σxᵢ² = 3020.69, Σyᵢ² = 2968.75, Σxᵢyᵢ = 2744.6, and then β̂₁ = 1.561, β̂₀ = -9.118, SSE = 430.5264,
s = 5.55, s_β̂₁ = .551, and the new CI is 1.561 ± 2.145(.551), or (.379, 2.743). The interval is a little
wider, but removing the one observation did not change it that much. The observation does not seem
to be exerting undue influence.

37.
a. Let μ_d = the true mean difference in velocity between the two planes. We have 23 pairs of data that we
will use to test H₀: μ_d = 0 v. Hₐ: μ_d ≠ 0. From software, x̄_d = 0.2913 with s_d = 0.1748, and so
t = (0.2913 - 0)/(0.1748/√23) ≈ 8, which has a two-sided P-value of 0.000 at 22 df. Hence, we strongly reject the null
hypothesis and conclude there is a statistically significant difference in true average velocity in the two
planes. [Note: A normal probability plot of the differences shows one mild outlier, so we have slight
concern about the results of the t procedure.]

b. Let β₁ denote the true slope for the linear relationship between Level-- velocity and Level- velocity.
We wish to test H₀: β₁ = 1 v. Hₐ: β₁ < 1. Using the relevant numbers provided, t = (b₁ - 1)/s(b₁) = (0.65393 - 1)/0.05947
≈ -5.8, which has a one-sided P-value at 23 - 2 = 21 df of P(T < -5.8) ≈ 0. Hence, we strongly reject the
null hypothesis and conclude the same as the authors; i.e., the true slope of this regression relationship
is significantly less than 1.


39. SSE = 124,039.58 - (72.958547)(1574.8) - (.04103377)(222,657.88) = 7.9679, and SST = 39.828.

Source    df      SS       MS       f
Regr       1    31.860   31.860   71.97
Error     18     7.968     .443
Total     19    39.828

At df = (1, 18), f = 71.97 > F.001,1,18 = 15.38 implies that the P-value is less than .001. So, H₀: β₁ = 0 is
rejected and the model is judged useful. Also, s = √MSE = √.4427 = .6654 and Sxx = 18,921.8295, so
t = .04103377/(.6654/√18,921.8295) = 8.48 and t² = (8.48)² ≈ 71.9 = f, showing the equivalence of the
two tests.
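The equivalence can be verified numerically from the summary quantities quoted above; here is a small Python sketch (variable names ours):

```python
import math

# Summary quantities for Exercise 39.
n = 20
sum_y, sum_xy, sum_y2 = 1574.8, 222_657.88, 124_039.58
b0, b1 = 72.958547, 0.04103377
Sxx, SST = 18_921.8295, 39.828

SSE = sum_y2 - b0 * sum_y - b1 * sum_xy      # about 7.97
SSR = SST - SSE                              # about 31.86
MSE = SSE / (n - 2)                          # s^2
f = SSR / MSE                                # model utility F statistic
t = b1 / (math.sqrt(MSE) / math.sqrt(Sxx))   # t statistic for H0: beta1 = 0

print(f"SSE={SSE:.3f}, f={f:.2f}, t={t:.2f}, t^2={t**2:.2f}")
# t**2 agrees with f (up to rounding), illustrating the equivalence.
```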

41.
a. Under the regression model, E(Yᵢ) = β₀ + β₁xᵢ and, hence, E(Ȳ) = β₀ + β₁x̄. Therefore,
E(Yᵢ - Ȳ) = β₁(xᵢ - x̄), and
E(β̂₁) = E[Σ(xᵢ - x̄)(Yᵢ - Ȳ)/Σ(xᵢ - x̄)²] = Σ(xᵢ - x̄)E[Yᵢ - Ȳ]/Σ(xᵢ - x̄)²
= Σ(xᵢ - x̄)β₁(xᵢ - x̄)/Σ(xᵢ - x̄)² = β₁·Σ(xᵢ - x̄)²/Σ(xᵢ - x̄)² = β₁.

b. Here, we'll use the fact that Σ(xᵢ - x̄)(Yᵢ - Ȳ) = Σ(xᵢ - x̄)Yᵢ - ȲΣ(xᵢ - x̄) = Σ(xᵢ - x̄)Yᵢ - Ȳ(0) =
Σ(xᵢ - x̄)Yᵢ. With c = Σ(xᵢ - x̄)², β̂₁ = (1/c)Σ(xᵢ - x̄)(Yᵢ - Ȳ) = Σ[(xᵢ - x̄)/c]·Yᵢ ⇒ since the Yᵢ's are
independent,
V(β̂₁) = Σ[(xᵢ - x̄)/c]²·V(Yᵢ) = σ²Σ(xᵢ - x̄)²/c² = σ²/c = σ²/Σ(xᵢ - x̄)², or, equivalently,
σ²/[Σxᵢ² - (Σxᵢ)²/n], as desired.

43. The numerator of d is |1 - 2| = 1, and the denominator is 4√(14/324.40) = .831, so d = 1/.831 = 1.20. The
approximate power curve is for n - 2 = 13 df, and β is read from Table A.17 as approximately .1.


Section 12.4
45.
a. We wish to find a 90% CI for μY·125: ŷ₁₂₅ = 78.088, t.05,18 = 1.734, and
s_ŷ = s√(1/20 + (125 - 140.895)²/18,921.8295) = .1674. Putting it together, we get 78.088 ± 1.734(.1674) =
(77.797, 78.378).

b. We want a 90% PI. Only the standard error changes: s√(1 + 1/20 + (125 - 140.895)²/18,921.8295) = .6860, so the PI is
78.088 ± 1.734(.6860) = (76.898, 79.277).

c. Because the x* of 115 is farther away from x̄ than the previous value, the term (x* - x̄)² will be
larger, making the standard error larger, and thus the width of the interval wider.

d. We would be testing whether, when the filtration rate is 125 kg-DS/m/h, the true average moisture
content of the compressed pellets is less than 80%. The test statistic is t = (78.088 - 80)/.1674 = -11.42, and
with 18 df the P-value is P(T < -11.42) ≈ 0.00. Hence, we reject H₀. There is significant evidence to
prove that the true average moisture content when filtration rate is 125 is less than 80%.
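These interval calculations are easy to script. Below is a minimal Python sketch using the quantities quoted above; the residual standard deviation s ≈ .665 is not stated explicitly here but is implied by the standard error .1674, so treat it as an assumption (names ours):

```python
import math

# CI and PI for the mean response / a future Y at x* (Exercise 45).
n, xbar, Sxx = 20, 140.895, 18_921.8295
s = 0.665                      # residual sd implied by the quoted SE (assumed)
y_hat, t_crit = 78.088, 1.734  # fitted value at x* = 125 and t_.05,18

x_star = 125
se_mean = s * math.sqrt(1/n + (x_star - xbar)**2 / Sxx)        # about .167
se_pred = s * math.sqrt(1 + 1/n + (x_star - xbar)**2 / Sxx)    # about .686

ci = (y_hat - t_crit * se_mean, y_hat + t_crit * se_mean)
pi = (y_hat - t_crit * se_pred, y_hat + t_crit * se_pred)
print(f"90% CI for mean response: ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"90% PI for a new Y:       ({pi[0]:.3f}, {pi[1]:.3f})")
```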

47.
a. ŷ(40) = -1.128 + .82697(40) = 31.95, t.025,13 = 2.160; a 95% PI for runoff is
31.95 ± 2.160√((5.24)² + (1.44)²) = 31.95 ± 11.74 = (20.21, 43.69).
No, the resulting interval is very wide, therefore the available information is not very precise.

b. Σxᵢ = 798, Σxᵢ² = 63,040, which gives Sxx = 20,586.4, which in turn gives
s_ŷ(50) = 5.24√(1/15 + (50 - 53.20)²/20,586.4) = 1.358, so the PI for runoff when x = 50 is
40.22 ± 2.160√((5.24)² + (1.358)²) = 40.22 ± 11.69 = (28.53, 51.92). The simultaneous prediction level
for the two intervals is at least 100(1 - 2α)% = 90%.

49. The 95% CI = (462.1, 597.7) ⇒ midpoint = 529.9; t.025,8 = 2.306 ⇒ 529.9 + (2.306)·s_{β̂₀+β̂₁(15)} = 597.7 ⇒
s_{β̂₀+β̂₁(15)} = 29.402 ⇒ 99% CI = 529.9 ± t.005,8(29.402) = 529.9 ± (3.355)(29.402) = (431.3, 628.5).

51.
a. 0.40 is closer to x̄.

b. β̂₀ + β̂₁(0.40) ± t_{α/2,n-2}·s_{β̂₀+β̂₁(0.40)}, or 0.8104 ± 2.101(0.0311) = (0.745, 0.876).

c. β̂₀ + β̂₁(1.20) ± t_{α/2,n-2}·√(s² + s²_{β̂₀+β̂₁(1.20)}), or 0.2912 ± 2.101√((0.1049)² + (0.0352)²) = (.059, .523).

53. Choice a will be the smallest, with d being largest. The width of interval a is less than b and c (obviously),
and b and c are both smaller than d. Nothing can be said about the relationship between b and c.

55. β̂₀ + β̂₁x* = ΣdᵢYᵢ, where dᵢ = 1/n + (x* - x̄)(xᵢ - x̄)/Sxx. Thus, since the Yᵢ's are independent,
V(β̂₀ + β̂₁x*) = Σdᵢ²V(Yᵢ) = σ²Σdᵢ²
= σ²Σ[1/n² + 2(x* - x̄)(xᵢ - x̄)/(nSxx) + (x* - x̄)²(xᵢ - x̄)²/Sxx²]
= σ²[n/n² + 2(x* - x̄)Σ(xᵢ - x̄)/(nSxx) + (x* - x̄)²Σ(xᵢ - x̄)²/Sxx²]
= σ²[1/n + 2(x* - x̄)·0/(nSxx) + (x* - x̄)²Sxx/Sxx²]
= σ²[1/n + (x* - x̄)²/Sxx].

Section 12.5

57. Most people acquire a license as soon as they become eligible. If, for example, the minimum age for
obtaining a license is 16, then the time since acquiring a license, y, is usually related to age, x, by the equation
y ≈ x - 16, which is the equation of a straight line. In other words, the majority of people in a sample will
have y values that closely follow the line y = x - 16.

59.
a. Sxx = 251,970 - (1950)²/18 = 40,720, Syy = 130.6074 - (47.92)²/18 = 3.033711, and
Sxy = 5530.92 - (1950)(47.92)/18 = 339.586667, so r = 339.586667/√((40,720)(3.033711)) = .9662. There is a very
strong, positive correlation between the two variables.

b. Because the association between the variables is positive, the specimen with the larger shear force will
tend to have a larger percent dry fiber weight.

c. Changing the units of measurement on either (or both) variables will have no effect on the calculated
value of r, because any change in units will affect both the numerator and denominator of r by exactly
the same multiplicative constant.

d. r² = .9662² = .933, or 93.3%.

e. We wish to test H₀: ρ = 0 v. Hₐ: ρ > 0. The test statistic is t = r√(n - 2)/√(1 - r²) = .9662√16/√(1 - .9662²) = 14.94. This is
"off the charts" at 16 df, so the one-tailed P-value is less than .001. So, H₀ should be rejected: the data
indicate a positive linear relationship between the two variables.
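A short Python sketch reproducing these correlation calculations from the summary sums (variable names are ours):

```python
import math

# Sample correlation and t statistic for H0: rho = 0 (Exercise 59).
n = 18
Sxx = 251_970 - 1950**2 / n
Syy = 130.6074 - 47.92**2 / n
Sxy = 5530.92 - 1950 * 47.92 / n

r = Sxy / math.sqrt(Sxx * Syy)                  # about .9662
t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)  # about 15, with n - 2 = 16 df
print(f"r = {r:.4f}, r^2 = {r**2:.3f}, t = {t:.2f}")
```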


61.
a. We are testing H₀: ρ = 0 v. Hₐ: ρ > 0. The correlation is r = 7377.704/[√36.9839·√2,628,930.359] = .7482, and
the test statistic is t = r√(n - 2)/√(1 - r²) ≈ 3.9. At 14 df, the P-value is roughly .001. Hence, we reject H₀:
there is evidence that a positive correlation exists between maximum lactate level and muscular
endurance.

b. We are looking for r², the coefficient of determination: r² = (.7482)² = .5598, or about 56%. It is the
same no matter which variable is the predictor.

63. With the aid of software, the sample correlation coefficient is r = .7729. To test H₀: ρ = 0 v. Hₐ: ρ ≠ 0, the
test statistic is t = (.7729)√4/√(1 - (.7729)²) = 2.44. At 4 df, the 2-sided P-value is about 2(.035) = .07 (software gives
a P-value of .072). Hence, we fail to reject H₀: the data do not indicate that the population correlation
coefficient differs from 0. This result may seem surprising due to the relatively large size of r (.77);
however, it can be attributed to a small sample size (n = 6).

65.
a. From the summary statistics provided, a point estimate for the population correlation coefficient ρ is
r = Σ(xᵢ - x̄)(yᵢ - ȳ)/√[Σ(xᵢ - x̄)²·Σ(yᵢ - ȳ)²] = 44,185.87/√[(64,732.83)(130,566.96)] = .4806.

b. The hypotheses are H₀: ρ = 0 versus Hₐ: ρ ≠ 0. Assuming bivariate normality, the test statistic value is
t = r√(n - 2)/√(1 - r²) = .4806√13/√(1 - .4806²) = 1.98. At df = 15 - 2 = 13, the two-tailed P-value for this t test is
2P(T₁₃ ≥ 1.98) ≈ 2P(T₁₃ ≥ 2.0) = 2(.033) = .066. Hence, we fail to reject H₀ at the .01 level; there is not
sufficient evidence to conclude that the population correlation coefficient between internal and external
rotation velocity is not zero.

c. If we tested H₀: ρ = 0 versus Hₐ: ρ > 0, the one-sided P-value would be .033. We would still fail to
reject H₀ at the .01 level, lacking sufficient evidence to conclude a positive true correlation coefficient.
However, for a one-sided test at the .05 level, we would reject H₀ since P-value = .033 < .05. We have
evidence at the .05 level that the true population correlation coefficient between internal and external
rotation velocity is positive.

67.
a. Because P-value = .00032 < α = .001, H₀ should be rejected at this significance level.

b. Not necessarily. For such a large n, the test statistic t has approximately a standard normal distribution
when H₀: ρ = 0 is true, and a P-value of .00032 corresponds to z = 3.60. Solving 3.60 = r√498/√(1 - r²)
for r yields r = .159. That is, with n = 500 we'd obtain this P-value with r = .159. Such an r value
suggests only a weak linear relationship between x and y, one that would typically have little practical
importance.


c. The test statistic value would be t = .022√(10,000 - 2)/√(1 - .022²) = 2.20; since the test statistic is again
approximately normal, the 2-sided P-value would be roughly 2[1 - Φ(2.20)] = .0278 < .05, so H₀ is
rejected in favor of Hₐ at the .05 significance level. The value t = 2.20 is statistically significant - it
cannot be attributed just to sampling variability in the case ρ = 0. But with this enormous n, r = .022
implies ρ ≈ .022, indicating an extremely weak relationship.

Supplementary Exercises

69. Use available software for all calculations.

a. We want a confidence interval for β₁. From software, b₁ = 0.987 and s(b₁) = 0.047, so the
corresponding 95% CI is 0.987 ± t.025,17(0.047) = 0.987 ± 2.110(0.047) = (0.888, 1.086). We are 95%
confident that the true average change in sale price associated with a one-foot increase in truss height
is between $0.89 per square foot and $1.09 per square foot.

b. Using software, a 95% CI for μY·25 is (47.730, 49.172). We are 95% confident that the true average sale
price for all warehouses with 25-foot truss height is between $47.73/ft² and $49.17/ft².

c. Again using software, a 95% PI for Y when x = 25 is (45.378, 51.524). We are 95% confident that the
sale price for a single warehouse with 25-foot truss height will be between $45.38/ft² and $51.52/ft².

d. Since x = 25 is nearer the mean than x = 30, a PI at x = 30 would be wider.

e. From software, r² = SSR/SST = 890.36/924.44 = .963. Hence, r = √.963 = .981.

71. Use software whenever possible.

a. From software, the estimated coefficients are β̂₁ = 16.0593 and β̂₀ = 0.1925.

b. Test H₀: β₁ = 0 versus Hₐ: β₁ ≠ 0. From software, the test statistic is t = (16.0593 - 0)/0.2965 = 54.15; even at
just 7 df, this is "off the charts" and the P-value is ≈ 0. Hence, we strongly reject H₀ and conclude that
a statistically significant relationship exists between the variables.

c. From software or by direct computation, residual sd = s = .2626, x̄ = .408 and Sxx = .784. When x = x*
= .2, ŷ = 0.1925 + 16.0593(.2) = 3.404 with an estimated standard deviation of
s_ŷ = s√(1/n + (x* - x̄)²/Sxx) = .2626√(1/9 + (.2 - .408)²/.784) = .107. The analogous calculations when x = x* = .4
result in ŷ = 6.616 and s_ŷ = .088, confirming what's claimed. Prediction error is larger when x = .2
because .2 is farther from the sample mean of .408 than is x = .4.

d. A 95% CI for μY·.4 is ŷ ± t.025,7·s_ŷ = 6.616 ± 2.365(.088) = (6.41, 6.82).

e. A 95% PI for Y when x = .4 is ŷ ± t.025,7·√(s² + s_ŷ²) = (5.96, 7.27).


73.
a. From the output, r² = .5073.

b. r = (sign of slope)·√r² = +√.5073 = .7122.

c. We test H₀: β₁ = 0 versus Hₐ: β₁ ≠ 0. The test statistic t = 3.93 gives P-value = .0013, which is < .01,
the given level of significance, therefore we reject H₀ and conclude that the model is useful.

d. We use a 95% CI for μY·50: ŷ(50) = .787218 + .007570(50) = 1.165718; t.025,15 = 2.131;
s = "Root MSE" = .20308 ⇒ s_ŷ = .20308·√(1/17 + 17(50 - 42.33)²/[17(41,575) - (719.60)²]) = .051422. The resulting 95%
CI is 1.165718 ± 2.131(.051422) = 1.165718 ± .109581 = (1.056137, 1.275299).

e. Our prediction is ŷ(30) = .787218 + .007570(30) = 1.0143, with a corresponding residual of y - ŷ =
.80 - 1.0143 = -.2143.

75.
a. With y = stride rate and x = speed, we have Sxy = 660.130 - (205.4)(35.16)/11 = 3.597 and
Sxx = 3880.08 - (205.4)²/11 = 44.702, so β̂₁ = Sxy/Sxx = 3.597/44.702 = 0.080466 and
β̂₀ = ȳ - β̂₁x̄ = (35.16/11) - 0.080466(205.4/11) = 1.694. So, the least squares line for
predicting stride rate from speed is ŷ = 1.694 + 0.080466x.

b. With y = speed and x = stride rate, we have β̂₁ = Sxy/Sxx = 3.597/[112.681 - (35.16)²/11] = 3.597/0.297 =
12.117 and β̂₀ = ȳ - β̂₁x̄ = (205.4/11) - 12.117(35.16/11) = -20.058. So, the least squares line for
predicting speed from stride rate is ŷ = -20.058 + 12.117x.

c. The fastest way to find r² from the available information is r² = β̂₁²·Sxx/Syy. For the first regression, this
gives r² = (0.080466)²(44.702)/0.297 ≈ .97. For the second regression, r² = (12.117)²(0.297)/44.702 ≈ .97 as well. In
fact, rounding error notwithstanding, these two r² values should be exactly the same.
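The symmetry in part c is easy to verify numerically; here is a brief Python sketch using the summary sums above (names ours):

```python
# Exercise 75: regress stride rate on speed and speed on stride rate.
n = 11
sum_x, sum_y = 205.4, 35.16            # x = speed, y = stride rate
sum_x2, sum_y2, sum_xy = 3880.08, 112.681, 660.130

Sxx = sum_x2 - sum_x**2 / n
Syy = sum_y2 - sum_y**2 / n
Sxy = sum_xy - sum_x * sum_y / n

b1_y_on_x = Sxy / Sxx                  # about 0.0805 (stride rate on speed)
b1_x_on_y = Sxy / Syy                  # about 12.1   (speed on stride rate)
r2 = Sxy**2 / (Sxx * Syy)              # same r^2 for both regressions

print(f"slopes: {b1_y_on_x:.5f} and {b1_x_on_y:.3f}, r^2 = {r2:.4f}")
# Note that b1_y_on_x * b1_x_on_y equals r^2 exactly.
```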

77.
a. Yes: the accompanying scatterplot suggests an extremely strong, positive, linear relationship between
the amount of oil added to the wheat straw and the amount recovered.

",,------------,
"
:3

,
11

1"
~
!
.... 8 111 I~ " 16 It
....... un' of Gil oddl~)


b. Pieces of Minitab output appear below. From the output, r' = 99.6% or .996. That is, 99.6% of the total
variation in the amount of oil recovered in the wheat straw can be explained by a linear regression on
the amount of oil added to it.

Predictor Coef SE Coef T P


Constant -0.5234 0.1453 -3.60 0.003
x 0.87825 0.01610 54.56 0.000

S 0.311816 R-Sq ~ 99.6% R-Sq (adj 1 99.5%

Predicted Values for New Observations

New Obs Fi t SE Fi t 95% CI 95% PI


1 3.8678 0.0901 13.6732, 4.0625) 13.1666, 4.56901

c. Refer to the preceding Minitab output. A test of H₀: β₁ = 0 versus Hₐ: β₁ ≠ 0 returns a test statistic of
t = 54.56 and a P-value of ≈ 0, from which we can strongly reject H₀ and conclude that a statistically
significant linear relationship exists between the variables. (No surprise, based on the scatterplot!)

d. The last line of the preceding Minitab output comes from requesting predictions at x ~ 5.0 g. The
resulting 95% PI is (3.1666, 4.5690). So, at a 95% prediction level, the amount of oil recovered from
wheat straw when the amount added was 5.0 g will fall between 3.1666 g and 4.5690 g.

e. A formal test of H₀: ρ = 0 versus Hₐ: ρ ≠ 0 is completely equivalent to the t test for slope conducted in
c. That is, the test statistic and P-value would once again be t = 54.56 and P ≈ 0, leading to the
conclusion that ρ ≠ 0.
79. Start with the alternative formula SSE = Σy² - β̂₀Σy - β̂₁Σxy. Substituting β̂₀ = (Σy - β̂₁Σx)/n,
SSE = Σy² - (Σy - β̂₁Σx)Σy/n - β̂₁Σxy = [Σy² - (Σy)²/n] - β̂₁[Σxy - ΣxΣy/n]
= Syy - β̂₁Sxy.

81.
a. Recall that r = Sxy/√(Sxx·Syy), sₓ² = Sxx/(n - 1) = Σ(xᵢ - x̄)²/(n - 1), and similarly s_y² = Syy/(n - 1). Using these formulas,
β̂₁ = Sxy/Sxx = r·√(Sxx·Syy)/Sxx = r·√(Syy/Sxx) = r·√[(n - 1)s_y²/((n - 1)sₓ²)] = r·(s_y/sₓ). Using the fact that β̂₀ = ȳ - β̂₁x̄, the least
squares equation becomes ŷ = β̂₀ + β̂₁x = ȳ + β̂₁(x - x̄) = ȳ + r·(s_y/sₓ)(x - x̄), as desired.

b. In Exercise 64, r = .700. So, a specimen whose UV transparency index is 1 standard deviation below
average is predicted to have a maximum prevalence of infection that is .7 standard deviations below
average.

83. Remember that SST = Syy and use Exercise 79 to write SSE = Syy - β̂₁Sxy = Syy - Sxy²/Sxx. Then
r² = Sxy²/(Sxx·Syy) = (Syy - SSE)/Syy = 1 - SSE/Syy = 1 - SSE/SST.


85. Using Minitab, we create a scatterplot to see if a linear regression model is appropriate.

[Scatter plot of blood glucose level versus time.]

A linear model is reasonable; although it appears that the variance in y gets larger as x increases. The
Minitab output follows:

The regression equation is


blood glucose level = 3.70 + 0.0379 time

T p
Predictor Coef St.Dev
0.2159 17.12 0.000
Constant 3.6965
0.006137 6.17 0.000
time 0.037895

R-Sq 63.4% R-Sq(adj) = 61.7%


S = 0.5525

Analysis of Variance
F p
OF SS MS
Source 38.12 0.000
1 11.638 11.638
Regression
22 6.716 0.305
Residual Error
Total                23     18.353

The coefficient of determination of 63.4% indicates that only a moderate percentage of the variation in y
can be explained by the change in x. A test of model utility indicates that time is a significant predictor of
blood glucose level. (t = 6.17, P '" 0). A point estimate for blood glucose level when time = 30 minutes is
4.833%. We would expect the average blood glucose level at 30 minutes to be between 4.599 and 5.067,
with 95% confidence.

87.
From the SAS output in Exercise 73, n₁ = 17, SSE₁ = 0.61860, β̂₁ = 0.007570; by direct computation,
SSx₁ = 11,114.6. The pooled estimated variance is σ̂² = (.61860 + .51350)/(17 + 15 - 4) = .040432, and the calculated test
statistic for testing H₀: β₁ = γ₁ is
t = (.007570 - .006845)/√[.040432(1/11,114.6 + 1/7152.5578)] ≈ 0.24. At 28 df, the two-tailed P-value is roughly 2(.39) = .78.
With such a large P-value, we do not reject H₀ at any reasonable level (in particular, .78 > .05). The data do
not provide evidence that the expected change in wear loss associated with a 1% increase in austentite
content is different for the two types of abrasive - it is plausible that β₁ = γ₁.

CHAPTER 13

Section 13.1

1.
a. x̄ = 15 and Σ(xᵢ - x̄)² = 250, so the standard deviation of the residual Yᵢ - Ŷᵢ is
10√(1 - 1/5 - (xᵢ - 15)²/250) = 6.32, 8.37, 8.94, 8.37, and 6.32 for i = 1, 2, 3, 4, 5.

b. Now x̄ = 20 and Σ(xᵢ - x̄)² = 1250, giving residual standard deviations 7.87, 8.49, 8.83, 8.94, and
2.83 for i = 1, 2, 3, 4, 5.

c. The deviation from the estimated line is likely to be much smaller for the observation made in the
experiment of b for x ~ 50 than for the experiment of a when x = 25. That is, the observation (50, Y) is
more likely to fall close to the least squares line than is (25, Y).
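The pattern in parts a and b is easy to reproduce. Here is a minimal Python sketch of the residual standard deviation formula, assuming σ = 10 and x values 5, 10, 15, 20, 25 (part a) and 5, 10, 15, 20, 50 (part b), which reproduce the stated means and sums of squares (the helper name is ours):

```python
import math

# sd of the residual Y_i - Yhat_i in simple linear regression:
# sigma * sqrt(1 - 1/n - (x_i - xbar)^2 / Sxx)
def residual_sd(xs, sigma=10.0):
    n = len(xs)
    xbar = sum(xs) / n
    Sxx = sum((x - xbar) ** 2 for x in xs)
    return [sigma * math.sqrt(1 - 1/n - (x - xbar) ** 2 / Sxx) for x in xs]

print([round(v, 2) for v in residual_sd([5, 10, 15, 20, 25])])  # part a
print([round(v, 2) for v in residual_sd([5, 10, 15, 20, 50])])  # part b
```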

3.
a. This plot indicates there are no outliers, the variance of ε is reasonably constant, and the ε's are normally
distributed. A straight-line regression function is a reasonable choice for a model.

[Plot of standardized residuals versus x.]

b. We need Sxx = Σ(xᵢ - x̄)² = 415,914.85 - (2817.9)²/20 = 18,886.8295. Then each eᵢ* can be calculated as
follows: eᵢ* = eᵢ/[s√(1 - 1/20 - (xᵢ - 140.895)²/18,886.8295)], where s² = .4427. The table below shows the values.
Notice that if eᵢ* ≈ eᵢ/s, then eᵢ/eᵢ* ≈ s. All of the eᵢ/eᵢ* ratios range between .57 and .65, which are close
to s.

174
02016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 13: Nonlinear and Multiple Regression

standardized residuals eᵢ*   ratios eᵢ/eᵢ*        standardized residuals eᵢ*   ratios eᵢ/eᵢ*
0.644053 06175 0.64218


-0.31064
-0.30593 0.614697 0.09062 0.64802
0.4791 0.578669 1.16776 0.565003
0.647714 -1.50205 0.646461
1.2307
-1.15021 0.648002 0.96313 0.648257
0.643706 0.019 0.643881
0.34881
0.633428 0.65644 0.584858
-0.09872
0.640683 -2.1562 0.647182
-1.39034
0.640975 -0.79038 0.642113
0.82185
0.621857 1.73943 0.631795
-0.15998

c. This plot looks very much the same as the one in part a.

[Plot of standardized residuals versus filtration rate.]

5.
a. 97.7% of the variation in ice thickness can be explained by the linear relationship between it and
elapsed time. Based on this value, it is tempting to assume an approximately linear relationship;
however, r² does not measure the aptness of the linear model.

b. The residual plot shows a curve in the data, suggesting a non-linear relationship exists. One
observation (5.5, -3.14) is extreme.

[Plot of residuals versus elapsed time.]


7.
a. From software and the data provided, the least squares line is ŷ = 84.4 - 290x. Also from software, the
coefficient of determination is r² = 77.6% or .776.

Regression Analysis: y versus x

The regression equation is


y = 84.4 - 290 x

Predictor Coef SE Coef T p


Constant 84.38 11. 64 7.25 0.000
x -289.79 43.12 -6.72 0.000

S 2.72669 R-Sq 77.6% R-Sq(adj) = 75.9%

b. The accompanying scatterplot exhibits substantial curvature, which suggests that a straight-line model
is not actually a good fit.

[Scatter plot of y versus x, showing clear curvature.]

c. Fits, residuals, and standardized residuals were computed using software and the accompanying plot
was created. The residual-versus-fit plot indicates very strong curvature but not a lack of constant
variance. This implies that a linear model is inadequate, and a quadratic (parabolic) model relationship
might be suitable for x and y.

[Plot of standardized residuals versus fitted values.]

176
C 2016 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.
Chapter 13: Nonlinear and Multiple Regression

9. Both a scatter plot and residual plot (based on the simple linear regression model) for the first data set
suggest that a simple linear regression model is reasonable, with no pattern or influential data points which
would indicate that the model should be modified. However, scatter plots for the other three data sets
reveal difficulties.

[Scatter plots for data sets #1, #2, #3, and #4.]
For data set #2, a quadratic function would clearly provide a much better fit. For data set #3, the
relationship is perfectly linear except one outlier, which has obviously greatly influenced the fit even
though its x value is not unusually large or small. One might investigate this observation to see whether it
was mistyped and/or it merits deletion. For data set #4 it is clear that the slope of the least squares line has
been determined entirely by the outlier, so this point is extremely influential. A linear model is completely
inappropriate for data set #4.


11.

a. Yᵢ - Ŷᵢ = Yᵢ - Ȳ - β̂₁(xᵢ - x̄) = Yᵢ - (1/n)ΣⱼYⱼ - β̂₁(xᵢ - x̄) = ΣⱼcⱼYⱼ, where
cⱼ = -1/n - (xᵢ - x̄)(xⱼ - x̄)/Σ(xⱼ - x̄)² for j ≠ i and cᵢ = 1 - 1/n - (xᵢ - x̄)²/Σ(xⱼ - x̄)².
Then V(Yᵢ - Ŷᵢ) = ΣV(cⱼYⱼ) (since the Yⱼ's are independent) = σ²Σcⱼ², which, after some algebra, gives
Equation (13.2).

b. σ² = V(Yᵢ) = V(Ŷᵢ + (Yᵢ - Ŷᵢ)) = V(Ŷᵢ) + V(Yᵢ - Ŷᵢ), so
V(Yᵢ - Ŷᵢ) = σ² - V(Ŷᵢ) = σ² - σ²[1/n + (xᵢ - x̄)²/Σ(xⱼ - x̄)²], which is exactly (13.2).

c. As xᵢ moves further from x̄, (xᵢ - x̄)² grows larger, so V(Ŷᵢ) increases since (xᵢ - x̄)² has a positive
sign in V(Ŷᵢ), but V(Yᵢ - Ŷᵢ) decreases since (xᵢ - x̄)² has a negative sign in that expression.

13. The distribution of any particular standardized residual is also a t distribution with n - 2 df, since eᵢ* is
obtained by taking the standard normal variable (Yᵢ - Ŷᵢ)/σ_{Yᵢ-Ŷᵢ} and substituting the estimate of σ in the denominator
(exactly as in the predicted value case). With Eᵢ* denoting the ith standardized residual as a random
variable, when n = 25, Eᵢ* has a t distribution with 23 df and t.01,23 = 2.50, so P(Eᵢ* outside (-2.50, 2.50)) =
P(Eᵢ* ≥ 2.50) + P(Eᵢ* ≤ -2.50) = .01 + .01 = .02.


Section 13.2

15.
a. The scatterplot of y versus x (below, left) has a curved pattern. A linear model would not be appropriate.

b. The scatterplot of ln(y) versus ln(x) (below, right) exhibits a strong linear pattern.

[Scatter plots of y versus x and of ln(y) versus ln(x).]
c. The linear pattern in b above would indicate that a transformed regression using the natural log of both
x and y would be appropriate. The probabilistic model is then Y = α·x^β·ε, the power function with an
error term.

d. A regression of ln(y) on ln(x) yields the equation ln(y) = 4.6384 - 1.04920 ln(x). Using Minitab we
can get a PI for y when x = 20 by first transforming the x value: ln(20) = 2.996. The computer-
generated 95% PI for ln(y) when ln(x) = 2.996 is (1.1188, 1.8712). We must now take the antilog to
return to the original units of y: (e^1.1188, e^1.8712) = (3.06, 6.50).
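The back-transformation step in d can be scripted. A minimal Python sketch using the fitted log-log line and the PI endpoints quoted above (no raw data needed, names ours):

```python
import math

# Exercise 15d: prediction on the log-log scale, then back-transform.
b0, b1 = 4.6384, -1.04920          # fitted line for ln(y) on ln(x)
x_new = 20
lnx = math.log(x_new)              # 2.996
lny_hat = b0 + b1 * lnx            # point prediction of ln(y)

# 95% PI for ln(y) reported by the software at ln(x) = 2.996:
lo, hi = 1.1188, 1.8712
print(f"predicted y: {math.exp(lny_hat):.2f}")
print(f"95% PI for y: ({math.exp(lo):.2f}, {math.exp(hi):.2f})")  # (3.06, 6.50)
```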

e. A computer-generated residual analysis (normal probability plot, residuals versus fits, histogram, and residuals versus order):

[Residual plots for ln(y).]
Looking at the residual vs. fits (bottom right), one standardized residual, corresponding to the third
II
observation, is a bit large. There are only two positi ve standardized residuals, but two others are
essentially O. The patterns in the residual plot and the normal probability plot (upper left) are
marginally acceptable.


17.
a. Σxᵢ' = 15.501, Σyᵢ' = 13.352, Σxᵢ'² = 20.228, Σyᵢ'² = 16.572, Σxᵢ'yᵢ' = 18.109, from which
β̂₁ = 1.254 and β̂₀ = -.468, so β̂ = β̂₁ = 1.254 and α̂ = e^(-.468) = .626.

b. The plots give strong support to this choice of model; in addition, r² = .960 for the transformed data.

c. SSE = .11536 (computer printout), s = .1024, and the estimated sd of β̂₁ is .0775. The computed test
statistic is t = -1.02, for a P-value at 11 df of P(T ≤ -1.02) ≈ .16. Since .16 > .05, H₀ cannot be
rejected in favor of Hₐ.

d. The claim that μY·5 = 2μY·2.5 is equivalent to α·5^β = 2α(2.5)^β, or that β = 1. Thus we wish to test
H₀: β = 1 versus Hₐ: β ≠ 1. With t = (1.254 - 1)/.0775 = 3.28, the 2-sided P-value at 11 df is roughly 2(.004) =
.008. Since .008 ≤ .01, H₀ is rejected at level .01.

.008. Since .008:S .01, Hois rejected at level.O!.

19.
a. No, there is definite curvature in the plot.

b. With x ~ temperature and y ~ lifetime, a linear relationship between In(lifetime) and I/temperature
implies a model Y = exp(a + Ii/x + s). Let x' ~ I/temperature and y' ~ In(lifetime). Plotting y' vs. x' gives
a plot which has a pronounced linear appearance (and, in fact,'; = .954 for the straight line fit).

c. u; =.082273, LY; = 123.64, u;' = .00037813, LY;' = 879.88, u;y; = .57295, from which
P = 3735.4485 and a = -10.2045 (values read from computer output). With x = 220, x' = .004545
so yo = -10.2045 + 3735.4485(.004545) = 6.7748 and thus .y = el = 875.50.

d. For the transformed data, SSE = 1.39857, and ", =", =", = 6, :r;. = 8.44695, y;, = 6.83157,

.
y; = 5.32891,from which SSPE = 1.36594, SSLF = .02993,
.
.02993/1
1.36594/15
r .33. Comparing this

to the F distribution with df = (I, 15), it is clear that Ho cannot be rejected.


21.
a. The accompanying scatterplot, left, shows a very strong non-linear association between the variables.
The corresponding residual plot would look somewhat like a downward-facing parabola.

b. The right scatterplot shows y versus 1/x and exhibits a much more linear pattern. We'd anticipate an r²
value very near 1 based on the plot. (In fact, r² = .998.)

[Scatter plots of y versus x and of y versus 1/x.]

c. With the aid of software, a 95% PI for y when x = 100, aka x' = 1/x = 1/100 = .01, can be generated.
Using Minitab, the 95% PI is (83.89, 87.33). That is, at a 95% prediction level, the nitrogen extraction
percentage for a single run when leaching time equals 100 h will be between 83.89 and 87.33.

23. V(Y) = V(αe^(βx)·ε) = [αe^(βx)]²·V(ε) = α²e^(2βx)·τ², where we have set V(ε) = τ². If β > 0, this is an
increasing function of x, so we expect more spread in y for large x than for small x, while the situation is
reversed if β < 0. It is important to realize that a scatter plot of data generated from this model will not
spread out uniformly about the exponential regression function throughout the range of x values; the spread
will only be uniform on the transformed scale. Similar results hold for the multiplicative power model.

25. First, the test statistic for the hypotheses H₀: β₁ = 0 versus Hₐ: β₁ ≠ 0 is z = -4.58 with a corresponding P-value
of .000, suggesting noise level has a highly statistically significant relationship with people's
perception of the acceptability of the work environment. The negative value indicates that the likelihood of
finding the work environment acceptable decreases as the noise level increases (not surprisingly). We estimate
that a 1 dBA increase in noise level decreases the odds of finding the work environment acceptable by a
multiplicative factor of .70 (95% CI: .60 to .81).
The accompanying plot shows π̂ = e^(23.2 - .359x)/(1 + e^(23.2 - .359x)). Notice that the estimated probability of finding the
work environment acceptable decreases as noise level, x, increases.

[Plot of the estimated logistic probability π̂ versus noise level.]
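A small Python sketch of the fitted logistic curve quoted above (coefficients taken from the output; the noise levels used for illustration and the function name are ours):

```python
import math

# Estimated logistic response curve from Exercise 25:
# pi_hat(x) = exp(23.2 - .359 x) / (1 + exp(23.2 - .359 x))
def pi_hat(x):
    eta = 23.2 - 0.359 * x
    return math.exp(eta) / (1 + math.exp(eta))

for x in (55, 65, 75, 85):   # a few noise levels (dBA) for illustration
    print(f"x = {x} dBA: estimated P(acceptable) = {pi_hat(x):.3f}")

# The estimated odds ratio for a 1 dBA increase is exp(-0.359), about .70,
# matching the interpretation in the solution.
print(f"odds ratio per dBA: {math.exp(-0.359):.2f}")
```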


Section 13.3

27.
a. A scatter plot of the data indicated a quadratic regression model might be appropriate.

[Scatter plot of the data, showing curvature.]

b. ŷ = 84.482 - 15.875(6) + 1.7679(6)² = 52.88; residual = y₆ - ŷ₆ = 53 - 52.88 = .12.

c. SST = Σyᵢ² - (Σyᵢ)²/n = 586.88, so R² = 1 - 61.77/586.88 = .895.

d. None of the standardized residuals exceeds 2 in magnitude, suggesting none of the observations are
outliers. The ordered z percentiles needed for the normal probability plot are -1.53, -.89, -.49, -.16,
.16, .49, .89, and 1.53. The normal probability plot below does not exhibit any troublesome features.

[Plot of residuals versus x and normal probability plot of the standardized residuals.]

e. μ̂Y·6 = 52.88 (from b) and t.025,n-3 = t.025,5 = 2.571, so the CI is
52.88 ± (2.571)(1.69) = 52.88 ± 4.34 = (48.54, 57.22).

f. SSE = 61.77, so s² = 61.77/5 = 12.35 and s{pred} = √(12.35 + (1.69)²) = 3.90. The PI is
52.88 ± (2.571)(3.90) = 52.88 ± 10.03 = (42.85, 62.91).
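The interval arithmetic in e and f can be checked with a few lines of Python, using the fitted quadratic coefficients and standard errors quoted above (names ours):

```python
import math

# Exercise 27: fitted quadratic yhat = 84.482 - 15.875 x + 1.7679 x^2
def y_hat(x):
    return 84.482 - 15.875 * x + 1.7679 * x**2

x_star, t_crit = 6, 2.571        # t_.025 with n - 3 = 5 df
se_mean = 1.69                   # estimated sd of the fitted value at x = 6
s2 = 61.77 / 5                   # SSE / (n - 3) = 12.35
se_pred = math.sqrt(s2 + se_mean**2)

fit = y_hat(x_star)              # about 52.88
ci = (fit - t_crit * se_mean, fit + t_crit * se_mean)
pi = (fit - t_crit * se_pred, fit + t_crit * se_pred)
print(f"fit = {fit:.2f}, CI = ({ci[0]:.2f}, {ci[1]:.2f}), PI = ({pi[0]:.2f}, {pi[1]:.2f})")
```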


29.
a. The table below displays the y-values, fits, and residuals. From this, SSE = Σe² = 16.8,
s² = SSE/(n - 3) = 4.2, and s = 2.05.

 y       ŷ         e = y - ŷ
81    82.1342     -1.13420
83    80.7771      2.22292
79    79.8502     -0.85022
75    72.8583      2.14174
70    72.1567     -2.15670
43    43.6398     -0.63985
22    21.5837      0.41630
b. SST = Σ(y - ȳ)² = Σ(y - 64.71)² = 3233.4, so R² = 1 - SSE/SST = 1 - 16.8/3233.4 = .995, or 99.5%.
99.5% of the variation in free-flow can be explained by the quadratic regression relationship with
viscosity.

c. We want to test the hypotheses H₀: β₂ = 0 v. Hₐ: β₂ ≠ 0. Assuming all inference assumptions are met,
the relevant t statistic is t = -.0031662/.0004835 = -6.55. At n - 3 = 4 df, the corresponding P-value is
2P(T > 6.55) < .004. At any reasonable significance level, we would reject H₀ and conclude that the
quadratic predictor indeed belongs in the regression model.

d. Two intervals with at least 95% simultaneous confidence require individual confidence equal to
100% - 5%/2 = 97.5%. To use the t-table, round up to 98%: t.01,4 = 3.747. The two confidence intervals
are 2.1885 ± 3.747(.4050) = (.671, 3.706) for β₁ and -.0031662 ± 3.747(.0004835) =
(-.00498, -.00135) for β₂. [In fact, we are at least 96% confident β₁ and β₂ lie in these intervals.]

e. Plug into the regression equation to get ŷ = 72.858. Then a 95% CI for μY·400 is 72.858 ± t.025,4(1.198) =
72.858 ± 2.776(1.198) = (69.531, 76.186). For the PI, s{pred} = √(s² + s_ŷ²) = √(4.2 + (1.198)²) = 2.374, so a 95% PI for Y when
x = 400 is 72.858 ± 2.776(2.374) = (66.271, 79.446).

31.
a. R² = 98.0% or .980. This means 98.0% of the observed variation in energy output can be attributed to
the model relationship.

b. For a quadratic model, adjusted R² = [(n - 1)R² - k]/(n - 1 - k) = [(24 - 1)(.780) - 2]/(24 - 1 - 2) = .759, or 75.9%. (A more
precise answer, from software, is 75.95%.) The adjusted R² value for the cubic model is 97.7%, as seen
in the output. This suggests that the cubic term greatly improves the model: the cost of adding an extra
parameter is more than compensated for by the improved fit.

c. To test the utility of the cubic term, the hypotheses are H₀: β₃ = 0 versus Hₐ: β₃ ≠ 0. From the Minitab
output, the test statistic is t = 14.18 with a P-value of .000. We strongly reject H₀ and conclude that the
cubic term is a statistically significant predictor of energy output, even in the presence of the lower-order
terms.

d. Plug x = 30 into the cubic estimated model equation to get ŷ = 6.44. From software, a 95% CI for μY·30
is (6.31, 6.57). Alternatively, ŷ ± t.025,20·s_ŷ = 6.44 ± 2.086(.0611) also gives (6.31, 6.57). Next, a 95% PI

Chapter 13: Nonlinear and Multiple Regression

for y. 30 is (6.06, 6.81) from software. Or, using the information provided, y 1.025.20 ~s' + si ~
6.44 2.086 J<.1684)' + (.0611)' also gives (6.06, 6.81). The value of s comes from the Minitah
output, where s ~ .168354.

e. The null hypothesis states that the true mean energy output when the temperature difference is 35'K is
equal to 5W; the alternative hypothesis says this isn't true.
Plug x ~ 35 into the cubic regression equation to get j' = 4.709. Then the test statistic is
t 4.709-5,,_5.6, and the two-tailed P-value at df= 20 is approximately 2(.000) ~ .000. Hence, we
.0523
strongly reject Ho (in particular, .000 < .05) and conclude that 11m if' 5.

Alternatively, software or direct calculation provides a 95% CI for /'1'." of(4.60, 4.82). Since this CI
does not include 5, we can reject Ho at the .05 level.

33.
a. x̄ = 20 and sₓ = 10.8012, so x' = (x - 20)/10.8012. For x = 20, x' = 0 and ŷ = β̂₀* = .9671. For x = 25, x' =
.4629, so ŷ = .9671 - .0502(.4629) - .0176(.4629)² + .0062(.4629)³ = .9407.

b. ŷ = .9671 - .0502[(x - 20)/10.8012] - .0176[(x - 20)/10.8012]² + .0062[(x - 20)/10.8012]³
= .00000492x³ - .000446058x² + .007290688x + .96034944.

c. t = .0062/.0031 = 2.00. At df = n - 4 = 3, the P-value is 2(.070) = .140 > .05. Therefore, we cannot reject H₀;
the cubic term should be deleted.

d. SSE = Σ(yᵢ - ŷᵢ)² and the ŷᵢ's are the same from the standardized as from the unstandardized model,
so SSE, SST, and R² will be identical for the two models.

e. Σyᵢ² = 6.355538, Σyᵢ = 6.664, so SST = .011410. For the quadratic model, R² = .987, and for the
cubic model, R² = .994. The two R² values are very close, suggesting intuitively that the cubic term is
relatively unimportant.
35. Y' = ln(Y) = ln α + βx + γx² + ln(ε) = β₀ + β₁x + β₂x² + ε', where ε' = ln(ε), β₀ = ln(α), β₁ = β and
β₂ = γ. That is, we should fit a quadratic to (x, ln(y)). The resulting estimated quadratic (from computer
output) is 2.0397 + .1799x - .0022x², so β̂ = .1799, γ̂ = -.0022, and α̂ = e^2.0397 = 7.6883. [The ln(y)'s
are 3.6136, 4.2499, 4.6977, 5.1773, and 5.4189, and the summary quantities can then be computed as
before.]

Section 13.4

37.
a. The mean value of y when x₁ = 50 and x₂ = 3 is μY·50,3 = -.800 + .060(50) + .900(3) = 4.9 hours.

b. When the number of deliveries (x₂) is held fixed, the average change in travel time associated with a
one-mile (i.e., one-unit) increase in distance traveled (x₁) is .060 hours. Similarly, when distance
traveled (x₁) is held fixed, the average change in travel time associated with one extra delivery (i.e.,
a one-unit increase in x₂) is .900 hours.

c. Under the assumption that Y follows a normal distribution, the mean and standard deviation of this
distribution are 4.9 (because x₁ = 50 and x₂ = 3) and σ = .5 (since the standard deviation is assumed to
be constant regardless of the values of x₁ and x₂). Therefore,
P(Y ≤ 6) = P(Z ≤ (6 - 4.9)/.5) = P(Z ≤ 2.20) = .9861. That is, in the long run, about 98.6% of all days
will result in a travel time of at most 6 hours.
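A minimal Python sketch of the calculation in c, using the regression function and σ given in the problem (the helper name is ours):

```python
from math import erf, sqrt

# Mean travel time at x1 = 50 miles, x2 = 3 deliveries (Exercise 37).
mean = -0.800 + 0.060 * 50 + 0.900 * 3     # 4.9 hours
sigma = 0.5

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

z = (6 - mean) / sigma                     # 2.20
print(f"P(Y <= 6) = {normal_cdf(z):.4f}")  # about .9861
```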

39.
a. For x₁ = 2, x₂ = 8 (remember the units of x₂ are in 1000s), and x₃ = 1 (since the outlet has a drive-up
window), the average sales are ŷ = 10.00 - 1.2(2) + 6.8(8) + 15.3(1) = 77.3 (i.e., $77,300).

b. For x₁ = 3, x₂ = 5, and x₃ = 0, the average sales are ŷ = 10.00 - 1.2(3) + 6.8(5) + 15.3(0) = 40.4 (i.e.,
$40,400).

c. When the number of competing outlets (x₁) and the number of people within a 1-mile radius (x₂)
remain fixed, the expected sales will increase by $15,300 when an outlet has a drive-up window.
41.
a. R² = .834 means that 83.4% of the total variation in cone cell packing density (y) can be explained by a
linear regression on eccentricity (x₁) and axial length (x₂). For H₀: β₁ = β₂ = 0 vs. Hₐ: at least one βᵢ ≠ 0,
the test statistic is F = (R²/k)/[(1 - R²)/(n - k - 1)] = (.834/2)/[(1 - .834)/(192 - 2 - 1)] ≈ 475, and the associated P-value
at df = (2, 189) is essentially 0. Hence, H₀ is rejected and the model is judged useful.
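The F statistic in a comes directly from R²; a couple of lines of Python reproduce it (names ours):

```python
# Model utility test from R^2 (Exercise 41a): F = (R^2/k) / ((1 - R^2)/(n - k - 1))
R2, k, n = 0.834, 2, 192
F = (R2 / k) / ((1 - R2) / (n - k - 1))
print(f"F = {F:.1f} on ({k}, {n - k - 1}) df")   # about 475
```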

b. ŷ = 35821.792 - 6294.729(1) - 348.037(25) = 20,826.138 cells/mm².

c. For a fixed axial length (x₂), a 1-mm increase in eccentricity is associated with an estimated decrease in
mean/predicted cell density of 6294.729 cells/mm².

d. The error df = n - k - 1 = 192 - 3 = 189, so the critical CI value is t.025,189 ≈ z.025 = 1.96. A 95% CI for
β₁ is -6294.729 ± 1.96(203.702) = (-6694.020, -5895.438).

e. The test statistic is t = (-348.037 - 0)/134.350 = -2.59; at 189 df, the 2-tailed P-value is roughly 2P(T ≤ -2.59)
≈ 2Φ(-2.59) = 2(.0048) ≈ .01. Since .01 < .05, we reject H₀. After adjusting for the effect of
eccentricity (x₁), there is a statistically significant relationship between axial length (x₂) and cell
density (y). Therefore, we should retain x₂ in the model.
Chapter 13: Nonlinear and Multiple Regression

43.
a. ŷ = 185.49 - 45.97(2.6) - 0.3015(250) + 0.0888(2.6)(250) = 48.313.

b. No, it is not legitimate to interpret β₁ in this way. It is not possible to increase the cobalt content, x₁,
while keeping the interaction predictor, x₃, fixed. When x₁ changes, so does x₃, since x₃ = x₁x₂.

c. Yes, there appears to be a useful linear relationship between y and the predictors. We determine this
by observing that the P-value corresponding to the model utility test is < .0001 (F test statistic =
18.924).

d. We wish to test H₀: β₃ = 0 vs. Hₐ: β₃ ≠ 0. The test statistic is t = 3.496, with a corresponding P-value of
.0030. Since the P-value is < α = .01, we reject H₀ and conclude that the interaction predictor does
provide useful information about y.

e. A 95% CI for the mean value of surface area under the stated circumstances requires the following
quantities: ŷ = 185.49 - 45.97(2) - 0.3015(500) + 0.0888(2)(500) = 31.598. Next, t.025,16 = 2.120, so
the 95% confidence interval is 31.598 ± (2.120)(4.69) = 31.598 ± 9.9428 = (21.6552, 41.5408).

45.
a. The hypotheses are Hi: 1', ~ 1', ~ 1'3~ 1', ~ 0 vs. H,: at least one I'd O. The test statistic is f>
R' 1k .946/4 .
-:- -r- ~ 87.6 > F 00' 4 '0 = 7.10 (the smallest available F-value from
(I-R2)/(n-k-l) (1-.946)120 - . "
Table A.9), so the P-value is < .00 I and we can reject Ho at any significance level. We conclude that
at least one of the four predictor variables appears to provide useful information about tenacity.

b. The adjustedR'value is 1_
n-I (SSE)=I_ n(-I )(I-R') =1_24(1-.946)=.935, which does
n-(k+l) SST n- k+l 20
not differ much from R2 = .946.
f
c. The estimated average tenacity when x₁ = 16.5, x₂ = 50, x₃ = 3, and x₄ = 5 is
ŷ = 6.121 − .082(16.5) + .113(50) + .256(3) − .219(5) = 10.091. For a 99% CI, t.005,20 = 2.845, so
the interval is 10.091 ± 2.845(.350) = (9.095, 11.087). Therefore, when the four predictors are as
specified in this problem, the true average tenacity is estimated to be between 9.095 and 11.087.
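A quick numerical check of the adjusted R² and the 99% CI can be done in Python. The sketch below is an illustration only, assuming the quantities quoted above (n = 25, k = 4, fit 10.091 with standard error .350):

    from scipy import stats

    R2, n, k = 0.946, 25, 4
    adj_R2 = 1 - (n - 1) / (n - (k + 1)) * (1 - R2)      # 1 - (24/20)(.054) = .935

    y_hat, se_mean = 10.091, 0.350
    t_crit = stats.t.ppf(1 - 0.01 / 2, n - (k + 1))      # t.005,20 = 2.845
    ci = (y_hat - t_crit * se_mean, y_hat + t_crit * se_mean)
    print(round(adj_R2, 3), round(float(t_crit), 3), ci)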

47.
a. For a 1% increase in the percentage plastics, we would expect a 28.9 kcal/kg increase in energy
content. Also, for a 1% increase in the moisture, we would expect a 37.4 kcal/kg decrease in energy
content. Both of these assume we have accounted for the linear effects of the other three variables.

b. The appropriate hypotheses are Ho: β₁ = β₂ = β₃ = β₄ = 0 vs. Ha: at least one βⱼ ≠ 0. The value of the F
test statistic is 167.71, with a corresponding P-value that is ≈ 0. So, we reject Ho and conclude that at
least one of the four predictors is useful in predicting energy content, using a linear model.

c. Ho: β₃ = 0 vs. Ha: β₃ ≠ 0. The value of the t test statistic is t = 2.24, with a corresponding P-value of
.034, which is less than the significance level of .05. So we can reject Ho and conclude that percentage
garbage provides useful information about energy content, given that the other three predictors
remain in the model.


d. ŷ = 2244.9 + 28.925(20) + 7.644(25) + 4.297(40) − 37.354(45) = 1505.5, and t.025,25 = 2.060. So a
95% CI for the true average energy content under these circumstances is
1505.5 ± (2.060)(12.47) = 1505.5 ± 25.69 = (1479.8, 1531.1). Because the interval is reasonably narrow,
we would conclude that the mean energy content has been precisely estimated.

e. A 95% prediction interval for the energy content of a waste sample having the specified characteristics
is 1505.5 ± (2.060)√[(31.48)² + (12.47)²] = 1505.5 ± 69.75 = (1435.7, 1575.2).

49.
a. Use the ANOVA table in the output to test Ho: β₁ = β₂ = β₃ = 0 vs. Ha: at least one βⱼ ≠ 0. With f = 17.31
and P-value ≈ 0.000, we reject Ho at any reasonable significance level and conclude that the
model is useful.

b. Use the t test information associated with x₃ to test Ho: β₃ = 0 vs. Ha: β₃ ≠ 0. With t = 3.96 and P-value
= .002 < .05, we reject Ho at the .05 level and conclude that the interaction term should be retained.
c. The predicted value of y when x₁ = 3 and x₂ = 6 is ŷ = 17.279 − 6.368(3) − 3.658(6) + 1.7067(3)(6) =
6.946. With error df = 11, t.025,11 = 2.201, and the CI is 6.946 ± 2.201(.555) = (5.73, 8.17).

d. Our point prediction remains the same, but the SE is now √(s² + s_ŷ²) = √(1.72225² + .555²) ≈ 1.809. The
resulting 95% PI is 6.946 ± 2.201(1.809) = (2.97, 10.93).

51.
a. Associated with x₃ = drilling depth are the test statistic t = 0.30 and P-value = .777, so we certainly do
not reject Ho: β₃ = 0 at any reasonable significance level. Thus, we should remove x₃ from the model.

b. To test Ho: β₁ = β₂ = 0 vs. Ha: at least one βⱼ ≠ 0, use R²:
f = (R²/k) / [(1 − R²)/(n − k − 1)] = (.836/2) / [(1 − .836)/(9 − 2 − 1)]
= 15.29; at df = (2, 6), 10.92 < 15.29 < 27.00 ⇒ the P-value is between .001 and .01. (Software gives
.004.) In particular, P-value ≤ .05 ⇒ reject Ho at the α = .05 level: the model based on x₁ and x₂ is
useful in predicting y.

c. With error df = 6, t.025,6 = 2.447, and from the Minitab output we can construct a 95% CI for β₁:
−0.006767 ± 2.447(0.002055) = (−0.01180, −0.00174). Hence, after adjusting for feed rate (x₂), we are
95% confident that the true change in mean surface roughness associated with a 1 rpm increase in
spindle speed is between −.01180 μm and −.00174 μm.

d. The point estimate is ŷ = 0.365 − 0.006767(400) + 45.67(.125) = 3.367. With the standard error
provided, the 95% CI for μ_Y is 3.367 ± 2.447(.180) = (2.93, 3.81).

e. A normal probability plot of the e* values is quite straight, supporting the assumption of normally
distributed errors. Also, plots of the e* values against x₁ and x₂ show no discernible pattern, supporting
the assumptions of linearity and equal variance. Together, these validate the regression model.


53. Some possible questions might be:

(1) Is this model useful in predicting deposition of poly-aromatic hydrocarbons? A test of model
utility gives us an F = 84.39, with a P-value of 0.000. Thus, the model is useful.
(2) Is x₁ a significant predictor of y in the presence of x₂? A test of Ho: β₁ = 0 vs. Ha: β₁ ≠ 0 gives us a t
= 6.98 with a P-value of 0.000, so this predictor is significant.
(3) A similar question, and solution, for testing x₂ as a predictor yields a similar conclusion: with a P-
value of 0.046, we would accept this predictor as significant if our significance level were anything
larger than 0.046.

Section 13.5

55.
a. To test Ho: β₁ = β₂ = β₃ = 0 vs. Ha: at least one βⱼ ≠ 0, use R²:
f = (R²/k) / [(1 − R²)/(n − k − 1)] = (.706/3) / [(1 − .706)/(12 − 3 − 1)] = 6.40. At df = (3, 8), 4.06 < 6.40 < 7.59 ⇒ the P-
value is between .05 and .01. In particular, P-value < .05 ⇒ reject Ho at the .05 level. We conclude that
the given model is statistically useful for predicting tool productivity.

b. No: the large P-value (.510) associated with ln(x₂) implies that we should not reject Ho: β₂ = 0, and
hence we need not retain ln(x₂) in the model that already includes ln(x₁).

c. Part of the Minitab output from regressing ln(y) on ln(x₁) appears below. The estimated regression
equation is ln(y) = 3.55 + 0.844 ln(x₁). As for utility, t = 4.69 and P-value = .001 imply that we should
reject Ho: β₁ = 0, so the stated model is useful.

The regression equation is
ln(y) = 3.55 + 0.844 ln(x1)

Predictor      Coef   SE Coef       T      P
Constant    3.55493   0.01336  266.06  0.000
ln(x1)       0.8439    0.1799    4.69  0.001
d. The residual plot shows pronounced curvature, rather than "random scatter." This suggests that the
functional form of the relationship might not be correctly modeled; that is, ln(y) might have a non-
linear relationship with ln(x₁). [Obviously, one should investigate this further, rather than blindly
continuing with the given model!]

[Plot: residuals versus ln(x1) for the regression with response ln(y).]


e. First, for the model utility test of ln(x₁) and ln²(x₁) as predictors, we again rely on R²:
f = (R²/k) / [(1 − R²)/(n − k − 1)] = (.819/2) / [(1 − .819)/(12 − 2 − 1)] = 20.36. Since this is greater than F.001,2,9 = 16.39, the
P-value is < .001 and we strongly reject the null hypothesis of no model utility (i.e., the utility of this
model is confirmed). Notice also the P-value associated with ln²(x₁) is .031, indicating that this
"quadratic" term adds to the model.
Next, notice that when x₁ = 1, ln(x₁) = 0 [and ln²(x₁) = 0² = 0], so we're really looking at the
information associated with the intercept. Using that plus the critical value t.025,9 = 2.262, a 95% PI for
the response, ln(Y), when x₁ = 1 is 3.5189 ± 2.262√(.0361358² + .0178²) = (3.4277, 3.6099). Lastly, to
create a 95% PI for Y itself, exponentiate the endpoints: at the 95% prediction level, a new value of Y
when x₁ = 1 will fall in the interval (e^3.4277, e^3.6099) = (30.81, 36.97).
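The back-transformation step can be verified with the following Python sketch (an illustration only, assuming the fitted value and standard errors quoted above):

    import math
    from scipy import stats

    fit_ln, se_fit, s, df = 3.5189, 0.0178, 0.0361358, 9
    t_crit = stats.t.ppf(0.975, df)                      # t.025,9 = 2.262
    half = t_crit * math.sqrt(s**2 + se_fit**2)
    lo, hi = fit_ln - half, fit_ln + half                # PI for ln(Y): about (3.4277, 3.6099)
    print((lo, hi), (math.exp(lo), math.exp(hi)))        # PI for Y itself: about (30.8, 37.0)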

57.
 k      R²      adj R²     Ck = SSEk/s² + 2(k+1) − n
 1     .676      .647        138.2
 2     .979      .975          2.7
 3     .9819     .976          3.2
 4     .9824                   4
where s² = 5.9825

a. Clearly the model with k = 2 is recommended on all counts.

b. No. Forward selection would let X4 enter first and would not delete it at the next stage.

59.
a. The choice of a "best" model seems reasonably clear-cut. The model with 4 variables, including all but
the summerwood fiber variable, would seem best. Its R² is as large as that of any of the models, including the 5-
variable model, its adjusted R² is at its maximum, and its Cp is at its minimum. As a second choice, one
might consider the model with k = 3 which excludes the summerwood fiber and springwood %
variables.

b. Backwards Stepping:

Step 1: A model with all 5 variables is fit; the smallest t-ratio is t = .12, associated with the
summerwood fiber % variable. Since |t| = .12 < 2, that variable was eliminated.
Step 2: A model with all variables except summerwood fiber % was fit. Variable x₄ (springwood light absorption) has
the smallest t-ratio (t = −1.76), whose magnitude is smaller than 2. Therefore, x₄ is the next
variable to be eliminated.
Step 3: A model with variables x₁ and x₅ is fit. Both t-ratios have magnitudes that exceed 2, so both
variables are kept and the backwards stepping procedure stops at this step. The final model
identified by the backwards stepping method is the one containing x₁ and x₅.


Forward Stepping:

Step 1: After fitting all five 1-variable models, the model with x₁ had the t-ratio with the largest
magnitude (t = −4.82). Because the absolute value of this t-ratio exceeds 2, x₁ was the first
variable to enter the model.
Step 2: All four 2-variable models that include x₁ were fit. That is, the models {x₁, x₂}, {x₁, x₃},
{x₁, x₄}, {x₁, x₅} were all fit. Of all 4 models, the t-ratio 2.12 (for variable x₅) was largest in
absolute value. Because this t-ratio exceeds 2, x₅ is the next variable to enter the model.
Step 3 (not printed): All possible 3-variable models involving x₁ and x₅ and another predictor were fit. None
of the t-ratios for the added variables has an absolute value that exceeds 2, so no more variables are
added. There is no need to print anything in this case, so the results of these tests are not shown.

Note: Both the forwards and backwards stepping methods arrived at the same final model, {x₁, x₅}, in
this problem. This often happens, but not always. There are cases when the different stepwise
methods will arrive at slightly different collections of predictor variables.

61. If multicollinearity were present, at least one of the four R² values would be very close to 1, which is not
the case. Therefore, we conclude that multicollinearity is not a problem in these data.

63. Before removing any observations, we should investigate their source (e.g., were measurements on that
observation misread?) and their impact on the regression. To begin, Observation #7 deviates significantly
from the pattern of the rest of the data (standardized residual = −2.62); if there's concern the PAH
deposition was not measured properly, we might consider removing that point to improve the overall fit. If
the observation was not mis-recorded, we should not remove the point.

We should also investigate Observation #6: Minitab gives h₆₆ = .846 > 3(2 + 1)/17, indicating this
observation has very high leverage. However, the standardized residual for #6 is not large, suggesting that
it follows the regression pattern specified by the other observations. Its "influence" only comes from
having a comparatively large x₁ value.


Supplementary Exercises

65.
a. [Boxplots of ppv by prism quality; means are indicated by solid circles.]

A two-sample t confidence interval, generated by Minitab:


Two-sample T for ppv

prism qu       N   Mean   StDev   SE Mean
cracked       12    827     295        85
not cracke    18    483     234        55

95% CI for mu (cracked) - mu (not cracked): (132, 557)

b. The simple linear regression results in a significant model; r² is .577, but we have an extreme
observation, with standardized residual = −4.11. Minitab output is below. Also run, but not included here,
was a model with an indicator for cracked/not cracked, and a model with the indicator and an interaction
term. Neither improved the fit significantly.
The regression equation is
ratio = 1.00 - 0.000018 ppv

Predictor         Coef        StDev        T      P
Constant       1.00161      0.00204   491.18  0.000
ppv        -0.00001827   0.00000295    -6.19  0.000

S = 0.004892   R-Sq = 57.7%   R-Sq(adj) = 56.2%

Analysis of Variance
Source           DF          SS          MS      F      P
Regression        1  0.00091571  0.00091571  38.26  0.000
Residual Error   28  0.00067016  0.00002393
Total            29  0.00158587

Unusual Observations
Obs   ppv     ratio       Fit  StDev Fit   Residual  St Resid
 29  1144  0.962000  0.980704   0.001786  -0.018704    -4.11R

R denotes an observation with a large standardized residual


67.
a. After accounting for all the other variables in the regression, we would expect VO₂max to decrease
by .0996, on average, for each one-minute increase in the one-mile walk time.

b. After accounting for all the other variables in the regression, we expect males to have a VO₂max that is
.6566 L/min higher than females, on average.

c. ŷ = 3.5959 + .6566(1) + .0096(170) − .0996(11) − .0880(140) = 3.67. The residual is
y − ŷ = 3.15 − 3.67 = −.52.
d. R² = 1 − SSE/SST = 1 − 30.1033/102.3922 = .706, or 70.6% of the observed variation in VO₂max can be attributed to
the model relationship.

e. To test Ho: β₁ = β₂ = β₃ = β₄ = 0 vs. Ha: at least one βᵢ ≠ 0, use R²:
f = (R²/k) / [(1 − R²)/(n − k − 1)] = (.706/4) / [(1 − .706)/(20 − 4 − 1)] = 9.005. At df = (4, 15), 9.005 > 8.25 ⇒ the P-value is
less than .001, so Ho is rejected. It appears that the model specifies a useful relationship between
VO₂max and at least one of the other predictors.

69.
a. Based on a scatter plot (below), a simple linear regression model would not be appropriate. Because of
the slight, but obvious, curvature, a quadratic model would probably be more appropriate.

[Scatterplot of temperature versus pressure, showing slight but obvious curvature.]

Using a quadratic model, a Minitab-generated regression equation is
ŷ = 35.423 + 1.7191x − .0024753x², and a point estimate of temperature when pressure is 200 is
ŷ = 280.23. Minitab will also generate a 95% prediction interval of (256.25, 304.22). That is, we are
confident that when pressure is 200 psi, a single value of temperature will be between 256.25 and
304.22°F.


71.
a. Using Minitab to generate the first-order regression model, we test the model utility (to see if any of
the predictors are useful), and with f = 21.03 and a P-value of .000, we determine that at least one of the
predictors is useful in predicting palladium content. Looking at the individual predictors, the P-value
associated with the pH predictor has value .169, which would indicate that this predictor is
unimportant in the presence of the others.

b. We wish to test Ho: β₁ = ... = β₂₀ = 0 vs. Ha: at least one βⱼ ≠ 0. With calculated statistic f = 6.29 and P-
value .002, this model is also useful at any reasonable significance level.

c. Testing Ho: β₆ = ... = β₂₀ = 0 vs. Ha: at least one of the listed β's ≠ 0, the test statistic is
f = [(716.10 − 290.27)/(20 − 5)] / [290.27/(32 − 20 − 1)] = 1.07 < F.05,15,11 = 2.72. Thus, P-value > .05, so we fail to reject Ho
and conclude that the quadratic and interaction terms should not be included in the model. They do
not add enough information to make this model significantly better than the simple first-order model.

d. Partial output from Minitab follows, which shows all predictors as significant at level .05:

The regression equation is
pdconc = - 305 + 0.405 niconc + 69.3 pH - 0.161 temp + 0.993 currdens
         + 0.355 pallcont - 4.14 pHsq

Predictor       Coef     StDev      T      P
Constant     -304.85     93.98  -3.24  0.003
niconc       0.40484   0.09432   4.29  0.000
pH             69.27     21.96   3.15  0.004
temp        -0.16134   0.07055  -2.29  0.031
currdens      0.9929    0.3570   2.78  0.010
pallcont     0.35460   0.03381  10.49  0.000
pHsq          -4.138     1.293  -3.20  0.004

73.

a. We wish to test Ho: β₁ = β₂ = 0 vs. Ha: either β₁ or β₂ ≠ 0. With R² = 1 − SSE/SST = 1 − .2915/202.88 = .9986, the test
statistic is f = (R²/k) / [(1 − R²)/(n − k − 1)] = (.9986/2) / [(1 − .9986)/(8 − 2 − 1)] = 1783, where k = 2 for the quadratic model.
Clearly the P-value at df = (2, 5) is effectively zero, so we strongly reject Ho and conclude that the
quadratic model is clearly useful.

b. The relevant hypotheses are Ho: β₂ = 0 vs. Ha: β₂ ≠ 0. The test statistic value is
t = β̂₂/s_β̂₂ = (−.00163141 − 0)/.00003391 = −48.1; at 5 df, the P-value is 2P(T ≥ |−48.1|) ≈ 0. Therefore, Ho is rejected.
The quadratic predictor should be retained.

c. No. R² is extremely high for the quadratic model, so the marginal benefit of including the cubic
predictor would be essentially nil, and a scatter plot doesn't show the type of curvature associated
with a cubic model.

d. t.025,5 = 2.571, and β̂₀ + β̂₁(100) + β̂₂(100)² = 21.36, so the CI is 21.36 ± 2.571(.1141) = 21.36 ± .29
= (21.07, 21.65).


e. First, we need to figure out s² based on the information we have been given: s² = MSE = SSE/df =
.2915/5 = .058. Then, the 95% PI is 21.36 ± 2.571√[.058 + (.1141)²] = 21.36 ± .685 = (20.675, 22.045).

75.
a. To test Ho: β₁ = β₂ = 0 vs. Ha: either β₁ or β₂ ≠ 0, first find R²: SST = Σy² − (Σy)²/n = 264.5 ⇒ R² =
1 − SSE/SST = 1 − 26.98/264.5 = .898. Next,
f = (.898/2) / [(1 − .898)/(10 − 2 − 1)] = 30.8, which at df = (2, 7)
corresponds to a P-value of ≈ 0. Thus, Ho is rejected at significance level .01 and the quadratic model
is judged useful.

b. The hypotheses are Ho: β₂ = 0 vs. Ha: β₂ ≠ 0. The test statistic value is t = (−2.3621 − 0)/.3073 = −7.69,
and at 7 df the P-value is 2P(T ≥ |−7.69|) ≈ 0. So, Ho is rejected at level .001. The quadratic predictor
should not be eliminated.

c. x = 1 here, μ̂_{Y·1} = β̂₀ + β̂₁(1) + β̂₂(1)² = 45.96, and t.05,7 = 1.895, giving the CI
45.96 ± (1.895)(1.031) = (44.01, 47.91).

77.
a. The hypotheses are Ho: β₁ = β₂ = β₃ = β₄ = 0 versus Ha: at least one βᵢ ≠ 0. From the output, the F
statistic is f = 4.06 with a P-value of .029. Thus, at the .05 level we reject Ho and conclude that at least
one of the explanatory variables is a significant predictor of power.

b. Yes, a model with R² = .834 would appear to be useful. A formal model utility test can be performed:
f = (R²/k) / [(1 − R²)/(n − (k + 1))] = (.834/3) / [(1 − .834)/(16 − 4)] = 20.1, which is much greater than F.05,3,12 = 3.49. Thus,
the model including {x₃, x₄, x₃x₄} is useful.

We cannot use an F test to compare this model with the first-order model in (a), because neither model
is a "subset" of the other: compare {x₁, x₂, x₃, x₄} to {x₃, x₄, x₃x₄}.

c. The hypotheses are Ho: β₅ = ... = β₁₀ = 0 versus Ha: at least one of these βᵢ ≠ 0, where β₅ through β₁₀ are
the coefficients for the six interaction terms. The "partial F test" statistic is
f = [(SSE_reduced − SSE_full)/(10 − 4)] / [SSE_full/(n − (10 + 1))] = [(R²_full − R²_reduced)/(10 − 4)] / [(1 − R²_full)/(n − (10 + 1))]
= [(.960 − .596)/(10 − 4)] / [(1 − .960)/(16 − (10 + 1))] = 7.58, which is greater
than F.05,6,5 = 4.95. Hence, we reject Ho at the .05 level and conclude that at least one of the interaction
terms is a statistically significant predictor of power, in the presence of the first-order terms.

79. There are obviously several reasonable choices in each case. In a, the model with 6 carriers is a defensible
choice on all three grounds, as are those with 7 and 8 carriers. The models with 7, 8, or 9 carriers in b
merit serious consideration. These models merit consideration because R²_k, MSE_k, and C_k meet the variable
selection criteria given in Section 13.5.

81.
a. The relevant hypotheses are Ho: β₁ = ... = β₅ = 0 vs. Ha: at least one among β₁, ..., β₅ ≠ 0.
f = (.827/5) / (.173/111) = 106.1 ≥ F.05,5,111 ≈ 2.29, so P-value < .05. Hence, Ho is rejected in favor of the conclusion
that there is a useful linear relationship between Y and at least one of the predictors.


b. t.05,111 ≈ 1.66, so the CI is .041 ± (1.66)(.016) = .041 ± .027 = (.014, .068). This coefficient is the expected change in
mortality rate associated with a one-unit increase in the particle reading when the other four predictors
are held fixed; we can be 90% confident that this change is between .014 and .068.

c. In testing Ho: β = 0 versus Ha: β ≠ 0 for this predictor, t = (β̂ − 0)/s_β̂ = .047/.007 = 5.9, with an associated P-value of ≈ 0. So,
Ho is rejected and this predictor is judged important.

d. ŷ = 19.607 + .041(166) + .071(60) + .001(788) + .041(68) + 68.7(.95) = 99.514, and the corresponding residual is
103 − 99.514 = 3.486.

83. Taking logs, the regression model is ln(Y) = β₀ + β₁ln(x₁) + β₂ln(x₂) + ε′, where β₀ = ln(α). Relevant
Minitab output appears below.
a. From the output, β̂₀ = 10.8764, β̂₁ = −1.2060, β̂₂ = −1.3988. In the original model, solving for α returns
α̂ = exp(β̂₀) = e^10.8764 = 52,912.77.

b. From the output, R² = 78.2%, so 78.2% of the total variation in ln(wear life) can be explained by a
linear regression on ln(speed) and ln(load). From the ANOVA table, a test of Ho: β₁ = β₂ = 0 versus
Ha: at least one of these β's ≠ 0 produces f = 42.95 and P-value = 0.000, so we strongly reject Ho and
conclude that the model is useful.

c. Yes: the variable utility t tests for the two predictors have t = −7.05, P = 0.000 and t = −6.01, P =
0.000. These indicate that each variable is highly statistically significant.
d. With ln(50) ≈ 3.912 and ln(5) ≈ 1.609 substituted for the transformed x values, Minitab produced the
accompanying output. A 95% PI for ln(Y) at those settings is (2.652, 5.162). Solving for Y itself, the
95% PI of interest is (e^2.652, e^5.162) = (14.18, 174.51).

The regression equation is
ln(y) = 10.9 - 1.21 ln(x1) - 1.40 ln(x2)

Predictor      Coef  SE Coef      T      P
Constant    10.8764   0.7872  13.82  0.000
ln(x1)      -1.2060   0.1710  -7.05  0.000
ln(x2)      -1.3988   0.2327  -6.01  0.000

S = 0.596553   R-Sq = 78.2%   R-Sq(adj) = 76.3%

Analysis of Variance
Source          DF      SS      MS      F      P
Regression       2  30.568  15.284  42.95  0.000
Residual Error  24   8.541   0.356
Total           26  39.109

Predicted Values for New Observations
New Obs    Fit  SE Fit         95% CI          95% PI
      1  3.907   0.118  (3.663, 4.151)  (2.652, 5.162)

CHAPTER 14

Section 14.1

1. For each part, we reject Ho if the P-value is ≤ α, which occurs if and only if the calculated χ² value is greater
than or equal to the value χ²_{α,k−1} from Table A.7.
a. Since 12.25 ≥ χ²_.05,4 = 9.488, P-value ≤ .05 and we would reject Ho.
b. Since 8.54 < χ²_.01,3 = 11.344, P-value > .01 and we would fail to reject Ho.

c. Since 4.36 < χ²_.10,2 = 4.605, P-value > .10 and we would fail to reject Ho.

d. Since 10.20 < χ²_.01,5 = 15.085, P-value > .01 and we would fail to reject Ho.

3. The uniform hypothesis implies that p_{i0} = 1/8 = .125 for i = 1, ..., 8, so the null hypothesis is
Ho: p₁₀ = p₂₀ = ... = p₈₀ = .125. Each expected count is np = 120(.125) = 15, so
χ² = [(12 − 15)²/15 + ... + (10 − 15)²/15] = 4.80. At df = 8 − 1 = 7, 4.80 < 12.02 ⇒ P-value > .10 ⇒ we fail to
reject Ho. There is not enough evidence to disprove the claim.
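The conversion of the statistic to a P-value can be checked in Python. This sketch is an illustration only; since the full list of eight observed counts is not reproduced in this solution, it works from the statistic 4.80 already computed above (scipy.stats.chisquare could be applied directly to the raw counts):

    from scipy import stats

    chi2, df = 4.80, 7
    p_value = stats.chi2.sf(chi2, df)          # about .68, consistent with "P-value > .10"
    critical = stats.chi2.ppf(0.90, df)        # about 12.02, the chi-squared_.10,7 value used above
    print(round(p_value, 3), round(float(critical), 2))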

5. The observed values, expected values, and corresponding χ² terms are:

Obs       4     15     23     25     38     31     32     14     10      8
Exp     6.67  13.33  20.00  26.67  33.33  33.33  26.67  20.00  13.33   6.67
χ² term 1.069  .209   .450   .105   .654   .163  1.065  1.800   .832   .265

χ² = 1.069 + ... + .265 = 6.612. With df = 10 − 1 = 9, 6.612 < 14.68 ⇒ P-value > .10 ⇒ we cannot reject
Ho. There is no significant evidence that the data is not consistent with the previously determined
proportions.

7. We test Ho: p₁ = p₂ = p₃ = p₄ = .25 vs. Ha: at least one proportion ≠ .25, and df = 3.

Cell          1        2        3        4
Observed    328      334      372      327
Expected  340.25   340.25   340.25   340.25
χ² term   .4410    .1148    2.9627   .5160

χ² = 4.0345, and with 3 df, P-value > .10, so we fail to reject Ho. The data fails to indicate a seasonal
relationship with incidence of violent crime.


9.
a. Denoting the 5 intervals by [0, c₁), [c₁, c₂), ..., [c₄, ∞), we wish c₁ for which
.2 = P(0 ≤ X ≤ c₁) = ∫₀^{c₁} e^{−x} dx = 1 − e^{−c₁}, so c₁ = −ln(.8) = .2231. Then
.2 = P(c₁ ≤ X ≤ c₂) ⇒ .4 = P(0 ≤ X ≤ c₂) = 1 − e^{−c₂}, so c₂ = −ln(.6) = .5108. Similarly, c₃ = −ln(.4) =
.9163 and c₄ = −ln(.2) = 1.6094. The resulting intervals are [0, .2231), [.2231, .5108), [.5108, .9163),
[.9163, 1.6094), and [1.6094, ∞).

b. Each expected cell count is 40(.2) = 8, and the observed cell counts are 6, 8, 10, 7, and 9, so
χ² = [(6 − 8)²/8 + ... + (9 − 8)²/8] = 1.25. Because 1.25 < χ²_.10,4 = 7.779, even at level .10 Ho cannot be
rejected; the data is quite consistent with the specified exponential distribution.
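The cut points and the chi-square calculation in part (b) can be reproduced with the following Python sketch (an illustration only, using the observed counts quoted above):

    import math
    from scipy import stats

    cuts = [-math.log(1 - 0.2 * i) for i in range(1, 5)]   # .2231, .5108, .9163, 1.6094
    observed = [6, 8, 10, 7, 9]
    expected = [40 * 0.2] * 5                               # 8 per cell
    chi2, p = stats.chisquare(observed, expected)           # 1.25, P-value about .87 at 4 df
    print([round(c, 4) for c in cuts], round(chi2, 2), round(p, 3))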

11.
a. The six intervals must be symmetric about 0, so denote the 4th, 5th, and 6th intervals by [0, a), [a, b),
[b, ∞). The constant a must be such that Φ(a) = .6667 (= 1/2 + 1/6), which from Table A.3 gives a ≈ .43.
Similarly, Φ(b) = .8333 implies b ≈ .97, so the six intervals are (−∞, −.97), [−.97, −.43), [−.43, 0),
[0, .43), [.43, .97), and [.97, ∞).

b. The six intervals are symmetric about the mean of .5. From a, the fourth interval should extend from
the mean to .43 standard deviations above the mean, i.e., from .5 to .5 + .43(.002), which gives
[.5, .50086). Thus the third interval is [.5 − .00086, .5) = [.49914, .5). Similarly, the upper endpoint of
the fifth interval is .5 + .97(.002) = .50194, and the lower endpoint of the second interval is .5 − .00194
= .49806. The resulting intervals are (−∞, .49806), [.49806, .49914), [.49914, .5), [.5, .50086),
[.50086, .50194), and [.50194, ∞).

c. Each expected count is 45(1/6) = 7.5, and the observed counts are 13, 6, 6, 8, 7, and 5, so χ² = 5.53.
With 5 df, the P-value > .10, so we would fail to reject Ho at any of the usual levels of significance.
There is no significant evidence to suggest that the bolt diameters are not normally distributed with
μ = .5 and σ = .002.

Section 14.2

13. According to the stated model, the three cell probabilities are (1 − p)², 2p(1 − p), and p², so we wish the
value of p which maximizes (1 − p)^{2n₁}[2p(1 − p)]^{n₂}p^{2n₃}. Proceeding as in Example 14.6 gives
p̂ = (n₂ + 2n₃)/(2n) = 234/2776 = .0843. The estimated expected cell counts are then n(1 − p̂)² = 1163.85,
n[2p̂(1 − p̂)] = 214.29, and np̂² = 9.86. This gives
χ² = (1212 − 1163.85)²/1163.85 + (118 − 214.29)²/214.29 + (58 − 9.86)²/9.86 = 280.3. With df = 3 − 1 − 1 = 1, 280.3 > 10.83
⇒ P-value < .001 ⇒ Ho is soundly rejected. The stated model is strongly contradicted by the data.


15. The part of the likelihood involving θ is [(1 − θ)⁴]^{n₁} · [θ(1 − θ)³]^{n₂} · [θ²(1 − θ)²]^{n₃} ·
[θ³(1 − θ)]^{n₄} · [θ⁴]^{n₅} = θ^{n₂+2n₃+3n₄+4n₅}(1 − θ)^{4n₁+3n₂+2n₃+n₄} = θ^{233}(1 − θ)^{367}, so the log-likelihood is
233 ln θ + 367 ln(1 − θ). Differentiating and equating to 0 yields θ̂ = 233/600 = .3883, and (1 − θ̂) = .6117
[note that the exponent on θ is simply the total # of successes (defectives here) in the n = 4(150) = 600
trials]. Substituting this θ̂ into the formula for pᵢ yields estimated cell probabilities .1400, .3555, .3385,
.1433, and .0227. Multiplication by 150 yields the estimated expected cell counts 21.00, 53.33, 50.78,
21.50, and 3.41. The last estimated expected cell count is less than 5, so we combine the last two categories
into a single one (≥ 3 defectives), yielding estimated expected counts 21.00, 53.33, 50.78, 24.91, observed counts 26,
51, 47, 26, and χ² = 1.62. With df = 4 − 1 − 1 = 2, since 1.62 < χ²_.10,2 = 4.605, the P-value > .10, and we
do not reject Ho. The data suggests that the stated binomial distribution is plausible.

17. μ̂ = x̄ = [(0)(6) + (1)(24) + (2)(42) + ... + (8)(6) + (9)(2)]/300 = 1163/300 = 3.88, so the estimated cell probabilities
are computed from p̂(x) = e^{−3.88}(3.88)ˣ/x!.

x        0     1     2     3     4     5     6     7    ≥8
np̂(x)   6.2  24.0  46.6  60.3  58.5  45.4  29.4  16.3  13.3
obs      6    24    42    59    62    44    41    14     8

This gives χ² = 7.789. At df = 9 − 1 − 1 = 7, 7.789 < 12.02 ⇒ P-value > .10 ⇒ we fail to reject Ho. The
Poisson model does provide a good fit.
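These figures can be checked with a short Python sketch (an illustration only, using the binned expected counts quoted above; ddof = 1 accounts for the estimated Poisson mean, so the comparison uses 7 df):

    from scipy import stats

    observed = [6, 24, 42, 59, 62, 44, 41, 14, 8]
    expected = [6.2, 24.0, 46.6, 60.3, 58.5, 45.4, 29.4, 16.3, 13.3]
    chi2, p = stats.chisquare(observed, expected, ddof=1)   # about 7.8, P-value > .10
    print(round(chi2, 2), round(p, 3))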

19. With A = 2n₁ + n₄ + n₅, B = 2n₂ + n₄ + n₆, and C = 2n₃ + n₅ + n₆, the likelihood is proportional to
θ₁^A θ₂^B (1 − θ₁ − θ₂)^C. Taking the natural log and setting both ∂/∂θ₁ and ∂/∂θ₂ equal to zero gives
A/θ₁ = C/(1 − θ₁ − θ₂) and B/θ₂ = C/(1 − θ₁ − θ₂), whence θ₂ = Bθ₁/A. Substituting this into the first equation gives
θ₁ = A/(A + B + C), and then θ₂ = B/(A + B + C). Thus θ̂₁ = (2n₁ + n₄ + n₅)/(2n), θ̂₂ = (2n₂ + n₄ + n₆)/(2n), and
(1 − θ̂₁ − θ̂₂) = (2n₃ + n₅ + n₆)/(2n).
Substituting the observed nᵢ's yields θ̂₁ = [2(49) + 20 + 53]/400 = .4275, θ̂₂ = 110/400 = .2750, and
(1 − θ̂₁ − θ̂₂) = .2975, from which p̂₁ = (.4275)² = .183, p̂₂ = .076, p̂₃ = .089, p̂₄ = 2(.4275)(.275) = .235,
p̂₅ = .254, p̂₆ = .164.

Category     1      2      3      4      5      6
np̂         36.6   15.2   17.8   47.0   50.8   32.8
observed    49     26     14     20     53     38

This gives χ² = 29.1. At df = 6 − 1 − 2 = 3, this gives a P-value less than .001. Hence, we reject Ho.


21. The Ryan-Joiner test P-value is larger than .10, so we conclude that the null hypothesis of normality cannot
be rejected. This data could reasonably have come from a normal population. This means that it would be
legitimate to use a one-sample t test to test hypotheses about the true average ratio.

23. Minitab gives r = .967, though the hand-calculated value may be slightly different because when there are
ties among the x₍ᵢ₎'s, Minitab uses the same yᵢ for each x₍ᵢ₎ in a group of tied values. c.10 = .9707 and c.05 =
.9639, so .05 < P-value < .10. At the 5% significance level, one would have to consider population
normality plausible.

Section 14.3

25. The hypotheses are Ho: there is no association between extent of binge drinking and age group vs.
Ha: there is an association between extent of binge drinking and age group. With the aid of software, the
calculated test statistic value is χ² = 212.907. With all expected counts well above 5, we can compare this
value to a chi-squared distribution with df = (4 − 1)(3 − 1) = 6. The resulting P-value is ≈ 0, and so we
strongly reject Ho at any reasonable level (including .01). There is strong evidence of an association
between age and binge drinking for college-age males. In particular, comparing the observed and expected
counts shows that younger men tend to binge drink more than expected if Ho were true.

27. With i = 1 identified with men and i = 2 identified with women, and j = 1, 2, 3 denoting the 3 categories
L > R, L = R, L < R, we wish to test Ho: p₁ⱼ = p₂ⱼ for j = 1, 2, 3 vs. Ha: p₁ⱼ ≠ p₂ⱼ for at least one j. The estimated
expected cell counts for men are 17.95, 8.82, and 13.23 and for women are 39.05, 19.18, 28.77, resulting in a test
statistic of χ² = 44.98. With (2 − 1)(3 − 1) = 2 degrees of freedom, the P-value is < .001, which strongly
suggests that Ho should be rejected.

29.
a. The null hypothesis is Ho: p₁ⱼ = p₂ⱼ = p₃ⱼ for j = 1, 2, 3, 4, where pᵢⱼ is the proportion of the ith
population (natural scientists, social scientists, non-academics with graduate degrees) whose degree of
spirituality falls into the jth category (very, moderate, slightly, not at all).

From the accompanying Minitab output, the test statistic value is χ² = 213.212 with df = (3 − 1)(4 − 1) =
6, with an associated P-value of 0.000. Hence, we strongly reject Ho. These three populations are not
homogeneous with respect to their degree of spirituality.


Chi-Square Test: Very, Moderate, Slightly, Not At All

Expected counts are printed below observed counts


Chi-Square contributions are printed below expected counts

Very Moderate Slightly Not At All Total


1 56 162 198 211 627
78.60 195.25 183.16 170.00
6.497 5.662 1.203 9.889

2 56 223 243 239 761


95.39 236.98 222.30 206.33
16.269 0.824 1.928 5.173

3 109 164 74 28 375


47.01 116.78 109.54 101.67
81.752 19.098 11.533 53.384

Total 221 549 515 478 1763

Chi-Sq = 213.212, DF = 6, P-Value = 0.000

b. We're now testing Ho: p₁ⱼ = p₂ⱼ for j = 1, 2, 3, 4 under the same notation. The accompanying Minitab
output shows χ² = 3.091 with df = (2 − 1)(4 − 1) = 3 and an associated P-value of 0.378. Since this is
larger than any reasonable significance level, we fail to reject Ho. The data provides no statistically
significant evidence that the populations of social and natural scientists differ with respect to degree of
spirituality.

Chi-Square Test: Very, Moderate, Slightly, Not At All

Expected counts are printed below observed counts


Chi-Square contributions are printed below expected counts

Very Moderate Slightly Not At All Total


1 56 162 198 211 627
50.59 173.92 199.21 203.28
0.578 0.816 0.007 0.293

2 56 223 243 239 761


61.41  211.08  241.79  246.72
0.476 0.673 0.006 0.242

Total 112 385 441 450 1388

Chi-Sq = 3.091, DF = 3, P-Value = 0.378


31.
a. The accompanying table shows the proportions of male and female smokers in the sample who began
smoking at the ages specified. (The male proportions were calculated by dividing the counts by the
total of 96; for females, we divided by 93.) The patterns of the proportions seem to be different,
suggesting there does exist an association between gender and age at first smoking.

                 Gender
               Male   Female
      <16      0.26    0.11
Age   16-17    0.25    0.34
      18-20    0.29    0.18
      >20      0.20    0.37

b. The hypotheses, in words, are Ho: gender and age at first smoking are independent, versus Ha: gender
and age at first smoking are associated. The accompanying Minitab output provides a test statistic
value of χ² = 14.462 at df = (2 − 1)(4 − 1) = 3, with an associated P-value of 0.002. Hence, we would
reject Ho at both the .05 and .01 levels. We have evidence to suggest an association between gender
and age at first smoking.

Chi-Square Test: Male, Female

Expected counts are printed below observed counts


Chi-Square contributions are printed below expected counts

Male Female Total


1 25 10 35
17.78 17.22
2.934 3.029

2 24 32 56
28.44 27.56
0.694 0.717

3 28 17 45
22.86 22.14
1.157 1.194

4 19 34 53
26.92 26.08
2.330 2.406

Total 96 93 189

Chi-Sq = 14.462, DF = 3, P-Value = 0.002
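The same test can be reproduced directly from the observed counts in the output above with a short Python sketch (an illustration only; correction=False gives the ordinary Pearson statistic):

    import numpy as np
    from scipy.stats import chi2_contingency

    table = np.array([[25, 10],
                      [24, 32],
                      [28, 17],
                      [19, 34]])       # rows: the four age categories; columns: Male, Female
    chi2, p, df, expected = chi2_contingency(table, correction=False)
    print(round(chi2, 3), df, round(p, 3))    # 14.462, 3, 0.002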


33.
χ² = ΣΣ (N_{ij} − Ê_{ij})²/Ê_{ij} = ΣΣ [N_{ij}² − 2Ê_{ij}N_{ij} + Ê_{ij}²]/Ê_{ij} = ΣΣ N_{ij}²/Ê_{ij} − 2ΣΣ N_{ij} + ΣΣ Ê_{ij}. But ΣΣ Ê_{ij} = ΣΣ N_{ij} = n, so
χ² = ΣΣ N_{ij}²/Ê_{ij} − n. This formula is computationally efficient because there is only one subtraction to be
performed, which can be done as the last step in the calculation.

35. With p_{ij} denoting the common value of p_{ij1}, p_{ij2}, p_{ij3}, and p_{ij4} under Ho, p̂_{ij} = n_{ij·}/n and Ê_{ijk} = n_k n_{ij·}/n, where
n_{ij·} = Σ_{k=1}^4 n_{ijk} and n = Σ_{k=1}^4 n_k. With four different tables (one for each region), there are 4(9 − 1) = 32
freely determined cell counts. Under Ho, the nine parameters p₁₁, ..., p₃₃ must be estimated, but ΣΣ p_{ij} = 1,
so only 8 independent parameters are estimated, giving df = 32 − 8 = 24. Note: this is really a test of
homogeneity for 4 strata, each with 3×3 = 9 categories. Hence, df = (4 − 1)(9 − 1) = 24.

Supplementary Exercises

37. There are 3 categories here: firstborn, middleborn (2nd or 3rd born), and lastborn. With p₁, p₂, and p₃
denoting the category probabilities, we wish to test Ho: p₁ = .25, p₂ = .50, p₃ = .25 because p₂ = P(2nd or 3rd
born) = .25 + .25 = .50. The expected counts are (31)(.25) = 7.75, (31)(.50) = 15.5, and 7.75, so
χ² = (12 − 7.75)²/7.75 + (11 − 15.5)²/15.5 + (8 − 7.75)²/7.75 = 3.65. At df = 3 − 1 = 2, 3.65 < 5.992 ⇒ P-value > .05 ⇒ Ho is
not rejected. The hypothesis of equiprobable birth order appears plausible.

39.
a. For that top-left cell, the estimated expected count is (row total)(column total)/(grand total) =
(189)(406)/(852) = 90.06. Next, the chi-squared contribution is (O − E)²/E = (83 − 90.06)²/90.06 =
0.554.

b. No: from the software output, the P-value is .023 > .01. Hence, we fail to reject the null hypothesis of
"no association" at the .01 level. We have insufficient evidence to conclude that an association exists
between cognitive state and drug status. [Note: We would arrive at a different conclusion for α = .05.]

41. The null hypothesis Ho: p_{ij} = p_{i·}p_{·j} states that level of parental use and level of student use are independent
in the population of interest. The test is based on (3 − 1)(3 − 1) = 4 df.
Estimated expected counts:
119.3   57.6   58.1 | 235
 82.8   39.9   40.3 | 163
 23.9   11.5   11.6 |  47
  226    109    110 | 445

The calculated test statistic value is χ² = 22.4; at df = (3 − 1)(3 − 1) = 4, the P-value is < .001, so Ho should
be rejected at any reasonable significance level. Parental and student use level do not appear to be
independent.


43. This is a test of homogeneity: Ho: p₁ⱼ = p₂ⱼ = p₃ⱼ for j = 1, 2, 3, 4, 5. The given SPSS output reports the
calculated χ² = 70.64156 and accompanying P-value (significance) of .0000. We reject Ho at any
significance level. The data strongly supports that there are differences in perception of odors among the
three areas.

45. (n₁ − np₁₀)² = (np₁₀ − n₁)² = (n − n₁ − n(1 − p₁₀))² = (n₂ − np₂₀)². Therefore
χ² = (n₁ − np₁₀)²/(np₁₀) + (n₂ − np₂₀)²/(np₂₀) = [(n₁ − np₁₀)²/n]·(1/p₁₀ + 1/p₂₀)
   = (p̂₁ − p₁₀)²·[n/(p₁₀p₂₀)] = [(p̂₁ − p₁₀)/√(p₁₀p₂₀/n)]² = z².

47.
a. Our hypotheses are Ho: no difference in proportion of concussions among the three groups vs. Ha: there
is a difference in proportion of concussions among the three groups.

Observed       Concussion   No Concussion   Total
Soccer             45            46           91
Non Soccer         28            68           96
Control             8            45           53
Total              81           159          240

Expected       Concussion   No Concussion   Total
Soccer           30.7125       60.2875        91
Non Soccer       32.4          63.6           96
Control          17.8875       37.1125        53
Total            81           159            240

χ² = (45 − 30.7125)²/30.7125 + (46 − 60.2875)²/60.2875 + (28 − 32.4)²/32.4 + (68 − 63.6)²/63.6
   + (8 − 17.8875)²/17.8875 + (45 − 37.1125)²/37.1125 = 19.1842.
The df for this test is (3 − 1)(2 − 1) = 2, so the P-value is
less than .001 and we reject Ho. There is a difference in the proportion of concussions based on
whether a person plays soccer.

b. The sample correlation of r = −.220 indicates a weak negative association between "soccer exposure"
and immediate memory recall. We can formally test the hypotheses Ho: ρ = 0 vs. Ha: ρ < 0. The test
statistic is t = r√(n − 2)/√(1 − r²) = −.22√89/√(1 − .22²) = −2.13. At significance level α = .01, we would fail to reject Ho
and conclude that there is no significant evidence of negative association in the population.


c. We will test to see if the average score on a controlled word association test is the same for soccer and
non-soccer athletes. Ho: μ₁ = μ₂ vs. Ha: μ₁ ≠ μ₂. Since the two sample standard deviations are very
close, we will use a pooled-variance two-sample t test. From Minitab, the test statistic is t = −0.91, with
an associated P-value of 0.366 at 80 df. We clearly fail to reject Ho and conclude that there is no
statistically significant difference in the average score on the test for the two groups of athletes.

d. Our hypotheses for ANOVA are Ho: all means are equal vs. Ha: not all means are equal. The test
statistic is f = MSTr/MSE.
SSTr = 91(.30 − .35)² + 96(.49 − .35)² + 53(.19 − .35)² = 3.4659, so MSTr = 3.4659/2 = 1.73295.
SSE = 90(.67)² + 95(.87)² + 52(.48)² = 124.2873, and MSE = 124.2873/237 = .5244.

Now, f = 1.73295/.5244 = 3.30. Using df = (2, 200) from Table A.9, the P-value is between .01 and .05. At
significance level .05, we reject the null hypothesis. There is sufficient evidence to conclude that there
is a difference in the average number of prior non-soccer concussions between the three groups.

49. According to Benford's law, the probability a lead digit equals x is given by log₁₀(1 + 1/x) for x = 1, ..., 9.
Let pᵢ = the proportion of Fibonacci numbers whose lead digit is i (i = 1, ..., 9). We wish to perform a
goodness-of-fit test Ho: pᵢ = log₁₀(1 + 1/i) for i = 1, ..., 9. (The alternative hypothesis is that Benford's
formula is incorrect for at least one category.) The table below summarizes the results of the test.

Digit     1      2      3      4     5     6     7     8     9
Obs.#    25     16     11      7     7     5     4     6     4
Exp.#  25.59  14.97  10.62   8.24  6.73  5.69  4.93  4.35  3.89
Expected counts are calculated by np̂ᵢ = 85 log₁₀(1 + 1/i). Some of the expected counts are too small, so
combine 6 and 7 into one category (obs = 9, exp = 10.62); do the same to 8 and 9 (obs = 10, exp = 8.24).

The resulting chi-squared statistic is χ² = (25 − 25.59)²/25.59 + ... + (10 − 8.24)²/8.24 = 0.92 at df = 7 − 1 = 6 (since
there are 7 categories after the earlier combining). Software provides a P-value of .988!

We certainly do not reject Ho - the lead digits of the Fibonacci sequence are highly consistent with
Benford's law.
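The Benford expected counts and the combined chi-square test can be reproduced with the following Python sketch (an illustration only, using the observed lead-digit counts listed above):

    import math
    from scipy import stats

    obs = [25, 16, 11, 7, 7, 5, 4, 6, 4]                     # lead digits 1-9 of the 85 numbers
    exp = [85 * math.log10(1 + 1 / d) for d in range(1, 10)]

    obs_c = obs[:5] + [obs[5] + obs[6], obs[7] + obs[8]]      # combine digits 6-7 and 8-9
    exp_c = exp[:5] + [exp[5] + exp[6], exp[7] + exp[8]]
    chi2, p = stats.chisquare(obs_c, exp_c)                   # about 0.92 with 6 df, P-value .988
    print(round(chi2, 2), round(p, 3))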

CHAPTER 15

Section 15.1

1. Refer to Table A.13.

a. With n = 12, P₀(S₊ ≥ 56) = .102.

b. With n = 12, 61 < 62 < 64 ⇒ P₀(S₊ ≥ 62) is between .046 and .026.

c. With n = 12 and a lower-tailed test, P-value = P₀(S₊ ≥ n(n + 1)/2 − s₊) = P₀(S₊ ≥ 12(13)/2 − 20) =
P₀(S₊ ≥ 58). Since 56 < 58 < 60, the P-value is between .055 and .102.

d. With n = 14 and a two-tailed test, P-value = 2P₀(S₊ ≥ max{21, 14(15)/2 − 21}) = 2P₀(S₊ ≥ 84) = .025.

e. With n = 25 being "off the chart," use the large-sample approximation:
z = [s₊ − n(n + 1)/4] / √[n(n + 1)(2n + 1)/24] = [300 − 25(26)/4] / √[25(26)(51)/24] = 3.7 ⇒ two-tailed P-value = 2P(Z ≥ 3.7) ≈ .0002.

3. We test Ho: μ = 7.39 vs. Ha: μ ≠ 7.39, so a two-tailed test is appropriate. The (xᵢ − 7.39)'s are −.37, −.04,
−.05, −.22, −.11, .38, −.30, −.17, .06, −.44, .01, −.29, −.07, and −.25, from which the ranks of the three
positive differences are 1, 4, and 13. Thus s₊ = 1 + 4 + 13 = 18, and the two-tailed P-value is given by
2P₀(S₊ ≥ max{18, 14(15)/2 − 18}) = 2P₀(S₊ ≥ 87), which is between 2(.025) and 2(.010), or .05 and .02. In
particular, since P-value < .05, Ho is rejected at level .05.

5. The data are paired, and we wish to test Ho: μ_D = 0 vs. Ha: μ_D ≠ 0.

dᵢ     -.3   2.8   3.9   .6   1.2  -1.1   2.9   1.8   .5   2.3   .9   2.5
rank    1    10    12    3     6     5    11     7    2     8    4     9

(The ranks 1 and 5 correspond to the two negative differences.) s₊ = 10 + 12 + ... + 9 = 72, so the 2-tailed P-value is
2P₀(S₊ ≥ max{72, 12(13)/2 − 72}) = 2P₀(S₊ ≥ 72) < 2(.005) = .01. Therefore, Ho is rejected at level .05.

7. The data are paired, and we wish to test Ho: μ_D = .20 vs. Ha: μ_D > .20, where μ_D = μ_outdoor − μ_indoor. Because
n = 33, we'll use the large-sample test.

  dᵢ    dᵢ−.2   rank      dᵢ    dᵢ−.2   rank      dᵢ    dᵢ−.2   rank
 0.15   -0.05    5.5     0.22    0.02     2      0.63    0.43    23
 1.37    1.17   32       0.01   -0.19    17      0.23    0.03     4
 0.48    0.28   21       0.38    0.18    16      0.96    0.76    31
 0.11   -0.09    8       0.42    0.22    19      0.20    0.00     1
 0.03   -0.17   15       0.85    0.65    29     -0.02   -0.22    18
 0.83    0.63   28       0.23    0.03     3      0.03   -0.17    14
 1.39    1.19   33       0.36    0.16    13      0.87    0.67    30
 0.68    0.48   25       0.70    0.50    26      0.30    0.10     9.5
 0.30    0.10    9.5     0.71    0.51    27      0.31    0.11    11
-0.11   -0.31   22       0.13   -0.07     7      0.45    0.25    20
 0.31    0.11   12       0.15   -0.05     5.5   -0.26   -0.46    24


From the table, s₊ = 424, so z = [s₊ − n(n + 1)/4] / √[n(n + 1)(2n + 1)/24] = (424 − 280.5)/√3132.25 = 143.5/55.9665 = 2.56. The upper-tailed
P-value is P(Z ≥ 2.56) = .0052 < .05, so we reject Ho. There is statistically significant evidence that the true
mean difference between outdoor and indoor concentrations exceeds .20 nanograms/m³.
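The large-sample z calculation can be reproduced with a brief Python sketch (an illustration only, taking s₊ = 424 and n = 33 from the table above):

    import math
    from scipy import stats

    s_plus, n = 424, 33
    mean = n * (n + 1) / 4                              # 280.5
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)      # sqrt(3132.25)
    z = (s_plus - mean) / sd                            # about 2.56
    p = stats.norm.sf(z)                                # upper-tailed P-value, about .0052
    print(round(z, 2), round(p, 4))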

9.
R1   1  1  1  1  1  1  2  2  2  2  2  2
R2   2  2  3  3  4  4  1  1  3  3  4  4
R3   3  4  2  4  2  3  3  4  1  4  1  3
R4   4  3  4  2  3  2  4  3  4  1  3  1
D    0  2  2  6  6  8  2  4  6 12 10 14

R1   3  3  3  3  3  3  4  4  4  4  4  4
R2   1  1  2  2  4  4  1  1  2  2  3  3
R3   2  4  1  4  1  2  2  3  1  3  1  2
R4   4  2  4  1  2  1  3  2  3  1  2  1
D    6 10  8 14 16 18 12 14 14 18 18 20

When Ho is true, each of the above 24 rank sequences is equally likely, which yields the distribution of D:

d 0 2 4 6 8 10 12 14 16 18 20
p(d) 1/24 3/24 1/24 4/24 2/24 2/24 2/24 4/24 1/24 3/24 1/24

Then c = 0 yields α = 1/24 = .042 (too small) while c = 2 implies α = 1/24 + 3/24 = .167, and this is the
closest we can come to achieving a .10 significance level.

Section 15.2

11. The ordered combined sample is 163(y), 179(y), 213(y), 225(y), 229(x), 245(x), 247(y), 250(x), 286(x), and
299(x), so w = 5 + 6 + 8 + 9 + 10 = 38. With m = n = 5, Table A.14 gives P-value = P₀(W ≥ 38), which is
between .008 and .028. In particular, P-value < .05, so Ho is rejected in favor of Ha.

13. Identifying x with the unpolluted region (m = 5) and y with the polluted region (n = 7), we wish to test the
hypotheses Ho: μ₁ − μ₂ = 0 vs. Ha: μ₁ − μ₂ < 0. The x ranks are 1, 5, 4, 6, 9, so w = 25. Because the test is
lower-tailed, P-value = P₀(W ≥ 5(5 + 7 + 1) − 25) = P₀(W ≥ 40) > .053. So, we fail to
reject Ho at the .05 level: there is insufficient evidence to conclude that the true average fluoride level is
higher in polluted areas.

15. Let μ₁ and μ₂ denote true average cotanine levels in unexposed and exposed infants, respectively. The
hypotheses of interest are Ho: μ₁ − μ₂ = −25 vs. Ha: μ₁ − μ₂ < −25. Before ranking, −25 is subtracted from
each xᵢ (i.e., 25 is added to each), giving 33, 36, 37, 39, 45, 68, and 136. The corresponding x ranks in the
combined set of 15 observations are 1, 3, 4, 5, 6, 8, and 12, from which w = 1 + 3 + ... + 12 = 39. With m =
7 and n = 8, P-value = P₀(W ≥ 7(7 + 8 + 1) − 39) = P₀(W ≥ 73) = .027. Therefore, Ho is rejected at the .05
level. The true average level for exposed infants appears to exceed that for unexposed infants by more than
25 (note that Ho would not be rejected using level .01).
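The rank-sum bookkeeping here can be checked with a minimal Python sketch (an illustration only; the shifted x ranks are taken from the solution, and the reflection converts the lower-tailed value into the upper-tail form tabulated in Table A.14):

    x_ranks = [1, 3, 4, 5, 6, 8, 12]      # ranks of the shifted x observations
    m, n = 7, 8
    w = sum(x_ranks)                       # 39
    w_reflected = m * (m + n + 1) - w      # 73
    print(w, w_reflected)                  # P0(W >= 73) = .027 from Table A.14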


Section 15.3

17. n = 8, so from Table A.15, a 95% CI (actually 94.5%) has the form (x̄₍₃₆₋₃₂₊₁₎, x̄₍₃₂₎) = (x̄₍₅₎, x̄₍₃₂₎). It is easily
verified that the 5 smallest pairwise averages are (5.0 + 5.0)/2 = 5.00, (5.0 + 11.8)/2 = 8.40, (5.0 + 12.2)/2 = 8.60,
(5.0 + 17.0)/2 = 11.00, and (5.0 + 17.3)/2 = 11.15 (the smallest average not involving 5.0 is
x̄₍₆₎ = (11.8 + 11.8)/2 = 11.8), and the 5 largest averages are 30.6, 26.0, 24.7, 23.95, and 23.80, so the confidence
interval is (11.15, 23.80).

19. First, we must recognize this as a paired design; the eight differences (Method 1 minus Method 2) are
−0.33, −0.41, −0.71, 0.19, −0.52, 0.20, −0.65, and −0.14. With n = 8, Table A.15 gives c = 32, and a 95% CI
for μ_D is (x̄₍₈₍₈₊₁₎/₂₋₃₂₊₁₎, x̄₍₃₂₎) = (x̄₍₅₎, x̄₍₃₂₎).
Of the 36 pairwise averages created from these 8 differences, the 5th smallest is x̄₍₅₎ = −0.585, and the
5th largest (aka the 32nd smallest) is x̄₍₃₂₎ = 0.025. Therefore, we are 94.5% confident the true mean
difference in extracted creosote between the two solvents, μ_D, lies in the interval (−.585, .025).

21. m = n = 5 and from Table A.16, c = 21 and the 90% (actually 90.5%) interval is (d_{ij(5)}, d_{ij(21)}). The five
smallest xᵢ − yⱼ differences are −18, −2, 3, 4, 16 while the five largest differences are 136, 123, 120, 107, 87
(construct a table like Table 15.5), so the desired interval is (16, 87).

Section 15.4

23. Below we record in parentheses beside each observation the rank of that observation in the combined
sample.

1:  5.8(3)   6.1(5)    6.4(6)    6.5(7)    7.7(10)    r₁. = 31

2:  7.1(9)   8.8(12)   9.9(14)  10.5(16)  11.2(17)    r₂. = 68

3:  5.1(1)   5.7(2)    5.9(4)    6.6(8)    8.2(11)    r₃. = 26

4:  9.5(13) 10.3(15)  11.7(18)  12.1(19)  12.4(20)    r₄. = 85

The computed value of k is k = [12/(20·21)]·[(31² + 68² + 26² + 85²)/5] − 3(21) = 14.06. At 3 df, the P-value is <
.005, so we reject Ho.
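The Kruskal-Wallis statistic can be reproduced from the rank totals with a short Python sketch (an illustration only; scipy.stats.kruskal would give the same value from the raw data):

    from scipy import stats

    rank_totals = [31, 68, 26, 85]
    sizes = [5, 5, 5, 5]
    n = sum(sizes)                                                    # 20
    k = 12 / (n * (n + 1)) * sum(r * r / m for r, m in zip(rank_totals, sizes)) - 3 * (n + 1)
    p = stats.chi2.sf(k, df=len(sizes) - 1)                           # compared to chi-square, 3 df
    print(round(k, 2), round(p, 4))                                   # 14.06, about .003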

25. The ranks are 1, 3, 4, 5, 6, 7, 8, 9, 12, 14 for the first sample; 11, 13, 15, 16, 17, 18 for the second; 2, 10,
19, 20, 21, 22 for the third; so the rank totals are 69, 90, and 94.
k = [12/(22·23)]·[69²/10 + 90²/6 + 94²/6] − 3(23) = 9.23; at 2 df, the P-value is roughly .01. Therefore, we reject
Ho: μ₁ = μ₂ = μ₃ at the .05 level.


27.
The within-block ranks (blocks 1-10) for the three treatments give rank totals r₁. = 19, r₂. = 17, and r₃. = 24,
so Σ rᵢ.² = 361 + 289 + 576 = 1226.

The computed value of Fᵣ is [12/(10(3)(4))](1226) − 3(10)(4) = 2.60. At 2 df, P-value > .10, and so we don't
reject Ho at the .05 level.

Supplementary Exercises

29. Friedman's test is appropriate here. It is easily verified that r₁. = 28, r₂. = 29, r₃. = 16, r₄. = 17, from
which the defining formula gives fᵣ = 9.62 and the computing formula gives fᵣ = 9.67. Either way, at 3 df
the P-value is < .025, and so we reject Ho: α₁ = α₂ = α₃ = α₄ = 0 at the .05 level. We conclude that there are
effects due to different years.

31. From Table A.16, m = n = 5 implies that c = 22 for a confidence level of 95%, so
mn − c + 1 = 25 − 22 + 1 = 4. Thus the confidence interval extends from the 4th smallest difference to the
4th largest difference. The 4 smallest differences are −7.1, −6.5, −6.1, −5.9, and the 4 largest are −3.8, −3.7,
−3.4, −3.2, so the CI is (−5.9, −3.8).

33.
a. With "success" as defined, then Y is binomial with n = 20. To determine the binomial proportionp, we
realize that since 25 is the bypothesized median, 50% of the distribution should be above 25, tbus
when Ho is true p = .50. The upper-tailed P-value is P(Y~ IS when Y - Bin(20, .5)) ~ I - 8(14; 20, .5)
= .021.
b. For the given data, y = (# of sample observations that exceed 25) = 12. Analogous to a, the P-value is
then P(Y~ 12 when Y- Bin(20, .5)) ~ I - B(ll; 20, .5) = .252. Since the P-value is large, we fail to
reject Ho - we have insufficient evidence to conclude that the population median exceeds 25.

35.
Sample:        y     x     y     y     x     x     x     y     y
Observations: 3.7   4.0   4.1   4.3   4.4   4.8   4.9   5.1   5.6
Rank:          1     3     5     7     9     8     6     4     2

The value of W′ for this data is w′ = 3 + 6 + 8 + 9 = 26. With m = 4 and n = 5, the upper-tailed P-value is
P₀(W ≥ 26) > .056. Thus, Ho cannot be rejected at level .05.

CHAPTER 16
Section 16.1

1. All ten values of the quality statistic are between the two control limits, so no out-of-control signal is
generated.

ili
3. P(lO successive points inside the limits) = P(1" inside) x P(2nd inside) x ... x P(IO inside) = (.998)" ~
.9802. P(25 successive points inside the limits) = (.998)" = .9512. (.998)" ~ .9011, but (.998)" ~ .8993,
so for 53 successive points the probability that at least one will fall outside the control limits when the
process is in control is I - .8993 ~ .1007 > .10.

5.
a. For the case of 4(a), with σ = .02, Cp = (USL − LSL)/(6σ) = (3.1 − 2.9)/(6(.02)) = 1.67. This is indeed a very good
capability index. In contrast, the case of 4(b) with σ = .05 has a capability index of Cp = (3.1 − 2.9)/(6(.05)) =
0.67. This is quite a bit less than 1, the dividing line for "marginal capability."

b. For the case of 4(a), with μ = 3.04 and σ = .02, (USL − μ)/(3σ) = (3.1 − 3.04)/(3(.02)) = 1 and (μ − LSL)/(3σ) = (3.04 − 2.9)/(3(.02)) =
2.33, so Cpk = min{1, 2.33} = 1.
For the case of 4(b), with μ = 3.00 and σ = .05, (USL − μ)/(3σ) = (3.1 − 3.00)/(3(.05)) = .67 and (μ − LSL)/(3σ) = (3.00 − 2.9)/(3(.05)) =
.67, so Cpk = min{.67, .67} = .67. Even using this mean-adjusted capability index, process (a) is more
"capable" than process (b), though Cpk for process (a) is now right at the "marginal capability"
threshold.
c. In general, Cpk ≤ Cp, and they are equal if and only if μ = (LSL + USL)/2, i.e. the process mean is the midpoint of the
spec limits. To demonstrate this, suppose first that μ = (LSL + USL)/2. Then
(USL − μ)/(3σ) = [USL − (LSL + USL)/2]/(3σ) = [2USL − (LSL + USL)]/(6σ) = (USL − LSL)/(6σ) = Cp, and similarly
(μ − LSL)/(3σ) = Cp. In that case, Cpk = min{Cp, Cp} = Cp.
Otherwise, suppose μ is closer to the lower spec limit than to the upper spec limit (but between the
two), so that μ − LSL < USL − μ. In such a case, Cpk = (μ − LSL)/(3σ). However, in this same case μ <
(LSL + USL)/2, from which (μ − LSL)/(3σ) < [(LSL + USL)/2 − LSL]/(3σ) = (USL − LSL)/(6σ) = Cp. That is, Cpk < Cp.
Analogous arguments for all other possible values of μ also yield Cpk < Cp.
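The numerical comparisons in parts (a) and (b) can be packaged as a small helper; the Python sketch below is an illustration only, using the spec limits, means, and standard deviations quoted above:

    def capability(mu, sigma, lsl, usl):
        cp = (usl - lsl) / (6 * sigma)
        cpk = min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))
        return cp, cpk

    print(capability(3.04, 0.02, 2.9, 3.1))   # process 4(a): Cp = 1.67, Cpk = 1.00
    print(capability(3.00, 0.05, 2.9, 3.1))   # process 4(b): Cp = 0.67, Cpk = 0.67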


Section 16.2
7.
a. P(point falls outside the limits when μ = μ₀ + .5σ) = 1 − P(μ₀ − 3σ/√n < X̄ < μ₀ + 3σ/√n when μ = μ₀ + .5σ)
   = 1 − P(−3 − .5√n < Z < 3 − .5√n) = 1 − P(−4.12 < Z < 1.88) = 1 − .9699 = .0301.

b. 1 − P(μ₀ − 3σ/√n < X̄ < μ₀ + 3σ/√n when μ = μ₀ − σ) = 1 − P(−3 + √n < Z < 3 + √n)
   = 1 − P(−.76 < Z < 5.24) = .2236.

c. 1 − P(−3 − 2√n < Z < 3 − 2√n) = 1 − P(−7.47 < Z < −1.47) = .9292.

9. The grand mean is x̄ = 12.95 and s̄ = .526, so with a₅ = .940, the control limits are
12.95 ± 3(.526)/(.940√5) = 12.95 ± .75 = 12.20, 13.70. Again, every point (x̄ᵢ) is between these limits, so there is no
evidence of an out-of-control process.

11. The grand mean is x̄ = 2317.07/24 = 96.54, s̄ = 1.264, and a₆ = .952, giving the control limits
96.54 ± 3(1.264)/(.952√6) = 96.54 ± 1.63 = 94.91, 98.17. The value of x̄ on the 22nd day lies above the UCL, so the
process appears to be out of control at that time.

13.

a. p(JlO - 2"!},r < X < 1'0+ 2~(]' when Il = 1'0) ~ P(-2.81 < 2 < 2.81) ~ .995, so the probability that a

point falls outside the limits is .005 and ARL = _1_ = 200 .
.005

b. P(a point is inside the limits) = P(μ₀ − 2.81σ/√n < X̄ < μ₀ + 2.81σ/√n when μ = μ₀ + σ) =
P(−2.81 − √n < Z < 2.81 − √n) = P(−4.81 < Z < .81) [when n = 4] ≈ Φ(.81) = .7910 ⇒
p = P(a point is outside the limits) = 1 − .7910 = .2090 ⇒ ARL = 1/.2090 = 4.78.

c. Replace 2.81 with 3 above. For a, P(−3 < Z < 3) = .9974, so p = 1 − .9974 = .0026 and
ARL = 1/.0026 = 385 for an in-control process. When μ = μ₀ + σ as in b, the probability of an out-of-
control point is 1 − P(−3 − √n < Z < 3 − √n) = 1 − P(−5 < Z < 1) ≈ 1 − Φ(1) = .1587, so
ARL = 1/.1587 = 6.30.


15. The grand mean is x̄ = 12.95, the mean IQR is .4273, and k₅ = .990. The control limits are 12.95 ± 3(.4273)/(.990√5) = 12.37, 13.53.

Section 16.3

17.
a. r̄ = 85.2/30 = 2.84, b₄ = 2.058, and c₄ = .880. Since n = 4, LCL = 0 and UCL
= 2.84 + 3(.880)(2.84)/2.058 = 2.84 + 3.64 = 6.48.

b. r̄ = 3.54, b₈ = 2.844, and c₈ = .820, and the control limits are
3.54 ± 3(.820)(3.54)/2.844 = 3.54 ± 3.06 = .48, 6.60.

19. s̄ = 1.2642, a₆ = .952, and the control limits are
1.2642 ± 3(1.2642)√(1 − (.952)²)/.952 = 1.2642 ± 1.2194 = .045, 2.484. The smallest of the sᵢ's is .75, and the largest
is 1.65, so every value is between .045 and 2.484. The process appears to be in control with respect to
variability.

Section 16.4

21. p̄ = Σp̂ᵢ/k, where Σp̂ᵢ = x₁/n + ... + x_k/n = (x₁ + ... + x_k)/n = 578/100 = 5.78. Thus p̄ = 5.78/25 = .231.

a. The control limits are .231 ± 3√[(.231)(.769)/100] = .231 ± .126 = .105, .357.

b. 13/100 = .130, which is between the limits, but 39/100 = .390, which exceeds the upper control limit and
therefore generates an out-of-control signal.

23. LCL > 0 when p̄ > 3√[p̄(1 − p̄)/n], i.e. (after squaring both sides) np̄² > 9p̄(1 − p̄). Here this reduces to
50p̄ > 3(1 − p̄), i.e. 53p̄ > 3 ⇒ p̄ > 3/53 = .0566.

25. Σxi = 102, x̄ = 4.08, and x̄ ± 3√x̄ = 4.08 ± 6.06 = (-2.0, 10.1). Thus LCL = 0 and UCL = 10.1. Because no xi exceeds 10.1, the process is judged to be in control.


27. With ui = xi/gi, the ui's are 3.75, 3.33, 3.75, 2.50, 5.00, 5.00, 12.50, 12.00, 6.67, 3.33, 1.67, 3.75, 6.25, 4.00, 6.00, 12.00, 3.75, 5.00, 8.33, and 1.67 for i = 1, ..., 20, giving ū = 5.5125. For gi = .6, ū ± 3√(ū/gi) = 5.5125 ± 9.0933 gives LCL = 0, UCL = 14.6. For gi = .8, ū ± 3√(ū/gi) = 5.5125 ± 7.875 gives LCL = 0, UCL = 13.4. For gi = 1.0, ū ± 3√(ū/gi) = 5.5125 ± 7.0436 gives LCL = 0, UCL = 12.6. Several ui's are close to the corresponding UCLs, but none exceed them, so the process is judged to be in control.
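For this u chart the UCL is ū + 3√(ū/gi), which changes with the area of opportunity gi. Recomputing the three UCLs from ū = 5.5125:

from math import sqrt

u_bar = 5.5125
for g in (0.6, 0.8, 1.0):
    ucl = u_bar + 3 * sqrt(u_bar / g)    # LCL is 0 here, since u_bar - 3*sqrt(u_bar/g) < 0
    print(g, round(ucl, 1))
# prints 14.6, 13.4, 12.6, as above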

Section 16.5

29. μ0 = 16, k = Δ/2 = .05, h = .20, dt = max(0, dt-1 + (x̄t - 16.05)), et = max(0, et-1 - (x̄t - 15.95)).

    t    x̄t - 16.05    dt       x̄t - 15.95    et
    1      -0.058      0          0.024       0
    2       0.001      0.001      0.101       0
    3       0.016      0.017      0.116       0
    4      -0.138      0         -0.038       0.038
    5      -0.020      0          0.080       0
    6       0.010      0.010      0.110       0
    7      -0.068      0          0.032       0
    8      -0.151      0         -0.054       0.054
    9      -0.012      0          0.088       0
    10      0.024      0.024      0.124       0
    11     -0.021      0.003      0.079       0
    12     -0.115      0         -0.015       0.015
    13     -0.018      0          0.082       0
    14     -0.090      0          0.010       0
    15      0.005      0.005      0.105       0

    At no time t is it the case that dt > .20 or et > .20, so no out-of-control signals are generated.
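The two CUSUM recursions are mechanical to implement. A sketch follows; here xbars stands for the 15 subgroup means (not reproduced in this solution), and with μ0 = 16, k = .05, h = .20 the call would return an empty list of signal times, matching the table above.

def cusum_signals(xbars, mu0, k, h):
    # d_t accumulates evidence of an upward shift, e_t of a downward shift;
    # a signal occurs at time t if either statistic exceeds the decision limit h
    d = e = 0.0
    signals = []
    for t, x in enumerate(xbars, start=1):
        d = max(0.0, d + (x - (mu0 + k)))
        e = max(0.0, e - (x - (mu0 - k)))
        if d > h or e > h:
            signals.append(t)
    return signals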

31. Connecting 600 on the in-control ARL scale to 4 on the out-of-control ARL scale and extending to the k′ scale gives k′ = .87. Thus k′ = k/(σ/√n) = .002√n/.005 = .87, from which √n = 2.175 ⇒ n = 4.73 ≈ 5. Then connecting .87 on the k′ scale to 600 on the out-of-control ARL scale and extending to h′ gives h′ = 2.8, so h = (σ/√n)(2.8) = (.005/√5)(2.8) = .00626.


Section 16.6

33. For the binomial calculation, n = 50 and we wish to compute
    P(X ≤ 2) = C(50,0)p^0(1-p)^50 + C(50,1)p^1(1-p)^49 + C(50,2)p^2(1-p)^48 for p = .01, .02, ..., .10.
    For the hypergeometric calculation,
    P(X ≤ 2) = [C(M,0)C(500-M,50) + C(M,1)C(500-M,49) + C(M,2)C(500-M,48)]/C(500,50)
    is to be calculated for M = 5, 10, 15, ..., 50. The resulting probabilities appear below.

    p      .01     .02     .03     .04     .05     .06     .07     .08     .09     .10
    Hypg.  .9919   .9317   .8182   .6775   .5343   .4041   .2964   .2110   .1464   .0994
    Bin.   .9862   .9216   .8108   .6767   .5405   .4162   .3108   .2260   .1605   .1111
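Both rows of the table can be generated directly with math.comb. The loop below pairs each p with M = 500p defectives in the lot of 500, as in the exercise:

from math import comb

def binom_accept(n, c, p):
    # P(X <= c) for X ~ Binomial(n, p)
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(c + 1))

def hyper_accept(N, M, n, c):
    # P(X <= c) for a hypergeometric count: lot of N items, M defective, sample of n
    return sum(comb(M, x) * comb(N - M, n - x) for x in range(c + 1)) / comb(N, n)

for i in range(1, 11):
    p = i / 100
    M = 5 * i                 # 5, 10, ..., 50
    print(p, round(hyper_accept(500, M, 50, 2), 4), round(binom_accept(50, 2, p), 4))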

35. P(X ≤ 2) = C(100,0)p^0(1-p)^100 + C(100,1)p^1(1-p)^99 + C(100,2)p^2(1-p)^98

    p         .01     .02     .03     .04     .05     .06     .07     .08     .09     .10
    P(X ≤ 2)  .9206   .6767   .4198   .2321   .1183   .0566   .0258   .0113   .0048   .0019

    For values of p quite close to 0, the probability of lot acceptance using this plan is larger than that for the previous plan, whereas for larger p this plan is less likely to result in an "accept the lot" decision (the dividing point between "close to zero" and "larger p" is somewhere between .01 and .02). In this sense, the current plan is better.

37. P(accepting the lot) = P(X1 = 0 or 1) + P(X1 = 2, X2 = 0, 1, 2, or 3) + P(X1 = 3, X2 = 0, 1, or 2)
    = P(X1 = 0 or 1) + P(X1 = 2)P(X2 = 0, 1, 2, or 3) + P(X1 = 3)P(X2 = 0, 1, or 2).
    p = .01: .9106 + (.0756)(.9984) + (.0122)(.9862) = .9981
    p = .05: .2794 + (.2611)(.7604) + (.2199)(.5405) = .5968
    p = .10: .0338 + (.0779)(.2503) + (.1386)(.1117) = .0688
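This double-sampling calculation combines binomial probabilities from the first and second samples. The intermediate values quoted above (.9106, .0756, .9984, and so on) are consistent with first and second samples of 50 items each, so that assumption is built into the sketch below:

from math import comb

def b(n, x, p):
    # binomial pmf
    return comb(n, x) * p**x * (1 - p)**(n - x)

def accept_prob(p, n1=50, n2=50):
    # accept outright if X1 <= 1; if X1 = 2 accept when X2 <= 3; if X1 = 3 accept when X2 <= 2
    part1 = b(n1, 0, p) + b(n1, 1, p)
    part2 = b(n1, 2, p) * sum(b(n2, x, p) for x in range(4))
    part3 = b(n1, 3, p) * sum(b(n2, x, p) for x in range(3))
    return part1 + part2 + part3

for p in (0.01, 0.05, 0.10):
    print(p, round(accept_prob(p), 4))   # about .998, .597, .069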

39.
a. AOQ = pP(A) = p[(1-p)^50 + 50p(1-p)^49 + 1225p^2(1-p)^48]

    p     .01    .02    .03    .04    .05    .06    .07    .08    .09    .10
    AOQ   .010   .018   .024   .027   .027   .025   .022   .018   .014   .011

b. p = .0447, AOQL = .0447P(A) = .0274


c. ATI = 50P(A) + 2000(1 - P(A))

    p     .01    .02     .03     .04     .05     .06      .07      .08      .09      .10
    ATI   77.3   202.1   418.6   679.9   945.1   1188.8   1393.6   1559.3   1686.1   1781.6
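Both columns come from the single-sampling acceptance probability P(A) = P(X ≤ 2) for n = 50. A sketch that regenerates a few of the tabulated values (small differences from the table are rounding in the tabulated P(A)):

from math import comb

def p_accept(p, n=50, c=2):
    # P(lot accepted) = P(X <= c) for X ~ Binomial(n, p)
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(c + 1))

def aoq(p):
    # average outgoing quality, using the approximation AOQ = p * P(A) from part a
    return p * p_accept(p)

def ati(p, n=50, N=2000):
    # average total inspection: n items if the lot is accepted, the whole lot of N otherwise
    pa = p_accept(p)
    return n * pa + N * (1 - pa)

print(round(aoq(0.0447), 4))    # about .0274, the AOQL of part b
print(round(ati(0.05), 1))      # about 946, versus the tabulated 945.1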

Supplementary Exercises

41. n = 6, k = 26, Σx̄i = 10,980, x̿ = 422.31, Σsi = 402, s̄ = 15.4615, Σri = 1074, r̄ = 41.3077.

    S chart: 15.4615 ± 3(15.4615)√(1 - (.952)^2)/.952 = 15.4615 ± 14.9141 = .55, 30.37
    R chart: 41.31 ± 3(.848)(41.31)/2.536 = 41.31 ± 41.44, so LCL = 0, UCL = 82.75
    x̄ chart based on s̄: 422.31 ± 3(15.4615)/(.952√6) = 402.42, 442.20
    x̄ chart based on r̄: 422.31 ± 3(41.31)/(2.536√6) = 402.36, 442.26

43.
    i     x̄i      si      ri
    1     50.83   1.172   2.2
    2     50.10    .854   1.7
    3     50.30   1.136   2.1
    4     50.23   1.097   2.1
    5     50.33    .666   1.3
    6     51.20    .854   1.7
    7     50.17    .416    .8
    8     50.70    .964   1.8
    9     49.93   1.159   2.1
    10    49.97    .473    .9
    11    50.13    .698    .9
    12    49.33    .833   1.6
    13    50.23    .839   1.5
    14    50.33    .404    .8
    15    49.30    .265    .5
    16    49.90    .854   1.7
    17    50.40    .781   1.4
    18    49.37    .902   1.8
    19    49.87    .643   1.2
    20    50.00    .794   1.5
    21    50.80   2.931   5.6
    22    50.43    .971   1.9


    Σsi = 19.706, s̄ = .8957, Σx̄i = 1103.85, x̿ = 50.175, and a3 = .886, from which an s chart has LCL = 0 and
    UCL = .8957 + 3(.8957)√(1 - (.886)^2)/.886 = 2.3020, and s21 = 2.931 > UCL. Since an assignable cause is
    assumed to have been identified, we eliminate the 21st group. Then Σsi = 16.775, s̄ = .7988, and x̿ = 50.145.
    The resulting UCL for an s chart is 2.0529, and si < 2.0529 for every remaining i. The x̄ chart based on s̄
    has limits 50.145 ± 3(.7988)/(.886√3) = 48.58, 51.71. All x̄i values are between these limits.
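The revise-and-recompute step (delete the out-of-control subgroup, then recompute s̄, x̿, and the limits) is easy to script from the summary totals quoted above. A sketch assuming only those totals:

from math import sqrt

a3 = 0.886                        # chart constant for n = 3
s_sum, xbar_sum, k = 19.706, 1103.85, 22

def s_ucl(s_bar, a_n):
    return s_bar + 3 * s_bar * sqrt(1 - a_n ** 2) / a_n

print(round(s_ucl(s_sum / k, a3), 3))          # about 2.302; s_21 = 2.931 exceeds it

# drop group 21 (s = 2.931, x-bar = 50.80) and recompute
s_bar = (s_sum - 2.931) / (k - 1)
center = (xbar_sum - 50.80) / (k - 1)
print(round(s_ucl(s_bar, a3), 3))              # about 2.053
half = 3 * s_bar / (a3 * sqrt(3))
print(round(center - half, 2), round(center + half, 2))   # about 48.58, 51.71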

45. Σni = 4(16) + (3)(4) = 76, Σnix̄i = 32,729.4, x̿ = 430.65, and
    s^2 = Σ(ni - 1)si^2 / Σ(ni - 1) = (27,380.16 + 5661.4)/(76 - 20) = 590.0279, so s = 24.2905.
    For variation: when n = 3, UCL = 24.2905 + 3(24.2905)√(1 - (.886)^2)/.886 = 24.29 + 38.14 = 62.43;
    when n = 4, UCL = 24.2905 + 3(24.2905)√(1 - (.921)^2)/.921 = 24.29 + 30.82 = 55.11.
    For location: when n = 3, 430.65 ± 47.49 = 383.16, 478.14; when n = 4, 430.65 ± 39.56 = 391.09, 470.21.