To my GRANDCHILDREN F.A.G.
To JOAN, LISA, and KARIN    D.C.B.
INTRODUCTION
TO THE THEORY
OF STATISTICS
Copyright © 1963, 1974 by McGraw-Hill, Inc. All rights reserved.
Copyright 1950 by McGraw-Hill, Inc. All rights reserved.
Printed in the United States of America. No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in any form or by any means, electronic,
mechanical, photocopying, recording, or otherwise, without the prior written
permission of the publisher.
6789 10 KPKP 7832109
I Probability 1
2 Noncalculus 527
2.1 Summation and Product Notation 527
2.2 Factorial and Combinatorial Symbols and Conventions 528
2.3 Stirling's Formula 530
2.4 The Binomial and Multinomial Theorems 530
3 Calculus 531
3.1 Preliminaries 531
3.2 Taylor Series 533
3.3 The Gamma and Beta Functions 534
Index 557
PREFACE TO THE
THIRD EDITION
The purpose of the third edition of this book is to give a sound and self-con-
tained (in the sense that the necessary probability theory is included) introduction
to classical or mainstream statistical theory. It is not a statistical-methods-
cookbook, nor a compendium of statistical theories, nor is it a mathematics
book. The book is intended to be a textbook, aimed for use in the traditional
full-year upper-division undergraduate course in probability and statistics,
or for use as a text in a course designed for first-year graduate students. The
latter course is often a "service course," offered to a variety of disciplines.
No previous course in probability or statistics is needed in order to study
the book. The mathematical preparation required is the conventional full-year
calculus course which includes series expansion, multiple integration, and par-
tial differentiation. Linear algebra is not required. An attempt has been
made to talk to the reader. Also, we have retained the approach of presenting
the theory with some connection to practical problems. The book is not mathe-
matically rigorous. Proofs, and even exact statements of results, are often not
given. Instead, we have tried to impart a "feel" for the theory.
The book is designed to be used in either the quarter system or the semester
system. In a quarter system, Chaps. I through V could be covered in the first
quarter, Chaps. VI through part of VIII the second quarter, and the rest of the
book the third quarter. In a semester system, Chaps. I through VI could be
covered the first semester and the remaining chapters the second semester.
Chapter VI is a "bridging" chapter; it can be considered to be a part of "probability"
or a part of "statistics."  Several sections or subsections can be omitted
without disrupting the continuity of presentation. For example, any of the
following could be omitted: Subsec. 4.5 of Chap. II; Subsecs. 2.6, 3.5, 4.2, and
4.3 of Chap. III; Subsec. 5.3 of Chap. VI; Subsecs. 2.3, 3.4, 4.3 and Secs. 6
through 9 of Chap. VII; Secs. 5 and 6 of Chap. VIII; Secs. 6 and 7 of Chap. IX;
and all or part of Chaps. X and XI. Subsection 5.3 of Chap. VI on extreme-value
theory is somewhat more difficult than the rest of that chapter. In Chap. VII,
Subsec. 7.1 on Bayes estimation can be taught without Subsec. 3.4 on loss and
risk functions but Subsec. 7.2 cannot. Parts of Sec. 8 of Chap. VII utilize matrix
notation. The many problems are intended to be essential for learning the
material in the book. Some of the more difficult problems have been starred.
ALEXANDER M. MOOD
FRANKLIN A. GRAYBILL
DUANE C. BOES
EXCERPTS FROM THE FIRST
AND SECOND EDITION PREFACES
This book developed from a set of notes which I prepared in 1945. At that time
there was no modern text available specifically designed for beginning students
of mathematical statistics. Since then the situation has been relieved consider-
ably, and had I known in advance what books were in the making it is likely
that I should not have embarked on this volume. However, it seemed suffi-
ciently different from other presentations to give prospective teachers and stu-
dents a useful alternative choice.
The aforementioned notes were used as text material for three years at Iowa
State College in a course offered to senior and first-year graduate students.
The only prerequisite for the course was one year of calculus, and this require-
ment indicates the level of the book. (The calculus class at Iowa State met four
hours per week and included good coverage of Taylor series, partial differentia-
tion, and multiple integration.) No previous knowledge of statistics is assumed.
This is a statistics book, not a mathematics book, as any mathematician
will readily see. Little mathematical rigor is to be found in the derivations
simply because it would be boring and largely a waste of time at this level. Of
course rigorous thinking is quite essential to good statistics, and I have been at
some pains to make a show of rigor and to instill an appreciation for rigor by
pointing out various pitfalls of loose arguments.
While this text is primarily concerned with the theory of statistics, full
cognizance has been taken of those students who fear that a moment may be
wasted in mathematical frivolity. All new subjects are supplied with a little
scenery from practical affairs, and, more important, a serious effort has been
made in the problems to illustrate the variety of ways in which the theory may
be applied.
The problems are an essential part of the book. They range from simple
numerical examples to theorems needed in subsequent chapters. They include
important subjects which could easily take precedence over material in the text;
the relegation of subjects to problems was based rather on the feasibility of such
a procedure than on the priority of the subject. For example, the matter of
correlation is dealt with almost entirely in the problems. It seemed to me in-
efficient to cover multivariate situations twice in detail, i.e., with the regression
model and with the correlation model. The emphasis in the text proper is on
the more general regression model.
The author of a textbook is indebted to practically everyone who has
touched the field, and I here bow to all statisticians. However, in giving credit
to contributors one must draw the line somewhere, and I have simplified matters
by drawing it very high; only the most eminent contributors are mentioned in
the book.
I am indebted to Catherine Thompson and Maxine Merrington, and to
E. S. Pearson, editor of Biometrika, for permission to include Tables III and V,
which are abridged versions of tables published in Biometrika. I am also in-
debted to Professors R. A. Fisher and Frank Yates, and to Messrs. Oliver and
Boyd, Ltd., Edinburgh, for permission to reprint Table IV from their book
" Statistical Tables for Use in Biological, Agricultural and Medical Research."
Since the first edition of this book was published in 1950 many new statis-
tical techniques have been made available and many techniques that were only in
the domain of the mathematical statistician are now useful and demanded by
the applied statistician. To include some of this material we have had to elim-
inate other material, else the book would have come to resemble a compendium.
The general approach of presenting the theory with some connection to prac-
tical problems apparently contributed significantly to the success of the first
edition and we have tried to maintain that feature in the present edition.
I
PROBABILITY
The purpose of this chapter is to define probability and discuss some of its prop-
erties. Section 2 is a brief essay on some of the different meanings that have
been attached to probability and may be omitted by those who are interested
only in mathematical (axiomatic) probability, which is defined in Sec. 3 and
used throughout the remainder of the text. Section 3 is subdivided into six
subsections. The first, Subsec. 3.1, discusses the concept of probability models.
It provides a real-world setting for the eventual mathematical definition of
probability. A review of some of the set theoretical concepts that are relevant
to probability is given in Subsec. 3.2. Sample space and event space are
defined in Subsec. 3.3. Subsection 3.4 commences with a recall of the definition
of a function. Such a definition is useful since many of the words to be defined
in this and coming chapters (e.g., probability, random variable, distribution,
etc.) are defined as particular functions. The indicator function, to be used
extensively in later chapters, is defined here. The probability axioms are pre-
sented, and the probability function is defined. Several properties of this prob-
ability function are stated. The culmination of this subsection is the definition
of a probability space. Subsection 3.5 is devoted to examples of probabilities
2 KINDS OF PROBABILITY
2.1 Introduction
One of the fundamental tools of statistics is probability, which had its formal
beginnings with games of chance in the seventeenth century.
Games of chance, as the name implies, include such actions as spinning a
roulette wheel, throwing dice, tossing a coin, drawing a card, etc., in which the
outcome of a trial is uncertain. However, it is recognized that even though the
outcome of any particular trial may be uncertain, there is a predictable long-
term outcome. It is known, for example, that in many throws of an ideal
(balanced, symmetrical) coin about one-half of the trials will result in heads.
It is this long-term, predictable regularity that enables gaming houses to engage
in the business.
A similar type of uncertainty and long-term regularity often occurs in
experimental science. For example, in the science of genetics it is uncertain
whether an offspring will be male or female, but in the long run it is known
approximately what percent of offspring will be male and what percent will be
female. A life insurance company cannot predict which persons in the United
States will die at age 50, but it can predict quite satisfactorily how many people
in the United States will die at that age.
First we shall discuss the classical, or a priori, theory of probability; then
we shall discuss the frequency theory. Development of the axiomatic approach
will be deferred until Sec. 3.
there are only two ways that the coin can fall, heads or tails, and since the coin
is well balanced, one would expect that the coin is just as likely to fall heads as
tails; hence, the probability of the event of a head will be given the value 1/2.
This kind of reasoning prompted the following classical definition of prob-
ability.
We shall apply this definition to a few examples in order to illustrate its meaning.
If an ordinary die (one of a pair of dice) is tossed, there are six possible outcomes:
any one of the six numbered faces may turn up. These six outcomes
are mutually exclusive since two or more faces cannot turn up simultaneously.
And if the die is fair, or true, the six outcomes are equally likely; i.e., it is expected
that each face will appear with about equal relative frequency in the long run.
Now suppose that we want the probability that the result of a toss be an even
number. Three of the six possible outcomes have this attribute. The prob-
ability that an even number will appear when a die is tossed is therefore 3/6, or 1/2.
Similarly, the probability that a 5 will appear when a die is tossed is 1/6. The
probability that the result of a toss will be greater than 2 is 4/6, or 2/3.
To consider another example, suppose that a card is drawn at random from
an ordinary deck of playing cards. The probability of drawing a spade is
readily seen to be 13/52, or 1/4. The probability of drawing a number between 5
and 10, inclusive, is 24/52, or 6/13.
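As a purely computational check, the classical definition can be applied by listing the equally likely outcomes and counting those with the attribute of interest. The following Python sketch reproduces the die and card probabilities just computed; the classical_probability helper and the rank-and-suit encoding of the deck are illustrative choices, not anything prescribed by the text.

```python
from fractions import Fraction

def classical_probability(outcomes, has_attribute):
    """Classical definition: (number of favorable outcomes) divided by the
    total number of equally likely, mutually exclusive outcomes."""
    favorable = sum(1 for outcome in outcomes if has_attribute(outcome))
    return Fraction(favorable, len(outcomes))

die_faces = range(1, 7)                                        # the six equally likely faces
print(classical_probability(die_faces, lambda f: f % 2 == 0))  # even number   -> 1/2
print(classical_probability(die_faces, lambda f: f == 5))      # a 5           -> 1/6
print(classical_probability(die_faces, lambda f: f > 2))       # greater than 2 -> 2/3

# A card drawn at random: represent the deck as (rank, suit) pairs, ranks 1..13.
deck = [(rank, suit) for rank in range(1, 14)
        for suit in ("spade", "heart", "diamond", "club")]
print(classical_probability(deck, lambda c: c[1] == "spade"))   # 13/52 = 1/4
print(classical_probability(deck, lambda c: 5 <= c[0] <= 10))   # 24/52 = 6/13
```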
The application of the definition is straightforward enough in these simple
cases, but it is not always so obvious. Careful attention must be paid to the
qualifications "mutually exclusive," "equally likely," and "random."  Suppose
that one wishes to compute the probability of getting two heads if a coin is
tossed twice. He might reason that there are three possible outcomes for the
two tosses: two heads, two tails, or one head and one tail. One of these three
outcomes has the desired attribute, i.e., two heads; therefore the probability is
1/3. This reasoning is faulty because the three given outcomes are not equally
likely. The third outcome, one head and one tail, can occur in two ways
since the head may appear on the first toss and the tail on the second or the
head may appear on the second toss and the tail on the first. Thus there are
four equally likely outcomes: HH, HT, TH, and TT. The first of these has
the desired attribute, while the others do not. The correct probability is there-
fore 1/4. The result would be the same if two ideal coins were tossed simul-
taneously.
Again, suppose that one wished to compute the probability that a card
drawn from an ordinary well-shuffled deck will be an ace or a spade. In enu-
merating the favorable outcomes, one might count 4 aces and 13 spades and
reason that there are 17 outcomes with the desired attribute. This is clearly
incorrect because these 17 outcomes are not mutually exclusive since the ace of
spades is both an ace and a spade. There are 16 outcomes that are favorable to
an ace or a spade, and so the correct probability is 16/52, or 4/13.
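The double-counting pitfall can be made concrete with sets: counting aces and spades separately counts the ace of spades twice, while the union counts each favorable card once. A minimal Python sketch, using the same rank-and-suit encoding as before (an illustrative choice only):

```python
from fractions import Fraction

deck = [(rank, suit) for rank in range(1, 14)
        for suit in ("spade", "heart", "diamond", "club")]   # rank 1 = ace

aces = {card for card in deck if card[0] == 1}
spades = {card for card in deck if card[1] == "spade"}

# Adding the two counts includes the ace of spades twice; the union does not.
print(len(aces) + len(spades))                   # 17 (outcomes not mutually exclusive)
print(len(aces | spades))                        # 16
print(Fraction(len(aces | spades), len(deck)))   # 4/13
```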
We note that by the classical definition the probability of event A is a
number between 0 and 1 inclusive. The ratio n_A/n must be less than or equal to
1 since the total number of possible outcomes cannot be smaller than the
number of outcomes with a specified attribute. If an event is certain to happen,
its probability is 1; if it is certain not to happen, its probability is O. Thus, the
probability of obtaining an 8 in tossing a die is O. The probability that the
number showing when a die is tossed is less than 10 is equal to 1.
The probabilities determined by the classical definition are called a priori
probabilities. When one states that the probability of obtaining a head in
tossing a coin is 1/2, he has arrived at this result purely by deductive reasoning.
The result does not require that any coin be tossed or even be at hand. We say
that if the coin is true, the probability of a head is 1/2, but this is little more than
saying the same thing in two different ways. Nothing is said about how one
can determine whether or not a particular coin is true.
The fact that we shall deal with ideal objects in developing a theory of
probability will not trouble us because that is a common requirement of mathe-
matical systems. Geometry, for example, deals with conceptually perfect
circles, lines with zero width, and so forth, but it is a useful branch of knowl-
edge, which can be applied to diverse practical problems.
There are some rather troublesome limitations in the classical, or a priori,
approach. It is obvious, for example, that the definition of probability must
be modified somehow when the total number of possible outcomes is infinite.
One might seek, for example, the probability that an integer drawn at random
from the positive integers be even. The intuitive answer to this question is 1/2.
If one were pressed to justify this result on the basis of the definition, he might
reason as follows: Suppose that we limit ourselves to the first 20 integers; 10
of these are even so that the ratio of favorable outcomes to the total number is
10/20, or 1/2. Again, if the first 200 integers are considered, 100 of these are even,
and the ratio is also 1/2. In general, the first 2N integers contain N even integers;
if we form the ratio N/2N and let N become infinite so as to encompass the whole
set of positive integers, the ratio remains 1/2. The above argument is plausible,
and the answer is plausible, but it is no simple matter to make the argument
stand up. It depends, for example, on the natural ordering of the positive
integers, and a different ordering could produce a different result. Thus, one
could just as well order the integers in this way: 1, 3, 2; 5, 7, 4; 9, 11, 6; ...,
taking the first pair of odd integers then the first even integer, the second pair
of odd integers then the second even integer, and so forth. With this ordering,
one could argue that the probability of drawing an even integer is 1/3. The
integers can also be ordered so that the ratio will oscillate and never approach
any definite value as N increases.
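The dependence of the limiting ratio on the ordering can be observed numerically. The sketch below tracks the proportion of even integers among the first n terms of the natural ordering and of the ordering 1, 3, 2; 5, 7, 4; ... described above; the generator names are hypothetical, only the orderings themselves come from the discussion.

```python
from itertools import count, islice

def natural_order():
    # 1, 2, 3, 4, ...
    return count(1)

def paired_odd_order():
    # 1, 3, 2;  5, 7, 4;  9, 11, 6;  ... : two odd integers, then one even integer.
    odd, even = 1, 2
    while True:
        yield odd
        yield odd + 2
        yield even
        odd, even = odd + 4, even + 2

def even_fraction(ordering, n_terms):
    first = list(islice(ordering, n_terms))
    return sum(1 for k in first if k % 2 == 0) / n_terms

for n in (30, 300, 30000):
    print(n, even_fraction(natural_order(), n), even_fraction(paired_odd_order(), n))
# The first ratio approaches 1/2 and the second approaches 1/3, even though both
# orderings eventually list every positive integer exactly once.
```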
There is another difficulty with the classical approach to the theory of
probability which is deeper even than that arising in the case of an infinite
number of outcomes. Suppose that we toss a coin known to be biased in
favor of heads (it is bent so that a head is more likely to appear than a tail).
The two possible outcomes of tossing the coin are not equally likely. What is
the probability of a head? The classical definition leaves us completely helpless
here.
Still another difficulty with the classical approach is encountered when we
try to answer questions such as the following: What is the probability that a
child born in Chicago will be a boy? Or what is the probability that a male
will die before age 50? Or what is the probability that a cookie bought at a
certain bakery will have less than three peanuts in it? All these are legitimate
questions which we want to bring into the realm of probability theory. However,
notions of "symmetry," "equally likely," etc., cannot be utilized as they could
was symmetrical, and it was anticipated that in the long run heads would occur
about one-half of the time. For another example, a single die was thrown 300
times, and the outcomes recorded in Table 2. Notice how close the relative
frequency of a face with a 1 showing is to 1/6; similarly for a 2, 3, 4, 5, and 6.
These results are not unexpected since the die which was used was quite sym-
metrical and balanced; it was expected that each face would occur with about
equal frequency in the long run. This suggests that we might be willing to use
this relative frequency in Table 1 as an approximation for the probability that
the particular coin used will come up heads or we might be willing to use the
relative frequencies in Table 2 as approximations for the probabilities that
various numbers on this die will appear. Note that although the relative fre-
quencies of the different outcomes are predictable, the actual outcome of an
individual throw is unpredictable.
In fact, it seems reasonable to assume for the coin experiment that there
exists a number, label it p, which is the probability of a head. Now if the coin
appears well balanced, symmetrical, and true, we might use Definition 1 and
state that p is approximately equal to 1/2. It is only an approximation to set p
equal to 1/2 since for this particular coin we cannot be certain that the two cases,
heads and tails, are exactly equally likely. But by examining the balance and
symmetry of the coin it may seem quite reasonable to assume that they are.
Alternatively, the coin could be tossed a large number of times, the results
recorded as in Table 1, and the relative frequency of a head used as an approxima-
tion for p. In the experiment with a die, the probability p₂ of a 2 showing
could be approximated by using Definition 1 or by using the relative frequency
in Table 2. The important thing is that we postulate that there is a number p
which is defined as the probability of a head with the coin or a number p₂
which is the probability of a 2 showing in the throw of the die. Whether we use
Definition 1 or the relative frequency for the probability seems unimportant in
the examples cited.
TABLE 1
Outcome   Observed    Observed relative   Long-run expected relative
          frequency   frequency           frequency of a balanced coin
H         56          .56                 .50
T         44          .44                 .50

TABLE 2
Outcome   Observed    Observed relative   Long-run expected relative
          frequency   frequency           frequency of a balanced die
1         51          .170                .1667
2         54          .180                .1667
3         48          .160                .1667
4         51          .170                .1667
5         49          .163                .1667
6         47          .157                .1667
Total     300         1.000               1.000
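The frequency idea behind Table 1 can be imitated by simulation. The following sketch assumes a coin with postulated p = .5 and simply tosses it many times; the particular counts, unlike those of Table 1, are synthetic, and the seed is an arbitrary choice for reproducibility.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def relative_frequency_of_heads(n_tosses, p_head=0.5):
    """Simulate n_tosses of a coin with P[head] = p_head and return the
    observed relative frequency of heads."""
    heads = sum(1 for _ in range(n_tosses) if random.random() < p_head)
    return heads / n_tosses

for n in (100, 1000, 100000):
    print(n, relative_frequency_of_heads(n))
# The observed relative frequency fluctuates for small n (much like the .56 of
# Table 1) and settles near the postulated p = .5 as n grows.
```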
event. For instance, suppose that the experiment consists of sampling the
population of a large city to see how many voters favor a certain proposal.
The outcomes are "favor" or "do not favor," and each voter's response is un-
predictable, but it is reasonable to postulate a number p as the probability that
a given response will be "favor."  The relative frequency of "favor" responses
can be used as an approximate value for p.
As another example, suppose that the experiment consists of sampling
transistors from a large collection of transistors. We shall postulate that the
probability of a given transistor being defective is p. We can approximate p by
selecting several transistors at random from the collection and computing the
relative frequency of the number defective.
The important thing is that we can conceive of a series of observations or
experiments under rather uniform conditions. Then a number p can be postu-
lated as the probability of the event A happening, and p can be approximated by
the relative frequency of the event A in a series of experiments.
3 PROBABILITY-AXIOMATIC
assume, unless otherwise stated, that all the sets mentioned in a given discussion
consist of points in the space Ω.
We shall usually use capital Latin letters from the beginning of the
alphabet, with or without subscripts, to denote sets. If ω is a point or element
belonging to the set A, we shall write ω ∈ A; if ω is not an element of A, we
shall write ω ∉ A.
EXAMPLE 3 Let Ω = {(x, y): 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1}, which is read as the
collection of all points (x, y) for which 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1. Define
the following sets:
A₁ ∩ A₂ = A₁A₂ = A₄;
A₁ = {(x, y): 0 < x < 1; 1/2 < y < 1};
A₁ − A₄ = {(x, y): 1/2 < x < 1; 0 < y < 1/2}.  ////
FIGURE 1
Several of the above laws are illustrated in the Venn diagrams in Fig. 1.
Although we will feel free to use any of the above laws, it might be instructive
to give a proof of one of them just to illustrate the technique. For example,
let us show that (A ∪ B)̄ = Ā ∩ B̄. By definition, two sets are equal if each is
contained in the other. We first show that (A ∪ B)̄ ⊂ Ā ∩ B̄ by proving that if
ω ∈ (A ∪ B)̄, then ω ∈ Ā ∩ B̄. Now ω ∈ (A ∪ B)̄ implies ω ∉ A ∪ B, which implies
that ω ∉ A and ω ∉ B, which in turn implies that ω ∈ Ā and ω ∈ B̄; that is,
ω ∈ Ā ∩ B̄. We next show that Ā ∩ B̄ ⊂ (A ∪ B)̄. Let ω ∈ Ā ∩ B̄, which means
ω belongs to both Ā and B̄. Then ω ∉ A ∪ B, for if it did, ω must belong to at
least one of A or B, contradicting that ω belongs to both Ā and B̄; however,
ω ∉ A ∪ B means ω ∈ (A ∪ B)̄, completing the proof.
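The law just proved can also be checked mechanically on an explicit finite sample space, since complements, unions, and intersections of finite sets are directly computable. A small Python sketch follows; the particular Ω, A, and B are arbitrary illustrations, and such a check is of course not a proof.

```python
# Check (A union B)-complement = A-complement intersect B-complement on a finite space.
omega = set(range(10))          # a small sample space
A = {0, 1, 2, 3}
B = {2, 3, 4, 5}

def complement(S):
    return omega - S

left = complement(A | B)
right = complement(A) & complement(B)
print(left == right, left)      # True {6, 7, 8, 9}

# The dual law, (A intersect B)-complement = A-complement union B-complement,
# checks the same way.
print(complement(A & B) == complement(A) | complement(B))   # True
```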
⋂_{λ∈Λ} A_λ = Ω.
EXAMPLE 5 If Λ = {1, 2, ..., N}, i.e., Λ is the index set consisting of the
first N integers, then ⋃_{λ∈Λ} A_λ is also written as
⋃_{n=1}^N A_n = A₁ ∪ A₂ ∪ ... ∪ A_N. ////
We will not give a proof of this theorem. Note, however, that the special
case when the index set A consists of only two names or indices is Theorem 7
above, and a proof of part of Theorem 7 was given in the paragraph after
Theorem 8.
The above does not precisely define what an event is. An event will
always be a subset of the sample space, but for sufficiently large sample spaces
not all subsets will be events. Thus the class of all subsets of the sample space
will not necessarily correspond to the event space. However, we shall see that
the class of all events can always be selected to be large enough so as to include
all those subsets (events) whose probability we may want to talk about. If the
sample space consists of only a finite number of points, then the corresponding
event space will be the class of all subsets of the sample space.
Our primary interest will not be in events per se but will be in the prob-
ability that an event does or does not occur or happen. An event A is said to
occur if the experiment at hand results in an outcome (a point in our sample
space) that belongs to A. Since a point, say ω, in the sample space is a subset
(that subset consisting of the point ω) of the sample space Ω, it is a candidate to
be an event. Thus ω can be viewed as a point in Ω or as a subset of Ω. To
distinguish, let us write {ω}, rather than just ω, whenever ω is to be viewed as a
subset of Ω. Such a one-point subset will always be an event and will be called
an elementary event. Also φ and Ω are both subsets of Ω, and both will always
be events. Ω is sometimes called the sure event.
We shall attempt to use only capital Latin letters (usually from the begin-
ning of the alphabet), with or without affixes, to denote events, with the excep-
tion that φ will be used to denote the empty set and Ω the sure event. The event
space will always be denoted by a script Latin letter, and usually 𝒜. ℬ and ℱ,
as well as other symbols, are used in some texts to denote the class of all events.
The sample space is basic and generally easy to define for a given experi-
ment. Yet, as we shall see, it is the event space that is really essential in de-
fining probability. Some examples follow.
EXAMPLE 7 Toss a penny, nickel, and dime simultaneously, and note which
side is up on each. There are eight possible outcomes of this experiment.
Ω = {(H, H, H), (H, H, T), (H, T, H), (T, H, H), (H, T, T), (T, H, T),
(T, T, H), (T, T, T)}. We are using the first position of (·, ·, ·), called a
3-tuple, to record the outcome of the penny, the second position to record
the outcome of the nickel, and the third position to record the outcome of
the dime. Let Aᵢ = {exactly i heads}; i = 0, 1, 2, 3. For each i, Aᵢ is an
event. Note that A₀ and A₃ are each elementary events. Again all
subsets of Ω are events; there are 2^8 = 256 of them. ////
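For a finite experiment like this one, the sample space, the events Aᵢ, and the count of all possible events can be generated directly. The following Python sketch of Example 7 uses a dictionary keyed by i, which is simply one convenient representation.

```python
from itertools import product

# The 8 outcomes of tossing a penny, nickel, and dime, recorded as 3-tuples.
omega = list(product("HT", repeat=3))
print(len(omega))                          # 8

# A_i = {exactly i heads}, i = 0, 1, 2, 3
events = {i: {w for w in omega if w.count("H") == i} for i in range(4)}
for i, A_i in events.items():
    print(i, sorted(A_i))
print(len(events[0]), len(events[3]))      # 1 1  (A_0 and A_3 are elementary events)

# For this finite sample space every subset is an event: 2**8 = 256 of them.
print(2 ** len(omega))                     # 256
```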
EXAMPLE 9 Select a light bulb, and record the time in hours that it burns
before burning out. Any nonnegative number is a conceivable outcome
of this experiment; so Ω = {x: x ≥ 0}. For this sample space not all
subsets of Ω are events; however, any subset that can be exhibited will be
an event. For example, let
where in the 2-tuple (·, ·) the first position indicates the number of times
that it rains and the second position indicates the total rainfall. For
example, ω = (7, 2.251) is a point in Ω corresponding to there being seven
different times that it rained with a total rainfall of 2.251 inches. A =
{(i, x): i = 5, ..., 10 and x ≥ 3} is an example of an event. ////
(i) Ω ∈ 𝒜.
(ii) If A ∈ 𝒜, then Ā ∈ 𝒜.
(iii) If A₁ and A₂ ∈ 𝒜, then A₁ ∪ A₂ ∈ 𝒜.

Theorem 12  φ ∈ 𝒜.
PROOF  By property (i), Ω ∈ 𝒜; by (ii), Ω̄ ∈ 𝒜; but Ω̄ = φ; so φ ∈ 𝒜.
////
If (a, b) ∈ f(·), we write b = f(a) (read "b equals f of a") and call f(a)
the value of f(·) at a. For any a ∈ A, f(a) is an element of B; whereas f(·) is
a set of ordered pairs. The set of all values of f(·) is called the range of f(·);
i.e., the range of f(·) = {b ∈ B: b = f(a) for some a ∈ A} and is always a subset
of the counterdomain B but is not necessarily equal to it. f(a) is also called the
image of a under f(·), and a is called the preimage of f(a).
EXAMPLE 12 Let f₁(·) and f₂(·) be the two functions, having the real line
for their domain and counterdomain, defined by
f₁(·) = {(x, y): y = x^3 + x + 1, −∞ < x < ∞}
and
Probability function  Let Ω denote the sample space and 𝒜 denote a collec-
tion of events assumed to be an algebra of events (see Subsec. 3.3) that we shall
consider for some random experiment.
(that is, Aᵢ ∩ Aⱼ = φ for i ≠ j; i, j = 1, 2, ...) and if A₁ ∪ A₂ ∪ ... = ⋃_{i=1}^∞ Aᵢ ∈ 𝒜, then
P[⋃_{i=1}^∞ Aᵢ] = ∑_{i=1}^∞ P[Aᵢ]. ////
*In defining a probability function, many authors assume that the domain of the set
function is a sigma-algebra rather than just an algebra. For an algebra 𝒜, we had the
property
if A₁ and A₂ ∈ 𝒜, then A₁ ∪ A₂ ∈ 𝒜.
A sigma-algebra differs from an algebra in that the above property is replaced by
if A₁, A₂, ..., Aₙ, ... ∈ 𝒜, then ⋃_{n=1}^∞ Aₙ ∈ 𝒜.
A fundamental theorem of probability theory, called the extension theorem, states that
if a probability function is defined on an algebra (as we have done), then it can be
extended to a sigma-algebra. Since the probability function can be extended from an
algebra to a sigma-algebra, it is reasonable to begin by assuming that the probability
function is defined on a sigma-algebra.
EXAMPLE 16 Consider the experiment of tossing two coins, say a penny and
a nickel. Let Ω = {(H, H), (H, T), (T, H), (T, T)}, where the first com-
ponent of (·, ·) represents the outcome for the penny. Let us model this
random experiment by assuming that the four points in Ω are equally
likely; that is, assume P[{(H, H)}] = P[{(H, T)}] = P[{(T, H)}] =
P[{(T, T)}] = 1/4. The following question arises: Is the P[·] function that is
implicitly defined by the above really a probability function; that is, does
it satisfy the three axioms? It can be shown that it does, and so it is
a probability function.
In our definitions of event and 𝒜, a collection of events, we stated that 𝒜
cannot always be taken to be the collection of all subsets of Ω. The reason for
this is that for "sufficiently large" Ω the collection of all subsets of Ω is so large
that it is impossible to define a probability function consistent with the above
axioms.
We are able to deduce a number of properties of our function P[·] from
its definition and three axioms. We list these as theorems.
It is in the statements and proofs of these properties that we will see the
convenience provided by assuming 𝒜 is an algebra of events. 𝒜 is the domain
of P[·]; hence only members of 𝒜 can be placed in the dot position of the
notation P[·]. Since 𝒜 is an algebra, if we assume that A and B ∈ 𝒜, we know
that Ā, A ∪ B, AB, ĀB̄, etc., are also members of 𝒜, and so it makes sense to
talk about P[Ā], P[A ∪ B], P[AB], P[ĀB̄], etc.

Properties of P[·]  For each of the following theorems, assume that Ω and
𝒜 (an algebra of events) are given and P[·] is a probability function having
domain 𝒜.
Theorem 15  P[φ] = 0.
PROOF  Take A₁ = φ, A₂ = φ, A₃ = φ, ...; then by axiom (iii)
P[φ] = P[⋃_{i=1}^∞ Aᵢ] = ∑_{i=1}^∞ P[Aᵢ] = ∑_{i=1}^∞ P[φ],
which can hold only if P[φ] = 0. ////
PROOF  A ∪ Ā = Ω, and A ∩ Ā = φ; so
P[Ω] = P[A ∪ Ā] = P[A] + P[Ā].
But P[Ω] = 1 by axiom (ii); the result follows. ////
Finite sample space with equally likely points  For certain random
experiments there is a finite number of outcomes, say N, and it is often realistic
to assume that the probability of each outcome is 1/N. The classical definition
of probability is generally adequate for these problems, but we shall show how
EXAMPLE 17 Consider the experiment of tossing two dice (or of tossing one
die twice). Let Ω = {(i₁, i₂): i₁ = 1, 2, ..., 6; i₂ = 1, 2, ..., 6}. Here i₁ =
number of spots up on the first die, and i₂ = number of spots up on the sec-
ond die. There are 6 · 6 = 36 sample points. It seems reasonable to attach
the probability of 1/36 to each sample point. Ω can be displayed as a lattice
as in Fig. 2. Let A₇ = the event that the total is 7; then A₇ = {(1, 6), (2, 5),
(3, 4), (4, 3), (5, 2), (6, 1)}; so N(A₇) = 6, and P[A₇] = N(A₇)/N(Ω) = 6/36 =
1/6. Similarly P[Aⱼ] can be calculated for Aⱼ = total of j; j = 2, ..., 12. In
this example the number of points in any event A can be easily counted,
and so P[A] can be evaluated for any event A. ////
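The computation in Example 17 extends immediately to every total j = 2, ..., 12, since N(Aⱼ) can be counted by machine just as easily as by hand. A brief Python sketch:

```python
from fractions import Fraction
from itertools import product

omega = list(product(range(1, 7), repeat=2))     # the 36 equally likely sample points
N_omega = len(omega)

for j in range(2, 13):
    A_j = [pt for pt in omega if sum(pt) == j]   # A_j = {total of the two dice is j}
    print(j, Fraction(len(A_j), N_omega))
# P[A_7] = 6/36 = 1/6; the probabilities rise from 1/36 at j = 2 to 6/36 at j = 7
# and fall back to 1/36 at j = 12.
```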
If N(A) and N(Ω) are large for a given random experiment with a finite
number of equally likely outcomes, the counting itself can become a difficult
problem. Such counting can often be facilitated by use of certain combinatorial
formulas, some of which will be developed now.
Assume now that the experiment is of such a nature that each outcome
can be represented by an n-tuple. The above example is such an experiment;
each outcome was represented by a 2-tuple. As another example, if the ex-
periment is one of drawing a sample of size n, then n-tuples are particularly
FIGURE 2  (lattice of the 36 sample points (i₁, i₂); axes i₁ = 1, ..., 6 and i₂ = 1, ..., 6)
useful in recording the results. The terminology that is often used to describe
a basic random experiment known generally as sampling is that of balls and urns.
It is assumed that we have an urn containing, say, M balls, which are numbered
1 to M. The experiment is to select or draw balls from the urn one at a time
until n balls have been drawn. We say we have drawn a sample of size n. The
drawing is done in such a way that at the time of a particular draw each of the
balls in the urn at that time has an equal chance of selection. We say that a
ball has been selected at random. Two basic ways of drawing a sample are
with replacement and without replacement, meaning just what the words say. A
sample is said to be drawn with replacement, if after each draw the ball drawn
is itself returned to the urn, and the sample is said to be drawn without replace-
ment if the ball drawn is not returned to the urn. Of course, in sampling without
replacement the size of the sample n must be less than or equal to M, the original
number of balls in the urn, whereas in sampling with replacement the size of
sample may be any positive integer. In reporting the results of drawing a sample
of size n, an n-tuple can be used; denote the n-tuple by (z₁, ..., zₙ), where zᵢ
represents the number of the ball drawn on the ith draw.
In general, we are interested in the size of an event that is composed of
points that are n-tuples satisfying certain conditions. The size of such a set can be
computed as follows: First determine the number of objects, say N₁, that may be
used as the first component. Next determine the number of objects, say N₂,
that may be used as the second component of an n-tuple given that the first com-
ponent is known. (We are assuming that N₂ does not depend on which
object has occurred as the first component.) And then determine the number of
objects, say N₃, that may be used as the third component given that the first
and second components are known. (Again we are assuming N₃ does not
depend on which objects have occurred as the first and second components.)
Continue in this manner until Nₙ is determined. The size N(A) of the set A of
n-tuples then equals N₁ · N₂ ⋯ Nₙ.
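The multiplication principle can be checked against brute-force enumeration for a small urn. In the sketch below the values of M and n are arbitrary illustrative choices; itertools generates the n-tuples, and math.perm supplies the falling factorial (M)_n.

```python
from itertools import permutations, product
from math import perm

M, n = 5, 3   # an urn with M numbered balls, a sample of size n

# With replacement: each of the n positions can hold any of the M balls,
# so N(Omega) = M * M * ... * M = M**n by the multiplication principle.
with_repl = list(product(range(1, M + 1), repeat=n))
print(len(with_repl), M ** n)                 # 125 125

# Without replacement: M choices for the first draw, M-1 for the second, ...,
# so N(Omega) = M * (M-1) * ... * (M-n+1), the falling factorial (M)_n.
without_repl = list(permutations(range(1, M + 1), n))
print(len(without_repl), perm(M, n))          # 60 60
```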
The total number of subsets of S, where S is a set of size M, is ∑_{n=0}^M (M choose n).
This includes the empty set (set with no elements in it) and the whole set,
both of which are subsets. Using the binomial theorem (see Appendix A)
with a = b = 1, we see that
2^M = ∑_{n=0}^M (M choose n).    (2)
From Example 18 above, we know N(Ω) = M^n under (i) and N(Ω) = (M)_n
under (ii). A_k is that subset of Ω for which exactly k of the zᵢ's are ball
numbers 1 to K inclusive. These k ball numbers must fall in some subset
of k positions from the total number of n available positions. There are
(n choose k) ways of selecting the k positions for the ball numbers 1 to K inclusive to
fall in. For each of the (n choose k) different sets of positions, there are K^k(M − K)^{n−k}
different n-tuples for case (i) and (K)_k(M − K)_{n−k} different n-tuples for
case (ii). Thus A_k has size (n choose k)K^k(M − K)^{n−k} for case (i) and size
(n choose k)(K)_k(M − K)_{n−k} for case (ii). Since the n-tuples are equally likely,
P[A_k] = N(A_k)/N(Ω); that is,
P[A_k] = (n choose k)K^k(M − K)^{n−k} / M^n    (3)
for case (i), and
P[A_k] = (n choose k)(K)_k(M − K)_{n−k} / (M)_n    (4)
for case (ii), which can also be written
P[A_k] = (K choose k)(M − K choose n − k) / (M choose n).    (5)
Note that the sum of the "upper" terms in the numerator equals the "upper" term
in the denominator, and the sum of the "lower" terms in the numerator equals
the "lower" term in the denominator. ////
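The with-replacement and without-replacement probabilities just derived can be compared with exhaustive enumeration for a small urn, which is a useful sanity check on the counting argument. The values of M, K, n, and k below are arbitrary illustrative choices.

```python
from fractions import Fraction
from itertools import permutations, product
from math import comb

M, K, n, k = 6, 4, 3, 2     # 6 balls, balls 1..4 "special", sample of size 3, exactly 2 special
balls = range(1, M + 1)

def exact_count(samples):
    # Fraction of samples containing exactly k of the balls numbered 1..K.
    hits = sum(1 for s in samples if sum(1 for z in s if z <= K) == k)
    return Fraction(hits, len(samples))

# Case (i): with replacement -- compare with (n choose k) K^k (M-K)^(n-k) / M^n.
print(exact_count(list(product(balls, repeat=n))),
      Fraction(comb(n, k) * K**k * (M - K)**(n - k), M**n))

# Case (ii): without replacement -- compare with the hypergeometric form
# (K choose k)(M-K choose n-k) / (M choose n).
print(exact_count(list(permutations(balls, n))),
      Fraction(comb(K, k) * comb(M - K, n - k), comb(M, n)))
```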
Finite sample space without equally likely points  We saw for finite sample
spaces with equally likely sample points that P[A] = N(A)/N(Ω) for any event A.
For finite sample spaces without equally likely sample points, things are not
quite as simple, but we can completely define the values of P[A] for each of the
2^{N(Ω)} events A by specifying the value of P[·] for each of the N = N(Ω) elemen-
tary events. Let Ω = {ω₁, ..., ω_N}, and assume p_j = P[{ω_j}] for j = 1, ..., N.
Since the elementary events are mutually exclusive and their union is Ω, the p_j's
must satisfy p_j ≥ 0 and ∑_{j=1}^N p_j = 1.
For any event A, define P[A] = ∑ p_j, where the summation is over those ω_j
belonging to A. It can be shown that P[·] so defined satisfies the three axioms
and hence is a probability function.
EXAMPLE 22 Consider an experiment that has N outcomes, say ω₁, ω₂, ...,
ω_N, where it is known that outcome ω_{j+1} is twice as likely as outcome
ω_j, where j = 1, ..., N − 1; that is, p_{j+1} = 2p_j, where p_j = P[{ω_j}]. Find
P[A_k], where A_k = {ω₁, ω₂, ..., ω_k}. Since
∑_{j=1}^N p_j = ∑_{j=1}^N 2^{j−1}p₁ = p₁(1 + 2 + 2^2 + ... + 2^{N−1}) = p₁(2^N − 1) = 1,
we have
p₁ = 1/(2^N − 1)  and  p_j = 2^{j−1}/(2^N − 1);
hence
P[A_k] = ∑_{j=1}^k p_j = (2^k − 1)/(2^N − 1). ////
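Example 22 is easy to verify numerically: the p_j must sum to 1, and the partial sums give P[A_k]. A minimal Python sketch with an arbitrary N:

```python
from fractions import Fraction

N = 6
p1 = Fraction(1, 2**N - 1)
p = [p1 * 2**(j - 1) for j in range(1, N + 1)]    # p_j = 2^(j-1) / (2^N - 1)

print(sum(p))                                     # 1, so the p_j define a probability function
for k in range(1, N + 1):
    P_Ak = sum(p[:k])                             # P[A_k] = p_1 + ... + p_k
    print(k, P_Ak, Fraction(2**k - 1, 2**N - 1))  # matches (2^k - 1)/(2^N - 1)
```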
We might note that the above definition is compatible with the frequency
approach to probability, for if one observes a large number, say N, of occur-
rences of a random experiment for which events A and B are defined, then
P[A|B] represents the proportion of those occurrences in which B occurred in
which A also occurred; that is,
P[A|B] = N_{AB}/N_B,
where N_B denotes the number of occurrences of the event B in the N occur-
rences of the random experiment and N_{AB} denotes the number of occurrences
of the event A ∩ B in the N occurrences. Now P[AB] = N_{AB}/N, and P[B] =
N_B/N; so
P[AB]/P[B] = (N_{AB}/N)/(N_B/N) = N_{AB}/N_B = P[A|B],
consistent with our definition.
EXAMPLE 23 Let Ω be any finite sample space, 𝒜 the collection of all subsets
of Ω, and P[·] the equally likely probability function. Write N = N(Ω).
For events A and B,
P[A|B] = P[AB]/P[B] = (N(AB)/N)/(N(B)/N) = N(AB)/N(B),
where, as usual, N(B) is the size of set B. So for any finite sample space
with equally likely sample points, the values of P[A|B] are defined for any
two events A and B provided P[B] > 0. ////
P[A₁A₂|A₁] = P[A₁A₂]/P[A₁] = (1/4)/(1/2) = 1/2.
if A₁, A₂, ... are mutually disjoint events in 𝒜 and ⋃_{i=1}^∞ Aᵢ ∈ 𝒜, then
P[⋃_{i=1}^∞ Aᵢ | B] = ∑_{i=1}^∞ P[Aᵢ | B].
Hence, P[·|B] for given B satisfying P[B] > 0 is a probability function, which
justifies our calling it a conditional probability. P[·|B] also enjoys the same
properties as the unconditional probability. The theorems listed below are
patterned after those in Subsec. 3.4.

Properties of P[·|B]  Assume that the probability space (Ω, 𝒜, P[·]) is given,
and let B ∈ 𝒜 satisfy P[B] > 0.
Proofs of the above theorems follow from known properties of P[·] and
are left as exercises.
There are a number of other useful formulas involving conditional prob-
abilities that we will state as theorems. These will be followed by examples.
PROOF  Note that A = ⋃_{j=1}^n AB_j and the AB_j's are mutually disjoint;
hence
P[A] = ∑_{j=1}^n P[AB_j] = ∑_{j=1}^n P[A|B_j]P[B_j]. ////
Theorem 30  Bayes' formula  For a given probability space (Ω, 𝒜, P[·]),
if B₁, B₂, ..., B_n is a collection of mutually disjoint events in 𝒜 satisfying
Ω = ⋃_{j=1}^n B_j and P[B_j] > 0 for j = 1, ..., n, then for every A ∈ 𝒜 for which
P[A] > 0,
P[B_k|A] = P[A|B_k]P[B_k] / ∑_{j=1}^n P[A|B_j]P[B_j].
PROOF  By the definition of conditional probability and the theorem of
total probabilities,
P[B_k|A] = P[AB_k]/P[A] = P[A|B_k]P[B_k] / ∑_{j=1}^n P[A|B_j]P[B_j]. ////
In particular, for an event B with 0 < P[B] < 1,
P[B|A] = P[A|B]P[B] / (P[A|B]P[B] + P[A|B̄]P[B̄]). ////
P[A₁A₂ ⋯ A_n] = P[A₁]P[A₂|A₁]P[A₃|A₁A₂] ⋯ P[A_n|A₁ ⋯ A_{n−1}].
EXAMPLE 25 There are five urns, and they are numbered I to 5. Each
urn contains 10 balls. Urn i has i defective balls and 10 - i nondefective
balls, i = 1, 2, ... , 5. For instance, urn 3 has three defective balls and
seven nondefective balls. Consider the following random experiment:
First an urn is selected at random, and then a ball is selected at random
from the selected urn. (The experimenter does not know which urn was
selected.) Let us ask two questions: (i) What is the probability that a
defective ball will be selected? (ii) If we have already selected the ball
and noted that it is defective, what is the probability that it came from
urn 5?
SOLUTION  Let A denote the event that a defective ball is selected and
B_i the event that urn i is selected, i = 1, ..., 5. Note that P[B_i] = 1/5,
i = 1, ..., 5, and P[A|B_i] = i/10, i = 1, ..., 5. Question (i) asks, What is
P[A]? Using the theorem of total probabilities, we have
P[A] = ∑_{i=1}^5 P[A|B_i]P[B_i] = ∑_{i=1}^5 (i/10)(1/5) = (1/50) ∑_{i=1}^5 i = (1/50)(5 · 6/2) = 3/10.
substantiating our suspicion. Note that unconditionally all the B_i's were
equally likely whereas, conditionally (conditioned on occurrence of event
A), they were not. Also, note that
∑_{k=1}^5 P[B_k|A] = ∑_{k=1}^5 (k/15) = (1/15) ∑_{k=1}^5 k = (1/15)(5 · 6/2) = 1.
////
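Both answers in Example 25 follow mechanically from the theorem of total probabilities and Bayes' formula, and exact rational arithmetic keeps the fractions visible. A short Python sketch (the dictionary names are incidental choices):

```python
from fractions import Fraction

# Urn i (i = 1, ..., 5) holds i defective and 10 - i nondefective balls.
P_B = {i: Fraction(1, 5) for i in range(1, 6)}           # urn chosen at random
P_A_given_B = {i: Fraction(i, 10) for i in range(1, 6)}  # defective ball from urn i

# Theorem of total probabilities: P[A] = sum_i P[A|B_i] P[B_i]
P_A = sum(P_A_given_B[i] * P_B[i] for i in range(1, 6))
print(P_A)                                               # 3/10

# Bayes' formula: P[B_k|A] = P[A|B_k]P[B_k] / P[A]
P_B_given_A = {k: P_A_given_B[k] * P_B[k] / P_A for k in range(1, 6)}
print(P_B_given_A[5])                                    # 1/3: urn 5 is the most likely source
print(sum(P_B_given_A.values()))                         # 1
```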
p + (1/2)(1 − p) ≥ p. ////
EXAMPLE 27 An urn contains ten balls of which three are black and seven
are white. The following game is played: At each trial a ball is selected
at random, its color is noted, and it is replaced along with two additional
balls of the same color. What is the probability that a black ball is
selected in each of the first three trials? Let Bi denote the event that a
black ball is selected on the ith trial. We are seeking P[B₁B₂B₃]. By the
multiplication rule,
P[B₁B₂B₃] = P[B₁]P[B₂|B₁]P[B₃|B₁B₂] = (3/10)(5/12)(7/14) = 1/16. ////
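The product (3/10)(5/12)(7/14) can be confirmed two ways: by exact arithmetic and by simulating the urn scheme itself. The simulation below is a sketch under the stated replacement rule (two extra balls of the drawn color added after each trial); the seed and number of runs are arbitrary choices.

```python
from fractions import Fraction
import random

# Exact value by the multiplication rule: at each stage the urn holds the balls
# counted so far, and two balls of the drawn color are added after every draw.
p = Fraction(3, 10) * Fraction(5, 12) * Fraction(7, 14)
print(p)                                        # 1/16

def three_blacks_in_a_row(black=3, white=7, added=2, trials=3):
    for _ in range(trials):
        if random.random() >= black / (black + white):
            return False                        # a white ball was drawn
        black += added                          # drawn ball returned plus two more black
    return True

random.seed(0)
runs = 200_000
print(sum(three_blacks_in_a_row() for _ in range(runs)) / runs)   # near 1/16 = 0.0625
```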
SOLUTION Let Ak denote the event that the sample contains exactly
k black balls and Bj denote the event that the jth ball drawn is black.
We seek P[B_j|A_k]. Consider (i) first. Here
P[A_k] = (n choose k)K^k(M − K)^{n−k}/M^n  and  P[A_k|B_j] = (n − 1 choose k − 1)K^{k−1}(M − K)^{n−k}/M^{n−1}
by Eq. (3) of Subsec. 3.5. Since the balls are replaced, P[B_j] = K/M for
any j. Hence,
P[B_j|A_k] = P[A_k|B_j]P[B_j]/P[A_k]
= [(n − 1 choose k − 1)K^{k−1}(M − K)^{n−k}/M^{n−1}](K/M) / [(n choose k)K^k(M − K)^{n−k}/M^n]
= (n − 1 choose k − 1)/(n choose k) = k/n.
Now consider (ii). By Eq. (5),
P[A_k] = (K choose k)(M − K choose n − k)/(M choose n),
and, similarly,
P[A_k|B_j] = (K − 1 choose k − 1)(M − K choose n − k)/(M − 1 choose n − 1).
To obtain P[B_j], let C_i denote the event that exactly i black balls appear in
the first j − 1 draws; then
P[B_j|C_i] = (K − i)/(M − j + 1),
and so P[B_j] = ∑_i P[B_j|C_i]P[C_i] = K/M. Finally,
P[B_j|A_k] = P[A_k|B_j]P[B_j]/P[A_k]
= [(K − 1 choose k − 1)(M − K choose n − k)/(M − 1 choose n − 1)](K/M) / [(K choose k)(M − K choose n − k)/(M choose n)]
= k/n.
Thus we obtain the same answer under either method of sampling. ////
Independence of events  If P[A|B] does not depend on event B, that is,
P[A|B] = P[A], then it would seem natural to say that event A is independent
of event B. This is given in the following definition.
EXAMPLE 29 Consider the experiment of tossing two dice. Let A denote the
event of an odd total, B the event of an ace on the first die, and C the
event of a total of seven. We pose three problems:
(i) Are A and B independent?
(ii) Are A and C independent?
(iii) Are Band C independent?
The property of independence of two events A and B and the property that
A and B are mutually exclusive are distinct, though related, properties. For
example, two mutually exclusive events A and B are independent if and only if
P[A]P[B] = 0, which is true if and only if either A or B has zero probability.
Or if P[A] ≠ 0 and P[B] ≠ 0, then A and B independent implies that they are
not mutually exclusive, and A and B mutually exclusive implies that they are not
independent. Independence of A and B implies independence of other events
as well.
One might inquire whether all the above conditions are required in the
definition. For instance, does P[A₁A₂A₃] = P[A₁]P[A₂]P[A₃] imply P[A₁A₂]
= P[A₁]P[A₂]? Obviously not, since P[A₁A₂A₃] = P[A₁]P[A₂]P[A₃] if P[A₃]
= 0, but P[A₁A₂] ≠ P[A₁]P[A₂] if A₁ and A₂ are not independent. Or does
pairwise independence imply independence? Again the answer is negative,
as the following example shows.
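A standard illustration of pairwise independence without mutual independence (not necessarily the example the authors had in mind) uses two fair coins: let A and B be heads on the first and second coin, and C the event of exactly one head. The sketch below checks all the products.

```python
from fractions import Fraction
from itertools import product

omega = list(product("HT", repeat=2))           # two fair coins, four equally likely points

def P(event):
    return Fraction(len(event), len(omega))

A = {w for w in omega if w[0] == "H"}           # head on the first coin
B = {w for w in omega if w[1] == "H"}           # head on the second coin
C = {w for w in omega if w[0] != w[1]}          # exactly one head

# Each pair is independent ...
print(P(A & B) == P(A) * P(B), P(A & C) == P(A) * P(C), P(B & C) == P(B) * P(C))
# ... but the three events are not mutually independent:
print(P(A & B & C), P(A) * P(B) * P(C))         # 0 versus 1/8
```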
In one sense, independence and conditional probability are each used to find
the same thing, namely, P[AB], for P[AB] = P[A]P[B] under independence and
P[AB] = P[A IB]P[B] under nonindependence. The nature of the events A and
B may make calculations of P[A], P[B], and possibly P[A IB] easy, but direct
calculation of P[AB] difficult, in which case our formulas for independence or
conditional probability would allow us to avoid the difficult direct calculation
of P[AB]. We might note that P[AB] = P[A IB]P[B] is valid whether or not A
is independent of B provided that P[A IB] is defined.
The definition of independence is used not only to check if two given events
are independent but also to model experiments. For instance, for a given
experiment the nature of the events A and B might be such that we are willing
to assume that A and B are independent; then the definition of independence gives
the probability of the event A n B in terms of P[A] and P[B]. Similarly for
more than two events.
PROBLEMS
To solve some of these problems it may be necessary to make certain assumptions,
such as sample points are equally likely, or trials are independent, etc., when such
assumptions are not explicitly stated. Some of the more difficult problems, or those
that require special knowledge, are marked with an *.
1  One urn contains one black ball and one gold ball. A second urn contains one
white and one gold ball. One ball is selected at random from each urn.
(a) Exhibit a sample space for this experiment.
(b) Exhibit the event space.
(c) What is the probability that both balls will be of the same color?
(d) What is the probability that one ball will be green?
2 One urn contains three red balls, two white balls, and one blue ball. A second
urn contains one red ball, two white balls, and three blue balls.
(a) One ball is selected at random from each urn.
(i) Describe a sample space for this experiment.
(ii) Find the probability that both balls will be of the same color.
(iii) Is the probability that both balls will be red greater than the prob-
ability that both will be white?
(b) The balls in the two urns are mixed together in a single urn, and then a sample
of three is drawn. Find the probability that all three colors are represented,
when (i) sampling with replacement and (ii) without replacement.
3 If A and B are disjoint events, P[A] =.5, and P[A u B] = .6, what is P[B]?
4 An urn contains five balls numbered 1 to 5 of which the first three are black and
the last two are gold. A sample of size 2 is drawn with replacement: Let Bl
denote the event that the first ball drawn is black and B2 denote the event that the
second ball drawn is black.
(a) Describe a sample space for the experiment, and exhibit the events B 1 , B 2 ,
and B 1 B 2 •
(b) Find P[B1], P[B 2 ], and P[B1B2]'
(c) Repeat parts (a) and (b) for sampling without replacement.
5  A car with six spark plugs is known to have two malfunctioning spark plugs.
If two plugs are pulled at random, what is the probability of getting both of
the malfunctioning plugs?
6 In an assembly-line operation, 1 of the items being produced are defective. If
three items are picked at random and tested, what is the probability:
(a) That exactly one of them will be defective?
(b) That at least one of them will be defective?
7 In a certain game a participant is allowed three attempts at scoring a hit. In the
three attempts he must alternate which hand is used; thus he has two possible
strategies: right hand, left hand, right hand; or left hand, right hand, left hand.
His chance of scoring a hit with his right hand is .8, while it is only .5 with his
left hand. If he is successful at the game provided that he scores at least two hits
in a row, what strategy gives the better chance of success? Answer the same
25  A biased coin has probability p of landing heads. Ace, Bones, and Clod toss the
coin successively, Ace tossing first, until a head occurs. The person who tosses
the first head wins. Find the probability of winning for each.
*26  It is told that in certain rural areas of Russia marital fortunes were once told in the
following way: A girl would hold six strings in her hand with the ends protruding
above and below; a friend would tie together the six upper ends in pairs and then
tie together the six lower ends in pairs. If it turned out that the friend had tied
the six strings into at least one ring, this was supposed to indicate that the girl
would get married within a year. What is the probability that a single ring will
be formed when the strings are tied at random? What is the probability that at
least one ring will be formed? Generalize the problem to 2n strings.
27  Mr. Bandit, a well-known rancher and not so well-known part-time cattle rustler,
has twenty head of cattle ready for market. Sixteen of these cattle are his own
and consequently bear his own brand. The other four bear foreign brands. Mr.
Bandit knows that the brand inspector at the market place checks the brands of
20 percent of the cattle in any shipment. He has two trucks, one which will haul
all twenty cattle at once and the other that will haul ten at a time. Mr. Bandit
feels that he has four different strategies to follow in his attempt to market the
cattle without getting caught. The first is to sell all twenty head at once; the
others are to sell ten head on two different occasions, putting all four stolen cattle
in one set of ten, or three head in one shipment and one in the other, or two head in
each of the shipments of ten. Which strategy will minimize Mr. Bandit's prob-
ability of getting caught, and what is his probability of getting caught under each
strategy?
28 Show that the formula of Eq. (4) is the same as the formula of Eq. (5).
(circuit diagram between terminals A and B, with point C marked)
(a) What is the probability that the circuit from A to B will fail to close?
(b) If a line is added on at C, as indicated in the sketch, what is the probability
that the circuit from A to B will fail to close?
(c) If a line and switch are added at C, what is the probability that the circuit from
A to B will fail to close?
60  Let B₁, B₂, ..., B_n be mutually disjoint, and let B = ⋃_{j=1}^n B_j. Suppose P[B_j] > 0
and P[A|B_j] = p for j = 1, ..., n. Show that P[A|B] = p.
61 In a laboratory experiment, an attempt is made to teach an animal to turn right
in a maze. To aid in the teaching, the animal is rewarded if it turns right on a
given trial and punished if it turns left. On the first trial the animal is just as
likely to turn right as left. If on a particular trial the animal was rewarded, his
probability of turning right on the next trial is p₁ > 1/2, and if on a given trial the
animal was punished, his probability of turning right on the next trial is p₂ > p₁.
(a) What is the probability that the animal will turn right on the third trial?
(b) What is the probability that the animal will turn right on the third trial,
given that he turned right on the first trial?
*62 You are to play ticktacktoe with an opponent who on his turn makes his mark by
selecting a space at random from the unfilled spaces. You get to mark first.
Where should you mark to maximize your chance of winning, and what is your
probability of winning? (Note that your opponent cannot win, he can only
tie.)
63 Urns I and II each contain two white and two black balls. One ball is selected
from urn I and transferred to urn II; then one ball is drawn from urn II and turns
out to be white. What is the probability that the transferred ball was white?
64 Two regular tetrahedra with faces numbered 1 to 4 are tossed repeatedly until a
total of 5 appears on the down faces. What is the probability that more than two
tosses are required?
65  Given P[A] = .5 and P[A ∪ B] = .7:
(a) Find P[B] if A and B are independent.
(b) Find P[B] if A and B are mutually exclusive.
(c) Find P[B] if P[A IB] =.5.
66 A single die is tossed; then n coins are tossed, where n is the number shown on the
die. What is the probability of exactly two heads?
*67 In simple Mendelian inheritance, a physical characteristic of a plant or animal is
determined by a single pair of genes. The color of peas is an example. Let y and
g represent yellow and green; peas will be green if the plant has the color-gene
pair (g, g); they will be yellow if the color-gene pair is (y, y) or (y, g). In view of
this last combination, yellow is said to be dominant to green. Progeny get one
gene from each parent and are equally likely to get either gene from each parent's
pair. If (y, y) peas are crossed with (g, g) peas, all the resulting peas will be (y, g)
and yellow because of dominance. If (y, g) peas are crossed with (g, g) peas, the
probability is .5 that the resulting peas will be yellow and is .5 that they will be
green. In a large number of such crosses one would expect about half the result-
ing peas to be yellow, the remainder to be green. In crosses between (y, g) and
(y, g) peas, what proportion would be expected to be yellow? What proportion
of the yellow peas would be expected to be (y, y)?
*68 Peas may be smooth or wrinkled, and this is a simple Mendelian character.
Smooth is dominant to wrinkled so that (s, s) and (s, w) peas are smooth while
(w, w) peas are wrinkled. If (y, g) (s, w) peas are crossed with (g, g) (w, w) peas,
what are the possible outcomes, and what are their associated probabilities? For
the (y, g) (s, w) by (g, g) (s, w) cross? For the (y, g) (s, w) by (y, g) (s, w) cross?
69 Prove the two unproven parts of Theorem 32.
70  A supplier of a certain testing device claims that his device has high reliability
inasmuch as P[A|B] = P[Ā|B̄] = .95, where A = {device indicates component is
faulty} and B = {component is faulty}. You hope to use the device to locate the
faulty components in a large batch of components of which 5 percent are faulty.
(a) What is P[B|A]?
(b) Suppose you want P[B|A] = .9. Let p = P[A|B] = P[Ā|B̄]. How large
does p have to be?
II
RANDOM VARIABLES, DISTRIBUTION
FUNCTIONS, AND EXPECTATION
Subsec. 4.5. Moments and moment generating functions, which are expecta-
tions of particular functions, are considered in the final subsection. One major
unproven result, that of the uniqueness of the moment generating function, is
given there. Also included is a brief discussion of some measures of characteristics,
such as location and dispersion, of distribution or density
functions.
This chapter provides an introduction to the language of distribution
theory. Only the univariate case is considered; the bivariate and multivariate
cases will be considered in Chap. IV. It serves as a preface to, or even as a
companion to, Chap. III, where a number of parametric families of distribution
functions is presented. Chapter III gives many examples of the concepts
defined in Chap. II.
2.1 Introduction
2.2 Definitions
We commence by defining a random variable.
FIGURE 1  (lattice of the 36 sample points; horizontal axis 1st die = 1, ..., 6, vertical axis 2d die = 1, ..., 6)
that it satisfies the definition; that is, we should show that {ω: X(ω) ≤ r}
belongs to 𝒜 for every real number r. 𝒜 consists of the four subsets:
φ, {head}, {tail}, and Ω. Now, if r < 0, {ω: X(ω) ≤ r} = φ; if
0 ≤ r < 1, {ω: X(ω) ≤ r} = {tail}; and if r ≥ 1, {ω: X(ω) ≤ r} = Ω = {head,
tail}. Hence, for each r the set {ω: X(ω) ≤ r} belongs to 𝒜; so X(·) is a
random variable. ////
[0, 1] which satisfies F_X(x) = P[X ≤ x] = P[{ω: X(ω) ≤ x}] for every real
number x. ////
A cumulative distribution function is uniquely defined for each random
variable. If it is known, it can be used to find probabilities of events defined
in terms of its corresponding random variable. (One might note that it is in
this definition that we use the requirement that {ω: X(ω) ≤ r} belong to 𝒜 for
every real r, which appears in our definition of random variable X.) Note that
different random variables can have the same cumulative distribution function.
See Example 4 below.
The use of each of the three words in the expression "cumulative distri-
bution function" is justifiable. A cumulative distribution function is first of
all a function; it is a distribution function inasmuch as it tells us how the values
of the random variable are distributed, and it is a cumulative distribution func-
tion since it gives the distribution of values in cumulative form. Many writers
omit the word "cumulative" in this definition. Examples and properties of
cumulative distribution functions follow.
EXAMPLE 4 In the experiment of tossing two fair dice, let Y denote the
absolute difference of the upturned faces. The cumulative distribution of Y, F_Y(·), is sketched
in Fig. 2. Also, let X_k denote the value on the upturned face of the kth
die for k = 1, 2. X₁ and X₂ are different random variables, yet both
have the same cumulative distribution function, which is
F_{X_k}(x) = ∑_{i=1}^5 (i/6)I_{[i, i+1)}(x) + I_{[6, ∞)}(x)
and is sketched in Fig. 3. ////
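The step heights of F_Y(·) in Fig. 2 can be recovered by counting sample points, exactly as in the equally likely computations of Chap. I. A minimal Python sketch:

```python
from itertools import product

omega = list(product(range(1, 7), repeat=2))        # the 36 equally likely points

def Y(w):
    return abs(w[0] - w[1])                         # absolute difference of the two faces

for y in range(0, 6):
    count = sum(1 for w in omega if Y(w) <= y)      # N({w: Y(w) <= y})
    print(f"F_Y({y}) = {count}/36")
# y = 0, 1, 2, 3, 4, 5 give 6/36, 16/36, 24/36, 30/36, 34/36, 36/36,
# the heights of the step function sketched in Fig. 2.
```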
Careful scrutiny of the definition and above examples might indicate the
following properties of any cumulative distribution function Fx( . ).
FIGURE 2  F_Y(y): a step function with values 6/36, 16/36, 24/36, 30/36, 34/36, and 1 at y = 0, 1, 2, 3, 4, 5.
FIGURE 3  F_{X_k}(x): a step function rising by 1/6 at each of x = 1, 2, ..., 6.
3 DENSITY FUNCTIONS
Random variable and the cumulative distribution function of a random variable
have been defined. The cumulative distribution function described the distri-
bution of values of the random variable. For two distinct classes of random
variables, the distribution of values can be described more simply by using
density functions. These two classes, distinguished by the words "discrete"
and" continuous," are considered in the next two subsections.
The values of a discrete random variable are often called mass points, and
f_X(x_j) denotes the mass associated with the mass point x_j. Probability mass
function, discrete frequency function, and probability function are other terms
used in place of discrete density function. Also, the notation p_X(·) is some-
times used instead of f_X(·) for discrete density functions. f_X(·) is a function
with domain the real line and counterdomain the interval [0, 1]. If we use the
indicator function,
f_X(x) = ∑_n P[X = x_n] I_{{x_n}}(x).    (2)
PROOF  Denote the mass points of X by x₁, x₂, .... Suppose f_X(·)
is given; then F_X(x) = ∑_{j: x_j ≤ x} f_X(x_j). Conversely, suppose F_X(·) is given;
then f_X(x_j) = F_X(x_j) − lim_{0<h→0} F_X(x_j − h); hence f_X(x_j) can be found for
each mass point x_j; however, f_X(x) = 0 for x ≠ x_j, j = 1, 2, ..., so f_X(x) is
determined for all real numbers. ////
-A
-h ~ i
~ 36
f6 16
-h io
0 1 2 3 4 5 6 7 8 9 10 11 12
FIGURE 4
According to Theorem 1, for given fx('), Fx(x) can be found for any x;
for instance, if x = 2.5,
And, if Fx (') is gjven, fx(x) can be found for any x. For example, for
x = 3,
Y 0 I 2 3 4 5
6 10 8 6
fly) 36 J6 36 36
4
36
..L
36 11I1
The discrete density function tells us how likely or probable each of the
values of a discrete random variable is. It also enables one to calculate the
probability of events described in terms of the discrete random variable X.
For example, let X have mass points XI' x 2 , ••• , X n , ••• ; then P[a < X b] =
L !x(Xj) for a < b.
j:{ a< XJ :;;b}
60 RANDOM VARIABLES, DISTRIBUTION FUNCTIONS, AND EXPECTATION II
Xn , • • • • IIII
This definition allows us to speak of discrete density functions without
reference to some random variable. Hence we can talk about properties that
a density function might have without referring to a random variable.
Other names that are used instead of probability density function include
density function, continuous density function, and integrating density function.
, Note that strictly speaking the probability density function fx(') of a
random variable X is not uniquely defined. All that the definition requires is
that the integral of fx(') gives Fx(x) for every x, and more than one function
fx(') may satisfy such requirement. For example, suppose Fx(x) = x/[o, I)(x) +
x
1[I.oo)(x); then fx(u) = 1(0, I)(u) satisfies Fx(x) = J fx(u)
-00
du for every x, and
so fx(') is a probability density function of X. However fx(u) ::=: 1(0. tlu) +
x
69/(t}(u) + l<t, I)(u) also satisfies Fx(x) = J
-00
fx(u) duo (The idea is that ,if the
value of a function is changed at only a "few" points, then its integral is
unchanged.) In practice a unique choice of fx(') is often dictated by continuity'
considerations and for this reason we will usually allow ourselves the liberty of
3 DENSITY fUNCTIONS 61
uous." All the continuous random variables that we shall encounter will take
on a continuum of values .. The second justification arises when one notes that
the absolute continuity of the cumulative distribution function is the regular
mathematical definition of an absolutely continuous function (in words, a
function is called absolutely continuous if it can be written as the integral of its
derivative); the "continuous," then, in a corresponding continuous random
variable could be considered just an abbreviation of" absolutely continuous."
The notations for discrete density function and probability density func-
tion are the same, yet they have quite different interpretations. For discrete
random variables /x(x) = P[X = xl, which is not true for continuous random
variables. For continuous random variables,
- dFx(x) _ l'
- 1m Fx(x
+ ~x) - Fx(x - ~x)
f x (x ) - .
dx 4:x-O 2~x '
hence fx(x)2~x ~ Fx(x + ~x) - Fx(x - ~x) = P[x - Ax < X < x + ~xl; that
is, the probability that X is in a small interval containing the value x is approxi-
mately equal to /x(x) times the width of the interval. For discrete random
62 RANDOM VARIABLES, DISTRIBUTION FUNCTIONS, AND EXPECTATION II
variables fx(') is a function with domain the real line and counterdomain the
interval [0,.1]; whereas, for continuous random variables fx(') is a function with
domain the real line and counterdomain the infinite interval [O~ (0) .
. Remark We will use the term" density function" without the modifier
of" discrete" or "probability" to represent either kind of density. 1111
(ii) J f(x) dx =
-IX)
1. III/
FIGURE 5
4.1 Mean
(i) (4)
(5)
(iii) G[X] = fo
00
[1 - Fx(x)] dx -
fO_ooFx(X) dx (6)
In 0), G[X] is defined to be the indicated series provided that the series is
absolutely convergent; otherwise, we say that the mean does not exist. And in
(ii), G[X] is defined to be the indicated integral if the integral exists; otherwise,
we say that the mean does not exist. Final!y, in (iii), we require that both
integrals be finite for the existence of G[X].
L
Note what the definition says: In xjfx(xj), the summand is thejth value
j
of the random variable X multiplied by the probability that X equals that jth
value, and then the summation is overall values. So G[X] is an" average" of the
values that the random variable takes on, where each value is weighted by the
probability that the random variable is equal to that value. Values that are
more probable receive more weight. The same is true in integral form in (ii).
There the value x is multiplied by the approximate probabjlity that X equals
the value x, namely fx(x) dx, and then integrated over all values.
Several remarks are in order.
Remark G[X] is the center of gravity (or centroid) of the unit mass that
is determined by the density function of X. So the mean of X is a meas-
ure of where the values of the random variable X are" centered." Other
measures of "location" or "center" of a random variable or its corre-
sponding density are given in Subsec. 4.6. 7Tn '
66 RANDOM VARIABLES; DISTRIBUTION FUNCI10NS; AND EXPECTATION II
IIII
tC[X]
oo
= f0 [1 - Fx(x)] dx -
fO
-00 Fx(x) dx =
foo0 pe- AX dx =;:.
P
Here, we have used Eq. (6) to find the mean of a random variable that is
partly discrete and partly continuous. IIII
4 EXPECTATIONS AND MOMENTS 67
so we say that S[X] does not exist.' We might also say that the mean of X
is infinite since it is clear here that the integral that defines the mean is
~~ @
4.2 Variance
The mean of a random variable X, defined in the previous subsection, was a
measure of central location of the density of X. The variance of a random vari-
able X will be a measure of the spread or dispersion of the density of X.
00 (x - Ilx)2fx(x) dx (8)
density; similarly (for those readers familiar with elementary physics or me-
chanics), variance represents the moment of inertia of the same density with
respect to a perpendicular axis through the center of gravity.
EXAMPLE 14 Let X be the total of the two dice in the experiment of tossing
two dice.
var [X] = L(Xj - JlX)2/X(Xj)
= (2 - 7)2l6 + (3 - 7) 2l6 + (4 - 7)2 336 + (5 - 7)2 346
+ (6 - 7)2 356 + (7 - 7)2 366 + (8 - 7)2 356 + (9 - 7)2 346
+ (10 - 7)2 /6 + (11 - 7) 2l6 + (12 _7)2 316 = 2l6°. IIII
r
- 00
AX
= (x - W.<e- dx
1
- A?' IIII
r
o
= 2xpe- AX dx - (~)'
= 2 ~_
).2
(E)). 2 _
-
p(2 - p)
).2 . IIII
4 EXPECTATIONS AND MOMENTS 69
(11)
* tf[g(X)] has been defined here for random variables that are either discrete or
continuous; it can be defined for other random variables as well. For the reader
who is familiar with the Stieltjes integral, C[g(X)J is defined as the Stieltjes integral
J~ oog(x) dFx(x) (provided this integral exists). where F x(·) is the cumulative distribu-
tion function of X. If X is a random variable whose cumulative distribution fUnction is
partly discrete and partly continuous. then (according to Subsec. 3.3) Fx(x) =
(l - p)r(x) + pPC(x) for some 0 < p < 1. Now tf[g(X)J can be defined to be tf[g(X)J
= (1- p) 2:g(X))fd(X)) + p J~ rz;)g(x)fac(x) dx, where fd(.) is the discrete density func-
tion corresponding to Fd(.) and r C ( . ) is the probability density function corre-
sponding to F ac (.).
70 RANDOM VARIABLES, DISTRIBUTION FUNCTIONS, AND EXPECTATION II
00 00
00 00
= Cl f- oo gl(X)/X(x) dx + Cz f- oo gix)/x(x) dx
= C1 S[gl(X)] + Cz S[g2(X)]'
Finally,
PROOF CNe first note that if S[X2] exists, then S[X] exists.)* By
our definitions of variance and S[g(X)], it follows that var [X] =
S[eX - S[X])2]. Now S[(X - S[X])2] = S[X2 - 2XS[X] + (S[XD2] =
S[X2]- 2(S[X])2 + {S[X])2 = S[X2] - {S[X])2. 1//1
* Here and in the future We are not going to concern ourselves with checking existence.
4 EXPECTATIONS AND MOMENTS 71
" 1I'~·t.:.
8[g(X)] = f«>
-00
g(x)/x(x) dx f =
~g~)~~
g(x)/x<x) dx
+f {x:g(x)<t}
g(x)/x(x) dx > f g(x)/x{x) dx
{x: g(x)~t}
Divide by k, and the result follows. A similar proof holds for X discrete.
1111
Corollary Chebyshev inequality If X is a random variable with finite
vanance,
that is. the probability that X falls within ru x units of Jlx is greater than or
equal to I - llr2. For r = 2, one gets P[Jtx - 2ux < X < Jlx + 2u x ] > t, or
for any random variable X having finite variance at least three-fourths of the
mass of X falls within two standard deviations of its mean.
Ordinarily, to calculate the probability of an event described in terms of
a random variable X, the distribution or density of X is needed; the Chebyshev
inequality gives a bound, which does not depend on the distribution of X, for the
pro bability of particular events described in terms of a random variable and
its mean and variance.
PROOF Since g(x) is continuous and convex, there exists a line, say
/(x) = a + bx, satisfying /(x) = a + bx < g(x) and /(tS'[X]) = g(tS'[X]).
/(x) is a line given by the definition of continuous and convex that goes
through the point (tS'[X], g(tS'[X])). Note that tS'[/(X)] = tS'[(a + bX)] =
a + btS'[X] = /(tS'[X]); hence g(tS'[X]) = /(tS'[X]) = tS'[/(X)] < tS'[g(X)] [using
property {iv} of expected values (see Theorem 3) for the last inequality].
IIII
IIII
Note that III = G[(X - Ilx)] = 0 and 112 = G[(X - IlX)2], the variance of X.
Also, note that all odd moments of X about Ilx are 0 if the density function of X
is symmetrical about Ilx, provided such moments exist.
In the ensuing few paragraphs we will comment on how the first four
moments of a random variable or density are used as measures of various
M
, .characteristics of the corresponding density. For some of these characteristics,
'.' other measures can be defined in terms of quantiles.
Fx(x)
1.0
1 Fx(x)
.75
.50
.25
x
FIGURE 6 0
So the median of X is any number that has half the mass of X to its right and
the other half to its left, which justifies use of the word median."
H
We have already mentioned that8[X], the first moment, locates the" center"
of the density of X. The median of X is also used to indicate a central
location of the density of X. A third measure of location of the density of X,
though not necessarily a measure of central location, is the mode of X, which is
defined as that point (if such a point exists) at whkh fx(') attains its maximum.
Other measures of location [for example, t('.25 + '.75)] could be devised, but
three, mean, median, and mode, are the ones commonly used.
We previously mentioned that the second moment about the mean, the
variance of a distribution, measures the spread or dispersion of a distribution.
Let us look a little further into the manner in which the variance characterizes
the distribution. Suppose that It (x) and f2(x) are two densities with the same
mean f.l such that
p+a
{-a [fleX) - f2(X)] dx > 0 (17)
for every value of a. Two such densities are illustrated in Fig. 7. It can be
shown that in this case the variance ai in the first density is smaller than the
FIGURE 7
4 EXPECTATIONS AND MOMENTS 75
FIGURE 8
variance CT~ in the second density. We shall not take the time to prove this in
detail, but the argument is roughly this: Let
g(x) = It (x) - f2(X) ,
co
where It (x) and f2(X) satisfy Eq. (17). Since S g(x) dx = 0, the positive area
- co
between g(x) and the x axis is equal to the negative area. Furthermore, in
view of Eq. (17), every positive element of area g(x') dx' may be balanced by a
negative element g(x") dx" in such a way that x" is further from J-l than x'.
When these elements of area are multiplied by (x - J-l)2, the negative elements
will be multiplied by larger factors than their corresponding positive elements
(see Fig. 8); hence
JCO (x - J-l)2 g (X) dx < 0
-00
unless It (x) and f2(X) are equal. Thus it follows that ui < u~ . The converse
of these statements is not true. That is, if one is told that ui < u~ , he cannot
conclude that the corresponding densities satisfy Eq. (17) for all values of a;
although it can be shown that Eq. (17) must be true for certain values of a.
Thus the condition ui < u~ does not give one any precise information about
the nature of the corresponding distributions, but it is evident that It (x) has
more area near the mean thanf2(x), at least for certain intervals about the mean.
We indicated above how variance is used as a measure of spread or
dispersion of a distribution. Alternative measures of dispersion can be defined
in terms of quantiles. For example 7S -e. e.
2S , called the interquartile range,
is a measure of spread. Also, p - e el- p for some -!<p < I is a possible
measure of spread.
The third moment J-l3 about the mean is sometimes called a measure of
asymmetry, or skewness. Symmetrical distributions like those in Fig. 9 can be
shown to have J-l3 = O. A curve shaped likelt(x) in Fig. 10 is said to be skewed
to the left and can be shown to have a negative third moment about the mean;
one shaped like f2{X) is called skewed to the right and can be shown to have a
positive third moment about the mean. Actually, however, knowledge of the
76 RANDOM VARIABLES, DISTRIBUTION FUNCTIONS, AND EXPECTATION II
--~--~------------~--------------~~-------x
FIGURE 9
third moment gives almost no clue as to the shape of the distribution, and we
mention it mainly to point out that fact. Thus, for example, the density f3(x)
in Fig. 10 has /13 = 0, but it is far from symmetrical. By changing the curve
slightly we could give it either a positive or negative third moment. The ratio
/13/(13, which is unitiess, is called the coefficient of skewness.
The quantity 11 = (mean - median)/(standard deviation) provides an
alternative measure of skewness. It can be proved that -1 < 11 < 1.
The fourth moment about the mean is sometimes used as a measure of
excess or kurtosis, which is the degree of flatness of a density near its center.
Positive values of /14-/(14 - 3, called the coefficient 0/ excess or kurtosis, are
sometimes used to indicate that a density is more peaked around its center than
the density of a normal curve (see Subsec. 3.2 of Chap. III), and negative values
are sometimes used to indicate that a density is more flat around its center than
the density of a normal curve. This measure, however, suffers from the same
failing as does the measure of skewness; namely, it does not always measure
what it is supposed to.
While a particular moment or a few of the moments may give little
information about a distribution (see Fig. 11 for a sketch of two densities having
the same first four moments. See Ref. 40. Also see Prob. 30 in Chap. Ill),
the entire set of moments (J1~, /1~, f.L;, ...) will ordinarily determine the distri-
FIGURE 10
4 EXPECTATIONS AND MOMENTS 77
.7
.6
.5
.4
.3
.2
.1
-2
FIGURE 11
bution exactly, and for this reason we shall have occasion to use the moments
in theoretical work.
In applied statistics, the first two moments are of great importance, as
we shall see, but the third and higher moments are rarely useful. Ordinarily
one does not know what distribution function one is working with in a practical
problem, and often it makes little difference what the actual shape of the distri-
bution is. But it is usually necessary to know at least the location of the
distribution and to have some idea of its dispersion. These characteristics can
be estimated by examining a sample drawn from a set of objects known to have
the distribution in question. This estimation problem is probably the most
important problem in applied statistics, and a large part of this book will be
devoted to a study of it.
We now define another kind of moment,/actorial moment.
I111
For some random variables (usually discrete), factorial moments are
78 RANDOM VARlABLES, DISTRIBUTION FUNCTIONS, AND EXPECTATION
easier to calculate than raw moments. However the raw moments can be
obtained from the factorial moments and vice versa.
The moments of a density function play an important role in theoretical
and applied statistics. In fact, in some cases, if all the moments are known,
the density can be determined. This will be discussed briefly at the end of
this subsection. Since the moments of a density are important, it would be
useful if a function could be found that would give us a representation of all
the moments. Such a function is called a moment generating function.
where the symbol on t~e left is to be interpreted to mean the rth derivative of
met) evaluated as t -+ O. Thus the moments of a distribution may be obtained
from the moment generating function by differentiation, hence its name.
If in Eq. (19) we replace ext by its series expansion, we obtain the series
expansion of met) in terms of the moments of fx( •); thus
'4 EXPECTATIONS AND MoMENrS 79
oo
1 , j
(22)
= L0 I. -., rJ t ,
II.
j=
from which it is again evident that J.l; may be obtained from met); J.l; is the co-
efficient of trlr!.
dm(t) 1 , 1
m'(t) = = hence m (0) = 8[X] = -.
dt (1 - t)2 1
And " 21
m (t) = (1 _ t)3 ' so m"(O) = 8[X2] = 1~' IIII
e-;'A:~
fx(x) = - for x = 0, 1, 2, ....
x!
Then
d
hence - B[ tX] = A. 1111
dt t=)
PROBLEMS
J (a) Show that the following are probability density functions (p.d.f.'s):
hex) = e-x/(o.OO)(x}
f2(X} = 2e- 2X/(0. OO)(x}
f(x) = «() + l}};(x) - ()f2(X)
4 Suppose that the cumulative distribution function (c.d.f.) Fx(x) Can be written
as a function of (x - a.)/fJ, where a. and fJ > 0 are constants; that is, x, a., and fJ
appear in Fx( .) only in the indicated form.
(a) Prove that if a. is increased by aa., then so is the mean of X.
(b) Prove that if fJ is multiplied by k(k > 0), then so is the standard deviation
of X.
5 The experiment is to toss two balls into four boxes in such a way that each ball
is equally likely to fall in any box. Let X denote the number of balls in the first
box.
(a) What is the c.d.f. of X?
(b) What is the density function of X?
(c) Find the mean and variance of X.
6 A fair coin is tossed until a head appears. Let X denote the number of tosses
required.
(a) Find the density function of X.
(b) Find the mean and variance of X.
(c) Find the moment generating function (m.g.f.) of X.
*7 A has two pennies; B has one. They match pennies until one of them has all
three. Let X denote the number of trials required to end the game.
(a) What is the density function of X?
(b) Find the mean and variance of X.
(c) What is the probability that B wins the game?
8 Let fx(x) =(1!fJ)[1 -I (x - a.)/fJl ]/(II-p. a+plx), where IX and (3 are fixed con-
stants satisfying - 00 < a. < 00 and fJ > O.
(a) Demonstrate that fx(') is a p.d.f., and sketch it.
(b) Find the c.d.f. corresponding to fx(·).
(c) Find the mean and variance of X.
(d) Find the qth quantile of X.
9 Letfx(x) = k(l/fJ){1 - [(x - a.)/{3]2}I(rJ-p. Cl+p,(X), where - 00 < IX < 00 and (3 > O.
(a) Find k so that/xC') is a p.d.f., and sketch the p.d.f.
(b) Find the mean, median, and variance of X.
(c) Find 8[1 X - a.1]·
(d) Find the qth quantile of X.
10 Let fx(x) = t{O/(o. 1 ,(x) + 1[1. 2)(X) (1 - 0)/(2.3,(X)}, where 0 is a fixed constant
satisfying 0 0 ~ 1.
(a) Find the c.dJ. of X.
(b) Find the mean, median, and variance of X.
J1 Let f(x; 0) =' Of(X; 1) + (1 - O)f(x; 0), where 0 is a fixed constant satisfying
o < 0 ~ 1. Assume that/(·; 0) andf(·; 1) are both p.d.f.'s.
(a) Show that f( . ; 0) is also a p.d.f.
(b) Find the mean and variance of f(· ; 0) in terms of the mean and variance of
f(' ; 0) and f(' ; 1), respectively.
(c) Find the m.g.f. of/('; 0) in terms of the m.g.f.'s of/('; 0) andf(·; 1).
PROBLEMS 83
J2 A bombing plane flies directly above a railroad track. Assume that if a large
(small) bomb falls within 40 (15) feet of the track, the track will be sufficiently
damaged so that traffic will be disrupted. Let X denote the perpendicular
distance from the track that a bomb falls. Assume that
100-x
/x(x) = l[o.loo)(x).
5000
and
18 An urn contains balls numbered 1, 2, 3, First a ball is drawn from the urn,
and then a fair coin is tossed the number of times as the number shown on the
drawn ball. Find the expected number of heads.
19 If X has distribution given by P[X = 0] = P[X = 2] = p and P[X = 1] = 1 - 2p
for 0 <p< i, for what p is the variance of X a maximum?
20 If X is a random variable for which P[X 0] = 0 and S[X] fL < 00, prove that
P[X fLt] > 1 -l/t for every t 1.
of X, inasmuch as the coefficient of t J gives P[X = n. Find S[t X] for the random
variable of Probs. 6 and 7.
III
SPECIAL PARAMETRIC FAMILIES OF
UNIV ARIA TE DISTRIBUTIONS
2 DISCRETE DISTRIBUTIONS
In this section we list several parametric families of univariate discrete densities.
Sketches of most are given; the mean and variance of each are derived, and usually
examples of random experiments for which the defined parametric family
might provide a realistic model are included.
The parameter (or parameters) indexes the family of densities. For
each family of densities that is presented, the values that the parameter can
assume will be specified. There is no uniform notation for parameters; both
Greek and Latin letters are used to designate them.
1
for x = 1, 2, ... , N
1
f(x) =f(x; N) = N -
-N- I {l.Z ••.•• N} (x) , (1)
o otherwise
where the parameter N ranges over the positive integers, is defined to have
a discrete uniform distribution. A random variable X having a density
given in Eq. (1) is called a discrete uniform random variable. 1111
l/Nlu~
o 2 3
I ---~ __ _
FIGURE 1
Density of discrete uniform.
(N Z -1) N 't 1
' and mx(t) = 8[e ] = j~l e' N'
tX
var [X] = 12
2 DISCRETE DISTRIBUTIONS 87
PROOF
11II
Remark The discrete uniform distribution is sometimes defined in
density form as I(x; N) = [1/(N + l)]I{o. I. ...• N} (x), for N a nonnegative
integer. Jf such is the case, the formulas for the mean and variance have
to be modified accordingly. /11I
for x 0 or I} ,
= pX(l - p)1 -xI lO • l}(x), (2)
otherwise
FIGURE 2
Bemou1li density.
88 SPECIAL PARAMETRIC FAMILIES OF UNIVARIATE DISTRIBUTIONS III
PROOF 8[X] = 0 . q + 1 . p = p.
var [X] = 8[X2] - (8[X])2 = 0 2 • q + 12 • p _ p2 = pq.
mx(t) = 8[e tX ] = q + pet. IIII
EXAMPLE 1 A random experiment whose outcomes have been classified
into two categories, called" success" and" failure," represented by the
letters d and I, respectively, is called a Bernoulli trial. If a random
variable X is defined as 1 if a Bernoulli trial results in success and 0 if
the same Bernoulli trial results in failure, then X has a Bernoulli distribu-
tion with parameter p = P[success]. 1III
EXAMPLE 2 For a given arbitrary probability space (0, d, P[·]) and for A
belonging to d, define the random variable X to be the indicator function
of A; that is, X(w) 1.4.(w); then X has a Bernoulli distribution with
parameter p = P[X 1] = P[A]. II1I
Definition 3 Binomial distribution A random variable X is defined to
have a binomial distribution if the discrete density function of X is given
by
n)
fx(x) =fx(x; n, p) = {( ~ p q
x n-x for x = 0, I, ... , n
otherwise (4)
o 1 2
-l~.~, Illll
4 5 6 7 8 9 10
I
o• •I 2 3 4 5 6
I • •
7 8 9 10
.. x
012345
x L
o I
FIGURE 3
Binomial densities.
2 DISCRETE DISTRIBUTIONS 89
where the two parameters nand p satisfy 0 <p ~ 1, n ranges over the
positive integers, and q = I - p. A distribution defined by the density
function given in Eq. (4) is called a binomial distribution. IIII
PROOF
mx(t) = 8[etX ] =
x=o
± etx(n)pxqn-x =
X
t
x~o
(n)(petyqn-x
X
= (pet + qt.
Now
and
hence
S[X] = m~(O) = np
and
var [X] 8[X2] - (8[X])2
= mi(O) - (np)Z = n(n - l)p2 + np - (np)2 = np(l - p). I1II
Remark The binomial distribution reduces to the Bernoulli distribution
when n = I. Sometimes the Bernoulli distribution is called the point
binomial. IIII
is given by qqpqpp ... qp. Let the random variable X represent the num-
ber of successes in the n repeated independent Bernoulli trials. Now
P[X = x] = P[exactly x successes and n - x failures in n trials]
(:) p'q"- x for x = 0, 1, ... , n since each outcome of the experimentthat has
exactly x successes has probability p"q'-x and there are (:) such outcomes.
Hence X has a binomial distribution. I1I1
which is the same as P[Ad in Eq. (3) of Subsec. 3.5 of Chap. I, for x = k.
1III
PROOF
for x = 0, 1, ... , n
fx(x; M, K, n) =
o otherwise (7)
&[X] = n ' -
K
and var [Xl = n.- . K M-K . -
M-n
- (8)
M M M M-t
PROOF
(K)(M - K) (K - l)(M - K)
8[ Xl = t x x n- x =n. K t x-I n - x
FO (~) M x=, (~~n
(Ky- 1) (M n-l-y
Kn-I
- 1- K+ 1)
=n .- L ...:.----....;..---:~~-:-----.:;...~
M (M - 1)
y=O
n-l
K
-- n 'M'
-
M ]0; K = 4; n = 4 M = ]0; K 4; n =5
~ __~__~~_~I__~.____. x
o 2 3 4
FIGURE 4
Hypergeometric densities.
8[X(X - 1)]
= Ix(x-l) (~)(~=~)
.=0 (~)
K- 2)(M - K)
=n(n_1)K(K-1)
M(M - l)x=2
± x-2 n-x
(
2) (M -
n-2
(K- 2) (M - 2- K+ 2)
= n(n _ 1) K(K - 1)
I
M(M - 1) y""o
n 2 n- 2- Y
y
(M - 2)
= n(n _ 1) K(K - 1) .
M(M -1)
n-2
Hence
= n(n _ 1) K(K - 1) +n K _ n 2 K2
M(M-1) M M2
~]
K [ K-1
=n- (n-l). +1
M M-l
= fx(x; A) =~
I x!
for x = 0, 1, 2, ... 1
e -A.'X
),
fx(x)
I
r=
1 xl I (D. I .... }(x), (9)
(0 otherwise
where the parameter Ii satisfies ), > O. The density given in Eq. (9) is
called a Poisson density. IIII
.607
0
G
.~--~
I 2
.0I3
3
.002
4• -e--... x
5
I_J
0
.184
I
2
.06]
t
3
.015
•
4
.003
•5
..x
.073
_~.0!8 t .005 .002 .001
o I . •
IO
•
11
---4---+ X
12
FIGURE 5
Poisson densities.
94 SPECIAL PARAMETRIC FAMILIES OF UNIVARIATE DISTRIBUTIONS III
PROOF
hence,
and
So,
G[X] = m~(O) = A
and
~-~~----------~--~~-------*--*-~--------~--------x
o
FIGURE 6
(i) The probability that exactly one happening will occur in a small
time interval of length h is approximately equal to vh, or prone happening
in interval of length h] = vh + o(h).
(ii) The probability of more than one happening in a small time interval
of length h is negligible when compared to the probability of just one
happening in the same time interval, or P[two or more happenings in
interval of length h] = o(h).
(iii) The num bers of happenings in nonoverlapping time intervals are
independent.
The term o(h), which is read some function of smaller order than 17,"
H
The quantity v can be interpreted as the mean rate at which happenings occur per
unit of time and is consequently referred to as the mean rate of occurrence.
and on passing to the limit one obtains the differential equation P~Ct) =
- vPo(t), whose solution is Po(t) = e vr, using the condition PoCO) = 1.
Similarly, PtCt + h) = Pt(t)Po(h) + Po(t)Pt(h), or P1(t + h) = P.(t)[l - vh
- o(h)] + PoCt)[vh + o(h)], which gives the differential equation P~(t) =
- vPI(t) + vPo(t), the solution of which is given by PICt) = vte- vr , using
the initial condition PI (0) = 0. Continuing in a similar fashion one
obtains P~(t) = - vPII (t) + vP1I-.(t), for n = 2, 3, ....
It is seen that this system of differential equations is satisfied by
P II (t) = (vt)lI e -vt/n L
The second proof can be had by dividing the interval (0, t) into, say
n time subintervals, each of length h = tin. The probability that k
happenings occur in the interval (0, t) is approximately equal to the prob-
ability that exactly one happening has occurred in each of k of the 11
subintervals that we divided the interval (0, t) into. Now the probability
of a happening, or success," in a given subinterval is vh. Each sub-
H
n n k! n n k!
//1/
2 DISCRETE DISTRIBUTIONS 97
L
k==K+ )
PROOF
e-1).k-1/(k -l)! k
-
e- A)..klk! ).. ,
which is less than 1 if k < l, greater than I if k > )., and equal to 1 if )..
is an integer and k = ).. 1111
2 DISCRETE DISTRIBUTIONS 99
/x(x) =/x(x; p)
= {:(l -p)'
for x = 0, 1, ... }
= p(l - p)XI{o, I, ... }(X), (11)
otherwise
/x(X) = fx(x;r,p)
otherwise (12)
= (
r X-I) pq,. xI
+x ( )
{O.I .... }X'
_l
P -4
The geometric distribution is well named since the values that the geometric
density assumes are the terms of a geometric series. Also the mode of the
geometric density is necessarily o. A geometric density possesses one other
interesting property, which is given in the following theorem.
., . P[X 2 i + j]
PROOF P[X > I + JI X > I] = P[X 2 i]
00
I p(1 - PY
_ ;:.:...x=_.;....·+...:;.l_--- _
(l _p)i+l
~p(1 - PY (1-
x=i
= (1 - p)l
=P[X> j]. IIII
and the mean is lip, the variance isqlp2, and the moment generating function is
petl( 1 - qe r ).
rq rq
S[X] =
p
, var [X] = 2 ' and m xC t) =[ p
1 - qe
t] "', (15)
P
PROOF
x.:O
f (- r)pr( -qe'Y = [ 1 - pqer]
X
r
and
hence
rq
8[X] = m~(t)
r=O p
and
The negative binomial distribution, like the Poisson, has the nonnegative
integers for its mass points; hence, the negative binomial distribution is poten-
tially a model for a random experiment where a count of some sort is of interest,
[ndeed, the negative binomial distribution has been applied in population counts,
in health and accident statistics, in communications, and in other counts as
welL Unlike the Poisson distribution, where the mean and variance are the
same, the variance of the negative binomial distribution is greater than its mean.
We will see in Subsec. 4,3 of this chapter that the negative binomial distribution
can be obtained as a contagious distribution from the Poisson distribution.
2 DISCRETE DISTRIBUTIONS 103
(x+r-l)
r-l p
r-I x~
q -
(r+x-l)
x p
r-Iqx
,
z o I 2
e -A I - e -A - Ae
,-..1.
fez)
The counter counts correctly values 0 and 1 of the random variable X; but if X
takes on any value 2 or more, the counter counts 2. Such a random variable
is often referred to as a censored random variable.
The above two illustrations indicate how other families of discrete densities
can be fonnulated from existing families. We close this section by giving two
further, not so wellwknown, families of discrete densities.
(17)
binomial distribution.
rem) is the well-known gamma function rem) = Io
xm - I e- X dx for
m > O. See Appendix A. The beta-binomial distribution has
for x = 1, 2, ... x
q. I ( )
!(x;p) = - 1 {1,2, ... } x, (19)
-x oge P
o otherwise
where the parameters satisfy 0 < P < 1 and q = 1 - p is defined as the
logarithmic distribution. IIII
The name is justified if one recalls the power-series expansion of loge (1 ~. q).
The logarithmic distribution has .----
. q . q(q + loge p)
M ean =---- and varIance = ( )2 . (20)
-p loge P - plogeP
3 CONTINUOUS DISTRIBUTIONS
In this section several parametric families of univariate probability density
functions are presented. Sketches of some are incl uded; the mean and variance
(when they exist) of each are given.
1
/x(x) = /x(x; a, b) = l[a bJ(X), (21)
b-a '
106 SPECIAL PARAMETRIC FAMILIES OF UNIVARIATE DISTRIBUTIONS III
I
b-a • •
---+-----a b - - - - - -........ X
FIGURE 8
Uniform probability density.
where the parameters a and b satisfy - 00 < a < b < 00, then the random
variable X is defined to be uniformly distributed over the interval [a, b],
and the distribution given by Eq. (21) is called a uniform distribution.
IIII
PROOF
b 1 b2 - a 2 a +b
8[X] = {x b _ a dx = 2( b - a) = 2 .
IIII
The uniform distribution gets its name from the fact that its density is
uniform, or constant, over the interval [a, b]. It is also called the rectangular
distribution-the shape of the density is rectangular.
The cumulative distribution function of a uniform random variable is
given by
(23)
3 CONTINUOUS DISTRIBUTIONS 107
EXAMPLE 12 If a wheel is spun and then allowed to come to rest, the point
on the circumference of the wheel that is located opposite a certain fixed
marker could be considered the value of a random variable X that is
uniformly distributed over the circumference of the wheel. One could
then compute the probability that X will fall in any given arc. 11II
FIGURE 9
Norma] densities.
" '
One can readily check that the mode of a normal density occurs at x = Jl
and inflection points occur at Jl - a and Jl + a. (See Fig. 9.) Since the normal
distribution occurs so frequently in later chapters, special notation is introduced
for it. If random variable X is norma]]y distributed with mean J1 and variance
a 2, we wi]] write X,.... N(J1, ( 2). We will also use the notation </J/1. a2(x) for the
density of X,.... N(Jl, ( 2) and <1>/1. a2(x) for the cumulative distribution function.
If the normal random variable has mean 0 and variance 1, it is called a
standard or normalized normal random variable. For a standard normal ran-
dom variable the subscripts of the density and distribution function notations
are dropped; that is,
</J(x),= J~-
2n
e-
tx2 and <1>(x) = IX
-00
</J(u) duo (25)
I_
00
00 </J /1 , a 2 (x) dx = 1,
but we should satisfy ourselves that this is true. The verification is somewhat
troublesome because the indefinite integral of this particular density function
does not have a simple functional expression. Suppose that we represent the
area under the curve by A; then
A= 1 foo e-(x-Il)2j2a2d x,
J2na -00
and on making the substitution y = (x - Jl)/a, we find that
1
A=J~ e
I oo
-ty2 d
y.
2n - 00
3 CONTINUOUS DISTRIBUTIONS 109
FIGURE 10
Norma] cumulative distribution
function.
We wish to show that A = 1, and this is most easily done by showing that A2 is
1 and then reasoning that A = 1 since <P 1l ,a2 (x) is positive. We may put
2
A=J.~
1 foo e
-ty2 d
Y 1-2n foo e -tz 2 d
z
2n -00
J -00
=1.
PROOF
mx(t) = C[e tX ] = llltS'[l(X-Il)]
oo 1
= e tll
f
-00
-= l(x- ll )e-(1/2a2)(x-Il)2 dx
J2n
= etll 1 foo e-O/2(2)[(x-IlP-2a2t(x-ll)]dx.
J2n -00
and we have
The integra] together with the factor I IJ21tU is necessarily I since it is the
area under a normal distribution with mean Jl + u 2 t and variance a 2 •
Hence,
2 2
mx(t) = eJlt+a t /2,
8[X] = m~(O) = jJ
and
var [X] = 8[X2] - (8[X])2 = mi.-(O) - Jl2 a 2,
thus justifying our use of the symbols Jl and u2 for the parameters. IIII
Since the indefinite integral of 4J Jl • a2(x) does not have a simple functional
form, one can only exhibit the cumulative distribution function as
f- oo 4Jp.,a2(u) duo
x
<I>Jl,a2(x) = (27)
The folIowing theorem shows that we can find the probabiHty that a normally
distributed random variable, with mean Jl and variance a 2 , falls in any interval
in terms of the standard norma] cumulative distribution function, and this
standard normal cumulative distribution function is tabled in Table 2 of
Appendix O.
PROOF
b 1
P[a < X < bJ = f e- H (x- Jl )/a]2 dx
a J21tU
1
f (b-Jl)/a -tz 2 d
= (a-Jl)/a J21t e z
3 CONTINUOUS DISTRIBUTIONS 111
P[9.9 < X< 10.2] = <\l Co.~ 1- 10) - <\l e·9.~ 10)
= <1>(2) - <1>(-1) ~ .9772 - .1587 = .8185. IIII
3.3 Exponential and Gamma Distributions
Two other families of distributions that play important roles in statistics are the
(negative) exponential and gamma distributions, which are defined in this sub-
section. The reason that the two are considered together is twofold; first, the
112 SPECIAL PARAMETRIC FAMILIES OF UNIVARIATE DISTRIBUTIONS III
exponential is a special case of the gamma, and, second, the sum of independent
identically distributed exponential random variables is gamma-distributed, as
we shall see in Chap. V.
°
where r > and A > 0, then X is defined to have a gamma distribution.
r(·) is the gamma function and it is discussed in Appendix A. IIII
1 1 A
4[X] =-, var [Xl = A2 ' and mx(t) = - - for
A A- t
(31)
1.0
o~~~~~~~~~~~~7~~8~X
1 2
FIGURE 11
Gamma densities (A I),
PROOF
= (A-
A )r Joo(A - ty x r-I e -(A.-t)x d x-
_ (_A . )r
- t 0 r(r) A- t
m~(t)= rArO. - t)-r-I
and
4'[X] = m~(O) = i
and
P[X> a + b] e-).(a+b)
PROOF P[X> a + blX> a] = P[X> a] = e-).a
where a > °and b > 0, then X is defined to have a beta distribution. IIII
The function B(a, b) = xa-1(1 - n X)b-l dx, caned the beta junction, is
mentioned briefly in Appendix A.
it is often called the incomplete beta and has been extensively tabulated.
IIII
2.0
I.S
I a = 1
~~~~ __+-______~~~~lb=l
.5
Beta densities. .4 .6 .8
116 SPECIAL PARAMETRIC FAMILIES OF UNIVARIATE DISTRIBUTIONS III
The moment generating function for the beta distribution does not have a
simple form; however the moments are readily found by using their definition.
~[Xk] = 1
B(a, b)
II 0
y!+a-l(1 - X)b-l dx
and
2 2 r(a + 2)r(a + b) ( a ) 2
var [X] = ~[X ] - (~[X]) = r(a)r(a + b + 2) - a + b
(a + 1)a
a )2 ab (
=(a+b+l)(a+b)- a+b =(a+b+l)(a+b)2· IIII
The family of beta densities is a two-parameter family of densities that
is positive on the interval (0, 1) and can assume quite a variety of different
shapes, and, consequently, the beta distribution can be used to model an experi-
ment for which one of the shapes is appropriate.
F (x) - -
1 IX du
x - n -00 n{J{l + [(u - a)/{J]2}
(37)
1 1 X-ct
= - + - arc tan - -
2 n {J
f .
(x, 11,
2 _ J1 exp [1
(1 ) - - 2 2 (Ioge x - 11) 2] [(0. OO)(x), (38)
x 2n(1 (1
where a > 0 and b > 0, is ca]]ed the Weibull density, a distribution that has been
successfully used in reliability theory. For b = 1, the Weibu11 density reduces
to the exponential density. 1t has mean (1/a)llhr(l + b- 1 ) and variance
(l/a)2/h[r(1 + 2b- 1 ) - r2(l + b- 1 )).
where - 00 < r:x < 00 and fJ > O. The mean of the logistic distribution is given
by r:x. The variance is given by fJ 2 n 2 /3. Note that F( r:x - d; r:x, fJ) =
I - F(r:x + d; r:x, fJ), and so the density of the logistic is symmetrical about cx.
This distribution has been used to model tolerance levels in bioassay problems.
l44)
where () > 0 and Xo > O. The mean and variance respectively of the Pareto
distribution are given by
where - 00 < r:x < 00 and fJ> 0 is ca]]ed the Gumbel distribution. It appears
as a limiting distribu tion in the theory of extreme-value statistics.
then
1 dfxC x) r - J x - (r - 1)/A
-- = -A + = -----
fx(x) dx x -x/).
for x > 0; so the gamma distribution is a member of the Pearsonian system with
a = -(r - 1)/)" b I = -1/)', and bo = b 2 = O.
4 COMMENTS
We conclude this chapter by making severa) comments that tie together some
of the density functions defined in Secs. 2 and 3 of this chapter.
4.1 . Approximations
Although many approximations of one distribution by another exist, we wi]]
give only three here. Others wiJ] be given along with the central-limit theorem
in Chaps. V and VI.
for x 0, 1, ... , n.
(47)
for fixed integer x. The above fo11ows immediately from the fo]]owing con-
sideration:
n)pX(1 _ p)n-x
( x.
= (n}x (~)X(l _~)n-x
xI n n
120 SPECIAL PARAMETRIC FAMILIES OF UNIVARIATE DISTRIBUTIONS III
since
A) -x
(1-;; -+1, and as n -+ 00.
d - np ) <I> (C - np )
<I> ( Jnpq - Jnpq
for large n, and, so, an approximate value for the probability that a binomial
random variable faIJs in an interval can be obtained from the standard norma]
distribution. Note that the binomial distribution is discrete and theapproximat-
ing norma] distribution is continuous.
EXAMPLE 15 Suppose that two fair dice are tossed 600 times. Let X
denote the number of times a total of 7 occurs. Then X has a binomial
distribution with parameters n = 600 and p = ~. 8[X] = 100. Find
P[90 < X < 110].
P[90 x if (600)
-< 11 0] = j=90 j
(~)j(~)600-
6 6 '
j
that is, X has an exponential distribution. On the other hand, it can be proved,
under an independence assumption, that if the happenings are occurring in
time in such a way that the distribution of the lengths of time between successive
happening~ is exponential, then the distribution of the number of happenings
in a fixed time interval is Poisson distributed. Thus the exponentia1 and Poisson
distribu tions are re1ated.
=
A'
I (X)
O,+x-1 e -O.+ 1)0 d8
x!r(r) °
= A' . r(r + x) I(X) [(A + 1)0j'+x-1 e -().+ 1)8 d[(A + 1)8]
x!r(r) (A + 1Y+x ° r(r + x)
A )' r(r + x) 1
(
- A + 1 (x!)r(r) (A + It
= (r +xx- 1) ().+1
A )'( 1 )
A+1
x for x = 0, 1, ... ,
it is known that the values that the random variable can assume are between 0
and 1. A truncated norma] or gamma distribution would also provide a useful
model for such an experiment. A normal distribution that is truncated at 0
on the left and at 1 on the right is defined in density form as
<Pp.,a2(x)l(o,l)(x)
ji()
x = ji( x; /1, (1 ) = (52)
ct> p.,a2( 1) - ct>ll,a2(O)
This truncated normal distribution, like the beta distribution, assumes values
between 0 and 1.
Truncation can be defined in general. If X is a random variable with
density Ix(-) and cumulative distribution Fx('), then the density of X truncated
on the left at a and on the right at b is given by
Ix(x)l(a,b)(X)
(53)
Fx(b) - Fx(a) .
PROBLEMS
1 (a) Let X be a random variable having a binomial distribution with parameters
n 25 and p = .2. Evaluate P[X < ftx 2ax].
(b) If X is a random variable with Poisson distribution satisfying P[X 0] =
P[X = 1], what is G[X]?
(c) If X is uniformly distributed over (1, 2), find z such that P[X > z·+ ftx] L
(d) If X is normally distributed with mean 2 and variance 1, find P[I X 21 < 1].
(e) Suppose X is binomial1y distributed with parameters nand p; further sup-
pose that G[X] = 5 and var [X] = 4. Find nand p.
(/) If G[X] = 10 and ax = 3, can X have a negative binomial distribution?
(g) If Xhas a negative exponential distribution with mean 2, find P[X < 11 X < 2].
(h) Name three distributions fOr which P[X < ftx] = ~..
(0 Let X be a random variable having binomial distribution with parameters
n = 100 andp =.1. Evaluate P[X < ftx - 3ax].
(j) If X has a Poisson distribution and P[X = 0] = i, what is G[X]?
(k) Suppose X has a binomial distribution with parameters n and p. For what
p is var [X] maximized if we assumed n is fixed?
(/) Suppose X has a negative exponential distribution with parameter A. If
P[X 1] = P[X > 1], what is var [X]?
(m) Suppose X is a continuous random variable with uniform distribution
having mean 1 and variance t. What is p[X < OJ?
(n) If X has a beta distribution, can G[lt X] be unity?
(0) Can X ever have the same distribution as - X? If so, when?
l
(p) If X is a random variable having moment generating function exp (e 1), -
what is 8[X]?
2 (a) Find the mode of the beta distribution.
(b) Find the mode of the gamma distribution.
PROBLEMS 125
pendent. Let X denote his position after n steps. Find the distribution of
(X + n)/2, and then find 8[X].
*(d) Let Xl (X 2) have a binomial distribution with parameters nand Pl (n and P2)'
If Pl <P2, show that P[Xl < k] > P[X2 < k] for k = 0, I, ... , n. (This
result says that the smaller the p, the more the binomial distribution is shifted
to the left.)
9 In a town with 5000 adu1ts, a sample of 100 is asked their opinion of a proposed
municipal project; 60 are found to favor it, and 40 oppose it. If, in fact, the
adults of the town were equal1y divided on the proposal, what would be the prob-
ability of obtaining a majority of 60 or more favoring it in a sample of 100?
10 A distributor of bean seeds determines from extensive tests that 5 percent of a large
batch of seeds will not germinate. He sells the seeds in packages of 200 and
gUarantees 90 percent germination. What is the probability that a given package
wi1l violate the guar3;ntee?
*11 (a) A manufacturing process is intended to produce electrical fuses with no
mOre than 1 percent defective. It is checked every hour by trying 10 fuses
selected at random from the hour's production. If 1 or more of the 10
fail, the process is halted and carefully examined. If, in fact, its prob-
ability of producing a defective fuse is .01, what is the probability that the
process will needlessly be examined in a given instance?
(b) Referring to part (a), how many fuses (instead of 10) should be tested if the
manufacturer desires that the probability be about .95 that the process wi11
be examined when it is producing 10 percent defectives?
12 An insurance company finds that .005 percent of the population die from a certain
kind of accident each year. What is the probability that the company must pay
off On more than 3 of 10,000 insured risks against such accidents in a given
year?
13 (a) If X has a Poisson distribution with P[X = 1] = P[X = 2]. what is
P[X = 1 or 2]?
(b) If X has a Poisson distribution with mean 1, show that 8[1 X-II] = 2ax/e.
*14 Recall Theorems 4 and 8. Formulate, and then prove or disprove a similar
theorem for the negative binomial distribution.
*15 Let X be normal1y distributed with mean ft and variance a 2 • Truncate the density
of X on the left at a and On the right at b, and then calculate the mean of the trun-
cated distribution. (Note that the mean of the truncated distribution should fall
between a and b. Furthermore, if a = ft - c and b = ft + c, then the mean of the
truncated distribution should equal ft.)
*16 Show that the hypergeometric distribution can be approximated by the binomial
distribution for large M and K; Le., show that
PROBLEMS 127
17 Let X be the life in hours of a radio tube. Assume that X is normally distributed
with mean 200 and variance a 2 • If a purchaser of such radio tubes requires that
at least 90 percent of the tubes have lives exceeding 150 hours, what is the largest
value a can be and still have the purchaser satisfied?
18 Assume that the number of fatal car accidents in a certain state obeys a Poisson
distribution with an average of one per day.
(a) What is the probability of more than ten such accidents in a week?
(b) What is the probability that more than 2 days will lapse between two such
accidents?
19 The distribution given by
P[X > k] = L ~
n
j=k
(
J
)
pjqn- j = 1
B(k, n - k + 1)
f
0
p
1I"-I( 1 - 1I)"-~ dll
xz
=
8 4 •
• • •
13
"2
- 3 • • • •
~
• • • •
=2
"0
0
u
~ 1 • • • •
FIGURE 1 I I I .. Xl
Sample space for experiment of tossing 1 2 3 4
two tetrahedra. First tetrahedron
4<y 0 h .lL
16 -» 1
3<y<4 0 1\ Ch) -h -h
'-
2<y<3 0 -h N
4
n N
4
1<y<2 0 n -h -h ...l..
16
y<l 0 0 0 0 0
.... 1 1 x<2 2<x<3 3<x<4 4 x
FIGURE 2
132 JOINT AND CONDITIONAL DISTRIBUTIONS, STOCHASTIC INDEPENDENCE IV
(ii) If Xl < and Yl < Y2, then P[Xl < X < X2; Yl < Y < Y2]
X2
= F(X2' Y2) - F(X2' Yl) - F(Xl' Y2) + F(x l , Yl) > O.
(iii) F(x, y) is right continuous in each argument; that is,
lim F(x + h, y) = Jim F(x, Y + h) = F(x, y).
o <h-+O 0 <h-+O
TABLE OF G(x, y)
1 <y 0 x 1
O<y<l 0 0 y
y<O 0 0 0
FIGURE 3
2 JOINT DISTRIBUTION FUNCTIONS 133
Remark Fx(x) = Fx. y(x, (0), and Fy(y) = Fx. y(oo, y); that is, knowl~
edge of the joint cumulative distribution function of X and Y implies
knowledge of the two marginal cumulative distribution functions. fill
The converse of the above remark is not general1y true; in fact, an example
(Example 8) will be given in Subsec. 2.3 below that gives an entire family of
joint cumulative distribution functions, and each member of the family has the
same marginal distributions.
We wi11 conclude this section with a remark that gives an inequality
inv01ving the joint cumu1ative distribution and marginal distributions. The
proof is left as an exercise.
for (Xl' X2' ... , Xk), a value of (Xl' X 2 , "" X k ) and is defined to be 0
otherwise. fill
Remark I/x I, •••• xk(x l , " ' , Xk) = 1, where the summation IS over all
possib1e va1ues of (Xl' ... , X k ). fill
134 JOINT AND CONDmONAL DISTRIBUTIONS, STOCHASTIC INDEPENDENCE IV
Ix. y(x, y)
FIGURE 4 x
EXAMPLE 2 Let X denote the number on the downturned face of the first
tetrahedron and Ythelargerofthe downturned numbers in the experiment
of tossing two tetrahedra. The values that (X, Y) can take on are (I, I),
(I , 2), (I, 3), (] , 4), (2, 2), (2, 3), (2, 4), (3, 3), (3, 4), and (4, 4); hence X and
Yare jointly discrete. The joint discrete density function of X and Y
is given in Fig. 4.
In tabular form it is given as
(x, y) (l, ]) (1, 2) (I, 3) (J,4) (2, 2) (2,3) (2,4) (3, 3) (3,4) (4,4)
1 1 1 1 2 1 1 3 1 4
j ~. y(x,y) 16 16" 16 16 16 16 T6 T6" T6 16"
1 1 3
3 16 T6 16
2 1 2
T6 16
] 1
T6
y/x 1 2 3 4 II/I
PROOF Let (Xl' Yl)' (X2' Y2)' '" be the possible values of (X, Y).
If lx, y(', .) is given, then F x , y(x, y) = I/x, y(Xj, Yi)' where the summa~
tion is over an i for which Xl ::;; X and Yi < y. Conversely, if Fx. y(., .) is
given, then for (Xi, Yi), a possible value of (X, Y),
Ix,y(X;, Yi) = FX,y(Xb Yi) - limoFx,y(Xi - h, Y{)
O<h-+
- Jim FX.y(Xb Yi - h)
o <h-+O
+ Jim Fx,y(xj-h,Yi- h ). 1111
O<h-+O
Remark If Xl' "', X k are jointly" discrete random variables, then any
marginal discrete density can be found from the joint density, but not
conversely. F or example, if X and Yare jointly discrete with values
(Xl' YI)' (X2, Y2), ... , then
where the summation is over all Yj for the fixed Xk. The marginal density of Y
is analogously obtained. The fo]]owing example may help to c1arify these two
different methods of indexing the values of (X, Y).
ly(3) = I Ix, y(Xj, Yi) = lx, y(1, 3) + Ix, y(2, 3) + lx, y(3, 3)
{i:y£=3}
= l6 + rt +
3 5
16 = 1 6'
Simi1arly Iy(l) = -n" fy(2) = -hi, and ly(4) = 176' which together with
ly(3) = 156 give the marginal discrete density function of Y. 1111
3 /6 - e rt+e 3
1"6
1 2
2 TI 16
1
1 16
Y/x I 2 3 4
2 JOINT DISTRIBUTION FUNCTIONS 137
For each 0 < e < /6' the above table defines a joint density. Note that
the marginal densities are independent of e, and hence each of the joint
densities (there is a different joint density for each 0 < e < -16) has the
same marginals. IIII
the binomial case. Suppose that we repeat the trial n times. Let Xi
denote the number of times outcome .J i occurs in the n trials,
i = I, "', k + I. Jf the trials are repeated and independent, then the
discrete density function of the random variables Xl' ... , X k is
k+ I k
wherexj=O, ... ,nand I Xj=n. NotethatXk+1=n- I Xj'
i= I i I
To justify Eq. (I), note that the left-hand side is P[X1 = XI; X 2 = X2;
... ; X k + I Xk+ d; so, we want the probability that the n trials result in
exactly Xl outcomes ·JI' exactly X2 outcomes "2, •..• exactly Xk+ I outcomes
k+1
,Jk+l' where II Xi = n. Any specific ordering of these n outcomes has
probability p~l . p~2 ... Pk~\l by the assumption of independent trials.
and there are n!/xt! x 2 ! ... Xk+l! such orderings. IIII
.20
FIGURE 5 Xl
3'
!Xl,X2(Xl' X2) =f(x l , X2) = , '(3 ~ _ )' (.2Yl(.3YZ(.5)3
Xl ,X2 • Xl X2'
FXt. .... Xk(X l ' ""Xk)= J~koo '" J:loofxh ... ,Xk(Ut"",Uk)dul •.• duk (2)
for all (Xl' ... , Xk)' fXI ..... Xk(·' ... , .) is defined to be a joint probability
density function. IIII
2 JOINT DISTRIBUTION FUNCTIONS 139
5_
00
00
f-
00
oo
Kf(x, y) dx dy = fo
1
t 1
K(x + y) dx dy
1 1
=K I I (x + y) dx dy
° °
I (t + y) dy
1
= K
°
=K(t+-D
=1
140 JOINT AND CONDITIONAL DlSTRmUTlONS, STOCHASTIC INDEPENDENCE IV
f(x, y)
(1, 1, 2)
(1,0, 1)
(0, 1, 1)
--~--~~------~--x
/ 2
I //
I /
_____ ....Y____
/ _
FIGURE 6 y
-2-
- 64'
which is the volume under the surface z = x + y over the region {(x, y):
0< x < t; 0 < y<!} in the xy plane. ///1
PROOF . .
For a given /x y(., .), Fx y(x, y) is obtained for any
(x, y) by
2 JOINT DlsTRmUTION FUNCTIONS 141
iJ2 Fx y(x, y)
fx, y(x, y) = iJ~ iJy
smce
dF x(x) d [X (fOO ) ] 00
Ix(x) = dx = dx f_ 00 _ oofx, y(u, y) dy du = f _ oofx, y(x, y) dy.
I1I1
EXAMPLE 7 Consider the joint probability density
fx, y(x, y) = (x + y)l(o, l)(x)l(o, 1)(Y)'
f f (u + v) du dv
y x
Fx,Y(x, y) = l(o,l)(x)l(o,l)(Y)
°° 1 x
+ 1(0, l)(x)l[l, OO)(y) fo fo (u + v) du dv
y 1
+ 1[1, oo)(x)I(o, l)(Y) fo fo (u + v) du dv
fx(x) = I 00
-00
fx, y(X, y) dy
= 1(0, l)(X) f (X + y) dy
0
1
= (X + !)/(o, l)(X);
or,
aFx,y(x, oo)
fx(x) =
ax
aF x(X)
ax
a
= l(o,ll x ) ax (+x)
2
EXAMPLE 8 Let /x(x) and /y(y) be two probability density functions with
corresponding cumulative distribution functions Fx(x) and Fy(y), respec-
tively. For - I < ex < I, define
We will show (i) that for each ex satisfying -I < ex < I, fx, y(x, y; ex) is a
joint probability density function and (ii) that the marginals of/x, y(x, y; ex)
are/x(x) and/y(y), respectively. Thus, {Ix, y(x, y; ex): -I < ex < I} will be
an infinite family of joint probability density functions, each having the
same two given marginals. To verify (i) we must show that/x , y(x, y; ex)
is nonnegative and, if integrated over the xy plane, integrates to I.
but ex, 2Fx(x) - I, and 2Fy(y) - I are all between -I and I, and hence
also their product, which implies/x. y(x, y; ex) is nonnegative. Since
it suffices to show that/x(x) and/y(y) are the marginals of/x, y(x, y; ex).
I-00
00
00 00
= fo (2u -
1
1) du = 0
f (Ix)_/x,y(x,y) (5)
YIX y - /x(x) ,
Since X and Yare discrete, they have mass points, say Xl' X2, ... for X
and YI, Y2'··· for Y. If Ix(x) > 0, then X = Xi for some i, and IX(Xi)
= P[X = xJ The numerator of the right-hand side of Eq. (5) is lx, y(Xi' J'j)
= P[X = Xi; Y = Yj]; so
for Yl a mass point of Yand Xi a mass point of X; hence !Ylx(" Ix) is a condi-
tional probability as defined in Subsec. 3.6 of Chap. l. !Ylx(· Ix) is called a
conditional discrete density function and hence should possess the properties
of a discrete density function. To see that it does, consider X as some fixed
mass point of X. Then IYlx(yl x) is a function with argument Y; and to be a
discrete density function must be nonnegative and, if summed over the possible
values (mass points) of Y, must sum to 1. IYlx(yl x) is nonnegative since
!x, y(x, y) is nonnegative and Ix(x) is positive.
where the summation is over all the mass points of Y. (We used the fact that
the marginal discrete density of X is obtained by summing the joint density of
X and Y over the possible values of Y.) So !Ylx(" Ix) is indeed a density; it
tells us how the values of Yare distributed for a given value x of X.
The conditional cumulative distribution of Y given X = x can be defined
for two jointly discrete random variables by recalling the close relationship
between discrete density functions and cumulative distribution functions.
Ix y(2, 2) -(6 1
ly/x(21 2) = ix(2) = T\ = 2
Ix y(2, 3) rt 1
IYlx(312) = ix(2) = T~ =4
Ix y(2,4) l6 1
ly/x(41 2) = ix(2) = n = 4'
Also,
for y = 3
for y = 4. IIII
r (I
JXl,X2IXJ,XsXbX2X3,XS-
)- lXI, X2, XJ. xs(XI, X2, Xl' Xs)
I" ( ) •
J XJ. Xs X 3 , Xs IIII
(12 - x~~x,)
where Xi = 0, 1, .. " 4 and X2 + X 4 < 12 - Xl - X3' IIII
f i x . y(Xo, y) dy = Ix(xo)·
-00
Fy1x(ylx) = f~oo/Ylx(zlx) dz
yx +z 1 fY
=
fox+t dz =
x+t 0
(x + z) dz
I
--1 (xy + y2/2) for 0 < y < I. IIII
x+'!
P[A I X = 1= P[A; X = xl
x P[X = xl '
which is well defined; on the other hand, if x is not a mass point of X, we are
not interested in P[A I X = xl. Now if X is continuous, P[A I X = xl cannot be
analogously defined since P[X = xl = 0; however, if x is such that the events
{x - h < X < x + h} have positive probabllity for every h > 0, then P[A I X = x]
could be defined as
provided that the limit exists. We will take Eq. (9) as our definition of
P[A I X = xl if the indicated limit exists, and leave P[A I X = xl undefined other-
wise. (It is, in fact, possible to give P[A I X = xl meaning even'if P[X = x] = 0,
and such is done in advanced probability theory.)
We will seldom be interested in P[A I X = x] per se, but will be interested
in using it to calculate certain probabilities. We note the following formulas:
00
00
(ii) P[A] = r
·-00
P[A I X = x]fx(x) dx (11)
if X is continuous.
if X is continuous.
Ai' .;gh we will nQt prove the above formulas, we note that Eq. (10) is
just tl:-:' iheorem of total probabilities given in Subsec. 3.6 of Chap. I and the
other& are generalizations of the same. Some problems are of such a nature
that it is easy to find P[A I X = x] and difficult to find P[A]. If, however, /x( .)
is known, then PtA] can be easily obtained using the appropriate one of the
above formulas.
Remark Fx. y(x, y) = S: ooFy1x(yl x')fx(x') dx' results from Eq. (13) by
taking A = {Y < y} and B = (- 00, x]; and Fy(y) = J~ooFYlx(yl x)/x(x) dx
is obtained from Eq. (II) by taking A = {Y < y}. IIII
3.4 .Independence
When we defined the conditional probability of two events in Chap. I, we also
defined independence of events. We have now defined the conditional distri-
bution of random variables; so we should define independence of random
variables as well.
forallxt"",xk' IIII
It can be proved that if XI. ... , X k are jointly continuous random variables,
then Definitions IS and 17 are equivalent. Similarly, for jointly discrete
random variables, Definitions 15 and 16 are equivalent. It can also be proved.
that Eq. (15) is equivalent to P[XI E B l ; ... ; X k E Bd = n P[X,
k
i= 1
E B i ] for sets
Bl , ••• , The following important result is easily derived using the above
Bk •
= n" P[Yj
j=l
E Bj ]. IIII
152 JelNT AND CONDITIONAL DISTRIBUTIONS, STOCHASTIC INDEPENDENCE IV
For k = 2, the above theorem states that if two random variables, say
X and Y, are independent, then a function of X is independent of a function of
Y. Such a result is certainly intuitively plausible.
We will return to independence of random variables in S ubsec. 4.5.
Equation (14) of the previous subsection states that P[h(X, Y) < zl X = x]
= P[h(x, y) < z I X = x]. Now if X and Y are assumed to be independent,
then P[h(x, y) < zl X = x] = P[h(x, y) < z], which is a probability that may
be easy to calculate for certain problems.
= f99
101
P[x - h < Y < x]tdx.
f f
100.5 101
+ t(t) dx + (t)(100.5 - x + l)t dx = 7
1 6.
99.5 100.5 •
IIII
4 EXPECTATION 153
4 EXPECT A TION
When we introduced the concept of expectation for univariate random variables
in Sec. 4 of Chap. II, we first defined the mean and variance as particular expec-
tations and then defined the expectation of a general function of a random vari-
able. Here, we will commence, in Subsec. 4.1, with the definition of the
expectation of a general function of a k-dimensional random variable. The
definition will be given for only those k-dimensional random variables which
ha ve densities.
4.1 Definition
In order for the above to be defined, it is understood that the sum and multiple integral, respectively, exist.
$$= \int_{-\infty}^{\infty} x_1 f_{X_1}(x_1)\,dx_1 = \mathscr{E}[X_1],$$
using the fact that the marginal density $f_{X_i}(x_i)$ is obtained from the joint density by
$$f_{X_i}(x_i) = \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} f_{X_1,\ldots,X_k}(x_1, \ldots, x_k)\,dx_1\cdots dx_{i-1}\,dx_{i+1}\cdots dx_k. \quad ////$$
We might note that the "expectation" in the notation $\mathscr{E}[X_i]$ of Eq. (20) has two different interpretations: one is that the expectation is taken over the joint distribution of $X_1, \ldots, X_k$, and the other is that the expectation is taken over the marginal distribution of $X_i$. What Theorem 4 really says is that these two expectations are equivalent, and hence we are justified in using the same notation for both.
$$\mathscr{E}[XY] = \int_0^1\int_0^1 xy(x+y)\,dx\,dy = \tfrac13,$$
$$\mathscr{E}[X+Y] = \int_0^1\int_0^1 (x+y)(x+y)\,dx\,dy = \tfrac76,$$
and $\mathscr{E}[X] = \mathscr{E}[Y] = \tfrac{7}{12}$. ////
The following remark, the proof of which is left to the reader, displays a
property of joint expectation. It is a generalization of (ii) in Theorem 3 of
Chap. II.
$$\rho_{X,Y} = \frac{\operatorname{cov}[X, Y]}{\sigma_X\,\sigma_Y} \qquad (22)$$
provided that $\operatorname{cov}[X, Y]$, $\sigma_X$, and $\sigma_Y$ exist, and $\sigma_X > 0$ and $\sigma_Y > 0$. ////
EXAMPLE 21  Find $\rho_{X,Y}$ for X, the number on the first, and Y, the larger of the two numbers, in the experiment of tossing two tetrahedra. We would expect that $\rho_{X,Y}$ is positive since when X is large, Y tends to be large too. We calculated $\mathscr{E}[XY]$, $\mathscr{E}[X]$, and $\mathscr{E}[Y]$ in Example 18 and obtained $\mathscr{E}[XY] = \tfrac{135}{16}$, $\mathscr{E}[X] = \tfrac52$, and $\mathscr{E}[Y] = \tfrac{25}{8}$. Thus $\operatorname{cov}[X, Y] = \tfrac{135}{16} - \tfrac52\cdot\tfrac{25}{8} = \tfrac58$. Now $\mathscr{E}[X^2] = \tfrac{30}{4}$ and $\mathscr{E}[Y^2] = \tfrac{170}{16}$; hence $\operatorname{var}[X] = \tfrac54$ and $\operatorname{var}[Y] = \tfrac{55}{64}$. So
$$\rho_{X,Y} = \frac{\tfrac58}{\sqrt{\tfrac54\cdot\tfrac{55}{64}}} = \frac{2}{\sqrt{11}}. \quad ////$$
EXAMPLE 22  Find $\rho_{X,Y}$ for X and Y if $f_{X,Y}(x, y) = (x + y)I_{(0,1)}(x)I_{(0,1)}(y)$. We saw that $\mathscr{E}[XY] = \tfrac13$ and $\mathscr{E}[X] = \mathscr{E}[Y] = \tfrac{7}{12}$ in Example 19. Now $\mathscr{E}[X^2] = \mathscr{E}[Y^2] = \tfrac{5}{12}$; hence $\operatorname{var}[X] = \operatorname{var}[Y] = \tfrac{11}{144}$. Finally
$$\rho_{X,Y} = \frac{\tfrac13 - \tfrac{49}{144}}{\tfrac{11}{144}} = -\frac{1}{11}.$$
Does a negative correlation coefficient seem right? ////
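The correlation coefficient of Example 22 can also be checked numerically. The sketch below is illustrative only (not from the text) and assumes SciPy is available; it integrates the stated density over the unit square.

```python
# Illustrative sketch: checking rho_{X,Y} = -1/11 for f(x, y) = x + y on the
# unit square by numerical double integration.
from scipy import integrate

def density(x, y):
    return x + y                                  # joint density on (0,1) x (0,1)

def moment(g):
    # E[g(X, Y)]; dblquad integrates func(y, x) with y as the inner variable.
    val, _ = integrate.dblquad(lambda y, x: g(x, y) * density(x, y),
                               0, 1, lambda x: 0, lambda x: 1)
    return val

EX, EY = moment(lambda x, y: x), moment(lambda x, y: y)
EXY = moment(lambda x, y: x * y)
varX = moment(lambda x, y: x * x) - EX**2
varY = moment(lambda x, y: y * y) - EY**2
rho = (EXY - EX * EY) / (varX**0.5 * varY**0.5)
print(rho)                                        # about -0.0909, i.e., -1/11
```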
if (X, Y) are jointly discrete, where the summation is over all possible values of y.
In particular, if $g(x, y) = y$, we have defined $\mathscr{E}[Y \mid X = x] = \mathscr{E}[Y \mid x]$. $\mathscr{E}[Y \mid x]$ and $\mathscr{E}[g(X, Y) \mid x]$ are functions of x. Note that this definition can be generalized to more than two dimensions. For example, let $(X_1, \ldots, X_k, Y_1, \ldots, Y_m)$ be a (k + m)-dimensional continuous random variable with density $f_{X_1,\ldots,X_k,Y_1,\ldots,Y_m}(x_1, \ldots, x_k, y_1, \ldots, y_m)$; then
$$\mathscr{E}[g(X_1, \ldots, X_k, Y_1, \ldots, Y_m) \mid x_1, \ldots, x_k] = \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} g(x_1, \ldots, x_k, y_1, \ldots, y_m)\, f_{Y_1,\ldots,Y_m\mid X_1,\ldots,X_k}(y_1, \ldots, y_m \mid x_1, \ldots, x_k)\,dy_1\cdots dy_m.$$
$$f_{Y\mid X}(y \mid 2) = \begin{cases} \tfrac12 & \text{for } y = 2 \\ \tfrac14 & \text{for } y = 3 \\ \tfrac14 & \text{for } y = 4 \end{cases}$$
in Example 9. Hence $\mathscr{E}[Y \mid X = 2] = \sum y\, f_{Y\mid X}(y \mid X = 2) = 2\cdot\tfrac12 + 3\cdot\tfrac14 + 4\cdot\tfrac14 = \tfrac{11}{4}$. ////
$$\mathscr{E}[Y \mid X = x] = \int_0^1 y\,\frac{x+y}{x+\tfrac12}\,dy = \frac{1}{x+\tfrac12}\left(\frac{x}{2} + \frac13\right) \qquad \text{for } 0 < x < 1. \quad ////$$
$$= \int_{-\infty}^{\infty} \mathscr{E}[g(Y) \mid x]\, f_X(x)\,dx = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(y)\, f_{X,Y}(x, y)\,dy\,dx = \mathscr{E}[g(Y)].$$
Thus we have proved for jointly continuous random variables X and Y
(the proof for X and Y jointly discrete is similar) the following simple yet very
useful theorem.
Let us note in words what the two theorems say. Equation (26) states
that the mean of Y is the mean or expectation of the conditional mean of Y,
and Theorem 7 states that the variance of Y is the mean or expectation of the
conditional variance of Y, plus the variance of the conditional mean of Y.
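A rough Monte Carlo check of these two statements (not from the text) is sketched below; the model X ~ Uniform(0, 1) with Y | X = x ~ N(2x, 1) is an assumed example, chosen so that the conditional mean and variance are known in closed form.

```python
# Illustrative sketch: E[Y] = E[E[Y | X]] and
# var[Y] = E[var[Y | X]] + var[E[Y | X]], checked by simulation.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=200_000)
y = rng.normal(loc=2.0 * x, scale=1.0)

cond_mean = 2.0 * x            # E[Y | X] under the assumed model
cond_var = np.ones_like(x)     # var[Y | X] = 1 under the assumed model

print(y.mean(), cond_mean.mean())                    # both near 1.0
print(y.var(), cond_var.mean() + cond_mean.var())    # both near 1 + 1/3
```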
We will conclude this subsection with one further theorem. The proof
can be routinely obtained from Definition 21 and is left as an exercise. Also,
the theorem can be generalized to more than two dimensions.
Remark  If $r_i = r_j = 1$ and all other $r_m$'s are 0, then that particular joint moment about the means becomes $\mathscr{E}[(X_i - \mu_{X_i})(X_j - \mu_{X_j})]$, which is just the covariance between $X_i$ and $X_j$. ////
if the expectation exists for all values of t l , ... , tk such that -h < tj < h
for some h > O,j = 1, ... , k. IIII
The rth moment of $X_j$ may be obtained from $m_{X_1,\ldots,X_k}(t_1, \ldots, t_k)$ by differentiating it r times with respect to $t_j$ and then taking the limit as all the t's approach 0. Also $\mathscr{E}[X_i^r X_j^s]$ can be obtained by differentiating the joint moment generating function r times with respect to $t_i$ and s times with respect to $t_j$ and then taking the limit as all the t's approach 0. Similarly other joint raw moments can be generated.
Remark  $m_X(t_1) = m_{X,Y}(t_1, 0) = \lim_{t_2\to 0} m_{X,Y}(t_1, t_2)$, and $m_Y(t_2) = m_{X,Y}(0, t_2) = \lim_{t_1\to 0} m_{X,Y}(t_1, t_2)$; that is, the marginal moment generating functions can be obtained from the joint moment generating function. ////
Theorem 9  If X and Y are independent and $g_1(\cdot)$ and $g_2(\cdot)$ are two functions, each of a single argument, then
$$\mathscr{E}[g_1(X)\,g_2(Y)] = \mathscr{E}[g_1(X)]\cdot\mathscr{E}[g_2(Y)].$$
PROOF  We will give the proof for jointly continuous random variables.
$$\mathscr{E}[g_1(X)\,g_2(Y)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g_1(x)\,g_2(y)\,f_{X,Y}(x, y)\,dy\,dx = \int_{-\infty}^{\infty} g_1(x)\,f_X(x)\,dx\cdot\int_{-\infty}^{\infty} g_2(y)\,f_Y(y)\,dy = \mathscr{E}[g_1(X)]\cdot\mathscr{E}[g_2(Y)]. \quad ////$$
Remark The converse of the above corollary is not always true; that is,
cov [X, Y] = 0 does not always imply that X and Yare independent, as
the following example shows. IIII
Corollary IPx, yl < 1, with equality if and only if one random variable
is a linear function of the other with probability 1.
FIGURE 7  The bivariate normal density surface z = f(x, y), shown for z > k.
$$f(x, y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}}\exp\left\{-\frac{1}{2(1-\rho^2)}\left[\left(\frac{x-\mu_X}{\sigma_X}\right)^2 - 2\rho\left(\frac{x-\mu_X}{\sigma_X}\right)\left(\frac{y-\mu_Y}{\sigma_Y}\right) + \left(\frac{y-\mu_Y}{\sigma_Y}\right)^2\right]\right\}$$
for $-\infty < x < \infty$, $-\infty < y < \infty$, where $\sigma_Y$, $\sigma_X$, $\mu_X$, $\mu_Y$, and ρ are constants such that $-1 < \rho < 1$, $0 < \sigma_Y$, $0 < \sigma_X$, $-\infty < \mu_X < \infty$, and $-\infty < \mu_Y < \infty$. Then the random variable (X, Y) is defined to have a bivariate normal distribution. ////
The density might, for example, represent the distribution of hits on a vertical
target, where x and y represent the horizontal and vertical deviations from the
central lines. And in fact the distribution closely approximates the distribution
of this as well as many other bivariate populations encountered in practice.
We must first show that the function actually represents a density by showing that its integral over the whole plane is 1; that is,
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x, y)\,dy\,dx = 1. \qquad (30)$$
The density is, of course, positive. To simplify the integral, we shall substitute
$$u = \frac{x - \mu_X}{\sigma_X} \qquad\text{and}\qquad v = \frac{y - \mu_Y}{\sigma_Y} \qquad (31)$$
so that it becomes
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{1}{2\pi\sqrt{1-\rho^2}}\, e^{-[1/(2(1-\rho^2))](u^2 - 2\rho uv + v^2)}\,dv\,du,$$
and if we substitute
$$w = \frac{u - \rho v}{\sqrt{1-\rho^2}} \qquad\text{and}\qquad dw = \frac{du}{\sqrt{1-\rho^2}},$$
the integral may be written as the product of two simple integrals
$$\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}\,e^{-w^2/2}\,dw\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}\,e^{-v^2/2}\,dv, \qquad (32)$$
both of which are 1, as we have seen in studying the univariate normal distribution. Equation (30) is thus verified.
$$m_{X,Y}(t_1, t_2) = m(t_1, t_2) = \mathscr{E}[e^{t_1 X + t_2 Y}] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{t_1 x + t_2 y}\, f(x, y)\,dy\,dx.$$
and on completing the square first on u and then on v, we find that the exponent becomes
$$-\tfrac12 w^2 - \tfrac12 z^2 + \tfrac12\left(t_1^2\sigma_X^2 + 2\rho t_1 t_2\sigma_X\sigma_Y + t_2^2\sigma_Y^2\right),$$
and the integral in Eq. (34) may be written
$$m(t_1, t_2) = e^{t_1\mu_X + t_2\mu_Y}\exp\left[\tfrac12\left(t_1^2\sigma_X^2 + 2\rho t_1 t_2\sigma_X\sigma_Y + t_2^2\sigma_Y^2\right)\right]\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{1}{2\pi}\,e^{-w^2/2 - z^2/2}\,dw\,dz$$
$$= \exp\left[t_1\mu_X + t_2\mu_Y + \tfrac12\left(t_1^2\sigma_X^2 + 2\rho t_1 t_2\sigma_X\sigma_Y + t_2^2\sigma_Y^2\right)\right]$$
since the double integral is equal to unity. ////
Theorem 13  If (X, Y) has a bivariate normal distribution, then
$$\mathscr{E}[X] = \mu_X, \quad \mathscr{E}[Y] = \mu_Y, \quad \operatorname{var}[X] = \sigma_X^2, \quad \operatorname{var}[Y] = \sigma_Y^2, \quad \operatorname{cov}[X, Y] = \rho\sigma_X\sigma_Y,$$
and
$$\rho_{X,Y} = \rho.$$
$$= \mu_X^2 + \sigma_X^2.$$
$$= \rho\sigma_X\sigma_Y.$$
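Theorem 13 can be illustrated by simulation. The sketch below is not part of the text; the parameter values are arbitrary assumptions, and NumPy's multivariate normal sampler is assumed available.

```python
# Illustrative sketch: simulating a bivariate normal with assumed parameters
# mu_X = 1, mu_Y = -2, sigma_X = 2, sigma_Y = 3, rho = 0.6 and checking the
# moments listed in Theorem 13.
import numpy as np

mu_x, mu_y, sig_x, sig_y, rho = 1.0, -2.0, 2.0, 3.0, 0.6
cov = [[sig_x**2, rho * sig_x * sig_y],
       [rho * sig_x * sig_y, sig_y**2]]

rng = np.random.default_rng(1)
x, y = rng.multivariate_normal([mu_x, mu_y], cov, size=500_000).T

print(x.mean(), y.mean())          # near mu_X and mu_Y
print(x.var(), y.var())            # near sigma_X^2 and sigma_Y^2
print(np.cov(x, y)[0, 1])          # near rho * sigma_X * sigma_Y = 3.6
print(np.corrcoef(x, y)[0, 1])     # near rho = 0.6
```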
Theorem 15  If (X, Y) has a bivariate normal distribution, then the marginal distributions of X and Y are univariate normal distributions; that is, X is normally distributed with mean $\mu_X$ and variance $\sigma_X^2$, and Y is normally distributed with mean $\mu_Y$ and variance $\sigma_Y^2$.
PROOF  The marginal density of one of the variables, X for example, is by definition
$$f_X(x) = \int_{-\infty}^{\infty} f(x, y)\,dy;$$
and again substituting
$$v = \frac{y - \mu_Y}{\sigma_Y}$$
and completing the square on v, one finds that
$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma_X}\exp\left[-\frac{1}{2\sigma_X^2}(x - \mu_X)^2\right]$$
and, similarly,
$$f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma_Y}\exp\left[-\frac{1}{2\sigma_Y^2}(y - \mu_Y)^2\right]. \quad ////$$
Theorem 16  If (X, Y) has a bivariate normal distribution, then the conditional distribution of X given Y = y is normal with mean $\mu_X + (\rho\sigma_X/\sigma_Y)(y - \mu_Y)$ and variance $\sigma_X^2(1 - \rho^2)$. Also, the conditional distribution of Y given X = x is normal with mean $\mu_Y + (\rho\sigma_Y/\sigma_X)(x - \mu_X)$ and variance $\sigma_Y^2(1 - \rho^2)$.
PROOF  By definition,
$$f_{X\mid Y}(x \mid y) = \frac{f(x, y)}{f_Y(y)},$$
and on substituting for $f(x, y)$ and $f_Y(y)$ and simplifying, one obtains a univariate normal density with the stated mean and variance. Similarly,
$$f_{Y\mid X}(y \mid x) = \frac{1}{\sqrt{2\pi}\,\sigma_Y\sqrt{1-\rho^2}}\exp\left\{-\frac{1}{2\sigma_Y^2(1-\rho^2)}\left[y - \mu_Y - \frac{\rho\sigma_Y}{\sigma_X}(x - \mu_X)\right]^2\right\}. \qquad (36)$$
////
x =g(y)
when plotted in the xy plane gives the regression curve for x. It is simply a
curve which gives the location of the mean of X for various values of Y in the
conditional density of X given Y = y.
For the bivariate normal distribution, the regression curve is the straight line obtained by plotting
$$x = \mu_X + \frac{\rho\sigma_X}{\sigma_Y}\,(y - \mu_Y),$$
FIGURE 8
PROBLEMS
1  Prove or disprove:
(a) If $P[X > Y] = 1$, then $\mathscr{E}[X] > \mathscr{E}[Y]$.
(b) If $\mathscr{E}[X] > \mathscr{E}[Y]$, then $P[X > Y] = 1$.
(c) If $\mathscr{E}[X] > \mathscr{E}[Y]$, then $P[X > Y] > 0$.
2  Prove or disprove:
(a) If $F_X(z) > F_Y(z)$ for all z, then $\mathscr{E}[Y] > \mathscr{E}[X]$.
(b) If $\mathscr{E}[Y] > \mathscr{E}[X]$, then $F_X(z) > F_Y(z)$ for all z.
(c) If $\mathscr{E}[Y] > \mathscr{E}[X]$, then $F_X(z) > F_Y(z)$ for some z.
(d) If $F_X(z) = F_Y(z)$ for all z, then $P[X = Y] = 1$.
(e) If $F_X(z) > F_Y(z)$ for all z, then $P[X < Y] > 0$.
(f) If $Y = X + 1$, then $F_X(z) = F_Y(z + 1)$ for all z.
3  If $X_1$ and $X_2$ are independent random variables with distribution given by $P[X_i = -1] = P[X_i = 1] = \tfrac12$ for i = 1, 2, then are $X_1$ and $X_1 X_2$ independent?
4 A penny and dime are tossed. Let X denote the number of heads up_ Then
the penny is tossed again. Let Y denote the number of heads up on the dime
(from the first toss) and the penny from the second toss.
(a) Find the conditional distribution of Y given X = I.
(b) Find the covariance of X and Y.
5 If X and Y have joint distribution given by
lx, y(x, y) U(O,y)(x)I(o. J)(Y).
(a) Find COy [X, Y].
(b) Find the conditional distribution of Y given X = x.
6 Consider a sample of size 2 drawn without replacement from an urn containing
three ba1ls, numbered 1, 2, and 3. Let X be the number on the first ball drawn
and Y the Jarger of the two numbers drawn.
(a) Find the joint discrete density function of X and Y.
(b) Find P[X = 11 Y 3].
(e) Find cov [X. Yl.
(a)Find "[y].
(b)Find the distribution of Y.
16  Suppose that the joint probability density function of (X, Y) is given by
$$f_{X,Y}(x, y) = [1 - \alpha(1 - 2x)(1 - 2y)]\,I_{(0,1)}(x)\,I_{(0,1)}(y).$$
"'20 Suppose X and Y are independent and identically distributed random variables
with probability density function f(·} that is symmetrical about o.
(a) Prove that Pfl X + YI 21 XI] > 1_
(b) Select some symmetrical probability density function f('}, and evaluate
P[IX YI <2IXI]·
*21  Prove or disprove: If $\mathscr{E}[Y \mid X] = X$, $\mathscr{E}[X \mid Y] = Y$, and both $\mathscr{E}[X^2]$ and $\mathscr{E}[Y^2]$ are finite, then $P[X = Y] = 1$. (Possible HINT: $P[X = Y] = 1$ if $\operatorname{var}[X - Y] = 0$.)
22  A multivariate Chebyshev inequality: Let $(X_1, \ldots, X_m)$ be jointly distributed with $\mathscr{E}[X_j] = \mu_j$ and $\operatorname{var}[X_j] = \sigma_j^2$ for $j = 1, \ldots, m$. Define $A_j = \{|X_j - \mu_j| < \sqrt{m}\,t\sigma_j\}$. Show that
$$P\left[\bigcap_{j=1}^{m} A_j\right] \ge 1 - t^{-2} \qquad\text{for } t > 0.$$
23  Let $f_X(\cdot)$ be a probability density function with corresponding cumulative distribution function $F_X(\cdot)$. In terms of $f_X(\cdot)$ and/or $F_X(\cdot)$:
(a) Find $P[X > x_0 + \Delta x \mid X > x_0]$.
(b) Find $P[x_0 < X < x_0 + \Delta x \mid X > x_0]$.
(c) Find the limit of the above divided by $\Delta x$ as $\Delta x$ goes to 0.
(d) Evaluate the quantities in parts (a) to (c) for $f_X(x) = \lambda e^{-\lambda x} I_{(0,\infty)}(x)$.
24 Let N equal the number of times a certain device may be used before it breaks.
The probability is p that it will break on anyone try given that it did not break
on any of the previous tries.
(a) Express this in terms of conditional probabiJities.
(b) Express it in terms of a density function, and find the density function.
25 Player A tosses a coin with sides numbered 1 and 2. B spins a spinner evenly
graduated from 0 to 3. B's spinner is fair, but A's coin is not; it comes up 1
with a probability p, not necessarily equal to 1. The payoff X of this game is the
difference in their numbers (A's number minus B's). Find the cumulative dis-
tribution function of X.
26 An urn contains four balls; two of the balls are numbered with aI, and the other
two are numbered with a 2. Two bal1s are drawn from the um without replace-
ment. Let X denote the smaller of the numbers on the drawn balls and Y the
larger.
(a) Find the joint density of X and ·Y.
(b) Find the marginal distribution of Y.
(c) Find the cov [X, Y].
27 The joint probability density function of X and Y is given by
28  The discrete density of X is given by $f_X(x) = x/3$ for x = 1, 2, and $f_{Y\mid X}(y \mid x)$ is binomial with parameters x and $\tfrac12$; that is,
$$f_{Y\mid X}(y \mid x) = P[Y = y \mid X = x] = \binom{x}{y}\left(\tfrac12\right)^x$$
for $y = 0, \ldots, x$ and $x = 1, 2$.
(a) Find 8[X] and var [X).
(b) Find 8[ Y).
(c) Find the joint distribution of X and Y.
29 Let the joint density function of X and Y be given by Ix. y(x, y) = 8xy for 0 < x
< y < 1 and be 0 e1sewhere.
(a) Find 8[ YI X = x).
(b) Find 8[XYIX=x].
(c) Find var [YI X = x).
30 Let Y be a random variab1e having a Poisson distribution with parameter "-.
Assume that the conditiona1 distribution of X given Y = y is binomia11y distrib-
uted with parameters y and p. Find the distribution of X, if X = 0 when Y = O.
31 Assume that X and Yare independent random variab1es and X ( y) has binomia1
distribution with parameters 3 and t (2 and i). Find P[X = Y).
32  Let X and Y have bivariate normal distribution with parameters $\mu_X = 5$, $\mu_Y = 10$, $\sigma_X^2 = 1$, and $\sigma_Y^2 = 25$.
(a) If $\rho > 0$, find ρ when $P[4 < Y < 16 \mid X = 5] = .954$.
*(b) If $\rho = 0$, find $P[X + Y \le 16]$.
33 Two dice are cast 10 times. Let X be the number of times no Is appear, and 1et
Y be the number of times two Is appear.
(a) What is the probabiJity that X and Y wi11 each be 1ess than 3?
(b) What is the probabiHty that X + Y wi11 be 4?
34 Three coins are tossed n times.
(a) Find the joint density of X, the number of times no heads appear; Y, the num-
ber of times one head appears; and Z, the number of times two heads appear.
(b) Find the conditiona1 density of X and Z given Y.
35 Six cards are drawn without rep1acement from an ordinary deck.
(a) Find the joint density of the number of aces X and the number of kings Y.
(b) Find the conditiona1 density of X given Y.
36  Let the two-dimensional random variable (X, Y) have the joint density
$$f_{X,Y}(x, y) = \tfrac18(6 - x - y)\,I_{(0,2)}(x)\,I_{(2,4)}(y).$$
(a) Find $\mathscr{E}[Y \mid X = x]$.  (b) Find $\mathscr{E}[Y^2 \mid X = x]$.
(c) Find $\operatorname{var}[Y \mid X = x]$.  (d) Show that $\mathscr{E}[Y] = \mathscr{E}[\mathscr{E}[Y \mid X]]$.
(e) Find $\mathscr{E}[XY \mid X = x]$.
37  The trinomial distribution (multinomial with k + 1 = 3) of two random variables X and Y is given by
$$f_{X,Y}(x, y) = \frac{n!}{x!\,y!\,(n - x - y)!}\,p^x q^y (1 - p - q)^{n-x-y}$$
for $x, y = 0, 1, \ldots, n$ and $x + y \le n$, where $0 \le p$, $0 \le q$, and $p + q \le 1$.
(a) Find the marginal distribution of Y.
(b) Find the conditional distribution of X given Y, and obtain its expected value.
(c) Find $\rho[X, Y]$.
38  Let (X, Y) have probability density function $f_{X,Y}(x, y)$, and let u(X) and v(Y) be functions of X and Y, respectively. Show that
39  If X and Y are two random variables and $\mathscr{E}[Y \mid X = x] = \mu$, where μ does not depend on x, show that $\operatorname{var}[Y] = \mathscr{E}[\operatorname{var}[Y \mid X]]$.
40  If X and Y are two independent random variables, does $\mathscr{E}[Y \mid X = x]$ depend on x?
41  If the joint moment generating function of (X, Y) is given by $m_{X,Y}(t_1, t_2) = \exp[\tfrac12(t_1^2 + t_2^2)]$, what is the distribution of Y?
42  Define the moment generating function of $Y \mid X = x$. Does $m_Y(t) = \mathscr{E}[m_{Y\mid X}(t)]$?
43 Toss three coins. Let X denote the number of heads on the first two and Y
denote the number of heads on the last two.
(a) Find the joint distribution of X and Y.
(b) Find 8[YI X = 1].
(e) Find px. y.
(d) Give a joint distribution that is not the joint distribution given in part (a)
yet has the same marginal distributions as the joint distribution given in
part (a).
44  Suppose that X and Y are jointly continuous random variables, $f_{Y\mid X}(y \mid x) = I_{(x,\,x+1)}(y)$, and $f_X(x) = I_{(0,1)}(x)$.
(a) Find $\mathscr{E}[Y]$.  (b) Find $\operatorname{cov}[X, Y]$.
(c) Find $P[X + Y < 1]$.  (d) Find $f_{X\mid Y}(x \mid y)$.
45 Let (X, Y) have a joint discrete density function
Ix. y(x, y)
$$F_{Y_1,\ldots,Y_k}(y_1, \ldots, y_k) = P[Y_1 \le y_1; \ldots; Y_k \le y_k] = P[g_1(X_1, \ldots, X_n) \le y_1; \ldots; g_k(X_1, \ldots, X_n) \le y_k]$$
for fixed $y_1, \ldots, y_k$, which is the probability of an event described in terms of $X_1, \ldots, X_n$, and theoretically such a probability can be determined by integrating or summing the joint density over the region corresponding to the event. The problem is that in general one cannot easily evaluate the desired probability
for each Yl' ... , Yk' One of the important problems of statistical inference, the
estimation of parameters, provides us with an example of a problem in which
it is useful to be able to find the distribution of a function of joint random
variables.
In this chapter three techniques for finding the distribution offunctions of
random variables will be presented. These three techniques are called 0) the
cumulative-distribution-function technique, alluded to above and discussed in
Sec. 3, (ii) the moment-generating-function technique, considered in Sec. 4, and
(iii) the transformation technique, considered in Secs. 5 and 6. A number of
important examples are given, including the distribution of sums of independent
random variables (in Subsec. 4.2) and the distribution of the minimum and
maximum (in Subsec. 3.2). Presentation of other important derived distributions
is deferred until later chapters. For instance, the distributions of chi·square,
Student's t, and F, all derived from sampling from a normal distribution, are
given in Sec. 4 of the next chapter.
Preceding the presentation of the techniques for finding the distribution
of functions of random variables is a discussion, given in Sec, 2, of expectations
of functions of random variables. As one might suspect, an expectation, for
example, the mean or the variance, of a function of given random variables can
sometimes be expressed in tenns of expectations of the given random variables.
If such is the case and one is only interested in certain expectations, then it is not
necessary to solve the problem of finding the distribution of the function of the
given random variables. One important function of given random variables
is their sum, and in Subsec. 2.2 the mean and variance of a sum of given random
variables are derived,
We have remarked several times in past chapters that our intermediate
objective was the understanding of distribution theory. This chapter provides
us with a presentation of distribution theory at a level that is deemed adequate
for the understanding of the statistical concepts that are given in the remainder
of this book.
2 EXPECTATIONS OF FUNCTIONS
OF RANDOM VARIABLES
variable, $\mathscr{E}[Y]$ is defined (if it exists), and $\mathscr{E}[g(X)]$ is defined (if it exists). For instance, if X and Y = g(X) are continuous random variables, then by definition
$$\mathscr{E}[Y] = \int_{-\infty}^{\infty} y\, f_Y(y)\,dy \qquad (3)$$
and
$$\mathscr{E}[g(X_1, \ldots, X_n)] = \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} g(x_1, \ldots, x_n)\, f_{X_1,\ldots,X_n}(x_1, \ldots, x_n)\,dx_1\cdots dx_n. \qquad (4)$$
In practice, one would naturally select that method which makes the
calculations easier. One might suspect that Eq. (3) gives the better method of
the two since it involves only a single integral whereas Eq. (4) involves a multiple
integral. On the other hand, Eq. (3) involves the density of Y, a density that
may have to be obtained before integration can proceed.
and
$$\mathscr{E}[g(X)] = \mathscr{E}[X^2] = \int_{-\infty}^{\infty} x^2\, f_X(x)\,dx.$$
Now
and
using the fact that Y has a gamma distribution with parameters $r = \tfrac12$ and $\lambda = \tfrac12$. (See Example 2 in Subsec. 3.1 below.) ////
and
$$\operatorname{var}\left[\sum_{i=1}^{n} X_i\right] = \sum_{i=1}^{n}\operatorname{var}[X_i] + 2\sum_{i<j}\sum\operatorname{cov}[X_i, X_j]. \qquad (6)$$
The proof of Eq. (6) likewise ends with
$$= \sum_{i=1}^{n}\operatorname{var}[X_i] + 2\sum_{i<j}\sum\operatorname{cov}[X_i, X_j]. \quad ////$$
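The following sketch (not from the text) checks Eq. (6) empirically for n = 2 with an assumed pair of correlated normal variables.

```python
# Illustrative sketch: var[X1 + X2] = var[X1] + var[X2] + 2 cov[X1, X2],
# checked by simulation for correlated normals.
import numpy as np

rng = np.random.default_rng(2)
x1 = rng.normal(size=300_000)
x2 = 0.5 * x1 + rng.normal(size=300_000)       # gives cov[X1, X2] = 0.5

lhs = np.var(x1 + x2)
rhs = np.var(x1) + np.var(x2) + 2 * np.cov(x1, x2)[0, 1]
print(lhs, rhs)                                # both near 1 + 1.25 + 1.0 = 3.25
```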
The following theorem gives a result that is somewhat related to the above
theorem inasmuch as its proof, which is left as an exercise, is similar.
Theorem 2  Let $X_1, \ldots, X_n$ and $Y_1, \ldots, Y_m$ be two sets of random variables, and let $a_1, \ldots, a_n$ and $b_1, \ldots, b_m$ be two sets of constants; then
$$\operatorname{cov}\left[\sum_{i=1}^{n} a_i X_i,\ \sum_{j=1}^{m} b_j Y_j\right] = \sum_{i=1}^{n}\sum_{j=1}^{m} a_i b_j\operatorname{cov}[X_i, Y_j]. \qquad (7) \quad ////$$
Corollary  If $X_1, \ldots, X_n$ are random variables and $a_1, \ldots, a_n$ are constants, then
$$\operatorname{var}\left[\sum_{i=1}^{n} a_i X_i\right] = \sum_{i=1}^{n} a_i^2\operatorname{var}[X_i] + \sum_{i\ne j}\sum a_i a_j\operatorname{cov}[X_i, X_j]. \qquad (8) \quad ////$$
Equation (10) gives the variance of the sum or the difference of two ran-
dom variables. Clearly
PROOF
and
Note that the mean of the product can be expressed in terms of the means
and covariance of X and Y but the variance of the product requires higher-order
moments.
In general, there are no simple exact formulas for the mean and variance
of the quotient of two random variables in terms of moments of the two random
variables; however, there are approximate formulas which are sometimes useful.
Theorem 4
$$\mathscr{E}\left[\frac{X}{Y}\right] \approx \frac{\mu_X}{\mu_Y} - \frac{\operatorname{cov}[X, Y]}{\mu_Y^2} + \frac{\mu_X\operatorname{var}[Y]}{\mu_Y^3} \qquad (14)$$
and
$$\operatorname{var}\left[\frac{X}{Y}\right] \approx \left(\frac{\mu_X}{\mu_Y}\right)^2\left(\frac{\operatorname{var}[X]}{\mu_X^2} + \frac{\operatorname{var}[Y]}{\mu_Y^2} - \frac{2\operatorname{cov}[X, Y]}{\mu_X\mu_Y}\right). \qquad (15)$$
PROOF  To find the approximate formula for $\mathscr{E}[X/Y]$, consider the Taylor series expansion of x/y expanded about $(\mu_X, \mu_Y)$; drop all terms of order higher than 2, and then take the expectation of both sides. The approximate formula for $\operatorname{var}[X/Y]$ is similarly obtained by expanding in a Taylor series and retaining only second-order terms. ////
Two comments are in order: First, it is not unusual that the mean and
variance of the quotient XI Y do not exist even though the moments of X and Y
do exist. (See Examples 5, 23, and 24.) Second, the method of proof of
Theorem 4 can be used to find approximate formulas for the mean and variance
of functions of X and Y other than the quotient. For example,
$$\mathscr{E}[g(X, Y)] \approx g(\mu_X, \mu_Y) + \frac12\operatorname{var}[X]\,\frac{\partial^2}{\partial x^2}g(x, y)\bigg|_{(\mu_X,\mu_Y)} + \frac12\operatorname{var}[Y]\,\frac{\partial^2}{\partial y^2}g(x, y)\bigg|_{(\mu_X,\mu_Y)} + \operatorname{cov}[X, Y]\,\frac{\partial^2}{\partial y\,\partial x}g(x, y)\bigg|_{(\mu_X,\mu_Y)}, \qquad (16)$$
and
$$\operatorname{var}[g(X, Y)] \approx \left[\frac{\partial}{\partial x}g(x, y)\right]^2_{(\mu_X,\mu_Y)}\operatorname{var}[X] + \left[\frac{\partial}{\partial y}g(x, y)\right]^2_{(\mu_X,\mu_Y)}\operatorname{var}[Y] + 2\left[\frac{\partial}{\partial x}g(x, y)\,\frac{\partial}{\partial y}g(x, y)\right]_{(\mu_X,\mu_Y)}\operatorname{cov}[X, Y]. \qquad (17)$$
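The quality of the approximation in Eq. (15) can be examined by simulation. The sketch below is illustrative only (not from the text); the distribution and parameter values are assumptions chosen so that Y stays well away from zero.

```python
# Illustrative sketch: comparing the Taylor approximation of var[X/Y] in
# Eq. (15) with a Monte Carlo estimate for an assumed pair of correlated
# normals whose means are far from zero.
import numpy as np

rng = np.random.default_rng(3)
mu_x, mu_y = 10.0, 20.0
sd_x, sd_y, rho = 1.0, 2.0, 0.3
cov_xy = rho * sd_x * sd_y

x, y = rng.multivariate_normal(
    [mu_x, mu_y],
    [[sd_x**2, cov_xy], [cov_xy, sd_y**2]],
    size=500_000,
).T

approx = (mu_x / mu_y) ** 2 * (sd_x**2 / mu_x**2 + sd_y**2 / mu_y**2
                               - 2 * cov_xy / (mu_x * mu_y))
print(approx, np.var(x / y))       # the two values should be close
```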
3 CUMULATIVE-DISTRIBUTION-FUNCTION TECHNIQUE
EXAMPLE 2 Let there be only one given random variable, say X, which has a
standard normal distribution. Suppose the distribution of Y = g(X) = X2
is desired.
$$F_Y(y) = \frac{2}{\sqrt{2\pi}}\int_0^y \frac{1}{2\sqrt{z}}\,e^{-z/2}\,dz = \int_0^y \frac{1}{\Gamma(\tfrac12)\sqrt{2z}}\,e^{-z/2}\,dz, \qquad\text{for } y > 0,$$
some real number. As defined, Y,,(w) = max [X1(w), ... , X,,(w)]; that is, for
a given w, Y,,(w) is the largest of the real numbers Xl (w), ... , X,,(w).
The distributions of $Y_1$ and $Y_n$ are desired. $F_{Y_n}(y) = P[Y_n \le y] = P[X_1 \le y; \ldots; X_n \le y]$ since the largest of the $X_i$'s is less than or equal to y if and only if all the $X_i$'s are less than or equal to y. Now, if the $X_i$'s are assumed independent, then
$$P[X_1 \le y; \ldots; X_n \le y] = \prod_{i=1}^{n} P[X_i \le y] = \prod_{i=1}^{n} F_{X_i}(y);$$
so the distribution of $Y_n = \max[X_1, \ldots, X_n]$ can be expressed in terms of the marginal distributions of $X_1, \ldots, X_n$. If in addition it is assumed that all the $X_1, \ldots, X_n$ have the same cumulative distribution, say $F_X(\cdot)$, then
PROOF
Similarly,
$$P[Y_1 \le y] = 1 - P[Y_1 > y] = 1 - P[X_1 > y; \ldots; X_n > y]$$
since $Y_1$ is greater than y if and only if every $X_i > y$. And if $X_1, \ldots, X_n$ are independent, and if further it is assumed that $X_1, \ldots, X_n$ are identically distributed with common cumulative distribution function $F_X(\cdot)$, then
$$F_{Y_1}(y) = 1 - \prod_{i=1}^{n}\left[1 - F_{X_i}(y)\right] = 1 - [1 - F_X(y)]^n. \qquad (21)$$
And if X I, ..• , X" are independent and identically distributed with com-
mon cumulative distribution function Fx( .), then
PROOF
$$f_{Y_1}(y) = \frac{d}{dy}F_{Y_1}(y) = n[1 - F_X(y)]^{n-1} f_X(y). \quad ////$$
EXAMPLE 3 Suppose that the life of a: certain light bulb is exponentially
distributed with mean 100 hours. If 10 such light bulbs are installed
simultaneously, what is the distribution of the life of the light bulb that
fails first, and what is its expected life? Let X i denote the life of the ith
light bulb; then Y 1 = min [Xl' .•• , X IO ] is the life of the light bulb that
fails first. Assume that the X i'S are independent.
so
$$f_{Y_1}(y) = 10\left(e^{-y/100}\right)^{10-1}\left(\tfrac{1}{100}\,e^{-y/100}\right)I_{(0,\infty)}(y) = \tfrac{1}{10}\,e^{-y/10}\,I_{(0,\infty)}(y);$$
that is, the life of the light bulb that fails first is exponentially distributed with mean 10 hours.
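A quick simulation of Example 3 (not part of the text) is sketched below; it assumes NumPy is available.

```python
# Illustrative sketch: the minimum of 10 independent exponential lifetimes with
# mean 100 hours is exponential with mean 10 hours.
import numpy as np

rng = np.random.default_rng(4)
lifetimes = rng.exponential(scale=100.0, size=(200_000, 10))
first_failure = lifetimes.min(axis=1)

print(first_failure.mean())                          # near 10 hours
# empirical P[Y1 > 10] versus exp(-10/10) = e**-1 ~ 0.368
print((first_failure > 10).mean(), np.exp(-1.0))
```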
$$f_Z(z) = \int_{-\infty}^{\infty} f_{X,Y}(x, z - x)\,dx = \int_{-\infty}^{\infty} f_{X,Y}(z - y, y)\,dy, \qquad (24)$$
and
PROOF  We will prove only the first part of Eq. (24); the others are proved in an analogous manner.
$$F_Z(z) = P[X + Y \le z] = \int_{-\infty}^{\infty}\left[\int_{-\infty}^{z-x} f_{X,Y}(x, y)\,dy\right]dx = \int_{-\infty}^{\infty}\left[\int_{-\infty}^{z} f_{X,Y}(x, u - x)\,du\right]dx$$
by making the substitution y = u - x; differentiating with respect to z gives the first part of Eq. (24). Now, if X and Y are independent,
$$f_Z(z) = \int_{-\infty}^{\infty} f_Y(z - x)\, f_X(x)\,dx, \qquad (26)$$
which may also be obtained from $F_Z(z) = \int_{-\infty}^{\infty} P[x + Y \le z]\, f_X(x)\,dx$.
Remark  The formula given in Eq. (26) is often called the convolution formula. In mathematical analysis, the function $f_Z(\cdot)$ is called the convolution of the functions $f_Y(\cdot)$ and $f_X(\cdot)$. ////
$$f_Z(z) = \int_{-\infty}^{\infty}\{I_{(0,z)}(x)\,I_{(0,1)}(z) + I_{(z-1,1)}(x)\,I_{[1,2)}(z)\}\,dx = z\,I_{(0,1)}(z) + (2 - z)\,I_{[1,2)}(z).$$
FIGURE 1
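The convolution formula, Eq. (26), can be evaluated numerically for this example. The sketch below is illustrative only and assumes SciPy is available.

```python
# Illustrative sketch: Eq. (26) for Z = X + Y with X and Y independent
# Uniform(0, 1); the result is the triangular density z on (0,1) and 2 - z on [1,2).
import numpy as np
from scipy.integrate import quad

def f_uniform(t):
    return 1.0 if 0.0 < t < 1.0 else 0.0

def f_z(z):
    # f_Z(z) = integral of f_Y(z - x) f_X(x) dx
    val, _ = quad(lambda x: f_uniform(z - x) * f_uniform(x), 0.0, 1.0)
    return val

for z in (0.25, 0.5, 1.0, 1.5, 1.75):
    print(z, f_z(z))     # approximately z for z < 1 and 2 - z for z >= 1
```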
and
$$f_U(u) = \int_{-\infty}^{\infty} |y|\, f_{X,Y}(uy, y)\,dy. \qquad (27)$$
PROOF  Again, only the first part of Eq. (27) will be proved. (See Fig. 1 for z > 0.)
$$F_Z(z) = \int_{-\infty}^{0}\left[\int_{z/x}^{\infty} f_{X,Y}(x, y)\,dy\right]dx + \int_{0}^{\infty}\left[\int_{-\infty}^{z/x} f_{X,Y}(x, y)\,dy\right]dx = \int_{-\infty}^{z}\left[\int_{-\infty}^{\infty}\frac{1}{|x|}\,f_{X,Y}\!\left(x, \frac{u}{x}\right)dx\right]du;$$
hence
$$f_Z(z) = \frac{dF_Z(z)}{dz} = \int_{-\infty}^{\infty}\frac{1}{|x|}\,f_{X,Y}\!\left(x, \frac{z}{x}\right)dx. \quad ////$$
$$f_U(u) = \int_{-\infty}^{\infty} |y|\, f_{X,Y}(uy, y)\,dy \qquad\text{(see Fig. 2)}$$
$$= \int_0^{\min(1,\,1/u)} y\,dy = \tfrac12\,I_{(0,1)}(u) + \tfrac12\left(\frac{1}{u}\right)^2 I_{[1,\infty)}(u).$$
FIGURE 2
4 MOMENT-GENERATING-FUNCTION TECHNIQUE
$$m_Y(t) = (1 - 2t)^{-1/2} \qquad\text{for } t < \tfrac12,$$
which we recognize as the moment generating function of a gamma with parameters $r = \tfrac12$ and $\lambda = \tfrac12$. (It is also called a chi-square distribution with one degree of freedom. See Subsec. 4.3 of Chap. VI.) ////
$$m_{Y_1,Y_2}(t_1, t_2) = \mathscr{E}[e^{Y_1 t_1 + Y_2 t_2}] = m_{X_1}(t_1 - t_2)\,m_{X_2}(t_1 + t_2) = \exp\frac{(t_1 - t_2)^2}{2}\exp\frac{(t_1 + t_2)^2}{2} = \exp(t_1^2 + t_2^2) = \exp\frac{2t_1^2}{2}\exp\frac{2t_2^2}{2} = m_{Y_1}(t_1)\,m_{Y_2}(t_2).$$
$$m(t) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{1}{2\pi}\exp\left[\frac{(x_2 - x_1)^2}{2}\,t - \frac{x_1^2 + x_2^2}{2}\right]dx_1\,dx_2$$
$$= \int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}\exp\left[-\frac{x_2^2}{2}\cdot\frac{1 - 2t}{1 - t}\right]\frac{1}{\sqrt{1 - t}}\left\{\int_{-\infty}^{\infty}\sqrt{\frac{1 - t}{2\pi}}\exp\left[-\frac{1 - t}{2}\left(x_1 + \frac{t x_2}{1 - t}\right)^2\right]dx_1\right\}dx_2$$
$$= \frac{1}{\sqrt{1 - t}}\cdot\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp\left(-\frac12\,\frac{1 - 2t}{1 - t}\,x_2^2\right)dx_2 = \frac{1}{\sqrt{1 - t}}\cdot\frac{\sqrt{1 - t}}{\sqrt{1 - 2t}} = \frac{1}{\sqrt{1 - 2t}}.$$
moment generating function of each exists for all $-h < t < h$ for some $h > 0$; let $Y = \sum_{i=1}^{n} X_i$; then
$$m_Y(t) = \mathscr{E}\left[\exp\left(t\sum_{i=1}^{n} X_i\right)\right] = \prod_{i=1}^{n} m_{X_i}(t) \qquad\text{for } -h < t < h.$$
PROOF
$$m_Y(t) = \mathscr{E}\left[\prod_{i=1}^{n} e^{tX_i}\right] = \prod_{i=1}^{n}\mathscr{E}[e^{tX_i}] = \prod_{i=1}^{n} m_{X_i}(t). \quad ////$$
$$m_{X_i}(t) = pe^t + q.$$
So
$$m_{\sum X_i}(t) = \prod_{i=1}^{n} m_{X_i}(t) = (pe^t + q)^n,$$
EXAMPLE 10  Suppose that $X_1, \ldots, X_n$ are independent Poisson distributed random variables, $X_i$ having parameter $\lambda_i$. Then
$$m_{X_i}(t) = \mathscr{E}[e^{tX_i}] = \exp[\lambda_i(e^t - 1)],$$
and hence
$$m_{\sum X_i}(t) = \prod_{i=1}^{n} m_{X_i}(t) = \prod_{i=1}^{n}\exp[\lambda_i(e^t - 1)] = \exp\left[\sum_{i=1}^{n}\lambda_i(e^t - 1)\right],$$
which is the moment generating function of a Poisson distributed random variable with parameter $\sum\lambda_i$.
EXAMPLE 11  Assume that $X_1, \ldots, X_n$ are independent and identically distributed exponential random variables; then
$$m_{X_i}(t) = \frac{\lambda}{\lambda - t}.$$
So
$$m_{\sum X_i}(t) = \prod_{i=1}^{n} m_{X_i}(t) = \left(\frac{\lambda}{\lambda - t}\right)^n,$$
which is the moment generating function of a gamma distribution with parameters n and λ.
If $X_i$ is normally distributed with mean $\mu_i$ and variance $\sigma_i^2$, then
$$m_{a_i X_i}(t) = \exp\left[a_i\mu_i t + \tfrac12 a_i^2\sigma_i^2 t^2\right],$$
and hence
$$m_{\sum a_i X_i}(t) = \prod_{i=1}^{n} m_{a_i X_i}(t) = \exp\left[\left(\sum a_i\mu_i\right)t + \tfrac12\left(\sum a_i^2\sigma_i^2\right)t^2\right].$$
The above says that any linear combination (that is, L ai Xi) of inde-
pendent normal random variables is itself a normally distributed random
variable. (Actually, any linear combination of jointly normally distri-
buted random variables is normally distributed. Independence is not
required.) In particular, if
x '" N(px, oJ),
and X and Yare independent, then
X + Y", N(Jlx + Jly, uk + u~),
and
x- Y '" N(p.x - Jly, u; + u~).
If Xl' ... , Xn are independent and identically distributed random vari-
ables distributed N(Jl, ( 2 ), then
We have made use of Eq. (9), which stated that $\mathscr{E}[\bar{X}_n] = \mu_X$ and $\operatorname{var}[\bar{X}_n] = \sigma_X^2/n$. Equation (31) states that for each fixed argument z the value of the cumulative distribution function of $Z_n$, for $n = 1, 2, \ldots$, converges to the value $\Phi(z)$. [Recall that $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution.]
Note what the central-limit theorem says: If you have independent random variables $X_1, \ldots, X_n, \ldots$, each with the same distribution which has a mean and variance, then $\bar{X}_n = (1/n)\sum X_i$ "standardized" by subtracting its mean and then dividing by its standard deviation has a distribution that approaches a standard normal distribution. The key thing to note is that it does not make any difference what common distribution the $X_1, \ldots, X_n, \ldots$ have, as long as they have a mean and variance. A number of useful approximations can be garnered from the central-limit theorem, and they are listed as a corollary.
p [a < X.u /
x
-;X
n
< b] ~ <I>(b) - CI>(a) , (32)
P[~ < XII < d] ~ CI> (~- Jl~) - CI>(C - Jlx) , (33)
uxlJn ux/Jn
or
IIII
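The approximation in Eq. (33) can be compared with a simulated probability. The sketch below is illustrative only (not from the text); the exponential population and the interval (c, d) are assumptions.

```python
# Illustrative sketch: Eq. (33) for the sample mean of n = 30 exponential(1)
# variables (mu = 1, sigma = 1).
import numpy as np
from math import erf, sqrt

def Phi(z):                          # standard normal c.d.f.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

n, mu, sigma = 30, 1.0, 1.0
c, d = 0.8, 1.2

rng = np.random.default_rng(6)
xbar = rng.exponential(1.0, size=(200_000, n)).mean(axis=1)

simulated = ((xbar > c) & (xbar < d)).mean()
approx = Phi((d - mu) / (sigma / sqrt(n))) - Phi((c - mu) / (sigma / sqrt(n)))
print(simulated, approx)             # the two probabilities should be close
```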
otherwise;
then $X_j = \sum_{\alpha=1}^{n} Z_{j\alpha}$. Now suppose we want to find $\operatorname{cov}[X_i, X_j]$. Intuitively, we might suspect that such covariance is negative since when one of the random variables is large another tends to be small.
$$\operatorname{cov}[X_i, X_j] = \operatorname{cov}\left[\sum_{\beta=1}^{n} Z_{i\beta},\ \sum_{\alpha=1}^{n} Z_{j\alpha}\right] = \sum_{\beta=1}^{n}\sum_{\alpha=1}^{n}\operatorname{cov}[Z_{i\beta}, Z_{j\alpha}]$$
by Theorem 2. Now if $\alpha \ne \beta$, then $Z_{i\beta}$ and $Z_{j\alpha}$ are independent since they correspond to different trials, which are independent. Hence
$$\sum_{\beta=1}^{n}\sum_{\alpha=1}^{n}\operatorname{cov}[Z_{i\beta}, Z_{j\alpha}] = \sum_{\alpha=1}^{n}\operatorname{cov}[Z_{i\alpha}, Z_{j\alpha}],$$
(by using the fact that a sum of independent and identically distributed exponential random variables has a gamma distribution)
$$= \int_0^z \lambda p\,e^{-\lambda u}e^{(1-p)\lambda u}\,du = \lambda p\int_0^z e^{-\lambda p u}\,du = 1 - e^{-\lambda p z}.$$
$$F_Y(y) = \int_0^{\sqrt{y}} dx = \sqrt{y},$$
and therefore
$$f_Y(y) = \frac{1}{2\sqrt{y}}\,I_{(0,1)}(y). \quad ////$$
Application of the cumulative-distribution-function technique to find the
density of Y = g(X), as in the above example, produces the transformation
technique, the result of which is given in the following theorem. 1
$$f_Y(y) = \left|\frac{d}{dy}g^{-1}(y)\right| f_X(g^{-1}(y))\,I_{\mathcal{Y}}(y)$$
$$= e^{-y}\,\frac{1}{B(a, b)}\left(e^{-y}\right)^{a-1}\left(1 - e^{-y}\right)^{b-1} I_{(0,\infty)}(y) = \frac{1}{B(a, b)}\,e^{-ay}\left(1 - e^{-y}\right)^{b-1} I_{(0,\infty)}(y).$$
EXAMPLE 18  Suppose X has the Pareto density $f_X(x) = \theta x^{-\theta-1} I_{[1,\infty)}(x)$ and the distribution of $Y = \log_e X$ is desired.
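A simulation sketch (not part of the text) for Example 18: under the Pareto density with an assumed θ, Y = log X should be exponential with parameter θ.

```python
# Illustrative sketch: sample X from the Pareto density theta * x**(-theta-1)
# on [1, inf) by inverse transform (F_X(x) = 1 - x**(-theta)) and check that
# Y = log X behaves like an exponential(theta) variable.
import numpy as np

theta = 2.5
rng = np.random.default_rng(7)
u = rng.uniform(size=300_000)
x = (1.0 - u) ** (-1.0 / theta)
y = np.log(x)

print(y.mean(), 1.0 / theta)            # exponential(theta) has mean 1/theta
print((y > 1.0).mean(), np.exp(-theta)) # P[Y > 1] versus exp(-theta)
```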
where the summation is over those values of i for which g(x) = y for some value
of x in Xi.
In particular, if
$$f_X(x) = \tfrac12 e^{-|x|},$$
then
$$f_Y(y) = \frac{1}{2\sqrt{y}}\,e^{-\sqrt{y}}\,I_{(0,\infty)}(y);$$
or, if
then
IIII
6 TRANSFORMATIONS
In Sec. 5 we considered the problem of obtaining the distribution of a function
of a given random variable. It is natural to consider next the problem of
obtaining the joint distribution of several random variables which are functions
of a given set of random variables.
Suppose that the joint density of $Y_1 = g_1(X_1, \ldots, X_n), \ldots, Y_k = g_k(X_1, \ldots, X_n)$ is desired. It can be observed that $Y_1, \ldots, Y_k$ are jointly discrete and
$$P[Y_1 = y_1; \ldots; Y_k = y_k] = f_{Y_1,\ldots,Y_k}(y_1, \ldots, y_k) = \sum f_{X_1,\ldots,X_n}(x_1, \ldots, x_n),$$
where the summation is over those $(x_1, \ldots, x_n)$ belonging to $\mathcal{X}$ for which $(y_1, \ldots, y_k) = (g_1(x_1, \ldots, x_n), \ldots, g_k(x_1, \ldots, x_n))$.
X = {(O, 0, 0), (0, 0, 1), (0, 1, 1), (1, 0, 1), (1, 1,0), (1, 1, I)}.
!Yt. Y2(0, 0) =!X •• X2.X3(0, 0, 0) = L
!Y to Y2(1, 1) =!Xt, X2. X3(0, 0,1) = i,
!Yt,Y2(2, 0) =!XI,X2.X3(0, 1,1) = ~,
!Y to Y2(2, 1) =!x lo x2.xi 1, 0,1) + !x •• x 2.x3(1, 1,0) = i,
and
IIII
X = {(Xl' ... , XII) :fxt. ...• Xn (X1, ... , XII) > O}. (38)
Again assume that the joint density of the random variables $Y_1 = g_1(X_1, \ldots, X_n), \ldots, Y_k = g_k(X_1, \ldots, X_n)$ is desired, where k is some integer satisfying $1 \le k \le n$. If $k < n$, we will introduce additional, new random variables $Y_{k+1} = g_{k+1}(X_1, \ldots, X_n), \ldots, Y_n = g_n(X_1, \ldots, X_n)$ for judiciously selected functions $g_{k+1}, \ldots, g_n$; then we will find the joint distribution of $Y_1, \ldots, Y_n$, and finally we will find the desired marginal distribution of $Y_1, \ldots, Y_k$ from the joint distribution of $Y_1, \ldots, Y_n$. This device of possibly introducing additional random variables makes the transformation $y_1 = g_1(x_1, \ldots, x_n), \ldots, y_n = g_n(x_1, \ldots, x_n)$ a transformation from an n-dimensional space to an n-dimensional space. Henceforth we will assume that we are seeking the joint distribution of $Y_1 = g_1(X_1, \ldots, X_n), \ldots, Y_n = g_n(X_1, \ldots, X_n)$ (rather than the joint distribution of $Y_1, \ldots, Y_k$) when we have given the joint probability density of $X_1, \ldots, X_n$.
We will state our results first for n = 2 and later generalize to n > 2.
LetfX1>XiX1' X2) be given. Set X ={(Xb x 2):fxl.xi x 1' X2) > O}. We want to
find the joint distribution of Y1 = 91(X 1, X 2) and Y2 = 92(X b X 2) for known
functions 91(', .) and 92(', '). Now suppose that Y1 = 91(X1' X2) and Y2 =
92(X1' X2) defines a one-to-one transformation which maps X onto, say, ~.
Xl and X2 can be expressed in terms of Y1 and Y2; so we can write, say, Xl =
911(Y1' Y2) and X2 = 921(Yb Y2)' Note that X is a subset of the X1X2 plane and
~ is a subset of the Y1Y2 plane. The determinant
6 TRANSFORMATIONS 205
aXl aXl
aYl aY2 (39)
aX2 aX2
aYl aY2
$$J = \begin{vmatrix} \dfrac{\partial x_1}{\partial y_1} & \dfrac{\partial x_1}{\partial y_2} \\[2mm] \dfrac{\partial x_2}{\partial y_1} & \dfrac{\partial x_2}{\partial y_2} \end{vmatrix} = -\frac12.$$
206" DISTRIBUTIONS OF FUNCTIONS OF RANDOM VARIABLES V
FIGURE 3
$$x_1 = g_1^{-1}(y_1, y_2) = \frac{y_1 y_2}{1 + y_2} \qquad\text{and}\qquad x_2 = g_2^{-1}(y_1, y_2) = \frac{y_1}{1 + y_2}.$$
$$J = \begin{vmatrix} \dfrac{y_2}{1+y_2} & \dfrac{y_1}{(1+y_2)^2} \\[2mm] \dfrac{1}{1+y_2} & -\dfrac{y_1}{(1+y_2)^2} \end{vmatrix} = -\frac{y_1(y_2 + 1)}{(1+y_2)^3} = -\frac{y_1}{(1+y_2)^2}.$$
$$f_{Y_2}(y_2) = \frac{1}{2\pi}\,\frac{1}{(1+y_2)^2}\int_{-\infty}^{\infty}|y_1|\exp\left[-\frac12\,\frac{(1+y_2^2)\,y_1^2}{(1+y_2)^2}\right]dy_1.$$
Let
$$u = \frac{(1+y_2^2)\,y_1^2}{2(1+y_2)^2}; \qquad\text{then}\qquad du = \frac{(1+y_2^2)}{(1+y_2)^2}\,y_1\,dy_1,$$
and so
$$f_{Y_2}(y_2) = \frac{1}{\pi(1+y_2^2)},$$
a Cauchy density. That is, the ratio of two independent standard normal random variables has a Cauchy distribution. ////
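This conclusion is easy to check by simulation; the following sketch is illustrative only and not part of the text.

```python
# Illustrative sketch: the ratio of two independent standard normals compared
# with the standard Cauchy distribution.
import numpy as np

rng = np.random.default_rng(8)
ratio = rng.standard_normal(500_000) / rng.standard_normal(500_000)

# The standard Cauchy c.d.f. is F(x) = 1/2 + arctan(x)/pi, so F(1) = 0.75.
print((ratio <= 1.0).mean())       # near 0.75
print(np.median(ratio))            # near 0, the Cauchy median
```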
hence
$$J = \begin{vmatrix} y_2 & y_1 \\ -y_2 & 1 - y_1 \end{vmatrix} = y_2.$$
Hence
$$f_{Y_1,Y_2}(y_1, y_2) = \left[\frac{\Gamma(n_1 + n_2)}{\Gamma(n_1)\Gamma(n_2)}\,y_1^{n_1-1}(1 - y_1)^{n_2-1} I_{(0,1)}(y_1)\right]\left[\frac{\lambda^{n_1+n_2}}{\Gamma(n_1 + n_2)}\,y_2^{n_1+n_2-1}e^{-\lambda y_2} I_{(0,\infty)}(y_2)\right].$$
It turns out that $Y_1$ and $Y_2$ are independent, and $Y_1$ has a beta distribution with parameters $n_1$ and $n_2$. ////
$$f_{Y_1,Y_2}(y_1, y_2) = \sum_{i=1}^{m}|J_i|\,f_{X_1,X_2}\!\left(g_{1i}^{-1}(y_1, y_2),\,g_{2i}^{-1}(y_1, y_2)\right)I_{\mathcal{Y}}(y_1, y_2). \qquad (41) \quad ////$$
We illustrate this theorem with Example 26.
$$J_1 = -\tfrac12\,(y_1 - y_2^2)^{-1/2}.$$
Hence,
$$f_{Y_1,Y_2}(y_1, y_2) = \left[|J_1|\,f_{X_1,X_2}(g_{11}^{-1}(y_1, y_2), g_{21}^{-1}(y_1, y_2)) + |J_2|\,f_{X_1,X_2}(g_{12}^{-1}(y_1, y_2), g_{22}^{-1}(y_1, y_2))\right]I_{\mathcal{Y}}(y_1, y_2) = \frac{1}{\sqrt{y_1 - y_2^2}}\cdot\frac{1}{2\pi}\,e^{-\frac12 y_1}$$
for $y_1 \ge 0$ and $-\sqrt{y_1} < y_2 < \sqrt{y_1}$. Now
$$J_i = \begin{vmatrix} \dfrac{\partial g_{1i}^{-1}}{\partial y_1} & \dfrac{\partial g_{1i}^{-1}}{\partial y_2} & \cdots & \dfrac{\partial g_{1i}^{-1}}{\partial y_n} \\[2mm] \dfrac{\partial g_{2i}^{-1}}{\partial y_1} & \dfrac{\partial g_{2i}^{-1}}{\partial y_2} & \cdots & \dfrac{\partial g_{2i}^{-1}}{\partial y_n} \\ \vdots & \vdots & & \vdots \\ \dfrac{\partial g_{ni}^{-1}}{\partial y_1} & \dfrac{\partial g_{ni}^{-1}}{\partial y_2} & \cdots & \dfrac{\partial g_{ni}^{-1}}{\partial y_n} \end{vmatrix} \qquad\text{for } i = 1, \ldots, m.$$
Assume that all the partial derivatives in $J_i$ are continuous over $\mathcal{Y}$ and the determinant $J_i$ is nonzero, $i = 1, \ldots, m$. Then
$$f_{Y_1,\ldots,Y_n}(y_1, \ldots, y_n) = \sum_{i=1}^{m}|J_i|\,f_{X_1,\ldots,X_n}\!\left(g_{1i}^{-1}(y_1, \ldots, y_n), \ldots, g_{ni}^{-1}(y_1, \ldots, y_n)\right). \qquad (42)$$
$$J = \begin{vmatrix} 1 & 0 & 0 \\ -1 & 2 & 0 \\ 0 & -2 & 3 \end{vmatrix} = 6.$$
$$= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\cdots\; = \;\sqrt{2}\,\sqrt{2\pi}\int_{-\infty}^{\infty}\exp\left[-\tfrac12\left(6y_2^2 - 12y_2 y_3 + 6y_3^2\right)\right]\exp\left[-\tfrac12\,(3y_3^2)\right]dy_2,$$
PROBLEMS
1 (a)Let X h X 2 , and X3 be uncorrelated random variables with common variance
a 2 • Find the correlation coefficient between Xl + X 2 and X 2 + X 3 .
(b) Let Xl and X 2 be uncorrelated random variables. Find the correlation
coefficient between Xl + X 2 and X 2 - Xl in terms of var [Xl] and var [X2].
(c) Let Xl, X 2 , and X3 be independently distributed random variables with
common mean p. and common variance a 2 • Find the correlation coefficient
between X 2 - Xl and X3 - Xl'
2 Prove Theorem 2.
3 Let X have c.d.f. F x (') = F(·). What in terms of F(') is the distribution of
XI[o. CXJ)(X) = max [0, Xl?
4  Consider drawing balls, one at a time, without replacement, from an urn containing M balls, K of which are defective. Let the random variable X (Y) denote the number of the draw on which the first defective (nondefective) ball is obtained. Let Z denote the number of the draw on which the rth defective ball is obtained.
(a) Find the distribution of X.
(b) Find the distribution of Z. (Such distribution is often called the negative hypergeometric distribution.)
(c) Set M = 5 and K = 2. Find the joint distribution of X and Y.
5  Let $X_1, \ldots, X_n$ be independent and identically distributed with common density $f_X(x) = x^{-2} I_{[1,\infty)}(x)$. Set $Y = \min[X_1, \ldots, X_n]$. Does $\mathscr{E}[X_1]$ exist? If so, find it. Does $\mathscr{E}[Y]$ exist? If so, find it.
6 Let X and Y be two random variables having finite means.
(a) Prove or disprove: 8[max [X, Yl] > max [8[X], 8[ Y]].
(b) Prove or disprove: 8[max [X, Yl + min [X, Yl] = 8[X] + 8[ Y].
7 The area of a rectangle is obtained by first measuring the length and width and
then multiplying the two measurements together. Let X denote the measured
length, Y the measured width. Assume that the measurements X and Y are
random variables with jOint probability density function given by Ix. l'(x, y) =
klr.9L.1.ILJ(x)hsw.I.2WJ(Y), where Land Ware parameters satisfying L > W> 0
and k is a constant which may depend on Land W.
(a) Find 8[XYl and var [XYl.
(b) Find the distribution of XY.
8 If X and Yare independent random variables with (negative) exponential dis-
tributions having respective parameters Al and A2 , find 8[max [X, Yl].
9 Projectiles are fired at the origin of an xy coordinate system. Assume that the
point which is hit, say (X, y), consists of a pair of independent standard normal
random variables. For two projectiles fired independently of one another, let
(XI, Yl ) and (X2 , Y 2 ) represent the points which are hit, and let Z be the distance
between them. Find the distribution of Z2. HINT: What is the distribution of
(X2 - X I )2? Of (Y2 - y 1 )2? Is (X2 - X I )2 independent of (Y2 - y.)2?
10  A certain explosive device will detonate if any one of n short-lived fuses lasts longer than .8 seconds. Let $X_i$ represent the life of the ith fuse. It can be assumed that each $X_i$ is uniformly distributed over the interval 0 to 1 second. Furthermore, it can be assumed that the $X_i$'s are independent.
(a) How many fuses are needed (i.e., how large should n be) if one wants to be 95 percent certain that the device will detonate?
(b) If the device has nine fuses, what is the average life of the fuse that lasts the longest?
11 Suppose that random variable Xn has a c.d.f. given by [(n - l)/n] <l>(x) + (l/n)Fn(x),
where <l> (.) is the c.d.f. of a standard normal and for each n Fn(·) is a c.d.f. What
is the limiting distribution of Xn?
12 Let X and Y be independent random variables each having a geometric distribu-
tion.
*(a) Find the distribution of X/(X + Y). [Define X/(X + Y) to be zero if
X+ y=o.]
(b) Find the joint moment generating function of X and X + Y.
with the satellite during n orbits. Assume that the $X_i$'s are independent and identically distributed Poisson random variables having mean λ.
(a) Find $\mathscr{E}[S_n]$ and $\operatorname{var}[S_n]$.
(b) If n = 100 and λ = 4, find approximately $P[S_{100} > 440]$.
19 How many light bulbs should you buy if you want to be 95 percent certain that
you will have 1000 hours of light if each of the bulbs is known to have a lifetime
that is (negative) exponentially distributed with an average life of 100 hours?
(a) Assume that an the bulbs are burning simultaneously.
(b) Assume that one bulb is used until it burns out andlhen it is replaced. etc.
20  (a) If $X_1, \ldots, X_n$ are independent and identically distributed gamma random variables, what is the distribution of $X_1 + \cdots + X_n$?
(b) If $X_1, \ldots, X_n$ are independent gamma random variables and if $X_i$ has parameters $r_i$ and λ, $i = 1, \ldots, n$, what is the distribution of $X_1 + \cdots + X_n$?
21  (a) If $X_1, \ldots, X_n$ are independent identically distributed geometric random variables, what is the distribution of $X_1 + \cdots + X_n$?
(b) If $X_1, \ldots, X_n$ are independent identically distributed geometric random variables with density $\theta(1 - \theta)^{x-1} I_{\{1,2,\ldots\}}(x)$, what is the distribution of $X_1 + \cdots + X_n$?
(c) If $X_1, \ldots, X_n$ are independent identically distributed negative binomial random variables, what is the distribution of $X_1 + \cdots + X_n$?
(d) If $X_1, \ldots, X_n$ are independent negative binomial random variables and if $X_i$ has parameters $r_i$ and p, what is the distribution of $X_1 + \cdots + X_n$?
*22  Kitty Oil Co. has decided to drill for oil in 10 different locations; the cost of drilling at each location is \$10,000. (Total cost is then \$100,000.) The probability of finding oil in a given location is only .1, but if oil is found at a given location, then the amount of money the company will get selling oil (excluding the initial \$10,000 drilling cost) from that location is an exponential random variable with mean \$50,000. Let Y be the random variable that denotes the number of locations where oil is found, and let Z denote the total amount of money received from selling oil from all the locations.
(a) Find $\mathscr{E}[Z]$.
(b) Find $P[Z > 100{,}000 \mid Y = 1]$ and $P[Z > 100{,}000 \mid Y = 2]$.
(c) How would you find $P[Z > 100{,}000]$? Is $P[Z > 100{,}000] > \tfrac12$?
23  If $X_1, \ldots, X_n$ are independent Poisson distributed random variables, show that the conditional distribution of $X_1$, given $X_1 + \cdots + X_n$, is binomial.
*24  Assume that $X_1, \ldots, X_{k+1}$ are independent Poisson distributed random variables with respective parameters $\lambda_1, \ldots, \lambda_{k+1}$. Show that the conditional distribution of $X_1, \ldots, X_k$ given that $X_1 + \cdots + X_{k+1} = n$ has a multinomial distribution with parameters $n$, $\lambda_1/\lambda, \ldots, \lambda_k/\lambda$, where $\lambda = \lambda_1 + \cdots + \lambda_{k+1}$.
25  If X has a uniform distribution over the interval $(-\pi/2, \pi/2)$, find the distribution of $Y = \tan X$.
26  If X has a normal distribution with mean μ and variance $\sigma^2$, find the distribution, mean, and variance of $Y = e^X$.
27  Suppose X has c.d.f. $F_X(x) = \exp[-e^{-(x-\alpha)/\beta}]$. What is the distribution of $Y = \exp[-(X - \alpha)/\beta]$?
28 Let X have density
1 x lO
-
*54  Let $X_1$ and $X_2$ be independent random variables, each normally distributed with parameters $\mu = 0$ and $\sigma^2 = 1$. Find the joint distribution of $Y_1 = X_1 + X_2$ and $Y_2 = X_1/X_2$. Find the marginal distribution of $Y_1$ and of $Y_2$. Are $Y_1$ and $Y_2$ independent?
55  If the joint distribution of X and Y is given by
$$f_{X,Y}(x, y) = 2e^{-(x+y)} I_{[0,y]}(x)\,I_{[0,\infty)}(y),$$
find the joint distribution of X and X + Y. Find the marginal distributions of X and X + Y.
56  Let $f_{X,Y}(x, y) = K(x + y)\,I_{(0,1)}(x)\,I_{(0,1)}(y)\,I_{(0,1)}(x + y)$.
(a) Find $f_X(\cdot)$.
(b) Find the joint and marginal distributions of X + Y and Y - X.
57  Suppose $f_{X,Y\mid Z}(x, y \mid z) = [z + (1 - z)(x + y)]\,I_{(0,1)}(x)\,I_{(0,1)}(y)$ for $0 < z < 2$, and $f_Z(z) = \tfrac12 I_{[0,2]}(z)$.
(a) Find $\mathscr{E}[XY]$.
(b) Are X and Y independent? Verify.
(c) Are X and Z independent? Verify.
(d) Find the joint distribution of X and X + Y.
(e) Find the distribution of $\max[X, Y] \mid Z = z$.
(f) Find the distribution of $(X + Y) \mid Z = z$.
58  A system will function as long as at least one of three components functions. When all three components are functioning, the distribution of the life of each is exponential with parameter $\tfrac13\lambda$; when only two are functioning, the distribution of the life of each of the two is exponential with parameter $\tfrac12\lambda$; and when only one is functioning, the distribution of its life is exponential with parameter λ.
(a) What is the distribution of the lifetime of the system?
(b) Suppose now that only one component (of the three components) is used at a time and it is replaced when it fails. What is the distribution of the lifetime of such a system?
59  The system in the sketch will function as long as component $C_1$ and at least one of the components $C_2$ and $C_3$ functions. Let $X_i$ be the random variable denoting the lifetime of component $C_i$, $i = 1, 2, 3$. Let $Y = \max[X_2, X_3]$ and $Z = \min[X_1, Y]$. Assume that the $X_i$'s are independent (negative) exponential random variables with mean 1.
(a) Find $\mathscr{E}[Z]$ and $\operatorname{var}[Z]$.
(b) Find the distribution of the lifetime of the system.
(Sketch: $C_1$ in series with the parallel pair $C_2$, $C_3$.)
introduced in Chap. III is given. Sampling from the normal distribution is considered in Sec. 4, where the chi-square, F, and t distributions are defined. Order statistics are discussed in the final section; they, like sample moments, are important and useful statistics.
2 SAMPLING
2.1 Inductive Inference
Up to now we have been concerned with certain aspects of the theory of prob-
ability, including distribution theory. Now the subject of sampling brings us
to the theory of statistics proper, and here we shall consider briefly one important
area of the theory of statistics and its relation to sampling.
Progress in science is often ascribed to experimentation. The research
worker performs an experiment and obtains some data. On the basis of the .
data, certain conclusions are drawn. The conclusions usually go beyond the
materials and operations of the particular experiment. In other words, the
scientist may generalize from a particular experiment to the class of all similar
experiments. This sort of extension from the particular to the general is called
inductive inference. It is one way in which new knowledge is found.
Inductive inference is well known to be a hazardous process. In fact, it
is a theorem of logic that in inductive inference uncertainty is present. One
simply cannot make absolutely certain generalizations. However, uncertain
inferences can be made, and the degree of uncertainty can be measured if the
experiment has been performed in accordance with certain principles. One
function of statistics is the provision of techniques for making inductive in-
ferences and for measuring the degree of uncertainty of such inferences'. Un-
certainty is measured in terms of probability, and that is the reason we have
devoted so much time to the theory of probability.
Before proceeding further we shall say a few words about another kind of
inference-deductive inference. While conclusions which are reached by induc-
tive inference are only prObable, those reached by deductive inference are con-
clusive. To illustrate deductive inference, consider the following two statements:
(i) One of the interior angles of each right triangle equals 90°,
(ii) Triangle A is a right triangle.
If we accept these two statements, then we are forced to the conclusion:
(iii) One of the angles of triangle A eq uals 90°.
(i) Major premise: All West Point graduates are over 18 years of age.
(ii) Minor premise: John is a West Point graduate.
(iii) Conclusion: John is over 18 years of age.
West Point graduates is a subset of all persons over 18 years old, and John
is an element in the subset of West Point graduates; hence John is also an element
in the set of persons who are over 18 years old.
While deductive inference is extremely important, much of the new knowl-
edge in the real world comes about by the process of inductive inference. In the
science of mathematics, for example, deductive inference is used to prove the-
'orems, while in the empirical sciences inductive inference is used to find new
knowledge.
Let us illustrate inductive inference by a simple example. Suppose that
we have a storage bin which contains (let us say) 10 million flower seeds which
we know will each produce either white or red flowers. The information which
we want is: How many (or what percent) of these 10 million seeds will produce
white flowers? Now the only way in which we can be sure that this question
is answered correctly is to plant every seed and observe the number producing
white flowers. However, this is not feasible since we want to sell the seeds.
Even if we did not want to sell the seeds, we would prefer to obtain an answer
without expending so much effort. Of course, without planting each seed and
observing the color of flower that each produces we cannot be certain of the
number of seeds producing white flowers. Another thought which occurs is:
Can we plant a· few of the seeds and, on the basis of the colors of these few
flowers, make a statement as to' how many of the 10 mi11ion seeds will produce
In the example in the previous subsection the 10 million seeds in the stor-
age bin form the target population. The target population may be all the dairy
cattle in Wisconsin on a certain date, the prices of bread in New York City on a
certain date, the hypothetical sequence of heads and tails obtained by tossing a
certain coin an infinite number of times, the hypothetical set of an infinite
number of measurements of the velocity of light, and so forth. The important
thing is that the target population must be capable of being quite well defined;
it may be real or hypothetical. .
The problem of inductive inference is regarded as follows from the point
of view of statistics: The object of an investigation is to find out something about
a certain target population. It is generally impossible or impractical to examine
the entire population, but one may examine a part of it (a sample from it) and,
on the basis of this limited investigation, make inferences regarding the entire
target population.
The problem immediately arises as to how the sample of the population
should be selected. We stated in the previous section that we could make prob-
abilistic statements about the population if the sample is selected in a certain
fashion. Of particular importance is the case of a simple random sample,
usually called a random sample, which is defined in Definition 2 below for any
population which has a density. That is, we assume that each element in our
population has some numerical value associated with it and the distribution of
these numerical values is given by a density. For such a population we define
a random sample.
In the example in the previous subsection the 10 million seeds in the stor-
age bin formed the population from which we propose to sample. Each seed is
an element of the population and will produce a white or red flower; so, strictly
speaking, there is not a numerical value associated with each element of the
population. However, if we, say, associate the number 1 with white and the
num ber 0 with red, then there is a numerical value associated with each element
of the population, and we can discuss whether or not a particular sample is
random. The random variable Xi is then 1 or 0 depending on whether the ith
seed sampled produces a white or red flower, i 1, ... , n. Now if the sampling
of seeds is performed in such a way that the random variables Xl' ... , Xn are
independent and have the same density. then, according to Definition 2, the
sample is caned random.
An important part of the definition of a random sample is the meaning of the random variables $X_1, \ldots, X_n$. The random variable $X_i$ is a representation for the numerical value that the ith item (or element) sampled will assume. After the sample is observed, the actual values of $X_1, \ldots, X_n$ are known, and as usual, we denote these observed values by $x_1, \ldots, x_n$. Sometimes the observations $x_1, \ldots, x_n$ are called a random sample if $x_1, \ldots, x_n$ are the values of $X_1, \ldots, X_n$, where $X_1, \ldots, X_n$ is a random sample.
Often it is not possible to select a random sample from the target popula-
tion, but a random sample can be selected from some related population. To
distinguish the two populations, we define sampled population.
Xl is called the first observation, and X2 the second observation. The pair of
numbers (Xl' x 2 ) determines a point in a plane, and the collection of all such
pairs of numbers that might have been drawn forms a bivariate population.
We are interested in the distribution (bivariate) of this bivariate population in
terms of the original density f( . ). The pair of numbers (Xl' X2) is a value of the
joint random variable (Xl' X 2), and Xl' X 2 is a random sample (of size 2)
from I( ,). By definition of random sample, the joint distribution of Xl and
X 2 , which we call the distribution of our random sample of size 2, is given by
fxt, X2(X l , X2) = f(x l )f(x 2)·
As a simple example, suppose that X can have only two values, 0 and 1, with probabilities $q = 1 - p$ and p, respectively. That is, X is a discrete random variable which has the Bernoulli distribution
$$f(x) = p^x q^{1-x} I_{\{0,1\}}(x). \qquad (1)$$
The joint density for a random sample of two values from $f(\cdot)$ is
$$f_{X_1,X_2}(x_1, x_2) = f(x_1)f(x_2) = p^{x_1+x_2} q^{2-x_1-x_2} I_{\{0,1\}}(x_1) I_{\{0,1\}}(x_2). \qquad (2)$$
is a statistic, and
$$\tfrac12\{\min[X_1, \ldots, X_n] + \max[X_1, \ldots, X_n]\}$$
is also a statistic. If $f(x; \theta) = \phi_{\theta,1}(x)$ and θ is unknown, $\bar{X}_n - \theta$ is not a statistic since it depends on θ, which is unknown. ////
Next we shall define and discuss some important statistics, the sample
moments.
$$M_r = \frac1n\sum_{i=1}^{n}\left(X_i - \bar{X}_n\right)^r. \qquad (5) \quad ////$$
with a density f(·)· The expected value of the rth sample moment (about
0) is equal to the rth population moment; that is,
Also,
PROOF
$$\operatorname{var}[M_r'] = \operatorname{var}\left[\frac1n\sum_{i=1}^{n} X_i^r\right] = \left(\frac1n\right)^2\operatorname{var}\left[\sum_{i=1}^{n} X_i^r\right] = \left(\frac1n\right)^2\sum_{i=1}^{n}\operatorname{var}[X_i^r],$$
and
$$\operatorname{var}[\bar{X}_n] = \frac1n\,\sigma^2, \qquad (8)$$
where μ and $\sigma^2$ are, respectively, the mean and variance of $f(\cdot)$. ////
The reason for taking 8 2 rather than M 2 as our definition of the sample
variance (both measure dispersion in the sample) is that the expected value of
8 2 equals the population variance.
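A small simulation (not from the text) makes this point concrete; the normal population and sample size used below are assumptions chosen only for illustration.

```python
# Illustrative sketch: E[S^2] (divisor n - 1) versus E[M_2] (divisor n) by
# simulation; only S^2 is unbiased for the population variance, here
# sigma^2 = 4 with samples of size n = 5 from N(0, 4).
import numpy as np

rng = np.random.default_rng(9)
samples = rng.normal(0.0, 2.0, size=(200_000, 5))

s2 = samples.var(axis=1, ddof=1)      # S^2, divisor n - 1
m2 = samples.var(axis=1, ddof=0)      # M_2, divisor n

print(s2.mean())                      # near sigma^2 = 4
print(m2.mean())                      # near (n - 1)/n * sigma^2 = 3.2
```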
The proof of the following remark is left as an exercise.
Remark  $S_n^2 = S^2 = \dfrac{1}{2n(n-1)}\displaystyle\sum_{i=1}^{n}\sum_{j=1}^{n}(X_i - X_j)^2$. ////
Then
PROOF  (Only the first part will be proved.) Recall that $\sigma^2 = \mathscr{E}[(X - \mu)^2]$ and $\mu_r = \mathscr{E}[(X - \mu)^r]$. We commence by noting and proving an identity that is quite useful:
$$\sum_{i=1}^{n}(X_i - \mu)^2 = \sum_{i=1}^{n}(X_i - \bar{X})^2 + n(\bar{X} - \mu)^2, \qquad (11)$$
since the cross-product term in the expansion of $\sum[(X_i - \bar{X}) + (\bar{X} - \mu)]^2$ vanishes. Hence
$$\mathscr{E}[S^2] = \mathscr{E}\left[\frac{1}{n-1}\sum(X_i - \bar{X})^2\right] = \frac{1}{n-1}\,\mathscr{E}\left[\sum_{i=1}^{n}(X_i - \mu)^2 - n(\bar{X} - \mu)^2\right] = \frac{1}{n-1}\left(n\sigma^2 - n\,\frac{\sigma^2}{n}\right) = \sigma^2.$$
Although the derivation of the formula for the variance of $S^2$ can be accomplished by utilizing the above identity [Eq. (11)] and
$$\bar{X} - \mu = \frac1n\sum X_i - \frac1n\,n\mu = \frac1n\sum(X_i - \mu),$$
3 SAMPLE MEAN
The first sample moment is the sample mean, defined to be
$$\bar{X} = \bar{X}_n = \frac1n\sum_{i=1}^{n} X_i,$$
where $X_1, X_2, \ldots, X_n$ is a random sample from a density $f(\cdot)$. $\bar{X}$ is a function of the random variables $X_1, \ldots, X_n$, and hence theoretically the distribution of $\bar{X}$ can be found. In general, one would suspect that the distribution of $\bar{X}$
depends on the density f(·) from which the random sample was selected, and
indeed it does. Two characteristics of the distribution of X, its mean and
variance, do not depend on the density f( . ) per se but depend on]y on two charac-
teristics of the density f(·). This idea is reviewed in the following subsection,
while succeeding subsections consider other results involving the samp]e mean.
The exact distribution of X will be given for certain specific densities f( . ).
It might be helpful in reading this section to think of the sample mean X
as an estimate of the mean J1 of the density f(·) from which the samp]e was
selected. We might think that one purpose in taking the sample is to estimate
J1 with X.
$$\mathscr{E}[\bar{X}_n] = \mu \qquad\text{and}\qquad \operatorname{var}[\bar{X}_n] = \frac{\sigma^2}{n}. \qquad (12) \quad ////$$
and no inference from the sample to the popUlation would be necessary. There-
fore, it seemS that we would like to have the random sample tell us something
about the unknown parameter 0. This problem will be discussed in detail in
the next chapter. In this subsection we shall discuss a related particular
problem.
Let S[X] be denoted by J-l in the density f(·). The problem is to estimate
J-l. In a loose sense, 8[X] is the average of an infinite number of values of the
random variable X. In any real-world problem we can observe only a finite
number of values of the random variable X. A very crucial question then is:
Using only a finite number of values of X (a random sample of size n, say), can
any reliable inferences be made about S[X], the average of an infinite number of
values of X? The answer is "yes"; reliable inferences about S[X] can be made
by using only a finite sample, and we shaH demonstrate this by proving what is
called the weak law of large numbers. In words, the law states the following: A
positive integer n can be determined such that if a random sample of size n or
larger is taken from a population with the density 1(') (with 8[X] = 11), the
probability can be made to be as close to I as desired that the sample mean X
will deviate from J-l by less than any arbitrarily specified small quantity. More
precisely, the weak law of large numbers states that for any two chosen small
numbers e and c5, where e > 0 and 0 < c5 < 1, there exists an integer n such that
if a random sample of size n or larger is obtained from f(·) and the sample
mean, denoted by Xn , computed, then the probabiJity is greater than I - c5
(i.e., as close to I as desired) that Xn deviates from 11 by less than e (i.e., is ar-
bitrarily close to J-l). In symbols this is written: For any e> 0 and 0 < b < I
there exists an integer n such that for all integers m> n
The weak law of large numbers is proved using the Chebyshev inequality given
in Chap. II.
Let $g(\bar{X}_n) = (\bar{X}_n - \mu)^2$ and $k = \varepsilon^2$; then
$$P[-\varepsilon < \bar{X}_n - \mu < \varepsilon] = P[|\bar{X}_n - \mu| < \varepsilon] = P[(\bar{X}_n - \mu)^2 < \varepsilon^2] \ge 1 - \frac{\mathscr{E}[(\bar{X}_n - \mu)^2]}{\varepsilon^2}. \quad ////$$
Below are two examples to illustrate how the weak law of large numbers
can be used.
EXAMPLE 5  How large a sample must be taken in order that you are 99 percent certain that $\bar{X}_n$ is within $.5\sigma$ of μ? We have $\varepsilon = .5\sigma$ and $\delta = .01$. Thus it suffices to take $n \ge \sigma^2/(\delta\varepsilon^2) = 1/[(.01)(.25)] = 400$. ////
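The Chebyshev bound used in Example 5 is conservative, as the following illustrative sketch (not part of the text) suggests; the exponential population is an assumption.

```python
# Illustrative sketch: checking the sample size n = 400 from Example 5 by
# simulation with an exponential(1) population (mu = sigma = 1).  The bound
# guarantees at least 99 percent coverage; the actual coverage is higher
# because Chebyshev's inequality is conservative.
import numpy as np

rng = np.random.default_rng(10)
n, mu, sigma = 400, 1.0, 1.0
xbar = rng.exponential(1.0, size=(100_000, n)).mean(axis=1)

coverage = (np.abs(xbar - mu) < 0.5 * sigma).mean()
print(coverage)        # well above 0.99
```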
Theorem 5 tells us that the limiting distribution of Z_n (which is X̄_n standardized) is a standard normal distribution, or it tells us that X̄_n itself is approximately, or asymptotically, distributed as a normal distribution with mean μ and variance σ²/n.
The astonishing thing about Theorem 5 is the fact that nothing is said about the form of the original density function. Whatever the distribution function, provided only that it has a finite variance, the sample mean will have approximately the normal distribution for large samples. The condition that the variance be finite is not a critical restriction so far as applied statistics is concerned because in almost any practical situation the range of the random variable will be finite, in which case the variance must necessarily be finite.
The importance of Theorem 5, as far as practical applications are concerned, is the fact that the mean X̄_n of a random sample from any distribution with finite variance σ² and mean μ is approximately distributed as a normal random variable with mean μ and variance σ²/n.
We shall not be able to prove Theorem 5 because it requires rather ad-
vanced mathematical techniques. However, in order to make the theorem
plausible, we shall outline a proof for the more restricted situation in which
the distribution has a moment generating function. The argument will be
essentially a matter of showing that the moment generating function for the
sample mean approaches the moment generating function for the normal
distribution.
Recall that the moment generating function of a standard normal distribution is given by e^(t²/2). (See Subsec. 3.2 of Chap. III.) Let m(t) = e^(t²/2). Let m_{Z_n}(t) denote the moment generating function of Z_n. It is our purpose to show that m_{Z_n}(t) must approach m(t) when n, the sample size, becomes large.
Now

m_{Z_n}(t) = ℰ[e^{tZ_n}] = ℰ[exp(t Σ_{i=1}^n (X_i − μ)/(σ√n))] = ∏_{i=1}^n ℰ[exp(t(X_i − μ)/(σ√n))],

using the independence of X_1, ..., X_n. Now if we let Y_i = (X_i − μ)/σ, then m_{Y_i}(t), the moment generating function of Y_i, is independent of i since all Y_i have the same distribution. Let m_Y(t) denote m_{Y_i}(t); then

m_{Z_n}(t) = [m_Y(t/√n)]^n.

The rth derivative of m_Y(t/√n) evaluated at t = 0 gives us the rth moment about the mean of the density f(·) divided by (σ√n)^r, so we may write

m_Y(t/√n) = 1 + (1/n)[t²/2 + (terms that tend to 0 as n → ∞)].        (17)

Now lim (1 + u/n)^n = e^{t²/2}, where u represents the expression within the parentheses in Eq. (17). We have lim m_{Z_n}(t) = e^{t²/2}, so that in the limit Z_n has the same moment generating function as a standard normal and, by a theorem similar to Theorem 7 in Chap. II, has the same distribution.
The degree of approximation depends, of course, on the sample size and on the particular density f(·). The approach to normality is illustrated in Fig. 2 for the particular function defined by f(x) = e^{−x} I_{(0,∞)}(x). The solid curves give the actual distributions, while the dashed curves give the normal approximations. Figure 2a gives the original distribution, which corresponds to samples of size 1; Fig. 2b shows the distribution of sample means for n = 3; Fig. 2c
FIGURE 2
gives the distribution of sample means for n = 10. The curves rather exaggerate the approach to normality because they cannot show what happens on the tails of the distribution. Ordinarily distributions of sample means approach normality fairly rapidly with the sample size in the region of the mean, but more slowly at points distant from the mean; usually the greater the distance of a point from the mean, the more slowly the normal approximation approaches the actual distribution.
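The following Python sketch (assuming numpy and scipy are available; the sample sizes and seed are illustrative choices) shows numerically how P[Z_n ≤ z] approaches the standard normal cumulative Φ(z) when sampling from the exponential density above:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu, sigma = 1.0, 1.0          # mean and standard deviation of f(x) = e^{-x}, x > 0

for n in (1, 3, 10, 50):
    xbar = rng.exponential(scale=1.0, size=(100000, n)).mean(axis=1)
    z = (xbar - mu) / (sigma / np.sqrt(n))      # the standardized sample mean Z_n
    for q in (-1.0, 0.0, 1.0):
        est = np.mean(z <= q)
        print(f"n={n:3d}  P[Z_n <= {q:+.0f}]  simulated={est:.3f}  normal={norm.cdf(q):.3f}")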
In the following subsections we will give the exact distribution of the sample mean for some specific densities f(·).
We know (see Example 9 of Chap. V) that Σ_{i=1}^n X_i has a binomial distribution; that is,

P[X̄_n = j/n] = P[Σ_{i=1}^n X_i = j] = (n over j) p^j q^{n−j}    for j = 0, 1, ..., n,

which gives the exact distribution of the sample mean for a random sample from a Bernoulli density with parameter p.
Likewise, for a random sample from a Poisson density with parameter λ, Σ_{j=1}^n X_j has a Poisson distribution with parameter nλ; hence

P[X̄_n = k/n] = P[Σ_{j=1}^n X_j = k] = e^{−nλ}(nλ)^k / k!    for k = 0, 1, 2, ...,        (19)
which gives the exact distribution of the sample mean for a sample from a
Poisson density.
For a random sample from the exponential density f(x; θ) = θe^{−θx} I_{(0,∞)}(x), the sum Σ_{i=1}^n X_i has a gamma distribution with parameters n and θ; that is,

f_{ΣX_i}(y) = [1/Γ(n)] θ^n y^{n−1} e^{−θy}    for y > 0,

and so

P[X̄_n ≤ x] = ∫_0^{nx} [1/Γ(n)] z^{n−1} θ^n e^{−θz} dz = ∫_0^x [1/Γ(n)] (nu)^{n−1} n θ^n e^{−nθu} du;

that is, X̄_n has a gamma distribution with parameters n and nθ.
The derivation of the above (using mathematical induction and the convolution formula) is rather tedious and is omitted. Instead let us look at the particular cases n = 1, 2, 3.
FIGURE 3
then X̄_n has this same Cauchy distribution for any n. That is, the sample mean has the same distribution as one of its components. We are unable to easily verify this result. The moment-generating-function technique fails us since the moment generating function of a Cauchy distribution does not exist. Mathematical induction in conjunction with the convolution formula produces integrations that are apt to be difficult for a nonadvanced calculus student to perform. The result, however, is easily obtained using complex-variable analysis. In fact, if we had defined the characteristic function of a random variable, which is a generalization of a moment generating function, then the above result would follow immediately from the fact that the product of the characteristic functions of independent and identically distributed random variables is the characteristic function of their sum. A major advantage of characteristic functions over moment generating functions is that they always exist.
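A brief Python simulation (a sketch only, assuming numpy; sample sizes and seed are arbitrary) shows that the sample mean of Cauchy observations does not concentrate as n grows, in contrast with the weak law of large numbers above:

import numpy as np

# The sample mean of standard Cauchy observations is again standard Cauchy,
# so its spread does not shrink with n.
rng = np.random.default_rng(2)
for n in (1, 10, 1000):
    xbar = rng.standard_cauchy(size=(100000, n)).mean(axis=1)
    q25, q75 = np.percentile(xbar, [25, 75])
    print(f"n={n:5d}  interquartile range of Xbar_n = {q75 - q25:.3f}")  # about 2 for every n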
One of the simplest of all the possible functions of a random sample is the
sample mean, and for a random sample from a normal distribution the dis-
tribution (exact) of the sample mean is also normal. This result first appeared
as a special case of Example 12 in Chap. V. It is repeated here.
m_{X̄}(t) = ℰ[e^{tX̄}] = ℰ[exp(t Σ_{i=1}^n X_i/n)] = ∏_{i=1}^n ℰ[exp(tX_i/n)]
        = ∏_{i=1}^n m_{X_i}(t/n) = ∏_{i=1}^n exp[μt/n + ½(σt/n)²]
        = exp[μt + ½(σt)²/n],

which is the moment generating function of a normal distribution with mean μ and variance σ²/n.
We will also want to consider the sample variance S² = [1/(n − 1)] Σ_{i=1}^n (X_i − X̄)², which "estimates" the unknown σ². A density function which plays a central role in sampling from the normal distribution is the chi-square density with k degrees of freedom,

f_X(x) = [1/Γ(k/2)] (1/2)^{k/2} x^{k/2−1} e^{−x/2} I_{(0,∞)}(x).        (21)
ℰ[X] = (k/2)/(1/2) = k,

var[X] = (k/2)/(1/2)² = 2k,        (22)

and

m_X(t) = [(1/2)/((1/2) − t)]^{k/2} = [1/(1 − 2t)]^{k/2}    for t < 1/2.        (23)
////
Theorem 7 If the random variables X_i, i = 1, 2, ..., k, are normally and independently distributed with means μ_i and variances σ_i², then

U = Σ_{i=1}^k (X_i − μ_i)²/σ_i²

has a chi-square distribution with k degrees of freedom.

PROOF Write Z_i = (X_i − μ_i)/σ_i, so that Z_1, ..., Z_k are independent standard normal random variables and U = Σ_{i=1}^k Z_i². Then

ℰ[exp(tZ_i²)] = ∫_{−∞}^{∞} (1/√(2π)) e^{tz²} e^{−z²/2} dz = ∫_{−∞}^{∞} (1/√(2π)) e^{−½(1−2t)z²} dz
            = [1/√(1 − 2t)] ∫_{−∞}^{∞} (√(1 − 2t)/√(2π)) e^{−½(1−2t)z²} dz = 1/√(1 − 2t)    for t < 1/2,

the latter integral being unity since it represents the area under a normal curve with variance 1/(1 − 2t). Hence,

ℰ[exp(tU)] = ∏_{i=1}^k ℰ[exp(tZ_i²)] = [1/(1 − 2t)]^{k/2}    for t < 1/2,

which is the moment generating function of a chi-square distribution with k degrees of freedom. ////
we could estimate σ² with (1/n) Σ_{i=1}^n (X_i − μ)² (note that ℰ[(1/n) Σ (X_i − μ)²] = (1/n) Σ ℰ[(X_i − μ)²] = (1/n) Σ σ² = σ²), and find the distribution of (1/n) Σ_{i=1}^n (X_i − μ)² by using the corollary.

Theorem 8 If Z_1, ..., Z_n is a random sample from a standard normal distribution, then (i) Z̄ has a normal distribution with mean 0 and variance 1/n; (ii) Z̄ and Σ_{i=1}^n (Z_i − Z̄)² are independent; and (iii) Σ_{i=1}^n (Z_i − Z̄)² has a chi-square distribution with n − 1 degrees of freedom.
PROOF (Our proof will be incomplete.) (i) is a special case of Theorem 6. We will prove (ii) for the case n = 2. If n = 2,

Z̄ = (Z_1 + Z_2)/2
and

m_{Z_1+Z_2}(t_1) = exp t_1²,

and, similarly,

m_{Z_2−Z_1}(t_2) = exp t_2².

Also,

m_{Z_1+Z_2, Z_2−Z_1}(t_1, t_2) = ℰ[e^{t_1(Z_1+Z_2) + t_2(Z_2−Z_1)}] = ℰ[e^{(t_1−t_2)Z_1}] ℰ[e^{(t_1+t_2)Z_2}]
    = exp[½(t_1 − t_2)²] exp[½(t_1 + t_2)²] = exp(t_1² + t_2²) = m_{Z_1+Z_2}(t_1) m_{Z_2−Z_1}(t_2),

and since the joint moment generating function factors into the product of the marginal moment generating functions, Z_1 + Z_2 and Z_2 − Z_1 are independent. Since Z̄ is a function of Z_1 + Z_2 and Σ(Z_i − Z̄)² = ½(Z_2 − Z_1)² is a function of Z_2 − Z_1, it follows that Z̄ and Σ(Z_i − Z̄)² are independent for n = 2.
To prove (iii), we accept the independence of Z̄ and Σ_{i=1}^n (Z_i − Z̄)² for arbitrary n. Let us note that

Σ Z_i² = Σ (Z_i − Z̄ + Z̄)² = Σ (Z_i − Z̄)² + 2Z̄ Σ (Z_i − Z̄) + Σ Z̄² = Σ (Z_i − Z̄)² + nZ̄²;

also Σ (Z_i − Z̄)² and nZ̄² are independent; hence

m_{ΣZ_i²}(t) = m_{Σ(Z_i−Z̄)²}(t) · m_{nZ̄²}(t).

So,

m_{Σ(Z_i−Z̄)²}(t) = m_{ΣZ_i²}(t)/m_{nZ̄²}(t) = (1 − 2t)^{−n/2}/(1 − 2t)^{−1/2} = (1 − 2t)^{−(n−1)/2}    for t < 1/2,
noting that √n Z̄ has a standard normal distribution, implying that nZ̄² has a chi-square distribution with one degree of freedom. We have shown that the moment generating function of Σ (Z_i − Z̄)² is that of a chi-square distribution with n − 1 degrees of freedom, which completes the proof. ////
Theorem 8 was stated for a random sample from a standard normal distribution, whereas if we wish to make inferences about μ and σ², our sample is from a normal distribution with mean μ and variance σ². Let X_1, ..., X_n denote the sample from the normal distribution with mean μ and variance σ²; then the Z_i of Theorem 8 could be taken equal to (X_i − μ)/σ.
(i) of Theorem 8 becomes:
(i') Z̄ = (1/n) Σ (X_i − μ)/σ = (X̄ − μ)/σ has a normal distribution with mean 0 and variance 1/n.
(ii) of Theorem 8 becomes:
(ii') Z̄ = (X̄ − μ)/σ and Σ (Z_i − Z̄)² = Σ [(X_i − μ)/σ − (X̄ − μ)/σ]² = Σ [(X_i − X̄)²/σ²] are independent, which implies X̄ and Σ (X_i − X̄)² are independent.
(iii) of Theorem 8 becomes:
(iii') Σ (Z_i − Z̄)² = Σ [(X_i − X̄)²/σ²] has a chi-square distribution with n − 1 degrees of freedom.
Corollary If S² = [1/(n − 1)] Σ_{i=1}^n (X_i − X̄)² is the sample variance of a random sample from a normal distribution with mean μ and variance σ², then

(n − 1)S²/σ² = Σ (X_i − X̄)²/σ² has a chi-square distribution with n − 1 degrees of freedom,        (24)

and the density of S² is

f_{S²}(y) = {[(n − 1)/(2σ²)]^{(n−1)/2} / Γ[(n − 1)/2]} y^{(n−3)/2} e^{−(n−1)y/2σ²} I_{(0,∞)}(y).        (25)
////
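As an illustrative check (a Python sketch assuming numpy and scipy; the parameter values are arbitrary), one can verify by simulation that (n − 1)S²/σ² behaves like a chi-square variable with n − 1 degrees of freedom and that X̄ and S² are uncorrelated, as independence requires:

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
n, mu, sigma = 8, 5.0, 2.0
x = rng.normal(mu, sigma, size=(200000, n))
xbar = x.mean(axis=1)
s2 = x.var(axis=1, ddof=1)                 # sample variance S^2 (divisor n - 1)
u = (n - 1) * s2 / sigma**2                # should be chi-square with n - 1 d.f.

print("mean of U:", u.mean(), " (chi-square mean:", n - 1, ")")
print("var  of U:", u.var(),  " (chi-square var :", 2 * (n - 1), ")")
print("P[U <= chi-square median]:", np.mean(u <= chi2.ppf(0.5, n - 1)))  # near 0.5
print("corr(Xbar, S^2):", np.corrcoef(xbar, s2)[0, 1])                   # near 0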
All the results of this section apply only to normal populations. In fact,
it can be proved that for no other distributions (i) are the sample mean and
sample variance independently distributed or (ii) is the sample mean exactly
normally distributed.
Let U and V be independent chi-square-distributed random variables with m and n degrees of freedom, respectively.        (26)

We shall find the distribution of the quantity

X = (U/m)/(V/n),        (27)

which is sometimes referred to as the variance ratio. To find the distribution of X, we make the transformation X = (U/m)/(V/n) and Y = V, obtain the joint distribution of X and Y, and then get the marginal distribution of X by integrating out the y variable. The Jacobian of the transformation is (m/n)y; so
and, integrating out y, we obtain the F density

f_X(x) = {Γ[(m + n)/2] / [Γ(m/2)Γ(n/2)]} (m/n)^{m/2} x^{m/2−1} / (1 + mx/n)^{(m+n)/2}    for x > 0.        (28)
The order in which the degrees of freedom are given is important since the density of the F distribution is not symmetrical in m and n. The number of degrees of freedom of the numerator of the ratio m/n that appears in Eq. (28) is always quoted first. Or, if the F-distributed random variable is a ratio of two independent chi-square-distributed random variables divided by their respective degrees of freedom, as in the derivation above, then the degrees of freedom of the chi-square random variable that appears in the numerator are always quoted first.
We have proved the following theorem.

Theorem 9 If U and V are independently distributed chi-square random variables with m and n degrees of freedom, respectively, then

X = (U/m)/(V/n)

has an F distribution with m and n degrees of freedom.

The following corollary shows how the result of Theorem 9 can be useful in sampling: for independent random samples of sizes m + 1 and n + 1 from normal populations with a common variance, the ratio

[Σ (X_i − X̄)²/m] / [Σ (Y_j − Ȳ)²/n]

has an F distribution with m and n degrees of freedom.
We close this subsection with several further remarks about the F dis-
tribution.
ℰ[X] = ℰ[(U/m)/(V/n)] = (n/m) ℰ[U] ℰ[1/V],

using the independence of U and V. Now ℰ[U] = m, and

ℰ[1/V] = {Γ[(n − 2)/2]/Γ(n/2)} (1/2)^{n/2} (1/2)^{−(n−2)/2} = 1/(n − 2);
and so

ℰ[X] = n/(n − 2)    for n > 2.

Also, if X has an F distribution with m and n degrees of freedom, then Y = 1/X has an F distribution with n and m degrees of freedom, and since

1 − p = P[Y ≤ y_{1−p}],

the pth quantile of the F distribution with m and n degrees of freedom is the reciprocal of the (1 − p)th quantile of the F distribution with n and m degrees of freedom.
Finally, if X has an F distribution with m and n degrees of freedom, then

W = (mX/n)/(1 + mX/n)        (30)

has a beta distribution. (See Problem 18.)

Theorem 10 states that if Z has a standard normal distribution, if U has a chi-square distribution with k degrees of freedom, and if Z and U are independent, then X = Z/√(U/k) has a Student's t distribution with k degrees of freedom, whose density is

f_X(x) = {Γ[(k + 1)/2]/Γ(k/2)} (1/√(kπ)) 1/(1 + x²/k)^{(k+1)/2}.        (31)
The following corollary shows how the result of Theorem 10 is applicable to sampling from a normal population.
We might note that for one degree of freedom the Student's t distribution reduces to a Cauchy distribution; and as the number of degrees of freedom increases, the Student's t distribution approaches the standard normal distribution. Also, the square of a Student's t-distributed random variable with k degrees of freedom has an F distribution with 1 and k degrees of freedom.
5 ORDER STATISTICS
We note that the Y_j are statistics (they are functions of the random sample X_1, X_2, ..., X_n) and are in order. Unlike the random sample itself, the order statistics are clearly not independent, for if Y_j > y, then Y_{j+1} > y.
We seek the distribution, both marginal and joint, of the order statistics. We have already found the marginal distributions of Y_1 = min[X_1, ..., X_n] and Y_n = max[X_1, ..., X_n] in Chap. V. Now we will find the marginal cumulative distribution of an arbitrary order statistic.
It is given by

F_{Y_α}(y) = Σ_{j=α}^n (n over j) [F(y)]^j [1 − F(y)]^{n−j}.        (33)

PROOF Define Z_i = I_{(−∞, y]}(X_i); then

Σ_{i=1}^n Z_i = the number of X_i ≤ y.

Note that Σ Z_i has a binomial distribution with parameters n and F(y). Now

F_{Y_α}(y) = P[Y_α ≤ y] = P[Σ Z_i ≥ α] = Σ_{j=α}^n (n over j) [F(y)]^j [1 − F(y)]^{n−j}.

The key step in the proof is the equivalence of the two events {Y_α ≤ y} and {Σ Z_i ≥ α}. If the αth order statistic is less than or equal to y, then surely the number of X_i less than or equal to y is greater than or equal to α, and conversely. ////
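A quick numerical check of Eq. (33) (a Python sketch assuming numpy and scipy; the choice of a Uniform(0, 1) population, n, α, and y is arbitrary) compares the binomial-sum formula with a simulation:

import numpy as np
from scipy.stats import binom

# F_{Y_alpha}(y) = P[Binomial(n, F(y)) >= alpha]; for Uniform(0, 1), F(y) = y.
rng = np.random.default_rng(4)
n, alpha, y = 5, 3, 0.4

exact = binom.sf(alpha - 1, n, y)                      # P[at least alpha of the X_i <= y]
sims = np.sort(rng.uniform(size=(100000, n)), axis=1)
simulated = np.mean(sims[:, alpha - 1] <= y)           # alpha-th order statistic <= y
print(exact, simulated)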
and

f_{Y_α}(y) = lim_{Δy→0} [F_{Y_α}(y + Δy) − F_{Y_α}(y)]/Δy = lim_{Δy→0} P[y < Y_α ≤ y + Δy]/Δy
  = lim_{Δy→0} P[(α − 1) of the X_i ≤ y; one X_i in (y, y + Δy]; (n − α) of the X_i > y + Δy]/Δy
  = lim_{Δy→0} {n!/[(α − 1)! 1! (n − α)!]} [F(y)]^{α−1} [F(y + Δy) − F(y)] [1 − F(y + Δy)]^{n−α}/Δy
  = {n!/[(α − 1)!(n − α)!]} [F(y)]^{α−1} [1 − F(y)]^{n−α} f(y).
Similarly, for x < y,

P[(α − 1) of the X_i ≤ x; one X_i in (x, x + Δx]; (β − α − 1) of the X_i in (x + Δx, y]; one X_i in (y, y + Δy]; (n − β) of the X_i > y + Δy]
  = {n!/[(α − 1)! 1! (β − α − 1)! 1! (n − β)!]} [F(x)]^{α−1} [F(y) − F(x + Δx)]^{β−α−1} [1 − F(y + Δy)]^{n−β} f(x) Δx f(y) Δy;

hence

f_{Y_α, Y_β}(x, y) = {n!/[(α − 1)!(β − α − 1)!(n − β)!]} [F(x)]^{α−1} [F(y) − F(x)]^{β−α−1} [1 − F(y)]^{n−β} f(x) f(y).
Also, for y_1 < y_2 < ... < y_n,

f_{Y_1, ..., Y_n}(y_1, ..., y_n)
  = lim P[one X_i in (y_1, y_1 + Δy_1]; ...; one X_i in (y_n, y_n + Δy_n]] / ∏_{i=1}^n Δy_i
  = lim n! [F(y_1 + Δy_1) − F(y_1)] · ... · [F(y_n + Δy_n) − F(y_n)] / ∏_{i=1}^n Δy_i
  = n! f(y_1) · ... · f(y_n),

the limits being taken as each Δy_i → 0.
f_{Y_α}(y) = {n!/[(α − 1)!(n − α)!]} [F(y)]^{α−1} [1 − F(y)]^{n−α} f(y);        (34)

f_{Y_α, Y_β}(x, y) = {n!/[(α − 1)!(β − α − 1)!(n − β)!]} [F(x)]^{α−1} [F(y) − F(x)]^{β−α−1} [1 − F(y)]^{n−β} f(x) f(y) I_{(x, ∞)}(y);        (35)

f_{Y_1, ..., Y_n}(y_1, ..., y_n) = n! f(y_1) · ... · f(y_n) for y_1 < y_2 < ... < y_n, and 0 otherwise.        (36)
////
Any set of marginal densities can be obtained from the joint density f_{Y_1, ..., Y_n}(y_1, ..., y_n) by simply integrating out the unwanted variables.
Note, however, that (1/n) Σ_{j=1}^n Y_j = (1/n) Σ_{i=1}^n X_i, the sample mean, which was the subject of Sec. 3 of this chapter. We define now some other functions of the order statistics.
(40)
which simplifies to
From Eq. (41), we can derive ℰ[R] = 2√3 σ(n − 1)/(n + 1). ////
Certain functions of the order statistics are again statistics and may be used to make statistical inferences. For example, both the sample median and the midrange can be used to estimate μ, the mean of the population. For the uniform density given in the above example, the variances of the sample mean, the sample median, and the sample midrange are compared in Problem 33.
Since for asymptotic results the sample size n increases, we let Y_1^{(n)} < Y_2^{(n)} < ... < Y_n^{(n)} denote the order statistics for a sample of size n. The superscript denotes the sample size. We will give the asymptotic distribution of that order statistic which is approximately the (np)th order statistic for a sample of size n for any 0 < p < 1. We say "approximately" the (np)th order statistic because np may not be an integer. Define p_n to be such that np_n is an integer and p_n is approximately equal to p; then Y_{np_n}^{(n)} is the (np_n)th order statistic for a sample of size n. (If X_1, ..., X_n are independent for each positive integer n, we will say X_1, ..., X_n, ... are independent.)
= lim_{n→∞} (1 + e^{−b_n y − log n})^{−n} = lim_{n→∞} [1 + (1/n) e^{−b_n y}]^{−n}.

Hence, if {a_n} and {b_n} are selected so that {a_n} = {log n} and {b_n} = {1}, respectively, then the limiting distribution of (Y_n^{(n)} − a_n)/b_n = Y_n^{(n)} − log n is lim_{n→∞} [1 + (1/n)e^{−y}]^{−n} = exp(−e^{−y}). ////
and

1/(n + 1) ≈ exp{−λ ℰ[Y_n^{(n)}]},

or

ℰ[Y_n^{(n)}] ≈ (1/λ) log (n + 1) ≈ (1/λ) log n.
lim_{n→∞} [1 − (1/n) e^{−λb_n y}]^n = exp(−e^{−y})    when b_n = 1/λ.

Hence the limiting distribution of (Y_n^{(n)} − a_n)/b_n = [Y_n^{(n)} − (1/λ) log n]/(1/λ) is exp(−e^{−y}). We note that we obtained the same limiting distribution here as in Example 8. Here we were sampling from an exponential distribution, and there we were sampling from a logistic distribution. ////
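The limiting form exp(−e^{−y}) is easy to observe by simulation. The following Python sketch (assuming numpy; λ, n, and the seed are arbitrary) compares the empirical distribution of λ[Y_n^{(n)} − (1/λ) log n] with the limit:

import numpy as np

rng = np.random.default_rng(5)
lam, n = 2.0, 1000
maxima = rng.exponential(scale=1.0 / lam, size=(100000, n)).max(axis=1)
w = (maxima - np.log(n) / lam) * lam      # (Y_n - (1/lam) log n) / (1/lam)

for y in (-1.0, 0.0, 1.0, 2.0):
    print(f"y={y:+.0f}  simulated={np.mean(w <= y):.3f}  limit={np.exp(-np.exp(-y)):.3f}")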
In each of the above two examples we were able to obtain the limiting distribution of (Y_n^{(n)} − a_n)/b_n by using the exact distribution of Y_n^{(n)} and ordinary algebraic manipulation. There are some rather powerful theoretical results concerning extreme-value statistics that tell us, among other things, what limiting distributions we can expect. We can only sketch such results here. The interested reader is referred to Refs. 13, 30, and 35.

lim_{x→0+} [1 − F(x_0 − rx)]/[1 − F(x_0 − x)] = r^γ    for every r > 0,
where

α_n = inf {z: (n − 1)/n < F(z)}

and

β_n = inf {z: 1 − (ne)^{−1} < F(α_n + z)}.        ////
for each x, or

n[1 − F(b_n x + a_n)] → e^{−x};

and we see that a_n can be taken equal to α_n and b_n = β_n. Thus, for the third type the constants {a_n} and {b_n} are actually determined by the condition for that type. We shall see below that for certain practical applications it is possible to estimate {a_n} and {b_n}.
Since the types G_1(·; γ) and G_2(·; γ) both contain a parameter, it can be surmised that the third type G_3(·) is more convenient than the other two in applications. Also, G_3(y) = exp(−e^{−y}) is the correct limiting extreme-value distribution for a number of families of distributions. We saw that it was correct for the logistic and exponential distributions in Examples 8 and 9; it is also correct for the gamma and normal distributions. What is often done in practice is to assume that the sampled distribution F(·) is such that exp(−e^{−y}) is the proper limiting extreme-value distribution; one can do this without assuming exactly which parametric family the sampled distribution F(·) belongs to. One then knows that P[(Y_n^{(n)} − a_n)/b_n ≤ y] → exp(−e^{−y}) for every y as n → ∞.
Hence,

P[Y_n^{(n)} ≤ b_n y + a_n] ≈ exp(−e^{−y}),

so that probability statements about the largest observation can be made once a_n and b_n are known. It is true that a_n and b_n are given in terms of quantiles of F(·) [a_n is essentially the (1 − 1/n)th quantile], which are generally unknown, but for certain practical applications they can be estimated from the sample.
Much more could be said about the sample cumulative distribution function, but we will wait until Chap. XI on nonparametric statistics to do so.
PROBLEMS
1 (a) Give an example where the target population and the sampled population
are the same.
(b) Give an example where the target population and the sampled population
are not the same.
2 (a) A company manufactures transistors in three different plants A, B, and C whose manufacturing methods are very similar. It is decided to inspect those transistors that are manufactured in plant A since plant A is the largest plant and statisticians are available there. In order to inspect a week's production, 100 transistors will be selected at random and tested for defects. Define the sampled population and target population.
(b) In part (a) above, it is decided to use the results in plant A to draw conclusions about plants B and C. Define the target population.
3 (a) What is the probability that the two observations of a random sample of two
from a population with a rectangular distribution over the unit interval will
not differ by more than I?
(b) What is the probability that the mean of a sample of two observations from a
rectangular distribution over the unit interval will be between! and.£?
4 (a) Balls are drawn with replacement from an urn containing one white and two
black balls. Let X = 0 for a white ball and X = 1 for a black ball. For
samples XI, X 2 , " ' , X9 of size 9, what is the joint distribution of the observa-
tions? The distribution of the sum of the observations?
(b) Referring to part (a) above, find the expected values of the sample mean and
sample variance.
5 Let X_1, ..., X_n be a random sample from a distribution which has a finite fourth moment. Define μ = ℰ[X_1], μ_2 = var[X_1], μ_3 = ℰ[(X_1 − μ)³], μ_4 = ℰ[(X_1 − μ)⁴],

X̄ = (1/n) Σ_{i=1}^n X_i,    and    S² = [1/(n − 1)] Σ_{i=1}^n (X_i − X̄)².

(a) Does S² = [1/2n(n − 1)] Σ_{i=1}^n Σ_{j=1}^n (X_i − X_j)²?
(b) For a random sample of size n from a population with mean μ and rth central moment μ_r, show that
7 (a) Use the Chebyshev inequality to find how many times a coin must be tossed in order that the probability will be at least .90 that X̄ will lie between .4 and .6. (Assume that the coin is true.)
(b) How could one determine the number of tosses required in part (a) more accurately, i.e., make the probability very nearly equal to .90? What is the number of tosses?
8 If a population has σ = 2 and X̄ is the mean of samples of size 100, find limits between which X̄ − μ will lie with probability .90. Use both the Chebyshev inequality and the central-limit theorem. Why do the two results differ?
9 Suppose that X̄_1 and X̄_2 are means of two samples of size n from a population with variance σ². Determine n so that the probability will be about .01 that the two sample means will differ by more than σ. (Consider Y = X̄_1 − X̄_2.)
10 Suppose that light bulbs made by a standard process have an average life of 2000
hours with a standard deviation of 250 hours, and suppose that it is considered
worthwhile to replace the process if the mean life can be increased by at least
10 percent. An engineer wishes to test a proposed new process, and he is willing
to assume that the standard deviation of the distribution of lives is about the
same as for the standard process. How large a sample should he examine if he
wishes the probability to be about .01 that he will fail to adopt the new process if
in fact it produces bulbs with a mean life of 2250 hours?
11 A research worker wishes to estimate the mean of a population using a sample large enough that the probability will be .95 that the sample mean will not differ from the population mean by more than 25 percent of the standard deviation. How large a sample should he take?
12 A polling agency wishes to take a sample of voters in a given state large enough
that the probability is only .01 that they will find the proportion favoring a certain
candidate to be less than 50 percent when in fact it is 52 percent. How large a
sample should be taken?
13 A standard drug is known to be effective in about 80 percent of the cases in which
it is used to treat infections. A new drug has been found effective in 85 of the
first 100 cases tried. Is the superiority of the new drug well established? (If
the new drug were equally as effective as the old, what would be the probability of obtaining 85 or more successes in a sample of 100?)
14 Find the third moment about the mean of the sample mean for samples of size n
from a Bernoulli population. Show that it approaches 0 as n becomes large
(as it must if the normal approximation is to be valid).
15 (a) A bowl contains five chips numbered from 1 to 5. A sample of two drawn without replacement from this finite population is said to be random if all possible pairs of the five chips have an equal chance to be drawn. What is the expected value of the sample mean? What is the variance of the sample mean?
(b) Suppose that the two chips of part (a) were drawn with replacement; what would be the variance of the sample mean? Why might one guess that this variance would be larger than the one obtained before?
*(c) Generalize part (a) by considering N chips and samples of size n. Show that the variance of the sample mean is

(σ²/n) · (N − n)/(N − 1),

where σ² is the population variance; that is,

σ² = (1/N) Σ_{i=1}^N [i − (N + 1)/2]².
16 If X_1, X_2, X_3 are independent random variables and each has a uniform distribution over (0, 1), derive the distribution of (X_1 + X_2)/2 and (X_1 + X_2 + X_3)/3.
17 If X_1, ..., X_n is a random sample from N(μ, σ²), find the mean and variance of

S = √[Σ (X_i − X̄)²/(n − 1)].
18 On the F distribution:
(a) Derive the variance of the F distribution. [See part (d).]
(b) If X has an F distribution with m and n degrees of freedom, argue that 1/X has an F distribution with n and m degrees of freedom.
(c) If X has an F distribution with m and n degrees of freedom, show that

W = (mX/n)/(1 + mX/n)

has a beta distribution.
19 On the t distribution:
(a) Find the mean and variance of Student's t distribution. (Be careful about existence.)
(b) Show that the density of a t-distributed random variable approaches the standard normal density as the degrees of freedom increase. (Assume that the "constant" part of the density does what it has to do.)
(c) If X is t-distributed, show that X² is F-distributed.
(d) If X is t-distributed with k degrees of freedom, show that 1/(1 + X²/k) has a beta distribution.
20 Let X_1, X_2 be a random sample from N(0, 1). Using the results of Sec. 4 of Chap. VI, answer the following:
(a) What is the distribution of (X_2 − X_1)/√2?
(b) What is the distribution of (X_1 + X_2)²/(X_2 − X_1)²?
(c) What is the distribution of (X_2 + X_1)/√[(X_1 − X_2)²]?
(d) What is the distribution of 1/Z if Z = X_1²/X_2²?
21 Let X_1, ..., X_n be a random sample from N(0, 1). Define

X̄_k = (1/k) Σ_{i=1}^k X_i,    X̄_{n−k} = [1/(n − k)] Σ_{i=k+1}^n X_i,    X̄ = (1/n) Σ_{i=1}^n X_i,

and

S²_{n−k} = [1/(n − k)] Σ_{i=k+1}^n (X_i − X̄_{n−k})².
and
mean but different variances σ_1², σ_2², ..., σ_n², and assuming that U = Σ(X_i/σ_i²)/Σ(1/σ_i²) and V = Σ(X_i − U)²/σ_i² are independently distributed, show that U is normal and V has the chi-square distribution with n − 1 degrees of freedom.
28 For three samples from normal populations (with variances σ_1², σ_2², and σ_3²), the sample sizes being n_1, n_2, and n_3, find the joint density of

V = S_1²/S_2²    and

where S_1², S_2², and S_3² are the sample variances. (Assume that the samples are independent.)
and v
(Assume that the samples are independent.)
30 For a random sample of size 2 from a normal density with mean 0 and variance t,
find the distribution of the range.
31 (a) What is the probability that the larger of two random observations from any
continuous distribution will exceed the median?
(b) Generalize the result of part (a) to samples of size n.
32 Considering random samples of size n from a population with density f(x), what is the expected value of the area under f(x) to the left of the smallest sample observation?
*33 Consider a random sample X_1, ..., X_n from the uniform distribution over the interval (μ − √3σ, μ + √3σ). Let Y_1 < ... < Y_n denote the corresponding order statistics.
(a) Find the mean and variance of Y_n − Y_1.
(b) Find the mean and variance of (Y_1 + Y_n)/2.
(c) Find the mean and variance of Y_{k+1} if n = 2k + 1, k = 0, 1, ....
(d) Compare the variances of X̄_n, Y_{k+1}, (Y_1 + Y_n)/2.
HINT: It might be easier to solve the problem for U_1, ..., U_n, a random sample from the uniform distribution over either (0, 1) or (−1, 1), and then make an appropriate transformation.
34 Let X_1, ..., X_n be a random sample from the density

f(x; α, β) = (1/2β) exp[−|x − α|/β],

where −∞ < α < ∞ and β > 0. Compare the asymptotic distributions of the sample mean and the sample median. In particular, compare the asymptotic variances.
*35 Let X_1, ..., X_n be a random sample from the cumulative distribution function F(x) = {1 − exp[−x/(1 − x)]} I_{(0,1)}(x) + I_{[1,∞)}(x). What is the limiting distribution of (Y_n^{(n)} − a_n)/b_n, where a_n = log n/(1 + log n) and b_n^{−1} = (log n)(1 + log n)? What is the asymptotic distribution of Y_n^{(n)}?
36 Let X_1, ..., X_n be a random sample from f(x; θ) = θe^{−θx} I_{(0,∞)}(x), θ > 0.
(a) Compare the asymptotic distribution of X̄_n with the asymptotic distribution of the sample median.
(b) For your choice of {a_n} and {b_n}, find a limiting distribution of (Y_n^{(n)} − a_n)/b_n.
(c) For your choice of {a_n} and {b_n}, find a limiting distribution of (Y_1^{(n)} − a_n)/b_n.
VII
PARAMETRIC POINT ESTIMATION
the value of some function, say τ(θ), of the unknown parameter. This estimation can be made in two ways. The first, called point estimation, is to let the value of some statistic, say t(X_1, ..., X_n), represent, or estimate, the unknown τ(θ); such a statistic t(X_1, ..., X_n) is called a point estimator. The second, called interval estimation, is to define two statistics, say t_1(X_1, ..., X_n) and t_2(X_1, ..., X_n), where t_1(X_1, ..., X_n) < t_2(X_1, ..., X_n), so that (t_1(X_1, ..., X_n), t_2(X_1, ..., X_n)) constitutes an interval for which the probability can be determined that it contains the unknown τ(θ). For example, if f(·; θ) is the normal density, that is, f(x; θ) = φ_{μ,σ²}(x), where the parameter θ is (μ, σ), and if it is desired to estimate the mean, that is, τ(θ) = μ, then the statistic X̄ = (1/n) Σ_{i=1}^n X_i is a possible point estimator of μ. The word "estimator" stands for the function, and the word "estimate" stands for a value of that function; thus X̄_n is an estimator of μ, and x̄_n is an estimate of μ. Here T is X̄_n, t is x̄_n, and t(·, ..., ·) is the function defined by summing the arguments and then dividing by n.
Notation in estimation that has widespread usage is the following: θ̂ is used to denote an estimate of θ, and, more generally, (θ̂_1, ..., θ̂_k) is a vector that estimates the vector (θ_1, ..., θ_k), where θ̂_j estimates θ_j, j = 1, ..., k. If θ̂ is an estimate of θ, then Θ̂ is the corresponding estimator of θ; and if the discussion requires that the function that defines both θ̂ and Θ̂ be specified, then it can be denoted by a small script theta, that is, Θ̂ = ϑ(X_1, ..., X_n).
When we speak of estimating θ, we are speaking of estimating the fixed yet unknown value that θ has. That is, we assume that the random sample X_1, ..., X_n came from the density f(·; θ), where θ is unknown but fixed. Our object is, after looking at the values of the random sample, to estimate the fixed unknown θ. And when we speak of estimating τ(θ), we are speaking of estimating the value τ(θ) that the known function τ(·) assumes for the unknown but fixed θ.
in the k variables θ_1, ..., θ_k, and let θ̂_1, ..., θ̂_k be their solution (we assume that there is a unique solution). We say that the estimator (Θ̂_1, ..., Θ̂_k), where Θ̂_j estimates θ_j, is the estimator of (θ_1, ..., θ_k) obtained by the method of moments. The estimators were obtained by replacing population moments by sample moments. Some examples follow.
EXAMPLE 1 Let X_1, ..., X_n be a random sample from a normal distribution with mean μ and variance σ². Let (θ_1, θ_2) = (μ, σ). Estimate the parameters μ and σ by the method of moments. Recall that σ² = μ_2' − (μ_1')² and μ = μ_1'. The method-of-moments equations become

M_1' = μ_1' = μ_1'(μ, σ) = μ
M_2' = μ_2' = μ_2'(μ, σ) = σ² + μ²,

and their solution is the following: The method-of-moments estimator of μ is M_1' = X̄, and the method-of-moments estimator of σ is

√[M_2' − (M_1')²] = √[(1/n) Σ X_i² − X̄²] = √[Σ (X_i − X̄)²/n].

Note that the method-of-moments estimator of σ given above is not √(S²). ////
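As a small numerical sketch (Python, assuming numpy; the true parameter values and seed are arbitrary), the method-of-moments estimates can be computed directly from the first two sample moments:

import numpy as np

# Method-of-moments estimates for a normal sample: mu-hat = Xbar and
# sigma-hat = sqrt((1/n) sum (X_i - Xbar)^2)  (divisor n, not n - 1).
rng = np.random.default_rng(6)
x = rng.normal(loc=10.0, scale=3.0, size=500)

m1 = x.mean()
m2 = np.mean(x**2)
mu_hat = m1
sigma_hat = np.sqrt(m2 - m1**2)        # same as np.sqrt(np.mean((x - m1)**2))
print(mu_hat, sigma_hat)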
EXAMPLE 3 Let X_1, ..., X_n be a random sample from the negative exponential density f(x; θ) = θe^{−θx} I_{(0,∞)}(x). Estimate θ. The method-of-moments equation is

M_1' = μ_1' = μ_1'(θ) = 1/θ;

hence the method-of-moments estimator of θ is 1/M_1' = 1/X̄. ////
and
for σ.
We shall see later that there are better estimators of μ and σ for this distribution. ////
and solve them for τ_1, ..., τ_r. Estimators obtained using either way are called method-of-moments estimators and may not be the same in both cases.
Outcome: x        0        1        2        3
f(x; 1/4)       27/64    27/64     9/64     1/64
f(x; 3/4)        1/64     9/64    27/64    27/64
i.e., because a sample with x = 0 is more likely (in the sense of having larger probability) to arise from a population with p = 1/4 than from one with p = 3/4. And in general we should estimate p by .25 when x = 0 or 1 and by .75 when x = 2 or 3. The estimator may be defined as
x = 2 or 3. The estimator may be defined as
p= p(x) = {.25
for x = 0, 1
.75 for x = 2, 3.
The estimator thus selects for every possible x the value of p, say p̂, such that

f(x; p̂) > f(x; p'),

where p' is the alternative value of p.
More generally, if several alternative values of p were possible, we might reasonably proceed in the same manner. Thus if we found x = 6 in a sample of 25 from a binomial population, we should substitute all possible values of p in the expression

f(6; p) = (25 over 6) p^6 (1 − p)^{19}        (2)

and choose as our estimate that value of p which maximized f(6; p). For the given possible values of p we should find our estimate to be 6/25. The position of its maximum value can be found by putting the derivative of the function defined in Eq. (2) with respect to p equal to 0 and solving the resulting equation for p. Thus,

(d/dp) f(6; p) = (25 over 6) p^5 (1 − p)^{18} [6(1 − p) − 19p],

and on putting this equal to 0 and solving for p, we find that p = 0, 1, 6/25 are the roots. The first two roots give a minimum, and so our estimate is therefore p̂ = 6/25. This estimate has the property that

f(6; p̂) > f(6; p'),

where p' is any other value of p in the interval 0 ≤ p ≤ 1.
In order to define maximum-likelihood estimators, we shall first define the likelihood function.
The likelihood function L(θ; x_1, ..., x_n) gives the likelihood that the random variables assume a particular value x_1, x_2, ..., x_n. The likelihood is the value of a density function; so for discrete random variables it is a probability. Suppose for a moment that θ is known; denote the value by θ_0. The particular value of the random variables which is "most likely to occur" is that value x_1', x_2', ..., x_n' such that f_{X_1, ..., X_n}(x_1', ..., x_n'; θ_0) is a maximum. For example, for simplicity let us assume that n = 1 and X_1 has the normal density with mean 6 and variance 1. Then the value of the random variable which is most likely to occur is x_1' = 6. By "most likely to occur" we mean the value x_1' of X_1 such that φ_{6,1}(x_1') > φ_{6,1}(x_1). Now let us suppose that the joint density of n random variables is f_{X_1, ..., X_n}(x_1, ..., x_n; θ), where θ is unknown. Let the particular values which are observed be represented by x_1', x_2', ..., x_n'. We want to know from which density this particular set of values is most likely to have come. We want to know from which density (what value of θ) the likelihood is largest that the set x_1', ..., x_n' was obtained. In other words, we want to find the value of θ in Θ, denoted by θ̂, which maximizes the likelihood function L(θ; x_1', ..., x_n'). The value θ̂ which maximizes the likelihood function is, in general, a function of x_1, ..., x_n, say θ̂ = ϑ(x_1, x_2, ..., x_n). When this is the case, the random variable Θ̂ = ϑ(X_1, X_2, ..., X_n) is called the maximum-likelihood estimator of θ. (We are assuming throughout that the maximum of the likelihood function exists.) We shall now formalize the definition of a maximum-likelihood estimator.
In this case it may also be easier to work with the logarithm of the likelihood. We shall illustrate these definitions with some examples.
0 ≤ p ≤ 1 and q = 1 − p. The sample values x_1, x_2, ..., x_n will be a sequence of 0s and 1s, and the likelihood function is

L(p) = ∏_{i=1}^n p^{x_i} q^{1−x_i} = p^{Σx_i} q^{n−Σx_i},

and if we let y = Σ x_i, we obtain

log L(p) = y log p + (n − y) log q*

and

d log L(p)/dp = y/p − (n − y)/q;

setting this equal to 0 and solving yields

p̂ = y/n = (1/n) Σ x_i,        (3)

which is intuitively what the estimate for this parameter should be. It is also a method-of-moments estimate. For n = 3, let us sketch the likelihood function. Note that the likelihood function depends on the x_i's only through Σ x_i; thus the likelihood function can be represented by the following four curves:

L_0 = L(p; Σ x_i = 0) = (1 − p)³
L_1 = L(p; Σ x_i = 1) = p(1 − p)²
L_2 = L(p; Σ x_i = 2) = p²(1 − p)
L_3 = L(p; Σ x_i = 3) = p³,

which are sketched in Fig. 1. Note that the point where the maximum of each of the curves takes place for 0 ≤ p ≤ 1 is the same as that given in Eq. (3) when n = 3. ////

* Recall that log x means log_e x.
FIGURE 1
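A quick Python sketch (assuming numpy; the grid of p values is an arbitrary discretization) confirms that each of the four curves attains its maximum at p = (Σ x_i)/3, in agreement with Eq. (3):

import numpy as np

# The four Bernoulli likelihood curves for n = 3: L(p; sum x_i = y) = p^y (1-p)^(3-y).
p = np.linspace(0.0, 1.0, 1001)
for y in range(4):
    L = p**y * (1 - p)**(3 - y)
    print(f"sum x_i = {y}:  argmax L(p) ~ {p[np.argmax(L)]:.3f}   (y/n = {y/3:.3f})")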
L(μ, σ²) = ∏_{i=1}^n [1/(√(2π)σ)] e^{−(1/2σ²)(x_i − μ)²} = [1/(2πσ²)]^{n/2} exp[−(1/2σ²) Σ (x_i − μ)²].

The logarithm of the likelihood function is

log L = −(n/2) log(2πσ²) − (1/2σ²) Σ (x_i − μ)²,

and

∂ log L/∂μ = (1/σ²) Σ (x_i − μ),
∂ log L/∂σ² = −(n/2)(1/σ²) + (1/2σ⁴) Σ (x_i − μ)²;        (4)

setting the partial derivatives equal to 0 and solving yields the maximum-likelihood estimates μ̂ = x̄ and

σ̂² = (1/n) Σ (x_i − x̄)².        (5)
= ∏_{i=1}^n I_{[θ−½, θ+½]}(x_i) = I_{[y_n−½, y_1+½]}(θ),        (6)

where y_1 is the smallest of the observations and y_n is the largest. The last equality in Eq. (6) follows since ∏_{i=1}^n I_{[θ−½, θ+½]}(x_i) is unity if and only if all x_1, ..., x_n are in the interval [θ − ½, θ + ½], which is true if and only if θ − ½ ≤ y_1 and y_n ≤ θ + ½, which is true if and only if y_n − ½ ≤ θ ≤ y_1 + ½. We see that the likelihood function is either 1 (for y_n − ½ ≤ θ ≤ y_1 + ½) or 0 (otherwise); hence any statistic with value θ̂ satisfying y_n − ½ ≤ θ̂ ≤ y_1 + ½ is a maximum-likelihood estimate. Examples are y_n − ½, y_1 + ½, and ½(y_1 + y_n). This latter is the midpoint between y_n − ½ and y_1 + ½, or the midpoint between y_1 and y_n, the smallest and largest observations. ////
where −∞ < μ < ∞ and σ > 0. (Recall Example 4.) Here the likelihood function for a sample of size n is

L(μ, σ) = (2√3 σ)^{−n} ∏_{i=1}^n I_{[μ−√3σ, μ+√3σ]}(x_i).

FIGURE 2

The likelihood is maximized by taking the interval [μ − √3σ, μ + √3σ] as short as possible while still containing all the observations; hence the maximum-likelihood estimates are

μ̂ = ½(y_1 + y_n)        (7)

and

σ̂ = [1/(2√3)](y_n − y_1).        (8)
The above four examples are sufficient to illustrate the application of the method of maximum likelihood. The last two show that one must not always rely on the differentiation process to locate the maximum.
The function L(θ) may, for example, be represented by the curve in Fig. 3, where the actual maximum is at θ̂, but the derivative set equal to 0 would locate θ' as the maximum. One must also remember that the equation ∂L/∂θ = 0 locates minima as well as maxima, and hence one must avoid using a root of the equation which actually locates a minimum.
We shall see in later sections (especially Sec. 9 of this chapter) that the
maximum-likelihood estimator has some desirable optimum properties other
than the intuitively appealing property that it maximizes the likelihood function.
In addition, the maximum-likelihood estimators possess a property which is
sometimes called the invariance property of maximum-likelihood estimators. A
little reflection on the meaning of a single-valued inverse will convince one of
the validity of the following theorem.
FIGURE 3
estimator of θ(1 − θ). Theorem 2 below will give such an estimate, and it will be x̄(1 − x̄). As a second example, consider sampling from a normal distribution where both μ and σ² are unknown, and suppose an estimate of ℰ[X²] = μ² + σ² is desired. Example 6 gives the maximum-likelihood estimates of μ and σ², but μ² + σ² is not a one-to-one function of μ and σ², and so the maximum-likelihood estimate of μ² + σ² is not known. Such an estimate will be obtainable from Theorem 2 below. It will be x̄² + (1/n) Σ (x_i − x̄)².
Let θ = (θ_1, ..., θ_k) be a k-dimensional parameter, and, as before, let Θ denote the parameter space. Suppose that the maximum-likelihood estimate of τ(θ) = (τ_1(θ), ..., τ_r(θ)), where 1 ≤ r ≤ k, is desired. Let T denote the range space of the transformation τ(·) = (τ_1(·), ..., τ_r(·)); T is an r-dimensional space. Define

M(τ; x_1, ..., x_n) = sup over {θ: τ(θ) = τ} of L(θ; x_1, ..., x_n).

M(·; x_1, ..., x_n) is called the likelihood function induced by τ(·).* When estimating θ we maximized the likelihood function L(θ; x_1, ..., x_n) as a function of θ for fixed x_1, ..., x_n; when estimating τ = τ(θ) we will maximize the likelihood function induced by τ(·), namely M(τ; x_1, ..., x_n), as a function of τ for fixed x_1, ..., x_n. Thus, the maximum-likelihood estimate of τ = τ(θ), denoted by τ̂, is any value that maximizes the induced likelihood function for fixed x_1, ..., x_n; that is, τ̂ is such that M(τ̂; x_1, ..., x_n) ≥ M(τ; x_1, ..., x_n) for all τ ∈ T. The invariance property of maximum-likelihood estimation is given in the following theorem.
* The notation "sup" is used here, and elsewhere in this book, as it is usually used in mathematics. For those readers who are not acquainted with this notation, not much is lost if "sup" is replaced by "max," where "max" is an abbreviation for maximum.
EXAMPLE 9 In the normal density, let θ = (θ_1, θ_2) = (μ, σ²). Suppose τ(θ) = μ + z_q σ, where z_q is given by Φ(z_q) = q. τ(θ) is the qth quantile. According to Theorem 2, the maximum-likelihood estimator of τ(θ) is X̄ + z_q √[(1/n) Σ (X_i − X̄)²]. ////
where n_j is a value of N_j. The numerator of the jth term in the sum is the square of the difference between the observed and the expected number of observations falling in the jth cell. The minimum-chi-square estimate of θ is that θ̂ which minimizes χ². It is that θ among all possible θ's which makes the expected number of observations in each cell "nearest" the observed number. The minimum-chi-square estimator depends on the partition into cells selected.
Often it is difficult to locate that θ which minimizes χ²; hence, the denominator np_j(θ) is sometimes changed to n_j (if n_j = 0, unity is used), forming a modified χ² = Σ_{j=1}^k {[n_j − np_j(θ)]²/n_j}. The modified minimum-chi-square estimate of θ is then that θ which minimizes the modified χ².
FIGURE 4
Now if the distance function d(F, G) = sup_x |F(x) − G(x)| is used, then d(F(x; θ), F_n(x)) is minimized if 1 − θ is taken equal to n_0/n or θ = n_1/n = Σ x_i/n. Hence θ̂ = x̄. ////
For a more thorough discussion of the minimum-chi-square method, see
Cramer [11] or Rao [17]. The minimum-distance method is discussed in
Wolfowitz [42].
3.1 Closeness
If we have a random sample X_1, ..., X_n from a density, say f(x; θ), which is known except for θ, then a point estimator of τ(θ) is a statistic, say t(X_1, ..., X_n), whose value is used as an estimate of τ(θ). We will assume here that τ(θ) is a
Not being able to achieve the ultimate of always correctly estimating the unknown τ(θ), we look for an estimator t(X_1, ..., X_n) that is "close" to τ(θ). There are several ways of defining "close." T = t(X_1, ..., X_n) is a statistic and hence has a distribution, or rather a family of distributions, depending on what θ is. The distribution of T tells how the values t of T are distributed, and we would like to have the values of T distributed near τ(θ); that is, we would like to select t(·, ..., ·) so that the values of T = t(X_1, ..., X_n) are concentrated near τ(θ). We saw that the mean and variance of a distribution were, respectively, measures of location and spread. So what we might require of an estimator is that it have its mean near or equal to τ(θ) and have small variance. These two notions are explored in Subsec. 3.2 below and then again in Sec. 5.
Rather than resorting to characteristics of a distribution, such as its mean and variance, one can define what "concentration" might mean in terms of the distribution itself. Two such definitions follow.
In P_θ[τ(θ) − λ < T ≤ τ(θ) + λ], the event {τ(θ) − λ < T ≤ τ(θ) + λ} is described in terms of the random variable T, and, in general, the distribution of T is indexed by θ. ////
Unfortunately, most concentrated estimators seldom exist. There are just too many possible estimators for any one of them to be most concentrated. What is then sometimes done is to restrict the totality of possible estimators under consideration by requiring that each estimator possess some other desirable property and to look for a best or most concentrated estimator in this restricted class. We will not pursue the problem of finding most concentrated estimators, even within some restricted class, in this book.
Another criterion for comparing estimators is the following one.
In the above we were assuming that n, the sample size, was fixed. Still
another meaning can be affixed to" closeness" if one thinks in terms of increasing
sample size. It seems that a good estimator should do better when it is based
on a large sample than when it is based on a small sample. Consistency and
asymptotic efficiency are two properties that are defined in terms of increasing
sample size; they are considered in Subsec. 3.3. Properties of point estimators
that are defined for a fixed sample size are sometimes referred to as small-sample
properties, whereas properties that are defined for increasing sample size are
sometimes referred to as large-sample properties.
where f(x; θ) is the probability density function from which the random sample was selected. ////
The name "mean-squared error" can be justified if one first thinks of the difference t − τ(θ), where t is a value of T used to estimate τ(θ), as the error made in estimating τ(θ), and then interprets the "mean" in "mean-squared error" as expected or average. To support the contention that the mean-squared error of an estimator is a measure of goodness, one merely notes that ℰ_θ[(T − τ(θ))²] is a measure of the spread of T values about τ(θ), just as the variance of a random variable is a measure of its spread about its mean. If we
FIGURE 5
EXAMPLE 13 Let X_1, ..., X_n be a random sample from the density f(x; θ), where θ is a real number, and consider estimating θ itself; that is, τ(θ) = θ. We seek an estimator, say T* = t*(X_1, ..., X_n), such that MSE_{T*}(θ) ≤ MSE_T(θ) for every θ and for any other estimator T = t(X_1, ..., X_n) of θ. Consider the family of estimators T_{θ_0} = t_{θ_0}(X_1, ..., X_n) ≡ θ_0 indexed by θ_0 for θ_0 ∈ Θ. For each θ_0 belonging to Θ, the estimator t_{θ_0} ignores the observations and estimates θ to be θ_0. Note that

MSE_{T_{θ_0}}(θ_0) = 0.        ////

One reason for being unable to find an estimator with uniformly smallest mean-squared error is that the class of all possible estimators is too large; it includes some estimators that are extremely prejudiced in favor of particular θ. For instance, in the example above t_{θ_0}(X_1, ..., X_n) is highly partial to θ_0 since it always estimates θ to be θ_0. One could restrict the totality of estimators by considering only estimators that satisfy some other property. One such property is that of unbiasedness.
Remark

MSE_T(θ) = var_θ[T] + {τ(θ) − ℰ_θ[T]}².        (9)

So if T is an unbiased estimator of τ(θ), then MSE_T(θ) = var_θ[T].

PROOF

MSE_T(θ) = ℰ_θ[(T − τ(θ))²] = ℰ_θ[((T − ℰ_θ[T]) − (τ(θ) − ℰ_θ[T]))²]
         = ℰ_θ[(T − ℰ_θ[T])²] − 2{τ(θ) − ℰ_θ[T]} ℰ_θ[T − ℰ_θ[T]] + ℰ_θ[{τ(θ) − ℰ_θ[T]}²]
         = var_θ[T] + {τ(θ) − ℰ_θ[T]}².        ////

The term τ(θ) − ℰ_θ[T] is called the bias of the estimator T and can be either positive, negative, or zero. The remark shows that the mean-squared error is the sum of two nonnegative quantities; it also shows how the mean-squared error, variance, and bias of an estimator are related.
= [(n − 1)/n]² var[S²] + [(n − 1)σ²/n − σ²]²
= [(n − 1)/n]² (1/n)[μ_4 − (n − 3)σ⁴/(n − 1)] + σ⁴/n²,

using Eq. (10) of Theorem 2 in Chap. VI. ////
Remark For the most part, in the remainder of this book we will take
the mean-squared error of an estimator as our standard in assessing the
goodness of an estimator. IIII
0"[(8; - 0-
2)2] = var [8;] = ~ (P4 -: ~ 0-4) .... 0
as n -+ 00, using Eq. (10) of Chap. VI; hence the sequence {S;} is a
mean-squared-error consistent sequence of estimators of (12. Note that if
Tn = (lIn) I (Xi - X)2, then the sequence {Tn} is also a mean-squared-
error consistent sequence of estimators of (12. IIII
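The shrinking mean-squared errors are easy to see numerically. The following Python sketch (assuming numpy; the normal population, σ², and the sample sizes are arbitrary choices) estimates the MSE of both estimators by simulation:

import numpy as np

# MSE of S_n^2 (divisor n - 1) and T_n (divisor n) as estimators of sigma^2;
# both tend to 0 as n grows.
rng = np.random.default_rng(8)
sigma2 = 4.0
for n in (5, 20, 100, 500):
    x = rng.normal(0.0, np.sqrt(sigma2), size=(20000, n))
    s2 = x.var(axis=1, ddof=1)
    t2 = x.var(axis=1, ddof=0)
    print(f"n={n:4d}  MSE(S^2)={np.mean((s2 - sigma2)**2):.4f}  MSE(T_n)={np.mean((t2 - sigma2)**2):.4f}")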
n = 1, 2, ... ,
We assume now that an appropriate loss function has been defined for our
estimation problem, and we think of the loss function as a measure of error
or loss. Our object is to select an estimator T = t(Xl , •.. , Xn) that makes
this error or loss small. (Admittedly, we are not considering a very important,
substantive problem by assuming that a suitable loss function is given. In
general, selection of an appropriate loss function is not trivial.) The loss
function in its first argument depends on the estimate t, and t is a value of the
estimator T; that is, t = t(Xb " ., xn)· Thus, our loss depends on the sample
Xl' ... , X n • We cannot hope to make the loss small for every possible sample,
but we can try to make the loss small on the average. Hence, if we alter our
objective of picking that estimator that makes the loss small to picking that
estimator that makes the average loss small, we can remove the dependence of
the loss on the sample Xl' "" X n • This notion is embodied in the following
definition.
Definition 12 Risk function For a given loss function ℓ(·; ·), the risk function, denoted by ℛ_T(θ), of an estimator T = t(X_1, ..., X_n) is defined to be

ℛ_T(θ) = ℰ_θ[ℓ(T; θ)].        (10)
////

The risk function is the average loss. The expectation in Eq. (10) can be taken in two ways. For example, if the density f(x; θ) from which we sampled is a probability density function, then

ℰ_θ[ℓ(T; θ)] = ℰ_θ[ℓ(t(X_1, ..., X_n); θ)],

or

ℰ_θ[ℓ(T; θ)] = ∫ ℓ(t; θ) f_T(t) dt,

where f_T(t) is the density of the estimator T. In either case, the expectation averages out the values of X_1, ..., X_n.
EXAMPLE 17 Consider the same loss functions given in Example 16. The corresponding risks are given by:
(i) ℰ_θ[(T − τ(θ))²], our familiar mean-squared error.
(ii) ℰ_θ[|T − τ(θ)|], the mean absolute error.
(iii) A·P_θ[|T − τ(θ)| > ε].
(iv) p(θ) ℰ_θ[|T − τ(θ)|^r].        ////
Our object now is to select an estimator that makes the average loss (risk)
small and ideally select an estimator that has the smallest risk. To help meet
this objective, we use the concept of admissible estimators.
4 SUFFICIENCY
Prior to continuing our pursuit of finding best estimators, we introduce the concept of sufficiency of statistics. In many of the estimation problems that we will encounter, we will be able to summarize the information in the sample X_1, ..., X_n. That is, we will be able to find some function of the sample that tells us just as much about θ as the sample itself. Such a function would be sufficient for estimation purposes and accordingly is called a sufficient statistic. Sufficient statistics are of interest in themselves, as well as being useful in statistical inference problems such as estimation or testing of hypotheses. Because the concept of sufficiency is widely applicable, possibly the notion should have been isolated in a chapter by itself rather than buried in this chapter on estimation.
                 Values of   Values of
 (x1, x2, x3)        S           T       f(x1,x2,x3 | S = s)   f(x1,x2,x3 | T = t)

 (0, 0, 0)           0           0               1              (1 − p)/(1 + p)
 (0, 0, 1)           1           1              1/3             (1 − p)/(1 + 2p)
 (0, 1, 0)           1           0              1/3              p/(1 + p)
 (1, 0, 0)           1           0              1/3              p/(1 + p)
 (0, 1, 1)           2           1              1/3              p/(1 + 2p)
 (1, 0, 1)           2           1              1/3              p/(1 + 2p)
 (1, 1, 0)           2           1              1/3              p/(1 + 2p)
 (1, 1, 1)           3           2               1               1

FIGURE 6
The conditional densities given in the last two columns are routinely calculated. For instance,

f(0, 1, 0 | S = 1) = P[X_1 = 0; X_2 = 1; X_3 = 0; S = 1]/P[S = 1] = (1 − p)p(1 − p)/[(3 over 1)p(1 − p)²] = 1/3,

and

f(0, 1, 0 | T = 0) = P[X_1 = 0; X_2 = 1; X_3 = 0; T = 0]/P[T = 0] = (1 − p)²p/[(1 − p)³ + 2(1 − p)²p] = p/(1 − p + 2p) = p/(1 + p).
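The contrast between the two columns can also be checked by direct enumeration. The Python sketch below (assuming numpy; the statistic T = X1·X2 + X3 is one statistic consistent with the table above, used here only for illustration) shows that the conditional distribution given S does not change with p while the one given T does:

import itertools
import numpy as np

def conditionals(p):
    probs = {x: np.prod([p if xi == 1 else 1 - p for xi in x])
             for x in itertools.product((0, 1), repeat=3)}
    out = {}
    for x, pr in probs.items():
        s, t = sum(x), x[0] * x[1] + x[2]
        ps = sum(v for y, v in probs.items() if sum(y) == s)
        pt = sum(v for y, v in probs.items() if y[0] * y[1] + y[2] == t)
        out[x] = (pr / ps, pr / pt)      # (conditional given S, conditional given T)
    return out

for p in (0.3, 0.7):
    c = conditionals(p)
    print("p =", p, " given S:", round(c[(0, 1, 0)][0], 3), " given T:", round(c[(0, 1, 0)][1], 3))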
EXAMPLE 19 Let X_1, ..., X_n be a random sample from f(·; θ) = φ_θ(·); that is, X_1, ..., X_n is a random sample from a normal distribution with mean θ and variance unity. In order to expedite calculations, we take n = 2. Let us argue that S = s(X_1, X_2) = X_1 + X_2 is sufficient using the second interpretation above. The transformation of (X_1, X_2) to (S, Y_2), where S = X_1 + X_2 and Y_2 = X_2 − X_1, is one-to-one; so it suffices to show that f_{Y_2|S}(y_2|s) is independent of θ. Now

f_{Y_2|S}(y_2 | s) = f_{S, Y_2}(s, y_2)/f_S(s),

and since

f_{S, Y_2}(s, y_2) = [1/(2√π)] e^{−(s−2θ)²/4} · [1/(2√π)] e^{−y_2²/4}    and    f_S(s) = [1/(2√π)] e^{−(s−2θ)²/4},

the ratio is [1/(2√π)] e^{−y_2²/4}, which is independent of θ.
The necessary calculations for the first interpretation above are less simple. We must show that P[X_1 ≤ x_1; X_2 ≤ x_2 | S = s] is independent of θ. According to Eq. (9) of Chap. IV,

P[X_1 ≤ x_1; X_2 ≤ x_2 | S = s] = lim_{h→0} P[X_1 ≤ x_1; X_2 ≤ x_2 | s − h < S ≤ s + h]
  = {lim_{h→0} (1/2h) P[X_1 ≤ x_1; X_2 ≤ x_2; s − h < S ≤ s + h]} / {lim_{h→0} (1/2h) P[s − h < S ≤ s + h]}
  = [1/f_S(s)] lim_{h→0} (1/2h) ∫_{−∞}^{x_1} ∫_{s−h−u}^{s+h−u} f_{X_1}(u) f_{X_2}(v) dv du,
FIGURE 7
and hence
Finally, then,
The sample X_1, ..., X_n itself is always jointly sufficient since the conditional distribution of the sample given the sample does not depend on θ. Also, the order statistics Y_1, ..., Y_n are jointly sufficient for random sampling. If the order statistics are given, say, by (Y_1 = y_1, ..., Y_n = y_n), then the only values that can be taken on by (X_1, ..., X_n) are the permutations of y_1, ..., y_n. Since the sampling is random, each of the n! permutations is equally likely. So, given the values of the order statistics, the probability that the sample equals a particular permutation of these given values of the order statistics is 1/n!, which is independent of θ. (Sufficiency of the order statistics also follows from Theorem 5 below.)
If we recall that the important aspect of a statistic or set of statistics is the partition of the sample space that it induces, and not the values that it takes on, then the validity of the following theorem is evident.
Theorem 3 If S_1 = d_1(X_1, ..., X_n), ..., S_r = d_r(X_1, ..., X_n) is a set of jointly sufficient statistics, then any set of one-to-one functions, or transformations, of S_1, ..., S_r is also jointly sufficient. ////

if and only if the joint density of X_1, ..., X_n, which is ∏_{i=1}^n f(x_i; θ), factors as

f_{X_1, ..., X_n}(x_1, ..., x_n; θ) = g(d(x_1, ..., x_n); θ) h(x_1, ..., x_n) = g(s; θ) h(x_1, ..., x_n),        (11)

where the function h(x_1, ..., x_n) is nonnegative and does not involve the parameter θ and the function g(d(x_1, ..., x_n); θ) is nonnegative and depends on x_1, ..., x_n only through the function d(·, ..., ·). ////

where the function h(x_1, ..., x_n) is nonnegative and does not involve the parameter θ and the function g(s_1, ..., s_r; θ) is nonnegative and depends on x_1, ..., x_n only through the functions d_1(·, ..., ·), ..., d_r(·, ..., ·). ////
Note that, according to Theorem 3, there are many possible sets of sufficient statistics. The above two theorems give us a relatively easy method for judging whether a certain statistic is sufficient or a set of statistics is jointly sufficient. However, the method is not the complete answer since a particular statistic may be sufficient yet the user may not be clever enough to factor the joint density as in Eq. (11) or (12). The theorems may also be useful in discovering sufficient statistics.
Actually, the result of either of the above factorization theorems is intuitively evident if one notes the following: If the joint density factors as indicated in, say, Eq. (12), then the likelihood function is proportional to g(s_1, ..., s_r; θ), which depends on the observations x_1, ..., x_n only through d_1, ..., d_r [the likelihood function is viewed as a function of θ, so h(x_1, ..., x_n) is just a proportionality constant], which means that the information about θ that the likelihood function contains is embodied in the statistics d_1(·, ..., ·), ..., d_r(·, ..., ·).
Before giving several examples, we remark that the function h(·, ..., ·) appearing in either Eq. (11) or (12) may be constant.
and

f(x; θ) = θ^x (1 − θ)^{1−x} I_{\{0,1\}}(x).

Then

∏_{i=1}^n f(x_i; θ) = ∏_{i=1}^n θ^{x_i}(1 − θ)^{1−x_i} I_{\{0,1\}}(x_i) = θ^{Σx_i} (1 − θ)^{n−Σx_i} ∏_{i=1}^n I_{\{0,1\}}(x_i).

If we take θ^{Σx_i}(1 − θ)^{n−Σx_i} as g(d(x_1, ..., x_n); θ) and ∏_{i=1}^n I_{\{0,1\}}(x_i) as h(x_1, ..., x_n) and set d(x_1, ..., x_n) = Σ x_i, then the joint density of X_1, ..., X_n factors as in Eq. (11), indicating that S = d(X_1, ..., X_n) = Σ X_i is a sufficient statistic. ////
EXAMPLE 21 Let X_1, ..., X_n be a random sample from the normal density with mean μ and variance unity. Here the parameter is denoted by μ instead of θ. The joint density is given by

f_{X_1, ..., X_n}(x_1, ..., x_n; μ) = ∏_{i=1}^n [1/√(2π)] exp[−½(x_i − μ)²]
  = [1/(2π)^{n/2}] exp[−½ Σ_{i=1}^n (x_i − μ)²]
  = [1/(2π)^{n/2}] exp[−½(Σ x_i² − 2μ Σ x_i + nμ²)]
  = [1/(2π)^{n/2}] exp(μ Σ x_i − ½nμ²) exp(−½ Σ x_i²).

Taking g(Σ x_i; μ) = [1/(2π)^{n/2}] exp(μ Σ x_i − ½nμ²) and h(x_1, ..., x_n) = exp(−½ Σ x_i²), we see that Σ X_i is a sufficient statistic for μ. ////
EXAMPLE 22 Let X_1, ..., X_n be a random sample from the normal density φ_{μ,σ²}(·). Here the parameter θ is a vector of two components; that is, θ = (μ, σ). The joint density of X_1, ..., X_n is given by

∏_{i=1}^n φ_{μ,σ²}(x_i) = ∏_{i=1}^n [1/(√(2π)σ)] exp[−½((x_i − μ)/σ)²]
  = [1/(2πσ²)^{n/2}] exp[−(1/2σ²) Σ x_i² + (μ/σ²) Σ x_i − nμ²/(2σ²)],

so that, by the factorization criterion, Σ X_i and Σ X_i² are jointly sufficient statistics for (μ, σ). ////
EXAMPLE 23 Let X_1, ..., X_n be a random sample from a uniform distribution over the interval [θ_1, θ_2]. The joint density of X_1, ..., X_n is given by

f_{X_1, ..., X_n}(x_1, ..., x_n; θ_1, θ_2) = ∏_{i=1}^n [1/(θ_2 − θ_1)] I_{[θ_1, θ_2]}(x_i)
  = [1/(θ_2 − θ_1)^n] ∏_{i=1}^n I_{[θ_1, θ_2]}(x_i)
  = [1/(θ_2 − θ_1)^n] I_{[θ_1, y_n]}(y_1) I_{[y_1, θ_2]}(y_n),

where y_1 = min[x_1, ..., x_n] and y_n = max[x_1, ..., x_n].
The joint density itself depends on x_1, ..., x_n only through y_1 and y_n; hence it factors as in Eq. (12) with h(x_1, ..., x_n) = 1. The statistics Y_1 and Y_n are jointly sufficient. Note that if we take θ_1 = θ and θ_2 = θ + 1, then Y_1 and Y_n are still jointly sufficient. However, if we take θ_1 = 0 and θ_2 = θ, then our factorization can be expressed as

f_{X_1, ..., X_n}(x_1, ..., x_n; θ) = (1/θ^n) I_{[0, θ]}(y_n) I_{[0, y_n]}(y_1).

Taking g(d(x_1, ..., x_n); θ) = (1/θ^n) I_{[0, θ]}(y_n) and h(x_1, ..., x_n) = I_{[0, y_n]}(y_1), we see that Y_n alone is sufficient. ////
PROOF If S_1 = d_1(X_1, ..., X_n), ..., S_k = d_k(X_1, ..., X_n) are jointly sufficient, then the likelihood function can be written as

L(θ; x_1, ..., x_n) = ∏_{i=1}^n f(x_i; θ) = g(d_1(x_1, ..., x_n), ..., d_k(x_1, ..., x_n); θ) h(x_1, ..., x_n).

As a function of θ, L(θ; x_1, ..., x_n) will have its maximum at the same place that g(s_1, ..., s_k; θ) has its maximum, but the place where g attains its maximum can depend on x_1, ..., x_n only through s_1, ..., s_k since g does. ////

We might note that method-of-moments estimators may not be functions of sufficient statistics. See Examples 4 and 23.
EXAMPLE 24 If f(x; θ) = θe^{−θx} I_{(0,∞)}(x), then f(x; θ) belongs to the exponential family for a(θ) = θ, b(x) = I_{(0,∞)}(x), c(θ) = −θ, and d(x) = x in Eq. (13). ////

In Eq. (13), we can take a(λ) = e^{−λ}, b(x) = (1/x!) I_{\{0,1,...\}}(x), c(λ) = log λ, and d(x) = x; so f(x; λ) belongs to the exponential family. ////
The above remark shows that, under random sampling, if a density belongs to the one-parameter exponential family, then there is a sufficient statistic. In fact, it can be shown that the sufficient statistic so obtained is minimal.
The one-parameter exponential family can be generalized to the k-param-
eter exponential family.
for a suitable choice of functions a(·, ..., ·), b(·), c_j(·, ..., ·), and d_j(·), j = 1, ..., k, is defined to belong to the exponential family. ////
In Definition 20, note that the number of terms in the sum of the exponent
is k, which is also the dimension of the parameter.
φ_{μ,σ²}(x) = [1/(√(2π)σ)] exp(−½ · μ²/σ²) exp(−x²/(2σ²) + μx/σ²).

Take a(μ, σ) = [1/(√(2π)σ)] exp(−½ · μ²/σ²), b(x) = 1, c_1(μ, σ) = −1/2σ², c_2(μ, σ) = μ/σ², d_1(x) = x², and d_2(x) = x to show that φ_{μ,σ²}(x) can be expressed as in Eq. (14). ////
EXAMPLE 27 If $f(x;\theta_1,\theta_2)$ is the beta density, then
$$f(x;\theta_1,\theta_2)=\frac{1}{B(\theta_1,\theta_2)}\,I_{(0,1)}(x)\exp[(\theta_1-1)\log x+(\theta_2-1)\log(1-x)];$$
so the beta density belongs to the two-parameter exponential family. ////
k
Remark Iff(x; Of, ... , Ok) = a(Ol, ... , 0k)b(X) exp I
j=l
C/Oh ... , Ok)d/x),
n f (Xi; ° °k)
II
1, . . .,
i= 1
II n
and so by the factorization criterion I d1(Xi ), "', I dk(Xi ) is a set of
£:=1 i=1
jointly sufficient statistics. I d1(X,), ... , I dk(Xi) are in fact minimal
sufficient statistics. 1I11
EXAMPLE 28 From Example 27, we see that $\sum_{i=1}^{n}\log X_i$ and $\sum_{i=1}^{n}\log(1-X_i)$ are jointly minimal sufficient when sampling from a beta density. ////
Our main use of the exponential family will not be in finding sufficient statistics, but it will be in showing that the sufficient statistics are complete, a concept that is useful in obtaining "best" estimators. This concept will be defined in Sec. 5.
Lest one get the impression that all parametric families belong to the exponential family, we remark that a family of uniform densities does not belong to the exponential family. In fact, any family of densities for which the range of values where the density is positive depends on the parameter does not belong to the exponential class.
5 UNBIASED ESTIMATION
Since estimators with uniformly minimum mean-squared error rarely exist, a
reasonable procedure is to restrict the class of estimating functions and look
for estimators with uniformly minimum mean-squared error within the restricted
class. One way of restricting the class of estimating functions would be to
consider only unbiased estimators and then among the class of unbiased esti-
mators search for an estimator with minimum mean-squared error. Con-
sideration of unbiased estimators and the problem of finding one with uniformly
minimum mean-squared error are to be the subjects of this section.
According to Eq. (9) the mean-squared error of an estimator $T$ of $\tau(\theta)$ can be written as
$$\mathscr{E}_\theta[(T-\tau(\theta))^2]=\operatorname{var}_\theta[T]+\{\tau(\theta)-\mathscr{E}_\theta[T]\}^2,$$
and if $T$ is an unbiased estimator of $\tau(\theta)$, then $\mathscr{E}_\theta[T]=\tau(\theta)$, and so $\mathscr{E}_\theta[(T-\tau(\theta))^2]=\operatorname{var}_\theta[T]$. Hence, seeking an estimator with uniformly minimum mean-squared error among unbiased estimators is tantamount to seeking an estimator with uniformly minimum variance among unbiased estimators.
provided we can differentiate under the integral sign; that is, provided
$$\frac{\partial}{\partial\theta}\int\cdots\int t(x_1,\ldots,x_n)\prod_{i=1}^{n}f(x_i;\theta)\,dx_1\cdots dx_n=\int\cdots\int t(x_1,\ldots,x_n)\,\frac{\partial}{\partial\theta}\prod_{i=1}^{n}f(x_i;\theta)\,dx_1\cdots dx_n.$$
Equation (15) is called the Cramer-Rao inequality, and the right-hand side,
$$\frac{[\tau'(\theta)]^2}{n\,\mathscr{E}_\theta\bigl[\bigl(\frac{\partial}{\partial\theta}\log f(X;\theta)\bigr)^2\bigr]},$$
is called the Cramer-Rao lower bound for the variance of unbiased estimators of $\tau(\theta)$.
PROOF Since $\mathscr{E}_\theta[T]=\tau(\theta)$,
$$\tau'(\theta)=\frac{\partial}{\partial\theta}\int\cdots\int t(x_1,\ldots,x_n)\prod_{i=1}^{n}f(x_i;\theta)\,dx_1\cdots dx_n
=\int\cdots\int[t(x_1,\ldots,x_n)-\tau(\theta)]\,\frac{\partial}{\partial\theta}\Bigl[\prod_{i=1}^{n}f(x_i;\theta)\Bigr]dx_1\cdots dx_n$$
$$=\int\cdots\int[t(x_1,\ldots,x_n)-\tau(\theta)]\Bigl[\frac{\partial}{\partial\theta}\log\prod_{i=1}^{n}f(x_i;\theta)\Bigr]\prod_{i=1}^{n}f(x_i;\theta)\,dx_1\cdots dx_n
=\mathscr{E}_\theta\Bigl[[t(X_1,\ldots,X_n)-\tau(\theta)]\Bigl[\frac{\partial}{\partial\theta}\log\prod_{i=1}^{n}f(X_i;\theta)\Bigr]\Bigr].$$
Hence, by the Cauchy-Schwarz inequality,
$$[\tau'(\theta)]^2\le\mathscr{E}_\theta\bigl[[t(X_1,\ldots,X_n)-\tau(\theta)]^2\bigr]\,\mathscr{E}_\theta\Bigl[\Bigl[\frac{\partial}{\partial\theta}\log\prod_{i=1}^{n}f(X_i;\theta)\Bigr]^2\Bigr],$$
or
$$\operatorname{var}_\theta[T]\ge\frac{[\tau'(\theta)]^2}{\mathscr{E}_\theta\bigl[\bigl(\frac{\partial}{\partial\theta}\log\prod_{i=1}^{n}f(X_i;\theta)\bigr)^2\bigr]};$$
but
$$\mathscr{E}_\theta\Bigl[\Bigl(\frac{\partial}{\partial\theta}\log\prod_{i=1}^{n}f(X_i;\theta)\Bigr)^2\Bigr]=n\,\mathscr{E}_\theta\Bigl[\Bigl(\frac{\partial}{\partial\theta}\log f(X;\theta)\Bigr)^2\Bigr],$$
which gives Eq. (15). Equality in the Cauchy-Schwarz inequality, and hence attainment of the lower bound, requires that $\frac{\partial}{\partial\theta}\log\prod_{i=1}^{n}f(x_i;\theta)$ be proportional to $t(x_1,\ldots,x_n)-\tau(\theta)$.
Note that $\frac{\partial}{\partial\theta}\log f(x;\theta)=\frac{\partial}{\partial\theta}(\log\theta-\theta x)=1/\theta-x$, and so
$$\mathscr{E}_\theta\Bigl[\Bigl(\frac{\partial}{\partial\theta}\log f(X;\theta)\Bigr)^2\Bigr]=\mathscr{E}_\theta[(1/\theta-X)^2]=\operatorname{var}_\theta[X]=\frac{1}{\theta^2}.$$
Hence, the Cramer-Rao lower bound for the variance of unbiased estimators of $1/\theta$ is given by
$$\frac{[\tau'(\theta)]^2}{n\,\mathscr{E}_\theta[(\frac{\partial}{\partial\theta}\log f(X;\theta))^2]}=\frac{1/\theta^4}{n/\theta^2}=\frac{1}{n\theta^2}.$$
Also,
$$\sum_{i=1}^{n}\frac{\partial}{\partial\theta}\log f(x_i;\theta)=\sum_{i=1}^{n}\frac{\partial}{\partial\theta}(\log\theta-\theta x_i)=\sum_{i=1}^{n}\Bigl(\frac{1}{\theta}-x_i\Bigr)=-n\Bigl(\bar{x}_n-\frac{1}{\theta}\Bigr).$$
By taking $K(\theta,n)=-n$ and utilizing the result of Eq. (16), we see that $\bar{X}_n$ is an UMVUE of $1/\theta$ since its variance coincides with the Cramer-Rao lower bound. ////
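As an illustrative check (not from the text; the parameter values are arbitrary), the simulation below compares the sampling variance of $\bar{X}_n$ with the Cramer-Rao lower bound $1/(n\theta^2)$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 25, 20000

# X_i has density theta * exp(-theta x); mean 1/theta, variance 1/theta^2.
samples = rng.exponential(scale=1.0 / theta, size=(reps, n))
xbar = samples.mean(axis=1)

print("empirical variance of Xbar_n:", xbar.var())
print("Cramer-Rao lower bound 1/(n theta^2):", 1.0 / (n * theta**2))
# The two numbers agree up to simulation error, illustrating that Xbar_n attains the bound.
```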
We will omit the proof of this remark. It relates the Cramer-Rao lower
bound to the exponential family; in fact, it tells us that we will be able to find
an estimator whose variance coincides with the Cramer-Rao lower bound if
and only if the density from which we are sampling is a member of the expo-
nential class. Although the remark does not explicitly so state, the following is
true: There is essentially only one function (one function and then any linear
function of the one function) of the parameter for which there exists an unbiased
estimator whose variance coincides with the Cramer-Rao lower bound. So,
what this remark and the comments following it really tell us is: The Cramer-
Rao lower bound is of limited use in finding UMVUEs. It is useful only if we
sample from a member of the one-parameter exponential family, and even then
it is useful in finding the UMVUE of only one function of the parameter.
Hence, it behooves us to search for other techniques for finding UMVUEs, and
that is what we do in the next subsection.
and therefore
$$\operatorname{var}_\theta[T]=\mathscr{E}_\theta[(T-T')^2]+\operatorname{var}_\theta[T']\ge\operatorname{var}_\theta[T'].$$
Note that $\operatorname{var}_\theta[T]>\operatorname{var}_\theta[T']$ unless $T$ equals $T'$ with probability 1. ////
For many applications (particularly where the density involved has only one unknown parameter) there will exist a single sufficient statistic, say $S=d(X_1,\ldots,X_n)$, which would then be used in place of the jointly sufficient set of statistics $S_1,\ldots,S_k$. What the theorem says is that, given an unbiased estimator, another unbiased estimator that is a function of sufficient statistics can be derived, and it will not have larger variance. To find the derived statistic, the calculation of a conditional expectation, which may or may not be easy, is required.
EXAMPLE 31 Let $X_1,\ldots,X_n$ be a random sample from the Bernoulli density $f(x;\theta)=\theta^x(1-\theta)^{1-x}$ for $x=0$ or 1. $X_1$ is an unbiased estimator of $\tau(\theta)=\theta$. We use $X_1$ as $T=t(X_1,\ldots,X_n)$ in the above theorem. $\sum X_i$ is a sufficient statistic; so we use $S=\sum X_i$ as our set (of one element) of sufficient statistics. Now
$$P\Bigl[X_1=0\Bigm|\sum_{i=1}^{n}X_i=s\Bigr]=\frac{P[X_1=0;\sum_{i=1}^{n}X_i=s]}{P[\sum_{i=1}^{n}X_i=s]}=\frac{(1-\theta)\binom{n-1}{s}\theta^{s}(1-\theta)^{n-1-s}}{\binom{n}{s}\theta^{s}(1-\theta)^{n-s}}=\frac{\binom{n-1}{s}}{\binom{n}{s}}=\frac{n-s}{n};$$
hence,
$$T'=\mathscr{E}\Bigl[X_1\Bigm|\sum X_i=s\Bigr]=1-\frac{n-s}{n}=\frac{s}{n},\qquad\text{that is,}\qquad T'=\frac{\sum_{i=1}^{n}X_i}{n}.$$
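A small simulation (not part of the text; the parameter values are arbitrary) illustrates the conditional-expectation calculation above: conditionally on $\sum X_i=s$, the mean of $X_1$ is $s/n$, so conditioning $X_1$ on the sufficient statistic produces $T'=\sum X_i/n$.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 0.3, 6, 200000

x = rng.binomial(1, theta, size=(reps, n))
s = x.sum(axis=1)

# Empirical conditional expectation of X1 given sum(X) = s, compared with s/n.
for value in range(n + 1):
    mask = s == value
    if mask.any():
        print(value, x[mask, 0].mean().round(3), round(value / n, 3))
# The empirical conditional means match s/n, the Rao-Blackwellized estimator T'.
```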
for all $\theta\in\Theta$, that is, for $0<\theta<1$. To argue that $T$ is complete, we must show that $\lambda(t)=0$ for $t=0,1,\ldots,n$. Now $\mathscr{E}_\theta[\lambda(T)]=0$ for all $0<\theta<1$ implies that $\lambda(t)=0$ for $t=0,1,\ldots,n$; so $T=\sum X_i$ is complete.
Similarly, when sampling from the uniform density over $(0,\theta)$, $\mathscr{E}_\theta[\lambda(Y_n)]=0$ for all $\theta>0$ implies
$$\int_0^{\theta}\lambda(y)\,y^{n-1}\,dy=0\qquad\text{for all }\theta>0,$$
and hence $\lambda(y)=0$; so $Y_n$ is complete.
EXAMPLE 34 Let $X_1,\ldots,X_n$ be a random sample from the Poisson density
$$f(x;\lambda)=\frac{e^{-\lambda}\lambda^x}{x!}\qquad\text{for }x=0,1,\ldots.$$
Therefore,
$$T'=\mathscr{E}\Bigl[I_{\{0\}}(X_1)\Bigm|\sum X_i=s\Bigr]=P\Bigl[X_1=0\Bigm|\sum X_i=s\Bigr]=\Bigl(\frac{n-1}{n}\Bigr)^{s};$$
hence
$$\Bigl(\frac{n-1}{n}\Bigr)^{\sum X_i}$$
is the UMVUE of $e^{-\lambda}$ for $n>1$. For $n=1$, $I_{\{0\}}(X_1)$ is an unbiased estimator which is a function of the complete sufficient statistic $X_1$, and hence $I_{\{0\}}(X_1)$ itself is the UMVUE of $e^{-\lambda}$. The reader may want to derive the mean and variance of $\bigl((n-1)/n\bigr)^{\sum X_i}$ and compare them with the mean and variance of the estimator $e^{-\bar X_n}$, where $\bar X_n=(1/n)\sum X_i$ is a function of the complete sufficient statistic $\sum X_i$.
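The comparison suggested above can be sketched by simulation; the following Python fragment (illustrative only, with arbitrary parameter values) contrasts the UMVUE with the maximum-likelihood estimator $e^{-\bar X_n}$.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, n, reps = 1.5, 10, 200000

x = rng.poisson(lam, size=(reps, n))
s = x.sum(axis=1)

umvue = ((n - 1) / n) ** s          # UMVUE of exp(-lambda)
mle = np.exp(-x.mean(axis=1))       # maximum-likelihood estimator exp(-xbar)

print("target exp(-lambda):", np.exp(-lam))
print("UMVUE mean:", umvue.mean(), " variance:", umvue.var())
print("MLE  mean:", mle.mean(), " variance:", mle.var())
# The UMVUE is essentially unbiased; the MLE shows a small bias for finite n.
```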
In a similar fashion, for a random sample from the negative exponential density $f(x;\theta)=\theta e^{-\theta x}I_{(0,\infty)}(x)$, the UMVUE of $\tau(\theta)=e^{-K\theta}=P[X>K]$ for fixed $K>0$ can be found by computing
$$P\Bigl[X_1>K\Bigm|\sum X_i=s\Bigr]=\int_K^{s}\frac{\Gamma(n)}{\Gamma(n-1)}\,\frac{(s-x_1)^{n-2}}{s^{n-1}}\,dx_1
=\frac{n-1}{s^{n-1}}\int_{s-K}^{0}y^{n-2}(-dy)
=\frac{n-1}{s^{n-1}}\cdot\frac{y^{n-1}}{n-1}\Bigg|_{0}^{s-K}
=\Bigl(\frac{s-K}{s}\Bigr)^{n-1}$$
for $s>K$ and $n>1$, where the substitution $y=s-x_1$ was made. Hence,
$$\Bigl(\frac{\sum X_i-K}{\sum X_i}\Bigr)^{n-1}I_{(K,\infty)}\Bigl(\sum X_i\Bigr)$$
is the UMVUE for $e^{-K\theta}$ for $n>1$. (Actually the estimator is applicable for $n=1$ as well.) It may be of interest and would serve as a check to verify directly that this estimator is unbiased.
of $\theta$. For fixed $0<p<1$, consider the estimator $g(X_1-p)+p$, where the function $g(y)$ is defined to be the greatest integer less than $y$. Now
$$\mathscr{E}_\theta[g(X_1-p)+p]=\int_{\theta}^{\theta+1}g(x_1-p)\,dx_1+p=\int_{\theta-p}^{\theta+1-p}g(y)\,dy+p.$$
For fixed $\theta$ and $p$, there exists an integer, say $N=N(\theta,p)$, satisfying $\theta-p<N<\theta+1-p$. Hence,
$$\mathscr{E}_\theta[g(X_1-p)+p]=\int_{\theta-p}^{N}(N-1)\,dy+\int_{N}^{\theta+1-p}N\,dy+p=\theta.$$
for $t(x_1,\ldots,x_n)=(y_1+y_n)/2$. On the other hand, quite a number of estimators are not location-invariant; for example, $S^2$ and $Y_n-Y_1$, as the following shows: Take $T=t(X_1,\ldots,X_n)=S^2=\sum(X_i-\bar X)^2/(n-1)$; then $t(x_1+c,\ldots,x_n+c)=\sum[x_i+c-\sum(x_i+c)/n]^2/(n-1)=t(x_1,\ldots,x_n)$, instead of $t(x_1,\ldots,x_n)+c$. Now take $T=t(X_1,\ldots,X_n)=Y_n-Y_1$; then $t(x_1+c,\ldots,x_n+c)=\max[x_1+c,\ldots,x_n+c]-\min[x_1+c,\ldots,x_n+c]=\max[x_1,\ldots,x_n]+c-\min[x_1,\ldots,x_n]-c=t(x_1,\ldots,x_n)$, instead of $t(x_1,\ldots,x_n)+c$.
Our use of location invariance will be similar to our use of unbiasedness.
We will restrict ourselves to looking at location-invariant estimators and seek an estimator within the class of location-invariant estimators that has uniformly smallest mean-squared error. The property of location invariance is intuitively
appealing and turns out also to be practically appealing if the parameter we are
estimating represents location.
We will now state, without proof, a theorem that gives within the class
of location-invariant estimators the uniformly smallest mean-squared error
estimator of a location parameter. The theorem is from Pitman [41].
Theorem 11 Let $X_1,\ldots,X_n$ denote a random sample from the density $f(\cdot\,;\theta)$, where $\theta$ is a location parameter and the parameter space is the real line. The estimator
$$t(X_1,\ldots,X_n)=\frac{\int\theta\prod_{i=1}^{n}f(X_i;\theta)\,d\theta}{\int\prod_{i=1}^{n}f(X_i;\theta)\,d\theta} \tag{17}$$
is the estimator of $\theta$ which has uniformly smallest mean-squared error within the class of location-invariant estimators. ////
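Equation (17) can be evaluated numerically. The sketch below (not from the text; the helper name pitman_location and the grid bounds are illustrative assumptions) computes the Pitman estimator by quadrature for a normal location family, where it reduces to the sample mean.

```python
import numpy as np

def pitman_location(x, f, grid):
    """Pitman estimator (Eq. 17): ratio of integrals of theta*L(theta) and L(theta)."""
    x = np.asarray(x)
    # Likelihood prod_i f(x_i; theta) evaluated on a grid of theta values.
    lik = np.array([np.prod(f(x, th)) for th in grid])
    return np.trapz(grid * lik, grid) / np.trapz(lik, grid)

def normal_loc(x, theta):
    return np.exp(-0.5 * (x - theta) ** 2) / np.sqrt(2 * np.pi)

x = np.array([1.2, 3.4, 0.6, 5.6])
grid = np.linspace(-10, 15, 4001)
print(pitman_location(x, normal_loc, grid), x.mean())
# For the normal location family the Pitman estimator equals the sample mean (2.7 here).
```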
PROOF If $S_1=d_1(X_1,\ldots,X_n),\ldots,S_k=d_k(X_1,\ldots,X_n)$ is a set of sufficient statistics, then by the factorization criterion $\prod_{i=1}^{n}f(x_i;\theta)=g(s_1,\ldots,s_k;\theta)h(x_1,\ldots,x_n)$; so the Pitman estimator can be written as
$$\frac{\int\theta\,g(s_1,\ldots,s_k;\theta)\,d\theta}{\int g(s_1,\ldots,s_k;\theta)\,d\theta},$$
which is a function of $s_1,\ldots,s_k$ alone. ////
Note that if $\theta$ is a scale parameter for the family of densities $\{f(\cdot\,;\theta),\,\theta>0\}$, then the density $h(\cdot)$ of the definition is given by $h(x)=f(x;1)$.
Our sole result for scale invariance, a result that is comparable to the result of Theorem 11 on location invariance, requires a slightly different framework. Instead of measuring error with the squared-error loss function, we measure it with the loss function $\ell(t;\theta)=(t-\theta)^2/\theta^2=(t/\theta-1)^2$. If $|t-\theta|$ represents error, then $100|t-\theta|/\theta$ can be thought of as percent error, and then $(t-\theta)^2/\theta^2$ is proportional to percent error squared. We state the following theorem, also from Pitman [41], without proof.
has uniformly smallest risk for the loss function $\ell(t;\theta)=(t-\theta)^2/\theta^2$. ////
Definition 28 Pitman estimator for scale The estimator given in Eq. (18) is defined to be the Pitman estimator for scale. ////
EXAMPLE 41 Let $X_1,\ldots,X_n$ be a random sample from a density $f(x;\theta)=(1/\theta)I_{(0,\theta)}(x)$. The Pitman estimator for the scale parameter $\theta$ is
$$\frac{\int_{y_n}^{\infty}(1/\theta^{\,n+2})\,d\theta}{\int_{y_n}^{\infty}(1/\theta^{\,n+3})\,d\theta}=\frac{\{1/[(n+2)-1]\}\,y_n^{-(n+2)+1}}{\{1/[(n+3)-1]\}\,y_n^{-(n+3)+1}}=\frac{n+2}{n+1}\,Y_n.$$
We know that $Y_n$ is a complete sufficient statistic and $\mathscr{E}[Y_n]=[n/(n+1)]\theta$; so by the Lehmann-Scheffe theorem $[(n+1)/n]Y_n$ is the UMVUE of $\theta$. ////
EXAMPLE 42 Let $X_1,\ldots,X_n$ be a random sample from the density $f(x;\lambda)=(1/\lambda)\exp(-x/\lambda)I_{(0,\infty)}(x)$. The Pitman estimator for the scale parameter $\lambda$ is
$$\frac{\int_0^{\infty}(1/\lambda^2)\prod_{i=1}^{n}f(x_i;\lambda)\,d\lambda}{\int_0^{\infty}(1/\lambda^3)\prod_{i=1}^{n}f(x_i;\lambda)\,d\lambda}
=\frac{\int_0^{\infty}(1/\lambda^{n+2})\exp(-\sum x_i/\lambda)\,d\lambda}{\int_0^{\infty}(1/\lambda^{n+3})\exp(-\sum x_i/\lambda)\,d\lambda}
=\frac{\Gamma(n+1)/(\sum x_i)^{n+1}}{\Gamma(n+2)/(\sum x_i)^{n+2}}
=\sum x_i\,\frac{\Gamma(n+1)}{\Gamma(n+2)}=\frac{\sum X_i}{n+1}.$$
(It can be shown that the UMVUE of $\lambda$ is $\sum X_i/n$.) Note that $\sum X_i/n$ is a scale-invariant estimator, and, hence, since $\sum X_i/(n+1)$ is the scale-invariant estimator having uniformly smallest risk for the loss function $(t-\theta)^2/\theta^2$, the risk of $\sum X_i/(n+1)$ is uniformly smaller than the risk of $\sum X_i/n$. Also, since here risk equals $1/\theta^2$ times the MSE, the MSE of $\sum X_i/(n+1)$ is uniformly smaller than the MSE of $\sum X_i/n$. ////
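A quick simulation (illustrative only; the parameter values are arbitrary) confirms the claim that $\sum X_i/(n+1)$ has uniformly smaller mean-squared error than $\sum X_i/n$ for this exponential scale family.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, n, reps = 2.0, 5, 200000

x = rng.exponential(scale=lam, size=(reps, n))
s = x.sum(axis=1)

mse_pitman = np.mean((s / (n + 1) - lam) ** 2)   # Pitman (scale-invariant optimal) estimator
mse_umvue = np.mean((s / n - lam) ** 2)          # UMVUE sum(X)/n
print("MSE of sum X/(n+1):", mse_pitman)         # theoretical value lam^2/(n+1)
print("MSE of sum X/n    :", mse_umvue)          # theoretical value lam^2/n
# The biased estimator sum(X)/(n+1) has the smaller mean-squared error, as claimed.
```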
7 BAYES ESTIMATORS
In our considerations of point-estimation problems in the previous sections of this chapter, we have assumed that our random sample came from some density $f(\cdot\,;\theta)$, where the function $f(\cdot\,;\cdot)$ was assumed known. Moreover, we have assumed that $\theta$ was some fixed, though unknown to us, point. In some real-world situations which the density $f(\cdot\,;\theta)$ represents, there is often additional information about $\theta$ (the only assumption which we heretofore have made about $\theta$ is that it can take on values in the parameter space $\Theta$). For example, the experimenter may have evidence that $\theta$ itself acts as a random variable for which he may be able to postulate a realistic density function. For instance, suppose that a machine which stamps out parts for automobiles is to be examined to see what fraction $\theta$ of defectives is being made. On a certain day, 10 pieces of the machine's output are examined, with the observations denoted by $X_1,X_2,\ldots,X_{10}$, where $X_i=1$ if the $i$th piece is defective and $X_i=0$ if it is nondefective. These can be viewed as a random sample of size 10 from the Bernoulli density
$$f(x;\theta)=\theta^x(1-\theta)^{1-x}\qquad\text{for }x=0,1;\ 0<\theta<1,$$
which indicates that the probability that a given part is defective is equal to the unknown number $\theta$. The joint density of the 10 random variables $X_1,X_2,\ldots,X_{10}$ is
$$\prod_{i=1}^{10}\theta^{x_i}(1-\theta)^{1-x_i}=\theta^{\sum x_i}(1-\theta)^{10-\sum x_i}\qquad\text{for }0\le\theta<1.$$
distribution of $\Theta$ is over the parameter space, we have departed from our custom of using $F(\cdot)$ and $f(\cdot)$ to represent a cumulative distribution function and density function, respectively, and have used $G(\cdot)$ and $g(\cdot)$ instead.
If we assume that the distribution of $\Theta$ is known, we have additional information. So an important question is: How can this additional information be used in estimation? It is this question that we will address ourselves to in the following two subsections. In many problems it may be unrealistic to assume that $\theta$ is the value of a random variable; in other problems, even though it seems reasonable to assume that $\theta$ is the value of a random variable $\Theta$, the distribution of $\Theta$ may not be known, or even if it is known, it may contain other unknown parameters. However, in some problems the assumption that the distribution of $\Theta$ is known is realistic, and we shall examine this situation.
Remark For random sampling,
$$f_{\Theta|X_1=x_1,\ldots,X_n=x_n}(\theta\,|\,x_1,\ldots,x_n)=\frac{\Bigl[\prod_{i=1}^{n}f(x_i\,|\,\theta)\Bigr]g_\Theta(\theta)}{\int\Bigl[\prod_{i=1}^{n}f(x_i\,|\,\theta)\Bigr]g_\Theta(\theta)\,d\theta}.$$
[Recall that $f_{Y|X=x}(y\,|\,x)=f_{X,Y}(x,y)/f_X(x)=f_{X|Y=y}(x\,|\,y)f_Y(y)/f_X(x)$.] ////
The posterior distribution replaces the likelihood function as an expression that incorporates all information. If we want to estimate $\theta$ and parallel the development of the maximum-likelihood estimator of $\theta$, we could take as our estimator of $\theta$ that $\theta$ which maximizes the posterior distribution, that is, estimate $\theta$ with the mode of the posterior distribution. However, unlike the likelihood function (as a function of $\theta$), the posterior distribution is a probability distribution; so we could just as well estimate $\theta$ with the median or mean of the posterior distribution. We will use the mean of the posterior distribution as our estimate of $\theta$, and in general we could estimate $\tau(\theta)$ as the mean of $\tau(\Theta)$ given $X_1=x_1,\ldots,X_n=x_n$; that is, take $\mathscr{E}[\tau(\Theta)\,|\,X_1=x_1,\ldots,X_n=x_n]$ as our estimate of $\tau(\theta)$. ////
Remark
$$\mathscr{E}[\tau(\Theta)\,|\,X_1=x_1,\ldots,X_n=x_n]=\int\tau(\theta)\,f_{\Theta|X_1=x_1,\ldots,X_n=x_n}(\theta\,|\,x_1,\ldots,x_n)\,d\theta
=\frac{\int\tau(\theta)\Bigl[\prod_{i=1}^{n}f(x_i\,|\,\theta)\Bigr]g_\Theta(\theta)\,d\theta}{\int\Bigl[\prod_{i=1}^{n}f(x_i\,|\,\theta)\Bigr]g_\Theta(\theta)\,d\theta}. \tag{21}$$
////
One might note the similarity between the posterior Bayes estimator of $\tau(\theta)=\theta$ and the Pitman estimator of a location parameter [see Eq. (17)].
EXAMPLE 43 Let $X_1,\ldots,X_n$ denote a random sample from the Bernoulli density $f(x\,|\,\theta)=\theta^x(1-\theta)^{1-x}$ for $x=0,1$. Assume that the prior distribution of $\Theta$ is given by $g_\Theta(\theta)=I_{(0,1)}(\theta)$; that is, $\Theta$ is uniformly distributed over the interval $(0,1)$. Consider estimating $\theta$ and $\tau(\theta)=\theta(1-\theta)$. Now
$$\mathscr{E}[\Theta\,|\,X_1=x_1,\ldots,X_n=x_n]=\frac{\int_0^1\theta\,\theta^{\sum x_i}(1-\theta)^{n-\sum x_i}\,d\theta}{\int_0^1\theta^{\sum x_i}(1-\theta)^{n-\sum x_i}\,d\theta}=\frac{\sum x_i+1}{n+2},$$
and similarly
$$\mathscr{E}[\Theta(1-\Theta)\,|\,X_1=x_1,\ldots,X_n=x_n]=\frac{(\sum x_i+1)(n-\sum x_i+1)}{(n+3)(n+2)}.$$
So the posterior Bayes estimator of $\theta(1-\theta)$ with respect to a uniform prior distribution is $(\sum X_i+1)(n-\sum X_i+1)/[(n+3)(n+2)]$. ////
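The posterior Bayes estimates in this example can be verified by numerical integration of the posterior density; the Python sketch below (not part of the text; the sample is made up) compares quadrature results with the closed forms.

```python
import numpy as np

# Posterior Bayes estimates under a uniform prior on theta, computed by integrating
# the posterior (proportional to theta^s (1-theta)^(n-s)); closed forms for comparison.
x = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
n, s = x.size, x.sum()

theta = np.linspace(0, 1, 100001)
post = theta**s * (1 - theta) ** (n - s)
post /= np.trapz(post, theta)

print("E[theta | x] numeric:", np.trapz(theta * post, theta),
      " closed form:", (s + 1) / (n + 2))
print("E[theta(1-theta) | x] numeric:", np.trapz(theta * (1 - theta) * post, theta),
      " closed form:", (s + 1) * (n - s + 1) / ((n + 3) * (n + 2)))
```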
We noted in the above example that the posterior Bayes estimator that
we obtained was not unbiased. The following remark states that in general a
posterior Bayes estimator is not unbiased.
noted in Subsec. 3.4 that $\mathscr{E}_\theta[\ell(T;\theta)]$ represented the average loss of that estimator, and we defined this average loss to be the risk, denoted by $\mathscr{R}_t(\theta)$, of the estimator $t(\cdot,\ldots,\cdot)$. We further noted that two estimators, say $T_1=t_1(X_1,\ldots,X_n)$ and $T_2=t_2(X_1,\ldots,X_n)$, could be compared by looking at their respective risks $\mathscr{R}_{t_1}(\theta)$ and $\mathscr{R}_{t_2}(\theta)$, preference being given to that estimator with smaller risk. In general, the risk functions as functions of $\theta$ of two estimators may cross, one risk function being smaller for some $\theta$ and the other smaller for other $\theta$. Then, since $\theta$ is unknown, it is difficult to make a choice between the two estimators. The difficulty is caused by the dependence of the risk function on $\theta$. Now, since we have assumed that $\theta$ is the value of some random variable $\Theta$, the distribution of which is also assumed known, we have a natural way of removing the dependence of the risk function on $\theta$, namely, by averaging out the $\theta$, using the density of $\Theta$ as our weight function.
(The Bayes risk of the estimator $T=t(X_1,\ldots,X_n)$ with respect to the prior density $g(\cdot)$ and loss function $\ell(\cdot\,;\cdot)$ is defined to be $\int_\Theta\mathscr{R}_t(\theta)g(\theta)\,d\theta$.) ////
The Bayes risk of an estimator is an average risk, the averaging being over the parameter space $\Theta$ with respect to the prior density $g(\cdot)$. For given loss function $\ell(\cdot\,;\cdot)$ and prior density $g(\cdot)$ the Bayes risk of an estimator is a real number; so now two competing estimators can be readily compared by comparing their respective Bayes risks, still preferring that estimator with smaller Bayes risk. In fact, we can now define the "best" estimator of $\tau(\theta)$ to be that estimator having the smallest Bayes risk; such an estimator is called the Bayes estimator.
The posterior Bayes estimator of $\tau(\theta)$, defined in Definition 30, was defined without regard to a loss function, whereas the definition given above requires specification of a loss function.
The definition leaves the problem of actually finding the Bayes estimator, which may not be easy for an arbitrary loss function, unsolved. However, for squared-error loss, finding the Bayes estimator is relatively easy. We seek that estimator, say $t^*(\cdot,\ldots,\cdot)$, which minimizes the expression
$$\int_\Theta\mathscr{R}_t(\theta)g(\theta)\,d\theta=\int_\Theta\mathscr{E}_\theta\bigl[[t(X_1,\ldots,X_n)-\tau(\theta)]^2\bigr]g(\theta)\,d\theta$$
as a function over possible estimators $t(\cdot,\ldots,\cdot)$. Now,
$$\int_\Theta\mathscr{E}_\theta\bigl[[t(X_1,\ldots,X_n)-\tau(\theta)]^2\bigr]g(\theta)\,d\theta
=\int\cdots\int\left\{\frac{\int_\Theta[\tau(\theta)-t(x_1,\ldots,x_n)]^2 f_{X_1,\ldots,X_n}(x_1,\ldots,x_n\,|\,\theta)g(\theta)\,d\theta}{f_{X_1,\ldots,X_n}(x_1,\ldots,x_n)}\right\}f_{X_1,\ldots,X_n}(x_1,\ldots,x_n)\prod_{i=1}^{n}dx_i,$$
and since the integrand is nonnegative, the double integral can be minimized if the expression within the braces is minimized for each $x_1,\ldots,x_n$. But the expression within the braces is the conditional expectation of $[\tau(\Theta)-t(x_1,\ldots,x_n)]^2$ with respect to the posterior distribution of $\Theta$ given $X_1=x_1,\ldots,X_n=x_n$, which is minimized as a function of $t(x_1,\ldots,x_n)$ for $t^*(x_1,\ldots,x_n)$ equal to the conditional expectation of $\tau(\Theta)$ with respect to the posterior distribution of $\Theta$ given $X_1=x_1,\ldots,X_n=x_n$. {Recall that $\mathscr{E}[(Z-a)^2]$ is minimized as a function of $a$ for $a^*=\mathscr{E}[Z]$.} Hence the Bayes estimator of $\tau(\theta)$ with respect to the squared-error loss function is given by
$$t^*(x_1,\ldots,x_n)=\mathscr{E}[\tau(\Theta)\,|\,X_1=x_1,\ldots,X_n=x_n]. \tag{23}$$
For a general loss function $\ell(\cdot\,;\cdot)$, the Bayes risk can be written as
$$\int_\Theta\Bigl[\int\cdots\int\ell(t(x_1,\ldots,x_n);\theta)f_{X_1,\ldots,X_n}(x_1,\ldots,x_n\,|\,\theta)\prod_{i=1}^{n}dx_i\Bigr]g(\theta)\,d\theta
=\int\cdots\int\Bigl[\int_\Theta\ell(t(x_1,\ldots,x_n);\theta)f_{\Theta|X_1=x_1,\ldots,X_n=x_n}(\theta\,|\,x_1,\ldots,x_n)\,d\theta\Bigr]f_{X_1,\ldots,X_n}(x_1,\ldots,x_n)\prod_{i=1}^{n}dx_i, \tag{25}$$
so that, for a general loss function, the Bayes estimator is found by minimizing the posterior expected loss (the bracketed inner integral) for each $x_1,\ldots,x_n$.
EXAMPLE 44 Let $X_1,\ldots,X_n$ be a random sample from the normal density with mean $\theta$ and variance 1. Consider estimating $\theta$ with a squared-error loss function. Assume that $\Theta$ has a normal density with mean $\mu_0$ and variance 1. Write $\mu_0=x_0$ when convenient. According to Eq. (25) the Bayes estimator is given as the mean of the posterior distribution of $\Theta$. The posterior density is
$$f_{\Theta|X_1=x_1,\ldots,X_n=x_n}(\theta\,|\,x_1,\ldots,x_n)
=\frac{\exp\Bigl[-\tfrac12\sum_{i=0}^{n}(x_i-\theta)^2\Bigr]}{\int_{-\infty}^{\infty}\exp\Bigl[-\tfrac12\sum_{i=0}^{n}(x_i-\theta)^2\Bigr]d\theta}
=\frac{1}{\sqrt{2\pi/(n+1)}}\exp\Bigl(-\frac{n+1}{2}\Bigl[\theta-\sum_{i=0}^{n}x_i/(n+1)\Bigr]^2\Bigr);$$
that is, the posterior distribution of $\Theta$ is normal with mean $\sum_{i=0}^{n}x_i/(n+1)$ and variance $1/(n+1)$.
Hence the Bayes estimator is the posterior mean
$$\frac{x_0+\sum_{1}^{n}x_i}{n+1}=\frac{\mu_0+\sum_{1}^{n}x_i}{n+1}.$$
Since the posterior distribution is normal, its mean equals its median, and so $(\mu_0+\sum X_i)/(n+1)$ is also the Bayes estimator with respect to a loss function equal to the absolute deviation. ////
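The following Python sketch (illustrative only; the true mean, prior mean, and sample size are arbitrary assumptions) computes this Bayes estimator and compares it with the sample mean.

```python
import numpy as np

rng = np.random.default_rng(4)
theta_true, mu0, n = 1.0, 0.0, 20

x = rng.normal(theta_true, 1.0, size=n)

# Bayes estimator of theta under squared-error loss with a N(mu0, 1) prior:
# the posterior is N((mu0 + sum x)/(n+1), 1/(n+1)), whose mean is the estimate.
bayes = (mu0 + x.sum()) / (n + 1)
print("Bayes estimate:", bayes, " sample mean:", x.mean())
# The prior acts like one extra observation equal to mu0, shrinking the sample mean toward mu0.
```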
EXAMPLE 45 Let $X_1,\ldots,X_n$ be a random sample from the density $f(x\,|\,\theta)=(1/\theta)I_{(0,\theta)}(x)$. Estimate $\theta$ with the loss function $\ell(t;\theta)=(t-\theta)^2/\theta^2$. Assume that $\Theta$ has a density given by $g(\theta)=I_{(0,1)}(\theta)$. Let $y_n$ denote $\max[x_1,\ldots,x_n]$. The posterior distribution of $\Theta$ is
$$f_{\Theta|X_1=x_1,\ldots,X_n=x_n}(\theta\,|\,x_1,\ldots,x_n)
=\frac{\prod_{i=1}^{n}(1/\theta)I_{(0,\theta)}(x_i)\,I_{(0,1)}(\theta)}{\int_0^1\prod_{i=1}^{n}(1/\theta)I_{(0,\theta)}(x_i)\,d\theta}
=\frac{(1/\theta)^n I_{(y_n,1)}(\theta)}{\int_{y_n}^{1}(1/\theta)^n\,d\theta}.$$
We seek that estimator which minimizes Eq. (24), or we seek that estimator $t(\cdot)$ which minimizes
$$\frac{\int\{[t(y_n)-\theta]^2/\theta^2\}(1/\theta^n)I_{(y_n,1)}(\theta)\,d\theta}{[1/(n-1)](1/y_n^{\,n-1}-1)};$$
the minimizing value is
$$t(y_n)=\frac{\int_{y_n}^{1}\theta^{-(n+1)}\,d\theta}{\int_{y_n}^{1}\theta^{-(n+2)}\,d\theta}=\frac{n+1}{n}\cdot\frac{1-y_n^{\,n}}{1-y_n^{\,n+1}}\,y_n.$$
////
by the area under the risk function and to seek that estimator which has the
least area under its risk function. We note that if the parameter space $\Theta$ is an interval, the estimator having the least area under its risk function is the Bayes estimator corresponding to a uniform prior distribution over the interval $\Theta$.
This is true because for a uniform prior distribution the Bayes risk is propor-
tional to the area under the risk function, and hence minimizing the Bayes risk is
equivalent to minimizing area.
We now evaluate the risk of $(\sum X_i+a)/(n+a+b)$ with the hope that we will be able to select $a$ and $b$ so that the risk will be constant. Write $t^*_{A,B}(x_1,\ldots,x_n)=A\sum x_i+B=(\sum x_i+a)/(n+a+b)$; then
$$\mathscr{R}_{t^*_{A,B}}(\theta)=\mathscr{E}_\theta\bigl[(A\textstyle\sum X_i+B-\theta)^2\bigr]
=\mathscr{E}_\theta\bigl[[A(\textstyle\sum X_i-n\theta)+B-\theta+nA\theta]^2\bigr]
=A^2\mathscr{E}_\theta\bigl[(\textstyle\sum X_i-n\theta)^2\bigr]+(B-\theta+nA\theta)^2
=nA^2\theta(1-\theta)+(B-\theta+nA\theta)^2
=\theta^2[(nA-1)^2-nA^2]+\theta[nA^2+2(nA-1)B]+B^2,$$
which is constant if $(nA-1)^2-nA^2=0$ and $nA^2+2(nA-1)B=0$. Now $(nA-1)^2-nA^2=0$ if $A=1/[\sqrt{n}(\sqrt{n}+1)]$, and $nA^2+2(nA-1)B=0$ if $B=-nA^2/[2(nA-1)]$, which is $1/[2(\sqrt{n}+1)]$ for $A=1/[\sqrt{n}(\sqrt{n}+1)]$. On solving for $a$ and $b$, we obtain $a=b=\sqrt{n}/2$; so $(\sum X_i+\sqrt{n}/2)/(n+\sqrt{n})$ is a Bayes estimator with constant risk and, hence, minimax. ////
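The constant-risk property can be checked numerically; the sketch below (not from the text; the choice n = 16 is arbitrary) evaluates the risk expression at several values of $\theta$.

```python
import numpy as np

n = 16
A = 1.0 / (np.sqrt(n) * (np.sqrt(n) + 1))
B = 1.0 / (2 * (np.sqrt(n) + 1))

def risk(theta):
    # Risk of (sum X + sqrt(n)/2)/(n + sqrt(n)) = A*sum(X) + B under squared-error loss:
    # n*A^2*theta*(1-theta) + (B - theta + n*A*theta)^2.
    return n * A**2 * theta * (1 - theta) + (B - theta + n * A * theta) ** 2

for theta in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(theta, risk(theta))
# The risk is the same for every theta, namely 1/[4*(sqrt(n)+1)^2], confirming minimaxity.
```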
8 VECTOR OF PARAMETERS
In this section we present a brief introduction to the problem of simultaneous
point estimation of several functions of a vector parameter. We will assume
that a random sample $X_1,\ldots,X_n$ of size $n$ from the density $f(x;\theta_1,\ldots,\theta_k)$ is available, where the parameter $\theta=(\theta_1,\ldots,\theta_k)$ and the parameter space $\Theta$ are $k$-dimensional. We want to simultaneously estimate $\tau_1(\theta),\ldots,\tau_r(\theta)$, where $\tau_j(\theta)$, $j=1,\ldots,r$, is some function of $\theta=(\theta_1,\ldots,\theta_k)$. Often $k=r$, but this need not be the case. An important special case is the estimation of $\theta=(\theta_1,\ldots,\theta_k)$ itself; then $r=k$, and $\tau_1(\theta)=\theta_1,\ldots,\tau_k(\theta)=\theta_k$. Another important special case is the estimation of $\tau(\theta)$; then $r=1$. A point estimator of $(\tau_1(\theta),\ldots,\tau_r(\theta))$ is a vector of statistics, say $(T_1,\ldots,T_r)$, where $T_j=t_j(X_1,\ldots,X_n)$ and $T_j$ is an estimator of $\tau_j(\theta)$.
Our presentation of the method of moments and maximum-likelihood
method as techniques for finding estimators included the possibility that the
parameter be vector-valued. So we already have methods of determining esti-
mators. What we need are some criteria for assessing the goodness of an esti-
mator, say $(T_1,\ldots,T_r)$, and for comparing two estimators, say $(T_1,\ldots,T_r)$ and $(T_1',\ldots,T_r')$. As was the case in estimating a real-valued function $\tau(\theta)$, where we wanted the values of our estimator to be close to $\tau(\theta)$, we now want the values of the estimator $(T_1,\ldots,T_r)$ to be close to $(\tau_1(\theta),\ldots,\tau_r(\theta))$. We want the distribution of $(T_1,\ldots,T_r)$ to be concentrated around $(\tau_1(\theta),\ldots,\tau_r(\theta))$.
FIGURE 8
and (iii) implies that Wilks' generalized variance of (T{, ... , T;) is smaller than
Wilks' generalized variance of (Th ... , T,.).
Theorem 10 of Sec. 5 can also be generalized to r dimensions, but first the
concept of completeness has to be generalized.
where $\theta_1<\theta_2$. Write $\theta=(\theta_1,\theta_2)$. Let $Y_1=\min[X_1,\ldots,X_n]$ and $Y_n=\max[X_1,\ldots,X_n]$. We want to show that $Y_1$ and $Y_n$ are jointly complete. (We know that they are jointly sufficient.) Let $\lambda(Y_1,Y_n)$ be an unbiased estimator of 0, that is,
$$\mathscr{E}_\theta[\lambda(Y_1,Y_n)]\equiv0\qquad\text{for all }\theta\in\Theta.$$
Now
Theorem 16 Let $X_1,\ldots,X_n$ be a random sample from $f(x;\theta_1,\ldots,\theta_k)$. If
$$f(x;\theta_1,\ldots,\theta_k)=a(\theta_1,\ldots,\theta_k)\,b(x)\exp\Bigl[\sum_{j=1}^{k}c_j(\theta_1,\ldots,\theta_k)\,d_j(x)\Bigr],$$
that is, if $f(x;\theta_1,\ldots,\theta_k)$ is a member of the $k$-parameter exponential family, then $\bigl(\sum_{i=1}^{n}d_1(X_i),\ldots,\sum_{i=1}^{n}d_k(X_i)\bigr)$ is a minimal set of jointly complete and sufficient statistics. ////
Now
$$\phi_{\theta_1,\theta_2}(x)=\frac{1}{\sqrt{2\pi\theta_2}}\exp\Bigl(-\frac{\theta_1^2}{2\theta_2}\Bigr)\exp\Bigl(\frac{\theta_1}{\theta_2}x-\frac{1}{2\theta_2}x^2\Bigr);$$
so $\sum_{i=1}^{n}X_i$ and $\sum_{i=1}^{n}X_i^2$ are jointly complete and sufficient statistics by Theorem 16. ////
We will state without proof the vector analog of Theorem 10. In the same sense that an UMVUE was optimum, this following theorem gives an optimum estimator for a vector of functions of the parameter.
Theorem 17 Let $X_1,\ldots,X_n$ be a random sample from $f(x;\theta_1,\ldots,\theta_k)$. Write $\theta=(\theta_1,\ldots,\theta_k)$. If $S_1=d_1(X_1,\ldots,X_n),\ldots,S_m=d_m(X_1,\ldots,X_n)$ is a set of jointly complete sufficient statistics and if there exists an unbiased estimator of $(\tau_1(\theta),\ldots,\tau_r(\theta))$, then there exists a unique unbiased estimator of $(\tau_1(\theta),\ldots,\tau_r(\theta))$, say $T_1^*=t_1^*(S_1,\ldots,S_m),\ldots,T_r^*=t_r^*(S_1,\ldots,S_m)$, where each $t_j^*$ is a function of $S_1,\ldots,S_m$, which satisfies:
(i) $\operatorname{var}_\theta[T_j^*]\le\operatorname{var}_\theta[T_j]$ for every $\theta\in\Theta$, $j=1,\ldots,r$, for any unbiased estimator $(T_1,\ldots,T_r)$ of $(\tau_1(\theta),\ldots,\tau_r(\theta))$.
(ii) The ellipsoid of concentration of $(T_1^*,\ldots,T_r^*)$ is contained in the ellipsoid of concentration of $(T_1,\ldots,T_r)$, where $(T_1,\ldots,T_r)$ is any unbiased estimator of $(\tau_1(\theta),\ldots,\tau_r(\theta))$. ////
There are four different maximal subscripts, all of which are intended: $n$ denotes the sample size, $k$ denotes the dimension of the parameter $\theta$, $m$ is the number of real-valued statistics in our jointly complete and sufficient set, and $r$ is the dimension of the vector of functions of the parameter that we are trying to estimate. In practice, it will turn out that usually $k=m$. The estimator $(T_1^*,\ldots,T_r^*)$ is optimal in the sense that among unbiased estimators it is the best estimator using any of the four generalizations of variance that have been proposed.
Just as was the case in using Theorem 10, we have two ways of finding $(T_1^*,\ldots,T_r^*)$. The first is to guess the correct form of the functions $t_1^*,\ldots,t_r^*$, which are functions of $S_1,\ldots,S_m$, that will make them unbiased estimators of $\tau_1(\theta),\ldots,\tau_r(\theta)$. The second is to find any set of unbiased estimators of $\tau_1(\theta),\ldots,\tau_r(\theta)$ and then calculate the conditional expectation of these unbiased estimators given the set of jointly complete and sufficient statistics. We employ only the first method in the following examples.
EXAMPLE 50 Let $X_1,\ldots,X_n$ be a random sample from the normal density $f(x;\theta_1,\theta_2)=\phi_{\mu,\sigma^2}(x)$. By Examples 22 and 48, $\sum X_i$ and $\sum X_i^2$ are jointly complete and sufficient statistics. Hence, by Theorem 17, $\bigl(\sum X_i/n,\ \sum(X_i-\bar X)^2/(n-1)\bigr)$ is an unbiased estimator of $(\mu,\sigma^2)$ whose corresponding ellipsoid of concentration is contained in the ellipsoid of concentration of any other unbiased estimator. [NOTE: $\sum(X_i-\bar X)^2=\sum X_i^2-n\bar X^2$; so the estimator $\sum(X_i-\bar X)^2/(n-1)$ is a function of the jointly complete and sufficient statistics $\sum X_i$ and $\sum X_i^2$.]
For this same example, suppose we want to estimate that function of $\theta=(\mu,\sigma^2)$ satisfying the following integral equation:
$$\int_{\tau(\theta)}^{\infty}\phi_{\mu,\sigma^2}(x)\,dx=\alpha$$
for $\alpha$ fixed and known. $\tau(\theta)$ is that point which satisfies $P[X_i>\tau(\theta)]=\alpha$; that is, it is that point which has $100\alpha$ percent of the mass of the population density to its right, or $\tau(\theta)$ is the $(1-\alpha)$th quantile point. We have $1-\alpha=\Phi([\tau(\theta)-\mu]/\sigma)$; so $\tau(\theta)=\mu+z_{1-\alpha}\sigma$, where $z_{1-\alpha}$ is given by $\Phi(z_{1-\alpha})=1-\alpha$. Since $\alpha$ is known, $z_{1-\alpha}$ can be obtained from a table of the standard normal distribution. To find the UMVUE of $\tau(\theta)$, it suffices to find the unbiased estimator of $\mu+z_{1-\alpha}\sigma$ which is a function of $\sum X_i$ and $\sum X_i^2$; this leads to
$$T^*=\bar X+z_{1-\alpha}\,\frac{\Gamma[(n-1)/2]}{\Gamma(n/2)\sqrt{2}}\sqrt{\sum_{i=1}^{n}(X_i-\bar X)^2}.$$
////
9 OPTIMUM PROPERTIES OF MAXIMUM-LIKELIHOOD ESTIMATION
Let $X_1,\ldots,X_n$ be a random sample from the density $f(\cdot\,;\theta)$. Let $\hat\Theta_n=\hat\theta_n(X_1,\ldots,X_n)$ denote the maximum-likelihood estimator of $\theta$ based on a sample of size $n$. We defined and discussed in Sec. 3
of this chapter a number of properties that an estimator may or may not possess.
Recall that some of these properties, such as unbiasedness and uniformly
minimum variance, are referred to as small-sample properties, and others of
these properties, such as consistency and best asymptotically normal, are referred
to as large-sample properties. The use of the word "small" in "small-sample"
is somewhat misleading since a small-sample property is really a property that is
defined for a fixed sample size, which may be fixed to be either small or large.
By a large-sample property, we mean a property that is defined in terms of the
sample size increasing to infinity. Our main result of this section will be con-
tained in Theorem 18 below and will concern optimum large-sample properties
of maximum-likelihood estimation.
We will not be able to prove Theorem 18. In fact, we have not precisely
stated it, inasmuch as we have not delineated the regularity conditions. We
do, however, want to emphasize what the theorem says. Loosely speaking, it
says that for large sample size the maximum-likelihood estimator of 8 is as
good an estimator as there is. (Other estimators might be just as good but not
better.)
We might point out one feature of the theorem, namely, that the asymptotic
normal distribution of the maximum-likelihood estimator is not given in terms of
the distribution of the maximum-likelihood estimator. It is given in terms of
$f(\cdot\,;\theta)$, the density sampled. Also, the variance of the asymptotic normal
distribution given in the theorem is the Cramer-Rao lower bound.
The variance of the asymptotic normal distribution of the maximum-likelihood estimator of $\tau(\theta)$ is
$$\frac{[\tau'(\theta)]^2}{n\,\mathscr{E}_\theta\bigl[\bigl(\frac{\partial}{\partial\theta}\log f(X;\theta)\bigr)^2\bigr]}=\frac{[\tau'(\theta)]^2}{-n\,\mathscr{E}_\theta\bigl[\frac{\partial^2}{\partial\theta^2}\log f(X;\theta)\bigr]}.$$
For the normal density with $\theta=(\theta_1,\theta_2)=(\mu,\sigma^2)$, where
$$f(x;\theta)=f(x;\theta_1,\theta_2)=\phi_{\theta_1,\theta_2}(x)=\frac{1}{\sqrt{2\pi\theta_2}}\,e^{-(1/2\theta_2)(x-\theta_1)^2},$$
one finds
$$\sigma_1^2=\frac{1}{-n\,\mathscr{E}\bigl[\frac{\partial^2}{\partial\theta_1^2}\log f(X;\theta)\bigr]}=\frac{\theta_2}{n}
\qquad\text{and}\qquad
\sigma_2^2=\frac{1}{-n\,\mathscr{E}\bigl[\frac{\partial^2}{\partial\theta_2^2}\log f(X;\theta)\bigr]}=\frac{2\theta_2^2}{n},$$
and, because
$$\frac{\partial^2}{\partial\theta_1\,\partial\theta_2}\log f(x;\theta)=-\frac{x-\theta_1}{\theta_2^2}
\qquad\text{and}\qquad
\mathscr{E}\Bigl[\frac{\partial^2}{\partial\theta_1\,\partial\theta_2}\log f(X;\theta)\Bigr]=0,$$
the asymptotic correlation of the two maximum-likelihood estimators is $\rho_{U_1U_2}=0$.
PROBLEMS
1 An urn contains black and white balls. A sample of size n is drawn with replace-
ment. What is the maximum-likelihood estimator of the ratio R of black to
white balls in the urn? Suppose that one draws balls one by one with replacement
until a black ball appears. Let X be the number of draws required (not counting
the last draw). This operation is repeated $n$ times to obtain a sample $X_1,X_2,\ldots,X_n$. What is the maximum-likelihood estimator of $R$ on the basis of this sample?
2 Suppose that $n$ cylindrical shafts made by a machine are selected at random from the production of the machine and their diameters and lengths measured. It is found that $N_{11}$ have both measurements within the tolerance limits, $N_{12}$ have satisfactory lengths but unsatisfactory diameters, $N_{21}$ have satisfactory diameters but unsatisfactory lengths, and $N_{22}$ are unsatisfactory as to both measurements.
$\sum N_{ij}=n$. Each shaft may be regarded as a drawing from a multinomial population with density
$$p_{11}^{x_{11}}p_{12}^{x_{12}}p_{21}^{x_{21}}(1-p_{11}-p_{12}-p_{21})^{x_{22}}\qquad\text{for }x_{ij}=0,1;\ \textstyle\sum x_{ij}=1,$$
having three parameters. What are the maximum-likelihood estimates of the parameters if $N_{11}=90$, $N_{12}=6$, $N_{21}=3$, and $N_{22}=1$?
3 Referring to Prob. 2, suppose that there is no reason to believe that defective diameters can in any way be related to defective lengths. Then the distribution of the $X_{ij}$ can be set up in terms of two parameters: $p_1$, the probability of a satisfactory length, and $q_1$, the probability of a satisfactory diameter. The density of the $X_{ij}$ is then
$$(p_1q_1)^{x_{11}}[p_1(1-q_1)]^{x_{12}}[(1-p_1)q_1]^{x_{21}}[(1-p_1)(1-q_1)]^{x_{22}}\qquad\text{for }x_{ij}=0,1;\ \textstyle\sum x_{ij}=1.$$
What are the maximum-likelihood estimates for these parameters? Are the probabilities for the four classes different under this model from those obtained in the above problem?
4 A sample of size $n_1$ is to be drawn from a normal population with mean $\mu_1$ and variance $\sigma_1^2$. A second sample of size $n_2$ is to be drawn from a normal population with mean $\mu_2$ and variance $\sigma_2^2$. What is the maximum-likelihood estimator of $\theta=\mu_1-\mu_2$? If we assume that the total sample size $n=n_1+n_2$ is fixed, how should the $n$ observations be divided between the two populations in order to minimize the variance of the maximum-likelihood estimator of $\theta$?
5 A sample of size $n$ is drawn from each of four normal populations, all of which have the same variance $\sigma^2$. The means of the four populations are $a+b+c$, $a+b-c$, $a-b+c$, and $a-b-c$. What are the maximum-likelihood estimators of $a$, $b$, $c$, and $\sigma^2$? (The sample observations may be denoted by $X_{ij}$, $i=1,2,3,4$ and $j=1,2,\ldots,n$.)
6 Observations $X_1,X_2,\ldots,X_n$ are drawn from normal populations with the same mean $\mu$ but with different variances $\sigma_1^2,\sigma_2^2,\ldots,\sigma_n^2$. Is it possible to estimate all the parameters? If we assume that the $\sigma_i^2$ are known, what is the maximum-likelihood estimator of $\mu$?
7 The radius of a circle is measured with an error of measurement which is distributed $N(0,\sigma^2)$, $\sigma^2$ unknown. Given $n$ independent measurements of the radius, find an unbiased estimator of the area of the circle.
8 Let $X$ be a single observation from the Bernoulli density $f(x;\theta)=\theta^x(1-\theta)^{1-x}I_{\{0,1\}}(x)$, where $0<\theta<1$. Let $t_1(X)=X$ and $t_2(X)=\tfrac12$.
(a) Are both tl(X) and t 2(X) unbiased? Is either?
(b) Compare the mean-squared error of tl(X) with that of t 2(X).
9 Let $X_1,X_2$ be a random sample of size 2 from the Cauchy density
$$f(x;\theta)=\frac{1}{\pi[1+(x-\theta)^2]},\qquad-\infty<\theta<\infty.$$
Argue that $(X_1+X_2)/2$ is a Pitman-closer estimator of $\theta$ than $X_1$ is. [Note that $(X_1+X_2)/2$ is not more concentrated than $X_1$ since they have identical distributions.]
10 Let $\theta$ denote some physical quantity, and let $x_1,\ldots,x_n$ denote $n$ measurements of the physical quantity. If $\theta$ is estimated by $\hat\theta$, then the residual of the $i$th measurement is defined by $x_i-\hat\theta$, $i=1,\ldots,n$. Show that there is only one estimator with the property that the residuals sum to 0, and find that estimator. Also, find that estimator which minimizes the sum of squared residuals.
11 Let $X_1,\ldots,X_n$ be a random sample from some density which has mean $\mu$ and variance $\sigma^2$.
(a) Show that $\sum_{i=1}^{n}a_iX_i$ is an unbiased estimator of $\mu$ for any set of known constants $a_1,\ldots,a_n$ satisfying $\sum_{i=1}^{n}a_i=1$.
(b) If $\sum_{i=1}^{n}a_i=1$, show that $\operatorname{var}[\sum_{i=1}^{n}a_iX_i]$ is minimized for $a_i=1/n$, $i=1,\ldots,n$.
13 Let Xl, X 2 be a random sample of size 2 from a normal distribution with mean 8
and variance 1. Consider the following three estimators of 8:
T, = t,(Xl, X 2 ) = IX, + tX2
T2 = t 2 (X" X 2 ) = IX, + iX2
T3 = t 3 (X" X 2 ) = IX, + IX2 •
For the loss function t(t; 8):::;;: 38 2 (t - 8)2, find £1t,,(8) for i = 1, 2, 3, and
(a)
sketch it.
(b) Show that T, is unbiased for i = 1, 2, 3.
14 Let $X_1,\ldots,X_n,\ldots$ be independent and identically distributed random variables from some distribution for which the first four central moments exist. We know that $\mathscr{E}[S^2]=\sigma^2$ and
$$\operatorname{var}[S^2]=\frac{1}{n}\Bigl(\mu_4-\frac{n-3}{n-1}\,\sigma^4\Bigr),$$
where
25 Let $X_1,\ldots,X_n$ be a random sample from the binomial distribution
$$\binom{m}{x}p^x(1-p)^{m-x},\qquad x=0,1,\ldots,m,$$
where $m$ is known and $0<p<1$.
(a) Estimate $p$ by the method of moments and the method of maximum likelihood.
(b) Is there an UMVUE of $p$? If so, find it.
*26 Let $X_1,\ldots,X_n$ be a random sample from the discrete density function
where $\theta>0$.
(a) Find a maximum-likelihood estimator of $\theta$.
(b) Is $Y_n=\max[X_1,\ldots,X_n]$ a sufficient statistic? Is $Y_n$ complete?
(c) Is there an UMVUE of $\theta$? If so, find it.
$f(x;\theta)$ for $\theta>0$.
(a) Find a maximum-likelihood estimator of $\theta$.
(b) Suppose $n=1$, so that you have only one observation, say $X=X_1$. Clearly $X$ is a sufficient statistic. Is $X$ a minimal sufficient statistic? Is $X$ complete?
34 Let $X_1,\ldots,X_n$ be a random sample from the negative exponential density $f(x;\theta)=\theta e^{-\theta x}I_{[0,\infty)}(x)$.
(a) Find the uniformly minimum-variance unbiased estimator of $\operatorname{var}[X_1]$ if such exists.
(b) Find an unbiased estimator of $1/\theta$ based only on $Y_1=\min[X_1,\ldots,X_n]$. Is your sequence of estimators mean-squared-error consistent?
35 Let $X_1,\ldots,X_n$ be a random sample from the density
$$f(x;\theta)=\frac{\log\theta}{\theta-1}\,\theta^xI_{(0,1)}(x),\qquad\theta>1.$$
*36 Show that
$$f(x;\theta)=\frac{1}{\theta}\,I_{[\theta,2\theta]}(x),\qquad\theta>0.$$
where $\theta>0$.
(a) Is there a unidimensional sufficient statistic? If so, is it complete?
(b) Find a maximum-likelihood estimator of $\theta^2=P[X_1=2]$. Is it unbiased?
(c) Find an unbiased estimator of $\theta$ whose variance coincides with the corresponding Cramer-Rao lower bound if such exists. If such an estimator does not exist, prove that it does not.
(d) Find a uniformly minimum-variance unbiased estimator of $\theta^2$ if such exists.
(e) Using the squared-error loss function, find a Bayes estimator of $\theta$ with respect to the beta prior distribution.
(f) Using the squared-error loss function, find a minimax estimator of $\theta$.
(g) Find a mean-squared-error consistent estimator of $\theta^2$.
where $\theta>0$. For a squared-error loss function find the Bayes estimator of $\theta$ for a gamma prior distribution. Find the posterior distribution of $\Theta$. Find the posterior Bayes estimator of $\tau(\theta)=P[X_1=0]$.
49 Let $X_1,\ldots,X_n$ be a random sample from $f(x\,|\,\theta)=(1/\theta)I_{(0,\theta)}(x)$, where $\theta>0$. For the loss function $(t-\theta)^2/\theta^2$ and a prior distribution proportional to $\theta^{-a}I_{(1,\infty)}(\theta)$, find the Bayes estimator of $\theta$.
50 Let $X_1,\ldots,X_n$ be a random sample from the Bernoulli distribution. Using the squared-error loss function, find that estimator of $\theta$ which has minimum area under its risk function.
51 Let $X_1,\ldots,X_n$ be a random sample from the geometric density
where $-\infty<a<\infty$ and $\beta>0$. Show that $Y_1$ and $\sum X_i$ are jointly sufficient. It can be shown that $Y_1$ and $\sum(X_i-Y_1)$ are jointly complete and independent of each other. Using such results, find the estimator of $(a,\beta)$ that has an ellipsoid of concentration that is contained in the ellipsoid of concentration of any other unbiased estimator of $(a,\beta)$. ($Y_1=\min[X_1,\ldots,X_n]$.)
54 Let $X_1,\ldots,X_n$ be a random sample from the density
$$f(x;a,\theta)=(1-\theta)\theta^{x-a}I_{\{a,a+1,\ldots\}}(x),$$
where $-\infty<a<\infty$ and $0<\theta<1$.
(a) Find a two-dimensional set of sufficient statistics.
*(b) Find the maximum-likelihood estimator of $(a,\theta)$.
55 Let $X_1,\ldots,X_n$ be a random sample from the density
2 CONFIDENCE INTERVALS
In practice, estimates are often given in the form of the estimate plus or minus a certain amount. For instance, an electric charge may be estimated to be $(4.770\pm.005)\times10^{-10}$ electrostatic unit with the idea that the first factor is very unlikely to be outside the range 4.765 to 4.775. A cost accountant for a publishing company in trying to allow for all factors which enter into the cost of producing a certain book (actual production costs, proportion of plant overhead, proportion of executive salaries, etc.) may estimate the cost to be $83\pm4.5$ cents per volume with the implication that the correct cost very probably lies between 78.5 and 87.5 cents per volume. The Bureau of Labor Statistics may estimate the number of unemployed in a certain area to be $2.4\pm.3$ million at a given time, feeling rather sure that the actual number is between 2.1 and 2.7 million. What we are saying is that in practice one is quite accustomed to seeing estimates in the form of intervals.
In order to give precision to these ideas, we shall consider a particular
example. Suppose that a random sample (1.2, 3.4, .6, 5.6) of four observations is drawn from a normal population with an unknown mean $\mu$ and a known standard deviation 3. The maximum-likelihood estimate of $\mu$ is the mean of the sample observations: $\bar x=2.7$.
We wish to determine upper and lower limits which are rather certain to contain
the true unknown parameter value between them.
In general, for samples of size 4 from the given distribution the quantity
$$Z=\frac{\bar X-\mu}{\sigma_{\bar X}}$$
will be normally distributed with mean 0 and unit variance; $\bar X$ is the sample mean, and $\sigma_{\bar X}$ is $\sigma/\sqrt{n}=3/2$. Thus the quantity $Z$ has the density
$$f_Z(z)=\phi(z)=\frac{1}{\sqrt{2\pi}}\,e^{-\frac{1}{2}z^2},$$
and the statement $-1.96<(\bar X-\mu)/1.5<1.96$, which holds with probability .95, is equivalent to $\bar X-2.94<\mu<\bar X+2.94$. We may therefore rewrite Eq. (1) in the form
$$P[\bar X-2.94<\mu<\bar X+2.94]=.95. \tag{2}$$
FIGURE 1
95 percent of the area under ¢(z) lies between a and b will determine a 95 percent
confidence interval. Ordinarily one would want the confidence interval to be
as short as possible, and it is made so by making a and b as close together as
possible because the relation P[a < Z < b] = .95 gives rise to a confidence in-
terval of length ((J/Jn)(b - a). The distance b - a will be minimized for a
fixed area when ¢(a) = ¢(b), as is evident on referring to Fig. 1. If the point b
is moved a short distance to the left, the point a will need to be moved a lesser
distance to the left in order to keep the area the same; this operation decreases
the length of the interval and will continue to do so as long as ¢(b) < ¢(a).
Since ¢(z) is symmetrical about z = 0 in the present example, the minimum value
of $b-a$ for a fixed area occurs when $b=-a$. Thus for $\bar x=2.7$, $(-.24,\,5.64)$ gives the shortest 95 percent confidence interval, and $(-1.17,\,6.57)$ gives the shortest 99 percent confidence interval for $\mu$.
In most problems it is not possible to construct confidence intervals which
are shortest for a given confidence coefficient. In these cases one may wish to
find a confidence interval which has the shortest expected length or is such that the probability that the confidence interval covers a value $\mu^*$ is minimized, where $\mu^*\ne\mu$.
The method of finding a confidence interval that has been illustrated in the
example above is a general method. The method entails finding, if possible, a
function (the quantity Z above) of the sample and the parameter to be estimated
which has a distribution independent of the parameter and any other parameters.
Then any probability statement of the form P[a < Z < b] =}' for known a and
b, where Z is the function, will give rise to a probability statement about the
parameter that we hope can be rewritten to give a confidence interval. This
method, or technique, is fully described in Subsec. 2.3 below. This technique is applicable in many important problems, but in others it is not because in these others it is either impossible to find functions of the desired form or it is impossible to rewrite the derived probability statements. These latter problems can
be dealt with by a more general technique to be described in Sec. 4.
The idea of interval estimation can be extended to include simultaneous
FIGURE 2
estimation of several parameters. Thus the two parameters of the normal distri-
bution may be estimated by some plane region R in the so-called parameter
space, that is, the space of all possible combinations of values of $\mu$ and $\sigma^2$. A
95 percent confidence region is a region constructible from the sample such that
if samples were repeatedly drawn and a region constructed for each sample,
95 percent of those regions in a long-term relative-frequency sense would include
the true parameter point $(\mu_0,\sigma_0^2)$ (see Fig. 2).
Confidence intervals and regions provide good illustrations of uncertain
inferences. In Eq. (2) the inference is made that the interval - .24 to 5.64 covers
the true parameter value, but that statement is not made categorically. A
measure, .05, of the uncertainty of the inference is an essential part of the
statement.
We note that one or the other, but not both, of the two statistics $t_1(X_1,\ldots,X_n)$ and $t_2(X_1,\ldots,X_n)$ may be constant; that is, one of the two end points of the random interval $(T_1,T_2)$ may be constant.
As was the case in point estimation, our problem is twofold: First, we need
methods of finding a confidence interval, and, second, we need criteria for comparing competing confidence intervals or for assessing the goodness of a confidence interval. In the next subsection, we will describe one method of
finding confidence intervals and call it the pivotal-quantity method.
FIGURE 3
Definition 3 Pivotal quantity Let $X_1,\ldots,X_n$ be a random sample from the density $f(\cdot\,;\theta)$. Let $Q=q(X_1,\ldots,X_n;\theta)$; that is, let $Q$ be a function of $X_1,\ldots,X_n$ and $\theta$. If $Q$ has a distribution that does not depend on $\theta$, then $Q$ is defined to be a pivotal quantity. ////
FIGURE 4
interval, is not random, then we might select that pair of ql and q2 that makes the
length of the interval smallest; or if the length of the confidence interval is
random, then we might select that pair of ql and q2 that makes the average
length of the interval smallest.
As a third and final comment, note that the essential feature of the pivotal-quantity method is that the inequality $\{q_1<q(x_1,\ldots,x_n;\theta)<q_2\}$ can be rewritten, or inverted or "pivoted," as $\{t_1(x_1,\ldots,x_n)<\tau(\theta)<t_2(x_1,\ldots,x_n)\}$ for any possible sample value $x_1,\ldots,x_n$. [This last comment indicates that "pivotal quantity" may be a misnomer since according to our definition $Q=q(X_1,\ldots,X_n;\theta)$ may be a pivotal quantity, yet it may be impossible to "pivot" it.]
EXAMPLE 3 Let $X_1,\ldots,X_n$ be a random sample from $\phi_{\theta,1}(x)$. Consider estimating $\tau(\theta)=\theta$. $Q=q(X_1,\ldots,X_n;\theta)=(\bar X-\theta)/\sqrt{1/n}$ has a standard normal distribution and, hence, is a pivotal quantity. $f_Q(q)=\phi(q)$. For given $\gamma$ there exist $q_1$ and $q_2$ such that $P[q_1<Q<q_2]=\gamma$ (in fact, there exist many such $q_1$ and $q_2$). See Fig. 4.
Now $\{q_1<(\bar x-\theta)/\sqrt{1/n}<q_2\}$ if and only if $\{\bar x-q_2\sqrt{1/n}<\theta<\bar x-q_1\sqrt{1/n}\}$; so $(\bar X-q_2\sqrt{1/n},\;\bar X-q_1\sqrt{1/n})$ is a $100\gamma$ percent confidence interval for $\theta$. The length of the confidence interval is given by $(\bar X-q_1\sqrt{1/n})-(\bar X-q_2\sqrt{1/n})=(q_2-q_1)\sqrt{1/n}$; so the length will be made smallest by selecting $q_1$ and $q_2$ so that $q_2-q_1$ is a minimum under the restriction that $\gamma=P[q_1<Q<q_2]=\Phi(q_2)-\Phi(q_1)$, and $q_2-q_1$ will be a minimum if $q_1=-q_2$, as can be seen from Fig. 4. ////
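The pivotal-quantity inversion is easy to carry out numerically. The Python sketch below (not part of the text) applies it to the chapter's opening numerical example, assuming the known standard deviation 3 used there; scipy's norm.ppf supplies the normal quantile.

```python
import numpy as np
from scipy.stats import norm

x = np.array([1.2, 3.4, 0.6, 5.6])   # the sample used earlier in the chapter
sigma, gamma = 3.0, 0.95
n, xbar = x.size, x.mean()

z = norm.ppf((1 + gamma) / 2)        # q2 = -q1 = z gives the shortest interval
half_width = z * sigma / np.sqrt(n)
print((xbar - half_width, xbar + half_width))
# For xbar = 2.7, sigma = 3, n = 4 this reproduces the interval (-0.24, 5.64).
```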
subject to
$$\int_{q_1}^{q_2}f_T(t)\,dt=\gamma; \tag{3}$$
but
$$\frac{d}{dq_1}(q_2-q_1)=\frac{dq_2}{dq_1}-1=\frac{f_T(q_1)}{f_T(q_2)}-1=0$$
if and only if $f_T(q_1)=f_T(q_2)$, which implies that $q_1=q_2$ [in which case $\int_{q_1}^{q_2}f_T(t)\,dt\ne\gamma$] or $q_1=-q_2$. $q_1=-q_2$ is the desired solution, and such $q_1$ and $q_2$ can be readily obtained from a table of the $t$ distribution.
Similarly, for the pivotal quantity $Q=(n-1)S^2/\sigma^2$, which has a chi-square distribution with $n-1$ degrees of freedom, $\{q_1<(n-1)S^2/\sigma^2<q_2\}$ if and only if $\{(n-1)S^2/q_2<\sigma^2<(n-1)S^2/q_1\}$; so
$$\Bigl(\frac{(n-1)S^2}{q_2},\;\frac{(n-1)S^2}{q_1}\Bigr)$$
is a $100\gamma$ percent confidence interval for $\sigma^2$, where $q_1$ and $q_2$ are given by $P[q_1<Q<q_2]=\gamma$. See Fig. 5.
FIGURE 5
$q_1$ and $q_2$ are often selected so that $P[Q<q_1]=P[Q>q_2]=(1-\gamma)/2$. Such a confidence interval is sometimes referred to as the equal-tails confidence interval for $\sigma^2$. $q_1$ and $q_2$ can be obtained from a table of the chi-square distribution. Again, we might be interested in selecting $q_1$ and $q_2$ so as to minimize the length, say $L$, of the confidence interval.
$$L=(n-1)S^2\Bigl(\frac{1}{q_1}-\frac{1}{q_2}\Bigr),$$
and so
$$\frac{dL}{dq_1}=(n-1)S^2\Bigl(-\frac{1}{q_1^2}+\frac{1}{q_2^2}\,\frac{dq_2}{dq_1}\Bigr)=(n-1)S^2\Bigl(-\frac{1}{q_1^2}+\frac{1}{q_2^2}\cdot\frac{f_Q(q_1)}{f_Q(q_2)}\Bigr)=0,$$
which implies that $q_1^2f_Q(q_1)=q_2^2f_Q(q_2)$. The length of the confidence interval will be minimized if $q_1$ and $q_2$ are selected so that
$$q_1^2f_Q(q_1)=q_2^2f_Q(q_2)\qquad\text{subject to}\qquad\int_{q_1}^{q_2}f_Q(q)\,dq=\gamma.$$
A solution for $q_1$ and $q_2$ can be obtained by trial and error or numerical integration.
and
$$P\Bigl[\frac{(n-1)S^2}{\chi^2_{.975}}<\sigma^2<\frac{(n-1)S^2}{\chi^2_{.025}}\Bigr]=.95, \tag{5}$$
where $t_{.975}$ is the .975th quantile point of the $t$ distribution with $n-1$ degrees of freedom and $\chi^2_{.025}$ and $\chi^2_{.975}$ are the .025th quantile point and .975th quantile point, respectively, of the chi-square distribution with $n-1$ degrees of freedom. The region displayed in Fig. 6 does indeed give a confidence region for $(\mu,\sigma^2)$, but we do not know what its corresponding confidence coefficient is. [It is not $.95^2$ since the two events given in Eqs. (4) and (5) are not independent.]
A confidence region, whose corresponding confidence coefficient can be
FIGURE 7
readily evaluated, may be set up, however, using the independence of $\bar X$ and $S^2$. Since
$$\frac{\bar X-\mu}{\sigma/\sqrt{n}}\qquad\text{and}\qquad\frac{(n-1)S^2}{\sigma^2}$$
are each pivotal quantities, we may find numbers $q_1$, $q_2$, and $q_2'$ such that
$$P\Bigl[-q_1<\frac{\bar X-\mu}{\sigma/\sqrt{n}}<q_1\Bigr]=\gamma_1 \tag{6}$$
and
$$P\Bigl[q_2'<\frac{(n-1)S^2}{\sigma^2}<q_2\Bigr]=\gamma_2. \tag{7}$$
By independence,
$$P\Bigl[-q_1<\frac{\bar X-\mu}{\sigma/\sqrt{n}}<q_1;\;q_2'<\frac{(n-1)S^2}{\sigma^2}<q_2\Bigr]=\gamma_1\gamma_2. \tag{8}$$
The four inequalities in Eq. (8) determine a region in the parameter space, which is easily found by plotting its boundaries. One merely replaces the inequality signs by equality signs and plots each of the four resulting equations as functions of $\mu$ and $\sigma^2$. A region such as the shaded area in Fig. 7 will result.
We might note that a confidence region for $(\mu,\sigma)$ could be obtained in exactly the same way; the equations would be plotted as functions of $\sigma$ instead of $\sigma^2$, and the parabola in Fig. 7 would become a pair of straight lines given by
$$-t_{(1+\gamma)/2}<\frac{(\bar Y-\bar X)-(\mu_2-\mu_1)}{\sqrt{(1/m+1/n)S_p^2}}<t_{(1+\gamma)/2}$$
if and only if
$$(\bar Y-\bar X)-t_{(1+\gamma)/2}\sqrt{\Bigl(\frac1m+\frac1n\Bigr)S_p^2}<\mu_2-\mu_1<(\bar Y-\bar X)+t_{(1+\gamma)/2}\sqrt{\Bigl(\frac1m+\frac1n\Bigr)S_p^2};$$
hence
$$\Bigl((\bar Y-\bar X)-t_{(1+\gamma)/2}\sqrt{\Bigl(\frac1m+\frac1n\Bigr)S_p^2},\;(\bar Y-\bar X)+t_{(1+\gamma)/2}\sqrt{\Bigl(\frac1m+\frac1n\Bigr)S_p^2}\Bigr) \tag{9}$$
is a $100\gamma$ percent confidence interval for $\mu_2-\mu_1$. For paired observations, with $D_i=Y_i-X_i$ and $\bar D=(1/n)\sum D_i$, the corresponding interval is
$$\Bigl(\bar D-t_{(1+\gamma)/2}\sqrt{\frac{\sum(D_i-\bar D)^2}{n(n-1)}},\;\bar D+t_{(1+\gamma)/2}\sqrt{\frac{\sum(D_i-\bar D)^2}{n(n-1)}}\Bigr), \tag{10}$$
where $t_{(1+\gamma)/2}$ is the $[(1+\gamma)/2]$th quantile point of the $t$ distribution with $n-1$
degrees of freedom. The above obtained confidence interval for 112 - III is
often referred to as the confidence interval for the difference in means for paired
observations. The ith X observation is paired with the ith Yobservation.
Remark If $X_1,\ldots,X_n$ is a random sample from $f(\cdot\,;\theta)$, for which the corresponding cumulative distribution function $F(x;\theta)$ is continuous in $x$, then, by the probability integral transform, $F(X_i;\theta)$ has a uniform distribution over the interval $(0,1)$. Hence $-\log F(X_i;\theta)$ has the density $e^{-u}I_{(0,\infty)}(u)$ since $P[-\log F(X_i;\theta)>u]=P[\log F(X_i;\theta)<-u]=P[F(X_i;\theta)\le e^{-u}]=e^{-u}$ for $u>0$. Finally, $-\sum\log F(X_i;\theta)$ has a gamma distribution with parameters $n$ and 1. So
$$\prod_{i=1}^{n}F(X_i;\theta),\qquad\text{or}\qquad-\sum_{i=1}^{n}\log F(X_i;\theta),$$
is a pivotal quantity. ////
EXAMPLE 4 Let $X_1,\ldots,X_n$ be a random sample from the density $f(x;\theta)=\theta x^{\theta-1}I_{(0,1)}(x)$; then $F(x;\theta)=x^{\theta}I_{(0,1)}(x)+I_{[1,\infty)}(x)$. If $q_1$ and $q_2$ are selected [see Eq. (11)] so that
$$\gamma=P\Bigl[q_1<\prod_{i=1}^{n}X_i^{\theta}<q_2\Bigr]=P\Bigl[\log q_1<\theta\sum\log X_i<\log q_2\Bigr]=P\Bigl[\frac{\log q_2}{\log\prod_{i=1}^{n}X_i}<\theta<\frac{\log q_1}{\log\prod_{i=1}^{n}X_i}\Bigr],$$
then $\Bigl(\dfrac{\log q_2}{\log\prod X_i},\dfrac{\log q_1}{\log\prod X_i}\Bigr)$ is a $100\gamma$ percent confidence interval for $\theta$.
FIGURE 8
then define $h_1(\theta)$ and $h_2(\theta)$ by
$$p_1=\int_{-\infty}^{h_1(\theta)}f_T(t;\theta)\,dt\qquad\text{and}\qquad p_2=\int_{h_2(\theta)}^{\infty}f_T(t;\theta)\,dt, \tag{12}$$
FIGURE 9
where $p_1$ and $p_2$ are two fixed numbers satisfying $0<p_1$, $0<p_2$, and $p_1+p_2<1$. See Fig. 9.
$h_1(\theta)$ and $h_2(\theta)$ can be plotted as functions of $\theta$. We will assume that both $h_1(\cdot)$ and $h_2(\cdot)$ are strictly monotone, and for our sketch we will assume that they are monotone, increasing functions. We know that $h_1(\theta)<h_2(\theta)$. See Fig. 10.
Let $t_0$ denote an observed value of $T$; that is, $t_0=t(x_1,\ldots,x_n)$ for an observed random sample $x_1,\ldots,x_n$. Plot the value of $t_0$ on the vertical axis in Fig. 10, and then find $v_1$ and $v_2$ as indicated. For any possible value of $t_0$, a corresponding $v_1$ and $v_2$ can be obtained, so $v_1$ and $v_2$ are functions of $t_0$; denote these by $v_1=v_1(t_0)$ and $v_2=v_2(t_0)$. The interval $(V_1,V_2)$ will turn out to be a $100(1-p_1-p_2)$ percent confidence interval for $\theta_0$. To argue that this is so, let us repeat Fig. 10 as Fig. 11 and add to it. (Figure 10 indicates the method of finding the confidence interval.)
FIGURE 10
FIGURE 11
We see from Fig. 11 that $h_1(\theta_0)<t_0=t(x_1,\ldots,x_n)<h_2(\theta_0)$ if and only if $v_1=v_1(x_1,\ldots,x_n)<\theta_0<v_2=v_2(x_1,\ldots,x_n)$ for any possible observed sample $(x_1,\ldots,x_n)$. But by definition of $h_1(\cdot)$ and $h_2(\cdot)$,
$$P_{\theta_0}[h_1(\theta_0)<t(X_1,\ldots,X_n)<h_2(\theta_0)]=1-p_1-p_2;$$
so
$$P_{\theta_0}[v_1(X_1,\ldots,X_n)<\theta_0<v_2(X_1,\ldots,X_n)]=1-p_1-p_2;$$
that is, as stated, $(V_1,V_2)$ is a $100(1-p_1-p_2)$ percent confidence interval for $\theta_0$, where $V_i=v_i(X_1,\ldots,X_n)$ for $i=1,2$.
We might note that the above procedure would work even if $h_1(\cdot)$ and $h_2(\cdot)$ were not monotone functions, only then we would obtain a confidence region (often in the form of a set of intervals) instead of a confidence interval.
EXAMPLE 5 Let $X_1,\ldots,X_n$ be a random sample from the density $f(x;\theta_0)=(1/\theta_0)I_{(0,\theta_0)}(x)$. We want a confidence interval for $\theta_0$. $Y_n=\max[X_1,\ldots,X_n]$ is known to be a sufficient statistic; it is also the maximum-likelihood estimator of $\theta_0$. We will use $Y_n$ as our statistic $T$ that appears in the above discussion; then, solving Eq. (12),
$$h_1(\theta)=\theta\,p_1^{1/n}\qquad\text{and}\qquad h_2(\theta)=\theta\,(1-p_2)^{1/n}.$$
FIGURE 12
For observed $t_0=\max[x_1,\ldots,x_n]$, $v_1$ is such that $h_2(v_1)=t_0$; that is, $h_2(v_1)=v_1(1-p_2)^{1/n}=t_0$ or $v_1=t_0(1-p_2)^{-1/n}$. Similarly, $v_2=t_0\,p_1^{-1/n}$. So a $100(1-p_1-p_2)$ percent confidence interval for $\theta_0$ is given by $(Y_n(1-p_2)^{-1/n},\;Y_np_1^{-1/n})$. We could worry about selecting $p_1$ and $p_2$ so that the confidence interval is shortest subject to the restriction that $1-p_1-p_2=\gamma$. The length of the confidence interval is
$$L=Y_n\bigl[p_1^{-1/n}-(1-p_2)^{-1/n}\bigr].$$
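A short simulation (not part of the text; the true parameter, sample size, and equal-tail split of p1 and p2 are illustrative assumptions) computes this interval and checks its coverage.

```python
import numpy as np

rng = np.random.default_rng(6)
theta0, n = 5.0, 12
p1 = p2 = 0.025                       # 1 - p1 - p2 = 0.95

x = rng.uniform(0, theta0, size=n)
yn = x.max()
print("95 percent confidence interval:", (yn * (1 - p2) ** (-1 / n), yn * p1 ** (-1 / n)))

# Coverage check over many repetitions.
y = rng.uniform(0, theta0, size=(20000, n)).max(axis=1)
cover = ((y * (1 - p2) ** (-1 / n) < theta0) & (theta0 < y * p1 ** (-1 / n))).mean()
print("empirical coverage:", cover)   # close to 0.95, as the construction guarantees
```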
We observe in the example above, and in general for that matter, that $h_1(\theta)$ and $h_2(\theta)$ are really not needed. For a given observed value $t_0=t(x_1,\ldots,x_n)$ of the statistic $T$, we need to find $v_1=v_1(x_1,\ldots,x_n)$ and $v_2=v_2(x_1,\ldots,x_n)$. $v_2$ can be found by solving for $\theta$ in the equation
$$p_1=\int_{-\infty}^{h_1(\theta)=t_0}f_T(t;\theta)\,dt; \tag{13}$$
$v_2$ is the solution. Similarly, $v_1$ can be found by solving for $\theta$ in the equation
$$p_2=\int_{h_2(\theta)=t_0}^{\infty}f_T(t;\theta)\,dt; \tag{14}$$
$v_1$ is the solution.
We mentioned at the outset that the method would work for discrete
random variables as well as for continuous random variables. Then the inte-
grals in Eqs. (12) to (14) would need to be replaced by summations. Two
popular discrete density functions are the Bernoulli and Poisson. One could be
interested in confidence-interval estimates of the parameters in each. In
Example 6 to follow we will consider the Bernoulli density function; the Poisson
case is left as an exercise.
and
$$\sigma_n^2(\theta)=\frac{1}{n\,\mathscr{E}_\theta\bigl[\{(\partial/\partial\theta)\log f(X;\theta)\}^2\bigr]}=\frac{-1}{n\,\mathscr{E}_\theta\bigl[(\partial^2/\partial\theta^2)\log f(X;\theta)\bigr]}. \tag{15}$$
$$-z<\frac{T_n-\theta}{\sigma_n(\theta)}<z, \tag{16}$$
where $z=z_{(1+\gamma)/2}$ is defined by $\Phi(z_{(1+\gamma)/2})=(1+\gamma)/2$ or $\Phi(z)-\Phi(-z)=\gamma$. The above described method will always work to find a large-sample confidence interval provided the inequality $-z<(T_n-\theta)/\sigma_n(\theta)<z$ can be inverted.
For example, if $X_1,\ldots,X_n$ is a random sample from the negative exponential density $f(x;\theta)=\theta e^{-\theta x}I_{(0,\infty)}(x)$, the maximum-likelihood estimator of $\theta$ is $1/\bar X_n$, and
$$\sigma_n^2(\theta)=\frac{1}{n\,\mathscr{E}_\theta\bigl[\{(\partial/\partial\theta)\log f(X;\theta)\}^2\bigr]}=\frac{\theta^2}{n}.$$
Therefore,
$$\gamma\approx P\Bigl[-z<\frac{1/\bar X_n-\theta}{\sqrt{\theta^2/n}}<z\Bigr]
=P\Bigl[-\frac{z\theta}{\sqrt n}<\frac{1}{\bar X_n}-\theta<\frac{z\theta}{\sqrt n}\Bigr]
=P\Bigl[\frac{1/\bar X_n}{1+z/\sqrt n}<\theta<\frac{1/\bar X_n}{1-z/\sqrt n}\Bigr],$$
and hence $\Bigl(\dfrac{1/\bar X_n}{1+z/\sqrt n},\dfrac{1/\bar X_n}{1-z/\sqrt n}\Bigr)$ is an approximate $100\gamma$ percent confidence interval for $\theta$.
For the Bernoulli density the maximum-likelihood estimator of $\theta$ is $\hat\Theta=\bar X_n$, and $\sigma_n^2(\theta)=\theta(1-\theta)/n$; inverting the inequality $-z<(\hat\Theta-\theta)/\sqrt{\theta(1-\theta)/n}<z$ requires solving a quadratic in $\theta$, and the resulting limits, given in Eq. (17), contain terms in $z^2$. These expressions for the limits may be simplified since in deriving the large-sample distribution certain terms containing the factor $1/\sqrt n$ are neglected; that is, the asymptotic normal distribution is correct only to within error terms of size a constant times $1/\sqrt n$. We may therefore neglect terms of this order in the limits in Eq. (17) without appreciably affecting the accuracy of the approximation. This means simply that we may omit all the $z^2$ terms in Eq. (17) because they always occur added to a term with factor $n$ and will be negligible relative to $n$ when $n$ is large to within the degree of approximation that we are assuming. Thus Eq. (17) may be rewritten as
$$P\Bigl[\hat\Theta-z\sqrt{\frac{\hat\Theta(1-\hat\Theta)}{n}}<\theta<\hat\Theta+z\sqrt{\frac{\hat\Theta(1-\hat\Theta)}{n}}\Bigr]\approx\gamma. \tag{18}$$
In particular,
$$P\Bigl[\hat\Theta-1.96\sqrt{\frac{\hat\Theta(1-\hat\Theta)}{n}}<\theta<\hat\Theta+1.96\sqrt{\frac{\hat\Theta(1-\hat\Theta)}{n}}\Bigr]\approx.95.$$
We may observe that Eq. (18) is just the expression that would have been obtained had $\hat\Theta$ been substituted for $\theta$ in $\sigma_n^2(\theta)$. The substitution would imply that
$$\frac{\hat\Theta-\theta}{\sqrt{\hat\Theta(1-\hat\Theta)/n}}$$
is approximately a standard normal random variable.
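The approximate interval of Eq. (18) is straightforward to compute; the Python sketch below (illustrative only, with arbitrary parameter values) evaluates it for a simulated Bernoulli sample, using scipy's norm.ppf for the normal quantile.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
theta, n, gamma_ = 0.3, 200, 0.95

x = rng.binomial(1, theta, size=n)
that = x.mean()
z = norm.ppf((1 + gamma_) / 2)
half = z * np.sqrt(that * (1 - that) / n)
print("approximate interval from Eq. (18):", (that - half, that + half))
```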
of a known prior density to define the posterior distribution of $\Theta$, and from this posterior distribution we defined the posterior Bayes point estimator of $\theta$. In this section we use this same posterior distribution of $\Theta$ to arrive at an interval estimator of $\theta$.
If $f(\cdot\,|\,\theta)$ is the density sampled from and $g_\Theta(\cdot)$ is the prior density of $\Theta$, then the posterior density of $\Theta$ given $(X_1,\ldots,X_n)=(x_1,\ldots,x_n)$ is [recall Eq. (19) of Chap. VII]
EXAMPLE 9 Let $X_1,\ldots,X_n$ be a random sample from the normal density with mean $\theta$ and variance 1. Assume that $\Theta$ has a normal density with mean $x_0$ and variance 1. Consider estimating $\theta$. We saw in Example 44 of Chap. VII that the posterior distribution of $\Theta$ is normal with mean $\sum_0^n x_i/(n+1)$ and variance $1/(n+1)$. We seek $t_1$ and $t_2$ satisfying
$$\gamma=\int_{t_1}^{t_2}f_{\Theta|X_1=x_1,\ldots,X_n=x_n}(\theta\,|\,x_1,\ldots,x_n)\,d\theta.$$
Since the posterior density is symmetric and unimodal,
$$t_1=\frac{\sum_0^n x_i}{n+1}-z\sqrt{\frac{1}{n+1}}\qquad\text{and}\qquad t_2=\frac{\sum_0^n x_i}{n+1}+z\sqrt{\frac{1}{n+1}}$$
gives the shortest $100\gamma$ percent Bayesian interval estimate of $\theta$. Note that the corresponding $100\gamma$ percent confidence-interval estimate of $\theta$ is given by $(\sum x_i/n-z\sqrt{1/n},\;\sum x_i/n+z\sqrt{1/n})$. The only difference in the results of the two methods for this example is that the sample size seems to increase by 1 and the apparent "additional observation" is the mean of the assumed prior normal distribution. ////
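The two intervals in Example 9 can be compared numerically; the Python sketch below (not part of the text; the true mean, prior mean, and sample size are arbitrary assumptions) computes both.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
theta, x0, n, gamma_ = 1.0, 0.0, 15, 0.95

x = rng.normal(theta, 1.0, size=n)
z = norm.ppf((1 + gamma_) / 2)

post_mean = (x0 + x.sum()) / (n + 1)
bayes_interval = (post_mean - z / np.sqrt(n + 1), post_mean + z / np.sqrt(n + 1))
conf_interval = (x.mean() - z / np.sqrt(n), x.mean() + z / np.sqrt(n))
print("Bayesian interval:", bayes_interval)
print("confidence interval:", conf_interval)
# The Bayesian interval behaves as if one extra observation equal to x0 had been added.
```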
PROBLEMS
1 Let $X$ be a single observation from the density
$$f(x;\theta)=\theta x^{\theta-1}I_{(0,1)}(x),\qquad\theta>0.$$
(a) Find a pivotal quantity, and use it to find a confidence-interval estimator of $\theta$.
(b) Define $Y=-1/\log X$. Show that $(Y/2,Y)$ is a confidence interval for $\theta$. Find its confidence coefficient. Also, find a better confidence interval for $\theta$.
2 Let $X_1,\ldots,X_n$ be a random sample from $N(\theta,\theta)$, $\theta>0$. Give an example of a pivotal quantity, and use it to obtain a confidence-interval estimator of $\theta$.
3 Suppose that $T_1$ is a $100\gamma$ percent lower confidence limit for $\tau(\theta)$ and $T_2$ is a $100\gamma$ percent upper confidence limit for $\tau(\theta)$. Further assume that $P_\theta[T_1<T_2]=1$. Find a $100(2\gamma-1)$ percent confidence interval for $\tau(\theta)$. (Assume $\gamma>\tfrac12$.)
4 Let $X_1,\ldots,X_n$ denote a random sample from $f(x;\theta)=I_{(\theta-\frac12,\,\theta+\frac12)}(x)$. Let $Y_1<\cdots<Y_n$ be the corresponding ordered sample. Show that $(Y_1,Y_n)$ is a confidence interval for $\theta$. Find its confidence coefficient.
5 Let $X_1,\ldots,X_n$ be a random sample from $f(x;\theta)=\theta e^{-\theta x}I_{(0,\infty)}(x)$.
(a) Find a $100\gamma$ percent confidence interval for the mean of the population.
(b) Do the same for the variance of the population.
(c) What is the probability that these intervals cover the true mean and true variance simultaneously?
(d) Find a confidence-interval estimator of $e^{-\theta}=P[X>1]$.
(e) Find a pivotal quantity based only on $Y_1$, and use it to find a confidence-interval estimator of $\theta$. ($Y_1=\min[X_1,\ldots,X_n]$.)
6 $X$ is a single observation from $\theta e^{-\theta x}I_{(0,\infty)}(x)$, where $\theta>0$.
(a) $(X,2X)$ is a confidence interval for $1/\theta$. What is its confidence coefficient?
(b) Find another confidence interval for $1/\theta$ that has the same confidence coefficient but smaller expected length.
7 Let Xl, X2 denote a random sample of size 2 from N«(J, 1). Let Yl < Y 2 be the
corresponding ordered sample.
(a) Determine yinP(Yl < (J < Y 2 ] = y. Find the expected length oftheinterval
(Y1 , Y 2 ).
(b) Find that confidence-interval estimator for 8 using X - (J as a pivotal quantity
that has a confidence coefficient y, and compare the length with the expected
length in part (a).
PROBLEMS 399
8 Consider random sampling from a normal distribution with mean p. and variance
a2•
(a) Derive a confidence interval estimator of p. when a 2 is known.
(b) Derive a confidence interva1 estimator of a 2 when p. is known.
9 Find a 90 percent confidence interval for the mean of a normal distribution with
a = 3 given the sample (3.3, -.3, -.6, -.9). What would be the confidence
interva1 if a were unknown?
10 The breaking strengths in pounds of five specimens of manila rope of diameter
3
1 6 inch were found to be 660, 460, 540, 580, and 550.
19 X., ... , X,. is a random sample from (l/6)X(1-6)/6/ro . J](x), where 0 O. Find a
l00y percent confidence interval for O. Find its expected length. Find the
limiting expected length of your confidence interval. Find n such that
P[length (50] p for fixed (5 and p. (You may use the central-limit theorem.)
20 Develop a method for estimating the parameter of the Poisson distribution by a
confidence interval.
21 Find a good l00y percent confidence interval for 0 when sampling fromf(x; (J) =
1(6-1:.6+1:)(x).
22 Find a gOod tOOy percent confidence interval for 0 when sampling fromf(x; 0) =
(2X/02)/(O.6)(X), where 0 > O.
23 One head and two tails resulted when a coin was tossed three times. Find a
90 percent confidence interval for the probability of a head.
24 Suppose that 175 heads and 225 tails resulted from 400 tosses of a coin. Find a
90 percent confidence interval for the probability of a head. Find a 99 percent
confidence interval. Does this appear to be a true coin?
25 Let Xl, ... , X" be a random sample fromf(x; 6) = f(x; p., 0") = cPp,.02(X). Define
T«(J) by f:6) cPp,.a 2 (x) dx = a(a is fixed). Recall what the UMVUE of T(O) is. Find
a l00y percent confidence interva1 for T(O). (If you cannot find an exact lOOy per-
cent confidence interval, find an approximate one).
26 Let Xl, ... , Xn be a random sample from f(x; (J) = cP6.I(X). Assume that the
prior distribution of@ is N(lLo,~),lLoandO"~known. Find a lOOypercentBayesian
interval estimator of 0, and compare it with the corresponding confidence interval.
27 Let Xl, "', XII be a random sample from f(x 16) = OX'-I 1(0. H(X), where 0> O.
Assume that the prior distribution of @ is given by
1
De(O) = r(r) ){Or-I e -A6[(o. CC)«(J),
expire. Find the maximum-likelihood estimator of the mean lifetime I/O. Also
find a confidence-interval estimator of I/O.
IX
TESTS OF HYPOTHESES
EXAMPLE 1 Let Xl' ... , Xn be a random sample from I(x; fJ) = CPo, 2S(X),
The statistical hypothesis that the mean of this normal population is less
than or equal to 17 is denoted as follows: Yf: fJ < 17. Such a hypothesis
is composite; it does not completely specify the distribution. On the
other hand, the hypothesis Yf: fJ = 17 is simple since it completely specifies
the distribution. IIII
EXAMPLE 2 Let Xl' ... , Xn be a random sample from I(x; fJ) = CPo, 2S(X),
Consider Yf: fJ ,:5;; 17. One possible test 1 is as follows: Reject Yf if and
only if X > 17 + 51Jn. /III
EXAMPLE 3 Let Xl, "" Xn be a random sample from I(x; fJ) = CPo, 25(X),
X is euclidean n space. Consider Yf: fJ < 17 and the test 1: Reject Yf if
and on)y if x > 17 + 51 J~. Then 1 is nonrandomized, and Cy =
{(Xl' ... , X n ): x > 17 + 5IJ~ }. IIII
404 TESTS OF HYPOTHESES IX
and
The power function will play the same role in hypothesis testing that mean-
squared error played in estimation. It will usually be our standard in assessing
the goodness of a test or in comparing two competing tests. An ideal power
function, of course, is a function that is 0 for those 0 corresponding to the null
hypothesis and is unity for those 0 corresponding to the alternative hypothesis.
The idea is that you do not want to reject .7t 0 if .7t 0 is true and you do want to
reject .7t 0 when .7t 0 is false.
Remark 1ty(O) = P8[reject .7t 0]' where 0 is the true value of the param-
eter. If Y is a nonrandomized test, then 1ty(O) = P8 [(X1 , ••• , Xn) E Cy],
where Cy is the critical region associated with test Y. If Y is a randomized
test with critical function t/ly (. , ... , +), then
1ty(O) = P 8[reject.7t 0]
Consider .7t0: 0< 17 and the test Y: Reject if and only if X > 17 + 51 J~.
-
1ty(0) = P 8 [ X > 17 + J~ =
5 ]
I - <I>
(17 + 51J~ -
51J n
0) .
For n = 25, 1ty(O) is sketched in Fig. 1.
1 INTRODUCTION AND SUMMARY 407
1.0
.5
~----~----~----~~~~-----8
FIGURE 1 0
The power function is useful in telling how good a particular test is.
In this example, if 0 is greater than about 20, the test Y is almost certain
to reject :J'f 0' as it should. And if 0 is less than about 16, the test Y is
almost certain not to reject :J'f 0, as it should. On the other hand, if
17 < 0 < 18 (so :J'f 0 is false), the test Y has less than half a chance of
rejecting :J'f 0 • II!!
Definition 7 Size of test Let Y be a test of the hypothesis :J'f0: 0 E eo,
where eo c e; that is, eo is a subset of the parameter space S. The size
of the test Y of :J'f0 is defined to be sup [nr(O)]. The size of the test for
8e~o
a nonrandomized test is also referred to as the size of the critical region.
IIII
Remark Many writers use the terms "significance level" and "size of
test" interchangeably. We, however, will avoid use of the term" signi-
ficance level/' intending to reserve its use for tests of significance, a type of
statistical inference that is closely related to hypothesis testing. Tests of
significance will not be considered in this book; the interested reader is
referred to Ref. [37]. IIII
EXAMPLE 6 Let Xl' "" Xn be a random sample from f(x; 0) = 4>0. 2S(x).
Consider the :J'f 0 = 0 < 17 and the test Y: Reject :J'f0 if X > 17 + 51Jn.
~o = {O: 0 < 17} and the size of the test Y is sup [nr(O)]
_ Oe~o
= sup
Osl7
[I - (J)(17 +515~nn -6)] = 1 - (J)(l) ~ .159. IIII
PROOF Define o/l'(Sb "" S,.) = 8[0/1(X1, ... , X1I ) lSI = Sl, ... ,
S,. = S,.]; then 0/1' is a critical function. Furthermore, 1t1'(e) =
4 9 [o/y,(Sl' ... , S,.)] = 8 9 [8[0/1(X1, ... , X1I ) ISb . ", S,.]] = 8 9 [I/II(X1, ... ,
X 1I) ] = 1ty(e). IIII
The theorem shows that given any test, another test which depends only on
a set of sufficient statistics can be found, and this new test has a power function
identical to the power function of the original test. So, in our search for good
tests we need only look among tests that depend on sufficient statistics.
We have introduced some of the language of testing in the above. The
problem of testing is like estimation in the sense that it is twofold: First, a
method of finding a test is needed, and, second, some criteria for comparing
competing tests are desirable. Although we will be interested in both aspects
of the problem, we will not discuss them in that order. First we will consider,
in Sec. 2, the problem of testing a simple null hypothesis against a simple alter-
native. Two approaches will be assumed. The first will use the power function
as a basis for setting goodness criteria for tests, and the second will use a loss
function. The Neyman-Pearson lemma is stated and proved. It will turn out
that all those tests, which are best in some sense, will be of the form of a simple
likelihood-ratio, which is defined.
Tests of composite hypotheses will be discussed in Sec. 3. The section
will commence, in Subsec. 3.1, with a discussion of the generalized likelihood-
ratio principle and the generalized likelihood-ratio test. ~. his principle plays a
central role in testing, just as maximum likelihood played central role in es-
timation. It is a technique for arriving at a test that in gen ral will be a good
test, just as maximum likelihood led to an estimator that in general was quite a
good estimator. For a book of the level of this book, it is probably the most
important concept in testing. The notion of uniformly most powerful tests
will be introduced in Subsec. 3.2, and several methods that are sometimes use-
ful in finding such tests will be presented. Unbiasedness and invariance in es-
timation are two methods of restricting the class of estimators wi th the hope
of finding a best estimator within the restricted class. These two concepts play
essentially the same role in testing; they are methods of restricting the totality of
2 SIMPLE HYPOTHESIS VERSUS SIMPLE ALTERNATIVE 409
possible tests with the hope offinding a best test within the restricted class. We
will discuss only unbiasedness, and it only briefly in Subsec. 3.3. Subsection
3.4 will summarize several methods of finding tests of composite hypotheses.
Section 4 will be devoted to consideration of various hypotheses and
tests that arise in sampling from a normal distribution. Section 5 will consider
tests that fall within a category of tests generally labeled chi-square tests.
Included will be the asymptotic distribution of the generalized likelihood-ratio,
goodness-of-fit tests, tests of the equality of two or more distributions, and tests
of independence in contingency tables. Section 6 will give the promised dis-
cussion of the connection between tests of hypotheses and interval estimation.
The chapter will end with an introduction to sequential tests of hypotheses in
Sec. 7.
The reader will note that our discussion of tests of hypotheses is not as
thorough as that of estimation. Both testing and estimation will be used in
later chapters, especially in Chap. X. Also, a number of the nonparametric
techniques that will be presented in Chap. XI will be tests of hypotheses.
We stated at the beginning of this section that testing of hypotheses is one
major area of statistical inference. A type of statistical inference that is closely
related (in fact so closely related that many writers do not make a distinction) to
hypothesis testing is that of significance testing. The concept of significance
testing has important use in applied problems; however, we will not consider it
in this book. The interested reader is referred to Ref. [37].
2.1 Introduction
In this section we consider testing a simple null hypothesis against a simple
alternative hypothesis. This case is actually not very useful in applied statistics,
but it will serve the purpose of introducing us to the theory of testing hypotheses.
We assume that we have a sample that came from one of two completely
specified distributions. Our object is to determine which one. More precisely,
assume that a random sample Xl"'" Xn came from the density fo(x) or fl (x)
and we want to test :Yf 0 : Xi distributed asJo('), abbreviated Xi '" fo{'), versus
:Yf1: Xi '" It ( .). If we had only one observation Xl and foe .) and It (.) were
as in Fig. 2, one might quite rationally decide that the observation came from
fo(') if fo(x 1) > Ji(Xl) and, conversely, decide that the observation came from
It (.) if fi (Xl) > fo(xl)' This simple intuitive method of obtaining a test can be
expanded into a family of tests that, as we shall see, will contain some good tests.
410 TESTS OF HYPOTHESES IX
FIGURE 2
(}l are known. We want to test :tt 0 : (} = (}o versus :ttl : (} = (}l' Corresponding
to any test 1 of :tt 0 versus :ttl is its power function 1'ly(8). A good test is a test
for which 1ty«(}o) = P[reject :tt 0 l:tt 0 is true] is small (ideally 0) and 1'ly(81) =
P[reject -*'ol:tfo is false] is large (ideally unity). One might reasonably use the
two values 1ty«(Jo) and 1'lY«(}I) to set up criteria for defining a best test. 1'ly«(}o) =
size of Type I error, and I - 1'lY«(}I) = P[accept:tt 0 I:tt 0 is false] = size of Type II
error; so our goodness criterion might concern making the two error sizes small.
F or example, one might define as best that test which has the smallest sum of the
error sizes. Another method of defining a best test, made precise in the following
definition, is to fix the size of the Type I error and to minimize the size of the
Type II error.
A test 1* is most powerful of size (1 if it has size (1 and if among all other
tests of size (1 or less it has the largest power. Or a test 1* is most powerful of
size (1 if it has the size of its Type I error equal to (1 and has smallest size Type II
error among all other tests with size of Type I error (1 or less.
The justification for fixing the size of the Type I error to be (1 (usually
small and often taken as ,.05 or .01) seems to arise from those testing situations
where the two hypotheses are formulated in such a way that one type of error is
more serious than the other. The hypotheses are stated so that the Type I error
is the more serious, and hence one wants to be certain that it is small.
The following theorem is useful in finding a most powerful test of size (1.
The statement of the theorem as given here, as well as the proof, considers only
nonrandomized tests. We might note that the statement and proof of the
theorem can be altered to include all randomized tests.
We comment that k* and C* satisfying conditions (i) and (ii) do not always
exist and then the theorem, as stated, would not give a most powerful size-ct
test. However, whenever fo( .) and It ( .) are probability density functions, a k*
and C* will exist. Although the theorem does not explicitly say how to find k*
and C*, implicitly it does since the form of the test, that is, the critical region, is
given by Eq. (5). In practice, even though k* and C* do exist, often it is not
necessary to find them. Instead the inequality A S; k* for (Xh ... , xn) E C* is
manipulated into an equivalent inequality that is easier to work with, and the
actual test is then expressed in terms of the new inequality. The following
examples should help clarify the above.
2 SIMPLE HYPOTHESIS VERSUS SIMPLE ALTERNATIVE 413
FIGURE 3
8X
EXAMPLE 7 Let Xl"'" Xli bea random sample from/ex; (J) = (Je- /(0. oo)(x),
where (J = 80 or 8 = 81 , (Jo and (JI are known fixed numbers, and for
concreteness we assume that (JI > (Jo' We want to test .Yf0: (J = (Jo versus
.Yfl : (J = (JI' Now Lo = 03 exp (-(Jo LXi), L1 = 0i exp (-(JI I Xi), and
according to the NeymanftPearson lemma the most powerful test will have
the form: Reject .Yf0 if )" < k* or if «(JO/(JI)'J exp [ - «(Jo - (JI) I < k*, xa
which is equivalent to .
where k' is just a constant. The inequality A =:;; k* has been simplified and
expressed as the equivalent inequality L Xi k'. Condition (i) is ex =
PfJo[reject .Yfol = PfJo[ I Xi =:;; k']. We know that Xi has a gamma L
distribution with parameters n and 0; hence
an equation in k', from which k' can be determined; and the most powerful
test of size ex of .Yfo: (J = (Jo versus .Yfl : (J = (JI' (JI > (Jo is this: Reject .Yfo
if LXi k', where k' is the exth quantile point of the gamma distribution
with parameters nand (Jo· 11//
EXAMPLE 8 Let Xl' ... , Xn be a random sample from /(x; (J) =
(Jx(1 - (J)l-X I{O.I}(X), where (J = (Jo or (J = (J1' We want to test .Yf0: (J = 00
versus .Yf1 : 0 = (Jl' where, say, (Jo < 01 , Lo = (J~Xi(1 - (Jo)n-r.xi, Ll =
(Jr Xt (1 Odn-r.xt, and so A < k* if and only if
(J~Xi(1 - (JO)'J-r.Xi/(J r
Xi(l - (Jl)n-r. Xi < k*, ('-
if and only if
414 TESTS OF HYPOTHESES IX
(X=P80=1/4[reject:tfO]=P80=1/4[IXi:C:k'] = I
10 (10)
. (1)
- i (3) 10 - i
- .
i =k' 1 4 4
If (X = .0197, then k' = 6, and if (X = .0781, then k' = 5. For (X = .05, there
is no critical region C* and constant k* of the form given in the Neyman-
Pearson lemma. In this example our random variables are discrete, and
for discrete random variables it is not possible to find a k* and C* satisfying
conditions (i) and (ii) for an arbitrary fixed 0 < (X < 1. In practice, one is
usually content to change the size of the test to some (X for which a test
given by the Neyman-Pearson lemma can be found. We might note,
however, that a most powerful test of size (X does exist. The test would be
a randomized test. For the example at hand, if we take (X = .05, the
randomized test with critical function
1
.0584
o
is the most powerful test of size (X = .05. IIII
In closing this subsection, we note that a most powerful test of size (x,
given by the Neyman-Pearson lemma, is necessarily a simple likelihood-ratio
test.
Remark
aly(O) = t(d1 ; 0)Pe[(X1 , XJ E Cr ] + t(do ; 0)Pe[(X1 ,
••• , ••• , Xn) E (;r]
= t(d1 ; O)nr(O) + t(do; 0)[1 - nr(O)]; (6)
that is, the risk function is a linear function of the power function; the
coefficients in the linear function are determined by the values of the loss
function. Since 0 assumes only two values, !!lr(O) can take on only two
values, which are
and al y{Ol) = t(do ; 01)[1 - n 1 (01)]' (7)
IIII
416 TESTS OF HYPOTHESES IX
Our object is to select that test which has smallest risk, but, unfortunately,
such a test will seldom exist. The difficulty is that the risk takes on two values,
and a test that minimizes both of these values simultaneously over all possible
tests does not exist except in rare situations. (See Prob. 3.) Not being able
to find a test with smallest risk, we resort to another less desirable criterion, that
of minimizing the largest value of the risk function.
Theorem 3 For a random sample Xl' ... , X" from f(·; ( 0 ) orf(· ; ( 1 ),
°°
consider testing ff 0: fJ = 00 versus ff 1 : = 1 , If a test Ymhas a critical
region given by Cm = {(Xl' ... , xJ: A <km }, where k m is a positive constant
such that &frm(fJ O) = &f rm«(}I), then 1m is minimax. Recall that
A = LolLI = " f(XI; (}o)]/[ I1
[I1 " f(Xi; ( 1)],
1= I i I
EXAMPLE9 Let Xl, ... , X" be a random sample fromf(x; fJ) = fJe- 8xI(o, oo)(x),
° °°
For 01 > 0 , test ffo: 0= 00 versus ff l : = 1 , In Example 7, we found
the most powerful size-tX test. We seek now to find the minimax test for
a loss function given by t(d1 ; fJ o) = a and t(do ; fJ1) = h. According to
Theorem 3, the minimax test 1m is given by Cm = {(Xl' ... , x,,): A k m }
2 SIMPLE HYPOTHESIS VERSUS SIMPLE ALTERNATIVE 417
Before leaving minimax tests we make two comments: First, if 10(') and
11 ( .) are discrete density functions, then there may not exist a k m such that
r71 ym(()o) = r71 Ym (()l) unless randomized tests are allowed; and, second, a minimax
test as given in Theorem 3 was a simple likelihood-ratio test.
In the above we assumed that Xl, ... , Xn was a random sample from
1(' ; (J), where () = ()o or ()l' and for each (), 1(' ; ()) is completely known. We
also assumed that we had an appropriate loss function. Now, we further
assume that ()o and ()l are the possible values of a random variable e and that
we know the distribution of e, which is called the prior distribution, just as in
our considerations of Bayes estimation. e is discrete, taking on only two
values ()o and ()l; so the prior distribution of e is completely given by, say, g,
where 9 = p[e = ()l] = 1 - p[e = ()o]. We mentioned above that, in general,
a test with smallest risk function for both arguments does not exist. Now that
we have a prior distribution for the two arguments of the risk function, we can
define an average risk and seek that test with smallest average risk.
which is minimized if C is defined to be all (Xl' ... , xn) for which the last inte N
Cg = {(Xb .. " x,,): (l - g)t(d1 ; Oo)Lo - gt(do ; (1)L 1 < O}. (10)
We have proved the following theorem.
(II)
IIII
We note that once again a good test, in this case a Bayes test, turns out to
be a simple likelihood-ratio test. The exact form of the Bayes test is given by
Eq. (11).
f I g(Ji t(do ; ( 1 ) }
= 1(Xh ... , Xn): I Xi < (Jl - (Jo loge (1 - g)Oo t(d ;( )
1 0
3 COMPOSITE HYPOTHESES
In Sec. 2 above we considered testing a simple hypothesis against a simple
alternative. We return now to the more general hypotheses-testing problem,
that of testing composite hypotheses. We assume that we have a random sample
fromf(x; 0), (J E 9, and we want to test .Yf'o: 0 E 9 0 versus.Yf'l: (J E 9f, where
eo c e, e 1 c 8, and eo and e1 are disjoint. Usually 9 1 = 9 - 9 0 , We
begin by djscussing a general method of constructing a test.
3 COMPOSITE HYPOTHESES 419
(12)
1111
Note that 2 is a function of Xl' ... , x,,, namely 2(Xl' ... , x,,). When the
observations are replaced by their corresponding random variables Xl, ... , X"'
then we write A for 2; that is, A = 2(Xl' ... , X,,). A is a function of the random
variables Xl, ... , X" and is itself a random variable. In fact, A is a statistic
since it does not depend on unknown parameters.
Several further notes follow: (i) Although we used the same symbol ), to
denote the simple likelihood-ratio, the generalized likelihood-ratio does not
e
reduce to the simple likelihood-ratio for = {Oo, Ol}' (ii) 2 given by Eq. (12)
necessarily satisfies 0 < 2 < I; 2 >0 since we have a ratio of nonnegative
quantities, and )" < I since the supremum taken in the denominator is over a
larger set of parameter values than that in the numerator; hence the denominator
cannot be smaller than the numerator. (iii) The parameter 0 can be vector-
valued. (iv) The denominator of A is the likelihood function evaluated at the
maximum-likelihood estimator. (v) In our considerations of the generalized
likelihood*ratio, often the sample X h . . . , X" will be a random sample from a
density f(x; 0) where (J E 9.
The values 2 of the statistic A are used to formulate a test of Jff 0 = 0 E 9 0
versus Jff1 : 0 E 9 - 9 0 by employing the generalized likelihood-ratio test prin-
ciple, which states that Jff 0 is to be rejected if and only if 2 < 20 , where 20 is
some fixed constant satisfying 0 < 20 < I. (The constant 20 is often specified
by fixing the size of the test.) A is the test statistic. The generalized likelihood-
ratio test makes good intuitive sense since 2 will tend to be small when Jff 0 is
not true, since then the denominator of 2 tends to be larger than the numerator.
In general, a generalized likelihood-ratio test will be a good test; although there
are examples where the generalized likelihood-ratio test makes a poor showing
420 TSSTS OF HYPOTHESES IX
compared to other tests. One possible drawback of the test is that it iSlIOtn:ez
times difficult to find sup L(O; Xb •.. , x n); another is that it can be difficult to'
find the distribution of A which is required to evaluate the power of the test.
and
n
(Inx)"e-. if --::;;;, 00
LXi
-
n
~ exp (-0 0 LXi) if
r>Oo'
Xi
Hence
n
1 if - - < 00
LXi -
A= (13)
O~ exp ( - 00 LXi) n
(niI xi)ne-n
if
r >°
Xi
0 ,
reject JIP 0 if "'\'n > 00 and 00 In Xl)n exp ( -0 Ix; + n) < Ao,
~Xi
( 0
(14)
or reject JIP 0 if Oox < 1 and (0 0 x)n e- n(8ox-l) < Ao. Write y = 00 x, and
note that yne-n(.v-I) has a maximum for y = 1. Hence y < 1, and
yne n(y-l) < Ao if and only if y < k, where k is a constant satisfying
o < k < 1. See Fig. 4.
We see that a generalized likelihood-ratio test reduces to the follow-
ing:
Reject JlPo if and only if Box < k, where 0 < k < 1; (15)
3 COMPOSITE HYPOTHESES 421
O~~--+---~l----------~==~-Y
FIGURE 4
that is, reject :Yf 0 if x is less than some fraction of 1/00 , If that gen-
eralized likelihood-ratio test having size a is desired, k is obtained as the
solution to the equation
"k 1
f
a = P9o[OoX < k] = P9o [Oo L Xi < nk] = 0 r(n) u"- e- duo
l U
(Note that P9[OoX < k] P9o [OoX < k] for 0 <0 0 ,) III/
We note that in the above example the first form of the test, as given in
Eq. (14), is rather messy, yet after some manipulation the test reduces to a very
simple form as given in Eq. (15). Such a pattern often appears in dealing with
generalized likelihood-ratio tests-their first form is often foreboding, yet the
tests often simplify into some nice form. We will observe this again in Sec. 4
below when we consider tests concerning sampling from the normal distribution.
We might note, by considering the factorization criterion, that a generalized
likelihood-ratio test must necessarily depend only on minimal sufficient statistics.
In Sec. 5 below, a large-sample distribution of the generalized likelihood-
ratio is given. This will provide us with a method of obtaining tests with
approximate size a.
Theorem 5 Let Xl' ... , Xn be a random sample from the density I(x; 8),
8 E 8, where (3 IS some interval. Assume that I(x; 8) =
n
a(8)b(x) exp [c(8)d(x)], and set t(Xb ... , xn) = L d(xJ
1 ..
critical region C* = {(Xl' ' , " x n): t(x l , " " Xn) < k*} is a uniformly most
powerful size-a test of .Yt0: 0 00 versus J'l'l: 0 > 00 or of J'l' 0: 0 = 00
versus J'l'l: 0 > 00 , IIII
a = P 80 ['\'
'-'
X I < k*] = f -r(n) on un- e- 8oU du
k·
0
1
0
I
•
IIII
L(O'; x., ... , xn) = (O')n exp (- 0' I X,) = (o,)n exp [ _ (0' _ 0") I x.]
L(O"; x., ... , xn) (Ollr exp ( - 0" I Xi) 0" I
0
n- I
=
1
O~ [O~ - (k*)lI] = 1-
(k*) 11
0 '
0
which implies that k* = 00 ViI - ct. IIII
Several comments are in order. First, the null hypothesis was stated as
o < 00 in both Theorems 5 and 6; if it had been stated as 0 > 00 , tIte two theorems
would remain valid provided the inequalities that define the critical regions were
reversed. Second, Theorem 5 is a consequence of Theorem 6. Third, the
theorems consider only one-sided hypotheses.
This completes our brief study of uniformly most powerful tests. We have
seen that a uniformly most powerful test exists for one-sided hypotheses if the
density sampled from has a monotone likelihood-ratio in some statistic. There
are many hypothesis-testing problems for which no uniformly most powerful
3 COMPOSITE HYPOTHESES 425
test exists. One method of restricting the class of tests, with the hope and
intention of finding an optimum test within the restricted class, is to consider
unbiasedness of tests, to be defined in the next subsection.
A test that has been quite extensively applied in various fields of science is
.Yf0: e = eo against .Yf1 : e =/; eo. For example, let e be the mean difference of
yields between two varieties of wheat. It is often suggested that it is desirable
to test the hypothesis .Yf0: e = 0 against .Yf1 ; e =/; 0, that is, to test if the two
varieties are different in their mean yields. However, in this situation, and
many others where e can vary continuously in some interval, it is inconceivable
that e is exactly equal to 0 (that the varieties are identical in their mean yields).
Yet this is what the test is stating: Are the two mean yields identical (to one
3 COMPOSITE HYPOTHESES 427
nCO) = 1-
fCl
f ~(t); 0) dt} for 0 in 9.
This power function can be compared with the ideal power function, and if it
does not deviate further from the ideal than the experimenter can tolerate, the
test may be useful even though it may not be a uniformly most powerful test.
Let us illustrate the above with a simple example.
We have
.5
.05 ~_ _----'..::::_"""~L...-...__--'---_---+ e
FIGURE 6 o 1 2 3
For example, if ex = .05 and n = 16, then d -:: : ; .911; so Cl -:::::; .589, and
C2 -:::::; 2.411. The power function is given by
nCO) = 1 - Pe[cl < X < c2l = 1 - Pe[.589 < X < 2.411]
and is sketched in Fig. 6. IIII
4 TESTS OF HYPOTHESES-SAMPLING
FROM THE NORMAL DISTRIBUTION
A number of the foregoing ideas are well illustrated by common practical testing
problems-those problems of testing hypotheses concerning the parameters of
normal distributions. The section is subdivided into four subsections, the first
two dealing with just one normal population and the last two dealing with several
normal populations.
:Yt 0: Jl < Jlo versus :Yt I : Jl > Jlo In testing :Yt0 : Jl S Jlo versus :Yt I : Jl > Jlo there
are two cases to consider depending on whether or not (]2 is assumed known,
If (]2 is assumed known, our parameter space is the real line, and we are testing a
one-sided hypothesis; so we have hope of finding a uniformly most powerful
test, Since (]2 is assumed known, it is a known constant; hence
= 1 e-HIl/tI)2e-!(x/tI)2e(lt/tI)X,
J2n(]
which is a member of the exponential family with
_ 1 -Hllltl)2
and d(x) = x.
( )
aJl- k e ,
y2n(]
The conditions for Theorem 5 are satisfied; so the uniformly most powerful size·
n
ct test is given by the following: Reject:Yt 0 if t(XI' .,., Xn) = I Xi > k~', where k*
I
is given as a solution to Pllo[l: Xi > k*] = ct. Now ct = Plto [I Xi > k*] =
I - q,«k* - nJlo)/jn(J); so (k* - nJlo)/jn(] = ZI-a' where Zl-a is the (1 - ct)th
quantile of the standard normal distribution, The test becomes the following:
Reject :Yt 0 if I Xi > nJlo + In(]ZI-a, or reject :Yt 0 if x >- Jlo + «(J/jn)zl_a'
If (J2 is assumed unknown, then testing :Yt0: Jl ~ Jlo versus :Yt1 : Jl > Jlo is
equivalent to testing :Yt0: (J E 9 0 versus :Yt1 : (J ~ eo,
where (J = (Jl, (]2), 9 =
{(Jl, (J2): - 00 < Jl < 00; (J2 > O}, and 9 0 = {(Jl, (]2): Jl ~ Jlo; (J2 > O}. Toobtain
a test, we could use the generalized likelihood-ratio principle, or we could find
some statistic that behaves differently under the two hypotheses and base our
test on it. Such a statistic is T = (X - Jlo)/(Sljn), where X is the sample mean
and S2 is the sample variance, Since Twould tend to be larger for Jl > Jlo than
for Jl < Jlo , a test based on T is given by the following: Reject :Yt 0 if T is large;
that is, reject :Yt0 if T> k, If Jl = Jlo, then T has a t distribution with n - I
degrees of freedom; so k can be determined by setting ct = PIl =Ilo[T > k], which
implies that k = tl-a:(n - I), the (1 - ct)th quantile of a t distribution with n - I
degrees of freedom, It can be shown that the test derived here is a generalized
likelihood·ratio test having size ct.
ft' 0: J1. = J1.o versus :YtI : Jl =1= Jlo Again, we have two cases to consider depend-
ing on whether or not (Jz is assumed known. For (Jz known, we know that
(X - z(l +"f)/2«(JI.j~), X + Z(l +"f)/z«(Jljn)) is a lOOy percent confidence interval
430 TESTS OF ID'PQTHESES IX
for Jl, where Z(l + y)/2 is the [(1 + y)/2]th quantile of the standard normal distribu-
tion. A possible test is given by the following: Reject .Yt0 if the confidence
interval does not contain Jlo . Such a test has size I - y since
_ (J - (J ]
Pp.=p.o [ X - Z(1+y)/2 J;';' < Jlo < X + Z(1+y)/2 yin = y.
If (12 is assumed unknown, we could obtain a test, similar to the one above,
using the lOOy percent confidence interval
2. _ [ n ] nl2 _ nl2
sup L ( Jl, (1 ,Xl"'" Xn) - ~ 2 e .
~o 2n ~ (Xi - Jlo)
and so a critical region of the form A < Ao is equivalent to a critical region of the
fonn t2(Xl' ... , x n ) > k 2. A generalized likelihood-ratio test is then given by
the following: Reject :Yf 0 if and only if
:Yf 0: a < a~ versus :Yf 1 : a 2 > a~ There are two cases to consider depending on
2
whether or not J-l is assumed known. If J-l is known, then our parameter space
is an interval, and our hypothesis is one-sided; so we have a chance of finding a
uniformly most powerful size-a test.
test with critical region = {(Xl' ... , Xn): L (Xi - 1l)2 > k*} is uniformly most
powerful of size ct, where k* is given by Pa2=ao2[L (Xi - 1l)2 > k*] = ct, which
implies that k* = (}"~ xi -in), where xi -cz(n) is the (l - ct)th quantile point of the
chi-square distribution with n degrees of freedom.
If Il is unknown, a test can be found using the statistic V = L (Xi - X)2/(}"~.
V will tend to be larger for (}"2 > (}"~ than for (}"2 < (}"~; so a reasonable test would
be to reject :Yf 0 for V large. If (}"2 = (}"~ , then V has a chi-square distribution
with n - I degrees of freedom, and Pa2=ao2[V> xi-in - I)] = ct, where
xi -cz(n - I) is the (l - ct)th quantile of a chi-square distribution with n - I
degrees of freedom. It can be shown that the test given by the following:
Reject :Yfo if and only if L (Xi - X)2/(}"~ > xi-cz(n - I) is a generalized likeli-
hood-ratio test of size ct.
A size-(ct = I - y) test is given by the following: Accept :Yf 0 if and only if (}"5 is
contained in the above confidence interval. It is left as an exercise to show that
for a particular pair of ql and q2 the test of size ct derived by the confidence-
interval method is in fact the generalized likelihood-ratio test of size ct.
a new line of hybrid com with that of a standard line, one would also have to use
estimates of both mean yields because it is impossible to state the mean yield of
the standard line for the given weather conditions under which the new line
would be grown. I t is necessary to compare the two lines by planting them in
the same season and on the same soil type and thereby obtain estimates of the
mean yields for both lines under similar conditions. Of course the comparison
is thus specialized; a complete comparison of the two lines would require tests
over a period of years on a variety of soil types.
The general problem is this: We have two normal populations-one with
a random variable Xl' which has a mean III and variance CTi, and the other with
a random variable X 2 , which has a mean 112 and variance CTi. On the basis of
two samples, one from each population, we wish to test the null hypothesis
_ ( - -12)nl/2 exp
-
2nCTl
[l I
-"!
nl (Xli-lll)2]( - -12 )n2/2 exp [ -t In2 (X2J-1l2)2]
1 CTl 2nCT2 1 CT2
'
and its maximum in 9 is readily seen to be
Ifwe put III and 112 equal to Il, say, and try to maximize L with respect to Il, CTi,
and CT~, it will be found that the estimate of Il is given as the root of a cubic
equation and will be a very complex function of the observations. The resulting
generalized likelihood-ratio A will therefore be a complicated function, and to
find its distribution is a tedious task indeed and involves the ratio of the two
variances. This makes it impossible to determine a critical region 0 < A < k
434 TISTS OF BYPOTHBSES IX
for a given probability of a Type I error because the ratio of the population
variances is assumed unknown. A number of special devices can be employed
in an attempt to circumvent this difficulty, but we shall not pursue the problem
further here. For large samples the following criterion may be used: The root
of the cubic equation can be computed in any instance by numerical methods, and
A can then be calculated; furthermore, as we shall see in Sec. 5 below, the
quantity - 2 log A has approximately the chi·square distribution with one
degree of freedom, and hence a test that would reject for - 2 log )" large could
be devised.
When it can be assumed that the two populations have the same variance,
the problem becomes relatively simple. The parameter space 9 is then three ..
dimensional with coordinates (Ilh Ilz, (1z), while 9 0 for the null hypothesis
III = Ilz = Il is two-dimensional with coordinates {Jl, (1z). In 9 we find that
the maximum-likelihood estimates of Ill' Ilz, and (1z are, respectively, Xf, Xz,
and
so
for Il
and
which gives
supL
~o
nl + n2 ](n +n )/Z
1 2
Finally,
it = (1 + [n l n2/(n l ~ n2)](XI - X2)2 2) 2
-(n1 +n )/2.
Z= ~l-XZ
uJI/nl + l/nz
is normally distributed with mean 0 and unit variance, the quantity
(19)
Equality of several means The test presented above can be extended from
just two normal popUlations to k normal populations. We assume that we have
available k random samples, one from each of k normal populations; that is,
436 TBSTS OF HYPOTHESES IX
let Xii' •.. , XJnJ be a random sample of size nj from the jth normal population,
j = I, ... , k. Assume that the jth population has mean Ii) and variance (12.
Further assume that the k random samples are independent. Our object is to
test the null hypothesis that all the population means are the same versus the
alternative that not all the means are equal. We seek a generalized likelihood·
ratio test. The likelihood function is given by
k
where n = L nj.
j= I
The parameter space 9 is (k + 1)-dimensional with coordinates
(iii' ... , lik' (12), and 9 0 , the collection of points in the parameter space corre-
sponding to the null hypothesis, is two-dimensional with coordinates (Ii, (12), where
Ii = iii = ... = lik' In 9, the maximum-likelihood estimates of iii' ... , lik, (12
are given by
I nJ
A
II
r j =X.
-
J. =- I x",
JI j = 1, ... , k,
n)l= I
and
1
"'2 = -
(1-
~
Lk I( x .. - x·- )2.
HJ
)1 J.' (20)
n j=l i=l
hence,
and
and so
21l ~ ~ (Xji _X)2] -n12
sup L = )~ e- nI2 •
§o
[ n
4 TESTS OF HYPOTHESES-SAMPLING FROM THE NORMAL DISTRIBUTION 437
- 2
- ~~ (Xji - X J,)
= 1+
k - 1 ~ nix), - x)2/(k - 1)
J 2
l-n'2
[ n - k ~~(Xjl- x),) I(n - k)
The ratio r is sometimes called the variance ratio, or F ratio. The constant c is
determined so that the test will have size (X; that is, c is selected so that
P[R > c 1:Yf0] = (x. Note that Xj. is independent of I (Xji - X l )2 and,hence, the
i
numerator of Eq. (21) is independent of the denominator. Also, under :Yf 0'
note that the numerator divided by (12 has a chi-square distribution with k - I
degrees of freedom, and the denominator divided by (12 has a chi-square dis-
tribution with n - k degrees of freedom. Consequently, if :Yf 0 is true, R has an
F distribution with k - land n - k degrees of freedom; so the constant c is the
(1 - (X)th quantile of the F distribution with k - 1 and n - k degrees of freedom.
The testing problem considered above is often referred to as a one-way
analysis of variance. In some experimental situations, an experimenter is
interested in determining whether or not various possible treatments affect the
yield. F or example, one might be interested in finding out whether various
types of fertilizer applications affect the yield of a certain crop. The different
treatments correspond to the different populations, and when we test that there
is no population difference, we are testing that there is no treatment" effect.
U
The term " analysis of variance" is explained if we note that the denominator of
the ratio in Eq. (21) is an estimate of the variation within populations and the
· ..
, ,.-
Two variances Given random samples from each of two normal populations
with means and variances (/11' uI) and (/12' ui), we may test hypotheses about
the two variances. We will consider testing:
(i) .Yf0: ui < ui versus .Yf1: ui > ui
(ii) .Yf0: ui > ui versus .Yf1: ui < ui
(iii) .Yf0: ui = ui versus .Yf1: ui =1= ui
If X 11 , .•. , X l711 is a random sample from a normal density with mean /11
and variance ui, if X 21 , ..• , X 2712 is a random sample from a normal density
with mean /12 and variance ui, and if the two samples are independent, then we
know that
We might mention that the above defined tests can all be derived using the
generalized likelihood-ratio principle.
- n n .1 k"j
e- U (XJI-Jl.J)/O'jJ2
j= 1 i~ 1 J21l (1'j
and the maximum-likelihood estimates of Jlj' (1'1, j = 1, ... , k are given by
and
1 nj
hJ = - I (Xji - XjJ z.
nJ i= 1
The null hypothesis states that all (1'] are equal. Let (1'2 denote their common
value; then So = {(Jll' ... , Jlk' (1'2): - 00 < Jlj < 00; (1'2 > O}, and the maximum·
likelihood estimates of JlI" .. , Jlk' (1'2 over So are given by
j = 1, ... , k,
and
Therefore,
_
n (hJ)"i/
j=1
k
Z
- (I n) hJ/I n)Y'''j/z'
A generalized likelihood-ratio test is given by the following: Reject :If 0 if and
only if A :S; Ao . We would like to determine the size of the test for any constant
440 TESTS OF HYPOTHESES IX
AO or find Ao so that the test has size a, but, unfortunately, the distribution of the
generalized likelihood-ratio is intractable. An approximate size-a test can be
obtained for large nJ since it can be proved that - 2 log A is approximately
distributed as a chi-square distribution with k - 1 degrees of freedom. Accord~
ing to the generalized likelihood~ratio principle .Yf0 is to be rejected for small
A; hence .Yf 0 should be rejected here for large - 2 log A; that is, the critical region
of the approximate test should be the right tail. So the approximate size-a test
is the following: Reject .Yf 0 if and only if - 2 log A > Xl -a.Ck - I), the (l - a)th
quantile of the chi~square distribution with k - I degrees of freedom. (Several
other approximations to the distribution of the likelihood~ratio statistic have
been given, and some exact tests are also available.)
In this section we present a number of tests of hypotheses that one way or another
involve the chi-square distribution. Included will be the asymptotic distribution
of the generalized likelihood-ratio, goodness-of-fit tests, and tests concerning
contingency tables. The material in this section will be presented with an aim
of merely finding tests of certain hypotheses, and it will not be presented in such
a way that concern is given to the optimality of the test. Thus, the power
functions of the derived tests will not be discussed.
where O~, •.. , O~ are known and Or+l' ..• , Ok are left unspecified, -2 log An
is approximately distributed as a chi-square distribution with r degrees of
freedom when :If 0 is true and the sample size n is large. //11
We have assumed that 1 <r < k in the above theorem. If r = k, then all
parameters are specified and none is left unspecified. The parameter space e is
k-dimensional, and since :If 0 specifies the value of r of the components of
(01 , •.• , Ok)' the dimension of So is k - r. Thus, the degrees of freedom of the
asymptotic chi-square distribution in Theorem 7 can be thought of in two ways:
first, as the number of parameters specified by :If 0 and, second, as the difference
e
in the dimensions of and 9 0 ,
Recall that An is the random variable which has values
EXAMPLE 19 Recall that in Subsec. 4.3 we discussed testing :If 0: III 1l2'
O'f > 0, O'i > 0 versus :lfl : III =1= 1l2' O'i > 0, O'i > 0, where III and 0';
are
the mean and variance of one normal population and 112 and O'i are
the mean and variance of another. Here the parameter space is four-
dimensional, and although :If0 does not appear to be of the form given
442 TESTS OF HYPO'IHESES IX
EXAMPLE 20 In Subsec. 4.4 we tested :Yf 0: O'r = ... = 0';, 111' ... , 11m, where
I1j and 0'] were, respectively, the mean and variance of the jth normal
population, j = 1, ... , m. (In Subsec. 4.4, k was used instead of m.) If
we make the following reparameterization, :Yf 0 will have the desired form
of Theorem 7:
Now:Yf o becomes :Yfo: 01 = 1, ... , 0m-l = 1, Om, Om+l'"'' 02m; that is,
the first m - 1 components are specified to be 1 and the remaining are
unspecified. Theorem 7 is now applicable, and, again, because of the
invariance property of maximum-likelihood estimates, the generalized
likelihood-ratio obtained before and after reparameterization are the
same; hence the asymptotic distribution of - 2 log A, as claimed in
Subsec. 4.4, is the chi-square distribution with m - 1 degrees of freedom
when :Yf 0 is true. // /I
1+1
where xi = 0 or 1, j = 1, ... , k + 1; 0 < Pj < 1, j = 1, "', k + 1; I Xj = 1;
j= 1
k+l
and I Pj = 1 (as would be the case in sampling with replacement from a
j=l
population of individuals who could be classified into k + I classes or categories),
a common problem is that of testing whether the probabilities Pj have specified
numerical values. Thus, for instance, the result of casting a die may be classified
into one of six classes, and on the basis of a sample of observations we may wish
to test whether the die is true, that is, whether Pj = t for j = 1, ... , 6. One can
also think in terms of independent, repeated trials, where each trial can result in
anyone of k + 1 outcomes, called classes or categories. The density in Eq. (23)
then gives the density for the outcome of one trial. The result of one trial can
be represented by the multivariate random variable (Xl' ••. , Xk), where Xj is
unity if the trial results in category j and is 0 otherwise. Pj is the probability
that a trial results in category j. Now if we independently repeat the trial n
times, we have n observations of the multivariate random variable (Xl' ... , X k);
we can display them as
If we let N j = I" Xl}' then the random variable N J is the number of the n trials
t= 1
resulting in category j. We know that (Nl' ... , N k ) has a multinomial distribu..
tion. (See Example 5 in Subsec. 2.2 of Chap. IV.)
To test the null hypothesis .1fo: Pj = pJ, j = 1, ... , k + 1, where pJ are
given probabilities summing to unity, we hope to employ the generalized
likelihood-ratio principle. The likelihood function is given by
L = L(Pl' .•. , Pk; X 1b ••• , Xu, ••. , X"h ..• , X"k) = n" k+n pjli.
i= 1 }= 1
1
(24)
. The parameter space e has k dimensions (given k of the k + 1 P/s, the remaining
one is determined by I Pj = 1), while 9 0 is a point. It is readily found thatL is
maximized in 9 when
likelihood-ratio is
A = n" n
k+l (pO)"J
1'=1
-1
nj
.
which tends to be small when JIP 0 is true and large when JIP 0 is false. Note that
N j is the observed number of trial outcomes resulting in category j and nPJ is the
expected number when JIP 0 is true. It can be easily shown (see Prob. 39)
that
k+l I
S[Q~] = I -0 [np/l - Pj) + n2 (pj - pJ)2], (26)
J= 1 np J
where the Pj are the true parameters. If JIP 0 is true, then S[Q~] = I (1 - pJ) =
k + 1 - 1 = k. The following theorem gives a limiting distribution for Q~ when
the null hypothesis .;t(? 0 is true.
We will not prove the above theorem, but we will indicate its proof for
k = 1. What needs to be demonstrated is that for each argument X~ FQk(x)
converges to Fx2(k/x) as n --'). 00, where FQk(') is the cumulative distribution
function of the random quantity Qk and F X2(k)(') is the cumulative distribution
function of a chi-square random variable having k degrees of freedom. (Note
that k + 1, the number of groups, is held fixed, and n, the sample size, is increas-
ing.) If k = 1, then
We know that NI has a binomial distribution with parameters n and PI and that
Y,. = (Nl - npI)/JnpI(l - PI) has a limiting standard normal distribution;
hence, since the square of a standard normal random variable has a chi-square
distribution with one degree of freedom, we suspect that Y; = Q1 has a limiting
chi-square distribution with one degree offreedom, and such can be easily shown
to be the case, which would give a proof of Theorem 8 for k = 1.
Theorem 8 gives the limiting distribution for the statistic
o _ k+1
~
(N.J _ npO)2
j
Qk-.i..J
}=1 npj °
when the null hypothesis Yf o : p) = pJ,j = 1, ... , k + 1, is true. Thus a test of
Yf 0: p} = pJ, j = 1, ... , k + 1, which has approximate size a, is given by the
following:
green," according to the ratios 9/3/3/1. For n = 556 peas, the following
were observed (the last column gives the expected number):
Round and yellow 315 312.75
Round and green 108 104.25
Angular and yellow 101 104.25
Angular and green 32 34.75
9 3 3
A size-.05 test of the null hypothesis .Yf0: PI = 1 6' P2 = 1 6' P3 = 1 6'
and P4 = /6 is given by the following:
4 (N. _ npO)2
Reject.Yf ° if and only if Q~ = L J ° j exceeds XT-alk) = X~95(3) =
1 npj
7.81.
The observed Q~ is
(315 - 312.75)2 (l08 - 104.25)2 (101 - 104.25)2 (32 - 34.75)2
312.75 + 104.25 + 104.25 + 34.75
~ .470,
and so there is good agreement with the null hypothesis; that is, there is a
good fit of the data to the model. IIII
The proof of Theorem 9 is beyond the scope of this book. The limiting
distribution given in Theorem 9 differs from the limiting distribution given in
Theorem 8 only in the number of degrees of freedom. In Theorem 8 there are k
degrees of freedom, and in Theorem 9 there are k - r degrees of freedom; the
number of degrees of freedom has been reduced by one for each parameter that
is estimated from the data.
No mention of hypothesis testing is made in the statement of Theorem 9.
However, we will show now how the results of the theorem can be used to obtain
a goodness-of~fit test. Suppose that it is desired to test that a random sample
Xt, ... , Xn came from a density I(x; 81, ... , 8,.), where 81, ... , 8,. are unknown
parameters but the function I is known. The null hypothesis is the composite
hypothesis .Yf0: Xi has density I(x; 81, ... , 8,.) for some 81, ... , 8,.. The null
hypothesis states that the random sample came from the parametric family of
densities that is specified by I( . ; 81 , ••• , 8,.). If the range of the random variable
Xi is decomposed into k + 1 subsets, say AI' ... , A k + l , if Pj = P[X, E A j ], and if
N j = number of X/s falling in A j' then, according to Theorem 9,
k+l (N - np.)2
Q" = L1 j
nPj
J
indicate a test of the hypothesis that two multinomial populations can be con-
sidered the same and then indicate some generalizations. Suppose that there are
k + 1 groups associated with each of the two multinomial populations. Let the
first popula ti on have associated probabiIi ties Pll' P12' ... , PI k' PI, k+ 1 and the
second P21' P22' ... , P2k , P2, k+l· It is desired to test .Yt0: P1l = P2j (= Pj' say),
j = 1, ... , k + 1. For a sample of size nl from the first population, let N 1j
denote the number of outcomes in group j, j = 1, ... , k + 1. Similarly, let N 2j
denote the number of outcomes in group j of a sample of size n2 from the second
population. (Here we are assuming that the sample sizes nl and n2 are known.)
We know that
I
k+l (N ij - niPij )2
j= 1 nj Pij
Q2A; = I2 k+l(N
I ij - n p)2
i j (29)
i=1 j=1 niPj
has a limiting chi-square distribution with 2k degrees of freedom. If.Yt 0
specifies the values Pj' then Q2k is a statistic and can be used as a test statistic.
On die other hand, if the Pj defined by .Yt0 are unknown, then they have to be
estimated. If.Yt 0 is true, the two samples can be considered as one random
sample of size n 1 + n 2 from a multinomial population with probabilities PI, ... ,
Pk+l. Maximum-likelihood estimators of the Pj are then (N 1j + N 2j )/(nl + n2)'
j = 1, ... , k, and if the Pj in Eq. (29) are replaced by their maximum-likelihood
estimators, we then obtain
several given samples are drawn from the same population of a specified type
(such as the Poisson, the gamma, etc.) can be obtained using a procedure similar
to that above. We illustrate with an example.
EXAMPLE 24 One hundred observations were drawn from each of two Poisson
populations with the following results:
0 1 2 3 4 5 6 7 8 9 or more Total
Population 1 11 25 28 20 9 3 3 0 1 0 100
Population 2 13 27 28 17 11 1 2 1 0 0 100
Total 24 52 56 37 20 11 200
Is there strong evidence in the data to support the contention that the two
Poisson populations are different? That is, test the hypothesis that the
two populations are the same. This hypothesis can be tested in a variety
of ways. We first use the chi-square technique mentioned above. We
group the data into six groups, the last including all digits greater than 4, as
indicated in the above table. If the two populations are the same, we have
to estimate one parameter, namely, the mean of the common Poisson
distribution. The maximum-likelihood estimate is the sample mean,
which is
0(24) + 1(52) + 2(56) + 3(37) + 4(20) + 5(4) + 6(5) + 7(1) + 8(1)
200
420
= 200 = 2.1.
The expected number in each group of each population is given by
0 1 2 3 4 5 or more
The value of the statistic in Eq. (29), where niPj is replaced by the estimates
given in the above table, can be calculated. It is approximately 1.68.
The degrees of freedom should be 2k - 1 (one parameter is estimated),
which is 9. The test indicates that there is no reason to suspect that the
two assumed Poisson populations are different Poisson populations. IIII
452 TESTS OF HYPOTHESES IX
We mentioned earlier that there are several methods of testing the null
hypothesis considered here. For example the generalized likelihood-ratio prin-
ciple and employment of Theorem 7 yield a test that the student may find in-
structive to find for himself.
(ii) whether or not they were subject to the condition. An industrial engineer
could use a contingency table to discover whether or not two kinds of defects in
a manufactured product were due to the same underlying cause or to different
causes. It is apparent that the technique can be a very useful tool in any field
of research.
(31)
As a further notation we shall denote the row totals by N i. and the column totals
by N .}; that is,
Of course,
It N i. = 2: N. j = n.
J
We shall now set up a probability model for the problem with which we
wish to deal. The n individuals will be regarded as a sample of size n from a
multinomial population with probabilities Pi} (i = 1, 2, ... , r; i = 1, 2, .. " s).
The probability density function for a single observation is
where
or 1 and L
t, }
xi} = L
454 TESTS OF HYPOTHESES IX
We wish to test the null hypothesis that the A and B classifications are independ~
ent, i.e., that the probability that an individual falls in B j is not affected by the
A class to which the individual happens to belong. Using the symbolism of
Chap. I, we would write
and
or
P[A i n B}l = P[AilP[Bjl.
If we denote the marginal probabilities P[Ail by Pi. (i = 1, 2, ... , r) and the
marginal probabilities P[Bjl by P.j (j = I, 2, ... , s), the null hypothesis is simply
In 8 0 ,
(36)
5 em-SQUARE TESTS 455
The distribution of A under the null hypothesis is not unique because the hy-
pothesis is composite and the exact distribution of A does involve the unknown
parameters Pi. and P.j; hence, it is very difficult to solve for 20 in sup P 8[A ~ 20] =
eo
ex. For large samples we do have a test, however, because - 2 log A is in that
case approximately distributed as a chi-square random variable with
rs - 1 - (r +s- 2) = (r - 1)(s - 1)
degrees of freedom and on the basis of this distribution a unique critical region
for 1 may be determined. The degrees of freedom rs - 1 - (r + s - 2) is
obtained by subtracting r + s - 2, which is the dimension of 9 0 , from ra - 1,
which is the dimension of 9. Also, (r - 1)(a - 1) is the number of parameters
specified by :Yt0 • (See Theorem 7 and the comment following it.) Actually,
the null hypothesis :Yto: Pij = Pi.P.j is not of the form required by Theorem 7;
so it might be instructive to consider the necessary reparameterization. For
convenience, let us take r = s = 2. Now (3 = {(el , O2 , ( 3 ) = (PH' Po, P21):
Pu > 0; PI2 > 0; P21 ~ 0; and P11 + PI2 + P21 :::;;; I}. Let 8 with points
1
PH PI2 PI,s-1 Ph
P21 PZZ P2, s-1 PZs
In casting about for a test which may be used when the sample is not large,
we may inquire how it is that a test criterion comes to have a unique distribution
for large samples when the distribution actually depends on unknown param-
eters which may have any values in certain ranges. The answer is that the
parameters are not really unknown; they can be estimated, and their estimates
approach their true values as the sample size increases. In the limit as n becomes
infinite, the parameters are known exactly, and it is at that point that the dis-
tribution of A actually becomes unique. It is unique because a particular point
in 8 0 is selected as the true parameter point, so that the N ij are given a unique
distribution, and the distribution of A is then determined by this distribution.
It would appear reasonable to employ a similar procedure to set up a test
for small samples, i.e., to define a distribution for A by using the estimates for
the unknown parameters. In the present problem, since the estimates of the
Pi. and P.j are given by Eq. (35), we might just substitute those values in the
distribution function of the N lj and use the distribution to obtain a distribution
for A. However, we should still be in trouble; the critical region would depend
on the marginal totals Nt. and N. j ; hence the probability of a Type I error would
vary from sample to sample for any fixed critical region 0 < A < Ao.
There is a way out of this difficulty, which is well worth investigation
because of its own interest and because the problem is important in applied
statistics. Let us denote the joint density of all the N ij briefly by f(nt), the
marginal density of all the Nt. and N. j by g(ni" n.}), and the conditional density
of the N i}, given the marginal totals, by
!(ni})
I
f( ni}ni.,n.j)= ( )
g nt., n.}
Under the null hypothesis, this conditional distribution happens to be inde-
pendent of the unknown parameters (as we shall show presently); the estimators
N i./n and N ,}In form a sufficient set of statistics for the Pt, and P.j' This fact
will enable us to construct a test.
The joint density of the N ij is simply the multinomial distribution
in e, and in 8 0 (we are interested in the distribution of A under :if0) this becomes
To obtain the desired conditional distribution, we must first find the distribution
5 em-SQUARE TESTS 457
of the N i. and N.}, and this is accomplished by summing Eq. (38) over all sets
of nil such that
I} = n}
"~ n·· •
and "n,.=n
~} ..
I.
(39)
i j
For fixed marginal totals, only the factor l/flnjj! in Eq. (38) is involved in
the sum; so we have, in effect, to sum that factor over all nij subject to Eq. (39).
The desired sum is given by comparing the coefficients of fl x';'. in the expression
i
(Xl + ... + xrfoJ(xl + ... + x r)n. 2
+ ... + xrt· = (Xl + ... + Xr)n.
••• (Xl s
(40)
On the right-hand side the coefficient of fl Xii. is simply
n!
(41)
fl ni.!·
i
On the left-hand side there are terms with coefficients of the form
g( n
i.,
n) -
.} -
t)2
(fl ni.(n!)(f] n.}!)
(fl Pi.ni')(f] P.j'
n. /) (44)
lr-------------~-----------
which, happily, does not involve the unknown parameters and shows that the
estimators are sufficient.
To see how a test may be constructed, let us consider the general situation
in which a test statistic A for some test has a distribution/A(A; 8) which involves
an unknown parameter 8. If 8 has a sufficient statistic, say T, then the joint
density of A and T may be written
fA,T(A, t; 8) =/AIT(Alt)/T(t; 8),
and the conditional density of A, given T, will not involve 8. Using the condi-
tional distribution, we may find a number, say Ao(t), for every t such that
f A.o(t)
o
/AIT(AI t) dA = .05, (46)
for example. In the At-plane the curve A = Ao(t) together with the line A = 0
will determine a region R. See Fig. 7. The probability that a sample will give
rise to a pair of values (A, t) which correspond to a point in R is exactly .05
because
co
P[(A, T) E R] = J-co J fA, T(A, t; 8) dA dt
A.o(t)
= Jco .05fT(t; 8) dt
- co
=.05.
Hence we may test the hypothesis by using Tin conjunction with A. The
critical region is a plane region instead of an interval 0 < A < Ao; it is such a
region that, whatever the unknown value of 8 may be, the Type I error has a
5 em-SQUARE TESTS 459
Q = I [Nij - n(Ni./n)(N.j!n)F
i,j n(Ni./n) (N.j!n) (47)
represent the probabilities associated with the individual cells and Ntlk be the
numbers of sample elements in the individual cells, and, as before, marginal
totals will be indicated by replacing the summed index by a dot; thus
and (48)
There are four hypotheses that may be tested in connection with this table.
We may test whether all three criteria are mutually independent, in which case
the n utI hypothesis is
Pilk = PI .. P.l.P ..k' (49)
where Pi .. = L L Pilk'
j t
P.j. = Li L Pijk'
k
and P ..k = L L Pijk;
i 1
or we may test
whether anyone of the three criteria is independent of the other two. Thus to
test whether the B classification is independent of A and C, we set up the null
hypothesis
Pijk = Pl.k P.}., (50)
where Pi.k = L
}
Pijk'
where
L
i. j. k
PUk = 1 and L
i.}. k
nijk = n.
so that
To test the null hypothesis in Eq. (50), for example, we make the substitution
of Eq. (50) into Eq. (51) and maximize L with respect to the PUr. and P.). to find
sup L =
i!o n
~n (II n7:ir.
I, If.
k
) (II n~l')'
j
(53)
6 TESTS OF HYPOTHESES AND CONFIDENCE INTERV AIS 461
The generalized likelihood-ratio). is given by the quotient of Eqs. (52) and (53),
and in large samples - 2 log A has the chi-square distribution with
8 18 2 83 - 1- [(8183 - 1) + 82 - 1] = (8183 - 1)(82 - 1)
degrees of freedom. Again the large-sample distribution is quite adequate for
many purposes. (8183 -1) +(82 -1)isthedimensionofB o , and 8 18 2 83 -1 is
the dimension of t).
A test statistic analogous to that given in Eq. (47) for testing independence
in a 2 x 2 contingency table can also be derived. F or testing :If 0: A and C
classifications are independent of the B classification, such a test statistic is
It should be emphasized that any member, say 8(Xl' ... , X,.), of the family
of confidence sets is a subset of 8, the parameter space. 8(X1 , • •• , Xn) is a
random subset; for any possible value, say (Xl' ... , X,.), of (Xl' ... , X,.),
8(Xl' ... , X,.) takes on the value 8(Xl' ... , X,.), a member of the family 9. To
aid in the interpretation of the probability statement in Eq. (55), note that for a
fixed (yet arbitrary) fJ "8 (Xl' ... , X,.) contains fJ" is an event [it is the event that
the random interval 8(Xl' ... , X,.) contains the fixed fJ] and the fJ that appears
as a subscript in P8 is the fJ that indexes the distribution of the X/s appearing
in 8(Xl' ... , X,.).
For instance, suppose Xl' ... , X,. is a random sample from N(fJ, 1).
8 = {fJ; - 00 < fJ < oo}. Let the subset 8(x 1 , ••• , X,.) be the interval
(x - z/Jn, x + z/J~), where z is given by cI>(z) - cI>( -z) = y; then the family of
subsets 9 = {8(Xl' ... , xJ: 8(Xh ... , X,.) = (x - z/J~, x + z/Jn)} is a family
of confidence sets with a confidence coefficient y since
= P8 [ -z < X -fJ]
l/Jn < z = y for all fJ E 8.
P80[8(Xl' ... , X,.) contains fJo] = P80 [(X1 , ••• , Xn) E X(fJ o)] = 1- ct.
6 TESTS OF HYPOTHESES AND CONFIDENCE INTERVALS 463
EXAMPLE 25 Let Xl' ... , X" be a random sample from N(fJ, 1), and consider
testing .1f0: fJ = fJ 0 • A test with size rl is given by the following: Reject
.1f0 if and only if Ix - fJo I z/j~, where z is defined by 4l(z) - 4l( -z) =
1 - rl. The acceptance region of this test is given by
30(110) = {(x" ... , x.); 110 - ,in < x< 110 + ,in}.
We can now define, as in Eq. (56),
S(XH ... , xn) = {fJ o: (XH ••• , xn) E X(fJo)}
The general procedure exhibited above shows how tests of hypotheses can
be used to generate or construct confidence sets. The procedure is reversible;
that is, a given family of confidence sets can be "reverted" to give a test of
hypothesis. Specifically, for a given family {e(Xl' ... , xn)} of confidence sets
with a confidence coefficient y, if we defined
then the nonrandomized test with acceptance region X(fJ o) is a test of .1f0: fJ = fJo
with size rl = 1 - y.
The usefulness of the strong relationship between tests of hypotheses and
confidence sets is exemplified not only in the fact that One can be used to construct
the other but also in the result that often an optimal property of one carries over
to the other. That is, if one can find a test that is optimal in some sense, then
the corresponding constructed confidence set is also optimal in some sense, and
conversely. We will not study the very interesting theoretical result alluded to
in the previous sentence, but we will give the following in order to give some idea
of the types of optimality that can be expected. (See the more advanced books
of Ref. 16 and Ref. 19 for a detailed discussion.) An optimum property of
confidence sets is given in the following definition.
464 TESTS OF HYPOTHESES IX
7.1 Introduction
Sequential analysis refers to techniques for testing hypotheses or estimating
parameters when the sample size is not fixed in advance but is determined during
the course of the experiment by criteria which depend on the observations as they
occur. In this section we propose to consider, and then only briefly, one form
of sequential analysis, namely, the sequential probability ratio test.
In Sec. 2 above we considered testing the simple null hypothesis :Yf 0: 0 = 00
versus the simple alternative hypothesis :Yf1: 0 = 01 , It was shown (Neyman-
Pearson lemma) that for samples of fixed size n, the test which minimized the
size, say p, of the Type II error for fixed size, say a, of the Type I error was a
simple likelihood-ratio test. That is, for fixed n and a, p was minimized.
Suppose now that it is desired to fix both a and P in advance and then find that
simple likelihood-ratio test having minimum sample size n and having size of
Type I error equal to a and size of Type II error p. The solution of such a
problem is illustrated in the following example.
7 SEQUENTIAL TESTS OF HYPOTHESES 46S
and
or
IO/y' n IO/yn
k -100
r: ~ 2.326,
10/yn
and
k - 105)
<f> (
10
1)1, = .05
466 TESTS OF HYPOTHESES IX
implies
k - 105
;: ~ -1.645;
lO/y n
for m = 1, 2, ... , and compute sequentially )"h A2' ... ,. For fixed ko and kl
satisfying 0 < ko < k1' adopt the following procedure: Take observation Xl and
compute A1 ; if A1 s ko, reject :Yf 0; if A1 > kh accept :Yf0; and if ko < )"1 < k1'
take observation X2, and compute A2 • If A2 s ko, reject :Yf 0; if A2 > k1'
accept :Yf 0; and if ko < A2 < kh observe X3' etc. The idea is to continue
sampling as long as ko < A) < k1 and stop as soon as Am s ko or Am ~ k1'
rejecting :Yf 0 if Am < ko and accepting :Yf 0 if Am > k 1 • The critical region of
ao
the described sequential test can be defined as C = U Cn , where
n=1
Definition 20 Sequential probability ratio test F or fixed 0 < ko < k1' a test
as described above is defined to be a sequential probability ratio test. IIII
When we considered the simple likelihood-ratio test for fixed sample size
n, we determined k so that the test would have preassigned size ct. We now
want to determine ko and k1 so that the sequential probability ratio test will have
preassigned ct and {J for its respective sizes of the Type I and Type II errors.
Note that
f Lo(n)
en
(60)
and
{J = P[accept :Yf 0 1:Yf 0 is false] = I
00
n=1
f L (n),
An
1 (61)
determination of ko and kl from Eqs. (60) and (61) can be a major computational
project. In practice, they are seldom determined that way because a very simple
and accurate approximation is available and is given in the next subsection.
We note that the sample size of a sequential probability ratio test is a
random variable. The procedure says to continue sampling until An =
An(Xl' ... , xn) first falls outside the interval (ko, k l ). The actual sample size then
depends on which Xi'S are observed; it is a function of the random variables
Xl, X 2 , ••• and consequently is itself a random variable. Denote it by N.
Ideally we would like to know the distribution of N or at least the expectation
of N. (The procedure, as defined, seemingly allows for the sampling to continue
indefinitely, meaning that N could be infinite. Although we will not so prove,
it can be shown that N is finite with probability 1.) One way of assessing the
performance of the sequential probability ratio test would be to evaluate the
expected sample size that is required under each hypothesis. The following
theorem, given without proof (see Lehmann [16]), states that the sequential
probability ratio test is an optimal test if performance is measured using expected
sample size.
Theorem 10 The sequential probability ratio test with error sizes a and
/3 minimizes both G[NI Jt o is true] and G[NI Jt l is true] among all tests
(sequential or not) which satisfy the following: P[Jt 0 is rejected I Jt 0 is
true] < a, P[Jt 0 is accepted I Jt 0 is false] < /3, and the expected sample
size is finite. // //
Note that in particular the sequential probability ratio test requires fewer
observations on the average than does the fixed-sample-size test that has the
same error sizes. In Subsec. 7.4 we will evaluate the expected sample size for
the example given in the introduction in which 64 observations were required
for a fixed-sample-size test with preassigned a and /3.
n= 1 en
L fenLl(n) =
00
= k o(1- P),
and hence ko > (1.1( I - P). Also
'
k 0=1 (1.
_ P< k 0 < k 1 ~ 1 -P (1. = k'
1- (63)
1/11
Remark Let (1.' and P' be the error sizes of the sequential probability
ratio test defined by leO and k't given in Eq. (62). Then (1.' + P' < (1. + p.
Let A' and C' (with corresponding A~ and C~) denote the
PROOF
acceptance and critical regions of the sequential probability ratio test
defined by ko
and k 1. Then
and
1 - (1.' = L
00 f Lo(n) > 1-(1.
I P L f , L 1(n) = 1-(1.
00
P P';
n;;:; 1 An n 1 An ~
hence (1.'(1 - P) (1.(1 - P'), and (1 - (1.)P' < (l - (1.')P, which together
implythat(1.'(1- P) + (1 - (1.)P' ~ (1.(1- P') + (1- (1.')por(1.' + P' < (1. + p.
///1
470 TESTS OF HYPOTHESES IX
Naturally, one would prefer to use that sequential probability ratio test
having the desired preassigned error sizes a and P; however, since it is difficult
to find the ko and kl corresponding to such a sequential probability ratio test,
instead one can use that sequential probability ratio test defined by ko and kl of
Eq. (62) and be assured that the sum of the error sizes a' and P' is less than or
equal to the sum of the desired error sizes a and p.
Theorem 11 Wald's equation Let ZI, Z2, ... , Zn, ... be independent
identically distributed random variables satisfying &[ IZi I] < 00. Let N
be an integer-valued random variable whose value n depends only on the
values of the first n Zi'S. Suppose &[N] < 00. Then
&[ZI + ... + Z N] = &[N] . &[ZJ (64)
PROOF &[ZI + ... + ZN] = &[&[ZI + ... + ZNIN]]
00
= L Li
i= 1 n=
&[Zd N = n]P[N = n]
00
= L &[Zi]P[N > i]
i= 1
00
If the sequential probability ratio test leads to rejection of J'e0' then the
random variable ZI + ... + ZN < lo~ ko, but ZI + ... + ZN is close to lo&: ko
since ZI + ... + ZN first became less than or equal to loge ko at the Nth observa-
tion; hence $[ZI + ... + ZN] ~ loge ko . Similarly, if the test leads to acceptance,
$[ZI + ... + ZN] ~ logekl ;hence$[ZI + ... + ZN] ~ plogeko + (1 - p)lo&:kl'
where p = P[ J'e 0 is rejected]. Using
we obtain
,....,
(l - P) loge [rxl(l - P)] + P lo~ [(1 - rx)IP]
I'V
(66)
$[Zd J'e 0 is false]
hence
1 2
= 2(12 (8 1 - ( 0) ,
and
For a; = .01, P= .05, (12 = 100,80 = 100, and 8 1 = 105 (as in Example 26),
Eq. (65) reduces to
PROBLEMS
1 Let X have a Bernoulli distribution, where P[X = 1] = () = 1 - P[X = 0].
(a) For a random sample of size n = 10. test -*'0: () < i versus -*'1: () > i. Use
the critical region {L x, 6}.
(i) Find the power function, and sketch it.
(ii) What is the size of this test?
(b) For a random sample of size n 10:
(i) Fmd the most powerful size-o: (0: .0547) test of -*'0: e= i versus
-*'1: () = 1.
(ii) FIlld the power of the most powerful test at 1. e
(c) For a random sample of size 10, test -*'0: () i versus -*'1: e= 1.
(i) Fmd the minimax test for the loss function 0 = t(do ; eo) = t(d1 ; ( 1 ),
{(do; e•.) = 1719, t(d.; eo) 2241.
(ii) Compare the maximum risk of the minimax test with the maximum risk
of the most powerful test given in part (b).
(d) Again, for a sample of size 10, test -*'0: () 1 versus -*'1: = 1. Use the e
above loss function to find the Bayes test corresponding to prior probabilities
given by
9 = (1719/2241)0)10 + 34 '
PROBLEMS 473
where 8 >0.
(a) In testing Jf' 0: 8 <1 versus Jf'1: 8 > 1, find the power function and size of
the test given by the following: Reject Jf' 0 if and only if X> 1.
(b) Find a most powerful size-<x test of Jf' 0: 8 2 versus Jf'1: 8 = 1.
(c) For the loss function given by {(do; 2) = {(d.; 1) 0, {(do; 1) = {(d1 ; 2) 1,
find the minimax test of Jf' 0: 8 = 2 versus Jf'1: 8 = 1.
(d) Is there a uniformly most powerful size-a. test of Jf' 0: 8 > 2 versus ~1: 8 < 2?
If so, what is it?
(e) Among all possible simple likelihood-ratio tests of Jf' 0: 8 = 2 versus Jf'1:
8 = 1, find that test that minimizes <X + fJ, where <X and fJ are the respective
sizes of the Type I and Type II errors.
(f) Find the generalized likelihood-ratio test of size <X of Jf' 0: 8 = 1 versus
Jf'.:8#;1.
5 Let X be a single observation from the density f(x; 8) = (28x + 1 - 8)1[0. 11(X),
where 1 <8<1.
(a) Find the most powerful size-a. test of Jf' 0: 8 = 0 versus Jf'1: 8 = 1. (Your
test should be expressed in terms of <x.)
(b) To test Jf' 0: 8 <0 versus Jf'1: 8 0, the following procedure was used:
Reject :?e0 if X exceeds 1. Find the power and size of this test.
(c) Is there a uniformly most powerful size-a. test of Jf' 0: 8 <0 versus Jf' 1~ 8 > O?
If so, what is it?
474 TESTS OF HYPOTHESES IX
I
(d) What is the generalized likelihood-ratio test of :¥f 0: 8 = 0 ve~us :¥f I! 8 ¢ O?
(e) Among all possible simple likelihood-ratio tests of :¥f 0: (J = 0 versus :¥f I:
8 = 1 find that test which minimizes 0'. + {3, where 0'. and (3 are the respective
sizes of the Type I and Type II errors.
(f) Given a set of observations. all of which fall between 0 and I, indicate how
you would test the hypothesis that the ob~ ,~vations came from the density
f(x; 8).
6 Let Xl, ..• , Xn denote a random sample from f(x; 8) = (1/8)1(0. 6)(X), and let
Y l , ... , Y n be the corresponding ordered sample. To test :¥f 0: 8 = 80 versus
:¥f I: 8 ¢ 80 , the following test was used: Accept :¥f 0 if 80 ( \Y;;) < Yn < 80 ;
otherwise reject.
(a) Find the power function for this test, and sketch it.
(b) Find another (nonrandomized) test that has the same size as the given test,
and show that the given test is more powerful (for all alternative 8) than the
test you found.
7 Let Xl, •.• , Xn denote a random sample from
f(x; 8) =( 1/8)x(I-6)/61(0. l)(x).
e- 6 8J<
f(x; 8) = -,-l{o.l. 2 • ••• )(x).
x.
(a) Find the UMP test of :¥f 0: 8 = 80 versus :¥f I: 8 > 80 , and sketch the power
function for 80 = 1 and n = 25. (Use the central-limit theorem. Pick
0'. = .05.)
(b) Test:¥f 0: 8 = 80 versus :¥fl: 8 ¢ 80 • Find the general form of the critical
region corresponding to the test arrived at using the generalized likelihood-
ratio principle. (The critical region should be defined in terms of L Xl.)
(c) A reasonable test of :¥f 0: 8 = 80 versus :¥f I: 8 ¢ 80 would be the following:
Reject if I X - 80 I >K. For 0'. = .05, find K so that P[reject :K 0 I:¥f 0] = .05.
(Assume that n is large enough so that the central-limit theorem can be used
to find an approximation to K.)
9 Let e = {80 , 81 }. Show that any test arrived at using the generalized likelihood-
ratio principle is equivalent to a simple likelihood-ratio test.
10 To test :¥f 0: 8 < 1 versus :¥f I : 8 > 1 on the basis of two observations, say Xl and
X 2 , from the uniform distribution on (0, 8), the following test was used:
Reject:¥f 0 if
PROBLEMS 475
(a) Find the power function of the above test, and note its size. [Recall that
Xl + X 2 has a triangular distribution on (0, 28).]
(b) Find another test that has the same size as the given test but has greater power
for some () > 1 if such exists. If such does not exist, explain why.
11 Let Xl, ••. , Xn be a random sample of size n fromf(x; 8) = ()2 xe - 61cl(0. CO)(x).
(a) In testing .1f 0: () < 1 versus .1f1 : () > 1 for n = 1 (a sample of size 1) the
following test was used: Reject .1f 0 if and only if Xl < 1. Find the power
function and size of this test.
(b) Find a most powerful size-a test of .1f 0: () = 1 versus .1f1: () = 2.
(c) Does there exist a uniformly most powerful size-a test of .1f 0: () < 1 versus
.1f1: () > I? If so, what is it?
(d) In testing .1f 0: () = 1 versus .1f 1: () = 2, among all simple likelihood-ratio
tests find that test which minimizes the sum of the sizes of the Type I and Type
II errors. You may take n = 1.
12 Let Xl, .•. , Xn be a random sample from the uniform distribution over the interval
«(), () + 1). To test .1f 0: () = 0 versus .1f1: () > 0, the following test was used:
Reject .1f 0 if and only if Yn > 1 or Y 1 > k, where k is a constant.
(a) Determine k so that the test wiIl have size a.
(b) Find the power function of the test you obtained in part (a).
(c) Prove or disprove: If k is selected so that the test has size a, then the given
test is uniformly most powerful of size a.
13 Let X .. ••. , Xm be a random sample from the density ()lxB 1 - 11(0.o(x), and let
Y 1, "', Yn be a random sample from the density ()2y fJ2 -11(0, 1)(y). Assume that
the samples are independent. Set Vf = -logeXf' i = 1, ... , m, and VJ =
-loge YJ, j = 1, •.. , n.
(a) Find the generalized likelihood-ratio for testing .1f 0: ()1 = ()2 versus
.1f 1 :()1¢()2.
(b) Show that the generalized likelihood-ratio test can be expressed in terms of
the statistic
L Vf
T - =-....;;;;..;.----==---
-LU,+LV/
(c) If.1f 0 is true, what is the distribution of T? (You do not have to derive it if
you know the answer.) Does the distribution of T depend on () = ()1 = ()2
given that .1f 0 is true?
14 Find a genera1ized likelihood-ratio test of size a: for testing .1f0: () < 1 versus
.1f 1: () > 1 on the basis of a random sample Xl, "', Xn from f(x; 8) =
()e- b l(o. CO)(x).
15 Let X be a single observation from the density f(x; 8) = (1 + 8)xBl(o. 1)(x), where
(»-l.
(a) Find the most powerful size-a: test of .1f 0: () = 0 versus .1f 1: () = 1.
(b) Is there a uniformly most powerful size-a: test of.1'f'0 : () < 0 versus .1f1 : () > O?
If so, what is it?
476 TESTS OF HYPOTHESES IX
26 The metallurgist of Prob. 20, after assessing the magnitude of the various errors
that might accrue in his experimental technique, decided that his measurements
should have a standard deviation of 2 degrees centigrade or less. Are the data
consistent with this supposition at the .05 level? (That is, test .:¥f0: a <2.)
27 Test the hypothesis that the two samples of Prob. 19 came from populations with
the same variance. Use Ct = .05.
28 The power function for a test that the means of two normal populations are equal
depends on the values of the two means ILl and 1'2 and is therefore a surface. But
the value of the function depends only on the difference (1 = 1'1 - 1'2, so that it
can be adequately represented by a curve, say {3(U). Plot {3(U) when samples of 4
are drawn from one population with variance 2 and samples of 2 are drawn from
another population with variance 3 for tests at the .01 level.
29 Given the samples (1.8, 2.9, 1.4, 1.1) and (5.0, 8.6,9.2) from normal populations,
test whether the variances are equal at the .05 level.
30 Given a sample of size 100 with X = 2.7 and 2: (X, X)2 225, test the null
hypothesis .:¥f 0: IL 3 and a 2 = 2.5 at the .01 level, assuming that the population
is normal.
31 Using the sample of Prob. 30, test the hypothesis that I' a 2 at the .01 level.
32 Using the sample of Prob. 30, test at the 0.1 level whether the .95 quantile point,
say ~ = ~.95, of the population distribution is 3 relative to alternatives ~ < 3.
Recall that ~ is such that J~ ~ f(x) dx = .95, where f(x) is the population
density; it is, of course, I' + I.645a in the present instance where the distribution is
assumed to be normal.
33 A sample of size n is drawn from each of k normal populations with the same
variance. Derive the generalized likelihood-ratio test for testing the hypothesis
that the means are all O. Show that the test is a function of a ratio which has
the F distribution.
34 Derive the generalized likelihood-ratio test for testing whether the correlation of
a bivariate normal distribution is O.
35 If XI, X 2 , ••• , XII are observations from normal populations with known variances
ai, vi, ... , a~, how would one test whether th~ir means were all equal?
36 A newspaper in a certain city observed that driving conditions were much improved
in the city because the number of fatal automobile accidents in the past year was 9
whereas the average number per year over the past several years was 15. Is it
possible that conditions were more hazardous than before? Assume that the
number of accidents in a given year has a Poisson distribution.
37 Six 1-foot specimens of insulated wire were tested at high voltage for weak spots
in the insulation. The numbers of such weak: spots were found to be 2, 0, 1, 1, 3,
and 2. The manufacturer's quality standard states that there are less than 120
such defects per 100 feet. Is the batch from which these specimens were taken
worse than the standard at the .05 level ? (Use the Poisson distribution.)
478 TESTS OF HYPOTHESES IX
38 Consider sampling from the normal distribution with unknown mean and variance:
(a) Find a generalized likelihood-ratio test of :K 0: u 2 < u~ versus :K I: u 2 > u& •
(b) Find a generalized likelihood-ratio test of :K 0: u 2 u6 versus :K I: u 2 :f:. u~.
39 (a) Suppose (Nit 0, Nt) is multinomially distributed with parameters n,
o.
has a limiting chi-square distribution. Find the exact mean and variance
of Q.
(b) Let (Nt, • 0 N,,) be distributed as in part (a). Define
.,
[See Eq. (25).] Find 8[Qf]. [See Eq. (26).] Is 8[Q~] for PI = p~, .•. ,
PUI = P~+l less than or equal to 8[Qf] for arbitrary Ph ... , p,,+!?
40 A psychiatrist newly employed by a medical clinic remarked at a staff meeting that
about 40 percent of all chronic headaches were of the psychosomatic variety. His
disbelieving colleagues mixed some pills of plain flour and water, giving them to
all such patients on the clinic's rolls with the story that they were a new headache
remedy and asking for comments. When the comments were all in they could be
fairly accurately classified as follows: (i) better than aspirin, 8, (ti) about the same
as aspirin, 3, (iii) slower than aspirin, 1, and (iv) worthless, 29. While the doctors
were somewhat surprised by these results, they nevertheless accused the psychiatrist
of exaggeration. Did they have good grounds?
41 A die was cast 300 times with the following results:
Occurrence: 1 2 3 4 5 6
Frequency: 43 49 56 45 66 41
Are the data consistent at the ,05 level with the hypothesis that the die is true?
42 Of 64 offspring of a certain cross between guinea pigs, 34 were red, 10 were black,
and 20 were white. According to the genetic model, these numbers should be in
the ratio 9/3/4. Are the data consistent with the model at the .05 level?
43 A prominent baseball player's batting average dropped from .313 in one year to
.280 in the following year. He was at bat 374 times during the first year and 268
times during the second. Is the hypothesis tenable at the .05 level that his hitting
ability was the same during the two years?
44 Using the data of Prob. 43, assume that one has a sample of 374 from one Bernoulli
population and 268 from another. Derive the generalized likelihood-ratio test
for testing whether the probability of a hit is the same for the two populations.
How does this test compare with the ordinarY test for a 2 x 2 contingency table?
PROBLEMS 479
45 The progeny of a certain mating were classified by a physical attribute into three
groups, the numbers being 10 53, and 46. According to a genetic model the
9
Male Female
According to the genetic model these numbers should have relative frequencies
given by
P p2
2
-+pq
2
q q2
-
2 2
More than
No colds One cold one cold
Test at the .05 level whether the t~o trinomial populations may be regarded as the
same.
480 TESTS OF HYPOTHESES IX
50 According to the genetic model the proportion of individuals having the four
blood types should be given by:
0: q2
A:p2+2pq
+
B: r2 2qr
AB: 2pr
where p + q + r = I. Given the sample 0, 374; A, 436; B, 132; AB, 58; how
would you test the correctness of the model?
51 Galton investigated 78 families, classifying children according to whether or not
they were light-eyed, whether or not they had a light-eyed parent, and whether or
not they had a light-eyed grandparent. The following 2 x 2 x 2 table resulted :
Grandparent
Light Not
Parent
'0
:aU Ught
Not
1928
303
552
395
596
225
508
501
Test for complete independence at the .01 level. Test whether the child classifica-
tion is independent of the other two classifications at the .01 level.
52 Compute the exact distribution of A for a 2 x 2 contingency table with marginal
totals Nt. = 4, N 2. = 7, N.1 = 6, N.2 = 5. What is the exact probability that
- 21o&e A exceeds 3.84, the .05 level of a chi-square distribution for one degree of
freedom?
53 In testing independence in a 2 x 2 contingency table, find the exact distribution of
the generalized likelihood-ratio for a sample of size 2. Do the same for samples
of size 3 and 4. Discuss.
54 Let Xl, •.• , Xn be a random sample from N(I-', a 2 ), where a 2 is known. Let A
denote the generalized likelihood-ratio for testing Jf' 0: I-' = 1-'0 versus :Yf 1: I-' :f:. 1-'0'
Find the exact distribution of - 2 lo&eA, and compare it with the corre-
sponding asymptotic distribution when Jf' 0 is true. HINT: L (X, - X)2 =
'"
L. (X, - I-')2 - n(X
- - 1-') 2•
55 Here is an actual sequence of outcomes for independent Bernoulli trials. Do you
think p (the probability of success) equals l?
s I I I s, sIs I I, s I I I I, I I sIs, s I I I I,
s I Is/, I I I I s, I I s I /, s I I I I, s s I I I.
PROBLEMS 481
~.
If you do not think p is 1, what do you think p is? Give a confidence-interval
estimate of p. If the above data were generated by tossing two dice, then what
would you think p is? If the data were generated by tossing two coins, then what
would you think pis? (If the data were generated by tossing two dice, assume that
the possible values of pare j/36, j 0, ... , 36. If the data were generated by
tossing two coins, assume that the possible values of p arej/4,j = 0, ... ,4.)
56 In sampling from a Bernoulli distribution, test the null hypothesis that p = t
against the alternative that p = t. Let p refer to the probability of two heads
when tossing two coins, and carry through the test by tossing two coins, using
(X = f3 .10. (The alternative was obtained by reasoning that tossing two coins
can result in the three outcomes: two heads, two tails, or one head and one tail,
and then assuming each of the three outcomes equally likely.)
57 Show that the SPRT (sequential probability ratio test) of f' = fLo versus f' = fLl for
the mean of the normal distribution with known variance may be performed by
plotting the two lines
and
II
and
3 DEFINITION OF LINEAR MODEL 485
~~----~--------~------~--~x
FIGURE 1 D
for i = 1, 2, ... , n,
and the Ei satisfy
and
So we can write
for i = I, 2, ... , n,
where
and
and (3)
var [Yi ] = (12, i = 1,2, ... , n.
These specifications define a linear statistical model. IIII
Note We can write Eq. (3) as
Yj = /30 + /31 X, + Ei
G[Ei ] = 0 (4)
var [Ei] = (12,
wherei = 1,2, ... , n. /1/1
Note The word" linear" in "linear statistical model " refers to the fact
that the function p(.) is linear in the unknown parameters. In the
simple example We have referred to, p( . ) is defined by p(x) = /30 + /31 x; x
in D, and this is linear in x, but this is not an essential part of the definition
of this linear model. For example, Y = p(x) + E, where p(x) = /30 + /31 ~
is a linear statistical model. IIII
Note In many situations some additional assumptions on the c.dJ.
Fyx( • ) will be made, such as normality. Also, generally the sampling
procedure will be such that the Yi will be either jointly independent or
pairwise uncorrelated. In fact We shall discuss inference procedures for
two sets of assumptions on the random variables defined in Cases A and B
below. 1/1/
Case A For this caSe We assume that the n random variables are jointly
independent and each Yi is a normal random variable. IIII
Case B For this case We assume only that the Yi are pairwise uncor-
related; that is, cov [Yj, YJ] = 0 for all i :F j = I, 2, ... , n. 1//1
4 POINT ESTIMATION-CASE A
For this case YI , Y2, ... , Yn are independent normal random variables with
means Po + PIXh Po + PI X2, •.• , Po + PIX,. and variances a2 • To find point
estimators, We shall use the method of maxim urn likelihood. The likelihood
function is
(5)
and
2 n n 2 1~ 2
log L(Po , PI' a ) = - -2 log 21t - -2 log a - 2 2 1... (Y t - Po - PtXi) .
a i=l
The partial derivatives of log L(Po, Ph a 2 ) with respect to Po, Ph and a 2 are
obtained and' set equal to O. We let Po, PI' 8 2 denote the solutions of the
resulting three equations. The three equations are given below (with some
minor simplifications):
(6)
The first two equations are called the normal equations for determining Po and
Pl' They are linear in Po and PI and are readily solved. We obtain
(7)
(8)
(9)
L Yi,
i= 1
"X·
L. , y.
i= ]
J (10)
where in the integral the quantities ~1' ~2' ~3 will be written in terms of Yi and Xi'
This integral is straightforward but tedious to evaluate, and the result is
+ t2
2 L (Xi 1_ x)2
}) x (1 - 2t
3
)-(n-2)/2 for t <to
'3
4 POINT ESTIMATION-CASE A 489
and
-u2 x
cov [Do, Dd = L (Xl - X)
-2
(iii) We recognize that m2(t 3) is the moment generating function
of a chi-square random variable with n - 2 degrees of freedom. Hence
we have
490 UNEAR MODELS x
so we define lJ2 by
A2 n -2 1 f A 2 A
Corollary The UMVUE of each of the parameters Po, PI' and 0'2 is
given by no, n
h and &2, respectively, in Theorem 1. 1III
Corollary The UMVUE of Jl(x) = Po + PIX for any X in the domain
D is flex), where p(x) = no n
+ 1x. (fl(x) is the random variable with
values Jl(x) = Po + PIX.) IIII
Corollary For any two known constants c1 and c2 the UMVUE of
no
c1Po + C2 /31 is CI n
+C 2 l · 1//1
5 CONFIDENCE INTERVALS-CASE A
To obtain a y-Ievel confidence interval on al, we note by Theorem I that
_ {n - 2)&2
U ---al-=---
(i) Z = (no - L
Po)J'L (Xi - x)2n10'2 xf is distributed as a stand-
ard normal random variable.
(U) (n - 2)&210'2 = U is distributed as a chi-square random variable
with n - 2 d.f.
(iii) Z and U are independent.
Hence, by Theorem 10 of Chap. VI
492 LINEAR MODELS x
Mter simplifying We get the following for a l00y percent confidence interval
on Po:
[
p no - t(I+'1)/2(n - 2)&
J n
Lxf
L (Xi -'-X) 2
and the estimated variance of no, which we write as var [no], is given by
,... r8]
var LllO =
~2
0- "
LX;
( -)2 •
n L.. Xi - X
and this is a 1001' percent confidence interval on /31' We note from Theorem 1
that
(12
and that the estimated variance of fJ], which is denoted by var [8]], is given by
a2
var [fJ d = L (Xi -
-)2 .
X
p[fJ 1 - t(1 + y)/i n - 2)J~ar [fJd < /31 < fJ] + t(l +y)/2(n - 2)Jvar [fJ1J] = 1'.
(16)
To obtain a y-ievel confidence interval on J.l(x) for any X in the domain D,
we note that
(i) J.l(x) = /30 + /3]x.
(ii) fi(x) = fJo + fJ 1 (X).
(iii) G[fi(x)] = J.l(x).
(iv) var [fi(x)] = var [8 0 + fJ] x]
= var [fJo] + 2xcov [fJo, fJ]] + x 2 var [fJd
=
L (Xi -
(12
X)2
(L x~ _ 2xx + X2)
n
= (1
2 [1~ + L(x - (Xi
X)2 ]
- X)2 .
494 UNEAR MODELS x
or
6 TESTS OF HYPOTHESES-CASE A
In the linear model there are many tests that could be of interest to an investi-
gator. For example, he may want to test whether the line goes through the
origin, i.e., to test if the intercept is equal to zero, or perhaps test whether the
intercept is positive (or negative). These are indicated by
On the other hand the interest may be in the slope rather than the intercept, and
an investigator could be interested in testing
To test
By comparing this with Eq. (16) we notice that this test is equivalent to the
procedure of setting a 1 - t:J. confidence interval on the parameter PI and rejecting
the hypothesis if and only if the confidence interval does not contain O.
We will now show that this test is a generalized likelihood-ratio test.
Corresponding to the notation in Chap. IX we note that in testing
-*'0: PI = 0 versus -*' 1: PI -::f:. 0
496 LINEAR MODELS x
the parameter spaces 9, 9 0 , and e 1 are as given below, where 0 = (Po, PI' 0'2):
9 = {(Po, Pb ~): - 00 < Po < 00; - 00 < PI < 00; 0'2 > O}
9 0 = {(flo, Ph 0'2): - 00 < Po < 00; PI = 0; 0'2 > O}
9 1 = 9 - eo.
We must determine A, where
sup L(O; Yl' .•• , Yn)
A = (I eeo (7)
sup L(O; Yl' ... , Yn)
(le~
and the values of Po, PI' 0'2 that maximize this for 0 E 9 are the maximum-
likelihood estimates given in Eqs. (7) to (9). Thus we get
1 [1 '"
L(flo, 0'2) = (2n0'2t12 exp - 20'2 '--' (y i-PO)2] .
But this is the likelihood function for a random sample of size n from a normal
distribution with mean Po and variance ~. The values of Po and (12 that
maximize the likelihood function are the maximum-likelihood estimates
p~ = 57
and
Thus
We obtain
,- ( -&2 )n12
A - 2
0'*
for the generalized likelihood-ratio. Instead of A we will examine the quantity
(n - 2)(A -21n - 1), which is a monotonic function of A and hence will give an
equivalent test function. We get
L(Yi - y)2 - L(Yi - Po - Pt XJ2
L (Yi - Po - Pt X i)2
Replace Po with Po = y - PtX in the numerator, and get
A-21n _ 1 = L (Yi - y)2 - L [(Yi - y) - Pt(x i - x)F.
L (Yi - Po - Pt X j)2
Hence,
2
( _ 2)(A -21n _ 1) = PI L (Xi - X)2 = PI L (Xi -
x)2/O'
n fl2 &2/0'2'
which is the ratio of the values of two independent chi-square random variables
(under .J'f0: Pt = 0) divided by their respective degrees of freedom, which are
1 for the numerator and n - 2 for the denominator. Thus en - 2)(A -21n - 1)
has an F distribution with 1 and n - 2 degrees of freedom under .J'f 0 . The
generalized likelihood-ratio test says to reject .J'f 0 if and only if A < AO, or if
and only if
en - 2)(A- 2In - 1) > en - 2)(Ao2In - 1) = A~ (say),
or if and only if
[PI L (Xi - x)2]/fl2 > A~,
where A~ is chosen for a desirable size of Type I error.
Note that (n - 2)(A -21n - 1) is the square of
Jvar [fiIl'
and recall that the square of a Student's t-distributed random variable with n - 2
degrees of freedom has an F distribution with 1 and n - 2 degrees of freedom.
Thus we have verified that if the confidence-interval statement in Eq. (16) is
used to test .J'f0: Pt = 0 versus .J'ft : Pt =P 0, it is a generalized likelihood-ratio
test.
We will generalize this result slightly in the following theorem.
498 UNEAR MODELS x
7 POINT ESTIMATION-CASE B
For this case Y1 , Y2 , ••. , Yn are pairwise uncorrelated random variables with
means Po + PI Xt, Po + PI X2, ... , Po + PtXn and variances a2. Since the joint
density of the Yj is not specified, maximum-likelihood estimators of Po, Pb and
(J'2 cannot be obtained. In models when the joint density of the observable
random variables is not given, a method of estimation called least-squares can
be utilized.
and clearly these are the same values that maximize the likelihood function in
Eq. (5). Hence We have the following theorem.
fi - L (Yi - Y)(Xj - x)
(19)
1- ~ ( -)2 '
L. Xi - X
1111
For Case A the maximum-likelihood estimators of Po, P1' and (J2 had some
desirable optimum properties. The first corollary of Theorem 2 states that
fio and fi 1are uniformly minimum-variance unbiased estimators. That is, in the
class of all unbiased estimators of Po and Pb the estimators fio and fi1 in Eq. (13)
have uniformly minimum variance. No such desirable property as this is
enjoyed by least-squares estimators for Case B. For Case A the assumptions
are much stronger than for Case B, where the distribution of the random
variables Yj is assumed to be unknown; so We should not expect as strong an
optimality in the estimators for Case B.
For Case B, we shall restrict our class of estimating functions and deter-
mine if the least-squares estimators have any optimal properties in the restricted
class. Since C[Yd = Po + P1X" We see that Po (and P1) can be given by the
expected value of linear functions of the Y i • Within this class of linear functions
We will define minimum-variance unbiased estimators.
It should be noted that there are two restrictions on the estimating func-
tions before the property of minimum variance is considered. First, the class of
estimating functions is restricted to linear functions of the Yi . Second, in
the class of linear functions of the Yi only unbiased estimators are considered.
Finally, then, consideration is given to finding a minimum-variance estimator in
the class of estimating functions that are linear and unbiased.
We will now prove an important theorem that gives optimum properties
for the point estimators of /30 and /31 derived by the method of least squares for
Case B. This theorem is often referred to as the Gauss-Markov theorem.
Theorem 6 Consider the linear model given in Definition 1, and let the
assumptions for Case B hold. Then the least-squares estimators for /31
and /30 given in Eq. (19) are the respective best linear unbiased estimators
for /31 and /30 .
PROOF We shall demonstrate the proof for /30; the proof for /31
is similar. Since We are restricting the class of estimators to be linear, We
haVe :9 0 = I aj Yj • We must determine the constant aj such that:
and (20)
Now
L =L aJ - Al (L aJ - 1) - A2 L aJ x j '
Taking derivatives, one finds
8L
-
OAI
= - L aJ + 1 = 0,
8L
OA2 = -LajXj=O.
2 L x j aJ = Al L x j + A2 L xJ,
or since I ajXj = 0, this becomes
A _ - 2 I xtln _ - 2x
2 - '\' 2 -2 - '\' ( 2
'-' Xi - nx '-' Xi - Xl
and
1
11.1 -
_ 2
'\'
L xf/n 2 •
'-' (XI - x)
Substituting Al and A2 into the tth equation in (21) and solving for at
gives
SOl LINEAR MODELS x
PROBLEMS
1 Assume that the data below satisfy the simple linear model given in Definition 1
for Case A.
y: -6.1 -0.5 7.2 6.9 -0.2 -2.1 -3.9 3.8
x: -2.0 0.6 1.4 1.3 0.0 -1.6 -1.7 0.7
Find the maximum-likelihood estimates of /30, /31, and a 2 ,
2 In Prob. 1 find the UMVUE of /30 + 3/31'
3 In Prob. I find a 95 percent confidence interval on /30; on /31; on a 2 •
4 In Prob. I find a 90 percent confidence interval on p.(x) for x = -1.0.
5 In the simple linear model for Case A find the maximum-likelihood estimator of 8,
where () = /30 + 3/31 + 2a 2 •
6 In Prob. 5 find the UMVUE of ().
7 In the simple linear model for Case A, show that p proportion of the distribution
of Yat x = Xo is below gp, where gp /30 + /31XO + Zp a and Zp is given by <I>(zp) = p.
8 In Prob. 7 find the UMVUE of gp •
9 Use the data in Prob. I to evaluate the UMVUE of gl' in Prob. 7.
10 The hardness Y of the shells of eggs laid by a certain breed of chickens was as-
sumed to be roughly linearly related to the amount x of a certain food supplement
put into the diet of the chickens. The model was assumed to be a simple linear
model for Case A. Data were collected and are given below:
y,: .70 .98 1.16 1.75 .76 .82 .95 1.24 1.75 1.95
Xi: .12 .21 .34 .61 .13 .17 .21 .34 .62 .71
Test the hypothesis that /31 1.00 versus the hypothesis /31 :t 1.00. Use a Type I
error probability of 5 percent.
11 In Prob. 10 test the hypothesis /31 > I versus the hypothesis /31 < I.
12 In Prob. 10 test the hypothesis p.(.50) > 1.5 versus the hypothesis p.(.50) 1.5.
Use a Type I error probability of 10 percent.
13 In Prob. 10 compute a 90 percent confidence interval on 2a.
14 In the simple linear model for Case A find the UMVUE of /3da 2 •
15 Consider the simple linear model given in Definition I except var [Y,] = a/ 1 a l ,
where at, i = I, 2, ... ., n., are known positive numbers. Find the maximum-
likelihood estimators of /30 and /31'
PROBLEMS 503
16 What are the conditions on the Xi in the simple linear model for Case A so that
.fio and .fi 1 are independent?
17 In the simple linear model for Case A show that Yand .fit are uncorrelated. Are
they independent?
18 Prove Theorem 4.
19 In Theorem 6 give the proof for the best linear unbiased estimator of f31.
20 For the simple linear model for Case B prove that the best (minimum-variance)
linear unbiased estimator of f30 + f31 is 130 + 131, where .fio and .fit are the least-
squares estimators of fJo and fJh respectively.
21 Extend Prob. 20 to cofJo + CtfJh where Co and Cl are given constants.
XI
NONPARAMETRIC METHODS
percentage, then the experimenter will probably not be satisfied. In cases where
it is known that the conventional methods based on the assumption of a normal
density are not applicable, an alternative method is desired. If the basic distri-
bution is known (but is not necessarily normal), one may be able to derive exact
(or sufficiently accurate) tests of hypotheses and confidence intervals based on
that distribution. In many cases an experimenter does not know the form of the
basic distribution and needs statistical techniques which are applicable regardless
of the form of the density. These techniques are called nonparametric or
distribution-free methods.
The term "nonparametric" arises from considerations of testing hypoth-
eSeS (Chap. IX). In forming the generalized likelihood-ratio, for example, one
deals with a parameter space which defines a family of distributions as the para-
meters in the functional form of the distribution vary over the parameter space.
The methods to be developed in this chapter make no use of functional forms
or parameters of such forms. They apply to very wide families of distributions
rather than only to families specified by a particular functional form. The term
" distribution-free" is also often used to indicate similarly that the methods do
not depend on the functional form of distribution functions.
The nonparametric methods that will be considered will, for the most part,
be based on the order statistics. Also, although the methods to be presented are
applicable to both continuous and discrete random variables, We shall direct our
attention almost entirely to the continuous case.
Section 2 will be devoted to considerations of statistical inferences that
concern the cumulative distribution function of the population to be sampled.
The sample cumulative distribution function will be used in three types of
inference, namely, point estimation, interval estimation, and testing. Popula-
tion quantiles have been defined for any distribution function regardless of the
form of that distribution. Section 3 deals with distribution-free statistical
methods of making inferences regarding population quantiles. Section 4 studies
an important concept, that of tolerance limits. The similarities and differences
of tolerance limits and confidence limits are noted.
In Sec. 5 We return to an important problem in the application of the
theory of statistics. It is the problem of testing the homogeneity of two popula-
tions. This problem was first mentioned in Subsec. 4.3 of Chap. IX when we
tested the equality of the means of two normal populations. I t was considered
again in Subsec. 5.3 of Chap. IX when We tested the equality of two multi-
nomial populations. We indicated there that the derived test using a chi-square-
type statistic couId be used to test the equality of two arbitrary populations, and
so we had really anticipated this chapter inasmuch as We derived a distribution-
free test. Other distribution-free tests of the homogeneity of two populations
506 NONPARAMETRlC METHODS xi
will be presented in Sec. 5. Included will be the sign test, the run test, the
median test, and the rank-sum test.
In this chapter we present only a very brief introduction to nonparametric
statistical methods. This chapter is similar to the last inasmuch as it includes
use of the three basic kinds of inference that Were the focus of our attention in
Chaps. VII to IX. We shall see that much of the required distributional theory
is elementary, seldom using anything more complicated than the basic principles
of probability that Were considered in Chap. I and the binomial distribution.
2 INFERENCES CONCERNING A
CUMULATIVE DISTRIBUTION FUNCTION
where Xl, ... , Xn is a random sample from some c.d.f. F(·). According to
Theorem 17 of Chap. VI,
where Fn( . ) is the sample c.d.f. corresponding to c.d.f. F( .). From Eq. (2),
we see that
S[Fn(x)] = ±~ (n)
k=O n k
[F(x)t[1 - F(x)]n-k = F(x) (3)
and similarly
1
var [Fn(x)] = - F(x)[1 - F(x)]. (4)
n
2 INFERENCES CONCERNING A CUMULATIVE DISTRIBUTION FUNCTION 507
Equation (5), known as the Glivenko-Cantelli theorem, states that with prob-
ability one the convergence of Fn(x) to F(x) is uniform in x. We can define
Dn = sup IFix) - F(x)l. (6)
-ro<x<ro
Dn is a random quantity that measures how far F n(') deviates from F(')'
Equation (5) states that P[lim Dn = 0] = 1; so, in particular, the c.d.f. of Dn ,
say FDn( • ), converges to the discrete c.d.f. that has all its mass at O. In the next
subsection we will consider the limiting distribution of J~ Dn. Equation (5)
tells us that the estimating function Fn(x) of the c.d.f. F(x) converges to F(x)
uniformly for all x with probability one.
Instead of a point estimate of F(x) = P[X ~ x], one might be interested
in a point estimate of F(y) - F(x) = P[x < X ~ y] for fixed x < y. The follow-
ing remark is useful in showing that Fn(Y) - Fix) is an unbiased mean-squared-
error consistent estimator of F(y) - F(x).
Remark
.1
cov [Fn(x), Fn(Y)] = - F(x)[1 - F(y)] for y > x. (7)
n
PROOF
508 NONPAR.AMETRIC METHODS
1
= ;;{8[I(-oo,x](X 1)I( oo,yiXl)]
var [FiY) - Fn(x)] = var [Fn(Y)] - 2 cov [Fix), Fn(Y)] + var [Fn(x)]
1
=- [F(y) - F(x)][l - F(y) + F(x)];
n
1 n 1
var [ - L
n i= 1
]
IB(Xj) = - P[X E B](l - P[X E BD,
n
say. IIII
The c.d.f. given in Eq. (8) does not depend on the c.d.f. from which the
sample was drawn (other than that it be continuous); that is, the limiting
distribution of J~ Dn is distribution-free. This fact allows Dn to be broadly
used as a test statistic for goodness of fit. For instance, suppose one wishes to
test that the distribution that is being sampled from is some specified continuous
distribution; that is, test .1l' 0: Xi '" F o( .), where F o( .) is some completely
specified continuous c.d.f. If.1l' 0 is true,
Kn = In(XI , •.• , Xn) = In sup
-oo<x<oo
1Fn(x) - F o(x) 1 (9)
t
1 ------
4:44 P.M.
~--~~----~----~----~~----~----~x
8 A.M. 12 noon
o 480 720 1440 min.
FIGURE 1
It follows that
but
P[ In sup IFn(x) -
x
F(x) I < ky] = P[ sup IF,,(x) -
x
F(x) I < ky/Jn]
noting that
k
sup IF,,(x) - F(x) I < JY~
x n
if and only if
for all x. Using the fact that 0 < F(x) :::; 1, We have
that is, the band with lower boundary defined by L(x) = max [0, F,,(x) - ky/ J~]
and upper boundary defined by U(x) = min [F,,(x) + ky/ In,, 1] is an approxi-
mate 100y percent confidence band for the c.d.f. F( . ), where the meaning of the
confidence band is given in Eq. (10).
512 NONPARAMETRIC METHODS XI
and so,
Thus,
...
P[F(Yj ) < u] = fo fz(z) dz
=
-----
B(j,
1
j + 1)
n -
f u
0 z
.
J 1(1 - z)n- j+ 1 1 dz
= IB.,(j, n - j + 1),
called the incomplete beta function, which is extensively tabulated. Hence,
and then (Yj , Yk ) is a 1001' percent confidence interval for q . Of course, for e
arbitrary I' there will not exist a j and k so that the confidence coefficient is
exactly y.
The confidence coefficient can be obtained another way.
But
P[Yj :::;; eq] = P[jth order statistic < eq]
= P[j or more observations < e
q]
n
= I P[exactly i observations < eq]
i=j
hence,
Note that a table of the binomial distribution can now be used to evaluate the
confidence coefficient.
514 NONPARAMETRIC METHODS XI
P[Y 2 ~ ~t < Y9 ]
(n) (1)2:
= ~8 i n
= ~8 (10)
i
(1)2: 10
= .9784. IIII
We have presented one way, using order statistics, of obtaining point esti-
mates or confidence-interval estimates for a quantile.
Besides being extremely general in that the method requires few assump-
tions about the form of the distribution function, the method is extraordinarily
simple. No complex analysis or distribution theory was needed; the simple
binomial distribution provided the necessary equipment to determine the con-
fidence coefficient. The only inconvenience was the paucity of confidence levels
that could be attained.
e
For example, suppose q = t so that q = e t = median; then a possible test of
.1l'o:e t =e versus .1l'l:e t =l=e is to accept.1l'o if and only if jZ-npj =
1Z - nl2j < c, where c is a constant determined by
P[ IZ - nl2l < c] = 1 - a,
so c can be determined from a binomial table. (For small sample sizes, not
many a's are possible, unless randomized tests are used.) The power function
of such a test can be readily obtained since the distribution of Z is still binomial
even when the null hypothesis is false; Z has the binomial distribution with
parameters nand p = P[X > e]. Such a power function could be sketched as a
function of p.
Note also that the sign test can be used to test one-sided hypotheses. For
e e
instance, in testing .1l' 0: q ~ e versus .1l'1: q > e, the sign test says to reject
.1l' 0 if and only if Z, defined as above, is large. Again the power function can
be easily obtained.
4 TOLERANCE LIMITS
Remark Note that the random quantity F(L2) - F(Ld represents the
area under f( . ) between Ll and L2 . / // /
Make the transformation Z = F(Yk ) - F(Yj ) and Y = F(Y), find the joint distri-
bution of Yand Z, and then integrate out Y to get the marginal distribution of Z.
The following obtains:
I' ( _ n1 k- 1- j( n- k +j
(12)
JZ z) - (k _ 1 _ j)!(n _ k + j)! Z 1 - z) I(O,l)(Z),
= 1- ±(~)(.75y(.25)5-i
1=4 I
= .3672. /III
p
1)2"'-2(1 - z) dz = 1 - npn-l + (n - l)pn.
5.1 Introduction
In this section various tests of the equality of two populations will be studied.
As we mentioned in Sec. 1 above, we first studied the equality of two populations
when we tested that the means from two normal populations Were equal in
Subsec. 4.3 of Chap. IX. Then again in Subsec. 5.3 of Chap. IX, we gave a test
of homogeneity of two populations. A great many nonparametric methods
have been developed for testing whether two populations have the same distribu-
tion. We shall consider only four of them; a fifth will be briefly mentioned at
the end of this subsection.
The problem that we propose to consider is the following: Let Xl, ... , Xm
denote a random sample of size m from c.d.f. F x( . ) with a corresponding density
function fx( .), and let Y1 , ••• , Y,. denote a random sample of size n from
c.d.f. F y(') with a corresponding density function fy( '). (Note that We are
departing from our usual convention of using Y's to represent the order statistics
corresponding to the X's.) Further, assume that the observations from Fx( . )
are independent of the observations from F y( '). Test.1l' 0: F x(z) = F y(z) for
all z versus .1l'1: F x(Z) =1= F y(z) for at least one value of z. In Sec. 2 above We
pointed out that the sample c.d.f. can be used to estimate the population c.dJ.
In the case that .1l' 0 is true, that is, F x(z) = F y(z), We have two independent
estimators of the common population c.d.f., one using the sample c.d.f. of the
X's and the other using the sample c.d.f. of the Y's. Intuitively, then, one might
consider using the closeness of the two sample c.d.f.'s to each other as a test
criterion. Although we will not study it, a test, called the two-sample
Kolmogorov-Smirnov test, has been devised that USes such a criterion.
We will assume throughout that the random variables under consideration
are continuous and merely point out at this time that the methods to be presented
can be extended to include discrete random variables as well. In our pre-
sentation, we will consider testing two-sided hypotheses and will not consider
one-sided hypotheses, although the theory works equally well for one-sided
hypotheses.
5 EQUALITY OF TWO DISTRIBUTIONS 519
n
then Zi has a Bernoulli distribution, and consequently Sn = L Zi has a binomial
i= 1
distribution with parameters nand p = P[Xi > Yi ]. If -*'0 is true, p = i, and
&[Sn] = n12. If the alternative hypothesis is two-sided so that p = P[Xi > Yd
can be either larger or smaller than i, then a possible test criterion is to accept
-*'0 if Sn is close to n12, that is, accept -*'0 if ISn - nl21 :s; k, where k is deter-
mined by fixing the size of the test. k is easily determined from a binomial
table, and we have a very simple test of the equality of the two populations.
One can See that avoidance of the assumption that Xi and Y, are inde-
pendent is desirable. For example, Xi might represent an observation on the
ith entity before some" treatment" and Yi the observation on the same entity
after" treatment." In such a case one is not likely to have independence of
Xi and Yj since they are observations taken on the same entity, yet one can
sometimes test that there is no " treatment" effect by testing that the" before "
and" after" popUlations are the same.
then order (in ascending order of magnitude) the combined sample. For
example, if m = 4,and n = 5, one might obtain
y x x y x y y y x. (14)
A run is a sequence of letters of the same kind bounded by letters of another
kind except for the first and last position. Thus, in Eq. (14) the ordering
starts with a run of one y value, then follows a run of two x values, then a run
of one y value, and so on; six runs are exhibited in Eq. (14). It is apparent that
if the two samples are from the same population, the x's and y's will ordinarily
be well mixed, and the total number of runs will be large. If the two popula-.
tions are widely separated so that their range of values does not overlap, then
the number of runs will be only two, and, in general, differences between the
two populations will tend to reduce the number of runs. Thus the two popula-
tions may have the same mean or median, but if the x population is concentrated
while the y population is dispersed, there will be a tendency to have a long y
run on each end of the combined sample, and there will thus be a tendency
to reduce the number of runs. A test then is performed by observing the total
number of runs, say Z, in the combined sample and rejecting .1P 0 if Z is less
than or equal to some specified number Zo. Our task now is to determine the
distribution of Z under .1P 0 in order that for a given test size We may specify Zo.
If .1P 0 is true, it can be argued that the possible arrangements of the
m x values and n y values are equally likely. It is clear that there are exactly
(m ; n) such arrangements. To find P[Z = z], it is necessary now to count
all arrangements with exactly z runs. Suppose z is even, say 2k; then there
must be k runs of x values and k runs of y values. To get k runs of x values,
the m x's must be divided into k groups. We can form these k groups, or runs,
by inserting $k - 1$ dividers into the $m - 1$ spaces between the $m$ x values with no
more than one divider per space. We can place the $k - 1$ dividers into the
$m - 1$ spaces in $\binom{m-1}{k-1}$ ways. Similarly, we can construct the $k$ runs of
y values in $\binom{n-1}{k-1}$ ways, and, since such an arrangement can begin with either an x run or a y run,

$$P[Z = z] = P[Z = 2k] = \frac{2\binom{m-1}{k-1}\binom{n-1}{k-1}}{\binom{m+n}{m}}. \qquad (15)$$

If $z$ is odd, say $z = 2k + 1$, then there are either $k + 1$ runs of x values and $k$ runs of y values or $k$ runs of x values and $k + 1$ runs of y values, and a similar argument gives

$$P[Z = z] = P[Z = 2k + 1] = \frac{\binom{m-1}{k}\binom{n-1}{k-1} + \binom{m-1}{k-1}\binom{n-1}{k}}{\binom{m+n}{m}}. \qquad (16)$$
To test $\mathscr{H}_0$ with size of Type I error equal to $\alpha$, one finds the integer $z_0$ so that
(as nearly as possible)

$$\sum_{z=2}^{z_0} P[Z = z] = \alpha. \qquad (17)$$
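The exact null distribution of $Z$ can also be tabulated by brute force, since under $\mathscr{H}_0$ all $\binom{m+n}{m}$ arrangements are equally likely. The Python sketch below is an illustration, not the book's method; the function names and the choice $\alpha = .05$ are assumptions made here. It enumerates the arrangements, tabulates $P[Z = z]$, and then accumulates probabilities as in Eq. (17) to find $z_0$.

```python
from itertools import combinations
from math import comb

def runs_null_distribution(m, n):
    """Exact P[Z = z] under H0 by enumerating all C(m+n, m) equally likely arrangements."""
    total = comb(m + n, m)
    counts = {}
    for x_positions in combinations(range(m + n), m):
        seq = ['y'] * (m + n)
        for i in x_positions:
            seq[i] = 'x'
        runs = 1 + sum(seq[i] != seq[i - 1] for i in range(1, m + n))
        counts[runs] = counts.get(runs, 0) + 1
    return {z: counts[z] / total for z in sorted(counts)}

def critical_z0(m, n, alpha=0.05):
    """Largest z0 with P[Z <= z0] <= alpha; reject H0 when Z <= z0."""
    dist = runs_null_distribution(m, n)
    cum, z0 = 0.0, None
    for z in sorted(dist):
        if cum + dist[z] <= alpha:
            cum += dist[z]
            z0 = z
        else:
            break
    return z0, cum

print(runs_null_distribution(4, 5))     # can be checked against Eqs. (15) and (16)
print(critical_z0(4, 5, alpha=0.05))    # z0 and the attained size
```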
there are exactly $(m + n)/2$ of the observations (combined x's and y's) greater
than the median of the combined sample. (Since we have an even number of
continuous random variables, no two are equal, and the median is midway
between the middle two.) It can be easily argued that

$$P[\text{exactly } u \text{ of the x's exceed the combined sample median}] = \frac{\binom{m}{u}\binom{n}{(m+n)/2 - u}}{\binom{m+n}{(m+n)/2}}$$

for $m + n$ even and $\mathscr{H}_0$ true. A similar expression obtains for $m + n$ odd.
Such a distribution can be used to find a constant $k$ such that
$$U = \sum_{i=1}^{m} (r_i - i) = \sum_{i=1}^{m} r_i - \sum_{i=1}^{m} i = T_x - \frac{m(m+1)}{2}, \qquad (21)$$

or $T_x = U + m(m+1)/2$. To find the first two moments of $T_x$, we find the first two moments of $U$; this gives

$$\mathscr{E}[T_x] = \frac{m(m + n + 1)}{2}$$

and

$$\operatorname{var}[T_x] = \frac{mn(m + n + 1)}{12}. \qquad (23)$$
that is,

Reject $\mathscr{H}_0$ if and only if $|T_x - \mathscr{E}[T_x]| \ge k$,

where $k$ is determined by fixing the size of the test and using the asymptotic
normal distribution of $T_x$.
x x x y y, x x y x y, x x y y x, x y x x y, x y x y x,
x y y x x, y x x x y, y x x y x, y x y x x, y y x x x.
The corresponding $T_x$ values are, respectively, 6, 7, 8, 8, 9, 10, 9, 10, 11, 12;
so

$$P[T_x = 6] = P[T_x = 7] = \frac{1}{10}, \text{ and so forth.}$$
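A small Python sketch (an illustration made here, not part of the text) reproduces the enumeration above for $m = 3$ x's and $n = 2$ y's and the resulting exact distribution of $T_x$.

```python
from itertools import combinations

m, n = 3, 2                      # three x's and two y's, as in the listing above
# T_x is the sum of the ranks occupied by the x's among the m + n positions
values = [sum(pos) for pos in combinations(range(1, m + n + 1), m)]
print(sorted(values))            # [6, 7, 8, 8, 9, 9, 10, 10, 11, 12]
dist = {t: values.count(t) / len(values) for t in sorted(set(values))}
print(dist)                      # P[T_x = 6] = P[T_x = 7] = 1/10, P[T_x = 8] = 2/10, ...
```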
PROBLEMS
1  Show that $T = \dfrac{1}{n}\sum_{i=1}^{n} I_B(X_i)$ is an unbiased estimator of $P[X \in B]$. Find $\operatorname{var}[T]$, and show that $T$ is a mean-squared-error consistent estimator of $P[X \in B]$.
2  Define $F_n(B_j) = \dfrac{1}{n}\sum_{i=1}^{n} I_{B_j}(X_i)$ for $j = 1, 2$. Find $\operatorname{cov}[F_n(B_1), F_n(B_2)]$.
3  Let $Y_1, \ldots, Y_n$ be the order statistics corresponding to a random sample of size $n$ from a continuous c.d.f. $F(\cdot)$.
(a) Find the density of $F(Y_j)$.
(b) Find the joint density of $F(Y_i)$ and $F(Y_j)$.
(c) Find the density of $[F(Y_n) - F(Y_2)]/[F(Y_n) - F(Y_1)]$.
4  Let $X_1, \ldots, X_n$ be independent and identically distributed random variables having common continuous c.d.f. $F(\cdot)$. Let $Y_1 < \cdots < Y_n$ be the corresponding order statistics, and define $F_n(\cdot)$ to be the sample c.d.f. Set $D_n = \sup_{-\infty < x < \infty} |F_n(x) - F(x)|$.
5  Show that the expected value of the larger of a random sample of two observations from a normal population with mean 0 and unit variance is $1/\sqrt{\pi}$, and hence that for the general normal population the expected value is $\mu + \sigma/\sqrt{\pi}$.
6  If $(X, Y)$ is an observation from a bivariate normal population with means 0, unit variances, and correlation $\rho$, show that the expected value of the larger of $X$ and $Y$ is $\sqrt{(1 - \rho)/\pi}$.
7 We have seen that the sample mean for a distribution with infinite variance (such
as the Cauchy distribution) is not necessarily a consistent estimator of the popula-
tion mean. Is the sample median a consistent estimator of the population median?
8  Construct an (approximate) 90 percent confidence band for the data of Example 1.
Does your band include the appropriate uniform distribution?
9  Let $Y_1 < \cdots < Y_5$ be the order statistics corresponding to a random sample from some continuous c.d.f. Compute $P[Y_1 < \xi_{.50} < Y_5]$ and $P[Y_2 < \xi_{.50} < Y_4]$. Compute $P[Y_1 < \xi_{.20} < Y_2]$. Compute $P[Y_3 < \xi_{.75} < Y_5]$.
10  Let $Y_1$ and $Y_n$ be the first and last order statistics of a random sample of size $n$ from some continuous c.d.f. $F(\cdot)$. Find the smallest value of $n$ such that $P[F(Y_n) - F(Y_1) > .75] > .90$.
11 Test as many ways as you know how at the 5 percent level that the following two
samples came from the same population:
12  Let $X_1, \ldots, X_5$ denote a random sample of size 5 from the density $f(x; \theta) = I_{(\theta - \frac{1}{2},\, \theta + \frac{1}{2})}(x)$. Consider estimating $\theta$.
(a) Determine the confidence coefficient of the confidence interval $(Y_1, Y_5)$.
(b) Find a confidence interval for $\theta$ that has the same confidence coefficient as in part (a) using the pivotal quantity $(Y_1 + Y_5)/2 - \theta$.
(c) Compare the expected lengths of the confidence intervals of parts (a) and (b).
13  Find $\operatorname{var}[U]$ when $F_X(\cdot) \equiv F_Y(\cdot)$. See Eq. (20).
14  Equation (21) shows that $U$ and $T_x$ are linearly related. Find the exact distribution of $U$ or $T_x$ when $\mathscr{H}_0$ is true for small sample sizes. For example, take $m = 1$, $n = 2$; $m = 1$, $n = 3$; $m = 2$, $n = 1$; $m = 3$, $n = 1$; and $m = n = 2$.
15  We saw that $\mathscr{E}[U] = mnp$. Is $U/mn$ an unbiased estimator of $p = P[X_i > Y_j]$ whether or not $\mathscr{H}_0$ is true? Is $U/mn$ a consistent estimator of $p$?
16  A common measure of association for random variables $X$ and $Y$ is the rank correlation, or Spearman's correlation. The $X$ values are ranked, and the observations are replaced by their ranks; similarly the $Y$ observations are replaced by their ranks. For example, for a sample of size 5 the observations are replaced by

r(x):  3  1  5  2  4
r(y):  2  1  5  3  4

Let $r(X_i)$ denote the rank of $X_i$ and $r(Y_i)$ the rank of $Y_i$. Using these paired ranks, the ordinary sample correlation is computed:

$$\text{Spearman's correlation} = S = \frac{\sum [r(X_i) - \bar r(X)][r(Y_i) - \bar r(Y)]}{\sqrt{\sum [r(X_i) - \bar r(X)]^2 \sum [r(Y_i) - \bar r(Y)]^2}},$$
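A minimal Python sketch (the function name is an arbitrary choice made here, not from the text) computes $S$ from the paired ranks displayed above.

```python
def spearman(rank_x, rank_y):
    """Ordinary sample correlation computed on the paired ranks."""
    n = len(rank_x)
    mx = sum(rank_x) / n
    my = sum(rank_y) / n
    num = sum((rx - mx) * (ry - my) for rx, ry in zip(rank_x, rank_y))
    den = (sum((rx - mx) ** 2 for rx in rank_x)
           * sum((ry - my) ** 2 for ry in rank_y)) ** 0.5
    return num / den

print(spearman([3, 1, 5, 2, 4], [2, 1, 5, 3, 4]))   # ranks from the example: 0.9
```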
1 INTRODUCTION
The purpose of this appendix is to provide the reader with a ready reference to some
mathematical results that are used in the book. This appendix is divided into two
main sections: The first, Sec. 2 below, gives results that are, for the most part, com-
binatorial in nature, and the last gives results from calculus. No attempt is made to
prove these results, although sometimes a method of proof is indicated.
2 NONCALCULUS
summation sign. The letter $i$ is called the summation index. The term following $\sum$ is
called the summand. The "$i = 3$" below the $\sum$ indicates that the first term of the sum is
obtained by putting $i = 3$ in the summand. The "$7$" above the $\sum$ indicates that the
final term of the sum is obtained by putting $i = 7$ in the summand. The other terms
of the sum are obtained by giving $i$ the integral values between the limits 3 and 7. Thus

$$\sum_{j=2}^{5} (-1)^j j x^{2j} = 2x^4 - 3x^6 + 4x^8 - 5x^{10}.$$
$$\sum_{i=1}^{n} i = \frac{n(n+1)}{2}. \qquad (1)$$

Equation (1) can be used to derive the following formula for an arithmetic series
or progression:

$$\sum_{j=1}^{n} [a + (j - 1)d] = na + \frac{n(n-1)}{2}\, d. \qquad (5)$$
////

$$n! = \prod_{j=0}^{n-1} (n - j). \qquad (7)$$

$0!$ is defined to be 1.
Remark  $(n)_k = n!/(n - k)!$, and $(n)_n = n!/0! = n!$. The combinatorial symbol

$$\binom{n}{k} = \frac{(n)_k}{k!} = \frac{n!}{(n-k)!\,k!}, \qquad (9)$$

$$\binom{n}{k} = 0 \qquad \text{if } k < 0 \text{ or } k > n. \qquad (10)$$
////

Remark
$$\binom{n}{0} = \binom{n}{n} = 1, \qquad \binom{n}{k} = \binom{n}{n-k},$$
$$\binom{n+1}{k} = \binom{n}{k} + \binom{n}{k-1} \qquad \text{for } n = 1, 2, \ldots \text{ and } k = 0, \pm 1, \pm 2, \ldots. \qquad (11)$$
Equation (11) is a useful recurrence formula that is easily proved.  ////
Both $(n)_k$ and the combinatorial symbol $\binom{n}{k}$ can be generalized from a positive integer
$n$ to any real number $t$ by defining

$$(t)_k = t(t-1)\cdots(t-k+1), \qquad \binom{t}{k} = \frac{t(t-1)\cdots(t-k+1)}{k!} \qquad \text{for } k = 1, 2, \ldots. \qquad (12)$$

Remark
$$\binom{-n}{k} = \frac{(-n)(-n-1)\cdots(-n-k+1)}{k!} = (-1)^k \frac{n(n+1)\cdots(n+k-1)}{k!},$$
or $\binom{-n}{k} = (-1)^k \binom{n+k-1}{k}$.
where $1 - 1/(12n + 1) < r(n) < 1$. To indicate the accuracy of Stirling's formula, 10!
was evaluated using five-place logarithms and Eq. (13), and 3,599,000 was obtained.
The actual value of 10! is 3,628,800. The percent error is less than 1 percent, and the
percent error will decrease as n increases.
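The comparison can be repeated directly. The Python sketch below is an illustration added here; it uses the standard form $n! \approx \sqrt{2\pi n}\, n^n e^{-n}$ of Stirling's formula, which may be written slightly differently in Eq. (13).

```python
import math

n = 10
stirling = math.sqrt(2 * math.pi * n) * n ** n * math.exp(-n)
exact = math.factorial(n)
print(round(stirling))                      # about 3,598,696
print(exact)                                # 3,628,800
print(100 * (exact - stirling) / exact)     # percent error, a bit under 1 percent
```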
$$(a + b)^n = \sum_{j=0}^{n} \binom{n}{j} a^j b^{n-j} \qquad (15)$$

for $n$ a positive integer. The binomial theorem explains why the $\binom{n}{j}$ are sometimes
called binomial coefficients. Four special cases are noted in the following remark.
Remark
$$(1 + t)^n = \sum_{j=0}^{n} \binom{n}{j} t^j, \qquad (16)$$
$$(1 - t)^n = \sum_{j=0}^{n} \binom{n}{j} (-1)^j t^j, \qquad (17)$$
$$2^n = \sum_{j=0}^{n} \binom{n}{j}, \qquad (18)$$
and
$$0 = \sum_{j=0}^{n} (-1)^j \binom{n}{j}. \qquad (19)$$
////
Expanding both sides of

$$(1 + x)^a (1 + x)^b = (1 + x)^{a+b}$$

and then equating coefficients of $x$ to the $n$th power gives

$$\sum_{j=0}^{n} \binom{a}{j} \binom{b}{n-j} = \binom{a+b}{n}. \qquad (20)$$

Similarly, the multinomial theorem states that

$$\left(\sum_{j=1}^{k} a_j\right)^n = \sum \frac{n!}{n_1!\, n_2! \cdots n_k!} \prod_{j=1}^{k} a_j^{n_j}, \qquad (21)$$

where the summation is over all nonnegative integers $n_1, n_2, \ldots, n_k$ which sum to $n$.
A special case is
3 CALCULUS
3.1 Preliminaries
It is assumed that the reader is familiar with the concepts of limits, continuity, differentiation,
integration, and infinite series. A particular limit that is referred to several
times in the book is the limit expression for the number $e$; that is,

$$\lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^n = e. \qquad (24)$$

Equation (24) can be derived by taking logarithms and utilizing l'Hospital's rule,
which is reviewed below. There are a number of variations of Eq. (24); for instance,

$$\lim_{x \to \infty} \left(1 + \frac{1}{x}\right)^x = e \qquad (25)$$

and

$$\lim_{x \to 0} (1 + \lambda x)^{1/x} = e^{\lambda} \qquad \text{for constant } \lambda. \qquad (26)$$
A rule that is often useful in finding limits is the following so-called l'Hospital's
rule: If $f(\cdot)$ and $g(\cdot)$ are functions for which $\lim_{x \to a} f(x) = \lim_{x \to a} g(x) = 0$ and if

$$\lim_{x \to a} \frac{f'(x)}{g'(x)}$$

exists, then

$$\lim_{x \to a} \frac{f(x)}{g(x)} = \lim_{x \to a} \frac{f'(x)}{g'(x)}.$$
EXAMPLE 2  Find $\lim_{x \to 0} [(1/x) \log_e (1 + x)]$. Let $f(x) = \log_e (1 + x)$ and $g(x) = x$;
then $f'(x) = 1/(1 + x)$ and $g'(x) = 1$, so the limit is $\lim_{x \to 0} 1/(1 + x) = 1$.  ////
Another rule that we use in the book is Leibniz' rule for differentiating an integral:
Let

$$I(t) = \int_{g(t)}^{h(t)} f(x; t)\, dx;$$

then

$$\frac{dI}{dt} = \int_{g(t)}^{h(t)} \frac{\partial f}{\partial t}\, dx + f(h(t); t)\frac{dh}{dt} - f(g(t); t)\frac{dg}{dt}. \qquad (27)$$

Several important special cases derive from Leibniz' rule; for example, if the
integrand $f(x; t)$ does not depend on $t$, then

$$\frac{d}{dt}\left[\int_{g(t)}^{h(t)} f(x)\, dx\right] = f(h(t))\frac{dh}{dt} - f(g(t))\frac{dg}{dt}; \qquad (28)$$
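A numerical illustration of Eq. (28) follows; the particular choices $f(x) = e^{-x}$, $g(t) = 0$, and $h(t) = t^2$ are arbitrary assumptions made for this sketch. It compares a finite-difference derivative of the integral with $f(h(t))h'(t) - f(g(t))g'(t)$.

```python
import math

def f(x):
    return math.exp(-x)

def integral(t, steps=20000):
    # midpoint-rule value of the integral of f over (g(t), h(t)) = (0, t**2)
    a, b = 0.0, t ** 2
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

t, eps = 1.3, 1e-3
finite_diff = (integral(t + eps) - integral(t - eps)) / (2 * eps)
leibniz = f(t ** 2) * (2 * t) - f(0.0) * 0.0       # f(h(t)) h'(t) - f(g(t)) g'(t)
print(finite_diff, leibniz)                        # both close to 2 * 1.3 * exp(-1.69)
```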
and $a < c < x$.
$R_n$ is called the remainder. $f(x)$ is assumed to have derivatives of at least order $n + 1$.
If the remainder is not too large, Eq. (30) gives a polynomial (of degree $n$) approximation,
when $R_n$ is dropped, of the function $f(\cdot)$. The infinite series corresponding to
Eq. (30) will converge in some interval if $\lim_{n \to \infty} R_n = 0$ in this interval. Several important
infinite Taylor series, along with their intervals of convergence, are given in the following
examples.
////
EXAMPLE 4  Suppose $f(x) = (1 - x)^t$ and $a = 0$; then $f^{(1)}(x) = -t(1 - x)^{t-1}$,
$f^{(2)}(x) = t(t - 1)(1 - x)^{t-2}, \ldots, f^{(j)}(x) = (-1)^j t(t-1)\cdots(t - j + 1)(1 - x)^{t-j}$,
and hence

$$f(x) = (1 - x)^t = \sum_{j=0}^{\infty} (-1)^j (t)_j \frac{x^j}{j!} = \sum_{j=0}^{\infty} \binom{t}{j} (-x)^j \qquad \text{for } -1 < x < 1. \qquad (32)$$

////
There are several interesting special cases of Eq. (32). $t = -n$ gives

$$(1 - x)^{-n} = \sum_{j=0}^{\infty} \binom{n + j - 1}{j} x^j \qquad \text{for } -1 < x < 1; \qquad (33)$$

in particular, $t = -1$ gives

$$(1 - x)^{-1} = \sum_{j=0}^{\infty} x^j; \qquad (34)$$
and $t = -2$ gives

$$(1 - x)^{-2} = \sum_{j=0}^{\infty} (j + 1) x^j.$$

////
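Equations (33) and (34) are easy to check numerically. The Python sketch below is an illustration added here; the choices of $x$ and $n$ are arbitrary. It compares partial sums of the series with $(1 - x)^{-n}$.

```python
from math import comb

def binomial_series(n, x, terms=200):
    """Partial sum of (1 - x)**(-n) = sum_j C(n + j - 1, j) x**j, valid for |x| < 1."""
    return sum(comb(n + j - 1, j) * x ** j for j in range(terms))

x = 0.3
for n in (1, 2, 5):
    print(n, binomial_series(n, x), (1 - x) ** (-n))   # the two columns agree
```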
The Taylor series for functions of one variable given in Eq. (30) can be generalized
to the Taylor series for functions of several variables. For example, the Taylor series
for $f(x, y)$ about $x = a$ and $y = b$ can be written as

$$f(x, y) = f(a, b) + f_x(a, b)(x - a) + f_y(a, b)(y - b) + \frac{1}{2!}\left[f_{xx}(a, b)(x - a)^2 + 2 f_{xy}(a, b)(x - a)(y - b) + f_{yy}(a, b)(y - b)^2\right] + \cdots,$$
where the subscripts on $f$ denote partial differentiation.

The gamma function, denoted by $\Gamma(\cdot)$, is defined by

$$\Gamma(t) = \int_0^{\infty} x^{t-1} e^{-x}\, dx \qquad \text{for } t > 0. \qquad (37)$$

$\Gamma(t)$ is nothing more than a notation for the definite integral that appears on the right-hand
side of Eq. (37). Integration by parts yields $\Gamma(t + 1) = t\,\Gamma(t)$.
If $n$ is an integer,

$$\Gamma(n + \tfrac{1}{2}) = \frac{1 \cdot 3 \cdot 5 \cdots (2n - 1)}{2^n}\sqrt{\pi}, \qquad (40)$$
and, in particular, $\Gamma(\tfrac{1}{2}) = \sqrt{\pi}$. The beta function, denoted by $B(a, b)$, is defined by

$$B(a, b) = \int_0^1 x^{a-1} (1 - x)^{b-1}\, dx \qquad \text{for } a > 0 \text{ and } b > 0. \qquad (42)$$

Again, $B(a, b)$ is just a notation for the definite integral that appears on the right-hand
side of Eq. (42). A simple variable substitution gives $B(a, b) = B(b, a)$. The beta
function is related to the gamma function according to the following formula:

$$B(a, b) = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a + b)}. \qquad (43)$$
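Both Eq. (40) and Eq. (43) can be checked numerically. The Python sketch below is an illustration added here; it assumes the availability of Python's math.gamma, which is not something the text uses.

```python
import math

def gamma_half_integer(n):
    """1*3*5*...*(2n-1) / 2**n * sqrt(pi), the right-hand side of Eq. (40)."""
    odd_product = 1
    for k in range(1, 2 * n, 2):
        odd_product *= k
    return odd_product / 2 ** n * math.sqrt(math.pi)

def beta(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)   # Eq. (43)

print(math.gamma(3.5), gamma_half_integer(3))          # both equal Gamma(3 + 1/2)
print(beta(2.0, 3.0), 1 / 12)                          # B(2, 3) = 1/12
```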
APPENDIX B
TABULAR SUMMARY OF PARAMETRIC FAMILIES
OF DISTRIBUTIONS
1 INTRODUCTION
The purpose of this appendix is to provide the reader with a convenient reference
to the parametric families of distributions that were introduced in Chap. III. Given
are two tables, one for discrete distributions and the other for continuous distributions.
1 DISCRETE DISTRIBUTIONS

Discrete density function $f(\cdot)$, parameter space, and mean $\mu = \mathscr{E}[X]$:

Discrete uniform:  $f(x) = \dfrac{1}{N}\, I_{\{1,\ldots,N\}}(x)$;  $N = 1, 2, \ldots$;  mean $(N + 1)/2$.

Hypergeometric:  $f(x) = \dfrac{\binom{K}{x}\binom{M-K}{n-x}}{\binom{M}{n}}\, I_{\{0,1,\ldots,n\}}(x)$;  $M = 1, 2, \ldots$; $K = 0, 1, \ldots, M$; $n = 1, 2, \ldots, M$;  mean $n\dfrac{K}{M}$.

Poisson:  $f(x) = \dfrac{e^{-\lambda}\lambda^x}{x!}\, I_{\{0,1,\ldots\}}(x)$;  $\lambda > 0$;  mean $\lambda$.

Geometric:  $f(x) = p q^x\, I_{\{0,1,\ldots\}}(x)$, $q = 1 - p$;  $0 < p < 1$;  mean $q/p$.

Negative binomial:  $f(x) = \binom{r + x - 1}{x} p^r q^x\, I_{\{0,1,\ldots\}}(x)$, $q = 1 - p$;  $0 < p < 1$, $r > 0$;  mean $rq/p$.
Variance $\sigma^2 = \mathscr{E}[(X - \mu)^2]$, moments $\mu_r' = \mathscr{E}[X^r]$ or $\mu_r = \mathscr{E}[(X - \mu)^r]$ and/or cumulants $\kappa_r$, and moment generating function $\mathscr{E}[e^{tX}]$:

Bernoulli:  variance $pq$;  $\mu_r' = p$ for all $r$.

Hypergeometric:  variance $n\dfrac{K}{M}\dfrac{M-K}{M}\dfrac{M-n}{M-1}$;  $\mathscr{E}[X(X-1)\cdots(X-r+1)] = \dfrac{(n)_r (K)_r}{(M)_r}$;  m.g.f. not useful.

Negative binomial:  variance $rq/p^2$;  $\mu_3 = \dfrac{rq(1+q)}{p^3}$;  m.g.f. $\left(\dfrac{p}{1 - q e^t}\right)^r$.
2 CONTINUOUS DISTRIBUTIONS

Cumulative distribution function $F(\cdot)$ or probability density function $f(\cdot)$, parameter space, and mean $\mu = \mathscr{E}[X]$:

Uniform or rectangular:  $f(x) = \dfrac{1}{b - a}\, I_{(a,b]}(x)$;  $-\infty < a < b < \infty$;  mean $(a + b)/2$.

Normal:  $f(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma} \exp[-(x - \mu)^2/2\sigma^2]$;  $-\infty < \mu < \infty$, $\sigma > 0$;  mean $\mu$.

Exponential:  $f(x) = \lambda e^{-\lambda x}\, I_{(0,\infty)}(x)$;  $\lambda > 0$;  mean $1/\lambda$.

Gamma:  $f(x) = \dfrac{\lambda^r}{\Gamma(r)} x^{r-1} e^{-\lambda x}\, I_{(0,\infty)}(x)$;  $\lambda > 0$, $r > 0$;  mean $r/\lambda$.

Beta:  $f(x) = \dfrac{1}{B(a, b)} x^{a-1}(1 - x)^{b-1}\, I_{(0,1)}(x)$;  $a > 0$, $b > 0$;  mean $a/(a + b)$.

Variance, moments, and moment generating function:

Normal:  variance $\sigma^2$;  $\mu_r = 0$ for $r$ odd, $\mu_r = \dfrac{r!\,\sigma^r}{(r/2)!\,2^{r/2}}$ for $r$ even, $\kappa_r = 0$ for $r > 2$;  m.g.f. $\exp[\mu t + \tfrac{1}{2}\sigma^2 t^2]$.

Exponential:  variance $1/\lambda^2$;  $\mu_r' = \dfrac{\Gamma(r + 1)}{\lambda^r}$;  m.g.f. $\dfrac{\lambda}{\lambda - t}$ for $t < \lambda$.

Gamma:  variance $r/\lambda^2$;  $\mu_j' = \dfrac{\Gamma(r + j)}{\lambda^j\,\Gamma(r)}$;  m.g.f. $\left(\dfrac{\lambda}{\lambda - t}\right)^r$ for $t < \lambda$.

Beta:  variance $\dfrac{ab}{(a + b + 1)(a + b)^2}$;  $\mu_r' = \dfrac{B(r + a, b)}{B(a, b)}$;  m.g.f. not useful.

(continued)
Weibull:  $f(x) = a b x^{b-1} \exp(-a x^b)\, I_{(0,\infty)}(x)$;  $a > 0$, $b > 0$;  mean $a^{-1/b}\,\Gamma(1 + b^{-1})$.
Logistic:  variance $\dfrac{\beta^2 \pi^2}{3}$;  m.g.f. $e^{\mu t}\, \pi\beta t \csc(\pi\beta t)$.

Student's $t$ ($k$ degrees of freedom):  variance $\dfrac{k}{k - 2}$ for $k > 2$;  $\mu_r = k^{r/2}\dfrac{B\!\left(\frac{r+1}{2}, \frac{k-r}{2}\right)}{B\!\left(\frac{1}{2}, \frac{k}{2}\right)}$ for $k > r$ and $r$ even;  m.g.f. does not exist.

$F$ ($m$ and $n$ degrees of freedom):  variance $\dfrac{2n^2(m + n - 2)}{m(n - 2)^2(n - 4)}$ for $n > 4$;  $\mu_r' = \left(\dfrac{n}{m}\right)^r \dfrac{\Gamma(m/2 + r)\,\Gamma(n/2 - r)}{\Gamma(m/2)\,\Gamma(n/2)}$ for $r < n/2$;  m.g.f. does not exist.

Chi-square ($k$ degrees of freedom):  variance $2k$;  $\mu_j' = \dfrac{2^j\,\Gamma(k/2 + j)}{\Gamma(k/2)}$;  m.g.f. $\left(\dfrac{1}{1 - 2t}\right)^{k/2}$ for $t < \tfrac{1}{2}$.
APPENDIX C
REFERENCES AND RELATED READING
MATHEMATICS BOOKS
1. PROTTER and MORREY: "Calculus with Analytic Geometry: A Second Course,"
Addison-Wesley Publishing Company, Inc., Reading, Mass., 1971.
2. THOMAS: "Calculus and Analytic Geometry," alternate ed., Addison-Wesley Pub-
lishing Company, Inc., Reading, Mass., 1972.
3. WIDDER: "Advanced Calculus," 2d ed., Prentice-Hall, Inc., Englewood Cliffs,
N.J., 1961.
4. WYLIE: "Advanced Engineering Mathematics," 3d ed., McGraw-Hill Book
Company, New York, 1966.
PROBABILITY BOOKS
5. "Basic Probability Theory," John Wiley & Sons, Inc., New York, 1970.
ASH:
6. DRAKE: "Fundamentals of Applied Probability Theory," McGraw-Hill Book
Company, New York, 1967.
7. DWASS: "Probability: Theory and Applications," W. A. Benjamin, Inc., New York,
1970.
SPECIAL BOOKS
29. DANIEL and WOOD: "Fitting Equations to Data; Computer Analysis of Multi-
factor Data for Scientists and Engineers," Interscience Publishers, a division of
John Wiley & Sons, Inc., New York, 1971.
30. DAVID: "Order Statistics," John Wiley & Sons, Inc., New York, 1970.
31. DRAPER and SMITH: "Applied Regression Analysis," John Wiley & Sons, Inc.,
New York, 1966.
32. GIBBONS: "Nonparametric Statistical Inference," McGraw-Hill Book Company,
New York, 1971.
33. GRAYBILL: "An Introduction to Linear Statistical Models," Vol. 1, McGraw-Hill
Book Company, New York, 1961.
34. JOHNSON and KOTZ:" Discrete Distributions," Houghton Mifflin Company, Boston,
1969.
35. JOHNSON and KOTZ: "Continuous Univariate Distributions-I," Houghton Mifflin
Company, Boston, 1970.
36. JOHNSON and KOTZ: "Continuous Univariate Distributions-2," Houghton Mifflin
Company, Boston, 1970.
37. KEMPTHORNE and FOLKS: "Probability, Statistics, and Data Analysis," The Iowa
State University Press, Ames, 1971.
38. MORRISON: "Multivariate Statistical Methods," McGraw-Hill Book Company,
New York, 1967.
39. RAJ: "Sampling Theory," McGraw-Hill Book Company, New York, 1968.
PAPERS
40. JOINER and ROSENBLATT: "Some Properties of the Range in Samples from Tukey's
Symmetric Lambda Distributions," Journal of the American Statistical Association,
Vol. 66 (1971), pp. 394-399.
41. PITMAN: "The Estimation of the Location and Scale Parameters of a Continuous
Population of Any Given Form," Biometrika, Vol. 30 (1939), pp. 391-421.
42. WOLFOWITZ: "The Minimum Distance Method," Annals of Mathematical Statistics,
Vol. 28(1) (1957), pp. 75-88.
43. ZEHNA: "Invariance of Maximum Likelihood Estimation," Annals of Mathematical
Statistics, Vol. 37 (1966), p. 744.
1 DESCRIPTION OF TABLES
for values of $x$ between 0 and 4 at intervals of .01. For negative values of $x$ one uses
the fact that $\phi(-x) = \phi(x)$.

for values of $x$ between 0 and 3.5 at intervals of .01. For negative values of $x$, one uses
the fact that $\Phi(-x) = 1 - \Phi(x)$.

for $n$, the number of degrees of freedom, equal to 1, 2, ..., 30. For larger values of $n$,
a normal approximation is quite accurate. The quantity $\sqrt{2u} - \sqrt{2n - 1}$ is nearly
normally distributed with mean 0 and unit variance. Thus $u_\alpha$, the $\alpha$th quantile point
of the distribution, may be computed by

$$u_\alpha \approx \tfrac{1}{2}\left(z_\alpha + \sqrt{2n - 1}\right)^2,$$

where $z_\alpha$ is the $\alpha$th quantile point of the cumulative normal distribution. As an illustration,
we may compute the .95 value of $u$ for $n = 30$ degrees of freedom:
$u_{.95} \approx \tfrac{1}{2}(1.645 + \sqrt{59})^2 \approx 43.5$, which may be compared with the tabled value 43.8.
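The approximation is easy to program. The sketch below is an illustration added here; it reproduces the $n = 30$ computation, taking $z_{.95} = 1.645$ as the standard normal .95 quantile.

```python
import math

def chi_square_quantile_approx(z_alpha, n):
    """Approximate chi-square quantile: u = 0.5 * (z_alpha + sqrt(2n - 1))**2."""
    return 0.5 * (z_alpha + math.sqrt(2 * n - 1)) ** 2

print(chi_square_quantile_approx(1.645, 30))   # about 43.5; Table 3 gives 43.8
```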
for selected values of m and n; m is the number of degrees of freedom in the numerator
of F, and n is the number of degrees of freedom in the denominator of F. The table
also provides values corresponding to $G = .10, .05, .025, .01$, and $.005$ because $F_{1-\alpha}$
for $m$ and $n$ degrees of freedom is the reciprocal of $F_\alpha$ for $n$ and $m$ degrees of freedom.
Thus for $G = .05$ with three and six degrees of freedom, one finds

$$F_{.05}(3, 6) = \frac{1}{F_{.95}(6, 3)} = \frac{1}{8.94} = .112.$$

One should interpolate on the reciprocals of $m$ and $n$ as in Table 5 for good accuracy.
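The reciprocal relation can be verified with any F-quantile routine. The sketch below is an illustration added here; it assumes scipy is available, which is not part of the text.

```python
from scipy.stats import f   # assumed available; not something the text uses

# F_{.05}(3, 6) equals 1 / F_{.95}(6, 3)
print(f.ppf(0.05, 3, 6))          # about 0.112
print(1 / f.ppf(0.95, 6, 3))      # same value; F_{.95}(6, 3) is about 8.94
```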
with $n = 1, 2, \ldots, 30, 40, 60, 120, \infty$. Since the density is symmetrical in $t$, it follows
that $F(-t) = 1 - F(t)$. One should not interpolate linearly between degrees of
freedom but on the reciprocal of the degrees of freedom, if good accuracy in the last
digit is desired. As an illustration, we shall compute the .975th quantile point for
40 degrees of freedom. The values for 30 and 60 are 2.042 and 2.000. Using the
reciprocals of $n$, the interpolated value is

$$2.042 - \frac{\frac{1}{30} - \frac{1}{40}}{\frac{1}{30} - \frac{1}{60}}\,(2.042 - 2.000) = 2.021,$$

which is the correct value. Interpolating linearly, one would have obtained 2.028.
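The interpolation is easily mechanized. The sketch below is an illustration added here (the function name is arbitrary); it reproduces the value 2.021 from the two tabled values quoted above.

```python
def interp_reciprocal_df(n, n_lo, t_lo, n_hi, t_hi):
    """Interpolate a t quantile at n using the reciprocals of the degrees of freedom."""
    w = (1 / n_lo - 1 / n) / (1 / n_lo - 1 / n_hi)
    return t_lo - w * (t_lo - t_hi)

print(interp_reciprocal_df(40, 30, 2.042, 60, 2.000))   # 2.021, the correct value
# Linear interpolation in n itself would give 2.042 - (10/30) * 0.042 = 2.028.
```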
x .00 .01 .02 .03 .04 .05 .06 .07 .08 .09
.0 .3989 .3989 .3989 .3988 .3986 .3984 .3982 .3980 .3977 .3973
.1 .3970 .3965 .3961 .3956 .3951 .3945 .3939 .3932 .3925 .3918
.2 .3910 .3902 .3894 .3885 .3876 .3867 .3857 .3847 .3836 .3825
.3 .3814 .3802 .3790 .3778 .3765 .3752 .3739 .3725 .3712 .3697
.4 .3683 .3668 .3653 .3637 .3621 .3605 .3589 .3572 .3555 .3538
.5 .3521 .3503 .3485 .3467 .3448 .3429 .3410 .3391 .3372 .3352
.6 .3332 .3312 .3292 .3271 .3251 .3230 .3209 .3187 .3166 .3144
.7 .3123 .3101 .3079 .3056 .3034 .3011 .2989 .2966 .2943 .2920
.8 .2897 .2874 .2850 .2827 .2803 .2780 .2756 .2732 .2709 .2685
.9 .2661 .2637 .2613 .2589 .2565 .2541 .2516 .2492 .2468 .2444
1.0 .2420 .2396 .2371 .2347 .2323 .2299 .2275 .2251 .2227 .2203
1.1 .2179 .2155 .2131 .2107 .2083 .2059 .2036 .2012 .1989 .1965
1.2 .1942 .1919 .1895 .1872 .1849 .1826 .1804 .1781 .1758 .1736
1.3 .1714 .1691 .1669 .1647 .1626 .1604 .1582 .1561 .1539 .1518
1.4 .1497 .1476 .1456 .1435 .1415 .1394 .1374 .1354 .1334 .1315
1.5 .1295 .1276 .1257 .1238 .1219 .1200 .1182 .1163 .1145 .1127
1.6 .1109 .1092 .1074 .1057 .1040 .1023 .1006 .0989 .0973 .0957
1.7 .0940 .0925 .0909 .0893 .0878 .0863 .0848 .0833 .0818 .0804
1.8 .0790 .0775 .0761 .0748 .0734 .0721 .0707 .0694 .0681 .0669
1.9 .0656 .0644 .0632 .0620 .0608 .0596 .0584 .0573 .0562 .0551
2.0 .0540 .0529 .0519 .0508 .0498 .0488 .0478 .0468 .0459 .0449
2.1 .0440 .0431 .0422 .0413 .0404 .0396 .0387 .0379 .0371 .0363
2.2 .0355 .0347 .0339 .0332 .0325 .0317 .0310 .0303 .0297 .0290
2.3 .0283 .0277 .0270 .0264 .0258 .0252 .0246 .0241 .0235 .0229
2.4 .0224 .0219 .0213 .0208 .0203 .0198 .0194 .0189 .0184 .0180
2.5 .0175 .0171 .0167 .0163 .0158 .0154 .0151 .0147 .0143 .0139
2.6 .0136 .0132 .0129 .0126 .0122 .0119 .0116 .0113 .0110 .0107
2.7 .0104 .0101 .0099 .0096 .0093 .0091 .0088 .0086 .0084 .0081
2.8 .0079 .0077 .0075 .0073 .0071 .0069 .0067 .0065 .0063 .0061
2.9 .0060 .0058 .0056 .0055 .0053 .0051 .0050 .0048 .0047 .0046
3.0 .0044 .0043 .0042 .0040 .0039 .0038 .0037 .0036 .0035 .0034
3.1 .0033 .0032 .0031 .0030 .0029 .0028 .0027 .0026 .0025 .0025
3.2 .0024 .0023 .0022 .0022 .0021 .0020 .0020 .0019 .0018 .0018
3.3 .0017 .0017 .0016 .0016 .0015 .0015 .0014 .0014 .0013 .0013
3.4 .0012 .0012 .0012 .0011 .0011 .0010 .0010 .0010 .0009 .0009
3.5 .0009 .0008 .0008 .0008 .0008 .0007 .0007 .0007 .0007 .0006
3.6 .0006 .0006 .0006 .0005 .0005 .0005 .0005 .0005 .0005 .0004
3.7 .0004 .0004 .0004 .0004 .0004 .0004 .0003 .0003 .0003 .0003
3.8 .0003 .0003 .0003 .0003 .0003 .0002 .0002 .0002 .0002 .0002
3.9 .0002 .0002 .0002 .0002 .0002 .0002 .0002 .0002 .0001 .0001
x .00 .01 .02 .03 .04 .05 .06 .07 .08 .09
.0 .5000 .5040 .5080 .5120 .5160 .5199 .5239 .5279 .5319 .5359
.1 .5398 .5438 .5478 .5517 .5557 .5596 .5636 .5675 .5714 .5753
.2 .5793 .5832 .5871 .5910 .5948 .5987 .6026 .6064 .6103 .6141
.3 .6179 .6217 .6255 .6293 .6331 .6368 .6406 .6443 .6480 .6517
.4 .6554 .6591 .6628 .6664 .6700 .6736 .6772 .6808 .6844 .6879
.5 .6915 .6950 .6985 .7019 .7054 .7088 .7123 .7157 .7190 .7224
.6 .7257 .7291 .7324 .7357 .7389 .7422 .7454 .7486 .7517 .7549
.7 .7580 .7611 .7642 .7673 .7704 .7734 .7764 .7794 .7823 .7852
.8 .7881 .7910 .7939 .7967 .7995 .8023 .8051 .8078 .8106 .8133
.9 .8159 .8186 .8212 .8238 .8264 .8289 .8315 .8340 .8365 .8389
1.0 .8413 .8438 .8461 .8485 .8508 .8531 .8554 .8577 .8599 .8621
1.1 .8643 .8665 .8686 .8708 .8729 .8749 .8770 .8790 .8810 .8830
1.2 .8849 .8869 .8888 .8907 .8925 .8944 .8962 .8980 .8997 .9015
1.3 .9032 .9049 .9066 .9082 .9099 .9115 .9131 .9147 .9162 .9177
1.4 .9192 .9207 .9222 .9236 .9251 .9265 .9279 .9292 .9306 .9319
1.5 .9332 .9345 .9357 .9370 .9382 .9394 .9406 .9418 .9429 .9441
1.6 .9452 .9463 .9474 .9484 .9495 .9505 .9515 .9525 .9535 .9545
1.7 .9554 .9564 .9573 .9582 .9591 .9599 .9608 .9616 .9625 .9633
1.8 .9641 .9649 .9656 .9664 .9671 .9678 .9686 .9693 .9699 .9706
1.9 .9713 .9719 .9726 .9732 .9738 .9744 .9750 .9756 .9761 .9767
2.0 .9772 .9778 .9783 .9788 .9793 .9798 .9803 .9808 .9812 .9817
2.1 .9821 .9826 .9830 .9834 .9838 .9842 .9846 .9850 .9854 .9857
2.2 .9861 .9864 .9868 .9871 .9875 .9878 .9881 .9884 .9887 .9890
2.3 .9893 .9896 .9898 .9901 .9904 .9906 .9909 .9911 .9913 .9916
2.4 .9918 .9920 .9922 .9925 .9927 .9929 .9931 .9932 .9934 .9936
2.5 .9938 .9940 .9941 .9943 .9945 .9946 .9948 .9949 .9951 .9952
2.6 .9953 .9955 .9956 .9957 .9959 .9960 .9961 .9962 .9963 .9964
2.7 .9965 .9966 .9967 .9968 .9969 .9970 .9971 .9972 .9973 .9974
2.8 .9974 .9975 .9976 .9977 .9977 .9978 .9979 .9979 .9980 .9981
2.9 .9981 .9982 .9982 .9983 .9984 .9984 .9985 .9985 .9986 .9986
3.0 .9987 .9987 .9987 .9988 .9988 .9989 .9989 .9989 .9990 .9990
3.1 .9990 .9991 .9991 .9991 .9992 .9992 .9992 .9992 .9993 .9993
3.2 .9993 .9993 .9994 .9994 .9994 .9994 .9994 .9995 .9995 .9995
3.3 .9995 .9995 .9995 .9996 .9996 .9996 .9996 .9996 .9996 .9997
3.4 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9998
x 1.282 1.645 1.960 2.326 2.576 3.090 3.291 3.891 4.417
Φ(x) .90 .95 .975 .99 .995 .999 .9995 .99995 .999995
2[1 - Φ(x)] .20 .10 .05 .02 .01 .002 .001 .0001 .00001
Table 3  CUMULATIVE CHI-SQUARE DISTRIBUTION*

$$F(u) = \int_0^u \frac{1}{2^{n/2}\,\Gamma(n/2)}\, x^{(n-2)/2} e^{-x/2}\, dx$$

n .005 .010 .025 .050 .100 .250 .500 .750 .900 .950 .975 .990 .995
1 .0⁴393 .0³157 .0³982 .0²393 .0158 .102 .455 1.32 2.71 3.84 5.02 6.63 7.88
2 .0100 .0201 .0506 .103 .211 .575 1.39 2.77 4.61 5.99 7.38 9.21 10.6
3 .0717 .115 .216 .352 .584 1.21 2.37 4.11 6.25 7.81 9.35 11.3 12.8
4 .207 .297 .484 .711 1.06 1.92 3.36 5.39 7.78 9.49 11.1 13.3 14.9
5 .412 .554 .831 1.15 1.61 2.67 4.35 6.63 9.24 11.1 12.8 15.1 16.7
6 .676 .872 1.24 1.64 2.20 3.45 5.35 7.84 10.6 12.6 14.4 16.8 18.5
7 .989 1.24 1.69 2.17 2.83 4.25 6.35 9.04 12.0 14.1 16.0 18.5 20.3
8 1.34 1.65 2.18 2.73 3.49 5.07 7.34 10.2 13.4 15.5 17.5 20.1 22.0
9 1.73 2.09 2.70 3.33 4.17 5.90 8.34 11.4 14.7 16.9 19.0 21.7 23.6
10 2.16 2.56 3.25 3.94 4.87 6.74 9.34 12.5 16.0 18.3 20.5 23.2 25.2
11 2.60 3.05 3.82 4.57 5.58 7.58 10.3 13.7 17.3 19.7 21.9 24.7 26.8
12 3.07 3.57 4.40 5.23 6.30 8.44 11.3 14.8 18.5 21.0 23.3 26.2 28.3
13 3.57 4.11 5.01 5.89 7.04 9.30 12.3 16.0 19.8 22.4 24.7 27.7 29.8
14 4.07 4.66 5.63 6.57 7.79 10.2 13.3 17.1 21.1 23.7 26.1 29.1 31.3
15 4.60 5.23 6.26 7.26 8.55 11.0 14.3 18.2 22.3 25.0 27.5 30.6 32.8
16 5.14 5.81 6.91 7.96 9.31 11.9 15.3 19.4 23.5 26.3 28.8 32.0 34.3
17 5.70 6.41 7.56 8.67 10.1 12.8 16.3 20.5 24.8 27.6 30.2 33.4 35.7
18 6.26 7.01 8.23 9.39 10.9 13.7 17.3 21.6 26.0 28.9 31.5 34.8 37.2
19 6.84 7.63 8.91 10.1 11.7 14.6 18.3 22.7 27.2 30.1 32.9 36.2 38.6
20 7.43 8.26 9.59 10.9 12.4 15.5 19.3 23.8 28.4 31.4 34.2 37.6 40.0
21 8.03 8.90 10.3 11.6 13.2 16.3 20.3 24.9 29.6 32.7 35.5 38.9 41.4
22 8.64 9.54 11.0 12.3 14.0 17.2 21.3 26.0 30.8 33.9 36.8 40.3 42.8
23 9.26 10.2 11.7 13.1 14.8 18.1 22.3 27.1 32.0 35.2 38.1 41.6 44.2
24 9.89 10.9 12.4 13.8 15.7 19.0 23.3 28.2 33.2 36.4 39.4 43.0 45.6
25 10.5 11.5 13.1 14.6 16.5 19.9 24.3 29.3 34.4 37.7 40.6 44.3 46.9
26 11.2 12.2 13.8 15.4 17.3 20.8 25.3 30.4 35.6 38.9 41.9 45.6 48.3
27 11.8 12.9 14.6 16.2 18.1 21.7 26.3 31.5 36.7 40.1 43.2 47.0 49.6
28 12.5 13.6 15.3 16.9 18.9 22.7 27.3 32.6 37.9 41.3 44.5 48.3 51.0
29 13.1 14.3 16.0 17.7 19.8 23.6 28.3 33.7 39.1 42.6 45.7 49.6 52.3
30 13.8 15.0 16.8 18.5 20.6 24.5 29.3 34.8 40.3 43.8 47.0 50.9 53.7
* This table is abridged from "Tables of percentage points of the incomplete beta function and of the chi-square distribution," Biometrika,
Vol. 32 (1941). It is here published with the kind permission of its author, Catherine M. Thompson, and the editor of Biometrika.
Table 4  CUMULATIVE F DISTRIBUTION* (m degrees of freedom in numerator; n in denominator)
G n m 1 2 3 4 5 6 7 8 9 10 12 15 20 30 60 120 ∞
.90 39.9 49.5 53.6 55.8 57.2 58.2 58.9 59.4 59.9 60.2 60.7 61.2 61.7 62.3 62.8 63.1 63.3
.95 161 200 216 225 230 234 237 239 241 242 244 246 248 250 252 253 254
.975 1 648 800 864 900 922 937 948 957 963 969 977 985 993 1000 1010 1010 1020
.99 4,050 5,000 5,400 5,620 5,760 5,860 5,930 5,980 6,020 6,060 6,110 6,160 6,210 6,260 6,310 6,340 6,370
.995 16,200 20,000 21,600 22,500 23,100 23,400 23,700 23,900 24,100 24,200 24,400 24,600 24,800 25,000 25,200 25,400 25,500
.90 8.53 9.00 9.16 9.24 9.29 9.33 9.35 9.37 9.38 9.39 9.41 9.42 9.44 9.46 9.47 9.48 9.49
.95 18.5 19.0 19.2 19.2 19.3 19.3 19.4 19.4 19.4 19.4 19.4 19.4 19.5 19.5 19.5 19.5 19.5
.975 2 38.5 39.0 39.2 39.2 39.3 39.3 39.4 39.4 39.4 39.4 39.4 39.4 39.4 39.5 39.5 39.5 39.5
.99 98.5 99.0 99.2 99.2 99.3 99.3 99.4 99.4 99.4 99.4 99.4 99.4 99.4 99.5 99.5 99.5 99.5
.995 199 199 199 199 199 199 199 199 199 199 199 199 199 199 199 199 199
.90 5.54 5.46 5.39 5.34 5.31 5.28 5.27 5.25 5.24 5.23 5.22 5.20 5.18 5.17 5.15 5.14 5.13
.95 10.1 9.55 9.28 9.12 9.01 8.94 8.89 8.85 8.81 8.79 8.74 8.70 8.66 8.62 8.57 8.55 8.53
.975 3 17.4 16.0 15.4 15.1 14.9 14.7 14.6 14.5 14.5 14.4 14.3 14.3 14.2 14.1 14.0 13.9 13.9
.99 34.1 30.8 29.5 28.7 28.2 27.9 27.7 27.5 27.3 27.2 27.1 26.9 26.7 26.5 26.3 26.2 26.1
.995 55.6 49.8 47.5 46.2 45.4 44.8 44.4 44.1 43.9 43.7 43.4 43.1 42.8 42.5 42.1 42.0 41.8
.90 4.54 4.32 4.19 4.11 4.05 4.01 3.98 3.95 3.93 3.92 3.90 3.87 3.84 3.82 3.79 3.78 3.76
.95 7.71 6.94 6.59 6.39 6.26 6.16 6.09 6.04 6.00 5.96 5.91 5.86 5.80 5.75 5.69 5.66 5.63
.975 4 12.2 10.6 9.98 9.60 9.36 9.20 9.07 8.98 8.90 8.84 8.75 8.66 8.56 8.46 8.36 8.31 8.26
.99 21.2 18.0 16.7 16.0 15.5 15.2 15.0 14.8 14.7 14.5 14.4 14.2 14.0 13.8 13.7 13.6 13.5
.995 31.3 26.3 24.3 23.2 22.5 22.0 21.6 21.4 21.1 21.0 20.7 20.4 20.2 19.9 19.6 19.5 19.3
.90 4.06 3.78 3.62 3.52 3.45 3.40 3.37 3.34 3.32 3.30 3.27 3.24 3.21 3.17 3.14 3.12 3.11
.95 6.61 5.79 5.41 5.19 5.05 4.95 4.88 4.82 4.77 4.74 4.68 4.62 4.56 4.50 4.43 4.40 4.37
.975 5 10.0 8.43 7.76 7.39 7.15 6.98 6.85 6.76 6.68 6.62 6.52 6.43 6.33 6.23 6.12 6.07 6.02
.99 16.3 13.3 12.1 11.4 11.0 10.7 10.5 10.3 10.2 10.1 9.89 9.72 9.55 9.38 9.20 9.11 9.02
.995 22.8 18.3 16.5 15.6 14.9 14.5 14.2 14.0 13.8 13.6 13.4 13.1 12.9 12.7 12.4 12.3 12.1
.90 3.78 3.46 3.29 3.18 3.11 3.05 3.01 2.98 2.96 2.94 2.90 2.87 2.84 2.80 2.76 2.74 2.72
.95 5.99 5.14 4.76 4.53 4.39 4.28 4.21 4.15 4.10 4.06 4.00 3.94 3.87 3.81 3.74 3.70 3.67
.975 6 8.81 7.26 6.60 6.23 5.99 5.82 5.70 5.60 5.52 5.46 5.37 5.27 5.17 5.07 4.96 4.90 4.85
.99 13.7 10.9 9.78 9.15 8.75 8.47 8.26 8.10 7.98 7.87 7.72 7.56 7.40 7.23 7.06 6.97 6.88
.995 18.6 14.5 12.9 12.0 11.5 11.1 10.8 10.6 10.4 10.2 10.0 9.81 9.59 9.36 9.12 9.00 8.88
.90 3.59 3.26 3.07 2.96 2.88 2.83 2.78 2.75 2.72 2.70 2.67 2.63 2.59 2.56 2.51 2.49 2.47
.95 5.59 4.74 4.35 4.12 3.97 3.87 3.79 3.73 3.68 3.64 3.57 3.51 3.44 3.38 3.30 3.27 3.23
.975 7 8.07 6.54 5.89 5.52 5.29 5.12 4.99 4.90 4.82 4.76 4.67 4.57 4.47 4.36 4.25 4.20 4.14
.99 12.2 9.55 8.45 7.85 7.46 7.19 6.99 6.84 6.72 6.62 6.47 6.31 6.16 5.99 5.82 5.74 5.65
.995 16.2 12.4 10.9 10.1 9.52 9.16 8.89 8.68 8.51 8.38 8.18 7.97 7.75 7.53 7.31 7.19 7.08
.90 3.46 3.11 2.92 2.81 2.73 2.67 2.62 2.59 2.56 2.54 2.50 2.46 2.42 2.38 2.34 2.31 2.29
.95 5.32 4.46 4.07 3.84 3.69 3.58 3.50 3.44 3.39 3.35 3.28 3.22 3.15 3.08 3.01 2.97 2.93
.975 8 7.57 6.06 5.42 5.05 4.82 4.65 4.53 4.43 4.36 4.30 4.20 4.10 4.00 3.89 3.78 3.73 3.67
.99 11.3 8.65 7.59 7.01 6.63 6.37 6.18 6.03 5.91 5.81 5.67 5.52 5.36 5.20 5.03 4.95 4.86
.995 14.7 11.0 9.60 8.81 8.30 7.95 7.69 7.50 7.34 7.21 7.01 6.81 6.61 6.40 6.18 6.06 5.95
.90 3.36 3.01 2.81 2.69 2.61 2.55 2.51 2.47 2.44 2.42 2.38 2.34 2.30 2.25 2.21 2.18 2.16
.95 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18 3.14 3.07 3.01 2.94 2.86 2.79 2.75 2.71
.975 9 7.21 5.71 5.08 4.72 4.48 4.32 4.20 4.10 4.03 3.96 3.87 3.77 3.67 3.56 3.45 3.39 3.33
.99 10.6 8.02 6.99 6.42 6.06 5.80 5.61 5.47 5.35 5.26 5.11 4.96 4.81 4.65 4.48 4.40 4.31
.995 13.6 10.1 8.72 7.96 7.47 7.13 6.88 6.69 6.54 6.42 6.23 6.03 5.83 5.62 5.41 5.30 5.19
.90 3.29 2.92 2.73 2.61 2.52 2.46 2.41 2.38 2.35 2.32 2.28 2.24 2.20 2.15 2.11 2.08 2.06
.95 4.96 4.10 3.71 3.48 3.33 3.22 3.14 3.07 3.02 2.98 2.91 2.84 2.77 2.70 2.62 2.58 2.54
.975 10 6.94 5.46 4.83 4.47 4.24 4.07 3.95 3.85 3.78 3.72 3.62 3.52 3.42 3.31 3.20 3.14 3.08
.99 10.0 7.56 6.55 5.99 5.64 5.39 5.20 5.06 4.94 4.85 4.71 4.56 4.41 4.25 4.08 4.00 3.91
.995 12.8 9.43 8.08 7.34 6.87 6.54 6.30 6.12 5.97 5.85 5.66 5.47 5.27 5.07 4.86 4.75 4.64
.90 3.18 2.81 2.61 2.48 2.39 2.33 2.28 2.24 2.21 2.19 2.15 2.10 2.06 2.01 1.96 1.93 1.90
.95 4.75 3.89 3.49 3.26 3.11 3.00 2.91 2.85 2.80 2.75 2.69 2.62 2.54 2.47 2.38 2.34 2.30
.975 12 6.55 5.10 4.47 4.12 3.89 3.73 3.61 3.51 3.44 3.37 3.28 3.18 3.07 2.96 2.85 2.79 2.72
.99 9.33 6.93 5.95 5.41 5.06 4.82 4.64 4.50 4.39 4.30 4.16 4.01 3.86 3.70 3.54 3.45 3.36
.995 11.8 8.51 7.23 6.52 6.07 5.76 5.52 5.35 5.20 5.09 4.91 4.72 4.53 4.33 4.12 4.01 3.90
.90 3.07 2.70 2.49 2.36 2.27 2.21 2.16 2.12 2.09 2.06 2.02 1.97 1.92 1.87 1.82 1.79 1.76
.95 4.54 3.68 3.29 3.06 2.90 2.79 2.71 2.64 2.59 2.54 2.48 2.40 2.33 2.25 2.16 2.11 2.07
.975 15 6.20 4.77 4.15 3.80 3.58 3.41 3.29 3.20 3.12 3.06 2.96 2.86 2.76 2.64 2.52 2.46 2.40
.99 8.68 6.36 5.42 4.89 4.56 4.32 4.14 4.00 3.89 3.80 3.67 3.52 3.37 3.21 3.05 2.96 2.87
.995 10.8 7.70 6.48 5.80 5.37 5.07 4.85 4.67 4.54 4.42 4.25 4.07 3.88 3.69 3.48 3.37 3.26
.90 2.97 2.59 2.38 2.25 2.16 2.09 2.04 2.00 1.96 1.94 1.89 1.84 1.79 1.74 1.68 1.64 1.61
.95 4.35 3.49 3.10 2.87 2.71 2.60 2.51 2.45 2.39 2.35 2.28 2.20 2.12 2.04 1.95 1.90 1.84
.975 20 5.87 4.46 3.86 3.51 3.29 3.13 3.01 2.91 2.84 2.77 2.68 2.57 2.46 2.35 2.22 2.16 2.09
.99 8.10 5.85 4.94 4.43 4.10 3.87 3.70 3.56 3.46 3.37 3.23 3.09 2.94 2.78 2.61 2.52 2.42
.995 9.94 6.99 5.82 5.17 4.76 4.47 4.26 4.09 3.96 3.85 3.68 3.50 3.32 3.12 2.92 2.81 2.69
.90 2.88 2.49 2.28 2.14 2.05 1.98 1.93 1.88 1.85 1.82 1.77 1.72 1.67 1.61 1.54 1.50 1.46
.95 4.17 3.32 2.92 2.69 2.53 2.42 2.33 2.27 2.21 2.16 2.09 2.01 1.93 1.84 1.74 1.68 1.62
.975 30 5.57 4.18 3.59 3.25 3.03 2.87 2.75 2.65 2.57 2.51 2.41 2.31 2.20 2.07 1.94 1.87 1.79
.99 7.56 5.39 4.51 4.02 3.70 3.47 3.30 3.17 3.07 2.98 2.84 2.70 2.55 2.39 2.21 2.11 2.01
.995 9.18 6.35 5.24 4.62 4.23 3.95 3.74 3.58 3.45 3.34 3.18 3.01 2.82 2.63 2.42 2.30 2.18
.90 2.79 2.39 2.18 2.04 1.95 1.87 1.82 1.77 1.74 1.71 1.66 1.60 1.54 1.48 1.40 1.35 1.29
.95 4.00 3.15 2.76 2.53 2.37 2.25 2.17 2.10 2.04 1.99 1.92 1.84 1.75 1.65 1.53 1.47 1.39
.975 60 5.29 3.93 3.34 3.01 2.79 2.63 2.51 2.41 2.33 2.27 2.17 2.06 1.94 1.82 1.67 1.58 1.48
.99 7.08 4.98 4.13 3.65 3.34 3.12 2.95 2.82 2.72 2.63 2.50 2.35 2.20 2.03 1.84 1.73 1.60
.995 8.49 5.80 4.73 4.14 3.76 3.49 3.29 3.13 3.01 2.90 2.74 2.57 2.39 2.19 1.96 1.83 1.69
.90 2.75 2.35 2.13 1.99 1.90 1.82 1.77 1.72 1.68 1.65 1.60 1.54 1.48 1.41 1.32 1.26 1.19
.95 3.92 3.07 2.68 2.45 2.29 2.18 2.09 2.02 1.96 1.91 1.83 1.75 1.66 1.55 1.43 1.35 1.25
.975 120 5.15 3.80 3.23 2.89 2.67 2.52 2.39 2.30 2.22 2.16 2.05 1.94 1.82 1.69 1.53 1.43 1.31
.99 6.85 4.79 3.95 3.48 3.17 2.96 2.79 2.66 2.56 2.47 2.34 2.19 2.03 1.86 1.66 1.53 1.38
.995 8.18 5.54 4.50 3.92 3.55 3.28 3.09 2.93 2.81 2.71 2.54 2.37 2.19 1.98 1.75 1.61 1.43
.90 2.71 2.30 2.08 1.94 1.85 1.77 1.72 1.67 1.63 1.60 1.55 1.49 1.42 1.34 1.24 1.17 1.00
.95 3.84 3.00 2.60 2.37 2.21 2.10 2.01 1.94 1.88 1.83 1.75 1.67 1.57 1.46 1.32 1.22 1.00
.975 ∞ 5.02 3.69 3.12 2.79 2.57 2.41 2.29 2.19 2.11 2.05 1.94 1.83 1.71 1.57 1.39 1.27 1.00
.99 6.63 4.61 3.78 3.32 3.02 2.80 2.64 2.51 2.41 2.32 2.18 2.04 1.88 1.70 1.47 1.32 1.00
.995 7.88 5.30 4.28 3.72 3.35 3.09 2.90 2.74 2.62 2.52 2.36 2.19 2.00 1.79 1.53 1.36 1.00
* This table is abridged from "Tables of percentage points of the inverted beta distribution," Biometrika, Vol. 33 (1943). It is here published
with the kind permission of its authors, Maxine Merrington and Catherine M. Thompson, and the editor of Biometrika.
$$F(t) = \int_{-\infty}^{t} \frac{\Gamma\!\left(\frac{n+1}{2}\right)}{\Gamma\!\left(\frac{n}{2}\right)\sqrt{\pi n}}\left(1 + \frac{x^2}{n}\right)^{-(n+1)/2} dx$$

n .75 .90 .95 .975 .99 .995 .9995
This table is abridged from the "Statistical Tables" of R. A. Fisher and Frank Yates published
by Oliver & Boyd, Ltd., Edinburgh and London, 1938. It is here published with the kind permission
of the authors and their publishers.
INDEX
Absolutely continuous, 60, 61, 63,64 Complete families of densities, 321, 324, 354
Admissible estimator, 299 Complete statistic, 324
Algebra of sets, 18, 22 Completeness, 321, 354
Analysis of variance, 431 Composite hypothesis, 418
A posteriori probability, 5, 9 (See also Hypotheses)
A priori probability, 2-4, 9 Concentration, 289
Arithmetic series, 528 Conditional distributions, 129, 148
Asymptotic distribution, 196,256-258,261,359, bivariate normal, 161
440,444 continuous, 146, 141
Average sample size in sequential tests, 410--472 discrete, 143-145
Conditional expectation, 151
Conditional mean, 158
BAN estimators, 294, 296, 349, 446 Conditional probability, 32
Bayes estimation, 339, 344 Conditional variance, 159
Bayes' formula, 36 Confidence bands for c.d.f., 511
Bayes risk, 344 Confidence coefficient, 315, 311, 461
Bayes test, 411 Confidence intervals, 313, 315, 311, 461
Bayesian interval estimates, 396, 391 c.d.f., 511
Bernoulli distribution, 81, 538 difference in means, 386
Bernoulli trial, 88 general method for, 389
repeated independent, 89,101,131 large sample, 393
Best linear unbiased estimators, 499 mean of normal population, 315,381,384
Beta distribution, 115, 540 median, 512
of second kind, 215 method of finding tests, 425, 461
Beta function, 534, 535 one-sided, 318
Bias, 293 p of binomial population, 393, 395
Binomial coefficient, 529 pivotal method of obtaining, 319, 381
Binomial distribution, 81-89, 119, 120,538 regression coefficients, 491-494
confidence limits for p, 393, 395 uniformly most accurate, 464
normal approximation, 120 variance of normal population, 382, 384
Poisson approximation, 119 Confidence limits [see Confidence interval(s)]
Binomial theorem, 530 Confidence region, 311
Birthday problem, 45 for mean and variance of normal population,
Bivariate normal distribution, 162-168 384
conditional distribution, 161 Confidence sets, 461
marginal distribution, 161 uniformly most accurate, 464
moment generating function, 164 Consistency of an estimator, 291, 294, 295. 359
moments, 165 Contagious distribution, 102, 122, 123
Boole's inequality, 25 Contingency tables, 452-461
interaction, 454
tests for independence, 452
Cauchy distribution, 111, 201, 238, 540 Continuous distributions, 60, 62
Cauchy-Schwarz inequalities, 162 Consistency of an estimator, 291, 294, 295, 359
Censored, 104 Continuous random variable, 60
Central-limit theorem, 111, 120, 195, 233, 234, Convex function, 12
258 Convolution, 186
Centroid, 65 Correlation, 155, 161
Chebyshev inequality, 11 sample, 526
multivariate, 172 Spearman's rank, 525, 526
Chi-square distribution, 241,542,549 Correlation coefficient, 155, 156
table of, 553 Covariance, 155, 156
Chi-square tests, 440, 442-461 of two Bnear combinations of random
contingency tables, 452-461 variables, 119
goodness-of-fit, 442, 441 Covariance matrix, 352, 489
Combinations and permutations, 528 Cramer-Rao inequality, 316
Combinatorial symbol, 528 Cramer-Rao lower bound, 316, 320
Complement of set, 10 Critical function, 404
Estimation: Function:
interval, 372 decision, 291
point, 211 definition of, 19
Estimator(s), 212, 213 density (see Distributions)
admissible, 299 distance, 281
BAN, 294, 296 distribution (see Distributions)
Bayes method, 286, 339, 344 domain of, 19,53
best linear unbiased, 499 gamma, 534
better, 299 generating, 84
closeness, 288 image of, 19
concentrated,289 indicator, 20
consistent, 294, 295, 359 likelihood,218
ellipsoid of concentration, 353 gamma, 534
least squares, 286,498 squared-error, 291
location invariant, 334 moment generating (see Moment generating
maximum likelihood (see Maximum likeli- function)
hood estimators) power, 406
mean-squared error, 291 preimage, 19
method of moments, 214 probability, 21,22,26
minimax, 299,350 regression, 158, 168
minimum chi-square, 286, 281 risk, 291, 298
minimum distance, 286, 281 set, 20, 21
Pitman, 290, 334, 336 size-of-set, 21
scale invariant, 336
unbiased, 293,315
uniformly minimum-variance unbiased, 315 Game of craps, 48
Wilks' generalized variance, 353 Gamma distribution, 111, 112, 123, 540
(See also Large samples)
Gamma function, 534
Event, 14, 15, 18,53 Gauss-Markov theorem, 500
elementary, 15 Gaussian distribution (see Normal distribution)
Event space, 15, 18,23 Generalized likelihood ratio (see Likelihood
Excess, coefficient of, 16 ratio)
Expectation, 64, 69,153,160,116 Generalized variance, 352, 353
Expected values, 64, 69, 10, 129,153,160 Generating functions (see specific generating
conditional, 151 functions)
of functions of random variables, 116 Geometric distribution, 99, 538
properties of, 10 Geometric series, 528
Exponential class, 312, 313, 320, 326, 355, 422 Glivenko-Cantelli theorem, 501
Exponential distribution, 111,121,231,262,540 Goodness-of-fit test:
Exponential family, 312, 313, 320, 326, 355, 422 chi-square, 442, 441
Extension theorem, 22 Kolmogorov-Smirnov, 508, 509
Extreme-value statistic, 118,258 Gumbel distribution, 118, 542
asymptotic distribution of, 261