

Engineering Statistics 17 March, 2022

Lecture No. 8
Resource Person: Dr. Absar Ul Haq, Department: Mechanical Engineering (Narowal Campus).

8.1 Some Discrete Probability Distributions

8.1.1 Introduction and Motivation

No matter whether a discrete probability distribution is represented graphically by a histogram, in tabular
form, or by means of a formula, the behavior of a random variable is described. Often, the observations
generated by different statistical experiments have the same general type of behavior. Consequently, discrete
random variables associated with these experiments can be described by essentially the same probability
distribution and therefore can be represented by a single formula. In fact, one needs only a handful of
important probability distributions to describe many of the discrete random variables encountered in practice.
Such a handful of distributions describe several real-life random phenomena. For instance, in a study
involving testing the effectiveness of a new drug, the number of cured patients among all the patients who
use the drug approximately follows a binomial distribution. In an industrial example, when a sample of
items selected from a batch of production is tested, the number of defective items in the sample usually can
be modeled as a hypergeometric random variable. In a statistical quality control problem, the experimenter
will signal a shift of the process mean when observational data exceed certain limits. The number of samples
required to produce a false alarm follows a geometric distribution, which is a special case of the negative
binomial distribution. On the other hand, the number of white cells from a fixed amount of an individual’s
blood sample is usually random and may be described by a Poisson distribution. In this lecture, we present
these commonly used distributions with various examples.

8.1.2 Binomial and Multinomial Distributions

An experiment often consists of repeated trials, each with two possible outcomes that may be labeled success
or failure. The most obvious application deals with the testing of items as they come off an assembly line,
where each trial may indicate a defective or a non-defective item. We may choose to define either outcome
as a success. The process is referred to as a Bernoulli process. Each trial is called a Bernoulli trial.
Observe, for example, if one were drawing cards from a deck, the probabilities for repeated trials change if
the cards are not replaced. That is, the probability of selecting a heart on the first draw is 1/4, but on the
second draw it is a conditional probability having a value of 13/51 or 12/51, depending on whether a heart
appeared on the first draw: this, then, would no longer be considered a set of Bernoulli trials.

The Bernoulli Process

Strictly speaking, the Bernoulli process must possess the following properties:

1. The experiment consists of repeated trials.


2. Each trial results in an outcome that may be classified as a success or a failure.


3. The probability of success, denoted by p, remains constant from trial to trial.


4. The repeated trials are independent.

Consider the set of Bernoulli trials where three items are selected at random from a manufacturing process,
inspected, and classified as defective or nondefective. A defective item is designated a success. The number
of successes is a random variable X assuming integral values from 0 through 3. The eight possible outcomes
and the corresponding values of X are

Outcome   NNN  NDN  NND  DNN  NDD  DND  DDN  DDD
x          0    1    1    1    2    2    2    3

Since the items are selected independently and we assume that the process produces 25% defectives, we have
   
3 1 3 9
P (N DN ) = P (N )P (D)P (N ) = = .
4 4 4 64
Similar calculations yield the probabilities for the other possible outcomes. The probability distribution of
X is therefore

x      0      1      2      3
f(x)  27/64  27/64  9/64   1/64

Binomial Distribution

The number X of successes in n Bernoulli trials is called a binomial random variable. The probability
distribution of this discrete random variable is called the binomial distribution, and its values will be denoted
by b(x; n, p) since they depend on the number of trials and the probability of a success on a given trial. Thus,
for the probability distribution of X, the number of defectives in the illustration above, we have
\[
P(X = 2) = f(2) = b\!\left(2;\, 3,\, \tfrac{1}{4}\right) = \frac{9}{64}.
\]
Let us now generalize the above illustration to yield a formula for b(x; n, p). That is, we wish to find a
formula that gives the probability of x successes in n trials for a binomial experiment. First, consider the
probability of x successes and n − x failures in a specified order. Since the trials are independent, we can
multiply all the probabilities corresponding to the different outcomes. Each success occurs with probability
p and each failure with probability q = 1 − p. Therefore, the probability for the specified order is p^x q^{n−x}.
We must now determine the total number of sample points in the experiment that have x successes and n − x
failures. This number is equal to the number of partitions of n outcomes into two groups with x in one group
and n − x in the other, and is written \binom{n}{x}, as introduced in the previous lecture. Because these partitions are
mutually exclusive, we add the probabilities of all the different partitions to obtain the general formula, or
simply multiply p^x q^{n−x} by \binom{n}{x}.

Definition 8.1 (Binomial Distribution) A Bernoulli trial can result in a success with probability p and
a failure with probability q = 1 − p. Then the probability distribution of the binomial random variable X, the
number of successes in n independent trials, is
 
\[
b(x; n, p) = \binom{n}{x} p^{x} q^{n-x}, \qquad x = 0, 1, 2, \ldots, n.
\]
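The formula is easy to evaluate directly. The following is a minimal Python sketch (standard library only; the helper name binom_pmf is our own) that reproduces the value b(2; 3, 1/4) = 9/64 from the illustration above.

    from math import comb

    def binom_pmf(x: int, n: int, p: float) -> float:
        """Probability of exactly x successes in n independent Bernoulli trials."""
        return comb(n, x) * p**x * (1 - p)**(n - x)

    print(binom_pmf(2, 3, 0.25))   # 0.140625, i.e. 9/64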

• The probability that a certain kind of component will survive a shock test is 3/4. Find the probability
that exactly 2 of the next 4 components tested survive.
Solution: Assuming that the tests are independent and p = 3/4 for each of the 4 tests, we obtain
\[
b\!\left(2;\, 4,\, \tfrac{3}{4}\right) = \binom{4}{2}\left(\frac{3}{4}\right)^{2}\left(\frac{1}{4}\right)^{2} = \frac{4!}{2!\,2!}\cdot\frac{3^{2}}{4^{4}} = \frac{27}{128}.
\]
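Because the answer is a simple fraction, it can also be checked with exact rational arithmetic; the short sketch below (our own, using fractions.Fraction) confirms 27/128.

    from fractions import Fraction
    from math import comb

    p = Fraction(3, 4)
    prob = comb(4, 2) * p**2 * (1 - p)**2   # b(2; 4, 3/4) without any rounding
    print(prob, float(prob))                # 27/128 0.2109375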

• The probability that a patient recovers from a rare blood disease is 0.4. If 15 people are known to have
contracted this disease, what is the probability that (a) at least 10 survive, (b) from 3 to 8 survive,
and (c) exactly 5 survive?
Solution: Let X be the number of people who survive.
(a)
\[
P(X \ge 10) = 1 - P(X < 10) = 1 - \sum_{x=0}^{9} b(x; 15, 0.4) = 1 - 0.9662 = 0.0338
\]
(b)
\[
P(3 \le X \le 8) = \sum_{x=3}^{8} b(x; 15, 0.4) = \sum_{x=0}^{8} b(x; 15, 0.4) - \sum_{x=0}^{2} b(x; 15, 0.4) = 0.9050 - 0.0271 = 0.8779
\]
(c)
\[
P(X = 5) = b(5; 15, 0.4) = \sum_{x=0}^{5} b(x; 15, 0.4) - \sum_{x=0}^{4} b(x; 15, 0.4) = 0.4032 - 0.2173 = 0.1859
\]
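The three answers can be verified by summing the pmf directly. The sketch below (standard library; the helper names are ours) does so; part (b) comes out as 0.8778 because the quoted 0.8779 is built from table values rounded to four decimals.

    from math import comb

    def binom_pmf(x, n, p):
        return comb(n, x) * p**x * (1 - p)**(n - x)

    def binom_cdf(x, n, p):
        return sum(binom_pmf(k, n, p) for k in range(x + 1))

    n, p = 15, 0.4
    print(round(1 - binom_cdf(9, n, p), 4))                    # (a) 0.0338
    print(round(binom_cdf(8, n, p) - binom_cdf(2, n, p), 4))   # (b) 0.8778
    print(round(binom_pmf(5, n, p), 4))                        # (c) 0.1859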

• A large chain retailer purchases a certain kind of electronic device from a manufacturer. The manufacturer
indicates that the defective rate of the device is 3%.
(a) The inspector randomly picks 20 items from a shipment. What is the probability that there will be at
least one defective item among these 20?
(b) Suppose that the retailer receives 10 shipments in a month and the inspector randomly tests 20 devices
per shipment. What is the probability that there will be exactly 3 shipments each containing at least one
defective device among the 20 that are selected and tested from the shipment?
Solution: (a) Denote by X the number of defective devices among the 20. Then X follows a
b(x; 20, 0.03) distribution. Hence,
\[
P(X \ge 1) = 1 - P(X = 0) = 1 - b(0; 20, 0.03) = 1 - (0.03)^{0}(1 - 0.03)^{20-0} = 0.4562.
\]
(b) In this case, each shipment can either contain at least one defective item or not. Hence, testing of
each shipment can be viewed as a Bernoulli trial with p = 0.4562 from part (a). Assuming independence
from shipment to shipment and denoting by Y the number of shipments containing at least one defective
item, Y follows another binomial distribution b(y; 10, 0.4562). Therefore,
\[
P(Y = 3) = \binom{10}{3}(0.4562)^{3}(1 - 0.4562)^{7} = 0.1602.
\]
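The two-stage structure of this example translates directly into code: the per-shipment "success" probability is itself a binomial tail probability, which then parameterizes a second binomial distribution over the 10 shipments. A sketch (ours, standard library only):

    from math import comb

    def binom_pmf(x, n, p):
        return comb(n, x) * p**x * (1 - p)**(n - x)

    p_ship = 1 - binom_pmf(0, 20, 0.03)        # P(at least one defective in 20)
    print(round(p_ship, 4))                    # 0.4562
    print(round(binom_pmf(3, 10, p_ship), 4))  # P(Y = 3) ≈ 0.1602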

• It is conjectured that an impurity exists in 30% of the drinking wells in a certain rural community. In
order to gain some insight into the true extent of the problem, it is determined that some testing is
necessary. It is too expensive to test all of the wells in the area, so 10 are randomly selected for testing.
(a) Using the binomial distribution, what is the probability that exactly 3 wells have the impurity,
assuming that the conjecture is correct?
(b) What is the probability that more than 3 wells are impure?
Solution: (a) We require
\[
b(3; 10, 0.3) = \sum_{x=0}^{3} b(x; 10, 0.3) - \sum_{x=0}^{2} b(x; 10, 0.3) = 0.6496 - 0.3828 = 0.2668.
\]
(b) In this case, P(X > 3) = 1 − 0.6496 = 0.3504.
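A quick Monte Carlo cross-check of part (b) is also possible: simulate testing 10 wells, each impure with probability 0.3, and count how often more than 3 are impure. The sketch below is our own; the seed and trial count are arbitrary.

    import random

    random.seed(1)
    trials = 200_000
    hits = sum(
        1 for _ in range(trials)
        if sum(random.random() < 0.3 for _ in range(10)) > 3
    )
    print(hits / trials)   # ≈ 0.35, in agreement with the exact value 0.3504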

Theorem 8.2 The mean and variance of the binomial distribution b(x; n, p) are µ = np and σ² = npq.
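The theorem can be checked numerically by computing the mean and variance straight from the pmf; a short sketch (ours) for n = 15, p = 0.4:

    from math import comb

    n, p = 15, 0.4
    q = 1 - p
    pmf = [comb(n, x) * p**x * q**(n - x) for x in range(n + 1)]
    mean = sum(x * f for x, f in enumerate(pmf))
    var = sum((x - mean) ** 2 * f for x, f in enumerate(pmf))
    print(round(mean, 6), round(var, 6))   # 6.0 3.6, matching np and npq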

8.1.3 Hypergeometric Distribution

The simplest way to view the distinction between the binomial distribution and the hypergeometric distri-
bution is to note the way the sampling is done. The types of applications for the hypergeometric are very
similar to those for the binomial distribution. We are interested in computing probabilities for the number of
observations that fall into a particular category. But in the case of the binomial distribution, independence
among trials is required. As a result, if that distribution is applied to, say, sampling from a lot of items
(deck of cards, batch of production items), the sampling must be done with replacement of each item after
it is observed. On the other hand, the hypergeometric distribution does not require independence and is
based on sampling done without replacement. Applications for the hypergeometric distribution are found in
many areas, with heavy use in acceptance sampling, electronic testing, and quality assurance. Obviously, in
many of these fields, testing is done at the expense of the item being tested. That is, the item is destroyed
and hence cannot be replaced in the sample. Thus, sampling without replacement is necessary. A simple
example with playing cards will serve as our first illustration. If we wish to find the probability of observing
3 red cards in 5 draws from an ordinary deck of 52 playing cards, the binomial distribution does not apply
unless each card is replaced and the deck reshuffled before the next draw is made. To solve the problem of
sampling without replacement, let us restate the problem. If 5 cards are drawn at random, we are interested
in the probability of selecting 3 red cards from the 26 available in the deck and 2 black cards from the 26
available in the deck. There are \binom{26}{3} ways of selecting 3 red cards, and for each of these ways we can
choose 2 black cards in \binom{26}{2} ways. Therefore, the total number of ways to select 3 red and 2 black cards
in 5 draws is the product \binom{26}{3}\binom{26}{2}. The total number of ways to select any 5 cards from the 52
that are available is \binom{52}{5}. Hence, the probability of selecting 5 cards without replacement of which 3 are
red and 2 are black is given by
\[
\frac{\binom{26}{3}\binom{26}{2}}{\binom{52}{5}} = \frac{(26!/(3!\,23!))\,(26!/(2!\,24!))}{52!/(5!\,47!)} = 0.3251.
\]
In general, we are interested in the probability of selecting x successes from the k items labeled successes
and n − x failures from the N − k items labeled failures when a random sample of size n is selected from
N items. This is known as a hypergeometric experiment, that is, one that possesses the following two
properties:

1. A random sample of size n is selected without replacement from N items.


2. Of the N items, k may be classified as successes and N − k are classified as failures.

The number X of successes of a hypergeometric experiment is called a hypergeometric random variable.


Accordingly, the probability distribution of the hypergeometric variable is called the hypergeometric dis-
tribution, and its values are denoted by h(x; N, n, k), since they depend on the number of successes k in
the set N from which we select n items.

Definition 8.3 The probability distribution of the hypergeometric random variable X, the number of suc-
cesses in a random sample of size n selected from N items of which k are labeled success and N − k labeled
failure, is
\[
h(x; N, n, k) = \frac{\binom{k}{x}\binom{N-k}{n-x}}{\binom{N}{n}}, \qquad \max\{0,\, n-(N-k)\} \le x \le \min\{n, k\}.
\]

The range of x can be determined by the three binomial coefficients in the definition, where x and n − x can be
no more than k and N − k, respectively, and neither can be less than 0. Usually, when both k (the
number of successes) and N − k (the number of failures) are larger than the sample size n, the range of a
hypergeometric random variable is x = 0, 1, ..., n.
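The hypergeometric pmf is again straightforward to evaluate; the sketch below (standard library; the helper name hypergeom_pmf is ours) reproduces the playing-card probability 0.3251 computed earlier, where N = 52, n = 5, and k = 26.

    from math import comb

    def hypergeom_pmf(x: int, N: int, n: int, k: int) -> float:
        """P(X = x): x successes in a sample of n drawn without replacement
        from N items, k of which are successes."""
        return comb(k, x) * comb(N - k, n - x) / comb(N, n)

    print(round(hypergeom_pmf(3, 52, 5, 26), 4))   # 0.3251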

• Lots of 40 components each are deemed unacceptable if they contain 3 or more defectives. The
procedure for sampling a lot is to select 5 components at random and to reject the lot if a defective is
found. What is the probability that exactly 1 defective is found in the sample if there are 3 defectives
in the entire lot?
Solution: Using the hypergeometric distribution with n = 5, N = 40, k = 3, and x = 1, we find the
probability of obtaining 1 defective to be
\[
h(1; 40, 5, 3) = \frac{\binom{3}{1}\binom{37}{4}}{\binom{40}{5}} = 0.3011.
\]
Once again, this plan is not desirable, since it detects a bad lot (3 defectives) only about 30% of the
time.
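The same kind of helper confirms the value above; a self-contained sketch (ours):

    from math import comb

    def hypergeom_pmf(x, N, n, k):
        return comb(k, x) * comb(N - k, n - x) / comb(N, n)

    print(round(hypergeom_pmf(1, 40, 5, 3), 4))   # 0.3011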

Theorem 8.4 The mean and variance of the hypergeometric distribution h(x; N, n, k) are
\[
\mu = \frac{nk}{N} \qquad \text{and} \qquad \sigma^{2} = \frac{N-n}{N-1}\cdot n\cdot\frac{k}{N}\left(1 - \frac{k}{N}\right).
\]

• Find the mean and variance of the random variable of the example above.
Solution: Since the example above describes a hypergeometric experiment with N = 40, n = 5, and k = 3,
by Theorem 8.4 we have
\[
\mu = \frac{(5)(3)}{40} = \frac{3}{8} = 0.375
\]
and
\[
\sigma^{2} = \frac{40-5}{40-1}\cdot 5\cdot\frac{3}{40}\left(1 - \frac{3}{40}\right) = 0.3113.
\]
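Both values can be cross-checked against direct sums over the pmf; the sketch below (ours) compares the closed-form expressions of Theorem 8.4 with the sum-based mean and variance.

    from math import comb

    N, n, k = 40, 5, 3
    pmf = {x: comb(k, x) * comb(N - k, n - x) / comb(N, n) for x in range(min(n, k) + 1)}
    mean = sum(x * f for x, f in pmf.items())
    var = sum((x - mean) ** 2 * f for x, f in pmf.items())
    print(round(mean, 4), n * k / N)                                               # 0.375 0.375
    print(round(var, 4), round((N - n) / (N - 1) * n * (k / N) * (1 - k / N), 4))  # 0.3113 0.3113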

8.1.4 Poisson Distribution and the Poisson Process

Experiments yielding numerical values of a random variable X, the number of outcomes occurring during a
given time interval or in a specified region, are called Poisson experiments. The given time interval may
be of any length, such as a minute, a day, a week, a month, or even a year. For example, a Poisson experiment
can generate observations for the random variable X representing the number of telephone calls received
per hour by an office, the number of days school is closed due to snow during the winter, or the number
of games postponed due to rain during a baseball season. The specified region could be a line segment, an
area, a volume, or perhaps a piece of material. In such instances, X might represent the number of field
mice per acre, the number of bacteria in a given culture, or the number of typing errors per page. A Poisson
experiment is derived from the Poisson process and possesses the following properties.

Properties of the Poisson Process


1. The number of outcomes occurring in one time interval or specified region of space is independent of
the number that occurs in any other disjoint time interval or region. In this sense we say that the
Poisson process has no memory.
2. The probability that a single outcome will occur during a very short time interval or in a small region
is proportional to the length of the time interval or the size of the region and does not depend on the
number of outcomes occurring outside this time interval or region.
3. The probability that more than one outcome will occur in such a short time interval or fall in such a
small region is negligible.

The number X of outcomes occurring during a Poisson experiment is called a Poisson random variable, and
its probability distribution is called the Poisson distribution. The mean number of outcomes is computed
from µ = λt, where t is the specific "time," "distance," "area," or "volume" of interest. Since the probabilities
depend on λ, the rate of occurrence of outcomes, we shall denote them by p(x; λt). The derivation of the
formula for p(x; λt), based on the three properties of a Poisson process listed above, is beyond the scope of
these notes. The following formula is used for computing Poisson probabilities.

Definition 8.5 (Poisson Distribution) The probability distribution of the Poisson random variable X,
representing the number of outcomes occurring in a given time interval or specified region denoted by t, is

\[
p(x; \lambda t) = \frac{e^{-\lambda t}(\lambda t)^{x}}{x!}, \qquad x = 0, 1, 2, \ldots,
\]
where λ is the average number of outcomes per unit time, distance, area, or volume and e = 2.71828....
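The Poisson pmf also needs only the standard library; the sketch below (the helper name poisson_pmf is ours) evaluates p(x; λt) and already reproduces the counter example that follows.

    from math import exp, factorial

    def poisson_pmf(x: int, lam_t: float) -> float:
        """P(X = x) when the mean number of outcomes in the interval is lam_t."""
        return exp(-lam_t) * lam_t**x / factorial(x)

    print(round(poisson_pmf(6, 4), 4))   # 0.1042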

• During a laboratory experiment, the average number of radioactive particles passing through a counter
in 1 millisecond is 4. What is the probability that 6 particles enter the counter in a given millisecond?
Solution: Using the Poisson distribution with x = 6 and λt = 4, we have
\[
p(6; 4) = \frac{e^{-4}\,4^{6}}{6!} = 0.1042.
\]

• Ten is the average number of oil tankers arriving each day at a certain port. The facilities at the port
can handle at most 15 tankers per day. What is the probability that on a given day tankers have to
be turned away?
Solution: Let X be the number of tankers arriving each day. Then, we have
\[
P(X > 15) = 1 - P(X \le 15) = 1 - \sum_{x=0}^{15} p(x; 10) = 1 - 0.9513 = 0.0487.
\]
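The tail probability is a one-line sum over the pmf; a self-contained sketch (ours):

    from math import exp, factorial

    def poisson_pmf(x, lam_t):
        return exp(-lam_t) * lam_t**x / factorial(x)

    print(round(1 - sum(poisson_pmf(x, 10) for x in range(16)), 4))   # 0.0487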

Theorem 8.6 Both the mean and the variance of the Poisson distribution p(x; λt) are λt.

