
A note on the probability of at least k successes in n correlated binary trials


Alexander Zaigraev

Serguei Kaniovski

September 5, 2012

Abstract
We obtain the probability distribution of the number of successes in a sequence of correlated binary trials as a function of the marginal probabilities and correlation coefficients. It is based on Bahadur's representation of the joint probability distribution of correlated binary trials, truncated to second-order correlations. We provide new results illustrating the use of this distribution in reliability theory and decision theory.
Key Words: correlated binary trials, k-out-of-n system, expert groups

Introduction

In this paper we generalize the binomial distribution to correlated binary variables based on
Bahadur's (1961) representation of the joint probability distribution of n correlated binary
variables, truncated to second-order correlations. The probability distribution of the number of
successes in binary trials finds wide application in reliability theory and decision theory.
Reliability of k-out-of-n systems. Our first application studies the reliability of a k-out-of-n system with positively correlated component failures (Zuo and Tian 2010). Factors that
may lead to dependent component performance include the influence of a common operating
environment and failure cascades, in which the failure of one component increases the strain on the
remaining components. There exists ample empirical evidence for the positively correlated

Nicolaus Copernicus University
Faculty of Mathematics and Computer Science
Chopin str. 12/18, 87-100 Toruń, Poland
Tel: +48 56 6112944
Email: alzaig@mat.uni.torun.pl

Austrian Institute of Economic Research (WIFO)
P.O. Box 91, A-1103 Vienna, Austria
Tel: +43 1 7982601 231
Email: serguei.kaniovski@wifo.ac.at

component failures surveyed in Dhillon and Anude (1994), and a number of models study the
effect of positive correlation between component failures on the reliability of a system (e.g.,
Littlewood 1996, Fiondella 2010).
We show that choosing an optimal degree of component redundancy k can mitigate the
effect of correlation on the reliability of a k-out-of-n system with n equally reliable components.
One additional advantage of a correlation-robust system is its predictability. The reliability of a
correlation-robust system is as close as possible to that of an identical system with independently
functioning components, and its reliability under independence can easily be established at the
design stage of the system using the ordinary binomial distribution. The disadvantage is that
minimizing the sensitivity to correlation may not maximize reliability, because the effect of
correlation on reliability can be beneficial as well as detrimental.
Expertise of expert groups. The Condorcet Jury Theorem studies the collective expertise
of a group of experts. The expertise of an expert group is the probability of it collectively making
the correct decision under a given voting rule. The theorem rests on five assumptions: 1) the
group must choose between two alternatives, one of which is correct; 2) the group makes its
decision by voting under simple majority rule; 3) each expert is more likely than not to vote
for the correct alternative; 4) all experts have equal probabilities (homogeneity); and 5) each
expert makes his decision independently. The theorem states that any group comprising an odd
number of experts greater than one is more likely to select the correct alternative than any single
expert, and this likelihood becomes a certainty as the number of experts tends to infinity. A
proof of this classic result in social sciences can be found in Young (1988) and Boland (1989).
We generalize the above model by allowing 1) heterogeneity of experts, 2) positive correlation
between the votes, and 3) qualified majority rules. For analytical tractability, we assume that
any two votes correlate with the same correlation coefficient. The conventional wisdom holds
that the "groupthink" or "bandwagon" effect implied by positive correlation diminishes the
collective competence of the group (Surowiecki 2005). We show that this effect can be positive
or negative, and provide sufficient conditions for it to have a certain sign. This indicates that
we can choose a voting rule k, for which the effect of positive correlation between the votes on
the probability of a group collectively making correct decisions is positive.

The Bahadur representation

Let $x_i$ be a realization of a binary random variable $X_i$, such that $P(X_i = 1) = p_i$ and $P(X_i = 0) = 1 - p_i$, with $p_i \in (0, 1)$ for all $i$. Let $x$, the vector of $n$ realizations, occur with probability $\pi_x$. Bahadur (1961) obtained the following representation of the joint probability distribution of $X = (X_1, X_2, \ldots, X_n)$:

Theorem (Bahadur). The joint probability distribution of $n$ correlated binary random variables is uniquely determined by the $n$ marginal probabilities $(p_i)$ and the $2^n - n - 1$ correlation coefficients

order 2: $c_{i,j} = E[Z_i Z_j]$ for all $1 \le i < j \le n$;

order 3: $c_{i,j,r} = E[Z_i Z_j Z_r]$ for all $1 \le i < j < r \le n$;

...

order $n$: $c_{1,2,\ldots,n} = E[Z_1 Z_2 \cdots Z_n]$,

where $Z_i = (X_i - p_i)/\sqrt{p_i(1 - p_i)}$ for $i = 1, 2, \ldots, n$, such that

$$\pi_x = \bar{\pi}_x \Big[ 1 + \sum_{1 \le i < j \le n} c_{i,j} z_i z_j + \sum_{1 \le i < j < k \le n} c_{i,j,k} z_i z_j z_k + \cdots + c_{1,2,\ldots,n} z_1 z_2 \cdots z_n \Big]. \quad (1)$$

Here

$$\bar{\pi}_x = \prod_{i=1}^{n} p_i^{x_i} (1 - p_i)^{1 - x_i}$$

is the probability of $X = x$ in the case of independence and $z_i = (x_i - p_i)/\sqrt{p_i(1 - p_i)}$ is a realization of the random variable $Z_i$ for $i = 1, 2, \ldots, n$.


The second-order correlation coefficient is the Pearson product-moment correlation coefficient between two binary random variables. We will denote the $n \times n$ matrix of Pearson product-moment correlation coefficients by $C$. Higher-order correlation coefficients measure dependence between the general tuples of binary random variables. For a given $n$, there will be $\sum_{i=2}^{n} \binom{n}{i} = 2^n - n - 1$ correlation coefficients of all orders.
The Bahadur representation is sufficiently general to define the joint probability distribution
of any vector of correlated binary variables. The drawback is the large number of parameters
it requires. Foreseeing this difficulty, Bahadur proposed truncating distribution (1)
to second-order correlations. This is equivalent to assuming that all higher-order correlations
vanish. In this case distribution (1) becomes:

$$\pi_x = \bar{\pi}_x \Big[ 1 + \sum_{1 \le i < j \le n} c_{i,j} z_i z_j \Big]. \quad (2)$$

Bahadur provided a lower bound on the smallest eigenvalue $\lambda_{\min}$ of the correlation matrix $C$
required for (2) to be nonnegative for all $x$:

$$\lambda_{\min} \ge 1 - \frac{2}{\sum_{i=1}^{n} \tau_i}, \quad \text{where } \tau_i = \max\left\{ \frac{p_i}{1 - p_i},\; \frac{1 - p_i}{p_i} \right\} \text{ for each } i = 1, 2, \ldots, n.$$

Since in Bahadur's parametrization $\pi_x \le 1$ by construction, we only need to ensure $\pi_x \ge 0$. The
above bound is sufficient but not necessary for $\pi_x$ in (2) to be a distribution.
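As a concrete illustration, the truncated distribution (2) can be evaluated directly. The following is a minimal Python sketch; the function name and the list-of-lists correlation layout are our own choices, not part of the paper:

```python
import itertools
import math

def pi_bahadur(x, p, c):
    """Truncated Bahadur probability (2) of a binary vector x, given
    marginal probabilities p and pairwise correlations c (only the
    upper triangle c[i][j], i < j, is used)."""
    n = len(x)
    # independence part: prod p_i^x_i (1 - p_i)^(1 - x_i)
    base = math.prod(p[i] if x[i] else 1 - p[i] for i in range(n))
    # standardized realizations z_i = (x_i - p_i) / sqrt(p_i (1 - p_i))
    z = [(x[i] - p[i]) / math.sqrt(p[i] * (1 - p[i])) for i in range(n)]
    corr = sum(c[i][j] * z[i] * z[j]
               for i, j in itertools.combinations(range(n), 2))
    return base * (1 + corr)
```

Summing over all $2^n$ vectors gives one for any $C$, because each pairwise correction term has zero mean under independence; it is only nonnegativity that the eigenvalue bound above must guard.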

Probability of at least k successes

Let $t(x) = \sum_{i=1}^{n} x_i$ denote the sum of coordinates of $x$. We use distribution (2) to obtain the probability of at least $k$ successes in $n$ correlated binary trials:

$$P_n^k(p, C) = \sum_{x:\, t(x) \ge k} \pi_x, \quad \text{where } k = 0, 1, \ldots, n, \quad (3)$$

under the assumption that all $\pi_x$ according to formula (2) are nonnegative. The cumulative
distribution function of a generalized binomial distribution becomes:

$$P\{t(X) \le k\} = \begin{cases} 1 - P_n^{k+1}(p, C), & k = 0, 1, \ldots, n - 1; \\ 1, & k = n. \end{cases}$$
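Formula (3) and the cumulative distribution function can be checked by brute-force enumeration of all $2^n$ profiles. A small Python sketch (names are illustrative):

```python
import itertools
import math

def prob_at_least_k(k, p, c):
    """P_n^k(p, C) of formula (3): sum the truncated Bahadur
    probabilities (2) over all 2^n profiles x with t(x) >= k."""
    n = len(p)
    total = 0.0
    for x in itertools.product((0, 1), repeat=n):
        if sum(x) < k:
            continue
        base = math.prod(p[i] if x[i] else 1 - p[i] for i in range(n))
        z = [(x[i] - p[i]) / math.sqrt(p[i] * (1 - p[i])) for i in range(n)]
        corr = sum(c[i][j] * z[i] * z[j]
                   for i, j in itertools.combinations(range(n), 2))
        total += base * (1 + corr)
    return total

def cdf(k, p, c):
    """P{t(X) <= k} = 1 - P_n^{k+1}(p, C) for k < n, and 1 for k = n."""
    return 1.0 if k == len(p) else 1.0 - prob_at_least_k(k + 1, p, c)
```

With all correlations zero this reduces to the ordinary binomial (Poisson binomial) tail, which makes a convenient sanity check.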

Theorem. The probability of at least $k$, $k = 0, 1, \ldots, n$, successes in $n$ correlated binary trials is given by

$$P_n^k(p, C) = P_n^k(p, I) + \prod_{h=1}^{n} p_h \sum_{1 \le i < j \le n} c_{i,j}\, \gamma_i \gamma_j\, A_{i,j}^k(\gamma), \quad (4)$$

where $p = (p_1, p_2, \ldots, p_n)$ is the vector of marginal probabilities, $\gamma = (\gamma_1, \gamma_2, \ldots, \gamma_n)$ such that $\gamma_i = \sqrt{(1 - p_i)/p_i}$ for $i = 1, 2, \ldots, n$, $C = (c_{ij})$ the $n \times n$ correlation matrix, $I$ the $n \times n$ identity matrix, and

$$A_{i,j}^k(\gamma) = \begin{cases} 0, & k = 0; \\[4pt] \displaystyle\sum_{\substack{1 \le i_1 < \cdots < i_{n-k} \le n \\ i_s \ne i,\; i_s \ne j}} \gamma_{i_1}^2 \cdots \gamma_{i_{n-k}}^2 \;-\; \sum_{\substack{1 \le j_1 < \cdots < j_{n-k-1} \le n \\ j_s \ne i,\; j_s \ne j}} \gamma_{j_1}^2 \cdots \gamma_{j_{n-k-1}}^2, & k = 1, 2, \ldots, n - 1; \\[4pt] 1, & k = n. \end{cases}$$

Proof. Let $x = (x_1, \ldots, x_n)$ be a vector of binary outcomes. We begin the proof by rewriting Bahadur's expression (2) for the joint distribution in terms of $\gamma_i$. Since in (2) $z_i = (-1)^{1 - x_i} \gamma_i^{2 x_i - 1}$ for $i = 1, 2, \ldots, n$, we have

$$\begin{aligned}
\pi_x &= \prod_{i=1}^{n} p_i^{x_i} (1 - p_i)^{1 - x_i} + \prod_{h=1}^{n} p_h \prod_{s=1}^{n} \gamma_s^{2(1 - x_s)} \sum_{1 \le i < j \le n} c_{i,j} z_i z_j \\
&= \prod_{i=1}^{n} p_i^{x_i} (1 - p_i)^{1 - x_i} + \prod_{h=1}^{n} p_h \prod_{s=1}^{n} \gamma_s^{2(1 - x_s)} \sum_{1 \le i < j \le n} c_{i,j} (-1)^{2 - x_i - x_j} \gamma_i^{2 x_i - 1} \gamma_j^{2 x_j - 1} \\
&= \prod_{i=1}^{n} p_i^{x_i} (1 - p_i)^{1 - x_i} + \prod_{h=1}^{n} p_h \sum_{1 \le i < j \le n} c_{i,j}\, \gamma_i \gamma_j (-1)^{x_i + x_j} \prod_{s \ne i,\, s \ne j} \gamma_s^{2(1 - x_s)}.
\end{aligned}$$

In view of the above formula for $\pi_x$, the probability of exactly $k$ successes in $n$ trials can be written as

$$P\{t(X) = k\} = \sum_{x:\, t(x) = k} \prod_{i=1}^{n} p_i^{x_i} (1 - p_i)^{1 - x_i} + \prod_{h=1}^{n} p_h \sum_{1 \le i < j \le n} c_{i,j}\, \gamma_i \gamma_j \sum_{x:\, t(x) = k} (-1)^{x_i + x_j} \prod_{s \ne i,\, s \ne j} \gamma_s^{2(1 - x_s)}. \quad (5)$$

Let

$$B_{i,j}^k(\gamma) = \sum_{x:\, t(x) = k} (-1)^{x_i + x_j} \prod_{s \ne i,\, s \ne j} \gamma_s^{2(1 - x_s)} \quad \text{for } 1 \le i < j \le n. \quad (6)$$

Consider $B_{i,j}^k(\gamma)$ for all $k$ such that $2 \le k \le n - 2$. For the remaining values of $k$, the term $B_{i,j}^k(\gamma)$ will be studied below. Split the sum (6) into three summands: first, over all the vectors $x$ such that $x_i = 1$ and $x_j = 1$; second, over all the $x$ such that $x_i = 1$ and $x_j = 0$, or $x_i = 0$ and $x_j = 1$; and, third, over all the $x$ such that $x_i = 0$ and $x_j = 0$. To simplify notation, we drop the argument $\gamma$. We have

$$B_{i,j}^k = C_{n-k} - 2 C_{n-k-1} + C_{n-k-2}, \quad \text{where } C_m = \sum_{\substack{1 \le i_1 < \cdots < i_m \le n \\ i_s \ne i,\; i_s \ne j}} \gamma_{i_1}^2 \cdots \gamma_{i_m}^2.$$

For the remaining cases $k = 0$, $k = 1$, $k = n - 1$, $k = n$, putting $C_0 = 1$,

$$B_{i,j}^0 = C_{n-2}, \quad B_{i,j}^1 = -2 C_{n-2} + C_{n-3}, \quad B_{i,j}^{n-1} = C_1 - 2 C_0, \quad B_{i,j}^n = C_0.$$

Note that

$$B_{i,j}^n = C_0, \quad B_{i,j}^{n-1} + B_{i,j}^n = C_1 - C_0, \quad B_{i,j}^{n-2} + B_{i,j}^{n-1} + B_{i,j}^n = C_2 - C_1; \quad (7)$$

$$\ldots$$

$$B_{i,j}^n + \cdots + B_{i,j}^k = C_{n-k} - C_{n-k-1}, \quad (8)$$

$$\ldots$$

$$B_{i,j}^n + \cdots + B_{i,j}^2 = C_{n-2} - C_{n-3}, \quad B_{i,j}^n + \cdots + B_{i,j}^1 = -C_{n-2}, \quad B_{i,j}^n + \cdots + B_{i,j}^0 = 0. \quad (9)$$

Combining (5) and (6) yields

$$P_n^k(p, C) = P_n^k(p, I) + \prod_{h=1}^{n} p_h \sum_{1 \le i < j \le n} c_{i,j}\, \gamma_i \gamma_j \sum_{l=k}^{n} B_{i,j}^l.$$

Substituting $\sum_{l=k}^{n} B_{i,j}^l$ from (7)-(9) in the above formula furnishes the theorem.

When computing the cumulative distribution function using (4), note that if we write $A_{i,j}^k = B_{i,j}^k - C_{i,j}^k$ with $B_{i,j}^k = C_{n-k}$ and $C_{i,j}^k = C_{n-k-1}$, then $B_{i,j}^1 = C_{n-1} = 0$ and $B_{i,j}^k = C_{i,j}^{k-1}$. The last equality effectively halves the number of sums required to compute $A_{i,j}^k$. Since for given $i$ and $j$, computing $A_{i,j}^k$ for all $k$ requires computing $(n-2)$ sums $B_{i,j}^k$ and $C_{i,j}^k$, the number of such sums for all $k$ and all $i < j$ equals $n(n-1)(n-2)/2$. The above computations exclude the trivial cases $k = 0$ and $k = n$. For all $k$, the total number of summands in the second term of (4) equals $n(n-1)^2/2$.
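The closed form (4) can be implemented with elementary symmetric polynomials, since each $A_{i,j}^k$ is a difference of two such sums over the indices other than $i$ and $j$. A Python sketch (function names are ours; $P_n^k(p, I)$ is computed by convolution):

```python
import itertools
import math

def tail_indep(k, p):
    """P_n^k(p, I): at least k successes for independent Bernoulli(p_i),
    by convolving the individual two-point distributions."""
    dist = [1.0]
    for pi in p:
        new = [0.0] * (len(dist) + 1)
        for t, d in enumerate(dist):
            new[t] += d * (1 - pi)
            new[t + 1] += d * pi
        dist = new
    return sum(dist[k:])

def elem_sym(vals):
    """Elementary symmetric polynomials e_0, ..., e_len(vals)."""
    e = [1.0] + [0.0] * len(vals)
    for v in vals:
        for t in range(len(vals), 0, -1):
            e[t] += v * e[t - 1]
    return e

def prob_at_least_k_closed(k, p, c):
    """Formula (4): P_n^k(p, I) plus the correction
    prod(p_h) * sum_{i<j} c_ij g_i g_j A^k_{ij},  g_i = sqrt((1-p_i)/p_i),
    with A^k_{ij} = C_{n-k} - C_{n-k-1} over indices other than i, j."""
    n = len(p)
    g = [math.sqrt((1 - pi) / pi) for pi in p]
    correction = 0.0
    for i, j in itertools.combinations(range(n), 2):
        if k == 0:
            a = 0.0
        elif k == n:
            a = 1.0
        else:
            e = elem_sym([g[s] ** 2 for s in range(n) if s not in (i, j)])
            first = e[n - k] if n - k < len(e) else 0.0  # C_{n-1} = 0
            a = first - e[n - k - 1]
        correction += c[i][j] * g[i] * g[j] * a
    return tail_indep(k, p) + math.prod(p) * correction
```

For small $n$ this agrees with direct enumeration of the truncated distribution (2), which is a useful cross-check of an implementation.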

Robustness of a k-out-of-n system to correlation

A k-out-of-n system consists of $n$ components, such that each component is either functional or has failed. A k-out-of-n system functions if $k$ or more of its components function. Let component $i$'s state be a realization $x_i$ of a binary random variable $X_i$, such that

$$x_i = \begin{cases} 1, & \text{if component } i \text{ functions}; \\ 0, & \text{if component } i \text{ fails}. \end{cases}$$

The reliability of component $i$ is measured by its probability of being functional, $p_i = P(X_i = 1)$.
A vector of states $x = (x_1, \ldots, x_n)$ is called a system profile. There will be $2^n$ such profiles.
The structure function of a k-out-of-n system is defined as

$$\phi(x) = \begin{cases} 1, & \text{if } \sum_{i=1}^{n} x_i \ge k; \\ 0, & \text{if } \sum_{i=1}^{n} x_i < k. \end{cases}$$

Let $\pi_x$ be the probability of $X = x$, and $t(x)$ be the number of functioning components. The reliability of a system is defined as the probability that the system will function:

$$R_n^k(p, C) = E[\phi(X)] = P(\phi(X) = 1) = P_n^k(p, C), \quad \text{where } k = 1, 2, \ldots, n.$$
The k-out-of-n notation conveys the arrangement of system components. In a parallel system
(k = 1) at least one of the n components must function in order for the system to function. A
functioning series system (k = n) requires all of its components to be functional.
Fiondella (2010) suggests using a Birnbaum measure to assess the sensitivity of a system's reliability to correlation. The Birnbaum measure for correlation coefficients is defined as:

$$BM_n^k(p, C) = \frac{\partial R_n^k(p, C)}{\partial c_{i,j}}.$$

This measure is useful if the system designer can control correlation so as to maximize reliability.
Harnessing correlation may not be feasible if the dependency in component performance owes to
the influence of an adverse operating environment (Lindley and Singpurwalla 2002). In this case,
it would be desirable to mitigate the effect of correlation on system reliability by minimizing the
absolute value of the Birnbaum measure $|BM_n^k(p, C)|$.
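For a system with a common component reliability $p$ and a common correlation $c$, the Birnbaum measure can be approximated by a finite difference; because the truncated representation (2) is linear in $c$, the central difference is exact up to rounding. A small Python sketch (names are ours):

```python
import itertools
import math

def reliability(n, k, p, c):
    """R_n^k for n components with common reliability p and a common
    pairwise correlation c, via the truncated Bahadur distribution (2)."""
    g = math.sqrt((1 - p) / p)          # z_i is g if x_i = 1, else -1/g
    total = 0.0
    for x in itertools.product((0, 1), repeat=n):
        if sum(x) < k:
            continue
        base = math.prod(p if xi else 1 - p for xi in x)
        z = [g if xi else -1 / g for xi in x]
        pair_sum = sum(z[i] * z[j]
                       for i, j in itertools.combinations(range(n), 2))
        total += base * (1 + c * pair_sum)
    return total

def birnbaum(n, k, p, c, h=1e-6):
    """Central-difference approximation to BM_n^k = dR/dc; effectively
    exact here because R is linear in c under the truncation."""
    return (reliability(n, k, p, c + h) - reliability(n, k, p, c - h)) / (2 * h)
```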
In the following analysis, the design of the system is given by $k$, the degree of redundancy in
a system comprising $n$ equally reliable components, so that $p_i = p \in (0, 1)$ for all $i = 1, 2, \ldots, n$.

We propose choosing $k$ to minimize, or sometimes even negate, the effect of correlation on
system reliability. Formula (4) implies that $R_n^k(p, C)$ does not depend on $C$ if $A_{i,j}^k(\gamma) = 0$ for
all $i < j$. By assumption, $p_i = p$, so that $A_{i,j}^k(\gamma) = A^k(\gamma)$. Consequently, $R_n^k(p, C)$ does not
depend on $C$ if $A^k(\gamma) = 0$. This is not possible in a series system, where $k = n$, or in a parallel
system, where $k = 1$. For any intermediate system, the effect of correlation on reliability is
minimal for the $k$ that minimizes $|A^k(\gamma)|$. Solving the equation

$$\binom{n-2}{n-\kappa} \gamma^{2(n-\kappa)} = \binom{n-2}{n-\kappa-1} \gamma^{2(n-\kappa-1)}$$

with respect to $\kappa$ yields $\kappa = p(n-1) + 1$. If $\kappa$ is an integer, then choosing $k = \kappa$ would make
the k-out-of-n system immune to correlation. If $\kappa$ is not an integer, two feasible correlation-robust k-out-of-n systems can be defined using the greatest integer smaller than or equal to $\kappa$
($\lfloor\kappa\rfloor$) and the smallest integer larger than or equal to $\kappa$ ($\lceil\kappa\rceil$). Using the identities $\lfloor\kappa\rfloor = \lceil\kappa\rceil - 1$
and $x\binom{n-1}{x} = (n-x)\binom{n-1}{x-1}$ one can show that

$$|A^{\lfloor\kappa\rfloor}(\gamma)| < |A^{\lceil\kappa\rceil}(\gamma)| \iff (1-p)^2 \lfloor\kappa\rfloor (\lfloor\kappa\rfloor - 1) > p^2 (n - \lfloor\kappa\rfloor)(n - \lfloor\kappa\rfloor - 1).$$

The above inequality holds for $p \ge \frac{n-2}{n-1}$. It follows that when component reliability is sufficiently
high, the correlation-robust $k$ equals $\lfloor p(n-1) + 1 \rfloor$.

The effect of correlation on competence of an expert group

Take an odd number of experts $n$ ($n \ge 3$). Each expert is competent in the sense of being more
likely than not to decide correctly, but some experts are better than others. Both ideas are
reflected in the assumption $p_i > 0.5$ for all $i$. The opinions of any two experts are correlated
with the same positive correlation coefficient $c_{i,j} = c > 0$ for all $i < j$.
Individual opinions are aggregated into a collective decision using a voting rule k, such that
k = (n + 1)/2, . . . , n. Simple majority rule requires at least k = (n + 1)/2 votes in favor of
a decision being passed, whereas setting $k = n$ stipulates unanimity. The positive effect of
correlation on the collective expertise under unanimity rule is evident because, as we noted in
our theorem, $A_{i,j}^n(\gamma) = 1$ for all $i < j$ in (4). The following lemma establishes the negative effect
of correlation under simple majority rule.

Lemma. If $p_i \in (0.5, 1)$ for $i = 1, \ldots, n$, then $A_{i,j}^{(n+1)/2}(\gamma) < 0$ for any $i, j = 1, \ldots, n$, $i < j$.

Proof.

$$A_{i,j}^{(n+1)/2}(\gamma) = \sum_{\substack{1 \le i_1 < \cdots < i_{(n-1)/2} \le n \\ i_s \ne i,\; i_s \ne j}} \gamma_{i_1}^2 \cdots \gamma_{i_{(n-1)/2}}^2 \;-\; \sum_{\substack{1 \le j_1 < \cdots < j_{(n-3)/2} \le n \\ j_s \ne i,\; j_s \ne j}} \gamma_{j_1}^2 \cdots \gamma_{j_{(n-3)/2}}^2.$$

Both sums comprise an equal number of summands, $\binom{n-2}{(n-1)/2}$ and $\binom{n-2}{(n-3)/2}$ respectively.
It suffices to prove the lemma for $i = 1$ and $j = 2$. The validity of the lemma for an arbitrary
pair of indexes $i < j$ follows because their choice is not essential for the proof.

Each term of the second sum corresponds to a set of indexes $A_{j_1 \ldots j_{(n-3)/2}} = \{j_1, \ldots, j_{(n-3)/2}\}$.
Let $\tilde{A}_{j_1 \ldots j_{(n-3)/2}} = \{3, \ldots, n\} \setminus A_{j_1 \ldots j_{(n-3)/2}}$. Since the set $\tilde{A}_{j_1 \ldots j_{(n-3)/2}}$ contains $(n-1)/2$ indexes,

$$\sum_{3 \le j_1 < \cdots < j_{(n-3)/2} \le n} \; \sum_{m \in \tilde{A}_{j_1 \ldots j_{(n-3)/2}}} \gamma_m^2\, \gamma_{j_1}^2 \cdots \gamma_{j_{(n-3)/2}}^2 = \frac{n-1}{2} \sum_{3 \le i_1 < \cdots < i_{(n-1)/2} \le n} \gamma_{i_1}^2 \cdots \gamma_{i_{(n-1)/2}}^2.$$

It is evident that the difference

$$\begin{aligned}
&\sum_{3 \le i_1 < \cdots < i_{(n-1)/2} \le n} \gamma_{i_1}^2 \cdots \gamma_{i_{(n-1)/2}}^2 \;-\; \sum_{3 \le j_1 < \cdots < j_{(n-3)/2} \le n} \gamma_{j_1}^2 \cdots \gamma_{j_{(n-3)/2}}^2 \\
&= \frac{2}{n-1} \sum_{3 \le j_1 < \cdots < j_{(n-3)/2} \le n} \; \sum_{m \in \tilde{A}_{j_1 \ldots j_{(n-3)/2}}} \gamma_m^2\, \gamma_{j_1}^2 \cdots \gamma_{j_{(n-3)/2}}^2 \;-\; \sum_{3 \le j_1 < \cdots < j_{(n-3)/2} \le n} \gamma_{j_1}^2 \cdots \gamma_{j_{(n-3)/2}}^2 \\
&= \frac{2}{n-1} \sum_{3 \le j_1 < \cdots < j_{(n-3)/2} \le n} \gamma_{j_1}^2 \cdots \gamma_{j_{(n-3)/2}}^2 \left( \sum_{m \in \tilde{A}_{j_1 \ldots j_{(n-3)/2}}} \gamma_m^2 - \frac{n-1}{2} \right) < 0.
\end{aligned}$$

The last step follows since $p_i \in (0.5, 1)$ for $i = 1, \ldots, n$ implies $\gamma_i \in (0, 1)$.
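The sign pattern established here (negative under simple majority, positive under unanimity, with a flip at some intermediate rule) can be tabulated from the homogeneous-case $A^k(\gamma)$. A small Python sketch; the values $n = 9$ and $p = 0.7$ are assumptions for illustration:

```python
from math import comb

def A_hom(n, k, p):
    """A^k(gamma) for a homogeneous group with competence p; by formula (4)
    its sign is the sign of the correlation effect on P_n^k."""
    if k == 0:
        return 0.0
    if k == n:
        return 1.0
    g2 = (1 - p) / p
    def term(m):
        return comb(n - 2, m) * g2 ** m if 0 <= m <= n - 2 else 0.0
    return term(n - k) - term(n - k - 1)

# sign of the correlation effect for each voting rule k, from simple
# majority (n + 1)/2 up to unanimity n
n, p = 9, 0.7
for k in range((n + 1) // 2, n + 1):
    print(k, "+" if A_hom(n, k, p) > 0 else "-")
```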
The effect of a common positive correlation on the collective expertise is negative under
simple majority rule and positive under unanimity rule. This points towards the existence of
intermediate rules $\frac{n+1}{2} < k < n$ for which the effect of correlation changes its sign. Although it
is not possible to obtain a general result on the sign of the effect for intermediate voting rules,
the following condition is sufficient for this effect to have a certain sign.

Sufficient Condition 1. If the sum of any $k-1$ of the reciprocal probabilities $\{p_s^{-1}\}$ is not smaller
than $n-1$, then $\forall i, j$, $i < j$, we have $A_{i,j}^m(\gamma) \ge 0$, $\forall m \ge k$. If the sum of any $k-1$ of the reciprocal
probabilities $\{p_s^{-1}\}$ is not greater than $n-1$, then $\forall i, j$, $i < j$, we have $A_{i,j}^m(\gamma) \le 0$, $\forall m \le k$.

The above sufficient condition has clear implications for an optimal choice of the voting rule
in relation to the harmonic mean and the range of the individual competences $\{p_i\}$.

Sufficient Condition 1 implies inequalities on the harmonic mean $H = n \left( \sum_{i=1}^{n} p_i^{-1} \right)^{-1}$. If the
sum of any $k-1$ of $\{p_s^{-1}\}$ is not smaller than $n-1$, then $H \le \frac{k-1}{n-1}$; if the sum of any $k-1$ of
$\{p_s^{-1}\}$ is not greater than $n-1$, then $H \ge \frac{k-1}{n-1}$.

Sufficient Condition 2. If $p_i \in (0.5, \frac{k-1}{n-1}]$, $\forall i$, then $A_{i,j}^m(\gamma) \ge 0$, $\forall m \ge k$, $\forall i, j$, $i < j$. If
$p_i \in [\frac{k-1}{n-1}, 1)$, $\forall i$, then $A_{i,j}^m(\gamma) \le 0$, $\forall m \le k$, $\forall i, j$, $i < j$.

Sufficient Condition 2 follows from Sufficient Condition 1, as does the following condition:

Sufficient Condition 3. If there exist $\frac{n+1}{2} \le m < l \le n$ such that all $\{p_i\}$ lie within the
interval $[\frac{m-1}{n-1}, \frac{l-1}{n-1}]$, then $A_{i,j}^{(n+1)/2}(\gamma) < 0, \ldots, A_{i,j}^m(\gamma) \le 0$, $A_{i,j}^l(\gamma) \ge 0, \ldots, A_{i,j}^n(\gamma) > 0$, $\forall i, j$,
$i < j$.

Proof. Since Sufficient Condition 3 follows from Sufficient Condition 2, and the latter follows
from Sufficient Condition 1, we only need to prove Sufficient Condition 1.

Let $i, j$, such that $i < j$, be given. If $k = n-1$, then $A_{i,j}^{n-1}(\gamma) = \sum_{s \ne i,\, s \ne j} \gamma_s^2 - 1$,

$$A_{i,j}^{n-1}(\gamma) \ge 0 \iff \sum_{s \ne i,\, s \ne j} p_s^{-1} \ge n-1 \quad \text{and} \quad A_{i,j}^{n-1}(\gamma) \le 0 \iff \sum_{s \ne i,\, s \ne j} p_s^{-1} \le n-1.$$

To prove Sufficient Condition 1 for the remaining values of $k$, we proceed as in the proof of
the lemma. Without any loss of generality, assume that $i = 1$ and $j = 2$. If $k = n-2$, then

$$A_{1,2}^{n-2}(\gamma) = \sum_{3 \le i_1 < i_2 \le n} \gamma_{i_1}^2 \gamma_{i_2}^2 - \sum_{s=3}^{n} \gamma_s^2.$$

For any fixed $s$, consider the set of indexes $\tilde{A}_s = \{3, \ldots, n\} \setminus \{s\}$. We have,

$$\sum_{3 \le i_1 < i_2 \le n} \gamma_{i_1}^2 \gamma_{i_2}^2 - \sum_{s=3}^{n} \gamma_s^2 = \frac{1}{2} \sum_{s=3}^{n} \sum_{m \in \tilde{A}_s} \gamma_m^2 \gamma_s^2 - \sum_{s=3}^{n} \gamma_s^2 = \frac{1}{2} \sum_{s=3}^{n} \gamma_s^2 \left( \sum_{m \in \tilde{A}_s} \gamma_m^2 - 2 \right).$$

It remains to note that since the set $\tilde{A}_s$ contains $n - 3 = k - 1$ indexes,

$$\sum_{m \in \tilde{A}_s} \gamma_m^2 - 2 = \sum_{m \in \tilde{A}_s} p_m^{-1} - (n-1).$$

Thus, if $\sum_{m \in \tilde{A}_s} p_m^{-1} \ge n-1$ for all $s$, then $A_{1,2}^{n-2}(\gamma) \ge 0$; and if $\sum_{m \in \tilde{A}_s} p_m^{-1} \le n-1$ for all $s$, then $A_{1,2}^{n-2}(\gamma) \le 0$.

The same reasoning can be repeated for any $k$ such that $\frac{n+1}{2} < k < n$.
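Sufficient Condition 2 can be spot-checked numerically from the theorem's $A_{i,j}^k$. The competence values below are hypothetical, chosen to lie in $(0.5, \frac{k-1}{n-1}] = (0.5, 2/3]$ for $n = 7$, $k = 5$, so the effect should be nonnegative for every pair and every $m \ge k$:

```python
import itertools

def A_kij(k, p, i, j):
    """A^k_{i,j}(gamma) of formula (4) for heterogeneous marginals p,
    computed as C_{n-k} - C_{n-k-1}, where C_m is the m-th elementary
    symmetric polynomial of {gamma_s^2 : s != i, j}."""
    n = len(p)
    if k == 0:
        return 0.0
    if k == n:
        return 1.0
    g2 = [(1 - p[s]) / p[s] for s in range(n) if s not in (i, j)]
    e = [1.0] + [0.0] * len(g2)
    for v in g2:
        for t in range(len(g2), 0, -1):
            e[t] += v * e[t - 1]
    first = e[n - k] if n - k <= len(g2) else 0.0  # C_{n-1} = 0
    return first - e[n - k - 1]

# hypothetical competences in (0.5, 2/3] for n = 7, k = 5
p = [0.55, 0.6, 0.62, 0.58, 0.66, 0.61, 0.64]
```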

References

Bahadur, R. R.: 1961, A representation of the joint distribution of responses to n dichotomous items, in H. Solomon (ed.), Studies in Item Analysis and Prediction, Stanford University Press, pp. 158-168.

Boland, P. J.: 1989, Majority systems and the Condorcet jury theorem, The Statistician 38, 181-189.

Dhillon, B. and Anude, O.: 1994, Common-cause failures in engineering systems: A review, International Journal of Reliability, Quality and Safety Engineering 1, 103-129.

Fiondella, L.: 2010, Reliability and sensitivity analysis of coherent systems with negatively correlated component failures, International Journal of Reliability, Quality and Safety Engineering 17, 505-529.

Lindley, D. V. and Singpurwalla, N. D.: 2002, On exchangeable, causal and cascading failures, Statistical Science 17, 209-219.

Littlewood, B.: 1996, The impact of diversity upon common mode failures, Reliability Engineering and System Safety 51, 101-113.

Surowiecki, J.: 2005, The Wisdom of Crowds: Why the Many Are Smarter Than the Few, Abacus.

Young, H. P.: 1988, Condorcet's theory of voting, American Political Science Review 82, 1231-1244.

Zuo, M. J. and Tian, Z.: 2010, k-out-of-n systems, in J. J. Cochran, L. A. Cox, P. Keskinocak, J. P. Kharoufeh and J. C. Smith (eds), Wiley Encyclopedia of Operations Research and Management Science, John Wiley & Sons, Inc.
