Some Applications Involving Binary Data: (A) Comparison of Two Binomial Probabilities
Motivating Example:
In a small-scale clinical trial, the following table might represent the contribution of one of the smaller centres to the study.

              Treatment     Control
  Success     Y = 2         X = 1         X + Y = 3
  Failure     m1 - Y = 1    m2 - X = 3    4
              m1 = 3        m2 = 4        m = 7
There are two approaches to making inference about the treatment effect: the first is based on the full likelihood, the second on the conditional likelihood.

(i) Full likelihood approach
Since X ~ B(m_2, \pi_2) and Y ~ B(m_1, \pi_1), where X is the number of successes in the control group and Y is the number of successes in the treatment group, we can set
\[ \mathrm{logit}(\pi_1) = \alpha + \beta \quad \text{and} \quad \mathrm{logit}(\pi_2) = \alpha, \]
fit the linear logistic model, and use the methods in Chapter 4. Then we have
\[ \hat\beta = \log \frac{\hat\pi_1 (1 - \hat\pi_2)}{\hat\pi_2 (1 - \hat\pi_1)} = \log \frac{(Y/m_1)(1 - X/m_2)}{(X/m_2)(1 - Y/m_1)} = \log \frac{Y (m_2 - X)}{X (m_1 - Y)} = \log \frac{2 \times 3}{1 \times 1} = 1.792 \]
and the large-sample variance of \hat\beta is
\[ \widehat{\mathrm{Var}}(\hat\beta) = \frac{1}{Y} + \frac{1}{X} + \frac{1}{m_1 - Y} + \frac{1}{m_2 - X} = \frac{1}{2} + \frac{1}{1} + \frac{1}{1} + \frac{1}{3} = \frac{17}{6}. \]
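As a quick numerical check, here is a short sketch (in Python, assuming only the counts in the table above) of the estimate and its large-sample variance:

```python
import math

# Counts from the single-centre 2x2 table
y, m1 = 2, 3   # treatment: successes, group size
x, m2 = 1, 4   # control:   successes, group size

# Log odds ratio: beta_hat = log[ y (m2 - x) / ( x (m1 - y) ) ]
beta_hat = math.log(y * (m2 - x) / (x * (m1 - y)))

# Large-sample variance: 1/Y + 1/X + 1/(m1 - Y) + 1/(m2 - X)
var_beta = 1 / y + 1 / x + 1 / (m1 - y) + 1 / (m2 - x)

print(round(beta_hat, 3), var_beta)  # beta_hat = 1.792, var_beta = 17/6 = 2.833...
```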
A confidence interval for \beta can be based on the deviance
\[ D(y, \beta) = 2 l(\hat\beta, \hat\alpha) - 2 l(\beta, \hat\alpha_\beta), \]
where \hat\alpha_\beta is the maximum likelihood estimate of \alpha for a given \beta. Then a (1 - \gamma) confidence interval for \beta is
\[ \{ \beta : D(y, \beta) - D(y, \hat\beta) < \chi^2_{1, \gamma} \}. \]
For example, a 90% confidence interval for \beta is
\[ \{ \beta : D(y, \beta) - D(y, \hat\beta) < \chi^2_{1, 0.1} = 2.71 \} = (-0.8,\ 4.95). \]
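The profile-deviance interval can be reproduced numerically. The sketch below (pure Python; the golden-section and bisection helpers are illustrative, not part of the notes) profiles out \alpha for each fixed \beta and bisects for the endpoints of the 90% interval:

```python
import math

y, m1, x, m2 = 2, 3, 1, 4  # counts from the 2x2 table

def loglik(alpha, beta):
    # Log-likelihood kernel: Y ~ B(m1, p1), X ~ B(m2, p2),
    # logit(p1) = alpha + beta, logit(p2) = alpha
    return (y * (alpha + beta) - m1 * math.log(1 + math.exp(alpha + beta))
            + x * alpha - m2 * math.log(1 + math.exp(alpha)))

def profile(beta):
    # Maximise loglik over alpha by golden-section search (concave in alpha)
    g = (math.sqrt(5) - 1) / 2
    a, b = -15.0, 15.0
    for _ in range(200):
        c, d = b - g * (b - a), a + g * (b - a)
        if loglik(c, beta) < loglik(d, beta):
            a = c
        else:
            b = d
    return loglik((a + b) / 2, beta)

beta_hat = math.log(6)
lmax = profile(beta_hat)

def deviance(beta):
    return 2 * (lmax - profile(beta))

def endpoint(lo, hi, target=2.706):
    # Bisection for deviance(beta) = target; a sign change is assumed on [lo, hi]
    flo = deviance(lo) - target
    for _ in range(60):
        mid = (lo + hi) / 2
        if (flo > 0) == (deviance(mid) - target > 0):
            lo, flo = mid, deviance(mid) - target
        else:
            hi = mid
    return (lo + hi) / 2

lower = endpoint(-5.0, beta_hat)
upper = endpoint(beta_hat, 8.0)
print(round(lower, 2), round(upper, 2))  # close to the (-0.8, 4.95) quoted in the notes
```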
(ii) Conditional likelihood approach

Given X + Y = s_1 = 3, Y follows the noncentral hypergeometric distribution H(m, s, \psi) with \psi = e^\beta, so the conditional log-likelihood for \beta is
\[ l_c(\beta) = y \beta - \log \sum_{j=a}^{b} \binom{3}{j} \binom{4}{3-j} e^{j\beta} = y \beta - \log [P_0(e^\beta)], \]
where a = \max(0, s_1 - m_2) = 0, b = \min(m_1, s_1) = 3, and
\[ P_0(\psi) = \sum_{j=a}^{b} \binom{3}{j} \binom{4}{3-j} \psi^j = \binom{3}{0}\binom{4}{3} + \binom{3}{1}\binom{4}{2} \psi + \binom{3}{2}\binom{4}{1} \psi^2 + \binom{3}{3}\binom{4}{0} \psi^3 = 4 + 18\psi + 12\psi^2 + \psi^3. \]
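The coefficients of P_0 can be checked directly; a small Python sketch using the binomial coefficients above:

```python
from math import comb

m1, m2, s1 = 3, 4, 3
a, b = max(0, s1 - m2), min(m1, s1)  # support of Y given X + Y = s1

# Coefficients of P0(psi) = sum_j C(m1, j) C(m2, s1 - j) psi^j, for j = a, ..., b
coeffs = [comb(m1, j) * comb(m2, s1 - j) for j in range(a, b + 1)]
print(coeffs)  # -> [4, 18, 12, 1]
```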
The estimate \hat\beta_c of \beta satisfies
\[ l_c'(\beta) = y - \frac{P_0'(e^\beta) e^\beta}{P_0(e^\beta)} = 0. \]
Thus, writing \hat\psi_c = e^{\hat\beta_c},
\[ \frac{P_0'(\hat\psi_c) \hat\psi_c}{P_0(\hat\psi_c)} = y = 2, \]
that is, 18\hat\psi_c + 24\hat\psi_c^2 + 3\hat\psi_c^3 = 2 (4 + 18\hat\psi_c + 12\hat\psi_c^2 + \hat\psi_c^3), which gives \hat\psi_c \approx 4.45 and \hat\beta_c = \log \hat\psi_c = 1.493.
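The conditional MLE can be found numerically; the bisection below is an illustrative sketch, not part of the notes:

```python
import math

# P0 and its derivative for the coefficients [4, 18, 12, 1]
def P0(psi):
    return 4 + 18 * psi + 12 * psi**2 + psi**3

def dP0(psi):
    return 18 + 24 * psi + 3 * psi**2

def score(psi, y=2):
    # l_c'(beta) expressed in psi = e^beta: y - psi P0'(psi) / P0(psi);
    # the conditional mean of Y is increasing in psi, so score is decreasing
    return y - psi * dP0(psi) / P0(psi)

lo, hi = 1e-6, 50.0
for _ in range(200):
    mid = (lo + hi) / 2
    if score(mid) > 0:
        lo = mid
    else:
        hi = mid
psi_hat = (lo + hi) / 2
beta_c = math.log(psi_hat)
print(round(beta_c, 3))  # -> 1.493
```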
For the variance of \hat\beta_c, large-sample theory gives
\[ \widehat{\mathrm{Var}}(\hat\beta_c) = \frac{1}{I(\hat\beta_c)}, \]
since
\[ I(\beta) = -E\left[ \frac{\partial^2 l_c(\beta)}{\partial \beta^2} \right] = \frac{\partial}{\partial \beta} \left[ \frac{P_0'(e^\beta) e^\beta}{P_0(e^\beta)} \right] = \frac{\partial}{\partial \beta} E(Y \mid X + Y = s_1 = 3) = \mathrm{Var}(Y \mid X + Y = 3). \]
Thus, the variance of \hat\beta_c is estimated by the reciprocal of the variance of H(m, s, \hat\psi_c):
\[ \widehat{\mathrm{Var}}(\hat\beta_c) = \left[ \mathrm{Var}(Y \mid X + Y = 3) \right]^{-1} \Big|_{\beta = \hat\beta_c}, \]
where
\[ \mathrm{Var}(Y \mid X + Y = 3) = \frac{P_0''(e^\beta) e^{2\beta} + P_0'(e^\beta) e^\beta}{P_0(e^\beta)} - \left[ \frac{P_0'(e^\beta) e^\beta}{P_0(e^\beta)} \right]^2. \]
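A numerical sketch of the conditional variance (Python; \hat\psi_c = e^{1.493} is taken from the estimate quoted in the notes):

```python
import math

coeffs = [4, 18, 12, 1]   # P0(psi) = 4 + 18 psi + 12 psi^2 + psi^3
psi = math.exp(1.493)     # psi_c = exp(beta_c), with beta_c = 1.493 from the text

# Conditional moments of Y given X + Y = 3 under H(m, s, psi)
P0  = sum(c * psi**j for j, c in enumerate(coeffs))
EY  = sum(j * c * psi**j for j, c in enumerate(coeffs)) / P0
EY2 = sum(j * j * c * psi**j for j, c in enumerate(coeffs)) / P0
var_y = EY2 - EY**2       # Var(Y | X + Y = 3) evaluated at beta = beta_c

var_beta_c = 1 / var_y    # large-sample variance of beta_c
print(round(EY, 3), round(var_y, 3), round(var_beta_c, 2))
```

Note that EY recovers the observed y = 2, as it must at the conditional MLE.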
(B) Combination of Several 2×2 Tables

Suppose there are n centres participating in the clinical trial. Then the following tables can be obtained, for i = 1, 2, \ldots, n:

              Treatment         Control
  Success     Y_i               X_i               s_{1i}
  Failure     m_{1i} - Y_i      m_{2i} - X_i      s_{2i}
              m_{1i}            m_{2i}
We may consider the conditional log-likelihood
\[ l_c(\beta) = \sum_{i=1}^{n} \left\{ y_i \beta - \log [P_{0i}(e^\beta)] \right\}, \]
where a common odds ratio
\[ \psi = e^\beta = \frac{\pi_{1i} (1 - \pi_{2i})}{\pi_{2i} (1 - \pi_{1i})}, \qquad i = 1, 2, \ldots, n, \]
is assumed across centres, that is, \mathrm{logit}(\pi_{1i}) = \alpha_i + \beta and \mathrm{logit}(\pi_{2i}) = \alpha_i, and
\[ P_{0i}(\psi) = \sum_{j = a_i}^{b_i} \binom{m_{1i}}{j} \binom{m_{2i}}{s_{1i} - j} \psi^j. \]
Provided the amount of information is sufficiently large, standard large-sample likelihood theory applies to the conditional likelihood. To test H_0 : \beta = 0, we can use the score statistic under H_0,
\[ U = \left. \frac{\partial l_c(\beta)}{\partial \beta} \right|_{\beta = 0} = \sum_{i=1}^{n} \left[ y_i - \frac{m_{1i} (x_i + y_i)}{m_{1i} + m_{2i}} \right]. \]
In addition,
\[ \mathrm{Var}(U) = \sum_{i=1}^{n} \frac{m_{1i} m_{2i} s_{1i} s_{2i}}{(m_{1i} + m_{2i})^2 (m_{1i} + m_{2i} - 1)}, \]
the sum of the central hypergeometric variances, so that under H_0
\[ Z = \frac{U}{[\mathrm{Var}(U)]^{1/2}} \]
is approximately distributed as N(0, 1).
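The score test is straightforward to compute. The sketch below evaluates U, Var(U) and Z; for illustration it uses only the single centre from part (A), so in practice `tables` would hold one tuple per centre:

```python
import math

# Per-centre 2x2 summaries: (y_i, m1_i, x_i, m2_i).  Here only the single
# centre from part (A); append one tuple per additional centre.
tables = [(2, 3, 1, 4)]

U = 0.0
varU = 0.0
for y, m1, x, m2 in tables:
    m, s1 = m1 + m2, x + y
    s2 = m - s1
    U += y - m1 * s1 / m                           # score contribution under H0
    varU += m1 * m2 * s1 * s2 / (m**2 * (m - 1))   # central hypergeometric variance

Z = U / math.sqrt(varU)
print(round(U, 3), round(varU, 3), round(Z, 2))
```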
Note: It is possible that \beta varies from centre to centre. That is,