
4.4 Comparison of two or more Poisson means


The connection between log-linear models for frequencies and multinomial response models for proportions stems from the fact that the binomial and multinomial distributions can be derived from a set of independent Poisson random variables conditionally on their total being fixed. Suppose that $Y = (Y_1, Y_2, \ldots, Y_k)$ are independent Poisson random variables with means $\mu_1, \mu_2, \ldots, \mu_k$, where $\log \mu_j = \beta_0 + \beta_1 x_j$, and that we require to test the composite null hypothesis

$$H_0: \mu_1 = \mu_2 = \cdots = \mu_k = \exp(\beta_0), \quad \text{i.e.,} \quad H_0: \beta_1 = 0,$$

or equivalently

$$H_0: \log(\mu_1) = \log(\mu_2) = \cdots = \log(\mu_k) = \beta_0,$$


where the $x_j$'s are given constants. The alternative hypotheses under consideration are

$$H_a: \beta_1 > 0, \quad \text{or} \quad H_a: \beta_1 < 0, \quad \text{or} \quad H_a: \beta_1 \neq 0.$$

Standard theory of significance testing leads to consideration of the test statistic

$$T(Y) = \sum_{j=1}^{k} x_j Y_j$$

conditionally on the observed value of $m = \sum_{j=1}^{k} Y_j$, which is the sufficient statistic for $\beta_0$. For example, for $H_a: \beta_1 > 0$, the test is

$$\begin{cases} T(y) > C_0(m), & \text{reject } H_0 \\ T(y) = C_0(m), & \text{reject } H_0 \text{ with probability } w(m) \\ T(y) < C_0(m), & \text{do not reject } H_0 \end{cases}$$

given $m = \sum_{j=1}^{k} y_j$, where $C_0(m)$ and $w(m)$ can be obtained by solving

$$P\left(T(Y) > C_0(m) \,\Big|\, \sum_{j=1}^{k} Y_j = m,\ H_0 \text{ is true}\right) + w(m)\, P\left(T(Y) = C_0(m) \,\Big|\, \sum_{j=1}^{k} Y_j = m,\ H_0 \text{ is true}\right) = \alpha,$$

where $\alpha$ is the level of the test and $w(m)$ is some constant depending on m.
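For concreteness, $C_0(m)$ and $w(m)$ can be found by brute force when k and m are small, since under $H_0$ the conditional distribution is a uniform multinomial (see the note below). The following is a minimal Python sketch; the function name and the example values of x, m, and $\alpha$ are illustrative, not from the text.

```python
import itertools
import math

def conditional_test_constants(x, m, alpha):
    """Find C0(m) and w(m) for the one-sided test Ha: beta1 > 0.

    Under H0, (Y1,...,Yk) given sum(Y) = m is multinomial(m; 1/k,...,1/k),
    so the null distribution of T(y) = sum(x_j * y_j) can be enumerated.
    """
    k = len(x)
    pmf = {}  # null distribution of T over all outcomes y with sum(y) = m
    for y in itertools.product(range(m + 1), repeat=k):
        if sum(y) != m:
            continue
        t = sum(xj * yj for xj, yj in zip(x, y))
        p = math.factorial(m) / math.prod(math.factorial(v) for v in y) / k**m
        pmf[t] = pmf.get(t, 0.0) + p
    # Walk down from the largest T until the strict tail mass would exceed
    # alpha; the boundary point C0 is rejected with probability w so that
    # P(T > C0) + w * P(T = C0) = alpha exactly.
    tail = 0.0
    for t in sorted(pmf, reverse=True):
        if tail + pmf[t] > alpha:
            return t, (alpha - tail) / pmf[t]  # C0(m), w(m)
        tail += pmf[t]
    return min(pmf), 1.0

# Example: k = 3 groups with scores x = (0, 1, 2), observed total m = 6.
c0, w = conditional_test_constants([0.0, 1.0, 2.0], m=6, alpha=0.05)
print(c0, w)
```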

Note that under the null hypothesis, we regard the data as having a multinomial distribution with index m and parameter vector $\left(\frac{1}{k}, \frac{1}{k}, \ldots, \frac{1}{k}\right)$, independent of $\beta_0$, i.e.,

$$P\left(Y_1 = y_1, \ldots, Y_k = y_k \,\Big|\, \sum_{j=1}^{k} Y_j = m,\ H_0 \text{ is true}\right) = \frac{m!}{\prod_{j=1}^{k} y_j!} \prod_{j=1}^{k} \left(\frac{1}{k}\right)^{y_j} = \frac{m!}{\prod_{j=1}^{k} y_j!} \cdot \frac{1}{k^m}.$$
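As a quick numerical check of this identity (a sketch; the common mean $\exp(\beta_0) = 1.5$ and the counts are arbitrary illustrations), the conditional probability can be computed from the Poisson joint and the Poisson distribution of the total, and compared with the multinomial formula:

```python
import math

def poisson_pmf(y, mu):
    return math.exp(-mu) * mu**y / math.factorial(y)

# Under H0 all means are equal, say mu = exp(beta0) = 1.5 (illustrative).
mu, y = 1.5, (2, 0, 3)          # k = 3 cells, total m = 5
k, m = len(y), sum(y)

# Conditional probability computed directly: Poisson joint divided by the
# Poisson(k*mu) probability of the total.
joint = math.prod(poisson_pmf(v, mu) for v in y)
total = poisson_pmf(m, k * mu)
print(joint / total)

# Multinomial(m; 1/k, ..., 1/k) probability -- the two numbers agree, and
# mu cancels in the ratio, showing independence of beta0.
print(math.factorial(m) / math.prod(math.factorial(v) for v in y) / k**m)
```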

Note:
Let $Y = (Y_1, Y_2, \ldots, Y_k)$ have the probability density function or probability distribution function

$$f(y) = f(y_1, y_2, \ldots, y_k) = C(\theta, \varphi) \exp\left[\theta T(y) + \sum_{j=1}^{r} \varphi_j M_j(y)\right] h(y),$$

where $\varphi = (\varphi_1, \varphi_2, \ldots, \varphi_r)$ and $M(y) = (M_1(y), M_2(y), \ldots, M_r(y))$.



Then, for testing $H_0: \theta \leq 0$ v.s. $H_a: \theta > 0$, a UMP (uniformly most powerful) unbiased level-$\alpha$ test given $M(Y) = m$ is

$$\begin{cases} T(y) > C_0(m), & \text{reject } H_0 \\ T(y) = C_0(m), & \text{reject } H_0 \text{ with probability } w(m) \\ T(y) < C_0(m), & \text{do not reject } H_0 \end{cases}$$

where the constants $C_0(m)$ and $w(m)$ depending on m can be obtained by solving

$$P\left(T(Y) > C_0(m) \mid M(Y) = m,\ H_0 \text{ is true}\right) + w(m)\, P\left(T(Y) = C_0(m) \mid M(Y) = m,\ H_0 \text{ is true}\right) = \alpha.$$

Therefore, for independent Poisson random variables $Y_1, Y_2, \ldots, Y_k$ with means $\mu_1, \mu_2, \ldots, \mu_k$, $\log \mu_j = \beta_0 + \beta_1 x_j$, the probability distribution function is

$$\begin{aligned}
f(y) &= \prod_{j=1}^{k} \frac{\mu_j^{y_j} e^{-\mu_j}}{y_j!} \\
&= \exp\left(-\sum_{j=1}^{k} \mu_j\right) \exp\left(\sum_{j=1}^{k} y_j \log(\mu_j)\right) \frac{1}{\prod_{j=1}^{k} y_j!} \\
&= \exp\left(-\sum_{j=1}^{k} \mu_j\right) \exp\left(\sum_{j=1}^{k} y_j (\beta_0 + \beta_1 x_j)\right) \frac{1}{\prod_{j=1}^{k} y_j!} \\
&= \exp\left(-\sum_{j=1}^{k} \mu_j\right) \exp\left(\beta_1 \sum_{j=1}^{k} x_j y_j + \beta_0 \sum_{j=1}^{k} y_j\right) \frac{1}{\prod_{j=1}^{k} y_j!} \\
&= C(\beta_1, \beta_0) \exp\left[\beta_1 T(y) + \beta_0 M(y)\right] h(y)
\end{aligned}$$

where

$$C(\beta_1, \beta_0) = \exp\left(-\sum_{j=1}^{k} \mu_j\right), \quad T(y) = \sum_{j=1}^{k} x_j y_j, \quad M(y) = \sum_{j=1}^{k} y_j, \quad h(y) = \frac{1}{\prod_{j=1}^{k} y_j!}.$$

Note:
Under $H_0: \beta_1 = 0$, the unconditional moments of $T(Y)$ are

$$E[T(Y)] = E\left[\sum_{j=1}^{k} x_j Y_j\right] = \sum_{j=1}^{k} x_j E(Y_j) = \sum_{j=1}^{k} x_j \exp(\beta_0) \approx \sum_{j=1}^{k} x_j \frac{\sum_{j=1}^{k} y_j}{k},$$

since $E(Y_j) = \mu_j = \exp(\beta_0)$ and $\frac{\sum_{j=1}^{k} y_j}{k}$ is the estimate of $\exp(\beta_0)$. Also,

$$\text{Var}[T(Y)] = \text{Var}\left[\sum_{j=1}^{k} x_j Y_j\right] = \sum_{j=1}^{k} x_j^2 \text{Var}(Y_j) = \sum_{j=1}^{k} x_j^2 \exp(\beta_0) \approx \sum_{j=1}^{k} x_j^2 \frac{\sum_{j=1}^{k} y_j}{k}.$$


Note that the unconditional moments of $T(Y)$ depend on $\beta_0$. On the other hand, under the null hypothesis, denote

$$Y_{\bullet} = \sum_{j=1}^{k} Y_j, \quad y_{\bullet} = \sum_{j=1}^{k} y_j = m.$$

Then,

$$E[T(Y) \mid Y_{\bullet} = y_{\bullet}] = E\left[\sum_{j=1}^{k} x_j Y_j \,\Big|\, Y_{\bullet} = y_{\bullet}\right] = \sum_{j=1}^{k} x_j E(Y_j \mid Y_{\bullet} = y_{\bullet}) = \sum_{j=1}^{k} x_j \frac{y_{\bullet}}{k},$$

since $Y_j \mid Y_{\bullet} = y_{\bullet} \sim B\left(y_{\bullet}, \frac{1}{k}\right)$ and hence $E(Y_j \mid Y_{\bullet} = y_{\bullet}) = \frac{y_{\bullet}}{k}$. Similarly,

$$\begin{aligned}
\text{Var}[T(Y) \mid Y_{\bullet} = y_{\bullet}]
&= \text{Var}\left(\sum_{j=1}^{k} x_j Y_j \,\Big|\, Y_{\bullet} = y_{\bullet}\right) \\
&= \sum_{j=1}^{k} \text{Var}(x_j Y_j \mid Y_{\bullet} = y_{\bullet}) + 2 \sum_{j<i} \text{Cov}(x_j Y_j, x_i Y_i \mid Y_{\bullet} = y_{\bullet}) \\
&= \sum_{j=1}^{k} x_j^2\, y_{\bullet} \frac{1}{k}\left(1 - \frac{1}{k}\right) - 2 \sum_{j<i} x_i x_j\, y_{\bullet} \frac{1}{k} \cdot \frac{1}{k}
\quad \left(\text{since } \text{Cov}(Y_j, Y_i \mid Y_{\bullet} = y_{\bullet}) = -y_{\bullet} \frac{1}{k} \cdot \frac{1}{k},\ j \neq i\right) \\
&= \frac{y_{\bullet}}{k}\left[\sum_{j=1}^{k} x_j^2 \left(1 - \frac{1}{k}\right) - \frac{2}{k} \sum_{j<i} x_i x_j\right] \\
&= \frac{y_{\bullet}}{k}\left[\sum_{j=1}^{k} x_j^2 - \frac{1}{k}\left(\sum_{j=1}^{k} x_j^2 + 2 \sum_{j<i} x_i x_j\right)\right] \\
&= \frac{y_{\bullet}}{k}\left[\sum_{j=1}^{k} x_j^2 - k \bar{x}^2\right] \quad \left(\text{denote } \bar{x} = \frac{\sum_{j=1}^{k} x_j}{k}\right) \\
&= \frac{y_{\bullet}}{k} \sum_{j=1}^{k} (x_j - \bar{x})^2 = \frac{\sum_{j=1}^{k} y_j}{k} \sum_{j=1}^{k} (x_j - \bar{x})^2.
\end{aligned}$$

The unconditional variance of $T(Y)$ is quite different from the exact conditional variance. The conditional variance is unaffected by the addition of a constant to each component of the $x_j$'s.
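Both conditional moment formulas can be confirmed by simulation, keeping only replicates whose total equals a fixed m (a sketch assuming NumPy; all numeric values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.0, 1.0, 2.0, 3.0])   # illustrative covariates
k, beta0 = len(x), 0.4
mu0 = np.exp(beta0)                  # common mean under H0

# Simulate many replicates of (Y1,...,Yk) under H0 and keep those whose
# total equals a fixed m, approximating the conditional distribution.
Y = rng.poisson(mu0, size=(200_000, k))
m = 6
cond = Y[Y.sum(axis=1) == m]
T = cond @ x

# Exact conditional moments from the text: E = (m/k)*sum(x_j) and
# Var = (m/k)*sum((x_j - xbar)^2).
print(T.mean(), m / k * x.sum())
print(T.var(), m / k * ((x - x.mean())**2).sum())
```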

The Poisson log-likelihood function for $(\beta_0, \beta_1)$ in this problem is

$$\begin{aligned}
l_Y(\beta_0, \beta_1) &= \sum_{j=1}^{k} \left[ y_j \log(\mu_j) - \mu_j \right] = \sum_{j=1}^{k} \left[ y_j (\beta_0 + \beta_1 x_j) - \exp(\beta_0 + \beta_1 x_j) \right] \\
&= \beta_0 \sum_{j=1}^{k} y_j + \beta_1 \sum_{j=1}^{k} x_j y_j - \sum_{j=1}^{k} \exp(\beta_0 + \beta_1 x_j).
\end{aligned}$$

Denote $\mu_{\bullet} = \sum_{j=1}^{k} \exp(\beta_0 + \beta_1 x_j) = \sum_{j=1}^{k} \mu_j$. Then, the log-likelihood for $(\mu_{\bullet}, \beta_1)$ becomes

$$\begin{aligned}
l_Y(\mu_{\bullet}, \beta_1) &= \beta_0 \sum_{j=1}^{k} y_j + \beta_1 \sum_{j=1}^{k} x_j y_j - \sum_{j=1}^{k} \exp(\beta_0 + \beta_1 x_j) \\
&= \beta_0 y_{\bullet} + \beta_1 \sum_{j=1}^{k} x_j y_j - \mu_{\bullet} \\
&= y_{\bullet} \log(\mu_{\bullet}) - \mu_{\bullet} + \beta_1 \sum_{j=1}^{k} x_j y_j - \left( y_{\bullet} \log(\mu_{\bullet}) - \beta_0 y_{\bullet} \right) \\
&= y_{\bullet} \log(\mu_{\bullet}) - \mu_{\bullet} + \beta_1 \sum_{j=1}^{k} x_j y_j - y_{\bullet} \log\left( \frac{\sum_{j=1}^{k} \exp(\beta_0 + \beta_1 x_j)}{\exp(\beta_0)} \right) \\
&= y_{\bullet} \log(\mu_{\bullet}) - \mu_{\bullet} + \beta_1 \sum_{j=1}^{k} x_j y_j - y_{\bullet} \log\left( \sum_{j=1}^{k} \exp(\beta_1 x_j) \right) \\
&\equiv l_{Y_{\bullet}}(\mu_{\bullet}) + l_{Y|Y_{\bullet}}(\beta_1),
\end{aligned}$$



where

$$l_{Y_{\bullet}}(\mu_{\bullet}) = y_{\bullet} \log(\mu_{\bullet}) - \mu_{\bullet}$$

is the Poisson log-likelihood for $\mu_{\bullet}$ based on $m = y_{\bullet}$, since $Y_{\bullet} \sim P(\mu_{\bullet})$, and

$$l_{Y|Y_{\bullet}}(\beta_1) = \beta_1 \sum_{j=1}^{k} x_j y_j - y_{\bullet} \log\left( \sum_{j=1}^{k} \exp(\beta_1 x_j) \right)$$

is the multinomial log-likelihood for $\beta_1$ based on the conditional distribution $Y_1, \ldots, Y_k \mid Y_{\bullet} = m \sim M(m, \pi_1, \ldots, \pi_k)$, where $M(m, \pi_1, \ldots, \pi_k)$ is a multinomial distribution with index m and parameters

$$\pi_j = \frac{\exp(\beta_1 x_j)}{\sum_{j=1}^{k} \exp(\beta_1 x_j)}.$$

Note:
The Poisson marginal likelihood based on $Y_{\bullet}$ depends only on $\mu_{\bullet}$, while the multinomial conditional likelihood based on $Y_1, \ldots, Y_k \mid Y_{\bullet}$ depends only on $\beta_1$. Provided that no information is available concerning the value of $\beta_0$, and consequently of $\mu_{\bullet}$, all of the information concerning $\beta_1$ is in the conditional likelihood based on $Y_1, \ldots, Y_k \mid Y_{\bullet}$.
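To illustrate estimating $\beta_1$ from the conditional likelihood alone (a sketch assuming SciPy; the data x, y and the search bounds are made up), one can maximize $l_{Y|Y_{\bullet}}(\beta_1)$ directly; $\beta_0$, and hence $\mu_{\bullet}$, never enters:

```python
import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([0.0, 1.0, 2.0, 3.0])   # illustrative covariates
y = np.array([2, 4, 7, 13])          # illustrative counts

def neg_cond_loglik(beta1):
    # -l_{Y|Y.}(beta1) = -(beta1 * sum(x_j y_j) - y. * log(sum_j exp(beta1 x_j)))
    return -(beta1 * (x * y).sum() - y.sum() * np.log(np.exp(beta1 * x).sum()))

res = minimize_scalar(neg_cond_loglik, bounds=(-5.0, 5.0), method="bounded")
print(res.x)   # conditional MLE of beta1, obtained without touching beta0
```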

Note:
The Fisher information for $(\mu_{\bullet}, \beta_1)$ is

$$\begin{bmatrix} \dfrac{1}{\mu_{\bullet}} & 0 \\ 0 & \sum_{j=1}^{k} \mu_j (x_j - \bar{x})^2 \end{bmatrix},$$

and these parameters are said to be orthogonal. Under suitable limiting conditions, the estimates $\hat{\mu}_{\bullet}$ and $\hat{\beta}_1$ must be approximately independent.
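The vanishing off-diagonal entry can be checked numerically by taking a finite-difference mixed partial of the reparameterized log-likelihood (a sketch with made-up data and an arbitrary evaluation point):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])   # illustrative covariates
y = np.array([2, 4, 7, 13])          # illustrative counts

def loglik(mu_dot, beta1):
    # l(mu., beta1) = y. log(mu.) - mu. + beta1 sum(x_j y_j)
    #                 - y. log(sum_j exp(beta1 x_j))
    return (y.sum() * np.log(mu_dot) - mu_dot
            + beta1 * (x * y).sum()
            - y.sum() * np.log(np.exp(beta1 * x).sum()))

# Central-difference mixed partial d^2 l / (d mu. d beta1) at an arbitrary point.
m0, b0, h = 20.0, 0.3, 1e-4
mixed = (loglik(m0 + h, b0 + h) - loglik(m0 + h, b0 - h)
         - loglik(m0 - h, b0 + h) + loglik(m0 - h, b0 - h)) / (4 * h * h)
print(mixed)   # ~0: no cross term, so the information matrix is diagonal
```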
