
Levene’s Test

Analysis of Variance: Designed Experiments

To test

    H0 : σ1² = σ2² = · · · = σk²

    H1 : at least one pair of the σi² differ.

Test statistic

    W = [(n − k) SSTZ] / [(k − 1) SSEZ] = MSTZ / MSEZ

where SSTZ and SSEZ are the usual sums of squares evaluated
for the transformed data zij, where

    zij = |xij − x̄i|.

If H0 is true,

    W ∼ Fisher-F(k − 1, n − k).
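As a check on the definition above, W can be computed by running the usual one-way ANOVA F computation on the transformed data zij. A minimal sketch in Python (the simulated data and group sizes are illustrative), compared against scipy.stats.levene with center='mean', which implements exactly this statistic:

```python
import numpy as np
from scipy import stats

def levene_w(*groups):
    """Levene's W: the one-way ANOVA F statistic computed on
    the transformed data z_ij = |x_ij - xbar_i|."""
    z = [np.abs(np.asarray(g, float) - np.mean(g)) for g in groups]
    k = len(z)
    n = sum(len(zi) for zi in z)
    zbar = np.concatenate(z).mean()
    sstz = sum(len(zi) * (zi.mean() - zbar) ** 2 for zi in z)   # between-groups SS
    ssez = sum(((zi - zi.mean()) ** 2).sum() for zi in z)       # within-groups SS
    return (sstz / (k - 1)) / (ssez / (n - k))

# illustrative simulated data: three groups, the last with larger spread
rng = np.random.default_rng(1)
groups = [rng.normal(0, s, 12) for s in (1.0, 1.0, 3.0)]

W = levene_w(*groups)
W_ref, p = stats.levene(*groups, center='mean')   # same statistic in scipy
```

Note that scipy's default is center='median' (the Brown-Forsythe variant); center='mean' matches the definition used here.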
Example: PTSD Example (see handout).

n = 45, k = 4.

F-statistic: F = 3.046

Critical values:
    F0.05(3, 41) ≈ 2.84
    F0.025(3, 41) ≈ 3.46
    F0.01(3, 41) ≈ 4.31

Tables in McClave and Sincich give Fα (3, 40).

=⇒ Reject H0 at α = 0.05 (p = 0.039).
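In place of the tables, the critical values and p-value can be computed directly. A brief sketch using scipy, with the (3, 41) degrees of freedom of this example:

```python
from scipy import stats

df1, df2 = 3, 41   # k - 1 = 3, n - k = 41 for the PTSD data

crit_05 = stats.f.ppf(0.95, df1, df2)    # upper 5% point, approx. 2.83
crit_025 = stats.f.ppf(0.975, df1, df2)  # upper 2.5% point
p = stats.f.sf(3.046, df1, df2)          # p-value for the observed F = 3.046
```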

BUT Levene’s Test suggests that the assumption of equal
variances is NOT valid.

Why do we need the three assumptions?

• independence
• Normality
• equal variances
so that we can predict (under H0) that

    F ∼ Fisher-F(k − 1, n − k)

and complete the test (compute p-values and the rejection
region).

But our hypothesis of interest is

    H0 : No difference between treatments.
Under this hypothesis, the treatment labels

    SHOULD NOT MATTER!

i.e. we should be able to exchange the labels and not notice
any major difference in the test statistic.

This leads us to consider permutation or randomization tests:
we compute the test statistic for all possible relabellings
consistent with H0 (retaining the group sample sizes), and use
these values to compute the rejection region.
Randomization/Permutation Tests

Suppose that there are N possible relabellings that give rise to
test statistics

    F1, F2, . . . , FN.

Then the rejection region for significance level α is the interval
to the right of the

    N(1 − α)th largest of the values F1, F2, . . . , FN

and the p-value is

    p = (number of F1, F2, . . . , FN ≥ F) / N

where

    F = MST / MSE

is the true test statistic.
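When N is too large to enumerate, the permutation distribution is usually approximated by sampling random relabellings. A sketch of the procedure with simulated data (the group sizes and the number of resamples B are illustrative):

```python
import numpy as np

def f_stat(x, labels, k):
    """One-way ANOVA F = MST/MSE for responses x and integer group labels."""
    n = len(x)
    xbar = x.mean()
    mst = sum((labels == g).sum() * (x[labels == g].mean() - xbar) ** 2
              for g in range(k)) / (k - 1)
    mse = sum(((x[labels == g] - x[labels == g].mean()) ** 2).sum()
              for g in range(k)) / (n - k)
    return mst / mse

# illustrative data: 4 groups of 5 observations
rng = np.random.default_rng(42)
x = rng.normal(size=20)
labels = np.repeat(np.arange(4), 5)
F_obs = f_stat(x, labels, 4)

# sample B random relabellings (keeping the group sizes) instead of all N
B = 2000
perm_F = np.array([f_stat(x, rng.permutation(labels), 4) for _ in range(B)])

p_value = (perm_F >= F_obs).mean()   # proportion of relabellings with F_i >= F
crit = np.quantile(perm_F, 0.95)     # approximate critical value at alpha = 0.05
```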
If the group sample sizes are n1, n2, . . . , nk, then

    N = n! / (n1! n2! · · · nk!)

where

    n! = n(n − 1)(n − 2) · · · 3 · 2 · 1

("n factorial"), which is potentially very large.
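This count N is a multinomial coefficient and can be evaluated exactly with integer arithmetic; a quick sketch (the group sizes shown are those of the PTSD example):

```python
from math import factorial, prod

def n_relabellings(group_sizes):
    """N = n! / (n1! n2! ... nk!) for group sizes n1, ..., nk."""
    n = sum(group_sizes)
    return factorial(n) // prod(factorial(m) for m in group_sizes)

# group sizes from the PTSD example: n = 45 split as 14, 10, 11, 10
N = n_relabellings([14, 10, 11, 10])   # approx. 2.610e24 relabellings
```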

Example: PTSD Example.

    k = 4, n = 45 (n1 = 14, n2 = 10, n3 = 11, n4 = 10)

There are

    45! / (14! 10! 11! 10!) = 2.610 × 10^24

possible relabellings: a very big number.

We compute F = MST/MSE for each relabelling. For the real data,
F = 3.046.
Example: PTSD Example (continued).

Using this approach, we compute for α = 0.05:

    CRITICAL VALUE : CR = 2.844
    p-VALUE : p = 0.040

Compare this with the ANOVA F-test values

    CRITICAL VALUE : CR = 2.833
    p-VALUE : p = 0.039

(using the Fisher-F(3, 41) distribution).

Thus we obtain virtually identical results, but the
randomization test does not need the assumptions of
normality or equal variances.
Permutation Distribution

[Figure: histogram of the permutation distribution of the F statistic
for the PTSD data, with the Fisher-F(3, 41) density overlaid;
F values from 0 to 10.]
Example: PTSD Example (continued).

Thus the null hypothesis (of equal means) is

    REJECTED

under both procedures at the α = 0.05 significance level.

In this case, the computations give similar conclusions; here
the truth or otherwise of the normality/equal-variance
assumptions does not matter.
Final Note on ANOVA F-test for a CRD

If k = 2, consider F = MST/MSE:

    MST = (1/(k − 1)) Σi ni(x̄i − x̄)²
        = n1(x̄1 − x̄)² + n2(x̄2 − x̄)²
        = [n1 n2 / (n1 + n2)] (x̄1 − x̄2)²

    MSE = (1/(n − k)) Σi Σj (xij − x̄i)² = sP²
        = [(n1 − 1)s1² + (n2 − 1)s2²] / (n1 + n2 − 2)
Therefore

    F = [n1 n2 / (n1 + n2)] (x̄1 − x̄2)² / sP²
      = [ (x̄1 − x̄2) / (sP √(1/n1 + 1/n2)) ]²

Thus F = t², where t is the two-sample t-test statistic.


Thus if k = 2, the ANOVA F-test and the two-sample t-test
are EQUIVALENT:

    t ∼ Student-t(n − 2)
    F ∼ Fisher-F(1, n − 2)

and we must get the same conclusion (to reject H0 or
otherwise) using either statistic.
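The identity F = t² is easy to verify numerically; a small sketch with simulated samples (the sample sizes 8 and 12 are arbitrary):

```python
import numpy as np

# illustrative simulated samples; sizes 8 and 12 are arbitrary
rng = np.random.default_rng(7)
x1, x2 = rng.normal(0.0, 1.0, 8), rng.normal(0.5, 1.0, 12)
n1, n2 = len(x1), len(x2)

# pooled two-sample t statistic
sp2 = ((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
t = (x1.mean() - x2.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n2))

# one-way ANOVA F with k = 2 (MSE reduces to the pooled variance sP²)
x = np.concatenate([x1, x2])
xbar = x.mean()
mst = n1 * (x1.mean() - xbar) ** 2 + n2 * (x2.mean() - xbar) ** 2  # k - 1 = 1
F = mst / sp2
```

The agreement is exact (up to floating-point rounding), not just approximate, since MST collapses to [n1 n2/(n1 + n2)](x̄1 − x̄2)² and MSE is the pooled variance.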
