Goodness of Fit Tests: Do These Data Correspond Reasonably To The Proportions 1:2:1?
We observe data like that in the following table:

  AA  35
  AB  43
  BB  22

We want to know: do these data correspond reasonably to the proportions 1:2:1?
Multinomial distribution
Imagine an urn with $k$ types of balls. Let $p_i$ denote the proportion of type $i$. Draw $n$ balls with replacement. Outcome: $(n_1, n_2, \ldots, n_k)$, where $n_i$ = no. of balls drawn that were of type $i$, so that $\sum_i n_i = n$.
Examples
- The binomial distribution: the case $k = 2$.
- Self a heterozygous plant, obtain 50 progeny, and use test crosses to determine the genotype of each of the progeny.
- Obtain a random sample of 30 people from Hopkins, and classify them according to student/faculty/staff.
Multinomial probabilities
$$\Pr(n_1, \ldots, n_k) = \begin{cases} \dfrac{n!}{n_1! \cdots n_k!}\, p_1^{n_1} \cdots p_k^{n_k} & \text{if } \sum_i n_i = n \\[4pt] 0 & \text{otherwise} \end{cases}$$
Example
Let $(p_1, p_2, p_3) = (0.25, 0.50, 0.25)$ and $n = 100$. Then, for the table observed above,

$$\Pr(n_1 = 35,\ n_2 = 43,\ n_3 = 22) \approx 7.3 \times 10^{-4}$$
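This probability is easy to check directly; here is a minimal sketch in Python using scipy's multinomial distribution:

```python
# Multinomial probability of the observed table under p = (0.25, 0.50, 0.25).
from scipy.stats import multinomial

prob = multinomial.pmf([35, 43, 22], n=100, p=[0.25, 0.50, 0.25])
print(prob)  # approx 7.3e-04, matching the value above
```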
Goodness of fit test
We observe $(n_1, n_2, n_3) \sim \text{multinomial}(n, (p_1, p_2, p_3))$. We seek to test

  H0: $p_1 = 0.25$, $p_2 = 0.5$, $p_3 = 0.25$   versus   Ha: H0 is false.

We need: (a) a test statistic; (b) the null distribution of the test statistic.
Test statistics
Let $n_i^0$ denote the expected count in group $i$ if H0 is true (that is, $n_i^0 = n p_i^0$).

LRT statistic:

$$\text{LRT} = 2 \ln \left[ \frac{\Pr(\text{data} \mid \hat{p} = \text{MLE})}{\Pr(\text{data} \mid H_0)} \right] = \cdots = 2 \sum_i n_i \ln(n_i / n_i^0)$$

$\chi^2$ test statistic:

$$X^2 = \sum_i \frac{(n_i - n_i^0)^2}{n_i^0} = \sum \frac{(\text{observed} - \text{expected})^2}{\text{expected}}$$

The null distribution of such a statistic can, in principle, be computed exactly, e.g.,

$$\Pr(\text{LRT} = g \mid H_0) = \sum_{(n_1, n_2, n_3)\ \text{giving LRT} = g} \Pr(n_1, n_2, n_3 \mid H_0)$$

but in practice we rely on computer simulation or an asymptotic approximation.
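Both statistics are simple to compute; a minimal sketch in Python (the function and variable names are ours, not from the slides):

```python
import numpy as np
from scipy.special import xlogy  # xlogy(0, 0) = 0, so empty cells are safe

def gof_statistics(observed, p0):
    """Return (LRT, X^2) for testing H0: p = p0 with multinomial counts."""
    observed = np.asarray(observed, dtype=float)
    expected = observed.sum() * np.asarray(p0)   # n_i^0 = n * p_i^0
    lrt = 2 * np.sum(xlogy(observed, observed / expected))
    x2 = np.sum((observed - expected) ** 2 / expected)
    return lrt, x2

print(gof_statistics([35, 43, 22], [0.25, 0.50, 0.25]))  # approx (4.96, 5.34)
```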
Computer simulation
1. Simulate a table conforming to the null hypothesis, e.g., simulate $(n_1, n_2, n_3) \sim \text{multinomial}(n = 100, (1/4, 1/2, 1/4))$.
2. Calculate your test statistic.
3. Repeat steps (1) and (2) many (e.g., 1000 or 10,000) times.

Estimated critical value = the 95th percentile of the results.
Estimated P-value = the proportion of results ≥ the observed value.
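A minimal sketch of this procedure in Python, assuming the gof_statistics helper sketched above:

```python
import numpy as np

rng = np.random.default_rng(0)

n, p0 = 100, [0.25, 0.50, 0.25]
obs_lrt, obs_x2 = gof_statistics([35, 43, 22], p0)

# Steps 1-3: simulate tables under H0 and compute the statistic for each.
sim_x2 = np.array([gof_statistics(rng.multinomial(n, p0), p0)[1]
                   for _ in range(10_000)])

print(np.percentile(sim_x2, 95))   # estimated critical value
print(np.mean(sim_x2 >= obs_x2))   # estimated P-value
```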
Asymptotic approximation
Very mathematically savvy people have shown that, if the sample size, $n$, is large, both the LRT statistic and the $\chi^2$ statistic follow, approximately, a $\chi^2$ distribution with $k - 1$ degrees of freedom.
Example
We observe the following data:

  AA  35
  AB  43
  BB  22

We imagine that these are counts $(n_1, n_2, n_3) \sim \text{multinomial}(n = 100, (p_1, p_2, p_3))$. We seek to test H0: $p_1 = 1/4$, $p_2 = 1/2$, $p_3 = 1/4$. We calculate LRT ≈ 4.96 and $X^2$ ≈ 5.34. Referring to the asymptotic approximations ($\chi^2$ distribution with 2 degrees of freedom), we obtain P ≈ 8.4% and P ≈ 6.9%, respectively. With 10,000 simulations under H0, we obtain P ≈ 8.9% and P ≈ 7.4%.
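The asymptotic P-values quoted above can be reproduced with the $\chi^2$ survival function; a minimal sketch:

```python
from scipy.stats import chi2

# k = 3 categories and a simple null, so df = k - 1 = 2.
print(chi2.sf(4.96, df=2))  # approx 0.084 (LRT)
print(chi2.sf(5.34, df=2))  # approx 0.069 (X^2)
```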
[Figure: histograms of the simulated null distributions of the LRT statistic (G) and the $X^2$ statistic, with the observed values marked.]
If the sample size is sufficiently large that the expected count in each cell is ≥ 5, use the asymptotic approximation without worries. Otherwise, consider using computer simulations.
Composite hypotheses
Sometimes, we ask not

  $p_{AA} = 0.25$, $p_{AB} = 0.5$, $p_{BB} = 0.25$

but rather something like:

  $p_{AA} = f^2$, $p_{AB} = 2f(1 - f)$, $p_{BB} = (1 - f)^2$ for some $f$.
For example: genotypes, of a random sample of individuals, at a diallelic locus. Question: is the locus in Hardy-Weinberg equilibrium (as expected in the case of random mating)? Example data:

  AA  5
  AB  20
  BB  75
Another example
ABO blood groups; 3 alleles: A, B, O.

Phenotype A = genotype AA or AO; B = genotype BB or BO; AB = genotype AB; O = genotype OO.

Allele frequencies: $f_A$, $f_B$, $f_O$ (note that $f_A + f_B + f_O = 1$).

Under Hardy-Weinberg equilibrium, we expect:

$$p_A = f_A^2 + 2 f_A f_O \qquad p_B = f_B^2 + 2 f_B f_O \qquad p_{AB} = 2 f_A f_B \qquad p_O = f_O^2$$

Example data:

  O   104
  A   91
  B   36
  AB  19
Example 1 (diallelic locus). Under H0:

$$p_{AA} = f^2, \quad p_{AB} = 2f(1 - f), \quad p_{BB} = (1 - f)^2$$

LRT statistic:

$$\text{LRT} = 2 \ln \left[ \frac{\Pr(n_{AA}, n_{AB}, n_{BB} \mid \hat{p}_{AA}, \hat{p}_{AB}, \hat{p}_{BB})}{\Pr(n_{AA}, n_{AB}, n_{BB} \mid \tilde{p}_{AA}, \tilde{p}_{AB}, \tilde{p}_{BB})} \right]$$

General MLEs: $\hat{p}_{AA} = n_{AA}/n$, $\hat{p}_{AB} = n_{AB}/n$, $\hat{p}_{BB} = n_{BB}/n$.

MLE under H0: $\hat{f} = (2 n_{AA} + n_{AB})/(2n)$, giving $\tilde{p}_{AA} = \hat{f}^2$, $\tilde{p}_{AB} = 2\hat{f}(1 - \hat{f})$, $\tilde{p}_{BB} = (1 - \hat{f})^2$.

Example 2 (ABO). MLE under H0: requires numerical optimization. Call the estimates $(\hat{f}_O, \hat{f}_A, \hat{f}_B)$, giving $(\tilde{p}_O, \tilde{p}_A, \tilde{p}_B, \tilde{p}_{AB})$.

LRT statistic:

$$\text{LRT} = 2 \ln \left[ \frac{\Pr(n_O, n_A, n_B, n_{AB} \mid \hat{p}_O, \hat{p}_A, \hat{p}_B, \hat{p}_{AB})}{\Pr(n_O, n_A, n_B, n_{AB} \mid \tilde{p}_O, \tilde{p}_A, \tilde{p}_B, \tilde{p}_{AB})} \right]$$
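The ABO MLE under H0 can be obtained by maximizing the multinomial log-likelihood numerically; a minimal sketch, assuming the example data above (the parameterization and names are ours):

```python
import numpy as np
from scipy.optimize import minimize

counts = np.array([104, 91, 36, 19])           # O, A, B, AB

def negloglik(f):
    fA, fB = f
    fO = 1 - fA - fB
    if min(fA, fB, fO) <= 0:
        return np.inf                          # stay inside the simplex
    p = np.array([fO**2,                       # Hardy-Weinberg phenotype
                  fA**2 + 2 * fA * fO,         # probabilities from the slide
                  fB**2 + 2 * fB * fO,
                  2 * fA * fB])
    return -np.sum(counts * np.log(p))

res = minimize(negloglik, x0=[0.3, 0.2], method="Nelder-Mead")
fA, fB = res.x
print(fA, fB, 1 - fA - fB)                     # f_O comes out near 0.63 here
```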
$\chi^2$ statistic: obtain the MLE(s) under H0; calculate the corresponding cell probabilities; turn these into (estimated) expected counts under H0; calculate

$$X^2 = \sum \frac{(\text{observed} - \text{expected})^2}{\text{expected}}$$

Computer simulation (parametric bootstrap):

- Simulate data under H0 (plug in the MLEs for the observed data).
- Calculate the MLE with the simulated data.
- Calculate the test statistic with the simulated data.
- Repeat many times.
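A minimal sketch of this parametric bootstrap for example 1, using the closed-form MLE $\hat{f} = (2 n_{AA} + n_{AB})/(2n)$; the helper names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
observed = np.array([5, 20, 75])               # AA, AB, BB

def hw_x2(counts):
    """X^2 with the H0 MLE re-estimated from the given table."""
    n = counts.sum()
    f = (2 * counts[0] + counts[1]) / (2 * n)  # allele-frequency MLE
    expected = n * np.array([f**2, 2 * f * (1 - f), (1 - f)**2])
    return np.sum((counts - expected) ** 2 / expected)

obs_stat = hw_x2(observed)                     # approx 4.65 on these data

# Simulate under H0 with the MLE plugged in, re-estimating f each time.
f0 = (2 * observed[0] + observed[1]) / (2 * observed.sum())
p0 = [f0**2, 2 * f0 * (1 - f0), (1 - f0)**2]
sims = np.array([hw_x2(rng.multinomial(100, p0)) for _ in range(10_000)])
print(np.mean(sims >= obs_stat))               # estimated P-value
```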
Asymptotic approximation
Under H0, if the sample size, $n$, is large, both the LRT statistic and the $\chi^2$ statistic follow, approximately, a $\chi^2$ distribution with $k - s - 1$ degrees of freedom, where $s$ = no. of parameters estimated under H0. Note that $s = 1$ for example 1 and $s = 2$ for example 2 (with $k = 3$ and $k = 4$, respectively), and so df = 1 for both examples.
Results, example 1
Example data:

  AA  5
  AB  20
  BB  75

MLE: $\hat{f} = (2 \cdot 5 + 20)/200 = 0.15$.

Expected counts:

  AA  2.25
  AB  25.5
  BB  72.25

Test statistics: LRT ≈ 3.87 and $X^2$ ≈ 4.65.

P ≈ 4.9% and P ≈ 8.2%.

[Figure: histograms of the simulated null distributions of the LRT statistic (G) and the $X^2$ statistic, with the observed values marked.]
Results, example 2
Example data:

  O   104
  A   91
  B   36
  AB  19

MLE: $(\hat{f}_O, \hat{f}_A, \hat{f}_B) \approx (0.63, 0.25, 0.12)$.

Expected counts:

  O   98.5
  A   94.2
  B   42.0
  AB  15.3

Test statistics: LRT ≈ 2.1 and $X^2$ ≈ 2.2.

P ≈ 16% and P ≈ 17%.

[Figure: histograms of the simulated null distributions of the LRT statistic (G) and the $X^2$ statistic, with the observed values marked.]
Example 3
Data on the no. of sperm bound to an egg ($n$ = 38 eggs):

  no. sperm   0    1    2    3    4    ≥5
  count      26    4    4    2    1    1

We ask: do these counts follow a Poisson($\lambda$) distribution, where $\lambda$ = mean?
MLE: $\hat{\lambda}$ = sample mean = 27/38 ≈ 0.71. Observed and (estimated) expected counts:

  no. sperm    0      1      2     3     4     ≥5
  observed    26      4      4     2     1     1
  expected    18.7   13.3    4.7   1.1   0.2   0.03
$$X^2 = \sum \frac{(\text{obs} - \text{exp})^2}{\text{exp}} = \cdots = 42.8$$

$$\text{LRT} = 2 \sum \text{obs} \cdot \ln(\text{obs}/\text{exp}) = \cdots = 18.8$$

Compare to $\chi^2$(df = 6 − 1 − 1 = 4): p-value = $1 \times 10^{-8}$ ($\chi^2$) and $9 \times 10^{-4}$ (LRT). By simulation: p-value = 16/10,000 ($\chi^2$) and 7/10,000 (LRT).
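A minimal sketch of this Poisson fit in Python, assuming the last category is "5 or more" (the names are ours):

```python
import numpy as np
from scipy.stats import poisson
from scipy.special import xlogy

observed = np.array([26, 4, 4, 2, 1, 1])        # 0, 1, 2, 3, 4, >=5 sperm
n = observed.sum()                              # 38 eggs
lam = np.sum(np.arange(6) * observed) / n       # MLE: sample mean, approx 0.71

probs = poisson.pmf(np.arange(5), lam)
probs = np.append(probs, 1 - probs.sum())       # lump the tail into ">=5"
expected = n * probs

print(np.sum((observed - expected)**2 / expected))       # X^2, approx 42.8
print(2 * np.sum(xlogy(observed, observed / expected)))  # LRT, approx 18.8
```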
[Figure: histograms of the simulated null distributions of the $\chi^2$ statistic and the LRT statistic, with the observed values marked.]
A final note

With these sorts of goodness-of-fit tests, we are often happy when our model does fit. In other words, we often prefer to fail to reject H0. Such a conclusion, that the data fit the model reasonably well, should be phrased and considered with caution. We should ask: how much power do I have to detect, with these limited data, a meaningful deviation from H0?