
INFERENTIAL STATISTICS

Hypothesis Testing

Meaning of Hypothesis

A hypothesis is a tentative explanation for certain events, phenomena, or behaviors. In statistical language, a hypothesis is a statement of prediction or relationship between or among variables. Plainly stated, a hypothesis is the most specific statement of a problem. It is a requirement that these variables are related. Furthermore, the hypothesis is testable, which means that the relationship between the variables can be tested against the data gathered about the variables.

Null and Alternative Hypotheses

There are two ways of stating a hypothesis. A hypothesis that is intended for statistical testing is generally stated in the null form. Being the starting point of the testing process, it serves as our working hypothesis. A null hypothesis (Ho) expresses the idea of non-significance of difference or non-significance of relationship between the variables under study. It is so stated for the purpose of being accepted or rejected.

If the null hypothesis is rejected, the alternative hypothesis (Ha) is accepted. This is the researcher's way of stating his research hypothesis in an operational manner. The research hypothesis is a statement of the expectation derived from the theory under study. If the related literature points to the finding that a certain technique of teaching, for example, is effective, we have to assume the same prediction. This is our alternative hypothesis. We cannot predict otherwise, since there would be no scientific basis for such a prediction.

Chain of reasoning for inferential statistics

1. Sample(s) must be randomly selected.

2. The sample estimate is compared to the sampling distribution of estimates from samples of the same size.

3. Determine the probability that the sample estimate reflects the population parameter.

The four possible outcomes in hypothesis testing


                       Actual Population Comparison
DECISION               Null Hyp. True             Null Hyp. False
                       (there is no difference)   (there is a difference)
Rejected Null Hyp.     Type I error (alpha)       Correct Decision
Did not Reject Null    Correct Decision           Type II Error

(Alpha = probability of making a Type I error)

Regardless of whether statistical tests are conducted by hand or through statistical software, there is an implicit understanding that systematic steps are being followed to
determine statistical significance. These general steps are described on the following
page and include 1) assumptions, 2) stated hypothesis, 3) rejection criteria, 4)
computation of statistics, and 5) decision regarding the null hypothesis. The underlying
logic is based on rejecting a statement of no difference or no association, called the null
hypothesis. The null hypothesis is only rejected when we have evidence beyond a
reasonable doubt that a true difference or association exists in the population(s) from
which we drew our random sample(s).

Reasonable doubt is based on probability sampling distributions and can vary at the
researcher's discretion. Alpha .05 is a common benchmark for reasonable doubt. At
alpha .05 we know from the sampling distribution that a test statistic will only occur by
random chance five times out of 100 (5% probability). Since a test statistic that results in
an alpha of .05 could only occur by random chance 5% of the time, we assume that the
test statistic resulted because there are true differences between the population
parameters, not because we drew an extremely biased random sample.

When learning statistics we generally conduct statistical tests by hand. In these situations, we establish before the test is conducted what test statistic is needed (called
the critical value) to claim statistical significance. So, if we know for a given sampling
distribution that a test statistic of plus or minus 1.96 would only occur 5% of the time
randomly, any test statistic that is 1.96 or greater in absolute value would be statistically
significant. In an analysis where the test statistic was exactly 1.96, you would have a 5% chance of being wrong if you claimed statistical significance. If the test statistic was 3.00, statistical significance could also be claimed, but the probability of being wrong would be much less (about .003 if using a 2-tailed test, or three-tenths of one percent; 0.3%). Both .05 and .003 are known as alpha, the probability of a Type I error.

When conducting statistical tests with computer software, the exact probability of a Type
I error is calculated. It is presented in several formats but is most commonly reported as
"p <" or "Sig." or "Signif." or "Significance." Using "p <" as an example, if a priori you
established a threshold for statistical significance at alpha .05, any test statistic with
significance at or less than .05 would be considered statistically significant and you
would be required to reject the null hypothesis of no difference. The following table links
p values with a benchmark alpha of .05:

p <    Alpha   Probability of Type I Error                Final Decision
.05    .05     5% chance difference is not significant    Statistically significant
.10    .05     10% chance difference is not significant   Not statistically significant
.01    .05     1% chance difference is not significant    Statistically significant
.96    .05     96% chance difference is not significant   Not statistically significant
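
Where software reports an exact p-value, the decision in the table above is mechanical. A minimal sketch in Python, assuming the p-values have already been computed by some test:

```python
# Compare reported p-values against an a-priori alpha of .05;
# the p-values below are the ones from the table above.
alpha = 0.05

for p in (0.05, 0.10, 0.01, 0.96):
    decision = "statistically significant" if p <= alpha else "not statistically significant"
    print(f"p = {p:.2f} vs alpha = {alpha:.2f} -> {decision}")
```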

Steps to Hypothesis Testing

Hypothesis testing is used to establish whether the differences exhibited by random samples can be inferred to the populations from which the samples originated.

General Assumptions

 Population is normally distributed
 Random sampling
 Mutually exclusive comparison samples
 Data characteristics match statistical technique

For interval / ratio data use t-tests, Pearson correlation, ANOVA, regression.

For nominal / ordinal data use difference of proportions, chi-square and related measures of association.

State the Hypothesis

Null Hypothesis (Ho): There is no difference between ___ and ___.

Alternative Hypothesis (Ha): There is a difference between __ and __.

Note: The alternative hypothesis will indicate whether a 1-tailed or a 2-tailed test
is utilized to reject the null hypothesis.

Ha for a 1-tailed test: The __ of __ is greater (or less) than the __ of __.

Set the Rejection Criteria


This determines how different the parameters and/or statistics must be before the
null hypothesis can be rejected. This "region of rejection" is based on alpha (α),
the error associated with the confidence level. The point of rejection is known as
the critical value.
Compute the Test Statistic
The collected data are converted into standardized scores for comparison with the
critical value.
Decide Results of Null Hypothesis
If the test statistic equals or exceeds the critical value(s) bracketing the region of rejection, the null hypothesis is rejected. In other words, the chance that the difference exhibited between the sample statistics is due to sampling error is remote; there is an actual difference in the population.

Let us consider an experiment involving two groups, an experimental group and a control group. The experimenter wants to test whether the treatment (values clarification lessons) will improve the self-concept of the experimental group. The same treatment is not given to the control group. It is presumed that any difference between the two groups after the treatment can be attributed to the experimental treatment with a certain degree of confidence.

The hypothesis for this experiment can be stated in various ways:

a) No existence or existence of a difference between groups

Ho: There is no significant difference in self-concept between the group exposed to values clarification lessons and the group not exposed to the same.

Ha: The self-concept of the group exposed to values clarification lessons differs significantly from that of the other group.

b) No existence or existence of an effect of the treatment

Ho: There is no significant effect of the values clarification lessons on the self-concept of the students.

Ha: Values clarification lessons have a significant effect on the self-concept of students.

c) No existence of relationship between the variables


Ho: The self-concept of the students is not significantly related to the values clarification lessons conducted on them.

Ha: The self-concept of the students is significantly related to the values clarification lessons they were exposed to.

Parametric Test

What are the parametric tests?


The parametric tests are tests that require normal distribution, with levels of measurement expressed in interval or ratio data. The parametric tests are the following:

z-test for One Sample Group

z-test for Two Sample Means

z-test for Two Sample Proportions

t-test for one population mean compared to a sample mean

t-test for Independent Samples

t-test for Correlated Samples

F-test (ANOVA)

r (Pearson Product Moment Coefficient of Correlation)

Y=a+bx (Simple Linear Regression Analysis)

Y = b₀ + b₁x₁ + b₂x₂ + ... + bₙxₙ (Multiple Regression Analysis)

What is the z-test?

The z-test is another test under parametric statistics which requires normality of distribution. It uses the two population parameters µ and σ.

It is used to compare two means: the sample mean and the perceived population mean.

It is also used to compare two sample means taken from the same population. It is used when the samples are equal to or greater than 30. The z-test can be applied in two ways: the One-Sample Mean Test and the Two-Sample Mean Test.

The tabular values of the z-test at the .01 and .05 levels of significance are shown below.

Test          Level of Significance
              .01         .05
One-tailed    ±2.33       ±1.645
Two-tailed    ±2.575      ±1.96
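
These tabular values can be recovered from the normal curve itself. A small sketch, assuming scipy is available (any z table gives the same numbers):

```python
# Critical z values: a one-tailed test puts all of alpha in one tail;
# a two-tailed test splits alpha between the two tails.
from scipy.stats import norm

for alpha in (0.01, 0.05):
    one_tailed = norm.ppf(1 - alpha)
    two_tailed = norm.ppf(1 - alpha / 2)
    print(f"alpha = {alpha}: one-tailed ±{one_tailed:.3f}, two-tailed ±{two_tailed:.3f}")
```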

What is the z-test for one sample group?

The z-test for one sample group is used to compare the perceived population mean µ against the sample mean x̄.

When is the z-test for a one-sample group used?

The one-sample group test is used when the sample is being compared to the perceived population mean. However, if the population standard deviation is not known, the sample standard deviation can be used as a substitute.

Why is the z-test used for a one-sample group?

The z-test is used for a one-sample group because it is appropriate for comparing the perceived population mean µ against the sample mean x̄. We are interested in whether a significant difference exists between the population mean µ and the sample mean. For instance, a certain tire company would claim that the life span of its product will last 25,000 kilometers. To check the claim, sample tires will be tested by getting the sample mean x̄.

How do we use the z-test for a one-sample group?

Population Mean Compared to Sample Mean (Z-test)

The formula is

z = (x̄ − µ) / (σ/√n)   or   z = (x̄ − µ)√n / σ

Where:
x̄ = sample mean
µ = hypothesized value of the population mean
σ = population standard deviation
n = sample size
Example 1

Data from a school census show that the mean weight of college students was 45 kilos, with a standard deviation of 3 kilos. A sample of 100 college students was found to have a mean weight of 47 kilos. Are the 100 college students really heavier than the rest, using .05 significance level?

Step 1. Ho: The 100 college students are not really heavier than the rest. (µ = 45 kilos)
Step 2. Set .05 level of significance.
Step 3. The standard deviation given is based on the population, and n > 30. Therefore the z-test is to be used.
Step 4. The given values in the problem are:

x̄ = 47 kilos    σ = 3 kilos
µ = 45 kilos    n = 100

The formula to be used is

𝐱̅−  𝟒𝟕− 𝟒𝟓 2 2
𝑍=  = 3⁄ =3 = = 6.67
⁄ 𝑛 ⁄10 .3
√ √100

Step 5. The tabular value for a z – test at .05 level of significance is found in the following
table. Critical values of z for other levels of significance are found in the table of
normal curve areas.

Critical Values of Z at Varying Significance Levels

Significance Level   .10       .05       .025      .01
One-tailed test      ±1.28     ±1.645    ±1.96     ±2.33
Two-tailed test      ±1.645    ±1.96     ±2.24     ±2.58

Based on the table above, the tabular value of z for a one-tailed test at .05 level of significance is +1.645.

Step 6. The computed value 6.67 is greater than the tabular value 1.645. Therefore, the null hypothesis is rejected.

The 100 college students are really heavier than the rest.
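
A sketch of Example 1 in Python, assuming scipy is available; the numbers are the ones given in the problem:

```python
from math import sqrt
from scipy.stats import norm

x_bar, mu, sigma, n = 47, 45, 3, 100
z = (x_bar - mu) / (sigma / sqrt(n))   # (47 - 45) / (3 / 10) = 6.67
critical = norm.ppf(0.95)              # one-tailed test at .05 -> 1.645
print(f"z = {z:.2f}, critical = {critical:.3f}, reject Ho: {z > critical}")
```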
What is the z-test for a two-sample mean test?

The z-test for a two-sample mean test is another parametric test used to compare
the means of two independent groups of samples drawn from a normal population if
there are more than 30 samples for every group.

When do we use the z-test for two sample mean?

The z-test for two-sample mean is used when we compare the means of samples
of independent groups taken from a normal population.

Why do we use the z-test?

The z-test is used to find out if there is a significant difference between the two populations by comparing only their sample means.

How do we use the z-test for a two-sample mean test?

The formula is

z = (x̄₁ − x̄₂) / (σ√(1/n₁ + 1/n₂))   or   z = (x̄₁ − x̄₂) / √(s₁²/n₁ + s₂²/n₂)
where:

x̄₁ = the mean of sample 1
x̄₂ = the mean of sample 2
σ = population standard deviation
s₁, s₂ = the standard deviations of samples 1 and 2
n₁ = size of sample 1
n₂ = size of sample 2

Comparing Two Sample Means (Z-test)

Example 2

A researcher wishes to find out whether or not there is a significant difference between the weekly allowances of morning and afternoon students in his school. By random sampling, he took a sample of 239 students in the morning session. These students were found to have a mean weekly allowance of P142.00. The researcher also took a sample of 209 students in the afternoon session. They were found to have a mean weekly allowance of P148.00. The total population of students in that school has a standard deviation of P40. Is there a significant difference between the two samples at .01 level of significance?

Ho: There is no significant difference between the samples (x̄₁ = x̄₂).

The given values in the sample problem are:

x̄₁ = P142   x̄₂ = P148
n₁ = 239    n₂ = 209
σ = P40

The formula to be used is:

z = (x̄₁ − x̄₂) / (σ√(1/n₁ + 1/n₂))
  = (P142 − P148) / (P40√(1/239 + 1/209))
  = −P6 / (P40√(.0042 + .0048))
  = −P6 / (P40√.0090)
  = −P6 / (P40 × .095)
  = −P6 / P3.80
z = −1.579

The absolute computed value |−1.579| is less than the absolute tabular value 2.58 for a two-tailed test. The null hypothesis is not rejected.

There is no significant difference between the two samples.
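
The same computation as a Python sketch, assuming scipy is available:

```python
from math import sqrt
from scipy.stats import norm

x1, x2 = 142, 148          # mean weekly allowances
n1, n2 = 239, 209
sigma = 40                 # population standard deviation

z = (x1 - x2) / (sigma * sqrt(1/n1 + 1/n2))   # about -1.58
critical = norm.ppf(1 - 0.01 / 2)             # two-tailed test at .01 -> 2.576
print(f"z = {z:.3f}, reject Ho: {abs(z) > critical}")
```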

Comparing Two Sample Proportions (Z-test)

The formula is:

z = (p₁ − p₂) / √(p₁q₁/n₁ + p₂q₂/n₂)

Example 3

A sample survey of a television program in Metro Manila shows that in one university 80 of 200 men dislike the program and 75 of 250 women dislike the same program. We want to decide whether the difference between the two sample proportions, 80/200 = .40 and 75/250 = .30, is significant or not, at .05 level of significance.

Ho: There is no significant difference between the two sample proportions. (p₁ = p₂)

The given values in the problem are:

p₁ = .40   q₁ = 1 − p₁ = 1 − .40 = .60
p₂ = .30   q₂ = 1 − p₂ = 1 − .30 = .70
n₁ = 200   n₂ = 250

The formula to be used is:

z = (p₁ − p₂) / √(p₁q₁/n₁ + p₂q₂/n₂)
  = (.40 − .30) / √((.40)(.60)/200 + (.30)(.70)/250)
  = .10 / √(.0012 + .00084)
  = .10 / √.00204
  = .10 / .045
z = 2.22

Since the computed z-value (2.22) falls in the rejection region (it is greater than the tabular value 1.96 for a two-tailed test), the null hypothesis is rejected.

There is significant difference between men and women viewership.
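
A sketch of Example 3 in Python; the counts are from the survey above, and scipy is assumed available:

```python
from math import sqrt
from scipy.stats import norm

p1, n1 = 80 / 200, 200     # men who dislike the program
p2, n2 = 75 / 250, 250     # women who dislike the program

se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = (p1 - p2) / se                      # about 2.21
critical = norm.ppf(1 - 0.05 / 2)       # two-tailed test at .05 -> 1.96
print(f"z = {z:.2f}, reject Ho: {z > critical}")
```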


Comparing a Population Mean to a Sample Mean (T-test)

The formula is:

t = (x̄ − µ) / (s/√(n − 1))

Example 4

A researcher knows that the average height of Filipino women is 1.525 meters. A random sample of 26 women was taken and was found to have a mean height of 1.56 meters, with a standard deviation of .10 meters. Is there reason to believe that the 26 women in the sample are significantly taller than the others at .05 significance level?

Ho: The sample is not significantly taller than the other Filipino women (µ = 1.525).

The given values in the problem are:

𝐱̅ = 1.56 meters
µ = 1.525 meters
n = 26
s = .10 meters

The formula to be used is:


𝐱̅−  𝟏.𝟓𝟔− 𝟏.𝟓𝟐𝟓 .𝟎𝟑𝟓 (.035) 5
t=𝐬 = 𝟏𝟎⁄ = 𝟏𝟎 = = 1.75
⁄ ⁄ 1 10
√𝐧−𝟏 √𝟐𝟔−𝟏 √𝟐𝟓

The computed value (1.75) is greater than the tabular value of 1.708 (df = n − 1 = 25, one-tailed test at .05). The Ho is rejected.

The sample is significantly taller than the others.
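
A sketch of Example 4 in Python, following the text's s/√(n − 1) form of the standard error; scipy is used only for the critical value:

```python
from math import sqrt
from scipy.stats import t as t_dist

x_bar, mu, s, n = 1.56, 1.525, 0.10, 26
t = (x_bar - mu) / (s / sqrt(n - 1))    # .035 / .02 = 1.75
critical = t_dist.ppf(0.95, df=n - 1)   # one-tailed test at .05, df = 25 -> 1.708
print(f"t = {t:.2f}, critical = {critical:.3f}, reject Ho: {t > critical}")
```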

What is the t-test for independent samples?

The t-test is a test of difference between two independent groups. The means x̄₁ and x̄₂ are being compared.

When do we use the t-test for independent samples?

The t-test for independent samples is used when we compare the means of two independent groups, provided that:

 the distribution is normally distributed (Sk = 0 and Ku = .265);
 we use interval or ratio data;
 the sample is less than or equal to 30.

Why do we use the t-test for independent sample?

The t-test is used for independent samples because it is a more powerful test compared with other tests of difference between two independent groups.

How do we use the t-test for independent samples?

t = (x̄₁ − x̄₂) / √{ [((n₁ − 1)s₁² + (n₂ − 1)s₂²) / (n₁ + n₂ − 2)] [1/n₁ + 1/n₂] }

Where:
t = the t-test
x̄₁ = the mean of group 1 or sample 1
x̄₂ = the mean of group 2 or sample 2
s₁ = the standard deviation of group 1 or sample 1
s₂ = the standard deviation of group 2 or sample 2
n₁ = the number of observations in group 1
n₂ = the number of observations in group 2
Comparing two Sample Means or Independent Groups

Example 5.

A teacher wishes to test whether or not the Case Method of teaching is more effective than the Traditional Method. She picks two classes of approximately equal intelligence (verified through an administered IQ test). She gathers a sample of 18 students to whom she applies the Case Method, while a second group of 14 is taught through the Traditional Method. After the experiment, an objective test revealed that the first sample got a mean score of 28.6 with a standard deviation of 5.9, while the second group got a mean score of 21.7 with a standard deviation of 4.6. Based on the result of the administered test, can we say that the Case Method is more effective than the Traditional Method?

Ho : The Case Method is as effective as the Traditional Method.

The given values in the problem are:

x̄₁ = 28.6   x̄₂ = 21.7
s₁ = 5.9    s₂ = 4.6
n₁ = 18     n₂ = 14

The formula to be used is:

t = (x̄₁ − x̄₂) / √{ [((n₁ − 1)s₁² + (n₂ − 1)s₂²)/(n₁ + n₂ − 2)] [1/n₁ + 1/n₂] }

  = (28.6 − 21.7) / √{ [((18 − 1)(5.9)² + (14 − 1)(4.6)²)/(18 + 14 − 2)] [1/18 + 1/14] }

  = 6.9 / √{ [((17)(34.81) + (13)(21.16))/30] (.06 + .07) }

  = 6.9 / √{ [(591.77 + 275.08)/30] (.13) }

  = 6.9 / √[(28.895)(.13)]

  = 6.9 / √3.756

  = 6.9 / 1.94

t = 3.56

The computed t-value of 3.56 is in the rejection region: it is greater than the tabular value of 1.697 (df = n₁ + n₂ − 2 = 30), using the one-tailed test. The null hypothesis is therefore rejected. The Case Method is more effective than the Traditional Method of teaching.
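
A sketch of Example 5 from the summary statistics (the text's hand computation rounds 1/18 + 1/14 to .13, giving 3.56; without that rounding the value is about 3.60):

```python
from math import sqrt
from scipy.stats import t as t_dist

x1, s1, n1 = 28.6, 5.9, 18   # Case Method group
x2, s2, n2 = 21.7, 4.6, 14   # Traditional Method group

df = n1 + n2 - 2
pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df
t = (x1 - x2) / sqrt(pooled_var * (1/n1 + 1/n2))   # about 3.60
critical = t_dist.ppf(0.95, df)                    # one-tailed .05, df = 30 -> 1.697
print(f"t = {t:.2f}, critical = {critical:.3f}, reject Ho: {t > critical}")
```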

What is the t-test for correlated samples?

The t-test for correlated samples is another parametric test applied to one group
of samples. It can be used in the evaluation of a certain program or treatment.
Since this is another parametric test, conditions must be met like the normal
distribution and the use of interval or ratio data.

When do we use the t-test for correlated samples?

The t-test for correlated samples is applied when the mean before and the mean after are being compared. The pretest (mean before) is measured, the treatment or intervention is applied, and then the posttest (mean after) is likewise measured. Then the two means (pretest vs. posttest) are compared.

Why do we use the t-test for correlated samples?

The t-test for correlated samples is used to find out if a difference exists between
the before and after means. If there is a difference in favor of the posttest then the
treatment or intervention is effective. However, if there is no significant difference
then the treatment is not effective.

This is the appropriate test for evaluation of government programs. This is used in
an experimental design to test the effectiveness of a certain technique or method
or program that had been developed.

How do we use the t-test for correlated samples?

The formula is

t = (x̄₁ − x̄₂) / √[ (nΣD² − (ΣD)²) / (n²(n − 1)) ]
T – test for Correlated Means

Dependent Samples

Example 6

Prior to pursuing a training program, enrollees should take an aptitude test. Ten students were given the test before they underwent training under the Dual Training System in Refrigeration and Air Conditioning. Upon the completion of the training program, the same test was re-administered. It is expected that the students will perform better after the training. The following were the scores obtained by the students.

Student   Score before   Score after   D    D²


1 78 80 2 4
2 76 77 1 1
3 82 84 2 4
4 79 86 7 49
5 78 89 11 121
6 81 81 0 0
7 81 83 2 4
8 79 86 7 49
9 83 85 2 4
10 75 78 3 9

ΣD = 37, ΣD² = 245, n = 10, x̄₁ = 79.2 and x̄₂ = 82.9

Using the formula for the computed t with our data:

t = (x̄₁ − x̄₂) / √[ (nΣD² − (ΣD)²) / (n²(n − 1)) ]
  = (79.2 − 82.9) / √[ (10(245) − (37)²) / (10²(10 − 1)) ]
  = −3.7 / √(1081/900)
  = −3.7 / 1.095952
t = −3.376

At α = .05 (two-tailed) and df = 10 − 1 = 9, the tabular value of t is 2.262. Since the absolute value of the computed t (|−3.376|) exceeded the tabular value, we reject the null hypothesis. We conclude that the training significantly improved the scores of the enrollees.
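
A sketch of Example 6 on the listed scores; scipy's paired t-test is algebraically the same as the ΣD formula above:

```python
from scipy.stats import ttest_rel

before = [78, 76, 82, 79, 78, 81, 81, 79, 83, 75]
after  = [80, 77, 84, 86, 89, 81, 83, 86, 85, 78]

t, p = ttest_rel(before, after)   # t is about -3.376, matching the hand computation
print(f"t = {t:.3f}, two-tailed p = {p:.4f}, reject Ho at .05: {p < 0.05}")
```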

What is the F-test?

The F-test is another parametric test used to compare the means of two or more groups of independent samples. It is also known as the analysis of variance (ANOVA).

The three kinds of analysis of variance are:

one-way analysis of variance

two-way analysis of variance

three-way analysis of variance

The F-test is the analysis of variance (ANOVA). This is used in comparing the means of two or more independent groups. One-way ANOVA is used when there is only one variable involved. The two-way ANOVA is used when two variables are involved: the column and the row variables. The researcher is interested in knowing if there are significant differences between and among columns and rows. This is also used in looking at the interaction effect between the variables being analyzed.

Like the t-test, the F-test is also a parametric test which has to meet some conditions: the data to be analyzed must be normally distributed and expressed in interval or ratio form. This test is more efficient than other tests of difference.

Why do we use the F-test?

The F-test is used to find out if there is a significant difference between and
among the means of the two or more independent groups.

When do we use F-test?


The F-test is used when there is normal distribution and when the level of
measurement is expressed in interval or ratio data just like the t-test and
the z-test.

How do we use the F-test?

To get the F-computed value, the following computations should be done.

CF = (GT)²/N, the correction factor, where GT is the grand total of all observations and N is the total number of observations.

TSS, the total sum of squares, is the sum of all the squared observations minus the CF.

BSS, the between sum of squares, is the sum of each squared group total divided by its group size, minus the CF.

WSS, the within sum of squares, is the difference between the TSS and the BSS.

After getting the TSS, BSS and WSS, the ANOVA table should be constructed.

ANOVA Table

Sources of Variation   df                  SS    MS       Computed F-Value   Tabular F-Value
Between                K − 1               BSS   BSS/df   MSB/MSW = F        see the F table at .05 or the desired level of significance, with the between and within-group dfs
Within Group           (N − 1) − (K − 1)   WSS   WSS/df
Total                  N − 1               TSS

What are the steps in solving for the F-value?

The ANOVA table has five columns: sources of variation, degrees of freedom, sum of squares, mean squares, and the F-value, both the computed and the tabular values.

The sources of variation are between the groups, within the group itself, and the total variation.

The degrees of freedom for the total is the total number of observations minus 1.

The degrees of freedom for the between group is the total number of groups minus 1.

The degrees of freedom for the within group is the total df minus the between-group df.

 The MSB mean squares between is equal to the BSS/df.

 The MSW mean square within is equal to the WSS/df.

 To get the F-computed value, divide MSB/MSW.

 Compare the F-computed value with the F-tabular value at a given level of significance, with the corresponding dfs of BSS and WSS.

 If the F-computed value is greater than the F-tabular value, reject the null hypothesis in favour of the research hypothesis; this means that there is a significant difference between and among the means of the different groups.

Example 1: A sari-sari store is selling 4 brands of shampoo. The owner is interested if there is a significant difference in the average sales of the four brands of shampoo for one week. The following data are recorded.

Brand
A B C D
7 9 2 4
3 8 3 5
5 8 4 7
6 7 5 8
9 6 6 3
4 9 4 4
3 10 2 5

Perform the analysis of variance and test the hypothesis at .05 level of significance that
the average sales of the four brands of shampoo are equal.
Solving by the Stepwise Method

I. Problem: Is there a significant difference in the average sales of the four brands of shampoo?

II. Hypotheses:
H₀: There is no significant difference in the average sales of the four brands of shampoo.
H₁: There is a significant difference in the average sales of the four brands of shampoo.

1 +2 +3+4 (37+57+26+36)2 (156)2


𝐶𝐹 = = = = 869.14
𝑛1 𝑛2 𝑛3 𝑛 4 7+7+7+7 28

TSS = Σx₁² + Σx₂² + Σx₃² + Σx₄² − CF

= 225+475+110+204-869.14

=1014 – 869.14

TSS = 144.86
BSS = (Σx₁)²/n₁ + (Σx₂)²/n₂ + (Σx₃)²/n₃ + (Σx₄)²/n₄ − CF

    = (37)²/7 + (57)²/7 + (26)²/7 + (36)²/7 − 869.14
= 195.57+464.14+96.57+185.14-869.14

= 941.42-869.14

BSS = 72.28

WSS = TSS – BSS

=144.86 – 72.28
WSS = 72.58

Analysis of Variance Table

Sources of        Degrees of               Sum of    Mean      Computed   Tabular
Variation         Freedom                  Squares   Squares   F-Value    Value
Between Groups    K − 1 = 3                72.28     24.09     7.98       3.01
Within Groups     (N − 1) − (K − 1) = 24   72.58     3.02
Total             N − 1 = 27               144.86

III. Decision Rule: If the F-computed value is greater than the F-tabular value, reject H₀.

IV. Conclusion: Since the F-computed value of 7.98 is greater than the F –tabular
value of 3.01 at .05 level of significance with 3 and 24 degrees of
freedom, the null hypothesis is rejected in favor of the research
hypothesis which means that there is a significant difference in the
average sales of the 4 brands of shampoo.
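
A sketch of this one-way ANOVA on the shampoo data, assuming scipy is available; small rounding differences aside, it reproduces the table above:

```python
from scipy.stats import f_oneway, f as f_dist

A = [7, 3, 5, 6, 9, 4, 3]
B = [9, 8, 8, 7, 6, 9, 10]
C = [2, 3, 4, 5, 6, 4, 2]
D = [4, 5, 7, 8, 3, 4, 5]

F, p = f_oneway(A, B, C, D)                  # F is about 7.97
critical = f_dist.ppf(0.95, dfn=3, dfd=24)   # about 3.01
print(f"F = {F:.2f}, critical = {critical:.2f}, reject Ho: {F > critical}")
```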

What is the Scheffé's Test?

To find out where the difference lies, another test must be used.

The F-test tells us that there is a significant difference in the average sales of the 4 brands of shampoo, but as to where the difference lies, it has to be tested further by another test, the Scheffé's test. The formula is:

F′ = (x̄₁ − x̄₂)² / [ SW²(n₁ + n₂) / (n₁n₂) ]

Where:
F′ = Scheffé's test
x̄₁ = mean of group 1
x̄₂ = mean of group 2
n₁ = number of samples in group 1
n₂ = number of samples in group 2
SW² = within mean squares

A vs. B

F′ = (5.28 − 8.14)² / [3.02(7 + 7)/(7·7)]
   = 8.1796 / (42.28/49)
   = 8.1796/.86
F′ = 9.51

A vs. C

F′ = (5.28 − 3.71)² / [3.02(7 + 7)/(7·7)] = 2.4649/.86
F′ = 2.87

A vs. D

F′ = (5.28 − 5.14)² / [3.02(7 + 7)/(7·7)] = .0196/.86
F′ = .02

B vs. C

F′ = (8.14 − 3.71)² / [3.02(7 + 7)/(7·7)] = 19.6249/.86
F′ = 22.82

B vs. D

F′ = (8.14 − 5.14)² / [3.02(7 + 7)/(7·7)] = 9/.86
F′ = 10.46

C vs. D

F′ = (3.71 − 5.14)² / [3.02(7 + 7)/(7·7)] = 2.0449/.86
F′ = 2.38

Comparison of the Average Sales of the Four Brands of Shampoo

Between Brands   F′      (F.05)(K − 1) = (3.01)(3)   Interpretation
A vs B           9.51    9.03                        significant
A vs C           2.87    9.03                        not significant
A vs D           .02     9.03                        not significant
B vs C           22.82   9.03                        significant
B vs D           10.46   9.03                        significant
C vs D           2.38    9.03                        not significant

The above table shows that there is a significant difference in the sales between brand A and brand B, brand B and brand C, and also brand B and brand D. However, brands A and C, A and D, and C and D do not significantly differ in their average sales.

This implies that brand B is more saleable than brands A, C and D.
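
A sketch of the Scheffé comparisons in Python; the means, the within mean square (3.02), and the critical value (F.05 × (K − 1) = 9.03) are the ones from the worked example, and the F′ values agree with the table up to rounding of the means:

```python
from itertools import combinations

means = {"A": 37/7, "B": 57/7, "C": 26/7, "D": 36/7}
msw, n, k = 3.02, 7, 4
critical = 3.01 * (k - 1)   # (F.05 with 3 and 24 dfs) x (K - 1) = 9.03

for g1, g2 in combinations(means, 2):
    f_prime = (means[g1] - means[g2])**2 / (msw * (n + n) / (n * n))
    verdict = "significant" if f_prime > critical else "not significant"
    print(f"{g1} vs {g2}: F' = {f_prime:.2f} -> {verdict}")
```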

NON-PARAMETRIC TEST
CHI – SQUARE

Chi-square is applicable in analyzing data in descriptive research. The chi-square test determines the significant difference between the observed and expected frequencies of independent variables. The symbol of chi-square is x². Chi-square considers the practical and theoretical importance in a set of observations.

When the researcher is interested in determining whether 2 or more categories show a significant difference, the chi-square test is used. It compares a set of observed and expected frequencies from independent samples.

DEFINITION OF CHI-SQUARE

Chi-square (x²) may be defined as the sum of the squared differences between observed and expected frequencies, each divided by the expected frequency. The definition is denoted by this formula (Ferguson, 1976):

x² = Σ (O − E)² / E

where:
x² = Chi-square
O = Observed frequency
E = Expected frequency
Chi-square is a descriptive measure of the discrepancy between observed and expected frequencies. The larger the discrepancies between O and E, the larger the chi-square value obtained. If observed and expected frequencies show no discrepancies at all, the chi-square value is zero.

Bear in mind that the chi-square value is always a positive number.

USES OF CHI-SQUARE

1. Chi-square is used in descriptive research if the researcher wants to determine the significant difference between the observed and expected or theoretical frequencies from independent variables.

2. It is used to test the goodness of fit, where a theoretical distribution is fitted to some data, i.e., the fitting of the normal curve.

3. It is used to test the hypothesis that the variance of a normal population is equal
to a given value.

4. It is also used for the construction of confidence interval for variance.

5. It is used to compare two uncorrelated and correlated proportions.

ONE-WAY CLASSIFICATION

Chi-square in one-way classification is applicable when the researcher is interested in determining the number of subjects, objects, or responses that fall in various categories. For instance, the specific research question is "Do you agree that divorce can be applied in the Philippines?"

The subjects are 30 women and 30 men, or a total of 60 subjects in all. Of the 30 women, 9 answered yes; 12, no; and 9, undecided. Of the 30 men, 15 answered yes; 2, no; and 13, undecided.

To test the significant difference of their responses, consider the following:

1. Null hypothesis. There is no significant difference between the responses of women and men to the question: "Do you agree that divorce can be applied in the Philippines?" H0: O = E

2. Statistical test. Chi-square (x²) test

   x² = Σ (O − E)² / E

3. Significance level. Let α = .01

4. Sampling distribution. N = 60 with degrees of freedom (df) of 2.
   df = (R − 1)(C − 1)

5. Rejection region. The null hypothesis (H0) will be rejected if the x² value is equal to or greater than the tabular value at df 2 and at .01 level of significance.

6. Computation. Table 1 shows the computation of chi-square in a one-way classification of the responses of women and men to the question: Can divorce be applied in the Philippines?

Table 1

Computation of Chi-square in One-Way Classification of the Responses of Women and Men on Whether Divorce Can Be Applied in the Philippines

            O              E          O − E      (O − E)²     (O − E)²/E
Response    W   M   Both   W    M     W    M     W    M       W     M     Both
Yes         9   15  24     12   12    −3   3     9    9       0.75  0.75  1.50
No          12  2   14     7    7     5    −5    25   25      3.57  3.57  7.14
Undecided   9   13  22     11   11    −2   2     4    4       0.36  0.36  0.72
Total       30  30  60     30   30    0    0                  4.68  4.68  9.36**

**Significant at .01 level

df = (R − 1)(C − 1)
   = (3 − 1)(2 − 1)
   = (2)(1)
df = 2

Tabular value at df 2, .01 level = 9.210

Bear in mind that if the computation of expected frequencies is correct, Σ(O − E) equals zero, because the sum of the observed frequencies is equal to the sum of the expected frequencies.

7. Interpretation. The computed x² value obtained is 9.36, which is significant at .01 level. To be significant at .01 level with two degrees of freedom, the computed x² must be equal to or greater than the tabular value of 9.210. Since the computed x² value is greater than 9.210, the results show a significant difference in the responses of women and men to the question: "Can divorce be applied in the Philippines?" This means that the responses of women and men really differ from each other; thus, the null hypothesis (H0) is rejected.
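
A sketch of the one-way computation in Python; the expected frequency for each group is half of the row total, as in Table 1:

```python
observed = {"Yes": (9, 15), "No": (12, 2), "Undecided": (9, 13)}  # (women, men)

chi_sq = 0.0
for w, m in observed.values():
    e = (w + m) / 2                            # expected frequency per group
    chi_sq += (w - e)**2 / e + (m - e)**2 / e

print(f"chi-square = {chi_sq:.2f}")            # about 9.36, vs. tabular 9.210 at df 2, .01
```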
TWO-WAY CLASSIFICATION

TABLE 2

3 x 3 Table of Independent Variables

Status
Career Success Permanent Temporary Casual Total
Very Successful 60 35 15 110
Successful 55 45 20 120
Unsuccessful 30 40 50 120
Total 145 120 85 350

Table 2 data determine whether the position status of 350 government employees is independent of career success. The question is "Is there a significant difference between the status and career success of government employees?"

To answer this question, consider the following steps:

1. Null hypothesis (Ho). There is no significant difference between the position status and career success of government employees. Position status (PS) is independent of career success (CS), or position status and career success are equal. Ho: PS = CS

2. Statistical test. Chi-square (χ²)

3. Significance level. Let α = .01

4. Sampling distribution. N = 350 with df = 4, df = (R − 1)(C − 1)

5. Rejection region. The null hypothesis (Ho) will be rejected if chi-square (χ2) value
obtained is equal to or greater than the tabular value at df 4 and at 1 percent level
of significance.

6. Computation. Table 3 shows the computation of chi-square in a 3 x 3 table between position status and career success of government employees. Consider the following data:

STATUS

Career Success     Permanent     Temporary     Casual          Total
Very Successful    60 (45.572)   35 (37.714)   15 (26.714)     110
Successful         55 (49.714)   45 (41.143)   20 (29.143)     120
Unsuccessful       30 (49.714)   40 (41.143)   50 (29.143)     120
Total              145           120           85              350

Expected Frequency Computation

Observed freq.     Expected freq.
60                 (145 × 110)/350 = 45.572
35                 (120 × 110)/350 = 37.714
15                 (85 × 110)/350 = 26.714
55                 (145 × 120)/350 = 49.714
45                 (120 × 120)/350 = 41.143
20                 (85 × 120)/350 = 29.143
30                 (145 × 120)/350 = 49.714
40                 (120 × 120)/350 = 41.143
50                 (85 × 120)/350 = 29.143

Computation of Chi-square in a 3 x 3 Table between Position Status and Career Success of Government Employees

O       E          O − E      (O − E)²     (O − E)²/E
60      45.572     14.428     208.16718    4.5679
35      37.714     −2.714     7.36580      0.1953
15      26.714     −11.714    137.21779    5.1366
55      49.714     5.286      27.94179     0.5621
45      41.143     3.857      14.87644     0.3616
20      29.143     −9.143     83.59444     2.8684
30      49.714     −19.714    388.64179    7.8176
40      41.143     −1.143     1.30645      0.0318
50      29.143     20.857     435.01444    14.9269
Total   350   350.0000   0.000             36.4682 or 36.47**

df = (R − 1)(C − 1)
   = (3 − 1)(3 − 1)
df = 4

Tabular value at df 4, .01 level = 13.28

7. Interpretation. The computed chi-square (χ²) value is 36.47. This value is greater than the tabular value of 13.28 at df 4 and at the 1 percent level of significance; hence, it is significant. This means that success in career depends on the position status of government employees. Therefore, the null hypothesis (Ho) is rejected.
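
A sketch of the same test of independence using scipy's contingency-table routine on Table 2:

```python
from scipy.stats import chi2_contingency, chi2

observed = [
    [60, 35, 15],   # Very Successful
    [55, 45, 20],   # Successful
    [30, 40, 50],   # Unsuccessful
]

stat, p, df, expected = chi2_contingency(observed)
critical = chi2.ppf(0.99, df)     # df = 4 at the .01 level -> about 13.28
print(f"chi-square = {stat:.2f}, df = {df}, critical = {critical:.2f}, "
      f"reject Ho: {stat > critical}")
```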

PREPARED BY:

DR. FE C. MONTECALVO
Professor VI
