
Laboratory Exercise No.

One-way Analysis of Variance

Course: BS Accountancy                      Experiment No.: 8
Group No.:                                  Section: ACTCY31 S1
Group Members: 1.                           Date Performed: 10/01/20
               2.                           Date Submitted: 10/02/20
Instructor: Engr. Ma. Teodora Gutierrez
1. Objective(s)

The activity aims to introduce one-way analysis of variance by comparing means of samples collected at
different factor levels using a one-way model, and to interpret the main effects plot and multiple comparisons.

2. Intended Learning Outcomes (ILOs)

At the end of the exercise, the students are expected to:

1. Evaluate differences between group means for a single factor using one-way ANOVA; and
2. Interpret results and draw conclusions about the output provided by Minitab 18.

3. Discussion
Analysis of Variance (ANOVA)
Analysis of variance tests the hypothesis that the means of two or more populations are equal. ANOVAs evaluate the
importance of one or more factors by comparing the response variable means at the different factor
levels. The null hypothesis states that all population means (factor level means) are equal, while the
alternative hypothesis states that at least one is different.

To run an ANOVA, you must have a continuous response variable and at least one categorical factor
with two or more levels. ANOVAs require data from normally distributed populations with roughly equal
variances between factor levels.

For example, you design an experiment to assess the durability of four experimental carpet products.
You place a sample of each carpet type in ten homes and measure durability after 60 days. Because
you are examining one factor (carpet type), you use a one-way ANOVA.

If the p-value is less than your alpha, then you conclude that at least one durability mean is different. To
further explore the differences between specific means, use a multiple comparison method such as
Tukey's.
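As an illustration only (this is not part of the exercise, and the durability values below are invented), the same kind of one-way ANOVA can be run in Python with scipy; a significant result would then be followed by a multiple comparison such as Tukey's:

```python
# One-way ANOVA for the carpet-durability example; the data are invented
# purely for illustration.
from scipy import stats

carpet_1 = [18.9, 12.1, 15.6, 14.2, 16.0]
carpet_2 = [10.9, 12.4, 11.1, 13.0, 10.2]
carpet_3 = [13.5, 14.8, 12.7, 15.1, 14.0]
carpet_4 = [12.3, 11.8, 13.1, 10.7, 12.9]

# Null hypothesis: all four mean durabilities are equal.
f_stat, p_value = stats.f_oneway(carpet_1, carpet_2, carpet_3, carpet_4)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# If p < alpha (e.g. 0.05), at least one mean differs; a multiple comparison
# method such as Tukey's can then identify which pairs of means differ.
```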

The name "analysis of variance" is based on the manner in which the procedure uses variances to
determine whether the means are different. The procedure works by comparing the variance between
group means versus the variance within groups as a method of determining whether the groups are all
part of one larger population or separate populations with different characteristics.

Minitab has different types of ANOVAs to allow for additional factors, types of factors, and different
designs to suit your specific needs.
ANOVA Type            Model and Design Properties
One-way               One fixed factor (levels set by the investigator), which can have either an unequal (unbalanced) or equal (balanced) number of observations per treatment combination.
Two-way               Two fixed factors; requires a balanced design.
Balanced              Model may contain any number of fixed and random factors (levels are randomly selected), and crossed and nested factors, but requires a balanced design.
General Linear Model  Expands on balanced ANOVA by allowing unbalanced designs and covariates (continuous variables).

One-way ANOVA

The one-way ANOVA (analysis of variance) procedure is a generalization of the independent-samples
t-test. Unlike the t-test, however, one-way ANOVA can be used to analyze the means of more than two
groups (samples) at once. Use one-way ANOVA (also called single-factor ANOVA) when you have
continuous response data for two or more fixed levels of a single factor.

Before accepting the results of an ANOVA, you must verify that the following assumptions about the
errors are valid for your data (a rough way to check them is sketched after the list). They must:

1. Be independent (and thus random);

2. Not deviate substantially from a normal distribution; and

3. Have constant variance across all factor levels.
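A rough way to check assumptions 2 and 3 (a sketch only, not the procedure used in this exercise; the data are invented) is to test the residuals for normality and the factor levels for equal variances:

```python
# Rough checks of the ANOVA assumptions (illustrative only; data invented).
# Assumption 1 (independence) comes from the study design and is not tested here.
from scipy import stats

groups = [
    [18.9, 12.1, 15.6, 14.2],
    [10.9, 12.4, 11.1, 13.0],
    [13.5, 14.8, 12.7, 15.1],
]

# Residuals = each observation minus its own group mean.
residuals = [x - sum(g) / len(g) for g in groups for x in g]

# Assumption 2: residuals should not deviate substantially from normality.
print(stats.shapiro(residuals))

# Assumption 3: variances should be roughly constant across factor levels.
print(stats.levene(*groups))
```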

One-way ANOVA can help answer questions such as:

1. Are all branches of your company achieving comparable customer satisfaction ratings?
2. Do treatment group means differ?

For example:

1. Do mean customer satisfaction ratings differ between a company’s branches in New Hampshire,
Maine, and Vermont?
2. Which of the three training courses is the most successful in decreasing mean application
processing errors?

Dot Plot
A dot plot gives a first look at the data to graphically compare the central tendencies and spreads of
the groups (in the training example, three commission types). This graph can also reveal whether
outlying data points are present and need to be investigated.
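For readers not using Minitab, a rough Python analogue of a grouped dotplot is sketched below (the group names and values are invented for illustration):

```python
# Minimal grouped dotplot: one row of dots per group (illustrative only).
import matplotlib.pyplot as plt

groups = {
    "Group A": [14.0, 15.2, 13.8, 14.6, 19.5],   # 19.5 would stand out as a possible outlier
    "Group B": [16.9, 17.4, 16.1, 17.0, 16.5],
    "Group C": [14.8, 15.0, 14.3, 15.5, 14.9],
}

for i, (name, values) in enumerate(groups.items()):
    plt.plot(values, [i] * len(values), "o")
plt.yticks(range(len(groups)), list(groups.keys()))
plt.xlabel("Response")
plt.title("Grouped dotplot")
plt.show()
```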
Degrees of Freedom
The degrees of freedom (DF) statistic measures how much “independent” information is available to
calculate each sum of squares (SS):

1. DF_Factor = k − 1, where k is the number of factor levels.

2. DF_Error = n − k, where n is the total number of observations.

3. DF_Total = n − 1
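For example, in the paint-hardness problem described in the Procedure (four blends with six samples each, so k = 4 and n = 24), DF_Factor = 4 − 1 = 3, DF_Error = 24 − 4 = 20, and DF_Total = 24 − 1 = 23.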

Sum of Squares

The sum of squares (SS) measures the amount of variability each source contributes to the data. Notice
that:
SS_Total = SS_Between + SS_Error

Mean Squares

The mean square (MS) for each source is equal to the SS divided by the DF.
F-statistic

F is the ratio of the variability contributed by the factor to the variability contributed by the error (within groups):
F = MS_Factor / MS_Error

1. If between-group variability is similar to within-group variability, F is close to 1, indicating that the factor
does not affect the response variable.

2. If between-group variability is larger than within-group variability, F is greater than 1.

P-value

A large F suggests that the factor-level means are more different than would be expected by chance, so the p-value is small.
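To make the SS → MS → F → p-value chain concrete, the sketch below computes each quantity directly from grouped data (the values are invented for illustration):

```python
# Hand computation of the one-way ANOVA table quantities (illustrative data).
from scipy import stats

groups = [
    [52.0, 54.1, 50.3, 53.6],
    [47.2, 45.9, 48.8, 46.5],
    [51.1, 49.7, 50.9, 52.4],
]
all_obs = [x for g in groups for x in g]
grand_mean = sum(all_obs) / len(all_obs)

k = len(groups)      # number of factor levels
n = len(all_obs)     # total number of observations

# Sums of squares: SS_Total = SS_Between + SS_Error.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_error = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
ss_total = sum((x - grand_mean) ** 2 for x in all_obs)

# Mean squares (SS divided by DF) and the F statistic.
ms_factor = ss_between / (k - 1)
ms_error = ss_error / (n - k)
f_stat = ms_factor / ms_error

# P-value from the F distribution with (k - 1, n - k) degrees of freedom.
p_value = stats.f.sf(f_stat, k - 1, n - k)
print(ss_between, ss_error, ss_total, f_stat, p_value)
```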
Individual Confidence Interval
When the p-value in the analysis of variance table indicates a difference among the factor level means,
the table of individual confidence intervals is sometimes used to assess the differences.
4. Materials and Equipment
• Minitab 18 Statistical software
• Minitab 18 Manual
• Training Data Sets
• Textbooks
5. Procedure

Problem: A chemical engineer wants to compare the hardness of four blends of paint. Six samples of
each paint blend were applied to a piece of metal. The pieces of metal were cured. Then each sample
was measured for hardness. In order to test for the equality of means and to assess the differences
between pairs of means, the analyst uses one-way ANOVA with multiple comparisons.

Part 1: Compare Distributions using Dotplot

Step 1: Open PaintHardness.MTW

Step 2: Choose Graph > Dotplot

Step 3: Under One Y, choose With Groups, then click OK.

Step 4: Complete the dialog box as shown below.


Step 5: Click OK and interpret the results.

Part 2: Perform One-way ANOVA

Step 1: Choose Stat > ANOVA > One-way

Step 2: Select Response data are in one column for all factor levels

Step 3: In Response, select Hardness.

Step 4: In Factor, select Paint.


Step 5: Click Graphs > Residual Plots > Four in One.

Step 6: Click OK in each dialog box. Interpret the results. To ensure that the results are valid, determine whether all
the assumptions about the residuals have been met.

Step 7: Repeat the same steps from Step 4 to 6 for Temp and Operator, respectively.
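For readers without Minitab, a rough Python equivalent of Part 2 is sketched below. It assumes the PaintHardness.MTW worksheet has been exported to a CSV file named paint_hardness.csv with columns Paint and Hardness (the file name and column names are assumptions); the pairwise comparison shown uses Tukey's method rather than Minitab's Fisher comparisons.

```python
# Rough Python equivalent of Part 2 (one-way ANOVA on the paint data).
# Assumes PaintHardness.MTW was exported to "paint_hardness.csv" with columns
# "Paint" and "Hardness" -- the file and column names are assumptions.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("paint_hardness.csv")

# Fit the one-way model Hardness ~ Paint and print the ANOVA table.
model = ols("Hardness ~ C(Paint)", data=df).fit()
print(sm.stats.anova_lm(model, typ=1))

# Pairwise comparisons of blend means (Tukey's method; unlike Fisher's LSD,
# Tukey adjusts for multiple comparisons).
print(pairwise_tukeyhsd(df["Hardness"], df["Paint"]))
```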
6. Data and Results
7. Data Analysis and Conclusion

The p-value for the paint hardness ANOVA is less than 0.05. This result indicates that the
differences between the mean hardness values of the paint blends are statistically significant. The
engineer knows that at least some of the group means are different.

The engineer uses the Fisher comparison results to formally test whether the difference
between a pair of groups is statistically significant. The graph and the table that include the
Fisher simultaneous confidence intervals show that the confidence interval for the
difference between the means of Blend 2 and 4 is 3.114 to 15.886. This range does not
include zero, which indicates that the difference between these means is significant. The
engineer can use this estimate of the difference to determine whether the difference is
practically significant.

The confidence intervals for the remaining pairs of means all include zero, which indicates
that the differences are not significant.

The low predicted R2 value indicates that the model generates imprecise predictions for new
observations. The imprecision may be due to the small size of the groups. Thus, the
engineer should be wary about using the model to make generalizations beyond the sample
data.
8. Reflection on the Attainment of Intended Learning Outcomes (ILOs):

I have learned that Analysis of variance (ANOVA) is an analysis tool used in statistics that splits an observed
aggregate variability found inside a data set into two parts: systematic factors and random factors. The systematic
factors have a statistical influence on the given data set, while the random factors do not. Analysts use the ANOVA
test to determine the influence that independent variables have on the dependent variable in a regression study.

The ANOVA test is the initial step in analyzing factors that affect a given data set. Once the test is finished, an
analyst performs additional testing on the systematic factors that measurably contribute to the variability in the
data set. The analyst uses the ANOVA test results in an F-test to generate additional data that aligns with the
proposed regression models.

The ANOVA test allows a comparison of more than two groups at the same time to determine whether a relationship
exists between them. The result of the ANOVA formula, the F statistic (also called the F-ratio), allows for the analysis
of multiple groups of data to determine the variability between samples and within samples.

If no real difference exists between the tested groups, which is called the null hypothesis, the result of the ANOVA's
F-ratio statistic will be close to 1. Fluctuations in its sampling will likely follow the Fisher F distribution. This is actually
a group of distribution functions, with two characteristic numbers, called the numerator degrees of freedom and the
denominator degrees of freedom.

On the other hand, the Fisher's LSD test is basically a set of individual t tests. It is only
used as a follow-up to ANOVA. Unlike the Bonferroni, Tukey, Dunnett, and Holm methods,
Fisher's LSD does not correct for multiple comparisons. If you choose to use the Fisher's
LSD test, you'll need to account for multiple comparisons when you interpret the data, since
the computations themselves do not correct for multiple comparisons.

The only difference between a set of t tests and the Fisher's LSD test is that t tests compute the
pooled SD from only the two groups being compared, while the Fisher's LSD test computes
the pooled SD from all the groups (which gains power). Prism performs
the unprotected LSD test. Unprotected simply means that the calculations are reported
regardless of the results of the ANOVA. The unprotected Fisher's LSD test is essentially a
set of t tests, without any correction for multiple comparisons.
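As a minimal sketch of the idea described above (pairwise t statistics that use the pooled within-group variance from all groups, with no correction for multiple comparisons; the blend names and values are invented):

```python
# Unprotected Fisher's LSD sketch: pairwise t tests using the pooled
# within-group variance (MS_error) from ALL groups, with no correction for
# multiple comparisons.  Data invented for illustration.
from itertools import combinations
from scipy import stats

groups = {
    "Blend A": [14.0, 15.2, 13.8, 14.6],
    "Blend B": [16.9, 17.4, 16.1, 17.0],
    "Blend C": [14.8, 15.0, 14.3, 15.5],
}

k = len(groups)
n = sum(len(g) for g in groups.values())
means = {name: sum(g) / len(g) for name, g in groups.items()}

# Pooled within-group variance = MS_error from the one-way ANOVA.
ss_error = sum((x - means[name]) ** 2 for name, g in groups.items() for x in g)
ms_error = ss_error / (n - k)

for a, b in combinations(groups, 2):
    na, nb = len(groups[a]), len(groups[b])
    se = (ms_error * (1 / na + 1 / nb)) ** 0.5
    t = (means[a] - means[b]) / se
    p = 2 * stats.t.sf(abs(t), n - k)   # uncorrected two-sided p-value
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}")
```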

Prism does not perform a protected Fisher's LSD test. Protection means that you only
perform the calculations described above when the overall ANOVA resulted in a P value
less than 0.05 (or some other value set in advance). This first step sort of controls the false
positive rate for the entire family of comparisons. While the protected Fisher's LSD test is of
historical interest as the first multiple comparisons test ever developed, it is no longer
recommended. It pretends to correct for multiple comparisons, but doesn't do so very well.
