Design of Experiments (DOE) Tutorial
1. Introduction
The term experiment is defined as the systematic procedure carried out under controlled conditions
in order to discover an unknown effect, to test or establish a hypothesis, or to illustrate a known
effect. When analyzing a process, experiments are often used to evaluate which process inputs
have a significant impact on the process output, and what the target level of those inputs should
be to achieve a desired result (output). Experiments can be designed in many different ways to
collect this information. Design of Experiments (DOE) is also referred to as Designed Experiments
or Experimental Design - all of the terms have the same meaning.
Experimental design can be used at the point of greatest leverage to reduce design costs by
speeding up the design process, reducing late engineering design changes, and reducing product
material and labor complexity. Designed Experiments are also powerful tools to achieve
manufacturing cost savings by minimizing process variation and reducing rework, scrap, and the
need for inspection.
This Toolbox module includes a general overview of Experimental Design and links and other
resources to assist you in conducting designed experiments. A glossary of terms is also available at
any time through the Help function, and we recommend that you read through it to familiarize
yourself with any unfamiliar terms.
2. Preparation
If you do not have a general knowledge of statistics, review the Histogram, Statistical Process
Control, and Regression and Correlation Analysis modules of the Toolbox prior to working with this
module.
https://www.moresteam.com/toolbox/design-of-experiments.cfm 1/11
07/02/2018 Design of Experiments (DOE) Tutorial
You can use the MoreSteam's data analysis software EngineRoom® for Excel to create and analyze
many commonly used but powerful experimental designs. Free trials of several other statistical
packages can also be downloaded through the MoreSteam.com Statistical Software module of the
Toolbox. In addition, the book DOE Simplified, by Anderson and Whitcomb, comes with a sample of
excellent DOE software that will work for 180 days after installation.
Figure 1
4. Purpose of Experimentation
Designed experiments have many potential uses in improving processes and products, including:
Comparing Alternatives. In the case of our cake-baking example, we might want to compare
the results from two different types of flour. If it turned out that the flour from different vendors
was not significant, we could select the lowest-cost vendor. If flour were significant, then we
would select the best flour. The experiment(s) should allow us to make an informed decision
that evaluates both quality and cost.
Identifying the Significant Inputs (Factors) Affecting an Output (Response) - separating the
vital few from the trivial many. We might ask a question: "What are the significant factors
beyond flour, eggs, sugar and baking?"
Achieving an Optimal Process Output (Response). "What are the necessary factors, and what
are the levels of those factors, to achieve the exact taste and consistency of Mom's chocolate
cake?"
Reducing Variability. "Can the recipe be changed so it is more likely to always come out the
same?"
Minimizing, Maximizing, or Targeting an Output (Response). "How can the cake be made as
moist as possible without disintegrating?"
Improving process or product "Robustness" - fitness for use under varying conditions. "Can the
factors and their levels (recipe) be modified so the cake will come out nearly the same no matter
what type of oven is used?"
Balancing Tradeoffs when there are multiple Critical to Quality Characteristics (CTQCs) that
require optimization. "How do you produce the best tasting cake with the simplest recipe (least
number of ingredients) and shortest baking time?"
When designing an experiment, pay particular heed to four potential traps that can create
experimental difficulties:
1. In addition to measurement error (explained above), other sources of error, or unexplained
variation, can obscure the results. Note that the term "error" is not a synonym for "mistakes".
Error refers to all unexplained variation, either within an experiment run or between
experiment runs, that is associated with changes in level settings. Properly designed experiments can
identify and quantify the sources of error.
2. Uncontrollable factors that induce variation under normal operating conditions are referred to as
"Noise Factors". These factors, such as multiple machines, multiple shifts, raw materials, humidity,
etc., can be built into the experiment so that their variation doesn't get lumped into the
unexplained, or experiment, error. A key strength of Designed Experiments is the ability to
determine factors and settings that minimize the effects of the uncontrollable factors.
3. Correlation can often be confused with causation. Two factors that vary together may be highly
correlated without one causing the other - they may both be caused by a third factor. Consider
the example of a porcelain enameling operation that makes bathtubs. The manager notices that
there are intermittent problems with "orange peel" - an unacceptable roughness in the enamel
surface. The manager also notices that the orange peel is worse on days with a low production
rate. A plot of orange peel vs. production volume below (Figure 2) illustrates the correlation:
Figure 2
If the data are analyzed without knowledge of the operation, a false conclusion could be reached
that low production rates cause orange peel. In fact, both low production rates and orange peel
are caused by excessive absenteeism - when regular spray booth operators are replaced by
employees with less skill. This example highlights the importance of factoring in operational
knowledge when designing an experiment. Brainstorming exercises and Fishbone Cause & Effect
Diagrams are both excellent techniques available through the Toolbox to capture this
operational knowledge during the design phase of the experiment. The key is to involve the
people who live with the process on a daily basis.
4. The combined effects, or interactions, between factors demand careful thought prior to
conducting the experiment. For example, consider an experiment to grow plants with two inputs:
water and fertilizer. Increased amounts of water are found to increase growth, but there is a point
where additional water leads to root rot and has a detrimental impact. Likewise, additional
fertilizer has a beneficial impact up to the point that too much fertilizer burns the roots.
Compounding this complexity of the main effects, there are also interactive effects - too much
water can negate the benefits of fertilizer by washing it away. Factors may generate non-linear
effects that are not additive, but these can only be studied with more complex experiments that
involve more than 2 level settings. Two levels are defined as linear (two points define a line), three
levels as quadratic (three points define a curve), four levels as cubic, and so on.
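The water-and-fertilizer interaction described above can be sketched numerically. The response function below is invented purely for illustration (the coefficients, levels, and names are assumptions, not data from the tutorial), but it captures the same behavior: each input helps up to a point, and their combined excess hurts.

```python
# Hypothetical growth response with main effects, curvature, and a negative
# water-fertilizer interaction; all constants are invented for illustration.
def growth(water, fertilizer):
    return (10.0
            + 4.0 * water               # main effect of water
            + 3.0 * fertilizer          # main effect of fertilizer
            - 1.0 * water ** 2          # too much water hurts (root rot)
            - 1.0 * fertilizer ** 2     # too much fertilizer burns roots
            - 1.5 * water * fertilizer) # interaction: combined excess is worse

# Evaluate on a 3-level grid (0 = low, 1 = medium, 2 = high per factor)
grid = {(w, f): growth(w, f) for w in (0, 1, 2) for f in (0, 1, 2)}
print(grid[(1, 1)])  # moderate settings of both factors: 13.5
print(grid[(2, 2)])  # both factors maxed out: 10.0
```

Because of the curvature and interaction terms, the moderate setting beats the "more of everything" setting, which is exactly the kind of behavior a 2-level experiment cannot detect and a 3-level experiment can.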
The data are shown below along with the mean and the variance for each route (treatment):
As shown in the table above, both new routes home (B and C) appear to be quicker than the existing
route A. To determine whether the difference in treatment means is due to random chance or to a
statistically significant difference between the routes, an ANOVA F-test is performed.
The F-test analysis is the basis for model evaluation of both single-factor and multi-factor
experiments. This analysis is commonly output as an ANOVA table by statistical analysis software, as
illustrated by the table below:
The most important output of the table is the F-ratio (3.61). The F-ratio is the Mean Square
(variation) between the groups (treatments, or routes home in our example) of 19.9 divided
by the Mean Square error within the groups (variation within the given route samples) of 5.51.
The Model F-ratio of 3.61 implies the model is significant. The p-value ('Probability of exceeding the
observed F-ratio assuming no significant differences among the means') of 0.0408 indicates that
there is only a 4.08% probability that a Model F-ratio this large could occur due to noise (random
chance). In other words, the three routes differ significantly in terms of the time taken to reach home
from work.
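The F-ratio calculation described above can be carried out by hand. The sketch below uses invented commute times (they do not reproduce the tutorial's F-ratio of 3.61), but the arithmetic is the same: mean square between treatments divided by mean square error within treatments.

```python
# One-way ANOVA F-ratio computed by hand on hypothetical commute times in
# minutes; the data are invented for illustration only.
routes = {
    "A": [20, 22, 24],
    "B": [18, 20, 22],
    "C": [16, 18, 20],
}

groups = list(routes.values())
n_total = sum(len(g) for g in groups)
grand_mean = sum(sum(g) for g in groups) / n_total

# Between-group (treatment) sum of squares and mean square
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (len(groups) - 1)          # df = k - 1 groups

# Within-group (error) sum of squares and mean square
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
ms_within = ss_within / (n_total - len(groups))      # df = N - k

f_ratio = ms_between / ms_within
print(f_ratio)  # 3.0 for this invented data
```

The larger the F-ratio, the more the between-route variation stands out against the run-to-run noise; the p-value is then read from the F distribution with the two degrees-of-freedom values above.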
The following graph (Figure 4) shows 'Simultaneous Pairwise Difference' confidence intervals for
each pair of differences among the treatment means. If an interval includes the value of zero
(meaning 'zero difference'), the corresponding pair of means do NOT differ significantly. You can use
these intervals to identify which of the three routes is different and by how much. The intervals
contain the likely values of differences of the treatment means (1-2), (1-3) and (2-3) respectively, each of
which is likely to contain the true (population) mean difference in 95 out of 100 samples. Notice that the
second interval (1-3) does not include the value of zero; the means of routes 1 (A) and 3 (C) differ
significantly. In fact, all values included in the (1-3) interval are positive, so we can say that route 1 (A)
has a longer commute time than route 3 (C).
Figure 4
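A simplified version of these intervals can be computed by hand. The sketch below uses invented commute data and plain two-sample t intervals based on the pooled ANOVA error; note that the tutorial's figure shows *simultaneous* intervals, which are wider, so this is only an approximation of the idea.

```python
import math

# Pairwise confidence intervals for differences of route means, using
# invented data and per-pair t intervals (not simultaneous intervals).
routes = {"A": [20, 22, 24], "B": [18, 20, 22], "C": [16, 18, 20]}
groups = list(routes.values())
n_total = sum(len(g) for g in groups)

means = {k: sum(v) / len(v) for k, v in routes.items()}
ss_within = sum(sum((x - means[k]) ** 2 for x in v) for k, v in routes.items())
ms_within = ss_within / (n_total - len(groups))  # pooled error variance

T_CRIT = 2.447  # t(0.975, df=6), hard-coded for this illustration
n = 3           # observations per route
se = math.sqrt(ms_within * (1 / n + 1 / n))
margin = T_CRIT * se

for a, b in [("A", "B"), ("A", "C"), ("B", "C")]:
    diff = means[a] - means[b]
    print(f"{a}-{b}: ({diff - margin:.2f}, {diff + margin:.2f})")
# For this data only the A-C interval excludes zero, so only those two
# route means differ significantly at the 95% level.
```

Reading the output mirrors the figure: an interval straddling zero means "no significant difference", while an all-positive interval means the first route's mean commute is significantly longer.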
Other statistical approaches to the comparison of two or more treatments are available through the
online statistics handbook - Chapter 7:
Statistics Handbook
8. Multi-Factor Experiments
Multi-factor experiments are designed to evaluate multiple factors set at multiple levels. One
approach is called a Full Factorial experiment, in which each factor is tested at each level in every
possible combination with the other factors and their levels. Full factorial experiments can be
economical and practical if there are few factors and only 2 or 3 levels per factor; their advantage
is that all paired interactions can be studied. However, the number of runs grows exponentially
as additional factors are added. Experiments with many factors can quickly become unwieldy and
costly to execute, as shown by the chart below:
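The run-count arithmetic is simple: a full factorial needs levels-to-the-power-of-factors runs. The sketch below enumerates a 2-level, 3-factor design (the factor names are hypothetical) and shows how quickly the run count escalates.

```python
from itertools import product

# Full factorial run count = levels ** factors.
# Enumerate a 2-level, 3-factor (2^3 = 8 run) design in coded units:
factors = ["Flour", "Sugar", "OvenTemp"]  # hypothetical factor names
levels = [-1, +1]                         # coded low/high settings

runs = list(product(levels, repeat=len(factors)))
print(len(runs))  # 8 runs, one per combination

# The exponential growth as 2-level factors are added:
for k in range(2, 8):
    print(k, "factors ->", 2 ** k, "runs")
```

At seven 2-level factors a single replicate already needs 128 runs, which is why the text calls large full factorials unwieldy and costly.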
To study higher numbers of factors and interactions, Fractional Factorial designs can be used to
reduce the number of runs by evaluating only a subset of all possible combinations of the factors.
These designs are very cost-effective, but the study of interactions between factors is limited, so
which interactions can be estimated must be decided during the experiment design phase, before
the experiment is run.
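One common fractional construction can be sketched in a few lines. The half-fraction below is a standard 2^(3-1) design (a textbook construction, not one given in the tutorial): the third factor's column is generated as the product of the first two, which halves the run count at the price of confounding that factor with the two-factor interaction.

```python
from itertools import product

# Half-fraction 2^(3-1) design: 4 runs instead of the full factorial's 8.
# Generator C = A*B, which confounds factor C with the AB interaction -
# the "limited study of interactions" trade-off mentioned above.
half_fraction = [(a, b, a * b) for a, b in product((-1, 1), repeat=2)]
for run in half_fraction:
    print(run)

# Each column is balanced: every factor appears at both levels equally often
for col in range(3):
    assert sum(run[col] for run in half_fraction) == 0
```

Choosing the generator is exactly the decision that must be made during the design phase: it fixes which effects can be estimated cleanly and which are aliased together.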
You can also download a 30-day free trial of EngineRoom for Excel, MoreSteam's statistical data
analysis software (an Excel add-in), to design and analyze several popular designed experiments. The
software includes tutorials on planning and executing full, fractional, and general factorial designs.
Taguchi adds this cost to society (consumers) of poor quality to the production cost of the product
to arrive at the total loss (cost). Taguchi uses designed experiments to produce product and process
designs that are more robust - less sensitive to part/process variation.
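Taguchi's loss-to-society idea is usually expressed as the quadratic loss function L(y) = k(y - T)^2, where T is the target value and k converts squared deviation into cost. The constants below are illustrative assumptions, not values from the tutorial.

```python
# Taguchi's quadratic loss function L(y) = k * (y - target)**2.
# target and k are illustrative; in practice k is set from the cost of a
# unit that just reaches the customer's tolerance limit.
def taguchi_loss(y, target=10.0, k=0.5):
    return k * (y - target) ** 2

print(taguchi_loss(10.0))  # exactly on target: zero loss
print(taguchi_loss(12.0))  # 2.0 - loss grows with the square of deviation
print(taguchi_loss(8.0))   # 2.0 - symmetric about the target
```

Unlike a pass/fail specification, this loss is nonzero for any deviation from target, which is why Taguchi's robust designs aim to shrink variation around the target rather than merely stay inside tolerance limits.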
References
Webster's Ninth New Collegiate Dictionary
Books
Mark J. Anderson and Patrick J. Whitcomb, DOE Simplified (Productivity, Inc., 2000). ISBN
1-56327-225-3. Recommended - this book is easy to understand and comes with a copy of
excellent D.O.E. software good for 180 days.
George E. P. Box, William G. Hunter and J. Stuart Hunter, Statistics for Experimenters - An
Introduction to Design, Data Analysis, and Model Building (John Wiley and Sons, Inc. 1978). ISBN
0-471-09315-7
Douglas C. Montgomery, Design and Analysis of Experiments (John Wiley & Sons, Inc., 1984) ISBN
0-471-86812-4.
Genichi Taguchi, Introduction to Quality Engineering - Designing Quality Into Products and
Processes (Asian Productivity Organization, 1986). ISBN 92-833-1084-5
Summary
Designed experiments are an advanced and powerful analysis tool to apply during improvement
projects. An effective experimenter can filter out noise and discover significant process factors,
which can then be used to control response properties; teams can then engineer a process to the
exact specification their product or service requires.
A well-built experiment can not only save project time but also solve critical problems that have
remained unseen in processes. In particular, interactions of factors can be observed and evaluated.
Ultimately, teams will learn which factors matter and which do not.
Additional Resources
Recorded Webcast: "Experimental Design in the Transactional Arena"