Supplemental Text Material
Montgomery
I have prepared supplemental text material for each chapter of the 5th edition of Design
and Analysis of Experiments. This material consists of (1) some extensions of and
elaboration on topics introduced in the text and (2) some new topics that I could not
easily find a “home” for in the text without disrupting the flow of the coverage within
each chapter, or making the book ridiculously long.
Some of this material is in partial response to the many suggestions that have been made
over the years by textbook users, who have always been gracious in their requests and
very often extremely helpful. However, sometimes there just wasn’t any way to easily
accommodate their suggestions directly in the book. Some of the supplemental material
is in direct response to FAQ’s or “frequently asked questions” from students. It also
reflects topics that I have found helpful in consulting on experimental design and analysis
problems, but again, there wasn’t any easy way to incorporate it in the text. Obviously,
there is also quite a bit of personal “bias” in my selection of topics for the supplemental
material. The coverage is far from comprehensive.
I have not felt as constrained about mathematical level or statistical background of the
readers in the supplemental material as I have tried to be in writing the textbook. There
are sections of the supplemental material that will require considerably more background
in statistics than is required to read the text material. However, I think that many
instructors will be able to use this supplemental material in their courses quite effectively,
depending on the maturity and background of the students. Hopefully, it will also
provide useful additional information for readers who wish to see more in-depth
discussion of some aspects of design, or who are attracted to the “eclectic” variety of
topics that I have included.
Contents
Chapter 1
1-1. More About Planning Experiments
1-2. Blank Guide Sheets from Coleman and Montgomery (1993)
1-3. Other Graphical Aids for Planning Experiments
1-4. Montgomery’s Theorems on Designed Experiments
Chapter 2
2-1. Models for the Data and the t-Test
2-2. Estimating the Model Parameters
2-3. A Regression Model Approach to the t-Test
2-4. Constructing Normal Probability Plots
2-5. More About Checking Assumptions in the t-Test
2-6. Some More Information About the Paired t-Test
Chapter 3
3-1. The Definition of Factor Effects
Chapter 4
4-1. Relative Efficiency of the RCBD
4-2. Partially Balanced Incomplete Block Designs
4-3. Youden Squares
4-4. Lattice Designs
Chapter 5
5-1. Expected Mean Squares in the Two-factor Factorial
5-2. The Definition of Interaction
5-3. Estimable Functions in the Two-factor Factorial Model
5-4. Regression Model Formulation of the Two-factor Factorial
5-5. Model Hierarchy
Chapter 6
6-1. Factor Effect Estimates are Least Squares Estimates
6-2. Yates’ Method for Calculating Factor Effects
6-3. A Note on the Variance of a Contrast
6-4. The Variance of the Predicted Response
6-5. Using Residuals to Identify Dispersion Effects
6-6. Center Points versus Replication of Factorial Points
6-7. Why We Work With Coded Design Variables
6-8. Testing for “Pure Quadratic” Curvature using a t-Test
Chapter 7
7-1. The Error Term in a Blocked Design
7-2. An Illustration of Why Blocking is Important
7-3. The Prediction Equation for a Blocked Design
7-4. Run Order is Important
Chapter 8
8-1. Yates’ Method for the Analysis of Fractional Factorials
8-2. Fold-Over and Partial Fold-Over of Fractional Factorials
8-3. Alias Structures in Fractional Factorials and Other Designs
8-4. Irregular Fractions
8-5. Supersaturated Designs
Chapter 9
9-1. Yates’ Algorithm for the 3^k Design
9-2. Aliasing in Three-level and Mixed-Level Designs
Chapter 10
10-1. The Covariance Matrix of the Regression Coefficients
10-2. Regression Models and Designed Experiments
10-3. Adjusted R2
10-4. Stepwise and Other Variable Selection Methods in Regression
10-5. The Variance of the Predicted Response
10-6. The Variance of Prediction Error
10-7. Leverage in a Regression Model
Chapter 11
11-1. The Method of Steepest Ascent
11-2. The Canonical Form of the Second-Order Response Surface Model
11-3. Center Points in the Central Composite Design
11-4. Center Runs in the Face-Centered Cube
11-5. A Note on Rotatability
11-6. The Taguchi Approach to Robust Parameter Design
11-6.1 The Taguchi Philosophy
11-6.2. Taguchi’s Technical Methods
Chapter 12
12-1. Expected Mean Squares for the Random Model
12-2. Expected Mean Squares for the Mixed Model
12-3. Restricted versus Unrestricted Mixed Models
12-4. Random and Mixed Models with Unequal Sample Size
12-5. Some Background Concerning the Modified Large Sample Method
12-6. A Confidence Interval on a Ratio of Variance Components using the Modified
Large Sample Method
12-7. Measurement Systems Capability Studies
Chapter 13
13-1. The Staggered, Nested Design
13-2. Inadvertent Split-Plots
13-3. Fractional Factorial Experiments in Split-Plots
Chapter 14
14-1. The Form of a Transformation
14-2. Selecting λ in the Box-Cox Method
14-3. Generalized Linear Models
14-3.1. Models with a Binary Response Variable
14-3.2. Estimating the Parameters in a Logistic Regression Model
14-3.3. Interpreting the Parameters in a Logistic Regression Model
14-3.4. Hypothesis Tests on Model Parameters
14-3.5. Poisson Regression
14-3.6. The Generalized Linear Model
14-3.7. Link Functions and Linear Predictors
Table 1. Master Guide Sheet. This guide can be used to help plan and design
an experiment. It serves as a checklist to improve experimentation and ensures
that results are not corrupted for lack of careful planning. Note that it may not be
possible to answer all questions completely. If convenient, use supplementary
sheets for topics 4-8.
1. Experimenter's Name and Organization:
Brief Title of Experiment:
2. Objectives of the experiment (should be unbiased, specific, measurable, and
of practical consequence):
3. Relevant background on response and control variables: (a) theoretical
relationships; (b) expert knowledge/experience; (c) previous experiments. Where does
this experiment fit into the study of the process or system?:
4. List: (a) each response variable, (b) the normal response variable level at which the
process runs, the distribution or range of normal operation, (c) the precision or range to
which it can be measured (and how):
5. List: (a) each control variable, (b) the normal control variable level at which the
process is run, and the distribution or range of normal operation, (c) the precision (s) or
range to which it can be set (for the experiment, not ordinary plant operations) and the
precision to which it can be measured, (d) the proposed control variable settings, and
(e) the predicted effect (at least qualitative) that the settings will have on each response
variable:
6. List: (a) each factor to be "held constant" in the experiment, (b) its desired level
and allowable s or range of variation, (c) the precision or range to which it can
measured (and how), (d) how it can be controlled, and (e) its expected impact, if any,
on each of the responses:
7. List: (a) each nuisance factor (perhaps time-varying), (b) measurement precision,
(c) strategy (e.g., blocking, randomization, or selection), and (d) anticipated effect:
8. List and label known or suspected interactions:
9. List restrictions on the experiment, e.g., ease of changing control variables,
methods of data acquisition, materials, duration, number of runs, type of experimental
unit (need for a split-plot design), “illegal” or irrelevant experimental regions, limits to
randomization, run order, cost of changing a control variable setting, etc.:
10. Give current design preferences, if any, and reasons for preference, including
blocking and randomization:
11. If possible, propose analysis and presentation techniques, e.g., plots,
ANOVA, regression, t tests, etc.:
12. Who will be responsible for the coordination of the experiment?
13. Should trial runs be conducted? Why / why not?
3. Relevant background on response and control variables: (a) theoretical relationships; (b) expert
knowledge/experience; (c) previous experiments. Where does this experiment fit into the study of the
process or system?
(a) Because of tool geometry, x-axis shifts would be expected to produce thinner blades, an undesirable
characteristic of the airfoil.
(b) This family of parts has been produced for over 10 years; historical experience indicates that
externally reground tools do not perform as well as those from the “internal” vendor (our own regrind
operation).
(c) Smith (1987) observed in an internal process engineering study that current spindle speeds and feed
rates work well in producing parts that are at the nominal profile required by the engineering drawings
- but no study was done of the sensitivity to variations in set-up parameters.
Results of this experiment will be used to determine machine set-up parameters for impeller machining. A
robust process is desirable; that is, on-target and low variability performance regardless of which tool
vendor is used.
The impeller involved in this experiment is shown in Figure 1. Table 3 lists the
information about the response variables. Notice that there are three response variables
of interest here.
Figure 1. Jet engine impeller (side view). The z-axis is vertical, x-axis is horizontal, y-
axis is into the page. 1 = height of wheel, 2 = diameter of wheel, 3 = inducer blade
height, 4 = exducer blade height, 5 = z height of blade.
As with response variables, most experimenters can easily generate a list of candidate
design factors to be studied in the experiment. Coleman and Montgomery call these
control variables. We often call them controllable variables, design factors, or process
variables in the text. Control variables can be continuous or categorical (discrete). The
ability of the experimenters to measure and set these factors is important. Generally,
small errors in the ability to set, hold or measure the levels of control variables are of
relatively little consequence. Sometimes when the measurement or setting error is large,
a numerical control variable such as temperature will have to be treated as a categorical
control variable (low or high temperature). Alternatively, there are errors-in-variables
statistical models that can be employed, although their use is beyond the scope of this
book. Information about the control variables for the CNC-machining example is shown
in Table 4.
Note: The x, y, and z axes are used to refer to the part and the CNC machine. The a axis refers only to the machine.
Held-constant factors are control variables whose effects are not of interest in this
experiment. The worksheets can force meaningful discussion about which factors are
adequately controlled, and if any potentially important factors (for purposes of the
present experiment) have inadvertently been held constant when they should have been
included as control variables. Sometimes subject-matter experts will elect to hold too
many factors constant and as a result fail to identify useful new information. Often this
information is in the form of interactions among process variables.
In the CNC experiment, this worksheet helped the experimenters recognize that the
machine had to be fully warmed up before cutting any blade forgings. The actual
procedure used was to mount the forged blanks on the machine and run a 30-minute cycle
without the cutting tool engaged. This allowed all machine parts and the lubricant to
reach normal, steady-state operating temperature. The use of a typical (i.e., mid-level)
operator and the use of one lot of forgings were decisions made for experimental
“insurance”. Table 5 shows the held-constant factors for the CNC-machining
experiment.
Table 5. Held-Constant Factors

Factor (units) | Desired experimental level and allowable range | Measurement precision (how known?) | How to control | Anticipated effects
Type of cutting fluid | Standard type | Not sure, but thought to be adequate | Use one type | None
Temperature of cutting fluid (degrees F) | 100°F when machine is warmed up | 1-2°F (estimate) | Do runs after machine has reached 100°F | None
Operator | Several operators normally work in the process | - | Use one "mid-level" operator | None
Titanium forgings | Material properties may vary from unit to unit | Precision of lab tests unknown | Use one lot (or block on forging lot, only if necessary) | Slight
Nuisance factors are variables that probably have some effect on the response, but which
are of little or no interest to the experimenter. They differ from held-constant factors in
that they either cannot be held entirely constant, or they cannot be controlled at all. For
example, if two lots of forgings were required to run the experiment, then the potential
lot-to-lot differences in the material would be a nuisance variable that could not be held
entirely constant. In a chemical process we often cannot control the viscosity (say) of the
incoming material feed stream—it may vary almost continuously over time. In these
cases, nuisance variables must be considered in either the design or the analysis of the
experiment. If a nuisance variable can be controlled, then we can use a design technique
called blocking to eliminate its effect. Blocking is discussed initially in Chapter 4. If the
nuisance variable cannot be controlled but it can be measured, then we can reduce its
effect by an analysis technique called the analysis of covariance, discussed in Chapter 14.
Coleman and Montgomery also found it useful to introduce an interaction sheet. The
concept of interactions among process variables is not an intuitive one, even to well-
trained engineers and scientists. Now it is clearly unrealistic to think that the
experimenters can identify all of the important interactions at the outset of the planning
process. In most situations, the experimenters really don’t know which main effects are
likely to be important, so asking them to make decisions about interactions is impractical.
However, sometimes the statistically-trained team members can use this as an
opportunity to teach others about the interaction phenomena. When more is known about
the process, it might be possible to use the worksheet to motivate questions such as “are
there certain interactions that must be estimated?” Table 7 shows the results of this
exercise for the CNC-machining example.
Table 7. Interactions

Control variable | y shift | z shift | Vendor | a shift | Speed | Height | Feed
x shift | P | | | | | |
y shift | - | P | | | | |
z shift | - | - | P | | | |
Vendor | - | - | - | P | | |
a shift | - | - | - | - | | |
Speed | - | - | - | - | - | F, D |
Height | - | - | - | - | - | - |

NOTE: Response variables are P = profile difference, F = surface finish, and D = surface defects.
Two final points: First, an experiment without a coordinator will probably fail.
Furthermore, if something can go wrong, it probably will, so the coordinator will
have a significant responsibility for checking to ensure that the experiment is being
conducted as planned. Second, concerning trial runs, this is often a very good idea—
particularly if this is the first in a series of experiments, or if the experiment has high
significance or impact. A trial run can consist of a center point in a factorial or a small
part of the experiment—perhaps one of the blocks. Since many experiments often
involve people and machines doing something they have not done before, practice is a
good idea. Another reason for trial runs is that we can use them to get an estimate of the
magnitude of experimental error. If the experimental error is much larger than
anticipated, then this may indicate the need for redesigning a significant part of the
experiment. Trial runs are also a good opportunity to ensure that measurement and data-
acquisition or collection systems are operating as anticipated. Most experimenters never
regret performing trial runs.
Response Variables

response variable (units) | normal operating level and range | measurement precision, accuracy (how known?) | relationship of response variable to objective

Control Variables

control variable (units) | normal level and range | measurement precision and setting error (how known?) | proposed settings, based on predicted effects | predicted effects (for various responses)

Nuisance Factors

nuisance factor (units) | measurement precision (how known?) | strategy (e.g., randomization, blocking, etc.) | anticipated effects

Interactions

control var. | 2 | 3 | 4 | 5 | 6 | 7 | 8
1 | | | | | | |
2 | - | | | | | |
3 | - | - | | | | |
4 | - | - | - | | | |
5 | - | - | - | - | | |
6 | - | - | - | - | - | |
7 | - | - | - | - | - | - |
I also use the following “theorems” at various times throughout the course. Most of them
relate to non-statistical aspects of DOX, but they point out important issues and concerns.
Theorem 3. Never let one person design and conduct an experiment alone, particularly if
that person is a subject-matter expert in the field of study.
Theorem 4. All experiments are designed experiments; some of them are designed well,
and some of them are designed really badly. The badly designed ones often tell you
nothing.
Finally, my friend Stu Hunter has for many years said that without good experimental
design, we often end up doing PARC analysis, which is an acronym for Planning After the Research is Complete.
The model presented in the text, equation (2-23) is more properly called a means model.
Since the mean is a location parameter, this type of model is also sometimes called a
location model. There are other ways to write the model for a t-test. One possibility is
$$y_{ij} = \mu + \tau_i + \varepsilon_{ij}, \qquad \begin{cases} i = 1,2 \\ j = 1,2,\ldots,n_i \end{cases}$$
where µ is a parameter that is common to all observed responses (an overall mean) and τi
is a parameter that is unique to the ith factor level. Sometimes we call τi the ith treatment
effect. This model is usually called the effects model.
Since the means model is
$$y_{ij} = \mu_i + \varepsilon_{ij}, \qquad \begin{cases} i = 1,2 \\ j = 1,2,\ldots,n_i \end{cases}$$
we see that the ith treatment or factor level mean is µ i = µ + τ i ; that is, the mean
response at factor level i is equal to an overall mean plus the effect of the ith factor. We
will use both types of models to represent data from designed experiments. Most of the
time we will work with effects models, because it’s the “traditional” way to present much
of this material. However, there are situations where the means model is useful, and even
more natural.
2-2. Estimating the Model Parameters
Because models arise naturally in examining data from designed experiments, we
frequently need to estimate the model parameters. We often use the method of least
squares for parameter estimation. This procedure chooses values for the model
parameters that minimize the sum of the squares of the errors εij. We will illustrate this
procedure for the means model. For simplicity, assume that the sample sizes for the two
factor levels are equal; that is n1 = n2 = n . The least squares function that must be
minimized is
$$L = \sum_{i=1}^{2}\sum_{j=1}^{n}\varepsilon_{ij}^2 = \sum_{i=1}^{2}\sum_{j=1}^{n}(y_{ij} - \mu_i)^2$$
Now
$$\frac{\partial L}{\partial \mu_1} = -2\sum_{j=1}^{n}(y_{1j} - \hat{\mu}_1) \quad \text{and} \quad \frac{\partial L}{\partial \mu_2} = -2\sum_{j=1}^{n}(y_{2j} - \hat{\mu}_2)$$
and equating these partial derivatives to zero yields the least squares normal equations
$$n\hat{\mu}_1 = \sum_{j=1}^{n} y_{1j}, \qquad n\hat{\mu}_2 = \sum_{j=1}^{n} y_{2j}$$
The solution to these equations gives the least squares estimators of the factor level
means. The solution is $\hat{\mu}_1 = \bar{y}_1$ and $\hat{\mu}_2 = \bar{y}_2$; that is, the sample averages at each factor
level are the estimators of the factor level means.
This result should be intuitive, as we learn early on in basic statistics courses that the
sample average usually provides a reasonable estimate of the population mean. However,
as we have just seen, this result can be derived easily from a simple location model using
least squares. It also turns out that if we assume that the model errors are normally and
independently distributed, the sample averages are the maximum likelihood estimators
of the factor level means. That is, if the observations are normally distributed, least
squares and maximum likelihood produce exactly the same estimators of the factor level
means. Maximum likelihood is a more general method of parameter estimation that
usually produces parameter estimates that have excellent statistical properties.
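As a quick numerical illustration, the minimal sketch below (assuming NumPy and SciPy are available; the data values are made up) minimizes the least squares function for the means model directly and confirms that the minimizers are the sample averages.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up observations for two factor levels with n = 5 runs each
y1 = np.array([16.9, 16.4, 17.2, 16.4, 16.5])
y2 = np.array([17.5, 17.6, 18.3, 18.0, 17.9])

# Least squares function L(mu1, mu2) for the means model y_ij = mu_i + eps_ij
def L(mu):
    return np.sum((y1 - mu[0]) ** 2) + np.sum((y2 - mu[1]) ** 2)

fit = minimize(L, x0=[0.0, 0.0])
print("numerical minimizers:", fit.x)
print("sample averages:     ", y1.mean(), y2.mean())  # the two should agree
```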
We can also apply the method of least squares to the effects model. Assuming equal
sample sizes, the least squares function is
$$L = \sum_{i=1}^{2}\sum_{j=1}^{n}\varepsilon_{ij}^2 = \sum_{i=1}^{2}\sum_{j=1}^{n}(y_{ij} - \mu - \tau_i)^2$$
Taking the partial derivatives of L with respect to µ, τ1, and τ2 and equating them to zero results in the following least squares normal equations:
$$2n\hat{\mu} + n\hat{\tau}_1 + n\hat{\tau}_2 = \sum_{i=1}^{2}\sum_{j=1}^{n} y_{ij}$$
$$n\hat{\mu} + n\hat{\tau}_1 = \sum_{j=1}^{n} y_{1j}$$
$$n\hat{\mu} + n\hat{\tau}_2 = \sum_{j=1}^{n} y_{2j}$$
Notice that if we add the last two of these normal equations we obtain the first one. That
is, the normal equations are not linearly independent and so they do not have a unique
solution. This has occurred because the effects model is overparameterized. This
situation occurs frequently; that is, the effects model for an experiment will always be an
overparameterized model.
One way to deal with this problem is to add another linearly independent equation to the
normal equations. The most common way to do this is to use the equation τ 1 + τ 2 = 0 .
This is, in a sense, an intuitive choice as it essentially defines the factor effects as
deviations from the overall mean µ. If we impose this constraint, the solution to the
normal equations is
$$\hat{\mu} = \bar{y}, \qquad \hat{\tau}_i = \bar{y}_i - \bar{y}, \quad i = 1,2$$
That is, the overall mean is estimated by the average of all 2n sample observations, while
each individual factor effect is estimated by the difference between the sample average
for that factor level and the average of all observations.
This is not the only possible choice for a linearly independent “constraint” for solving the
normal equations. Another possibility is to simply set the overall mean equal to a
constant, such as for example µ = 0. This results in the solution
$$\hat{\mu} = 0, \qquad \hat{\tau}_i = \bar{y}_i, \quad i = 1,2$$
Yet another possibility is the constraint $\hat{\tau}_2 = 0$, producing the solution
$$\hat{\mu} = \bar{y}_2, \qquad \hat{\tau}_1 = \bar{y}_1 - \bar{y}_2, \qquad \hat{\tau}_2 = 0$$
There are an infinite number of possible constraints that could be used to solve the
normal equations. An obvious question is “which solution should we use?” It turns out
that it really doesn’t matter. For each of the three solutions above (indeed for any solution
to the normal equations) we have
$$\hat{\mu}_i = \hat{\mu} + \hat{\tau}_i = \bar{y}_i, \quad i = 1,2$$
That is, the least squares estimator of the mean of the ith factor level will always be the
sample average of the observations at that factor level. So even if we cannot obtain
unique estimates for the parameters in the effects model we can obtain unique estimators
of a function of these parameters that we are interested in. We say that the mean of the
ith factor level is estimable. Any function of the model parameters that can be uniquely
estimated regardless of the constraint selected to solve the normal equations is called an
estimable function. This is discussed in more detail in Chapter 3.
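The following sketch (NumPy assumed, with made-up data) illustrates the point numerically: solving the normal equations for the effects model under two different constraints gives different values of µ̂ and the τ̂'s, but identical estimates of the estimable functions µᵢ = µ + τᵢ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
y1 = 10 + rng.normal(size=n)   # made-up observations, treatment 1
y2 = 12 + rng.normal(size=n)   # made-up observations, treatment 2
totals = np.array([y1.sum() + y2.sum(), y1.sum(), y2.sum()])

# Normal equations for (mu, tau1, tau2); rank deficient (rows 2 + 3 = row 1)
A = np.array([[2 * n, n, n],
              [n,     n, 0],
              [n,     0, n]], dtype=float)

def solve_with(constraint_row, constraint_rhs):
    # Append one linearly independent constraint and solve the augmented system
    Ac = np.vstack([A, constraint_row])
    bc = np.append(totals, constraint_rhs)
    sol, *_ = np.linalg.lstsq(Ac, bc, rcond=None)
    return sol

s1 = solve_with([0, 1, 1], 0.0)   # constraint tau1 + tau2 = 0
s2 = solve_with([0, 0, 1], 0.0)   # constraint tau2 = 0

for mu, t1, t2 in (s1, s2):
    print("mu1_hat =", mu + t1, " mu2_hat =", mu + t2)  # same under both constraints
print("sample averages:", y1.mean(), y2.mean())
```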
2-3. A Regression Model Approach to the t-Test
The two-sample t-test can be presented from the viewpoint of a simple linear regression
model. This is a very instructive way to think about the t-test, as it fits in nicely with the
general notion of a factorial experiment with factors at two levels, such as the golf experiment introduced in Chapter 1.

(Figure: scatter plot of Bond Strength versus the coded Factor Level x, at x = -1 and x = +1.)

The regression model is
$$y_{ij} = \beta_0 + \beta_1 x_{ij} + \varepsilon_{ij}$$
where β 0 and β 1 are the intercept and slope, respectively, of the regression line and the
regressor or predictor variable is x1 j = −1 and x2 j = +1 . The method of least squares can
be used to estimate the slope and intercept in this model. Assuming that we have equal
sample sizes n for each factor level the least squares normal equations are:
$$2n\hat{\beta}_0 = \sum_{i=1}^{2}\sum_{j=1}^{n} y_{ij}, \qquad 2n\hat{\beta}_1 = \sum_{j=1}^{n} y_{2j} - \sum_{j=1}^{n} y_{1j}$$
Analysis of Variance
Source DF SS MS F P
Regression 1 6.7048 6.7048 82.98 0.000
Residual Error 18 1.4544 0.0808
Total 19 8.1592
Notice that the estimate of the slope (given in the column labeled “Coef” and the row labeled “Factor L” above) is $\hat{\beta}_1 = 0.579 \cong \tfrac{1}{2}(\bar{y}_2 - \bar{y}_1) = \tfrac{1}{2}(17.92 - 16.76)$ and the estimate of the intercept is $\hat{\beta}_0 = 17.343 \cong \tfrac{1}{2}(\bar{y}_2 + \bar{y}_1) = \tfrac{1}{2}(17.92 + 16.76)$. (The difference is due to rounding the manual calculations for the sample averages to two decimal places.)
Furthermore, notice that the t-statistic associated with the slope is equal to 9.11, exactly
the same value we gave in Table 2-2 in the text. Now in simple linear regression, the t-
test on the slope is actually testing the hypotheses
$$H_0: \beta_1 = 0 \qquad H_1: \beta_1 \neq 0$$
and this is equivalent to testing H0 : µ 1 = µ 2 .
It is easy to show that the t-test statistic used for testing that the slope equals zero in
simple linear regression is identical to the usual two-sample t-test. Recall that to test the
above hypotheses in simple linear regression the t-statistic is
$$t_0 = \frac{\hat{\beta}_1}{\sqrt{\hat{\sigma}^2 / S_{xx}}}$$
where $S_{xx} = \sum_{i=1}^{2}\sum_{j=1}^{n}(x_{ij} - \bar{x})^2$ is the “corrected” sum of squares of the x's. Now in our specific problem $\bar{x} = 0$, $x_{1j} = -1$, and $x_{2j} = +1$, so $S_{xx} = 2n$. The residual mean square estimates $\sigma^2$ and is identical to the pooled variance estimate $S_p^2$ used in the two-sample t-test. Therefore
$$t_0 = \frac{\hat{\beta}_1}{\sqrt{S_p^2 / 2n}} = \frac{\tfrac{1}{2}(\bar{y}_2 - \bar{y}_1)}{\sqrt{S_p^2 / 2n}} = \frac{\bar{y}_2 - \bar{y}_1}{\sqrt{2S_p^2 / n}}$$
This is the usual two-sample t-test statistic for the case of equal sample sizes.
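The equivalence is easy to verify numerically. The sketch below (NumPy and SciPy assumed) uses the twenty bond strength observations that appear in the residual listing later in this section, fits the regression on the coded variable x = ±1, and compares the slope t-statistic with the pooled two-sample t-statistic.

```python
import numpy as np
from scipy import stats

# Bond strength data (the same observations listed in the residual output below)
y1 = np.array([16.85, 16.40, 17.21, 16.35, 16.52, 17.04, 16.96, 17.15, 16.59, 16.57])
y2 = np.array([17.50, 17.63, 18.25, 18.00, 17.86, 17.75, 18.22, 17.90, 17.96, 18.15])
y = np.concatenate([y1, y2])
x = np.concatenate([-np.ones(10), np.ones(10)])    # coded factor levels

# Least squares slope and intercept for y = b0 + b1*x
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

# t-statistic for H0: beta1 = 0
n = len(y)
sigma2 = np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2)   # residual mean square
Sxx = np.sum((x - x.mean()) ** 2)                     # corrected SS of the x's (= 2n per group)
t_slope = b1 / np.sqrt(sigma2 / Sxx)

t_two_sample = stats.ttest_ind(y2, y1).statistic      # pooled two-sample t-test
print(round(b1, 3), round(b0, 3))                     # approximately 0.579 and 17.343
print(round(t_slope, 2), round(t_two_sample, 2))      # both approximately 9.11
```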
Now if we plot the cumulative probabilities from the next-to-last column of this table
versus the rank-ordered observations from the second column on normal probability
paper, we will produce a graph that is identical to Figure 2-11a in the text.
A normal probability plot can also be constructed on ordinary graph paper by plotting the
standardized normal z-scores z(j) against the ranked observations, where the standardized
normal z-scores are obtained from
$$P(Z \le z_{(j)}) = \Phi(z_{(j)}) = \frac{j - 0.5}{n}$$
where $\Phi(\cdot)$ denotes the standard normal cumulative distribution. For example, if
$(j - 0.5)/n = 0.05$, then $\Phi(z_{(j)}) = 0.05$ implies that $z_{(j)} = -1.64$. The last column of the above
table displays the values of the normal z-scores. Plotting these values against the ranked
observations on ordinary graph paper will produce a normal probability plot equivalent to
Figure 2-11a. As noted in the text, many statistics computer packages present the normal
probability plot this way.
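The normal scores are simply inverse standard normal probabilities, so they are easy to generate; a minimal sketch (SciPy assumed, with an arbitrary small sample) is shown below.

```python
import numpy as np
from scipy.stats import norm

# Arbitrary sample; replace with the observations of interest
y = np.array([16.85, 16.40, 17.21, 16.35, 16.52])
n = len(y)

y_ranked = np.sort(y)              # y_(1) <= ... <= y_(n)
j = np.arange(1, n + 1)
p = (j - 0.5) / n                  # cumulative probabilities (j - 0.5)/n
z = norm.ppf(p)                    # standardized normal scores z_(j)

for yy, pp, zz in zip(y_ranked, p, z):
    print(f"{yy:8.2f}  {pp:5.2f}  {zz:6.2f}")
# Plotting z against the ranked observations on ordinary graph paper
# produces the normal probability plot described above.
```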
Recall that the means model is
$$y_{ij} = \mu_i + \varepsilon_{ij}, \qquad \begin{cases} i = 1,2 \\ j = 1,2,\ldots,n_i \end{cases}$$
and that the estimates of the parameters (the factor level means) in this model are the
sample averages. Therefore, we could say that the fitted model is
$$\hat{y}_{ij} = \bar{y}_i, \quad i = 1,2 \text{ and } j = 1,2,\ldots,n_i$$
That is, an estimate of the ijth observation is just the average of the observations in the ith
factor level. The difference between the observed value of the response and the predicted
(or fitted) value is called a residual, say
$$e_{ij} = y_{ij} - \bar{y}_i, \quad i = 1,2$$
The table below computes the values of the residuals from the portland cement mortar
tension bond strength data.
Observation j | $y_{1j}$ | $e_{1j} = y_{1j} - \bar{y}_1 = y_{1j} - 16.76$ | $y_{2j}$ | $e_{2j} = y_{2j} - \bar{y}_2 = y_{2j} - 17.92$
The figure below is a normal probability plot of these residuals from Minitab.
(Figure: normal probability plot of the residuals, Normal Score versus Residual.)
As noted in section 2-3 above we can compute the t-test statistic using a simple linear
regression model approach. Most regression software packages will also compute a table
or listing of the residuals from the model. The residuals from the Minitab regression
model fit obtained previously are as follows:
Obs Factor Level Bond Str Fit StDev Fit Residual St Resid
1 -1.00 16.8500 16.7640 0.0899 0.0860 0.32
2 -1.00 16.4000 16.7640 0.0899 -0.3640 -1.35
3 -1.00 17.2100 16.7640 0.0899 0.4460 1.65
4 -1.00 16.3500 16.7640 0.0899 -0.4140 -1.54
5 -1.00 16.5200 16.7640 0.0899 -0.2440 -0.90
6 -1.00 17.0400 16.7640 0.0899 0.2760 1.02
7 -1.00 16.9600 16.7640 0.0899 0.1960 0.73
8 -1.00 17.1500 16.7640 0.0899 0.3860 1.43
9 -1.00 16.5900 16.7640 0.0899 -0.1740 -0.65
10 -1.00 16.5700 16.7640 0.0899 -0.1940 -0.72
11 1.00 17.5000 17.9220 0.0899 -0.4220 -1.56
12 1.00 17.6300 17.9220 0.0899 -0.2920 -1.08
13 1.00 18.2500 17.9220 0.0899 0.3280 1.22
14 1.00 18.0000 17.9220 0.0899 0.0780 0.29
15 1.00 17.8600 17.9220 0.0899 -0.0620 -0.23
16 1.00 17.7500 17.9220 0.0899 -0.1720 -0.64
17 1.00 18.2200 17.9220 0.0899 0.2980 1.11
18 1.00 17.9000 17.9220 0.0899 -0.0220 -0.08
19 1.00 17.9600 17.9220 0.0899 0.0380 0.14
20 1.00 18.1500 17.9220 0.0899 0.2280 0.85
The column labeled “Fit” contains the averages of the two samples, computed to four
decimal places. The residuals in the sixth column of this table are the same (apart from
rounding) as we computed manually.
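If a regression package listing is not handy, the residuals and their normal probability plot can be produced directly; the sketch below (NumPy, SciPy, and Matplotlib assumed) uses the bond strength data from the listing above.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Bond strength data from the listing above
y1 = np.array([16.85, 16.40, 17.21, 16.35, 16.52, 17.04, 16.96, 17.15, 16.59, 16.57])
y2 = np.array([17.50, 17.63, 18.25, 18.00, 17.86, 17.75, 18.22, 17.90, 17.96, 18.15])

# Fitted values are the factor-level averages; residuals are deviations from them
resid = np.concatenate([y1 - y1.mean(), y2 - y2.mean()])

stats.probplot(resid, dist="norm", plot=plt)   # normal probability plot of the residuals
plt.show()
```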
Consider the single-factor effects model
$$y_{ij} = \mu + \tau_i + \varepsilon_{ij}, \qquad \begin{cases} i = 1,2,\ldots,a \\ j = 1,2,\ldots,n \end{cases}$$
where, for simplicity, we are working with the balanced case (all factor levels or
treatments are replicated the same number of times). Recall that in writing this model,
the ith factor level mean $\mu_i$ is broken up into two components, that is, $\mu_i = \mu + \tau_i$, where
$\tau_i$ is the ith treatment effect and µ is an overall mean. We usually define
$$\mu = \frac{\sum_{i=1}^{a}\mu_i}{a}$$
and this implies that
$$\sum_{i=1}^{a}\tau_i = 0.$$
This is actually an arbitrary definition, and there are other ways to define the overall
“mean”. For example, we could define
$$\mu = \sum_{i=1}^{a} w_i\mu_i \quad \text{where} \quad \sum_{i=1}^{a} w_i = 1$$
This would result in the treatment effects defined such that
$$\sum_{i=1}^{a} w_i\tau_i = 0$$
Here the overall mean is a weighted average of the individual treatment means. When
there are an unequal number of observations in each treatment, the weights wi could be
taken as the fractions of the treatment sample sizes ni/N.
To derive the expected value of the treatment mean square given in the text, write the single-factor model as
$$y_{ij} = \mu + \tau_i + \varepsilon_{ij}, \qquad \begin{cases} i = 1,2,\ldots,a \\ j = 1,2,\ldots,n \end{cases}$$
In addition, we will find the following useful:
$$E(\varepsilon_{ij}) = E(\varepsilon_{i.}) = E(\varepsilon_{..}) = 0, \qquad E(\varepsilon_{ij}^2) = \sigma^2, \qquad E(\varepsilon_{i.}^2) = n\sigma^2, \qquad E(\varepsilon_{..}^2) = an\sigma^2$$
Now
$$E(SS_{\text{Treatments}}) = E\left(\frac{1}{n}\sum_{i=1}^{a} y_{i.}^2\right) - E\left(\frac{y_{..}^2}{an}\right)$$
Consider the first term on the right hand side of the above expression:
$$E\left(\frac{1}{n}\sum_{i=1}^{a} y_{i.}^2\right) = \frac{1}{n}\sum_{i=1}^{a} E(n\mu + n\tau_i + \varepsilon_{i.})^2$$
Squaring the expression in parentheses and taking expectation results in
$$E\left(\frac{1}{n}\sum_{i=1}^{a} y_{i.}^2\right) = \frac{1}{n}\left[a(n\mu)^2 + n^2\sum_{i=1}^{a}\tau_i^2 + an\sigma^2\right] = an\mu^2 + n\sum_{i=1}^{a}\tau_i^2 + a\sigma^2$$
because the three cross-product terms are all zero. Now consider the second term on the
right hand side of E ( SSTreatments ) :
$$E\left(\frac{y_{..}^2}{an}\right) = \frac{1}{an}E\left(an\mu + n\sum_{i=1}^{a}\tau_i + \varepsilon_{..}\right)^2 = \frac{1}{an}E(an\mu + \varepsilon_{..})^2$$
since $\sum_{i=1}^{a}\tau_i = 0$. Upon squaring the term in parentheses and taking expectation, we obtain
$$E\left(\frac{y_{..}^2}{an}\right) = \frac{1}{an}\left[(an\mu)^2 + an\sigma^2\right] = an\mu^2 + \sigma^2$$
since the expected value of the cross-product is zero. Therefore,
$$E(SS_{\text{Treatments}}) = an\mu^2 + n\sum_{i=1}^{a}\tau_i^2 + a\sigma^2 - (an\mu^2 + \sigma^2) = \sigma^2(a-1) + n\sum_{i=1}^{a}\tau_i^2$$
and the expected value of the treatment mean square is
$$E(MS_{\text{Treatments}}) = \frac{E(SS_{\text{Treatments}})}{a-1} = \frac{\sigma^2(a-1) + n\sum_{i=1}^{a}\tau_i^2}{a-1} = \sigma^2 + \frac{n\sum_{i=1}^{a}\tau_i^2}{a-1}$$
This is the result given in the textbook.
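This expected value is easy to check by simulation. The sketch below (NumPy assumed; the values of µ, the τ's, σ, a, and n are arbitrary) averages the treatment mean square over many simulated balanced single-factor experiments and compares the result with σ² + nΣτᵢ²/(a − 1).

```python
import numpy as np

rng = np.random.default_rng(42)
a, n, sigma = 4, 5, 2.0
mu = 10.0
tau = np.array([-1.5, -0.5, 0.5, 1.5])          # treatment effects summing to zero

ms_treat = []
for _ in range(20000):
    # Simulate a balanced single-factor experiment: y_ij = mu + tau_i + eps_ij
    y = mu + tau[:, None] + rng.normal(scale=sigma, size=(a, n))
    ybar_i = y.mean(axis=1)
    ss_treat = n * np.sum((ybar_i - y.mean()) ** 2)
    ms_treat.append(ss_treat / (a - 1))

print(np.mean(ms_treat))                          # simulated E(MS_Treatments)
print(sigma**2 + n * np.sum(tau**2) / (a - 1))    # sigma^2 + n*sum(tau_i^2)/(a-1)
```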
Because $SS_E/\sigma^2$ has a chi-square distribution with $N - a$ degrees of freedom, we can write
$$P\left(\chi^2_{1-\alpha/2,\,N-a} \le \frac{SS_E}{\sigma^2} \le \chi^2_{\alpha/2,\,N-a}\right) = 1 - \alpha$$
where $\chi^2_{1-\alpha/2,\,N-a}$ and $\chi^2_{\alpha/2,\,N-a}$ are the lower and upper α/2 percentage points of the χ²
distribution with N - a degrees of freedom, respectively. Now if we rearrange the
expression inside the probability statement we obtain
$$P\left(\frac{SS_E}{\chi^2_{\alpha/2,\,N-a}} \le \sigma^2 \le \frac{SS_E}{\chi^2_{1-\alpha/2,\,N-a}}\right) = 1 - \alpha$$
Therefore, a 100(1 - α) percent confidence interval on the error variance σ² is
$$\frac{SS_E}{\chi^2_{\alpha/2,\,N-a}} \le \sigma^2 \le \frac{SS_E}{\chi^2_{1-\alpha/2,\,N-a}}$$
Sometimes an experimenter is interested in an upper bound on the error variance; that is,
how large could σ2 reasonably be? This can be useful when there is information about σ2
from a prior experiment and the experimenter is performing calculations to determine
sample sizes for a new experiment. An upper 100(1-α) percent confidence limit on σ2 is
given by
$$\sigma^2 \le \frac{SS_E}{\chi^2_{1-\alpha,\,N-a}}$$
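These limits are easy to compute with any chi-square quantile routine; a minimal sketch (SciPy assumed; the values of SS_E, N, and a below are made up) is:

```python
from scipy.stats import chi2

# Hypothetical values: error sum of squares, total observations, number of treatments
SSE, N, a = 20.0, 25, 5
alpha = 0.05
df = N - a

# chi2_{alpha/2, df} below denotes the upper alpha/2 percentage point, as in the text
lower = SSE / chi2.ppf(1 - alpha / 2, df)   # SS_E / chi2_{alpha/2, N-a}
upper = SSE / chi2.ppf(alpha / 2, df)       # SS_E / chi2_{1-alpha/2, N-a}
upper_bound = SSE / chi2.ppf(alpha, df)     # one-sided upper 100(1-alpha)% limit

print(f"two-sided 95% CI on sigma^2: ({lower:.3f}, {upper:.3f})")
print(f"upper 95% confidence limit:  {upper_bound:.3f}")
```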
If $E_1$ and $E_2$ denote the events that two individual 100(1 - α) percent confidence intervals are correct, the Bonferroni inequality gives
$$P(E_1 \cap E_2) \ge 1 - \alpha - \alpha = 1 - 2\alpha$$
Therefore, if we want the probability that both of the confidence intervals are correct to
be at least 1-α we can assure this by constructing 100(1-α/2) percent individual
confidence intervals.
If there are r confidence intervals of interest, we can use mathematical induction to show
that
r
P( E1 ∩ E2 ∩"∩ Er ) ≥ 1 − ∑ P( Ei )
i =1
≥ 1 − rα
As noted in the text, the Bonferroni method works reasonably well when the number of
simultaneous confidence intervals that you desire to construct, r, is not too large. As r
becomes larger, the lengths of the individual confidence intervals increase. The lengths
of the individual confidence intervals can become so large that the intervals are not very
informative. Also, it is not necessary that all individual confidence statements have the
same level of confidence. One might select 98 percent for one statement and 92 percent
for the other, resulting in two confidence intervals for which the simultaneous confidence
level is at least 90 percent.
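The arithmetic behind these statements is simple; a short sketch (plain Python, with illustrative numbers only) is:

```python
# Bonferroni bound: with r intervals having individual error rates alpha_i,
# the simultaneous confidence is at least 1 - sum(alpha_i).
alphas = [0.02, 0.08]                 # e.g., 98% and 92% individual intervals
print(1 - sum(alphas))                # simultaneous confidence is at least 0.90

r, joint_alpha = 6, 0.05              # r equal-width intervals, joint level 95%
print(1 - joint_alpha / r)            # each individual interval at 1 - alpha/r (about 99.2%)
```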
To find the least squares estimators we take the partial derivatives of L with respect to the
β’s and equate to zero:
$$\frac{\partial L}{\partial \beta_0} = -2\sum_{i=1}^{N}\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right) = 0$$
$$\frac{\partial L}{\partial \beta_1} = -2\sum_{i=1}^{N}\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right)x_i = 0$$
where $\hat{\beta}_0$ and $\hat{\beta}_1$ are the least squares estimators of the model parameters. So, to fit this
particular model to the experimental data by least squares, all we have to do is solve the
normal equations. Since there are only two equations in two unknowns, this is fairly
easy.
In the textbook we fit two regression models for the response variable tensile strength (y)
as a function of the cotton weight percent (x); a quadratic model
$$y = \beta_0 + \beta_1 x + \beta_2 x^2 + \varepsilon$$
and a cubic model
$$y = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3 + \varepsilon$$
The least squares normal equations for the quadratic model are
$$N\hat{\beta}_0 + \hat{\beta}_1\sum_{i=1}^{N} x_i + \hat{\beta}_2\sum_{i=1}^{N} x_i^2 = \sum_{i=1}^{N} y_i$$
$$\hat{\beta}_0\sum_{i=1}^{N} x_i + \hat{\beta}_1\sum_{i=1}^{N} x_i^2 + \hat{\beta}_2\sum_{i=1}^{N} x_i^3 = \sum_{i=1}^{N} x_i y_i$$
$$\hat{\beta}_0\sum_{i=1}^{N} x_i^2 + \hat{\beta}_1\sum_{i=1}^{N} x_i^3 + \hat{\beta}_2\sum_{i=1}^{N} x_i^4 = \sum_{i=1}^{N} x_i^2 y_i$$
and the least squares normal equations for the cubic model are
$$N\hat{\beta}_0 + \hat{\beta}_1\sum_{i=1}^{N} x_i + \hat{\beta}_2\sum_{i=1}^{N} x_i^2 + \hat{\beta}_3\sum_{i=1}^{N} x_i^3 = \sum_{i=1}^{N} y_i$$
$$\hat{\beta}_0\sum_{i=1}^{N} x_i + \hat{\beta}_1\sum_{i=1}^{N} x_i^2 + \hat{\beta}_2\sum_{i=1}^{N} x_i^3 + \hat{\beta}_3\sum_{i=1}^{N} x_i^4 = \sum_{i=1}^{N} x_i y_i$$
$$\hat{\beta}_0\sum_{i=1}^{N} x_i^2 + \hat{\beta}_1\sum_{i=1}^{N} x_i^3 + \hat{\beta}_2\sum_{i=1}^{N} x_i^4 + \hat{\beta}_3\sum_{i=1}^{N} x_i^5 = \sum_{i=1}^{N} x_i^2 y_i$$
$$\hat{\beta}_0\sum_{i=1}^{N} x_i^3 + \hat{\beta}_1\sum_{i=1}^{N} x_i^4 + \hat{\beta}_2\sum_{i=1}^{N} x_i^5 + \hat{\beta}_3\sum_{i=1}^{N} x_i^6 = \sum_{i=1}^{N} x_i^3 y_i$$
Obviously as the order of the model increases and there are more unknown parameters to
estimate, the normal equations become more complicated. In Chapter 10 we use matrix
methods to develop the general solution. Most statistics software packages have very
good regression capability.
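For instance, the quadratic and cubic fits can be obtained by solving the normal equations through a polynomial model matrix. The sketch below (NumPy assumed) uses the tensile strength observations tabulated later in this supplement and assumes, as in the textbook example, that the five cotton weight percentages are 15, 20, 25, 30, and 35.

```python
import numpy as np

# Tensile strength observations (same values as the coded data listing below),
# with five replicates at each assumed cotton percentage 15, 20, 25, 30, 35
strength = np.array([ 7,  7, 15, 11,  9,
                     12, 17, 12, 18, 18,
                     14, 18, 18, 19, 19,
                     19, 25, 22, 19, 23,
                      7, 10, 11, 15, 11], dtype=float)
x = np.repeat([15, 20, 25, 30, 35], 5).astype(float)

def poly_ls(x, y, degree):
    # Build the model matrix [1, x, x^2, ...] and solve the least squares problem
    X = np.vander(x, degree + 1, increasing=True)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

print("quadratic coefficients:", poly_ls(x, strength, 2))
print("cubic coefficients:    ", poly_ls(x, strength, 3))
```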
For the effects model with a treatments the least squares normal equations are again overparameterized; the last of them is
$$n\hat{\mu} + n\hat{\tau}_a = \sum_{j=1}^{n} y_{aj}$$
As before, the most common way to obtain a solution is to impose the constraint $\sum_{i=1}^{a}\hat{\tau}_i = 0$, which is consistent with defining the factor effects as deviations from the overall mean µ. If we impose this constraint, the solution to the normal equations is
$$\hat{\mu} = \bar{y}, \qquad \hat{\tau}_i = \bar{y}_i - \bar{y}, \quad i = 1,2,\ldots,a$$
That is, the overall mean is estimated by the average of all an sample observations, while
each individual factor effect is estimated by the difference between the sample average
for that factor level and the average of all observations.
Another possible choice of constraint is to set the overall mean equal to a constant, say
µ = 0. This results in the solution
$$\hat{\mu} = 0, \qquad \hat{\tau}_i = \bar{y}_i, \quad i = 1,2,\ldots,a$$
Still a third choice is τ a = 0 . This is the approach used in the SAS software, for
example. This choice of constraint produces the solution
$$\hat{\mu} = \bar{y}_a, \qquad \hat{\tau}_i = \bar{y}_i - \bar{y}_a, \quad i = 1,2,\ldots,a-1, \qquad \hat{\tau}_a = 0$$
There are an infinite number of possible constraints that could be used to solve the
normal equations. Fortunately, as observed in the book, it really doesn’t matter. For each
of the three solutions above (indeed for any solution to the normal equations) we have
$$\hat{\mu}_i = \hat{\mu} + \hat{\tau}_i = \bar{y}_i, \quad i = 1,2,\ldots,a$$
That is, the least squares estimator of the mean of the ith factor level will always be the
sample average of the observations at that factor level. So even if we cannot obtain
unique estimates for the parameters in the effects model we can obtain unique estimators
of a function of these parameters that we are interested in.
This is the idea of estimable functions. Any function of the model parameters that can
be uniquely estimated regardless of the constraint selected to solve the normal equations
is an estimable function.
What functions are estimable? It can be shown that the expected value of any
observation is estimable. Now
E ( yij ) = µ + τ i
so as shown above, the mean of the ith treatment is estimable. Any function that is a
linear combination of the left-hand side of the normal equations is also estimable. For
example, subtract the third normal equation from the second, yielding an estimate of $\tau_1 - \tau_2$.
Consequently, the difference in any two treatment effects is estimable. In general, any
contrast in the treatment effects $\sum_{i=1}^{a} c_i\tau_i$ where $\sum_{i=1}^{a} c_i = 0$ is estimable. Notice that the
individual model parameters $\mu, \tau_1, \ldots, \tau_a$ are not estimable, as there is no linear
combination of the normal equations that will produce these parameters separately.
However, this is generally not a problem, for as observed previously, the estimable
functions correspond to functions of the model parameters that are of interest to
experimenters.
For an excellent and very readable discussion of estimable functions, see Myers, R. H.
and Milton, J. S. (1991), A First Course in the Theory of the Linear Model, PWS-Kent,
Boston, MA.
Every ANOVA model can be written explicitly as an equivalent
linear regression model. We now show how this is done for the single-factor experiment
with a = 3 treatments.
The single-factor balanced ANOVA model is
$$y_{ij} = \mu + \tau_i + \varepsilon_{ij}, \qquad \begin{cases} i = 1,2,3 \\ j = 1,2,\ldots,n \end{cases}$$
The equivalent regression model is
$$y_{ij} = \beta_0 + \beta_1 x_{1j} + \beta_2 x_{2j} + \varepsilon_{ij}, \qquad \begin{cases} i = 1,2,3 \\ j = 1,2,\ldots,n \end{cases}$$
where the variables x1j and x2j are defined as follows:
$$x_{1j} = \begin{cases} 1 & \text{if observation } j \text{ is from treatment 1} \\ 0 & \text{otherwise} \end{cases}$$
$$x_{2j} = \begin{cases} 1 & \text{if observation } j \text{ is from treatment 2} \\ 0 & \text{otherwise} \end{cases}$$
The relationships between the parameters in the regression model and the parameters in
the ANOVA model are easily determined. For example, if the observations come from
treatment 1, then x1j = 1 and x2j = 0 and the regression model is
$$y_{1j} = \beta_0 + \beta_1(1) + \beta_2(0) + \varepsilon_{1j} = \beta_0 + \beta_1 + \varepsilon_{1j}$$
Since the ANOVA model represents these observations as $y_{1j} = \mu + \tau_1 + \varepsilon_{1j} = \mu_1 + \varepsilon_{1j}$, it follows that $\beta_0 + \beta_1 = \mu_1$. Similarly, for observations from treatment 2 we find $\beta_0 + \beta_2 = \mu_2$, and for observations from treatment 3, where $x_{1j} = x_{2j} = 0$, we have
$$\beta_0 = \mu_3 = \mu + \tau_3$$
Thus in the regression model formulation of the one-way ANOVA model, the regression
coefficients describe comparisons of the first two treatment means with the third
treatment mean; that is
$$\beta_0 = \mu_3, \qquad \beta_1 = \mu_1 - \mu_3, \qquad \beta_2 = \mu_2 - \mu_3$$
In general, if there are a treatments, the regression model will have a – 1 regressor
variables, say
$$y_{ij} = \beta_0 + \beta_1 x_{1j} + \beta_2 x_{2j} + \cdots + \beta_{a-1} x_{a-1,j} + \varepsilon_{ij}, \qquad \begin{cases} i = 1,2,\ldots,a \\ j = 1,2,\ldots,n \end{cases}$$
where
$$x_{ij} = \begin{cases} 1 & \text{if observation } j \text{ is from treatment } i \\ 0 & \text{otherwise} \end{cases}$$
Since these regressor variables only take on the values 0 and 1, they are often called
indicator variables. The relationship between the parameters in the ANOVA model and
the regression model is
$$\beta_0 = \mu_a, \qquad \beta_i = \mu_i - \mu_a, \quad i = 1,2,\ldots,a-1$$
Therefore the intercept is always the mean of the ath treatment and the regression
coefficient βi estimates the difference between the mean of the ith treatment and the ath
treatment.
Now consider testing hypotheses. Suppose that we want to test that all treatment means
are equal (the usual null hypothesis). If this null hypothesis is true, then the parameters in
the regression model become
$$\beta_0 = \mu_a, \qquad \beta_i = 0, \quad i = 1,2,\ldots,a-1$$
Using the general regression significance test procedure, we could develop a test for this
hypothesis. It would be identical to the F-statistic test in the one-way ANOVA.
Most regression software packages automatically test the hypothesis that all model
regression coefficients (except the intercept) are zero. We will illustrate this using
Minitab and the data from the experiment in Example 3-1. Recall in this example that the
development engineer is interested in determining if varying the cotton content in a
synthetic fiber affects the tensile strength, and he has run a completely randomized
experiment with five levels of cotton percentage and five replicates. For convenience, we
repeat the data from Table 3-1 here:
The data was converted into the xij 0/1 indicator variables as described above. Since
there are 5 treatments, there are only 4 of the x’s. The coded data that is used as input to
Minitab is shown below:
x1 x2 x3 x4 strength
1 0 0 0 7
1 0 0 0 7
1 0 0 0 15
1 0 0 0 11
1 0 0 0 9
0 1 0 0 12
0 1 0 0 17
0 1 0 0 12
0 1 0 0 18
0 1 0 0 18
0 0 1 0 14
0 0 1 0 18
0 0 1 0 18
0 0 1 0 19
0 0 1 0 19
0 0 0 1 19
0 0 0 1 25
0 0 0 1 22
0 0 0 1 19
0 0 0 1 23
0 0 0 0 7
0 0 0 0 10
0 0 0 0 11
0 0 0 0 15
0 0 0 0 11
The Regression Module in Minitab was run using the above spreadsheet where x1
through x4 were used as the predictors and the variable “strength” was the response. The
output is shown below.
Regression Analysis
Analysis of Variance
SOURCE DF SS MS F p
Regression 4 475.76 118.94 14.76 0.000
Error 20 161.20 8.06
Total 24 636.96
Notice that the ANOVA table in this regression output is identical to the output in Table 3-4.
Therefore, testing that the regression coefficients $\beta_1 = \beta_2 = \beta_3 = \beta_4 = 0$ in this
regression model is equivalent to testing the null hypothesis of equal treatment means in
the original ANOVA model formulation.
Also note that the estimate of the intercept or the “constant” term in the above table is the
mean of the 5th treatment. Furthermore, each regression coefficient is just the difference
between one of the treatment means and the 5th treatment mean.
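The Minitab results above are easy to reproduce with any least squares routine. The sketch below (NumPy assumed) rebuilds the indicator variables from the listing above, fits the regression, and recovers the same sums of squares (475.76 for regression and 161.20 for error).

```python
import numpy as np

# Strength data in treatment order (five replicates per treatment), from the listing above
y = np.array([ 7,  7, 15, 11,  9,
              12, 17, 12, 18, 18,
              14, 18, 18, 19, 19,
              19, 25, 22, 19, 23,
               7, 10, 11, 15, 11], dtype=float)
treatment = np.repeat(np.arange(1, 6), 5)

# Indicator variables x1..x4 (treatment 5 is the reference, all zeros)
X = np.column_stack([np.ones_like(y)] +
                    [(treatment == t).astype(float) for t in range(1, 5)])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
ss_total = np.sum((y - y.mean()) ** 2)
ss_error = np.sum((y - fitted) ** 2)
ss_regression = ss_total - ss_error

print("intercept (mean of treatment 5):        ", beta[0])
print("coefficients (trt i mean - trt 5 mean): ", beta[1:])
print("SS_Regression =", round(ss_regression, 2), " SS_Error =", round(ss_error, 2))
```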
The relative efficiency of the RCBD compared to a completely randomized design (CRD) is
$$R = \frac{(df_b + 1)(df_r + 3)}{(df_b + 3)(df_r + 1)} \cdot \frac{\sigma_r^2}{\sigma_b^2}$$
where σ 2r and σ b2 are the experimental error variances of the completely randomized and
randomized block designs, respectively, and df r and df b are the corresponding error
degrees of freedom. This statistic may be viewed as the increase in replications that is
required if a CRD is used as compared to a RCBD if the two designs are to have the same
sensitivity. The ratio of degrees of freedom in R is an adjustment to reflect the different
number of error degrees of freedom in the two designs.
To compute the relative efficiency, we must have estimates of σ 2r and σ b2 . We can use
the mean square for error MSE from the RCBD to estimate σ b2 , and it may be shown [see
Cochran and Cox (1957), pp. 112-114] that
$$\hat{\sigma}_r^2 = \frac{(b-1)MS_{\text{Blocks}} + b(a-1)MS_E}{ab - 1}$$
is an unbiased estimator of the error variance of the CRD. To illustrate the procedure,
consider the data in Example 4-1. Since $MS_E = 0.89$, we have
$$\hat{\sigma}_b^2 = 0.89$$
and
$$\hat{\sigma}_r^2 = \frac{(b-1)MS_{\text{Blocks}} + b(a-1)MS_E}{ab - 1} = \frac{(3)27.5 + 4(3)0.89}{4(4) - 1} = 6.21$$
Therefore our estimate of the relative efficiency of the RCBD in this example is
$$R = \frac{(df_b + 1)(df_r + 3)}{(df_b + 3)(df_r + 1)} \cdot \frac{\hat{\sigma}_r^2}{\hat{\sigma}_b^2} = \frac{(9+1)(12+3)}{(9+3)(12+1)} \cdot \frac{6.21}{0.89} = 6.71$$
This implies that we would have to use approximately seven times as many replicates
with a completely randomized design to obtain the same sensitivity as is obtained by
blocking on the metal coupons.
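The calculation is easily scripted; a minimal sketch (plain Python) using the quantities quoted above from Example 4-1 is:

```python
# Relative efficiency of the RCBD versus a CRD, using the Example 4-1 quantities above
a, b = 4, 4                       # treatments and blocks
ms_blocks, ms_error = 27.5, 0.89

sigma2_b = ms_error                                            # estimates sigma_b^2
sigma2_r = ((b - 1) * ms_blocks + b * (a - 1) * ms_error) / (a * b - 1)

df_b = (a - 1) * (b - 1)          # RCBD error degrees of freedom = 9
df_r = a * (b - 1)                # CRD error degrees of freedom = 12
R = ((df_b + 1) * (df_r + 3)) / ((df_b + 3) * (df_r + 1)) * sigma2_r / sigma2_b

print(round(sigma2_r, 2), round(R, 2))    # approximately 6.21 and 6.71
```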
Clearly, blocking has paid off handsomely in this experiment. However, suppose that
blocking was not really necessary. In such cases, if experimenters choose to block, what
do they stand to lose? In general, the randomized complete block design has (a – 1)(b - 1)
error degrees of freedom. If blocking was unnecessary and the experiment was run as a
completely randomized design with b replicates we would have had a(b - 1) degrees of
freedom for error. Thus, incorrectly blocking has cost a(b - 1) – (a - 1)(b - 1) = b - 1
degrees of freedom for error, and the test on treatment means has been made less
sensitive needlessly. However, if block effects really are large, then the experimental
error may be so inflated that significant differences in treatment means would remain
undetected. (Remember the incorrect analysis of Example 4-1.) As a general rule, when
the importance of block effects is in doubt, the experimenter should block and gamble
that the block means are different. If the experimenter is wrong, the slight loss in error
degrees of freedom will have little effect on the outcome as long as a moderate number of
degrees of freedom for error are available.
1. There are a treatments arranged in b blocks. Each block contains k runs and each
treatment appears in r blocks.
2. Two treatments which are ith associates appear together in λi blocks, i = 1, 2.
3. Each treatment has exactly ni ith associates, i = 1,2. The number ni is independent of
the treatment chosen.
4. If two treatments are ith associates, then the number of treatments that are jth
associates of one treatment and kth associates of the other treatment is $p^i_{jk}$ (i, j, k =
1, 2). It is convenient to write the $p^i_{jk}$ as (2 × 2) matrices with $p^i_{jk}$ the jkth element of
the ith matrix.
Table 1. A Partially Balanced Incomplete Block Design with Two Associate Classes

Block | Treatment Combinations
1 | 1, 2, 3
2 | 3, 4, 5
3 | 2, 5, 6
4 | 1, 2, 4
5 | 3, 4, 6
6 | 1, 5, 6
We now show how to determine the $p^i_{jk}$. Consider any two treatments that are first
associates, say 1 and 2. For treatment 1, the only first associate is 2 and the second
associates are 3, 4, 5, and 6. For treatment 2, the only first associate is 1 and the second
associates are 3, 4, 5, and 6. Combining this information produces Table 2. Counting the
number of treatments in the cells of this table, we have the $\{p^1_{jk}\}$ given above. The elements
$\{p^2_{jk}\}$ are determined similarly.
The linear statistical model for the partially balanced incomplete block design with two
associate classes is
yij = µ + τi + βj + εij
where µ is the overall mean, τi is the ith treatment effect, βj is the jth block effect, and εij
is the NID(0, σ2) random error component. We compute a total sum of squares, a block
sum of squares (unadjusted), and a treatment sum of squares (adjusted). As before, we
call
$$Q_i = y_{i.} - \frac{1}{k}\sum_{j=1}^{b} n_{ij} y_{.j}$$
the adjusted total for the ith treatment. We also define
$$S_1(Q_i) = \sum_{s} Q_s, \qquad s \text{ and } i \text{ first associates}$$
The adjusted treatment sum of squares is $\sum_{i=1}^{a}\hat{\tau}_i Q_i$ with $a - 1$ degrees of freedom, and the
unadjusted block sum of squares is
$$\frac{1}{k}\sum_{j=1}^{b} y_{.j}^2 - \frac{y_{..}^2}{bk}$$
with $b - 1$ degrees of freedom. The variance of the difference between two treatment effect estimates is
$$V(\hat{\tau}_u - \hat{\tau}_v) = \frac{2(k - c_i)\sigma^2}{r(k - 1)}$$
where treatments u and v are ith associates (i = 1, 2). This indicates that comparisons
between treatments are not all estimated with the same precision. This is a consequence
of the partial balance of the design.
We have given only the intrablock analysis. For details of the interblock analysis, refer
to Bose and Shimamoto (1952) or John (1971). The second reference contains a good
discussion of the general theory of incomplete block designs. An extensive table of
partially balanced incomplete block designs with two associate classes has been given by
Bose, Clatworthy, and Shrikhande (1954).
$$SS_T = \sum_i\sum_j\sum_h y_{ijh}^2 - \frac{y_{...}^2}{N} = 183.00 - \frac{(31)^2}{20} = 134.95$$
$$Q_1 = 12 - \tfrac{1}{4}(2 + 7 + 9 + 7) = 23/4$$
$$Q_2 = 2 - \tfrac{1}{4}(2 + 6 + 9 + 7) = -16/4$$
$$Q_3 = -4 - \tfrac{1}{4}(2 + 6 + 7 + 7) = -38/4$$
$$Q_4 = -2 - \tfrac{1}{4}(2 + 6 + 7 + 9) = -32/4$$
$$Q_5 = 23 - \tfrac{1}{4}(6 + 7 + 9 + 7) = 63/4$$
$$SS_{\text{Treatments(adjusted)}} = \frac{k\sum_{i=1}^{a} Q_i^2}{\lambda a}$$
Also,
$$SS_{\text{Days}} = \sum_{i=1}^{b}\frac{y_{i..}^2}{k} - \frac{y_{...}^2}{N} = \frac{(2)^2 + (6)^2 + (7)^2 + (9)^2 + (7)^2}{4} - \frac{(31)^2}{20} = 6.70$$
$$SS_{\text{Stations}} = \sum_{h=1}^{k}\frac{y_{..h}^2}{b} - \frac{y_{...}^2}{N} = \frac{(6)^2 + (9)^2 + (7)^2 + (9)^2}{5} - \frac{(31)^2}{20} = 1.35$$
Block or day effects may be assessed by computing the adjusted sum of squares for
blocks. This yields
$$SS_{\text{Days(adjusted)}} = \frac{r\sum_{j=1}^{b} Q_j'^2}{\lambda b} = \frac{4\left[(0/4)^2 + (5/4)^2 + (-1/4)^2 + (1/4)^2 + (-5/4)^2\right]}{(3)(5)} = 0.87$$
There are other types of lattice designs that occasionally prove useful. For example, the
cubic lattice design can be used for k3 treatments in k2 blocks of k runs. A lattice design
for k(k + 1) treatments in k + 1 blocks of size k is called a rectangular lattice. Details of
the analysis of lattice designs and tables of plans are given in Cochran and Cox (1957).
Supplemental References
Bose, R. C. and T. Shimamoto (1952). “Classification and Analysis of Partially
Balanced Incomplete Block Designs with Two Associate Classes”. Journal of the
American Statistical Association, Vol. 47, pp. 151-184.
Bose, R. C., W. H. Clatworthy, and S. S. Shrikhande (1954). Tables of Partially
Balanced Designs with Two Associate Classes. Technical Bulletin No. 107, North
Carolina Agricultural Experiment Station.
Smith, C. A. B. and H. O. Hartley (1948). “Construction of Youden Squares”. Journal of
the Royal Statistical Society Series B, Vol. 10, pp. 262-264.
where SSA is the sum of squares for the row factor. Since
$$SS_A = \frac{1}{bn}\sum_{i=1}^{a} y_{i..}^2 - \frac{y_{...}^2}{abn}$$
we have
$$E(SS_A) = E\left(\frac{1}{bn}\sum_{i=1}^{a} y_{i..}^2\right) - E\left(\frac{y_{...}^2}{abn}\right)$$
Recall that τ . = 0, β . = 0, (τβ ) . j = 0,(τβ ) i . = 0, and (τβ ) .. = 0 , where the “dot” subscript
implies summation over that subscript. Now
$$y_{i..} = \sum_{j=1}^{b}\sum_{k=1}^{n} y_{ijk} = bn\mu + bn\tau_i + n\beta_. + n(\tau\beta)_{i.} + \varepsilon_{i..} = bn\mu + bn\tau_i + \varepsilon_{i..}$$
and
$$E\left(\frac{1}{bn}\sum_{i=1}^{a} y_{i..}^2\right) = \frac{1}{bn}E\sum_{i=1}^{a}\left[(bn\mu)^2 + (bn)^2\tau_i^2 + \varepsilon_{i..}^2 + 2(bn)^2\mu\tau_i + 2bn\mu\varepsilon_{i..} + 2bn\tau_i\varepsilon_{i..}\right]$$
$$= \frac{1}{bn}\left[a(bn\mu)^2 + (bn)^2\sum_{i=1}^{a}\tau_i^2 + abn\sigma^2\right] = abn\mu^2 + bn\sum_{i=1}^{a}\tau_i^2 + a\sigma^2$$
$$E\left(\frac{y_{...}^2}{abn}\right) = \frac{1}{abn}E(abn\mu + \varepsilon_{...})^2 = \frac{1}{abn}E\left[(abn\mu)^2 + \varepsilon_{...}^2 + 2abn\mu\varepsilon_{...}\right] = \frac{1}{abn}\left[(abn\mu)^2 + abn\sigma^2\right] = abn\mu^2 + \sigma^2$$
Therefore
$$E(MS_A) = E\left(\frac{SS_A}{a-1}\right) = \frac{1}{a-1}E(SS_A) = \frac{1}{a-1}\left[abn\mu^2 + bn\sum_{i=1}^{a}\tau_i^2 + a\sigma^2 - (abn\mu^2 + \sigma^2)\right]$$
$$= \frac{1}{a-1}\left[\sigma^2(a-1) + bn\sum_{i=1}^{a}\tau_i^2\right] = \sigma^2 + \frac{bn\sum_{i=1}^{a}\tau_i^2}{a-1}$$
which is the result given in the textbook. The other expected mean squares are derived
similarly.
Define the row and column averages of the cell means as
$$\mu_{i.} = \frac{\sum_{j=1}^{b}\mu_{ij}}{b}, \qquad \mu_{.j} = \frac{\sum_{i=1}^{a}\mu_{ij}}{a}$$
Then if there is no interaction,
$$\mu_{ij} = \mu_{i.} + \mu_{.j} - \mu$$
and the interaction can be defined as
$$(\tau\beta)_{ij} = \mu_{ij} - (\mu + \tau_i + \beta_j)$$
or equivalently,
$$(\tau\beta)_{ij} = \mu_{ij} - (\mu_{ij'} + \mu_{i'j} - \mu_{i'j'}) = \mu_{ij} - \mu_{ij'} - \mu_{i'j} + \mu_{i'j'}$$
Therefore, we can determine whether there is interaction by determining whether all the
cell means can be expressed as µ ij = µ + τ i + β j .
Sometimes interactions are a result of the scale on which the response has been
measured. Suppose, for example, that factor effects act in a multiplicative fashion,
µ ij = µτ i β j
If we were to assume that the factors act in an additive manner, we would discover very
quickly that there is interaction present. This interaction can be removed by applying a
log transformation, since
log µ ij = log µ + log τ i + log β j
This suggests that the original measurement scale for the response was not the best one to
use if we want results that are easy to interpret (that is, no interaction). The log scale for
the response variable would be more appropriate.
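A tiny numerical sketch (NumPy assumed; the cell means are invented for illustration) makes the point concrete: multiplicative cell means show a nonzero interaction component on the original scale, but the interaction vanishes exactly after a log transformation.

```python
import numpy as np

mu, tau, beta = 2.0, np.array([1.0, 2.0, 4.0]), np.array([1.0, 3.0])
cell_means = mu * tau[:, None] * beta[None, :]     # mu_ij = mu * tau_i * beta_j

def interaction(m):
    # (tau*beta)_ij = m_ij - row mean - column mean + grand mean; zero if additive
    return m - m.mean(axis=1, keepdims=True) - m.mean(axis=0, keepdims=True) + m.mean()

print(np.round(interaction(cell_means), 3))           # nonzero: interaction present
print(np.round(interaction(np.log(cell_means)), 3))   # zeros: log scale is additive
```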
Finally, we observe that it is very possible for two factors to interact while the main
effect of one (or even both) of the factors is small, near zero. To illustrate, consider the two-
factor factorial with interaction in Figure 5-1 of the textbook. We have already noted that
the interaction is large, AB = -29. However, the main effect of factor A is A = 1. Thus,
the main effect of A is so small as to be negligible. Now this situation does not occur all
that frequently, and typically we find that interaction effects are not larger than the main
effects. However, large two-factor interactions can mask one or both of the main effects.
A prudent experimenter needs to be alert to this possibility.
For the two-factor factorial effects model, the least squares normal equations include
$$abn\hat{\mu} + bn\sum_{i=1}^{a}\hat{\tau}_i + an\sum_{j=1}^{b}\hat{\beta}_j + n\sum_{i=1}^{a}\sum_{j=1}^{b}(\widehat{\tau\beta})_{ij} = y_{...}$$
$$bn\hat{\mu} + bn\hat{\tau}_i + n\sum_{j=1}^{b}\hat{\beta}_j + n\sum_{j=1}^{b}(\widehat{\tau\beta})_{ij} = y_{i..}, \quad i = 1,2,\ldots,a$$
$$an\hat{\mu} + n\sum_{i=1}^{a}\hat{\tau}_i + an\hat{\beta}_j + n\sum_{i=1}^{a}(\widehat{\tau\beta})_{ij} = y_{.j.}, \quad j = 1,2,\ldots,b$$
It can be shown that each of the new parameters µ * , τ *i , β *j , and (τβ ) *ij is estimable.
Therefore, it is reasonable to expect that the hypotheses of interest can be expressed
simply in terms of these redefined parameters. In particular, it can be shown that there is
no interaction if and only if (τβ ) *ij = 0 . Now in the text, we presented the null hypothesis
of no interaction as H0 :(τβ ) ij = 0 for all i and j. This is not incorrect so long as it is
understood that it is the model in terms of the redefined (or “starred”) parameters that we
are using. However, it is important to understand that in general interaction is not a
parameter that refers only to the (ij)th cell, but it contains information from that cell, the
ith row, the jth column, and the overall average response.
One final point is that as a consequence of defining the new “starred” parameters, we
have included certain restrictions on them. In particular, we have
$$\tau_.^* = 0, \qquad \beta_.^* = 0, \qquad (\tau\beta)_{i.}^* = 0, \qquad (\tau\beta)_{.j}^* = 0, \qquad (\tau\beta)_{..}^* = 0$$
These are the “usual constraints” imposed on the normal equations. Furthermore, the
tests on main effects become
$$H_0: \tau_1^* = \tau_2^* = \cdots = \tau_a^* = 0$$
and
$$H_0: \beta_1^* = \beta_2^* = \cdots = \beta_b^* = 0$$
This is the way that these hypotheses are stated in the textbook, but of course, without the
“stars”.
We will use the battery life experiment of Example 5-1 to illustrate the procedure. Recall
that there are three material types of interest (factor A) and three temperatures (factor B),
and the response variable of interest is battery life. The regression model formulation of
an ANOVA model uses indicator variables. We will define the indicator variables for
the design factors material types and temperature as follows:
Material type X1 X2
1 0 0
2 1 0
3 0 1
Temperature X3 X4
15 0 0
70 1 0
125 0 1
The regression model is
$$y_{ijk} = \beta_0 + \beta_1 x_{ijk1} + \beta_2 x_{ijk2} + \beta_3 x_{ijk3} + \beta_4 x_{ijk4} + \beta_5 x_{ijk1}x_{ijk3} + \beta_6 x_{ijk1}x_{ijk4} + \beta_7 x_{ijk2}x_{ijk3} + \beta_8 x_{ijk2}x_{ijk4} + \varepsilon_{ijk} \qquad (1)$$
where i, j = 1,2,3 and the number of replicates is k = 1,2,3,4. In this model, the terms
β 1 xijk 1 + β 2 xijk 2 represent the main effect of factor A (material type), and the terms
β 3 xijk 3 + β 4 xijk 4 represent the main effect of temperature. Each of these two groups of
terms contains two regression coefficients, giving two degrees of freedom. The terms
β 5 xijk 1 xijk 3 + β 6 xijk 1 xijk 4 + β 7 xijk 2 xijk 3 + β 8 xijk 2 xijk 4 in Equation (1) represent the AB
interaction with four degrees of freedom. Notice that there are four regression
coefficients in this term.
Table 1 shows the data from this experiment, originally presented in Table 5-1 of the text.
In Table 1, we have shown the indicator variables for each of the 36 trials of this
experiment. The notation in this table is Xi = xi, i=1,2,3,4 for the main effects in the
above regression model and X5 = x1x3, X6 = x1x4, X7 = x2x3, and X8 = x2x4 for the
interaction terms in the model.
This table was used as input to the Minitab regression procedure, which produced the
following results for fitting Equation (1):
Regression Analysis
The regression equation is
y = 135 + 21.0 x1 + 9.2 x2 - 77.5 x3 - 77.2 x4 + 41.5 x5 - 29.0 x6
+79.2 x7 + 18.7 x8
Analysis of Variance
Source DF SS MS F P
Regression 8 59416.2 7427.0 11.00 0.000
Residual Error 27 18230.7 675.2
Total 35 77647.0
Source DF Seq SS
x1 1 141.7
x2 1 10542.0
x3 1 76.1
x4 1 39042.7
x5 1 788.7
x6 1 1963.5
x7 1 6510.0
x8 1 351.6
First examine the Analysis of Variance information in the above display. Notice that the
regression sum of squares with 8 degrees of freedom is equal to the sum of the sums of
squares for the main effects material types and temperature and the interaction sum of
squares from Table 5-5 in the textbook. Furthermore, the number of degrees of freedom
for regression (8) is the sum of the degrees of freedom for main effects and interaction (2
+2 + 4) from Table 5-5. The F-test in the above ANOVA display can be thought of as
testing the null hypothesis that all of the model coefficients are zero; that is, there are no
significant main effects or interaction effects, versus the alternative that there is at least
one nonzero model parameter. Clearly this hypothesis is rejected. Some of the treatments
produce significant effects.
Now consider the “sequential sums of squares” at the bottom of the above display.
Recall that X1 and X2 represent the main effect of material types. The sequential sums of
squares are computed based on an “effects added in order” approach, where the “in
order” refers to the order in which the variables are listed in the model. Now
$$SS_{\text{Material Types}} = SS(X_1) + SS(X_2 \mid X_1) = 141.7 + 10542.0 = 10683.7$$
which is the sum of squares for material types in Table 5-5. The notation $SS(X_2 \mid X_1)$
indicates that this is a “sequential” sum of squares; that is, it is the sum of squares for
variable X2 given that variable X1 is already in the regression model.
Similarly,
$$SS_{\text{Temperature}} = SS(X_3 \mid X_1, X_2) + SS(X_4 \mid X_1, X_2, X_3) = 76.1 + 39042.7 = 39118.8$$
which closely agrees with the sum of squares for temperature from Table 5-5. Finally,
note that the interaction sum of squares from Table 5-5 is
$$SS_{\text{Interaction}} = SS(X_5 \mid X_1, X_2, X_3, X_4) + SS(X_6 \mid X_1, X_2, X_3, X_4, X_5) + SS(X_7 \mid X_1, X_2, X_3, X_4, X_5, X_6) + SS(X_8 \mid X_1, X_2, X_3, X_4, X_5, X_6, X_7) = 788.7 + 1963.5 + 6510.0 + 351.6 = 9613.8$$
When the design is balanced, that is, we have an equal number of observations in each
cell, we can show that this model regression approach using the sequential sums of
squares produces results that are exactly identical to the “usual” ANOVA. Furthermore,
because of the balanced nature of the design, the order of the variables A and B does not
matter.
The “effects added in order” partitioning of the overall model sum of squares is
sometimes called a Type 1 analysis. This terminology is prevalent in the SAS statistics
package, but other authors and software systems also use it. An alternative partitioning
is to consider each effect as if it were added last to a model that contains all the others.
This “effects added last” approach is usually called a Type 3 analysis.
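As a small illustration of how a Type 1 (sequential) analysis can be produced with standard software, the following Python sketch uses statsmodels; the two-factor data set it builds is hypothetical, not the battery-life data analyzed above.

```python
# A minimal sketch (not the text's Minitab analysis) showing how a Type 1
# ("effects added in order") partition can be obtained with statsmodels.
# The data frame below is hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
levels = [-1, 0, 1]
# balanced 3x3 factorial with 4 replicates per cell (hypothetical response)
rows = [(a, b, r) for a in levels for b in levels for r in range(4)]
df = pd.DataFrame(rows, columns=["A", "B", "rep"])
df["y"] = 10 + 2 * df["A"] - 3 * df["B"] + rng.normal(scale=1.0, size=len(df))

# C() treats A and B as categorical factors; typ=1 requests sequential sums of squares
model = ols("y ~ C(A) + C(B) + C(A):C(B)", data=df).fit()
print(sm.stats.anova_lm(model, typ=1))
```

Passing typ=3 to anova_lm requests the "effects added last" partition mentioned above, although its interpretation depends on the contrast coding used for the factors.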
There is another way to use the regression model formulation of the two-factor factorial
to generate the standard F-tests for main effects and interaction. Consider fitting the
model in Equation (1), and let the regression sum of squares in the Minitab output above
for this model be the model sum of squares for the full model. Thus,
SS Model ( FM ) = 59416.2
with 8 degrees of freedom. Suppose we want to test the hypothesis that there is no
interaction. In terms of model (1), the no-interaction hypothesis is
$$H_0: \beta_5 = \beta_6 = \beta_7 = \beta_8 = 0 \qquad (2)$$
$$H_1: \text{at least one } \beta_j \neq 0,\ j = 5, 6, 7, 8$$
When the null hypothesis is true, a reduced model is
$$y_{ijk} = \beta_0 + \beta_1 x_{ijk1} + \beta_2 x_{ijk2} + \beta_3 x_{ijk3} + \beta_4 x_{ijk4} + \varepsilon_{ijk} \qquad (3)$$
Fitting Equation (3) using Minitab produces the following:
Analysis of Variance
Source DF SS MS F P
Regression 4 49802 12451 13.86 0.000
Residual Error 31 27845 898
Total 35 77647
The test of the no-interaction hypothesis in (2) can be carried out by comparing the full and reduced models through the extra sum of squares for the interaction terms:
$$F_0 = \frac{[SS_{\text{Model}}(FM) - SS_{\text{Model}}(RM)]/4}{MS_E(FM)} = \frac{(59416.2 - 49802)/4}{675.2} = 3.56$$
which is the same F-ratio for interaction reported in Table 5-5. In a similar fashion, testing that there is no material type effect is equivalent to testing
$$H_0: \beta_1 = \beta_2 = 0 \qquad (4)$$
$$H_1: \text{at least one } \beta_j \neq 0,\ j = 1, 2$$
To test the hypotheses in (4), we fit the reduced model
$$y_{ijk} = \beta_0 + \beta_1 x_{ijk1} + \beta_2 x_{ijk2} + \varepsilon_{ijk} \qquad (5)$$
The Minitab analysis of variance for this model is
Analysis of Variance
Source DF SS MS F P
Regression 2 10684 5342 2.63 0.087
Residual Error 33 66963 2029
Total 35 77647
Notice that the regression sum of squares for this model [Equation (5)] is essentially identical to the sum of squares for material types in Table 5-5 of the text. Similarly,
testing that there is no temperature effect is equivalent to testing
$$H_0: \beta_3 = \beta_4 = 0 \qquad (6)$$
$$H_1: \text{at least one } \beta_j \neq 0,\ j = 3, 4$$
To test the hypotheses in (6), all we have to do is fit the model
$$y_{ijk} = \beta_0 + \beta_3 x_{ijk3} + \beta_4 x_{ijk4} + \varepsilon_{ijk} \qquad (7)$$
The Minitab regression output is
Regression Analysis
The regression equation is
y = 145 - 37.3 x3 - 80.7 x4
Analysis of Variance
Source DF SS MS F P
Regression 2 39119 19559 16.75 0.000
Residual Error 33 38528 1168
Total 35 77647
Notice that the regression sum of squares for this model, Equation (7), is essentially equal
to the temperature main effect sum of squares from Table 5-5.
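The full-versus-reduced-model comparison described above can be summarized in a few lines of code. The sketch below (Python with SciPy, not part of the original Minitab analysis) computes the extra-sum-of-squares F statistic for the no-interaction hypothesis (2) from the quantities reported earlier.

```python
# A small sketch of the "extra sum of squares" F-test for interaction, using
# the model sums of squares quoted above (full model (1) vs. reduced model (3)).
from scipy import stats

ss_full, df_full = 59416.2, 8       # regression SS and df, full model (1)
ss_red, df_red = 49802.0, 4         # regression SS and df, reduced model (3)
ms_error, df_error = 675.2, 27      # residual mean square and df, full-model fit

f0 = ((ss_full - ss_red) / (df_full - df_red)) / ms_error
p_value = stats.f.sf(f0, df_full - df_red, df_error)
print(f"F0 = {f0:.2f}, P = {p_value:.4f}")   # F0 matches the interaction F in Table 5-5
```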
material type by linear effect of temperature interaction were not significant; that is, they
had fairly large P-values. We left these non-significant terms in the model to preserve
hierarchy.
The hierarchy principle states that if a model contains a higher-order term, then it should also contain all the lower-order terms that comprise it. So, if a second-order term,
such as an interaction, is in the model then all main effects involved in that interaction as
well as all lower-order interactions involving those factors should also be included in the
model.
There are times that hierarchy makes sense. Generally, if the model is going to be used
for explanatory purposes then a hierarchical model is quite logical. On the other hand,
there may be situations where the non-hierarchical model is much more logical. To
illustrate, consider another analysis of Example 5-4 in Table 2, which was obtained from
Design-Expert. We have selected a non-hierarchical model in which the quadratic effect
of temperature was not included (it was in all likelihood the weakest effect), but both
two-degree-of-freedom components of the temperature-material type interaction are in
the model. Notice from Table 2 that the residual mean square is smaller for the non-
hierarchical model (653.81 versus 675.21 from Table 5-15). This is important, because
the residual mean square can be thought of as the variance of the unexplained residual
variability, not accounted for by the model. That is, the non-hierarchical model is
actually a better fit to the experimental data.
Notice also that the standard errors of the model parameters are smaller for the non-hierarchical model. This is an indication that the parameters are estimated with better precision by leaving out the nonsignificant terms, even though doing so results in a model that does not obey the hierarchy principle. Furthermore, note that the 95 percent confidence intervals for the model parameters in the hierarchical model are always longer than the corresponding confidence intervals in the non-hierarchical model. The non-hierarchical model, in this example, does indeed provide better estimates of the factor effects than those obtained from the hierarchical model.
The least squares estimates of the model parameters β are chosen to minimize the sum of
the squares of the model errors:
$$L = \sum_{i=1}^{4}\left(y_i - \beta_0 - \beta_1 x_{i1} - \beta_2 x_{i2} - \beta_{12} x_{i1} x_{i2}\right)^2$$
Now since $\sum_{i=1}^{4} x_{i1} = \sum_{i=1}^{4} x_{i2} = \sum_{i=1}^{4} x_{i1}x_{i2} = \sum_{i=1}^{4} x_{i1}^2 x_{i2} = \sum_{i=1}^{4} x_{i1} x_{i2}^2 = 0$ because the design is orthogonal, the normal equations reduce to a very simple form:
$$4\hat\beta_0 = (1) + a + b + ab$$
$$4\hat\beta_1 = -(1) + a - b + ab$$
$$4\hat\beta_2 = -(1) - a + b + ab$$
$$4\hat\beta_{12} = (1) - a - b + ab$$
The solution is
$$\hat\beta_0 = \frac{(1) + a + b + ab}{4},\quad \hat\beta_1 = \frac{-(1) + a - b + ab}{4},\quad \hat\beta_2 = \frac{-(1) - a + b + ab}{4},\quad \hat\beta_{12} = \frac{(1) - a - b + ab}{4}$$
These regression model coefficients are exactly one-half the factor effect estimates.
Therefore, the effect estimates are least squares estimates. We will show this in a more
general manner in Chapter 10.
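A quick numerical check of this result is easy to carry out. The sketch below (Python; the four responses are hypothetical, not taken from the text) fits the regression model to a $2^2$ design and compares the least squares coefficients with one-half of the usual effect estimates.

```python
# Numerical check: for a 2^2 design the least squares coefficients are one-half
# the usual effect estimates. The responses (1), a, b, ab are hypothetical.
import numpy as np

one, a, b, ab = 28.0, 36.0, 18.0, 31.0          # hypothetical responses
y = np.array([one, a, b, ab])
x1 = np.array([-1, 1, -1, 1])
x2 = np.array([-1, -1, 1, 1])
X = np.column_stack([np.ones(4), x1, x2, x1 * x2])

beta = np.linalg.solve(X.T @ X, X.T @ y)        # least squares estimates
A_effect = (-one + a - b + ab) / 2              # usual effect estimates
B_effect = (-one - a + b + ab) / 2
AB_effect = (one - a - b + ab) / 2
print(beta[1:])                                  # coefficients for x1, x2, x1*x2
print(np.array([A_effect, B_effect, AB_effect]) / 2)   # identical values
```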
The estimates of the effects and sums of squares obtained by Yates' algorithm for the data
in Example 6-1 are in agreement with the results found there by the usual methods. Note
that the entry in column (3) [in general, column (k)] for the row corresponding to (1) is
always equal to the grand total of the observations.
In spite of its apparent simplicity, it is notoriously easy to make numerical errors in
Yates' algorithm, and we should be extremely careful in executing the procedure. As a
partial check on the computations, we may use the fact that the sum of the squares of the
elements in the jth column is 2j times the sum of the squares of the elements in the
response column. Note, however, that this check is subject to errors in sign in column j.
See Davies (1956), Good (1955, 1958), Kempthorne (1952), and Rayner (1967) for other
error-checking techniques.
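For readers who want to automate the procedure, here is a minimal sketch of Yates' algorithm in Python, including the column sum-of-squares check described above. The example responses are hypothetical.

```python
# A minimal sketch of Yates' algorithm for an unreplicated 2^k design in
# standard order, with the check that the sum of squares of the entries in
# column j equals 2^j times the sum of squares of the responses.
import numpy as np

def yates(y):
    """y: responses in standard (Yates) order, length 2^k."""
    y = np.asarray(y, dtype=float)
    k = int(np.log2(len(y)))
    col = y.copy()
    for j in range(1, k + 1):
        pairs = col.reshape(-1, 2)
        col = np.concatenate([pairs.sum(axis=1), pairs[:, 1] - pairs[:, 0]])
        # partial check on the computations (subject to sign errors, as noted above)
        assert np.isclose((col**2).sum(), 2**j * (y**2).sum())
    effects = col / 2**(k - 1)
    effects[0] = col[0] / 2**k        # the first entry of column (k) is the grand total
    ss = col[1:]**2 / 2**k            # sums of squares for the effects (n = 1)
    return col, effects, ss

# hypothetical unreplicated 2^3 responses in standard order
print(yates([60, 72, 54, 68, 52, 83, 45, 80]))
```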
$$\text{Effect} = \frac{\displaystyle\sum_{i=1}^{2^k} c_i y_i}{2^{k-1}}$$
where the contrast constants $c_i$ are all either $-1$ or $+1$. Therefore, the variance of an effect estimate is
$$V(\text{Effect}) = \frac{1}{(2^{k-1})^2}\sum_{i=1}^{2^k} c_i^2 V(y_i) = \frac{1}{(2^{k-1})^2}\sum_{i=1}^{2^k} c_i^2 \sigma_i^2 = \frac{1}{(2^{k-1})^2}\sum_{i=1}^{2^k} \sigma_i^2$$
because $c_i^2 = 1$. Therefore, all contrasts have the same variance. If each observation $y_i$ in the above equations is the total of n replicates at each design point, the result still holds.
Now recall that the variance of a model regression coefficient is $V(\hat\beta) = \sigma^2/(n2^k) = \sigma^2/N$, where $N$ is the total number of runs in the design. The variance of the predicted response is
$$V[\hat y(\mathbf{x})] = V\!\left(\hat\beta_0 + \sum_{i=1}^{k}\hat\beta_i x_i\right) = V(\hat\beta_0) + \sum_{i=1}^{k} V(\hat\beta_i x_i) = V(\hat\beta_0) + \sum_{i=1}^{k} x_i^2 V(\hat\beta_i) = \frac{\sigma^2}{N} + \frac{\sigma^2}{N}\sum_{i=1}^{k} x_i^2 = \frac{\sigma^2}{N}\left(1 + \sum_{i=1}^{k} x_i^2\right)$$
In the above development we have used the fact that the design is orthogonal, so there are no nonzero covariance terms when the variance operator is applied.
The Design-Expert software program plots contours of the standard deviation of the
predicted response; that is the square root of the above expression. If the design has
already been conducted and analyzed, the program replaces σ 2 with the error mean
square, so that the plotted quantity becomes
$$\sqrt{\hat V[\hat y(\mathbf{x})]} = \sqrt{\frac{MS_E}{N}\left(1 + \sum_{i=1}^{k} x_i^2\right)}$$
If the design has been constructed but the experiment has not been performed, then the
software plots (on the design evaluation menu) the quantity
$$\sqrt{\frac{V[\hat y(\mathbf{x})]}{\sigma^2}} = \sqrt{\frac{1}{N}\left(1 + \sum_{i=1}^{k} x_i^2\right)}$$
which can be thought of as a standardized standard deviation of prediction. To illustrate, consider a $2^2$ design with n = 3 replicates, the first example in Section 6-2. The plot of the
standardized standard deviation of the predicted response is shown below.
[Figure: Design-Expert contour plot of the standardized standard deviation of prediction (StdErr of Design) over the coded A-B region for the $2^2$ design with three replicates at each corner; contour labels of 0.337 and 0.433 are visible.]
At the corner of the design space where $x_1 = x_2 = 1$,
$$\sqrt{\frac{V[\hat y(\mathbf{x})]}{\sigma^2}} = \sqrt{\frac{1}{12}\left[1 + (1)^2 + (1)^2\right]} = \sqrt{\frac{3}{12}} = 0.5$$
This is also shown on the graph at the corners of the square.
Plots of the standardized standard deviation of the predicted response can be useful in
comparing designs. For example, suppose the experimenter in the above situation is
considering adding a fourth replicate to the design. The maximum standardized
prediction standard deviation in the region now becomes
$$\sqrt{\frac{V[\hat y(\mathbf{x})]}{\sigma^2}} = \sqrt{\frac{1}{16}\left[1 + (1)^2 + (1)^2\right]} = \sqrt{\frac{3}{16}} = 0.433$$
The plot of the standardized prediction standard deviation is shown below.
[Figure: Design-Expert contour plot of the standardized standard deviation of prediction for the $2^2$ design with four replicates at each corner; contour labels of 0.281, 0.311, 0.342, 0.372, and 0.403 are visible.]
Notice that adding another replicate has reduced the maximum prediction variance from
(0.5)2 = 0.25 to (0.433)2 = 0.1875. Comparing the two plots shown above reveals that the
standardized prediction standard deviation is uniformly lower throughout the design
region when an additional replicate is run.
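The standardized standard deviation of prediction is simple to evaluate directly. The short sketch below (Python) reproduces the corner values 0.5 and 0.433 quoted above for the $2^2$ design with three and four replicates.

```python
# A small sketch reproducing sqrt((1/N)(1 + sum x_i^2)) for the replicated
# 2^2 designs compared above.
import numpy as np

def std_prediction_sd(x, N):
    """Standardized standard deviation of prediction for a first-order model
    fit to an orthogonal two-level design with N runs, at coded point x."""
    x = np.asarray(x, dtype=float)
    return np.sqrt((1.0 + np.sum(x**2)) / N)

corner = [1.0, 1.0]
print(std_prediction_sd(corner, N=12))   # 2^2 with n = 3 replicates -> 0.5
print(std_prediction_sd(corner, N=16))   # 2^2 with n = 4 replicates -> 0.433
```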
Sometimes we like to compare designs in terms of scaled prediction variance, defined
as
$$\frac{N\,V[\hat y(\mathbf{x})]}{\sigma^2}$$
This allows us to evaluate designs that have different numbers of runs. Since adding
replicates (or runs) to a design will generally always make the prediction variance get
smaller, the scaled prediction variance allows us to examine the prediction variance on a
per-observation basis. Note that for a 2k factorial and the “main effects only” model we
have been considering, the scaled prediction variance is
$$\frac{N\,V[\hat y(\mathbf{x})]}{\sigma^2} = 1 + \sum_{i=1}^{k} x_i^2 = 1 + \rho^2$$
where $\rho^2 = \sum_{i=1}^{k} x_i^2$ is the squared distance of the point where prediction is required from the center of the design space ($\mathbf{x} = \mathbf{0}$). Notice that the $2^k$ design achieves this scaled prediction variance regardless of the number of replicates. The maximum value that the scaled prediction variance can have over the design region is
$$\text{Max}\;\frac{N\,V[\hat y(\mathbf{x})]}{\sigma^2} = 1 + k$$
It can be shown that no other design over this region can achieve a smaller maximum
scaled prediction variance, so the 2k design is in some sense an optimal design. We will
discuss optimal designs more in Chapter 11.
[Figure: Design-Expert plot of the residuals from the defects model versus closing time (coded -1, 0, +1).]
This plot indicates that factor D has a potential dispersion effect. The normal probability
plot of the dispersion statistic Fi * in Figure 6-28 clearly reveals that factor B is the only
factor that has an effect on dispersion. Therefore, if you are going to use model residuals
to search for dispersion effects, it is really important to select the right model for the
location effects.
If we use the replicated design the scaled prediction variance is (see Section 6-4 above):
$$\frac{N\,V[\hat y(\mathbf{x})]}{\sigma^2} = 1 + \sum_{i=1}^{2} x_i^2 = 1 + \rho^2$$
Now consider the prediction variance when the design with center points is used. We
have
$$V[\hat y(\mathbf{x})] = V\!\left(\hat\beta_0 + \sum_{i=1}^{2}\hat\beta_i x_i\right) = V(\hat\beta_0) + \sum_{i=1}^{2} V(\hat\beta_i x_i) = V(\hat\beta_0) + \sum_{i=1}^{2} x_i^2 V(\hat\beta_i)$$
$$= \frac{\sigma^2}{8} + \frac{\sigma^2}{4}\sum_{i=1}^{2} x_i^2 = \frac{\sigma^2}{8}\left(1 + 2\sum_{i=1}^{2} x_i^2\right) = \frac{\sigma^2}{8}\left(1 + 2\rho^2\right)$$
Therefore, the scaled prediction variance for the design with center points is
$$\frac{N\,V[\hat y(\mathbf{x})]}{\sigma^2} = 1 + 2\rho^2$$
Clearly, replicating the corners in this example outperforms the strategy of replicating
center points, at least in terms of scaled prediction variance. At the corners of the square,
the scaled prediction variance for the replicated factorial is
$$\frac{N\,V[\hat y(\mathbf{x})]}{\sigma^2} = 1 + \rho^2 = 1 + 2 = 3$$
while for the design with center points it is
$$\frac{N\,V[\hat y(\mathbf{x})]}{\sigma^2} = 1 + 2\rho^2 = 1 + 2(2) = 5$$
However, prediction variance might not tell the complete story. If we only replicate the
corners of the square, we have no way to judge the lack of fit of the model. If the design
has center points, we can check for the presence of pure quadratic (second-order) terms,
so the design with center points is likely to be preferred if the experimenter is at all
uncertain about the order of the model he or she should be using.
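The comparison of scaled prediction variances can also be verified numerically from the design matrices. The sketch below (Python) assumes the two competing eight-run designs described above: the $2^2$ with the corners replicated twice, and the $2^2$ with four added center points.

```python
# A numerical check of the scaled prediction variance comparison above, using
# N x0'(X'X)^{-1} x0 for a first-order model in two factors.
import numpy as np

def scaled_pv(design, x0):
    X = np.column_stack([np.ones(len(design)), design])      # model terms: 1, x1, x2
    x0 = np.concatenate([[1.0], x0])
    return len(design) * x0 @ np.linalg.inv(X.T @ X) @ x0

corners = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], dtype=float)
replicated = np.vstack([corners, corners])                   # corners replicated twice
with_centers = np.vstack([corners, np.zeros((4, 2))])        # corners plus 4 center points

corner = np.array([1.0, 1.0])
print(scaled_pv(replicated, corner))    # 1 + rho^2 = 3 at the corners
print(scaled_pv(with_centers, corner))  # 1 + 2*rho^2 = 5 at the corners
```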
6-7. Why We Work With Coded Design Variables
Generally we perform all of the analysis and model-fitting for a $2^k$ factorial design in the coded design variables, $-1 \le x_i \le +1$, and not in the design factors' original (or, as they are sometimes called, engineering) units. When the engineering units are used, we can obtain different numerical results than in the coded-unit analysis, and often the results will not be as easy to interpret.
To illustrate some of the differences between the two analyses, consider the following
experiment. A simple DC-circuit is constructed in which two different resistors, 1Ω and
2Ω, can be connected. The circuit also contains an ammeter and a variable-output power
supply. With a resistor installed in the circuit the power supply is adjusted until a current
flow of either 4 amps or 6 amps is obtained. Then the voltage output of the power supply
is read from a voltmeter. Two replicates of a 22 factorial design are performed, and Table
2 presents the results.
Table 2. The Circuit Experiment
I (Amps) R (Ohms) x1 x2 V (Volts)
4 1 -1 -1 3.802
4 1 -1 -1 4.013
6 1 1 -1 6.065
6 1 1 -1 5.992
4 2 -1 1 7.934
4 2 -1 1 8.159
6 2 1 1 11.865
6 2 1 1 12.138
We know that Ohm’s law determines the observed voltage, apart from measurement
error. However, the analysis of this data via empirical modeling lends some insight to the
value of coded units and engineering units in designed experiments.
The following two displays show the regression models obtained using the design variables in engineering units and the usual coded variables ($x_1$ and $x_2$).
Analysis of Variance
Source DF SS MS F P
Regression 3 71.267 23.756 1085.95 0.000
Residual Error 4 0.088 0.022
Total 7 71.354
Analysis of Variance
Source DF SS MS F P
Regression 3 71.267 23.756 1085.95 0.000
Residual Error 4 0.088 0.022
Total 7 71.354
Consider first the coded variable analysis. The design is orthogonal, and the coded
variables are also orthogonal. Notice that both main effects (x1 = current) and (x2 =
resistance) are significant, as is the interaction. In the coded variable analysis, the
magnitudes of the model coefficients are directly comparable; that is, they all are
dimensionless and they measure the effect of changing each design factor over a one-unit
interval. Furthermore, they are all estimated with the same precision (notice that the
standard error of all three coefficients is 0.053). The interaction effect is smaller than
either main effect, and the effect of current is just slightly more than one-half the
resistance effect. This suggests that over the range of the factors studied, resistance is a
more important variable. Coded variables are very effective for determining the relative
size of factor effects.
Now consider the analysis based on the engineering units. In this model, only the
interaction is significant. The model coefficient for the interaction term is 0.9170 and the
standard error is 0.1046. We can construct a t-statistic for testing the hypothesis that the
interaction coefficient is unity:
$$t_0 = \frac{\hat\beta_{IR} - 1}{se(\hat\beta_{IR})} = \frac{0.9170 - 1}{0.1046} = -0.7935$$
The P-value for this test statistic is P = 0.76. Therefore, we cannot reject the null hypothesis that the coefficient is unity, which is consistent with Ohm’s law. Note that the
regression coefficients are not dimensionless, and they are estimated with differing
precision. This is because the experimental design, with the factors in the engineering
units, is not orthogonal.
Because the intercept and the main effects are not significant we could consider fitting a
model containing only the interaction term IR. The results are as follows.
S = 0.1255
Analysis of Variance
Source DF SS MS F P
Regression 1 520.76 520.76 33053.19 0.000
Residual Error 7 0.11 0.02
Total 8 520.87
Notice that the estimate of the interaction term regression coefficient is now different
than it was in the previous engineering-units analysis, because the design in engineering
units is not orthogonal. The coefficient is also virtually unity.
Generally, the engineering units are not directly comparable but they may have physical
meaning, as in the present example. This could lead to possible simplification based on
the underlying mechanism.
In almost all situations, the coded unit analysis is preferable. It is fairly unusual for a
simplification based on some underlying mechanism (as in our example) to occur. The
fact that coded variables let an experimenter see the relative importance of the design factors is really useful in practice.
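The two analyses can be reproduced with ordinary least squares. The sketch below (Python) fits the circuit data of Table 2 in both the coded and the engineering units; it is only an illustration of the computations, not the Design-Expert or Minitab output quoted above.

```python
# A sketch of the two analyses discussed above, fitting the circuit data of
# Table 2 in coded units and in engineering units with ordinary least squares.
import numpy as np

I = np.array([4, 4, 6, 6, 4, 4, 6, 6], dtype=float)            # amps
R = np.array([1, 1, 1, 1, 2, 2, 2, 2], dtype=float)            # ohms
V = np.array([3.802, 4.013, 6.065, 5.992, 7.934, 8.159, 11.865, 12.138])

x1 = (I - 5.0) / 1.0        # coded current: 4 -> -1, 6 -> +1
x2 = (R - 1.5) / 0.5        # coded resistance: 1 -> -1, 2 -> +1

def fit(c1, c2):
    X = np.column_stack([np.ones(8), c1, c2, c1 * c2])
    beta, *_ = np.linalg.lstsq(X, V, rcond=None)
    resid = V - X @ beta
    se = np.sqrt(resid @ resid / (8 - 4) * np.diag(np.linalg.inv(X.T @ X)))
    return beta, se

print(fit(x1, x2))   # coded units: all coefficient standard errors are equal
print(fit(I, R))     # engineering units: unequal standard errors, interaction near 1
```

In the coded fit the columns of the model matrix are orthogonal, so all of the coefficient standard errors are equal; in the engineering-unit fit they are not.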
and so we see that the difference in averages $\bar y_F - \bar y_C$ is an unbiased estimator of the sum of the pure quadratic model parameters. Now the variance of $\bar y_F - \bar y_C$ is
$$V(\bar y_F - \bar y_C) = \sigma^2\left(\frac{1}{n_F} + \frac{1}{n_C}\right)$$
Consequently, a test of the above hypotheses can be conducted using the statistic
$$t_0 = \frac{\bar y_F - \bar y_C}{\sqrt{\hat\sigma^2\left(\dfrac{1}{n_F} + \dfrac{1}{n_C}\right)}}$$
which under the null hypothesis follows a t distribution with $n_C - 1$ degrees of freedom. We would reject the null hypothesis (that is, no pure quadratic curvature) if $|t_0| > t_{\alpha/2,\,n_C-1}$.
This t-test is equivalent to the F-test given in the book. To see this, square the t-statistic
above:
$$t_0^2 = \frac{(\bar y_F - \bar y_C)^2}{\hat\sigma^2\left(\dfrac{1}{n_F} + \dfrac{1}{n_C}\right)} = \frac{n_F n_C (\bar y_F - \bar y_C)^2}{(n_F + n_C)\,\hat\sigma^2}$$
This ratio is computationally identical to the F-test presented in the textbook.
Furthermore, we know that the square of a t random variable with (say) v degrees of
freedom is an F random variable with 1 numerator and v denominator degrees of
freedom, so the t-test for “pure quadratic” effects is indeed equivalent to the F-test.
Supplemental References
Good, I. J. (1955). “The Interaction Algorithm and Practical Fourier Analysis”. Journal
of the Royal Statistical Society, Series B, Vol. 20, pp. 361-372.
Good, I. J. (1958). Addendum to “The Interaction Algorithm and Practical Fourier
Analysis”. Journal of the Royal Statistical Society, Series B, Vol. 22, pp. 372-375.
Rayner, A. A. (1967). “The Square Summing Check on the Main Effects and Interactions
in a 2n Experiment as Calculated by Yates’ Algorithm”. Biometrics, Vol. 23, pp. 571-573.
Run Std A B C D Filtration Rate
8 1 -1 -1 -1 -1 25
11 2 1 -1 -1 -1 71
1 3 -1 1 -1 -1 28
3 4 1 1 -1 -1 45
9 5 -1 -1 1 -1 68
12 6 1 -1 1 -1 60
2 7 -1 1 1 -1 60
13 8 1 1 1 -1 65
7 9 -1 -1 -1 1 23
6 10 1 -1 -1 1 80
16 11 -1 1 -1 1 45
5 12 1 1 -1 1 84
14 13 -1 -1 1 1 75
15 14 1 -1 1 1 86
10 15 -1 1 1 1 70
4 16 1 1 1 1 76
[Figure: Design-Expert normal probability plot of the effect estimates for the filtration rate response (A: Temperature, B: Pressure, C: Concentration, D: Stirring Rate); the effects A, C, D, and AC fall well off the straight line.]
Std Run Block A B C D
2 1 Block 1 1 -1 -1 -1
12 2 Block 1 1 1 -1 1
10 3 Block 1 1 -1 -1 1
15 4 Block 1 -1 1 1 1
14 5 Block 1 1 -1 1 1
4 6 Block 1 1 1 -1 -1
7 7 Block 1 -1 1 1 -1
3 8 Block 1 -1 1 -1 -1
5 9 Block 1 -1 -1 1 -1
8 10 Block 1 1 1 1 -1
11 11 Block 1 -1 1 -1 1
16 12 Block 1 1 1 1 1
1 13 Block 1 -1 -1 -1 -1
9 14 Block 1 -1 -1 -1 1
6 15 Block 1 1 -1 1 -1
13 16 Block 1 -1 -1 1 1
It turns out that in this case, the answer to that question is “no”. Now some analysis can
of course be performed, but it would basically consist of fitting a regression model to the
response data from the first 8 trials. Suppose that we fit a regression model containing an
intercept term and the four main effects. When things have gone wrong it is usually a
good idea to focus on simple objectives, making use of the data that are available. It
turns out that in that model we would actually be obtaining estimates of
Now suppose we feel comfortable in ignoring the three-factor and four-factor interaction
effects. However, even with these assumptions, our intercept term is “clouded” or
“confused” with two of the two-factor interactions, and the main effects of factors A and
B are “confused” with the other two-factor interactions. In the next chapter, we will refer
to the phenomena being observed here as aliasing of effects (its proper name). The
supplemental notes for Chapter 8 present a general method for deriving the aliases for the
factor effects. The Design-Expert software package can also be used to generate the
aliases by employing the Design Evaluation feature. Notice that in our example, not completing the experiment as originally planned has really disturbed the interpretation of the results.
Suppose that instead of completely randomizing all 16 runs, the experimenter had set this
24 design up in two blocks of 8 runs each, selecting in the usual way the ABCD
interaction to be confounded with blocks. Now if only the first 8 runs can be performed,
then it turns out that the estimates of the intercept and main factor effects from these 8
runs are
[Intercept] = Intercept
[A] = A + BCD
[B] = B + ACD
[C] = C + ABD
[D] = D + ABC
If we assume that the three-factor interactions are negligible, then we have reliable
estimates of all four main effects from the first 8 runs. The reason for this is that each
block of this design forms a one-half fraction of the 24 factorial, and this fraction allows
estimation of the four main effects free of any two-factor interaction aliasing. This
specific design (the one-half fraction of the 24) will be discussed in considerable detail in
Chapter 8.
This illustration points out the importance of thinking carefully about run order, even
when the experimenter is not obviously concerned about nuisance variables and blocking.
Remember:
Generally, if a $2^k$ factorial design is constructed in two blocks, and one of the blocks is lost, ruined, or never run, the $2^k/2 = 2^{k-1}$ runs that remain will always form a one-half fraction of the original design. It is almost always possible to learn something useful from such an experiment.
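This property is easy to confirm computationally. The sketch below (Python) splits a $2^4$ into two blocks on the sign of ABCD and checks that one block is a regular half fraction, so that within the block each main effect is aliased only with a three-factor interaction.

```python
# A quick check of the statement above: split a 2^4 into two blocks on the sign
# of ABCD and confirm that each block is a regular half fraction (I = +/- ABCD).
import itertools
import numpy as np

runs = np.array(list(itertools.product([-1, 1], repeat=4)))   # full 2^4
abcd = runs.prod(axis=1)
block = runs[abcd == -1]                                      # one 8-run block

# within the block the ABCD column is constant (the defining relation), and each
# main-effect column equals minus the product of the other three columns
print(np.unique(block.prod(axis=1)))                          # [-1]
print(np.all(block[:, 0] == -block[:, 1] * block[:, 2] * block[:, 3]))  # A = -BCD
```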
To take this general idea a bit further, suppose that we had originally set up the 16-run 24
factorial experiment in four blocks of four runs each. The design that we would obtain
using the standard methods from this chapter in the text gives the experiment in Table 3.
Now suppose that for some reason we can only run the first 8 trials of this experiment. It
is easy to verify that the first 8 trials in Table 3 do not form one of the usual 8-run blocks
produced by confounding the ABCD interaction with blocks. Therefore, the first 8 runs
in Table 3 are not a “standard” one-half fraction of the 24.
A logical question is “what can we do with these 8 runs?” Suppose, as before, that the
experimenter elects to concentrate on estimating the main effects. If we use only the first
eight runs from Table 3 and concentrate on estimating only the four main effects, it turns
out that what we really are estimating is
Once again, even assuming that all interactions beyond order two are negligible, our main
effect estimates are aliased with two-factor interactions.
Table 3. The $2^4$ Design in Four Blocks of Four Runs
Std Run Block A B C D
10 1 Block 1 1 -1 -1 1
15 2 Block 1 -1 1 1 1
3 3 Block 1 -1 1 -1 -1
6 4 Block 1 1 -1 1 -1
12 5 Block 2 1 1 -1 1
8 6 Block 2 1 1 1 -1
13 7 Block 2 -1 -1 1 1
1 8 Block 2 -1 -1 -1 -1
11 9 Block 3 -1 1 -1 1
2 10 Block 3 1 -1 -1 -1
7 11 Block 3 -1 1 1 -1
14 12 Block 3 1 -1 1 1
16 13 Block 4 1 1 1 1
5 14 Block 4 -1 -1 1 -1
9 15 Block 4 -1 -1 -1 1
4 16 Block 4 1 1 -1 -1
If we were able to obtain 12 of the original 16 runs (that is, the first three blocks of Table
3), then we can estimate
[BC] = BC - ABD
[BD] = BD - ABC
[CD] = CD – ABCD
If we can ignore three- and four-factor interactions, then we can obtain good estimates of
all four main effects and five of the six two-factor interactions. Once again, setting up
and running the experiment in blocks has proven to be a good idea, even though no
nuisance factor was anticipated. Finally, we note that it is possible to assemble three of
the four blocks from Table 3 to obtain a 12-run experiment that is slightly better than the
one illustrated above. This would actually be called a 3/4th fraction of the 24, an irregular
fractional factorial. These designs are discussed briefly in Chapter 8 and are available in
the Design-Expert software package.
Table 1. Yates' Algorithm for the $2^{4-1}_{IV}$ Fractional Factorial in Example 8-1
Treatment Combination    Response    (1)    (2)    (3)    Effect    Effect Estimate [2 × (3)/N]
[A] = A + BD + CE
[B] = B + AD + CDE
[C] = C + AE + BDE
[D] = D + AB + BCE
[E] = E + AC + BCD
[BC] = BC + DE + ABE + ACD
[BE] = BE + CD + ABC + ADE
Now suppose that after running the eight trials in Table 2, the largest effects are the main
effects A, B, and D, and the BC + DE interaction. The experimenter believes that all
other effects are negligible. Now this is a situation where fold-over of the original design
is not an attractive alternative. Recall that when a resolution III design is folded over by
reversing all the signs in the test matrix, the combined design is resolution IV.
Consequently, the BC and DE interactions will still be aliased in the combined design.
One could alternatively consider reversing signs in individual columns, but these
approaches will essentially require that another eight runs be performed.
The experimenter wants to fit the model
$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_4 x_4 + \beta_{23} x_2 x_3 + \beta_{45} x_4 x_5 + \varepsilon$$
where $x_1 = A$, $x_2 = B$, $x_3 = C$, $x_4 = D$, and $x_5 = E$. A partial fold-over is a design
containing fewer than eight runs that can be used to augment the original design and will
allow the experimenter to fit this model. One way to select the runs for the partial fold-
over is to select points from the remaining unused portion of the 25 such that the
variances of the model coefficients in the above regression equation are minimized. This
augmentation strategy is based on the idea of a D-optimal design, discussed in Chapter
11.
Design-Expert can utilize this strategy to find a partial fold-over. The design produced
by the computer program is shown in Table 3. This design completely dealiases the BC
and DE interactions.
Notice that the partial fold-over design requires four additional trials. Furthermore, these
trials are arranged in a second block that is orthogonal to the first block of eight trials.
This strategy is very useful in 16-run resolution IV designs, situations in which a full
fold-over would require another 16 trials. Often a partial fold-over with four or eight
runs can be used as an alternative.
As a second example, consider the 26-2 resolution IV design shown in Table 4. The alias
structure for the design is shown below the table.
Table 4. A 26-2 Resolution IV Design
Std Run Block Factor Factor Factor Factor Factor Factor
ord ord A:A B:B C:C D:D E:E F:F
10 1 Block 1 1 -1 -1 1 1 1
11 2 Block 1 -1 1 -1 1 1 -1
2 3 Block 1 1 -1 -1 -1 1 -1
12 4 Block 1 1 1 -1 1 -1 -1
16 5 Block 1 1 1 1 1 1 1
15 6 Block 1 -1 1 1 1 -1 1
8 7 Block 1 1 1 1 -1 1 -1
7 8 Block 1 -1 1 1 -1 -1 -1
5 9 Block 1 -1 -1 1 -1 1 1
1 10 Block 1 -1 -1 -1 -1 -1 -1
6 11 Block 1 1 -1 1 -1 -1 1
4 12 Block 1 1 1 -1 -1 -1 1
14 13 Block 1 1 -1 1 1 -1 -1
13 14 Block 1 -1 -1 1 1 1 -1
9 15 Block 1 -1 -1 -1 1 -1 1
3 16 Block 1 -1 1 -1 -1 1 1
Suppose that the main effects of factors A, B, C, and E are large, along with the AB + CE
interaction chain. A full fold-over of this design would involve reversing the signs in
columns B, C, D, E, and F. This would, of course, require another 16 trials. The D-
optimal partial fold-over approach requires only four additional runs. The augmented
design, obtained from Design-Expert, is shown in Table 5. These four runs form a second block that is orthogonal to the first block of 16 runs and allows the interactions of interest in the original alias chain to be separately estimated.
Table 5. The Partial Fold-Over
Std Run Block Factor Factor Factor Factor Factor Factor
A:A B:B C:C D:D E:E F:F
12 1 Block 1 1 1 -1 1 -1 -1
15 2 Block 1 -1 1 1 1 -1 1
2 3 Block 1 1 -1 -1 -1 1 -1
9 4 Block 1 -1 -1 -1 1 -1 1
5 5 Block 1 -1 -1 1 -1 1 1
8 6 Block 1 1 1 1 -1 1 -1
11 7 Block 1 -1 1 -1 1 1 -1
14 8 Block 1 1 -1 1 1 -1 -1
13 9 Block 1 -1 -1 1 1 1 -1
4 10 Block 1 1 1 -1 -1 -1 1
10 11 Block 1 1 -1 -1 1 1 1
6 12 Block 1 1 -1 1 -1 -1 1
7 13 Block 1 -1 1 1 -1 -1 -1
16 14 Block 1 1 1 1 1 1 1
3 15 Block 1 -1 1 -1 -1 1 1
1 16 Block 1 -1 -1 -1 -1 -1 -1
17 17 Block 2 1 -1 1 -1 -1 -1
18 18 Block 2 -1 1 -1 -1 -1 -1
19 19 Block 2 -1 -1 1 1 1 1
20 20 Block 2 1 1 -1 1 1 1
In this chapter we show how to find the alias relationships in a 2k-p fractional factorial
design by use of the complete defining relation. This method works well in simple
designs, such as the regular fractions we use most frequently, but it does not work as well
in more complex settings. Furthermore, there are some fractional factorials that do not
have defining relations, such as Plackett-Burman designs, so the defining relation method
will not work for these types of designs at all.
Fortunately, there is a general method available that works satisfactorily in many
situations. The method uses the polynomial or regression model representation of the
model, say
$$\mathbf{y} = \mathbf{X}_1\boldsymbol{\beta}_1 + \boldsymbol{\varepsilon}$$
where $\mathbf{y}$ is an $n \times 1$ vector of the responses, $\mathbf{X}_1$ is an $n \times p_1$ matrix containing the design matrix expanded to the form of the model that the experimenter is fitting, $\boldsymbol{\beta}_1$ is a $p_1 \times 1$ vector of the model parameters, and $\boldsymbol{\varepsilon}$ is an $n \times 1$ vector of errors. The least squares estimate of $\boldsymbol{\beta}_1$ is
$$\hat{\boldsymbol{\beta}}_1 = (\mathbf{X}_1'\mathbf{X}_1)^{-1}\mathbf{X}_1'\mathbf{y}$$
Suppose that the true model is
$$\mathbf{y} = \mathbf{X}_1\boldsymbol{\beta}_1 + \mathbf{X}_2\boldsymbol{\beta}_2 + \boldsymbol{\varepsilon}$$
where $\mathbf{X}_2$ is an $n \times p_2$ matrix containing additional variables that are not in the fitted model and $\boldsymbol{\beta}_2$ is a $p_2 \times 1$ vector of the parameters associated with these variables. It can easily be shown that
$$E(\hat{\boldsymbol{\beta}}_1) = \boldsymbol{\beta}_1 + (\mathbf{X}_1'\mathbf{X}_1)^{-1}\mathbf{X}_1'\mathbf{X}_2\boldsymbol{\beta}_2 = \boldsymbol{\beta}_1 + \mathbf{A}\boldsymbol{\beta}_2$$
where $\mathbf{A} = (\mathbf{X}_1'\mathbf{X}_1)^{-1}\mathbf{X}_1'\mathbf{X}_2$ is called the alias matrix. The elements of this matrix operating on $\boldsymbol{\beta}_2$ identify the alias relationships for the parameters in the vector $\boldsymbol{\beta}_1$.
We illustrate the application of this procedure with a familiar example. Suppose that we
have conducted a 23-1 design with defining relation I = ABC or I = x1x2x3. The model that
the experimenter plans to fit is the main-effects-only model
$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \varepsilon$$
In the notation defined above,
$$\boldsymbol{\beta}_1 = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \beta_3 \end{bmatrix}, \qquad \mathbf{X}_1 = \begin{bmatrix} 1 & -1 & -1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & 1 & 1 \end{bmatrix}$$
Suppose that the true model contains all the two-factor interactions, so that
$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_{12} x_1 x_2 + \beta_{13} x_1 x_3 + \beta_{23} x_2 x_3 + \varepsilon$$
and
$$\boldsymbol{\beta}_2 = \begin{bmatrix} \beta_{12} \\ \beta_{13} \\ \beta_{23} \end{bmatrix}, \qquad \mathbf{X}_2 = \begin{bmatrix} 1 & -1 & -1 \\ -1 & -1 & 1 \\ -1 & 1 & -1 \\ 1 & 1 & 1 \end{bmatrix}$$
Now
$$(\mathbf{X}_1'\mathbf{X}_1)^{-1} = \tfrac{1}{4}\mathbf{I}_4 \qquad \text{and} \qquad \mathbf{X}_1'\mathbf{X}_2 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 4 \\ 0 & 4 & 0 \\ 4 & 0 & 0 \end{bmatrix}$$
Therefore
$$E(\hat{\boldsymbol{\beta}}_1) = \boldsymbol{\beta}_1 + (\mathbf{X}_1'\mathbf{X}_1)^{-1}\mathbf{X}_1'\mathbf{X}_2\boldsymbol{\beta}_2$$
$$E\begin{bmatrix} \hat\beta_0 \\ \hat\beta_1 \\ \hat\beta_2 \\ \hat\beta_3 \end{bmatrix} = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \beta_3 \end{bmatrix} + \frac{1}{4}\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 4 \\ 0 & 4 & 0 \\ 4 & 0 & 0 \end{bmatrix}\begin{bmatrix} \beta_{12} \\ \beta_{13} \\ \beta_{23} \end{bmatrix} = \begin{bmatrix} \beta_0 \\ \beta_1 + \beta_{23} \\ \beta_2 + \beta_{13} \\ \beta_3 + \beta_{12} \end{bmatrix}$$
The interpretation of this, of course, is that each of the main effects is aliased with one of
the two-factor interactions, which we know to be the case for this design. While this is a
very simple example, the method is very general and can be applied to much more
complex designs.
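The alias matrix calculation above is a one-line computation once the two model matrices are written down. The sketch below (Python) reproduces it for the $2^{3-1}$ example.

```python
# A direct computation of the alias matrix A = (X1'X1)^{-1} X1'X2 for the
# 2^(3-1) example above (I = ABC), reproducing the aliases found by hand.
import numpy as np

x1 = np.array([-1, 1, -1, 1], dtype=float)
x2 = np.array([-1, -1, 1, 1], dtype=float)
x3 = x1 * x2                                      # defining relation I = ABC

X1 = np.column_stack([np.ones(4), x1, x2, x3])            # fitted model terms
X2 = np.column_stack([x1 * x2, x1 * x3, x2 * x3])         # omitted two-factor interactions

A = np.linalg.inv(X1.T @ X1) @ X1.T @ X2
print(A)   # rows: intercept, A, B, C; columns: AB, AC, BC -> A with BC, B with AC, C with AB
```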
The 12-run irregular resolution V fraction shown in Table 6 will allow all main effects
and two-factor interactions to be estimated. Several of these designs are available in the
Design-Expert software package.
Table 6. An Irregular Fraction
Std Run Block Factor 1 Factor 2 Factor 3 Factor 4
A:A B:B C:C D:D
1 12 Block 1 -1 -1 -1 -1
2 6 Block 1 1 1 -1 -1
3 7 Block 1 -1 -1 1 -1
4 2 Block 1 1 -1 1 -1
5 3 Block 1 -1 1 1 -1
6 8 Block 1 1 1 1 -1
7 9 Block 1 -1 -1 -1 1
8 4 Block 1 1 -1 -1 1
9 1 Block 1 -1 1 -1 1
10 11 Block 1 1 1 -1 1
11 10 Block 1 1 -1 1 1
12 5 Block 1 -1 1 1 1
The alias relationships associated with the irregular fractional factorial in Table 6 are
[A] = A - ACD
[B] = B - BCD
[C] = C - ABCD
[D] = D - ABCD
[AB] = AB - ABCD
[AC] = AC - BCD
[AD] = AD - BCD
[BC] = BC - ACD
[BD] = BD - ACD
[CD] = CD - 0.5 * ABC - 0.5 * ABD
Notice that all effects are estimated free of two-factor interactions, and that the two-factor
interactions are aliased with higher-order interactions. Thus, the same information about
these effects has been obtained from this design as would be found with a full 24.
If a full $2^4$ had been used, all factor effects would be orthogonal, and the standard error of each model regression coefficient would be $\sigma/\sqrt{16} = \sigma/4 = 0.25\sigma$. In the irregular fraction, the standard errors of the model regression coefficients are larger; therefore, the model parameters are not estimated as precisely as they would be by using a full factorial. Specifically, the
standard errors from the irregular fraction are
Term StdErr
A 0.35
B 0.35
C 0.35
D 0.35
AB 0.35
AC 0.35
AD 0.35
BC 0.35
BD 0.35
CD 0.31
Furthermore, these model coefficients are correlated. Generally, this is the price an experimenter pays for using an irregular fraction: correlated effect estimates and larger standard errors than would result from the more complete design.
obscure the contribution of another factor. Supersaturated designs are created to minimize
this amount of non-orthogonality between factors.
In a supersaturated design the number of factors (k, or columns) exceeds the number of
runs or experiments (n, rows). The goal is to find a design matrix, X, that is as close as
possible to orthogonal. The covariance matrix (X'X) of the design will have non-zero off-diagonal terms. Making these off-diagonal elements as small as possible makes the
design as close to orthogonal as possible.
Consider the shape of a design matrix for a supersaturated design. The design matrix will
have more columns than rows. Recall that n is the number of experiments. Initially, we
will restrict n to be even. Each column of X is composed of half +1's and half –1's, so the possible columns of X are all possible arrangements of n/2 +1's and n/2 –1's; that is, X has $\binom{n}{n/2}$ candidate columns. Let r be the near-orthogonality parameter. When r is zero, the
design is orthogonal. Let t be the value of an off-diagonal element of X'X. The value of t can range over $-n < t < n$. The off-diagonal elements are not continuous, since the entries of the design matrix are $-1$ or $+1$; they take on integer values in steps of 4. For a given number of rows n, the covariance between columns $c_i$ and $c_j$ is given by the inner product $c_i'c_j$, and $(c_i'c_j - n) \bmod 4 = 0$. The goal is to find the greatest number of
columns for a given degree of non-orthogonality.
When n is odd, the number of +1's may be selected as one more than the number of –1's. The number of candidate columns becomes $\binom{n}{(n-1)/2}$. The rest of the discussion for n
$$\rho = \sum s_{ij}^2 \Big/ \binom{k}{2},$$
where k is the number of columns in X′X . Another paper by Wu (1995) also generates
supersaturated designs and evaluates them with this criterion. Wu uses a D-optimal
column-swapping scheme as the generating method. The run times that Wu reported to find designs were very long, and Wu produced designs with many fewer columns than Lin. Wu claims that the designs his algorithm generated are superior based on the
secondary criterion, the ρ statistic. Recently Balkin and Lin [1998] have introduced
additional secondary criteria for evaluating and comparing supersaturated design
generation methods. The methods involve evaluating the design projectivity and a
criterion similar to the trace of the X′X matrix, a design-optimality criterion that we will
discuss in Chapter 11.
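As a small illustration of these secondary criteria, the sketch below (Python) computes ρ and the largest off-diagonal element of X'X for a randomly generated, balanced ±1 matrix with more columns than rows. The design is hypothetical, and $s_{ij}$ is taken here to mean the (i, j) off-diagonal element of X'X.

```python
# A sketch of the secondary criterion rho = sum(s_ij^2) / C(k, 2), computed for
# a hypothetical supersaturated +/-1 matrix with balanced columns.
import itertools
import numpy as np

rng = np.random.default_rng(7)
n, k = 8, 12                                           # more columns than runs
X = np.array([rng.permutation([1.0] * 4 + [-1.0] * 4) for _ in range(k)]).T

S = X.T @ X                                            # off-diagonal entries are the s_ij
pairs = list(itertools.combinations(range(k), 2))
rho = sum(S[i, j]**2 for i, j in pairs) / len(pairs)
max_sij = max(abs(S[i, j]) for i, j in pairs)
print(rho, max_sij)
```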
Supersaturated designs are typically analyzed by regression model-fitting methods, such
as forward selection. This is a procedure in which variables are selected one at a time
for inclusion in the model until no other variables appear useful in explaining the
response. These designs have not had widespread use, but they are an interesting and
potentially useful method for experimentation with systems where there are many
variables and only a very few of these are expected to produce large effects.
Supplemental References
$$2^r 3^t n$$
where r is the number of factors in the effect considered, t is the number of factors in the experiment minus the number of linear terms in this effect, and n is the number of replicates. For example, $B_L$ has the divisor $2^1 \times 3^1 \times 4 = 24$.
The sums of squares are obtained by squaring the element in column (2) and dividing by
the corresponding entry in the Divisor column. The Sum of Squares column now
contains all of the required quantities to construct an analysis of variance table if both of
the design factors A and B are quantitative. However, in this example, factor A (material
type) is qualitative; thus, the linear and quadratic partitioning of A is not appropriate.
Individual observations are used to compute the total sum of squares, and the error sum
of squares is obtained by subtraction.
The analysis of variance is summarized in Table 2. These are essentially the same results that were obtained by conventional analysis of variance methods in Example 5-1.
The matrix $\mathbf{A} = (\mathbf{X}_1'\mathbf{X}_1)^{-1}\mathbf{X}_1'\mathbf{X}_2$ is called the alias matrix. The elements of this matrix identify the alias relationships for the parameters in the vector $\boldsymbol{\beta}_1$.
This procedure can be used to find the alias relationships in three-level and mixed-level
designs. We now present two examples.
Example 1
Suppose that we have conducted an experiment using a 32 design, and that we are
interested in fitting the following model:
$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{12} x_1 x_2 + \beta_{11}\left(x_1^2 - \overline{x_1^2}\right) + \beta_{22}\left(x_2^2 - \overline{x_2^2}\right) + \varepsilon$$
This is a complete quadratic polynomial. The pure second-order terms are written in a
form that orthogonalizes these terms with the intercept. We will find the aliases in the
parameter estimates if the true model is a reduced cubic, say
$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{12} x_1 x_2 + \beta_{11}\left(x_1^2 - \overline{x_1^2}\right) + \beta_{22}\left(x_2^2 - \overline{x_2^2}\right) + \beta_{111} x_1^3 + \beta_{222} x_2^3 + \beta_{122} x_1 x_2^2 + \varepsilon$$
Now in the notation used above, the vector β 1 and the matrix X1 are defined as follows:
$$\boldsymbol{\beta}_1 = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \beta_{12} \\ \beta_{11} \\ \beta_{22} \end{bmatrix}, \qquad \mathbf{X}_1 = \begin{bmatrix}
1 & -1 & -1 & 1 & 1/3 & 1/3 \\
1 & -1 & 0 & 0 & 1/3 & -2/3 \\
1 & -1 & 1 & -1 & 1/3 & 1/3 \\
1 & 0 & -1 & 0 & -2/3 & 1/3 \\
1 & 0 & 0 & 0 & -2/3 & -2/3 \\
1 & 0 & 1 & 0 & -2/3 & 1/3 \\
1 & 1 & -1 & -1 & 1/3 & 1/3 \\
1 & 1 & 0 & 0 & 1/3 & -2/3 \\
1 & 1 & 1 & 1 & 1/3 & 1/3
\end{bmatrix}$$
Now
$$\mathbf{X}_1'\mathbf{X}_1 = \begin{bmatrix}
9 & 0 & 0 & 0 & 0 & 0 \\
0 & 6 & 0 & 0 & 0 & 0 \\
0 & 0 & 6 & 0 & 0 & 0 \\
0 & 0 & 0 & 4 & 0 & 0 \\
0 & 0 & 0 & 0 & 2 & 0 \\
0 & 0 & 0 & 0 & 0 & 2
\end{bmatrix}$$
and the other quantities we require are
$$\mathbf{X}_2 = \begin{bmatrix}
-1 & -1 & -1 \\
-1 & 0 & 0 \\
-1 & 1 & -1 \\
0 & -1 & 0 \\
0 & 0 & 0 \\
0 & 1 & 0 \\
1 & -1 & 1 \\
1 & 0 & 0 \\
1 & 1 & 1
\end{bmatrix}, \qquad \boldsymbol{\beta}_2 = \begin{bmatrix} \beta_{111} \\ \beta_{222} \\ \beta_{122} \end{bmatrix}, \qquad \mathbf{X}_1'\mathbf{X}_2 = \begin{bmatrix}
0 & 0 & 0 \\
6 & 0 & 4 \\
0 & 6 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}$$
The expected value of the fitted model parameters is
$$E(\hat{\boldsymbol{\beta}}_1) = \boldsymbol{\beta}_1 + (\mathbf{X}_1'\mathbf{X}_1)^{-1}\mathbf{X}_1'\mathbf{X}_2\boldsymbol{\beta}_2$$
or
$$E\begin{bmatrix} \hat\beta_0 \\ \hat\beta_1 \\ \hat\beta_2 \\ \hat\beta_{12} \\ \hat\beta_{11} \\ \hat\beta_{22} \end{bmatrix} = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \beta_{12} \\ \beta_{11} \\ \beta_{22} \end{bmatrix} + \begin{bmatrix}
9 & 0 & 0 & 0 & 0 & 0 \\
0 & 6 & 0 & 0 & 0 & 0 \\
0 & 0 & 6 & 0 & 0 & 0 \\
0 & 0 & 0 & 4 & 0 & 0 \\
0 & 0 & 0 & 0 & 2 & 0 \\
0 & 0 & 0 & 0 & 0 & 2
\end{bmatrix}^{-1}\begin{bmatrix}
0 & 0 & 0 \\
6 & 0 & 4 \\
0 & 6 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}\begin{bmatrix} \beta_{111} \\ \beta_{222} \\ \beta_{122} \end{bmatrix}$$
The alias matrix turns out to be
$$\mathbf{A} = \begin{bmatrix}
0 & 0 & 0 \\
1 & 0 & 2/3 \\
0 & 1 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}$$
This leads to the following alias relationships:
$$E(\hat\beta_0) = \beta_0$$
$$E(\hat\beta_1) = \beta_1 + \beta_{111} + (2/3)\beta_{122}$$
$$E(\hat\beta_2) = \beta_2 + \beta_{222}$$
$$E(\hat\beta_{12}) = \beta_{12}$$
$$E(\hat\beta_{11}) = \beta_{11}$$
$$E(\hat\beta_{22}) = \beta_{22}$$
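The same alias-matrix computation can be done numerically for this $3^2$ example; the sketch below (Python) rebuilds X1 and X2 and reproduces the matrix A above, including the 2/3 entry.

```python
# The alias-matrix calculation for the 3^2 example above: full quadratic fitted,
# with the reduced cubic terms x1^3, x2^3, x1*x2^2 omitted from the model.
import itertools
import numpy as np

pts = np.array(list(itertools.product([-1, 0, 1], repeat=2)), dtype=float)
x1, x2 = pts[:, 0], pts[:, 1]

X1 = np.column_stack([np.ones(9), x1, x2, x1 * x2,
                      x1**2 - (x1**2).mean(), x2**2 - (x2**2).mean()])
X2 = np.column_stack([x1**3, x2**3, x1 * x2**2])

A = np.linalg.inv(X1.T @ X1) @ X1.T @ X2
print(np.round(A, 3))   # beta1-hat picks up beta111 + (2/3)beta122; beta2-hat picks up beta222
```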
Example 2
This procedure is very useful when the design is a mixed-level fractional factorial. For
example, consider the mixed-level design in Table 9-10 of the textbook. This design can
accommodate four two-level factors and a single three-level factor. The resulting
resolution III fractional factorial is shown in Table 3.
Since the design is resolution III, the appropriate model contains the main effects
$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_4 x_4 + \beta_5 x_5 + \beta_{55}\left(x_5^2 - \overline{x_5^2}\right) + \varepsilon,$$
where the model terms $\beta_5 x_5$ and $\beta_{55}\left(x_5^2 - \overline{x_5^2}\right)$ represent the linear and quadratic effects of the three-level factor $x_5$. The quadratic effect of $x_5$ is defined so that it will be orthogonal to the intercept term in the model.
$$E(\hat\beta_0) = \beta_0$$
$$E(\hat\beta_1) = \beta_1$$
$$E(\hat\beta_2) = \beta_2 + (1/2)\beta_{15}$$
$$E(\hat\beta_3) = \beta_3 + (1/2)\beta_{15}$$
$$E(\hat\beta_4) = \beta_4 + (1/2)\beta_{155}$$
$$E(\hat\beta_5) = \beta_5 + \beta_{12}$$
$$E(\hat\beta_{55}) = \beta_{55}$$
The linear and quadratic components of the interaction between x1 and x5 are aliased with
the main effects of x2 , x3 , and x4 , and the x1 x2 interaction aliases the linear component
of the main effect of x5.
$$\hat{\boldsymbol{\beta}} = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{y}$$
is an unbiased estimator. We also give the result that the covariance matrix of $\hat{\boldsymbol{\beta}}$ is $\sigma^2(\mathbf{X}'\mathbf{X})^{-1}$ (see Equation 10-18). This last result is relatively straightforward to show. Consider
$$V(\hat{\boldsymbol{\beta}}) = V[(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{y}] = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'V(\mathbf{y})[(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}']' = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'(\sigma^2\mathbf{I})\mathbf{X}(\mathbf{X}'\mathbf{X})^{-1} = \sigma^2(\mathbf{X}'\mathbf{X})^{-1}$$
$$V(\hat\beta_0) = \sigma^2/12 = 0.0833\sigma^2, \qquad V(\hat\beta_i) = \sigma^2/8 = 0.125\sigma^2, \quad i = 1, 2, 3$$
In Example 10-3, we reconsider this same problem but assume that one of the original 12 observations is missing. It turns out that the estimates of the regression coefficients do not change very much when the remaining 11 observations are used to fit the first-order model, but the $(\mathbf{X}'\mathbf{X})^{-1}$ matrix reveals that the missing observation has had a moderate effect on the variances and covariances of the model coefficients. The variances of the
regression coefficients are now larger, and there are some moderately large covariances
between the estimated model coefficients. Example 10-4, which investigated the impact
of inaccurate design factor levels, exhibits similar results. Generally, as soon as we
depart from an orthogonal design, either intentionally or by accident (as in these two
examples), the variances of the regression coefficients will increase and potentially there
could be rather large covariances between certain regression coefficients. In both of the
examples in the textbook, the covariances are not terribly large and would not likely
result in any problems in interpretation of the experimental results.
10-3. Adjusted R2
In several places in the textbook, we have remarked that the adjusted R2 statistic is
preferable to the ordinary R2, because it is not a monotonically non-decreasing function
of the number of variables in the model.
From Equation (10-27) note that
$$R^2_{\text{Adj}} = 1 - \frac{SS_E/df_E}{SS_T/df_T} = 1 - \frac{MS_E}{SS_T/df_T}$$
Now the mean square in the denominator of the ratio is constant, but MSE will change as
variables are added or removed from the model. In general, the adjusted R2 will increase
when a variable is added to a regression model only if the error mean square decreases.
The error mean square will only decrease if the added variable decreases the residual sum
of squares by an amount that will offset the loss of one degree of freedom for error. Thus
the added variable must reduce the residual sum of squares by an amount that is at least
equal to the residual mean square in the immediately previous model; otherwise, the new model will have an adjusted R² value that is smaller than the adjusted R² statistic for the old model.
to fit, either from an ANOVA or from examining a normal probability plot of effect
estimates.
There are, however, other situations where regression is applied to unplanned studies,
where the data may be observational data collected routinely on some process. The data
may also be archival, obtained from some historian or library. These applications of
regression frequently involve a moderately large or large set of candidate regressors, and the analyst's objective is to fit a regression model to the “best subset” of
these candidates. This can be a complex problem, as these unplanned data sets frequently
have outliers, strong correlations between subsets of the variables, and other complicating
features.
There are several techniques that have been developed for selecting the best subset
regression model. Generally, these methods are either stepwise-type variable selection
methods or all possible regressions. Stepwise-type methods build a regression model by
either adding or removing a variable to the basic model at each step. The forward
selection version of the procedure begins with a model containing none of the candidate
variables and sequentially inserts variables into the model one-at-a-time until a final
equation is produced. In backward elimination, the procedure begins with all variables in
the equation, and then variables are removed one-at-a-time to produce a final equation.
Stepwise regression usually consists of a combination of forward and backward stepping.
There are many variations of the basic procedures.
In all possible regressions with K candidate variables, the analyst examines all $2^K$ possible regression equations to identify the ones with potential to be a useful model.
Obviously, as K becomes even moderately large, the number of possible regression
models quickly becomes formidably large. Efficient algorithms have been developed that
implicitly rather than explicitly examine all of these equations. For more discussion of
variable selection methods, see textbooks on regression such as Montgomery and Peck
(1992) or Myers (1990).
$$V[\hat y(\mathbf{x}_0)] = \sigma^2\mathbf{x}_0'(\mathbf{X}'\mathbf{X})^{-1}\mathbf{x}_0$$
where the predicted response at the point $\mathbf{x}_0$ is found from Equation (10-39):
$$\hat y(\mathbf{x}_0) = \mathbf{x}_0'\hat{\boldsymbol{\beta}}$$
It is easy to derive the variance expression:
$$V[\hat y(\mathbf{x}_0)] = V(\mathbf{x}_0'\hat{\boldsymbol{\beta}}) = \mathbf{x}_0'V(\hat{\boldsymbol{\beta}})\mathbf{x}_0 = \sigma^2\mathbf{x}_0'(\mathbf{X}'\mathbf{X})^{-1}\mathbf{x}_0$$
Design-Expert calculates and displays the confidence interval on the mean of the
response at the point x0 using Equation (10-41) from the textbook. This is displayed on
the point prediction option on the optimization menu. The program also uses Equation
(10-40) in the contour plots of prediction standard error.
design center in coded units. Therefore, if all points are replicated n times, they will all
have identical leverage.
Leverage can also be thought of as the maximum potential influence each design point
exerts on the model. In a near-saturated design many or all design points will have the
maximum leverage. The maximum leverage that any point can have is hii = 1. However,
if points are replicated n times, the maximum leverage is 1/n. High leverage situations
are not desirable, because if leverage is unity that point fits the model exactly. Clearly,
then, the design and the associated model would be vulnerable to outliers or other
unusual observations at that design point. The leverage at a design point can always be
reduced by replication of that point.
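The leverage values are the diagonal elements of the hat matrix, $h_{ii} = [\mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}']_{ii}$. The sketch below (Python) illustrates the points made above for a saturated $2^2$ model, unreplicated and then replicated.

```python
# Leverage values h_ii = diag(X (X'X)^{-1} X') for a saturated 2^2 model,
# unreplicated versus replicated twice.
import numpy as np

x1 = np.array([-1, 1, -1, 1], dtype=float)
x2 = np.array([-1, -1, 1, 1], dtype=float)
X = np.column_stack([np.ones(4), x1, x2, x1 * x2])   # saturated model: p = n = 4

def leverages(X):
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    return np.diag(H)

print(leverages(X))                   # all 1.0: each point fits the model exactly
print(leverages(np.vstack([X, X])))   # n = 2 replicates: leverage drops to 0.5 = 1/n
```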
and we wish to use this model to determine a path leading from the center of the design
region x = 0 that increases the predicted response most quickly. Since the first–order
model is an unbounded function, we cannot just find the values of the x’s that maximize
the predicted response. Suppose that instead we find the x’s that maximize the predicted
response at a point on a hypersphere of radius r. That is
$$\text{Max } \hat y = \hat\beta_0 + \sum_{i=1}^{k}\hat\beta_i x_i$$
subject to
$$\sum_{i=1}^{k} x_i^2 = r^2$$
$$\text{Max } G = \hat\beta_0 + \sum_{i=1}^{k}\hat\beta_i x_i - \lambda\left[\sum_{i=1}^{k} x_i^2 - r^2\right]$$
where $\lambda$ is a Lagrange multiplier. Taking the derivatives of G yields
$$\frac{\partial G}{\partial x_i} = \hat\beta_i - 2\lambda x_i, \quad i = 1, 2, \ldots, k$$
$$\frac{\partial G}{\partial \lambda} = -\left[\sum_{i=1}^{k} x_i^2 - r^2\right]$$
Equating these derivatives to zero results in
$$x_i = \frac{\hat\beta_i}{2\lambda}, \quad i = 1, 2, \ldots, k$$
$$\sum_{i=1}^{k} x_i^2 = r^2$$
Now the first of these equations shows that the coordinates of the point on the
hypersphere are proportional to the signs and magnitudes of the regression coefficients
(the quantity 2λ is a constant that just fixes the radius of the hypersphere). The second
equation just states that the point satisfies the constraint. Therefore, the heuristic
description of the method of steepest ascent can be justified from a more formal
perspective.
$$\hat y = \hat y_s + \mathbf{z}'\mathbf{B}\mathbf{z}$$
because from Equation (11-7) we have $2\mathbf{x}_s'\mathbf{B}\mathbf{z} = -\mathbf{z}'\mathbf{b}$. Now rotate these new axes (z) so that they are parallel to the principal axes of the contour system. The new variables are $\mathbf{w} = \mathbf{M}'\mathbf{z}$, where
$$\mathbf{M}'\mathbf{B}\mathbf{M} = \boldsymbol{\Lambda}$$
The diagonal matrix $\boldsymbol{\Lambda}$ has the eigenvalues of $\mathbf{B}$, $\lambda_1, \lambda_2, \ldots, \lambda_k$, on the main diagonal, and $\mathbf{M}$ is a matrix of normalized eigenvectors. Therefore,
$$\hat y = \hat y_s + \mathbf{z}'\mathbf{B}\mathbf{z} = \hat y_s + \mathbf{w}'\mathbf{M}'\mathbf{B}\mathbf{M}\mathbf{w} = \hat y_s + \mathbf{w}'\boldsymbol{\Lambda}\mathbf{w} = \hat y_s + \sum_{i=1}^{k}\lambda_i w_i^2$$
[Figure: Design-Expert 3D surface plot of the standard error of prediction (StdErr of Evaluation) over the coded A-B region; the standard error ranges from about 0.582 to 0.791.]
Notice that the plot of the prediction standard deviation has a large “bump” in the center.
This indicates that the design will lead to a model that does not predict accurately near
the center of the region of exploration, a region likely to be of interest to the
experimenter. This is the result of using an insufficient number of center runs.
Suppose that the number of center runs is increased to nc = 4 . The prediction standard
deviation plot now looks like this:
[Figure: Design-Expert 3D surface plot of the standard error of prediction with four center runs; the standard error ranges from about 0.474 to 0.791.]
Notice that the addition of two more center runs has resulted in a much flatter (and hence
more stable) standard deviation of predicted response over the region of interest.
The CCD is a spherical design. Generally, every design on a sphere must have at least
one center point or the X′X matrix will be singular. However, the number of center
points can often influence other properties of the design, such as prediction variance.
[Figure: Design-Expert 3D surface plot of the standard error of prediction for the design with no center points, shown at the slice C = 0.00; the standard error ranges from about 0.589 to 0.855.]
Notice that despite the absence of center points, the prediction standard deviation is
relatively constant in the center of the region of exploration. Note also that the contours
of constant prediction standard deviation are not concentric circles, because this is not a
rotatable design.
While this design will certainly work with no center points, this is usually not a good choice. Two or three center points generally give good results. Below is a plot of the prediction standard deviation for a face-centered cube with two center points. This choice works very well.
[Figure: Design-Expert 3D surface plot of the standard error of prediction for the face-centered cube with two center points, shown at the slice C = 0.00; the standard error ranges from about 0.465 to 0.849.]
[Figure: Design-Expert 3D surface plot of the standard error of prediction over the coded A-B region for the rotatable design referred to below; the standard error ranges from about 0.375 to 0.775.]
Notice that the contours of prediction standard deviation are not circular, even though a
rotatable design was used.
Note that these are essentially the same objectives we discussed in Section 11-7.1.
Taguchi has certainly defined meaningful engineering problems, and the philosophy that he recommends is sound. However, as noted in the textbook, he advocated some novel methods of statistical data analysis and some approaches to the design of experiments that the process of peer review revealed were unnecessarily complicated, inefficient, and sometimes ineffective. In this section, we will briefly overview Taguchi's philosophy
regarding quality engineering and experimental design. We will present some examples
of his approach to parameter design, and we will use these examples to highlight the
problems with his technical methods. As we saw in the Section 11-7.2 of the textbook, it
is possible to combine his sound engineering concepts with more efficient and effective
experimental design and analysis based on response surface methods.
In the parameter design stage, the specific values for the system parameters are
determined. This would involve choosing the nominal resistor and power supply values
for the Wheatstone bridge, the number and type of component placement machines for
the printed circuit board assembly process, and so forth. Usually, the objective is to
specify these nominal parameter values such that the variability transmitted from
uncontrollable or noise variables is minimized.
Tolerance design is used to determine the best tolerances for the parameters. For
example, in the Wheatstone bridge, tolerance design methods would reveal which
components in the design were most sensitive and where the tolerances should be set. If
a component does not have much effect on the performance of the circuit, it can be
specified with a wide tolerance.
Taguchi recommends that statistical experimental design methods be employed to assist
in this process, particularly during parameter design and tolerance design. We will focus
on parameter design. Experimental design methods can be used to find a best product or
process design, where by "best" we mean a product or process that is robust or insensitive
to uncontrollable factors that will influence the product or process once it is in routine
operation.
The notion of robust design is not new. Engineers have always tried to design products
so that they will work well under uncontrollable conditions. For example, commercial
transport aircraft fly about as well in a thunderstorm as they do in clear air. Taguchi
deserves recognition for realizing that experimental design can be used as a formal part of
the engineering design process to help accomplish this objective.
A key component of Taguchi's philosophy is the reduction of variability. Generally,
each product or process performance characteristic will have a target or nominal value.
The objective is to reduce the variability around this target value. Taguchi models the
departures that may occur from this target value with a loss function. The loss refers to
the cost that is incurred by society when the consumer uses a product whose quality
characteristics differ from the nominal. The concept of societal loss is a departure from
traditional thinking. Taguchi imposes a quadratic loss function of the form
$$L(y) = k(y - T)^2$$
shown in Figure 1 below. Clearly this type of function will penalize even small departures of y from the target T. Again, this is a departure from traditional thinking, which usually attaches penalties only to cases where y is outside the upper and lower specifications (say y > USL or y < LSL in Figure 1). However, the Taguchi philosophy
regarding reduction of variability and the emphasis on minimizing costs is entirely
consistent with the continuous improvement philosophy of Deming and Juran.
In summary, Taguchi's philosophy involves three central ideas:
1. Products and processes should be designed so that they are robust to external sources
of variability.
2. Experimental design methods are an engineering tool to help accomplish this
objective.
3. Operation on-target is more important than conformance to specifications.
These are sound concepts, and their value should be readily apparent. Furthermore, as
we have seen in the textbook, experimental design methods can play a major role in
translating these ideas into practice.
We now turn to a discussion of the specific methods that Professor Taguchi recommends
for applying his concepts in practice. As we will see, his approach to experimental
design and data analysis can be improved.
Table 1. Factors and Levels for the Taguchi Parameter Design Example
The two designs are combined as shown in Table 11-22 in the textbook, repeated for
convenience as Table 3 below. Recall that this is called a crossed or product array
design, composed of the inner array containing the controllable factors, and the outer
array containing the noise factors. Literally, each of the 9 runs from the inner array is
tested across the 8 runs from the outer array, for a total sample size of 72 runs. The
observed pull-off force is reported in Table 3.
The data from this experiment may now be analyzed. Recall from the discussion in
Chapter 11 that Taguchi recommends analyzing the mean response for each run in the
inner array (see Table 3), and he also suggests analyzing variation using an appropriately
chosen signal-to-noise ratio (SN). These signal-to-noise ratios are derived from the
quadratic loss function, and three of them are considered to be "standard" and widely
applicable. They are defined as follows:
1. Nominal the best:
$$SN_T = 10\log\frac{\bar y^2}{S^2}$$
2. Larger the better:
$$SN_L = -10\log\left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{y_i^2}\right)$$
3. Smaller the better:
$$SN_S = -10\log\left(\frac{1}{n}\sum_{i=1}^{n} y_i^2\right)$$
Notice that these SN ratios are expressed on a decibel scale. We would use SNT if the
objective is to reduce variability around a specific target, SNL if the system is optimized
when the response is as large as possible, and SNS if the system is optimized when the
response is as small as possible. Factor levels that maximize the appropriate SN ratio are
optimal.
In this problem, we would use SNL because the objective is to maximize the pull-off
force. The last two columns of Table 3 contain y and SNL values for each of the nine
inner-array runs. Taguchi-oriented practitioners often use the analysis of variance to
determine the factors that influence y and the factors that influence the signal-to-noise
ratio. They also employ graphs of the "marginal means" of each factor, such as the ones
shown in Figures 2 and 3. The usual approach is to examine the graphs and "pick the
winner." In this case, factors A and C have larger effects than do B and D. In terms of
maximizing SNL we would select AMedium, CDeep, BMedium, and DLow. In terms of
maximizing the average pull-off force y , we would choose AMedium, CMedium, BMedium and
DLow. Notice that there is almost no difference between CMedium and CDeep. The
implication is that this choice of levels will maximize the mean pull-off force and reduce
variability in the pull-off force.
Taguchi advocates claim that the use of the SN ratio generally eliminates the need for
examining specific interactions between the controllable and noise factors, although
sometimes looking at these interactions improves process understanding. The authors of
this study found that the AG and DE interactions were large. Analysis of these
interactions, shown in Figure 4, suggests that $A_{\mathrm{Medium}}$ is best. (It gives the highest pull-off
force and a slope close to zero, indicating that if we choose $A_{\mathrm{Medium}}$ the effect of relative
humidity is minimized.) The analysis also suggests that $D_{\mathrm{Low}}$ gives the highest pull-off
force regardless of the conditioning time.
When cost and other factors were taken into account, the experimenters in this example
finally decided to use $A_{\mathrm{Medium}}$, $B_{\mathrm{Thin}}$, $C_{\mathrm{Medium}}$, and $D_{\mathrm{Low}}$. ($B_{\mathrm{Thin}}$ was much less expensive
than $B_{\mathrm{Medium}}$, and $C_{\mathrm{Medium}}$ was felt to give slightly less variability than $C_{\mathrm{Deep}}$.) Since this
combination was not a run in the original nine inner-array trials, five additional tests were
made at this set of conditions as a confirmation experiment. For this confirmation
experiment, the levels used on the noise variables were $E_{\mathrm{Low}}$, $F_{\mathrm{Low}}$, and $G_{\mathrm{Low}}$. The
authors report that good results were obtained from the confirmation test.
The advocates of Taguchi's approach to parameter design utilize the orthogonal array
designs, two of which (the $L_8$ and the $L_9$) were presented in the foregoing example. There
are other orthogonal arrays: the $L_4$, $L_{12}$, $L_{16}$, $L_{18}$, and $L_{27}$. These designs were not
developed by Taguchi; for example, the $L_8$ is a $2_{III}^{7-4}$ fractional factorial, the $L_9$ is a $3_{III}^{4-2}$
fractional factorial, the $L_{12}$ is a Plackett-Burman design, the $L_{16}$ is a $2_{III}^{15-11}$ fractional
factorial, and so on. Box, Bisgaard, and Fung (1988) trace the origin of these designs.
As we know from Chapters 8 and 9 of the textbook, some of these designs have very
complex alias structures. In particular, the L12 and all of the designs that use three-level
factors will involve partial aliasing of two-factor interactions with main effects. If any
two-factor interactions are large, this may lead to a situation in which the experimenter
does not get the correct answer.
Notice that we can fit the linear and quadratic effects of the controllable factors but not
their two-factor interactions (which are aliased with the main effects). We can also fit the
linear effects of the noise factors and all the two-factor interactions involving the noise
factors. Finally, we can fit the two-factor interactions involving the controllable factors
and the noise factors. It may be unwise to ignore potential interactions in the controllable
factors.
This is a rather odd strategy, since interaction is a form of curvature. A much safer
strategy is to identify potential effects and interactions that may be important and then
consider curvature only in the important variables if there is evidence that the curvature is
important. This will usually lead to fewer experiments, simpler interpretation of the data,
and better overall process understanding.
Another criticism of the Taguchi approach to parameter design is that the crossed array
structure usually leads to a very large experiment. For example, in the foregoing
application, the authors used 72 tests to investigate only seven factors, and they still could
not estimate any of the two-factor interactions among the four controllable factors.
There are several alternative experimental designs that would be superior to the inner and
outer method used in this example. Suppose that we run all seven factors at two levels in
the combined array design approach discussed in the textbook. Consider the
$2_{IV}^{7-2}$ fractional factorial design. The alias relationships for this design are shown in the
top half of Table 4. Notice that this design requires only 32 runs (as compared to 72). In
the bottom half of Table 4, two different possible schemes for assigning process
controllable variables and noise variables to the letters A through G are given. The first
assignment scheme allows all the interactions between controllable factors and noise
factors to be estimated, and it allows main effect estimates to be made that are clear of
two-factor interactions. The second assignment scheme allows all the controllable factor
main effects and their two-factor interactions to be estimated; it allows all noise factor
main effects to be estimated clear of two-factor interactions; and it aliases only three
interactions between controllable factors and noise factors with a two-factor interaction
between two noise factors. Both of these arrangements present much cleaner alias
relationships than are obtained from the inner and outer array parameter design, which
also required over twice as many runs.
In general, the crossed array approach is often unnecessary. A better strategy is to use the
combined array design discussed in the textbook. This approach will almost always
lead to a dramatic reduction in the size of the experiment, and at the same time, it will
produce information that is more likely to improve process understanding. For more
discussion of this approach, see Myers and Montgomery (1995) and Example 11-6 in the
textbook. We can also use a combined array design that allows the experimenter to
directly model the noise factors as a complete quadratic and to fit all interactions between
the controllable factors and the noise factors, as demonstrated in the textbook in Example
11-7.
Aliases:
A                     AF = BCD           CG = EF
B                     AG = BDE           DE = ABG
C = EFG               BC = ADF           DF = ABC
D                     BD = ACF = AEG     DG = ABE
E = CFG               BE = ADG           ACE = AFG
F = CEG               BF = ACD           ACG = AEF
G = CEF               BG = ADE           BCE = BFG
AB = CDF = DEG        CD = ABF           BCG = BEF
AC = BDF              CE = FG            CDE = DFG
AD = BCF = BEG        CF = ABD = EG      CDG = DEF
AE = BDG
Another possible issue with the Taguchi inner and outer array design relates to the order
in which the runs are performed. Now we know that for experimental validity, the runs
in a designed experiment should be conducted in random order. However, in many
crossed array experiments, it is possible that the run order wasn’t randomized. In some
cases it would be more convenient to fix each row in the inner array (that is, set the levels
of the controllable factors) and run all outer-array trials. In other cases, it might be more
convenient to fix each column in the outer array (that is, set the levels of the noise
factors) and then run all of the inner-array trials at that combination of noise factors.
Exactly which strategy is pursued probably
depends on which group of factors is easiest to change, the controllable factors or the
noise factors. If the tests are run in either manner described above, then a split-plot
structure has been introduced into the experiment. If this is not accounted for in the
analysis, then the results and conclusions can be misleading. There is no evidence that
Taguchi advocates used split-plot analysis methods. Furthermore, since Taguchi
frequently downplayed the importance of randomization, it is highly likely that many
actual inner and outer array experiments were inadvertently conducted as split-plots, and
perhaps incorrectly analyzed. We introduce the split-plot design in Chapter
13. A good reference on split-plots in robust design problems is Box and Jones (1992).
A final aspect of Taguchi's parameter design is the use of linear graphs to assign factors
to the columns of the orthogonal array. A set of linear graphs for the L8 design is shown
in Figure 5. In these graphs, each number represents a column in the design. A line
segment on the graph corresponds to an interaction between the nodes it connects. To
assign variables to columns in an orthogonal array, assign the variables to nodes first;
then when the nodes are used up, assign the variables to the line segments. When you
assign variables to the nodes, strike out any line segments that correspond to interactions
that might be important. The linear graphs in Figure 5 imply that column 3 in the L8
design contains the interaction between columns 1 and 2, column 5 contains the
interaction between columns 1 and 4, and so forth. If we had four factors, we would
assign them to columns 1, 2, 4, and 7. This would ensure that each main effect is clear of
two-factor interactions. What is not clear is the two-factor interaction aliasing. If the
main effects are in columns 1, 2, 4, and 7, then column 3 contains the 1-2 and the 4-7
interaction, column 5 contains the 1-4 and the 2-7 interaction, and column 6 contains the
1-7 and the 2-4 interaction. This is clearly the case because four variables in eight runs is
a resolution IV plan with all pairs of two-factor interactions aliased. In order to
understand fully the two-factor interaction aliasing, Taguchi would refer the experiment
designer to a supplementary interaction table.
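The column relationships described above are easy to verify directly. The sketch below builds an L8-type array in ±1 coding from three basic columns (a standard construction, although the signs may differ from Taguchi's 1–2 coding) and confirms that column 3 is the product of columns 1 and 2, column 5 the product of columns 1 and 4, and column 3 also the product of columns 4 and 7.

```python
import numpy as np
from itertools import product

# Build an L8-type orthogonal array in -1/+1 coding from three basic columns.
# Column numbering follows the convention described above.
runs = np.array(list(product([-1, 1], repeat=3)))   # 8 runs of the basic columns
c1, c2, c4 = runs[:, 0], runs[:, 1], runs[:, 2]
L8 = np.column_stack([c1, c2, c1 * c2, c4, c1 * c4, c2 * c4, c1 * c2 * c4])

# Verify the interaction relationships implied by the linear graphs
print(np.array_equal(L8[:, 2], L8[:, 0] * L8[:, 1]))  # column 3 = 1 x 2
print(np.array_equal(L8[:, 4], L8[:, 0] * L8[:, 3]))  # column 5 = 1 x 4
print(np.array_equal(L8[:, 2], L8[:, 3] * L8[:, 6]))  # column 3 also = 4 x 7
```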
Taguchi (1986) gives a collection of linear graphs for each of his recommended
orthogonal array designs. These linear graphs seem to have been developed
heuristically. Unfortunately, their use can lead to inefficient designs. For example, see
his car engine experiment [Taguchi and Wu (1980)] and his cutting tool experiment
[Taguchi (1986)]. Both of these are 16-run designs that he sets up as resolution III
designs in which main effects are aliased with two-factor interactions. Conventional
methods for constructing these designs would have resulted in resolution IV plans in
which the main effects are clear of the two-factor interactions. For the experimenter who
simply wants to generate a good design, the linear graph approach may not produce the
best result. A better approach is to use a simple table that presents the design and its full
alias structure such as in Appendix Table XII. These tables are easy to construct and are
routinely displayed by several widely available and inexpensive computer programs.
Consider first the signal-to-noise ratio for the target is best case,
$$SN_T = 10\log\left(\frac{\bar{y}^2}{S^2}\right)$$
This ratio would be used if we wish to minimize variability around a fixed target value.
It has been suggested by Taguchi that it is preferable to work with SNT instead of the
standard deviation because in many cases the process mean and standard deviation are
related. (As µ gets larger, σ gets larger, for example.) In such cases, he argues that we
cannot directly minimize the standard deviation and then bring the mean on target.
Taguchi claims he found empirically that the use of the SNT ratio coupled with a two-
stage optimization procedure would lead to a combination of factor levels where the
standard deviation is minimized and the mean is on target. The optimization procedure
consists of (1) finding the set of controllable factors that affect SNT, called the control
factors, and setting them to levels that maximize SNT and then (2) finding the set of
factors that have significant effects on the mean but do not influence the SNT ratio, called
the signal factors, and using these factors to bring the mean on target.
Given that this partitioning of factors is possible, SNT is an example of a performance
measure independent of adjustment (PERMIA) [see Leon et al. (1987)]. The signal
factors would be the adjustment factors. The motivation behind the signal-to-noise
ratio is to uncouple location and dispersion effects. It can be shown that the use of SNT is
equivalent to an analysis of the standard deviation of the logarithm of the original data.
Thus, using SNT implies that a log transformation will always uncouple location and
dispersion effects. There is no assurance that this will happen. A much safer approach is
to investigate what type of transformation is appropriate.
Note that we can write the $SN_T$ ratio as
$$SN_T = 10\log\left(\frac{\bar{y}^2}{S^2}\right) = 10\log(\bar{y}^2) - 10\log(S^2)$$
If the mean is fixed at a target value (estimated by y ), then maximizing the SNT ratio is
equivalent to minimizing log (S2). Using log (S2) would require fewer calculations, is
more intuitively appealing, and would provide a clearer understanding of the factor
relationships that influence process variability - in other words, it would provide better
process understanding. Furthermore, if we minimize log (S2) directly, we eliminate the
risk of obtaining wrong answers from the maximization of SNT if some of the
manipulated factors drive the mean y upward instead of driving S2 downward. In
general, if the response variable can be expressed in terms of the model
$$y = \mu(x_d, x_a)\,\varepsilon(x_d)$$
where xd is the subset of factors that drive the dispersion effects and xa is the subset of
adjustment factors that do not affect variability, then maximizing SNT will be equivalent
to minimizing the standard deviation. Considering the other potential problems
surrounding SNT , it is likely to be safer to work directly with the standard deviation (or
its logarithm) as a response variable, as suggested in the textbook. For more discussion,
refer to Myers and Montgomery (1995).
The ratios SNL and SNS are even more troublesome. These quantities may be completely
ineffective in identifying dispersion effects, although they may serve to identify location
effects, that is, factors that drive the mean. The reason for this is relatively easy to see.
Consider the SNS (smaller-the-better) ratio:
$$SN_S = -10\log\left(\frac{1}{n}\sum_{i=1}^{n} y_i^2\right)$$
The ratio is motivated by the assumption of a quadratic loss function with y nonnegative.
The loss function for such a case would be
$$L = C\,\frac{1}{n}\sum_{i=1}^{n} y_i^2$$
so that
$$\log L = \log C + \log\left(\frac{1}{n}\sum_{i=1}^{n} y_i^2\right)$$
and
$$SN_S = 10\log C - 10\log L$$
so maximizing $SN_S$ will minimize $L$. However, it is easy to show that
$$\frac{1}{n}\sum_{i=1}^{n} y_i^2 = \bar{y}^2 + \frac{1}{n}\left(\sum_{i=1}^{n} y_i^2 - n\bar{y}^2\right) = \bar{y}^2 + \frac{n-1}{n}S^2$$
Therefore, the use of SNS as a response variable confounds location and dispersion
effects.
The confounding of location and dispersion effects was observed in the analysis of the
$SN_L$ ratio in the pull-off force example. In Figures 2 and 3 notice that the plots of $\bar{y}$ and
$SN_L$ versus each factor have approximately the same shape, implying that both responses
measure location. Furthermore, since the $SN_S$ and $SN_L$ ratios involve $y^2$ and $1/y^2$, they
will be very sensitive to outliers or values near zero, and they are not invariant to linear
transformation of the original response. We strongly recommend that these signal-to-
noise ratios not be used.
A better approach for isolating location and dispersion effects is to develop separate
response surface models for y and log(S2). If no replication is available to estimate
variability at each run in the design, methods for analyzing residuals can be used.
Another very effective approach is based on the use of the response model, as
demonstrated in the textbook and in Myers and Montgomery (1995). Recall that this
allows both a response surface for the variance and a response surface for the mean to be
obtained for a single model containing both the controllable design factors and the noise
variables. Then standard response surface methods can be used to optimize the mean
and variance.
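A minimal sketch of the separate-model idea is given below; the design matrix, run means, and log variances are hypothetical, and ordinary least squares is used for both fits.

```python
import numpy as np

# Hypothetical crossed-array summary: for each inner-array run we have the
# run mean ybar and the log of the sample variance computed across the
# outer-array (noise) replicates.
X = np.array([[1, -1, -1],        # columns: intercept, factor A, factor C (coded units)
              [1, -1,  1],
              [1,  1, -1],
              [1,  1,  1]], dtype=float)
ybar = np.array([17.5, 19.2, 18.1, 20.3])
logs2 = np.array([1.10, 0.45, 0.98, 0.40])

# Ordinary least squares fits for the location and dispersion models
b_mean, *_ = np.linalg.lstsq(X, ybar, rcond=None)
b_disp, *_ = np.linalg.lstsq(X, logs2, rcond=None)
print("location model coefficients:", b_mean)
print("dispersion model coefficients:", b_disp)
```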
Finally, we turn to some of the applications of the analysis of variance recommended by
Taguchi. As an example for discussion, consider the experiment reported by Quinlan
(1985) at a symposium on Taguchi methods sponsored by the American Supplier
Institute. The experiment concerned the quality improvement of speedometer cables.
Specifically, the objective was to reduce the shrinkage in the plastic casing material.
(Excessive shrinkage causes the cables to be noisy.) The experiment used an $L_{16}$
orthogonal array (the $2_{III}^{15-11}$ design). The shrinkage values for four samples taken from
3000-foot lengths of the product manufactured at each set of test conditions were
measured, and the responses $\bar{y}$ and $SN_S$ computed.
Quinlan, following the Taguchi approach to data analysis, used SNS as the response
variable in an analysis of variance. The error mean square was formed by pooling the
mean squares associated with the seven effects that had the smallest absolute magnitude.
This resulted in all eight remaining factors having significant effects (in order of
magnitude: E, G, K, A, C, F, D, H). The author did note that E and G were the most
important.
Pooling of mean squares as in this example is a procedure that has long been known to
produce considerable bias in the ANOVA test results. To illustrate the problem, consider
the 15 NID(0, 1) random numbers shown in column 1 of Table 6. The square of each of
these numbers, shown in column 2 of the table, is a single-degree-of-freedom mean
square corresponding to the observed random number. The seven smallest random
numbers are marked with an asterisk in column 1 of Table 6. The corresponding mean
squares are pooled to form a mean square for error with seven degrees of freedom. This
quantity is
$$MS_E = \frac{0.5088}{7} = 0.0727$$
Finally, column 3 of Table 6 presents the F ratio formed by dividing each of the eight
remaining mean squares by MSE. Now F0.05,1,7 = 5.59, and this implies that five of the
eight effects would be judged significant at the 0.05 level. Recall that since the original
data came from a normal distribution with mean zero, none of the effects is different from
zero.
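The magnitude of the problem is easy to see by simulation. The following sketch repeats the construction just described many times and records how often at least one of the eight retained mean squares is declared significant, even though all of the underlying effects are null.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sim, false_positive = 5000, 0
f_crit = stats.f.ppf(0.95, 1, 7)           # F(0.05, 1, 7) = 5.59

for _ in range(n_sim):
    effects = rng.standard_normal(15)       # 15 NID(0,1) "effects"
    ms = np.sort(effects**2)                # single-df mean squares
    ms_error = ms[:7].mean()                # pool the 7 smallest as "error"
    f_ratios = ms[7:] / ms_error            # test the 8 largest against it
    if np.any(f_ratios > f_crit):
        false_positive += 1

print("Proportion of simulations declaring at least one effect significant:",
      false_positive / n_sim)
```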
Analysis methods such as this virtually guarantee erroneous conclusions. The normal
probability plotting of effects avoids this invalid pooling of mean squares and provides a
simple, easy to interpret method of analysis. Box (1988) provides an alternate analysis of
the Quinlan data that correctly reveals E and G to be important along with other
interesting results not apparent in the original analysis.
It is important to note that the Taguchi analysis identified negligible factors as significant.
This can have profound impact on our use of experimental design to enhance process
knowledge. Experimental design methods should make gaining process knowledge
easier, not harder.
factor-at-a-time" methods will also work-and occasionally they produce good results.
This is no reason to claim that they are good methods. Most of the successful
applications of Taguchi's technical methods have been in industries where there was no
history of good experimental design practice. Designers and developers were using the
best guess and one-factor-at-a-time methods (or other unstructured approaches), and
since the Taguchi approach is based on the factorial design concept, it often produced
better results than the methods it replaced. In other words, the factorial design is so
powerful that, even when it is used inefficiently, it will often work well.
As pointed out earlier, the Taguchi approach to parameter design often leads to large,
comprehensive experiments, often having 70 or more runs. Many of the successful
applications of this approach were in industries characterized by a high-volume, low-cost
manufacturing environment. In such situations, large designs may not be a real problem,
if it is really no more difficult to make 72 runs than to make 16 or 32 runs. On the other
hand, in industries characterized by low-volume and/or high-cost manufacturing (such as
the aerospace industry, chemical and process industries, electronics and semiconductor
manufacturing, and so forth), these methodological inefficiencies can be significant.
A final point concerns the learning process. If the Taguchi approach to parameter design
works and yields good results, we may still not know what has caused the result because
of the aliasing of critical interactions. In other words, we may have solved a problem (a
short-term success), but we may not have gained process knowledge, which could be
invaluable in future problems.
In summary, we should support Taguchi's philosophy of quality engineering. However,
we must rely on simpler, more efficient methods that are easier to learn and apply to
carry this philosophy into practice. The response surface modeling framework that we
present in the textbook is an ideal approach to process optimization and as we have
demonstrated, it is fully adaptable to the robust parameter design problem.
Supplemental References
Leon, R. V., A. C. Shoemaker and R. N. Kackar (1987). “Performance Measures
Independent of Adjustment”. Technometrics, Vol. 29, pp. 253-265
Quinlan, J. (1985). “Product Improvement by Application of Taguchi Methods”. Third
Supplier Symposium on Taguchi Methods, American Supplier Institute, Inc., Dearborn,
MI.
Box, G. E. P. and S. Jones (1992). “Split-Plot Designs for Robust Product
Experimentation”. Journal of Applied Statistics, Vol. 19, pp. 3-26.
where SSA is the sum of squares for the row factor. Recall that the model components
$\tau_i$, $\beta_j$, and $(\tau\beta)_{ij}$ are normally and independently distributed with means zero and
variances $\sigma_\tau^2$, $\sigma_\beta^2$, and $\sigma_{\tau\beta}^2$, respectively. The sum of squares and its expectation are
defined as
$$SS_A = \frac{1}{bn}\sum_{i=1}^{a} y_{i..}^2 - \frac{y_{...}^2}{abn}$$
$$E(SS_A) = \frac{1}{bn}E\left(\sum_{i=1}^{a} y_{i..}^2\right) - E\left(\frac{y_{...}^2}{abn}\right)$$
Now
$$y_{i..} = \sum_{j=1}^{b}\sum_{k=1}^{n} y_{ijk} = bn\mu + bn\tau_i + n\beta_. + n(\tau\beta)_{i.} + \varepsilon_{i..}$$
and, since all cross-product terms have zero expectation,
$$\frac{1}{bn}E\left(\sum_{i=1}^{a} y_{i..}^2\right) = \frac{1}{bn}\left[a(bn\mu)^2 + a(bn)^2\sigma_\tau^2 + abn^2\sigma_\beta^2 + abn^2\sigma_{\tau\beta}^2 + abn\sigma^2\right] = abn\mu^2 + abn\sigma_\tau^2 + an\sigma_\beta^2 + an\sigma_{\tau\beta}^2 + a\sigma^2$$
Furthermore, we can show that
$$y_{...} = abn\mu + bn\tau_. + an\beta_. + n(\tau\beta)_{..} + \varepsilon_{...}$$
so the second term in the expected value of $SS_A$ becomes
$$\frac{1}{abn}E(y_{...}^2) = \frac{1}{abn}\left[(abn\mu)^2 + a(bn)^2\sigma_\tau^2 + b(an)^2\sigma_\beta^2 + abn^2\sigma_{\tau\beta}^2 + abn\sigma^2\right] = abn\mu^2 + bn\sigma_\tau^2 + an\sigma_\beta^2 + n\sigma_{\tau\beta}^2 + \sigma^2$$
We can now collect the components of the expected value of the sum of squares for
factor A and find the expected mean square as follows:
$$E(MS_A) = E\left(\frac{SS_A}{a-1}\right) = \frac{1}{a-1}\left[\frac{1}{bn}E\left(\sum_{i=1}^{a} y_{i..}^2\right) - E\left(\frac{y_{...}^2}{abn}\right)\right] = \frac{1}{a-1}\left[\sigma^2(a-1) + n(a-1)\sigma_{\tau\beta}^2 + bn(a-1)\sigma_\tau^2\right] = \sigma^2 + n\sigma_{\tau\beta}^2 + bn\sigma_\tau^2$$
We will find the expected mean square for the random factor, B. Now
$$E(MS_B) = E\left(\frac{SS_B}{b-1}\right) = \frac{1}{b-1}E(SS_B)$$
and
$$E(SS_B) = \frac{1}{an}E\left(\sum_{j=1}^{b} y_{.j.}^2\right) - \frac{1}{abn}E(y_{...}^2)$$
Using the restrictions on the model parameters, we can easily show that
$$y_{.j.} = an\mu + an\beta_j + \varepsilon_{.j.}$$
and
$$\frac{1}{an}E\left(\sum_{j=1}^{b} y_{.j.}^2\right) = \frac{1}{an}\left[b(an\mu)^2 + b(an)^2\sigma_\beta^2 + abn\sigma^2\right] = abn\mu^2 + abn\sigma_\beta^2 + b\sigma^2$$
Since
$$y_{...} = abn\mu + an\beta_. + \varepsilon_{...}$$
we can easily show that
$$\frac{1}{abn}E(y_{...}^2) = \frac{1}{abn}\left[(abn\mu)^2 + b(an)^2\sigma_\beta^2 + abn\sigma^2\right] = abn\mu^2 + an\sigma_\beta^2 + \sigma^2$$
Therefore the expected value of the mean square for the random effect is
$$E(MS_B) = \frac{1}{b-1}E(SS_B) = \frac{1}{b-1}\left[abn\mu^2 + abn\sigma_\beta^2 + b\sigma^2 - abn\mu^2 - an\sigma_\beta^2 - \sigma^2\right] = \frac{1}{b-1}\left[\sigma^2(b-1) + an(b-1)\sigma_\beta^2\right] = \sigma^2 + an\sigma_\beta^2$$
and all random effects are uncorrelated random variables. Notice that there is no
assumption concerning the interaction effects summed over the levels of the fixed factor
as is customarily made for the restricted model. Recall that the restricted model is
actually a more general model than the unrestricted model, but some modern computer
programs give the user a choice of models (and some computer programs only use the
unrestricted model), so there is increasing interest in both versions of the mixed model.
We will derive the expected value of the mean square for the random factor, B, in
Equation (12-26), as it is different from the corresponding expected mean square in the
restricted model case. As we will see, the assumptions regarding the interaction effects
are instrumental in the difference in the two expected mean squares.
$$E(MS_B) = E\left(\frac{SS_B}{b-1}\right) = \frac{1}{b-1}E(SS_B)$$
where
$$E(SS_B) = \frac{1}{an}E\left(\sum_{j=1}^{b} y_{.j.}^2\right) - \frac{1}{abn}E(y_{...}^2)$$
For the unrestricted model,
$$y_{.j.} = an\mu + n\alpha_. + an\gamma_j + n(\alpha\gamma)_{.j} + \varepsilon_{.j.} = an\mu + an\gamma_j + n(\alpha\gamma)_{.j} + \varepsilon_{.j.}$$
because α . = 0 . Notice, however, that the interaction term in this expression is not zero
as it would be in the case of the restricted model. Now the expected value of the first part
of the expression for E(SSB) is
$$\frac{1}{an}E\left(\sum_{j=1}^{b} y_{.j.}^2\right) = \frac{1}{an}\left[b(an\mu)^2 + b(an)^2\sigma_\gamma^2 + abn^2\sigma_{\alpha\gamma}^2 + abn\sigma^2\right] = abn\mu^2 + abn\sigma_\gamma^2 + bn\sigma_{\alpha\gamma}^2 + b\sigma^2$$
Now we can show that
$$y_{...} = abn\mu + bn\alpha_. + an\gamma_. + n(\alpha\gamma)_{..} + \varepsilon_{...} = abn\mu + an\gamma_. + n(\alpha\gamma)_{..} + \varepsilon_{...}$$
Therefore
$$\frac{1}{abn}E(y_{...}^2) = \frac{1}{abn}\left[(abn\mu)^2 + b(an)^2\sigma_\gamma^2 + abn^2\sigma_{\alpha\gamma}^2 + abn\sigma^2\right] = abn\mu^2 + an\sigma_\gamma^2 + n\sigma_{\alpha\gamma}^2 + \sigma^2$$
We may now assemble the components of the expected value of the sum of squares for
factor B and find the expected value of MSB as follows:
$$E(MS_B) = \frac{1}{b-1}E(SS_B) = \frac{1}{b-1}\left[\frac{1}{an}E\left(\sum_{j=1}^{b} y_{.j.}^2\right) - \frac{1}{abn}E(y_{...}^2)\right]$$
$$= \frac{1}{b-1}\left[abn\mu^2 + abn\sigma_\gamma^2 + bn\sigma_{\alpha\gamma}^2 + b\sigma^2 - \left(abn\mu^2 + an\sigma_\gamma^2 + n\sigma_{\alpha\gamma}^2 + \sigma^2\right)\right]$$
$$= \frac{1}{b-1}\left[\sigma^2(b-1) + n(b-1)\sigma_{\alpha\gamma}^2 + an(b-1)\sigma_\gamma^2\right] = \sigma^2 + n\sigma_{\alpha\gamma}^2 + an\sigma_\gamma^2$$
This last expression is in agreement with the result given in Equation (12-26).
Deriving expected mean squares by the direct application of the expectation operator (the
“brute force” method) is tedious, and the rules given in the text are a great labor-saving
convenience. There are other rules and techniques for deriving expected mean squares,
including algorithms that will work for unbalanced designs. See Milliken and Johnson
(1984) for a good discussion of some of these procedures.
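Results such as the expected mean square derived above are also easy to verify numerically. The sketch below (with arbitrary values of a, b, n, the fixed effects, and the variance components) simulates the unrestricted model repeatedly and compares the average MS_B with σ² + nσ²_αγ + anσ²_γ.

```python
import numpy as np

rng = np.random.default_rng(7)
a, b, n = 3, 4, 2                        # levels of A (fixed), B (random), replicates
sigma2, s2_gamma, s2_ag = 1.0, 2.0, 0.5  # error, B, and AB variance components
alpha = np.array([-1.0, 0.0, 1.0])       # fixed A effects (sum to zero)

ms_b = []
for _ in range(20000):
    gamma = rng.normal(0, np.sqrt(s2_gamma), b)        # random B effects
    ag = rng.normal(0, np.sqrt(s2_ag), (a, b))         # unrestricted interaction effects
    eps = rng.normal(0, np.sqrt(sigma2), (a, b, n))
    y = alpha[:, None, None] + gamma[None, :, None] + ag[:, :, None] + eps
    y_dotj = y.sum(axis=(0, 2))                        # column totals y_.j.
    ss_b = (y_dotj**2).sum() / (a * n) - y.sum()**2 / (a * b * n)
    ms_b.append(ss_b / (b - 1))

print("simulated   E(MS_B):", np.mean(ms_b))
print("theoretical E(MS_B):", sigma2 + n * s2_ag + a * n * s2_gamma)
```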
has a normal distribution with mean zero and variance unity as $\min(f_1, f_2, \ldots, f_Q)$
approaches infinity, where
$$V(\hat{\sigma}_0^2) = 2\sum_{i=1}^{Q}\frac{c_i^2\theta_i^2}{f_i},$$
θ i is the linear combination of variance components estimated by the ith mean square,
and fi is the number of degrees of freedom for MSi. Consequently, the 100(1-α) percent
large-sample two-sided confidence interval for $\sigma_0^2$ is
$$\hat{\sigma}_0^2 - z_{\alpha/2}\sqrt{V(\hat{\sigma}_0^2)} \le \sigma_0^2 \le \hat{\sigma}_0^2 + z_{\alpha/2}\sqrt{V(\hat{\sigma}_0^2)}$$
Operationally, we would replace θ i by MSi in actually computing the confidence interval.
This is the same basis used for construction of the confidence intervals by SAS PROC
MIXED that we presented in section 11-7.3 (refer to the discussion of tables 12-17 and
12-18 in the textbook).
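A small sketch of this calculation is given below for a hypothetical linear combination of mean squares; the coefficients, mean squares, and degrees of freedom are placeholders.

```python
import numpy as np
from scipy import stats

def large_sample_ci(c, ms, f, alpha=0.05):
    """Large-sample 100(1-alpha)% CI for sigma0^2 = sum(c_i * theta_i),
    estimated by sum(c_i * MS_i), with f_i degrees of freedom for MS_i."""
    c, ms, f = map(np.asarray, (c, ms, f))
    sigma2_hat = np.sum(c * ms)
    var_hat = 2 * np.sum(c**2 * ms**2 / f)   # theta_i replaced by MS_i
    half = stats.norm.ppf(1 - alpha / 2) * np.sqrt(var_hat)
    return sigma2_hat - half, sigma2_hat + half

# Hypothetical example: sigma0^2 = (MS_A - MS_E)/n with n = 4 observations per level
print(large_sample_ci(c=[0.25, -0.25], ms=[40.0, 6.0], f=[5, 18]))
```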
These large-sample intervals work well when the degrees of freedom are large,
but when the fi are small they may be unreliable. However, the performance may be
improved by applying suitable modifications to the procedure. Welch (1956) suggested a
modification to the large-sample method that resulted in some improvement, but Graybill
and Wang (1980) proposed a technique that makes the confidence interval exact for
certain special cases. It turns out that it is also a very good approximate procedure for the
cases where it is not an exact confidence interval. Their result is given in the textbook as
Equation (12-42).
$$\frac{\hat{\sigma}_1^2}{\hat{\sigma}_2^2} = \frac{\displaystyle\sum_{i=1}^{P} c_i MS_i}{\displaystyle\sum_{j=P+1}^{Q} c_j MS_j}$$
The lower confidence bound is
$$L = \frac{\hat{\sigma}_1^2}{\hat{\sigma}_2^2}\left[\frac{2 + k_4/(k_1k_2) - \sqrt{V_L}}{2(1 - k_5/k_2^2)}\right]$$
where
$$V_L = \left[2 + k_4/(k_1k_2)\right]^2 - 4(1 - k_5/k_2^2)(1 - k_3/k_1^2)$$
$$k_1 = \sum_{i=1}^{P} c_i MS_i, \qquad k_2 = \sum_{j=P+1}^{Q} c_j MS_j$$
$$k_3 = \sum_{i=1}^{P} G_i^2 c_i^2 MS_i^2 + \sum_{i=1}^{P-1}\sum_{t>i}^{P} G_{it}^* c_i c_t MS_i MS_t$$
$$k_4 = \sum_{i=1}^{P}\sum_{j=P+1}^{Q} G_{ij} c_i c_j MS_i MS_j$$
$$k_5 = \sum_{j=P+1}^{Q} H_j^2 c_j^2 MS_j^2$$
and $G_i$, $H_j$, $G_{ij}$, and $G_{it}^*$ are as previously defined. For more details, see the book by
Burdick and Graybill (1992).
is superior to the tabular analyses. Many practical aspects of designing experiments for
the study of measurement systems are discussed in these two papers.
Montgomery and Runger (1993a) (1993b) strongly recommend that confidence intervals
accompany the point estimates of the variance components (or functions of the variance
components). They show how both the Satterthwaite method and the large-sample
maximum likelihood (or REML) method can be used to produce confidence intervals in
measurement systems capability studies like the one in Example 12-2. They also
observe that many measurement systems capability studies involve nested factors (we
discuss nested designs in Chapter 13). Burdick and Larsen (1997) expand on this work
and show how the modified large sample method can be applied to measurement systems
capability experiments. They also report the results of a simulation study comparing
methods for constructing confidence intervals on the components of gauge variability.
They conclude that the modified large sample method is superior to other methods
because it maintains an actual confidence level closer to the “advertised” or stated
confidence level for the sizes of experiments that are typical of industrial R & R studies.
Borror, Montgomery and Runger (1997) also show how the modified large sample
method can be applied to the classical R & R study, and compare its performance to the
REML method. They provide SAS code for implementing the modified large sample
procedure. This paper also includes an example of a complex measurement systems
capability study from the semiconductor industry involving both nested and factorial
factor effects.
Supplemental References
Borror, C. M., D. C. Montgomery and G. C. Runger (1997). “Confidence Intervals for
Variance Components from Gauge Capability Studies”. Quality & Reliability
Engineering International, Vol. 13, pp. 361-369.
Burdick, R. K. and G. A. Larsen (1997). “Confidence Intervals on Measures of
Variability in R & R Studies”. Journal of Quality Technology, Vol. 29, pp. 261-273.
Graybill, F. A. and C. M. Wang (1980). “Confidence Intervals on Nonnegative Linear
Combinations of Variances”. Journal of the American Statistical Association, Vol. 75,
pp. 869-873.
Montgomery, D. C. (1996). Introduction to Statistical Quality Control, 3rd Edition. John
Wiley & Sons, New York.
Montgomery, D. C. and G. C. Runger (1993a). “Gauge Capability and Designed
Experiments Part I: Basic Methods”. Quality Engineering, Vol. 6, pp. 115-135.
Montgomery, D. C. and G. C. Runger (1993b). “Gauge Capability and Designed
Experiments Part II: Experimental Design Models and Variance Component Estimation”.
Quality Engineering, Vol. 6, pp. 289-305.
Welch, B. L. (1956). “On Linear Combinations of Several Variances”. Journal of the
American Statistical Association, Vol. 51, pp. 132-148.
The following output is from the Minitab general linear model analysis procedure.
As noted in the textbook, this design results in a - 1 = 9 degrees of freedom for lots, and a
= 10 degrees of freedom for samples within lots and error. The ANOVA indicates that
there is a significant difference between lots, and the estimate of the variance component
for lots is $\hat{\sigma}^2_{\mathrm{Lots}} = 2.03$. The ANOVA indicates that the sample within lots is not a
significant source of variability. This is an indication of lot homogeneity. There is a
small negative estimate of the sample-within-lots variance component. The experimental
error variance is estimated as $\hat{\sigma}^2 = 0.526$. Notice that the constants in the expected mean
squares are not integers; this is a consequence of the unbalanced nature of the design.
When the experimenter examines this run order, he notices that the level of temperature
is going to start at 150 degrees and then be changed eight times over the course of the 16
trials. Now temperature is a hard-to-change-variable, and following every adjustment to
temperature several hours are needed for the process to reach the new temperature level
and for the process to stabilize at the new operating conditions.
The experimenter may feel that this is an intolerable situation. Consequently, he may
decide that fewer changes in temperature are required, and rearrange the temperature
levels in the experiment so that the new design appears as in Table 2. Notice that only
three changes in the level of temperature are required in this new design. In effect, the
experimenter will set the temperature at 150 degrees and perform four runs with the other
three factors tested in random order. Then he will change the temperature to 100 degrees
and repeat the process, and so on. The experimenter has inadvertently introduced a split-
plot structure into the experiment.
Typically, most inadvertent split-plotting is not taken into account in the analysis. That
is, the experimenter analyzes the data as if the experiment had been conducted in random
order. Therefore, it is logical to ask about the impact of ignoring the inadvertent split-
plotting. While this question has not been studied in detail, generally inadvertently
running a split-plot and not properly accounting for it in the analysis probably does not
have major impact so long as the whole plot factor effects are large. These factor effect
estimates will probably have larger variances than the factor effects in the subplots, so
part of the risk is that small differences in the whole-plot factors may not be detected.
Obviously, the more systematic fashion in which the whole-plot factor temperature was
varied in Table 2 also exposes the experimenter to confounding of temperature with some
nuisance variable that is also changing with time. The most extreme case of this would
occur if the first eight runs in the experiment were made with temperature at the low level
(say), followed by the last eight runs with temperature at the high level.
Since $\sigma^2 = f(\mu)$, we have
$$V(x) = f(\mu)\left[h'(\mu)\right]^2$$
We want the variance of $x$ to be a constant, say $c^2$. So set
$$c^2 = f(\mu)\left[h'(\mu)\right]^2$$
$$h(\mu) = c\int\frac{dt}{\sqrt{f(t)}} = cG(\mu) + k$$
where $k$ is a constant.
As an example, suppose that for the response variable y we assumed that the mean and
variance were equal. This actually happens in the Poisson distribution. Therefore,
$$\mu = \sigma^2 \quad\text{implying that}\quad f(t) = t$$
So
$$h(\mu) = c\int\frac{dt}{\sqrt{t}} = c\int t^{-1/2}\,dt + k = c\,\frac{t^{-(1/2)+1}}{-(1/2)+1} + k = 2c\sqrt{\mu} + k$$
This implies that taking the square root of y will stabilize the variance. This agrees with
the advice given in the textbook (and elsewhere) that the square root transformation is
very useful for stabilizing the variance in Poisson data or in general for count data where
the mean and variance are not too different.
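A quick numerical check of this result, using arbitrary Poisson means, is sketched below: the variance of √y stays near 1/4 while the variance of y grows with the mean.

```python
import numpy as np

rng = np.random.default_rng(0)
for mu in [4, 16, 64, 256]:                 # arbitrary Poisson means
    y = rng.poisson(mu, size=200000)
    # Variance of y grows with mu, but the variance of sqrt(y) is
    # approximately constant (about 1/4), as the derivation predicts.
    print(f"mu={mu:4d}  var(y)={y.var():8.2f}  var(sqrt(y))={np.sqrt(y).var():.3f}")
```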
As a second example, suppose that the standard deviation of the response is proportional
to the mean; that is,
$$\mu^2 = \sigma^2 \quad\text{which implies that}\quad f(t) = t^2$$
Therefore,
$$h(\mu) = c\int\frac{dt}{\sqrt{t^2}} = c\int\frac{dt}{t} + k = c\log(\mu) + k, \quad \mu > 0$$
This implies that for a positive response where the standard deviation is proportional to
the mean, the log of the response is an appropriate variance-stabilizing transformation.
$$SS^* = SS_E(\hat{\lambda})\,e^{\chi^2_{\alpha,1}/n}$$
Remember that $\hat{\lambda}$ is the value of $\lambda$ that minimizes the error sum of squares.
Equation (14-20) in the textbook looks slightly different than the equation for $SS^*$ above.
The term $\exp(\chi^2_{\alpha,1}/n)$ has been replaced by $1 + t^2_{\alpha/2,v}/v$, where $v$ is the number of
degrees of freedom for error. Some authors use $1 + \chi^2_{\alpha/2,v}/v$ or $1 + z^2_{\alpha/2}/v$ instead, or
sometimes $1 + t^2_{\alpha/2,n}/n$ or $1 + \chi^2_{\alpha/2,n}/n$ or $1 + z^2_{\alpha/2}/n$. These are all based on the
expansion $\exp(x) = 1 + x + x^2/2! + x^3/3! + \cdots \approx 1 + x$, and the fact that $\chi^2_1 = z^2 \approx t_v^2$,
unless the number of degrees of freedom $v$ is too small. It is perhaps debatable whether
we should use n or v, but in most practical cases, there will be little difference in the
confidence intervals that result.
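In practice the interval is found by evaluating SS_E(λ) over a grid and retaining every λ whose error sum of squares does not exceed SS*. The sketch below does this for a hypothetical one-sample problem, where the error sum of squares is just the corrected sum of squares of the transformed observations; a designed experiment would use the residual sum of squares from the fitted model instead.

```python
import numpy as np
from scipy import stats

def ss_e(y, lam):
    """Error sum of squares for the scaled Box-Cox transformation."""
    gm = stats.gmean(y)
    if abs(lam) < 1e-8:
        z = gm * np.log(y)
    else:
        z = (y**lam - 1) / (lam * gm**(lam - 1))
    return np.sum((z - z.mean())**2)      # mean-only model for this sketch

y = np.array([2.3, 3.1, 1.8, 4.4, 2.9, 3.6, 5.2, 2.1])   # hypothetical positive data
n = y.size
lams = np.linspace(-2, 2, 401)
ss = np.array([ss_e(y, lam) for lam in lams])
lam_hat = lams[np.argmin(ss)]
ss_star = ss.min() * np.exp(stats.chi2.ppf(0.95, 1) / n)  # SS* threshold
ci = lams[ss <= ss_star]
print(f"lambda_hat = {lam_hat:.2f},  approx 95% CI = ({ci.min():.2f}, {ci.max():.2f})")
```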
on the generalized linear model or GLM. Examples 14-2 and 14-3 illustrated the
applicability of the GLM to designed experiments.
The GLM is a unification of nonlinear regression models and nonnormal response
variable distributions, where the response distribution is a member of the exponential
family, which includes the normal, Poisson, binomial, exponential and gamma
distributions as members. Furthermore, the normal-theory linear model is just a special
case of the GLM, so in many ways, the GLM is a unifying approach to empirical
modeling and data analysis.
We begin our presentation of these models by considering the case of logistic regression.
This is a situation where the response variable has only two possible outcomes,
generically called “success” and “failure” and denoted by 0 and 1. Notice that the
response is essentially qualitative, since the designation “success” or “failure” is entirely
arbitrary. Then we consider the situation where the response variable is a count, such as
the number of defects in a unit of product (as in the grille defects of Example 14-2), or
the number of relatively rare events such as the number of Atlantic hurricanes that make
landfall on the United States in a year. Finally, we briefly show how all these situations
are unified by the GLM.
14-3.1. Models with a Binary Response Variable
Consider the situation where the response variable from an experiment takes on only two
possible values, 0 and 1. These could be arbitrary assignments resulting from observing a
qualitative response. For example, the response could be the outcome of a functional
electrical test on a semiconductor device for which the results are either a “success”,
which means the device works properly, or a “failure”, which could be due to a short, an
open, or some other functional problem.
Suppose that the model has the form
$$y_i = \mathbf{x}_i'\boldsymbol{\beta} + \varepsilon_i$$
where $\mathbf{x}_i' = [1, x_{i1}, x_{i2}, \ldots, x_{ik}]$, $\boldsymbol{\beta}' = [\beta_0, \beta_1, \beta_2, \ldots, \beta_k]$, $\mathbf{x}_i'\boldsymbol{\beta}$ is called the linear predictor,
and the response variable yi takes on the values either 0 or 1. We will assume that the
response variable yi is a Bernoulli random variable with probability distribution as
follows:
yi Probability
1 P( yi = 1) = π i
0 P( yi = 0) = 1 − π i
$$\ln L(y_1, y_2, \ldots, y_n, \boldsymbol{\beta}) = \ln\prod_{i=1}^{n} f_i(y_i) = \sum_{i=1}^{n} y_i\ln\left(\frac{\pi_i}{1-\pi_i}\right) + \sum_{i=1}^{n}\ln(1-\pi_i)$$
Now since $1 - \pi_i = [1 + \exp(\mathbf{x}_i'\boldsymbol{\beta})]^{-1}$ and $\eta_i = \ln[\pi_i/(1-\pi_i)] = \mathbf{x}_i'\boldsymbol{\beta}$, the log-likelihood can
be written as
$$\ln L(\mathbf{y}, \boldsymbol{\beta}) = \sum_{i=1}^{n} y_i\mathbf{x}_i'\boldsymbol{\beta} - \sum_{i=1}^{n}\ln[1 + \exp(\mathbf{x}_i'\boldsymbol{\beta})]$$
Often in logistic regression models we have repeated observations or trials at each level
of the x variables. This happens frequently in designed experiments. Let yi represent the
number of 1’s observed for the ith observation and ni be the number of trials at each
observation. Then the log-likelihood becomes
$$\ln L(\mathbf{y}, \boldsymbol{\beta}) = \sum_{i=1}^{n} y_i\ln(\pi_i) + \sum_{i=1}^{n} n_i\ln(1-\pi_i) - \sum_{i=1}^{n} y_i\ln(1-\pi_i)$$
Numerical search methods could be used to compute the maximum likelihood estimates
(or MLEs) β . However, it turns out that we can use iteratively reweighted least squares
(IRLS) to actually find the MLEs. To see this recall that the MLEs are the solutions to
$$\frac{\partial L}{\partial \boldsymbol{\beta}} = \mathbf{0}$$
which can be expressed as
$$\frac{\partial L}{\partial \pi_i}\frac{\partial \pi_i}{\partial \boldsymbol{\beta}} = \mathbf{0}$$
Note that
$$\frac{\partial L}{\partial \pi_i} = \frac{y_i}{\pi_i} - \frac{n_i}{1-\pi_i} + \frac{y_i}{1-\pi_i}$$
and
$$\frac{\partial \pi_i}{\partial \boldsymbol{\beta}} = \left\{\frac{\exp(\mathbf{x}_i'\boldsymbol{\beta})}{1+\exp(\mathbf{x}_i'\boldsymbol{\beta})} - \left[\frac{\exp(\mathbf{x}_i'\boldsymbol{\beta})}{1+\exp(\mathbf{x}_i'\boldsymbol{\beta})}\right]^2\right\}\mathbf{x}_i = \pi_i(1-\pi_i)\mathbf{x}_i$$
Putting this all together gives
$$\frac{\partial L}{\partial \boldsymbol{\beta}} = \sum_{i=1}^{n}\left[\frac{y_i}{\pi_i} - \frac{n_i}{1-\pi_i} + \frac{y_i}{1-\pi_i}\right]\pi_i(1-\pi_i)\mathbf{x}_i = \sum_{i=1}^{n}(y_i - n_i\pi_i)\mathbf{x}_i$$
In matrix notation, the least squares normal equations are
$$\mathbf{X}'\mathbf{X}\boldsymbol{\beta} = \mathbf{X}'\mathbf{y}$$
which can be written as
$$\mathbf{X}'(\mathbf{y} - \mathbf{X}\boldsymbol{\beta}) = \mathbf{0}$$
The maximum likelihood score equations for logistic regression have exactly this form,
$$\mathbf{X}'(\mathbf{y} - \boldsymbol{\mu}) = \mathbf{0}$$
where $\boldsymbol{\mu}' = [n_1\pi_1, n_2\pi_2, \ldots, n_n\pi_n]$.
The Newton-Raphson method is actually used to solve the score equations. This
procedure observes that in the neighborhood of the solution, we can use a first-order
Taylor series expansion to form the approximation
$$p_i - \pi_i \approx \left(\frac{\partial \pi_i}{\partial \boldsymbol{\beta}}\right)'(\boldsymbol{\beta}^* - \boldsymbol{\beta}) \qquad (1)$$
where
$$p_i = \frac{y_i}{n_i}$$
and β * is the value of β that solves the score equations. Now η i = x i′β , and
$$\frac{\partial \eta_i}{\partial \boldsymbol{\beta}} = \mathbf{x}_i$$
so
$$\pi_i = \frac{\exp(\eta_i)}{1 + \exp(\eta_i)}$$
By the chain rule
$$\frac{\partial \pi_i}{\partial \boldsymbol{\beta}} = \frac{\partial \pi_i}{\partial \eta_i}\frac{\partial \eta_i}{\partial \boldsymbol{\beta}} = \frac{\partial \pi_i}{\partial \eta_i}\mathbf{x}_i$$
so that equation (1) becomes
$$p_i - \pi_i \approx \left(\frac{\partial \pi_i}{\partial \eta_i}\right)(\mathbf{x}_i'\boldsymbol{\beta}^* - \mathbf{x}_i'\boldsymbol{\beta}) \qquad (2)$$
or
$$p_i - \pi_i \approx \left(\frac{\partial \pi_i}{\partial \eta_i}\right)(\eta_i^* - \eta_i)$$
Now
$$\frac{\partial \pi_i}{\partial \eta_i} = \frac{\exp(\eta_i)}{1+\exp(\eta_i)} - \left[\frac{\exp(\eta_i)}{1+\exp(\eta_i)}\right]^2 = \pi_i(1-\pi_i)$$
Consequently,
$$y_i - n_i\pi_i \approx [n_i\pi_i(1-\pi_i)](\eta_i^* - \eta_i)$$
Now the variance of the linear predictor $\eta_i^* = \mathbf{x}_i'\boldsymbol{\beta}^*$ is, to a first approximation,
$$V(\eta_i^*) \approx \frac{1}{n_i\pi_i(1-\pi_i)}$$
Thus
$$y_i - n_i\pi_i \approx \left[\frac{1}{V(\eta_i^*)}\right](\eta_i^* - \eta_i)$$
or in matrix notation,
$$\mathbf{X}'\mathbf{V}^{-1}(\boldsymbol{\eta}^* - \boldsymbol{\eta}) = \mathbf{0}$$
where $\mathbf{V}$ is a diagonal matrix of the weights formed from the variances of the $\eta_i$.
Because $\boldsymbol{\eta} = \mathbf{X}\boldsymbol{\beta}$ we may write the score equations as
$$\mathbf{X}'\mathbf{V}^{-1}(\boldsymbol{\eta}^* - \mathbf{X}\boldsymbol{\beta}) = \mathbf{0}$$
and the maximum likelihood estimate of $\boldsymbol{\beta}$ is
$$\hat{\boldsymbol{\beta}} = (\mathbf{X}'\mathbf{V}^{-1}\mathbf{X})^{-1}\mathbf{X}'\mathbf{V}^{-1}\boldsymbol{\eta}^*$$
However, there is a problem because we don’t know $\boldsymbol{\eta}^*$. Our solution to this problem
uses equation (2),
$$p_i - \pi_i \approx \left(\frac{\partial \pi_i}{\partial \eta_i}\right)(\eta_i^* - \eta_i)$$
which can be solved for $\eta_i^*$ and used to define the working variable
$$z_i = \hat{\eta}_i + (p_i - \hat{\pi}_i)\frac{\partial \eta_i}{\partial \pi_i}$$
The variance of the random part of $z_i$ is
$$V\left[(p_i - \pi_i)\frac{\partial \eta_i}{\partial \pi_i}\right] = \frac{\pi_i(1-\pi_i)}{n_i}\left(\frac{\partial \eta_i}{\partial \pi_i}\right)^2 = \frac{\pi_i(1-\pi_i)}{n_i}\left[\frac{1}{\pi_i(1-\pi_i)}\right]^2 = \frac{1}{n_i\pi_i(1-\pi_i)}$$
So $\mathbf{V}$ is the diagonal matrix of weights formed from the variances of the random part of $\mathbf{z}$.
Thus the IRLS algorithm based on the Newton-Raphson method can be described as
follows:
1. Use ordinary least squares to obtain an initial estimate of $\boldsymbol{\beta}$, say $\hat{\boldsymbol{\beta}}_0$;
2. Use $\hat{\boldsymbol{\beta}}_0$ to estimate $\mathbf{V}$ and $\boldsymbol{\pi}$;
3. Let $\hat{\boldsymbol{\eta}}_0 = \mathbf{X}\hat{\boldsymbol{\beta}}_0$;
4. Base $\mathbf{z}_1$ on $\hat{\boldsymbol{\eta}}_0$;
5. Obtain a new estimate $\hat{\boldsymbol{\beta}}_1$, and iterate until some suitable convergence criterion is
satisfied.
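A compact numerical version of this algorithm for grouped binomial data is sketched below. The design matrix, numbers of trials, and numbers of successes are hypothetical, and the starting value is obtained by ordinary least squares on the empirical logits, which is one reasonable way to carry out step 1.

```python
import numpy as np

def irls_logistic(X, y, n, max_iter=25, tol=1e-8):
    """Fit a logistic regression to grouped binomial data by IRLS.
    X: (m, p) model matrix (first column of ones), y: successes, n: trials."""
    p_obs = np.clip(y / n, 1e-4, 1 - 1e-4)
    # Step 1: ordinary least squares on the empirical logits as a starting value
    beta = np.linalg.lstsq(X, np.log(p_obs / (1 - p_obs)), rcond=None)[0]
    for _ in range(max_iter):
        eta = X @ beta                       # current linear predictor
        pi = 1 / (1 + np.exp(-eta))          # current fitted probabilities
        w = n * pi * (1 - pi)                # weights: reciprocal of V(eta_i*)
        z = eta + (y - n * pi) / w           # working (adjusted) response
        beta_new = np.linalg.solve((X.T * w) @ X, (X.T * w) @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Hypothetical single-factor experiment: trials and successes at five levels of x
X = np.column_stack([np.ones(5), np.array([-2.0, -1.0, 0.0, 1.0, 2.0])])
n = np.array([50.0, 50.0, 50.0, 50.0, 50.0])
y = np.array([5.0, 12.0, 24.0, 38.0, 44.0])
print(irls_logistic(X, y, n))
```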
If $\hat{\boldsymbol{\beta}}$ is the final value that the above algorithm produces and if the model assumptions
are correct, then we can show that asymptotically $\hat{\boldsymbol{\beta}}$ is an unbiased estimator of $\boldsymbol{\beta}$ with
covariance matrix $(\mathbf{X}'\mathbf{V}^{-1}\mathbf{X})^{-1}$. The fitted value of the logistic regression model is
$$\hat{\pi}_i = \frac{\exp(\mathbf{x}_i'\hat{\boldsymbol{\beta}})}{1 + \exp(\mathbf{x}_i'\hat{\boldsymbol{\beta}})} = \frac{1}{1 + \exp(-\mathbf{x}_i'\hat{\boldsymbol{\beta}})}$$
Consider a model with a single regressor. The log-odds at $x_i$ is $\ln[\hat{\pi}_i/(1-\hat{\pi}_i)] = \hat{\beta}_0 + \hat{\beta}_1 x_i$,
so the difference in the log-odds at $x_i + 1$ and at $x_i$ is
$$\ln(\mathrm{odds}_{x_i+1}) - \ln(\mathrm{odds}_{x_i}) = \hat{\beta}_1$$
If we take antilogs, we obtain the odds ratio
$$\hat{O}_R \equiv \frac{\mathrm{odds}_{x_i+1}}{\mathrm{odds}_{x_i}} = e^{\hat{\beta}_1}$$
The odds ratio can be interpreted as the estimated increase in the probability of success
associated with a one-unit change in the value of the predictor variable. In general, the
estimated increase in the odds ratio associated with a change of d units in the predictor
variable is exp(dβ 1 ) .
The interpretation of the regression coefficients in the multiple logistic regression model
is similar to that for the case where the linear predictor contains only one regressor. That
is, the quantity $\exp(\hat{\beta}_j)$ is the odds ratio for regressor $x_j$, assuming that all other
predictor variables are held constant.
The value of the log-likelihood function for the fitted model can never exceed the value
of the log-likelihood function for the saturated model, because the fitted model contains
fewer parameters. The deviance compares the log-likelihood of the saturated model with
the log-likelihood of the fitted model. Specifically, model deviance is defined as
$$\lambda(\hat{\boldsymbol{\beta}}) = 2\ln L(\text{saturated model}) - 2\ln L(\hat{\boldsymbol{\beta}}) = 2\left[\ell(\text{saturated model}) - \ell(\hat{\boldsymbol{\beta}})\right] \qquad (3)$$
where $\ell$ denotes the log of the likelihood function. Now if the logistic regression model
is the correct regression function and the sample size n is large, the model deviance has
an approximate chi-square distribution with n – p degrees of freedom. Large values of
the model deviance would indicate that the model is not correct, while a small value of
model deviance implies that the fitted model (which has fewer parameters than the
saturated model) fits the data almost as well as the saturated model. The formal test
criteria would be as follows:
if $\lambda(\hat{\boldsymbol{\beta}}) \le \chi^2_{\alpha,n-p}$, conclude that the fitted model is adequate
if $\lambda(\hat{\boldsymbol{\beta}}) > \chi^2_{\alpha,n-p}$, conclude that the fitted model is not adequate
The deviance is related to a very familiar quantity. If we consider the standard normal-
theory linear regression model, the deviance turns out to be the error or residual sum of
squares divided by the error variance σ 2 .
Testing Hypotheses on Subsets of Parameters using Deviance
We can also use the deviance to test hypotheses on subsets of the model parameters, just
as we used the difference in regression (or error) sums of squares to test hypotheses in the
normal-error linear regression model case. Recall that the model can be written as
η = Xβ
= X1 β 1 + X 2 β 2
where the full model has p parameters, β 1 contains p – r of these parameters, β 2 contains
r of these parameters, and the columns of the matrices X1 and X2 contain the variables
associated with these parameters. Suppose that we wish to test the hypotheses
H0 : β 2 = 0
H1: β 2 ≠ 0
Therefore, the reduced model is
η = X1β 1
Now fit the reduced model, and let λ ( β 1 ) be the deviance for the reduced model. The
deviance for the reduced model will always be larger than the deviance for the full model,
because the reduced model contains fewer parameters. However, if the deviance for the
reduced model is not much larger than the deviance for the full model, it indicates that
the reduced model is about as good a fit as the full model, so it is likely that the
parameters in β 2 are equal to zero. That is, we cannot reject the null hypothesis above.
However, if the difference in deviance is large, at least one of the parameters in β 2 is
likely not zero, and we should reject the null hypothesis. Formally, the difference in
deviance is
$$\lambda(\boldsymbol{\beta}_2 \mid \boldsymbol{\beta}_1) = \lambda(\hat{\boldsymbol{\beta}}_1) - \lambda(\hat{\boldsymbol{\beta}})$$
and this quantity has n − ( p − r ) − (n − p) = r degrees of freedom. If the null hypothesis
is true and if n is large, the difference in deviance has a chi-square distribution with r
degrees of freedom. Therefore, we would reject the null hypothesis if $\lambda(\boldsymbol{\beta}_2 \mid \boldsymbol{\beta}_1)$ exceeds $\chi^2_{\alpha,r}$.
Sometimes the difference in deviance λ(β2 | β1) is called the partial deviance. It is a
likelihood ratio test. To see this, let L( β ) be the maximum value of the likelihood
function for the full model, and L( β 1 ) be the maximum value of the likelihood function
for the reduced model. The likelihood ratio is
$$\frac{L(\hat{\boldsymbol{\beta}}_1)}{L(\hat{\boldsymbol{\beta}})}$$
The test statistic for the likelihood ratio test is equal to minus two times the log-likelihood ratio, or
$$\chi^2 = -2\ln\frac{L(\hat{\boldsymbol{\beta}}_1)}{L(\hat{\boldsymbol{\beta}})} = 2\ln L(\hat{\boldsymbol{\beta}}) - 2\ln L(\hat{\boldsymbol{\beta}}_1)$$
However, this is exactly the same as the difference in deviance. To see this, substitute
from the definition of the deviance from equation (3) and note that the log-likelihoods for
the saturated model cancel out.
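A short sketch of the difference-in-deviance test is given below for hypothetical grouped binomial data; the statsmodels library is used here purely for convenience to obtain the two deviances.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Hypothetical grouped binomial data with two coded factors x1 and x2
x1 = np.array([-1.0, -1.0, 1.0, 1.0, 0.0])
x2 = np.array([-1.0, 1.0, -1.0, 1.0, 0.0])
trials = np.array([30, 30, 30, 30, 30])
successes = np.array([8, 14, 12, 25, 15])
endog = np.column_stack([successes, trials - successes])

X_full = sm.add_constant(np.column_stack([x1, x2]))
X_red = sm.add_constant(x1)                 # reduced model drops x2

dev_full = sm.GLM(endog, X_full, family=sm.families.Binomial()).fit().deviance
dev_red = sm.GLM(endog, X_red, family=sm.families.Binomial()).fit().deviance

lr = dev_red - dev_full                     # difference in deviance, r = 1 df here
p_value = stats.chi2.sf(lr, df=1)
print(f"partial deviance = {lr:.3f}, p-value = {p_value:.4f}")
```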
Tests on Individual Model Coefficients
Tests on individual model coefficients, such as
H0 : β j = 0
H1: β j ≠ 0
can be conducted by using the difference in deviance method described above. There is
another approach, also based on the theory of maximum likelihood estimators. For large
samples, the distribution of a maximum likelihood estimator is approximately normal
with little or no bias. Furthermore, the variances and covariances of a set of maximum
likelihood estimators can be found from the second partial derivatives of the log-
likelihood function with respect to the model parameters, evaluated at the maximum
likelihood estimates. Then a t-like statistic can be constructed to test the above
hypothesis. This is sometimes referred to as Wald inference.
Let G denote the p × p matrix of second partial derivatives of the log-likelihood function;
that is
$$G_{ij} = \frac{\partial^2\ell(\boldsymbol{\beta})}{\partial\beta_i\,\partial\beta_j}, \qquad i, j = 0, 1, \ldots, k$$
G is called the Hessian matrix. If the elements of the Hessian are evaluated at the
maximum likelihood estimators $\boldsymbol{\beta} = \hat{\boldsymbol{\beta}}$, the large-sample approximate covariance matrix
of the regression coefficients is
$$V(\hat{\boldsymbol{\beta}}) \equiv \hat{\boldsymbol{\Sigma}} = -\mathbf{G}(\hat{\boldsymbol{\beta}})^{-1}$$
The square roots of the diagonal elements of this matrix are the large-sample standard
errors of the regression coefficients, so the test statistic for the null hypothesis in
H0 : β j = 0
H1: β j ≠ 0
is
$$Z_0 = \frac{\hat{\beta}_j}{se(\hat{\beta}_j)}$$
The reference distribution for this statistic is the standard normal distribution. Some
computer packages square the Z0 statistic and compare it to a chi-square distribution with
one degree of freedom. It is also straightforward to use Wald inference to construct
confidence intervals on individual regression coefficients.
E ( yi ) = µ i
and that there is a function g that relates the mean of the response to a linear predictor,
say
$$g(\mu_i) = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k = \mathbf{x}_i'\boldsymbol{\beta}$$
The function g is usually called the link function. The relationship between the mean
and the linear predictor is
µ i = g −1 (xi′β )
There are several link functions that are commonly used with the Poisson distribution.
One of these is the identity link
g ( µ i ) = µ i = xi′β
When this link is used, E ( yi ) = µ i = x i′β since µ i = g −1 (x i′β ) = xi′β . Another popular
link function for the Poisson distribution is the log link
g ( µ i ) = ln( µ i ) = xi′β
For the log link, the relationship between the mean of the response variable and the linear
predictor is
$$\mu_i = g^{-1}(\mathbf{x}_i'\boldsymbol{\beta}) = e^{\mathbf{x}_i'\boldsymbol{\beta}}$$
The log link is particularly attractive for Poisson regression because it ensures that all of
the predicted values of the response variable will be nonnegative.
The method of maximum likelihood is used to estimate the parameters in Poisson
regression. The development follows closely the approach used for logistic regression.
If we have a random sample of n observations on the response y and the predictors x, then
the likelihood function is
$$L(\mathbf{y}, \boldsymbol{\beta}) = \prod_{i=1}^{n} f_i(y_i) = \prod_{i=1}^{n}\frac{e^{-\mu_i}\mu_i^{y_i}}{y_i!} = \frac{\displaystyle\prod_{i=1}^{n}\mu_i^{y_i}\exp\left(-\sum_{i=1}^{n}\mu_i\right)}{\displaystyle\prod_{i=1}^{n} y_i!}$$
where µ i = g −1 (xi′β ) . Once the link function is specified, we maximize the log-
likelihood
$$\ln L(\mathbf{y}, \boldsymbol{\beta}) = \sum_{i=1}^{n} y_i\ln(\mu_i) - \sum_{i=1}^{n}\mu_i - \sum_{i=1}^{n}\ln(y_i!)$$
Iteratively reweighted least squares can be used to find the maximum likelihood estimates
of the parameters in Poisson regression, following an approach similar to that used for
logistic regression. Once the parameter estimates β are obtained, the fitted Poisson
regression model is
$$\hat{y}_i = g^{-1}(\mathbf{x}_i'\hat{\boldsymbol{\beta}})$$
For example, if the identity link is used, the prediction equation becomes
$$\hat{y}_i = g^{-1}(\mathbf{x}_i'\hat{\boldsymbol{\beta}}) = \mathbf{x}_i'\hat{\boldsymbol{\beta}}$$
Inference on the model and its parameters follows exactly the same approach as used for
logistic regression. That is, model deviance is an overall measure of goodness of fit, and
tests on subsets of model parameters can be performed using the difference in deviance
between the full and reduced models. These are likelihood ratio tests. Wald inference,
based on large-sample properties of maximum likelihood estimators, can be used to test
hypotheses and construct confidence intervals on individual model parameters.
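The sketch below fits a Poisson regression with the log link to hypothetical count data, again using the statsmodels library for convenience; the deviance and the Wald standard errors discussed above are available directly from the fitted object.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: defect counts at five coded levels of a process factor
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
counts = np.array([12, 8, 6, 4, 3])

X = sm.add_constant(x)
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()   # log link is the default

print(fit.params)        # estimated beta (intercept and slope)
print(fit.deviance)      # model deviance for goodness of fit
print(fit.bse)           # large-sample (Wald) standard errors
print(fit.predict(X))    # fitted means mu_i = exp(x_i' beta)
```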
14-3.6. The Generalized Linear Model
All of the regression models that we have considered in this section belong to a family of
regression models called the generalized linear model, or the GLM. The GLM is
actually a unifying approach to regression and experimental design models, uniting the
usual normal-theory linear regression models and nonlinear models such as logistic and
Poisson regression.
A key assumption in the GLM is that the response variable distribution is a member of
the exponential family of distributions, which includes the normal, binomial, Poisson,
inverse normal, exponential and gamma distributions. Distributions that are members of
the exponential family have the general form
f ( yi , θ i , φ ) = exp{[ yiθ i − b(θ i )] / a (φ ) + h( yi , φ )}
where φ is a scale parameter and θ i is called the natural location parameter. For members
of the exponential family,
$$\mu = E(y) = \frac{db(\theta_i)}{d\theta_i}$$
$$V(y) = a(\phi)\frac{d^2 b(\theta_i)}{d\theta_i^2} = a(\phi)\frac{d\mu}{d\theta_i}$$
Let
$$\mathrm{var}(\mu) = \frac{V(y)}{a(\phi)} = \frac{d\mu}{d\theta_i}$$
where var(µ) denotes the dependence of the variance of the response on its mean. As a
result, we have
$$\frac{d\theta_i}{d\mu} = \frac{1}{\mathrm{var}(\mu)}$$
It is easy to show that the normal, binomial and Poisson distributions are members of the
exponential family.
The Normal Distribution
$$f(y_i, \theta_i, \phi) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left[-\frac{1}{2\sigma^2}(y-\mu)^2\right] = \exp\left[-\frac{1}{2}\ln(2\pi\sigma^2) - \frac{y^2}{2\sigma^2} + \frac{y\mu}{\sigma^2} - \frac{\mu^2}{2\sigma^2}\right]$$
$$= \exp\left[\frac{1}{\sigma^2}\left(y\mu - \frac{\mu^2}{2}\right) - \frac{y^2}{2\sigma^2} - \frac{1}{2}\ln(2\pi\sigma^2)\right]$$
Thus for the normal distribution $\theta_i = \mu$, $b(\theta_i) = \theta_i^2/2$, $a(\phi) = \sigma^2$, and $h(y_i, \phi) = -y_i^2/(2\sigma^2) - \tfrac{1}{2}\ln(2\pi\sigma^2)$.
The Binomial Distribution
$$f(y_i, \theta_i, \phi) = \binom{n}{y}\pi^{y}(1-\pi)^{n-y}$$
$$\theta_i = \ln\left(\frac{\pi}{1-\pi}\right) \quad\text{and}\quad \pi = \frac{\exp(\theta_i)}{1+\exp(\theta_i)}$$
$$b(\theta_i) = -n\ln(1-\pi), \qquad a(\phi) = 1, \qquad h(y_i, \phi) = \ln\binom{n}{y}$$
$$E(y) = \frac{db(\theta_i)}{d\theta_i} = \frac{db(\theta_i)}{d\pi}\frac{d\pi}{d\theta_i}$$
We note that
$$\frac{d\pi}{d\theta_i} = \frac{\exp(\theta_i)}{1+\exp(\theta_i)} - \left[\frac{\exp(\theta_i)}{1+\exp(\theta_i)}\right]^2 = \pi(1-\pi)$$
Therefore,
$$E(y) = \left(\frac{n}{1-\pi}\right)\pi(1-\pi) = n\pi$$
We recognize this as the mean of the binomial distribution. Also,
$$V(y) = \frac{dE(y)}{d\theta_i} = \frac{dE(y)}{d\pi}\frac{d\pi}{d\theta_i} = n\pi(1-\pi)$$
This last expression is just the variance of the binomial distribution.
There are other link functions that could be used with a GLM, including:
1. The probit link,
$$\eta_i = \Phi^{-1}[E(y_i)]$$
where $\Phi$ represents the cumulative standard normal distribution function.
2. The complementary log-log link,
$$\eta_i = \ln\{-\ln[1 - E(y_i)]\}$$
3. The power family link,
$$\eta_i = \begin{cases} E(y_i)^{\lambda}, & \lambda \neq 0 \\ \ln[E(y_i)], & \lambda = 0 \end{cases}$$
A very fundamental idea is that there are two components to a GLM; the response
variable distribution, and the link function. We can view the selection of the link
function in a vein similar to the choice of a transformation on the response. However,
unlike a transformation, the link function takes advantage of the natural distribution of
the response. Just as not using an appropriate transformation can result in problems with
a fitted linear model, improper choices of the link function can also result in significant
problems with a GLM.
14-3.8. Parameter Estimation in the GLM
The method of maximum likelihood is the theoretical basis for parameter estimation in
the GLM. However, the actual implementation of maximum likelihood results in an
algorithm based on iteratively reweighted least squares (IRLS). This is exactly what we
saw previously for the special case of logistic regression.
Consider the method of maximum likelihood applied to the GLM, and suppose we use
the canonical link. The log-likelihood function is
$$\ell(\mathbf{y}, \boldsymbol{\beta}) = \sum_{i=1}^{n}\frac{y_i\theta_i - b(\theta_i)}{a(\phi)} + h(y_i, \phi)$$
and, because the canonical link gives $\theta_i = \eta_i = \mathbf{x}_i'\boldsymbol{\beta}$,
$$\frac{\partial\ell}{\partial\boldsymbol{\beta}} = \frac{1}{a(\phi)}\sum_{i=1}^{n}\left[y_i - \frac{db(\theta_i)}{d\theta_i}\right]\mathbf{x}_i = \frac{1}{a(\phi)}\sum_{i=1}^{n}(y_i - \mu_i)\mathbf{x}_i$$
Consequently, we can find the maximum likelihood estimates of the parameters by
solving the system of equations
$$\frac{1}{a(\phi)}\sum_{i=1}^{n}(y_i - \mu_i)\mathbf{x}_i = \mathbf{0}$$
In most cases, $a(\phi)$ is a constant, so these equations become
$$\sum_{i=1}^{n}(y_i - \mu_i)\mathbf{x}_i = \mathbf{0}$$
This is actually a system of $p = k + 1$ equations, one for each model parameter. In matrix
form, these equations are
$$\mathbf{X}'(\mathbf{y} - \boldsymbol{\mu}) = \mathbf{0}$$
where $\boldsymbol{\mu}' = [\mu_1, \mu_2, \ldots, \mu_n]$. These are called the maximum likelihood score equations,
and they are just the same equations that we saw previously in the case of logistic
regression, where $\boldsymbol{\mu}' = [n_1\pi_1, n_2\pi_2, \ldots, n_n\pi_n]$.
To solve the score equations, we can use IRLS, just as we did in the case of logistic
regression. We start by finding a first-order Taylor series approximation in the
neighborhood of the solution
$$y_i - \mu_i \approx \frac{d\mu_i}{d\eta_i}(\eta_i^* - \eta_i)$$
Now for a canonical link $\eta_i = \theta_i$, and
$$y_i - \mu_i \approx \frac{d\mu_i}{d\theta_i}(\eta_i^* - \eta_i) \qquad (4)$$
Therefore, we have
$$\eta_i^* - \eta_i \approx (y_i - \mu_i)\frac{d\theta_i}{d\mu_i}$$
$$V(\eta_i^* - \eta_i) \approx V\left[(y_i - \mu_i)\frac{d\theta_i}{d\mu_i}\right]$$
Since $\eta_i^*$ and $\mu_i$ are constants,
$$V(\hat{\eta}_i) \approx \left[\frac{d\theta_i}{d\mu_i}\right]^2 V(y_i)$$
But
$$\frac{d\theta_i}{d\mu_i} = \frac{1}{\mathrm{var}(\mu_i)}$$
and $V(y_i) = \mathrm{var}(\mu_i)\,a(\phi)$. Consequently,
$$V(\hat{\eta}_i) \approx \left[\frac{1}{\mathrm{var}(\mu_i)}\right]^2\mathrm{var}(\mu_i)\,a(\phi) = \frac{a(\phi)}{\mathrm{var}(\mu_i)}$$
For convenience, define $\mathrm{var}(\eta_i) = [\mathrm{var}(\mu_i)]^{-1}$, so we have
$$V(\hat{\eta}_i) \approx \mathrm{var}(\eta_i)\,a(\phi)$$
Substituting this into Equation (4) above results in
$$y_i - \mu_i \approx \frac{1}{\mathrm{var}(\eta_i)}(\eta_i^* - \eta_i) \qquad (5)$$
If we let $\mathbf{V}$ be an $n \times n$ diagonal matrix whose diagonal elements are the $\mathrm{var}(\eta_i)$, then in
matrix form, Equation (5) becomes
$$\mathbf{y} - \boldsymbol{\mu} \approx \mathbf{V}^{-1}(\boldsymbol{\eta}^* - \boldsymbol{\eta})$$
We may then rewrite the score equations as follows:
$$\mathbf{X}'(\mathbf{y} - \boldsymbol{\mu}) = \mathbf{0}$$
$$\mathbf{X}'\mathbf{V}^{-1}(\boldsymbol{\eta}^* - \boldsymbol{\eta}) = \mathbf{0}$$
$$\mathbf{X}'\mathbf{V}^{-1}(\boldsymbol{\eta}^* - \mathbf{X}\boldsymbol{\beta}) = \mathbf{0}$$
Thus, the maximum likelihood estimate of $\boldsymbol{\beta}$ is
$$\hat{\boldsymbol{\beta}} = (\mathbf{X}'\mathbf{V}^{-1}\mathbf{X})^{-1}\mathbf{X}'\mathbf{V}^{-1}\boldsymbol{\eta}^*$$
Now just as we saw in the logistic regression situation, we do not know $\boldsymbol{\eta}^*$, so we pursue
an iterative scheme based on
$$z_i = \hat{\eta}_i + (y_i - \hat{\mu}_i)\frac{d\eta_i}{d\mu_i}$$
Using iteratively reweighted least squares with the Newton-Raphson method, the solution
is found from
$$\hat{\boldsymbol{\beta}} = (\mathbf{X}'\mathbf{V}^{-1}\mathbf{X})^{-1}\mathbf{X}'\mathbf{V}^{-1}\mathbf{z}$$
Asymptotically, the random component of z comes from the observations yi. The
diagonal elements of the matrix V are the variances of the zi’s, apart from a(φ ) .
As an example, consider the logistic regression case:
$$\eta_i = \ln\left(\frac{\pi_i}{1-\pi_i}\right)$$
and
$$\frac{d\eta_i}{d\mu_i} = \frac{d\eta_i}{d\pi_i} = \frac{d}{d\pi_i}\left[\ln\left(\frac{\pi_i}{1-\pi_i}\right)\right] = \frac{1-\pi_i}{\pi_i}\left[\frac{1}{1-\pi_i} + \frac{\pi_i}{(1-\pi_i)^2}\right] = \frac{1}{\pi_i}\left[\frac{1-\pi_i+\pi_i}{1-\pi_i}\right] = \frac{1}{\pi_i(1-\pi_i)}$$
Thus, for logistic regression, the diagonal elements of the matrix $\mathbf{V}$ are
$$\left(\frac{d\eta_i}{d\mu_i}\right)^2 V(y_i) = \left[\frac{1}{\pi_i(1-\pi_i)}\right]^2\frac{\pi_i(1-\pi_i)}{n_i} = \frac{1}{n_i\pi_i(1-\pi_i)}$$
which is exactly what we obtained previously.
Therefore, IRLS based on the Newton-Raphson method can be described as follows:
1. Use ordinary least squares to obtain an initial estimate of $\boldsymbol{\beta}$, say $\hat{\boldsymbol{\beta}}_0$;
2. Use $\hat{\boldsymbol{\beta}}_0$ to estimate $\mathbf{V}$ and $\boldsymbol{\mu}$;
3. Let $\hat{\boldsymbol{\eta}}_0 = \mathbf{X}\hat{\boldsymbol{\beta}}_0$;
4. Base $\mathbf{z}_1$ on $\hat{\boldsymbol{\eta}}_0$;
5. Obtain a new estimate $\hat{\boldsymbol{\beta}}_1$, and iterate until some suitable convergence criterion is
satisfied.
If β is the final value that the above algorithm produces and if the model assumptions,
including the choice of the link function, are correct, then we can show that
asymptotically
$$E(\hat{\boldsymbol{\beta}}) = \boldsymbol{\beta} \quad\text{and}\quad V(\hat{\boldsymbol{\beta}}) = a(\phi)\left(\mathbf{X}'\mathbf{V}^{-1}\mathbf{X}\right)^{-1}$$
If we don’t use the canonical link, then η i ≠ θ i , and the appropriate derivative of the log-
likelihood is
$$\frac{\partial\ell}{\partial\boldsymbol{\beta}} = \frac{d\ell}{d\theta_i}\frac{d\theta_i}{d\mu_i}\frac{d\mu_i}{d\eta_i}\frac{\partial\eta_i}{\partial\boldsymbol{\beta}}$$
Note that:
1. $\dfrac{d\ell}{d\theta_i} = \dfrac{1}{a(\phi)}\left[y_i - \dfrac{db(\theta_i)}{d\theta_i}\right] = \dfrac{y_i - \mu_i}{a(\phi)}$
2. $\dfrac{d\theta_i}{d\mu_i} = \dfrac{1}{\mathrm{var}(\mu_i)}$, and
3. $\dfrac{\partial\eta_i}{\partial\boldsymbol{\beta}} = \mathbf{x}_i$
Putting this all together yields
$$\frac{\partial\ell}{\partial\boldsymbol{\beta}} = \frac{y_i - \mu_i}{a(\phi)}\,\frac{1}{\mathrm{var}(\mu_i)}\,\frac{d\mu_i}{d\eta_i}\,\mathbf{x}_i$$
Once again, we can use a Taylor series expansion to obtain
dµ i *
yi − µ i ≈ (η i − η i )
dη i
Following an argument similar to that employed before,
\[
V(\eta_i^*) \approx \left[\frac{d\eta_i}{d\mu_i}\right]^2 V(y_i)
\]
and eventually we can show that
\[
\frac{\partial \ell}{\partial \boldsymbol{\beta}} = \sum_{i=1}^{n}\frac{\eta_i^* - \eta_i}{a(\phi)\operatorname{var}(\eta_i)}\,\mathbf{x}_i
\]
Equating this last expression to zero and writing it in matrix form, we obtain
X′V −1 (η * − η) = 0
or, since η = Xβ ,
X′V −1 (η * − Xβ ) = 0
The Newton-Raphson solution is based on
\[
\hat{\boldsymbol{\beta}} = (\mathbf{X}'\mathbf{V}^{-1}\mathbf{X})^{-1}\mathbf{X}'\mathbf{V}^{-1}\mathbf{z}
\]
where
\[
z_i = \hat{\eta}_i + (y_i - \hat{\mu}_i)\,\frac{d\eta_i}{d\mu_i}
\]
Just as in the case of the canonical link, the matrix V is a diagonal matrix formed from
the variances of the estimated linear predictors, apart from a(φ ) .
Some important observations about the GLM:
1. Typically, when experimenters and data analysts use a transformation, they use
ordinary least squares or OLS to actually fit the model in the transformed scale.
2. In a GLM, we recognize that the variance of the response is not constant, and we use
weighted least squares as the basis of parameter estimation.
3. This suggests that a GLM should outperform the standard transformation-based analysis
when the transformation fails to stabilize the variance, that is, when a problem with
nonconstant variance remains in the transformed scale.
4. All of the inference we described previously on logistic regression carries over
directly to the GLM. That is, model deviance can be used to test for overall model fit,
and the difference in deviance between a full and a reduced model can be used to test
hypotheses about subsets of parameters in the model. Wald inference can be applied
to test hypotheses and construct confidence intervals about individual model
parameters.
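As an illustration of the deviance-based model comparison mentioned in item 4, the following sketch uses the Python statsmodels package for binomial or Poisson responses, where the scale parameter is unity. It is offered only as one possible way to carry out the computation (the analyses in this supplement use Minitab and SAS); the arrays y, X_full, and X_reduced are placeholders for the analyst's data.

import statsmodels.api as sm
from scipy.stats import chi2

def deviance_test(y, X_full, X_reduced, family):
    """Compare nested GLMs by the difference in deviance (a likelihood-ratio test)."""
    fit_full = sm.GLM(y, X_full, family=family).fit()
    fit_reduced = sm.GLM(y, X_reduced, family=family).fit()
    # The deviance difference is asymptotically chi-square with degrees of freedom
    # equal to the number of parameters dropped from the full model.
    dev_diff = fit_reduced.deviance - fit_full.deviance
    df_diff = fit_reduced.df_resid - fit_full.df_resid
    return dev_diff, df_diff, chi2.sf(dev_diff, df_diff)

# Example call: deviance_test(y, X_full, X_reduced, family=sm.families.Poisson())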
An approximate $100(1-\alpha)$ percent confidence interval on the mean response at a point of interest
$\mathbf{x}_0$ can be obtained by constructing a confidence interval on the linear predictor and
transforming its endpoints through the inverse link:
\[
L = g^{-1}\!\left(\mathbf{x}_0'\hat{\boldsymbol{\beta}} - Z_{\alpha/2}\sqrt{\mathbf{x}_0'\boldsymbol{\Sigma}\,\mathbf{x}_0}\right)
\quad\text{and}\quad
U = g^{-1}\!\left(\mathbf{x}_0'\hat{\boldsymbol{\beta}} + Z_{\alpha/2}\sqrt{\mathbf{x}_0'\boldsymbol{\Sigma}\,\mathbf{x}_0}\right)
\]
where $\boldsymbol{\Sigma}$ is the estimated covariance matrix of $\hat{\boldsymbol{\beta}}$.
This method is used to compute the confidence intervals on the mean response reported
in SAS PROC GENMOD. It usually works well in practice, because $\hat{\boldsymbol{\beta}}$ is a maximum
likelihood estimate, and therefore any function of $\hat{\boldsymbol{\beta}}$ is also a maximum likelihood
estimate of the corresponding function of $\boldsymbol{\beta}$. The procedure simply constructs a
confidence interval in the space defined by the linear predictor and then transforms that
interval back to the original metric.
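A direct transcription of this interval into Python might look like the following sketch, where beta_hat, Sigma (the estimated covariance matrix of the coefficient estimates), the point x0, and the inverse link g_inv are assumed to be supplied by the analyst.

import numpy as np
from scipy.stats import norm

def mean_response_ci(x0, beta_hat, Sigma, g_inv, alpha=0.05):
    """Approximate CI on the mean response at x0, built on the linear predictor scale."""
    z = norm.ppf(1.0 - alpha / 2.0)
    eta_hat = x0 @ beta_hat                  # estimated linear predictor at x0
    se_eta = np.sqrt(x0 @ Sigma @ x0)        # standard error of the linear predictor
    return g_inv(eta_hat - z * se_eta), g_inv(eta_hat + z * se_eta)

# Example: for a log link (Poisson regression), pass g_inv=np.exp.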
It is also possible to use Wald inference to derive approximate confidence intervals on the
mean response. Refer to Myers and Montgomery (1997) for the details.
14-3.10. Residual Analysis in the GLM
Just as in any model-fitting procedure, analysis of residuals is important in fitting the
GLM. Residuals can provide guidance concerning the overall adequacy of the model,
assist in verifying assumptions, and give an indication concerning the appropriateness of
the selected link function.
The ordinary or raw residuals from the GLM are just the differences between the
observations and the fitted values,
\[
e_i = y_i - \hat{y}_i = y_i - \hat{\mu}_i
\]
It is generally recommended that residual analysis in the GLM be performed using
deviance residuals. The ith deviance residual is defined as the square root of the
contribution of the ith observation to the deviance, multiplied by the sign of the raw
residual, or
\[
r_{D_i} = \sqrt{d_i}\;\operatorname{sign}(y_i - \hat{y}_i)
\]
where di is the contribution of the ith observation to the deviance. For the case of logistic
regression (a GLM with binomial errors and the logit link), we can show that
\[
d_i = y_i \ln\!\left(\frac{y_i}{n_i\hat{\pi}_i}\right) + (n_i - y_i)\ln\!\left[\frac{1-(y_i/n_i)}{1-\hat{\pi}_i}\right],
\quad i = 1, 2, \ldots, n
\]
where
\[
\hat{\pi}_i = \frac{1}{1 + e^{-\mathbf{x}_i'\hat{\boldsymbol{\beta}}}}
\]
Note that as the fit of the model to the data becomes better, we would find that
$\hat{\pi}_i \cong y_i/n_i$, and the deviance residuals shrink toward zero. For Poisson
regression with a log link, we have
\[
d_i = y_i \ln\!\left(\frac{y_i}{e^{\mathbf{x}_i'\hat{\boldsymbol{\beta}}}}\right) - \left(y_i - e^{\mathbf{x}_i'\hat{\boldsymbol{\beta}}}\right),
\quad i = 1, 2, \ldots, n
\]
Once again, notice that as the observed value of the response $y_i$ and the predicted value
$\hat{y}_i = e^{\mathbf{x}_i'\hat{\boldsymbol{\beta}}}$ become closer to each other, the deviance residuals approach zero.
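The two deviance contributions above translate directly into code. The sketch below follows the formulas as written, assumes numpy arrays of observations and fitted quantities, and uses the usual convention that terms of the form y ln(y/...) are zero when y = 0; the function names are illustrative.

import numpy as np
from scipy.special import xlogy   # xlogy(a, b) = a*log(b), with the convention 0*log(0) = 0

def deviance_residuals_logistic(y, n, pi_hat):
    """Deviance residuals for logistic regression: y successes in n trials, fitted pi_hat."""
    d = xlogy(y, y / (n * pi_hat)) + xlogy(n - y, (1.0 - y / n) / (1.0 - pi_hat))
    # np.abs() guards against tiny negative values of d caused by round-off
    return np.sign(y - n * pi_hat) * np.sqrt(np.abs(d))

def deviance_residuals_poisson(y, mu_hat):
    """Deviance residuals for Poisson regression with fitted means mu_hat = exp(x'beta)."""
    d = xlogy(y, y / mu_hat) - (y - mu_hat)
    return np.sign(y - mu_hat) * np.sqrt(np.abs(d))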
Generally, deviance residuals behave much like ordinary residuals do in a standard
normal theory linear regression model. Thus plotting the deviance residuals on a normal
probability scale and versus fitted values are logical diagnostics. When plotting deviance
residuals versus fitted values, it is customary to transform the fitted values to a constant
information scale. Thus,
1. for normal responses, use $\hat{y}_i$,
2. for binomial responses, use $2\sin^{-1}\sqrt{\hat{\pi}_i}$,
3. for Poisson responses, use $2\sqrt{\hat{y}_i}$, and
4. for gamma responses, use $2\ln(\hat{y}_i)$.
Recall that the regression model formulation of an ANOVA model uses indicator
variables. We will define the indicator variables for the design factors material types and
temperature as follows:
Material type X1 X2
1 0 0
2 1 0
3 0 1
Temperature X3 X4
15 0 0
70 1 0
125 0 1
The regression model for the modified two-factor battery life experiment is then
\[
y_{ijk} = \beta_0 + \beta_1 x_{ijk1} + \beta_2 x_{ijk2} + \beta_3 x_{ijk3} + \beta_4 x_{ijk4}
+ \beta_5 x_{ijk1}x_{ijk3} + \beta_6 x_{ijk1}x_{ijk4} + \beta_7 x_{ijk2}x_{ijk3} + \beta_8 x_{ijk2}x_{ijk4} + \varepsilon_{ijk} \qquad (6)
\]
where $i, j = 1, 2, 3$ and the number of replicates is $k = 1, 2, \ldots, n_{ij}$, where $n_{ij}$ is the number of
replicates in the $ij$th cell. Notice that in our modified version of the battery life data, we
have $n_{11} = n_{12} = n_{13} = n_{23} = n_{32} = 3$, and all other $n_{ij} = 4$.
In this regression model, the terms β 1 xijk 1 + β 2 xijk 2 represent the main effect of factor A
(material type), and the terms β 3 xijk 3 + β 4 xijk 4 represent the main effect of temperature.
Each of these two groups of terms contains two regression coefficients, giving two
degrees of freedom. The terms β 5 xijk 1 xijk 3 + β 6 xijk 1 xijk 4 + β 7 xijk 2 xijk 3 + β 8 xijk 2 xijk 4 represent
the AB interaction with four degrees of freedom. Notice that there are four regression
coefficients in this term.
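The indicator-variable coding just described is easy to construct in software. The following Python sketch (numpy assumed) builds the nine-column model matrix, intercept plus X1 through X8, from vectors of material types and temperatures; the function name design_matrix is illustrative.

import numpy as np

def design_matrix(material, temperature):
    """Model matrix [1, X1, ..., X8] for the coding above.
    material: values 1, 2, 3; temperature: values 15, 70, 125 (one entry per trial)."""
    material = np.asarray(material)
    temperature = np.asarray(temperature)
    x1 = (material == 2).astype(float)       # X1 = 1 for material type 2
    x2 = (material == 3).astype(float)       # X2 = 1 for material type 3
    x3 = (temperature == 70).astype(float)   # X3 = 1 for 70 degrees
    x4 = (temperature == 125).astype(float)  # X4 = 1 for 125 degrees
    ones = np.ones_like(x1)
    # X5-X8 are products of the main-effect indicators (the interaction columns)
    return np.column_stack([ones, x1, x2, x3, x4, x1 * x3, x1 * x4, x2 * x3, x2 * x4])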
Table 3 presents the data from this modified experiment in regression model form. In
Table 3, we have shown the indicator variables for each of the 31 trials of this
experiment.
Table 3. Modified Data from Example 5-1 in Regression Model Form
Y X1 X2 X3 X4 X5 X6 X7 X8
130 0 0 0 0 0 0 0 0
150 1 0 0 0 0 0 0 0
136 1 0 1 0 1 0 0 0
25 1 0 0 1 0 1 0 0
138 0 1 0 0 0 0 0 0
96 0 1 0 1 0 0 0 1
155 0 0 0 0 0 0 0 0
40 0 0 1 0 0 0 0 0
70 0 0 0 1 0 0 0 0
188 1 0 0 0 0 0 0 0
122 1 0 1 0 1 0 0 0
70 1 0 0 1 0 1 0 0
110 0 1 0 0 0 0 0 0
120 0 1 1 0 0 0 1 0
104 0 1 0 1 0 0 0 1
80 0 0 1 0 0 0 0 0
82 0 0 0 1 0 0 0 0
159 1 0 0 0 0 0 0 0
106 1 0 1 0 1 0 0 0
58 0 0 0 1 0 0 0 0
168 0 1 0 0 0 0 0 0
150 0 1 1 0 0 0 1 0
82 0 1 0 1 0 0 0 1
180 0 0 0 0 0 0 0 0
75 0 0 1 0 0 0 0 0
126 1 0 0 0 0 0 0 0
115 1 0 1 0 1 0 0 0
45 1 0 0 1 0 1 0 0
160 0 1 0 0 0 0 0 0
139 0 1 1 0 0 0 1 0
60 0 1 0 1 0 0 0 1
We will use this data to fit the regression model in Equation (6). We will find it
convenient to refer to this model as the full model. The Minitab output is:
Regression Analysis
The regression equation is
Y = 155 + 0.7 X1 - 11.0 X2 - 90.0 X3 - 85.0 X4 + 54.0 X5 - 24.1 X6
+ 82.3 X7 + 26.5 X8
Analysis of Variance
Source DF SS MS F P
Regression 8 46814.0 5851.8 13.48 0.000
Residual Error 22 9553.8 434.3
Total 30 56367.9
To test the no-interaction hypotheses [Equation (7)], $H_0\!: \beta_5 = \beta_6 = \beta_7 = \beta_8 = 0$, we fit the
reduced model [Equation (8)] containing only the main-effect terms,
$y_{ijk} = \beta_0 + \beta_1 x_{ijk1} + \beta_2 x_{ijk2} + \beta_3 x_{ijk3} + \beta_4 x_{ijk4} + \varepsilon_{ijk}$.
The Minitab analysis of variance for this reduced model is:
Analysis of Variance
Source DF SS MS F P
Regression 4 38212.5 9553.1 13.68 0.000
Residual Error 26 18155.3 698.3
Total 30 56367.9
Now the regression or model sum of squares for the full model, which includes the
interaction terms, is SS Model ( FM ) = 46,814.0 and for the reduced model [Equation (8)] it
is SS Model ( RM ) = 38, 212.5 . Therefore, the increase in the model sum of squares due to
the interaction terms (or the extra sum of squares due to interaction) is
\[
SS_{\text{Model}}(\text{Interaction}\mid\text{main effects}) = SS_{\text{Model}}(FM) - SS_{\text{Model}}(RM)
= 46{,}814.0 - 38{,}212.5 = 8601.5
\]
Since there are 4 degrees of freedom for interaction, the appropriate test statistic for the
no-interaction hypotheses in Equation (7) is
\[
F_0 = \frac{SS_{\text{Model}}(\text{Interaction}\mid\text{main effects})/4}{MS_E(FM)}
= \frac{8601.5/4}{434.3} = 4.95
\]
The P-value for this statistic is approximately 0.0045, so there is evidence of interaction.
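This extra-sum-of-squares (partial) F-test is simple to script once the full and reduced models have been fit. A sketch using scipy for the reference distribution follows; the values in the comment reproduce the interaction test above.

from scipy.stats import f as f_dist

def partial_f_test(ss_model_full, ss_model_reduced, df_extra, ms_error_full, df_error_full):
    """Extra-sum-of-squares F-test for the terms dropped from the full model."""
    extra_ss = ss_model_full - ss_model_reduced
    f0 = (extra_ss / df_extra) / ms_error_full
    p_value = f_dist.sf(f0, df_extra, df_error_full)
    return f0, p_value

# Reproducing the interaction test above:
# partial_f_test(46814.0, 38212.5, 4, 434.3, 22) gives F0 = 4.95 and P = 0.0045 (approximately)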
Now suppose that we wish to test for a material type effect. In terms of the regression
model in Equation (6), the hypotheses are
\[
H_0\!: \beta_1 = \beta_2 = 0 \qquad H_1\!: \beta_1 \text{ and/or } \beta_2 \neq 0 \qquad (9)
\]
and the reduced model is
\[
y_{ijk} = \beta_0 + \beta_3 x_{ijk3} + \beta_4 x_{ijk4}
+ \beta_5 x_{ijk1}x_{ijk3} + \beta_6 x_{ijk1}x_{ijk4} + \beta_7 x_{ijk2}x_{ijk3} + \beta_8 x_{ijk2}x_{ijk4} + \varepsilon_{ijk} \qquad (10)
\]
The Minitab output for this reduced model is:
Regression Analysis
The regression equation is
Y = 151 - 86.3 X3 - 81.3 X4 + 54.8 X5 - 23.3 X6 + 71.3 X7 + 15.5 X8
Analysis of Variance
Source DF SS MS F P
Regression 6 46480.6 7746.8 18.80 0.000
Residual Error 24 9887.3 412.0
Total 30 56367.9
Therefore, the sum of squares for testing the material types main effect is
\[
SS_{\text{Model}}(\text{Material types}) = SS_{\text{Model}}(FM) - SS_{\text{Model}}(RM)
= 46{,}814.0 - 46{,}480.6 = 333.4
\]
The F-statistic is
\[
F_0 = \frac{SS_{\text{Model}}(\text{Material types})/2}{MS_E(FM)} = \frac{333.4/2}{434.3} = 0.38
\]
which is not significant. The hypotheses for the main effect of temperature are
\[
H_0\!: \beta_3 = \beta_4 = 0 \qquad H_1\!: \beta_3 \text{ and/or } \beta_4 \neq 0 \qquad (11)
\]
and the reduced model is
\[
y_{ijk} = \beta_0 + \beta_1 x_{ijk1} + \beta_2 x_{ijk2}
+ \beta_5 x_{ijk1}x_{ijk3} + \beta_6 x_{ijk1}x_{ijk4} + \beta_7 x_{ijk2}x_{ijk3} + \beta_8 x_{ijk2}x_{ijk4} + \varepsilon_{ijk} \qquad (12)
\]
The Minitab output for this reduced model is:
Regression Analysis
The regression equation is
Y = 96.7 + 59.1 X1 + 47.3 X2 - 36.0 X5 - 109 X6 - 7.7 X7 - 58.5 X8
Analysis of Variance
Source DF SS MS F P
Regression 6 31464 5244 5.05 0.002
Residual Error 24 24904 1038
Total 30 56368
Therefore, the sum of squares for testing the temperature main effect is
\[
SS_{\text{Model}}(\text{Temperature}) = SS_{\text{Model}}(FM) - SS_{\text{Model}}(RM)
= 46{,}814.0 - 31{,}464.0 = 15{,}350.0
\]
The F-statistic is
\[
F_0 = \frac{SS_{\text{Model}}(\text{Temperature})/2}{MS_E(FM)} = \frac{15{,}350.0/2}{434.3} = 17.67
\]
The P-value for this statistic is less than 0.0001. Therefore, we would conclude that
temperature has a significant effect on battery life. Since both the temperature main effect
and the material type-temperature interaction are significant, we would likely reach the
same conclusions for these data that we did from the original balanced-data factorial in the
textbook.
14-4.2 The Type 3 Analysis
Another approach to the analysis of an unbalanced factorial is to directly employ the
Type 3 analysis procedure discussed previously. Many computer software packages will
directly perform the Type 3 analysis, calculating Type 3 sums of squares or “adjusted”
sums of squares for each model effect. The Minitab General Linear Model procedure
will directly perform the Type 3 analysis. Remember that this procedure is only
appropriate when there are no empty cells (i.e., nij > 0, for all i , j ).
Output from the Minitab General Linear Model routine for the unbalanced version of
Example 5-1 in Table 3 follows:
General Linear Model
Factor Type Levels Values
Mat fixed 3 1 2 3
Temp fixed 3 15 70 125
The “Adjusted” sums of squares, shown in boldface type in the above computer output,
are the Type 3 sums of squares. The F-tests are performed using the Type 3 sums of
squares in the numerator. The hypotheses that are tested by a Type 3 sum of squares are
essentially equivalent to the hypotheses that would be tested for that effect if
the data were balanced. Notice that the error or residual sum of squares and the
interaction sum of squares in the Type 3 analysis are identical to the corresponding sums
of squares generated in the regression-model formulation discussed above.
When the experiment is unbalanced, but there is at least one observation in each cell, the
Type 3 analysis is generally considered to be the correct or “standard” analysis. A good
reference is Freund, Littell and Spector (1988). Various SAS/STAT users’ guides and
manuals are also helpful.
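The text uses Minitab and SAS for the Type 3 analysis. For readers working in Python, the statsmodels package provides one possible route; the sketch below assumes the unbalanced battery life data are in a pandas DataFrame with columns Life, Material, and Temp (names are illustrative). Sum-to-zero contrasts are requested so that the adjusted sums of squares correspond to the usual balanced-data hypotheses.

import statsmodels.api as sm
import statsmodels.formula.api as smf

def type3_anova(df):
    """df: pandas DataFrame with columns 'Life', 'Material', 'Temp' (one row per trial)."""
    # Sum-to-zero contrasts so the adjusted SS test the balanced-data hypotheses
    model = smf.ols("Life ~ C(Material, Sum) * C(Temp, Sum)", data=df).fit()
    # typ=3 requests Type 3 ("adjusted") sums of squares for each model term
    return sm.stats.anova_lm(model, typ=3)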
14-4.3 Type 1, Type 2, Type 3 and Type 4 Sums of Squares
At this point, a short digression on the various types of sums of squares reported by some
software packages and their uses is warranted. Many software systems report Type 1 and
Type 3 sums of squares; the SAS software system reports four types, called (originally
enough!!) Types 1, 2, 3 and 4. For an excellent detailed discussion of this topic, see the
technical report by Driscoll and Borror (1999).
As noted previously, Type 1 sums of squares refer to a sequential or “effects-added-in-
order” decomposition of the overall regression or model sum of squares. In sequencing
the factors, interactions should be entered only after all of the corresponding main effects,
and nested factors should be entered in the order of their nesting.
Type 2 sums of squares reflect the contribution of a particular effect to the model after all
other effects have been added, except those that contain the particular effect in question.
For example, an interaction contains the corresponding main effects. For unbalanced
data, the hypotheses tested by Type 2 sums of squares contain, in addition to the
parameters of interest, the cell counts (i.e., the nij). These are not the same hypotheses
that would be tested by the Type 2 sums of squares if the data were balanced, and so most
analysts have concluded that other definitions or types of sums of squares are necessary.
In a regression model (i.e., one that is not overspecified, as in the case of an ANOVA
model), Type 2 sums of squares are perfectly satisfactory, so many regression programs
(such as SAS PROC REG) report Type 1 and Type 2 sums of squares.
Type 3 and Type 4 sums of squares are often called partial sums of squares. For balanced
experimental design data, Types 1, 2, 3, and 4 sums of squares are identical. However, in
unbalanced data, differences can occur, and it is to this topic that we now turn.
To make the discussion specific, we consider the two-factor fixed-effects factorial model.
For proportional data, we will find that for the main effects the relationship between the
various types of sums of squares is Type 1 = Type 2 and Type 3 = Type 4, while for the
interaction it is Type 1 = Type 2 = Type 3 = Type 4. Thus the choice is between Types 1
and 4. If the cell sample sizes are representative of the population from which the
treatments were selected, then an analysis based on the Type 1 sums of squares is
appropriate. This, in effect, gives the factor levels importance proportional to
the sample sizes. If this is not the case, then the Type 3 analysis is appropriate.
With unbalanced data having at least one observation in each cell, we find that for the
main effects Types 1 and 2 will generally not be the same for factor A, but Type 1 =
Type 2 for factor B. This is a consequence of the order of specification in the model. For
both main effects, Type 3 = Type 4. For the interaction, Type 1 = Type 2 = Type 3 =
Type 4. Generally, we prefer the Type 3 sums of squares for hypothesis testing in these
cases.
If there are empty cells, then none of the four types will be equal for factor A, while Type
1 = Type 2 for factor B. For the interaction, Type 1 = Type 2 = Type 3 = Type 4. In
general, the Type 4 sums of squares should be used for hypothesis testing in this case, but
it is not always obvious exactly what hypothesis is being tested. When cells are empty,
certain model parameters will not exist and this will have a significant impact on which
functions of the model parameters are estimable. Recall that only estimable functions can
be used to form null hypotheses. Thus, when we have missing cells the exact nature of
the hypotheses being tested is actually a function of which cells are missing. SAS PROC
GLM can display the estimable functions, and from them the specific form of the null
hypothesis on the fixed effects can be determined for any of the four types of sums of
squares. The procedure is described in Driscoll and Borror (1999).
\[
y_{ijk} = \mu + \tau_i + \beta_j + (\tau\beta)_{ij} + \varepsilon_{ijk}
\qquad
\begin{cases}
i = 1, 2, \ldots, a\\
j = 1, 2, \ldots, b\\
k = 1, 2, \ldots, n_{ij}
\end{cases}
\]
First examine the F-statistic in the analysis of variance. Since F = 14.10 and the P-value
is small, we would conclude that there are significant differences in the treatment means.
We also used Fisher’s LSD procedure in Minitab to test for differences in the individual
treatment means. There are significant differences between seven pairs of means:
µ 11 ≠ µ 12 , µ 11 ≠ µ 13 , µ 11 ≠ µ 22 , µ 11 ≠ µ 23
µ 21 ≠ µ 22 , µ 21 ≠ µ 23 , and µ 22 ≠ µ 23
Furthermore, the confidence intervals in the Minitab output indicate that the longest lives
are associated with material types 1,2 and 3 at low temperature and material types 2 and 3
at the middle temperature level.
Generally, the next step is to form and test comparisons of interest (contrasts) in the cell
means. For example, suppose that we are interested in testing for interaction in the data.
If we had data in all 9 cells there would be 4 degrees of freedom for interaction.
However, since one cell is missing, there are only 3 degrees of freedom for interaction.
Practically speaking, this means that there are only three linearly independent contrasts
that can tell us something about interaction in the battery life data. One way to write
these contrasts is as follows:
C1 = µ 11 − µ 13 − µ 21 + µ 23
C2 = µ 21 − µ 22 − µ 31 + µ 32
C3 = µ 11 − µ 12 − µ 31 + µ 32
Therefore, some information about interaction is found from testing
H0 : C1 = 0, H0 : C2 = 0, and H0 : C3 = 0
Actually there is a way to simultaneously test that all three contrasts are equal to zero, but
it requires knowledge of linear models beyond the scope of this text, so we are going to
perform t-tests. That is, we are going to test
H0 : µ 11 − µ 13 − µ 21 + µ 23 = 0
H0 : µ 21 − µ 22 − µ 31 + µ 32 = 0
H0 : µ 11 − µ 12 − µ 31 + µ 32 = 0
Consider the first null hypothesis. We estimate the contrast by replacing the cell means
by the corresponding cell averages. This results in
\[
\hat{C}_1 = \bar{y}_{11.} - \bar{y}_{13.} - \bar{y}_{21.} + \bar{y}_{23.}
= 155.00 - 70.00 - 155.75 + 46.67 = -24.08
\]
The variance of this contrast is
\[
V(\hat{C}_1) = \sigma^2\left(\frac{1}{n_{11}} + \frac{1}{n_{13}} + \frac{1}{n_{21}} + \frac{1}{n_{23}}\right)
= \sigma^2\left(\frac{1}{3} + \frac{1}{3} + \frac{1}{4} + \frac{1}{3}\right)
= \sigma^2\left(\frac{5}{4}\right)
\]
From the Minitab ANOVA, we have MSE = 444 as the estimate of σ 2 , so the t-statistic
associated with the first contrast C1 is
\[
t_0 = \frac{\hat{C}_1}{\sqrt{\hat{\sigma}^2(5/4)}} = \frac{-24.08}{\sqrt{(444)(5/4)}} = -1.02
\]
which is not significant. It is easy to show that the t-statistics for the other two contrasts
are, for $C_2$,
\[
t_0 = \frac{\hat{C}_2}{\sqrt{\hat{\sigma}^2(13/12)}} = \frac{28.33}{\sqrt{(444)(13/12)}} = 1.29
\]
and for $C_3$,
\[
t_0 = \frac{\hat{C}_3}{\sqrt{\hat{\sigma}^2(5/4)}} = \frac{82.33}{\sqrt{(444)(5/4)}} = 3.49
\]
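A short sketch of these contrast t-statistics in Python follows; the cell averages, cell sample sizes, and MSE are the values quoted above, and the helper function name is illustrative.

import numpy as np

def contrast_t(cell_means, cell_sizes, coeffs, mse):
    """t-statistic for a contrast in cell means: sum(c_i*ybar_i) / sqrt(MSE*sum(c_i^2/n_i))."""
    cell_means = np.asarray(cell_means, dtype=float)
    cell_sizes = np.asarray(cell_sizes, dtype=float)
    coeffs = np.asarray(coeffs, dtype=float)
    c_hat = np.dot(coeffs, cell_means)
    var_c = mse * np.sum(coeffs ** 2 / cell_sizes)
    return c_hat / np.sqrt(var_c)

# C1 = mu11 - mu13 - mu21 + mu23, using the cell averages, cell sizes, and MSE = 444 above:
t1 = contrast_t([155.00, 70.00, 155.75, 46.67], [3, 3, 4, 3], [1, -1, -1, 1], 444)
# t1 is approximately -1.02, matching the hand calculation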
Only the t-statistic for $C_3$ is significant (P = 0.0012). However, we would conclude that
there is some indication of interaction between material types and temperature.
Notice that our conclusions are similar to those for the balanced data in Chapter 5. There
is little difference between the materials at low temperature, but at the middle level of
temperature only material types 2 and 3 have the same performance; material type 1 has
significantly lower life. There is also some indication of interaction, implying that the
materials do not all perform similarly at different temperatures. In the original experiment
we had information about the effect of all three materials at high temperature, but here we
do not. All we can say is that there is no difference between material types 1 and 2 at high
temperature, and that both materials provide significantly shorter life at the high
temperature than they do at the middle and low levels of temperature.
If a complex metamodel is to be fit, then the design must usually have a fairly large
number of points, and the designs dominated by boundary points that we typically use
with low-order polynomial models are not going to be satisfactory. Space-filling designs
are often suggested as appropriate designs for deterministic computer models. A Latin
hypercube design is an example of a space-filling design. In a Latin hypercube design,
the range of each factor is divided into n equal-probability subdivisions. A design is then
constructed by matching these divisions across factors: randomly order or shuffle the n
division labels of each factor, and use the ith element of each shuffled list to form the ith
design point. This ensures that each factor is sampled over its entire range. An example
for two variables and n = 16 is shown below.
(Figure: scatter plot of the 16 Latin hypercube design points, with factor A on the horizontal axis and factor B on the vertical axis.)
The design points for this Latin hypercube are shown in Table 5. For more
information on computer experiments and Latin hypercube designs, see Donohue (1994),
McKay, Beckman and Conover (1979), Welch and Yu (1990), Morris (1991), Sacks,
Welch, Mitchell and Wynn (1989), Stein (1987), Owen (1994) and Pebesma and
Heuvelink (1999).
Table 5. Design Points for the Latin Hypercube Design
A    B
13   1
16   5
6    2
12   10
14   13
5    15
4    11
7    3
1    4
10   7
15   6
2    12
3    14
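The shuffling construction described above takes only a few lines of code. The sketch below (numpy assumed) returns an n-run Latin hypercube in k factors, with levels coded 1 through n as in Table 5; the function name and seed are illustrative.

import numpy as np

def latin_hypercube(n, k, seed=None):
    """Return an n-run Latin hypercube design in k factors, with levels coded 1..n."""
    rng = np.random.default_rng(seed)
    # Independently shuffle the n division labels for each factor and stack the shuffled
    # columns side by side; each factor then uses every one of its n levels exactly once.
    return np.column_stack([rng.permutation(n) + 1 for _ in range(k)])

# A 16-run design in two factors, like the example above:
design = latin_hypercube(16, 2, seed=1)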
Supplemental References
Barton, R. R. (1992). “Metamodels for Simulation Input-Output Relations”, Proceedings
of the Winter Simulation Conference, pp. 289-299.
Donohue, J. M. (1994). “Experimental Designs for Simulation”, Proceedings of the
Winter Simulation Conference, pp. 200-206.
Driscoll, M. F. and Borror, C. M. (1999). Sums of Squares and Expected Mean Squares
in SAS, Technical Report, Department of Industrial Engineering, Arizona State
University, Tempe AZ.
Freund, R. J., Littell, R. C., and Spector, P. C. (1988). The SAS System for Linear Models,
SAS Institute, Inc., Cary, NC.
McKay, M. D., Beckman, R. J. and Conover, W. J. (1979). “A Comparison of Three
Methods for Selecting Values of Input Variables in the Analysis of Output from a
Computer Code”, Technometrics, Vol. 21, pp. 239-245.
Morris, M. D. (1991). “Factorial Sampling Plans for Preliminary Computer
Experiments”, Technometrics, Vol. 33, pp. 161-174.
Owen, A. B. (1994). “Controlling Correlations in Latin Hypercube Sampling”, Journal of
the American Statistical Association, Vol. 89, pp. 1517-1522.
Pebesma, E. J. and Heuvelink, G. B. M. (1999). “Latin Hypercube Sampling of Gaussian
Random Fields”, Technometrics, Vol. 41, pp. 303-312.
Sacks, J., Welch, W. J., Mitchell, T. J. and Wynn, H. P. (1989). “Design and Analysis of
Computer Experiments”, Statistical Science, Vol. 4, pp. 409-435.
Stein, M. L. (1987). “Large Sample Properties of Simulations Using Latin Hypercube
Sampling”, Technometrics, Vol. 29, pp. 143-151.
Welch, W. J. and Yu, T. K. (1990). “Computer Experiments for Quality Control by
Parameter Design”, Journal of Quality Technology, Vol. 22, pp. 15-22.