Chapter 7

VARIABILITY AND QUALITY

Analysing and Minimizing Variation

I. The effect of variability on design


II. Analysis of residuals
The sources of residual "error"
Graphical analysis of residuals
III. Treatment of data with non-constant variance
Weighted multilinear regression
Variance depending on the experimental response
Example of a Box-Cox transformation
Non-power transformations
IV. Variability and quality
Introduction
Experimental strategies
Choice of response
Example: granulation in a high-speed mixer
Scaling-up and other applications

I. THE EFFECT OF VARIABILITY ON DESIGN

Associated with each measured value of an experimental response is an
experimental error, which is the difference between this measured value and the
unknown "true value". Variation in the results of replicated experiments carried out
under "identical conditions" may be ascribed to fluctuations in the experimental
conditions. These fluctuations are either in controlled factors which are not perfectly
fixed, or in other factors left partially or totally uncontrolled. Added to these are
the errors from imprecision of the measurement method. The overall result is a
dispersion of values about a mean, this variation being the experimental
repeatability.
Up to now we have assumed that this dispersion, characterised by the
standard deviation σ, is more or less constant over the domain; in any case,
sufficiently uniform for changes in σ to be neglected. We will now question this
assumption, at the same time exploring and extending the concept of experimental
variation.
Part of the basis of the scientific method is that any experiment may be
reproduced by another worker and/or another laboratory, provided the original
protocol is rigidly adhered to. We have seen already that this is an ideal that the
experimenter seeks to approach. However, a single worker - in a fixed spot, using
the same apparatus or machine, the same batch of material, the same reagents - is
faced with variability. Once we add the effects of geography, changed materials and
equipment, and different operators, this variation is likely to be still greater. This
global variation in the value of the response is known as the reproducibility.
Variation has many causes. There are certain factors which cannot be
controlled, or which one chooses not to control, and these can have non-negligible
effects on the response or on its measurement. Variation in these factors
(temperature, humidity, wear of the equipment, the operator's learning curve, etc.)
is not always random. It is because all of these increase variability that we have
stressed the necessity of randomizing experiments whenever possible.
In pure and applied research and also in industrial development, an
experiment is usually carried out in order to verify a theory, to quantify a
phenomenon or to demonstrate that certain factors influence certain responses. The
experimenter is often keen to understand and explain, as well as to describe the
phenomenon (especially in the case of factor-influence studies described in chapter
3). We therefore try to carry out the experiment and also analyse the data so that
the postulated model is determined with the greatest possible precision and the
estimates affected as little as possible by the dispersion in the responses. In the
following two sections of this chapter we will describe the techniques most commonly
used for achieving this, where there is heterogeneity of the experimental variance.
The objectives of industrial production experiments are often rather different.
The responses are usually characteristics of the final product. According to quality
control principles, the product is considered acceptable provided these
characteristics are within the established specifications or norms. These
specifications consist of an upper and a lower limit, or alternatively they comprise
a nominal value with a tolerated range on either side. This notion of a tolerance
limit, where a product just inside it is accepted and one just outside it is rejected,
is too abrupt. Taguchi (1) was one of the first to introduce the idea of a continuous
loss function where any difference between the measured response and its target
value leads in some way to a decrease in the product's value - to a loss in quality.
The greater the difference between a product's measured characteristics and the
target properties, the higher is the cost to the company producing the product - in
increased cost of after-sales service, returned products, replacements, compensation,
and possible desertion by customers.
Taguchi also showed that variability in a product is itself a measure of
quality. A product of totally consistent quality may in fact be preferred to one
whose average characteristics are better, but more dispersed. So, in an experiment
the aim is no longer to eliminate the consequences of the variability from the
analysis of the effects and of the mathematical model we are studying by means of
weighted regression or transformation of responses. It is rather the variability of the
performance of the product that must be studied and minimized.
Another consideration peculiar to industrial production is that not only must
the variability of the performance characteristic be minimal at the time of
manufacture, but it must stay low throughout the product's lifetime. The conditions
under which the product is kept and used are likely to be very variable. The
conditions of use are uncontrolled factors which must have as little effect as
possible on the product's quality, that is, on the variability of its characteristics with
time.

II. ANALYSIS OF RESIDUALS

A. The Sources of Residual "Error"

Predictive models, determined by multilinear regression, are tested by ANOVA. We
saw in chapter 5 that they might also be tested by analysis of the residuals,
the differences between the measured response and that calculated by the model,
yᵢ - ŷᵢ. This analysis is usually graphical, by normal or half-normal plots of the
residuals, and by plotting the residuals against the value of each factor, against time,
and against the response. Analysis of the residuals is only useful if there is an
adequate number of degrees of freedom (at least 5). It should be used for example
in analysing RSM designs such as central composite or Doehlert designs.
The residuals may be distributed in various different ways. First of all they
may be scattered more or less symmetrically about zero. This dispersion can be
described by a standard deviation of random experimental error. If this is
(approximately) constant over the experimental region the system is homoscedastic,
as has been assumed up to now. However the analysis of residuals may show that
the standard deviation varies within the domain, and the system is heteroscedastic.
On the other hand it may reveal systematic errors where the residuals are not
distributed symmetrically about zero, but show trends which indicate model
inadequacy.
In the following section we describe some of these methods and how they
may show the different effects of dispersion and systematic error. Then in the
remaining two sections of the chapter we will discuss methods for treating
heteroscedastic systems. In the first place, we will show how their non-constant
standard deviation may be taken into account in estimating models for the kind of
treatment we have already described. Then we will describe the detailed study of
dispersion within a domain, often employed to reduce variation of a product or
process.

B. Graphical Analysis of Residuals

The use of all of these methods is recommended. Each may possibly reveal
systematic errors due to inadequacy of the model or may show that the system is
heteroscedastic. In the latter case a transformation of the response may sometimes
be appropriate, as described in section III, or the influence of the factors on the
variability of the system may be investigated, as in section IV.

1. Normal and half-normal plots

The residuals, if they are a measurement of the error, would be expected to be
normally distributed. Therefore they may be expressed as a cumulative frequency
plot, just as was done for the estimated coefficients in chapter 3. If the plot is a
straight line, this supports the adequacy of the model. If there are important
deviations this may indicate an inappropriate model, the need for a transformation,
or errors in the data. The deviating points should be looked at individually.
Just as for the normal plot of the coefficients, the residuals must be
studentized - that is, divided by the standard deviation of prediction at that point.
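
A minimal sketch of this computation, assuming NumPy and the usual internally studentized form (each residual divided by s√(1 - hᵢᵢ), where hᵢᵢ comes from the hat matrix); the function name and arguments are illustrative, not taken from the text:

```python
import numpy as np

def studentized_residuals(X, y):
    """Internally studentized residuals for a least-squares fit.

    X : (N, p) model (design) matrix; y : (N,) measured responses.
    """
    H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat (projection) matrix
    e = y - H @ y                          # raw residuals y_i - y_hat_i
    dof = X.shape[0] - X.shape[1]          # residual degrees of freedom
    s2 = (e @ e) / dof                     # residual variance estimate
    # divide each residual by its estimated standard deviation
    return e / np.sqrt(s2 * (1.0 - np.diag(H)))
```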

2. Dependence of the residual (error) on the factors

It is useful to plot the residuals, or the studentized residuals, against the values of
each of the coded variables Xᵢ in turn. These should be evenly distributed, with no
obvious dependence on the factor. Figure 7.1 gives the example of the response of
cloud point, in the case of the formulation of an oral solution (3) already discussed
in chapters 3, 5, and 6. The studentized residuals are plotted in turn against the
polysorbate 80 concentration (X₁), the propylene glycol concentration (X₂), and the
invert sucrose medium (X₃).
The design (central composite) was for a second-order model, so this model
was used for the calculation. However, analysis of variance showed that the second-
order coefficients were not statistically significant, and there was (possibly) some
lack of fit (significance 0.067). Graphs (a) and (c) show no trends; the points are
scattered evenly. However, graph (b), of the cloud point residuals against the
propylene glycol concentration, shows a clearly cubic trend. The fit of the model
may be improved greatly if an x₂³ term is added to the first- or second-order model.
In view of the lack of significance of the quadratic model, and the high variability
of the replicated experiments, it is unlikely that other cubic terms would be needed
(see chapter 5, section VII).
The new model could be fitted without doing further experiments, but we
would advise against this. Further experiments might be selected along the X₂ axis,
for example at points x₂ = -1.68, -0.56, +0.56, +1.68, in order to confirm the model.
There may be cases where there are no systematic differences between
predicted and experimental values, but where the scatter of the residuals depends
on the value of the factor. This could be because the standard deviation is
dependent on the response (see section 4). Otherwise, if this scatter of residuals
proves not to be response-dependent, but a function only of the level of the factor,
then the methods of section IV could prove useful for reducing the variability of
the formulation or process being studied.
[Figure: studentized residuals of the cloud point plotted against each coded variable; panels (a), (b), and (c).]

Figure 7.1 Dependence of the residual of the cloud point on: (a) polysorbate 80,
(b) propylene glycol, and (c) sucrose invert medium concentrations.

3. Dependence of the residuals on combinations of 2 or more factors

Systematic dependence of the residuals may be observed where there are
randomization restrictions: for example, where the batch resulting from the first
stage in the process is split into sub-batches for the remaining stages. The
experiments are not carried out independently, and the errors of experiments
corresponding to each sub-batch in a given batch are correlated.
We take the example of a granulation experiment (see chapter 6, section
IV.B, and chapter 8, section III.A). Let the granulation variables (time, amount of
liquid, agitation, etc.) be represented as X₁ⱼ. Each batch of granulate is split into a
number of sub-batches for the following stages (drying, sieving, lubrication,
tableting, etc.), the variables for which are represented by X₂ⱼ. The random error in
the granulation stage is represented by ε₁, with an expectation of zero and standard
deviation σ₁, and the random error of the remaining stages is ε₂, also with
expectation zero and with standard deviation σ₂. If a first-order model for the
responses is postulated, it is:

    y = β₀ + Σⱼ β₁ⱼX₁ⱼ + Σⱼ β₂ⱼX₂ⱼ + ε₁ + ε₂

The coefficients are estimated by multi-linear regression. Analysis of residuals only
gives useful information if there are sufficient degrees of freedom. If the random
error of the granulation step (ε₁) is to influence the residuals, there must be more
granulation batches prepared than there are granulation terms in the model. In such
a case, if the residuals are grouped according to the granulation batch from which
the sub-batch is taken and they appear to be correlated (and if ANOVA did not
show lack of fit), we might conclude that the random variation of the first stage is
comparable to, or greater than, the random error of the second stage. This is why,
although the best values of the coefficients are those determined by multi-linear
regression, it is not possible to use analysis of variance as described in chapter 4
to estimate the significance of the model.
The problem is a common one in industrial experiments (4). The design is
known as a split-plot (the name has an agricultural origin). Failure to recognise this
situation is likely to lead to coefficient estimates which appear to have a higher
statistical significance than is really the case. In the above example, if there are no
degrees of freedom for the granulation stage, the residuals will be estimates of the
random errors only of the second stage in the process and the true error will be
underestimated. If, on the other hand, there are sufficient degrees of freedom for
the first step, it will be possible, by an appropriate analysis of variance, to estimate
the standard deviations of the errors of both stages and to test for the significance
of the model and the individual coefficients.
It may happen that dispersion of the residuals without systematic effects is
observed for certain combinations of factor levels, even where the experiments in
the design have been carried out independently. If the scatter is not dependent on
the response value, then the results may usefully be analysed in order to reduce
variability, as explained in section IV.

4. Dependence of the residuals on the time

When the residuals are plotted in the order in which the experiments were carried
out, they should be scattered evenly about zero. However, there may be a slope
demonstrating a trend in an uncontrolled factor that is affecting the results of the
experiment. Provided the plan has been randomized, this will affect the estimations
of error and significance, but will not greatly affect predictions. [See chapter 3,
section IV (time-trend)].
Secondly, there may be a trend to increased or decreased scatter, indicating
a change of variability with time. Decreasing residuals in a long series of runs may
point to the operator's increasing expertise. Increasing residuals are more
worrisome!

5. Dependence of the error on the response value

The experimenter should test for this possibility, especially where there is wide
variation of a response over the experimental domain. If the response range is
relatively narrow, changing by less than a factor of 3 or 4, it is unlikely that the
dependence of the variance on the response will have much effect on the
significance of the factor study or the predicted response surfaces. If the response
varies by more than an order of magnitude, it is often found that transformation
improves the analysis.
This situation, where the variance of a response function depends on the
value of the response, should be distinguished from the phenomenon where the
variance changes over the experimental region as a function of the underlying
process or formulation variables (see II.3 above). For this, see section IV of this
chapter.

III. TREATMENT OF DATA WITH NON-CONSTANT VARIANCE

A. Weighted Multilinear Regression

In determining a mathematical model, whether by linear combinations or by multi-
linear regression, we have assumed the standard deviation of random experimental
error to be (approximately) constant (homoscedastic) over the experimental region.
Mathematical models were fitted to the data and their statistical significance or that
of their coefficients was calculated on the basis of this constant experimental
variance. Now the standard deviation is often approximately constant. All
experiments may then be assumed equally reliable and so their usefulness depends
solely on their positions within the domain.
If, however, the system is heteroscedastic and the standard deviation does
indeed vary within the domain so that certain experimental conditions are known
to give less precise results, this must be taken into account when calculating the
model coefficients. One means of doing this is by weighting. Each experiment i
is assigned a weight wᵢ, inversely proportional to the variance of the response at
that point. Equation 4.5, for least squares estimation of the model coefficients, may
thus be rewritten as:

    B = (X′WX)⁻¹X′WY

where W is the weights matrix:
        | w₁   0   ...   0   |
W =     | 0    w₂  ...   0   |
        | ...                |
        | 0    0   ...   w_N |
The weights must be known or assumed. One method, expensive in time and
resources, is to repeat each experiment several times and to estimate the variance
at each point. If the variance cannot be predicted in any way, this is the only
possible method.
Analysis of the residuals (yᵢ - ŷᵢ) as a function of the independent variables
Xᵢ may reveal a correlation between the residuals and one or more of the variables.
This could indicate an increase in the variance of the experimental results as the
value of the factor increases (which could be the case, for example, if not all levels
of the factor can be controlled with the same precision). One might then suggest a
weighting wᵢ which decreases as the value of the natural variable Uᵢ increases. An
example might be wᵢ = Uᵢ⁻¹, provided all Uᵢ are greater than zero.
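
As a sketch of the computation, the weighted estimate can be coded directly from the formula above; a minimal illustration assuming NumPy, with hypothetical names (X the model matrix, y the responses, w the weights):

```python
import numpy as np

def weighted_coefficients(X, y, w):
    """Weighted least squares: B = (X'WX)^-1 X'WY.

    w holds one weight per experiment, inversely proportional to the
    variance of the response at that point (e.g. w_i = 1/U_i when the
    scatter grows with the natural variable U_i, or w_i = 1/s_i**2
    when replicates give a variance estimate s_i**2).
    """
    W = np.diag(w)   # the diagonal weights matrix
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```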
Other possibilities are described in the literature, which deal with cases
where the variance depends on the response, or where the errors are correlated, or
where the errors depend on the order in which the experiments are carried out. For
the most part these are outside the scope of this book. However, the use of
transformations of the response values to correct for non-constant variance, as well
as non-normality, has been widespread ever since it was proposed by Box and Cox (2),
and this is the subject of the following two sections.

B. Variance Depending on the Experimental Response

1. Examples of variance depending on the response values

There are certain situations where for physical or physico-chemical reasons, or
because of the methodology, we might expect the variance to depend on the
response in specific ways.

(a) The relative standard deviation may be constant. For example in a solubility
experiment where the measured solubility varied from 0.05 to 10 mg/mL, the
error in the solubility, due perhaps to analytical error and variation in sample
preparation, was approximately constant at 8%. The standard deviation is thus
proportional to the response.
(b) In the measurement of ultraviolet absorbance A, at least part of the error results
    from a constant electrical noise σ_T in the transmittance measurement T. Since
    A = -log₁₀T, the standard deviation of the absorbance σ_A is given by:

        σ_A = |dA/dT| σ_T = σ_T / (2.303 T) = 10^A σ_T / 2.303

(c) The Poisson distribution applies where there is a constant, relatively small
    probability of an event occurring. A possible example is that of the number of
broken tablets in a friability test. For this distribution, the variance is equal to
the mean, so the standard deviation varies with the square root of the response.
(d) A simplification which is sometimes useful is to assume that the standard
    deviation varies according to a power α of the mean value of the response at
    a given point: σ ∝ η^α.

The response is usually measured directly on the most physically accessible
quantity. Flowability, for example, is normally measured by recording the time for
a certain quantity of powder to flow. It could equally well be expressed as a flow
rate. The variance dependence will be different according to how the data are
expressed.
It is quite rare for there to be enough data to show deviations from normality.
Least squares regression is usually adequate and cases requiring "robust" regression
methods are infrequent.

2. How to recognise the need for a transformation

Transformation of the response data is often advantageous where there is variation
in the response of an order of magnitude or more. A transformation will not always
be necessary, but should be tested for.

(a) Plotting either absolute or studentized values of residuals against the
corresponding calculated values may show a relationship between them, usually
increasing values of the residuals with increasing values of the calculated
response. However, such trends are not always visible, even when a
transformation is required because of error in the prediction at low response
values. Analysis by the method of Box and Cox, given at the end of this
section, is preferred.
(b) There are theoretical or physico-chemical arguments for such a transformation.

3. How to choose the appropriate transformation

A transformation of the data giving an improved fit may sometimes be obtained by
trial and error, but one or both of the following approaches is recommended. One
may make a theoretical choice of transformation and test whether it gives improved
results - a regression that is more significant in the analysis of variance, a reduction
in the number of outliers, residuals normally distributed. Alternatively, the Box-Cox
transformation method may be used.

4. Principle of the Box-Cox transformation

Box and Cox (2, 5) showed that, provided the experimental errors were
independent, transformation of the initial responses might correct for non-normality
and non-constant experimental variance. The function

    y^(λ) = (y^λ - 1) / (λ ẏ^(λ-1))        λ ≠ 0

is calculated for each experimental point y, for values of λ from -2.5 to 2.5, ẏ being
the geometric mean of the y. λ = 0 is a special case, where:

    y^(0) = ẏ logₑ y

The y^(λ) are analysed according to the model equation, and residual sums of squares
S_λ are then calculated for each value of λ and plotted against λ. λẏ^(λ-1) is a
normalising factor in the above equation, which allows for the change of scale on
transformation, so that the sums of squares may be compared. The best value of λ
is that for which the sum of squares is a minimum, and y may be transformed
accordingly to y^λ. Note that certain values of λ give rise to particular
transformations (shown in table 7.1).
The value of λ for the minimum sum of squares is not necessarily exactly
a multiple of ½, but there will normally be a choice of transformations that are
nearly as good, or at least not statistically worse. An approximate 95% confidence
interval may also be calculated (1, 2), and any transform within the confidence
interval may be selected.
The Box-Cox transformation method is valid only if all y > 0. If some
y < 0, a small constant term may be added to all the data to give positive values.
In our experience this is rarely useful when there are negative values, but it may be
contemplated when a few responses are equal to zero. A value equal to the smallest
detectable response could be added to each response before transforming.
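
The search for the best λ is easily programmed. The sketch below is a minimal illustration assuming NumPy; boxcox_sweep and its arguments are hypothetical names, and a real analysis would normally use a statistics package's equivalent. It applies the normalised transform for each λ on a grid, fits the postulated model by least squares, and returns the residual sums of squares S_λ to be plotted against λ:

```python
import numpy as np

def boxcox_sweep(X, y, lambdas=np.arange(-2.5, 2.55, 0.1)):
    """Residual sum of squares S_lambda over a grid of lambda values.

    Uses the normalised transform, so the sums of squares are
    comparable between lambdas:
        y(lambda) = (y**lambda - 1) / (lambda * g**(lambda - 1)),
        y(0)      = g * ln(y),
    where g is the geometric mean of y (all y must be > 0).
    """
    g = np.exp(np.mean(np.log(y)))      # geometric mean of the responses
    results = []
    for lam in lambdas:
        if abs(lam) < 1e-12:            # lambda = 0: logarithmic case
            z = g * np.log(y)
        else:
            z = (y**lam - 1.0) / (lam * g**(lam - 1.0))
        b, *_ = np.linalg.lstsq(X, z, rcond=None)   # fit postulated model
        resid = z - X @ b
        results.append((lam, resid @ resid))        # (lambda, S_lambda)
    return results   # plot ln(S_lambda) against lambda; take the minimum
```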

Table 7.1 Transformations Corresponding to Particular Values of λ

 λ       Transformation            Example or type
 1       none                      data with constant standard deviation
 0.5     square root               Poisson distribution
 0       logarithm                 constant relative standard deviation
-0.5     reciprocal square root
-1       inverse                   rates

Possible advantages of a successful transformation are:

• The sensitivity of the experiment is improved, with analysis of variance
  showing the model to be more significant.
• Apparent interaction terms are no longer significant, if they were the result
  of an inappropriate scale.
• The residuals are often normalised.

C. Example of a Box-Cox Transformation

1. Description of the problem, experimental domain, model and design

To illustrate the method we examine the data of Wehrle, Palmieri, and Stamm (6),
who report the optimized production of theophylline pellets by a simple one-step
process in a high speed Stephan granulator. The formulation consisted of 20%
theophylline drug substance, 30% lactose, and 50% microcrystalline cellulose. It
was granulated with a 15% hydroalcoholic solution of hydroxypropyl-
methylcellulose.
The process was considered to depend on the amount of granulating liquid
added, and on the kneading or granulation time. The responses were the yield of
pellets between 200 and 630 µm, the mean diameter, and the flowability; their
dependence on these factors was investigated. The speed of the granulator was
also varied, but in this example we take only those results obtained at a speed of
1000 rpm, and the only response considered is that of the yield.
The limits for added liquid were between 190 and 210 mL and the kneading
time was between 0 and 20 minutes. The domain was therefore not spherical but
cubic. Having carried out the 5 experiments of a 2² factorial design with a centre
point, the authors chose to carry out 4 further experiments to obtain the 9
experiments of a full factorial design at 3 levels, 3² (chapter 5, section VI.A). The
design and the experimental results are given in table 7.2.

Table 7.2 Full Factorial Design 3² for Granulation [data taken
from reference (6), by courtesy of Marcel Dekker, Inc.]

No. i   x₁   x₂   liquid (mL)   time (min)   yield (%)
1       -1   -1       190            0           48
2        0   -1       200            0           74
3       +1   -1       210            0           82
4       -1    0       190           10           84
5        0    0       200           10           86
6       +1    0       210           10           56
7       -1   +1       190           20           84
8        0   +1       200           20           50
9       +1   +1       210           20           21

2. Analysis of transformations

Figure 7.2 shows the results of the transformation according to the method of Box
and Cox. The minimum is found at λ = 0, corresponding to a logarithmic
transformation. (The sharpness of this minimum is, however, highly unusual.)

[Plot of ln(S_λ) against λ from -2.0 to +2.0: the minimum is at λ = 0, and the horizontal line at the critical value of ln S_λ (α = 2.5%, ν = 3) defines the confidence limits λ_min and λ_max.]

Figure 7.2 Box-Cox transformation of the data of table 7.2.

3. Analysis of results

The analysis of variance of the regression is shown in table 7.3. Table 7.4 gives
the estimates of the coefficients of the model:

    log₁₀ y = b₀ + b₁x₁ + b₂x₂ + b₁₁x₁² + b₂₂x₂² + b₁₂x₁x₂

The yield, transformed back to the original variables, is given as a contour plot in
figure 7.3 (solid lines). The process may be optimized by reference to the diagram,
to give maximum yield.

Table 7.3 ANOVA of the Regression on the Transformed Response

              Degrees of    Sum of     Mean       F
              freedom       squares    square
Total         8             0.31826
Regression    5             0.31797    0.06359    639.66
Residual      3             0.00030    0.00010

Table 7.4 Coefficients of the Model for log₁₀(yield)

Coefficient          Sign.     Coefficient          Sign.
b₀ =  1.924          ***       b₁₁ = -0.081         ***
b₁ = -0.091          ***       b₂₂ = -0.134         ***
b₂ = -0.086          ***       b₁₂ = -0.209         ***

[Contour plot at speed = 1000 rpm: kneading time (min) against quantity of liquid (mL), with the experimental yields marked in bold.]
Figure 7.3 Response surfaces for yield, calculated by regression on transformed
data (solid lines) and untransformed data (dotted lines) of reference (6).

4. Comparison with untransformed results

Figure 7.3 shows that the model using the logarithmic transformation of the yield
is better than that using the regression on the untransformed response. The numbers
in bold type are the experimental values obtained, the dotted lines represent the
response surface calculated for the model y = f(xᵢ), and the solid contour lines show
the response surface calculated for the model log₁₀y = g(xᵢ).

D. Non-Power Transformations

Apart from the power transformations, usually selected after a Box-Cox analysis,
certain other transformations may sometimes be useful.

1. The logit transform

This transformation may be applied to response data which fall within a finite
range. Common examples are percentages between 0 and 100: size data such as a
yield, or the percentage dissolved at a given time. Consider the dissolution testing
of a number of formulations. The formulations may be divided, approximately, into
3 groups with slow, intermediate, and fast dissolution. Those slow to dissolve might
have percentages dissolved clustered between 0 and 20%. For the second group, of
intermediate dissolution rate, the percentage dissolved would vary from about 20%
to 80%. If there were just those 2 groups, a power transformation like the ones we
have discussed might be adequate. A reciprocal transformation, for example, would
transform them to rates. But if there were a third group with rapid dissolution
clustered near 100%, a different kind of transformation could be useful, one which
spreads out the values near the boundaries:

    logit(y) = logₑ [ (y - y₀) / (y∞ - y) ]

where y is the response, y₀ is the lower limit of the response (for example 0%),
and y∞ is the maximum value (for example 100%). This is known as the logit
transformation.

2. Binomial data and the arcsine transformation

Here the response is the fraction of results that "pass" or "fail". If the fraction
that passes is p, then the following transformation to a new variable P may be made
(as suggested by Fisher) before analysis with the chosen model:

    sin P = √p,    that is,    P = arcsin(√p)

Obviously, p must be between 0 and 1. Unlike the logit transformation, the arcsine
transformation can also take the limiting values 0 and 1. Possible examples are (as
for the previous example) the fraction of drug dissolved or liberated at a given time
in a dissolution test, or the fraction with a particle size below a given limit.
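
Both transformations are one-liners in practice. A minimal sketch, assuming NumPy; the limits 0 and 100 correspond to the percentage example above:

```python
import numpy as np

def logit(y, y_min=0.0, y_max=100.0):
    """Logit transform for a response confined between y_min and y_max,
    e.g. a percentage dissolved; spreads out values near the limits.
    y must lie strictly inside the interval."""
    return np.log((y - y_min) / (y_max - y))

def arcsine(p):
    """Fisher's angular transformation P = arcsin(sqrt(p)) for a
    pass/fail fraction p; unlike the logit, it accepts p = 0 and p = 1."""
    return np.arcsin(np.sqrt(p))
```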


IV. VARIABILITY AND QUALITY

A. Introduction

1. Quality control and quality assurance

The term quality applied to a product includes all the properties and characteristics
by which it satisfies the needs of those who use it. This concept is not new to
pharmaceutical scientists. They are used to specifications, whether these are in
pharmacopoeias or otherwise. Specifications are fixed, and these are designed to
ensure quality. It is verified after manufacture that the product falls within
these specifications, and so, historically, the concept of quality is tied up with the
concept of control. The elimination of products that do not conform to commercial
specifications will satisfy external clients that the quality criteria are being met, but
this is at a cost. In no way does control directly improve the quality of the product
that is actually being manufactured.
It is necessary to introduce structures, procedures and methods which ensure
quality - that is quality assurance (7). Quality control is only one aspect of this,
although in general it is a necessary verification.
The cost of quality may be broken down into:

• the cost of conformity, which is the cost of prevention plus the cost of
  control, and
• the cost of non-conformity, which is the cost of internal plus external
  failure.

Internal failures must be thrown out or recycled. These should all be
detected in products where a batch can be sampled and tested. External failures are
much more serious and costly, although it is not always easy to quantify the cost.
In the case of a "normal" product, the customer will return it. If he is really
unhappy he will not buy it again. If a batch of medicine is faulty, the F.D.A. (to
name but one national authority) may recall it. If the agency is really unhappy
about the company's structures, procedures, and methods, it may well take stronger
action.
There is therefore an advantage in making sure that no defective product
leaves the factory and for this there are two main strategies - increased control, so
that all defective products are eliminated, and improved quality, so that there are
no defective products.
If we reduce the number of actual failures, we will automatically reduce the
number of both internal and external failures. This modern approach to quality,
spearheaded by Japanese workers (1) but now formally applied over a much wider
area (8), consists of prevention of failure rather than its detection. The idea is a
simple one - to avoid refusing a product it is enough at the production stage to
control the process in such a way that the manufactured product is as close as
possible to the desired product.


2. The loss function

When we manufacture a product, be it chemical, mechanical, electrical, or
pharmaceutical, we look first of all for properties of the actual product that are
within the specifications that were previously fixed. They were either fixed by some
external authority (for example a pharmacopoeia) or internally, at levels considered
necessary by the company, and then registered. In any case, they are not negotiable,
the only possible exception being the distinction between official specifications and
internal company specifications, which may be rather narrower.
Each measured property is known as a performance characteristic. If we
can give this property a number, it can be expressed or "graded" as y. The ideal
value is the target value, represented by τ. So let y be the performance
characteristic's measured value and η its expectation E(y). The ideal result is:

    E(y) = η = τ

where the expected (mean) value of the property that is being measured is also the
target value. This alone is not enough, as individual items can still fall outside
specifications. The performance characteristic y is certain to vary, with a variance
σ² representing the variability:

- of the measurement method,
- due to the manufacturing process,
- due to the manufacturing environment.

The traditional approach to quality is dominated by quality control. A
permissible range for the performance characteristic y about the target value is
defined. If y is within specification the product is accepted, but if y is outside
specification, the product is refused (figure 7.4a). The weakness of this approach
is the implication that even when the performance characteristic of a product is
close to the specification limit, this product is still of equivalent quality to one
where y is very close to the target value. Yet it appears evident that the
closer y is to τ, the "better" is the product.
The Japanese approach takes this into account. Taguchi (1) states that any
product whose performance characteristics are different from the target values
suffers a loss in quality. He quantifies this by the following loss function L(y):

    L(y) = K(y - τ)²

where K is a constant (see figure 7.4b). The objective of Taguchi and his successors
has been to find ways of minimising this loss in quality.
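
The contrast between the two views can be made concrete in a few lines; a minimal sketch with illustrative names, where in practice K would be fixed from the cost of repair or replacement at the tolerance limit:

```python
def classical_loss(y, target, tol, cost):
    """Traditional view: no loss inside target +/- tol, full cost outside."""
    return 0.0 if abs(y - target) <= tol else cost

def taguchi_loss(y, target, K):
    """Taguchi's continuous loss L(y) = K (y - target)**2."""
    return K * (y - target) ** 2

# Setting K = cost / tol**2 makes the two losses agree at the tolerance limit.
```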

[Two plots of quality loss against the measured characteristic y: (a) a step at the tolerance limits about the target τ, the loss being the cost of repair or replacement; (b) a continuous loss increasing on either side of τ.]

Figure 7.4 Quality loss functions: (a) classical and (b) as suggested by Taguchi.

3. Variability

In research, development, and above all in production, increased variability of a
product implies reduced quality. Due to this fact a product's variability is itself a
response. Therefore, the problem of choosing an optimum formulation, or
conditions for manufacturing it, is not only one of obtaining the best values of the
responses (whether maximum, minimum or target values), but also that of finding
conditions where those characteristics vary as little as possible.
For this purpose, Taguchi classified factors influencing the responses. We
will consider the two main classes:

• control factors, those factors which may normally be controlled and whose
levels may be fixed,
• noise factors, which are difficult, impossible, or very expensive to control.
These may be manufacturing factors (that depend on machines, differences
in speed, non-homogeneous temperature, etc.), to which may be added
external variations in their conditions of use (over which the manufacturer
has no control). We have already seen the effects of such factors in
discussing blocking and block factors.

Taguchi's approach consists of identifying the effects of all factors on both
the response and on its variability by means of experimental design. Then, since the
noise factors cannot be controlled (except possibly during a particular experiment),
he would try to minimize their effects. Thus, diminishing the variability of the
response is equivalent to removing the effects of uncontrolled factors on the
response. This is a considerable advance on the traditional approach, which is to try
to control the variation in these noise factors.
To do this it is absolutely necessary to determine the interactions between
noise and control factors. Thus, the levels of certain control factors will be fixed
at levels that minimize the effect of noise factors with which they interact, whereas
the other control factors will be used to adjust the mean response to its nominal
value.
There are three main types of experimental design approaches to this. All
tend to require a large number of experiments.

B. Experimental Strategies

1. Random variation in the noise factors

The first method is to construct a classical experimental design in the controlled
factors, and to repeat it a large number of times, hoping that there is enough
random variation in the uncontrolled noise factors to cover the domain. Mean and
variance may be determined for each experimental point and the variance may then
be considered as a response and accordingly minimized.
The disadvantages of this approach are twofold. It requires a very large
number of experiments. Also, one is never sure that the noise (non-controlled)
factors have varied enough over the period during which the design was carried
out for their variation to be representative of that expected over months or years
under manufacturing conditions. This being said, it is the only possible method
when it is technically impossible to control the noise factors.

2. Taguchi's matrix products

Rather than rely on chance variation of the noise factors, Taguchi (1) proposes that
at each point of an experimental design set up to study the control factors, the noise
factors are also allowed to vary according to an experimental design. Suppose, for
example, there are two control factors F₁ and F₂ and two noise factors, F₃ and F₄.
For each of the noise factors choose two real but extreme levels. One may then
construct a 2² factorial design in the control factors and repeat each of the 4
experiments using each possible combination of levels of the noise factors F₃, F₄.
We thus obtain the design of table 7.5.
The design comprising the control factors, F₁ and F₂, on the left hand side of
table 7.5 is known as the inner array, and the design formed by the noise factors,
F₃ and F₄, is the outer array. If there are N₁ experiments in the inner array and N₂ in
the outer array, then there are N = N₁ × N₂ experiments in all.
This may quickly result in a prohibitively large number of runs. Because of
this, Taguchi proposes using only screening designs. Although their R-efficiency
approaches 100%, they do not allow interactions to be calculated. Taguchi generally
neglects interactions, but this may be a source of considerable error.

Table 7.5 Matrix Product of Two Factorial Designs

                        Outer array
Inner array      -1     +1     -1     +1     F₃
F₁     F₂        -1     -1     +1     +1     F₄
-1     -1        y₁     y′₁    y″₁    y‴₁
+1     -1        y₂     y′₂    y″₂    y‴₂
-1     +1        y₃     y′₃    y″₃    y‴₃
+1     +1        y₄     y′₄    y″₄    y‴₄

When there is a considerable number of noise factors to be studied, certain
authors suggest studying them by groups, as in group screening, allowing them to
vary in pairs or triplets rather than individually.
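
The construction of such a product design is mechanical, as the following sketch shows (plain Python; the two control and two noise factors follow the example of table 7.5):

```python
from itertools import product

inner = list(product([-1, +1], repeat=2))   # control factors F1, F2 (2x2)
outer = list(product([-1, +1], repeat=2))   # noise factors F3, F4 (2x2)

# Matrix product: every control setting is run at every noise setting,
# so N = N1 x N2 = 4 x 4 = 16 experiments for the design of table 7.5.
runs = [(f1, f2, f3, f4) for (f1, f2) in inner for (f3, f4) in outer]
assert len(runs) == len(inner) * len(outer)
```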

3. Use of classical designs

Before eliminating the possible effect of noise factors on the variability, it may be
interesting to study these effects. One may therefore include these factors in the
design, just as the controlled factors are included (7, 8, 9, 10). This involves two
kinds of design.
The first are designs for factor influence studies (chapter 3). They may be
ones set up for special models, where interactions between the noise factors are not
included, but interactions between noise factors and controlled factors are studied
in detail. However, even when the design has not been set up for the express
purpose of studying variability, Box and Meyer (11) and Montgomery (12) have
shown that it is possible to detect factors affecting the variability of the response.
An example of the use of this technique in studying a fluid-bed granulation
process has recently been described (13). This used a 2⁵⁻¹ resolution V design.
Other authors use RSM (chapter 5) with graphical methods to reveal
favourable manufacturing conditions. This is illustrated below.

C. Choice of Response

What variables do we use to describe quality? Taguchi (1) takes into account:

(a) the difference between the measured response and its target value,
(b) the variability of the measured response.

The response y has a target value τ which is neither zero nor infinity, but equal to
a nominal value τ₀ - "nominal is best" or "target is best". We need to obtain a
performance characteristic as close to τ₀ as possible, at the same time reducing the
variation. For n experiments at a given setting of the control factors we obtain
responses y₁, y₂, ..., yᵢ, ..., yₙ. In this case, Taguchi recommends maximizing the
function:

    Z = S/N = +10 log₁₀ ( ȳ² / s² )        (7.1)

where ȳ = (1/n) Σᵢ yᵢ and s² is the variance of the n responses; the ratio is maximized
by minimizing its variability s².


There are other possibilities, according to whether the target value for y is
as small as possible, or as large as possible. If the desired value is zero, the
objective, τ = 0, is "smaller is better". The performance characteristic, y, is positive
and the loss function increases with y:

    L(y) = Ky²

It is suggested that the function:

    Z = S/N = -10 log₁₀ [ (1/n) Σᵢ yᵢ² ]        (7.2)

should be maximized, the sum of squares of the yᵢ being minimized. An example
might be the presence of an impurity in a synthesized starting material or finished
product.
We look next at the similar case, where the target value is as large as
possible ("infinity") - "larger is better". Maximizing y is equivalent to minimizing
y⁻¹, and Taguchi proposes maximizing:

    Z = S/N = -10 log₁₀ [ (1/n) Σᵢ yᵢ⁻² ]        (7.3)

Other authors prefer using the initial response (with possible transformation, as we
saw in chapter 2), the study of the dispersion being carried out on log(s).
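
The three ratios are simple functions of the n responses measured at one setting of the control factors. A minimal sketch, assuming NumPy and the forms of equations 7.1 to 7.3 as given above:

```python
import numpy as np

def sn_nominal_is_best(y):                       # equation 7.1
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

def sn_smaller_is_better(y):                     # equation 7.2
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_larger_is_better(y):                      # equation 7.3
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** -2.0))
```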

D. Example: Granulation in a High-Speed Mixer

1. Problem and experimental domain

Wehrle, Palmieri, and Stamm have demonstrated this method, which they used for
optimizing the production of theophylline pellets in a one-step process in a high
speed granulator (6). The process was considered to depend on the amount of water
added, between 190 and 210 mL, and the kneading time, between 0 and 20 minutes.
These are the control factors, and the domain is cubic in shape.
We want to know the variability of the responses as well as the shapes of
their response surfaces. Variability is very difficult to measure experimentally. In
order to estimate it by replicate measurements a very large number of experiments
would be required. Here it was thought that variability might be due to small
differences in the speed of the impeller blade of the mixer granulator. The blade
speed was thus allowed to vary by a small amount about its normal value of 1100
rpm. This is therefore a noise factor. The two levels for the speed, 1000 and 1200
rpm, are believed to cover the range of random variation of the speed.

2. Experimental design, plan, and results

The authors decided to use Taguchi's strategy, presented in section IV.B.2. The
design used was a matrix product of an inner array, a complete 3² factorial design
in the control factors, and an outer array, a 2¹ design. The dependence of the
responses - the yield of pellets between 200 and 630 µm, and also the flowability -
on the control factors was studied by the inner array. The 18 experiments and their
results are given in tables 7.2 and 7.6. Note that for the design in table 7.2, the
level of the coded noise factor X₃ is -1 throughout.

Table 7.6 Additional Results at 1200 rpm Mixer Speed [data taken from reference
(6) by courtesy of Marcel Dekker, Inc.]

No. i   x₁   x₂   x₃   liquid (mL)   time (min)   speed (rpm)   yield (%)
1       -1   -1   +1       190            0           1200          66
2        0   -1   +1       210            0           1200          83
3       +1   -1   +1       230            0           1200          85
4       -1    0   +1       190           10           1200          76
5        0    0   +1       210           10           1200          88
6       +1    0   +1       230           10           1200          36
7       -1   +1   +1       190           20           1200          79
8        0   +1   +1       210           20           1200          75
9       +1   +1   +1       230           20           1200          53

3. Global analysis of the overall results

Although the design was constructed according to Taguchi's method, it is in fact
a full factorial 2¹3² design, which may be analysed in the usual way. The authors
of the original article showed the necessity of introducing quadratic terms in X₁ and
X₂. The postulated model therefore contains the 6 terms of the quadratic model, the
single noise factor term, in X₃, and 5 cross-terms between the noise factor and the
remaining terms in X₁ and X₂:

    y = β₀ + β₁x₁ + β₂x₂ + β₁₁x₁² + β₂₂x₂² + β₁₂x₁x₂ + β₃x₃
        + β₁₃x₁x₃ + β₂₃x₂x₃ + β₁₂₃x₁x₂x₃ + β₁₁₃x₁²x₃ + β₂₂₃x₂²x₃ + ε        (7.4)

A similar calculation to that carried out in section III.C shows that a Box-Cox
(logarithmic) transformation gives an improved fit, although the improvement is not
statistically significant. It was nevertheless decided to use this transformation,
because of the much improved result demonstrated above for the half of the data
at 1000 rpm, and because the significance of the model is 2.5% with transformation
of the data and 6.7% without it.
Table 7.7 gives the analysis of variance of the regression in the two cases,
and estimates of the coefficients are given in table 7.8.
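
The fit itself is an ordinary least-squares regression on the 18 runs of tables 7.2 and 7.6. A minimal sketch, assuming NumPy; x1, x2, x3 and y stand for the coded columns and the yields, names chosen here for illustration. The 12 columns match the 12 coefficients of table 7.8:

```python
import numpy as np

def model_74_matrix(x1, x2, x3):
    """Design matrix for equation 7.4: the quadratic model in the
    control factors x1, x2, the noise factor x3, and the cross-terms."""
    x1, x2, x3 = (np.asarray(v, dtype=float) for v in (x1, x2, x3))
    one = np.ones_like(x1)
    return np.column_stack([
        one, x1, x2, x1 * x1, x2 * x2, x1 * x2,   # quadratic model
        x3, x1 * x3, x2 * x3, x1 * x2 * x3,       # noise factor and cross-terms
        x1 * x1 * x3, x2 * x2 * x3,
    ])

# b, *_ = np.linalg.lstsq(model_74_matrix(x1, x2, x3), y, rcond=None)
```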

Table 7.7 ANOVA of the Regression

                                 Degrees of   Sum of    Mean      F     Sign.
                                 freedom      squares   square
(a) Untransformed response
Total                            17           6546
Regression                       11           5673      515.7     3.5   6.7%
Residual                         6            873       145.4
(b) Logarithmic transformation
Total                            17           0.4595
Regression                       11           0.4174    0.0374    5.4   2.5%
Residual                         6            0.0421    0.0070

Table 7.8 Coefficients of the Model (equation 7.4)

Coefficient            Sign.     Coefficient            Sign.
b₀   =  78.9           ***       b₁₂  = -17.7           **
b₁   =  -8.7           *         b₁₃  =   0.8           81.3%
b₂   =  -6.3           11.7%     b₂₃  =   1.8           62.1%
b₃   =  -1.4           82.1%     b₁₂₃ =   6.5           17.6%
b₁₁  = -11.8           9.6%      b₁₁₃ =  -4.3           50.4%
b₂₂  =  -4.3           50.4%     b₂₂₃ =  11.2           11.2%


The rotation speed has apparently little effect on the response and can only
give rise to slight variations. There may, however, be some effect on the interaction
term b₁₂ and the curvature b₂₂. Although the effects are small, we continue the
treatment to demonstrate how these methods may be used.

4. Graphical analysis

Interesting information can be shown in different ways. Firstly, the above equation
allows response surfaces to be traced, for the yield, on the planes x₃ = -1
(speed = 1000 rpm) and x₃ = +1 (speed = 1200 rpm). These curves are super-
imposed in figure 7.5. The shaded zones correspond to yields greater than 70%
(light shading) and 80% (heavier shading) for the two extremes. Note that the two
response surfaces are identical to those that would have been obtained if the two
3² designs had been treated separately by a second-order model.

[Superimposed contour plots for speed = 1000 rpm and speed = 1200 rpm: kneading time (min) against quantity of liquid (mL).]

Figure 7.5 Response surfaces for different mixing speeds, showing region with
over 80% yield (plotted using the coefficients of table 7.8).

This analysis on its own does not allow us to minimize the effects of
variations in the speed. The two curves are shown in perspective in figure 7.6. It
may be seen that the curvature on the X₂ axis (kneading time) is modified. The
intersection of the two surfaces includes all the conditions where the calculated
response is the same for the two rotation speeds, and it is on this intersection of the
two curves where the variability due to fluctuations in the speed is minimized. Its
projection onto figure 7.5 will allow any target value to be selected, minimizing the
variability at the same time. It is evident that the maximum yield would be chosen
here; nonetheless the method is valid for a response where the optimum is a target,
not a maximum or minimum.


Figure 7.6 Response surfaces for different mixing speeds, showing intersection of
the two surfaces (plotted using the coefficients of table 7.8).

Another graphical method is to calculate the difference between the results
at the two speeds, Δyᵢ, for each experiment of the inner array. Δy is treated as a new
response, and analysed by multilinear regression, according to the second-order
model:

    Δy = β₀ + β₁x₁ + β₂x₂ + β₁₁x₁² + β₂₂x₂² + β₁₂x₁x₂ + ε        (7.5)

The values of the coefficients, which are not highly significant, are given in table
7.9, and the response surface of Δy is plotted in figure 7.7. As before, we may
conclude that the variation in the yield depends only on the curvature on the X₂
axis and the interaction between the two variables.

Table 7.9 Coefficients for the Model: Δy = f(x₁, x₂)

Coefficient          Sign.      Coefficient          Sign.
b₀  = -2.9           71.5%      b₁₁ = -8.7           29.9%
b₁  =  1.7           70.2%      b₂₂ = 22.2           *
b₂  =  3.6           42.8%      b₁₂ = 13.0           7.5%

Figure 7.7 Response surface for Δy (difference in yield at the two speeds).

5. Analysis using Taguchi's method

The criterion for the yield is "larger is better" (equation 7.3). The performance
characteristic Z_Y:

    Z_Y = -10 log₁₀ [ (1/n) Σᵢ yᵢ⁻² ]

is therefore maximized. The second-order model was postulated, and so the
coefficients of the model, where the S/N ratio replaces Δy in equation 7.5, are those
given in table 7.10 and the contour lines in figure 7.8. The area of maximum
performance follows quite closely the area of maximum yield.
For the flowability, where the response is the flow time, the criterion is
"smaller is better" (equation 7.2), and the corresponding performance characteristic

    Z_F = -10 log₁₀ [ (1/n) Σᵢ yᵢ² ]

is maximized.
Figure 7.8 Response surface for the Taguchi performance characteristic Z_Y,
calculated from the coefficients of table 7.10.

Table 7.10 Coefficients for the Model: S/N(yield) = f(x₁, x₂)

Coefficient           Sign.     Coefficient           Sign.
b₀  = 37.95           ***       b₁₁ = -2.19           7.0%
b₁  = -1.86           *         b₂₂ = -0.90           34.4%
b₂  = -1.44           5.1%      b₁₂ = -3.26           **

6. Conclusions

As we have just shown, the various approaches do not lead to exactly the same
conclusions. This being said, the predicted optimum conditions are quite close,
whether it is the characteristic response of the product (the yield), the absence of
variability in the yield, or the S/N ratio, that is treated. We superimpose the
different results in figure 7.9:

• The shaded regions show maximum yields for the two mixer speeds.
• The lightly shaded lines represent the points where the yield depends least
  on the variations in the speed.
• The full lines are the optimum contour lines of the S/N ratio for the yield.
• The two circles represent regions where the different objectives may best be
  achieved.

Figure 7.9 Robust granulation conditions predicted by the different methods.

E. Scaling-up and Other Applications

1. Identifying possible noise and control factors

Many pharmaceutical processes are well controlled and repeatable. However,
properties may change for no apparent reason, or sometimes after a necessary
change. Dissolution profiles of controlled release formulations may cause problems,
or the variation may be in the stability of the dosage form. Control charts may show
the variation to be part of a constant trend, or it may be random fluctuation.
The first stage is to examine the existing data: screening studies, factor
influence studies, optimization, and scale-up (see below). These may give a clue as
to which factor is causing the variation. They should also indicate which factors may
be varied in order to change the properties, and perhaps improve the robustness of
the product or process.
After that, factors that are difficult to control are noted. Possible ones (according
to the actual problem) might be:

• different batches of drug substance,
• the batch of excipient,
• the ambient temperature,
• the ambient relative humidity,
• the machine used,
• the operator,
• the exact granulation time or speed,
• the rate at which liquid is added.

Some of these, like the granulation time, may also be control factors. When the
noise and control factors have been chosen, a design can be set up in the control
factors. This should be based on the results already obtained in a process study, if
available. It is then multiplied by a design in the noise variables, generally factorial
or fractional factorial.

2. Scaling-up of pharmaceutical processes

Scale-up and technology transfer

A general problem and source of variation in pharmaceutical development and
production is that of scaling-up and of process transfer. Formulations and processes
are normally first developed on a small scale, because of the need to test a large
number of formulations and to carry out a large number of experiments for
optimization of the processing factors, as described in chapter 6. The problems are
that a large mass of material may well behave differently from a few kilograms,
and that the production scale equipment is likely to have different characteristics
from that used at the pilot scale.
Optimum levels of the process variables for a pharmaceutical form
manufactured using one piece of equipment will not be the same as for another.
The correspondence between the process variables needs to be established. This is
obviously easier if the equipment used on the laboratory or pilot scale is similar
to the production scale equipment, and the development scientist will try to be
equipped with material similar to that which is used in the factory.
A related problem is one of process transfer, where a process in production
at one factory is transferred to another, using different equipment. The process
which has been optimized for one set of equipment has to be modified or changed
completely. This is especially frequent for multinational companies, extending or
transferring production across international borders. The modifications may
sometimes be very considerable: for example, a pharmaceutical form originally
developed for a one-step granulation process, where the granules are dried and
lubricated within the same mixer used for the initial granulation, may be transferred
to a multi-step process, with granulation, sieving, drying, and lubrication in different
equipment. Or the transfer may be in the opposite sense.

Scale-up and quality


Some of the problems to be tackled in scale-up are very similar to those of
variability. There are in fact two possible approaches, both using rather similar
designs. Most in tune with the philosophy of design for quality is the approach of
considering the scale of manufacturing or type of apparatus as a noise variable and
optimizing the process or formulation so that the resulting dosage form should be
not only optimum, but as robust as possible to the apparatus used and the scale. The
methods described in sections C and D may be used.
Another approach is to treat the manufacturing scale as a normal qualitative
variable, and to optimize the process and/or formulation at each level.
Alternatively, it might be treated as a quantitative discrete variable, such as the
volume of the apparatus or, possibly better, its logarithm. Literature examples of
the use of experimental design in scale-up take this approach (14, 15, 16).

Designs and models


The designs are similar to those involving control and noise variables. A design of
the appropriate order for the process variables (cf. the control variables) is
multiplied by a full factorial design for the scale or apparatus variables. It is
thus a product
design and may be considered as consisting of an inner and an outer array if the
"quality/variability" method of analysis is to be used.
The models for quality/variability consist of an appropriate model for RSM
in the process factors, a term or terms for the effect of scale, and all possible
interactions between scale and process. Take a simple example of a granulation at
the 2, 5 and 20 kg scale. The process factors are the amount of liquid X₁ and the
granulation time X₂. We will treat the scale as a quantitative variable, but will
transform it to its logarithm instead of using it directly. The levels are thus 0.30,
0.70, and 1.30. The levels of the associated coded variable Z₃ are therefore -1, -0.2,
and +1, the middle level being slightly displaced from the centre.
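
The coding can be written out explicitly: the centre of the transformed interval is (0.30 + 1.30)/2 = 0.80 and the half-range is (1.30 − 0.30)/2 = 0.50, so

$$
z_3 = \frac{\log_{10}(\text{scale in kg}) - 0.80}{0.50}
$$

which gives −1.0, −0.2, and +1.0 for the 2, 5, and 20 kg scales respectively.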
A hexagonal design with one centre point would be suitable for studying the
granulation. This may be carried out at each level of Z₃, giving 21 experiments in
all. The model is obtained by multiplying the model for the granulation by the
second-order model for the scale:

$$
y = \left[\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{11} x_1^2 + \beta_{22} x_2^2 + \beta_{12} x_1 x_2\right] \times \left[\alpha_0 + \alpha_3 z_3 + \alpha_{33} z_3^2\right] + \varepsilon
$$

There are 18 terms in the model for only 21 experiments, so the design is nearly
saturated. The 3 fourth-order terms (the products of the squared and interaction
terms in the process factors with z₃²) could probably be left out.
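
As an illustration, the 21-run design and its 18-column model matrix might be set up as follows (a sketch in Python; the hexagon coordinates are the standard ones, and everything else follows the example above):

```python
import numpy as np

# Hexagonal design with one centre point in the coded process factors x1, x2.
c = np.cos(np.pi / 3)   # 0.5
s = np.sin(np.pi / 3)   # 0.866
hexagon = np.array([[1, 0], [c, s], [-c, s], [-1, 0], [-c, -s], [c, -s], [0, 0]])

z3_levels = [-1.0, -0.2, 1.0]   # coded log-scale for 2, 5 and 20 kg

rows = []
for z3 in z3_levels:            # outer array: the scale
    for x1, x2 in hexagon:      # inner array: the process factors
        process = [1, x1, x2, x1**2, x2**2, x1 * x2]   # second-order in x1, x2
        scale = [1, z3, z3**2]                         # second-order in z3
        # Product model: each process term multiplied by each scale term.
        rows.append([p * t for t in scale for p in process])

X = np.array(rows)
print(X.shape)   # (21, 18): 21 experiments for 18 coefficients
```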


Literature examples of experimental design in scale-up
Surprisingly little has been published on this subject, and the published studies all
appear to have taken the approach of optimizing individually at each level. The papers are mainly
concerned with the problem of granulation.
Wehrle et al. (14) have compared granulation with 5 different mixers. The
formulation was a "placebo" mixture containing lactose and corn starch, with
povidone as binder, granulated with a hydroalcoholic mixture. They investigated
small and large planetary mixers (Ours 12 litre and Collette 60 litre size) and high
shear mixers (Moritz Turbosphere, 10 and 50 litres, and the Lödige 50 litre mixer),
in terms of the effect of granulation time and quantity of water added. For each
piece of equipment they carried out experiments according to a 3² factorial design
with 2 extra experiments at the centre. They were thus able to compare the properties of
the resulting granules (flow properties, particle size distribution) and the resulting
tablets (friability, disintegration time) both directly by plotting superimposed
contour surfaces for pairs of mixers and also by using factorial discriminant
analysis to reduce the number of responses.
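
For reference, the 11-run design used for each mixer is easily written down (a minimal sketch, in coded levels):

```python
import itertools

# 3^2 factorial in granulation time and quantity of water (coded levels),
# plus 2 extra centre points: 9 + 2 = 11 runs per mixer.
design = list(itertools.product([-1, 0, 1], repeat=2)) + [(0, 0), (0, 0)]
print(len(design))   # 11
```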
The experiment gave two kinds of information, useful in scaling up. Firstly
optimum levels could be found for each apparatus. For example, more liquid
(compared with the powder mass) was needed for the smaller Turbosphere than for
the larger one. It may be possible to extrapolate this finding to other formulations. Secondly,
general characteristics of the different mixers could be established, in terms of the
principal components. (This method, deriving orthogonal combinations of the
responses, and the whole subject of multi-response data analysis is outside the
scope of this book.)
Ogawa et al. (15) compared results with small size mixer granulators (2 and
5 kg scale). The factors studied were the volume percentage of ethanol in the binder
solution and the volume of binder solution. Mixing time, granulation time, and
cross-screw rotation speed were held constant, and the blade rotation speed was
adjusted so that the speed at the tip of the blade was the same in both apparatuses.
Granulation in each apparatus was studied by a central composite design with 2
centre points (10 runs per mixer, 20 experiments in all). The total design was
therefore a product design, the product of the central composite with a 2¹ factorial,
and the model consisted of the product of the two models.
The model was therefore a full second-order model in the granulation
variables, but also included a first-order term in the third factor, the type of mixer.
In addition there were interaction terms between the type of mixer and all the
granulation terms. They also carried out experiments at the 20 kg scale.
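
Written out on the same pattern as the model of the previous example (the symbols here are ours, with x₁, x₂ the granulation variables and z a coded two-level mixer variable), the model would be

$$
y = \left[\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{11} x_1^2 + \beta_{22} x_2^2 + \beta_{12} x_1 x_2\right] \times \left[\alpha_0 + \alpha_1 z\right] + \varepsilon
$$

that is, 12 terms: the full second-order model in the granulation variables, a main effect of mixer, and an interaction between the mixer and every granulation term.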
Lewis et al. (16) compared granulation in Fielder mixers at the 25 litre and
65 litre scale for two sustained release tablet formulations. They also compared the
effects of oven drying in trays and in a fluid-bed apparatus. Here the design was
slightly different to those used above, as it was basically a central composite design
where the factorial design was carried out at a small scale and the axial experiments
at the larger scale, with centre points for both series of experiments. Thus fewer
experiments were needed than for the full central composite design, replicated for
each mixer-granulator. The design had the disadvantage that it was not possible to
determine interactions between the mixer variable and either the square terms or the
interaction terms in the granulation variables. However, the experiment allowed:

• determination of (average) differences in the responses at each scale,
• optimum conditions of granulation and lubrication to be identified
for each scale of granulation and drying method,
• trends to be identified so that initial conditions could be selected for
production scale manufacturing,
• critical variables to be identified, those which had a different effect
at the two scales.

If the curvature of the response had been significant and dependent on the type of
apparatus used, it would have been necessary to use a full second-order design at
each qualitative level of the mixer variable.
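
A sketch of this kind of split central composite in Python (the coded coordinates, the axial distance √2, and the numbers of centre points are assumptions for illustration; the published design may differ):

```python
import numpy as np

alpha = np.sqrt(2.0)   # assumed axial distance for two process factors

# Factorial portion plus centre points, run at the small scale (z = -1).
small_scale = [(-1, -1), (1, -1), (-1, 1), (1, 1), (0, 0), (0, 0)]

# Axial portion plus centre points, run at the large scale (z = +1).
large_scale = [(alpha, 0), (-alpha, 0), (0, alpha), (0, -alpha), (0, 0), (0, 0)]

design = ([(x1, x2, -1) for x1, x2 in small_scale]
          + [(x1, x2, +1) for x1, x2 in large_scale])

for x1, x2, z in design:
    print(f"x1 = {x1:+5.2f}  x2 = {x2:+5.2f}  scale = {z:+d}")

# 12 runs instead of 20 for a full central composite replicated at each scale.
# The price: scale is confounded with the factorial/axial split, so its
# interactions with the squared and x1*x2 terms cannot be estimated.
```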

3. Closing remarks

Taguchi's philosophy of quality is a valuable addition to the range of methods
available to us. However, these methods, whether those proposed by Taguchi
himself or the more efficient designs described in this book, are as yet little used
in pharmaceutical development. The designs are often difficult to set up;
it is not always easy to identify noise factors, nor can the noise factor always be
varied as we would wish. They require considerable experimentation. Nevertheless,
failure is costly, and the possibility of designing quality into formulations so that
they are robust enough to be manufactured on different pieces of equipment, under
different conditions, and using drug substance and excipients from different sources,
makes the approach potentially very attractive.
In this section the control variables have been process variables, but we may
also wish to adjust the proportions of drug substance and excipients so that the
formulation is insensitive to noise factors. The control factors are thus studied in
a mixture design of the kind that will be described in the final two chapters.

References

1. G. Taguchi, System of Experimental Design: Engineering Methods to Optimize
Quality and Minimize Cost, UNIPUB/Kraus International, White Plains, N.Y.,
1987.
2. G. E. P. Box and D. R. Cox, An analysis of transformations, J. Roy. Stat. Soc.,
Ser. B, 26, 211-252 (1964).
3. E. Senderak, H. Bonsignore, and D. Mungan, Response surface methodology as
an approach to the optimization of an oral solution, Drug Dev. Ind. Pharm., 19,
405-424 (1993).
4. D. C. Montgomery, Design and Analysis of Experiments, 2nd edition, J. Wiley,
N. Y., 1984.
5. G. E. P. Box, W. G. Hunter, and J. S. Hunter, Statistics for Experimenters, J.
Wiley, N.Y., 1978.

6. ... to optimize theophylline beads production in a high speed granulator, Drug
Dev. Ind. Pharm., 20, 2823-2843 (1994).
7. R. N. Kacker, Off-line quality control, parameter design and the Taguchi
method, J. Qual. Technol., 17, 176-209 (1985).
8. V. Nair, Taguchi's parameter design: a panel discussion, Technometrics, 34,
127-161 (1992).
9. R. V. Leon, A. C. Shoemaker, and R. N. Kacker, Performance measures
independent of adjustment, Technometrics, 29, 253-285 (1987).
10. M. S. Phadke, Quality Engineering Using Robust Design, Prentice Hall, 1989.
11. G. E. P. Box and R. D. Meyer, Dispersion effects from fractional designs,
Technometrics, 28, 19-27 (1986).
12. D. C. Montgomery, Using fractional factorial designs for robust process
development, Quality Engineering, 3, 193-205 (1990).
13. A. Menon, N. Dhodi, W. Mandella, and S. Chakrabarti, Identifying fluid-bed
parameters affecting product variability, Int. J. Pharm., 140, 207-218 (1996).
14. P. Wehrle, Ph. Nobelis, A. Cuiné, and A. Stamm, Scaling-up of wet
granulation: a statistical methodology, Drug Dev. Ind. Pharm., 19, 1983-1997
(1993).
15. S. Ogawa, T. Kamijima, Y. Miyamoto, M. Miyajima, H. Sato, K. Takayama
and T. Nagai, A new attempt to solve the scale-up problem for granulation
using response surface methodology, J. Pharm. Sci., 83, 439-443 (1994).
16. G. A. Lewis, V. Andrieu, M. Chariot, V. Masson, and J. Montel, Experimental
design for scale-up in process validation studies, 12th Pharm. Tech. Conf.,
1993.

Further reading

• D. M. Grove and T. P. Davis, Engineering Quality and Experimental Design,
Longman Scientific and Technical, Harlow, 1992.
• S. R. Schmidt and R. L. Launsby, Understanding Industrial Designed
Experiments, 3rd edition, Air Academic Press, Colorado Springs, 1993.
• G. S. Peace, Taguchi Methods, A Hands-on Approach to Quality Engineering,
Addison-Wesley, Reading, 1992.
