Six Sigma Book Part 2
[Figure 4.1. Cause-and-effect diagram for the effect "weight variation of product": the main causes are Man, Machine, Material, Method, Measurement and Environment (5M1E); under Method, the causes pressure and temperature lead to the root causes "pressure low" and "temperature too high".]
When preparing a cause-and-effect diagram, the first step is to agree on the specific wording of the effect and then to identify the main causes that can possibly produce it. The main causes can often be identified as any of the 5M1E categories, which helps us get started, but these are by no means exhaustive. Using brainstorming techniques, each main cause is analyzed. The aim is to refine the list of causes in greater detail until the root causes of that particular main cause are established. The same procedure is then followed for each of the other main causes. In Figure 4.1, method is a main cause, pressure and temperature are causes, and "pressure is low" and "temperature is too high" are root causes.

(2) Check sheet

The check sheet is used for the specific data collection of any desired characteristics of a process or product that is to be improved. It is frequently used in the measure phase of the Six Sigma improvement methodology, DMAIC. For practical purposes, the check sheet is commonly formatted as a table. It is important that the check sheet is kept simple and that its design is aligned with the characteristics to be measured. Consideration should be given to who should gather the data and what measurement intervals to apply. For example, Figure 4.2 shows a check sheet for defect items in an assembly process of automobile radios.
[Figure 4.2. Check sheet, data gathered by S.H. Park. Rows record five defect items (soldering defect, joint defect, lamp defect, scratch defect, miscellaneous) with item sums of 11, 8, 6, 24 and 9; columns record the tallies for Aug. 10 to Aug. 14, with daily sums of 9, 12, 11, 12 and 13, for an overall total of 58 defects.]
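A check sheet tally of this kind is easy to keep as a mapping from defect item to daily counts; the following is a minimal sketch, where the item names and counts are illustrative rather than the exact cell values of Figure 4.2:

```python
from collections import defaultdict

# Tally of defect counts, keyed by defect item, then by day (illustrative values).
tally = defaultdict(dict)

def record(day, item, count):
    """Record the number of defects of a given item observed on a day."""
    tally[item][day] = tally[item].get(day, 0) + count

record("Aug. 10", "Soldering defect", 3)
record("Aug. 10", "Scratch defect", 5)
record("Aug. 11", "Soldering defect", 2)

# Row sums (per defect item) and column sums (per day), as on the check sheet.
item_sums = {item: sum(days.values()) for item, days in tally.items()}
day_sums = defaultdict(int)
for days in tally.values():
    for day, c in days.items():
        day_sums[day] += c
```

The row and column sums reproduce the "Sum" row and column of the check sheet table.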
(3) Control chart (a) Introduction The control chart is a very important tool in the analyze, improve and control phases of the Six Sigma improvement methodology. In the analyze phase, control charts are applied to judge whether the process is predictable; in the improve phase, to identify evidence of special causes of variation so that they can be acted on; in the control phase, to verify that the performance of the process is under control. The original concept of the control chart was proposed by Walter A. Shewhart in 1924, and the tool has been used extensively in industry since the Second World War, especially in Japan and, after about 1980, in the USA. Control charts enable the study of variation and its sources. They support process monitoring and control, give direction for improvements, and separate special causes from common causes of variation in a process. They also give early identification of special causes so that there can be timely resolution before many poor-quality products are produced. Shewhart control charts track processes by plotting data over time in the form shown in Figure 4.3. This chart can track either variable or attribute process parameters. The
types of variable charts are process mean (x), range (R), standard deviation (s), individual value (x) and moving range (Rs). The attribute types are fraction nonconforming (p), number of nonconforming items (np), number of nonconformities (c), and nonconformities per unit (u).
[Figure 4.3. A Shewhart control chart: the process parameter is plotted over a period of time, between an upper control limit (UCL), a center line (CL) and a lower control limit (LCL).]
The typical control limits are set at plus and minus three standard deviations from the center line, estimated from about 20-30 data points. When a point falls outside these limits, the process is said to be out of control; when all points fall inside them, the process is said to be under control. There are various types of control charts, depending on the nature and quantity of the characteristics we want to supervise. The following control charts, called Shewhart control charts, are the ones most often used, depending on whether the data are continuous or discrete. Note that for continuous data, two types of chart are used simultaneously in the same way as a single control chart.

For continuous data (variables):
x̄-R (average and range) chart
x̄-s (average and standard deviation) chart
x-Rs (individual observation and moving range) chart
For discrete data (attributes):
p (fraction of nonconforming items) chart
np (number of nonconforming items) chart
c (number of defects) chart
u (number of defects per unit) chart

Besides these, the following charts for continuous data have been suggested and studied in the literature:
CUSUM (cumulative sum) chart
MA (moving average) chart
GMA (geometric moving average) chart
EWMA (exponentially weighted moving average) chart

(b) How are control charts constructed? A detailed generic sequence for the construction of control charts can be developed, which is useful when working with control charts in practice.

Step 1. Select the characteristic and type of control chart. First, the decision must be made regarding which characteristic (effect) of the process or product is to be checked or supervised for predictability in performance. Then the proper type of control chart can be selected.

Step 2. Determine the sample size and sampling interval. Control charts are, in most cases, based on samples of a constant number of observations, n. For continuous data, it is common to use two to six observations. However, there are also charts for subgroups of size one: the x (individual observation) chart and the Rs (moving range) chart. For discrete data, n could be as large as 100 or 200.

Step 3. Calculate the control limits and center line. All control charts have control limits, UCL and LCL, showing when the process is affected by special cause variation. A center line (CL) is drawn between the control limits. The distance from CL to UCL/LCL is three standard deviations of the characteristic.
For example, for n individual observations x1, x2, ..., xn, the center line of the x̄ (average) chart is calculated as

CL = x̄ = (x1 + x2 + ... + xn)/n.   (4.1)

For k samples, the center line and control limits of the most common charts are calculated as follows, where A2, D3, D4, B3 and B4 are standard control chart constants depending on the subgroup size n.

Continuous characteristics:
x̄ chart: sample averages x̄1, x̄2, ..., x̄k; CL = the grand average of the x̄i; UCL/LCL = CL ± A2R̄
R chart: sample ranges R1, R2, ..., Rk; CL = R̄; UCL = D4R̄, LCL = D3R̄
s chart: sample standard deviations s1, s2, ..., sk; CL = s̄; UCL = B4s̄, LCL = B3s̄
x (individual) chart: values x1, x2, ..., xk; CL = x̄; UCL/LCL = x̄ ± 2.66R̄s, where R̄s is the average moving range

Discrete characteristics:
p chart (fraction of nonconforming items): p1, p2, ..., pk; CL = p̄; UCL/LCL = p̄ ± 3 √(p̄(1 − p̄)/n)
np chart (number of nonconforming items): np1, np2, ..., npk; CL = np̄; UCL/LCL = np̄ ± 3 √(np̄(1 − p̄))
c chart (number of nonconformities): c1, c2, ..., ck; CL = c̄; UCL/LCL = c̄ ± 3 √c̄
u chart (nonconformities per unit): u1, u2, ..., uk; CL = ū; UCL/LCL = ū ± 3 √(ū/n)
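As an illustration, the x̄-R chart limits can be computed in a few lines of Python. The subgroup data here are invented for the sketch; the constants A2, D3 and D4 are the standard Shewhart values for subgroup size n = 5:

```python
# x-bar and R chart limits for k subgroups of size n = 5.
# A2, D3, D4 are the standard Shewhart constants for n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

subgroups = [                       # illustrative measurement data
    [5.02, 5.01, 4.94, 4.99, 4.96],
    [5.01, 5.03, 5.07, 4.95, 4.96],
    [4.99, 5.00, 4.93, 4.92, 4.99],
]

xbars = [sum(s) / len(s) for s in subgroups]   # subgroup averages
ranges = [max(s) - min(s) for s in subgroups]  # subgroup ranges

xbarbar = sum(xbars) / len(xbars)              # CL of the x-bar chart
rbar = sum(ranges) / len(ranges)               # CL of the R chart

ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
ucl_r, lcl_r = D4 * rbar, D3 * rbar
```

Since D3 = 0 for subgroup sizes up to six, the lower limit of the R chart is zero here, as the formulas above imply.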
Step 4. Draw the control chart and check for special causes. The control chart can now be drawn, with CL, UCL and LCL. The samples used for calculating the control limits are then plotted on the chart to determine whether they embody any special causes of variation. Special causes exist if any of the following alarm rules apply:
- A single point falls outside the 3σ control limits.
- Two out of three consecutive points fall outside the 2σ limits.
- Seven or more consecutive points fall on one side of the center line.
- A run of eight or more consecutive points is up (an increasing trend) or down (a decreasing trend).
- At least 10 out of 11 consecutive points are on one side of the center line.
- At least eight consecutive points form a cyclic movement, that is, each point falls on the opposite side of the center line from the previous point.

(4) Histogram

It is meaningful to present data in a form that visually illustrates the frequency of occurrence of values. In the analyze phase of the Six Sigma improvement methodology, histograms are commonly applied to learn about the distribution of the results Ys and the causes Xs collected in the measure phase, and they are also used to obtain an understanding of the potential for improvements. To create a histogram when the response only takes on certain discrete values, a tally is simply made each time a discrete value occurs. After a number of responses have been taken, the tally for the grouping of occurrences can be plotted in histogram form. For example, Figure 4.3 shows a histogram of 200 rolls of two dice, where, for instance, the sum of the dice was two for eight of these rolls. However, when making a histogram of response data that are continuous, the data
need to be placed into classes or groups. The area of each bar in the histogram is made proportional to the number of observations within each data value or interval. The histogram shows both the process variation and the type of distribution that the collected data entails.
[Figure 4.3. Histogram of 200 rolls of two dice: the frequency of occurrence of each dice value from 2 to 12, with frequencies up to about 40.]
(5) Pareto chart

The Pareto chart was introduced in the 1940s by Joseph M. Juran, who named it after the Italian economist and statistician Vilfredo Pareto (1848-1923). It is applied to distinguish "the vital few from the trivial many," as Juran formulated the purpose of the Pareto chart. It is closely related to the so-called 80/20 rule: 80% of the problems stem from 20% of
the causes, or in Six Sigma terms, 80% of the poor values in Y stem from 20% of the Xs. In the Six Sigma improvement methodology, the Pareto chart has two primary applications. One is for selecting appropriate improvement projects in the define phase. Here it offers a very objective basis for selection, based on, for example, frequency of occurrence, cost saving and improvement potential in process performance. The other primary application is in the analyze phase, for identifying the vital few causes (Xs) that will yield the greatest improvement in Y if appropriate measures are taken. A procedure to construct a Pareto chart is as follows:
1) Define the problem and the process characteristics to use in the diagram.
2) Define the period of time for the diagram, for example weekly, daily, or per shift. Quality improvements over time can later be assessed against the information determined in this step.
3) Obtain the total number of times each characteristic occurred.
4) Rank the characteristics according to the totals from step 3.
5) Plot the number of occurrences of each characteristic in descending order in a bar graph, along with a cumulative percentage overlay.
6) Trivial columns can be lumped under one column designation; however, care must be exercised not to omit small but important items.

Table 4.2 shows a summary table in which a total of 50 claims during the first month of 2002 are classified into six different reasons. Figure 4.4 is the Pareto chart of the data in Table 4.2.
Basic QC and Six Sigma Tools

Table 4.2. Summary of claim data

Claim reason   Number of data    %    Cumulative frequency   Cumulative (%)
A                    23          46            23                  46
B                    10          20            33                  66
C                     7          14            40                  80
D                     3           6            43                  86
E                     2           4            45                  90
All others            5          10            50                 100
[Figure 4.4. Pareto chart of the claim data in Table 4.2: bars show the percentage for each claim reason in descending order, with a cumulative percentage curve overlaid; both axes run from 0 to 100%.]
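Steps 3 to 5 of the construction procedure, applied to the claim data of Table 4.2, can be sketched as follows (in practice the "All others" column of step 6 is usually kept last; here it is simply ranked by count):

```python
# Claim counts from Table 4.2.
claims = {"A": 23, "B": 10, "C": 7, "D": 3, "E": 2, "All others": 5}
total = sum(claims.values())

# Rank characteristics by count (step 4) and build the cumulative overlay (step 5).
ranked = sorted(claims.items(), key=lambda kv: kv[1], reverse=True)
cumulative = []
running = 0
for reason, count in ranked:
    running += count
    cumulative.append((reason, count, 100 * running / total))

for reason, count, cum in cumulative:
    print(f"{reason:10s} {count:3d}  {cum:5.1f}%")
```

The first entry, claim reason A, already accounts for 46% of all claims, matching the first bar of Figure 4.4.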
(6) Scatter diagram The scatter plot is a useful way to discover the relationship between two factors, X and Y, i.e., the correlation. An important feature of the scatter plot is its visualization of the correlation pattern, through which the relationship can be determined. In the improve phase of the Six Sigma improvement methodology, one often searches the collected data for Xs that have a special influence on Y. Knowing the existence of such relationships, it is possible to identify input variables that
cause special variation of the result variable. It can then be determined how to set the input variables, if they are controllable, so that the process is improved. When several Xs may influence the values of Y, one scatter plot should be drawn for each combination of the Xs and Y. When constructing the scatter diagram, it is common to place the input variable, X, on the X-axis and the result variable, Y, on the Y-axis. The two variables can now be plotted against each other and a scatter of plotted points appears. This gives us a basic understanding of the relationship between X and Y, and provides us with a basis for improvement. Table 4.3 shows a set of data depicting the relationship between the process temperature (X) and the length of the plastic product (Y) made in the process. Figure 4.5 shows a scatter diagram of the data in Table 4.3.
Table 4.3. Data for temperature (X) and product length (Y) in a plastic-making process

X (°C)   Y (mm)     X (°C)   Y (mm)
131      22.99      129      23.01
135      23.36      135      23.42
136      23.62      134      23.16
130      22.86      126      22.87
132      23.16      133      23.62
133      23.28      134      23.63
132      22.89      130      23.01
131      23.00      131      23.12
128      23.08      136      23.50
134      23.64      133      22.75
[Figure 4.5. Scatter diagram of length (mm) versus temperature (°C) for the data in Table 4.3.]
(7) Stratification

Stratification is a tool used to split collected data into subgroups in order to determine whether any of them contain special cause variation. Hence, data from different sources in a process can be separated and analyzed individually. In the Six Sigma improvement methodology, stratification is mainly used in the analyze phase to stratify data in the search for special cause variation. The most important decision in using stratification is to determine the criteria by which to stratify, for example machines, materials, suppliers, shifts, day and night, age groups and so on. It is common to stratify into two groups; if the number of observations is large enough, more detailed stratification is also possible.

4.2 Process Flowchart and Process Mapping

(1) Process flowchart

For quality systems it is advantageous to represent system structure and relationships using flowcharts. A flowchart provides a picture of the steps that are needed to understand a process. Flowcharts are widely used in industry and have become a key tool in the development of information systems, quality management systems, and employee handbooks. The main value of the flowchart resides in the identification and mapping of activities in processes, so that the main flows of products and information are visualized and made known to everyone. In every Six Sigma improvement project, understanding the process is essential, so the flowchart is often used in the measure phase. It is also used in the analyze phase for identifying improvement potential compared to similar processes, and in the control phase to institutionalize the changes made to the process. Flowcharts can vary tremendously in complexity, ranging from the most simple to very advanced charts. When improving variation, a very simple flowchart is often applied in the measure phase to map the Xs (input variables) and Y (result variable) of the process or product to be improved. The input variables are either control factors or noise factors, and the flowchart provides a good tool for visualizing them, as shown in Figure 4.6. This figure is related to an improvement project at ABB in Finland, where the flowchart was used to map the control and noise factors in the input. The chart was later used in the improve phase for running a factorial experiment on the control factors, making possible a considerable reduction of DPMO in the process and a cost saving of $168,000. The drawing of flowcharts has become fairly standardized, with a dedicated international standard, ISO 5807, titled "Information processing – Documentation symbols and conventions for data, program and system flowcharts, program network charts and system resources charts". The standard gives a good overview of symbols used in flowcharts, as seen in Figure 4.7. The symbols are commonly available in software for drawing flowcharts, for example PowerPoint from Microsoft.
Figure 4.8 exemplifies the form of a process flowchart.
[Figure 4.6. Flowchart of the epoxy molding process with output Y = surface quality; the control factors are X1 resin temperature, X2 filling speed, X3 mold temperature and X4 vessel temperature, and the noise factors include V2, the mold surface.]
On-page connector: connects to another part of the same flowchart on the same page.
Off-page connector: connects to a different page, showing the connection "to page x" or "from page y" where necessary.
Storage: raw material, work in progress and finished goods.
[Figure 4.8. Example process flowchart: Start, Operation A, Operation B, Inspection; if the item passes inspection it proceeds to End, otherwise it goes to Rework.]
(2) Process mapping

An alternative (or supplement) to a detailed process flowchart is a high-level process map that shows only a few major process steps as activity symbols. For each of these symbols, key process input variables (KPIVs) to the activity are listed on one side of the symbol, while key process output variables (KPOVs) of the activity are listed on the other side. Note that a KPIV can be a CTQx, and a KPOV can be a CTQy.

4.3 Quality Function Deployment (QFD)

(1) Four phases of QFD

Quality function deployment (QFD) is a structured technique to ensure that customer requirements are built into the design of products and processes. In Six Sigma, QFD is mainly applied in improvement projects on the design of products and processes; hence, QFD is perhaps the most important tool for DFSS (design for Six Sigma). QFD enables the translation of customer requirements into product and process characteristics, including target values. The tool is also applied in Six Sigma to identify the critical-to-customer characteristics that should be monitored and included in the measurement system. QFD was developed in Japan during the late 1960s by Shigeru Mizuno (1910-1989) and Yoji Akao (1928- ). It was first applied at the Kobe shipyard of Mitsubishi Heavy Industry in 1972, with the Japanese car industry following suit some years later. In the West, the car industry first applied the tool in the mid 1980s. Since then, it has enjoyed a wide dispersal across industries in a number of countries. Although QFD is primarily used to map and systematically transform customer requirements, this is not its only use. Other possible applications concern the translation of market price into costs of products and processes, and of company strategies into goals for departments and work areas. Basically, QFD can be divided into the four phases of transformation shown in Figure 4.9. These four phases have been applied extensively, especially in the automobile industry.
[Figure 4.9. The four phases of QFD: customer requirements are translated into product characteristics (phase 1); critical product requirements into component characteristics (phase 2); critical component requirements into process characteristics (phase 3); and critical process requirements into production characteristics (phase 4).]
Phase 1: Market analysis to establish knowledge about current customer requirements that are considered critical for their satisfaction with the product, competitors' ratings for the same requirements, and the translation into product characteristics.
Phase 2: Translation of critical product characteristics into component characteristics, i.e., the product's parts.
Phase 3: Translation of critical component characteristics into process characteristics.
Phase 4: Translation of critical process characteristics into production characteristics, i.e., instructions and measurements.

The four phases embody five standard units of analysis, always transformed in the following order: customer requirements, product characteristics, component characteristics,
process characteristics, and production characteristics. The level of detail hence increases from general customer requirements to detailed production characteristics. At each phase the main focus is on the transformation from one of these units of analysis, the so-called "Whats", to the next, more detailed unit of analysis, the so-called "Hows". At each of the four phases in Figure 4.9, the left-hand requirements are the Whats, and the characteristics at the upper right are the Hows. A basic matrix, bearing some resemblance to a house and embodying 11 elements (rooms), is used to document the results of each of the four phases of transformation in QFD, as shown in Figure 4.10. This matrix is often called the house of quality. The numbers in parentheses indicate the sequence in which the elements of the matrix are completed.
[Figure 4.10. The house of quality, with its elements numbered in their order of completion, including (1) the Whats, (2) importance, (3) competitive assessment, (4) the Hows, (8) improvement direction and (10) sums of correlation, with the correlation matrix forming the roof.]
(2) Eleven elements of the house of quality

Of the 11 elements in the basic matrix shown in Figure 4.10, the first three are concerned with characteristics of the Whats, whereas the remaining eight are concerned with characteristics of the Hows. In this house of quality, the essential task is identifying the critical Hows, which constitute the main result of each matrix. In the following, a generic description of the eleven elements is given.

1) The Whats. The starting point is that the Whats are identified and included in the matrix. If it is the first phase of transformation, customer requirements will be the Whats. Customer requirements are given directly by the customers and are sometimes called the VOC (voice of the customers).

2) Relative importance. In the first phase of transformation the customers are also asked to attach a relative importance, for example on a scale from 1 = least to 5 = most, to each of the requirements they have stated. This holds similarly for the other phases. This importance is often denoted by Rimp.

3) Competitive assessment. A comparison of how well competitors and one's own company meet the individual Whats can then be made. If the Whats are customer requirements, it is common that customers give input to this comparison. For the three other kinds of Whats (product characteristics, component characteristics and process characteristics) the comparison is typically carried out by the team applying QFD. One way to do the comparison is to evaluate competitors, Ecom, and one's own company, Eown, on, for example, a scale from 1 = very poor to 5 = very good. Both the rating of competitors and that of one's own company can then be weighted with the relative importance, Rimp, to obtain a better understanding of the significance of differences in score for each individual
91
What. Thus the weighted evaluation of each What for competitors and for one's own company is obtained as Ew.com = Rimp × Ecom and Ew.own = Rimp × Eown.
8) Improvement direction. Based on the target value and the competitive assessment, the improvement direction for each characteristic of the Hows can be identified. It is common to denote increase, no change and decrease with arrow symbols (↑, →, ↓). This helps in understanding the Hows better.

9) Correlation matrix. In the correlation matrix, the correlations among the Hows' characteristics are identified. Two characteristics at a time are compared with each other until all possible combinations have been compared. Positive correlation is commonly denoted by +1, and negative correlation by −1. Correlation need not exist among all the characteristics.

10) Sums of correlation. The sum of correlations for each How, Sj, can be calculated by summing the related entries, as shown in Figure 4.11.
[Figure 4.11. Calculating the sums of correlation S1, S2, ..., S8: each Sj is the sum of the +1 and −1 entries in the correlation matrix cells connected to How j.]
11) Importance The final result is an identification of the Hows which are critical. The critical Hows are identified by evaluation and calculation. In general, the critical Hows are those that have a strong relationship with the improvement potential of the Whats compared to competitors and high positive sum of correlation.
The relative importance of each How, Irel, is derived by calculation. This is done by first computing the absolute importance of each How,

Iabs = Σi Rimp,i × Wij,

where Wij is the relationship weight between What i and How j (typically 9 = strong, 3 = medium, 1 = weak). For example, in Figure 4.12, the absolute importance of the first How, Length, becomes

Iabs = 4 × 9 + 3 × 3 + 2 × 1 + 1 × 3 = 50.

Very often this absolute importance of each How is rescaled into a relative importance, Irel, by normalizing the absolute importance, for example onto a scale from 0 to 10. In Figure 4.12, the relative importance of the first How, Length, is then 50/81 × 10 = 6.2.
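This importance calculation is a simple weighted sum, sketched below with the Rimp values and the relationship weights of the Length example:

```python
# Relative importances R_imp of the Whats that relate to the How "Length",
# and the corresponding relationship weights W (9 = strong, 3 = medium, 1 = weak).
r_imp = [4, 3, 2, 1]
w_length = [9, 3, 1, 3]

# Absolute importance of the How: sum of R_imp * W over all related Whats.
i_abs = sum(r * w for r, w in zip(r_imp, w_length))
print(i_abs)  # 4*9 + 3*3 + 2*1 + 1*3 = 50

# Relative importance: normalize onto a 0-10 scale against the largest
# absolute importance in the matrix (81 in Figure 4.12).
i_rel = round(i_abs / 81 * 10, 1)
```

Repeating this for every column of the house of quality yields the importance row from which the critical Hows are selected.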
[Figure 4.12. House of quality for a ballpoint pen. The Whats are no leakage (importance 5), easy writing (4), consistent writing (2), low weight (3), ergonomic (2) and classical design (1); the Hows are length, diameter, material hardness, material type, toxic material, ballpoint size, weight and shape, with target values Tj such as 160 mm, 0.5 mm, 100 N/cm², AISI 304, non-toxic, 20 mm, 20 g and round. The absolute importances of the Hows are 50, 24, 6, 40, 37, 36, 54 and 81, which normalize to relative importances of 6.2, 3.0, 0.7, 4.9, 4.6, 4.4, 6.7 and 10.]
From Figure 4.12, it is evident that shape, material hardness, length, weight and toxic material are the product characteristics (Hows) with high relative importance. These characteristics should be improved in order to fulfill the customer requirements. The next three phases help to identify further areas of improvement.

4.4 Hypothesis Testing

(1) Concept of hypothesis testing

In industrial situations we frequently want to decide whether the parameters of a distribution have particular values or relationships. That is, we may wish to test a hypothesis that the mean or standard deviation of a distribution has a certain value, or that the difference between two means is zero. Hypothesis testing procedures are used for these tests. A statistical hypothesis test is usually carried out by the following process:
- Set up a null hypothesis (H0) that describes the value or relationship being tested.
- Set up an alternative hypothesis (H1).
- Determine a test statistic, or rule, used to decide whether to reject the null hypothesis.
- Specify a probability value, denoted by α, that defines the maximum allowable probability that the null hypothesis will be rejected when it is true.
- Collect a sample of observations to be used for testing the hypothesis, and find the value of the test statistic.
- Find the critical value of the test statistic using α and a proper probability distribution table.
- Comparing the critical value and the value of the test statistic, decide whether the null hypothesis is rejected or not.

The result of the hypothesis test is a decision to either reject or not reject the null hypothesis; that is, the hypothesis is either
rejected or we reserve judgment on it. In practice, we may act as though the null hypothesis were accepted if it is not rejected. Since we do not know the truth, we can make one of the following two possible errors when running a hypothesis test:
1. We can reject a null hypothesis that is in fact true.
2. We can fail to reject a null hypothesis that is false.
The first error is called a type I error and the second a type II error. This relationship is shown in Figure 4.13. Hypothesis tests are designed to control the probabilities of making either of these errors; we do not know that the result is correct, but we can be assured that the probability of making an error is within acceptable limits. The probability of making a type I error is controlled by establishing a maximum allowable value, called the level of significance of the test, which is usually denoted by the letter α.
Figure 4.13. Errors in hypothesis testing

                          True state of nature
Conclusion made       H0                         H1
H0                    Correct conclusion         Type II error (β)
H1                    Type I error (α)           Correct conclusion
(2) Example

A manufacturer wishes to introduce a new product. In order to be profitable, the product should be successfully manufactured within a mean time of two hours. The manufacturer can evaluate manufacturability by testing the hypothesis that the mean time for manufacture is equal to or less than two hours. The item cannot be successfully manufactured if the mean time is greater than two hours, so the alternative hypothesis is that the mean time is greater than two. If we use μ and μ0 to denote the mean time and the hypothesized mean value, respectively, we can set up the hypotheses:
H0: μ ≤ μ0  and  H1: μ > μ0,

where μ0 = 2. A hypothesis of this type, in which the alternative points in only one direction, is called a one-sided test. If the alternative hypothesis is two-directional (H1: μ ≠ μ0), the test is called a two-sided test. The statistic used to test the hypothesis depends on the type of hypothesis being tested; statisticians have developed good, or even optimal, rules for many situations. For this example it is intuitively appealing that the null hypothesis should be rejected if the average of an appropriate sample of manufacturing times is sufficiently larger than two. The test statistic used in this case is

T = (x̄ − μ0) / (s/√n).   (4.2)

If this test statistic T is large enough, then we can reject H0. How large? That depends on the allowable probability of making an error and the related probability distribution. Let us assume that the allowable probability of making a type I error is 5%; then the level of significance is α = 0.05. In fact, a 5% level of significance is the one most often used in practice. The critical value of the test can then be found from the t-distribution, namely t(n − 1, α), and the decision rule is to reject H0 if T > t(n − 1, α).

Suppose the manufacturer runs nine sample trials and obtains the following data (unit: hours): 2.2, 2.3, 2.0, 2.2, 2.3, 2.6, 2.4, 2.0, 1.8. The sample mean time and sample standard deviation are

x̄ = 2.2,  s = 0.24.

The test statistic then becomes

T = (x̄ − μ0) / (s/√n) = (2.2 − 2.0) / (0.24/√9) = 2.50.

If we use a 5% level of significance, the critical value is t(n − 1, α) = t(8, 0.05) = 1.860. Since T = 2.50 > 1.860, H0 is rejected at the 5% level of significance, which means we conclude that the mean time is more than two hours, with at most a 5% probability of making a type I error.

4.5 Correlation and Regression

(1) Correlation analysis

The scatter diagram, which was explained pictorially in Section 4.1, describes the relationship between two variables, say X and Y. It gives a simple illustration of how variable X can influence variable Y. A statistic that describes the strength of a linear relationship between two variables is the sample correlation coefficient, r. A correlation coefficient can take values between −1 and +1. A value of −1 indicates perfect negative correlation, while +1 indicates perfect positive correlation; zero indicates no correlation. The equation for the sample correlation coefficient of two variables is
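The one-sample t-test of this example can be reproduced directly with the standard library; a minimal sketch, where the critical value 1.860 is taken from a t-table as in the text:

```python
from math import sqrt
from statistics import mean, stdev

data = [2.2, 2.3, 2.0, 2.2, 2.3, 2.6, 2.4, 2.0, 1.8]  # manufacturing times (hours)
mu0 = 2.0                # hypothesized mean time under H0
n = len(data)

xbar = mean(data)        # sample mean: 2.2
s = stdev(data)          # sample standard deviation: about 0.24
t_stat = (xbar - mu0) / (s / sqrt(n))

t_crit = 1.860           # t(8, 0.05) from a t-table
reject = t_stat > t_crit
print(round(t_stat, 2), reject)  # 2.5 True
```

Because T exceeds the critical value, H0 is rejected, matching the conclusion reached above.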
r = Σ(xi − x̄)(yi − ȳ) / √( Σ(xi − x̄)² Σ(yi − ȳ)² ),   (4.3)

where (xi, yi), i = 1, 2, ..., n, are the observed pairs of values. It is important to plot the analyzed data: the coefficient r only measures the straight-line relationship between x and y. Two variables may show no linear correlation (r is nearly zero) but still have a quadratic or exponential functional relationship. Figure 4.14 shows four plots with various correlation characteristics.
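Equation (4.3), together with the significance test of equation (4.4) introduced below, can be computed as follows; a minimal sketch with illustrative data:

```python
from math import sqrt

def correlation(xs, ys):
    """Sample correlation coefficient r of equation (4.3)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    syy = sum((y - ybar) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

def t_statistic(r, n):
    """Test statistic of equation (4.4) for H0: rho = 0."""
    return r * sqrt(n - 2) / sqrt(1 - r ** 2)

# Illustrative, nearly linear data: r is close to +1.
r = correlation([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8])
```

The resulting T value would be compared with t(n − 2, α/2), exactly as done for the aerosol data below.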
The hypothesis test for whether the population correlation coefficient ρ equals zero is

H0: ρ = 0,  H1: ρ ≠ 0,

which is a two-sided hypothesis test. The test statistic for this hypothesis test is

T = r √(n − 2) / √(1 − r²),   (4.4)

and H0 is rejected if the absolute value of T is greater than t(n − 2, α/2).

(2) Example of correlation analysis

In studying the decay of an aerosol spray, experimenters obtained the results shown in Table 4.4 (Box, Hunter and Hunter 1978), where x is the age in minutes of the aerosol and y is its observed dispersion at that time. Dispersion is measured
as the reciprocal of the number of particles in a unit volume. The n = 9 experiments were run in random order. The scatter diagram of these data is shown in Figure 4.15, which indicates that there is a strong correlation between the two variables.

Table 4.4. Aerosol data

Observed number:            1   2   3   4   5   6   7   8   9
Order of experiments:       6   9   2   8   4   5   7   1   3
Age x (minutes):            8  22  35  40  57  73  78  87  98
[Figure 4.15. Scatter diagram of the dispersion of the aerosol versus its age, for the data in Table 4.4.]

For these data the sample correlation coefficient is

r = Σ(xi − x̄)(yi − ȳ) / √( Σ(xi − x̄)² Σ(yi − ȳ)² ) = 0.983.

Testing the null hypothesis that the correlation coefficient equals zero yields

T = r √(n − 2) / √(1 − r²) = 14.229.
Hence, using a two-sided t-table at α/2, we can reject H0, because the absolute value of T, 14.229, is greater than t(n − 2, α/2) = t(7, 0.025) = 2.365 at the type I error level α = 0.05.

(3) Regression analysis

The simple linear regression model with a single regressor x takes the form
y = 0 + 1 x + ,
(4.5)
where 0 is the intercept, 1 is the slope, and is the error term. Typically, none of the data points falls exactly on the regression model line. The error term makes up for these differences from other sources such as measurement errors, material variations in a manufacturing operation, and personnel. Errors are assumed to have a mean of zero and unknown variance 2, and they are not correlated. When a linear regression model contains only one independent (regressor or predictor) variable, it is called simple linear regression. When a regression model contains more than one independent variable, it is called a multiple linear regression model. The multiple linear regression model with k independent variables is
y = β0 + β1x1 + β2x2 + ⋯ + βkxk + ε.     (4.6)
If we have a data set (xi, yi), i = 1, 2, ..., n, the estimates of the regression coefficients of the simple linear regression model can be obtained through the method of least squares as follows:
β̂1 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)²,
β̂0 = ȳ − β̂1x̄.     (4.7)
Then the fitted regression line is
ŷ = β̂0 + β̂1x,
which can be used for quality control of (x, y) and for prediction of y at a given value of x. It was found above that there is a strong positive correlation between x and y in the aerosol data of Table 4.4. Let's find the simple regression equation for this data set. From (4.7), the estimated coefficients are
β̂1 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)² = 0.489,
β̂0 = ȳ − β̂1x̄ = 0.839.
Hence, the fitted simple regression line is
ŷ = 0.839 + 0.489x.
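The aerosol calculations above are easy to reproduce. The following Python sketch (added here for illustration, with the data keyed in from Table 4.4) recomputes the correlation coefficient, the test statistic of (4.4), and the least-squares coefficients of (4.7):

```python
import math

# Aerosol data (Box, Hunter and Hunter 1978): age x in minutes, dispersion y
x = [8, 22, 35, 40, 57, 73, 78, 87, 98]
y = [6.16, 9.88, 14.35, 24.06, 30.34, 32.17, 42.18, 43.23, 48.76]
n = len(x)

xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
syy = sum((yi - ybar) ** 2 for yi in y)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

r = sxy / math.sqrt(sxx * syy)                     # sample correlation coefficient
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)   # test statistic of (4.4)
b1 = sxy / sxx                                     # slope estimate, (4.7)
b0 = ybar - b1 * xbar                              # intercept estimate, (4.7)

print(round(r, 3), round(t, 3), round(b1, 3), round(b0, 3))
```

To the precision shown in the text, the printed values agree with r = 0.983, T = 14.229, β̂1 = 0.489 and β̂0 = 0.839.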
When there is more than one independent variable, we should use the multiple linear regression model in (4.6). By the method of least squares, we can find the estimates of regression coefficients by the use of statistical packages such as SAS, SPSS, Minitab, S and so on. Then the fitted regression equation is
ŷ = β̂0 + β̂1x1 + β̂2x2 + ⋯ + β̂kxk.
4.6 Design of Experiments (DOE)

(1) Framework of design of experiments

Experiments are carried out by researchers or engineers in all fields of study to compare the effects of several conditions or to discover something new. If an experiment is to be performed efficiently, a scientific approach to planning it must be considered. The design of experiments (DOE) is the process of planning experiments so that appropriate data will be collected, the minimum number of experiments will be performed to acquire the necessary technical information, and suitable statistical methods will be used to analyze the collected data.

The statistical approach to experimental design is necessary if we wish to draw meaningful conclusions from the data. Thus, there are two aspects to any experimental study: the design of the experiment and the statistical analysis of the collected data. They are closely related, since the method of statistical analysis depends on the design employed. An outline of the recommended procedure for an experimental design is shown in Figure 4.16.

A simple, but very meaningful, model in Six Sigma is that y is a function of x, i.e., y = f(x), where y represents the response variable of importance to the customers and x represents the input variables, which are called factors in DOE. The questions are which of the factors are important for reaching good values of the response variable, and how to determine the levels of the factors.

The design of experiments plays a major role in many engineering activities. For instance, DOE is used for:

1. Improving the performance of a manufacturing process. The optimal values of process variables can be economically determined by application of DOE.
Figure 4.16. Outline of the recommended procedure for an experimental design (concluding with data analysis and a confirmation test).
2. The development of new processes. The application of DOE methods early in process development can result in reduced development time, reduced variability around target requirements, and enhanced process yields.

3. Screening important factors.

4. Engineering design activities such as the evaluation of material alternatives, comparison of basic design configurations, and selection of design parameters so that the product is robust to a wide variety of field conditions.

5. Empirical model building to determine the functional relationship between x and y.

DOE was developed in the 1920s by the British scientist Sir Ronald A. Fisher (1890–1962) as a tool in agricultural research. The first industrial application was performed in order to examine factors leading to improved barley growth for the Dublin Brewery. After this original introduction to the brewing industry, factorial design, a class of designs in DOE, began to be applied in industries such as agriculture, cotton, wool and chemistry. George E. P. Box (1919–), an American
scientist, and Genichi Taguchi (1924–), a Japanese scientist, have contributed significantly to the usage of DOE where variation and design are the central considerations. Large manufacturing industries in Japan, Europe and the US have applied DOE since the 1970s. However, DOE remained a specialist tool, and it was first with Six Sigma that DOE was brought to the attention of top management as a powerful tool to achieve cost savings and income growth through improvements in variation, cycle time, yield, and design. DOE was also moved from the office of specialists to the corporate masses through the Six Sigma training scheme.

(2) Classification of design of experiments

There are many different types of DOE. They may be classified as follows according to the allocation of factor combinations and the degree of randomization of the experiments.

1. Factorial design: This is a design for investigating all possible treatment combinations formed from the factors under consideration. The order in which the treatment combinations are run is completely random. Single-factor, two-factor and three-factor factorial designs belong to this class, as do 2^k (k factors at two levels) and 3^k (k factors at three levels) factorial designs.

2. Fractional factorial design: This is a design for investigating a fraction of all possible treatment combinations formed from the factors under investigation. Designs using tables of orthogonal arrays, Plackett-Burman designs and Latin square designs are fractional factorial designs. This type of design is used when the cost of the experiment is high or the experiment is time-consuming.

3. Randomized complete block design, split-plot design and nested design: All possible treatment combinations are tested in these designs, but some form of restriction is imposed
on randomization. For instance, a design in which each block contains all possible treatments, and the only randomization of treatments is within the blocks, is called a randomized complete block design.

4. Incomplete block design: If not every treatment is present in every block of a randomized complete block design, it is an incomplete block design. This design is used when we may not be able to run all the treatments in each block because of a shortage of experimental apparatus or inadequate facilities.

5. Response surface design and mixture design: These are designs where the objective is to explore a regression model to find a functional relationship between the response variable and the factors involved, and to find the optimal conditions of the factors. Central composite designs, rotatable designs, simplex designs, mixture designs and evolutionary operation (EVOP) designs belong to this class. Mixture designs are used for experiments in which the various components are mixed in proportions constrained to sum to unity.

6. Robust design: Taguchi (1986) developed the foundations of robust design, which are often called parameter design and tolerance design. The concept of robust design is used to find a set of conditions for design variables which are robust to noise, and to achieve the smallest variation in a product's function about a desired target value. Tables of orthogonal arrays are extensively used for robust design. For references related to robust design, see Taguchi (1987), Park (1996) and Logothetis and Wynn (1989).

(3) Example of a 2^3 factorial design

There are many different designs that are used in industry. A typical example is illustrated here. Suppose that three factors, A, B and C, each at two levels, are of interest. The design
is called a 2^3 factorial design, and the eight treatment combinations are written in Table 4.5; they can be displayed graphically as a cube, as shown in Figure 4.17. We usually write the treatment combinations in standard order as (1), c, b, bc, a, ac, ab, abc. There are three different notations that are widely used for the runs in the 2^k design. The first is the +/− notation, and the second is the use of lowercase letters to identify the treatment combinations. The final notation uses 1 and 0 to denote high and low factor levels, respectively, instead of + and −.
Table 4.5. 2^3 runs and treatment combinations

Run   A B C (+/− notation)   Treatment combination   A B C (1/0 notation)   Response data
1     − − −                  (1)                     0 0 0                  −2.5
2     − − +                  c                       0 0 1                  −1.0
3     − + −                  b                       0 1 0                   3.5
4     − + +                  bc                      0 1 1                   1.0
5     + − −                  a                       1 0 0                  −2.6
6     + − +                  ac                      1 0 1                  −1.4
7     + + −                  ab                      1 1 0                   4.0
8     + + +                  abc                     1 1 1                   2.0
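The standard-order labels in Table 4.5 can be generated mechanically. The short Python sketch below (an illustration, not part of the original text) walks through all level combinations with C varying fastest, as in the table, and converts each to its lowercase-letter label:

```python
from itertools import product

# Levels of A, B, C in 1/0 notation; product() varies the last factor (C)
# fastest, which reproduces the standard order used in Table 4.5.
labels = []
for levels in product((0, 1), repeat=3):
    name = ''.join(f for f, lvl in zip('abc', levels) if lvl == 1)
    labels.append(name if name else '(1)')  # the all-low run is written (1)

print(labels)  # ['(1)', 'c', 'b', 'bc', 'a', 'ac', 'ab', 'abc']
```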
The grand total of the eight responses is T = Σyi = 3.0.

Figure 4.17. Geometric view of the 2^3 factorial design, with the eight treatment combinations (1), c, b, bc, a, ac, ab, abc and their responses placed at the corners of a cube whose axes are the factors A, B and C.
Suppose that a soft drink bottler is interested in obtaining more uniform fill heights in the bottles produced by his manufacturing process. The filling machine theoretically fills each bottle to the correct target height, but in practice there is variation around this target, and the bottler would like to understand the sources of this variability and eventually reduce it. The process engineer can control three variables during the filling process, and the two levels of experimental interest for each factor are as follows:

A: the percentage of carbonation (A0 = 10%, A1 = 12%)
B: the operating pressure in the filler (B0 = 25 psi, B1 = 30 psi)
C: the line speed (C0 = 200 bpm, C1 = 250 bpm)

The response variable observed is the average deviation from the target fill height in a production run of bottles at each set of conditions. The data that resulted from this experiment are shown in Table 4.5. Positive deviations are fill heights above the target, whereas negative deviations are fill heights below the target.

The analysis of variance can be carried out as follows. Here, Ti is the sum of the four observations at level Ai, and Tij is the sum of the two observations at the joint levels AiBj. The ANOVA (analysis of variance) table can be summarized as shown in Table 4.6.
The total sum of squares is

ST = Σyi² − (Σyi)²/8 = 49.22 − (3.0)²/8 = 48.095.
The main effect sum of squares for A is

SA = (1/8)[a + ac + ab + abc − (1) − c − b − bc]²
   = (1/8)[(−2.6) + (−1.4) + 4.0 + 2.0 − (−2.5) − (−1.0) − 3.5 − 1.0]²
   = (1/8)(1.0)² = 0.125.

Similarly, we can find that SB = 40.5 and SC = 0.405. For the interaction sums of squares, we can show that

SAB = (1/8)[T11 + T00 − T01 − T10]²
    = (1/8)[ab + abc + (1) + c − b − bc − a − ac]²
    = (1/8)[4.0 + 2.0 + (−2.5) + (−1.0) − 3.5 − 1.0 − (−2.6) − (−1.4)]²
    = (1/8)(2.0)² = 0.5.

Similarly, we can find that SAC = 0.005 and SBC = 6.48. The error sum of squares is then

Se = ST − (SA + SB + SC + SAB + SAC + SBC) = 0.08.
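The contrast arithmetic for a 2^3 design is easy to mechanize. The following Python sketch (added for illustration; the ss helper is ours, and the response signs follow the sums-of-squares computation in the text) derives every sum of squares from its contrast:

```python
from itertools import product

# Responses in the standard order (1), c, b, bc, a, ac, ab, abc of Table 4.5
y = [-2.5, -1.0, 3.5, 1.0, -2.6, -1.4, 4.0, 2.0]
runs = list(product((0, 1), repeat=3))  # (A, B, C) levels, C varying fastest

def ss(effect):
    """Sum of squares of a main effect or interaction: contrast**2 / 8."""
    cols = ['ABC'.index(f) for f in effect]
    contrast = 0.0
    for run, yi in zip(runs, y):
        sign = 1
        for c in cols:
            sign *= 1 if run[c] == 1 else -1  # +1 at the high level, -1 at the low
        contrast += sign * yi
    return contrast ** 2 / len(y)

st = sum(yi ** 2 for yi in y) - sum(y) ** 2 / len(y)  # total sum of squares
table = {e: round(ss(e), 3) for e in ['A', 'B', 'C', 'AB', 'AC', 'BC', 'ABC']}
print(table)
print(round(st, 3))
```

With a single replicate, the three-factor interaction sum of squares (0.08) is exactly the quantity the text uses as the error sum of squares Se.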
Table 4.6. ANOVA table for the soft drink bottling problem

Source of variation   Sum of squares   Degrees of freedom   Mean square   F0
A                     0.125            1                    0.125         1.56
B                     40.500           1                    40.500        506.25
C                     0.405            1                    0.405         5.06
AB                    0.500            1                    0.500         6.25
AC                    0.005            1                    0.005         0.06
BC                    6.480            1                    6.480         81.00
Error (e)             0.080            1                    0.080
Total                 48.095           7
Since the F0 value of AC is less than 1, we pool AC into the error term, and the pooled ANOVA table can be constructed as follows.
Table 4.7. Pooled ANOVA table for the soft drink bottling problem

Source of variation   Sum of squares   Degrees of freedom   Mean square   F0
A                     0.125            1                    0.125         2.94
B                     40.500           1                    40.500        952.94**
C                     0.405            1                    0.405         9.53
AB                    0.500            1                    0.500         11.76
BC                    6.480            1                    6.480         152.47**
Pooled error (e)      0.085            2                    0.0425
Total                 48.095           7

(** : significant at the 1% level)
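The pooling step can be checked numerically. In the sketch below (illustrative, reusing the sums of squares of Table 4.6), AC is merged into the error term and the F0 ratios are recomputed against the pooled error mean square:

```python
# Sums of squares from Table 4.6; every source there has 1 degree of freedom
ss = {'A': 0.125, 'B': 40.5, 'C': 0.405, 'AB': 0.5, 'AC': 0.005, 'BC': 6.48, 'e': 0.08}

# Pool AC (its F0 is below 1) into the error term: SS and df both add.
se_pooled = ss.pop('AC') + ss.pop('e')   # 0.085 with 2 degrees of freedom
ms_error = se_pooled / 2                 # pooled error mean square, 0.0425

# With 1 df each, the mean square of every remaining source equals its SS.
f0 = {src: round(s / ms_error, 2) for src, s in ss.items()}
print(f0)
```

The ratios for A and B reproduce the 2.94 and 952.94 visible in Table 4.7.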
To assist in the practical interpretation of this experiment, Figure 4.18 presents plots of the three main effects and the AB and BC interactions. Since AC is pooled, it is not plotted. The main effect plots are just graphs of the marginal response averages at the levels of the three factors. The interaction graph of AB is the plot of the averages of two responses at A0B0, A0B1, A1B0 and A1B1. The interaction graph of BC can be similarly sketched. The averages are shown in Table 4.8.
Table 4.8. Averages for main effects and interactions

Main effects:
A0 = 0.25     A1 = 0.50
B0 = −1.875   B1 = 2.625
C0 = 0.60     C1 = 0.15

AB interaction:
        B0       B1
A0     −1.75     2.25
A1     −2.00     3.00

BC interaction:
        C0       C1
B0     −2.55    −1.20
B1      3.75     1.50
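The entries of Table 4.8 are plain marginal averages of the eight responses. As a sketch (added for illustration; the avg helper is ours, and the response signs follow the sums-of-squares computation in the text), they can be recomputed as follows:

```python
from itertools import product

# Responses in the standard order (1), c, b, bc, a, ac, ab, abc of Table 4.5
y = [-2.5, -1.0, 3.5, 1.0, -2.6, -1.4, 4.0, 2.0]
runs = list(product((0, 1), repeat=3))  # (A, B, C) levels, C varying fastest

def avg(**levels):
    """Average response over all runs matching the given levels, e.g. avg(A=0)."""
    idx = {'A': 0, 'B': 1, 'C': 2}
    sel = [yi for run, yi in zip(runs, y)
           if all(run[idx[f]] == v for f, v in levels.items())]
    return sum(sel) / len(sel)

print(avg(A=0), avg(A=1))            # main-effect averages for A
print(avg(B=0, C=0), avg(B=1, C=1))  # two of the BC interaction averages
```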
Figure 4.18. Main effect plots for carbonation (A), pressure (B) and line speed (C), together with the AB and BC interaction plots, for the soft drink bottling problem.
Notice that two factors, A and B, have positive effects; that is, increasing the factor level moves the average deviation from the fill target upward. However, factor C has a negative effect. The interaction between B and C is very large, but the interaction between A and B is fairly small. Since the company wants the average deviation from the fill target to be close to zero, the engineer decides to recommend A0B0C1 as the optimal operating condition, based on the plots in Figure 4.18.

4.7 Failure Modes and Effects Analysis (FMEA)

(1) Definition

Failure modes and effects analysis (FMEA) is a set of guidelines, a process, and a form for identifying and prioritizing potential failures and problems in order to facilitate process improvement. By basing their activities on FMEA, a manager, improvement team, or process owner can focus the energy and
resources of prevention, monitoring, and response plans where they are most likely to pay off. The FMEA method has many applications in a Six Sigma environment in terms of looking for problems not only in work processes and improvements but also in data-collection activities, Voice of the Customer efforts and procedures.

There are two types of FMEA: design FMEA and process FMEA. Design FMEA applications mainly include component, subsystem, and main system. Process FMEA applications include assembly machines, work stations, gauges, procurement, training of operators, and tests.

Benefits of a properly executed FMEA include the following:

- Prevention of possible failures and reduced warranty costs
- Improved product functionality and robustness
- Reduced level of day-to-day manufacturing problems
- Improved safety of products and implementation processes
- Reduced business process problems

(2) Design FMEA

Within a design FMEA, manufacturing and/or process engineering input is important to ensure that the process will produce to design specifications. A team should consider including knowledgeable representation from design, test, reliability, materials, service, and manufacturing/process organizations.

When beginning a design FMEA, the responsible design engineer compiles documents that provide insight into the design intent. Design intent is expressed as a list of what the design is expected to do. Table 4.9 shows a blank FMEA form. A team determines the design FMEA tabular entries following the guidelines described below.
Header information: Documents the system/subsystem/component, and supplies other information about when the FMEA was created and by whom.

Item/function: Contains the name and number of the analyzed item. Includes a concise, exact, and easy-to-understand explanation of the function of the item or task.

Potential failure mode: Describes ways a design could fail to perform its intended function.

Potential effect of failure: Contains the effects of the failure mode on the function, from an internal or external customer point of view.

Severity: Assesses the seriousness of the effect of the potential failure mode on the next component, subsystem, or system, if it should occur. Estimation is typically based on a 1 to 10 scale, where 10 is the most serious, 5 is low and 1 is no effect.

Classification: Includes optional information such as critical characteristics that may require additional process controls.

Potential cause of failure: Indicates a design weakness that causes the potential failure mode.

Occurrence: Estimates the likelihood that a specific cause will occur. Estimation is usually based on a 1 to 10 scale, where 10 is very high (failure is almost inevitable), 5 is low, and 1 is remote (failure is unlikely).

Current design controls: Lists activities such as design verification tests, design reviews, DOEs, and tolerance analysis that address the occurrence criteria.
Table 4.9. Blank FMEA form (usable for both a design FMEA and a process FMEA)

Header entries: Responsibility; Core team.
Column entries: Item/Function (design FMEA) or Function/Requirement (process FMEA); Potential Failure Mode; Potential Effect(s) of Failure; Sev; Class; Potential Cause(s)/Mechanism of Failure; Occur; Current Controls; Det; RPN; Recommended Actions; Responsibility and Target Completion Date; Actions Taken; and the resulting Sev, Occur, Det and RPN.
Detection: Assesses the ability of the current design control to detect the subsequent failure mode. Assessment is based on a 1 to 10 scale, where 10 is absolute uncertainty (there is no control), 5 is moderate (a moderate chance that the design control will detect a potential cause), and 1 is almost certain (the design control will almost certainly detect a potential cause).

Risk priority number (RPN): The product of the severity, occurrence, and detection rankings. Ranking by RPN prioritizes design concerns.

Recommended action: The intent of this entry is to institute corrective actions.

Responsibility for recommended action: Documents the organization and individual responsible for the recommended action.

Actions taken: Describes the implemented action and its effective date.

Resulting RPN: Contains the recalculated RPN resulting from corrective actions that affected the previous severity, occurrence, and detection rankings. Blanks indicate no action.

Table 4.10 shows an example of a design FMEA, taken from the FMEA Manual of the Chrysler/Ford/General Motors Supplier Quality Requirements Task Force.
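Since the RPN is just the product of three 1-to-10 rankings, prioritizing failure modes is a one-line computation. The sketch below is a hypothetical illustration (the failure modes and rankings are invented, not taken from any FMEA table in this book) of sorting a small worksheet by RPN:

```python
# Hypothetical FMEA entries: (failure mode, severity, occurrence, detection)
entries = [
    ('corroded panel', 7, 6, 7),
    ('loose fastener', 5, 3, 2),
    ('seal leakage',   8, 2, 4),
]

# RPN = severity x occurrence x detection; the highest RPN gets attention first.
ranked = sorted(((s * o * d, mode) for mode, s, o, d in entries), reverse=True)
for rpn, mode in ranked:
    print(rpn, mode)
```

For these invented rankings the ordering comes out 294, 64, 30, putting the corroded panel at the top of the priority list.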
Table 4.10. Example of a design FMEA

System/Subsystem/Component: 01.03 / Body Closures. Model year(s)/vehicle(s): 199X/Lion 4-door/Wagon.

Item/function: Front door. Ingress to and egress from the vehicle; support anchorage for door hardware including mirror, hinges, latch and window regulator.

Potential failure mode: Corroded interior lower door panels. Potential effect of failure: unsatisfactory appearance due to rust through paint over time. Severity: 7.

First potential cause/mechanism of failure: the upper edge of the protective wax application specified for the inner door panels is too low (Occurrence 6). Current design controls: vehicle general durability test (T-118, T-109, T-301); Detection 7; RPN 294. Recommended action: add laboratory accelerated corrosion testing (responsibility: A. Tate, Body Engineering, target 8X-09-30). Action taken: based on test results (test no. 1481), the upper edge spec was raised 125 mm; resulting Severity 7, Occurrence 2, Detection 2, RPN 28.

Second potential cause/mechanism of failure: insufficient wax thickness specified (Occurrence 4). Current design controls: vehicle general durability testing (as above); Detection 7; RPN 196. Recommended actions: add laboratory accelerated corrosion testing; conduct a design of experiments (DOE) on wax thickness. Action taken: test results (test no. 1481) show the specified thickness is adequate, and the DOE shows that 25% variation in the specified thickness is acceptable; resulting RPN 28.
(3) Process FMEA

For a process FMEA, design engineering input is important to ensure an appropriate focus on important design needs. A team should consider including knowledgeable representation from design, manufacturing/process, quality, reliability, tooling, and operators.

Table 4.9 shows a blank FMEA form which can be used both for a design FMEA and for a process FMEA. The tabular entries of a process FMEA are similar to those of a design FMEA, so detailed explanations of the entries are not repeated here. An example illustrating a process FMEA is given in Table 4.11.

4.8 Balanced Scorecard (BSC)

The concept of a balanced scorecard became popular following research studies published in the Harvard Business Review articles of Kaplan and Norton (1992, 1993), and ultimately led to the 1996 publication of the standard business book on the subject, The Balanced Scorecard (Kaplan and Norton, 1996). The authors define the balanced scorecard (BSC) as organized around four distinct performance perspectives: financial, customer, internal, and innovation and learning. The name reflects the balance provided between short- and long-term objectives, between financial and nonfinancial measures, between lagging and leading indicators, and between external and internal performance perspectives.

As data are collected at various points throughout the organization, the need to summarize many measures, so that top-level leadership can gain an effective idea of what is happening in the company, becomes critical. One of the most popular and useful tools for reaching that high-level view is the BSC. The BSC is a flexible tool for selecting and displaying key indicator measures about the business in an easy-to-read format. Many organizations not involved in Six Sigma, including many government agencies, are using the BSC to establish common performance measures and keep a closer eye on the business.
Table 4.11. Example of a process FMEA

Process: solder dipping and marking. Core team: Sam Smith, Harry Adams, Hilton Dean, Harry Hawkins, Sue Watkins.

Solder dipping:
- Potential failure mode: excessive solder/wire protrusion. Cause: flux on wire termination (Severity 9, Occurrence 6; current control: 100% inspection; Detection 3; RPN 162). Recommended action: automation/DOE and 100% inspection with a go/no-go gauge (Sam Smith, 6/4). Action taken: done; resulting RPN 72.
- Potential failure mode: interlock base damage. Causes: high soldering temperature and long solder time (current control: automatic solder tool; RPN 168 for each cause). Recommended actions: automation/DOE; define visual criteria (Harry Adams and Hilton Dean, 5/15). Actions taken: done; resulting RPNs 56 and 56.
- Potential failure mode: delamination of interlock base (Severity 7). Causes: moisture in the interlock base, and the base not being cleaned within 30 minutes after solder dip (current controls: automatic solder tool/SPC; RPNs 280 and 245). Recommended action: automation/DOE (Sue Watkins, 5/15). Actions taken: done; resulting RPNs 56 and 98.

Marking:
- Potential failure modes: illegible marking/customer dissatisfaction, and contact problem/no signal. Causes: marking ink curing and smooth marking surface (current controls: UV energy and SPC; none for the surface; RPNs 108 and 288). Recommended actions: improve the quality of plating; define criteria with the customer (Harry Hawkins and Sam Smith, 5/15). Actions taken: done; resulting RPNs 90 and 80.
A number of organizations that have embraced the Six Sigma methodology as a key strategic element in their business planning have also adopted the BSC, or something akin to it, for tracking their rate of performance improvement. One of those companies is General Electric (GE). In early 1996, Jack Welch, CEO of GE, announced to his top 500 managers his plans and aspirations regarding a new business initiative known as Six Sigma (Slater, 2000). When the program began, GE selected five criteria to measure progress toward an aggressive Six Sigma goal. Table 4.12 compares the GE criteria with the four traditional BSC criteria. We have ordered the first four GE criteria so that they align with the corresponding traditional BSC measures. The fifth GE criterion, supplier quality, can be considered a second example of the BSC financial criterion.
Table 4.12. Measurement criteria: BSC versus GE

Balanced Scorecard             General Electric
1. Financial                   1. Cost of poor quality (COPQ)
2. Customer                    2. Customer satisfaction
3. Internal                    3. Internal process performance
4. Innovation and learning     4. Design for manufacturability (DFM)
                               5. Supplier quality
In today's business climate, the term balanced scorecard can refer strictly to the categories originally defined by Kaplan and Norton (1996), or it can refer to the more general family-of-measures approach involving other categories. GE, for example, uses the BSC approach but deviates from the four prescribed categories of the BSC when it is appropriate. Godfrey (1999) makes no demands on the BSC categories other than that they track goals that support the organization's strategic plan. As an example, the following BSC was obtained for an internal molding process.
Basic QC and Six Sigma Tools

Table 4.13. Internal process BSC (process name: Molding)

CTQ           Mean     Standard deviation   Zl     Zs     DPMO
Diameter      0.021    0.340                2.71   4.21   3,338
Curvature     0.165    0.099                4.06   5.56   25
Distance      0.022    0.290                3.74   5.24   91
Contraction   98.94    2.46                 3.62   5.12   147
Temperature   1.57     0.16                 3.32   4.82   458
Index (average): Zl = 3.15, sigma quality level = 4.65, DPMO = 812
In Table 4.13, Zl and Zs are the long-term and short-term critical values of the standard normal distribution, respectively. Since the average DPMO of this molding process is 812, the sigma quality level is 4.65. Through this BSC, we can judge whether the process is satisfactory or not.
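The link between DPMO and the sigma quality level follows from the normal distribution with the conventional 1.5σ shift: the sigma level is the standard normal quantile of (1 − DPMO/10^6) plus 1.5. The Python sketch below (illustrative; the sigma_level function name is ours) uses only the standard library:

```python
from statistics import NormalDist

def sigma_level(dpmo):
    """Sigma quality level for a given DPMO, assuming the usual 1.5 sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

# The molding process of Table 4.13 averages 812 DPMO.
print(round(sigma_level(812), 2))  # 4.65
```

The same function returns a sigma level of 6.0 for 3.4 DPMO, the canonical Six Sigma goal.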
Six Sigma and Other Management Initiatives

Table 5.1. Categories of quality costs and their contents

Category                 Contents
Prevention costs         1. Quality training  2. Process capability studies  3. Vendor surveys  4. Quality planning and design  5. Other prevention expenses
Appraisal costs          1. All kinds of testing and inspection  2. Test equipment  3. Quality audits and reviews  4. Laboratory expenses  5. Other appraisal expenses
Internal failure costs   1. Scrap and rework  2. Design changes  3. Excess inventory cost  4. Material procurement cost  5. Other internal failure expenses
External failure costs   1. After-service and warranty costs  2. Customer complaint visits  3. Returns and recalls  4. Product liability suits  5. Other external failure expenses
portion of existing preventive and appraisal efforts should be expended in reducing failure costs. This strategy will eventually reduce overall quality costs. The optimal proportions of the quality cost categories depend on the type of business involved; however, it is reported that quality costs can be reduced to approximately the 10% level of total sales value.

(3) Cost of poor quality

The cost of poor quality (COPQ) is the total cost incurred through high quality costs and poor management. Organizations, both public and private, that can virtually eliminate the COPQ can become the leaders of the future. Conway (1992)
Figure: quality costs as a percentage of total sales by division, broken down by cost category; external failure costs form the largest share.
claims that in most organizations 40% of the total effort, both human and mechanical, is wasted. If that waste can be eliminated or significantly reduced, the per-unit price that must be charged for goods and services to yield a good return on investment is greatly reduced, and often ends up being a price that is competitive on a global basis. One of the great advantages of Six Sigma is to reduce the COPQ, and hence, to improve profitability and customer satisfaction. As the quality movement progressed, it became obvious that the costs associated with quality could represent as much as 20 to 40% of total sales value (see Juran, 1988), and that many of these costs were hidden (not directly captured) on the income statement or balance sheet. These hidden quality costs are those shown below the water line in Figure 5.2.
Figure 5.2. Visible and hidden costs of poor quality.
Visible COPQ: prevention cost, appraisal cost, internal failure cost, external failure cost.
Hidden COPQ: lost management time cost, lost business cost, lost credibility cost, project rework cost, lost opportunity cost, lost assets cost, rerun cost, lost goodwill cost, maintenance cost.
The addition of technical specialists within the quality department helped with defining and focusing on these hidden quality costs. A large COPQ represents unsatisfactory products or practices that, if eliminated, could significantly improve the profitability of an organization. Over a period of decades, a number of surprising facts surfaced concerning COPQ (Juran, 1988):

- Quality-related costs were much higher than financial reports tended to indicate.
- Quality costs were incurred not only in manufacturing but in support areas as well.
- While many of these costs were avoidable, there was no person or organization directly responsible for reducing them.

An excellent Six Sigma strategy should directly attack the COPQ, whose issues can dramatically affect a business. Wisely applied Six Sigma techniques can help eliminate or reduce many of the issues that affect overall COPQ. The concept of
COPQ can help identify Six Sigma projects. It would be ideal if a Pareto chart of the monetary magnitude of the 20 COPQ subcategories listed in Table 5.1 could be created, so that areas for improvement could be identified.

5.2 TQM and Six Sigma

While Six Sigma is definitely succeeding in creating impressive results and culture changes in some influential organizations, it is certainly not yet a widespread success. Total Quality Management (TQM) seems less visible in many businesses than it was in the early 1990s, although many companies are still engaged in improvement efforts based on its principles and tools. It appears, at least in Korea, that Six Sigma is succeeding while TQM is losing its momentum.

One of the problems that plagued many of the early TQM initiatives was the preeminence placed on quality at the expense of all other aspects of the business. Some organizations experienced severe financial consequences in the rush to make quality "first among equals." The disconnection between management systems designed to measure customer satisfaction and those designed to measure provider profitability often led to unwise investments in quality, a practice that was common in TQM.

Ronald Snee (1999) points out that although some people believe it is nothing new, Six Sigma is unique in its approach and deployment. He defines Six Sigma as a strategic business improvement approach that seeks to increase both customer satisfaction and an organization's financial health. Snee goes on to claim that the following eight characteristics account for Six Sigma's increasing bottom-line (net income or profit) success and popularity with executives.
- Bottom-line results expected and delivered
- Senior management leadership
- A disciplined approach (DMAIC)
- Rapid (3–6 months) project completion
- Clearly defined measures of success
- Infrastructure roles for Six Sigma practitioners and leadership
- Focus on customers and processes
- A sound statistical approach to improvement

Other quality initiatives, including TQM, have laid claim to a subset of these characteristics, but only Six Sigma attributes its success to the simultaneous application of all eight. Six Sigma can be regarded as a vigorous rebirth of quality ideals and methods, applied with even greater passion and commitment than was often the case in the past. Six Sigma is revealing a potential for success that goes beyond the levels of improvement achieved through the many TQM efforts.

Some of the mistakes of yesterday's TQM efforts certainly might be repeated in a Six Sigma initiative if we are not careful. A review of some of the major TQM pitfalls, along with hints on how the Six Sigma system can keep them from derailing our efforts, is given below.

1. Links to the business and bottom-line success: In TQM, quality often was a sidebar activity, separated from the key issues of business strategy and performance. The link to the business and to bottom-line success was undermined, despite the term "total quality," since the effort was actually limited to product and manufacturing functions. Six Sigma emphasizes reduction of costs, thereby contributing to the bottom line, and the participation of three major areas: manufacturing, R&D and service.

2. Top-level management leadership: In many TQM efforts, top-level management's skepticism has been apparent, or its willingness to drive quality ideas has been weak. Passion for and belief in Six Sigma at the very summit of the business is unquestioned in companies like Motorola, GE, Allied Signal (now Honeywell), LG and Samsung. In fact, top-level management involvement is the beginning of Six Sigma.
3. Clear and simple message: The fuzziness of TQM started with the word "quality" itself. It is a familiar term with many shades of meaning. In many companies, Quality was an existing department with specific responsibilities for quality control or quality assurance, where the discipline tended to focus more on stabilizing than on improving processes. TQM also does not provide a clear goal at which to aim. The concept of Six Sigma is clear and simple: it is a business system for achieving and sustaining success through customer focus, process management and improvement, and the wise use of facts and data. A clear goal (3.4 DPMO, or the 6σ quality level) is the centerpiece of Six Sigma.

4. Effective training: TQM training was ineffective in the sense that the training program was not systematic. Six Sigma divides all employees into five groups (WB, GB, BB, MBB and Champion), and it sets very demanding standards for learning, backing them up with the necessary investment in time and money to help people meet those standards.

5. Internal barriers: TQM was a mostly departmentalized activity in many companies, and it seems that TQM failed to break down the internal barriers among departments. Six Sigma places priority on cross-functional process management, and cross-functional project teams are created, which eventually breaks down internal barriers.

6. Project team activities: TQM relied on many quality circles of blue-collar operators and workers, and on few task-force teams of white-collar engineers, even where these were needed. Six Sigma demands many project teams of BBs and GBs, and these project team activities are one of the major sources of bottom-line and top-line success. The difference between quality circles and Six Sigma project team activities was explained in Chapter 2.
5.3 ISO 9000 Series and Six Sigma
The ISO (International Organization for Standardization) 9000 series standards were first published in 1987, revised in 1994, and re-revised in 2000 by the ISO. The 2000 revision, denoted by ISO 9000:2000, has attracted broad expectations in industry. As of the year 2001, more than 300,000 organizations worldwide had been certified to the ISO 9000 series standards. The revision embodies a consistent pair of standards, ISO 9001:2000 and ISO 9004:2000, both of which have been significantly updated and modernized. The ISO 9001:2000 standard specifies requirements for a quality management system for which third-party certification is possible, whereas ISO 9004:2000 provides guidelines for a comprehensive quality management system and performance improvement through Self-Assessment.
The origin and historical development of ISO 9000 and Six Sigma are very different. The genesis of ISO 9000 can be traced back to the standards that the British aviation industry and the U.S. Air Force developed in the 1920s to reduce the need for inspection by approving the conformance of suppliers' product quality. These standards developed into requirements for suppliers' quality assurance systems in a number of western countries in the 1970s. In 1987 they were amalgamated into the ISO 9000 series standards. Independent of ISO 9000, the same year also saw the launch of Six Sigma at Motorola and the launch of Self-Assessment by means of the Malcolm Baldrige National Quality Award in the USA. Both Six Sigma and Self-Assessment can be traced back to Walter A. Shewhart and his work on variation and continuous improvement in the 1920s. It was Japanese industry that pioneered a broad application of these ideas from the 1950s through to the 1970s. When variation and continuous improvement caught the attention of some American business leaders in the late 1980s, they took the form of the Malcolm Baldrige National Quality Award on a national level, and of Six Sigma at Motorola.
Some people wonder whether the ISO 9000:2000 series
standards make Six Sigma superfluous. They typically refer to clause 8 of ISO 9001, "Measurement, analysis and improvement", which requires that companies install procedures in operations for the measurement of processes and data analysis using statistical techniques, with the demonstration of continuous improvement, as shown in Figure 5.3. They also partly refer to the ISO 9004:2000 standard, which embodies guidelines and criteria for Self-Assessment similar to those of the national quality awards.
Figure 5.3. Continual improvement of the quality management system (ISO 9001:2000 process model): customer requirements form the input, the product/service is the output, and customer satisfaction is the result.
The author firmly believes that Six Sigma is needed regardless of whether a company is compliant with the ISO 9000 series. The two initiatives are not mutually exclusive, and the objectives in applying them are different. A Six Sigma program is applied in organizations based on its top-line and bottom-line rationales. The primary objective for applying the ISO 9000 series standards is to demonstrate the company's capability to consistently provide conforming products and/or services. Therefore, the ISO 9000 series standard falls well short of making Six Sigma superfluous. The ISO 9000 series standards have from their early days been regarded and practiced by industry as a minimum set of requirements for doing business. The new ISO 9000:2000 standards do not represent a significant change to this perspective. Six Sigma, on the other hand, aims at world-class performance, based on a pragmatic framework for continuous improvement. The author believes that Six Sigma is superior in such important areas as rate of improvement, bottom-line and top-line results, customer satisfaction, and top-level management commitment. However, considering the stronghold of ISO 9000 in industry, Six Sigma and ISO 9000 are likely to be applied by the same organization, but for very different purposes.
5.4 Lean Manufacturing and Six Sigma
(1) What is lean manufacturing?
Currently there are two premier approaches to improving manufacturing operations. One is lean manufacturing (hereinafter referred to as Lean) and the other is Six Sigma. Lean evaluates the entire operation of a factory and restructures the manufacturing method to reduce wasteful activities such as waiting, transportation, material hand-offs, inventory and over-production. It reduces variation associated with manufacturing routings, material handling, storage, lack of communication, batch production and so forth. Six Sigma tools, on the other hand, commonly focus on specific part numbers and processes to reduce variation. The combination of the two approaches represents a formidable weapon against variation in that it covers both the layout of the factory and specific part numbers and processes. Lean and Six Sigma are promoted as different approaches and different thought processes. Yet, upon close inspection, both approaches attack the same enemy and behave like two links within a chain; that is, they are dependent on each other for success. They both battle variation, but from two different points of view. The integration of Lean and Six Sigma takes two powerful problem-solving techniques and bundles them into one powerful package. The two approaches should be viewed as complements to, rather than replacements for, each other (Pyzdek, 2000). In practice, manufacturers that have widely adopted Lean practices record performance metrics superior to those achieved by plants that have not. The practices cited as Lean in a recent industrial survey (Jusko, 1999) include quick changeover techniques to reduce setup time; adoption of manufacturing cells in which equipment and workstations are arranged sequentially to facilitate small-lot, continuous-flow production; just-in-time (JIT) continuous-flow production techniques to reduce lot sizes, setup time and cycle time; and JIT supplier delivery, in which parts and materials are delivered to the shop floor on a frequent and as-needed basis.
(2) Differences between Lean and Six Sigma
There are some differences between Lean and Six Sigma, as noted below. Lean focuses on improving manufacturing operations in variation, quality and productivity, whereas Six Sigma focuses not only on manufacturing operations, but on all possible processes, including R&D and service areas. Generally speaking, a Lean approach attacks variation differently than a Six Sigma system does (Denecke, 1998), as shown in Figure 5.4.
Figure 5.4. Attacking method variation: Lean through routing standardization and one-piece flow, Six Sigma through tools such as FMEA.
Lean tackles the most common form of process noise by aligning the organization in such a way that it can begin working as a coherent whole instead of as separate units. Lean seeks to co-locate, in sequential order, all the processes required to produce a product. Instead of focusing on the part number, Lean focuses on product flow and on the operator. Setup time, machine maintenance and routing of processes are important measures in Lean. Six Sigma, however, focuses on defective rates and costs of poor quality due to part variation and process variation, based on measured data. The data-driven nature of Six Sigma problem-solving lends itself well to Lean standardization and the physical rearrangement of the factory. Lean provides a solid foundation for Six Sigma problem-solving, where the system is measured by deviation from, and improvements to, the standard. While Lean emphasizes standardization and productivity, Six Sigma can be more effective at tackling process noise and cost of poor quality.
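To illustrate what a defective rate "based on measured data" means in practice, here is a minimal sketch; the specification limits and process estimates are hypothetical, and a normally distributed characteristic is assumed:

```python
# Estimate the defective rate of a part dimension from measured data,
# assuming the characteristic is normally distributed.
# Spec limits and process estimates below are hypothetical.
from statistics import NormalDist

lsl, usl = 9.5, 10.5      # lower/upper specification limits
mu, sigma = 10.05, 0.15   # mean and standard deviation estimated from data

dist = NormalDist(mu, sigma)
# Fraction falling below the lower limit plus fraction above the upper limit
p_defective = dist.cdf(lsl) + (1.0 - dist.cdf(usl))
print(f"estimated defective rate: {p_defective:.4%}")
# This rate feeds the cost of poor quality: roughly p_defective * unit cost * volume.
```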
(3) Synergy effect
The author believes that Lean and Six Sigma, working together, represent a formidable weapon in the fight against process variation. Six Sigma methodology uses problem-solving techniques to determine how systems and processes operate and how to reduce variation in them. In a system that combines the two philosophies, Lean creates the standard and Six Sigma investigates and resolves any variation from it. In addition, the techniques of Six Sigma should be applied within an organization's processes to reduce defects, which can be a very important prerequisite to the success of a Lean project.
5.5 National Quality Awards and Six Sigma
The national quality awards, such as the Malcolm Baldrige National Quality Award (MBNQA), the European Quality Award, the Deming Prize and the Korean National Quality Grand Prize, provide sets of excellent, largely similar criteria for helping companies to understand performance excellence in operations. Table 5.2 lists these criteria. Let us denote these criteria, and the efforts directed toward performance excellence for quality awards, as a Self-Assessment program. Then, are Self-Assessment and Six Sigma the same?
Table 5.2. Overview of the criteria in some Self-Assessment models
Malcolm Baldrige National Quality Award: 1. Leadership; 2. Strategic planning; 3. Customer & market focus; 4. Information & analysis; 5. Human resource focus; 6. Process management; 7. Business results.
European Quality Award: 1. Leadership; 2. Policy & strategy; 3. People; 4. Partnership & resources; 5. Processes; 6. Customer results; 7. People results; 8. Society results; 9. Key performance results.
Deming Prize: 1. Organization; 2. Policies; 3. Information; 4. Standardization; 5. Human resources; 6. Quality assurance; 7. Maintenance; 8. Improvement; 9. Effects; 10. Future plans.
Korean National Quality Grand Prize: 1. Leadership; 2. Strategic planning; 3. Customer satisfaction; 4. Information & analysis; 5. Human resource management; 6. Process management; 7. Business results.
Some evidence indicates a relationship between Self-Assessment and Six Sigma. Firstly, since the launch of the MBNQA in 1987, at least two companies have received the prestigious award largely due to their Six Sigma programs: Motorola in 1988 and Defense Systems Electronics Group (now Raytheon TI Systems) in 1992. Secondly, a number of companies strongly promoting Self-Assessment are now launching Six Sigma programs. The best known is probably Solectron, the only two-time recipient of the MBNQA (in 1991 and 1997), which launched Six Sigma in 1999. Thirdly, the achievement towards excellence made by companies applying Six Sigma is as much as a 70% improvement in process performance per year.
However, there are some significant differences. While Self-Assessment is heavily diagnostic in nature, with criteria that guide companies towards excellence, Six Sigma is a much more action-oriented and pragmatic framework, embodying the improvement methodology, tools, training and measurements necessary to move towards world-class performance. Six Sigma focuses heavily on improvement projects to generate cost savings and revenue growth, with company-wide involvement of employees. Self-Assessment, on the other hand, has been criticized for contributing meagerly in terms of financial benefits and for depending solely on a cumbersome evaluation practice by a team of in-house experts. Furthermore, it does not systematically involve the broad mass of rank-and-file employees to the extent that Six Sigma does. However, the two kinds of initiative may very well support and complement each other. While Self-Assessment indicates important improvement areas, Six Sigma guides the action-oriented improvement process. They share the objective of excellence in operations. It is believed that Six Sigma constitutes the most pragmatic road to performance excellence.
7. Evaluate the company's Six Sigma performance from the customer's viewpoint, benchmark the best company in the world, and revise the Six Sigma roadmap if necessary. Go to step 1 for further improvement.
First of all, a small group of members should be appointed as a Six Sigma team to handle all kinds of Six Sigma tasks. The team is expected to prepare proper education and the long-term Six Sigma vision for the company. We can say that this is the century of the 3Cs: Changing society, Customer satisfaction and Competition in quality. The Six Sigma vision should be well matched to these 3Cs. Most importantly, all employees in the company should agree to and respect this long-term vision.
Second, Six Sigma can begin with proper education for all levels of the company. The education should begin with the top managers, the so-called Champions. If the Champions do not understand the real meaning of Six Sigma, there is no way for Six Sigma to proceed further in the company. After the Champions' education, GB, BB and MBB education should be completed in sequence.
Third, we can divide Six Sigma into three parts according to its characteristics: R&D Six Sigma, manufacturing Six Sigma, and Six Sigma for non-manufacturing areas. R&D Six Sigma is often called DFSS (Design for Six Sigma). It is usually not wise to introduce Six Sigma to all areas at the same time. The CEO should decide the order of introduction to these three areas. It is common to introduce Six Sigma to manufacturing processes first, and then to service areas and R&D areas. However, the order really depends on the current circumstances of the company.
Fourth, deploy CTQs for all processes concerned. These CTQs can be deployed by policy management or by management by objectives. Some important CTQs should be given to BBs to solve as project themes. In principle, the BBs who lead the project teams work full-time until the projects are finished.
Fifth, in order to firmly establish Six Sigma, some basic infrastructure is necessary, such as the scientific management tools SPC, KM, DBMS and ERP (enterprise resource planning). In particular, efficient data acquisition, data storage, data analysis and information dissemination are necessary.
Sixth, one day each month should be declared Six Sigma day. On this day, the CEO should personally check the progress of Six Sigma. All types of presentations of Six Sigma results can be given, and awards can be presented to persons who have performed excellently in fulfilling Six Sigma tasks. If necessary, seminars relating to Six Sigma can be held on this day.
Lastly, all process performances are evaluated to investigate whether they are being improved. The benchmarked company's performance should be used for process evaluation. Revise your vision or roadmap of Six Sigma, if necessary, and repeat the innovation process.
6.2 IT, DT and Six Sigma
(1) Emergence of DT
It is well known that modern technology for the 21st century is regarded as based on the following 6Ts:
IT: Information Technology
BT: Bio-Technology
NT: Nano-Technology
ET: Environment Technology
ST: Space Technology
CT: Culture Technology
We believe that one more T should be added to these 6Ts: DT, data technology.
Definition of DT (data technology): DT is a scientific methodology which deals with
Measurement, collection, storage and retrieval techniques of data;
Statistical analysis of data and data refinement;
Generation of information and inference from data;
Statistical/computational modeling from data;
Creation of necessary knowledge from data and information;
Diagnosis and control of current events from statistical models; and
Prediction of unforeseen events from statistical models for the future.
DT is an essential element for Six Sigma, and in general for national competitiveness. The importance of DT will rapidly expand in this knowledge-based information society.
(2) Difference between IT and DT
Many believe that DT is a subset of IT. This argument may be true if IT is interpreted in a wide sense. Generally speaking, however, IT is defined in a narrow sense as follows.
Definition of IT (information technology): IT is an engineering methodology which deals with
Presentation and control of raw data and information created by DT;
Efficient data/information and image transmission and communication;
Manufacturing technology of electronic devices for data/information transmission and communication;
Production technology of computer-related machines and software; and
Engineering tools and support for knowledge management.
Korea is very strong in IT industries such as the Internet, e-business, mobile phones, communication equipment and computer-related semiconductors. The difference between DT and IT can be seen in the information flow shown in Figure 6.1.
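To make the DT activities listed above concrete, the following toy sketch (all measurement values are hypothetical) walks through a miniature DT cycle: stored data are analyzed statistically, new events are diagnosed against 3-sigma limits, and a naive prediction is made:

```python
# Toy DT pipeline: analyze stored data, diagnose new events, predict.
# All measurement values here are hypothetical.
from statistics import mean, stdev

baseline = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.0, 10.1]  # stored history

# Statistical analysis: estimate the process centre and spread
centre, spread = mean(baseline), stdev(baseline)
lcl, ucl = centre - 3 * spread, centre + 3 * spread  # diagnosis limits

# Diagnosis and control: flag new observations outside the limits
new_points = [10.2, 12.9, 9.9]
flagged = [x for x in new_points if not (lcl <= x <= ucl)]

# Prediction (naive): forecast the next value as the baseline centre
forecast = centre
print(f"limits=({lcl:.2f}, {ucl:.2f}), flagged={flagged}, forecast={forecast:.2f}")
```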
Figure 6.1. Information flow in DT and IT: DT runs from fact through data collection to the creation of knowledge from information; IT provides engineering tools and support for knowledge.
DT is mainly concerned with data collection, statistical analysis of data, generation of information, and creation of necessary knowledge from information. However, IT is mainly concerned with data/information/image transmission and communication, and development of engineering devices and computers for information handling. Also IT is concerned with engineering tools for knowledge management. Generally speaking, DT forms the infrastructure of IT. Without DT, IT would have limitations in growth. DT is software-oriented, but IT is hardware-oriented and systems-oriented. Without IT, DT cannot be well visualized. IT is the vehicle for DT development. Table 6.1 shows the differences between DT and IT in terms of characteristics, major products, major study fields and advanced levels in Korea.
Table 6.1. Comparison of DT and IT
Major characteristics. DT: software-oriented, scientific approach for data analysis and statistical modeling for future prediction. IT: hardware- and systems-oriented engineering approach for transmission and communication of data/information/images.
Major products. DT: software such as DBMS, CRM, SPC and ERP. IT: communication systems and auxiliary software; computers, semiconductors, electronic devices, measuring and control devices.
Major study fields. DT: statistics, data-mining, simulation and cryptography. IT: computer engineering, electronic/communication engineering, control and systems engineering.
Advanced level in Korea. DT: low. IT: high.
(3) Knowledge triangle It is said that the 21st century is the knowledge-based information society. We can think about the knowledge triangle as shown in Figure 6.2 in which DT and IT play important roles.
Figure 6.2. The knowledge triangle: fact at the base, then data (1: DT), information (2: DT), knowledge (3: DT & IT) and wisdom (4: God's kingdom) at the top.
(4) Scope of DT The scope of DT can be divided into three categories: management, multiplication and execution. Management DT comes first, then multiplication DT, and finally execution DT, which provides value and profit generation for the organization concerned. The scope is shown sequentially in Figure 6.3.
Figure 6.3. Scope of DT.
Management DT: acquisition, storage and retrieval; basic analysis of data; creation of information.
Multiplication DT: minute analysis and re-explanation of results obtained; information is multiplied and regenerated by using DT; data-mining plays a large role; knowledge is created.
Execution DT: execution of generated knowledge; data/information transmission; higher value and bigger profit.
(5) Loss due to insufficient DT A weak DT can result in a big loss to a company, a society or a nation. Some examples of national loss due to insufficient DT are as follows.
Economic crisis in 1997: Korea faced an economic crisis in 1997, and the International Monetary Fund helped Korea at that time. The major reason was that important economic data, the so-called Foreign Exchange Stock (FES), had not been well taken care of. Had the collection of FES, trend analysis of FES and prediction of FES been well performed with good DT, there would not have been an economic crisis.
Inherent political disputes: Politics is perhaps the most underdeveloped area in Asia, including Korea. Non-productive political disputes hamper the development of all other areas such as industry, education and culture. If surveys of people's opinions were properly conducted using DT, and political parties simply followed the opinion of the majority of people, politics could become more mature and could assist all the other areas to develop further.
Big quality costs: The quality costs of most companies in Asia, including Korea, make up about 20% of total sales value. Quality costs consist of P-cost (prevention), A-cost (appraisal) and F-cost (failure). The ratios of these costs are roughly 1%, 3% and 16% of sales for P-cost, A-cost and F-cost, respectively. If DT were well utilized for the analysis of quality-cost data, the quality cost could be reduced to about 10% of total sales value; perhaps the optimal ratios would be 3%, 2% and 5% for P-cost, A-cost and F-cost, respectively. Indeed, Six Sigma project teams very much aim at reducing quality costs.
6.3 Knowledge Management and Six Sigma
(1) Knowledge-based Six Sigma
We think that Knowledge Management (KM) is very important in this knowledge-based information society. If Six Sigma and KM are combined, the result could be a very powerful management strategy. We propose the so-called Knowledge-Based Six Sigma (KBSS) as the combination of Six Sigma and KM. KBSS can be defined as a company-wide management strategy whose goal is to achieve process quality innovation corresponding to the 6σ level and customer satisfaction, through such activities as systematic generation/storage/dissemination of knowledge by utilizing the information technology of the Internet/intranet, databases and other devices. As shown in
Figure 6.4, there are some differences between Six Sigma and KM. However, there also exist some areas of intersection, such as data acquisition and utilization, data analysis, and generation of information.
Figure 6.4. Six Sigma and Knowledge Management as overlapping areas.
KBSS is a combination of KM and Six Sigma which can be developed as a new paradigm for management strategy in the digital society of the 21st century.
(2) Methodologies in KBSS
Process flow of improvement activities: In KM, Park (1999) proposed that a good process flow of improvement activities is the CSUE cycle shown in Figure 6.5. CSUE stands for Creating & Capturing, Storing & Sharing, Utilization and Evaluation. As explained previously, the well-known process flow of improvement activities in Six Sigma is MAIC.
Figure 6.5. Improvement cycles: the CSUE cycle in KM (Creating & Capturing, Storing & Sharing, Utilization, Evaluation) and the MAIC cycle in Six Sigma (Measure, Analyze, Improve, Control).
The CSUE and MAIC cycles can be intermixed in order to create an efficient cycle in KBSS. One way is to use the MAIC cycle in each step of CSUE, or to use the CSUE cycle in each step of the MAIC cycle. We believe that CSUE and MAIC are complementary to each other.
Project team activities: The project team activities of BBs and GBs for quality and productivity improvement are perhaps the most important activities in Six Sigma. If the concept of KM is added to these activities, more useful and profitable results could be achieved. We may call such activities KBSS project team activities. Through team efforts, we can create and capture information, store and share it, and utilize it in the MAIC process. Also, by using the MAIC process, we can create new information and follow the CSUE process.
Education and training: Education and training is the most fundamental infrastructure of Six Sigma. A systematic training program for the GB, BB, MBB and Champion levels is essential for the success of Six Sigma. In KM, too, without proper training, creation/storage/sharing/utilization would not be easy, and the process flow of knowledge would not be possible. It is often mentioned that the optimal education and training time in Six Sigma is about 5-7% of total working hours, and in KM about 6-8%. This means that more education and training time is necessary in KM than in Six Sigma. However, there is a lot of duplication between Six Sigma and KM, so the optimal education and training time in KBSS would be 8-10% of total working hours.
Information management: Information on areas such as customer management, R&D, process management, quality inspection and reliability tests is an essential element of Six Sigma. In KM also, information management concerning the storage, sharing and utilization of knowledge is the most important infrastructure. We believe that information management is essential in KBSS.
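One of the intermixings described above (running the MAIC cycle inside each CSUE step) can be pictured as a simple nested schedule; this is only an illustrative sketch of the idea:

```python
# Sketch of one KBSS intermixing: the MAIC cycle runs inside each CSUE step.
CSUE = ["Creating & Capturing", "Storing & Sharing", "Utilization", "Evaluation"]
MAIC = ["Measure", "Analyze", "Improve", "Control"]

# One KBSS pass: every CSUE step hosts a full MAIC cycle
schedule = [(stage, phase) for stage in CSUE for phase in MAIC]
for stage, phase in schedule:
    print(f"{stage:>20} -> {phase}")
```

The reverse intermixing (CSUE inside each MAIC phase) would simply swap the two loops.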
Scientific tools: Basic QC and statistical tools such as the 7 QC tools, process flowcharts, quality function deployment, hypothesis testing, regression and design of experiments can be used in KBSS. Some advanced Six Sigma tools, such as FMEA, benchmarking and marketing surveys, can also be used effectively in KBSS. These tools are helpful in analyzing data, obtaining information, evaluating processes statistically and generating knowledge. We can say that KBSS is based on these scientific and statistical methods.
6.4 Six Sigma for e-business
Recently, e-business has been increasing rapidly, and it is of great interest to consider Six Sigma for e-business. A suitable name that incorporates Six Sigma into e-business is e-Sigma. It is clear that the ultimate management concept of e-Sigma should be customer satisfaction. There are four ingredients of customer satisfaction management, labeled CQCD: convenience, quality, cost and delivery. To have an excellent e-Sigma system for providing convenient, high-quality, low-cost products and accurate and speedy delivery, the following e-Sigma model should be established in e-business companies. The voice of the customer (VOC) should be input into DFSS by using QFD, which converts the VOC into technical requirements. These technical requirements are reflected in design aspects for Six Sigma. An ERP scheme suitable for e-business should be employed to manage the necessary resources. An efficient SCM is also required for the systematic acquisition, handling, storage and transportation of products. In all processes of an e-business, the sigma level of each process should be evaluated and improved to assure high-quality performance. For customer-oriented quality management, CRM is required in e-business. Eventually, such an e-Sigma flow will guarantee high-level customer satisfaction and the simultaneous creation of new customers.
Figure. The e-Sigma model for e-business: the VOC is converted through QFD into input for e-DFSS, supported by ERP, SCM and CRM.
6.5 Seven-step Roadmap for Six Sigma Implementation
In Section 6.1, the seven steps for Six Sigma were introduced. These steps represent organizational implementation of Six Sigma at the beginning stage. While implementing these introductory steps, it is necessary to have a roadmap of Six Sigma improvement implementation. This roadmap also has seven steps, as follows.
Step 1: Set up the long-term vision of Six Sigma.
Step 2: Identify core processes and key customers.
Step 3: Define customer requirements and key process variables.
Step 4: Measure current process performance.
Step 5: Improve process performance.
Step 6: Design/redesign process if necessary.
Step 7: Expand and integrate the Six Sigma system.
Step 1: Set up the long-term vision of Six Sigma
Setting up the long-term vision, over a period of about 10 years, is important for Six Sigma implementation. Without this vision, the Six Sigma roadmap may be designed in a non-productive way. The CEO should be involved in forming this vision, and he or she should lead the Six Sigma implementation.
Step 2: Identify core processes and key customers
The following are the three main activities associated with this step.
Identify the major core processes of your business.
Define the key outputs of these core processes, and the key customers they serve.
Create a high-level map of your core or strategic processes.
In identifying the core processes, the following questions can help. What are the major processes through which we provide value (products and services) to customers? What are the primary critical processes in which there are strong customer requirements? In defining the key customers, we should consider the core process outputs. These outputs are delivered to internal or external customers. Very often the primary customers of many core processes are the next internal processes in a business. However, the final evaluation of our products or services depends on the external customers.
Step 3: Define customer requirements and key process variables
The sub-steps for defining customer requirements usually consist of the following:
Gather customer data and develop the voice of the customer (VOC);
Develop performance standards and requirements statements; and
Analyze and prioritize customer requirements.
When the customer requirements have been identified, key process variables can be identified through quality function deployment (QFD) and other necessary statistical tools.
Step 4: Measure current process performance
For measuring current process performance, it is necessary to plan and execute the measurement of performance against the customer requirements. It is then also necessary to develop baseline defect measures and identify improvement opportunities. For these activities, we need to obtain:
Data to assess the current performance of processes against customers' output and/or service requirements; and
Valid measures derived from the data that identify relative strengths and weaknesses in and between processes. Yield, rolled throughput yield (RTY), DPMO, DPU, COPQ and sigma quality level are often used as such valid measures.
Step 5: Improve process performance
The project team activity to prioritize, analyze and implement improvements is perhaps the essence of Six Sigma. Improvement efforts usually follow the DMAIC, IDOV or DMARIC process flows, which were explained before. The important activities at this step are as follows.
Select improvement projects and develop project rationale.
Analyze, develop and implement root-cause-focused solutions.
Step 6: Design/redesign process and maintain the results
Very often it is necessary to design or redesign a process for innovation purposes. If such a design/redesign is implemented, it is desirable to maintain and control the altered process in good shape. The important activities at this step are as follows:
Design/redesign and implement an effective new work process.
Maintain and control the new process in good shape.
Step 7: Expand and integrate the Six Sigma system
The final step is to sustain the improvement efforts, and to build all the concepts and methods of Six Sigma into an ongoing, cross-functional management approach. The key idea is to expand and integrate the Six Sigma system into a stable, long-term management system. Continuous improvement is a key link in the business management system of Six Sigma. The key actions for this purpose are as follows:
Implement ongoing measures and actions to sustain improvement;
Define responsibility for process ownership and management; and
Execute careful monitoring of the process and drive on toward Six Sigma performance gains.
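The baseline measures named in Step 4 (DPU, DPMO, RTY and sigma quality level) can be sketched numerically. The inspection figures below are hypothetical, and the sigma level uses the common 1.5σ shift convention:

```python
# Baseline defect measures for a hypothetical three-step process:
# DPU, DPMO, rolled throughput yield (RTY) and sigma quality level.
from statistics import NormalDist

units, defects, opportunities = 500, 34, 20   # hypothetical inspection data

dpu = defects / units                                   # defects per unit
dpmo = defects / (units * opportunities) * 1_000_000    # per million opportunities

# RTY: product of the first-pass yields of each process step (hypothetical)
step_yields = [0.98, 0.95, 0.99]
rty = 1.0
for y in step_yields:
    rty *= y

# Sigma quality level from DPMO, adding back the 1.5-sigma shift
sigma_level = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5
print(f"DPU={dpu:.3f}, DPMO={dpmo:.0f}, RTY={rty:.3f}, sigma={sigma_level:.2f}")
```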
(2) Cost/benefit perspective
Without investment, we cannot expect a big change or a big gain. Some of the most important Six Sigma investment budget items include the following:
Direct payroll: individuals dedicated to the effort full-time, such as BBs.
Indirect payroll: the time devoted by executives, team members, process owners and others to such activities as measurement, data gathering for the VOC (voice of the customer), and improvement projects.
Training and consulting: teaching people Six Sigma skills and obtaining advice on how to make the effort successful.
Improvement implementation costs: expenses related to the installation of new solutions or process designs proposed by project teams.
Other expenses, such as travel and lodging, facilities for training, and meeting space for teams.
Estimating potential benefits is not an easy task. There is no way to accurately estimate the gains without examining the improvement opportunities present in the business, and without planning the implementation to see what the relative payoff will be. However, the following benefits could be expected:
The total quality costs (prevention, appraisal and failure costs) can be reduced. Eventually, the costs of poor quality (COPQ) can be reduced substantially, and the company's profits can soar.
By improving quality and productivity through process evaluations and project team efforts, total sales and profits can increase dramatically.
Through a sound Six Sigma initiative, better strategic management, more systematic data collection and analysis, and efforts directed toward customer satisfaction will result in a better market image and customer loyalty.
As a result of systematic education through the belt systems, the cultivation and efficient utilization of manpower become possible, which eventually fosters employee pride in the company.

Based on the responses to the key questions and the cost/benefit analysis, a company can decide whether it should take up the Six Sigma initiative now or later. One important point to keep in mind when a company prepares to embark on Six Sigma efforts is that it should allow at least six to 12 months for the first wave of DMAIC projects to be completed and for concrete results to be realized. The company can push teams for faster results; giving them extra help or coaching as they work through their learning curve is a good way to accelerate their efforts, although it may also boost costs. It would be a mistake to forecast the achievement of big tangible gains sooner than six months. The company must have patience and make consistent efforts at the embarkation stage.

7.2 How Should We Initiate Our Efforts for Six Sigma?

When a company decides to start a Six Sigma initiative, the first important issue to resolve is "How and where should we embark on our efforts for Six Sigma?" Since Six Sigma is basically a top-down approach, the first action needed is a declaration of commitment by top-level management. In making these management decisions, it is best to look at the criteria affecting the scale and urgency of the efforts that will strengthen the company by removing its weaknesses. On this basis, top management should decide the Objective, Scope and Time-frame for the Six Sigma engagement.

(1) Determination of Objective
Every business desires good results from a Six Sigma effort, but the type of results and the magnitude of the changes may vary a great deal. For example, Six Sigma may be attractive as a means to solve nagging problems associated
with product failures or gaps in customer service, or as a way to create a responsive management culture for future growth. Each of these objectives could lead to different types of Six Sigma efforts. It is possible to define three broad levels of Objectives: Management innovation, Statistical measurement and process evaluation, and Strategic improvement by problem solving (see Table 7.1).
Table 7.1. Three levels of Six Sigma Objectives
Management innovation: A major shift in how the organization works, achieved through cultural change. This includes creating customer-focused management, abandoning old structures or ways of doing business, and creating a top-level, world-beating quality company.

Statistical measurement and process evaluation: All processes are statistically measured, and the sigma quality level of each process is evaluated. Poor processes are designated for improvement, and a good system of statistical process control is recommended for each process.

Strategic improvement by problem solving: Key strategic and operational weaknesses and opportunities become the targets for improvement. This includes quality and productivity improvement by problem solving, speeding up product development, enhancing supply chain efficiencies or e-commerce capabilities, shortening processing/cycle times, and project team efforts on key quality and productivity problems.
(2) Assessing the feasibility scope
What segments of the organization can or should be involved in the initial Six Sigma efforts? Scope is very important in the initial stage of Six Sigma. Usually we divide the whole company into three segments: the R&D part, the manufacturing part, and the transactional (or non-manufacturing) part. The manufacturing section is most often the target for initial Six Sigma efforts; however, the author is aware of some companies in Korea that began their efforts in the transactional section. It is desirable to consider the following three factors in determining the scope of the initial Six Sigma efforts:
Resources: Who are the best candidates to participate in the effort? How much time can people spend on Six Sigma efforts? What budget can be devoted to the start-up?
Attention: Can the business focus on the start-up efforts? Are people willing to listen to new ideas for management innovation?
Acceptance: If people in a certain area (function, business unit, division, etc.) are likely to resist, for whatever reasons, it may be best to involve them later. It is wise to start with the section that accepts the new Six Sigma efforts.

(3) Defining the time-frame
How long are you willing to wait to get results? A long lead-time to a payoff can be frustrating, and the time factor has the strongest influence on most Six Sigma start-up efforts. Top management should define the time-frame for Six Sigma implementation.

7.3 Does Six Sigma Apply Well to Service Industries?

Many service industries, such as banking, insurance, the postal service and public administration, often ask, "Does Six Sigma apply well to service industries?" Despite the successful application of Six Sigma in companies such as AIG Insurance, American Express, Citibank, GE Capital Services, NBC and the US Postal Service, executives and managers from the service industry very often wonder whether Six Sigma is applicable to their type of business. The primary response to this question is that Six Sigma has the potential to be successful in almost any industry. Since Six Sigma mainly focuses on customer satisfaction, variation reduction, quality improvement and reduction of COPQ, the results enjoyed by Six Sigma companies in the service industry are just as impressive as those of their counterparts in the manufacturing industries. Let's take the example of GE Capital Services. Three years after the launch of Six Sigma (1995 was the beginning year),
they reported: "In 1998, GE Capital generated over a third of a billion dollars in net income from Six Sigma quality improvements, double that of 1997. Some 48,000 of our associates have already been extensively trained in this complex process improvement methodology, and they have completed more than 28,000 projects." The framework in Six Sigma for ensuring and measuring that customer requirements are met should also be attractive to most service organizations. In Six Sigma, customers are asked to identify the critical characteristics of the services they consume and what constitutes a defect for each of those characteristics. The Six Sigma measuring system is built on this basis. It is true that many service companies find it difficult to measure their processes adequately. Compared to manufacturing processes, it is often more demanding to find appropriate characteristics to measure, and it is also difficult to measure the sigma quality level of a service process. In this case, a possible way to set up the quality levels for a service process is as follows:
6σ level: the ideal level to be reached, or the benchmark level of the best company in the world
3σ level: the current level of my company
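The relationship between a defect rate and a sigma quality level underlying such benchmarks can be sketched in a few lines of code. The function below is an illustrative aid, not part of the text; it uses the conventional 1.5σ long-term shift, under which the widely quoted benchmarks of 3.4 DPMO for 6σ and about 66,807 DPMO for 3σ hold.

```python
from statistics import NormalDist

def sigma_level(dpmo: float, shift: float = 1.5) -> float:
    """Convert defects per million opportunities (DPMO) to a sigma
    quality level, applying the conventional 1.5-sigma shift between
    short-term capability and long-term performance."""
    yield_fraction = 1.0 - dpmo / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + shift

# The standard Six Sigma benchmarks:
print(round(sigma_level(3.4), 2))     # 6.0 (3.4 DPMO)
print(round(sigma_level(66807), 2))   # 3.0 (66,807 DPMO)
```

A service company at the 3σ level can thus translate a target such as "reach 4σ next year" into a concrete defect-rate goal by inverting this conversion.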
According to the above levels, the company can then aim to achieve the intermediate levels of 4σ and 5σ. If the current level of the company is very poor, one can designate the company's level as 2σ.

7.4 What is a Good Black Belt Course?

(1) A Black Belt course
The content and duration of a Black Belt course differ from company to company. Most Korean companies take four five-day sessions and one final graduation