Acknowledgments
We thank our friends and professional associates for their assistance, particularly
Tim Brenton, Steve Adriano, and Vicki Shaw who helped with this text. We would
appreciate any comments regarding improvement and errata. It is our concern to be
accurate.
Bill Wortman
Quality Council of Indiana
602 West Paris Avenue
West Terre Haute, IN 47885
TEL: 800-660-4215
TEL: 812-533-4215
FAX: 812-533-4216
qci@qualitycouncil.com
http://www.qualitycouncil.com
Ron Crabtree
Ron Crabtree, CPIM, CIRM, is President of MetaOps, Inc., a consulting firm that
specializes in strategic business transformation. He is an internationally recognized
expert and author on business process improvement. Ron serves as adjunct faculty
for Villanova University developing and teaching Lean Six Sigma for the University
Alliance Online. Mr. Crabtree writes the "Lean Culture" Department in APICS
Magazine and is active in consulting and conducting seminars nation-wide.
Edwin Garro
Edwin Garro has spread quality and continuous improvement knowledge in Central
America for 20 years as college professor, quality manager, and general manager.
Edwin is a partner at Performance Excellence Solutions, an education, consulting,
and human capital organization. Mr. Garro is also a partner and CEO of
Ludovico.Produccion Grafica, a printing shop, where he practices what he teaches.
Edwin is a senior member of ASQ and founding member of ASQ Section 6000, IMU
Costa Rica. He pioneered the first ASQ certifications in Central America. Edwin has
trained more than 400 professionals for ASQ certifications. He is an ASQ CQE,
CQM/OE, and CSSBB. Edwin has a B.S. in Industrial Engineering from the Instituto
Tecnologico de Costa Rica, and an M.S. in Manufacturing Engineering (Graduate
Student of the Year) from the University of Massachusetts.
Glenn Gee
Glenn Gee is a Senior Quality Engineer at Champion Laboratories. Glenn previously
worked as a Manufacturing Extension Director, advising over 300 companies on
such diverse issues as lean production, quality, economic issues, strategic planning,
marketing, finances, and product innovation. Glenn is an Adjunct Professor for
Eastern Illinois University and has taught ASQ section review classes for the CQE,
CCT, and CMQ/OE. He holds five ASQ certifications.
Glenn has a B.S. in Industrial Engineering from Purdue University and an M.S. in
Industrial Technology from Indiana State University. Glenn is a Registered
Professional Engineer and a Senior Member of ASQ. Glenn is a certified Project
Management Professional and a Certified Training Consultant. Mr. Gee is a member
of the Board of Governors for Quality Systems Registrar, Sterling, VA.
M. Dale Metcalf
M. Dale Metcalf holds an M.B.A. in Production Management and Organizational
Development from Indiana University. Mr. Metcalf's early professional career
spanned 25 years, including manufacturing floor supervision, middle management,
vice president of corporate training, and project leader with an international
consulting firm. He is a certified ISO-9000 lead auditor, member of ASQ, SME, and
is a Ford Global 8-D problem solving trainer. Dale's consulting and training firm,
Metcalf Training Group, Inc., began operation in 1995.
Omar Mora
Omar Mora is the founder and CEO of Blackberry and Cross a consulting firm
located in San Jose, Costa Rica. Omar received a B.S. in Industrial Engineering at
Universidad Internacional de las Americas and an M.S. in Industrial Engineering at
Universidad Interamericana de Costa Rica.
Mr. Mora is a Senior Member of ASQ and founding member of ASQ Section 6000, in
Costa Rica. Omar Mora is an ASQ Certified Quality Engineer and Six Sigma Black
Belt. He is also an APICS Certified Production and Inventory Manager. Mr. Mora
developed the first Lean Enterprise Certification in Latin America.
Terrill R. Paradise
Terrill Paradise holds a B.S. in Quality Management and an M.S. in Engineering
Management from Kennedy Western University. Terrill is a Senior Member of ASQ
and holds certifications in the CQT, CQI, CQA, CQE, and CSSBB areas.
Wesley R. Richardson
Wesley R. Richardson is the Quality Knowledge Manager at Quality Council of
Indiana (QCI). In this capacity he writes, edits, and reviews materials created and
published by QCI. He has over 28 years of quality management experience,
including a commercial metallurgical testing laboratory, a medical device
manufacturer making MRI scanners, and a company manufacturing tungsten carbide
products for the coal mining and metal cutting industries. Wes has a B.S. in
Metallurgy from Massachusetts Institute of Technology, an M.S. in Metallurgy from
Case Western Reserve University, and an M.B.A. from the University of Kentucky.
Wes is a Senior Member of ASQ and currently holds twelve ASQ Certifications.
Bill Wortman
Bill Wortman is the CEO of Quality Council of Indiana - a quality publishing firm
located in Terre Haute, Indiana. He is a Senior Member of ASQ, former Chairman of
Section 0919, and Deputy Director of Region 9. Mr. Wortman currently holds eight
ASQ Certifications. Bill has instructed over 9,000 individuals in quality
fundamentals, including certification training for five ASQ Certifications. Mr.
Wortman has a B.S. in Metallurgical Engineering from N.C. State University. He
worked most of his professional life in the aluminum industry in a variety of
progressive technical and production management positions before starting Quality
Council of Indiana in 1988. Mr. Wortman has been author, co-author, or editor of
more than 30 quality related books and training CDs.
The fully explained solutions to all 400 questions are available through QCI in the
LSS Solutions Text.
Preface
This text is designed to provide professionals with a review of fundamental
knowledge and skills. It is also intended to be a Handbook for those interested in
taking the examinations offered by the Society of Manufacturing Engineers and
Villanova University on-line. It is anticipated that the American Society for Quality
will adopt a lean enterprise or LSS certification BOK in the near future. Test
questions have been written by the authors in most cases. They are provided at
the end of each Section and are printed on blue paper for easy distinction and
removal if required during an examination.
A little Dilbert® six sigma humor with permission of Scott Adams and United
Feature Syndicate, Inc.
Breyfogle, F.W., III. (2003). Implementing Six Sigma: Smarter Solutions Using
Statistical Methods, 2nd ed. New York: John Wiley & Sons.
Deming, W.E. (2000). Out of the Crisis. Cambridge, MA: The MIT Press.
Goldratt, E. (1992). The Goal. Great Barrington, MA: North River Press.
Hirano, H. (1995). 5 Pillars of the Visual Work Place. New York: Productivity Press.
Imai, M. (1986). Kaizen: The Key to Japan's Competitive Success. New York:
McGraw-Hill.
Juran, J.M. (1999). Juran's Quality Handbook, 5th ed. New York: McGraw Hill.
Levinson, W.A. & Rerick, R.A. (2002). Lean Enterprise: A Synergistic Approach to
Minimizing Waste. Milwaukee: ASQ Quality Press.
Pande, P.S., Neuman, R.P., & Cavanagh, R.R. (2000). The Six Sigma Way. New York:
McGraw-Hill.
Rother, M. & Shook, J. (2003). Learning to See. Cambridge, MA: Lean Enterprise
Institute.
Sharma, A. & Moody, P. E. (2001). The Perfect Engine. New York: Free Press.
Shingo, S. (1995). A Study of the Toyota Production System. New York: Productivity
Press.
Tague, N.R. (2004). The Quality Toolbox, 2nd ed. Milwaukee: ASQ Quality Press.
Womack, J. P., Jones, D. T., & Roos, D. (1991). The Machine that Changed the World:
The Story of Lean Production. New York: Harper Perennial.
Womack, J. & Jones, D. T. (2003). Lean Thinking: Banish Waste and Create Wealth
in Your Corporation. New York: Free Press.
I. Introduction
A. Bibliography
B. Lean Six Sigma BOK
C. Lean Six Sigma Glossary
C. Lean Pioneers
Recognize the origins of various lean enterprise techniques (Taylor, Ford,
Toyoda, Ohno, Shingo, etc.).
E. Organizational Leadership
Recognize the key organization roles and responsibilities in support of lean
six sigma. Describe how process inputs, outputs, and feedback impact the
enterprise system as a whole. Recognize the benefits of using strategic
balanced scorecards, SWOT analysis, and benchmarking to determine
improvement needs.
C. Project Selection
1. Process Elements
Describe the impact that people, materials, energy, equipment, and
information have on project selection.
2. Stakeholder Analysis
Recognize the importance of stakeholders (suppliers, stockholders,
management, employees, customers, and society) on the viability and
impact of projects.
3. Customer Data
Describe the importance of internal and external customer data in the
creation of improvement projects. Identify how surveys, focus groups,
complaints, etc. can be used to gather this data.
4. QFD
Identify how quality function deployment can be used to ensure that
customers' wants and needs are adequately heard.
5. Benchmarking
Recognize how process, performance, and strategic benchmarking can be
used for project selection. Define the basic benchmarking sequence.
1. Plan Elements
Identify and describe the stages of project management: planning,
scheduling, and controlling. Distinguish major project elements such as
project scope, milestones, goal statements, and required resources.
3. Planning Tools
Describe and differentiate the features of major project planning tools such
as PERT, CPM, Gantt Charts, and AND diagrams.
4. Project Documentation
Recognize project documentation techniques such as status reports,
milestone reporting, lessons learned, and document archiving.
A. Initiating Teams
Recognize the importance of improvement teams to both the company and
individuals. Describe basic team objectives and the need for management
support. Identify a variety of team arrangements for both lean six sigma and
other objectives.
B. Team Roles
Define and describe the roles and responsibilities of participants on both lean
six sigma and other teams including team members, sponsors, process
owners, black belts, green belts, champions, etc.
C. Team Stages
Describe the main stages of team evolution including forming, storming,
norming, performing, and adjourning.
E. Conflict Resolution
Describe how communications, conflict resolution, and negotiation
techniques are essential for effective team performance.
F. Team Tools
Define and apply basic team consensus techniques such as brainstorming,
nominal group technique, multi-voting, etc.
G. Performance Evaluation
Describe how team performance can be assessed both during and at the end
of a project.
V. Defining Opportunities
A. Project Charter
Describe the elements and importance of a project charter.
B. A3 Report
Describe the applications of A3 reports as project definition tools.
C. Definition Tools
Apply common problem definition tools such as affinity diagrams, cause-and-
effect diagrams, and Pareto diagrams.
D. Customer Inputs
Translate customer feedback such as CTQ trees, survey analysis, VOC
techniques, and Kano analysis into opportunities for improvement.
E. Lean Thinking
Understand key lean thinking concepts such as value, value stream, value
flow, pull value, and perfection.
H. Process Mapping
Recognize the importance of process mapping and its application to process
improvement. Contrast process mapping with the VSM technique.
I. Spaghetti Diagrams
Understand the use of spaghetti diagrams in depicting employee movement,
information flow, and work flows.
A. Process Analysis
2. Takt Time
Define how takt time measurement can form a basis for an improvement
in work flow. Describe the benefit of small batch sizes.
B. Data Collection
1. Types of Data
Recognize and differentiate between variable, attribute, and locational
data. Describe how attribute data can be converted to variable data.
4. Data Accuracy
Recognize basic data accuracy considerations and describe the
importance of random sampling.
C. Measurement Systems
Recognize the need for measurement system analysis and gage R & R.
Describe measurement error and common measurement terms.
1. Normal Distribution
Describe the application of histograms. Understand the use of z values in
determining normal distribution information.
3. Capability Indices
Recognize process capability and process performance indices. Describe
the difference between long-term and short-term capability.
B. Variable Relationships
1. Multi-Vari Analysis
Create multi-vari studies and interpret the difference between positional,
cyclical, and temporal variation.
1. Fundamental Concepts
Recognize fundamental hypothesis testing concepts such as the null
hypothesis, test statistic, types of errors, one-tail and two-tail tests,
practical versus statistical significance, and adequate sample size.
3. Means Tests
Apply various average based tests such as z, t, paired t, and p.
4. Variance Tests
Apply and interpret the results of variance based tests such as F and chi-
square.
A. Eliminating Wastes
B. Kaizen
Identify how Kaizen tools and techniques can be utilized to accomplish
process improvement.
C. Theory of Constraints
Understand the theory of constraints. Recognize the key TOC terms -
throughput, inventory, and operating expenses. Describe the drum-buffer-
rope strategy.
D. Design of Experiments
Define and describe common DOE terminology. Apply the basic elements of
experimental planning and execution. Construct randomized, Latin square,
full factorial, and fractional factorial designs.
A. Quality Controls
Describe the function of quality controls such as written procedures and work
instructions in directing product and process performance.
B. Control Plans
Define how control plans are developed and understand how they help hold
the gains from improvement activities. Identify who creates these plans and
maintains their use and effectiveness.
C. Control Charts
Describe the benefits of control charts (SPC) in controlling process
performance. Identify special and common causes. Construct and interpret
various types of control charts. Understand the pre-control technique.
E. Visual Systems
Distinguish how visual displays can be effectively used to make problems
apparent, clarify targets for future improvement, and influence and direct
employee behavior.
F. Standard Work
Identify how standards and standard work techniques can be used to
minimize wastes and ensure more consistent performance.
G. Training
Recognize the importance of employee training as both a preventive and
control technique. Identify the importance of management support, training
needs assessments, necessary resources, and other training fundamentals.
X. Design Improvement
A. DFSS
Understand how the basics of DFSS are applied. Distinguish between
techniques such as IDOV and DMADV.
D. FMEA/FMECA
Define and distinguish between FMEA and FMECA. Describe design and
process FMEAs (DFMEAs and PFMEAs).
Seiri (Sort) - Eliminating everything not required for the work being performed.
5 Whys - A simple technique used to reveal the root cause (as opposed to the
symptoms) of a problem. This approach asks the question "why" until the root cause
is finally discovered.
Green - No problems
Yellow - Situation requires attention
Red - Production stopped; attention urgently needed
CPM - An event oriented, project planning technique meaning critical path method.
Cycle Time - The normal time to complete a product or service operation. This is not
the same as takt time.
DMAIC - The core problem solving methodology used by many lean six sigma
companies. The term refers to the steps: define, measure, analyze, improve, and
control.
Enterprise Resource Planning (ERP) - ERP adapts the techniques of MRPII to all
areas of an organization (as opposed to the manufacturing arena). ERP is usually
implemented as a comprehensive business software solution.
FMECA - A design review process referring to failure mode effect criticality analysis.
Gantt Charts - A form of bar chart used to display project planning activities.
Just-in-Time (JIT) - A production scheduling concept that calls for any item needed
at a production operation (whether raw material or finished item) to be produced and
available precisely when needed (not earlier or later).
Kaizen - The philosophy that every process can and should be continually evaluated
and improved in terms of the time required, resources used, resultant quality, etc.
1. All production and movement of parts and material takes place only as
required by a downstream operation.
3. The quantity authorized per individual kanban is minimal, ideally one. The
number of circulating kanban for an item is determined by the demand rate for
the item and the time required to produce or acquire it.
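The sizing rule in item 3 lends itself to a simple calculation. The sketch below is a common illustration, not taken from this handbook; the function name, safety factor, and example quantities are hypothetical.

```python
import math

def kanban_count(demand_per_hour, lead_time_hours, container_qty, safety_factor=0.10):
    """Estimate circulating kanbans: demand during the replenishment lead time,
    padded by a safety allowance, divided by the quantity held per container."""
    demand_during_lead_time = demand_per_hour * lead_time_hours
    return math.ceil(demand_during_lead_time * (1 + safety_factor) / container_qty)

# Hypothetical cell: 60 parts/hour demand, 2-hour replenishment, 25 parts per container
print(kanban_count(60, 2, 25))  # -> 6 kanbans in circulation
```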
Lean Manufacturing - The philosophy of continually reducing waste in all areas and
in all forms. This English phrase often refers to the Toyota production system.
Level Loading - The smoothing or balancing of the work load in all steps of a
process.
Line Balancing - The equalization of the cycle times for units of the manufacturing
process, through the proper assignment of workers and machines to ensure smooth
production flow.
Muda (waste) - A Japanese term meaning any activity that consumes resources but
creates no value. Those activities and results that should be eliminated. Many
references cite the following seven categories of waste:
Non-Value-Added - Those actions that the customer is not willing to pay for. Any
activity that does not add value to the product or service.
NPV - An acronym representing net present value. This calculation considers cash
flow, time, and interest rates.
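As a minimal sketch of that calculation (the cash flows and discount rate below are hypothetical, not from this text), each period's cash flow is discounted back to the present and summed:

```python
def npv(rate, cash_flows):
    """Net present value: cash flows discounted back to period 0.
    cash_flows[0] is the initial (usually negative) investment."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: $100,000 outlay, then $40,000 per year for four years at 10%
print(round(npv(0.10, [-100_000, 40_000, 40_000, 40_000, 40_000]), 2))  # 26794.62
```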
One Piece Flow - The concept of reducing production batch sizes to a minimal
amount, preferably a single unit. This can have dramatic effects on raw material,
WIP, finished goods inventories, production lead times, quality, and costs.
PDCA - A general problem solving methodology representing the steps: plan, do,
check, and act.
Perfection - The complete elimination of muda so that all activities, along a value
stream, create value.
Point of Use Inventory - Inventory that is delivered to the location where it will be
consumed.
Queue Time - The time a product spends awaiting the next processing step.
Seiban - Seiban is the name of a Japanese management practice taken from the
Japanese words "sei", which means manufacturing, and "ban", which means
number. A Seiban number is assigned to all parts, materials, and purchase orders
associated with a particular customer's job or project. This enables a manufacturer
to track progress.
Setup Time - The time required to change over a machine or process from one item
or operation to the next item or operation. This time can be divided into two types:
1. Internal: Setup work that can be done only when the machine or process is
not actively engaged in production.
Shojinka - Continually optimizing the number of workers in a work center to meet the
type and volume of demand imposed on the work center. Shojinka requires workers
trained in multiple disciplines and a supportive work center layout (such as
U-shaped or circular).
Single Piece Flow - A situation in which one complete product proceeds through
various operations like design, order taking, and production, without interruptions,
back flows, or scrap. This is in contrast with batch-and-queue arrangements.
SIPOC - A term implying a high-level process map focusing on suppliers, inputs,
processes, outputs, and customers.
Skills Matrix - A work cell visual control depicting all work activities. It provides
assistance in the cross-training of team members.
Small Lot Principle - Effectively reducing lot size until the optimum of one piece flow
is realized.
Standard Work - A precise description of each work activity, specifying cycle time,
takt time, the work sequence of specific tasks, and the minimum inventory of parts
needed to conduct the activity.
Takt Time - Takt time is the available production time divided by the rate of customer
demand. For example, if customers want 480 widgets per day and the factory
operates 960 minutes per day, the takt time is two minutes. Takt time becomes the
heartbeat of any lean organization. Takt is a German term for rhythm. Takt time is
the rate at which customers demand a product and is not the same as cycle time.
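The arithmetic in the glossary's widget example can be sketched directly (the function name is illustrative only):

```python
def takt_time(available_minutes, units_demanded):
    """Takt time = available production time / customer demand over that time."""
    return available_minutes / units_demanded

# Glossary example: 960 minutes of production time, 480 widgets demanded per day
print(takt_time(960, 480))  # -> 2.0 minutes per widget
```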
TRIZ - A Russian abbreviation for "the theory of inventive problem solving." The
term is pronounced "trees." It consists of 9 action steps and some 40 basic
principles.
Value - From the perspective of the customer, value represents those aspects or
features of products or services that they are willing to pay for.
Value-Added - Those steps that transform raw materials or activities directly into the
features for which the customer assigns value.
Value Stream - The specific activities required to design, and provide a specific
product, from concept to launch, from order to delivery.
Waste - All overproduction ahead of demand, waiting for the next processing step,
unnecessary transport of materials, excessive inventories, unnecessary employee
movements, and production of defective parts.
WCM (World Class Manufacturing) - The philosophy of being the best, the fastest,
and the lowest cost producer of a product or service. It implies the constant
improvement of products, processes, and services in order to remain an industry
leader.
Work Cell - The layout of machines or business processes of different types,
performing different operations in a tight sequence (typically a U-shape or L-shape),
to permit single piece flow and flexible deployment of human effort.
References
1. The Northwest Lean Networks. Retrieved November 15, 2006, from
http://www.nwlean.net
2. Wortman, B.L., et al. (2001). CSSBB Primer. Terre Haute, IN: Quality Council
of Indiana.
Six sigma has been defined in a variety of ways. One definition states, "Six sigma
is ... a business strategy and philosophy built around the concept that companies
can gain a competitive edge by reducing defects in their industrial and commercial
processes." (Harry, 2000)24
A few key characteristics of lean and six sigma are discussed and compared below.
There are some explanations from the points of view of lean and six sigma purists.
One of the selling points that some six sigma gurus tout is that six sigma zeroes in
better on "big bang" improvements. Black belts are expected to target and achieve
large bottom line savings in projects every year.
Both six sigma and lean empower people to create process stability and a culture
of continuous improvement. The cornerstones of a lean strategy are tools such as
value stream mapping (VSM), workplace organization (5S), total productive
maintenance (TPM), kanban/pull systems, kaizen, setup reduction, teamwork, error
proofing, problem solving, cellular manufacturing, and one-piece flow.
Many problem identification and problem solving techniques are commonly used
with both lean and six sigma methodologies. These include brainstorming, cause-
and-effect diagrams, 5 "whys", Pareto analysis, 8-Ds, FMEAs, and others. Both six
sigma and lean methodologies have a heavy emphasis on careful problem definition.
Six sigma better promotes a rigorous, systematic process to find the true root
cause(s) of the problem.
Value stream mapping (VSM) is the principal lean diagnostic tool. It is credited to
Toyota, who called it material and information flow mapping. The methodology was
developed into a viable tool for the masses by Rother and Shook in 1998 in the text
Learning to See (2003)50. VSM creates a visual representation of what is happening
in a process to improve system performance. Process mapping is a tool favored by
the six sigma community and is best used to identify the inputs, outputs, and other
factors that can affect a process. (Crabtree, 2004)9
Should six sigma and lean coexist in any organization? Ron Crabtree feels the
answer to this question is self-evident: Yes. He feels that lean approaches should
precede and coexist with the application of six sigma methods. Why? Put simply,
lean provides stability and repeatability in many basic processes. Once stability has
taken hold, much of the variation due to human processes goes away. The data
collected to support six sigma activities thereby becomes much more reliable and
accurate.
• Minimize variation
• Apply scientific problem solving
• Utilize robust project chartering
• Focus on quality issues
• Employ technical methodologies
Most executives recognize that they have a combination of both sets of issues.
Placing lean six sigma in the middle of this continuum reflects a more holistic and
synergistic approach. If a specific problem requires only lean or six sigma tools,
then that is perfectly acceptable. Lean six sigma is a relatively new paradigm providing
a broader selection of approaches. If the only tool in a company's bag is a hammer, then
all problems start to look like a nail. It is best to have a tool kit with a broader set of
tools, principles, and ways of thinking. (Crabtree, 2006), (Crabtree, 2004)9
Quality Digest (November, 2006)48 cites research from Avery Point Group (a search
firm specializing in lean and six sigma placement) indicating that lean and six sigma
are destined for eternal togetherness. According to Avery Point Group
approximately one-half of employers are looking for employees with both lean and
six sigma skill sets.
Six years ago, books published on the combined use of lean and six sigma were
virtually nonexistent. Today, they represent almost one-half of the lean books and
25% of the six sigma books published. Tim Noble, manager of the Avery Point
Group states, "Those companies that perpetuate the divide between six sigma and
lean are clearly missing the point. The two are clearly complementary tool sets, not
competing philosophies."
On the following pages are additional descriptions of six sigma and lean enterprise.
This book unifies the discussion of lean six sigma by use of the DMAIC problem
solving approach. Obviously, other systems would work equally well, as long as
they are communicated and known to the organization. Refer to Table 2.3 below for
some applications of the various lean six sigma tools at various problem solving
stages.
The student should note that there are a multitude of effective tools in addition to
those listed above.
Motorola, under the direction of Chairman Bob Galvin, used statistical tools to
identify and eliminate variation. From Bill Smith's yield theory in 1984, Motorola
developed six sigma as a key business initiative in 1987. Many credit the resulting
improvements as a key factor in Motorola winning the Malcolm Baldrige Award for
Business Excellence in 1988. Dr. Mikel Harry, who had led the corporate effort,
subsequently left Motorola and later founded the Six Sigma Academy. The purpose
of the Six Sigma Academy is to accelerate the efforts of corporations to achieve
world-class standards. (Harry, 1998)23
Sigma is a statistical term that refers to the standard deviation of a process with
regard to its mean. In a normally distributed process, 99.73% of measurements will
fall within ± 3.0 sigma and 99.99932% will fall within ± 4.5 sigma.
Motorola noted that many operations, such as complex assemblies, tended to shift
1.5 sigma over time. So a process, with a normal distribution and normal variation
of the mean, would need to have specification limits of ± 6 sigma in order to produce
less than 3.4 defects per million opportunities. This failure rate can be referred to
as defects per opportunity (DPO), or defects per million opportunities (DPMO).
(Figure: normal distribution curve, with the horizontal axis marked from -6 to +6 sigma)
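A minimal sketch of how the percentages and the 3.4 DPMO figure above can be reproduced, assuming a normal distribution, a 1.5 sigma mean shift, and the availability of SciPy:

```python
from scipy.stats import norm

# Fraction of a normal process falling within +/- k sigma of the mean
for k in (3.0, 4.5, 6.0):
    inside = norm.cdf(k) - norm.cdf(-k)
    print(f"within +/- {k} sigma: {inside:.7%}")

# Defects per million opportunities for a 6 sigma process whose mean shifts 1.5 sigma:
# essentially only the tail beyond (6 - 1.5) = 4.5 sigma contributes
dpmo = norm.sf(6.0 - 1.5) * 1_000_000
print(f"6 sigma with a 1.5 sigma shift: {dpmo:.1f} DPMO")  # approximately 3.4
```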
Note that Table II in the Appendix provides defect levels at other sigma values.
Various authors report slightly different failure rates based mainly upon rounding
effects and slight miscalculations. Most of the differences occur at levels less than
3 sigma. However, in looking at this situation objectively, companies with less than
3 sigma capability, and with ± 1.5 sigma shifts, probably won't be around long
enough to undertake a six sigma improvement effort anyway.
It should be noted that the term "six sigma" has been applied to many operations
including those with distributions that are not normal, for which a calculation of
sigma would be inappropriate. The principle remains the same: deliver near-perfect
products and services by improving the process and eliminating defects. The end
objective is to delight customers.
Harry (2000)24 proposes that the entire six sigma breakthrough strategy should
consist of the following eight elements:
The business successes that result from a six sigma initiative include:
Harry (1998)23 reports that the average black belt (or green belt) project will save
about $175,000. There should be about 5 to 6 projects per year, per black belt. The
ratio of 1 black belt per 100 employees can provide a 6% cost reduction per year.
For larger companies, there is usually 1 master black belt for every 100 black belts.
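A back-of-the-envelope check of Harry's figures (the per-project savings and project counts are his; the implied cost base below is our own arithmetic, not a figure from the text):

```python
# Harry (1998): roughly $175,000 saved per project and 5 to 6 projects
# per black belt per year, with about 1 black belt per 100 employees.
savings_per_project = 175_000
projects_per_year = 5  # low end of the quoted range
annual_savings_per_black_belt = savings_per_project * projects_per_year
print(annual_savings_per_black_belt)  # $875,000 per year per 100 employees

# For that to amount to a ~6% annual cost reduction, the cost base covered by
# those 100 employees would need to be roughly $875,000 / 0.06, about $14.6 million.
print(round(annual_savings_per_black_belt / 0.06))  # 14583333
```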
Organizations that follow a six sigma improvement process for several years find
that some operations achieve greater than six sigma quality. When operations reach
six sigma quality, defects become so rare that when defects do occur, they receive
the full attention necessary to determine and correct the root cause. As a result, key
operations frequently end up realizing better than six sigma quality.
• Motorola • AlliedSignal
• General Electric • Black & Decker
• Dupont • Dow Chemical
• Polaroid • Federal Express
• Kodak • Boeing
• Sony • Johnson & Johnson
• Toshiba • Navistar
Lean Enterprise *
The lean enterprise encompasses the entire production system, beginning with the
customer. It includes sales outlets, the final assembler, product or process design,
and all tiers of the supply chain (including raw materials). Any truly lean system is
highly dependent on the demands of its customers and the reliability of its suppliers.
No implementation of lean manufacturing can reach its full potential without
including the entire enterprise in its planning.
Generally, five areas drive the lean producer: cost, quality, delivery, safety, and
morale. Just as mass production is recognized as the production system of the 20th
century, lean production is viewed as the production system of the 21st century.
Typically, Japanese terms are used in defining lean principles in order to convey
broad concepts with iconic (representative) terminology. Once properly explained,
the term "kanban" can be more descriptive than "those little cards which help
control product moves." However, use of these terms can have a negative effect,
especially if the culture of a particular organization is predisposed against all things
non-American. One should choose carefully the training methods (and terms) for
conveying lean tools and methods.
Lean Pioneers
The following is a list of major contributors to the concept of lean enterprise.
Lean Pioneer Contribution
Frederick W. Taylor Wrote Principles of Scientific Management
Divided work into component parts
Was the foremost efficiency expert of his day
Applied scientific methods to maximize output
Henry Ford Known as the father of mass production
Advocated waste reduction
Founded Ford Motor Company
Brought affordable transportation to the masses
Sakichi Toyoda Known as a hands-on inventor
Developed the jidoka concept
Initiated the Toyota Motor Company (TMC)
Kiichiro Toyoda Continued the work of his father Sakichi
Promoted mistake proofing concepts
Became president of Toyota Motor Company
Eiji Toyoda Was the cousin of Kiichiro Toyoda
Developed an automotive research lab
Hired outstanding people within TMC
Became the Chairman of TMC
Taiichi Ohno Created the Toyota production system (TPS)
Integrated the TPS into the supply chain
Had the vision and focus to eliminate waste
Shigeo Shingo Developed the SMED system
Assisted in the development of other TPS elements
James Womack Well-known promoters of lean enterprise
Daniel Jones Co-authors of major lean thinking books
Anand Sharma CEO of TBM Consulting Group
Author of prominent books on lean enterprise
Michael L. George Widely known for lean six sigma books
Founder of the George Group
Frederick Taylor was born into a wealthy Philadelphia family, but chose the career
of an engineer. He started working as an apprentice in a machine shop and became
a foreman. In his spare time, he obtained a Mechanical Engineering degree from
Stevens Institute of Technology in 1883. From the very beginning, he focused on
improving the work methods and the efficiencies of the shop. This characteristic no
doubt led him to develop his own system and thus be called the "father of scientific
management."
Frederick Taylor was the first efficiency expert; the original time and motion study
specialist. He applied scientific methods to obtain maximum output. This was
accomplished by having management in control of the workplace and by detailing
the minute routine of the worker. Through operations analysis, Taylor took away job
complexity. That is, he could now take a person from the street and train that person
to do a simpler operation. Work now required less brains, less muscle, and less
independence. "Taylorism" was "the application of scientific methods to obtain
maximum efficiency in industrial work." (Kanigel, 1999)31
In his book, The Principles of Scientific Management, Taylor emphasized that the
employer and the employee must both "prosper" through his system. One can have
high employee wages and low manufacturing costs. Maximum prosperity can exist
as the result of maximum productivity. Some key Taylor concepts are:
The Ford Motor Company was founded in 1903 with the introduction of the Model A.
By 1908, after 20 design changes, the Model T was created. Mr. Ford had a vehicle
that was designed for both ease of manufacture and ease of use. The vehicle had parts
with interchangeability and simplicity. The common man was able to drive and
repair his own car. In 1927, a second Model A was launched to match the features
offered by other U.S. competitors.
Henry Ford was the master of "mass production." The successful implementation
of the assembly line at the Highland Park, Detroit plant in 1913 reduced costs and
increased productivity for Ford Motor Company. The reduced manufacturing costs
made cars more affordable for Americans. (The Henry Ford website)63
In 1908, the workers required an average station task time of 514 minutes. With
improved work techniques and time/motion studies, the average task was reduced
to 2.3 minutes. In 1913, the introduction of the assembly line pushed the average
task cycle time down to 1.19 minutes. This was accomplished by reducing the
complexity of the task. The operator did not need to be a skilled craftsman. It must
be noted that the use of the assembly line resulted in a labor turnover rate of 380%
in the beginning of 1913, and 900% by year's end. On January 4, 1914, wages were
doubled to $5.00 per day. The increased wages resulted in a much improved
retention rate. In 1915, the Highland Park Plant had 7,000 workers. There were 50
different languages spoken at the plant. Therefore, the reduced complexity of the
task aided in the training of new workers.
Mr. Ford went beyond just managing the internal resources of the plant. He sought
to reduce costs and increase productivity by controlling the costs of raw materials.
The River Rouge plant near Dearborn, Michigan was a great example of vertical
integration. Ford Motor Company had a steel mill for producing steel, a glass
factory for windshields, rubber plantations in Brazil and iron ore mines in Minnesota.
Ford owned the ships that carried the ore.
(Womack, Jones, & Roos, 1990)65, and (Kanigel, 1999)31
1. 81,000 employees
2. 6,952,000 square feet of production space
3. $268,991,552 in investment costs (The Henry Ford website)63
Mr. Ford was an advocate of reduced waste in every operational area. Some
examples include:
• Using straw from his farm to make "Fordite" for steering wheels
• Reworking and reusing worn steel rails
• Remelting scrap steel at the River Rouge plant
• Reworking broken tools and equipment
• Converting used paper, rags, and hardwood into binder board
The Toyota Motor Company (TMC) was spun-off as a separate company in 1937.
From the beginning, the concept of just-in-time production was used. Due to a lack of
materials, this concept had to be used for economic reasons and to increase cash flow. Mr.
K. Toyoda was very much influenced by his trips to Ford plants and by seeing the
supermarket process of restocking goods on the shelves. Toyota Motor Company
faced bankruptcy during the post war years due to inflation and credit management
problems. The situation even led to the layoff of workers and to a series of strikes.
In a classic show of the sense of obligation and responsibility, Kiichiro Toyoda took
responsibility for this failure and resigned as President. By 1950, after 13 years of
manufacturing, 2,685 automobiles had been produced by TMC, compared to 8,000
per day from the Ford River Rouge plant. Kiichiro Toyoda was asked to return as
President in early 1952, but died suddenly within the year at the age of 57.
(Womack, 1990)65, (Liker, 2004)37, (Toyoda, 1987)62
During 1950, he traveled to the United States for a 3-month tour of the auto plants
and their suppliers. This trip provided evidence to Eiji Toyoda that little Toyota
Motor Company could compete in the automotive arena, but not using the same
"mass production" techniques. There was waste in the system and TMC could build
a new system from that. (Note that at this time Toyota was producing 40 units per
day, while Ford Rouge was at 8,000 per day.) In 1955, Eiji Toyoda drove the first
"Crown" passenger car off the assembly line. The Crown is credited with
transforming TMC into a large company. E. Toyoda was President of Toyota Motor
Company from 1967 - 1982. During that time period he sponsored Taiichi Ohno's
hard work inside TMC. Upon the merger of Toyota Motor Company and Toyota
Motor Sales, he served as Chairman from 1982 - 1994.
(Liker, 2004)37, (Toyoda, 1987)62
In the 1950s, he also toured the United States auto plants to view and evaluate the
"mass production" process. From the tour, Ohno learned that the mass production
system could achieve economies of scale and reduced costs, but the system was
still full of waste. The waste was present in the forms of over production, excess
inventory, long setup times, rework, etc. Earlier, Kiichiro Toyoda had set an
"impossible" goal for Toyota Motor Company to catch up with America. The initial
estimates of productivity were 9:1. That is, it took nine Japanese workers to equal
the productivity of one American. The adoption of the customized mass production
system with elimination of waste could be the method for catching up.
Mr. Ohno had the vision and focus to uncover and eliminate waste both within
Toyota and their suppliers. He was fascinated by the obvious and would unravel
invisible problems. He stated that he could focus on a situation, mentally run the
process sequence forward and then in reverse to uncover problems. Ohno would
immediately put his ideas to the test.
From 1950 on, as a manager and executive, and with the backing of President Eiji
Toyoda, he pushed and fought to install the concepts of lean throughout Toyota and
into the supply base. This was not an easy task. It was very difficult to overcome
the conventional wisdom that things are fine as they are. He had many clashes with
his superiors on the TPS, due to his "take-no-prisoners" approach. Upon extending
TPS throughout the supply base, Ohno retired from Toyota in 1978 as an Executive
Vice-President.
Fearing retribution against his assistants, Ohno formed the Shingijutsu Consulting Group
as a way for his loyal assistants to leave Toyota. Shingijutsu means "new
technology." The assistants included Yoshiki Iwata and Chihiro Nakao. The
consulting group has been active in the United States. They have maintained the
style of training used at Toyota. (Liker, 2004)37, (Ohno, 1988)42, (Womack, 1996)65
By 1959, Shigeo Shingo formed his own consulting firm, Institute of Management
Improvements, and provided consulting throughout the Far East. Much of his work
centered on mistake-proofing, zero quality control, and supplier sourcing. It was
not until 1969, at the Toyota Motor Company when Taiichi Ohno demanded the
impossible, that the SMED (Single Minute Exchange of Die) concept really came to
life. Ohno's demand was to reduce setup changes from 1.5 hours to 3 minutes. It
had previously been 4 hours, so 3 minutes seemed impossible. But, within three
months the goal was accomplished. (Shingo, 1989)56, (Utah State University, 2006)8
Shigeo Shingo trained and consulted for TMC from 1954 to 1982. During that time
he conducted over 87 sessions involving over 2,000 students. While he was not a
Toyota employee, he was a consultant that assisted in the development of the
Toyota Production System. In 1988, he was awarded an Honorary Doctorate in
Business from Utah State University. The Shingo Prize was established by the
College of Business, Utah State University to promote lean/world-class business
practices to enable a company to compete globally. The first winner, in 1989, was
Globe Metallurgical, Inc., Cincinnati, Ohio.
Dr. Womack received a Ph.D. in Political Science from MIT in 1982 and was a
research scientist at MIT from 1979 to 1991. Professor Jones was Professor of
Manufacturing at Cardiff University Business School in 1989 and Founding Director
of the Lean Enterprise Research Centre (1994-2001). Womack and Jones have
established a global network on lean manufacturing with individual networks in
America and in Europe. (Daniel T. Jones n.d.)27, (Kleiner 2006)35
Anand Sharma
Anand Sharma is President and CEO of TBM Consulting Group, Durham, North
Carolina. He has been profiled by Fortune magazine as one of the "Heroes of U.S.
Manufacturing" (March 2001). In 2002, the Society of Manufa,cturing Engineers
awarded Mr. Sharma the Donald C. Burnham Manufacturing Award for achieving
manufacturing excellence without sacrificing human capital. His supporters state
that he is an expert who can figure out what is wrong with an organization by
walking the shop floor. He proclaims, "Where other people see complexity, I look
at how simple things can be." His company, TBM Consulting Group, employing over
70 employees, has worked with over 500 enterprises on improving manufacturing
productivity and profits. Mr. Sharma prides himself on refusing to work with firms
that will lay off workers due to the use of his system.
Michael George
Michael George is Chairman and CEO of The George Group based in Dallas, Texas.
His company has worked with over 300 clients focusing on operational performance
and shareholder value through six sigma, lean six sigma, management of
complexity, and innovation efforts. Mr. George has a B.S. in Physics from the
University of California and an M.S. in Physics from the University of Illinois. His first
assignment was at Texas Instruments in 1964. In 1969, he founded International
Power Machine, which he sold to Rolls-Royce. The funds from the sale enabled him
to travel to Japan to study the Toyota Production System. The George Group was
formed in 1986. Mr. George is the holder of several patents on the reduction of
process cycle time and complexity. He has authored or co-authored a multitude of
lean six sigma books including: Fast Innovation, Lean Six Sigma, Lean Six Sigma for
Service, and Conquering Complexity in Your Business. (George Group, nd.)21
Guru Contribution
Philip B. Crosby Senior management involvement
4 absolutes of quality management
Quality cost measurements
W. Edwards Deming Plan-do-study-act (wide usage)
Top management involvement
Concentration on system improvement
Constancy of purpose
Armand V. Feigenbaum Total quality control/management
Top management involvement
Kaoru Ishikawa 4M (5M) or cause-and-effect diagram
Companywide quality control (CWQC)
Next operation as customer
Joseph M. Juran Top management involvement
Quality trilogy (project improvement)
Quality cost measurement
Pareto analysis
Walter A. Shewhart Assignable cause vs. chance cause
Control charts
Plan-do-check-act (as a design approach)
Use of statistics for improvement
Genichi Taguchi Loss function concepts
Signal to noise ratio
Experimental design methods
Concept of design robustness
Bill Smith First introduced the term "six sigma"
Mikel Harry The main architect of six sigma
Forrest Breyfogle III Author of Implementing Six Sigma
Philip Crosby started his career as a junior technician testing fire control systems
for B-47s. He eventually moved on to ITT and became one of the first corporate VPs
of quality in the country. He attributed his management training to Harold Geneen
and to the monthly general management meetings. It was Philip Crosby's deep
understanding of the concerns of management that made him akin to top
management. The other quality deep thinkers could be viewed as academicians, but
Crosby was considered a businessman. This explains the number of top
managers who flocked to his quality college.
Crosby believed that quality was a significant part of the company and senior
managers must take charge of it. He believed the quality professional must become
more knowledgeable and communicative about the business. Crosby stated that
corporate management must make the cost of quality a part of the financial system
of their company.
Philip Crosby was a fellow and past president of ASQ. One of his most popular
statements on quality was: Quality is conformance to requirements.
The requirements are what the customer says they are. There is a need to
emphasize a "do it right the first time" attitude.
The four absolutes of quality management are basic requirements for understanding
the purpose of a quality system. Philip Crosby also developed a 14-step approach
to quality improvement:
1. Management Commitment
2. Quality Improvement Team
3. Measurement
4. Cost of Quality
5. Quality Awareness
6. Corrective Action
7. Zero Defects Planning
8. Employee Education
9. Zero Defects Day
10. Goal Setting
11. Error Cause Removal
12. Recognition
13. Quality Councils
14. Do It All Over Again
Dr. Deming was an honorary member of ASQ. He was awarded the ASQ Shewhart
Medal in 1955. During his life Dr. Deming published over 200 papers, articles, and
books. Notable books include:
W. Edwards Deming was the one individual who stood for quality and for what it
means. He is a national folk hero in Japan and was perhaps the leading speaker for
the quality revolution in the world. He did summer work at the Hawthorne plant
while working on his Ph.D. At the Hawthorne plant he became acquainted with W.
Shewhart and studied Shewhart's statistical methods.
The World War II effort enabled Deming to conduct classes in statistical methods to
thousands of American engineers, foremen, and workers. The statistical methods
were later credited to be a major factor in the war effort. But, as he would state it,
after the war, all traces of statistical methods were gone in a puff of smoke.
There were several visits to Japan between 1946 and 1948 for the purpose of census
taking. He developed a fondness for the Japanese people during that time. JUSE
invited Deming back in 1950 for executive courses in statistical methods. He refused
royalties on his seminar materials and insisted that the proceeds be used to help the
Japanese people. JUSE named their ultimate quality prize after him.
Deming would return to Japan on many other occasions to teach and consult. He
was well known in Japan, but not so in America. Only when NBC aired its white
paper, "If Japan can, why can't we?" did America discover him. An overnight
success at age 80, W.E. Deming died at the age of 93. During his last 13 years,
Deming gave American industry a dose of strong medicine in quality. His message
to America is listed in his famous 14 points and 7 deadly diseases.
Among other educational techniques, Deming promoted the parable of the red
beads, the PDSA cycle, and the concept of 94% management (system) causes versus
6% special causes. (Deming, 1986)14
"
Improve quality ... Decrease costs (less rework, fewer delays) ... Productivity
improves ... Capture the market with better quality and price'" Stay in business ...
Provide jobs.
Thus, a "chain reaction" of good things can occur through the Deming philosophy.
Mr. Feigenbaum is generally given credit for establishing the concept of "total
quality control" in the late 1940's at General Electric. His TQC statement was first
published in 1961, but at that time, the concept was so new, that no one listened.
Failure driven companies ... "If it breaks, we'll service it." versus the quality
excellence approach ... "No defects, no problems, we are essentially moving
toward perfect work processes."
Authored the first Japanese book to define the word "TQC" in 1981
Guide to Quality Control (1982)26
What is Total Quality Control? The Japanese Way (Ishikawa, 1985)25
Kaoru Ishikawa was involved with the quality movement in its earliest beginnings
and remained so until his death in 1989. His father, Ichiro Ishikawa, President of the
Federation of Economic Organizations and of JUSE, invited Deming to speak before
top Japanese executives in 1950. A review of Ishikawa's training tapes, produced
in 1981, contains many of the statements of quality that are in vogue today. Subjects
such as total quality control, next operation as customer, training of workers,
empowerment, customer satisfaction, elimination of sectionalism (it's not our job),
and humanistic management of workers are examples. It is amazing to hear such
statements of quality on record from more than two decades ago.
Ishikawa stated that total quality control had been practiced in Japan since 1958.
The time for such a philosophy to take hold in a company can range from 2-5 years.
That time will depend on the commitment of top management. To reduce confusion
between Japanese-style total quality control and western-style total quality control,
he called the Japanese method companywide quality control (CWQC).
One of the first concepts that western management took back to their own shores
was the quality circle. The quality circle concept represents the bottom up
approach. In Japan in 1988, there were one million quality circles, involving ten
million people. Quality circles were originally study groups that workers formed in
their department to study the quality concepts that were published in "Quality
Control for Foreman" (Ishikawa was the editor). Quality circles involve members
from within a department. The circle solves problems on a continuous basis. Circle
membership changes dependent upon the task or project under consideration.
Ishikawa also wrote that he originated the concept: "Next operation as customer"
in 1950 when he was working with a steel mill. Operators concerned about their own
defects were considered spies whenever they traveled to the next department to
view their original work. Departments were defensive when outsiders made tours,
thus, a concept of "Next operation as customer" was developed to remove those
fears. The separation of departments was referred to as sectionalism.
A man with many thoughtful concepts, Kaoru Ishikawa was known for his lifelong
efforts as the father of Japanese quality control efforts. The fish bone diagram is
also called the Ishikawa diagram in his honor.
J.M. Juran started in quality after his graduation from engineering school with an
inspection position at Western Electric's Hawthorne plant in Chicago in 1924 (Walter
Shewhart and W.E. Deming were also at that plant). He left Western Electric to begin
a career in research, lecturing, consulting, and writing that has lasted over 60 years.
An association with the American Management Association has enabled Juran to
teach a course, "Managing for Quality," for 30 years to about 100,000 people in over
40 countries.
The publication of his book, the Quality Control Handbook, and his work in quality
management led to an invitation from JUSE in 1954. Juran's first lectures in Japan
were to the 140 largest company CEOs, and later to 150 senior managers. The right
audience was there at the start. Juran commented that no one was more surprised
than he to see CEOs at the seminars. His visit thus marked Japan's use of QC as a
management, rather than a specialist, technique.
Dr. Juran's basic belief is that quality in America is improving, but it must
be improved at a revolutionary rate. Quality improvements need to be made by the
thousands, year after year. Only then will a company become a quality leader.
• Top management must commit the time and resources for success
• Specific quality improvement goals must be in the business plan and include:
Juran Trilogy
Juran has felt that managing for quality requires the same attention that other
functions obtain. Thus, he developed the Juran or quality trilogy which involves:
• Quality planning
• Quality control
• Quality improvement
Juran sees these items as the keys to success. Top management can follow this
sequence just as they would use one for financial budgeting, cost control, and profit
improvement.
For any project, quality planning is used to create the process that will enable one
to meet the desired goals. The concept of quality control is used to monitor and
adjust the process. Chronic losses are normal in a controlled state, while the
sporadic spike will initiate an investigation. Eventually, only quality improvement
activities will reduce the chronic losses and move the process to a better and
improved state of control and that's the "last word."
Dr. Shewhart worked for the Western Electric Company, a manufacturer of telephone
hardware for Bell Telephone, from 1918 until 1924. Bell Telephone's engineers had
a need to reduce the frequency of failures and repairs. In 1924, Shewhart framed the
problem in terms of "assignable-cause" and "chance-cause" variation and
introduced the control chart as a tool for distinguishing between the two. Bringing
a production process into a state of "statistical control," where the only variation is
chance-cause, is necessary to manage a process economically.
Walter Shewhart's statistical process control charts have become a quality legacy
that continues today. Control charts are widely used to monitor processes and to
determine when a process changes. Process changes are only made when points
on the control chart are outside acceptable ranges. Dr. Deming stated that
Shewhart's genius was in recognizing when to act, and when to leave a process
alone. (Capitol Hill, 2001)6
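As a minimal sketch of the idea (not Shewhart's original constants or a full SPC implementation), an individuals-style chart places control limits three standard deviations either side of the historical mean, and a point outside those limits is treated as a signal of assignable cause; the data below are hypothetical:

```python
import statistics

def control_limits(samples):
    """Simplified Shewhart-style 3-sigma limits (uses the sample standard
    deviation rather than a moving-range estimate of sigma)."""
    center = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return center - 3 * sigma, center, center + 3 * sigma

history = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]
lcl, center, ucl = control_limits(history)
new_point = 10.9
print(f"LCL={lcl:.2f}  CL={center:.2f}  UCL={ucl:.2f}")
print("assignable cause suspected" if not lcl <= new_point <= ucl else "chance variation only")
```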
The historical evolution of the PDCA problem solving cycle is interesting. Kolsar
(1994)36 states that Deming presented the following product design cycle (which he
attributed to Shewhart) to the Japanese in 1951:
Perhaps from this concept, the Japanese (Mizuno, 1984)39 evolved a general problem
solving process called PDCA. Both PDCA and PDSA are reviewed elsewhere in this
book.
Use the loss function and signal-to-noise ratio as ways to evaluate the cost
of not meeting the target value. The traditional view is that a product is
either in specification or not. Taguchi feels the quality loss increases
parabolically as the product strays from a single target value.
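The parabolic loss described above is conventionally written L(y) = k(y - T)^2, where T is the target and k is a cost constant. A minimal sketch follows; the tolerance and dollar figures are hypothetical, chosen only to show the shape of the curve:

```python
def taguchi_loss(y, target, k):
    """Quadratic (Taguchi) loss: zero at the target and growing parabolically
    as the measured characteristic y strays from it."""
    return k * (y - target) ** 2

# Hypothetical: a part at the tolerance limit (target +/- 0.5 mm) costs $20 to scrap,
# so k = 20 / 0.5**2 = 80 dollars per mm^2.
k = 20 / 0.5 ** 2
for y in (10.0, 10.2, 10.5):  # target = 10.0 mm
    print(y, taguchi_loss(y, 10.0, k))  # losses of 0.0, 3.2, and 20.0 dollars
```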
Taguchi methods, and other design of experiment techniques, have been described
as tools that tell us how to make something happen, whereas most statistical
methods tell us what has happened. Taguchi methods are concepts that many
engineers can take out of a book and use.
It has been published that about 50% of the practicing engineers in Japan are
competent in Taguchi methods. Dr. Taguchi has presented America with quality
engineering techniques that can work to produce better products and reduce costs.
His methods are more technical in nature and intended for technical specialists. Top management
needs only to provide the training to learn the concepts and allow its use throughout
the corporation for it to be effective. The Taguchi approach does not call for an
internal revolution. His concepts do improve products and procedures.
Mr. Smith helped Robert W. Galvin, Chairman and CEO of Motorola, recognize the
need to control variation and to work toward 3.4 defects per million or for six sigma
levels of quality. Later with Mikel Harry, Smith developed the initial four-step six
sigma stages: measure, analyze, improve, and control, to reduce the defect levels.
In 1988, Motorola won the first Malcolm Baldrige National Quality Award. Mr. Smith's
six sigma efforts were credited with achieving that award. Upon his death in 1993,
Northwestern University and Motorola established a scholarship to honor Bill Smith.
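The 3.4 defects-per-million figure follows from the conventional conversion between sigma level and DPMO, assuming the customary 1.5-sigma long-term shift and counting only the near tail. A small sketch of that conversion:

# Sketch of the conventional sigma-level-to-DPMO conversion, assuming the
# customary 1.5-sigma long-term shift and counting only the near tail.
from math import erfc, sqrt

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """Defects per million opportunities for a given short-term sigma level."""
    z = sigma_level - shift                # shifted distance to the spec limit
    tail = 0.5 * erfc(z / sqrt(2))         # P(Z > z) for a standard normal
    return 1_000_000 * tail

for level in (3, 4, 5, 6):
    print(f"{level} sigma -> {dpmo(level):,.1f} DPMO")
# 6 sigma with the 1.5-sigma shift gives about 3.4 DPMO.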
Mikel Harry
Mikel Harry and Richard Schroeder founded Six Sigma Academy in 1994 as a
consulting firm specializing in the six sigma methodology. Mikel Harry has called
Bill Smith "The father of six sigma" and gave himself the title "The godfather of six
sigma". Many industry people have called Mikel Harry the main "alrchitect" of the six
sigma movement, as he has been the most widely known driver in the industry.
In 1985, Mikel Harry joined Motorola as a quality and reliability engineer where he
initially developed a problem solving program that included: Juran's quality journey,
SPC, Shainin's tools, and planned experimentation. He later teamed with Bill Smith
and developed the MAIC methodology with the "logic filters" approach. The logic
filters are a collection of tools to be used at each stage of the problem solving
approach. These originated with Harry's Ph.D. research at Arizona State University.
In 1989, Robert Galvin gave Harry the head position for the new Six Sigma Research
Institute at Motorola University, where the emphasis and focus would be on dollars,
business transformation, and building a foundation for the six sigma process.
During the process of building the six sigma structure, Mikel Harry and a Unisys
Plant Manager derived the term "black belts" for the new bre!ed of statistically
trained problem solving experts. (They were both martial arts enthusiasts.)
Organizational Leadership
There are several ways to structure a lean six sigma strategy. However, successful
applications share a common core of management support, training, rewards, and
reinforcement.
Management Support
Effective lean six sigma programs do not happen accidentally. Careful planning and
implementation are required to ensure that the proper resources are available and
applied to the right problems. Key resources may include people trained in problem
solving tools, measurement equipment, analysis tools, and capital resources.
Assigning human resources may be the most difficult element, since highly skilled
problem solvers are a valuable resource and may need to be pulled from other areas
where their skills are also needed.
It has been said that there are two occasions when it is difficult to implement an
improvement program, when times are bad and when times are good. When times
are bad, profitability is low, resources are tight and "strategic" activities take a back
seat to "survival." When times are good, profitability is high, and resources are
focused on the current source of cash flow. Improvement may be last on the list of
things to do in order to take advantage of the current opportunity.
It has also been said that there are two times when an improvement program is
critical, when times are bad and when times are good. When times are bad, and
profitability is low, a company can not afford to continue losing money because of
poor quality and performance.
When times are good, and profitability is high, the costs of poor quality and internal
wastes are also likely to be high. Customers are not likely to repeat business with
a company that delivers a poor quality product or service, when a better option is
available. Unfortunately, many companies cruise along like the Titanic thinking they
are unsinkable because they are the market leaders.
Potential black belts may undertake a 4 month training program consisting of one
week of instruction each month. A variety of software packages are used to aid in
the presentation of projects, including Excel or MINITAB™ for the statistics portion.
Potential black belts will receive coaching from a master black belt to guide them
through a project. The completed project will typically require the trainee to use the
majority of the tools presented during the training sessions.
Lesser amounts of training will qualify individuals for the green belt title. Some
companies include extensive lean training as part of their black belt and master
black belt programs. Other companies provide general lean instruction at the green
and black belt levels and then identify one or two master black belts to receive
specialized lean training. The diagram below outlines a high level training plan with
special training for executives and master black belts. The relative volume of each
diagram level represents the relative number of people receiving training.
Many organizations have a structure that fits somewhere in between the two
previous models. Master black belts are responsible for coaching and training black
belts in order to make the best use of their skills. Master black belts also train and
coach management in order to help them support the lean six sigma program.
It is also important that green and black belts experience the rewards of achieving
significant savings for the company. At the same time, other team members must
be recognized for their contribution to performance improvements. Rewarding only
the black belts for improvements that were achieved by teams creates resentment
and isolates the black belts from team members.
Installing the metrics for a certain activity means that a set of performance goals and
standards have been determined. Metric analysis should then provide effective
control feedback for reaching strategic goals. Organizational performance goals and
corresponding measurements are often established in the areas of:
• Profit
• Cycle times
• Marketplace response
• Resources
A company should develop metrics for each major performance goal. A unit of
measure and a method of measurement must be defined for each goal. For the
above performance goals, possible metrics include:
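As an illustrative sketch only (the metric names below are assumptions, not the handbook's table), each goal can be paired with one or two candidate measures, each with a defined unit:

# Illustrative sketch only: the metric names below are assumptions.
# Each major performance goal gets a unit of measure and a method of measurement.
candidate_metrics = {
    "profit": ["net profit margin (%)", "return on investment (%)"],
    "cycle times": ["order-to-ship time (days)", "design cycle time (weeks)"],
    "marketplace response": ["customer retention (%)", "market share (%)"],
    "resources": ["capacity utilization (%)", "training hours per employee"],
}

for goal, metrics in candidate_metrics.items():
    print(f"{goal}: {', '.join(metrics)}")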
Use of Metrics
Metrics can be and must be developed in order to measure achievement of the
organizational goals. Dr. Deming discussed the problem of obtaining a "true value."
There is variation in all measurement and one must be skeptical of how data is
collected. The device used to collect the data must be accurate. Additionally, the
questions of "when," "where," and "how" will impact the accuracy and precision of
the data. According to Juran (1993)30, the development of any measurement system
should take into account the following factors:
Any mechanical and electrical instruments, gages, tools, etc., used for data
collection must undergo recognized calibration procedures. In many applications,
the appropriate metrics are qualitative based on customer, supplier, or appraisal
feedback forms.
Figure: Strategic goals are subdivided into strategic quality, operating, marketing,
and safety goals; these cascade into tactical quality goals, and metrics are used to
evaluate performance.
• Learning and growth: How will we sustain our ability to change and improve?
Employee surveys, employee suggestions, money spent on training
Observers and users of the balanced scorecard can see the strategy and goals of the
company and align themselves accordingly. Building steps include:
5. A second workshop is held with senior and other levels of management. The
draft is refined and objectives are provided for proposed measures.
Performance Metrics
Effective business process management (BPM) requires an integrated system of
metrics in order to achieve the desired lean six sigma business improvements.
Pearson (1999)45 describes how this system of metrics might link all three levels of
the enterprise (process, operations, and business) with the KPOVs of each level of
the process becoming the KPIVs of the next.
Breyfogle (2003)3 refers to these metrics as satellite metrics, the highest level
measures in business process management. Business (executive) level metrics
comprise summaries of detailed operations and financial results, reported monthly,
quarterly, or annually.
Traditional end-of-period cutoff reports are not sufficient for six sigma projects.
Other standard reporting practices, such as comparing year-to-date totals to the
same period last year, are also inadequate. It is important to remember that these
metrics are part of a complete system and should be treated with the same statistical
process monitoring and control techniques as operations data. Wheeler (1993)64
provides excellent examples of using statistical methods for business monitoring,
control and improvement.
Operational efficiency measures relate to the cost and time required to produce the
products. They provide key linkages between detailed process measures and
summary business results, and help identify important relationships and root
causes. Senge (1990)53 found that employees and teams who can see the impact of
their efforts on the overall business outcome, learn and make improvements more
effectively and efficiently.
Process Metrics
Detailed process-level metrics include the data from production people and
machinery. This is the information that operators and supervisors need to run
normal operations. This information is also the subject of much of the measure,
analyze, improve and control phases (MAIC) of lean six sigma once the improvement
project has been selected and defined. A number of process performance metrics
are reviewed later in this Handbook.
• The "vital few" versus the "trivial many": Large organizations may have
thousands of metrics, but no individual should have to focus on more than a
few. Overall business level metrics should be less than 20.
• Metrics should focus on the past, present, and future. Past history provides
context for decisions and builds organizational wisdom. The present data
provides real-time process control. Future predictions provide the basis for
estimates, improvement plans, and strategies.
• The key to an effective system is to have multiple metrics, not just one
important one. Success is about balance, not a mindless focus on quality,
shareholder value, profit, or any other individual measure.
References
1. Barney, M. (May, 2002). "Motorola's Second Generation." Six Sigma Magazine.
Accessed November 2, 2006 from
http://kellogg.northwestern.edu/faculty/savaskan/opnsquality/moCsix_sigma.pdf
2. Besterfield, D.H., et. al. (1999). Total Quality Management, 2nd ed. Upper Saddle
River, NJ: Prentice Hall.
3. Breyfogle, F.W., III. (2003). Implementing Six Sigma, 2nd ed. New York: John
Wiley & Sons, Inc.
4. Brown, M.G. (1996). Keeping Score, Using the Right Metrics to Drive World -
Class Performance, Quality Resources.
11. Crosby, P.B. (1984). Quality Without Tears. New York: McGraw-Hill.
12. Cutler, A.N. (2001, August 22). Retrieved from web site: http://www.sigma-
engineering.co.uk/, Sigma Engineering Partnership.
13. DeCarlo, N. (2004). Mikel J. Harry Ph.D. Accessed November 2, 2006 from
http://mikeljharry.com.
14. Deming, W.E. (1986). Out of the Crisis. Cambridge, MA: Massachusetts Institute
of Technology, CAES.
15. Delavigne, K.T. & Robertson, J.D. (1994). Deming's Profound Changes.
Englewood Cliffs: Prentice Hall.
16. DMAIC Process. (2001). Downloaded December 1, 2001 from web site:
http://www.ge.com/capital.vendor/dmaic.htm
18. Feigenbaum, A.V. (1991). Total Quality Control. 3rd ed., Revised, Fortieth
Anniversary Edition. New York: McGraw-Hill.
19. Ford, H. (1926), (1988). Today and Tomorrow. Cambridge, MA: Productivity
Press.
20. Gee, G., Richardson, W.R. & Wortman, B.L. (2005). CMQ Primer. Terre Haute, IN:
Quality Council of Indiana.
21. George Group. (n.d.). Michael George. Accessed November 5, 2006 from:
http://www.georgegroup.com.
22. Hahn, G., Hill, W., Hoerl, R. & Zinkgraf, S. (1999, August). The American
Statistician. 53 (3).
23. Harry, M. (1998, May). "Six Sigma: A Breakthrough Strategy for Profitability."
Quality Progress.
24. Harry, M. & Schroeder, R. (2000). Six Sigma. New York: Currency, Doubleday.
25. Ishikawa, K. (1985). What Is Total Quality Control? The Japanese Way.
Englewood Cliffs: Prentice Hall.
26. Ishikawa, K. (1982). Guide to Quality Control. White Plains, NY: Quality
Resources.
27. Jones, D. T. (n.d.). Lean Enterprise Institute. Accessed November 5, 2006 from:
http://www.lean.org/whoweare/.
28. Juran, J.M. (1992). Juran on Quality by Design. New York: Free Press.
29. Juran, J.M. (1999). Juran's Quality Handbook, 5th ed. New York: McGraw-Hill.
30. Juran, J.M. & Gryna, F.M. (1993). Quality Planning and Anc,'ysis, 3rd ed. New
York: McGraw-Hili.
31. Kanigel, R. (1999). The One Best Way: Frederick Winslow Taylor and the
Enigma of Efficiency. New York: Penguin.
32. Kaplan, R.S. & Norton, D.P. (1992, January-February). "The Bi:1llanced Scorecard-
Measures That Drive Performance." Harvard Business Review.
33. Kaplan, R.S. & Norton, D.P. (1993, September-October). "Putting the Balanced
Scorecard to Work." Harvard Business Review.
34. Kaplan, R.S. & Norton, D.P. (1996, January-February). "U!;ing the Balanced
Scorecard as a Strategic Management System." Harvard I~usiness Review.
36. Kolsar, P.J. (Fall, 1994). "What Deming Told the Japanese in 1950." Quality
Management Journal. Milwaukee:ASQ.
43. Omdahl, T. (1997). Quality Dictionary, Terre Haute, IN: Quality Council of Indiana.
44. Pande, P.S., Newman, P.R. & Cavanagh, R.R. (2000). The Six Sigma Way. New
York: McGraw-Hill.
45. Pearson, T.A., (1999, May). "Measurements and the Knowledge Revolution."
presented at the ASQ Annual Quality Congress.
46. Potts, M. (March 16, 2001). "Fortune" Calls Anand Sharma Hero of U.S.
Manufacturing. [Electronic version]. India - West, 26(20).
47. Process Quality Associates, Inc. (n.d.). The Evolution of Six Sigma. Accessed
November 2, 2006 from http://www.pqa.net/prodservices/sixsigma.
48. Quality Digest (November, 2006) "Six Sigma and Lean: Happily Ever After." No
author attributed.
49. Ramberg, J. (May, 2000). Six Sigma: Fad or Fundamental? [Electronic version].
Quality Digest.
50. Rother, M. & Shook, J. (2003). Learning to See. Cambridge, MA: The Lean
Enterprise Institute.
53. Senge, P.M. (1990). The Fifth Discipline, The Art and Practice of the Learning
Organization, New York: Currency/Doubleday.
55. Shingijutsu Co., Ltd. (2001). Shingijutsu website. Accessed November 8, 2006
from: http://Shingijutsu.co.jp/.
56. Shingo, S. (1989). A Study of the Toyota Production System from an Industrial
Engineering Viewpoint. (Revised). Cambridge, MA: Productivity Press.
57. Six Sigma First. (n.d.). A Conversation with Forrest Breyfogle. Accessed
November 8, 2006 from: http://www.sixsigmafirst.com/forrest.htm
58. Smarter Solutions, Inc. (n.d.). Forrest Breyfogle, III. Accessed November 8,
2006 from: http://www.smartersolutions.com.
59. Snee, R.D. (1999, September). "Why Should Statisticians Pay Attention to Six
Sigma?" Quality Progress.
60. Taguchi, G. & Wu, Y. (1979). Off - Line Quality Control. Nagaya: Central Japan
Quality Control Association.
62. Toyoda, E. (1987). Toyota, Fifty Years in Motion: An Autobiography. Tokyo &
New York: Kodansha International.
63. The Henry Ford website. "The Life of Henry Ford." Retrieved October 20, 2006
from http://www.hfmgv.org.
64. Wheeler, D.J., (1993). Understanding Variation, The Key to Managing Chaos,
Knoxville, TN: SPC Press.
65. Womack, J., Jones, D., & Roos, D. (1990). The Machine that Changed the World.
New York: Harper Perennial.
66. Womack, J., & Jones, D. (1996). Lean Thinking: Banish Waste and Create
Wealth in Your Corporation. New York: Simon & SChustE!r.
67. Wortman, B.L. (2001). CSSBB Primer. Terre Haute, IN: Quality Council of Indiana.
2.1. Lean and six sigma share in common all of the following issues, EXCEPT:

a. They both focus on continuous improvement
b. They both require top management commitment
c. They both focus on customer satisfaction
d. They both require long learning curves

2.2. The most important element in lean six sigma deployment would be considered:

a. Training
b. Organizational structure
c. Management support
d. Reward and recognition

2.3. Which of the following concepts is mostly associated with Taiichi Ohno?

a. SPC
b. TOC
c. CTQ
d. TPS

2.6. Kaplan and Norton have outlined a business planning process that gives
consideration to factors other than strictly financial ones. It provides a greater
perspective for stakeholder interests. This approach is referred to as:

a. Balanced scorecard
b. Strategic planning
c. Five forces of competitive strategy
d. Quality function deployment

2.7. Increasing performance in a lean six sigma corporation from 3 sigma to 4 sigma
would reduce defects per million by a factor of:

a. 2
b. 8
c. 10
d. 16

2.8. In a nutshell, lean six sigma is considered:

a. A business improvement approach
b. A focus on critical customer items
c. An elimination of mistakes and defects
d. A concentrated focus on business outputs
2.4. Which of the following is the LEAST acceptable reason for the deployment of
lean six sigma projects?

2.5. Lean six sigma project benefits could include all of the following, EXCEPT:

a. Increased profits
b. Improved process capability
c. Increased defects
d. Reduced warranty claims

2.9. What guru is MOST widely associated with DOE?

2.10. The term "metrics" most frequently refers to:

a. A unit of measurement
b. The metric system
c. The science of weights and measurements
d. An evaluation method

2.11. If one chose to look at any business enterprise on a main level basis, which of
the following categories would NOT have either KPIV (key process input variables)
or KPOV (key process output variables)?

a. Process
b. Operations
c. Business
d. Technological

2.16. Strategic goals will be subdivided into:

a. Major benchmarks
b. Loss functions
c. Numerous tactical goals
d. Appropriate metrics

2.17. Of the key elements of an organizational plan, which of the following would be
most likely to contain numbers and dates?
2.21. Why is lean six sigma called TQM on steroids?

a. Because of the extensive training element required
b. Because of the inclusion of statistical and lean tools
c. Because of the heavy impact of top management support
d. Because of the impact of cost savings on the bottom line

2.22. Strategic goals must be subdivided. Thus, they are:

a. Delegated
b. Distributed
c. Accountable
d. Deployed

2.23. Many tools can be used in either lean or six sigma projects. A problem solving
approach that unifies project follow-up is:

a. SIPOC
b. DOE
c. DMAIC
d. TPM

2.24. Which of the following quality gurus is most closely associated with the term
"total quality management?"

2.26. What would occur if the quality goals were not a part of the strategic plan?

a. There would be no strategic goals
b. There would not be as much emphasis on quality
c. The total quality effort would not suffer
d. The quality department would still maintain the quality goals

2.27. Which American figure is seen as the earliest advocate of waste reduction?

a. Henry Ford
b. Frederick W. Taylor
c. W. Edwards Deming
d. James Womack

2.28. A company has just started a lean six sigma initiative. Which set of tools
would be better suited for initial projects?

a. Lean tools because they provide stability and repeatability for future projects
b. Six sigma tools because they provide a larger number of available options
c. Lean tools because they provide a more reliable and accurate picture of the root cause
d. Six sigma tools because they provide more measurement data
2.31. Which of the following legendary lean thinkers was never officially on Toyota's
payroll?

2.32. Which of the following are resource requirements needed to achieve strategic
goals?

I. Infrastructure and support for the projects
II. Training
III. Administration of the programs
IV. Evaluation of the projects

2.33. Who created the initial problem solving framework that would later become
DMAIC?

a. Mikel Harry
b. Forrest Breyfogle, III
c. Robert Galvin
d. Michael George

2.34. After the development of a viable corporate strategy, the next logical step
would be:

2.35. If a metrics format were being developed to track marketplace response, which
of the following items would be included?

a. Cost of quality
b. Customer retention
c. Cycle time reduction
d. Profit margin on sales

2.36. Metrics of the effectiveness of the strategic quality plan include:

a. Analysis of returns
b. Cost of quality
c. Customer market surveys
d. Customer retention

2.37. The extension from lean production to lean office is possible because:

a. The office produces a variety of services
b. The concept of waste applies to every business environment
c. Offices are more data driven than manufacturing
d. There is little difference between production and office environments

2.39. Who should be the ultimate recipient of lean six sigma project results?

a. Top management
b. Employees
c. Customers
d. Project sponsors

(Stem of the next question not reproduced.)

a. Many American companies employ too many inspectors; perhaps 5% - 10% of the workforce
b. Quality should be built into the product, not inspected in
c. In most cases, the worker should perform his/her own inspection and not rely on someone else
d. Most manual inspection will miss 10% - 20% of defects under typical working conditions
The answers to all questions are located at the end of Section XIII.
Lean Six Sigma Project Management is presented in the following topic areas:
The above assessment will go a long way towards deciding if current efforts are
sufficient, or whether the timing is appropriate to undertake a lean six sigma effort.
Lean six sigma can also be applied as a targeted approach. A number of companies
have improvement techniques and teams in place, and only assign black belt
assistance as needed.
The lean six sigma approach achieves the best results if implemented by high
performance organizations. Medium and low performance companies should
consider some building block steps, in order to take advantage of the "low hanging
fruit" that can be picked with these more basic techniques. Examples include:
George (2004)12 indicates that some key themes of lean six sigma include:
Figure: Benefit/effort matrix. Benefit (low to high) is plotted against effort (low to
high): quadrant 1 is high benefit and low effort, quadrant 2 is high benefit and high
effort, quadrant 3 is low benefit and low effort, and quadrant 4 is low benefit and
high effort.
Obviously, those projects fitting into quadrant 1 should receive immediate attention.
Generally, projects in quadrant 4 should be avoided. Quadrant 3 projects are
sometimes beneficial for initial team activities. The most difficult decision involves
the quadrant 2 category. These projects are potentially very desirable, but often
require careful analysis to ensure strategic fit and adequate resource availability.
Value stream mapping is often used to identify projects with the highest impact.
Harry (2000)15 details a methodology to focus the deployment of (lean) six sigma
projects. There are a considerable number of options, dependent upon the goals
and objectives of the organization. Considerations include:
The typical methodology that is followed for lean six sigma projects is either Define-
Measure-Analyze-Improve-Control, or some variation of this approach. This
assumes that a key business problem can be clearly defined, and that it can be
addressed by data measurement and/or other improvement techniques.
• Poor communications
• Low recovery
• A car runs rough
• Excessive downtime
• Lack of training
• Too much scrap
• The team's two major activities are project resolution and learning to work
effectively together.
The use of these basic approaches can resolve many problems and complete many
projects. In some cases, more powerful tools are necessary. In these instances, the
team would be wise to utilize the DMAIC approach because of the implied support
of professionals trained in the use of statistical software programs and techniques
such as ANOVA, DOE, confidence intervals, process capabilities, and hypothesis
testing.
PDCA
The PDCA cycle is very popular in many problem solving situations because it is a
graphical and logical representation of how most individuals already solve
problems. Refer to Figure 3.2 below:
Figure 3.2 The PDCA Cycle: Plan (P): establish a plan for achieving a goal; Do (D);
Check (C); Act (A): implement necessary reforms when the results are not as
expected.
It is helpful to think that every activity and every job is part of a process. A flow
diagram of any process will divide the work into stages and these stages, as a
whole, form the process. Work comes into any stage, changes are effected on it,
and it moves on to the next stage. Each stage has a customer. The improvement
cycle will send a superior product or service to the ultimate customer.
PDSA
Deming (1986)5 was somewhat disappointed with the Japanese PDCA adaptation. In
1951 Deming presented a four or five step product design cycle to the Japanese, and
attributed the cycle to Shewhart. Deming proposed a Plan-Do-Study-Act continuous
improvement loop (actually a spiral), which he considered principally a team
oriented, problem solving technique. The team objective is to improve the input and
the output of any stage. The team can be composed of people from different areas
of the plant, but should ideally be composed of people from one area of the plant's
operation.
1. Plan - What could be the most important accomplishment of this team? What
changes might be desirable? What data is needed? Does a test need to be
devised? Decide how to use any observations.
2. Do - Carry out the change or test decided upon, preferably on a small scale.
4. Act - Study the results. What was learned? What can one predict from what
was learned? Will the result of the change lead to either (a) improvement of
any, or all stages and (b) some activity to better satisfy the internal or external
customer? The results may indicate that no change at all is needed, at least
for now.
As noted with other problem solving techniques, everyone on the team has a chance
to contribute ideas, plans, observations and data which are incorporated into the
consensus of the team. The team may take what they have learned from previous
sessions and make a fresh start with clear ideas. This is a sign of advancement.
Both PDCA and PDSA are very helpful techniques in product and/or process
improvement projects. They can be used with or without a special cause being
indicated by the use of statistical tools.
2. Define the problem; if it is large, break it down to smaller ones and solve these
one at a time.
4. Analyze the problem. Find all the possible causes; decide which are major ones.
5. Solve the problem. Choose from available solutions. Select the one that has the
greatest organizational benefit. Obtain management approval and support.
Implement the solution.
6. Confirm the results. Collect more data and keep records on the implemented
solution. Was the problem fixed? Make sure it stays fixed.
Figure 3.3 A Problem Solving Flow Chart Showing Use of Quality Tools
DMAIC Process
Each step in the cyclical DMAIC process is required to ensure the best possible
results from lean six sigma team projects. The process steps are detailed below:
Define the customer, their critical to quality (CTQ) issues, and the core business
process involved.
Analyze the data collected and process map to determine root causes of defects and
opportunities for improvement.
Improve the target process by designing creative solutions to fix and prevent
problems.
The above information is modified from GE® Capital Services web site (DMAIC,
2001)6. Note how closely the DMAIC process steps parallel the six classical team
problem solving steps presented on the previous two pages.
IDEA Process
The IDEA problem solving loop is similar in nature to the PDCA and DMAIC process
cycles. IDEA stands for Investigate, Design, Execute, and Adjust. The process
consists of basic step-by-step questions to help guide the problem solving team
toward new and innovative solutions. The detailed IDEA process steps are:
• Investigate: Provide a definition of the problem, provide some facts about the
problem, and provide a root cause.
• Design: Envision the idealized future state and create a list of options to
achieve the idealized state.
• Execute: Establish the specific metrics for success, test the best solution,
and determine a measurable project impact.
• Adjust: Reflect on the outcome of the project (the Japanese word is hansei).
This is an after action review and is also conducted for successful projects.
The IDEA report is formatted so that the four steps are concisely and clearly
displayed on one simple page. A condensed example is shown below.
D1. Establish the team: Use cross-functional team membership. This is a diverse
team with the knowledge, time, authority, and skill to solve the problem and
implement the corrective action. There will be a team champion.
D2. Describe the problem: Identify the problem in terms of the internal/external
customer problem. Define the problem in terms of "what is wrong with what?," use
of the 5W 2H method (who, what, where, when, why, how, how many). Use
quantifiable terms to define the problem. Some other useful tools include: "is/is
not," cause-and-effect diagrams, and flow charts.
D4. Identify the root cause: Search for all possible causes of the problem. Update
"is/is not," cause-and-effect diagram, process flow charts, etc. A FMEA (failure
mode effect analysis) could also be used here. Test each potential root cause
against the problem description and test data for elimination of the problem.
D7. Prevent recurrence: Modify the existing systems, practices, and procedures to
prevent recurrence of the problem. One must be able to state that this is the best
possible long-term solution for the customer.
D8. Recognize team and individual contributions: Recognize team and individual
contributions, and celebrate.
Project Selection
Improvement projects may be selected from a variety of sources. Many of the same
tools and techniques discussed later in Section V can be employed. A number of
items will be discussed here:
• Process elements
• Stakeholder analysis
• Customer data
• Quality functional deployment
• Benchmarking
Process Elements
A business process is the logical organization of people, materials, energy,
equipment, and information into work activities designed to produce a required end
result (product or service). (Juran, 1999)21
The SIPOC diagram is a foundation technique for lean six sigma improvement.
SIPOC is an acronym for the five major elements in the diagram:
Process: The set of action steps that transforms the inputs into outputs by
adding customer value
SIPOC can help everyone "see" the business from an overall pr10cess perspective
by:
When process flow charts are used with the SIPOC model, business process
monitoring, control, understanding, and improvement are greatly enhanced. When
lean specific projects are under construction, the detail created by value stream
mapping may be required. This technique is described in Section V.
Stakeholder Analysis
A lean six sigma project with high impact will bring about major changes to a system
or to the entire company. The change can affect various people inside and outside
of the system. Major resistance to the change can develop. Attempts to remove or
reduce the resistance must be made. The stakeholders involved should be identified
and then a plan to convert or enroll them in the change process must be developed.
This should provide for the needed buy-in, alternate solutions, and removal of
pitfalls. Stakeholders can be identified as:
Figure: Stakeholders surrounding the internal company processes include society,
stockholders or owners, suppliers, customers, and employees.
Activities which fail to meet their stated objectives will have negative effects on the
stakeholders. To the stockholders, the net worth of the company will be reduced.
Suppliers may have payments delayed or never paid in full. Management and
employees may see wage levels frozen or diminished, and the number of employees
may be reduced. Customers may react to unsuccessful activities by seeking other
companies with which to deal, or they may impose penalties stated in the contract.
Organizational performance and the related strategic goals and objectives may be
determined for:
Goals may be set for either short-term or long-term results. The profit margin
required to operate a business should be optimized for all stakeholder requirements.
Projects and programs initiated by the company usually require a return on
investment (ROI). The maximum profits are usually not taken because of internal
stakeholder interests. If stockholder returns are maximized, then items such as re-
investment in the company, purchases of new machinery and equipment, wages and
salary increases must be turned down. An optimal level of stockholder dividends,
investments, personnel costs, etc. must be maintained.
Customers
Lean six sigma is built around the customer. Everything starts and ends with
customers. They define quality and set expectations. They rightfully expect
performance, reliability, competitive prices, on-time delivery, service, and clear and
accurate transaction processing. (Harry, 2000)15
At times, the customer of a project may not be as evident as initially thought. The
receiver of the next operation, an internal department, could be thought of as a
customer. The external customer of a process could be the purchaser. Yet if the
purchaser is a distributor, they may not really be the true customer.
Pande (2000)25 points out that the primary customer of the process will or should
have the highest impact on the process. The primary customer is of utmost
importance to the process. The sorting out of the primary customer may take some
discussion on the team's part. The question of "Who is the customer?" may bring
out discoveries of "Which customers make money?". That is, are there certain
customers that make up the bulk of company revenues? Are there a small
proportion of customers that simulate the Pareto law? The case being that 80% of
the revenues come from 20% of the customers, or that 80% of net profit comes from
20% of the customers. See the discussion of Pareto analysis later in this Handbook.
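A short sketch of this kind of Pareto check, using hypothetical revenue figures, ranks customers by revenue and accumulates their share of the total:

# Sketch of a Pareto check on "which customers make money", using hypothetical
# revenue figures. The question is whether roughly 80% of revenue comes from
# roughly 20% of the customers.
revenue_by_customer = {   # hypothetical data
    "A": 520_000, "B": 310_000, "C": 95_000, "D": 40_000, "E": 18_000,
    "F": 9_000, "G": 4_000, "H": 2_500, "I": 1_000, "J": 500,
}

total = sum(revenue_by_customer.values())
ranked = sorted(revenue_by_customer.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0.0
for rank, (customer, revenue) in enumerate(ranked, start=1):
    cumulative += revenue
    pct_customers = 100 * rank / len(ranked)
    pct_revenue = 100 * cumulative / total
    print(f"top {pct_customers:4.0f}% of customers -> {pct_revenue:5.1f}% of revenue")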
External customers are the most important part of any business. If one can identify
them and understand their requirements, we can design products (goods and
services) that they will want to buy. Every business has many potential customers,
and each customer has their own decision criteria. They attempt to weigh the overall
value of goods and services by considering cost, quality, features, and availability
factors (CQFA). Businesses compete for customers on this CQFA value grid, and
must excel in at least one category in order to succeed.
Clearly, the voice of the customer is critical to business success. Both internal and
external customers should be identified. Their requirements must be identified in
order to understand and improve the business process. The relationship that
management can develop with either basic customer type will affect the company's
ability to be effective in delivering customer satisfaction.
Business Level
Customers at this level are primarily shareholders and top management employees.
The data of interest is primarily financial data such as stock price, market share,
revenues, earnings, return-on-investment (ROI), return-on-net assets (RONA), etc.
Typical analysis tools are financial. Typical measurement intervals may be quarterly
or annually.
Operations Level
Customers at this level are primarily those who purchase the product (external) and
those who manage production operations (internal). Data of interest measures
overall process performance with the focus on customer satisfaction (external
measures of operational effectiveness), and internal operations efficiency (internal
measures such as rolled throughput yield, sigma levels, WIP inventory, etc.). Typical
analysis tools come from six sigma methods and lean manufacturing, industrial
engineering, and various forms of operations analysis. Typical measurement
intervals may be daily or weekly.
Process Level
Customers at this level are primarily internal, including employees and the "next
process" in the operation. External customers include suppliers for detailed
material specification questions. Data of interest primarily involves key process
variables. Typical analysis tools are statistical methods for process control,
capability, and improvement. Typical measurement may vary from hours to fractions
of a second, depending on production rates.
• State of the company: What are the employees' perception of the company?
• State of quality efforts: Are the quality efforts worthwhile?
• State of the processes: Are there improvements?
• Reaction to policies: What dumb things were implemented?
• Rating of company satisfaction: Is the company a good place to work?
• Rating of job satisfaction: Do I like my job, my boss, etc.? (Snee, 1995)27
Customer Surveys
A few customer survey themes are noted below:
In the evaluation of customer information, not all attributes and transactions should
be treated equally. Some are much more important than others. As customers'
needs change, the evaluations will change. Griffin (1995)14 conducted a study on
best customer satisfaction practices and recommended the use of multiple
instruments to collect customer satisfaction data. The opportunity to collect
misleading or useless "iriformation is possible with just one instrument. Validation
of the initial results can be accomplished via multiple measurements.
Customer survey sample sizes and frequency can have significant cost implications,
and should be chosen to balance business resources and the need to monitor
changes in the business environment. Surveys can be developed in questionnaire
form. An adequate sample would range from 25 to 30 questions. For an L-Type
matrix survey, the use of a numerical scale from 1 (very dissatisfied) to 10 (very
satisfied) can make it easier to quantify the results, as shown in Figure 3.7.
Customer Satisfaction

Task            1 (Very Dissatisfied)  2  3  4  5  6  7  8  9  10 (Very Satisfied)
On Schedule
Good Product
Friendly
Prompt

Figure 3.7 L-Type Survey Matrix
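A small sketch of how responses to a matrix like Figure 3.7 might be summarized, with hypothetical ratings: each task column is averaged across customers.

# Sketch of summarizing an L-type survey matrix like Figure 3.7: each customer
# rates each task from 1 (very dissatisfied) to 10 (very satisfied).
# The responses below are hypothetical.
tasks = ["On Schedule", "Good Product", "Friendly", "Prompt"]
responses = [              # one row per customer, one rating per task
    [8, 9, 7, 6],
    [7, 9, 8, 5],
    [9, 8, 9, 7],
    [6, 7, 8, 4],
]

for column, task in enumerate(tasks):
    ratings = [row[column] for row in responses]
    mean = sum(ratings) / len(ratings)
    print(f"{task:13s} average satisfaction = {mean:.1f} / 10")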
Survey Pitfalls
Surveys are a method to gather data, but care should be taken with that data. A well
designed and properly executed survey can be a help to the company. The survey
can show what resources do not satisfy customers, identify opportunities for growth
or correction, and focus on customer issues (Futrell, 1994)10. Problems which can
occur in the use of surveys include:
For a consumer survey, Hayslip (1994)17 suggests there may be some differences:
Customer Expectations
Customers ultimately determine the value of any product (goods and services) with
their decision to buy or not buy. These decisions are made based on a complex
system of critical customer requirements. In order to manage (control and improve)
any business process, one must be able to determine the critical customer
requirements that influence these decisions.
• Desired: These are attributes that are worthwhile to have, but not necessarily
provided as part of the package. A few extra hints by the technician on
operating procedures or hookups. The person at the rental car agency gives
good directions to your location and helpfully tells you how to save some
money when returning the rental car.
(Albrecht, 1992)1
Customer Needs
Customer needs are not stable, but are continually changing. A product or service
that satisfied a certain need may generate new needs for the customer. Maslow's
hierarchy of needs would be a good illustration of the advancement of needs
(physical, safety, social, esteem, and self-actualization). As an individual's needs are
fulfilled, he/she advances up the hierarchy. As the customer obtains a suitable
product or service (the basic needs are fulfilled), they will look for new attributes.
• Convenience: Technology in today's world can bring about new products and
services that were not dreamed of. There are sections of society that limit the
extent of technology in their lives (Amish for example).
• Service for product failures: When a product fails, what recourse (warranties,
returns, exchanges, etc.) does the customer have? A newly purchased tennis
racket cracks. What is the replacement policy?
The technical characteristics are handled by the company through the design
function, or better still, through a cross-functional team that includes sales,
marketing, design engineering, manufacturing engineering, and operations. This
activity should focus the product or service on satisfying customer requirements.
QFD is a tool for the entire organization to use. It is flexible and customized for each
case and works well for manufactured products and in the service industry.
QFD was first applied in the Kobe shipyards in 1972 by Yoji Akao and his associates.
It met with great success and was introduced to the United States by Don Clausing
in the mid 1980s. Various United States companies (mostly automotive) have
applied the principles of QFD to their product design process. Hauser (1988)16
provides an illustration concerning the location of the emergency parking brake
lever for an American sports car. Engineering initially wanted to place the brake
between the seat and the door, but this caused a problem for a woman driver
wearing a skirt. Could she get in and out gracefully? Would this eventually cause
dissatisfaction?
The collection of customer wants and expectations are expressed through the
methods available to most any organization: surveys, focus groups, interviews, trade
shows, hot lines, etc. The house of quality is one technique to organize the data.
The house of quality is so named because of the image used in its construction. The
use of matrices is the key to building of the house. The primary matrix is the
relationship matrix between the customer's needs or wants and the design features
and requirements.
Figure: House of quality layout. Customer needs (wants) and customer priorities
form the rows; design features form the columns of the relationship matrix; a
comparison of the customer's view of the competition appears at the right;
benchmarked target values appear below; and the roof records positive (P) and
negative (N) interactions between design features.

In the example matrix, customer needs (comprehensive, low cost, up-to-date, easily
available, test questions) are rated against design features on a scale of 5 (most
important) to 0 (no importance), and target values are assigned for each design
feature.
The house of quality is flexible and customized to each situation. Each organization
will develop guidelines that will modify the above image. However, the basics of
QFD will remain the same: to hear the voice of the customer and to be proactive in
its design of products in order to meet customer needs. (Besterfield, 1999)2
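As a sketch of the arithmetic behind the relationship matrix (the design features and relationship strengths below are assumptions, though the customer priorities echo the example above), each feature's technical importance is the priority-weighted sum of its relationship ratings:

# Sketch of scoring a house of quality relationship matrix: each customer need's
# priority (0-5) is multiplied by the strength of its relationship to a design
# feature, and the products are summed per feature. Feature names and
# relationship strengths are hypothetical.
priorities = {"comprehensive": 4, "low cost": 2, "up-to-date": 5, "easily available": 4}

relationships = {   # relationship strength (0-5) of each need to each design feature
    "page count":     {"comprehensive": 5, "low cost": 1, "up-to-date": 1, "easily available": 0},
    "revision cycle": {"comprehensive": 1, "low cost": 2, "up-to-date": 5, "easily available": 1},
    "distribution":   {"comprehensive": 0, "low cost": 3, "up-to-date": 0, "easily available": 5},
}

for feature, row in relationships.items():
    importance = sum(priorities[need] * strength for need, strength in row.items())
    print(f"{feature:15s} technical importance = {importance}")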
Benchmarking
Benchmarking is the process of comparing the current project, methods, or
processes with the best practices and using this information to drive improvement
of overall company performance. The standard for comparison may be competitors
within the industry, but is often found in unrelated business segments.
Process Benchmarking
Performance Benchmarking
Project Benchmarking
Strategic Benchmarking
Benchmarking activities often follow the following sequence:
Juran (1993)20 presents the following examples of benchmarks (slightly modified) in
an advancing order of attainment:
Some companies attempt to achieve a higher performance level than their
benchmark partner. Shown below is a comparison between a typical and a
breakthrough benchmark approach.
(Goetsch, 2000)13
It should be noted that organizations often choose benchmarking partners who are
not best-in-class because they have identified the wrong partner or simply picked
someone who is handy.
The reason for performing a risk analysis is to inform the appropriate stakeholders
(sponsors, customers, team members, others) about the magnitude of the risks
associated with the project, as well as the contingencies that will be developed in
order to mitigate or reduce the risks to acceptable levels (Martin, 1997)23.
Risk can be defined as a measure of the probability of an event and the costs
associated with not achieving an expected purpose.
Additionally, performance, cost, and schedule risks can be segmented into five risk
areas. These are:
• Technical performance
• Supportability risks
• Environmental risks
• Cost risks
• Schedule risks
• Identify: Search for and locate risks before they become problems.
• Track: Monitor risk indicators and actions taken throughout the project.
The management of risk requires the specific actions identified in Figure 3.12.
(Frank, 2002)
For major projects or activities, a risk management plan is a smart way to guide the
risk management process and to document the results.
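Consistent with the probability-times-cost definition given above, a hypothetical risk register might be scored and ranked as follows (all entries are assumptions):

# Sketch of scoring risks as probability times cost, consistent with the
# definition above. The risk register entries are hypothetical.
risks = [
    # (description, probability of occurrence, cost if it occurs in dollars)
    ("key supplier delivers late",       0.30, 40_000),
    ("measurement system not capable",   0.15, 25_000),
    ("pilot line downtime exceeds plan", 0.10, 60_000),
]

# Rank by expected exposure so the largest risks get contingency plans first.
for description, probability, cost in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    exposure = probability * cost
    print(f"{description:35s} expected exposure = ${exposure:,.0f}")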
Project Scope
The project scope refers to the boundaries of the project. It is an attempt to outline
the range of the team's activities. In the area of product development, the team may
decide to limit itself to the launching of a new product at a single manufacturing site.
Issues or problems regarding market research, prototype development, or financial
investments would be outside the scope of the team's activities. Eckes (2001)7
suggests that each team work very hard in its first meetings to clarify the project
scope. The team champion, the team leader, and the team will all be involved in this
process.
Goal Statement
The goal statement will be created and agreed to by the team and team champion.
Hopefully, the goals will be achievable within a 120 to 160 day period. Eckes (2001)7
indicates that a typical "rule of thumb" for six sigma goals is a requirement of a 50%
reduction in some initial metric (or improvement of 50%). For example, reduce the
collectibles from 120 days to 60 days; reduce the scrap from 5% to 2.5%.
Milestones/Deliverables
For any well managed project, a set of stages or milestones are used to keep the
project on track and to help bring a project to completion. Again, Eckes (2001)7
points out that initial team projects should be at the 120 day length. Only half of the
project would be allocated to the define and measure stages. Assigning teams an
initial project with lengths of more than 160 days will lower the anticipated success
rate. A typical milestone chart might be:
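The handbook's milestone chart is not reproduced here. As an illustrative sketch of a 120 day project with half the time allocated to define and measure (the remaining phase splits and the start date are assumptions):

# Illustrative sketch of a 120-day DMAIC milestone chart: define plus measure
# take half the project; the analyze/improve/control splits are assumptions.
from datetime import date, timedelta

start = date(2024, 1, 8)                     # hypothetical project start
phase_days = {"Define": 25, "Measure": 35,   # define + measure = 60 days
              "Analyze": 25, "Improve": 25, "Control": 10}

milestone = start
for phase, days in phase_days.items():
    milestone += timedelta(days=days)
    print(f"{phase:8s} complete by {milestone.isoformat()}")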
Team Composition
The composition of the team is of great importance, especially for critical projects.
Teams should be composed of qualified people with sufficient expertise to achieve
the team's charter. The team should not be staffed with people just interested in
improvement. To carry out high impact lean six sigma projects, highly qualified,
highly trained team members will best serve the team champion. (Eckes, 2001)7
Required Resources
The resources required for a project must be detailed. Typical project resources
include:
Each project task is divided into smaller activities, and then elements, until the level
is reached in which each element is under one identifiable individual or group
responsibility. Another way to say this is: you take a long journey one step at a time.
If the time constraints are fixed, such as a set deadline, then the resource
constraints must be flexible to accommodate variations in the project. Time delays
during the project require offsetting increases in resources and costs to maintain a
set deadline. One example of a time constrained project is the construction and
delivery of a new product. These projects have fixed completion dates, often with
penalties for every day it is late. The contract is often won on the basis of the
company that plans to meet the deadline date, and is willing to risk the losses if the
project is late.
Most projects have relatively fixed levels of resources including manpower and
equipment. In resource constrained projects, the objective is to meet the project
duration requirements, without exceeding the resource limits. If these resources are
shared within an organization, resource scheduling and coordination between
projects becomes vital.
For those resources with time conflicts, it may become necessary to schedule
planned parallel activities as sequential tasks, using existing slack time and possibly
delaying the project completion date.
Planning Tools
Project planning tools include developing and analyzing the project timeline,
determining required resources, and estimating costs. Common techniques for
evaluating project timelines include PERT charts, Gantt charts, the Critical Path
Method (CPM), and activity network diagrams (AND). The work breakdown structure
(WBS) helps identify detailed activities for the plan and enables estimation of project
costs.
• Arrows imply logical precedence only. The length and compass direction of
the arrows have no meaning.
• The network must start at a single event, and end at a single event.
• Time estimates must be made for each activity in the network, and stated as
three values: optimistic, most likely, and pessimistic elapsed times.
• The critical path and slack times for the project are calculated. The critical
path is the sequence of tasks which requires the greatest expected time.
The slack time, S, for an event is the latest date an event can occur or be finished
without extending the project (TL), minus the earliest date the event can occur (TE):

S = TL - TE

For events on the critical path, TL = TE and S = 0.
(Kerzner, 2001)22
• The planning required to identify the task information for the network and the
critical path analysis can identify interrelationships between tasks and
problem areas.
Each starting or ending point for a group of activities on a PERT chart is an event,
also called a node, and is denoted as a circle with an event number inside. Events
are connected by arrows with a number indicating the time duration required to go
between events. An event at the start of an arrow must be completed before the
event at the end of the arrow may begin. The expected time between events, te, is
given by:

te = (optimistic + 4 × most likely + pessimistic) / 6
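A minimal sketch of this three-point estimate, using hypothetical values; the standard deviation is usually taken as one-sixth of the optimistic-to-pessimistic range:

# Sketch of the PERT three-point estimate for a single activity:
# te = (optimistic + 4 * most_likely + pessimistic) / 6, with the standard
# deviation commonly taken as (pessimistic - optimistic) / 6.
# The three estimates below are hypothetical.
def pert_estimate(optimistic: float, most_likely: float, pessimistic: float):
    te = (optimistic + 4 * most_likely + pessimistic) / 6
    sigma = (pessimistic - optimistic) / 6
    return te, sigma

te, sigma = pert_estimate(optimistic=2, most_likely=4, pessimistic=9)
print(f"expected time = {te:.2f} weeks, standard deviation = {sigma:.2f} weeks")
# expected time = 4.50 weeks, standard deviation = 1.17 weeks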
To calculate the critical path, add the durations for each possible path through the
network. Which path is the critical path, and how long is it?
The critical path is 0-1-3-5-7-8-9-10. Obviously, with larger and more complex
networks, this calculation could be tedious by hand, but computer software is
available to do these calculations. During the project implementation, tasks which
are late in ending may delay the project, and can modify the remaining tasks' critical
path. Projects not on the critical path may be delayed by an amount equal to the
slack time without delaying the completion of the project.
What is the slack time for event 6? First one observes that event 6 is not on the
critical path calculated above. The earliest, TE, that event 6 can occur is at the 17th
week, found by path 0-1-3-5-6. Event 8 is on the critical path and occurs at the 24th
week, and since task 6-8 takes 4 weeks, the latest, TL, that event 6 can take place is
the 20th week. Using the formula for slack time: S = TL - TE = 20 - 17 = 3 weeks.
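A small sketch of the forward-pass and backward-pass arithmetic behind TE, TL, and slack, using a hypothetical network (the handbook's PERT figure is not reproduced); events are assumed to be numbered in precedence order:

# Sketch of the TE / TL / slack calculation for a small event network.
# Each entry is (from_event, to_event, duration in weeks); data are hypothetical.
tasks = [(0, 1, 3), (1, 2, 5), (1, 3, 4), (2, 4, 2), (3, 4, 6), (4, 5, 1)]

events = sorted({e for t in tasks for e in t[:2]})
finish = events[-1]

# Forward pass: earliest time each event can occur (TE).
TE = {e: 0 for e in events}
for a, b, d in sorted(tasks):                  # events are numbered in precedence order
    TE[b] = max(TE[b], TE[a] + d)

# Backward pass: latest time each event can occur without delaying the project (TL).
TL = {e: TE[finish] for e in events}
for a, b, d in sorted(tasks, reverse=True):
    TL[a] = min(TL[a], TL[b] - d)

for e in events:
    slack = TL[e] - TE[e]
    flag = "  <- on critical path" if slack == 0 else ""
    print(f"event {e}: TE = {TE[e]:2d}, TL = {TL[e]:2d}, slack = {slack}{flag}")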
For each activity, there is a normal cost and time required for completion. To crash
an activity, the duration is reduced, while costs increase. Crash, in this sense,
means to apply more resources to complete the activity in a shorter time. The
incremental cost per time saved to crash each activity on the critical path is
calculated. To complete the project in a shorter period, the activity with the lowest
incremental cost per time saved is crashed first. The critical path is recalculated.
If more reduction in project duration is needed, the next least expensive activity is
crashed. This process is repeated until the project can be completed within the time
requirements.
Using information from the PERT chart example, and adding crash times and costs,
we have:
Note that each activity arrow on the PERT chart example becomes a circle on the
CPM example. Each circle is labeled with a letter and a number: the letter indicates
the activity and the number, in this example, is the normal activity duration in
weeks. The critical path is indicated by
the thicker arrows, along path A-C-F-I-K-L-M.
Table 3.17 CPM Example

Task  Activity                  Duration (weeks)    Cost ($)             Cost/Week
                                normal    crash     normal     crash     to Crash ($)
  0   ISO 9001 Certification
  A   Planning                     4        3        2,000      3,000      1,000
  B   Select Registrar             4        3        1,000      1,200        200
  C   Write Procedures             8        6       12,000     15,000      1,500
  D   Contact Consultant           3        1          500        700        100
  E   Schedule Audit               6        5          200      1,000        800
  F   Write Quality Manual         4        3          800      1,200        400
  G   Consultant Advising         12        9        9,600     14,400      1,600
  H   Send Manual to Auditor       1        1          100        100          -
  I   Perform Training             6        4        9,000     12,000      1,500
  J   Auditor Review Manual        4        3        1,000      1,250        250
  K   Internal Audits              2        1          600        750        150
  L   ISO Audit                    1        1       10,000     10,000          -
  M   Corrective Action            3        2        1,600      2,000        400
 10   Certification Milestone              total normal cost: 48,400
Table 3.17 above shows the cost of crashing an activity, and the activity duration in
weeks if it is crashed.
What is the cost and project total duration if done in a normal manner? The time is
calculated adding the normal durations for events on the critical path A-C-F-I-K-L-M.
The normal time is 28 weeks. The total normal cost is the sum of the normal cost for
each activity, or $48,400.
The next activity to be crashed is A at a cost of $1,000 per week. After task C is
crashed, there are two critical paths, A-C-F-I-K-L-M and A-D-G-K-L-M, each 23 weeks
long. Both D and I must be crashed to shorten the critical path.
After task I is crashed, there are four critical paths, A-B-E-J-L-M, A-C-F-H-J-L-M,
A-C-F-I-K-L-M, and A-D-G-K-L-M, each 20 weeks long. Crashing any additional
activities provides no further time savings.
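The incremental cost per week saved drives the order of crashing. The sketch below uses the Table 3.17 figures for the crashable activities on the initial critical path A-C-F-I-K-L-M (L cannot be crashed); since the precedence network itself is not reproduced here, the sketch only ranks candidates rather than recalculating the path.

# Sketch of the incremental crash cost calculation from Table 3.17:
# cost per week saved = (crash cost - normal cost) / (normal weeks - crash weeks).
# In practice, only activities on the current critical path are candidates, and
# the path is recalculated after each crash.
activities = {
    # task: (normal_weeks, crash_weeks, normal_cost, crash_cost)
    "A": (4, 3, 2_000, 3_000),
    "C": (8, 6, 12_000, 15_000),
    "F": (4, 3, 800, 1_200),
    "I": (6, 4, 9_000, 12_000),
    "K": (2, 1, 600, 750),
    "M": (3, 2, 1_600, 2_000),
}

def crash_cost_per_week(normal_weeks, crash_weeks, normal_cost, crash_cost):
    return (crash_cost - normal_cost) / (normal_weeks - crash_weeks)

# Rank candidates cheapest first: the lowest cost per week saved is crashed first.
for task, data in sorted(activities.items(), key=lambda kv: crash_cost_per_week(*kv[1])):
    print(f"task {task}: ${crash_cost_per_week(*data):,.0f} per week saved")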
Figure 3.19 Total project cost (dollars) versus project duration (18 to 30 weeks) for
the crashing example.
The graph in Figure 3.19 illustrates that crashing activities beyond activity I
increases cost without further reduction in time. If this is done, it is a useless waste
of resources. The assumption made in crashing an activity is that it is independent
of other activities. This may not be a valid assumption, for example if the same
resource is needed to crash different activities in overlapping time periods.
For calculations of more complex projects, linear programming methods are used
to determine the optimal cost-time point and activities to be crashed, which satisfy
the project time constraints. The cost-time curve shown above is a convex shape
in this example. Various algorithms are used to deal with convex and concave
curves, as well as those that are neither convex nor concave, but follow a more
complex relationship.
Similar to the PERT chart, CPM includes the concept of slack time for activities.
Without crashing, activity J has a slack time of 20 - 17 = 3 weeks.
Bar charts give only an ambiguous description of how the project, as a system,
reacts to changes. The network relationships between activities, which are indicated
in PERT and CPM charts, are not shown in the Gantt chart.
An example of a Gantt chart for the ISO 9001 certification project is shown in Figure
3.20 on the following page.
Figure 3.20 Gantt Chart Example (ISO 9001 certification project; the certification
milestone falls on 21-Sep). Key: summary task, summary progress, detail task, task
progress, slack, milestone, current date, conflict, delay; C indicates the critical
path; each character represents 5 work days (1 week).
The activity network diagram incorporates many PERT and CPM techniques. The
activities, milestones, and critical times must be developed and then
drawn onto a chart. The chart will then provide a tool to help monitor, schedule,
modify, and review the project.
As with other methods, the use of Post-it® notes or 3" x 5" cards will help in the
preparation stage of the chart. A planning meeting to uncover the required activities
is required. Various creative methods may be used to generate the activities or
milestones.
Slack time (SL): The difference between the latest finish time and the earliest finish time
Figure: Activity network diagram example for producing this handbook, showing
earliest and latest event times and the critical path. Activities include: determine
the LSS BOK, locate five outside authors, make Costa Rican assignments, Costa
Rican authors complete material, Indiana and Michigan authors complete material,
proof major content, develop test questions, and publish material.
Project Documentation
The initial project documentation is the project proposal. The proposal is usually in
response to meeting an improvement objective. The proposal should include the
objective(s), project plan, and budget. Approval of the proposal is management's
indication of support for the project objectives and commitment to provide funding
and resources. During implementation of the project, status reports are the
communication vehicle to management (or the customer) on the progress and health
of the project.
The feedback loop defines the methods for monitoring and adjusting the process if
results are different than desired. Planning for feedback is analogous to designing
an automatic control system. The success or failure of a project is measured in the
following dimensions:
It is possible for a project to be considered a success, even when the project is late,
over budget, or does not meet the stated objectives. An example of this type of
success is when the project accomplishes a significant feat.
Nearly every project encounters unanticipated events or problems, but this is not an
acceptable excuse for failing to meet the performance standards. The skillful project
leader will manage the resources to resolve the issues and maintain the project
schedule and budgets. Performance is measured on results, not effort.
The project time line is the most visible yardstick for measurement of project
activities. The unit of measurement is time in minutes, hours, days, weeks, months,
or years, and is readily understood by all participants on a project. The overall
project has definite starting and ending dates, both planned and attained.
From a quality viewpoint, both early and late projects have the opportunity for poor
quality compared to the project on schedule. For projects ahead of schedule, the
skeptical question is asked, "What corners were cut?" For projects behind
schedule, an appropriate pessimistic question is, "What is not being done properly
in an effort to regain lost time?"
Methods for planning, monitoring, and controlling projects range from manual
techniques (using plain paper, graph paper, storyboards, grease boards, and colored
magnetic markers) to computer software. Advantages of the manual methods include:
• Ease of use
• Low cost
• Best for monitoring schedules and timing of events
• A hands-on feel for the status of the project
• Easily customized for the specific project needs
• Minimal training requirements
Whichever method is used by the project team, keep in mind that the method is only
a tool to organize and summarize the data. The completion of the project is the
objective, not the status boards or bar charts.
Milestones Reporting
Milestones are significant points in the project which are planned to be completed
at specific points in time. Intermediate milestones serve the purpose of refocusing
priorities on the longer range objectives, and at the same time providing status of
progress. Milestones typically occur at points where they act as a gate for a
go/no go decision to continue the project.
The date and time for the milestone and the milestone activity are set very early on
in the project planning phase. Once set and approved, the milestones are not
normally subject to change or negotiation. If the project is late on meeting a
milestone, this fact will reach the visibility of upper management quite quickly.
Final Report
The project final report is the report card on performance for completion of
objectives, comparison of actual benefits and costs with budgets, and measures of
major activity completion dates versus milestones.
Lessons Learned
The next project closure step is the postmortem analysis (also called lessons
learned, autopsy, and post-project appraisal). The analysis of what went well and
what went wrong is used as a learning tool for future projects. The intent is to avoid
making the same mistakes, and to benefit from effective processes. This review is
a formal and documented critique conducted by a committee of qualified company
personnel. The project review extends over all phases of development, from
inception to completion. Some of the fundamental review topics include:
Results of the project review will be retained, along with the other project
documentation, and archived for future reference.
Document Archiving
The final project stage is document archiving. This includes test data, traceability
of materials, key process variables, and reports generated during the project. The
documents must be complete and organized. Storage requirements include:
References
1. Albrecht, K. (1992). The Only Thing That Matters. New York: Harper Collins.
2. Besterfield, D.H., et al. (1999). Total Quality Management, 2nd ed. Upper Saddle River, NJ: Prentice Hall.
3. Bogan, C., & English, M. (1994). Benchmarking for Best Practices. New York: McGraw-Hill.
4. Breyfogle, F.W. III, et al. (2000). Managing Six Sigma. New York: John Wiley and Sons.
5. Deming, W.E. (1986). Out of the Crisis. Cambridge, MA: Massachusetts Institute of Technology, CAES.
7. Eckes, G. (2001). The Six Sigma Revolution. New York: John Wiley & Sons.
8. Feigenbaum, A.V. (1991). Total Quality Control, 3rd ed. Revised, Fortieth Anniversary Edition. New York: McGraw-Hill.
9. Frank, B., Marriott, P., & Warzusen, C. (2004). CSQE Primer. Terre Haute, IN: Quality Council of Indiana.
10. Futrell, D. (1994, April). "Ten Reasons Why Surveys Fail." Quality Progress, 27(4).
11. George, M.L. (2002). Lean Six Sigma. New York: McGraw-Hill.
12. George, M.L., Rowlands, D., & Kastle, B. (2004). What is Lean Six Sigma? New York: McGraw-Hill.
13. Goetsch, D.L., & Davis, S.B. (2000). Quality Management, Introduction to Total Quality Management for Production, Processing, and Services, 3rd ed. Upper Saddle River, NJ: Prentice Hall.
14. Griffin, A., Gleason, G., Preiss, R., & Shevenaugh, D. (1995, Winter). "Best Practice for Customer Satisfaction in Manufacturing Firms." Sloan Management Review, 36(2).
References (Continued)
15. Harry, M., & Schroeder, R. (2000). Six Sigma, The Breakthrough Management Strategy. New York: Currency/Doubleday.
16. Hauser, J.R., & Clausing, D. (1988, May-June). "The House of Quality." Harvard Business Review, 66(3), 63-73.
18. Hunter, M.R., & Van Landingham, R.D. (1994, April). "Listening to the Customer Using QFD." Quality Progress, 27(4).
20. Juran, J.M., & Gryna, F.M. (1993). Quality Planning and Analysis, 3rd ed. New York: McGraw-Hill.
21. Juran, J.M. (1999). Juran's Quality Handbook, 5th ed. New York: McGraw-Hill.
23. Martin, P., & Tate, K. (1997). Project Management Memory Jogger: A Pocket Guide for Project Teams. Salem, NH: Goal QPC.
24. May, M. (2007). The Elegant Solution: Toyota's Formula for Mastering Innovation. New York: Free Press.
25. Pande, P.S., Neuman, P.R., & Cavanagh, R.R. (2000). The Six Sigma Way. New York: McGraw-Hill.
26. Rath & Strong. (2000). Rath & Strong's Six Sigma Pocket Guide. Lexington, MA: Rath & Strong Management Consultants.
27. Snee, R.D. (1995, January). "Listening to the Voice of the Employee." Quality Progress, 28(1).
3.1. Which two of the following categories typically provide feedback to business systems and processes?

I. Customers
II. Suppliers
III. Inputs
IV. Outputs

a. I and II only
b. III and IV only
c. II and III only
d. I and IV only

3.2. Modifying or redesigning a product would most likely occur during which two of the PDCA phases?

a. Plan and Do
b. Check and Act
c. Do and Act
d. Plan and Act

3.3. Other than project development, what other business decisions can be handled with risk management?

I. Major equipment purchases
II. Changes in work flow
III. Changes in pricing
IV. Relocation of facilities

3.6. A project has more than one critical path. This means that:

a. Crashing an event might shorten the project time
b. The critical path was not calculated correctly
c. Delaying an event on a critical path may not delay the project
d. Shortening any one event cannot shorten the project duration

3.7. Using the DMAIC approach to lean six sigma improvement, at what step would the root causes of defects be identified?

a. Measure
b. Control
c. Improve
d. Analyze

3.8. Which of the following statements can NOT be made regarding the objective: "Quality will be improved next year."

a. It does not define the measures of quality
b. The statement needs a defined ending date
c. How the improvements will be made should be stated
d. It will not be possible to determine if the objective was met
3.11. According to Juran, what the customer thinks he/she desires would be classified as:

a. Stated needs
b. Real needs
c. Perceived needs
d. Cultural needs

3.12. A team would be operating in which phase of the Shewhart cycle if they were in the process of conducting a pilot program test activity?

a. Is the difference between the earliest and the latest start dates
b. Is equal to the "crash" time
c. Is neither less than nor greater than zero
d. Is the highest risk area

3.16. In a project network, the earliest an event can take place is week 45, the most likely time it will take place is week 49, and the latest it can take place, without delaying the project completion, is week 51. What is the event slack time?

a. 2 weeks
b. 4 weeks
c. 6 weeks
d. Cannot be determined from the above information

3.20. Risk analysis planning should:

I. Be done at the beginning of the project planning phase
II. Identify associated contingency plans
III. Identify and implement a metrics program
IV. Be reviewed and updated at each milestone
3.21. List the following project scheduling techniques in order of complexity, from least to most complex.

I. CPM
II. Gantt
III. PERT

a. Only the PERT method can be displayed on a Gantt chart
b. The PERT technique allows for easier crashing of project time
c. The PERT technique permits network relationships but CPM does not
d. The PERT technique is event oriented, while CPM is activity centered

3.25. In what areas would upper management be most helpful in the initiation of a lean six sigma effort?

a. Providing direct training to black belts
b. Standardizing business operations
c. Providing key resources to the organization
d. Directing the improvement projects

3.26. Identify the tool that is most similar to project management and concurrent engineering:

a. Activity network diagram
b. Pareto chart
c. Plan do check act
d. Affinity diagram

a. Large U.S. companies with like manufacturing processes
b. Small U.S. firms making similar products
c. Latin American organizations with intensive labor utilization
d. Japanese "world-class" conglomerates with diverse operations

3.30. Why is the PDCA cycle so readily accepted by most American teams and individuals?

a. It is the natural way that most people already approach problems
b. It was promoted by Dr. Deming who has a wide American following
c. It has been widely used in Japan with success
d. It requires much less work than comparable improvement techniques
3.31. Focus groups can best be defined as:

a. Small groups with a specific topic interest
b. A segmented group of suppliers
c. A segmented group of intermediate users
d. A pre-selected group of users

3.32. Stakeholders that could help define a project charter include:

I. Customers
II. Suppliers
III. Management
IV. Benchmark partners

a. I and II only
b. I and III only
c. I, II, and III only
d. I, II, III, and IV

3.33. Internal customers of a retail store generally are:

a. People who shop inside a store instead of using the internet
b. Buyers that receive special pricing discounts
c. Purchasers of the goods who buy in bulk or volume lot sizes
d. Store employees who are next in the processing sequence

3.34. Planning for feedback by the project leader during a project would NOT address:

a. The method of reporting
b. What is to be communicated
c. When reports are to be made
d. Project milestones

3.35. Properly designed surveys should avoid all of the following, EXCEPT:

a. Asking specific questions
b. Ignoring non-responses
c. Poorly defined survey issues
d. Asking too many questions

3.36. Which of the following is NOT a primary reason for periodic project reviews?

a. To highlight the project team's efforts
b. To update goal achievement
c. To review the schedule
d. To review the costs

3.37. Which of the following techniques has proven useful in translating customer needs into product design features?

a. Changing perceptions
b. Customer service principles
c. Confrontation and problem solving
d. Quality function deployment

3.38. Outstanding service companies perform which activity first:

a. Treat employees as external customers
b. Train employees extensively in customer service skills
c. Give communication skills training
d. Provide good recovery skills training

3.39. One advantage of project management is that it does NOT require:

a. Planning
b. Objectives
c. Unlimited resources
d. People

3.40. What type of expectation is met when a product has attributes that are worthwhile, but not necessarily expected?

a. Desired
b. Expected
c. Basic
d. Unanticipated
The answers to all questions are located at the end of Section XII.
Team Organization and Dynamics are presented in the following topic areas:
Initiating Teams *
A participative style of management is the best approach to ensure employee
involvement in the improvement process. Today's work force has higher
educational levels and is eager to participate in the decision making process
affecting them. There is no better way of motivating employees than to provide them
with challenging jobs which make use of their talents and abilities.
In spite of all the obvious advantages, team participation is one of the key areas
where most American companies fail. Dr. Ishikawa, a leading Japanese quality
professional, said of team involvement, "A people-building philosophy will make the
program successful, a people-using philosophy will make the program fail."
The benefits of a team approach to issues are numerous. Consider the role an
individual plays in the organization. Team members may represent the role of
supplier, processor, and customer. On a team, each member often brings different
experiences, skills, know-how, and perspectives to the issues.
Such diversity is important for most improvement teams. A single person trying to
remove a problem or deficiency, no matter how skilled, has rarely mastered the
intricacies of an entire work process. The most significant gains are usually
achieved by teams - groups of individuals pooling their talents and expertise.
* Considerable content from this Section is derived from CMQ Primer (Gee, 2005)13
and CQE Primer (Wortman, 2006)41 material.
• Can have immediate access to the technical skills and knowledge of all team
members (plus green, black, and master black belts, in the typical lean six
sigma arrangement)
• Can rely on the mutual support and cooperation that arises among team
members, as they work on a common project
• A chance to work on a project with the full support and interest of upper
management
• The satisfaction of solving a chronic problem, which may attract and/or retain
more customers, increase revenues, and reduce costs
Team Resources
Resources are time, talent, money, information, and materials. The development of
productive teams will use considerable resources. Management must optimize the
resources available to teams. The team charter is the best place to establish the
team's expectations concerning available resources.
Team Objectives
The team process can be a highly effective, people-building, potential-releasing,
goal-achieving social system that is characterized by:
Listed below are some of the reasons that teams have been successful in many
companies:
• The team procedure allows all team members to communicate and exercise
creative expression.
Team Empowerment
Most power is derived from the organization's management authority. A team is
empowered by virtue of that power that is granted to it by management. A team
charter is a very useful tool for helping a team and management understand just
exactly what the team is empowered to do.
Team members have control over the team's performance and behavior. Control is
one source of power. Information is another source of power. To be effective, teams
need information. Teams should be told everything that could possibly help them
to achieve their objectives. They should be aware of financial conditions,
organizational changes, market conditions, etc. Access to resources is a third
source of power. A team's ability to succeed will depend in part on how free it is to
use organizational resources.
Management Support
Management must give more than passive team support. This means that
management, especially middle management, must be educated to the degree that
they are enthusiastic about the team concept. The implementation of project
schedules and solutions originating from teams should be given precedence. In
order for teams to be successful, management must recognize that there will be
additional work created by their efforts. Leaders, facilitators, and team members
should be thoroughly trained in six sigma and other improvement techniques, as
required.
In spite of the potential benefits, some people are skeptical of the long-term success
of teams. These people point out that the traditional style of management in the
typical American industry carries with it such momentum that the team approach will
have little appreciable long-term effect.
There are reasonable arguments that can be expressed either for or against teams.
The important questions that need answers are: (1) Does the company have the
proper environment in which teams can survive and thrive? and (2) Does
management fully comprehend the value of teams?
Types of Teams
The following types of teams are used by industries throughout the world today:
The structure and functional roles of lean six sigma teams closely follow the
description of project and ad hoc teams that follow, with the addition of black and
master black belt support.
It should be noted that not all companies lauded as having effective lean six sigma
programs follow the same structure. Many companies use a variety of team
arrangements and provide black belt and master black belt support as necessary.
Improvement Teams
A group belonging to any department chooses to solve a quality/productivity
problem. It will continue until a reasonable solution is found and implemented. The
problem may be management selected, but the solution is team directed.
For a process improvement team, employees may be drawn from more than one
department to look into the flow of material and semi-finished goods required to
streamline the process.
Team membership can be all management, all work area, or a composite of the two.
Usually, the boundaries of the assignment are tightly drawn for project teams or ad
hoc teams. Some task forces may have broader mandates.
This type of team operates with minimal day-to-day direction from management. Self
directed teams are asked to accomplish objectives within time frames that are truly
stretch objectives. Management must give the team the maximum latitude possible
to achieve their objectives.
The concept of circles originated in Japan after WW II. They were so successful in
Japan that many managers in the United States tried to duplicate them. The circle
is a means of allowing and encouraging people on the production floor to participate
in decisions that will improve quality and/or reduce manufacturing costs.
Quality Teams
The quality circle approach has been on the decline in the USA for some time. A
variety of quality team nomenclatures have replaced the term "quality circle." The
major reasons for the shift appear to be two-fold. First, the term "quality circle" has
a strong Japanese connotation. And secondly, most circle projects tend to be
employee selected, while most team efforts are management selected, but team
directed.
In natural work teams, leadership is usually given to the area supervisor. Members
of teams come from the supervisor's work force. Outside members, from
specialized organizations, can be included in the membership, either as active
members or as contributing guests. Often, a facilitator is another important person
in this team organizational structure. He or she is specifically trained to coordinate
multiple team activities, oversee team progress, document results, and train team
members in their assorted duties.
Topic Ideas
Team agenda: Who sets? When published? Input invited? etc. Rolling agenda with priorities. Recorder to publish.

Attendance: Excused absences only. How are latecomers handled? Minimum attendance to conduct business? An obligation to be present; excused absence permitted through the team leader; latecomers to be updated on critical issues only.

Meetings: Time, frequency, place? Which meeting room? The time and frequency must be determined.

Decision process: Consensus, collaborative, majority? Can one person remove an item from the agenda?

Minutes and reports: Select a recorder. How are minutes approved? Where posted? Who types? How distributed? Use a flip chart for minutes? Is the recorder a volunteer or appointed by the chairperson? Time keeper to maintain agenda timing? Recorder to transcribe, type, and distribute minutes.

Leader role: How defined? How selected? Expectations? The leader keeps things on track and moving, makes housekeeping decisions, monitors participation, attendance, and timeliness, and helps manage conflict.

Behavioral norms: Listening; interruptions; radios, cell phones, and pagers off; no smoking; breaks called at members' discretion; limited cheap shots; empathetic listening; common courtesy is expected; feedback should be constructive, specific, and timely.

Confidentiality: What goes outside the group?

Guests: How invited? How excused?

Meeting audits: How frequent? Who is responsible?

Facilitator: How selected? Expectations? How will this role differ from the leader?

Conflict: Expected? How managed?

Recommendations: How initiated? How routed? Who is informed?

Commitments: Follow through on commitments, analysis, word processing, etc.
1. Develop an agenda
• Define goal(s)
• Identify discussion items
• Identify who should attend
• Allocate time
• Set time and place (semi-permanent if possible)
3. Start on time
6. Reinforce:
• Participation
• Consensus building
• Conflict resolution
• Problem solving process
(Figure 4.2 Sample Meeting Forms: an agenda/minutes form recording the date, members present, and group accomplishments, and a facilitator's log recording dates, comments, and concerns.)
The result is a very delicate situation for the team leader or sponsoring manager.
Both the team and the manager should have a series of frank discussions with the
individual. The conversations should center on what's expected, what's at stake,
and what's not happening that needs to happen, or what is happening that shouldn't
be happening. If the situation doesn't improve, the team member must be removed.
Team Size
A team can consist of members from only one area or can be made up of a group of
representatives from different parts of the organization. Each person may be a
subject matter expert who understands the processes and activities at issue. It is
usually impractical to include every person who could be involved.
Conventional wisdom is that teams over 20 people, some think over 15, become too
unwieldy and lose the active participation of all team members. Teams of 4 people
or less may not generate enough ideas. A major change management principle
embraces the notion that people will more readily accept and support a change, if
they are included in the development of the solution. This presents a major dilemma
for teams: How can the team be kept small enough to effectively work together and
at the same time involve everyone? This is not a trivial matter in large organizations
that may have several hundred people actively supporting a work process.
Extending the group to customers of the process generates an even larger group of
people whose collective buy-in is needed to ensure successful change.
Special efforts have to be used to involve the larger group in the understanding of
the initial team's charter and the collection of needed information. Input and ideas
should be sought from the larger group as the solution set is developed. Successful
teams organize, develop, and implement a communication plan to gain the
participation, support, and ultimately, the commitment of an entire department or
operation.
Team Diversity
To achieve optimum performance, a team often needs diversity in the orientation of
its individual team members. Some team members are needed who are primarily
oriented towards task and target date accomplishment. Other team members will
be needed who hold process, planning, organization and methods in the highest
regard. Teams also need members who nurture, encourage and communicate well.
Teams need some members who are creative and innovative. This quality is helpful
when product design, inspiration, optimism or humor is needed.
Team Roles
While some organizations choose to use different names and definitions, most
successful organizations have implemented the following roles in their black belt
program.
The definitions in this portion of the Handbook are a combination of the author's
experience and references at the end of this Section.
Black Belts
Black belts are most effective in full-time process improvement positions. The term
black belt is borrowed from the martial arts, where the black belt is the expert who
coaches and trains others as well as demonstrates a mastery of the art. In a similar
way, lean six sigma black belts are individuals who have studied and demonstrated
skill in implementation of the principles, practices, and techniques of lean six sigma
for maximum cost reduction and profit improvement.
Black belts typically demonstrate their skill through significant positive financial
impact and customer benefits on multiple projects. Black belts may be utilized as
team leaders, responsible for measuring, analyzing, improving and controlling key
processes that influence customer satisfaction and/or productivity growth. Black
belts may also operate as internal consultants, working with a number of teams at
once. They may also be utilized as instructors for problem solving and statistics
classes. Black belts are encouraged to mentor green belt and black belt candidates.
Lean six sigma master black belts are typically in full-time process improvement
positions. They are, first and foremost, teachers who mentor black belts and review
their projects. Selection" criteria for master black belts includes both quantitative
skills and the ability to teach and mentor. For master black belt recognition, an
individual must be an active black belt who continues to demonstrate skill through
significant, positive, financial impact and customer benefits on projects. The ability
to teach and mentor is evaluated by reviewing the number and caliber of people they
have developed. Teaching may also be demonstrated in classroom environments.
Green belts are not usually in full-time process improvement positions. The term
green belt is also borrowed from the martial arts. Green belt refers to an individual
who has mastered the basic skills. Green belts may be black belts in training,
having less experience than full black belts. Green belts must demonstrate
proficiency with the core statistical tools by using them for positive financial impact
and customer benefits on a few projects. In some organizations, individuals may
remain a green belt for several years. Green belts operate under the supervision and
guidance of a black belt or master black belt.
Executive Sponsors
Champions
Lean six sigma champions are typically upper level managers that control and
allocate resources to promote process improvements and black belt development.
Champions are trained in the core concepts of lean six sigma and deployment
strategies. With this training, champions lead the implementation of the lean six
sigma program. Champions also work with black belts to ensure that senior
management is aware of the status of six sigma deployment. Champions ensure
that resources are available for training and project completion. They are involved
in all project reviews in their area of influence.
Process Owners
Key processes should have a process owner. A process owner coordinates process
improvement activities and monitors progress on a regular basis. Process owners
work with black belts to improve the processes for which they are responsible.
Process owners should have basic training in the core statistical tools, but will
typically only gain proficiency with those techniques used to improve their individual
processes. In some organizations, process owners may be lean six sigma
champions or sponsors.
• Selecting teams. Once a project has been identified, the steering committee
appoints a team to see the project through the remaining steps of the
improvement process.
• Supporting project teams. Lean six sigma teams are generally required
to make significant improvements. It is up to the steering committee to see
that improvement teams are well prepared and equipped to carry out their
mission. Steering committee support may include:
• Providing lean six sigma training to black belts and green belts
• Providing training in team tools and techniques to other team members
• Providing a trained leader or facilitator to help the team work efficiently
• Reviewing team progress
• Approving revisions of the project mission
• Identifying/helping with team-related problems
• Helping with logistics, such as meeting sites
• Providing expertise in data analysis and/or survey design (black belts)
• Furnishing resources for unusually demanding data collection
• Communicating project results throughout the organization
* Various companies call this committee the lean six sigma steering committee, the
management council, or the executive steering committee.
Team Facilitation
As noted earlier, the team leader in lean six sigma and other team arrangements is
often the facilitator. However, many companies find facilitators useful both for team
start-ups and for a variety of other team arrangements. The team leader and/or
facilitator must understand group dynamics and how a group moves through
developmental stages (for example: forming, storming, norming and performing).
If there is no facilitator, the team leader, an assigned black belt, or a coach must
assume many of the above duties.
The leader's role is not to "boss" the team, but to ensure implementation of the
team's mission and charter. Facilitation and leadership requirements often diminish
as capability is developed within the team. Refer to the diagram below:
(Diagram: involvement plotted against time. The combined involvement of the team members rises over the life of the team while the leader's involvement declines.)
Action-oriented roles:
• Shaper: Shapers are highly motivated people with a lot of drive, energy, and
need for achievement. They often seem to be aggressive extroverts.
• Implementer: Implementers are well organized and have practical sense. They
favor hard work and tackle problems in a systematic fashion.
People-oriented roles:
• Team worker: Team workers are the most supportive members of a team.
They are sociable and adaptive to different situations and people.
• Plant*: Plants are innovators and can be highly creative. They provide the
seeds and ideas from which major developments spring.
Completer Finisher: Finishers find errors and omissions. They deliver their contributions on time and pay attention to details. Weaknesses: they worry unduly, are reluctant to delegate, and tend to be over anxious.

Teamworker: Team workers tend to keep team spirit up and allow other members to contribute. They bring cooperation and diplomacy to a team. Weaknesses: they tend to be indecisive in moments of crisis and are reluctant to offend.

Plant (Innovator): A plant brings creativity, ideas, and imagination to a team. They can solve difficult problems. Weaknesses: plants ignore incidentals and may be too preoccupied to communicate effectively.

Monitor Evaluator: The monitor evaluator is not deflected by emotional arguments. They are serious minded and bring objectivity and judgment to options. Weaknesses: the evaluator may appear dry, boring, and overcritical, and is not good at inspiring others.

Specialist: Specialists bring dedication and initiative. They provide needed knowledge and technical skills. Weaknesses: they may contribute only on a narrow front and dwell on technicalities.
Team Stages
Most teams go through four development stages before they become productive:
forming, storming, norming, and performing. These stages can also be cyclical.
Individuals may be storming with one teammate and performing with another. Bruce
W. Tuckman (1965)39 first identified the four development stages.
Forming
Forming is the beginning of team life. Expectations are unclear. Members test the
water. Interactions are superficial. This is the honeymoon stage. When a team
forms, its members typically start out by exploring the boundaries of acceptable
group behavior. As each member makes the transition from individual to team
member, each looks to the team leader (or facilitator) for guidance as to his or her
role and responsibilities.
Storming
The second phase consists of conflict and resistance to the group's task and
structure. There are healthy and unhealthy types of storming. Conflict often occurs
in the following major areas: authority issues, vision and values dissonance, and
personality and cultural differences. However, if dealt with appropriately, these
stumbling blocks can be turned into performance later.
This is the most difficult stage for any team to work through. Teams realize how
much work lies ahead and feel overwhelmed. They want the project to move forward
but are not yet expert at team improvement skills. They often cling to their own
opinions, based on personal experience, and resist seeking the opinions of others.
This can lead to hurt feelings and unnecessary disputes. A disciplined use of the
quality improvement process and the proper tools and communication skills can
assist team members to express their various theories, lower their anxiety levels,
and reduce the urge to assign blame.
Norming
During the third phase, a sense of group cohesion develops. Team members use
more energy on data collection and analysis as they begin to test theories and
identify root causes. Members accept other team members and develop norms for
resolving conflicts, making decisions, and completing assignments. Norming takes
place in three ways. First, as storming is overcome, the team becomes more relaxed
and steady. Conflicts are no longer as frequent and no longer throw the team off
course. Second, norming occurs when the team develops a routine. Scheduled
team meetings give a sense of predictability and orientation. Third, norming is
cultivated through team-building events and activities. Norming is a necessary
transition stage. A team can't perform if it doesn't norm.
Performing
This is the payoff stage. The group has developed its relationships, structure, and
purpose. The team begins to tackle the tasks at hand. The team begins to work
effectively and cohesively. During this stage, the team may still have its ups and
downs. Occasionally, feelings that surfaced during the storming stage may recur.
Refer to Figure 4.6 for a graphical display of the performance of teams as they
advance through the team evolutionary stages.
Performance (vertical axis) increases with time (horizontal axis) as the team moves up through the stages:

Forming. Members are: inexperienced, excited, anxious, proud.

Storming. Members: have confrontation, think individually, are learning roles, have divided loyalties.

Norming. Members: cooperate, talk things out, focus on objectives, have fewer conflicts.

Performing. Members: show maturity, focus on the process, achieve goals, operate smoothly.

Figure 4.6 Schematic of Team Development Phases
Adjourning
At the end of most lean six sigma projects the team disbands. This step is called
adjourning to rhyme with four other team stages (forming, storming, norming and
performing). Adjourning is also a very common practice for other project teams,
task forces, and ad hoc teams.
The behaviors, feelings, and improvement approaches typical of each stage can be summarized as follows:

Forming
BEHAVIORS: Lack of task focus; difficulty in defining problems; uneven participation; ineffective decision making; resistance to team building.
FEELINGS: Excitement, anticipation, and pride; shaky alliance to the team; suspicion, fear, and anxiety.
HOW TO IMPROVE: Take time to become acquainted; establish mission and goals; establish team ground rules; add structure to meetings; train members in team concepts; encourage equal participation.

Storming
BEHAVIORS: Problem solving is superficial; there is petty arguing; hidden agendas and cliques emerge; decisions don't come easily; plenty of uncertainties persist.
FEELINGS: Resistance is seen; individual attitudes vary widely; anger and jealousy abound; roles and responsibilities are unclear.
HOW TO IMPROVE: Follow a problem solving format; clearly define roles; debrief meetings for content and process; deal openly with conflict; work to expose hidden agendas; focus the team on goals.

Norming
BEHAVIORS: Attitudes improve; trust and commitment grow; some goals and objectives are achieved; feedback becomes regular and objective; conflicts are dealt with and resolved; the leader receives respect; some leadership is shared by the team.
FEELINGS: Comfort with giving feedback; comfort with receiving feedback; sense of cohesion and spirit; friendlier and more open exchanges.
HOW TO IMPROVE: Evaluate team performance; periodic summaries of progress; create ties outside of the team.

Performing
BEHAVIORS: Members work through problems; members manage the group process; there is creativity and informality; high levels of unity and spirit are seen; close bonds form.
FEELINGS: Self improvement is noted; acceptance of weakness; appreciation of strengths; satisfaction with team progress; the team knows clearly what it is doing.
HOW TO IMPROVE: Promote openness; permit more self direction; establish new goals.

(Tuckman, 1965)39
(Figure 4.8 Team Life Cycle Characteristics: team effectiveness plotted against time, rising through the build stage toward the optimize stage.)
Team Dynamics
On this and the following pages are a number of issues (in addition to team
development stages) that fall into the category of team dynamics. These include
team recognition, a number of team problem areas, communication challenges, and
negotiating techniques.
Intangible Items:
• Satisfaction
• Pleasure
• Friendship
• Learning experience
• Thanks
• Admiration
• Notoriety
• Prestige
Individual rewards and recognition are best received when they are personal to the
individual receiving them. If the award is unique, it has greater value than the same
recognition a second or third time.
Team rewards should be the same for all members of the team. Intangible rewards
are generally not given from one person to another, yet people may receive them
as a result of their activities. Probably one of the best rewards is "thank you" when
it is sincerely meant.
Groupthink
One aspect of group cohesiveness can work to a team's disadvantage. Members of
highly cohesive groups may publicly agree with actual or suggested courses of
action, while privately having serious doubts about them. Strong feelings of group
loyalty can make it hard for members to criticize and evaluate other's ideas and
suggestions. Desiring to hold the group together and avoid disagreements may lead
to poor decision making. Psychologist Irving Janis (1971 )23 calls this phenomenon
"groupthink," the tendency for highly cohesive groups to lose their critical
evaluative capabilities. Janis ties a variety of well-known historical blunders to
groupthink, including the lack of preparedness of the U.S. naval forces for the 1941
Japanese attack on Pearl Harbor, the Bay of Pigs invasion under President Kennedy,
and the many roads that led to the USA's involvement in Vietnam.
Irving Janis (1971 )23 describes groupthink as: "A mode of thinking that people
engage in when they are deeply involved in a cohesive in-group, when the members'
strivings for unanimity override their motivation to realistically appraise alternative
courses of action." "The more amiability and esprit de corps among members of a
policy- making in-group, the greater is the danger that independent critical thinking
will be replaced by groupthink."
Risky-Shift
Many people think that the proposed solutions to team projects would be fairly
conservative. That is, any proposed solution is an "averaged" remedy. However,
those experienced with team mechanics and dynamics have found the opposite to
be the case. Teams often get swept up with expansive and expensive remedies.
There are ways to combat this tendency. One way is to discuss risky-shift openly
in the initial training. Another approach is to have a team member (after full
discussion of the issue) ask the question, "If this were our personal money, would we
still risk it on the proposed solution?"
Team problem areas, their symptoms, and possible remedies include:

Floundering
Symptoms: Members seem overwhelmed; decisions are postponed.
Remedies: Review the team purpose; ask "How can we proceed?"

Dominant participants
Symptoms: Members interrupt others; members dominate the conversation.
Remedies: Promote equal participation; structure the discussion.

Overbearing participants
Symptoms: A member has excessive influence; a member has legitimate authority; a member is an "expert."
Remedies: Reinforce team concepts; ask the expert to lead the group; have a private discussion with the "expert."

Negative Nellies
Symptoms: Members say "We tried that already"; members defend their turf; members are negative of suggestions.
Remedies: Reinforce the positive; ask for other points of view; separate idea generation from criticism.

Opinions as facts
Symptoms: Members present opinions as facts; members make unfounded assumptions; self assurance is seen as unquestionable.
Remedies: Ask for supporting data; question opinions and assumptions; see the groupthink discussion.

Shy members
Symptoms: Members are reluctant to speak; members are afraid of making mistakes.
Remedies: Structure group participation; direct conversation their way.

Jump to solutions
Symptoms: Members rush to accomplish something; members avoid data collection and analysis; members want immediate decisions.
Remedies: Reinforce the need for data analysis; ask for alternate solutions; slow the process down.

Attributions
Symptoms: Members make causal inferences; members don't seek real explanations; members make psychological judgments.
Remedies: Challenge assumptions; challenge judgments; ask for data to support conclusions.

Put-downs (discounts and plops)
Symptoms: A member's comments are ignored; members are not listening; the meaning of a suggestion is missed; sarcasm is noted.
Remedies: Encourage active listening; encourage equal participation; talk to parties privately; promote uniform idea consideration.

Wanderlust (tangents and digressions)
Symptoms: Conversations stray from the main topic; sensitive issues are avoided; the group pursues tangents.
Remedies: Follow a written agenda; reinforce team operating guidelines; redirect the discussion.

Feuding
Symptoms: Win-lose hostilities emerge; the team takes entrenched sides; some members become spectators.
Remedies: Confront the adversaries alone; reinforce team operating guidelines; replace the guilty parties if necessary.

Risky-shift
Symptoms: Expansive and expensive remedies are suggested (using company money).
Remedies: Ask "If this were my personal money, would I still spend it?"
All of the above problem areas can be minimized with proper team training and team
member awareness. A portion of the above Table was modified from Lorber (2001 )26.
Many of the problem areas are identified by Scholtes (1996)34.
Communication Techniques
For any organization to survive, information must continually flow vertically and
horizontally across the company. A good team leader must be able to understand
the process flow and how to use it. Motivating people, collaborating with people,
and getting things done by people are all accomplished with communication. An
effective leader will operate as a:
Team leaders must relay information and give orders and directives to team
members and peers. Typical information includes:
Misleading Information
Misleading information has many origins and causes. There are three important
reasons why management may not receive accurate and complete information:
Horizontal Communications
Special Roles
Gatekeepers are described as individuals who are at the crossroads of
communications channels. They are centers of information, normally because of
their jobs. People and groups pass information to the gatekeeper due to this
positioning. Boundary spanners are individuals who have positions that link them
with others outside of their work units. They exist to exchange information between
groups. (Gordon, 1991 )14
The spoken word via the telephone, face-to-face, formal briefings, videotapes, and
even the internet are forms of oral communications. Examples of written
communications include letters, reports, computer messages, and e-mails. The
written forms can be described as one-way channels. That is, in one-way channels
feedback or response to a report or a posting is not immediate. Face-to-face
meetings generally allow for immediate feedback from the receiver to the sender
(two-way communications).
There are a variety of nonverbal communication signals from people that the
manager might be able to pick up from face-to-face or group meetings. The
nonverbal signals include:
In the use of interpersonal space (proxemics), certain cultures are comfortable with
a set spacing. Note that not all "non-verbal" signals really mean what others claim
them to mean. For instance, folded arms across the front of the body is supposed
to indicate a defensive posture by the person. It might also mean that the arms are
comfortable in that position. (Schermerhorn, 1993)33
Important factors always present in motivating people are verbal and written
communication skills. The ability to explain and clarify has been long recognized.
Speaking and writing abilities are essential for leadership success.
Questioning Techniques
The skillful use of questioning is of great value to the team leader. Scholtes (1996)34
suggests the following seven key questions that managers should ask:
Auvine (1978)3 provides some additional ideas for the art of asking questions:
• Avoid leading questions: let the group or individual draw their own conclusion
• Phrase questions in a positive manner
• Prepare questions in advance of the interview
In line with this, the use of open ended questions will allow for some discussion and
probing rather than just a simple "yes" or "no" answer.
Listening, the other half of the communication concept, has received far too little
attention. Effective project leaders have learned the art of listening. Verbal
information can often be very difficult to understand, even when active listening
takes place. A project leader will spend up to 45% of the time being a listener.
Active listening is recommended. Active listening is defined as helping find the
source of problems or meaning. A passive listener will respond in a manner that will
discourage the message sender from saying more.
Many of us would rather hear ourselves speak than listen to another person. The
good news is listening skills can be learned and developed by practice.
Negotiation Techniques
Nierenberg (1986)28 states that negotiating is the act of exchanging ideas or
changing relationships to meet a need. As common and as important as negotiating
is in everyday life, most people learn to negotiate through trial and error.
Negotiating should not be a process of using overwhelming and irresistible force on
the other party. Some degree of cooperation must be employed in the process.
Many negotiating methods are used successfully. However, in dealing with people
in a business context, the best approach is to think win-win. The concept of win-win
negotiating is for both sides to emerge with a successful deal.
Conflict Resolution
Conflict is the result of mutually exclusive objectives or views, manifested by
emotional responses such as anger, fear, frustration, and elation. Some conflicts
are inevitable in human relationships. When one's actions may be controlled by the
actions of another, there is opportunity for conflict. Common causes of conflict
include:
The results of conflicts may be positive in some instances, negative in some, and
irrelevant in others. Irrelevant conflicts occur when the outcome has neither positive
nor negative effects for either party.
Each individual uses a number of ways to deal with conflicts depending upon the
circumstances and the relationships between the people involved. Whether a
conflict resolution method is appropriate or effective will also depend on the
situation. The ways of dealing with conflicts can be depicted in a two dimensional
model for conflict handling behavior, adapted from Thomas-Kilmann Conflict Mode
Instrument: (Thomas, 1975)37
(Thomas-Kilmann conflict-handling model: assertiveness on the vertical axis, from unassertive to assertive, and cooperativeness on the horizontal axis, from uncooperative to cooperative. Competing (assertive, uncooperative) and collaborating (assertive, cooperative) sit at the top, avoiding (unassertive, uncooperative) and accommodating (unassertive, cooperative) at the bottom, with compromising in the middle.)
There is no specific right or wrong method for handling conflicts. The method that
works best depends upon the situation and the interactions between the affected
parties. The project leader must be able to understand and use all of the conflict
resolution techniques, as is appropriate for the situation. The following are general
applications for the various conflict handling methods:
• Compromising is used when two opponents have equal power and the goals
are not worth the effort or disruption of mutually exclusive solutions.
Team Tools
Often parties need to arrive at a decision or problem resolution using team tools.
Several useful techniques are presented below.
Nominal Group Technique
• A problem is presented.
• Before any discussion is held, all members create ideas silently and
individually onto a sheet of paper for about 5 to 10 minutes.
• The facilitator then requests an idea from each member in sequence. Each
idea is recorded until ideas are exhausted.
• Voting for the best solution idea is then conducted (rank ordering, priority
ratings, etc.). Several rounds of voting may be needed before a "best" idea
is found. One voting method employs the use of cards and a Pareto
breakdown of favored ideas.
The facilitator should allow about 60 to 90 minutes for a problem solving session.
As with brainstorming sessions, the facilitator should avoid trying to influence the
problem solving process. The chief advantage of this technique is that the group
meets formally, and yet encourages independent thinking. The authors of this
Handbook feel that exposure to other problem techniques is useful for the NGT team
members.
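As a small, hypothetical illustration of the rank-ordering vote described above (member labels, ideas, and rankings are invented for the sketch), the facilitator can total each member's priority rankings; the idea with the lowest total rank is the group's leading candidate:

# Hypothetical rank-order vote: each member ranks the surviving ideas (1 = top priority).
rankings = {
    "Member 1": {"Reduce setup time": 1, "Color-code bins": 3, "Add a checklist": 2},
    "Member 2": {"Reduce setup time": 2, "Color-code bins": 1, "Add a checklist": 3},
    "Member 3": {"Reduce setup time": 1, "Color-code bins": 2, "Add a checklist": 3},
}

totals = {}
for member_ranks in rankings.values():
    for idea, rank in member_ranks.items():
        totals[idea] = totals.get(idea, 0) + rank

# The lowest total rank is the group's current "best" idea; further voting
# rounds can break near-ties before a final selection is made.
for idea, score in sorted(totals.items(), key=lambda item: item[1]):
    print(f"{score:>3}  {idea}")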
Multivoting
Multivoting is a popular way to select the most popular or potentially most important
items from a previously generated list. Quite often, in a team environment, a list of
ideas is generated by simple brainstorming. In many cases, brainstorming can be
used to segregate potential causes into their 4 or 5 M components in a cause-and-
effect diagram. The list that is generated can consist of either ideas for improvement
or potential causes for a problem (too much scrap, too much inventory, excessive
downtime, etc.).
Having a list of ideas does not translate into action. Often, there are too many items
for a team to work on at a single time. Additional data or experimentation can help
identify significant items. However, it is often worthwhile to narrow the field to a few
items worthy of immediate attention.
According to Scholtes (1996)34, multivoting is useful for this objective and consists
of the following steps:
4. Allow members to choose several items that they feel are most important. A
suggested guide is to permit each member a number of choices equal to at
least one-third of the total items on the list.
5. Members may make their initial choices silently and then the votes are tallied.
This is usually done by a show of hands as each item is announced.
6. To reduce the list, eliminate those items with the fewest votes. Group size will
affect the results. The facilitator may choose to eliminate items receiving 0-4
votes.
It should be noted that most problem solving teams can only work on two or three
items at a time. The items receiving the largest number of votes are usually worked
on or implemented first. The original list should be saved for future reference and/or
action.
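A minimal sketch of the tally-and-eliminate arithmetic, using invented items and vote counts, follows; the one-third guideline for choices per member and a 0-4 elimination cutoff come from the steps above:

# Hypothetical brainstormed items and the votes each received in one round.
votes = {
    "Excess scrap on line 2": 9,
    "Late supplier deliveries": 7,
    "Unclear work instructions": 5,
    "Tooling changeover time": 3,
    "Lighting in the packing area": 1,
}

choices_per_member = max(1, round(len(votes) / 3))   # roughly one-third of the list
cutoff = 4                                           # items with 0-4 votes are dropped

survivors = {item: n for item, n in votes.items() if n > cutoff}
print(f"Each member gets about {choices_per_member} choices per round.")
print("Items carried forward (teams usually work only the top two or three):")
for item, n in sorted(survivors.items(), key=lambda kv: -kv[1]):
    print(f"  {n:>2} votes  {item}")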
Brainstorming
Brainstorming is an intentionally uninhibited technique for generating creative ideas
when the best solution is not obvious. The brainstorming technique is widely used
to generate ideas when using the fishbone (cause-and-effect) diagram.
Generate a large number of ideas: Don't inhibit anyone. Just let the ideas out. The
important thing is quantity, but record the ideas one at a time.
Don't criticize: There will be ample time after the session to sift through the ideas
for the good ones. During the session, do not criticize ideas because that might
inhibit others.
Record all the ideas: Appoint a recorder to write down everything suggested. Don't
edit the ideas, just jot them down as they are mentioned. Keep a permanent record
that can be read.
Let ideas incubate: You are freeing the subconscious mind to be creative. Let it do
its work by giving it time. Don't discontinue brainstorming sessions too soon.
Consider adding to the list at another meeting.
Select an appropriate meeting place: A place that is comfortable, casual, and the
right size will greatly enhance a brainstorming session.
Brainstorming, just like the cause-and-effect diagram, does not necessarily solve
problems or create a corrective action plan. It can be effectively used with other
techniques, such as multivoting, to arrive at a consensus as to an appropriate
course of action. It is a participative method to help work teams achieve their goals
and objectives.
(Meeting evaluation scale: items such as establish agenda, stick to subject, build on positive, active listening, participation, and consensus are each rated on a scale ranging from clarity to uncertainty about goals and content.)
Management Presentations
Another useful technique in team performance evaluation is the formal management
presentation.
Management presentations are opportunities for lean six sigma and other
improvement teams to:
• Display skills
• Show accomplishments
• Summarize projects
References
1. 12Manage. (2007). "Belbin Team Roles." Downloaded January 5, 2007 from http://www.12manage.com/description_belbin_team_roles_theory.html
2. Albrecht, K. (1992). The Only Thing That Matters. New York: Harper Collins.
3. Auvine, B., Densmore, B., Extrom, M., Poole, S., & Shanklin, M. (1978). A Manual for Group Facilitators. Madison, WI: The Center for Conflict Resolution.
4. Bateman, T.S., & Zeithaml, C.P. (1993). Management: Function & Strategy, 2nd ed. Homewood, IL: Irwin.
5. Besterfield, D.H., et al. (1999). Total Quality Management, 2nd ed. Upper Saddle River, NJ: Prentice Hall.
6. Breyfogle, F.W. III, et al. (2000). Managing Six Sigma. New York: John Wiley and Sons.
7. Deming, W.E. (1993). The New Economics. MIT Center for Advanced Engineering Study.
8. Eitington, J. (1989). The Winning Trainer, 2nd ed. Houston: Gulf Publishing.
9. Feigenbaum, A.V. (1991). Total Quality Control, 3rd ed. Revised, Fortieth Anniversary Edition. New York: McGraw-Hill.
10. Furlong, C.B. (1993). Marketing for Keeps: Building Your Business by Retaining Your Customers. New York: John Wiley.
11. Futrell, D. (1994, April). "Ten Reasons Why Surveys Fail." Quality Progress, 27(4).
12. Geddes, L. (1993). Through the Customer's Eyes. New York: AMACOM.
13. Gee, G., Richardson, W.R., & Wortman, B.L. (2005). CMQ Primer. Terre Haute, IN: Quality Council of Indiana.
14. Gordon, J.R. (1991). A Diagnostic Approach to Organizational Behavior, 3rd ed. Boston: Allyn and Bacon.
References (Continued)
15. Griffin, A., Gleason, G., Preiss, R., & Shevenaugh, D. (1995, Winter). "Best Practice for Customer Satisfaction in Manufacturing Firms." Sloan Management Review, 36(2).
16. Harry, M., & Schroeder, R. (2000). Six Sigma, The Breakthrough Management Strategy. New York: Currency/Doubleday.
17. Hauser, J.R., & Clausing, D. (1988, May-June). "The House of Quality." Harvard Business Review, 66(3), 63-73.
18. Hunter, M.R., & Van Landingham, R.D. (1994, April). "Listening to the Customer Using QFD." Quality Progress, 27(4).
20. Heskett, J.L., Jones, T.O., Loveman, G.W., Sasser, Jr., W.E.S., & Schlesinger, L. (1994, March-April). "Putting the Service-Profit Chain to Work." Harvard Business Review.
22. Ishikawa, K. (1985). What Is Total Quality Control? The Japanese Way. Englewood Cliffs, NJ: Prentice Hall.
24. Juran, J.M. (1999). Juran's Quality Handbook, 5th ed. New York: McGraw-Hill.
28. Nierenberg, G. (1986). Complete Negotiator. New York: Nierenberg & Zeif.
References (Continued)
29. Osborne, D., & Gaebler, T. (1992). Reinventing Government: How the Entrepreneurial Spirit is Transforming the Public Sector. New York: Addison-Wesley.
30. Pande, P.S., Neuman, P.R., & Cavanagh, R.R. (2000). The Six Sigma Way. New York: McGraw-Hill.
31. Pearson, T.A. (1999, May). "Measurements and the Knowledge Revolution." Presented at the ASQ Annual Quality Congress.
32. Schaaf, D., & Zemke, R. (1989). The Service Edge. New York: Plume.
33. Schermerhorn, J.R., Jr. (1993). Management for Productivity, 4th ed. New York: John Wiley & Sons.
34. Scholtes, P.R., Joiner, B.L., & Streibel, B.J. (1996). The Team Handbook, 2nd ed. Madison, WI: Oriel, Inc.
36. Snee, R.D. (1995, January). "Listening to the Voice of the Employee." Quality Progress, 28(1).
40. Whiteley, R.C. (1991). The Customer Driven Company: Moving from Talk to Action. Reading, MA: Addison-Wesley.
41. Wortman, B.L., Carlson, D.R., & Richardson, W.R. (2006). CQE Primer. Terre Haute, IN: Quality Council of Indiana.
4.1. Which of the following places should NOT be considered when selecting
improvement team members?

a. Where the sources or causes of the problem may be found
b. Among those with special knowledge, information, or skills
c. In areas that can be helpful in developing remedies
d. Among those who reach consensus easily

4.2. The normal order of the stages of a team's development are often referred to as?

a. Forming, norming, storming, performing
b. Develop, build, optimize
c. Forming, storming, norming, and performing
d. Forming, building, optimizing, and performing

4.3. A skilled team facilitator will:

a. Dominate group discussions
b. Correct the group when their ideas are wrong
c. Take sides when one side is correct
d. Make sure all opinions are heard

4.4. Which two of the following teams are likely to disband upon completion of a
specific job or application?

I. Self-directed
II. Project
III. Cross-functional
IV. Improvement

a. I and II only
b. I and III only
c. II and III only
d. III and IV only

4.5. The proper sizing of a team is:

a. Something that senior management should consistently determine
b. Best based upon the inclusion of representatives from all affected areas
c. A straightforward management decision
d. A complex balancing of considerations of inclusion and manageability

4.6. Identify the communication technique that has the widest universal acceptance.

a. The use of pictures and charts
b. The use of language translation tools
c. The standardization to English as a business language
d. The invention of Spanglish which combines English and Spanish

4.7. When selecting potential team members, it is important that candidates:

a. Are from the same work group
b. Are at the same level within the organization
c. Can discontinue their normal duties while participating on the team
d. Have a reason to work together

4.8. Prior to participating on a team, each team member should:

a. Be trained in problem solving techniques
b. Have undertaken a study of group dynamics
c. Have participated in a team motivation seminar
d. Have a knowledge of the team's intended purpose

4.9. Divided member loyalties, having personal confrontations, thinking individually,
and learning team member roles are indicators that a team is in which stage of its
evolution?

a. Forming
b. Storming
c. Performing
d. Norming

4.10. Which lean six sigma role is most likely to define objectives for an improvement
team?

a. Leader
b. Sponsor
c. Facilitator
d. Member
4.11. It is noted that the involvement and participation of which of the following team
roles expands and increases over time?

a. The team leader should disband the team
b. The facilitator should guide the team through the process
c. The sergeant at arms should ask one group to excuse themselves
d. The team should split and have each group work on solutions

4.14. With regard to conditions on the work floor, in what order should upper
management consider the value of his/her information?

I. Upward flow (from below)
II. Downward flow (from the top)
III. Horizontal flow
IV. Informal networks

a. I, IV, III, II
b. II, I, III, IV
c. III, I, IV, II
d. I, II, III, IV

4.15. Valid reasons for removing a team member from a team include:

I. Personality conflicts
II. Lack of skills by the member
III. Poor team meeting attendance
IV. Member needed for other activities

4.16. A team facilitator or leader can use all of the following tactics to combat
digression and tangents within a work team, EXCEPT?

4.18. Natural work teams share in common:

a. Their attitudes and experiences
b. Their motivation and training
c. Their involvement in the problem to be addressed
d. Their concern about the large issues of the organization

4.19. A team commissioned to design and install an automated part degreaser is
which type of team?

a. Self-directed
b. Autonomous
c. Improvement
d. Project

4.20. Cross-functional membership would NOT typically be used for which of the
following?

a. Self-directed teams
b. Six sigma teams
c. Ad hoc improvement teams
d. Project improvement teams
4.21. Which of the following would be considered the three most important team
roles?

4.22. Which of the following statements is a true description of a team during the
storming stage?

a. The team wants the project to move forward
b. The team works cohesively
c. The team members are very cooperative
d. The team operates smoothly

4.23. The role of the team recorder is to:

a. Assign responsibilities
b. Take clear notes of the meetings
c. Select team members
d. Restrain disruptive team members

A. Why not do it this way, since it is only a "drop in the bucket?"
B. If this were my money, would I spend it on this solution?
C. Is this what a "world-class" company would do?
D. Since the organization has considerable resources, can we solve it quickly?

4.25. The best way to disseminate information about a new quality program is to:

a. Send an e-mail to all employees
b. Post the announcement on the company bulletin board
c. Send a memo to all department heads
d. Use a combination of media

4.26. The main reason to analyze behavior patterns among improvement team
members is to:

4.27. The cross-functional team approach to quality improvement is effective
because:

a. Nobody is ultimately responsible for the results
b. "Finger pointing" has an opportunity to get sorted out once and for all
c. The diversity of team members brings a complete working knowledge of the
processes
d. Problems are solved on the spot

4.28. The most important resource that upper management can provide a natural
work team is:

4.29. A successful team effort should produce all of the following benefits, EXCEPT:

a. Improved worker morale
b. A decreased need for management efforts
c. Improved communications between managers and team members
d. Cost savings from participative problem solving

4.30. Identify the INCORRECT statement from the choices below:

a. The team leader is always the supervisor
b. The recorder maintains the team's minutes and agendas
c. The facilitator should have training in improvement techniques
d. Each team member is responsible for completing assignments between meetings
4.31. The greatest value a team can provide to an organization is:

a. Implementing solutions
b. Having well planned meetings
c. Identifying organization wide problems
d. Meeting at least once a week

4.32. Techniques useful for team facilitators, when narrowing a list of potential
problem areas to investigate, include all of the following, EXCEPT:

a. Brainstorming
b. Nominal group technique
c. Voting
d. Multivoting

4.33. The classical number of development stages for team growth is:

a. 3
b. 4
c. 5
d. 7

4.34. Facilitators do NOT normally:

a. Act as group leaders
b. Summarize group ideas
c. Provide feedback to the group
d. Know problem solving techniques

4.35. You are working through a conflict situation with a customer. You try your best
to reduce the tension and to have the customer take his time on the problem. You are
practicing which conflict resolution model?

4.36. A team sponsor or champion typically:

a. Is a liaison between the team and upper management
b. Attends all team meetings
c. Directs the team on implementing solutions
d. Fulfills the facilitator role

4.37. The tendency for highly cohesive groups to lose their critical evaluative
capabilities is what:

a. Irving Janis called "groupthink"
b. Frederick Taylor labeled as "cognition"
c. Abraham Maslow cited as "human needs"
d. B.F. Skinner defined as "the Skinner box"

4.38. When a problem arises in the performance of a team member and it has a
negative impact on the performance of the team, which of the following actions would
be considered inappropriate?

a. The team leader summarily removes the individual from the team
b. The leader or members advise the individual of what is expected
c. If action to remove the individual from the team is warranted, do so respectfully
d. The team leader should clarify expectations with the individual

4.39. At which evolutionary stage would a team typically select a leader?

a. Performing
b. Storming
c. Forming
d. Norming
The answers to all questions are located at the end of Section XII.
The initial step of the lean six sigma problem solving methodology is the define step.
Properly defining the problem is the most important part of solving the problem.
Many of the above tools and techniques are explained in other portions of this
Handbook because they are also very applicable to other lean six sigma areas. Many
of the tools in this Section are also beneficial in other problem solving areas.
Project Charter
A critical element in the establishment of an improvement team is the development
and acceptance of a charter. A charter is a written document that defines the team's
mission, scope of operation, objectives, time frames, and consequences. Charters
can be developed by top management and presented to teams, or teams can create
their own charters and present them to top management. Either way, top
management's endorsement of a team's charter is a critical factor in giving the team
the direction and support it needs to succeed.
Teams need to know what top management expects of them. The team must have
the authority, permission, and blessing from the necessary levels of management
to operate, conduct research, consider and implement any changes needed to
achieve the expected results.
The charter begins with a purpose statement. This is a one or two line statement
explaining why the team is being formed. The purpose statement should align with,
and support, the organization's vision and mission statements. The charter should
also identify the objectives the team is expected to achieve.
A good charter should contain a section describing top management's support and
commitment. This is important because some team members may feel that they are
taking a personal risk by becoming a member of a high profile team. A charter
provides the following advantages:
Moen (1991)17 suggests that a team project charter should contain the following key
points:
Identifying the above details, in written form, will provide a constant and consistent
target for the team. In this Handbook, the authors will address the business case
and problem statement as part of the project charter and the remaining subjects as
part of the project scope.
Business Case
The business case is a short summary of the strategic reasons for the project. The
general rationale for a business case would involve subjects like waste elimination,
quality, cost, or delivery of a product with a financial justification. Moen (1991)17
suggests that there are four basic activities:
Eckes (2001)5 reports that a common problem for many projects is the lack of a
company impact measurement. For example, if the existing quality defective rate is
at 5,000 defectives per million opportunities; the possible justification is a reduction
to 250 defectives per million opportunities with a cost savings of $1,000,000. A
project improvement team should follow typical financial department justification
guidelines. Projects which do not show a significant financial impact to the
company should be stopped or eliminated as soon as possible.
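As a rough illustration (not taken from the Handbook), the arithmetic behind such a justification can be sketched in a few lines of Python. The annual opportunity count and the cost per defective below are hypothetical assumptions, chosen only to tie the 5,000-to-250 DPMO reduction to a dollar figure of roughly $1,000,000.

def dpmo(defectives, units, opportunities_per_unit):
    # Defectives per million opportunities.
    return defectives / (units * opportunities_per_unit) * 1_000_000

def projected_savings(current_dpmo, target_dpmo, annual_opportunities, cost_per_defective):
    # Annual savings from the reduced defective rate, at an assumed cost per defective.
    avoided = (current_dpmo - target_dpmo) / 1_000_000 * annual_opportunities
    return avoided * cost_per_defective

if __name__ == "__main__":
    current = 5_000          # defectives per million opportunities (from the example)
    target = 250             # proposed defective rate
    annual_opps = 4_000_000  # hypothetical yearly opportunities
    cost_each = 52.63        # hypothetical fully loaded cost per defective ($)
    print(f"Projected savings: ${projected_savings(current, target, annual_opps, cost_each):,.0f}")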
(Rath & Strong, 2000)25, (Eckes, 2001)5, (Moen, 1991)17
Problem Statement
A problem statement will detail the issue that the team wants to improve. Eckes
(2001)5 explains the problem statement should be crafted to be as descriptive as
possible. That is, how long has the problem existed, what measurable item is
affected, what is the business impact, and what is the performance gap. The
problem statement should be neutral, to avoid jumping to conclusions. A sample
problem statement might be: "The ABC Company, in 2006, has experienced a 25%
drop in sales, with a 40% drop in net profits." The problem statement should include
a reference to a baseline measurement. A baseline measurement is the level of
performance of the particular metric at the initial start of a project. (Pande, 2000)19
The collection of good data and process performance measurement will provide a
picture of the areas in the company that have the greatest need for improvement.
In addition, the measurement system will provide a foundation for other teams to use
to pursue other projects. If baseline measures differ from the assumptions of the
team or the company, more clarification may be necessary. The measures may not
be wrong, but more data may be necessary to understand the situation (Pande,
2000)19. The problem statement should detail the amount of anticipated
improvement and include a firm completion date.
Charter Negotiation
The project charter can be created and presented by upper management. However,
the project team might be closer to the actual facts and might propose a different
mode of attack than envisioned by management. Hence, charter negotiation may be
required.
2. Scope: The boundaries of the project could require expansion. The project
may be sufficiently large and require dividing into more manageable pieces.
This could require two or three projects in succession by a single team or
additional project teams working on a portion of the original project.
6. Project closure: The improvement team could discover that related processes
or products need the same type of effort as that which was undertaken for the
initial project. In some cases, the project closure date might be moved up
because of such diverse events as unexpected success or shifts in customer
preference.
Obviously, both the project team and upper management are interested in success,
not failure. There should be a willingness on the part of both parties to negotiate a
number of project details. Generally, it is best to handle charter negotiations at the
start of a project. However, all pertinent information may not be apparent at this
point. Charter negotiations can be required at any point during the project.
Metrics Selection
The primary and more detailed metrics are developed, but are not finalized in the
definition step. That is left to the measure step. The primary metrics for
consideration in a project could come from several sources:
• Suppliers
• Internal processes
• Customers
The metrics that will affect projects involving suppliers, internal processes, and
customers would be: quality, cycle time, cost, value, and labor. (Eckes, 2001)5.
Garvin (1988)6 and Besterfield (1998)2 suggest nine dimensions of quality
measurement:
Hill (1993)10 presents similar measurements that are important in the marketplace.
Hill framed his characteristics to answer the question: "How do products win orders
in the market place?" His suggested measurements are:
The secondary or consequential metrics would be derived from the primary metrics.
For example, if cycle time were determined to be a key metric, the next step would
be to establish the numerical measurement. Examples of measurements include:
A3 Report
The A3 report is a concise summary of a project described in five or six parts on one
piece of paper. The name A3 relates to the metric paper size used for the report,
approximately equivalent to 11" x 17" paper. This can be the final report presented
to management for approval of a project. The process removes the fluff and flashy
showmanship from the presentation. It may only take 3 to 5 minutes for the speaker
to present the business case. A project is thus judged on its merits.
Thirty minutes of PowerPoint slides can be enough to put an audience to sleep. The
A3 report is lean production (at its finest) in report presentations. Toyota makes
extensive use of the A3 style. Variations of the one page project reports are also
used at Komatsu and Cummins, Inc.
There are four basic types of story lines used in the A3 report. Each type is similar
in structure, but the purposes are different. Extensive information is provided in
graphic form. The basic types are:
The report format fits the problem solving stages onto an 11" x 17" sheet. For
example, the problem definition and analysis stages would appear on the left side
of the paper, while the action plans, results of activities, and future steps will appear
on the right side. Some key concepts for A3 report writing:
Please note that the following A3 example is presented on two 8-1/2" x 11" sheets.
Each sheet represents approximately one-half of an A3 report.
[Figure: two-page A3 example, "Improve Productivity of Air Filter Line," with background charts of monthly revenue and rework against goals, a root cause summary, 3. Action Plans (including a DOE for materials), and 4. Results of Actions shown as monthly revenue, volume, and rework charts against goals.]
A3 Report (Continued)
The A3 report has also been used as a problem solving methodology. It is a format
replacement for other options such as PDCA, PDSA, and DMAIC. The following
steps are used:
3. Cause Analysis: List the problems. What is the most likely root cause? Use the
5 whys technique.
4. Target Condition: Diagram the proposed new process. What are the measurable
target values? Countermeasures are noted as fluffy clouds.
5. Implementation Plan: This is the proposed who, what, when, and where for
actions, and the people, times, and locations to correct the problem.
6. Follow-up Plan: Define how and when the outcomes will be checked. Are the
actual results as predicted? Sobek (2007)24
[Figure: A3 report template. Left side - Background (background and importance of the problem, diagram of the current situation or process with problems highlighted as storm bursts, what about the system is not ideal, extent of the problem) and Cause Analysis (list the problems, most likely root causes, use the 5 whys). Right side - Target Condition (diagram of the proposed new process, countermeasures noted as fluffy clouds, measurable targets for quality and time), Implementation Plan (actions with who, when, where, dates, and cost), and Follow-Up Plan (how and when effects are checked, plan versus actual, comparison to predicted, date of the check).]
• Affinity diagrams
• Cause-and-effect diagrams
• Pareto diagrams
Affinity Diagrams*
Affinity diagrams are good tools for organizing complex information into logical
categories. Because of this format, the affinity diagram is helpful in the problem
definition stage of lean six sigma.
The affinity diagram uses an organized method to gather facts and ideas to form
categories of thought. The normal implementation steps are explained below:
• Develop a main category or idea for each group and that becomes the affinity
card
• Once all of the cards or notes have been placed under a proper affinity card,
borders can be drawn around the affinity groups for clarity
The affinity diagram can also be referred to as the KJ Method. It was developed by
Dr. Kawakita Jiro, founder of the Kawayoshida Research Center.
[Figure: example affinity diagram for obtaining lean six sigma knowledge, with ideas such as getting the LSS Handbook, watching a video presentation, enrolling in Villanova on-line, and taking other LSS courses grouped under headers such as Resources and Subjects.]
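As a minimal sketch (and not the KJ Method itself), the grouped notes from a session like the one above can be captured as simple header-to-note mappings once the team has agreed on the categories; the headers and notes below are hypothetical.

# Header cards chosen by the team, with the brainstormed notes assigned to each.
affinity_groups = {
    "Resources": ["get the LSS Handbook", "watch a video presentation"],
    "Courses":   ["enroll in an on-line course", "take other LSS courses"],
}

for header, grouped_notes in affinity_groups.items():
    print(header)
    for note in grouped_notes:
        print("  -", note)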
A cause-and-effect diagram:
A fish bone session is divided into three parts: brainstorming, prioritizing, and
development of an action plan. The problem statement is identified and potential
contributing factors are brainstormed into categories. To prioritize problem causes,
polling is often used. The three most probable causes may be circled for the
development of an action plan.
Generally, the 4-M (manpower, material, method, machine) version of the fishbone
diagram will suffice. Occasionally, an expanded version must be used. In a
laboratory environment, measurement is a key issue. When discussing the brown
grass in the lawn, environment is important. A 5-M and E schematic is shown in
Figure 5.2.
[Figure: example fishbone diagram. The problem statement branches into causes such as suspect pan tare weights, airflow, vendor counts accepted, debris, tare weights not on pans, a non-standard sampling procedure (inadequate sample quantity), scale calibration, wrong part numbers from departments, and three different scales, with scale #2 more accurate than scale #1.]
Pareto Diagrams
Pareto diagrams are very specialized forms of column graphs. They are used to
prioritize problems (or opportunities) so that the major problems (or opportunities)
can be identified. Pareto diagrams can help lean six sigma teams get a clear picture
of where the greatest contribution can be made.
History
There is a very interesting story behind the name of Pareto diagrams. The word
"Pareto" comes from Vilfredo Pareto (1848-1923). He was born in Paris after his
family had fled from Genoa, Italy, in search of more political freedom. Pareto, an
economist, made extensive studies about the unequal distribution of wealth and
formulated mathematical models to quantify this maldistribution. He also wrote a
political book on nationalism which helped lead to Fascism in Italy.
Briefly stated, the principle suggests that (in most situations) a few problem
categories (approximately 20%) will present the most opportunity for improvement
(approximately 80%).
[Figure: Pareto diagram with problem categories A through N on the horizontal axis, number of problems and percent of problems on the vertical axes, and a cumulative line.]
"First things first" is the thought behind the Pareto diagram. Attention is focused
on problems in priority order. The simple process of arranging data may suggest
something of importance that would otherwise have gone unnoticed. Selecting
classifications, tabulating data, ordering data, and utilizing the Pareto diagram, have
proved to be useful in problem investigation.
Note that the "all others" category is placed last. Cumulative lines are convenient
for answering such questions as, "What defect classes constitute 70% of all
defects?"
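A minimal sketch of that cumulative-line arithmetic, using hypothetical occurrence counts, might look like the following; the flag simply marks the categories that fall within the chosen share of total defects.

def pareto_table(counts):
    # Return (category, count, cumulative %) rows sorted by descending count.
    total = sum(counts.values())
    rows, running = [], 0
    for category, count in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        running += count
        rows.append((category, count, 100.0 * running / total))
    return rows

defect_counts = {"D": 12, "F": 9, "J": 8, "G": 6, "E": 4, "H": 2, "K": 2, "L": 2}

for category, count, cum_pct in pareto_table(defect_counts):
    flag = "  <- within ~70% of all defects" if cum_pct <= 70 else ""
    print(f"{category}: {count:3d}  cumulative {cum_pct:5.1f}%{flag}")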
The traditional Pareto diagram, based on occurrences, would look like so:
[Figure 5.5 Audit Occurrence Pareto Diagram: occurrence counts by category in descending order, from 12 down to an "all others" group.]
The composite Pareto diagram suggests a different priority than the previous one:
[Figure: composite Pareto diagram of relative importance (occurrences weighted by demerits) by category, showing a different ranking than Figure 5.5.]
Dollars can also be used instead of demerits. In addition, the original data could be
sorted by shift, the day of the week, etc.
2. Collect and analyze reactive data (complaints, service calls) and then consider
proactive approaches (interviews, surveys)
Not only can VOC input be useful in product and process design, it can also be
critical in lean six sigma team project selection and measurement.
3. Identify the first set of basic requirements of the customer. Example: The
measures could be promptness of delivery, price, and good taste.
5. Validate the requirements with the customer. The CTQ tree should be
reviewed with the customer to ensure that key requirements are known.
[Figure: partial CTQ tree for ordering a meal, breaking the need into drivers such as price and measures such as economical.]
Eckes (2001)5 points out that the exact metrics are not determined at this stage. The
measure stage is where the precise measurements will be determined.
Statistical tests: A large number of tests and tables can be used to determine, with
identified confidence levels, that customer preferences have shifted. In addition,
most normal statistical tests may be used on many of the numerical survey results
such as the Likert scale (0-5 or 0-10 ranking) surveys described earlier.
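As one hedged illustration (the Handbook does not prescribe a particular test), a two-sample t-test on Likert ratings from two survey periods could be run as follows; the ratings themselves are hypothetical, and a real study would also check sample size and distribution before drawing conclusions.

from scipy import stats

last_quarter = [4, 5, 3, 4, 4, 5, 4, 3, 4, 5]   # 0-5 Likert ratings, hypothetical
this_quarter = [3, 3, 4, 2, 3, 4, 3, 3, 2, 4]

result = stats.ttest_ind(last_quarter, this_quarter, equal_var=False)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
if result.pvalue < 0.05:
    print("Evidence that customer ratings have shifted (95% confidence).")
else:
    print("No statistically significant shift detected.")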
Line graphs: Line graphs can graphically show if either discrete or continuous
characteristics of a product or service are changing. In most cases, a visual
assessment can be made as to whether the product or service is getting better,
worse, or staying the same. A discrete chart follows:
[Figure: line graph of a discrete product or service characteristic over time, plotted against its average, ranging from good to bad.]
Control charts: A variety of variable or attribute charts can also be used to display
customer feedback data. This tool offers an advantage over line charts because the
addition of calculated control limits facilitates the ability to detect special or
assignable causes of variation. An attribute chart is shown below:
[Figure: attribute control chart of the number of failures per 1000 customers over time, plotted against the average.]
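A minimal sketch of the calculated limits for such a chart, assuming a c chart with a constant subgroup of 1,000 customers and hypothetical monthly failure counts:

import math

failures_per_1000 = [12, 9, 15, 11, 8, 14, 10, 13, 9, 12]   # hypothetical monthly counts

c_bar = sum(failures_per_1000) / len(failures_per_1000)
ucl = c_bar + 3 * math.sqrt(c_bar)          # upper control limit
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))  # lower control limit, floored at zero

print(f"center line = {c_bar:.1f}, UCL = {ucl:.1f}, LCL = {lcl:.1f}")
for month, count in enumerate(failures_per_1000, start=1):
    if count > ucl or count < lcl:
        print(f"Month {month}: {count} failures - possible special cause")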
[Table and figure: a matrix of defect occurrences per fixed time by customer (A through H) and defect type (A through J), with a grand total of 138 problem occurrences, followed by two comparative Pareto diagrams of rejects per month by category.]
Other comparative analyses: The comparative Pareto analysis illustrated above is
a powerful tool for analyzing customer data. In the same way, other charts (control
charts, line graphs, histograms, and even matrix diagrams) can be compared from
one time period to another, from one supplier to another, etc., to provide real insight
into the needs of the customer and the changes in the market. Visual comparisons,
however, are risky. A significance test may be required.
Kano Model
The Kano model is also referred to as Kano analysis. It is used to analyze customer
requirements. Noriaki Kano is a Japanese engineer and consultant, whose work is
being used by a growing number of Japanese and American companies. The model
is based on 3 categories of customer needs:
The customer expects these basic requirements as part of the total package.
If the basic requirements are not present, the customer will be unhappy. For
American tourists traveling to a foreign country, say China, the expectations
are that travel facilities there are as good as in the United States. There is
definitely a big let down as travel facilities are quite primitive in many
locations. American tourists can probably expect to see a dramatic
improvement in travel facilities in China as the 2008 Beijing Games approach.
When the requirements of the customer are met, the more they are met, the
better. The tourist taking a Caribbean cruise expects a week of entertainment
and food at a reasonable price. The more personal attention the tourist
receives on the cruise, the better it is. In fact, the entire staff and crew on the
ship are out to make your travel experience a great one.
Improvement projects can often be selected from among the satisfier and delighter
categories. Most companies, in a competitive environment, would not be around
long enough to tackle a basic requirement issue.
(Pande, 2000)19, (Rath & Strong, 2000)25
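As a small, hypothetical sketch of how a team might act on these categories when screening projects, the category assignments below are assumed to have come from customer interviews; the requirements themselves are invented for illustration.

requirements = {
    "on-time delivery":    "basic",       # expected; absence causes dissatisfaction
    "accurate billing":    "basic",
    "personal attention":  "satisfier",   # the more, the better
    "short response time": "satisfier",
    "surprise upgrade":    "delighter",   # unexpected; delights when present
}

project_candidates = [need for need, kano in requirements.items()
                      if kano in ("satisfier", "delighter")]
basics_to_verify = [need for need, kano in requirements.items() if kano == "basic"]

print("Improvement project candidates:", ", ".join(project_candidates))
print("Basic requirements to verify first:", ", ".join(basics_to_verify))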
Lean Thinking
Womack (1990)26 introduced the term lean production to the Western World in 1990
with the publication of The Machine that Changed the World. The book describes
the basis of lean production (lean manufacturing) as practiced by the best
companies in the world.
After several years, Womack (1996)27 wrote Lean Thinking: Banish Waste and Create
Wealth in Your Corporation. This book describes the concepts of converting a mass
production plant into a lean organization. Womack (1996)27 offers five guiding
principles for consideration:
Value
The German mind set is more product feature and process oriented. The technical
people (engineers) are in control of the businesses. Thus, the Germans are very
strong technically. Therefore, features and enhancements are of utmost importance.
However, some of the new, complex enhancements have failed to attract the
customers' interests. Often, the German mind set is that the customer is not
sophisticated enough to understand the new features.
The Japanese define value in the context of where value is created. As the
proportion of the product made on the Japanese homeland increases, the greater the
value that is retained at home for their society. Customers, in general, do not define
value based on where it is made. Customers want their needs satisfied quickly. As
the yen strengthened, the previous advantages of Japanese companies using
Japanese suppliers disappeared. This has resulted in a weakening of many
Japanese companies. (Womack, 1996)27
The target cost of the product may be determined after defining customer value.
This target cost is more than the "market cost" of the product. The market cost is
typically the manufacturing cost of the product plus the selling expenses and profit.
In lean thinking, the target cost is the mixture of the current selling prices by
competitors and the examination of and the elimination of muda (waste) through
lean methods. This analysis results in a target price which is below the current
selling price. The firm then applies lean thinking to its processes.
(Womack, 1996)27
• Problem solving
• Information management
• Physical transformation
The problem solving stream for a business includes concept design, prototype
development, review planning, and determining the mechanism for product
launch. The information management stream consists of customer order taking, raw
material sequencing from suppliers, in-house scheduling, and delivery to the
customer. The physical transformation stream (or product realization in ISO 9001
terms) proceeds from the conversion of raw materials to finished goods.
Value streams can be constructed for each major product (or process) that an
individual organization or plant utilizes. Womack (1996)27 describes applications of
a value stream beyond the boundaries of a typical plant, involving the suppliers, the
organization, and the customers. Efforts must be made to eliminate the muda
(waste) in this value stream.
A value stream map is created to identify all of the activities involved in the product.
This value stream can include the various suppliers, production activities, and the
final customer. The activities are viewed in terms of the following criteria:
Conner (2001)3 identifies the steps for documenting the value stream mapping as:
2. Process design: Perform a walk through of the process, recording each step.
Start from the shipping dock and work back through the process to the
receiving dock. Make a note of machine times, cycle times, operators,
changeover times, WIP, available time, scrap rate, machine reliability, etc.
Cycle time reduction and value stream mapping are discussed later in this Section.
The lean effort requires the conversion of a batch process to a continuous flow
process. In some cases, converting the batch process to a one piece flow is ideal.
Some of the obstacles to overcome include:
Ideally, in a continuous flow layout, the production steps for single piece flow,
without WIP, are arranged in a sequence, straight line, U-shaped, or cellular. Inside
this flow, the work of each station and operator must be performed with reliability.
When the machinery is performing as expected, there are zero breakdowns. This is
the concept of TPM (total productive maintenance). The quality level of each
operation is very high, near perfect, using a variety of defect elimination and
detection techniques.
The activities needed for production should be in a steady, continuous flow, with no
wasted motions, no batches, and no WIP. There should be flexibility to meet the
present needs. The work of people, functions, departments, and firms will require
adjustments to the value stream to make it flow, and to create value for the
customer. (Womack,1996)27
Most mass production manufacturing firms are in the push production mode. Each
operation produces as much as possible and sends it on to the next operation. The
goal is to maximize machine efficiency with a maximum amount of in-process
inventory sitting around the plant.
Contrast the above manufacturing firm with the factory that is dependent on the pull
of the market. The receipt of a customer order initiates activities. Each operation
produces parts as needed through a signal from downstream. There is a minimal
amount of WIP in the process stream.
This arrangement enables flow through the plant, using the principles of lean
thinking. Quality, machinery downtime, absenteeism, etc., are all of concern.
Problems in any area will stop the process, disrupting the pull process. Problems
of any sort are magnified and must be immediately corrected. (Imai, 1997)11
Womack (1996)27 presents reduction in cycle times due to lean thinking methods as
shown in Table 5.9:
• Product teams working with the customer to find better ways to specify value,
enhance flow, and achieve pull
Lean thinking principles are the cornerstones to higher performance and economic
growth. Perfection is a journey.
It may take years to apply lean thinking principles in a company, and even more time
to apply lean thinking in the entire value chain.
(Womack, 1996)27
• To please a customer
• To reduce internal or external wastes
• To increase capacity
• To simplify operations
• To reduce product damage (improve quality)
• To remain competitive
Training
Some of the cycle time training principles and topics are listed below:
• There is a time limit of 5 days to accomplish the change (some use 3 days)
• Two days of training are provided on lean manufacturing techniques
• Two and one-half days are allotted for collecting data and making changes
• The last one-half day consists of a presentation on the results to the
workforce (Gee, 1996)7
Team members actively participate in collecting and analyzing data. The operators,
technical staff, supervisors, and maintenance staff can all be team members and be
involved in the analysis. The team will perform work sampling, pace studies, line
balancing, elemental analysis, motion studies, and takt time calculations.
Work sampling provides a picture of the work content of the station. This reveals
the content and ratio of work, inspection, walking, and other factors. See Table 5.13
for content ratios.
The line's value-added activities, at this point, comprise only 50% to 80% of the work
content of each station. The inspection, delay, walking, and other categories are
considered muda. This provides the team with information to consider for station
combinations. The team will investigate ways to eliminate (or reduce) the four muda
elements. It appears possible to reduce the number of stations from five to
four.
As indicated in Table 5.14, the actual work time for the 5 stations amounts to 173
seconds. If there is presently 1 operator per station, and the takt time is 60
seconds, then 173 seconds divided by 60 seconds suggests that 3 operators will be
sufficient. There may still be a need for 5 stations, but only 3 operators. Additional
data collection will be necessary to confirm this analysis. Pace studies of each
station will provide a clearer picture of the cycle times. Usually up to 25 cycles of
the line are studied in order to determine the average cycle time. A line balancing
chart can be made and compared to the desired takt time. (Gee, 1996)7
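A minimal sketch of that operator calculation, with a hypothetical split of the 173 seconds of work content across the five stations (Table 5.14 itself is not reproduced here):

import math

station_times_s = [34, 28, 41, 38, 32]   # hypothetical station times totaling 173 s
takt_time_s = 60

work_content = sum(station_times_s)
operators_required = math.ceil(work_content / takt_time_s)

print(f"Work content: {work_content} s")
print(f"Operators required: {work_content / takt_time_s:.2f} -> {operators_required}")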
A study of the stations reveals the motions used by the operators. In this study, an
exacting industrial engineering approach to human motions is not used. An
approximation of the operator effort will suffice. Robinson (1990)22 describes the
Shingo technique of classifying human motions. It is divided into 4 grades:
The team will prepare a workplace layout of the line. This layout will include
operators, WIP inventory, raw materials, and equipment in the workspace. A
charting of the current flow of the product may reveal a "spaghetti-like" flow. Thus,
it can be termed a spaghetti chart. In many cases, rework and questions add many
more lines than shown in Figure 5.15.
[Figure 5.15: spaghetti chart of the current layout - five stations with five operators and part flow crossing back and forth between them.]
The idea is to arrange the production line using either a U-shaped, L-shaped, C-
shaped, or straight line arrangement, in order to create continuous flow. The various
lines must reduce the distance traveled by the part, reduce the amount of WIP
inventory between stations, and still meet the required takt time.
[Figure 5.16: proposed U-shaped line - the five stations arranged around three operators, with material entering at one end of the cell and leaving at the other.]
Figure 5.16 illustrates a U-shaped line with 3 operators. An analysis of the walking
distance and material flow reveals significantly less wasted motion.
A kaizen event is a very stressful time for the whole team (especially for the
facilitator), since no one at the beginning will know exactly how it will end up. There
are many opportunities for innovation and creativity in the composition of line work
load and layout. The final results are almost always pleasantly surprising in terms
of achieving team goals. The team goals are usually:
Equipment
Product Shear Press Weld Assembly
A x x x x
B x x x
C x
D x x x
E x x x
The matrix shows products that go through a series of common processes. A work
cell could be formed to handle a particular flow. Another method is to create a
Pareto chart of the various products. The product with the highest volume should
be used for the model line. (Rother, 1999)23, (Conner, 2001)3
It is recommended that a production person handle the job of value stream manager.
This manager would monitor all aspects of the project. Being a hands-on person,
the manager should be on the floor on a regular basis. (Rother, 1999)23
Some of the typical process data includes: cycle time (CT), changeover time (COT),
uptime (UT), number of operators, pack size, working time (minus breaks, in
seconds), WIP, and scrap rate. An analysis of the current status can provide the
amount of lead and value-added time.
In many situations teams take on the task of data collection. Both individuals and
teams find it beneficial to develop a VSM data box in advance. Examples of data
boxes are shown in Table 5.24 later in this Section and in the service industry case
study in Section XI.
Value-added time (VAT) - The amount of time spent transforming the product, which
the customer is willing to pay for.
Lead time (L/T) - The time it takes one piece of product to move through all the
processes.
See Rother (1999)23 and Conner (2001)3 for further details on value stream maps.
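A minimal sketch of these two measures computed from per-step data; the steps and times below are hypothetical and are not taken from the maps that follow.

steps = [
    # (name, value-added minutes, waiting/queue minutes)
    ("weld",          1.5, 2_880),
    ("grind & clean", 2.0,   720),
    ("paint",         4.0, 1_440),
    ("pack & ship",   1.0,   480),
]

value_added_min = sum(va for _, va, _ in steps)
lead_time_min = sum(va + wait for _, va, wait in steps)

print(f"Value-added time: {value_added_min:.1f} min")
print(f"Lead time:        {lead_time_min / 60 / 24:.1f} days")
print(f"VA ratio:         {100 * value_added_min / lead_time_min:.2f}%")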
• Where is the pacemaker process? (This process controls the tempo of the
value stream.)
Implementation Planning
The final step in the value stream mapping process is to develop an implementation
plan for establishing the future state. This includes a step-by-step plan, measurable
goals, and checkpoints to measure progress. A Gantt chart may be used to illustrate
the implementation plan. Several factors determine the speed of the plan. These
include available resources and funding. The plan could take months or years to
complete, and even then, there may be a need to improve upon it in the future.
(Rother, 1999)23
Examples of VSM icons and hypothetical current and future state maps are shown
on the following pages.
[Figure: common value stream mapping icons - electronic and manual information flow, FIFO lane, finished goods movement, go see, production kanban, withdrawal kanban, signal kanban, load leveling, operator, process box, pull arrow, pull circle, push arrow, schedule box, source, customer, truck shipment, and the time line for lead time and cycle time.]
[Figure: hypothetical current and future state maps for a value stream with weld, assembly, grind and clean, paint, and pack and ship steps linked by FIFO lanes, annotated with changeover times, reliability, and first pass yield, and a time line of waiting days versus processing minutes. Current state velocity = 25.4 days to go from raw material to shipped product; future state velocity = 5.3 days to go from raw material to shipped product.]
Process mapping or flow charting has the benefit of describing a process with
symbols, arrows, and words without the clutter of sentences. Many companies use
process maps to outline new procedures and review old procedures for viability and
thoroughness.
Most flow charting uses standardized symbols. Computer flow charting software
may contain 15 to 185 shapes with customized variations extending to the 500 range.
Many software programs have the ability to create flow charts or process maps,
although the information must come from someone knowledgeable about the
process.
Some common flow chart or process mapping symbols are shown below:
[Figure 5.22 Common Flow Chart Symbols: process, alternate process, manual input, connector, off-page connector, extract, and other standard shapes.]
[Figure: example receiving inspection flow chart - visual inspection followed by dimensional inspection; rejected material generates a corrective action report, purchasing is informed of the rejection, and the parts are returned to the supplier.]
By having all of the important aspects of the overall process on a single page, it is
much easier to understand everything that should be considered before making any
changes. If changes are being considered, one can re-draw the process map and
have an easy way to compare the "before" and "after" representations.
Process maps are a wonderful way to provide a clear understanding of all the
interrelationships in complex processes - like software coding, call processing,
product and process design cycles, chemical and pharmaceutical processing,
clinical trials, etc. where there are many interactions and "cause-and-effect"
relationships. Six sigma, as a technique, heavily uses process mapping and SIPOC
together to attack process variation and to improve quality.
Value stream mapping (VSM) - Also known as "Information and Material Flow Maps,"
it is credited to Toyota Motor Company and is best known from Learning to See by Rother
and Shook. Value stream mapping is a scalable approach to create a visual
representation of what is happening in a process. It includes detailed information
at each step and across the whole value stream. VSMs are powerful tools to help
identify where changes are needed to improve system performance. The two major
things that VSMs focus on are material and information flow. Information flow is
often independent of the "thing" being mapped in the material flow.
Table 5.24 Value Stream Map Data Box - Insurance Claim Processing
The VSM data box collects a great deal of information. The content is decided by the
VSM team at the onset of the work. The process step name, available time, who is
involved, and cycle time (CT) are recorded. Cycle time should not be confused with
takt time, which is the production "pace" time to keep up with demand.
After populating the data boxes, a current state is developed. The team can then use
Pareto analysis to identify the relative few data boxes where the biggest
opportunities for value stream improvements lie.
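A minimal sketch of data boxes as simple records, followed by a Pareto-style ranking of where the waiting time sits; the claim-processing steps and times are hypothetical and are not those of Table 5.24.

data_boxes = [
    {"step": "intake",        "cycle_time_min": 6,  "queue_min": 240, "staff": 2},
    {"step": "verification",  "cycle_time_min": 12, "queue_min": 960, "staff": 3},
    {"step": "adjudication",  "cycle_time_min": 20, "queue_min": 480, "staff": 4},
    {"step": "payment",       "cycle_time_min": 4,  "queue_min": 120, "staff": 1},
]

total_queue = sum(box["queue_min"] for box in data_boxes)
# Rank the steps by queue time to show where the biggest improvement opportunity lies.
for box in sorted(data_boxes, key=lambda b: b["queue_min"], reverse=True):
    share = 100 * box["queue_min"] / total_queue
    print(f'{box["step"]:<13} queue {box["queue_min"]:>4} min  ({share:4.1f}% of waiting)')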
VSM approaches are extremely rich in factual and quantifiable data that describes
the value stream's performance. In situations where making improvements is the
main goal, VSM is a stronger tool than process mapping. This assumes that the
level of complexity allows one to use VSM appropriately.
There seems to be some confusion when making a decision on which tool should
be used for a given situation. Read each of the scenarios below and form an opinion
with respect to which tool should be applied.
Scenario #1: A team has been charged with improving a process in an organization
that is under performing in profitability. Another big concern is the process lead
time. The process is fairly well understood, and it seems most of the issues are
driven by communication problems and high levels of human error.
Scenario #2: A team has been tasked with improving a process where the quality of
results is the single biggest concern. The process is very difficult to understand and
the reasons for the problems are hard to pin-point. The problem seems technical
and the "people element" does not seem to be a big driver.
There are cases where process mapping and VSM should happen concurrently on
the same piece of paper. For example, there may be a good case to use a process
mapping approach for the decision-making and information flow portion of the
visualization and VSM for the "thing" being processed. Taking advantage of the best
attributes of both tools, at exactly the same time, may just be the ticket.
Spaghetti Diagrams
Spaghetti diagrams can be useful in describing the flow of people, information, or
material in almost any type of process. They are not as diagnostic or definitive as
VSM in traditional manufacturing operations, but certainly have a utility for a number
of service, administrative, and light production situations.
[Figure 5.25: spaghetti diagram of individual movement in a print shop layout - binder storage, paper storage, inactive storage, printers, a server, workstations, active doors, and the shipping door.]
The spaghetti chart in Figure 5.25 shows individual movements (for a representative
shift interval). Included in this schematic are social activities, trips to the bathroom,
visits to the break area, etc. Normally, spaghetti diagrams focus on material or
information flow.
[Figure 5.26: spaghetti diagram of the same layout showing the presented information and material flow paths between the printers, storage areas, and shipping door.]
Included in the discussion for Figure 5.26 are some items that could be considered
material flow. However, the 6 presented paths all contain information that is
important to the customer or to QCI.
Often spaghetti diagrams are used with computer generated or drafting layouts
before physical flow changes are made (particularly in the case of heavy equipment).
Refer to the first case study in Section XI for examples of material flows using
spaghetti diagrams.
References
1. Anderson, P.F., & Wortman, B.L. (2004). CQA Primer. Terre Haute, IN: Quality
Council of Indiana.
2. Besterfield, D. (1998). Quality Control, 5th ed. Upper Saddle River, NJ:
Prentice Hall.
3. Conner, G. (2001). Lean Manufacturing for the Small Shop. Dearborn, MI:
Society of Manufacturing Engineers.
4. Edenborough, N.B. (2002). Quality System Handbook. Terre Haute, IN: Quality
Council of Indiana.
5. Eckes, G. (2001). The Six Sigma Revolution. New York: John Wiley & Sons.
6. Garvin, D. (1988). Managing Quality: The Strategic and Competitive Edge. New
York: The Free Press.
7. Gee, G., McGrath, P., & Izadi, M. (1996, Fall). "A Team Approach to Kaizen."
Journal of Industrial Technology.
8. Gee, G., Richardson, W.R., & Wortman, B.L. (2005). CMQ Primer. Terre Haute,
IN: Quality Council of Indiana.
10. Hill, T. (1993). Manufacturing Strategy, 2nd ed. Burr Ridge, IL: Irwin.
12. Ishikawa, K. (1982). Guide to Quality Control. White Plains: Quality Resources.
13. Juran, J.M. (1999). Juran's Quality Handbook, 5th ed. New York: McGraw-Hill.
14. Lean Enterprise Institute. (2001). Mapping Icons. Retrieved November 6, 2001
from web site http://www.lean.org.
15. Liker, J., & Meier, D. (2006). The Toyota Way Fieldbook: A Practical Guide for
Implementing Toyota's 4Ps. New York: McGraw-Hill.
16. May, M. (2007). The Elegant Solution: Toyota's Formula for Mastering
Innovation. New York: Free Press.
References (Continued)
17. Moen, R., Nolan, T., & Provost, L. (1991). Improving Quality through Planned
Experimentation. New York: McGraw-Hill.
18. Omdahl, T. (1997). Quality Dictionary. Terre Haute, IN: Quality Council of
Indiana.
19. Pande, P.S., Neuman, P.R., & Cavanagh, R.R. (2000). The Six Sigma Way. New
York: McGraw-Hill.
23. Rother, M., & Shook, J. (1999). Learning to See: Value Stream Mapping to Add
Value and Eliminate Muda. Brookline, MA: The Lean Enterprise Institute.
24. Sobek, D.K., II. (2007). Montana State University. Retrieved January 22, 2007
from http://www.coe.montana.edu/IE/faculty/sobek/A3
25. Rath & Strong. (2000). Rath & Strong's Six Sigma Pocket Guide. Lexington,
MA: Rath & Strong Management Consultants.
26. Womack, J., Jones, D., & Roos, D. (1990). The Machine that Changed the
World. New York: HarperPerennial.
27. Womack, J., & Jones, D. (1996). Lean Thinking: Banish Waste and Create
Wealth in Your Corporation. New York: Simon & Schuster.
5.1. If one were to summarize in one word the ground breaking work of Womack, in
introducing the term "lean production" to the Western world, that word would be:

a. Muda
b. Value
c. Distance
d. Rhythm

5.2. When the term "pull" is used in lean thinking, who is ultimately doing the pulling?

a. Downstream operations
b. The takt time
c. The customer
d. The cycle time

5.3. A breakdown of customer requirements is accomplished using a diagram as
depicted below:

a. A Kano model
b. A CTQ tree diagram
c. A CTQ Gantt chart
d. A QFD matrix diagram

5.4. The most widely used technique for distinguishing between chronic and
insignificant problems is:

a. A Pareto diagram
b. A control chart
c. A cause-and-effect diagram
d. A scatter diagram

5.5. A team project charter is essential for all of the following reasons, EXCEPT:

a. The team will be focused on the appropriate problem
b. The solution will be aligned towards the organization's goals
c. The team champion will be supportive
d. The charter provides a complete history of the project

5.6. What is the major advantage of the A3 report?

a. It is more thorough than other problem solving tools
b. It is a concise problem presentation format
c. It requires less preparation time than other reports
d. It minimizes the need for a follow-up plan

5.7. Value stream mapping means:

a. Flow charting techniques
b. An identification of inputs, tasks, and outputs
c. A pictorial view that identifies process steps
d. A graphical flow-charting technique that shows material and information flows

5.8. Process flow improvement steps normally do NOT include:

a. Buffer stock
b. Load leveling
c. Schedule box
d. Data box

5.10. Which of the following quality tools would be LEAST important in the problem
definition phase?

A. Fishbone diagrams
B. Control charts
C. Process flow diagrams
D. Pareto diagrams

5.11. Management has dictated that inspection locations be established for a new
product. What tool would customarily be applied to assist in the selection of the
necessary inspection locations?

a. Histograms
b. c charts
c. Flow charts
d. Pareto diagrams
5.12. Which of the following statements can be safely made about Pareto diagrams?

I. Reducing waste
II. Reducing people
III. Reducing management layers
IV. Eliminating bottlenecks in a process

5.16. The 5M model would typically NOT include which of the following options?

a. Machines
b. Materials
c. Feedback
d. People

5.17. Which of the following process mapping symbols would NOT be associated with
a decision point?

[Options a through d are flow chart symbol shapes shown in the original.]

5.18. Which of the following management tools is most similar to the cause-and-effect
diagram?

5.21. A team has been asked to improve the small purchase process in a company.
They decide to create a process map of the existing process because it will help:

5.22. What is the main disadvantage of presenting a team with an initial project
lasting more than 160 days?

a. Excessive costs
b. A lowered expectation of success
c. Too much time away from regular duties
d. The possibility that the team will expand the project boundaries

5.23. Affinity diagrams are useful tools to help analyze and solve what type(s) of
problems:

a. Unfamiliar problems
b. Structured problems
c. Mathematical models
d. Project flow problems
5.24. Considering that all of the following terms have benefits, which would LEAST
likely affect product quality?

5.25. Identify the lean thinking statement that is NOT true?

5.26. When comparing spaghetti diagrams, which were constructed before and after
a work area improvement, one would expect to find:

a. Less traffic after the change
b. More exchange of information after the change
c. Less necessary equipment after the change
d. Fewer operators after the change

5.27. Potentially, the most difficult area to obtain meaningful information, during the
six sigma define phase, would be:

5.29. There are several varieties of A3 reports. They can often have five or six key
steps. Which of the following would be a major step?

5.30. Which of the following is NOT a critical element in cycle time reduction?

a. The exact metrics for the customer
b. The needs of the customer
c. The basic drivers for the customer
d. The potential third level CTQ metrics

5.32. The business case portion of a project charter would be likely to contain:

a. A summary of the strategic reasons for the project
b. A problem or goal statement
c. The resources available to the project team
d. The boundaries of the project team

5.34. Using the Kano model, improvement teams most often select improvement
projects from among which of the following customer need categories?

I. Satisfiers
II. Dissatisfiers
III. Delighters

a. A reduced number of machine operations
b. A reduced number of operators
c. Less machine downtime
d. An increased work flow velocity

5.37. A criticality based Pareto analysis would most likely focus on:

I. Internal scrap categories
II. Potential safety risks
III. Both real and potential economic losses
IV. Occurrences of customer complaints

a. I, II, and III only
b. II, III, and IV only
c. II and III only
d. I, II, and IV only

5.38. What is the basic objective of an A3 report?

a. To simplify the problem solving process
b. To save valuable management time
c. To provide uniformity to project cause analysis
d. To summarize a project in a concise and understandable way

5.39. Which of the following are principal reasons for NOT utilizing process mapping?

a. To identify where unnecessary complexity exists
b. To visualize the process quickly
c. To eliminate the planning process
d. To assist in work simplification

5.42. Historical data indicates that defective seat belts are due to the following: 17%
stitching, 14% metal corrosion, 23% mounting bolts, 38% foreign objects, and 8%
other. Using a Pareto diagram, one can conclude:

a. The greatest costs are due to foreign objects
b. Mounting bolts cause more defects than stitching
c. The design causes more problems than the consumer
d. Solving metal corrosion is insignificant

5.43. A spaghetti chart would be helpful in tracking all of the following, EXCEPT:

a. Information flow
b. Material flow
c. Downtime activity
d. Operator movement

5.44. Which of the following is the LEAST likely element to be contained in a project
charter?

a. Identification of the team members
b. The role of the team members
c. The quality and type of resources provided
d. The project scope
The answers to all questions are located at the end of Section XII.
Measurement Techniques
• Process analysis
• Data collection
• Measurement systems
• Process capability analysis
Process Analysis
• Procedures
• Work instructions
• Takt time
• LSS metrics
Procedures
ISO 9001:200013 states that internal procedures shall control nonconforming product
so that it is prevented from inadvertent use or installation. In many companies this
requirement is the responsibility of the quality department, although the actual
functions are performed by various other departments.
Procedures (Continued)
4. A material review board reviews the nonconformance to determine
disposition. The possible dispositions are:
• Scrap the part, which ends part usage and requires no further action.
• Accept the part for use "as is" or "as repaired."
• Rework the part to its original configuration requirement.
The finished chart might look like that shown in Figure 6.1 on the following page.
Note, in this example, the customer contact element which is a substantial part of
ISO/TS 16949 (2002)15 has not been highlighted. This could certainly be made into
a separate process flow diagram as part of the deviation report process, however,
this example focuses on the internal nonconforming material flow. As one can see,
the process sounds simple in generic description. However, it may take several
twists and turns to show what really happens. The flow charting process helps to
visualize the necessary actions.
Work Instructions
Procedures describe the process at a general level, while work instructions provide
details and a step-by-step sequence of activities.
Flow charts may also be used with work instructions to show relationships of
process steps. Controlled copies of work instructions are kept in the area where the
activities are performed. Some discretion is required in writing work instructions,
so that the level of detail included is appropriate for the background, experience, and
skills of the personnel that would typically be using them.
The people that perform the activities described in the work instruction should be
involved in writing the work instruction. The wording and terminology should match
that used by the personnel performing the tasks.
Takt Time
Takt time is the available production time divided by the rate of customer demand.
In order to understand the measurement of takt time, and how it can be improved,
a discussion of a hypothetical process is presented below.
Consider a sequential operation with five operators or stations. The times allocated
for each station are indicated in Table 6.2.
If the takt time for the line is 60 seconds, the immediate observation is that station
4 exceeds the takt time and will not be able to maintain the pace. One option would
be to have some of the time eliminated by moving work to another station.
The total work time used at the current time is 265 seconds. With 5 operators, this
equates to 53 seconds on a balanced line. Another perspective indicates that 4
operators will require 66.25 seconds. This is slightly more than the desired takt time
of 60 seconds. However, the initial task will be to re-examine the content of the
operations by looking at the value added and non-value added elements.
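As a rough illustration of this arithmetic, the short Python sketch below computes the takt time and the theoretical minimum crew size. The individual station times are assumed for illustration (only their 265-second total is stated above), and the hourly demand of 60 units is likewise an assumption chosen to give a 60-second takt time.

import math

available_time = 3600.0          # seconds of production time per hour (assumption)
demand = 60                      # units demanded per hour, giving a 60 s takt (assumption)
takt_time = available_time / demand

station_times = [50, 55, 45, 65, 50]   # hypothetical station times totaling 265 s
total_work = sum(station_times)        # 265 seconds of work content

balanced_5 = total_work / 5            # 53 s per operator with 5 operators
balanced_4 = total_work / 4            # 66.25 s per operator with 4 operators
min_operators = math.ceil(total_work / takt_time)  # smallest crew that can meet the takt time

print(f"takt time           = {takt_time:.1f} s")
print(f"5-operator balance  = {balanced_5:.2f} s")
print(f"4-operator balance  = {balanced_4:.2f} s")
print(f"theoretical minimum = {min_operators} operators")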
A thorough study may reveal that a significant portion of time can be eliminated,
leading to a reduction of 1 operator and possibly a reduction in floor space. (This
analysis was discussed in the cycle time reduction material presented in Section V.
Refer to Gee (1996)8 for a similar case study.)
The choice of the best option depends upon how well the process is controlled,
whether operations can be shifted from one station to another, and whether available
process improvement opportunities exist. Processes exhibiting large variation work
best using Option 2 conditions. For well controlled processes with small variation,
Option 1 is a better choice, and Option 3 is better still. If the process can be
improved to reduce cycle time, Option 4 is the best choice of all.
It should be emphasized that the use of lean manufacturing principles should not
result in the direct layoff of employees. It is best to reassign the employees to new
responsibilities, such as kaizen teams. Normal employee attrition is another option.
Expansion of the business into new areas is also a great solution.
The takt time is defined as the time needed to produce to customer requirements.
The takt time will be 1.433 minutes or about 86 seconds per unit. The ideal pace of
each operation is set at 86 seconds. The takt time can be listed either in minutes or
seconds. (Conner, 2001)6, (Rother, 1999)18, (Sharma, 2001)20
The perfect time to implement CFM may never arrive, but one can start with interim
solutions. If the appropriate equipment will take longer than desired to arrive,
perhaps an alternative solution can be found. If a one-pallet flow is required, a one-
container flow is better than no flow at all.
If a plant does not have the luxury of mass production to establish a single takt time,
the solution may be to develop multiple takt times, breaking the requirements into
smaller components. (Conner, 2001)6
The following three cases assume a series of three operations, which each process
one unit per minute.
Operation 1 → Operation 2 → Operation 3
Case 1: Orders are manufactured in batches of 100 units.
The above three cases illustrate the power of one-piece flow. If a customer changes
their requirements, the shop will not have 300 units in queue in partial stages of
production. The shop will be able to shift production requirements and provide the
first units rapidly. (Conner, 2001)6, (Sharma, 2001)20
The necessary quality, machine, personnel, materials, and supplier resources must
be coordinated and made available as needed. The layout of the line or cell is a
starting point. The line should be examined as necessary to:
Lean Metrics
The reader should note that some lean metrics are located elsewhere in this
Handbook, for example:
The TPM metrics include availability, operating speed rate (OSR), net operating rate
(NOR), performance efficiency (PE), and overall equipment effectiveness (OEE).
Since the throughput rate is equal to 1/(cycle time), Little's Law can be written as:
Total Lead Time (TLT) = WIP x Cycle Time = WIP / Completion Rate
Example 6.1: If there are 30,000 units scheduled for production by ACME Stamping
and 3,000 units can be produced per day, what is the total required lead time?
Example 6.2: Quality Shipping has a backlog of 21 outstanding quotes. They provide
their customers 3 days maximum quotation service. What must their completion
rate be to meet their internal goal?
Completion Rate = WIP / TLT = 21 quotes / 3 days = 7 quotes/day
Example 6.3: Better Brass Corporation requires 8 weeks to fulfill a typical bearing
order. The value added time is 18 hours. Assuming the company works 24
hours/day, seven days per week, what is the process cycle efficiency expressed as
a percentage?
Example 6.4: If the throughput rate for an operation is 7,200 units per hour, what is
the cycle time in seconds?
Cycle Time = 1 / Throughput Rate
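The lean metric calculations in Examples 6.1 through 6.4 can be checked with a short Python sketch such as the one below. The formulas follow the relationships stated above; the variable names are illustrative only.

wip = 30_000                  # units scheduled for production (Example 6.1)
completion_rate = 3_000       # units produced per day
total_lead_time = wip / completion_rate          # Little's Law: 10 days

quotes_wip = 21               # outstanding quotes (Example 6.2)
max_days = 3
required_rate = quotes_wip / max_days            # 7 quotes/day

value_added_hours = 18        # Example 6.3
total_lead_hours = 8 * 7 * 24                    # 8 weeks at 24 h/day, 7 days/week
pce = 100 * value_added_hours / total_lead_hours # process cycle efficiency, about 1.34%

throughput_per_hour = 7_200   # Example 6.4
cycle_time_seconds = 3600 / throughput_per_hour  # 0.5 second per unit

print(total_lead_time, required_rate, round(pce, 2), cycle_time_seconds)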
Breyfogle (2003)5 defines a large number of six sigma measurements with the
suggestion that some are controversial and an organization need not use them all.
The authors of this Handbook have presented only those that are widely used:
• Defects = D
• Units = U
• Opportunities (for a defect) = O
• Yield = Y
Defect Relationships
Example 6.5: A matrix chart indicates the following information for 100 production
units. Determine the DPU:
Defects 0 1 2 3 4 5
Units 70 20 5 4 0 1
Example 6.6: Assume that each unit in Example 6.5 had 6 opportunities for a defect
(i.e. characteristics A, B, C, D, E, and F). Determine DPO and DPMO.
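A short Python sketch such as the following can be used to work Examples 6.5 and 6.6. The data are the matrix chart counts above; the variable names are illustrative.

defect_counts = {0: 70, 1: 20, 2: 5, 3: 4, 4: 0, 5: 1}     # defects per unit : number of units

units = sum(defect_counts.values())                          # 100 units
defects = sum(d * n for d, n in defect_counts.items())       # 47 defects in total

dpu = defects / units                                        # 0.47 defects per unit
opportunities_per_unit = 6                                   # characteristics A through F (Example 6.6)
dpo = defects / (units * opportunities_per_unit)             # defects per opportunity
dpmo = dpo * 1_000_000                                       # defects per million opportunities

print(f"DPU = {dpu:.2f}, DPO = {dpo:.4f}, DPMO = {dpmo:,.0f}")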
Yield Relationships
Note that the Poisson equation is normally used to model defect occurrences. If
there is a historic defect per unit (DPU) level for a process, the probability P(x) that
an item contains x flaws is described mathematically by the equation:
P(x) = (e^(-DPU))(DPU^x) / x!
Where: x is an integer greater than or equal to 0
DPU is greater than 0
If one is interested in the probability of having a defect free unit (as most people are),
then use x = 0 in the Poisson formula and the math is simplified:
Rolled throughput yield: Y_rt = RTY = (Y_1)(Y_2) ... (Y_n), the product of the individual operation yields Y_i for i = 1 to n
Example 6.7: Assume that a process has a DPU of 0.47. Determine the yield.
Example 6.8: Assume that a process has a first pass yield of 0.625. Determine the
DPU.
DPU = -ln(Y) = -ln(0.625) = 0.47
Example 6.9: For four sequential operations with first pass yields of 0.99, 0.98, 0.97, and 0.96, the rolled throughput yield is:
RTY = (0.99)(0.98)(0.97)(0.96) = 0.90345 = 90.345%
Probability of a defect = P(d)
P(d) can be looked up in a Z table (using the table in reverse to determine Z).
Example 6.10: The first pass yield for a single operation is 95%. What is the
probability of a defect and what is the Z value?
Schmidt (1997)19 reports that the 6 sigma quality level (with the 1.5 sigma shift) can
be approximated by:
6 Sigma Quality Level = 0.8406 + √(29.37 − 2.221 ln(ppm))
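The yield relationships above can be tied together in a brief Python sketch. The sketch below reworks Examples 6.7 through 6.10 and evaluates the Schmidt approximation at the commonly cited 3.4 ppm level; that ppm value is an illustrative input, not part of the examples.

import math
from statistics import NormalDist

dpu = 0.47
first_pass_yield = math.exp(-dpu)            # Example 6.7: Y = e^(-DPU) = 0.625
dpu_back = -math.log(0.625)                  # Example 6.8: DPU = -ln(Y) = 0.47

rty = 0.99 * 0.98 * 0.97 * 0.96              # Example 6.9: rolled throughput yield = 0.90345

y_single = 0.95                              # Example 6.10
p_defect = 1 - y_single                      # probability of a defect = 0.05
z_value = NormalDist().inv_cdf(y_single)     # Z value, about 1.645 (the table used "in reverse")

ppm = 3.4                                    # illustrative defects per million opportunities
sigma_level = 0.8406 + math.sqrt(29.37 - 2.221 * math.log(ppm))   # about 6.0

print(round(first_pass_yield, 3), round(rty, 5), round(z_value, 3), round(sigma_level, 2))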
Data Collection
• Types of data
• Collection methods
• Measurement scales
• Data accuracy
Types of Data
Data is objective information that everyone can agree on. Measurability is important
in collecting data. The three types of data are attribute data, variable data, and
locational data. Of these three, attribute and variables data are more widely used.
Attribute Data
Attribute data is discrete. This means that the data values can only be integers, for
example, 3, 48, 1029. Counted data or attribute data are answers to questions like
"how many," "how often" or "what kind." Examples include:
Variable Data
Variable data is continuous. This means that the data values can be any real
number, for example, 1.037, -4.69, 84.35. Measured data (variable data) are answers
to questions like "how long," "what volume," "how much time," and "how far." This
data is generally measured with some instrument or device. Examples include:
Measured data is regarded as being better than counted data. It is more precise and
contains more information. For example, one would certainly know much more
about the climate of an area if they knew how much it rained each day rather than
how many days it rained. Collecting measured data is often difficult and expensive,
so one must often rely on counted data.
In some situations, data will only occur as counted data. For example, a food
producer may measure the performance of microwave popcorn by counting the
number of kernels of unpopped corn in each bag tested.
Table 6.5 Variable and Attribute Data

                    Variable                        Attribute
Characteristics     measurable, continuous          countable, discrete units or
                                                    occurrences, may derive from
                                                    counting good/bad
Types of data       length, volume, time            number of defects, number of
                                                    defectives, number of scrap items
Examples            width of door gap,              audit points lost,
                    lug nut torque,                 paint chips per unit,
                    fan belt tension                defective lamps
Data examples       1.7 inches, 32.06 psi,          10 scratches, 6 rejected parts,
                    10.542 seconds                  25 paint runs
Locational Data
The third type of data does not fit into either category above. This data is known as
locational data which simply answers the question "where." Charts that utilize
locational data are often called measles charts or concentration charts. Examples
are a drawing showing locations of paint blemishes on an automobile, or a map of
the United States with the locations of the sales and distribution offices indicated.
Some data may only have discrete values, such as this part is good or bad, or I like
or dislike the quality of this product. Since variables data provides more information
than attribute data, for a given sample size, it is desirable to use variable data
whenever possible.
When collecting data, there are opportunities for some types of data to be either
attribute or variable. Instead of a good or bad part, the data can state how far out of
tolerance or within tolerance it is. The like or dislike of product quality can be
converted to a scale of how much do I like or dislike it.
Referring back to Table 6.5, two of the data examples could easily be presented as
variables data: 10 scratches could be reported as total scratch length of 8.37 inches,
and 25 paint runs as 3.2 sq. in. surface area of paint runs.
Even part failures can be reported as failure after 2,133 hours or cycles of operation,
etc., instead of the number of rejected parts.
Consideration of the cost of collecting variable versus attribute data should also be
given when choosing the method. Typically, the measuring instruments are more
costly for performing variables measurements and the cost to organize, analyze, and
store variables data is higher as well. A go/no-go ring gage can be used to quickly
check outside diameter threads. To determine the actual pitch diameter, a slower
and more costly process is required.
Variable data requires storing of individual values and computations for the mean,
standard deviation, and other estimates of the population. Attribute data requires
minimal counts of each category and hence requires very little data storage space.
For manual data collection, the required skill level of the technician is higher for
variables data than for attribute data. Likewise, the cost of automated equipment for
variables data is higher than for attribute data.
The ultimate purpose for the data collection and the type of data are the most
significant factors in the decision to collect attribute or variable data.
Collection Methods
Collecting information is not cheap. To help ensure that the data is relevant to the
problem, some prior thought must be given to what is expected. Some guidelines
are:
Data collection includes both manual and automatic methods. Data collected
manually may be done using printed paper forms or by data entry at the time the
measurements are taken. Manual systems are labor intensive and subject to human
errors in measuring and recording the correct values.
Automatic data collection includes electronic chart recorders and digital storage.
The data collection frequency may be synchronous, based on a set time interval, or
asynchronous, based on events. Automatic systems have higher initial costs than
manual systems, and have the disadvantage of collecting both "good" and
"erroneous" data. Advantages of automatic data collection systems include high
accuracy rates and the ability to operate unattended.
The efficiency of data entry and analysis is frequently improved by data coding.
Problems due to not coding include:
• Inspectors trying to squeeze too many digits into small areas on a form
Coding by substitution:
Measurements such as 0.55303, 0.55310, 0.55308 in which the digits 0.553 repeat in
all observations can be recorded as the last two digits expressed as integers.
Depending on the objectives of the analysis, it may or may not be necessary to
decode the measurements.
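A minimal Python sketch of coding by substitution is shown below; the scale factor of 100,000 is simply the one implied by recording the last two digits as integers.

readings = [0.55303, 0.55310, 0.55308]
coded = [round((x - 0.553) * 100_000) for x in readings]    # record only the last two digits: 3, 10, 8
decoded = [0.553 + c / 100_000 for c in coded]              # decode when the analysis requires it
print(coded, decoded)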
Check Sheets
Check sheets are great tools for organizing and collecting facts and data. By
collecting data, individuals or teams can make better decisions, solve problems
faster, and earn management support.
A recording check sheet is used to collect measured or counted data. The simplest
form of the recording check sheet is for counted data. Data is collected by making
tick marks on this particular style of check sheet. (Wortman, 2005)25
Days of Week
Errors              1     2     3     4     5     6     Total
Defective Lights                                          40
Loose Fasteners                                           16
Scratches                                                 21
Missing Parts                                              3
Dirty Contacts                                            32
Other                                                      9
Total              19    19    16    19    23    25     121
Figure 6.6 Typical Recording Check Sheet (tick marks are entered in each cell as defects occur)
The check sheet can be broken down to indicate either shift, day, or month.
Measured data may be summarized by the means of a check sheet called a tally
sheet. To collect measured data, the same general check sheet form is used. The
only precaution is to leave enough room to write in individual measurements.
Checklists
The second major type of check sheet is called the checklist. A grocery list is a
common example of a checklist. On the job, checklists may often be used for
inspecting machinery or product. Checklists are also very helpful when learning
how to operate complex or delicate equipment.
A locational check sheet called a measles chart could be used to show defect
locations using a schematic of the product.
Measurement Scales*
Table 6.9 details four measurement scales in increasing order of statistical
desirability.
* For a more expansive treatment of the measurement scales and probability, refer
to Triola (1994)23.
*Note: Many of the interval measures may be useful for ratio data as well.
Data Examples
3. Ordinal scale: Defects are categorized as critical, major A, major B, and minor
4. Nominal scale: A print-out of all shipping codes for last week's orders
6. Interval scale: The temperatures of steel rods (°F) after one hour of cooling
Data Accuracy
Bad data is not only costly to capture, but corrupts the decision making process.
Some considerations include:
• Screen or filter data to detect and remove data entry errors such as digital
transposition and magnitude shifts due to a misplaced decimal point.
• Avoid removal by hunch. Use objective statistical tests to identify outliers.
It is important to select a sampling plan appropriate for the purpose of the use of the
data. There are no standards as to which plan is to be used for data collection and
analysis, therefore the analyst makes a decision based upon experience and the
specific needs. A few sampling methods are listed on the next page. There are
many other sampling techniques that have been developed for specific needs.
Sampling is often undertaken because of time and economic advantages. The use
of a sampling plan requires randomness in sample selection. Obviously, true
random sampling requires giving every part an equal chance of being selected for
the sample.
The sample must be representative of the lot and not just the product that is easy to
obtain. Thus, the selection of samples requires some up front thought and planning.
Often, emphasis is placed on the mechanics of sampling plan usage and not on
sample identification and selection. Sampling without randomness ruins the
effectiveness of any plan. The product to be sampled may take many forms: in a
layer, on a conveyor, in sequential order, etc. The sampling sequence must be
based on an independent random plan. The sample is determined by selecting an
appropriate number from a hat or random number table.
Sequential Sampling
Sequential sampling plans are similar to multiple sampling plans except that
sequential sampling can theoretically continue indefinitely. Usually, these plans are
ended after the number inspected has exceeded three times the sample size of a
corresponding single sampling plan. Sequential testing is used for costly or
destructive testing with sample sizes of one, and is based on a probability ratio test
developed by Wald (1947)24.
Stratified Sampling
One of the basic assumptions made in sampling is that the sample is randomly
selected from a homogeneous lot. When sampling, the "lot" may not be
homogeneous. For example, parts may have been produced on different lines, by
different machines, or under different conditions. One product line may have well
maintained equipment, while another product line may have older or poorly
maintained equipment.
The concept behind stratified sampling is to attempt to select random samples from
each group or process that is different from other groups or processes. The
resulting mix of samples can be biased if the proportion of the samples does not
reflect the relative frequency of the groups. To the person using the sample data,
the implication is that they must first be aware of the possibility of stratified groups
and second, phrase the data report such that the observations are relevant only to
the sample drawn and may not necessarily reflect the overall system.
Measurement Systems
• Planning - The actions involved with the entire calibration system must be
planned. This planning must consider management system analysis.
• Sealing for integrity - Where adjustments may be made that may logically go
undetected, sealing of the adjusting devices or case is required.
• Use of outside products and services - Procedures must define controls that
will be followed when any outside calibration source or service is used.
Measurement Error
The error of a measuring instrument is the indication of a measuring instrument
minus the true value. (Grant, 1988)10
σ²_ERROR = σ²_MEASUREMENT − σ²_TRUE
or
σ²_MEASUREMENT = σ²_TRUE + σ²_ERROR
Because the error of an average of repeated measurements decreases as 1/√n,
halving the error of measurement requires quadrupling the number of measurements.
There are many reasons that a measuring instrument may yield erroneous variation,
including the following categories:
Measurement Terms
The following are not the official measurement definitions, but are meant to provide
the basic concepts. The following terms are primarily from (Stein, 2003)22 and (AIAG,
2002)3.
AIAG (2002)3 defines five sources of measurement variation that can be determined
by gage R&R studies: bias, linearity, repeatability, reproducibility, and stability.
Range Method
Reproducibility is the variability introduced into the measurement system by the bias
differences of different operators. The range method does not quantify repeatability
and reproducibility separately. The range method is a simple way to quantify the
combined repeatability and reproducibility of a measurement system. To separate
repeatability and reproducibility, the average and range method or the analysis of
variance method must be used.
The average and range method computes the total measurement system variability,
and allows the total measurement system variability to be separated into
repeatability, reproducibility, and part variation.
The analysis of variance method (ANOVA) is the most accurate method for
quantifying repeatability and reproducibility. In addition, the ANOVA method allows
the variability of the interaction between the appraisers and the parts to be
determined.
Examples of the average and range and ANOVA methods will be presented using the
same data for both examples. The following methodology will be used:
Note that the R&R determination described in the following example is referred to
as the "short method." Ford, Chrysler, and General Motors generally prefer another
format called the "long method."
Technician A
Part    1st Set    2nd Set    R       Average
1         2.0        1.0      1.0       1.5
2         2.0        3.0      1.0       2.5
3         1.5        1.0      0.5       1.25
4         3.0        3.0      0.0       3.0
5         2.0        1.5      0.5       1.75
R_A = 0.6                             X̄_A = 2.0

Technician B
Part    1st Set    2nd Set    R       Average
1         1.5        1.5      0.0       1.5
2         2.5        2.5      0.0       2.5
3         2.0        1.5      0.5       1.75
4         2.0        2.5      0.5       2.25
5         1.5        0.5      1.0       1.0
R_B = 0.4                             X̄_B = 1.8
To proceed further, one must determine several standard deviations using the range
formula:
σ̂ = R̄ / d2
The σ̂ is the predicted value of the standard deviation based on the average range.
The factor d2 depends upon the sample size (n) and the number of samples (K). It
is also convenient to work with values of 1/d2 so that division can be replaced by
multiplication. As the number of samples, K, approaches infinity, the 1/d2 values
approach those calculated from the d2 capability factors for control charts given in
the Appendix. Table 6.13 shows 1/d2 values:
n \ K       1        2        3        4        5        10       ∞
  2       0.709    0.781    0.813    0.826    0.840    0.862    0.885
  3       0.524    0.552    0.565    0.571    0.575    0.581    0.592
  4       0.446    0.465    0.472    0.474    0.476    0.481    0.485
Where 1/d2 is based on K = 15 samples and n = 2. From Table 6.13, the ∞ column is
used for K and 1/d2 equals 0.885. R̄ is the grand average range within parts.
Where 1/d2 is based on one sample, K = 1, and n = 3. From Table 6.13, 1/d2 equals
0.524. R3 is the range between the average of all measurements taken by each
technician.
The total observed standard deviation in the example can also be determined by the
additive law of variances according to the following formula:
The constant 5.15 comes from the normal curve, representing a 99% confidence
level. Obviously, the 49% value consumed by the measuring system is too large.
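The short (average and range) method can be sketched in Python as shown below. Only the Technician A and B readings reproduced above are used, so the 1/d2 factors and the resulting numbers are illustrative and will not reproduce the Handbook's worked answer, which also draws on a third technician's readings.

from statistics import mean

# Readings from the table above: part number -> (1st set, 2nd set)
tech_a = {1: (2.0, 1.0), 2: (2.0, 3.0), 3: (1.5, 1.0), 4: (3.0, 3.0), 5: (2.0, 1.5)}
tech_b = {1: (1.5, 1.5), 2: (2.5, 2.5), 3: (2.0, 1.5), 4: (2.0, 2.5), 5: (1.5, 0.5)}

# Repeatability: grand average within-part range times 1/d2 (n = 2 trials, K = 10 ranges).
ranges = [abs(a - b) for tech in (tech_a, tech_b) for a, b in tech.values()]
sigma_repeat = mean(ranges) * 0.862            # 1/d2 from Table 6.13, n = 2, K = 10

# Reproducibility: range of the technician averages times 1/d2 (n = 2 technicians, K = 1).
tech_means = [mean([v for pair in tech.values() for v in pair]) for tech in (tech_a, tech_b)]
sigma_repro = (max(tech_means) - min(tech_means)) * 0.709

sigma_rr = (sigma_repeat ** 2 + sigma_repro ** 2) ** 0.5
print(round(sigma_repeat, 3), round(sigma_repro, 3), round(sigma_rr, 3))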
The following ANOVA examples use the same data as presented in the average and
range methods.
A RowSq is determined by squaring the Row total and dividing by Row n, e.g.,
8²/6 = 10.667.
An interaction CellSq is determined by squaring the Cell total and dividing by the cell
sample size n, e.g., (2 + 1)² / 2 = 4.5.
ErrorSS = TotSS − TechSS − PartSS − InterSS = 15.742 − 0.6167 − 9.867 − 1.633 = 3.625
Var(parts) = (2.467 − 0.2417) / 6 = 0.3708
Var(inter) = (0.2041 − 0.2417) / 2 = −0.0188
The adjusted variance (Adj Var) column converts the negative interaction variance
to O. The % column shows the percent contribution of each component based on the
Adj Var column.
SIGe (0.4916) is the square root of the Error MS (0.2417) and represents repeatability.
SIGtot (0.7368) is the sigma of total data. The difference between SIGe and SIGtot
is due to the difference among technicians and the difference among parts.
Repeatability is the error variance and contributes 39.03% of the total variation in the
data. Reproducibility is the variation among technicians which contributes 1.08%
of the variation in the data. However, the F ratio test for technicians is 1.28
compared to an F critical value of 3.68 at the 95% confidence level. The null
hypothesis that there is no difference among technicians is not rejected. This
implies that a reduction in measurement variation cannot be achieved by directing
improvement activities at the three technicians. There is no interaction. The
interaction variance is effectively O. This means that each technician measures each
part in the same way.
Because variances are additive, one could say that the total measurement
contribution is repeatability variance + technician variance = 39.03% + 1.08% =
40.11%. If R&R variation is to be reduced, it is the source of repeatability variation
which must be addressed.
Process variation accounts for 59.89% of the total variation in the data. Note that the
null hypothesis of no difference between parts would be rejected. Fcal (10.21) is
greater than Fα (3.06). Whether this is too much process variation requires
comparing total data with specifications. The specifications have no way of knowing
the variance components of product output measurements.
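The variance component arithmetic quoted above can be reproduced with the brief Python sketch below. The technician mean square is not quoted in the text, so it is inferred here from the stated F ratio of 1.28; the divisors follow the text's arithmetic for 3 technicians, 5 parts, and 2 trials.

ms_error = 0.2417                      # Error MS = repeatability variance
ms_parts = 2.467                       # Parts MS
ms_inter = 0.2041                      # Interaction MS
ms_tech = 1.28 * ms_error              # inferred from the stated F ratio of 1.28 (assumption)

var_repeat = ms_error
var_tech = (ms_tech - ms_error) / 10           # 5 parts x 2 trials
var_parts = (ms_parts - ms_error) / 6          # 3 technicians x 2 trials
var_inter = max((ms_inter - ms_error) / 2, 0)  # the negative estimate is set to 0

total = var_repeat + var_tech + var_parts + var_inter
for name, var in [("repeatability", var_repeat), ("technicians", var_tech),
                  ("parts", var_parts), ("interaction", var_inter)]:
    # reproduces the text's 39.03% / 1.08% / 59.89% / 0% split to within rounding
    print(f"{name:13s} {100 * var / total:6.2f}% of total variance")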
Histograms
Histograms are frequency column graphs that display a static picture of process
behavior. Histograms usually require a minimum of 50-100 data points in order to
adequately capture the measurement or process in question. A histogram is
characterized by the number of data points that fall within a given bar or interval.
This is commonly referred to as "frequency." A stable process is most commonly
characterized by a histogram exhibiting unimodal or bell-shaped curves.
Histograms (Continued)
(Figure: a tally of individual measurements from .50 to .65 plotted against frequency, forming a histogram.)
• There are many distributions that do not follow the normal curve. Examples
include the Poisson, binomial, exponential, lognormal, rectangular, U-shaped,
and triangular distributions.
Normal Distribution
Characteristics of normal distributions include:
Average
When all special causes of variation are eliminated, the process will produce a
product that, when sampled and plotted, has a bell-shaped distribution. If the base
of the histogram is divided into six (6) equal lengths (three on each side of the
average), the amount of data in each interval exhibits the following percentages:
μ − 3σ     μ − 2σ     μ − σ     μ     μ + σ     μ + 2σ     μ + 3σ
(The interval from μ − 3σ to μ + 3σ contains 99.73% of the data.)
Z = (x − μ) / σ
Where: x = data value (the value of concern)
μ = mean
σ = standard deviation
This transformation will convert the original values to the number of standard
deviations away from the mean. The result allows one to use a standard normal
table to describe areas under the curve (probability of occurrence). There are
several ways to display the normal (standardized) distribution:
Figure 6.18 P(z = −∞ to 1) = 0.8413
Figure 6.20 P(z = 0 to 1) = 0.3413
One must understand the specific normal distribution table that is being used. The
standard normal table in the Appendix uses the second method for calculating the
probability of occurrence.
Z Value Example
To illustrate the Z value, consider the following weights of typical 10th grade
students. The weights are normally distributed with a mean μ = 150 lb and standard
deviation σ = 20 lb.
Example 6.14: What is the probability of a student weighing between 120 and
160 lb?
The best technique to solve this problem is using the standard normal table in the
Appendix to determine the tail area values, and to subtract them from the total
probability of 1.
Thus, 62.47% of the students will weigh more than 120 lb, but less than 160 lb.
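Example 6.14 can also be evaluated directly from the standard normal distribution, as in the short Python sketch below.

from statistics import NormalDist

weights = NormalDist(mu=150, sigma=20)
z_low = (120 - 150) / 20          # -1.5
z_high = (160 - 150) / 20         # +0.5
p_between = weights.cdf(160) - weights.cdf(120)
print(f"Z values: {z_low}, {z_high}; P(120 < W < 160) = {p_between:.4f}")   # about 0.6247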
Figure 6.23 A Comparison of Process Spread to Tolerance Range
• Collecting data
• Do nothing. If the process limits fall well within the specification limits, no
action may be required.
• Center the process. When the process spread is approximately the same as
the specification spread, an adjustment to the centering of the process may
bring the bulk of the product within specifications.
• Reduce variability. This is often the most difficult option to achieve. It may
be possible to partition the variation (within piece, batch to batch, etc.) and
work on the largest variation component first. An experimental design may
be used to identify the leading source of variation.
• Accept the losses. In some cases, management must be content with a high
loss rate. Some centering and reduction in variation may be possible, but the
principal emphasis is on handling the scrap and rework efficiently.
If a part has fourteen different dimensions, process capability would not normally
be performed for all of these dimensions. Selecting one, or possibly two, key
dimensions provides a more manageable method of evaluating the process
capability. For example, in the case of a machined part, the overall length or the
diameter of a hole may be the critical dimension. The characteristic selected may
also be determined by the history of the part and the parameter that has been the
most difficult to control or has created problems in the next higher level of
assembly.
Chrysler, Ford, and General Motors use symbols to designate safety and/or
government regulated characteristics and important performance, fit, or appearance
characteristics. (AIAG, 1995)1
The process capability study is used to demonstrate that the process is centered
within the specification limits and that the process variation predicts the process is
capable of producing parts within the tolerance requirements.
When the process capability study indicates the process is not capable, the
information is used to evaluate and improve the process in order to meet the
tolerance requirements. There may be situations where the specifications or
tolerances are set too tight in relation to the achievable process capability. In these
circumstances, the specification must be reevaluated. If the specification cannot be
adjusted, then the action plan is to perform 100% inspection of the process, unless
inspection testing is destructive.
The appropriate sampling plan for conducting process capability studies depends
upon the purpose and whether there are customer or standards requirements for the
study. Ford and General Motors specify that process capability studies for PPAP
submissions be based on data taken from a significant production run of a minimum
of 300 consecutive pieces. (AIAG,1995)1
If the process is currently running and is in control, control chart data may be used
to calculate the process capability indices. If the process fits a normal distribution
and is in statistical control, then the standard deviation can be estimated from:
σ̂ = R̄ / d2
For example, for a project proposal of a new process, a pilot run may be used to
estimate the process capability. The disadvantage of using a pilot run is that the
estimated process variability is most likely less than the process variability expected
from an ongoing process.
Process capabilities conducted for the purpose of improving the process may be
performed using a design of experiments (DOE) approach. The objective of DOE is
to determine the optimum values of the process variables which yield the lowest
process variation.
If only common causes of variation are present in a process, then the output of the
process forms a distribution that is stable over time and is predictable. If special
causes of variation are present, the process output is not stable over time.
(AIAG, 1995)2
The process may also be unstable if either the process average or variation is out-of-
control. Figure 6.24 depicts an unstable process with both process average and
variation out-of-control.
Common causes of variation refer to the many sources of variation within a process
that has a stable and repeatable distribution over time. This is called a state of
statistical control and the output of the process is predictable. Special causes refer
to any factors causing variation that are not always acting on the process. If special
causes of variation are present, the process distribution changes and the process
output is not stable over time. (AIAG, 1995)2
When plotting a process on a control chart, lack of process stability can be shown
by several types of patterns including: points outside the control limits, trends,
points on one side of the centerline, cycles, etc.
The validity of the normality assumption may be tested using the chi-square
hypothesis test. If the data does not fit a normal distribution, the chi-square
hypothesis test may also be used to test the fit to other distributions such as the
exponential or binomial distributions.
(Figure: the distribution of averages, with control limits UCL and LCL at ±3 Sx̄, compared with the distribution of individual values and the process spread of ±3 Sp.)
The following calculation will determine the approximate standard deviation of the
process, if the R-bar is known from a control chart.
Z_Lower = (X̄ − LSL) / σ̂          Z_Upper = (USL − X̄) / σ̂
Looking up the value of Z in a standard normal table gives the area outside of
specification. In the example above, Z upper = 1.0. Thus, 15.9% is above the USL.
Example 6.15: Given USL = 8.5, LSL = 3.5, n = 3, and the following control chart data,
what are the projected failure rates?

      6   4   7   8   1   5   6   4   8   8   6   9   4   6   8   5   6   7   1   6
X     7   6   9   6   7   3   2   6   2   4   6   5   8   6   7  10   6   3   6   1
     10   5   5   1   5   2   7   7   9   6   5   2   4   5   8   7   4   4   5   2
X̄   7.7 5.0 7.0 5.0 4.3 3.3 5.0 5.7 6.3 6.0 5.7 5.3 5.3 5.7 7.7 7.3 5.3 4.7 4.0 3.0
R     4   2   4   7   6   3   5   3   7   4   1   7   4   1   1   5   2   4   5   5

Note that n = 3, X̄ = 5.465, and R̄ = 4.0.
An estimate of the process standard deviation: σ̂ = R̄ / d2 = 4.0 / 1.693 = 2.363
Z_Upper = (8.5 − 5.465) / 2.363 = 1.28 and Z_Lower = (5.465 − 3.5) / 2.363 = 0.83
From the Z table: Z = 1.28 gives 10.0% too high and Z = 0.83 gives 20.3% too low, or 30.3% total.
(Histogram of the 60 individual measurements, with LSL = 3.5 and USL = 8.5 marked. The three failure-rate estimates are annotated: 20.3% low / 10.0% high from the Z table, 18.3% low / 8.3% high from the histogram, and 19.5% low / 9.3% high from the individual data.)
However, from the histogram: 5/60 = 8.3% too high and 11/60 = 18.3% too low, or
26.6% total. The individual data plugged into a calculator yields 9.3% too high, 19.5%
too low, and 28.8% total. Given three answers, which is correct? The histogram values
are the short-term truth and the other two are long-term estimates.
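The long-term estimate in Example 6.15 can be reproduced with the Python sketch below; small differences from the table lookup values above are due only to rounding of the Z values.

from statistics import NormalDist

x_bar, r_bar, d2 = 5.465, 4.0, 1.693      # d2 for subgroup size n = 3
usl, lsl = 8.5, 3.5
sigma_hat = r_bar / d2                     # about 2.363

z_upper = (usl - x_bar) / sigma_hat        # about 1.28
z_lower = (x_bar - lsl) / sigma_hat        # about 0.83
frac_high = 1 - NormalDist().cdf(z_upper)  # about 10% above the USL
frac_low = 1 - NormalDist().cdf(z_lower)   # about 20.3% below the LSL
print(f"{100*frac_high:.1f}% high, {100*frac_low:.1f}% low, {100*(frac_high + frac_low):.1f}% total")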
σ̂_R = R̄ / d2
Capability Index
Cp = (USL − LSL) / (6 σ̂_R)
Cpk = min[(USL − X̄) / (3 σ̂_R), (X̄ − LSL) / (3 σ̂_R)]
Capability Ratio
Note, the rule of thumb logic is somewhat out of step with the six sigma assumption
of a ±1.5 sigma shift. The above formulas only apply if the process is centered,
remains centered within the specifications, and Cp = Cpk.
Cpm Index
The Cpm index accounts for the deviation of the process average from a target value, T:
Cpm = (USL − LSL) / (6 √(σ̂_R² + (X̄ − T)²))
Example 6.16: For a process with X̄ = 12, μ = 12, σ̂_R = 2, USL = 16, LSL = 4, and T =
10, determine Cp, Cpk, and Cpm:
Cp = (USL − LSL) / (6 σ̂_R) = (16 − 4) / (6(2)) = 1
Cpk = min[(USL − X̄) / (3 σ̂_R), (X̄ − LSL) / (3 σ̂_R)] = min[(16 − 12) / (3(2)), (12 − 4) / (3(2))] = 0.667
The performance indices are computed in the same way, but use the sample standard
deviation of the individual data, σ_i:
σ_i = √[ Σ (X_j − X̄)² / (n − 1) ]   (summed over j = 1 to n)
Pp = (USL − LSL) / (6 σ_i)
Ppk = min[(USL − X̄) / (3 σ_i), (X̄ − LSL) / (3 σ_i)]
Performance Ratio
Pr = 6 σ_i / (USL − LSL)
Example 6.17: For a process with X̄ = 12, σ_i = 2, USL = 16, and LSL = 4, determine
Pp, Ppk, and Pr:
Pp = (USL − LSL) / (6 σ_i) = (16 − 4) / (6(2)) = 1
Ppk = min[(USL − X̄) / (3 σ_i), (X̄ − LSL) / (3 σ_i)] = min[(16 − 12) / (3(2)), (12 − 4) / (3(2))] = 0.667
Pr = 6 σ_i / (USL − LSL) = 6(2) / (16 − 4) = 1
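The index calculations in Examples 6.16 and 6.17 can be verified with the short Python sketch below. The Cpm line uses the common definition Cpm = (USL − LSL) / (6 √(σ² + (X̄ − T)²)); its numeric result is not worked in the text above, so treat that line as an illustration of the definition rather than a quoted answer.

import math

usl, lsl, target = 16.0, 4.0, 10.0
x_bar, sigma = 12.0, 2.0

cp = (usl - lsl) / (6 * sigma)                                        # 1.0
cpk = min((usl - x_bar) / (3 * sigma), (x_bar - lsl) / (3 * sigma))   # 0.667
cpm = (usl - lsl) / (6 * math.sqrt(sigma**2 + (x_bar - target)**2))   # about 0.707 (illustrative)

pp = (usl - lsl) / (6 * sigma)                                        # same arithmetic with sigma_i
ppk = min((usl - x_bar) / (3 * sigma), (x_bar - lsl) / (3 * sigma))
pr = 6 * sigma / (usl - lsl)                                          # 1.0

print(cp, round(cpk, 3), round(cpm, 3), pp, round(ppk, 3), pr)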
Cp Z value ppm
0.33 1.00 317,311
0.67 2.00 45,500
1.00 3.00 2,700
1.10 3.30 967
1.20 3.60 318
1.30 3.90 96
1.33 4.00 63
1.40 4.20 27
1.50 4.50 6.8
1.60 4.80 1.6
1.67 5.00 0.57
1.80 5.40 0.067
2.00 6.00 0.002
In Table 6.28, ppm equals parts per million of nonconformance (or failure) when the
process:
• Is centered on X̄
• Has a two-tailed specification
• Is normally distributed
• Has no significant shifts in average or dispersion
When the Cp, Cpk, Pp, and Ppk values are 1.0 or less, Z values and the standard normal
table can be used to determine failure rates. With the drive for increasingly
dependable products, there is a need for failure rates in the Cp range of 1.5 to 2.0.
When a process capability is determined using one operator on one shift, with one
piece of equipment, and a homogeneous supply of materials, the process variation
is relatively small. When factors are included for time, multiple operators, various
lots of material, environmental changes, and so on, each contributes to increasing
the process variation. Control limits based on a short-term process evaluation are
closer together than control limits based on the long-term process.
Smith (2001)21 describes a short run with respect to time and a small run, where
there is a small number of pieces produced. When a small amount of data is
available, there is generally less variation than is found with a larger amount of data.
Control limits based on the smaller number of samples will be narrower than they
should be, and control charts will produce false, out-of-control patterns.
Smith suggests a modified X̄ and R chart for short runs, running an initial 3 to 10
pieces without adjustment. A calculated value is compared with a critical value and
either the process is adjusted or an initial number of subgroups is run. Inflated D4
and A2 values are used to establish control limits. Control limits are recalculated
after additional groups are run.
For small runs, with a limited amount of data, Smith recommends the use of the X
and MR chart. The X represents individual data values, not an average, and the MR
is the moving range, a measure of piece-to-piece variability. Process capability or
Cpk values determined from either of these methods must be considered preliminary
information. As the number of data points increases, the calculated process
capability will approach the true capability.
When comparing attribute data and variables data, variables data generally provides
more information about the process, for a given number of data points. Using
variables data, a reasonable estimate of the process mean and variation can be
made with 25 to 30 groups of five samples each. Whereas a comparable estimate
using attribute data may require 25 groups of 50 samples each. Using variables data
is preferable to using attribute data for estimating process capability.
References
1. AIAG. (1995). Production Part Approval Process (PPAP). Chrysler Corporation,
Ford Motor Company, and General Motors Corporation.
4. Breyfogle, F.W., III. (1999). Implementing Six Sigma: Smarter Solutions Using
Statistical Methods. New York: John Wiley & Sons.
5. Breyfogle, F.W., III. (2003). Implementing Six Sigma, 2nd ed. New York: John
Wiley & Sons.
6. Conner, G. (2001). Lean Manufacturing for the Small Shop. Dearborn, MI:
Society of Manufacturing Engineers.
7. Edenborough, N.B. (2002). Quality System Handbook. Terre Haute, IN: Quality
Council of Indiana.
8. Gee, G., McGrath, P., & Izadi, M. (1996, Fall). "A Team Approach to Kaizen."
Journal of Industrial Technology.
9. George, M.L., Rowlands, D., Price, M., & Maxey, J. (2005). The Lean Six Sigma
Pocket Toolbook. New York: McGraw-Hill.
10. Grant, E.L. & Leavenworth, R.S. (1988). Statistical Quality Control, 6th ed.
New York: McGraw-Hill.
11. Gryna, F.M. (2001). Quality Planning and Analysis, 4th ed. New York: McGraw-Hill.
12. Harry, M. & Schroeder, R. (2000). Six Sigma. New York: Currency/Doubleday.
14. ISO VIM. (1993). International Vocabulary of Basic and General Terms in
Metrology, 2nd ed. Geneva, Switzerland: ISO.
15. ISO/TS 16949:2002. Quality Management Systems - Particular Requirements for
the Application of ISO 9001:2000 for Automotive Production and Relevant
Service Part Organizations, 2nd ed. Geneva: International Organization for
Standardization.
16. Juran, J.M. (1999). Juran's Quality Handbook, 5th ed. New York: McGraw-Hill.
18. Rother, M., & Shook, J. (1999). Learning to See: Value Stream Mapping to Add
Value and Eliminate Muda. Brookline, MA: The Lean Enterprise Institute.
19. Schmidt, S.R. & Launsby, R.G. (1997). Understanding Industrial Designed
Experiments, 4th ed. Colorado Springs, CO: Air Academy Press.
20. Sharma, A., & Moody, P. (2001). The Perfect Engine: How to Win in the New
Demand Economy by Building to Order with Fewer Resources. New York: The
Free Press.
21. Smith, G.M. (2001). Statistical Process Control and Quality Improvement. Upper
Saddle River, NJ: Prentice-Hall.
22. Stein, P.G. (2003). Certified Calibration Technician Primer. Terre Haute, IN:
Quality Council of Indiana.
23. Triola, M.F. & Franklin, L.A. (1994). Business Statistics. Reading, MA: Addison-Wesley.
24. Wald, A. (1947). Sequential Analysis. New York: John Wiley & Sons.
25. Wortman, B.L. & Carlson, D.R. (2005). CQE Primer. Terre Haute, IN: Quality
Council of Indiana.
26. Wortman, B.L., et al. (2001). CSSBB Primer. Terre Haute, IN: Quality Council of
Indiana.
6.1. Calculate the takt time for the one hour of provided system data?
Demand 30 units/hr
a. 5 minutes  c. 3 minutes
b. 1.8 minutes  d. 2 minutes

6.2. Which of the following statements about check sheets is NOT true?
a. Subjective data requires larger forms than measured data
b. Check sheets can record subjective data
c. A shopping list is a check sheet
d. Both variables and attribute data can be recorded on a check sheet

6.3. Legitimate techniques for ensuring data accuracy and integrity would NOT include which of the options below:
a. Avoid bias relative to targets or tolerances
b. Screen data to remove entry errors
c. Remove outlier data, as they arise
d. Record data in time sequence, when appropriate

6.4. In a normal distribution, what is the area under the curve between +0.7 and +1.3 standard deviation units?
a. 0.2903  c. 0.2580
b. 0.7580  d. 0.1452

6.5. A histogram is used to plot the number of voids found versus the weight of a plastic injection molded part. One would expect the shape of the distribution to be:

6.7. Why is process capability called a comparison between two independent worlds?
a. Because it is based on two independent, upper and lower, specification limits
b. Because it compares two independent estimates of the standard deviation
c. Because it compares the world of specifications with the world of product spreads
d. Because it is based on two independent estimates of normality of the means

6.8. The written document that describes the purpose and scope of an activity at a general level is called a:
a. Manual  c. Work instructions
b. Procedure  d. Check sheet

6.9. The repeatability of an R&R study can be determined by examining the variation between:
a. The individual technicians and within their measurement readings
b. The average of the individual technicians for all parts measured
c. Part averages that are averaged among technicians
d. The individual technicians and comparing it to the part averages

6.10. Which of the following statements describes attribute data?
a. Number of employees wearing green shirts
b. Number of gallons of chemical used in a process
c. Diameter of a hole
d. Miles per gallon of automobile fuel economy

6.11. When conducting a process capability study consistent with PPAP requirements, which of the following is mandatory?

a. 2.0  c. 0.5
b. 1.0  d. 0.4
6.13. How is process capability for attribute data calculated?
a. It is not possible to estimate a process capability since attribute data is abnormal
b. It is the average proportion or range of the nonconformities per unit for stable processes
c. It is calculated the same way as process capability for continuous data
d. It is the number of conformities per unit for unstable processes

6.14. Select the INCORRECT statement from the following. The IDs of a certain piece of tubing are normally distributed with a mean 1.00". The proportion of tubing with IDs less than 0.90" is:

6.18. The calibration of measuring instruments is necessary to maintain accuracy. How does calibration affect precision?
a. The precision increases over the working range of the instrument
b. The precision decreases over the working range of the instrument
c. Calibration has a minimum impact on precision
d. The precision will vary over the working range of the instrument

6.19. A process consists of three sequential steps with the following yields:
Y1 = 99.8, Y2 = 97.4, Y3 = 96.4
Determine the total defects per unit.
6.24. The main purpose of a gage R&R study is to:
a. Determine the total variation in a part
b. Determine the total variation in a gage system
c. Determine how much of the total variation is contributed by measurement
d. Determine the ratio of reproducibility to repeatability measurements

6.25. Random selection of a sample:
a. Theoretically means that each item in the lot has an equal chance to be selected
b. Ensures that the sample average will equal the population average
c. Means that a table of random numbers was used to dictate the selection
d. Is a meaningless theoretical requirement

6.26. What documents provide details that expand how and when an activity will take place?

6.28. The tabulation of the number of times a given quality characteristic measurement occurs, within the product sample being checked, is called a:

6.30. Which of the following is an example of variables data?
a. Paint chips per unit
b. Tension on fan belts
c. Percent of defective units in a lot
d. Audit points

6.31. If one were to compare short-term capability with long-term capability for the same process, it should not be surprising to find:
a. The long-term capability improves
b. The Cp is better short-term
c. The results are very comparable
d. The average drifts but the variation stays

6.32. One thousand units of product were examined for the possibility of 5 different undesirable characteristics. A total of 80 defects were found. How many defects would be expected in a million opportunities?

a. 60%  c. At least 100%
b. 80%  d. 120%

6.35. Use this data for the following question.
6.37. Reproducibility in an R&R study would be considered the variability introduced into the measurement system by:
a. The change in instrument differences over the operating range
b. The total measurement system variation
c. The bias differences of different operators
d. The part variation

6.38. A check sheet is being used to record information from several different machining centers. Which of the following types of information is NOT suitable for inclusion in the check sheet?
a. Number of rejects  c. Machine number
b. Types of defects  d. Dimensional size

6.39. The outcome from flipping a coin is considered:
a. Discrete data  c. Random data
b. Continuous data  d. Probability data

6.40. Customer satisfaction data is often accumulated using some form of Likert scale (1-5, 1-7, etc). What measurement scale is being used?
a. Nominal  c. Interval
b. Ordinal  d. Ratio

6.41. One of the advantages of using the ANOVA method versus the average and range method for gage R&R is:
a. Less math
b. Fewer readings
c. An ability to measure interactions
d. The ability to partition variation into component parts

6.42. Which two of the following statements are true for sequential sampling plans?
I. The ASN is relatively low
II. These plans are simple to understand
III. Samples are taken one item at a time
IV. The administration costs are low

6.43. Which of the following statements best describes a bimodal distribution?
a. This distribution shows stratified data and has two distinct peaks
b. This distribution shows a single mode and bell shaped distribution
c. This distribution is truncated
d. This distribution has several distribution peaks

6.44. The DPMO for a process is 860. What is the approximate 6 sigma level of the process?
a. 4.2  c. 4.6
b. 4.4  d. 4.8

6.45. The interaction term in an R&R ANOVA indicates an interaction between:
a. The technician and measurement error
b. The technician and the part
c. The part and the total variation
d. The repeatability and the reproducibility

6.46. One use of recording check sheets is:
a. Automating the charting of variables data
b. Collecting tally counts of attribute data
c. Identifying process variables
d. Creating process maps

6.47. If the value added time for a process is 40 minutes per 8-hour shift, what is the process cycle efficiency expressed as a percentage?
a. 5.00%  c. 6.24%
b. 1.50%  d. 8.33%

6.48. One can say that normally distributed data will:
a. Be wider than the specification limits
b. Show a larger standard deviation for larger sample sizes
c. Be spread approximately equal on either side of the average
d. Be truncated beyond the specification limits
The answers to all questions are located at the end of Section XII.
Analysis Techniques
• Overproduction
• Inventory
• Repair/rejects
• Motion
• Processing
• Transport
• Waiting
Overproduction
In the just-in-time environment, producing too early is as bad as producing too late.
Parts need to be available at a certain location at a certain time according to the
customer's schedule.
Having the product too early, too late, or in quantities that are too great, will result
in undesirable consequences, such as:
Inventory
In addition, inventory sitting around in various stages can be affected in these ways:
Repair/Rejects
The repair or rework of defective parts involves a second attempt at producing a
good item. Rejects involving scrapping of the whole part are a definite waste of
resources. Having rejects on a continuous flow line defeats the purpose of
continuous flow. Line operators and maintenance will be used to correct problems,
putting the takt time off course. Repair and rework may require nonconforming
product forms to be filled out by suppliers, generating muda.
Various design changes can be muda also. Design changes are rework or a
development effort causing additional labor.
Ergonomics can eliminate factors in the workplace that may cause injuries and lost
production. Some guidelines for providing sound ergonomic principles in the
workplace include:
Processing
Processing muda consists of additional steps or activities in the manufacturing
process. This can be described as:
Transport
All forms of transportation are muda. This describes the use of forklifts, conveyors,
pallet movers, and trucks. This can be caused by poor plant layouts, poor cell
designs, use of batch processing, long lead times, large storage areas, or
scheduling problems. The muda of transport is always considered for elimination.
Waiting
The muda of waiting occurs when an operator is ready for the next operation, but
must remain idle. The operator is idle due to machine downtime, lack of parts,
unwarranted monitoring activities, or line stoppages. A maintenance operator
waiting at a tool bin for a part is muda. The muda of waiting can be characterized
by:
• Idle operators
• Breakdowns in machinery
• Long changeover times
• Uneven scheduling of work
• Batch flow of materials
• Long and unnecessary meetings
(Imai, 1997)2
• Misused resources
• Underutilized resources
• Counting
• Looking for tools or parts
• Multiple systems
• Multiple hand-offs
• Unnecessary approvals
• Machine breakdowns
• Waste of sending bad product to customers
• Waste of providing bad service to customers
(Metcalf, 1997)5
The authors have found seven, eight, nine, or ten classes of waste shown in various
sources. The classes of wastes are also categorized for both production and
administrative activities.
Variable Relationships*
• Multi-vari analysis
• Linear correlation and regression
Multi-Vari Analysis
In statistical process control, one tracks variables like customer complaints,
pressure, or temperature by taking measurements at certain intervals. The
underlying assumption is that the variables will have approximately one
representative value when measured. Frequently, this is not the case. Temperature
in the cross-section of a furnace will vary and the thickness of a part may also vary
depending on where each measurement is taken.
Often, the variation is within piece, and the source of this variation is different from
piece-to-piece and time-to-time variation. The multi-vari chart is a very useful tool
for analyzing all three types of variation. Multi-vari charts are also used to
investigate the stability or consistency of a process. The chart consists of a series
of vertical lines, or other appropriate schematics, along a time scale. The length of
each line or schematic shape represents the range of values found in each sample
set.
(Thickness, ranging from 0.010 to 0.060, is plotted against sample number for samples 1 through 20.)
Figure 7.1 An Illustrative Multi-Vari Line Chart
To establish a multi-vari chart, a sample set is taken and plotted from the highest to
the lowest value. This variation may be represented by a vertical line or other
rational schematic. Figure 7.2 shows an injection molded plastic part. The
thickness is measured at four points across the width as indicated by arrows.
(Figure 7.2: the part with measurement points 1 through 4; USL = 0.110", LSL = 0.100".)
Three hypothetical cases (Figures 7.3,7.4, and 7.5) are presented to help understand
the interpretation of multi-vari charts for the part illustrated in Figure 7.2.
(Figures 7.3, 7.4, and 7.5 each plot the four thickness readings against the aim of 0.105", with LSL = 0.100" and USL = 0.110".)
Table 7.6 identifies the typical areas of time and locational variation.
Note that positional variation can often be broken into multiple components:
The thickness specification was 0.245" ±0.005". The operation had been producing
scrap. A process capability study indicated that the process spread was 0.0125" (a
Cp of 0.8) versus the requirement of 0.010".
The operation generated a profit of approximately $200,000 per month even after a
scrap loss of $20,000 per month. Refitting the mill with a more modern design,
featuring automatic gauge control and hydraulic roll bending, would cost $800,000
and result in 6 weeks of downtime for installation.
Four positional measurements were made at the corners of each flat sheet in order
to adequately determine within piece variation. Three flat sheets were measured in
consecutive order to determine piece to piece variation. Additionally, samples were
collected each hour to determine temporal variation.
0.250" r-------;------y------,----~~--=--..,--------,
. . - - - - Left
~~ Right
Front Back
Legend
0.240" L - -_ _- - - - '_ _ _--'----"'-----"_ - - L -_ _ _-'--_ _-----'
Time
Actions taken over the next two weeks included re-leveling the bottom back-up roll
(approximately 30% of the total variation), initiating more frequent coolant tank
additions, followed by an automatic coolant make-up modification (50% of the total
variation).
Additional spray nozzles were added to the roll stripper housings to reduce heat
build up in the work rolls during the rolling process (10% to 15% of total variation).
The piece-to-piece variation was ignored. This dimensional variation may have
resulted from roll bearing slop or variation in incoming aluminum sheet temperature
(or a number of other sources). The results of this work are reflected in Table 7.9.
The results from this single study indicated if all of the modifications were perfect,
the resulting measurement spread would be 0.002" total. In reality, the end result
was ±0.002" or 0.004" total, under conditions similar to that of the initial study. The
total cash expenditure was $8,000 for the described modifications. All work was
completed in two weeks. The specification of 0.245" ± 0.005" was easily met.
Most multi-vari analyses do not yield results that are this spectacular, but the
potential for significant improvement is apparent.
Table 7.10 Hours of Study and Test Results for Ten Randomly Selected Students

Student    Study Time (Hours)    Test Results (%)
   1               60                   67
   2               40                   61
   3               50                   73
   4               65                   80
   5               35                   60
   6               40                   55
   7               50                   62
   8               30                   50
   9               45                   61
  10               55                   70
An initial approach to the analysis of the data in Table 7.10 is to plot the points on
a graph known as a scatter diagram. One will observe that y appears to increase as
x increases. One method of obtaining a prediction equation relating y to x is to place
a ruler on the graph and move it until it seems to pass through the majority of the
points, thus providing what is regarded as the "best fit" line.
(Scatter diagram: Test Results (%), from 53 to 81, plotted against Study Time (Hours), X, from 30 to 65.)
y = β0 + β1x
Where β0 is the y intercept when x = 0 and β1 is the slope of the line. Please note in
the previous chart that the x-axis does not go to zero so the y intercept appears too
high. The equation for a straight line in this example is too simplistic. There will
actually be a random error which is the difference between an observed value of y
and the mean value of y for a given value of x. For any given value of x, the
observed value of y varies in a random manner and possesses a normal probability
distribution. The concept is illustrated in the following diagram:
If the predicted value of y obtained from the fitted line is denoted as ŷ, the
prediction equation is:
ŷ = β̂0 + β̂1x
(Scatter diagram: Test Results (%) versus Study Time (Hours), X, with the fitted line.)
Having decided to minimize the deviation of the points in choosing the best fitting
line, one must now define what is meant by "best."
Choose, as the best fitting line, the line that minimizes the sum of squares of
the deviations of the observed values of y from those predicted.
Mathematically, one wishes to minimize the sum of squared errors given by:
SSE = Σ (y_i − ŷ_i)²   (summed over i = 1 to n)
The least squares estimates are:
β̂1 = S_xy / S_x²   and   β̂0 = ȳ − β̂1x̄
where:
Once β̂0 and β̂1 have been computed, their values are substituted into the equation
of a line to obtain the least squares prediction equation, or regression line.
Example 7.1: Obtain the least squares prediction line for the data below:
xᵢ      yᵢ      xᵢ²       xᵢyᵢ      yᵢ²
60 67 3,600 4,020 4,489
40 61 1,600 2,440 3,721
50 73 2,500 3,650 5,329
65 80 4,225 5,200 6,400
35 60 1,225 2,100 3,600
40 55 1,600 2,200 3,025
50 62 2,500 3,100 3,844
30 50 900 1,500 2,500
45 61 2,025 2,745 3,721
55 70 3,025 3,850 4,900
Sum 470 639 23,200 30,805 41,529
Table 7.14 Data Table for the Study Time/Test Score Example
Sxx = Σxᵢ² - (Σxᵢ)²/n = 23,200 - (470)²/10 = 1,110

Sxy = Σxᵢyᵢ - (Σxᵢ)(Σyᵢ)/n = 30,805 - (470)(639)/10 = 772

Syy = Σyᵢ² - (Σyᵢ)²/n = 41,529 - (639)²/10 = 696.9

ȳ = 639/10 = 63.9          x̄ = 470/10 = 47

β̂₁ = Sxy/Sxx = 772/1,110 = 0.6955          β̂₀ = ȳ - β̂₁x̄ = 63.9 - (0.6955)(47) = 31.2115
One may now predict y for a given value of x by substitution into the prediction
equation. For example, if 60 hours of study time are allocated, the predicted test
score would be:
ŷ = 31.2115 + 0.6955x
ŷ = 31.2115 + (0.6955)(60)
ŷ = 72.9415 ≈ 73%
• Always plot the data points and graph the least squares line. If the line does
not provide a reasonable fit to the data points, there may be a calculation
error.
• Projecting a regression line outside of the test area can be risky. The above
equation suggests that a student who did not study would score 31% on the test.
The odds favor 25% if answer "a" is selected for all questions. The equation
also suggests that a student with 100 hours of study should attain 100% on the
examination, which is highly unlikely.
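As an illustration (not part of the original handbook text), the least squares calculation above can be verified with a short Python sketch using only the study time and test result values from Table 7.14.

x = [60, 40, 50, 65, 35, 40, 50, 30, 45, 55]   # study time (hours)
y = [67, 61, 73, 80, 60, 55, 62, 50, 61, 70]   # test results (%)
n = len(x)

# Sums of squares and cross products from the manual formulas
s_xx = sum(xi**2 for xi in x) - sum(x)**2 / n                  # 1,110
s_xy = sum(xi*yi for xi, yi in zip(x, y)) - sum(x)*sum(y) / n  # 772

b1 = s_xy / s_xx                    # slope, about 0.6955
b0 = sum(y)/n - b1 * sum(x)/n       # intercept, about 31.21

print(f"y_hat = {b0:.4f} + {b1:.4f}x")
print("predicted score for 60 hours of study:", round(b0 + b1*60))   # about 73%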
A random error ε enters into the calculations of β̂₀ and β̂₁. The random errors affect
the error of prediction. Consequently, the variability of the random errors (measured
by σ²ε) plays an important role when predicting with the least squares line.
The first step toward placing a boundary on a prediction error requires that one
estimate σ²ε. It is reasonable to use SSE (sum of squares for error) based on (n - 2)
degrees of freedom, one lost for each estimated parameter (β̂₀ and β̂₁).
SSE = Σᵢ₌₁ⁿ (yᵢ - ŷᵢ)² = Syy - β̂₁Sxy = Syy - (Sxy)²/Sxx
Example 7.2: Substitute the previous data into the formula below to obtain the
confidence interval around the slope of the line at 95% confidence.

β̂₁ ± t(α/2, n-2) · s/√Sxx

where s² = SSE/(n - 2) ≈ 160/8 = 20, so s ≈ 4.47. Substituting:

0.6955 ± 2.306 (4.47/√1,110) = 0.6955 ± 0.3094
Intervals constructed by this procedure will enclose the true value of β₁ 95% of the
time. Hence, for each 10 hours of increased study, the expected increase in test
scores is in the interval of 3.86 to 10.05 percentage points.
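A brief sketch of the same interval in Python follows (added here for illustration); it assumes the Sxx, Sxy, Syy, and n values from Example 7.1 and uses scipy only to look up the t critical value.

from scipy import stats

n, s_xx, s_xy, s_yy, b1 = 10, 1110.0, 772.0, 696.9, 0.6955
sse = s_yy - b1 * s_xy                   # sum of squared errors, about 160
s = (sse / (n - 2)) ** 0.5               # about 4.47
t_crit = stats.t.ppf(0.975, df=n - 2)    # 2.306 for 8 degrees of freedom
half_width = t_crit * s / s_xx ** 0.5    # about 0.3094
print(b1 - half_width, b1 + half_width)  # roughly 0.386 to 1.005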
• A positive value for r implies that the line slopes upward to the right. A
negative value indicates that it slopes downward to the right.
• When r = 0, the data points are scattered and give no evidence of a linear
relationship. When r = 0, it implies no linear correlation, not simply "no
correlation." A pronounced curvilinear pattern may still exist.
• When r = 1 or r = -1, all points fall on a straight line and SSE equals zero.
• Any other value of r suggests the degree to which the points tend to be
linearly related.
Example 7.3: Using the study time and test score data reviewed earlier in Example
7.1, determine the correlation coefficient.
Solution:  r = Sxy/√(Sxx·Syy) = 772/√((1,110)(696.9)) = 0.878
Example 7.4: Using the data from Example 7.3, determine r².

r² = (0.878)² = 0.771
One can say that 77% of the variation in test scores can be explained by variation in
study hours.
Correlation Example
[Scatter diagram: MPG (16 to 25) plotted against Car Weight (2,000 to 4,000 lb), showing the 20 MPG average line, the deviations Dᵢ from the average, and the deviations dᵢ from the best fit line.]
SSE = d₁² + d₂² + ... + dₙ²

r² = 1 - SSE/SST = (SST - SSE)/SST

Where SST = total sum of squares (from the experimental average) and SSE = total
sum of squared errors (from the best fit). When SSE is zero, r² equals one, and when
SSE equals SST, r² equals zero.
s² = SSE/[n - (k + 1)]

Source        DF             SS          MS
Regression    k              SSR         MSR = SSR/k
Error         n - (k + 1)    SSE         MSE = SSE/[n - (k + 1)]
Total         n - 1          Total SS
p-Value
The traditional approach to hypothesis testing compares a test statistic to a pre-
determined critical probability value known as alpha (α). In most textbooks, the
statistical tables for critical α values are set at 1% or 5% (t distribution, χ²
distribution, and F distribution). The use of table values avoids the necessity of
performing manual calculations to determine the exact probabilities.
"A p-value is the probability of getting a value of the sample b~st statistic that is
at least as extreme as the one found from the sample data (clssuming that the
hypothesized value is correct)."
In other words, a small p-value is an indication that the null hypothesis is false.
Various statistical software tools such as Microsoft Excel (Microsoft Corporation),
MINITAB (Minitab, Inc.), S-PLUS (MathSoft, Inc.), and SPSS (SPSS, Inc.) provide the
exact "probability due to chance" ofthe hypothesis or claim. Table! 7.17 summarizes
the logic behind the difference of either 0 or p-values being due to chance.
o or p-value Remarks
p>5% Significant difference is not provjm
1% < p~5% A statistically significant differenlce
p~ 1% A highly significant statistical differlence
An example of regression analysis for study time and test results was shown earlier
using manual calculations. This same data can be analyzed using MINITAB, Excel,
or other statistical analysis programs.
Student 1 2 3 4 5 6 7 8 9 10
Study Time (hrs) 60 40 50 65 35 40 50 30 45 55
Test Results (%) 67 61 73 80 60 55 62 50 61 70
MINITAB Results
A MINITAB output is displayed below. The regression equation is provided. The
coefficients of the line, coefficient of determination, and an ANOVA have also been
calculated. In addition, the probability of significance for the study time and the
ANOVA have been determined.
The study time coefficient is the slope, β̂₁, with a t statistic of 5.18. The critical value
is not provided, but the p-value is. The p-value of 0.001 indicates that the study time
coefficient is a highly statistically significant factor; it is a very extreme probability.
Therefore, the regression equation is valid and the student is reminded to study long
and hard.
The R-Sq value of 77.0% is the coefficient of determination (0.77). The square root
of the coefficient of determination is the correlation coefficient, which would be
0.8775. (Note: some rounding differences occur between the manual and
computerized calculations.) The t test is used to determine if the correlation
coefficient is significant. In the MINITAB display, the p-value is also 0.001 for this
determination.
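As a supplementary illustration (not the MINITAB output itself), the same regression summary can be obtained in Python; scipy.stats.linregress returns the slope, intercept, correlation coefficient, and the p-value for the slope term.

from scipy import stats

study_time = [60, 40, 50, 65, 35, 40, 50, 30, 45, 55]
test_result = [67, 61, 73, 80, 60, 55, 62, 50, 61, 70]

result = stats.linregress(study_time, test_result)
print("slope:", round(result.slope, 4))               # about 0.6955
print("intercept:", round(result.intercept, 2))       # about 31.21
print("R-sq:", round(result.rvalue**2, 3))            # about 0.770
print("p-value for slope:", round(result.pvalue, 3))  # about 0.001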
Hypothesis Testing
Basic Concepts
Before discussing these specific procedures, it is worthwhile to review a number of
commonly used hypothesis test terms.
Null Hypothesis
This is the hypothesis to be tested. The null hypothesis directly stems from the
problem statement and is denoted as H₀. Examples:
• If a strong claim is made that the average of process A is greater than the
average of process B, the null hypothesis (one-tail) would state that process
A ≤ process B. This is written as H₀: A ≤ B.
Test Statistic
In order to test a null hypothesis, a test calculation must be made from sample
information. This calculated value is called a test statistic and is compared to an
appropriate critical value. A decision can then be made to reject or fail to reject the
null hypothesis.
• Type I error: This error occurs when the null hypothesis is rejected when it
is, in fact, true. The probability of making a type I error is called α (alpha) and
is commonly referred to as the producer's risk (in sampling). Examples are:
incoming products are good but called bad; a process change is thought to
be different when, in fact, there is no difference.
• Type II error: This error occurs when the null hypothesis is not rejected when
it should be rejected. This error is called the consumer's risk (in sampling)
and is denoted by the symbol β (beta). Examples are: incoming products are
bad but called good; an adverse process change has occurred but is thought
to be no different.
The degree of risk (α) is normally chosen by the concerned parties (α is normally
taken as 5%) in arriving at the critical value of the test statistic. The assumption is
that a small value for α is desirable. Unfortunately, a small α risk increases the β
risk. For a fixed sample size, α and β are inversely related. Increasing the sample
size can reduce both the α and β risks.
[Decision table: consequences of rejecting or failing to reject the null hypothesis when it is true or when it is false.]

One-Tail Test
[Figure 7.21 Determine If the True Mean is Within the α Critical Region (entire α = 5% in one tail, μ₀ = 35 hours)]
[Figure 7.22 Determine If the True Mean is Within the α Critical Region (entire α = 5% in one tail, μ₀ = 20%)]
Two-Tail Test
If a null hypothesis is established to test whether a population shift has occurred,
in either direction, then a two-tail test is required. The allowable a error is generally
divided into two equal parts. Examples:
-1.96 o +1.96
~o
On occasion, an issue of practical versus statistical significance may arise. That is,
some hypothesis or claim is found to be statistically significant, but may not be
worth the effort or expense to implement. This could occur if a large sample was
tested against a certain value, such as a diet that results in a net loss of 0.5 pounds for
10,000 people. The result is statistically significant, but a diet losing 0.5 pounds per
person would not have any practical significance. (Triola, 1994)8
Huck (1996)1 indicates that issues of practical significance will often occur if the
sample size is not adequate. A power analysis may be needed to aid in the decision-
making process.
The sample size (n) needed for hypothesis testing depends on:
• The desired type I (α) and type II (β) risks
• The minimum difference to be detected between the population means
• The variation in the characteristic being measured (s or σ)
Obtain 96 pilot hourly yield values and determine the hourly average. If this mean
deviates by more than 4 tons from the previous hourly average, a significant change
at the 95% confidence level has occurred. If the sample mean deviates by less than
4 tons/hr., the observable mean shift can be explained by chance cause.
n = Z²(p)(1 - p)/(Δp)²

where Z is the normal distribution value for the desired confidence level, p is the
proportion expected, and Δp is the smallest proportion shift of interest.
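The sample size formula can be exercised with a short sketch; the values below (95% confidence, an expected proportion of 0.90, and a smallest shift of interest of 0.02) are illustrative only and are not taken from the pilot yield example above.

import math

z = 1.96     # two-sided 95% confidence
p = 0.90     # expected proportion
dp = 0.02    # smallest proportion shift of interest

n = z**2 * p * (1 - p) / dp**2
print(n, "->", math.ceil(n))   # about 864.4, round up to 865 samples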
Hypotheses Tests
[Figures 7.24 and 7.25: hypothesis test selection charts. The test statistics shown include:]

Normal distribution (σ known):
  Single mean:  Z = (X̄ - μ)/(σ/√n)
  Two means:  Z = (X̄₁ - X̄₂)/σ(X̄₁ - X̄₂)

Chi-square:
  Variance versus a known variance:  χ² = (n - 1)s²/σ²

Small samples (t distribution, σ unknown):
  Single mean:  t = (X̄ - μ)/(s/√n)
  Two means:  t = (X̄₁ - X̄₂)/s(X̄₁ - X̄₂), with the Welch-Satterthwaite approximation for the degrees of freedom:
    df = (A + B)²/[A²/(n₁ - 1) + B²/(n₂ - 1)], where A = s₁²/n₁ and B = s₂²/n₂

Binomial (proportions):
  p versus a target:  Z = (p - p₀)/σp, where σp = √(p(1 - p)/n)
  p₁ versus p₂:  Z = (p₁ - p₂)/σ(p₁ - p₂), where σ(p₁ - p₂) = √(p̄(1 - p̄)(1/n₁ + 1/n₂)) and p̄ = (n₁p₁ + n₂p₂)/(n₁ + n₂)

Poisson (c = number of defects, k = number of samples):
  c versus a target:  σc = √c̄ and Z = (c - c̄)/√c̄
The charts in Figures 7.24 and 7.25 were provided by DuWayne Carlson.
Example 7.6: Given the following values for four customer call wait times: 28.7,
27.9, 29.2, and 26.5 seconds, calculate the point estimation of the population mean.
μ̂ = X̄ = Σxᵢ/n = (28.7 + 27.9 + 29.2 + 26.5)/4 = 28.08 seconds
s = √[Σ(xᵢ - X̄)²/(n - 1)]          σ = √[Σ(xᵢ - μ)²/N]
The confidence interval (CI) or interval estimate of the population mean, μ, when the
population standard deviation, σ, is known, is calculated using the sample mean, X̄,
the population standard deviation, σ, the sample size, n, and the normal distribution.
From sample data, one can calculate the interval within which the population mean,
μ, is predicted to fall. Confidence intervals are always estimated for population
parameters. A confidence interval is a two-tail event and requires critical values
based on an α/2 risk in each tail. The central limit theorem term, σ/√n, is
necessary because the confidence interval is for a population mean and not
individual values.
Example 7.7: The average of 100 samples is 18 with a population standard deviation
of 6. Calculate the 95% confidence interval for the population mean.
X̄ ± Z(α/2) σ/√n = 18 ± 1.96 (6/√100)

16.82 ≤ μ ≤ 19.18
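The same interval can be checked with a short Python sketch (added for illustration), using the normal distribution for the critical value.

from scipy import stats

x_bar, sigma, n, conf = 18.0, 6.0, 100, 0.95
z = stats.norm.ppf(1 - (1 - conf) / 2)          # about 1.96
half_width = z * sigma / n ** 0.5               # about 1.18
print(x_bar - half_width, x_bar + half_width)   # roughly 16.82 to 19.18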
The confidence interval of the population mean, μ, when the population standard
deviation, σ, is unknown, is calculated using the sample mean, X̄, the sample
standard deviation, s, the sample size, n, and the t distribution. If a relatively small
sample is used, e.g. n < 30, then the t distribution must be used. When the sample
size is large, e.g. n > 30, the Z distribution may be used in place of the t distribution.
pₛ - Z(α/2)√[pₛ(1 - pₛ)/n]  ≤  p  ≤  pₛ + Z(α/2)√[pₛ(1 - pₛ)/n]
Note that other confidence interval formulas exist. These include percent
nonconforming, Poisson distribution data, and very small sample size data.
Example 7.9: If 16 defectives were found in a sample size of 200 units, calculate the
90% confidence interval for the proportion.
pₛ = x/n = 16/200 = 0.08

0.08 - 1.645√[(0.08)(0.92)/200]  ≤  p  ≤  0.08 + 1.645√[(0.08)(0.92)/200]

0.048 ≤ p ≤ 0.112
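A short Python sketch of the same proportion interval follows (added for illustration).

from scipy import stats

x, n, conf = 16, 200, 0.90
p_s = x / n                                     # 0.08
z = stats.norm.ppf(1 - (1 - conf) / 2)          # about 1.645
half_width = z * (p_s * (1 - p_s) / n) ** 0.5   # about 0.032
print(p_s - half_width, p_s + half_width)       # roughly 0.048 to 0.112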
The confidence intervals for the mean were symmetrical about the average. This is
not true for the variance, since it is based on the chi-square distribution.
Example 7.10: The sample variance for a set of 25 samples was found to be 36.
Calculate the 90% confidence interval for the population variance.
(n - 1)s²/χ²(α/2, n-1)  ≤  σ²  ≤  (n - 1)s²/χ²(1-α/2, n-1)

(24)(36)/36.42  ≤  σ²  ≤  (24)(36)/13.85

23.7 ≤ σ² ≤ 62.4
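The chi-square based interval can also be verified with a brief Python sketch (added for illustration).

from scipy import stats

n, s2, alpha = 25, 36.0, 0.10
chi_upper = stats.chi2.ppf(1 - alpha / 2, df=n - 1)   # about 36.42
chi_lower = stats.chi2.ppf(alpha / 2, df=n - 1)       # about 13.85
print((n - 1) * s2 / chi_upper, (n - 1) * s2 / chi_lower)   # roughly 23.7 to 62.4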
When the population follows a normal distribution and the population standard
deviation, σx, is known, then the hypothesis tests for comparing a population mean,
μ, with a fixed value, μ₀, are given by the following:

Z = (X̄ - μ₀)/σX̄ = (X̄ - μ₀)/(σx/√n)

...where the sample average is X̄, the number of samples is n, and the standard
deviation of means is σX̄. If n > 30, the sample standard deviation, s, is often used
as an estimate of the population standard deviation, σx. The test statistic, Z, is
compared with a critical value, Zα or Zα/2, which is based on a significance level, α,
for a one-tailed test or α/2 for a two-tailed test. If the H₁ sign is ≠, it is a two-tailed
test. If the H₁ sign is >, it is a right, one-tailed test, and if the H₁ sign is <, it is a left,
one-tailed test. (Triola, 1994)8
Example 7.11: Suppose that the average time that a person keeps a bank account
is 5.00 years, with a standard deviation of 0.12 years. A new type of account was
offered and a sample found the following durations for how long people kept this
type of account: 5.10, 4.90, 4.92, 4.87, 5.09, 4.89, 4.95 and 4.88 years.
Can one state, with 95% confidence, that the new accounts are being kept for a
shorter period than the original type of accounts? This question involves an
inference about a population mean with a known sigma. The Z test applies. The null
and alternative hypotheses are:

H₀: μ ≥ 5.00 years          H₁: μ < 5.00 years

The sample average is X̄ = 4.95 years with n = 8 and the population standard
deviation is σx = 0.12 years. The test statistic is:

Z = (4.95 - 5.00)/(0.12/√8) = -1.18

Since this is a left, one-tailed test at 95% confidence, the critical value is -1.645.
The test statistic does not fall in the reject region, so the null hypothesis cannot be
rejected. If the test statistic had been, for example -1.85, one would have rejected
the null hypothesis and concluded, with 95% confidence, that the new accounts were
being kept for a shorter period than the original accounts.
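A Python sketch of this Z test follows (added for illustration), using the account durations listed above.

from scipy import stats

durations = [5.10, 4.90, 4.92, 4.87, 5.09, 4.89, 4.95, 4.88]
mu0, sigma = 5.00, 0.12

n = len(durations)
x_bar = sum(durations) / n                  # 4.95
z = (x_bar - mu0) / (sigma / n ** 0.5)      # about -1.18
z_crit = stats.norm.ppf(0.05)               # about -1.645 for a left one-tail test
print(round(z, 2), "reject H0" if z < z_crit else "fail to reject H0")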
Student's t Test

t = (X̄ - μ₀)/(s/√n)

Where:  X̄ = the sample mean
        μ₀ = the target value or population mean
        s = the sample standard deviation
        n = the number of test samples

The null and alternative hypotheses are the same as were given for the Z test.
The test statistic, t, is compared with a critical value, tα or tα/2, which is based on a
significance level, α, for a one-tailed test or α/2 for a two-tailed test, and the number
of degrees of freedom, d.f. The degrees of freedom is determined by the number of
samples, n, and is simply:

d.f. = n - 1
Example 7.12: The average daily revenue of a hospital has been $880,000 (μ₀ = 880).
A new marketing program has been evaluated for 25 days (n = 25) with a yield of
$900,000 (X̄) and sample standard deviation, s = $20,000. Can one say, with 95%
confidence, that the revenues have changed?

H₀: μ = 880          H₁: μ ≠ 880

The test statistic calculation is:

t = (900 - 880)/(20/√25) = 20/4 = 5

Since the H₁ sign is ≠, it is a two-tailed test and, with 95% confidence, the level of
significance is α = 1 - 0.95 = 0.05. Since it is a two-tail test, α/2 is used to determine
the critical values. The degrees of freedom, d.f. = n - 1 = 24. Looking up the critical
values in a t distribution table yields t(0.025) = -2.064 and t(0.975) = 2.064. Since the test
statistic, 5, falls in the right-hand reject (or critical) region, the null hypothesis is
rejected. One concludes, with 95% confidence, that the revenues have changed.
Note that a t distribution table is shown in the Appendix.
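A Python sketch of this one-sample t test follows (added for illustration); values are in thousands of dollars, as in Example 7.12.

from scipy import stats

mu0, x_bar, s, n = 880.0, 900.0, 20.0, 25
t_stat = (x_bar - mu0) / (s / n ** 0.5)          # 5.0
t_crit = stats.t.ppf(0.975, df=n - 1)            # about 2.064
print(t_stat, "reject H0" if abs(t_stat) > t_crit else "fail to reject H0")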
This technique was developed by W.S. Gosset and published in 1908 under the pen
name "Student." Gosset referred to the quantity under study as t. The test has
since been known as the student's t test.
Example 7.13: A new spark plug design is tested for wear. A sample of six plugs
yielded: 0.0058, 0.0049, 0.0052, 0.0044, 0.0050 and 0.0047 inches of wear. The
current design has historically produced an average wear of 0.0055". With 95%
confidence, is the new design better?
Example 7.14: A very expensive experiment has been conducted to evaluate the
manufacture of synthetic diamonds by a new technique. Five diamonds have been
generated with recorded weights of 0.46, 0.61, 0.52, 0.57 and 0.54 carats. An average
diamond weight greater than 0.50 carats must be realized for the venture to be
profitable. Assuming 95% confidence, what is the recommendation?
Conclusions

STEPS                                EXAMPLE 7.13                       EXAMPLE 7.14
Step 1: Establish the null           H₀: μ₁ ≥ 0.0055"                   H₀: μ₂ ≤ 0.50
hypothesis [there is no              H₁: μ₁ < 0.0055"                   H₁: μ₂ > 0.50
difference between the target
value and the sample average]

Step 4: Can one reject the null      Since the calculated t is to       Since the calculated t (1.597)
hypothesis?                          the left of -2.015, the null       is not to the right of the
                                     hypothesis is rejected. The        critical t (2.132), the null
                                     wear is less for the new plug      hypothesis can't be rejected.
                                     design.                            Insufficient evidence exists
                                                                        for the new technique to be
                                                                        profitable.
The paired t test is used to test the difference between two sample means. Data is
taken in pairs with the difference calculated for each pair.
Example 7.15: Calculate a paired t test for two operators that measure the same set
of samples.
The critical value for a two-sided test with α = 0.05 and d.f. = 4 is t(0.025, 4) = 2.776.
Since the calculated test statistic is in the critical area, the null hypothesis H₀ is
rejected, and we conclude there is a difference in the population means.
The paired method (dependent samples), compared to treating the data as two
independent samples, will often show a more significant difference because the
standard deviation of the d's (s_d) includes no sample-to-sample variation. This
comparatively more frequent significance occurs despite the fact that "n - 1"
represents fewer degrees of freedom than "n₁ + n₂ - 2." In general, the paired t test
is a more sensitive test than the comparison of two independent samples.
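A hedged Python sketch of a paired t test in the spirit of Example 7.15 follows; the two operators' readings below are hypothetical, since the handbook's data table is not reproduced here.

from scipy import stats

operator_a = [10.1, 9.8, 10.3, 10.0, 9.9]
operator_b = [10.4, 10.1, 10.5, 10.3, 10.2]

# ttest_rel works on the pairwise differences, with d.f. = n - 1 = 4
t_stat, p_value = stats.ttest_rel(operator_a, operator_b)
print(round(t_stat, 3), round(p_value, 4))
# Compare |t| with t(0.025, 4) = 2.776, or the p-value with alpha = 0.05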
If the conditions np ≥ 5 and n(1 - p) ≥ 5 are met, then the binomial distribution of
sample proportions can be approximated by a normal distribution. The hypothesis
tests for comparing a sample proportion, p, with a fixed value, p₀, are given by the
following:

H₀: p = p₀          H₁: p < p₀

Z = (x - np₀)/√[np₀(1 - p₀)]

Where the number of successes is x and the number of samples is n. The test
statistic, Z, is compared with a critical value, Zα or Zα/2, which is based on a
significance level, α, for a one-tailed test or α/2 for a two-tailed test. If the H₁ sign
is ≠, it is a two-tailed test. If the H₁ sign is >, it is a right, one-tailed test, and if the H₁
sign is <, it is a left, one-tailed test. (Triola, 1994)8
Case II. Comparing observed and expected frequencies of test outcomes when
there is no defined population variance (attribute data).
If the H₁ sign is ≠, it is a two-tailed test. If the H₁ sign is >, it is a right, one-tailed test,
and if the H₁ sign is <, it is a left, one-tailed test. (Triola, 1994)8

[Figure 7.27 Chi-square Distribution Tail Areas]
Example 7.16: The R&D department of a steel plant has tried to develop a new steel
alloy with less tensile variability. The R&D department claims that the new material
will show a four sigma tensile variation less than or equal to 60 psi 95% of the time.
An eight sample test yielded a standard deviation of 8 psi. Can a reduction in tensile
strength variation be validated with 95% confidence?
Solution: The best range of variation expected is 60 psi. This tralnslates to a sigma
of 15 psi (an approximate 4 sigma spread covering 95.44% of occurrences).
2
The alternative hypothesis is: H:a2<a
1 x 0
or
From the chi-square table: Because alternative hypothesis is H1: .,.2x < a 20 , this is a left
tail test with n - 1 = 7. The critical value for 95% confidence is 2.17. That is, the
calculated value will be less than 2.17 5% of the time. Please nc)te that if we were
looking for more variability in the process a right tail rejection region would have
been selected and the critical value would be 14.07.
Since 1.99 is less than 2.17, the null hypothesis must be rejected. The decreased
variation in the new steel alloy tensile strength supports the R&D claim.
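A Python sketch of this left-tailed chi-square test for reduced variation follows (added for illustration).

from scipy import stats

n, s, sigma0 = 8, 8.0, 15.0
chi_sq = (n - 1) * s**2 / sigma0**2          # about 1.99
chi_crit = stats.chi2.ppf(0.05, df=n - 1)    # about 2.17
print(round(chi_sq, 2), "reject H0" if chi_sq < chi_crit else "fail to reject H0")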
The alternative hypothesis is that not all n populations are equal.
2. Take one subgroup from each of the various processes and determine the
observed frequencies (O) for the various conditions being compared.
3. Calculate for each condition the expected frequencies (E) under the
assumption that no differences exist among the processes.
4. Compare the observed (O) and expected (E) frequencies to obtain "reality."
The chi-square test statistic, which is the most "famous" chi-square statistic, is totaled
over all of the process conditions:
5. A critical value is determined using the chi-square table with the entire level
of significance, α, in the one-tail, right side, of the distribution. The degrees
of freedom is determined from the calculation (r - 1)(c - 1), which is (the number
of rows - 1) times (the number of columns - 1).
6. A comparison between the test statistic and the critical value confirms whether a
significant difference exists, at a selected confidence level. (NIST, 2007)6
Example 7.17: An airport authority wanted to evaluate the ability of three X-ray
inspectors to detect key items. A test was devised whereby prohibited knives were
placed in ninety pieces of luggage. Each inspector was exposed to exactly thirty
pieces of the forbidden luggage, in a random fashion. At a 95% confidence, is there
any significant difference in the abilities of the inspectors?

Observed Values
                        Inspectors (Treatment)
                        1       2       3       Totals
Knives detected         27      25      22      74
Knives undetected       3       5       8       16
Sample total            30      30      30      90

Null hypothesis: There is no difference among the three inspectors, H₀: p₁ = p₂ = p₃
Alternative hypothesis: At least one of the proportions is different, H₁: p₁ ≠ p₂ ≠ p₃

Expected Values
                        Inspectors (Treatment)
                        1       2       3       Totals
Knives detected         24.67   24.67   24.67   74
Knives undetected       5.33    5.33    5.33    16
Sample total            30      30      30      90
χ² = Σ (Oⱼ - Eⱼ)²/Eⱼ

χ² = (27-24.67)²/24.67 + (25-24.67)²/24.67 + (22-24.67)²/24.67 + (3-5.33)²/5.33 + (5-5.33)²/5.33 + (8-5.33)²/5.33

χ² = (2.33)²/24.67 + (0.33)²/24.67 + (-2.67)²/24.67 + (-2.33)²/5.33 + (-0.33)²/5.33 + (2.67)²/5.33

χ² = 2.89
Since the calculated value of χ² is less than the previously calculated critical value
of 5.99 and this is a right tail test, the null hypothesis cannot be rejected. There is
insufficient evidence to say, with 95% confidence, that the abilities of the inspectors
differ.
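A Python sketch of the same contingency-table test follows (added for illustration); rows are detected/undetected counts and columns are the three inspectors.

from scipy import stats

observed = [[27, 25, 22],
            [3, 5, 8]]
chi_sq, p_value, dof, expected = stats.chi2_contingency(observed, correction=False)
print(round(chi_sq, 2), round(p_value, 3), dof)   # about 2.89, p about 0.24, dof = 2
print(expected)   # expected counts: 24.67 detected and 5.33 undetected per inspector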
F Test
The need for a statistical method of comparing two population variances is apparent.
We may wish to compare the precision of one measuring device to another, or the
relative stability of two manufacturing processes. The F test, named in honor of Sir
Ronald Fisher, is usually employed.
If independent random samples are drawn from two normal populations with equal
variances, the ratio s₁²/s₂² creates a sampling distribution known as the
F distribution. The hypothesis tests for comparing a population variance, σ₁², with
another population variance, σ₂², are given by the following:
The F statistic is the ratio of two sample variances (two chi-square distributions) and
is given by the formula:

F = s₁²/s₂²

Where s₁² and s₂² are sample variances.
The F cumulative distribution function is given in the Appendix. Both the lower and
upper tails are given in this table, but most texts only give one tail and require the
other tail to be computed using the following expression:

F(1-α, ν₁, ν₂) = 1/F(α, ν₂, ν₁)
F Test (Continued)

[Figure 7.30 An F Distribution for Two Normal Samples, Both with DF = 10]
There are numerous F table formats based on α values of 0.10, 0.05, 0.025, 0.01, etc.
Listed below is a partial F table for α = 0.05. A more complete table is in the
Appendix.
ν₂\ν₁    1      2      3      4      5      6      7      8      9      10
1        161.4  199.5  215.7  224.6  230.2  234.0  236.8  238.9  240.5  241.9
2 18.51 19.00 19.16 19.25 19.30 19.33 19.35 19.37 19.38 19.40
3 10.13 9.55 9.28 9.12 9.01 8.94 8.89 8.85 8.81 8.79
4 7.71 6.94 6.59 6.39 6.26 6.16 6.09 6.04 6.00 5.96
5 6.61 5.79 5.41 5.19 5.05 4.95 4.88 4.82 4.77 4.74
6 5.99 5.14 4.76 4.53 4.39 4.28 4.21 4.15 4.10 4.06
7 5.59 4.74 4.35 4.12 3.97 3.87 3.79 3.73 3.68 3.64
8 5.32 4.46 4.07 3.84 3.69 3.58 3.50 3.44 3.39 3.35
9 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18 3.14
10 4.96 4.10 3.71 3.48 3.33 3.22 3.14 3.07 3.02 2.98
Note: The above critical values for F may be used for a one-tailed test (α = 0.05, 95%
confidence) or a two-tailed test (α/2 = 0.05, 90% confidence).
F Test (Continued)
Conclusion: Since the calculated F value is in the critical region, the null hypothesis
is rejected. There is sufficient evidence to indicate a reduced variance and more
consistency of strength after aging for one year.
From Table 7.31, the right tail critical value of F = 3.68. The left tail critical value of
F is found using the reciprocal relationship given earlier (page VII-48), where F = 1/(3.29) ≈ 0.304,
with degrees of freedom of 7 and 9 when finding the table value. Since the
calculated F value of 2.22 is between the critical values of 0.304 and 3.68, we fail to
reject the null hypothesis and cannot conclude that the population variances are
different at a 90% confidence level.
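A hedged Python sketch of a two-tailed F test for two sample variances follows; the variances and sample sizes below are hypothetical, not the handbook's aging-strength data.

from scipy import stats

s1_sq, n1 = 4.5, 8     # sample variance and size for population 1
s2_sq, n2 = 2.0, 10    # sample variance and size for population 2

f_stat = s1_sq / s2_sq
f_upper = stats.f.ppf(0.95, dfn=n1 - 1, dfd=n2 - 1)        # right tail critical value
f_lower = 1.0 / stats.f.ppf(0.95, dfn=n2 - 1, dfd=n1 - 1)  # left tail via the reciprocal rule
print(round(f_stat, 2), round(f_lower, 3), round(f_upper, 2))
print("reject H0" if (f_stat > f_upper or f_stat < f_lower) else "fail to reject H0")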
Test                     Statistic                                      DF             Application
t test                   t = (X̄ - μ₀)/(s/√n)                            n - 1          Single sample mean; standard deviation of the population unknown.
Equal variance t test    t = (X̄₁ - X̄₂)/[sp√(1/n₁ + 1/n₂)], where        n₁ + n₂ - 2    Two sample means; variances are unknown, but considered equal.
                         sp = √[((n₁ - 1)s₁² + (n₂ - 1)s₂²)/(n₁ + n₂ - 2)]
χ² test                  χ² = (n - 1)s²/σx²                             n - 1          Tests a sample variance against a known variance.

The student may wish to refer to (Juran, 1999)3, (NIST, 2007)6 or (Triola, 1994)8.
Analysis of Variance
Earlier, techniques were presented for estimating and testing the values of a single
population mean or the difference between two means (Z test and student's t test).
In many investigations (such as experimental trials), it is necessary to compare three
or more population means simultaneously. There are underlying assumptions in this
analysis of variance of means: the variance is the same for all factor treatments or
levels, the individual measurements within each treatment are normally distributed,
and the error term is considered a normally and independently distributed random
effect.
In the one-way ANOVA, the total variation in the data has two parts: the variation
among treatment means and the variation within treatments.
Let the ANOVA grand average = ΣX/N = GM. The total sum of squares is then:

Total SS = Σ(Xᵢ - GM)²

SST = Σnₜ(X̄ₜ - GM)² = sum of the squared deviations of each treatment average
from the grand average, or grand mean.

SSE = Σ(Xₜᵢ - X̄ₜ)² = sum of the squared deviations of each individual observation
within a treatment from the treatment average.

SST may also be calculated as Σ(Tₜ²/nₜ) - CM, where Tₜ is the total for each treatment
and CM = (ΣX)²/N is the correction for the mean.

H₀: μ₁ = μ₂ = ... = μₜ          H₁: At least one mean is different

ΣX = 30     N = 15     Total DF = N - 1 = 15 - 1 = 14

also SST = Σnₜ(X̄ₜ - GM)² = 5(6.2 - 2)² + 5(0.6 - 2)² + 5(-0.8 - 2)²
= 88.2 + 9.8 + 39.2 = 137.2
Source (of variation)     SS     DF     Mean Square     F     F(α, ν₁, ν₂)
Since the computed value of F (33.2) exceeds the critical value of F, the null
hypothesis is rejected. Thus, there is evidence that a real difference exists among
the weights of hamburgers from three different companies.
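A hedged Python sketch of a one-way ANOVA along the lines of the hamburger example follows; the three samples of weight deviations below are hypothetical, since the handbook's raw data is not reproduced here.

from scipy import stats

company_a = [6.0, 7.1, 5.8, 6.5, 5.6]
company_b = [0.4, 1.0, 0.2, 0.9, 0.5]
company_c = [-1.1, -0.5, -0.9, -0.6, -0.9]

f_stat, p_value = stats.f_oneway(company_a, company_b, company_c)
f_crit = stats.f.ppf(0.95, dfn=2, dfd=12)   # treatment DF = 3 - 1, error DF = 15 - 3
print(round(f_stat, 1), round(p_value, 5), round(f_crit, 2))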
Two-Way ANOVA
It will be seen that the two-way analysis procedure is an extension of the patterns
described in the one-way analysis. Recall that a one-way ANOVA has two
components of variance: Treatments and experimental error (may be referred to as
columns and error or rows and error). In the two-way ANOVA there are three
components of variance: Factor A treatments, Factor B treatments, and
experimental error (may be referred to as columns, rows and error).
Example 7.21: In a hypothetical example, three different study materials were taught
by two different instructors to three different students with the following results.
The responses are examination results shown as a percent.
[Data table of examination results by study material and instructor]
The null hypotheses: Instructor and study material means do not differ.

Total SS = ΣX² - CM = 81,844 - 78,672.22 = 3,171.78

ColSq = column total squared and divided by the number of observations in the column.
RowSq = row total squared and divided by the number of observations in the row.
Source                  SS          DF    MS          F      Critical F
Columns (Materials)     872.44      2     436.22      20.8   F(0.05, 2, 14) = 3.74
Rows (Instructors)      2,005.56    1     2,005.56    95.6   F(0.05, 1, 14) = 4.60
Error                   293.78      14    20.98
Total                   3,171.78    17

SIGe = √20.98 = 4.58          SIGtotal = √[Total SS/(N - 1)] = 13.66
Column F = MSCol/MSE = 436.22/20.98 = 20.79. This is larger than the critical F = 3.74.
Therefore, the null hypothesis of equal material means is rejected.
Row F = MSRow/MSE = 2,005.56/20.98 = 95.59. This is larger than the critical F = 4.60.
Therefore, the null hypothesis of equal instructor means is rejected.
The difference between the total sigma (13.66) and the error sigma (4.58) is due to the
significant difference in instructor means and study material means. If the instructor
differences and study material differences were only due to chance cause, the sigma
variation in the data would be equal to SIGe, the square root of the mean square
error.
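A hedged Python sketch of a two-way ANOVA with the same layout as Example 7.21 (3 study materials x 2 instructors x 3 students) follows; the scores below are hypothetical, since the handbook's data table is not reproduced here.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "material":   ["A"]*6 + ["B"]*6 + ["C"]*6,
    "instructor": (["1"]*3 + ["2"]*3) * 3,
    "score":      [62, 65, 60, 78, 80, 75,
                   70, 72, 68, 85, 88, 83,
                   55, 58, 52, 72, 74, 70],
})

# Fit a two-factor model and print the SS, DF, F, and p-value for each factor
model = ols("score ~ C(material) + C(instructor)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))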
Situation             Immediate Action    Intermediate Action              Root Cause Action
The dam leaks         Plug it             Patch the dam                    Find out what caused the leak so it does
                                                                           not happen again. Then rebuild the dam.
Parts are oversized   100% inspection     Put an oversize kickout          Analyze the process and take action to
                                          device in line                   eliminate the production of oversized parts.
To help locate the system's true problem, a variety of problem solving tools are
available. Listed below are commonly used techniques.

Ask why, and then ask why again....          Data collection and analysis
Brainstorming                                Pareto analysis
Process flow analysis                        Regression analysis
Plan-do-check-act                            Checksheets
Systematic problem solving                   Data matrix analysis
Nominal group technique                      Process capability analysis
Operator observation                         Partitioning of variation
Fishbone diagrams                            Subgrouping of data
Consensus exercises                          Simple trials
Six thinking hats                            Statistical design of experiments
Use of teams                                 Analytical tests (χ², F, Z, t)
FMEA/fault tree analysis                     Control charting
• The root cause analysis has identified the full extent of the problem
• The corrective action is satisfactory to eliminate or prevent recurrence
• The corrective action is realistic and maintainable
5 Whys
The 5 whys approach to root cause analysis is described as asking the question
"Why?" five times. This technique is generally attributed to a Japanese method of
determining the root cause. The following is an example of 5 whys.
1. Why? We ran out of parts because the die stamping press broke down.
2. Why? The press had not received scheduled maintenance for a period of
three months.
3. Why? The maintenance department staff had been reduced to six people from
eight.
4. Why? The maintenance department was over budget due to high overtime
costs, and the General Manager eliminated all overtime and required a
25% reduction in personnel for all overhead support departments.
5. Why? The company was not reaching profit goals so the CEO had issued an
order to avoid all unnecessary spending. So the root cause was the
CEO was worried about getting fired for poor profit performance.
There is nothing magical about 5 whys. In fact, the root cause may be found after
3 or 4 whys. In other cases, one may need to ask why six or more times. This
method asks why until the root cause is found.
5 Ws and H (or 2 H)
In some references, this same basic method is simply referred to as the 5 Ws (Who,
What, Where, When, and Why). Also note that the order of the Ws varies, depending
upon the referenced source. The technique looks at a problem or symptom from
several viewpoints in order to include as much information as is needed to assist in
determining the root cause.
References
1. Huck, S. & Cormier, W. (1996). Reading Statistics and Research, 2nd ed. New York: HarperCollins.
3. Juran, J.M. (1999). Juran's Quality Handbook, 5th ed. New York: McGraw-Hill.
7. Sharma, A., & Moody, P. (2001). The Perfect Engine: How to Win in the New Demand Economy by Building to Order with Fewer Resources. New York: The Free Press.
8. Triola, M.F., & Franklin, L.A. (1994). Business Statistics. Reading: Addison-Wesley.
9. Wortman, B.L., et al. (2001). CSSBB Primer. Terre Haute, IN: Quality Council of Indiana.
7.1. Which of the following tasks is value added?
a. An appropriate product development process
b. Reworking parts to meet customer requirements
c. MRB meetings
d. Efficient material handling of customer returns

7.4. How is the level of significance defined?
a. The probability of rejecting a null hypothesis when it is true
b. The probability of not rejecting a null hypothesis when it is true
c. The probability of accepting a null hypothesis when it is true
d. The probability of not accepting a null hypothesis when it is true

7.5. A randomly selected sample of bicycle helmets was tested for impact resistance. Given the data results below, what is the 95% confidence interval for the mean?
Test results:
Sample size: 100 helmets
Average impact resistance: 276 g
Standard deviation of the measurements: 15 g

7.8. From a statistical standpoint, the sample size for hypothesis testing depends on:
I. The required type I and type II risks
II. The minimum population shift of interest
III. The variations in the population of interest
IV. The cost of taking a sample
a. I and III only
b. I, II, and III only
c. II and IV only
d. I, II, III, and IV
7.9. When a problem solving team applies the 5 why technique, they are attempting to:
a. Determine if the interviewee is telling the truth
b. Understand the basics of the problem
c. Eliminate areas not to investigate
d. Determine the root cause of the problem

7.10. Non-value added activities on the factory floor are most clearly controlled by the elimination of:
a. Gemba
b. Muda
c. Poka-yoke
d. Kaizen

7.11. Which of the following statements concerning the coefficient of simple linear correlation, r, is NOT true?
a. r = 0.00 represents the absence of a relationship
b. The relationship between the two variables must be nonlinear
c. r = 0.76 has the same predictive power as r = -0.76
d. r = 1.00 represents a perfect relationship

7.12. The determination of temporal variation in multi-vari charting means:
a. Variation within piece
b. Variation over time
c. Variation piece-to-piece
d. Variation within batch

7.13. In a single factor analysis of variance, the assumption of homogeneity of variances applies to:
a. The variances within the treatment groups
b. The variances of the treatment groups
c. The total variance
d. All of the above

7.14. A company has just installed a new computer data entry system and the new input error rates must be determined. Management requires the error rate to be within ±0.5%, at a 95% confidence level. What sample size is necessary if the population standard deviation is 1.2%?
a. 15
b. 16
c. 22
d. 23

7.15. Given that random samples of process A produced 10 defective and 30 good units, while process B produced 25 defectives out of 60 units. Using the chi-square test, what is the probability that the observed value of chi-square could result under the hypothesis that both processes are operating at the same quality level?
a. Less than 5%
b. Between 5% and 10%
c. Between 10% and 20%
d. Greater than 20%

7.16. The correlation coefficient for test #1 equals -0.89, and the correlation coefficient for test #2 equals 0.79. Assuming the tests are in different subject areas, which of the following statements is FALSE?
7.17. What inference test compares observed and expected frequencies of test outcomes?

7.21. All of the following statements concerning statistical inference are true, EXCEPT:

7.25. Given a coefficient of determination of 0.85, which of the following is true?

7.29. What test statistic must be known in order to compute a confidence interval for variation?
7.33. What is the Z value needed to conduct a two-tail test in a statistical inference problem, specifying a 90% level of confidence?
a. 1.96
b. 1.28
c. 1.65
d. 2.24

7.34. If a multi-vari chart shows that 60% of variation is within piece, 25% of variation is piece-to-piece and 10% of variation occurs over time, what would be the indicated improvement action sequence?
a. Temporal, positional, cyclical
b. Positional, temporal, cyclical
c. Cyclical, positional, temporal
d. Positional, cyclical, temporal

7.35. The current process produces fifty units per shift. A new process produced sixty units per shift, for ten consecutive shifts. The highest shift was sixty-six units and the lowest shift during the trial was fifty-four units. With what level of confidence can one say the process has changed?
a. Since the standard deviation is unknown, the test can't be performed
b. 90% confidence
c. 95% confidence
d. >99% confidence

7.36. Refer to the following partial ANOVA table:
Source      SS      DF    MS
Materials   900     2
Machines    2100    2
Errors      300     10
Total               14
What are the three missing MS values?
a. 450, 1050, 30
b. 60, 140, 20
c. 450, 190.91, 20
d. 45.0, 190.91, 30

7.37. A correlation of +0.95 for two variables, x and y, means that:
I. There is a strong linear relationship between x and y
II. y may be predicted from x using the equation y = 0.95x
a. I only
b. II only
c. Both I and II
d. Neither I nor II

7.38. If four inspectors were evaluated, for the detection or non-detection of a defect in twenty samples, how many degrees of freedom would be used to determine the critical chi-square value?
a. 16
b. 8
c. 6
d. 3

7.39. A product was yielding 90% recovery before an improvement was made. To determine if a 2% change (in either direction) has been made at the 95% confidence level, what sample size should be taken?
a. 468
b. 648
c. 864
d. 1728

7.40. Which of the following characterize the muda of waiting?
I. Idle operators
II. Machinery breakdown
III. Uneven schedules
IV. Unnecessary meetings
a. I and II only
b. II, III, and IV only
c. I, II, III, and IV
d. I, III, and IV only
7.41. If the 90% confidence interval for the mean is 181.3 to 203.8, which of the following statements is correct?
a. 90% of all values in the population fall between 181.3 and 203.8
b. 90% of all values are either greater than 203.8 or less than 181.3
c. The probability of randomly selecting a value between 181.3 and 203.8 is 90%
d. None of the above

7.42. The probabilistic regression model for any particular observed value of Y contains a term β₀, which represents:
a. The Y-axis intercept, when X = 0
b. The Y-axis intercept, when X = 1
c. The slope of the model
d. The X-axis intercept, when Y = 0

7.43. Three machines are being evaluated in a one-way ANOVA. A total of sixteen trials have been conducted. How many degrees of freedom are available to evaluate the error term?
a. 12
b. 13
c. 14
d. 15

7.44. A multi-vari chart indicates:
15% within piece variation
65% piece-to-piece variation
10% over time variation
What is the recommended action sequence to reduce variation?

7.45. Which two of the following confidence interval calculations require the use of Z table values?
I. Large sample means
II. Small sample means
III. Variation confidence intervals
IV. Proportion confidence intervals
a. I and II only
b. II and III only
c. I and III only
d. I and IV only

7.46. If the mean for each of the treatment groups in an experiment were identical, the F ratio would be:
a. 1
b. 0
c. A positive number between 0 and 1.00
d. Infinite

7.47. For a linear correlation, the total sum of squares equals 1,600, and the total sum of squared errors equals 1,000. What is r²?
a. 0.600
b. 0.625
c. 0.375
d. 0.750

7.48. Identify the element that is NOT associated with excess inventory?
a. Storage space
b. Additional labor
c. Transportation vehicles
d. Expensive poka-yoke devices
The answers to all questions are located at the end of Section XII.
Eliminating Wastes
• 5S                                  • Set up reduction
• Kanban (pull)                       • Flow improvement
• Poka-yoke (mistake proofing)        • Quick response manufacturing
Although 5S is considered the simplest of all lean techniques, it should not be taken
lightly. 5S applications can bring excellent benefits in the fight against muda
(waste). 5S starts by providing a formal approach for housekeeping that becomes
systematic. The 5S technique includes the measurement, auditing, and monitoring
of cleanliness, order, and neatness. The 5S approach exemplifies a determination
to organize the workplace, keep it neat and clean, establish standardized conditions,
and maintain the discipline that is needed to do the job.
Once fully implemented, the 5S process can increase morale, create positive
customer impressions, and increase efficiencies. Not only will employees feel better
about where they work, the effect on continuous improvement can lead to less
waste, improved quality, and faster lead times. Any of these items will make an
organization more profitable and competitive in the marketplace.
* The material on 5S has been modified and combined from the following sources:
(Davidson, 2006)4, (George, 2003)5, (Imai, 1997)10, (iSixSigma, 2006)11, (Skaggs,
2006)26, (Wortman, 2006)35.
5S (Continued)
There are now very distinctive English counterparts for each original Japanese 5S
word. The concept of 5S has been twisted somewhat due to attempts to keep each
element starting with the letter 'S' in English, like the original Japanese words (seiri,
seiton, seiso, seiketsu, and shitsuke). Different authors state slightly different
English 5S equivalents, but their meanings remain similar to the original
fundamental terms. The term 5S comes originally from five Japanese words, briefly
described as follows:
Step 1. Seiri (also known as sort)
Seiri focuses on eliminating unnecessary items from the workplace. One should
separate the necessary items from the unnecessary ones. An effective visual method to
identify these unneeded items is called red tagging. A red tag is placed on all items
not required to complete the normal job. This process requires an evaluation of the
red tagged items. Items that are only used occasionally are moved to an organized
outside storage location. Unneeded items are discarded. Sorting is an excellent
way to free up valuable floor space and eliminate such things as broken tools,
obsolete fixtures, scrap, and excess raw material. The sort process also helps
prevent the JIC (just-in-case) job mentality. A typical sequence follows:
5S (Continued)
Step 2. Seiton (also known as set, set in order, store, straighten, proper
arrangement, or simplify)
Seiton focuses on efficient and effective storage methods. Once sorted, items are
arranged systematically to ensure their traceability. Things are put in order and
placed in such a way that they can be easily reached whenever they are needed.
Items are arranged and identified for ease of use.
One should ask questions like: What do I need to do the job? Where should this
item be located? How many items are really needed?
Strategies for effective seiton activities include: painting floors, outlining work areas
and locations, shadow boards, and modular shelving and cabinets for needed items
(such as trash cans, brooms, mop and bucket). "A place for everything and
everything in its place."
Step 3. Seiso (also known as shine or sweep)
Once the work area clutter and junk have been eliminated and the necessary items
are identified, the next step is to thoroughly clean the work area. Daily follow-up
cleaning is necessary to sustain this improvement. Most workers take pride in a
clean and clutter-free work area.
The shine step will help create ownership in the equipment and facility. Workers will
also begin to notice changes in equipment and facilities such as air leaks, oil leaks,
coolant leaks, contamination, vibration, breakage, and misalignment. These
changes, if left unattended, could lead to equipment failure and loss of production.
All of these issues impact the company's bottom line.
5S (Continued)
Step 3. Seiso (Continued)
Always keep arranged items ready to use and in a tidy status. Look for ways to keep
the work area clean. One must make the workplace spotless.
Step 4. Seiketsu (also known as standardize)
Once the first three of the 5Ss have been addressed, one should concentrate on
standardizing the best practices in the work area. All employees involved with the
process should participate in the development of a standard method. Workers are
a valuable, but often overlooked, source of information regarding the best work
methods.
5S (Continued)
Step 5. Shitsuke (also known as sustain, self-discipline, or commitment)
Sustain the previous 4 steps and continually improve on them. Shitsuke is by far the
most difficult S to implement and achieve. Human nature tends to resist change.
More than a few organizations have found themselves with a dirty, cluttered shop a
few months following their attempt to implement 5S. The tendency is to return to the
status quo and the comfort zone of the "old way" of doing things. Sustain focuses
on defining a new status quo and standard of workplace organization.
Shitsuke is not a part of the previous core 4Ss, but is an appropriate approach and
attitude toward any undertaking to inspire pride and adherence to standards (as
established for the other four components). Using self-discipline, each individual
has to commit to the process, stick to the rules, and maintain motivation.
Lockheed Martin and other organizations often add one more S (hence the notation
5S+1). The additional item is safety: removing hazards and dangers.
What 5S is Not
Due to the straightforwardness of 5S, it is easy to mistakenly confuse some isolated
cleaning activities and label them as a 5S program. 5S is not:
There are also significant intangibles such as the reward of a better place to work,
personal achievement, and sense of ownership resulting from the 5S
implementation. Employees often talk about taking the 5S approach to their own
lives, houses, communities, etc.
5S Implementation Roadmap
The implementation of 5S can be addressed from three different levels:
In most cases, companies dedicate most of their efforts to the 5S operational level,
underestimating the importance of incorporating the principles and practices of the
process and philosophy levels.
5S Philosophy Level
Failures in implementing 5S are often related to a disconnect between company
values and 5S practices. Companies typically conduct the sort, straighten, and
shine stages several times, but they do not standardize or sustain the process.
Sometimes this is related to short-term decision making within the business.
5S Operational Level
The operational level is the typical case for 5S. At this level, a company creates,
deploys, and implements sorting, organization, cleaning, and similar activities.
Figure 8.1 illustrates the typical steps.
Pre-Launch
A pilot 5S approach will allow a team to understand, correct, and improve the tactical
and practical implementation details. This will prove valuable when the concept is
deployed in other areas.
Launch
• Vision
• Values
• Goals
Positive pilot area results can be a motivational factor in encouraging the rest of the
company to embrace 5S. Classic 5S steps will be developed first by the pilot area.
Sort
At the operational level, the sort step attempts to remove unnecessary items from
work areas. Any article, piece of equipment, document, or record, that is not
essential for current production or service delivery, should be removed.
* In this case, the analysis deals with why the unnecessary items have
accumulated in the work area.
5S Red Tag
Filled by: Date:
Quantity: Location:
Item Description (mark one): Reason for disposition:
1. Raw material 1. Defective
2. WIP 2. Obsolete
3. Finished goods 3. Surplus
4. Repair supplies 4. Not needed in short-term
5. Surplus equipment/tools 5. Not needed in long-term
6. Office supplies 6. Needs identification
7. Customer property 7. Other
8. Unknown
9. Files Final disposition:
10. Other
1. Scrap (with paperwork)
2. Scrap (no paperwork)
Disposition by: 3. Return to vendor
Date: 4. Return to customer
Disposal by: 5. Move to red tag area
6. Move to:
7. Store in:
Straighten is the stage where the team must analyze and understand how things get
organized. Takashi Osada states that straighten (or neatness) involves 4 stages:
• Analyze the status quo. The most important and often overlooked step in the
straighten stage is analyzing how things get messed up, especially when the
volume and variety of items is large and difficult to control. Again, basic
problem solving tools are useful for this analysis. Once the current state is
understood, the team should advance to where things should be placed.
• Decide where things belong. If the sort stage was properly achieved, a
company should experience a reduction in the total quantity of items on the
floor. A 5S purist would say that one of any part on hand, in any given period,
is enough. Deciding where things belong utilizes stratification methods. That
is, the most frequently used items should be easily accessible.
• Decide how things should be put away. Once the stratification of items has
been completed, guidelines should be established for putting away items (as
well as how and when not to put them away).
In zero quality control, two conditions of human nature are recognized. One, people
forget things and two, people make unintended mistakes. These facts should be
considered to prevent people from forgetting or failing to follow the straighten rules.
Shine (Sweep)
Shine is more than a simple cleanliness step. During this stage, the 5S team has to
develop other activities such as defining specific cleaning schedules for work areas
and organizing the big kick-off day for companywide or pilot area cleaning. The
team has to organize the shine efforts in order to:
It is a good practice to start the cleaning from the roof down to the floor. It is
typical to find work areas in which the floor and walls are clean, but the roof and
ceiling are not. Shine emphasizes a completely concentrated effort to prevent dirt
in all areas of the facility.
The use of cleaning checklists is necessary. Checklists for facilities and equipment
cover specific areas such as: floors, walls, ceilings, roofs, machines, and
equipment. If the straighten stage was properly accomplished, then a particular area
would have been identified to store cleaning supplies.
Standardize
Standardize tools include checklists, visual aids, work instructions, and training.
Activities and practices should be scheduled for daily, weekly, monthly, and annual
events.
Several 5S purists state that a way to measure progress is to assess the total
quantity of cleanups, especially cleaning tasks performed when customers or auditors
visit the facility.
Sustain
Sustain is associated with discipline and habits. This is the stage in which the 5S
program attempts to become an integral, inherent lifestyle throughout the company.
During the sustain phase, the team will review and adjust any audit forms they
developed during the pre-launch stage. An annual audit schedule is necessary.
Every stage of the 5S program should be addressed.
Lean experts state that companies must be humble enough to recognize that if the
5S program cannot be sustained, then more elaborate tools, like TPM,
SMED, etc., will be in jeopardy. Mediocre results in 5S implementation will carry over to
the next initiative and will prevent future success. Knowledge, discipline, and
habits are intrinsic to successful 5S implementation.
Deployment
Once a 5S pilot test has been successful, a clear deployment plan for the rest of the
company must be structured. Specific elements covered during the pre-launch
stage need to be revised, including:
• Companywide training
Follow-up
When the standardize and sustain phases are successful, audits and other metrics
(developed to track improvement) can be added to management reviews. Review of
5S performance by top management will be relevant only if 5S is aligned with the
values and goals of the company.
5S Process Level
At a process level, 5S has a more specific approach, oriented to the application of
the 5S principles to the workstation. Inoculating employees with 5S is a process that
may take months or years. Discipline is not easily achieved.
Sort
Sort includes identifying and eliminating wastes. The way in which work is done at
a workstation can add waste. Tools such as therbligs, which are related to micro-
motion studies, can contribute to identifying and eliminating waste in a specific
work area. Therbligs are movements or activities that are done by the operators or
machines. Many of these add no value. In order to apply this analysis, an individual
should review internet sources such as The Gilbreth Network (2006)31.
Straighten
At a workstation level, there are many tools and concepts that can be applied, both
from time and movement studies and from micro-movement studies. For example,
using labels to identify bins, racks, products, etc., is very useful. Tools including
visual aids, drawings, marks, and colorful images are of great help when improving
the straighten phase at a workstation.
[Figure: examples of therblig symbols, including transport loaded, plan, assemble, grasp, position, hold, use, unavoidable delay, and rest for overcoming fatigue.]
Shine
Typically, implementation teams define specific times during the day to perform
preventive maintenance, housekeeping, and workstation setup. This is necessary,
but the process needs review. Daily practices related to cleaning, sweeping, or
preventive maintenance can become excessive, trivial, or inconsequential to
employees. Cleaning twice a day, for example, is not shine. Remember that shine's
purpose is to prevent grime.
Kanban-Pull
T. Ohno of Toyota Motor Company was the originator of the kanban method. This
idea supposedly occurred to Ohno on a visit to the United States when he visited a
supermarket. In the supermarket, product is "pulled" from the shelf and the missing
item is replenished. This is the famous "pull" system in action (Shingo, 1989)25.
Liker (1997)14 suggests that the story of Ohno visiting an American supermarket to
develop kanban is fiction.
To reduce the WIP and cycle time, the goal is to be able to produce each part, every
day, in some order such as 2 As, 1 B, 2 As, 1 B, etc. The factory must be capable of
producing such an arrangement. It requires control of the machinery and production
schedule, plus coordination of the employees. (Liker, 1997)14
If a kanban system is used, with cards indicating the need to resupply, the method
of feeding an assembly line could be achieved using the following process:
1. Parts are used on the assembly line and a withdrawal kanban is placed in a
designated area.
3. The WIP kanban card is a work instruction to the WIP operator to produce
more parts. This may require a kanban card to pull material from an even
earlier operation.
4. The next operation will see that it has a kanban card and will have permission
to produce more parts.
Kanban-Pull (Continued)
The order to produce parts at any one station is dependent on receiving an
instruction, the kanban card. Only upon receiving a kanban card will an operator
produce more goods. This system aims at simplifying paperwork and minimizing WIP
and finished goods inventories. Examples of kanban cards are shown below.

[Example kanban card images: RZC 5, Type: Manual; MBT 8, Type: Automatic]

Due to the critical timing and sequencing of a kanban system, improvements are
continually made. A kanban system cannot have production halted by machine
failures or quality problems. Only a specific amount of product is in the system at
any point in time. A stoppage will cause much distress throughout the production
system. Every effort is made to eliminate causes of machine downtime and to
eliminate sources of errors in production.
Shingo (1989)25 notes that kanban systems are applicable in repetitive production
plants, but not in one-of-a-kind production operations. Kanban is beneficial for
production systems involving parts with common processes.
Poka-yoke devices can be combined with other inspection systems to obtain near
zero defect conditions. Errors can occur in many ways:
• Skipping an operation
• Positioning parts in the wrong direction
• Using wrong parts or materials
• Failing to properly tighten a bolt (Suzaki, 1993)29
There are numerous adaptive approaches. Gadgets or devices can stop machines
from working if a part or operation sequence has been missed by an operator. A
specialized tray or dish can be used prior to assembly to ensure that all parts are
present. In this case, the dish acts as a visual checklist. Other service oriented
checklists can be used to assist an attendant in case of interruption.
A buzzer or light will signal that an error has occurred, requiring immediate action.
Root cause analysis and corrective action are required before work resumes.
Besides eliminating the opportunity for errors, mistake proofing is relatively
inexpensive to install and engages the operator in a contributing way. Work teams
can often contribute by brainstorming potential ways to thwart error-prone activities.
A disadvantage is, in many cases, that technical or engineering assistance is
required during technique development.
Setup Reduction
SUR is an acronym for setup reduction. SMED is an acronym for single minute
exchange of dies. In this discussion, the two terms will be used interchangeably.
SUR is one of the most important tools in the lean manufacturing system. The
concept is to take a long setup change of perhaps 4 hours in length and reduce it to
3 minutes. Most people can not believe that this is possible. Shigeo Shingo,
developer of the SMED system, used it quite effectively in the Toyota Production
System for just-in-time production. Single minute exchange of dies does not literally
require only one minute. It merely implies that die changes are to be accomplished
in a single digit of time (nine minutes or less). (Robinson, 1990)22
Long setup changes present a major problem for many companies dealing with low
volume production. American industry has long held the view that the best
production scheme would be long production runs of the same product. Witness
Henry Ford's Model T assembly line of only black cars. In today's world, that may
not be possible. There may be certain industries where the supplier is dominant in
comparison to the buyer. However, if the customer is more dominant, or the
industry is very competitive, being able to switch production very quickly can create
a competitive edge. There are 3 myths regarding setup times:
• The skill for setup changes comes from considerable practice and experience
• Long production runs are more efficient because they save setup times
• Long production runs are economically better
The traditional setup method requires that all machine operations be stopped and
then operators proceed to think about the setup. A better way is to identify what can
be performed before shutting down the machine (external setup time), and then to
identify what has to be done when the machine is shut down (internal setup time).
A machine shutdown means that no parts are being produced. In planning a SUR
project, the actual conditions and steps of the die changeover must be detailed.
This can be done by:
• Worker interviews
• Preparation of parts
• Finding parts
• Measuring parts
• Maintenance of dies and spares
• Cleaning of spares, etc.
The break down of initial elements into internal and external setup operations is just
a start. The existing internal setup elements should be reexamined to convert more
of those elements into external setup. The goal is to reduce the setup time to a single
digit of minutes. However, it may take a series of SUR projects to reach that single-digit time.
The setup team will need to generate some creative options. They should look for
pre-heating of dies, earlier preparation of parts, simplifying holding devices,
standardizing die heights, and using common centering jigs, multipurpose dies,
parallel operations (2 or more people working), functional clamps, one-turn
attachments, U-shaped washers, one-motion methods, interlocking methods,
elimination of adjustments, etc. Brainstorming and problem solving sessions are
needed to continuously improve the setup process.
All elements of internal and external setup must be reviewed in detail and
streamlined in order to attain the single digit goal. Perhaps the goal is unattainable,
but efforts should be made to go as low as possible. Once a SUR procedure is
agreed upon, the setup team should practice the process and critique itself for
additional improvements. (Robinson, 1990)22
Over a period of three months, the change out time was reduced to a firm fifteen
minutes regardless of station and mold type. Some of the key ingredients in this
success included:
The mass production, or large lot production, world is a series of operations that
produce goods in large batches. The sequence of operations used in producing
large batch sizes results in waiting time between operations. Large lot production
has these faults:
(Productivity, 1999)21
Takt Time
In the operation of a continuous flow manufacturing line, takt time takes on great
importance. Takt time is a term used (first by Toyota) to define a time element that
equals the demand rate.
In a CFM or one piece flow line, the time allowed for each line operation is limited.
The line is ideally balanced so that each operator can perform their work in the time
allowed. The word, takt, is a German word for baton, used by an orchestra
conductor (Imai, 1997)10. This provides a rhythm to the process, similar to a
heartbeat. The work proceeds at a certain pace or rhythm. In CFM, this pace is
maintained, and the line must be engineered to do so.
(Conner, 2001)3, (Sharma, 2001)23
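As a simple illustration of the arithmetic, takt time is the available work time divided by the customer demand rate. The sketch below uses hypothetical shift and demand figures (they are not from the text):

# Takt time = available work time / customer demand (hypothetical figures)

available_minutes = 450          # one shift: 480 minutes less 30 minutes of breaks
daily_demand = 900               # units the customer requires per day

takt_time = available_minutes / daily_demand      # minutes allowed per unit
print(f"Takt time: {takt_time * 60:.0f} seconds per unit")   # 30 seconds

# Each operation on a balanced CFM line must be completed within this time.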
Quick Response Manufacturing (QRM)
In many cases, QRM requires that the managerial mind set change. The
implementation of QRM in a company requires proper training and orientation to
grasp the dynamics of the manufacturing system. It is important to understand how
capacity planning, resource utilization, lot-sizing, etc., interact with each other and
impact lead times. This is very important in the relentless pursuit of lead time
reductions. QRM is especially useful for a product line that has a large variety of
highly engineered products with variable demand.
Kaizen
Kaizen is Japanese for continuous improvement (Imai, 1997)10. The word kaizen is
taken from the Japanese word kai meaning "change" and zen meaning "good." This
is usually referred to as incremental improvement, but on a continuous basis and
involving everyone. Western management is enthralled with radical innovations.
They enjoy seeing major breakthroughs, the "home runs" of business. Kaizen is an
umbrella term for:
• Productivity
• Total quality control
• Zero defects
• Suggestion systems
• Just-in-time production (Imai, 1997)10
The kaizen blitz, using cross functional volunteers in a 3 to 5 day period, results in
a rapid workplace change on a project basis. The volunteers come from various
groups. If the work involves a specific department, more team members are selected
from that department. Blitz teams often require a facilitator. Various metrics are
used to measure the outcomes of a kaizen blitz:
Theory of Constraints
The theory of constraints (TOC) is a system developed by E. Goldratt. In 1986,
Goldratt and Cox published a book titled The Goal (Goldratt, 1986)7, which
introduced the subject. The Goal describes a process of ongoing continuous
improvement. Additional books have followed on the subject, including Theory of
Constraints (Goldratt, 1990)8.
(Goetsch, 2000)6
The Goal is a novel written in a story format describing the dual trials of a plant
manager as he struggles to simultaneously manage his plant and his marriage. The
key concept, "theory of constraints" is never mentioned as such, but is fed to the
reader in bits and pieces. Listed below are many of the key elemental pieces:
The Goal reminds readers that there are three basic measures to be used in the
evaluation of a system.
• Throughput
• Inventory
• Operational expenses
These measures are more reflective of the true system impact than machine
efficiency, equipment utilization, downtime, or balanced plants.
• Balanced plants are not always a good thing. One should not balance
capacity with demand, but "balance the flow of product through the plant with
demand from the market." The plant may be capable of generating inventories
and goods at record levels, which jam up the plant's systems. The idea is to
make the flow through the bottleneck equal to market demand. One can do
more with less by just producing what the market requires at the time. It is
possible that the existing plant has more than enough resources to do any
job, but the flow must be controlled.
• Throughput is: "The rate at which the system generates money through
sales." The finished product must be sold before it can gtmerate money.
• Inventory is: "All the money that the system has inves1ted in purchasing
things that it intends to sell." This can also be defined as unsold investments.
• Operational expenses are: "All the money that the system spends in order to
turn inventory into throughput." This includes depreciation, lubricating oil,
scrap, carrying costs, etc.
• The terms throughput, inventory, and operational expenses define money as:
"incoming money, money stuck inside, and money going out."
1. Identify the system's constraints. A system constraint limits the firm from
achieving its optimum performance and goals. Thus, constraints must be
identified and prioritized for maximum impact.
5. Back to step 1. After the constraint has been broken, go back to step one and
look for new constraints.
One must ensure a smooth source of materials to the bottleneck. Because machines
and equipment have variation in their output, there may on occasion be too much
work-in-process (WIP) for the bottleneck, or not enough WIP for the bottleneck. The
ideal situation is to always have enough WIP for the bottleneck (which controls the
pace of the line) to keep production rates moving. Therefore, a set amount of
inventory (a buffer) is needed ahead of the bottleneck.
• Drum: This is the constraint that controls the pace of the process. The "beat"
of this operation sets the pace of the line.
• Rope: This is the feedback mechanism from the buffer to the raw material
input point. The dispatching point will release only enough material to keep
the buffer inventory at the proper level.
[Figure: drum-buffer-rope schematic showing material release feeding a buffer inventory, which feeds shipping or the next operation, with a feedback loop from the buffer back to the material release point]
Pitcher (2003)18 indicates that the DBR model can work very well in a job shop with
its wide variety of products, routings, and process times. In this environment,
bottlenecks can be everywhere. The use of DBR methods has led to excellent
performances in some situations, because WIP is kept low, and lower system cycle
times are achieved.
(Pitcher, 2003)18, (Yang, 2005)32
• Prerequisite trees: Something must occur before something else can occur.
TOC is a transition tool from an old way of doing things to a new way.
DOE Introduction*
Classical experiments focus on OFAT or 1FAT (one factor at a time) at two or three
levels and attempt to hold everything else constant (which is impossible to do in a
complicated process). When DOE is properly constructed, it can focus on a wide
range of key input factors or variables and will determine the optimum levels of each
of the factors. It should be recognized that the Pareto principle applies to the world
of experimentation. That is, 20% of the potential input factors generally make 80%
of the impact on the result. The classical approach to experimentation, changing
just one factor at a time, has the following shortcomings:
• Too many experiments are necessary to study the effects of all the input
factors.
• The interaction (the behavior of one factor may be dependent on the level of
another factor) between factors cannot be determined.
• Unless carefully planned and the results studied statistically, conclusions may
be wrong or misleading.
• Even if the answers are not actually wrong, non-statistical experiments are
often inconclusive. Many of the observed effects tend to be mysterious.
• Time and effort may be wasted through studying the wrong variables or
obtaining too much or too little data.
In contrast, properly designed experiments offer these advantages:
• One can look at a process with relatively few experiments. The important
factors can be distinguished from the less important ones. Concentrated
effort can then be directed at the important ones.
• Since the designs are balanced, there is confidence in the conclusions drawn.
The factors can usually be set at the optimum levels for verification.
• Frequently, quality and reliability can be improved with minimal trial costs.
In many cases, tremendous cost savings can be achieved.
DOE Applications
Situations, where experimental design can be effectively used, include:
• Hit a target
• Reduce variability
• Maximize or minimize a response
• Make a process robust (despite uncontrollable factors)
• Seek multiple goals
DOE Steps*
Getting good results from a DOE involves a number of steps:
• Set objectives
• Select process variables
• Select an experimental design
• Execute the design
• Ensure the data is consistent with the experimental assumptions
• Analyze and interpret the results
• Use/present the results (may lead to further runs or DOEs)
• The principal experimenter should learn as many facts about the process as
possible prior to brainstorming.
• Brainstorm a list of the key independent and dependent variables with people
knowledgeable of the process and determine if these factors can be controlled
or measured.
• Be bold, but not foolish, in choosing the low and high factor levels
When choosing the range of settings for input factors, it is wise to avoid extreme
values. In some cases, extreme values will give runs that are not feasible.
The most popular experimental designs are called two-level designs. They are
simple and economical. They are ideal for screening designs, and give most of the
information required to go to a multi-level response surface experiment, if one is
needed. However, it is often desirable to include some center points (for quantitative
factors) during the experiment (center points are located in the middle of the design
"box").
Experimental Objectives*
Choosing an experimental design depends on the objectives of the experiment and
the number of factors to be investigated. Some objectives are discussed below:
1. Comparative objective: If several factors are under investigation, but the primary
goal of the experiment is to make a conclusion about whether a factor (in spite
of the existence of the other factors) is "significant," then the experimenter has
a comparative problem and needs a comparative design solution.
The choice of a design depends on the amount of resources available and the degree
of control over making wrong decisions that the experimenter desires. It is a good
idea to choose a design that requires somewhat fewer runs than the budget permits,
so that additional runs can be added to check for curvature and to correct any
experimental mishaps.
Experimental Assumptions*
In all experimentation, one makes assumptions. Some of the engineering and
mathematical assumptions an experimenter makes include:
• Are the residuals (the difference between the model predictions and the actual
observations) well behaved?
It is not a good idea to find, after finishing an experiment, that the measurement
devices are incapable. This should be confirmed before embarking on the
experiment itself. In addition, it is advisable, especially if the experiment lasts over
a protracted period, that a check be made on all measurement devices from the start
to the conclusion of the experiment. Strange experimental outcomes can often be
traced to "hiccups" in the metrology system.
Experimental runs should have control runs which are done at the "standard"
process setpoints, or at least at some identifiable operating conditions. The
experiment should start and end with such runs. A plot of the outcomes of these
control runs will indicate if the underlying process itself drifted or shifted during the
experiment. It is desirable to experiment on a stable process. However, if this
cannot be achieved, then the process instability must be accounted for.
These are the assumptions behind ANOVA and classical regression analysis. This
means that an analyst should expect a regression model to err in predicting a
response in a random fashion; the model should predict values higher and lower
than actual with equal probability. In addition, the level of the error should be
independent of when the observation occurred in the study, or the size of the
observation being predicted, or even the factor settings involved in making the
prediction.
The overall pattern of the residuals should be similar to the bell-shaped pattern
observed when plotting a histogram of normally distributed data. Graphical
methods are used to examine residuals. Departures from assumptions usually mean
that the residuals contain structure that is not accounted for in the model.
Identifying that structure, and adding a term representing it to the original model,
leads to a better model. Any graph suitable for displaying the distribution of a set
of data is suitable for judging the normality of the distribution of a group of
residuals. The three most common types are: histograms, normal probability plots,
and dot plots. Shown below are examples of dot plot results.
[Figure: example dot plots of residuals]
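As a rough sketch of such a graphical check (the residual values are invented for illustration), a histogram and a normal probability plot can be produced as follows:

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Hypothetical residuals from a fitted model (actual - predicted)
residuals = np.array([-1.2, 0.4, 0.9, -0.3, 0.1, -0.7, 1.1, 0.2,
                      -0.5, 0.6, -0.1, 0.8, -0.9, 0.3, -0.2, 0.5])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

ax1.hist(residuals, bins=6)               # should look roughly bell-shaped
ax1.set_title("Histogram of residuals")

stats.probplot(residuals, plot=ax2)       # points should fall near a straight line
ax2.set_title("Normal probability plot")

plt.tight_layout()
plt.show()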
DOE Terms
[Figure 8.9: design sign tables illustrating confounding, in which A is confounded with BC, B with AC, and C with AB]
Full Factorial Describes experimental designs which contain all
combinations of all levels of all factors. No possible treatment
combinations are omitted. A two-level, three factor full
factorial design is shown below:
A   B   C
-   -   -
+   -   -
-   +   -
+   +   -
-   -   +
+   -   +
-   +   +
+   +   +
Input factor An independent variable which may affect a (dependent)
response variable and is included at different levels in the
experiment.
Inner Array    In a Taguchi style fractional factorial experiment, these are the
               factors that can be controlled in a process.
Interaction    Occurs when the effect of one input factor on the output
               depends upon the level of another input factor.
[Figure: two plots of response vs. number of drinks for subjects who have eaten and who haven't eaten, illustrating "No Interaction" (left) and "Interaction" (right)]
[Figure: coded treatment combinations plotted on concentration vs. temperature axes]
From a design catalogue test plan*, the selected fractional factorial experiment looks
like so:

                      Treatment
Block (Days)     A         B         C         D
1               -5        Omitted   -18       -10
2               Omitted   -27       -14       -5
3               -4        -14       -23       Omitted
4               -1        -22       Omitted   -12
Only treatments A, C, and D are run on the first day. B, C, and D on the second day,
etc. In the whole experiment, note that each pair of treatments, such as BC, occurs
twice together. The order in which the three treatments are run on a given day
follows a randomized sequence.
Another randomized block design for air permeability response is shown below:
In Latin square designs, a third variable, the experimental treatment, is then applied
to the source variables in a balanced fashion. The Latin square plan is restricted by
two conditions:
Carburetor Type
Car I II III IV V
1 A B C D E
2 B C D E A
3 C D E A B
4 D E A B C
5 E A B C D
In the above design, five automobiles and five carburetors are used to evaluate gas
mileage by five drivers (A, B, C, D, and E). Note that only twenty-five of the potential
125 combinations are tested. Thus, the resultant experiment is a one-fifth fractional
factorial.
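The square above is a cyclic arrangement; a short sketch that builds such a square for five treatments (labels A through E, matching the driver labels above) is shown below:

# Build a cyclic 5x5 Latin square: each treatment appears once per row and column
treatments = ["A", "B", "C", "D", "E"]
n = len(treatments)

square = [[treatments[(row + col) % n] for col in range(n)] for row in range(n)]

print("Car \\ Carburetor:", " ".join(["I", "II", "III", "IV", "V"]))
for car, row in enumerate(square, start=1):
    print(f"Car {car}:", " ".join(row))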
Graeco-Latin Designs
Graeco-Latin square designs are sometimes useful to eliminate more than two
sources of variability in an experiment. A Graeco-Latin design is an extension of the
Latin square design, but one extra blocking variable is added for a total of three
blocking variables.
                 Carburetor Type
Car      I       II      III     IV
1        Aα      Bβ      Cγ      Dδ          Drivers: A, B, C, D
2        Bδ      Aγ      Dβ      Cα
3        Cβ      Dα      Aδ      Bγ          Days: α, β, γ, δ
4        Dγ      Cδ      Bα      Aβ
The output (response) variable could be gas mileage by 4 drivers (A, B, C, D).
Hyper-Graeco-Latin Designs
A Hyper-Graeco-Latin square design permits the study oftreatments with more than
three blocking variables.
                 Carburetor Type
Car      I        II       III      IV
1        AαMφ     BβNχ     CγOψ     DδPω       Drivers: A, B, C, D    Tires: M, N, O, P
2        BδNω     AγMψ     DβPχ     CαOφ
3        CβOχ     DαPφ     AδMω     BγNψ       Days: α, β, γ, δ       Speeds: φ, χ, ψ, ω
4        DγPψ     CδOω     BαNφ     AβMχ
The output (response) variable could be gas mileage by 4 drivers (A, B, C, D).
Plackett-Burman Designs*
Plackett-Burman (1946)20 designs are used for screening experiments. PB designs
are very economical. The run number is a multiple of four rather than a power of 2.
PB geometric designs are two-level designs with 4, 8, 16, 32, 64, and 128 runs and
work best as screening designs. Each interaction effect is confounded with exactly
one main effect.
All other two-level PB designs (12, 20, 24, 28, etc.) are non-geometric designs. In
these designs a two-factor interaction will be partially confounded with each of the
other main effects in the study. Thus, the non-geometric designs are essentially
"main effect designs," when there is reason to believe any interactions are of little
practical importance. A PB design in 12 runs, for example, may be used to conduct
an experiment containing up to 11 factors. See Table 8.12.
                                Factors
Exp    X1   X2   X3   X4   X5   X6   X7   X8   X9   X10  X11   Results
 1     +    +    +    +    +    +    +    +    +    +    +
 2     -    +    -    +    +    +    -    -    -    +    -
 3     -    -    +    -    +    +    +    -    -    -    +
 4     +    -    -    +    -    +    +    +    -    -    -
 5     -    +    -    -    +    -    +    +    +    -    -
 6     -    -    +    -    -    +    -    +    +    +    -
 7     -    -    -    +    -    -    +    -    +    +    +
 8     +    -    -    -    +    -    -    +    -    +    +
 9     +    +    -    -    -    +    -    -    +    -    +
10     +    +    +    -    -    -    +    -    -    +    -
11     -    +    +    +    -    -    -    +    -    -    +
12     +    -    +    +    +    -    -    -    +    -    -
Table 8.12 Plackett-Burman Non-Geometric Design (12 Runs/11 Factors)
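The eleven non-trivial rows of Table 8.12 can be generated as cyclic shifts of a single row, and the defining property of the design is that every pair of factor columns is balanced (orthogonal). The sketch below assumes that cyclic construction (the generating row is inferred from the table, not quoted from the text) and verifies the orthogonality:

import numpy as np

# Generating row for the 12-run Plackett-Burman design (signs coded as +1/-1)
generator = np.array([-1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])

# Rows 2-12 are successive cyclic shifts of the generator; row 1 is all +1
rows = [np.ones(11, dtype=int)]
for shift in range(11):
    rows.append(np.roll(generator, shift))
design = np.vstack(rows)                      # 12 runs x 11 factors

# Orthogonality check: every pair of distinct columns should have dot product 0
dots = design.T @ design
off_diagonal = dots - np.diag(np.diag(dots))
print("Design shape:", design.shape)          # (12, 11)
print("All column pairs orthogonal:", np.all(off_diagonal == 0))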
1. Select a process
2. Identify the output factors of concern
3. Identify the input factors and levels to be investigated
4. Select a design (from a catalogue, Taguchi, self-created, etc.)
5. Conduct the experiment under the predetermined conditions
6. Collect the data (relative to the identified outputs)
7. Analyze the data and draw conclusions
Study the effects of seven variables at two levels, that are suspected of
affecting gasoline mileage performance.
* The student should again note that this is a hypothetical experiment. However,
it is illustrative of how experimentation can be applied to a variety of processes.
Note: The above inputs are both variable (quantitative) and attribute (qualitative).
A screening plan is selected from a design catalogue. Only eight (8) tests are
needed to evaluate the main effects of all 7 factors at 2 levels. The design is:
Input Factors
Test   A   B   C   D   E   F   G
#1 - - - - - - -
#2 - - - + + + +
#3 - + + - - + +
#4 - + + + + - -
#5 + - + - + - +
#6 + - + + - + -
#7 + + - - + + -
#8 + + - + - - +
Input Factors
Test   A   B   C   D   E   F   G
#3     -   +   +   -   -   +   +

Test #3 means:
A (-) = Windows are open            E (-) = Time of day is daytime
B (+) = Driver is Steve             F (+) = Super grade of gasoline
C (+) = Speed is 75 mph             G (+) = Car weight is 3,500 lb
D (-) = Air conditioner is on
* Arrive "Yes" requires going 375 miles, using 15 gallons of gasoline or less.
=
Mpg miles per gallon of gasoline performance.
1. The arrive pattern of "Yes" and "No" does not track with any single input factor.
2. The difference (Δ) values are the sum of values for each input factor. The effects
are the differences divided by 4 since four tests were at both high and low levels.
The absolute values are then taken for the effect values. Sum the absolute values
to give the gross effect and divide this by two for the net effect of improvement.
The gross effect is divided by 2 because the experiment is conducted in the
middle of the high and low levels and only half the difference (Δ) can be achieved.
The net effect of 8 mpg is added to the average of 22 mpg to determine the
maximum of 30 mpg. The optimum gasoline mileage would be obtained by
running the following trial:
Where: Level 2 = (+), Level 1 = (-), 0 = Does not matter
Table 8.18 Optimum Factor Levels
The above trial was one of the 120 tests not performed out of 128 possible choices.
DOE is almost magical. The predicted gasoline mileage can be confirmed by
additional experimentation.
[Figure 8.19 Miles per Gallon Gasoline Mileage vs. Factor Levels: the mpg response (16 to 28) plotted at the low (-) and high (+) settings of factors A through G]
Several tools can be used to validate the results of an improvement effort:
• Brainstorming
• FMEA
• Multi-vari reanalysis
• Post-improvement capability analysis
• DOE improvement analysis (manual, MINITAB, F test, t test, etc.)
• Measurement system reanalysis
Brainstorming
Quite often, brainstorming is useful in generating ideas for both simple problem
solving approaches (basic quality tools) or advanced problem solving approaches
(DOE or ANOVA).
FMEA
Consider that a design improvement team has worked diligently on a power lock
assembly with a potential failure mode of clamping force loss. The risk assessment
(RPN) before improvement was an arbitrary value of 32. After improvement, the RPN
is a value of 8.
Obviously, there has been a dramatic improvement. Whether the team has
completed their task, at this point, depends upon the hazard consequences of a
failure, as well as the RPNs for other potential failure modes within the same power
lock assembly.
Multi-Vari Re-analysis
Working on a process using multi-vari analysis is somewhat like draining a swamp.
The large stumps are uncovered first. In Figure 8.20 it is noted in the left drawing
that a substantial amount of variation occurred over time. This has now been
corrected by an improvement team, as reflected in the current drawing on the right.
Now the improvement team must decide if reducing the remaining within-piece
variation is worthy of additional effort (since piece-to-piece variability is of much
lower concern).
USL 0.110"
4
!3
Aim 0.105" 2
4 1
3
2
1
LSL 0.100"
Time ~ Time ---.
AIM AIM
Response Surfaces
An experimental equation represents a response line, plane, or surface for the
factors being evaluated. Refer to Figure 8.22 in which the S, C, and T values depend
on the size of slopes, curves, and twists, respectively.
[Figure 8.22: example response surfaces]
Certain surface response experiments are helpful in reaching the optimum results
in a relatively few experiments. However, there are experimental risks involving
potential product losses using these techniques.
Evolutionary Operation (EVOP)
[Figure: EVOP phases A through E plotted on pH vs. concentration axes, with observed yields ranging from 63% to 94%]
Tests are carried out in phase A until a response pattern is established. Then phase
B is centered on the best conditions from phase A. This procedure is repeated until
the best result is determined. When nearing a peak, switching to smaller step sizes
or examining different variables are helpful approaches. EVOP usually involves
small incremental changes so that little or no process scrap is generated. Large
sample sizes may be required to determine the appropriate direction of
improvement. The method can be extended to more than two variables.
Suppose that pressure, temperature, and concentration are three suspected key
variables affecting the yield of a chemical process which is currently running at 64%.
An experimenter may choose to fix these variables at two levels (high and low) to
see how they influence yield. In order to find out the effect of all three factors and
their interactions, a total of 2 x 2 x 2 = 2³ = 8 experiments* must be conducted. This
is called a full factorial experiment. The low and high levels of input factors are
noted below by (-) and (+).
Exp.   T   P   C   Yield %
 1     -   -   -    55
 2     +   -   -    77
 3     -   +   -    47
 4     +   +   -    73
 5     -   -   +    56
 6     +   -   +    80
 7     -   +   +    51
 8     +   +   +    73
              Average 64
* A rapid mnemonic memory aid to determine the number of test trials is the
following: levels are level to the ground while factors fly. That is, 3 factors at two
levels is 2³ = 8 trials and 2 factors at three levels is 3² = 9 trials.
The concentration effect: [(56 + 80 + 51 + 73) - (55 + 77 + 47 + 73)] / 4 = 2
The interaction effects between the factors can be checked by using the T, P, and C
columns to generate the interaction columns by the multiplication of signs:
Interactions
EXP.   T   P   C   TxP   PxC   TxC   TxPxC   YIELD
1 - - - + + + - 55
2 + - - - + - + 77
3 - + - - - + + 47
4 + + - + - - - 73
5 - - + + - - + 56
6 + - + - - + - 80
7 - + + - + - - 51
8 + + + + + + + 73
*Interaction means the change in yield when the pressure and temperature values
are both low or both high as opposed to when one is high and the other is low.
The T x P interaction shows a marginal gain in yield when the temperature and
pressure are both at the same level.
The P x C interaction: [(55 + 77 + 51 + 73) - (47 + 73 + 56 + 80)] / 4 = 0
In this example, the interactions have either zero or minimal negative yield effects.
If the interactions are significant compared to the main effects, they must be
considered before choosing the final level combinations.
The best combination of factors here is: high temperature, low pressure, and high
concentration (even though the true concentration contribution is probably minimal).
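The manual effect calculations above can be reproduced in a few lines; the sketch below uses the coded settings and yields exactly as given in the table:

import numpy as np

# Coded settings (T, P, C) and observed yields for the eight runs
T = np.array([-1, +1, -1, +1, -1, +1, -1, +1])
P = np.array([-1, -1, +1, +1, -1, -1, +1, +1])
C = np.array([-1, -1, -1, -1, +1, +1, +1, +1])
y = np.array([55, 77, 47, 73, 56, 80, 51, 73])

def effect(column, response):
    """Average response at the high level minus average at the low level."""
    return response[column == +1].mean() - response[column == -1].mean()

print("Average yield:", y.mean())                     # 64.0
print("T effect:", effect(T, y))                      # 23.5
print("P effect:", effect(P, y))                      # -6.0
print("C effect:", effect(C, y))                      # 2.0
print("T x P interaction:", effect(T * P, y))         # small positive value
print("P x C interaction:", effect(P * C, y))         # 0.0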
MINITAB Results
Most people don't analyze experimental results using manual techniques. On the
following page is a synopsis of the effects of temperature, pressure, and
concentration on yield results using MINITAB.
Source   DF        SS         MS        F        P
T         1    1104.50    1104.50   803.27    0.000
P         1      72.00      72.00    52.36    0.002
C         1       8.00       8.00     5.82    0.073
Error     4       5.50       1.37
Total     7    1190.00
The F values and corresponding p-values indicate that temperature and pressure are
significant to greater than 99% certainty. Concentration might also be important but
more replications would be necessary to see if the 93% certainty can be improved
to something greater than 95%.
The regression equation will yield results similar to those for the previous manual
calculations. Again, the p-values for temperature and pressure reflect high degrees
of certainty.
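The same analysis is not limited to MINITAB. As one illustration (the use of Python's statsmodels here is my own choice, not the handbook's), the 2³ yield data can be fit and an ANOVA table produced as follows:

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Coded factor settings and yields from the full factorial example
data = pd.DataFrame({
    "T": [-1, 1, -1, 1, -1, 1, -1, 1],
    "P": [-1, -1, 1, 1, -1, -1, 1, 1],
    "C": [-1, -1, -1, -1, 1, 1, 1, 1],
    "yield_pct": [55, 77, 47, 73, 56, 80, 51, 73],
})

# Main-effects model; the residual carries 4 degrees of freedom, as in the MINITAB table
model = smf.ols("yield_pct ~ T + P + C", data=data).fit()

print(anova_lm(model))        # sums of squares, F values, and p-values
print(model.params)           # regression coefficients (half of each effect)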
Frequently, one can derive the same conclusions by conducting fewer experiments.
Suppose the experiments cost $10,000 each; one might then decide to conduct a
one-half, fractional factorial experiment. Assume the following balanced design was
chosen for the original experiment:
Experiment T P C Yield %
2 + - - 77
3 - + - 47
5 - - + 56
8 + + + 73
Since a fractional factorial experiment is being conducted, only the main effects of
factors can be determined. Please note that experiments 1, 4, 6, and 7 would have
been equally valid.
The results are not exactly identical to what was obtained by conducting the eight-run
full factorial experiment. However, the same relative conclusions can be drawn as
to the effects of temperature, pressure, and concentration on the final yield.
Note that the average yield is 63.25%. If the temperature is high, an 11.75% increase
is expected, plus 3.25% for low pressure, plus 1.25% for high concentration, which equals
an anticipated maximum yield of 79.5%, even though this experiment was not
conducted. This yield is in line with the actual results from experiment number 6
from the full factorial.
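The prediction arithmetic for the half fraction can be checked directly (run settings and yields taken from the table above):

import numpy as np

# Half-fraction runs: coded (T, P, C) settings and yields for experiments 2, 3, 5, 8
T = np.array([+1, -1, -1, +1])
P = np.array([-1, +1, -1, +1])
C = np.array([-1, -1, +1, +1])
y = np.array([77, 47, 56, 73])

average = y.mean()                                            # 63.25
half_effect = lambda col: (y[col == +1].mean() - y[col == -1].mean()) / 2

# Predicted best yield: high temperature, low pressure, high concentration
prediction = (average
              + half_effect(T)          # +11.75 for high temperature
              - half_effect(P)          # +3.25 for low pressure (effect is negative)
              + half_effect(C))         # +1.25 for high concentration
print(f"Predicted maximum yield: {prediction:.2f}%")          # 79.50%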
Using either the manual or MINITAB recaps, would the experimenter stop at this
point? Might a follow-up experiment (perhaps at three levels) looking at higher
temperatures and lower pressures pay off? After all, the yield has improved by 16%
since experimentation started.
For example, micrometers that measured to 0.001" were used for several years. As
machining capability improved and the specification requirements became tighter,
micrometers with the ability to measure to 0.0001" became necessary. Although
both of these micrometers still have many applications today, more accurate
instruments such as "supermicrometers" are now required for some length
measurements. Using specialized equipment in temperature and humidity controlled
rooms, measurements to 1 millionth or 0.1 millionth of an inch for length are
possible today, and are routinely done.
An old rule of thumb was that the measurement instrument needed an accuracy of
one-tenth of the specification tolerance. This was also stated as the "10:1" rule. If
the specification required 0.345" ± 0.005", the proper instrument had an accuracy of
0.0001" because the specification required measurement to the nearest 0.001"
(some interpreted the requirement as 0.0005" or 0.001", based on one-tenth of the
tolerance). Other sources used a "4:1" ratio, or the equivalent 25%, as a guide for
the measurement uncertainty as compared with the specification tolerance. The
disadvantage of either of these rules is that in some applications achieving a 10:1
ratio is easily done, while in other applications it would require expensive equipment
to attain the suggested uncertainty.
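A quick sketch of the ratio check described above, using hypothetical instrument accuracies:

# Compare instrument accuracy against the specification tolerance
tolerance = 0.010                 # total tolerance for 0.345" +/- 0.005"

def tolerance_ratio(instrument_accuracy):
    """How many times the instrument accuracy fits into the tolerance."""
    return tolerance / instrument_accuracy

for accuracy in (0.001, 0.0025):                      # hypothetical instruments
    ratio = tolerance_ratio(accuracy)
    rule = "meets 10:1" if ratio >= 10 else "meets 4:1" if ratio >= 4 else "inadequate"
    print(f"Accuracy {accuracy} in.: ratio {ratio:.0f}:1 -> {rule}")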
The new quality system standards require " ... an estimation of the measurement
uncertainty as well as statistical techniques for analysis of test and/or calibration
data." (ISO/IEC 17025, 1999)12. Actually this is not a new requirement because
similar wording was in MIL-STD-45662A (1988)15: " ... the collective uncertainty of the
measurement standards shall not exceed 25% of the acceptable tolerance for each
characteristic being calibrated."
Note that although the factors are different, the term <10% looks similar to the 10:1
ratio and 10% to 30% looks similar to the 4:1 ratio or 25% from earlier methods.
The expression of measurement uncertainty includes both a range and the level of
confidence at which the statement is made. For example, measurement uncertainty
of a 100 gram mass could be stated as:
"ms = (100.02147 ± 0.00070) g, where the number following the symbol ± is the
numerical value of an expanded uncertainty U = kuc' with U determined from a
combined standard uncertainty (i.e., estimated standard deviation) Uc = 0.35 mg
and a coverage factor k= 2. Since it can be assumed that the possible estimated
values of the standard are approximately normally distributed with approximate
standard deviation uc ' the unknown value of the standard is believed to lie in the
interval defined by Uwith a level of confidence of approximately 95%."
(Taylor, 1994)30
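The expanded uncertainty statement can be reproduced from its components (values taken from the 100 gram example above):

# Values from the 100 gram mass example
mean_mass_g = 100.02147      # measured value of the standard, in grams
u_c_mg = 0.35                # combined standard uncertainty, in milligrams
k = 2                        # coverage factor for ~95% confidence (normal assumption)

U_mg = k * u_c_mg            # expanded uncertainty = 0.70 mg
U_g = U_mg / 1000.0

print(f"ms = ({mean_mass_g:.5f} ± {U_g:.5f}) g at approximately 95% confidence")
# ms = (100.02147 ± 0.00070) g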
References
1. Alukal, G. & Manos A. (2006). Lean Kaizen. A Simplified Apl,roach to Process
Improvements. Milwaukee, WI: ASQ Quality Press.
3. Conner, G. (2001). Lean Manufacturing for the Small Sh"p. Dearborn, MI:
Society of Manufacturing Engineers.
4. Davidson, P. (2006). "Kaizen Events -55's," Lean Six Sigma. Biisk Education, Inc.
and Villanova University.
5. George, M.L. (2003). Lean Six Sigma for Service. New York: McGraw-Hill.
6. Goetsch, D.L. & Davis, S.B. (2000). Quality Management: Introduction to Total
Quality Management for Production, Processing, and Services, 3rd ed. Upper
Saddle River, NJ: Prentice Hall.
7. Goldratt, E., & Cox, J. (1986). The Goal: A Process of Ongoing Improvement,
Revised Edition. Croton-on-Hudson, NY: North River Press.
9. Hahn, G.J. & Shapiro, S.S. (1966, May). A Catalog and Computer Program for the
Design and Analysis of Orthogonal Symmetric and Asymmetric Fractional
Factorial Experiments. Schenectady, NY: General Electric, Research and
Development Center.
12. ISO/IEC 17025 (1999). General Requirements for the Competence of Testing and
Calibration Laboratories, 1st ed. Geneva: International Organization for
Standardization.
14. Liker, J. (editor). (1997). Becoming Lean: Inside Stories of U.S. Manufacturers.
Portland, OR: Productivity Press.
16. MSA (1998). Measurement Systems Analysis Reference Manual, 2nd ed.
Automotive Industry Action Group (AIAG).
19. Osada, T. (1991). The 5S's: Five Keys to a Total Quality Environment. Tokyo: Asian
Productivity Organization.
20. Plackett, R.L. & Burman, J. P. (1946). "The Design of Optimal Multifactorial
Experiments." Biometrika (Vol.33).
23. Sharma, A., & Moody, P. (2001). The Perfect Engine: How to Win in the New
Demand Economy by Building to Order with Fewer Resources. New York: The
Free Press.
24. Shingo, S. (1986). Zero Quality Control: Source Inspection and the Poka-yoke
System. Stamford, CT: Productivity Press.
References (Continued)
25. Shingo, S. (1989). A Study of the Toyota Production System From an Industrial
Engineering Viewpoint. Cambridge, MA: Productivity Press.
26. Skaggs, T. (2006). "The 5S Philosophy." Metaltek Mfg. Inc. Papa Kaizen.
Downloaded November 1, 2006 from:
http://www.tpmonline.com/papakaizen/articls_on_lean_manufacturing_strategies/5s.htm
28. Suri, R. (2006). Quick Response Manufacturing: A Competitive Strategy for the
21st Century. Retrieved November 23, 2006 from:
http://www.advancedmanufacturing.com/may01/qrm.pdf.
29. Suzaki, K. (1993). The New Shop Floor Management: Empowering People for
Continuous Improvement. New York: The Free Press.
30. Taylor, B.N. & Kuyatt, C.E. (1994). NIST Technical Note 12~J7. Guidelines for
Evaluating and Expressing the Uncertainty of NIST Mea~;urement Results.
United States Department of Commerce, Technology Administration, National
Institute of Standards and Technology.
31. The Gilbreth Network. (2006). "Therbligs: The Keys to Simplifying Work."
Downloaded November 30, 2006 from
http://gilbrethnetwork.tripod.com/therbligs.html
32. Yang, K. (2005). Design for Six Sigma for Service. New York: McGraw-Hili.
33. Womack, J., & Jones, D. (1996). Lean Thinking: Banish Wastei~ndCreate Wealth
in Your Corporation. New York: Simon & Schuster.
34. Wortman, B.L., et al. (2001). CSSBB Primer. Terre Haute, IN: Quality Council of
Indiana.
35. Wortman, B.L., et al. (2006). CSSGB Primer. Terre Haute, IN: Quality Council of
Indiana.
36. Wortman, B.L., Carlson, D.R. & Richardson, W.R. (2006). CQE Primer. Terre
Haute, IN: Quality Council of Indiana.
8.1. Goldratt's theory of constraints deals with money in relationship to three
fundamental measurements for evaluating a system. Align the following measures
with their monetary equivalents.

I. Throughput
II. Inventory
III. Operational expenses

A. Money stuck inside
B. Incoming money
C. Money going out

a. 18.00
b. 70.75
c. 61.75
d. 15.43

8.3. Certain six sigma improvement efforts have resulted in the need to replace the
existing measurement system and others have not. What could be the reasons for
staying with an existing system?

I. It's precise enough
II. Better equipment does not exist
III. The current measurement is based on counted data
IV. The improvement has not been successful

8.4. The floor of a small shop looks dirty, disorganized, and messy. The manager
tells you that this is ok, they will perform their annual 5S day as soon as they finish
a current large order. You tell the manager this is not really 5S because:

a. 5S is systematic and formal
b. 5S requires a coordinator
c. There will probably be another big order after the current one
d. 5S is usually performed more often than annually

a. Collinearity
b. Confounded
c. Coplanarity
d. Covariates

8.7. How does poka-yoke respond to human error?

a. By eliminating human error
b. By punishing human error
c. By rewarding defect detection due to human error
d. By catching human error before it becomes a defect

8.8. A Latin square design is an experimental design which:
8.9. Which of the following is considered the sixth "S"?

a. Sanitation
b. Safety
c. Simplification
d. Setup reduction

8.10. Identify the lean enterprise technique in which the videotaping of a segment
of the operation is helpful:

a. SMED
b. TPM
c. Takt
d. FIFO

8.11. The theory of constraints concentrates mainly on:

8.12. Identify the improvement methodology which would be LEAST effective in
validating the outcome of an improvement activity when only quantitative data is
involved.

a. Multi-vari reanalysis
b. Post-improvement capability analysis
c. Pareto diagram reanalysis
d. Response surface plotting

8.13. If an experiment has an alias, one would say that the two factor effects are:

a. Find the root cause of the problem and carry out corrective action
b. Throw out the defective piece and continue with normal operations
c. Stop the process, assemble a team, and plan a kaizen event
d. If the problem persists, implement process inspection as a precaution

8.15. What are the major reason(s) the Japanese place such a high emphasis on
housekeeping items in the 5S approach?

I. They figure that workplace organization and manufacturing are inseparable
II. They recognize that workplace organization must precede other high levels of performance
III. The Japanese tend to be a nation of tidy people

a. I only
b. I and II only
c. II and III only
d. I, II, and III

8.16. Identify the most difficult limitation in achieving continuous flow.

8.17. Ideally, what sequence of experimentation should one use to optimize the
response of a process?

a. Use response surface methodologies at all stages
b. Use screening first and then response surface techniques
c. Use charting techniques first and then ANOVA
d. Use experimental designs first and then ANOVA

8.19. The smallest run number possible in order to examine the main effects of 22
factors at 2 levels is:

a. 23
b. 24
c. 44
d. 56
8.20. Which of the following are included in the principles of 5S?

I. Sorting out
II. Systematic arrangement
III. Self-discipline
IV. Systems management

a. I only
b. I, II, and III only
c. II, III, and IV only
d. I, II, III, and IV

8.21. When performing "one experiment with five repetitions," what are the six
experiments called?

a. Randomization
b. Replications
c. Planned grouping
d. Sequential

8.22. Which of the following techniques does NOT necessarily complement the
visual factory concept?

a. Kanban
b. Tool boards
c. 5S
d. Poka-yoke

8.23. Which of the following is NOT a defect elimination and detection technique?

a. Vendor certificate of compliance
b. Poka-yoke
c. Source inspection
d. Self checks by operators

8.24. A 2-level, 5-factor experiment is being conducted to optimize the reliability of
an electronic control module. A half replicate of the standard full factorial
experiment is proposed. The number of treatment combinations will be:

a. 10
b. 16
c. 25
d. 32

8.25. What does the Japanese concept of poka-yoke mean?

a. Root cause analysis
b. Standardizing corrective actions
c. Mistake proofing
d. Reengineering

8.26. Red tagging is used during which 5S stage?

a. Standardize
b. Sustain
c. Straighten
d. Sort

8.27. The main objection to designed experimentation in an industrial environment is:

a. Obtaining more information for less cost than can be obtained by traditional experimentation
b. Getting excessive scrap as a result of choosing factor levels that are too extreme
c. Verifying that one factor at a time is the most economical way to proceed
d. Obtaining data and then deciding what to do with it

8.28. When comparing breakthrough achievement with kaizen techniques, which of
the following statements is true?

a. Kaizen techniques provide more rapid improvement
b. Breakthrough achievement is generally less expensive
c. Breakthrough achievement would be used for low tech products
d. Kaizen techniques are more easily applied at the floor level

8.29. Which of the following is NOT an external setup operation?

a. Preparing parts
b. Changing dies
c. Cleaning spares
d. Finding parts

8.30. The main difference between traditional kaizen and the kaizen blitz approach is:

a. The number of people involved
b. The pace of the change effort
c. The amount of floor space saved
d. The commitment level of management

8.31. What is considered an ideal batch size in a continuous flow operation?

a. Large batches are considered ideal
b. It depends on the bin size pre-determination
c. One piece is considered ideal
d. Takt time batch size is considered ideal

8.32. An experiment yielding the following equation:

8.37. In CFM, the time per operation is based on:
8.43. A concept that includes lean enterprise but goes a step further by integrating
all aspects of product cycle time is called:

a. TOC
b. JIT
c. TPS
d. QRM

8.44. In every experiment there is experimental error. Which of the following
statements is true?

a. It is due to a lack of material uniformity and to inherent experimental variability
b. This error can be changed statistically by increasing the degrees of freedom
c. The error can be reduced only by improving the material
d. In a well-designed experiment there is no interaction effect

8.45. Which of the following 5S stages is primarily the responsibility of top
management?

a. Shine
b. Sustain
c. Sort
d. Straighten

8.49. When studying a combination of qualitative and quantitative factors in a 2⁵ full
factorial design, a member of a lean six sigma group recommends removing the
qualitative factors in order to study only numerical combinations of effects. What is
your reaction to this request?

a. You must remove the qualitative factors in order to calculate quantitative effects
b. You keep the qualitative factors even though no effects can be calculated
c. You remove the qualitative factors and make an attributes chart with them
d. You keep both factors and calculate the effects of both factors

8.50. One would say that the kanban method would be most closely associated with:

a. The elimination of non-value-added activities in the process
b. The development of a future state process stream map
c. Making problems visible in a process, thus clarifying targets for improvement
d. The control of material flow

8.51. Which of the following is NOT a TOC principle?

8.54. Which of the following operations would benefit most from a kanban system
application?

a. One-of-a-kind production operations
b. Service operations
c. Repetitive production plants
d. Hospital emergency centers

8.55. Ergonomics reduce injuries and lost production. Which of the following are
sound ergonomic principles in the workplace?

8.58. The reduction of lead times is one of the main objectives of kanban. Which of
the following is the other main objective?

a. To optimize floor space
b. To increase "heijunka"
c. To control the factory
d. To reduce WIP

8.59. There is strong interaction between two variables. This means that:
The answers to all questions are located at the end of Section XII.
W. EDWARDS DEMING
Control Concepts
Written Procedures
A procedure is a document that specifies the way to perform an activity. For most
operations, a procedure can be created in advance by the appropriate individual(s).
Consider the situation where a process exists, but has not been documented. The
procedure should be developed by those responsible for the process. Some
procedures may be developed by the quality department for use by other operating
departments. Generally, the operating departments provide input.
Work Instructions
Procedures describe the process at a general level, while work instructions provide
details and a step-by-step sequence of activities. Flow charts may be used with
work instructions to show relationships of process steps. Controlled copies of work
instructions are kept in the area where the activities are performed.
Some discretion is required in writing work instructions. The level of detail must be
appropriate for the background, experience, and skills of the personnel using the
instructions. Typically, the people that perform the activities described in the work
instruction should be involved in its creation. The wording and terminology used
should also match that used by the personnel performing the tasks.
Quality Controls
Production operations, which directly affect quality, are identified and planned to
ensure that they are carried out under controlled conditions. Controlled conditions
include the following:
It is necessary to first identify all of the key internal and external customer
requirements. One should remember to include all of the critical product and
process characteristics uncovered throughout the complete design process.
Examples of customer requirements are listed in the first column in Table 9.1.
The second step is to identify the manufacturing process flow and the
manufacturing support processes. Examples of these are listed in the second
column in Table 9.1.
The third step is to identify the quality tools that a company will use to control the
processes. Examples of these are listed in the third column in Table 9.1.
After the customer requirements, processes, and quality tools have been identified,
a control plan can be detailed. First, one takes a customer requirement, then
decides which process step and which quality tool should be used to satisfy the
requirement. Some quality requirements need to be addressed in two or more
process steps with different quality tools. Some examples are:
When the quality plan is developed, each line item must be implemented and
verified. The quality planning process generates a quality plan. The part of the
quality plan that focuses on production is often called a control plan.
Control Plans
A control plan is a document describing the critical characteristics (key input or
output variables) of the part or process. Through this system of monitoring and
control, customer requirements will be met and the product or process variation will
be reduced. However, the control plan should not be a replacement for detailed
operator instructions in the form of work instructions or standard operating
procedures. Each part or process must have a control plan. A group of common
parts using a common process can be covered by a single control plan. Control
plans are typically developed in three phases:
• Prototype
• Pre-launch
• Production
A prototype control plan is used in the early development stages when the part or
process is being defined or configured. This control plan will list the controls for the
necessary dimensional measurements, types of materials, and required performance
tests.
A pre-launch control plan is used after the prototype phase is completed and before
full production is approved. It lists the controls for the necessary dimensional
measurements, types of materials, and performance tests. This plan will have more
frequent inspections, more in-process and final check points, some statistical data
collection and analysis, and more audits. This stage will be discontinued once the
pre-launch part or process has been validated and approved for production.
A production control plan is used for the full production of a part. It contains all of
the line items for a full control plan: part or product characteristics, process
controls, tests, measurement system analysis, and reaction plans. A more detailed
list of input factors is provided later in this discussion.
(APQP, 2000)1, (MSA, 2002)12
A responsible person must be placed in charge of the control plan. This ensures
successful monitoring and updating. A green belt or black belt may or may not be
a suitable person for the role, as he/she may be replaced or transferred to a different
position. A better selection would be the process owner.
The current process owner can be listed on the control plan, but, in reality, it is a
functional role that is to be passed on to the next individual in that same
organizational position. If the control plan is not maintained, the benefits of the
project could slowly be lost. The frequent changing of process owners, combined
with large numbers of process projects, can easily result in neglected or lost control
plans.
The student is provided an illustrative blank control plan, a description of the line
items in the control plan, and a filled in example control plan. Customer
requirements may dictate the exact form of the control plan. (APQP, 2000)1. Often,
there is some flexibility in the construction of the forms.
[Blank control plan form: columns include part/process, sub-process step, key input variable (X), key output variable (Y), specification, measurement method, gage capability, sample size, sample frequency, person responsible for measurement, control method, and reaction plan]
4. Contact person: This could be the green belt in charge of the project, however,
the name and function of the process owner are more important.
5. Page: Provide page numbers if the control plan exceeds one page. The
examples shown in this description are brief. Control plans may run up to 20
pages.
6. Original date: Indicate the original date of issue of the control plan.
7. Revision date: Provide the latest revision date of the control plan.
8. Part/process: List the part number or the process flow being charted.
9. Sub-process step: Indicate the sub-process step being described (if applicable).
10. Key input variable (X): Note the key input variable, when appropriate. On any
line item, only the X or Y variable is filled out, not both. This is to clearly indicate
which item is being monitored and controlled.
11. Key output variable (Y): Note the key output variable, when appropriate.
15. Gage capability: Provide the current capability of the measurement system. The
AIAG MSA manual lists:
16. Sample size: Provide the sample size for each subgroup.
17. Sample frequency: List how often the inspection or monitoring of the part or
process is required.
19. Person responsible for measurement: Indicate who will make and record the
measurement.
20. Control method: Note how this X or Y variable will be controlled. Examples
include control charts, checklists, visual inspections, automated measurements,
etc.
21. Reaction plan: Describe what will happen if the variable goes out of control.
How should the responsible person respond to the situation?
[Filled-in example control plan illustrating the line items described above]
In the example above, note that only the key input column is controlled.
Control Charts *
Control charts are the most powerful tools to analyze variation in most processes -
either manufacturing or administrative. Control charts were originated by Walter
Shewhart in the mid 1920s. The publication of Economic Control of Quality of
Manufactured Product (Shewhart, 1931)20 made the concepts more widely known.
Control charts using variables data are line graphs that display a dynamic picture
of process behavior. Control charts require approximately 25 subgroups of size 4
or larger to calculate upper and lower control limits, but require only periodic small
subgroups or Xs to continue to monitor the process. Control charts for attributes
data require 25 or more subgroups to calculate the control limits.
A process which is in statistical control has most of the plot points randomly
distributed around the mean with fewer plot points as one approaches the control
limits. Points exceeding the controls are very rare. When a process is in control, it
is predictable.
Control charts may be used by lean six sigma improvement teams to control critical
products or processes after the appropriate variable or attribute levels have been
attained. These control charts can be maintained manually, electronically, computer
based or some combination of these techniques.
* Much of this material has been adapted from the CQE Primer (Wortman, 2006)27.
Excellent control chart references include: Grant (1988)7, Western Electric (1956)25
and Besterfield (1993)4. See the references at the end of this Section.
Types of Charts
There are many variations of possible control charts. The two primary types are
charts for variables data and charts for attributes data.
There are other more advanced variable charts like CuSum (cumulative sum) and
EWMA (exponentially weighted moving average) charts.
Attribute Charts
Attribute charts plot a general measurement of the total process (the number of
complaints per order, number of orders on time, absenteeism frequency, number of
errors per letter, etc.).
Types: p charts
np charts
c charts
u charts
Short run varieties of the above four charts
Charts for variables are generally more costly since each separate variable (thought
to be important) must have data gathered and analyzed. In some cases, the
relatively larger sample sizes associated with attribute charts can prove to be more
expensive.
Variable Charts
Often, variable charts are the most valuable and useful because the specific
measurement values are known. There is a large number of variable chart options.
Variable charts are reviewed first in the following text. Five of these will be
discussed and three of these will be plotted.
X-bar      Average of all the Xs. It is the value of the central line on the X-bar chart.
R          The range. The difference between the largest and smallest value in
           each sample.
R-bar      Average of all the Rs. It is the value of the central line on the R chart.
UCL/LCL    Upper and Lower Control Limits. The control boundaries for
           99.73% of the population. They are not specification limits.
[Figure: example X-bar and R charts for 30 subgroups, with UCL(X-bar) = 20.0, UCL(R) = 4.0, R-bar = 1.9, and LCL(R) = 0]
5. Calculate X-double bar (the average of all the X-bars). This is the center line of the X-bar chart.
6. Calculate R-bar (the average of all the Rs). This is the center line of the R chart.

n      A2      D3      D4      d2
2      1.88    0       3.27    1.13
3      1.02    0       2.57    1.69
4      0.73    0       2.28    2.06
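For illustration only (this sketch is not part of the original Primer material), the Python
fragment below computes the X-bar and R chart center lines and limits from a list of
subgroups, using the conventional formulas UCLx, LCLx = X-double bar ± A2·R-bar and
UCLr = D4·R-bar, LCLr = D3·R-bar. The n = 5 factors and the example subgroups are
assumed values.

    # Sketch: X-bar and R control limits from subgroup data (hypothetical values).
    FACTORS = {2: (1.88, 0, 3.27), 3: (1.02, 0, 2.57), 4: (0.73, 0, 2.28), 5: (0.58, 0, 2.11)}  # n: (A2, D3, D4)

    def xbar_r_limits(subgroups):
        n = len(subgroups[0])
        a2, d3, d4 = FACTORS[n]
        xbars = [sum(s) / n for s in subgroups]
        ranges = [max(s) - min(s) for s in subgroups]
        grand_avg = sum(xbars) / len(xbars)      # X-double bar, center line of the X-bar chart
        r_bar = sum(ranges) / len(ranges)        # R-bar, center line of the R chart
        return {"UCLx": grand_avg + a2 * r_bar, "LCLx": grand_avg - a2 * r_bar,
                "UCLr": d4 * r_bar, "LCLr": d3 * r_bar}

    # Example with three hypothetical subgroups of size 4:
    print(xbar_r_limits([[18.2, 19.1, 18.7, 18.9], [19.4, 18.5, 19.0, 18.8], [18.6, 19.2, 18.4, 19.0]]))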
X-bar - R Control Chart Example (Chart No. 1)
Product Name: Tablets; Process: Closure Department; Operator: Bill
Variable: Removal torque; Specification limits: LSL = 10 lbs, USL = 22 lbs; Units of measure: lbs

Twenty-five subgroups of five measurements each were recorded and plotted. The
completed chart form (not reproduced) shows approximately X-double bar = 15.4 lbs
with UCLx = 17.5 and LCLx = 13.3, and R-bar = 3.6 lbs with UCLr = 7.6 and LCLr = 0.
X-bar (X) and sigma (S) charts are often used for increased sensitivity to variation
(especially when larger sample sizes are used). These charts may be more difficult
to work with manually than the X - R charts due to the tedious calculation of the
sample standard deviation (S). Often, S comes from automated process equipment
so the charting process is much easier. The formula is:
S = sqrt( Σ(X - X-bar)² / (n - 1) )

Where: Σ       means the sum
       X       the individual measurements
       X-bar   the average of the sample
       n       the sample size
The X-bar chart is constructed in the same way as described earlier, except that sigma
(S-bar) is used for the control limit calculations via the following formulas:

UCLx = X-double bar + A3·S-bar        LCLx = X-double bar - A3·S-bar
The control limits for the sigma (S) chart are calculated using the following formulas
and table. S-bar is the average sample standard deviation and is the center line of the
sigma chart:

UCLs = B4·S-bar        LCLs = B3·S-bar

n      2      3      4      5      6      7      8      9      10     25
B4     3.27   2.57   2.27   2.09   1.97   1.88   1.82   1.76   1.72   1.44
B3     *      *      *      *      0.03   0.12   0.18   0.24   0.28   0.56
A3     2.66   1.95   1.63   1.43   1.29   1.18   1.10   1.03   0.98   0.61

*The lower control limit for a sigma chart when (n) is less than 6 is zero.
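A similar sketch (again illustrative, with hypothetical data) applies the A3, B3, and B4
factors above, assuming the usual sigma chart limits UCLs = B4·S-bar and LCLs = B3·S-bar;
the n = 5 factors are shown.

    import statistics

    # Sketch: X-bar and S control limits for subgroups of five (hypothetical data).
    A3, B3, B4 = 1.43, 0, 2.09

    def xbar_s_limits(subgroups):
        xbars = [statistics.mean(s) for s in subgroups]
        sds = [statistics.stdev(s) for s in subgroups]   # sample standard deviation, n - 1 in the denominator
        x_dbl_bar, s_bar = statistics.mean(xbars), statistics.mean(sds)
        return (x_dbl_bar + A3 * s_bar, x_dbl_bar - A3 * s_bar,   # UCLx, LCLx
                B4 * s_bar, B3 * s_bar)                           # UCLs, LCLs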
Another variety records the data and plots the median value and range on two
separate charts. Minimal calculations are needed for each subgroup. The control
limits for the median chart are calculated using the same formulas as the X - R
chart:
UCL(median) = grand median + Ã2·R-bar        LCL(median) = grand median - Ã2·R-bar

The Ã2 values are somewhat different than the A2 values for the X-bar - R chart since
the median is less efficient and, therefore, exhibits more variation.

n      2      3      4      5
Ã2     1.88   1.19   0.80   0.69
X-MR Charts
Control charts plotting individual readings and a moving range may be used for
short runs and for destructive testing. X-MR charts are also known as I - MR,
individual moving range charts. The control limits for individual charts are
calculated using the formulas and factors as illustrated in Table 9.10 below:
n      2      3      4      5
D4     3.27   2.57   2.28   2.11
D3     0      0      0      0
E2*    2.66   1.77   1.46   1.29

*E2 = A2·sqrt(n)

Table 9.10 X-MR Control Chart Factors
The control limits for the range chart are calculated exactly as for the X-R chart.
The X-MR chart (for individuals and moving ranges) is the only control chart which
may have specification limits shown.
• Individual charts are not as sensitive to changes in the process as the X-R
chart (or MX-MR, when n = 3).
X - MR Chart Example (Chart No. 7)
Product Name: Apple Strudel; Process: Line A; Operator: You
Variable: Stick weights; Specification limits: target 85, high 88, low 82; Units of measure: grams

Individual readings (X), 21 samples taken on 4/16/98:
85, 87, 86, 86, 77, 83, 84, 87, 90, 84, 89, 82, 84, 86, 88, 85, 90, 83, 84, 87, 87

(The completed individuals and moving range chart form is not reproduced.)
X-bar = 1794 / 21 = 85.4
MR-bar = 68 / 20 = 3.4

Note: k = 21 for X, k = 20 for MR; n = 2 for the MR chart above.

UCLx, LCLx = X-bar ± E2·MR-bar = 85.4 ± (2.66)(3.4)
UCLx = 94.4        LCLx = 76.4
UCLmr = D4·MR-bar = (3.27)(3.4) = 11.1
LCLmr = 0
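The arithmetic above can be checked with a few lines of Python (added for illustration
only); E2 = 2.66 and D4 = 3.27 are the n = 2 factors from Table 9.10.

    # Sketch: individuals (X) and moving range (MR) chart limits for the stick weight data above.
    readings = [85, 87, 86, 86, 77, 83, 84, 87, 90, 84, 89, 82, 84, 86, 88, 85, 90, 83, 84, 87, 87]
    E2, D4 = 2.66, 3.27                                             # factors for n = 2 moving ranges

    x_bar = sum(readings) / len(readings)                           # 1794 / 21, about 85.4
    moving_ranges = [abs(b - a) for a, b in zip(readings, readings[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)                # 68 / 20 = 3.4

    print(x_bar + E2 * mr_bar, x_bar - E2 * mr_bar)                 # about 94.4 and 76.4, as in the example
    print(D4 * mr_bar, 0)                                           # UCLmr about 11.1, LCLmr = 0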
MX-MR Charts
MX-MR (moving average-moving range) charts are a variation of X-R charts where
data is less readily available. There are several construction techniques. An
example for n = 3 for the data on the prior page is shown below. Control limits are
calculated using the X-R formulas and factors.
MX-MR Control Chart Example (Chart No. 2)
Product Name: Liquid; Process: Filling Operation; Operator: Bill
Variable: Weight; Specification limits: LSL = 3.0, USL = 3.1; Units of measure: grams

Twenty-five subgroups of three consecutive fill weights (roughly 2.8 to 3.1 grams) were
recorded, and moving averages and moving ranges of n = 3 were plotted. (The completed
chart form is not reproduced.)
The student would naturally be unfamiliar with the data in the above example. It is
actual data for a filling operation. According to chart interpretation rules, did a
change occur between plot points 14 and 24? If so, referring to the specification
limits, was it a good change?
Attribute Charts
An attribute chart plots characteristics such as short or tall, fat or thin, blue or
brown, pass or fail, okay or not okay, good or bad, etc. Attributes are discrete,
counted data. Unlike variables charts, only one chart is plotted for attributes. There
are four types of attribute charts, as summarized below:
Chart     Plots                                      Subgroup size
p         Fraction or percent defective (100p)*      Varies
np        Number of defectives                       Fixed
c         Number of defects                          Fixed
u         Defects per unit                           Varies

Table 9.13 Types of Attributes Charts
• If the actual p chart subgroup size varies by more than ± 20% from the average
subgroup size, the data point must either be discarded or the control limits
calculated for the individual point.
• The most sensitive attribute chart is the p chart. The most sensitive and
expensive chart is the X - R chart.
• The defects and defectives plotted in attribute charts are often categorized in
Pareto fashion to determine the vital few. To actually reduce the defect or
defective level, a fundamental change in the system is often necessary.
Defectives (Binomial Distribution)

p Chart (fraction or percent defective; sample size varies):
    p-bar = Σnp / Σn        n-bar = Σn / k        k = number of samples
    UCLp = p-bar + 3·sqrt( p-bar(1 - p-bar) / n )
    LCLp = p-bar - 3·sqrt( p-bar(1 - p-bar) / n )

np Chart (number of defectives; sample size fixed):
    UCLnp = n·p-bar + 3·sqrt( n·p-bar(1 - p-bar) )
    LCLnp = n·p-bar - 3·sqrt( n·p-bar(1 - p-bar) )
Defects (Poisson Distribution)

u Chart (average number of defects per unit; sample size varies):
    u = c / n        u-bar = Σc / Σn        n-bar = Σn / k        k = number of samples
    UCLu = u-bar + 3·sqrt( u-bar / n )
    LCLu = u-bar - 3·sqrt( u-bar / n )

c Chart (number of defects; sample size fixed):
    c-bar = Σc / k
    UCLc = c-bar + 3·sqrt( c-bar )
    LCLc = c-bar - 3·sqrt( c-bar )
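For illustration (not part of the original text), the four sets of limits can be written as
small Python functions. When the subgroup size varies, the average subgroup size n-bar is
assumed to be substituted for n, and negative lower limits are floored at zero.

    from math import sqrt

    # Sketch: three sigma limits for the four attribute charts.
    def p_limits(p_bar, n):                      # fraction defective; use n-bar if the size varies
        w = 3 * sqrt(p_bar * (1 - p_bar) / n)
        return p_bar + w, max(p_bar - w, 0)

    def np_limits(np_bar, p_bar):                # number defective, fixed subgroup size
        w = 3 * sqrt(np_bar * (1 - p_bar))
        return np_bar + w, max(np_bar - w, 0)

    def c_limits(c_bar):                         # number of defects, fixed subgroup size
        w = 3 * sqrt(c_bar)
        return c_bar + w, max(c_bar - w, 0)

    def u_limits(u_bar, n):                      # defects per unit; use n-bar if the size varies
        w = 3 * sqrt(u_bar / n)
        return u_bar + w, max(u_bar - w, 0)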
Lot    n       np     p
1      1250    8      0.64%
2 1250 0 0.00%
3 1350 12 0.89%
4 1200 3 0.25 %
5 1150 5 0.43 %
6 1100 0 0.00%
7 1100 2 0.18%
8 1350 2 0.15%
9 1250 1 0.08%
10 600 3 0.50%
11 1150 0 0.00 %
12 1100 5 0.45%
13 1050 10 0.95%
14 1050 0 0.00%
15 1100 0 0.00%
16 1000 0 0.00%
17 1200 0 0.00 %
18 1050 0 0.00%
19 1150 1 0.09%
20 1050 0 0.00%
k = number of lots = 20        Σn = 22,500        Σnp = 52

n-bar = Σn / k = 22,500 / 20 = 1,125

p-bar = total defective / total inspected = 52 / 22,500 = 0.23%

UCLp, LCLp = p-bar ± 3·sqrt( p-bar(100 - p-bar) / n-bar )
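A short sketch (illustrative only) reproduces the p-bar and control limit values for this
data; p is carried as a fraction and converted to percent for printing.

    from math import sqrt

    # Sketch: p chart limits for the tablet data (52 defectives in 22,500 tablets over 20 lots).
    total_inspected, total_defective, k = 22500, 52, 20
    p_bar = total_defective / total_inspected          # 0.0023, i.e. 0.23%
    n_bar = total_inspected / k                        # 1,125
    ucl = p_bar + 3 * sqrt(p_bar * (1 - p_bar) / n_bar)
    lcl = max(p_bar - 3 * sqrt(p_bar * (1 - p_bar) / n_bar), 0)
    print(round(100 * p_bar, 2), round(100 * ucl, 2), round(100 * lcl, 2))   # 0.23, 0.66, 0.0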
p Chart Example

Figure 9.15 p Chart Example. The completed attributes control chart form (p chart)
plots the percent defective uncoated tablets from the tablet department (operators:
various; inspector: Bill) for the 20 lots tabulated above. With p-bar = 0.23% and
n-bar = 1,125:

UCLp = p-bar + 3·sqrt( p-bar(100 - p-bar) / n-bar ) = 0.23 + 3·sqrt( (0.23)(100 - 0.23) / 1125 )
     = 0.23 + 3(0.143) = 0.66%
LCLp = 0

(The completed chart form is not reproduced.)
Note that a system change of major consequence occurred at plot point 14. If this
chart were to be continued, the sample size would have to be increased
substantially.
Sample    n      c
1         100    5
2 100 8
3 100 7
4 100 5
5 100 7
6 100 3
7 100 3
8 100 4
9 100 2
10 100 2
11 100 3
12 100 3
13 100 2
14 100 3
15 100 1
16 100 9
17 100 6
18 100 7
19 100 7
20 100 4
21 100 7
22 100 1
23 100 6
24 100 5
25        100    4

k = 25        n = 100        Σc = 114
c Chart Example

The completed attributes control chart form (c chart) plots the number of defects per
SPC checklist (part: Encyclopedia) from the binding department, inspected by you, for
the 25 fixed samples of 100 units above. The center line is c-bar = Σc / k = 114 / 25 = 4.6
and LCLc = 0. (The completed chart form is not reproduced.)
Note that a shift change occurred between plot points 15 and 16.
Is this significant?
Sample    n      np
1         100    9
2 100 6
3 100 7
4 100 3
5 100 4
6 100 6
7 100 3
8 100 3
9 100 3
10 100 4
11 100 4
12 100 6
13 100 4
14 100 5
15 100 2
16 100 3
17 100 2
18 100 8
19 100 4
20 100 3
21 100 2
22 100 3
23 100 4
24 100 8
25 100 6
k = 25        n = 100        Σnp = 112
np Chart Example

The completed attributes control chart form (np chart) plots the number of defective
units ("any defective") per fixed sample of 100 encyclopedia SPC checklists from the
binding department, inspected by you, 9/25 - 10/1, for the 25 samples above. With
Σnp = 112, np-bar = 112 / 25 = 4.5 and p-bar = 0.045. (The completed chart form is
not reproduced.)
UCL"p = np + 3~ LCL"p = np - 3~
UCLnp = 4.5 + 3..j(4.5)(0.955) LCL np =4.5 - 3..j(4.5)(0.955)
UCLnp = 10.7 LCLnp = -1.72 = 0
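These values can be checked with a short sketch (illustrative only):

    from math import sqrt

    # Sketch: np chart limits for the encyclopedia data (112 defectives in 25 samples of 100).
    k, n, total_defective = 25, 100, 112
    np_bar = total_defective / k          # 4.48, rounded to 4.5 in the example
    p_bar = np_bar / n                    # about 0.045
    ucl = np_bar + 3 * sqrt(np_bar * (1 - p_bar))
    lcl = max(np_bar - 3 * sqrt(np_bar * (1 - p_bar)), 0)
    print(round(ucl, 1), lcl)             # approximately 10.7 and 0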
±1 sigma = 68.27%        ±3 sigma = 99.73%
Control limits are the boundaries set by the process which alert us to process
stability and variability. Remember, our control limits are 3 standard deviations
above and below the grand average. If the process is in control, 99.73% of the
averages will fall inside these limits. The same is true for the range control limits.
Because there are two components to every control chart -- the average and the
range -- there are four possible conditions that could occur in the process.
1. Average out-of-control, range in control

2. and 3. The remaining two combinations: average in control with the range
out-of-control, and both the average and the range out-of-control

4. Process in control: average stable, variation stable

(The illustrative X-bar and R chart sketches for these four conditions are not reproduced.)
The band between the control limits is commonly divided into zones on each side of the
center line: zone C nearest the center line, zone B beyond it, and zone A nearest the
control limit. Common out-of-control signals are:

(Rule 1)  A point beyond the control limit
(Rule 2)  4 out of 5 points in zone B
(Rule 3)  2 out of 3 points in zone A
(Rule 4)  8 or more consecutive points on one side of the center line
(Rule 5)  A trend of 6 or more consecutive points increasing or decreasing
(Rule 6)  Stratification: 15 or more points in zone C
(Rule 7)  Mixture or systematic variation

Comment: Some authorities say 7 or more consecutive points for both Rules 4 and 5.
A chart exhibiting none of these signals is an example of a process which is in control.
Notice that it looks good, but not too good.
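For illustration (not from the original text), a few of these rules can be automated for a
list of plotted averages. The sketch below checks Rules 1, 4, and 5; points equal to the
center line are treated as falling below it, and ties do not extend a trend.

    # Sketch: simple out-of-control tests for a series of plotted points.
    def rule_1(points, ucl, lcl):
        """Rule 1: any point beyond a control limit."""
        return any(x > ucl or x < lcl for x in points)

    def rule_4(points, center, run=8):
        """Rule 4: 'run' or more consecutive points on one side of the center line."""
        longest = count = 0
        side = None
        for x in points:
            s = x > center
            count = count + 1 if s == side else 1
            side = s
            longest = max(longest, count)
        return longest >= run

    def rule_5(points, run=6):
        """Rule 5: 'run' or more consecutive points steadily increasing or decreasing."""
        up = down = 1
        for a, b in zip(points, points[1:]):
            up = up + 1 if b > a else 1
            down = down + 1 if b < a else 1
            if up >= run or down >= run:
                return True
        return False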
Common chart patterns such as trends, recurring cycles, and lack of variability are each
analyzed in terms of likely X-bar chart causes, R chart causes, and the corresponding
corrective action. (The detailed cause and corrective action tables are not reproduced.)
The idea behind pre-control is to divide the total tolerance into zones. The two
boundaries within the tolerance are called pre-control (P-C) lines. The location of
these lines is halfway between the center of the specification and the specification
limits. It can be shown that 86% of the parts will be inside the P-C lines with 7% in
each of the outer sections, if the process is normally distributed and Cpk = 1. Usually
the process will occupy much less of the tolerance range, so this extreme case will
not apply.
(Pre-control zones within the specification: 7% of parts fall outside each P-C line, with
the remaining 86% between the P-C lines, centered on the target. Diagram not reproduced.)
The chance that two parts in a row will fall outside either P-C line is 1/7 times 1/7, or
1/49. This means that only once in every 49 pieces can one expect to get two pieces
in a row outside the P-C lines due to chance. There is a much greater chance (48/49)
that the process has changed. Therefore, it is advisable to reset the process to the
center. It is equally unlikely that one piece will be outside one P-C line and the next
outside the other P-C line. This is a definite indication that a special factor has
widened the variation and action must be taken to find that special cause before
continuing.
• Setup: The job is OK to run if five pieces in a row are inside the target
• If the first piece is within target, run (don't measure the second piece)
• If the first piece is not within target, check the second piece
• If both pieces are out of target, adjust the process, go back to setup
The ideal frequency of sampling is 25 checks until a reset is required. Sampling can
be relaxed if the process does not need adjustment in greater than 25 checks.
Sampling must be increased if the opposite is true. To make pre-control even easier
to use, gauges for the target area may be painted green. Yellow is used for the outer
zones and red for out-of-specification.
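A small sketch (illustrative only) of the zone logic described above: the P-C lines sit
halfway between the center of the specification and each specification limit, and a
measured piece falls in the green, yellow, or red zone. The 82 to 88 gram specification
is borrowed from the earlier stick weight example.

    # Sketch: classify a measurement into pre-control zones for a symmetric tolerance.
    def precontrol_zone(x, lsl, usl):
        nominal = (lsl + usl) / 2
        lower_pc = (lsl + nominal) / 2        # P-C lines halfway between nominal and spec limits
        upper_pc = (nominal + usl) / 2
        if x < lsl or x > usl:
            return "red"                      # outside specification
        if lower_pc <= x <= upper_pc:
            return "green"                    # inside the P-C lines (target area)
        return "yellow"                       # between a P-C line and a specification limit

    # Example: spec 82 to 88 grams puts the P-C lines at 83.5 and 86.5.
    print(precontrol_zone(84.0, 82, 88), precontrol_zone(87.0, 82, 88), precontrol_zone(89.0, 82, 88))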
2. Setup and adjustment: These are losses from setup changes. One should
reduce the setup times and have better adjustment periods.
4. Reduced speed: This is the loss resulting from the differences between
designed and actual operating speeds.
6. Reduced yield: Product losses often result from machine shutdowns and
startups.

Elimination of the six big losses will lead to dramatically improved plant conditions.
The lean manufacturing system could not exist without TPM.
(Nakajima, 1988, 1989)14,15
Lohr & Bromkamp GmbH (Lobro) in Offebach, Germany chose to place TPM at the
center of its quality initiative. The TPM effort began with an overall cleaning of the
equipment, involving all 1,800 employees. The program had 6 elements:
1. Autonomous maintenance
2. Eliminating the six big losses
3. 100% production quality
4. A planning system for new machines
5. Training for all operators
6. Increased office efficiency (Imai, 1997)8
As a result of the TPM efforts, and other quality improvements, Lobro reported the
following results for 1990 to 1995:
TPM Metrics
Overall equipment effectiveness is the prime measure used to evaluate TPM. There
are several formula variations. Plants can adjust the factors according to their
needs.
Loading time is the available time per shift or per unit minus planned downtime.
Planned downtime includes scheduled maintenance and meetings. Operation time
is loading time minus unscheduled downtime.
Example 9.1: If there are 480 minutes per shift (available time), 15 minutes of setup
time, 10 minutes of planned downtime, 30 minutes of unscheduled equipment
failures, what is the loading time and the availability?
Solution:
Loading Time = Available Time per Shift - Planned Downtime
Performance efficiency is defined as the operating speed rate multiplied by the net
operating rate. The operating speed rate is the ratio of the theoretical cycle time to
its actual operating cycle time.
Example 9.2. If the theoretical cycle time for an operation is 1 minute per unit, and
the actual cycle time is 1.5 minutes per unit, what is the operating speed rate?
Solution:
The net operating rate measures the stability of the equipment, the losses from
minor stoppages, small problems, and adjustment losses.
Example 9.3. If the processed amount is 185 units, the actual cycle time is 1.5
minutes per unit and the operation time is 425 minutes, what is the net operating
rate?
Example 9.4. Using the data from examples 9.2 and 9.3, determine the performance
efficiency.
Performance Efficiency = 0.667 x 0.653 = 43.6%
Example 9.5. Using the information from Examples 9.1 and 9.4, if the percentage of
good products is 95%, what is the overall equipment effectiveness?
The overall equipment effectiveness (OEE) for this example is a poor 37.4%. TPM
prize winning companies have OEE's above 85%. The ideal conditions are:
1. Reduced costs
2. Reduced inventory
3. Accident reduction/elimination
4. Pollution control
5. Work environment
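Referring back to Examples 9.1 through 9.5, the calculations can be combined in a short
sketch (illustrative only). Setup time is treated here as a downtime loss rather than
planned downtime, which reproduces the 37.4% OEE quoted above.

    # Sketch: overall equipment effectiveness (OEE) using the data from Examples 9.1 - 9.5.
    available_time = 480          # minutes per shift
    planned_downtime = 10         # scheduled maintenance and meetings
    setup_time = 15
    unscheduled_failures = 30
    theoretical_cycle, actual_cycle = 1.0, 1.5     # minutes per unit
    processed_amount = 185                         # units
    quality_rate = 0.95                            # fraction of good product

    loading_time = available_time - planned_downtime                    # 470 minutes
    operation_time = loading_time - setup_time - unscheduled_failures   # 425 minutes
    availability = operation_time / loading_time                        # about 0.90
    speed_rate = theoretical_cycle / actual_cycle                        # 0.667
    net_operating_rate = processed_amount * actual_cycle / operation_time   # about 0.653
    performance_efficiency = speed_rate * net_operating_rate             # about 0.436
    oee = availability * performance_efficiency * quality_rate
    print(round(100 * oee, 1))                                            # about 37.4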
The proper integration of the philosophy of TPM within the company will bring about
improved worker and equipment utilization. These changes are aided by improving
employee attitudes, increasing their skills, and providing a supporting work
environment.
Visual Systems
Visual control systems can be described as the use of production boards, schedule
boards, tool boards, jidohka devices, and kanban cards on the factory floor. The
intent of these techniques is to provide management and workers with a visible
display of what is happening at any moment. In an ideal situation, anyone can
glance across the room and be able to assess or "feel" the condition of the shop
floor, including whether it is operating properly. Visual controls assist the operators
by providing them with information that they would not normally receive. The
operators typically do not have enough information regarding order status, product
quality, lead times, customer demands and costs. (Imai, 1997)8, (Liker, 1997)11
Imai (1997)8 provides three reasons for using visual management tools:
Production boards and schedule boards are examples of visual control. These
generally include the posting of daily production, maintenance items, or quality
problems for everyone to see and understand. Before the start of a shift, the
department supervisor uses the boards for a short discussion on planned activities
for the day and any specific problems. The boards are also used by managers and
guests, walking through the plant, to gauge the progress of the various departments.
Among the items that can be displayed on the boards are:
• Safety boards
• Daily meeting boards
• Improvement boards
• Status boards
• Layout maps
• Goal and mission statements
• Production/schedule boards
• Defect boards or bins
• Maintenance boards
• Production status boards
• Standard size containers
• Poka-yoke techniques
• Andon or trouble lights
• Labels and color coding
• Tool boards
The kanban system provides material control for the factory floor. These cards are
a form of visual control for the flow of production and inventory.
The tool board is a display designed for the tools needed at a work station. This
method is a part of 5S activities. The board is constructed to hold or mark the place
for the tools and includes only the tools required for that work station. When the
tool is not being used, if it is not in its spot, then it is missing or lost and another
tool must be obtained for effective operation of the process. (Liker, 1997)11
The visual factory places an emphasis on setting and displaying targets for
improvement. The concept is that various operations have a target or goal to
achieve. The standard time is initially set higher than the target. However, as the
operation is performed, the operator tries to beat the old time, until the goal is met.
Imai (1997)8 illustrates this concept using an example of a plant goal to reduce the
setup time of machines. A display board was placed next to a machine and the
current setup time was displayed on a graph. Each time a setup was performed, the
setup time was marked. Over time, the operator reduced the setup time by avoiding
past errors and improving methods. Eventually, the targeted setup time was
reached.
In summary, visual systems enable management and employees to see the status
of the factory floor at a glance. The current conditions and progress are evident and
any problems can be seen by everyone.
Standard Work
The operation of a plant depends on the use of policies, procedures, and work
instructions. These could be referred to as standards. Maintaining and improving
standards leads to improvement of the processes and plant effectiveness.
If there is a need to increase production, management must find a way to do so. One
of the ways is to have operators change the way they do their jobs. The use of
kaizen activities, kaizen blitz, etc., can be used to improve the process. Once the
changes have been made, efforts should commence to standardize the new
procedures. (Imai, 1997)8
Imai (1997)8 provides a discussion of the word "standards." It seems that the word
standards has a very bad connotation in the western world versus that in Japan. In
Japan, standards are used to control the process, not the workers. In the west,
standards imply the use of unfair conditions on workers, such as working harder
under extreme conditions, etc. In Japan, standards are used to describe a process
that is the safest and easiest for the workers, and is the most cost-effective and
productive way for the company. It is a balance between the two parties.
The following are some examples of standards that go beyond procedures and work
instructions:
• Takt time
• Ergonomics
• Parts flow
• Maintenance procedures
• Routines
Shingo (1989)21 wrote that Toyota's Ohno mentioned that for standard operations,
much of the information will be contained in standard work sheets. A standard work
sheet combines the 3 elements of materials, workers, and machines in a work
environment. Toyota refers to it as a work combination.
Standard work sheets, that operators will have confidence in, should consider the
following areas:
In order to have standard work sheets, waste elimination, problem solving, and
quality methods must be accomplished. The elements that comprise the standard
work operations are:
• Cycle time: The time allowed to make a piece of production. This will be
based on the takt time. The actual time will be compared to the required takt
time to see if improvements are needed.

• Work sequence: The order of operations that the worker must use to produce
a part: grasp, move, hold, remove, delay, etc. The same order of work must
be done every time. A standard time is provided for each element. A standard
work combination sheet (providing element times), standard work layout
sheet (workplace layout), or planning capacity table (machine capacity data),
or all 3, are provided to the operator for use.
Shingo (1989)21 and Sharma (2001)19 indicate that standard charts will also include:
• Standard operating sheets: Details from the task instruction manual with
information on equipment layouts, cycle times, order of operations, standard
on hand stock, net work times, safety checks, and quality checks.
Training Requirements
Obviously training is a preventive technique. Effective employee training is one of
the first considerations that an organization must undertake. Training is presented
at this point because it is also an effective control technique.
The criteria for the Malcolm Baldrige award (2006)3 include a 25-point category which
emphasizes an organization's education, training, and career development methods
to build employee knowledge, skills, and capabilities. This requirement focuses on:
1. Determining needs
2. Setting objectives
3. Determining subject content
4. Electing participants
5. Determining the best schedule
6. Selecting appropriate facilities
7. Selecting appropriate instructors
8. Preparing audiovisual aids
9. Coordinating the program
10. Evaluating the program
Both Robinson (2000)16 and Wile (1996)28 present the case that the six external
elements are controlled by management, whereas, the performer controls the two
internal elements.
Conducting a training needs analysis requires diligent work and effort on behalf of
the analyst. Many or all of the details will lead to the formation of the training
solution. Smith (1987)23 provides some guidelines for conducting a training needs
analysis in three steps: surveillance, investigation, and analysis.
Surveillance
Investigation
In this stage, a possible performance gap may be suspected. The next step would
be to gather more details in that specific area. Among the data gathering techniques
are:
Analysis
At the analysis stage, the gathered information is sorted and examined for validity.
It is then summarized with conclusions drawn and recommendations provided.
Smith (1987)23 divides analysis preparation into the groupings of:
New and rapidly growing companies are often very supportive of training programs.
As the company expands, new skills are needed, and the company fills this need by
hiring new people and by training current personnel.
Training programs need upper, middle and lower management support, if they are
to be successful. The desire for training by a subordinate cannot possibly overcome
resistance by higher management for that training. It is management's authority to
allocate the time and resources to permit training to take place.
The training materials should be presented at the level appropriate to the target
training group. Teaching statistical process control (SPC), using conventional
statistical formulas, to workers with an elementary grade education will be of little
value. However, showing the same workers how to apply SPC to the products they
manufacture will give these people useful information.
The time and location that training takes place also has an effect on the quality of
training. Training classes at 8:00 AM when people are fresh and alert may be fine for
first shift workers, but it is probably the worst time for third shift workers.
• Good lighting - Lighting should be sufficient for reading, without shadows and
without flickering.
• Line of sight - Each person should be able to see the black board, overheads,
and other visual displays.
• Furniture - Seating must be comfortable, with back support, and with a writing
surface large enough for a book and paper.
Learning Principles
Goetsch (2000)6 summarizes the following principles of learning:
3. Active learning: People learn by doing. This requires an active role by the
student in the learning process. In the area of statistical training, numerous
articles point to the benefits of active learning.
4. Feedback: There is a need for students (learners) and trainers to know how
the effort is progressing. The learner must know if progress is being made,
and the trainer must know if the message is being transmitted.
9. Multiple-sense learning: The instructor should plan to use more than one
sense to help the student learn. Students on the average take in information
as follows:
Another source, Goetsch (2000)6, provides the following percentages regarding what
learners retain from instruction they receive:
10. Transfer of learning: Transferring learning to the workplace will depend on the
closeness of training to the workplace and to the workplace design.
References
1. Advanced Product Quality Planning (APQP) and Control Plan Reference
Manual, 3rd ed. (2000). Southfield, MI: AIAG.
4. Besterfield, D.H. (1993). Quality Control, 3rd ed. Englewood Cliffs: Prentice
Hall.
5. Breyfogle, F., III. (2003). Implementing Six Sigma, 2nd ed. New York: John
Wiley & Sons.
6. Goetsch, D.L., & Davis, S.B. (2000). Quality Management, Introduction to Total
Quality Management for Production, Processing, and Services, 3rd ed.
Upper Saddle River, NJ: Prentice Hall.
7. Grant, E.L. & Leavenworth, R. (1988). Statistical Quality Control, 6th ed. New
York: McGraw-Hill.
10. Kirkpatrick, D. (1994). Evaluating Training Programs: The Four Levels. San
Francisco: Berrett-Koehler.
11. Liker, J. (editor). (1997). Becoming Lean: Inside Stories of U.S. Manufacturers.
Portland, OR: Productivity Press.
12. Measurement Systems Analysis: MSA, 3rd ed. (2002). Southfield, MI: AIAG.
13. Miller, W.B., & Schenk, V.L. (1993). All I Need to Know About Manufacturing
I Learned in Joe's Garage. Walnut Creek, CA: Bayrock Press.
References (Continued)
15. Nakajima, S. (editor). (1989). TPM Development Program: Implementing Total
Productive Maintenance. Cambridge, MA: Productivity Press.
18. Schermerhorn, J. (1993). Management for Productivity, 4th ed. New York:
John Wiley & Sons.
19. Sharma, A., & Moody, P. (2001). The Perfect Engine: How to Win in the New
Demand Economy by Building to Order with Fewer Resources. New York:
The Free Press.
22. Six Sigma Academy. (2002). The Black Belt Memory Jogger: A Pocket Guide
for Six Sigma Success. Salem, NH: Goal/QPC.
23. Smith, B., & Delahaye, B. (1987). How to be an Effective Trainer, 2nd ed. New
York: Wiley Professional Development Programs.
24. Suzaki, K. (1993). The New Shop Floor Management: Empowering People for
Continuous Improvement. New York: The Free Press.
27. Wortman, B.L.; Carlson, D.R.; & Richardson, W.R. (2006). CQE Primer. Terre
Haute, IN: Quality Council of Indiana.
9.1. When investigating TPM, which of the following would NOT be considered one of
the six big negative contributors to equipment effectiveness?

a. Setup and adjustment
b. Reduced speed
c. Reduced yield
d. Work cell arrangement

9.2. Identify the LEAST likely result of adopting standardized work procedures:

a. They tend to minimize variability
b. They are a basis for training
c. They ensure marketplace success
d. They preserve know-how and expertise

9.3. There are several types of control plans. Identify the appropriate control plan to
be used for part mock-ups.

a. Prototype
b. Pre-launch
c. Production
d. Generic

9.4. A p chart:

a. Can be used for only one type of defect per chart
b. Plots the number of defects in a sample
c. Plots either the fraction or percent defective in order of time
d. Plots variations in dimensions

9.5. What is a jidohka?

a. A totally automated process to reduce defectives
b. A device that identifies good parts
c. A technique which separates defectives from good parts
d. A device that stops the machine whenever a defective is produced

9.6. A particular department is using an X-bar chart with specification limits instead of
control limits for monitoring a key characteristic. The department supervisor states
that the specifications have more real meaning than the control limits. The best
immediate reaction should be to:

a. Congratulate the supervisor for an outstanding application of control chart theory
b. Stop the process and check if all readings are within control
c. Stop the process immediately and replace the specification limits with control limits
d. Do nothing; the supervisor should be allowed to use the chosen criteria

9.7. A lean six sigma project has progressed to the point that a control plan is required.
Control plan activities can be considered closed after which of the following?

a. A process owner is named for the control plan
b. A responsible engineer is designated
c. The cross functional team signs off on the control plan
d. The control plan is a "living document" and is rarely closed

9.8. A process is in control with p-bar = 0.10 and n = 50. The three sigma control
limits for the np control chart are which of the following?

a. 1.37, 12.91
b. 0, 11.36
c. -2.01, 12.01
d. 3.05, 8.05
9.9. After the successful completion and implementation of the first four phases of
DMAIC, the control phase should be completed by which of the following?

a. The black belt alone; the project has been successfully completed
b. The process owner alone; it is his/her responsibility to take the project from here
c. The process owner and black belt together
d. The complete team is the most desirable option

9.10. Standard work sheets are required for standard operations. Which element is NOT
included on the sheets?

a. Cycle time based on takt time
b. Work sequence - the order of operations
c. Standard inventory on hand
d. The annual or semi-annual review date

9.11. Many training instructors have developed approaches to emphasize multiple sense
learning. Which of the following options would be generally recognized to best foster
student retention?

9.13. If one were to summarize the results of a training needs analysis into a few words,
what would be the best selection from the choices presented below?

a. Providing incentives and meaningful work
b. Giving cognitive support
c. Identifying performance gaps
d. Developing skills and knowledge

9.14. An X-bar and R chart was prepared for an operation using twenty samples with
five pieces in each sample; X-double bar was found to be 33.6 and R-bar was 6.20.
During production, a sample of five was taken and the pieces measured 36, 43, 37, 25,
and 38. At the time this sample was taken:

a. Both the average and range were within control limits
b. Neither the average nor range were within control limits
c. Only the average was outside control limits
d. Only the range was outside control limits

9.15. The visual factory would be supported to the greatest extent by:
9.17. Process A consists of several machines that combine their output into a common
stream. Once combined, it is impossible to trace single pieces to specific machines.
Process B receives the mixed pieces. A corrective action requires finding the root cause
of a defect found in some of these pieces. A team assigned to this problem is thinking
of using SQC to detect the source of the problem. Where should SQC be implemented?

a. At the beginning of Process B
b. At each machine in Process A
c. At the end of Process B
d. At the beginning of Process A

9.18. Pre-control starts a process specifically centered between:

a. Process limits
b. Specification limits
c. Normal distribution limits
d. Three sigma control limits

9.19. A p chart has exhibited statistical control over a period of time. However, the
average fraction defective is too high to be satisfactory. Internal improvement can be
obtained by:

I. A change in the basic design of the product
II. Instituting 100% inspection
III. A change in the production process through substitution of new tooling or machinery

a. I only
b. I and III only
c. II and III only
d. I, II, and III

9.20. Which of the following would be a device associated with the visual factory?

a. Standard work
b. Andon board
c. Queue time
d. Work cell

9.21. Advantages of control charts do NOT include:

a. They can detect trends of statistical significance
b. They provide straightforward, easily interpreted information
c. They provide an ongoing measure of process capability
d. They can detect special causes of variation

9.22. After a six sigma team reached the improve phase of the DMAIC cycle, it was clear
that a full redesign of the part was the right path to take. From then on, the DMAIC
project became a DFSS project. Which control phase should be used for the newly
redesigned part?

a. FMEA
b. Pre-launch
c. Production
d. Prototype

9.23. Control limits are set at the three sigma level because:

a. This level makes it difficult for the output to get out of control
b. This level establishes tight limits for the production process
c. This level reduces the probability of looking for trouble in the process when none exists
d. This level assures a very small type II error

9.24. Training needs for an individual are based on all of the following, EXCEPT:

a. The employee's performance
b. The company's objectives
c. The gap between current and desired skills
d. Availability of qualified instructors
9.25. Standard work as visualized by the Japanese means:

9.27. Ultimate responsibility for training is vested with:

a. The department supervisor
b. Co-workers
c. The human resource department
d. The employee

9.28. The factor D4 in X-bar and R control charts is used to:

a. Determine the upper control limit of a range chart
b. Establish the control limits for the average chart
c. Correct the bias in estimating the population
d. Determine the lower control limit of a range chart

9.29. Which of the following tools would NOT be part of the control plan development?

9.32. What is the importance of the reaction plan in a control plan?

a. It describes what will happen if a key variable goes out of control
b. It indicates that a new team must be formed to react to a problem
c. It lists how often the process should be monitored
d. It defines the special characteristics to be monitored
9.33. The ideal setting for training:

a. Is at the employee's normal work station
b. Is at the end of the employee's regular work shift
c. Will have environmental conditions go unnoticed by the student
d. Should match the conditions in an outdoor park in Spring time

9.34. The control chart that is most sensitive to variations in a measurement is:

a. p chart
b. np chart
c. c chart
d. X-bar and R chart

9.35. How many of the following Japanese techniques are supportive of operational control?

I. Poka-yoke
II. 5S
III. TPM
IV. Kanban

a. II and IV only
b. I, II, and IV only
c. II, III, and IV only
d. I, II, III, and IV

9.36. Given that resistors are produced in lots of 1,000, and the average number of
defective resistors per lot is 12.7, what are the upper and lower limits for the control
chart appropriate for this process?

a. LCL = 2.0, UCL = 23.4
b. LCL = 3.8, UCL = 20.2
c. LCL = 0.031, UCL = 0.131
d. LCL = 1.5, UCL = 26.7

9.37. During variable control charting, a trend of four consecutive points is noted on
both the average and range charts. The average chart is increasing and the range chart
is decreasing. One may make which of the following conclusions?

a. The nominal measurement is increasing
b. The variability is decreasing
c. The process is improving
d. No conclusions may be made yet

9.38. After the successful implementation of an "Improve" stage, a DMAIC team is
considering control charts to monitor the new gains for a process. Which approach
would provide the best results?

a. Chart as many characteristics as possible
b. Chart the most important input variables associated to the CTQs
c. Use control charts for variables only
d. Use numerous control charts for attributes and variables

9.39. What % of product should fall between the P-C lines (green zone) on a pre-control
chart, assuming that the process is stable?

a. 60.0%
b. 68.4%
c. 86.0%
d. 95.4%

9.40. Which control chart pattern best represents an in control process?

a. A consecutive run of seven or more points on one side of the centerline
b. A random distribution of points with one point outside the control limits
c. A random distribution of points on both sides of the centerline
d. A steady trend of points toward either control limit
9.41. An R chart is generally used to:

a. Determine if the process is in control
b. Determine if the process mean is in control
c. Determine if the process variance is in control
d. Determine the variance of the process

9.42. Which of the following elements is NOT part of a control plan form?

9.43. Which of the following charts have upper control limits, but frequently have lower
control limits of zero?

a. X-bar and individual charts
b. c charts and u charts
c. p charts and np charts
d. R and sigma charts

9.44. The design of a control plan for a particular part incorporates information from a
variety of sources such as flow charts, QFD, FMEAs, designed experiments, and statistical
studies. It is a tool to monitor and control the part or process. If used properly, the
control plan avoids which of the following problems?

9.45. Standard work does NOT rely on which of the following tools?

a. Visual factory
b. 5S
c. Kanban
d. EWMA

9.46. The single most important factor in establishing an effective company training
effort is:

9.47. An R chart is most closely related to which of the following?

a. c chart
b. S chart
c. u chart
d. X-bar chart

9.48. What lean technique is most widely used to make problems visible?

a. JIT
b. 5S
c. Kaizen
d. SMED
The answers to all questions are located at the end of Section XII.
Design Improvement
In the business world, the equation for return on investment, or return on net
operating assets, has both a numerator - net income, and a denominator -
investment. Managers have found that cutting the denominator - investments in
people, resources, materials, or other assets - is an easy way to make the desired
return on investment rise (at least short-term).
To grow the numerator of the equation requires a different way of thinking. That
thinking must include ways to increase sales or revenues. One of the ways to
increase revenues must include introducing more new products.
Cooper (1993)9 states that new products account for a large percentage of company
sales (40%), and profits (46%). Of course, not every new product will survive. Two
studies listed in Table 10.1 provide some statistics.
Table 10.1 indicates that a large number of ideas are needed. These ideas are
sorted, screened, and evaluated in order to obtain feasible ideas, which enter the
development stage, pass into the launch stage, and become successful products.
Cooper (1996)10 provides more details of how winning products are obtained:
1. A unique, superior product: This is a product with benefits and value for the
customer.
6. Team effort: Product development is a team effort that includes research &
development, marketing, sales, and operations.
7. Proper project selection: Poor projects must be killed at the proper time. This
provides adequate resources for the good projects.
8. Prepare for the launch: A good product launch is important and resources
must be available for future launches.
10. Speed to market: Product development speed is the weapon of choice, but
sound management practices should be maintained.
11. A new product process: This is a screening (stage gate) process for new
products.
13. Strength of company abilities: The new product provides a synergy between
the company and internal abilities.
There are many product development processes to choose from. Rosenau (1996)30
suggests that the former "relay race" process (one function passing the product
from marketing to engineering to manufacturing and back through the loop) is
obsolete. Multi-functional team activities involving all departments are necessary
for effectiveness and speed to market. The process is comprised of 2 parts: a
"fuzzy front end" (idea generation and sorting) and new product dE!velopment (NPD).
The complete NPD process includes 5 activities:
• Concept study: A study is needed to uncover the unknowns about the market,
technology, and/or the manufacturing process.
• Development of the new product: This is the start of the NPD process. This
includes the specifications, needs of the customer, target markets,
establishment of multi-functional teams, and determination of key stage gates.
• Maintenance: These are the post delivery activities associated with product
development.
A stage gate process is used by many companies to screen and pass projects as
they progress through development stages. Each stage of a project has
requirements that must be fulfilled. The gate is a management review of the
particular stage in question. It is at the various gates that management should make
the "kill" decision. Too many projects are allowed to live beyond their useful lives
and clog the system. This dilutes the efforts of project teams and overloads the
company resources. Table 10.2 illustrates some sample stages from Rosenau
(1996)30.
The above Table presents several examples of new product development processes.
The individual organization should customize their process and allow a suitable time
period for it to stabilize.
2. New category entries: These are company products that are not new to the
world, but new to the company. A "me too" type product.
5. Repositionings: Products that are retargeted for a new use. The original
purpose was not broad enough. Arm & Hammer baking soda has been
repositioned as a drain deodorant, refrigerator deodorant, etc.
6. Cost reductions: New products which are designed to replace existing
products, but at a lower cost.
GE Plastics (Mt. Vernon, IN) has formalized their product design development
process (Harold, 1999)17. It is described as designing for six sigma using the
product development process. The methodology is used to produce engineered
plastics through a series of tollgates that describe the elements needed for
completion of a stage. Best practices are used in each stage, including:
Phadnis (2001)26 in an article "Design for Six Sigma Roadmap" provides various
checklists to guide the project team toward completion of a project.
With DMAIC, a lean six sigma team takes an existing process, defines a problem
within that process, and follows a series of steps to improve the current state. IDOV
(identify, design, optimize, and validate) takes this approach one step further. The
central concept of variation reduction provides new leverage when it starts at the
design phase of products and processes.
DMAIC attains improvement in existing products and processes while IDOV, and
other DFSS methodologies, quantifies the steps necessary to achieve six sigma
quality in new products and processes. (Shina, 2001)31
Treffs (2001 )37 and Simon (2000)32 provide additional insight on the development of
other six sigma design methods. A standardized approach has not yet been
established, but most authors recommend a framework that tries to remove "gut
feel" and substitutes more control.
• Design: Transfer functions are applied to develop the overall layout and
geometry of the product. A statistical quality approach is used in this step.
Product quality scorecards (PQS) are defined at this point. Some of the tools
common to this phase are design of experiments (DOE), finite element
analysis (FEA), and statistical inference.
• Validate: PQSs are used as input to show the required level of quality has
been achieved. Data from quality control and field studies confirm that the
product satisfies the required specifications.
DMADV
Simon (2000)32 provides a five-step DMADV process for six sigma design. The
DMADV method for the creation of a new product consists of the following steps:
• Verify: Verify the design performance and ability to meet customer needs
• The product or process exists and has been optimized, but still doesn't meet
the customers' specification or six sigma level
(Simon, 2000)32
As six sigma implementations progress, new projects become more complex. The
development phases will typically follow the sequence below:
• Training and launch: Teams learn the DMAIC methodology and projects
selected represent "low-hanging fruit" which yield gains from obvious
improvement opportunities with easy solutions.
• Design for six sigma (DFSS): Teams attack the most complex projects, with
the greatest potential gains. Solutions require more than improvement and
become a matter of new process and product designs.
(ASQ Staff, 2003)5
DMAIC projects tend to be easier and faster than design projects, hence they provide
early gains. DFSS projects tend to take longer to implement, but result in solutions
to complex design projects. Organizations with well-developed six sigma programs
run DMAIC and DFSS projects concurrently. The following steps help determine an
organization's readiness to deploy DFSS:
4. Gauge the organization's capability for success with DFSS: Are improvement
projects being completed within projected timelines? Should the
management team be more involved? Are adequate resources available?
(ASQ Staff, 2003)5
(Design process flow: selected schemes → embodiment of schemes → detailing →
working drawings, etc. Diagram not reproduced.)
The designer (and design team) will capture the needs, provide analysis, and
produce a statement of the problem. The conceptual design will generate a variety
of solutions to the problem. This brings together the elements of engineering,
science, practical knowledge, production methods, and practices. The embodiment
of schemes step produces a concrete, working drawing (or item) from the abstract
concept. The detailing step consolidates and coordinates the fine points of
producing a product.
The designer of a new product is responsible for taking the initial concept to final
launch. In this effort, the designer will be part of a team. The project manager,
product manager, or general manager for a new product or new design team (which
includes marketing, sales, operations, design, and finance) will need to manage the
process.
The use of six sigma tools and techniques must be introduced in a well thought out
manner at various phases of the project. The remainder of this Section details some
of the available tools. The student should be aware that there are numerous other
DFSS techniques that are not presented.
The foundation of the house contains the benchmarking or target values. The values
indicate "how much" for each of the measures. For the example given in Figure
10.4, the "how much" target values include the temperature range, book weight,
maximum errors, cost, etc. This area may also be used to indicate objective
measures for competitive products and technical importance of each factor. There
is a tendency to give a tolerance for the target values, but it is better to set the target
values as single objectives, and then rate the engineering characteristics in terms
of the ability of achieving the target values.
The right-hand wall of the house in Figure 10.4 indicates the customer competitive
assessment and other factors affecting the customer. The competition comparison
shows graphically the relative weights between this product and the competition.
The elements which are included in the house are customized to the particular
product or service being described. When reviewing a completed house, the easiest
method is to look at each area separately (walls, ceiling, and foundation), to
understand the factors, relative strengths, and interactions.
Figure 10.4 House of Quality Example

(The example matrix relates customer needs - comprehensive, low cost, up-to-date,
easily available, and test questions - to design features, using P and N symbols for
positive and negative interactions, importance rankings from 0 (no importance) to 5
(most important), a competitive assessment scale from 1 (worst) to 5 (best), target
values, objective measures for the company's manual and two competing books, and an
absolute technical importance ranking for each design feature. The completed matrix
is not reproduced.)
The "how" portion ofthe above Figure is identified as design featu res. Various other
texts have used "engineering characteristics," "design requirements," "technical
descriptors" and "technical details" as descriptions. The "how mLich" floor area and
the "comparison" right wall area of the house in Figure 10.4 are included to show
their impact on the design of the product.
(Hauser, 1988)18
Figure 10.5 Linked House of Quality Example (diagram not reproduced)
While it is easy to get caught up in the process of constructing the house(s) and
completing entry of the data, one should not lose sight of the objectives of the house
of quality methodology. Hauser (1988)18 states, "The house of quality is a kind of
conceptual map that provides the means for interfunctional planning and
communications." "The principal benefit of the house of quality is quality in-house.
It gets people thinking in the right direction and thinking together."
The voice of the customer, both external and internal, is quantified and presented
in the format of a house of quality. The different organizational functional groups,
engineering, marketing, manufacturing and so on, are able to see the effect of design
and planning changes in order to balance customer needs, costs, and engineering
characteristics in the development of new or improved products and services.
Various authors use the basic theme of the house of quality presented by Hauser
and include additional factors and symbols to show other data relationships.
(Diagram: noise factors and control factors acting on the product or process - not reproduced.)
Control factors are those parameters that are controllable by the designer. These
factors are the items in the product or process that operate to produce a response
when triggered by a signal. For instance, in the case of the furnace, the control
factors might be the design of the thermocouple and heat controller. Control factors
are sometimes separated into those which add no cost to the product or process
and those that do add cost. Since factors that add cost are frequently associated
with selection of the tolerance of the components, these are called tolerance factors.
Factors that don't add cost are simply control factors. Noise factors are parameters
or events that are not controllable by the designer. These are generally random, in
that only the mean and variance can be predicted.
These noise factors have the ability to produce an error in the desired response. The
function of the designer is to select control factors so that the impact of noise
factors on the response is minimized while maximizing the response to signal
factors. This adjustment of factors is best done using statistical design of
experiments or SDE.
(Diagram: burners with inside and outside tile positions - not reproduced.)
Although temperature was an important factor, it was treated as a noise factor. This
meant that temperature was a necessary evil and all other factors would be varied
to see if the dimensional variation could be made insensitive to temperature. In Dr.
Taguchi's words, whether the "robustness of the tile design" could be improved.
People (the engineers, chemists, etc.) having knowledge about the process were
brought together. They brainstormed and identified seven major controllable factors
which they thought could affect the tile dimension.
This is a classic example of improving quality (reducing the impact of a noise factor),
reducing costs (using less amalgamate) and drastically reducing the number of
defectives at the same time.
Some of the key robust design steps discussed by Phadke, (1989)25 are concept
design, parameter design, and tolerance design.
Concept Design
Parameter Design
During the parameter design stage, the design is established using the lowest cost
components and manufacturing techniques. The response is then optimized for
control and minimized for noise. If the design meets the requirements, the designer
has achieved an acceptable design at the lowest cost.
Tolerance Design
If the design doesn't meet requirements, the designer then gives consideration to
more expensive components or processes that reduce the tolerances. The
tolerances are reduced until the design requirements are met. With robust design
approaches, the designer has the ability to produce a design with either the lowest
cost, the highest reliability, or an optimized combination of cost and reliability.
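As an illustration of the parameter design calculation, the sketch below scores candidate settings with the nominal-the-best signal-to-noise ratio, S/N = 10 log10(mean^2/variance), which is one of the metrics associated with this approach; the candidate designs and measurements are hypothetical.

# Illustrative sketch: scoring candidate parameter settings with Taguchi's
# nominal-the-best signal-to-noise ratio; a higher S/N means a more robust design.
import math

def sn_nominal_the_best(measurements):
    n = len(measurements)
    mean = sum(measurements) / n
    variance = sum((x - mean) ** 2 for x in measurements) / (n - 1)
    return 10 * math.log10(mean ** 2 / variance)

# Hypothetical response data: each candidate design measured under several
# noise conditions (for example, a tile dimension in mm).
candidates = {
    "design 1": [150.2, 149.8, 150.5, 149.4],
    "design 2": [150.1, 150.0, 150.2, 149.9],
}

for name, data in candidates.items():
    print(f"{name}: S/N = {sn_nominal_the_best(data):.1f} dB")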
" ... a systematic group of activities intended to: a) recognize, and evaluate the
potential failure of a product/process and the effects of that failure, b) identify
actions that could eliminate or reduce the chance of the potential failure
occurring, and c) document the entire process."
A FMECA provides the design engineer, reliability engineer, and others a systematic
technique to analyze a system, subsystem, or item for all potential or possible failure
modes. This method then places a probability that the failure mode will actually
occur and what effect this failure has on the rest of the system.
A FMEA or FMECA (in some cases there is little difference) is a detailed analysis of
a system, down to the component level. Once all items are classified as to the 1)
failure mode, 2) effect of failure, and 3) probability failure will occur, they are rated
as to their severity via an index called a risk priority number (RPN).
(Dovich, 2002)13
In the DFSS realm, FMEAs are used in the design phase of product development
(concept, design, optimize, and verify), as well as in the development phase of
technology development (invent/innovate, develop, optimize, and verify).
(AIAG, 2001)1, (Stamatis, 1995)33
FMEA Types
There are a variety of types of FMEAs. However, Stamatis (1995)33 has classified
them into four types: system, design, process, and service. Other writers provide
more detailed breakdowns, but these four types are primarily used. The FMEA form
is not standardized, so a company can customize it for their use. Most of the forms
have common elements. The form to use may be mandated from the customer. The
big three American automotive manufacturers request the use of forms consistent
with the AIAG FMEA manual.
System FMEAs
System FMEAs (SFMEA) deal with systems, subsystems, and components, along
with the interaction between systems and elements of the system. There is to be a
balanced approach to effectiveness, performance, and costs. The inputs can be
derived from a QFD (quality function deployment - voice of the customer). The
potential causes of failures can be gleaned from warranty claims, customer
complaints, field service data, reliability data, and feasibility studies.
Design FMEAs
An objective design FMEA (DFMEA) will reduce the risk of failures. This includes
functional requirements and design alternatives. It should analyze products prior
to the release of production drawings for tooling and to manufacturing (before-the-
fact). The focus is on failure modes caused by design deficiencies. Manufacturing
issues are not in the DFMEA. Those should be included in process FMEAs.
Design FMEAs do not rely on process controls to correct potential design flaws.
One should acknowledge the manufacturing process capabilities with regard to
specifications, tolerances, finishes, tooling, or other sources. The team should
involve individuals from design, testing, reliability, materials, service, and
manufacturing. DFMEAs are conducted on the superior concepts that emerge from
a selection process. 76% of engineering changes are due to poor design and the
remaining 24% are due to improvements.
The potential causes of failure modes may be related to materials, processes, and
costs. For detection methods, one would use simulation, mathematical modeling,
prototype testing, design of experiments, verification testing, product testing, and
tolerance stack-up studies. A design FMEA will be presented later in this Section.
Process FMEAs
Process FMEAs (PFMEA) study the manufacturing and assembly process. The
purpose is to focus on the failure modes caused by process or assembly operations
and to minimize those deficiencies. The PFMEA is initiated before or after the
feasibility stage, prior to tooling for production, and accounts for all manufacturing
operations. A flow chart of the process must be made available, along with the
DFMEA document for reference. A QFD can provide further customer input. The
improvement team should involve individuals from design, assembly, operations,
materials, quality, service, and suppliers.
The PFMEA should not rely on product design changes to correct process
weaknesses. Some of the potential issues include too much, too little, missing, or
wrong elements pertaining to labor, machine, methods, materials, measurement, and
environment. The characteristics of a product can be classified as: 1. critical -
shall/must comply with safety/government regulations, or service requirements; 2.
significant - customer and supplier quality features; 3. key - characteristics that
provide prompt feedback on the product aiding in the corrective action process.
Service FMEAs
Service FMEAs investigate services before they reach the customer. The focus is
on failure modes (tasks, errors, mistakes) caused by system or process deficiencies
before the first service. The service FMEA will cover the non-manufacturing aspects
and includes maintenance contractors, financial services, legal, hospitality services,
government and educational institutions, and healthcare. The inputs to the FMEA
may be derived from QFD, benchmarking, market research, and/or focus groups.
The potential causes of failures may be from: labor, machine, methods, materials,
measurement and environment. The service FMEA can be termed healthcare FMEAs
for the healthcare industry.
FMEA Follow-up
The team leader (and eventually the process owner) should follow-up on the FMEA
to verify that the implementation actions were undertaken and completed. Some of
the follow-up steps include:
The next major step is to weigh the risks associated with the current component,
effect, and cause with the controls that are currently in place.
12. P is the probability this failure mode will occur. Values for this index
generally range from 1 to 10, with 10 being near certainty of occurrence.
13. S is the severity of the effect of the failure on the rest of the system if the
failure occurs. These values are often indexed from 1 to 10, with a 10 meaning
that the safety of the user is in jeopardy.
14. D is a measure of the effectiveness of the current controls (in place) to identify
the potential weakness or failure prior to its release to production. This index
may also range from 1 to 10. A value of 10 indicates the design weakness
would most certainly make it to final production without detection.
15. RPN. The risk priority number is the product of the indices from the previous
three columns. This RPN is dimensionless, since there is no real meaning to
a value of, say, 600 versus 450 except in the difference in magnitude (a
calculation sketch follows this list).

RPN = P · S · D

16. The actions then are based upon which items have the highest RPN
and/or where the major safety issues are.
17. There is a column for actions to reduce the risk, a column for responsibility,
and a column for the revised RPN after corrective action.
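A minimal sketch of the RPN calculation and ranking follows. The failure modes and index values mirror question 10.2 at the end of this section, and the tie-breaking rule (severity first) is one reasonable choice rather than a prescribed one.

# Illustrative sketch: computing RPN = P x S x D for each failure mode and
# ranking the items to work on first.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    probability: int   # P, 1-10
    severity: int      # S, 1-10
    detection: int     # D, 1-10 (10 = least likely to be detected)

    @property
    def rpn(self) -> int:
        return self.probability * self.severity * self.detection

# Hypothetical worksheet entries (same values as question 10.2).
items = [
    FailureMode("bad battery",   probability=7, severity=9, detection=5),
    FailureMode("cracked welds", probability=9, severity=7, detection=5),
    FailureMode("missing bolts", probability=7, severity=5, detection=9),
]

# Rank by RPN; ties are broken by severity, since safety issues come first.
for fm in sorted(items, key=lambda f: (f.rpn, f.severity), reverse=True):
    print(f"{fm.name:14s} RPN = {fm.rpn}")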
The FMEA worksheet columns include: part number and name, function, potential
failure mode(s), potential effect(s) of failure, potential cause(s) of failure, current
controls, risk assessment (P, S, D, RPN), recommended corrective action(s),
action(s) taken, revised risk assessment (P, S, D, RPN), and the responsible
department or individual. In the pump example shown, with no current controls in
place, the initial assessment was P = 2, S = 4, D = 2 (RPN = 16). The recommended
actions were to review the need for an over-pressure prevention system and to
review the pressure delivered in the field against the standard and actual need,
giving a revised assessment of P = 1, S = 4, D = 2 (RPN = 8), with engineering
responsible.
Risk Assessment
Risk assessment is the combination of the probability of an event or failure, and the
consequence(s) of that event. The analysis of risk of failure normally considers the
severity and probability of failure. The severity of failure is generally defined by the
categories from MIL-STD-1629 (1980)22. These are shown in Table 10.9.
Classification      Description
I    Catastrophic   A failure that may cause death or mission loss
II   Critical       A failure that may cause severe injury or major system damage
III  Marginal       A failure that may cause minor injury or degradation in mission performance
IV   Minor          A failure that does not cause injury or system damage, but may result in system failure and unscheduled maintenance
Rank     Criteria
1        It is unreasonable to expect that the minor nature of this failure will degrade the performance of the system.
2-3      Minor nature of failure will cause slight annoyance to the customer. Customer may notice a slight deterioration of the system performance.
4-6      Moderate failure will cause customer dissatisfaction. Customer will notice some system performance deterioration.
7-8      High degree of customer dissatisfaction and inoperation of the system. Does not involve safety or noncompliance to government regulations.
9-10     Very high severity ranking in terms of safety-related failures and nonconformance to regulations and standards.
A number of systems are used to combine the probability of failure and the hazard
category. These systems are based on accepting a degree of risk of occurrence with
respect to the severity of the hazard.
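A simple sketch of such a system is shown below. The severity classes follow Table 10.9, but the occurrence thresholds and the accept/review/redesign decisions are hypothetical and would be set by each organization.

# Illustrative sketch: combining a MIL-STD-1629-style severity classification
# with an occurrence level into a simple risk acceptance decision.
# The thresholds below are hypothetical, not taken from any standard.

SEVERITY = {"I": "catastrophic", "II": "critical", "III": "marginal", "IV": "minor"}

def risk_decision(severity_class: str, occurrence: int) -> str:
    """severity_class: I-IV; occurrence: 1 (remote) to 10 (near certain)."""
    if severity_class == "I" or (severity_class == "II" and occurrence >= 4):
        return "unacceptable - redesign required"
    if occurrence >= 7 or severity_class == "II":
        return "undesirable - management review"
    return "acceptable with review"

print(risk_decision("III", 2))   # acceptable with review
print(risk_decision("I", 3))     # unacceptable - redesign required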
AT&T Bell Laboratories coined the term DFX to describe the process of designing
a product to meet the above characteristics. In doing so, the life cycle cost of a
product and the lowering of down-stream manufacturing costs would be addressed.
The DFX toolbox has continued to grow in number from its inception 15 years ago
to include hundreds of tools today (Huang, 1997)20. The user can be overwhelmed
by the choices available. Some researchers in DFX technology have developed
sophisticated models and algorithms. The usual practice is to apply one DFX tool
at a time. Multiple applications of DFX tools can be costly. The authors note that a
systematic framework is not yet available for use with DFX methodology. A set
methodology would aid in the following ways:
1. Design guidelines:
DFX methods are usually presented as rules of thumb (design guidelines). These
guidelines provide broad design rules and strategies. The design rule to increase
assembly efficiency requires a reduction in the part count and part types. The
strategy would be to verify that each part is needed.
Each DFX tool involves some analytical procedure that measures the
effectiveness of the selected tool. Boothroyd (1994)7 provides a DFA (design for
assembly) procedure that addresses the handling time, insertion time, total
assembly time, number of parts, and the assembly efficiency. Each tool should
have some method of verifying its effectiveness.
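As an illustration, the DFA index is commonly quoted as three seconds of ideal assembly time per theoretically necessary part divided by the actual total assembly time; the sketch below uses that form with hypothetical part counts and times.

# Illustrative sketch: an approximation of the DFA (design for assembly) index.

def dfa_efficiency(min_part_count: int, total_assembly_time_s: float) -> float:
    # 3 seconds is taken as the ideal handling-plus-insertion time per part.
    return 3.0 * min_part_count / total_assembly_time_s

# Before redesign: 24 parts taking 160 s to assemble, but only 9 parts are
# theoretically necessary (hypothetical figures).
print(f"before: {dfa_efficiency(9, 160.0):.0%}")   # about 17%
# After part-count reduction: the same 9 necessary parts, assembled in 60 s.
print(f"after:  {dfa_efficiency(9, 60.0):.0%}")    # about 45%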
Each tool can be evaluated for usefulness by the user. The tool may be evaluated
based on accuracy of analysis, reliability characteristics, and/or integrity of the
information generated.
Use of the DFX tools will be of benefit if the product development process is
understood by the design team. Understanding the process activities will help
determine when a particular tool can be used.
The mapping of a tool by level implies that DFX analysis can be complex. Several
levels of analysis may be involved with one individual tool. The structure may
dictate the feasibility of tool use. For routine product redesigns, the amount of
information needed may already be available. For original designs, the amount
of interdependence of tools can make it difficult to coordinate all of the changes
downstream.
DFX Characteristics
The following characteristics and attributes should be considered by DFX projects.
(Bralla, 1999)8
Safety:
Design for safety requires the elimination of potential failure prone elements that
could occur in the operation and use of the product. The design should make the
product safe for: manufacture, sale, use by the consumer, and disposal.
Quality:
The three characteristics of quality, reliability, and durability are required and are
often grouped together in this category.
Reliability:
A reliable design has already anticipated all that can go wrong with the product,
using the laws of probability to predict product failure. Techniques are employed
to reduce failure rates in design testing. FMEA techniques consider how
alternative designs can fail. Derating of parts is considered. Redundancy
through parallel, critical component systems may be used.
Testability:
Ma n ufactu rability:
DFA means simplifying the product so that fewer parts are involved, making the
product easier to assemble. This portion of DFX can often provide the most
significant benefit. A product designed for ease of assembly can: reduce
service, improve recycling, reduce repair times, and ensure faster time to market.
This is accomplished by using fewer parts, reducing engineering documents,
lowering inventory levels, reducing inspections, minimizing setups, minimizing
material handling, etc.
Environment:
The objective is minimal pollution during manufacture, use, and disposal. This
could be defined as design for the environment (DFE). The concept is to increase
growth without increasing the amount of consumable resources. Some
categories of environmental design practices include: recovery and reuse,
disassembly, waste minimization, energy conservation, material conservation,
chronic risk reduction, and accident prevention.
A product should be returned to operation and use easily after a failure. This is
sometimes directly linked to maintainability.
Maintainability:
The product must perform satisfactorily throughout its intended life with minimal
expenses. The best approach is to ensure the reliability of components. There
should be: reduced down time for maintenance activities; reduced user and
technician time for maintenance tasks; reduced requirements for parts; and
lower costs of maintenance. Endres (1999)14 provides some specific methods for
increasing maintainability (decreasing diagnosis and repair times): use modular
construction in systems, use throwaway parts (instead of parts requiring repair),
use built-in testing, have parts operate in a constant failure rate mode, etc.
Human factors engineering must fit the product to the human user. Some
guidelines to consider are: fitting the product to the user's attributes, simplifying
the user's tasks, making controls and functions obvious, anticipating human
error, providing constraints to prevent incorrect use, properly positioning
locating surfaces, improving component accessibility, and identifying components.
Appearance (Aesthetics):
Packaging:
The best package for the product must be considered. The size and physical
characteristics of the product are important, as is the economics of the package
use. The method of packaging must be determined. Automated packaging
methods are desirable.
Features:
Features are the accessories, options, and attachments available for a product.
Time to Market:
The ability to have shorter cycle times in the launch design of a product is
desirable. The ability to make the product either on time, or faster than the
competition is of tremendous advantage.
TRIZ
TRIZ is a Russian abbreviation for "the theory of inventive problem solving"
(Altshuller, 1996)2, (Altshuller, 1998)3. TRIZ in Russian is Теория решения
изобретательских задач (ТРИЗ). (The School of TRIZ, 2000)36. L. Shulyak is the
translator of a series of books originally written by Genrich Altshuller on the subject
of inventive problem solving. Provost (1998)27 also provides a description of the
TRIZ methodology in Quality Progress. Altshuller states that inventiveness can be
taught. Creativity can be learned; it is not innate, and one does not have to be born
with it.
Altshuller asserts that traditional inventing is "trial and error," resulting in much
wasted time, effort, and resources. Through his years of education and
imprisonment, he solidified a theory that one solves problems through a collection
of assembled techniques. Technical evolution and invention have certain patterns.
One should be knowledgeable about them to solve technical problems. There is some
common sense, logic, and use of physics in problem solving. (Altshuller, 1996)2
3. Formulation of the ideal final result (IFR): Providing a description of the final
result, which will provide more details
TRIZ (Continued)
6. Change or reformulate the problem
8. Utilization of the found solution: Seeking side effects of the solution on the
system or other processes.
9. Analysis of the steps that lead to the solution: An analysis may prove useful
later.
Initially there were 27 TRIZ tools (Altshuller, 1996)2, (Provost, 1998)27, which were
later expanded to 40 innovative, technical tools (Altshuller, 1998)3. This short
section on TRIZ cannot fully describe a methodology that has proven to be very
successful. Users of TRIZ report great results.
Systematic Design
Systematic design is a step-by-step approach to design. It provides a structure to
the design process using a German methodology. It is stated that systematic design
is a very rational approach and will produce valid solutions. This approach is similar
to the German design standard, Guideline VDI 2221, "Systematic Approach to the
Design of Technical Systems and Products."
Systematic design includes tools and methods suggested for various steps along
the design process. The creativity of the designer is encouraged in this method, but
on a more structured basis. Any and all design methods must employ the designer's
creativity to find new innovative solutions. (Pahl, 1988)24
Constant or variable:    What will happen if only the changing item is treated as an
                         exception?
Linking or separating:   What will happen if some things are joined or taken apart?
Parallel or serial:      Can two or more things be done at the same time? Can they
                         be done sequentially?
Use it another way:      Is there another way to use it while keeping the current
                         setup? Can anything else be produced?
Change or replace it:    Change the shape, color, sound, movement, location,
                         orientation, power source, or rotation.
Reverse it:              Turn it upside down; invert it; reverse the positions, front
                         and back. Change the roles, orientations, or setups.
What occurred in practice was that the master (which was subject to damage) sat
around for many days in a safe (or in various other locations because the system
didn't make sense) waiting for the appropriate authority figures to come back from
vacation or attend to paperwork. Refer to Figure 10.12.
The process problem was attacked by a trained team who began by flow charting the
current process. They asked "Why do we do it this way" questions at each key step.
As it turned out, this expensive master had a very low likelihood of theft. Other
producers, who could even use the master, were few indeed and were known to have
integrity. Hourly employees at the firm also considered themselves to have as much
honesty and integrity as management. To them, the established system was a joke.
The cycle time was reduced to 2 days (the majority of which was a scheduling
decision). See Figure 10.13.
This example makes an interesting case study illustrating how teamwork and the
development of a process flow chart can be used to simplify an existing process.
However, the question arises: Why is this example included in design improvement?
[Figures 10.12 and 10.13: flow charts of the original and simplified processes, showing the supervisor receiving the master (or replacement) at the dock, delivery of the master to the vault or to planning, advising planning of its availability, the authority check, production scheduling, and production of the product]
References
1. AIAG. (2001). Potential Failure Modes and Effects Analysis (FMEA) Reference
Manual, 3rd ed. Southfield, MI: AIAG.
2. Altshuller, G. (1996). And Suddenly the Inventor Appeared: TRIZ, the Theory of
Inventive Problem Solving. (L. Shulyak, Trans.). Worcester, MA: Technical
Innovation Center.
5. ASQ Staff Writer. (2003, Nov. 26). "Planning for DFSS." Downloaded Jan. 4, 2006
from: http://www.asq.org/forums/sixsigma/articles/champion/pdf/
chp_planning_for_DFSS.pdf
7. Boothroyd, G.; Dewhurst, P.; & Knight, W. (1994). Product Design for
Manufacture and Assembly. New York: Marcel Dekker.
8. Bralla, J. (1999). Design for Manufacturability Handbook, 2nd ed. New York:
McGraw-Hill.
9. Cooper, R. (1993). Winning at New Products, 2nd ed. Reading, MA: Perseus
Books.
10. Cooper, R. (1996). "New Products: What Separates the Winners from the
Losers." In Rosenau, et. al., The PDMA Handbook of New Product
Development. New York: John Wiley & Sons.
11. Crawford, C. (1997). New Products Management, 5th ed. Chicago: Irwin.
12. Cross, N. (1994). Engineering Design Methods: Strategies for Product Designs,
2nd ed., Chichester: John Wiley & Sons.
13. Dovich, R.A., & Wortman, B.L. (2002). CRE Primer. Terre Haute, IN: Quality
Council of Indiana.
References (Continued)
14. Endres, A. (1999). "Quality in Research and Development." In Juran's Quality
Handbook, 5th ed., Juran, J., et al., New York: McGraw-Hill.
15. Gryna, F. (1999). "Operations." In Juran's Quality Handbook, 5th ed., Juran, J.,
et al., New York: McGraw-Hill.
16. Hamel, G., & Prahalad, C. (1994). Competing for the Future. Boston: Harvard
Business School Press.
18. Hauser, J.R., & Clausing, D. (1988). "The House of Quality." Harvard Business
Review, May-June, p. 63-73.
19. Hockman, K. (2001). "Why is a Design for Six Sigma Methodology Necessary?"
Retrieved September 10, 2001, from:
http://www.sixsigmaforum.com/protected/articles/ds_dfss.shtml
20. Huang, G., & Mak, K. (1997, Sept). "Developing a Generic Design for X Shell."
Journal of Engineering Design.
21. Ireson, W.G.; Coombs, C.F.; & Moss, R., eds. (1996). Handbook of Reliability
Engineering and Management. New York: McGraw-Hill.
22. MIL-STD-1629. Procedures for Performing a Failure Mode, Effects and Criticality
Analysis.
24. Pahl, G., & Beitz, W. (1988). Engineering Design: A Systematic Approach.
London: The Design Council, and Berlin: Springer-Verlag.
25. Phadke, M.S. (1989). Quality Engineering Using Robust Design. Englewood
Cliffs, NJ: Prentice Hall.
References (Continued)
27. Provost, L. & Langley, G. (1998, March). "The Importance of Concepts in
Creativity into Quality Management." Quality Progress. Milwaukee: ASQ.
28. Reid, R. (May, 2005). "FMEA- Something Old, Something New." Quality
Progress. 38(5), 90-93.
29. Reiling, J.; Knutzen, B.; & Stoecklein, M. (August 2003). "FMEA - The Cure For
Medical Errors." [Electronic version]. Quality Progress, 36(8), p. 67-71.
30. Rosenau, M. (1996). "Choosing a Development Process That's Right for Your
Company." In Rosenau, et. aI., The PDMA Handbook of New Product
Development. New York: John Wiley & Sons.
31. Shina, S. (2001). Six Sigma for Electronics Design and Manufacturing. New York:
McGraw-Hill.
32. Simon, K. (December, 2000). "DMAIC Versus DMADV." Retrieved August 17,
2001 from:
http://www.isixsigma.com/library/content/c001211a.asp
33. Stamatis, D. (1995). Failure Mode and Effect Analysis: Failure From Theory to
Execution. Milwaukee: ASQC Quality Press.
37. Trefts, D.; Rabeneck, S.; & Rabeneck, L. (2001). "An Approach to Achieving Six
Sigma Design and Optimize (Improve) Phase." Retrieved September 10, 2001,
from:
http://www.sixsigmaforum.com/protected/articles/ds_approach1.shtml
References (Continued)
38. Vandervort, C., & Kudlacik, E. (2001). "GE Generator Technology Update."
Retrieved January 8, 2006 from:
http://mjharden.com/prod_serv/products/tech_docs/en/downloads/ger4203.pdf
39. Watson, B., & Radcliffe, D. (1998, Sept). "Structuring Design for X Tool Use for
Improved Utilization." Journal of Engineering Design.
40. Wilkins, Jr., J. (May, 2000). "Putting Taguchi Methods to Work to Solve Design
Flaws." Quality Progress.
10.1. FMEAs satisfy which of the following basic organizational needs?

a. FMEAs are of no value since they are very tedious
b. Auditors will not hand out a finding if FMEAs are in compliance
c. The bottom line is automatically improved
d. The risk of quality defects is greatly reduced

10.2. An improvement team has completed several RPN values and is now ready to
institute improvements. There are three failure modes with the same RPN score in
the table below. The first line item which should be reduced would be:

Failure Mode     Severity  Occur.  Detection  RPN
Bad battery         9        7        5       315
Cracked welds       7        9        5       315
Missing bolts       5        7        9       315

a. The bad battery because it has the highest severity ranking
b. The cracked welds because of the high occurrence
c. The missing bolts because it has the highest detection level
d. The rankings are the same, so choose any one to start

10.3. What is the main difference between DMAIC and DFSS?

a. DFSS does not have a control phase
b. DMAIC attacks existing problems while DFSS provides new product and process design
c. DFSS works only for engineers and DMAIC can be used by everybody in an organization
d. DMAIC is a methodology while DFSS is a tool

10.4. The most effective and efficient method of solving quality problems for a
product is to concentrate efforts in which of the following areas?

a. Design
b. Production
c. Quality improvement
d. Systematic troubleshooting

10.5. Which of the following organizations is ready for a DFSS approach?

a. The organization has recently become ISO 9001 certified
b. The first full-time black belt was recently hired and oriented within the organization
c. After many successful LSS projects the rate of improvement has slowed
d. The new CEO has a strong engineering background and wants change

10.6. The design steps in Taguchi's robust design sequence are:

I. Concept design
II. Parameter design
III. Tolerance design

a. I, II, III
b. I, III, II
c. II, I, III
d. III, I, II

10.7. Which of the following is NOT a widely recognized topic area for DFX?

a. Design for profit
b. Design for assembly
c. Design for reliability
d. Design for appearance

10.8. Which of the following statements is an INCORRECT description of QFD?
10.9. In the design of a product, design for X (DFX) can be described as:

a. Design for anything
b. A very effective method for manufacturability of parts
c. A new technique encompassing six sigma
d. A knowledge-based design approach

10.10. Arrange the following design steps into a logical sequence from start to finish.

I. Measure and determine customer needs and specifications
II. Define the project goals and customer needs specifications
III. Analyze the process options
IV. Verify and validate the design
V. Develop the design details for producing the customer needs

a. I, II, III, IV, V
b. II, I, III, V, IV
c. II, I, IV, III, V
d. I, II, V, III, IV

10.11. In Taguchi methods, a factor that results in a response variable and can be
equated to an independent variable is called a:

a. Control factor
b. Signal factor
c. Noise factor
d. Dependent variable

10.12. A number of authors have recommended sequences by which the HOQ (QFD)
can capture customer needs in the design. Please arrange the following design
details in appropriate sequence from start to finish.

I. Production requirements
II. Key process operations
III. Parts characteristics
IV. Engineering characteristics

a. I, II, III, IV
b. II, I, IV, III
c. IV, II, III, I
d. IV, III, II, I

10.13. In robust design, a factor that can cause unknown variability, or an error in
the response factor, is considered a:

a. Signal factor
b. Control factor
c. Noise factor
d. Response factor

10.14. The term "severity" in a FMEA describes the:

a. Difficulty of completing the FMEA form
b. Possible impact to a system user of a low-level failure
c. Likelihood of a failure
d. Time for which the system is expected to be down

10.15. When faced with a complex problem which requires an inventive solution, the
method which produces the results with the least wasted time, effort, and resources is:

a. Trial and error
b. Innate inventiveness
c. Using ARIZ steps in the TRIZ method
d. Plan-do-check-act

10.16. The analysis of risk involves two measures of failure. These measures are:

a. Failure analysis and failure effects
b. Failure mode and failure method
c. Failure severity and failure probability
d. Failure mechanism and failure mode

10.17. Cooper stresses that new products will have a greater chance of success if
they have all of the following characteristics EXCEPT:

A. An attractive market
B. A unique and superior product
C. Being first to market
D. A good product launch
10.18. Identify the design acronym(s) that would be considered (a) subset(s) of DFX:

I. DFSS
II. DFA
III. DFM

a. I only
b. II and III only
c. I and III only
d. I, II, and III

10.19. In robust design technology, which of the following factors would NOT be
considered a noise factor?

a. Ambient temperature
b. Carbon content in cast iron
c. Weather conditions
d. Sun spots

10.21. DFM (or design for manufacturability) attempts to accomplish all of the
following objectives, EXCEPT:

a. Testability
b. Environmental control
c. Producibility
d. Reduced number of manufacturing operations

10.23. TRIZ is a methodology for problem solving and is quite useful in the design
phase of a product. Which of the following methods are employed in TRIZ?

I. Trial and error
II. Reference to a trick
III. Use of physical effects (physics)
IV. Combination of tricks and physics

a. I and II only
b. II and III only
c. I, II, and III only
d. II, III, and IV only

10.24. The stage gate process is used by many companies to screen and pass
projects. Many companies may fail to use the process properly. A common problem
in the stage gate process that could negate its benefits would be:

a. Design for assembly
b. Design for features
c. Design for testability
d. Design for manufacture

10.26. New products will account for a large percentage of company sales. The
typical ratio of new product ideas to successful products is in the range of:
10.28. Review the following DFX statements and identify the one true description:

10.31. A new product such as a Pepsi with lemon twist would be considered:

I. Customer needs
II. Customer competitive assessment
III. Design features
IV. Technical competitive assessment

A. I and II only
B. II and III only
C. III and IV only
D. I, II, III, and IV
The answers to all questions are located at the end of Section XII.
I HEAR, I FORGET.
I SEE, I REMEMBER.
I DO, I UNDERSTAND.
CHINESE PROVERB
[Figure 11.1: original layout of the QCI production area, showing tabs, binder storage, a storage closet, the shipping door, inactive and active doors, and the media center]
Reflected in Figure 11.1 is the normal production of training binders, solution texts,
and question disks (at the time). Also included are items like replenishing paper,
retrieving tabs, and moving packaging materials.
Since copy machines (or printers) are normally leased or purchased on a 5 year
basis, there were some limitations on rapid movement in the equipment area without
serious financial consequences.
• Brainstorming sessions
• Benchmarking similar operations
• Investigations and trials of new equipment
• The use of spaghetti flow diagrams
• Simulated layouts (both computer generated and graphical cut-outs)
• Prototype equipment trials
• The installation of simple visual control systems
After the above activities were concluded, only two operators were necessary. The
resulting spaghetti chart for material flows is shown in Figure 11.2.
[Figure 11.2: spaghetti chart of the revised material flows, showing the printers, server, paper storage, shipping door, and the active and inactive doors]
So with two employees now working in the area, what happened to the third
employee? What happens when one of the two employees is sick or on vacation for
a week?
Normal attrition was factored into the short-term manpower equation. However, one
employee was soon moved into a technical writing position, so the over-all company
head count remained the same. Fortunately, QCI busy periods are not in desirable
vacation periods, and in most situations, one operator is sufficient for a week.
Furthermore, vacations are restricted to permit only one planned absence per week.
There are standard QCI procedures for making manuals, comb-bound books, and
CDs. However, specific customer orders drive the entire process and they vary
considerably. In an 80 customer order day, at least 75 orders will be different.
Specific material flow sequences under these conditions do not follow the same
step-by-step routines. Think about how order fulfillment works at a fast food
restaurant. There are detailed instructions for making french fries and hamburgers
but specific material flows depend upon the immediate desires of the customer.
Compared to Lands End, QCI has a relatively uncomplicated product base. However,
QCI has increased the number of available products from 12 in 1996 to 64 today (with
3 more in the works). Potential product combinations run to the millions.
In a few areas, QCI has been successful in reducing binder, tab, and cover inventory
levels. However, all binder and tab manufacturers offer a tremendous discount on
larger quantities. Visual systems are used to control inventories and work in
progress.
It should be noted that printer change over times are one minute maximum. Paper
scrap is now virtually non-existent. Sales have also increased 40% between Figures
11.1 and 11.2. Significant advances over the years have also included:
Software License Improvement at Exactus

Define Stage
Project Title
Issue and renew software licenses correctly and within the required time period.
Business Case
When a software license expires, the software stops working until a new license
number is entered in the system. If a customer has requested and completed the
license renewal prior to the expiration date, then the software should continue
working without any impediment. New software installations also require a proper
license number request. Two problems can occur after the customer has completed
his/her part: either the wrong license is issued or the license is not issued prior to
the expiration date. Both problems create a series of internal and external dilemmas.
Internally, a series of costly problems are generated and externally the customer is
stressed and dissatisfied.
Problem
Incorrect and late licenses were a chronic problem. During the month of November
2005, 5 ERP licenses were either wrong or late. In total, 46 out of 150 licenses had
some type of problem (this includes November and December, 2005 and January,
2006).
Goal/Scope
The goal was to completely eliminate problems with software licenses. This
includes both new and renewed licenses.
Data from the time window of November, 2005, December, 2005, and January, 2006,
showed the following results for 150 issued software licenses:
The causes of both wrong and late licenses were separated as follows:
Cause Frequency
Not knowing or not following procedures 8
Error from third party 3
Error in data entry 1
Lack of data 1
Wrong information 1
Cause Frequency
Internal software error 29
Not knowing or not following procedures 3
[Figure: Pareto chart of the license problem causes, with cause frequencies on the left axis and cumulative percentage on the right axis]
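The Pareto arithmetic behind the chart can be sketched as follows (this is not from the case study); the frequencies are those tabulated above, with the two "not knowing or not following procedures" counts combined.

# Illustrative sketch: Pareto ranking of the license problem causes.
causes = {
    "Internal software error": 29,
    "Not knowing or not following procedures": 8 + 3,
    "Error from third party": 3,
    "Error in data entry": 1,
    "Lack of data": 1,
    "Wrong information": 1,
}

total = sum(causes.values())      # 46 problem licenses in all
cumulative = 0
for cause, count in sorted(causes.items(), key=lambda kv: -kv[1]):
    cumulative += count
    print(f"{cause:42s} {count:3d}  {100 * cumulative / total:5.1f}%")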
The two main causes of wrong and late software licenses were:
Analyze Stage
A series of brainstorming sessions and current state process mapping revealed the
following root causes of problems:
1. The current process was so complicated that employees did not follow it.
3. The internal software created to issue software licenses did not work properly.
Software License Improvement at Exactus (Continued)
Improve Stage
Three actions were implemented to eliminate the problem:
An implementation schedule was developed. The following are the main elements
of the implementation stage.
Control Stage
Before: There was not a defined procedure for issuing new licenses. Two
departments had different and inconsistent procedures. Third-party
implementations had no procedure at all.
After: The new procedure covers all situations: new licenses, renewed licenses, and
third-party implementations.

Before: Information in the old software had to be included in the CRM system. No
license could be generated until all information was in both systems.
After: The CRM system was substituted for the old licensing software. All
information is completed in one system.

Before: The one person who generated the license information had no relation to the
sales department. Sales representatives were constantly called to get missing
information.
After: Licenses are generated by a sales assistant.

Before: Some countries were not considered in the old process.
After: All countries are part of the new process.

Before: Five employees from three departments took part in the old licensing
process.
After: Two employees are in charge of the whole process.

Before: The old procedure took 19 steps.
After: The new procedure takes 8 steps.

Before: Third-party implementations required double the effort to get a license.
After: Third-party implementations are now considered a regular sale from the
beginning.

Before: The old procedure was complex and bureaucratic.
After: The new system has an easier procedure.

Before: The old failure rate was 300,000 ppm.
After: So far, the new failure rate is 0 ppm.
Exactus now consistently uses DMAIC and lean six sigma practices for all
continuous improvement projects.
Lean Six Sigma Implementation at Ludovico
Introduction
Ludovico Producción Gráfica (Graphic Production) is a small printing shop in San
Jose, Costa Rica. The eight-year-old company principally serves international
corporations with operations in Costa Rica. Approximately 60% of Ludovico's
production involves manuals and inserts for electronic and medical device
companies. International customers such as Panduit, Arthrocare, Suttle, and Cytyc
require a commitment to quality and speed.
The purpose of this business case is to show a successful utilization of very simple
lean and six sigma tools in a small business environment. The very same tools that
apply to giants such as GE or Toyota, apply to small companies all over the world.
• The quality of the job goes beyond what is seen. Ludovico will only accept a
job if the specifications are completely understood.
• The quality of a job begins with the quality of the shop. The shop must always
be clean and orderly.
The above six core principles pave the way for other lean and six sigma applications.
Figure 11.8 shows monthly sales from December, 2004 to November, 2006.
[Figure 11.8: monthly sales in US dollars by month, December 2004 through November 2006, on a vertical scale from 10,000 to 80,000]
The upward trend that started in 2004 was marked by the arrival of a second multi-
national customer with manufacturing operations in Costa Rica. The previous
market was composed mainly of advertisement agencies, and marketing
departments. The ad and marketing business is not very demanding. Quantities can
drift as much as ±5%. An occasional faulty product is allowed and even expected.
Manufacturing companies, on the other hand, demand exact quantities, zero defects
and no delivery problems.
Lean Six Sigma Implementation at Ludovico (Continued)
V.E.R© The Foundation of Speed, Delivery and Quality
Along with the six core principles above, Ludovico centers its quality in a simple but
powerful system called V.E.R. ("to see" in Spanish). V.E.R. is an acronym for
VERIFICAR (verify), EJECUTAR (execute) and REVISAR (review or check). Simply
stated, the system verifies or confirms that all the elements, prior to printing, comply
with customer specifications (samples, engineering drawings, color proofs,
purchase orders, digital images, customer signed approvals, etc.); executes the
work order according to approved first piece and process controls; and verifies
quantities, final process inspections, special packaging instructions, etc.
For internal purposes and identification, the V.E.R. system is represented by the
popular Egyptian eye of Horus (or Ra). See Figure 11.9 below.
LUDOVICO®
produccion grafica
V.E.R.
VERIFICAR, EJECUTAR, REVISAR
When demand was softer and the print shop had extra capacity, there was an
opportunity to teach, for the first time, the concepts of value and flow to the
Ludovico troops.
A regular customer entered a routine monthly order of 10,000 magazines. It took the
shop about 2 days to print the 10,000 magazines and then 4 more days to complete
the binding and finishing portions. Figure 11.10 illustrates what went on in the
binding and finishing process.
[Figure 11.10: current state map of the binding and finishing process, showing the manual magazine collecting, magazine stapling, magazine folding, and trimming/packaging steps]
The four employees in the finishing department would receive the 10,000 printed but
unfinished magazines from the cutting machine. They would then spend one day
manually collecting the 10,000 magazines (Step 1 in the value stream map). During
the second day the same 4 operators would staple the magazine (Step 2 in the value
stream map). During a third day they would fold the 10,000 magazines (Step 3 in the
value stream map) and finally during the fourth day they would finish, trim, and
package the batch for delivery (Step 4 in the value stream map).
It is easy to calculate, from the cycle times, that less than a full day was required to
complete each step. However, the same 4 operators were not necessarily idle. They
often completed other small work orders.
Each day the crew would create a beautiful 10,000 magazine stack of WIP. Not until
4 days (and 4 seconds) was an actual single magazine ready. Operators were asked
what would happen if the customer entered the shop and requested a copy of their
magazine? About 16 seconds after the question was asked, a full magazine emerged
from the pile of WIP and was proudly given to the inquiring party.
The basic concept of flow, value, and WIP were introduced to the crew at that point.
At first they did not accept the new concepts at all. Some of them had worshiped
WIP for years and were afraid to try either a one piece flow or small batch approach.
After considerable discussion, a new method was created (see Figure 11.11). The
operators learned to act as a manufacturing cell, pulling one magazine at a time from
the previous operation. A fifth person (a handler) was introduced to the line. This
new task was undertaken to balance workload, move material, tally the finished
product, and set a pace for the group. The same batch that required 4 days for
completion was finished in 3 hours.
[Figure 11.11: future state map for magazine finishing, showing a cycle time of 2.5 seconds, a changeover time of 7 minutes, a batch size of 2 units, and a total time of 3 hours]
Notice that no sophisticated kanban or takt time calculations were actually needed.
For a "make to order" company like Ludovico, dramatically reducing times means
extra capacity to deliver more product to more customers.
From the day of this experiment, operators from the finishing department have
learned to balance their workload and achieve optimum work order flow. Thus, Work
Principle 3 "The customer determines the date of delivery" could be achieved and
even more customers could be accommodated.
When the shop was acquired in 1999, space was immediately a concern. The old
warehouse that houses Ludovico is about 4,300 square feet. Recent flow projects,
cubic space utilization, and other kaizen projects have resulted in 210 square feet
rescued for productive use.
Visual management has been fundamental to sustain the 200% plus growth
experienced during the last two years. Examples of visual management include a
planning board, a training board, a tool board, and a floor area layout. A first piece
control board is also very important. Figure 11.13 is a representation of the training
board which shows 13 floor operators versus all machines and manual processes.
Due to visual management techniques, rework, the main quality issue at Ludovico,
has been reduced from 57,142 ppm (3.08 sigma level) to 6,250 ppm (3.99 sigma
level), through November, 2006.
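A short sketch of the ppm-to-sigma-level conversion reflected in these figures, assuming SciPy is available and the conventional 1.5 sigma shift is applied:

# Illustrative sketch: converting a defect rate in ppm to a sigma level.
from scipy.stats import norm

def sigma_level(ppm: float, shift: float = 1.5) -> float:
    defect_rate = ppm / 1_000_000
    return norm.ppf(1 - defect_rate) + shift

print(f"{sigma_level(57_142):.2f}")  # about 3.08
print(f"{sigma_level(6_250):.2f}")   # about 4.0 (reported as 3.99)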
All operators at Ludovico have 100% authorization to stop a process whenever they
have any doubts about their performance. Employees are encouraged to ask
questions and find answers from their superior (or support staff). Both the V.E.R.
logo and visual management systems support this initiative.
• Some ideas failed from day one, but that did not discourage the teams from
trying new approaches.
• Ludovico could not have sustained its growth without lean and six sigma
tools.
• The use of tools is backed up by a higher set of work principles and work
culture. Without that higher purpose, the tools would have been marginally
effective.
Summary of Results
Sales have grown more than 200% during the last 2 years. Lean and six sigma tools
have been essential to sustain this dramatic growth. Repeat customers are the main
source of income for Ludovico.
A key process (magazine finishing) went from 4 days to 3 hours through the use of
lean techniques. Operators consistently use flow and flexible lines to comply with
customer delivery dates.
Almost 5% of unproductive floor space has been converted into productive floor
space.
Visual management and other techniques have decreased rework from 57,142 ppm
(3.08 sigma level) to 6,250 ppm (3.99 sigma level).
The reason for selecting this area for a kaizen event revolved around the need to
introduce new document scanning technology to replace the antiquated microfilming
system. This unionized organization did not have a very good track record in
making major work changes. Typically, changes faced difficulties, many false starts,
and a number of heated arguments with union representatives.
A five day kaizen event was planned to allow the team adequate time to learn and to
develop the best possible plan for later implementation by the team and others over
a three month period. Unlike traditional kaizens, where a team attempts to
implement changes in a single week, this was a special situation for a number of
reasons.
The workforce involved with the current state process worked in three departments
on two different floors of the office building. Secondly, there was to be considerable
expense putting in the necessary cabling and power to support the new scanner
technology, so they wanted to make sure the location was right the first time.
Thirdly, the optical recognition scanning technology was new to the company and
it was expected that there would be a significant learning curve. Lastly, there
needed to be adequate time to work out programming issues to integrate data from
the new scanning results with the existing IT infrastructure.
The steering committee provided a charter for the team: Find ways to take time out
of the process while at the same time working to improve productivity and the
quality of results. The first step in the process was to give this team a solid
foundation by providing adequate training. The training content included waste and
waste elimination, value stream mapping, one piece flow, and significant team
building activities. Working in teams was a new concept for the individuals involved.
The team was challenged to create a value stream map (VSM) and spaghetti diagram
of the current state processes, including an estimate of physical distances traveled
for in-bound paper claims.
The team created a spaghetti diagram which indicated the following travel distances
for the paper claims:
The team brainstormed and agreed upon the information required to give them an
accurate picture of the current state of the process. The VSM databox is shown in
Table 11.14.
The team broke into sub-teams and began gathering observations and data to
populate the data boxes with pertinent information. They also captured as many
ideas and recommendations for potential improvements as possible. Information
was recorded at each "touch" occurrence for the mail/documents passing through
the value stream.
Step name:
Place it occurs:
What the team found was that each and every piece of claim mail encountered at
least 10 significant hand-offs (touches), if all went well the first time! There were
many "artifacts" collected and displayed. Included were forms, screen shots of
necessary entry data, and tracking documents. The team went on to summarize the
VSM statistics and learned the following:
• The average paper claim cycle time (touch time) was 49 seconds.
• The paper claim through time was four 8 hour days (32 hours).
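Taken together, these two figures imply a very low process cycle efficiency (touch time as a share of through time), a standard lean calculation sketched below.

# Illustrative sketch: process cycle efficiency from the VSM statistics above.
touch_time_s = 49                 # average hands-on time per paper claim
through_time_s = 4 * 8 * 3600     # four 8-hour days, expressed in seconds

efficiency = touch_time_s / through_time_s
print(f"process cycle efficiency: {efficiency:.3%}")   # roughly 0.04%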
Needless to say, the team was highly motivated and interested in seeing what they
could do to eliminate all the delays and wasted efforts in the current state of the
value stream. Everyone agreed that making customers wait 4 days to start the
claims process was unacceptable!
The team agreed that there had to be a way to get all the processes into a single area
on the first floor of the building. This meant that many paradigms about the current
work had to be broken to condense the desired workflow into the smallest possible
space. One big shift was to move away from mail sorting and preparation at desks
and, instead, perform this work at a central pass-through mail sorting table. This
change would facilitate a smooth transition to the scanners.
Creative thinking was required to conceptualize a new layout. The team was
challenged: "Come up with a workflow that allows a piece of mail to continuously
move, without stopping, from the time it is opened to the time it is scanned." The
team accepted the challenge and developed the following conceptual work flow:
[Conceptual workflow diagram: mail slots feeding Rx, X-ray, and dental sorting behind a dust/noise barrier, with scan and non-scan kick-out paths, 30-day storage on the 3rd floor, and electronic transfer of the scanned claims]
Next, the team set the conceptual layout aside and began working with the actual
building footprint itself. They accepted the built-in limitations of the current building
and derived a layout that would come as close as possible to their "utopian" plan.
Three layout scenarios were developed. One option was selected. The team then
created an implementation plan to make the physical transition a reality.
• Improve productivity by 20% by July 31. This means reducing the cycle time
per piece through workflow improvements.
• Decrease the through-time for mail to the DC room by 50% by July 31. This
means all received mail will be delivered in no more than two days.
• Reduce the number of sorts in the mail room by 50% by June 30. The current
19 sorts would be reduced to 9 or 10.
• Reduce the physical movement of mail by 50%. That is, cut the travel distance
of 716 feet by half.
Implementation
Over several months the team continued to implement their previously developed
ideas to reduce movement, improve ergonomics and safety, provide materials at the
point of use, reduce handling, institute better flow, develop standard work, institute
5S activities, and utilize visual communication boards.
At the completion of the kaizen project, a circle team was chartered to sustain the
gains and continue the improvement process. This was accomplished without any
issues with the union. All changes were accomplished without the typical problems
encountered when management imposed workforce changes.
Today the average in-bound piece of mail is handled in under two (eight-hour) days
and the team is far from satisfied. The mail room team is also actively working with
other departments to identify activities they can perform to help eliminate other
wastes and speed information through the system. This department has made
continuous improvement part of their culture.
The Context
At Community Medical Center (CMC), two types of patients are sent to the diagnostic
departments for procedures: outpatients and inpatients. The outpatients come to
the hospital, register, complete a procedure, and leave on the same day. The
inpatients reside in the clinical departments overnight and are sent to the diagnostic
departments for various procedures depending on the medical necessity. Once the
procedure is complete, the patient is returned to the clinical department.
Some of the outpatients who come for the procedures are frail, and therefore, unable
to walk to the diagnostic department. It is the responsibility of the transportation
department to move the patient to the appropriate department. Similarly, it is the
responsibility of the transporter to move the inpatients to the diagnostic department
from the clinical departments when they are scheduled for procedures.
The A3 Process
A group of individuals representing the diagnostic departments (radiology,
endoscopy, special procedures, cardiology), nursing, transportation, and quality risk
management met to discuss the issue and initiate an A3 problem solving initiative.
These individuals formed the core A3 problem solving team.
To understand the problem first hand, the transportation manager and four
transporters observed the current process. They observed the request for patient
transportation process as it unfolded for 10 days. The manager also contacted and
interviewed different individuals in the diagnostic and clinical departments to get
first hand information about the process.
In the current state, somebody from the diagnostic department, usually a technician,
called or paged the transporter. At other times, somebody from the diagnostic
department called the ward secretary on the floors who then called the RN and the
transporter. The transporter did not know who was paging. Sometimes the
message the transporter received said, "Bring down John Doe to Radiology." There
was no information on the room number, bed number, floor, or area. The transporter
did not know from where the person was paging and did not always know whom to
call to clarify. He/she only knew a patient needed to be transported. A great deal of
time was, thus, expended by the transporter on patient search.
If the information was complete and the patient was ready for the procedure, the
transporter reached the patient and transported him or her to the diagnostic
department. However, in many situations when the transporter reached the patient
(usually inpatients), the patient was not ready and was in need of medications, MRI
screening, bathroom break, IV changes, etc. In these situations, as the patient was
not ready for transport, the transporter contacted the nurse. The transporter left the
room and waited for the call from the nurse when the patient was ready.
[Figure: current state diagram of a transport request, showing the diagnostic department, ward secretary, charge RN, patient, and transporter]
The A3 problem solving team brainstormed the root causes of the problems using
the "S-Whys" approach. The analysis of the first storm cloud revealed that the staff
members calling from the diagnostic department were often too busy to send written
messages to the transporter or to the floors, and therefore, the message lacked
complete information causing delays. The analysis of the second storm cloud
revealed that as the RNs or ward secretaries were sometimes not aware that a
patient needed a procedure, they failed to prepare the patient on time, which
eventually led to late arrival of the patient at the diagnostic department.
Based on the understanding of the current state and the associated root causes, the
team embarked on devising the target state. The transportation manager termed the
problem solving as "Road to Recovery."
The transportation manager drew the target state drawing on the A3 report as
illustrated below in Figure 11.22.
[Figure 11.22: target state drawing, showing the diagnostic department technician paging the charge RN and the transporter directly, with a target of less than 30 minutes from request to delivery]
• The diagnostic departments will beep the charge RN and the transporter at the
same time
As part of the implementation plan, the team created a specific action plan. First, a
designated transporter and a staff member responsible for communication in CMC
developed a "group page" whereby two or more people could be paged
simultaneously by the diagnostic departments. Second, the transportation manager
and the charge RNs met and developed a patient tracking sheet (a log sheet for the
floor staff to sign off when the patient is transported). Third, the transportation
manager and the designated transporter developed a reference card that contained
the pager numbers of the charge RNs of each clinical department and the transport
pager numbers that the diagnostic departments should page. It contains the
information that needs to be paged by the diagnostic department when asking for
a patient transport. This information included:
• Room number
• Patient's destination
The transportation manager set the target time from request to delivery at 30
minutes. When asked about the rationale for setting such a generous target, he responded,
"The procedural departments were happy with 30 minutes. They were tickled to
death. Moreover, most procedure departments schedule in 30-minute increments."
He carried out follow-up surveys at regular intervals to assess transport time. The
following table presents the collected data:
The transportation manager felt the A3 process was very effective for problem
solving in healthcare. He wrote, "I find the A3 process a very important tool for
evaluating problems and/or processes. It allows a person or team to look at how a
process flows and where the problem or workaround area may be. It promotes
teamwork on solving problems by giving a global and unbiased look into procedures. It
involves a positive thought process and invigorates the mind to think in alternative
ways of problem solving by including all aspects of a process. It gives all parties
involved a way to express and present their perceptions or data. On a scale of 1-10,
ten being the highest, I would rate the A3 process at a 10."
Reference
1. Sobek, D.K. (2007). "A Case Study in A3 Problem Solving." Montana State
University. Downloaded January 25, 2007 from
http://www.coe.montana.edu/IE/faculty/sobek/A3
Table I
Standard Normal Table
z     X.X0   X.X1   X.X2   X.X3   X.X4   X.X5   X.X6   X.X7   X.X8   X.X9
0.0 0.5000 0.4960 0.4920 0.4880 0.4840 0.4801 0.4761 0.4721 0.4681 0.4641
0.1 0.4602 0.4562 0.4522 0.4483 0.4443 0.4404 0.4364 0.4325 0.4286 0.4247
0.2 0.4207 0.4168 0.4129 0.4090 0.4052 0.4013 0.3974 0.3936 0.3897 0.3859
0.3 0.3821 0.3783 0.3745 0.3707 0.3669 0.3632 0.3594 0.3557 0.3520 0.3483
0.4 0.3446 0.3409 0.3372 0.3336 0.3300 0.3264 0.3228 0.3192 0.3156 0.3121
0.5 0.3085 0.3050 0.3015 0.2981 0.2946 0.2912 0.2877 0.2843 0.2810 0.2776
0.6 0.2743 0.2709 0.2676 0.2643 0.2611 0.2578 0.2546 0.2514 0.2483 0.2451
0.7 0.2420 0.2389 0.2358 0.2327 0.2297 0.2266 0.2236 0.2206 0.2177 0.2148
0.8 0.2119 0.2090 0.2061 0.2033 0.2005 0.1977 0.1949 0.1922 0.1894 0.1867
0.9 0.1841 0.1814 0.1788 0.1762 0.1736 0.1711 0.1685 0.1660 0.1635 0.1611
1.0 0.1587 0.1562 0.1539 0.1515 0.1492 0.1469 0.1446 0.1423 0.1401 0.1379
1.1 0.1357 0.1335 0.1314 0.1292 0.1271 0.1251 0.1230 0.1210 0.1190 0.1170
1.2 0.1151 0.1131 0.1112 0.1093 0.1075 0.1056 0.1038 0.1020 0.1003 0.0985
1.3 0.0968 0.0951 0.0934 0.0918 0.0901 0.0885 0.0869 0.0853 0.0838 0.0823
1.4 0.0808 0.0793 0.0778 0.0764 0.0749 0.0735 0.0721 0.0708 0.0694 0.0681
1.5 0.0668 0.0655 0.0643 0.0630 0.0618 0.0606 0.0594 0.0582 0.0571 0.0559
1.6 0.0548 0.0537 0.0526 0.0516 0.0505 0.0495 0.0485 0.0475 0.0465 0.0455
1.7 0.0446 0.0436 0.0427 0.0418 0.0409 0.0401 0.0392 0.0384 0.0375 0.0367
1.8 0.0359 0.0351 0.0344 0.0336 0.0329 0.0322 0.0314 0.0307 0.0301 0.0294
1.9 0.0287 0.0281 0.0274 0.0268 0.0262 0.0256 0.0250 0.0244 0.0239 0.0233
2.0 0.0228 0.0222 0.0217 0.0212 0.0207 0.0202 0.0197 0.0192 0.0188 0.0183
2.1 0.0179 0.0174 0.0170 0.0166 0.0162 0.0158 0.0154 0.0150 0.0146 0.0143
2.2 0.0139 0.0136 0.0132 0.0129 0.0125 0.0122 0.0119 0.0116 0.0113 0.0110
2.3 0.0107 0.0104 0.0102 0.0099 0.0096 0.0094 0.0091 0.0089 0.0087 0.0084
2.4 0.0082 0.0080 0.0078 0.0075 0.0073 0.0071 0.0069 0.0068 0.0066 0.0064
2.5 0.0062 0.0060 0.0059 0.0057 0.0055 0.0054 0.0052 0.0051 0.0049 0.0048
2.6 0.0047 0.0045 0.0044 0.0043 0.0041 0.0040 0.0039 0.0038 0.0037 0.0036
2.7 0.0035 0.0034 0.0033 0.0032 0.0031 0.0030 0.0029 0.0028 0.0027 0.0026
2.8 0.0026 0.0025 0.0024 0.0023 0.0023 0.0022 0.0021 0.0021 0.0020 0.0019
2.9 0.0019 0.0018 0.0018 0.0017 0.0016 0.0016 0.0015 0.0015 0.0014 0.0014
3.0 0.00135
A standard normal table covers one half of the normal curve: each entry gives the
proportion of the total area that lies beyond (to the right of) the stated Z value. Look
at the values for Z = 0, 1.0, 2.0, and 3.0 to better understand this concept.
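For readers who wish to verify or extend Table I, these right-tail areas can be
computed from the complementary error function, P(Z > z) = 0.5·erfc(z/√2). The
Python sketch below is purely illustrative; the handbook itself does not prescribe
any software.

    import math

    def right_tail_area(z: float) -> float:
        """Area under the standard normal curve to the right of z: P(Z > z)."""
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    # Reproduce the entries highlighted above
    for z in (0.0, 1.0, 2.0, 3.0):
        print(f"z = {z:.1f}   P(Z > z) = {right_tail_area(z):.4f}")
    # Table I lists 0.5000, 0.1587, 0.0228, and 0.00135 for these z values.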
Table III
Poisson Distribution
Probability of r or fewer occurrences of an event that has an average number of
occurrences equal to np.
np \ r   0      1      2  3  4  5  6  7  8  9
0.02     0.980  1.000
Table III
Poisson Distribution (Continued)
np \ r   0      1      2      3      4      5      6      7      8      9
2.2      0.111  0.355  0.623  0.819  0.928  0.975  0.993  0.998  1.000
2.4 0.091 0.308 0.570 0.779 0.904 0.964 0.988 0.997 0.999 1.000
2.6 0.074 0.267 0.518 0.736 0.877 0.951 0.983 0.995 0.999 1.000
2.8 0.061 0.231 0.469 0.692 0.848 0.935 0.976 0.992 0.998 0.999
3.0 0.050 0.199 0.423 0.647 0.815 0.916 0.966 0.988 0.996 0.999
3.2 0.041 0.171 0.380 0.603 0.781 0.895 0.955 0.983 0.994 0.998
3.4 0.033 0.147 0.340 0.558 0.744 0.871 0.942 0.977 0.992 0.997
3.6 0.027 0.126 0.303 0.515 0.706 0.844 0.927 0.969 0.988 0.996
3.8 0.022 0.107 0.269 0.473 0.668 0.816 0.909 0.960 0.984 0.994
4.0 0.018 0.092 0.238 0.433 0.629 0.785 0.889 0.949 0.979 0.992
4.2 0.015 0.078 0.210 0.395 0.590 0.753 0.867 0.936 0.972 0.989
4.4 0.012 0.066 0.185 0.359 0.551 0.720 0.844 0.921 0.964 0.985
4.6 0.010 0.056 0.163 0.326 0.513 0.686 0.818 0.905 0.955 0.980
4.8 0.008 0.048 0.143 0.294 0.476 0.651 0.791 0.887 0.944 0.975
5.0 0.007 0.040 0.125 0.265 0.440 0.616 0.762 0.867 0.932 0.968
5.2 0.006 0.034 0.109 0.238 0.406 0.581 0.732 0.845 0.918 0.960
5.4 0.005 0.029 0.095 0.213 0.373 0.546 0.702 0.822 0.903 0.951
5.6 0.004 0.024 0.082 0.191 0.342 0.512 0.670 0.797 0.886 0.941
5.8 0.003 0.021 0.072 0.170 0.313 0.478 0.638 0.771 0.867 0.929
6.0 0.002 0.017 0.062 0.151 0.285 0.446 0.606 0.744 0.847 0.916
np \ r   10     11     12     13     14     15     16
2.8 1.000
3.0 1.000
3.2 1.000
3.4 0.999 1.000
3.6 0.999 1.000
3.8 0.998 0.999 1.000
4.0 0.997 0.999 1.000
Table III
Poisson Distribution (Continued)
np \ r   0      1      2      3      4      5      6      7      8      9
6.2      0.002  0.015  0.054  0.134  0.259  0.414  0.574  0.716  0.826  0.902
6.4 0.002 0.012 0.046 0.119 0.235 0.384 0.542 0.687 0.803 0.886
6.6 0.001 0.010 0.040 0.105 0.213 0.355 0.511 0.658 0.780 0.869
6.8 0.001 0.009 0.034 0.093 0.192 0.327 0.480 0.628 0.755 0.850
7.0 0.001 0.007 0.030 0.082 0.173 0.301 0.450 0.599 0.729 0.830
7.2 0.001 0.006 0.025 0.072 0.156 0.276 0.420 0.569 0.703 0.810
7.4 0.001 0.005 0.022 0.063 0.140 0.253 0.392 0.539 0.676 0.788
7.6 0.001 0.004 0.019 0.055 0.125 0.231 0.365 0.510 0.648 0.765
7.8 0.000 0.004 0.016 0.048 0.112 0.210 0.338 0.481 0.620 0.741
8.0 0.000 0.003 0.014 0.042 0.100 0.191 0.313 0.453 0.593 0.717
8.5 0.000 0.002 0.009 0.030 0.074 0.150 0.256 0.366 0.523 0.653
9.0 0.000 0.001 0.006 0.021 0.055 0.116 0.207 0.324 0.456 0.587
9.5 0.000 0.001 0.004 0.015 0.040 0.089 0.165 0.269 0.393 0.522
10.0 0.000 0.000 0.003 0.010 0.029 0.067 0.130 0.220 0.333 0.458
np \ r   10     11     12     13     14     15     16     17     18     19
6.2 0.949 0.975 0.989 0.995 0.998 0.999 1.000
6.4 0.939 0.969 0.986 0.994 0.997 0.999 1.000
6.6 0.927 0.963 0.982 0.992 0.997 0.999 0.999 1.000
6.8 0.915 0.955 0.978 0.990 0.996 0.998 0.999 1.000
7.0 0.901 0.947 0.973 0.987 0.994 0.998 0.999 1.000
7.2 0.887 0.937 0.967 0.984 0.993 0.997 0.999 0.999 1.000
7.4 0.871 0.926 0.961 0.980 0.991 0.996 0.998 0.999 1.000
7.6 0.854 0.915 0.954 0.976 0.989 0.995 0.998 0.999 1.000
7.8 0.835 0.902 0.945 0.971 0.986 0.993 0.997 0.999 1.000
8.0 0.816 0.888 0.936 0.966 0.983 0.992 0.996 0.998 0.999 1.000
8.5 0.763 0.849 0.909 0.949 0.973 0.986 0.993 0.997 0.999 0.999
9.0 0.706 0.803 0.876 0.926 0.959 0.978 0.989 0.995 0.998 0.999
9.5 0.645 0.752 0.836 0.898 0.940 0.967 0.982 0.991 0.996 0.998
10.0 0.583 0.697 0.792 0.864 0.917 0.951 0.973 0.986 0.993 0.997
np \ r   20     21     22
8.5 1.000
9.0 1.000
9.5 0.999 1.000
10.0 0.998 0.999 1.000
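Entries in Table III can be reproduced directly from the cumulative Poisson
formula, P(X ≤ r) = Σ e^(-np)·(np)^k / k! for k = 0 to r. The Python sketch below is
illustrative only and is not part of the original table.

    import math

    def poisson_cdf(r: int, mean: float) -> float:
        """P(r or fewer occurrences) for a Poisson distribution with the given mean (np)."""
        return sum(math.exp(-mean) * mean ** k / math.factorial(k) for k in range(r + 1))

    # Reproduce a few entries of Table III
    print(round(poisson_cdf(3, 2.2), 3))    # 0.819
    print(round(poisson_cdf(7, 6.8), 3))    # 0.628
    print(round(poisson_cdf(9, 10.0), 3))   # 0.458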
Table IV
Binomial Distribution
Probability of r or fewer successes in n trials, where p is the probability of success.
n  r    p = 0.05  0.10   0.15   0.20   0.25   0.30   0.35   0.40   0.45   0.50
2 0 0.9025 0.8100 0.7225 0.6400 0.5625 0.4900 0.4225 0.3600 0.3025 0.2500
1 0.9975 0.9900 0.9775 0.9600 0.9375 0.9100 0.8775 0.8400 0.7975 0.7500
3 0 0.8574 0.7290 0.6141 0.5120 0.4219 0.3430 0.2746 0.2160 0.1664 0.1250
1 0.9928 0.9720 0.9392 0.8960 0.8438 0.7840 0.7182 0.6480 0.5748 0.5000
2 0.9999 0.9990 0.9966 0.9920 0.9844 0.9730 0.9571 0.9360 0.9089 0.8750
4 0 0.8145 0.6561 0.5220 0.4096 0.3164 0.2401 0.1785 0.1296 0.0915 0.0625
1 0.9860 0.9477 0.8905 0.8192 0.7383 0.6517 0.5630 0.4752 0.3910 0.3125
2 0.9995 0.9963 0.9880 0.9728 0.9492 0.9163 0.8735 0.8208 0.7585 0.6875
3 1.0000 0.9999 0.9995 0.9984 0.9961 0.9919 0.9850 0.9744 0.9590 0.9375
5 0 0.7738 0.5905 0.4437 0.3277 0.2373 0.1681 0.1160 0.0778 0.0503 0.0312
1 0.9774 0.9185 0.8352 0.7373 0.6328 0.5282 0.4284 0.3370 0.2562 0.1875
2 0.9988 0.9914 0.9734 0.9421 0.8965 0.8369 0.7648 0.6826 0.5931 0.5000
3 1.0000 0.9995 0.9978 0.9933 0.9844 0.9692 0.9460 0.9130 0.8688 0.8125
4 1.0000 1.0000 0.9999 0.9997 0.9990 0.9976 0.9947 0.9898 0.9815 0.9688
6 0 0.7351 0.5314 0.3771 0.2621 0.1780 0.1176 0.0754 0.0467 0.0277 0.0156
1 0.9672 0.8857 0.7765 0.6554 0.5339 0.4202 0.3191 0.2333 0.1636 0.1094
2 0.9978 0.9842 0.9527 0.9011 0.8306 0.7443 0.6471 0.5443 0.4415 0.3438
3 0.9999 0.9987 0.9941 0.9830 0.9624 0.9295 0.8826 0.8208 0.7447 0.6562
4 1.0000 0.9999 0.9996 0.9984 0.9954 0.9891 0.9777 0.9590 0.9308 0.8906
5 1.0000 1.0000 1.0000 0.9999 0.9998 0.9993 0.9982 0.9959 0.9917 0.9844
7 0 0.6983 0.4783 0.3206 0.2097 0.1335 0.0824 0.0490 0.0280 0.0152 0.0078
1 0.9556 0.8503 0.7166 0.5767 0.4449 0.3294 0.2338 0.1586 0.1024 0.0625
2 0.9962 0.9743 0.9262 0.8520 0.7564 0.6471 0.5323 0.4199 0.3164 0.2266
3 0.9998 0.9973 0.9879 0.9667 0.9294 0.8740 0.8002 0.7102 0.6083 0.5000
4 1.0000 0.9998 0.9988 0.9953 0.9871 0.9712 0.9444 0.9037 0.8471 0.7734
5 1.0000 1.0000 0.9999 0.9996 0.9987 0.9962 0.9910 0.9812 0.9643 0.9375
6 1.0000 1.0000 1.0000 1.0000 0.9999 0.9998 0.9994 0.9984 0.9963 0.9922
8 0 0.6634 0.4305 0.2725 0.1678 0.1001 0.0576 0.0319 0.0168 0.0084 0.0039
1 0.9428 0.8131 0.6572 0.5033 0.3671 0.2553 0.1691 0.1064 0.0632 0.0352
2 0.9942 0.9619 0.8948 0.7969 0.6785 0.5518 0.4278 0.3154 0.2201 0.1445
3 0.9996 0.9950 0.9786 0.9437 0.8862 0.8059 0.7064 0.5941 0.4770 0.3633
4 1.0000 0.9996 0.9971 0.9896 0.9727 0.9420 0.8939 0.8263 0.7396 0.6367
5 1.0000 1.0000 0.9998 0.9988 0.9958 0.9887 0.9747 0.9502 0.9115 0.8555
6 1.0000 1.0000 1.0000 0.9999 0.9996 0.9987 0.9964 0.9915 0.9819 0.9648
7 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9998 0.9993 0.9983 0.9961
9 0 0.6302 0.3874 0.2316 0.1342 0.0751 0.0404 0.0207 0.0101 0.0046 0.0020
1 0.9288 0.7748 0.5995 0.4362 0.3003 0.1960 0.1211 0.0705 0.0385 0.0195
2 0.9916 0.9470 0.8591 0.7382 0.6007 0.4628 0.3373 0.2318 0.1495 0.0898
3 0.9994 0.9917 0.9661 0.9144 0.8343 0.7297 0.6089 0.4826 0.3614 0.2539
4 1.0000 0.9991 0.9944 0.9804 0.9511 0.9012 0.8283 0.7334 0.6214 0.5000
5 1.0000 0.9999 0.9994 0.9969 0.9900 0.9747 0.9464 0.9006 0.8342 0.7461
6 1.0000 1.0000 1.0000 0.9997 0.9987 0.9957 0.9888 0.9750 0.9502 0.9102
7 1.0000 1.0000 1.0000 1.0000 0.9999 0.9996 0.9986 0.9962 0.9909 0.9805
8 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9997 0.9992 0.9980
10 0 0.5987 0.3487 0.1969 0.1074 0.0563 0.0282 0.0135 0.0060 0.0025 0.0010
1 0.9139 0.7361 0.5443 0.3758 0.2440 0.1493 0.0860 0.0464 0.0232 0.0107
2 0.9885 0.9298 0.8202 0.6778 0.5256 0.3828 0.2616 0.1673 0.0996 0.0547
3 0.9990 0.9872 0.9500 0.8791 0.7759 0.6496 0.5138 0.3823 0.2660 0.1719
4 0.9999 0.9984 0.9901 0.9672 0.9219 0.8497 0.7515 0.6331 0.5044 0.3770
5 1.0000 0.9999 0.9986 0.9936 0.9803 0.9527 0.9051 0.8338 0.7384 0.6230
6 1.0000 1.0000 0.9999 0.9991 0.9965 0.9894 0.9740 0.9452 0.8980 0.8281
7 1.0000 1.0000 1.0000 0.9999 0.9996 0.9984 0.9952 0.9877 0.9726 0.9453
8 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9995 0.9983 0.9955 0.9893
9 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9999 0.9997 0.9990
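The cumulative binomial entries above follow from P(X ≤ r) = Σ C(n, k)·p^k·(1-p)^(n-k)
for k = 0 to r. A minimal Python sketch (illustrative, not from the handbook) is:

    import math

    def binomial_cdf(r: int, n: int, p: float) -> float:
        """P(r or fewer successes) in n trials with success probability p."""
        return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(r + 1))

    # Reproduce a few entries of the table above
    print(round(binomial_cdf(2, 5, 0.30), 4))    # 0.8369
    print(round(binomial_cdf(3, 10, 0.50), 4))   # 0.1719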
Table V
t Distribution
There is only a 5% probability that a sample with 10 degrees of freedom will have
a t value greater than 1.812.
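The t critical value quoted above can be checked with a statistics library; the call
below is a minimal sketch assuming scipy is available (the handbook does not
reference any particular software).

    from scipy import stats

    # Upper 5% critical value of the t distribution with 10 degrees of freedom
    print(round(stats.t.ppf(0.95, df=10), 3))   # 1.812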
Table VI
Critical Values of the Chi-Square (X2) Distribution
DF      χ²(0.99)   χ²(0.95)   χ²(0.90)   χ²(0.10)   χ²(0.05)   χ²(0.01)
1 0.00016 0.0039 0.0158 2.71 3.84 6.63
2 0.0201 0.1026 0.2107 4.61 5.99 9.21
3 0.115 0.352 0.584 6.25 7.81 11.34
4 0.297 0.711 1.064 7.78 9.49 13.28
5 0.554 1.15 1.61 9.24 11.07 15.09
6 0.872 1.64 2.20 10.64 12.59 16.81
7 1.24 2.17 2.83 12.02 14.07 18.48
8 1.65 2.73 3.49 13.36 15.51 20.09
9 2.09 3.33 4.17 14.68 16.92 21.67
10 2.56 3.94 4.87 15.99 18.31 23.21
11 3.05 4.57 5.58 17.28 19.68 24.73
12 3.57 5.23 6.30 18.55 21.03 26.22
13 4.11 5.89 7.04 19.81 22.36 27.69
14 4.66 6.57 7.79 21.06 23.68 29.14
15 5.23 7.26 8.55 22.31 25.00 30.58
16 5.81 7.96 9.31 23.54 26.30 32.00
18 7.01 9.39 10.86 25.99 28.87 34.81
20 8.26 10.85 12.44 28.41 31.41 37.57
24 10.86 13.85 15.66 33.20 36.42 42.98
30 14.95 18.49 20.60 40.26 43.77 50.89
40 22.16 26.51 29.05 51.81 55.76 63.69
60 37.48 43.19 46.46 74.40 79.08 88.38
120 86.92 95.70 100.62 140.23 146.57 158.95
The above table addresses both the left and right tails of χ². 95% of the area under
the χ² distribution lies to the right of χ²(0.95). Example: for a right-tail evaluation with
D.F. = 5, only 5% of the values will exceed the critical value of 11.07.
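The chi-square critical values in Table VI can be verified the same way; the scipy
sketch below again assumes that library and is not part of the original handbook.

    from scipy import stats

    # Critical value with 5% of the area in the right tail, D.F. = 5
    print(round(stats.chi2.ppf(0.95, df=5), 2))   # 11.07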
Table VII
Distribution of Fα (α = 0.05)
[Diagram: F distribution density f(F) with the right-tail area α beyond Fα shaded.]
v1(DF)
v2(DF) 1 2 3 4 5 6 7 8 9 10 12 15
1 161.4 199.5 215.7 224.6 230.2 234.0 236.8 238.9 240.5 241.9 243.9 245.9
2 18.51 19.00 19.16 19.25 19.30 19.33 19.35 19.37 19.38 19.40 19.41 19.43
3 10.13 9.55 9.28 9.12 9.01 8.94 8.89 8.85 8.81 8.79 8.74 8.70
4 7.71 6.94 6.59 6.39 6.26 6.16 6.09 6.04 6.00 5.96 5.91 5.86
5 6.61 5.79 5.41 5.19 5.05 4.95 4.88 4.82 4.77 4.74 4.68 4.62
6 5.99 5.14 4.76 4.53 4.39 4.28 4.21 4.15 4.10 4.06 4.00 3.94
7 5.59 4.74 4.35 4.12 3.97 3.87 3.79 3.73 3.68 3.64 3.57 3.51
8 5.32 4.46 4.07 3.84 3.69 3.58 3.50 3.44 3.39 3.35 3.28 3.22
9 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18 3.14 3.07 3.01
10 4.96 4.10 3.71 3.48 3.33 3.22 3.14 3.07 3.02 2.98 2.91 2.85
11 4.84 3.98 3.59 3.36 3.20 3.09 3.01 2.95 2.90 2.85 2.79 2.72
12 4.75 3.89 3.49 3.26 3.11 3.00 2.91 2.85 2.80 2.75 2.69 2.62
13 4.67 3.81 3.41 3.18 3.03 2.92 2.83 2.77 2.71 2.67 2.60 2.53
14 4.60 3.74 3.34 3.11 2.96 2.85 2.76 2.70 2.65 2.60 2.53 2.46
15 4.54 3.68 3.29 3.06 2.90 2.79 2.71 2.64 2.59 2.54 2.48 2.40
v1(DF)
v2(DF)   20   30   40   50   60   ∞
Table VIII
Distribution of Fα (α = 0.025)
[Diagram: F distribution density f(F) with the right-tail area α beyond Fα shaded.]
v1(DF)
v2(DF) 1 2 3 4 5 6 7 8 9 10 12 15
1 647.8 799.5 864.2 899.6 921.8 937.1 948.2 956.7 963.3 968.6 976.7 984.9
2 38.51 39.00 39.17 39.25 39.30 39.33 39.36 39.37 39.39 39.40 39.41 39.43
3 17.44 16.04 15.44 15.10 14.88 14.73 14.62 14.54 14.47 14.42 14.34 14.25
4 12.22 10.65 9.98 9.60 9.36 9.20 9.07 8.98 8.90 8.84 8.75 8.66
5 10.01 8.43 7.76 7.39 7.15 6.98 6.85 6.76 6.68 6.62 6.52 6.43
6 8.81 7.26 6.60 6.23 5.99 5.82 5.70 5.60 5.52 5.46 5.37 5.27
7 8.07 6.54 5.89 5.52 5.29 5.12 4.99 4.90 4.82 4.76 4.67 4.57
8 7.57 6.06 5.42 5.05 4.82 4.65 4.53 4.43 4.36 4.30 4.20 4.10
9 7.21 5.71 5.08 4.72 4.48 4.32 4.20 4.10 4.03 3.96 3.87 3.77
10 6.94 5.46 4.83 4.47 4.24 4.07 3.95 3.85 3.78 3.72 3.62 3.52
11 6.72 5.26 4.63 4.28 4.04 3.88 3.76 3.66 3.59 3.53 3.43 3.33
12 6.55 5.10 4.47 4.12 3.89 3.73 3.61 3.51 3.44 3.37 3.28 3.18
13 6.41 4.97 4.35 4.00 3.77 3.60 3.48 3.39 3.31 3.25 3.15 3.05
14 6.30 4.86 4.24 3.89 3.66 3.50 3.38 3.29 3.21 3.15 3.05 2.95
15 6.20 4.77 4.15 3.80 3.58 3.41 3.29 3.20 3.12 3.06 2.96 2.86
v1(DF)
v2(DF)   20   30   40   50   60   ∞
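F critical values for other α levels and degrees of freedom can be generated in
software; the scipy sketch below (an assumed tool, not named by the handbook)
reproduces one entry from each of Tables VII and VIII.

    from scipy import stats

    # Table VII entry: F(0.05) with v1 = 9 and v2 = 1 degrees of freedom
    print(round(stats.f.ppf(0.95, dfn=9, dfd=1), 1))    # 240.5
    # Table VIII entry: F(0.025) with v1 = 1 and v2 = 5 degrees of freedom
    print(round(stats.f.ppf(0.975, dfn=1, dfd=5), 2))   # 10.01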
Table IX
Control Chart Factors
n     A2     A3     c4     B3     B4     d2     D3     D4
The above table is extracted from Table 27 in ASTM Manual on Presentation of Data
and Control Chart Analysis (1976), ASTM Publication STP15D, American Society for
Testing and Materials, Philadelphia, pp. 134-135. Used with permission.
X̄ - R Charts                          X̄ - S Charts
CL(X̄) = X̿ ± A2·R̄                      CL(X̄) = X̿ ± A3·S̄
UCL(R) = D4·R̄                          UCL(S) = B4·S̄
LCL(R) = D3·R̄                          LCL(S) = B3·S̄
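To illustrate how the factors in Table IX are applied, the sketch below computes
X̄ and R chart limits for a small set of hypothetical subgroups of size n = 5, using
the standard factor values A2 = 0.577, D3 = 0, and D4 = 2.114 for that subgroup
size. The data are invented for illustration and are not from the handbook.

    # Hypothetical data: three subgroups of size n = 5 (invented for illustration).
    # Standard factors for n = 5: A2 = 0.577, D3 = 0, D4 = 2.114.
    A2, D3, D4 = 0.577, 0.0, 2.114

    subgroups = [
        [10.2, 9.8, 10.1, 10.0, 9.9],
        [10.4, 10.1, 9.7, 10.2, 10.0],
        [9.9, 10.0, 10.3, 9.8, 10.1],
    ]

    xbars = [sum(s) / len(s) for s in subgroups]      # subgroup averages
    ranges = [max(s) - min(s) for s in subgroups]     # subgroup ranges
    xbarbar = sum(xbars) / len(xbars)                 # grand average (X-double-bar)
    rbar = sum(ranges) / len(ranges)                  # average range (R-bar)

    print(f"X-bar chart: CL = {xbarbar:.3f}, UCL = {xbarbar + A2 * rbar:.3f}, "
          f"LCL = {xbarbar - A2 * rbar:.3f}")
    print(f"R chart:     CL = {rbar:.3f}, UCL = {D4 * rbar:.3f}, LCL = {D3 * rbar:.3f}")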
1. d 1. d 1. d 1. b 23. a 1. d 25. a
2. c 2. c 2. c 2. c 24. c 2. a 26. c
3. d 3. d 3. d 3. b 25. d 3. c 27. d
4. d 4. a 4. c 4. a 26. a 4. d 28. a
5. c 5. c 5. d 5. d 27. b 5. c 29. d
6. a 6. a 6. a 6. b 28. b 6. b 30. b
7. c 7. d 7. d 7. d 29. a 7. c 31. b
8. a 8. b 8. d 8. d 30. b 8. b 32. a
9. d 9. c 9. b 9. b 31. a 9. a 33. b
10. d 10. d 10. b 10. b 32. a 10. a 34. b
11. d 11. c 11. a 11. c 33. d 11. c 35. c
12. b 12. d 12. b 12. b 34. c 12. c 36. d
13. c 13. b 13. b 13. c 35. c 13. b 37. c
14. b 14. b 14. a 14. d 36. d 14. c 38. d
15. a 15. c 15. d 15. a 37. c 15. d 39. a
16. c 16. c 16. d 16. c 38. d 16. b 40. b
17. b 17. c 17. d 17. b 39. c 17. d 41. c
18. b 18. c 18. c 18. b 40. b 18. c 42. a
19. a 19. b 19. d 19. a 41. c 19. b 43. a
20. d 20. a 20. a 20. d 42. b 20. b 44. c
21. b 21. c 21. d 21. a 43. c 21. a 45. b
22. d 22. b 22. a 22. b 44. a 22. b 46. b
23. c 23. b 23. b 23. d 47. d
24. b 24. d 24. b 24. c 48. c
25. a 25. c 25. d
26. b 26. a 26. b
27. a 27. a 27. c
28. a 28. d 28. c
29. c 29. a 29. b
30. c 30. a 30. a
31. c 31. a 31. a
32. a 32. c 32. a
33. a 33. d 33. b
34. b 34. d 34. a
35. b 35. a 35. b
36. b 36. a 36. a
37. b 37. d 37. a
38. b 38. a 38. a
39. c 39. c 39. c
40. b 40. a 40. a
CLASSROOM NOTES