Lesson I: Overview of Six Sigma and Organizational Goals


Overview of Six Sigma and Organizational Goals - Agenda
 About CSSGB

 Introduction to Six Sigma

 Six Sigma and Organizational Goals


o Value of Six Sigma
o Organizational Drivers and Metrics
o Organizational Goals and Six Sigma projects

 Lean principles in the organization


o Lean Concepts and Tools
o Value-added and Non-value-added activities
o Theory of Constraints

 Design for Six Sigma (DFSS) in the organization


o Quality Function Deployment (QFD)
o Design and process failure mode and effects analysis (FMEA,
DFMEA & PFMEA)
o Roadmaps for DFSS
Introduction
Agenda

o What is CSSGB?

o What are CSSGB requirements?

o About the CSSGB exam


What is CSSGB?
 CSSGB: Certified Six Sigma Green Belt
 Awarded to an individual; it is the first step of professional Six Sigma
certification
 After completing CSSGB, a trainee will be able to use basic statistical tools
and to complete short-run departmental, product-line, business-process, or
service projects within a Line of Business (LOB)
 Requirements:
 At least 2-3 years of work experience in any niche and any sector (Six
Sigma is an industry-neutral discipline and can be applied across some 70
different sectors)
 BOK: Body of Knowledge
 The Body of Knowledge is like the Table of Contents for any Six Sigma
certification
 The training material follows the lines of ASQ (American Society for
Quality, the premier training agency worldwide in the niche of Six Sigma)
About: CSSGB Exam
 The exam measures comprehension of the CSSGB BOK
 Total number of questions: 100, multiple choice
 Duration of the exam: 4 hours
 For the United States, the exam is conducted year round. Details of
locations and dates are available on the ASQ website
 For locations other than the United States, the exam is conducted in 66
countries by international certification affiliates of ASQ, in the months of
June and December. ASQ will make testing arrangements after you register
for the exam and choose your preferred location
 For countries not on the list, contact ASQ for details
 The CSSGB examination can be taken any time after completion of the
SSGB Training Program from the institute.
 The CSSGB exam is an open-book exam. You are allowed to refer to the
training module, online sources, and tables prescribed by the facilitator.
About: CSSGB Exam

Overview: Six Sigma and the Organization: 13 questions

Six Sigma – Define: 23 questions

Six Sigma – Measure: 23 questions

Six Sigma – Analyze: 15 questions

Six Sigma – Improve and Control: 15 + 11 questions

Lesson II
Introduction to Six Sigma

Agenda

 What is Six Sigma?
 Why is Six Sigma useful?
 How does Six Sigma work?
 What is Quality?
Basics of Six Sigma
 A highly disciplined process that focuses on developing and delivering
near-perfect products and services consistently

 It is a continuous improvement process, with focus on change
empowerment, seamless training of resources, and consistent top
management support
Basics of Six Sigma
 A process is a series of steps designed to produce a product and/or service
as required by the customer

 Each input can be classified as: Controllable (C), Non-controllable (NC),
Noise (N), or Critical (X)

 Feedback:
•Helps in process control
•Depending on the nature of the output(s), feedback suggests changes to input(s), which
in turn change the output(s) to match the desired specification

 A common feature of any such process is its emphasis on inputs and outputs
•An input is something put into a process, or expended in its operation, to achieve an
output or a result
•An output is the final product or service delivered to an internal/external customer
•Output(s) of one process can be input(s) to another process
•If the inputs are bad, then irrespective of the process, the output will be bad

 Management is interested in
•Defining the points from which data is to be collected
•The measurement system to be used
•Analysis of the data collected
•Use of the information generated from the data to improve the process
•Real-time feedback that triggers changes to inputs or processes
•Generation of an improvement plan

 Other versions of the above diagram are process maps, value stream maps,
etc.
Process for Six Sigma - DMAIC
The process for Six Sigma is DMAIC:
•Define: Define the problem statement and plan the improvement initiative
•Measure: Collect data from the process and understand current quality
levels/operational performance levels
•Analyze: Study the business process and the data generated to understand
the root causes of the problem resulting in variations in the process
•Improve: Identify possible improvement actions, prioritize them, test the
improvements, and finalize the improvement action plan
•Control: Full-scale implementation of the improvement action plan; set up
controls to monitor the system so that gains are sustained

DMAIC is used for process improvements, while DFSS is used for designing a
new process or product, or for re-engineering. Detailed text on DFSS appears
in later chapters.
What is Six Sigma?
 Six Sigma thinking: All processes can be Defined, Measured, Analyzed,
Improved, and Controlled (the phases of Six Sigma). The collection of these
phases is popularly known as DMAIC. Any process has inputs (x) and delivers
outputs (y). Controlling the inputs will control the outputs. This is
y = f(x) thinking.

 Six Sigma as a set of tools: Contains the qualitative and quantitative tools
which Six Sigma practitioners use to drive improvements. Examples include
Control Charts, FMEA, Process Mapping, etc.

 The DFSS approach is helpful for designing new processes, while DMAIC
improves existing processes.

 Metric: Six Sigma quality means 3.4 defects in 1 million opportunities, or a
process with 99.99966% Rolled Throughput Yield. This assumes a 1.5 sigma
shift in the process mean.

 Sigma: The standard deviation of a process metric.

 Opportunity: Every chance for a process to deliver an output that is either
"Right" or "Wrong", as per the customer's specifications. In other words, an
opportunity is every possible chance of making an error. Six Sigma projects
are often referred to as opportunities.

 Defect: Every result of an opportunity that does not meet the customer's
specifications, i.e. does not fall within the Upper Specification Limit (USL)
and Lower Specification Limit (LSL).

 Specification limits: Limits always set by the customer, never by the
business. These limits represent the range of variation the customer can
tolerate/accept.
Six Sigma Process

Sigma (σ) | Defects per million opportunities | Rolled Throughput Yield
1         | 697,672                           | 30.2328%
2         | 308,537                           | 69.1463%
3         | 66,807                            | 93.3193%
4         | 6,210                             | 99.3790%
5         | 233                               | 99.9767%
6         | 3.4                               | 99.99966%
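The two right-hand columns of the table are complements of each other: a minimal sketch checking that Rolled Throughput Yield = 1 - DPMO / 1,000,000 for each row:

```python
# Rolled Throughput Yield (RTY) is the complement of the defect rate
# expressed per million opportunities: RTY = 1 - DPMO / 1,000,000.
dpmo_by_sigma = {1: 697_672, 2: 308_537, 3: 66_807, 4: 6_210, 5: 233, 6: 3.4}

def rolled_throughput_yield(dpmo):
    """Convert defects-per-million-opportunities to a yield fraction."""
    return 1 - dpmo / 1_000_000

# e.g. 6 sigma: 1 - 3.4/1,000,000 = 0.9999966, i.e. 99.99966%
yields = {s: rolled_throughput_yield(d) for s, d in dpmo_by_sigma.items()}
```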
From Where Does Six Sigma Come? - Example
 Assume a machine produces bottle caps; the following is the number of caps
produced per minute over a period of 30 minutes:
 27, 11, 13, 12, 13, 12, 11, 12, 9, 12, 12, 13, 12, 12, 13, 12, 12, 12, 11, 10, 12, 12, 12, 11, 12, 13, 12, 12, 12, 12
 Mean (μ)
 Sum of all the data points / total number of data points
 (27+11+13+12+13+12+11+12+9+12+12+13+12+12+13+12+12+12+11+10+12+12+12+11+12+
13+12+12+12+12) / 30, so μ = 12.4
 Standard deviation (σ)
 Subtract the mean from each data point and square the result
 (27-12.4)², (11-12.4)², (13-12.4)², (12-12.4)², ...
 Add them and divide by the total number of data points = 8.1
 Take the square root of the value found in the above step: √8.1 = 2.8
 σ = 2.8
 The acceptable limits set by the production manager (the customer for the machine) are
between 0 bottle caps per minute (LSL) and 25 bottle caps per minute (USL)
 This means that of the 30 data points above, one data point (27) falls
outside the customer specification
 Calculate ZU (Z-Upper) and ZL (Z-Lower)
 ZU = (USL – μ)/σ = (25 – 12.4)/2.8 = 4.5
 ZL = (μ – LSL)/σ = (12.4 – 0)/2.8 = 4.3 (using the unrounded values of μ and σ)
 Process Sigma level = minimum of ZU and ZL = 4.3
 We can say that the machine producing bottle caps is at a 4.3 Sigma level.
 This could be treated as an improvement opportunity for the production manager, if he
wishes to improve process efficiency to a 6 Sigma level.
 The formula for calculating Sigma levels will be referenced in the Measure Phase discussions.
Note: There are multiple ways of calculating Sigma levels, which we will discuss later
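The calculation above can be reproduced in a few lines; this sketch uses the population standard deviation, as the slide does:

```python
# Bottle-cap example: compute the mean, the population standard
# deviation, and the process Sigma level (minimum of Z-upper, Z-lower).
data = [27, 11, 13, 12, 13, 12, 11, 12, 9, 12, 12, 13, 12, 12, 13,
        12, 12, 12, 11, 10, 12, 12, 12, 11, 12, 13, 12, 12, 12, 12]

mu = sum(data) / len(data)                               # ~12.4
variance = sum((x - mu) ** 2 for x in data) / len(data)  # ~8.1
sigma = variance ** 0.5                                  # ~2.8

LSL, USL = 0, 25
z_upper = (USL - mu) / sigma   # ~4.4 unrounded (the slide's 4.5 uses rounded μ, σ)
z_lower = (mu - LSL) / sigma   # ~4.3
process_sigma_level = min(z_upper, z_lower)              # ~4.3
```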
Six Sigma - Introduction to Qualifications
 Interpretations from the calculations done on the previous page:
 Currently, the process is working at 4.3 Sigma, which may not be the
optimal level of performance.

 The business manager needs to know: given the current business
conditions and customer satisfaction levels, is this Sigma level acceptable?

 The business manager also needs to know whether improving the
performance to Six Sigma levels will bring him sustained business results.

 All these interpretations will be discussed in detail under Prerequisites and
Qualifications (a later session).
Why Six Sigma?

 To eliminate causes of mistakes and defects in a process. Elimination of
mistakes is subject to successful implementation of POKA YOKE (MISTAKE
PROOFING) and other preventive techniques. Sometimes the solution is
creating a robust process or product that mitigates the impact of a variable
input or output on a customer's experience. For example, many electrical
utility systems have voltage variability up to, and sometimes exceeding, a
10% deviation from nominal. Thus, most electrical products are built to
tolerate the variability, drawing more amperage without damage to any
components or the unit itself.

 To reduce variation and waste in a process

 To gain competitive advantage and become a world leader in one's
respective field

 Ultimately, to satisfy customers and achieve organizational goals
How Does Six Sigma Work?
 Management Strategy: An environment where management supports Six
Sigma as a business strategy, not as a stand-alone approach or a program to
satisfy some public-relations need

 DMAIC: Emphasis on the DMAIC (Define-Measure-Analyze-Improve-Control)
method of problem solving

 Focused Teams: Teams are assigned to well-defined projects that directly
impact the organization's bottom line, with customer satisfaction and
increased quality as by-products

 Use of Statistical Methods: Six Sigma requires extensive use of statistical
methods
Six Sigma and Quality
 Taking a process to the Six Sigma level ensures that the quality of the
product is maintained, with the primary goal being increased profits

 What is Quality?
 Conformance to customer requirements
 Technically defined as the degree of excellence of a product/service
offered to a customer
Summary

 What is Six Sigma?


 Why is it used?
 How is it used?

 What is a Process?

 What is Quality?
Lesson III
Six Sigma and Organizational Goals

Agenda

 History of Quality
 Popular Quality Gurus
 History of Six Sigma
 What is a Business System?
From Where Does Six Sigma Come?

Quality approach | Time frame | Description
Statistical Process Control | 1930s | Conceived by Walter Shewhart and used extensively during World War II to quickly expand the US's industrial capabilities
Quality Circles | 1960s | Self-improvement groups composed of a small number of employees belonging to a single department. Originated in Japan
ISO 9000 | 1987 – present | A set of international standards on quality management and quality assurance to help organizations implement quality management systems and related supporting standards. Developed by the International Organization for Standardization (ISO)
Re-engineering | 1996-1997 | An approach which involves restructuring an entire organization and its processes
Benchmarking | 1988 | An improvement process in which an organization measures its performance against the best organizations in its field, determines how such performance levels were achieved, and uses the information to improve itself
Balanced Scorecard | 1990s | A management tool that helps managers at all levels monitor multiple results in their key areas so that one metric is not optimized while another is ignored
Baldrige Award Criteria | 1987 – present | An award developed by the U.S. Congress in 1987 to raise awareness of quality management systems and to recognize and award U.S. companies that have successfully implemented quality management systems
Quality Gurus

Guru | Contribution
Philip Crosby | "Do it Right, First Time" and "Zero Defects"; Crosby's fourteen steps to quality improvement; senior management involvement; 4 absolutes of quality management; quality cost measurements
W. Edwards Deming | 14 key principles for management for transforming business effectiveness; the Seven Deadly Diseases of management; PDSA (Plan-Do-Study-Act) cycle; top management involvement; concentration on system improvement; constancy of purpose
Armand V. Feigenbaum | Total quality control/management; top management involvement
Kaoru Ishikawa | Cause-effect diagram; company-wide quality control; human dimension to quality management
Joseph M. Juran | Pareto analysis; quality trilogy; top management involvement; quality cost measurement
Walter A. Shewhart | Statistical Process Control (SPC) charts; assignable cause vs. chance cause; PDCA (Plan-Do-Check-Act) cycle; use of statistics for improvement
Genichi Taguchi | Loss function concepts; signal-to-noise ratio; experimental design methods; concept of design robustness
History of Six Sigma

 1986: Motorola starts the Six Sigma initiative. Bill Smith and Mikel Harry
are the pioneers. The first team of professionals implementing Six Sigma at
Motorola were karate students, hence they adopted the terms Black Belts
and Green Belts.

 1995: Jack Welch initiates Six Sigma at GE

 1998: Allied Signal saves $0.5 billion

 2000: GE saves $2 billion annually

 2001: Motorola saves $16 billion cumulatively

History of Six Sigma
 Motorola initiated Six Sigma for process improvement and reduced defects
to negligible levels

 Motorola initiated the project when the company was not doing well on
customer satisfaction

 It was at GE that Six Sigma was used to improve the entire Business System
Six Sigma and Business System
 What is a Business System?
 Designed to implement a process or a set of processes

 Ensures that process inputs are at the right place and right time so that
each step of the process has the resources it needs

 Considers and includes the collecting and analyzing of data
 So that continual improvement of its processes, products, and
services is ensured

 Has processes, sub-processes (procedures), and steps as its subsets

 Personnel Development, Manufacturing Scheduling, and Marketing
Forecasts are some examples of processes in a Business System
Six Sigma and Business System
 How Six Sigma affects a Business System
 By removing defects in its processes
 By making the defect-removal process continuous

 Defect: Any noncompliant attribute or aspect of a product or service that
would cause a customer to reject it ("a nonfulfillment of an intended
requirement…")
 Defective: Any product(s)/service(s) that a customer would reject
 Customer:
 Can be the user of the ultimate product(s)/service(s)
 Can be the next process downstream

 Reducing the probability of defects will remove some number of defectives
and increase the throughput yield of the process
Six Sigma and Business System
 Not all Six Sigma projects bring improvement to a business. Selection of projects should be
done on the basis of the prerequisites and qualifications for selecting a Six Sigma project

 A Six Sigma project should align to the goals of a Business System, or Organizational Goals

 Project selection

 The project selection group consists of Master Black Belts, Black Belts, Champions,
and key executives, who establish a set of criteria for project selection and team
assignments

 Team selection for the project may be done based on the nature of the project.
The selection should have a mix of skills and expertise

 Only projects that have an impact on the profits of the company should be taken up

 Calculating the project's expected profit helps in further selection of the project:
Expected profit = Profit × Probability of success

 Projects selected should also optimize the results of the whole system. The effect of
proposed changes on other processes within the system should be considered.
Improvement in any one process of a Business System should not cause large, deleterious
effects in other processes of the system, causing the overall results of the system to
suffer
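The expected-profit criterion above amounts to a simple ranking; the project names and figures in this sketch are hypothetical:

```python
# Rank candidate Six Sigma projects by expected profit
# (expected profit = profit x probability of success).
# All project names and numbers below are made up for illustration.
projects = [
    {"name": "Reduce scrap in welding", "profit": 400_000, "p_success": 0.6},
    {"name": "Cut invoice cycle time",  "profit": 150_000, "p_success": 0.9},
    {"name": "Redesign returns flow",   "profit": 900_000, "p_success": 0.2},
]

for p in projects:
    p["expected_profit"] = p["profit"] * p["p_success"]

# Highest expected profit first
ranked = sorted(projects, key=lambda p: p["expected_profit"], reverse=True)
```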
Structure of Six Sigma Team
Summary

 History of Quality

 Various contributors to the field


of Quality Management Systems

 History of Six Sigma

 Importance of project selection


and its relevancy to
organizational goals

 Understanding key drivers for a


Business System

 Structure of a Six Sigma Team


Lesson IV
Lean Principles
Agenda

o Why use Lean?

o What is Lean?

o Value-added and Non-value-added Activities

o Value Stream mapping

o Lean Concepts

o Various Lean Techniques

o Reduction in Cycle Time

o Theory of Constraints
Why Use Lean?
 LEAN helps in reducing/eliminating wastes and reducing non-value added
(NVA) activities from a process.

 In doing so, LEAN increases continuous flow in the process, as opposed to


stop-flow and unbalanced production.

 Before starting with a Six Sigma project, it is important to check the WASTE
status of the process.

 If Wastes and NVAs exist, eliminate or reduce them first, and then apply Six
Sigma.
Example:

An operation might have many defects in the welding operations. An operator


observes that he is sometimes welding rusty components together. It might be
worthwhile to figure out ways to reduce inventory and the waiting (storage)
time that causes the steel to rust (i.e., oxidize excessively) before figuring out
other solutions to deal with rust (like using an oil coating which might create
other welding problems or require a cleaning process).
What is Lean?
 Lean talks of doing away with Muda, Mura, and Muri. Muda = Waste, Mura =
Unevenness, Muri = Overburden.
 Techniques to tackle these three key Lean related issues could be different.
 7 types of Muda or waste:

 Overproduction: Producing more than is required. Example: customer needed 10


products and you delivered 12.
 Inventory: In simple words, stock. Inventory includes finished goods, semi-finished
goods, raw materials, supplies kept in waiting, and some of the work in progress.
 Defects/Repairs/Rejects: Anything deemed unusable by the customer and any
effort to make it usable to the original customer or a new customer.
 Motion: A waste due to poor ergonomics of workplace.
 Over-processing: An extra operation on a product or service, or adding an
attribute or feature the customer does not need. Example: the customer
needed a bottle and you delivered a bottle with extra plastic casing; the
customer needs an ABEC 3 bearing and your process is tuned to produce more
precise ABEC 7 bearings, taking more time for something the customer
doesn't need.
 Waiting: When the part waits for processing, or the operator waits for work.
 Transport: When the product moves unnecessarily in the process, without adding
value. Example: product is finished yet it travels 10 kilometers to warehouse
before it gets shipped to the customer. Another example: an electronic form is
transferred to 12 people, some of them seeing the form more than once (i.e., the
form is traveling over the same ‘space’ multiple times).
History of Lean
 Henry Ford spoke about Lean principles, which Taiichi Ohno later adopted
at Toyota.

 TPS (the Toyota Production System) became one of the key driving points
for Lean Manufacturing, popularized by James Womack in the 1980s.
Other Lean Wastes
 Some Lean experts will talk about additional areas of waste:

 Underutilized skills: The workforce has capabilities that are not fully
being used towards productive efforts; people are assigned to jobs for
which they are not fit.

 Automation of a poorly performing process: Often people create a


program that duplicates the inefficient routing of paperwork;
improving a process that should be eliminated if possible (e.g., the
product returns department or product discounts process); asymmetry
in processes that should be eliminated (e.g., two signatures to approve
a cost reduction and six signatures to reverse a cost reduction that
created higher costs in other areas).

 Wrong use of metrics: Process metrics sometimes lead us to incorrect


conclusions or suggest actions we shouldn’t take (e.g., a lack of SPC
analysis on run charts—to be discussed in the SPC session);
inappropriate performance requirements that do not have a basis in
reality (e.g., requiring suppliers’ products to arrive by 1st of the month
when they won’t be used completely in the next 7 days); focusing the
whole organization on ship dates when production dates might be a
better focus.
Examples of Waste
 Identify the types of waste and possible causes:

 Materials are air-freighted into a company for the MRP deadline on the
first day of the month. The materials then sit in the warehouse for 3 weeks
before they're used.
 A clerk sets aside an incomplete order form after contacting the customer
for more information.
 Customer payments are not received on time because the customer claims
that the information on the bill-of-lading, invoice, and order do not match.
 An inspector rejects blemished parts that he inspected under a microscope
when the specification allows for blemishes that can't be seen from 3 feet
away.
 A welder visually inspects his/her work. The next welder inspects that first
welder's work before proceeding with their own work. Finally, an inspector
inspects both welders' work.
 By the time the work-in-process piles on the shelves and carts are reduced,
it is found that some assemblies were done to a previous revision and can't
be used.
 When the copier runs out of paper, the person has to get more from the
office supply closet 100 feet away. When the ream is opened, he/she
discovers it was the wrong paper (i.e., it was pre-punched for a three-ring
binder), requiring a return trip to the closet.
Value Stream Mapping
 A visualization tool to map the path and identify all activities involved in
the product/service

 All activities related to a product/service are mapped using flowcharts

 Helps in identifying and eliminating/reducing non-value-added activities

 Any activity that does not add value to the product, as perceived by the
customer, is a non-value-added activity

 Value-added activities
 Activities in the making of a product which add value for the customer
using the final product
 The customer would be willing to pay for those activities

 Every activity in a Value Stream Map can be classified as one of:
 It adds value as perceived by the customer. Example: the actual
production process
 It adds no value, but is required by the process. Such activities are
non-value-adding, but you cannot eliminate them from the process as
they are necessary. Example: regulatory audits, such as ISO and
financial audits
 It adds no value, and can be eliminated
Lean Concepts
 Value Chain: A chain of activities in a business system. Forming a value
chain at the business-system level is more appropriate than forming it at any
single process level
 Flow: It is essential that products/services move through the business
system in continuous flow. Any stoppage or reduction in flow is a
non-value-adding activity and hence a waste
 Pull: Instead of making products/services based on an estimated sales
forecast, the business system makes products/services as the customer
requires them. Benefits of a pull process are:
 Decrease in cycle time
 Finished inventory is reduced
 Work in progress is reduced
 Stable price
 Smooth flow of the process
 Perfection: The complete elimination of Muda/waste so that all activities
along a value chain add value
 Push: A type of process which works exactly the opposite of a Pull process.
In a Push process, forecasting demand is the first step; the forecast moves on
to the production line, and the parts produced are stocked in anticipation of
customer demand.
Lean Concepts, cont…

 Pull versus Push

 Push process example: A shirt manufacturing company decides to
manufacture 200 shirts based on past forecasts. The company makes 200
shirts and waits for customers to place orders.

 In the Pull version of the same case, the company receives a client order
for 200 shirts, and only then starts producing the 200 shirts to be
delivered to the customer.

 Important: Contrary to what most people think, Pull processes do not
work universally. In some cases, Push works well too. For example, a
pharmacy shop is an example of a Push process to the customer.
Lean Techniques

Technique | Description
Kaizen | Kaizen, or continuous improvement, is the building block of all Lean production methods. The Kaizen philosophy implies that small, incremental changes routinely applied and sustained over a long period result in significant improvements
Poka Yoke | Aka Mistake Proofing. It is good to do it right the first time; it is even better to make it impossible to do it wrong the first time. Poka Yoke is about automated mistake detection and correction
5S | A framework to create and maintain an organized workplace: Sort, Set-in-order, Shine, Standardize, Sustain
Just in Time (JIT) | A manufacturing philosophy which leads to "producing the necessary units, in the necessary quantities, at the necessary time, with the required quality"
Kanban | Literally means signboard in Japanese. Kanban utilizes visual display cards to signal the movement of material between steps of a production process
Jidoka | Means "automation with a human touch." It is an automated inspection function in the production line that stops the process as soon as a defect is encountered. The process does not restart until the root cause of the defect has been eliminated
Takt time | Takt time is the maximum time in which the customer demand needs to be met: Takt Time = Time Available / Demand. For example, if the customer needs 100 products and the company has 420 minutes of available production time, the company has a maximum of 4.2 minutes per product. This would be the target for the production line
Heijunka | Means Production Leveling/Smoothing. It is a technique to reduce the waste which occurs due to fluctuating customer demand
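The takt-time formula can be computed directly; a minimal sketch using the 420-minute example:

```python
def takt_time(available_minutes, demand_units):
    """Takt time = available production time / customer demand."""
    return available_minutes / demand_units

# 420 minutes of production time, customer demand of 100 units:
pace = takt_time(420, 100)   # 4.2 minutes per unit
```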
Cycle Time Reduction
 Need for Cycle Time Reduction
 Increase capacity
 Reduce internal/external waste
 Simplify operation
 Reduce product damage
 Satisfy customers
 Remain ahead of the competition
The Theory of Constraints
 What is the Theory of Constraints?
 A tool to remove bottlenecks in a process that limit production or
throughput
 Start by mapping the value stream, then follow the 5 steps

 The 5 steps in the Theory of Constraints are:

 Step 1: Identify the system's constraint(s)
 A system constraint limits the business system from achieving its
performance and goals
 It acts as a bottleneck
 Step 2: Decide how to exploit the system's constraint(s)
 Find ways so that this constraint now works at full potential
 Step 3: Subordinate everything else to the decisions of Step 2
 Align the whole process or system to support the decision made
above
 Step 4: Elevate the system's constraint(s)
 Make other changes so that the constraint is resolved
 Step 5: If a constraint has been resolved in Step 4, go back to Step 1
 Once a constraint has been resolved, redo the process to find the
next constraint(s)
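Step 1 amounts to finding the lowest-capacity station in the chain, since that station caps system throughput; a sketch with hypothetical station names and rates:

```python
# TOC Step 1: the station with the lowest capacity is the active
# constraint; its rate caps system throughput.
# Station names and capacities (units/hr) below are hypothetical.
capacities = {
    "stamping": 200,
    "machining": 120,
    "ninth_station": 70,
    "packing": 150,
}

bottleneck = min(capacities, key=capacities.get)   # lowest-capacity station
max_throughput = capacities[bottleneck]            # units/hr the system can sustain
```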
The Theory of Constraints - Example
 The numbers in the shapes are maximum production rates in units/hr
 Blue line for product A; red dotted line for product B
 The black figure is the assembly point where A & B are assembled and
sold as a complete product

 Customer demand says this process needs to produce 100 units/hr as a
combination of both A & B
 Demand is the constraint; the constraint is external; work on
marketing/sales

 If customer demand is 100 units/hr each of both A & B:

 Step 1: Identify the system constraint
 Ninth equipment: only 70 units/hr; the constraint in the system.
This is the active constraint
 Step 2: Exploit the system's constraint
 Run the 9th equipment at full capacity at 70 units/hr; no downtime
or defects
 Step 3: Subordinate everything else to the decision of Step 2
 Run the 1st equipment at capacity with 70 of A and 80 of B
 Step 4: Elevate the system constraint
 Elevate the active constraint found in Step 1 to 100; elevate the
1st equipment to 200
 Step 5: If the constraint is broken or resolved, go to Step 1 and identify
the next constraint
Summary

 Brief history of Lean: what it means (reduce waste)
 Value Stream Maps: value-added and non-value-added activities
 Various Lean Concepts
 Various Lean Techniques: 5S, Kanban, Kaizen, and so on
 The Theory of Constraints
Lesson V
Design for Six Sigma (DFSS)
Agenda

o What is DFSS?

o What is QFD?

o DFMEA & PFMEA

o Processes for DFSS


What is DFSS?
 DFSS: Design for Six Sigma
 What can be designed?
 A new product/service
 A new process for a new product/service
 Redesign of an existing product/service to meet customer requirements
 Redesign of an existing product/service process

 DFSS ensures that the product/service meets customer requirements

 What does DFSS mean to a Business System?
 Introduce a new product/service or a new category of product/service
 New category for the Business System, not for the customer
 Improve a product/service
 Add to current product/service lines

Example: If you wish to launch a new product or build a new product/process,
you would want to use DFSS.
What is QFD?
 QFD: Quality Function Deployment
 Also known as Voice of the Customer or House of Quality
 A process to understand the needs of the customer and convert them
into a set of design and manufacturing requirements
 QFD also helps the company prioritize customer needs and set targets
for the Technical or Operations team to meet those customer needs

 What do we learn from QFD?
 Which customer requirements are most important?
 What are our strengths and weaknesses?
 Where do we focus our efforts?
 Where do we need to do most of the work?

 How do we learn from QFD?
 By asking relevant questions of customers
 By tabulating the answers to identify the set of parameters critical
to the product design
FMEA (DFMEA and PFMEA)
 DFMEA/FMEA: Design Failure Mode and Effects Analysis
 Used in the design of a new product to uncover potential
failures
 Purpose: How failure modes affect the system and to reduce
effect of failure upon the system
 Is done before product is sent to manufacturing operation
 All significant design deficiencies would be resolved at the
end of this process

 PFMEA: Process Failure Mode and Effects Analysis


 Used on new or existing processes to uncover potential failures
 Is done in the quality planning phase to act as an aid during
production
 A PFMEA can involve fabrication, assembly, transactions, or
services

Important: FMEA is also used as a preemptive tool.
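FMEA worksheets (not shown in this deck) typically rank each failure mode by a Risk Priority Number; the sketch below assumes the common convention RPN = severity × occurrence × detection, each rated 1 to 10, and the failure modes listed are hypothetical:

```python
# Hypothetical FMEA ranking: RPN = severity x occurrence x detection,
# each scored 1 (best) to 10 (worst). Higher RPN = address first.
failure_modes = [
    {"mode": "weld cracks",      "severity": 9, "occurrence": 3, "detection": 4},
    {"mode": "missing fastener", "severity": 7, "occurrence": 2, "detection": 2},
    {"mode": "surface blemish",  "severity": 3, "occurrence": 6, "detection": 5},
]

for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# Sort so the highest-risk failure mode comes first
worst_first = sorted(failure_modes, key=lambda fm: fm["rpn"], reverse=True)
```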
Processes for DFSS
Two major Processes for DFSS: IDOV & DMADV
 IDOV :
 Identify
 Specific customer needs based on which product or business
process will be designed
 Tools used: QFD, Voice of Customer, FMEA
 Design
 Consists of identifying functional requirements, developing
alternative concepts, evaluating alternatives, selecting a best-fit
concept, and predicting sigma capability
 Tools used: FMEA and others
 Optimize
 Use statistical approach to calculate tolerance
 Developing detailed design elements, predicting performance,
and optimizing design
 Verify
 Test and validate the design
 Check conformance to Six Sigma standards
Processes for DFSS, cont…

 Two major Processes for DFSS: IDOV & DMADV

 DMADV

  Define customer requirements and goals for the process, product, or service
  Measure and match performance to customer requirements
  Analyze and assess the design for the process, product, or service
  Design and implement the array of new processes required for the new process, product, or service
  Verify results and maintain performance
Summary
 DFSS types (DMADV & IDOV): meaning and use, difference between types and uses, relation to DMAIC
 QFD: meaning and use
 FMEA types (DFMEA & PFMEA): meaning and use, difference between types and uses
Session Summary
 What Six Sigma is, How Six Sigma is done and Why

 Six Sigma and organizational goals


 Value of Six Sigma
 Organizational drivers and metrics
 Organizational goals and Six Sigma projects

 Lean principles in the organization


 Lean concepts and tools
 Value-added and non-value-added activities
 Theory of constraints

 Design for Six Sigma (DFSS) in the organization


 Quality function deployment (QFD)
 Design and process failure mode and effects analysis (FMEA, DFMEA &
PFMEA)
 Roadmaps for DFSS
Quiz - 1

1. Kaizen is defined as

A. Re-engineering
B. Lean manufacturing
C. Continuous improvement
D. Mistake proofing
Quiz - 2

1. A production line uses signs at specific points on the line to indicate when components or raw materials need to be replenished. This practice is an example of

A. Kanban
B. Kaizen
C. Poka Yoke
D. FMEA
Quiz - 3

1. Quality function deployment (QFD) is a methodology for

A. Removing bugs from code
B. Identifying and refining key customer requirements
C. Measuring the reliability of a software product
D. Training employees in quality issues
Quiz - 5

1. Defects, over-production, inventory, and motion are all examples of

A. Waste
B. 5S target areas
C. Noise
D. Value-added activities
Quiz - 6

1. The primary factor in the successful implementation of Six Sigma is to have

A. The necessary resources
B. The support/leadership of top management
C. Explicit customer requirements
D. A comprehensive training program
Quiz - 1

1. Kaizen is defined as

A. Re-engineering
B. Lean manufacturing
C. Continuous improvement
D. Mistake proofing

Correct Answer: C
Kaizen means Continuous Improvement. Re-engineering is a different quality concept, and mistake proofing (Poka Yoke) is a tool of Lean manufacturing.
Quiz - 2

1. A production line uses signs at specific points on the line to


indicate when components or raw materials need to be
replenished. This practice is an example of

A. Kanban
B. Kaizen
C. Poka Yoke
D. FMEA

Correct Answer: A
Kanban literally means signboards. Kanban uses display cards to signal
movement of material
Quiz - 3

1. Quality function deployment (QFD) is a methodology for

A. Removing bugs from code


B. Identifying and Refining key customer requirements
C. Measuring the Reliability of a software product
D. Training employees in quality issues

Correct Answer: B

QFD (Quality Function Deployment), also known as Voice of Customer, is used to identify and refine key customer requirements.
Quiz - 4

1. For a process at five sigma level, how many opportunities lie


outside the specification limits

A. 3.4
B. 99.9767
C. 233
D. 5

Correct Answer: C
A process at five sigma level is at 99.9767% yield. For 1 million opportunities, it
means 999767 times the process has no defects. No. of defects = 1000000 –
999767 = 233 defects
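The arithmetic in this answer can be sketched in a few lines (a minimal illustration; the yield figures are the ones quoted above):

```python
# Defects per million opportunities (DPMO) implied by a process yield.
# A five-sigma process yields 99.9767%, as stated in the quiz answer.
def dpmo_from_yield(yield_pct: float) -> int:
    """Return defects per million opportunities for a given yield (%)."""
    return round(1_000_000 * (1 - yield_pct / 100))

print(dpmo_from_yield(99.9767))  # 233 defects per million
```

The same helper reproduces other familiar figures, e.g. a 50% yield gives 500,000 DPMO.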
Quiz - 5

1. Defects, over-production, inventory, and motion are all


examples of

A. Waste
B. 5S target areas
C. Noise
D. Value-Added activities

Correct Answer: A

Defects (also called correction), over-production, inventory, and motion are four of the seven wastes mentioned in Lean.
Quiz - 6

1. The primary factor in the successful implementation of Six


Sigma is to have

A. The necessary resources


B. The support/leadership of top management
C. Explicit customer requirements
D. A comprehensive training program

Correct Answer: B

Implementing Six Sigma requires change across the whole organization, and hence the support of top management is necessary.
Session II

Define Phase of Six Sigma


Introduction
 Prerequisites and Qualifications of a Six Sigma project

 Introduction to Define Phase

 Process Management for Projects


o Process Elements
o Owners and Stakeholders
o Identify Customers
o Collect Customer Data
o Analyze Customer Data
o Translate Customer Requirements

 Project Management Basics


o Project Charter and Problem Statement
o Project Scope
o Project Metrics
o Project Planning tools
o Project Documentation
o Project Risk Analysis
o Project Closure
Prerequisites of a Six Sigma
Project
 Six Sigma can be applied to everything under the sun, to processes in almost
70 different sectors, but Six Sigma approaches cannot be applied to all
problems.

 Qualification criteria for a Six Sigma project (DMAIC) (Level 1 Selection)


 Does a process exist?
 Is there a problem in the process?
 Is the problem measurable?
 Does the problem impact Customer Satisfaction?
 Does working on the problem impact the profits of the company?
 The ROOT CAUSE of the problem must be unknown
 The solution to the problem must not be visible or apparent

Important: The first step is to check if the project qualifies to be a Six Sigma
project.
Introduction to Define Phase
 The Define phase is the first phase in the Six Sigma project. The objectives of
this phase are to :
 Clearly identify the Problem Statement through customer analysis
 Define the objective of the Six Sigma project
 Plan the project in terms of time, budget, and resource requirements
 Define Team Structure for the project and establish roles and
responsibilities

 In this session we will cover the first two aspects of the Define phase:
 Lesson 1: Process aspects of the project and how to capture customer
requirements
 Lesson 2: Project management aspects of the project

 Team Structure and roles and responsibilities will be addressed in Session III
along with tools that can be used for planning and controlling the project.
Lesson I

Process Management for Six Sigma Projects


Agenda

o Business Process

o Business Process Elements

o Owners and Stakeholders

o Identify Customers

o Collect Customer Data

o Analyze Customer Data

o Translate Customer Requirements


Session II – Define I

Process Elements
What is a Business
Process?
 Business Process: Part of a Business System

 Business Process is a systematic organization of objects


(people, machinery, materials, and so on) into work
activities which is designed to produce a required
product/service

 Sales, Marketing, Purchasing, Finance, HR are


various Business Processes of a manufacturing
company (Business System)

 Business Process vs Process: Process is a subset of Business


Process. Output of a Process is aligned to Business Process
and Output of a Business Process is aligned to Business
system

 Payroll calculation is a Process of HR (Business


Process)

 Elements of any Process:

 S – Supplier

 I – Input

 P – Process

 O – Output

 C ─ Customer
Process Elements
 Supplier: Can be a person/another organization/a part of Business System

 Input: Resources (information, materials, service, and so on) provided by the


Supplier

 Process: Set of steps that transforms the inputs into outputs


 Process adds value, as perceived by the customer, to the
input

 Output: Final product/service as a result of the process

 Customer: User of the output; can be a person, a part of the Business System, or another organization
 Any change in Output will be because of one or more changes in S, I, or P
 If SIPs are stable, Output will be stable

 Relations between SIPs and O provide a method to define possible


cause-effect relationship

 SIPOC is used conventionally to scope the project.


SIPOC Template

Suppliers: Who are the suppliers for our product or service?
Input: What do the suppliers provide to my process?
Process (High Level): What are the start and end points of the process associated with the problem, and the major steps in the process?
Output: What product or service does the process deliver to the customer?
Customers: What are their requirements for performance?

The template provides numbered rows under each column; the Process column runs from a Start Point, through the major operations or activities, to an End Point.
Sample SIPOC

The column headings and guiding questions are the same as in the SIPOC Template. In this call-center example, the Process (High Level) column reads:

Start Point:
1. Employees log in
2. Employees fill timesheets
3. Employees put on phones
4. Employees log in to AVAYA
5. Take calls
End Point:
SIPOC Notes
 The SIPOC Map is a macro-level map, drawn only in the DEFINE Phase.
 The SIPOC Map, when used in Service Environments like a Call Center, is called a COPIS Map. In most service processes, the demand often comes from the customer, and hence the CUSTOMER step is updated first.

 Supplier Quality assumes extra significance. The SIPOC Map shows if


the Supplier’s goods are checked on arrival.

 To ensure all the inputs of the process are of good quality, the
Supplier Quality must be good or the business spends a lot of
money to inspect/audit the inputs.
Challenges to Business Process
Improvement

 Structure of a traditional Business System: grouped around functional aspects. This helps top management improve and control each function.
 Each function has definite local goals and objectives (cost, throughput, and so on). A function may have many levels of hierarchy (operators, supervisors, managers, executives, and so on) known as functional elements. Cumulating the local goals and objectives of all functions in a business system maps to the goals and objectives of the business system.
 A product/service has to cross many functions, and in turn many levels (functional elements), to reach the customer.
 Managing the flow of a product/service across various functional elements is difficult because often no one is in charge.
Owners and Stakeholders
 A Process Owner is a Senior Executive in charge of a process.
 A Business may have many Stakeholders. Failure of a process to meet its objective may result in negative effects on the Stakeholders.
  Stockholders: The net worth of the business may get reduced
  Customers: May seek competitors, may find recourse in legal action
  Suppliers: Payments may get delayed or not paid at all
  Company Management: Wage levels may decrease
  Employees and their Families: Reduction in number of employees
  The Community and Society: May pollute environment
 Important:
  Stakeholder analysis is an important objective to be completed before thinking of how to do a Six Sigma project. The team must also factor in reasons why Stakeholders may oppose the change effort.
Business – Stakeholder Relationship
(Diagram: Company-Stakeholder interactivity)
Identify Customer
 Who is your Customer?

 A Customer is someone who


 Uses a product/service
 Decides to buy a product/service
 Pays for the product/service
 Gets affected by a product/service

 Types of Customer
 Internal Customers
 External Customers
Internal Customers
 Internal Customers: An Internal Customer is anyone in a business
system who is affected by the product/service as it is being made
 The next process/function in a business process is an internal
customer

 What an Internal Customer may need from the Process? (Examples)


 Proper Training, Equipment
 Information, Reports, Timely actions to meet deadlines
 Materials

 Why is an Internal Customer important?


 Activities directly affect ultimate customer
 Activities affect the next internal customer
 Affects quality of the product/service

 Addressing satisfaction of Internal Customer is a way to greater


productivity and quality
External Customers
 External Customers: External customers are not part of the Six Sigma
Team’s organization but are affected by the organization
 Important because they are the source of revenue to any
business system

 External Customers include three types of customers:


 End Users
 Purchase a product/service for their own use

 Intermediate Customers
 Purchase the product/service and then resell, repackage,
modify, or assemble the product for sale to an end user
 Example: Retailers, distributors, logistics firms

 Affected Parties
 Do not use or purchase the product but are affected by it.
 Example: People living near a manufacturing plant of
defense artillery
Collect Customer Data
 After identifying various customers
 Get feedback from the Customer – Both Internal and External
 So that processes can be improved to what customers want
 Collecting customer data also helps in such questions as
 What is quality as perceived by customer
 Knowledge about competitors
 Identify factors to provide a competitive edge to the
product/service

 Customer requirements are defined by customers and not by


anyone else. If the customer is not specific about his requirements,
the business must ask for it.

 Goal is to identify Critical to Quality (CTQ) metrics for project


Ways To Capture Customer
Feedback

 Surveys: A questionnaire designed to gather data. Questions are standardized and may number 25 to 30. Example: Rate our product on a scale of 1 to 10, where 1 is highly dissatisfied and 10 is highly satisfied.
 Focus Groups: A small group of individuals assembled to explore specific topics and questions about customer needs.
 Interviews: Individual customer interviews of about 30 minutes.
 Customer Complaints: Call centers, emails, feedback forms.
How to Collect Customer Data

Surveys
 Advantages: Lower cost approach; phone surveys get a 70-90% response rate; mail surveys require the least amount of trained resources for execution
 Disadvantages: Mail surveys can get incomplete results, skipped questions, and unclear understanding; mail surveys get only a 20-30% response rate; in phone surveys the interviewer has an influential role and can lead the interviewee, producing undesirable results

Focus Groups
 Advantages: Can produce faster results; group interaction generates information; more in-depth responses; excellent for getting CTQ definitions; can cover more complex questions or qualitative data
 Disadvantages: Learnings apply only to those asked and are difficult to generalize; data collected is typically qualitative rather than quantitative; can generate too much anecdotal information

Individual Interviews
 Advantages: Can tackle complex questions and a wide range of information; allows use of visual aids; good choice when people won't respond willingly and/or accurately by phone or mail
 Disadvantages: Long cycle time to complete; requires trained, experienced interviewers

Customer Complaints
 Advantages: Specific feedback; provides an opportunity to respond appropriately to a dissatisfied customer
 Disadvantages: Probably not an adequate sample size; may lead to changing the process inappropriately based on 1-2 data points
Analyze Customer Requirements
 How do we analyze Customer Requirements?
  Customer requirements must be understood clearly
  Voice of Customer (VOC) is a technique to organize, analyze, and profile the customer requirements
Analyze Customer Requirements – Pareto Diagram
 A Pareto Chart is a Histogram ordered by frequency of occurrence
 Also called the 80/20 rule or "vital few, trivial many"
 Helps project teams focus on the problems that are causing the greatest number of defects
 In the example, modules D and B are causing about 80% of the defects reported by the customer. Hence these modules should be improved first.
Pareto Chart --- An example
 A hotel receives plenty of complaints from its customers, and the Hotel Manager wishes to understand the key areas of complaint. Below is the recorded data.
  Cleaning --- 35
  Check-In --- 19
  Opening hours of the pool --- 4
  Mini Bar --- 3
  Room Service --- 2
  Others --- 1
 Interpretation --- 35 customers have complained about the cleaning being inadequate, and so on.
Pareto Chart --- An example

Cause                Number   Percentage (%)   Cumulative
Cleaning               35        54.69%          54.69%
Check-in               19        29.69%          84.38%
Pool Opening Hours      4         6.25%          90.63%
Mini Bar                3         4.69%          95.31%
Room Service            2         3.13%          98.44%
Other                   1         1.56%         100.00%
Total:                 64       100%
Pareto Chart --- Interpretation
 80% of the complaints from the customers are due to 20% of the causes, i.e., Cleaning and Check-in time.
 The Hotel Manager needs to tackle these areas of focus first and in priority, and the remaining causes later.
 Important:
  Pareto Charts can be considered phase-neutral tools. They can be used every time you have multiple reasons/issues and you wish to prioritize between them.
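The cumulative percentages in the hotel example can be reproduced with a short script (a minimal sketch using only the Python standard library; the complaint counts are taken from the table above):

```python
# Cumulative Pareto percentages for the hotel-complaint data.
complaints = {
    "Cleaning": 35, "Check-in": 19, "Pool Opening Hours": 4,
    "Mini Bar": 3, "Room Service": 2, "Other": 1,
}

total = sum(complaints.values())  # 64 complaints in all
cumulative = 0
# Sort causes by descending count, then accumulate their share.
for cause, count in sorted(complaints.items(), key=lambda kv: -kv[1]):
    cumulative += count
    print(f"{cause:20s} {count:3d} {100*count/total:6.2f}% {100*cumulative/total:7.2f}%")
```

Running this prints the same 54.69%, 84.38%, 90.63%, ... cumulative column as the table, confirming that Cleaning and Check-in alone account for over 80% of complaints.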
Translate Customer Requirements
 Customer requirement is the data collected from customers that
gives information about what they need or want from the process.

 Customer requirements are often high-level, vague, and non-specific. Some customers may give the business a set of specific requirements, but broadly, customer requirements are a reflection of their experience.

 Customer requirements when translated into critical process


requirements that are specific and measurable are called Critical
To Quality (CTQ) factors.

 A fully developed CTQ has four elements:


 Output characteristic
 “Y” Metric
 Target
 Specification/tolerance limits
Translate Customer Requirements
Example:

A customer visits a garment store to buy a shirt of size 40. When he


goes back home he finds that the shirt is faulty, i.e., the arm size is
longer, it is stained in some places, etc. He returns it to the store
claiming, “The shirt is bad.”

Now, “A bad shirt” is a subjective comment. He is not being specific as


to what is faulty or what went wrong. This becomes a Vague
Assessment.

 How to Fix this


 In a typical project scenario, never live with vague
expectations. Get specifics.
 If the customer is unable to specify his/her expectations, ask
questions. In this case, “Sir, why do you feel the shirt is bad?”
Define CTQ
 VOC forms a critical input to define the CTQ for the customer

 Listen to the Voice of the


 Customer
 Business, &
 Process

 Use CTQ translation worksheet


 Translating a customer need into fully developed CTQ occurs in the
Define and Measure phase

Important:
Typically, a Six Sigma project starts with understanding what a
customer wants. Now, this understanding can come only if you have
listened to the Customer, which can be done only by VOC methods.
VOC – CTQ --- An Example

Example:
You walk into a coffeehouse for a cup of Cappuccino. You place your order and wait for it to be served. Meanwhile the coffeehouse tries to understand your specific needs.
VOC – CTQ --- An Example
In the Example, The CTQ Drivers are:

How hot the coffee should be, how strong the coffee flavor should be
(how many scoops of Ground Coffee) and how much sugar is
needed?

Important: The customer will not be able to define the exact


temperature to quantify how hot the coffee should be.

The business makes an attempt to know all these things, and fixes some
operational numbers.

Heat of the Coffee: to be presented at a maximum temperature of 50 Celsius and not less than 30 Celsius. This is a fully developed CTQ.

The temperature at which the cappuccino is made is the Key Process


Output Variable (KPOV), which will be greater than the CTQ
temperature to allow for cooling before presenting it to the customer.
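Once the CTQ carries specification limits (30 to 50 Celsius in this example), any measured output can be checked against them. A minimal sketch; the sample temperature readings below are invented for illustration:

```python
# Check cappuccino serving temperatures against the fully developed CTQ:
# lower spec limit (LSL) 30 C, upper spec limit (USL) 50 C.
LSL, USL = 30.0, 50.0

def within_spec(temp_c: float) -> bool:
    """True if a serving temperature meets the CTQ limits."""
    return LSL <= temp_c <= USL

readings = [48.5, 52.0, 41.0, 29.5]          # hypothetical measurements
defects = [t for t in readings if not within_spec(t)]
print(defects)  # [52.0, 29.5] -> two servings outside the CTQ limits
```

In a real project these readings would come from the KPOV measurement system, and the defect rate against the CTQ limits would feed the Measure phase.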
Translation Worksheet to Define CTQs

Voice of The Customer (High Level Need) | Service/Quality Issues | Specific Needs Statement | Project Y Output Characteristic

"I am always on hold or transferred to the next person" | Want to talk to the right person immediately | 1. Customer gets to the correct person the first time. 2. Add additional menu items to the voice system | Functionality, Availability

"I get invoices at different times of the month" | Consistent monthly bill | 1. Customer receives bill on the same date every month. 2. Customer wants bill consistently | Invoicing Accuracy, Cycle time

Delivery timeliness | Delivery cycle time | 1. Customer receives application on requested date. 2. Customer wants fast deliveries | Time, Delivery Cycle Time
Translating Customer Requirements –
QFD
 QFD: Quality Function Deployment
 Also known as Voice of Customer or House of Quality
 A process to understand the needs of the customer and convert
them into a set of design and manufacturing requirements.
 It is a systematic process that facilitates a business’ focus on its
customers
 QFD helps companies design more competitive products in less
time and with less cost
 What do we learn from QFD?
 Which customer requirements are most important?
 What are our strengths and weaknesses?
 Where do we focus our efforts?
 Where do we need to do the most work?

 How do we learn from QFD?


 By asking relevant questions to customers
 Tabulating them to bring out set of parameters critical to
design of the product
QFD-An Automobile Bumper
 Customer Request:
There is too much damage to bumpers in low-speed collisions. Customer wants a
better bumper

 Step 1: Identify Customer(s)


 Repair Department
 Automobile Owner
 Manufacturing Plant
 Sales Force

 Step 2: Determine Customer Requirements/Constraints


 I want something that looks nice (Basic)
 I don’t want to pay too much (Basic)
 I want it strong enough not to dent (Excitement)
 It must hold my license plate (Performance)
 It must protect my tail-lights and head-lights (Performance)

In Step 2, legal requirements (license plate and lights) are classified as performance. Other attributes are classified differently: a bumper that shows no dent even after a low-speed collision gives a "wow" factor, which is why "resist dents" is classified as excitement.
QFD-An Automobile Bumper
Step 3 - Put prioritized Customer Requirements into a House of
Quality Chart
QFD-An Automobile Bumper
Step 3: Prioritize Customer Requirements
QFD-An Automobile
Bumper

Step 4: Competitive Benchmarking

 Identify Competitors

 Test and Analyze Competitor Products

 Reverse Engineer Competitor Products

 Rate Competitor Products against customer


requirements/constraints

These are ratings of your organization's competitors on a scale of 1-5 (1 being the lowest and 5 the highest) against those requirements. For example, on the "resist dents" requirement competitor C is rated 5. It shows that competitor C is best in class, and we might want to study and adopt what they are doing differently on that requirement.


QFD-An
Automobile
Bumper
Steps 5 and 6: Translate
Customer Requirements into
Measurable Engineering
Specifications and define target
values

 Specify how license


plate will be held
 Specify how to resist dents through material yield strength, Young's modulus, etc.
 Specify with a dollar
amount the term
‘inexpensive’

Sample QFD Template

(The template is a matrix. The rows list the customer/business metrics: Service Level, IVR Usage, Call Work Management, Productivity and Calls Per Hour, Cost, Risk Exposure, Compliance, POC Resolution, Call Duration and Hold Time, External Client, Call Backs, Calls Transferred, Calls Blocked, and Time in Queue. The columns list technical characteristics such as Staffing Levels, Training, Routing, Systems Availability, Processing Speed, Experience Level of Reps, Availability of Info, and Rework Reduction. Relationship matrix scores: 9 = Strong, 3 = Moderate, 1 = Weak. The bottom rows record How Important, Metric, and Target for each technical characteristic.)
QFD – Explanation Summary
 The leftmost vertical section is known as the Customer Room.
 The horizontal metrics are known as the Technical Room.
 The matrix between the Customer Room and the Technical Room, where priority numbers are updated, is known as the Integrator Room. The Integrator Room is expressed in the form of a relationship matrix.
 The rightmost column is the Competitor Room. This is where benchmarking happens.
 The row at the bottom is known as the Tester Room. This is where benchmarking and target setting happen.
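The relationship scores (9 = Strong, 3 = Moderate, 1 = Weak) can be rolled up into the "How Important" row by weighting each score with the customer priority. A minimal sketch; the needs, weights, and technical characteristics below are invented for illustration and loosely echo the bumper example:

```python
# Illustrative roll-up of QFD relationship scores into "How Important".
# Customer priorities (weights) and the 9/3/1 relationship scores here
# are hypothetical, not taken from any real House of Quality.
customer_weights = {"looks nice": 2, "inexpensive": 3, "resists dents": 5}

# relationship[need][technical characteristic] = 9, 3, 1, or 0 (none)
relationship = {
    "looks nice":    {"material yield strength": 1, "unit cost": 3},
    "inexpensive":   {"material yield strength": 0, "unit cost": 9},
    "resists dents": {"material yield strength": 9, "unit cost": 3},
}

how_important = {}
for need, weight in customer_weights.items():
    for tech, score in relationship[need].items():
        how_important[tech] = how_important.get(tech, 0) + weight * score

print(how_important)  # {'material yield strength': 47, 'unit cost': 48}
```

The highest totals in the "How Important" row tell the technical team where design effort pays off most against customer priorities.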
Summary Section 1

In this section we have covered:
 What a Process is and its elements
 Who the Owners and Stakeholders are for a process
 How to:
  Identify Customers
  Collect Customer Data
  Analyze Customer Data
  Translate Customer Requirements to identify the CTQs
Lesson II
Project Management Basics
Section 2 – Project
Management Basics

o Project Charter and Problem


Statement

o Project Scope

o Project Metrics

o Project Planning Tools

o Project Documentation

o Project Risk Analysis

o Project Closure
Problem Statement
First Step towards starting the project is to define the problem
statement that the project is targeted to solve. Problem
statement should be:

 Clear and concise description of the problem

 Quantified with a metric that includes units

 Identifies gap in performance


 States current state (baseline)
 Benchmarking to be done only if needed
 Gap in performance should always be measured against what
the customer needs.

 Contains no solutions or causes


IS/IS NOT Template

The IS/IS NOT template starts with a very specific problem statement and works through what, where, when, and to what extent the problem "IS" and the problem "IS NOT". It is helpful for later defining the problem and providing a focus for the project team; the potential list of causes can be prioritized based on how well they explain the observations shown in the template.
IS/IS NOT Template - Example

Problem statement: Paper cup leaks

                 IS                                           IS NOT
What             Visible gaps in seams; slow leaks;           Styrofoam cups; plastic cups;
                 12 oz. cups                                  16 oz. or 20 oz. paper cups
Where            Bottom of cup; at joint of vertical seam     Above 6 mm from the bottom; anywhere else
                 with bottom; less than 5 mm from bottom      along bottom away from vertical seam
When             2nd shift production; two weeks ago          1st shift production; between 11 months ago
                 and 1 year ago                               and 3 weeks ago
To What Extent   10% of production overall; 20-30% on         Same extent on both shifts; all the time;
                 2nd shift; drip rate of 30/min               drip rate barely noticeable (1/min) or
                                                              immediately obvious (60+/min)
Project Charter
A Charter is a written document that defines the team’s mission, scope
of operation, objectives, time frames, and consequences for the
project. A Charter is considered as a formal approval from the senior
management to start the project.

Who writes the Charter?


The Green Belt is trained enough to write the charter with the problem
statement. Checked by the Black Belt, it is the Black Belt’s responsibility
to get approval from the Champion.

 A Charter should include


 Measurable objectives (CTQs) to be achieved from the project
which will help resolve the problem statement defined earlier
 Organizational and Operation boundaries within which the
project should perform
 Top Management support and commitment
Project Objective Criteria

 Project objective will be similar to the problem statement with regard to


specificity and measurability.
 It should meet the following criteria (SMART-S):
 Specific: What you’re working on
 Good example: Reduce patient ID errors in recording lab results
 Bad example: Reduce form rejections
 Measurable: How would you know you’ve succeeded?
 Good example: Reduce patient ID errors by 30%
 Bad example: Fewer form rejections
 Attainable, Actionable: That the target is do-able, there is some
probability of success
 Relevant: Important to helping achieve yearly or strategic goals
 Time-based, Timeframe: When you will look for success
 Good example: Reduce patient ID errors 30% by 2013 year-end
 Stretch: Sometimes added by some organizations to prevent easily
achieved, attainable targets. Often 10% reduction is possible because
people are aware enough to pay attention to the issue.
Project Charter Sections
 Project Name and Description

 Project Manager Name

 Business Need (Problem Statement)

 Project Purpose or Justification including ROI

 Stakeholder

 Stakeholder Requirements

 Broad Timelines

 Major Deliverables

 Constraints and Assumptions

 Summary Budget
Sample Project Charter

Project Title: Information Technology (IT) Upgrade Project

Project Start Date: February 4,2010 Projected Finish Date: November


4,2010

Project Manager: Person C, abc@xyz.com

Project Objective: Upgrade hardware and software for all employees


(approximately 2000) within nine months based on new corporate
standards. See attached sheet describing the new standards.
Upgrades may affect servers, as well as associated network hardware
and software. Budgeted $ 1,000,000 for hardware and software costs
and $ 500,000 for labor costs.

cont….
Sample Project Charter, cont..
 Approach:
 Update the Information Technology Inventory Database to
determine the Upgrade requirements
 Develop detailed cost estimate for project and report to CIO
 Issue a request for quote to obtain hardware and software
 Use internal staff as much as possible for planning, analysis,
and installation

Roles and Responsibilities


NAME ROLE RESPONSIBILITIES
PERSON A CEO Project sponsor, monitor project

PERSON B CIO Monitor project, provide staff

PERSON C Project Manager Plan and execute project


Director of IT
PERSON D Mentor PM
Operations
Provide staff, issue memo to all
PERSON E VP, Human Resources
employees about project
Assist in purchasing hardware
PERSON F Director of Purchasing
and software
Project Plan
A project plan is a final approved document used to manage and control
project execution

The Project Manager uses the Project Charter as input to create a detailed
project plan. The project plan includes the following sections:

 Project Management Approach


 Scope Statement
 Work Breakdown Structure (WBS)
 Cost Estimates
 Schedule
 Performance Baselines
 Major Milestones
 Key or Required Staff
 Key risks
 Open or Pending Decisions

The project plan also contains references to other subsidiary plans for managing
risk, scope, schedule, etc.
Project Scope
 Develop and review project boundaries to ensure that the project has value
to the customer.
 Scope refers to all the work involved in creating the products of the project
and the processes used to create them.

Project scope management includes the processes involved in defining and


controlling what is or is not included in a project.
 Scope planning: Deciding how the scope will be defined, verified, and
controlled.

 Scope definition: Reviewing the project charter and preliminary scope


statement and adding more information as requirements are developed
and change requests are approved.

 Creating the WBS: Subdividing the major project deliverables into smaller,
more manageable components.

 Scope Verification: Formalizing acceptance of the project scope.

 Scope Control: Controlling changes to project scope.


Techniques for Identifying Project Scope

Interpretation of the Project Scope from the problem statement and the project
charter can be done using a variety of tools like:

 Pareto Chart – aka the 80-20 principle, related to the principle of "vital few, trivial many". It helps project teams narrow the scope of the project by identifying the causes that have a major impact on the project.

 Suppliers, Inputs, Process, Outputs, Customers (SIPOC) – High level process map which enables all team members to get a common understanding of how the process functions in terms of who the suppliers are, what the inputs are, what the processes are, what the outputs are, and who the customers are.
Project Primary Metrics
Metrics are needed to ensure that the requirements for the project are
measurable and therefore controlled throughout the project. Primary Metrics
are developed in the Define phase of the project, but these are not finalized
until the Measure phase.

The primary metrics for consideration in the project come from various sources:
 Suppliers
 Internal Process
 Customers

The Primary Metrics for the project can be:


 Quality
 Cycle Time
 Cost
 Value
 Labor
Secondary Project Metrics
The secondary metrics for the project are derived from the primary metrics. These are usually numerical representations of the primary metrics.

Some examples of secondary metrics include:


 Defects Per Unit (DPU)
 Defects Per Million Opportunities (DPMO)
 Average Age of Receivables
 Lines of Error Free Software Code
 Reduction in Scrap
Project Planning Tools
There are various tools used by a Project Manager to plan and control a
project. We have studied one such tool called Pareto Chart in the previous
section.

In this section we will study the following tools in more details:


 Network Diagram

 Critical Path Method (CPM)

 Program Evaluation and Review Technique (PERT)

 Gantt Charts

 Work Breakdown Structures (WBS)


Network Diagrams
A Network Diagram aka Arrow Diagram represents the interdependencies between all tasks and activities. Each node is conventionally drawn with Early Start, Duration, and Early Finish across the top, the Task Name in the middle, and Late Start, Slack/Float, and Late Finish across the bottom.

The Network Diagram assumes:
 Before an activity begins, all preceding activities must be completed
 Arrows indicate logical precedence
 The network must start at a single event and end at a single event

Slack time or float time for an activity is the amount of time the activity can be delayed without extending the project completion time.
Project Planning Tool ─ Critical Path
Method
 Critical Path is the longest sequence of activities on the network diagram
and is characterized by zero float or slack for activities. There is no slack on
the critical path. (CPM Tool has been provided to you as part of the toolkit).

 Any delay on the critical path delays the project

 Sometimes multiple critical paths can exist


The critical path in the below network diagram is highlighted in
orange.
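The forward-pass/backward-pass arithmetic behind CPM can be sketched in a few lines of Python. The activities, durations, and dependencies below are hypothetical, not taken from the diagram above:

```python
# Hypothetical activities with durations (days) and predecessors.
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]  # topological order

# Forward pass: Early Start / Early Finish
es, ef = {}, {}
for task in order:
    es[task] = max((ef[p] for p in predecessors[task]), default=0)
    ef[task] = es[task] + durations[task]
project_end = max(ef.values())

# Backward pass: Late Start / Late Finish
successors = {t: [s for s, ps in predecessors.items() if t in ps] for t in durations}
ls, lf = {}, {}
for task in reversed(order):
    lf[task] = min((ls[s] for s in successors[task]), default=project_end)
    ls[task] = lf[task] - durations[task]

# Slack = Late Start - Early Start; zero-slack activities form the critical path
slack = {t: ls[t] - es[t] for t in order}
critical_path = [t for t in order if slack[t] == 0]
print(critical_path, project_end)  # ['A', 'C', 'D'] 9
```

Note that A-C-D is the longest chain (3 + 4 + 2 = 9 days), so any delay on A, C, or D delays the whole project, while B carries 2 days of slack.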
Project Planning Tool ─ PERT
 The Program (or Project) Evaluation and Review Technique, commonly
abbreviated PERT, is another method of calculating the critical path in a
project.
 PERT was developed primarily to simplify the planning and scheduling of
large and complex projects.
 PERT involves three kinds of estimates for each activity – optimistic (to), most likely (tm), and pessimistic (tp). The realistic estimate for the activity is calculated as te = (to + 4tm + tp) / 6

 This is more robust than CPM because it takes three inputs in calculating the
duration of each activity
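The three-point estimate is simple to compute; a minimal sketch with hypothetical estimates (in days):

```python
# PERT three-point estimate: te = (to + 4*tm + tp) / 6
def pert_estimate(optimistic, most_likely, pessimistic):
    """Weighted average giving the most-likely estimate four times the weight."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

te = pert_estimate(4, 6, 14)  # to=4, tm=6, tp=14 (hypothetical)
print(te)  # (4 + 24 + 14) / 6 = 7.0
```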
Project Planning Tool – Gantt Chart
 A Gantt chart is a graphical representation of the duration of tasks against the progression of time.

 A Gantt chart is a useful tool for planning and scheduling projects.

 A Gantt chart is helpful when monitoring a project's progress.
Project Planning Tool – Work Breakdown Structure
 Deliverable-oriented, hierarchical decomposition of work to be executed by the project team to accomplish project objectives
 Defines total scope for the project and creates required deliverables
 Develops common understanding on the project scope
 Items at the lowest level of the WBS are called Work Packages
 A work package can be scheduled, cost estimated, monitored, and
controlled
 WBS divides the project deliverables and project work into smaller more
manageable components
Project Documentation
Documentation for the project is critical throughout the phases of the project.
Some of the benefits achieved through project documentation are :

 Written proof for execution of the project


 A common understanding of the requirements of the project and the status
of the project
 Personal bias reduction as there is a documented history of discussions and
decisions made for the project

Each project includes a variety of documents like :


 Project Charter

 Project Plan and subsidiary plans

 Project Status reports: include reporting on Key Milestones, Risks, and Pending Action Items. The frequency of these reports is determined by the nature of the project. These reports are sent to all stakeholders to keep them abreast of the status of the project

 Final Project Report: This report is prepared at the end of the project and
includes a summary of the complete project.
Vehicles for Project Documentation
Project Information can be represented using various tools:

 Project storyboard: This is a template provided by an organization to formally


document the progress of the Six Sigma project and used in review meetings
with Master Black Belts and Champions to review progress

 Statistical tool output: These are generally outputs from statistical tools like
SPSS or Minitab

 Spreadsheet output: These are generally statistical calculations or graphs as


used in Microsoft Excel or Open Office

 Checklists: These are generally used at the end of each Six Sigma phase to
ensure the required steps, tools, and techniques are followed in a Six Sigma
project

 Miscellaneous: Additional documentation of data, research reports, etc.,


can be included as a part of any project
Project Risk Management
Project Risk Management is an ongoing activity.

What is Risk?
 An uncertain event or condition that may occur during a project.
 Has an impact on at least one project objective (Time/Cost/Quality/Scope).

Impact can be :
 Positive: Enhances the success of the project
 Negative: Threat to project success

Risk probability is the likelihood that a risk will occur. Probability and impact are assessed for each risk.

Risk consequences are the effects on project objectives if the risk event occurs.
Importance of Risk Analysis
Risk Analysis and Management is crucial to the success of the project as it helps in:

 Identifying risks proactively before they become issues. Once the risk has been identified it can be mitigated, transferred, or accepted

 Communicating to stakeholders beforehand and setting realistic expectations

 Identifying contingency activities if the risk does become an issue
Project Closure
Project Closure is the process of finalizing all the activities across all Project Management phases to formally complete the project.

The intent of this phase is to:

 Get a formal sign-off on the project so that the project can be considered closed and resources can be released.

 Capture insights from the project so that these can be implemented for future projects.

 Update Organization Assets (process documentation, case studies, etc.) to include reference to the project.

 Project Closure also includes Document Archiving. All project documents are backed up and secured to ensure security and retrieval.
Lesson 2 Summary
In this section we have Learned:
 Problem Statement

 Project Charter

 Project Scope and how it can be identified and managed

 Primary And Secondary Project Metrics

 Project Planning Tools - CPM, PERT, Gantt Charts, and WBS

 Project Documentation and Vehicles For Documentation

 Project Risk Analysis

 Project Closure
Session II Summary
In this section we have covered:
 What is a Process and its elements
 Who are the Owners and Stakeholders for a process
 How to
 Identify Customers
 Collect Customer Data
 Analyze Customer Data
 Translate Customer Requirements to identify the CTQs

 Identify core processes of Project Management like:


 Problem Statement, Project Charter, Project Scope
 Primary And Secondary Project Metrics
 Project Planning Tools - CPM, PERT, Gantt Charts, and WBS
 Project Documentation and Vehicles For Documentation
 Project Risk Analysis and Project Closure
Quiz - 1
1. SIPOC model helps everyone in the company to see the business from an overall process perspective by:

 I. Providing a framework applicable to processes of all sizes
 II. Identifying the few key business customers
 III. Displaying cross-functional activities in simple terms
 IV. Helping maintain the big business picture

A. II and IV only
B. I and IV only
C. I, II, and III only
D. II, III, and IV only
Quiz - 2
1. The key difference between internal and external customers is:

A. External customers best determine the true quality of the product
B. External customers can influence the design of the product
C. External customers usually influence the design of the product
D. Their interest in the product or service
Quiz - 3
1. Which of the following statements is an incorrect description of QFD?

A. It transfers customer requirements into design specifications
B. It is an iterative process
C. It is similar to project management
D. It identifies risk areas
Quiz - 4
1. Identification of external customers is important because:

A. It permits easier product recalls
B. It helps to identify customer needs
C. It produces more profit per customer
D. It eliminates wasted advertising
Quiz - 5
1. The relevant stakeholders in an important project would typically include all of the following, EXCEPT:

A. Owners or stockholders
B. Potential suppliers
C. Potential competitors
D. Contract workers
Quiz - 6
1. What is the main difference between risk analysis and risk management?

A. There is minimal difference; they refer to the same concept
B. Risk analysis includes risk handling while risk management refers to risk monitoring
C. Risk analysis refers to tools and risk management deals with consent
D. Risk analysis evaluates risks, while risk management is a more inclusive process
Quiz - 1
1. SIPOC model helps everyone in the company to see the business from an overall process perspective by:

 I. Providing a framework applicable to processes of all sizes
 II. Identifying the few key business customers
 III. Displaying cross-functional activities in simple terms
 IV. Helping maintain the big business picture

A. II and IV only
B. I and IV only
C. I, II, and III only
D. II, III, and IV only

Correct Answer: B

SIPOC does provide a framework and displays relations between various functions in a business system, helping maintain the big business picture.
Quiz - 2

1. The key difference between internal and external customers is:

A. External customers best determine the true quality of the product


B. External customers can influence the design of the product
C. External customers usually influence the design of the product
D. Their interest in the product or service

Correct Answer: A
In this question, ask: what is unique or different? Options B, C, and D are true for both internal and external customers. Option A is best because the external customer's perception of quality really determines a company's survival.
Quiz - 3

1. Which of the following statements is an incorrect description of QFD?

A. It transfers customer requirements into design specification


B. It is an iterative process
C. It is similar to project management
D. It identifies risk areas

Correct Answer: C

QFD focuses on identifying the "voice of the customer." Project management does not focus on gathering customer requirements; it focuses on implementation.
Quiz - 4

1. Identification of external customer is important because:

A. It permits easier product recalls


B. It helps to identify customer needs
C. It produces more profit per customer
D. It eliminates wasted advertising

Correct Answer: B
Customer identification may eliminate wasted advertising, increase the profit per
customer and make product recalls easier, but the most important reason is
identifying the needs of the customer. And this is impossible without knowing the
customer. A company will be successful only if it meets the needs of their
customers better than the competitors.
Quiz - 5

1. The relevant stakeholders in an important project would


typically include all of the following, EXCEPT:

A. Owners or stockholders
B. Potential suppliers
C. Potential competitors
D. Contract workers

Correct Answer: C

The relevant stakeholders in any project are the stockholders, management,


employees, suppliers, and customers. Potential competitors would not be
stakeholders
Quiz - 6

1. What is the main difference between risk analysis and risk management?

A. There is minimal difference, they refer to the same concept


B. Risk analysis includes risk handling while risk management refers to risk
monitoring
C. Risk analysis refers to tools and risk management deals with consent
D. Risk analysis evaluates risks, while risk management is a more inclusive process

Correct Answer: D

Risk management is a more thorough process, while risk analysis is more specific to the ways complex risk is evaluated. Both are different, eliminating option A. Option B refers to particular parts of risk management. Option C does not reflect the real meaning of risk management.
Session III, Lesson 1
Management and Planning Tools
Agenda – Introduction to Define II

 Team Tools

 Affinity Diagrams

 Interrelationship Diagrams

 Tree Diagrams

 Prioritization Matrices

 Process Decision Program Charts

 Activity Network Diagrams

Introduction to Define II

In this session we will cover the last three aspects of the Define II phase

 Lesson 3: Management and Planning Tools


 Affinity Diagrams, Interrelationship Diagram and so on

 Lesson 4: Business Results for Projects


 Process performance metrics such as Defect Per Unit (DPU),
Rolled Throughput Yield (RTY), Cost of Poor Quality (COPQ),
Defect per Million Opportunities (DPMO) and Process
Capability Indices
 Failure Mode and Effect Analysis (FMEA) and Risk Priority
Number (RPN)

 Lesson 5: Team Dynamics and Performance


 Team stages and dynamics, team roles and responsibilities,
team tools and communication techniques
Team Tools
 Brainstorming

 Nominal Group Technique

 Multi - voting

As per BOK, Team Tools are part of Team Dynamics and Performance

 Some of these tools have no statistical significance; they are merely used as planning tools
Team Tools – Multi - voting
 Multi - voting: Multi - voting is a group decision-making tool used to
reduce a long list of items to a manageable number.

 A team meets and votes on a set of ideas.

 Works best for a large group of ideas.

 The steps for multi - voting are :


 Generate a list of items
 Number each item for identification
 Get each participant to choose one-third of the items
 Get each participant to cast votes for each item
 Eliminate items with the least votes
 Repeat the process until a specific number of items is reached
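A single elimination round of multi-voting can be sketched as follows; the issue list and vote counts are hypothetical:

```python
# One round of multi-voting: rank items by votes, keep the top few,
# drop the rest. Repeating rounds narrows the list further.
def multivote_round(votes, keep):
    """Return the `keep` highest-voted items, dropping the least-voted."""
    ranked = sorted(votes, key=votes.get, reverse=True)
    return ranked[:keep]

votes = {"late delivery": 7, "wrong invoice": 5,
         "damaged goods": 2, "no stock": 1}   # hypothetical vote counts
shortlist = multivote_round(votes, keep=2)
print(shortlist)  # ['late delivery', 'wrong invoice']
```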
Team Tools ─ Brainstorming
Brainstorming :
 A tool used by a team to generate ideas and solutions to any
predefined problem(s).
 How?
 Write the problem to be discussed on a whiteboard.
 Invite every participant to present their ideas on why the problem occurs. Each participant gives at least 2 ideas, but not all at once.
 As you receive the ideas, write them down randomly on the
board.
 Key members in a Brainstorming session:
 Session Leader: Ensures structure to the session and free flow of
ideas from the members.
 Facilitator: Coordinates activity before and after the session.
Prioritization should never be done during brainstorming.
 Writer: Writes down all the ideas generated. Can also put
forward his/her ideas.
Important--Brainstorming can be used at all stages when fresh
ideas are needed. It is not a statistical tool and is easy to follow.
Team Tools – Nominal Group Technique
(NGT)
Nominal Group Technique:
 Similar to Brainstorming but limits initial interaction among members
 This concept is applied to prevent social or peer pressure from
influencing generation of ideas

 After the problem has been explained to all the members by the
facilitator, each member silently and individually writes down all the
ideas on a piece of paper

 Time duration to write the ideas can be 5-10 minutes

 Ideas are then collated

 Then finally use multi - voting to prioritize the list of ideas


Affinity Diagrams
 When to use Affinity Diagram
 It is a team activity, preceded by Brainstorming
 Affinity means closeness. Ideas generated during Brainstorming are often grouped according to how closely they relate to a specific theme
 Group consensus is needed. The Affinity diagram resembles the Cause and Effect Diagram in how it works

 Steps
 Define the problem and brainstorm with sticky pads
 Arrange the papers into similar thought patterns or categories
 Members arrange ideas based on certain affinity
 If one idea belongs to multiple categories, duplicate of the
idea is created and put into several categories
 Make a header card (capturing the central idea that ties all
the cards together) for every group
 Once all ideas have been grouped to the header cards, a
diagram can be drawn and borders are placed around group
of ideas
Inter-Relationship Diagram
 Interrelationship Diagram: To illustrate the relationship between
ideas in more complex situations
 If the problem is really complex, it may not be easy to determine
exact relationship between ideas
 It helps in identifying relationship between problems and ideas in
complex situations
 Steps:
 Define the problem and write down the ideas on a sticky
notepad paper
 Each paper has only one idea
 Put all the sticky notepad papers with ideas on the table in a random display
 Then identify causes and the effects of those causes from the cards, and draw an arrow from each cause item to its effect item. This is done for every card until complete
 Then transfer the digraph onto a large sheet
 High number of outgoing arrows indicates the root cause and
high number of incoming arrows indicates an outcome
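The arrow-counting step above can be sketched in code; the cause-effect pairs below are hypothetical:

```python
# Count outgoing and incoming arrows in an interrelationship digraph.
# Most outgoing arrows -> likely root cause; most incoming -> likely outcome.
edges = [  # (cause, effect) pairs, hypothetical
    ("poor training", "data entry errors"),
    ("poor training", "slow processing"),
    ("poor training", "customer complaints"),
    ("data entry errors", "customer complaints"),
    ("slow processing", "customer complaints"),
]

out_degree, in_degree = {}, {}
for cause, effect in edges:
    out_degree[cause] = out_degree.get(cause, 0) + 1
    in_degree[effect] = in_degree.get(effect, 0) + 1

root_cause = max(out_degree, key=out_degree.get)
outcome = max(in_degree, key=in_degree.get)
print(root_cause, "->", outcome)  # poor training -> customer complaints
```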
Example: Brainstorming session

Example: Following figure presents the results of a team brainstorming session which
identified ten major issues involved in developing an organization’s quality plan
Tree Diagrams
 Tree Diagram: To identify the tasks and methods needed to solve a problem
and reach a goal. An example of the Tree Diagram is a CTQ Tree.
 When?
 When developing actions to carry out a solution or other plan
 When analyzing processes in detail
 When evaluating implementation issues for several potential solutions
 As a communication tool, to explain details to others

Example: A coffee shop trying to set standards for the coffee it delivers.
Prioritization Matrices
 Prioritization Matrices: Used to prioritize tasks, issues, product/service
characteristics, etc., based on known weighted criteria

 There are three types of prioritization matrices that can be developed for
use:
 The full analytical criteria method
 The consensus criteria method
 The combination interrelationship matrix method

 When?
 Key issues have been identified and the options must be narrowed
down
 Criteria for a good solution are agreed upon, but there is a
disagreement over their relative importance
Prioritization Matrices - Example

The full analytical criteria method is the combination of all the three methods, and consensus is required. All require sets of matrices to form the final matrix. (These two are out of the Green Belt BOK.)
Matrix Diagram
 Matrix Diagram: To provide information about the relationship and
importance of task and method elements of the subject

 Shows importance of relations between the processes

 Helps in organizing large amount of inter-process activity

 There are several basic types of matrices


 L-type: elements on the Y-axis and elements on the X-axis
 T-type: 2 sets of elements on the Y-axis, split by a set of elements on the
X-axis
 X-type: 2 sets of elements on both the Y-axis and X-axis
 Y-type: 2 L-type matrices joined at the Y-axis to produce a matrix
design in 3 planes
 C-type(3-d matrix): 2 L-type matrices joined at the Y-axis, but with only 1
set of relationships indicated in 3 dimensional space

 When?
 To graphically illustrate logical connections between different
processes of a business system
Process Decision Program Chart (PDPC)

 Process Decision Program Chart: PDPC is a technique designed to help prepare contingency plans. The emphasis of the PDPC is to identify the impact of potential failures on activity plans, and create appropriate contingency plans to limit risks

 Contingency Plan: A contingency plan is a plan devised for a specific situation when things could go wrong

 When?
 Before implementing a plan, especially when the plan is large
and complex
 When the plan must be completed on schedule
 When the price of failure is high
Process Decision Program Chart (PDPC)

Example: The following PDPC shows a process which can help to secure a contract
Activity Network Diagram

 Activity Network Diagram: To show the time required for solving a problem and which items can be done in parallel

 When?
 When scheduling and monitoring tasks within a complex
project or process with interrelated tasks and resources
 When you know the steps of the project or process, their
sequence, and how long each step takes
Activity Network Diagram

Example: The following figure shows an arrow diagram used to plan the construction of a house, to identify:
• the amount of time for each operation
• the relationships between operations
• each specific operation
Summary
 Team Tools
 Multi - voting , Nominal Group Technique (NGT), etc.

 Affinity Diagrams
 When it is used and what are the steps to create Affinity Diagram

 Inter-Relationship Diagrams
 What Inter-Relationship Diagrams are and the steps to create them

 Tree Diagrams
 What Tree Diagrams are and the steps to create them

 Prioritization Matrices
 Prioritization Matrices and their types

 Matrix Diagrams
 Matrix Diagrams and their types

 Process Decision Program Charts

 Activity Network Diagrams
 What Activity Network Diagrams are and the steps to create them
Session III, Lesson 2

Business Results for Process
Agenda

 Process Performance
 Defect Per Unit (DPU)
 Rolled Throughput Yield (RTY)
 Cost of Poor Quality (COPQ)
 Defects Per Million Opportunities
(DPMO)
 Process Capability Indices.

 Failure Mode Effect Analysis (FMEA)


 What is FMEA?
 Risk Priority Number (RPN)
Defect Per Unit (DPU)

The average number of defects per unit: DPU = total number of defects / total number of units. The ratio of defects to units is the universal measure of quality.

Important-- DPU is an important business measure because it tells you how many defects you generally observe in one unit.
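A quick check of the DPU ratio; the defect and unit counts below are hypothetical:

```python
# DPU = total defects observed / total units produced
defects = 34   # hypothetical defect count
units = 750    # hypothetical unit count
dpu = defects / units
print(round(dpu, 4))  # 0.0453, i.e. about 4.5 defects per 100 units
```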
Rolled Throughput Yield (RTY)
 Throughput Yield: Throughput Yield (TPY) is the number of acceptable pieces
at the end of a process divided by the number of starting pieces, excluding
scrap and rework. TPY is a measure of quality of the process, or efficiency of
the process

 TPY is used to only measure a single process

 To calculate TPY, if the DPU or defects and units are known, then:

TPY = e^(-DPU) = e^(-D/U), or DPU = -ln(TPY)

 Rolled Throughput Yield: Rolled Throughput Yield is the true measure of process efficiency and is considered across multiple processes

 Another method to estimate RTY, if the total defects per unit (TDPU) or defects and units are known:

RTY = e^(-TDPU), or TDPU = -ln(RTY)

RTY is the product of each process's First Pass Yield (FPY) when defectives are known.
First Pass Yield (FPY): FPY = (N - D)/N, where N is the total number of units and D is the number of defectives.
Rolled Throughput Yield (RTY): RTY = FPY1 * FPY2 * FPY3 * ... * FPYn
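Both formulas can be checked numerically. The FPY values below are the 85% / 88.9% / 100% figures from the three-process example on the next slide; the DPU value is hypothetical:

```python
import math

# Single-step yield from a defect rate: TPY = e^(-DPU)
dpu = 0.05                      # hypothetical defects per unit
tpy = math.exp(-dpu)            # about 0.951: ~95% of units defect-free

# RTY across steps is the product of each step's First Pass Yield
fpys = [0.85, 0.889, 1.00]      # FPYs of processes A, B, C
rty = 1.0
for fpy in fpys:
    rty *= fpy
print(round(rty, 3))  # 0.756, i.e. roughly 75%
```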
Rolled Throughput Yield --- An Example

Example:

Let us assume a company has three processes: A, B, and C, represented by the flow:

Process A receives an input of 100 parts from the supplier. It works on the parts and
produces 85 quality parts that pass inspection without any rework. 5 parts pass the
inspection after rework. 10 parts are absolutely unusable and are “scrapped”
Rolled Throughput Yield --- An Example
Calculations:

For Process A, calculate the First Pass Yield, FPY (the number of products which pass inspection without any rework, i.e., on the first pass): FPY = (Number of quality products)/(Total number of products).

First Pass Yield/FPY of Process A = 85/100 = 85%

Similarly FPY of Process B = 88.9%

FPY of Process C = 100%

Rolled Throughput Yield = FPY of A * FPY of B * FPY of C = 75% approximately

Interpretation:

For every 100 parts coming in to the process, we estimate that only 75 parts complete all 3 processes without any rework. For a process working at Six Sigma levels, the RTY should be 99.99966%.
Defect Per Million Opportunities
 In process improvement efforts, Defects Per Million Opportunities (DPMO), or Nonconformities Per Million Opportunities (NPMO), is a measure of process performance. It is defined as:

DPMO = (number of defects / (number of units x opportunities per unit)) x 1,000,000

Example: If there are 5 units with 5 defect opportunities each and in total there are 8 defects, then:

 TOP (Total Opportunities) = Units x Opportunities = 5 units x 5 opportunities = 25 total opportunities.

 DPO (Defects Per Opportunity) = total no. of defects / total no. of opportunities = 8/25 = 0.32 defects per opportunity

DPMO = DPO x 1,000,000 = 0.32 x 1,000,000 = 320,000 DPMO
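The same DPMO arithmetic, using the numbers from the worked example:

```python
# DPMO from the example: 5 units, 5 defect opportunities each, 8 defects
units, opportunities_per_unit, defects = 5, 5, 8

total_opportunities = units * opportunities_per_unit   # 25
dpo = defects / total_opportunities                    # 0.32
dpmo = dpo * 1_000_000
print(round(dpmo))  # 320000
```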


Cost of Poor Quality (COPQ)
 What Is Cost of Quality (COQ)?
 COQ is also known as Cost of Quality and is the cost incurred by a
process because it cannot consistently make a perfect product.
 COQ is a financial measure.
 Cost of Quality (COQ) is defined as sum spent on:
 Preventive Costs: incurred to prevent failure (e.g., Training,
Improvement programs)
 Appraisal Costs: incurred to determine the degree of
conformance to quality requirements (e.g. Testing, Reviews,
Inspections)
 Internal Failure Cost: associated with defects found before the
customer receives the product or service. (e.g. Rework, Scrap)
 External Failure Cost: associated with defects found after the
customer receives the product or service. (e.g. Complaints,
Returned products, Lost reputation)
COPQ ( Cost of Poor Quality) was coined by H. James Harrington
based on previous work by Armand Feigenbaum and Joseph M.
Juran. It only includes Appraisal, Internal Failure and External Failure
Costs.
Cost of Poor Quality (COPQ), cont..

Example:

If a company manufactures 100 marker pens, with a cost of $10 per pen, the company would gain revenue of $1,000 if all 100 pens were perfect. Assume 20 pens were defective due to defects in the pen. In this case, the customer may not pay for those 20 pens. This would mean a cost of $200 incurred by the company.

Question : Why did the company incur Cost?

Answer: Its products were not perfect. This is Cost of Quality.
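A minimal sketch of the COQ/COPQ roll-up, with hypothetical dollar figures for the four cost categories:

```python
# COQ sums all four categories; COPQ (per Harrington) counts only
# appraisal, internal failure, and external failure costs.
costs = {                       # hypothetical dollar amounts
    "prevention": 500,          # training, improvement programs
    "appraisal": 300,           # testing, reviews, inspections
    "internal_failure": 200,    # rework, scrap
    "external_failure": 400,    # complaints, returns, lost reputation
}

coq = sum(costs.values())            # Cost of Quality: 1400
copq = coq - costs["prevention"]     # Cost of Poor Quality: 900
print(coq, copq)
```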


Process Capability Indices
 Cp is a straightforward indicator of process capability, given by:

Cp = (USL - LSL) / 6σ

where:
 USL is the upper specification limit
 LSL is the lower specification limit
 6σ is the product of 6 and the standard deviation (described in later chapters)
 Process Capability Index, or Cpk, is how good the process is in delivering what the
customer wants, with consistency

 To calculate Cpk, you need to find out if the process mean is closer to the LSL or the
USL. If it is equidistant, either specification limit can be chosen.

 If the Process Mean is closer to the LSL, Cpk will be Cpl = (Xbar - LSL)/(3*Sigma),
 where Xbar is the Process Average
 where Sigma represents the Standard Deviation.

 If the Process Mean is closer to the USL, Cpk will be Cpu = (USL - Xbar)/(3*Sigma)
Process Capability Indices - Example
 A batch process produces high fructose corn syrup with a specification
of the dextrose equivalent (DE) to be between 6.00 and 6.15. The DEs
are normally distributed, and a control chart shows the process is stable.
The standard deviation of the process is 0.035. The DEs from a random
sample of 30 batches has a sample mean of 6.05. Determine Cp and Cpk.

Cp = (USL - LSL)/(6*Sigma) = (6.15 - 6.00)/(6 x 0.035) ≈ 0.71
Cpk = (Xbar - LSL)/(3*Sigma) = (6.05 - 6.00)/(3 x 0.035) ≈ 0.48

A Cpk of 0.48 indicates that the process is not capable relative to the lower
specification limit. This process needs a lot of improvements.
Process Capability Indices - Example

 A Cp value of less than 1 indicates the process is not capable. Even if Cp > 1, check the Cpk value to ascertain whether the process really is capable.

 A Cpk value of less than 1 indicates that the process is definitely not
capable but might be if Cp > 1 and the process mean is at or near the
mid-point of the tolerance range.

 The Cpk value will always be less than or equal to Cp; they are equal only when the process mean is at the center of the tolerance range.

 Non-centering can happen when the process has not understood the
customer expectations clearly or the process is complete as soon as the
output reaches a spec limit.

Example: A shirt size 40 has a target chest diameter of 40 inches, but the process consistently delivers shirts with a mean chest diameter of 41 inches; a machine stops removing material as soon as the measured dimension is within spec.
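The Cp and Cpk arithmetic from the corn-syrup example can be verified directly:

```python
# Corn-syrup example: LSL=6.00, USL=6.15, mean=6.05, sigma=0.035
lsl, usl = 6.00, 6.15
mean, sigma = 6.05, 0.035

cp = (usl - lsl) / (6 * sigma)
# Cpk uses the nearer specification limit (here the mean is closer to LSL)
cpk = min(usl - mean, mean - lsl) / (3 * sigma)

print(round(cp, 2), round(cpk, 2))  # 0.71 0.48
```

Taking the minimum of the two one-sided distances means the Cpl/Cpu choice described above happens automatically.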
Failure Modes and Effects Analysis (FMEA)
 FMEA: Failure Modes and Effects Analysis

 Applicable to all levels of a system (process, sub-process, and so on) and to the product/service itself

 How is an FMEA built?
 Analyze an object (system, product, etc.) for all possible modes of failure

 RPN, Risk Priority Number: Product of O, S, and D (see next page)
 The higher the RPN, the higher the priority the object receives

 Occurrence, O: Answers how frequently the failure mode occurs. The higher the frequency or probability, the higher the value of O on a scale of 1 to 10. A mode with a high occurrence rating means it is happening very frequently.

 Severity, S: Answers how critical the failure mode is to the customer or the process. The more severe the effect, the higher the value of S on a scale of 1 to 10. A mode with a high severity rating means the mode is really critical to ensure safety of operations.

 Detection, D: Answers how easily you can detect the failure mode. The higher the detectability, the lower the value on a scale of 1 to 10. A mode with a high detection rating means that the current controls are not sufficient, reliable, or repeatable.

 FMEA is a simple tool to prioritize the failure modes & actions
 By understanding why and how we fail, we can plan for improvement
 It works on the belief that prevention saves time and expense
 Typically, FMEA is used after some root cause analysis, and is a better tool for focus/prioritization as compared to multi-voting
Risk Priority Number (RPN) & Scale
Criteria
 Risk Priority Number (RPN) is a measure used when assessing risk to help
identify critical failure modes associated with a design or process. The
RPN values range from 1 (absolute best) to 1000 (absolute worst).
RPN is calculated by:
Severity x Occurrence x Detection = RPN

 Scales of SOD ( Severity Occurrence Detection)

SEVERITY:
 Severity is the seriousness of the effect of the failure mode.

Important-- The Severity rating can never be changed.


Risk Priority Number (RPN) & Scale
Criteria
Effect | SEVERITY of Effect | Rating
Hazardous without warning | Very high severity ranking when a potential failure mode affects safe system operation without warning | 10
Hazardous with warning | Very high severity ranking when a potential failure mode affects safe system operation with warning | 9
Very High | System inoperable with destructive failure without compromising safety | 8
High | System inoperable with equipment damage | 7
Moderate | System inoperable with minor damage | 6
Low | System inoperable without damage | 5
Very Low | System operable with significant degradation of performance | 4
Minor | System operable with some degradation of performance | 3
Very Minor | System operable with minimal interference | 2
None | No effect | 1
Occurrence
 Occurrence is the probability that a specific cause will result in the
particular failure mode.

PROBABILITY of Failure due to a particular cause | Failure Prob | Rating
Very High: Failure is almost inevitable due to this cause | >1 in 2 | 10
Very High: Failure is almost inevitable due to this cause | 1 in 3 | 9
High: Repeated failures due to this cause | 1 in 8 | 8
High: Repeated failures due to this cause | 1 in 20 | 7
Moderate: Occasional failures due to this cause | 1 in 80 | 6
Moderate: Occasional failures due to this cause | 1 in 400 | 5
Moderate: Occasional failures due to this cause | 1 in 2,000 | 4
Low: Relatively few failures due to this cause | 1 in 15,000 | 3
Low: Relatively few failures due to this cause | 1 in 150,000 | 2
Remote: Failure is unlikely due to this cause | <1 in 1,500,000 | 1

If the probability of Occurrence of the failure mode is 1 in 3, you would give it a rating of 9.
Detection

 Detection is the probability that a particular cause or failure will be found

 If detection is impossible, you would give the failure mode a rating of 10. At the start of a Six Sigma project, you would give a relatively high rating, as a rule of thumb
Detection

Detection            | Likelihood of DETECTION by Design / Process Control                                                                | Ranking
Absolute Uncertainty | Design / Process control cannot detect potential cause/mechanism and subsequent failure mode                      | 10
Very Remote          | Very remote chance the Design / Process control will detect potential cause/mechanism and subsequent failure mode | 9
Remote               | Remote chance the Design / Process control will detect potential cause/mechanism and subsequent failure mode      | 8
Very Low             | Very low chance the Design / Process control will detect potential cause/mechanism and subsequent failure mode    | 7
Low                  | Low chance the Design / Process control will detect potential cause/mechanism and subsequent failure mode         | 6
Moderate             | Moderate chance the Design / Process control will detect potential cause/mechanism and subsequent failure mode    | 5
Moderately High      | Moderately High chance the Design / Process control will detect potential cause/mechanism and subsequent failure mode | 4
High                 | High chance the Design / Process control will detect potential cause/mechanism and subsequent failure mode        | 3
Very High            | Very High chance the Design / Process control will detect potential cause/mechanism and subsequent failure mode   | 2
Almost Certain       | Design / Process control will detect potential cause/mechanism and subsequent failure mode                        | 1
Example Of FMEA & RPN

In the process of playing a cricket match, the manager wants to prioritize the risk areas which could potentially result in losing the match again

Sample FMEA Template

The FMEA Matrix can be first updated in the Define Phase. You can use it in the Measure and Analyze Phases and finally update it after the Control Phase.
Summary

 Process Performance
 Defect Per Unit (DPU)
 Rolled Throughput Yield (RTY)
 Cost of Poor Quality (COPQ)
 Defects Per Million Opportunities (DPMO)
 Process Capability Indices

 Failure Mode Effect Analysis (FMEA)


 What is FMEA?
 Risk Priority Number (RPN)
Session III, Lesson 3

Team Dynamics and Performance

Agenda
 Team Stages and Dynamics
 Six Sigma and Other Team Roles and Responsibilities
 Communication Techniques
Team Stages
 In the Forming stage :
 The team comes together and begins to formulate roles and
responsibilities
 The team leader directs and assigns responsibilities to others
 Team members are generally enthusiastic and motivated by a
desire to be accepted
 The leader employs a directive style of management - delegating
responsibility, providing structure, and determining process
 To move to the next stage, the team should achieve a commitment
to the project and an acceptance of a common purpose

 In the Storming stage:


 Conflicts start to arise
 The team leader coaches and conciliates
 Team members struggle over responsibilities and control
 The leader employs a coaching style of management – facilitating
change, managing conflict, and mediating understanding
 To move to the next stage, team members need to learn to voice
disagreement openly and constructively while staying focused on
common objectives and areas of agreement
Team Stages
 In the Norming stage:
 Relationships gel and the team develops a unified commitment to the project goal
 The team leader promotes and participates
 Team members look to the leader to clarify understanding as some leadership roles
begin to shift to the group
 The leader employs a participatory style of management – facilitating change,
working to build consensus, and overseeing quality control
 To move to the next stage, team members must accept individual responsibilities
and work out agreements about team procedures

 In the Performing stage :


 Team members manage complex tasks and work toward the common goals
 The team leader supervises and stands aside
 This is the highly productive stage of project team evolution
 The leader employs a supervisory style of management – overseeing progress,
rewarding achievement, and supervising process
 When the project has been successfully completed or when the end is in sight, the
team moves into the final stage

 In the Adjourning stage :


 The project is winding down and the goals are within reach
 The team leader says goodbye and gives feedback
 During this stage, team members are dealing with their impending separation from
the team
 The leader employs a supportive style of management – giving feedback,
celebrating accomplishments, and providing closure
Group Challenges
 Overbearing participants :
 These participant use their influence or expertise to take on a position of
authority, discounting contributions from other team members
 Solution: Team leaders must establish ground rules for participation, by
reinforcing that the group has the right to explore any area pertinent to team
goals and objectives

 Dominant participants:
 These participants take up an excessive amount of group time by talking too
much, focusing on trivial concerns, and otherwise preventing participation by
others
 Solution: Team leaders need to set participation limits and encourage
participation of others by specifically soliciting their input.

 Reluctant participants:
 Feel intimidated or are unhappy with the team process
 Their reluctance means they miss opportunities to bring up data that is valuable to the project
 A team member's dislike of the purpose or requirements of the project can
lead to hostility
 Solution: One way to deal with a reluctant participant is to respond positively
and encourage any contribution from the team member
Group Challenges
 Opinions:
 Opinions are useful for exploring team creativity, but one should not
blindly accept opinions as facts. This can lead to serious
miscalculations or misinterpretation
 Solution: It is important that the team is objective and critical when
dealing with opinions, and that decisions are based on evidence in
the form of data

 Feuding:
 Feuds (disputes) in groups are often a result of issues which are not
related to the project, or are a result of a difference of opinion
between two individuals
 When feuding occurs, other team members may be reluctant to
speak up for fear of being perceived as supporting the argument,
or they may feel pressured into taking sides
 Solution: Develop ground rules for interpersonal behavior within the
group

 Groupthink:
 A situation where the group’s desire for reaching consensus quickly
abridges the critical testing, analysis, and evaluation of ideas
 Solution: To overcome groupthink, the team needs to be encouraged to question assumptions and critically evaluate alternatives before reaching consensus
Group Challenges
 Floundering:
 Teams have trouble making progress due to the inability to make or
commit to decisions
 Solution: Discuss the stagnant position with the team, assessing the roles
and responsibilities of team members, and opening more effective
communication channels with the team and with other stakeholders

 Rush to accomplishment :
 Happens when a team's desire for getting the results supersedes the
team's sensitivity to alternative courses of action
 Solution: Remind team members that agendas work to allow them
enough time to accomplish tasks, as well as keeping them on schedule
and emphasizing that quality takes patience; maintain discipline of the
DMAIC methodology

 Attribution :
 Is the forming of conclusions based on inference, rather than facts and
data
 Solution: Ask attributers to paraphrase the information they have
received, and require that conclusions be based on verified sources
and data
Group Challenges
 Discounts:
 Are dismissals of the contributions of individual team members
 Solution: Support discounted team members by refocusing on what
they have said, and speaking to team members who habitually
discount others. Tools for dealing with discounts include training the
team in active listening
Six Sigma Teams and Other
Responsibilities
 Executive Sponsors:
 Sets the direction and priorities for the organization
 Leads and directs the company’s overall objective towards successful and
profitable Six Sigma deployment
 The sponsor may be a functional manager, or an external customer. Sponsors are
the source or conduit for project resources, and they are usually the recipients of
the benefits the project will produce

 Process Owners :
 They are usually functional managers in charge of specific processes - such as a
production line supervisor at a manufacturing facility
 They work with the Black Belts to improve the process for which they are
responsible
 Their knowledge value is their functional expertise

 Champions:
 They are typically upper level managers that control and allocate resources to
promote process improvements
 They ensure that the organization is providing necessary resources to the project
and that the project is fitting into the strategic plans of the organization
 They are involved in all project reviews in their area of influence
Six Sigma Teams and Other
Responsibilities
 Master Black Belts:
 Act as consultants to team leaders, and offer expertise in the use of Six Sigma
tools and methodologies
 They are experts in Six Sigma statistical tools and are qualified to teach high-level
Six Sigma methodologies and applications
 Master Black Belts often work within a single function, such as marketing or
accounting
 They work closely with process owners to implement Six Sigma methodologies and
ensure that projects stay on track

 Black Belts :
 Lead project teams and conduct the detailed analysis required in Six Sigma
methodologies
 They usually act as team leader and work on the project on a full-time basis
 Black Belts act as instructors and mentors for Green Belts, educating them in Six
Sigma tools and methods
 They also protect the interests of the project by liaising with functional managers

 Green Belts :
 Are focused on the basic Six Sigma tools to assist Black Belts’ projects
 They are trained in Six Sigma but typically lead project teams working in their own
areas of expertise
 Green Belts work on projects on a part-time basis, dividing time between project and functional responsibilities.
Communication Techniques
For any organization to survive, information must continually flow vertically and
horizontally across the company

 Vertical Communication:
 Downward Flow of Communication: Managers must pass information and give
orders and directives to the lower levels
 Upward Flow of Communication: Upward communication consists of information
relayed from the bottom or grassroots, to the higher levels of the company
 Some of the more common methods of upward communication are open
door policies, surveys, questionnaires, suggestion systems, breakfast
meetings, shift meetings and so on

 Horizontal Communication:
 Horizontal communication refers to the sharing of information across the same
levels of the organization

 Formal and Informal Communication:

 Formal communications are official, company-sanctioned methods of communicating to the employees
 Informal communication includes hallway discussions, ad-hoc meetings, the grapevine, rumor mill, etc.
Communication Techniques

 Verbal and Non-Verbal Communication:


 Verbal communication includes written and oral communication via
telephone, face-to-face, formal briefings, videotapes
 Non-verbal communication imparts meaning without the use of words.
Messages and cues are sent and received through body language,
facial expressions, gestures, posture, and tone of voice. Emotional
meaning is communicated most strongly in a non-verbal manner

 One-way or Two-way communication:


 One-way communication happens when information is relayed from
the sender to the receiver, without the expectation of a response.
Policy memos and announcements are one-way communication
methods, where no response is expected
 Two-way communication is a method in which both parties react,
respond and have input into the conclusions reached through
dialogue
Summary

 Team Stages and Dynamics


 Stages of team evolution like Forming, Storming, etc.
 Team dynamics like overbearing, dominant, and reluctant
participants
 Team challenges like groupthink, floundering, and rush to
accomplishment

 Six sigma and other team roles and responsibilities


 The roles and responsibilities of teams including Black Belt, Master Black Belt, etc.

 Communication
 Effective communication techniques for different situations to
overcome barriers to project success
Define Phase Tools - Activities Summary

Listen      Listen to the customer --- VOC

Understand  Understand your customer needs

Translate   Translate your customer needs to functional requirements --- QFD

Prioritize  Prioritize your customer's main problem areas --- Pareto Charts

Collect     Collect raw data on key impacting output variables' performance and attribute the performance to financial performance

Define      Define the problem statement --- IS/IS NOT template

Update      Update the project charter with the problem statement and work boundaries --- Project Charter

Draw        Draw a CTQ Tree, detailing the key CTQs impacting the customer satisfaction levels and profits of the company --- CTQ Tree

Update      Update the FMEA matrix correlating key input variables impacting output variables --- FMEA Matrix
Session Summary

Management and planning tools
 • Affinity diagrams, tree diagrams, and so on
 • Brainstorming, multi-voting, and so on

Team dynamics
 • Team stages, communication, and so on

Business results for process
 • DPMO, DPU, FMEA, and so on
Quiz - 1

1. Which of the following new quality management tools is (are) used to organize facts and data about an unfamiliar subject or problem?

 I.   The affinity diagram
 II.  Tree diagrams
 III. Interrelationship diagram
 IV.  PDPC charts

A. I only
B. I and III only
C. IV only
D. II and IV only
Quiz - 2

1. The matrix diagram is used to show the relationship between 2 variables. Which among the following illustrates relationships in three planes?

A. L type
B. T type
C. X type
D. Y type
Quiz - 3

1. Which is the initial step in Six Sigma project methodology?

A. Problem definition
B. Define
C. Project charter
D. Champion approval
Quiz - 4

1. In order to select a problem to work on from a list of contenders, which of the following team tools would a facilitator be LEAST likely to employ?

A. Nominal Group Technique
B. Groupthink
C. Multi-voting
D. Brainstorming
Quiz - 5

1. In the preparation of a project, efforts should be made to identify and involve various parties affected by the planned changes. These other parties are known as:

A. Process owners
B. Champions
C. Team leaders
D. Stakeholders
Quiz - 6

1. A process consists of three sequential steps with the following yields: Y1=99.8%; Y2=97.4%; Y3=96.4%. Determine the total defects per unit.

A. 0.053
B. 0.065
C. 0.064
D. 0.069
Quiz - 7

1. What is the correct order of occurrence of the four team development stages:

A. Forming, Storming, Performing, Norming
B. Forming, Storming, Norming, Performing
C. Norming, Storming, Forming, Performing
D. Norming, Forming, Storming, Performing
Quiz - 1

1. Which of the following new quality management tools is (are) used to organize facts and data about an unfamiliar subject or problem?

 I.   The affinity diagram
 II.  Tree diagrams
 III. Interrelationship diagram
 IV.  PDPC charts

A. I only
B. I and III only
C. IV only
D. II and IV only

Correct Answer: B
The question requires understanding of management planning tools. The tree diagram is used to identify steps to solve a particular problem. PDPC is used to develop contingency plans. Both the Affinity diagram and the Interrelationship diagram are used to relate facts and data about an unfamiliar subject or problem.
Quiz - 2

1. The matrix diagram is used to show the relationship between 2


variables. Which among the following illustrates relationships in
three planes?

A. L type
B. T type
C. X type
D. Y type

Correct Answer: D
This question requires knowledge of the types of matrix diagrams. L type, T
type, and X type can be ruled out as they are drawn on two planes only.
Quiz - 3

1. Which is the initial step in Six Sigma project methodology?

A. Problem definition
B. Define
C. Project charter
D. Champion approval

Correct Answer: B

Remember the DMAIC process for Six Sigma. The first step is Define
phase. All other options mentioned are part of the Define phase.
Quiz - 4
1. In order to select a problem to work on from a list of contenders, which of the following team tools would a facilitator be LEAST likely to employ?

A. Nominal Group Technique


B. Groupthink
C. Multi - voting
D. Brainstorming

Correct Answer: B

Notice that the question requires a negative response. Groupthink is a condition of group cohesiveness which can decrease the performance of a team because alternatives are not adequately explored.
Quiz - 5

1. In the preparation of a project, efforts should be made to


identify and involve various parties affected by the planned
changes. These other parties are known as:

A. Process owners
B. Champions
C. Team leaders
D. Stakeholders

Correct Answer: D

Team leaders and champions are part of the team. Process owner is required
to be in the communication chain. Any party that may be affected by the
results can be described as stakeholders.
Quiz - 6
1. A process consists of three sequential steps with the following yields:
Y1=99.8%; Y2=97.4%; Y3=96.4% Determine the total defects per unit.

A. 0.053
B. 0.065
C. 0.064
D. 0.069

Correct Answer: B

Solution requires calculation in two parts:
RTY = Y1 x Y2 x Y3 = 0.998 x 0.974 x 0.964 = 0.93706
Total DPU = -loge(RTY) = -loge(0.93706) = 0.065
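The same two-part calculation can be reproduced in a short Python sketch:

```python
import math

# Step yields from the question
yields = [0.998, 0.974, 0.964]

# Rolled Throughput Yield is the product of the sequential step yields
rty = math.prod(yields)

# Total defects per unit: DPU = -ln(RTY)
dpu = -math.log(rty)

print(f"RTY = {rty:.5f}")  # RTY = 0.93706
print(f"DPU = {dpu:.3f}")  # DPU = 0.065
```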
Quiz - 7

1. What is the correct order of occurrence of the four team development stages:

A. Forming, Storming, Performing, Norming
B. Forming, Storming, Norming, Performing
C. Norming, Storming, Forming, Performing
D. Norming, Forming, Storming, Performing

Correct Answer: B

Forming, Storming, Norming, Performing is the standard sequence of the team development process.


Session IV, Lesson 1
Measure 1
Agenda

 Introduction to Measure – I

 Process Analysis and Documentation


 Process Modeling
 Process Inputs and Outputs

 Probability and Statistics


 Drawing valid statistical
conclusions
 Central limit theorem and
sampling distribution of the
mean
 Basic probability concepts
Introduction to Measure Phase
 The Measure phase is the second phase in a Six Sigma project. The objectives of this phase are to:

 Gather as much information as possible on the current processes


 This entails three key tasks
 Creating A Detailed Process Map
 Gathering Baseline Data, &
 Summarizing And Analyzing The Data

 In this session we will cover the first two aspects of Measure Phase :

 Lesson 1: Process Analysis and Documentation


 Process modeling and Process inputs and outputs
 Lesson 2: Probability and Statistics
 Drawing valid statistical conclusions
 Central limit theorem and sampling distribution of the mean
 Basic probability concepts
Process Analysis and Documentation
Process Modeling
 Process Mapping: Process mapping is a workflow diagram to bring a
clearer understanding of a process or series of parallel processes.
Important--Process mapping may be done in the Measure or in the
Analyze Phase.

 Process mapping can be done by any of the following methods:


 Flowchart
 Written Procedures
 Work Instructions

 Flowchart: A flowchart is a picture of the separate steps of a process in


sequential order.

 When?
 To document a process
 When planning a project
 To communicate to others how a process is done
 To develop understanding of how a process is done
Common Symbols Of Flowchart:

Flowchart

Example: The following flowchart shows the processes in Software development
Written Procedures
 Written Procedures: A written procedure is a step-by-step guide to direct the
reader through a task

 When?
 A process is lengthy and complex
 A process is routine, but it's essential that everyone strictly follows the rules
 A person wants to know what is going on while a product is being developed (as-is)

 Benefits: Some of the benefits of having written procedures are as follows:


 Prevents mistakes
 Frees your creativity
 Saves time
 Ensures consistency and improves quality
 Frees management from worrying about employees' decisions, as these are prescribed in the procedures and work instructions

Procedures describe the process at a general level (e.g., internal audit


procedure), while work instructions provide details and a step-by-step sequence
of activities (e.g., how to fill out the audit results report). Work instructions are also
easy to decode and can be easily understood by employees.
Work Instruction

 Work Instruction: Work instructions define how one or more


activities in a procedure should be written in detail, using
technology or other resources

 The purpose of a Work Instruction is to organize steps in a logical


format so that an employee can easily follow it independently
Work Instruction - Example
Example: Work Instruction for shipping electronic instruments

 XYZ Electronic Instruments


 Written by: ABC
 Approved by: CDE
 Work Instruction no: AB101
 Date issued: Oct. 10. 2010
 Revision: Z
Work Instruction – Example, cont..
Example: Work Instruction for shipping electronic instruments

 Subject: Shipping of electronic instruments by the shipping department


 Scope: Applicable to Shipping Operations
 Procedures:
 Prepare order for shipment
 Shipping person gets the order number from the sales
department via automatic order system
 Get the quantity of the instrument and card number from the
system file
 Check that the quantity of instruments is correct
 Pack the instruments according to the card

 Packaging
 Check any special packaging required of the instruments
 Mark instruments as per card instructions
 Pack the instruments in the special container if required
according to the card or use the standard container if special
container is not required
 Write order number in shipping system; get the packing list
and shipping documentation
Work Instruction – Example, cont..

Example: Work Instruction for shipping electronic instruments

 Complete the Shipment
 Check the quantity and documents

Process Inputs and Outputs

 To improve a process, the key process output variables (KPOV) and input variables (KPIV) should first be measured
 Metrics for key process variables include percent defective, operation cost, elapsed time, backlog quantity, and documentation errors
 The Process Owners are the best source for initial identification of the critical variables
 Once identified, the relationships between the variables are depicted using a tool such as cause and effect diagrams
Cause and Effect Matrix
The matrix lists key process output variables horizontally and key process input variables vertically. For each of the output variables, assign a prioritization number. Within the matrix, a number is entered for the effect that each input variable has on each output variable. For each input variable, the output-variable priority is multiplied by the effect value, and these products are summed to determine the result for that input variable

 The process input variables results are compared to determine which


input variables have the greatest effect on the output variables

                          Process Output Variables
                          A    B    C    D    E
Prioritization number     4    1    7    11   5     Results   %

Process Input Variables:
  1                       3         4    7          117       33
  2                            8    5    3    4     96        27
  3                       6              2          46        13
  4                            7              5     32        9
  5                                 3    4          65        18

Totals                                              356       100

Cause and Effect Matrix Template

(Blank template: output variables numbered 1-15 across the top, each with a "Rating of Importance to Customer"; Process Inputs listed down the side in numbered rows; a Total for each input row and each output column.)
Cause and Effect Matrix: How to update

 List the Input variables vertically under the column Process Inputs.

 List the Output variables horizontally under the numbers 1-15.


 These output variables would need to be judged important from
the customer’s perspective.
 You can refer to either the QFD or the CTQ Tree to find the key
output variables.

 Rank the output variables based on customer priority. These numbers


can also be taken from the QFD.

 The Input Variables that get the highest score would be the ones on
which to focus for the remainder of the project.

Important--This tool is in your toolkit and can also be used in the


Define phase.
Cause and Effect Diagram
 Cause and Effect Diagram: Also called a fishbone, 4-M or Ishikawa
diagram, it is used to examine effects or problems by exploring the
possible causes and to point out possible areas where data can be
collected

 When?
 When a team is trying to find potential solutions to a problem and is
looking for the root cause

Steps: There are four steps to constructing a cause and effect


diagram
 Brainstorm all possible causes of the problem or effect selected for
analysis
 Classify the major causes under the headings: materials, methods,
machinery, and manpower
 Draw a cause and effect diagram with the problem at the point of
the central axis line
 Write the causes on the diagram under the classifications chosen
Cause and Effect Diagram - Example
The example shown in figure below examines the possible
causes of solder defects on a reflow soldering line. The
group looked at all the possible causes and classified
them under the main headings as shown.

The cause and effect diagram was then used to plan data
collection to discover the root cause
Summary

 Introduction to Measure – I

 Process Analysis and Documentation

 Process Modeling
 Process Maps, Written Procedures, Work
Instructions, Flowcharts

 Process Inputs and Outputs


 Process Input Variables And Process Output
Variables
 Cause And Effect Diagrams
Probability and Statistics
Agenda
 Drawing valid statistical conclusions:
 Through Analytical Statistics – Introduction
to Hypothesis
 Distinguish between enumerative
(Descriptive) and analytical (Inferential)
studies
 Distinguish between a population
parameter and a sample statistic

 Central limit theorem and sampling distribution


of the mean:
 Central Limit Theorem
 Significance in the application of
Inferential Statistics for confidence intervals

 Basic Probability concepts


 Independence, Mutually Exclusive,
Multiplication Rules
Analytical Statistics: Introduction to
Hypothesis
 Analytical statistics use data from a sample to make estimates or
inferences about the population from which the sample was drawn.

 Example: Sampling few grains of rice to draw inference about


the whole packet.

 Most of the time, studying the entire population requires a lot of hard work and time.
 Example: Sampling will be useful to estimate the average
monthly household spending at a McDonald’s restaurant in a
town of 10,000 households.
Analytical Studies
Example: Suppose team management wants to see if the Indian cricket team's performance has improved after they have recruited a new coach. Is there an improvement that can be proven statistically?

 What?:
 Management needs to make an assumption about the efficiencies of the
two coaches A & B, and test any difference for significance
 How?

 Null Hypothesis: Considering the example, null hypothesis is that the two coaches
have the same efficiency (i.e., no difference in efficiencies till proven otherwise)
 Assuming status quo is Null Hypothesis
 Null hypothesis can be given by H0: Ya = Yb
 Ya= Efficiency of Coach A; Yb= Efficiency of Coach B

 Alternate Hypothesis: Alternative hypothesis challenges the null hypothesis


 If null hypothesis is proven wrong, alternate hypothesis must be right
 Considering the previous example, the alternate hypothesis is that the two coaches differ in efficiency (stated two-sided here; a one-sided alternative such as "coach B is better" is also possible)
 Alternate hypothesis can be given by H1: Ya≠Yb
Analytical Statistics
 Type of errors: When formulating a conclusion regarding a population based
on observation from a small sample, two types of errors are possible

 Type I error: Rejecting a null hypothesis when it was true is called Type I
error. α risk or Significance Level is the chance of committing a Type 1
Error and is typically chosen to be 5%. This means the maximum amount
of risk you are willing to tolerate for a Type 1 Error is 5%.
 It is also called ‘Producer’s Risk’ by drawing analogy with a part getting
rejected by QA team when it was not defective, thereby generating a
loss to producer
 Thus, concluding that coach B is better than coach A when they are actually at the same level of efficiency is making a Type I error

 Type II error: Accepting a null hypothesis when it was false is called


Type II error. β risk is the chance of committing a Type II error. Typically,
β is 20%.
 It is also called ‘Consumer’s Risk’ by drawing analogy with a part
getting accepted by QA team when it was defective, thereby creating
a problem for the consumer who will buy it
 Thus, concluding that the coaches are the same when they are not is making a Type II error (β error)
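To make the idea concrete, here is a minimal two-sample t-test sketch. The performance scores are made up for illustration, and 2.306 is the standard two-tailed critical t-value for α = 0.05 with 8 degrees of freedom; a real analysis would use a statistics package:

```python
import math
from statistics import mean, variance  # variance() uses the n - 1 divisor

# Hypothetical performance scores under coach A and coach B
coach_a = [10, 12, 11, 13, 9]
coach_b = [14, 15, 13, 16, 12]

# H0: Ya = Yb (same efficiency); H1: Ya != Yb
n_a, n_b = len(coach_a), len(coach_b)
pooled_var = ((n_a - 1) * variance(coach_a) +
              (n_b - 1) * variance(coach_b)) / (n_a + n_b - 2)
t = (mean(coach_b) - mean(coach_a)) / math.sqrt(pooled_var * (1 / n_a + 1 / n_b))

# Two-tailed critical value for alpha = 0.05, df = n_a + n_b - 2 = 8
t_critical = 2.306
print(f"t = {t:.2f}")
print("Reject H0" if abs(t) > t_critical else "Fail to reject H0")
```

Rejecting H0 here still carries a 5% α risk: the observed difference could be chance, which is exactly the Type I error discussed above.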
Enumerative Statistics
 Enumerative statistics consists of a set of tools that organize, summarize, and display information. A descriptive study shows various properties of a set of data, such as mean, median, mode, dispersion, and shape.

 In statistical study the word population refers to the collection of all the items or data under
consideration.
 Significant values for this collection are called population parameters e.g., Population mean,
Population median, Population standard deviation, etc.
 A sample is a subset of population and is selected randomly
 Analytical (or inferential) statistics are derived from sample data to make estimates or
inferences about the population parameters from which sample was drawn.

Important—A sample’s statistics will be a good estimate of a population parameter (discussed in


greater detail later).

 It is traditional to denote sample statistics by using Latin letters and population parameters by using Greek letters. The following symbols are most commonly used in textbooks:

                         Sample   Population
Size (no. of elements)   n        N
Mean                     X-Bar    μ
Standard Deviation       s        σ
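Python's standard statistics module mirrors this distinction: pstdev divides by N (population σ), while stdev divides by n − 1 (sample s). A small illustrative sketch with made-up data:

```python
from statistics import mean, pstdev, stdev

# An illustrative population and a sample drawn from it
population = [2, 4, 4, 4, 5, 5, 7, 9]
sample = [4, 5, 7, 9]

mu = mean(population)       # population mean (μ)
sigma = pstdev(population)  # population standard deviation (σ), divides by N
x_bar = mean(sample)        # sample mean (X-bar)
s = stdev(sample)           # sample standard deviation (s), divides by n - 1

print(f"N = {len(population)}, μ = {mu}, σ = {sigma}")
print(f"n = {len(sample)}, X-bar = {x_bar}, s = {s:.3f}")
```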
Central Limit Theorem
 Central Limit Theorem States :

 For sample size > 30, the mean of the sample means (X-double bar)
taken out from the population equals the population mean.

In simple words, the Average of the Sample means = Population


Mean.

 For sample size > 30, the Standard Error of Mean (SEM), representing the
variability between the sample means, is very low.

 SEM is given by the Formula = (Population Standard


Deviation)/Sqrt(Sample Size)

 Statistically speaking, when sample size > 30, the sample means
approach normal distribution.

Important--This doesn’t mean that all samples should be of sample size 30.
Selecting a sample size also depends on the kind of Power you want
for the test.
(Power will be explained later)
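A quick simulation (illustrative, not part of the course material) shows both claims at once: the sample means center on the population mean, and their spread matches SEM = σ/√n, even for a non-normal population:

```python
import random
from statistics import mean, pstdev

random.seed(42)  # fixed seed so the illustration is repeatable

# A deliberately non-normal (uniform) population
population = [random.uniform(0, 100) for _ in range(10_000)]
mu, sigma = mean(population), pstdev(population)

# Draw many samples of size n > 30 and record each sample mean
n = 36
sample_means = [mean(random.sample(population, n)) for _ in range(2_000)]

# The average of the sample means approaches the population mean ...
print(f"population mean = {mu:.2f}, mean of sample means = {mean(sample_means):.2f}")

# ... and their spread approaches SEM = sigma / sqrt(n)
sem_theory = sigma / n ** 0.5
sem_observed = pstdev(sample_means)
print(f"SEM (theory) = {sem_theory:.2f}, SEM (observed) = {sem_observed:.2f}")
```

Plotting a histogram of sample_means would show the bell shape the theorem predicts, even though the population itself is flat.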
Central Limit Theorem: Graphical
Representation
Central Limit Theorem and Sampling
Distribution of the Mean
 Central Limit Theorem - Conclusion
 Sampling distributions are also helpful in dealing with non-normal data

 If we take sample data points from a population and plot the
distribution of the means of the samples, it is called the sampling
distribution of means

 This sampling distribution will approach normality as the sample size
increases

 This concept is called the Central Limit Theorem (CLT)

 The Central Limit Theorem enables us to make inferences from the sample
statistics about the population parameters, irrespective of the shape of
the population

 Thus the Central Limit Theorem (CLT) becomes the basis for calculating the
confidence interval for a hypothesis test, as detailed in previous slides, because
it allows the use of the standard normal table.
Basic Probability Concepts
 Suppose an experiment has N possible outcomes, all equally likely.

 Then the probability that a specified event occurs equals the number of
ways, f, that the event can occur, divided by the total number of possible
outcomes. In symbols: P(E) = f / N
Basic Properties of Probabilities
 Property 1: The probability of an event is always between 0 and 1,
inclusive.

 Property 2: The probability of an event that cannot occur is 0. (An event
that cannot occur is called an impossible event.)

 Property 3: The probability of an event that must occur is 1. (An event
that must occur is called a certain event.)

Sample Space and Events


 Sample space: The collection of all possible outcomes for an
experiment.

 Event: A collection of outcomes for the experiment, that is, any subset of
the sample space.
Probabilities: Example
 What is the probability of getting a three followed by two when a dice is
thrown twice?

 Sample space: 6 X 6 = 36 outcomes

 Event: (3, 2)
 Can happen in only one way

 Probability: No. of ways the event can occur / Total no. of outcomes in the sample space = 1 / 36
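The same answer can be reached by brute-force enumeration of the sample space. A small sketch (Python used purely for illustration):

```python
from fractions import Fraction

# Full sample space for two throws of a die: 6 x 6 = 36 equally likely outcomes
sample_space = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]

# Event: a three followed by a two -- it can happen in only one way
event = [outcome for outcome in sample_space if outcome == (3, 2)]

# P(E) = f / N = (ways the event can occur) / (total possible outcomes)
probability = Fraction(len(event), len(sample_space))
print(probability)  # 1/36
```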


Basic Properties of Probabilities, cont..

Probability Notation:

 If E is an event, then P(E) stands for the probability that event E occurs. It
is read “the probability of E”.
Various Probability Rule

The Complementation Rule

 For any event E, P(E) = 1 – P(~E)

 In words, the probability that an event occurs equals 1 minus the
probability that it does not occur.

Combinations of Events
 The Addition Rule – “Or”
 The special addition rule (mutually exclusive events)
 The general addition rule (non-mutually exclusive events)

 The Multiplication Rule – “And”


 The special multiplication rule (for independent events)
 The general multiplication rule (for non-independent events)
Addition Rule
Mutually Exclusive

 Two or more events are said to be Mutually Exclusive if at most one of
them can occur when the experiment is performed, that is, if no two of
them have outcomes in common

The Special Addition Rule

 If event A and event B are mutually exclusive, then
P(A or B) = P(A) + P(B)

 More generally, if events A, B, C … are mutually exclusive, then
P(A or B or C ...) = P(A) + P(B) + P(C) + ...

 That is, for mutually exclusive events, the probability that at least one of
the events occurs is equal to the sum of the individual probabilities.
Addition Rule, cont…
Non- Mutually Exclusive

 Two or more events are said to be Non-Mutually Exclusive if they can
occur simultaneously when the experiment is performed; that is, they
have outcomes in common

The General Addition Rule

 If event A and event B are Non-Mutually Exclusive, then
P(A or B) = P(A) + P(B) – P(A & B)

 In words, for any two events, the probability that one or the other occurs
equals the sum of the individual probabilities less the probability that
both occur.
Multiplication Rule
Independent Events

 Event B is said to be independent of event A if the occurrence of event
A does not affect the probability that event B occurs. In symbols,
P(B | A) = P(B).

 This means that knowing whether event A has occurred provides no
probabilistic information about the occurrence of event B.

The Special Multiplication Rule

 What is the probability of all of these events occurring:
 Flip a coin and get a head
 Draw a card and get an ace
 Throw a die and get a 1

P(A & B & C) = P(A) · P(B) · P(C) = 1/2 X 1/13 X 1/6 = 1/156
Multiplication Rule, cont…
 Non-Independent Events – Conditional Events
 The probability that event B occurs given that event A has
occurred is called a conditional probability.
 It is denoted by the symbol P(B | A), read “the probability of B
given A.” We call A the given event, and
P(B | A) = P(B ∩ A) / P(A)

 The Multiplication Rule for Non-Independent/Conditional Event


 If A and B are any two events, then
P(A & B) = P(A) · P(B | A).

 In words, for any two events, their joint probability equals the
probability that one of the events occurs times the conditional
probability of the other event given that event.
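A quick numeric illustration of the general multiplication rule (the two-card example below is my own, not from the slides):

```python
from fractions import Fraction

# Hypothetical example: draw two cards WITHOUT replacement -- the events are
# not independent.
# A = first card is an ace, B = second card is an ace.
p_a = Fraction(4, 52)          # 4 aces in a 52-card deck
p_b_given_a = Fraction(3, 51)  # 3 aces left among the remaining 51 cards

# General multiplication rule: P(A & B) = P(A) * P(B | A)
p_a_and_b = p_a * p_b_given_a
print(p_a_and_b)  # 1/221

# Consistency check with the conditional-probability definition:
# P(B | A) = P(B & A) / P(A)
assert p_a_and_b / p_a == p_b_given_a
```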
Multiplication Rule, cont…
 Venn diagrams for
a) Event (not E)
b) Event (A & B)
c) Event (A or B)


Session Summary
 Introduction to Measure – I
 Process Modeling
 Process Maps, Written Procedures, Work Instructions,
Flowcharts
 Process Inputs and Outputs
 Process Input Variables And Process Output Variables
 Cause And Effect Diagrams

 Drawing valid statistical conclusions:


 Through Analytical Statistics – Introduction to Hypothesis
 Distinguish between enumerative (Descriptive) and analytical
(Inferential) studies
 Distinguish between a population parameter and a sample
statistic

 Central limit theorem and sampling distribution of the mean:


 Central Limit Theorem
 Significance in the application of Inferential Statistics for
confidence intervals

 Basic Probability concepts


 Independence, Mutually Exclusive, Multiplication Rules
Quiz - 1

1. Which of the following process mapping symbols would
NOT be associated with a decision point?

A. .
B. .
C. .
D. .
Quiz - 2

1. Which of the following are principal reasons for utilizing process
mapping?
.
a) To identify where unnecessary complexity exists
b) To visualize the process quickly
c) To eliminate the total planning process
d) To assist in work simplification

A. a&b
B. a, b, & c
C. a, b, & d
D. a, b, c & d
Quiz - 3

1. The input categories for a classical cause and effect diagram would NOT
include:

A. Maintenance
B. Manpower
C. Machine
D. Material
Quiz - 4

1. A number resulting from the manipulation of some raw data


according to certain specified procedure is called:

A. A Population
B. A Constant
C. A Statistic
D. A Parameter
Quiz - 5

1. If the probability of a car starting on a cold morning is 0.6, and


we have two such cars, what is the probability of at least one
car starting on a cold morning?

A. 0.84
B. 0.83
C. 0.62
D. 0.32
Quiz - 1

1. Which of the following process mapping symbols would
NOT be associated with a decision point?

A. .
B. .
C. .
D. .

Correct Answer: B
The question requires knowledge of process mapping/flowcharting
symbols. Answer A represents a decision point. Answer C indicates a two-way
decision. Answer D can be a preparation stage or a multiple decision.
Quiz - 2
1. Which of the following are principal reasons for utilizing process mapping?
.
a) To identify where unnecessary complexity exists
b) To visualize the process quickly
c) To eliminate the total planning process
d) To assist in work simplification

A. a&b
B. a, b, & c
C. a, b, & d
D. a, b, c & d

Correct Answer: C

Items A, B, and D are all benefits of process mapping (or process flow
charts).
Quiz - 3

1. The input categories for a classical cause and effect diagram would NOT
include:

A. Maintenance
B. Manpower
C. Machine
D. Material

Correct Answer: A

The 4Ms of the cause-and-effect diagram are Machine, Material, Method,
and Manpower.
Quiz - 4

1. A number resulting from the manipulation of some raw data


according to certain specified procedure is called:

A. A Population
B. A Constant
C. A Statistic
D. A Parameter

Correct Answer: C

A statistic is a sample value. A parameter is a population value.


Quiz - 5

1. If the probability of a car starting on a cold morning is 0.6, and


we have two such cars, what is the probability of at least one
car starting on a cold morning?

A. 0.84
B. 0.83
C. 0.62
D. 0.32

Correct Answer: A
This can be solved using the additive law of probability; since the two cars
start independently, P(A and B) = P(A) x P(B):
P(A or B) = P(A) + P(B) – P(A and B)
P(A or B) = 0.6 + 0.6 – (0.6 x 0.6)
= 1.2 – 0.36 = 0.84
Session V

Measure II
Agenda

 Collecting and Summarizing Data
 Types of data and measurement scales
 Data collection methods
 Techniques for assuring data accuracy and integrity
 Descriptive statistics
 Graphical methods
Lesson 1:

Collecting and Summarizing Data


Types of Data
Data: A collection of facts from which conclusions may be drawn. There are
two types of Data:
 Attribute Data (Discrete)
 Variable Data (Continuous)

 Attribute Data: Attribute data can be counted. It includes only integers
like 2, 40, 1050. These data are the answers to questions like “how
many”, “how often”, “what kind”. Following are the examples:
 How many products are found defective
 What percentage are defective
 How often are the machines repaired
 What kind of award were you given

 Variable Data: Variable data can be measured. It includes any real
numbers like 2.045, -4.42, 45.65. These data are the answers to questions
like “how long”, “what volume”, “how much time”, and “how far”.
Following are the examples:
Following are the examples:
 How tall are you?
 How long did it take to complete the work?
 What is the weight of this packet?
Types of Data

Determining the type of data you need to collect is the first step in the
Measure Phase. The type will depend on the kinds of questions you
intend to answer, such as “how often…” (attribute) or “how long…”
(variable).

 What do you know--Thus far, we know the CTQs, Key Process Output
Variables (KPOVs) and the Key Process Input Variable (KPIVs) for our
process.

 What do we do--We determine what type of data fits the metrics for
those key variables.

 Why this is important--Knowing the type of data prepares us to collect


the right data, and know what analysis and inferences we can make.

Important: Converting Attribute Data to Variable Data is challenging, if
not impossible, without assumptions about the situation or additional
information like re-testing all units.
Measurement Scales

 Nominal: Data consists of only names or categories. There is no
possibility of ordering. Normally considered the least informative of all
scales. Mode is the measure of central tendency.
Example: A bag of balls contained the following colors:
Green 10, Black 5, Yellow 8, White 9

 Ordinal (Ranking): Data is arranged in order and its values can be
compared. Median or mode is the measure of central tendency.
Example: Restaurant ratings: A 3, B 5, C 2, D 4

 Interval: The interval scale is used for ranking items in step order along a
scale of equidistant points. Either mode, median or mean could be used
for central tendency.
Example: The temperatures of three metal rods were 100°F, 200°F and 600°F

 Ratio: The ratio scale also represents variable data and is measured
against a known standard or increment. However, this scale also has an
absolute zero (no numbers exist below zero). Either median or mode
could be used, as well as the arithmetic, geometric and harmonic means.
Example: Mass, length, electric charge
Data Collection Methods
Check Sheets: A check sheet is a structured, prepared form for collecting
and analyzing data. This is a generic tool that can be adapted for a variety
of purposes. Check sheets are valued for their relative simplicity.

 When?
 When data can be observed and collected repeatedly by the
same person or at the same location
 When collecting data from a production process

 Example: Absenteeism in a company:

Day Absences Total


Monday IIII IIII IIII IIII IIII II 27
Tuesday IIII IIII II 12
Wednesday IIII III 8
Thursday IIII IIII III 13
Friday IIII IIII 10
 Sampling: The act, process, or technique of selecting an
appropriate sample

 Random Sampling: Random sampling is a sampling technique


where we select a group of subjects (a sample) for study from a
larger group (a population), with the choice being done not on the
basis of any logic, i.e. randomly.

 Sequential Sampling: Sequential sampling is a non-probability


sampling technique wherein the researcher picks a single or a
group of subjects in a given time interval, conducts his study,
analyzes the results then picks another group of subjects if needed
and so on

Techniques for Assuring Data Accuracy

 Stratified Sampling: A stratified sample is obtained by taking
samples from each stratum or sub-group of a population.

Example: To study the average spending of individuals in McDonald’s,
you divide the population in two categories, males and females. Each
category is then divided into different age groups. Then, from each
specific group a random sample is selected.

Important: Such a sampling technique gives you a more accurate
estimate of the population parameter.
Simple Random Sampling versus Stratified Sampling

 Simple Random Sampling is easy to do, while Stratified Sampling takes a
lot of time to perform.

 The possibility of Simple Random Sampling giving erroneous results is
very high, while Stratified Sampling minimizes the chances of error.

 Simple Random Sampling doesn’t have the power to indicate possible
causes of variation, while Stratified Sampling, if done correctly, will show
assignable causes.
Descriptive Statistics-1
Measures of central tendency: A measure of central tendency is a measure that
tells us where the middle of a set of data lies. The three most common measures of
central tendency are the Mean, the Median, and the Mode.

 Mean: Mean is the most common measure of central tendency. It is simply


the sum of the data divided by the number of data in a set of data. This is
also known as average.
Mean is also known as Arithmetic Mean.

 Median: Median is the number present in the middle when the numbers in a
set of data are arranged in ascending or descending order. If the number of
numbers in a data set is even, then the median is the mean of the two middle
numbers.
Median is also known as Positional Mean.

 Mode: Mode is the value that occurs most frequently in a set of data.
Mode is also known as Frequency Mean.

Example: For the data 1, 2, 3, 4, 5, 5, 6, 7, 8 the measures of central tendency are:

Mean = 41/9 ≈ 4.56
Median = 5
Mode = 5
Descriptive Statistics – I, cont…
 Consider a small tweak to the dataset, with the new dataset as below
1,2,3,4,5,6,7,8,100.

 The mean (Arithmetic Average) is 15.11. Conventionally, we should
expect to see 50% of values in the dataset falling to the left of 15.11 and
50% to the right.

 That doesn’t happen here; as you can see, almost 90% of the values fall to
the left side, with only one value falling to the right.

 The data point 100 is called an Outlier. An Outlier is an extreme value
in the data set.

 The Median of the Dataset is unchanged at 5.

Important--When the dataset has outliers, use median as a measure


of central tendency instead of mean.
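The effect of the outlier can be verified directly; a minimal sketch (Python standard library, used here only as a calculator):

```python
import statistics

data = [1, 2, 3, 4, 5, 6, 7, 8, 100]  # 100 is the outlier

mean = statistics.mean(data)      # pulled far to the right by the outlier
median = statistics.median(data)  # robust: unchanged at 5

print(f"Mean   = {mean:.2f}")  # 15.11
print(f"Median = {median}")    # 5
```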
Descriptive Statistics - 2
 Measure of Dispersion: Describe the spread of values.

Important--A process will have higher spread, if the data points vary
amongst them a lot.

 Three main measures of dispersion are:


 Range,
 Variance
 Standard Deviation

 Range: Range is defined as the difference between the largest and the
smallest values of the data
Range = Largest – Smallest
Example: 4, 8, 1, 6, 6, 2, 9, 3, 6, 9 → Range = 9 – 1 = 8

Interpretation: The number 8 just tells you what the spread of the data is.

Note: In calculating Range, you don’t need all the data points. You only
need the maximum value and the minimum value.
Variance
Variance is defined as Average of Squared Mean Differences, and variance shows
Variation.

Important--Variance is not variation. It just shows Variation.


 How to calculate Variance:
 Example: 4, 8, 1, 6, 6, 2, 9, 3, 6, 9

Use the formula =VARP() in an Excel sheet and you will get the calculated value
for Variance.

Variance (=VARP()): 7.24
Variance (=VAR()): 8.04

 The Variance result 7.24 uses the formula =VARP() and shows population
variation.
 The Variance result 8.04 uses the formula =VAR() and shows sample
variation.

Important: Preferably, use population variance instead of sample variance as it is a


more accurate indicator.
Standard Deviation
 Standard Deviation is the square root of Variance. Standard Deviation is
the most important of all measures of dispersion.

 It is the Standard Deviation which is known as Sigma, represented by the
Greek letter σ (population) or the Latin letter s (sample).

Important--Standard Deviation is always related to the mean.

 For the same data list used in the Variance calculations,

 Population Standard Deviation = 2.69 (Formula = STDEVP())
 Sample Standard Deviation = 2.84 (Formula = STDEV())

Important--All the Variance and Standard Deviation calculations


can be done manually without using MS-EXCEL, which will be shown
to you by your facilitator.
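The manual calculation can also be sketched in a few lines (Python instead of Excel; this is an illustration, not part of the course material). Note the sample standard deviation rounds to 2.84; quoting 2.83 truncates the third decimal.

```python
import statistics

data = [4, 8, 1, 6, 6, 2, 9, 3, 6, 9]
n = len(data)
mean = sum(data) / n  # 5.4

# Population variance (Excel =VARP()): average of squared deviations
pop_var = sum((x - mean) ** 2 for x in data) / n

# Sample variance (Excel =VAR()): divide by (n - 1) instead of n
sample_var = sum((x - mean) ** 2 for x in data) / (n - 1)

# Standard deviation is the square root of variance
pop_sd = pop_var ** 0.5
sample_sd = sample_var ** 0.5

print(f"Population variance = {pop_var:.2f}")     # 7.24
print(f"Sample variance     = {sample_var:.2f}")  # 8.04
print(f"Population std dev  = {pop_sd:.2f}")      # 2.69
print(f"Sample std dev      = {sample_sd:.2f}")   # 2.84

# Cross-check against the statistics module
assert abs(pop_var - statistics.pvariance(data)) < 1e-9
assert abs(sample_var - statistics.variance(data)) < 1e-9
```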
Descriptive Statistics - 3
 Frequency Distribution: A frequency distribution is a grouping of data into mutually
exclusive categories showing the number of observations in each class

 A survey was taken on Apple Street. In each of 20 homes, people were asked how
many cars were registered to their households. The results were recorded as follows:
1, 2, 1, 0, 3, 4, 0, 1, 1, 1, 2, 2, 3, 2, 3, 2, 1, 4, 0, 0

 Use the following steps to present this data in a frequency distribution table:
 Divide the results (x) into intervals, and then count the number of results in
each interval. In this case, the intervals would be the number of households
with no car (0), one car (1), two cars (2) and so forth
 Make a table with separate columns for the interval numbers (the number of
cars per household), the tallied results, and the frequency of results in each
interval. Label these columns Number of cars, Tally and Frequency
 Read the list of data from left to right and place a tally mark in the
appropriate row. For example, the first result is a 1, so place a tally mark in
the row beside where 1 appears in the interval column (Number of cars). The
next result is a 2, so place a tally mark in the row beside 2, and so on. When
you reach your fifth tally mark, draw a tally line through the preceding four
marks to make your final frequency calculations easier to read
 Add up the number of tally marks in each row and record them in the final
column entitled Frequency
 Your frequency distribution table for this exercise should look like this:
Descriptive Statistics – 3, cont…

Example: By looking at this frequency distribution table quickly, we can see


that out of 20 households surveyed, 4 households had no cars, 6 households
had 1 car, etc.

No. of cars(x) Tally Frequency(f)

0 IIII 4

1 IIII I 6

2 IIII 5

3 III 3

4 II 2
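The tallying above can be automated; a short sketch using Python's Counter (an illustration, not part of the exercise):

```python
from collections import Counter

# Cars registered per household in the 20 surveyed homes
cars = [1, 2, 1, 0, 3, 4, 0, 1, 1, 1, 2, 2, 3, 2, 3, 2, 1, 4, 0, 0]

freq = Counter(cars)
for n_cars in sorted(freq):
    print(f"{n_cars} car(s): frequency {freq[n_cars]}")
# 0 car(s): frequency 4
# 1 car(s): frequency 6
# 2 car(s): frequency 5
# 3 car(s): frequency 3
# 4 car(s): frequency 2
```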
Descriptive Statistics - 4
 Cumulative frequency Distribution: A cumulative frequency distribution
table is a more detailed table. It looks almost the same as a frequency
distribution table but it has added columns that give the cumulative
frequency and the cumulative percentage of the results, as well

 At a recent chess tournament, all 10 of the participants had to fill out a


form that gave their names, address and age. The ages of the
participants were recorded as follows:
37, 49, 54, 91, 60, 62, 65, 77, 67, 81

 Following are the steps to present these data in a cumulative frequency


distribution table

 Divide the results into intervals, and then count the number of results in
each interval. In this case, intervals of 10 are appropriate. Since 37 is the
lowest age and 91 is the highest age, start the intervals at 35 to 44 and
end the intervals with 85 to 94
Descriptive Statistics – 4, cont…

 Create a table similar to the frequency distribution table but with three extra columns:
 In the first column or the Lower value column, list the lower value of the result intervals. For
example, in the first row, you would put the number 35
 The next column is the Upper value column. Place the upper value of the result intervals.
For example, you would put the number 44 in the first row
 The third column is the Frequency column. Record the number of times a result appears
between the lower and upper values. In the first row, place the number 1
 The fourth column is the Cumulative frequency column. Here we add the cumulative
frequency of the previous row to the frequency of the current row. Since this is the first
row, the cumulative frequency is the same as the frequency. However, in the second
row, the frequency for the 35–44 interval (i.e., 1) is added to the frequency for the 45–54
interval (i.e., 2). Thus, the cumulative frequency is 1 + 2 = 3, meaning we have 3
participants in the 35 to 54 age group.
 The next column is the Percentage column. In this column, list the percentage of the
frequency. To do this, divide the frequency by the total number of results and multiply by
100. In this case, the frequency of the first row is 1 and the total number of results is 10.
The percentage would then be 10. (1 ÷ 10) X 100 = 10
 The final column is Cumulative percentage. In this column, divide the cumulative
frequency by the total number of results and then, to make a percentage, multiply by
100. Note that the last number in this column should always equal 100. In this example,
the cumulative frequency of the first row is 1 and the total number of results is 10,
therefore the cumulative percentage of the first row is (1 ÷ 10) X 100 = 10.
Descriptive Statistics - 5

The cumulative frequency distribution table should look like


this

Lower Value  Upper Value  Frequency (f)  Cumulative Frequency  Percentage  Cumulative Percentage
35           44           1              1                     10          10
45           54           2              3                     20          30
55           64           2              5                     20          50
65           74           2              7                     20          70
75           84           2              9                     20          90
85           94           1              10                    10          100
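The six columns can also be generated programmatically; a sketch reproducing the chess-tournament table (Python, for illustration):

```python
# Ages of the 10 chess participants, grouped into 10-year intervals
ages = [37, 49, 54, 91, 60, 62, 65, 77, 67, 81]
intervals = [(35, 44), (45, 54), (55, 64), (65, 74), (75, 84), (85, 94)]

total = len(ages)
cumulative = 0
rows = []
for lower, upper in intervals:
    f = sum(1 for a in ages if lower <= a <= upper)  # frequency in interval
    cumulative += f                                  # running total
    pct = 100 * f // total                           # percentage
    cum_pct = 100 * cumulative // total              # cumulative percentage
    rows.append((lower, upper, f, cumulative, pct, cum_pct))
    print(f"{lower}-{upper}: f={f}  cum f={cumulative}  %={pct}  cum %={cum_pct}")
```

The last row's cumulative percentage comes out to 100, as the text requires.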
Graphical Method
 Stem and leaf plots: A stem and leaf plot is used for presenting data in graphical
format, to assist in visualizing the shape of a distribution

Example: Following are the temperatures for the month of May in Fahrenheit:

78 81 82 68 65 59 62 58 51 62 62 71 69 64 67 71 62 65 65 74 76 87 82 82 83 79 79 71 82 77 81

 How?
 Begin with the lowest temperature
 The lowest temperature of the month was 51. Enter 5 in the Stem column and 1 in
the Leaf. What's the next lowest temperature? It's 58, enter 8 in the Leaf column
corresponding to 5 in the Stem. Next is 59, enter 9 in the Leaf column
corresponding to 5 in the stem.
 Now, find all of the temperatures that were in the 60's, 70's and 80's
 Enter the rest of the temperatures sequentially until your Stem and Leaf Plot
contains all of the data. It should look like the below.

Temperatures
Stem Leaf
5 189
6 22224555789
7 111467899
8 11222237
Box and Whisker Plots
 Box and whisker plots: A box and whisker graph is used to display a set of data so
that you can easily see where most of the numbers are
 Example: Suppose you were to catch and measure the length of 13 fish in a
lake
12, 13, 5, 8, 9, 20, 16, 14, 14, 6, 9, 12, 12

 A box and whisker plot is based on medians or quartiles. The first step is to rewrite
the data in order, from smallest length to largest
5, 6, 8, 9, 9, 12, 12, 12, 13, 14, 14, 16, 20

 Now find the median of all the numbers. Notice that since there are 13 numbers,
the middle one will be the seventh number
5, 6, 8, 9, 9, 12, 12, 12, 13, 14, 14, 16, 20
Median

 The next step is to find the lower median or first quartile. This is the middle of the
lower six numbers. The exact center is half-way between 8 and 9 ... which would be
8.5

 Now find the upper median or third quartile. This is the middle of the upper six
numbers. The exact center is half-way between 14 and 14 ... which must be 14
5, 6, 8, 9, 9, 12, 12, 12, 13, 14, 14, 16, 20
Median
Box and Whisker Plots, cont..
 Now you are ready to construct the actual box & whisker graph. First you will need
to draw an ordinary number line that extends far enough in both directions to
include all the numbers in your data

 Then locate the main median 12 using a vertical line just above your number line

 Now locate the lower median 8.5 and the upper median 14 with similar vertical
lines, and then draw a box using the lower and upper median lines as endpoints

 Finally, the whiskers extend out to the data's smallest number 5 and largest number
20
Box & Whisker Plot
Box and Whisker Plots, cont..

 Well, it's obvious from the graph that the lengths of
the fish were as small as 5 cm, and as long as 20 cm.
This gives you the range of the data: 20 – 5 = 15

 We also know the median, or middle value, was 12 cm.

 Since the medians or quartiles (three of them)
represent the middle points, they split the data into
four equal parts. In other words:

 One quarter of the data numbers are less than 8.5
 One quarter of the data numbers are between 8.5 and 12
 One quarter of the data numbers are between 12 and 14
 One quarter of the data numbers are greater than 14
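The five numbers that define the box and whiskers can be computed directly; a sketch following the slide's median-of-halves convention (Python used as a calculator):

```python
import statistics

lengths = [12, 13, 5, 8, 9, 20, 16, 14, 14, 6, 9, 12, 12]
data = sorted(lengths)  # 5, 6, 8, 9, 9, 12, 12, 12, 13, 14, 14, 16, 20

# Median: the 7th of the 13 ordered values
median = data[len(data) // 2]

# First quartile: median of the lower six values; third quartile: upper six
q1 = statistics.median(data[:6])
q3 = statistics.median(data[-6:])

print(f"min={data[0]}  Q1={q1}  median={median}  Q3={q3}  max={data[-1]}")
# min=5  Q1=8.5  median=12  Q3=14  max=20
```

(Other quartile conventions exist; software such as Minitab may report slightly different Q1/Q3 values for the same data.)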
Run Charts
 Run Charts: A run chart is simply a basic plot of a specific process/product value at
fixed time intervals. If a specific measurement is collected on a regular basis, say
every 15 min, and plotted on a time scale, the resulting diagram is called a run
chart.

Important: Run charts are very powerful tools to help detect Special Causes
of Variation, often found using Process Stability Study, discussed later.
Scatter Plots

 A simple tool that helps determine if a relationship exists between two
measures or indicators:
 It provides a visual image of how potential process factors are (or
are not) related to a key outcome
 An indication of any relationship is followed by more formal
statistical methods (if necessary).

 The following table shows the marks of ten students in theory and in
practice.
Pareto Charts, cont..
 A Pareto Chart is a Histogram ordered by frequency of occurrence. (How to draw
a Pareto Chart has been explained in detail in the Define Phase Discussions).

 Also called 80/20 rule or “vital few, trivial many”

 Helps project teams to find out the major causes of the problem

 Steps to create a pareto diagram:


 Arrange the module by their defects in descending order and add the no.
of defects
 Now the next column is the percentage contribution column. To do this
each module’s defects is divided by total no. of defects and converted into
percentage. For module D 50/95*100=53% approximately
 Now the next column is cumulative percentage column. Here we add the
percentage contribution of the previous row to the percentage of the
current row. Since D is the first row, the cumulative percentage would be
same as the percentage contribution. However in the second row, the
percentage contribution of module B (i.e.,32%)is added to cumulative
percentage of module D (i.e.,53%). Thus the cumulative percentage is 85%
 Now plot the pareto chart between number of defects and cumulative
percentage
Module   No. of defects   % Contribution   Cumulative %
D        50               53%              53%
B        30               32%              85%
A        5                5%               89%
C        4                4%               94%
E        3                3%               97%
F        2                2%               99%
G        1                1%               100%
Total    95

From the chart we can say that modules D and B are causing about 85% of the
defects. Hence these modules should be improved first.

Pareto Charts
Example: Suppose in a manufacturing unit, one particular product has
too many defects, so the team wants to find the root cause of the defects.
They take samples from the defective items and find that the number of
defects from module A is 5, B is 30, C is 4, D is 50, E is 3, F is 2, and G is 1.
Now draw a Pareto Chart to get a visual of which module is creating most
of the problems.
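The percentage and cumulative columns can be computed as follows (a Python sketch; note that accumulating the raw counts gives 84% after module B, whereas the slide's 85% comes from adding the already-rounded 53% and 32%):

```python
# Defects found per module in the sample
defects = {"A": 5, "B": 30, "C": 4, "D": 50, "E": 3, "F": 2, "G": 1}
total = sum(defects.values())  # 95

# Sort modules by defect count in descending order, then accumulate
rows = []
cumulative = 0.0
for module, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    pct = 100 * count / total
    cumulative += pct
    rows.append((module, count, round(pct), round(cumulative)))
    print(f"{module}: {count} defects, {pct:.0f}% contribution, "
          f"cumulative {cumulative:.0f}%")
```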
Normal Probability Plots
 Normal Probability Plots: A normal probability plot is constructed so that a
random sample from a normally distributed population will form a straight line.

Example: The following data represent a sample of diameters from a drilling


operation
0.127 0.125 0.123 0.123 0.12 0.124 0.126 0.122 0.123 0.125 0.121 0.123 0.122 0.125
0.124 0.122 0.123 0.123 0.126 0.121 0.124 0.121 0.124 0.122 0.126 0.125 0.123

 Construct a Cumulative Frequency Distribution

 Arrange the data values in ascending order, tally their frequencies, and
accumulate the frequencies
 Divide each cumulative frequency by (n + 1) and multiply the result by
100 to convert it to a percentage. This value is called the mean rank
probability estimate
Normal Probability Plots, cont.…
Plot the graph on normal probability paper.

x       Frequency   Cumulative Frequency   (Cumulative Frequency) / (n + 1)   Mean rank, %
0.120   1           1                      1/28                               4
0.121   3           4                      4/28                               14
0.122   4           8                      8/28                               29
0.123   7           15                     15/28                              54
0.124   4           19                     19/28                              68
0.125   4           23                     23/28                              82
0.126   3           26                     26/28                              93
0.127   1           27                     27/28                              96
n = 27

From this graph we can say that the random sample forms a straight line and seems to be
taken from a normally distributed population. This Normal Probability plot can also be drawn
using Minitab, statistical software used for Six Sigma related calculations and graphs.
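The mean rank column can be reproduced programmatically (a Python sketch used in place of Minitab):

```python
from collections import Counter

# Sample of diameters from the drilling operation (27 measurements)
diameters = [0.127, 0.125, 0.123, 0.123, 0.120, 0.124, 0.126, 0.122, 0.123,
             0.125, 0.121, 0.123, 0.122, 0.125, 0.124, 0.122, 0.123, 0.123,
             0.126, 0.121, 0.124, 0.121, 0.124, 0.122, 0.126, 0.125, 0.123]
n = len(diameters)  # 27

freq = Counter(diameters)
cumulative = 0
rows = []
for x in sorted(freq):
    cumulative += freq[x]
    mean_rank = 100 * cumulative / (n + 1)  # cumulative frequency / (n + 1)
    rows.append((x, freq[x], cumulative, round(mean_rank)))
    print(f"x={x:.3f}  f={freq[x]}  cum f={cumulative}  mean rank={mean_rank:.0f}%")
```

Plotting the mean rank percentages against x on normal probability paper gives the straight line discussed above.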
Lesson 2:

Probability Distribution
Agenda

 Discrete probability Distributions


 Binomial Distribution
 Poisson Distribution

 Continuous Probability Distribution


 Normal
 Chi square
 t-Distribution
 f-Distribution
Discrete Probability Distribution
 While dealing with discrete data, we must be familiar with discrete
distributions

 Among the many discrete distributions (Binomial, Poisson, Negative
Binomial, Geometric, Hyper-geometric, etc.), the two most useful are
 Binomial Distribution
 Poisson Distribution

 Like any probability distribution, these distributions also help in predicting


the sample behavior that has been taken from a population
Binomial Distribution
Binomial Distribution:

 It’s used in situations where there are only two options, choices, outcomes
(pass/fail, yes/no)
 It is an application of the population knowledge to predict the sample behavior
 Binomial distribution describes discrete data resulting from a process
 Tossing of a coin a fixed number of times
 Success or failure in an interview

 A process is called a Bernoulli process when


 Probability of each outcome remains constant over time
 Outputs are statistically independent

 A Binomial distribution is described by the following equation:

P(r) = [n! / (r! (n – r)!)] × p^r × (1 – p)^(n – r)

where p = Probability Of Success, r = Number Of Successes Desired, n = Sample Size
Binomial Distribution, cont..
 Mean of a Binomial Distribution, µ = n p
 Standard Deviation of a Binomial Distribution, σ = √(n p (1 – p))
 A! (the factorial of A) is calculated as follows:
 5 ! =5*4*3*2*1= 120
 4 !=4*3*2*1= 24

Example: We know that the tossing of a coin has only two


outcomes – head or tail
 Probability of each outcome is 0.5 & it remains fixed over time
 Also, outcomes are statistically independent
 If we want to know the probability of getting 5 heads when we toss the
coin 8 times, we can use the binomial equation to find that

 Here:
 p = Probability Of Success = 0.5
 r = Number Of Successes Desired = 5
 n = Sample Size = 8

P(5) = [8! / (5! 3!)] × 0.5^5 × 0.5^3 = 56 / 256 ≈ 0.219
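The calculation can be checked with a few lines of Python (math.comb supplies the n! / (r! (n – r)!) term):

```python
from math import comb

def binomial_pmf(r, n, p):
    """P(exactly r successes in n trials) = C(n, r) * p^r * (1 - p)^(n - r)."""
    return comb(n, r) * p ** r * (1 - p) ** (n - r)

# Probability of exactly 5 heads in 8 tosses of a fair coin
p_5_heads = binomial_pmf(5, 8, 0.5)
print(f"P(5 heads in 8 tosses) = {p_5_heads:.5f}")  # 0.21875, i.e. 56/256

# Mean and standard deviation of this Binomial distribution
n, p = 8, 0.5
mu = n * p                        # 4.0
sigma = (n * p * (1 - p)) ** 0.5  # sqrt(2), about 1.41
```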
Binomial Distribution - Concepts
 Binomial distribution is on defectives and not on defects

 It’s best suited when n < 30 & p > 0.1

 p is simply the percentage of non-defective items, provided probability


of creating a defective item remains same over time. P(good) = FPY

Project teams can use the Binomial distribution to find out how difficult a
particular target is to achieve given past performance.

Example: If we want to hire a foreign coach for the Indian team only if
probability of losing 4 matches is less than 50%, we can use the Binomial
distribution to make the decision.

RTY = FPY1 * FPY2 * FPY3 * … * FPYn = p(good)^n when every FPY equals p(good)

(The formula will be explained later.)
Defectives and Defects
A defect is any Non-compliance with a specification. A defective item will
have at least one defect.

Example: There are two defects found with a pen. One defect is the bubble
in the pen cover; the other one is a lack of ink flowing through the nib. If
three other pens have no defects, then the defects/unit (DPU) is 0.5 (2
defects/4 units) and the TPY is 0.6 while the FPY is 0.75 (3 units good/4 units).
The binomial probability of a good unit is the same as FPY, or 0.75.

The binomial probability of a defective is also 0.25 (1 defective/4 units). DPU


cannot be used for binomial probabilities.

TPY < FPY in this example because multiple defects were found on one unit.
If two units had 2 defects each, then DPU = 1, TPY = 0.4, while FPY = 0.5 (2
good/4 units).
Poisson Distribution
 It is also a probability distribution for discrete data

 It is named after Siméon Denis Poisson

 It is an application of population knowledge to predict sample behavior

 The Poisson distribution describes discrete data resulting from a process, e.g.:
 Number of calls received by a call center agent
 Number of accidents at a signal

 Unlike the Binomial distribution, which deals with binary discrete data, a Poisson distribution deals with counts that can take any non-negative integer value
Poisson Distribution - Characteristics
 This distribution is generally used for describing the probability distribution
of an event with respect to time or space

 Suitable for analyzing situations where the no. of trials (remember sample
size in Binomial distribution) is very large (tending towards infinity) and
probability of occurrence in each trial is very small (tending towards
zero).

 Hence applicable for predicting occurrence of relatively rare events like


plane crashes, car accidents etc. and therefore used in Insurance
industry

 Can be used for prediction of no. of defects, if the defect occurrence


rate is low

 A Poisson distribution is described by the following equation:

P(x) = (λ^x × e^(−λ)) / x!

where λ is the mean number of occurrences per interval and x is the number of occurrences
Poisson Distribution - An Example
 Suppose we want to investigate the efficiency of safety measures taken
at a dangerous signal. Past records show that mean number of
accidents every week is five at this signal. If the number of accidents
follows a Poisson distribution then we can calculate the probability of
any number of accidents happening in a week

 Given: λ = 5 per week

 Concepts: Poisson distribution is applicable for rare events, where the no.
of trials is large. It is typically used for analyzing defects. DPU can be used
for λ.

 Project teams can use the Poisson distribution to determine the difficulty
in achieving a particular target given past performance.
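The accident example can be worked with a short Poisson calculation. A minimal sketch (Python standard library only):

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    """Probability of exactly x events when the mean rate is lam."""
    return lam**x * exp(-lam) / factorial(x)

lam = 5  # mean accidents per week at the signal
p_zero = poisson_pmf(0, lam)   # probability of an accident-free week
p_three = poisson_pmf(3, lam)  # probability of exactly 3 accidents
print(round(p_zero, 4), round(p_three, 4))
```

Summing the PMF over a range of x values gives cumulative probabilities, e.g. the chance of "3 or fewer accidents" in a week.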
Continuous Distribution – Normal
Distribution

 It’s one of the most important continuous data Probability Distributions,


illustrated as N ( µ, σ)

 Many kinds of data like people’s height, weight, machine output in manufacturing
etc. follow normal distribution. Most production processes should be normal in their
output.

 Higher frequency of values around the mean and fewer occurrences as you move
away from mean

 Continuous & Symmetrical:


 Tails asymptotic to X-axis i.e. touches x-axis at infinity
 Bell shaped
 Total area under the Normal curve = p(x is found in the distribution) = 1
Normal Distribution – Characteristics: Long Term vs. Short Term

 For a product to be working at Six Sigma levels, the number of good parts lying in the range must be 99.99966% (Rolled Throughput Yield).

 The previous page shows 99.9999998% (Rolled Throughput Yield).

 The difference is --- 99.9999998% shows short term sigma performance; 99.99966% shows long term sigma performance.

 The difference between long term and short term sigma levels is known as Sigma Shift.

 In an empirical study done at Motorola, it was found that on average the long term Sigma level shifts from the short term level by 1.5. This is known as the 1.5 Sigma Shift.
Normal Distribution – Characteristics,
cont..
 To standardize comparisons of dispersion, a standard Z variable is used

Z = (Y − µ) / σ

 Where:
 Y = Value of the data point we are concerned with
 µ = Mean of the population
 σ = Standard deviation of the population
 Z = Number of standard deviations between Y & the mean (µ)

 Z value is unique for each probability within the normal distribution.

 It helps in finding probabilities of data points anywhere within the


distribution

 It is dimensionless with no units like mm, liters, coulombs, etc.


Z-table Usage
Z-table Usage, cont..
Example:

 The time needed to resolve customer problems follows a normal distribution with mean of 250 hours and standard deviation of 23 hrs. What is the probability that a problem resolution will take more than 300 hrs.?

 Z = (300 − 250) / 23 = 2.17. From a Normal Distribution Table, we find that a Z value of 2.17 covers an area of 0.98499 under itself.

 Thus, the probability that a problem is resolved in less than 300 hours is 98.5%, and the probability of it taking more than 300 hours is 1.5%
Normal Distribution, cont..
Example:

 For the same data, what is the probability that problem resolution will take between 216 & 273 hours?

 Z1 = (273 − 250) / 23 = 1.0 and Z2 = (216 − 250) / 23 = −1.48

 From the Z Table:
 Total area to the left of Z1 = 0.8413
 Total area to the left of Z2 = 1 − 0.9292 = 0.0707
 Intercepted area between Z1 & Z2 = 0.8413 − 0.0707 = 0.7706

Thus, the probability that a problem resolution takes between 216 & 273 hours is 77.1%
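Both worked examples can be checked without a Z-table; Python's statistics.NormalDist (available since Python 3.8) evaluates the normal CDF directly. A sketch:

```python
from statistics import NormalDist

# Resolution time ~ N(250, 23), as in the examples above
resolution = NormalDist(mu=250, sigma=23)

# P(X > 300): the first worked example
p_over_300 = 1 - resolution.cdf(300)

# P(216 < X < 273): the second worked example
p_between = resolution.cdf(273) - resolution.cdf(216)

print(round(p_over_300, 3), round(p_between, 3))
```

The results agree with the table-based answers of roughly 1.5% and 77.1%.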
Chi Square Distribution
 It is one of the most widely used probability distributions in inferential statistics i.e.
Hypothesis Testing

 When used in Hypothesis Test, it only needs one sample for the test to be
conducted.

 The Chi-square distribution (also chi-squared or χ²-distribution) with k degrees of freedom is the distribution of a sum of the squares of k independent standard normal random variables

Example: If w, x, y & z are random variables with standard normal distributions, then the random variable defined as f = w² + x² + y² + z² has a chi-square distribution with df = k = 4 degrees of freedom, as we have 4 independent standard normal variables. (The familiar df = n − 1 arises when the squared terms are computed from a sample of size n, because estimating the sample mean removes one degree of freedom.)

 Characteristics:

χ² Calculated = Σ (fo − fe)² / fe

 Where:
 χ² Calculated = chi-square index
 fo = An observed frequency
 fe = An expected frequency
Chi Square Test - Example
Example: Observed frequency of 3 wins against South Africa in Australia, would
convert to expected frequency (21 / 31) * 5 = 3.39
Chi Square Test – Example, cont…
Combining all the information:
Chi Square Test – Example (Interpretation of Result)

 There is a different chi-square distribution for each different number of degrees of


freedom
 For chi-square distribution, degrees of freedom are calculated as per the number
of rows & columns in the contingency table
Degrees of Freedom = ( Number of rows – 1) * ( Number of columns – 1)
 For previous example, Degrees Of Freedom = (2 - 1) * (4 – 1) = 3.
 Assuming α = 10%, we can look up the Chi-square distribution in Chi-Square
table & arrive at χ² Critical= 6.251
 For our example, χ² Critical= 6.251 & χ² Calculated= 1.36

 Critical χ² divides the region into acceptance and rejection zones, while χ² calculated allows us to accept or reject the null hypothesis depending on which zone it falls into

 Since calculated value is less than the critical value (falls in the acceptance
region), the differences in wins at home or abroad or with any particular country is
not statistically significant for the Australian hockey team.
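The χ²-calculated statistic is just Σ(fo − fe)²/fe. A minimal sketch with invented observed/expected frequencies (the hockey data itself sits in the image table, so these cell counts are hypothetical), compared against the critical value 6.251 quoted for α = 10%, df = 3:

```python
def chi_square_stat(observed, expected):
    """Sum of (fo - fe)^2 / fe over all cells of the contingency table."""
    return sum((fo - fe) ** 2 / fe for fo, fe in zip(observed, expected))

# Hypothetical observed and expected cell frequencies (invented for illustration)
observed = [3, 5, 2, 6]
expected = [4, 4, 4, 4]

stat = chi_square_stat(observed, expected)
critical = 6.251  # chi-square critical value at alpha = 10%, df = 3 (from the table)

print(round(stat, 2), stat < critical)  # True means it falls in the acceptance zone
```

With these invented counts the statistic falls below the critical value, so the null hypothesis would not be rejected.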
t - Distribution
 A t-distribution is most appropriate to be used when
 Sample size is <30
 Population standard deviation is not known
 Population is approximately normal

 t-distribution is symmetrical, but flatter than the normal distribution

 A t-distribution is lower at the mean & higher at the tails than a normal distribution

 As sample size increases, a t-distribution approaches normality

 There is a different t-distribution for every possible sample size (degrees of freedom)

 This distribution is used in Hypothesis testing


f – Distribution: Characteristics
 The F distribution is the ratio of two independent Chi-square distributions (each divided by its degrees of freedom), and a specific F distribution is denoted by the degrees of freedom for the numerator Chi-square and the degrees of freedom for the denominator Chi-square

 An F-test is performed to evaluate whether the standard deviations / variances of two processes are significantly different:
 Often, project teams target the variance of the process to be reduced

F = s1² / s2²

 Where s1 & s2 = standard deviations of two samples, s1 > s2 (the numerator should be greater than the denominator), df1 = n1 – 1 & df2 = n2 – 1

 From the F table one can find F critical at α and the degrees of freedom of the samples from the two different processes (df1 and df2).
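The F statistic itself is simple to compute. The sketch below uses invented sample standard deviations and leaves the comparison with F-critical (looked up from the table at α, df1, df2) to the reader:

```python
def f_statistic(s1, s2):
    """F = s1^2 / s2^2, with the larger standard deviation in the numerator."""
    if s1 < s2:
        s1, s2 = s2, s1
    return (s1 ** 2) / (s2 ** 2)

# Hypothetical sample standard deviations from two processes
f = f_statistic(2.0, 1.5)
print(round(f, 3))  # compare against F-critical at alpha, df1 = n1-1, df2 = n2-1
```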

 Applications of these distributions will be covered in a topic called


Hypothesis Tests in the Analyze Phase. Only the fundamentals of these
Distributions need to be understood for now.
Session V, Lesson 3

Measurement System Analysis


Measurement System Analysis
 Measurement System Analysis (MSA) is a technique that identifies
measurement error (variation) and sources of that error in order to
reduce the variation.

 Throughout the DMAIC process, the measurement system's output is the data we use for metrics, analysis and control efforts; an error-prone measurement system will only lead to incorrect data. Incorrect data leads to incorrect conclusions.

 Rather than relying on a false set of data, fix the MS and then collect
data.

 MSA thus is one of the first things to be done in a Measure Phase.

Important--Only after doing MSA, should you collect data.

 Calculate, analyze, and interpret measurement system capability using


repeatability and reproducibility (GR&R) to determine measurement
correlation, bias, linearity, percent agreement, and precision/tolerance
(P/T)
Objective of Measurement System
Analysis
 Obtain information about the type of measurement variation associated
with the measurement system

 Establish criteria to accept and release new measuring equipment

 Compare one measurement method with another

 Form basis for evaluating a method suspected of being deficient

Important: Resolve measurement system variation in order to ensure the


correct baseline for the project objective
Measurement System Analysis
 Measurement Analysis:
Observed Value=True Value +/- Measurement Error

 Measurement error is a statistical term meaning the net effect of all sources of
measurement variability that cause an observed value to deviate from the true
value
Total Observed Variability = Process Variability + Measurement Variability

 Both process and measurement variability must be evaluated and improved


together

Important--If we work on process variability first and our measurement variability is


large, we can never conclude that the improvement made was significant, or
correct

 Types of Measurement Errors


 Measurement System Bias- Calibration Study
µtotal = µprocess +/- µmeasurement

 Measurement System Variation- GRR Study (Gage Repeatability and


Reproducibility)
σ²total = σ²process + σ²measurement
Sources of
Variation
Gage Repeatability and Reproducibility

Gage Repeatability is the variation in measurements obtained when one


operator uses the same gage for measuring the identical characteristics of the
same part

Gage Reproducibility is the variation in measurements obtained when different
operators use the same gage for measuring the identical characteristics of the
same part
Component of GRR Study

The above diagram shows repeatability and reproducibility on 6 different parts represented
by numbers 1 – 6
for two different trial readings by three different operators

E.g. if there is a difference in readings for part 1 (green box) by 3 different operators, it is called
reproducibility error; and if there is a difference in readings of part 4 (red box) by the same operator
in two different trials, it is called repeatability error
Key Concepts
 Gage ‘Repeatability’ & ‘Reproducibility’ studies are referred to as GRR
studies

 GRR studies should be performed over the range of expected


observations

 Actual equipment should be used for the GRR studies

 Written procedures or approved practices should be followed during the


study. It should be business as usual

 Measurement variability should be presented “as-is”

 After GRR, measurement variability is separated into causal


components, prioritized & targeted for action
Measurement Resolution
 Measurement Resolution: It is the capability of the measurement system
to detect the smallest tolerable differences given the number of
increments in the measurement system over the full range

Note: As a pre-requisite to GRR, ascertain that your gage has acceptable


resolution

Example: A truck scale should not be used for measuring the weight of a
tea pack; a caliper capable of measuring differences of 0.1 mm should not
be used to show compliance with a tolerance of ±0.07 mm (e.g. a 29.47–29.61 mm
range); a color-blind person should not be asked to rate different
shades of red.
Repeatability and Reproducibility
 Repeatability is Equipment Variation (EV) and happens when the same
technician or operator measures the same part or same process, under
the same conditions, with the same measurement system.

Example: You time a 36 km/hr pace mechanism over a distance of 100


meters on your stop watch and take three readings (assuming no
operator error)

 Trial 1 = 9 seconds
 Trial 2 = 10 seconds
 Trial 3 = 11 seconds

 The same process was measured with the same equipment and in the
same conditions by the same operator.

The variation in the three readings you see is known as Repeatability or


Equipment Variation (EV).
Repeatability and Reproducibility
 Reproducibility is Appraiser Variation (AV) and happens when different
technicians or operators measure the same part or same process, under
the same conditions, with the same measurement system.
Reproducibility is also a comparison of two gages being used by the
same operator on the same parts.

Example: Same example as before with the stop watch being used by a
friend

 As you can see, there is a variation between the readings. This is known
as Reproducibility or Appraiser Variation.

Important: Fix EV and then fix AV; it’s counter-productive to fix AV


and then EV.
Data Collection
 Data Collection process:
 Usually 3 operators
 Usually 10 units to measure
 General sampling techniques should be used to represent the population
 Each unit is to be measured 2-3 times by each operator (Number of trials)
 Gage should have been calibrated properly
 Resolution should have been ensured
 First operator should measure all units in random order
 Same order should be maintained for all other operators
 Repeat for each trial

 Method of Analyzing GRR Studies:


 ANOVA not only separates the equipment & operator variation, but also
elaborates on combined effect of operator & part
 ANOVA uses the ‘standard deviation’ instead of ‘range’, & hence gives a
better estimate of the measurement system variation
 However, Time, Resource & Cost Constraints may need to be considered

ANOVA Method (the preferred method)


Interpretation of Measurement System
Analysis
 If reproducibility error is large compared to repeatability error, possible
causes could be
 Operators are not properly trained in using and reading gage
 Calibrations on gage dial are not clear

 If repeatability error is large compared to reproducibility error, possible


cause could be
 Gage / instrument needs maintenance
 Gage needs to be more rigid
 Location for gaging needs improvement
 SOP’s for measurement are not clear

 Measurement system analysis (MSA) classifies measurement system error
into five categories: correlation, bias, linearity, percent agreement, and
precision/tolerance (P/T).
GAGE RR Template
GAGE RR Results Summary
GAGE RR Interpretation
 Step 1 --- Check the value of %GRR. If %GRR < 10, gage variation is
acceptable. If %GRR is between 10 and 30, the gage may be acceptable
depending on the application and cost. If %GRR > 30, the gage is not
acceptable.

 Step 2 --- Check for EV first. If EV = 0, the measurement system is reliable


and the entire GAGE variation is contributed by different operators. If AV
= 0, the MS is precise.

 Step 3 --- If no EV, fix AV by training operators

Important

GAGE RR also shows the interaction between operators and parts, which can be
studied by knowing Part Variation (see template on previous page). The
trueness portion of gage accuracy (trueness and precision) cannot be
determined in a GRR if only one gage or measurement method is
evaluated; the gage or method might have an inherent bias that would go
undetected when only operators and parts are varied.
Session V, Lesson IV

Process Capability and Performance


Agenda

 Process Stability Studies

 Process Capability Studies

 Process Performance vs. Specification

 Short-term vs. Long-term capability

 Process capability for attributes data


Process Stability Studies

 Until now, in the Measure Phase, we have done MSA and collected data
for CTQ’s, KPOV’s and some KPIV’s.

Important—This data is only to be collected after doing MSA.

 Descriptive statistics, such as mean and standard deviation, can be


known for the data.

 Once we have a set of data, believed to be correct and valid, check


for stability.

 Reason: Nothing can be done on an unstable process. If the process is


unstable, bring it back to stable status and then conduct further studies.

 Why: A process goes unstable due to many Special or Assignable causes


of variation. There is a difference between instability (many Special
Causes of Variation) and an out-of-control condition (mainly due to one
Special Cause).

 How: Use Run Charts with the help of Minitab.


Process Stability Studies
 How to plot a Run chart in Minitab --- Stat > Quality Tools > Run Chart

 Sample chart shown below:

If the p-value for any of the four tests shown at the bottom of the chart is less
than 0.05, the process has Special Causes of variation, and the chance of the
process becoming unstable is high.
Process Stability
Studies
 Special Causes of Variation (causes that
come from outside the process and are
very sporadic in nature) may result in
spikes in the data and in trends too.

Important--Special Causes of Variation


(SCV) are undesirable for the process.
They may cause defects. That’s why
eliminating SCV is important.

 If the Run Charts show a Special Cause of


Variation, the Six Sigma project must focus
on Root Cause Analysis.
Process Capability Studies - 1
 Process capability: Process capability compares the actual process variation with the
specification spread.
 A process capability study includes the following steps:
 Planning for data collection
 Collecting data
 Plotting and analyzing results

 Process Capability Objectives: The objective of a process capability study is to


establish a state of control over a manufacturing process and then to maintain
that state of control over a period of time .When the natural process limits are
compared with the specification range, any of the following actions are possible
 Do nothing: If the process limits (Control Limits) fall under the specification
limits then this shows that no action is required
 Change the Specifications: In some of the cases, the specification limit may
be set tighter than it is necessary, then the customer needs to be contacted
for the specification to be modified or not
 Center the Process: When the process spread and specification spread are
approximately same, an adjustment to the centering of the process may
bring the lot of product under the specifications
 Reduce Variability: It may be possible to partition the variation within piece
or batch to batch etc. and work on the largest offender first. A design
experiment can be used to know the major source of variation
 Accept the losses: In some of the cases management must be content with
a high loss rate. Some centering and reduction in the variation can be
possible but the main focus is on handling the scrap and on rework
Process Capability Studies - 2

 Identifying Characteristics: To identify the characteristics in a process


capability study, the following requirements should be met:
 The characteristic should indicate a key factor in the quality of
product or process
 The value of the characteristic should be possible to influence
through process adjustments
 The operating conditions that affect the measured characteristic
should be defined and controlled

 Customer purchase order requirements or industry standards may also


determine the characteristics that are required to be measured.

 Identifying Specification/Tolerances: The process specification or


tolerances are defined by industry standards, customer requirements or
by the organization engineering department in consultation with the
customer.

 The process capability study is used to determine if the output


consistently meets specifications and the probability of a defect or
defective.
Process Capability Studies - 3
 Developing Sampling Plans: The appropriate sampling plan for a process
capability study depends upon the purpose of the study and whether there are
customer or standards requirements for it

 If the process is currently running and is in control, control chart data may be used
to calculate the process capability indices.
Important--In control means no Special Causes of Variation.

 For new processes, a pilot run may be used to estimate the process capability

 Verifying Stability and Normality: If only Common Causes of Variation (CCV) are
present in a process, then the output of the process forms a distribution that is
stable over time and is predictable. If SCV are present, the process output is not
stable over time

 Common Causes of Variation refer to the many sources of variation within a


process that have a stable and repeatable distribution over time. This is called a
state of statistical control and the output of the process is predictable

 Special causes refer to any factors causing variation that are not always acting on
the process. If special causes of variation are present, the process distribution
changes and the process output is not stable over time

 The validity of the normality assumption may be tested using the chi-square
hypothesis test or the Anderson-Darling test.
Process Performances vs. Specification

Specification:
 A specification is a customer-defined tolerance for the output unit characteristics
 There may be two-sided specifications
 Specifications form the basis for determining defects

Process Performance:

 Process tolerance limits should be viewed as the customer's limits. Beyond these limits the
customer is dissatisfied. The zone of dissatisfaction represents the zone of defects/defectives
(red dots)
 Historically the µ ± 3σ limits have been considered the natural limits of a process
 If the mean & standard deviation of the sample are known, they can be compared against
the specification. Process performance indicators like Cp and Cpk (explained later) can be
found, as well as a prediction of the percentage defective.
Process Performance Indices
 Process performance is defined as a statistical measure of the outcome of a
characteristic from a process that may not have been demonstrated to be in a
state of statistical control
 It differs from process capability (it has been defined earlier in section III) because a
state of statistical control is not required
 Three basic process performance indices are Pp, Ppk, Ppm / Cpm

 The Pp is computed as:

Pp = (USL − LSL) / 6s

 Where,
 USL is upper specification limit
 LSL is lower specification limit
 6s is the natural process variation

 Similar to the Cpk, the Ppk is computed as
Ppk = Min (Ppu, PpL)

 Where,
 Ppu is the upper process capability index, given by Ppu = (USL − x̄) / 3s
 PpL is the lower process capability index, given by PpL = (x̄ − LSL) / 3s

Where,
 x̄ = Process average & s = sample standard deviation
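The Pp / Ppk formulas can be sketched directly. All specification limits and sample statistics below are invented for illustration:

```python
def pp_indices(usl, lsl, xbar, s):
    """Process performance indices from sample mean xbar and sample std dev s."""
    pp = (usl - lsl) / (6 * s)
    ppu = (usl - xbar) / (3 * s)
    ppl = (xbar - lsl) / (3 * s)
    ppk = min(ppu, ppl)
    return pp, ppk

# Hypothetical process: spec limits 4 to 10, mean 6.5, sample std dev 0.5
pp, ppk = pp_indices(usl=10, lsl=4, xbar=6.5, s=0.5)
print(pp, round(ppk, 3))  # Ppk < Pp because the mean is off-center
```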
Process Performance Indices – contd.
 The Cpm, also known as the process capability index of the mean, is an
index that accounts for the location of the process average relative to a
target value and is defined as:

Cpm = (USL − LSL) / (6 × √(σ² + (µ − T)²))

 Where,
 µ = process average
 σ = process standard deviation
 USL = upper specification limit
 LSL = lower specification limit
 T = Target value (typically the center of the tolerance)
 xi = sample reading, n = number of sample readings (used when estimating µ and σ from a sample)

 Note:
 When the process average and the target value are equal, Cpm equals the Cpk
 When the process average drifts from the target value, Cpm is less than the Cpk
 The Ppm index is analogous to the Cpm
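The Cpm penalty for off-target centering can be seen numerically. Again, all numbers are invented:

```python
from math import sqrt

def cpm(usl, lsl, mu, sigma, target):
    """Capability index penalizing distance of the process average from target."""
    return (usl - lsl) / (6 * sqrt(sigma ** 2 + (mu - target) ** 2))

# Hypothetical process: spec limits 4 to 10, target 7 (center), sigma 0.5
on_target = cpm(10, 4, mu=7.0, sigma=0.5, target=7.0)   # no penalty: equals Cp
off_target = cpm(10, 4, mu=6.5, sigma=0.5, target=7.0)  # penalized for the shift
print(on_target, round(off_target, 3))
```

When µ = T the (µ − T)² term vanishes and Cpm reduces to (USL − LSL)/6σ, as the note above states.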
Process Performance Indices – contd.
 Concept of Process Shift in Short and Long Term:

 Over time / subgroups, a typical process will shift by approximately 1.5 standard deviations
(Figure: samples taken at Time 1, Time 2 and Time 3, drifting over time)

 Long term variation is more than the short term variation

 This difference is called the Sigma shift, which is an indicator of process control

 This shift could be due to different people, raw material, wear & tear, time, etc
Key Concepts

 Short term capability (ZST):


 It is the capability or the potential performance of the
process, in control at any point of time. It is based on the
sample collected in short term

 Long term performance (ZLT):


 It is the actual performance of the process over time

 Subgroups:
 Several small-sized samples collected consecutively, each
sample forms a sub-group
 Sub-groups are chosen so that data points are likely to
be identical within subgroup, but different between
subgroups

 Process shift (ZST– ZLT):


 It reflects how well a process is controlled, usually a factor
of 1.5 is used
 Short Term Variation: (Common Cause)

 Variance inherent in the process (natural variation)


 Also called within sub-group variation
 Small number of samples, each sample collected in a
short interval
 Common cause variation is captured
 Common causes are difficult to be identified &
corrected (process re-design would be needed)

Assumptions and Convention - Process Variations

 Long Term Variation: (Common + Special Cause)
 Added variation due to factors external to the usual process (abnormal variation)
 Also called overall variation (sample standard deviation for all samples put together)
 Within subgroup + Between subgroup variation
 Special causes (different operators, raw material, wear & tear) lead to increase in variation
 Special causes need to be identified & corrected for improvement
 Long term variation is always greater than the short term variation
Stability, Capability,
Spread and Defects
Summary

 Refer to “The Holy Grail Sheet”


attached below (The Holy Grail
Sheet is also a part of your toolkit)

 Interpretation:

 If the process has few CCV and no


SCV, the variations are less,
variability is less, capability is high,
possibility of defects is low and the
process is said to be in control and
capable

 If the process has many CCV and


no SCV, the variations are high,
variability is high, capability is less,
and the process is incapable, but
still in control.
Cpk versus Cp Comparison

 Three scenarios arise when you compare the Cpk value with Cp:

 Cpk < Cp: The mean is not centered. There is a mean shift, and this could result in a shift in Sigma levels in the long term.

 Cpk = Cp: The mean is centered. The process can only be considered capable if Cpk > 1. This will happen only if variations are in control.

 Cpk > Cp: Double-check your calculations. This is impossible by definition.

 Question --- If Cp = 0.9 and Cpk = 0.8, what would you infer and what would you do?
Understanding Process
Variations – Complaint
Resolution Time Hours

Let’s understand the


concept of short & long
term variations. Below is
the data given on
customer complaint
resolution time spread
over 3 weeks. Each
week’s data can form a
sub-group.
Understanding
Process
Variations
Effect of Mean Shift
Defect levels at different Sigma Multiple values and different mean shifts

From the chart, the effect of a mean shift becomes more negligible as
process capability increases. A Six Sigma process's level of defects isn't
affected much by long term variation.
Key Concepts

 Process capability for attribute data is determined by
the mean rate of non-conformity (defects or
defectives)

 DPMO is the measure of process capability for


attribute data. For this we need to define the mean
and standard deviation for attribute data

 For defectives, p-bar (for constant and variable


sample size) is used for checking process capability
and for defects c-bar and u-bar are used for
constant and variable sample size respectively
Understanding Process Variations –
Quality Control Department Example
Let’s assume that the quality control department checks the quality of
finished goods by sampling a batch of 10 items from the produced lot
every hour. If items are found out of control limits consistently in any
given day, production process has to be stopped for the next day.
They collect the following data over 24 hours:

Hour Defectives Hour Defectives


1 2 13 0
2 1 14 1
3 0 15 2
4 0 16 1
5 2 17 1
6 3 18 1
7 1 19 4
8 4 20 0
9 5 21 0
10 1 22 0
11 2 23 1
12 0 24 2
Understanding Process Variations – Quality Control Department Example, contd..

 Interpreting results: As this is defectives with a constant sample size, we will use p-bar to calculate process capability
 Total number of defectives = 34
 Sub-group size = 10
 Total number of units = 10 * 24 = 240
 p-bar = 34 / 240 = 0.1417

 DPMO = p-bar * 1,000,000

 DPMO of the process is 0.1417 * 1,000,000 = 142,000 approximately

 Therefore, looking at the DPMO table, the process is currently working at about 2.6 σ, which is roughly an 85.8% yield
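The p-bar arithmetic can be reproduced from the 24 hourly counts in the table:

```python
# Defectives found in each of the 24 hourly samples (sub-group size 10)
defectives = [2, 1, 0, 0, 2, 3, 1, 4, 5, 1, 2, 0,
              0, 1, 2, 1, 1, 1, 4, 0, 0, 0, 1, 2]

subgroup_size = 10
total_units = subgroup_size * len(defectives)  # 240 units over 24 hours
p_bar = sum(defectives) / total_units          # 34 / 240
dpmo = p_bar * 1_000_000

print(sum(defectives), round(p_bar, 4), round(dpmo))
```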
Assumptions and Convention - Process
Variations
 Step 1: Understand the type of data for Y and X (CTQ and KPOV, or KPOV and KPIV)

 Step 2: Do a MSA for measuring Y --- Use GAGE RR Sheet

 Step 3: Collect Data -- Use Data Collection Plan Sheet provided in Toolkit

 Step 4: Perform stability studies --- Run Charts using Minitab

 Step 5: Calculate Descriptive Statistics for Y and X

 Step 6: Perform Normality Studies:

(Stat > Basic Statistics > Normality Test, selecting Anderson-Darling) using Minitab

 Step 7: Perform Capability Studies

 Step 8: Calculate Sigma levels using capability studies and DPMO levels

 Step 9: Document the Baseline data for Y and X in Project Charter.

 Step 9a): The Baseline data should include Process Status (Stable or not, In control or not), Cpk value,
Baseline Sigma levels, Baseline DPMO Levels.

 Step 10: Move to the Analyze Phase, where the reasons why X is changing resulting in change in Y
would be determined.
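Step 8 (a sigma level from a DPMO figure) can be approximated without a lookup table via the inverse normal CDF, adding the conventional 1.5 sigma shift to convert long-term yield to a short-term sigma level. A sketch using Python's statistics.NormalDist:

```python
from statistics import NormalDist

def sigma_level(dpmo, shift=1.5):
    """Short-term sigma level implied by a long-term DPMO (1.5-shift convention)."""
    long_term_yield = 1 - dpmo / 1_000_000
    return NormalDist().inv_cdf(long_term_yield) + shift

# The quality-control example earlier gave roughly 142,000 DPMO
print(round(sigma_level(142_000), 2))
```

As a sanity check, 3.4 DPMO maps back to a sigma level of about 6.0 under this convention.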
Sessions Summary
 Collecting And Summarizing data
 Types of data and measurement scales
 Data collection method
 Techniques for assuring data accuracy and integrity
 Descriptive statistics
 Graphical methods
 Discrete probability distributions
 Binomial Distribution
 Poisson Distribution
 Continuous Probability Distribution
 Normal
 Chi square
 t-distribution
 f-distribution
 Calculate, analyze, and interpret measurement system capability using repeatability
and reproducibility (GR&R)
 Measurement correlation, bias, linearity, percent agreement, and precision/tolerance
(P/T)
 Process Capability Studies
 Process performance vs. Specification
 Process performance Indices
 Short-term vs. Long-term capability
 Process capability for attributes data
1. The sum of the squared deviations of a
group of measurements from their
mean divided by the number of
measurements equals

Quiz - 1

A. σ
B. σ2
C. One
D. The mean deviation
1. Which three of the following four
techniques could easily be used to
display the same data?
1. Stem and leaf plots
2. Box plots
3. Scatter Diagrams
4. Histograms

A. 1, 2, 3
B. 1, 2, 4
C. 1, 3, 4
D. 2, 3, 4
Quiz - 3

1. Which of the following distributions does not


require the use of the natural logarithmic base
for probability calculations?

A. Normal
B. Poisson
C. Weibull
D. Binomial
Quiz - 4

1. The repeatability of an R&R study can be


determined by:

A. Examining the variation between the individual


inspectors and within their measurement readings
B. Examining the variation between the average of
the individual inspectors for all parts measured
C. Examining the variation between part averages
that are averaged among inspectors
D. Examining the variation between the individual
inspectors and comparing it to the part averages
Quiz - 5

1. For attribute data, process capability:

A. Cannot be determined
B. Is determined by the control limits on the
applicable attribute chart
C. Is defined as the average proportion of
nonconforming product
D. Is measured by counting the average non-
conforming units in 25 or more samples
Quiz - 1
1. The sum of the squared deviations of a group of measurements
from their mean divided by the number of measurement equals

A. σ
B. σ2
C. One
D. The mean deviation

Correct Answer: B

Variance is defined as σ2
Quiz - 2
1. Which three of the following four techniques could easily be used to display the same data?

.
1. Stem and leaf plots
2. Box plots
3. Scatter Diagrams
4. Histograms

A. 1, 2, 3
B. 1, 2, 4
C. 1, 3, 4
D. 2, 3, 4

Correct Answer: B

The odd tool out is the scatter diagram, which displays the
relationship between variables
Quiz - 3

1. Which of the following distributions does not require the use of


the natural logarithmic base for probability calculations?

A. Normal
B. Poisson
C. Weibull
D. Binomial

Correct Answer: D

The normal, Poisson, and Weibull distributions all use e in their probability
formulas. The binomial distribution does not.
Quiz - 4

1. The repeatability of an R&R study can be determined by:

A. Examining the variation between the individual inspectors and within their measurement
readings
B. Examining the variation between the average of the individual inspectors for all parts
measured
C. Examining the variation between part averages that are averaged among inspectors
D. Examining the variation between the individual inspectors and comparing it to the part
averages

Correct Answer: A

Repeatability is determined by examining the variation between the
individual inspectors and within their measurement readings
Quiz - 5

1. For attribute data, process capability:

A. Cannot be determined
B. Is determined by the control limits on the applicable attribute chart
C. Is defined as the average proportion of nonconforming product
D. Is measured by counting the average non-conforming units in 25 or
more samples

Correct Answer: C

The average proportion may be reported on a defects/defectives per
million scale by multiplying the average (such as p-bar, c-bar, u-bar, etc.) by
1,000,000
Session VI

Exploratory Data Analysis


Agenda
 Causes for Variations in X

 Basic Cause Mapping Quality Tools

 Multi-Vari studies to validate the causes


 Create and interpret multi-vari studies to
interpret the difference between positional,
cyclical, and temporal variation
 Applying sampling plans to investigate the
largest sources of variation

 Simple linear correlation and regression for statistical


validation
 Interpret the correlation coefficient and
determine its statistical significance
 Difference between correlation and causation
 Interpret the linear regression equation and
determine its statistical significance
Causes for Variations in X
 In the Measure Phase, we measured Y and X.

 We considered an important relationship: if X varies, Y varies. Thus, to control the
variations in Y, we need to control the variations in X.

 To control the variations in X, we need to understand what is causing X to vary.

Important--Six Sigma approach doesn’t mandate elimination of variation. It only says


that the variation should be controlled to such an extent that defects don’t occur.

 There are two major causes for variation in X:


 Common Causes of Variation: The causes that come from within the
process, are repeatable, happen under the same circumstances and should
be controlled. For example, you need a shirt size 40 and the vendor gives
you 40.5, 39.5, 39.6, etc. The causes that result in this kind of variation are
known as Common Causes of Variation (CCV).

 Special Causes of Variation: The causes that come from outside of the
process, happen sporadically, happen under different circumstances and
should be eliminated, if undesirable. For example, you need a shirt size 40
and the vendor gives you 42, 43, 45, etc. The causes that result in this kind of
variation are known as Special Causes of Variation (SCV).

Important--Excessive CCV can also result in SCV, which is why CCV needs to be
controlled.
Causes of Variation - Examples
 Special Causes of Variation

 Poor equipment adjustment


 Operator deviates from his standard working procedures
 Machine malfunctioning
 Machine crash
 Computer malfunctioning/crashes
 Volatility in power supply
 Abnormal processing demands from customers, or high volume
demands putting strain on processes
 Absenteeism resulting in decrease of productivity
 Long test duration times

Important: You can use Fishbone Diagram explained in the Measure Phase
to know what causes variations in X. Other tools that can be used are Pareto
Charts, FMEA matrix.
Analysis
 Multi-Vari analysis: Multi-Vari studies are used to:
 Analyze variation
 Investigate the stability or consistency of a process
 Identify where to and where not to investigate
 Break down the variation into components so that improvements can be
made

 Multi-Vari studies are rigorous investigations of a process to classify variation sources


as
 Positional (Within piece)
 Cyclical (Piece to piece)
 Temporal (Over time)

 Positional: Positional sources of variation are variations within a single unit (piece)
where variation is due to location
 Pallet stacking in a truck
 Temperature gradient in an oven
 Head-to-head
 Cavity-to-cavity within a mold
 Region of country
 Line on invoice
Analysis, cont…
 Cyclical: Cyclical sources of variation are variations which occur among
sequential repetitions (piece to piece) over a short period of time
 Every nth pallet broken
 Batch-to-batch variation
 Lot-to-lot variation
 Cavity-to-cavity between molds
 Invoices received day to day
 Account activity week to week

 Temporal: Temporal sources of variation are variations which occur over longer
periods of time (time to time)
 Process drift, e.g., machine output due to inner wear and tear
 Breaks/lunches
 Seasonal
 Shift-to-shift
 Month to month closings
 Quarterly returns
Create Multi-Vari Chart

 Steps in creation of a Multi-Vari chart:
 Select the process and the relevant characteristic to be investigated
 Select sample size and frequency of measurement
 Create a tabulation sheet to record the time and values from each sample
 Plot the chart on paper with time along the X-axis and the measured value on the vertical scale
 Join the observed values with appropriate lines

 Steps in creation of the Multi-Vari chart in our example:
 The process selected is one where a plate of thickness 1” is produced; measure plate thickness
 Sample size 5 from each equipment; every 2 hours
 Sheet created, with Time, Equipment, Thickness as the header row
 Chart plotted; Time on X axis and plate thickness on Y axis
 Join the observed values with appropriate lines
Create Multi-Vari Chart
Correlation – Association between Variables

 If we want to associate ‘Y’ with a single ‘X’ and statistically validate the relationship, we
can use correlation. Use the =CORREL() function in Excel to calculate the Correlation
Coefficient

 Correlation shows the strength of the relationship between Y and X

 Statistical significance is denoted by correlation coefficient ‘r’, also known as Pearson’s

Coefficient of Correlation

 ‘r’ is always between –1 & +1


 Positive value of ‘r’ means direction of movement in both variables is same
 Negative value of ‘r’ means direction of movement in both variables is inverse
 Zero value of ‘r’ means no correlation between the two variables

 The higher the absolute value of ‘r’, the stronger the correlation between ‘Y’ & ‘X‘. As a
rule of thumb, an ‘r’ value of > +0.85 or < -0.85 indicates a strong
correlation
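Pearson's coefficient can also be computed directly from its definition, r = Sxy / √(Sxx · Syy), rather than through Excel. A minimal Python sketch; the sample data here is illustrative, not from the course:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's coefficient of correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # co-deviation Sxy
    sxx = sum((x - mx) ** 2 for x in xs)                    # Sxx
    syy = sum((y - my) ** 2 for y in ys)                    # Syy
    return sxy / sqrt(sxx * syy)

# Illustrative data: as X rises, Y rises, so r is positive and close to +1
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.0, 9.8]
print(round(pearson_r(x, y), 3))  # 0.999
```

A value this close to +1 would indicate a strong positive correlation by the thumb rule above.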
Correlation Levels

 Suppose team management wants to see if Indian cricket team’s performance has
improved

 Correlation measures the linear association between the output (Y) and one
input variable (X) only
Regression
 While correlation tells us only the
strength of the relationship, it does
not reveal much about variability
in Y being explained by X.

 If a high percentage of variability


in Y (r2> 70%) is explained by
changes in X, we can use the
model to write a transfer equation
Y = f(X) and use the same
equation to predict future values
of Y given X, and X given Y.

 ‘Y’ can be regressed on one or


more X’s simultaneously
 Simple linear regression is
for one X
 Multiple linear regression is
for more than one X’s
 Even though the regression outputs a transfer function Y = f(X), it may not be
the correct transfer function to control ‘Y’, because the level of
correlation between the two may be low

 The main thrust of regression in this step is only to discover
whether a statistically significant relationship exists
between ‘Y’ & a particular ‘X’, i.e., whether it is a
vital ‘X’ or not, by looking at p-values

Key Concepts

 Important:
In the Analyze Phase, you try to understand if there is
statistical relevance between Y and X. If the relevance is
established using metrics from Regression Analysis, we
can move forward with the tests. This aspect of Simple
Linear Regression (SLR) makes it useful as a Statistical
Validation tool in the first activity of the Analyze Phase.
Simple Linear Regression (SLR)
 A simple linear regression equation is nothing but a fitted linear equation
between ‘Y’ & ‘X’ that looks as follows:
Y = A + BX + C

 Where,

Values Description

Y= Dependent variable / output / response

X= Independent variable / input / predictor

A= Intercept of fitted line on Y axis

B= Regression coefficient / Slope of the fitted line

C= Error in the model


Least Squares Method in SLR (Simple Linear
Regression)
 If ‘Y’ & ‘X’ are not perfectly linear (r = ± 1), there could be several lines
that could be fitted

Minitab fits the line which has the least Sum of Squares of Error (SSE), i.e.,
the sum of the squared errors. In a perfect linear relationship the points
would lie on the line, but almost always the data lies off the line. The
distance from each point to the line is the error distance, which is used in
the SSE calculations.
Simple Linear Regression - Example

 Suppose a farmer wishes to predict the relationship between the
amount spent on fertilizers and the annual sales of his crops. He
collects the following data for the last few years & determines his
expected revenue if he spends $8 on fertilizer.

Year    $ spent on fertilizers (X)    Annual sales in $ (Y)
2005    2                             20
2006    3                             25
2007    5                             34
2008    4                             3
2009    11                            40
2010    5                             31
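The least-squares fit for this data can also be computed by hand, using the standard formulas B = Sxy/Sxx for the slope and A = ȳ − B·x̄ for the intercept. A sketch in Python; the r² it produces matches the R-Square value reported in the Excel scatter-chart walkthrough:

```python
# Fertilizer spend (X) and annual sales (Y) from the farmer example
x = [2, 3, 5, 4, 11, 5]
y = [20, 25, 34, 3, 40, 31]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
syy = sum((yi - my) ** 2 for yi in y)

b = sxy / sxx                 # slope (regression coefficient)
a = my - b * mx               # intercept
r2 = sxy ** 2 / (sxx * syy)   # coefficient of determination (R-Square)

print(round(b, 2), round(a, 2), round(r2, 4))  # 2.54 12.8 0.3797
print(round(a + b * 8, 2))                     # predicted sales at $8 spend: 33.12
```

Because r² is only about 0.38, this prediction should not be trusted, which is exactly the conclusion drawn in the interpretation that follows.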
SLR Using Excel
 Step 1: Click on Insert, and Choose the Plain Scatter Chart (Titled Scatter with
only Markers)

Scatter Chart as below


SLR Using Excel, cont..
 Step 2: Right click on the data points in the Scatter Chart and Choose
the Option, “Add Trend line”

 Step 3: Choose the Option, “Linear” and select the boxes titled, “Display
R-Squared value” and “Display equation”
Scatter Chart as below:
SLR Using Excel

Interpretations of Scatter Chart

 The R-Square value (Coefficient of Determination) tells you whether this
model is a good one and can be used.

 The R-Square value here is 0.3797.

 38% of variability in Y is explained by X. The remaining 62% variation is still


unexplained or due to residual factors. Other factors like rain amount
and variability, sunshine, temperatures, seed type, and seed quality
could be tested.

Interpretation:

 The low value of R-Square statistically validates a very poor relationship
between Y and X. Thus, the equation presented cannot be used for
further analysis.

Important: In such a case, go back to the Cause and Effect Matrix and try
studying the relationship between Y and a different X.
Multiple Linear Regression

 If you add another variable X2 to the model, you would be


testing the impact of X1 and X2 on Y. This is known as Multiple
Linear Regression.

 The value of R2 will change due to the introduction of the new


variable.

 When used in Multiple Regression, R2 is adjusted for the number of
X’s in the model; the resulting value is known as R-Square Adjusted.

 If the R-Square Adjusted value is greater than 70%, the model can
be used.
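A common formula for the adjustment is R²adj = 1 − (1 − R²)(n − 1)/(n − p − 1), where n is the number of observations and p the number of X's. A minimal sketch; the sample values below are illustrative only:

```python
def adjusted_r2(r2, n, p):
    """Adjust R-squared for the number of predictors p in a model fit on n points."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Illustrative: R-squared of 0.85 from 20 observations and 3 predictors
print(round(adjusted_r2(0.85, 20, 3), 4))  # 0.8219
```

Note that the adjusted value is always at or below the raw R², since each extra X is penalized.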
Key Concepts
 The residuals or the differences between the actual value and the
predicted value, give an indication of how good the model is

 If the errors (residuals) are small and predictions use X’s that are within
the range of the collected data, the predictions should be fine. Do not
extrapolate the data
 Sum of Squares Total (SST) = Sum of Squares of Regression (SSR)+
Sum of Squares of Error (SSE)
 SSR = SST – SSE, which is why we want SSE to be low
 R2 = SSR/SST

 To get a sense of the error of the fitted model, run replicate points - take
two observations of ‘Y’ at the same ’X’

 Prioritization of X’s can be done through SLR (Simple linear regression


equation) but it requires running separate regressions on ‘Y’ with each
‘X’

 If an ‘X’ does not explain variation in ‘Y’, it should not be explored further
Key Concepts, cont…
 Don’t assume causation :
 Regression equation denotes a relationship only. This in no way means that
a change in one variable causes change in another. If number of schools &
incidents of crime in a city go up together, there may be a relationship, but
no causation. The increase in both factors could be due to third factor –
population.

 In other words, both of them may be dependent variables themselves

 Note: even though the correlation between sneezing and deaths may be very
strong, we cannot assume that sneezing is the cause of somebody’s death
Summary
 Multi-Vari studies:
 Create and interpret multi-vari studies to interpret the
difference between positional, cyclical, and temporal
variation
 Applying sampling plans to investigate the largest sources
of variation

 Simple Linear Correlation And Regression :


 Interpret the correlation coefficient and determine its
statistical significance
 Difference between correlation and causation
 Interpret the linear regression equation and determine its
statistical significance
Hypothesis Testing
Agenda

 Basics

 Tests for Means,


Variances, And
Proportions

 Paired – Comparison
Tests

 Single – Factor Analysis


Of Variance (ANOVA)

 Chi square
Statistical and Practical Significance of Hypothesis Test

 Sometimes differences between a variable and its
hypothesized value are statistically significant but not
practically or economically meaningful
 Example: Based on a hypothesis test, a
company implemented a trading strategy
which was proven to provide statistically
significant returns. It does not mean that we
can guarantee that trading on this strategy
would result in economically meaningful
positive returns. The returns may not be
economically significant after accounting for
taxes, transaction costs, and risks inherent in
the strategy

 Even if we conclude that a strategy’s results are
economically significant, we should examine
whether there is a logical reason to explain the
apparently significant returns offered by the strategy
before actually implementing it

 Thus we can safely conclude that there has to be a
practical significance (economic) study before
implementing any statistically significant finding
Hypothesis

 Null Hypothesis

 As was explained earlier in Session IV, assuming the status quo is the Null Hypothesis. The Null
Hypothesis, representing the basic assumption for any activity or experiment, is
indicated as Ho. For example, assuming that the movie is good, you plan to watch it.
Therefore, the Null Hypothesis in this scenario will be “Movie is good.”

Alternative Hypothesis

 Challenges the Null Hypothesis; it is the converse of the Null Hypothesis. In this case,
the Alternate Hypothesis will be "Movie is not good."

 Important: If null hypothesis is rejected, alternative hypothesis must be right. You


cannot prove a null hypothesis; you can only reject (disprove) it.
Type I Error

 Rejecting a null hypothesis when it is true is called Type I error

 It is also called ‘Producer’s Risk’: when a part that is not
defective is rejected by the QA team, the producer loses
revenue and incurs expense.
 Similarly, reaching a conclusion that coach B is better than
coach A, when they actually have the same level of
efficiency, is making Type I error.
 Example --- You go to a movie. The movie was good, but
you came out and said the movie was not good.
 You rejected the Null Hypothesis when it was actually
true. This is Type 1 error.

Important: Significance level, or Alpha, is the chance of committing


a Type 1 error and is expressed in the form of percentage. The
popular value of Alpha is 0.05 or 5%.
Type II Error

 Accepting a null hypothesis when it is false is called Type II error


 It is also called ‘Consumer’s Risk’ because of the case
where a part is defective and is also accepted by QA
team, thereby letting the consumer find the problem and
‘lose’ the use of their purchase
 Minimizing Type II error requires acceptance criteria to be
very strict e.g., Pacemaker

 Example: You go to a movie. The movie was not good, but you
came out and said the movie was good. Therefore, you did not
reject the Null Hypothesis when it was actually wrong. This is Type
II Error.

 β is the chance of committing a Type II Error. Popularly, the value


for β is 20% or 0.2

Important--Any good experiment should have as low a value for β


as possible.
Type I and Type II Errors –
Key Concepts
 Probability of making one type of error can be reduced only when
we are willing to accept a higher probability of making other type of
error
 Example: Increasing probability of shipping an error-free
pacemaker (decreasing β or consumer risk) increases the
probability of having an event with producer risk (Type I error)

 Just as a true null hypothesis may be rejected by mistake (Type I error), a
false null hypothesis may be accepted by mistake, which is a Type II error

 Typically, ‘α’ is set at 0.05; 0.05 means the risk of you committing a
Type 1 Error will be 1 out of 20 experiments

 Teams must decide which type of error should be less & set ‘α’ and
‘β’ accordingly
Power of Test
 The power of a test is the probability of correctly rejecting the null
hypothesis when it is false

 Power of a test = 1 – β (Type II error). The probability of not committing a


Type II error is called the power of a hypothesis test

 The higher the power of the test, the better it is for purposes of hypothesis
testing. Given a choice of tests, the one with the highest power should
be preferred

 The only way to decrease the probability of a Type II error given the
significance level (probability of Type I error) is to increase the sample
size
Test Criteria of Hypothesis Test
 ‘α’ is called the significance level for a hypothesis testing

 ‘1-α’ is called the confidence level for a hypothesis testing


Determinants of Sample Size - Continuous
Data
 The sample size is determined by answering 3 questions:
 How much variation is present in the population? (σ)
 In what interval does the true population mean need to be
estimated? (±Δ)
 How much representation error is allowed in the sample? (α)

 Sample size formula for Continuous Data:
n = [(Z × σ) / Δ]², where Z is the (1 − α/2) percentile of the standard
normal distribution and Δ is the tolerance
Standard Sample Size Formula -
Continuous Data
 Usually, the value of α is taken as 5%

 Z97.5 = 1.96 (it is derived from the Z table)

 Thus the standardized sample size formula can be written as n = [(1.96 × σ) / Δ]²

 Example: We know that the population standard deviation (from past
data) for the time to resolve customer problems is 30 hrs. Now, we want
to collect a sample that can estimate the average problem resolution
time within ± 5 hrs tolerance with 99% confidence. What should be the
sample size?
 Δ = 5, σ = 30, and α = 0.01
 From the Z table Z99.5 = 2.576; therefore, sample size = [(2.576*30)/5]2 =
238.9 = 239
Standard Sample Size Formula -
Discrete Data
 Extending the same logic, we can find out the sample size
required while dealing with a discrete population

 If the average population proportion non-defective is ‘p’, the
population variance can be calculated as σ² = p(1 − p)

 Example: We know that the non-defective population
proportion (from past data) for pen manufacturing is 80%. Now,
we want to draw a sample that can estimate the proportion of
compliant pens within ± 5% with an alpha of 5%. What should be
the sample size?
 Δ = 0.05, σ 2= 0.8 (1-0.8), and α = 0.05
 From the Z table Z97.5 = 1.96; therefore, sample size = (1.96/0.05)2
*0.8*0.2 = 245.86 = 246
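Both sample-size formulas can be evaluated with Python's standard library. A sketch using NormalDist for the Z-values; note it uses the exact percentile (2.576 for 99% confidence) rather than a rounded table value, so the continuous-data answer can differ slightly from a hand calculation:

```python
from math import ceil
from statistics import NormalDist

def n_continuous(sigma, delta, alpha):
    """n = (Z * sigma / delta)^2, rounded up, for estimating a mean."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return ceil((z * sigma / delta) ** 2)

def n_proportion(p, delta, alpha):
    """n = (Z / delta)^2 * p * (1 - p), rounded up, for estimating a proportion."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return ceil((z / delta) ** 2 * p * (1 - p))

print(n_continuous(sigma=30, delta=5, alpha=0.01))  # resolution-time example: 239
print(n_proportion(p=0.8, delta=0.05, alpha=0.05))  # pen example: 246
```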
Hypothesis
Testing
Roadmap
 Basic determinants of accepting or rejecting a hypothesis remain the
same; however, various tests are used depending upon the type of data
Hypothesis Test for Means (Theoretical)

Z-test (σ known)
 Null hypothesis: Average height of Indian males is 165 cm (µ0)
 Alternative hypothesis: Average height of Indian males ≠ 165 cm
 In notation, H0: µ = µ0 against H1: µ ≠ µ0
 On the basis of a sample of size n = 117, sample average (x-bar) = 164.5 cm
 The population SD is known; σ = 5.2
 Compute z = (x-bar - µ0) / √(σ2/n) = (164.5 – 165) / √(5.22/117) = -1.04, so |z| = 1.04
 Reject H0 at level of significance α if |z| > zα
 Since z0.05 = 1.64, the null hypothesis is not rejected at the 5% level of significance
Results: Thus we can conclude, based on the sample collected, that the average height of
Indian males is 165 cm

t-test (σ unknown)
 Null hypothesis: Average height of Indian males is 165 cm (µ0)
 Alternative hypothesis: Average height of Indian males ≠ 165 cm
 In notation, H0: µ = µ0 against H1: µ ≠ µ0
 On the basis of a sample of size n = 25, sample average (x-bar) = 164.5 cm
 The population SD is unknown; however, it is estimated from the sample SD; s = 5.0
 Compute t = (x-bar - µ0) / √(s2/n) = (164.5 – 165) / √(52/25) = -0.5, so |t| = 0.5
 Reject H0 at level of significance α if |t| > tn-1, α
 Since t24, 0.05 = 1.711, the null hypothesis is not rejected at the 5% level of significance
Results: Thus we can conclude, based on the sample collected, that the average height of
Indian males is 165 cm
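The two test statistics above follow the same pattern and can be recomputed in a few lines. A sketch that computes the statistics only; the decisions still require the critical values (1.64 and 1.711) from the Z and t tables:

```python
from math import sqrt

def z_stat(xbar, mu0, sigma, n):
    """One-sample z statistic when the population SD sigma is known."""
    return (xbar - mu0) / sqrt(sigma ** 2 / n)

def t_stat(xbar, mu0, s, n):
    """One-sample t statistic when sigma is estimated by the sample SD s."""
    return (xbar - mu0) / sqrt(s ** 2 / n)

z = z_stat(xbar=164.5, mu0=165, sigma=5.2, n=117)
t = t_stat(xbar=164.5, mu0=165, s=5.0, n=25)
print(round(abs(z), 2), round(abs(t), 2))  # 1.04 0.5
```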
Hypothesis Test for Variance and
Proportions

Chi-Square test (Variance)
 As per the example defined in Session V, the null hypothesis is
 H0: Proportion of wins in Australia or abroad is independent of the country
played against
 H1: Proportion of wins in Australia or abroad is dependent on the country
played against
 χ² Critical = 6.251 & χ² Calculated = 1.36
Results: Since the calculated value is less than the critical value, the proportion of
wins of the Australian hockey team is independent of the country played or place

Test of Population Proportion (p)
 Null hypothesis: Proportion of smokers among males in a place named R is 0.10 (p0)
 Alternative hypothesis: Proportion of smokers among males in R is different than 0.10
 In notation, H0: p = p0 against H1: p ≠ p0
 Among n = 150 adult males interviewed, 23 were found smokers. Thus, sample
proportion p = 23/150 = 0.153
 Compute test statistic z = (p - p0) / √(p0(1 - p0)/n) = (0.153 – 0.10) / √(0.10 × 0.90/150) = 2.18
 Reject H0 at level of significance α if z > zα
 Since z0.05 = 1.64, the null hypothesis is rejected at the 5% level of significance in
favor of the alternative
Results: Thus we can conclude based on the sample collected that the proportion of smokers
in R is greater than 0.10.
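The proportion test statistic behind the rejection decision above can be recomputed directly; a sketch for the smoker example:

```python
from math import sqrt

def prop_z(successes, n, p0):
    """z statistic for a one-sample test of a population proportion."""
    p_hat = successes / n
    return (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

z = prop_z(successes=23, n=150, p0=0.10)
print(round(z, 2))  # 2.18 > 1.64, so H0: p = 0.10 is rejected at the 5% level
```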
Paired-Comparison Tests
Comparison of Means of Two Processes

 Two-sample tests are performed to compare two samples &


discover if they belong to different populations

 In benchmarking, we often want to compare the existing


process with a benchmarked process
Paired Comparison Hypothesis Test for
Means (Theoretical)
 Null hypothesis: Average heights of American and British males are equal

 Alternative: Not equal

 In notation, H0: µ1 = µ2 against H1: µ1≠µ2

 Two samples of sizes n1 = 125 and n2 = 110 are taken from the two populations

 x1-bar = 167.3, x2-bar = 165.8, s1 = 4.2, s2 = 5.0 are the sample means and SDs
respectively

 Reject H0 at level of significance α if |Computed t| > tn1+n2-2, α/2

 Since t233, 0.025 = 1.96, the null hypothesis is rejected at the 5% level of significance.

Results: Thus the average heights of American and British males are significantly different.
Paired-Comparison Hypothesis Test for
Variance – F-Test example
 Susan is examining the earnings/share of two companies. She is of the opinion that the
earnings of Company A are more volatile than those of Company B. She has been
obtaining earnings data for the past 31 years for Company A, and for the past 41 years
for Company B. She finds that the sample standard deviation of Company A’s earnings
is $4.40 and that of Company B’s earnings is $3.90. Determine whether the earnings of
Company A have a greater standard deviation than those of Company B at 5% level of
significance.

 Solution:
H0 : σA2= σB2 = the variance of Company A’s earnings is equal to the variance of Company
B’s earning.
 Ha : σA2 ≠ σB2 = the variance of Company A’s earnings is different
 σA2= variance of Company A’s earnings.
 σB2= variance of Company B’s earnings.
 Note: σA > σB . In calculating the F-test statistic, we always put the greater variance in
the numerator.
Hypothesis Test for Equality of Variance –
F-Test Example
 dfA (degrees of freedom of A)= 31 - 1 = 30

 dfB (degrees of freedom of B)= 41 - 1 = 40

 The critical value from F-table equals 1.74. We will reject the null
hypothesis if the F-test statistic is greater than 1.74

 Calculation of F-test statistic:


F= (SA2/SB2) = 4.402/3.902 = 1.273

 Results: The F-test statistic (1.273) is not greater than the critical
value (1.74). Therefore, at 5% significance level we fail to reject the
null hypothesis
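The F statistic here is simply the ratio of the two sample variances, with the larger variance in the numerator. A sketch of the calculation; the critical value 1.74 still comes from the F table:

```python
def f_stat(s_larger, s_smaller):
    """F statistic: ratio of sample variances, larger variance in the numerator."""
    return s_larger ** 2 / s_smaller ** 2

f = f_stat(4.40, 3.90)
print(round(f, 3))  # 1.273 < 1.74, so we fail to reject H0 at the 5% level
```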
Hypothesis Tests (Practical)

 f-test for independent groups (tests for variances between two groups)

We are inspecting two groups of data for significant differences in their
variation. The idea is to conclude if there is a significant amount of
difference. If there is statistical evidence of variation, we can conclude a
possibility of a Special Cause of Variation.

Example: A restaurant wants to explore the recent overuse of avocados.
They suspect that there is a difference between two chefs and how much
avocado is being used by them in the salads. Data as below, in ounces:

Chef 1    Chef 2
4         4.2
4.5       4.5
5         7.2
5.2       6.1
5.3       8.9
6.1       5.2
 How to do a F-Test?
 Use MS-Excel
 Click on Data
 Click on Data Analysis (Please
follow facilitator instruction on how
to install Add-ins)
 Select F-Test Two-Sample for
Variances

F-Test  In Variable 1 Range, select the


data set for Group A, and select
data set for Group B in Variable 2
Range
 Click Ok

 Screenshot for the F-Test window in the


next page
F-Test

F-Test Assumptions

 Null Hypothesis – There is no significant statistical difference
between the variances of the two groups, thus concluding
that any variation could be because of chance (Common
Cause of Variation)

 Alternate Hypothesis – There is a significant statistical
difference between the variances of the two groups, thus
concluding variations could be because of assignable causes
as well (Special Cause of Variation)
F-Test Interpretations
 From the Excel result sheet, the p-value is 0.03.
 If p-value is low, null must go. Statistically, if p-value is < 0.05, Null must be rejected.
 Thus, we reject the Null Hypothesis with 97% confidence.
 Thus, we reject the fact that variation could only be due to Common Cause of
Variation.
 Thus, we infer from the test that there could be Assignable Causes of Variation (Special
Causes of Variation).

F-Test: Two-Sample for Variances

                     Variable 1    Variable 2
Mean                 6.01666667    5.01666667
Variance             3.19766667    0.51766667
Observations         6             6
df                   5             5
F                    6.17707663
P(F<=f) one-tail     0.0336523
F Critical one-tail  5.05032906
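Excel's F value can be reproduced from the raw chef data with the standard library; a sketch using `statistics.variance`, which applies the sample (n − 1) denominator that Excel also uses:

```python
from statistics import variance  # sample variance (n - 1 denominator)

chef1 = [4, 4.5, 5, 5.2, 5.3, 6.1]
chef2 = [4.2, 4.5, 7.2, 6.1, 8.9, 5.2]

v1, v2 = variance(chef1), variance(chef2)
f = max(v1, v2) / min(v1, v2)  # larger variance in the numerator
print(round(f, 2))  # 6.18, matching Excel's F of 6.177
```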
Hypothesis Tests (Practical)

t-test for independent groups (tests for means between two groups)

 Example: We are inspecting two groups of data, as shown below, for
significant differences in their means. The idea is to conclude if there is a
significant amount of difference. If there is statistical evidence of
variation, we can conclude a possibility of a Special Cause of Variation.

Group A    Group B
4          4.2
4.5        4.5
5          7.2
5.2        6.1
5.3        8.9
6.1        5.2
 How to do an Independent 2-Sample t-
test?

 Use MS-Excel
 Click on Data
 Click on Data Analysis (Please follow
facilitator instruction on how to install
Add-ins)
2-Sample  Select 2-Sample Independent t-test
assuming unequal variances
t-Test  In Variable 1 Range, select the data
set for the group with the larger
mean, and select data set for the
other group in Variable 2 Range
 Keep “Hypothesized Mean
Difference” as 0
 Click Ok
2-Sample Independent t-Test Assumptions

 Null Hypothesis – There is no significant
statistical difference between the means of
the two groups, thus concluding that any
variation could be because of chance
(Common Cause of Variation)

 Alternate Hypothesis – There is a significant
statistical difference between the means of
the two groups, thus concluding variations
could be because of assignable causes as
well (Special Cause of Variation)

 Null Hypothesis: Mean of group A = Mean of group B

 Alternate Hypothesis: Mean of group A ≠ Mean of
group B

 Important: The alternate hypothesis tests two
conditions, Mean of A < Mean of B and Mean of A >
Mean of B. Thus a two-tailed probability needs to be
used.
 2-Tailed probability versus 1-tailed probability

 If the alternate hypothesis tests in more than one direction,
either less or more, use the 2-tailed probability value from the test.
 If the alternate hypothesis is unidirectional, use the 1-tailed
probability value from the test.

Example of Alternate Hypothesis

 Mean of A ≠ Mean of B: use the 2-tailed probability
 Mean of A > Mean of B: use the 1-tailed probability
2-Sample Independent t-Test Results and Interpretations

Result
t-Test: Two-Sample Assuming Unequal Variances

Items                          Variable 1    Variable 2
Mean                           6.01666667    5.01666667
Variance                       3.19766667    0.51766667
Observations                   6             6
Hypothesized Mean Difference   0
df                             7
t Stat                         1.27079862
P(T<=t) one-tail               0.12220055
t Critical one-tail            1.89457861
P(T<=t) two-tail               0.24440109
t Critical two-tail            2.36462425
Interpretations
As the 2-tailed probability is being tested: the p-value of the 2-tailed test is 0.24,
which is greater than 0.05, so you fail to reject the null hypothesis. That means
you fail to reject the claim that there is no significant statistical difference
between the two means.
Thus you infer that the two groups are statistically the same.
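Excel's "unequal variances" t Stat is the Welch statistic, and it can be reproduced from the raw data; a sketch using the two groups above:

```python
from math import sqrt
from statistics import mean, variance

group_a = [4, 4.5, 5, 5.2, 5.3, 6.1]
group_b = [4.2, 4.5, 7.2, 6.1, 8.9, 5.2]

def welch_t(a, b):
    """Two-sample t statistic assuming unequal variances (Welch)."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(group_b, group_a)  # group with the larger mean first, as in the Excel setup
print(round(t, 2))  # 1.27, matching Excel's t Stat of 1.2708
```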
Paired t-Test
 Paired t-test is one of the most powerful tests from the t-test
family.

 Easiest way to remember Paired t-test is Before test – After test.

 Example: A group of students score X in SSGB before taking the


Training program. After the training program, the scores are
recorded and compared to X.

 Question to be answered: Is there a statistical difference


between the two sets of scores?

 Inference: The inference could be: If there is a significant


difference, the training was effective.

 Important: The Paired t-test interpretation shows you the


effectiveness of the Improvement measures. This is the main
reason why Paired t-test is often used in the Improve Stage.
ANOVA (Comparison of More Than Two Means)

 We learned earlier that we can use a t-test only for 1-sample
& 2-sample tests for comparing the difference between two
means

 If we want to compare the means of more than two samples,
we use ANOVA

 ANOVA is ANalysis Of VAriance

 However, ANOVA does not tell us which mean is best; it only
tells us that all the sample means are not equal

 Based upon ANOVA output, the most likely factors with
statistical significance can be further tested
ANOVA - Example

 Let's consider the takeaway food delivery time of three different outlets

 Is there any evidence that the averages for the 3 outlets are not equal? In other words, can one outlet be the benchmark (improvement target) for the others?

 Null hypothesis is that µ1 = µ2 = µ3

 If we end up rejecting the null hypothesis, it would mean that there is at least one outlet that differs in its average delivery time
Using Minitab for ANOVA

STAT > ANOVA > 1 WAY

Outlet     Delivery Time
Outlet 1   46
Outlet 1   50
Outlet 1   49
Outlet 1   47
Outlet 2   50
Outlet 2   48
Outlet 2   36
Outlet 2   50
Outlet 2   50
Outlet 2   62
Outlet 2   45
Outlet 2   47
Outlet 2   51
Outlet 2   44
Outlet 3   49
Outlet 3   48
Outlet 3   39
Outlet 3   49
Outlet 3   34
Outlet 3   33
Outlet 3   57
Outlet 3   48
Outlet 3   47
Outlet 3   39

Stacked Data
Using Minitab for ANOVA, cont..
 Feeding the same data into Minitab, we get the following output
ANOVA using Excel
Interpreting Minitab Results

 Since the p-value is more than 0.05 (Minitab default), we fail to reject the null hypothesis that there is no significant difference between the mean delivery times of the 3 outlets

 Looking at the confidence intervals, you will find that the intervals overlap considerably, which means that there is little that separates the means of the three samples

 The previous example was a one-way ANOVA, where there was only one factor to be benchmarked, i.e., the delivery outlet

 If there are two such factors, we may use two-way ANOVA
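The same one-way ANOVA can be sketched by hand in Python using the outlet data above; the F critical value of roughly 3.47 for (2, 21) degrees of freedom comes from standard F tables:

```python
# One-way ANOVA on the outlet delivery times above, computed by hand
groups = [
    [46, 50, 49, 47],                          # Outlet 1
    [50, 48, 36, 50, 50, 62, 45, 47, 51, 44],  # Outlet 2
    [49, 48, 39, 49, 34, 33, 57, 48, 47, 39],  # Outlet 3
]

all_obs = [x for g in groups for x in g]
grand_mean = sum(all_obs) / len(all_obs)

# Between-group (factor) and within-group (error) sums of squares
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

df_between = len(groups) - 1            # 2
df_within = len(all_obs) - len(groups)  # 21
f_stat = (ss_between / df_between) / (ss_within / df_within)

print(round(f_stat, 2))  # well below the F(2, 21) critical value of ~3.47
```

Since the F statistic is far below the critical value, we fail to reject the null hypothesis, matching the Minitab interpretation.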
Chi – Square Test

 Chi – Square test is used for hypothesis testing and was


explained earlier in Session V
Hypothesis Tests -- Summary points

 Null Hypothesis is the basic assumption, while the alternate hypothesis is the converse of the Null.

 Type I error is rejecting the Null when it was actually true. Type II error is failing to reject the Null when it was actually false.

 The p-value is the chance of committing a Type I error if the Null is rejected. The threshold level for the p-value is 0.05.

 If p < 0.05, the Null must be rejected; otherwise, you fail to reject the Null (If p is low, Null must go)

 Use Paired t-test in Before-After conditions

 Use ANOVA for more than 2 groups

 Use Z-test when you know the standard deviation of the population
Sessions Summary

 Multi-Vari Studies:
 Create and interpret multi-vari studies to interpret the difference between positional, cyclical, and temporal variation
 Applying sampling plans to investigate the largest sources of variation

 Simple Linear Correlation and Regression:
 Interpret the correlation coefficient and determine its statistical significance
 Difference between correlation and causation
 Interpret the linear regression equation and determine its statistical significance

 Hypothesis Testing:
 Basics: power of test, significance (and p-value), confidence level
 Tests for means, variances, and proportions
 Paired-comparison tests
 Single-factor analysis of variance (ANOVA)
 Chi square
1. Of the various statistical analysis tools
available, which would be most likely to
show a plot of all readings taken:

A. Xbar-R charts
Quiz - 1 B. Multi-vari charts
C. ANOVA
D. Chi Square
1. A null hypothesis states that a process
has not improved as a result of some
modifications. The type II error is to
conclude that:

A. We have failed to reject the null


Quiz - 2 hypothesis (Ho) when it is true
B. We have failed to reject the null
hypothesis (Ho) when it is false
C. We have rejected the null hypothesis
D. We have made a correct decision with
alpha probability
1. The test used for testing significance in
an analysis of variance table is the:

A. Z-test
Quiz - 3 B. t-test
C. f-test
D. Chi-square test
1. One use for a student t-test is to
determine whether or not differences
exists in:

Quiz - 4 A. Variability
B. Confidence intervals
C. Correlation coefficients
D. Averages
1. When making inferences about a
population variance based on a single
sample from that population, what
distribution is used?

Quiz - 5 A. Chi-square test


B. Normal
C. t-test
D. f-test
Quiz - 1

1. Of the various statistical analysis tools available, which would


be most likely to show a plot of all readings taken:

A. Xbar-R charts
B. Multi-vari charts
C. ANOVA
D. Chi Square

Correct Answer: B

Out of the four answer choices, only the Xbar-R chart and the multi-vari chart are
plotted. The Xbar-R chart plots averaged data and ranges. The multi-vari
chart normally contains nearly all of the readings taken.
Quiz - 2

1. A null hypothesis states that a process has not improved as a result


of some modifications. The type II error is to conclude that:

A. We have failed to reject the null hypothesis (Ho) when it is true


B. We have failed to reject the null hypothesis (Ho) when it is false
C. We have rejected the null hypothesis
D. We have made a correct decision with alpha probability

Correct Answer: B

A type II error means that we have failed to reject the null


hypothesis (HO) when it is false
Quiz - 3

1. The test used for testing significance in an analysis of variance


table is the:

A. Z-test
B. t-test
C. f-test
D. Chi-square test

Correct Answer: C

The appropriate ANOVA test is the F-test. ANOVA is a test of the


equality of means
Quiz - 4

1. One use for a student t-test is to determine whether or not


differences exists in:

A. Variability
B. Confidence intervals
C. Correlation coefficients
D. Averages

Correct Answer: D

The student t-test is often used to make inferences about


population averages or means
Quiz - 5

1. When making inferences about a population variance based on a


single sample from that population, what distribution is used?

A. Chi-square test
B. Normal
C. t-test
D. f-test

Correct Answer: A

The chi-square distribution is used to compare a sample variance
with a hypothesized population variance.
Session VII:

Improve & Control


Agenda

 Introduction to Improve & Control Phase

 Lesson I: Piloting
 Lesson II: Design of Experiments (DOE)
 Lesson III: Statistical Process Control
(SPC)
 Lesson IV: Implement and Validate
Solution
 Lesson V: Control Plan
Introduction to Improve and Control

Define: Identify the CTQs, Key Process Output Variables (KPOVs) or Y for focus

Measure: Collect baseline data for Y, and also understand what could be the Key Process Input Variables (KPIVs) or X impacting Y

Analyze: Validate the impact of X on Y, and understand the reasons for variation in X

Improve: Identify possible improvement actions for increasing the sigma level of X and validating those improvements through hypothesis testing

Control: Full-scale implementation of the improvement action plan; set up controls to monitor the system so that gains are sustained
Session VII, Lesson 1

Piloting
Piloting

 By the end of the Analyze Phase, you will know the reasons causing X to vary, and
you would have prioritized them (using Pareto Charts or DOE for example) and
statistically validated them (Hypothesis tests).

Example: Let us assume our key output variable is Average Handle Time (Y) during
service calls, which impacts the financial fortunes of the company.

 With analysis, we found that Hold Time is the key input variable (X) for
Average Handle Time. Controlling the Hold Time would mean impacting
Average Handle Time.

 In the Analyze Phase, we will discover reasons for variation in the Hold
Time.

 The reasons could have been prioritized with the help of Pareto Charts
and, moving to the Improve Stage, we know what are the main factors
influencing changes in X.
Piloting

Example: Less training being provided to employees results in high Hold Time.

How to take care of the "Less training" issue?

 Solution 1 --- Call all new employees for a refresher training program
 Solution 2 --- Update the company intranet with all information including changes
 Solution 3 --- Instruct the team's supervisor to conduct regular briefings during team meetings
 Solution 4 --- Ensure availability of the team supervisor to employees

Important: Each of these standalone solutions could be implemented.

For this example, let us assume the company implemented Solutions 1, 2, 3, and 4.
Piloting

 At the moment, these solutions are brainstormed 'only on paper' solutions. The real-world effectiveness of these solutions needs to be seen.

 To check the effectiveness of these solutions, we do a test run on one team to see how well the measures work.

 We measure data for a certain period of time (15 days to a month) to gauge effectiveness. Verify the amount of time by checking for an appropriate sample size for hypothesis testing that the change has been effective.

 If effective (as shown through hypothesis testing), expand the project enterprise-wide.

Important: This is known as Piloting. It is often done to alleviate the risk of an improvement failing. Example: had the improvement effort failed, only one team would have been at risk and not the entire business operation.
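As a sketch of the sample-size check mentioned above, with hypothetical numbers: detecting a 15-second drop in mean Hold Time, assuming a standard deviation of 30 seconds, α = 0.05 two-sided, and 80% power, using the normal-approximation formula:

```python
from math import ceil

# Normal quantiles from standard tables (alpha = 0.05 two-sided, power = 0.80)
z_alpha = 1.960  # z for 0.975
z_beta = 0.842   # z for 0.80

sigma = 30.0  # assumed std. dev. of hold-time measurements, seconds
delta = 15.0  # smallest drop in mean hold time worth detecting, seconds

# Normal-approximation sample size: n = ((z_a + z_b) * sigma / delta)^2
n = ceil(((z_alpha + z_beta) * sigma / delta) ** 2)
print(n)  # calls to measure during the pilot before testing for improvement
```

So the pilot team would need around 32 measured calls before the before/after comparison is worth running under these assumptions.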
Piloting

 Effectiveness of a solution is to be checked using a Paired t-test.
 If the solution is effective, the same solution is to be phased into the entire enterprise.
 All results and challenges are to be documented in the Project Charter.
 Also, the effectiveness of the solution enterprise-wide is to be checked using a Paired t-test at each phase.

Important: If the solution is not statistically proven effective, the team may need to brainstorm another set of options to solve the problem.
Design of Experiments (DOE)
An Introduction

 Designed experiments are a series of planned and scientific experiments that test various input variables and their eventual impact on the output variable.

 Design of Experiments can be used as a one-stop method for analyzing all influencing factors, to arrive at a robust and successful model.

 Designed experiments are preferred over OFAT (One Factor at a Time) experiments because they don't miss interactions (explained in this chapter).

 With techniques like Blocking, you can eliminate experimental mistakes. However, trials should be randomized to avoid concluding Factor A is significant when time or sequence may have influence over the response's results.

 With techniques like Replication, you can test the variability given the same conditions to improve the robustness of the model.
Basic Terms – 1
Example: Determine factors related to consistent plastic part hardness, such as
mold temperature

 Dependent Variables: Responses that vary as a result of changes made in the


independent variable. Example: plastic part’s hardness.
 Response: An outcome of an experimental treatment that varies as
changes are made to levels and factors.

 Independent Variables: Factors that are intentionally varied by the


experimenter. In the above example, mold temperature will be varied in the
experiment.
 Factors: Factors (independent variables) are the items changed during
an experiment in order to see their impact on the output
 Factors may be quantitative or qualitative

 Levels: Levels are the values (or conditions) of the factors that are tested
during the experiment
 most experiments test factors at 2 or 3 levels
Basic Terms – 2
 Treatment: A certain combination of factor levels whose effect on the
response variable is of interest.

Example --- Output: Hardness of the plastic compound (Hardness is the response). Input: Oven temperature and type of raw material (Temperature and type are Factors).

Temperature can be varied at two levels: 700 degrees and 900 degrees; it is thus a quantitative factor. Raw material type is an attribute, used as plastic with filler and plastic without filler; it is a qualitative factor.

Thus, the number of levels for each factor in this experiment is 2.

 Error: The variation in experimental units that have been exposed to the same treatment is attributed to experimental error. This variability is due to uncontrollable factors.

 Experimental unit: the quantity of material (in manufacturing) or the number served (in a service system) to which one trial of a single treatment is applied.
Basic Terms – 3
 Repetition: Running several samples during one experimental setup without
change in the setting (short-term variability)

 Replication: Repeating experimental trials after running other trial setups


(long-term variability)

 Repetition & Replication: Provide an estimate of the experimental error (CCV


and measurement error)
 This estimate will be used to determine whether observed differences
are statistically significant

 Example:
 Three (or three groups of) parts are manufactured during one trial at
700 deg using plastic with fillers. This is repetition.
 After making parts using plastic without fillers, you come back and
make more parts at 700 deg using plastic with fillers. This is replication.
 With combined analysis, experimental error can be determined and
will tell you if the differences in readings are statistically significant or
not.
DOE – A Plastic Molding Example

Objective: To achieve uniform Part Hardness at a particular target value


(i.e., reducing the variations)
Components of DOE in the Molding
Example
Full Factorial Experiment –Example
 Based on the understanding of various terms in the previous example, let’s
consider another example for full factorial experiment

 A full factorial experiment is any experiment in which all possible


combinations of factor levels are tested.

 This two-way heat treating experiment is a simple example of a full factorial


design:

 Note: This above simplified example is used to illustrate the concepts of main
factor and interaction effects.
Full Factorial Experiment –Example
 After conducting the experiment with two factors, two levels and two
repetitions we get the values as outlined in the boxes below for y1, y2 etc.

 An analysis of the means will help us determine:


 How a change in draw temperature creates a difference in the
average part hardness (Main Effect)
 How a change in oven time creates a difference in the average part
hardness (Main Effect)
 How interaction between temperature and time affects the average
part hardness (Interaction effect)
Main Effect
Interaction Effect

 Results: The interaction plot shows that we should select low temperature
and high oven time to achieve the highest desired output of hardness. The
parallel lines indicate the output if no interactions occur between the main
effects.
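The main-effect and interaction calculations can be sketched for a 2² design with hypothetical average hardness values, chosen to be consistent with the plot's conclusion that low temperature with high oven time is best:

```python
# Average part hardness for each treatment, keyed by (temperature, oven_time);
# levels coded as "lo"/"hi", values are hypothetical illustration data
y = {
    ("lo", "lo"): 60, ("lo", "hi"): 72,
    ("hi", "lo"): 52, ("hi", "hi"): 54,
}

# Main effect = (average response at high level) - (average at low level)
temp_effect = (y[("hi", "lo")] + y[("hi", "hi")]) / 2 \
            - (y[("lo", "lo")] + y[("lo", "hi")]) / 2
time_effect = (y[("lo", "hi")] + y[("hi", "hi")]) / 2 \
            - (y[("lo", "lo")] + y[("hi", "lo")]) / 2

# Interaction = half the difference between the time effect at high vs. low temp;
# a nonzero value means the effect of time depends on the temperature setting
interaction = ((y[("hi", "hi")] - y[("hi", "lo")])
               - (y[("lo", "hi")] - y[("lo", "lo")])) / 2

print(temp_effect, time_effect, interaction)  # -13.0 7.0 -5.0
```

Here the negative temperature effect and positive time effect point to the low-temperature, high-time setting; the nonzero interaction confirms the lines in the interaction plot would not be parallel.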
Design of Experiments - Runs

 Number of experiments in a DOE setting is known as Runs

 Full factorial experiment without replication on 5 factors and 2 levels
 Number of runs = 2^5 = 32
 Full factorial experiment with 1 replication on 5 factors and 2 levels
 Number of runs = 32 + 32 = 64

 Half fractional factorial experiment without replication on 5 factors and 2 levels
 Number of runs = 2^(5-1) = 16
 Half fractional factorial experiment with 1 replication on 5 factors and 2 levels
 Number of runs = 16 + 16 = 32

Do you see the difference between the Full Factorial Experiment and the
Half Fractional Factorial Experiment in terms of number of runs?
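The run counts above follow directly from the levels^(factors − fraction) formula; a quick sketch:

```python
def runs(factors, levels=2, fraction=0, replications=0):
    """Number of DOE runs: levels^(factors - fraction), times (1 + replications)."""
    return levels ** (factors - fraction) * (1 + replications)

print(runs(5))                              # full factorial: 2^5 = 32
print(runs(5, replications=1))              # full factorial + 1 replication = 64
print(runs(5, fraction=1))                  # half fraction: 2^(5-1) = 16
print(runs(5, fraction=1, replications=1))  # half fraction + 1 replication = 32
```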
Design of Experiments --- Which Experimental Method?

 Choose Full Factorial Experiments as they test all factors at all levels.

 Choose Half Fractional Factorial Experiments if you wish to save time.

 Choose Screening Designs followed by Response Surface Designs for a highly optimized and robust model.

 Choose Taguchi and Plackett-Burman Designs for a very complex model.
Design of Experiments - Summary

Basics of Design of Experiments
 Factor, Level, and Response
 Blocking, Randomization, and Replication
 Main and Interaction Effect
 Different types of Designs
Session VII, Lesson 3

Statistical Process Control (SPC)

Agenda
 Objectives and benefits
 Rational sub-grouping
 Selection and application of control charts
 Analysis of control charts
Objectives & Benefits of SPC

Objectives of SPC:

 SPC was developed by Walter A. Shewhart in 1924


 Aids visual monitoring and controlling by putting statistical measures
around the process outputs/input variables e.g. Control Chart
 Depends heavily on data collection

Benefits of SPC:

 Separates special & common cause variability


 Recognizes unexpected changes in process output quickly
 Identifies stable zone for calculating process capability
 Provides useful external information for the continuous improvement
 Monitor process in real time
 Can also be used in Measure phase to check data stability
Rational Sub-Grouping
 The rational subgroup concept means selecting subgroups or samples such that if
assignable causes are present, chance for differences between subgroups will be
maximized, while chance for difference within a subgroup will be minimized

 Two general approaches for constructing rational subgroups:


 Sample consists of units produced at the same time - consecutive units
 Primary purpose is to detect process shifts

 Sample consists of units that are representative of all units produced since
last sample - random sample of all process output over sampling interval
 Often used to make decisions about acceptance of product
 Effective at detecting shifts to out-of-control state and back into in-
control state between samples
 Care must be taken because we can often make any process appear
to be in statistical control just by stretching the interval between
observations in the sample
Basics of Control Charts

 Control charts are useful for tracking


process statistics over time and detecting
the presence of special causes

 A process is in control when most of the


points fall within the bounds of the control
limits, and the points do not display any
nonrandom patterns
Setting the Control Limits
 A standard control chart uses control limits at three standard deviations of the
mean (σmean) from the data’s grand average (X-double bar, or average of the
sample averages or μ). The probability of an out-of-control point when the process
has not changed is only 0.27%

 If control limits are set at two standard deviations, it increases the chance of type I
error (rejecting good part)

 If control limits are set at four standard deviations, it increases the chance of a type
II error (accepting bad part)

 A control chart should keep in mind both Type I & Type II errors

Important--The 3σ limits were set by Walter Shewhart because it's more likely that the
process needs correction immediately if it goes beyond these limits.
Purpose of Control Limits

 Common causes are inherent in the process where as special causes are a
significant difference in the process and should be investigated and corrected
if possible.
 Think of it like the ECG of a heart patient where a straight line indicates that the
patient has expired. A system with no variation at all is a dead system. Some
common cause variation should always be present.
 On the other hand, if there are large spikes seen in ECG, there is some special
cause & that must be corrected too. Important--Special Causes of Variation
also happen due to trends, shifts, and oscillations.
Most Common Rules for Control Chart Analysis

 These are a few of the easier rules to remember:

 An Out-of-Control (OOC) condition is indicated if one of the following is true:

 One point outside the Control Limits (either above the UCL or below the LCL); p(f) = 0.27%
 Eight consecutive points above the center line (CL) or consecutively below the CL; p(f) = (0.5)^8 = 0.39%
 Six to eight points consecutively increasing or consecutively decreasing; p(f) = (0.5)^6 to (0.5)^8 = 1.6% to 0.39%
 Two out of three points within 1 σmean of either the UCL or the LCL (i.e., within the outer 1/3 of the distance between the control limits and the CL); p(f) = (3!/(2!1!))(0.023)^2(0.477) = 0.08% for one side
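The first two rules can be sketched as a simple scan over the plotted points (illustrative data; in practice the limits would come from the chart's own calculations):

```python
def out_of_control(points, cl, lcl, ucl):
    """Flag rule 1 (point beyond limits) and rule 2 (8 in a row same side of CL)."""
    signals = []
    run_side, run_len = 0, 0
    for i, x in enumerate(points):
        if x > ucl or x < lcl:  # rule 1: one point outside the control limits
            signals.append((i, "beyond limits"))
        side = 1 if x > cl else (-1 if x < cl else 0)
        run_len = run_len + 1 if side == run_side and side != 0 else (1 if side else 0)
        run_side = side
        if run_len >= 8:        # rule 2: eight consecutive points on one side
            signals.append((i, "8 on one side of CL"))
    return signals

data = [50, 52, 49, 51, 63, 50, 51, 52, 51, 53, 52, 51]  # hypothetical readings
print(out_of_control(data, cl=50, lcl=40, ucl=60))       # flags the 63 only
```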
Choosing an Appropriate Control Chart –
Continuous Data
Choosing an Appropriate Control Chart –
Discrete Data
Xbar Chart Principles

 Xbar-R Charts (and Xbar-s) are two separate charts of the same
subgroup data
 Xbar chart is a plot of the means of subgroup data and
shows inter-subgroup variation.
 R chart is a plot of the subgroup ranges (or if s, plot of
subgroup standard deviation) and shows intra-subgroup
variation.
 Most sensitive charts for tracking and identifying assignable
cause of variation. This chart doesn’t assume normality.
 Establish three sigma process limits using the table below
Defining the Xbar-R UCL and LCL

Where A2, D3 , and D4 are values from control chart table shown earlier.
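The limit formulas can be sketched in Python; the constants A2 = 0.577, D3 = 0, D4 = 2.114 are the standard table values for subgroup size n = 5, and the measurements are hypothetical:

```python
# Control-chart constants for subgroup size n = 5 (standard tables)
A2, D3, D4 = 0.577, 0.0, 2.114

# Hypothetical subgroups of 5 measurements each
subgroups = [
    [49, 51, 50, 52, 48],
    [50, 53, 49, 51, 50],
    [47, 50, 52, 49, 51],
    [52, 50, 48, 51, 49],
]

xbars = [sum(s) / len(s) for s in subgroups]
ranges = [max(s) - min(s) for s in subgroups]
xbarbar = sum(xbars) / len(xbars)  # grand average (X-double bar)
rbar = sum(ranges) / len(ranges)   # average range (R-bar)

# Xbar chart limits: X-double bar +/- A2 * R-bar
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
# R chart limits: D4 * R-bar and D3 * R-bar
ucl_r, lcl_r = D4 * rbar, D3 * rbar
print(round(ucl_x, 2), round(lcl_x, 2), round(ucl_r, 2), lcl_r)
```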
Xbar-R and Subgroup Data – Example

Since the data is subgroup data Xbar-R chart will be used


Constructing/Analyzing
an Xbar-R Chart Graph
–Example

Analysis:
 In Xbar chart Point
SG6 is the point of
change in the process
from below the center
line to above the
center
 No Points are outside
of control limits in the
above process; may
want to investigate
points 6,7 on X-bar for
rule #4, and points 10,
11 on the R chart for
rule #4.
I-MR Chart Principles

 I-MR Charts are two separate charts of the same data

 I chart is a plot of the individual data

 MR chart is a plot of the moving range of the previous individuals

 I-MR charts are sensitive to trends, cycles and patterns, and assume normality

 Used when subgroup variation is zero or no subgroups exist
 Destructive Testing
 Batch Processing
 Summary data from a time period (day, week, month for example)
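The I-MR limits can be sketched using the standard constants for a moving range of size 2 (E2 = 2.66 for the individuals chart, D4 = 3.267 for the MR chart), with hypothetical readings:

```python
# Hypothetical individual peel-strength readings, one per hour
x = [12.1, 11.8, 12.4, 12.0, 11.6, 12.3, 12.2, 11.9]

# Moving ranges between consecutive individuals
mr = [abs(b - a) for a, b in zip(x, x[1:])]
xbar = sum(x) / len(x)
mrbar = sum(mr) / len(mr)

# Constants for a moving range of size 2 (standard tables)
E2, D4 = 2.66, 3.267

# I chart limits: xbar +/- E2 * MR-bar
ucl_i, lcl_i = xbar + E2 * mrbar, xbar - E2 * mrbar
ucl_mr = D4 * mrbar  # LCL for the MR chart is 0 at this moving-range size
print(round(ucl_i, 2), round(lcl_i, 2), round(ucl_mr, 2))
```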
I-MR and Individual Data – Example

 Once in an hour the QC department measures peel strength of


welds on clips for plate glass mounts
 Is the process in control?

 Since the data is individual data an I-MR chart will be used. This
is an example of a destructive test. (However, if several samples
were tested, they could be grouped and then an X-bar & R
charts could be used.)
Constructing an I-
MR Chart Graph –
Example

Analysis:
 In I- chart Point no.
16 is very close to
the upper limit.
However, we
don’t need to
investigate the
reason
 No Points are out
of control in the
above process
Control Charts for Attribute Data

 Type of chart depends on sample size and what is known (defects or defectives)

 Consistent sample size or area (opportunities)
 "np" chart for defectives
 "c" chart for defects

 Inconsistent sample size or area (opportunities)
 "p" chart for defectives
 "u" chart for defects

 Control limits may be constant, like X-bar & R charts (for np and c charts), or vary depending on sample size (for p and u charts)
np-chart Principles

np-Charts:

 Measure the proportion non-conforming (i.e., defectives) within


a standardized group size; expectation is that the same
proportion exists in each group

 Will follow binomial distribution

 Large subgroups required (50 minimum)

 Subgroup size must be constant; therefore, np will act like c,


equaling the number of nonconformities in each group

 Control limits will be constant, like X-bar & R charts (for np and c
charts), or vary depending on sample size (for p and u charts)
np-charts and Uniform Subgroup Size –
Example
 The sourcing department measures 125 purchase orders daily
and records the number of entry errors
 Is the order entry process in control?

 Since the data has a constant subgroup size (orders processed)


of defectives (error/no error) an np-chart will be used
Constructing/Analyzing an np-chart
Graph –Example

Analysis:
 In the np-chart, point no. 12 is beyond the control limit of three standard
deviations. We need to investigate the reason and take corrective action if
necessary.
 Point number 12 is out of control in the above process.
p-chart Principles

p-Charts:

 Measure the proportion non-conforming (i.e., defectives); expectation is that the same proportion exists in each subgroup

 Will follow a binomial distribution

 Each proportion is a subgroup of samples

 Large subgroups required (50 minimum)

 Subgroup size does not have to be constant

 Control limits may vary from subgroup to subgroup based upon subgroup size
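The per-subgroup limits can be sketched as follows, with hypothetical daily order and error counts:

```python
from math import sqrt

# Hypothetical daily data: (orders processed, orders with an entry error)
days = [(120, 6), (95, 3), (140, 9), (110, 4), (130, 7)]

total_n = sum(n for n, _ in days)
total_d = sum(d for _, d in days)
pbar = total_d / total_n  # overall proportion defective (the center line)

for n, d in days:
    sigma_p = sqrt(pbar * (1 - pbar) / n)  # limits vary with subgroup size
    ucl = pbar + 3 * sigma_p
    lcl = max(0.0, pbar - 3 * sigma_p)     # a proportion cannot be negative
    print(round(d / n, 3), round(lcl, 3), round(ucl, 3))
```

Each day gets its own limits because the subgroup size differs, which is exactly why the p-chart's limit lines look stepped rather than straight.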
p-charts and Varying Subgroup Size –
Example
 The sourcing department measures the number of entry errors
on a daily basis
 Is the order entry process in control?

 Since the data has varying subgroup sizes (orders processed) of


defectives (error/no error) a p-chart will be used
Constructing/Analyzing a p-chart Graph –
Example

Analysis:
 In the p-chart, point no. 12 has gone beyond the 3-sigma limit. We need to
investigate the reason and take corrective action if necessary.
 Point number 12 is out of control in the above process.
c-Chart Principles

 c-Charts:

 Occurrences of non-conformities can be counted, but non-occurrences cannot be; occurrences are independent and expected less than 10% of the time

 Will follow a Poisson distribution (c̄ = λ)

 Measure the count of non-conforming defects
 Errors, mistakes, blemishes, leaks

 Area of opportunity must be constant
 lot, unit, invoice

 Control limits will be constant
 Control limits = c̄ ± 3√c̄

 20 or more subgroups suggested for analysis
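The c̄ ± 3√c̄ limits can be computed directly, sketched here with hypothetical white-spec counts per glass panel:

```python
from math import sqrt

# Hypothetical defect (white-spec) counts over a constant inspection area
counts = [4, 7, 3, 6, 5, 8, 2, 5, 6, 4]

cbar = sum(counts) / len(counts)  # average defects per unit (center line)
ucl = cbar + 3 * sqrt(cbar)
lcl = max(0.0, cbar - 3 * sqrt(cbar))  # negative counts are impossible

print(round(cbar, 1), round(lcl, 2), round(ucl, 2))
print([i for i, c in enumerate(counts) if c > ucl or c < lcl])  # OOC points
```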


c-Chart Subgroups – Example

 Final inspection grades the tinted glass on the number of “white


specs.” Product is priced by grade.
 White specs are defects, not defectives, and are measured over a
constant sample area; so c-chart will be used
 Is the process in control?
Constructing/Analyzing a c-chart Graph –
Example

Analysis:
 Point number 2, 3, 4, 12, 13, 16, 17 are out of control in the above process;
additionally, points 7, 9 and 18, 19 break rule #4.
 In the above c-chart process is not stable and we can see lot of points going
beyond 3-sigma control levels. We need to investigate the reason(s) and take
corrective action.
u-Chart Principles

u-Charts:

 Occurrences of non-conformities can be counted; occurrences are independent and expected less than 10% of the time

 Will follow a Poisson distribution (u = c/a)

 Measure the count of non-conforming defects
 Errors, mistakes, blemishes, leaks

 Area of opportunity may vary
 lot, unit, invoice

 Control limits may vary from subgroup to subgroup based upon the area of opportunity

 20 or more subgroups suggested for analysis


u-Chart Subgroups

 The plastics operation counts defects after a "run", which is undetermined in length (once started, it continues until all material is used).

 Is the process in control?

 The count of defects has a varying area of opportunity since the length of runs is not constant. A u-chart will be used
Constructing a u-Chart Graph

Analysis:
 In the above u-chart point number 18 has gone beyond 3 sigma level. We
need to investigate the reason and take corrective action if necessary.
 Point number 18 is out of control in the above process.
Lesson Summary
 Statistical process control (SPC):

 Objectives and Benefits:


 Describe the objectives and benefits of SPC, including
controlling process performance, identifying special and
common causes, etc.

 Rational sub-grouping:
 Define and describe how rational sub-grouping is used.

 Selection and application of control charts:


 Identify, select, construct, and apply the following types of
control charts: Xbar−R, Xbar −S, individuals and moving
range (ImR / XmR), median ( ), p, np, c, and u.

 Analysis of control charts:


 Interpret control charts and distinguish between common
and special causes.
Implement and Validate Solutions
Agenda

 Implement the solution identified and


validate the improvement
 Post-improvement capability analysis
to identify, implement, and validate
solutions through F-test, t-test, etc.
 Measurement system capability
reanalysis
New Process Capability
 Recompute the process capability in order to

 Verify improvement levels through Hypothesis testing tools


(explained earlier)
 Compare new Process Capability with targeted Process Capability
 Reconfirm noise levels (variability due to CCV)
 Recompute RPN to show business results.

 It’s possible that all vital X’s are under control, but the required improvement
is not made. The reason could be one of the following:

 If X’s chosen do not functionally relate to ‘Y’


 If some vital X’s haven’t been discovered
 If the optimum region has not been explored completely
 If the operating limits are not fixed properly
 If considerable measurement error is present in both ‘X’ & ‘Y’

 If required improvement is not made, each of the above points should


be explored
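Recomputing capability can be sketched with the standard Cp/Cpk formulas, using hypothetical specification limits and post-improvement process estimates:

```python
# Hypothetical post-improvement estimates and specification limits
usl, lsl = 60.0, 40.0  # upper and lower spec limits
mu, sigma = 51.0, 2.0  # process mean and within-subgroup std. dev.

cp = (usl - lsl) / (6 * sigma)               # potential capability
cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # accounts for process centering

print(round(cp, 2), round(cpk, 2))
```

Comparing the new Cpk against the targeted capability from the charter (and against the baseline value from the Measure phase) shows whether the improvement met its goal.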
Measurement System Reanalysis

As process capability improves, we need to reevaluate the measurement system

 When improved performance has been sustained for at least one to two months with
all vital X’s under control

 Perform Gage R&R


 Revalidate the Measurement System to ensure no measurement capability
has been lost since earlier MSA

 Evaluate both results to validate the observed improvement in process


performance and stability
Lesson Summary

We learned the importance of

 New Process Capability

 Measurement System Reanalysis


Control Plan
Agenda

 Assist in developing a control plan to


document the gains, and assist in
implementing controls and monitoring
systems to hold the gains
What is a Control Plan?
 A written summary description of the system for controlling a process

 Describes actions required to maintain the “desired state” of the process


and minimize process and product variation

 A living document which evolves and changes with the process and
product requirements

 A control plan is also considered a knowledge-transfer document,


moving the lessons and implementation of the Six Sigma project to be
maintained by the company for long term reference.

What is a Control Plan?


 Provides a single point of reference for understanding process
characteristics, specifications, and SOPs

 Enables orderly transfer of responsibility for “sustaining the gain”


Control Plan Strategy
A good control plan
 Minimizes process tampering.

 Clearly states the reaction plan to out-of-control conditions.

 Signals when kaizen activities are needed.

 Describes training needs for standard operating procedures.

 Describes maintenance schedule requirements.

Note: A good control plan clearly describes what actions to take, when to
take them, and who should take them, thereby reducing “fire fighting”
activities.
What to Control?

 A control plan controls the Xs to ensure the desired state for Y.

 Merely monitoring the output, Y, is not an effective way to control


a process.
Identifying KPIVs

 Sources for identifying KPIVs:
 FMEA
 C & E Matrix / Diagram and Cause Verification
 Multi-vari studies
 Regression Analysis
 DOE
Control Plan Tools
 Developing and executing control plans requires the use of:
 Control charts – covered in previous slides

 Error proofing – Poka Yoke - implementation of fail-safe mechanisms within a


process to prevent it from creating defects

 Standard Operating Procedures (SOP) - written document or instruction


detailing all steps and activities of a process or procedure

 Measurement System Analysis – covered in previous slides

 Preventative Maintenance (PM) - Inclusion of documented Preventive


Maintenance is an important element. Ensure system is established to do PM
regularly – checklists that have a weekly/monthly/quarterly schedule for
operators, maintenance personnel, and engineers are helpful, checklist at
the equipment site or on a computerized system dispatches work orders to
appropriate personnel
Developing a Control Plan
 Start with a basic understanding of the process
 Form a multi-function team
 Use tools & techniques to understand & document
 Process Flow Diagram
 Failure Mode And Effects Analysis
 Special Characteristics (Critical & Significant)
 Control Plans / Lessons Learned From Similar Parts Or Processes
 Technical Documentation
 Validation Plan Results
 Optimization Methods
 Team Knowledge Of The Process
 Questions to Get Started
 What do you want to control?
 How often do you need to measure the process?
 Do you have an effective measurement system?
 What is the cost of sampling?
 How much of a sigma shift can you tolerate?
 Who needs to see the data?
 What type of tool/chart is necessary?
 Who will generate the data?
 Who will monitor and adjust the process parameters?
 Have they been trained?
 What are the system requirements for auditing & maintenance?
Choosing the Right Level of Control
Example of Transactional Control Plan
Process Step
 Distinguishes Process / Process Step / Equipment
 Provides basis for standardization
Characteristic/Parameter
 Identifies which KPIV or KPOV to measure
 Determined from project efforts / team decision
 Consider standardization of similar process / equipment
 CTQ classification
 Proven impact on process performance if not measured & controlled
Specification/Requirement
 Determined from project efforts, technology / process history
 Utilize variable data if possible
 Start with current specifications unless changed / established by
methodology
Measurement Method
 Identifies tool / gauge / method used for measurement
 Consider availability of equipment
 Consider calibration and MSA needs
 Consider training on use of tool / gauge / method
 Note any supporting Manufacturing Performance Index (MPI) or Operational Blueprint needed
Sample Size - Frequency - Who Measures
 Sample Size – Determine number of samples to pick for measurement
 Frequency – Defines how often the parameter will be measured. As you assess the process over time, requirements might need modification
 Who measures – The person responsible for recording the data and charting the results, if applicable. This may vary by location / skills. The reaction of this person is critical to the success of the process
Where Recorded?
 Control sheets for recording – charts, plots, logs, check sheets
 Control methods should be customized for floor / function application
 Use as much quantitative information as possible
 Paper charting may be more suitable as a starting point vs. a system
Decision Rule/Corrective Action
 Identifies the reaction to an out-of-control / out-of-specification situation; reactions most often differ in the extent of inter-department and inter-organization (e.g. customer, supplier) communication and involvement
 References support documentation – troubleshooting maps, etc.
 Facilitates access to documented correct procedures
Reference Number
 Facilitates access to documented and correct procedures (i.e. the
latest, approved revision of the procedure).
Sample Manufacturing Control Plan
Summary – Control Phase Key Objectives and Tasks
DMAIC – Tools Summary
 The tools are listed in the order they need to be used, phase-wise

 Define:
 SIPOC
 VOC
 CTQ Tree
 QFD
 FMEA
 CE Matrix
 Project Charter

 Measure:
 Gage R&R (Variables)
 Run Charts or Control Charts
 Cp, Cpk, Sigma Level (Z Level) and DPMO
 Anderson-Darling Test
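The Cp, Cpk, sigma level (Z), and DPMO entries above are related by simple formulas. The sketch below shows those relationships with made-up numbers; note that no 1.5-sigma long-term shift is applied here, and DPMO counts only defects beyond the nearer specification limit.

```python
# Illustrative sketch with made-up values: Cp, Cpk, Z level, and DPMO
# from a process mean, standard deviation, and specification limits.
from math import sqrt, erf

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def capability(mean, sigma, lsl, usl):
    cp = (usl - lsl) / (6 * sigma)                   # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)  # actual capability
    z = 3 * cpk                                      # sigma (Z) level
    dpmo = (1 - phi(z)) * 1_000_000                  # defects beyond nearer limit
    return cp, cpk, z, dpmo

cp, cpk, z, dpmo = capability(mean=10.2, sigma=0.1, lsl=9.7, usl=10.5)
# Here Cp = 1.33, Cpk = 1.0, Z = 3, and DPMO is about 1,350
```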
DMAIC – Tools Summary
 Some of these tools can be used interchangeably in other phases
 Analyze:
 SLR (Simple Linear Regression)
 Pareto Charts
 Fishbone Diagram
 FMEA
 Multi-vari Charts/Hypothesis Tests
 Improve:
 Brainstorming
 Piloting and FMEA
 DOE (If needed)
 Control:
 Control Charts
 Control Plan
 MSA Re-analysis
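One of the Analyze-phase tools listed above, simple linear regression (SLR), fits a line y = b0 + b1·x by ordinary least squares. A minimal sketch with made-up data (not from the course material):

```python
# Illustrative sketch: ordinary least squares fit for simple linear regression.
def slr_fit(xs, ys):
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    sxx = sum((x - x_bar) ** 2 for x in xs)                       # Sxx
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))  # Sxy
    slope = sxy / sxx
    intercept = y_bar - slope * x_bar
    return slope, intercept

# Made-up data following y = 2x + 1 exactly
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [3.0, 5.0, 7.0, 9.0, 11.0]
slope, intercept = slr_fit(xs, ys)   # recovers slope 2 and intercept 1
```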
Control Phase
 The Control Phase is the last phase in the Six Sigma cycle. Control often leads back to the Define phase, where new projects might be initiated.
Sessions Summary
 Piloting:
 Basic terms
 DOE terms such as independent and dependent variables, factors and
levels, response, treatment, error, repetition, and replication
 Main effects
 Interpret main effects and interaction plots
 Objectives and benefits
 Describe the objectives and benefits of SPC, including controlling process
performance, identifying special and common causes
 Rational sub-grouping
 How rational sub-grouping is used
 Selection and application of control charts
 Identify, select, construct, and apply the following types of control charts: Xbar-R, Xbar-s, Individuals and Moving Range (ImR / XmR), p, np, c, and u
 Analysis of control charts
 Interpret control charts and distinguish between common and special
causes
 Implement and validate solutions
 Control plan
 Assist in developing a control plan to document the gains, and assist in
implementing controls and monitoring systems to hold the gains
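For the attribute charts named above, the p chart is a representative case: with constant subgroup size n and average fraction defective p-bar, the limits are p-bar ± 3·sqrt(p-bar(1 − p-bar)/n). A sketch with hypothetical counts (not from the course material):

```python
# Illustrative sketch with hypothetical data: p-chart center line and limits.
from math import sqrt

def p_chart_limits(defectives, n):
    """defectives: defective count per subgroup; n: constant subgroup size."""
    p_bar = sum(defectives) / (len(defectives) * n)
    half_width = 3 * sqrt(p_bar * (1 - p_bar) / n)
    lcl = max(0.0, p_bar - half_width)   # a fraction defective cannot be negative
    ucl = p_bar + half_width
    return lcl, p_bar, ucl

# Ten subgroups of 100 units each
counts = [4, 6, 5, 3, 7, 5, 4, 6, 5, 5]
lcl, center, ucl = p_chart_limits(counts, n=100)
# center = 0.05; the LCL clamps to 0; the UCL is about 0.115
```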
Quiz - 1
1. In every experiment there is experimental error. Which of the following statements is true?

A. This error is due to lack of uniformity of the material used in the experiment and to inherent variability in the experimental technique
B. This error can be changed statistically by increasing the degrees of freedom
C. The error can be reduced only by improving the material
D. In a well-designed experiment there is no interaction effect
Quiz - 2
1. When performing "one experiment with five repetitions," what are the six experiments called?

A. Randomization
B. Replications
C. Planned Grouping
D. Sequential
Quiz - 3
1. A control chart is used to:

A. Determine if defective parts are being produced
B. Measure the process capability
C. Determine causes of process variation
D. Detect non-random variation in processes
Quiz - 4
1. The identification of key process input and output variables can come from __________.

1. DOE results
2. Customer surveys
3. ANOVA methods
4. Customer requirements

A. 1 & 2
B. 2 & 3
C. 1, 2, 3
D. 1, 2, 3, 4
Quiz - 5
1. Control limits are set at the three sigma level because:

A. This level makes it difficult for the output to get out of control
B. This level establishes tight limits for the production process
C. This level reduces the probability of looking for trouble in the production process when none exists
D. This level assures a very small type II error
Quiz - 1
1. In every experiment there is experimental error. Which of the following statements is true?

A. This error is due to lack of uniformity of the material used in the experiment and to inherent variability in the experimental technique
B. This error can be changed statistically by increasing the degrees of freedom
C. The error can be reduced only by improving the material
D. In a well-designed experiment there is no interaction effect

Correct Answer: A

Answer D is incorrect because many experiments are designed to measure interactions. Answer C is wrong because error is often inherent in other areas of the experimental technique. Answer B is off target because only a more refined estimate of error can be determined by increasing the degrees of freedom.
Quiz - 2
1. When performing "one experiment with five repetitions," what are the six experiments called?

A. Randomization
B. Replications
C. Planned Grouping
D. Sequential

Correct Answer: B

Repeated trials, or replications, are often conducted to estimate the pure trial-to-trial experimental error so that lack of fit may be evaluated.
Quiz - 3
1. A control chart is used to:

A. Determine if defective parts are being produced
B. Measure the process capability
C. Determine causes of process variation
D. Detect non-random variation in processes

Correct Answer: D

A control chart is used to distinguish between random variation and variation due to an out-of-control condition. The correct response is to detect non-random variation in processes.
Quiz - 4
1. The identification of key process input and output variables can come from __________.

1. DOE results
2. Customer surveys
3. ANOVA methods
4. Customer requirements

A. 1 & 2
B. 2 & 3
C. 1, 2, 3
D. 1, 2, 3, 4

Correct Answer: D

All four items can identify key process variables.
Quiz - 5
1. Control limits are set at the three sigma level because:

A. This level makes it difficult for the output to get out of control
B. This level establishes tight limits for the production process
C. This level reduces the probability of looking for trouble in the production process when none exists
D. This level assures a very small type II error

Correct Answer: C

Assuming that assignable causes do not occur, the control limits are not tight. This level assures a very small type I error (calling the process out of control when it is in control).
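The "very small type I error" in the explanation above can be checked directly: for a normally distributed in-control process, the probability of a point falling outside ±3-sigma limits purely by chance is about 0.27%, i.e. roughly one false alarm per 370 in-control points.

```python
# Quick numeric check of the 3-sigma false-alarm (type I error) rate.
from math import sqrt, erf

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

alpha = 2 * (1 - phi(3.0))   # both tails beyond +/- 3 sigma
# alpha is about 0.0027, so the in-control average run length is about 370
```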
