
COURSE FILE

Subject Name : Quality Management


Subject Code : 7CS6-60.1
Branch : Computer Engineering
Year : IV Year/ VII Semester

Arya Institute of Engineering & Technology, Jaipur

Department of Computer Science & Engineering


(Rajasthan Technical University, KOTA)
UNIT-I
INTRODUCTION TO QUALITY MANAGEMENT

INTRODUCTION
➢ QUALITY
The word quality can be defined as:
• Fitness for use or purpose.
• Doing the right thing the first time.
• Doing the right thing at the right time.
• Finding out and knowing what the consumer wants.
• Features that meet consumer needs and give customer satisfaction.
• Freedom from deficiencies or defects.
• Conformance to standards.
• Value or worthiness for money, etc.

➢ MANAGEMENT
Management is the process of dealing with or controlling things or people. It is a
process of planning, decision making, organizing, leading, motivating and
controlling the human, financial, physical, and information resources of an
organization to reach its goals efficiently and effectively.

➢ QUALITY MANAGEMENT
Quality management is the act of overseeing all activities and tasks that must be
accomplished to maintain a desired level of excellence. This includes the determination
of a quality policy, creating and implementing quality planning and assurance, and
quality control and quality improvement.

A quality management system (QMS) is a collection of business processes focused on
consistently meeting customer requirements and enhancing customer satisfaction. It is
expressed as the organizational goals and aspirations, policies, processes, documented
information and resources needed to implement and maintain it.
COMPONENTS OF QUALITY MANAGEMENT
Quality management consists of four key components, which include the following:

Quality Planning – The process of identifying the quality standards relevant to the
project and deciding how to meet them.

Quality Improvement – The purposeful change of a process to improve the confidence or
reliability of the outcome.

Quality Control – The continuing effort to uphold a process’s integrity and reliability in
achieving an outcome.

Quality Assurance – The systematic or planned actions necessary to offer sufficient
reliability so that a particular service or product will meet the specified requirements.

PRINCIPLES OF QUALITY MANAGEMENT

• Customer focus:-
The primary focus of quality management is to meet customer requirements and to
strive to exceed customer expectations.

• Leadership:-
Leaders at all levels establish unity of purpose and direction and create conditions in
which people are engaged in achieving the organization’s quality objectives.
Leadership has to take up the changes required for quality improvement and
encourage a sense of quality throughout the organization.

• Engagement of people:-
Competent, empowered and engaged people at all levels throughout the organization
are essential to enhance its capability to create and deliver value.

• Process approach:-
Consistent and predictable results are achieved more effectively and efficiently when
activities are understood and managed as interrelated processes that function as a
coherent system.

• Improvement:-
Successful organizations have an ongoing focus on improvement.

• Evidence-based decision making:-
Decisions based on the analysis and evaluation of data and information are more
likely to produce desired results.

• Relationship management:-
For sustained success, an organization manages its relationships with interested
parties, such as suppliers and retailers.

COURSE OBJECTIVE
• The first objective is to introduce students to the evolution and history of quality management.
• The aim of quality management is to ensure that all the organization’s stakeholders work
together to improve the company’s processes, products, services, and culture to achieve
the long-term success that stems from customer satisfaction.
• The process of quality management involves a collection of guidelines developed
by a team to ensure that the products and services they produce meet the right
standards or are fit for a specified purpose.
• To realize the significance of quality.
• Quality management is focused not only on product and service quality, but also on the
means to achieve it.

EVOLUTION OF QUALITY MANAGEMENT


Quality management is a relatively recent discipline but an important one for any organization. Civilizations
that supported the arts and crafts allowed clients to choose goods meeting higher quality
standards rather than normal goods. In societies where arts and crafts were the responsibility of
master craftsmen or artists, these masters would lead their studios and train and supervise others.
The importance of craftsmen diminished as mass production and repetitive work practices were
instituted. The aim was to produce large numbers of the same goods. The first proponent in the
US for this approach was Eli Whitney who proposed (interchangeable) parts manufacture for
muskets, hence producing the identical components and creating a musket assembly line. The
next step forward was promoted by several people including Frederick Winslow Taylor, a
mechanical engineer who sought to improve industrial efficiency. He is sometimes called "the
father of scientific management." He was one of the intellectual leaders of the Efficiency
Movement and part of his approach laid a further foundation for quality management, including
aspects like standardization and adopting improved practices. Henry Ford was also important in
bringing process and quality management practices into operation in his assembly lines. In
Germany, Karl Benz, often called the inventor of the motor car, was pursuing similar assembly
and production practices, although real mass production was properly initiated in Volkswagen
after World War II. From this period onwards, North American companies focused
predominantly upon production against lower cost with increased efficiency.
Walter A. Shewhart made a major step in the evolution towards quality management by creating
a method for quality control for production, using statistical methods, first proposed in 1924. This
became the foundation for his ongoing work on statistical quality control. W. Edwards Deming
later applied statistical process control methods in the United States during World War II, thereby
successfully improving quality in the manufacture of munitions and other strategically important
products.
Quality leadership from a national perspective has changed over the past decades. After the
Second World War, Japan decided to make quality improvement a national imperative as part of
rebuilding their economy, and sought the help of Shewhart, Deming and Juran, amongst others.
W. Edwards Deming championed Shewhart's ideas in Japan from 1950 onwards. He is probably
best known for his management philosophy establishing quality, productivity, and competitive
position. He formulated 14 points of attention for managers, which are a high-level abstraction
of many of his deep insights. They should be interpreted by learning and understanding the deeper
insights. These 14 points include key concepts such as:

• Break down barriers between departments
• Management should learn their responsibilities and take on leadership
• Supervision should help people, machines and gadgets to do a better job
• Improve constantly and forever the system of production and service
• Institute a vigorous program of education and self-improvement

In the 1950s and 1960s, Japanese goods were synonymous with cheapness and low quality, but
over time their quality initiatives began to be successful, with Japan achieving high levels of
quality in products from the 1970s onward. For example, Japanese cars regularly top the J.D.
Power customer satisfaction ratings. In the 1980s Deming was asked by Ford Motor Company
to start a quality initiative after they realized that they were falling behind Japanese
manufacturers. A number of highly successful quality initiatives have been invented by the
Japanese (see for example on this pages: Genichi Taguchi, QFD, Toyota Production System).
Many of the methods not only provide techniques but also have associated quality culture (i.e.
people factors). These methods are now adopted by the same western countries that decades
earlier derided Japanese methods.

Customers recognize that quality is an important attribute in products and services. Suppliers
recognize that quality can be an important differentiator between their own offerings and those
of competitors (quality differentiation is also called the quality gap). In the past two decades this
quality gap has been greatly reduced between competitive products and services. This is partly
due to the contracting (also called outsourcing) of manufacture to countries like China and India,
as well as the internationalization of trade and competition. These countries, among many others, have
raised their own standards of quality in order to meet international standards and customer
demands. The ISO 9000 series of standards is probably the best-known set of international
standards for quality management.
Customer satisfaction is the backbone of quality management. Setting up a million-dollar
company without taking care of the needs of customers will ultimately decrease its revenue. There
are many books available on quality management. Some themes have become more significant
including quality culture, the importance of knowledge management, and the role of leadership
in promoting and achieving high quality. Disciplines like systems thinking are bringing more
holistic approaches to quality so that people, process and products are considered together rather
than independent factors in quality management.
The influence of quality thinking has spread to non-traditional applications outside the
walls of manufacturing, extending into service sectors and into areas such as sales, marketing
and customer service.

PRODUCT QUALITY
“Product quality means to incorporate features that have a capacity to meet consumer needs
(wants) and gives customer satisfaction by improving products (goods) and making them free
from any deficiencies or defects.”

Meaning of Product Quality

Product quality mainly depends on important factors like:


• The type of raw materials used for making a product.
• How well various production technologies are implemented.
• The skill and experience of the manpower involved in the production process.
• The availability of production-related overheads like power and water supply, transport, etc.
PRODUCT CHARACTERISTICS

Product quality has two main types of characteristics, viz., measured characteristics and attribute characteristics.

• Measured characteristics
Measured characteristics include features like shape, size, color, strength, appearance, height,
weight, thickness, diameter, volume, fuel consumption, etc. of a product.

• Attribute characteristics
Attribute characteristics check and control defective pieces per batch, defects per item, the
number of mistakes per page, cracks in crockery, double-threading in textile material, discoloring
in garments, etc.
Based on this classification, we can divide products into good and bad.
So, product quality refers to the total goodness of a product.

FEATURE OF PRODUCT QUALITY

(i) Quality of design: The product must be designed as per the consumers’ needs and
high-quality standards.
(ii) Quality conformance: The finished products must conform (match) to the product
design specifications.
(iii) Reliability: The products must be reliable or dependable. They must not easily
break down or become non-functional, and they must not require frequent repairs. They
must remain operational for a satisfactorily long time to be called reliable.
(iv) Safety: The finished product must be safe for use and/or handling. It must not harm
consumers in any way.
(v) Proper storage: The product must be packed and stored properly. Its quality must be
maintained until its expiry date.

IMPORTANCE OF PRODUCT QUALITY


(i) For the Company: Product quality is very important for the company, because bad-quality
products affect consumer confidence, the company’s image and its sales. Poor quality
may even threaten the survival of the company. So, it is very important for every company to
make better-quality products.

(ii) For Consumers: Product quality is also very important for consumers. They are ready to
pay high prices, but in return they expect best-quality products. If they are not satisfied
with the quality of a company’s product, they will purchase from its competitors.
Nowadays, very good quality international products are available in the local market. So,
if domestic companies don’t improve their products’ quality, they will struggle to
survive in the market.

SERVICE QUALITY

Every customer has an ideal expectation of the service they want to receive when they go to a
restaurant or store. Service quality measures how well a service is delivered compared to
customer expectations. Businesses that meet or exceed expectations are considered to have high
service quality. Let's say you go to a fast food restaurant for dinner, where you can reasonably
expect to receive your food within five minutes of ordering. If, after you get your drink and find a
table, your order is called minutes earlier than you had expected, you would probably consider
this to be high service quality. There are several dimensions that customers consider when assessing
service quality. Let's discuss these dimensions in a little more detail.

Service quality was seen as having two basic dimensions:

Technical quality: What the customer receives as a result of interactions with the service firm
(e.g. a meal in a restaurant, a bed in a hotel)
Functional quality: How the customer receives the service; the expressive nature of the service
delivery (e.g. courtesy, attentiveness, promptness)
The technical quality is relatively objective and therefore easy to measure. However, difficulties
arise when trying to evaluate functional quality.
DIMENSIONS OF QUALITY

Eight dimensions of product quality management can be used at a strategic level to analyze
quality characteristics. The concept was defined by David A. Garvin, formerly the C. Roland
Christensen Professor of Business Administration at Harvard Business School, who died on
30 April 2017. Garvin was posthumously honored with the prestigious award for 'Outstanding
Contribution to the Case Method' on March 4, 2018.

Some of the dimensions are mutually reinforcing, whereas others are not—improvement in
one may be at the expense of others. Understanding the trade-offs desired by customers among
these dimensions can help build a competitive advantage.

Garvin's eight dimensions can be summarized as follows:

➢ Performance: Performance refers to a product's primary operating characteristics. This
dimension of quality involves measurable attributes; brands can usually be ranked
objectively on individual aspects of performance.

➢ Features: Features are additional characteristics that enhance the appeal of the product
or service to the user.

➢ Reliability: Reliability is the likelihood that a product will not fail within a specific time
period. This is a key element for users who need the product to work without fail.

➢ Conformance: Conformance is the precision with which the product or service meets
the specified standards.

➢ Durability: Durability measures the length of a product’s life. When the product can be
repaired, estimating durability is more complicated. The item will be used until it is no
longer economical to operate it. This happens when the repair rate and the associated
costs increase significantly.
➢ Serviceability: Serviceability is the speed with which the product can be put into service
when it breaks down, as well as the competence and the behavior of the service person.

➢ Aesthetics: Aesthetics is the subjective dimension indicating the kind of response a user
has to a product. It represents the individual’s personal preference.

➢ Perceived Quality: Perceived Quality is the quality attributed to a good or service based
on indirect measures.

COST OF QUALITY

Cost of Quality is a methodology used to define and measure where, and in what amount, an
organization’s resources are being used for prevention activities and maintaining product quality,
as opposed to the costs resulting from internal and external failures. The Cost of Quality can be
represented by the sum of two factors: the Cost of Good Quality and the Cost of Poor Quality,
as represented in the basic equation below:

CoQ = CoGQ + CoPQ

The Cost of Quality equation looks simple, but in reality it is more complex. The Cost of Quality
includes all costs associated with the quality of a product, from preventive costs intended to reduce
or eliminate failures, to the cost of process controls to maintain quality levels, to the costs related
to failures both internal and external.

How to Measure Cost of Quality (COQ)

The methods for calculating Cost of Quality vary from company to company. In many cases,
organizations determine the Cost of Quality by calculating total warranty dollars as a percentage
of sales. Unfortunately, this method looks only externally at the Cost of Quality and not
internally. In order to gain a better understanding, a more comprehensive look at all quality
costs is required.
The Cost of Quality can be divided into four categories:
1. Prevention
2. Appraisal
3. Internal Failure
4. External Failure

Within each of the four categories there are numerous possible sources of cost related to good or
poor quality.
The Cost of Good Quality (CoGQ)

Prevention Costs – costs incurred from activities intended to keep failures to a minimum. These
can include, but are not limited to, the following:

• Establishing Product Specifications
• Quality Planning
• New Product Development and Testing
• Development of a Quality Management System (QMS)
• Proper Employee Training

Appraisal Costs – costs incurred to maintain acceptable product quality levels.


Appraisal costs can include, but are not limited to, the following:

• Incoming Material Inspections
• Process Controls
• Check Fixtures
• Quality Audits
• Supplier Assessments
The Cost of Poor Quality (CoPQ)
Internal Failures – costs associated with defects found before the product or service reaches the
customer.
Internal Failures may include, but are not limited to, the following examples:

• Excessive Scrap
• Product Re-work
• Waste due to poorly designed processes
• Machine breakdown due to improper maintenance
• Costs associated with failure analysis

External Failures – costs associated with defects found after the customer receives the product or
service.
External Failures may include, but are not limited to, the following examples:

• Service and Repair Costs
• Warranty Claims
• Customer Complaints
• Product or Material Returns
• Incorrect Sales Orders
• Incomplete BOMs
• Shipping Damage due to Inadequate Packaging

These four categories can now be applied to the original Cost of Quality equation. Our original
equation stated that the Cost of Quality is the sum of the Cost of Good Quality and the Cost of
Poor Quality. This is still true; however, the basic equation can be expanded by applying the
categories within both the Cost of Good Quality and the Cost of Poor Quality.

The Cost of Good Quality is the sum of Prevention Cost and Appraisal Cost

(CoGQ = PC + AC)

The Cost of Poor Quality is the sum of Internal and External Failure Costs
(CoPQ = IFC + EFC)

By combining the equations, Cost of Quality can be more accurately defined, as shown in
the equation below:

COQ = (PC + AC) + (IFC + EFC)

One important factor to note is that the Cost of Quality equation is nonlinear. Investing in
the Cost of Good Quality does not necessarily mean that the overall Cost of Quality will
increase. In fact, when the resources are invested in the right areas, the Cost of Quality
should decrease. When failures are prevented / detected prior to leaving the facility and
reaching the customer, Cost of Poor Quality will be reduced.
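The expanded equation is straightforward to operationalize. The sketch below, in Python, computes the three totals from the four cost categories; the function name and all cost figures are hypothetical, chosen only to illustrate the arithmetic.

```python
# Sketch of the expanded Cost of Quality equation:
#   COQ = (PC + AC) + (IFC + EFC)
# The function name and the cost figures below are hypothetical.

def cost_of_quality(prevention, appraisal, internal_failure, external_failure):
    """Return (CoGQ, CoPQ, COQ) per the expanded equation."""
    cogq = prevention + appraisal                  # Cost of Good Quality
    copq = internal_failure + external_failure     # Cost of Poor Quality
    return cogq, copq, cogq + copq

# Hypothetical annual figures, in thousands of dollars.
cogq, copq, coq = cost_of_quality(prevention=120, appraisal=80,
                                  internal_failure=150, external_failure=250)
print(f"CoGQ={cogq}  CoPQ={copq}  COQ={coq}")  # CoGQ=200  CoPQ=400  COQ=600
```

With these made-up figures, failure costs are twice the prevention and appraisal spend, the kind of imbalance that signals under-investment in the Cost of Good Quality.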

WILLIAM EDWARDS DEMING QUALITY PHILOSOPHY

Who Was He?


Born in 1900, W. Edwards Deming was an American engineer, professor, statistician,
lecturer, author, and management consultant.

What Was His Philosophy?

Deming opined that by embracing certain principles of management, organizations can
improve the quality of their products and concurrently reduce costs. Reduction of costs would
include reducing waste, staff attrition and litigation, while
simultaneously increasing customer loyalty. The key, in Deming’s opinion, was to practice
constant improvement and to imagine the manufacturing process as a seamless whole,
rather than as a system made up of incongruent parts.

In the 1970s, some of Deming's Japanese proponents summarized his philosophy in a two-
part comparison: when organizations focus primarily on quality, defined by the
ratio ‘Quality = Results of work efforts / Total costs’, quality
improves and costs fall over time. When organizations focus primarily on costs,
costs rise and quality drops over time.

The Deming Cycle

Also known as the Shewhart Cycle, the Deming Cycle, often called the PDCA, was a result
of the need to link the manufacture of products with the needs of the consumer along with
focusing departmental resources in a collegial effort to meet those needs.

The steps that the cycle follows are:

• Plan: Design a consumer research methodology that will inform business process
components.
• Do: Implement the plan to measure its performance.
• Check: Check the measurements and report the findings to the decision-makers.
• Act/Adjust: Draw a conclusion on the changes that need to be made and implement them.
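The cycle above can be sketched as a loop. The code below is deliberately schematic: a single made-up metric (a defect rate) stands in for "the process", and the Act step simply nudges it toward the planned target. A real PDCA cycle acts on business processes, not on a number.

```python
# A schematic PDCA loop over a hypothetical defect-rate metric.
# All names and figures are illustrative, not a real quality system.

def pdca(defect_rate, target, adjustment=0.5, tolerance=0.1, max_cycles=20):
    """Iterate Plan-Do-Check-Act until the measurement meets the plan."""
    for cycle in range(1, max_cycles + 1):
        plan = target                       # Plan: set the goal for this cycle
        measured = defect_rate              # Do: run the process and measure it
        gap = measured - plan               # Check: compare results to the plan
        if gap <= tolerance:                # goal met: standardize and stop
            return cycle, defect_rate
        defect_rate -= gap * adjustment     # Act: adjust the process
    return max_cycles, defect_rate

cycles, final_rate = pdca(defect_rate=8.0, target=1.0)
print(cycles, round(final_rate, 2))  # converges in 8 cycles
```

The point of the sketch is structural: each pass through the loop closes the gap between what was planned and what was measured, which is exactly what repeated PDCA cycles are meant to do.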
The 14 Points for Management

Deming’s other chief contribution came in the form of his 14 Points for Management,
which consists of a set of guidelines for managers looking to transform business
effectiveness.

1. Create constancy of purpose for improvement of product and service
2. Adopt a new philosophy
3. Discontinue dependence on mass inspection
4. Cease the practice of awarding business on price tag alone
5. Strive always to improve the production and service of the organization
6. Introduce new and modern methods of on-the-job training
7. Devise modern methods of supervision
8. Let go of fear
9. Destroy barriers among staff areas
10. Dispose of numerical goals created for the workforce
11. Eradicate work standards and numerical quotas
12. Abolish the barriers that burden the workers
13. Devise a vigorous education and training program
14. Cultivate top management that will strive toward these goals

The 7 Deadly Diseases for Management

The 7 Deadly Diseases for Management defined by Deming are the most serious and fatal
barriers that managements face in attempting to increase effectiveness and
institute continual improvement.

✓ Lack of constancy of purpose in planning a product or service.
✓ Organizations giving importance to short-term profits.
✓ Employing personal review systems to evaluate performance, merit ratings, and annual
reviews for employees.
✓ Constant job hopping.
✓ Use of visible figures only for management, with little or no consideration of figures that
are unknown or unknowable.
✓ An overload of medical costs.
✓ Excessive costs of liability.

JOSEPH MOSES JURAN QUALITY PHILOSOPHY

Who Was He?

Born in 1904, Joseph Juran was a Romanian-born American engineer and management
consultant of the 20th century, and a missionary for quality and quality management. Like
Deming's, Juran's philosophy also took root in Japan. He stressed the importance of a
broad, organizational-level approach to quality, stating that total quality management
begins at the highest position in management and continues all the way to the
bottom.

Influence of the Pareto Principle

In 1941, Juran was introduced to the work of Vilfredo Pareto. He studied the Pareto
principle (the 80-20 law), which states that, for many events, roughly 80% of the effects
follow from 20% of the causes, and applied the concept to quality issues. Thus, according
to Juran, 80% of the problems in an organization are caused by 20% of the causes. This is
also known as the rule of the "Vital Few and the Trivial Many". Juran, in his later years,
preferred "the Vital Few and the Useful Many" suggesting that the remaining 80% of the
causes must not be completely ignored.
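Juran's application of the principle is easy to illustrate. The sketch below ranks hypothetical defect causes and returns the "vital few" that account for a chosen share (80% by default) of all defects; the cause names and counts are invented for illustration.

```python
# Pareto analysis sketch: rank causes by defect count and return the
# "vital few" that cover a chosen share of all defects. Data are invented.

def vital_few(defects_by_cause, threshold=0.8):
    """Smallest set of top causes covering at least `threshold` of defects."""
    total = sum(defects_by_cause.values())
    ranked = sorted(defects_by_cause.items(), key=lambda kv: kv[1], reverse=True)
    selected, covered = [], 0
    for cause, count in ranked:
        selected.append(cause)
        covered += count
        if covered / total >= threshold:
            break
    return selected

defects = {"solder bridge": 120, "misalignment": 45, "scratch": 20,
           "wrong label": 10, "discoloring": 5}
print(vital_few(defects))  # ['solder bridge', 'misalignment'] cover 82.5%
```

Here two of the five causes account for over 80% of defects, so an improvement team following Juran would attack those two first while keeping the "useful many" on a watch list.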

What Was Juran’s Philosophy?

During Juran's time, the primary focus of every business was the quality of the end
product, which is what Deming stressed. Juran shifted track to focus instead on the
human dimension of quality management. He laid emphasis on the importance of
educating and training managers. For Juran, the root causes of quality issues were
resistance to change and human-relations problems.
The Juran Quality Trilogy

One of the first to write about the cost of poor quality, Juran developed an approach for
cross-functional management that comprises three managerial processes:

1. Quality Planning:

This is a process that involves creating awareness of the necessity to improve, setting
certain goals and planning ways to reach those goals. This process has its roots in the
management’s commitment to planned change that requires trained and qualified staff.

2. Quality Control:

This is a process to develop the methods to test the products for their quality. Deviation
from the standard will require change and improvement.

3. Quality Improvement:

This is a process that involves the constant drive to perfection. Quality improvements need
to be continuously introduced. Problems must be diagnosed to the root causes to develop
solutions. The Management must analyze the processes and the systems and report back
with recognition and praise when things are done right.

Three Steps to Progress

Juran also introduced the Three Basic Steps to Progress, which, in his opinion, companies
must implement if they are to achieve high quality.

1. Accomplish structured improvements on a regular basis, with commitment and a
sense of urgency.

2. Build an extensive training program.

3. Cultivate commitment and leadership at the higher echelons of management.


Ten Steps to Quality

Juran devised ten steps for organizations to follow to attain better quality.

• Establish awareness of the need to improve and the opportunities for improvement.
• Set goals for improvement.
• Organize to meet the goals that have been set.
• Provide training.
• Implement projects aimed at solving problems.
• Report progress.
• Give recognition.
• Communicate results.
• Keep score.
• Maintain momentum by building improvement into the company's regular systems.
UNIT-II
PROCESS QUALITY MANAGEMENT

INTRODUCTION TO PROCESS QUALITY

Process quality refers to the degree to which an acceptable process, including
measurements and criteria for quality, has been implemented and adhered to in order to
produce the artifacts.

Software development requires a complex web of sequential and parallel steps. As the scale
of the project increases, more steps must be included to manage the complexity of the
project. All processes consist of product activities and overhead activities. Product
activities result in tangible progress toward the end product. Overhead activities have an
intangible impact on the end product, and are required for the many planning, management,
and assessment tasks.

The objectives of measuring and assessing process quality are to:

1. Manage profitability and resources
2. Manage and resolve risk
3. Manage and maintain budgets, schedules, and quality
4. Capture data for process improvement

To some degree, adhering to a process and achieving high process quality overlap
with the quality of the artifacts. That is, if the process is adhered to (high quality), the risk of
producing poor-quality artifacts is reduced. However, the opposite is not always true:
generating high-quality artifacts is not necessarily an indication that the process has been
adhered to.

Therefore, process quality is measured not only by the degree to which the process was
adhered to, but also by the degree of quality achieved in the products produced by the process.

To aid in your evaluation of the process and product quality, the Rational Unified Process
(RUP) has included pages such as:
1. Activity: a description of the activity to be performed and the steps required to perform the
activity.
2. Work Guideline: techniques and practical advice useful for performing the activity.
3. Artifact Guidelines and Checkpoints: information on how to develop, evaluate, and use
the artifact.
4. Templates: models or prototypes of the artifact that provide structure and guidance for
content.

TOOLS AND TECHNIQUES FOR PROCESS IMPROVEMENT


Understanding processes so that they can be improved by means of a systematic approach requires
the knowledge of a simple kit of tools or techniques. The effective use of these tools and techniques
requires their application by the people who actually work on the processes, and their commitment
to this will only be possible if they are assured that management cares about improving quality.
Managers must show they are committed by providing the training and implementation support
necessary.
The tools and techniques most commonly used in process improvement are:
• Problem solving methodology, such as DRIVE
• Process mapping
• Process flowcharting
• Force field analysis
• Cause & effect diagrams
• CEDAC
• Brainstorming
• Pareto analysis
• Statistical process control (SPC)
• Control charts
• Check sheets
• Bar charts
• Scatter diagrams
• Matrix analysis
• Dot plot or tally chart
• Histograms
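Of the statistical tools listed, control charts are the easiest to sketch in code. The example below builds an individuals (XmR) chart: the center line is the mean, and sigma is estimated from the average moving range divided by the d2 constant (1.128 for subgroups of size two). The data are hypothetical daily defect counts with one deliberately injected out-of-control point.

```python
# Sketch of an individuals (XmR) control chart. Points outside the 3-sigma
# limits signal special-cause variation. Data below are hypothetical.
import statistics

def xmr_limits(samples):
    """Return (lower, center, upper) 3-sigma control limits."""
    center = statistics.mean(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    sigma = statistics.mean(moving_ranges) / 1.128   # d2 constant for n=2
    return center - 3 * sigma, center, center + 3 * sigma

def out_of_control(samples):
    """Indices of points falling outside the control limits."""
    lower, _, upper = xmr_limits(samples)
    return [i for i, x in enumerate(samples) if x < lower or x > upper]

data = [12, 11, 13, 12, 14, 11, 12, 30, 13, 12]  # 30 is an injected signal
print(out_of_control(data))  # the eighth point (index 7) is flagged
```

Estimating sigma from moving ranges rather than from the overall standard deviation is the conventional choice for individuals charts, because a single outlier inflates the overall deviation enough to mask itself.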

DRIVE is an approach to problem solving and analysis that can be used as part of process
improvement.

Define:
The scope of the problem and the criteria by which success will be measured; agree the
deliverables and success factors.

Review:
The current situation: understand the background; identify and collect information, including
performance; identify problem areas, improvements and “quick wins”.

Identify:
Improvements or solutions to the problem, and the changes required to enable and sustain the
improvements.

Verify:
Check that the improvements will bring about benefits that meet the defined success criteria;
prioritize and pilot the improvements.

Execute:
Plan the implementation of the solutions and improvements; agree and implement them; plan a
review; gather feedback and review.

PROCESS MAPPING

One of the initial steps to understand or improve a process is Process Mapping. By gathering
information we can construct a “dynamic” model - a picture of the activities that take place in a
process. Process maps are useful communication tools that help improvement teams understand the
process and identify opportunities for improvement.
ICOR (inputs, outputs, controls and resources) is an internationally accepted process analysis
methodology for process mapping. It allows processes to be broken down into simple, manageable
and more easily understandable units. The maps define the inputs, outputs, controls and resources for
both the high level process and the sub-processes.
Process mapping provides a common framework, discipline and language, allowing a systematic
way of working. Complex interactions can be represented in a logical, highly visible and objective
way. It defines where issues or “pinch points” exist and provides improvement teams with a
common decision-making framework.

To construct a process map:


• Brainstorm all activities that routinely occur within the scope of the process
• Group the activities into 4-6 key sub-processes
• Identify the sequence of events and links between the sub-processes
• Define as a high level process map and sub-process maps using ICOR
Process maps provide a dynamic view of how an organization can deliver enhanced business
value.
“What if” scenarios can be quickly developed by comparing maps of the process “As is” with maps of
the process “To be”.

Another tool used in the construction of process maps is Process Flowcharting. This is a
powerful technique for recording, in the form of a picture, exactly what is done in a process.
There is a set of standard symbols used in classic flowcharts (start/end terminators, process steps, decision diamonds, input/output boxes and flow arrows).

If a flowchart cannot be drawn using these symbols, then the process is not fully understood. The
purpose of the flowchart is to learn why the current process operates the way it does and to
conduct an objective analysis, to identify problems and weaknesses, unnecessary steps or
duplication and the objectives of the improvement effort.

Force Field Analysis is a technique for identifying forces which may help or hinder achieving a
change or improvement. By assessing the forces that prevent making the change, plans can be
developed to overcome them. It is also important to identify those forces that will help with the
change. Once these forces have been identified and analyzed, it is possible to determine if a
proposed change is viable.

A useful way of mapping the inputs that affect quality is the Cause & Effect Diagram, also
known as the Fishbone or Ishikawa Diagram. It is also a useful technique for opening up
thinking in problem solving.

The effect or problem being investigated is shown at the end of a horizontal arrow; potential
causes are then shown as labeled arrows entering the main cause arrow. Each arrow may have
other arrows entering it as the principal causes or factors are reduced to their sub- causes;
brainstorming can be effectively used to generate the causes and sub-causes.
With CEDAC – Cause and Effect Diagram with the Addition of Cards, the effect side of the
diagram is a quantified description of the problem, and the cause side of the diagram uses two
different colored cards for writing the facts and the ideas.

The facts are gathered and written on the left of the spines, and the ideas for improvement on the
right of the cause spines. The ideas are evaluated and selected for substance and practicality.

BRAINSTORMING
Brainstorming can be used in conjunction with the Cause and Effect tool. It is a group technique
used to generate a large number of ideas quickly and may be used in a variety of situations. Each
member of the group, in turn, can put forward an idea concerning the problem being considered.
Wild ideas are welcomed and no criticism or evaluation occurs during brainstorming, all ideas
being recorded for subsequent analysis. The process continues until no further ideas are
forthcoming and increases the chance for originality and innovation. It can be used for:

• Identifying problem areas


• Identifying areas for improvement
• Designing solutions to problems
• Developing action plans

Pareto Analysis can be used to analyze the ideas from a brainstorming session. It is used to
identify the vital few problems or causes of problems that have the greatest impact. A Pareto
diagram or chart pictorially represents data in the form of a ranked bar chart that shows the
frequency of occurrence of items in descending order.
Usually, Pareto diagrams reveal that 80% of the effect is attributed to 20% of the causes; hence, it
is sometimes known as the 80/20 rule.
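As an illustration, the ranking behind a Pareto chart can be computed with a short Python sketch (the defect categories and counts below are hypothetical):

```python
from collections import Counter

def pareto_ranking(observations):
    """Rank categories by frequency (descending) with cumulative percentages."""
    counts = Counter(observations)
    total = sum(counts.values())
    ranked, cumulative = [], 0
    for category, count in counts.most_common():
        cumulative += count
        ranked.append((category, count, round(100 * cumulative / total, 1)))
    return ranked

# Hypothetical defect log: a few causes account for most of the occurrences
defects = (["scratch"] * 45 + ["dent"] * 30 + ["misalignment"] * 15
           + ["stain"] * 7 + ["other"] * 3)
for category, count, cum_pct in pareto_ranking(defects):
    print(f"{category:13s} {count:3d}  {cum_pct:5.1f}%")
```

The first two categories alone reach 75% of all defects, which is the "vital few" that a Pareto analysis is designed to expose.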

STATISTICAL PROCESS CONTROL (SPC)

Statistical Process Control (SPC) is a toolkit for managing processes. It is also a strategy for
reducing the variability in products, deliveries, materials, equipment, attitudes and processes,
which are the cause of most quality problems. SPC will reveal whether a process is “in control”–
stable and exhibiting only random variation, or “out of control” and needing attention.

In SPC, numbers and information form the basis for decisions and actions, and a thorough data
recording system is essential. In addition to the tools necessary for recording the data, there also
exists a set of tools to analyze and interpret the data, some of which are covered in the following
pages. An understanding of the tools and how to use them requires no prior knowledge of statistics.

One of the key tools of SPC is a Control Chart. It is used to monitor processes that are in control,
using means and ranges. It represents data, e.g., sales, volume, customer complaints, in chronological
order, showing how the values change with time. In a control chart each point is given individual
significance and is joined to its neighbors. Above and below the mean, Upper and Lower Warning
and Action lines (UWL, LWL, UAL, and LAL) are drawn. These act as signals or decision rules, and
give operators information about the process and its state of control. The charts are useful as a
historical record of the process as it happens, and as an aid to detecting and predicting change.
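A minimal sketch of the calculation behind an individuals control chart is shown below: sigma is estimated from the average moving range (d2 = 1.128 is the standard constant for subgroups of two), and points beyond the 3-sigma action limits are flagged. The daily complaint counts are hypothetical:

```python
def imr_limits(values):
    """Centre line and 3-sigma limits for an individuals (I-MR) chart.
    Sigma is estimated from the average moving range; d2 = 1.128 is the
    standard bias-correction constant for subgroups of size two."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return mean - 3 * sigma_hat, mean, mean + 3 * sigma_hat

# Hypothetical daily complaint counts with one unusual spike
values = [5, 6, 5, 7, 6, 5, 6, 25, 5, 6]
lcl, centre, ucl = imr_limits(values)
signals = [v for v in values if v < lcl or v > ucl]
print(signals)  # [25]: the spike is an out-of-control signal
```

A production chart would also apply warning lines and run rules, but the principle is the same: only the point outside the limits calls for action.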

A Check Sheet is an organized way of collecting and structuring data, its purpose is to collect
the facts in the most efficient way. It ensures that the information that is collected is what was
asked for and that everyone is doing it the same way. Data is collected and ordered by adding
tally or check marks against predetermined categories of items or measurements. It simplifies the
task of analysis.
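The tallying that a check sheet performs can be sketched in a few lines of Python (the problem categories are hypothetical):

```python
from collections import Counter

# Hypothetical check-sheet marks: one entry per observed problem
marks = ["late delivery", "wrong item", "late delivery", "damaged",
         "late delivery", "wrong item", "late delivery"]
tally = Counter(marks)
for category, count in tally.most_common():
    # Print a tally-mark style line for each predetermined category
    print(f"{category:15s} {'|' * count}  ({count})")
```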

Bar Charts are visual displays of data in which the height of the bars is used to show the relative
size of the quantity measured. The bars can be separated to show that the data is not directly
related or continuous. They can be used to give visual impact to data, compare different types of
data and compare data collected at different times.

A Scatter Diagram is a graphical representation of how one variable changes with respect to
another. The variables are plotted on axes at right angles to each other and the scatter in the
points gives a measure of confidence in any correlation shown.
Scatter diagrams show whether two variables are related (or demonstrate that they are not), the
type of relationship, if any, between the variables, and how one variable might be controlled
by suitably controlling the other. They can also be used to predict values lying outside the
measured range.
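The correlation that a scatter diagram shows visually can be quantified with the Pearson coefficient; a small sketch with hypothetical temperature and bond-strength data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired variables."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: oven temperature vs. measured glue-bond strength
temperature = [150, 160, 170, 180, 190]
strength = [20.1, 22.3, 24.2, 26.0, 28.4]
print(round(pearson_r(temperature, strength), 3))  # close to +1: strong positive correlation
```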
In its simplest form, Matrix Analysis is a way of presenting data in a rectangular grid,
with data displayed along the top and down the side.

Symbols placed at the intersections of the grid enable relationships to be established between
the two sets of data. It summarizes all the known data in one table and highlights gaps in
knowledge and relationships between items. It is a valuable attention focusing tool for teams,
and simplifies the task of priority ranking a set of items.

The Dot Plot or Tally Chart is a frequency distribution. It shows how often (the frequency) a
particular value has occurred. The shape of the plot can reveal a great deal about a process,
giving a picture of the variation, highlighting unusual values and indicating the probability of
particular values occurring.
A Histogram is a picture of variation or distribution, where data has been grouped into cells
and their frequency represented as bars. It is convenient for large amounts of data, particularly
when the range is wide. It gives a picture of the extent of variation, highlights unusual areas and
indicates the probability of particular values occurring.
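The grouping of data into cells that underlies a histogram can be sketched as follows (equal-width cells; the measurement values are hypothetical):

```python
def histogram_bins(data, n_bins):
    """Group values into equal-width cells and count the frequency in each."""
    lo, hi = min(data), max(data)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for x in data:
        cell = min(int((x - lo) / width), n_bins - 1)  # top edge joins last cell
        counts[cell] += 1
    return counts

measurements = [1, 2, 2, 3, 3, 3, 4, 4, 5]
print(histogram_bins(measurements, 4))  # [1, 2, 3, 3]
```

Plotting these counts as bars gives the histogram; the shape of the bars reveals the extent of variation described above.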

With such a shopping list of tools and techniques, it may not be easy to know which one to use
when. To overcome this problem, the following matrix refers to the six step methodology for
process improvement and indicates the key tools and techniques that could be used in each
step. However, this list is not exhaustive and the tools should be used in conjunction with
measurement.
GRAPHICAL DATA REPRESENTATION

Graphing the data can be utilized for both historical data already available and when
analyzing the data resulting from live data collection activities. Of course, you need to pick
the right graphical tool as there are a lot of different ways to plot your data. A number of
commonly used graphical tools will be covered here. However, note that if one graph fails
to reveal anything useful, try another one.

A long list of data is usually not practical for conveying information about a process. One
of the best ways to analyze problems in any process is to plot the data and see what it is
telling you. This is often recommended as a starting point in any data analysis during the
problem-solving process. A wide range of graphical tools are available which can generate
graphs quickly and easily such as Minitab and Microsoft Excel.
Different graphs can reveal different characteristics of your data such as the central
tendency, the dispersion and the general shape of the distribution. Graphical analysis
allows you to learn quickly about the nature of the process, enables clarity of communication
and provides focus for further analysis. It is an important tool for understanding sources of
variation in the data and thereby helping to better understand the process and where root
causes might be. Conclusions drawn from the graphical analysis may require verification
through further advanced statistical techniques such as significance testing and
experimentation.

Line Charts:
Line Charts are the simplest forms of charts and often used to monitor and track data over
time. They are useful for showing trends in quality, cost or other process performance
measures. A line chart represents the data by connecting the data points by straight lines to
highlight trends in the data. A standard or a goal line may also be drawn to verify actual
performance against identified targets. Line charts are the most preferred format to display
time series data. Time series plots, run charts, SPC charts and radar charts are all line charts.

Time Series Plots are line charts that are used to evaluate behavior in data over a time
interval. They can be used to determine if a process is stable by visually spotting trends,
patterns or shift in the data. If any of these are observed, then we can say that the process
is probably unstable. More advanced charts for assessing the stability of a process over
time are run charts and SPC charts.

A Time series:
A time series plot requires the data to be in the order which actually happened and that the
data collection frequency is constant. Time Series Analysis is the analysis of the plotted
data in order to get meaningful information out of it. Different behaviors of the data can be
observed such as upward and downward trends, shifts in the mean and changes in the
amount of variation, patterns and cycles, or anything not random. Time Series Forecasting
is the use of a model to predict future values based on previously observed values.
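One simple way to expose a trend when analyzing a plotted time series is to smooth it with a moving average; a short sketch with hypothetical monthly defect counts:

```python
def moving_average(series, window):
    """Smooth a time series with a simple moving average to expose the trend."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

# Hypothetical monthly defect counts drifting upward over time
monthly = [12, 11, 13, 14, 15, 14, 16, 18, 17, 19]
smoothed = moving_average(monthly, 3)
print(smoothed)  # the smoothed values rise steadily: an upward trend
```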
Pie Charts:
Pie Charts are ways that make it easy to compare proportions. They are widely used in the
business and media worlds for their simplicity and ease of interpretation. They display the
proportion of each category relative to the whole data set representing each as a slice of the
pie. The percentage represented by each category is usually provided near to the
corresponding slice of the pie.

Bar charts:
Bar charts are ways of displaying frequency of occurrence of attribute data. They focus
on the absolute value of the data while a pie chart focuses on the relative value of the data.
The bar height indicates the number of times a particular characteristic was observed. The
bars on the chart may be arranged in any order and are presented either horizontally or
vertically to show comparisons among categories. When a bar chart presents the categories
in descending
order of frequency, this is called a Pareto Chart.

7QC(QUALITY CONTROL TOOLS)

The 7 quality tools were first conceptualized by Kaoru Ishikawa, a professor of engineering
at the University of Tokyo. They can be used for controlling and managing quality in any
organization.
The 7 basic quality tools are, essentially, graphical techniques used to identify & fix issues
related to product or process quality.
The 7 basic quality tools are as follows:
• Flow Chart
• Histogram
• Cause-and-Effect Diagram
• Check Sheet
• Scatter Diagram
• Control Charts
• Pareto Charts

Flow charts: Flow charts are one of the best process improvement tools you can use to
analyze a series of events. They map out these events to illustrate a complex process in
order to find any commonalities among the events. They’re also one of the most common
methods of creating a workflow diagram.
Flow charts can be used in any field to break down complex processes in a way that is easy
to understand. Then, you can go through the business processes one by one, identifying
areas for improvement.
Histogram: A histogram is a chart of columns (cells) that shows how the data are distributed
around the mean. If the data follow a normal distribution, the graph will have a bell-shaped
curve; if not, it can take different shapes depending on the nature of the distribution.
A histogram plots a measured variable against its frequency of occurrence, so it always
involves these two quantities.

Cause-and-effect Diagram (also known as Fishbone diagram): Cause-and-effect


diagrams can be used to understand the root causes of business problems. Because
businesses face problems daily, it is necessary to understand the root of the problem so you
can solve it effectively.
Check Sheet: A check sheet is a basic tool that gathers and organizes data to evaluate
quality. This can be done with an Excel spreadsheet so you can analyze the information
gathered in a graph.

Scatter Diagram: Scatter diagrams are the best way to represent the value of two different
variables. They present the relationship between the different variables and illustrate the
results on a Cartesian plane. Then further analysis can be done on the values.

Control Charts: A control chart is a good tool for monitoring performance and can be
used to monitor any process that relates to the function of an organization. These charts
allow you to identify the stability and predictability of the process and identify common
causes of variation.
Pareto Charts: Pareto charts are charts that contain bars and a line graph. The values are
shown in descending order by bars and the total is represented by the line. They can be
used to identify a set of priorities so you can determine what parameters have the biggest
impact on the specific area of concern.
PROCESS CAPABILITY ANALYSIS

Process capability analysis is a set of tools used to find out how well a given process meets a set
of specification limits. In other words, it measures how well a process performs.
An important technique used to determine how well a process meets a set of specification limits
is called a process capability analysis. A capability analysis is based on a sample of data taken
from a process and usually produces:
• An estimate of the DPMO (defects per million opportunities).
• One or more capability indices.
• An estimate of the Sigma Quality Level at which the process operates.
Software packages such as STATGRAPHICS can perform these capability analyses for a range of data types.
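The DPMO and Sigma Quality Level estimates mentioned above can be computed directly; the defect counts below are hypothetical, and the conventional 1.5-sigma long-term shift is assumed:

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    """Sigma Quality Level implied by a DPMO figure, with the conventional
    1.5-sigma long-term shift added back."""
    long_term_yield = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(long_term_yield) + 1.5

d = dpmo(defects=34, units=1000, opportunities_per_unit=5)
print(round(d))                  # 6800 DPMO
print(round(sigma_level(d), 2))  # roughly a 4-sigma process
```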

PROBABILITY CAPABILITY INDICES

✓ Cp stands for process capability, and is a simple measure of the capability of a process. It
tells us how much potential the system has of meeting both upper and lower specification
limits. Its weak point is that, in focusing on the data spread, it ignores the averages; so if
the system being tested isn’t centered between the specification limits it may (when used
alone) give misleading impressions. The narrower the spread of a systems output is, the
greater the Cp value. You can test how centered a system is by comparing Cp to Cpk. If a
process is centered on its target, these two will be equal. The larger the difference between
Cpk and Cp the more off-center your process is.

✓ Cpk stands for process capability index and refers to the capability a particular process has
of achieving output within certain specifications. In manufacturing, it describes the ability
of a manufacturer to produce a product that meets the consumers’ expectations, within a
tolerance zone. If Cpk is more than 1, the system has the potential to perform as well as
required. The equation for Cpk is [minimum(mean – LSL, USL – mean)] / (0.5 × NT),
where NT stands for the natural tolerance (six process standard deviations, so 0.5 × NT = 3σ),
LSL for the lower specification limit and USL for the upper specification limit.
✓ Pp stands for process performance. It is much the same as Cp, but unlike Cp it measures
actual performance rather than potential. Like Cp, it measures spread, and is subject to the
same weaknesses.

✓ Ppk stands for process performance index. Like Pp, it measures actual performance rather
than potential. A Ppk of between 0 and 1 indicates that not all of the process’s output is
meeting specifications. If Ppk is 1, 99.73% of your system’s output is within the
specifications. The percentage 99.73% comes from the normal distribution curve, where
99.73% of results fall within -3 and 3 standard deviations from the mean.

• Smaller Ppk values indicate the deviation from specification is larger.


• Larger Ppk values indicate the deviation from specification is smaller.

These numbers, and a histogram representing them, are usually produced in a process capability
analysis report.
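The indices above can be computed from a sample as in the following sketch (for simplicity sigma is taken as the sample standard deviation here; a formal study would estimate short-term sigma separately, which is what distinguishes Cp/Cpk from Pp/Ppk):

```python
import statistics

def capability_indices(data, lsl, usl):
    """Cp and Cpk estimated from a sample, with sigma approximated by the
    sample standard deviation."""
    mean = statistics.mean(data)
    sigma = statistics.stdev(data)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(mean - lsl, usl - mean) / (3 * sigma)
    return cp, cpk

# Hypothetical measurements against specification limits 9.4 and 10.9
data = [10.1, 9.8, 10.0, 10.2, 9.9, 10.0, 10.1, 9.9]
cp, cpk = capability_indices(data, lsl=9.4, usl=10.9)
print(round(cp, 2), round(cpk, 2))  # Cpk < Cp because the mean sits off-centre
```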

MEASUREMENT SYSTEM ANALYSIS

What is Measurement System Analysis (MSA)?


MSA is defined as an experimental and mathematical method of determining the amount of
variation that exists within a measurement process. Variation in the measurement process can
directly contribute to our overall process variability. MSA is used to certify the measurement
system for use by evaluating the system’s accuracy, precision and stability.

What is a Measurement System?


Before we dive further into MSA, we should review the definition of a measurement system and
some of the common sources of variation. A measurement system has been described as a system
of related measures that enables the quantification of particular characteristics. It can also include
a collection of gages, fixtures, software and personnel required to validate a particular unit of
measure or make an assessment of the feature or characteristic being measured. The sources of
variation in a measurement process can include the following:

•Process – test method, specification •Personnel – the operators, their skill level, training, etc.
•Tools / Equipment – gages, fixtures, test equipment used and their associated calibration systems
• Items to be measured – the part or material samples measured, the sampling plan, etc.
•Environmental factors – temperature, humidity, etc.

Why Perform Measurement System Analysis (MSA)


An effective MSA process can help assure that the data being collected is accurate and the system
of collecting the data is appropriate to the process. Good reliable data can prevent wasted time,
labor and scrap in a manufacturing process. A major manufacturing company began receiving
calls from several of their customers reporting non-compliant materials received at their facilities.
The parts were not properly snapping together to form an even surface or would not lock
in place. The process was audited and found that the parts were being produced out of spec. The
operator was following the inspection plan and using the assigned gages for the inspection. The
problem was that the gage did not have adequate resolution to detect the non-conforming parts.
An ineffective measurement system can allow bad parts to be accepted and good parts to be
rejected, resulting in dissatisfied customers and excessive scrap. MSA could have prevented the
problem and assured that accurate useful data was being collected.

How to Perform Measurement System Analysis (MSA)


MSA is a collection of experiments and analysis performed to evaluate a measurement system’s
capability, performance and amount of uncertainty regarding the values measured. We should
review the measurement data being collected, the methods and tools used to collect and record
the data. Our goal is to quantify the effectiveness of the measurement system, analyze the
variation in the data and determine its likely source. We need to evaluate the quality of the data
being collected in regards to location and width variation. Data collected should be evaluated for
bias, stability and linearity. During an MSA activity, the amount of measurement uncertainty
must be evaluated for each type of gage or measurement tool defined within the process Control
Plans. Each tool should have the correct level of discrimination and resolution to obtain useful
data. The process, the tools being used (gages, fixtures, instruments, etc.) and the operators are
evaluated for proper definition, accuracy, precision, repeatability and reproducibility.

ANALYSIS OF VARIANCE(ANOVA)

Analysis of Variance (ANOVA) is a collection of statistical models, and their associated
estimation procedures, used to analyze the differences between group means; it was developed by
the statistician Ronald Fisher. The observed variance in a particular variable is partitioned into
components attributable to different sources of variation. In its simplest form, ANOVA generalizes
the t-test beyond two means.

Classes of models:

▪ Fixed-effects model
The fixed-effects model (class I) of analysis of variance applies to situations in which the
experimenter applies one or more treatments to the subjects of the experiment to see whether the
response variable values change. This allows the experimenter to estimate the ranges of response
variable values that the treatment would generate in the population as a whole.

▪ Random-effects models
Random-effects model (class II) is used when the treatments are not fixed. This occurs when the
various factor levels are sampled from a larger population. Because the levels themselves are
random variables, some assumptions and the method of contrasting the treatments (a multi-
variable generalization of simple differences) differ from the fixed-effects model.
▪ Mixed-effects models

A mixed-effects model (class III) contains experimental factors of both fixed and random-effects
types, with appropriately different interpretations and analysis for the two types.
Example: Teaching experiments could be performed by a college or university department to
find a good introductory textbook, with each text considered a treatment. The fixed-effects model
would compare a list of candidate texts. The random-effects model would determine whether
important differences exist among a list of randomly selected texts. The mixed-effects model
would compare the (fixed) incumbent texts to randomly selected alternatives.
Defining fixed and random effects has proven elusive, with competing definitions arguably
leading toward a linguistic quagmire.
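The one-way, fixed-effects case can be sketched directly: the F statistic compares between-group variation to within-group variation. The three textbook score groups below are hypothetical:

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way fixed-effects ANOVA: the ratio of
    between-group variance to within-group variance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical student scores under three candidate textbooks
text_a = [80, 85, 90]
text_b = [70, 75, 80]
text_c = [60, 65, 70]
print(one_way_anova_f(text_a, text_b, text_c))  # 12.0
```

A large F value (compared against the F distribution with k-1 and n-k degrees of freedom) indicates that the group means genuinely differ.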

DESIGN AND ANALYSIS OF EXPERIMENTS

Design of experiments (DOE) is defined as a branch of applied statistics that deals with planning,
conducting, analyzing, and interpreting controlled tests to evaluate the factors that control the
value of a parameter or group of parameters. DOE is a powerful data collection and analysis tool
that can be used in a variety of experimental situations.
It allows for multiple input factors to be manipulated, determining their effect on a desired output
(response). By manipulating multiple inputs at the same time, DOE can identify important
interactions that may be missed when experimenting with one factor at a time. All possible
combinations can be investigated (full factorial) or only a portion of the possible combinations
(fractional factorial).

A strategically planned and executed experiment may provide a great deal of information about
the effect on a response variable due to one or more factors. Many experiments involve holding
certain factors constant and altering the levels of another variable. This "one factor at a time"
(OFAT) approach to process knowledge is, however, inefficient when compared with changing
factor levels simultaneously.

Many of the current statistical approaches to designed experiments originate from the work of
R. A. Fisher in the early part of the 20th century. Fisher demonstrated how taking the time to
seriously consider the design and execution of an experiment before trying it helped avoid
frequently encountered problems in analysis. Key concepts in creating a designed experiment
include blocking, randomization, and replication.

Blocking: When randomizing a factor is impossible or too costly, blocking lets you restrict
randomization by carrying out all of the trials with one setting of the factor and then all the trials
with the other setting.

Randomization: Refers to the order in which the trials of an experiment are performed. A
randomized sequence helps eliminate effects of unknown or uncontrolled variables.

Replication: Repetition of a complete experimental treatment, including the setup.

A well-performed experiment may provide answers to questions such as:


1) What are the key factors in a process?
2) At what settings would the process deliver acceptable performance?
3) What are the key, main, and interaction effects in the process?
4) What settings would bring about less variation in the output?

An iterative approach to gaining knowledge is encouraged, typically involving these


consecutive steps:
a) A screening design that narrows the field of variables under assessment.
b) A "full factorial" design that studies the response of every combination of factors and
factor levels, and an attempt to home in on a region of values where the process is close
to optimization.
c) A response surface designed to model the response.
WHEN TO USE DOE

Use DOE when more than one input factor is suspected of influencing an output. For example, it
may be desirable to understand the effect of temperature and pressure on the strength of a glue
bond.

DOE can also be used to confirm suspected input/output relationships and to develop a predictive
equation suitable for performing what-if analysis.

DESIGN OF EXPERIMENTS TEMPLATE AND EXAMPLE

Setting up a DOE starts with a process map. ASQ has created a design of experiments template
(Excel) available for free download and use. Begin your DOE with three steps:

1. Acquire a full understanding of the inputs and outputs being investigated. A process flowchart
or process map can be helpful. Consult with subject matter experts as necessary.

2. Determine the appropriate measure for the output. A variable measure is preferable. Attribute
measures (pass/fail) should be avoided. Ensure the measurement system is stable and repeatable.
3. Create a design matrix for the factors being investigated. The design matrix will show all
possible combinations of high and low levels for each input factor. These high and low levels
can be coded as +1 and -1. For example, a 2 factor experiment will require 4 experimental runs:

               Input A Level    Input B Level
Experiment #1      -1               -1
Experiment #2      -1               +1
Experiment #3      +1               -1
Experiment #4      +1               +1
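The design matrix above can be generated for any number of factors with a short sketch:

```python
from itertools import product

def full_factorial(n_factors):
    """All runs of a two-level full factorial design, in coded -1/+1 units."""
    return list(product((-1, +1), repeat=n_factors))

for run, levels in enumerate(full_factorial(2), start=1):
    print(f"Experiment #{run}: {levels}")
```

With 3 factors the design grows to 2^3 = 8 runs; a fractional factorial would select a subset of these rows.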
ACCEPTANCE SAMPLING PLAN

What Is Acceptance Sampling?

Acceptance sampling is a statistical measure used in quality control. It allows a company to


determine the quality of a batch of products by selecting a specified number for testing. The quality
of this designated sample will be viewed as the quality level for the entire group of products. A
company cannot test every one of its products. There may simply be too high a volume or number
of them to inspect at a reasonable cost or within a reasonable time frame. Or effective testing might
result in the destruction of the product or making it unfit for sale in some way.

Acceptance sampling solves these problems by testing a representative sample of the product for
defects. The process involves first, determining the size of a product lot to be tested, then the
number of products to be sampled, and finally the number of defects acceptable within the sample
batch. Products are chosen at random for sampling. The procedure usually occurs at the
manufacturing site—the plant or factory—and just before the products are to be transported. This
process allows a company to measure the quality of a batch with a specified degree of statistical
certainty without having to test every single unit. Based on the results—how many of the
predetermined number of samples pass or fail the testing—the company decides whether to
accept or reject the entire lot.
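The accept/reject decision rests on the binomial distribution; the sketch below computes the probability of accepting a lot under a hypothetical single-sampling plan (sample n = 50 items, acceptance number c = 2):

```python
from math import comb

def prob_accept(n, c, p):
    """Probability of accepting a lot with true defect rate p, when n items
    are sampled and the lot passes if at most c defectives are found."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

# Hypothetical plan: good lots (low p) should usually pass, bad lots fail
for p in (0.01, 0.05, 0.10):
    print(f"defect rate {p:.0%}: P(accept) = {prob_accept(50, 2, p):.3f}")
```

Plotting P(accept) against p gives the plan's operating characteristic (OC) curve, which is how sampling plans are compared.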

The statistical reliability of a sample is generally measured by a t-statistic, a type of inferential


statistic used to determine if there is a significant difference between two groups that share
common features.

A History of Acceptance Sampling

Acceptance sampling in its modern industrial form dates from the early 1940s. It was originally
applied by the U.S. military to the testing of bullets during World War II. The concept and
methodology were developed by Harold Dodge, a veteran of the Bell Laboratories quality
assurance department, who was acting as a consultant to the Secretary of War. While the bullets
had to be tested, the need for speed was crucial, and Dodge reasoned that decisions about entire
lots could be made by samples picked at random. Along with Harry Romig and other Bell
colleagues, he came up with a precise sampling plan to be used as a standard, setting the sample
size, the number of acceptable defects, and other criteria.

Acceptance sampling procedures became common throughout World War II and afterward.
However, as Dodge himself noted in 1969, acceptance sampling is not the same as acceptance
quality control. Dependent on specific sampling plans, it applies to specific lots and is an
immediate, short-term test—a spot check, so to speak. In contrast, acceptance quality control
applies in a broader, more long-term sense for the entire product line; it functions as an integral
part of a well-designed manufacturing process and system.

TOTAL QUALITY MANAGEMENT

What Is Total Quality Management (TQM)?

Total quality management (TQM) is the continual process of detecting and reducing or
eliminating errors in manufacturing, streamlining supply chain management, improving the
customer experience, and ensuring that employees are up to speed with training. Total quality
management aims to hold all parties involved in the production process accountable for the
overall quality of the final product or service.

TQM was developed by W. Edwards Deming, a management consultant whose work had a great
impact on Japanese manufacturing. While TQM shares much in common with the Six Sigma
improvement process, it is not the same as Six Sigma. TQM focuses on ensuring that internal
guidelines and process standards reduce errors, while Six Sigma looks to reduce defects.

Understanding Total Quality Management

Total quality management (TQM) is a structured approach to overall organizational management.


The focus of the process is to improve the quality of an organization's outputs, including goods
and services, through continual improvement of internal practices. The standards set as part of
the TQM approach can reflect both internal priorities and any industry standards currently in
place. Industry standards can be defined at multiple levels and may include adherence to various
laws and regulations governing the operation of the particular business. Industry standards can
also include the production of items to an understood norm, even if the norm is not backed by
official regulations.

Primary Principles of Total Quality Management

TQM is considered a customer-focused process and aims for continual improvement of business
operations. It strives to ensure all associated employees work toward the common goals of
improving product or service quality, as well as improving the procedures that are in place for
production.

Important:- Special emphasis is put on fact-based decision making, using performance metrics to
monitor progress; high levels of organizational communication are encouraged for the purpose
of maintaining employee involvement and morale.

Principles of TQM
UNIT-III
LEADERSHIP

LEAN MANAGEMENT
A systematic approach to identifying and eliminating waste (non-value added activities)
through continuous improvement by flowing the product at the pull of the customer in
pursuit of perfection.

Origin:
Started by Japanese manufacturers in the automobile industry. Later replicated in other sectors
all over the world.

Underlying Principle:
“Less is more productive”

i.e. in order to stay competitive, organizations are required to deliver better quality products
and services using fewer resources.

Key Principles of Lean Thinking:


◆ Value – what customers are willing to pay for;
◆ Value Stream – the sequence of processes to deliver value;
◆ Flow – organizing the Value Stream to be continuous;
◆ Pull – responding to downstream customer demand;
◆ Perfection – relentless continuous improvement (culture);

JUST-IN-TIME(JIT)

◆ Powerful strategy for improving operations


◆ Materials arrive where they are needed when they are needed
◆ Identifying problems and driving out waste reduces costs and variability and improves
throughput
◆ Requires a meaningful buyer-supplier relationship.

JIT Concepts
• Eliminate waste
• Remove variability
• Improve throughput

1. Eliminate Waste
▪ Waste is anything that does not add value from the customer point of view
▪ Storage, inspection, delay, waiting in queues, and defective products do not add value
and are 100% waste

Ohno’s Seven Wastes


◆ Overproduction
◆ Queues
◆ Transportation
◆ Inventory
◆ Motion
◆ Over processing
◆ Defective products

2. Remove Variability
• Variability is any deviation from the optimum process
• Lean systems require managers to reduce variability caused by both internal and external
factors
• Inventory hides variability
• Less variability results in less waste

3. Improve Throughput
• The time it takes to move an order from receipt to delivery is reduced
• Push systems dump orders on the downstream stations regardless of the need
• By pulling material in small lots, inventory cushions are removed, exposing problems
and emphasizing continual improvement

Core Logic of JIT


JIT and Competitive Advantage

BENCHMARKING

What is benchmarking?
Benchmarking is a strategic and analytical process of continuously measuring an organization's
products, services and practices against a recognized leader in the studied area for the purpose of
improving business performance.

Benchmarking and Best Practices

OQM will identify and publish best practices in public and private sector services that are relevant
to ORS divisions and branches. We will also identify and publicize ORS "best practices" through
our web page. Through sharing with other organizations we will seek to identify and incorporate
practices that will contribute to better performance. OQM can also facilitate benchmarking
studies for ORS divisions pursuing their goal of providing excellent services to their customers.
These studies will be characterized by objective comparisons of performance. In other words,
OQM will make sure that the variables under study are comparable in nature and scope.

Why should we use benchmarking?

Benchmarking can greatly enhance an organization's performance.

Reasons why organizations benchmark:

•To forecast industry trends - Because it requires the study of industry leaders, benchmarking
can provide numerous indicators on where a particular business might be headed, which
ultimately may pave the way for the organization to take a leadership position.

•To discover emerging technologies - The benchmarking process can help leaders uncover
technologies that are changing rapidly, newly developed, or state-of-the-art.

•To stimulate strategic planning - The type of information gathered during a benchmarking
effort can assist an organization in clarifying and shaping its vision of the future.

•To enhance goal setting - Knowing the best practices in your business can dramatically
improve your ability to know what goals are realistic and attainable.

•To maximize award-winning potential - Many prestigious award programs, such as the
Malcolm Baldrige National Quality Award Program, the federal government's President's
Quality Award Program, and numerous state and local awards recognize the importance of
benchmarking and allocate a significant percentage of points to organizations that practice it.

•To comply with Executive Order #12862, "Setting Customer Service Standards" -
Benchmarking the customer service performance of federal government agencies against the best
in business is one of the eight action areas of this Executive Order.

What part of my organization should I select for benchmarking?

To identify where in your organization, you can start benchmarking:

•Review goals and objectives of your organization's strategic plan


• Identify your organization's significant processes
• Identify new customer needs
Types of Benchmarking:
• Internal benchmarking is a comparison of a business process to a similar process inside the
organization.
•Competitive benchmarking is a direct competitor-to-competitor comparison of a product,
service, process or method.
•Functional benchmarking is a comparison to similar or identical practices within the same or
similar functions outside the immediate industry.
•Generic benchmarking broadly conceptualizes unrelated business processes or functions that
can be practiced in the same or similar ways regardless of industry.

PROCESS FAILURE MODE AND EFFECTS ANALYSIS (PFMEA)

A Process Failure Mode Effects Analysis (PFMEA) is a structured analytical tool used by an
organization, business unit, or cross-functional team to identify and evaluate the potential failures
of a process.

PFMEA helps to establish the impact of the failure, and identify and prioritize the action items
with the goal of alleviating risk.
It is a living document that should be initiated prior to the start of production and maintained
through the life cycle of the product.

PFMEA evaluates each process step and assigns a score on a scale of 1 to 10 for the following
variables:
➢Severity - Assesses the impact of the failure mode (the error in the process), with 1
representing the least safety concern and 10 representing the most dangerous safety concern. In
most cases, processes with severity scores exceeding 8 may require a fault tree analysis, which
estimates the probability of the failure mode by breaking it down into further sub-elements.

➢Occurrence - assesses the chance of a failure happening, with 1 representing the lowest
occurrence and 10 representing the highest occurrence. For example, a score of 1 may be assigned
to a failure that happens once in every 5 years, while a score of 10 may be assigned to a failure
that occurs once per hour, once per minute, etc.

➢Detection - assesses the chance of a failure being detected, with 1 representing the highest
chance of detection and 10 representing the lowest chance of detection.

➢RPN - Risk priority number = severity X occurrence X detection. By rule of thumb, any RPN
value exceeding 80 requires a corrective action. The corrective action ideally leads to a lower
RPN number.
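The RPN arithmetic above can be sketched in a few lines of Python. This is an illustrative helper, not part of the original notes; the function name and the example scores are ours, and the threshold of 80 is the rule of thumb mentioned above.

```python
def risk_priority_number(severity, occurrence, detection):
    """RPN = severity x occurrence x detection, each scored from 1 to 10."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("each score must be between 1 and 10")
    return severity * occurrence * detection

# Hypothetical process step: severe effect (8), occasional failure (4),
# moderately hard to detect (5)
rpn = risk_priority_number(8, 4, 5)
print(rpn)       # 160
print(rpn > 80)  # True -> corrective action required
```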

How to Complete a Process FMEA

A simple explanation of how to complete a process FMEA follows:

•Form a cross-functional team of process owners and operations support personnel with a team
leader.

•Have the team leader define the scope, goals and timeline of completing the FMEA.

•As a group, complete a detailed process map.

•Transfer the process map for the steps of the FMEA process.

•Assign severity, occurrence and detection scores to each process step as a team.
•Based on the RPN value, identify required corrective actions for each process step.

•Complete a Responsible, Accountable, Consulted, and Informed (RACI) chart for the corrective
actions.

•Have the team leader on a periodic basis track the corrective action and update the FMEA.

•Have the team leader also track process changes, design changes, and other critical discoveries
that would qualify and update the FMEA.

•Ensure that the team leader schedules periodic meetings to review the FMEA (based on process
performance, a quarterly review may be an option).

WHAT IS SERVICE QUALITY

Every customer has an ideal expectation of the service they want to receive when they go to a
restaurant or store. Service quality measures how well a service is delivered compared to
customer expectations. Businesses that meet or exceed expectations are considered to have high
service quality. Let's say you go to a fast food restaurant for dinner, where you can reasonably
expect to receive your food within five minutes of ordering. After you get your drink and find a
table, your order is called minutes earlier than you had expected! You would probably consider
this to be high service quality. There are five dimensions that customers consider when assessing
service quality. Let's discuss these dimensions in a little more detail.

Tangibles
One dimension of service quality has to do with the tangibles of the service. Tangibles are the
physical features of the service being provided, such as the appearance of the building,
cleanliness of the facilities, and the appearance of the personnel. Going to a restaurant and finding
that your table and silverware are dirty would negatively impact your assessment of the service
quality. On the other hand, walking into a beautifully decorated, clean restaurant with impeccably
dressed wait staff would positively affect your opinion of the service.
QUALITY MANAGEMENT AND SIX SIGMA PERSPECTIVE

Two primary sets of costs are involved in quality:


▪ control costs
▪ failure costs
Costs broken into four categories:
▪ Prevention costs
▪ Appraisal costs
▪ Internal costs of defects
▪ External costs of defects
A Brief history of six sigma:
The Six Sigma concept was developed by Bill Smith, a senior engineer at Motorola, in 1986
as a way to standardize the way defects were tallied. Sigma is the Greek symbol used in statistics
to refer to standard deviation which is a measure of variation. Adding “six” to “sigma” combines
a measure of process performance (sigma) with the goal of nearly perfect quality (six).
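The relationship between a sigma level and its defect rate can be shown numerically. The short Python sketch below is an illustration added to these notes (the function name `dpmo` is ours); it applies the conventional 1.5-sigma long-term shift used in Six Sigma conversion tables.

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities for a given sigma level,
    using the conventional 1.5-sigma long-term shift."""
    # Cumulative distribution function of the standard normal
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return (1.0 - phi(sigma_level - shift)) * 1_000_000

# A three-sigma process produces roughly 66,807 defects per million
# opportunities, while a six-sigma process produces about 3.4.
print(round(dpmo(3.0)))     # 66807
print(round(dpmo(6.0), 1))  # 3.4
```

This is why "six sigma" performance is often summarized as 3.4 defects per million opportunities.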

In the popular book The Six Sigma Way, Six Sigma is defined as: “a comprehensive and
flexible system for achieving, sustaining and maximizing business success. Six Sigma is uniquely
driven by close understanding of customer needs, disciplined use of facts, data, and statistical
analysis, and diligent attention to managing, improving, and reinventing business processes. (p.
xi)”

The DMAIC Improvement Process

Six Sigma projects generally follow a well defined process consisting of five phases.

define

measure

analyze

improve

control
pronounced “dey-MAY-ihk”

The Define Phase

The define phase of a DMAIC project focuses on clearly specifying the problem or opportunity,
what the goals are for the process improvement project, and what the scope of the project is.
Identifying who the customer is and their requirements is also critical given that the overarching
goal for all Six Sigma projects is improving the organization’s ability to meet the needs of its
customers.
ISO 9001

ISO 9001 is defined as the international standard that specifies requirements for a quality
management system (QMS). Organizations use the standard to demonstrate the ability to
consistently provide products and services that meet customer and regulatory requirements. It is
the most popular standard in the ISO 9000 series and the only standard in the series to which
organizations can certify.

ISO 9001 was first published in 1987 by the International Organization for Standardization (ISO),
an international agency composed of the national standards bodies of more than 160 countries. The
current version of ISO 9001 was released in September 2015.

What Are the Benefits of ISO 9001?

ISO 9001 helps organizations ensure their customers consistently receive high quality products
and services, which in turn brings many benefits, including satisfied customers, management,
and employees.

Because ISO 9001 specifies the requirements for an effective quality management system,
organizations find that using the standard helps them:

Organize a QMS

Create satisfied customers, management, and employees Continually improve their processes

Save costs

ISO 9001 CERTIFICATION

ISO 9001 is the only standard in the ISO 9000 series to which organizations can certify. Achieving
ISO 9001:2015 certification means that an organization has demonstrated the following:

• Follows the guidelines of the ISO 9001 standard

• Fulfills its own requirements

• Meets customer requirements and statutory and regulatory requirements

• Maintains documentation
Certification to the ISO 9001 standard can enhance an organization’s credibility by showing
customers that its products and services meet expectations. In some instances or in some
industries, certification is required or legally mandated. The certification process includes
implementing the requirements of ISO 9001:2015 and then completing a successful registrar’s
audit confirming the organization meets those requirements.

Organizations should consider the following as they begin preparing for an ISO 9001 quality
management system certification:
• Registrar’s costs for ISO 9001 registration, surveillance, and recertification audits
• Current level of conformance with ISO 9001 requirements
• Amount of resources that the company will dedicate to this project for development and
implementation
• Amount of support that will be required from a consultant and the associated costs

ISO 14000

What Is ISO 14000?

ISO 14000 is a set of rules and standards created to help companies reduce industrial waste and
environmental damage.
It’s a framework for better environmental impact management, but it’s not required. Companies
can get ISO 14000 certified, but it’s an optional certification. The ISO 14000 series of standards
was introduced in 1996 by the International Organization for Standardization (ISO) and most
recently revised in 2015 (ISO is not an acronym; it derives from the ancient Greek word ísos,
meaning equal or equivalent.)

KEY TAKEAWAYS
▪ ISO 14000 is a set of rules and standards created to help companies address their environmental
impact.
▪ This certification is optional for corporations, rather than mandatory;
▪ ISO 14000 is intended to be used to set and ultimately achieve environmentally-friendly business
goals and objectives.
▪ This type of certification can be used as a marketing tool for engaging environmentally conscious
consumers and may help firms reach mandatory environmental regulations.

Understanding ISO 14000


ISO 14000 is part of a series of standards that address certain aspects of environmental regulations.
It’s meant to be a step by-step format for setting and then achieving environmentally friendly
objectives for business practices or products. The purpose is to help companies manage processes
while minimizing environmental effects, whereas the ISO 9000 standards from 1987 were focused
on the best management practices for quality assurance. The two can be implemented concurrently.

Here are the key standards included in ISO 14000:

ISO 14001: Specification of Environmental Management Systems


ISO 14004: Guideline Standard
ISO 14010 – ISO 14015: Environmental Auditing and Related Activities
ISO 14020 – ISO 14024: Environmental Labeling
ISO 14031 and ISO 14032: Environmental Performance Evaluation
ISO 14040 – ISO 14043: Life Cycle Assessment
ISO 14050: Terms and Definitions

Benefits of ISO 14000 Certification


ISO 14000 certification can be achieved by having an accredited auditor verify that all the
requirements are met, or a company may self-declare. Obtaining the ISO 14000 certification can
be considered a sign of a commitment to the environment, which can be used as a marketing tool
for companies. It may also help companies meet certain environmental regulations.

The other benefits include being able to sell products to companies that use ISO 14000–certified
suppliers. Companies and customers may also pay more for products that are considered
environmentally friendly. On the cost side, meeting the ISO 14000 standards can help reduce costs,
as it encourages the efficient use of resources and limiting waste. This may lead to finding ways
to recycle products or new uses for previously disposed of byproducts.
QS 9000

QS 9000 Certification Definition

QS 9000 is a company level certification based on quality system requirements related specifically
to the automotive industry. These standards were developed by the larger automotive companies
including Ford, General Motors and DaimlerChrysler. This standard is obsolete and has been
replaced by either ISO/TS 16949 or ISO 9001.

This certification was for organizations in the automotive supply chain.

Organizations that wanted to become certified to the current version of QS 9000 would need to
complete an application, undergo a document review and certification audit. Once the certification
was received, annual or regularly scheduled audits would be conducted to verify continued
compliance to the standard.

QUALITY AUDIT

Quality audit is the process of systematic examination of a quality system carried out by an
internal or external quality auditor or an audit team. It is an important part of an organization's
quality management system and is a key element in the ISO quality system standard, ISO 9001.

Quality audits are typically performed at predefined time intervals and ensure that the
institution has clearly defined internal system monitoring procedures linked to effective action.
This can help determine if the organization complies with the defined quality system processes
and can involve procedural or results-based assessment criteria.

With the upgrade of the ISO 9000 series of standards from the 1994 to 2008 series, the focus of the
audits has shifted from purely procedural adherence towards measurement of the actual
effectiveness of the Quality Management System (QMS) and the results that have been achieved
through the implementation of a QMS.

KEY TAKEAWAYS

There are three main types of audits: external audits, internal audits, and Internal Revenue Service
(IRS) audits.

External audits are commonly performed by Certified Public Accounting (CPA) firms and result
in an auditor's opinion which is included in the audit report.

An unqualified, or clean, audit opinion means that the auditor has not identified any material
misstatement as a result of his or her review of the financial statements.

External audits can include a review of both financial statements and a company's internal controls.
Internal audits serve as a managerial tool to make improvements to processes and internal controls.
UNIT-IV
PRODUCT QUALITY IMPROVEMENT

QUALITY

It is not easy to define the word Quality since it is perceived differently by different sets of
individuals. If experts are asked to define quality, they may give varied responses depending on
their individual preferences. These may be similar to the following listed phrases.
According to experts, the word quality can be defined either as;
• Fitness for use or purpose.
• To do a right thing at first time.
• To do a right thing at the right-time.
• Find and know what the consumer wants.
• Features that meet consumer needs and give customer satisfaction.
• Freedom from deficiencies or defects.
• Conformance to standards.
• Value or worthiness for money, etc.
Dr. Joseph Juran coined a short definition of quality as;
“Product’s fitness for use.”

PRODUCT QUALITY

“Product quality means to incorporate features that have a capacity to meet consumer needs
(wants) and gives customer satisfaction by improving products (goods) and making them free
from any deficiencies or defects.”

Product quality mainly depends on important factors like:


1. The type of raw materials used for making a product.
2. How well are various production-technologies implemented?
3. Skill and experience of manpower that is involved in the production process.
4. Availability of production-related overheads like power and water supply, transport, etc.

Product quality has two main characteristics viz; measured and attributes.

Measured characteristics include features like shape, size, color, strength, appearance, height,
weight, thickness, diameter, volume, fuel consumption, etc. of a product.

Attribute characteristics check and control defective pieces per batch, defects per item,
number of mistakes per page, cracks in crockery, double-threading in textile material,
discoloring in garments, etc.

Based on this classification, we can divide products into good and bad.
So, product quality refers to the total of the goodness of a product.
The five main aspects of product quality are depicted and listed below:

Quality of design: The product must be designed as per the consumers’ needs and high-quality
standards.

Quality conformance: The finished products must conform (match) to the product design
specifications.

Reliability: The products must be reliable or dependable. They must not easily break down or
become non-functional. They must also not require frequent repairs. They must remain
operational for a satisfactorily long time to be called reliable.

Safety: The finished product must be safe for use and/or handling. It must not harm consumers
in any way.

Proper storage: The product must be packed and stored properly. Its quality must be
maintained until its expiry date.

Company must focus on product quality, before, during and after production:
• Before production, company must find out the needs of the consumers. These needs must
be included in the product design specifications. So, the company must design its product
as per the needs of the consumers.

• During production, company must have quality control at all stages of the production
process. There must be quality control for raw materials, plant and machinery, selection
and training of manpower, finished products, packaging of products, etc.

• After production, the finished-product must conform (match) to the product-design


specifications in all aspects, especially quality. The company must fix a high-quality
standard for its product and see that the product is manufactured exactly as per this quality
standard. It must try to make zero defect products.
Image depicts importance of product quality for company and consumers.

For company: Product quality is very important for the company. This is because bad-quality
products will affect the consumer’s confidence, image and sales of the company. It may even
affect the survival of the company. So, it is very important for every company to make better
quality products.

For consumers: Product quality is also very important for consumers. They are ready to pay
high prices, but in return, they expect best-quality products. If they are not satisfied with the
quality of product of company, they will purchase from the competitors. Nowadays, very good
quality international products are available in the local market. So, if the domestic companies
don't improve their products' quality, they will struggle to survive in the market.
QUALITY FUNCTION DEPLOYMENT

QFD is a focused methodology for carefully listening to the voice of the customer and then
effectively responding to those needs and expectations.
First developed in Japan in the late 1960s as a form of cause-and-effect analysis, QFD was
brought to the United States in the early 1980s. It gained its early popularity as a result of
numerous successes in the automotive industry.
In general we can define it like this

Every organization has customers. Some have only internal customers, some have only
external customers, and some have both. When you are working to determine what you need
to accomplish to satisfy or even delight your customers, quality function deployment is an
essential tool.

METHODOLOGY

In QFD, quality is a measure of customer satisfaction with a product or a service.


QFD is a structured method that uses the seven management and planning tools to identify and
prioritize customers’ expectations quickly and effectively

Beginning with the initial matrix, commonly termed the House of Quality (Figure 1), the QFD
methodology focuses on the most important product or service attributes or qualities.

These are composed of customer wows, wants, and musts. (See the Kano model of customer
perception versus customer reality.)

Once you have prioritized the attributes and qualities, QFD deploys them to the appropriate
organizational function for action, as shown in Figure 2. Thus, QFD is the deployment of
customer-driven qualities to the responsible functions of an organization.
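At the heart of the House of Quality matrix is a simple weighted sum: each technical characteristic's priority is the sum, over all customer wants, of the want's importance multiplied by the strength of its relationship to that characteristic. The sketch below is a minimal illustration with invented data; the 9/3/1 relationship weights are a common QFD convention, and all the names are hypothetical.

```python
# Customer wants with importance ratings (1-5); hypothetical data
importance = {"easy to clean": 5, "durable": 4, "lightweight": 2}

# Relationship matrix: strength of each technical characteristic's link
# to each want, on the common 9 (strong) / 3 (moderate) / 1 (weak) scale
relationships = {
    "material hardness": {"durable": 9, "lightweight": 3},
    "surface coating":   {"easy to clean": 9, "durable": 3},
}

# Technical priority = sum of importance x relationship strength
priorities = {
    tech: sum(importance[want] * weight for want, weight in links.items())
    for tech, links in relationships.items()
}
print(priorities)  # {'material hardness': 42, 'surface coating': 57}
```

The characteristic with the highest priority (here, surface coating) is the one deployed first to the responsible organizational function.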
TAGUCHI METHOD

What Is the Taguchi Method of Quality Control?

The Taguchi method of quality control is an approach to engineering that emphasizes the roles
of research and development (R&D), product design and development in reducing the
occurrence of defects and failures in manufactured goods.
This method, developed by Japanese engineer and statistician Genichi Taguchi, considers
design to be more important than the manufacturing process in quality control, aiming to
eliminate variances in production before they can occur.

KEY TAKEAWAYS

▪ In engineering, the Taguchi method of quality control focuses on design and development
to create efficient, reliable products.
▪ Its founder, Genichi Taguchi, considers design to be more important than the
manufacturing process in quality control, seeking to eliminate variances in production
before they can occur.
▪ Companies such as Toyota, Ford, Boeing, and Xerox have adopted this method.
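Although these notes describe the method qualitatively, Taguchi's best-known quantitative tool is the quadratic quality loss function, L(y) = k(y − m)², which assigns a cost to any deviation of a characteristic y from its target m, not only to values outside the tolerance limits. A minimal Python sketch follows; the cost constant k and the example values are assumptions for illustration only.

```python
def taguchi_loss(y, target, k):
    """Taguchi quadratic quality loss L(y) = k * (y - target)^2.
    k is a cost constant, typically derived from the loss incurred
    at the tolerance limit."""
    return k * (y - target) ** 2

# Hypothetical example: target dimension 10 units, k = 50 cost units
print(taguchi_loss(10, 10, 50))  # 0 -> no loss exactly on target
print(taguchi_loss(12, 10, 50))  # 200 -> loss grows with the square of deviation
print(taguchi_loss(8, 10, 50))   # 200 -> loss is symmetric about the target
```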
UNIT-V
DESIGN FAILURE

INTRODUCTION TO DESIGN FAILURE MODE AND


EFFECTS ANALYSIS (DFMEA)

It was first used in rocket science. Initially, the rocket development process in the 1950s did not
go well. The complexity and difficulty of the task resulted in many catastrophic failures. Root
Cause Analysis (RCA) was used to investigate these failures but had inconclusive results. Rocket
failures are often explosive with no evidence of the root cause remaining.

Design FMEA provided the rocket scientists with a platform to prevent failure. A similar platform
is used today in many industries to identify risks, take counter measures and prevent failures.
DFMEA has had a profound impact, improving safety and performance on products we use every
day.

What is Design Failure Mode and Effects Analysis (DFMEA)

DFMEA is a methodical approach used for identifying potential risks introduced in a new or
changed design of a product/service.
The Design FMEA initially identifies design functions, failure modes and their effects on the
customer with corresponding severity ranking / danger of the effect. Then, causes and their
mechanisms of the failure mode are identified.
High probability causes, indicated by the occurrence ranking, may drive action to prevent or
reduce the cause’s impact on the failure mode.

The detection ranking highlights the ability of specific tests to confirm the failure mode / causes
are eliminated.
The DFMEA also tracks improvements through Risk Priority Number (RPN) reductions.
By comparing the before and after RPN, a history of improvement and risk mitigation can be
chronicled.
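Chronicling improvement through RPN reductions can be illustrated with a small sketch (hypothetical scores; RPN = severity × occurrence × detection, as in the PFMEA discussion):

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number for one failure mode (each score 1-10)."""
    return severity * occurrence * detection

# Hypothetical failure mode: a design change adds a test that improves
# detectability, lowering the detection score from 7 to 3
# (severity rarely changes; occurrence is unchanged here).
before = rpn(severity=9, occurrence=4, detection=7)
after = rpn(severity=9, occurrence=4, detection=3)
print(before, after, before - after)  # 252 108 144
```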
Why Perform Design Failure Mode and Effects Analysis (DFMEA)

Risk is the substitute for failure on new / changed designs. It is a good practice to identify risks
on a program as early as possible. Early risk identification provides the greatest opportunity for
verified mitigation prior to program launch.

Risks are identified on designs, which if left unattended, could result in failure. The DFMEA is
applied when:

• There is a new design with new content


• There is a current design with modifications, which also may include changes due to past
failure
• There is a current design being used in a new environment or change in duty cycle (no
physical change made to design)

STEPS TO CONDUCT DFMEA

Step 1 | Review the design


Use a blueprint or schematic of the design / product to identify each component and interface.

Reasons for the review:


• Help assure all team members are familiar with the product and its design.
• Identify each of the main components of the design and determine the function or functions
of those components and interfaces between them.
• Make sure you are studying all components defined in the scope of the DFMEA.
Use a print or schematic for the review:
Add Reference Numbers to each component and interface.

Try out a prototype or sample:


• Invite a subject matter expert to answer questions.
• Document the function(s) of each component and interface.
Step 2 | Brainstorm potential failure modes:
Review existing documentation and data for clues.

Consider potential failure modes for each component and interface:


• A potential failure mode represents any manner in which the product component could fail
to perform its intended function or functions.
• Remember that many components will have more than one failure mode. Document each
one. Do not leave out a potential failure mode because it rarely happens. Don’t take
shortcuts here; this is the time to be thorough.

Prepare for the brainstorming activity:


• Before you begin the brainstorming session, review documentation for clues about
potential failure modes.
• Use customer complaints, warranty reports, and reports that identify things that have gone
wrong, such as hold tag reports, scrap, damage, and rework, as inputs for the brainstorming
activity.
• Additionally, consider what may happen to the product under difficult usage conditions
and how the product might fail when it interacts with other products.

Step 3 | List potential effects of failure


There may be more than one effect for each failure.
The effect is related directly to the ability of that specific component to perform its intended
function.
• An effect is the impact a failure could make should it occur.
• Some failures will have an effect on customers; others on the environment, the process the
product will be made on, and even the product itself.
The effect should be stated in terms meaningful to product performance. If the effects are
defined in general terms, it will be difficult to identify (and reduce) true potential risks.
Step 4 | Assign Severity rankings
The Severity ranking is based on the severity of the consequences of failure. Assign a
severity ranking to each effect that has been identified.
• The severity ranking is an estimate of how serious an effect would be should it occur.
• To determine the severity, consider the impact the effect would have on the customer, on
downstream operations, or on the employees operating the process.
The severity ranking is based on a relative scale ranging from 1 to 10.
• A “10” means the effect has a dangerously high severity leading to a hazard without
warning.
• Conversely, a severity ranking of “1” means the severity is extremely low.
The severity ranking is based on the severity of the consequences of failure.

Step 5 | Assign Occurrence rankings


The Occurrence ranking is based on how frequently the cause of the failure is likely to
occur.
We need to know the potential cause to determine the occurrence ranking because, just like
the severity ranking is driven by the effect, the occurrence ranking is a function of the
cause.
• The occurrence ranking is based on the likelihood or frequency, that the cause (or
mechanism of failure) will occur.
• If we know the cause, we can better identify how frequently a specific mode of failure will
occur.
The occurrence ranking scale, like the severity ranking, is on a relative scale from 1 to 10.
• An occurrence ranking of “10” means the failure mode occurrence is very high; it happens
all of the time. Conversely, a “1” means the probability of occurrence is remote.
• See FMEA Checklists and Forms for an example DFMEA Occurrence Ranking Scale.
Your organization may need to customize the occurrence ranking scale to apply to different
levels or complexities of design. It is difficult to use the same scale for a modular design,
a complex design, and a custom design.
• Some organizations develop three different occurrence ranking options (time-based, event-
based, and piece-based) and select the option that applies to the design or product.
Step 6 | Assign Detection rankings
The Detection ranking is based on the chances the failure will be detected prior to the
customer finding it.
To assign detection rankings, consider the design or product-related controls already in
place for each failure mode and then assign a detection ranking to each control.
• Think of the detection ranking as an evaluation of the ability of the design controls to
prevent or detect the mechanism of failure.
• A detection ranking of “1” means the chance of detecting a failure is almost certain.
Conversely, a “10” means the detection of a failure or mechanism of failure is absolutely
uncertain.

Prevention controls are always preferred over detection controls.


• Prevention controls prevent the cause or mechanism of failure or the failure mode itself
from occurring; they generally impact the frequency of occurrence. Prevention controls
come in different forms and levels of effectiveness.
• Detection controls detect the cause, the mechanism of failure, or the failure mode itself
after the failure has occurred BUT before the product is released from the design stage.
To provide DFMEA teams with meaningful examples of Design Controls, consider adding
examples tied to the Detection Ranking scale for design related topics such as:
• Design Rules
• DFA/DFM (design for assembly and design for manufacturability) Issues
• Simulation and Verification Testing

Step 7 | Calculate the RPN


RPN = Severity * Occurrence * Detection
• The RPN is the Risk Priority Number.
• The RPN gives us a relative risk ranking. The higher the RPN, the higher the potential risk.
• The RPN is calculated by multiplying the three rankings together. Multiply the Severity
Ranking times the Occurrence Ranking times the Detection Ranking. Calculate the RPN
for each failure mode and effect.
• Editorial Note: The current FMEA Manual from AIAG suggests only calculating the RPN
for the highest effect ranking for each failure mode.
• We do not agree with this suggestion; we believe that if this suggestion is followed, it will
be too easy to miss the need for further improvement on a specific failure mode.
Since each of the three relative ranking scales ranges from 1 to 10, the RPN will always be
between 1 and 1000. The higher the RPN, the higher the relative risk. The RPN gives us
an excellent tool to prioritize focused improvement efforts.
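The calculation above can be sketched in a few lines of Python; the failure modes and the 1–10 rankings below are hypothetical values invented purely for illustration:

```python
# RPN = Severity x Occurrence x Detection, computed per failure mode and effect.
# All mode names and rankings here are hypothetical illustration values.
failure_modes = [
    {"mode": "Seal leaks",        "severity": 8, "occurrence": 4, "detection": 6},
    {"mode": "Housing cracks",    "severity": 9, "occurrence": 2, "detection": 3},
    {"mode": "Connector loosens", "severity": 5, "occurrence": 7, "detection": 8},
]

for fm in failure_modes:
    # Each scale runs 1-10, so every RPN falls between 1 and 1000.
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]
    print(f'{fm["mode"]}: RPN = {fm["rpn"]}')
```

Sorting the modes by RPN (here 280, 192, 54) gives the relative priority for focused improvement effort.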

Step 8 | Develop the action plan


Define who will do what by when. Taking action means reducing the RPN.
• The RPN can be reduced by lowering any of the three rankings (severity, occurrence, or
detection) individually or in combination with one another.
• A reduction in the Severity Ranking for a DFMEA is often the most difficult to attain. It
usually requires a design change.
• Reduction in the Occurrence Ranking is accomplished by removing or controlling the
potential causes or mechanisms of failure.
• A reduction in the Detection Ranking is accomplished by adding or improving prevention
or detection controls.

What is considered an acceptable RPN?


• The answer to that question depends on the organization.
• For example, an organization may decide any RPN above a maximum target of 200
presents an unacceptable risk and must be reduced. If so, then an action plan identifying
who will do what by when is needed.
There are many tools to aid the DFMEA team in reducing the relative risk of those failure
modes requiring action.
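For instance, screening failure modes against a 200-RPN target (the mode names and RPN values below are hypothetical) might be sketched as:

```python
# Flag failure modes whose RPN exceeds the organization's maximum acceptable
# target; each flagged mode needs a who/what/when action plan.
RPN_TARGET = 200  # hypothetical organizational limit, as in the example above

failure_modes = [
    {"mode": "Seal leaks",        "rpn": 192},
    {"mode": "Housing cracks",    "rpn": 54},
    {"mode": "Connector loosens", "rpn": 280},
]

# Highest-risk items first, so improvement effort is prioritized by RPN.
needs_action = sorted(
    (fm for fm in failure_modes if fm["rpn"] > RPN_TARGET),
    key=lambda fm: fm["rpn"],
    reverse=True,
)
for fm in needs_action:
    print(f'{fm["mode"]} (RPN {fm["rpn"]}) exceeds the target and requires action')
```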

Step 9 | Take action


Implement the improvements identified by your DFMEA team.
• The Action Plan outlines what steps are needed to implement the solution, who will do
them, and when they will be completed.
• A simple solution will only need a Simple Action Plan while a complex solution needs
more thorough planning and documentation.
• Most Action Plans identified during a DFMEA will be of the simple “who, what, & when”
category. Responsibilities and target completion dates for specific actions to be taken are
identified.
• Sometimes, the Action Plans can trigger a fairly large-scale project. If that happens,
conventional project management tools such as PERT Charts and Gantt Charts will be
needed to keep the Action Plan on track.

Step 10 | Calculate the resulting RPN


Re-evaluate each of the potential failures once improvements have been made and
determine their impact on the RPNs.
• This step in a DFMEA confirms the action plan had the desired results by calculating the
resulting RPN.
• To recalculate the RPN, reassess the severity, occurrence, and detection rankings for the
failure modes after the action plan has been completed.

PRODUCT RELIABILITY ANALYSIS

WHAT IS RELIABILITY?
Reliability is defined as the probability that a product, system, or service will perform its
intended function adequately for a specified period of time, or will operate in a defined
environment without failure.

The most important components of this definition must be clearly understood to fully know
how reliability in a product or service is established:

Probability: the likelihood of mission success


Intended function: for example, to light, cut, rotate, or heat
Satisfactory: perform according to a specification, with an acceptable degree of
compliance
Specific period of time: minutes, days, months, or number of cycles
Specified conditions: for example, temperature, speed, or pressure

Stated another way, reliability can be seen as:


• Probability of success
• Durability
• Dependability
• Quality over time
• Availability to perform a function

Common examples of product reliability statements or guarantees include:


Example 1
"This car is under warranty for 40,000 miles or 3 years, whichever comes first."
Example 2
"This mower has a lifetime guarantee."
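One common way to quantify such statements is the exponential reliability model R(t) = e^(−λt), which assumes a constant failure rate λ. Both the model choice and the failure-rate value below are standard textbook assumptions, not figures from these notes:

```python
import math

failure_rate = 1e-5   # hypothetical constant failure rate, per mile
mission = 40_000      # mission length in miles, matching the warranty example

# R(t) = exp(-lambda * t): probability of completing the warranty
# period without failure under the constant-failure-rate assumption.
reliability = math.exp(-failure_rate * mission)
print(f"Probability of completing {mission} miles without failure: {reliability:.3f}")
```

A longer mission or a higher failure rate drives R(t) down, which is why the "specified period of time" and "specified conditions" in the definition above matter.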

Design for Reliability


Engineers often talk about the importance of design for reliability (DfR) and the impact it has on
a product’s overall efficiencies and success. So, let’s take a look at DfR fundamentals and how
companies employ it to their best advantage.

What is DFR?
Essentially, DfR is a process that ensures a product, or system, performs a specified function
within a given environment over the expected lifetime.
DfR often occurs at the design stage — before physical prototyping — and is often part of an
overall design for excellence (DfX) strategy. But, as you’ll soon find out, the use of DfR can, and
should, be expanded.

When is DFR Used?

Performing comprehensive design reviews during product development is a proven method to
ensure a reliable product.
Most companies apply DfR at the design and development stage of a given project development
cycle.
However, this common practice comes too late in the development process.
Successful DfR requires the integration of product design and process planning into a cohesive,
interactive activity known as concurrent engineering.
Keep in mind: it is less expensive to design for reliability than to test for reliability.
When you are implementing reliability considerations in the concept feasibility stage, you are
making all your decisions down the line with reliability in mind.
Therefore, DfR is most effective in the concept feasibility stage.
DESIGN FOR SIX SIGMA

Introduction to Design for Six Sigma (DFSS)

In the current global marketplace, competition for products and services has never been higher.
Consumers have multiple choices for many very similar products. Therefore, many
manufacturing companies are continually striving to introduce completely new products or break
into new markets. Sometimes the products meet the consumer’s needs and expectations and
sometimes they don’t. The company will usually redesign the product, sometimes developing
and testing multiple iterations prior to re-introducing the product to market.

Multiple redesigns of a product are expensive and wasteful. It would be much more beneficial if
the product met the actual needs and expectations of the customer, with a higher level of product
quality the first time.

Design for Six Sigma (DFSS) focuses on performing additional work up front to assure you fully
understand the customer’s needs and expectations prior to design completion. DFSS requires
involvement by all stakeholders in every function. When following a DFSS methodology you
can achieve higher levels of quality for new products or processes.

What is Design for Six Sigma (DFSS)?

Design for Six Sigma (DFSS) is a different approach to new product or process development in
that there are multiple methodologies that can be utilized.

Traditional Six Sigma utilizes DMAIC or Define, Measure, Analyze, Improve and Control. This
methodology is most effective when used to improve a current process or make incremental
changes to a product design. In contrast, Design for Six Sigma is used primarily for the complete
re-design of a product or process. The methods, or steps, used for DFSS seem to vary according
to the business or organization implementing the process. Some examples are DMADV, DCCDI
and IDOV.
What all the methodologies seem to have in common is that they all focus on fully understanding
the needs of the customer and applying this information to the product and process design. The
DFSS team must be cross-functional to ensure that all aspects of the product are considered, from
market research through the design phase, process implementation and product launch.

With DFSS, the goal is to design products and processes while minimizing defects and variations
at their roots. The expectation for a process developed using DFSS is reportedly 4.5 sigma or
greater.

Why Implement Design for Six Sigma (DFSS)?

When your company designs a new product or process from the ground up it requires a sizable
amount of time and resources. Many products today are highly complex, providing multiple
opportunities for things to go wrong. If your design does not meet the customer’s actual wants
and expectations or your product does not provide the value the customer is willing to pay for,
the product sales will suffer. Redesigning products and processes is expensive and increases
your time to market. In contrast, by utilizing Design for Six Sigma methodologies, companies
have reduced their time to market by 25 to 40 percent while providing a high quality product that
meets the customer’s requirements. DFSS is a proactive approach to design with quantifiable
data and proven design tools that can improve your chances of success.

When to Implement Design for Six Sigma (DFSS)?


DFSS should be used when designing a completely new product or service. DFSS is intended for
use when you must replace a product instead of redesigning. When the current product or process
cannot be improved to meet customer requirements, it is time for replacement. The DFSS
methodologies are not meant to be applied to incremental changes in a process or design. DFSS
is used for prevention of quality issues. Utilize the DFSS approach and its methodologies when
your goal is to optimize your design to meet the customer’s actual wants and expectations,
shorten the time to market, provide a high level of initial product quality and succeed the first
time.
How to Implement Design for Six Sigma (DFSS)?

As previously mentioned, DFSS is more of an approach to product design rather than one
particular methodology. There are some fundamental characteristics that each of the
methodologies share. The DFSS project should involve a cross functional team from the entire
organization. It is a team effort that should be focused on the customer requirements and Critical
to Quality parameters (CTQs). The DFSS team should invest time studying and understanding
the issues with the existing systems prior to developing a new design. There are multiple
methodologies being used for implementation of DFSS. One of the most common techniques,
DMADV (Define, Measure, Analyze, Design, Verify), is detailed below.
Define

The Define stage should include the Project Charter, Communication Plan and Risk Assessment
/ Management Plan.

Measure

During the Measurement Phase, the project focus is on understanding customer needs and wants
and then translating them into measurable design requirements. The team should not only focus
on requirements or “Must Haves” but also on the “Would likes”, which are features or functions
that would excite the customer, something that would set your product apart from the
competition. The customer information may be obtained through various methods including:

• Customer surveys
• Dealer or site visits
• Warranty or customer service information
• Historical data
• Consumer Focus Groups

Analyze

In the Analyze Phase, the customer information should be captured and translated into
measurable design performance or functional requirements. The Parameter (P) Diagram is often
used to capture and translate this information. Those requirements should then be converted into
System, Sub-system and Component level design requirements. The Quality Function
Deployment (QFD) and Characteristic Matrix are effective tools for driving the needs of the
customer from the machine level down to component level requirements. The team should then
use the information to develop multiple concept level design options. Various assessment tools
like benchmarking or brainstorming can be used to evaluate how well each of the design
concepts meet customer and business requirements and their potential for success.
Then the team will evaluate the options and select a final design using decision- making tools
such as a Pugh Matrix or a similar method.
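A Pugh-matrix style concept screen can be sketched as follows; the criteria, concepts, and +1/0/−1 scores (better/same/worse than a reference "datum" design) are all hypothetical:

```python
# Each concept is scored against a reference (datum) design on every criterion:
# +1 = better than datum, 0 = same, -1 = worse. All values are illustrative.
criteria = ["cost", "durability", "ease of assembly", "weight"]
concepts = {
    "Concept A": [+1, 0, -1, +1],
    "Concept B": [0, +1, +1, 0],
    "Concept C": [-1, +1, 0, 0],
}

# Net score per concept; the highest net score is carried into detailed design.
totals = {name: sum(scores) for name, scores in concepts.items()}
best = max(totals, key=totals.get)
print(totals, "-> select", best)
```

A full Pugh analysis would typically also weight the criteria and iterate with a new datum; this sketch shows only the basic scoring and selection step.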
Design

When the DFSS team has selected a single concept-level design, it is time to begin the detailed
design work using 3D modeling, preliminary drawings, etc. The design team evaluates the
physical product and other considerations including, but not limited to, the following:
▪ Manufacturing process
▪ Equipment requirements
▪ Supporting technology
▪ Material selection
▪ Manufacturing location
▪ Packaging

Once the preliminary design is determined the team begins evaluation of the design using various
techniques, such as:
▪ Finite Element Analysis (FEA)
▪ Failure Modes and Effects Analysis (FMEA)
▪ Tolerance Stack Analysis
▪ Design Of Experiment (DOE)

Verify

During the Verify Phase, the team introduces the design of the product or process and performs
the validation testing to verify that it does meet customer and performance requirements. In
addition, the team should develop a detailed process map, process documentation and
instructions.
Often a Process FMEA is performed to evaluate the risk inherent in the process and address any
concerns prior to a build or test run. Usually a prototype or pilot build is conducted. A pilot build
can take the form of a limited product production run, service offering or possibly a test of a new
process.

The information or data collected during the prototype or pilot run is then used to improve the
design of the product or process prior to a full roll-out or product launch. When the project is
complete, the team ensures the process is ready to hand off to the business leaders and current
production teams. The team should provide all required process documentation and a Process
Control Plan.
UNITWISE
ASSIGNMENT
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
QUALITY MANAGEMENT (7CS6-60.1)

UNIT-I ASSIGNMENT

Question No.1
Define the term "quality". Explain the different ways in which quality can be defined.

Question No.2
What is management? Explain how quality management combines quality policy, planning,
assurance, control and improvement.

Question No.3
What is a Quality Management System (QMS)? How does it help an organization consistently
meet customer requirements and enhance their satisfaction?

Question No.4
Explain the four key components of quality management with suitable examples.
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
QUALITY MANAGEMENT (7CS6-60.1)

UNIT-II ASSIGNMENT

Question No.1
What is difference between direct and indirect addressing modes? Explain implied
mode of addressing also.

Question No.2
Explain the instruction format. What do you understand by instruction pipeline?

Question No.3
Define instruction pipeline and its problem. Explain pipeline speed up, efficiency and
throughput.

Question No.4
What is pipelining? What is maximum speed up that can be attained? Construct an
instruction pipeline. It is possible to attain maximum speed up in an instruction pipeline?
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
QUALITY MANAGEMENT (7CS6-60.1)

UNIT-III ASSIGNMENT

Question No.1
Explain the term leadership for Decision making and Strategic planning Communications?

Question No.2
What do you mean by a Quality Audit? Explain its procedure with the help of a diagram.

Question No.3
How is the International Standard ISO 9001 important for a Quality Management System?

Question No.4
Explain the DMAIC methodology. How is it similar to or different from the Deming cycle?

Question No.5
What are the benefits of ISO registration? Explain ISO 14000 in detail?
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
QUALITY MANAGEMENT (7CS6-60.1)

UNIT-IV ASSIGNMENT

Question No.1
How can we improve the quality of the Product? Explain in detail.

Question No.2
What is the methodology used behind Quality Function Deployment (QFD)?

Question No.3
Explain the term Robust Design in detail with the help of an example. Why is robust design
important?

Question No.4
What are the benefits or advantages of Quality Function Deployment?

Question No.5
How can we build a solid product strategy to improve product quality?
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
QUALITY MANAGEMENT (7CS6-60.1)

UNIT-V ASSIGNMENT

Question No.1
Briefly explain Design failure mode and effect analysis?

Question No.2
Describe the Lean Six Sigma approach to new product development.

Question No.3
How do you measure product reliability? Why is product reliability important?

Question No.4
How does Six Sigma play an important role in product development?

Question No.5
What is the role of product reliability analysis in Design Failure Mode and Effect Analysis?
