7CS6_QM_Notes
INTRODUCTION
➢ QUALITY
The word quality can be defined as:
• Fitness for use or purpose.
• Doing the right thing the first time.
• Doing the right thing at the right time.
• Finding out and knowing what the consumer wants.
• Features that meet consumer needs and give customer satisfaction.
• Freedom from deficiencies or defects.
• Conformance to standards.
• Value or worthiness for money, etc.
➢ MANAGEMENT
The process of dealing with or controlling things or people.
Management is the process of planning, decision making, organizing, leading,
motivating and controlling the human, financial, physical, and information
resources of an organization to reach its goals efficiently and effectively.
➢ QUALITY MANAGEMENT
Quality management is the act of overseeing all activities and tasks that must be
accomplished to maintain a desired level of excellence. This includes the determination
of a quality policy, creating and implementing quality planning and assurance, and
quality control and quality improvement.
Quality Planning – The process of identifying the quality standards relevant to the
project and deciding how to meet them.
Quality Control – The continuing effort to uphold a process’s integrity and reliability in
achieving an outcome.
• Customer focus:-
The primary focus of quality management is to meet customer requirements and to
strive to exceed customer expectations.
• Leadership:-
Leaders at all levels establish unity of purpose and direction and create conditions in
which people are engaged in achieving the organization’s quality objectives.
Leadership has to take up the necessary changes required for quality improvement
and encourage a sense of quality throughout the organization.
• Engagement of people:-
Competent, empowered and engaged people at all levels throughout the organization
are essential to enhance its capability to create and deliver value.
• Process approach:-
Consistent and predictable results are achieved more effectively and efficiently when
activities are understood and managed as interrelated processes that function as a
coherent system.
• Improvement:-
Successful organizations have an ongoing focus on improvement.
• Evidence based decision making:-
Decisions based on the analysis and evaluation of data and information are more
likely to produce desired results.
• Relationship management:-
For sustained success, an organization manages its relationships with interested
parties, such as suppliers and retailers.
COURSE OBJECTIVE
• The first is to introduce the students to the evolution and history of quality management.
• The aim of quality management is to ensure that all the organization’s stakeholders work
together to improve the company’s processes, products, services, and culture to achieve
the long-term success that stems from customer satisfaction.
• The process of quality management involves a collection of guidelines that are developed
by a team to ensure that the products and services that they produce are of the right
standards or fit for a specified purpose.
• To realize the importance and significance of quality.
• Quality management is focused not only on product and service quality, but also on the
means to achieve it.
Customers recognize that quality is an important attribute in products and services. Suppliers
recognize that quality can be an important differentiator between their own offerings and those
of competitors (quality differentiation is also called the quality gap). In the past two decades this
quality gap has been greatly reduced between competitive products and services. This is partly
due to the contracting (also called outsourcing) of manufacture to countries like China and India,
as well as the internationalization of trade and competition. These countries, among many others, have
raised their own standards of quality in order to meet international standards and customer
demands. The ISO 9000 series of standards is probably the best-known set of international
standards for quality management.
Customer satisfaction is the backbone of Quality Management. Setting up a million-dollar
company without taking care of the needs of customers will ultimately decrease its revenue. There
are many books available on quality management. Some themes have become more significant
including quality culture, the importance of knowledge management, and the role of leadership
in promoting and achieving high quality. Disciplines like systems thinking are bringing more
holistic approaches to quality so that people, process and products are considered together rather
than independent factors in quality management.
The influence of quality thinking has spread to non-traditional applications outside the
walls of manufacturing, extending into service sectors and into areas such as sales, marketing
and customer service.
PRODUCT QUALITY
“Product quality means to incorporate features that have a capacity to meet consumer needs
(wants) and gives customer satisfaction by improving products (goods) and making them free
from any deficiencies or defects.”
Product quality has two main types of characteristics, viz., measured characteristics and attribute characteristics.
• Measured characteristics
Measured characteristics include features like shape, size, color, strength, appearance, height,
weight, thickness, diameter, volume, fuel consumption, etc. of a product.
• Attribute characteristics
Attribute characteristics cover countable features that are checked and controlled, such as
defective pieces per batch, defects per item, number of mistakes per page, cracks in crockery,
double-threading in textile material, discoloring in garments, etc.
Based on this classification, we can divide products into good and bad.
So, product quality refers to the sum total of the goodness of a product.
(i) Quality of design: The product must be designed as per the consumers’ needs and
high-quality standards.
(ii) Quality conformance: The finished products must conform (match) to the product
design specifications.
(iii) Reliability: The products must be reliable or dependable. They must not easily
break down or become non-functional. They must also not require frequent repairs. They
must remain operational for a satisfactorily long time to be called reliable.
(iv) Safety: The finished product must be safe for use and/or handling. It must not harm
consumers in any way.
(v) Proper storage: The product must be packed and stored properly. Its quality must be
maintained until its expiry date.
For Consumers: Product quality is also very important for consumers. They are ready to
pay high prices, but in return, they expect best-quality products. If they are not satisfied
with the quality of a company's product, they will purchase from its competitors.
Nowadays, very good quality international products are available in the local market. So,
if domestic companies don't improve their products' quality, they will struggle to
survive in the market.
SERVICE QUALITY
Every customer has an ideal expectation of the service they want to receive when they go to a
restaurant or store. Service quality measures how well a service is delivered compared to
customer expectations. Businesses that meet or exceed expectations are considered to have high
service quality. Let's say you go to a fast food restaurant for dinner, where you can reasonably
expect to receive your food within five minutes of ordering. After you get your drink and find a
table, your order is called minutes earlier than you had expected! You would probably consider
this to be high service quality. There are five dimensions that customers consider when assessing
service quality. Let's discuss these dimensions in a little more detail.
Technical quality: What the customer receives as a result of interactions with the service firm
(e.g. a meal in a restaurant, a bed in a hotel)
Functional quality: How the customer receives the service; the expressive nature of the service
delivery (e.g. courtesy, attentiveness, promptness)
The technical quality is relatively objective and therefore easy to measure. However, difficulties
arise when trying to evaluate functional quality.
DIMENSIONS OF QUALITY
Eight dimensions of product quality management can be used at a strategic level to analyze
quality characteristics. The concept was defined by David A. Garvin, formerly C. Roland
Christensen Professor of Business Administration at Harvard Business School (died 30 April
2017). Garvin was posthumously honored with the prestigious award for 'Outstanding
Contribution to the Case Method' on March 4, 2018.
Some of the dimensions are mutually reinforcing, whereas others are not—improvement in
one may be at the expense of others. Understanding the trade-offs desired by customers among
these dimensions can help build a competitive advantage.
➢ Performance: Performance refers to a product’s primary operating characteristics.
➢ Features: Features are additional characteristics that enhance the appeal of the product
or service to the user.
➢ Reliability: Reliability is the likelihood that a product will not fail within a specific time
period. This is a key element for users who need the product to work without fail.
➢ Conformance: Conformance is the precision with which the product or service meets
the specified standards.
➢ Durability: Durability measures the length of a product’s life. When the product can be
repaired, estimating durability is more complicated. The item will be used until it is no
longer economical to operate it. This happens when the repair rate and the associated
costs increase significantly.
➢ Serviceability: Serviceability is the speed with which the product can be put into service
when it breaks down, as well as the competence and the behavior of the service person.
➢ Aesthetics: Aesthetics is the subjective dimension indicating the kind of response a user
has to a product. It represents the individual’s personal preference.
➢ Perceived Quality: Perceived Quality is the quality attributed to a good or service based
on indirect measures.
COST OF QUALITY
Cost of Quality is a methodology used to define and measure where and what amount of an
organization’s resources are being used for prevention activities and maintaining product quality
as opposed to the costs resulting from internal and external failures. The Cost of Quality can be
represented by the sum of two factors: the Cost of Good Quality plus the Cost of Poor Quality,
as represented in the basic equation below:
Cost of Quality = Cost of Good Quality + Cost of Poor Quality
The Cost of Quality equation looks simple but in reality it is more complex. The Cost of Quality
includes all costs associated with the quality of a product from preventive costs intended to reduce
or eliminate failures, cost of process controls to maintain quality levels and the costs related to
failures both internal and external
The methods for calculating Cost of Quality vary from company to company. In many cases,
organizations like the one described in the previous example, determine the Cost of Quality by
calculating total warranty dollars as a percentage of sales. Unfortunately, this method is only
looking externally at the Cost of Quality and not looking internally. In order to gain a better
understanding, a more comprehensive look at all quality costs is required.
The Cost of Quality can be divided into four categories:
1. Prevention
2. Appraisal
3. Internal Failure
4. External Failure
Within each of the four categories there are numerous possible sources of cost related to good or
poor quality.
The Cost of Good Quality (CoGQ)
Prevention Costs – costs incurred from activities intended to keep failures to a minimum, such
as quality planning, training, and preventive equipment maintenance.
Appraisal Costs – costs incurred to maintain acceptable quality levels through measurement and
inspection, such as incoming material inspection, process controls, and quality audits.
The Cost of Poor Quality (CoPQ)
Internal Failure Costs – costs associated with defects found before the customer receives the
product or service. These can include, but are not limited to, the following:
• Excessive Scrap
• Product Re-work
• Waste due to poorly designed processes
• Machine breakdown due to improper maintenance
• Costs associated with failure analysis
External Failure Costs – costs associated with defects found after the customer receives the
product or service, for example warranty claims, service and replacement costs, and the handling
of customer complaints and returns.
These four categories can now be applied to the original Cost of Quality equation. Our original
equation stated that the Cost of Quality is the sum of Cost of Good Quality and Cost of Poor
Quality. This is still true however the basic equation can be expanded by applying the categories
within both the Cost of Good Quality and the Cost of Poor Quality.
The Cost of Good Quality is the sum of Prevention Cost and Appraisal Cost
(CoGQ = PC + AC)
The Cost of Poor Quality is the sum of Internal and External Failure Costs
(CoPQ = IFC + EFC)
By combining the equations, Cost of Quality can be more accurately defined, as shown in
the equation below:
CoQ = (PC + AC) + (IFC + EFC)
One important factor to note is that the Cost of Quality equation is nonlinear. Investing in
the Cost of Good Quality does not necessarily mean that the overall Cost of Quality will
increase. In fact, when the resources are invested in the right areas, the Cost of Quality
should decrease. When failures are prevented / detected prior to leaving the facility and
reaching the customer, Cost of Poor Quality will be reduced.
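The expanded equation lends itself to a direct calculation. Below is a minimal sketch in Python; the four cost figures are hypothetical illustration values, not data from the text.

```python
# Minimal sketch of the expanded Cost of Quality equation:
# CoQ = (PC + AC) + (IFC + EFC). All cost figures are hypothetical.

def cost_of_quality(prevention, appraisal, internal_failure, external_failure):
    """Return (CoGQ, CoPQ, CoQ) from the four quality cost categories."""
    cogq = prevention + appraisal                 # Cost of Good Quality
    copq = internal_failure + external_failure    # Cost of Poor Quality
    return cogq, copq, cogq + copq

cogq, copq, coq = cost_of_quality(prevention=20_000, appraisal=15_000,
                                  internal_failure=40_000, external_failure=60_000)
print(f"CoGQ = {cogq}, CoPQ = {copq}, CoQ = {coq}")
```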
Deming opined that by embracing certain principles of management, organizations can
improve the quality of the product and concurrently reduce costs. Reduction of costs would
include the reduction of waste production, reducing staff attrition and litigation while
simultaneously increasing customer loyalty. The key, in Deming’s opinion, was to practice
constant improvement and to imagine the manufacturing process as a seamless whole,
rather than as a system made up of incongruent parts.
In the 1970s, some of Deming's Japanese proponents summarized his philosophy in a two-
part comparison: when organizations focus primarily on quality, defined by the equation
‘Quality = Results of work efforts / Total costs’, quality tends to improve and costs fall
over time; when organizations focus primarily on costs, costs tend to rise while quality
drops over time.
Also known as the Shewhart Cycle, the Deming Cycle, often called PDCA (Plan-Do-Check-Act),
was a result of the need to link the manufacture of products with the needs of the consumer,
along with focusing departmental resources in a collegial effort to meet those needs.
• Plan: Design a consumer research methodology that will inform business process
components.
• Do: Implement the plan to measure its performance.
• Check: Check the measurements and report the findings to the decision-makers.
• Act/Adjust: Draw a conclusion on the changes that need to be made and implement them.
The 14 Points for Management
Deming’s other chief contribution came in the form of his 14 Points for Management,
which consists of a set of guidelines for managers looking to transform business
effectiveness.
The 7 Deadly Diseases for Management defined by Deming are the most serious and fatal
barriers that managements face, in attempting to increase effectiveness and
institute continual improvement.
Born in 1904, Joseph Juran was a Romanian-born American engineer and management
consultant of the 20th century, and a missionary for quality and quality management. Like
Deming, Juran's philosophy also took root in Japan. He stressed the importance of a
broad, organizational-level approach to quality, stating that total quality management
begins at the highest position in management and continues all the way to the
bottom.
In 1941, Juran was introduced to the work of Vilfredo Pareto. He studied the Pareto
principle (the 80-20 law), which states that, for many events, roughly 80% of the effects
follow from 20% of the causes, and applied the concept to quality issues. Thus, according
to Juran, 80% of the problems in an organization are caused by 20% of the causes. This is
also known as the rule of the "Vital Few and the Trivial Many". Juran, in his later years,
preferred "the Vital Few and the Useful Many" suggesting that the remaining 80% of the
causes must not be completely ignored.
The primary focus of every business, during Juran's time, was the quality of the end
product, which is what Deming stressed upon. Juran shifted track to focus instead on the
human dimension of Quality management. He laid emphasis on the importance of
educating and training managers. For Juran, the root cause of quality issues was the
resistance to change, and human relations problems.
The Juran Quality Trilogy
One of the first to write about the cost of poor quality, Juran developed an approach for
cross-functional management that comprises three managerial processes:
1. Quality Planning:
This is a process that involves creating awareness of the necessity to improve, setting
certain goals and planning ways to reach those goals. This process has its roots in the
management’s commitment to planned change that requires trained and qualified staff.
2. Quality Control:
This is a process to develop the methods to test the products for their quality. Deviation
from the standard will require change and improvement.
3. Quality Improvement:
This is a process that involves the constant drive to perfection. Quality improvements need
to be continuously introduced. Problems must be diagnosed to the root causes to develop
solutions. The Management must analyze the processes and the systems and report back
with recognition and praise when things are done right.
Juran also introduced the Three Basic Steps to Progress, which, in his opinion, companies
must implement if they are to achieve high quality.
1. Accomplish improvements that are structured on a regular basis with commitment and a
sense of urgency.
2. Establish an extensive training program.
3. Establish commitment and leadership on the part of higher management.
Juran devised ten steps for organizations to follow to attain better quality.
• Establish awareness of the need to improve and the opportunities for improvement.
• Set goals for improvement.
• Organize to meet the goals that have been set.
• Provide training.
• Implement projects aimed at solving problems.
• Report progress.
• Give recognition.
• Communicate results.
• Keep score.
• Maintain momentum by building improvement into the company's regular systems.
UNIT-II
PROCESS QUALITY MANAGEMENT
Software development requires a complex web of sequential and parallel steps. As the scale
of the project increases, more steps must be included to manage the complexity of the
project. All processes consist of product activities and overhead activities. Product
activities result in tangible progress toward the end product. Overhead activities have an
intangible impact on the end product, and are required for the many planning, management,
and assessment tasks.
To some degree, adhering to a process and achieving high process quality overlaps
with the quality of the artifacts. That is, if the process is adhered to (high quality), the risk of
producing poor quality artifacts is reduced. However, the opposite is not always true—
generating high quality artifacts is not necessarily an indication that the process has been
adhered to.
Therefore, process quality is measured not only by the degree to which the process was
adhered to, but also by the degree of quality achieved in the products produced by the process.
To aid in your evaluation of process and product quality, the Rational Unified Process
(RUP) includes pages such as:
1. Activity: a description of the activity to be performed and the steps required to perform the
activity.
2. Work Guideline: techniques and practical advice useful for performing the activity.
3. Artifact Guidelines and Checkpoints: information on how to develop, evaluate, and use
the artifact.
4. Templates: models or prototypes of the artifact that provide structure and guidance for
content.
DRIVE is an approach to problem solving and analysis that can be used as part of process
improvement.
Define:
Define the scope of the problem and the criteria by which success will be measured, and agree
the deliverables and success factors.
Review:
Review the current situation, understand the background, identify and collect information
(including performance), and identify problem areas, improvements and “quick wins”.
Identify:
Identify improvements or solutions to the problem, and the changes required to enable and
sustain the improvements.
Verify:
Check that the improvements will bring about benefits that meet the defined success criteria,
and prioritize and pilot the improvements.
Execute:
Plan the implementation of the solutions and improvements, agree and implement them, then
plan a review, gather feedback and review.
PROCESS MAPPING
One of the initial steps to understand or improve a process is Process Mapping. By gathering
information we can construct a “dynamic” model - a picture of the activities that take place in a
process. Process maps are useful communication tools that help improvement teams understand the
process and identify opportunities for improvement.
ICOR (inputs, outputs, controls and resources) is an internationally accepted process analysis
methodology for process mapping. It allows processes to be broken down into simple, manageable
and more easily understandable units. The maps define the inputs, outputs, controls and resources for
both the high level process and the sub-processes.
Process mapping provides a common framework, discipline and language, allowing a systematic
way of working. Complex interactions can be represented in a logical, highly visible and objective
way. It defines where issues or “pinch points” exist and provides improvement teams with a
common decision making framework.
Another tool used in the construction of process maps is Process Flowcharting. This is a
powerful technique for recording, in the form of a picture, exactly what is done in a process.
There are certain standard symbols used in classic flowcharts; the most common are the oval or
rounded box (start/end), the rectangle (process step), the diamond (decision), the parallelogram
(input/output), and arrows (flow of control).
If a flowchart cannot be drawn using these symbols, then the process is not fully understood. The
purpose of the flowchart is to learn why the current process operates the way it does and to
conduct an objective analysis, to identify problems and weaknesses, unnecessary steps or
duplication, and the objectives of the improvement effort.
Force Field Analysis is a technique for identifying forces which may help or hinder achieving a
change or improvement. By assessing the forces that prevent making the change, plans can be
developed to overcome them. It is also important to identify those forces that will help with the
change. Once these forces have been identified and analyzed, it is possible to determine if a
proposed change is viable.
A useful way of mapping the inputs that affect quality is the Cause & Effect Diagram, also
known as the Fishbone or Ishikawa Diagram. It is also a useful technique for opening up
thinking in problem solving.
The effect or problem being investigated is shown at the end of a horizontal arrow; potential
causes are then shown as labeled arrows entering the main cause arrow. Each arrow may have
other arrows entering it as the principal causes or factors are reduced to their sub- causes;
brainstorming can be effectively used to generate the causes and sub-causes.
With CEDAC – Cause and Effect Diagram with the Addition of Cards, the effect side of the
diagram is a quantified description of the problem, and the cause side of the diagram uses two
different colored cards for writing the facts and the ideas.
The facts are gathered and written on the left of the spines, and the ideas for improvement on the
right of the cause spines. The ideas are evaluated and selected for substance and practicality.
BRAINSTORMING
Brainstorming can be used in conjunction with the Cause and Effect tool. It is a group technique
used to generate a large number of ideas quickly and may be used in a variety of situations. Each
member of the group, in turn, can put forward an idea concerning the problem being considered.
Wild ideas are welcomed and no criticism or evaluation occurs during brainstorming, all ideas
being recorded for subsequent analysis. The process continues until no further ideas are
forthcoming, and it increases the chance for originality and innovation.
Pareto Analysis can be used to analyze the ideas from a brainstorming session. It is used to
identify the vital few problems or causes of problems that have the greatest impact. A Pareto
diagram or chart pictorially represents data in the form of a ranked bar chart that shows the
frequency of occurrence of items in descending order.
Usually, Pareto diagrams reveal that 80% of the effect is attributed to 20% of the causes; hence, it
is sometimes known as the 80/20 rule.
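A Pareto analysis can be sketched in a few lines of Python: rank the causes by frequency, accumulate their share of the total, and mark the vital few that account for roughly 80% of the effect. The defect categories and counts below are hypothetical.

```python
# Minimal sketch of a Pareto analysis with hypothetical defect counts.
defect_counts = {"scratches": 95, "misalignment": 60, "discoloring": 25,
                 "cracks": 12, "double-threading": 8}

ranked = sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True)
total = sum(defect_counts.values())

cumulative = 0
for cause, count in ranked:
    cumulative += count
    pct = 100 * cumulative / total
    flag = "  <- vital few" if pct <= 80 else ""
    print(f"{cause:16s} {count:4d}  cumulative {pct:5.1f}%{flag}")
```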
Statistical Process Control (SPC) is a toolkit for managing processes. It is also a strategy for
reducing the variability in products, deliveries, materials, equipment, attitudes and processes,
which are the cause of most quality problems. SPC will reveal whether a process is “in control” –
stable and exhibiting only random variation – or “out of control” and needing attention.
In SPC, numbers and information form the basis for decisions and actions, and a thorough data
recording system is essential. In addition to the tools necessary for recording the data, there also
exists a set of tools to analyze and interpret the data, some of which are covered in the following
pages. An understanding of the tools and how to use them requires no prior knowledge of statistics.
One of the key tools of SPC is a Control Chart. It is used to monitor processes that are in control,
using means and ranges. It represents data, e.g., sales, volume, customer complaints, in chronological
order, showing how the values change with time. In a control chart each point is given individual
significance and is joined to its neighbors. Above and below the mean, Upper and Lower Warning
and Action lines (UWL, LWL, UAL, and LAL) are drawn. These act as signals or decision rules, and
give operators information about the process and its state of control. The charts are useful as a
historical record of the process as it happens, and as an aid to detecting and predicting change.
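As a minimal sketch, the warning and action lines can be computed from the sample mean and standard deviation; the common convention of warning lines near two standard deviations and action lines near three is assumed here, and the measurements are hypothetical.

```python
import statistics

# Hypothetical process measurements in chronological order.
data = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.4, 9.7, 10.0, 10.1]

mean = statistics.mean(data)
sigma = statistics.stdev(data)

uwl, lwl = mean + 2 * sigma, mean - 2 * sigma   # Upper/Lower Warning Lines
ual, lal = mean + 3 * sigma, mean - 3 * sigma   # Upper/Lower Action Lines

for i, x in enumerate(data, start=1):
    if not lal <= x <= ual:
        state = "beyond action lines"
    elif not lwl <= x <= uwl:
        state = "beyond warning lines"
    else:
        state = "in control"
    print(f"point {i}: {x:5.2f} -> {state}")
```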
A Check Sheet is an organized way of collecting and structuring data; its purpose is to collect
the facts in the most efficient way. It ensures that the information that is collected is what was
asked for and that everyone is doing it the same way. Data is collected and ordered by adding
tally or check marks against predetermined categories of items or measurements. It simplifies the
task of analysis.
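In software terms, a check sheet is simply a tally against predetermined categories. A minimal sketch using Python's collections.Counter, with hypothetical observations:

```python
from collections import Counter

# Hypothetical defect observations recorded against predetermined categories.
observations = ["scratch", "dent", "scratch", "stain", "scratch", "dent"]

tally = Counter(observations)
for category, count in tally.most_common():
    print(f"{category:8s} {'|' * count}  ({count})")   # tally marks plus total
```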
Bar Charts are visual displays of data in which the height of the bars is used to show the relative
size of the quantity measured. The bars can be separated to show that the data is not directly
related or continuous. They can be used to give visual impact to data, compare different types of
data and compare data collected at different times.
A Scatter Diagram is a graphical representation of how one variable changes with respect to
another. The variables are plotted on axes at right angles to each other and the scatter in the
points gives a measure of confidence in any correlation shown. Scatter diagrams show whether
two variables are related or not, the type of relationship, if any, between the variables, and how
one variable might be controlled by suitably controlling the other. They can also be used to
predict values lying outside the measured range.
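The strength of the relationship a scatter diagram shows can be quantified with the Pearson correlation coefficient. A minimal sketch with hypothetical paired data:

```python
import numpy as np

# Hypothetical paired data: a controlled variable and an observed variable.
temperature = np.array([20, 25, 30, 35, 40, 45])
cure_time = np.array([62, 55, 49, 46, 41, 37])

r = np.corrcoef(temperature, cure_time)[0, 1]   # Pearson correlation coefficient
print(f"r = {r:.3f}")   # near -1: strong negative relationship
```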
In its simplest form, Matrix Analysis is a way of presenting data in a rectangular grid,
with data displayed along the top and down the side.
Symbols placed at the intersections of the grid enable relationships to be established between
the two sets of data. It summarizes all the known data in one table and highlights gaps in
knowledge and relationships between items. It is a valuable attention focusing tool for teams,
and simplifies the task of priority ranking a set of items.
The Dot Plot or Tally Chart is a frequency distribution. It shows how often (the frequency) a
particular value has occurred. The shape of the plot can reveal a great deal about a process,
giving a picture of the variation, highlighting unusual values and indicating the probability of
particular values occurring.
A Histogram is a picture of variation or distribution, where data has been grouped into cells
and their frequency represented as bars. It is convenient for large amounts of data, particularly
when the range is wide. It gives a picture of the extent of variation, highlights unusual areas and
indicates the probability of particular values occurring.
With such a shopping list of tools and techniques, it may not be easy to know which one to use
when. To overcome this problem, a matrix can be drawn that maps the six-step methodology for
process improvement to the key tools and techniques that could be used in each
step. However, this list is not exhaustive and the tools should be used in conjunction with
measurement.
GRAPHICAL DATA REPRESENTATION
Graphing the data can be utilized for both historical data already available and when
analyzing the data resulting from live data collection activities. Of course, you need to pick
the right graphical tool as there are a lot of different ways to plot your data. A number of
commonly used graphical tools will be covered here. However, note that if one graph fails
to reveal anything useful, try another one.
A long list of data is usually not practical for conveying information about a process. One
of the best ways to analyze problems in any process is to plot the data and see what it is
telling you. This is often recommended as a starting point in any data analysis during the
problem-solving process. A wide range of graphical tools are available which can generate
graphs quickly and easily such as Minitab and Microsoft Excel.
Different graphs can reveal different characteristics of your data such as the central
tendency, the dispersion and the general shape of the distribution. Graphical analysis
allows you to learn quickly about the nature of the process, enables clarity of communication
and provides focus for further analysis. It is an important tool for understanding sources of
variation in the data and thereby helping to better understand the process and where root
causes might be. Conclusions drawn from the graphical analysis may require verification
through further advanced statistical techniques such as significance testing and
experimentation.
Line Charts:
Line Charts are the simplest forms of charts and often used to monitor and track data over
time. They are useful for showing trends in quality, cost or other process performance
measures. A line chart represents the data by connecting the data points by straight lines to
highlight trends in the data. A standard or a goal line may also be drawn to verify actual
performance against identified targets. Line charts are the most preferred format to display
time series data. Time series plots, run charts, SPC charts and radar charts are all line charts.
Time Series Plots are line charts that are used to evaluate behavior in data over a time
interval. They can be used to determine if a process is stable by visually spotting trends,
patterns or shift in the data. If any of these are observed, then we can say that the process
is probably unstable. More advanced charts for assessing the stability of a process over
time are run charts and SPC charts.
Time Series Analysis:
A time series plot requires the data to be in the order in which it actually occurred and the
data collection frequency to be constant. Time Series Analysis is the analysis of the plotted
data in order to get meaningful information out of it. Different behaviors of the data can be
observed such as upward and downward trends, shifts in the mean and changes in the
amount of variation, patterns and cycles, or anything not random. Time Series Forecasting
is the use of a model to predict future values based on previously observed values.
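One simple form of time series forecasting is fitting a linear trend to the observed values and extrapolating it one period ahead. A minimal sketch with hypothetical monthly data:

```python
import numpy as np

# Hypothetical monthly values in the order they occurred.
periods = np.arange(1, 13)
values = np.array([52, 54, 53, 57, 58, 60, 59, 63, 64, 66, 68, 69])

slope, intercept = np.polyfit(periods, values, deg=1)   # least-squares trend line
forecast = slope * 13 + intercept                       # extrapolate to period 13
print(f"trend: {slope:.2f} per period; forecast for period 13: {forecast:.1f}")
```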
Pie Charts:
Pie Charts make it easy to compare proportions. They are widely used in the
business and media worlds for their simplicity and ease of interpretation. They display the
proportion of each category relative to the whole data set representing each as a slice of the
pie. The percentage represented by each category is usually provided near to the
corresponding slice of the pie.
Bar charts:
Bar charts are ways of displaying frequency of occurrence of attribute data. They focus
on the absolute value of the data while a pie chart focuses on the relative value of the data.
The bar height indicates the number of times a particular characteristic was observed. The
bars on the chart may be arranged in any order and are presented either horizontally or
vertically to show comparisons among categories. When a bar chart presents the categories
in descending order of frequency, it is called a Pareto Chart.
The 7 quality tools were first conceptualized by Kaoru Ishikawa, a professor of engineering
at the University of Tokyo. They can be used for controlling and managing quality in any
organization.
The 7 basic quality tools are, essentially, graphical techniques used to identify & fix issues
related to product or process quality.
The 7 basic quality tools are as follows:
• Flow Chart
• Histogram
• Cause-and-Effect Diagram
• Check Sheet
• Scatter Diagram
• Control Charts
• Pareto Charts
Flow charts: Flow charts are one of the best process improvement tools you can use to
analyze a series of events. They map out these events to illustrate a complex process in
order to find any commonalities among the events. They’re also one of the most common
methods of creating a workflow diagram.
Flow charts can be used in any field to break down complex processes in a way that is easy
to understand. Then, you can go through the business processes one by one, identifying
areas for improvement.
Histogram: A histogram is a chart with columns that represent the frequency distribution of a
set of measurements. If the distribution is normal, the graph will have a bell-shaped curve.
If it is abnormal, it can take different shapes based on the condition of the distribution.
A histogram displays the distribution of a single variable, with the values grouped into cells,
and reveals its center, spread and shape.
Scatter Diagram: Scatter diagrams are the best way to represent the value of two different
variables. They present the relationship between the different variables and illustrate the
results on a Cartesian plane. Then further analysis can be done on the values.
Control Charts: A control chart is a good tool for monitoring performance and can be
used to monitor any process that relates to the function of an organization. These charts
allow you to identify the stability and predictability of the process and identify common
causes of variation.
Pareto Charts: Pareto charts are charts that contain bars and a line graph. The values are
shown in descending order by the bars and the cumulative total is represented by the line. They
can be used to identify a set of priorities so you can determine which parameters have the
biggest impact on the specific area of concern.
PROCESS CAPABILITY ANALYSIS
Process capability analysis is a set of tools used to find out how well a given process meets a set
of specification limits. In other words, it measures how well a process performs.
An important technique used to determine how well a process meets a set of specification limits
is called a process capability analysis. A capability analysis is based on a sample of data taken
from a process and usually produces:
• An estimate of the DPMO (defects per million opportunities).
• One or more capability indices.
• An estimate of the Sigma Quality Level at which the process operates.
Statistical packages such as STATGRAPHICS provide capability analyses based on indices such
as the following:
✓ Cp stands for process capability, and is a simple measure of the capability of a process. It
tells us how much potential the system has of meeting both upper and lower specification
limits. Its weak point is that, in focusing on the data spread, it ignores the averages; so if
the system being tested isn’t centered between the specification limits it may (when used
alone) give misleading impressions. The narrower the spread of a systems output is, the
greater the Cp value. You can test how centered a system is by comparing Cp to Cpk. If a
process is centered on its target, these two will be equal. The larger the difference between
Cpk and Cp the more off-center your process is.
✓ Cpk stands for process capability index and refers to the capability a particular process has
of achieving output within certain specifications. In manufacturing, it describes the ability
of a manufacturer to produce a product that meets the consumers’ expectations, within a
tolerance zone. If Cpk is more than 1, the system has the potential to perform as well as
required. The equation for Cpk is [minimum(mean – LSL, USL – mean)] / (0.5 × NT),
where NT stands for natural tolerance (six standard deviations), LSL for lower specification
limit and USL for upper specification limit.
✓ Pp stands for process performance. It is much the same as Cp, but unlike Cp it measures
actual performance rather than potential. Like Cp, it measures spread, and is subject to the
same weaknesses.
✓ Ppk stands for process performance index. Like Pp, it measures actual performance rather
than potential. A Ppk between 0 and 1 indicates that not all of the process’s outputs are
meeting specifications. If Ppk is 1, 99.73% of your system’s output is within the
specifications. The percentage 99.73% comes from the normal distribution curve, where
99.73% of results fall within -3 and 3 standard deviations from the mean.
These numbers, and a histogram representing them, are usually produced in a process capability
analysis report.
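A minimal sketch of the capability calculations described above, with hypothetical data and specification limits. Note one simplification: a single overall estimate of sigma is used, which strictly corresponds to Pp/Ppk; Cp/Cpk would normally use a within-subgroup estimate of sigma.

```python
import statistics

# Hypothetical measurements and specification limits.
data = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.4, 9.7, 10.0, 10.1]
lsl, usl = 9.0, 11.0

mean = statistics.mean(data)
sigma = statistics.stdev(data)   # overall spread estimate (see note above)

cp = (usl - lsl) / (6 * sigma)                    # potential capability
cpk = min(mean - lsl, usl - mean) / (3 * sigma)   # capability, allowing for centering
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
# A large gap between Cp and Cpk indicates an off-center process.
```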
Factors that can influence a process capability study include:
• Process – test method, specification
• Personnel – the operators, their skill level, training, etc.
• Tools / Equipment – gages, fixtures, test equipment used and their associated calibration systems
• Items to be measured – the part or material samples measured, the sampling plan, etc.
• Environmental factors – temperature, humidity, etc.
ANALYSIS OF VARIANCE(ANOVA)
Analysis of Variance (ANOVA) is a collection of statistical models and their associated
estimation procedures (such as the “variation” among and between groups) used to analyze the
differences among group means. It was developed by the statistician Ronald Fisher. The observed
variance in a particular variable is partitioned into components attributable to different sources
of variation. In its simplest form, ANOVA provides a statistical test of whether two or more
population means are equal, and therefore generalizes the t-test beyond two means.
Classes of models:
▪ Fixed-effects model
The fixed-effects model (class I) of analysis of variance applies to situations in which the
experimenter applies one or more treatments to the subjects of the experiment to see whether the
response variable values change. This allows the experimenter to estimate the ranges of response
variable values that the treatment would generate in the population as a whole.
▪ Random-effects models
Random-effects model (class II) is used when the treatments are not fixed. This occurs when the
various factor levels are sampled from a larger population. Because the levels themselves are
random variables, some assumptions and the method of contrasting the treatments (a multi-
variable generalization of simple differences) differ from the fixed-effects model.
▪ Mixed-effects models
A mixed-effects model (class III) contains experimental factors of both fixed and random-effects
types, with appropriately different interpretations and analysis for the two types.
Example: Teaching experiments could be performed by a college or university department to
find a good introductory textbook, with each text considered a treatment. The fixed-effects model
would compare a list of candidate texts. The random-effects model would determine whether
important differences exist among a list of randomly selected texts. The mixed-effects model
would compare the (fixed) incumbent texts to randomly selected alternatives.
Defining fixed and random effects has proven elusive, with competing definitions arguably
leading toward a linguistic quagmire.
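A minimal sketch of the simplest (fixed-effects, one-way) case, using scipy's F-test to compare three treatment groups; the measurements are hypothetical.

```python
from scipy import stats

# Hypothetical measurements from three treatment groups.
group_a = [24, 26, 25, 27, 23]
group_b = [30, 29, 31, 32, 28]
group_c = [25, 27, 26, 24, 26]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests at least one group mean differs from the others.
```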
Design of experiments (DOE) is defined as a branch of applied statistics that deals with planning,
conducting, analyzing, and interpreting controlled tests to evaluate the factors that control the
value of a parameter or group of parameters. DOE is a powerful data collection and analysis tool
that can be used in a variety of experimental situations.
It allows for multiple input factors to be manipulated, determining their effect on a desired output
(response). By manipulating multiple inputs at the same time, DOE can identify important
interactions that may be missed when experimenting with one factor at a time. All possible
combinations can be investigated (full factorial) or only a portion of the possible combinations
(fractional factorial).
A strategically planned and executed experiment may provide a great deal of information about
the effect on a response variable due to one or more factors. Many experiments involve holding
certain factors constant and altering the levels of another variable. This "one factor at a time"
(OFAT) approach to process knowledge is, however, inefficient when compared with changing
factor levels simultaneously.
Many of the current statistical approaches to designed experiments originate from the work of
R. A. Fisher in the early part of the 20th century. Fisher demonstrated how taking the time to
seriously consider the design and execution of an experiment before trying it helped avoid
frequently encountered problems in analysis. Key concepts in creating a designed experiment
include blocking, randomization, and replication.
Blocking: When randomizing a factor is impossible or too costly, blocking lets you restrict
randomization by carrying out all of the trials with one setting of the factor and then all the trials
with the other setting.
Randomization: Refers to the order in which the trials of an experiment are performed. A
randomized sequence helps eliminate effects of unknown or uncontrolled variables.
Use DOE when more than one input factor is suspected of influencing an output. For example, it
may be desirable to understand the effect of temperature and pressure on the strength of a glue
bond.
DOE can also be used to confirm suspected input/output relationships and to develop a predictive
equation suitable for performing what-if analysis.
Setting up a DOE starts with a process map. ASQ has created a design of experiments template
(Excel) available for free download and use. Begin your DOE with three steps:
1. Acquire a full understanding of the inputs and outputs being investigated. A process flowchart
or process map can be helpful. Consult with subject matter experts as necessary.
2. Determine the appropriate measure for the output. A variable measure is preferable. Attribute
measures (pass/fail) should be avoided. Ensure the measurement system is stable and repeatable.
3. Create a design matrix for the factors being investigated. The design matrix will show all
possible combinations of high and low levels for each input factor. These high and low levels
can be coded as +1 and -1. For example, a 2-factor experiment will require 4 experimental runs:
Run             Factor A   Factor B
Experiment #1      -1         -1
Experiment #2      -1         +1
Experiment #3      +1         -1
Experiment #4      +1         +1
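The coded design matrix can be generated for any number of factors with a Cartesian product of the levels. A minimal sketch, with hypothetical factor names:

```python
from itertools import product

# Hypothetical factors; each takes a low (-1) and a high (+1) coded level.
factors = ["temperature", "pressure"]
levels = [-1, +1]

# A full factorial design enumerates all 2**k level combinations.
for run, combo in enumerate(product(levels, repeat=len(factors)), start=1):
    settings = ", ".join(f"{name} = {lvl:+d}" for name, lvl in zip(factors, combo))
    print(f"Experiment #{run}: {settings}")
```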
ACCEPTANCE SAMPLING PLAN
Acceptance sampling addresses the cost and time of testing every unit by testing a
representative sample of the product for
defects. The process involves first, determining the size of a product lot to be tested, then the
number of products to be sampled, and finally the number of defects acceptable within the sample
batch. Products are chosen at random for sampling. The procedure usually occurs at the
manufacturing site—the plant or factory—and just before the products are to be transported. This
process allows a company to measure the quality of a batch with a specified degree of statistical
certainty without having to test every single unit. Based on the results—how many of the
predetermined number of samples pass or fail the testing—the company decides whether to
accept or reject the entire lot.
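The statistical core of a single sampling plan is the probability of accepting a lot, which follows from the binomial distribution: the lot is accepted when the number of defects in the sample does not exceed the acceptance number c. The plan parameters and defect rate below are hypothetical.

```python
from math import comb

# Hypothetical single sampling plan: sample size n, acceptance number c.
n, c = 50, 2
p = 0.02   # assumed true proportion defective in the lot

# P(accept) = P(defects in sample <= c) under the binomial distribution.
p_accept = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))
print(f"P(accept lot) = {p_accept:.3f}")   # roughly 0.92 for these values
```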
Acceptance sampling in its modern industrial form dates from the early 1940s. It was originally
applied by the U.S. military to the testing of bullets during World War II. The concept and
methodology were developed by Harold Dodge, a veteran of the Bell Laboratories quality
assurance department, who was acting as a consultant to the Secretary of War. While the bullets
had to be tested, the need for speed was crucial, and Dodge reasoned that decisions about entire
lots could be made by samples picked at random. Along with Harry Romig and other Bell
colleagues, he came up with a precise sampling plan to be used as a standard, setting the sample
size, the number of acceptable defects, and other criteria.
Acceptance sampling procedures became common throughout World War II and afterward.
However, as Dodge himself noted in 1969, acceptance sampling is not the same as acceptance
quality control. Dependent on specific sampling plans, it applies to specific lots and is an
immediate, short-term test—a spot check, so to speak. In contrast, acceptance quality control
applies in a broader, more long-term sense for the entire product line; it functions as an integral
part of a well-designed manufacturing process and system.
Total quality management (TQM) is the continual process of detecting and reducing or
eliminating errors in manufacturing, streamlining supply chain management, improving the
customer experience, and ensuring that employees are up to speed with training. Total quality
management aims to hold all parties involved in the production process accountable for the
overall quality of the final product or service.
TQM was developed by W. Edwards Deming, a management consultant whose work had a great
impact on Japanese manufacturing. While TQM shares much in common with the Six Sigma
improvement process, it is not the same as Six Sigma. TQM focuses on ensuring that internal
guidelines and process standards reduce errors, while Six Sigma looks to reduce defects.
TQM is considered a customer-focused process and aims for continual improvement of business
operations. It strives to ensure all associated employees work toward the common goals of
improving product or service quality, as well as improving the procedures that are in place for
production.
Important:- Special emphasis is put on fact-based decision making, using performance metrics to
monitor progress; high levels of organizational communication are encouraged for the purpose
of maintaining employee involvement and morale.
Principles of TQM
The key principles of TQM include: customer focus; total employee involvement; a process-
centered approach; an integrated system; a strategic and systematic approach; continual
improvement; fact-based decision making; and communications.
UNIT-III
LEADERSHIP
LEAN MANAGEMENT
A systematic approach to identifying and eliminating waste (non-value added activities)
through continuous improvement by flowing the product at the pull of the customer in
pursuit of perfection.
Origin:
Started by Japanese manufacturers in the automobile industry and replicated in other sectors all
over the world.
Underlying Principle:
“Less is more productive”
i.e. in order to stay competitive, organizations are required to deliver better quality products
and services using fewer resources.
JUST-IN-TIME(JIT)
JIT Concepts
• Eliminate waste
• Remove variability
• Improve throughput
1. Eliminate Waste
▪ Waste is anything that does not add value from the customer point of view
▪ Storage, inspection, delay, waiting in queues, and defective products do not add value
and are100% waste
2. Remove Variability
• Variability is any deviation from the optimum process
• Lean systems require managers to reduce variability caused by both internal and external
factors
• Inventory hides variability
• Less variability results in less waste
3. Improve Throughput
• Push systems dump orders on the downstream stations regardless of the need
• By pulling material in small lots, inventory cushions are removed, exposing problems
and emphasizing continual improvement
• The time it takes to move an order from receipt to delivery is reduced.
BENCHMARKING
What is benchmarking?
Benchmarking is a strategic and analytical process of continuously measuring an organization's
products, services and practices against a recognized leader in the studied area for the purpose of
improving business performance.
OQM will identify and publish best practices in public and private sector services that are relevant
to ORS divisions and branches. We will also identify and publicize ORS "best practices" through
our web page. Through sharing with other organizations we will seek to identify and incorporate
practices that will contribute to better performance. OQM can also facilitate benchmarking
studies for ORS divisions pursuing their goal of providing excellent services to their customers.
These studies will be characterized by objective comparisons of performance. In other words,
OQM will make sure that the variables under study are comparable in nature and scope.
•To forecast industry trends - Because it requires the study of industry leaders, benchmarking
can provide numerous indicators on where a particular business might be headed, which
ultimately may pave the way for the organization to take a leadership position.
•To discover emerging technologies - The benchmarking process can help leaders uncover
technologies that are changing rapidly, newly developed, or state-of-the-art.
•To stimulate strategic planning - The type of information gathered during a benchmarking
effort can assist an organization in clarifying and shaping its vision of the future.
•To enhance goal setting - Knowing the best practices in your business can dramatically
improve your ability to know what goals are realistic and attainable.
•To maximize award-winning potential - Many prestigious award programs, such as the
Malcolm Baldrige National Quality Award Program, the federal government's President's
Quality Award Program, and numerous state and local awards recognize the importance of
benchmarking and allocate a significant percentage of points to organizations that practice it.
•To comply with Executive Order #12862, "Setting Customer Service Standards" -
Benchmarking the customer service performance of federal government agencies against the best
in business is one of the eight action areas of this Executive Order.
What part of my organization should I select for benchmarking?
A Process Failure Mode Effects Analysis (PFMEA) is a structured analytical tool used by an
organization, business unit, or cross-functional team to identify and evaluate the potential failures
of a process.
PFMEA helps to establish the impact of the failure, and identify and prioritize the action items
with the goal of alleviating risk.
It is a living document that should be initiated prior to the start of production and maintained
through the life cycle of the product.
PFMEA evaluates each process step and assigns a score on a scale of 1 to 10 for the following
variables:
➢Severity - Assesses the impact of the failure mode (the error in the process), with 1
representing the least safety concern and 10 representing the most dangerous safety concern. In
most cases, processes with severity scores exceeding 8 may require a fault tree analysis, which
estimates the probability of the failure mode by breaking it down into further sub-elements.
➢Occurrence - assesses the chance of a failure happening, with 1 representing the lowest
occurrence and 10 representing the highest occurrence. For example, a score of 1 may be assigned
to a failure that happens once in every 5 years, while a score of 10 may be assigned to a failure
that occurs once per hour, once per minute, etc.
➢Detection - assesses the chance of a failure being detected, with 1 representing the highest
chance of detection and 10 representing the lowest chance of detection.
➢RPN - Risk priority number = severity X occurrence X detection. By rule of thumb, any RPN
value exceeding 80 requires a corrective action. The corrective action ideally leads to a lower
RPN number.
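A minimal sketch of the RPN calculation and the rule-of-thumb threshold of 80 mentioned above; the process steps and scores are hypothetical.

```python
# Hypothetical process steps with severity, occurrence and detection scores (1-10).
steps = [
    ("seal package",  {"severity": 7, "occurrence": 4, "detection": 5}),
    ("label product", {"severity": 3, "occurrence": 2, "detection": 2}),
]

for name, s in steps:
    rpn = s["severity"] * s["occurrence"] * s["detection"]   # RPN = S x O x D
    action = "corrective action required" if rpn > 80 else "acceptable"
    print(f"{name}: RPN = {rpn} -> {action}")
```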
•Form a cross-functional team of process owners and operations support personnel with a team
leader.
•Have the team leader define the scope, goals and timeline of completing the FMEA.
•Transfer the steps of the process map into the FMEA.
•Assign severity, occurrence and detection scores to each process step as a team.
•Based on the RPN value, identify required corrective actions for each process step.
•Complete a Responsible, Accountable, Consulted, and Informed (RACI) chart for the corrective
actions.
•Have the team leader on a periodic basis track the corrective action and update the FMEA.
•Have the team leader also track process changes, design changes, and other critical discoveries
that would qualify and update the FMEA.
•Ensure that the team leader schedules periodic meetings to review the FMEA (based on process
performance, a quarterly review may be an option).
There are five dimensions that customers consider when assessing service quality. Let's discuss
these dimensions in a little more detail.
Tangibles
One dimension of service quality has to do with the tangibles of the service. Tangibles are the
physical features of the service being provided, such as the appearance of the building,
cleanliness of the facilities, and the appearance of the personnel. Going to a restaurant and finding
that your table and silverware are dirty would negatively impact your assessment of the service
quality. On the other hand, walking into a beautifully decorated, clean restaurant with impeccably
dressed wait staff would positively affect your opinion of the service.
QUALITY MANAGEMENT AND SIX SIGMA PERSPECTIVE
In the popular book The Six Sigma Way, Six Sigma is defined as: “a comprehensive and
flexible system for achieving, sustaining and maximizing business success. Six Sigma is uniquely
driven by close understanding of customer needs, disciplined use of facts, data, and statistical
analysis, and diligent attention to managing, improving, and reinventing business processes. (p.
xi)”
Six Sigma projects generally follow a well-defined process consisting of five phases, known as
DMAIC (pronounced “dey-MAY-ihk”):
• Define
• Measure
• Analyze
• Improve
• Control
The define phase of a DMAIC project focuses on clearly specifying the problem or opportunity,
what the goals are for the process improvement project, and what the scope of the project is.
Identifying who the customer is and their requirements is also critical given that the overarching
goal for all Six Sigma projects is improving the organization’s ability to meet the needs of its
customers.
ISO 9001
ISO 9001 is defined as the international standard that specifies requirements for a quality
management system (QMS). Organizations use the standard to demonstrate the ability to
consistently provide products and services that meet customer and regulatory requirements. It is
the most popular standard in the ISO 9000 series and the only standard in the series to which
organizations can certify.
ISO 9001 was first published in 1987 by the International Organization for Standardization (ISO),
an international agency composed of the national standards bodies of more than 160 countries. The
current version of ISO 9001 was released in September 2015.
ISO 9001 helps organizations ensure their customers consistently receive high quality products
and services, which in turn brings many benefits, including satisfied customers, management,
and employees.
Because ISO 9001 specifies the requirements for an effective quality management system,
organizations find that using the standard helps them:
• Organize a QMS
• Create satisfied customers, management, and employees
• Continually improve their processes
• Save costs
ISO 9001 is the only standard in the ISO 9000 series to which organizations can certify. Achieving
ISO 9001:2015 certification means that an organization has demonstrated the following:
• Follows the guidelines of the ISO 9001 standard
• Fulfills its own requirements
• Meets customer requirements and statutory and regulatory requirements
• Maintains documentation
Certification to the ISO 9001 standard can enhance an organization’s credibility by showing
customers that its products and services meet expectations. In some instances or in some
industries, certification is required or legally mandated. The certification process includes
implementing the requirements of ISO 9001:2015 and then completing a successful registrar’s
audit confirming the organization meets those requirements.
Organizations should consider the following as they begin preparing for an ISO 9001 quality
management system certification:
• Registrar’s costs for ISO 9001 registration, surveillance, and recertification audits
• Current level of conformance with ISO 9001 requirements
• Amount of resources that the company will dedicate to this project for development and
implementation
• Amount of support that will be required from a consultant and the associated costs
ISO 14000
ISO 14000 is a set of rules and standards created to help companies reduce industrial waste and
environmental damage.
It’s a framework for better environmental impact management, but it’s not required. Companies
can get ISO 14000 certified, but it’s an optional certification. The ISO 14000 series of standards
was introduced in 1996 by the International Organization for Standardization (ISO) and most
recently revised in 2015. (ISO is not an acronym; it derives from the ancient Greek word ísos,
meaning equal or equivalent.)
KEY TAKEAWAYS
▪ ISO 14000 is a set of rules and standards created to help companies address their environmental
impact.
▪ This certification is optional for corporations, rather than mandatory.
▪ ISO 14000 is intended to be used to set and ultimately achieve environmentally-friendly business
goals and objectives.
▪ This type of certification can be used as a marketing tool for engaging environmentally conscious
consumers and may help firms meet mandatory environmental regulations.
The other benefits include being able to sell products to companies that use ISO 14000–certified
suppliers. Companies and customers may also pay more for products that are considered
environmentally friendly. On the cost side, meeting the ISO 14000 standards can help reduce costs,
since it encourages the efficient use of resources and limits waste. This may lead to finding ways
to recycle products or new uses for previously disposed-of byproducts.
QS 9000
QS 9000 is a company level certification based on quality system requirements related specifically
to the automotive industry. These standards were developed by the larger automotive companies
including Ford, General Motors and DaimlerChrysler. This standard is obsolete and has been
replaced by either ISO/TS 16949 or ISO 9001.
Organizations that wanted to become certified to the current version of QS 9000 needed to
complete an application and undergo a document review and a certification audit. Once
certification was received, annual or regularly scheduled audits would be conducted to verify
continued compliance with the standard.
QUALITY AUDIT
Quality audit is the process of systematic examination of a quality system carried out by an
internal or external quality auditor or an audit team. It is an important part of an organization's
quality management system and is a key element in the ISO quality system standard, ISO 9001.
Quality audits are typically performed at predefined time intervals and ensure that the
institution has clearly defined internal system monitoring procedures linked to effective action.
This can help determine if the organization complies with the defined quality system processes
and can involve procedural or results-based assessment criteria.
With the upgrade of the ISO 9000 series of standards from the 1994 to the 2008 versions, the focus
of the audits has shifted from purely procedural adherence towards measurement of the actual
effectiveness of the Quality Management System (QMS) and the results that have been achieved
through the implementation of a QMS.
KEY TAKEAWAYS
There are three main types of audits: external audits, internal audits, and Internal Revenue Service
(IRS) audits.
External audits are commonly performed by Certified Public Accounting (CPA) firms and result
in an auditor's opinion which is included in the audit report.
An unqualified, or clean, audit opinion means that the auditor has not identified any material
misstatement as a result of his or her review of the financial statements.
External audits can include a review of both financial statements and a company's internal controls.
Internal audits serve as a managerial tool to make improvements to processes and internal controls.
UNIT-IV
PRODUCT QUALITY IMPROVEMENT
QUALITY
It is not easy to define the word quality, since it is perceived differently by different sets of
individuals. If experts are asked to define quality, they may give varied responses depending on
their individual preferences. These may be similar to the following listed phrases.
According to experts, the word quality can be defined either as;
• Fitness for use or purpose.
• To do a right thing at first time.
• To do a right thing at the right-time.
• Find and know what the consumer wants.
• Features that meet consumer needs and give customer satisfaction.
• Freedom from deficiencies or defects.
• Conformance to standards.
• Value or worthiness for money, etc.
Dr. Joseph Juran coined a short definition of quality as:
“Product’s fitness for use.”
PRODUCT QUALITY
“Product quality means to incorporate features that have a capacity to meet consumer needs
(wants) and gives customer satisfaction by improving products (goods) and making them free
from any deficiencies or defects.”
Product quality has two main types of characteristics, viz., measured and attributes.
Measured characteristics include features like shape, size, color, strength, appearance, height,
weight, thickness, diameter, volume, fuel consumption, etc. of a product.
Attribute characteristics check and control defective pieces per batch, defects per item,
number of mistakes per page, cracks in crockery, double-threading in textile material,
discoloring in garments, etc.
Based on this classification, we can divide products into good and bad.
So, product quality refers to the sum total of the goodness of a product.
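As an illustrative sketch of this distinction, the following Python snippet (all data and specification limits are assumed) checks a measured characteristic against limits and counts defective pieces from attribute data:

# Illustrative sketch (hypothetical data): the two kinds of quality
# characteristics described above.

# Measured (variable) characteristic: e.g., diameter in mm, checked
# against assumed specification limits.
LOWER_SPEC, UPPER_SPEC = 9.8, 10.2
diameters = [9.9, 10.0, 10.3, 10.1, 9.7]
out_of_spec = [d for d in diameters if not LOWER_SPEC <= d <= UPPER_SPEC]

# Attribute characteristic: e.g., defects counted per item (cracks,
# discoloring), giving a simple good/bad judgement per piece.
defects_per_item = [0, 2, 0, 1, 0]
defective_pieces = sum(1 for n in defects_per_item if n > 0)

print(f"Out-of-spec diameters: {out_of_spec}")           # [10.3, 9.7]
print(f"Defective pieces in batch: {defective_pieces}")  # 2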
The five main aspects of product quality are listed below:
Quality of design: The product must be designed as per the consumers’ needs and high-quality
standards.
Quality conformance: The finished products must conform (match) to the product design
specifications.
Reliability: The products must be reliable or dependable. They must not easily break down or
become non-functional. They must also not require frequent repairs. They must remain
operational for a satisfactorily long time to be called reliable.
Safety: The finished product must be safe for use and/or handling. It must not harm consumers
in any way.
Proper storage: The product must be packed and stored properly. Its quality must be
maintained until its expiry date.
Company must focus on product quality before, during and after production:
• Before production, company must find out the needs of the consumers. These needs must
be included in the product design specifications. So, the company must design its product
as per the needs of the consumers.
• During production, company must have quality control at all stages of the production
process. There must be quality control of raw materials, plant and machinery, selection
and training of manpower, finished products, packaging of products, etc.
For company: Product quality is very important for the company. This is because bad-quality
products will erode consumer confidence and damage the image and sales of the company. It
may even affect the survival of the company. So, it is very important for every company to make
better quality products.
For consumers: Product quality is also very important for consumers. They are ready to pay
high prices, but in return, they expect best-quality products. If they are not satisfied with the
quality of a company's products, they will purchase from its competitors. Nowadays, very good
quality international products are available in the local market. So, if domestic companies
don't improve their products' quality, they will struggle to survive in the market.
QUALITY FUNCTION DEPLOYMENT
QFD is a focused methodology for carefully listening to the voice of the customer and then
effectively responding to those needs and expectations.
First developed in Japan in the late 1960s as a form of cause-and-effect analysis, QFD was
brought to the United States in the early 1980s. It gained its early popularity as a result of
numerous successes in the automotive industry.
In general, we can define it like this:
Every organization has customers. Some have only internal customers, some have only
external customers, and some have both. When you are working to determine what you need
to accomplish to satisfy or even delight your customers, quality function deployment is an
essential tool.
METHODOLOGY
Beginning with the initial matrix, commonly termed the House of Quality (Figure 1), the QFD
methodology focuses on the most important product or service attributes or qualities.
These are composed of customer wows, wants, and musts. (See the Kano model of customer
perception versus customer reality.)
Once you have prioritized the attributes and qualities, QFD deploys them to the appropriate
organizational function for action, as shown in Figure 2. Thus, QFD is the deployment of
customer-driven qualities to the responsible functions of an organization.
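A minimal Python sketch of the House of Quality arithmetic follows. The customer wants, technical attributes, weights, and relationship strengths are all hypothetical; the 9/3/1 scoring scale (strong/moderate/weak relationship) is a common QFD convention.

# Sketch: weighted importance of technical attributes in a House of
# Quality, using hypothetical data for a food-packaging product.
customer_wants = {"easy to open": 5, "keeps food fresh": 4, "low cost": 3}

# relationship[want][attribute] = 9 (strong), 3 (moderate), 1 (weak), 0 (none)
relationship = {
    "easy to open":     {"seal strength": 3, "film thickness": 1, "material cost": 0},
    "keeps food fresh": {"seal strength": 9, "film thickness": 3, "material cost": 0},
    "low cost":         {"seal strength": 1, "film thickness": 3, "material cost": 9},
}

attributes = ["seal strength", "film thickness", "material cost"]
scores = {
    a: sum(weight * relationship[want][a]
           for want, weight in customer_wants.items())
    for a in attributes
}

# Higher score = attribute deserves more engineering attention.
for a, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(a, s)   # seal strength 54, material cost 27, film thickness 26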
TAGUCHI METHOD
The Taguchi method of quality control is an approach to engineering that emphasizes the roles
of research and development (R&D) and product design and development in reducing the
occurrence of defects and failures in manufactured goods.
This method, developed by Japanese engineer and statistician Genichi Taguchi, considers
design to be more important than the manufacturing process in quality control, aiming to
eliminate variances in production before they can occur.
KEY TAKEAWAYS
▪ In engineering, the Taguchi method of quality control focuses on design and development
to create efficient, reliable products.
▪ Its founder, Genichi Taguchi, considers design to be more important than the
manufacturing process in quality control, seeking to eliminate variances in production
before they can occur.
▪ Companies such as Toyota, Ford, Boeing, and Xerox have adopted this method.
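A well-known element of the Taguchi method is the quadratic loss function, L(y) = k(y - m)^2, which treats any deviation from the target value m as a loss, not only deviations that fall outside the specification limits. A minimal Python sketch, with assumed values for k and the target:

# Sketch of Taguchi's quadratic loss function L(y) = k * (y - m)^2.
# The loss coefficient k and the target m are assumed values.

def taguchi_loss(y: float, target: float, k: float) -> float:
    """Loss (in currency units) caused by deviating from the target."""
    return k * (y - target) ** 2

TARGET = 10.0   # nominal dimension in mm (assumed)
K = 50.0        # loss coefficient in currency per mm^2 (assumed)

for y in (10.0, 10.1, 10.2):
    print(f"y = {y} mm: loss = {taguchi_loss(y, TARGET, K):.2f}")
# Loss grows quadratically: 0.00, 0.50, 2.00 -- even parts that are
# within specification but off-target carry some loss.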
UNIT-V
DESIGN FAILURE
Design Failure Mode and Effects Analysis (DFMEA) was first used in rocket science. Initially,
the rocket development process in the 1950s did not go well. The complexity and difficulty of
the task resulted in many catastrophic failures. Root Cause Analysis (RCA) was used to
investigate these failures but had inconclusive results. Rocket failures are often explosive, with
no evidence of the root cause remaining.
Design FMEA provided the rocket scientists with a platform to prevent failure. A similar platform
is used today in many industries to identify risks, take counter measures and prevent failures.
DFMEA has had a profound impact, improving safety and performance on products we use every
day.
DFMEA is a methodical approach used for identifying potential risks introduced in a new or
changed design of a product/service.
The Design FMEA initially identifies design functions, failure modes and their effects on the
customer with corresponding severity ranking / danger of the effect. Then, causes and their
mechanisms of the failure mode are identified.
High probability causes, indicated by the occurrence ranking, may drive action to prevent or
reduce the cause’s impact on the failure mode.
The detection ranking highlights the ability of specific tests to confirm the failure mode / causes
are eliminated.
The DFMEA also tracks improvements through Risk Priority Number (RPN) reductions.
By comparing the before and after RPN, a history of improvement and risk mitigation can be
chronicled.
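A small Python sketch of this RPN bookkeeping follows. The rankings are hypothetical values on the conventional 1-10 scales, and the RPN is the product of the severity, occurrence, and detection rankings:

# Sketch of DFMEA risk tracking via the Risk Priority Number.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number = Severity * Occurrence * Detection (1-10 each)."""
    return severity * occurrence * detection

# Hypothetical failure mode: "seal leaks", before and after counter-measures.
before = rpn(severity=8, occurrence=6, detection=5)   # 240
# A design change lowers occurrence; a better test improves detection.
after = rpn(severity=8, occurrence=2, detection=3)    # 48

print(f"RPN before: {before}, after: {after}")
# Comparing before/after RPN chronicles the risk mitigation.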
Why Perform Design Failure Mode and Effects Analysis (DFMEA)?
Risk is the substitute for failure on new / changed designs. It is a good practice to identify risks
on a program as early as possible. Early risk identification provides the greatest opportunity for
verified mitigation prior to program launch.
Risks are identified on designs which, if left unattended, could result in failure. The DFMEA is
applied when a new design is being developed or when an existing design is changed.
WHAT IS RELIABILITY?
Reliability is defined as the probability that a product, system, or service will perform its
intended function adequately for a specified period of time, or will operate in a defined
environment without failure.
The most important components of this definition (intended function, specified period of time,
and defined operating environment) must be clearly understood to fully know how reliability in
a product or service is established.
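As an illustration, under the common constant-failure-rate (exponential) assumption, reliability over a mission time t is R(t) = e^(-t/MTBF), where MTBF is the mean time between failures. The Python sketch below uses an assumed MTBF:

import math

# Sketch: exponential reliability model (assumed constant failure rate).

def reliability(t_hours: float, mtbf_hours: float) -> float:
    """Probability the item survives t_hours without failure."""
    return math.exp(-t_hours / mtbf_hours)

MTBF = 10_000.0   # mean time between failures in hours (assumed)
for t in (1_000, 5_000, 10_000):
    print(f"R({t} h) = {reliability(t, MTBF):.3f}")
# R(1000 h) = 0.905, R(5000 h) = 0.607, R(10000 h) = 0.368 --
# reliability is always stated for a specified period of time,
# exactly as the definition above requires.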
What is DfR (Design for Reliability)?
Essentially, DfR is a process that ensures a product, or system, performs a specified function
within a given environment over the expected lifetime.
DfR often occurs at the design stage — before physical prototyping — and is often part of an
overall design for excellence (DfX) strategy. But, as you’ll soon find out, the use of DfR can, and
should, be expanded.
In the current global marketplace, competition for products and services has never been higher.
Consumers have multiple choices for many very similar products. Therefore, many
manufacturing companies are continually striving to introduce completely new products or break
into new markets. Sometimes the products meet the consumer’s needs and expectations and
sometimes they don’t. The company will usually redesign the product, sometimes developing
and testing multiple iterations prior to re-introducing the product to market.
Multiple redesigns of a product are expensive and wasteful. It would be much more beneficial if
the product met the actual needs and expectations of the customer, with a higher level of product
quality the first time.
Design for Six Sigma (DFSS) focuses on performing additional work up front to assure you fully
understand the customer’s needs and expectations prior to design completion. DFSS requires
involvement by all stakeholders in every function. When following a DFSS methodology you
can achieve higher levels of quality for new products or processes.
Design for Six Sigma (DFSS) is a different approach to new product or process development in
that there are multiple methodologies that can be utilized.
Traditional Six Sigma utilizes DMAIC or Define, Measure, Analyze, Improve and Control. This
methodology is most effective when used to improve a current process or make incremental
changes to a product design. In contrast, Design for Six Sigma is used primarily for the complete
re-design of a product or process. The methods, or steps, used for DFSS seem to vary according
to the business or organization implementing the process. Some examples are DMADV, DCCDI
and IDOV.
What all the methodologies seem to have in common is that they all focus on fully understanding
the needs of the customer and applying this information to the product and process design. The
DFSS team must be cross-functional to ensure that all aspects of the product are considered, from
market research through the design phase, process implementation and product launch.
With DFSS, the goal is to design products and processes while minimizing defects and variations
at their roots. The expectation for a process developed using DFSS is reportedly 4.5 sigma or
greater.
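For orientation, a sigma level can be translated into an expected defect rate. The Python sketch below applies the conventional 1.5-sigma long-term shift; at 4.5 sigma this works out to roughly 1,350 DPMO, versus about 3.4 DPMO at 6 sigma.

import math

# Sketch: expected defects per million opportunities (DPMO) for a given
# sigma level, using the conventional 1.5-sigma long-term shift.

def dpmo_for_sigma(sigma_level: float, shift: float = 1.5) -> float:
    z = sigma_level - shift
    upper_tail = 0.5 * math.erfc(z / math.sqrt(2))   # P(Z > z)
    return upper_tail * 1_000_000

print(f"{dpmo_for_sigma(4.5):.0f} DPMO at 4.5 sigma")   # ~1350
print(f"{dpmo_for_sigma(6.0):.1f} DPMO at 6 sigma")     # ~3.4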
When your company designs a new product or process from the ground up it requires a sizable
amount of time and resources. Many products today are highly complex, providing multiple
opportunities for things to go wrong. If your design does not meet the customer’s actual wants
and expectations or your product does not provide the value the customer is willing to pay for,
the product sales will suffer. Redesigning products and processes is expensive and increases
your time to market. In contrast, by utilizing Design for Six Sigma methodologies, companies
have reduced their time to market by 25 to 40 percent while providing a high quality product that
meets the customer’s requirements. DFSS is a proactive approach to design with quantifiable
data and proven design tools that can improve your chances of success.
As previously mentioned, DFSS is more of an approach to product design rather than one
particular methodology. There are some fundamental characteristics that each of the
methodologies share. The DFSS project should involve a cross functional team from the entire
organization. It is a team effort that should be focused on the customer requirements and Critical
to Quality parameters (CTQs). The DFSS team should invest time studying and understanding
the issues with the existing systems prior to developing a new design. There are multiple
methodologies being used for implementation of DFSS. One of the most common techniques,
DMADV (Define, Measure, Analyze, Design, Verify), is detailed below.
Define
The Define stage should include the Project Charter, Communication Plan and Risk Assessment
/ Management Plan.
Measure
During the Measurement Phase, the project focus is on understanding customer needs and wants
and then translating them into measurable design requirements. The team should not only focus
on requirements or “Must Haves” but also on the “Would likes”, which are features or functions
that would excite the customer, something that would set your product apart from the
competition. The customer information may be obtained through various methods including:
• Customer surveys
• Dealer or site visits
• Warranty or customer service information
• Historical data
• Consumer Focus Groups
Analyze
In the Analyze Phase, the customer information should be captured and translated into
measurable design performance or functional requirements. The Parameter (P) Diagram is often
used to capture and translate this information. Those requirements should then be converted into
System, Sub-system and Component level design requirements. The Quality Function
Deployment (QFD) and Characteristic Matrix are effective tools for driving the needs of the
customer from the machine level down to component level requirements. The team should then
use the information to develop multiple concept level design options. Various assessment tools
like benchmarking or brainstorming can be used to evaluate how well each of the design
concepts meet customer and business requirements and their potential for success.
Then the team will evaluate the options and select a final design using decision-making tools
such as a Pugh Matrix or a similar method.
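A minimal Python sketch of the Pugh matrix logic follows; the criteria, concepts, and scores are hypothetical. Each concept is rated against the baseline (datum) design per criterion as better (+1), the same (0), or worse (-1):

# Sketch: Pugh matrix concept selection with hypothetical scores.
criteria = ["cost", "reliability", "ease of manufacture", "weight"]

# scores[concept][criterion], relative to the baseline design
scores = {
    "concept A": {"cost": 1, "reliability": 0, "ease of manufacture": -1, "weight": 1},
    "concept B": {"cost": 0, "reliability": 1, "ease of manufacture": 1, "weight": -1},
}

for concept, row in scores.items():
    total = sum(row[c] for c in criteria)
    plus = sum(1 for c in criteria if row[c] > 0)
    minus = sum(1 for c in criteria if row[c] < 0)
    print(f"{concept}: +{plus} / -{minus}, net {total:+d}")
# The concept with the best balance of pluses and minuses is carried
# forward, often after borrowing strong features from the others.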
Design
When the DFSS team has selected a single concept-level design, it is time to begin the detailed
design work using 3D modeling, preliminary drawings, etc. The design team evaluates the
physical product and other considerations including, but not limited to, the following:
▪ Manufacturing process
▪ Equipment requirements
▪ Supporting technology
▪ Material selection
▪ Manufacturing location
▪ Packaging
Once the preliminary design is determined the team begins evaluation of the design using various
techniques, such as:
▪ Finite Element Analysis (FEA)
▪ Failure Modes and Effects Analysis (FMEA)
▪ Tolerance Stack Analysis
▪ Design of Experiments (DOE)
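Of these, tolerance stack analysis is the easiest to illustrate numerically. The Python sketch below, with assumed part tolerances, contrasts the worst-case stack with the statistical root-sum-square (RSS) stack:

import math

# Sketch: 1-D tolerance stack analysis with hypothetical tolerances.
tolerances = [0.10, 0.05, 0.08, 0.04]   # +/- tolerances of stacked parts (mm)

worst_case = sum(tolerances)                        # direct addition
rss = math.sqrt(sum(t ** 2 for t in tolerances))    # assumes independent variation

print(f"Worst-case stack: +/-{worst_case:.2f} mm")  # +/-0.27 mm
print(f"RSS stack:        +/-{rss:.2f} mm")         # +/-0.14 mm
# RSS predicts tighter assembly variation, at the cost of accepting a
# small statistical risk that the worst case is exceeded.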
Verify
During the Verify Phase, the team introduces the design of the product or process and performs
the validation testing to verify that it does meet customer and performance requirements. In
addition, the team should develop a detailed process map, process documentation and
instructions.
Often a Process FMEA is performed to evaluate the risk inherent in the process and address any
concerns prior to a build or test run. Usually a prototype or pilot build is conducted. A pilot build
can take the form of a limited product production run, service offering or possibly a test of a new
process.
The information or data collected during the prototype or pilot run is then used to improve the
design of the product or process prior to a full roll-out or product launch. When the project is
complete, the team ensures the process is ready to hand off to the business leaders and current
production teams. The team should provide all required process documentation and a Process
Control Plan.
UNITWISE
ASSIGNMENT
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
QUALITY MANAGEMENT (7CS6-60.1)
UNIT-I ASSIGNMENT
Question No.1
What is the Von Neumann architecture?
Question No.2
Define the register transfer language. What do you understand by arithmetic
micro-operations? Explain with an example.
Question No.3
Design a 4-bit combinational decrementer circuit using four full-adder circuits.
Question No.4
Draw a circuit diagram for a common bus system for four registers using multiplexers.
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
QUALITY MANAGEMENT (7CS6-60.1)
UNIT-II ASSIGNMENT
Question No.1
What is the difference between direct and indirect addressing modes? Explain the implied
mode of addressing also.
Question No.2
Explain the instruction format. What do you understand by an instruction pipeline?
Question No.3
Define the instruction pipeline and its problems. Explain pipeline speed-up, efficiency and
throughput.
Question No.4
What is pipelining? What is the maximum speed-up that can be attained? Construct an
instruction pipeline. Is it possible to attain the maximum speed-up in an instruction pipeline?
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
QUALITY MANAGEMENT (7CS6-60.1)
UNIT-III ASSIGNMENT
Question No.1
Explain the role of leadership in decision making, strategic planning and communications.
Question No.2
What do you mean by a quality audit? Explain its procedure with the help of a diagram.
Question No.3
How is the international standard ISO 9001 important for a quality management system?
Question No.4
Explain the DMAIC methodology. How is it similar to or different from the Deming cycle?
Question No.5
What are the benefits of ISO registration? Explain ISO 14000 in detail.
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
QUALITY MANAGEMENT (7CS6-60.1)
UNIT-IV ASSIGNMENT
Question No.1
How can we improve the quality of a product? Explain in detail.
Question No.2
What is the methodology used behind Quality Function Deployment (QFD)?
Question No.3
Explain the term robust design in detail with the help of an example. Why is robust design
important?
Question No.4
What are the benefits or advantages of Quality Function Deployment?
Question No.5
How can we build a solid product strategy to improve product quality?
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
QUALITY MANAGEMENT (7CS6-60.1)
UNIT-V ASSIGNMENT
Question No.1
Briefly explain design failure mode and effects analysis (DFMEA).
Question No.2
Describe the Lean Six Sigma approach to new product development.
Question No.3
How do you measure product reliability? Why is product reliability important?
Question No.4
How does Six Sigma play an important role in product development?
Question No.5
What is the role of product reliability analysis in design failure?