

UNIT - I
Software quality
In the context of software engineering, software quality measures how well software is
designed (quality of design), and how well the software conforms to that design (quality
of conformance), although there are several different definitions.

Software product quality

 Product quality
   o conformance to requirements or program specification; related to
      Reliability
      Scalability
   o Correctness
      Completeness
      absence of bugs
      Fault-tolerance
   o Extensibility
   o Maintainability
      Documentation

Software Quality Factors


A software quality factor is a non-functional requirement for a software program which is
not called up by the customer's contract, but nevertheless is a desirable requirement
which enhances the quality of the software program.

 Understandability
 Completeness
 Maintainability
 Conciseness
 Portability
 Consistency
 Testability
 Usability
 Reliability
 Structuredness
 Efficiency
 Security


Software Quality Measurement

Measurement of software quality factors


Understandability

Are variable names descriptive of the physical or functional property represented? Do


uniquely recognizable functions contain adequate comments so that their purpose is
clear? Are deviations from forward logical flow adequately commented? Are all elements
of an array functionally related?

Completeness

Conciseness

Is all code reachable? Is any code redundant? How many statements within loops could
be placed outside the loop, thus reducing computation time? Are branch decisions too
complex?

Portability

 Does the program depend upon system or library routines unique to a particular
installation? Have machine-dependent statements been flagged and commented?
Has dependency on internal bit representation of alphanumeric or special
characters been avoided?
 The effort required to transfer the program from one hardware/software system
environment to another.

Consistency

Is one variable name used to represent different physical entities in the program? Does
the program contain only one representation for physical or mathematical constants? Are
functionally similar arithmetic expressions similarly constructed? Is a consistent scheme
for indentation used?


Developing a Set of Metrics

Quality Models
Why a quality model?

 Enables quality comparison both qualitatively and quantitatively

Hierarchical models

 considers quality under a series of quality characteristics or criteria, each having a set of associated measures or metrics
 combined in a hierarchical manner into an overall assessment of quality

Questions
 what criteria should be employed?
 how do they inter-relate?
 how can they be combined to provide an overall assessment of quality?


McCall’s Model

Boehm’s Model

 Barry W. Boehm is known for his many contributions to software engineering.


 He was the first to identify software as the primary expense of future computer systems; he developed COCOMO, the spiral model, and wideband Delphi, and made many other contributions through his involvement in industry and academia.


Metrics Measurement and Analysis


THE GOAL QUESTION METRIC APPROACH

 The Goal Question Metric (GQM) approach is based upon the assumption that for an organization to measure in a purposeful way it must first specify the goals for itself and its projects, then it must trace those goals to the data that are intended to define those goals operationally, and finally provide a framework for interpreting the data with respect to the stated goals.

 Thus it is important to make clear, at least in general terms, what informational needs the organization has, so that these needs for information can be quantified whenever possible, and the quantified information can be analyzed as to whether or not the goals are achieved.


 The approach was originally defined for evaluating defects for a set of projects in
the NASA Goddard Space Flight Center environment.

 The result of applying the Goal Question Metric approach is the specification of a measurement system targeting a particular set of issues and a set of rules for the interpretation of the measurement data.

 The resulting measurement model has three levels:

1. Conceptual level (GOAL): A goal is defined for an object, for a variety of


reasons, with respect to various models of quality, from various points of view,
relative to a particular environment. Objects of measurement are
Products: Artifacts, deliverables and documents that are produced during
the system life cycle; E.g., specifications, designs, programs, test suites.
Processes: Software related activities normally associated with time; E.g.,
specifying, designing, testing, interviewing.
Resources: Items used by processes in order to produce their outputs; E.g.,
personnel, hardware, software, office space.

2. Operational level (QUESTION): A set of questions is used to characterize the


way the assessment/achievement of a specific goal is going to be performed based
on some characterizing model. Questions try to characterize the object of
measurement (product, process, resource) with respect to a selected quality issue
and to determine its quality from the selected viewpoint.

3. Quantitative level (METRIC): A set of data is associated with every question


in order to answer it in a quantitative way. The data can be
Objective: If they depend only on the object that is being measured and not
on the viewpoint from which they are taken; E.g., number of versions of a
document, staff hours spent on a task, size of a program.
Subjective: If they depend on both the object that is being measured and
the viewpoint from which they are taken; E.g., readability of a text, level of
user satisfaction.


The complete Goal Question Metric model is as follows:

THE GOAL QUESTION METRIC PROCESS

 A GQM model is developed by identifying a set of quality and/or productivity goals, at corporate, division or project level; e.g., customer satisfaction, on-time delivery, improved performance.
 From those goals and based upon models of the object of measurement, we derive
questions that define those goals as completely as possible.
 For example, if the goal is to characterize a software system (e.g., an electronic mail package, a word processor) with respect to a certain set of quality issues (e.g., portability across architectures), then a quality model of the product must be chosen that deals with those issues (e.g., a list of functional features that can be implemented in different architectures).

 The next step consists of specifying the measures that need to be collected in order to answer those questions, and to track the conformance of products and processes to the goals.
 After the measures have been specified, we need to develop the data collection
mechanisms, including validation and analysis mechanisms.
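
To make the goal/question/metric levels concrete, here is a minimal illustrative sketch in Python (not part of the GQM literature itself); the goal, questions, and metrics are hypothetical examples in the spirit of the classic "improve the timeliness of change request processing" goal:

```python
# Illustrative sketch of a tiny GQM model (hypothetical goal, questions and metrics).
from dataclasses import dataclass, field
from typing import List


@dataclass
class Metric:
    name: str
    objective: bool  # True = objective data, False = subjective data


@dataclass
class Question:
    text: str
    metrics: List[Metric] = field(default_factory=list)


@dataclass
class Goal:
    purpose: str      # e.g. "Improve"
    issue: str        # e.g. "the timeliness of"
    object_: str      # e.g. "change request processing"
    viewpoint: str    # e.g. "project manager"
    questions: List[Question] = field(default_factory=list)


goal = Goal(
    purpose="Improve",
    issue="the timeliness of",
    object_="change request processing",
    viewpoint="project manager",
    questions=[
        Question(
            "What is the current change request processing speed?",
            [Metric("average cycle time", objective=True),
             Metric("percentage of cases outside the upper limit", objective=True)],
        ),
        Question(
            "Is the performance of the process improving?",
            [Metric("current average cycle time / baseline average cycle time", objective=True),
             Metric("manager's subjective rating of improvement", objective=False)],
        ),
    ],
)

# Walk the model: each question is answered by its associated metrics.
for q in goal.questions:
    print(q.text, "->", [m.name for m in q.metrics])
```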

CONCLUSION
In summary, the Goal Question Metric approach is a mechanism for defining and interpreting operational and measurable software goals. It can be used in isolation or, better, within the context of a more general approach to software quality improvement.
Figure below outlines the basic roles and flows of information for this model.


UNIT-II

Software Quality Assurance (SQA)

 SQA consists of a means of monitoring the software engineering processes and methods used to ensure quality.
 It does this by means of audits of the quality management system
under which the software system is created. These audits are backed
by one or more standards, usually ISO 9000.
 It is distinct from software quality control which includes reviewing
requirements documents, and software testing.
 SQA encompasses the entire software development process, which
includes processes such as software design, coding, source code
control, code reviews, change management, configuration
management, and release management.
 Whereas software quality control is a control of products, software
quality assurance is a control of processes.
 Tasks

Quality Engineering
The activity consisting of the cohesive collection of all tasks that are primarily performed to ensure and help continually improve the quality of an endeavor's process and work products.

Goals

The typical goals of quality engineering are to:

 Ensure that the necessary levels of quality are achieved.


 Make the achievement of quality predictable and repeatable.
 Minimize endeavor, organizational, and personal risks due to
poor quality.


Objectives

The typical objectives of quality engineering are to:

 Define what quality means on the endeavor in terms of a quality model defining quality factors and quality subfactors.
 Plan the quality tasks including helping the requirements team
determine and specify the quality requirements and associated quality
factors (attributes) and quality metrics.
 Assure the quality of the process used by the endeavor.
Thus, quality assurance is concerned with fulfilling the quality
requirements and achieving the quality factors of the endeavor’s
process.
“Are we building the products right?”
 Control the quality of the work products delivered during the
endeavor.
Thus, quality control is concerned with fulfilling the quality
requirements and achieving the quality factors of the endeavor’s work
products.
“Are we building the right products?”

Examples

Examples of quality engineering based on scope include:

 Application Quality Engineering


 Business Quality Engineering
 Contact Center Quality Engineering
 Data Center Quality Engineering

Preconditions

Quality engineering typically may begin when the following preconditions hold:

 The endeavor is started.
 The quality team is initially staffed and trained in quality engineering.


Completion Criteria

Quality engineering is typically complete when the following postconditions hold:

 The endeavor is complete.

Tasks

The following diagram illustrates the relationships between the quality tasks:


Plan

The purpose of this Software Quality Assurance Plan (SQAP) is to define the techniques, procedures, and methodologies that will be used at the Center for Space Research (CSR) to assure timely delivery of the software that meets specified requirements within project resources.

The use of this plan will help assure the following:

(1) That software development, evaluation and acceptance standards are developed, documented and followed.
(2) That the results of software quality reviews and audits will be given to appropriate management within CSR. This provides feedback as to how well the development effort is conforming to various CSR development standards.
(3) That test results adhere to acceptance standards.

Teams

 The SQA team shall check that quality is maintained during the project and that the proper quality procedures are being followed; discovered problems are reported to Project Management.
 The members of the project team must work according to the part(s)
of the SQAP that applies to their specific task.

The tasks of the SQA team

For the first phase of the project (UR), the SQA team must see to it that the
following documents are properly reviewed internally before they are
submitted for an external review.

 The URD

The SQA team must check whether the URD:

o contains a general description of the software that has to be developed;
o contains requirements on the software to be developed as stated by the client;


o contains constraints on the software to be developed;


o contains a priority list of the requirements.
 The SPMP

The SQA team must check whether the goals of the project are clearly
described. A life cycle approach for the project must be defined. The
SQA team must ensure that the SPMP is realistic by checking:

o the assumptions made during the planning of the project;
o restrictions with respect to the plan (e.g. availability of members);
o external problems (e.g. delivery of PCs, interface cards and drivers).
 The SCMP

With respect to the SCMP, the SQA team has to check whether the
document provides procedures concerning:

o CI identification
o CI storage
o CI change control
o CI status indication

All documents must have a unique identifier and backups must be made at least once every three days.

 The SQAP

With respect to the SQAP, the SQA team must check whether the SQAP contains:

o Project standards
o Review procedures
o Problem reporting procedures
o Responsibilities of the project members with respect to quality
assurance

Tasks during SR phase

For the second phase of the project (SR), the SQA team must see to it that
the following documents are properly reviewed internally before they are
submitted for an external review.


 The SRD

The SQA team must check whether the SRD:

o contains requirements on the software to be developed; these requirements must be based on the software requirements stated in the URD;
o contains constraints on the software to be developed; these constraints must be based on the software constraints in the URD;
o contains a priority list of the requirements;
o contains a traceability matrix.
 The SPMP-SR

The SQA team must ensure that the SPMP is realistic by checking:

o the assumptions made during the planning;


o restrictions with respect to the planning (e.g. availability of
members);
o external problems (e.g. external software/code).
 The SCMP-SR

With respect to the SCMP, the SQA team must check whether the SCMP contains:

o the additional baselines.


 The SQAP-SR

With respect to the SQAP, the SQA team must check whether the SQAP contains:

o the Tasks of the SQA team during the SR phase.

Tasks during AD phase

For the third phase of the project (AD), the SQA team must see to it that the
following documents are properly reviewed internally before they are
submitted for an external review.

 The ADD

The SQA team must check whether the ADD:


o contains an architectural design of the software to be developed; this design must describe a logical model and the interfaces between the different classes;
o contains pre- and post-conditions of the methods in the logical model;
o contains a traceability matrix where the design is checked against the software requirements in the SRD.
 The SPMP-AD

The SQA team must ensure that the SPMP is realistic by checking:

o the assumptions made during the planning;


o restrictions with respect to the planning (e.g. availability of
members);
o external problems.
 The SCMP-AD

With respect to the SCMP, the SQA team must check whether the SCMP contains:

o the additional baselines.


 The SQAP-AD

With respect to the SQAP, the SQA team must check whether the SQAP contains:

o the tasks of the SQA team during the AD phase.

Characteristics of a good QA (Quality Assurance) Engineer:

• Understanding of business approach and goals of the organization


• Understanding of entire software development process
• Strong desire for quality
• Establish and enforce SQA methodologies, processes and Testing
Strategies
• Judgment skills to assess high-risk areas of application
• Communication with Analysis and Development team
• Report defects with full evidence
• Take preventive actions
• Take actions for Continuous improvement
• Reports to higher management


• Say No when Quality is insufficient


• Work Management
• Meet deadlines

Documentation:

 Project documentation may include many kinds of documents (e.g., plans, task reports).
 Project size, criticality (i.e., the severity of the consequence of failure
of the system), and complexity are some features that may affect the
amount of documentation a project should need.
 For example, the design documentation may consist of a single
document describing both the system architecture and the detailed
modules or it may consist of separate documents for the architecture
and subsystems.
 The purpose of this section is not to specify how many documents
should be required.
 Rather, this section identifies the information content needed for any
project and the timeliness of requirements so that the information can
be used by the vendor, the utility, and the NRC reviewers.
 Because the NRC reviewers cannot determine the characteristics of
the software product without substantial technical specifications,
project plans, and reports, NRC should specify the technical products
of the vendor that the utility must provide NRC.

Review:

 The reviewers will also need to evaluate the installation package, which consists of installation procedures, installation medium (e.g., magnetic tape), test case data used to verify installation, and expected output from the test cases.
 In some instances, the product may already be installed in the utility.
NRC should request documentation on the results of installation and
acceptance testing.


UNIT - III

Reliability and Quality Control:

 Although the terms reliability and quality are often used interchangeably,
there is a difference between these two disciplines.
 While reliability is concerned with the performance of a product over its
entire lifetime, quality control is concerned with the performance of a
product at one point in time, usually during the manufacturing process.
 As stated in the definition, reliability assures that components, equipment
and systems function without failure for desired periods during their whole
design life, from conception (birth) to junking (death).
 Quality control is a single, albeit vital, link in the total reliability process.
Quality control assures conformance to specifications.
 This reduces manufacturing variance, which can degrade reliability.
 Quality control also checks that the incoming parts and components meet
specifications, that products are inspected and tested correctly, and that the
shipped products have a quality level equal to or greater than that specified.
 The specified quality level should be one that is acceptable to the users, the
consumer and the public.
 No product can perform reliably without the inputs of quality control
because quality parts and components are needed to go into the product so
that its reliability is assured.

Quality Tools:

Cause Analysis Tools


 Tips and tools for the first step to improvement: identifying the cause of a
problem or situation.

Evaluation and Decision-Making Tools


 Making informed decisions and choosing the best options with a simple,
objective rating system, and determining the success of a project.

Process Analysis Tools


 How to identify and eliminate unnecessary process steps to increase
efficiency, reduce timelines and cut costs.


Seven Basic Quality Tools


 These seven tools get to the heart of implementing quality principles.

Data Collection and Analysis Tools


 How can you collect the data you need, and what should you do with them
once they’re collected?

Idea Creation Tools


 Ways to stimulate group creativity and organize the ideas that come from it.

Project Planning and Implementing Tools


 How to track a project’s status and look for improvement opportunities.

Seven New Management and Planning Tools


 Ways to promote innovation, communicate information and successfully
plan major projects.

Seven Basic Quality Tools( Ishikawa's basic tools )

 Quality pros have many names for these seven basic tools of quality, first
emphasized by Kaoru Ishikawa, a professor of engineering at Tokyo
University and the father of “quality circles.”

 Start your quality journey by mastering these tools, and you'll have a name
for them too: "indispensable."

1. Cause-and-effect diagram (also called Ishikawa or fishbone chart):


Identifies many possible causes for an effect or problem and sorts ideas into
useful categories.
2. Check sheet: A structured, prepared form for collecting and analyzing
data; a generic tool that can be adapted for a wide variety of purposes.
3. Control charts: Graphs used to study how a process changes over time.
4. Histogram: The most commonly used graph for showing frequency
distributions, or how often each different value in a set of data occurs.
5. Pareto chart: Shows on a bar graph which factors are more significant (a small worked sketch follows this list).
6. Scatter diagram: Graphs pairs of numerical data, one variable on each
axis, to look for a relationship.
7. Stratification: A technique that separates data gathered from a variety of sources so that patterns can be seen (some lists replace "stratification" with "flowchart" or "run chart").

Case tools:

Computer-aided software engineering (CASE) is the use of software tools to assist in the development and maintenance of software. Tools used to assist in this way are known as CASE tools.

Some typical CASE tools are:

 Code generation tools


 Data modeling tools
 UML
 Refactoring tools
 QVT or Model transformation Tools
 Configuration management tools including revision control

Preventing, Discovering and Removing Defects

 To reduce the number of defects delivered with a software project, an organization can engage in a variety of activities.
 While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal.
 Discovering and removing defects is an expensive and inefficient process. It is much more efficient for an organization to conduct activities that prevent defects.

Defect Removal Efficiency

If an organization has no defect prevention methods in place then they are totally
reliant on defect removal efficiency.

1. Requirements Reviews: up to 15% removal of potential defects
2. Design Reviews: up to 30% removal of potential defects
3. Code Reviews: up to 20% removal of potential defects
4. Formal Testing: up to 25% removal of potential defects


 In other words, if your organization is great at defect removal, the maximum percentage of defects your organization can expect to remove is 90%.
 If a software project is 100 function points, the total number of maximum (or potential) defects could be 120.
 If you were perfect at defect removal, your project would still have up to 12 defects after all your defect discovery and removal efforts.

The vast majority of organizations would receive a B (medium) or even a D (poor) at defect removal efficiency.
Activity                    Perfect   Medium   Poor
Requirements Reviews          15%       5%      0%
Design Reviews                30%      15%      0%
Code Reviews                  20%      10%      0%
Formal Testing                25%      15%     15%
Total Percentage Removed      90%      45%     15%

Defect Discovery and Removal (size in function points, total defects remaining)

Function Points   Max Defects   Perfect   Medium     Poor
100                       120        12        66      102
200                       240        24       132      204
500                       600        60       330      510
1,000                   1,200       120       660    1,020
2,500                   3,000       300     1,650    2,550
5,000                   6,000       600     3,300    5,100
10,000                 12,000     1,200     6,600   10,200
20,000                 24,000     2,400    13,200   20,400

 An organization with a project of 2,500 function points that was about medium at defect discovery and removal would have 1,650 defects remaining after all defect removal and discovery activities.
 The calculation is 2,500 x 1.2 = 3,000 potential defects. The organization would be able to remove about 45% of the defects, or 1,350 defects.
 The total potential defects (3,000) less the removed defects (1,350) equals the remaining defects of 1,650.
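
The arithmetic behind these figures is simple enough to capture in a few lines. The sketch below (an illustrative calculation, not a standard formula) re-derives the "defects remaining" numbers from the assumed 1.2 potential defects per function point and the removal percentages above:

```python
# Re-derive "defects remaining" from function points and removal efficiency.
DEFECTS_PER_FP = 1.2          # assumed potential defects per function point

REMOVAL_EFFICIENCY = {        # total percentage of defects removed (from the table above)
    "perfect": 0.90,
    "medium": 0.45,
    "poor": 0.15,
}

def defects_remaining(function_points: int, profile: str) -> int:
    """Potential defects less the share removed for the given removal profile."""
    potential = function_points * DEFECTS_PER_FP
    removed = potential * REMOVAL_EFFICIENCY[profile]
    return round(potential - removed)

for fp in (100, 500, 2500):
    print(fp, {p: defects_remaining(fp, p) for p in REMOVAL_EFFICIENCY})
# e.g. 2500 -> {'perfect': 300, 'medium': 1650, 'poor': 2550}, matching the table.
```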

Defect Prevention

If an organization concentrates on defect prevention (instead of defect detection), then the number of defects inserted or created is much less. The amount of time and effort required to discover and remove these defects is also much less.

1. Roles and Responsibilities Clearly Defined: up to 15% reduction in the number of defects created
2. Formalized Procedures: up to 25% reduction in the number of defects created
3. Repeatable Processes: up to 35% reduction in the number of defects created
4. Controls and Measures in Place: up to 30% reduction in the number of defects created

 Imagine an organization with items 1 and 2 in place. A project with 100 function points would have a potential of 120 defects, but since they have preventative measures in place, they can reduce the number of potential defects by 48 (40% = 25% + 15%).
 That makes the potential number of defects 72, compared to 120 with no preventative efforts.
 Assuming that an organization was medium at defect discovery and removal, they could remove 45% of the remaining defects, or have 40 remaining when the project rolled to production.

Function Points   Max Defects   After Prevention   Remaining (Medium Removal)
100                       120                 72                40
200                       240                144                79
500                       600                360               198
1,000                   1,200                720               396
2,500                   3,000              1,800               990
5,000                   6,000              3,600             1,980
10,000                 12,000              7,200             3,960
20,000                 24,000             14,400             7,920

The above table shows the number of defects remaining for an organization that does items 1 and 2 above and is medium at discovery and removal.
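
The same arithmetic, extended with the prevention percentages, reproduces the table above. The sketch assumes the 40% prevention figure (items 1 and 2) and the medium removal efficiency of 45%:

```python
# Combine defect prevention (items 1 and 2: 15% + 25% = 40%) with medium removal (45%).
DEFECTS_PER_FP = 1.2
PREVENTION = 0.40             # reduction in the number of defects created
MEDIUM_REMOVAL = 0.45         # share of the remaining defects discovered and removed

def remaining_after_prevention_and_removal(function_points: int) -> tuple:
    potential = function_points * DEFECTS_PER_FP
    after_prevention = potential * (1 - PREVENTION)
    remaining = after_prevention * (1 - MEDIUM_REMOVAL)
    return round(potential), round(after_prevention), round(remaining)

for fp in (100, 1000, 20000):
    print(fp, remaining_after_prevention_and_removal(fp))
# e.g. 100 -> (120, 72, 40), matching the table above.
```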

 The problem for estimating defects is multidimensional.


 First the total number of defects must be estimated.
 Second the impact of defect prevention needs to be understood and the
estimated number of defects adjusted.
 Third an assessment needs to be done to understand how many defects can
be discovered and removed by an organization.

Clearly, the fewer defects an organization must discover and remove, the better. This is accomplished by better processes, a more stable organization, and repeatable processes. The focus of software organizations needs to be on defect prevention instead of defect detection.

Software Reliability Models:

 A proliferation of software reliability models has emerged as people try to understand the characteristics of how and why software fails, and try to quantify software reliability.
 Over 200 models have been developed since the early 1970s, but how to quantify software reliability still remains largely unsolved. Interested readers may refer to [RAC96], [Lyu95].
 As many models as there are, and with many more emerging, none of the models can capture a satisfying amount of the complexity of software; constraints and assumptions have to be made for the quantifying process.
 Therefore, there is no single model that can be used in all situations. No model is complete or even representative.
 One model may work well for a set of certain software, but may be completely off track for other kinds of problems.

 Most software models contain the following parts: assumptions, factors, and
a mathematical function that relates the reliability with the factors.
 The mathematical function is usually higher order exponential or
logarithmic.

Software reliability modeling techniques can be divided into two subcategories: prediction modeling and estimation modeling [RAC96]. Both kinds of modeling techniques are based on observing and accumulating failure data and analyzing them with statistical inference.

Rayleigh Model

 Rayleigh model has been found to be most suitable for predicting reliability
of software product.
 It predicts the expected value of defect density at different stages of life
cycle of the project, once parameters like total number of defects or total
cumulative defect rate and peak of the curve in terms of unit of time for the
curve are decided.

 The nature of the curve indicates the pattern of the defect removal rate in the life cycle of the project.
 The area bounded by the x-axis and the curve is a measure of the total defects likely to be unearthed from the software being developed.
Below is the plot of the Rayleigh curve plotted for one of our projects:

[Figure: Rayleigh curve of defect density (defects/KLOC) across life cycle stages: HLD, LLD, Coding, UT, IT]

The red line indicates the actual defect density observed as against the predicted
values (brown smooth curve) obtained through a theoretical model. Observed
defect density closely matches with the defect density predicted by the model.

The curve indicates the defect density at the time of system testing as 21 defects.
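
As a sketch of how such a curve can be generated, one common parameterization of the Rayleigh defect-density model is f(t) = K * (t / c^2) * exp(-t^2 / (2 * c^2)), where K is the total defect density to be uncovered and c is the point in the life cycle at which defect removal peaks. The code below plots this form for invented parameter values; it is not the project data shown above:

```python
# Illustrative Rayleigh defect-density curve (invented parameters, not project data).
import numpy as np
import matplotlib.pyplot as plt

K = 30.0        # assumed total defect density to be uncovered (defects/KLOC)
t_peak = 2.0    # assumed life-cycle time (arbitrary stage units) at which removal peaks

t = np.linspace(0.0, 7.0, 200)
# f(t) peaks at t = t_peak and integrates to approximately K over the whole life cycle.
f = K * (t / t_peak**2) * np.exp(-t**2 / (2 * t_peak**2))

plt.plot(t, f)
plt.xlabel("Life cycle stages (arbitrary units)")
plt.ylabel("Defects / KLOC")
plt.title("Rayleigh curve (illustrative)")
plt.show()
```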


UNIT - IV

Quality Management System


Topics covered
1. Concept of quality
2. QMS
3. Elements of QMS
4. Rayleigh model
5. Reliability Growth Models
6. Complexity Metrics and Models
7. Customer Satisfaction Analysis
Concept of quality - historical background
 The concept of quality as we think of it now first emerged out of the Industrial Revolution.
 Previously goods had been made from start to finish by the same person or team of people, with handcrafting and tweaking the product to meet 'quality criteria'. Mass production brought huge teams of people together to work on specific stages of production where one person would not necessarily complete a product from start to finish.
 In the late 1800s pioneers such as Frederick Winslow Taylor and Henry Ford recognized the limitations of the methods being used in mass production at the time and the subsequent varying quality of output.
 Taylor established Quality Departments to oversee the quality of production and rectifying of errors, and Ford emphasized standardization of design and component standards to ensure a standard product was produced. Management of quality was the responsibility of the Quality Department and was implemented by inspection of product output to 'catch' defects.
 Application of statistical control came later as a result of World War production methods.
 Quality management systems are the outgrowth of work done by W. Edwards Deming, a statistician, after whom the Deming Prize for quality is named.
 Quality, as a profession and the managerial process associated with the quality function, was introduced during the second half of the 20th century, and has evolved since then.
 Over this period, few other disciplines have seen as many changes as the quality profession.
 The quality profession grew from simple control, to engineering, to systems engineering.
 Quality control activities were predominant in the 1940s, 1950s, and 1960s.
 The 1970s were an era of quality engineering and the 1990s saw quality systems as an emerging field.
 Like medicine, accounting, and engineering, quality has achieved status as a recognized profession.
Quality Management System (QMS)
 Quality Management System (QMS) can be defined as a set of
policies, processes and procedures required for planning and
execution (production / development / service) in the core
business area of an organization.
 QMS integrates the various internal processes within the organization and intends to provide a process approach for project execution. QMS enables the organization to identify, measure, control and improve the various core business processes that will ultimately lead to improved business performance.
ELEMENTS OF QUALITY MANAGEMENT SYSTEMS
The standards of ISO 9000 detail 20 requirements for an organization's
quality management system in The following areas:
 Management Responsibility
 Quality System
 Order Entry


 Design Control
 Document and Data Control
 Purchasing
 Control of Customer Supplied Products
 Product Identification and Traceability
 Process Control
 Inspection and Testing
 Control of Inspection, Measuring, and Test Equipment
 Inspection and Test Status
 Control of Nonconforming Products
 Corrective and Preventive Action
 Handling, Storage, Packaging, and Delivery
 Control of Quality Records
 Internal Quality Audits
 Training
 Servicing
 Statistical Techniques
The Rayleigh Model Framework
 Perhaps the most important principle in software engineering is "do it right the first time."
 This principle speaks to the importance of managing quality throughout the development process.
 The interpretation of the principle, in the context of software quality management, is threefold:
 The best scenario is to prevent errors from being injected into the development process.
 When errors are introduced, improve the front end of the development process to remove as many of them as early as possible. Specifically, in the context of the waterfall development process, rigorous design reviews and code inspections are needed. In the Cleanroom methodology, function verification by the team is used.


 If the project is beyond the design and code phases, unit tests and any additional tests by the developers serve as gatekeepers for defects to escape the front-end process before the code is integrated into the configuration management system (the system library).
 In other words, the phase of unit test or pre-integration test (the development phase prior to system integration) is the last chance to do it right the "first time."
 The Rayleigh model is a good overall model for quality management. It articulates the points on defect prevention and early defect removal related to the preceding items.
 Based on the model, if the error injection rate is reduced, the entire area under the Rayleigh curve becomes smaller, leading to a smaller projected field defect rate.
 Also, more defect removal at the front end of the development process will lead to a lower defect rate at later testing phases and during maintenance. Both scenarios aim to lower the defects in the latter testing phases, which in turn lead to fewer defects in the field.
 The relationship between formal machine-testing defects and field defects, as described by the model, is congruent with the famous counterintuitive principle in software testing by Myers (1979), which basically states that the more defects found during formal testing, the more that remained to be found later.
 The reason is that at the late stage of formal testing, error injection of the development process (mainly during design and code implementation) is basically determined (except for bad fixes during testing).
 High testing defect rates indicate that the error injection is high; if no extra effort is exerted, more defects will escape to the field.
 If we use the iceberg analogy to describe the relationship between testing and field defect rates, the tip of the iceberg is the testing defect rate and the submerged part is the field defect rate.
 The size of the iceberg is equivalent to the amount of error injection. By the time formal testing starts, the iceberg is already formed and its size determined.
 The larger its tip, the larger the entire iceberg. To reduce the submerged part, extra effort must be applied to expose more of the iceberg above the water.


The following figure shows a schematic representation of the iceberg analogy.

Figure: Iceberg Analogy—Error Injection, Testing Defects, and Latent Defects

Reliability Growth Models

 Although reliability growth models are meant for reliability assessment, they are also useful for quality management at the back end of the development process.
 Models developed from a previous product or a previous release of the same product can be used to track the testing defects of the current product.
 To have significant improvement, the defect arrival rate (or failure density) of the current project must fall below the model curve.

The following figure shows an example from a systems software product developed at IBM Rochester. Each data point represents a weekly defect arrival rate during the system test phase.


The defect arrival patterns represented by the triangles and circles indicate
two later releases of the same product.
Compared to the baseline model curve, both new releases witnessed a
significant reduction in defect rate during the system test phase.
Figure: Reliability Growth Model for Quality Management
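
A hedged sketch of how such a baseline curve might be obtained: the code below fits an exponential (Goel-Okumoto style) cumulative-defect model m(t) = a * (1 - exp(-b*t)) to weekly cumulative defect counts and then checks a later release against it. The weekly counts are invented, and the actual model used at IBM Rochester may differ.

```python
# Fit an exponential (Goel-Okumoto style) reliability growth model to invented data.
import numpy as np
from scipy.optimize import curve_fit

def cumulative_defects(t, a, b):
    """Expected cumulative defects by week t: m(t) = a * (1 - exp(-b * t))."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical baseline release: cumulative defects observed at the end of each test week.
weeks = np.arange(1, 13)
baseline = np.array([12, 25, 37, 46, 55, 61, 67, 71, 74, 77, 79, 80])

(a_hat, b_hat), _ = curve_fit(cumulative_defects, weeks, baseline, p0=(100.0, 0.1))
print(f"estimated total defects a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")

# Hypothetical new release: check whether its arrivals stay below the baseline curve.
new_release = np.array([9, 18, 26, 33, 39, 43, 47, 50, 52, 54, 55, 56])
below_curve = new_release < cumulative_defects(weeks, a_hat, b_hat)
print("weeks at or above the baseline model:", int((~below_curve).sum()))
```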

 As a second example, when another product was just about at the start of
system testing, the PTR arrival rates were unusually high compared to the
model.

 It was clear that proceeding in a business-as-usual manner would not result in meeting the product's quality goal. A special quality improvement program (QIP) was then proposed, evaluated, approved, and swiftly implemented.
The QIP involved five extra activities:
1. Blitz testing— "artistic" testing in stressful environments
2. Customer evaluation— customers conducting testing in the
development laboratory


3. Code inspections— additional inspections of error-prone modules, especially routines that are difficult to test, such as the error recovery/exception handling routines
4. Design reviews— rereview of designs of suspect components and
modules
5. Extension of system test— improvement of test suites and extension
of testing schedules to allow thorough final test execution
Because of the special QIP activities, the product ship date was delayed one month. As a result, more than 250 would-be field defects were found and removed. The field quality of the product, evidenced by field defect arrivals reported in later years, improved.

Complexity Metrics and Models

 Thus far the reliability and quality management models we have discussed are either at the project or the product level.
 Both types of model tend to treat the software more or less as a black box. In other words, they are based on either the external behavior (e.g., failure data) of the product or the intermediate process data (e.g., type and magnitude of inspection defects), without looking into the internal dynamics of design and code of the software.
 For complexity metrics and models, the unit of analysis is more granular, usually at the program-module level. Such metrics and models tend to take an internal view and can provide clues for software engineers to improve the quality of their work.

 Reliability models are developed and studied by researchers and software reliability practitioners with sophisticated skills in mathematics and statistics; quality management models are developed by software quality professionals and product managers for practical project and quality management.
 Software complexity research, on the other hand, is usually conducted by computer scientists or experienced software engineers. Like the reliability models, many complexity metrics and models have emerged in the recent past.


Lines of Code

 The lines of code (LOC) count is usually for executable statements. It is actually a count of instruction statements.
 The interchangeable use of the two terms apparently originated from Assembler programs, in which a line of code and an instruction statement are the same thing.
 Because the LOC count represents the program size and complexity, it is not a surprise that the more lines of code there are in a program, the more defects are expected.
 More intriguingly, researchers found that defect density (defects per KLOC) is also significantly related to LOC count.

 Early studies pointed to a negative relationship: the larger the module size, the smaller the defect rate.
 For instance, Basili and Perricone (1984) examined FORTRAN modules with fewer than 200 lines of code for the most part and found higher defect density in the smaller modules.
 Shen and colleagues (1985) studied software written in Pascal, PL/S, and Assembly language and found that an inverse relationship existed up to about 500 lines.

 Since larger modules are generally more complex, a lower defect rate is somewhat counterintuitive.
 Interpretation of this finding rests on the explanation of interface errors: interface errors are more or less constant regardless of module size, and smaller modules are subject to higher error density because of smaller denominators.
 More recent studies point to a curvilinear relationship between lines of code and defect rate: defect density decreases with size and then curves up again at the tail when the modules become very large.
 For instance, Withrow (1990) studied modules written in Ada for a large project at Unisys and confirmed the concave relationship between defect density (during formal test and integration phases) and module size.


 Specifically, of 362 modules with a wide range in size (from fewer than 63 lines to more than 1,000), Withrow found the lowest defect density in the category of about 250 lines.
 Explanation of the rising tail is readily available. When module size becomes very large, the complexity increases to a level beyond a programmer's immediate span of control and total comprehension.
 This new finding is also consistent with previous studies that did not address the defect density of very large modules.
 Experience from the AS/400 development also lends support to the curvilinear model. In the example in the figure below, although the concave pattern is not as significant as that in Withrow's study, the rising tail is still evident.

Figure: Curvilinear Relationship Between Defect Rate and Module Size—AS/400 data

 The curvilinear model between size and defect density sheds new light on
software quality engineering.

 It implies that there may be an optimal program size that can lead to the
lowest defect rate.

 Such an optimum may depend on language, project, product, and environment; apparently many more empirical investigations are needed.


 Nonetheless, when an empirical optimum is derived by reasonable methods (e.g., based on the previous release of the same product, or based on a similar product by the same development group), it can be used as a guideline for new module development.
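
One way such an empirical optimum might be derived is sketched below, under the assumption that defect density can be approximated by a quadratic in the logarithm of module size; the module sizes and defect densities are invented:

```python
# Fit a quadratic in log(module size) to defect density and locate the minimum (illustrative).
import numpy as np

# Hypothetical per-module data: size in LOC and defect density in defects/KLOC.
size_loc = np.array([30, 60, 100, 150, 250, 400, 700, 1200, 2000])
defect_density = np.array([9.5, 7.0, 5.2, 4.1, 3.6, 4.0, 5.1, 6.8, 9.0])

x = np.log(size_loc)
c2, c1, c0 = np.polyfit(x, defect_density, 2)   # density ~ c2*x^2 + c1*x + c0

# The fitted curve is minimized where its derivative is zero: x* = -c1 / (2*c2).
x_opt = -c1 / (2.0 * c2)
optimal_size = float(np.exp(x_opt))
print(f"estimated lowest-defect-density module size: about {optimal_size:.0f} LOC")
```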
Customer Satisfaction Analysis

 What the clients really want is the interpretation and analysis of the data to provide actionable information: What can we learn from it? What actions does it suggest that we take to improve customer satisfaction with what we offer?
 The analysis process starts by performing statistical tests to reveal relationships or differences in ratings of the performance on different product and service attributes, and how they affect overall satisfaction.
 We compare the performance to that of peers, utilizing the benchmark database to describe how the performance rates on a relative basis.
 We identify the customers' product and service priorities and we compare these to their perceptions of the performance through Quadrant Analysis.
 We look for gaps in performance versus expectations in the search for major opportunities for improvement.
 We augment the analysis of the quantitative survey data with careful study of the qualitative information – the comments and observations made by the customers.
 These are an invaluable source of insight into the reasons behind their ratings. In most cases, the analysis is aimed at identifying the key drivers of satisfaction – those product or service elements that are most closely related to customer satisfaction.


 The clients benefit from the perspective of years of experience with a wide variety of clients.

Applying this knowledge, the goal is to deliver actionable results – information you can use to create change that will improve the competitive position and the bottom line.


UNIT V

QUALITY STANDARDS

Topics Covered:

 Need for standards


 ISO 9000 series
 ISO 9000-3 series
 CMM and CMMI
 Six Sigma Concepts

1.Need for Standards:

 Standardization (or standardisation) is the process of agreeing on technical standards.
 A standard is a document that establishes uniform engineering or technical specifications, criteria, methods, processes, or practices. Some standards are mandatory while others are voluntary.
 Some standards are de facto, meaning informal practices followed out of convenience, or de jure, meaning formal requirements.
 Formal standards bodies such as the International Organization for Standardization (ISO) or the American National Standards Institute (ANSI) are independent of the manufacturers of the goods for which they publish standards.
 The goals of standardization can be to help with independence of
single suppliers (commodification), compatibility, interoperability,
safety, repeatability, or quality.
 In social sciences, including economics, the idea of standardization is
close to the solution for a coordination problem, a situation in which
all parties can realize mutual gains, but only by making mutually
consistent decisions.
 Standardization is the process of selecting better choices and ratifying these consistent decisions to obtain a standard.


Types of standardization process:

 Emergence in de facto use: through tradition (long-established standards in particular countries) and/or market domination (e.g., Microsoft).
 Fixed by a standards body:
o in an imposed process: usually for mandatory norms (such as laws).
o in a consensus process: usually for voluntary standards.

Standardization is defined as: the development and implementation of concepts, doctrines, procedures and designs to achieve and maintain the required levels of compatibility, interchangeability or commonality in the operational, procedural, material, technical and administrative fields to attain interoperability.

2.ISO 9000 Series:

ISO 9000 is a family of standards for quality management systems. ISO 9000 is maintained by ISO, the International Organization for Standardization, and is administered by accreditation and certification bodies.

ISO-9001 covers the entire process from product design through after-sales service.

ISO-9002 covers only the manufacture (or a specific service such as a QC/QA laboratory) of the product.

ISO-9003 covers the "final inspection" of the product only.

ISO-9004 is an "Internal Use" standard - it cannot be registered and is not subject to third-party audits.

ISO-9001:

Some of the requirements in ISO 9001 (which is one of the standards in the
ISO 9000 family) would include

 a set of procedures that cover all key processes in the business;
 monitoring processes to ensure they are effective;
 keeping adequate records;
 checking output for defects, with appropriate corrective action where necessary;
 regularly reviewing individual processes and the quality system itself for effectiveness; and
 facilitating continual improvement.
 A company or organization that has been independently audited and
certified to be in conformance with ISO 9001 may publicly state that
it is "ISO 9001 certified" or "ISO 9001 registered."
 Certification to an ISO 9000 standard does not guarantee the
compliance (and therefore the quality) of end products and services;
rather, it certifies that consistent business processes are being applied.
 Although the standards originated in manufacturing, they are now
employed across a wide range of other types of organizations.
 A "product", in ISO vocabulary, can mean a physical object, or
services, or software. In fact, according to ISO in 2004, "service
sectors now account by far for the highest number of ISO 9001:2000
certificates - about 31% of the total"

ISO 9001:2000 specifies requirements for a quality management system where an organization needs to demonstrate its ability to consistently provide product that meets customer and applicable regulatory requirements.

 It aims to enhance customer satisfaction through the effective application of the system, including processes for continual improvement of the system and the assurance of conformity to customer and applicable regulatory requirements.
 All requirements of this International Standard are generic and are intended to be applicable to all organizations, regardless of type, size and product provided.
 Where any requirement(s) of this International Standard cannot be applied due to the nature of an organization and its product, this can be considered for exclusion.
 Where exclusions are made, claims of conformity to this International Standard are not acceptable unless these exclusions are limited to requirements within clause 7, and such exclusions do not affect the organization's ability, or responsibility, to provide product that meets customer and applicable regulatory requirements.


ISO-9002:

Developing products to a standard of consistently high quality, Scorpion Research is a recognised leader in its field and is proud to announce its achievement of the coveted ISO 9002 quality certification.

The ISO 9002 standard is an emerging global standard for product and process
quality, adopted by 91 countries which comprise the International Organisation for
Standardisation (ISO).

ISO 9002 is best known to European countries who use the certification as a means of
identifying companies dedicated to providing a customer with the best product and
service possible.

Scorpion Research has always emphasised its quality through a continuous improvement program complementing its other quality initiatives. Achieving ISO 9002 certification further demonstrates commitment to a consistent level of quality recognised under international standards.

According to ISO assessors, the certification process is rigorous and only a minority
of the applicants who strive to become certified are successful on their first audit.
Scorpion Research is amongst this minority of organisations that passed the required
certification audit on the first assessment demonstrating its consistent emphasis
towards a high quality standard.

Certification is an ongoing process and Scorpion Research will undergo regular internal and external audits by third party ISO assessors in order to retain its certification. The ISO certification complements the quality program by establishing an externally verified, unbiased process of self-monitoring and problem-solving.

Scorpion Research joins firms such as British Telecom, Sony, IBM, Compaq, Digital
Equipment, Xerox, Toshiba and Hewlett-Packard in ISO certification.


ISO-9003:
 This is the least complex and easiest to install of the three standards in the ISO 9000 series.
 This standard is for organizations that do not participate in design and
development, purchasing or have production controls.
 It is designed for organizations that only require final inspection and
testing of their products and services to ensure that they have met the
specified requirements.
 This system is generally only relevant to simple products and services.
 It is also an option for organizations that cannot justify the expense of
one of the other systems but still desire a quality management system
for their organization.

ISO-9004:
 Your quality system must balance two needs:
o Your customers' need to have quality products at a reasonable price.
o Your company's need to make quality products at a reasonable cost.

3.ISO9000-3 for software development:

ISO 9000-3 4.4 Software development and design

4.4.1 General
Develop and document procedures to control the product design and development process. These procedures must ensure that all requirements are being met.

Software development
Control your software development project and make sure that it is executed in a disciplined manner.
Use one or more life cycle models to help
organize your software development project.
Develop and document your software development
procedures. These procedures should ensure that:
Software products meet all requirements.
Software development follows your:
Quality plan.
Development plan.


Software design
Control your software design process and make sure that it is performed in a systematic way.
Use a suitable software design method.
Study previous software design projects
to avoid repeating old mistakes.
Design software that is:
Easy to test, install, use, and maintain.
Safe and reliable when failure could cause:
Human injury.
Property damage.
Environmental harm.
Develop and document rules to control:
Coding activities.
Naming conventions.
Commentary practices.
Programming languages.
Apply configuration management techniques to
document and control the use and review of all:
Analysis tools.
Design techniques.
Compilers and assemblers.
Train personnel in the use of such tools and techniques.

4.4.2 Design and development planning
Create design and development planning procedures.
Your product planning procedures should ensure that:
Plans are prepared for each design activity or phase.
Responsibility for implementing each plan, activity, or phase is properly defined.
Qualified personnel are assigned to the product design and development process.
Adequate resources are allocated to the product design and development process.
Plans are updated, and circulated to the appropriate participants, as designs change.

Software design and development planning
Prepare a software development plan. Your plan should be documented and approved before it is implemented. Your plan should control:
Technical activities.
Requirements analyses.
Design processes.
Coding activities.
Integration methods.
Testing techniques.
Installation work.


Acceptance testing.
Management activities.
Project supervision.
Progress reviews.
Reporting requirements.
Your software development plan should:
Define your project.
Identify related plans and projects.
List your project objectives.
Define project inputs and outputs.
Define inputs for each project activity.
Define outputs for each project activity.
Explain how your project will be organized.
Explain how your teams will be structured.
Explain who will be responsible for what.
Explain how subcontractors will be used.
Explain how project participants will interact.
Explain how all resources will be managed.
Discuss project risks and potential problems.
Identify important project assumptions.
Present your project schedule.
Define project phases and dependencies.
Specify project time lines and milestones.
Introduce your project budget.
Describe the work that will be done.
Describe each task.
Describe the inputs for each task.
Describe the outputs for each task.
Identify all relevant control strategies.
Identify all relevant standards and conventions.
Identify all relevant rules and regulations.
Identify all relevant practices and procedures.
Identify configuration management practices.
Identify backup and recovery procedures.
Identify archiving procedures.
Identify all relevant methods and approaches.
Identify methods used to control nonconforming products.
Identify methods used to control software development software
Identify methods used to control virus protection activities.
Identify all relevant tools and techniques.
Identify methods used to qualify all tools and techniques.
Identify methods used to control all tools and techniques.

4.4.3 Organizational and technical interfaces
Identify the groups who should be routinely involved in the product design and development process, and ensure that their design input is properly documented, circulated, and reviewed.
Make sure that your software development
plan or your subcontractors' plans:
Define how the responsibility for software development
will be distributed between all participants.
Define how technical information will be shared
and transmitted between all participants.
Explain how all project participants will provide input.
Explain how subcontractors will provide input during
design, installation, maintenance, and training.
Explain how regulatory authorities will provide input
during design, installation, maintenance, and training.
Explain how help desk staff will provide input during
design, installation, maintenance, and training.
Explain how other related projects will provide input
during design, installation, maintenance, and training.
Explain how end users will provide input during
design, installation, maintenance, and training.
Make sure that your customer has
accepted the responsibility to:
Cooperate with you and support your project.
Provide the information you need when you need it.
Resolve outstanding issues in a timely manner.
Make sure that your customer representative has
been given the responsibility and authority to:
Clarify requirements and expectations.
Answer questions and solve problems.
Make and implement agreements.
Approve plans and proposals.
Establish acceptance criteria.
Provide appropriate customer-supplied products.
Define and distribute authorities and responsibilities.
Schedule joint progress reviews. You and
your customer together should review:
Activities:
Your developmental activities.
Your customer's activities.
Your users' activities.
Training activities.
Conversion activities.
Results:
Results of verification activities.
Results of acceptance tests.
Results of conformance evaluations.


4.4.4 Design input

Develop procedures to ensure that all design-input requirements are
identified, documented, and reviewed, and that all design flaws,
ambiguities, contradictions, and deficiencies are resolved.
Design input requirements can be classified as follows:
Customer expectations.
Contractual conditions.
Statutory imperatives.
Regulatory requirements.
Environmental constraints.
Safety considerations.
Performance standards.
Functional specifications.
Descriptive prescriptions.
Aesthetic preferences.

Software design input

Design input requirements should be specified by the customer. However,
sometimes the customer will expect you to develop the design input
specification. In this case, you should:
Prepare procedures that you can use to develop the design input
specification. These procedures should be documented and should explain:
How interviews, surveys, studies, prototypes, and demonstrations
will be used to develop your design-input specification.
How you and your customer will formally agree:
To accept the official specification.
To accept changes to the official specification.
How changes to specifications will be controlled.
How prototypes and product demonstrations will be evaluated.
How system-oriented input requirements will be met through
the use of hardware, software, and interface technologies.
How reviews, evaluations, and other discussions
between you and your customer will be recorded.
Work closely with your customer in order to avoid
misunderstandings and to ensure that the specification meets the
customer's needs.
Express your specification using terms that will make
it easy to validate during product acceptance.
Ask your customer to formally approve the
resulting design input specification.
Your design input specification may address the
following kinds of characteristics or requirements:
Functional requirements.
Reliability requirements.
Usability requirements.
Efficiency requirements.
Maintainability requirements.
Portability requirements.
Interface requirements.
Hardware interface requirements.
Software interface requirements.
Your design input specification may also need
to address the following kinds of requirements:
Operational requirements.
Safety requirements.
Security requirements.
Statutory requirements.

4.4.5 Develop procedures to control design outputs.


Design output

Design outputs are usually documents. They include drawings, parts lists,
process specifications, servicing procedures, and storage instructions.
These types of documents are used for purchasing, production, installation,
inspection, testing, and servicing.
Design outputs must be expressed in terms that allow
them to be compared with design input requirements.
Design output documents must identify those aspects of the
product that are crucial to its safe and effective operation. These
aspects can include operating, storage, handling, maintenance,
and disposal requirements.
Design output documents must be reviewed
and approved before they are distributed.
Design outputs must be accepted only if
they meet official acceptance criteria.

Software design output

Prepare design output documents using standardized methods and make sure
that your documents are correct and complete. Software design outputs can
include design specifications, source code, user guides, etc.

4.4.6 Design review

Develop procedures that specify how design reviews should be planned and
performed. Design review procedures should:
Be formally documented.
Ensure that reviews are recorded.
Ensure that representatives from all relevant areas are involved in the
process of review.

Software design review

Plan and perform design reviews for software development projects. Your
reviews should ensure that all:
Design activities and results are reviewed.
Product nonconformities are identified and addressed.
Process deficiencies are identified and addressed.
Review conclusions and observations are recorded.

Software design review procedures

Develop and document design review procedures. Your procedures should make
sure that you:
Clarify exactly what is being reviewed.
Distinguish between different types of reviews.
Organize and schedule design review meetings.
Indicate when design reviews should be performed.
Maintain a record of all design review meetings.
Invite all appropriate groups to participate.
Invite customers to participate (when required).
Confirm that customers agree with your results.
Allow design activities to continue only if all:
Deficiencies and nonconformities have been addressed.
Risks and consequences have been assessed.
Your design review procedures may also:
Define the methods that should be used to ensure
that all rules and conventions are being followed.
Define what needs to be done to prepare
for a design review. This may include:
Defining roles.
Listing objectives.
Preparing an agenda.
Collecting documents.
Define what should be done during the review.
This may include:
Defining the guidelines that should be followed.
Defining the techniques that should be used.
Define the criteria that constitute a successful review.
Define the follow-up methods that should be used to
ensure that all outstanding issues will be addressed.

4.4.7 Design verification

Develop procedures that specify how design outputs, at every stage of the
product design and development process, should be verified.
These procedures should:
Verify that outputs satisfy design-input requirements.
Ensure that objective evidence is used to verify outputs.
Ensure that all design verifications are recorded.
Ensure that all design documents are verified.
These design verification procedures may also:
Use alternative calculations to verify design outputs.
Use tests and demonstrations to verify outputs.
Compare design outputs with proven designs.

Software design verification

Verify design outputs by:
Performing design reviews.
Performing demonstrations.
Evaluating prototypes.
Carrying out simulations.
Performing tests.
Maintain a record of design verifications (a minimal record-keeping sketch appears at the end of this subsection).
Record verification results.
Record remedial actions.
Record action completions.
Accept design outputs for subsequent use only if they have been
properly verified and only if all remedial actions have been taken.
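
The record-keeping and acceptance rules above map naturally onto a small
data structure. The following Python sketch is illustrative only and not
part of the original text; the class, field, and function names are
hypothetical and simply mirror the items listed above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class VerificationRecord:
    """One design verification: what was checked, how, and what remains open."""
    output_id: str        # the design output that was verified
    method: str           # review, demonstration, prototype, simulation, or test
    result: str           # verification result, e.g. "verified" or "failed"
    remedial_actions: List[str] = field(default_factory=list)
    completed_actions: List[str] = field(default_factory=list)

    def all_actions_closed(self) -> bool:
        # Every remedial action must appear in the list of completed actions.
        return set(self.remedial_actions) <= set(self.completed_actions)

def accept_for_subsequent_use(record: VerificationRecord) -> bool:
    """Accept a design output only if it was verified and all remedial actions are closed."""
    return record.result == "verified" and record.all_actions_closed()
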

4.4.8 Design validation

Develop procedures that validate the assumption that your newly designed
products will meet customer needs. Develop design validation procedures
that:
Confirm that your new product performs properly
under all real-world operating conditions.
Confirm that your new product will meet every
legitimate customer need and expectation.
Ensure that validations are carried out early in the design process
whenever this will help guarantee that customer needs will be
met.

Software design validation

Prove that your product is ready for its intended use before you ask your
customer to accept it.
Accept validated products for subsequent use only if they have
been properly verified and only if all remedial actions have been
taken.
Maintain a record of design validations.
Record validation results.
Record remedial actions.
Record action completions.

4.4.9 Design changes

Develop procedures to ensure that all product design modifications are
documented, reviewed, and formally authorized before the resulting
documents are circulated and the changes are implemented.

Software design changes

Develop procedures to control design changes that may occur during the
product life cycle. Your procedure should ensure that you (a minimal
workflow sketch follows this list):
Document the design change.
Evaluate the design change.
Justify the design change.
Verify the design change.
Approve the design change.
Implement the design change.
Monitor the design change.
Your configuration management process
may be used to control design changes.
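
The actions listed above form a simple change-control workflow. The Python
sketch below is illustrative only and not taken from the standard; the
state names and the advance function are hypothetical, and simply enforce
the ordering of the steps listed above.

from enum import Enum, auto

class ChangeState(Enum):
    """States a design change passes through, mirroring the steps above."""
    DOCUMENTED = auto()
    EVALUATED = auto()
    JUSTIFIED = auto()
    VERIFIED = auto()
    APPROVED = auto()
    IMPLEMENTED = auto()
    MONITORED = auto()

_ORDER = list(ChangeState)  # members in the order they were defined

def advance(current: ChangeState, requested: ChangeState) -> ChangeState:
    """Move a design change to the next state, rejecting any attempt to skip a step."""
    if _ORDER.index(requested) != _ORDER.index(current) + 1:
        raise ValueError(f"cannot move from {current.name} to {requested.name}")
    return requested

# Example: a change must be evaluated before it can be justified.
state = ChangeState.DOCUMENTED
state = advance(state, ChangeState.EVALUATED)   # allowed
# advance(state, ChangeState.APPROVED)          # would raise: steps skipped

In practice your configuration management tool would hold these records;
the sketch only shows the ordering rule.
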

4. CMM and CMMI:

 The Capability Maturity Model (CMM) is also sometimes referred to as
the Software CMM (SW-CMM).
 The CMM is a process capability model based on software
development organisation processes/practices.
 Though the CMM was retired in 1997 and has not been updated since,
having been superseded by CMMI (Capability Maturity Model
Integration), it has been used as a generally applicable model to assist
in understanding the process capability maturity of organisations in
diverse areas.
 For example, it has been applied to software engineering, systems
engineering, project management, risk management, system acquisition,
information technology (IT), and personnel management, assessing
organisations against a scale of five maturity levels: Initial,
Repeatable, Defined, Managed, and Optimizing.
 The Capability Maturity Model (CMM) is a process capability
maturity model which aids in the definition and understanding of an
organisation's processes.
 The CMM was originally used to enable the assessment of software
development processes.

 The SEI’s Capability Maturity Model (CMM) and Capability Maturity
Model Integration (CMMI) are the most prominent examples of a
model-based approach to process improvement.
 The CMM is limited to management and software engineering
practices. The CMMI expands the CMM to address systems
engineering and integrated product development as well.


 The CMMI is organized around a set of Process Areas (PAs).


 The PAs are divided into groups associated with what are called
maturity levels. In the CMMI, most basic management practices are
considered part of maturity level 2, while most software engineering
practices are associated with level 3.
 Level 4 is about process and product quality management, while level
5 includes processes for process optimization and technology change
management.
 An organization that has practices that meet all the goals of maturity
level 2 is characterized as a level 2 organization.
 An organization that cannot demonstrate that its practices meet all the
applicable level 2 goals is considered level 1 by default.
 An organization that meets the goals of all the PAs at level 2 and level
3 is a level 3 maturity organization, and so on (a small sketch of this
level-determination logic appears at the end of this section).
 The CMMI also offers an alternative approach called the "continuous
representation", where individual PAs are rated on a scale of 1 to 5.
This gives the organization the flexibility to tailor its maturity goals
to business needs for particular PAs.
 As organizations move up the maturity ladder, they are more likely to
produce higher quality products with more predictable costs and cycle
times. The higher levels are more likely to correlate with higher
productivity and shorter cycle times as well.
 The idea of model-based software improvement is very simple. First,
pick the applicable Process Areas.
 Next perform an assessment of organizational practices relative to the
model.
 The organization's practices do not have to conform exactly to the
representative practices included in the model.
 They just have to meet the stated goals of each PA.
 Based on this comparison, assign a maturity level to the organization
and produce a list of strengths and weaknesses relative to the model.
 The output of the assessment is used to prioritize areas for
improvement. The idea is to improve deficient practices at the lower
maturity levels first and systematically move up in maturity level over
time using additional assessments to measure progress.

 Focusing on the lowest-level practices puts a firm foundation in place
before tackling the higher-level processes, and it gives the organization
time to internalize the changes.
 This approach nicely avoids the twin problems of putting practices in
place before required supporting practices are available and trying to
do too much too soon.
 So model-based improvement has a lot of very attractive features.
One of the most attractive is the level goal itself.
 The numerical level gives organizations a metric that can be easily
understood, can be used to measure progress, and can be used to
benchmark against other organizations.
 SEI has defined a formal appraisal methodology and provided a lead
assessor training and certification program.
 This means that assessment results, particularly those obtained using a
third-party assessor, will be reasonably consistent and can be used to
benchmark against other organizations.
 In fact SEI maintains a publicly accessible database of assessment
results giving the number of organizations at each level by industry
and the average time for an organization to improve from one level to
the next.
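
The staged level-determination rules described in the bullets above can be
summarised in a few lines of code. The Python sketch below is illustrative
only, not part of the CMMI itself; the input mapping and the function name
are hypothetical.

from typing import Dict

def staged_maturity_level(satisfied: Dict[int, bool]) -> int:
    """Return the highest maturity level N such that the goals of every PA
    at levels 2 through N are met; level 1 is the default otherwise."""
    level = 1
    for candidate in (2, 3, 4, 5):
        if satisfied.get(candidate, False):
            level = candidate
        else:
            break  # a gap at any level caps the overall rating
    return level

# Example: all level 2 and level 3 goals met, level 4 goals not yet met.
print(staged_maturity_level({2: True, 3: True, 4: False, 5: False}))  # prints 3
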
6. Six Sigma Concept:

Six Sigma at many organizations simply means a measure of quality that
strives for near perfection. Six Sigma is a disciplined, data-driven approach
and methodology for eliminating defects (driving towards six standard
deviations between the mean and the nearest specification limit) in any
process -- from manufacturing to transactional and from product to service.

 The statistical representation of Six Sigma describes quantitatively
how a process is performing.
 To achieve Six Sigma, a process must not produce more than 3.4
defects per million opportunities.
 A Six Sigma defect is defined as anything outside of customer
specifications.

 A Six Sigma opportunity is then the total quantity of chances for a
defect. Process sigma can easily be calculated using a Six Sigma
calculator (a small calculation sketch follows this list).
 The fundamental objective of the Six Sigma methodology is the
implementation of a measurement-based strategy that focuses on
process improvement and variation reduction through the application
of Six Sigma improvement projects.
 This is accomplished through the use of two Six Sigma sub-
methodologies: DMAIC and DMADV. The Six Sigma DMAIC
process (define, measure, analyze, improve, control) is an
improvement system for existing processes falling below specification
and looking for incremental improvement.
 The Six Sigma DMADV process (define, measure, analyze, design,
verify) is an improvement system used to develop new processes or
products at Six Sigma quality levels.
 It can also be employed if a current process requires more than just
incremental improvement.
 Both Six Sigma processes are executed by Six Sigma Green Belts and
Six Sigma Black Belts, and are overseen by Six Sigma Master Black
Belts.
 Six Sigma is a set of practices originally developed by Motorola to
systematically improve processes by eliminating defects.
 A defect is defined as nonconformity of a product or service to its
specifications.
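
The definitions above are all that is needed to turn an observed defect
count into a process sigma level. The Python sketch below is illustrative
only (it is not the Six Sigma calculator mentioned above); it assumes the
conventional 1.5 sigma shift used in standard sigma-level tables.

from statistics import NormalDist

def process_sigma(defects: int, opportunities: int, shift: float = 1.5) -> float:
    """Estimate the short-term sigma level of a process from observed
    defects per million opportunities (DPMO), adding the conventional
    1.5 sigma shift used in Six Sigma reporting."""
    dpmo = 1_000_000 * defects / opportunities
    long_term_z = NormalDist().inv_cdf(1 - dpmo / 1_000_000)
    return long_term_z + shift

# 3.4 defects per million opportunities corresponds to roughly six sigma.
print(round(process_sigma(defects=34, opportunities=10_000_000), 2))  # ~6.0
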

While the particulars of the methodology were originally formulated by Bill
Smith at Motorola in 1986, Six Sigma was heavily inspired by six preceding
decades of quality improvement methodologies such as quality control,
TQM, and Zero Defects. Like its predecessors, Six Sigma asserts the
following:

 Continuous efforts to reduce variation in process outputs are key to
business success.
 Manufacturing and business processes can be measured, analyzed,
improved, and controlled.
 Succeeding at achieving sustained quality improvement requires
commitment from the entire organization, particularly from top-level
management.

Sigma (the lower-case Greek letter σ) is used to represent the standard
deviation (a measure of variation) of a population; a lower-case 's' is an
estimate based on a sample.

 The term "six sigma process" comes from the notion that if one has
six standard deviations between the mean of a process and the nearest
specification limit, there will be practically no items that fail to meet
the specifications (a short worked illustration follows this list).
 This is the basis of the Process Capability Study, often used by quality
professionals.
 The term "Six Sigma" has its roots in this tool, rather than in simple
process standard deviation, which is also measured in sigmas.
 Criticism of the tool itself, and the way that the term was derived from
the tool, often sparks criticism of Six Sigma.
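
As a short worked illustration (added here for clarity, not part of the
original text): if the nearest specification limit lies k standard
deviations from the process mean, the one-sided probability of producing a
nonconforming item is

P(\text{defect}) = 1 - \Phi(k)

where \Phi is the standard normal cumulative distribution function. For a
perfectly centred process with k = 6, 1 - \Phi(6) \approx 1.0 \times 10^{-9}
(about two nonconforming items per billion, counting both tails); allowing
the conventional long-term drift of 1.5\sigma gives k = 4.5 and
1 - \Phi(4.5) \approx 3.4 \times 10^{-6}, i.e. the familiar 3.4 defects per
million opportunities.
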
