Module 5-SEPM
Concept of Quality:
• Quality is generally agreed to be beneficial.
• Requires precise definition of the required qualities.
Objective Assessment:
• Judging whether a system meets its quality requirements needs measurement.
o Assess the likely quality of the final system.
o Ensure that development methods are used that will lead to the required quality in the final system.
Quality Concerns:
• Key points in the Step Wise framework where quality is particularly emphasized:
o Objectives may include qualities of the application to be delivered.
o Activity 2.2 involves identifying installation standards and procedures, often
related to quality.
o Step 3 (Analyse project characteristics): review the overall quality aspects of the project plan at this stage.
Importance of software quality
General Expectation:
o Final customers and users are increasingly concerned about software quality,
particularly reliability.
o Errors in earlier steps can propagate and accumulate in later steps.
o Errors found later in the project are more expensive to fix.
o The unknown number of errors makes the debugging phase difficult to control.
System Requirements:
• Functional Requirements: Define what the system is to do.
• Resource Requirements: Specify allowable costs.
• Quality Requirements: State how well the system is to operate.
Measuring Quality:
• Good Measure: Relates the number of units to the maximum possible (e.g., faults
per thousand lines of code).
• Direct Measurement: Measures the quality itself (e.g., faults per thousand lines of
code).
• Indirect Measurement: Measures an indicator of the quality (e.g., number of user
inquiries at a help desk as an indicator of usability).
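As an illustrative sketch of a direct measurement, fault density per thousand lines of code can be computed as below (the function name and figures are invented for the example, not taken from the text):

```python
def faults_per_kloc(fault_count: int, lines_of_code: int) -> float:
    """Direct quality measurement: fault density per thousand lines of code."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return fault_count / (lines_of_code / 1000)

# e.g., 46 faults found in a 23,000-line system
print(faults_per_kloc(46, 23_000))  # 2.0 faults per KLOC
```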
Setting Targets:
• Impact on Project Team: Quality measurements set targets for team members.
• Meaningful Improvement: Ensure that improvements in measured quality are
meaningful.
o Example: Counting errors found in program inspections may not be meaningful if errors are allowed to pass to the inspection stage rather than being eradicated earlier.
Each quality characteristic should be specified with the following attributes:
1. Definition/Description
o Definition: Clear definition of the quality characteristic.
o Description: Detailed description of what the quality characteristic entails.
2. Scale
o Unit of Measurement: The unit used to measure the quality characteristic
(e.g., faults per thousand lines of code).
3. Test
o Practical Test: The method or process used to test the extent to which the
quality attribute exists.
4. Minimally Acceptable
o Worst Acceptable Value: The lowest acceptable value, below which the
product would be rejected.
5. Target Range
o Planned Range: The range of values within which it is planned that the quality
measurement value should lie.
6. Current Value
o Now: The value that applies currently to the quality characteristic.
o Example (Reliability): the acceptable level depends on system criticality.
o Target Range: e.g., a probability of failure on demand of less than 0.01.
4. Support Activity:
o Definition: Number of fault reports generated and processed.
o Scale: Count (number of reports).
o Test: Track and analyze the volume and resolution time of fault reports.
o Minimally Acceptable: A lower number of fault reports indicates better reliability.
o Target Range: e.g., fewer than 10 fault reports per month.
These measurements help quantify and assess the reliability and maintainability of
software systems, ensuring they meet desired quality standards.
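The attributes listed above (definition, scale, test, minimally acceptable value, target range, current value) can be captured in a simple record; this is an illustrative sketch with made-up field values, not part of any standard:

```python
from dataclasses import dataclass

@dataclass
class QualitySpec:
    """One quality characteristic, following the attributes listed above."""
    definition: str
    scale: str                   # unit of measurement
    test: str                    # how the attribute is assessed
    minimally_acceptable: float  # worst acceptable value (higher is better here)
    target_range: tuple          # (low, high) planned values
    current_value: float

    def is_acceptable(self) -> bool:
        return self.current_value >= self.minimally_acceptable

    def on_target(self) -> bool:
        low, high = self.target_range
        return low <= self.current_value <= high

reliability = QualitySpec(
    definition="Availability: percentage of scheduled uptime actually achieved",
    scale="percent",
    test="Monitor downtime over a one-month trial period",
    minimally_acceptable=98.0,
    target_range=(99.0, 100.0),
    current_value=99.5,
)
print(reliability.is_acceptable(), reliability.on_target())  # True True
```

Note the direction of "minimally acceptable" depends on the measure: for availability higher is better, while for fault counts lower is better.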
ISO 9126 is a significant standard for defining software quality attributes and providing a framework for assessing them. Here are the key characteristics it defines:
1. Functionality:
o Definition: The functions that a software product provides to satisfy user
needs.
o Sub-characteristics: Suitability, accuracy, interoperability, security,
compliance.
2. Reliability:
o Definition: The capability of the software to maintain its level of performance under stated conditions.
o Sub-characteristics: Maturity, fault tolerance, recoverability.
3. Usability:
o Definition: The effort needed to use the software.
o Sub-characteristics: Understandability, learnability, operability, attractiveness.
4. Efficiency:
o Definition: The ability to use resources in relation to the amount of work done.
o Sub-characteristics: Time behavior, resource utilization.
5. Maintainability:
o Definition: The effort needed to make changes to the software.
o Sub-characteristics: Analysability, changeability, stability, testability.
6. Portability:
o Definition: The ease with which the software can be transferred from one environment to another.
o Sub-characteristics: Adaptability, installability, coexistence, replaceability.
Quality in Use:
• Definition: Focuses on how well the software supports specific user goals in a specific context of use.
• Elements: Effectiveness, productivity, safety, satisfaction.
ISO 14598
ISO/IEC 14598 is a companion standard that defines the process for carrying out software product evaluations using a quality model such as ISO 9126.
ISO 9126 Sub-characteristics: Compliance and Interoperability
Compliance
• Definition: Refers to the degree to which the software adheres to application-
related standards or legal requirements, such as auditing standards.
• Purpose: Added as a sub-characteristic to all six primary ISO 9126 external characteristics.
• Example: Ensuring software meets specific industry regulations or international
standards relevant to its functionality, reliability, usability, efficiency,
maintainability, and portability.
Interoperability
• Definition: Refers to the ability of the software to interact with other systems
effectively.
• Clarification: ISO 9126 uses "interoperability" instead of "compatibility" to avoid
confusion with another characteristic called "replaceability".
• Importance: Ensures seamless integration and communication between different systems.
ISO 9126 Sub-characteristics: Maturity and Recoverability
Maturity
• Definition: Refers to the frequency of failure due to faults in the software; more mature software fails less often.
Recoverability
• Definition: Refers to the capability of the software to restore the system to its
normal operation after a failure or disruption.
Importance of Distinction
• Security: Focuses on access control and protecting the system from unauthorized
access, ensuring confidentiality, integrity, and availability.
• Recoverability: Focuses on system resilience and the ability to recover from
failures, ensuring continuity of operations.
ISO 9126 Sub-characteristics: Learnability and Attractiveness
Learnability
• Definition: Refers to the ease with which users can learn to operate the software.
• Focus: Primarily on the initial phase of user interaction with the software, until basic tasks can be completed.
Operability
• Definition: Refers to the ease with which users can operate and navigate the
software efficiently.
• Focus: Covers the overall usability of the software during regular use and
over extended periods.
Importance of Distinction
• Learnability: Emphasizes how quickly new users can begin to work with the software. Operability, on the other hand, emphasizes the efficiency and ease of use over extended periods and during regular use.
• Attractiveness: A recent addition under usability, attractiveness focuses on the aesthetic appeal and user interface design, particularly relevant in software products where user engagement and satisfaction are crucial, such as games and entertainment applications.
Application in Software Evaluation
• Learnability: Critical for software that requires quick adoption and minimal
training, ensuring users can start using the software effectively from the
outset.
ISO 9126 Sub-characteristics: Analysability, Changeability, and Stability
Analysability
• Definition: Refers to the ease with which the cause of a failure in the software can
be determined.
• Focus: Helps in diagnosing and understanding software failures or issues quickly and accurately.
Changeability
• Definition: Also known as flexibility, changeability refers to the ease with which
software can be modified or adapted to changes in requirements or environment.
Stability
• Definition: Refers to the capability of the software to avoid unexpected side-effects from modifications, maintaining its integrity despite ongoing changes or updates.
Clarification of Terms
• Changeability vs. Flexibility: Changeability focuses on the software's ability to
be modified without causing issues, while flexibility emphasizes its capacity to
adapt to diverse requirements or environments.
• Stability: Ensures that software modifications are managed effectively to minimize risks of unintended consequences or disruptions.
Application in Software Development
Portability Compliance
• Definition: Refers to the adherence of the software to standards that facilitate its
transferability and usability across different platforms or environments.
• Focus: Ensures that the software can run efficiently and effectively on various
hardware and software configurations without needing extensive modifications.
• Importance: Eases migration of the software between different hardware or software configurations.
Replaceability
• Definition: Refers to the ability of the software to replace older versions or
components without causing disruptions or compatibility issues.
• Focus: Emphasizes upward compatibility, allowing new versions or components to
seamlessly integrate with existing systems or environments.
Coexistence
• Definition: Refers to the ability of the software to peacefully share resources and
operate alongside other software components within the same environment.
• Focus: Does not necessarily involve direct data exchange but ensures
compatibility and non-interference with other software components.
• Importance: Enables integration of the software into complex IT ecosystems
without conflicts or performance degradation.
Clarification of Terms
• Coexistence vs. Interoperability: Coexistence requires only non-interfering operation alongside other software, whereas interoperability implies direct interaction and data exchange between components within the same environment.
ISO 9126 provides structured guidelines for assessing and managing software quality
characteristics based on the specific needs and requirements of the software product. It
emphasizes the variation in importance of these characteristics depending on the type and
context of the software product being developed.
1. Identify Quality Requirements: Determine which quality characteristics matter most for the product and its context of use.
2. Define Metrics and Measurements: Establish measurable criteria and metrics for
evaluating each quality characteristic, ensuring they align with the defined
objectives and user expectations.
3. Plan Quality Assurance Activities: Develop a comprehensive plan for quality
assurance activities, including testing, verification, and validation processes to
ensure adherence to quality standards.
4. Monitor and Improve Quality: Continuously monitor software quality
throughout the development lifecycle, identifying areas for improvement and implementing corrective actions.
The relative importance of individual characteristics varies by system type:
• Reliability: Critical for safety-critical systems where failure can have severe consequences. Measures like mean time between failures (MTBF) are essential.
• Efficiency: Important for real-time systems where timely responses are crucial.
Measures such as response time are key indicators.
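For instance, MTBF and availability can be estimated from operational data using the standard formulas; the figures below are made up for illustration:

```python
def mtbf(total_operating_hours: float, number_of_failures: int) -> float:
    """Mean time between failures: operating time divided by failure count."""
    return total_operating_hours / number_of_failures

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the system is operational: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

m = mtbf(1000.0, 4)              # 250 hours between failures
print(m, availability(m, 10.0))  # availability with a 10-hour mean repair time
```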
External Quality Measurements:
• Internal measurements like code execution times can help predict external qualities
like response time during software design and development.
• Predicting external qualities from internal measurements is challenging and often
requires validation in the specific environment where the software will operate.
• ISO 9126 acknowledges that correlating internal code metrics to external quality
characteristics like reliability can be difficult.
• This challenge is addressed in a technical report rather than a full standard,
indicating ongoing research and development in this area.
Based on the ISO 9126 framework:
1. Measurement Indicators Across Development Stages:
o Early Stages: Qualitative indicators like checklists and expert judgments are
used to assess compliance with predefined criteria. These are subjective and depend on the judgment of the assessor.
o Independent Assessment: Aims to provide an unbiased evaluation of
software quality for stakeholders like regulators or consumers.
TABLE 13.2 Mapping response times onto user satisfaction
Response time (seconds)    User satisfaction score
< 2                        5
2-3                        4
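A mapping like the one in Table 13.2 can be applied with a simple threshold lookup. Only the two rows recoverable from the table are encoded here; the fallback score for slower responses is an assumption, not taken from the table:

```python
def satisfaction_score(response_seconds: float) -> int:
    """Map a measured response time onto a 1-5 user satisfaction score.

    Encodes the first two rows of Table 13.2; times above 3 seconds
    fall through to a placeholder low score (an assumption).
    """
    if response_seconds < 2:
        return 5
    if response_seconds <= 3:
        return 4
    return 1  # placeholder for the table's missing rows

print(satisfaction_score(1.5), satisfaction_score(2.5))  # 5 4
```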
Software products can be evaluated and compared on their quality characteristics using a weighted scoring method:
1. Selection of Characteristics: Identify the quality characteristics that matter to the intended users.
2. Scoring and Weighting: For each characteristic, assign an importance weight and give each product a quality score.
3. Calculation of Overall Score:
o Weighted scores are calculated for each quality characteristic by multiplying
the quality score by its importance weight.
o The weighted scores for all characteristics are summed to obtain an overall
score for each software product.
4. Comparison and Preference Order:
o Products are then ranked in order of preference based on their overall scores. Higher scores indicate products that are more likely to satisfy user requirements and preferences across the evaluated quality characteristics.
Characteristic     Weight   Product A score   A weighted   Product B score   B weighted
Usability            3            1                3             3                9
Efficiency           4            2                8             2                8
Maintainability      2            3                6             1                2
Overall                                           17                             19
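The weighted totals in the table above can be reproduced with a short calculation (the scores and weights are taken directly from the table):

```python
# Per characteristic: (importance weight, Product A score, Product B score)
ratings = {
    "Usability":       (3, 1, 3),
    "Efficiency":      (4, 2, 2),
    "Maintainability": (2, 3, 1),
}

def overall(product_index: int) -> int:
    """Sum of (quality score x importance weight) across all characteristics."""
    return sum(w * scores[product_index] for w, *scores in ratings.values())

print(overall(0), overall(1))  # 17 19 - Product B is preferred
```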
2. Variability in Results: The outcome of such assessments can vary significantly
based on the weightings assigned to each software characteristic. Different
stakeholders within the community may have varying requirements and priorities.
3. Caution in Interpretation: It's crucial to exercise caution in interpreting and
applying the results of these assessments. While they strive for objectivity, they
are still influenced by the criteria set and the relative importance assigned to each
quality characteristic.
4. Community Needs: Understanding the specific needs and expectations of the user
community is essential. The assessment should align closely with the community's
goals and the functionalities they require from the software tools being evaluated.
5. Transparency and Feedback: Providing transparency in the assessment process
and gathering feedback from community members can enhance the credibility and
relevance of the evaluation results. This helps ensure that the assessment
adequately reflects the needs and perspectives of the community.
Understanding the differences between product metrics and process metrics is crucial in
software development:
1. Product Metrics:
o Purpose: Measure the characteristics of the software product being
developed.
o Examples: Size metrics such as lines of code or function points, together with measures of effort and duration.
2. Process Metrics:
o Purpose: Measure the effectiveness and efficiency of the development
process itself.
o Examples:
■ Defect Metrics: Average number of defects found per hour of inspection, average time taken to correct defects, and average number of failures detected during testing per line of code; these indicate how effective reviews are in finding defects.
■ Productivity Metrics: Measure the efficiency of the development process, for example the amount of work produced per unit of effort.
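Defect-based process metrics like those above can be computed directly from inspection and testing logs; this sketch uses invented figures:

```python
def defects_per_inspection_hour(defects_found: int, inspection_hours: float) -> float:
    """Process metric: how productive inspections are at finding defects."""
    return defects_found / inspection_hours

def failures_per_kloc_tested(failures: int, lines_tested: int) -> float:
    """Process metric: failures detected during testing per thousand lines."""
    return failures / (lines_tested / 1000)

print(defects_per_inspection_hour(12, 8.0))  # 12 defects over 8 hours of inspection
print(failures_per_kloc_tested(30, 15_000))  # 30 failures over 15,000 lines tested
```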
Differences:
• Focus: Product metrics focus on the characteristics of the software being built
(size, effort, time), while process metrics focus on how well the development
process is performing (effectiveness, efficiency, quality).
• Use: Product metrics are used to gauge the attributes of the final software product,
aiding in planning, estimation, and evaluation. Process metrics help in assessing and improving the way the software is developed.
By employing both types of metrics effectively, software development teams can better
manage projects, optimize processes, and deliver high-quality software products that meet
user expectations.
Product versus process quality management
Product quality management focuses on evaluating and ensuring the quality of the
software product itself. This approach is typically more straightforward to implement and
measure after the software has been developed.
Aspects:
3. Benefits:
o Provides clear benchmarks for evaluating the success of the software
development project.
o Facilitates comparisons with user requirements and industry standards.
o Helps in identifying areas for improvement in subsequent software versions or projects.
4. Challenges:
o Predicting final product quality based on intermediate stages (like early code
modules or prototypes) can be challenging.
o Metrics may not always capture the full complexity or performance of the
final integrated product.
Aspects:
o Effectiveness of process improvements may not always translate directly into
improved product quality without careful management and integration.
Integration and Synergy
• While product and process quality management approaches have distinct focuses,
they are complementary.
• Effective software development teams often integrate both approaches to achieve optimal results.
• By improving process quality, teams can enhance product quality metrics, leading
to more reliable, efficient, and user-friendly software products.
ISO 9001:2000, now superseded by newer versions but still relevant in principle, outlines
standards for Quality Management Systems (QMS). Here’s a detailed look at its key
aspects and how it applies to software development:
ISO 9001:2000 is part of the ISO 9000 series, which sets forth guidelines and
requirements for implementing a Quality Management System (QMS).
The focus of ISO 9001:2000 is on ensuring that organizations have effective processes in
place to consistently deliver products and services that meet customer and regulatory
requirements.
Key Elements:
1. Fundamental Features:
o Describes the basic principles of a QMS, including customer focus,
leadership, involvement of people, process approach, and continuous
improvement.
o Emphasizes the importance of a systematic approach to managing processes
and resources.
2. Applicability to Software Development:
o ISO 9001:2000 can be applied to software development by ensuring that the
development processes are well-defined, monitored, and improved.
o It focuses on the development process itself rather than the end product certification (unlike product certifications such as CE marking).
3. Certification Process:
o Organizations seeking ISO 9001:2000 certification undergo an audit process
conducted by an accredited certification body.
Core principles:
o Customer focus: Understanding and meeting customer requirements to enhance customer satisfaction.
o Leadership: Establishing unity of purpose and direction.
o Involvement of people: Engaging the entire organization in achieving quality objectives.
o Process approach: Managing activities and resources as processes to achieve
desired outcomes.
o Continuous improvement: Continually improving QMS effectiveness.
• Implement corrective and preventive actions to address deviations from quality
standards.
• Ensure that subcontractors and external vendors also adhere to quality standards
through effective quality assurance practices.
Criticisms:
• Perceived Value: Critics argue that ISO 9001 certification does not guarantee the quality of the end product but rather focuses on the process.
• Cost and Complexity: Obtaining and maintaining certification can be costly and
time-consuming, which may pose challenges for smaller organizations.
Despite these criticisms, ISO 9001:2000 provides a structured framework that, when
implemented effectively, can help organizations improve their software development processes.
It emphasizes continuous improvement and customer satisfaction, which are crucial aspects
in the competitive software industry.
1. Customer Focus:
o Understanding and meeting customer requirements to enhance satisfaction.
2. Leadership:
o Providing unity of purpose and direction for achieving quality objectives.
3. Involvement of People:
o Engaging employees at all levels to contribute effectively to the QMS.
4. Process Approach:
o Identifying interrelated activities and managing these processes as a system to achieve organizational objectives.
5. Continuous Improvement:
o Continually enhancing the effectiveness of processes based on objective
measurements and analysis.
6. Factual Approach to Decision Making:
o Making decisions based on analysis of data and information.
7. Mutually Beneficial Supplier Relationships:
o Building and maintaining good relationships with suppliers to enhance
capabilities and performance.
5. Resource Management:
o Ensuring adequate resources (human, infrastructure, etc.) are available for
effective process execution.
6. Measurement and Monitoring:
o Designing methods to measure and monitor process effectiveness and
efficiency.
o Gathering data and identifying discrepancies between actual performance
and targets.
7. Analysis and Improvement:
o Analyzing causes of discrepancies and implementing corrective actions to
improve processes continually.
Detailed Requirements
1. Documentation:
o Maintaining documented objectives, procedures (in a quality manual), plans, and records that demonstrate adherence to the QMS.
o Implementing a change control system to manage and update documentation as necessary.
2. Management Responsibility:
o Top management must actively manage the QMS and ensure that processes
conform to quality objectives.
3. Resource Management:
o Ensuring adequate resources, including trained personnel and infrastructure,
are allocated to support QMS processes.
4. Production and Service Delivery:
o Planning, reviewing, and controlling production and service delivery
processes to meet customer requirements.
o Communicating effectively with customers and suppliers to ensure clarity of requirements and expectations.
Process capability models
Here’s an overview of some key concepts and methodologies related to process-based
quality management:
Historical Perspective
1. Definition:
o TQM focuses on continuous improvement of processes through
measurement and redesign.
o It advocates that organizations continuously enhance their processes to better meet customer expectations.
Business Process Re-engineering (BPR):
1. Objective: Radical redesign of business processes to achieve dramatic improvements in cost, quality, service, and speed.
1. SEI Capability Maturity Model (CMM) and CMMI:
o Developed by the Software Engineering Institute (SEI), CMM and CMMI
provide a framework for assessing and improving the maturity of processes.
o They define five maturity levels, from initial (ad hoc processes) to optimized (continuous improvement).
o CMMI (Capability Maturity Model Integration) integrates various disciplines
beyond software engineering.
2. ISO 15504 (SPICE):
o Provides a structured framework for assessing software processes and identifying opportunities for improvement early in the development lifecycle.
The SEI Capability Maturity Model (CMM) is a framework developed by the Software
Engineering Institute (SEI) to assess and improve the maturity of software development
processes within organizations.
It categorizes organizations into five maturity levels based on their process capabilities and
practices:
SEI CMM Levels:
1. Level 1: Initial
o Characteristics:
■ Chaotic and ad hoc development processes.
■ Lack of defined processes or management practices.
■ Relies heavily on individual heroics to complete projects.
o Outcome:
■ Project success depends largely on the capabilities of individual team
members.
■ High risk of project failure or delays.
2. Level 2: Repeatable
o Characteristics:
■ Basic project management practices like planning and tracking are established, so earlier successes can be repeated on similar projects.
3. Level 3: Defined
o Characteristics:
■ Roles and responsibilities are clear across the organization.
■ Training programs are implemented to build employee capabilities.
■ Systematic reviews are conducted to identify and fix errors early.
o Outcome:
■ Consistent and standardized processes across the organization.
■ Better management of project risks and quality.
4. Level 4: Managed
o Characteristics:
■ Process performance is measured quantitatively and used to control projects.
o Outcome:
■ Predictable process performance and a measurement-driven culture.
5. Level 5: Optimizing
o Characteristics:
■ Process metrics are analyzed to identify areas for improvement.
■ Lessons learned from projects are used to refine and enhance processes.
■ Innovation and adoption of new technologies are actively pursued.
o Outcome:
■ Continuous innovation and improvement in processes.
■ High adaptability to change and efficiency in handling new
challenges.
■ Leading edge in technology adoption and process optimization.
SEI CMM has been instrumental not only in enhancing the software development
practices within organizations but also in establishing benchmarks for industry standards.
It guides organizations from ad hoc practices (Level 1) to optimized and continuously improving processes (Level 5), thereby fostering better quality and predictability.
Evolution into CMMI:
1. Origins:
o Initial Development: The original CMM was developed by the SEI with sponsorship from the US Department of Defense.
o Expansion and Adaptation: Over time, various specific CMMs were
developed for different domains such as software acquisition (SA-CMM),
systems engineering (SE-CMM), and people management (PCMM). These
models provided focused guidance but lacked integration and consistency.
2. Need for Integration:
o Challenges: Organizations using multiple CMMs faced issues like overlapping practices, inconsistencies in terminology, and difficulty in integrating practices across different domains.
o CMMI Solution: CMMI (Capability Maturity Model Integration) was
introduced to provide a unified framework that could be applied across multiple domains.
3. Staged Improvement:
o Maturity Levels: Each maturity level groups the process areas considered important for enhancing process capability, allowing organizations to incrementally improve their processes as they move from one maturity level to the next.
o Integration across Domains: Unlike the specific CMMs for various disciplines, CMMI uses a more abstract and generalized set of terminologies that can be applied uniformly across different domains.
TABLE 13.4 CMMI key process areas
Level Key process areas
Benefits of CMMI
ISO/IEC 15504, also known as SPICE (Software Process Improvement and Capability
dEtermination), is a standard for assessing and improving software development
processes. Here are the key aspects of ISO 15504 process assessment:
• Reference Model: ISO 15504 uses a process reference model as a benchmark
against which actual processes are evaluated. The default reference model is often
ISO 12207, which outlines the processes in the software development life cycle
(SDLC) such as requirements analysis, architectural design, implementation,
testing, and maintenance.
Process Attributes
• Nine Process Attributes: ISO 15504 assesses processes based on nine attributes,
which are:
1. Process Performance (PP): The extent to which the process achieves its expected outcomes.
2. Performance Management (PM): How well the performance of the process is planned and managed.
3. Work Product Management (WPM): How well the work products produced by the process are defined, reviewed, and managed.
4. Process Definition (PD): The extent to which a standard process is defined.
5. Process Deployment (PDE): How well the defined process is deployed across the organization.
6. Process Measurement (PME): Evaluates the use of measurements to
manage and control the process.
7. Process Control (PC): Assesses the monitoring and control mechanisms in
place for the process.
8. Process Innovation (PI): Assesses the identification of opportunities for improvement in the process.
9. Process Optimization (PO): Focuses on optimizing the process to improve
efficiency and effectiveness.
Compatibility with CMMI
• Alignment with CMMI: ISO 15504 and CMMI share similar goals of assessing
and improving software development processes. While CMMI is more
comprehensive and applicable to a broader range of domains, ISO 15504 provides
a structured approach to process assessment specifically tailored to software
development.
TABLE 13.5 ISO/IEC 15504 process attributes
2. Managed process
  2.2 Work product management: Work products are properly defined and reviewed to ensure they meet requirements
3. Established process
  3.1 Process definition: The processes to be carried out are carefully defined
  3.2 Process deployment: The processes defined above are properly executed by properly trained staff
4. Predictable process
  4.1 Process measurement: Performance targets are set for each sub-process and data collected to monitor performance
  4.2 Process control: On the basis of the data collected by 4.1, corrective action is taken if there is unacceptable variation from the targets
5. Optimizing process
  5.1 Process innovation: As a result of the data collected by 4.1, opportunities for improving processes are identified
  5.2 Process optimization: The opportunities for process improvement are properly evaluated and, where appropriate, are effectively implemented
When assessors are judging the degree to which a process attribute is being fulfilled they
allocate one of the following scores:
N: Not achieved
P: Partially achieved
L: Largely achieved
F: Fully achieved
Identifying evidence is crucial to determine the level of achievement for each attribute.
Here’s how evidence might be identified and evaluated for assessing the process
attributes, taking the example of requirement analysis processes:
1. Process Definition (PD):
o Evidence: A section in the procedures manual that outlines the steps, roles, and responsibilities for conducting requirements analysis.
o Assessment: Assessors would review the documented procedures to ensure
they clearly define how requirements analysis is to be conducted.
2. Process Deployment:
o Evidence: Records showing that properly trained staff carried out each step of the requirements analysis process, indicating that the defined process is being implemented and deployed effectively (3.2 in Table 13.5).
Using ISO/IEC 15504 Attributes
• Process Control (PC):
o Evidence: Documentation of control mechanisms applied to the requirements analysis process, such as regular reviews, audits, and corrective action reports.
o Assessment: Assessors would review the control mechanisms to ensure they
effectively monitor the process and address deviations promptly.
• Process Optimization (PO):
o Evidence: Records of process improvement initiatives, feedback
mechanisms from stakeholders, and innovation in requirements analysis
techniques.
o Assessment: Assessors would examine how the organization identifies
opportunities for process improvement and implements changes to optimize
the requirements analysis process.
Importance of Evidence
• Validation: It validates that the process attributes are not just defined on paper but
are effectively deployed and managed.
• Continuous Improvement: Identifying evidence helps in identifying areas for
improvement and optimizing processes over time.
Improving process maturity at UVW, whose software development supports machine tool equipment, involves addressing several key challenges identified within the organization.
Here’s a structured approach, drawing from CMMI principles, to address these issues and
improve process maturity:
1. Resource Overcommitment:
o Issue: Lack of proper liaison between the Head of Software Engineering and
Project Engineers leads to resource overcommitment across new systems
and maintenance tasks simultaneously.
o Impact: Delays in software deliveries due to stretched resources.
2. Requirements Volatility:
o Issue: Initial testing of prototypes often reveals major new requirements.
o Impact: Scope creep and changes lead to rework and delays.
3. Change Control Challenges:
o Issue: Lack of proper change control results in increased demands for
software development beyond original plans.
• Actions:
• Expected Outcomes:
o Improved adherence to delivery schedules.
o Enable better resource allocation and management across different projects.
Change control actions:
o Establish formal change control procedures with approval workflows.
o Ensure communication channels between development teams, testing groups, and project stakeholders are streamlined for change notifications.
o Implement impact assessment mechanisms to evaluate the effects of changes on project scope and schedule.
Testing improvements:
o Adopt automated testing frameworks where feasible to expedite testing cycles.
o Foster a culture of quality assurance and proactive bug identification
throughout the development phases.
• Expected Outcomes:
o Faster turnaround in identifying and resolving bugs during testing.
o Timely completion of system testing phases, enabling on-time product releases.
Moving Towards Process Maturity Levels
• Level 1 to Level 2 Transition:
o Focus: Transition from ad-hoc, chaotic practices to defined processes with
formal planning and control mechanisms.
Six Sigma
Here’s how UVW can adopt and benefit from Six Sigma:
Six Sigma is a disciplined, data-driven approach to improving quality by
identifying and eliminating causes of defects, thereby reducing variability in processes.
The goal is to achieve a level of quality where the process produces no more than 3.4
defects per million opportunities.
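The 3.4 defects-per-million threshold can be checked against measured data with a direct defects-per-million-opportunities (DPMO) calculation; the figures here are invented for illustration:

```python
def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities, the core Six Sigma metric."""
    return defects * 1_000_000 / (units * opportunities_per_unit)

rate = dpmo(7, 10_000, 50)  # 7 defects across 500,000 total opportunities
print(rate, rate <= 3.4)    # 14.0 DPMO - not yet at the Six Sigma level
```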
Steps to Implement Six Sigma at UVW
1. Define:
o Objective: Clearly define the problem areas and goals for improvement.
o Action: Identify critical processes such as software development, testing, and deployment where defects and variability are impacting quality and delivery timelines.
2. Measure:
o Objective: Quantify current process performance and establish baseline metrics.
o Action: Use statistical methods to measure defects, cycle times, and other
relevant metrics in software development and testing phases.
3. Analyse:
o Objective: Identify the root causes of defects and variability in process performance.
4. Improve:
o Objective: Implement changes that address the root causes identified.
5. Control:
o Objective: Implement measures such as control charts, regular audits, and performance reviews to sustain improvements.
Application to UVW's Software Development
• Focus Areas:
o Addressing late deliveries due to resource overcommitment.
o Managing requirements volatility and change control effectively.
o Enhancing testing processes to reduce defects and delays in system testing phases.
• Tools and Techniques:
o Use of DMAIC (Define, Measure, Analyse, Improve, Control) for existing
process improvements.
• Cost Savings: Reduced rework and operational costs associated with defects.
The discussion highlights several key themes in software quality improvement over time,
emphasizing shifts in practices and methodologies:
1. Increasing Visibility:
o Early practices like Gerald Weinberg's 'egoless programming' promoted code
review among programmers, enhancing visibility into each other's work.
o Modern practices extend this visibility to include walkthroughs, inspections,
and formal reviews at various stages of development, ensuring early
detection and correction of defects.
2. Procedural Structure:
o Initially, software development lacked structured methodologies, but over time, methodologies with defined processes for every stage (like Agile, Waterfall, etc.) have become prevalent.
o Structured programming techniques and 'clean-room' development further enforce procedural rigor to enhance software quality.
3. Checking Intermediate Stages:
o Traditional approaches involved waiting until a complete, albeit imperfect, version of the software existed before checking it.
o Modern practice checks the intermediate products of each stage, so defects are found closer to where they are introduced.
4. Inspections:
o Inspections are critical in ensuring quality at various development stages, not
just in coding but also in documentation and test case creation.
o Properly managed, inspections can enhance team collaboration and spirit.
o They also facilitate the dissemination of good programming practices and improve overall software quality by involving stakeholders from different stages of development.
• The inspection is led by a moderator who has had specific training in the
technique.
• The other participants have defined roles. For example, one person will act as a recorder and note all defects found, and another will act as reader and take the participants through the document being inspected.
The late 1960s marked a pivotal period in software engineering where the complexity of
software systems began to outstrip the capacity of human understanding and testing
capabilities. Here are the key developments and concepts that emerged during this time:
1. Limits of Testing:
o Testing alone could demonstrate the presence of defects, but could not guarantee correctness.
2. Structured Programming:
o To manage complexity, structured programming advocated breaking down software into manageable components.
o Each component was designed to be self-contained with clear entry and exit points, facilitating easier understanding and validation by human programmers.
3. Clean-Room Software Development:
o Clean-room development, pioneered by Harlan Mills at IBM, separates the work of a specification team, a development team, and a certification team.
4. Incremental Development:
o Systems were developed incrementally, ensuring that each increment was
capable of operational use by end-users.
o This approach avoided the pitfalls of iterative debugging and ad-hoc
modifications, which could compromise software reliability.
5. Verification and Validation:
o Clean-room development emphasized rigorous verification at the development stage rather than relying on extensive testing to identify and fix errors.
o The certification team's testing was thorough and continued until statistical models showed that the software failure rates were acceptably low.
Overall, these methodologies aimed to address the challenges posed by complex software systems by promoting structured, systematic development processes that prioritize correctness from the outset rather than relying on post hoc testing and debugging. Clean-room software development, in particular, contributed to the evolution of quality assurance practices in software engineering, emphasizing formal methods and rigorous validation techniques.
Formal methods
Formal methods in software development, and the related idea of software quality circles, can be summarized as follows:
• Specification: Formal methods use mathematically based specification languages such as Z and VDM. They define preconditions (input conditions) and postconditions (output conditions) for procedures.
• Purpose: These methods ensure precise and unambiguous specifications and allow for mathematical proof of algorithm correctness based on specified conditions.
• Adoption: Despite being taught widely in universities, formal methods are rarely used in mainstream software development. Object Constraint Language (OCL) is a newer, potentially more accessible development in this area.
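The precondition/postcondition idea can be mimicked in ordinary code with runtime assertions. A minimal sketch; the square-root routine is an illustrative choice, not an example from the text:

```python
import math

def guarded_sqrt(x: float) -> float:
    # Precondition (input condition): the argument must be non-negative.
    assert x >= 0.0, "precondition violated: x must be >= 0"
    result = math.sqrt(x)
    # Postcondition (output condition): squaring the result recovers x.
    assert abs(result * result - x) < 1e-9, "postcondition violated"
    return result

print(guarded_sqrt(9.0))  # 3.0
```

Formal methods go further: with Z or VDM the conditions are written mathematically, so the correctness of the algorithm can be proved rather than merely checked at run time.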
Most Probable Error Lists
Compiling a most probable error list is a proactive approach to improving software development processes:
1. Identifying Recurring Errors: The team examines the kinds of errors that crop up repeatedly in its work.
2. Compiling the List: The team reviews past projects and current issues to compile a list of these common errors. This list documents specific types of mistakes that have been identified as recurring.
3. Proposing Solutions: For each type of error identified, the team proposes measures to reduce or eliminate its occurrence in future projects. For example:
o Example Measure: Producing test cases simultaneously with requirements specification to ensure early validation.
o Example Measure: Conducting dry runs of test cases during inspections to catch errors early in the process.
4. Development of Checklists: The proposed measures are formalized into checklists
that can be used during inspections or reviews of requirements specifications.
This approach aligns well with quality circles and other continuous improvement
methodologies by fostering a culture of proactive problem-solving and learning from past
experiences.
Lessons Learned Report
The concept of Lessons Learned reports and Post Implementation Reviews (PIRs) are
crucial for organizational learning and continuous improvement in project management.
Here’s a breakdown of these two types of reports:
• Purpose: The Lessons Learned report is prepared by the project manager immediately after the completion of the project. Its purpose is to capture insights and experiences gained during the project execution.
• Content: Typically, a Lessons Learned report includes:
o Successes and Failures: What worked well and what didn’t during the project.
o Challenges Faced: Difficulties encountered and how they were overcome.
o Best Practices: Practices or approaches that proved effective.
o Lessons Learned: Key takeaways and recommendations for future projects.
Post Implementation Review (PIR)
• Purpose: A PIR takes place after a significant period of operation of the new
system (typically after it has been in use for some time). Its focus is on evaluating
the effectiveness of the implemented system rather than the project process itself.
• Conducted by: Someone who was not directly involved in the project, to ensure neutrality and objectivity.
• Content: A PIR includes:
o System Performance: How well the system meets its intended objectives
and user needs.
o User Feedback: Feedback from users on system usability and
functionality.
o Improvement Recommendations: Changes or enhancements suggested to improve system effectiveness.
• Audience: The audience typically includes stakeholders who will benefit from insights into the system’s operational performance and areas for improvement.
• Outcome: Recommendations from a PIR often lead to changes aimed at enhancing
the effectiveness and efficiency of the system.
• Continuous Improvement: Both reports provide a basis for making informed decisions and improvements in future projects and system implementations.
Testing
The text discusses the planning and management of testing in software development,
highlighting the challenges of estimating the amount of testing required due to unknowns,
such as the number of bugs left in the code.
It introduces the V-process model as an extension of the waterfall model, emphasizing the
importance of validation activities at each development stage.
1. Quality Judgement:
o The final judgement of software quality is based on its correct execution.
2. Testing Challenges:
o Estimating the remaining testing work is difficult due to unknown bugs in the code.
3. V-Process Model:
o Introduced as an extension of the waterfall model.
o Diagrammatic representation provided in Figure 13.5.
o Stresses the necessity for validation activities matching the project creation
activities.
4. Validation Activities:
o Each development step has a matching validation process.
o Defects found can cause a loop back to the corresponding development stage for rework.
5. Discrepancy Handling:
o Feedback should occur only when there is a discrepancy between the specified requirements and what has actually been delivered.
o Original designers are responsible for checking that software meets the
specified requirements, discovering any misunderstandings by developers.
Framework for Planning:
• The V-process model provides a structure for making early planning decisions about testing.
• Decisions can be made about the types and amounts of testing required from the
beginning of the project.
Off-the-Shelf Software:
• If software is acquired off-the-shelf, certain stages like program design and coding
are not relevant.
• Consequently, program testing would not be necessary in this scenario.
• User acceptance tests remain valid and necessary regardless of the software being
off-the-shelf.
Verification and Validation
1. Objectives:
o Both techniques aim to remove errors from software.
2. Definitions:
o Verification: Ensures outputs of one development phase conform to the previous
phase's outputs.
o Validation: Ensures fully developed software meets its requirements
specification.
3. Objectives Clarified:
o Verification Objective: Check if artifacts produced after a phase conform to
those from the previous phase (e.g., design documents conform to requirements
specifications).
o Validation Objective: Check if the fully developed and integrated software
satisfies customer requirements.
4. Techniques:
o Verification is typically carried out through reviews, inspections, and checks of intermediate work products.
o Validation is carried out primarily by testing the fully integrated software against its requirements.
Testing activities
The text provides an overview of test case design approaches, levels of testing, and main
testing activities in software development.
It emphasizes the differences between black-box and white-box testing, the stages of
testing (unit, integration, system), and the activities involved in the testing process.
Test Case Design Approaches
1. Black-Box Testing:
o Test cases are designed using only the functional specification.
o Based on input/output behavior without knowledge of internal structure.
o Also known as functional testing or requirements-driven testing.
2. White-Box Testing:
o Test cases are designed based on the analysis of the source code.
o Requires knowledge of the internal structure of the program.
o Also known as structural testing.
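Black-box test design can be sketched with equivalence partitioning: cases are picked from the functional specification alone. The `classify_marks` function and its grade boundaries are hypothetical:

```python
def classify_marks(marks: int) -> str:
    """Hypothetical specification: 0-39 fail, 40-59 pass, 60-100 distinction."""
    if not 0 <= marks <= 100:
        raise ValueError("marks out of range")
    if marks < 40:
        return "fail"
    if marks < 60:
        return "pass"
    return "distinction"

# One representative per equivalence class plus the boundary values,
# chosen purely from the specification, never from the code body.
cases = [(0, "fail"), (39, "fail"), (40, "pass"), (59, "pass"),
         (60, "distinction"), (100, "distinction")]
for marks, expected in cases:
    assert classify_marks(marks) == expected
```

A white-box tester would instead read the branch structure above and aim to cover every path through it.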
Levels of Testing
1. Unit Testing:
o Tests individual components or units of a program.
o Conducted as soon as the coding for each module is complete.
o Allows for parallel activities since modules are tested separately.
o Referred to as testing in the small.
2. Integration Testing:
o Checks for errors in interfacing between modules.
o Units are integrated step by step and tested after each integration.
o Referred to as testing in the large.
3. System Testing:
o Tests the fully integrated system to ensure it meets requirements.
o Conducted after integration testing.
Testing Activities
1. Test Planning:
o Involves determining relevant test strategies and planning for any required
test bed.
o Test bed setup is crucial, especially for embedded applications.
2. Test Suite Design:
o Planned testing strategies are used to design the set of test cases (test suite).
3. Test Case Execution and Result Checking:
o Each test case is executed, and results are compared with expected outcomes.
o Failures are noted for test reporting when there is a mismatch between actual and expected results.
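The execute-then-compare loop described above can be written as a miniature harness. The unit under test and its deliberate defect are hypothetical:

```python
def run_suite(unit, test_suite):
    """Execute each test case and compare the actual result with the
    expected outcome; mismatches are collected for test reporting."""
    failures = []
    for test_input, expected in test_suite:
        actual = unit(test_input)
        if actual != expected:
            failures.append((test_input, expected, actual))
    return failures

# Hypothetical unit under test: absolute value with a defect for negatives.
buggy_abs = lambda x: x if x > 0 else 0

suite = [(5, 5), (0, 0), (-3, 3)]
print(run_suite(buggy_abs, suite))  # [(-3, 3, 0)]: one failure to report
```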
The text describes the detailed process and activities involved in software test reporting,
debugging, error correction, defect retesting, regression testing, and test closure.
It highlights the importance of formal issue recording, the adjudication of issues, and
various testing strategies to ensure software quality.
Test Reporting
1. Issue Raising:
o Report discrepancies between expected and actual results.
2. Issue Recording:
o Formal recording of issues and their history.
o Informal intimation to the development team to optimize turnaround time.
1. Debugging:
o The reported failure is analyzed to locate the error causing it.
2. Error Correction:
o Once the error is located, the code is modified to correct it.
3. Defect Retesting:
o The failed test cases are rerun to verify that the defect has actually been fixed.
4. Regression Testing:
o Previously passed test cases are rerun to check that the corrections have not introduced new failures elsewhere.
Test Closure
1. Test Completion:
o Archiving documents related to lessons learned, test results, and logs for
future reference.
2. Time-Consuming Activity:
o Debugging is noted as usually the most time-consuming activity in the
testing process.
Who performs testing?
The text describes who performs testing in organizations, the importance and benefits of
test automation, and various types of automated testing tools.
It emphasizes that while test automation can significantly reduce human effort, improve
thoroughness, and lower costs, different tools have distinct advantages and challenges.
Test Automation
1. Role of Automation:
o Reduces the human effort required for testing.
o Enables sophisticated test case design techniques.
2. Benefits of Test Automation:
o More testing with a large number of test cases in a short period without significant cost overhead.
o Automated test results are more reliable and eliminate human errors.
o Simplifies regression testing by running old test cases repeatedly.
o Reduces monotony, boredom, and errors in running the same test cases repeatedly.
o Substantial cost and time reduction in testing and maintenance phases.
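Rerunning old test cases is exactly what unit-test frameworks automate. A sketch using Python's standard `unittest`; the `discount` function is a hypothetical unit kept under regression:

```python
import unittest

def discount(price: float, percent: float) -> float:
    """Hypothetical unit whose past behaviour must be preserved."""
    return round(price * (1 - percent / 100), 2)

class RegressionSuite(unittest.TestCase):
    # Test cases retained from earlier releases; rerun after every change
    # to catch regressions without manual effort.
    def test_basic_discount(self):
        self.assertEqual(discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main(exit=False)
```

In a continuous-integration setup such a suite runs on every commit, which is what makes regression testing cheap enough to do constantly.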
3. Model-Based Testing Tools:
o Use a model of the software's behaviour, such as state models, to generate tests.
o The generated tests adequately cover the state space described by the model.
Estimating Errors Based on Code Size
1. Historical Data:
o Use historical data to estimate errors per 1000 lines of code from past projects.
o Apply this ratio to new system development to estimate potential errors based on the code size.
2. Error Seeding:
o A known number of artificial errors is deliberately inserted (seeded) into the code before testing.
o Example: if 60% of the seeded errors are detected during testing, this suggests that around 40% of the errors (seeded and real alike) are still to be detected.
o Formula to estimate total errors: total real errors ≈ (real errors found) × (seeded errors inserted) ÷ (seeded errors found).
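The seeding estimate can be sketched directly; the counts below are illustrative, and the method assumes real errors are caught at the same rate as seeded ones:

```python
def estimate_real_errors(real_found: int, seeded_total: int, seeded_found: int) -> float:
    """Estimate total real errors from the seeded-error detection rate."""
    detection_rate = seeded_found / seeded_total
    return real_found / detection_rate

# Illustrative counts: 10 errors seeded, 6 of them (60%) detected,
# alongside 24 genuine errors found during the same testing.
total = estimate_real_errors(real_found=24, seeded_total=10, seeded_found=6)
print(total)        # ~40 genuine errors estimated in total
print(total - 24)   # ~16 estimated still latent
```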
Independent Reviews
o Two testers (A and B) independently examine the same program code.
o They must work independently of each other.
o Three counts are collected for better error estimation.
Using these methods helps in obtaining a better estimation of latent errors, providing a
clearer understanding of the remaining testing effort needed to ensure software quality.
• n1, the number of valid errors found by A
• n2, the number of valid errors found by B
• n12, the number of cases where the same error is found by both A and B.
The smaller the proportion of errors found by both A and B compared to those found by only one reviewer, the larger the total number of errors likely to be in the software. An estimate of the total number of errors (n) can be calculated by the formula:
n = (n1 × n2) / n12
For example, A finds 30 errors and B finds 20 errors, of which 15 are common to both A and B. The estimated total number of errors would be:
(30 × 20) / 15 = 40
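The two-reviewer estimate is easy to compute; this sketch reproduces the worked example from the text:

```python
def estimate_total_errors(n1: int, n2: int, n12: int) -> float:
    """Capture-recapture estimate: n1 valid errors found by A,
    n2 found by B, n12 found by both."""
    if n12 == 0:
        raise ValueError("no common errors: the estimate is unbounded")
    return (n1 * n2) / n12

# From the text: A finds 30 errors, B finds 20, 15 are common to both.
print(estimate_total_errors(30, 20, 15))  # 40.0
```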
Software reliability
1. Definition:
o Reliability denotes the trustworthiness or dependability of a software product.
o It is defined as the probability of the software working correctly over a given
period of time.
o Reliability is a crucial quality attribute for software products.
2. Relation to Defect Location:
o The number of defects alone is a poor predictor of reliability; what matters is how often the defective code is executed, so a defect in a heavily used part of the program can dominate overall reliability.
o Studies show that 90% of a typical program's execution time is spent on 10%
of its instructions.
o The specific location of a defect (core or non-core part) affects reliability.
4. Observer Dependency:
o Reliability is dependent on user behavior and usage patterns.
o A bug may affect different users differently based on how they use the software.
5. Reliability Improvement Over Time:
o Reliability usually improves during testing and operational phases as defects are identified and fixed.
o This improvement can be modeled mathematically using Reliability
Growth Models (RGM).
6. Reliability Growth Models (RGM):
o RGMs describe how reliability improves as failures are reported and bugs are corrected.
o Various RGMs exist, including the Jelinski-Moranda model, Littlewood-Verrall's model, and Goel-Okumoto's model.
o RGMs help predict when a certain reliability level will be achieved, guiding
decisions on when testing can be stopped.
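As an illustration of how an RGM can guide the stop-testing decision, here is a sketch based on the Jelinski-Moranda assumption that the failure rate is proportional to the number of defects remaining. The parameter values (initial defect count and per-defect rate) are assumed, not taken from the text:

```python
def jm_failure_rate(n_initial: int, phi: float, fixed: int) -> float:
    """Jelinski-Moranda: after `fixed` defects are removed, the failure
    rate is phi * (n_initial - fixed)."""
    return phi * (n_initial - fixed)

# Assumed parameters: 100 initial defects, 0.005 failures/hour per defect,
# and a target failure rate of 0.05 failures/hour before release.
N, PHI, TARGET = 100, 0.005, 0.05
fixed = 0
while jm_failure_rate(N, PHI, fixed) > TARGET:
    fixed += 1  # each failure observed in testing leads to one fix
print(fixed)  # defects that must be removed before testing can stop
```

Fitting phi and the initial defect count from observed inter-failure times is the hard part in practice; this sketch only shows how the fitted model is then used.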
Quality plans
• Quality plans detail how standard quality procedures and standards from an
organization's quality manual will be applied to a specific project.
• They ensure all quality-related activities and requirements are addressed.
Client Requirements:
• For software developed for external clients, the client's quality assurance staff may
require a quality plan to ensure the quality of the delivered products.
• This requirement ensures that the client’s quality standards are met.
• A quality plan acts as a checklist to confirm that all quality issues have been addressed during the planning process.
• Most of the content in a quality plan references other documents that detail specific quality procedures and standards.
MODULE- 5
Step 1: Identify project scope and objectives: Some objectives could relate to the qualities of the application to be delivered.
Step 2: Identify project infrastructure: Identify the installation standards and procedures, mostly regarding quality.
Step 3: Analyse project characteristics: For example, if the system is safety critical, then a wide range of activities could be added, such as n-version development, where a number of teams develop versions of the same software which are then run in parallel with the outputs being cross-checked for discrepancies.
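The cross-checking in n-version development can be sketched as running every version on the same input and voting on the outputs. The three "team" implementations are hypothetical stand-ins:

```python
from collections import Counter

def n_version_run(versions, x):
    """Run independently developed versions in parallel on the same input
    and return the majority output; report a discrepancy otherwise."""
    outputs = [version(x) for version in versions]
    value, votes = Counter(outputs).most_common(1)[0]
    if votes <= len(versions) // 2:
        raise RuntimeError(f"discrepancy, no majority: {outputs}")
    return value

# Hypothetical versions of the same doubling routine; team C's is faulty.
team_a = lambda x: x * 2
team_b = lambda x: x + x
team_c = lambda x: x * 2 + 1

print(n_version_run([team_a, team_b, team_c], 5))  # 10
```

The majority vote masks the faulty version at run time; the flagged discrepancy is what feeds back into finding the defect.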
Step 4: Identify the products and activities of the project: It is at this point that the entry, exit and process requirements are identified for each activity. Break down the project into manageable activities, ensuring each is planned with quality measures in place.
Step 5:Estimate Effort for Each Activity: Accurate effort estimation is essential to allocate sufficient
resources for quality assurance activities, avoiding rushed and low-quality outputs.
Step 6: Identify Activity Risks: Identifying risks early allows for planning mitigation strategies to
maintain quality throughout the project.
Step 7: Allocate Resources: Allocate resources not just for development but also for quality assurance tasks like testing, code reviews, and quality audits.
Step 8: Review and Publicize Plan: Regular reviews of the plan ensure that quality objectives are being met and any deviations are corrected promptly.
Step 9: Execute Plan: Execute the project plan with a focus on adhering to quality standards, monitoring
progress, and making necessary adjustments to maintain quality.
Step 10: Lower-Level Planning: Detailed planning at lower levels should include specific quality assurance activities tailored to each phase or component of the project.
Review (Feedback Loop): Continuous review and feedback loops help in maintaining and improving
quality throughout the project lifecycle.
The intangibility of software: This makes it difficult to know whether a particular task in a project has been completed satisfactorily. The results of these tasks can be made tangible by demanding that the developers produce deliverables that can be examined for quality.
Accumulating errors during software development: As computer system development comprises steps where the output from one step is the input to the next, errors in the earlier deliverables will be added to those introduced in the later steps, leading to an accumulating detrimental effect. In general, the later in a project that an error is found, the more expensive it will be to fix. In addition, because the number of errors in the system is unknown, the debugging phases of a project are particularly difficult to control.
Defining software quality:
Quality is a rather vague term and we need to define carefully what we mean by it. For each quality characteristic, several values can be specified:
Minimally acceptable: the lowest acceptable measurement value; a lower quality would be accepted only if some other quality compensated for it, and below this value the product would have to be rejected out of hand.
Target range: the range of values within which it is planned the quality measurement value should lie.
Now: the value that applies currently.
PRODUCT REVISION QUALITIES
Maintainability: the effort required to locate and fix an error in an operational program
Testability: The effort required to test a program to ensure it performs its intended function
Flexibility: The effort required to modify an operational program.
PRODUCT TRANSITION QUALITIES
Portability: The effort required to transfer a program from one hardware configuration and/or software system environment to another.
Reusability: The extent to which a program can be used in other applications.
Interoperability: The effort required to couple one system to another.
Software Quality Models
The quality models give a characterization (often hierarchical) of software quality in terms of a set of characteristics of the software. The bottom level of the hierarchy can be directly measured, enabling a quantitative assessment of the quality of the software. There are several well-established quality models, including McCall’s, Dromey’s and Boehm’s. Since there was no standardization among the large number of quality models that became available, the ISO 9126 model of quality was developed.
David Garvin suggests that quality ought to be considered from a multidimensional viewpoint that begins with an assessment of conformance and terminates with a transcendental (aesthetic) view. Eight dimensions of product quality management can be used at a strategic level to analyze quality characteristics.
Perceived quality: User's opinion about the product quality.
MCCALL MODEL
McCall’s Software Quality Model was introduced in 1977. This model incorporates many attributes, termed software factors, which influence software. The model distinguishes between two levels of quality attributes:
Quality Factors
Quality Criteria
Quality Factors: The higher-level quality attributes that can be assessed directly are called quality factors. These attributes are external. The attributes at this level are given more importance by users and managers.
Quality Criteria: The lower or second-level quality attributes that can be assessed either subjectively or objectively are called quality criteria. These attributes are internal. Each quality factor has many second-level quality attributes, or quality criteria.
Example: The usability quality factor is divided into operability, training, communicativeness,
input/output volume, and input/output rate. This model classifies all software requirements into 11
software quality factors.
The 11 factors are organized into three product quality factors: Product Operation, Product Revision,
and Product Transition.
DROMEY’S MODEL:
Dromey proposed that software product quality depends on four major high-level properties of the
software: Correctness, internal characteristics, contextual characteristics and certain descriptive
properties.
BOEHM’S MODEL:
The model represents a hierarchical quality model, similar to the McCall Quality Model, that defines software quality using a predefined set of attributes and metrics, each of which contributes to the overall quality of software. The difference between Boehm’s and McCall’s models is that McCall’s Quality Model primarily focuses on the precise measurement of high-level characteristics, whereas Boehm’s Quality Model is based on a wider range of characteristics.
The highest level of Boehm’s model has the following three primary uses: as-is utility, maintainability and portability.
The next level of Boehm’s hierarchical model consists of seven quality factors associated with the three primary uses, stated below:
Usability (Human Engineering): Extent of effort required to learn, operate and understand functions of
the software.
Testability: Effort required to verify that software performs its intended functions.
Understandability: Effort required for a user to recognize a logical concept and its applicability.
Modifiability: Effort required to modify software during the maintenance phase.