Software Engineering Notes
Unit 1
Introduction to Software Engineering
Importance of Software Engineering as a Discipline
Software Applications
Software Crisis
Software Processes & Characteristics
Waterfall Model:
Prototype and Prototyping Model:
Evolutionary Model:
Spiral Model
Software Requirements Analysis & Specifications:
Requirements Engineering:
Functional and Non-Functional Requirements:
User Requirements:
System Requirements:
Requirement Elicitation Techniques: FAST, QFD, and Use Case Approach
Data Flow Diagrams
Levels in Data Flow Diagrams (DFD)
Requirements Analysis Using Data Flow Diagrams (DFD):
Data Dictionary
Components of Data Dictionary:
Data Dictionary Notations Tables:
Features of Data Dictionary:
Uses of Data Dictionary:
Importance of Data Dictionary:
Entity-Relationship (ER) Diagrams:
Requirements Documentation:
Software Requirements Specification (SRS) - Nature and Characteristics:
Requirement Management:
IEEE Std 830-1998 - Recommended Practice for Software Requirements
Specifications:
Unit 2
Software Project Planning:
Software Engineering 1
Project size estimation
Size Estimation in Software Development: Lines of Code and Function Count:
Cost Estimation Models in Software Development:
COCOMO (Constructive Cost Model):
Putnam Resource Allocation Model
Validating Software Estimates:
Risk Management in Software Development:
Software Design: Cohesion and Coupling
Function-Oriented Design in Software Engineering:
Object-Oriented Design (OOD) in Software Engineering:
User Interface Design
Unit 3
Software Metrics
Software Measurements: What & Why
Token Count
Halstead Software Science Measure
Data Structure Metrics:
Information Flow Metrics:
Software Reliability
Importance of Software Reliability:
Factors Affecting Software Reliability:
Methods to Enhance Software Reliability:
Hardware Reliability:
Factors Affecting Hardware Reliability:
Importance of Hardware Reliability:
Software Reliability:
Factors Affecting Software Reliability:
Importance of Software Reliability:
Faults:
Failures:
Relationship between Faults and Failures:
Reliability Models
1. Basic Reliability Model:
2. Logarithmic Poisson Model:
3. Software Quality Models:
4. Capability Maturity Model (CMM) & ISO 9001:
Unit 4
Software Testing
Importance of Software Testing:
Types of Software Testing:
Software Testing Methods:
Software Testing Life Cycle (STLC):
Tools Used in Software Testing:
Testing process
Testing Process Phases:
Functional Testing
Boundary Value Analysis (BVA):
Equivalence Class Testing:
Decision Table Testing:
Cause-Effect Graphing:
Structural testing
Path Testing:
Data Flow Testing:
Mutation Testing:
Unit Testing:
Integration Testing:
System Testing:
Debugging:
Testing Tools:
Testing Standards:
Software Maintenance:
Management of Maintenance:
Maintenance Process:
Maintenance Models
1. Corrective Maintenance Model:
2. Adaptive Maintenance Model:
3. Perfective Maintenance Model:
4. Preventive Maintenance Model:
5. Agile Maintenance Model:
6. Evolutionary Maintenance Model:
Regression Testing
Reverse Engineering:
Software Re-engineering:
Relationship between Reverse Engineering and Software Re-engineering:
Configuration Management:
Documentation:
Unit 1
Introduction to Software Engineering
Software Engineering is a systematic approach to the design, development,
maintenance, and documentation of software. It encompasses a set of
methods, tools, and processes to create high-quality software efficiently.
Key Concepts:
5. Implementation: Writing code and building the software based on the design
specifications. It involves programming, coding, and unit testing.
7. Maintenance: Software maintenance is an ongoing process that includes
making enhancements, fixing bugs, and adapting the software to changing
requirements.
3. Quality Assurance: Ensuring that software is of high quality and free of defects
is a continuous challenge.
Various tools and techniques are used in software engineering, such as version
control systems (e.g., Git), integrated development environments (IDEs),
modeling tools, and project management software.
Career Opportunities:
Software engineering plays a crucial role in managing the complexity of modern
software systems. As software applications become more intricate, the
discipline provides methodologies and tools to design, develop, and maintain
software in an organized and comprehensible manner.
2. Quality Assurance:
3. Cost-Efficiency:
4. Predictable Timelines:
6. Risk Management:
7. Reusability:
consistency.
8. Scalability:
9. Documentation:
12. Innovation:
Software Applications
Definition:
Software applications, commonly known as "apps," are computer programs or
sets of instructions designed to perform specific tasks or functions on electronic
devices, such as computers, smartphones, tablets, and more.
1. Desktop Applications:
2. Mobile Applications:
These applications are developed for smartphones and tablets. They can
be categorized into two major platforms:
iOS Apps: Designed for Apple devices like iPhones and iPads.
Android Apps: Designed for devices running Google's Android operating
system.
3. Web Applications:
These are accessed through web browsers and run on remote servers.
Users can interact with web applications through a web page. Examples
include email services (e.g., Gmail), social media platforms (e.g.,
Facebook), and online shopping websites (e.g., Amazon).
4. Enterprise Applications:
These are software solutions designed for business and organizational use.
Enterprise applications often include Customer Relationship Management
(CRM) software, Enterprise Resource Planning (ERP) systems, and project
management tools.
5. Gaming Applications:
6. Utility Applications:
Characteristics:
1. User Interface:
Most applications have a graphical user interface (GUI) that allows users to
interact with the software.
2. Functionality:
3. Platform Compatibility:
4. Connectivity:
5. Data Storage:
Development Process:
A critical aspect of application development, UX design focuses on creating an
enjoyable and intuitive user experience, including user interface design,
usability, and user interaction.
App Stores:
Many applications are distributed through app stores specific to their platforms
(e.g., Apple App Store, Google Play Store, Microsoft Store). These platforms
provide a centralized marketplace for users to discover and download apps.
Monetization:
Security:
Software applications have become an integral part of daily life, serving diverse
purposes from productivity and communication to entertainment and business
operations. Their development and continuous improvement contribute significantly
to the digital world's evolution and functionality.
Software Crisis
Definition:
2. Lack of Methodology: In the early days of software development, there was a
lack of well-defined methodologies and processes for managing and developing
software. This led to ad-hoc approaches that often resulted in chaotic
development.
4. Quality Issues: Software systems produced during this period often had
numerous defects, making them unreliable and requiring frequent updates and
maintenance.
approaches, improved the organization and management of software projects.
Software Processes & Characteristics
1. Definition:
A software process typically follows a specific Software Development Life
Cycle (SDLC). Common SDLC models include the Waterfall model, Agile,
V-Model, and Iterative models. Each SDLC model prescribes a series of
phases and activities to guide the development process.
1. Systematic Approach:
2. Repeatability:
3. Quality Assurance:
4. Project Management:
Software processes facilitate project management by providing a framework
for estimating project timelines, managing resources, and tracking progress.
5. Flexibility:
While processes provide structure, they can be adapted to fit the needs of
different projects. Agile methodologies, for example, prioritize flexibility and
adaptability in response to changing requirements.
6. Documentation:
7. Risk Management:
8. Iterative Improvement:
9. Communication:
quality software products. The choice of a specific process model can vary based
on the project's requirements, size, and other factors.
Waterfall Model:
Description:
The Waterfall Model is a traditional and linear software development life cycle
model. It is often considered a classic approach, where the project is divided
into distinct phases, and each phase must be completed before the next one
begins. It follows a sequential, top-down flow where the output of one phase
becomes the input for the next.
Phases:
Characteristics:
Sequential: The phases in the Waterfall Model proceed sequentially, and each
phase depends on the deliverables of the previous one.
Advantages:
Disadvantages:
Testing and user feedback often occur late in the project, which may lead to
costly defects.
Prototype and Prototyping Model:
Prototype:
Prototyping Model:
Phases:
4. Development: Once the prototype is approved, full-scale software
development takes place, often based on the refined requirements.
5. Testing: The final software is tested to ensure quality and compliance with the
requirements.
Characteristics:
Risk Management: It reduces the risk of delivering a final product that doesn't
meet user needs.
Advantages of Prototyping:
Disadvantages of Prototyping:
In summary, prototypes are working models used to represent software
functionality, while the Prototyping Model is an iterative approach that uses
prototypes to improve requirements understanding and user satisfaction. The
choice between throwaway and evolutionary prototypes depends on project goals
and requirements.
Evolutionary Model:
Description:
Phases:
Characteristics:
Early Delivery: The model allows for the early delivery of a basic working
system, which can provide value to users.
Reduced Risk: Iterative nature helps identify issues early and allows for course
corrections.
Spiral Model
The Spiral Model is a software development and project management approach that
combines iterative development with elements of the Waterfall model. It was first
introduced by Barry Boehm in 1986 and is especially suitable for large and complex
projects. The Spiral Model is characterized by a series of cycles, or "spirals," each
of which represents a phase in the software development process. Here are the key
components and principles of the Spiral Model:
1. Phases:
The Spiral Model divides the software development process into several
phases, each of which represents a complete cycle of the model. The
typical phases include Planning, Risk Analysis, Engineering (or
Development), and Evaluation (or Testing).
3. Risk Analysis:
The Risk Analysis phase is a unique feature of the Spiral Model. It involves
identifying and assessing project risks, such as technical, schedule, and
cost risks. The goal is to make informed decisions about whether to
proceed with the project based on risk analysis.
4. Prototyping:
5. Flexibility:
6. Customer Involvement:
7. Documentation:
Effective for large and complex projects where risks and uncertainties are high.
The potential for project scope creep or endless iteration if not properly
controlled.
The Spiral Model is a robust approach for projects that require risk management,
flexibility, and a focus on iterative development. It is particularly useful in domains
where requirements are complex, evolving, or not well-understood initially. However,
it does require a disciplined approach to risk assessment and management to be
effective.
Software Requirements Analysis & Specifications:
1. Software Requirements Analysis:
Definition:
Key Activities:
documents that can take the form of textual descriptions, diagrams, or use
cases.
Challenges:
2. Software Specifications:
Definition:
Key Components:
3. User Interface (UI) Specifications: For software with a graphical user
interface, these specifications outline the layout, design, and behavior of the
user interface elements.
Importance:
Clear and detailed specifications serve as a common reference point for all
stakeholders, including designers, developers, testers, and users. They help
ensure that the software is built as per the requirements and can be tested
effectively.
Documentation Standards:
Tools:
Traceability:
Requirements Engineering:
Definition:
5. Validation: Validating requirements to ensure that they align with the project's
goals, are achievable within budget and time constraints, and meet the needs of
stakeholders.
basis for the entire software development life cycle.
Managing Scope: Defining the project's scope and ensuring it does not expand
beyond the original intent can be complex.
Traceability:
Effective requirements engineering is critical for the success of software projects, as
it ensures that software is developed to meet the needs of stakeholders, is of high
quality, and can adapt to changing requirements.
Functional and Non-Functional Requirements:
1. Functional Requirements:
Definition:
What the System Does: Functional requirements describe what the system
does in response to specific inputs or under certain conditions.
Specific and Testable: They are typically specific, well-defined, and testable,
allowing for validation and verification.
Interactions and Use Cases: They often include use cases, scenarios, and
user stories that describe how the system functions in real-world situations.
2. Non-Functional Requirements:
Definition:
Non-functional requirements, sometimes referred to as quality attributes or
constraints, define the characteristics and constraints of the software system
other than its specific functionality. They describe how the system should
perform, rather than what it should do.
Qualities and Constraints: They define the qualities or constraints that the
software must adhere to, such as response times, data storage, and security
measures.
2. Reliability: Availability, fault tolerance, and error handling, e.g., the system
should have 99.9% uptime.
3. Usability: User interface design, accessibility, and user satisfaction, e.g., the
system should be intuitive for novice users.
5. Scalability: The system's ability to handle increased load or data, e.g., the
system should scale to accommodate ten times the current user base.
7. Regulatory Compliance: Adherence to legal and industry-specific regulations,
e.g., the system must comply with GDPR for data protection.
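Quantified targets like the 99.9% uptime figure above translate directly into a permitted downtime budget. A quick back-of-the-envelope sketch (the function name and the 365-day year are our own assumptions, not part of any standard):

```python
def allowed_downtime_hours(availability: float, hours_per_year: float = 365 * 24) -> float:
    """Hours of downtime per year permitted by an availability target."""
    return (1.0 - availability) * hours_per_year

# 99.9% uptime ("three nines") allows roughly 8.76 hours of downtime per year.
budget = allowed_downtime_hours(0.999)
```

Expressing a non-functional requirement this way makes it testable: an operations team can compare the measured downtime against the computed budget.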
Importance:
Both functional and non-functional requirements are vital in ensuring that software
meets the needs and expectations of users, performs well, and complies with
quality and performance standards. Balancing and satisfying both types of
requirements is crucial for successful software development and user satisfaction.
User Requirements:
User requirements, also known as user needs or user stories, are a critical
component of software development. These requirements describe what the users,
stakeholders, or customers expect from a software system. User requirements are
typically expressed in non-technical language to ensure clear communication
between developers and end-users.
Key Characteristics of User Requirements:
4. User Stories: User requirements are often framed as user stories, which are
short, narrative descriptions that explain a specific user's need and the
expected outcome. User stories typically follow the "As a [user], I want [feature]
so that [benefit]" format.
4. As a mobile app user, I expect the application to load within two seconds and
respond quickly to my interactions to provide a smooth and responsive user
experience.
5. As a healthcare provider, I require the system to comply with all relevant data
security and privacy regulations to safeguard patient information.
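The "As a [user], I want [feature] so that [benefit]" template above can be captured with a small helper. This is an illustrative sketch; the function and parameter names are our own:

```python
def user_story(user: str, feature: str, benefit: str) -> str:
    """Format a requirement in the standard user-story template."""
    return f"As a {user}, I want {feature} so that {benefit}."

story = user_story(
    "mobile app user",
    "the application to load within two seconds",
    "I get a smooth and responsive experience",
)
```

Keeping every story in this shape forces each requirement to name who needs it, what they need, and why.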
User requirements play a central role in the software development process for
several reasons:
Validation: User requirements serve as a basis for validating the final product
to ensure it meets user expectations.
User Satisfaction: Meeting user requirements is crucial for user satisfaction,
which, in turn, affects user adoption and the software's success.
System Requirements:
System requirements, also known as technical requirements or software
requirements specifications, describe the technical and operational characteristics
that a software system must possess to meet the user requirements and function
effectively. These requirements provide guidance to the development and testing
teams on how to design, build, and maintain the software.
Key Characteristics of System Requirements:
6. Integration: They specify how the software will integrate with other systems or
components, if applicable.
2. The software shall be built using Java and utilize the Spring framework for web
development.
5. Data backup must be performed every day at midnight and stored securely for a
minimum of one year.
6. The software should integrate with the company's single sign-on (SSO) system
for user authentication.
Documentation: They provide a clear reference for the development and
testing teams and are essential for future maintenance and updates.
5. Constraints Analysis: Examine any constraints, limitations, or requirements
associated with specific functions.
Use Cases:
2. Create the House of Quality: This is a visual matrix that correlates customer
needs with specific product features, indicating the strength and nature of the
relationship.
Use Cases:
Definition:
The Use Case Approach is a technique used to capture and describe the
interactions between an actor (usually a user) and a software system. Use
cases provide a clear understanding of system functionality from a user's
perspective.
1. Identify Actors: Identify the different actors or users who interact with the
software system. Actors can be individuals, other systems, or entities.
2. Define Use Cases: Describe specific use cases, which are scenarios of
interactions between actors and the system. Each use case represents a
discrete piece of functionality.
3. Use Case Diagrams: Create use case diagrams to visualize the relationships
between actors and use cases.
4. Detail Scenarios: Write detailed descriptions of each use case, including the
steps involved, preconditions, postconditions, and any exceptions.
5. Validate and Refine: Use cases are reviewed and refined to ensure they
accurately represent user needs and system functionality.
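The five steps above can be sketched as a simple data structure. The class and field names here are our own illustration, not part of the use-case notation itself:

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """One discrete interaction between an actor and the system."""
    name: str
    actor: str
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    postconditions: list = field(default_factory=list)

withdraw = UseCase(
    name="Withdraw Cash",
    actor="Bank Customer",
    preconditions=["Customer holds a valid card"],
    steps=["Insert card", "Enter PIN", "Choose amount", "Dispense cash"],
    postconditions=["Account balance reduced by the amount"],
)
```

Writing use cases down in a uniform structure like this makes the validate-and-refine step concrete: each scenario can be reviewed field by field with stakeholders.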
Use Cases:
Data Flow Diagrams
A Data Flow Diagram (DFD) is a way of representing a
system requirement graphically. It can be manual, automated, or a combination of
both.
It shows how data enters and leaves the system, what changes the information, and
where data is stored.
The objective of a DFD is to show the scope and boundaries of a system as a
whole. It may be used as a communication tool between a system analyst and any
person who plays a part in the system, and it acts as a starting point for
redesigning a system. The DFD is also called a data flow graph or bubble chart.
The following observations about DFDs are essential:
1. All names should be unique. This makes it easier to refer to elements in the
DFD.
2. Remember that a DFD is not a flowchart. Arrows in a flowchart represent the
order of events; arrows in a DFD represent flowing data. A DFD does not
imply any order of events.
4. Do not become bogged down with details. Defer error conditions and error
handling until the end of the analysis.
Standard symbols for DFDs are derived from electric circuit diagram analysis
and are shown in the figure:
Circle: A circle (bubble) shows a process that transforms data inputs into data
outputs.
Data Flow: A curved line shows the flow of data into or out of a process or data
store.
Data Store: A set of parallel lines shows a place for the collection of data items. A
data store indicates that the data is stored which can be used at a later stage or by
the other processes in a different order. The data store can have an element or
group of elements.
Source or Sink: Source or Sink is an external entity and acts as a source of system
inputs or sink of system outputs.
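A minimal way to model these four DFD elements in code, including the unique-name rule from the observations above. All names and the structure here are our own illustration:

```python
# Model a DFD as named elements plus labeled data flows between them.
processes = {"Compute Grade"}          # circles (bubbles)
data_stores = {"Marks File"}           # parallel lines
externals = {"Student"}                # sources/sinks

flows = [
    ("Student", "Compute Grade", "marks"),        # data entering the system
    ("Compute Grade", "Marks File", "grade record"),  # data being stored
]

all_names = processes | data_stores | externals
# Observation 1: all names in a DFD must be unique.
assert len(all_names) == len(processes) + len(data_stores) + len(externals)
# Every flow must connect two defined elements; its label names the data,
# not an order of events.
for src, dst, data in flows:
    assert src in all_names and dst in all_names
```

Checks like these are what CASE tools perform automatically when validating a DFD.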
Levels in Data Flow Diagrams (DFD)
A DFD may be used to represent a system or software at any level of abstraction.
In fact, DFDs may be partitioned into levels that represent increasing information
flow and functional detail. Levels in a DFD are numbered 0, 1, 2, or beyond. Here,
we will look primarily at three levels of data flow diagram: 0-level DFD,
1-level DFD, and 2-level DFD.
0-level DFD
The Level-0 DFD, also called the context diagram, of the result management
system is shown in the figure. As the bubbles are decomposed into less and less
abstract bubbles, the corresponding data flows may also need to be decomposed.
1-level DFD
In a 1-level DFD, the context diagram is decomposed into multiple
bubbles/processes. At this level, we highlight the main objectives of the system
and break down the high-level process of the 0-level DFD into subprocesses.
2-Level DFD
A 2-level DFD goes one step deeper into parts of the 1-level DFD. It can be used
to plan or record the specific/necessary details about the system's functioning.
Requirements Analysis Using Data Flow Diagrams (DFD):
Data Flow Diagrams (DFDs) are a visual modeling technique used in software
engineering to represent the flow of data and processes within a system. They are
also a valuable tool for analyzing system requirements. Here's how you can perform
requirements analysis using DFDs:
Before you start with DFDs, identify the key stakeholders who will be involved in
the requirements analysis. This typically includes end-users, business analysts,
and subject matter experts.
Begin by gathering the initial set of high-level requirements. These can be in the
form of user stories, business use cases, or textual descriptions of what the
system should do.
The first step in using DFDs is to create a context diagram. This diagram shows
the system as a single process or entity and its interactions with external
entities (e.g., users, other systems, data sources). This provides an overview of
the system's boundaries and external interfaces.
Once you have the context diagram, you can start decomposing the system into
more detailed processes. Each process represents a specific function or task
within the system.
As you decompose processes, identify the data flows between them. Data flows
represent the transfer of data from one process to another. This helps in
understanding how data is shared and processed within the system.
Data stores are repositories where data is stored within the system. Identify the
data stores and their relationships with processes and data flows.
For each process, describe the data transformations that occur. What happens
to the data as it moves from input to output within a process? This helps in
understanding how data is processed or transformed.
For each process, analyze the logic and rules governing it. What conditions
trigger the process? What are the expected outcomes? This analysis helps in
capturing detailed process requirements.
DFDs can also capture constraints and business rules that apply to the system.
These can include validation rules, security requirements, and any other
specific constraints.
Data Dictionary
Data Dictionary is the major component in the structured analysis model of the
system. It lists all the data items appearing in DFD. A data dictionary in Software
Engineering means a file or a set of files that includes a database’s metadata (hold
records about other objects in the database), like data ownership, relationships of
the data to another object, and some other data.
Example of a data dictionary entry: GrossPay = regular pay + overtime pay
CASE tools are used to maintain the data dictionary, as they automatically
capture the data items appearing in a DFD to generate the data dictionary.
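The GrossPay entry can be illustrated with a tiny in-memory sketch. This is our own representation of the idea, not the format used by any particular CASE tool:

```python
# Each entry maps a data item to its composition, mirroring the DFD's data flows.
data_dictionary = {
    "GrossPay": "RegularPay + OvertimePay",
    "RegularPay": "HoursWorked * HourlyRate",
}

def gross_pay(regular: float, overtime: float) -> float:
    """Evaluate the GrossPay entry: GrossPay = regular pay + overtime pay."""
    return regular + overtime

total = gross_pay(400.0, 50.0)  # 450.0
```

The point of the dictionary is that every data item appearing in a DFD is defined exactly once, so all processes agree on what "GrossPay" means.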
Notations Meaning
=          is composed of
+          and (concatenation)
[ | ]      either/or (selection)
{ }        iteration (repetition)
( )        optional
* *        comment
Here, we will discuss some features of the data dictionary as follows.
It is used to create an ordered list of data items, either from the complete
items list or from a subset of it.
It also helps locate a specific data item or object in the list.
Data Quality: A data dictionary can help improve data quality by providing a
single source of truth for data definitions, allowing users to easily verify the
accuracy and completeness of data.
Data Integration: A data dictionary can facilitate data integration efforts by
providing a common language and framework for understanding data elements
and their relationships across different systems.
Entity-Relationship (ER) Diagrams:
Key Components of an ER Diagram:
Types of Relationships:
Benefits of ER Diagrams:
Data dictionaries and ER diagrams can complement each other in database design
and system analysis. A data dictionary can provide detailed information about data
elements, while ER diagrams offer a visual representation of how these elements
and entities are related within the system. Together, they aid in creating a
comprehensive and well-documented data model, making it easier to design, build,
and manage databases and information systems.
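As a small illustration of how the two artifacts line up, the sketch below pairs ER-style entities with data dictionary entries for their attributes. All entity, attribute, and relationship names here are invented for the example:

```python
# ER-style model: two entities and one relationship between them.
entities = {
    "Student": ["student_id", "name"],
    "Course": ["course_id", "title"],
}
relationships = [("Student", "enrolls in", "Course", "many-to-many")]

# Matching data dictionary entries define key attributes exactly once.
data_dictionary = {
    "student_id": "unique identifier for a Student",
    "course_id": "unique identifier for a Course",
}

# Cross-check: every key attribute in the ER model has a dictionary entry.
key_attrs = {attrs[0] for attrs in entities.values()}
assert key_attrs <= set(data_dictionary)
```

The ER diagram shows *how* Student and Course relate, while the dictionary pins down *what* each attribute means; together they document the data model.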
Requirements Documentation:
Requirements documentation is a critical aspect of the software development
process. It involves capturing, organizing, and presenting detailed information about
the software's functional and non-functional requirements, as well as any additional
information necessary for understanding and implementing the project. Effective
requirements documentation is essential for ensuring that the software meets user
expectations, aligns with stakeholder needs, and serves as a reference throughout
the development and testing phases.
1. Introduction:
2. Scope Statement:
The scope statement defines the boundaries of the project and specifies
what is included and excluded. It helps in managing project expectations.
3. Functional Requirements:
This section outlines the specific functions, features, and capabilities that
the software must provide. It includes detailed descriptions of how the
system should behave in response to various inputs or under specific
conditions.
4. Non-Functional Requirements:
These requirements specify how the system should perform rather than what
it should do.
5. User Requirements:
6. System Requirements:
Constraints and assumptions refer to factors that limit or impact the project.
This section outlines any restrictions or assumptions made during the
requirement-gathering process.
9. Data Models:
11. Dependencies:
Dependencies describe any relationships or interdependencies between
different requirements. Understanding these relationships is important for
managing changes and project impact.
This section outlines the methods and criteria used to verify and validate
the requirements, ensuring that they are complete, accurate, and testable.
Software Requirements Specification (SRS) - Nature and Characteristics:
A Software Requirements Specification (SRS) is a comprehensive document that
serves as the foundation of a software development project. It outlines the
functional and non-functional requirements of the software, providing a clear and
unambiguous description of what the system should do and how it should perform.
Here are the key characteristics and the nature of an SRS:
An SRS includes both functional requirements (what the system should do)
and non-functional requirements (how the system should perform). This
encompasses features, user interactions, performance criteria, security
requirements, and more.
The SRS uses clear and concise language to ensure that there is no room
for misinterpretation. Ambiguities and contradictions are eliminated during
the development of the document.
4. User-Centric:
The SRS focuses on meeting the needs and expectations of end-users and
stakeholders. It ensures that the software serves its intended purpose
effectively.
5. Traceability:
6. Structured Format:
The SRS follows a structured format, typically organized into sections covering
the introduction, requirements, use cases, data models, and more. This format
helps organize and present the information systematically.
7. Feasibility Analysis:
The SRS often includes a feasibility analysis that examines whether the
project can be realistically completed within budget and time constraints.
This analysis may assess technical, operational, and economic feasibility.
In some industries, the SRS may include information related to legal and
regulatory compliance to ensure that the software adheres to relevant
standards and guidelines.
14. Alignment with Project Goals:
The SRS aligns with the overarching goals and objectives of the project,
ensuring that the software supports the business or organizational strategy.
The SRS serves as the basis for project planning, including resource
allocation, scheduling, and budgeting.
Ensuring the SRS is accurate and complete is essential for maintaining the
quality and reliability of the final software product.
Requirement Management:
Requirement management is a critical process in software development and project
management. It involves the systematic and structured handling of requirements
throughout the project lifecycle. Effective requirement management ensures that
requirements are captured, documented, tracked, and maintained to meet the
needs and expectations of stakeholders and deliver a successful project. Here are
the key aspects and practices of requirement management:
1. Requirement Elicitation:
2. Requirement Analysis:
3. Requirement Documentation:
4. Requirement Prioritization:
5. Requirement Traceability:
Traceability ensures that each requirement is linked to its source and that there
is a mechanism to track changes and updates throughout the project.
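Traceability can be pictured as a simple mapping from each requirement to the artifacts that realize and verify it. The identifiers below are hypothetical:

```python
# Requirement -> (design element, test case) links.
trace = {
    "REQ-001": {"design": ["DES-10"], "tests": ["TC-101", "TC-102"]},
    "REQ-002": {"design": [], "tests": []},
}

def untraced(matrix: dict) -> list:
    """Requirements with no design or test links, a gap reviews should flag."""
    return [rid for rid, links in matrix.items()
            if not links["design"] and not links["tests"]]

gaps = untraced(trace)  # REQ-002 has no links yet
```

A matrix like this is also what makes change impact analysis possible: when REQ-001 changes, its linked design elements and test cases identify exactly what must be revisited.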
6. Change Control:
Projects are dynamic, and requirements may change. A formal change control
process is established to evaluate and manage requested changes, assessing
their impact on the project scope, schedule, and budget.
7. Version Control:
8. Requirement Validation:
Requirements are validated to ensure that they are accurate, complete, and
aligned with stakeholder needs. Validation often involves reviews, inspections,
and walkthroughs.
9. Requirement Verification:
Verification ensures that the software developed meets the specified
requirements. This involves testing and quality assurance activities to confirm
that the requirements are correctly implemented.
Requirement management is an ongoing process that ensures that a project stays
on track, delivers what stakeholders expect, and manages change effectively. It is
an integral part of project management, quality assurance, and the software
development lifecycle.
IEEE Std 830-1998 - Recommended Practice for Software Requirements
Specifications:
1. Purpose:
IEEE Std 830-1998 outlines a specific format and structure for SRS documents,
including sections and subsections that should be included in the document.
This structured approach helps ensure consistency and completeness.
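As a rough sketch of that structure, the outline below summarizes the top-level SRS template commonly associated with the standard; consult the standard itself for the authoritative section list:

```python
# Simplified top-level SRS outline in the spirit of IEEE Std 830-1998.
srs_outline = {
    "1. Introduction": [
        "Purpose", "Scope", "Definitions, acronyms, and abbreviations",
        "References", "Overview",
    ],
    "2. Overall Description": [
        "Product perspective", "Product functions", "User characteristics",
        "Constraints", "Assumptions and dependencies",
    ],
    "3. Specific Requirements": [
        "External interfaces", "Functional requirements",
        "Performance requirements", "Design constraints",
        "Software system attributes",
    ],
}
```

Keeping every SRS to a shared outline like this is what gives reviewers the consistency and completeness the standard aims for.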
3. Content Guidelines:
IEEE Std 830-1998 recommends a clear and unambiguous language and style
to avoid misunderstandings and ambiguities in the requirements.
5. Requirements Attributes:
6. Traceability:
7. Appendices:
The standard allows for the inclusion of appendices, which can provide
supplementary information, such as data dictionaries, use case descriptions,
and diagrams.
8. Review and Verification:
IEEE Std 830-1998 recommends that the SRS undergo reviews and verification
to ensure its accuracy and completeness.
9. Change Control:
10. Examples:
- The standard provides examples and templates to help illustrate how to structure
and format an SRS document effectively.
11. References:
- IEEE Std 830-1998 may reference other IEEE standards and guidelines that are
relevant to software requirements engineering.
It's important to note that standards may evolve over time, and there may be more
recent versions or related standards that update or complement IEEE Std 830-
1998. Therefore, it's advisable to check for the latest version of the standard and
any supplementary standards that may provide additional guidance on SRS
creation and management.
Following the IEEE Std 830-1998 or other relevant IEEE standards can help
software development teams create well-structured and comprehensive Software
Requirements Specifications, contributing to successful project outcomes and
effective communication with stakeholders.
Unit 2
Software Project Planning:
Software project planning is the process of defining the scope, objectives, and
approach for a software development project. It involves the creation of a detailed
plan that outlines the project's tasks, timelines, resource allocation, and budget.
Effective project planning is crucial for delivering software projects on time, within
budget, and meeting stakeholder expectations. Here are the key aspects and steps
involved in software project planning:
1. Project Initiation:
Define the project's purpose, objectives, and scope. Identify the key
stakeholders, project team members, and their roles. Determine the feasibility
of the project and its alignment with organizational goals.
2. Requirements Analysis:
Gather and analyze the project requirements, including functional and non-
functional requirements. Ensure a clear understanding of what the software
should achieve and the needs of the end-users.
3. Scope Definition:
Clearly define the scope of the project, specifying what is included and
excluded. This helps in managing expectations and avoiding scope creep.
4. Work Breakdown Structure (WBS):
Create a hierarchical breakdown of the project tasks and deliverables. This is
known as the Work Breakdown Structure (WBS) and helps in organizing and
planning project work.
5. Task Estimation:
Estimate the effort, time, and resources required for each task or activity. Use
estimation techniques like expert judgment, historical data, and parametric
modeling.
6. Resource Allocation:
7. Project Scheduling:
8. Risk Management:
Identify potential risks that may impact the project's success. Develop a risk
management plan to mitigate and manage these risks.
9. Quality Planning:
Define the quality standards and processes that will be followed throughout the
project to ensure the software meets quality requirements.
12. Change Management:
- Establish a formal process for handling changes to the project scope,
requirements, or other project aspects. Evaluate changes for impact and approval.
13. Project Monitoring and Control:
- Develop methods and metrics for monitoring project progress. Use project
management tools to track task completion, identify issues, and make necessary
adjustments.
14. Documentation:
- Maintain project documentation, including project plans, status reports, meeting
minutes, and other relevant records. Ensure that project documents are well-
organized and accessible.
Project size estimation
Project size estimation predicts how large the software will be, which in turn drives effort, cost, and schedule estimates. Common techniques include:
Analogous Estimation: This technique involves estimating the project size based
on the similarities between the current project and previously completed projects.
This technique is useful when historical data is available for similar projects.
Bottom-up Estimation: In this technique, the project is divided into smaller
modules or tasks, and each task is estimated separately. The estimates are then
aggregated to arrive at the overall project estimate.
Three-point Estimation: This technique involves estimating the project size using
three values: optimistic, pessimistic, and most likely. These values are then used to
calculate the expected project size using a formula such as the PERT formula.
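The three-point calculation can be sketched in Python; the formula is the standard PERT weighted average, and the sample values are illustrative:

```python
# PERT three-point estimate: E = (O + 4M + P) / 6.
def pert_estimate(optimistic, most_likely, pessimistic):
    """Return the expected size/effort from three-point estimates."""
    return (optimistic + 4 * most_likely + pessimistic) / 6


# Illustrative values, e.g. size in KLOC.
print(pert_estimate(20, 30, 52))  # (20 + 120 + 52) / 6 = 32.0
```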
Function Points: This technique involves estimating the project size based on the
functionality provided by the software. Function points consider factors such as
inputs, outputs, inquiries, and files to arrive at the project size estimate.
Use Case Points: This technique involves estimating the project size based on the
number of use cases that the software must support. Use case points consider
factors such as the complexity of each use case, the number of actors involved, and
the number of use cases.
Each of these techniques has its strengths and weaknesses, and the choice of
technique depends on various factors such as the project’s complexity, available
data, and the expertise of the team.
Size Estimation in Software Development: Lines of Code and Function Count
Two widely used size measures are:
Lines of Code
Function points
1. Lines of Code (LOC): As the name suggests, LOC counts the total number of
lines of source code in a project. The units of LOC are:
KLOC: Thousand lines of code
The size is estimated by comparing it with the existing systems of the same kind.
The experts use it to predict the required size of various components of software
and then add them to get the total size.
It’s tough to estimate LOC by analyzing the problem definition. Only after the whole
code has been developed can accurate LOC be estimated. This statistic is of little
utility to project managers because project planning must be completed before
development activity can begin.
Two separate source files having a similar number of lines may not require the
same effort. A file with complicated logic would take longer to create than one with
simple logic. Proper estimation may not be attainable based on LOC.
LOC also depends heavily on who writes the code: the count differs greatly from
one programmer to the next, since a seasoned programmer can write the same
logic in fewer lines than a novice.
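As a sketch of how a physical LOC count is typically taken, the counter below skips blank lines and full-line comments; the comment marker (`#`) and the sample snippet are assumptions:

```python
def count_loc(source: str) -> int:
    """Count non-blank, non-comment lines (comment marker assumed '#')."""
    return sum(
        1
        for line in source.splitlines()
        if line.strip() and not line.strip().startswith("#")
    )


sample = """# demo module
x = 1

y = x + 1
"""
print(count_loc(sample))  # 2
```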
Advantages:
Simple to use.
Disadvantages:
It is difficult to estimate the size using this technique in the early stages of the
project.
LOC cannot be used to normalize comparisons when platforms and languages differ.
2. Number of Entities in the ER Diagram: The number of entities in the system's entity-relationship diagram can serve as a size measure, since the ER model is available early in the requirements phase.
Advantages:
Disadvantages:
No fixed standards exist. Some entities contribute more to project size than
others.
Like function point analysis, this measure is rarely used directly by cost
estimation models, so it must first be converted to LOC.
3. Total Number of Processes in the Detailed DFD: The number of processes in the detailed data flow diagram can serve as a size measure.
Advantages:
Each major process can be decomposed into smaller processes. This will
increase the accuracy of the estimation
Disadvantages:
Studying similar kinds of processes to estimate size takes additional time and
effort.
Construction of a DFD is not required for all software projects.
4. Function Point Analysis: In this method, the number and type of functions
supported by the software are utilized to find FPC(function point count). The steps
in function point analysis are:
Count the number of functions of each proposed type: Find the number of
functions belonging to the following types:
External Inputs: Functions that bring data into the system.
External Outputs: Functions that send data out of the system.
External Inquiries: They lead to data retrieval from the system but don't
change the system.
Internal Files: Logical files maintained within the system. Log files are not
included here.
External interface Files: These are logical files for other applications which
are used by our system.
| Function type | Simple | Average | Complex |
| --- | --- | --- | --- |
| External Inputs | 3 | 4 | 6 |
| External Outputs | 4 | 5 | 7 |
| External Inquiries | 3 | 4 | 6 |
| Internal Files | 7 | 10 | 15 |
| External Interface Files | 5 | 7 | 10 |
Find the Function Point Count: Use the formula FPC = UFP × CAF, where UFP is the sum of the weighted function counts from the table above and CAF = 0.65 + 0.01 × ΣFi is the complexity adjustment factor computed from the 14 general system characteristics, each rated from 0 to 5.
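The arithmetic can be sketched as follows; the weights are the standard simple/average/complex values, while the counts and the uniform complexity ratings are illustrative assumptions:

```python
# Standard FPA weights: (simple, average, complex).
WEIGHTS = {
    "external_inputs": (3, 4, 6),
    "external_outputs": (4, 5, 7),
    "external_inquiries": (3, 4, 6),
    "internal_files": (7, 10, 15),
    "external_interface_files": (5, 7, 10),
}


def unadjusted_fp(counts):
    """counts maps function type -> (n_simple, n_average, n_complex)."""
    return sum(
        n * w
        for ftype, per_level in counts.items()
        for n, w in zip(per_level, WEIGHTS[ftype])
    )


def function_point_count(counts, complexity_ratings):
    """FPC = UFP * CAF, with CAF = 0.65 + 0.01 * sum of 14 ratings (0-5)."""
    caf = 0.65 + 0.01 * sum(complexity_ratings)
    return unadjusted_fp(counts) * caf


# Illustrative counts per (simple, average, complex).
counts = {
    "external_inputs": (2, 1, 0),
    "external_outputs": (1, 0, 1),
    "external_inquiries": (0, 2, 0),
    "internal_files": (1, 0, 0),
    "external_interface_files": (0, 1, 0),
}
print(unadjusted_fp(counts))                             # 43
print(round(function_point_count(counts, [3] * 14), 2))  # 46.01
```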
Advantages:
Disadvantages:
Many cost estimation models like COCOMO use LOC and hence FPC must be
converted to LOC.
Definition: Lines of code (LOC) is a metric that measures the number of lines or
statements in the source code of a software application. It's a simple and widely
used method for estimating the size of a software project.
Pros:
Useful for Cost Estimation: It can be used to estimate project costs and
schedule.
Cons:
Not Always Accurate: LOC may not accurately represent the complexity or
functionality of the software.
Dependent on Coding Style: The number of lines of code can vary based on
the coding style, making it subjective.
Definition: Function points (FP) measure software size in terms of the functionality delivered to the user (inputs, outputs, inquiries, and files), independent of the implementation language.
Pros:
Effective for Comparisons: It's useful for comparing the complexity and size
of different software projects.
Cons:
1. Identify User Inputs: Determine the types and quantity of user inputs (external
inputs, external outputs, and external inquiries).
2. Identify User Outputs: Identify the types and quantity of user outputs.
3. Identify User Inquiries: Determine the types and quantity of user inquiries.
4. Identify Logical Files: Determine the logical files used by the application.
5. Identify External Interface Files: Determine the files shared with other applications.
Both LOC and FP have their merits and are often used in conjunction for more
accurate size estimation. The choice of which method to use may depend on project
characteristics and goals. FP is particularly useful for assessing the functionality of
a software system, while LOC is more closely related to coding effort and project
management. Accurate size estimation is crucial for effective project planning, cost
estimation, and resource allocation in software development.
4. Estimation by Analogy:
This method involves comparing the current project with previous similar
projects and using historical data to estimate costs. It's based on the
assumption that past project experiences can be applied to the current
project.
5. Program Evaluation and Review Technique (PERT):
PERT is a technique that uses a three-point estimation approach,
incorporating optimistic, most likely, and pessimistic estimates to calculate
an expected value. It's often used for estimating project durations, which
can be translated into cost estimates.
6. Expert Judgment:
7. Parametric Models:
8. Bottom-Up Estimation:
9. Top-Down Estimation:
Top-down estimation starts with an overall project estimate and then breaks
it down into smaller components. It's useful for high-level cost estimation
before detailed project planning.
10. Delphi Technique:
The Delphi Technique involves gathering estimates from experts and using
a systematic approach to achieve consensus on project cost estimates. It's
often used in situations where there is uncertainty or limited historical data.
11. Monte Carlo Simulation:
Monte Carlo simulation involves running multiple simulations to estimate
project costs. It takes into account uncertainty and variability in project
parameters to produce a range of possible cost outcomes.
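A minimal sketch of the idea, assuming each cost component follows a triangular (low, mode, high) distribution; the component figures are illustrative:

```python
import random


def simulate_total_cost(components, runs=10_000, seed=42):
    """Monte Carlo total-cost simulation.

    components: list of (low, mode, high) triangular cost estimates.
    Returns selected percentiles of the simulated totals.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    totals = sorted(
        sum(rng.triangular(low, high, mode) for (low, mode, high) in components)
        for _ in range(runs)
    )
    return {
        "p10": totals[int(runs * 0.10)],
        "p50": totals[int(runs * 0.50)],
        "p90": totals[int(runs * 0.90)],
    }


# Illustrative per-component (low, mode, high) cost estimates.
estimates = [(40, 50, 80), (10, 15, 25), (5, 8, 12)]
print(simulate_total_cost(estimates))
```

The spread between the 10th and 90th percentiles conveys the uncertainty that a single-point estimate hides.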
The choice of a cost estimation model or technique depends on the project's nature,
available data, and the level of detail required. It's common to use multiple models
and compare their estimates to ensure accuracy. Additionally, ongoing monitoring
and refinement of cost estimates are essential as the project progresses and more
information becomes available.
COCOMO (Constructive Cost Model):
COCOMO, proposed by Barry Boehm, estimates development effort, cost, and schedule from project size. It exists in three increasingly detailed versions:
1. Basic COCOMO:
Basic COCOMO is a simple and early version of the model. It estimates project
effort based on lines of code (LOC) and project type. It considers three modes, each
with a different level of complexity:
Organic Mode: Suitable for relatively small and straightforward projects with
experienced developers. Effort is primarily based on LOC.
Semi-detached Mode: Suitable for intermediate projects where the team has
a mix of experience levels and the requirements combine rigid and flexible
elements.
Embedded Mode: Suitable for large and complex projects, often involving real-
time or mission-critical systems. LOC, innovation, complexity, and other factors
are considered.
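Basic COCOMO's published equations (effort E = a·KLOC^b in person-months, duration D = c·E^d in months) can be sketched as follows; the 32 KLOC input is illustrative:

```python
# Standard Basic COCOMO coefficients: mode -> (a, b, c, d).
COEFFS = {
    "organic": (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded": (3.6, 1.20, 2.5, 0.32),
}


def basic_cocomo(kloc: float, mode: str = "organic"):
    """Return (effort in person-months, duration in months)."""
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b
    duration = c * effort ** d
    return effort, duration


effort, duration = basic_cocomo(32, "organic")  # illustrative 32 KLOC project
print(round(effort, 1), round(duration, 1))
```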
2. Intermediate COCOMO:
Intermediate COCOMO is a more detailed version of the model. It provides a
framework for estimating effort, project duration, and cost based on a range of cost
drivers and factors, including project attributes, product attributes, hardware
attributes, and personnel attributes. The formula for Intermediate COCOMO is:
E = a × (KLOC)^b × EAF
Where:
E is the effort in person-months, a and b are constants that depend on the project class, KLOC is the estimated size in thousands of lines of code, and EAF is the Effort Adjustment Factor computed from the cost drivers.
Intermediate COCOMO allows for a more nuanced and tailored estimation, taking
into account the specific characteristics of the project. It considers factors such as
personnel capability, development flexibility, and the use of modern tools and
techniques.
3. Detailed COCOMO:
Detailed COCOMO is the most comprehensive version of the model, and it offers a
highly detailed estimation process. It takes into account additional factors like
software reuse, documentation, and quality control. This version of COCOMO is
particularly suitable for very large and complex projects.
Advantages of COCOMO:
It offers a range of cost drivers and parameters for a more accurate estimation.
Limitations of COCOMO:
It relies heavily on lines of code (LOC) as a primary input, which may not
accurately capture the complexity and functionality of modern software.
It may require a significant amount of data and expertise to make accurate
estimates.
COCOMO estimates are based on historical data, and the accuracy of the
model depends on the relevance of that data to the current project.
COCOMO remains a valuable tool for initial software project cost estimation and
serves as a foundation for more advanced models and techniques in the field of
software engineering. It can be particularly useful for comparing different project
scenarios and making informed decisions about project planning and resource
allocation.
Putnam Resource Allocation Model
The Putnam model estimates effort and schedule from project size and team productivity. Its key parameters are:
1. Project Size (S): The size of the software project is a critical factor in the
Putnam model. It is often measured in thousands of source lines of code
(KLOC) or function points (FP), depending on the context of the project.
2. Effort per Size (E/S): The Putnam model assumes that the effort required to
complete a project is proportional to its size. This factor represents the effort
required for each unit of project size (e.g., person-months per KLOC).
Key Formulas in the Putnam Model:
1. Effort (E): The effort required for the project is calculated as follows:
E = S / P
Where:
S is the project size (for example, in KLOC) and P is the productivity (size produced per person-month).
2. Schedule (T): The project schedule is estimated by dividing the effort by the
number of available resources:
T = E / R
Where:
E is the effort and R is the number of available resources (people).
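The two simplified relationships above can be sketched directly (the size, productivity, and head-count figures are illustrative):

```python
def putnam_effort(size_kloc: float, productivity: float) -> float:
    """E = S / P, effort in person-months."""
    return size_kloc / productivity


def putnam_schedule(effort_pm: float, resources: int) -> float:
    """T = E / R, schedule in months given R full-time people."""
    return effort_pm / resources


# Illustrative: 100 KLOC at 0.5 KLOC per person-month, team of 10.
effort = putnam_effort(100, 0.5)
print(effort, putnam_schedule(effort, 10))  # 200.0 20.0
```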
Advantages:
Suitable for large and complex projects where resource allocation is a critical
factor.
Disadvantages:
Assumes a linear relationship between size, effort, and productivity, which may
not hold true for all types of projects.
Does not account for variations in productivity that can occur during different
project phases.
The Putnam Resource Allocation Model is a valuable tool for estimating the effort
and schedule for software development projects, particularly for projects with well-
established productivity rates and resource constraints. However, as with any
estimation model, its accuracy depends on the quality of the data and the
applicability of its assumptions to the specific project at hand.
Validating Software Estimates:
1. Historical Data Analysis:
Compare current project estimates with historical data from similar projects.
This analysis can reveal patterns and trends that help in validating the
accuracy of the estimates.
2. Expert Judgment:
3. Peer Review:
4. Prototyping:
5. Benchmarking:
Compare the estimates to industry benchmarks or standards.
Benchmarking can help gauge the reasonableness of the estimates in
relation to the industry norms.
6. Analogous Estimation:
Compare the current project with past projects that are similar in nature.
Analogous estimation involves adjusting past project data to account for
differences and validating the current estimates.
9. Contingency Planning:
Develop contingency plans that account for potential deviations from the
estimates. This proactive approach helps in managing risks and mitigating
the impact of uncertainties.
10. Monitoring and Tracking:
As the project progresses, track actual effort, costs, and schedule against
the initial estimates. Continuous monitoring and adjustment help in
validating and refining the estimates.
11. Stakeholder Involvement:
Engage stakeholders throughout the project to validate their expectations
and ensure that the estimates align with their needs and objectives.
Risk Management in Software Development:
1. Risk Identification:
Identify potential risks that may affect the project. Risks can be technical
(e.g., software bugs), external (e.g., changes in requirements), or
operational (e.g., resource constraints).
2. Risk Assessment:
Evaluate the potential impact and probability of each identified risk. High-
impact, high-probability risks require more attention than low-impact, low-
probability risks.
3. Risk Prioritization:
Prioritize risks based on their severity and the level of impact they may
have on the project. This helps in allocating resources and focus to the
most critical risks.
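One common prioritization heuristic, not prescribed by the text above and so an assumption here, is to rank risks by exposure = probability × impact; the register entries below are illustrative:

```python
# Illustrative risk register entries.
risks = [
    {"name": "key developer leaves", "probability": 0.2, "impact": 9},
    {"name": "requirements change", "probability": 0.6, "impact": 5},
    {"name": "third-party API outage", "probability": 0.1, "impact": 4},
]

for risk in risks:
    # Exposure combines likelihood and severity into one rankable number.
    risk["exposure"] = risk["probability"] * risk["impact"]

# Highest-exposure risks receive mitigation attention first.
ranked = sorted(risks, key=lambda r: r["exposure"], reverse=True)
for risk in ranked:
    print(f'{risk["name"]}: exposure {risk["exposure"]:.1f}')
```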
4. Risk Mitigation Planning:
Develop mitigation plans for the high-priority risks. These plans should
outline specific actions to reduce the likelihood or impact of the risks.
Mitigation strategies can include code reviews, testing, contingency
planning, and resource allocation.
6. Risk Monitoring:
Continuously monitor the identified risks and their status throughout the
project's lifecycle. Regularly review the effectiveness of mitigation measures
and adjust them as necessary.
7. Risk Documentation:
Maintain a risk register or risk log that documents each identified risk, its
assessment, mitigation plans, and tracking information. This serves as a
reference for the project team.
8. Communication:
9. Risk Reviews:
Conduct periodic risk reviews to reassess the project's risk landscape. New
risks may emerge, and the significance of existing risks may change as the
project progresses.
Effective risk management is an iterative and ongoing process that adapts to the
evolving nature of software development projects. It is a proactive approach that
can significantly contribute to project success by reducing the likelihood of negative
outcomes and enhancing project predictability.
Software Design: Cohesion and Coupling
Cohesion measures how strongly the elements within a module belong together, while coupling measures the degree of interdependence between modules. Let's explore these concepts in more detail:
1. Cohesion:
Logical Cohesion: In this case, the elements within a module are grouped
together because they share a logical relationship. For example, a module that
contains file I/O functions may exhibit logical cohesion.
Functional Cohesion: This is the strongest and most desirable form of cohesion, where all elements of a module contribute to a single, well-defined function or task. Modules with functional cohesion are easier to
understand, maintain, and reuse.
Aim for achieving functional cohesion in your software design, as it results in more
modular and maintainable code.
2. Coupling:
Tight Coupling: In this scenario, modules are highly dependent on each other
and are closely connected. Changes in one module can have a significant
impact on others. Tight coupling reduces the system's flexibility and
maintainability.
Loose Coupling: Modules depend on each other only through small, well-defined interfaces and know little about one another's internals, so changes in one module remain localized.
Reducing coupling and achieving loose coupling is a crucial design goal in software
development. One way to achieve this is by using well-defined interfaces and
ensuring that modules communicate through those interfaces rather than directly
with each other.
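A sketch of loose coupling through a well-defined interface; the class and method names are illustrative:

```python
from abc import ABC, abstractmethod


class MessageSender(ABC):
    """The interface: callers depend on this, not on a concrete sender."""

    @abstractmethod
    def send(self, recipient: str, body: str) -> None: ...


class EmailSender(MessageSender):
    def send(self, recipient: str, body: str) -> None:
        print(f"email to {recipient}: {body}")


class OrderService:
    """Depends only on the interface, so senders can be swapped freely."""

    def __init__(self, sender: MessageSender):
        self.sender = sender

    def confirm(self, customer: str) -> None:
        self.sender.send(customer, "Your order is confirmed.")


OrderService(EmailSender()).confirm("alice@example.com")
```

Because `OrderService` never names a concrete sender, replacing email with SMS or a test double requires no change to it, which is exactly the localized-change property loose coupling aims for.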
In summary, cohesion and coupling are fundamental principles in software design
that impact the quality and maintainability of software systems. High cohesion and
low coupling are desirable design characteristics that lead to more modular,
understandable, and flexible software architectures.
Function-Oriented Design in Software Engineering:
Function-oriented design decomposes a system into a set of interacting functions, each of which transforms inputs into outputs. It is particularly suitable for systems with well-defined functions, and it is often
associated with procedural programming and structured programming.
7. Reuse: Modular functions can be reused across the system or in other projects,
promoting code reusability and reducing redundant development efforts.
8. Documentation: A clear functional decomposition leads to better system documentation.
9. Testing and Debugging: Smaller, modular functions are easier to test and
debug, which simplifies the software development and maintenance process.
Function-oriented design is often used for systems where the primary focus is on
data processing, algorithmic operations, and structured procedures. It is well-suited
for scientific and engineering applications, data processing systems, and embedded
software.
Object-Oriented Design (OOD) in Software Engineering:
Object-oriented design models a system as a collection of interacting objects that combine data and behavior.
1. Objects:
Objects are runtime instances that encapsulate state (attributes) and the operations (methods) that act on that state.
2. Classes:
Classes serve as blueprints or templates for creating objects. They define the
structure and behavior of objects of a certain type. Classes can inherit attributes
and methods from other classes, fostering reusability and hierarchical
organization.
3. Encapsulation:
Encapsulation is the practice of bundling data and methods that operate on the
data within a class, making data private and providing controlled access
through methods (getters and setters). This ensures data integrity and
modularity.
4. Inheritance:
5. Polymorphism:
6. Abstraction:
7. Modularity:
OOD encourages modular design, where complex systems are broken down
into smaller, more manageable components (objects and classes). Each
module encapsulates a specific piece of functionality.
8. Reusability:
Objects and classes can be reused in different contexts, fostering code reuse
and reducing redundancy. Libraries, frameworks, and design patterns are
examples of reusable components in OOD.
9. Association and Composition:
It's important to note that Object-Oriented Design is just one of several design
paradigms in software engineering. The choice of design paradigm depends on the
nature of the project, the problem domain, and the specific requirements.
User Interface Design
The user interface is the part of the software through which users interact with the system. A good user interface should be:
Attractive
Simple to use
Clear to understand
There are two types of User Interface:
1. Command Line Interface (CLI): The user interacts with the system by typing commands.
2. Graphical User Interface (GUI): The user interacts with the system through graphical elements such as windows, icons, menus, and pointers.
The analysis and design process of a user interface is iterative and can be
represented by a spiral model. The analysis and design process of user interface
consists of four framework activities.
1. User, Task, and Environment Analysis: The focus is first on the profile of the users who will interact with the system. Users are grouped into categories, and requirements are
gathered from each category. Based on the requirements, the developer understands how to develop the
interface. Once all the requirements are gathered, a detailed analysis is
conducted. In the analysis part, the tasks that the user performs to establish the
goals of the system are identified, described and elaborated. The analysis of
the user environment focuses on the physical work environment. Among the
questions to be asked are:
Will the user be sitting, standing, or performing other tasks unrelated to the
interface?
2. Interface Design: The goal of this phase is to define the set of interface objects
and actions i.e. Control mechanisms that enable the user to perform desired
tasks. Indicate how these control mechanisms affect the system. Specify the
action sequence of tasks and subtasks, also called a user scenario. Indicate the
state of the system when the user performs a particular task. Always follow the
three golden rules stated by Theo Mandel. Design issues such as response
time, command and action structure, error handling, and help facilities are
considered as the design model is refined. This phase serves as the foundation
for the implementation phase.
3. Interface Construction (Implementation): The design model is turned into a working interface, typically beginning with a prototype that is evaluated and refined before the complete interface is built with appropriate tools and components.
4. Interface Validation: This phase focuses on testing the interface. The interface
should be in such a way that it should be able to perform tasks correctly and it
should be able to handle a variety of tasks. It should achieve all the user’s
requirements. It should be easy to use and easy to learn. Users should accept
the interface as a useful one in their work.
Unit 3
Software Metrics
Software Metrics refer to quantitative measures that provide insight into various
attributes of software such as its quality, size, complexity, and efficiency. They are
crucial for assessing and improving the software development process. Here, we'll
delve into Software Measurements and Token Count.
Purpose:
Software measurement quantifies attributes of software products and processes so that quality, progress, and productivity can be assessed objectively. Common categories of metrics include:
1. Product Metrics: They describe characteristics of the product, such as size, complexity, performance, and quality level.
2. Process Metrics: They measure the effectiveness of the development process, such as defect removal efficiency and average defect-fix time.
3. Project Metrics: They assess various project-related aspects like cost,
schedule adherence, and effort estimation accuracy.
Token Count
Definition: Token Count is a software measurement technique used to determine
the size and complexity of source code by counting the number of fundamental
elements known as 'tokens.'
Types of Tokens: Tokens are basic building blocks in source code, including:
1. Keywords: Reserved words of the programming language (e.g., if, while, return).
2. Identifiers: Names given to variables, functions, and classes.
3. Operators: Symbols such as +, -, *, /, and =.
4. Constants: Fixed values like integers, strings, or literals used in the code.
1. Size Estimation: Helps in estimating the size of the software, aiding in project
planning and effort estimation.
2. Complexity Indication: A higher token count generally indicates denser and more complex code.
3. Basis for Measurement: Token count forms a fundamental basis for various
software metrics, such as lines of code (LOC) and function points.
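For Python source specifically, the standard library tokenizer can produce such a count; which token types to exclude (comments, newlines, markers) is a design choice assumed here:

```python
import io
import tokenize


def count_tokens(source: str) -> int:
    """Count meaningful tokens in Python source.

    Comments and purely structural tokens (newlines, indentation,
    end markers) are excluded.
    """
    skip = {
        tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
        tokenize.INDENT, tokenize.DEDENT, tokenize.ENDMARKER,
    }
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    return sum(1 for tok in tokens if tok.type not in skip)


# 'total', '=', 'price', '*', 'qty' -> 5 tokens
print(count_tokens("total = price * qty  # comment\n"))
```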
These software metrics, particularly software measurements and token count, play
a pivotal role in evaluating software quality, assessing complexity, and aiding in
project management by providing quantifiable data for analysis and decision-
making.
1. Program Length (N): The total number of operator and operand occurrences
in the code, N = N1 + N2, where N1 counts operator occurrences (symbols
such as +, -, *, / and keywords) and N2 counts operand occurrences
(variables, constants, and literals).
2. Program Vocabulary (n): The total number of distinct operators and
operands used in the code, n = n1 + n2, where n1 is the number of distinct
operators and n2 the number of distinct operands.
3. Volume (V): Measures the size of the implementation, computed as
V = N × log2(n).
4. Difficulty (D): Indicates the difficulty level of understanding the code. It's
calculated as D = (n1 / 2) × (N2 / n2), i.e., half the distinct operators times
the average reuse of operands.
5. Effort (E): Represents the effort required to understand or modify the code. It's
computed as E = V × D, indicating the product of volume and difficulty.
6. Time Required to Program (T): Estimates the time required to write the code
and is calculated as T = E / 18 seconds.
7. Number of Bugs Expected (B): Predicts the number of errors that might be
present in the code, commonly estimated as B = V / 3000 (some formulations
use B = E^(2/3) / 3000).
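The computations can be sketched in Python, given already-tallied operator and operand counts; the tallies below are illustrative, and B uses the common V / 3000 estimate:

```python
import math


def halstead(n1, n2, N1, N2):
    """n1/n2: distinct operators/operands; N1/N2: total occurrences."""
    n = n1 + n2                # program vocabulary
    N = N1 + N2                # program length
    V = N * math.log2(n)       # volume
    D = (n1 / 2) * (N2 / n2)   # difficulty
    E = V * D                  # effort
    return {
        "vocabulary": n, "length": N, "volume": V,
        "difficulty": D, "effort": E,
        "time_s": E / 18,      # estimated coding time in seconds
        "bugs": V / 3000,      # delivered-bugs estimate
    }


# Illustrative tallies for a small module.
m = halstead(n1=10, n2=15, N1=40, N2=60)
print(round(m["volume"], 1), m["difficulty"])
```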
Conclusion:
The Halstead Software Science Measure offers a comprehensive set of metrics to
quantify software complexity, size, and the effort required for software development
and maintenance. These metrics aid software developers, project managers, and
stakeholders in making informed decisions and assessments regarding software
quality and efficiency.
3. Fan-in and Fan-out: These measure a module's incoming and outgoing connections, respectively.
4. Size and Complexity of Data Structures: Evaluates the size, complexity, and
efficiency of data structures used in the software, such as arrays, trees, graphs,
etc.
1. Security and Privacy Analysis: Helps in identifying potential vulnerabilities
related to information flow and data exchange.
These metrics, focusing on data structures and information flow, play a crucial role
in assessing the structural complexity, efficiency, and information dynamics within
software systems. They aid in identifying potential areas of improvement, guiding
optimization efforts, and ensuring better software quality and performance.
Software Reliability
Software reliability refers to the probability of a software system functioning without
failure under specified conditions for a specified period. It is a crucial aspect of
software quality assurance, ensuring that the software behaves as expected and
meets user requirements consistently.
6. Risk Mitigation: Unreliable software can pose significant risks such as data
loss, system crashes, or security breaches. Software reliability efforts mitigate
these risks.
5. Documentation and Version Control: Well-maintained documentation and
version control practices facilitate better management and understanding of
software changes, leading to improved reliability.
Hardware Reliability:
Definition: Hardware reliability refers to the probability that a piece of hardware will
perform its required function for a specified period under stated conditions.
Factors Affecting Hardware Reliability:
4. Age and Wear: Over time, hardware components may degrade, impacting their
reliability.
Importance of Hardware Reliability:
2. Data Integrity: Hardware reliability is crucial for safeguarding data integrity and
preventing data loss or corruption.
Software Reliability:
Definition: Software reliability refers to the probability that software will perform its
intended functions without failure under specified conditions for a defined period.
Factors Affecting Software Reliability:
2. Testing and Debugging: Rigorous testing and efficient bug fixing contribute to
software reliability.
Conclusion:
Both hardware and software reliability are crucial for ensuring the efficient and
uninterrupted operation of systems. While hardware reliability focuses on the
physical components' stability, software reliability centers around the software's
ability to perform as expected without failure. Organizations and developers aim to
optimize both aspects to deliver reliable and high-quality products and services.
Faults:
Definition: Faults refer to defects or abnormalities within a system, software, or
hardware that can potentially cause errors or malfunctions.
Types of Faults:
Software Faults: Defects in the code, design, or requirements of a software system.
Hardware Faults: Physical defects or malfunctions in hardware components.
Causes of Faults:
Software faults can arise from human error during coding, incorrect logic, or
inadequate testing.
Hardware faults can originate from manufacturing defects, wear and tear, or
environmental factors.
Examples:
Failures:
Definition: Failures occur when the system, software, or hardware deviates from its
expected behavior, resulting in an observable or detectable anomaly.
Types of Failures:
Software Failures: Occur when the software doesn’t perform its intended
functions or produces incorrect results.
Manifestation:
Causes of Failures:
Examples:
Not all Faults Result in Failures: Some faults might remain dormant or
masked, causing no observable failure until specific conditions or triggers occur.
Conclusion:
Reliability Models
1. Basic Reliability Model:
The Basic Reliability Model is a fundamental approach used to estimate and predict
the reliability of a system or component over time. It often employs statistical
methods to model the failure rate of the system. The model assumes:
Constant Failure Rate: It assumes that the failure rate remains constant over
the operational lifetime of the system.
2. Logarithmic Poisson Model:
The Logarithmic Poisson Model instead assumes:
Time-dependent Failure Rate: The failure rate is not constant; it changes over
time. It may increase or decrease based on factors like wear, environmental
conditions, or usage patterns.
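Under the constant-failure-rate assumption, reliability follows R(t) = e^(-λt) with MTBF = 1/λ; a small sketch, with an illustrative failure rate:

```python
import math


def reliability(failure_rate_per_hour: float, hours: float) -> float:
    """R(t) = exp(-lambda * t) under a constant failure rate."""
    return math.exp(-failure_rate_per_hour * hours)


lam = 0.001  # illustrative: one failure per 1000 hours on average
print(round(reliability(lam, 100), 4))  # probability of surviving 100 h
print(1 / lam)                          # MTBF in hours
```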
Each of these models and standards aims to enhance reliability, quality, and
efficiency in different domains—ranging from predicting system reliability to guiding
software development processes and ensuring organizational quality standards.
They provide structured methodologies to measure, evaluate, and improve reliability
and quality across various domains.
Unit 4
Software Testing
Software testing is a crucial phase in the software development life cycle (SDLC)
that involves evaluating and validating software applications or systems to ensure
they meet specified requirements and perform as expected. This process helps
identify errors, bugs, or defects, thereby enhancing software quality and reliability.
3. Test Design: Developing test cases and test scenarios based on requirements.
6. Test Closure: Summarizing test results, creating test reports, and evaluating
test completion.
Conclusion:
Testing process
The testing process is a systematic and organized approach to evaluate software
applications or systems to identify defects, errors, or issues and ensure they meet
specified requirements. It involves several stages and activities to ensure
comprehensive testing and high-quality software delivery.
2. Test Planning: Defining the testing objectives, scope, resources, timelines, and
test strategy based on the project requirements.
3. Test Design: Creating detailed test cases, scenarios, and test data based on
functional and non-functional requirements.
5. Test Execution: Running the test cases, recording test results, and comparing
actual outcomes with expected results.
9. Test Closure: Summarizing test results, creating test reports, and evaluating
whether testing objectives have been met.
Functional Testing
Functional testing is a software testing technique that focuses on verifying that the
software application or system performs its functions as expected. It involves testing
the functionalities of the software against the specified requirements. Two common
methods used in functional testing are Boundary Value Analysis and Equivalence
Class Testing.
Boundary Value Analysis (BVA): Test cases are designed at and around the boundaries of input ranges, since defects tend to cluster at the edges of valid input domains.
Process:
Test Case Design: Create test cases for both valid and invalid boundary
values.
For instance, if a range for input values is defined from 1 to 100, the
boundary values would be 0, 1, 2, 99, 100, and 101.
Advantages:
Example: For a system that accepts numbers between 1 and 100, boundary values
could include testing inputs like 0, 1, 2, 99, 100, and 101 to ensure the system
handles boundary conditions correctly.
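The boundary value example above can be sketched as a small check, assuming a hypothetical validator `accept_value` that accepts integers in the range 1-100:

```python
def accept_value(n):
    """Hypothetical validator: accepts integers in the range 1-100."""
    return 1 <= n <= 100

# Boundary value analysis: test just below, on, and just above each boundary.
boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}

for value, expected in boundary_cases.items():
    assert accept_value(value) == expected, f"failed at boundary {value}"
print("all boundary cases passed")
```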
Process:
Partitioning Inputs: Divide the input data into groups/classes that are
expected to be processed in the same way by the software.
For instance, if an input field accepts values between 1 and 100, three
equivalence classes could be 0-1 (invalid), 1-100 (valid), and 100-101
(invalid). Test cases would be selected from each class to validate the
software's behavior.
Test Execution: Execute the chosen test cases from each equivalence class.
Advantages:
Ensures that test cases are representative of the entire class of inputs.
Example: For a system that accepts age inputs between 1 and 100, equivalence
classes could be invalid inputs (0, negative values), valid inputs (1-100), and
invalid inputs (values above 100).
Both Boundary Value Analysis and Equivalence Class Testing are effective methods
in functional testing to ensure that the software handles input boundaries and
classes appropriately, reducing the number of test cases needed while maintaining
thorough coverage.
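Equivalence class testing can be sketched the same way: one representative value per class is enough, since every member of a class is expected to behave identically. The `accept_age` function below is a hypothetical validator for the age example:

```python
def accept_age(age):
    """Hypothetical validator: valid ages are 1-100 inclusive."""
    return 1 <= age <= 100

# One representative test value per equivalence class.
classes = {
    "invalid_low":  (-5, False),   # values below 1 (0 and negatives)
    "valid":        (50, True),    # values in 1-100
    "invalid_high": (150, False),  # values above 100
}

for name, (value, expected) in classes.items():
    assert accept_age(value) == expected, f"class {name} failed"
```

Note how three test cases cover the same input space that exhaustive testing would need thousands of cases for.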
Process:
4. Test Execution: Execute the derived test cases and verify if the system
behaves as expected for each combination.
Advantages:
Helps in creating a compact and comprehensive set of test cases for testing
different scenarios.
Process:
1. Identifying Causes and Effects: Determine inputs and their possible effects
on the system.
3. Deriving Test Cases: Generate test cases from the cause-effect graph to cover
different scenarios and combinations.
4. Test Execution: Execute the test cases derived from the cause-effect graph
and validate the system's behavior.
Advantages:
Comparison:
Structural testing
Structural testing refers to a category of software testing techniques that focus on
examining the internal structure of the software code to ensure adequate test
coverage. Within structural testing, various methods like Path Testing, Data Flow
Testing, and Mutation Testing are used to assess different aspects of the software
code.
Path Testing:
Definition: Path Testing is a structural testing method that evaluates every possible
executable path in the software code.
Process:
1. Identifying Paths: Identify all possible paths through the software code. These
paths cover different combinations and sequences of statements, branches,
and loops.
2. Creating Test Cases: Develop test cases to execute each identified path in the
code.
3. Test Execution: Execute the test cases, ensuring that each path is covered
and tested at least once.
Advantages:
Provides thorough coverage by testing all possible paths through the code.
Helps in identifying complex logical errors and dead code that might not be
apparent in other testing methods.
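As a minimal illustration of path testing, the hypothetical function below has two independent decisions, giving four executable paths, and one test case is written per path:

```python
def classify(n):
    """Two independent decisions give four executable paths."""
    if n < 0:            # decision 1
        sign = "negative"
    else:
        sign = "non-negative"
    if n % 2 == 0:       # decision 2
        parity = "even"
    else:
        parity = "odd"
    return f"{sign} {parity}"

# One test case per path: (negative / non-negative) x (even / odd).
assert classify(-2) == "negative even"
assert classify(-3) == "negative odd"
assert classify(4) == "non-negative even"
assert classify(5) == "non-negative odd"
```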
Process:
1. Identifying Data Flow: Identify variables and track their usage, flow, and
transformations within the code.
2. Creating Test Cases: Develop test cases based on data flow criteria to cover
various scenarios where data values change.
Advantages:
Identifies potential data inconsistency and data dependency problems within the
code.
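A small sketch of the idea, using a hypothetical `discount` function: data flow testing exercises each definition-use pair of the variable `rate`, so both the initial definition and the redefinition inside the branch must reach the use in the return statement:

```python
def discount(price, is_member):
    rate = 0               # definition of `rate` (percent)
    if is_member:
        rate = 10          # redefinition of `rate` inside the branch
    return price - price * rate // 100   # use of `rate`

# One test per definition-use pair of `rate`:
assert discount(100, False) == 100   # initial definition reaches the use
assert discount(100, True) == 90     # branch definition reaches the use
```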
Mutation Testing:
Definition: Mutation Testing is a structural testing method that involves introducing
deliberate changes (mutations) into the software code to evaluate the effectiveness
of the test cases in detecting these changes.
Process:
2. Executing Test Cases: Run the existing test suite against these mutated
versions of the code.
3. Evaluating Effectiveness: Determine the ability of the test cases to detect and
fail the mutated versions (i.e., kill the mutants).
Advantages:
Evaluates the robustness of the test suite by measuring its ability to detect
changes in the code.
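The idea can be sketched with a hypothetical `maximum` function and one operator mutant: a strong test suite should fail (and thereby "kill") the mutant while still passing for the original code.

```python
def maximum(a, b):
    return a if a > b else b

def mutant(a, b):
    # Deliberate mutation: relational operator `>` flipped to `<`.
    return a if a < b else b

def run_suite(fn):
    """A tiny test suite; returns True only if every case passes."""
    cases = [((1, 2), 2), ((2, 1), 2), ((3, 3), 3)]
    return all(fn(a, b) == want for (a, b), want in cases)

assert run_suite(maximum) is True    # original implementation passes
assert run_suite(mutant) is False    # the suite "kills" this mutant
```

The mutation score of a suite is the fraction of mutants it kills; surviving mutants point at inputs the suite never checks.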
Conclusion:
Structural testing methods like Path Testing, Data Flow Testing, and Mutation
Testing focus on assessing different aspects of the software code to ensure
comprehensive test coverage. They help in uncovering errors, weaknesses, and
potential issues within the code, thereby enhancing the quality and reliability of the
software.
Unit Testing:
Definition: Unit Testing is the process of testing individual units or components of
the software in isolation. It focuses on verifying that each unit functions correctly as
per the specified requirements.
Key Aspects:
Objective: To validate that each unit operates as expected and meets its individual
functionality.
Benefits:
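A minimal unit test might look like the sketch below, using Python's unittest module; the `to_celsius` converter is a hypothetical unit under test, chosen only for illustration:

```python
import unittest

def to_celsius(fahrenheit):
    """Unit under test: converts Fahrenheit to Celsius."""
    return (fahrenheit - 32) * 5 / 9

class TestToCelsius(unittest.TestCase):
    def test_freezing_point(self):
        self.assertEqual(to_celsius(32), 0)

    def test_boiling_point(self):
        self.assertEqual(to_celsius(212), 100)

    def test_fixed_point(self):
        self.assertEqual(to_celsius(-40), -40)

# Run the suite programmatically (equivalent to `python -m unittest`).
unittest.main(argv=["unit-test-demo"], exit=False)
```

Each test exercises the unit in isolation, so a failure points directly at the one function under test.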
Integration Testing:
Key Aspects:
Benefits:
Validates that the integrated system performs as expected before reaching the
system testing phase.
System Testing:
Definition: System Testing is a level of software testing where the complete and
integrated software system is tested as a whole. It evaluates the entire system's
functionality and behavior against specified requirements.
Key Aspects:
Benefits:
Conclusion:
Unit Testing, Integration Testing, and System Testing are integral parts of the
software testing life cycle, each focusing on different levels and aspects of the
software. Together, they help in ensuring that the software functions correctly, meets
requirements, and delivers value to users while detecting and addressing defects at
various stages of development.
Debugging:
Definition: Debugging is the process of identifying, analyzing, and fixing defects,
errors, or issues within the software code to ensure proper functionality.
Key Aspects:
Defect Identification: Locate and isolate issues that cause the software to
behave unexpectedly or incorrectly.
Root Cause Analysis: Analyze the underlying causes of defects, which could
be logical errors, syntax errors, or runtime issues.
Techniques:
Use of Debugging Tools: Utilize tools like debuggers, loggers, and profilers to
trace and identify defects.
Testing and Validation: After fixing, perform retesting to ensure the defect is
resolved without introducing new issues.
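One lightweight technique from the list above is tracing with a logger: log statements expose intermediate state so the defective path can be located without stepping through a debugger. The `average` function below is a hypothetical illustration:

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s: %(message)s")

def average(values):
    # Log intermediate state to trace where a computation goes wrong.
    logging.debug("input values: %r", values)
    if not values:
        # Guard for the defect path: dividing by len([]) would raise.
        logging.error("empty input would cause ZeroDivisionError")
        return 0.0
    total = sum(values)
    logging.debug("total=%s count=%s", total, len(values))
    return total / len(values)

average([2, 4, 6])   # DEBUG lines trace the computation
average([])          # ERROR line pinpoints the defect path
```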
Importance:
Testing Tools:
Categories of Testing Tools:
Test Management Tools: Manage test cases, test execution, and reporting
(e.g., HP ALM, TestRail).
Automation Testing Tools: Automate test cases for regression, functional, and
performance testing (e.g., Selenium, Appium, JUnit).
Benefits:
Testing Standards:
Common Testing Standards:
Benefits of Standards:
Conclusion:
Debugging, Testing Tools, and Standards are integral parts of software development
and quality assurance. Debugging helps in identifying and rectifying defects, while
testing tools automate and streamline testing processes. Adherence to testing
standards ensures consistency and quality in testing practices, ultimately
contributing to the delivery of reliable and high-quality software products.
Software Maintenance:
Definition: Software Maintenance involves managing and updating software
systems to meet changing user needs, resolve defects, enhance performance, and
adapt to new environments.
Management of Maintenance:
Key Aspects:
Best Practices:
3. Planning: Develop a plan outlining the tasks, resources, and timelines for
implementing changes.
Metrics in Maintenance:
MTTR (Mean Time to Repair): Measures the average time taken to address
and resolve issues.
Defect Density: Measures the number of defects identified per unit of software
size.
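Both metrics are simple ratios and can be computed directly; the figures below are illustrative, not taken from the source:

```python
# MTTR: average time to resolve an issue (hours per repair).
repair_times_hours = [4.0, 2.5, 6.0, 3.5]   # illustrative repair log
mttr = sum(repair_times_hours) / len(repair_times_hours)
print(f"MTTR: {mttr:.2f} hours")

# Defect density: defects found per unit of software size (per KLOC).
defects_found = 18
size_kloc = 12.0                            # thousands of lines of code
defect_density = defects_found / size_kloc
print(f"Defect density: {defect_density:.2f} defects/KLOC")
```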
Conclusion:
Maintenance Models
Software maintenance models are frameworks or approaches that guide the
management and execution of software maintenance activities. They provide
structured methodologies to handle changes, updates, and enhancements to
existing software systems.
Updates for Compatibility: Adjusts the software to work with new hardware,
operating systems, or third-party software.
Key Aspects:
Conclusion:
Regression Testing
Regression Testing is a vital software testing technique conducted to ensure that
recent changes or modifications in the code haven't adversely affected the existing
functionalities of the software. It aims to confirm that the previously developed and
tested software still performs correctly after alterations.
2. Scope: Regression testing verifies the unchanged parts of the software along
with the modified areas.
1. Complete Regression Testing: Executing the entire test suite after every
change. It ensures comprehensive coverage but may be time-consuming.
Maintaining Software Quality: Ensures that the software remains stable and
functions correctly despite modifications.
Conclusion:
Reverse Engineering:
Definition: Reverse Engineering involves analyzing a system's design, structure, or
code to understand its functionality, logic, and architecture without having access to
its original documentation or source code.
Key Aspects:
Goal: Understand and document how a system works, even if its design or
source code is not available.
Software Re-engineering:
Definition: Software Re-engineering involves modifying, restructuring, or enhancing
an existing software system's structure, design, or functionalities to improve its
maintainability, performance, or comprehensibility.
Key Aspects:
Foundation: The insights gained from reverse engineering guide decisions and
actions taken during the re-engineering process.
Techniques Used:
Conclusion:
Configuration Management:
Definition: Configuration Management (CM) is the discipline of identifying,
organizing, and controlling software and hardware components and changes to
these components throughout the software development lifecycle.
Key Aspects:
Benefits of Documentation:
Conclusion: