
DEBUGGING, INTEGRATION AND SYSTEM TESTING

Need for Debugging
Once errors are identified in program code, it is necessary to first
identify the precise program statements responsible for the errors
and then to fix them. Identifying the errors in program code and
then fixing them is known as debugging.
DEBUGGING APPROACHES
Brute Force Method:
- This common but inefficient method involves adding
print statements to a program to track intermediate values. It
becomes more systematic with symbolic debuggers, which
allow for easier variable checking and setting breakpoints.
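As an illustration (with a hypothetical `average` function), brute-force debugging with print statements might look like the sketch below; the comment at the end notes how a symbolic debugger makes the same inspection more systematic:

```python
def average(values):
    total = 0
    for v in values:
        total += v
        # Brute-force debugging: print intermediate values to trace the bug.
        print(f"after adding {v}: total={total}")
    return total / len(values)  # fails with ZeroDivisionError for an empty list

print("result:", average([2, 4, 6]))  # prints: result: 4.0

# A symbolic debugger is more systematic: instead of print statements,
# set a breakpoint and inspect variables interactively, e.g.:
#   breakpoint()   # Python 3.7+; or: import pdb; pdb.set_trace()
```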

Backtracking:
- This approach traces the source code backward
from where an error symptom appears, looking for the error.
However, as the code size increases, the number of paths to
trace back can become unmanageable.
DEBUGGING APPROACHES

Cause Elimination Method:
- This technique involves creating a list of possible causes
for an error and systematically testing to eliminate each one.
Software fault tree analysis is a related technique that helps identify
errors from symptoms.

Program Slicing:
- Similar to backtracking, program slicing reduces the
search space by defining "slices" of the program. A slice consists of
the lines of code that influence the value of a specific variable at a
given statement.
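A minimal sketch of the idea, using a hypothetical function: the slice of a variable at a statement consists of exactly the lines that can affect its value there, so the debugger only needs to search those lines:

```python
# Hypothetical example: the slice of variable `s` at the return statement
# contains only the lines that influence its value.
def demo(n):
    s = 0          # in the slice of s
    p = 1          # NOT in the slice of s (only affects p)
    for i in range(1, n + 1):
        s = s + i  # in the slice of s
        p = p * i  # NOT in the slice of s
    return s       # slicing criterion: value of s here

# If demo(4) returns a wrong value of s, only the s-lines need inspection.
print(demo(4))  # 10, i.e. 1+2+3+4; p never influences this value
```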
DEBUGGING GUIDELINES

Understand Program Design: A thorough understanding of the
system design is crucial for effective debugging. Partial
understanding can lead to excessive effort for simple issues.
Avoid Fixing Symptoms: Sometimes, debugging may require
redesigning the system. Novice programmers often mistakenly
focus on fixing symptoms rather than the root cause of the error.
Regression Testing: After correcting errors, it's important to
perform regression testing, as fixes can inadvertently introduce
new errors.
Program Analysis Tools
Program analysis tools automate the evaluation
of source or executable code, providing insights
into various characteristics such as size,
complexity, and adherence to standards. These
tools fall into two main categories:

• Static Analysis Tools
• Dynamic Analysis Tools
STATIC ANALYSIS TOOLS:
• These tools analyze the code without executing it, focusing
on structural representations to derive insights.
• Common assessments include checking compliance with
coding standards, identifying uninitialized variables, and
detecting mismatches between actual and formal
parameters.
• Static analysis can also involve code walkthroughs and
inspections, though it primarily refers to automated tools
like compilers.
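As a rough sketch of what such a tool does (not a real analyzer), a few lines of Python using the standard `ast` module can flag names that are read in a function but never assigned, one of the checks mentioned above:

```python
import ast

# Minimal static check (a sketch, not a production tool): flag names that
# are read inside a function but never assigned in it or passed as a
# parameter. Real tools (compilers, linters) perform many more checks.
def find_unassigned_reads(source):
    tree = ast.parse(source)  # analyze the code WITHOUT executing it
    warnings = []
    for func in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
        assigned = {a.arg for a in func.args.args}
        for node in ast.walk(func):
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
        loads = [n.id for n in ast.walk(func)
                 if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)]
        warnings += sorted({name for name in loads if name not in assigned})
    return warnings

code = "def f(x):\n    y = x + z\n    return y\n"
print(find_unassigned_reads(code))  # ['z'] -- z is read but never assigned
```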
DYNAMIC ANALYSIS TOOLS:
• Dynamic analysis involves executing the program to observe its
actual behavior. This often includes instrumenting the code to
gather execution traces.
• After running the test suite, dynamic analysis tools perform post-
execution evaluations to report on the structural coverage
achieved, such as statement, branch, and path coverage.
• Results are typically visualized in formats like histograms or pie
charts, showing coverage across various program components.
This serves as documentation of the testing process.
• If the coverage results indicate gaps, additional test cases can be
created to improve testing, while redundant cases can be
identified and removed.
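A minimal sketch of such instrumentation (hypothetical function and tracer), using Python's `sys.settrace` to record which statements of a function actually executed, in the spirit of statement-coverage tools:

```python
import sys

# Function under test (hypothetical): has two branches.
def classify(n):
    if n < 0:
        kind = "negative"
    else:
        kind = "non-negative"
    return kind

executed = set()  # line offsets (relative to the def line) that ran

def tracer(frame, event, arg):
    # Record each executed line of classify() during the test run.
    if event == "line" and frame.f_code.co_name == "classify":
        executed.add(frame.f_lineno - classify.__code__.co_firstlineno)
    return tracer

sys.settrace(tracer)
classify(5)            # only the else-branch runs
sys.settrace(None)

# The line assigning "negative" never executed: a coverage gap that
# suggests adding a test case with a negative input.
print(sorted(executed))
```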
INTEGRATION TESTING

Integration testing focuses on verifying the interfaces between
modules to ensure correct parameter passing when one module
calls another. It involves integrating different modules according
to a predefined integration plan, which outlines the steps and
order of integration. After each step, the partially integrated
system is tested. The integration plan is guided by a module
dependency graph (or structure chart), which shows the
sequence of module interactions and helps in developing the
integration strategy.
There are four types of integration testing
approaches. Any one (or a mixture) of the following
approaches can be used to develop the integration
test plan. Those approaches are the following:

• Big bang approach
• Bottom-up approach
• Top-down approach
• Mixed approach
Big-Bang Integration Testing
is the simplest approach, where all system modules are integrated and
tested at once. It is only suitable for very small systems. The major
drawback is the difficulty in pinpointing errors since they could originate
from any module, making debugging costly and challenging.

Bottom-Up Integration Testing
involves testing each subsystem separately before integrating them into the
full system. It focuses on verifying the interfaces between modules within
each subsystem. This approach allows for simultaneous testing of multiple
subsystems and does not require stubs, only test drivers. However, it can
become complex when dealing with many small subsystems, which can lead
to challenges similar to those encountered in the big-bang approach.
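A test driver can be sketched as follows (with a hypothetical `compute_tax` routine standing in for the lowest-level module): the driver is a throwaway caller that feeds the module test inputs and checks outputs, replacing the not-yet-integrated upper layers:

```python
# Hypothetical lowest-level module under test: a tax calculator.
def compute_tax(amount, rate):
    if amount < 0 or not (0 <= rate <= 1):
        raise ValueError("invalid input")
    return round(amount * rate, 2)

# Test driver: exercises the module directly, so no stubs are needed.
def driver():
    assert compute_tax(100.0, 0.2) == 20.0
    assert compute_tax(0.0, 0.5) == 0.0
    try:
        compute_tax(-1.0, 0.2)
        assert False, "expected ValueError for negative amount"
    except ValueError:
        pass
    return "all driver checks passed"

print(driver())
```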
Top-Down Integration Testing
begins with testing the main routine along with one or two subordinate routines.
Once the top-level "skeleton" is tested, additional subroutines are integrated
and tested. This approach requires stubs to simulate lower-level routines.
While it does not need driver routines, a key disadvantage is that it can be
challenging to effectively test top-level routines without the presence of lower-
level functions, which often handle important tasks like I/O operations.
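A stub can be sketched like this (hypothetical names): it returns canned data in place of an unwritten lower-level routine, so the top-level logic can be tested now:

```python
# Stub: simulates a lower-level I/O routine that is not yet implemented,
# returning canned data so the top-level routine can be tested first.
def fetch_records_stub():
    return [("alice", 3), ("bob", 5)]

# Top-level routine under test: formats a report from fetched records.
def build_report(fetch_records):
    lines = [f"{name}: {count}" for name, count in fetch_records()]
    return "\n".join(lines)

print(build_report(fetch_records_stub))
# alice: 3
# bob: 5
```

Once the real fetch routine is integrated, it simply replaces `fetch_records_stub` in the call.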

Mixed Integration Testing
(or sandwiched testing) combines both top-down and bottom-up approaches.
It allows testing to begin as soon as modules are available, rather than
waiting for all top-level or bottom-level modules to be completed. This
flexibility addresses the limitations of the other two methods, making it one of
the most commonly used integration testing strategies.
Phased Vs. Incremental Testing
The different integration testing strategies are either phased or
incremental. A comparison of these two strategies is as follows:
• In incremental integration testing, only one new module is added to
the partial system each time.
• In phased integration, a group of related modules are added to the
partial system each time.

Phased integration involves fewer integration steps than
incremental integration, but debugging is easier in the incremental
approach since failures are tied to the addition of a single module.
Big bang testing is considered a degenerate form of phased
integration.
SYSTEM TESTING
System testing validates a fully developed system against its
requirements and includes three main types:

1. Alpha Testing: Conducted by the test team within the developing
organization.
2. Beta Testing: Performed by a select group of friendly customers.
3. Acceptance Testing: Done by the customer to decide whether to
accept the system.
Performance Testing evaluates whether a system meets the
non-functional requirements outlined in the Software
Requirements Specification (SRS) document. There are
several types of performance testing, which are considered
black-box tests. The specific types to be conducted depend
on the system's non-functional requirements. The nine types
of performance testing include:

• Stress testing
• Volume testing
• Configuration testing
• Compatibility testing
• Regression testing
• Recovery testing
• Maintenance testing
• Documentation testing
• Usability testing
Stress Testing:
Also known as endurance testing, it evaluates system
performance under extreme conditions by imposing abnormal
input scenarios, such as running more jobs than the system is
designed to handle. It’s crucial for systems that typically operate
below maximum capacity but face peak demands.

Volume Testing:
This checks whether data structures can handle large
amounts of data without issues, such as ensuring a symbol table
doesn’t overflow when compiling a large program.
Configuration Testing:
Analyzes system behavior across various hardware and
software configurations specified in the requirements, ensuring
the system operates correctly in all defined setups.
Compatibility Testing:
Ensures that a system interfaces correctly with other
systems, verifying that data retrieval and interaction with external
systems function as expected.
Regression Testing:
Conducted after upgrades or bug fixes to ensure that new
changes do not introduce new bugs. Only affected test cases need
to be re-run if only specific components were modified.
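A minimal sketch of a saved regression suite (with a hypothetical `slugify` function): the same stored cases are re-run after every fix or upgrade, and any case that stops passing signals a newly introduced bug:

```python
# Function under maintenance (hypothetical): converts text to a URL slug.
def slugify(text):
    return "-".join(text.lower().split())

# Saved regression cases: (input, expected output) pairs kept from
# earlier testing, re-run after every change.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("  spaces  ", "spaces"),
    ("Already-slug", "already-slug"),
]

def run_regression():
    # Returns the list of failing cases; empty means no regressions.
    return [(inp, exp, slugify(inp))
            for inp, exp in REGRESSION_CASES if slugify(inp) != exp]

print(run_regression())  # [] -- all saved cases still pass
```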
Recovery Testing:
Tests how well the system responds to faults or resource
loss (like power or devices) and checks its ability to recover from
these issues without significant data loss.

Maintenance Testing:
Verifies that diagnostic programs and maintenance
procedures are correctly implemented and functional.
Documentation Testing:
Ensures that user manuals and other documentation are
complete, consistent, and compliant with specified audience
requirements.

Usability Testing:
Evaluates the user interface to ensure it meets user
requirements, including testing display screens and report formats
for ease of use.
ERROR SEEDING
Error seeding is a technique used to estimate the number of residual errors in a
system by intentionally introducing known errors into the code. Customers may
specify a maximum allowable number of errors per line of code, and error
seeding helps predict how many errors remain after testing. By analyzing the
number of seeded errors detected during testing alongside the number of
unseeded errors, the following can be estimated:

• Total number of defects in the system.
• Effectiveness of the testing strategy.

If S errors are seeded and testing detects s of them along with n
unseeded errors, the relationship can be expressed mathematically as:

N = (n / s) × S

where N is the estimated total number of latent errors, so about
N − n errors remain undetected after testing.
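Using the standard error-seeding estimate (which assumes seeded and real errors are equally easy to detect), a small worked example:

```python
def estimate_residual_errors(seeded, seeded_found, unseeded_found):
    # Assumption behind error seeding: the detection ratio for seeded
    # errors (seeded_found / seeded) also applies to real errors.
    total_latent = unseeded_found * seeded / seeded_found
    residual = total_latent - unseeded_found
    return total_latent, residual

# Example: 100 errors seeded; testing finds 80 of them plus 40 real errors.
total, residual = estimate_residual_errors(100, 80, 40)
print(total, residual)  # 50.0 estimated real errors, 10.0 still undetected
```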
SOFTWARE MAINTENANCE
Necessity of Software Maintenance

Software maintenance is essential for keeping
software products functional and up-to-date. As hardware
becomes obsolete, and software needs to adapt to newer
platforms, environments, or operating systems,
maintenance ensures compatibility and continued
performance. It involves correcting errors, enhancing
features, and porting software to new systems. This
ongoing evolution allows software products to meet user
demands, address emerging issues, and remain relevant
over time.
Software maintenance is categorized into three types:

• Corrective Maintenance: Involves fixing bugs or errors that occur
during the software's use.

• Adaptive Maintenance: Required when the software needs to be
updated to work on new platforms, operating systems, or with new
hardware/software.

• Perfective Maintenance: Involves adding new features, modifying
existing functionalities, or improving the system's performance based on
user feedback.
Problems associated with software maintenance

• It is often more time-consuming and expensive than anticipated.
• Maintenance is frequently done in an unstructured, ad-hoc manner, as
it’s typically a neglected area within software engineering.
• Maintenance has a poor industry reputation, which makes it hard to
attract top talent.
• Many maintenance tasks involve working with legacy software, which
can be more complex due to the need to understand and modify
someone else’s code.
Software Reverse Engineering

Software reverse engineering is the process of
analyzing a software product's code to recover
its design and requirements specification. It is
especially important for legacy software, which
often lacks proper documentation and has
become unstructured over time due to ongoing
maintenance.
Fig. 24.1: A process model for reverse engineering

The reverse engineering process typically begins with cosmetic changes to
improve the code's readability, structure, and understandability, without altering
its functionality. This may include reformatting the code, renaming variables to
meaningful names, and simplifying complex structures.
Software Reverse Engineering
Once the code is more understandable, the next
step is to extract the design and requirements
specification. This involves understanding the
code thoroughly, often with the help of
automated tools to generate data flow and
control flow diagrams, as well as structure
charts. The ultimate goal is to create clear
documentation, such as the Software
Requirements Specification (SRS) document,
for the legacy system.
Fig. 24.2: Cosmetic changes carried out
before reverse engineering
Legacy software products
A legacy system is any software that is difficult to maintain, often due
to issues like poor documentation, unstructured or complex code, and
a lack of knowledgeable personnel. Even recently developed systems
with poor design and documentation can be considered legacy
systems.

The activities involved in maintaining a legacy system vary based on
factors such as:
• The extent of changes required.
• The resources available to the maintenance team.
• The condition of the existing product (e.g., structure,
documentation quality).
• The risks involved in the project.
SOFTWARE MAINTENANCE
PROCESS MODELS
Two broad categories of process models for
software maintenance exist. The first model is
suited for projects involving small changes,
where the code is directly modified and the
relevant documents are updated later. The
process begins by gathering and analyzing
change requirements, followed by formulating
strategies for code modification. Involving
members of the original development team can
help reduce the maintenance cycle, especially
for poorly documented or unstructured code.
Additionally, having access to a working version
of the old system helps maintenance engineers
understand the system's functionality and
compare it with the modified version, making
debugging easier by allowing for comparison of
program traces.

Fig. 25.1: Maintenance process model 1
The second process model for software
maintenance, called software reengineering,
is used when a lot of changes are needed. It
starts with reverse engineering, where the old
code is analyzed to create a design and
understand the original requirements. After that,
the changes are applied to the new
requirements and design. This process leads to
a better-structured design, improved
documentation, and often more efficient code.

However, this approach is more expensive than
the first model. It's preferred when the software
has many issues, poor design, or high failure
rates, or when more than 15% of the code
needs to be changed.

Fig. 25.2: Maintenance process model 2
Software Reengineering
Software reengineering is a combination of two consecutive
processes, i.e. software reverse engineering and software
forward engineering, as shown in Fig. 25.2.

Fig. 25.2: Maintenance process model 2
Estimation of approximate maintenance cost

Maintenance efforts typically account for about 60% of a software product's total life
cycle cost. In certain domains like embedded systems, maintenance costs can be 2
to 4 times the development cost.
Boehm’s COCOMO model estimates maintenance costs using a metric called Annual
Change Traffic (ACT), which measures the proportion of the software's source code
that is modified annually (through additions or deletions). The formula is:

ACT = (KLOC added + KLOC deleted) / KLOC total

The maintenance cost is then calculated by multiplying the ACT by the original
development cost:

Maintenance cost = ACT × development cost

However, maintenance cost estimates are approximate because they don’t account for
factors like engineer experience, familiarity with the product, hardware requirements,
or software complexity.
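The two formulas above can be checked with a small worked example (the KLOC and cost figures are hypothetical):

```python
def annual_change_traffic(kloc_added, kloc_deleted, kloc_total):
    # ACT = fraction of the source code modified (added + deleted) per year
    return (kloc_added + kloc_deleted) / kloc_total

def maintenance_cost(act, development_cost):
    # Boehm's COCOMO estimate: maintenance cost = ACT x development cost
    return act * development_cost

# Example: 6 KLOC added and 2 KLOC deleted in a 100 KLOC product
# that originally cost $1,000,000 to develop.
act = annual_change_traffic(6, 2, 100)
print(act)                               # 0.08
print(maintenance_cost(act, 1_000_000))  # 80000.0 per year
```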
