Software Engineering
INTEGRATION AND SYSTEM TESTING
Need for Debugging
Once errors are identified in program code, it is necessary to first locate the precise program statements responsible for the errors and then to fix them. Identifying errors in program code and then fixing them is known as debugging.
DEBUGGING APPROACHES
Brute Force Method:
- This common but inefficient method involves adding print statements to a program to track intermediate values. It becomes more systematic with a symbolic debugger, which makes it easier to inspect variable values and to set breakpoints.
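A minimal sketch of the brute force approach, using a hypothetical average() function purely for illustration: print statements expose intermediate values, while the commented-out pdb call shows the symbolic-debugger alternative.

def average(values):
    total = 0
    for v in values:
        total += v
        print(f"after adding {v}: total = {total}")   # temporary trace output
    print(f"count = {len(values)}")                   # exposes intermediate state
    # import pdb; pdb.set_trace()   # symbolic-debugger alternative: pause and inspect
    return total / len(values)

print(average([4, 8, 15]))   # the traces help localize the fault if the result looks wrong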
Backtracking:
- This approach traces the source code backward
from where an error symptom appears, looking for the error.
However, as the code size increases, the number of paths to
trace back can become unmanageable.
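An illustrative (hypothetical) fragment showing how backtracking proceeds: the symptom is observed at the returned value, and the search moves backward through the statements that produced it.

def final_price(base, tax_rate, discount):
    taxed = base * tax_rate          # (3) ...and finally here: should be base * (1 + tax_rate)
    discounted = taxed - discount    # (2) then the statements feeding that value are examined...
    return discounted                # (1) symptom observed here: the result is far too small

print(final_price(100.0, 0.08, 5.0))   # prints 3.0 instead of the expected 103.0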
Program Slicing:
- Similar to backtracking, program slicing reduces the
search space by defining "slices" of the program. A slice consists of
the lines of code that influence the value of a specific variable at a
given statement.
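A hypothetical example of a slice: for the value of total at the final print statement, only the marked lines belong to the slice, so the statements computing product can be ignored while debugging.

n = 5              # in the slice: controls how many times the loop body runs
total = 0          # in the slice
product = 1        # not in the slice: never influences total
for i in range(1, n + 1):   # in the slice
    total += i     # in the slice
    product *= i   # not in the slice
print(total)       # slicing criterion: the value of total at this statement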
TYPES OF SYSTEM TESTING
Volume Testing:
This checks whether data structures can handle large
amounts of data without issues, such as ensuring a symbol table
doesn’t overflow when compiling a large program.
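A sketch of a volume test written with Python's unittest, assuming a hypothetical SymbolTable class: the table is loaded with far more entries than a typical compilation would produce, to confirm it does not overflow or misbehave.

import unittest

class SymbolTable:
    """Hypothetical symbol table used only to illustrate the test."""
    def __init__(self):
        self._entries = {}
    def insert(self, name, info):
        self._entries[name] = info
    def lookup(self, name):
        return self._entries[name]

class VolumeTest(unittest.TestCase):
    def test_one_million_symbols(self):
        table = SymbolTable()
        for i in range(1_000_000):          # deliberately large data volume
            table.insert(f"sym_{i}", i)
        self.assertEqual(table.lookup("sym_999999"), 999999)

if __name__ == "__main__":
    unittest.main()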
Configuration Testing:
Analyzes system behavior across various hardware and
software configurations specified in the requirements, ensuring
the system operates correctly in all defined setups.
Compatibility Testing:
Ensures that a system interfaces correctly with other
systems, verifying that data retrieval and interaction with external
systems function as expected.
Regression Testing:
Conducted after upgrades or bug fixes to ensure that new
changes do not introduce new bugs. Only affected test cases need
to be re-run if only specific components were modified.
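A minimal regression-testing sketch, using a hypothetical discounted_price() function: after a bug fix, the existing test cases are re-run unchanged to confirm the fix did not break previously working behavior.

import unittest

def discounted_price(price, percent):
    """Hypothetical function under test; a defect in it was recently fixed."""
    return round(price * (1 - percent / 100), 2)

class RegressionSuite(unittest.TestCase):
    def test_existing_behaviour_unchanged(self):
        self.assertEqual(discounted_price(200.0, 10), 180.0)

    def test_fixed_defect_stays_fixed(self):
        # case added when the original defect (percent = 0 mishandled) was reported
        self.assertEqual(discounted_price(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()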
Recovery Testing:
Tests how well the system responds to faults or resource
loss (like power or devices) and checks its ability to recover from
these issues without significant data loss.
Maintenance Testing:
Verifies that diagnostic programs and maintenance
procedures are correctly implemented and functional.
Documentation Testing:
Ensures that user manuals and other documentation are
complete, consistent, and compliant with specified audience
requirements.
Usability Testing:
Evaluates the user interface to ensure it meets user
requirements, including testing display screens and report formats
for ease of use.
ERROR SEEDING
Error seeding is a technique used to estimate the number of residual errors in a system by intentionally introducing a known number of errors into the code. Customers may specify a maximum allowable number of errors per line of code, and error seeding helps predict how many errors remain after testing. By comparing the number of seeded errors detected during testing with the number of unseeded errors detected, the following can be estimated:
- the number of latent (unseeded) errors still present in the system, and
- the effectiveness of the testing strategy.
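A sketch of the usual estimate (function and variable names are illustrative): assuming seeded and unseeded errors are detected at roughly the same rate, the total number of unseeded errors is about unseeded_found × seeded_total / seeded_found, and the residual count is that total minus the unseeded errors already found.

def estimate_residual_errors(seeded_total, seeded_found, unseeded_found):
    # assumes seeded and real errors are equally likely to be detected
    total_unseeded = unseeded_found * seeded_total / seeded_found
    return total_unseeded - unseeded_found   # errors estimated to remain after testing

# Example: 100 errors seeded, 80 of them detected, 40 genuine errors detected.
print(estimate_residual_errors(100, 80, 40))   # -> 10.0 residual errors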
SOFTWARE MAINTENANCE COST
Maintenance efforts typically account for about 60% of a software product's total life cycle cost. In certain domains like embedded systems, maintenance costs can be 2 to 4 times the development cost.
Boehm’s COCOMO model estimates maintenance costs using a metric called Annual Change Traffic (ACT), which measures the proportion of the software's source code that is modified annually (through additions or deletions). The formula is:
ACT = (KLOC added + KLOC deleted) / KLOC total
Annual maintenance cost = ACT × Development cost
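A small numeric sketch of that formula, with purely illustrative figures:

def annual_maintenance_cost(kloc_added, kloc_deleted, kloc_total, development_cost):
    act = (kloc_added + kloc_deleted) / kloc_total   # Annual Change Traffic
    return act * development_cost

# Example: 15 KLOC added and 5 KLOC deleted in a 100 KLOC product that cost $500,000 to develop.
print(annual_maintenance_cost(15, 5, 100, 500_000))   # -> 100000.0 per year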
However, maintenance cost estimates are approximate because they don’t account for
factors like engineer experience, familiarity with the product, hardware requirements,
or software complexity.