Module 5 - SE
Software Engineering
Verification vs Validation
1. Definitions:
o Verification:
Ensures the software implements a specific function correctly.
Involves tasks like designing test cases to expose defects.
Answers: "Are we building the product right?"
o Validation:
Ensures the software meets customer requirements and performs
expected functions.
Tests reflect the system's expected usage.
Answers: "Are we building the right product?"
2. Barry Boehm's Perspective:
o Verification: Conformance of the software to its specification.
o Validation: Conformance of the software to user needs.
Testing Strategy
1. General Approach:
o Begin with testing-in-the-small (focusing on components/modules).
o Progress to testing-in-the-large (integrating and testing the complete system).
2. Conventional Software:
o Start with individual modules and proceed to module integration.
3. Object-Oriented Software:
o Focus on testing classes (attributes and operations).
o Classes are integrated progressively to build the entire system.
Unit Testing
Definition and Focus:
Unit testing focuses on testing individual components or modules in isolation.
It ensures the internal processing logic and data structures work as intended.
Key Points:
1. Critical Modules:
o Emphasis on critical modules with high cyclomatic complexity (a measure of
program complexity).
o Cyclomatic complexity is calculated as V(G) = E − N + 2, where E is the number of edges and N is the number of nodes in the flow graph. Example: with E = 14 and N = 11, V(G) = 14 − 11 + 2 = 5. A short sketch of this computation follows this list.
2. Unit Test Targets:
o Module Interface: Check proper data flow in and out of the module.
o Local Data Structures: Verify data integrity during algorithm execution.
o Boundary Conditions: Test functionality at boundary values.
o Independent Paths: Ensure all statements in a module are executed at least
once.
o Error Handling Paths: Test module responses to specific errors.
3. Common Computational Errors:
o Incorrect arithmetic precedence or initialization of values.
o Data type mismatches (e.g., int vs float).
o Loop-related issues, such as improper termination or modification of variables.
o Boundary value violations and logical operator errors.
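A minimal sketch of the cyclomatic-complexity calculation from point 1 above, using a small hypothetical flow graph (the edge list is invented for illustration):

```python
# Computing cyclomatic complexity V(G) = E - N + 2 from a flow
# graph given as an edge list. The graph below is a hypothetical
# example: a single if/else decision.

def cyclomatic_complexity(edges):
    """Return V(G) = E - N + 2 for a connected flow graph."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

flow_graph = [
    (1, 2),  # entry -> decision node
    (2, 3),  # decision -> true branch
    (2, 4),  # decision -> false branch
    (3, 5),  # true branch -> exit
    (4, 5),  # false branch -> exit
]

print(cyclomatic_complexity(flow_graph))  # 5 edges - 5 nodes + 2 = 2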
Drivers and Stubs:
Driver: A throwaway main program that accepts test data, passes it to the module under test, and prints the results.
Stub: A replacement for lower-level modules called by the module under test; it performs minimal processing and returns control to the caller.
Purpose:
o Allow components to be tested in isolation.
o Both represent testing overhead, since they do not form part of the final software.
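A minimal sketch of a driver and a stub; compute_discount and fetch_rate are hypothetical names invented for illustration:

```python
# The module under test depends on a lower-level module
# fetch_rate() that is not yet available, so a stub stands in.

def fetch_rate_stub(customer_id):
    # Stub: replaces the real lower-level module, returns a fixed
    # value, and hands control straight back to the caller.
    return 0.10

def compute_discount(price, customer_id, fetch_rate=fetch_rate_stub):
    # Module under test: applies a customer-specific discount rate.
    return price * (1 - fetch_rate(customer_id))

def driver():
    # Driver: a throwaway main program that feeds test data to the
    # module under test and prints the results, including boundary
    # values (zero and a very large price).
    for price in (0.0, 1.0, 100.0, 1_000_000.0):
        print(price, "->", compute_discount(price, customer_id=42))

if __name__ == "__main__":
    driver()
```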
Integration Testing
Definition: Combines unit-tested modules incrementally to build the software system. Identifies
errors in module integration.
Types of Integration Testing:
1. Top-Down Integration:
o Approach:
Integration starts at the main module and moves downward in the
hierarchy.
Modules are added using depth-first or breadth-first integration.
o Advantages:
Verifies control or decision points early.
o Disadvantages:
Requires stubs for lower-level modules (see the integration sketch after this list).
Data-flow testing is delayed while low-level modules are stubbed, which may require additional effort later.
2. Bottom-Up Integration:
o Approach:
Starts with atomic modules and progresses upward.
o Advantages:
Low-level data processing is verified early.
No stubs are needed.
o Disadvantages:
Drivers are required for lower-level module testing, increasing initial
effort.
Incomplete algorithms in drivers may necessitate more testing later.
3. Sandwich Integration:
o Approach:
Combines top-down and bottom-up strategies.
Functional groups of high- and low-level modules are tested alternately.
o Advantages:
Balances the benefits of both approaches.
Minimizes the need for drivers and stubs.
o Disadvantages:
Requires careful planning to avoid chaotic integration.
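A minimal sketch of top-down integration with hypothetical modules: the main control module is exercised first with a stub standing in for a lower-level module, and the stub is later swapped for the real implementation:

```python
# Top-down integration sketch (hypothetical modules): the control
# logic at the top of the hierarchy is verified early, before the
# lower-level report module exists.

def report_stub(data):
    # Stub for the not-yet-integrated lower-level module.
    return "stub-report"

def report_real(data):
    # Real lower-level module, integrated in a later step.
    return f"report({len(data)} records)"

def main_module(data, report=report_stub):
    # Top-level control module: its decision points are tested
    # first, satisfying the "verify control early" advantage.
    if not data:
        return "empty"
    return report(data)

print(main_module([1, 2, 3]))                      # uses the stub
print(main_module([1, 2, 3], report=report_real))  # after integration
```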
Regression Testing
Purpose: Ensures that changes or additions to the software do not negatively affect existing
functionality.
Key Features:
1. Re-executes selected test cases that have already been conducted.
2. Detects unintended side effects and additional errors.
3. May be manual or automated.
Test Suite Components:
1. Representative sample of all software functions.
2. Tests targeting likely-affected functions.
3. Tests focusing on altered components.
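A minimal sketch of such a regression suite, assuming a hypothetical slugify function that was recently changed; the three tests correspond to the three suite components above:

```python
import unittest

def slugify(title):
    # Hypothetical component that was recently changed.
    return title.strip().lower().replace(" ", "-")

class RegressionSuite(unittest.TestCase):
    def test_representative_behaviour(self):
        # 1. Representative sample of existing functionality.
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_likely_affected_function(self):
        # 2. Behaviour the change could plausibly disturb.
        self.assertEqual(slugify("  padded  "), "padded")

    def test_changed_component(self):
        # 3. Focused check on the altered component itself.
        self.assertEqual(slugify("A B C"), "a-b-c")

if __name__ == "__main__":
    unittest.main()
```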
Smoke Testing
Definition: A preliminary test to verify the basic functionality of the software build.
Steps:
1. Integrate code into a “build,” including necessary data files and components.
2. Execute a series of tests to identify “show-stopper” errors.
3. Integrate new builds and re-run the smoke tests daily throughout development.
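A minimal sketch of a daily smoke-test run; the three checks are hypothetical placeholders for real probes of critical paths:

```python
# Smoke-test sketch for a hypothetical build: each check probes one
# critical path, and the run stops at the first show-stopper error.

def check_startup():
    return True  # e.g. the application process launches

def check_database():
    return True  # e.g. a connection to the test database opens

def check_core_workflow():
    return True  # e.g. one end-to-end transaction completes

SMOKE_CHECKS = [check_startup, check_database, check_core_workflow]

def run_smoke_tests():
    for check in SMOKE_CHECKS:
        if not check():
            raise SystemExit(f"show-stopper: {check.__name__} failed")
    print("build passes smoke test")

if __name__ == "__main__":
    run_smoke_tests()
```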
Benefits:
1. Reduces integration risks.
2. Uncovers errors early, minimizing schedule impact.
3. Improves end-product quality by identifying both functional and design errors.
4. Simplifies error diagnosis and correction.
5. Provides measurable progress indicators for managers.
White-box Testing
Definition:
Testing the internal workings of a product to ensure all internal operations are
performed according to specifications and all components are exercised.
Characteristics:
o Involves tests focusing on close examination of procedural detail.
o Logical paths through the software are tested.
o Test cases exercise specific sets of conditions and/or loops.
Black-box Testing
Definition:
Testing the specified functionality of a product to verify that it behaves as
intended, without considering the internal structure.
Characteristics:
o Tests are conducted at the software interface.
o Internal logical structure is not considered.
White-box Testing
Overview:
Also called Glass-box testing, this technique examines the internal logic or procedure of
the module.
Goals:
1. Ensure all independent paths within a module are exercised at least once.
2. Exercise all logical decisions on their true and false sides.
3. Execute all loops (simple and nested) at boundaries and within operational limits.
4. Validate internal data structures.
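A minimal sketch of goals 2 and 3 applied to a small hypothetical function with one decision and one loop:

```python
def sum_positive(values):
    # One loop and one decision: white-box tests must run the loop
    # at its boundaries (zero, one, many iterations) and take the
    # decision on both its true and false sides.
    total = 0
    for v in values:          # loop
        if v > 0:             # decision
            total += v
    return total

# Loop executed zero times (boundary).
assert sum_positive([]) == 0
# Loop executed once; decision taken on its true side.
assert sum_positive([5]) == 5
# Loop executed once; decision taken on its false side.
assert sum_positive([-5]) == 0
# Loop executed many times; both branches exercised.
assert sum_positive([1, -2, 3]) == 4
```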
Basis Path Testing
Introduction:
A White-box testing technique proposed by Tom McCabe.
Purpose:
1. Derive a logical complexity measure of procedural design.
2. Define a basis set of execution paths to guarantee that every statement in the
program is executed at least once.
Implementation:
o A flow graph is constructed from the procedural design, and the basis paths are derived from it.
o V(G) = E − N + 2 = 14 − 12 + 2 = 4.
o Equivalently, V(G) = P + 1 = 3 + 1 = 4, where P is the number of predicate (decision) nodes.
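A worked sketch on a small hypothetical function: two predicate nodes give V(G) = P + 1 = 3, so a basis set of three paths (and three test cases) covers every statement:

```python
def classify(x):
    # Two predicate nodes -> V(G) = P + 1 = 2 + 1 = 3, so a basis
    # set of three independent paths executes every statement
    # at least once.
    if x < 0:            # predicate 1
        return "negative"
    if x == 0:           # predicate 2
        return "zero"
    return "positive"

# One test case per basis path:
assert classify(-1) == "negative"  # path 1: predicate 1 true
assert classify(0) == "zero"       # path 2: predicate 1 false, 2 true
assert classify(1) == "positive"   # path 3: both predicates false
```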
Higher-Order Testing
Validation Testing
Focuses on software requirements.
Performed after integration testing.
Ensures:
o User actions and outputs are visible and recognizable.
o The software meets all functional, performance, and behavioral requirements.
o Documentation is correct.
o Usability and other requirements like compatibility, error recovery, and
maintainability are satisfied.
After each validation test:
o If the function or performance meets specifications, it is accepted.
o If there is a deviation, a deficiency list is created.
System Testing
Focuses on system integration.
Types of System Testing:
1. Recovery Testing
o Forces the software to fail in different ways.
o Verifies if recovery mechanisms (e.g., reinitialization, checkpointing, data
recovery) work correctly.
2. Security Testing
o Checks if protection mechanisms prevent unauthorized access.
o Ensures security measures effectively block improper penetration.
3. Stress Testing
o Executes the system under abnormal conditions (e.g., high demand on resources
like memory, CPU, or bandwidth).
4. Performance Testing
o Tests the system's run-time performance in an integrated environment.
o Often combined with stress testing.
o Helps detect performance issues or degradation.
Debugging Process
Debugging occurs as a consequence of successful testing, i.e., after a test case has uncovered an error.
It is more of an art than a science.
Starts with running a test case and finding differences between expected and actual
results.
The goal is to trace the symptom (error observed) back to its cause and fix it.
Consequences of Bugs
Bugs can fall into these categories:
Function-related bugs.
System-related bugs.
Data bugs.
Coding bugs.
Design bugs.
Documentation bugs.
Standards violations.
Debugging Techniques
1. Brute Force
Simplest but often least effective.
Involves checking logs, stack traces, and memory dumps manually.
Developers add extra output statements to understand what the software is doing step
by step.
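A minimal sketch of the brute-force approach, assuming a hypothetical average function: extra output statements expose intermediate values step by step:

```python
import logging

logging.basicConfig(level=logging.DEBUG)

def average(values):
    # Brute-force debugging: extra output statements reveal the
    # intermediate state at each step of the computation.
    logging.debug("input: %r", values)
    total = 0
    for v in values:
        total += v
        logging.debug("running total: %s", total)
    result = total / len(values)  # fails with ZeroDivisionError on []
    logging.debug("result: %s", result)
    return result

average([2, 4, 6])  # the trace shows how the result is formed
```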
2. Backtracking
Starts at the point where the error occurs and works backward through the logic to find
the cause.
Effective for small programs.
For large programs, it may become too complex because of the many potential paths.
3. Cause Elimination
The developer creates hypotheses about possible causes of the error.
Tests are conducted to eliminate incorrect hypotheses.
Once the likely cause is identified, the error is isolated and fixed.