
Module 5

Software Engineering

Introduction to Software Testing


1. Definition: Testing is the process of executing a program with the specific aim of finding
errors before delivering it to the user.
2. Purpose:
o Ensures that a program does what it is intended to do.
o Identifies program defects before the software is deployed.
3. Process:
o Testing involves running the program using artificial test data.
o Results are analyzed to identify:
 Errors
 Anomalies
 Insights into non-functional attributes of the software.
4. Key Point:
o Testing can reveal the presence of errors but can never prove their absence.
o Exhaustive testing (testing every possible scenario) is often impractical.
5. Relation to Verification and Validation: Testing is part of the broader Verification and
Validation (V&V) process.

Verification vs Validation
1. Definitions:
o Verification:
 Ensures the software implements a specific function correctly.
 Involves tasks like designing test cases to expose defects.
 Answers: "Are we building the product right?"
o Validation:
 Ensures the software meets customer requirements and performs
expected functions.
 Tests reflect the system's expected usage.
 Answers: "Are we building the right product?"
2. Barry Boehm's Perspective:
o Verification: Conformance of the software to its specification.
o Validation: Conformance of the software to user needs.

What Testing Shows


1. Objectives:
o To uncover errors where the software fails to meet customer requirements.
o Uncovered errors are then debugged so that the software conforms to its requirements.
2. Outcomes:
o Demonstrating requirements conformance gives evidence about the software's performance.
o Together, these results provide an indication of software quality.

Strategic Approach to Testing


1. Preparations: Conduct effective technical reviews (inspections) to catch errors before
testing starts.
2. Levels of Testing: Testing begins at the component level and works outward to the
integration of the complete system.
3. Techniques: Different testing techniques are suitable for various phases of software
development.
4. Participants:
o Testing is conducted by both:
 Developers.
 Independent testing teams (for large projects).
5. Difference Between Testing and Debugging:
o Testing identifies errors.
o Debugging eliminates the identified errors.

Inspections and Testing


1. Software Inspections:
o Focus on analyzing static system representations (e.g., source code) to find
defects.
o Do not involve system execution and can be performed before implementation.
2. Software Testing: Focuses on executing the system with test data to observe its dynamic
behavior.
3. Complementary Methods:
o Inspections and testing work together.
o Inspections can check conformance with the specification, but they cannot check
conformance with the customer's real requirements or non-functional attributes such as
performance or usability.

Testing Strategy
1. General Approach:
o Begin with testing-in-the-small (focusing on components/modules).
o Progress to testing-in-the-large (integrating and testing the complete system).
2. Conventional Software:
o Start with individual modules and proceed to module integration.
3. Object-Oriented Software:
o Focus on testing classes (attributes and operations).
o Classes are integrated progressively to build the entire system.

Unit Testing
Definition and Focus:
 Unit testing focuses on testing individual components or modules in isolation.
 It ensures the internal processing logic and data structures work as intended.
Key Points:
1. Critical Modules:
o Emphasis is placed on critical modules and on those with high cyclomatic complexity (a
measure of program complexity).
o Cyclomatic complexity is calculated using the formula V(G) = E − N + 2.
Example: V(G) = 14 − 11 + 2 = 5.
2. Unit Test Targets:
o Module Interface: Check proper data flow in and out of the module.
o Local Data Structures: Verify data integrity during algorithm execution.
o Boundary Conditions: Test functionality at boundary values.
o Independent Paths: Ensure all statements in a module are executed at least
once.
o Error Handling Paths: Test module responses to specific errors.
3. Common Computational Errors:
o Incorrect arithmetic precedence or initialization of values.
o Data type mismatches (e.g., int vs float).
o Loop-related issues, such as improper termination or modification of variables.
o Boundary value violations and logical operator errors.
Drivers and Stubs:
 Driver: A "main program" that passes test data to the module under test and prints the results.
 Stub: A dummy replacement for a lower-level module called by the module under test; it
returns control (and canned data, if needed) to the caller.
 Purpose:
o Allow components to be tested in isolation (a minimal sketch follows this list).
o Represent testing overhead, since drivers and stubs do not form part of the final software.
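A minimal sketch of a driver and a stub in Python, assuming a hypothetical module
compute_discount that depends on a lower-level pricing routine (all names here are
illustrative, not part of any real system):

```python
# Hypothetical module under test: computes a discounted price.
# In the real system it would call a lower-level pricing module;
# the call is injected so a stub can replace it during unit testing.

def compute_discount(item_id, quantity, fetch_price):
    """Return the total cost, with a 10% discount for bulk orders."""
    unit_price = fetch_price(item_id)      # call to a lower-level module
    total = unit_price * quantity
    if quantity >= 10:                     # boundary condition worth testing
        total *= 0.9
    return round(total, 2)


def fetch_price_stub(item_id):
    """Stub: stands in for the real pricing module and returns canned data."""
    return 5.0


def driver():
    """Driver: feeds test data to the module under test and reports results."""
    cases = [(1, 1, 5.0), (1, 9, 45.0), (1, 10, 45.0)]   # includes a boundary value
    for item_id, qty, expected in cases:
        actual = compute_discount(item_id, qty, fetch_price_stub)
        status = "PASS" if actual == expected else "FAIL"
        print(f"qty={qty}: expected {expected}, got {actual} -> {status}")


if __name__ == "__main__":
    driver()
```

Because the driver and the stub exist only to exercise compute_discount, neither would ship
with the final software, which is exactly the overhead point made above.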

Integration Testing
Definition: Combines unit-tested modules incrementally to build the software system. Identifies
errors in module integration.
Types of Integration Testing:
1. Top-Down Integration:
o Approach:
 Integration starts at the main module and moves downward in the
hierarchy.
 Modules are added using depth-first or breadth-first integration.
o Advantages:
 Verifies control or decision points early.
o Disadvantages:
 Requires stubs for lower-level modules.
 Delayed data flow testing may require additional effort later.
2. Bottom-Up Integration:
o Approach:
 Starts with atomic modules and progresses upward.
o Advantages:
 Low-level data processing is verified early.
 No stubs are needed.
o Disadvantages:
 Drivers are required for lower-level module testing, increasing initial
effort.
 Incomplete algorithms in drivers may necessitate more testing later.
3. Sandwich Integration:
o Approach:
 Combines top-down and bottom-up strategies.
 Functional groups of high- and low-level modules are tested alternately.
o Advantages:
 Balances the benefits of both approaches.
 Minimizes the need for drivers and stubs.
o Disadvantages:
 Requires careful planning to avoid chaotic integration.

Regression Testing
Purpose: Ensures that changes or additions to the software do not negatively affect existing
functionality.
Key Features:
1. Re-executes selected test cases that have already been conducted.
2. Detects unintended side effects and additional errors.
3. May be manual or automated.
Test Suite Components:
1. Representative sample of all software functions.
2. Tests targeting likely-affected functions.
3. Tests that focus on the components that were actually altered (a sketch of such a suite
follows below).
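A small sketch of how such a suite might be organized with Python's unittest; the function
add_tax and its tests are hypothetical and only illustrate the three components listed above:

```python
# Regression suite sketch: a representative test, a test for functionality
# likely to be affected by a change, and a test for the changed component.
import unittest


def add_tax(amount, rate=0.08):
    """Hypothetical, recently modified component."""
    return round(amount * (1 + rate), 2)


class RegressionSuite(unittest.TestCase):
    def test_representative_function(self):
        # Representative sample of existing functionality.
        self.assertEqual(add_tax(0), 0.0)

    def test_likely_affected_function(self):
        # Functionality likely to be affected by the change.
        self.assertEqual(add_tax(100), 108.0)

    def test_changed_component(self):
        # Focuses on the component that was actually altered.
        self.assertEqual(add_tax(100, rate=0.1), 110.0)


if __name__ == "__main__":
    unittest.main()   # re-executed after every change to catch side effects
```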

Smoke Testing
Definition: A preliminary test to verify the basic functionality of the software build.
Steps:
1. Integrate code into a “build,” including necessary data files and components.
2. Execute a series of tests to identify “show-stopper” errors.
3. Daily integration of builds and smoke tests during development.
Benefits:
1. Reduces integration risks.
2. Uncovers errors early, minimizing schedule impact.
3. Improves end-product quality by identifying both functional and design errors.
4. Simplifies error diagnosis and correction.
5. Provides measurable progress indicators for managers.
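A minimal sketch of a daily smoke-test script in Python; the individual checks
(check_build_loads, check_database_reachable) are hypothetical placeholders for whatever
"show-stopper" conditions a real build would need to satisfy:

```python
# Smoke test sketch: run a handful of "show-stopper" checks against a fresh build.
import sys


def check_build_loads():
    """Hypothetical check: the build's main entry point starts without crashing."""
    return True


def check_database_reachable():
    """Hypothetical check: required data files / database are present."""
    return True


def run_smoke_tests():
    checks = [check_build_loads, check_database_reachable]
    failures = [c.__name__ for c in checks if not c()]
    if failures:
        print("SHOW-STOPPER: build rejected ->", ", ".join(failures))
        return 1
    print("Smoke test passed: build accepted for further testing.")
    return 0


if __name__ == "__main__":
    sys.exit(run_smoke_tests())
```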

White-box Testing
 Definition:
Testing the internal workings of a product to ensure all internal operations are
performed according to specifications and all components are exercised.
 Characteristics:
o Involves tests focusing on close examination of procedural detail.
o Logical paths through the software are tested.
o Test cases exercise specific sets of conditions and/or loops.

Black-box Testing
 Definition:
Testing the specified functionality of a product to check that each function operates as
intended, without considering the internal structure.
 Characteristics:
o Tests are conducted at the software interface.
o Internal logical structure is not considered.

White-box Testing
 Overview:
Also called Glass-box testing, this technique examines the internal logic or procedure of
the module.
 Goals:
1. Ensure all independent paths within a module are exercised at least once.
2. Exercise all logical decisions on their true and false sides.
3. Execute all loops (simple and nested) at boundaries and within operational limits.
4. Validate internal data structures.
Basis Path Testing
 Introduction:
A White-box testing technique proposed by Tom McCabe.
 Purpose:
1. Derive a logical complexity measure of procedural design.
2. Define a basis set of execution paths to guarantee that every statement in the
program is executed at least once.
 Implementation:
o A Flow Graph is constructed to represent the basis paths.

Flow Graph Notation


 Components:
o Node: Represents a sequence of procedural statements.
o Predicate Node: Contains a simple conditional expression with two edges (True
and False).
o Edge (Link): Represents flow of control in a specific direction, starting and
terminating at nodes.
o Region: Areas bounded by edges and nodes, including the external area.

Basis Paths / Independent Paths


 Definition:
A path introducing at least one new processing statement or condition.
 Criteria:
o Must traverse at least one previously unvisited edge.
 Example Basis Set for Flow Graph:
o Path 1: 0-1-11
o Path 2: 0-1-2-3-4-5-10-1-11
o Path 3: 0-1-2-3-6-8-9-10-1-11
o Path 4: 0-1-2-3-6-7-9-10-1-11
Cyclomatic Complexity
 Definition:
A quantitative measure of a program's logical complexity.
 Uses:
1. Determines the number of independent paths in the basis set.
2. Provides an upper bound for the number of required tests.
 Calculation Methods:
1. Count the regions in the graph.
2. Use the formula V(G) = E − N + 2, where E = number of edges and N = number of
nodes.
3. Use the formula V(G) = P + 1, where P = number of predicate nodes.
 Example:
o Number of regions = 4.

o V(G) = 14 − 12 + 2 = 4.
o V(G) = 3 + 1 = 4.
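A short Python sketch of both formulas. The edge list below is an assumed reconstruction
consistent with the basis paths listed earlier (0-1-11, 0-1-2-3-4-5-10-1-11, and so on); the
original flow-graph figure is not reproduced in these notes:

```python
# Sketch: compute cyclomatic complexity from a flow graph's edge list.

def cyclomatic_complexity(edges, num_nodes):
    """V(G) = E - N + 2."""
    return len(edges) - num_nodes + 2


def cyclomatic_from_predicates(num_predicate_nodes):
    """V(G) = P + 1."""
    return num_predicate_nodes + 1


# Assumed flow graph: 12 nodes, 14 edges, 3 predicate nodes (1, 3 and 6).
edges = [(0, 1), (1, 2), (1, 11), (2, 3), (3, 4), (3, 6), (4, 5),
         (5, 10), (6, 7), (6, 8), (7, 9), (8, 9), (9, 10), (10, 1)]

print(cyclomatic_complexity(edges, num_nodes=12))   # 14 - 12 + 2 = 4
print(cyclomatic_from_predicates(3))                # 3 + 1 = 4
```

Either formula gives the same result, an upper bound of four basis-path tests for this graph.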

Deriving the Basis Set and Test Cases


1. Steps:
o Draw the flow graph based on the code.
o Calculate the cyclomatic complexity.
o Identify linearly independent paths.
o Prepare test cases for each path.
2. Example:
o Cyclomatic Complexity: V(G) = 17 − 13 + 2 = 6
o Basis Set:
 Path 1: 3-4-20-21
 Path 2: 3-4-6-12-15-16-18-4-20-21
 Path 3: 3-4-6-12-13-15-16-18-4-20-21
 Path 4: 3-4-6-12-13-18-4-20-21
 Path 5: 3-4-6-7-12-13-18-4-20-21
 Path 6: 3-4-6-7-9-10-18-4-20-21
Black-box Testing
 In black-box testing, the tester does not have any knowledge of the internal workings of
the program or software.
 It complements white-box testing and is likely to uncover a different class of errors.
 Black-box testing is typically applied during the later stages of testing, after white-box testing.
 The focus is on the functional requirements and how the software interacts with its
external environment.
 The test cases help identify:
o Missing or incorrect functions.
o Interface errors.
o Errors related to data structures or external database access.
o Performance or behavior-related issues.
o Errors during initialization or termination.
Equivalence Partitioning
 Equivalence Partitioning, also known as Equivalence Class Partitioning, is a type of black-
box testing.
 This technique can be applied to all levels of software testing, such as unit, integration,
and system testing.
 The input domain of a program is divided into different equivalence classes, which help
in creating test cases. This reduces the number of test cases needed and thus saves
time.
 Test cases are designed by evaluating equivalence classes for input conditions.
 An equivalence class represents a set of valid or invalid conditions for an input.
Guidelines for Defining Equivalence Classes
 If the input condition specifies a range, one valid and two invalid equivalence classes are
defined:
o Example: Input range: 1 – 10
 Equivalence classes: {1..10}, {x < 1}, {x > 10}
 If the input condition requires a specific value, one valid and two invalid equivalence
classes are defined:
o Example: Input value: 250
 Equivalence classes: {250}, {x < 250}, {x > 250}
 If the input condition specifies a set of values, one valid and one invalid equivalence
class are defined:
o Example: Input set: {-2.5, 7.3, 8.4}
 Equivalence classes: {-2.5, 7.3, 8.4}, {any other value x}
 If the input condition is a Boolean value, one valid and one invalid equivalence class are
defined:
o Example: Input: {true condition}
 Equivalence classes: {true condition}, {false condition}
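A small sketch of the first guideline (input range 1 – 10) in Python; the function accept_value
is hypothetical and stands in for the module under test:

```python
# Equivalence partitioning sketch for an input range of 1..10:
# one valid class {1..10} and two invalid classes {x < 1} and {x > 10}.

def accept_value(x):
    """Hypothetical module under test: accepts only values in 1..10."""
    return 1 <= x <= 10


# One representative test value per equivalence class.
test_values = {
    "valid: 1..10":  (5, True),
    "invalid: x<1":  (0, False),
    "invalid: x>10": (11, False),
}

for label, (value, expected) in test_values.items():
    result = accept_value(value)
    print(f"{label:15} input={value:3} -> {result} (expected {expected})")
```

One test case per class is usually enough, which is how the technique keeps the number of
test cases small.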
Boundary Value Analysis
 Boundary Value Analysis is another common type of black-box testing.
 More errors tend to occur at the boundaries of the input domain rather than the center.
 This test case design method works in combination with equivalence partitioning.
 Test cases are designed for the edge values of an input domain.
Guidelines for Boundary Value Analysis
1. If an input condition specifies a range from values a to b, the test cases should include:
o The boundary values a and b.
o Values just above and just below a and b.
Example: A program accepts values only between 100 and 5000.
o Valid test cases:
 Enter 100 (min value).
 Enter 101 (min+1 value).
 Enter 4999 (max-1 value).
 Enter 5000 (max value).
o Invalid test cases:
 Enter 99 (min-1 value).
 Enter 5001 (max+1 value).
2. If an input condition specifies a number of values (i.e., a set), the test cases should
include:
o The minimum and maximum values.
o Values just above and just below the minimum and maximum.
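The 100 – 5000 example above, expressed as a small Python sketch; validate_amount is a
hypothetical stand-in for the module under test:

```python
# Boundary value analysis sketch for the range 100..5000.

def validate_amount(x):
    """Hypothetical module under test: accepts values between 100 and 5000."""
    return 100 <= x <= 5000


# Boundary test cases: min, min+1, max-1, max (valid); min-1, max+1 (invalid).
boundary_cases = [(100, True), (101, True), (4999, True), (5000, True),
                  (99, False), (5001, False)]

for value, expected in boundary_cases:
    assert validate_amount(value) == expected, f"boundary case {value} failed"
print("All boundary value tests passed.")
```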
Testing for Object-Oriented Software
 Traditional test case design is based on the input-process-output model of software.
 Object-oriented testing focuses on testing the behavior of classes by designing
sequences of methods to exercise their states.
 Because attributes and methods are encapsulated within a class, testing operations outside
the context of their class is generally ineffective.
 Inheritance requires retesting each new context where a class is used.
 If a subclass is used differently from its parent class, the parent class test cases may not
apply. New tests must be created for the subclass.
 Conventional testing can still be applied to object-oriented systems.
 White-box testing can be used on the methods/operations of a class.
 Basis path testing and loop testing ensure that every statement in a method is tested.
 Black-box testing methods are also appropriate.
 Use cases can be helpful when designing black-box tests.
Class Level Testing: Random Testing
 Certain methods in a class may follow a sequence or order that represents a minimum
behavioral life cycle for an object.
 These sequences may be implicit, and testing can help detect dependencies.
 Example methods in classes could be:
o File or Database operations: open, read, write, close.
o Bank Account operations: open, deposit, withdrawal, balance, close.
 Randomly generated method sequences are executed to identify dependencies and
improve the method design.
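A sketch of random method-sequence testing in Python; the BankAccount class below is a
hypothetical minimal implementation used only to show how ordering dependencies can surface:

```python
# Random testing sketch: execute randomly generated method sequences
# on a BankAccount-like class to expose ordering dependencies.
import random


class BankAccount:
    """Hypothetical minimal class used only to illustrate the idea."""
    def __init__(self):
        self.is_open = False
        self.balance = 0

    def open(self):
        self.is_open = True

    def deposit(self, amount=100):
        assert self.is_open, "deposit before open"   # ordering dependency
        self.balance += amount

    def withdraw(self, amount=50):
        assert self.is_open, "withdraw before open"
        self.balance -= amount

    def close(self):
        self.is_open = False


operations = ["open", "deposit", "withdraw", "close"]

for trial in range(5):
    account = BankAccount()
    sequence = [random.choice(operations) for _ in range(4)]
    try:
        for op in sequence:
            getattr(account, op)()
        print(f"{sequence} -> OK")
    except AssertionError as err:
        print(f"{sequence} -> dependency violated: {err}")
```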
Class Level Testing: Partition Testing
 Partition testing helps reduce the number of test cases needed to test a class, similar to
equivalence partitioning in conventional software.
 Methods in the class are grouped into one of three partitioning approaches: state-based,
attribute-based, and category-based.
State-based Partitioning
 Methods are grouped based on their ability to change the state of the class.
 Tests are designed to cover both state-changing and non-state-changing methods.
 Example: In a Bank Account class:
o State operations: deposit, withdraw.
o Non-state operations: checkBalance, checkCreditLimit, generateStatement.
Attribute-based Partitioning
 Methods are grouped based on the attributes they use or modify.
 Methods are divided into three categories:
o Those that read an attribute.
o Those that modify an attribute.
o Those that do not reference the attribute.
 Example: For a Bank Account class, attributes could be balance and creditLimit:
o Partition 1: Operations using creditLimit.
o Partition 2: Operations modifying creditLimit.
o Partition 3: Operations not using or modifying creditLimit.
Category-based Partitioning
 Methods are grouped based on their generic function, such as initialization,
computation, or termination.
 Example: For a Bank Account class:
o Initialization: open.
o Computational: deposit, withdraw.
o Query: checkBalance, generateStatement, checkCreditLimit.
o Termination: close.
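A sketch of state-based partition testing in Python, using the same hypothetical BankAccount
idea: one test group exercises state-changing operations, the other checks that queries leave
the state untouched:

```python
# Partition testing sketch (state-based): tests are grouped by whether the
# operations they exercise change the account's state.
import unittest


class BankAccount:
    """Hypothetical minimal class used only for illustration."""
    def __init__(self, balance=0, credit_limit=500):
        self.balance = balance
        self.credit_limit = credit_limit

    def deposit(self, amount):        # state-changing
        self.balance += amount

    def withdraw(self, amount):       # state-changing
        self.balance -= amount

    def check_balance(self):          # non-state-changing
        return self.balance

    def check_credit_limit(self):     # non-state-changing
        return self.credit_limit


class StateChangingPartition(unittest.TestCase):
    def test_deposit_then_withdraw(self):
        acct = BankAccount()
        acct.deposit(200)
        acct.withdraw(50)
        self.assertEqual(acct.check_balance(), 150)


class NonStateChangingPartition(unittest.TestCase):
    def test_queries_do_not_modify_state(self):
        acct = BankAccount(balance=100)
        acct.check_balance()
        acct.check_credit_limit()
        self.assertEqual(acct.balance, 100)


if __name__ == "__main__":
    unittest.main()
```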

Higher-Order Testing
Validation Testing
 Focuses on software requirements.
 Performed after integration testing.
 Ensures:
o User actions and outputs are visible and recognizable.
o The software meets all functional, performance, and behavioral requirements.
o Documentation is correct.
o Usability and other requirements like compatibility, error recovery, and
maintainability are satisfied.
 After each validation test:
o If the function or performance meets specifications, it is accepted.
o If there is a deviation, a deficiency list is created.

System Testing
 Focuses on system integration.
Types of System Testing:
1. Recovery Testing
o Forces the software to fail in different ways.
o Verifies if recovery mechanisms (e.g., reinitialization, checkpointing, data
recovery) work correctly.
2. Security Testing
o Checks if protection mechanisms prevent unauthorized access.
o Ensures security measures effectively block improper penetration.
3. Stress Testing
o Executes the system under abnormal conditions (e.g., high demand on resources
like memory, CPU, or bandwidth).
4. Performance Testing
o Tests the system's run-time performance in an integrated environment.
o Often combined with stress testing.
o Helps detect performance issues or degradation.

Alpha and Beta Testing


 Focuses on customer usage.
Alpha Testing
 Conducted at the developer's site.
 Software is tested in a natural setting with developers observing.
 Performed in a controlled environment.
Beta Testing
 Conducted at end-user sites.
 Developers are not present.
 Software is used in real-world conditions.
 Users report issues to developers regularly.
 After beta testing, developers fix the problems and prepare for release.

Types of User Testing


1. Alpha Testing
o Users test the software with the development team at the developer's site.
2. Beta Testing
o A version of the software is provided to users to try out.
o Users report any issues to developers.
3. Acceptance Testing
o Customers test the software to decide if it is ready to be accepted and deployed.
o Typically used for custom-built systems.

Debugging Process
 Debugging occurs as a consequence of successful testing, i.e., when a test case uncovers an error.
 It is more of an art than a science.
 Starts with running a test case and finding differences between expected and actual
results.
 The goal is to trace the symptom (error observed) back to its cause and fix it.

Why Debugging is Difficult


1. The symptom and its cause may be in different parts of the software.
2. Symptoms might disappear temporarily when unrelated errors are fixed.
3. Non-errors, like rounding inaccuracies, can appear as symptoms.
4. Errors may come from compilers or human mistakes.
5. Some issues arise from wrong assumptions.
6. Symptoms may be intermittent (especially in embedded systems).
7. Causes may be spread across different tasks running in separate processes.

Consequences of Bugs
Bugs can fall into these categories:
 Function-related bugs.
 System-related bugs.
 Data bugs.
 Coding bugs.
 Design bugs.
 Documentation bugs.
 Standards violations.

Debugging Techniques
1. Brute Force
 Simplest but often least effective.
 Involves checking logs, stack traces, and memory dumps manually.
 Developers add extra output statements (print or logging) to follow what the software is
doing step by step; a small sketch follows below.
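A small Python sketch of the brute-force approach: logging statements are added around a
hypothetical function average_of that contains a seeded defect, so the intermediate values
reveal where it goes wrong:

```python
# Brute-force debugging sketch: instrument the code with output statements
# (here, the logging module) to observe intermediate values step by step.
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s: %(message)s")


def average_of(values):
    """Hypothetical function with a seeded defect (off-by-one in the divisor)."""
    total = 0
    for i, v in enumerate(values):
        total += v
        logging.debug("step %d: value=%s running total=%s", i, v, total)
    result = total / (len(values) - 1)   # defect: should divide by len(values)
    logging.debug("final total=%s divisor=%s result=%s",
                  total, len(values) - 1, result)
    return result


print(average_of([2, 4, 6]))   # logs reveal the wrong divisor (2 instead of 3)
```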
2. Backtracking
 Starts at the point where the error occurs and works backward through the logic to find
the cause.
 Effective for small programs.
 For large programs, it may become too complex because of the many potential paths.
3. Cause Elimination
 The developer creates hypotheses about possible causes of the error.
 Tests are conducted to eliminate incorrect hypotheses.
 Once the likely cause is identified, the error is isolated and fixed.
