Software Testing
ID202L
SYLLABUS
Unit – 1
Introduction: Some popular worldwide software failures. What is software testing, Testing Objectives,
Role of a tester, Skills required by a tester. Terminologies in testing: Error, Fault, Failure, Verification,
Validation, Test Cases, Test Suites, Test Oracles, Testing vs Debugging, Testing, Quality Assurance and
Quality Control, Limitations of testing, Testing Life Cycle (Phases: Requirement analysis, test planning,
test case design, test execution, defect reporting, and closure). Principles of Testing.
Unit – 2
Static vs Dynamic Testing, Black Box Testing Techniques: Boundary Value Analysis (Generalizing
Boundary Value Analysis, Robust Boundary Value Testing, Worst-Case Boundary Value Testing,
Robust Worst-Case Testing), Equivalence Class Testing, Decision Table based Testing, Cause-Effect
Graphing Technique, State Transition Technique.
White Box Testing – Need, Logic Coverage Criteria, Statement Coverage, Branch Coverage, Condition
Coverage, Loop Coverage, Path Coverage, Graph Matrices, Cyclomatic Complexity, Data Flow
Testing, Mutation Testing
Unit – 3
Levels of Testing: Unit Testing, Integration Testing, System Testing and Acceptance Testing
Regression Testing: Progressive Vs Regressive Testing, Objectives of Regression Testing, Regression
testing Techniques.
Experience Based Testing Techniques: Error Guessing, Exploratory Testing, Checklist Based Testing
Unit – 4
Test Management: Organization Structures for Testing Teams, Test Planning, Test case minimization,
Test Case Prioritization, Risk Analysis.
Debugging: Debugging Process, Debugging Techniques, Debuggers
Unit – 5
Automation and Testing Tools: Need for Automation, Testing Tool Classification, Benefits and Risks of
Test Automation, Overview of some commercial testing tools, Introduction to Selenium, Functional
testing using Selenium, Automation Testing using Bugzilla
Books
Course Outcomes
Tagging COs with BLs & KCs
CO No. | Statement of Course Outcome (after completion of the course, the student will be able to) | Bloom's Cognitive Process Level (BL) | Knowledge Category (KC)
CO1 | Understand software testing concepts, principles, and the testing lifecycle. | 2 | C
CO2 | Apply black-box and white-box testing techniques to validate software functionality. | 3 | P
CO3 | Understand levels of testing and regression testing techniques for ensuring software quality. | 2 | C
CO4 | Apply test management strategies, including test planning and risk analysis, to optimize the testing process. | 3 | P
CO5 | Apply automation testing tools like Selenium for functional testing. | 3 | P
Unit-1
What is Software Testing?
“Testing is the process of executing a program with the intent of finding faults.” Testing can show that
faults are present in a program, but it can never demonstrate their absence.
Some Software Failures
Software testing is an expensive and critical activity, but releasing software without testing is far more
expensive and dangerous.
Objectives of Software Testing
• Ensure Software Quality: Testing verifies that the software meets the required standards
and quality benchmarks.
• Error Identification: It aims to uncover bugs or defects in the software before
deployment.
• Customer Satisfaction: Delivering error-free software improves user experience and
trust.
• Cost Reduction: Early detection of issues reduces the cost of fixing bugs in later stages.
• Risk Mitigation: It minimizes the risk of software failures that could lead to financial
losses or harm.
• Compliance with Requirements: Testing ensures that the software performs its intended
functions as per specifications.
• Prevention of Failures: It prevents scenarios like software crashes, data loss, and critical
failures that could harm the organization or users.
Principles of Software Testing
Testing shows the presence of defects: This testing principle states that testing cannot prove the
absence of defects; it can only reveal their presence. Even if your application passes all test cases,
you can never be entirely certain that no hidden defects remain. Testing identifies where defects exist,
but cannot confirm where they do not.
Exhaustive testing is impossible: Exhaustive testing, which involves evaluating every possible
combination of inputs and conditions, is impractical in real-world projects due to the vast number of
variables. Testers must instead prioritize scenarios based on risk, business impact, and customer
value. This strategic approach ensures critical areas are thoroughly tested without the impossible task
of covering every possibility.
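As a rough illustration, a back-of-the-envelope calculation in Python for a hypothetical function with two 32-bit integer inputs shows why trying every combination is hopeless:

```python
# Illustrative calculation: input combinations for a hypothetical function
# that takes two 32-bit integer parameters.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_test_exhaustively(bits_per_input: int, num_inputs: int,
                               tests_per_second: float) -> float:
    """Return the years needed to execute every possible input combination."""
    combinations = (2 ** bits_per_input) ** num_inputs   # 2^64 for two 32-bit inputs
    return combinations / tests_per_second / SECONDS_PER_YEAR

# Even at one billion test executions per second, two 32-bit inputs
# would take roughly 585 years to cover exhaustively.
print(years_to_test_exhaustively(32, 2, 1e9))
```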
Early testing saves time and money: Identifying defects early in the software development lifecycle
is critical because the cost and effort to fix issues grow exponentially as development progresses.
Early testing not only minimizes these risks but also streamlines the development process by
addressing potential problems when they are most manageable and least expensive. This proactive
approach saves time, reduces costs, and ensures a smoother path to delivering high-quality software.
Defect clustering: Defect clustering highlights that defects are often concentrated in specific areas
of the software. These "problem areas" usually account for the majority of issues, so focusing efforts
on them can significantly improve overall quality. This targeted approach ensures critical issues are
addressed efficiently, maximizing the impact of time and resources spent.
Pesticide paradox
The pesticide paradox suggests that repeatedly running the same set of tests will not uncover new or
previously unknown defects. To continue identifying issues effectively, test methodologies must
evolve by incorporating new tests, updating existing test cases, or modifying test steps. This ongoing
refinement ensures that testing remains relevant and capable of discovering previously hidden
problems.
Testing is context-dependent
Test strategies must be tailored to the specific context of the software being tested. The requirements
for different types of software—such as a mobile app, a high-transaction e-commerce website, or a
business-critical enterprise application—vary significantly. As a result, testing methodologies should
be customized to address the unique needs of each type of application, ensuring that testing is both
effective and relevant to the software's intended use and environment.
Absence-of-errors is a fallacy
The absence-of-errors fallacy occurs when developers or stakeholders assume that software is of
high quality solely because it is free of defects. This assumption disregards the possibility that the
software may still fall short of meeting user needs, business requirements, or performance
expectations, even if no bugs are identified. By focusing only on the absence of errors, this fallacy
overlooks other critical factors that contribute to the overall quality and success of the software.
Skills Required by a Tester
1. Technical Skills:
• Knowledge of programming languages (e.g., Python, Java) for automation.
• Familiarity with testing tools (e.g., Selenium, JIRA, QTP).
• Understanding of databases, SQL, and APIs.
2. Analytical Thinking: Ability to understand complex systems, identify edge cases, and assess
risks.
3. Attention to Detail: Focus on identifying minute issues that could affect functionality or
performance.
4. Problem-Solving Skills: Quickly find root causes of defects and propose effective solutions.
5. Communication Skills: Convey test results and issues clearly to technical and non-technical
stakeholders.
6. Domain Knowledge: Understanding the specific industry or domain (e.g., finance, healthcare)
to validate use cases effectively.
7. Adaptability: Stay updated with evolving technologies, tools, and methodologies.
Bug vs. Defect vs. Error vs. Fault vs. Failure
Error: A human action or mistake during software development that leads to incorrect or unexpected
results. Example: A developer writes incorrect logic in the code (e.g., using > instead of < in a
condition).
Bug: A flaw or problem in the code that deviates from expected behavior. Example: A button on a
website that doesn’t perform any action when clicked.
Defect: A mismatch between the software requirements and the actual implementation observed during
testing. Example: The login functionality allows incorrect credentials to access an account, contrary to
requirements.
Fault: A condition in the software caused by one or more errors that may cause the system to behave
incorrectly. Example: A null pointer dereference in the code causing unexpected system behavior.
Failure: The inability of a system to perform its intended function during operation. Example: An e-
commerce website crashing during a high-traffic sale event.
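As a small illustrative sketch, consider a hypothetical grading function: the developer's mistake (error) leaves a wrong comparison operator in the code (fault), and executing the program with a boundary input produces a wrong result (failure).

```python
# Hypothetical illustration of error -> fault -> failure.
# The developer's mistake (error) was typing '>' instead of '>=',
# which leaves an incorrect condition in the code (fault).

PASS_MARK = 40

def grade(marks: int) -> str:
    # Fault: the condition should be 'marks >= PASS_MARK'
    if marks > PASS_MARK:
        return "PASS"
    return "FAIL"

# Failure: when the faulty code runs, a student scoring exactly 40
# is wrongly reported as failing.
print(grade(40))   # prints "FAIL", expected "PASS"
```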
Fault vs. Failure (example)
Term | Description | Example
Fault | Incorrect condition in the system caused by an error. | The wrong formula in the code is the root cause.
Failure | End user experiences incorrect system behavior. | Student sees incorrect total marks in the result portal.
Verification and Validation
Verification examines whether the product is being built right, according to the requirements, while
validation examines whether the right product is being built to meet user needs.
Verification is static testing; it asks, "Are we building the product right?" Validation is dynamic
testing; it asks, "Are we building the right product?"
The V-model, also known as the Verification and Validation model, is an SDLC (Software
Development Life Cycle) model that emphasizes a sequential and structured approach to software
development and testing. It follows a V-shaped structure in which each development phase on the left
arm is directly linked to a corresponding testing phase on the right arm. The model ensures that
verification, which checks that the software is being built correctly (for example, through reviews of
requirements, design, and code), and validation, which checks that the software meets user needs and
requirements (through unit testing, integration testing, system testing, and user acceptance testing
(UAT)), are both addressed throughout the development process before deployment.
Aspect | Verification | Validation
Definition | The set of activities that ensure the software correctly implements a specific function. | The set of activities that ensure the software that has been built is traceable to customer requirements.
Focus | Checking documents, designs, code, and programs. | Testing and validating the actual product.
Responsibility | The quality assurance team performs verification. | Validation is executed on the software with the help of the testing team.
STLC (Software Testing Life Cycle): Test Environment Setup (Preparing the Environment)
Activities:
1. Understand the required architecture/environment
2. Prepare the hardware and software requirements list for the test environment
3. Set up hardware, software, databases, and test servers
4. Configure the test environment and prepare test data as per requirements
Deliverable:
1. Environment ready with the test data set up
2. Test environment readiness report
STLC: Test Execution (Performing Testing)
Activities:
1. Execute tests as per the plan
2. Document test results and log defects for failed cases
3. Map defects to test cases in the RTM (Requirements Traceability Matrix)
4. Retest the defect fixes
5. Track the defects to closure
Deliverable:
1. Completed RTM with execution status
2. Test cases updated with results
3. Defect report
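As a minimal sketch of the RTM idea, assuming a simple in-memory structure (the requirement, test case, and defect IDs below are hypothetical), an RTM can be viewed as a mapping from requirements to test cases, each carrying its execution status and any logged defects:

```python
# Minimal, hypothetical sketch of a Requirements Traceability Matrix (RTM):
# each requirement maps to its test cases; each test case records its latest
# execution status and the defects logged against it.

rtm = {
    "REQ-001 Login with valid credentials": {
        "TC_001": {"status": "Pass", "defects": []},
        "TC_002": {"status": "Fail", "defects": ["DEF-101"]},
    },
    "REQ-002 Password reset": {
        "TC_010": {"status": "Not Run", "defects": []},
    },
}

# Execution summary used for the "RTM with execution status" deliverable.
for requirement, test_cases in rtm.items():
    executed = [tc for tc, r in test_cases.items() if r["status"] != "Not Run"]
    failed = [tc for tc, r in test_cases.items() if r["status"] == "Fail"]
    print(f"{requirement}: {len(executed)}/{len(test_cases)} executed, "
          f"{len(failed)} failed")
```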
STLC: Test Closure (Finalizing & Reporting)
Activities:
1. Evaluate test completion criteria
2. Prepare test metrics based on the agreed parameters
3. Document test results and lessons learned from the project
4. Prepare the test closure report
5. Conduct a test closure meeting with stakeholders
Deliverable:
1. Test closure report
2. Test metrics
3. Test summary report
What is a Test Case?
A test case in software testing is a set of specific actions, conditions, and expected results developed to
verify whether a particular functionality of a software application works as intended. It serves as a
blueprint or guideline for testing a specific aspect of the software.
Component | Description
Test Case ID | TC_001
Test Case Title | Verify valid login functionality.
Preconditions | User must have a valid username and password. The login page must be accessible.
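A minimal sketch of how such a test case could be automated in Python with pytest, assuming a hypothetical login(username, password) function (the stub below stands in for the real application code):

```python
# Hypothetical automated version of test case TC_001 (valid login).
# login() is a stand-in stub so the example is self-contained.

import pytest

def login(username: str, password: str) -> bool:
    # Placeholder for the real application's login logic.
    return username == "valid_user" and password == "valid_pass"

def test_tc_001_valid_login():
    # Precondition: a valid username and password exist.
    username, password = "valid_user", "valid_pass"
    # Step: attempt to log in. Expected result: login succeeds.
    assert login(username, password) is True

def test_tc_002_invalid_password_rejected():
    # Negative case: a wrong password must not be accepted.
    assert login("valid_user", "wrong_pass") is False

if __name__ == "__main__":
    pytest.main([__file__, "-q"])
```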
A test suite is a collection of related test cases that are executed together. Common types of test suites include:
1. Functional Test Suite: Groups test cases verifying specific functional requirements.
2. Regression Test Suite: Focuses on verifying that recent changes haven't broken
existing functionality.
3. Smoke Test Suite: Contains critical test cases to ensure the basic functionality works.
4. Performance Test Suite: Includes test cases designed to validate system performance
under load.
5. Integration Test Suite: Groups test cases to validate interactions between different
modules.
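A minimal sketch, using Python's built-in unittest module, of how individual test cases can be grouped into a functional suite and a smaller smoke suite (the test classes and checks below are placeholders):

```python
# Hypothetical sketch: grouping test cases into suites with unittest.
import unittest

class LoginTests(unittest.TestCase):
    def test_valid_login(self):
        self.assertTrue(True)          # placeholder for a real login check

    def test_invalid_password_rejected(self):
        self.assertTrue(True)          # placeholder for a real negative check

class CheckoutTests(unittest.TestCase):
    def test_checkout_with_valid_cart(self):
        self.assertTrue(True)          # placeholder for a real checkout check

def functional_suite() -> unittest.TestSuite:
    """All functional test cases."""
    suite = unittest.TestSuite()
    suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(LoginTests))
    suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(CheckoutTests))
    return suite

def smoke_suite() -> unittest.TestSuite:
    """Only the critical happy-path cases, for a quick sanity check."""
    suite = unittest.TestSuite()
    suite.addTest(LoginTests("test_valid_login"))
    suite.addTest(CheckoutTests("test_checkout_with_valid_cart"))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(smoke_suite())
```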
Example: testing an addition function, where the requirements specification (the correct sum) acts as the test oracle.
• Input: 5 + 7
• Expected Output: 12 (as defined in the test oracle).
• Actual Output: 13
• Test Oracle's Role: Detects the discrepancy and identifies a defect in the addition
functionality.
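A minimal sketch of this oracle check in Python, assuming a hypothetical add(a, b) function under test that contains an off-by-one defect:

```python
# Hypothetical test oracle check for an addition function.
# The oracle here is simply arithmetic: the expected value of a + b.

def add(a: int, b: int) -> int:
    # Faulty implementation under test (off by one), for illustration only.
    return a + b + 1

def oracle_expected(a: int, b: int) -> int:
    # Test oracle: the mathematically correct sum.
    return a + b

a, b = 5, 7
actual = add(a, b)                 # 13
expected = oracle_expected(a, b)   # 12
if actual != expected:
    print(f"Defect detected: expected {expected}, got {actual}")
```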
Aspect | Test Oracle | Test Case
Definition | A mechanism or source used to determine whether test results are correct. | A set of specific actions, conditions, and expected outcomes to test a functionality.
Purpose | Validates the correctness of test results. | Specifies what to test and how to test it.
Focus | Evaluating outcomes against expected behavior. | Executing steps to verify specific functionalities.
Scope | Broader; applies across multiple test cases. | Narrow; specific to a single scenario or feature.
Dependency | Independent, but used to validate test cases. | Depends on oracles to verify results.
Examples | Requirements document, mathematical rules, historical data. | Steps to log in, expected results for login functionality.
Automation | Can be manual or automated (e.g., scripts validating outputs). | Can be manual or automated (e.g., Selenium for execution).
Input | Expected results or standards for comparison. | Specific inputs, actions, and conditions for testing.
Output | Validates whether the actual output matches the expected result. | Generates actual outcomes to compare against expectations.
Use Case | Ensures the correctness of results across various scenarios. | Verifies a specific feature or behavior of the software.
What is Debugging?
Debugging is the process of identifying, analyzing, and fixing defects or bugs in software. It is
typically carried out after a bug is detected during testing or use. The goal is to identify the root
cause of the issue and resolve it so that the software behaves as expected.
Steps in Debugging
• Bug Reproduction: Reproduce the defect to understand its behavior.
• Bug Localization: Identify the part of the code or logic causing the defect.
• Bug Analysis: Analyze why the bug occurred.
• Bug Fixing: Modify the code to resolve the issue.
• Verification: Re-test the software to ensure the bug is fixed and no new issues were
introduced.
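A minimal sketch of the localization and analysis steps using Python's built-in debugger, pdb, assuming a hypothetical average() function that crashes on an empty list:

```python
# Hypothetical sketch: using Python's built-in debugger (pdb) to localize
# a defect after it has been reproduced.

import pdb

def average(values):
    total = sum(values)
    # Suspected fault: division by len(values) fails for an empty list.
    return total / len(values)

def report(values):
    # Bug localization: pause here to inspect 'values' before the crash.
    # At the (Pdb) prompt, type 'p values' to inspect and 'c' to continue.
    pdb.set_trace()
    return f"Average marks: {average(values)}"

if __name__ == "__main__":
    # Bug reproduction: calling report([]) raises ZeroDivisionError.
    print(report([]))
```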
Limitations of Testing
Cannot Guarantee Defect-Free Software: Testing can reveal defects, but it cannot prove that the
system is completely error-free. Example: Even if all test cases pass, hidden defects may still exist
in untested areas.
Limited by Time and Budget: Due to project constraints, testing must be prioritized, and not all
test scenarios can be executed. Example: In Agile development, frequent releases may force teams
to focus only on critical functionalities.
Human Errors in Test Design and Execution: Testers may write incomplete or incorrect test
cases, leading to missed defects. Example: A tester might overlook an edge case where an invalid
date crashes the system.
Automation Has Limitations: Automated testing can only validate what it is programmed to
check, and it cannot adapt to new scenarios like human testers. Example: An automated UI test
may not detect minor but critical UI alignment issues.
Cannot Cover Real-World Scenarios Fully: Some defects only appear under real-world
conditions (e.g., network failures, unexpected user behavior). Example: A mobile app may work
perfectly in a testing lab but fail in real-world environments with slow internet connections.
End of Unit-1