Software Testing

ID202L
SYLLABUS
Unit – 1
Introduction: Some popular worldwide software failures. What is software testing, Testing Objectives,
Role of a tester, Skills required by a tester. Terminologies in testing: Error, Fault, Failure, Verification,
Validation, Test Cases, Test Suites, Test Oracles, Testing vs Debugging, Testing vs Quality Assurance and
Quality Control, Limitations of testing, Testing Life Cycle (phases: requirement analysis, test planning,
test case design, test execution, defect reporting, and closure). Principles of Testing.
Unit – 2
Static vs Dynamic Testing, Black Box Testing Techniques: Boundary Value Analysis (Generalizing
Boundary Value Analysis, Robust Boundary Value Testing, Worst-Case Boundary Value Testing,
Robust Worst-Case Testing), Equivalence Class Testing, Decision Table based Testing, Cause-Effect
Graphing Technique, State Transition Technique.
White Box Testing – Need, Logic Coverage Criteria, Statement Coverage, Branch Coverage, Condition
Coverage, Loop Coverage, Path Coverage, Graph Matrices, Cyclomatic Complexity, Data Flow
Testing, Mutation Testing
Unit – 3

Levels of Testing: Unit Testing, Integration Testing, System Testing and Acceptance Testing
Regression Testing: Progressive Vs Regressive Testing, Objectives of Regression Testing, Regression
testing Techniques.
Experience Based Testing Techniques: Error Guessing, Exploratory Testing, Checklist Based Testing

Unit – 4
Test Management: Organization Structures for Testing Teams, Test Planning, Test case minimization,
Test Case Prioritization, Risk Analysis.
Debugging: Debugging Process, Debugging Techniques, Debuggers

Unit – 5
Automation and Testing Tools: Need for automation, Testing Tool Classification, Benefits and Risks of
Test automation, Overview of some commercial testing tools, Introduction to Selenium, Functional
testing using Selenium, Automation Testing using Bugzilla
Books
Course Outcomes
Tagging COs with BLs & KCs
CO No. | Statement of Course Outcome (after completion of the course, the student will be able to ...) | Bloom's Cognitive Process Level (BL) | Knowledge Category (KC)
CO1 | Understand software testing concepts, principles, and the testing lifecycle. | 2 | C
CO2 | Apply black-box and white-box testing techniques to validate software functionality. | 3 | P
CO3 | Understand levels of testing and regression testing techniques for ensuring software quality. | 2 | C
CO4 | Apply test management strategies, including test planning and risk analysis, to optimize the testing process. | 3 | P
CO5 | Apply automation testing tools like Selenium for functional testing. | 3 | P
Unit-1
What is Software Testing?

Software testing is the process of evaluating the functionality of a software application to determine
whether the developed software meets the specified requirements and to identify defects, so that a
quality, defect-free product can be delivered.

"Testing is the process of executing a program with the intent of finding faults." Testing never
shows the absence of faults; it can only show that faults are present in the program.
Some Software Failures

1. Ariane 5 Rocket Explosion (1996)


• Incident: Unmanned rocket exploded 40 seconds after launch.
• Cause: Software design error in the inertial reference system.
• Details: Incorrect conversion of a 64-bit number to a 16-bit signed integer.
• Lesson: Importance of testing legacy systems in new environments.
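The root cause was a numeric conversion that silently assumed the value would always fit. A minimal Python sketch of the same class of error (the real code was in Ada; the function and values below are illustrative only):

```python
import struct

def to_int16(value: float) -> int:
    # Pack the value as a 16-bit signed integer; struct raises an error if it
    # is outside -32768..32767, mimicking an unhandled conversion overflow.
    return struct.unpack('<h', struct.pack('<h', int(value)))[0]

print(to_int16(12_345.0))      # fits in 16 bits -> 12345

try:
    print(to_int16(40_000.0))  # out of range, like Ariane 5's larger velocity values
except struct.error as exc:
    print("overflow:", exc)    # the real system had no such handler and shut down
```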
2. Y2K Problem (Year 2000)
• Incident: Global panic over software systems misinterpreting the year "00."
• Cause: Two-digit year format used in older programs.
• Details: Systems couldn’t differentiate between 1900 and 2000.
• Lesson: Need for foresight in software design to handle long-term scenarios.
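A minimal sketch of the underlying defect, assuming a legacy-style routine that stores only two-digit years (the function and dates are purely illustrative):

```python
from datetime import date

def age_from_two_digit_year(birth_yy: int, today: date) -> int:
    # Legacy assumption: every two-digit year belongs to the 1900s.
    birth_year = 1900 + birth_yy
    return today.year - birth_year

print(age_from_two_digit_year(75, date(1999, 6, 1)))  # 24 -- correct for 1975
print(age_from_two_digit_year(0, date(2001, 6, 1)))   # 101 instead of 1 for a child born in 2000
```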
3. Patriot Missile Failure (1991)
• Incident: Failed to intercept a Scud missile, causing 28 fatalities.
• Cause: Timing error in the system’s clock after prolonged operation.
• Details: Software’s tracking system became inaccurate after 14+ hours.
• Lesson: Rigorous testing under real-world conditions.
4. London Ambulance System Collapse (1992)
• Incident: Ambulance dispatch system failed, causing delayed responses.
• Cause: System couldn’t handle high call volume.
• Details: Multiple ambulances dispatched to the same location, others missed.
• Lesson: Need for stress testing and contingency plans.

Some more Software Failures

5. USS Yorktown Incident (1998)
• Incident: Ship’s propulsion system failed due to a division-by-zero error.
• Cause: Input validation not implemented.
• Details: A zero input caused system-wide failure.
• Lesson: Validate inputs to prevent critical failures.
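A small sketch of the missing input validation, with a hypothetical fuel-rate calculation standing in for the ship's propulsion software:

```python
def fuel_rate(distance_nm: float, hours: float) -> float:
    # Without this check, a zero entered by an operator propagates into a
    # division by zero -- the kind of error that cascaded on the Yorktown.
    if hours <= 0:
        raise ValueError("elapsed hours must be positive")
    return distance_nm / hours

print(fuel_rate(120.0, 6.0))   # 20.0
try:
    fuel_rate(120.0, 0.0)      # rejected instead of crashing the whole system
except ValueError as exc:
    print("rejected input:", exc)
```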
6. Accounting Software Failures
• Incident: Financial miscalculations or system crashes in businesses.
• Cause: Errors in software logic or lack of testing for edge cases.
• Details: Incorrect data led to customer dissatisfaction and financial losses.
• Lesson: Focus on quality assurance for financial software.
7. Windows XP Patch Issues (2001)
• Incident: Microsoft released patches for bugs on the launch day.
• Cause: Security vulnerabilities and compatibility issues in the initial release.
• Details: Some patches failed, adding to user frustration.
• Lesson: Thorough pre-release testing is essential.
Why Should We Do Software Testing?

Software testing is an expensive and critical activity, but releasing software without testing is far
more expensive and dangerous.
• Ensure Software Quality: Testing verifies that the software meets the required standards
and quality benchmarks.
• Error Identification: It aims to uncover bugs or defects in the software before
deployment.
• Customer Satisfaction: Delivering error-free software improves user experience and
trust.
• Cost Reduction: Early detection of issues reduces the cost of fixing bugs in later stages.
• Risk Mitigation: It minimizes the risk of software failures that could lead to financial
losses or harm.
• Compliance with Requirements: Testing ensures that the software performs its intended
functions as per specifications.
• Prevention of Failures: It prevents scenarios like software crashes, data loss, and critical
failures that could harm the organization or users.
Principles of Software Testing
Testing shows the presence of defects: This testing principle states that testing cannot prove the
absence of defects; it can only reveal their presence. Even if your application passes all test cases,
you can never be entirely certain that no hidden defects remain. Testing identifies where defects exist,
but cannot confirm where they do not.
Exhaustive testing is impossible: Exhaustive testing, which involves evaluating every possible
combination of inputs and conditions, is impractical in real-world projects due to the vast number of
variables. Testers must instead prioritize scenarios based on risk, business impact, and customer
value. This strategic approach ensures critical areas are thoroughly tested without the impossible task
of covering every possibility.
Early testing saves time and money: Identifying defects early in the software development lifecycle
is critical because the cost and effort to fix issues grow exponentially as development progresses.
Early testing not only minimizes these risks but also streamlines the development process by
addressing potential problems when they are most manageable and least expensive. This proactive
approach saves time, reduces costs, and ensures a smoother path to delivering high-quality software.
Defect clustering: Defect clustering highlights that defects are often concentrated in specific areas
of the software. These "problem areas" usually account for the majority of issues, so focusing efforts
on them can significantly improve overall quality. This targeted approach ensures critical issues are
addressed efficiently, maximizing the impact of time and resources spent.
Pesticide paradox
The pesticide paradox suggests that repeatedly running the same set of tests will not uncover new or
previously unknown defects. To continue identifying issues effectively, test methodologies must
evolve by incorporating new tests, updating existing test cases, or modifying test steps. This ongoing
refinement ensures that testing remains relevant and capable of discovering previously hidden
problems.
Testing is context-dependent
Test strategies must be tailored to the specific context of the software being tested. The requirements
for different types of software—such as a mobile app, a high-transaction e-commerce website, or a
business-critical enterprise application—vary significantly. As a result, testing methodologies should
be customized to address the unique needs of each type of application, ensuring that testing is both
effective and relevant to the software's intended use and environment.
Absence-of-errors is a fallacy
The absence-of-errors fallacy occurs when developers or stakeholders assume that software is of
high quality solely because it is free of defects. This assumption disregards the possibility that the
software may still fall short of meeting user needs, business requirements, or performance
expectations, even if no bugs are identified. By focusing only on the absence of errors, this fallacy
overlooks other critical factors that contribute to the overall quality and success of the software.
Role of a Tester

• Error Detection: Identify and report bugs or defects in the software.


• Requirement Validation: Verify that the software meets the specified requirements and
behaves as expected.
• Test Case Design: Create and execute test cases to cover various scenarios, including edge
cases.
• Regression Testing: Ensure new changes do not negatively affect existing functionality.
• Risk Assessment: Identify potential risks and provide insights on areas of improvement.
• Collaboration: Work closely with developers, project managers, and other stakeholders to
improve product quality.
• Documentation: Maintain detailed records of test cases, results, and issues for reporting and
future reference.
Skills Required by a Tester

1. Technical Skills:
• Knowledge of programming languages (e.g., Python, Java) for automation.
• Familiarity with testing tools (e.g., Selenium, JIRA, QTP).
• Understanding of databases, SQL, and APIs.
2. Analytical Thinking: Ability to understand complex systems, identify edge cases, and assess
risks.
3. Attention to Detail: Focus on identifying minute issues that could affect functionality or
performance.
4. Problem-Solving Skills: Quickly find root causes of defects and propose effective solutions.
5. Communication Skills: Convey test results and issues clearly to technical and non-technical
stakeholders.
6. Domain Knowledge: Understanding the specific industry or domain (e.g., finance, healthcare)
to validate use cases effectively.
7. Adaptability: Stay updated with evolving technologies, tools, and methodologies.
Bug Vs. Defect Vs. Error Vs. Fault Vs. Failure

Error: A human action or mistake during software development that leads to incorrect or unexpected
results. Example: A developer writes incorrect logic in the code (e.g., using > instead of < in a
condition).
Bug: A flaw or problem in the code that deviates from expected behavior. Example: A button on a
website that doesn’t perform any action when clicked.
Defect: A mismatch between the software requirements and the actual implementation observed during
testing. Example: The login functionality allows incorrect credentials to access an account, contrary to
requirements.
Fault: A condition in the software caused by one or more errors that may cause the system to behave
incorrectly. Example: A null pointer dereference in the code causing unexpected system behavior.
Failure: The inability of a system to perform its intended function during operation. Example: An e-
commerce website crashing during a high-traffic sale event.
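A tiny illustrative sketch (not from any real system) showing how a single human error becomes a fault in the code and then a visible failure at run time:

```python
def is_passing(marks: int, pass_mark: int = 40) -> bool:
    # Error: the developer meant ">=" but typed "<" -- this line is the fault.
    return marks < pass_mark

# Failure: the faulty line produces wrong behaviour when the program runs.
print(is_passing(75))   # False, although 75 is clearly a passing score
print(is_passing(20))   # True, although 20 should fail
```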
Bug Vs. Defect Vs. Error Vs. Fault Vs. Failure

Term | Explanation | Example
Error | Human mistake during development. | Developer writes the wrong formula for total marks.
Bug | Problem found during testing. | Tester notices incorrect total marks during testing.
Defect | Deviation from software requirements. | System doesn't calculate the sum of marks correctly as per requirements.
Fault | Incorrect condition in the system caused by an error. | The wrong formula in the code is the root cause.
Failure | End-user experiences incorrect system behavior. | Student sees incorrect total marks in the result portal.
Verification and Validation

Verification helps in examining whether the product is built right according to requirements, while
validation helps in examining whether the right product is built to meet user needs.

Verification is static testing; it asks, "Are we building the product right?" Validation is dynamic
testing; it asks, "Are we building the right product?"

The V-model, also known as the Verification and Validation model, is an SDLC (Software
Development Life Cycle) model that emphasizes a sequential, structured approach to software
development and testing. It follows a V-shaped structure in which each development phase on the
left arm is directly linked to a corresponding testing phase on the right arm. Verification, which
involves checking that the software is built correctly, is addressed through reviews of requirements,
design, and code, while validation, which involves ensuring the software meets user needs and
requirements, is addressed through unit testing, integration testing, system testing, and finally user
acceptance testing (UAT) before the system is deployed.
Aspect | Verification | Validation
Definition | The set of activities that ensure the software correctly implements the specified function. | The set of activities that ensure the software that has been built is traceable to customer requirements.
Focus | Checking documents, designs, code, and programs. | Testing and validating the actual product.
Type of Testing | Verification is static testing. | Validation is dynamic testing.
Execution | Does not include execution of the code. | Includes execution of the code.
Methods Used | Reviews, walkthroughs, inspections, and desk-checking. | Black-box testing, white-box testing, and non-functional testing.
Purpose | Checks whether the software conforms to specifications. | Checks whether the software meets the requirements and expectations of the customer.
Bug | Can find bugs early in development. | Can only find the bugs that the verification process could not.
Goal | The application and software architecture and specification. | The actual product.
Responsibility | The quality assurance team performs verification. | Validation is executed on the software with the help of the testing team.
Timing | Comes before validation. | Comes after verification.
Human or Computer | Consists of checking documents/files and is performed by humans. | Consists of executing the program and is performed by a computer.
Error Focus | Prevention of errors. | Detection of errors.
STLC: Software Testing Life Cycle

The Software Testing Life Cycle (STLC) is a structured process that defines the phases involved in
testing a software product to ensure quality and defect-free delivery. It consists of a series of
activities carried out methodically to help certify the software product.
Each stage has definite Entry and Exit criteria, along with associated Activities and Deliverables.
1. Entry Criteria: the prerequisite items that must be completed before testing can begin.
2. Exit Criteria: the items that must be completed before testing can be concluded.
STLC: Requirements/Design Review (Understanding What to Test)
Activities:
1. Analyze requirements to understand testable aspects.
2. Identify missing or ambiguous requirements.
3. Identify the types of tests to be performed.
4. Prepare the Requirement Traceability Matrix (RTM).
5. Identify details of the test environment where testing is supposed to be carried out.
6. Feasibility analysis (if any).
Deliverable:
1. RTM
2. Feasibility Report (if applicable)
RTM: A Requirement Traceability Matrix (RTM) is a document that maps and traces user
requirements with test cases to ensure complete test coverage. It helps verify that all requirements are
tested and no functionality is missed.
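A minimal sketch of an RTM as a simple mapping, using hypothetical requirement and test-case IDs, to show how coverage gaps become visible:

```python
# Hypothetical requirement and test-case IDs, used only for illustration.
rtm = {
    "REQ-01 Login with valid credentials":  ["TC_001"],
    "REQ-02 Reject invalid credentials":    ["TC_002", "TC_003"],
    "REQ-03 Lock account after 5 failures": [],          # gap: not yet covered
}

for requirement, test_cases in rtm.items():
    status = ", ".join(test_cases) if test_cases else "NOT COVERED"
    print(f"{requirement:45s} -> {status}")
```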
STLC: Test Planning (Defining How to Test)
Activities:
1. Preparation of test plan/ strategy document for various types of testing
2. Test tool selection
3. Test effort estimation
4. Resource planning & determining roles and responsibilities
5. Training requirement
Deliverable:
1. Test plan/ strategy document
2. Effort estimation document
STLC: Test Designing (Creating Test Cases)
Activities:
1. Create test cases, automated scripts
2. Review and baseline test cases and scripts
3. Create test data (if test environment is available)
4. Prioritize test cases (critical, major, minor).
Deliverable:
1. Test cases/ scripts
2. Test data
STLC: Test Environment Setup (Preparing Where to Test)

Activities:
1. Understand required architecture/ environment
2. Prepare h/w and s/w requirements list for the test environment
3. Setup the test environment and test data
4. Set up hardware, software, databases, and test servers.
5. Configure test environments as per requirements.

Deliverable:
1. Environment ready with test data setup
2. Test environment readiness report.
STLC: Test Execution (Performing Testing)

Activities:
1. Execute tests as per the plan
2. Document test results, log defects for failed cases
3. Map defects to test cases in RTM
4. Retest the defect fixes
5. Track the defects to closure

Deliverable:
1. Complete RTM with execution status
2. Test cases updated with results
3. Defects report
STLC: Test Closure (Finalizing & Reporting)

Activities:
1. Evaluate test completion criteria.
2. Prepare test metrics based on parameters
3. Document learning out of the projects
4. Prepare test closure report
5. Document test results and lessons learned.
6. Conduct a test closure meeting with stakeholders.
Deliverable:
1. Test closure report
2. Test metrics
3. Test summary report, closure report.
What is Test Case?
A test case in software testing is a set of specific actions, conditions, and expected results developed to
verify whether a particular functionality of a software application works as intended. It serves as a
blueprint or guideline for testing a specific aspect of the software.

Key Components of a Test Case


1. Test Case ID: A unique identifier for the test case.
2. Test Case Title/Description: A brief explanation of what the test case is verifying.
3. Preconditions: Any conditions or setup that must be met before executing the test case.
4. Test Steps: A sequence of actions to perform during the test.
5. Test Data: Specific inputs or variables required to execute the test.
6. Expected Result: The desired or correct outcome after performing the steps.
7. Actual Result: The observed outcome after the test is executed.
8. Pass/Fail Status: Indicates whether the test case met the expected result.
9. Priority: Specifies the importance of the test case (e.g., High, Medium, Low).
10. Remarks/Comments: Any additional notes, such as defects found or clarifications.
Example of a Test Case: Test Case for Login Functionality

Component | Description
Test Case ID | TC_001
Test Case Title | Verify valid login functionality.
Preconditions | User must have a valid username and password. The login page must be accessible.
Test Steps | 1. Open the login page. 2. Enter a valid username and password. 3. Click the "Login" button.
Test Data | Username: testuser; Password: password123
Expected Result | The user is redirected to the dashboard.
Actual Result | The user is redirected to the dashboard.
Pass/Fail Status | Pass
Remarks/Comments | N/A
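The same test case could be automated. A minimal Selenium (Python) sketch of TC_001, assuming a hypothetical login page whose URL and element IDs (username, password, login) are placeholders rather than a real application:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                # requires a local Chrome/driver setup
driver.get("https://example.com/login")    # placeholder URL for the login page

# Locators below are assumptions about the page under test.
driver.find_element(By.ID, "username").send_keys("testuser")
driver.find_element(By.ID, "password").send_keys("password123")
driver.find_element(By.ID, "login").click()

# Expected result: the user lands on the dashboard.
assert "dashboard" in driver.current_url.lower()
driver.quit()
```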
Example Test Case

Test Case ID | Test Description | Input | Expected Output | Actual Output | Pass/Fail
TC01 | Valid login credentials | Username: neelam11; Password: Password123 | Login successful; redirects to homepage | |
TC02 | Invalid username | Username: neelam@11; Password: password123 | Error message: "Invalid credentials" | |
TC03 | Invalid password | Username: neelam11; Password: Password12 | Error message: "Invalid credentials" | |
TC04 | Empty username and password | Username: (empty); Password: (empty) | Error message: "Fields cannot be empty" | |
TC05 | Password field case sensitivity | Username: user1; Password: password123 | Error message: "Invalid credentials" | |
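A data-driven sketch of the table above using pytest, where validate_login is a stand-in for the real application logic (its behaviour here is assumed purely so the cases are runnable):

```python
import pytest

def validate_login(username: str, password: str) -> str:
    # Hypothetical implementation under test, mirroring the expected outputs above.
    if not username or not password:
        return "Fields cannot be empty"
    if username == "neelam11" and password == "Password123":
        return "Login successful"
    return "Invalid credentials"

@pytest.mark.parametrize("tc_id, username, password, expected", [
    ("TC01", "neelam11",  "Password123", "Login successful"),
    ("TC02", "neelam@11", "password123", "Invalid credentials"),
    ("TC03", "neelam11",  "Password12",  "Invalid credentials"),
    ("TC04", "",          "",            "Fields cannot be empty"),
    ("TC05", "user1",     "password123", "Invalid credentials"),
])
def test_login(tc_id, username, password, expected):
    assert validate_login(username, password) == expected
```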
What is Test Suite?
A test suite in software testing is a collection of test cases grouped together to test a specific
module, functionality, or application systematically. Test suites help organize and manage
multiple test cases efficiently, ensuring comprehensive testing and tracking of results.

Components of a Test Suite


Name/ID: Unique identifier for the test suite.
Objective: The purpose or functionality the suite aims to test.
Test Cases: A list of related test cases.
Execution Order: Defines the sequence in which test cases are executed (if applicable).
Test Environment: Specific details about the environment or setup needed to execute the
suite.
Preconditions: Requirements that must be met before running the suite.
Postconditions: Expected state after the suite execution.
Example of a Test Suite

Test Suite: User Authentication


• Objective: To validate the user authentication module of an e-commerce application.
• Test Cases:
  • TC_001: Verify successful login with valid credentials.
  • TC_002: Verify login failure with invalid credentials.
  • TC_003: Verify "Forgot Password" functionality.
  • TC_004: Verify the system locks the user after multiple failed login attempts.
• Preconditions:
  • The application server must be running.
  • A test user account with valid credentials should exist.
• Postconditions:
  • Test data is reset to the initial state.
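A minimal sketch of how such a suite might be grouped with Python's unittest module; the test bodies are placeholders standing in for the real TC_001 and TC_002 checks:

```python
import unittest

class TestUserAuthentication(unittest.TestCase):
    def test_valid_login(self):
        self.assertTrue(True)    # placeholder for the real TC_001 check

    def test_invalid_login(self):
        self.assertTrue(True)    # placeholder for the real TC_002 check

def user_authentication_suite() -> unittest.TestSuite:
    # Group related test cases into one suite so they run (and report) together.
    suite = unittest.TestSuite()
    suite.addTest(TestUserAuthentication("test_valid_login"))
    suite.addTest(TestUserAuthentication("test_invalid_login"))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(user_authentication_suite())
```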
Types of Test Suites

1. Functional Test Suite: Groups test cases verifying specific functional requirements.
2. Regression Test Suite: Focuses on verifying that recent changes haven't broken
existing functionality.
3. Smoke Test Suite: Contains critical test cases to ensure the basic functionality works.
4. Performance Test Suite: Includes test cases designed to validate system performance
under load.
5. Integration Test Suite: Groups test cases to validate interactions between different
modules.

Benefits of Using Test Suites


• Enhances test management and organization.
• Reduces redundancy by grouping similar test cases.
• Makes it easier to execute related test cases together.
• Provides a clear structure for reporting and analysis.
What is Test Oracle?
A test oracle in software testing is a mechanism or source that determines whether the outcomes of
a test are correct. It acts as a reference or standard against which the actual behavior of the software
is compared to identify deviations or defects.

Key Characteristics of a Test Oracle


• Comparison Basis: Provides the expected results to compare with the actual results of the
software.
• Reliable and Accurate: The oracle must be trustworthy to ensure the correctness of the
evaluation.
• Automation or Manual: Test oracles can be implemented programmatically or used
manually by testers.

Importance of a Test Oracle


• Error Detection: Helps identify defects by providing a benchmark for expected behavior.
• Validation: Ensures the software functions as intended under various conditions.
• Efficiency: Speeds up the testing process when automated oracles are used.
Example of a Test Oracle

Scenario: Testing a Calculator App

• Input: 5 + 7
• Expected Output: 12 (as defined in the test oracle).
• Actual Output: 13
• Test Oracle's Role: Detects the discrepancy and identifies a defect in the addition
functionality.
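A minimal sketch of an oracle in code, with a deliberately faulty calculator_add standing in for the application under test:

```python
def calculator_add(a: int, b: int) -> int:
    return a + b + 1          # deliberately faulty implementation under test

def oracle_add(a: int, b: int) -> int:
    # The oracle supplies the expected result; here it is plain arithmetic, but it
    # could equally be a specification table or a trusted reference system.
    return a + b

actual, expected = calculator_add(5, 7), oracle_add(5, 7)
print("PASS" if actual == expected else f"FAIL: expected {expected}, got {actual}")
```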
Aspect | Test Oracle | Test Case
Definition | A mechanism or source to determine if test results are correct. | A set of specific actions, conditions, and expected outcomes to test functionality.
Purpose | Validates the correctness of test results. | Specifies what to test and how to test it.
Focus | Evaluating outcomes against expected behavior. | Executing steps to verify specific functionalities.
Scope | Broader; applies across multiple test cases. | Narrow; specific to a single scenario or feature.
Dependency | Independent, but used to validate test cases. | Depends on oracles to verify results.
Examples | Requirements document, mathematical rules, historical data. | Steps to log in, expected results for login functionality.
Automation | Can be manual or automated (e.g., scripts validating outputs). | Can be manual or automated (e.g., Selenium for execution).
Input | Expected results or standards for comparison. | Specific inputs, actions, and conditions for testing.
Output | Validates if the actual output matches the expected result. | Generates actual outcomes to compare against expectations.
Use Case | Ensures the correctness of results across various scenarios. | Verifies a specific feature or behavior of the software.
What is Debugging?
Debugging is the process of identifying, analyzing, and fixing defects or bugs in software. It is
typically carried out after a bug is detected during testing or use. The goal is to identify the root
cause of the issue and resolve it so that the software behaves as expected.

Steps in Debugging
• Bug Reproduction: Reproduce the defect to understand its behavior.
• Bug Localization: Identify the part of the code or logic causing the defect.
• Bug Analysis: Analyze why the bug occurred.
• Bug Fixing: Modify the code to resolve the issue.
• Verification: Re-test the software to ensure the bug is fixed and no new issues were
introduced.

Testing vs. Debugging


Testing: Running a login feature test case, and finding that entering the correct password still
results in "Invalid Credentials."
Debugging: A developer examines the code, finds an issue with the password verification logic,
fixes it, and retests the function.
Aspect | Testing | Debugging
Definition | The process of identifying defects in the software by executing it under various conditions. | The process of locating, analyzing, and fixing the root cause of defects found during testing.
Purpose | To find and report defects in the software. | To resolve the defects identified during testing.
Who Performs It | Typically performed by testers. | Typically performed by developers.
Focus | Identifying issues. | Analyzing and fixing issues.
Tools Used | Testing tools like Selenium, JUnit, TestNG, etc. | Debugging tools like GDB, Visual Studio Debugger, Eclipse Debugger.
When It Occurs | During or after development (in the testing phase). | After testing identifies a defect.
Nature | Verification activity (finding bugs). | Correction activity (fixing bugs).
Output | Bug reports or logs indicating defects. | Fixed code with the defect resolved.
Skills Required | Understanding of software behavior and test design techniques. | In-depth knowledge of code and debugging strategies.
Goal | Ensure the software meets requirements. | Ensure the software functions correctly after defect correction.
What is QA & QC?
Quality Assurance (QA)

• QA is a process-oriented approach that focuses on preventing defects in software development by improving processes, methodologies, and standards.
• Objective: To ensure that software development processes lead to high-quality products.
• Activities:
1. Process definition and improvement
2. Standards compliance
3. Audits and reviews
4. Training and documentation
• Example: Establishing a Software Development Life Cycle (SDLC) model with defined
testing phases.
Quality Control (QC)

• QC is a product-oriented approach that focuses on identifying and fixing defects in the final product through testing and inspections.
• Objective: To detect and correct defects in the software before release.
• Activities:
1. Functional and non-functional testing
2. Defect identification and reporting
3. Code reviews
4. Validation and verification
• Example: Running test cases to verify that a login feature works correctly.
Aspect | Quality Assurance (QA) | Quality Control (QC)
Definition | Process-oriented approach to prevent defects. | Product-oriented approach to detect defects.
Focus | Ensuring the process is efficient and effective. | Ensuring the final product is defect-free.
Objective | Prevent defects before they occur. | Identify and fix defects in the final product.
Approach | Proactive: focuses on improving development processes. | Reactive: focuses on identifying issues after development.
Activities | Process audits, documentation, training, compliance checks. | Functional testing, system testing, defect reporting, code inspections.
Who Performs It? | QA team, process managers. | Testers, developers, quality inspectors.
Methods Used | Standards, best practices, process guidelines. | Test cases, defect tracking, reviews, inspections.
Example | Defining a coding standard for software development. | Running test cases to verify software functionality.
Limitations of Software Testing

Exhaustive Testing is Impossible: It is impractical to test all possible inputs, paths, and scenarios
due to the vast number of permutations. Example: A banking application with multiple user roles
and transaction types cannot be tested for every possible combination.
Cannot Prove the Absence of Defects: Testing can reveal defects, but it cannot confirm that a
system is completely error-free. Example: Even if all test cases pass, hidden defects may still exist
in untested areas.
Limited by Time and Budget: Due to project constraints, testing must be prioritized, and not all
test scenarios can be executed. Example: In Agile development, frequent releases may force teams
to focus only on critical functionalities.
Human Errors in Test Design and Execution: Testers may write incomplete or incorrect test
cases, leading to missed defects. Example: A tester might overlook an edge case where an invalid
date crashes the system.
Automation Has Limitations: Automated testing can only validate what it is programmed to
check, and it cannot adapt to new scenarios the way human testers can. Example: An automated UI
test may not detect minor but critical UI alignment issues.
Cannot Cover Real-World Scenarios Fully: Some defects only appear under real-world
conditions (e.g., network failures, unexpected user behavior). Example: A mobile app may work
perfectly in a testing lab but fail in real-world environments with slow internet connections.
End of Unit-1
