Software Testing Notes
1. Explain the fundamental concept of software testing. Discuss different types of software
testing and their objectives.
The fundamental concept of software testing is to evaluate and verify that software performs as
expected. It involves executing software in a controlled environment to detect and fix bugs,
ensure quality, and assess the system's reliability, security, and performance. Software testing
aims to ensure that the product meets specified requirements, delivers a positive user
experience, and functions reliably under various conditions.
Types of testing by execution method:
1. Manual Testing
Objective: Involves human testers who interact with the software to uncover bugs and
usability issues. It's often used for exploratory, ad-hoc, and user-experience-based
testing, where human judgment and intuition are valuable.
2. Automated Testing
Objective: Uses scripts and tools to execute tests, particularly for repetitive or regression
testing scenarios. Automated testing increases efficiency, consistency, and speed,
especially for large projects with frequent code changes.
Types of testing by objective:
1. Functional Testing
Objective: Validates the functionality of the software against defined requirements. This
testing ensures that specific functions within the system work as expected.
o Acceptance Testing: Verifies if the system meets business requirements and user
needs, often conducted by end-users.
2. Non-Functional Testing
Objective: Focuses on the performance, reliability, and scalability of the system rather
than specific behaviors.
o Stress Testing: Tests the system under extreme conditions to assess its breaking
point.
3. Security Testing
Objective: Identifies vulnerabilities and ensures the system is protected against attacks.
It includes penetration testing, vulnerability scanning, and security assessments.
4. Regression Testing
Objective: Ensures that new changes in the codebase do not adversely affect the existing
functionality. Automated testing often supports regression testing due to its repetitive
nature.
o Sanity Testing: Verifies small sections of the application after minor changes.
5. Exploratory Testing
Objective: Involves minimal planning and relies on tester experience and creativity to
discover edge cases and usability issues that scripted testing might miss.
o Beta Testing: Conducted by a limited group of external users to gain insights into
how the software performs in real-world conditions.
2. Differentiate between testing and debugging. Explain how these activities complement each
other in the software development process. Discuss the common debugging techniques used to
identify and resolve defects.
Testing and debugging are both essential parts of the software development process, but they
serve different purposes and involve distinct activities. Here's a closer look at how they differ,
complement each other, and the common techniques used in debugging.
1. Testing
o Objective: Testing aims to detect and report defects, assess software quality, and
ensure that the system meets requirements. It involves running the software and
comparing the actual output to the expected output to find issues.
o Activities: Testing can be automated or manual and includes creating test cases,
running tests, logging defects, and verifying fixes.
o Outcome: Testing provides a list of issues but does not pinpoint the root causes
of these issues. It answers "What is wrong?" without solving it.
2. Debugging
o Objective: Debugging aims to locate the root cause of a defect found during
testing and correct it.
o Activities: Debugging requires analyzing the failed test results, reproducing the
defect, identifying the root cause, making necessary corrections, and re-testing
the fix.
o Outcome: Debugging corrects defects, ultimately changing the code to fix the
problem. It answers "Why is it wrong?" and addresses the underlying issue.
Testing and debugging work together throughout the software development lifecycle to
improve software quality.
• Testing Identifies Problems: Testing exposes issues in the software, which developers
can then address through debugging.
• Debugging Resolves Issues: Debugging investigates the defects found during testing and
implements solutions to fix them.
• Iterative Process: Once debugging is complete and a fix is made, the software undergoes
regression testing to ensure that the changes don’t introduce new issues. This creates a
feedback loop where testing reveals problems, and debugging addresses them, gradually
improving the software quality.
Common Debugging Techniques
1. Print Statements (Logging)
o A simple but effective technique, especially for small projects or simple issues,
where developers add print statements to trace variable values and execution
flow at various points in the code (see the sketch after this list).
2. Using a Debugger
o Modern IDEs include debugging tools that allow developers to set breakpoints,
step through code line by line, inspect variable states, and monitor the call stack.
This method is powerful for tracking down complex issues in the code.
3. Backtracking
o Involves working backward from the point where the error occurred, retracing
steps to find the origin of the issue. Developers often follow the error message or
stack trace to locate the source of the problem.
4. Binary Search Debugging
o Useful for locating issues in large codebases, binary search debugging involves
disabling parts of the code and testing to see if the issue persists. By isolating the
code incrementally, developers can narrow down the section of code causing the
issue.
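As a quick illustration of the print-statement technique above, here is a minimal Python sketch; the buggy average function is an invented example, not from any particular codebase:

def average(values):
    total = 0
    for v in values:
        total += v
        # Trace statement added while debugging: watch the running total.
        print(f"after adding {v}: total = {total}")
    # Bug: divides by a hard-coded 2 instead of len(values).
    return total / 2

print(average([10, 20, 30]))  # expected 20.0, actually prints 30.0

The trace confirms the accumulation is correct (10, 30, 60), which narrows the defect down to the final division.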
3. Compare and contrast software testing and hardware testing. Discuss the challenges and
considerations associated with each type of testing.
Software Testing and Hardware Testing are both crucial for delivering reliable products, but
they differ in focus, methods, and challenges. Here’s a comparison of the two, along with the
unique considerations and challenges associated with each type.
Comparison of Software Testing and Hardware Testing
• Testing Tools: Software testing involves unit tests, integration tests, automated test
scripts, and debugging tools; hardware testing utilizes specialized tools like oscilloscopes,
multimeters, burn-in tests, and environmental chambers.
• Cost of Failure: Software bugs can often be patched, but critical bugs may impact user
data and experience; hardware defects are costlier to fix due to manufacturing
implications and can lead to recalls.
• Automation: Software testing is highly automatable with continuous integration and
deployment pipelines; hardware testing allows only limited automation, since some tests
are manual or require physical monitoring, though robotics can aid in repetitive tests.
Challenges and Considerations in Software Testing
1. Complexity
o Challenge: Software can involve many dependencies, integrations, and use cases,
making it complex to test all possible scenarios thoroughly.
2. Cross-Platform Compatibility
o Challenge: Software must behave consistently across different operating
systems, browsers, and devices, multiplying the configurations that need testing.
3. Security Risks
o Challenge: Software faces the risk of cyberattacks and data breaches, requiring
thorough security testing.
Challenges and Considerations in Hardware Testing
1. Manufacturing Variability
o Challenge: Physical components are subject to manufacturing tolerances, so
individual units can differ; testing must account for unit-to-unit variation, often
through sampling and burn-in tests.
2. Regulatory Compliance
o Challenge: Hardware must meet regulatory standards (e.g., FCC, CE) before it can
be released, involving specific compliance tests.
3. Failure Consequences
o Consideration: Reliability testing, stress testing, and quality assurance checks are
crucial to ensure hardware meets quality standards, since field failures can lead
to costly recalls.
Overlapping Considerations
• Software and Hardware Integration Testing: Both software and hardware testing include
integration tests to ensure they work together seamlessly.
• System-Level Testing: Testing the complete system, where software and hardware
interact, is essential for validating the performance of the final product.
4. Explain the difference between testing and verification. Discuss the role of verification and
validation in ensuring the quality of software products.
Testing and verification are both vital in software development for ensuring quality, but they
differ in their objectives and approach. Here’s a breakdown of these terms and their roles in
quality assurance.
Difference Between Testing and Verification
1. Verification
o Focus: Verification checks whether the software is built correctly and follows the
development and design specifications.
o Activities: Verification is largely a static process involving reviews, walkthroughs,
and inspections of requirements, design documents, and code, without executing
the software.
o Example: Reviewing a design document to confirm it satisfies the specified
requirements before coding begins.
2. Testing
o Activities: Testing includes executing the code through unit tests, integration
tests, system tests, and user acceptance tests. This process typically happens
after verification activities.
o Focus: Testing checks whether the product is functioning correctly and meets
user expectations.
o Example: Running test cases to ensure a login feature works as intended and that
inputting valid credentials grants access while invalid ones are denied.
Verification and validation (V&V) work together to ensure that a software product is both
correctly built and meets user requirements, thereby enhancing the quality of the final product.
1. Verification
o Objective: Ensures the product is being built correctly, in accordance with design
specifications and standards.
o Role in Quality: Verification activities such as reviews, inspections, and
walkthroughs catch defects early, before the code is executed, reducing the cost
of fixing them.
2. Validation
o Objective: Ensures that the finished product meets the user’s needs and
functions as intended in a real-world environment.
o Role in Quality: Validation involves testing the software to verify its functionality
and ensure it fulfills the customer’s requirements. This step addresses potential
usability and functional issues, improving user satisfaction and preventing issues
after deployment.
▪ User Acceptance Testing (UAT): Ensures the final product works for the
end users in a real-world scenario and meets their needs.
• Increased User Satisfaction: Validation ensures the software's functionality meets end-
user expectations, which directly impacts user satisfaction and system acceptance.
5. Discuss the auditing process of testing and verification. Discuss the importance of test
matrices. Explain various test matrices that can be used to measure the effectiveness and
efficiency of testing activities. Provide an example of how these test matrices can be used to
improve the testing process.
The auditing process of testing and verification, as well as the use of test matrices, are essential
for evaluating the quality, coverage, and effectiveness of the testing activities within software
development. Let’s go over these concepts in more detail.
1. Purpose of Auditing:
o Ensure that test coverage is sufficient and that all critical requirements have been
verified.
2. Auditing Activities:
o Review Test Documentation: Examines test plans, test cases, and test results to
confirm they align with requirements and standards.
o Evaluate Testing Methods and Tools: Assesses whether the testing methods and
tools chosen are appropriate for the project and used effectively.
o Analyze Test Coverage and Metrics: Ensures that test coverage is complete and
that metrics such as defect detection rate, test case execution rate, and pass/fail
rate are recorded and analyzed.
Test matrices are structured tables or charts that track various aspects of the testing process.
They are essential for measuring and monitoring the effectiveness and efficiency of testing
activities. Test matrices provide a clear, organized view of test coverage, defect status, test
progress, and more, making it easier to manage complex testing processes and ensure
thoroughness.
Common Test Matrices
1. Requirement Traceability Matrix (RTM)
o Purpose: Maps each requirement to the test cases that verify it, making coverage
gaps visible.
o Example: The RTM can map each requirement to test cases, showing whether
each requirement has been tested and passed, ensuring complete test coverage
(a code sketch appears at the end of this answer).
2. Defect Matrix
o Purpose: Tracks each defect along with attributes such as severity, priority,
status, and the module where it was found.
o Usage: Analyzes defect trends and ensures high-priority defects are resolved first.
o Example: A defect matrix could show that most high-severity bugs are
concentrated in a specific module, allowing testers to focus on that module for
in-depth testing.
3. Test Execution Matrix
o Purpose: Tracks the execution status (pass, fail, pending) of test cases across
different test cycles.
o Usage: Provides insight into the progress and stability of the software in each test
cycle.
o Example: This matrix can identify modules where test cases frequently fail,
signaling a need for deeper analysis or redesign.
4. Risk Matrix
o Purpose: Prioritizes testing based on the risk associated with each module or
feature.
o Usage: Guides the testing team to focus on high-risk areas first, optimizing
resources.
o Example: This matrix can show that modules with high business impact and
technical complexity require more extensive testing, allowing the team to
allocate resources accordingly.
Suppose a development team is working on a banking application, and they are concerned
about the risk of missing critical functionalities or introducing defects into high-impact modules.
Here’s how test matrices could be used to improve the testing process:
1. Analyzing the Defect Matrix: The defect matrix shows that many of the defects are in the
“transaction processing” module, with several high-priority bugs related to calculation
errors. This insight prompts the team to perform focused regression testing on this
module and prioritize debugging efforts to resolve critical issues.
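To make the RTM idea concrete, here is a minimal Python sketch; the requirement IDs and test case names are hypothetical:

# Hypothetical requirement-to-test-case mapping (a tiny RTM).
rtm = {
    "REQ-1: user login": ["TC-01", "TC-02"],
    "REQ-2: funds transfer": ["TC-03"],
    "REQ-3: audit logging": [],  # no mapped tests -> coverage gap
}

untested = [req for req, cases in rtm.items() if not cases]
coverage = 1 - len(untested) / len(rtm)
print(f"Requirement coverage: {coverage:.0%}")  # Requirement coverage: 67%
print("Untested requirements:", untested)

Even a table this small makes the audit question "has every requirement been verified?" mechanically checkable.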
6. Explain the concept of execution history in software testing. Discuss the various techniques
used to analyze execution history and provide examples of how execution history data can be
used to improve testing effectiveness and efficiency.
Execution history in software testing refers to the records of test execution events, including
details of which test cases were run, their outcomes, times and dates of execution, and defect
reports. This historical data enables testers and developers to understand the performance,
reliability, and defect patterns of the software over time.
Techniques for Analyzing Execution History
1. Test Coverage Analysis
o Usage: By analyzing execution history, teams can identify areas with low test
coverage, focusing future test efforts on these gaps.
2. Defect Density Analysis
o Description: Defect density analysis identifies the number of defects per module
or feature area over time (see the sketch at the end of this answer).
o Usage: Analyzing defect density helps identify "hot spots" in the application
where defects frequently occur, prompting more thorough testing or even
refactoring of specific modules.
3. Regression Test Analysis
o Usage: Execution history can reveal how well regression tests catch recurring
issues, helping testers adjust or add cases to improve defect detection.
o Example: If execution history shows that regression tests repeatedly miss specific
types of defects, the test suite can be adjusted to include more targeted test
cases for those areas.
4. Defect Trend Analysis
o Description: This technique examines trends in defect reports over multiple test
cycles.
o Usage: Defect trend analysis helps assess product stability over time and
identifies periods where defect introduction rates spiked.
o Example: Analyzing execution history might show an increasing trend in defects
after certain feature updates, guiding teams to review those areas for potential
underlying issues.
5. Test Case Execution Frequency Analysis
o Description: This technique assesses how often each test case is executed over
time.
o Usage: It helps identify rarely executed test cases that may require optimization
or retirement, improving overall test suite efficiency.
o Example: If the execution history reveals that certain test cases are rarely used,
testers can either retire them or convert them into automated tests for faster
execution.
6. Root Cause Analysis (RCA)
o Usage: Execution history provides context for RCA by showing when and where
defects were introduced and resolved.
Examples of Using Execution History Data to Improve Testing Effectiveness and Efficiency
• Example: The execution history shows that test cases for a newly developed
module were skipped or only partially executed due to time constraints. By
reviewing this data, testers can prioritize this module in the next cycle, ensuring
complete test coverage.
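As a sketch of the defect density analysis described above; the module names and outcomes are invented for illustration:

from collections import Counter

# Invented execution-history records: (module, test outcome).
history = [
    ("transactions", "fail"), ("transactions", "fail"),
    ("transactions", "pass"), ("login", "pass"),
    ("login", "fail"), ("reports", "pass"),
]

failures = Counter(module for module, outcome in history if outcome == "fail")
runs = Counter(module for module, _ in history)
for module in runs:
    print(f"{module}: {failures[module] / runs[module]:.0%} failure rate")

Modules with disproportionately high failure rates (here, "transactions") are the hot spots that deserve more thorough testing or refactoring.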
7. Discuss various strategies for test cases. Explain their advantages and disadvantages and
provide examples of specific test case generation techniques.
Test case strategies define how test cases are designed to target different quality goals.
Common strategies include:
1. Positive Test Cases
• Purpose: Verify that the application behaves correctly for valid inputs and normal user
flows.
o Test Case: Log in with a valid username and password.
o Expected Result: User should successfully log in and see the home page.
• Advantages: Confirms that core functionality works as intended.
• Disadvantages: May not cover edge cases if test cases are too narrow.
2. Boundary Test Cases
• Purpose: Test the limits or boundaries of input fields to ensure they handle data
correctly at their limits.
o Test Cases: For a password field that accepts 8-16 characters, enter passwords of
7, 8, 16, and 17 characters.
o Expected Results: System should accept inputs between 8-16 characters and
reject others (automated in the pytest sketch at the end of this answer).
3. Negative Test Cases
• Purpose: Ensure the application gracefully handles invalid input or unexpected user
actions.
o Test Case: Enter an invalid value, such as letters in a numbers-only field.
o Expected Results: System should display an error message and not accept the
input.
4. Performance Test Cases
• Purpose: Evaluate the application's response under different load conditions and ensure
performance is acceptable.
o Test Case: Search with high-frequency terms to see if the system returns results
within 2 seconds.
o Expected Result: Results should appear within the performance benchmark.
5. Security Test Cases
• Purpose: Test for vulnerabilities that could compromise the system's security, such as
SQL injection, cross-site scripting, etc.
o Test Case: Attempt to insert SQL commands into the input fields.
o Expected Result: System should prevent SQL injection and sanitize inputs.
6. Usability Test Cases
• Purpose: Verify that the application is user-friendly and follows best practices for
usability.
o Test Case: Verify that mandatory fields are marked with an asterisk.
o Expected Result: All mandatory fields are clearly indicated, and there are clear
error messages for incomplete fields.
7. Compatibility Test Cases
• Purpose: Ensure the application works on different devices, operating systems, and
browsers.
o Test Case: Open the application on Chrome, Firefox, Safari, and Edge.
o Expected Result: The application should render and function correctly on all
browsers.
• Advantages: Ensures a consistent experience across platforms.
8. Regression Test Cases
• Purpose: Verify that new changes do not break or negatively affect existing functionality.
o Test Case: Run existing test cases for related modules like profile access, logout,
etc.
o Expected Result: All previously passing test cases continue to pass after the
change.
9. Integration Test Cases
• Purpose: Check if different modules or services work well together after integration.
o Test Case: Place an order and ensure successful transition from cart to payment
gateway and then back to order confirmation.
o Expected Result: Each module works in sync, and no data or functionality is lost
between modules.
10. Acceptance Test Cases
• Purpose: Verify the application meets business requirements and criteria specified by
stakeholders.
o Test Case: Check that users can post, share, and like content as specified in
requirements.
o Expected Result: All core user activities are functional and meet stakeholder
approval.
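The boundary test cases above (item 2) translate naturally into automated checks. Here is a minimal pytest-style sketch, assuming a hypothetical is_valid_password function that accepts 8-16 characters:

import pytest

def is_valid_password(password: str) -> bool:
    """Hypothetical validator: accepts passwords of 8-16 characters."""
    return 8 <= len(password) <= 16

@pytest.mark.parametrize("length, expected", [
    (7, False),   # just below the lower boundary
    (8, True),    # on the lower boundary
    (16, True),   # on the upper boundary
    (17, False),  # just above the upper boundary
])
def test_password_length_boundaries(length, expected):
    assert is_valid_password("a" * length) == expected

Testing exactly on and just beyond each boundary is where off-by-one defects are most likely to hide.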
8. Differentiate between static and dynamic analysis in software testing. Discuss the advantages
and disadvantages of each approach. Provide examples of tools and techniques used for static
and dynamic analysis.
Static Analysis
Static analysis inspects the source code or compiled code without executing it, which helps in
identifying potential issues early in development. It can be performed using techniques like
code review, code inspection, and static analysis tools.
Advantages of Static Analysis
• Early Defect Detection: Catches issues like syntax errors, security vulnerabilities, and
code complexity issues before code is executed.
• Cost-Effective: Since defects are caught early, it reduces the cost of fixing them.
• Enforces Standards: Helps ensure code adheres to coding standards and best practices,
improving readability and maintainability.
• Automatable: Easily integrated into the CI/CD pipeline to detect issues as code is
written.
Disadvantages of Static Analysis
• Limited to Code Structure: It doesn’t account for runtime behavior, meaning it can miss
certain logic errors or dynamic issues.
• False Positives: Static analysis tools can generate a high number of warnings, some of
which may not be relevant, requiring manual review.
• Not Suitable for All Errors: Issues like memory leaks, performance bottlenecks, and
runtime exceptions can’t be detected without execution.
Common Static Analysis Tools
• SonarQube: Analyzes code quality and detects bugs, vulnerabilities, and code smells.
• Lint: Commonly used for checking coding style, syntax errors, and basic code structure
issues.
• PMD: Analyzes Java source code for errors and enforces coding standards.
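As an illustration of what a static analyzer flags without ever running the code, consider this invented snippet; a typical Python linter such as Pylint or flake8 would report the issues noted in the comments:

import os  # flagged: "os" imported but never used

def apply_discount(price, rate):
    if rate == None:   # flagged: comparison to None should use "is None"
        rate = 0.1
    return price - price * rate
    print("done")      # flagged: unreachable statement after return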
Dynamic Analysis
Dynamic analysis involves executing the program to monitor and test its runtime behavior. It’s
generally used for functional, performance, and memory testing.
Advantages of Dynamic Analysis
• Real-Time Error Detection: Finds issues that only appear during execution, such as
memory leaks, crashes, and runtime exceptions.
• User Interaction Testing: Enables testing of user interface, workflows, and interactions
to ensure smooth user experiences.
Disadvantages of Dynamic Analysis
• Requires Test Cases: Needs specific test cases and scenarios, which can be complex to
design, especially for comprehensive coverage.
• Late Issue Detection: Issues may only be identified during later stages, making them
more expensive to fix.
Common Dynamic Analysis Tools
• Valgrind: A memory analysis tool that detects memory leaks and mismanagement.
• Selenium: Used for automating browser interactions to test web applications, often used
for end-to-end testing.
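By contrast, a dynamic check only finds a defect when the code actually executes. A minimal sketch with an invented function:

def split_bill(total, people):
    return total / people

# The code's structure is fine, so static analysis stays quiet;
# only executing it with people = 0 exposes the runtime defect.
try:
    split_bill(100, 0)
except ZeroDivisionError:
    print("runtime defect found: unguarded division by zero")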
9. Explain the concept of model-based testing. Describe different types of models used in
testing. Provide examples of model-based testing.
Model-based testing (MBT) is an approach in software testing where testing activities are driven
by models that represent the expected behavior of the system under test (SUT). In this
approach, models are used to describe the system’s functional requirements, which are then
used to automatically generate test cases. These models can represent the system’s behavior in
different ways, such as state transitions, processes, or user interactions.
The idea behind MBT is to create abstract representations (models) of the system that help
identify potential test cases, ensuring that the system’s behavior is thoroughly tested without
manually writing each test case.
Key Concepts:
• Model: An abstraction that represents the behavior or structure of the system under
test. It can represent things like states, sequences of actions, workflows, or functional
requirements.
• Test Generation: Test cases are automatically generated from the model, ensuring
consistency and coverage.
• Test Execution: Tests derived from the model are executed to verify that the system
meets its expected behavior.
Advantages of Model-Based Testing
• Increased Coverage: Since the model is a detailed representation of the system, it can
help identify edge cases and scenarios that may not be easily identified with traditional
testing approaches.
• Reduced Manual Effort: Automatically generating tests from models reduces the need
for manual test case creation, making the testing process more efficient.
• Consistency: As tests are derived from a single source (the model), there is less chance
of human error or inconsistency in test cases.
• Automation: Models can be easily used to automate test generation and execution,
which is particularly useful in regression testing.
Disadvantages of Model-Based Testing
• Complexity: Building and maintaining the models can be complex, especially for large
systems.
• Initial Setup Cost: Time and effort are required to create an accurate model, which may
be an overhead in short-term projects.
• Overfitting: Models need to be maintained and updated as the system evolves. If not
kept up-to-date, models can become too specific and fail to represent the system’s
actual behavior.
Types of Models Used in Testing
1. Finite State Machines (FSMs)
o Description: Model the system as a finite set of states with transitions between
them triggered by inputs or events.
o Example: A login system with states like Logged Out, Logging In, and Logged In,
and transitions like clicking login or entering incorrect credentials.
2. State Transition Diagrams
o Description: Similar to FSMs but can be more detailed. These diagrams represent
the different states of a system and the possible transitions between these states
based on inputs or events.
o Example: For a mobile app, states like Offline, Connecting, and Online, with
events like network failure or successful connection triggering transitions.
o Test Case Example: Testing how the app behaves when transitioning from Offline
to Connecting under different network conditions.
3. Activity Diagrams
o Description: Model the flow of activities or steps in a process, including decision
points and the order of actions.
o Example: A user registration flow where the system checks for valid email,
verifies the email, and then creates a user account.
o Test Case Example: Testing the system's response when a user submits an invalid
email address during registration.
4. Use Case Models
o Description: Use case models focus on the functional aspects of a system from
the perspective of the user. These models describe the interactions between the
user (or other systems) and the software.
o Example: A use case for logging into a website, which involves user actions like
entering credentials, submitting the login form, and verifying authentication.
o Test Case Example: Testing all possible ways the login system can fail, such as
invalid passwords or missing fields.
5. Data Flow Diagrams (DFDs)
o Description: DFDs represent the flow of data within a system, showing how
inputs are transformed into outputs through processes. They help identify the
sequence of data processing.
o Example: A system where data flows from input fields to a database, and then
through various processing steps.
o Test Case Example: Testing if the system properly handles invalid data inputs and
processes them correctly.
o Test Case Example (safety-critical systems): Verifying that a reactor control
system transitions to a safe state under emergency conditions.
Example of Model-Based Testing: ATM System
Consider a simple ATM System that can perform operations such as check balance, withdraw
money, and deposit money. The system has states like Idle, Waiting for PIN, Waiting for
Transaction, and Transaction Completed. The transitions depend on inputs like correct/incorrect
PIN, selecting a transaction type, and entering an amount.
• States:
o Idle: ATM is powered on but not in use.
o Waiting for PIN: After ATM is powered on, waits for PIN input.
o Waiting for Transaction: After PIN is entered correctly, waits for the user to
choose a transaction.
o Transaction Completed: The selected transaction has finished.
• Transitions:
o From Idle to Waiting for PIN when the ATM is powered on.
o From Waiting for PIN to Waiting for Transaction when the correct PIN is entered.
o From Waiting for Transaction to Transaction Completed when a transaction is
processed.
o From Transaction Completed to Idle when the user finishes the transaction.
• Test Cases Derived from the Model:
o Test valid and invalid PIN entries (valid should proceed to the next state, invalid
should stay in the same state).
o Verify that after selecting a transaction type, the system processes it and
transitions to the next state.
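A minimal Python sketch of this ATM state machine, with tests derived from the model; the event names in the transition table are illustrative assumptions:

# Transition table for the ATM model: (state, event) -> next state.
TRANSITIONS = {
    ("Idle", "power_on"): "Waiting for PIN",
    ("Waiting for PIN", "correct_pin"): "Waiting for Transaction",
    ("Waiting for PIN", "wrong_pin"): "Waiting for PIN",
    ("Waiting for Transaction", "run_transaction"): "Transaction Completed",
    ("Transaction Completed", "finish"): "Idle",
}

def run(events, state="Idle"):
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

# Model-derived test sequences, each covering a path through the machine.
assert run(["power_on", "wrong_pin"]) == "Waiting for PIN"
assert run(["power_on", "correct_pin", "run_transaction", "finish"]) == "Idle"
print("all transition tests passed")

Because the tests are generated from the same table that defines the model, every test sequence is by construction a legal path through the specified behavior.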
10. Discuss the concepts of control flow graphs, state models, and data flow-based testing.
Explain how these models can be used to generate test cases and analyze the behavior of a
software system.
A Control Flow Graph (CFG) is a graphical representation of a program’s control flow. The
graph’s nodes represent basic blocks of code, and the edges represent the control flow between
them. In CFG testing, the focus is on the paths through the code and how they can be traversed
to ensure that every possible execution path is tested.
• Nodes: Represent basic blocks or sections of code that are executed sequentially (e.g., a
function or statement).
• Edges: Represent control flow between these basic blocks, such as decisions (if-else),
loops, and function calls.
• Entry and Exit Points: Represent the starting and ending points of the execution.
• Path Coverage: Each path through the program is identified and tested. For example,
testing all possible loops, conditionals, and paths to check for errors in execution.
• Cyclomatic Complexity: A metric used to measure the complexity of the program based
on the number of independent paths. Higher complexity usually means more test cases
are needed.
Example:
In a simple program with conditional statements (if-else), the CFG would help visualize all the
possible paths (e.g., the path when the condition is true and when it is false) to ensure that
both paths are tested.
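For instance, a sketch of a two-path function and the tests that achieve full branch coverage; the function itself is invented:

def classify(age):
    # The if-statement is a decision node: two outgoing edges in the CFG.
    if age >= 18:
        return "adult"   # path 1: condition true
    return "minor"       # path 2: condition false

# Cyclomatic complexity is 2 (one decision + 1), so two tests
# are enough to cover every independent path.
assert classify(30) == "adult"
assert classify(10) == "minor"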
A State Model (often represented by a State Machine) is used to model systems that can exist
in a finite number of states, where transitions occur based on inputs or events. This model is
widely used in systems where the behavior changes depending on the current state, like finite-
state systems (e.g., vending machines, login systems).
• States: Represent the various conditions or modes of the system (e.g., "Logged Out",
"Logged In").
• Transitions: Arrows that show the possible changes between states based on events or
actions (e.g., user inputs).
• Initial and Final States: Represent the start and end points of the state model.
• State Transition Testing: Test cases are designed to validate that transitions between
states occur correctly. For example, if the user enters the correct password, the system
should transition from Logged Out to Logged In.
• Boundary Condition Testing: Checking the system’s behavior at the boundary of state
transitions (e.g., entering incorrect credentials may return the system to Logged Out).
• Path Coverage: Just like CFG, test paths can be generated to cover all possible state
transitions, ensuring every behavior is tested.
Example:
• Testing the transition from Logged Out to Logged In after valid login.
• Testing the transition from Logged In back to Logged Out on logout.
Data Flow Testing focuses on tracking the flow of data through the program. It examines how
variables and data move between statements, focusing on the creation, usage, and definition of
variables. This testing technique is concerned with detecting errors related to incorrect or
unexpected manipulation of data.
• Def-Use Pairs: Tracks how data moves between definitions and uses, checking if data is
used before being defined or if there are undefined variables.
• Variable Definition and Use: Test cases are designed to check that variables are properly
defined before being used.
• Uninitialized Variables: Identifying test cases where variables might be used before they
are assigned values.
• Data Paths: Test cases ensure that all paths from variable definition to its usage are
tested, ensuring correct data propagation.
Example:
x = 5
y = x + 10
Data flow testing would ensure that x is properly defined before y uses it. It would also check if
any variable is used without initialization.
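A slightly larger sketch shows the kind of def-use anomaly data flow testing targets: on one path the variable fee is used without ever being defined (hypothetical code):

def shipping_cost(weight, express):
    if express:
        fee = 10               # "fee" is defined only on this path
    return weight * 2 + fee    # use of "fee": undefined when express is False

print(shipping_cost(5, True))  # 20: definition precedes use on this path
try:
    shipping_cost(5, False)
except UnboundLocalError:
    print("data flow defect: 'fee' used before it was defined")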
How These Models Analyze the Behavior of a Software System
1. Control Flow Graph (CFG):
o Test Case Generation: The CFG helps generate test cases that cover all possible
paths, ensuring every branch (like if-statements) and loop is tested. This ensures
that both expected and unexpected paths are handled, improving code coverage.
o Behavior Analysis: By identifying all paths, CFG ensures the program behaves
correctly under different execution sequences and edge cases (e.g., infinite loops
or unhandled conditions).
2. State Model:
o Test Case Generation: Test cases are generated to validate that the system
transitions correctly between states. This is important for systems with multiple
operational modes (e.g., embedded systems, games, and user authentication).
o Behavior Analysis: The state model captures the system’s behavior in each state,
ensuring that actions in one state lead to expected behaviors in another. This also
highlights possible state conflicts or unreachable states.
3. Data Flow Testing:
o Test Case Generation: Data flow testing ensures that variables are correctly
defined before use and that no undefined or uninitialized variables affect the
system. It's especially useful for detecting data-related issues like uninitialized
variables, dead code, or improper data propagation.
Example: Banking Application
Consider a simple Banking Application that allows users to transfer money between accounts.
The system has several states (Logged In, Transferring Funds, Funds Transferred, etc.), and
several data-related operations (e.g., updating balances, logging transactions).
1. Control Flow Graph:
o A CFG of the transfer operation would expose branches such as sufficient vs.
insufficient funds, letting testers design cases that exercise each path.
2. State Model:
o The banking application might have states like Logged In, Initiating Transfer,
Processing Payment, and Transfer Complete. The state model would help
generate test cases to verify that the system transitions correctly between these
states under various conditions (e.g., canceling a transfer or encountering an
error).
3. Data Flow Testing:
o In this case, you would track the movement of data such as account balances and
transaction details. Data flow testing ensures that balance calculations are
accurate, no transaction is logged without proper funds being available, and that
all relevant data is propagated correctly through the system.
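A minimal sketch of that data flow check, assuming a hypothetical transfer function: the assertions verify that balances propagate correctly and that a transfer without sufficient funds leaves the data unchanged.

def transfer(balances, src, dst, amount):
    # Guard: no transaction proceeds without sufficient funds.
    if balances[src] < amount:
        return False
    balances[src] -= amount
    balances[dst] += amount
    return True

balances = {"alice": 100, "bob": 50}
assert transfer(balances, "alice", "bob", 30)
assert balances == {"alice": 70, "bob": 80}        # data propagated correctly
assert not transfer(balances, "bob", "alice", 999)
assert balances == {"alice": 70, "bob": 80}        # failed transfer changes nothing
print("transfer data flow checks passed")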