Unit 05 STE (22518)
Manual Testing
Definition:
Manual testing is the process of testing software manually to identify defects without
using any automated tools.
Testers execute test cases, analyze results, and document issues to ensure the software
functions as expected.
Human Observation: It relies on human intuition and observation, which can sometimes
uncover subtle issues that automated tests might miss.
Flexible Approach: Manual testing allows testers to adapt quickly, explore different
scenarios, and modify test cases as needed.
Real-World Examples
1. Mobile App Usability Testing: When testing the user experience of a mobile app, manual
testers assess the design, usability, and user flow to ensure it’s intuitive and user-friendly.
For example, testers check if buttons, menus, and navigation elements work smoothly
and as expected.
Advantages of Manual Testing
1. Human Insight: Testers can detect and explore unexpected behaviors, usability issues, or
other defects that automated tests may overlook.
2. Flexibility: Manual testing allows for quick adjustments in test cases, making it suitable
for exploratory or ad-hoc testing where the testing scope can change rapidly.
3. Cost-Effective for Small Projects: For small or short-term projects, manual testing can be
more cost-effective than investing in automation tools.
Limitations of Manual Testing
1. Time-Consuming:
Manual testing requires more time, especially for repetitive tests, making it slower than
automated testing.
2. Human Error:
Since manual testing depends on human efforts, it’s prone to errors, especially during
lengthy or complex test cases.
3. Limited Coverage:
It’s challenging to cover all possible test cases manually, which can leave certain areas of the
application untested.
4. Less Consistency:
Results may vary each time due to different testers, fatigue, or minor variations in testing
methods.
5. Not Suitable for Large-Scale Testing:
Manual testing is inefficient for projects requiring extensive testing, such as performance or
load testing.
6. Cannot Run on Multiple Platforms Simultaneously:
Testing across different environments (like browsers or OS) is hard to manage manually and
often requires more resources.
7. Lack of Reusability:
Test cases created manually lack reusability; they must be recreated or re-executed each
time.
8. Difficult to Track Test Results:
Tracking progress and maintaining detailed records is harder in manual testing compared to
automated testing tools.
Automation Testing Tools
Definition:
Automation testing tools are software applications used to automate the process of
executing test cases, comparing results, and generating reports.
Efficiency: These tools eliminate repetitive manual work by running predefined scripts
and tests automatically.
Wide Usage: They are widely used in industries for tasks like regression testing,
performance testing, and load testing.
Real-World Examples
1. Selenium: A popular open-source automation testing tool used for web application
testing across multiple browsers and platforms. For example, testing the login
functionality of a banking website across Chrome and Firefox (see the short sketch after this list).
2. JMeter: An automation tool used for performance and load testing. For instance, testing
how an e-commerce website handles 1,000 users shopping simultaneously.
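As a rough illustration of Example 1, the sketch below shows how such a login check might be automated with Selenium WebDriver in Java. The URL and element IDs are hypothetical, and running it assumes the Selenium library and a browser driver are installed.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginSmokeTest {
    public static void main(String[] args) {
        // Swap ChromeDriver for FirefoxDriver to repeat the same check on Firefox.
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://bank.example/login");                   // hypothetical URL
            driver.findElement(By.id("username")).sendKeys("demoUser"); // hypothetical element IDs
            driver.findElement(By.id("password")).sendKeys("demoPass");
            driver.findElement(By.id("loginButton")).click();
            // A very simple pass/fail check: did the login land on the dashboard page?
            boolean loggedIn = driver.getCurrentUrl().contains("dashboard");
            System.out.println(loggedIn ? "Login test passed" : "Login test failed");
        } finally {
            driver.quit();
        }
    }
}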
Limitations of Automation Testing Tools
1. High Initial Setup Cost: Setting up automation tools, including purchasing licenses and
training testers, can be expensive.
2. Limited to Predefined Tests: Automation tools cannot handle unexpected changes or ad-
hoc testing effectively, as they follow predefined scripts.
3. Requires Skilled Resources: Testers need to have technical skills to write and maintain
automated test scripts, which can be challenging for beginners.
4. Not Suitable for UI or Usability Testing: Automation tools cannot judge user experience,
design quality, or other aspects that require human observation.
Benefits of Automation Testing
2. Improved Accuracy:
Automation tools reduce the likelihood of human error in testing. By following the same
steps every time, they ensure consistent results, which improves accuracy and reliability in
test execution.
Commonly Used Automation Testing Tools
1. Selenium
o A popular open-source tool for automating web application testing across multiple
browsers and platforms.
2. JUnit
o A widely used unit-testing framework for the Java programming language (see the
short sketch after this list).
3. TestComplete
o A commercial tool that provides automated UI testing for desktop, mobile, and web
applications.
4. QTP (QuickTest Professional)
o An automated functional testing tool for web and desktop applications, now called
UFT (Unified Functional Testing).
5. LoadRunner
o A performance testing tool that helps to test the performance and load capacity of
applications.
6. Appium
o A mobile testing tool for automated testing of Android and iOS applications.
7. Postman
o A tool for testing APIs by making HTTP requests and validating responses.
8. JMeter
o An open-source tool for performance and load testing of web applications.
9. Cucumber
o A tool that supports Behavior Driven Development (BDD) for testing web
applications by writing tests in natural language.
10. Ranorex
o An automation tool for UI testing, suitable for web, desktop, and mobile applications.
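As a small illustration of item 2 (JUnit) in the list above, here is a minimal JUnit 5 unit test for a hypothetical Calculator class; it assumes the JUnit Jupiter library is on the classpath.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical class under test.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

public class CalculatorTest {
    @Test
    void addReturnsSumOfTwoNumbers() {
        Calculator calc = new Calculator();
        assertEquals(5, calc.add(2, 3)); // the test passes only if add() returns 5
    }
}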
Testing Tools
Definition:
Testing Tools are software applications designed to support testing activities, whether
it's for automation, performance analysis, or defect tracking.
These tools help increase efficiency, improve test coverage, and ensure better quality
software by streamlining repetitive testing tasks.
Advantages of Using Testing Tools
1. Increased Efficiency
2. Improved Accuracy
3. Consistency
4. Reusability
5. Faster Testing
6. Comprehensive Reporting
7. Scalability
o Tools handle large volumes of data and users during performance testing.
8. Cross-Platform Testing
Disadvantages of Using Testing Tools
1. High Cost
2. Complex Setup
3. Learning Curve
4. Limited Scope
5. Dependency on Tools
6. Initial Investment
7. Tool Limitations
8. Frequent Updates
Selecting a Testing Tool
The right testing tool helps improve efficiency, accuracy, and coverage, and must align
with factors like the project’s budget, scope, timeline, and technology stack.
It's important to understand both the advantages and limitations of each tool to ensure
it supports the testing goals effectively.
Factors to Consider:
Platform Support:
o Does the tool support the platforms, browsers, or operating systems that the
application uses?
Cost:
o Does the tool fit within the project’s budget? Some tools are free (open-source),
while others are commercial with licensing fees.
Ease of Use:
Integration Capabilities:
o Does the tool integrate with other software tools, like bug-tracking systems,
CI/CD pipelines, or version control systems?
Support and Documentation:
o Does the tool have good community support or customer service? Availability of
resources like documentation, tutorials, or forums is essential.
Scalability:
o Can the tool handle large-scale testing needs as the application grows, especially
for performance or load testing?
Compatibility:
o Does the tool support the required technologies (e.g., web, mobile, desktop,
APIs) or specific frameworks used by the application?
Reporting Capabilities:
o Does the tool offer comprehensive reporting features that help in analyzing test
results, identifying defects, and tracking progress?
Tool Selection Process
Identification of the areas within the organization where tool support will help to improve
testing processes.
Proof-of-concept to see whether the product works as desired and meets the requirements
and goals defined for it.
Evaluation of the vendor (training, support and other commercial aspects) or open-source
tool support.
Identifying and planning internal implementation (including coaching and mentoring for
those new to the use of the tool).
4. Real-Life Examples
Example 1: When selecting an automation tool for a web application, you might
consider Selenium for browser compatibility, TestComplete for its user-friendly interface,
or Cypress for modern web frameworks. Each of these tools has its strengths depending
on the project's requirements.
Example 2: For performance testing, tools like JMeter or LoadRunner are commonly used.
Static Testing Tools
Examples:
Checkstyle: A tool for checking Java source code against coding standards.
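As a rough illustration (assuming a typical Checkstyle ruleset with the standard naming checks enabled), the hypothetical snippet below would be reported without ever running the program, which is what makes the tool static.

// Hypothetical class that a typical Checkstyle naming ruleset would flag:
public class orderProcessor {            // class names are normally expected in UpperCamelCase
    static final int max_retries = 3;    // constants are normally expected in UPPER_SNAKE_CASE

    public int retries() {
        return max_retries;
    }
}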
Advantages:
Disadvantages:
Dynamic Testing Tools
Examples:
Advantages:
Disadvantages:
Key Difference:
Static testing tools examine code, requirements, or design documents without executing the
program, whereas dynamic testing tools evaluate the software's behavior while it is running.
Base Metrics
1. Definition:
Base metrics are the raw data collected during the software development and
testing processes. These are the foundation metrics upon which other calculations
and metrics are built.
2. Examples of Base Metrics:
Lines of Code (LOC)
o Measures the total number of lines of code written in the
software.
o Example: 5000 lines of code in a project.
Number of Test Cases
o The total count of test cases written for a project.
o Example: 120 test cases written for testing functionalities.
3. Advantages of Base Metrics:
Simplicity
o Easy to collect and understand without complex computations.
Foundation for Other Metrics
o Provides raw data needed to calculate more advanced metrics
like productivity or defect density.
Early Insights
o Gives an initial view of project size or testing efforts.
Consistency
Calculated Metrics
1. Definition:
Calculated metrics are derived by combining base metrics using formulas or calculations
to provide meaningful insights into the software process or product.
2. Examples of Calculated Metrics:
Defect Density
o Formula: Defect Density = Total Number of Defects / Size of the Software (in KLOC)
o Example: 25 defects in a 5,000-line (5 KLOC) project give a defect density of
25 / 5 = 5 defects per KLOC.
Product Metrics
1. Definition:
Product metrics are quantitative measures that evaluate the characteristics of a
software product, such as its functionality, reliability, maintainability, and
performance.
2. Examples of Product Metrics:
Code Complexity
o Measures the complexity of code using metrics like Cyclomatic Complexity (see the
short sketch after these examples).
o Example: A function has a complexity score of 12.
Defect Density
o Measures the number of defects per unit of software size (e.g.,
per 1,000 lines of code).
o Example: 0.8 defects per 1,000 lines of code.
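As a short illustration of the Code Complexity metric above, the hypothetical method below has a cyclomatic complexity of 5 under the common counting rule (number of decision points plus one).

public class ComplexityExample {
    // Decision points: the null check (1), the for loop (2), the inner if (3), and the && (4).
    // Cyclomatic complexity = 4 decision points + 1 = 5.
    static int countLargeEvenValues(int[] values, int threshold) {
        if (values == null) {
            return 0;
        }
        int count = 0;
        for (int v : values) {
            if (v > threshold && v % 2 == 0) {
                count++;
            }
        }
        return count;
    }
}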
3. Advantages of Product Metrics:
Quality Assessment
o Helps assess the overall quality of the software product.
Improves Reliability
o Identifies areas prone to defects, leading to better testing and
error prevention.
Maintainability Insights
o Provides metrics to estimate ease of future maintenance.
Performance Tracking
o Tracks improvements in product performance over time.
4. Disadvantages of Product Metrics:
Code-Only Focus
o Often limited to measurable code aspects, ignoring user
experience.
Effort Intensive
o Requires significant effort to collect and analyze data accurately.
Misleading Data
o Incorrect interpretation can lead to flawed decisions.
Dependency on Tools
Process Metrics
1. Definition:
Process metrics are quantitative measures that evaluate the efficiency and effectiveness
of the processes used during software development and maintenance.
2. Examples of Process Metrics:
Defect Removal Efficiency (DRE)
o Formula: DRE = (Defects found before release / (Defects found before release +
Defects found after release)) × 100
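A small worked example of the DRE formula above, using assumed defect counts for illustration:

public class DreExample {
    public static void main(String[] args) {
        int defectsFoundBeforeRelease = 90;  // assumed counts, not real project data
        int defectsFoundAfterRelease = 10;
        double dre = 100.0 * defectsFoundBeforeRelease
                / (defectsFoundBeforeRelease + defectsFoundAfterRelease);
        System.out.println("Defect Removal Efficiency = " + dre + "%"); // prints 90.0%
    }
}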
Comparison Table: Base Metrics vs Calculated Metrics
Definition: Base metrics are raw data collected directly; calculated metrics are derived from
base metrics.
Advantages: Base metrics are simple, foundational, and consistent; calculated metrics are
insightful, customizable, and actionable.
2. Measurement
Definition:
Measurement is the process of collecting and analyzing data to quantify attributes of
software or its process.
o Example: Measuring the execution time of test cases to assess performance.
Difference:
o Measurement: Focuses on collecting raw data.
o Metrics: Involves analyzing and deriving insights from the measured data.
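A minimal sketch of this difference: the raw timings collected below are measurements, while the average derived from them is a metric. The timed work is a placeholder, not a real test case.

public class MeasurementVsMetric {
    public static void main(String[] args) {
        long[] executionTimesMs = new long[3];
        for (int i = 0; i < executionTimesMs.length; i++) {
            long start = System.nanoTime();
            runPlaceholderTest();                                          // stands in for a real test case
            executionTimesMs[i] = (System.nanoTime() - start) / 1_000_000; // measurement: raw data point
        }
        long total = 0;
        for (long t : executionTimesMs) {
            total += t;
        }
        double averageMs = (double) total / executionTimesMs.length;       // metric: derived insight
        System.out.println("Average test execution time: " + averageMs + " ms");
    }

    static void runPlaceholderTest() {
        double sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += Math.sqrt(i); // placeholder work to give the timer something to measure
        }
        if (sum < 0) {
            System.out.println("unreachable"); // keeps the loop from being optimized away
        }
    }
}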
Need of Software Measurement
1. Quality Assurance
o To ensure the software meets predefined quality standards.
o Example: Measuring defect rates to monitor quality.
2. Performance Tracking
o Helps track the efficiency of the development and testing processes.
3. Project Management
o Metrics assist in resource allocation, timeline management, and budget estimation.
4. Process Improvement
o Identifies weaknesses in processes and helps improve them over time.
5. Risk Assessment
o Predicts potential risks based on historical data and trends.
6. Productivity Measurement
o Tracks team productivity by analyzing metrics like test execution rates.
7. Decision Support
o Provides data-driven insights to make informed decisions during the software
lifecycle.
8. Customer Satisfaction
o Ensures the delivered software aligns with user requirements and expectations.
Object-Oriented Testing
Purpose:
To verify that object-oriented concepts such as classes, inheritance, polymorphism,
encapsulation, and object interactions work as intended.
1. Class Testing:
2. Inheritance Testing:
o Verifies that subclasses inherit properties and methods correctly from parent
classes.
3. Polymorphism Testing:
o Example: Ensuring that calling the same method on different objects yields the
expected results (see the short sketch after this list).
4. Encapsulation Testing:
o Validates that data hiding is implemented correctly through access control (e.g.,
private, protected).
o Example: Ensuring private variables are not directly accessible outside the class.
5. Object Interaction:
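A minimal sketch of polymorphism testing, using hypothetical Shape classes: the same area() call is made on different objects, and each result is checked against that object's own expected value.

interface Shape {
    double area();
}

class Square implements Shape {
    private final double side;                    // encapsulated state (relates to point 4 above)
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

public class PolymorphismTest {
    public static void main(String[] args) {
        Shape[] shapes = { new Square(2.0), new Circle(1.0) };
        double[] expected = { 4.0, Math.PI };
        for (int i = 0; i < shapes.length; i++) {
            // Same call on different objects: each implementation must return its own expected value.
            boolean pass = Math.abs(shapes[i].area() - expected[i]) < 1e-9;
            System.out.println(shapes[i].getClass().getSimpleName()
                    + " area test: " + (pass ? "Passed" : "Failed"));
        }
    }
}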
Advantages:
1. Comprehensive Coverage:
2. Improved Debugging:
3. Risk Reduction:
4. Enhanced Quality:
Disadvantages:
1. Complex Setup:
2. High Maintenance:
3. Resource Intensive:
4. Tool Dependency:
o May require specific tools to manage and track the matrix effectively.
Example Test Matrix:
Class: Validate class methods; Test case: input parameter testing; Expected result: correct
output generated; Status: Passed
Polymorphism: Object substitution; Test case: call method on interface; Expected result:
expected method behavior; Status: Failed
Winter 2019
1. State any four advantages of using tools. (2marks)
2. State any eight limitations of manual testing (4marks)
3. Describe object-oriented metrics in testing. (4marks)
4. Elaborate the term metrics and measurement and write the need of software measurement.
(6marks)
Summer 2022
1. Enlist any four testing tools. (2marks)
2. Describe different factors for selecting a testing tool. (4marks)
3. State any eight limitations of Manual Testing. (4marks)
4. Describe need for Automated Testing tools. (4marks)
5. Elaborate the concept of Software Metrics. Describe Base Metrics and Calculated Metrics with
suitable example. (6marks)
Winter 2022
1. State the need of automated testing tools. (2marks)
2. State the limitations of manual testing. (4marks)
3. Enlist the factors considered for selecting a testing tool for test automation. (4marks)
4. State the advantages and disadvantages of using tools. (4marks)
Summer 2023
1. State any two differences between manual and automated testing. (2marks)
2. Describe any four limitations of manual testing. (4marks)
3. Differentiate between static and dynamic testing tools. (any four points). (4marks)
4. Describe the criteria to select testing tools. (4marks)
5. Define metrics and measurements. Describe three types of metrics. (6marks)
Winter 2023
1. Enlist any four software testing tools. (2marks)
2. Describe any four factors for selecting a testing tool. (4marks)
3. State & explain any four benefits of automation in testing. (4marks)
4. State any four limitations of manual testing. (4marks)
Summer 2024
1. State the need of automated testing tool. (Any two). (2marks)
2. Differentiate between static and dynamic testing tools. (any four points). (4marks)
3. Give any four differences between manual and automated testing. (Any 4 points). (4marks)
4. Describe any four limitations of manual testing. (4marks)
5. How to select a testing tool? Explain in detail.(6marks)