SOFTWARE TESTING

UNIT-5
What is Software Testing?
• Software testing is the process of evaluating a software application or
system to detect any defects, errors, or bugs in the software.
• Software testing is a process where we validate that the product
developed is as per the requirement of the client.
• Testing ensures that the software is defect-free, safe to use, and validated
against different metrics.
• Software is tested for security, user experience, speed, performance,
load capacity, and many other parameters.
• Software development and software testing both are ongoing
processes as every time a new functionality gets added, it is tested for
its accuracy.
Definitions of Software Testing
• According to Glenford J. Myers, software testing is defined as "the process
of executing a program or system with the intent of finding errors."
• According to the IEEE (Institute of Electrical and Electronics Engineers),
software testing is defined as "the process of operating a system or
component under specified conditions, observing or recording the results,
and making an evaluation of some aspect of the system or component."
• According to Roger S. Pressman, software testing is defined as "the
process of executing a software application with the intent to uncover
defects, validate that the system meets its requirements, and ensure that it
operates correctly in its intended environment."
• According to Paul C. Jorgensen, software testing is defined as "the process
of reducing uncertainty about the correctness of a software system by
executing the system with the purpose of finding its defects, if any."
Goals of Software Testing
• Find and Fix Defects: The primary goal is to identify bugs, errors, or issues in
the software before it is released.
• Ensure Software Quality: Testing checks whether the software meets the
required standards (reliability, usability, efficiency, maintainability).
• Verify Functionality: Make sure the software works as expected and
performs all required functions correctly.
• Validate Requirements: Ensure the developed software matches the
customer's needs and project requirements.
• Improve User Experience: Testing helps ensure the software is user-friendly,
smooth, and free from glitches.
• Prevent Failures in the Future: Detect issues early to avoid major failures
after release, reducing maintenance cost.
• Increase Confidence in the Software: Testing helps developers, managers,
and customers trust that the software is stable and reliable.
• Ensure Security: Identify vulnerabilities or weak points that attackers could
exploit.
• Check Performance: Verify speed, load handling, and resource usage to
ensure the software works efficiently under different conditions.
• Compliance: Ensure the software follows industry standards, legal
requirements, and organizational policies.
Importance of Software Testing
• Ensures Software Quality: Software testing helps to identify defects,
errors, and bugs in the software, ensuring that it meets the specified
requirements and works as intended.
• Customer Satisfaction: By ensuring that the software meets user needs
and expectations, software testing can help to improve customer
satisfaction and build trust and loyalty with customers.
• Reduces Costs: By identifying and fixing defects early in the
development process, software testing can help to reduce costs
associated with fixing issues later on.
• Improves Reliability: Testing helps ensure that the software operates
reliably and consistently under different conditions and scenarios,
reducing the risk of unexpected failures or crashes.
• Minimizes Risks: Effective software testing can help to minimize risks
associated with deploying new software systems by identifying
potential issues before they become more costly to fix later on.
• Facilitates Compliance: Testing helps ensure that the software
complies with industry standards, regulations, and legal
requirements, reducing the risk of penalties or legal issues.
• Enhances Performance: By identifying performance issues early in the
development process, software testing can help to enhance the
overall performance of the software.
• Easy Maintenance: Well-tested software is typically easier to maintain
and update, as it is less likely to contain hidden defects or issues.
Software Testing Strategy
• A software testing strategy is a comprehensive approach to testing that
outlines the steps to be taken during the testing process. It includes test
planning, test-case design, test execution, and resultant data collection and
evaluation.
• A software testing strategy is a plan that outlines the steps to be taken during
testing, when these steps will be taken, and how much time and resources
will be required. It's like a roadmap that guides the testing process.
• The goal of a software testing strategy is to ensure that the software meets
quality standards and works as intended.
• By following a well-defined strategy, developers can identify and fix defects
early in the development process, reducing costs associated with fixing issues
later on.
• This helps to Improve user satisfaction by ensuring that the software meets
their needs and expectations.
Common Characteristics of Software Testing Strategies
• Effective technical reviews should be conducted before testing
commences to eliminate errors early in the development process.
This can help reduce costs associated with fixing issues later on.
• Testing should start at the component level and work outward to
ensure that each part of the system works as intended before the parts
are integrated into a larger system.
• Different types of tests are needed for different types of software and
at different stages of development.
• The software developer is responsible for testing individual units or
components of the program to ensure that each performs its
intended function. In many cases, the developer also conducts
integration testing to test the complete software architecture.
However, for large projects, an independent test group (ITG) may also
be involved in testing.
• Testing and debugging are different activities, but debugging must be
accommodated in any testing strategy.
• A software testing strategy should include both low-level tests to
verify small source-code segments and high-level tests to validate
major system functions against customer requirements.
• A software testing strategy should provide guidance for practitioners
and a set of milestones for managers. It is important to measure
progress and identify problems as early as possible, especially when
deadlines are approaching.
Verification and Validation
• Software testing is just one element of a broader topic that is often
referred to as verification and validation (V&V).
• Verification refers to the set of tasks that ensure that software correctly
implements a specific function. This involves checking that the software
meets its design specifications, coding standards, and other requirements.
• Validation refers to a different set of tasks that ensure that the software
meets customer requirements. This involves checking that the software is
traceable to customer needs and expectations.
• In simpler terms:
Verification: "Are we building the product right?"
Validation: "Are we building the right product?"
Meaning & Definition: Verification
• Verification refers to the set of tasks that ensure that software
correctly implements a specific function.
• Verification is the process of discovering the possible failures in the
software (not the actual final product) before the commencement of
the testing phase.
• Generally, verification is done during the development phase of the
software development life cycle. It involves reviews, inspections,
meetings, code reviews, and specifications.
• Verification is the process of ensuring that we are building the
product right, i.e., checking the requirements we have and verifying
that we are developing the product in accordance with them.
Meaning & Definition: Validation
• Validation refers to a different set of tasks that ensure that the
software meets customer requirements.
• In simple words, the test execution that we do in our day-to-day work
is actually the validation activity, which includes smoke testing,
functional testing, regression testing, system testing, etc.
• Validation occurs after the verification process and the actual testing
of the product happens at a later stage. Defects which occur due to
discrepancies in functionality and specifications are detected in this
phase.
• It answers the question, "Are we building the right product?".
• Validation is the process of checking whether we are building the right
product, i.e., validating that the product we have developed is the right one.
Difference Between Verification and Validation
• Purpose — Verification: ensures the product is built according to the design.
Validation: ensures the product meets user needs.
• Focus — Verification: adherence to specifications and design.
Validation: fulfillment of customer and end-user needs.
• Timing — Verification: occurs during the development process.
Validation: generally occurs after the development process.
• Objective — Verification: confirm the software is built according to
specifications and design. Validation: confirm the software meets user
expectations and requirements.
• Output — Verification: detects inconsistencies, missing requirements, or
design flaws before implementation. Validation: detects whether the
software is suitable for its intended use.
• Methods — Verification: reviews, inspections, and walkthroughs.
Validation: all types of testing, such as smoke testing, regression testing,
functional testing, system testing, and user acceptance testing.
Software Inspection Vs Software Testing
• Software inspection is a formal review process used to find defects
without running the code.
• An inspection is a meeting during which analysis, design, and code are
reviewed by people other than the original developer.
• The objective of the inspection process is to find errors early in the
development cycle.
• Key features:
• Static technique → No program execution.
• Done by a team of reviewers (developers, testers, analysts).
• Focus on documents and code quality.
• Detects errors early (during requirements, design, coding).
• Cost-effective since defects are identified before testing.
• Examples
• Inspecting requirements documents
• Reviewing design diagrams
• Code reviews, walkthroughs, peer reviews
• Checking for standards, formatting, logic mistakes
• Finds
• Missing requirements
• Design mistakes
• Coding errors (logic issues, naming, unused variables)
• Documentation inconsistencies
Software Testing
• Software testing involves executing the software to find defects.
• Key features:
• Dynamic technique → Program must run.
• Performed by testers using test cases.
• Focus on behavior, functionality, performance.
• Helps ensure the software meets user requirements.
• Performed after coding in various phases (unit, integration, system…).
• Examples
• Running test cases on an app
• Checking login functionality
• Load testing a server
• Usability testing
• Finds
• Runtime errors
• Functional mismatches
• Performance issues
• Security vulnerabilities
• Crashes and failures
Inspection vs. Testing
• Approach — Inspection: static. Testing: dynamic.
• Technique — Inspection: verification. Testing: validation.
• Timing — Inspection: early in the development process. Testing: throughout
and after the development process.
• Focus — Inspection: identifying defects in software artifacts. Testing:
evaluating functionality, performance, and other quality attributes.
• Participants — Inspection: a team of knowledgeable peers. Testing: a team
of testers responsible for test-case design and execution.
• Cost — Inspection: less costly. Testing: more costly, as it requires
specialized test tools.
• Effectiveness — Inspection: more effective at detecting defects early in the
development process. Testing: effective at evaluating the overall behavior
and performance of the software.
• Defect discovery — Inspection: effective at identifying defects in software
artifacts such as requirements, design models, and source code. Testing:
focused on uncovering defects through the execution of test cases.
• Automation — Inspection: typically manual, relying on human expertise to
review and identify issues. Testing: can be automated to execute test cases
and compare expected results with actual results.
Traditional Software Testing Process
• Figure below is an abstract model of the ‘traditional’ testing process,
as used in plan driven development.
• Test cases are specifications of the inputs to the test and the
expected output from the system (the test results), plus a statement
of what is being tested.
• Test data are the inputs that have been devised to test a system. Test
data can sometimes be generated automatically, but automatic test
case generation is impossible, as people who understand what the
system is supposed to do must be involved to specify the expected
test results.
• However, test execution can be automated.
• The actual results are automatically compared with the
predicted results, so there is no need for a person to look for
errors and anomalies in the test run.
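The split between test cases (inputs, expected output, and a statement of what is being tested) and automated test execution can be sketched in a few lines of Python. The `is_palindrome` function and the cases below are hypothetical, invented purely for illustration:

```python
# Test cases recorded as data: each one specifies what is being tested,
# the test data (inputs), and the expected result.
test_cases = [
    # (what is being tested,          test data,  expected result)
    ("empty string is a palindrome",  "",         True),
    ("single character",              "x",        True),
    ("mixed case is handled",         "Level",    True),
    ("non-palindrome is rejected",    "abc",      False),
]

# Hypothetical unit under test.
def is_palindrome(s):
    s = s.lower()
    return s == s[::-1]

# Automated test execution: expected results are compared with actual ones,
# so no person has to inspect the test run for anomalies.
for description, data, expected in test_cases:
    assert is_palindrome(data) == expected, description
```

The expected results here were specified by hand, which is exactly why test-case generation cannot be fully automated even when execution is.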
Classification of software testing
• There are different types of testing to ensure the reliability,
functionality and performance of software systems.
• The three main types of testing are development testing, release
testing and user testing, each with its own subtypes.
Development Testing
• Development testing is carried out by the team developing the
software to discover bugs and ensure that the software functions
correctly.
• Development testing integrates testing into the software
development lifecycle, performed by developers during the
construction phase to find and fix bugs early.
• It includes activities like unit testing, component testing, and code
reviews to improve efficiency, reduce costs, and increase overall
software quality by preventing issues from becoming more complex
and costly to fix later.
Goals of Development Testing
• The goal of development testing is to verify that each individual unit or
component works correctly.
• It aims to ensure that the code behaves according to the specified
design and requirements.
• It helps detect defects early in the development phase before they
spread to later stages.
• It ensures that code changes, modifications, or refactoring do not
break existing functionality.
• Development testing aims to improve the overall quality, readability,
and maintainability of the code.
• It checks that each module handles inputs, outputs, and error
conditions properly.
• It prevents regression issues by repeatedly testing the code whenever
updates are made.
• Development testing provides useful documentation by showing the
expected behavior of the code.
• It increases the developer’s confidence in the reliability and
correctness of their code.
• It reduces the overall cost and time of debugging by identifying and
fixing issues at an early stage.
Types of Development Testing
• There are different types of Development Testing:
• Unit Testing – Testing individual functions, classes, or modules written by
developers.
• Component Testing – Testing a group of related units together as a single
component.
• Module Testing – Testing a complete module to ensure it works correctly as
per design.
• Integration Testing (Developer Level) – Testing how two or more
units/components interact with each other.
• Regression Testing (Developer Side) – Re-testing the code after changes to
ensure nothing is broken.
• Static Testing – Reviewing code, design, or documents without executing the
program (code review, walkthroughs).
• Dynamic Testing – Executing the code to check functionality and behavior.
Unit Testing
• Unit testing is the type of software testing where individual units of
software or components of the software are tested.
• Its main purpose is to prove that each unit of software code performs
as expected.
• It is done by the developer during the development (coding phase) of
an application.
• Unit testing isolates a section of code and verifies its correctness. An
entity can be an individual function, method, procedure, module, or
object.
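As a minimal sketch of what this looks like in practice, here is a hypothetical `apply_discount` function and a developer-written unit test that exercises it in isolation (the function name and behavior are invented for illustration):

```python
# Unit under test: a hypothetical function computing a discounted price.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

# Unit test written by the developer during the coding phase: the unit is
# exercised on its own, with no other part of the system involved.
def test_apply_discount():
    assert apply_discount(200.0, 10) == 180.0   # typical case
    assert apply_discount(99.99, 0) == 99.99    # no discount
    assert apply_discount(50.0, 100) == 0.0     # full discount

test_apply_discount()
```

In a real project such tests would usually be run through a framework like `unittest` or `pytest` rather than called directly.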
Unit Test Considerations
• The module interface is tested to ensure that information
properly flows into and out of the program unit under test.
• Local data structures are examined to ensure that data
stored temporarily maintains its integrity during all steps in
an algorithm’s execution.
• All independent paths through the control structure are
exercised to ensure that all statements in a module have
been executed at least once.
• Boundary conditions are tested to ensure that the module
operates properly at boundaries established to limit or
restrict processing.
• All error-handling paths are tested. Dataflow across a
component interface is tested before any other testing is
initiated.
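The boundary-condition and error-handling considerations above can be sketched with a small hypothetical component, a buffer with a fixed capacity, tested just below, at, and above its limit:

```python
# Unit under test: a hypothetical buffer with a fixed capacity limit.
class BoundedBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def add(self, item):
        if len(self.items) >= self.capacity:
            raise OverflowError("buffer full")
        self.items.append(item)

# Boundary tests: exercise the module exactly at the limits established
# to restrict processing.
buf = BoundedBuffer(capacity=2)
buf.add("a")          # below the boundary: accepted
buf.add("b")          # at the boundary: accepted, buffer now full

# Error-handling path: crossing the boundary must raise, not corrupt data.
try:
    buf.add("c")
    overflowed = False
except OverflowError:
    overflowed = True
assert overflowed
assert buf.items == ["a", "b"]   # local data structure kept its integrity
```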
Unit Test Procedure
• The design of unit tests can occur before coding begins or after source code
has been generated.
• Because a component is not a stand-alone program, driver and/or stub
software must often be developed for each unit test.
• A driver is nothing more than a "main program" that accepts test-case data,
passes that data to the component to be tested, and prints the relevant
results.
• Stubs serve to replace modules that are subordinate to (invoked by) the
component to be tested.
• A stub may do minimal data manipulation, print verification of entry, and
return control to the module undergoing testing. Drivers and stubs
represent testing "overhead."
• That is, both are software that must be written (formal design is not
commonly applied) but that is not delivered with the final software
product.
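The driver/stub idea can be sketched as follows. Everything here is hypothetical: in a real system `greet_user` would call an actual user-lookup module, which the stub replaces for the duration of the test:

```python
# Component under test: formats a greeting for a looked-up user.
# In production the lookup would be a real subordinate module.
def greet_user(user_id, lookup):
    name = lookup(user_id)          # subordinate call, replaced by a stub
    return f"Hello, {name}!"

# Stub: replaces the subordinate module with minimal, predictable behaviour.
def stub_lookup(user_id):
    print(f"stub_lookup called with {user_id}")   # verification of entry
    return "Alice"

# Driver: a throwaway "main program" that accepts test-case data, passes it
# to the component under test, and prints the relevant results.
def driver():
    result = greet_user(42, stub_lookup)
    print("result:", result)
    return result

driver()
```

Neither `driver` nor `stub_lookup` ships with the final product; both exist only to make the component testable in isolation.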
Strategies for Choosing Test Cases
• Selecting efficient unit test cases is crucial since testing costs money and
takes time. Effectiveness here refers to two things:
1. The test cases should show that, when used as expected, the component that you
are testing does what it is supposed to do.
2. If there are defects in the component, these should be revealed by test cases.
• Therefore, you need to create two different types of test cases.
• The first of them should demonstrate that the component is functional and
should depict typical program operation. For instance, if you are testing a
component that creates and initializes new patient records, your test case
should demonstrate that the new patient record exists in the database
and that its fields have been set as required.
• The other type of test case ought to focus on known problem areas. It
should test using atypical inputs to ensure that these are correctly
handled and do not cause the component to crash.
• You can choose test scenarios by using the following two strategies:
1. Using partition testing, you can find input groups that share traits
and ought to be handled similarly, and you should select tests
from within each of these groupings. Partition testing is a
strategy for selecting effective unit test cases by identifying groups
of inputs that have common characteristics and should be
processed in the same way.
2. Using testing guidelines to select test cases is known as
"guideline-based testing." These recommendations are based on
prior knowledge of the kind of mistakes that programmers
frequently make when creating components. It is an approach to
selecting effective test cases that involves using testing guidelines
to identify potential defects in software components.
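Both strategies can be sketched on a hypothetical grade classifier. The partitions and boundary values below are invented for illustration:

```python
# Component under test: a hypothetical pass/fail classifier.
def classify(score):
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 40 else "fail"

# Partition testing: the input space splits into three equivalence groups;
# one representative test is chosen from inside each partition.
assert classify(20) == "fail"     # partition: valid scores 0..39
assert classify(70) == "pass"     # partition: valid scores 40..100
try:
    classify(150)                  # partition: invalid inputs
    raised = False
except ValueError:
    raised = True
assert raised

# Guideline-based testing: prior experience says programmers often make
# off-by-one mistakes at partition boundaries, so probe both sides of 40.
assert classify(39) == "fail"
assert classify(40) == "pass"
```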
Component Testing
• Component testing, also known as module testing, in software
engineering is a testing method where individual components or
modules of a software application are tested in isolation.
• This process aims to validate that each component functions correctly
according to its specifications before it is integrated with other parts
of the system.
• Component Testing is a software testing technique that verifies the
functionality, usability, and behavior of individual components of an
application in isolation to ensure they meet specified requirements.
• This approach helps identify defects early in the development cycle,
saving time and reducing the cost of fixing issues later.
• Importance of Component testing:
• Early Bug Detection: Component testing helps catch bugs early in the
development process, making them easier and cheaper to fix.
• Focused Testing: Testing components in isolation allows developers to
concentrate on specific parts of the application, making it easier to
pinpoint issues.
• Functionality Verification: Ensures that each part of the application
performs as expected.
• Support for Continuous Integration: Automated component tests can
be added to the CI pipeline, enabling frequent testing and quick
feedback.
• Cost Efficiency: Early issue detection reduces the cost of fixing bugs
later and minimizes expenses for post-release maintenance.
• Enhanced Software Quality: Thorough and regular testing of individual
components ensures the software meets high-quality standards.
Types of Interfaces between Components
• Interfaces between components play a critical role in software
systems, enabling components to communicate and interact with
each other to achieve specific functionalities.
• Interface types include:
• Parameter interfaces: data is passed from one method or procedure to another.
• Shared-memory interfaces: a block of memory is shared between procedures
or functions.
• Procedural interfaces: a sub-system encapsulates a set of procedures to be
called by other sub-systems.
• Message-passing interfaces: sub-systems request services from other
sub-systems.
• Software components are often composite components that are
made up of several interacting objects.
• Testing composite components should therefore focus on showing
that the component interface behaves according to its specification.
• Interface errors:
• Interface misuse: a calling component calls another component and
makes an error in its use of its interface, e.g., parameters in the wrong
order.
• Interface misunderstanding: a calling component embeds incorrect
assumptions about the behavior of the called component.
• Timing errors: the called and calling components operate at
different speeds, and out-of-date information is accessed.
• General guidelines for interface testing:
• Design tests so that parameters to a called procedure are at the
extreme ends of their ranges.
• Always test pointer parameters with null pointers.
• Design tests that cause the component to fail.
• Use stress testing in message-passing systems.
• In shared-memory systems, vary the order in which components
are activated.
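The first three guidelines can be sketched against a hypothetical procedure with a parameter interface (`None` standing in for a null pointer in Python):

```python
# Called component: a hypothetical procedure with a parameter interface.
def average(values):
    if values is None:
        raise TypeError("values must not be None")
    if len(values) == 0:
        raise ValueError("values must not be empty")
    return sum(values) / len(values)

# Guideline: parameters at the extreme ends of their ranges.
assert average([-1e9, 1e9]) == 0.0

# Guideline: always test "pointer" parameters with null (None in Python).
try:
    average(None)
    raised = False
except TypeError:
    raised = True
assert raised

# Guideline: design tests that cause the component to fail.
try:
    average([])
    failed = False
except ValueError:
    failed = True
assert failed
```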
System Testing
• System testing is a type of software development testing that is used
to check the overall functionality of your project.
• System testing in software testing is a level of testing where the
entire, integrated software system is evaluated to ensure it meets the
specified requirements and functions correctly as a whole. It is
performed after integration testing and before acceptance testing.
• System testing during development involves integrating components
to create a version of the system and then testing the integrated
system.
• The focus in system testing is testing the interactions between
components.
• System testing checks that components are compatible, interact
correctly and transfer the right data at the right time across their
interfaces.
• System testing tests the emergent behavior of a system.
• During system testing, reusable components that have been
separately developed and off-the-shelf systems may be integrated
with newly developed components.
• The complete system is then tested.
• Components developed by different team members or sub-teams
may be integrated at this stage.
• System testing is a collective rather than an individual process.
• The use cases developed to identify system interactions can be
used as a basis for system testing.
• Each use case usually involves several system components so
testing the use case forces these interactions to occur.
• The sequence diagrams associated with the use case document
the components and their interactions that are being tested.
Test-Driven Development
• Test-driven development (TDD) is an approach to program development in
which you interleave testing and code development.
• Tests are written before code and 'passing' the tests is the critical driver of
development.
• This is a differentiating feature of TDD versus writing unit tests after the
code is written: it makes the developer focus on the requirements before
writing the code.
• The code is developed incrementally, along with a test for that increment.
• You don't move on to the next increment until the code that you have
developed passes its test.
• TDD was introduced as part of agile methods such as Extreme
Programming. However, it can also be used in plan-driven development
processes.
• The goal of TDD isn't to ensure we write tests by writing them first,
but to produce working software that achieves a targeted set of
requirements using simple, maintainable solutions.
• To achieve this goal, TDD provides strategies for keeping code
working, simple, relevant, and free of duplication.
• TDD process includes the following activities:
1. Start by identifying the increment of functionality that is
required. This should normally be small and implementable in a
few lines of code.
2. Write a test for this functionality and implement this as an
automated test.
3. Run the test, along with all other tests that have been
implemented. Initially, you have not implemented the
functionality so the new test will fail.
4. Implement the functionality and re-run the test.
5. Once all tests run successfully, you move on to implementing the
next chunk of functionality.
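One cycle of these activities can be sketched for a small, invented increment, a hypothetical `slugify` function for turning titles into URL fragments:

```python
# Step 2: write the automated test for the small increment first.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  My First Post ") == "my-first-post"

# Step 3: running test_slugify() at this point fails with a NameError,
# because slugify does not exist yet. That failing test drives step 4.

# Step 4: implement just enough functionality to make the test pass.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

test_slugify()   # now passes; move on to the next increment (step 5)
```

Note the test is deliberately written before the implementation, so the developer has to decide on the function's required behavior first.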
• Benefits of test-driven development:
• Code coverage: every code segment that you write has at least one
associated test, so all code is exercised by the test suite.
• Regression testing: a regression test suite is developed incrementally as
a program is developed.
• Simplified debugging: when a test fails, it should be obvious where the
problem lies; the newly written code needs to be checked and
modified.
• System documentation: the tests themselves are a form of
documentation that describe what the code should be doing.
• Regression testing is testing the system to check that changes have not
'broken' previously working code. In a manual testing process, regression
testing is expensive but, with automated testing, it is simple and
straightforward. All tests are rerun every time a change is made to the
program. Tests must run 'successfully' before the change is committed.
Release Testing
• Release testing, also known as deployment testing or production testing,
involves a comprehensive set of testing activities performed before a
software system is released to end-users. These activities aim to verify the
software's functionality, performance, and overall readiness for
deployment.
• Release testing is the final phase of software testing before a product is
delivered to end-users, focused on verifying that the software meets all
requirements and is ready for public use.
• It includes checking for defects, ensuring functionality and performance,
and validating non-functional aspects like compatibility and security under
real-world conditions.
• The primary goal is to build confidence in the quality and reliability of the
release for both the customer and the development team.
Purpose
• Validation of software: ensures that the software fulfills its intended
use when operated in its target environment.
• Verification of requirements: verifies that all the requirements, both
functional and non-functional, are met.
• Assurance of quality: confirms that the software meets the quality
benchmarks set for the product, such as performance, security, and
usability standards.
• Compliance with regulations: checks that the software adheres to
any regulatory standards required for the industry in which it will be
used.
Types Of Release Testing
• It may involve various testing types, including:
• Functional Testing: Ensuring all features and functionalities work as
intended.
• Performance Testing: Assessing the system's responsiveness and
stability under various loads.
• Security Testing: Identifying and mitigating security vulnerabilities.
• Compatibility Testing: Verifying functionality across different
operating systems, browsers, and devices.
• Usability Testing: Evaluating the ease of use and user experience.
How To Perform Release Testing?
• To perform release testing in a software product, go through the
following steps given below.
• Make a thorough test plan that includes a timeline, specific test objectives, and
goals.
• Establish the product release’s acceptance criteria. Make sure the testing
environment is suitable and resembles the production environment.
• Assemble the different test data that will be utilized in the testing. All potential
user scenarios should be covered by the test cases you plan and create.
• Monitor the test results after running the tests manually or using automated
testing tools.
• Effective defect management aids in tracking and managing flaws discovered
during testing, as well as prioritizing and promptly resolving them.
• Compile the test findings into a thorough report that includes the test cases
that were run, the flaws that were discovered, and the product’s current state.
• Depending on the test results, determine whether the product is prepared for
release or if further testing is required.
Advantages Of Release Testing
• Enhances the functionality and quality of software products
• Enhances the effectiveness of teamwork
• Reduces the risk
• Cost-effective
• Enhances customer satisfaction
• Improved project management
Phases Of Release Testing
• There are two phases for release testing. They are:
• Pre Release Test
• Post Release Test
• Pre Release Test
• This test is carried out before the software is made available to end users,
and it is carried out in the release test environment. It involves putting the
software through a rigorous testing process to find and fix bugs, make sure it
satisfies quality standards, and confirm that it works as intended.
• Post Release Test
• It is carried out after the software has been made available to end users and
put into use. The main aim is to track the software’s performance in real-
world settings and spot any potential problems after the product is released.
Challenges In Release Testing
• Limited Test Coverage – Insufficient test coverage may lead to
undetected problems and poor-quality software.
• Time Constraints – Tight deadlines often lead to incomplete testing, which
compromises the overall quality of the product.
• Complex Environments – Software programs are frequently implemented in
situations that are complex, involving various hardware configurations,
operating systems, and network configurations. It is difficult to test software
in such an environment, because incompatibilities or configuration
discrepancies could cause problems.
• Lack of Real-World Scenarios – Isolated settings may not accurately replicate
real-world usage situations and scenarios when it comes to release testing.
End users frequently utilize the product in ways that were not envisioned
during testing, which can result in unanticipated problems and unfavorable
user experiences.
Subtypes of Release testing
• Requirements-based Testing
• Requirement-based testing is a software testing approach where
test cases are created directly from the system's functional and
non-functional requirements to ensure the software meets its
specifications.
• It involves systematically deriving tests from requirements,
designing cases to validate each requirement, and establishing
traceability to link every requirement to its corresponding test
case.
• This helps ensure full coverage and identify gaps or defects early in
the development cycle.
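The requirement-to-test-case traceability described above can be sketched in code. The requirement IDs (REQ-001, REQ-002) and the `discount_price()` function below are invented for illustration; the point is the traceability matrix linking every requirement to at least one test.

```python
# Hypothetical requirements-based testing sketch.
# REQ-001: apply a percentage discount to a price.
# REQ-002: reject invalid prices or percentages.

def discount_price(price, percent):
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or percent")
    return price * (100 - percent) / 100

# One test case derived from each requirement.
def test_req_001_discount_applied():
    assert discount_price(200, 25) == 150

def test_req_002_invalid_input_rejected():
    try:
        discount_price(-1, 10)
        assert False, "expected ValueError"
    except ValueError:
        pass

# Traceability matrix: requirement ID -> linked test cases.
TRACEABILITY = {
    "REQ-001": [test_req_001_discount_applied],
    "REQ-002": [test_req_002_invalid_input_rejected],
}

# Coverage check: flag any requirement with no linked test, then run all.
for req, tests in TRACEABILITY.items():
    assert tests, f"{req} has no test case"
    for t in tests:
        t()
print("all requirements covered and passing")
```

The coverage loop at the end is what "establishing traceability" buys you: a requirement with an empty test list is a gap caught before release.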
• Scenario Testing
• Scenario testing is a software testing technique that uses hypothetical,
real-world stories to evaluate how a system works under specific
conditions.
• It focuses on testing end-to-end flows and complex scenarios that may
not be covered by simpler tests, ensuring the software meets user
expectations and requirements.
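An end-to-end scenario of this kind might look like the sketch below, where a hypothetical `Cart` class stands in for the system under test and the test follows a shopper's story step by step.

```python
# Hypothetical scenario test: "a shopper adds two items, removes one,
# and checks out". Cart is a stand-in for the real system under test.

class Cart:
    def __init__(self):
        self.items = {}

    def add(self, name, price):
        self.items[name] = price

    def remove(self, name):
        del self.items[name]

    def total(self):
        return sum(self.items.values())

def test_shopper_scenario():
    cart = Cart()
    cart.add("book", 12.0)       # step 1: shopper adds a book
    cart.add("pen", 3.0)         # step 2: shopper adds a pen
    cart.remove("pen")           # step 3: shopper changes their mind
    assert cart.total() == 12.0  # step 4: checkout shows the right total

test_shopper_scenario()
```

Unlike a unit test of `add()` alone, the scenario exercises the whole flow, which is where unexpected interactions between steps tend to surface.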
• Performance Testing
• Performance testing is a software testing process to evaluate a system's
responsiveness, stability, speed, and scalability under a specific
workload.
• It involves subjecting the software to varying loads to identify
bottlenecks and ensure it can meet performance requirements before
release, preventing issues like crashes or slow performance during peak
usage.
• Key goals include checking performance against requirements and
locating bottlenecks in resource utilization, such as CPU or memory.
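A minimal performance-test sketch might time the operation under a fixed workload and compare the average latency against a stated requirement. The `operation()` body and the 50 ms budget below are illustrative assumptions, not figures from any real system.

```python
import time

# Sketch of a latency check under load: run the operation N times,
# measure the average latency, and compare it with a requirement.

def operation():
    # Stand-in for the real work being measured.
    return sum(i * i for i in range(10_000))

N = 100  # simulated workload: 100 requests
start = time.perf_counter()
for _ in range(N):
    operation()
elapsed = time.perf_counter() - start
avg_ms = elapsed / N * 1000

# Requirement check: average latency must stay under the budget.
BUDGET_MS = 50.0
assert avg_ms < BUDGET_MS, f"too slow: {avg_ms:.2f} ms > {BUDGET_MS} ms"
```

Real performance testing varies the load (ramp-up, soak, spike) and measures percentiles rather than just the mean, but the shape is the same: generate load, measure, compare against the requirement.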
User Testing
• User testing in software testing involves evaluating a software
product by having real users from the target audience interact with it
to accomplish specific tasks.
• This process aims to identify usability issues, gather feedback on the
user experience, and ensure the software meets user needs and
expectations.
Alpha Testing
• Alpha Testing is conducted in an organization to determine all
possible bugs and defects before releasing the final product to the
real users. It is done by the testers who are mostly internal employees
of the organization.
• It simulates real functional testing at the developer's own site. Unit
testing and integration testing are completed before the alpha testing; alpha
testing is performed after all other in-house testing is executed.
The Alpha Testing Process
• Requirement Review: Review the functional requirement and the
design of the system specification
• Test Development: Test development depends on the output of the
requirement review. In this phase, the test cases and the test plan are
developed.
• Test Case Execution: Execute the test cases according to the test plan.
• Logging Defects: Log the bugs identified in the application.
• Bug Fixation: Once all the bugs are determined and logged into the
system, the logged bugs are fixed.
• Retesting: When all issues are fixed by the developers, retesting is
performed.
Phases of Alpha Testing
• To confirm that the software executes correctly and does not harm the
reputation of the organization, the company executes final testing in the
form of alpha testing. It is executed in two different phases.
1. First Phase:
• Internal software engineers or developers carry out the first phase of testing.
In this phase, the testers use a hardware debugger or hardware-assisted debugger
to detect bugs quickly. During alpha testing, a tester can find many errors,
bugs, crashes, missing features, and documentation gaps.
2. Second Phase:
• In the second phase, the quality assurance staff conduct the alpha testing
by applying black-box and white-box techniques.
Features of Alpha Testing
• It is one type of acceptance testing.
• It occurs at the stage of completion of the software product.
• It is performed in labs, where an exact and suitable environment is provided.
• It is in-house testing, which is performed by the internal testers within the
company.
• There is not any public involvement.
• It helps to earn confidence in the user approval of the software product.
• We can achieve alpha testing with the help of the black box and white box
techniques.
• It confirms the highest possible quality of the software before releasing it to
market or customers for beta testing.
• Developers complete alpha testing at the developer’s site; this allows them to
document errors easily and resolve found bugs fast.
• Generally, it is done after other testing such as unit, integration, and
system testing, but before the beta testing.
• Mostly alpha testing is used for testing software products, applications, and
projects.
Advantages
• This testing decreases the delivery time of the project.
• It delivers a complete test plan and test cases.
• Team members are freed early for other projects.
• All feedback is helpful to enhance software quality.
• It delivers a more useful observation of the software’s dependability
and stability.
Disadvantages
• It does not apply in-depth testing of the software.
• Testers and customers may view the test data differently, so the software
may behave differently for real users than it did during testing.
• The lab environment used for alpha testing cannot replicate all the
requirements of the real environment, such as the full range of conditions,
aspects, and possibilities.
Beta testing
• Beta testing is also known as user acceptance testing. It is the testing
performed just before the release of the software.
• It also performs end-to-end software testing, and it is a type of
silent testing, carried out without the testing team supervising.
• It is generally performed by real users.
• This testing is performed after the alpha testing.
• The beta version is released to a restricted number of users to check
functionality, usability, accessibility, and more.
• Beta testing is done by the real user in the real environment. The Beta
version is released for a limited number of users to test the product
quality. It reduces the risk of failure and provides the quality of the
product and it is final testing before shipping the product to the users.
• Features of beta testing are:
• It is performed in a real environment at the user’s site, which helps
provide the best quality of software.
• It is mostly done by the end-user, stakeholder, and client.
• It is always performed after the alpha testing, and before releasing software
into the market.
• It is a form of black-box testing.
• It is performed in the absence of the software testers and in the presence
of real-time users.
• It generally is done for testing software products like utilities, operating
systems, applications, etc.
Beta version of the software
• The beta version of the software is delivered to a restricted number of
users to receive their feedback and improve quality. Two types of beta
versions are available:
• Closed beta version: Also known as a private beta, it is released to a
fixed, invited group of people. Only invited people test the software and
assess its features and specifications. The beta version demonstrates the
software's readiness for release, but it is available to a limited set of
users, not to everyone.
• Open beta version: Open beta version is also known as a public beta. This
beta version is open to the public. Anyone can assess the beta version as a
tester to provide appropriate feedback and reviews. The open beta version
increases the quality of the final release. This version is useful for finding
other hidden errors and bugs.
Beta Testing Process
• Beta testing is performed by the end-users, and this process cannot be
executed without a plan.
• Follow the below steps for the process of beta testing:
• Planning: Beta testing requires proper planning. In the planning stage, the
team prepares the testing strategy and goals, and determines the number of
users needed for testing, the time available, and other details related to the
process.
• Participant Recruitment: This is the second stage of the beta process in this stage
the team recruits a group of specified end-users for testing. This group can modify as
per the requirements of the organization and the product.
• Product Launch: Once the team of testers is recruited, the beta version of
the product is installed on the client or user side, and users test the
product for quality and security.
• Collect and Evaluate Feedback: When the testing is finished, the developers
collect the feedback provided by the testers and evaluate it. Based on the
feedback, issues and bugs are fixed and resolved by the developers.
• Closure: When all the problems are resolved and the organization meets the exit
criteria, beta testing is completed, and the rewards are offered to the testing team.
What is a Beta Test plan?
• Below is the description of the Beta test plan.
• Objective: Note the aim of the project and why beta testing is needed even
after executing the internal testing.
• Scope: In this plan, we should mention the areas of functionality to be
tested or not.
• Test Approach: Explain the functional areas to be tested and which types of
testing will be performed, such as UI and functionality testing.
• Schedule: Mention the start and end times of testing, the number of cycles,
and the time duration per cycle.
• Tools: Specify the bug-logging tools used during testing and the devices
involved.
• Budget: Incentives for reported bugs, based on bug severity.
• Feedback: How feedback will be collected and the techniques for
considering it.
Advantages
• It focuses on the client’s satisfaction.
• It decreases the risk of product failure through user validation.
• It allows getting immediate feedback from users.
• It finds bugs and issues in the application that were not found by the
software testing team.
• It lets the user install, test, and send feedback about the developed
software.
Disadvantages
• In this testing, the software engineer has no control over the testing
procedure, as users execute it in the real-world environment.
• This type of testing can be time-consuming and can delay the final release
of the product.
• It does not test the functionality of the software in depth, as the
software is still in development.
• Working on feedback from users who do not use the software properly is a
waste of time and money.
Acceptance testing
• Acceptance testing is the final phase of testing to determine if a
product or system meets the requirements of its users and
stakeholders, and is ready for release.
• Acceptance testing is a significant milestone in the software
development lifecycle (SDLC) that evaluates whether a system meets
the specified requirements and is approved for rollout.
• It assesses whether the software satisfies the business stakeholders’
requirements and expectations by verifying the functionality of the software
being developed before it is released to the end user.
• Acceptance testing ensures that the product supports functional and
non-functional requirements, is stable and secure before going live.
Purpose of acceptance testing
• The primary purpose of acceptance testing is to validate that the
software complies with user expectations and enterprise objectives.
• It helps in uncovering any defects or irregularities that could impact
usability, guaranteeing that the product is ready for deployment in
real-world scenarios.
• By carrying out detailed acceptance testing, companies can minimize risks
associated with software applications, improve user experience, and maintain
compliance with industry standards.
Acceptance Testing Process
1. Define Acceptance Criteria: This stage involves establishing the criteria
that the system must meet to be accepted. This should occur early in the
process and be part of the system contract, agreed upon by the customer
and the developer.
2. Plan Acceptance Testing: This stage involves determining the system
resources, time and budget for acceptance testing, as well as establishing
a testing schedule. The plan should also address the required coverage of
the requirements and discuss how risks to the testing process will be
mitigated.
3. Derive Acceptance Tests: Once the acceptance criteria are established,
specific tests are designed to check whether the system meets these
criteria. These tests should aim to cover both the functional and non-
functional characteristics of the system.
4. Run Acceptance Test: The agreed acceptance tests are executed on the
system. This should take place in the actual environment where the
system will be used, but a user testing environment may need to be set
up if this is not practical.
5. Negotiate Test Results: After running the acceptance tests, the
developer and customer must negotiate to decide if the system is good
enough to be put into use.
6. Accept or Reject System: Based on the test results and negotiations, a
decision is made to accept or reject the system. If the system meets the
acceptance criteria, it is accepted; if not, further development may be
required.
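As a rough illustration of steps 1 and 4 above, acceptance criteria can be written as explicit, checkable statements and run against the system. The `login()` stub and the criteria themselves are hypothetical, chosen only to show the shape of the check.

```python
# Sketch: acceptance criteria expressed as named, executable checks
# against a hypothetical login() function.

def login(user, password):
    # Stand-in for the real system's authentication entry point.
    return user == "alice" and password == "secret"

ACCEPTANCE_CRITERIA = {
    "valid users can log in": lambda: login("alice", "secret") is True,
    "wrong password is rejected": lambda: login("alice", "guess") is False,
}

# Run every criterion and decide accept/reject from the results.
results = {name: check() for name, check in ACCEPTANCE_CRITERIA.items()}
accepted = all(results.values())
print("ACCEPT" if accepted else "REJECT", results)
```

Keeping each criterion named makes the negotiation step concrete: a failing entry in `results` points at exactly which agreed criterion the system missed.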
Benefits
• Ensures software meets the business stakeholders’ and users’ needs.
• Reduces the risk of live-environment failures.
• Identifies high-severity defects before production.
• Enhances consumer confidence by delivering a well-functioning product.
• Helps in legal adherence and risk mitigation.
• Improves software quality and performance.
• Provides assurance to stakeholders before deployment.
Black Box Testing
• A black box signifies an object where one has no idea what is inside it.
• The black box testing technique tests the functionality of the application
based on its output. If the output is correct, the test case passes; if the
expected output does not match the obtained output, a bug is reported.
• Black box testing is mostly manual.
• The black box testing methodology is based on this idea: a tester writes
test cases considering not how the code works but the functionality he
wants to test.
• For example, a tester wants to test the authentication system of a website.
The tester tries with the wrong credentials and with the correct credentials.
If the correct responses come back from the API, the authentication
functionality is working correctly.
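The authentication example above can be sketched as a black-box test: the test cases know only the inputs and the expected observable outputs, never the implementation. The `authenticate()` function, the stored user table, and the credentials below are all hypothetical stand-ins.

```python
import hashlib

# Hypothetical system under test; a black-box tester never sees this part.
_USERS = {"alice": hashlib.sha256(b"secret").hexdigest()}

def authenticate(user, password):
    stored = _USERS.get(user)
    return stored == hashlib.sha256(password.encode()).hexdigest()

# Black-box test cases, written purely from observable behaviour:
assert authenticate("alice", "secret") is True    # correct credentials
assert authenticate("alice", "wrong") is False    # wrong password
assert authenticate("bob", "secret") is False     # unknown user
```

Note that the three assertions would stay valid if the implementation switched to a database or a different hash, which is exactly the point of the black-box approach.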
Technique Used
• Decision Table Technique.
• Boundary Value Analysis
• State Transition Technique
• All Pair Testing Technique.
• Cause-Effect Technique.
• Equivalence Partitioning Technique.
• Error Guessing Technique.
• Use Case Technique
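Two of the techniques listed above, equivalence partitioning and boundary value analysis, can be illustrated on a hypothetical `grade()` function that accepts marks from 0 to 100 with a pass mark of 40 (both the function and the thresholds are invented for this sketch).

```python
# Hypothetical function under test.
def grade(marks):
    if not 0 <= marks <= 100:
        raise ValueError("marks out of range")
    return "pass" if marks >= 40 else "fail"

# Equivalence partitioning: four partitions (invalid-low, fail, pass,
# invalid-high), one representative value tested from each.
for value, expected in [(-5, None), (20, "fail"), (70, "pass"), (150, None)]:
    try:
        assert grade(value) == expected
    except ValueError:
        assert expected is None  # invalid partitions must raise

# Boundary value analysis: values at and just around each boundary.
assert grade(0) == "fail"     # lower boundary of valid range
assert grade(39) == "fail"    # just below the pass mark
assert grade(40) == "pass"    # at the pass mark
assert grade(100) == "pass"   # upper boundary of valid range
```

The two techniques complement each other: partitioning keeps the test set small, while boundary analysis targets the off-by-one errors that cluster at partition edges.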
• Advantages
• Black box testing technique requires less knowledge of coding.
• Black box testing is much faster than white box testing.
• Black box testing allows random and exploratory testing as mostly
manual testing is involved.
• Disadvantages
• Writing automated test cases can be difficult with the black box
technique, since the internal structure is unknown.
• In black box testing, since the code itself is not examined, one might
not find redundant code or code with high time and space complexity.
• Integration and data flow testing cannot be done in black box
testing.
White box testing
• White box testing, also referred to as clear box or structural testing,
is a software testing approach that involves examining the internal
code structure and logic of a software application.
• Testers have access to the source code and design details, allowing
them to design test cases based on their understanding of the
software's internal workings.
• This method aims to ensure thorough testing of code paths, branches,
and conditions, uncovering logical errors, code defects, and
vulnerabilities.
• Generally, white box testing covers:
• Internal security holes
• Flow of specific inputs through code
• Expected and unexpected output
• Broken or poorly structured code paths
• Logic of a conditional loop
• Behavior of each of the functions and classes
• How the software handles specific inputs
Techniques of White-box Testing
• Statement testing
• Decision testing/branch testing
• Condition testing
• Multiple condition testing
• Condition determination testing
• Path testing
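Decision (branch) testing from the list above can be sketched as follows: tests are chosen so that every branch of the function under test executes at least once. The `classify()` function is hypothetical, used only to make the branches explicit.

```python
# Hypothetical function with three branches.
def classify(n):
    if n < 0:            # branch A: negative numbers
        return "negative"
    elif n == 0:         # branch B: zero
        return "zero"
    else:                # branch C: positive numbers
        return "positive"

# One test per branch gives 100% branch coverage for this function.
assert classify(-3) == "negative"   # exercises branch A
assert classify(0) == "zero"        # exercises branch B
assert classify(7) == "positive"    # exercises branch C
```

Statement testing would be satisfied by the same three inputs here, but in general branch testing is stronger: it requires every decision outcome to be taken, not merely every line to be run.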
• Advantages:
• Code is streamlined by finding hidden errors.
• White box test cases can be easily automated.
• Testing is more thorough, as all code paths are usually covered.
• Testing can begin early in the SDLC, even if the GUI is not available.
• Disadvantages:
• White box testing can be very complex and expensive.
• Developers, who usually execute white box test cases, often dislike it;
white box testing done by developers may not be detailed and can lead to
production errors.
• White box testing requires skilled resources with a detailed
understanding of programming and implementation.
• White box testing is time-consuming; larger applications take
considerable time to test completely.
Black Box Vs White Box Testing
| Black Box | White Box |
| --- | --- |
| It does not require knowledge of the internal working structure; GUI knowledge is sufficient to perform it. | It is necessary to have knowledge of the internal working structure. |
| Other names for it are closed box, data-driven, and functional testing. | Other names for it are clear box, transparent, and code-based testing. |
| It is difficult to find hidden errors in an application. | It is easier to find hidden errors in an application. |
| The time needed for testing depends on the available functional specification. | The time needed is greater, because test case design requires referring to lengthy code. |
| It can be performed by a tester, a developer, and an end user. | It can be performed by a tester and a developer. |
| It is a less exhaustive procedure. | It is a more exhaustive procedure. |
| Algorithm testing is not used. | Algorithm testing is convenient for it. |
| Flexibility and protectivity are covered under it. | It does not cover flexibility and protectivity. |
