
Unit 8: Software Testing


Software Testing

 Testing is intended to show that a program does what it
is intended to do and to discover program defects.
 Goals of Testing:
 To demonstrate to the developer and the customer that
the software meets its requirements. For custom
software, this means that there should be at least one
test for every requirement in the requirements
document. For generic software products, it means that
there should be tests for all of the system features, plus
combinations of these features.
 To discover situations in which the behavior of the software
is incorrect, undesirable, or does not conform to its
specification. These are a consequence of software
defects. Defect testing is concerned with rooting out
undesirable system behavior such as system crashes,
unwanted interactions with other systems, incorrect
computations, and the corruption of data.
Software Testing

 Validation: Are we building the right product?
 Verification: Are we building the product right?
Software Testing
 Verification and validation processes are concerned
with checking that software being developed meets
its specification and delivers the functionality
expected by the people paying for the software.
 The aim of verification is to check that the software
meets its stated functional and non-functional
requirements. Validation, however, is a more
general process. The aim of validation is to ensure
that the software meets the customer’s
expectations.
 The ultimate goal of verification and validation
processes is to establish confidence that the
software system is ‘fit for purpose’.
 The system must be good enough for its
intended use.
System confidence depends upon
 Software purpose The more critical the software,
the more important that it is reliable.
 User expectations Because of their
experiences with buggy, unreliable software,
many users have low expectations of software
quality. They are not surprised when their
software fails.
 Marketing environment When a system is
marketed, the sellers of the system must take
into account competing products, the price that
customers are willing to pay for a system, and
the required schedule for delivering that system.
Advantages of Inspection over Testing
 During testing, errors can mask (hide) other
errors. When an error leads to unexpected
outputs, you can never be sure if later output
anomalies are due to a new error or side effects
of the original error.
 Because inspection is a static process, you don’t have
to be concerned with interactions between errors.
 Incomplete versions of a system can be
inspected without additional costs.
 As well as searching for program defects, an
inspection can also consider broader quality
attributes of a program, such as compliance
with standards, portability, and maintainability.
Three Stages of Testing
 Development Testing, where the system is tested
during development to discover bugs and defects.
System designers and programmers are likely to be
involved in the testing process.
 Release Testing, where a separate testing team
tests a complete version of the system before it is
released to users. The aim of release testing is to
check that the system meets the requirements of
system stakeholders.
 User Testing, where users or potential users of a
system test the system in their own environment.
Acceptance testing is one type of user testing
where the customer formally tests a system to
decide if it should be accepted from the system
supplier or if further development is required.
Development Testing
 Development testing includes all testing activities
that are carried out by the team developing the
system.
 The tester of the software is usually the
programmer who developed that software, although
this is not always the case.
 Some development processes use
programmer/tester pairs where each programmer
has an associated tester who develops tests and
assists with the testing process.
 For critical systems, a more formal process may be
used, with a separate testing group within the
development team.
Unit Testing
 Unit testing is the process of testing program
components, such as methods or object classes.
 Individual functions or methods are the simplest
type of component.
 Unit Testing should:
◦ Test all operations associated with the object.
◦ Set and check the value of all attributes
associated with the object.
◦ Put the object into all possible states. This means
that you should simulate all events that cause a
state change.
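As a sketch of these guidelines, the `unittest` example below exercises a small hypothetical `Switch` class (the class, its `state` attribute, and its `press` event are assumptions for illustration, not from the text): it tests its operation, checks its attribute, and drives the object through both of its possible states.

```python
import unittest

# Hypothetical class under test: a two-state switch
# (an illustrative assumption, not a class from the text).
class Switch:
    def __init__(self):
        self.state = "OFF"            # attribute to set and check

    def press(self):                  # event that causes a state change
        self.state = "ON" if self.state == "OFF" else "OFF"

class SwitchTest(unittest.TestCase):
    def test_initial_attribute_value(self):
        # Check the value of the attribute after construction.
        self.assertEqual(Switch().state, "OFF")

    def test_all_states_reached(self):
        # Simulate every event that causes a state change, putting
        # the object into all possible states (OFF -> ON -> OFF).
        s = Switch()
        s.press()
        self.assertEqual(s.state, "ON")
        s.press()
        self.assertEqual(s.state, "OFF")

# Run the tests without exiting the interpreter.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SwitchTest)
result = unittest.TextTestRunner().run(suite)
```

For a class with more states, the same pattern extends naturally: one test method per event, asserting the state reached after each transition.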
Choosing Unit Test Cases
 The test cases should show that, when used as
expected, the component that you are testing does
what it is supposed to do.
 If there are defects in the component, these should
be revealed by test cases.
 Strategies to choose Test Cases
 Partition testing, where you identify groups of inputs
that have common characteristics and should be
processed in the same way. You should choose
tests from within each of these groups.
 Guideline-based testing, where you use testing
guidelines to choose test cases. These guidelines
reflect previous experience of the kinds of errors
that programmers often make when developing
components.
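A minimal sketch of partition testing, assuming a hypothetical validator that accepts whole numbers from 4 to 10 inclusive (the function and its range are assumptions, not from the text): inputs fall into three partitions that are processed in the same way, and tests are chosen from within each group, including the boundary values.

```python
# Hypothetical function under test: accepts integers from 4 to 10.
def accept(n):
    return 4 <= n <= 10

# Partition testing: pick tests from within each input group,
# including values at the partition boundaries.
below = [3]            # partition: less than 4 (boundary value)
inside = [4, 7, 10]    # partition: valid range (boundaries + midpoint)
above = [11]           # partition: greater than 10 (boundary value)

assert all(not accept(n) for n in below)
assert all(accept(n) for n in inside)
assert all(not accept(n) for n in above)
```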
Guidelines for Testing Sequences
 Test software with sequences that have only a
single value. Programmers naturally think of
sequences as made up of several values and
sometimes they embed this assumption in their
programs. Consequently, if presented with a single
value sequence, a program may not work properly.
 Use different sequences of different sizes in
different tests. This decreases the chances that a
program with defects will produce a correct output
simply because of some incidental characteristic of
the input.
 Derive tests so that the first, middle, and last
elements of the sequence are accessed. This
approach reveals problems at partition
boundaries.
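The three guidelines can be sketched against a hypothetical sequence-processing function (an assumed example, not from the text) that returns the largest element:

```python
# Hypothetical component under test: returns the largest element.
def largest(seq):
    result = seq[0]
    for x in seq[1:]:
        if x > result:
            result = x
    return result

# Guideline 1: a sequence that has only a single value.
assert largest([7]) == 7

# Guideline 2: sequences of different sizes in different tests.
assert largest([2, 9]) == 9
assert largest([3, 1, 4, 1, 5, 9, 2, 6]) == 9

# Guideline 3: place the interesting element at the first, middle,
# and last positions so partition boundaries are accessed.
assert largest([9, 1, 2]) == 9
assert largest([1, 9, 2]) == 9
assert largest([1, 2, 9]) == 9
```

A defective version that started the loop at index 1 (skipping the first element) would pass many random tests but fail the single-value and first-position cases above, which is exactly what these guidelines are designed to catch.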
Whittaker’s Guidelines for Test Case Design
 Choose inputs that force the system to generate
all error messages;
 Design inputs that cause input buffers to
overflow;
 Repeat the same input or series of inputs
numerous times;
 Force invalid outputs to be generated;
 Force computation results to be too large or too
small.
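The first two of Whittaker's guidelines can be sketched with a hypothetical input validator (the function and its error messages are assumptions for illustration): the tests deliberately choose inputs that force each error message, and repeat the same input many times.

```python
# Hypothetical input validator: parses an age field and rejects
# bad input with distinct error messages (an assumed example).
def parse_age(text):
    if not text.isdigit():
        raise ValueError("age must be a number")
    age = int(text)
    if age > 150:
        raise ValueError("age out of range")
    return age

# Choose inputs that force the system to generate each error message.
errors = []
for bad_input in ["abc", "999"]:
    try:
        parse_age(bad_input)
    except ValueError as e:
        errors.append(str(e))
assert errors == ["age must be a number", "age out of range"]

# Repeat the same input numerous times; the result must not drift.
assert all(parse_age("42") == 42 for _ in range(1000))
```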
Component Testing
 Software components are often composite
components that are made up of several interacting
objects.
 Different types of Component Interfaces
 Parameter interfaces These are interfaces in which data
or sometimes function references are passed from one
component to another.
 Shared memory interfaces These are interfaces in
which a block of memory is shared between
components.
 Procedural interfaces These are interfaces in which one
component encapsulates a set of procedures that can
be called by other components.
 Message passing interfaces These are interfaces in
which one component requests a service from another
component.
Guidelines for Interface Testing
 Examine the code to be tested and explicitly list each
call to an external component. Design a set of tests in
which the values of the parameters to the external
components are at the extreme ends of their ranges.
 Where pointers are passed across an interface, always
test the interface with null pointer parameters.
 Where a component is called through a procedural
interface, design tests that deliberately cause the
component to fail.
 Use stress testing in message passing systems. This
means that you should design tests that generate many
more messages than are likely to occur in practice.
 Where several components interact through shared
memory, design tests that vary the order in which these
components are activated. These tests may reveal
implicit assumptions.
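Three of these guidelines can be sketched against a hypothetical component called through a procedural interface (the function and its error contract are assumptions, not from the text):

```python
# Hypothetical component with a procedural interface.
def average(values):
    if values is None:
        raise TypeError("values must not be None")
    if len(values) == 0:
        raise ValueError("values must not be empty")
    return sum(values) / len(values)

# Where pointers (references) cross an interface, test with None.
try:
    average(None)
    assert False, "expected TypeError"
except TypeError:
    pass

# Design tests that deliberately cause the component to fail.
try:
    average([])
    assert False, "expected ValueError"
except ValueError:
    pass

# Parameter values at the extreme ends of their ranges.
assert average([10**18, -10**18]) == 0
```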
System Testing
 System testing during development involves
integrating components to create a version of the
system and then testing the integrated system.
 System testing checks that components are
compatible, interact correctly and transfer the right
data at the right time across their interfaces.
 During system testing, reusable components that
have been separately developed and off-the-shelf
systems may be integrated with newly developed
components.
 Components developed by different team members or
groups may be integrated at this stage. System
testing is a collective rather than an individual
process. In some companies, system testing may
involve a separate testing team with no involvement
from designers and programmers.
System Testing
 System testing should focus on testing the
interactions between the components and objects
that make up a system.
 Because of its focus on interactions, use case–
based testing is an effective approach to system
testing.
 If you have developed a sequence diagram to model
the use case implementation, you can see the
objects or components that are involved in the
interaction.
 This interaction testing should discover those
component bugs that are only revealed when a
component is used by other components in the
system.
Test Driven Development
 Test-driven development (TDD) is an approach to
program development in which you interleave
testing and code development.
 Test-driven development was introduced as part of
agile methods such as Extreme Programming.
However, it can also be used in plan-driven
development processes.
The TDD Process
 You start by identifying the increment of functionality that
is required. This should normally be small and
implementable in a few lines of code.
 You write a test for this functionality and implement it
as an automated test. This means that the test can be
executed and reports whether it has passed or failed.
 You then run the test, along with all other tests that have
been implemented. Initially, you have not implemented
the functionality so the new test will fail. This is
deliberate as it shows that the test adds something to
the test set.
 You then implement the functionality and re-run the test.
This may involve refactoring existing code to improve it
and adding new code to what’s already there.
 Once all tests run successfully, you move on to
implementing the next chunk of functionality.
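The cycle above can be sketched in miniature; the increment (a Celsius-to-Fahrenheit conversion) and all names are assumptions chosen for illustration:

```python
# Steps 1-2: identify a small increment of functionality and write
# its automated test BEFORE any implementation exists.
def test_conversion():
    assert to_fahrenheit(0) == 32     # freezing point
    assert to_fahrenheit(100) == 212  # boiling point

# Step 3: calling test_conversion() at this point raises NameError,
# because to_fahrenheit does not exist yet -- the deliberate initial
# failure that shows the new test adds something to the test set.

# Step 4: implement the functionality ...
def to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# ... and re-run the test; once it passes, move on to implementing
# the next chunk of functionality.
test_conversion()
```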
Advantages of Test Driven Development
 Code coverage In principle, every code segment that
you write should have at least one associated test.
Therefore, you can be confident that all of the code in
the system has actually been executed. Code is tested
as it is written, so defects are discovered early in the
development process.
 Regression testing A test suite is developed
incrementally as a program is developed. You can
always run regression tests to check that changes to the
program have not introduced new bugs.
 Simplified debugging When a test fails, it should be
obvious where the problem lies. The newly written code
needs to be checked and modified. You do not need to
use debugging tools to locate the problem.
 System documentation The tests themselves act as a
form of documentation that describe what the code
should be doing.
Release Testing
 Release testing is the process of testing a particular
release of a system that is intended for use outside of
the development team.
 In a complex project, however, the release could be for
other teams that are developing related systems.
 Two important distinctions between release testing and
system testing.
◦ A separate team that has not been involved in the
system development should be responsible for
release testing.
◦ System testing by the development team should focus
on discovering bugs in the system (defect testing).
The objective of release testing is to check that the
system meets its requirements and is good enough for
external use (validation testing).
Release Testing
 The primary goal of the release testing process is to
convince the supplier of the system that it is good
enough for use.
 Release testing, therefore, has to show that the system
delivers its specified functionality, performance, and
dependability, and that it does not fail during normal
use.
 Release testing is usually a black-box testing process
where tests are derived from the system specification.
 The system is treated as a black box whose behavior
can only be determined by studying its inputs and the
related outputs.
 Another name for this is ‘functional testing’, so-called
because the tester is only concerned with functionality
and not the implementation of the software.
Requirements-Based Testing
 A general principle of good requirements engineering
practice is that requirements should be testable; that is,
the requirement should be written so that a test can be
designed for that requirement.
 Requirements-based testing is a systematic approach
to test case design where you consider each
requirement and derive a set of tests for it.
 Requirements-based testing is validation rather than
defect testing—you are trying to demonstrate that the
system has properly implemented its requirements.
 For example, consider two testable requirements from the
MHC-PMS patient management system:
◦ If a patient is known to be allergic to any particular
medication, then prescription of that medication shall
result in a warning message being issued to the system
user.
◦ If a prescriber chooses to ignore an allergy warning,
they shall provide a reason why this has been ignored.
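Tests derived from the two allergy requirements above might look like the sketch below; the `PrescriptionSystem` class and its method names are assumptions for illustration, not the actual MHC-PMS interface:

```python
# Hypothetical prescription component (names are assumptions).
class PrescriptionSystem:
    def __init__(self, allergies):
        self.allergies = set(allergies)
        self.warnings = []

    def prescribe(self, medication, override_reason=None):
        if medication in self.allergies:
            self.warnings.append(f"patient is allergic to {medication}")
            if override_reason is None:
                raise ValueError("allergy warning requires a reason")
        return "prescribed"

# Test for requirement 1: a known allergy shall produce a warning.
system = PrescriptionSystem(allergies=["penicillin"])
try:
    system.prescribe("penicillin")
    assert False, "expected the prescription to be blocked"
except ValueError:
    pass
assert len(system.warnings) == 1

# Test for requirement 2: ignoring the warning requires a reason.
assert system.prescribe(
    "penicillin", override_reason="no alternative available"
) == "prescribed"
```

Note that each test maps back to exactly one requirement, which is what makes requirements-based testing a validation activity: a passing test is evidence that the requirement has been implemented.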
Scenario Testing
 Scenario testing is an approach to release testing
where you devise typical scenarios of use and use
these to develop test cases for the system.
 A scenario is a story that describes one way in
which the system might be used.
 Scenarios should be realistic and real system users
should be able to relate to them.
 Researchers suggest that a scenario test should be a
narrative story that is credible and fairly complex.
 Testers should relate to the scenario and believe that
it is important that the system passes the test.
Scenario Testing for MHC-PMS
1. Authentication by logging on to the system.
2. Downloading and uploading of specified
patient records to a laptop.
3. Home visit scheduling.
4. Encryption and decryption of patient records
on a mobile device.
5. Record retrieval and modification.
6. Links with the drugs database that maintains
side-effect information.
7. The system for call prompting.
Performance Testing
 Performance tests have to be designed to ensure
that the system can process its intended load. This
usually involves running a series of tests where you
increase the load until the system performance
becomes unacceptable.
 Performance testing is concerned both with
demonstrating that the system meets its
requirements and discovering problems and defects
in the system
 To discover problems and defects, performance
testing often means stressing the system by making
demands that are outside the design limits of the
software. This is known as ‘stress testing’.
 Stress testing is particularly relevant to distributed
systems based on a network of processors.
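The load-stepping idea can be sketched as follows; the `transaction` function is an assumed stand-in for real system work, and a real performance test would keep increasing the load until the measured time exceeded an agreed limit:

```python
import time

# Hypothetical operation standing in for a real system transaction.
def transaction(n):
    return sum(i * i for i in range(n))

# Run a series of tests, increasing the load each time, and record
# how the response time grows with load.
for load in (1_000, 10_000, 100_000):
    start = time.perf_counter()
    transaction(load)
    elapsed = time.perf_counter() - start
    print(f"load={load:>7}  time={elapsed:.6f}s")
```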
User Testing
 User or customer testing is a stage in the testing
process in which users or customers provide input
and advice on system testing.
 User testing is essential, even when comprehensive
system and release testing have been carried out.
 It is practically impossible for a system developer to
replicate the system’s working environment, as
tests in the developer’s environment are inevitably
artificial.
 For example, a system that is intended for use in a
hospital is used in a clinical environment where
other things are going on, such as patient
emergencies, conversations with relatives, etc.
Different types of User Testing
 Alpha testing, where users of the software
work with the development team to test the
software at the developer’s site.
 Beta testing, where a release of the software is
made available to users to allow them to
experiment and to raise problems that they
discover with the system developers.
 Acceptance testing, where customers test a
system to decide whether or not it is ready to be
accepted from the system developers and
deployed in the customer environment.
Process of Acceptance Testing
