Module 5-Implementation and Testing

This module covers the design and implementation stages of software engineering, emphasizing the importance of distinguishing between design and implementation processes, understanding architectural designs, and recognizing design patterns. It discusses the object-oriented design process, the significance of system context, and various design models, including sequence and state diagrams. Additionally, it addresses implementation issues such as reuse, configuration management, and host-target development, along with the use of design patterns like the Observer pattern to enhance software design.

Uploaded by

Charles Uy
Copyright
© All Rights Reserved

IMPLEMENTATION AND

TESTING
MODULE 5
OBJECTIVES
Upon completion of this module, students are expected to:

• Distinguish between the design and implementation processes.
• Identify architectural designs.
• Identify design patterns and their elements.
• Understand the importance of software testing.
• Compare and contrast software testing approaches.
DESIGN AND
IMPLEMENTATION
Design and implementation
• Software design and implementation is the stage in the software
engineering process at which an executable software system is
developed.
• Software design and implementation activities are invariably interleaved.
• Software design is a creative activity in which you identify software
components and their relationships, based on a customer’s requirements.
• Implementation is the process of realizing the design as a program.
Build or buy
• In a wide range of domains, it is now possible to buy off-the-shelf
systems (COTS) that can be adapted and tailored to the users’
requirements.
• For example, if you want to implement a medical records system, you can
buy a package that is already used in hospitals. It can be cheaper and
faster to use this approach rather than developing a system in a
conventional programming language.
• When you develop an application in this way, the design process
becomes concerned with how to use the configuration features of
that system to deliver the system requirements.
An object-oriented design process
• Structured object-oriented design processes involve developing a
number of different system models.
• They require a lot of effort for development and maintenance of
these models and, for small systems, this may not be cost-
effective.
• However, for large systems developed by different groups design
models are an important communication mechanism.
Process stages
• There are a variety of different object-oriented design processes
that depend on the organization using the process.
• Common activities in these processes include:
• Define the context and modes of use of the system;
• Design the system architecture;
• Identify the principal system objects;
• Develop design models;
• Specify object interfaces.
• The process is illustrated here using a design for a wilderness weather
station.
System context and interactions
• Understanding the relationships between the software that is
being designed and its external environment is essential for
deciding how to provide the required system functionality and
how to structure the system to communicate with its
environment.
• Understanding of the context also lets you establish the
boundaries of the system. Setting the system boundaries helps you
decide what features are implemented in the system being
designed and what features are in other associated systems.
Context and interaction models
• A system context model is a structural model that demonstrates
the other systems in the environment of the system being
developed.
• An interaction model is a dynamic model that shows how the
system interacts with its environment as it is used.
System context for the weather station
Weather station use cases
Use case description—Report weather

System: Weather station
Use case: Report weather
Actors: Weather information system, Weather station
Description: The weather station sends a summary of the weather data that has been collected from the instruments in the collection period to the weather information system. The data sent are the maximum, minimum, and average ground and air temperatures; the maximum, minimum, and average air pressures; the maximum, minimum, and average wind speeds; the total rainfall; and the wind direction as sampled at five-minute intervals.
Stimulus: The weather information system establishes a satellite communication link with the weather station and requests transmission of the data.
Response: The summarized data is sent to the weather information system.
Comments: Weather stations are usually asked to report once per hour, but this frequency may differ from one station to another and may be modified in the future.
Architectural design
• Once interactions between the system and its environment have
been understood, you use this information for designing the
system architecture.
• You identify the major components that make up the system and
their interactions, and then may organize the components using
an architectural pattern such as a layered or client-server model.
• The weather station is composed of independent subsystems that
communicate by broadcasting messages on a common
infrastructure.
High-level architecture of the weather
station
Architecture of data collection system
Object class identification
• Identifying object classes is often a difficult part of object-oriented
design.
• There is no 'magic formula' for object identification. It relies on the
skill, experience and domain knowledge of system designers.
• Object identification is an iterative process. You are unlikely to get
it right the first time.
Approaches to identification
• Use a grammatical approach based on a natural language description of the
system (used in the HOOD method).
• Base the identification on tangible things in the application domain.
• Use a behavioural approach and identify objects based on what participates
in what behaviour.
• Use a scenario-based analysis. The objects, attributes and methods in each
scenario are identified.
Weather station description
A weather station is a package of software controlled instruments
which collects data, performs some data processing and transmits
this data for further processing. The instruments include air and
ground thermometers, an anemometer, a wind vane, a barometer
and a rain gauge. Data is collected periodically.

When a command is issued to transmit the weather data, the
weather station processes and summarises the collected data.
The summarised data is transmitted to the mapping computer
when a request is received.
Weather station object classes
• Object class identification in the weather station system may be based on
the tangible hardware and data in the system:
• Ground thermometer, Anemometer, Barometer
• Application domain objects that are ‘hardware’ objects related to the instruments in the
system.
• Weather station
• The basic interface of the weather station to its environment. It therefore reflects the
interactions identified in the use-case model.
• Weather data
• Encapsulates the summarized data from the instruments.
Weather station object classes
Design models
• Design models show the objects and object classes and
relationships between these entities.
• Static models describe the static structure of the system in terms
of object classes and relationships.
• Dynamic models describe the dynamic interactions between
objects.
Examples of design models
• Subsystem models that show logical groupings of objects into coherent
subsystems.
• Sequence models that show the sequence of object interactions.
• State machine models that show how individual objects change their state in
response to events.
• Other models include use-case models, aggregation models, generalisation
models, etc.
Subsystem models
• Shows how the design is organised into logically related groups of
objects.
• In the UML, these are shown using packages - an encapsulation
construct. This is a logical model. The actual organisation of
objects in the system may be different.
Sequence models
• Sequence models show the sequence of object interactions that
take place
• Objects are arranged horizontally across the top;
• Time is represented vertically so models are read top to bottom;
• Interactions are represented by labelled arrows. Different styles of arrow
represent different types of interaction;
• A thin rectangle in an object lifeline represents the time when the object
is the controlling object in the system.
Sequence diagram describing data
collection
State diagrams
• State diagrams are used to show how objects respond to different service
requests and the state transitions triggered by these requests.
• State diagrams are useful high-level models of a system or an
object’s run-time behavior.
• You don’t usually need a state diagram for all of the objects in the
system. Many of the objects in a system are relatively simple and a
state model adds unnecessary detail to the design.
Weather station state diagram
Interface specification
• Object interfaces have to be specified so that the objects and other
components can be designed in parallel.
• Designers should avoid designing the interface representation but should hide
this in the object itself.
• Objects may have several interfaces which are viewpoints on the methods
provided.
• The UML uses class diagrams for interface specification but Java may also be
used.
Weather station interfaces
Design patterns
• A design pattern is a way of reusing abstract knowledge about a
problem and its solution.
• A pattern is a description of the problem and the essence of its
solution.
• It should be sufficiently abstract to be reused in different settings.
• Pattern descriptions usually make use of object-oriented
characteristics such as inheritance and polymorphism.
Pattern elements
• Name
• A meaningful pattern identifier.
• Problem description.
• Solution description.
• Not a concrete design but a template for a design solution that can be
instantiated in different ways.
• Consequences
• The results and trade-offs of applying the pattern.
The Observer pattern

• Name
• Observer.
• Description
• Separates the display of object state from the object itself.
• Problem description
• Used when multiple displays of state are needed.
• Solution description
• See slide with UML description.
• Consequences
• Optimisations to enhance display performance are impractical.
The Observer pattern (1)

Pattern name: Observer
Description: Separates the display of the state of an object from the object itself and allows alternative displays to be provided. When the object state changes, all displays are automatically notified and updated to reflect the change.
Problem description: In many situations, you have to provide multiple displays of state information, such as a graphical display and a tabular display. Not all of these may be known when the information is specified. All alternative presentations should support interaction and, when the state is changed, all displays must be updated. This pattern may be used in all situations where more than one display format for state information is required and where it is not necessary for the object that maintains the state information to know about the specific display formats used.
The Observer pattern (2)

Pattern name: Observer
Solution description: This involves two abstract objects, Subject and Observer, and two concrete objects, ConcreteSubject and ConcreteObserver, which inherit the attributes of the related abstract objects. The abstract objects include general operations that are applicable in all situations. The state to be displayed is maintained in ConcreteSubject, which inherits operations from Subject allowing it to add and remove Observers (each observer corresponds to a display) and to issue a notification when the state has changed. The ConcreteObserver maintains a copy of the state of ConcreteSubject and implements the Update() interface of Observer that allows these copies to be kept in step. The ConcreteObserver automatically displays the state and reflects changes whenever the state is updated.
Consequences: The subject only knows the abstract Observer and does not know details of the concrete class. Therefore there is minimal coupling between these objects. Because of this lack of knowledge, optimizations that enhance display performance are impractical. Changes to the subject may cause a set of linked updates to observers to be generated, some of which may not be necessary.
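The Subject/Observer structure described above can be sketched in code. This is a minimal illustration under stated assumptions: the class roles follow the pattern description, while the weather-display scenario and all names such as WeatherData and TextDisplay are invented for the example.

```python
# Minimal sketch of the Observer pattern. Class roles follow the pattern
# description above; the weather example itself is invented.

class Observer:
    """Abstract observer: concrete displays implement update()."""
    def update(self, subject):
        raise NotImplementedError

class Subject:
    """Abstract subject: manages observers and issues notifications."""
    def __init__(self):
        self._observers = []
    def attach(self, observer):
        self._observers.append(observer)
    def detach(self, observer):
        self._observers.remove(observer)
    def notify(self):
        for obs in self._observers:
            obs.update(self)

class WeatherData(Subject):
    """Concrete subject: maintains the state to be displayed."""
    def __init__(self):
        super().__init__()
        self.temperature = 0
    def set_temperature(self, value):
        self.temperature = value
        self.notify()  # all registered displays are updated

class TextDisplay(Observer):
    """Concrete observer: keeps its own copy of the subject's state."""
    def __init__(self):
        self.last_reading = None
    def update(self, subject):
        self.last_reading = subject.temperature

weather = WeatherData()
display_a = TextDisplay()
display_b = TextDisplay()
weather.attach(display_a)
weather.attach(display_b)
weather.set_temperature(21)  # both displays are notified
```

Note that WeatherData knows only the abstract Observer interface, so displays can be added or removed without changing the subject, which is exactly the minimal coupling described in the consequences.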
Multiple displays using the Observer
pattern
A UML model of the Observer pattern
Design problems
• To use patterns in your design, you need to recognize that any
design problem you are facing may have an associated pattern
that can be applied.
• Tell several objects that the state of some other object has changed
(Observer pattern).
• Tidy up the interfaces to a number of related objects that have often been
developed incrementally (Façade pattern).
• Provide a standard way of accessing the elements in a collection,
irrespective of how that collection is implemented (Iterator pattern).
• Allow for the possibility of extending the functionality of an existing class
at run-time (Decorator pattern).
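To make one more of these patterns concrete, the Iterator idea from the list above can be sketched as follows. The SensorLog class and its dict-based storage are invented for illustration; the point is that clients use a standard traversal interface without knowing how the collection is implemented.

```python
# Sketch of the Iterator pattern: clients access elements through the
# standard iteration protocol, irrespective of the collection's internal
# representation. SensorLog is a hypothetical example class.

class SensorLog:
    """Collection whose internal storage (a dict keyed by timestamp)
    is hidden from clients."""
    def __init__(self):
        self._readings = {}

    def add(self, timestamp, value):
        self._readings[timestamp] = value

    def __iter__(self):
        # Clients always iterate in timestamp order, even though the
        # underlying dict could be replaced by a list or a tree without
        # affecting any client code.
        for ts in sorted(self._readings):
            yield self._readings[ts]

log = SensorLog()
log.add(10, 3.2)
log.add(5, 1.1)
values = list(log)  # standard iteration, independent of storage
```

If the storage were later changed to a sorted list, only `__iter__` would change; every client loop would keep working unchanged.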
Implementation issues
• Focus here is not on programming, although this is obviously
important, but on other implementation issues that are often not
covered in programming texts:
• Reuse: Most modern software is constructed by reusing existing
components or systems. When you are developing software, you should
make as much use as possible of existing code.
• Configuration management: During the development process, you have to
keep track of the many different versions of each software component in
a configuration management system.
• Host-target development: Production software does not usually execute
on the same computer as the software development environment. Rather,
you develop it on one computer (the host system) and execute it on a
separate computer (the target system).
Reuse
• From the 1960s to the 1990s, most new software was developed
from scratch, by writing all code in a high-level programming
language.
• The only significant reuse of software was the reuse of functions and
objects in programming language libraries.
• Costs and schedule pressure meant that this approach became
increasingly unviable, especially for commercial and Internet-based
systems.
• An approach to development based around the reuse of existing
software emerged and is now generally used for business and
scientific software.
Reuse levels
• The abstraction level
• At this level, you don’t reuse software directly but use knowledge of
successful abstractions in the design of your software.
• The object level
• At this level, you directly reuse objects from a library rather than writing
the code yourself.
• The component level
• Components are collections of objects and object classes that you reuse in
application systems.
• The system level
• At this level, you reuse entire application systems.
Reuse costs
• The costs of the time spent in looking for software to reuse and
assessing whether or not it meets your needs.
• Where applicable, the costs of buying the reusable software. For
large off-the-shelf systems, these costs can be very high.
• The costs of adapting and configuring the reusable software
components or systems to reflect the requirements of the system
that you are developing.
• The costs of integrating reusable software elements with each
other (if you are using software from different sources) and with
the new code that you have developed.
Configuration management
• Configuration management is the name given to the general
process of managing a changing software system.
• The aim of configuration management is to support the system
integration process so that all developers can access the project
code and documents in a controlled way, find out what changes
have been made, and compile and link components to create a
system.
Configuration management activities
• Version management, where support is provided to keep track of the different
versions of software components. Version management systems include facilities to
coordinate development by several programmers.
• System integration, where support is provided to help developers define what
versions of components are used to create each version of a system. This description
is then used to build a system automatically by compiling and linking the required
components.
• Problem tracking, where support is provided to allow users to report bugs and other
problems, and to allow all developers to see who is working on these problems and
when they are fixed.
Host-target development
• Most software is developed on one computer (the host), but runs
on a separate machine (the target).
• More generally, we can talk about a development platform and an
execution platform.
• A platform is more than just hardware.
• It includes the installed operating system plus other supporting software
such as a database management system or, for development platforms, an
interactive development environment.
• The development platform usually has different installed software
from the execution platform; these platforms may also have different
architectures.
Development platform tools
• An integrated compiler and syntax-directed editing system that
allows you to create, edit and compile code.
• A language debugging system.
• Graphical editing tools, such as tools to edit UML models.
• Testing tools, such as unit testing frameworks that can automatically run a
set of tests on a new version of a program.
• Project support tools that help you organize the code for different
development projects.
Integrated development environments
(IDEs)
• Software development tools are often grouped to create an
integrated development environment (IDE).
• An IDE is a set of software tools that supports different aspects of
software development, within some common framework and user
interface.
• IDEs are created to support development in a specific
programming language such as Java. The language IDE may be
developed specially, or may be an instantiation of a general-
purpose IDE, with specific language-support tools.
Component/system deployment
factors
• If a component is designed for a specific hardware architecture, or relies on some other software
system, it must obviously be deployed on a platform that provides the required hardware and
software support.
• High availability systems may require components to be deployed on more than one platform.
This means that, in the event of platform failure, an alternative implementation of the
component is available.
• If there is a high level of communications traffic between components, it usually makes sense to
deploy them on the same platform or on platforms that are physically close to one other. This
reduces the delay between the time a message is sent by one component and received by
another.
Open source development
• Open source development is an approach to software development in
which the source code of a software system is published and
volunteers are invited to participate in the development process.
• Its roots are in the Free Software Foundation (www.fsf.org), which
advocates that source code should not be proprietary but rather
should always be available for users to examine and modify as they
wish.
• Open source software extended this idea by using the Internet to
recruit a much larger population of volunteer developers. Many of
them are also users of the code.
Open source systems
• The best-known open source product is, of course, the Linux
operating system, which is widely used as a server system and,
increasingly, as a desktop environment.
• Other important open source products are Java, the Apache web
server and the MySQL database management system.
Open source issues
• Should the product that is being developed make use of open
source components?
• Should an open source approach be used for the software’s
development?
Open source business
• More and more product companies are using an open source
approach to development.
• Their business model is not reliant on selling a software product
but on selling support for that product.
• They believe that involving the open source community will allow
software to be developed more cheaply, more quickly and will
create a community of users for the software.
Open source licensing
• A fundamental principle of open-source development is that
source code should be freely available. However, this does not mean
that anyone can do as they wish with that code.
• Legally, the developer of the code (either a company or an individual) still
owns the code. They can place restrictions on how it is used by including
legally binding conditions in an open source software license.
• Some open source developers believe that if an open source component
is used to develop a new system, then that system should also be open
source.
• Others are willing to allow their code to be used without this restriction.
The developed systems may be proprietary and sold as closed source
systems.
License models
• The GNU General Public License (GPL). This is a so-called ‘reciprocal’ license that
means that if you use open source software that is licensed under the GPL license,
then you must make that software open source.
• The GNU Lesser General Public License (LGPL) is a variant of the GPL license where
you can write components that link to open source code without having to publish
the source of these components.
• The Berkeley Software Distribution (BSD) License. This is a non-reciprocal license,
which means you are not obliged to re-publish any changes or modifications made to
open source code. You can include the code in proprietary systems that are sold.
License management
• Establish a system for maintaining information about open-source
components that are downloaded and used.
• Be aware of the different types of licenses and understand how a
component is licensed before it is used.
• Be aware of evolution pathways for components.
• Educate people about open source.
• Have auditing systems in place.
• Participate in the open source community.
SOFTWARE TESTING
Program testing
• Testing is intended to show that a program does what it is intended to do and to
discover program defects before it is put into use.
• When you test software, you execute a program using artificial data.
• You check the results of the test run for errors, anomalies or information about the
program’s non-functional attributes.
• Testing can reveal the presence of errors, but NOT their absence.
• Testing is part of a more general verification and validation process, which also
includes static validation techniques.
Program testing goals
• To demonstrate to the developer and the customer that the
software meets its requirements.
• For custom software, this means that there should be at least one test for
every requirement in the requirements document. For generic software
products, it means that there should be tests for all of the system
features, plus combinations of these features, that will be incorporated in
the product release.
• To discover situations in which the behavior of the software is
incorrect, undesirable or does not conform to its specification.
• Defect testing is concerned with rooting out undesirable system behavior
such as system crashes, unwanted interactions with other systems,
incorrect computations and data corruption.
Validation and defect testing
• The first goal leads to validation testing
• You expect the system to perform correctly using a given set of test cases
that reflect the system’s expected use.
• The second goal leads to defect testing
• The test cases are designed to expose defects. The test cases in defect
testing can be deliberately obscure and need not reflect how the system is
normally used.
Testing process goals
• Validation testing
• To demonstrate to the developer and the system customer that the software meets its
requirements
• A successful test shows that the system operates as intended.
• Defect testing
• To discover faults or defects in the software where its behaviour is incorrect or not in
conformance with its specification
• A successful test is a test that makes the system perform incorrectly and so exposes a
defect in the system.
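The distinction between validation testing and defect testing can be shown with a trivial example. The `average` function below and both tests are invented purely for illustration: the validation test reflects expected use, while the defect test probes a deliberately obscure input.

```python
# Illustration of validation testing vs defect testing on a made-up
# average() function. Function and tests are invented examples.

def average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    if not values:
        raise ValueError("average() requires at least one value")
    return sum(values) / len(values)

# Validation test: reflects the expected use of the component.
# A successful run shows the system operates as intended.
assert average([2, 4, 6]) == 4

# Defect test: a deliberately obscure input chosen to expose faults.
# Here we check the empty-list case raises a meaningful error rather
# than crashing with an unhelpful ZeroDivisionError.
try:
    average([])
    handled = False
except ValueError:
    handled = True
assert handled
```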
An input-output model of program
testing
Verification vs validation

• Verification:
"Are we building the product right”.
• The software should conform to its specification.
• Validation:
"Are we building the right product”.
• The software should do what the user really requires.
V & V confidence
• Aim of V & V is to establish confidence that the system is ‘fit for
purpose’.
• Depends on system’s purpose, user expectations and marketing
environment
• Software purpose
• The level of confidence depends on how critical the software is to an organisation.
• User expectations
• Users may have low expectations of certain kinds of software.
• Marketing environment
• Getting a product to market early may be more important than finding defects in the
program.
Inspections and testing
• Software inspections: Concerned with analysis of
the static system representation to discover
problems (static verification)
• May be supplemented by tool-based document and code analysis.
• Discussed in Chapter 15.
• Software testing: Concerned with exercising and
observing product behaviour (dynamic verification)
• The system is executed with test data and its operational
behaviour is observed.
Inspections and testing
Software inspections
• These involve people examining the source representation with the aim of
discovering anomalies and defects.
• Inspections do not require execution of a system, so they may be used
before implementation.
• They may be applied to any representation of the system (requirements,
design, configuration data, test data, etc.).
• They have been shown to be an effective technique for discovering program
errors.
Advantages of inspections
• During testing, errors can mask (hide) other errors. Because
inspection is a static process, you don’t have to be concerned with
interactions between errors.
• Incomplete versions of a system can be inspected without
additional costs. If a program is incomplete, then you need to
develop specialized test harnesses to test the parts that are
available.
• As well as searching for program defects, an inspection can also
consider broader quality attributes of a program, such as
compliance with standards, portability and maintainability.
Inspections and testing
• Inspections and testing are complementary and not opposing verification
techniques.
• Both should be used during the V & V process.
• Inspections can check conformance with a specification but not conformance
with the customer’s real requirements.
• Inspections cannot check non-functional characteristics such as performance,
usability, etc.
A model of the software testing
process
Stages of testing
• Development testing, where the system is tested during
development to discover bugs and defects.
• Release testing, where a separate testing team tests a complete
version of the system before it is released to users.
• User testing, where users or potential users of a system test the
system in their own environment.
Development testing
• Development testing includes all testing activities that are carried
out by the team developing the system.
• Unit testing, where individual program units or object classes are tested.
Unit testing should focus on testing the functionality of objects or
methods.
• Component testing, where several individual units are integrated to create
composite components. Component testing should focus on testing
component interfaces.
• System testing, where some or all of the components in a system are
integrated and the system is tested as a whole. System testing should
focus on testing component interactions.
Unit testing
• Unit testing is the process of testing individual components in
isolation.
• It is a defect testing process.
• Units may be:
• Individual functions or methods within an object
• Object classes with several attributes and methods
• Composite components with defined interfaces used to access their
functionality.
Object class testing
• Complete test coverage of a class involves:
• Testing all operations associated with an object;
• Setting and interrogating all object attributes;
• Exercising the object in all possible states.
• Inheritance makes it more difficult to design object class tests as
the information to be tested is not localised.
The weather station object interface
Weather station testing
• Need to define test cases for reportWeather, calibrate, test,
startup and shutdown.
• Using a state model, identify sequences of state transitions to be
tested and the event sequences to cause these transitions
• For example:
• Shutdown -> Running-> Shutdown
• Configuring-> Running-> Testing -> Transmitting -> Running
• Running-> Collecting-> Running-> Summarizing -> Transmitting -> Running
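Sequences of state transitions like these can be driven by a simple test harness. A minimal sketch, assuming a hypothetical state machine that implements only the states and events named in the sequences above (the transition table and event names are invented for illustration):

```python
# Sketch of state-transition testing. WeatherStation is a hypothetical
# stand-in encoding only the states and transitions listed above.

class WeatherStation:
    TRANSITIONS = {
        ("Shutdown", "restart"): "Running",
        ("Running", "shutdown"): "Shutdown",
        ("Running", "collect"): "Collecting",
        ("Collecting", "done"): "Running",
        ("Running", "summarize"): "Summarizing",
        ("Summarizing", "done"): "Transmitting",
        ("Transmitting", "done"): "Running",
    }

    def __init__(self):
        self.state = "Shutdown"

    def handle(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"illegal event {event!r} in state {self.state}")
        self.state = self.TRANSITIONS[key]

# Drive the Shutdown -> Running -> Shutdown sequence from the slide.
ws = WeatherStation()
ws.handle("restart")
assert ws.state == "Running"
ws.handle("shutdown")
assert ws.state == "Shutdown"
```

The same driver can replay the longer sequences (e.g. Running -> Collecting -> Running -> Summarizing -> Transmitting -> Running), and a defect test can check that illegal events in a given state are rejected.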
Automated testing
• Whenever possible, unit testing should be automated so that tests
are run and checked without manual intervention.
• In automated unit testing, you make use of a test automation
framework (such as JUnit) to write and run your program tests.
• Unit testing frameworks provide generic test classes that you
extend to create specific test cases. They can then run all of the
tests that you have implemented and report, often through some
GUI, on the success or otherwise of the tests.
Automated test components
• A setup part, where you initialize the system with the test case,
namely the inputs and expected outputs.
• A call part, where you call the object or method to be tested.
• An assertion part, where you compare the result of the call with
the expected result. If the assertion evaluates to true, the test has
been successful; if false, it has failed.
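The three parts map directly onto how unit-testing frameworks are used. A minimal sketch using Python's built-in unittest module (a JUnit-style framework); the `celsius_to_fahrenheit` function under test is invented for the example:

```python
import unittest

# A trivial component under test (invented for illustration).
def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

class TestConversion(unittest.TestCase):
    def setUp(self):
        # Setup part: initialize the test case with the inputs
        # and expected outputs.
        self.input_value = 100
        self.expected = 212

    def test_boiling_point(self):
        # Call part: invoke the method to be tested.
        result = celsius_to_fahrenheit(self.input_value)
        # Assertion part: compare the result of the call with
        # the expected result.
        self.assertEqual(result, self.expected)

# Run the test programmatically so the example is self-contained;
# the framework reports success or failure of each test.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestConversion)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The framework supplies the generic TestCase class that you extend, runs every `test_*` method, and reports the outcome, exactly the workflow described for JUnit above.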
Unit test effectiveness
• The test cases should show that, when used as expected, the
component that you are testing does what it is supposed to do.
• If there are defects in the component, these should be revealed by
test cases.
• This leads to two types of unit test case:
• The first of these should reflect normal operation of a program and should
show that the component works as expected.
• The other kind of test case should be based on testing experience of
where common problems arise. It should use abnormal inputs to check
that these are properly processed and do not crash the component.
Testing strategies
• Partition testing, where you identify groups of inputs that have
common characteristics and should be processed in the same way.
• You should choose tests from within each of these groups.
• Guideline-based testing, where you use testing guidelines to
choose test cases.
• These guidelines reflect previous experience of the kinds of errors that
programmers often make when developing components.
Partition testing
• Input data and output results often fall into different classes where
all members of a class are related.
• Each of these classes is an equivalence partition or domain where
the program behaves in an equivalent way for each class member.
• Test cases should be chosen from each partition.
Equivalence partitioning
Equivalence partitions
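The idea behind these partition diagrams can be sketched for a program that accepts integers between 10000 and 99999; the accept function is hypothetical, chosen only to make the three partitions concrete.

```python
def accept(n):
    """Hypothetical component: accepts integers in the range 10000..99999."""
    return 10000 <= n <= 99999

# Three equivalence partitions: below the range, within it, above it.
# Choose a mid-partition value plus values at each partition boundary.
assert accept(9999) is False    # just below the valid partition
assert accept(10000) is True    # lower boundary of the valid partition
assert accept(50000) is True    # mid-partition value
assert accept(99999) is True    # upper boundary of the valid partition
assert accept(100000) is False  # just above the valid partition
```

Boundary values are chosen because experience shows that errors cluster at the edges of partitions.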
Testing guidelines (sequences)
• Test software with sequences which have only a single value.
• Use sequences of different sizes in different tests.
• Derive tests so that the first, middle and last elements of the
sequence are accessed.
• Test with sequences of zero length.
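These guidelines translate directly into test cases; the largest function below is a hypothetical component under test.

```python
def largest(seq):
    """Hypothetical component under test: largest element, or None if empty."""
    return max(seq) if seq else None

assert largest([7]) == 7               # sequence with only a single value
assert largest([3, 9]) == 9            # different sizes in different tests
assert largest([9, 1, 2, 1, 1]) == 9   # first element is the one accessed
assert largest([1, 1, 9, 1, 1]) == 9   # middle element
assert largest([1, 1, 2, 1, 9]) == 9   # last element
assert largest([]) is None             # sequence of zero length
```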
General testing guidelines

• Choose inputs that force the system to generate all error messages
• Design inputs that cause input buffers to overflow
• Repeat the same input or series of inputs numerous times
• Force invalid outputs to be generated
• Force computation results to be too large or too small.
Component testing
• Software components are often composite components that are
made up of several interacting objects.
• For example, in the weather station system, the reconfiguration
component includes objects that deal with each aspect of the
reconfiguration.
• You access the functionality of these objects through the defined
component interface.
• Testing composite components should therefore focus on showing
that the component interface behaves according to its
specification.
• You can assume that unit tests on the individual objects within the
component have been completed.
Interface testing
Interface testing
• Objectives are to detect faults due to interface errors or invalid
assumptions about interfaces.
• Interface types
• Parameter interfaces: data passed from one method or procedure to
another.
• Shared memory interfaces: a block of memory is shared between
procedures or functions.
• Procedural interfaces: a sub-system encapsulates a set of procedures to be
called by other sub-systems.
• Message passing interfaces: sub-systems request services from other sub-
systems.
Interface errors

• Interface misuse
• A calling component calls another component and makes an error in its use of its
interface e.g. parameters in the wrong order.
• Interface misunderstanding
• A calling component embeds assumptions about the behaviour of the called component
which are incorrect.
• Timing errors
• The called and the calling component operate at different speeds and out-of-date
information is accessed.

88
Interface testing guidelines

• Design tests so that parameters to a called procedure are at the extreme ends
of their ranges.
• Always test pointer parameters with null pointers.
• Design tests which cause the component to fail.
• Use stress testing in message passing systems.
• In shared memory systems, vary the order in which components are
activated.
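A sketch of the null-pointer and extreme-range guidelines for a procedural interface; register_reading is hypothetical, and None plays the role of a null pointer in Python.

```python
def register_reading(buffer, reading):
    """Hypothetical procedural interface: append a reading to a shared
    buffer and return the new buffer length."""
    if buffer is None:
        raise ValueError("buffer must not be None")
    buffer.append(reading)
    return len(buffer)

# Always test pointer-like parameters with the null value.
try:
    register_reading(None, 21.5)
    raise AssertionError("a None buffer should be rejected")
except ValueError:
    pass

# Design tests with parameters at the extreme ends of their ranges.
buf = []
assert register_reading(buf, 1e308) == 1    # very large reading
assert register_reading(buf, -1e308) == 2   # very small reading
```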
System testing

• System testing during development involves integrating


components to create a version of the system and then testing the
integrated system.
• The focus in system testing is testing the interactions between
components.
• System testing checks that components are compatible, interact
correctly and transfer the right data at the right time across their
interfaces.
• System testing tests the emergent behaviour of a system.
System and component testing
• During system testing, reusable components that have been
separately developed and off-the-shelf systems may be integrated
with newly developed components. The complete system is then
tested.
• Components developed by different team members or sub-teams
may be integrated at this stage. System testing is a collective
rather than an individual process.
• In some companies, system testing may involve a separate testing team
with no involvement from designers and programmers.
Use-case testing
• The use-cases developed to identify system interactions can be
used as a basis for system testing.
• Each use case usually involves several system components so
testing the use case forces these interactions to occur.
• The sequence diagrams associated with the use case document
the components and interactions that are being tested.
Collect weather data sequence chart
Testing policies
• Exhaustive system testing is impossible so testing policies which
define the required system test coverage may be developed.
• Examples of testing policies:
• All system functions that are accessed through menus should be tested.
• Combinations of functions (e.g. text formatting) that are accessed through
the same menu must be tested.
• Where user input is provided, all functions must be tested with both
correct and incorrect input.
Test-driven development
• Test-driven development (TDD) is an approach to program
development in which you inter-leave testing and code
development.
• Tests are written before code and ‘passing’ the tests is the critical
driver of development.
• You develop code incrementally, along with a test for that
increment. You don’t move on to the next increment until the
code that you have developed passes its test.
• TDD was introduced as part of agile methods such as Extreme
Programming. However, it can also be used in plan-driven
development processes.
Test-driven development
TDD process activities
• Start by identifying the increment of functionality that is required.
This should normally be small and implementable in a few lines of
code.
• Write a test for this functionality and implement this as an
automated test.
• Run the test, along with all other tests that have been
implemented. Initially, you have not implemented the
functionality so the new test will fail.
• Implement the functionality and re-run the test.
• Once all tests run successfully, you move on to implementing the
next chunk of functionality.
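The cycle above can be sketched in a few lines; to_fahrenheit is a hypothetical increment, and the comments mark where the test first failed.

```python
# Step 1: identify a small increment -- a Celsius-to-Fahrenheit conversion.
# Steps 2-3: this automated test was written first; it initially failed
# because to_fahrenheit did not yet exist.
def test_to_fahrenheit():
    assert to_fahrenheit(0) == 32.0
    assert to_fahrenheit(100) == 212.0

# Step 4: implement just enough functionality to make the test pass.
def to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# Step 5: re-run this test along with all earlier tests; once everything
# passes, move on to the next increment.
test_to_fahrenheit()
```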
Benefits of test-driven development
• Code coverage
• Every code segment that you write has at least one associated test, so all
of the code you write is covered by the test suite.
• Regression testing
• A regression test suite is developed incrementally as a program is
developed.
• Simplified debugging
• When a test fails, it should be obvious where the problem lies. The newly
written code needs to be checked and modified.
• System documentation
• The tests themselves are a form of documentation that describe what the
code should be doing.
Regression testing
• Regression testing is testing the system to check that changes have
not ‘broken’ previously working code.
• In a manual testing process, regression testing is expensive but,
with automated testing, it is simple and straightforward. All tests
are rerun every time a change is made to the program.
• Tests must run ‘successfully’ before the change is committed.
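With an automated suite, regression testing is simply a matter of running everything again after each change; a minimal sketch with hypothetical tests:

```python
def add(a, b):
    return a + b

# The regression suite grows with the program; every test is kept.
def test_add():
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-1, 1) == 0

REGRESSION_SUITE = [test_add, test_add_negative]

def run_regression_suite():
    """Rerun every test after each change; commit only if all pass."""
    for test in REGRESSION_SUITE:
        test()
    return True

assert run_regression_suite()
```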
Release testing
• Release testing is the process of testing a particular
release of a system that is intended for use outside of
the development team.
• The primary goal of the release testing process is to
convince the supplier of the system that it is good
enough for use.
• Release testing, therefore, has to show that the system
delivers its specified functionality, performance and
dependability, and that it does not fail during normal use.
• Release testing is usually a black-box testing process
where tests are only derived from the system
specification.
Release testing and system testing
• Release testing is a form of system testing.
• Important differences:
• A separate team that has not been involved in the system development,
should be responsible for release testing.
• System testing by the development team should focus on discovering
bugs in the system (defect testing). The objective of release testing is to
check that the system meets its requirements and is good enough for
external use (validation testing).
Requirements based testing
• Requirements-based testing involves examining each requirement
and developing a test or tests for it.
• MHC-PMS requirements:
• If a patient is known to be allergic to any particular medication, then
prescription of that medication shall result in a warning message being
issued to the system user.
• If a prescriber chooses to ignore an allergy warning, they shall provide a
reason why this has been ignored.
Requirements tests
• Set up a patient record with no known allergies. Prescribe medication for
allergies that are known to exist. Check that a warning message is not issued
by the system.
• Set up a patient record with a known allergy. Prescribe the medication that
the patient is allergic to, and check that the warning is issued by the system.
• Set up a patient record in which allergies to two or more drugs are recorded.
Prescribe both of these drugs separately and check that the correct warning
for each drug is issued.
• Prescribe two drugs that the patient is allergic to. Check that two warnings
are correctly issued.
• Prescribe a drug that issues a warning and overrule that warning. Check that
the system requires the user to provide information explaining why the
warning was overruled.
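The first few requirement-derived tests can be sketched against a stub of the allergy-warning behaviour; the prescribe function and the record shape are assumptions for illustration, not the actual MHC-PMS design.

```python
def prescribe(patient_allergies, drug):
    """Hypothetical stub of the MHC-PMS allergy check: return a warning
    message if the patient is allergic to the prescribed drug, else None."""
    if drug in patient_allergies:
        return f"WARNING: patient is allergic to {drug}"
    return None

# No known allergies: no warning message is issued.
assert prescribe(set(), "penicillin") is None

# Known allergy: prescribing that drug issues a warning.
assert prescribe({"penicillin"}, "penicillin") is not None

# Two or more recorded allergies, prescribed separately:
# the correct warning is issued for each drug.
allergies = {"penicillin", "aspirin"}
assert "penicillin" in prescribe(allergies, "penicillin")
assert "aspirin" in prescribe(allergies, "aspirin")
```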
Features tested by scenario
• Authentication by logging on to the system.
• Downloading and uploading of specified patient records to a
laptop.
• Home visit scheduling.
• Encryption and decryption of patient records on a mobile device.
• Record retrieval and modification.
• Links with the drugs database that maintains side-effect
information.
• The system for call prompting.
A usage scenario for the MHC-PMS
Kate is a nurse who specializes in mental health care. One of her responsibilities
is to visit patients at home to check that their treatment is effective and that they
are not suffering from medication side-effects.
On a day for home visits, Kate logs into the MHC-PMS and uses it to print her
schedule of home visits for that day, along with summary information about the
patients to be visited. She requests that the records for these patients be
downloaded to her laptop. She is prompted for her key phrase to encrypt the
records on the laptop.
One of the patients that she visits is Jim, who is being treated with medication for
depression. Jim feels that the medication is helping him but believes that it has the
side-effect of keeping him awake at night. Kate looks up Jim’s record and is
prompted for her key phrase to decrypt the record. She checks the drug
prescribed and queries its side effects. Sleeplessness is a known side effect so
she notes the problem in Jim’s record and suggests that he visits the clinic to have
his medication changed. He agrees so Kate enters a prompt to call him when she
gets back to the clinic to make an appointment with a physician. She ends the
consultation and the system re-encrypts Jim’s record.
After finishing her consultations, Kate returns to the clinic and uploads the records
of patients visited to the database. The system generates a call list for Kate of
those patients who she has to contact for follow-up information and make clinic
appointments.
Performance testing
• Part of release testing may involve testing the emergent properties
of a system, such as performance and reliability.
• Tests should reflect the profile of use of the system.
• Performance tests usually involve planning a series of tests where
the load is steadily increased until the system performance
becomes unacceptable.
• Stress testing is a form of performance testing where the system is
deliberately overloaded to test its failure behaviour.
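A performance test of this shape steadily raises the load and records the response time at each step; handle_requests stands in for a real system operation, and the loads shown are arbitrary.

```python
import time

def handle_requests(n):
    """Stand-in for a system operation whose cost grows with the load."""
    return sum(i * i for i in range(n))

# Steadily increase the load, measuring response time at each step.
# A real performance test would stop when the time exceeds an agreed
# threshold; a stress test would keep going until the system fails.
timings = []
for load in (1_000, 10_000, 100_000):
    start = time.perf_counter()
    handle_requests(load)
    timings.append((load, time.perf_counter() - start))

for load, seconds in timings:
    print(f"load={load:>7} time={seconds:.6f}s")
```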
User testing
• User or customer testing is a stage in the testing process in which
users or customers provide input and advice on system testing.
• User testing is essential, even when comprehensive system and
release testing have been carried out.
• The reason for this is that influences from the user’s working environment
have a major effect on the reliability, performance, usability and
robustness of a system. These cannot be replicated in a testing
environment.
Types of user testing
• Alpha testing
• Users of the software work with the development team to test the
software at the developer’s site.
• Beta testing
• A release of the software is made available to users to allow them to
experiment and to raise problems that they discover with the system
developers.
• Acceptance testing
• Customers test a system to decide whether or not it is ready to be
accepted from the system developers and deployed in the customer
environment. Primarily for custom systems.
The acceptance testing process
Stages in the acceptance testing process
• Define acceptance criteria
• Plan acceptance testing
• Derive acceptance tests
• Run acceptance tests
• Negotiate test results
• Reject/accept system
Agile methods and acceptance testing
• In agile methods, the user/customer is part of the development
team and is responsible for making decisions on the acceptability
of the system.
• Tests are defined by the user/customer and are integrated with
other tests in that they are run automatically when changes are
made.
• There is no separate acceptance testing process.
• The main problem here is whether or not the embedded user is
‘typical’ and can represent the interests of all system stakeholders.
THANK YOU.

You might also like