CS51 Software Engineering, Unit I: Software Product and Process, Software Engineering Paradigm
Systems Engineering
Software as part of larger system, determine requirements for all system
elements, allocate requirements to software.
Software Requirements Analysis
Develop understanding of problem domain, user needs, function, performance,
interfaces, ...
Software Design
Multi-step process to determine architecture, interfaces, data structures,
functional detail. Produces (high-level) form that can be checked for quality,
conformance before coding.
Coding
Produce machine readable and executable form, match HW, OS and design needs.
Testing
Exercise the program to uncover defects and confirm that it meets its requirements.
Prototyping
In throwaway prototyping, the prototype is thrown away and the software is developed using a formal process; the prototype is used only to define the requirements.
Strengths:
Requirements can be set earlier and more reliably
Customer sees results very quickly.
Customer is educated in what is possible helping to refine requirements.
Requirements can be communicated more clearly and completely between developers and clients.
Requirements and design options can be investigated quickly and cheaply.
Weaknesses:
Requires a rapid prototyping tool and expertise in using it, a cost for the development organisation.
Smoke and mirrors - looks like a working version, but it is not.
The RAD Model:
Rapid Application Development is a linear sequential software development process
model that emphasizes an extremely short development cycle
Rapid development is achieved by using a component-based construction approach.
If requirements are well understood and project scope is constrained, the RAD process enables a development team to create a fully functional system within a very short time period (e.g., 60 to 90 days).
[Figure: the RAD model. Multiple teams (Team #1 through Team #n) work in parallel, each moving through Communication, Planning, Modeling (business modeling, data modeling, process modeling), Construction (component reuse, automatic code generation, testing), and Deployment (integration, delivery, feedback), within a 60-90 day cycle.]
RAD phases:
Business modeling
Data modeling
Process modeling
Application generation
Testing and turnover
Business modeling:
What information drives the business process?
What information is generated?
Who generates it?
Data Modeling:
The information flow defined as part of the business modeling phase is refined into a set
of data objects that are needed to support the business.
The characteristics (called attributes) of each object are identified and the relationships between these objects are defined.
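As an informal sketch of this step, the fragment below gives two invented data objects their attributes and makes one relationship between them explicit (the object names and fields are examples, not part of the notes):

```python
# Illustrative data-modeling step: two invented data objects, their
# attributes, and one explicit relationship between them.
from dataclasses import dataclass, field

@dataclass
class Customer:
    customer_id: int
    name: str

@dataclass
class Order:
    order_id: int
    customer: Customer  # relationship: each Order belongs to one Customer
    items: list = field(default_factory=list)
```

Carrying a direct reference from Order to Customer is one simple way to record a one-to-many relationship identified during data modeling.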
Process modeling:
The data objects defined in the data modeling phase are transformed to achieve the information flow necessary to implement a business function.
Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.
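The four kinds of processing description can be illustrated with a toy in-memory store; the "Customer" object and its single attribute are invented for the example:

```python
# Toy processing descriptions for a hypothetical "Customer" data object,
# one function per operation: add, modify, delete, retrieve.
customers = {}  # in-memory store keyed by customer id

def add_customer(cid, name):
    """Add: create a new Customer record."""
    customers[cid] = {"name": name}

def modify_customer(cid, name):
    """Modify: update an existing Customer's attributes."""
    customers[cid]["name"] = name

def delete_customer(cid):
    """Delete: remove the Customer record."""
    del customers[cid]

def retrieve_customer(cid):
    """Retrieve: fetch the Customer record, or None if absent."""
    return customers.get(cid)
```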
Application generation:
RAD assumes the use of fourth generation (4GL) techniques.
Rather than creating software using conventional third generation programming languages, the RAD process works to reuse existing program components (when possible) or create reusable components (when necessary).
Testing and Turnover:
Since the RAD process emphasizes reuse, many of the program components have already been tested.
This reduces overall testing time.
However, new components must be tested and all interfaces must be fully exercised
Advantages & Disadvantages of RAD:
Advantages
Extremely short development time.
Uses component-based construction and emphasises reuse and code generation
Disadvantages
Large human resource requirements (to create all of the teams).
Requires strong commitment between developers and customers for rapid-fire
activities.
High performance requirements may not be met (tuning the individual components may be required).
The Incremental Model
[Figure: the incremental model. The software is delivered as a series of increments (increment #1, #2, ..., #n). Each increment passes through Communication, Planning, Modeling (analysis, design), Construction (code, test), and Deployment (delivery, feedback), ending with delivery of the 1st, 2nd, ..., nth increment.]
System Engineering
Software engineering occurs as a consequence of a process called system engineering.
Instead of concentrating solely on software, system engineering focuses on a variety of
elements, analyzing, designing, and organizing those elements into a system that can be a
product, a service, or a technology for the transformation of information or control.
The system engineering process usually begins with a world view. That is, the entire
business or product domain is examined to ensure that the proper business or technology
context can be established.
The world view is refined to focus more fully on specific domain of interest. Within a
specific domain, the need for targeted system elements (e.g., data, software, hardware,
people) is analyzed. Finally, the analysis, design, and construction of a targeted system
element is initiated.
At the top of the hierarchy, a very broad context is established and, at the bottom, detailed
technical activities, performed by the relevant engineering discipline (e.g., hardware or
software engineering), are conducted.
Stated in a slightly more formal manner, the world view (WV) is composed of a set of
domains (Di), which can each be a system or system of systems in its own right.
WV = {D1, D2, D3, . . . , Dn}
Each domain is composed of specific elements (Ej) each of which serves some role in
accomplishing the objective and goals of the domain or component:
Di = {E1, E2, E3, . . . , Em}
Finally, each element is implemented by specifying the technical components (Ck) that
achieve the necessary function for an element:
Ej = {C1, C2, C3, . . . , Ck}
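Informally, this hierarchy can be pictured as nested collections. The sketch below models a tiny world view; every domain, element, and component name is invented for illustration:

```python
# A tiny world view (WV): domains map to elements, and each element lists
# the technical components that implement it. All names are invented.
world_view = {
    "banking": {                    # domain D1
        # element E1 -> components C1..C3
        "online_portal": ["web_ui", "auth_service", "accounts_db"],
    },
    "logistics": {                  # domain D2
        "parcel_tracking": ["gps_receiver", "tracking_software"],
    },
}

def components_of(wv, domain, element):
    """Return the technical components Ck that implement element Ej of domain Di."""
    return wv[domain][element]
```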
The final BPE (business process engineering) step, construction and integration, focuses on implementation detail. The
architecture and infrastructure are implemented by constructing an appropriate database and
internal data structures, by building applications using software components, and by selecting
appropriate elements of a technology infrastructure to support the design created during
BSD (business system design). Each of these system components must then be integrated to form a complete
information system or application.
The integration activity also places the new information system into the business area
context, performing all user training and logistics support to achieve a smooth transition.
System component engineering is actually a set of concurrent activities that address each of
the system components separately: software engineering, hardware engineering, human
engineering, and database engineering.
Each of these engineering disciplines takes a domain-specific view, but it is important to note
that the engineering disciplines must establish and maintain active communication with one
another. Part of the role of requirements engineering is to establish the interfacing
mechanisms that will enable this to happen.
The element view for product engineering is the engineering discipline itself applied to the
allocated component. For software engineering, this means analysis and design modeling
activities (covered in detail in later chapters) and construction and integration activities that
encompass code generation, testing, and support steps.
The analysis step models allocated requirements into representations of data, function, and
behavior. Design maps the analysis model into data, architectural, interface, and software component-level designs.
UNIT II SOFTWARE REQUIREMENTS
Requirements engineering is the process of establishing the services that the customer requires from a system and the constraints under which it operates and is developed.
Requirements may be functional or non-functional
Functional requirements describe system services or functions
Non-functional requirements are constraints on the system or on the development process
Types of requirements
User requirements
Statements in natural language (NL) plus diagrams of the services the system
provides and its operational constraints. Written for customers
System requirements
A structured document setting out detailed descriptions of the system services.
Written as a contract between client and contractor
Software specification
A detailed software description which can serve as a basis for a design or
implementation. Written for developers
Functional and Non-Functional
Functional requirements
Functionality or services that the system is expected to provide.
Functional requirements may also explicitly state what the system should not do.
Functional requirements specification should be:
Complete: All services required by the user should be defined
Consistent: should not have contradictory definitions (also avoid ambiguity; do not leave room for different interpretations)
Examples of functional requirements
The LIBSYS system
A library system that provides a single interface to a number of databases of articles in
different libraries.
Users can search for, download and print these articles for personal study.
The user shall be able to search either all of the initial set of databases or select a subset from
it.
The system shall provide appropriate viewers for the user to read documents in the document
store.
Every order shall be allocated a unique identifier (ORDER_ID) which the user shall be able to copy to the account's permanent storage area.
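As a toy illustration of the first LIBSYS requirement (search either all databases or a selected subset), the sketch below uses invented database names and contents:

```python
# Invented databases and contents for the LIBSYS search requirement.
databases = {
    "medline": ["gene therapy", "prototyping"],
    "acm_digital": ["prototyping", "rad model"],
}

def search(term, subset=None):
    """Search all databases, or only those named in `subset`."""
    chosen = subset if subset is not None else list(databases)
    return sorted(db for db in chosen if term in databases[db])
```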
Non-Functional requirements
Requirements that are not directly concerned with the specific functions delivered by the
system
Typically relate to the system as a whole rather than the individual system features
Often could be deciding factor on the survival of the system (e.g. reliability, cost, response
time)
[Figure: classifications of non-functional requirements.
Product requirements: efficiency requirements (including space requirements), reliability requirements, usability requirements, portability requirements.
Organisational requirements: delivery requirements, standards requirements.
External requirements: interoperability requirements, ethical requirements, legislative requirements (privacy requirements, safety requirements).]
Domain requirements
Domain requirements are derived from the application domain of the system rather than from
the specific needs of the system users.
May be new functional requirements, constrain existing requirements, or set out how particular computations must take place.
Example: the tolerance level of landing gear on an aircraft (different on dirt, asphalt, water), or what happens to a fiber optic line in case of severe weather during the Winter Olympics (only domain-area experts know).
Product requirements
Specify the desired characteristics that a system or subsystem must possess.
Most NFRs are concerned with specifying constraints on the behaviour of the executing
system.
Specifying product requirements
Some product requirements can be formulated precisely, and thus easily quantified
Performance
Capacity
Others are more difficult to quantify and, consequently, are often stated informally
Usability
Process requirements
Process requirements are constraints placed upon the development process of the system
Process requirements include:
Requirements on development standards and methods which must be followed
CASE tools which should be used
The management reports which must be provided
Examples of process requirements
The development process to be used must be explicitly defined and must be conformant with
ISO 9000 standards
The system must be developed using the XYZ suite of CASE tools
Management reports setting out the effort expended on each identified system component
must be produced every two weeks
A disaster recovery plan for the system development must be specified
External requirements
May be placed on both the product and the process
Derived from the environment in which the system is developed
External requirements are based on:
application domain information
organisational considerations
the need for the system to work with other systems
health and safety or data protection regulations
or even basic natural laws such as the laws of physics
Examples of external requirements
Medical data system: The organisation's data protection officer must certify that all data is
maintained according to data protection legislation before the system is put into operation.
Train protection system: The time required to bring the train to a complete halt is computed using the following function.
The deceleration of the train shall be taken as:
g_train = g_control + g_gradient
where:
g_gradient = 9.81 ms^-2 * compensated gradient / alpha, and the values of 9.81 ms^-2 / alpha are known for the different types of train.
g_control is initialised at 0.8 ms^-2, this value being parameterised in order to remain adjustable.
This illustrates the train's deceleration using the parabolas derived from the above formula when there is a change in gradient before the (predicted) stopping point of the train.
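Under the stated assumptions (g_control initialised at 0.8 ms^-2, and 9.81/alpha known per train type), the deceleration can be computed directly; the stopping-distance step below is added for illustration only and is not part of the requirement:

```python
# Deceleration per the requirement: g_train = g_control + g_gradient,
# with g_gradient = 9.81 ms^-2 * compensated gradient / alpha.
G = 9.81  # ms^-2

def train_deceleration(compensated_gradient, alpha, g_control=0.8):
    """g_control defaults to the initial 0.8 ms^-2 and remains adjustable."""
    g_gradient = G * compensated_gradient / alpha
    return g_control + g_gradient

def stopping_distance(speed, g_train):
    """Illustrative only: distance (m) to a complete halt at constant deceleration."""
    return speed ** 2 / (2 * g_train)
```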
Software Document
Should provide for communication among team members
[Figure: users of a requirements document.
Managers: use the requirements document to plan a bid for the system and to plan the system development process.
System engineers: use the requirements to understand what system is to be developed.
System test engineers: use the requirements to develop validation tests for the system.
System maintenance engineers: use the requirements to understand the system and the relationships between its parts.]
Process Documentation
Used to record and track the development process
Planning documentation
Cost, Schedule, Funding tracking
Schedules
Standards
Product Documentation
System Documentation
Describes how the system works, but not how to operate it
Examples:
Requirements Spec
Architectural Design
Detailed Design
Commented Source Code
Including output such as JavaDoc
Test Plans
Including test cases
V&V plan and results
List of Known Bugs
User Documentation has two main types
End User
System Administrator
In some cases these are the same people
The target audience must be well understood!
There are five important areas that should be documented for a formal release of a software
application
These do not necessarily each have to have their own document, but the topics should
be covered thoroughly
Functional Description of the Software
Installation Instructions
Introductory Manual
Reference Manual
System Administrator's Guide
Document Quality
Providing thorough and professional documentation is important for any size product
development team
The problem is that many software professionals lack the writing skills to create
professional level documents
Document Structure
All documents for a given product should have a similar structure
A good reason for product standards
The IEEE Standard for User Documentation lists such a structure
It is a superset of what most documents need
The author's best practices are:
Put a cover page on all documents
Divide documents into chapters with sections and subsections
Add an index if there is lots of reference information
Add a glossary to define ambiguous terms
Standards
Standards play an important role in the development, maintenance and usefulness of
documentation
Standards can act as a basis for quality documentation
But are not good enough on their own
Usually define high level content and organization
There are three types of documentation standards
1.Process Standards
Define the approach that is to be used when creating the documentation
Don't actually define any of the content of the documents
2. Product Standards
Goal is to have all documents created for a specific product attain a consistent structure and
appearance
Can be based on organizational or contractually required standards
Four main types:
Documentation Identification Standards
Document Structure Standards
Document Presentation Standards
Document Update Standards
One caveat:
Documentation that will be viewed by end users should be created in a way that is
best consumed and is most attractive to them
Internal development documentation generally does not meet this need
3. Interchange Standards
Deals with the creation of documents in a format that allows others to use them effectively
PDF may be good for end users who don't need to edit
Word may be good for text editing
Other Standards
IEEE
Has a published standard for user documentation
Provides a structure and superset of content areas
Many organizations probably won't create documents that completely match the
standard
Writing Style
Ten best practices when writing are provided
Author proposes that group edits of important documents should occur in a similar
fashion to software walkthroughs
[Figure: the requirements engineering process. A feasibility study produces a feasibility report; requirements elicitation and analysis produce system models; requirements specification produces the user and system requirements; requirements validation produces the requirements document.]
Feasibility Studies
A feasibility study decides whether or not the proposed system is worthwhile
A short focused study that checks
If the system contributes to organisational objectives
If the system can be engineered using current technology and within budget
If the system can be integrated with other systems that are used
Based on information assessment (what is required), information collection and report
writing
Questions for people in the organisation
What if the system wasn't implemented?
What are current process problems?
How will the proposed system help?
Requirements validation
Concerned with demonstrating that the requirements define the system that the customer
really wants
Requirements error costs are high so validation is very important
Fixing a requirements error after delivery may cost up to 100 times the cost of fixing
an implementation error
Requirements checking
Validity
Consistency
Completeness
Realism
Verifiability
Requirements validation techniques
Reviews
Systematic manual analysis of the requirements
Prototyping
Using an executable model of the system to check requirements.
Test-case generation
Developing tests for requirements to check testability
Automated consistency analysis
Checking the consistency of a structured requirements description
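As a toy illustration of automated consistency analysis, the check below flags requirement identifiers that are defined more than once in a structured description; real tools perform much richer checks, and the (id, text) data format here is invented:

```python
# Flag requirement IDs that appear more than once in a structured
# requirements description given as a list of (requirement id, text) pairs.
def duplicate_ids(requirements):
    seen, dupes = set(), set()
    for req_id, _text in requirements:
        if req_id in seen:
            dupes.add(req_id)
        seen.add(req_id)
    return sorted(dupes)
```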
Requirements management
Requirements management is the process of managing changing requirements during the
requirements engineering process and system development
Requirements are inevitably incomplete and inconsistent
New requirements emerge during the process as business needs change and a better
understanding of the system is developed
Different viewpoints have different requirements and these are often contradictory
Software prototyping
A prototype is an incomplete version of the software program being developed. Prototyping can also be used by end users to describe and prove requirements that developers have not considered.
Benefits:
The software designer and implementer can obtain feedback from the users early in the project. The client and the contractor can check whether the software matches the software specification, according to which the software program is built.
It also allows the software engineer some insight into the accuracy of initial project
estimates and whether the deadlines and milestones proposed can be successfully met.
Process of prototyping
1. Identify basic requirements
Determine basic requirements including the input and output information desired. Details,
such as security, can typically be ignored.
2. Evolutionary prototyping
Evolutionary Prototyping (also known as breadboard prototyping) is quite different from Throwaway Prototyping. The main goal when using Evolutionary Prototyping is to build a very robust prototype in a structured manner and constantly refine it. The reason for this is that the evolutionary prototype, when built, forms the heart of the new system, and the improvements and further requirements will then be built onto it.
Evolutionary Prototypes have an advantage over Throwaway Prototypes in that they are
functional systems. Although they may not have all the features the users have planned, they
may be used on a temporary basis until the final system is delivered.
In Evolutionary Prototyping, developers can focus themselves to develop parts of the
system that they understand instead of working on developing a whole system. To minimize risk,
the developer does not implement poorly understood features. The partial system is sent to
customer sites. As users work with the system, they detect opportunities for new features and
give requests for these features to developers. Developers then take these enhancement requests
along with their own and use sound configuration-management practices to change the software requirements specification, update the design, recode and retest.
3. Incremental prototyping
The final product is built as separate prototypes. At the end, the separate prototypes are merged into an overall design.
4. Extreme prototyping
Extreme Prototyping as a development process is used especially for developing web
applications. Basically, it breaks down web development into three phases, each one based on
the preceding one. The first phase is a static prototype that consists mainly of HTML pages. In
the second phase, the screens are programmed and fully functional using a simulated services
layer. In the third phase the services are implemented. The process is called Extreme Prototyping
to draw attention to the second phase of the process, where a fully-functional UI is developed
with very little regard to the services other than their contract.
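The phase-two idea of coding a fully functional UI against a simulated services layer can be sketched as follows; the service name and its contract are invented for the example:

```python
# The UI depends only on a service contract, which is first backed by
# canned data (phase two) and later by the real implementation (phase three).
class ArticleService:
    """The contract the UI is written against."""
    def search(self, term):
        raise NotImplementedError

class SimulatedArticleService(ArticleService):
    """Phase two: canned results make the UI fully functional with no backend."""
    def search(self, term):
        return [f"dummy result for {term!r}"]

class RealArticleService(ArticleService):
    """Phase three: the real services replace the simulation."""
    def search(self, term):
        raise NotImplementedError("implemented once the services are built")

def render_results(service, term):
    # UI code sees only the contract, so swapping services needs no UI change.
    return "\n".join(service.search(term))
```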
Advantages of prototyping
1. Reduced time and costs: Prototyping can improve the quality of requirements and
specifications provided to developers. Because changes cost exponentially more to implement as
they are detected later in development, the early determination of what the user really wants can
result in faster and less expensive software.
2. Improved and increased user involvement: Prototyping requires user involvement and
allows them to see and interact with a prototype allowing them to provide better and more
complete feedback and specifications. The presence of the prototype being examined by the user
prevents many misunderstandings and miscommunications that occur when each side believes the
other understands what they said. Since users know the problem domain better than anyone on
the development team does, increased interaction can result in final product that has greater
tangible and intangible quality. The final product is more likely to satisfy the user's desire for look, feel and performance.
Disadvantages of prototyping
1. Insufficient analysis: The focus on a limited prototype can distract developers from properly
analyzing the complete project. This can lead to overlooking better solutions, preparation of
incomplete specifications or the conversion of limited prototypes into poorly engineered final
projects that are hard to maintain. Further, since a prototype is limited in functionality it may not
scale well if the prototype is used as the basis of a final deliverable, which may not be noticed if
developers are too focused on building a prototype as a model.
2. User confusion of prototype and finished system: Users can begin to think that a prototype,
intended to be thrown away, is actually a final system that merely needs to be finished or
polished. (They are, for example, often unaware of the effort needed to add error-checking and
security features which a prototype may not have.) This can lead them to expect the prototype to
accurately model the performance of the final system when this is not the intent of the
developers. Users can also become attached to features that were included in a prototype for
consideration and then removed from the specification for a final system. If users are able to
require all proposed features be included in the final system this can lead to conflict.
3. Developer misunderstanding of user objectives: Developers may assume that users share
their objectives (e.g. to deliver core functionality on time and within budget), without
understanding wider commercial issues. For example, user representatives attending Enterprise
software (e.g. PeopleSoft) events may have seen demonstrations of "transaction auditing" (where
changes are logged and displayed in a difference grid view) without being told that this feature
demands additional coding and often requires more hardware to handle extra database accesses.
Users might believe they can demand auditing on every field, whereas developers might think
this is feature creep because they have made assumptions about the extent of user requirements.
If the developer has committed delivery before the user requirements were reviewed, developers
are between a rock and a hard place, particularly if user management derives some advantage
from their failure to implement requirements.
4. Developer attachment to prototype: Developers can also become attached to prototypes they
have spent a great deal of effort producing; this can lead to problems like attempting to convert a
limited prototype into a final system when it does not have an appropriate underlying
architecture. (This may suggest that throwaway prototyping, rather than evolutionary
prototyping, should be used.)
5. Excessive development time of the prototype: A key property to prototyping is the fact that
it is supposed to be done quickly. If the developers lose sight of this fact, they very well may try
to develop a prototype that is too complex. When the prototype is thrown away the precisely
developed requirements that it provides may not yield a sufficient increase in productivity to
make up for the time spent developing the prototype. Users can become stuck in debates over
details of the prototype, holding up the development team and delaying the final product.
6. Expense of implementing prototyping: the start-up costs for building a development team
focused on prototyping may be high. Many companies have development methodologies in
place, and changing them can mean retraining, retooling, or both. Many companies tend to just
jump into the prototyping without bothering to retrain their workers as much as they should.
A common problem with adopting prototyping technology is high expectations for productivity
with insufficient effort behind the learning curve. In addition to training for the use of a
prototyping technique, there is an often overlooked need for developing corporate and project
specific underlying structure to support the technology. When this underlying structure is
omitted, lower productivity can often result.
Best projects to use prototyping
It has been found that prototyping is very effective in the analysis and design of on-line
systems, especially for transaction processing, where the use of screen dialogs is much more in
evidence. The greater the interaction between the computer and the user, the greater the benefit that can be obtained from building a quick system and letting the user play with it.
Systems with little user interaction, such as batch processing or systems that mostly do
calculations, benefit little from prototyping. Sometimes, the coding needed to perform the system
functions may be too intensive and the potential gains that prototyping could provide are too
small.
Prototyping is especially good for designing good human-computer interfaces. "One of the most productive uses of rapid prototyping to date has been as a tool for iterative user requirements engineering and human-computer interface design."
Methods
There are few formal prototyping methodologies even though most Agile Methods rely
heavily upon prototyping techniques.
1. Dynamic systems development method
Dynamic Systems Development Method (DSDM) is a framework for delivering business
solutions that relies heavily upon prototyping as a core technique, and is itself ISO 9001
approved. It expands upon most understood definitions of a prototype. According to DSDM the
prototype may be a diagram, a business process, or even a system placed into production. DSDM
prototypes are intended to be incremental, evolving from simple forms into more comprehensive
ones.
DSDM prototypes may be throwaway or evolutionary. Evolutionary prototypes may be evolved
horizontally (breadth then depth) or vertically (each section is built in detail with additional
iterations detailing subsequent sections). Evolutionary prototypes can eventually evolve into
final systems.
The four categories of prototypes as recommended by DSDM are:
Business prototypes - used to design and demonstrate the business processes being automated.
Usability prototypes - used to define, refine, and demonstrate user interface design usability, accessibility, look and feel.
Performance and capacity prototypes - used to define, demonstrate, and predict how systems will perform under peak loads, as well as to demonstrate and evaluate other non-functional aspects of the system (transaction rates, data storage volume, response time).
Capability/technique prototypes - used to develop, demonstrate, and evaluate a design approach or concept.
The DSDM lifecycle of a prototype is to:
1. Identify prototype
2. Agree to a plan
3. Create the prototype
4. Review the prototype
2. Operational prototyping
Operational Prototyping was proposed by Alan Davis as a way to integrate throwaway and
evolutionary prototyping with conventional system development. "[It] offers the best of both the
quick-and-dirty and conventional-development worlds in a sensible manner. Designers develop
only well-understood features in building the evolutionary baseline, while using throwaway
prototyping to experiment with the poorly understood features."
Davis' belief is that to try to "retrofit quality onto a rapid prototype" is not the correct approach
when trying to combine the two approaches. His idea is to engage in an evolutionary prototyping
methodology and rapidly prototype the features of the system after each evolution.
The specific methodology follows these steps:
An evolutionary prototype is constructed and made into a baseline using conventional
development strategies, specifying and implementing only the requirements that are well
understood.
Copies of the baseline are sent to multiple customer sites along with a trained prototyper.
At each site, the prototyper watches the user at the system.
Whenever the user encounters a problem or thinks of a new feature or requirement, the
prototyper logs it. This frees the user from having to record the problem, and allows them
to continue working.
After the user session is over, the prototyper constructs a throwaway prototype on top of
the baseline system.
The user now uses the new system and evaluates it. If the new changes aren't effective, the prototyper removes them.
If the user likes the changes, the prototyper writes feature-enhancement requests and
forwards them to the development team.
The development team, with the change requests in hand from all the sites, then produce
a new evolutionary prototype using conventional methods.
Obviously, a key to this method is to have well trained prototypers available to go to the user
sites. The Operational Prototyping methodology has many benefits in systems that are complex
and have few known requirements in advance.
3. Evolutionary systems development
Evolutionary Systems Development is a class of methodologies that attempt to formally implement Evolutionary Prototyping. One particular type, called Systemscraft, is described by John Crinnion in his book Evolutionary Systems Development.
Systemscraft was designed as a 'prototype' methodology that should be modified and
adapted to fit the specific environment in which it was implemented.
Systemscraft was not designed as a rigid 'cookbook' approach to the development
process. It is now generally recognised that a good methodology should be flexible enough
to be adjustable to suit all kinds of environment and situation.
The basis of Systemscraft, not unlike Evolutionary Prototyping, is to create a working system
from the initial requirements and build upon it in a series of revisions. Systemscraft places heavy
emphasis on traditional analysis being used throughout the development of the system.
4. Evolutionary rapid development
Tools
Efficiently using prototyping requires that an organization have proper tools and a staff
trained to use those tools. Tools used in prototyping can vary from individual tools like 4th
generation programming languages used for rapid prototyping to complex integrated CASE
tools. 4th generation programming languages like Visual Basic and ColdFusion are frequently
used since they are cheap, well known and relatively easy and fast to use. CASE tools are often
developed or selected by the military or large organizations. Users may prototype elements of an
application themselves in a spreadsheet.
1. Screen generators, design tools & Software Factories
Screen generators are commonly used programs that enable prototypers to show users
systems that do not function, but that show what the screens may look like. Developing Human
Computer Interfaces can sometimes be the critical part of the development effort, since to the
users the interface essentially is the system.
Software Factories are code generators that allow you to build a domain model and
then drag and drop the UI. They also enable you to run the prototype and use basic database
functionality. This approach lets you explore the domain model and make sure it stays in sync
with the GUI prototype.
2. Application definition or simulation software
It enables users to rapidly build lightweight, animated simulations of another computer
program without writing code. Application simulation software allows both technical and
non-technical users to experience, test, collaborate on and validate the simulated program, and
provides reports such as annotations, screenshots and schematics. To simulate applications one
can also use software that simulates real-world programs for computer-based training,
demonstration and customer support, such as screencasting software, as those areas are closely
related.
3. SketchFlow
SketchFlow, a feature of Microsoft Expression Studio Ultimate, gives the ability to quickly
and effectively map out and iterate the flow of an application UI, the layout of individual screens
and the transition from one application state to another.
Interactive visual tool
Easy to learn
Dynamic
Provides an environment to collect feedback
4. Visual Basic
One of the most popular tools for Rapid Prototyping is Visual Basic (VB). Microsoft Access,
which includes a Visual Basic extensibility module, is also a widely accepted prototyping tool
that is used by many non-technical business analysts. Although VB is a programming language it
has many features that facilitate using it to create prototypes, including:
An interactive/visual user interface design tool.
Easy connection of user interface components to underlying functional behavior.
Modifications to the resulting software are easy to perform.
[Figure: the prototyping process - an outline definition feeds three stages (define prototype functionality, develop prototype, evaluate prototype), producing a prototyping plan, an executable prototype and an evaluation report.]
Data Model
Used to describe the logical structure of data processed by the system
Entity-relation-attribute model sets out the entities in the system, the relationships between
these entities and the entity attributes
Widely used in database design. Can readily be implemented using relational databases
No specific notation provided in the UML but objects and associations can be used
Behavioural Model
Behavioural models are used to describe the overall behaviour of a system
Two types of behavioural model are shown here
Data processing models that show how data is processed as it moves through the system
State machine models that show the systems response to events
Both of these models are required for a description of the systems behaviour
1. Data-processing models
Data flow diagrams are used to model the systems data processing
These show the processing steps as data flows through a system
Intrinsic part of many analysis methods
Simple and intuitive notation that customers can understand
Show end-to-end processing of data
Data flow diagrams
DFDs model the system from a functional perspective
Tracking and documenting how the data associated with a process is transformed helps to
develop an overall understanding of the system
Data flow diagrams may also be used in showing the data exchange between a system and
other systems in its environment
Statecharts
Allow the decomposition of a model into submodels
A brief description of the actions is included following the do in each state
Can be complemented by tables describing the states and the stimuli
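A statechart's state/stimulus table maps directly onto code. As a minimal sketch, with states, events and transitions invented purely for illustration:

```java
// Minimal statechart sketch: each case in handle() corresponds to one
// row of a state/stimulus table. States and events are hypothetical.
enum State { IDLE, RUNNING, PAUSED }
enum Event { START, PAUSE, STOP }

class StateMachine {
    private State current = State.IDLE;

    State current() { return current; }

    // Apply a stimulus: look up the transition for the current state.
    void handle(Event e) {
        switch (current) {
            case IDLE:
                if (e == Event.START) current = State.RUNNING;
                break;
            case RUNNING:
                if (e == Event.PAUSE) current = State.PAUSED;
                else if (e == Event.STOP) current = State.IDLE;
                break;
            case PAUSED:
                if (e == Event.START) current = State.RUNNING;
                else if (e == Event.STOP) current = State.IDLE;
                break;
        }
    }
}
```

Events with no matching row are simply ignored, which mirrors a blank cell in the table.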
Structured Analysis
The data-flow approach is typified by the Structured Analysis method (SA)
Two major strategies dominate structured analysis
Old method popularised by DeMarco
Modern approach by Yourdon
DeMarco
A top-down approach
The analyst maps the current physical system onto the current logical data-flow
model
The approach can be summarised in four steps:
Analysis of current physical system
Derivation of logical model
Derivation of proposed logical model
Implementation of new physical system
Modern structured analysis
Distinguishes between users' real needs and those requirements that represent the external
behaviour satisfying those needs
Includes real-time extensions
Other structured analysis approaches include:
Structured Analysis and Design Technique (SADT)
Structured Systems Analysis and Design Methodology (SSADM)
Method weaknesses
They do not model non-functional system requirements.
They do not usually include information about whether a method is appropriate for a given
problem.
They may produce too much documentation.
The system models are sometimes too detailed and difficult for users to understand.
CASE workbenches
A coherent set of tools that is designed to support related software process activities such as
analysis, design or testing.
Analysis and design workbenches support system modelling during both requirements
engineering and system design.
These workbenches may support a specific design method or may provide support for
creating several different types of system model.
[Figure: an analysis and design workbench - structured diagramming tools, report generation facilities, a code generator, query language facilities, forms creation tools and import/export facilities, all built around a central information repository.]
Data Dictionary
Data dictionaries are lists of all of the names used in the system models. Descriptions of the
entities, relationships and attributes are also included
Advantages
Support name management and avoid duplication
Store of organisational knowledge linking analysis, design and implementation
Many CASE workbenches support data dictionaries
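As an illustration only (the class and entry names below are invented), a data dictionary can be sketched as a name-to-description table that rejects duplicates, supporting the name-management advantage listed above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy data dictionary: every name used in the system models is recorded
// once, with a description of the entity, relationship or attribute.
class DataDictionary {
    private final Map<String, String> entries = new LinkedHashMap<>();

    // Reject duplicate names: this is how a dictionary avoids duplication.
    void define(String name, String description) {
        if (entries.containsKey(name))
            throw new IllegalArgumentException("Name already defined: " + name);
        entries.put(name, description);
    }

    String lookup(String name) { return entries.get(name); }
}
```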
UNIT III
ANALYSIS, DESIGN CONCEPTS AND PRINCIPLES
Design Concepts and Principles:
Map the information from the analysis model to the design representations - data design,
architectural design, interface design, procedural design
Analysis to Design:
Design Models 1:
Data Design
created by transforming the data dictionary and ERD into implementation data
structures
requires as much attention as algorithm design
Architectural Design
derived from the analysis model and the subsystem interactions defined in the
DFD
Interface Design
derived from DFD and CFD
describes software elements communication with
other software elements
other systems
human users
Design Models 2 :
Procedure-level design
created by transforming the structural elements defined by the software
architecture into procedural descriptions of software components
Derived from information in the PSPEC, CSPEC, and STD
Design Principles 1:
Process should not suffer from tunnel vision; consider alternative approaches
Design should be traceable to analysis model
Do not try to reinvent the wheel
- use design patterns, i.e. reusable components
Design should exhibit both uniformity and integration
Should be structured to accommodate changes
Design Principles 2 :
Design is not coding and coding is not design
Should be structured to degrade gently when bad data, events, or operating conditions
are encountered
Needs to be assessed for quality as it is being created
Needs to be reviewed to minimize conceptual (semantic) errors
Design Concepts -1 :
Abstraction
allows designers to focus on solving a problem without being concerned about
irrelevant lower level details
Procedural abstraction is a named sequence of instructions that has a specific and limited
function
e.g. open a door
"Open" implies a long sequence of procedural steps
Data abstraction is a collection of data that describes a data object
e.g. door type, opening mechanism, weight, dimensions
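The door example above can be sketched in Java; all field and method names are illustrative, not from any real API:

```java
// Data abstraction: a Door collects the data that describes the object
// (type, opening mechanism, weight). Procedural abstraction: "open"
// names a limited sequence of steps hidden from the caller.
class Door {
    String doorType;          // e.g. "sliding"
    String openingMechanism;  // e.g. "manual"
    double weightKg;
    boolean isOpen = false;

    // Procedural abstraction: one named operation covering several steps.
    void open() {
        unlock();
        releaseLatch();
        isOpen = true;
    }

    private void unlock() { /* details irrelevant to callers */ }
    private void releaseLatch() { /* details irrelevant to callers */ }
}
```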
Design Concepts -2 :
Design Patterns
description of a design structure that solves a particular design problem within a
specific context and its impact when applied
Design Concepts -3 :
Software Architecture
overall structure of the software components and the ways in which that structure
provides conceptual integrity for a system
Design Concepts -4 :
Information Hiding
information (data and procedure) contained within a module is inaccessible to
modules that have no need for such information
Functional Independence
achieved by developing modules with single-minded purpose and an aversion to
excessive interaction with other modules
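A minimal sketch of information hiding, with illustrative names: the module's internal representation is private, so other modules can use only its narrow public interface and the representation can change without affecting them:

```java
// Information hiding: the internal representation (an int) is private
// and inaccessible to modules that have no need for it; clients depend
// only on the public interface.
class Counter {
    private int count = 0;  // hidden: no other module can touch this

    public void increment() { count++; }
    public int value() { return count; }
}
```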
Refactoring Design concepts :
Fowler [FOW99] defines refactoring in the following manner:
"Refactoring is the process of changing a software system in such a way that it
does not alter the external behavior of the code [design] yet improves its internal
structure.
When software is refectories, the existing design is examined for
redundancy
unused design elements
inefficient or unnecessary algorithms
poorly constructed or inappropriate data structures
or any other design failure that can be corrected to yield a better design.
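A tiny hypothetical example of such a refactoring: the duplicated loop in the "before" class is factored into one private helper, so the internal structure improves while the externally observable behaviour is unchanged. The class names are invented for illustration:

```java
// Before refactoring: the summing loop is duplicated in two methods.
class InvoiceBefore {
    double total(double[] prices) {
        double t = 0;
        for (double p : prices) t += p;
        return t;
    }
    double totalWithTax(double[] prices, double rate) {
        double t = 0;
        for (double p : prices) t += p;  // redundant copy of the loop
        return t * (1 + rate);
    }
}

// After refactoring: the duplication is removed; behaviour is identical.
class InvoiceAfter {
    private double sum(double[] prices) {
        double t = 0;
        for (double p : prices) t += p;
        return t;
    }
    double total(double[] prices) { return sum(prices); }
    double totalWithTax(double[] prices, double rate) {
        return sum(prices) * (1 + rate);
    }
}
```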
Design Concepts 4 :
Objects
encapsulate both data and data manipulation procedures needed to describe the
content and behavior of a real world entity
Class
generalized description (template or pattern) that describes a collection of similar
objects
Inheritance
provides a means for allowing subclasses to reuse existing superclass data and
procedures; also provides mechanism for propagating changes
Design Concepts 5:
Messages
the means by which objects exchange information with one another
Polymorphism
a mechanism that allows several objects in a class hierarchy to have different
methods with the same name
instances of each subclass will be free to respond to messages by calling their own
version of the method
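These ideas can be sketched in a few lines of Java; the sensor classes below are invented for illustration. Each subclass inherits from the generalised class and responds to the same message with its own version of the method:

```java
// Inheritance: subclasses reuse the superclass definition.
// Polymorphism: each subclass responds to the describe() message
// with its own method.
class Sensor {
    String describe() { return "generic sensor"; }
}

class DoorSensor extends Sensor {
    @Override String describe() { return "door sensor"; }
}

class MovementDetector extends Sensor {
    @Override String describe() { return "movement detector"; }
}
```

Sending `describe()` to a `Sensor` reference dispatches to the method of the actual subclass instance.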
Modular Design Methodology Evaluation 1:
Modularity
the degree to which software can be understood by examining its components
independently of one another
Modular decomposability
provides systematic means for breaking problem into sub problems
Layered
several layers are defined
each layer performs operations that become closer to the machine instruction set
in the lower layers
Architectural Styles 3:
Call and return
program structure decomposes function into control hierarchy with main program
invoking several subprograms
Software Architecture Design 1:
Software to be developed must be put into context
model external entities and define interfaces
Identify architectural archetypes
collection of abstractions that must be modeled if the system is to be constructed
Object oriented Architecture :
The components of a system encapsulate data and the operations that must be applied to
manipulate the data. Communication and coordination between components is
accomplished via message passing
Software Architecture Design 2:
Specify structure of the system
define and refine the software components needed to implement each archetype
Continue the process iteratively until a complete architectural structure has been derived
Layered Architecture:
Number of different layers are defined, each accomplishing operations that progressively
become closer to the machine instruction set
At the outer layer components service user interface operations.
At the inner layer components perform operating system interfacing.
Intermediate layers provide utility services and application software function
Architecture Tradeoff Analysis 1:
1. Collect scenarios
2. Elicit requirements, constraints, and environmental description
3. Describe architectural styles/patterns chosen to address scenarios and requirements
module view
process view
data flow view
Architecture Tradeoff Analysis 2:
4. Evaluate quality attributes independently (e.g. reliability, performance, security,
maintainability, flexibility, testability, portability, reusability, interoperability)
5. Identify sensitivity points for architecture
any attributes significantly affected by changes in the architecture
Refining Architectural Design:
Processing narrative developed for each module
Interface description provided for each module
Local and global data structures are defined
Design restrictions/limitations noted
Design reviews conducted
Structure the system into a set of loosely coupled objects with well-defined interfaces.
Object-oriented decomposition is concerned with identifying object classes, their attributes
and operations.
When implemented, objects are created from these classes and some control model used to
coordinate object operations.
[Figure: the UI design process - analyse and understand user activities, design a prototype, produce a dynamic design prototype, evaluate the design with end-users via an executable prototype, then implement the final user interface.]
UI design principles
User familiarity
The interface should be based on user-oriented terms and concepts rather than
computer concepts
E.g., an office system should use concepts such as letters, documents, folders etc.
rather than directories, file identifiers, etc.
Consistency
The system should display an appropriate level of consistency
Commands and menus should have the same format, command punctuation should be
similar, etc.
Minimal surprise
If a command operates in a known way, the user should be able to predict the
operation of comparable commands
Recoverability
The system should include mechanisms to allow the user to recover
from errors
User guidance
Some user guidance such as help systems, on-line manuals, etc. should be supplied
User diversity
Interaction facilities for different types of user should be supported
E.g., some users have seeing difficulties and so larger text should be available
User-system interaction
Two problems must be addressed in interactive systems design
How should information from the user be provided to the computer system?
How should information from the computer system be presented to the user?
Interaction styles
Direct manipulation
Easiest to grasp with immediate feedback
Difficult to program
Menu selection
User effort and errors minimized
Large numbers and combinations of choices a problem
Form fill-in
Ease of use, simple data entry
Tedious, takes a lot of screen space
Natural language
Great for casual users
Tedious for expert users
Information presentation
Information presentation is concerned with presenting system information to system users
The information may be presented directly or may be transformed in some way for
presentation
The Model-View-Controller approach is a way of supporting multiple presentations of data
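A minimal MVC-style sketch (the interface and class names are assumptions, not from any real toolkit): one model value is presented by several attached views, each transforming the same data differently:

```java
import java.util.ArrayList;
import java.util.List;

// One model, many presentations: each registered view renders the
// same model value in its own way.
interface View { String render(int value); }

class Model {
    private int value;
    private final List<View> views = new ArrayList<>();

    void attach(View v) { views.add(v); }
    void set(int v) { value = v; }

    // Every view presents the current model state.
    List<String> renderAll() {
        List<String> out = new ArrayList<>();
        for (View v : views) out.add(v.render(value));
        return out;
    }
}
```

A controller would sit between user input and `set()`; it is omitted here to keep the sketch short.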
Information display
[Figure: alternative information displays - the same values shown as a pie chart, a thermometer scale, a horizontal bar and a dial, plus textual highlighting of a message with OK/Cancel buttons.]
Data visualisation
Concerned with techniques for displaying large amounts of information
Visualisation can reveal relationships between entities and trends in the data
Possible data visualisations are:
Weather information
State of a telephone network
Chemical plant pressures and temperatures
A model of a molecule
Colour displays
Colour adds an extra dimension to an interface and can help the user understand complex
information structures
Can be used to highlight exceptional events
The use of colour to communicate meaning
Error messages
Error message design is critically important. Poor error messages can mean that a user
rejects rather than accepts a system
Messages should be polite, concise, consistent and constructive
The background and experience of users should be the determining factor in message
design
User interface evaluation
Some evaluation of a user interface design should be carried out to assess its suitability
Full scale evaluation is very expensive and impractical for most systems
Ideally, an interface should be evaluated against its requirements specification
However, it is rare for such specifications to be produced
Real Time Software Design
Systems which monitor and control their environment
Inevitably associated with hardware devices
Sensors: Collect data from the system environment
Actuators: Change (in some way) the system's environment
Time is critical. Real-time systems MUST respond within specified times
A real-time system is a software system where the correct functioning of the system depends
on the results produced by the system and the time at which these results are produced
A soft real-time system is a system whose operation is degraded if results are not produced
according to the specified timing requirements
A hard real-time system is a system whose operation is incorrect if results are not produced
according to the timing specification
Stimulus/Response Systems
Given a stimulus, the system must produce a response within a specified time
Two classes of stimuli:
Periodic stimuli. Stimuli which occur at predictable time intervals
For example, a temperature sensor may be polled 10 times per second
Aperiodic stimuli. Stimuli which occur at unpredictable times
For example, a system power failure may trigger an interrupt which must be
processed by the system
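The periodic-stimulus case (a sensor polled 10 times per second) can be sketched with a scheduled executor; the "sensor" here is simulated by a counter, and the helper class is an illustration rather than a real-time implementation:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Periodic stimulus: poll a (simulated) sensor every 100 ms, i.e. 10 Hz.
class PeriodicPoller {
    static int pollFor(long millis) throws InterruptedException {
        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger polls = new AtomicInteger();
        // Each tick stands in for reading the temperature sensor.
        exec.scheduleAtFixedRate(polls::incrementAndGet, 0, 100, TimeUnit.MILLISECONDS);
        Thread.sleep(millis);
        exec.shutdownNow();
        return polls.get();
    }
}
```

An aperiodic stimulus such as a power failure would instead arrive as an interrupt or event callback, outside any fixed schedule.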
Architectural considerations
Because of the need to respond to timing demands made by different stimuli / responses, the
system architecture must allow for fast switching between stimulus handlers
Timing demands of different stimuli are different so a simple sequential loop is not usually
adequate
Real Time Software Design:
Designing embedded software systems whose behaviour is subject to timing constraints
To explain the concept of a real-time system and why these systems are usually
implemented as concurrent processes
To describe a design process for real-time systems
To explain the role of a real-time executive
To introduce generic architectures for monitoring and control and data acquisition
systems
Real-time systems are usually designed as cooperating processes with a real-time
executive controlling these processes
A real-time system model:
[Figure: multiple sensors feed a real-time control system, which drives multiple actuators.]
System elements:
Sensor control processes
Collect information from sensors. May buffer information collected in response to
a sensor stimulus
Data processor
Carries out processing of collected information and computes the system response
Actuator control
Generates control signals for the actuator
R-T systems design process:
Identify the stimuli to be processed and the required responses to these stimuli
For each stimulus and response, identify the timing constraints
Aggregate the stimulus and response processing into concurrent processes. A process
may be associated with each class of stimulus and response
Design algorithms to process each class of stimulus and response. These must meet the
given timing requirements
Design a scheduling system which will ensure that processes are started in time to meet
their deadlines
Integrate using a real-time executive or operating system
Timing constraints:
May require extensive simulation and experiment to ensure that these are met by the
system
May mean that certain design strategies such as object-oriented design cannot be used
because of the additional overhead involved
May mean that low-level programming language features have to be used for
performance reasons
Real-time programming:
Hard real-time systems may have to be programmed in assembly language to ensure that
deadlines are met
Languages such as C allow efficient programs to be written but do not have constructs to
support concurrency or shared resource management
Ada is a language designed to support real-time systems design and so includes a
general-purpose concurrency mechanism
Non-stop system components:
Configuration manager
Responsible for the dynamic reconfiguration of the system software and hardware.
Hardware modules may be replaced and software upgraded without stopping the system
Fault manager
Responsible for detecting software and hardware faults and taking appropriate actions
(e.g. switching to backup disks) to ensure that the system continues in operation
Burglar alarm system (example)
A system is required to monitor sensors on doors and windows to detect the presence of
intruders in a building
When a sensor indicates a break-in, the system switches on lights around the area and
calls police automatically
The system should include provision for operation without a mains power supply
Sensors
Movement detectors, window sensors, door sensors.
50 window sensors, 30 door sensors and 200 movement detectors
Voltage drop sensor
Actions
When an intruder is detected, police are called automatically.
Lights are switched on in rooms with active sensors.
An audible alarm is switched on.
The system switches automatically to backup power when a voltage drop is
detected.
Control systems:
A burglar alarm system is primarily a monitoring system. It collects data from sensors but
no real-time actuator control
Control systems are similar but, in response to sensor values, the system sends control
signals to actuators
An example of a monitoring and control system is a system which monitors temperature
and switches heaters on and off
Data acquisition systems:
Collect data from sensors for subsequent processing and analysis.
Data collection processes and processing processes may have different periods and
deadlines.
Data collection may be faster than processing e.g. collecting information about an
explosion.
Circular or ring buffers are a mechanism for smoothing speed differences.
[Figure: a temperature control system - sensor and thermostat processes running at 500 Hz issue switch commands and room numbers to heater control and furnace control processes.]
[Figure: a data acquisition system - a sensor process writes sensor identifiers and values into a sensor data buffer; a processing process reads the buffer, computes the processed flux level and passes it to a display.]
Mutual exclusion:
Producer processes collect data and add it to the buffer. Consumer processes take data
from the buffer and make elements available
Producer and consumer processes must be mutually excluded from accessing the same
element.
The buffer must stop producer processes adding information to a full buffer and consumer
processes trying to take information from an empty buffer
System Design
Design both the hardware and the software associated with the system. Partition functions to
either hardware or software
Design decisions should be made on the basis of non-functional system requirements
Hardware delivers better performance but potentially longer development time and less scope
for change
Sensor/actuator processes
[Figure: sensor/actuator control - a stimulus from a sensor is handled by a sensor control process, passed to a data processor, and the response is delivered to the actuator via an actuator control process.]
[Figure: hardware/software co-design - partition requirements into software requirements and hardware requirements, which lead to separate software and hardware designs.]
[Figure: microwave oven statechart - states include Waiting (do: display time), Half power (do: set power = 300), Full power (do: set power = 600), Set time (do: get number, exit: set time), Enabled (do: display 'Ready'), Disabled (do: display 'Waiting') and Operation (do: operate oven); transitions are triggered by Half power, Full power, Number, Timer, Door open, Door closed, Start and Cancel events, plus a system fault transition.]
Java as a real-time language
Java supports lightweight concurrency (threads and synchronized methods) and can be used
for some soft real-time systems
Java 2.0 is not suitable for hard RT programming or programming where precise control of
timing is required
Not possible to specify thread execution time
Uncontrollable garbage collection
Not possible to discover queue sizes for shared resources
Variable virtual machine implementation
Not possible to do space or timing analysis
Real Time Executives
Real-time executives are specialised operating systems which manage processes in the RTS
Responsible for process management and resource (processor and memory) allocation
May also provide storage management and fault management
Components depend on complexity of system
Executive components
Real-time clock
Provides information for process scheduling.
Interrupt handler
[Figure: real-time executive components - the real-time clock and interrupt handler drive a scheduler, which maintains a ready list of ready processes; a resource manager tracks the available resource list and released resources; a despatcher takes a process from the processor list and starts the executing process.]
Process priority
The processing of some types of stimuli must sometimes take priority
Interrupt level priority. Highest priority which is allocated to processes requiring a very fast
response
Clock level priority. Allocated to periodic processes
Within these, further levels of priority may be assigned
Interrupt servicing
Control is transferred automatically to a pre-determined memory location
This location contains an instruction to jump to an interrupt service routine
Further interrupts are disabled, the interrupt serviced and control returned to the interrupted
process
Resource manager
Allocates memory and a processor
Despatcher
Starts execution on an available processor
Process switching
The scheduler chooses the next process to be executed by the processor. This depends on a
scheduling strategy which may take the process priority into account
The resource manager allocates memory and a processor for the process to be executed
The despatcher takes the process from the ready list, loads it onto a processor and starts
execution
Scheduling strategies
Non pre-emptive scheduling
Once a process has been scheduled for execution, it runs to completion or until it is
blocked for some reason (e.g. waiting for I/O)
Pre-emptive scheduling
The execution of an executing process may be stopped if a higher priority process
requires service
Scheduling algorithms
Round-robin
Shortest deadline first
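A shortest-deadline-first scheduler can be sketched with a priority queue ordered by deadline, so the scheduler always despatches the process whose deadline is nearest. The process names and fields below are illustrative:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// A ready process with a deadline (milliseconds from now, say).
class Proc {
    final String name;
    final long deadlineMs;
    Proc(String name, long deadlineMs) { this.name = name; this.deadlineMs = deadlineMs; }
}

// Shortest-deadline-first: the ready list is a priority queue keyed
// on deadline, so poll() always yields the most urgent process.
class DeadlineScheduler {
    private final PriorityQueue<Proc> ready =
        new PriorityQueue<>(Comparator.comparingLong((Proc p) -> p.deadlineMs));

    void makeReady(Proc p) { ready.add(p); }

    // Despatch every ready process, earliest deadline first.
    List<String> despatchAll() {
        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) order.add(ready.poll().name);
        return order;
    }
}
```

Round-robin, by contrast, would use a plain FIFO queue and requeue each process after its time slice.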
Data Acquisition System
A ring buffer
[Figure: a producer process adds data to the ring buffer while a consumer process removes it.]
Java implementation of a ring buffer
class CircularBuffer
{
    private int bufsize;
    private SensorRecord[] store;
    private int numberOfEntries = 0;
    private int front = 0, back = 0;

    CircularBuffer (int n) {
        bufsize = n;
        store = new SensorRecord[bufsize];
    } // CircularBuffer

    synchronized void put (SensorRecord rec) throws InterruptedException
    {
        // Use while, not if: a woken thread must re-check, since the
        // buffer may still be full (spurious wakeups, other producers)
        while (numberOfEntries == bufsize)
            wait();
        store[back] = new SensorRecord(rec.sensorId, rec.sensorVal);
        back = (back + 1) % bufsize;           // wrap around the ring
        numberOfEntries = numberOfEntries + 1;
        notifyAll();                           // wake waiting consumers
    } // put

    synchronized SensorRecord get () throws InterruptedException
    {
        while (numberOfEntries == 0)
            wait();
        SensorRecord result = store[front];
        front = (front + 1) % bufsize;         // wrap around the ring
        numberOfEntries = numberOfEntries - 1;
        notifyAll();                           // wake waiting producers
        return result;
    } // get
} // CircularBuffer
Monitoring and Control System
The system should include provision for operation without a mains power supply
Timing requirements
The switch to backup power must be completed within a deadline of 50 ms.
Each door alarm should be polled twice per second.
Each window alarm should be polled twice per second.
Each movement detector should be polled twice per second.
The audible alarm should be switched on within 1/2 second of an alarm being raised by a sensor.
The lights should be switched on within 1/2 second of an alarm being raised by a sensor.
The call to the police should be started within 2 seconds of an alarm being raised by a sensor.
A synthesised message should be available within 4 seconds of an alarm being raised by a sensor.
Process architecture
[Figure: burglar alarm process architecture - door sensor, window sensor and movement detector processes (polled at rates between 60 Hz and 560 Hz) pass sensor status to a building monitor process; when an alarm is raised, room numbers and an alert message are passed to the communication, audible alarm, lighting control and power switch processes, the last also driven by a power failure interrupt.]
Control systems
Fu rnace
con tro l p ro ces s
A burglar alarm system is primarily a monitoring system: it collects data from sensors but performs no real-time actuator control.
Control systems are similar but, in response to sensor values, the system sends control signals to actuators.
An example of a monitoring and control system is a system that monitors temperature and switches heaters on and off.
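A minimal sketch of such a monitoring-and-control loop (the setpoint, hysteresis band and method names are assumptions for illustration, not from the source):

```java
// Illustrative control loop: read a temperature sensor and switch a
// heater actuator on or off around a setpoint, with a small hysteresis
// band to avoid rapid on/off cycling near the setpoint.
public class ThermostatControl {
    static final double SETPOINT = 20.0;   // assumed target temperature (Celsius)
    static final double BAND = 0.5;        // assumed hysteresis band

    // Decide the heater state from the current reading and current state.
    static boolean heaterShouldBeOn(double temp, boolean currentlyOn) {
        if (temp < SETPOINT - BAND) return true;   // too cold: switch on
        if (temp > SETPOINT + BAND) return false;  // too warm: switch off
        return currentlyOn;                        // inside band: no change
    }

    public static void main(String[] args) {
        boolean on = false;
        for (double t : new double[] {18.0, 19.8, 20.2, 21.0}) {
            on = heaterShouldBeOn(t, on);
            System.out.println("temp=" + t + " heater=" + (on ? "ON" : "OFF"));
        }
    }
}
```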
UNIT IV
TESTING
Taxonomy of Software Testing
Classified by purpose, software testing can be divided into correctness testing, performance
testing, reliability testing and security testing.
Classified by life-cycle phase, software testing can be classified into the following
categories: requirements phase testing, design phase testing, program phase testing,
evaluating test results, installation phase testing, acceptance testing and maintenance testing.
By scope, software testing can be categorized as follows: unit testing, component testing,
integration testing, and system testing.
Correctness testing
Correctness is the minimum requirement of software, the essential purpose of testing. It is
used to tell the right behavior from the wrong one. The tester may or may not know the inside
details of the software module under test, e.g. control flow, data flow, etc. Therefore, either a
white-box or a black-box point of view can be taken in testing software. We must
note that the black-box and white-box ideas are not limited to correctness testing only.
Black-box testing
White-box testing
Performance testing
Not all software systems have specifications on performance explicitly. But every system
will have implicit performance requirements. The software should not take infinite time or
infinite resources to execute. "Performance bugs" sometimes are used to refer to those design
problems in software that cause the system performance to degrade.
Performance has always been a great concern and a driving force of computer evolution.
Performance evaluation of a software system usually includes: resource usage, throughput,
stimulus-response time and queue lengths detailing the average or maximum number of tasks
waiting to be serviced by selected resources. Typical resources that need to be considered
include network bandwidth requirements, CPU cycles, disk space, disk access operations, and
memory usage. The goal of performance testing can be performance bottleneck identification,
performance comparison and evaluation, etc.
Reliability testing
Testing Principles:
All tests should be traceable to customer requirements.
Tests should be planned before testing begins.
80% of all errors are in 20% of the code.
Testing should begin in the small and progress to the large.
Exhaustive testing is not possible.
Testing should be conducted by an independent third party if possible.
Software Defect Causes:
Specification may be wrong.
Specification may be a physical impossibility.
Faulty program design.
Program may be incorrect.
Types of Errors:
Algorithmic error.
Computation & precision error.
Documentation error.
Capacity error or boundary error.
Timing and coordination error.
Throughput or performance error.
Recovery error.
Hardware & system software error.
Standards & procedure errors.
Software Testability Checklist 1:
Operability
the better it works, the more efficiently it can be tested
Observability
what you see is what you test
Controllability
the better the software can be controlled, the more testing can be automated
and optimized
Software Testability Checklist 2:
Decomposability
controlling the scope of testing allows problems to be isolated quickly and
retested intelligently
Stability
the fewer the changes, the fewer the disruptions to testing
Understandability
the more information that is known, the smarter the testing can be done
Good Test Attributes:
A good test has a high probability of finding an error.
A good test is not redundant.
A good test should be best of breed.
A good test should not be too simple or too complex.
Test Strategies:
Black-box or behavioral testing
Cyclomatic Complexity:
A number of industry studies have indicated that the higher the value of V(G), the higher the
probability of errors.
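For a single-entry, single-exit flow graph, V(G) can be computed as the number of decision points plus 1 (equivalently, edges - nodes + 2). A sketch with an illustrative function (the function itself is an assumption for demonstration):

```java
// Illustrative function with three decision points (one while and two
// ifs), so its cyclomatic complexity is V(G) = 3 + 1 = 4, meaning four
// independent paths must be exercised for basis path coverage.
public class CyclomaticDemo {
    static int classify(int x) {
        int result = 0;
        while (x > 10) { x -= 10; result++; }  // decision 1
        if (x % 2 == 0) result += 100;         // decision 2
        if (x < 0) result = -1;                // decision 3
        return result;
    }

    // V(G) = decision points + 1 for a single-entry, single-exit graph.
    static int cyclomatic(int decisionPoints) {
        return decisionPoints + 1;
    }

    public static void main(String[] args) {
        System.out.println("V(G) = " + cyclomatic(3));
    }
}
```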
Control Structure Testing 1:
White-box techniques focusing on control structures present in the software
Condition testing (e.g. branch testing)
focuses on testing each decision statement in a software module
it is important to ensure coverage of all logical combinations of data that may be
processed by the module (a truth table may be helpful)
Control Structure Testing 2:
Data flow testing
selects test paths according to the locations of variable definitions and uses
in the program (e.g. definition-use chains)
Loop testing
focuses on the validity of the program loop constructs (i.e. while, for, go to)
involves checking to ensure loops start and stop when they are supposed to
(unstructured loops should be redesigned whenever possible)
Loop Testing: Simple Loops:
Minimum test conditions for simple loops:
1. skip the loop entirely
2. only one pass through the loop
3. two passes through the loop
4. m passes through the loop, where m < n
5. (n-1), n, and (n+1) passes through the loop
where n is the maximum number of allowable passes
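These conditions can be exercised directly against a loop under test. A sketch (the summing function is an illustrative assumption, not from the source):

```java
// Illustrative loop under test: sums the first k elements of an array,
// where the loop makes at most n = a.length passes. The main method
// exercises the minimum simple-loop test conditions listed above.
public class LoopTestDemo {
    static int sumFirst(int[] a, int k) {
        int sum = 0;
        for (int i = 0; i < k && i < a.length; i++)
            sum += a[i];
        return sum;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5};            // n = 5
        System.out.println(sumFirst(a, 0));   // 1. skip the loop entirely
        System.out.println(sumFirst(a, 1));   // 2. one pass
        System.out.println(sumFirst(a, 2));   // 3. two passes
        System.out.println(sumFirst(a, 3));   // 4. m passes, m < n
        System.out.println(sumFirst(a, 4));   // 5. n-1 passes
        System.out.println(sumFirst(a, 5));   //    n passes
        System.out.println(sumFirst(a, 6));   //    n+1 requested, capped at n
    }
}
```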
Loop Testing: Nested Loops:
1. Start at the innermost loop. Set all outer loops to their minimum iteration parameter values.
2. Test the min+1, typical, max-1 and max values for the innermost loop, while holding the
outer loops at their minimum values.
3. Move out one loop and set it up as in step 2, holding all other loops at typical values.
Continue this step until the outermost loop has been tested.
Concatenated Loops
If the loops are independent of one another, treat each as a simple loop; otherwise treat
them as nested loops. (Loops are dependent when, for example, the final loop counter value
of loop 1 is used to initialize loop 2.)
Black-Box Testing:
Graph-Based Testing 1:
Black-box methods based on the nature of the relationships (links) among program
objects (nodes); test cases are designed to traverse the entire graph
Transaction flow testing
nodes represent steps in some transaction and links represent logical connections
between steps that need to be validated
Finite state modeling
nodes represent user observable states of the software and links represent state
transitions
Graph-Based Testing 2:
Data flow modeling
nodes are data objects and links are transformations of one data object to another
data object
Timing modeling
nodes are program objects and links are sequential connections between these
objects
link weights are required execution times
Equivalence Partitioning:
Black-box technique that divides the input domain into classes of data from which test
cases can be derived
An ideal test case uncovers a class of errors that might otherwise require many arbitrary
test cases to be executed before the general error is observed
Equivalence Class Guidelines:
If an input condition specifies a range, one valid and two invalid equivalence classes are
defined
If an input condition requires a specific value, one valid and two invalid equivalence
classes are defined
If an input condition specifies a member of a set, one valid and one invalid equivalence
class is defined
If an input condition is Boolean, one valid and one invalid equivalence class is defined
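For example, for an input that must lie in a range, one representative value per class is enough. A sketch (the validator and its 0..100 range are assumptions for illustration):

```java
// Illustrative validator: accepts an exam mark only in the range 0..100.
// Equivalence partitioning gives one valid class (inside the range) and
// two invalid classes (below it and above it); each class is probed by
// a single representative value.
public class EquivalenceDemo {
    static boolean isValidMark(int mark) {
        return mark >= 0 && mark <= 100;
    }

    public static void main(String[] args) {
        System.out.println(isValidMark(50));   // valid class
        System.out.println(isValidMark(-5));   // invalid class: below range
        System.out.println(isValidMark(150));  // invalid class: above range
    }
}
```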
Boundary Value Analysis - 1
Black-box technique
focuses on the boundaries of the input domain rather than its center
Guidelines:
If input condition specifies a range bounded by values a and b, test cases should
include a and b, values just above and just below a and b
If an input condition specifies a number of values, test cases should exercise
the minimum and maximum numbers, as well as values just above and just below
them
Boundary Value Analysis 2
1. Apply guidelines 1 and 2 to output conditions, test cases should be designed to
produce the minimum and maximum output reports
2. If internal program data structures have boundaries (e.g. size limitations), be
certain to test the boundaries
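Continuing the range example from equivalence partitioning, boundary value analysis probes a, b and the values just outside them. A sketch (the 0..100 range is an assumption for illustration):

```java
// Illustrative boundary probes for an accepted range a..b = 0..100:
// test just below, on, and just above each boundary, since off-by-one
// errors cluster at the edges of the input domain.
public class BoundaryDemo {
    static boolean inRange(int v) {
        return v >= 0 && v <= 100;
    }

    public static void main(String[] args) {
        int[] probes = {-1, 0, 1, 99, 100, 101};  // around both boundaries
        for (int p : probes)
            System.out.println(p + " -> " + inRange(p));
    }
}
```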
Comparison Testing:
Black-box testing for safety critical systems in which independently developed
implementations of redundant systems are tested for conformance to specifications
Often equivalence class partitioning is used to develop a common set of test cases for
each implementation
Orthogonal Array Testing 1:
Black-box technique that enables the design of a reasonably small set of test cases that
provide maximum test coverage
Focus is on categories of faulty logic likely to be present in the software component
(without examining the code)
Orthogonal Array Testing 2:
Priorities for assessing tests using an orthogonal array
Detect and isolate all single mode faults
Detect all double mode faults
Multimode faults
Software Testing Strategies:
Strategic Approach to Testing 1:
Testing begins at the component level and works outward toward the integration of the
entire computer-based system.
Different testing techniques are appropriate at different points in time.
The developer of the software conducts testing and may be assisted by independent test
groups for large projects.
The role of the independent tester is to remove the conflict of interest inherent when the
builder is testing his or her own product.
Strategic Approach to Testing 2:
Testing and debugging are different activities.
Debugging must be accommodated in any testing strategy.
Need to consider verification issues
are we building the product right?
Need to consider validation issues
are we building the right product?
Verification vs validation:
Verification:
"Are we building the product right" The software should conform to its specification
Validation:
"Are we building the right product" The software should do what the user really requires
The V & V process:
As a whole life-cycle process - V & V must be applied at each stage in the software
process.
Has two principal objectives
The discovery of defects in a system
The assessment of whether or not the system is usable in an operational situation.
Strategic Testing Issues 1:
Specify product requirements in a quantifiable manner before testing starts.
Specify testing objectives explicitly.
Identify the user classes of the software and develop a profile for each.
Develop a test plan that emphasizes rapid cycle testing.
Strategic Testing Issues 2:
Build robust software that is designed to test itself (e.g. use anti-bugging).
Use effective formal reviews as a filter prior to testing.
Conduct formal technical reviews to assess the test strategy and test cases.
Testing Strategy:
Unit Testing:
Program reviews.
Formal verification.
Testing the program itself.
black box and white box testing.
Black Box or White Box?:
Maximum # of logic paths - determine if white box testing is possible.
Nature of input data.
Amount of computation involved.
Complexity of algorithms.
Unit Testing Details:
Interfaces tested for proper information flow.
Local data are examined to ensure that integrity is maintained.
Boundary conditions are tested.
Basis path testing should be used.
All error handling paths should be tested.
Drivers and/or stubs need to be developed to test incomplete software.
Unit Testing:
Integration Testing:
Bottom - up testing (test harness).
Top - down testing (stubs).
Regression Testing.
Smoke Testing
Regression testing may be used to ensure that new errors are not introduced.
Bottom-Up Integration:
from a legacy system can be a problem. If you have one million customers to load into Oracle
receivables at the rate of 5,000/hour and the database administrator allows you to load 20 hours
per day, you have a 10-day task.
Because they spend huge amounts of money on their ERP systems, many big companies
try to optimize the systems and capture specific returns on the investment. However, sometimes
companies can be incredibly insensitive and uncoordinated as they try to make money from their
ERP software. For example, one business announced at the beginning of a project that the
accounts payable department would be cut from 50 to 17 employees as soon as the system went
live. Another company decided to centralize about 30 accounting sites into one shared service
center and advised about 60 accountants that they would lose their jobs in about a year. Several
of the 60 employees were offered positions on the ERP implementation team.
Small companies have other problems when creating an implementation team. Occasionally, the
small company tries to put clerical employees on the team and they have problems with issue
resolution or some of the ERP concepts. In another case, one small company didn't create the
position of project manager. Each department worked on its own modules and ignored the
integration points, testing, and requirements of other users. When Y2K deadlines forced the
system startup, results were disastrous with a cost impact that doubled the cost of the entire
project.
Project team members at small companies sometimes have a hard time relating to the cost
of the implementation. We once worked with a company where the project manager (who was
also the database administrator) advised us within the first hour of our meeting that he thought
consulting charges of $3/minute were outrageous, and he couldn't rationalize how we could
possibly make such a contribution. We agreed a consultant could not contribute $3 in value each
and every minute to his project. However, when we told him we would be able to save him
$10,000/week and make the difference between success and failure, he realized we should get to
work.
Because the small company might be relatively simple to implement and the technical
staff might be inexperienced with the database and software, it is possible that the technical staff
will be on the critical path of the project. If the database administrator can't learn how to handle
the production database by the time the users are ready to go live, you might need to hire some
temporary help to enable the users to keep to the schedule. In addition, we often see small
companies with just a single database administrator who might be working 60 or more hours per
week. They feel they can't afford to have more DBAs as employees, but they don't know how to
establish the right ratio of support staff to user requirements. These companies can burn out a
DBA quickly and then have to deal with the problem of replacing an important skill.
UNIT V
SOFTWARE PROJECT MANAGEMENT
Software metric
Any type of measurement which relates to a software system, process or related
documentation
Examples: lines of code in a program, the Fog index, the number of person-days required to
develop a component.
Allow the software and the software process to be quantified.
May be used to predict product attributes or to control the software process.
Product metrics can be used for general predictions or to identify anomalous components.
Predictor and control metrics
Metrics assumptions
A software property can be measured.
The relationship exists between what we can measure and what we want to know. We can
only measure internal attributes but are often more interested in external software attributes.
This relationship has been formalised and validated.
It may be difficult to relate what can be measured to desirable external quality attributes.
Internal and external attributes
Data collection
A metrics programme should be based on a set of product and process data.
Data should be collected immediately (not in retrospect) and, if possible, automatically.
Three types of automatic data collection
Static product analysis;
Dynamic product analysis;
Process data collation.
Data accuracy
Don't collect unnecessary data
The questions to be answered should be decided in advance and the required data
identified.
Tell people why the data is being collected.
It should not be part of personnel evaluation.
Don't rely on memory
Collect data when it is generated not after a project has finished.
Product metrics
A quality metric should be a predictor of product quality.
Classes of product metric
Dynamic metrics which are collected by measurements made of a program in
execution;
Static metrics which are collected by measurements made of the system
representations;
Dynamic metrics help assess efficiency and reliability; static metrics help assess
complexity, understandability and maintainability.
Dynamic and static metrics
Dynamic metrics are closely related to software quality attributes
It is relatively easy to measure the response time of a system (performance attribute)
or the number of failures (reliability attribute).
Static metrics have an indirect relationship with quality attributes
You need to try and derive a relationship between these metrics and properties such
as complexity, understandability and maintainability.
Software product metrics
Fan-in/fan-out: Fan-in is a measure of the number of functions or methods that call some
other function or method (say X). Fan-out is the number of functions that are called by
function X. A high value for fan-in means that X is tightly coupled to the rest of the design
and changes to X will have extensive knock-on effects. A high value for fan-out suggests
that the overall complexity of X may be high because of the complexity of the control logic
needed to coordinate the called components.
Length of code: A measure of the size of a program. Generally, the larger the size of the
code of a component, the more complex and error-prone that component is likely to be.
Length of code has been shown to be one of the most reliable metrics for predicting
error-proneness in components.
Cyclomatic complexity: A measure of the control complexity of a program. This control
complexity may be related to program understandability. (Computation of cyclomatic
complexity is discussed in Chapter 22.)
Length of identifiers: A measure of the average length of distinct identifiers in a program.
The longer the identifiers, the more likely they are to be meaningful and hence the more
understandable the program.
Depth of conditional nesting: A measure of the depth of nesting of if-statements in a
program. Deeply nested if-statements are hard to understand and are potentially error-prone.
Fog index: A measure of the average length of words and sentences in documents. The
higher the value of the Fog index, the more difficult the document is to understand.
Object-oriented metrics
Examples include method fan-in/fan-out (fan-in and fan-out applied at the level of
individual methods) and the number of overriding operations (inherited operations that a
subclass redefines).
Measurement analysis
It is not always obvious what data means
Analysing collected data is very difficult.
Professional statisticians should be consulted if available.
Data analysis must take local circumstances into account.
Measurement surprises
Reducing the number of faults in a program leads to an increased number of help desk calls
The program is now thought of as more reliable and so has a wider more diverse
market. The percentage of users who call the help desk may have decreased but the
total may increase;
A more reliable system is used in a different way from a system where users work
around the faults. This leads to more help desk calls.
Zipf's Law
Zipf's law is the observation that the frequency of occurrence of some event P, as a function
of its rank i (where rank is determined by the frequency of occurrence), is a power-law
function P(i) ~ 1/i^a, with the exponent a close to unity (1).
There is not a simple relationship between the development cost and the price charged to the
customer.
Broader organisational, economic, political and business considerations influence the price
charged.
Software productivity
A measure of the rate at which individual engineers involved in software development
produce software and associated documentation.
Not quality-oriented although quality assurance is a factor in productivity assessment.
Essentially, we want to measure useful functionality produced per time unit.
Productivity measures
Size related measures based on some output from the software process. This may be lines of
delivered source code, object code instructions, etc.
Function-related measures based on an estimate of the functionality of the delivered
software. Function-points are the best known of this type of measure.
Measurement problems
Estimating the size of the measure (e.g. how many function points).
Estimating the total number of programmer months that have elapsed.
Estimating contractor productivity (e.g. documentation team) and incorporating this
estimate in overall estimate.
Lines of code
The measure was first proposed when programs were typed on cards with one line per card.
How does this correspond to statements in a language such as Java, where a statement can
span several lines or several statements can appear on one line?
Productivity comparisons
The lower level the language, the more productive the programmer
The same functionality takes more code to implement in a lower-level language than
in a high-level language.
The more verbose the programmer, the higher the productivity
Measures of productivity based on lines of code suggest that programmers who write
verbose code are more productive than programmers who write compact code.
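The paradox can be seen with a small calculation: if the same function takes 5,000 lines of assembler or 1,500 lines of Java for the same total effort, the LOC measure rates the assembler programmer as more productive (the figures are illustrative assumptions):

```java
// Illustrative arithmetic: identical functionality delivered for
// identical effort, but LOC-based productivity favours the verbose,
// lower-level implementation.
public class LocParadox {
    static double locPerMonth(int loc, double personMonths) {
        return loc / personMonths;
    }

    public static void main(String[] args) {
        double asm = locPerMonth(5000, 10.0);  // assembler version, 10 person-months
        double hll = locPerMonth(1500, 10.0);  // high-level version, same effort
        System.out.println("assembler:  " + asm + " LOC/month");
        System.out.println("high-level: " + hll + " LOC/month");
    }
}
```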
Function Point model
Function points
Based on a combination of program characteristics
external inputs and outputs;
user interactions;
external interfaces;
files used by the system.
A weight is associated with each of these and the function point count is computed by
multiplying each raw count by the weight and summing all values.
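A hedged sketch of the unadjusted count described above (the weights shown are illustrative "average complexity" values, not prescribed by this text):

```java
// Illustrative unadjusted function point count: each raw count is
// multiplied by its weight and the products are summed.
public class FunctionPointDemo {
    static int unadjustedFP(int[] counts, int[] weights) {
        int fp = 0;
        for (int i = 0; i < counts.length; i++)
            fp += counts[i] * weights[i];
        return fp;
    }

    public static void main(String[] args) {
        // Counts for: external inputs, external outputs, user interactions,
        // external interfaces, files used by the system (assumed example).
        int[] counts  = {10, 7, 5, 2, 4};
        int[] weights = {4, 5, 4, 7, 10};   // assumed average-complexity weights
        System.out.println("UFP = " + unadjustedFP(counts, weights));
    }
}
```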
COCOMO 81
Simple: PM = 2.4 (KDSI)^1.05 x M. Well-understood applications developed by small teams.
Moderate: PM = 3.0 (KDSI)^1.12 x M. More complex projects where team members may have
limited experience of related systems.
Embedded: PM = 3.6 (KDSI)^1.20 x M. Complex projects where the software is part of a
strongly coupled complex of hardware, software, regulations and operational procedures.
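Applying the moderate formula to an assumed 32 KDSI project with multiplier M = 1 gives a quick effort estimate:

```java
// Illustrative COCOMO 81 basic effort computation: PM = a * KDSI^b * M,
// where a and b come from the project-complexity row of the table above
// and M is the product of the cost-driver multipliers.
public class CocomoDemo {
    static double effort(double a, double b, double kdsi, double m) {
        return a * Math.pow(kdsi, b) * m;
    }

    public static void main(String[] args) {
        // Moderate project of 32 KDSI, multiplier M = 1 (assumed example).
        System.out.printf("PM = %.1f person-months%n",
                          effort(3.0, 1.12, 32, 1.0));
    }
}
```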
COCOMO 2
COCOMO 81 was developed with the assumption that a waterfall process would be used and
that all software would be developed from scratch.
Since its formulation, there have been many changes in software engineering practice and
COCOMO 2 is designed to accommodate different approaches to software development.
COCOMO 2 models
COCOMO 2 incorporates a range of sub-models that produce increasingly detailed software
estimates.
The sub-models in COCOMO 2 are:
Application composition model. Used when software is composed from existing
parts.
Early design model. Used when requirements are available but design has not yet
started.
Reuse model. Used to compute the effort of integrating reusable components.
Post-architecture model. Used once the system architecture has been designed and
more information about the system is available.
Multipliers
Product attributes
Concerned with required characteristics of the software product being developed.
Computer attributes
Constraints imposed on the software by the hardware platform.
Personnel attributes
Multipliers that take the experience and capabilities of the people working on the
project into account.
Project attributes
Concerned with the particular characteristics of the software development project.
Delphi method
The Delphi method is a systematic, interactive forecasting method which relies on a panel
of experts. The experts answer questionnaires in two or more rounds. After each round, a
facilitator provides an anonymous summary of the experts' forecasts from the previous round as
well as the reasons they provided for their judgments. Thus, experts are encouraged to revise
their earlier answers in light of the replies of other members of their panel. It is believed that
during this process the range of the answers will decrease and the group will converge towards
the "correct" answer. Finally, the process is stopped after a pre-defined stop criterion (e.g.
number of rounds, achievement of consensus, stability of results) and the mean or median scores
of the final rounds determine the results.
Scheduling
Scheduling Principles
compartmentalization: define distinct tasks
interdependency: indicate task interrelationships
effort validation: be sure resources are available
defined responsibilities: people must be assigned
defined outcomes: each task must have an output
defined milestones: review for quality
Effort and Delivery Time
The relationship between applied effort and delivery time is
Ea = m (td^4 / ta^4)
where Ea is the effort in person-months, td is the nominal delivery time for the schedule, ta is
the actual delivery time, and m is a constant.
[Figure: effort plotted against development time. Effort Ea rises steeply as the schedule is
compressed below the nominal delivery time td (Eo is the effort at the optimal time to), and
there is an impossible region below Tmin = 0.75 td.]
Empirical Relationship: P vs E
Given Putnam's Software Equation (5-3),
E = L^3 / (P^3 t^4)
where E is development effort, L is delivered lines of code, P is a productivity parameter and
t is development time.
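Putnam's equation makes the effort/schedule trade-off explicit: because effort varies as 1/t^4, halving the development time multiplies the required effort by 16. A sketch (the size and productivity-parameter values are illustrative assumptions):

```java
// Illustrative use of Putnam's software equation, E = L^3 / (P^3 * t^4):
// effort grows as the fourth power of schedule compression.
public class PutnamDemo {
    static double effort(double loc, double p, double tYears) {
        return Math.pow(loc, 3) / (Math.pow(p, 3) * Math.pow(tYears, 4));
    }

    public static void main(String[] args) {
        double nominal    = effort(50000, 5000, 2.0);  // assumed 2-year schedule
        double compressed = effort(50000, 5000, 1.0);  // compressed to 1 year
        // Halving t multiplies the effort by 2^4 = 16.
        System.out.println("compression factor = " + (compressed / nominal));
    }
}
```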
Timeline Charts
Effort Allocation
front end activities
customer communication
analysis
design
review and modification
construction activities
coding or code generation
testing and installation
unit, integration
white-box, black box
regression
Defining Task Sets
determine type of project
concept development, new application development, application enhancement,
application maintenance, and reengineering projects
Actual cost of work performed, ACWP, is the sum of the effort actually expended on work
tasks that have been completed by a point in time on the project schedule. It is then possible
to compute
Cost performance index, CPI = BCWP/ACWP
Cost variance, CV = BCWP - ACWP
Problem
Assume you are a software project manager and that you've been asked to compute earned
value statistics for a small software project. The project has 56 planned work tasks that are
estimated to require 582 person-days to complete. At the time that you've been asked to do
the earned value analysis, 12 tasks have been completed. However, the project schedule
indicates that 15 tasks should have been completed. The following scheduling data (in
person-days) are available:
Task   Planned Effort   Actual Effort
1      12.0             12.5
2      15.0             11.0
3      13.0             17.0
4      8.0              9.5
5      9.5              9.0
6      18.0             19.0
7      10.0             10.0
8      4.0              4.5
9      12.0             10.0
10     6.0              6.5
11     5.0              4.0
12     14.0             14.5
13     16.0             -
14     6.0              -
15     8.0              -
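Using the planned and actual figures tabulated above, the earned-value quantities can be computed directly (the class is an illustrative sketch; the arrays reproduce the problem data):

```java
// Earned value computation for the problem above: BCWP is the planned
// value of the 12 completed tasks, BCWS the planned value of the 15
// tasks that should have been completed, ACWP the actual cost of the
// 12 completed tasks.
public class EarnedValueDemo {
    // Planned effort (person-days) for tasks 1..15.
    static double[] planned = {12, 15, 13, 8, 9.5, 18, 10, 4, 12, 6, 5, 14, 16, 6, 8};
    // Actual effort for the 12 completed tasks.
    static double[] actual = {12.5, 11, 17, 9.5, 9, 19, 10, 4.5, 10, 6.5, 4, 14.5};

    static double sum(double[] a, int n) {
        double s = 0;
        for (int i = 0; i < n; i++) s += a[i];
        return s;
    }

    public static void main(String[] args) {
        double bcwp = sum(planned, 12);  // value of work actually completed
        double bcws = sum(planned, 15);  // value of work scheduled by now
        double acwp = sum(actual, 12);   // cost of work actually completed
        System.out.printf("BCWP=%.1f BCWS=%.1f ACWP=%.1f%n", bcwp, bcws, acwp);
        System.out.printf("SPI=%.3f CPI=%.3f%n", bcwp / bcws, bcwp / acwp);
        System.out.printf("SV=%.1f CV=%.1f%n", bcwp - bcws, bcwp - acwp);
    }
}
```

With these figures SPI is well below 1 (the project is behind schedule) while CPI is close to 1 (spending is roughly on budget).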
Error Tracking
Schedule Tracking
conduct periodic project status meetings in which each team member reports progress
and problems.
evaluate the results of all reviews conducted throughout the software engineering
process.
determine whether formal project milestones (diamonds in previous slide) have been
accomplished by the scheduled date.
compare actual start-date to planned start-date for each project task listed in the
resource table
meet informally with practitioners to obtain their subjective assessment of progress to
date and problems on the horizon.
use earned value analysis to assess progress quantitatively.
Progress on an OO Project-I
Importance of evolution
Organizations have huge investments in their software systems - they are critical business
assets.
To maintain the value of these assets to the business, they must be changed and updated.
The majority of the software budget in large companies is devoted to evolving existing
software rather than developing new software.
Software change
Software change is inevitable
New requirements emerge when the software is used;
The business environment changes;
Errors must be repaired;
New computers and equipment are added to the system;
The performance or reliability of the system may have to be improved.
A key problem for organisations is implementing and managing change to their existing
software systems.
Lehman's laws
Continuing change: A program used in a real-world environment must change, or it will
become progressively less useful in that environment.
Increasing complexity: As an evolving program changes, its structure becomes more
complex unless active efforts are made to avoid this.
Large program evolution: Program evolution is a self-regulating process; system attributes
such as size, time between releases and error rates are approximately invariant for each
release.
Organisational stability: Over a program's lifetime, its rate of development is approximately
constant and independent of the resources devoted to it.
Conservation of familiarity: Over the lifetime of a system, the incremental change in each
release is approximately constant.
Continuing growth: The functionality offered by a system must continually increase to
maintain user satisfaction.
Declining quality: The quality of a system will decline unless it is modified to reflect
changes in its operational environment.
Feedback system: Evolution processes incorporate multi-agent, multi-loop feedback systems.
(The applicability of these laws to small organisations and medium-sized systems is less
certain than for large systems.)
Software maintenance
Modifying a program after it has been put into use or delivered.
Maintenance does not normally involve major changes to the systems architecture.
Changes are implemented by modifying existing components and adding new components to
the system.
Maintenance is inevitable
The system requirements are likely to change while the system is being developed because
the environment is changing. Therefore a delivered system won't meet its requirements!
Systems are tightly coupled with their environment. When a system is installed in an
environment it changes that environment and therefore changes the system requirements.
Systems must therefore be maintained if they are to remain useful in their environment.
Types of maintenance
Maintenance to repair software faults
Code, design and requirements errors.
Code and design errors are relatively cheap to fix; requirements errors are the most expensive.
Maintenance to adapt software to a different operating environment
Changing a system's hardware and other support so that it operates in a different
environment (computer, OS, etc.) from its initial implementation.
Maintenance to add to or modify the system's functionality
Modifying the system to satisfy new requirements arising from organisational or business
change.
Distribution of maintenance effort
Maintenance costs
Usually greater than development costs (typically 2 to 100 times development cost, depending on the application).
Development/maintenance costs
Complexity metrics
Predictions of maintainability can be made by assessing the complexity of system
components.
Studies have shown that most maintenance effort is spent on a relatively small number of the
components of a complex system.
Maintenance costs can be reduced by replacing complex components with simpler alternatives.
Complexity depends on
Complexity of control structures;
Complexity of data structures;
Object, method (procedure) and module size.
Process metrics
Process measurements may be used to assess maintainability
Number of requests for corrective maintenance;
Average time required for impact analysis;
Average time taken to implement a change request;
Number of outstanding change requests.
If any or all of these are increasing, this may indicate a decline in maintainability.
In the COCOMO 2 model, maintenance effort = effort to understand the existing code + effort to develop the new code.
Project management
Objectives
To explain the main tasks undertaken by project managers
To introduce software project management and to describe its distinctive characteristics
To discuss project planning and the planning process
Types of project plan
Quality plan: Describes the quality procedures and standards that will be used in a project.
Validation plan: Describes the approach, resources and schedule used for system validation.
Configuration management plan: Describes the configuration management procedures and
structures to be used.
Maintenance plan: Predicts the maintenance requirements of the system, maintenance costs
and effort required.
Staff development plan: Describes how the skills and experience of the project team
members will be developed.
project plan
The project plan sets out:
resources available to the project
work breakdown
schedule for the work.
Project plan structure
Introduction: objectives, budget, timescales.
Project organisation: roles of the people involved.
Risk analysis: risks that may arise and how to reduce them.
Hardware and software resource requirements.
Work breakdown: the project broken into activities and milestones.
Project schedule: time, and the allocation of people to activities.
Monitoring and reporting mechanisms.
Milestones and deliverables
Milestones are the end-points of a process activity, e.g. a report presented to management.
Deliverables are project results delivered to customers.
Milestones need not be deliverables; they may be used internally by project managers and
not delivered to customers.
The waterfall process allows for the straightforward definition of progress milestones.
Milestones in requirement process
Project scheduling
Split project into tasks and estimate time and resources required to complete each task.
Organize tasks concurrently to make optimal
use of workforce.
Minimize task dependencies to avoid delays
caused by one task waiting for another to complete.
Dependent on the project manager's intuition and experience.
Scheduling problems
Estimating the difficulty of problems and hence the cost of developing a solution is hard.
Productivity is not proportional to the number of people working on a task.
Adding people to a late project makes it later because of communication overheads.
The unexpected always happens. Always allow contingency in planning.
Bar charts and activity networks
Graphical notations used to illustrate the project schedule.
Show project breakdown into tasks. Tasks should not be too small; each should
take about a week or two.
Activity charts show task dependencies and the critical path.
Bar charts show schedule against calendar time.
Task durations and dependencies

Activity   Duration (days)   Dependencies
T1         8                 -
T2         15                -
T3         15                T1 (M1)
T4         10                -
T5         10                T2, T4 (M2)
T6         5                 T1, T2 (M3)
T7         20                T1 (M1)
T8         25                T4 (M5)
T9         15                T3, T6 (M4)
T10        15                T5, T7 (M7)
T11        7                 T9 (M6)
T12        10                T11 (M8)
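The dependency table above can be turned into a schedule mechanically. The sketch below (Python, not part of the notes) computes the earliest finish time of each task by a recursive pass over the dependency graph; the longest chain gives the minimum project duration, i.e. the critical path length.

```python
from functools import lru_cache

# Minimal sketch (not from the notes): compute earliest finish times from
# the task table above. Durations and dependencies are copied from the
# table; milestones (M1-M8) are dropped since they take no time.
durations = {"T1": 8, "T2": 15, "T3": 15, "T4": 10, "T5": 10, "T6": 5,
             "T7": 20, "T8": 25, "T9": 15, "T10": 15, "T11": 7, "T12": 10}
deps = {"T3": ["T1"], "T5": ["T2", "T4"], "T6": ["T1", "T2"],
        "T7": ["T1"], "T8": ["T4"], "T9": ["T3", "T6"],
        "T10": ["T5", "T7"], "T11": ["T9"], "T12": ["T11"]}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Earliest finish = own duration + latest earliest finish of dependencies."""
    start = max((earliest_finish(d) for d in deps.get(task, ())), default=0)
    return start + durations[task]

project_length = max(earliest_finish(t) for t in durations)
print(project_length)  # 55 working days (critical path T1 -> T3 -> T9 -> T11 -> T12)
```

The 55 working days (11 weeks) agree with the 4/7/03 start and 19/9/03 finish shown in the activity network.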
Activity network (figure): nodes show tasks T1-T12 with their durations and
milestones M1-M8 with their dates; the project starts on 4/7/03 and finishes
on 19/9/03.
Activity timeline (figure): bar chart showing each task and milestone against
calendar time from 4/7 to 19/9.
Staff allocation
Bar chart of task assignments against calendar time (4/7 to 19/9):
Fred: T4, T8, T11, T12
Jane: T1, T3, T9
Anne: T2, T6
Jim: T7
Mary: T10, T5
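An allocation like the chart above can be sanity-checked in code. This is a small illustrative sketch (not from the notes): every task in the schedule should be assigned to exactly one person.

```python
# Staff allocation as read from the chart above; the check itself is
# illustrative, not part of the notes.
allocation = {
    "Fred": ["T4", "T8", "T11", "T12"],
    "Jane": ["T1", "T3", "T9"],
    "Anne": ["T2", "T6"],
    "Jim":  ["T7"],
    "Mary": ["T10", "T5"],
}
tasks = {f"T{i}" for i in range(1, 13)}  # T1..T12

assigned = [t for ts in allocation.values() for t in ts]
assert len(assigned) == len(set(assigned)), "a task is assigned twice"
assert set(assigned) == tasks, "some task is unassigned"
print("allocation consistent")
```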
Risk management
Risk management involves identifying risks and drawing up plans to minimise their effect on a
project.
A risk is the probability that some adverse circumstance will occur.
Project risks: affect the schedule or resources, e.g. loss of an experienced designer.
Product risks: affect the quality or performance of the software being developed,
e.g. failure of a purchased component.
Business risks: affect the organisation developing the software, e.g. a competitor
introducing a new product.
Software risks
Risk | Affects | Description
Staff turnover | Project | Experienced staff will leave the project before it is finished.
Management change | Project | There will be a change of organisational management with different priorities.
Hardware unavailability | Project | Hardware that is essential for the project will not be delivered on schedule.
Requirements change | Project and product | There will be a larger number of changes to the requirements than anticipated.
Specification delays | Project and product | Specifications of essential interfaces are not available on schedule.
Size underestimate | Project and product | The size of the system has been underestimated.
CASE tool underperformance | Product | CASE tools which support the project do not perform as anticipated.
Technology change | Business | The underlying technology on which the system is built is superseded by new technology.
Product competition | Business | A competitive product is marketed before the system is completed.
Risk identification
Discovering possible risks:
Technology risks.
People risks.
Organisational risks.
Tool risks.
Requirements risks.
Estimation risks.
Risks and risk types
Risk type: possible risks
Technology:
The database used in the system cannot process as many transactions per second as expected.
Software components that should be reused contain defects that limit their functionality.
People:
It is impossible to recruit staff with the skills required.
Key staff are ill and unavailable at critical times.
Required training for staff is not available.
Organisational:
The organisation is restructured so that different management are responsible for the project.
Organisational financial problems force reductions in the project budget.
Tools:
The code generated by CASE tools is inefficient.
CASE tools cannot be integrated.
Requirements:
Changes to requirements that require major design rework are proposed.
Customers fail to understand the impact of requirements changes.
Estimation:
The time required to develop the software is underestimated.
The rate of defect repair is underestimated.
The size of the software is underestimated.
Risk analysis
Make a judgement about the probability and seriousness of each identified risk.
This judgement is made by experienced project managers.
Probability may be very low (<10%), low (10-25%), moderate (25-50%), high (50-75%) or
very high (>75%): not a precise value, only a range.
Risk effects might be catastrophic, serious, tolerable or insignificant.
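The probability bands above can be captured by a small hypothetical helper (not part of the notes) that classifies a numeric estimate:

```python
# Illustrative helper, not from the notes: map a probability estimate in
# [0, 1] onto the five bands used for risk analysis. Boundary values are
# assigned to the higher band here; the notes leave this unspecified.
def probability_band(p):
    if p < 0.10:
        return "very low"
    elif p < 0.25:
        return "low"
    elif p < 0.50:
        return "moderate"
    elif p < 0.75:
        return "high"
    return "very high"

print(probability_band(0.30))  # moderate
```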
Risk | Probability | Effects
Organisational financial problems force reductions in the project budget. | Low | Catastrophic
It is impossible to recruit staff with the skills required for the project. | High | Catastrophic
Key staff are ill at critical times in the project. | Moderate | Serious
Software components that should be reused contain defects which limit their functionality. | Moderate | Serious
Changes to requirements that require major design rework are proposed. | Moderate | Serious
The organisation is restructured so that different management are responsible for the project. | High | Serious
The database used in the system cannot process as many transactions per second as expected. | Moderate | Serious
The time required to develop the software is underestimated. | High | Serious
CASE tools cannot be integrated. | High | Tolerable
Customers fail to understand the impact of requirements changes. | Moderate | Tolerable
Required training for staff is not available. | Moderate | Tolerable
The rate of defect repair is underestimated. | Moderate | Tolerable
The size of the software is underestimated. | High | Tolerable
The code generated by CASE tools is inefficient. | Moderate | Insignificant
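To decide which risks deserve planning effort, the table can be ranked by effect first and probability second. A simple sketch (risk names abbreviated; the numeric rankings are illustrative, not from the notes):

```python
# Illustrative ranking: lower rank number = more severe / more likely.
EFFECT_RANK = {"catastrophic": 0, "serious": 1, "tolerable": 2, "insignificant": 3}
PROB_RANK = {"very high": 0, "high": 1, "moderate": 2, "low": 3, "very low": 4}

# A few rows abbreviated from the risk analysis table above.
risks = [
    ("Organisational financial problems", "low", "catastrophic"),
    ("Cannot recruit skilled staff", "high", "catastrophic"),
    ("CASE tools cannot be integrated", "high", "tolerable"),
    ("Inefficient generated code", "moderate", "insignificant"),
]

# Sort by effect, then by probability, so the worst risks come first.
ranked = sorted(risks, key=lambda r: (EFFECT_RANK[r[2]], PROB_RANK[r[1]]))
for name, prob, effect in ranked:
    print(f"{effect:>13}  {prob:>8}  {name}")
```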
Risk planning
Consider each identified risk and develop a strategy to manage that risk.
Strategies fall into three categories:
Avoidance strategies:
The probability that the risk will arise is reduced.
Minimisation strategies:
The impact of the risk on the project will be reduced.
Contingency plans:
If the risk arises, contingency plans are plans to deal with that risk,
e.g. organisational financial problems.
Risk | Strategy
Organisational financial problems | Prepare a briefing document for senior management showing how the project is making a very important contribution to the goals of the business.
Recruitment problems | Alert the customer of potential difficulties and the possibility of delays; investigate buying-in components.
Staff illness | Reorganise the team so that there is more overlap of work and people therefore understand each other's jobs.
Defective components | Replace potentially defective components with bought-in components of known reliability.
Requirements changes | Derive traceability information to assess requirements change impact; maximise information hiding in the design.
Organisational restructuring | Prepare a briefing document for senior management showing how the project is making a very important contribution to the goals of the business.
Database performance | Investigate the possibility of buying a higher-performance database.
Underestimated development time | Investigate buying-in components; investigate use of a program generator.
Risk monitoring
Assess each identified risk regularly to decide whether it is becoming more or less
probable.
Also assess whether the effects of the risk have changed.
Risks cannot be observed directly; factors affecting them give clues (risk indicators).
Each key risk should be discussed at management progress meetings and reviews.
Risk indicators
Risk type | Potential indicators
Technology | Late delivery of hardware or support software; many reported technology problems.
People | Poor staff morale; poor relationships amongst team members; job availability.
Organisational | Organisational gossip; lack of action by senior management.
Tools | Reluctance by team members to use tools; complaints about CASE tools.
Requirements | Many requirements change requests; customer complaints.
Estimation | Failure to meet agreed schedule; failure to clear reported defects.
CASE Tools
Computer-Aided Software Engineering
Prerequisites to tool use
Need a collection of useful tools that help in every step of building a product
Need an organized layout that enables tools to be found quickly and used
efficiently
Need a skilled craftsperson who understands how to use the tools effectively
CASE Tools
Upper CASE: requirements, specification, planning, design
Lower CASE: implementation, integration, maintenance
CASE tool classification
Functional perspective
Process perspective
Integration perspective
CASE Tool Taxonomy
Project planning tools
- used for cost and effort estimation, and project scheduling
Business process engineering tools
- represent business data objects, their relationships, and flow of the data objects
between company business areas
Process modeling and management tools
- represent key elements of processes and provide links to other tools that provide
support to defined process activities
Risk analysis tools
- help project managers build risk tables by providing detailed guidance in the
identification and analysis of risks
Requirements tracing tools
- provide systematic database-like approach to tracking requirement status
beginning with specification
Quality assurance tools
- metrics tools that audit source code to determine compliance with language
standards, or tools that extract metrics to project the quality of the software being built
Documentation tools
- provide opportunities for improved productivity by reducing the amount of time
needed to produce work products
Database management tools
- RDBMS and OODBMS serve as the foundation for the establishment of the CASE
repository
Software configuration management tools
- use the CASE repository to assist with all SCM tasks (identification, version
control, change control, auditing, status accounting)
Analysis and design tools
- enable the software engineer to create analysis and design models of the system to
be built, perform consistency checking between models
Interface design and development tools
- toolkits of interface components, often part of an environment with a GUI to allow
rapid prototyping of user interface designs
Prototyping tools
- enable rapid definition of screen layouts, data design, and report generation
Programming tools
- compilers, editors, debuggers, OO programming environments, fourth generation
languages, graphical programming environments, applications generators, and
database query generators
Web development tools
- assist with the generation of web page text, graphics, forms, scripts, applets, etc.
Test management tools
- coordinate regression testing, compare actual and expected output, conduct batch
testing, and serve as generic test drivers
Client/server testing tools
- exercise the GUI and network communications requirements for the client and
server
Transaction control
Multi-user support