UNIT-4
Contents:
1. Software Reliability
2. Statistical Testing
3. Software Quality
5. ISO 9000
8. CASE Environment
1. SOFTWARE RELIABILITY
• The reliability of a software product essentially denotes its trustworthiness or
dependability.
• The reliability of a software product can also be defined as the probability of the
product working “correctly” over a given period of time.
• Intuitively, it is obvious that a software product having a large number of defects is
unreliable.
• It is also very reasonable to assume that the reliability of a system improves, as the
number of defects in it is reduced.
• Ideally, the relationship between the reliability of a system and the number of bugs present in it could be characterised mathematically by a simple closed-form expression.
• Unfortunately, it is very difficult to characterise the observed reliability of a system in terms of the number of latent defects in it using a simple mathematical expression.
• Removing errors from those parts of a software product that are very infrequently
executed, makes little difference to the perceived reliability of the product.
• It has been experimentally observed by analysing the behaviour of a large number of
programs that 90 per cent of the execution time of a typical program is spent in
executing only 10 per cent of the instructions in the program.
• The most used 10 per cent of the instructions are often called the core of a program.
• The remaining 90 per cent of the program statements are called non-core.
• Reliability also depends upon how the product is used, or on its execution profile.
• If the users execute only those features of a program that are “correctly”
implemented, none of the errors will be exposed and the perceived reliability of the
product will be high.
• On the other hand, if only those functions of the software which contain errors are invoked, then a large number of failures will be observed and the perceived reliability of the system will be very low.
• Different categories of users of a software product typically execute different
functions of a software product.
• Software reliability is more difficult to measure than hardware reliability: the reliability improvement due to fixing a single bug depends on where the bug is located in the code.
• The perceived reliability of a software product is observer-dependent.
• The reliability of a product keeps changing as errors are detected and fixed.
Hardware versus Software Reliability
• Hardware components fail due to very different reasons as compared to software
components.
• Hardware components fail mostly due to wear and tear, whereas software components
fail due to bugs.
• A logic gate may be stuck at 1 or 0, or a resistor might short circuit. To fix a hardware
fault, one has to either replace or repair the failed part.
• In contrast, a software product would continue to fail until the error is tracked down
and either the design or the code is changed to fix the bug.
• For this reason, when a hardware part is repaired its reliability would be maintained at
the level that existed before the failure occurred;
• whereas when a software failure is repaired, the reliability may either increase or
decrease (reliability may decrease if a bug fix introduces new errors).
• To put this fact in a different perspective, hardware reliability study is concerned with
stability (for example, the inter-failure times remain constant).
• On the other hand, the aim of software reliability study would be reliability growth
(that is, increase in inter-failure times).
• A comparison of the changes in failure rate over the product life time for a typical
hardware product as well as a software product are sketched in Figure 11.1.
• Observe that the plot of change of failure rate with time for a hardware component (Figure 11.1(a)) appears like a “bath tub”. The failure rate is initially high, but decreases as the faulty components are identified and either repaired or replaced.
• The system then enters its useful life, where the rate of failure is almost constant.
• After some time (called product life time ) the major components wear out, and the
failure rate increases.
• The initial failures are usually covered through manufacturer’s warranty.
• That is, one should not feel happy buying a ten-year-old car at one-tenth the price of a new car, since it would be near the rising edge of the bath tub curve; one would have to spend an unduly large amount of time, effort, and money on repairs and end up the loser.
• In contrast to hardware products, software products show the highest failure rate just after purchase and installation (see the initial portion of the plot in Figure 11.1(b)).
• As the system is used, more and more errors are identified and removed resulting in
reduced failure rate.
• This error removal continues at a slower pace during the useful life of the product. As
the software becomes obsolete no more error correction occurs and the failure rate
remains unchanged.
Reliability Metrics of Software Products
• The reliability requirements for different categories of software products may be
different.
• For this reason, it is necessary that the level of reliability required for a software
product should be specified in the software requirements specification (SRS)
document.
• In order to be able to do this, we need some metrics to quantitatively express the
reliability of a software product.
• A good reliability measure should be observer-independent, so that different people
can agree on the degree of reliability a system has.
• Six metrics that can be used to quantify the reliability of a software product are as follows:
• Rate of occurrence of failure (ROCOF): ROCOF measures the frequency of
occurrence of failures. ROCOF measure of a software product can be obtained by
observing the behaviour of a software product in operation over a specified time
interval and then calculating the ROCOF value as the ratio of the total number of
failures observed and the duration of observation.
• Mean time to failure (MTTF): MTTF is the time between two successive failures, averaged over a large number of failures. To measure MTTF, we can record the failure data for n failures.
o Let the failures occur at the time instants t1, t2, ..., tn. Then, MTTF can be calculated as MTTF = Σ (t(i+1) − ti)/(n − 1), summed over i = 1 to n − 1. It is important to note that only run time is considered in the time measurements. That is, the time for which the system is down to fix the error, the boot time, etc. are not taken into account, and the clock is stopped at these times.
• Mean time to repair (MTTR): Once failure occurs, some time is required to fix the
error. MTTR measures the average time it takes to track the errors causing the failure
and to fix them.
• Mean time between failures (MTBF): The MTTF and MTTR metrics can be combined to get the MTBF metric: MTBF = MTTF + MTTR. Thus, an MTBF of 300 hours indicates that once a failure occurs, the next failure is expected after 300 hours. In this case, the time measurements are real time, and not the execution time as in MTTF.
• Probability of failure on demand (POFOD): Unlike the other metrics discussed,
this metric does not explicitly involve time measurements. POFOD measures the
likelihood of the system failing when a service request is made.
o For example, a POFOD of 0.001 would mean that 1 out of every 1000 service
requests would result in a failure.
o POFOD metric is very appropriate for software products that are not required
to run continuously.
• Availability: Availability of a system is a measure of how likely the system is to be available for use over a given period of time. This metric not only considers the number of failures occurring during a time interval, but also takes into account the repair time (down time) of the system when a failure occurs.
o This metric is important for systems such as telecommunication systems, operating systems, and embedded controllers, which are expected never to be down, and where repair and restart times are significant and the loss of service during that time cannot be overlooked. (A computational sketch of these metrics follows this list.)
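The following Python sketch restates the six metrics as a small computation. It is illustrative only: the failure log, the assumed MTTR of 10 hours, and the expression of availability as uptime/(uptime + downtime) are assumptions made for the example, not values given in the text.

# Illustrative computation of the reliability metrics defined above.
# The failure log is hypothetical; for MTTF, times are run-time hours
# (down time excluded), as the definition above requires.

def mttf(failure_times):
    # Mean time to failure: average gap between successive failures
    # occurring at run-time instants t1 < t2 < ... < tn.
    n = len(failure_times)
    gaps = [failure_times[i + 1] - failure_times[i] for i in range(n - 1)]
    return sum(gaps) / (n - 1)

def rocof(num_failures, observation_period):
    # Rate of occurrence of failure: failures per unit observation time.
    return num_failures / observation_period

def pofod(num_failed_requests, num_requests):
    # Probability of failure on demand.
    return num_failed_requests / num_requests

def availability(uptime, downtime):
    # Fraction of time the system was available for use (assumed
    # formulation: uptime over total time).
    return uptime / (uptime + downtime)

# Hypothetical failure log: failures at these run-time instants (hours).
t = [90, 190, 310, 490, 690]
mttf_val = mttf(t)              # (100 + 120 + 180 + 200) / 4 = 150.0
mttr_val = 10                   # assumed average repair time (hours)
mtbf_val = mttf_val + mttr_val  # MTBF = MTTF + MTTR = 160.0

print(f"MTTF = {mttf_val} h, MTBF = {mtbf_val} h")
print(f"ROCOF = {rocof(len(t), 700):.4f} failures/hour")
print(f"POFOD = {pofod(1, 1000)}")  # 1 failure per 1000 service requests
print(f"Availability = {availability(690, 50):.3f}")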
Shortcomings of reliability metrics of software products:
• All the above reliability metrics suffer from several shortcomings as far as their use in
software reliability measurement is concerned.
o One of the reasons is that these metrics are centered around the probability of
occurrence of system failures but take no account of the consequences of
failures.
o That is, these reliability models do not distinguish the relative severity of
different failures.
o Failures which are transient and whose consequences are not serious are in
practice of little concern in the operational use of a software product.
However, this simple model of reliability, which implicitly assumes that all errors contribute equally to reliability growth, is highly unrealistic, since we already know that the correction of different errors contributes differently to reliability growth.
2. STATISTICAL TESTING
• Statistical testing is a testing process whose objective is to determine the reliability of
the product rather than discovering errors.
• The test cases for statistical testing are designed with an entirely different objective from those of conventional testing. To carry out statistical testing, we need to first define the operation profile of the product.
• Operation profile: Different categories of users may use a software product for very
different purposes.
o For example, a librarian might use the Library Automation Software to create
member records, delete member records, add books to the library, etc.,
o whereas a library member might use software to query about the availability of
a book, and to issue and return books.
o Formally, we can define the operation profile of a software product as the probability distribution over a user's selection of the different functionalities of the software.
o If we denote the set of various functionalities offered by the software by {fi}, the operation profile associates with each function fi the probability with which an average user would select fi as his next function to use. Thus, we can think of the operation profile as assigning a probability value pi to each functionality fi of the software.
• How to define the operation profile for a product? We need to divide the input data into a number of input classes.
o For example, for a graphical editor software, we might divide the input into
data associated with the edit, print, and file operations.
o We then need to assign a probability value to each input class; to signify the
probability for an input value from that class to be selected.
o The operation profile of a software product can be determined by observing and analysing the usage pattern of the software by a number of users (a small illustrative computation follows this list).
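As a small illustration of the idea above, the following Python sketch derives an operation profile from a hypothetical usage log by simple frequency counting. The functionality names and counts are invented for the example.

from collections import Counter

# Hypothetical usage log for the Library Automation Software:
# each entry records the functionality a user invoked.
usage_log = (
    ["query-availability"] * 500 + ["issue-book"] * 250 +
    ["return-book"] * 200 + ["create-member-record"] * 30 +
    ["add-book"] * 20
)

# Operation profile: probability pi of each functionality fi,
# estimated from the observed selection frequencies.
counts = Counter(usage_log)
total = sum(counts.values())
operation_profile = {f: c / total for f, c in counts.items()}

for f, p in sorted(operation_profile.items(), key=lambda kv: -kv[1]):
    print(f"{f}: p = {p:.3f}")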
Steps in Statistical Testing
• The first step is to determine the operation profile of the software.
• The next step is to generate a set of test data corresponding to the determined
operation profile.
• The third step is to apply the test cases to the software and record the time between each failure. After a statistically significant number of failures have been observed, the reliability can be computed (these steps are put together in the sketch at the end of this section).
• For accurate results, statistical testing requires some fundamental assumptions to be
satisfied.
• It requires a statistically significant number of test cases to be used.
• Pros and cons of statistical testing: Statistical testing allows one to concentrate on
testing parts of the system that are most likely to be used.
• Therefore, it results in a system that the users can find to be more reliable (than it actually is!).
• Also, the reliability estimate arrived at by using statistical testing is more accurate compared to those of the other methods discussed.
• However, it is not easy to perform statistical testing satisfactorily, for the following two reasons: there is no simple and repeatable way of defining operation profiles, and the number of test cases with which the system is to be tested must be statistically significant.
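The three steps above can be put together in a small test driver. The sketch below is a toy illustration: run_test is a hypothetical stand-in for executing the software on an input drawn from the given input class, and its 1 per cent failure rate and 0.1-hour run time are assumptions for the example. The recorded inter-failure times feed the MTTF computation described earlier.

import random

# Step 1: assumed operation profile (input class -> selection probability).
profile = {"edit": 0.6, "print": 0.3, "file": 0.1}

def run_test(input_class):
    # Hypothetical stand-in for running the software on a random input
    # from the given class; returns (passed, run_time_in_hours).
    return random.random() > 0.01, 0.1

# Steps 2 and 3: generate test cases per the profile, apply them, and
# record the run time observed between successive failures.
classes, probs = zip(*profile.items())
clock, last_failure, inter_failure_times = 0.0, 0.0, []
for _ in range(10_000):  # a statistically significant number of tests
    cls = random.choices(classes, weights=probs)[0]
    passed, run_time = run_test(cls)
    clock += run_time
    if not passed:
        inter_failure_times.append(clock - last_failure)
        last_failure = clock

# Reliability estimate: MTTF as the mean inter-failure run time.
print(f"{len(inter_failure_times)} failures observed")
print(f"estimated MTTF = "
      f"{sum(inter_failure_times) / len(inter_failure_times):.1f} h")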
3. SOFTWARE QUALITY
• Traditionally, the quality of a product is defined in terms of its fitness of purpose.
• That is, a good quality product does exactly what the users want it to do. For software products, fitness of purpose is usually interpreted in terms of satisfaction of the requirements laid down in the SRS document.
• Although “fitness of purpose” is a satisfactory definition of quality for many products
such as a car, a table fan, a grinding machine, etc.—“fitness of purpose” is not a
wholly satisfactory definition of quality for software products.
• To give an example of why this is so, consider a software product that is functionally
correct. That is, it correctly performs all the functions that have been specified in its
SRS document.
• Even though it may be functionally correct, we cannot consider it to be a quality
product, if it has an almost unusable user interface.
• The modern view of quality associates with a software product several quality factors (or attributes), such as the following:
• Portability : A software product is said to be portable, if it can be easily made to
work in different hardware and operating system environments, and easily interface
with external hardware devices and software products.
• Usability: A software product has good usability, if different categories of users (i.e.,
both expert and novice users) can easily invoke the functions of the product.
• Reusability: A software product has good reusability, if different modules of the
product can easily be reused to develop new products.
• Correctness: A software product is correct, if different requirements as specified in
the SRS document have been correctly implemented.
• Maintainability: A software product is maintainable, if errors can be easily corrected
as and when they show up, new functions can be easily added to the product, and the
functionalities of the product can be easily modified, etc.
McCall’s quality factors:
• McCall distinguishes two levels of quality attributes [McCall]. The higher-level attributes, known as quality factors or external attributes, can only be measured indirectly.
• The second-level quality attributes are called quality criteria.
• Quality criteria can be measured directly, either objectively or subjectively.
• By combining the ratings of several criteria, we can obtain a rating for a quality factor, or the extent to which it is satisfied (a small illustrative roll-up follows this list).
• For example, reliability cannot be measured directly, but can be estimated by measuring the number of defects encountered over a period of time. Thus, reliability is a higher-level quality factor, and the number of defects is a directly measurable quality criterion.
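One simple way to realise "combining the ratings of several criteria" is a weighted sum, as in the following Python sketch. McCall does not prescribe specific numbers here; the criteria, ratings, and weights below are invented purely for illustration.

# Illustrative roll-up of directly measured quality criteria into a
# rating for a higher-level quality factor (all numbers are assumed).
criteria_ratings = {           # measured directly, on a 0-10 scale
    "defect density": 7.0,
    "fault tolerance": 6.0,
    "recoverability": 8.0,
}
weights = {                    # assumed relative importance
    "defect density": 0.5,
    "fault tolerance": 0.3,
    "recoverability": 0.2,
}

# Quality factor rating (here, reliability) as the weighted average.
reliability_rating = sum(criteria_ratings[c] * weights[c] for c in weights)
print(f"reliability rating = {reliability_rating:.1f} / 10")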
ISO 9126:
• ISO 9126 defines a set of hierarchical quality characteristics.
• Quality control aims at correcting the causes of errors and not just rejecting the defective products.
• The next breakthrough in quality systems, was the development of the quality
assurance (QA) principles.
• The basic premise of modern quality assurance is that if an organisation’s processes
are good then the products are bound to be of good quality.
• The modern quality assurance paradigm includes guidance for recognising, defining,
analysing, and improving the production process.
• Total quality management (TQM) advocates that the process followed by an
organization must continuously be improved through process measurements.
• TQM goes a step further than quality assurance and aims at continuous process
improvement.
• TQM goes beyond documenting processes to optimising them through redesign.
Product Metrics versus Process Metrics:
• All modern quality systems lay emphasis on collection of certain product and process
metrics during product development.
• Let us first understand the basic differences between product and process metrics.
• Product metrics help measure the characteristics of a product being developed,
whereas process metrics help measure how a process is performing.
5. ISO 9000
• The International Organization for Standardization (ISO) is a consortium of 63 countries established to formulate and foster standardisation. ISO published its 9000 series of standards in 1987.
What is ISO 9000 Certification?
• ISO 9000 certification serves as a reference for contracts between independent parties.
• In particular, a company awarding a development contract can form its opinion about a possible vendor's performance based on whether the vendor has obtained ISO 9000 certification or not.
• In this context, the ISO 9000 standard specifies the guidelines for maintaining a
quality system.
• We have already seen that the quality system of an organisation applies to all its
activities related to its products or services.
• The ISO standard addresses both operational aspects (that is, the process) and
organisational aspects such as responsibilities, reporting, etc.
• It is important to realise that the ISO 9000 standard is a set of guidelines for the production process and is not directly concerned with the product itself.
• ISO 9000 is a series of three standards—ISO 9001, ISO 9002, and ISO 9003.
• The ISO 9000 series of standards are based on the premise that if a proper process is
followed for production, then good quality products are bound to follow
automatically.
The types of software companies to which the different ISO standards apply are as
follows:
• ISO 9001: This standard applies to the organisations engaged in design, development,
production, and servicing of goods. This is the standard that is applicable to most
software development organisations.
• ISO 9002: This standard applies to those organisations which do not design products
but are only involved in production.
• ISO 9003: This standard applies to organisations involved only in installation and
testing of products.
ISO 9000 for Software Industry
• ISO 9000 is a generic standard that is applicable to a wide range of industries, from steel manufacturing to service-rendering companies.
• Therefore, many of the clauses of the ISO 9000 documents are written using generic terminology, and it is very difficult to interpret them in the context of software development organisations.
• However, software plays a very important role in every big industry.
• So ISO 9000 is applicable to software industries as well.
8. CASE ENVIRONMENT
• If CASE tools are not integrated, then the data generated by one tool would have to be input manually to the other tools.
• This may also involve format conversions as the tools developed by different vendors
are likely to use different formats.
• CASE tools are characterised by the stage or stages of software development life
cycle on which they focus.
• Through a central repository, all the CASE tools in a CASE environment share common information among themselves.
• Thus a CASE environment facilitates the automation of the step-by-step
methodologies for software development.
• In contrast to a CASE environment, a programming environment is an integrated
collection of tools to support only the coding phase of software development.
• The tools commonly integrated in a programming environment are a text editor, a
compiler, and a debugger.
• The different tools are integrated to the extent that once the compiler detects an error, the editor automatically goes to the statements in error and highlights them.
• Examples of popular programming environments are Turbo C environment, Visual
Basic, Visual C++, etc.
• A schematic representation of a CASE environment is shown in Figure 12.1.
Code Generation
• More pragmatic support expected from a CASE tool during code generation phase are
the following:
o The CASE tool should support generation of module skeletons or templates in
one or more popular languages.
o It should be possible to include a copyright message, a brief description of the module, the author name, and the date of creation, in some selectable format.
o The tool should generate records, structures, and class definitions automatically from the contents of the data dictionary, in one or more popular programming languages.
o It should generate database tables for relational database management systems.
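A minimal sketch of the first point above, module-skeleton generation: this toy Python generator emits a C module skeleton from a string template. The template layout and field names are assumptions for illustration, not the output format of any particular CASE tool.

from datetime import date

# Toy module-skeleton generator; the template format is an assumption.
C_TEMPLATE = '''\
/* Copyright (c) {year} {organisation}.
 * Module : {module}
 * Purpose: {description}
 * Author : {author}   Created: {created}
 */

#include "{module}.h"

int {module}_init(void) {{
    /* TODO: generated skeleton -- fill in the module logic */
    return 0;
}}
'''

def generate_module_skeleton(module, description, author,
                             organisation="Example Org"):
    # Fill the template fields from the data dictionary entry (here,
    # plain arguments stand in for dictionary contents).
    return C_TEMPLATE.format(year=date.today().year,
                             organisation=organisation, module=module,
                             description=description, author=author,
                             created=date.today().isoformat())

print(generate_module_skeleton("book_catalog",
                               "Catalogue operations for the library",
                               "A. Author"))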
Documentation Support
• The deliverable documents should be organized graphically and should be able to
incorporate text and diagrams from the central repository.
• This helps in producing up-to-date documentation.
• The CASE tool should integrate with one or more of the commercially available
desktop publishing packages.
• It should be possible to export text, graphics, tables, data dictionary reports to the
DTP package in standard forms such as PostScript.
Project Management
• It should support collecting, storing, and analysing information on the software project's progress, such as estimated task durations, scheduled and actual task start and completion dates, and the dates and results of reviews.
External Interface
• The tool should allow exchange of information for reusability of design.
• The information to be exported by the tool should preferably be in ASCII format and support an open architecture.
Reverse Engineering Support
• The tool should support generation of structure charts and data dictionaries from the
existing source codes.
• It should populate the data dictionary from the source code.
• If the tool is used for re-engineering information systems, it should contain tools for converting indexed sequential file structures, and hierarchical and network databases, to relational database systems.
Data Dictionary Interface
• The data dictionary interface should provide view and update access to the entities
and relations stored in it.
• It should have a print facility to obtain hard copies of the viewed screens.
• It should provide analysis reports like cross-referencing, impact analysis, etc.
• Ideally, it should support a query language to view its contents (a minimal sketch of such reports follows this list).
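A minimal Python sketch of the cross-referencing and impact-analysis reports mentioned above, over a toy in-memory data dictionary; the entity names and the dictionary layout are invented for illustration.

# Toy in-memory data dictionary: entity -> type and referencing modules.
data_dictionary = {
    "member_record": {"type": "record", "used_by": ["issue", "return", "enrol"]},
    "book_record":   {"type": "record", "used_by": ["issue", "return", "catalog"]},
    "fine_amount":   {"type": "field",  "used_by": ["return"]},
}

def cross_reference(entity):
    # Cross-referencing report: where is this entity used?
    entry = data_dictionary[entity]
    return f"{entity} ({entry['type']}) used by: {', '.join(entry['used_by'])}"

def impact_analysis(module):
    # Impact analysis: which entities would a change to this module touch?
    return [e for e, v in data_dictionary.items() if module in v["used_by"]]

print(cross_reference("book_record"))
print("changing 'return' touches:", impact_analysis("return"))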
User interface
• The user interface provides a consistent framework for accessing the different tools
thus making it easier for the users to interact with the different tools and reducing the
overhead of learning how the different tools are used.
Object management system and repository
• Different CASE tools represent the software product as a set of entities such as specification, design, text data, project plan, etc.
• The object management system maps these logical entities into the underlying storage
management system (repository).
• Commercial relational database management systems are geared towards supporting large volumes of information structured as simple, relatively short records.
• There are a few types of entities but a large number of instances.
• By contrast, CASE tools create a large number of entity and relation types with
perhaps a few instances of each.
• Thus, the object management system takes care of appropriately mapping these entities into the underlying storage management system (a minimal sketch of one such mapping follows).
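One common way to reconcile "many entity and relation types, few instances" with a record-oriented repository is to map every logical entity onto a single generic (entity type, id, attribute, value) table. The following Python/SQLite sketch illustrates that mapping idea under that assumption; it is not how any particular CASE product implements its repository.

import sqlite3

# One generic table holds instances of arbitrary CASE entity types
# (specification, design, project plan, ...), avoiding a dedicated
# relational table per entity type.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE objects
                (entity_type TEXT, entity_id TEXT,
                 attribute TEXT, value TEXT)""")

def store(entity_type, entity_id, attributes):
    # Map a logical CASE entity onto the generic storage schema.
    conn.executemany("INSERT INTO objects VALUES (?, ?, ?, ?)",
                     [(entity_type, entity_id, a, v)
                      for a, v in attributes.items()])

store("design", "D-17", {"name": "checkout subsystem", "author": "A. Author"})
store("project_plan", "P-1", {"milestone": "beta", "due": "2004-06-01"})

rows = conn.execute("SELECT attribute, value FROM objects "
                    "WHERE entity_type=? AND entity_id=?",
                    ("design", "D-17")).fetchall()
print(rows)  # [('name', 'checkout subsystem'), ('author', 'A. Author')]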