
LECTURE 1: INTRODUCTION TO SOFTWARE TESTING

 Software Testing is executing a program with the intent of finding errors.

 Software testing is a set of activities conducted with the intent of finding errors. It also ensures that the system is working according to the design specification. The two important goals of software testing are to ensure that the system being developed meets the customer requirements and to reveal bugs. Software testing is an essential form of software assurance. Testing is a laborious, expensive and time-consuming job, so the choice of tests must be based on the risk to the system.

 Software testing is a vital part of software development, and automation makes it faster, more reliable and more cost-efficient.

 Software Testing is an empirical (experimental) investigation conducted to provide stakeholders with information about the quality of the product or service under test.

 System testing is executing a program to check its functionality with a view to finding errors or defects.

 Software testing is the process of executing a program with a view to finding out whether the specified requirements have been satisfied or not.

 Testing is essential to the success of a system. In system testing, performance and acceptance standards are developed.

 Testing is one of the important phases of software development. In this phase, the program under test is executed to reveal faults, and after detecting failures, debugging techniques are applied to isolate and remove faults. Obviously, the techniques used for these various tasks depend on the tools, techniques and methodologies used to develop the software under test.

 Automated testing may not replace manual testing; rather, it augments (complements) the test efforts of the human tester.

What is Software Testing?

Software Testing is the process of executing a program with the intent of finding errors.

Testing is the process of evaluating a system or its component(s) with the intent of finding out whether it satisfies the specified requirements or not.

This activity produces the actual results, the expected results and the difference between them. In simple words, testing is executing a system in order to identify any gaps, errors or missing requirements contrary to the actual requirements.

Testing is the most important part of the Software Development Life Cycle (SDLC). One cannot release the final product without passing it through the testing process. The purpose of software testing is to find bugs/defects in the software. Software testing is an essential part of Software Engineering.

Why do we Need Software Testing?


The importance of testing cannot be overemphasized.

(1) There are lots of different devices, browsers, and operating systems out there. We are lucky to live in an age where we can choose from a technology buffet of phones, tablets, laptops, and desktops – not to mention the different browsers and operating systems in use on those devices. All this variety is great, but it also makes testing for compatibility essential. Testing on multiple devices, browsers and operating systems can help ensure your website works for as many of your users as possible. (User Diversity and Portability)

(2) Software prevails in our living environment, and its quality significantly influences our quality of life. Software, as an entity, has affected all aspects of life. No one could have foreseen that software would become embedded in systems of all kinds: Health Care, Business, Transportation, Education, Telecommunications, Finance, Military, Entertainment, Weather, etc. Despite dramatic improvements in software development (web design software, mobile applications, artificial-intelligence software, operating systems, etc.), we are yet to develop a software technology that does it all, and the likelihood of one arising in the future is slim.

(3) Software faults in critical software systems (a safety-critical system is defined as a system in which the malfunctioning of software could result in death, injury or damage to the environment) may cause dramatic damage to our lives. E.g.:

- In 1986, two cancer patients at the East Texas Cancer Center received fatal radiation overdoses from a computer-controlled radiation therapy machine.
- Software errors can be costly to the economy of a nation.

Comparison with Automobile Industry

Let us compare software development with the development of the automobile industry:

- If the automobile industry had developed like the software industry, we would all be driving cars bought for 25,000frs; that is to say, the software industry has been developing at a far faster rate than the automobile industry.

- If cars were like software, they would crash twice a day for no reason, and when you called for service, you would be told to reinstall the engine.

From the above explanation, what are the differences between Software Fault, Software
Error and Software Failure?

[Figure: relationships between Software Error, Software Fault, Software Failure and Software Defects]

From the above explanation, what are the goals of Software Testing?

Goals of Software Testing

The general aim of software testing is to affirm the quality of software systems by systematically exercising the software in carefully controlled circumstances in order to:
 Reveal faults (find errors: syntax errors, logic errors, semantic errors, etc.)
 Establish confidence in the software
 Clarify the user's requirement specification

The only effective way to raise the confidence level of a program significantly is to give a convincing proof of its correctness.

Can these goals be achieved without challenges?

Challenges of Software Testing


 Testing is a huge cost in product development.
 Incomplete, informal and changing specifications, or incomplete user requirements.
 Lack of software testing tools.
 Testing effectiveness and software quality are hard to measure.

Testing and Debugging

Testing: This involves the identification of a bug/error/defect in the software without correcting it. Normally, professionals with a Quality Assurance background are involved in the identification of bugs. Testing is performed in the testing phase.

Debugging: This involves Identifying, Isolating and Fixing (I2F) the problem/bug. Developers who code the software conduct debugging upon encountering an error in the code. Debugging is part of white-box or unit testing. Debugging can be performed in the development phase while conducting unit testing, or in later phases while fixing the reported bugs.

What is Software Verification and Validation?

Verification and validation (V&V) is the process of checking that a software system meets its specifications and that it fulfils its intended purpose. It may also be referred to as software quality control.

Definitions:

Verification: The process of determining whether the products of a given phase of the software development process fulfil the requirements established during the previous phase.

Verification can also be defined as the process of evaluating software to determine whether the products of a given software development phase satisfy the conditions imposed at the start of that phase.

Suppose you are building a table. Here, verification is about checking all the parts of the table, for example whether all four legs are of the correct size or not. If one leg of the table is not the right size, it will unbalance the end product. Similar behaviour is also noticed in the case of a software product or application: if any feature of the product is not up to the mark, or if any defect is found, it will result in the failure of the end product.

Hence, verification is done at the start of the software development process (the requirement specification phase), at the beginning of each development phase and at the end of each development phase.

It includes reviews and meetings, walk-throughs, inspections, etc. to evaluate documents, plans, requirements specifications, system design and code. Hence, verification is very important, and it takes place from the very start of the development process.

It answers questions like:

 Am I building the product right?
 Am I accessing the data right (in the right place; in the right way)?

Verification is a low-level activity (conceptual design). It is:

 Performed during development on key artifacts, through walkthroughs, reviews and inspections, mentor feedback, training, checklists and standards.
 A demonstration of the consistency, completeness and correctness of the software at each stage, and between stages, of the development life cycle.

Advantages of Software Verification:

1. Verification helps in lowering the number of defects found in the later stages of development.
2. Verifying the product at the starting phase of development helps in understanding the product in a better way.
3. It reduces the chances of failure of the software application or product.
4. It helps in building the product according to the customer's specifications and needs.

Validation: The process of evaluating software at the end of each software development phase to ensure compliance with the intended usage.

Validation can also be defined as the process of evaluating software at the end of each development phase to determine whether it satisfies the specified requirements. Validation is done at the end of each development phase and takes place after verification is completed.

It answers questions like:

 Am I building the right product? Or: has the right product been built?
 Am I accessing the right data (in terms of the data required to satisfy the requirements)?

Validation is a high-level activity. It is:

 Performed after a work product is produced, against established criteria, ensuring that the product integrates correctly into its environment.
 A determination of the correctness of the final software product, by a development project, with respect to the user needs and requirements.

Software Validation: Software validation checks that the software product satisfies or fits the intended use (high-level checking), i.e., the software meets the user requirements.

It is not only the software product as a whole that can be validated: requirements should be validated before the software product as a whole is ready.

Examples of some artifacts (items) to be validated:

 User Requirements Specification validation: User requirements, as stated in a document called the User Requirements Specification Document, are validated by checking whether they indeed represent the goals of the stakeholders. This can be done by interviewing them, or even by releasing quick prototypes and having the users and stakeholders assess them.

 User input validation: User input (gathered by any peripheral device, such as a biometric fingerprint sensor, biometric scanner, etc.) is validated by checking whether the input provided by the software's operators and users meets the domain rules and constraints (such as data type, range and format), as sketched below.
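
To make input validation concrete, here is a minimal Java sketch (the class and method names, and the age range [0, 130], are illustrative assumptions, not part of any standard API) that checks raw user input against domain rules for data type, range and format:

// Illustrative user input validation against domain rules.
public class InputValidator {

    // Data type and range rules: the input must parse as an integer in [0, 130].
    public static boolean isValidAge(String raw) {
        try {
            int age = Integer.parseInt(raw.trim());   // data type rule
            return age >= 0 && age <= 130;            // range rule
        } catch (NumberFormatException e) {
            return false;                             // not an integer at all
        }
    }

    // Format rule: the input must look like an ISO date, e.g. 2024-01-31.
    public static boolean isValidDate(String raw) {
        return raw != null && raw.matches("\\d{4}-\\d{2}-\\d{2}");
    }

    public static void main(String[] args) {
        System.out.println(isValidAge("42"));          // true
        System.out.println(isValidAge("-3"));          // false: out of range
        System.out.println(isValidAge("forty"));       // false: wrong data type
        System.out.println(isValidDate("2024-01-31")); // true
        System.out.println(isValidDate("31/01/2024")); // false: wrong format
    }
}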

Software Verification can also be defined as the process of evaluating software to determine whether the products of a given software development phase satisfy the conditions imposed at the start of that phase, i.e., the requirements from the previous phase.

Some examples of items to be verified:

 Verification of design specifications against the requirement specifications: Do the architectural design, detailed design and database logical model specifications correctly implement the functional and non-functional requirement specifications?

 Verification of implementation artifacts against the design specifications: Do the source code, user interfaces and database physical model correctly implement the design specifications?

Verification is usually a more technical activity that uses knowledge about the individual software artefacts, requirements and specifications. Validation usually depends on domain knowledge, that is, knowledge of the application for which the software is written. For example, validation of software for an airplane requires knowledge from aerospace engineers and pilots.

Verification and validation (V&V) are processes that help to ensure that software is correct and reliable, i.e., that assure that a software system meets its users' needs.

[Figure: Software Verification and Validation. Verification is on-going throughout development: from the needs and expectations of the customer, through planning (target users), requirement specification, system analysis and design, system development and implementation, to system testing and the final software product. Validation occurs at the end of each phase, from planning to deployment.]

The goals of Software Verification and Validation


The goal of software verification is to find as many hidden defects as possible in the software before delivery.

The goal of software validation is to gain confidence in the software by showing that it meets its User Requirement Specifications, i.e., to demonstrate to the developer and the client (customer) that the software meets its requirements.

Software Verification and Validation Techniques

There are four main Verification and Validation Techniques. They include:

- Informal Analysis/Methods

- Static Analysis/Methods

- Dynamic Analysis/Methods

- Formal Analysis/Methods

 Informal V&V techniques: Informal V&V techniques are among the most commonly used. They are called informal because their tools and approaches rely heavily on human reasoning and subjectivity, without stringent mathematical formalism. E.g. the inspection method, in which the software is inspected to ascertain whether it meets the specifications in the Software Requirement Specification Document (SRSD).

 Static V&V techniques: Static V&V techniques assess the accuracy of the static
model design and source code. Static techniques do not require machine execution of
the model, but mental execution can be used. The techniques are very popular and
widely used, and many automated tools are available to assist in the V&V process.
Static techniques can reveal a variety of information about the structure of the model, the
modeling techniques used, data and control flow within the model, and syntactical
accuracy.

Static Analysis: As the term “static” suggests, it is based on the examination of a number of documents, namely requirements documents, software models, design documents and source code. Traditional static analysis includes code review, inspection, walk-through, algorithm analysis and proof of correctness. It does not involve actual execution of the code under development. Instead, it examines the code and reasons over all possible behaviours that might arise during run time. Compiler optimizations are a standard form of static analysis.

 Dynamic V&V techniques: Dynamic V&V techniques require software/model execution; they evaluate the software/model based on its execution behavior. Most dynamic V&V techniques require model instrumentation: the insertion of additional code (probes or stubs) into the executable model to collect information about model behavior during execution.

Dynamic analysis of a software system involves actual program execution in order to expose possible program failures. The behavioural and performance properties of the program are also observed. Programs are executed with both typical and carefully chosen input values. Often, the input set of a program can be impractically large; however, for practical purposes, a finite subset of the input set can be selected. Therefore, in testing, we observe some representative program behaviours and reach a conclusion about the quality of the system. Careful selection of a finite test set is crucial to reaching a reliable conclusion.

Dynamic V&V techniques are usually applied in three steps:

 The executable model is instrumented.

 The instrumented model is executed.

 The model output is analyzed, and the dynamic model behaviour is evaluated.
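
As a minimal sketch of these three steps (the probe fields and the square method are illustrative), consider code instrumented with probes that record its behaviour during execution:

// Step 1: instrument the executable code with probes.
public class InstrumentedModel {

    static int probeCallCount = 0;                  // probe: how many times square ran
    static int probeMaxInput  = Integer.MIN_VALUE;  // probe: largest input observed

    static int square(int x) {
        probeCallCount++;                           // inserted probe, not business logic
        if (x > probeMaxInput) probeMaxInput = x;   // inserted probe
        return x * x;                               // the behaviour under test
    }

    public static void main(String[] args) {
        // Step 2: execute the instrumented model on chosen inputs.
        for (int x : new int[] { 3, -7, 12 }) {
            square(x);
        }
        // Step 3: analyze the collected output; evaluate dynamic behaviour.
        System.out.println("calls=" + probeCallCount + ", maxInput=" + probeMaxInput);
        // Prints: calls=3, maxInput=12
    }
}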

N/B:

By performing static and dynamic analyses, practitioners want to identify as many faults as possible so that those faults are fixed at an early stage of software development. Static analysis and dynamic analysis are complementary in nature, and for better effectiveness, both must be performed repeatedly and alternated. Practitioners and researchers need to remove the boundaries between static and dynamic analysis and create a hybrid analysis that combines the strengths of both approaches.

 Formal V&V techniques: Formal V&V techniques (or formal methods) are based on formal mathematical proofs of correctness and are the most thorough means of model V&V. The successful application of formal methods requires the model development process to be well defined and structured. Formal methods should be applied early in the model development process to achieve maximum benefit. Because formal techniques require significant effort, they are best applied to complex problems which cannot be handled by simpler methods.
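
As a minimal sketch of the kind of obligation formal methods deal with (the program fragment is illustrative), here is a Hoare-triple correctness statement for a three-assignment swap: the precondition on the left must guarantee the postcondition on the right after the program in the middle executes, and this is established by mathematical proof rather than by testing:

\[
\{\, x = a \land y = b \,\} \quad t := x;\; x := y;\; y := t \quad \{\, x = b \land y = a \,\}
\]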

Verification and Validation Techniques – Informal and Static Analysis/Methods

Recall the following definitions of informal and static analysis as verification and validation techniques:

 Informal V&V techniques: Informal V&V techniques are among the most
commonly used. They are called informal because their tools and approaches rely
heavily on human reasoning and subjectivity without stringent mathematical
formalism.

 Static V&V techniques: Static V&V techniques assess the accuracy of the static
model design and source code. Static techniques do not require machine execution of
the model, but mental execution can be used. The techniques are very popular and
widely used, and many automated tools are available to assist in the V&V process.
Static techniques can reveal a variety of information about the structure of the model, the
modeling techniques used, data and control flow within the model, and syntactical
accuracy.
The main advantage of static techniques is that they do not require a functioning system or software: they can be applied to documents and to parts of the software. Thus, they can be executed long before dynamic techniques, which require a functioning system or software for execution.

The second advantage of static techniques is that they are often cheaper than dynamic techniques and that their return on investment is much higher. Because defects are identified earlier, their fixes are cheaper; often the defects are never introduced into the software at all, being found in the specification documents.

Static techniques can be applied to numerous deliverables, such as software components (whether or not they compile); high-level or detailed-level specifications; lists of requirements; contracts; development plans or test plans; and numerous other deliverables. Static tests can thus be executed very early in the development cycle.

Software components, regardless of whether they compile, can be examined with reviews (code reviews) or with static analysis. Other items, such as documentation, requirements and specifications, test design documents, test cases, test procedures and even test data, can also be submitted for review. Development plans, test plans, processes and business activities can also be the subject of reviews.

Informal and Static test techniques are proven effective solutions to increase the
quality of software.

Quality Assurance: Quality Assurance (QA) is a set of activities intended to ensure that
products satisfy customer requirements in a systematic and reliable fashion. QA is all
the activities we do to ensure correct quality during development of new products.

Types of Reviews

The Institute of Electrical and Electronics Engineers (IEEE) identifies many types of reviews:

 Management Reviews
 Technical Reviews
 Inspections
 Walk-throughs
 Audits

Reviews can be grouped according to their level of formalism, from the least formal to the most formal. All reviews have a product or process to which they apply.
(See Bernard Homes (2012), pp. 94 ff., for further reading.)

Management Reviews

The goal of management reviews is to follow progress, define the status of plans and schedules, or evaluate the efficiency of the management approaches used and their adequacy to the objectives. Management reviews identify conformance to, and deviations from, management plans or procedures. Technical knowledge may be necessary to successfully conduct such reviews. Evaluation of components or objects may require more than a single meeting, and meetings can fail to address all the different aspects of the product or process under review. Management review also considers aspects of economic feasibility (i.e., assessing the cost-effectiveness of the software).

Management reviews can be applied to the following processes (among others):

– Acquisition and delivery processes;

– Development, usage, and maintenance processes;

– Configuration management processes;

– Quality assurance processes;

– Verification and validation processes, peer-reviewed processes, and audits;

– Problem management and resolution processes, and defect-management processes.

Management reviews can be applied to the following products (among others):
– Defect reports;

– Audit reports;

– Backup and recovery plans, restoration plans;

– Technical and business requirements and specifications;

– Hardware and software performance plans;

– Installation and maintenance plans;

– Progress reports;

– Risk-management plans;

– Configuration-management plans;

– Quality-assurance plans;

– Project-management plans.

Data Flow Analysis

Definition:

Data flow analysis consists of evaluating variables and verifying whether they are correctly defined and initialized before being used (referenced).

Data flow analysis ensures that:

 Variables are declared before they are used.

 Variables are initialized before they are referenced.

 Variables that are declared are actually utilized.

A variable is a computer memory area (a set of bytes and bits) to which a name is associated (the variable name). When the variable is not used, for example before the program is launched or after the program ends, the memory area is allocated to another program or to the system. It can thus contain any data, alphabetic or numeric. When the program needs the variable, a memory area is allocated, but it is not always initialized. The content of the memory area is thus not guaranteed. To ensure that the memory area contains valid data, it is important to initialize it with an adequate value. Some programming languages automatically initialize variables when they are declared; other languages do not. Similarly, some programming languages dynamically allocate variables (and their type, numeric or alphanumerical) at their identification, for example at their first use in the code. We could thus have two variables with similar names, “NOM” and “N0M” (here the digit 0 instead of the letter O), that are read as identical by the developer but are defined as two different variables by the computer. We could also have a defect of the kind identified hereunder:
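
The original illustration is not reproduced here; the following minimal Java sketch (the class and field names are illustrative) shows the kind of defect data flow analysis would flag: NOM (with the letter O) is assigned but never read, while N0M (with the digit zero) is read but never assigned.

// Illustrative data-flow defect with lookalike identifiers.
public class DataFlowDefect {
    static int NOM;   // letter O: assigned below, but never read
    static int N0M;   // digit 0: read below, but never assigned

    public static void main(String[] args) {
        NOM = 42;                  // the programmer assigns NOM...
        System.out.println(N0M);   // ...but reads N0M: prints 0 (Java's default
                                   // value for static int fields), not 42
        // A data flow analyzer would report that NOM is assigned but never
        // used, and that N0M is used but never explicitly initialized.
    }
}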

Program Errors/Software Errors/Software Bugs

An error in a program, whether due to requirement errors, design errors or coding errors, is known as a bug.

Error: An error is said to occur whenever the behaviour of a system does not conform to that prescribed in the requirement specification document.

The above definition of error assumes that the given specification is acceptable to the customer. However, if the specification does not meet the expectations of the customer, then, of course, even a fault-free implementation fails to satisfy the customer.

• A software bug is an error, flaw, failure, defect or fault in a computer program or system that produces an incorrect or unexpected result, or causes the program to behave in unintended ways.

Any approach to testing is based on assumptions about the way program errors occur. Errors are due to two main reasons:
• Errors occur due to our inadequate understanding of all the conditions with which a program must deal, leading to requirement errors, design errors or coding errors.
• Errors occur due to our failure to realize that certain combinations of conditions require special treatment.

Some major types of errors can be checked for. Program errors are classified as:
 Syntax Errors – A syntax error is a program statement that violates one or more rules of the language in which it is written (the language of implementation), i.e., a violation of the language’s grammar rules, usually caught by the compiler and reported by compiler error messages. E.g.:
- Missing braces: if braces do not occur in matching pairs, the compiler indicates an error.
- Omitting the return-value type in a function definition causes a syntax error if the function prototype specifies a return type other than int.
- Forgetting to return a value from a function that is supposed to return a value can lead to unexpected errors.
- Returning a value from a function whose return type has been declared void causes a syntax error.
- Defining a function inside another function is a syntax error.
 Logic Errors – A logic error occurs when a program produces incorrect results, i.e., a program that compiles and runs to normal completion may still not do what you want.
Logic faults can be further split into three categories:

- Requirements Errors: failure to capture the real requirements of the customer.
- Design Errors: failure to satisfy an understood requirement; if you solved the wrong problem, you have a design error. Design errors occur when specifications do not match the problem being solved.
- Implementation Errors: failure to correctly implement an understood requirement in code.
 Run-Time Errors – A program that compiles may die while running, with a run-time error message that may or may not be useful. Run-time errors result from incorrect assumptions, incomplete understanding of the programming language, or unanticipated user errors. E.g., errors like division by zero may occur as a program runs, so these errors are called run-time errors or execution-time errors. Division by zero is generally a fatal error, i.e., an error that causes the program to terminate immediately without having successfully performed its job.

 Semantic Errors, etc.
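
The following minimal Java sketch (illustrative) contrasts a logic error, which runs to completion with a wrong result, with a run-time error, which terminates the program:

public class ErrorKinds {

    // Logic error: compiles and runs normally, but computes the wrong result.
    static int average(int a, int b) {
        return a + b / 2;          // intended (a + b) / 2; operator-precedence bug
    }

    public static void main(String[] args) {
        System.out.println(average(4, 6));     // prints 7, expected 5

        int denominator = 0;
        System.out.println(10 / denominator);  // run-time error: throws
        // java.lang.ArithmeticException: / by zero and terminates the program
    }
}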

Figure 1 [figure not reproduced: it showed the configuration of a connection string property]

With reference to Figure 1, what will happen if the connection string property is not set correctly? An error message will appear indicating the type of syntax error (e.g., data type omitted), and the programmer will need to debug (identify the trouble spot, isolate the trouble spot, and then fix the trouble spot).

Testing in Software Development Life Cycle

Question: What are the differences between the Software Development Life Cycle (SDLC) and the Software Testing Life Cycle (STLC)?

What is a Test Case?

A Test Case is, most simply, an input to a software product.

A Test Suite (or Test Set) is a collection of test cases.

A common definition of a Test Case is a description of the conditions and expected results that are taken together to fully test a requirement or use case. In this course, I allow multiple requirements to be described in a single Test Case, and may limit a Test Case to a portion of a use case, such as a flow of events. Written Test Cases should include a description of the functionality to be tested and the preparation required to ensure that the test can be conducted. (A test input.)

A test case is defined by a starting environment, input data, actions and expected results, which include the expected data and the resulting environment.

A test case is composed of the test case values, expected results, prefix values (preconditions) and postfix values (postconditions) necessary for a complete execution and evaluation of the software under test.

Criteria for the selection of Test Cases

There are three criteria for the selection of test cases:

 Specifying a criterion for evaluating a set of input values (i.e., specifying a criterion for validating all input to the software on the basis of requirement specification validation, because all requirements MUST be validated before proceeding with any software development) – the Requirement Specification Validation (RSV) criterion.

 Generating a set of input values that satisfy a given criterion.

 Writing test cases for both invalid and valid input conditions.

N/B: A good test case is one that has a high probability of detecting an undiscovered error. Testing with both valid and invalid input is a necessary condition for selecting test cases; note, however, that exhaustive testing is impossible.

Each test case needs proper documentation, preferably in a fixed format. The format of
your test case design is very important. There are many formats; one format is
suggested below:

We will use a particular format for our test cases, as shown in Table 1.

Table 1: Test Case Format Example

Test ID | Test Case Description | Test Input (Test Case Value) | Expected Result
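
A hypothetical entry in this format (the values are illustrative) might read:

TC-01 | The age field accepts a valid age | "42" | Input is accepted; no error message is shown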

Testing must be defensive (Defensive Testing), i.e., testing which includes tests under both normal and abnormal conditions.

In order to fully test that all the requirements of an application are met, there must be at
least two test cases for each requirement: one positive test and one negative test. If a
requirement has sub-requirements, each sub-requirement must have at least two test
cases.
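
A minimal JUnit 4 sketch of this rule (JUnit is assumed to be available; InputValidator.isValidAge is the hypothetical age check used earlier), with one positive and one negative test for the same requirement:

import org.junit.Test;
import static org.junit.Assert.*;

// At least one positive and one negative test per requirement. The
// (hypothetical) requirement: an age input must be an integer in [0, 130].
public class AgeRequirementTest {

    @Test
    public void positiveTest_validAgeIsAccepted() {
        assertTrue(InputValidator.isValidAge("42"));   // normal condition
    }

    @Test
    public void negativeTest_outOfRangeAgeIsRejected() {
        assertFalse(InputValidator.isValidAge("131")); // abnormal condition
    }
}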

Test Procedures

Test Procedures describe specific activities taken by a tester to set up, execute, and
analyze a test. This includes defining data values, input and output files, automated
tests to run, and detailed manual test activities.

A test procedure is a document specifying a sequence of actions for the execution of a test; it is also known as a test script or manual test script. It typically specifies:

• How to set up the test environment;

• Where to find test data sets;

• Where to put them;

• The steps to execute the tests; and

• What to do with the test results.

Test Procedures can be written for manual tests, automated tests, or a combination of the two. They are usually only needed if testing is complex.

Test Scripts

A test script is what is used to test the functionality of a software system. These scripts can be either manual or automated.

Manual test scripts are usually written scripts that the tester must perform. This implies direct interaction between the tester and the system under test. Manual test scripts specify step-by-step instructions for what the tester should enter into the system and the expected results. Many times, the scripts are embedded into the Test Procedures.

Automated test scripts are software programs written to test the system. These can be generated with tools or coded the old-fashioned way. Usually, a scripting language is involved to control the execution of the tests in an orderly manner. These tests are usually initiated by testers and are referenced in Test Procedures, as sketched below.
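
As a minimal sketch of an automated, data-driven test script (again reusing the hypothetical InputValidator.isValidAge check; a real project would more likely drive this from a test framework or tool), the script feeds the system under test a table of inputs and compares actual results against expected results:

// A simple automated test script: inputs and expected results side by side.
public class AgeValidationScript {
    public static void main(String[] args) {
        String[]  inputs   = { "0",  "130", "-1",  "abc" };
        boolean[] expected = { true, true,  false, false };

        int failures = 0;
        for (int i = 0; i < inputs.length; i++) {
            boolean actual = InputValidator.isValidAge(inputs[i]);
            if (actual != expected[i]) {
                failures++;
                System.out.println("FAIL: input=" + inputs[i]
                        + " expected=" + expected[i] + " actual=" + actual);
            }
        }
        System.out.println(failures == 0 ? "All tests passed"
                                         : failures + " test(s) failed");
    }
}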

Test Lists:

Test Lists are a way of collecting unit tests into logical groups. The main advantages of adding unit tests to a test list are that you can run tests from multiple unit test files, you can run them as part of a build, and you can use the lists to enforce a check-in policy.

Test Plans:

These are documents that spell out how you will test in order to prove the system, and what activities will be followed to get the job done. These plans can vary in their level of formality and detail. We will get into planning the test in detail later in this course, with a focus on planning just enough.

Test planning should be done throughout the development cycle, especially early in the
development cycle. A test plan is a document describing the scope, approach,
resources, and schedule of intended test activities. It identifies test items, the features to be
tested, the testing tasks, who will do each task, and any risks requiring contingency
plans. An important component of the test plan is the individual test cases. A test
case is a set of test inputs, execution conditions, and expected results developed for a
particular objective, such as to exercise a particular program path or to verify
compliance with a specific requirement.

Write the test plan early in the development cycle, when things are generally still going pretty smoothly and calmly. This allows you to think through a thorough set of test cases. If you wait until the end of the cycle to write and execute test cases, you might be in a very chaotic, hurried time period. Often, good test cases are not written in this hurried environment, and ad hoc testing takes place. With ad hoc testing, people just start trying anything they can think of, without any rational roadmap through the customer requirements. Tests done in this manner are not repeatable.

It is essential in testing to start planning as soon as the necessary artifact is available. For example, as soon as customer requirements analysis has been completed, the test team should start writing black-box test cases against that requirements document. By doing so this early, the testers might realize that the requirements are not complete. The team may ask questions of the customer to clarify the requirements so that a specific test case can be written; the answer to such a question is helpful to the code developer as well. Additionally, the tester may request (of the programmer) that the code be designed and developed to allow some automated test execution. To summarize, the earlier testing is planned, at all levels, the better.

It is also very important to consider test planning and test execution as iterative
processes. As soon as requirements documentation is available, it is best to begin to
write functional and system test cases. When requirements change, revise the test
cases. As soon as some code is available, execute test cases. When code changes, run
the test cases again. By knowing how many and which test cases actually run you can
accurately track the progress of the project. All in all, testing should be considered an
iterative and essential part of the entire development process.

Sources of Information for Test Case Selection

Designing test cases has continued to stay in the focus of the research community and of practitioners. A software development process generates a large body of information, such as the requirements specification, design documents and source code. In order to generate effective tests at a lower cost, test designers analyse the following sources of information:

 Requirement Specification Document (RSD)


 Source Code
 Conceptual Design
 Fault Model
 Heuristics

(1) Software Requirement Specification Document (SRD): The process of software development begins by capturing user needs. The nature and amount of the user needs identified at the beginning of system development will vary depending on the specific software development methodology used. Let us consider a few examples. In a traditional software development methodology using the waterfall approach, a requirements engineer tries to capture most of the requirements before embarking on the development process. On the other hand, in an agile software development methodology, such as XP or Scrum, the requirements identified at the beginning of the software development project can be redefined to produce iterative and incremental versions of the software product.
The requirements might have been specified in an informal manner, such as a combination of plain text, equations, figures and flowcharts. Though this form of requirements specification may be ambiguous, it is easily understood by customers. For some systems, requirements may be captured in the form of use cases, entity–relationship diagrams, class diagrams, etc. Sometimes the requirements of a system may have been specified in a formal language or notation, such as Z, SDL, Estelle, or finite-state machines. Both informal and formal specifications are prime sources of test cases.

In black-box testing, test cases are generated from the requirement specification.
(2) Source Code: Whereas a requirements specification describes the intended behaviour of a system, the source code describes the actual behaviour of the system. High-level assumptions and constraints take concrete form in an implementation. Though a software designer may produce a detailed design, programmers may introduce additional details into the system. For example, a step in the detailed design can be “sort array A.” To sort an array, there are many sorting algorithms with different characteristics, such as iteration, recursion, or temporarily using another array. Therefore, test cases must be designed based on the program itself.

In white-box testing, test cases are generated from the source code.

(3) Fault Model: Previously encountered faults are excellent sources of information in
designing new test cases. The known faults are classified into different classes, such
as initialization faults, logic faults, and interface faults, and stored in a repository. Test
engineers can use these data in designing tests to ensure that a particular class of faults is
not resident in the program.

There are three types of fault-based testing: error guessing, fault seeding, and mutation
analysis.
- In error guessing, a test engineer applies his experience to (i) assess the situation and
guess where and what kinds of faults might exist, and (ii) design tests to specifically
expose those kinds of faults.
- In fault seeding, known faults are injected into a program, and the test suite is
executed to assess the effectiveness of the test suite. Fault seeding makes an
assumption that a test suite that finds seeded faults is also likely to find other faults.

- Mutation Testing: High-quality software cannot be produced without high-quality testing. Mutation testing measures how “good” our tests are by inserting faults into the program under test. Each fault generates a new program, a mutant, that is slightly different from the original. Mutation testing thus measures the quality of test cases. Mutation analysis is similar to fault seeding, except that mutations to program statements are made in order to determine the fault-detection capability of the test suite. If the test cases are not capable of revealing such faults, the test engineer may specify additional test cases to reveal them. Mutation testing is based on the idea of fault simulation, whereas fault seeding is based on the idea of fault injection. In the fault injection approach, a fault is inserted into a program, and an oracle is available to assert that the inserted fault indeed made the program incorrect. On the other hand, in fault simulation, a program modification is not guaranteed to lead to a faulty program: in fault simulation, one may modify an incorrect program and turn it into a correct program.
(4) Heuristics, i.e., test ideas drawn from human reasoning and experience.

Tracking of Incidents

An incident tracking system keeps track of the incidents that should be fixed in the
developmental process so that all incidents are properly resolved:
- No incident will go unfixed on the whim of a single programmer.

Limitation of software testing

Software testing can only show the presence of errors, not their absence. This is the major limitation of software testing, because testing cannot settle the question of program correctness. In other words, by testing a program with a proper subset of the input domain and observing no fault, we cannot conclude that there are no remaining faults in the program.

Though testing cannot settle the question of program correctness, different testing
methods continue to be developed. For example, there are specification-based testing
methods and code-based testing methods. It is important to develop a theory to
compare the power of different testing methods.

A software system undergoes multiple test–fix–retest cycles until, ideally, no more faults are revealed. Faults are fixed by modifying the code or adding new code to the system. At this stage, there may be a need to design new test cases. When no more faults are revealed, we can conclude in one of two ways: either there is no fault in the program, or the tests could not reveal the faults. Since we have no way to know the exact situation, it is useful to evaluate the adequacy of the test set. There is no need to evaluate the adequacy of tests so long as they keep revealing faults. Two practical ways of evaluating test adequacy are fault seeding and program mutation.
- Fault Seeding is the process of inserting faults into a program.
- Mutation Testing is a systematic method of fault seeding: introduce a single fault at a time to create “mutants” of the original program. Each inserted fault results in a new program, called a program mutation. Apply the test set to each mutant program.

“Test adequacy” is measured by the percentage of “mutants killed”:

- Execute program P on test set T.

- P is considered the “correct” program.

- Save the results R to serve as an oracle, called the Test Oracle.

- Each inserted fault results in a new program, called a Mutant Program:

Mutant programs = P1, P2, P3, ..., Pn
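
A minimal Java sketch of this idea (purely illustrative; real mutation tools for Java, such as PIT, generate and execute mutants automatically): the original program P and the mutant P1 differ in a single relational operator, and a test at the boundary value 130 kills the mutant:

public class MutationDemo {

    // P: the original program (the hypothetical age rule used earlier).
    static boolean inRange(int age) {
        return age >= 0 && age <= 130;
    }

    // P1: a mutant of P, created by changing one operator ("<=" became "<").
    static boolean inRangeMutant(int age) {
        return age >= 0 && age < 130;
    }

    public static void main(String[] args) {
        // A test with the boundary input 130 distinguishes P from P1, so it
        // "kills" the mutant: the oracle result R (from P) is true.
        System.out.println(inRange(130));        // true
        System.out.println(inRangeMutant(130));  // false -> mutant killed
        // A test set using only inputs like 42 would leave P1 alive,
        // revealing that the tests never exercise the upper boundary.
    }
}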

Testing should cover “all” functionality:

 every public method (black-box testing)
 every feature
 all boundary situations (Boundary Value Analysis (BVA); see the sketch after this list)
 common scenarios
 exceptional scenarios
 every line of code (white-box testing)
 every path through the code
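
For the boundary situations above, a minimal Boundary Value Analysis sketch (reusing the hypothetical valid age range [0, 130]) tests just below, at, and just above each boundary:

public class BoundaryValueTests {
    public static void main(String[] args) {
        // BVA inputs around both boundaries of the valid range [0, 130].
        int[] bvaInputs = { -1, 0, 1, 129, 130, 131 };
        for (int age : bvaInputs) {
            boolean valid = age >= 0 && age <= 130;
            System.out.println("age=" + age + " -> valid=" + valid);
        }
        // Expected: -1 false; 0, 1, 129, 130 true; 131 false.
    }
}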

Just because all your tests are green does not mean that your code is correct and free of bugs. This also does not mean that testing is futile (unsuccessful)! A good strategy is to add a new test case whenever a new bug is discovered. The test case should demonstrate the presence of the bug. When the test is green, you know that this particular instance of the bug is gone. Writing a new test for the bug (i) documents the bug, (ii) helps you debug it, and (iii) ensures that the bug will be flagged if it ever appears again.

N/B: A theory to compare the power of testing methods based on their fault detection
abilities is a good research area.

Question: Compare mutation testing to other testing approaches with respect to fault detection.

Answer: Mutation testing usually requires significantly more test cases than the other
methods

