BCA Project
Chapter 1
Introduction
1.1 Software Testing
Software testing is a process used to identify the correctness, completeness, and
quality of developed computer software. It includes a set of activities conducted
with the intent of finding errors in the software so that they can be corrected before
the product is released to the end users.
In simple words, software testing is an activity to check whether the actual
results match the expected results and to ensure that the software system is
defect free.
Why is testing important?
1. The China Airlines Airbus A300 crash of April 26, 1994, which killed 264 people,
was caused by a software bug.
2. Software bugs can potentially cause monetary and human loss; history is
full of such examples.
As Paul Ehrlich puts it - "To err is human, but to really foul things up you
need a computer."
1.3 Overview
A key step in the process is testing the software for correct behavior prior to
release to end users. For small scale engineering efforts (including prototypes),
exploratory testing may be sufficient. With this informal approach, the tester
does not follow any rigorous testing procedure, but rather explores the user
interface of the application using as many of its features as possible, using
information gained in prior tests to intuitively derive additional tests. The
success of exploratory manual testing relies heavily on the domain expertise of
the tester, because a lack of knowledge will lead to incompleteness in testing.
One of the key advantages of an informal approach is the intuitive insight it gives
into how it feels to use the application.
Large scale engineering projects that rely on manual software testing follow a
more rigorous methodology in order to maximize the number of defects that can
be found. A systematic approach focuses on predetermined test cases and
generally involves the following steps.
1. Choose a high-level test plan, where a general methodology is chosen,
and resources such as people, computers, and software licenses are
identified and acquired.
2. Write detailed test cases, identifying clear and concise steps to be taken
by the tester, with expected outcomes.
3. Assign the test cases to testers, who manually follow the steps and record
the results.
4. Author a test report, detailing the findings of the testers. The report is
used by managers to determine whether the software can be released, and
if not, it is used by engineers to identify and correct the problems.
A rigorous test case based approach is often traditional for large software
engineering projects that follow a Waterfall model. However, at least one recent
study did not show a dramatic difference in defect detection efficiency between
exploratory testing and test case based testing.
Testing can be through black-, white- or grey-box testing. In white-box testing
the tester is concerned with the execution of the statements through the source
code. In black-box testing the software is run to check for the defects and is less
concerned with how the processing of the input is done. Black-box testers do not
have access to the source code. Grey-box testing is concerned with running the
software while having an understanding of the source code and algorithm.
Static and dynamic testing approaches may also be used. Dynamic testing involves
running the software. Static testing includes verifying requirements, syntax of
code and any other activities that do not include actually running the code of the
program.
Testing can be further divided into functional and non-functional testing. In
functional testing the tester checks calculations, links on a page, or any other field
for which, given an input, a particular output is expected. Non-functional
testing includes testing performance, compatibility and fitness of the system
under test, its security and usability among other things.
Automation Testing means using an automation tool to execute your test case
suite. The automation software can also enter test data into the System under
Test, compare expected and actual results and generate detailed test reports.
Test Automation demands considerable investments of money and resources.
Successive development cycles will require execution of the same test suite
repeatedly. Using a test automation tool, it is possible to record this test suite and
replay it as required. Once the test suite is automated, no human intervention is
required, which improves the ROI of test automation.
The goal of automation is to reduce the number of test cases to be run manually,
not to eliminate manual testing altogether.
Test automation may be able to reduce or eliminate the cost of actual testing. A
computer can follow a rote sequence of steps more quickly than a person, and it
can run the tests overnight to present the results in the morning. However, the
labor that is saved in actual testing must be spent instead authoring the test
program. Depending on the type of application to be tested, and the automation
tools that are chosen, this may require more labor than a manual approach. In
addition, some testing tools present a very large amount of data, potentially
creating a time consuming task of interpreting the results.
Things such as device drivers and software libraries must be tested using test
programs. In addition, testing of large numbers of users (performance testing and
load testing) is typically simulated in software rather than performed in practice.
Conversely, graphical user interfaces whose layout changes frequently are very
difficult to test automatically. There are test frameworks that can be used for
regression testing of user interfaces. They rely on recording of sequences of
keystrokes and mouse gestures, then playing them back and observing that the
user interface responds in the same way every time. Unfortunately, these
recordings may not work properly when a button is moved or relabeled in a
subsequent release. An automatic regression test may also be fooled if the
program output varies significantly.
Automation testing is the use of tools to execute test cases, whereas manual testing
requires human intervention for test execution.
Automation Testing saves time, cost and manpower. Once recorded, it's easier to
run an automated test suite when compared to manual testing which will require
skilled labor. Any type of application can be tested manually but automated
testing is recommended only for stable systems and is mostly used for regression
testing. Also, certain testing types, like ad-hoc and monkey testing, are more
suited to manual execution. Manual testing can become repetitive and boring.
In automation testing, on the contrary, the boring part of executing the same test
cases time and again is handled by the automation software.
Manual testing: used when a test case only needs to be run once or twice. Executing
Build Verification Testing (BVT) manually is very mundane and tiresome.
Automation testing: used when a set of test cases needs to be executed repeatedly.
Automation testing is very useful for automating Build Verification Testing (BVT),
where it is not mundane and tiresome.
Table 1: Comparison between manual and automation testing
1.6 Matlab
MATLAB is a high-performance language for technical computing. It integrates
computation, visualization, and programming in an easy-to-use environment
where problems and solutions are expressed in familiar mathematical notation.
Typical uses include math and computation.
>> array = 1:3:9
array =
     1     4     7
defines a variable named array (or assigns a new value to an existing variable
with the name array) which is an array consisting of the values 1, 4, and 7.
That is, the array starts at 1 (the initial value), increments with each step from the
previous value by 3 (the increment value), and stops once it reaches (or to avoid
exceeding) 9 (the terminator value).
The increment value can actually be left out of this syntax (along with one of the
colons), to use a default value of 1.
>> ari = 1:5
ari =
     1     2     3     4     5
assigns to the variable named ari an array with the values 1, 2, 3, 4, and 5, since
the default value of 1 is used as the increment.
Indexing is one-based, which is the usual convention for matrices in
mathematics, although not for some programming languages such as C, C++,
and Java.
Matrices can be defined by separating the elements of a row with blank space or
comma and using a semicolon to terminate each row. The list of elements should
be surrounded by square brackets: []. Parentheses: () are used to access elements
and sub arrays (they are also used to denote a function argument list).
>> A = [16 3 2 13; 5 10 11 8; 9 6 7 12; 4 15 14 1]
A=
16 3 2 13
5 10 11 8
9 6 7 12
4 15 14 1
>> A(2,3)
ans =
11
Sets of indices can be specified by expressions such as "2:4", which evaluates to
[2, 3, 4]. For example, a sub matrix taken from rows 2 through 4 and columns 3
through 4 can be written as:
>> A(2:4,3:4)
ans =
11 8
7 12
14 1
A square identity matrix of size n can be generated using the function eye, and
matrices of any size with zeros or ones can be generated with the functions zeros
and ones, respectively.
>> eye(3,3)
ans =
     1     0     0
     0     1     0
     0     0     1
>> zeros(2,3)
ans =
     0     0     0
     0     0     0
>> ones(2,3)
ans =
     1     1     1
     1     1     1
Most MATLAB functions can accept matrices and will apply themselves to each
element. For example, mod(2*J,n) will multiply every element in "J" by 2, and
then reduce each element modulo "n". MATLAB does include standard "for" and
"while" loops, but (as in other similar applications such as R), using the
vectorized notation often produces code that is faster to execute. This code,
excerpted from the function magic.m, creates a magic square M for odd values of
n (MATLAB function meshgrid is used here to generate square matrices I and J
containing 1:n).
[J,I] = meshgrid(1:n);
A = mod(I + J - (n + 3) / 2, n);
B = mod(I + 2 * J - 2, n);
M = n * A + B + 1;
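As a quick check of the vectorized code above (a hypothetical run, assuming n has been
set to an odd value such as 3 beforehand), it reproduces the classic 3-by-3 magic square:
>> n = 3;
>> [J,I] = meshgrid(1:n);
>> A = mod(I + J - (n + 3) / 2, n);
>> B = mod(I + 2 * J - 2, n);
>> M = n * A + B + 1
M =
     8     1     6
     3     5     7
     4     9     2
Every row, column and diagonal of M sums to 15, as expected for a magic square.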
1.6.4 Structures
MATLAB has structure data types. Since all variables in MATLAB are arrays, a
more adequate name is "structure array", where each element of the array has the
same field names. In addition, MATLAB supports dynamic field names (field
look-ups by name, field manipulations, etc.). Unfortunately, the MATLAB JIT
compiler does not support MATLAB structures; therefore, even a simple bundling of
various variables into a structure will come at a cost.
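A minimal sketch of a structure array with a dynamic field look-up (the variable and
field names are purely illustrative):
>> s(1).name = 'unit';   s(1).count = 12;
>> s(2).name = 'system'; s(2).count = 3;
>> f = 'count';
>> s(2).(f)    % dynamic field name: equivalent to s(2).count
ans =
     3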
1.6.5 Function handles
MATLAB supports elements of lambda calculus by introducing function
handles, or function references, which are implemented either in .m files or
anonymous/nested functions.
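For example (an illustrative anonymous function and a handle to a built-in function):
>> sq = @(x) x.^2;   % anonymous function handle
>> sq(4)
ans =
    16
>> h = @sin;         % handle to an existing function
>> h(pi/2)
ans =
     1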
1.6.6 Classes
Although MATLAB has classes, the syntax and calling conventions are
significantly different from other languages. MATLAB has value classes and
reference classes, depending on whether the class has handle as a super-class
(for reference classes) or not (for value classes).
Method call behavior is different between value and reference classes. For
example, a call to a method
object.method();
can alter any member of object only if object is an instance of a reference class.
1.6.7 Graphics and graphical user interface programming
MATLAB supports developing applications with graphical user interface
features. MATLAB includes GUIDE (GUI development environment) for
graphically designing GUIs. It also has tightly integrated graph-plotting features.
For example the function plot can be used to produce a graph from two vectors x
and y. The code:
x = 0:pi/100:2*pi;
y = sin(x);
plot(x,y)
Fig. 1.2: Wireframe 3D plot
[X,Y] = meshgrid(-10:0.25:10,-10:0.25:10);
f = sinc(sqrt((X/pi).^2+(Y/pi).^2));
surf(X,Y,f);
axis([-10 10 -10 10 -0.3 1])
xlabel('{\bfx}')
ylabel('{\bfy}')
zlabel('{\bfsinc} ({\bfR})')
Chapter 2
Objective & Methodology
2.1 Objective
The objective of unit testing is to isolate each part of the program and show that
the individual parts are correct. A unit test provides a strict, written contract that
the piece of code must satisfy. As a result, it affords several benefits.
2.1.1 Test-Driven Development
Unit testing finds problems early in the development cycle.
In test-driven development (TDD), which is frequently used in both Extreme
Programming and Scrum, unit tests are created before the code itself is written.
When the tests pass, that code is considered complete. The same unit tests are
run against that function frequently as the larger code base is developed, either as
the code is changed or via an automated process as part of the build. If the unit tests
fail, it is considered to be a bug either in the changed code or the tests
themselves.
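As a small illustration of this test-first idea in MATLAB (the function name add2 and
the file name test_add2.m are hypothetical, not part of this project):
% test_add2.m - written before add2.m exists, so it fails first and
% passes only once add2 has been implemented correctly.
expected = 5;
actual = add2(2, 3);
assert(isequal(actual, expected), 'add2(2,3) should return 5');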
2.1.2 Facilitates change
Readily available unit tests make it easy for the programmer to check whether a
piece of code is still working properly. In continuous unit testing environments,
through the inherent practice of sustained maintenance, unit tests will continue to
accurately reflect the intended use of the executable and code in the face of any
change. Depending upon established development practices and unit test
coverage, up-to-the-second accuracy can be maintained.
2.1.3 Simplifies integration
Unit testing may reduce uncertainty in the units themselves and can be used in a
bottom-up testing style approach. By testing the parts of a program first and then
testing the sum of its parts, integration testing becomes much easier. An
elaborate hierarchy of unit tests does not equal integration testing. Integration
with peripheral units should be included in integration tests, but not in unit tests.
2.1.4 Documentation
Unit testing provides a sort of living documentation of the system. Developers
looking to learn what functionality is provided by a unit, and how to use it, can
look at the unit tests to gain a basic understanding of the unit's interface (API).
2.1.5 Design
When software is developed using a test-driven approach, the combination of
writing the unit test to specify the interface plus the refactoring activities
performed after the test is passing, may take the place of formal design. Each
unit test can be seen as a design element specifying classes, methods, and
observable behaviour.
Technical feasibility
Ability to use the same test cases for cross-browser testing.
CHAPTER-3
LITERATURE REVIEW
3.1 HISTORICAL BACKGROUND
For the quality of a test, the design of test cases is important. A large
number of test methods have been developed to support the developer when
choosing appropriate test data. Some useful testing methods are structural testing
methods, functional testing methods and statistical testing methods [81]. It is very
difficult to develop correct, good and unique test cases manually. Therefore,
automation of test cases is important. The success of a test data generation method
largely depends upon the efficiency of its search technique. Different researchers
have worked on test case automation from time to time with the aim to increase the
quality of the tests and to achieve substantial cost saving in the system development
by means of higher degree of automation.
One critical task in software testing is the creation of test data to satisfy a
given test-coverage criterion. This process is called Test Data Generation.
Developments in the field of automated test data generation were initiated in the early
1970s, when papers on testing large software with automated software evaluation
systems by Ramamoorthy in 1976 [6] and Holland in 1975 [8], and Automatic
Generation of Floating-Point Test Data by Miller and Spooner in 1976 [10], were
published. Nevertheless, the work done by Clarke in 1976 [5] is considered to be the
first of its kind to produce a solid algorithm for Automatic Test Data Generation
(ATDG).
Path testing searches the program domain for suitable test cases that
cover every possible path in the Software Under Test (SUT) [53]. It is a stronger
criterion than statement and branch coverage [37]. It pushes code coverage to a
greater extent and hence increases the chances of error detection [34]. However, it is
generally impossible to achieve this goal, for several reasons. First, a program may
contain an infinite number of paths when the program has loops [23, 34, 53, 85].
Second, the number of paths in a program is exponential in the number of branches
in it [83, 86] and many of them may be infeasible. Third, the number of test cases is
very large, since each path can be covered by several test cases. For these reasons,
path testing can become an NP-complete problem [83], making the covering of all
possible paths computationally impractical. Since it is impossible to cover all paths
in software, path testing selects a subset of paths to execute. A related difficulty
arises in symbolic execution, where it is hard to determine which of the potential
symbolic values will be used for an array element or pointer. Symbolic execution
also cannot find floating-point inputs because the current constraint solvers cannot
solve floating-point constraints.
Constraint Based Testing builds up constraint systems which describe the
given test goal. The solution to this constraint system brings about satisfaction of the
goal. The original purpose of Constraint Based testing was to generate test data for
mutation testing. Reachability constraints within the constraint system describe
conditions under which a particular statement will be reached. Necessity constraints
describe the conditions under which a mutant will be killed. With Constraint-based
testing, constraints must be computed before they are analysed.
Another type of constraint-based test data generation technique, invented by
DeMillo and Offutt [2], uses symbolic execution to develop the constraints in terms of
the input variables and is called Domain Reduction. Domain Reduction is then used to
attempt a solution to the constraints. The first step
of this technique starts with the finding of domains of each input variable which are
derived from type or specification information or be supplied by the tester. The
domains are then reduced using information in the constraints, beginning with those
involving a relation operator, a variable, a constant and constraints involving a
relation operator and two variables. This helps in reducing the search space (input
domain) for solving a constraint system. Remaining constraints are then simplified
by back-substituting values. Although efforts were made to improve the
performance of algorithmic search methods by employing techniques such as
identification of undesirable variables, finding the optimum order of consideration of
input variables, use of a binary search algorithm, and expression-handling techniques,
these required a great deal of manual and time-consuming analysis. This makes
algorithmic search methods very slow and ineffective. These algorithms also lack
global search capabilities, which are a necessary requirement for software testing,
where objective functions are very complex and usually non-linear [82]. Since these
constraints are derived using symbolic execution, the method suffers from similar
problems involving loops, procedure calls and computed storage locations [88].
To overcome the limitations of the Domain Reduction method, another method called
Dynamic Domain Reduction was introduced by Offutt in 1997 [11]. Dynamic Domain
Reduction also starts with the domain of input variables like the Domain
Reduction but these domains are reduced dynamically during the Symbolic
Execution stage, using constraints composed from branch predicates encountered as
the path is followed. If the branch predicate involves a variable comparison, the
domains of the input variables responsible for the outcome at the decision are split
at some arbitrary split point rather than assigning random input values. Dynamic
Domain Reduction still suffers from difficulties due to computed storage locations
and loops. Furthermore, it is not clear how domain reduction techniques handle non-ordinal variable types such as enumerations [88].
3.3.2 Dynamic Testing
Unlike static testing, dynamic methods require the execution of code.
The test cases are run on the code of the software product under test with
the help of a computer. Since array subscripts and pointer values are known at run
time, many of the problems associated with symbolic execution can be overcome
with dynamic methods, which is not possible with static testing. Dynamic Test Data
Generation Technique collects information during the execution of the program to
determine which test cases come closest to satisfying the requirement. Then, test
inputs are incrementally modified until one of them satisfies the requirements [3, 9].
Random Test-Data Generation Techniques select inputs at random until useful
inputs are found [14]. In random testing, random values are generated from domains of
inputs and program is executed using these values. If these inputs are able to satisfy the
testing criterion then they form a test case [82]. This technique may fail to find test data
to satisfy the requirements because information about the test requirements is not
incorporated into the generation process [25]. J. W. Duran and S. Ntafos in 1984 [64]
reported random testing to be satisfactory for small as well as large programs. Thayer
and others [89] used it to measure reliability of the system. Demillo and others [90] also
used random testing for identifying seeded faults in programs.
Mayer and Schneckenburger [91] empirically investigated different flavors of
adaptive random testing. They concluded that distance based random testing and
restricted random testing are the best methods for this class of testing techniques. This
approach is quick and simple but it is a poor choice with complex programs and
complex adequacy criteria. The probability of selecting an adequate input by chance
could be low in this case. The biggest issue for random approach is that of adequate
test data selection. Myers [37] viewed random testing as a worst case of program
testing.
The results of actual executions of the program with a search technique were
first studied by Miller and Spooner [30]. These were originally designed for the
generation of floating-point test data. However, the principles are more widely
applicable. The tester selects a path through the program and then produces a straight-line version of it, containing only that path. Korel suggested a dynamic approach to
automatic test data generation using function minimization and directed search [32]. In
this work, the test data generation procedure worked on an instrumented version of the
original program without the need for a straight-line version to be produced. The search
targeted the satisfaction of each branch predicate along the path in turn, circumventing
issues encountered by the work of Miller and Spooner. In this approach an exploratory
search is performed, in which the selected input variables are modified by a small
amount and submitted to the program. Korel [32] used the alternating variable method
for his dynamic test data generator. The alternating variable method works in two
phases. First, an input variable is selected and its value is changed in small steps just
to find the direction in which the variable minimizes the branch function. This is
called the exploratory search. Once the direction of search is known, a pattern search
is then taken in larger steps to find the value of the variable under consideration that
satisfies or minimizes the branch function. If the selected value of the variable fails to
decrease the branch function, the steps of the pattern search are decreased successively
before exploring other variables for minimization. Gallagher and Narasimhan [95] built on Korel's work for
programs written in ADA. In particular, this was the first work to record support for the
use of logical connectives within branch predicates. Dynamic techniques can stall when
they encounter local minima because they depend on local search techniques such as
gradient descent [82].
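The exploratory/pattern search just described can be sketched in MATLAB as follows.
This is only an illustrative reconstruction, not code from the cited papers; the
branch-distance function branchDist, the variable names and the step sizes are all
assumptions.
function x = avmSearch(x, branchDist)
% Minimal sketch of the alternating variable method. branchDist(x) >= 0 is an
% assumed branch-distance function that equals 0 when the target branch is taken.
improved = true;
while improved && branchDist(x) > 0
    improved = false;
    for i = 1:numel(x)                 % consider one input variable at a time
        for dir = [-1, 1]              % exploratory moves in both directions
            step = dir;
            trial = x; trial(i) = trial(i) + step;
            while branchDist(trial) < branchDist(x)
                x = trial;             % accept the improving move
                improved = true;
                step = 2 * step;       % pattern move: accelerate in this direction
                trial = x; trial(i) = trial(i) + step;
            end
        end
    end
end
end
For instance, avmSearch([0 0], @(v) abs(v(1) - 10)) drives the first input to 10, where
the assumed branch distance reaches zero.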
Korel, in 1992 [95], first used the concept of the Goal-Oriented Approach.
Goal-oriented techniques identify test data covering a selected goal such as a statement
or a branch, irrespective of the path taken [23]. This approach involves two basic steps:
to identify a set of statements (respective branches) the covering of which implies
covering the criterion; to generate input test data that execute every selected statement
(respective branch) [96]. Two typical approaches, Assertion-Based and Chaining
Approach, are known as goal-oriented. In the first case, assertions are inserted
and then solved. In the chaining approach, data dependence analysis is carried out. It
uses the concept of an event sequence as an intermediate means of deciding the type
of path required for execution up to the target node [97, 99]. An event sequence is
basically a succession of program nodes that are to be executed. The initial event
sequence consists of just the start node and target node. Extra nodes are then
inserted into this event sequence when the test data search encounters difficulties.
Generally the goal-oriented approach faces issues of goal selection and selection of
adequate test data [98].
3.3.3 Functional Testing
Functional Testing is also called specification-based or Black Box Testing.
If testers want to test functional requirements, they may use the Black Box Testing
technique (function minimization methods, on the other hand, are dynamic and
based on program execution). Black Box Testing does not need knowledge of how
software is programmed. It generates test data for software from its specification
without considering the behavior of the program under test. Testers inject test data to
execute the program, then compare the actual result with the specified test oracle. The test
engineers engaged in black box testing only know the sets of inputs and expected
outputs and are unaware of how those inputs are transformed into outputs by the software.
Black box testing requires functional knowledge of the product to be tested [1, 9].
Black Box Testing helps in the overall functionality verification of the system under
test.
Syntax-based testing involves boundary value analysis, partition
analysis, domain testing, equivalence partitioning, domain partitioning, and
functional analysis [100, 101, 102, 87]. Hoffman in 1999 [100] presented a
technique based on boundary value analysis; in this technique the relationship
between a generalized approach to boundary values and statement coverage is
explored. Jeng in 1999 [101] presented a technique that is mainly related to
domain testing. It combined the static approach with the dynamic search method. In
1997, Gallagher and Lakshmi Narasimhan [102] proposed a method for locating
input domain boundary intersections and generating ON/OFF test data points.
3.4 METAHEURISTICS
Metaheuristics are general heuristic methods that guide the search through
the solution space, using as surrogate algorithms some form of heuristics and usually
local search. Starting from an initial solution built by some heuristic, metaheuristics
improve it iteratively until a stopping criterion is met. The stopping criterion can be
elapsed time, number of iterations, number of evaluations of the objective function
and so on [41]. Voss in 1999 [105] described a metaheuristic as an "iterative master
process that guides and modifies the operations of subordinate heuristics to
efficiently produce high-quality solutions".
The most successful search algorithm class is based on metaheuristic
techniques like Hill Climbing (HC), Tabu Search (TS), Simulated Annealing (SA),
Genetic Algorithm (GA), Ant Colony Optimization (ACO), Particle Swarm
Optimization (PSO), Cat Intelligence, etc. McMinn [88] has provided a detailed and
up-to-date survey on the use of metaheuristic techniques for software testing. Several
metaheuristics have been suggested for path coverage [83, 85], statement coverage
[23] and branch coverage [23, 34]. In such cases the use of metaheuristics would be
very useful in providing usable results in a reasonable time.
3.4.1 Hill Climbing (HC)
Hill Climbing is a local search algorithm. Starting from a solution created at
random or by some problem specific heuristic, standard local search tries to improve on
it by iteratively deriving a similar solution in the neighborhood of the so-far best
solution. Responsible for finding a neighboring solution is a move-operator, which must
be carefully defined according to the problem. This progression improvement is likened
to the climbing of hills in the landscape of a maximising objective function [88]. It
applies standard local search multiple times from different starting solutions and returns
the best local optimum identified [41]. The major disadvantage of standard local search
is its high probability of getting trapped at a poor local optimum.
3.4.2 Tabu Search (TS)
Tabu Search extends local search with a short-term memory, or history, in which the
last created solutions or, alternatively, the last moves (i.e., changes from one candidate
solution to the next) are stored. These solutions, respectively moves, are forbidden
(tabu) in the next iteration and the algorithm is forced to approach unexplored areas of
the search space. It uses neighborhood information and backtracking to escape local
optima. Work applying tabu search to test data generation defined two cost functions
for intensifying and diversifying the search mechanism. These cost functions are
similar to the functions used by Wegener, in 2002 [81], in which individuals are
penalised for taking a wrong path while executing the program. The penalty is fixed
on the basis of the error value produced by an individual in the effort of satisfying a
branch constraint.
3.4.3 Simulated Annealing (SA)
Another way for enabling local search to escape from local optima and
approach new areas of attraction in the search space is to sometimes also accept
worse neighboring solutions. Simulated annealing does this in a probabilistic way.
Simulated Annealing (SA) algorithms, based on the analogy of annealing process of
metals, were proposed by Metropolis [106] in 1953 and were first applied to
combinatorial optimization problems by Kirkpatrick in 1983 [ 107]. SA is
considered to be an improvement heuristic where a given initial solution is
iteratively improved upon. SA is a metaheuristic method used for test case
generation in which process of cooling of a material simulates the change in energy
level with time or iterations. The steady state in energy symbolizes the convergence
of the solution. At the beginning of the optimization, worse solutions are accepted with
a relatively high probability, and this probability is reduced over time in order to
achieve convergence. A number of researchers have applied SA to testing problems.
Tracey [108, 109] constructed an SA-based test data generator for safety-critical
systems. A hybrid objective function is used which combines branch distance and the
number of executed control-dependent nodes. N. Mansour [83] in 2004 reported that
GA is faster than SA for generating test cases.
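The acceptance rule described above can be sketched in a few lines of MATLAB (an
illustrative helper, not part of any cited work; delta and T stand for the assumed
increase in the objective value and the current temperature):
function ok = acceptMove(delta, T)
% Simulated-annealing acceptance rule: improvements (delta <= 0) are always
% accepted, while worse moves are accepted with probability exp(-delta/T),
% which shrinks as the temperature T is lowered over the iterations.
ok = (delta <= 0) || (rand < exp(-delta / T));
end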
3.4.4 Genetic Algorithms (GAs)
GA is one of the most popular and intensively pursued techniques for software
testing. The GA is a global search metaheuristic proposed originally by Holland [8] in
1975. Extensive work has been done on the development of the original algorithm in
the last 20 years and it has been applied successfully in many fields of science and
engineering [92, 93]. The GA is based on the principles of Darwin's theory of
natural evolution and belongs to a more general category, the Evolutionary
Algorithms (EAs).
Recently, test-data generation techniques based on genetic algorithms (GAs)
have been developed [13, 24, 23, 81, 84, 103]. Whereas previous techniques may
not be useful in practice, techniques based on GAs have the potential to be used for real
systems. Xanthakis [103] was the first to apply GA to automatic test case generation.
Pargas et al. [23] presented a Genetic Algorithm directed by the control-dependence
graph of the program under test to search for test data to satisfy all-nodes and all-branches criteria. Wagener [18] logarithmized the objective function to provide
better guidance for its GA based test case generator. They present a test environment
for automatic generation of test data for statement and branch testing. These
techniques evolve a set of test data using genetic operations (selection and
recombination) to find the required test data. Michael et al. [24] used GAs for
automatic test-data generation to satisfy condition-decision test-coverage criterion.
They proposed a GA based test generation system called Genetic Algorithm Data
Generation Tool (GADGET) to generate test cases for large C and C++ programs by
using condition decision coverage metrics.
Watkins [84] and Ropar [38] used coverage-based criteria for assessing the
fitness of individuals in their GA-based test generators. Lin and Yeh [85] used a
Hamming-distance-based metric in the objective function of their GA program to
identify the similarity and distance between the actual path and the already selected
target path in dynamic testing. Bouchachia [116] incorporated immune operators in a
genetic algorithm to generate software test data for condition coverage.
GA has started getting competition from other heuristic search techniques
like Particle Swarm Optimization. Various works [16-20] show that particle swarm
optimization is equally well suited as, or even better than, Genetic Algorithms for
solving a number of test problems [21].
3.4.5 Particle Swarm Optimization (PSO)
PSO has been applied successfully to a wide variety of search and optimization
problems [16-20, 110, 111]. It is motivated by the simulation of social behaviour such
as bird flocking and fish schooling.
A superior hybrid genetic algorithm, in which initial solutions are first searched by
PSO, for multi-objective scheduling of flexible manufacturing systems was proposed
by Biswal [140]. The outstanding performance of this algorithm overcomes the main
limitation of the earlier work done by Naderi in 2009 [143].
The majority of the work reported on software testing problems has dealt with
statement testing, branch testing, path testing, and data flow testing, which have
their own limitations. Hence more attention is required towards a new
approach for testing.
Automatic test data generation is a major issue in software testing. Much work has
been reported on automatic test data generation, but a new approach is required
that can generate unique test data and that does not fall into local optima.
Most of the work has been done on the hybridization of local search and heuristic
techniques. There is limited work towards hybridization of metaheuristic algorithms
in software testing; hence more emphasis is required towards it.
Development of heuristics and metaheuristics for testing problems is still a major issue in
software testing, which includes automatically generating test data that covers each and every
statement. Therefore, in the present work, automatic test data generation problems with very good
performance measures, including generation of unique test data and coverage of each and every
statement (100 percent statement coverage), have been considered. An attempt has been made to
develop a hybrid algorithm that combines the strengths of two algorithms, PSO and GA, for solving
the test data generation problem of software testing, and which must be effective in generating test cases.
Chapter 4
Problem formulation
4.1 Challenges in software testing
Software testing has a lot of challenges, both in manual and in automation testing. Generally, in a
manual testing scenario, developers throw the build to the test team, assuming that the responsible
test team or tester will pick up the build and come to ask what the build is about. This is the case in
organizations not following so-called processes. The tester is the middleman between the development
team and the customers, handling the pressure from both sides. This is not always the case, though;
sometimes testers may add complications to the testing process due to their unskilled way of
working.
So here we go with the top challenges:
1) Testing the complete application: Is it possible? I think it is impossible. There are millions of test
combinations. It's not possible to test each and every combination, both in manual and in
automation testing. If you try all these combinations you will never ship the product.
2) Misunderstanding of company processes: Sometimes testers do not pay proper attention
to what the company-defined processes are and what purposes they serve. There is a myth
among testers that they should always follow company processes, even when these processes are not
applicable to their current testing scenario. This results in incomplete and inappropriate
application testing.
3) Relationship with developers: This is a big challenge. It requires a very skilled tester to handle this
relationship positively while still completing the work in the tester's way. There are simply hundreds of
excuses developers or testers can make when they do not agree on some point. For this, a tester
also requires good communication, troubleshooting and analytical skills.
4) Regression testing: As the project keeps expanding, the regression testing work simply
becomes uncontrolled. There is pressure to handle current functionality changes, checks of
previously working functionality and bug tracking.
5) Lack of skilled testers: I would call this a wrong management decision when selecting or
training testers for the project task in hand. These unskilled testers may add more chaos than
they remove, which results in incomplete, insufficient and ad-hoc testing
throughout the testing life cycle.
6) Testing always under time constraint: "Hey tester, we want to ship this product by this
weekend. Are you ready?" When this order comes from the boss, the tester simply focuses
on task completion and not on test coverage and quality of work. There is a huge list of tasks
that need to be completed within the specified time. This includes writing, executing, automating
and reviewing the test cases.
7) Which tests to execute first? If you are facing the challenge stated in point 6, how
will you decide which test cases should be executed and with what priority? Which tests
are more important than others? Answering this under pressure requires good experience.
8) Understanding the requirements: Sometimes testers are responsible for communicating
with customers to understand the requirements. What if the tester fails to understand the
requirements? Will he be able to test the application properly? Definitely not! Testers require
good listening and comprehension skills.
4.2 Automated Software Testing for MATLAB
Software testing can improve software quality. To test effectively, scientists and engineers should
know how to write and run tests, define appropriate test cases, determine expected outputs, and
correctly handle floating-point arithmetic.
Using the mlUnit automated testing framework for MATLAB, scientists and engineers can make
software testing an integrated part of their software development routine.
MATLAB xUnit can be used with MATLAB releases R2008a and later. MATLAB xUnit relies
heavily on object-oriented language features introduced in R2008a and will not work with earlier
releases.
4.3 mlUnit
mlUnit is a unit test framework for the MATLAB M language. It follows patterns of the xUnit
family, including assertions, test cases and suites, as well as fixtures.
In contrast to MATLAB's own unit test framework:
1. mlUnit outputs jUnit-compatible XML reports.
2. mlUnit is compatible with your MATLAB release (not just R2013b), down to R2006b.
3. mlUnit offers specialised assert functions, e.g. assert_empty, assert_warning, and many more.
This software and all associated files are released under the GNU General Public License (GPL)
as published by the Free Software Foundation.
Chapter 5
DESIGN & IMPLEMENTATION
5.1 User-Defined Functions
A user-defined function is a Matlab program that is created by the user, saved as a function file,
and then can be used like a built-in function. A function in general has input arguments (or
parameters) and output variables (or parameters) that can be scalars, vectors, or matrices of any
size. There can be any number of input and output parameters, including zero. Calculations
performed inside a function typically make use of the input parameters, and the results of the
calculations are transferred out of the function by the output parameters.
Writing a Function File
A function file can be written using any text editor (including the Matlab Editor). The file must
be in the Matlab Path in order for Matlab to be able to locate the file. The first executable line in
a function file must be the function definition line, which must begin with the keyword function.
The most general syntax for the function definition line is:
function [out1, out2, ...] = functionName(in1, in2, ...)
where functionName is the name of the user-defined function, in1, in2, ... are the input
parameters, and out1, out2, ... are the output parameters.
Empty parentheses may be used even if the function has no input parameters:
function [out1, out2, ...] = functionName()
If there is only one output parameter, then the square brackets can be omitted:
function out = functionName(in1, in2, ...)
If there is no output parameter at all, the function is called a void function. The function
definition line is then written as:
function functionName(in1, in2, ...)
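For instance, a small user-defined function (the file name vecstats.m and its contents are
illustrative only) could be saved as:
function [m, s] = vecstats(v)
% VECSTATS  Return the mean and standard deviation of a vector.
m = mean(v);
s = std(v);
end
Calling it from the command window then gives:
>> [m, s] = vecstats([2 4 6 8])
m =
     5
s =
    2.5820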
inline(expr) constructs an inline function object from the MATLAB expression contained in the
string expr. The input argument to the inline function is automatically determined by searching
expr for an isolated lower-case alphabetic character, other than i or j, that is not part of a word
formed from several alphabetic characters. If no such character exists, x is used. If the character
is not unique, the one closest to x is used. If two characters are found, the one later in the
alphabet is chosen.
inline(expr,arg1,arg2,...) constructs an inline function whose input arguments are specified by the
strings arg1, arg2, .... Multi-character symbol names may be used.
inline(expr,n), where n is a scalar, constructs an inline function whose input arguments are x, P1,
P2, ... .
Example a:
This example creates a simple inline function to square a number.
g = inline('t^2')
g =
     Inline function:
     g(t) = t^2
You can convert the result to a string using the char function.
char(g)
ans =
t^2
Example b:
This example creates an inline function to represent the formula f = 3sin(2x²). The resulting
inline function can be evaluated with the argnames and formula functions.
f = inline('3*sin(2*x.^2)')
f =
     Inline function:
     f(x) = 3*sin(2*x.^2)
argnames(f)
ans =
    'x'
formula(f)
ans =
     3*sin(2*x.^2)
Example c:
This call to inline defines the function f to be dependent on two variables, alpha and x:
f = inline('sin(alpha*x)')
f =
     Inline function:
     f(alpha,x) = sin(alpha*x)
If inline does not return the desired function variables or if the function variables are in the
wrong order, you can specify the desired variables explicitly with the inline argument list.
g = inline('sin(alpha*x)','x','alpha')
g =
     Inline function:
     g(x,alpha) = sin(alpha*x)
For example, the whos command lists the size, bytes and class of each variable in the workspace:
  Size          Bytes  Class
  1x1               8  double
  1x9              18  char
  1x1             370  struct
Basic commands like whos display the class of each value in the workspace. This information
helps MATLAB users recognize that some values are characters and display as text while other
values are double precision numbers, and so on. Some variables can contain different classes of
values like structures.
5.4.2 Predefined Classes
MATLAB defines fundamental classes that comprise the basic types used by the language.
5.4.3 User-Defined Classes
You can create your own MATLAB classes. For example, you could define a class to represent
polynomials. This class could define the operations typically associated with MATLAB classes,
like addition, subtraction, indexing, displaying in the command window, and so on. These
operations would need to perform the equivalent of polynomial addition, polynomial subtraction,
and so on. For example, when you add two polynomial objects:
p1 + p2
the plus operation must be able to add polynomial objects because the polynomial class defines
this operation.
When you define a class, you can overload special MATLAB functions (such as plus.m for the
addition operator). MATLAB calls these methods when users apply those operations to objects of
your class.
If the "is a" relationship holds, then it is likely you can define a new class from a class or classes
that represent some more general case.
Reusing Solutions
Classes are usually organized into taxonomies to foster code reuse. For example, if you define a
class to implement an interface to the serial port of a computer, it would probably be very similar
to a class designed to implement an interface to the parallel port. To reuse code, you could define
a super class that contains everything that is common to the two types of ports, and then derive
subclasses from the super class in which you implement only what is unique to each specific
port. Then the subclasses would inherit all of the common functionality from the super class.
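A minimal sketch of this idea, using hypothetical Port and SerialPort classes (in MATLAB each
classdef must live in its own file):
% Port.m - superclass containing what is common to all port types.
classdef Port
    properties
        BaudRate = 9600
    end
    methods
        function open(obj)
            fprintf('Opening port at %d baud\n', obj.BaudRate);
        end
    end
end
The subclass then inherits BaudRate and open, and adds only what is specific to it:
% SerialPort.m - subclass that reuses everything defined by Port.
classdef SerialPort < Port
    methods
        function setParity(obj, p)
            % Serial-specific configuration would go here.
            fprintf('Parity set to %s at %d baud\n', p, obj.BaudRate);
        end
    end
end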
Objects
A class is like a template for the creation of a specific instance of the class. This instance or
object contains actual data for a particular entity that is represented by the class. For example, an
instance of a bank account class is an object that represents a specific bank account, with an
actual account number and an actual balance. This object has built into it the ability to perform
operations defined by the class, such as making deposits to and withdrawals from the account
balance.
Objects are not just passive data containers. Objects actively manage the data they contain by
allowing only certain operations to be performed, by hiding data that does not need to be public,
and by preventing external clients from misusing data by performing operations for which the
object was not designed. Objects even control what happens when they are destroyed.
Encapsulating Information
An important aspect of objects is that you can write software that accesses the information stored
in the object via its properties and methods without knowing anything about how that
information is stored, or even whether it is stored or calculated when queried. The object isolates
code that accesses the object from the internal implementation of methods and properties. You
can define classes that hide both data and operations from any methods that are not part of the
class. You can then implement whatever interface is most appropriate for the intended use.
Define a Simple Class
The basic purpose of a class is to define an object that encapsulates data and the operations
performed on that data. For example, the class clas1 below defines a property and a method that
operate on the data in that property:
x - a property that contains the data stored in an object of the class
sq - a method that squares the value of the property
classdef clas1
    properties
        x
    end
    methods
        function p = sq(obj)
            % Square the value stored in the x property.
            p = obj.x*obj.x
        end
    end
end
To use the class, create an object of the class, assign the class data, and call operations on that
data. When we run the above code, the result is as follows.
Create an object of the class:
>> y = clas1
y =
     clas1
Assign a value to the property:
>> y.x = 9
y =
     clas1
Access the member function of the class, passing the object as a parameter:
>> sq(y)
p =
    81
Testing by passing a wrong expected value:
>> assert_equals(82, sq(y))
p =
    81
The same checks can be collected in a test script:
y = clas1;
y.x = 9;
assert_equals(81, sq(y))
assert_equals(80, sq(y))
Output: the first assertion passes, while the second fails because sq(y) returns 81, not 80.
By implementing a method called plus, you can use the "+" operator with objects of BasicClass.
a = BasicClass(pi/3);
b = BasicClass(pi/4);
a + b
ans =
    1.8326
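A minimal sketch of such a class (assuming, as in the MATLAB documentation example this is
based on, that BasicClass stores its data in a Value property):
classdef BasicClass
    properties
        Value
    end
    methods
        function obj = BasicClass(v)
            obj.Value = v;    % store the constructor argument
        end
        function r = plus(o1, o2)
            % Overload "+" so that a + b returns the sum of the Value properties.
            r = o1.Value + o2.Value;
        end
    end
end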
Class-Based Unit Tests
(a) Author Class-Based Unit Tests in MATLAB
(b) Write Simple Test Case Using Classes
(c) Write Setup and Teardown Code Using Classes
(d) Tag Unit Tests
(e) Write Tests Using Shared Fixtures
(f) Create Basic Custom Fixture
(g) Create Advanced Custom Fixture
(h) Create Basic Parameterized Test
(i) Create Advanced Parameterized Test
Run Unit Tests
Run test suites in the testing framework
To run a group of tests with matlab.unittest, create a test suite from the
matlab.unittest.TestSuite class. Test suites can contain tests defined in classes, functions, or scripts.
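For example (a minimal sketch, assuming the DocPolynomTest class used in the runner example of
Chapter 6 is on the MATLAB path):
import matlab.unittest.TestSuite
suite = TestSuite.fromClass(?DocPolynomTest);   % build the suite from one test class
results = run(suite);                           % run with the default test runner
disp(table(results))                            % tabular summary of the results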
Chapter 6
Implementation
To test a MATLAB program, write a unit test using qualifications that are methods for testing
values and responding to failures.
Qualifications are methods for testing values and responding to failures. There are four types of
qualification: verifications, which record a failure but let the test continue; assumptions, which mark
a test as filtered (incomplete) when its preconditions are not met; assertions, which fail and abort the
current test; and fatal assertions, which abort the entire test session.
The MATLAB Unit Testing Framework provides approximately 25 qualification methods for
each type of qualification. For example, use verifyClass or assertClass to test that a value is of
an expected class, and use assumeTrue or fatalAssertTrue to test whether the actual value is true. For
a summary of qualification methods, see Types of Qualifications.
Often, each unit test function obtains an actual value by exercising the code that you are testing
and defines the associated expected value. For example, if you are testing the plus function, the
actual value might be plus(2,3) and the expected value 5. Within the test function, you pass the
actual and expected values to a qualification method. For example:
testCase.verifyEqual(plus(2,3),5)
For an example of a basic unit test, see Write Simple Test Case Using Classes.
Additional Features for Advanced Test Classes
The MATLAB Unit Testing Framework includes several features for authoring more advanced
test classes:
Setup and teardown method blocks to implicitly set up the pretest state of the system and
return it to the original state after running the tests. For an example of a test class with
setup and teardown code, see Write Setup and Teardown Code Using Classes.
Advanced qualification features, including actual value proxies, test diagnostics, and a
constraint interface. For more information, see matlab.unittest.constraints and
matlab.unittest.diagnostics.
Parameterized tests to combine and execute tests on the specified lists of parameters. For
more information, see Create Basic Parameterized Test and Create Advanced
Parameterized Test.
Ready-to-use fixtures for handling the setup and teardown of frequently used testing
actions and for sharing fixtures between classes. For more information, see
matlab.unittest.fixtures and Write Tests Using Shared Fixtures.
Ability to create custom test fixtures. For more information see Create Basic Custom
Fixture and Create Advanced Custom Fixture.
TestMethodSetup and TestMethodTeardown methods run before and after each test
method.
TestClassSetup and TestClassTeardown methods run before and after all test methods
in the test case.
The testing framework guarantees that TestMethodSetup and TestClassSetup methods of
superclasses are executed before those in subclasses.
It is good practice for test authors to perform all teardown activities from within the
TestMethodSetup and TestClassSetup blocks using the addTeardown method, instead of
implementing corresponding teardown methods in the TestMethodTeardown and
TestClassTeardown blocks. This guarantees the teardown is executed in the reverse order of
the setup and also ensures that the test content is exception safe.
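A minimal sketch of that recommendation (a hypothetical variant of the figure fixture used in
the class below):
classdef FigureTeardownSketch < matlab.unittest.TestCase
    properties
        TestFigure
    end
    methods (TestMethodSetup)
        function createFigure(testCase)
            testCase.TestFigure = figure;
            % Register the cleanup at setup time instead of writing a separate
            % TestMethodTeardown block; teardown then runs in reverse order of
            % setup and remains exception safe.
            testCase.addTeardown(@close, testCase.TestFigure);
        end
    end
    methods (Test)
        function figureIsValid(testCase)
            testCase.verifyTrue(isvalid(testCase.TestFigure));
        end
    end
end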
The following test case, FigurePropertiesTest, contains setup code at the method level. The
TestMethodSetup method creates a figure before running each test, and TestMethodTeardown
closes the figure afterwards. As discussed previously, you should try to define teardown activities
with the addTeardown method. However, for illustrative purposes, this example shows the
implementation of a TestMethodTeardown block.
classdef FigurePropertiesTest < matlab.unittest.TestCase
    properties
        TestFigure
    end
    methods(TestMethodSetup)
        function createFigure(testCase)
            % Create a fresh figure before each test method runs.
            testCase.TestFigure = figure;
        end
    end
    methods(TestMethodTeardown)
        function closeFigure(testCase)
            close(testCase.TestFigure)
        end
    end
    methods(Test)
        function defaultCurrentPoint(testCase)
            cp = testCase.TestFigure.CurrentPoint;
            testCase.verifyEqual(cp, [0 0], ...
                'Default current point is incorrect')
        end
        function defaultCurrentObject(testCase)
            import matlab.unittest.constraints.IsEmpty
            co = testCase.TestFigure.CurrentObject;
            testCase.verifyThat(co, IsEmpty, ...
                'Default current object should be empty')
        end
    end
end
A second test class (given the assumed name BankAccountTest here) exercises a BankAccount
class and its InsufficientFunds event:
classdef BankAccountTest < matlab.unittest.TestCase
    methods (Test)
        function testConstructor(testCase)
            b = BankAccount(1234, 100);
            testCase.verifyEqual(b.AccountNumber, 1234, ...
                'Constructor failed to correctly set account number');
            testCase.verifyEqual(b.AccountBalance, 100, ...
                'Constructor failed to correctly set account balance');
        end
        function testConstructorNotEnoughInputs(testCase)
            import matlab.unittest.constraints.Throws;
            testCase.verifyThat(@()BankAccount, ...
                Throws('BankAccount:InvalidInitialization'));
        end
        function testDeposit(testCase)
            b = BankAccount(1234, 100);
            b.deposit(25);
            testCase.verifyEqual(b.AccountBalance, 125);
        end
        function testWithdraw(testCase)
            b = BankAccount(1234, 100);
            b.withdraw(25);
            testCase.verifyEqual(b.AccountBalance, 75);
        end
        function testNotifyInsufficientFunds(testCase)
            callbackExecuted = false;
            function testCallback(~,~)
                callbackExecuted = true;
            end
            b = BankAccount(1234, 100);
            b.addlistener('InsufficientFunds', @testCallback);
            b.withdraw(50);
            testCase.assertFalse(callbackExecuted, ...
                'The callback should not have executed yet');
            b.withdraw(60);
            testCase.verifyTrue(callbackExecuted, ...
                'The listener callback should have fired');
        end
    end
end
% Generate the test suite.
import matlab.unittest.TestSuite
import matlab.unittest.TestRunner
import matlab.unittest.plugins.TestRunProgressPlugin
s1 = TestSuite.fromClass(?DocPolynomTest);
s2 = TestSuite.fromFile('axesPropertiesTest.m');
suite = [s1 s2];
% Create a silent test runner.
runner = TestRunner.withNoPlugins;
% Add a plugin to display test progress.
runner.addPlugin(TestRunProgressPlugin.withVerbosity(2))
% Run the tests using the customized runner.
result = run(runner, suite);
Setting up mlUnit
Chapter 7
FUTURE SCOPE & CONCLUSION
7.1 Future scope
Manual testing of all workflows, all fields and all negative scenarios is time- and cost-consuming.
It is difficult to test multilingual sites manually.
7.2 Conclusion
Automation Testing is the use of tools to execute test cases, whereas manual testing requires human
intervention for test execution.
Within the automotive area, very little upfront testing has been done. With the introduction of
executable modeling tools such as mlUnit, this upfront testing is more feasible. It is the job of
the tool vendors to make this testing technology available and practical to the end user.
Automation Testing saves time, cost and manpower. Once recorded, it's easier to run an
automated test suite when compared to manual testing which will require skilled labor.
Any type of application can be tested manually but automated testing is recommended only for
stable systems and is mostly used for regression testing. Also, certain testing types like ad-hoc
and monkey testing are more suited for manual execution.
Manual testing can become repetitive and boring. On the contrary, the boring part of
executing the same test cases time and again is handled by automation software in automation
testing.
REFERENCES
[1] Automated Testing tools
http://www.guru99.com/automation-testing.html
[2] Artem, M., Abrahamsson, P., & Ihme, T. (2009). Long-Term Effects of Test-Driven
Development: A case study. In: Agile Processes in Software Engineering and Extreme
Programming, 10th International Conference, XP 2009, 31, pp. 13-22. Pula, Sardinia, Italy:
Springer.
[3] Bach, J. (2000, November). Session-based test management. Software Testing and Quality
Engineering Magazine (11/2000), (http://www.satisfice.com/articles/sbtm.pdf).
[4] Bach, J. (2003). Exploratory Testing Explained, The Test Practitioner 2002,
(http://www.satisfice.com/articles/et-article.pdf).
[5] Bach, J. (2006). How to manage and measure exploratory testing. Quardev Inc.,
(http://www.quardev.com/content/whitepapers/how_measure_exploratory_testing.pdf).
[6] Basili, V., & Selby, R. (1987). Comparing the effectiveness of software testing strategies.
IEEE Trans. Software Eng., 13(12), 1278-1296.
[7] Berg, B. L. (2009). Qualitative Research Methods for the Social Sciences (7th International
Edition) (7th ed.). Boston: Pearson Education.
[8] Bernot, G., Gaudel, M. C., & Marre, B. (2007). Software testing based on formal
specifications: a theory and a tool. In: Testing Techniques in Software Engineering, Second
Pernambuco Summer School on Software Engineering. 6153, pp. 215-242. Recife:
Springer.
[9] Bertolino, A. (2007). Software Testing Research: Achievements, Challenges, Dreams. In:
International Conference on Software Engineering, ICSE 2007 (pp. 85-103). Minneapolis:
IEEE.
[10] Butts, K., et al., Automotive Powertrain Control Development Using CACSD,
Perspectives in Control: New Concepts and Applications, Tariq Samad (ed.), IEEE Press, 1999.
[11] Butts, K., Toeppe, S., Ranville, S., Specification and Testing of Automotive Powertrain
Control System Software using CACSD tools, 1998, Proceedings of the 17th AIAA/IEEE/SAE
Digital Avionics System Conference
[12] Beizer, B., Software Testing Techniques, Second Edition, International Thomson
Computer Press, 1990
[13] Causevic, A., Sundmark, D., & Punnekkat, S. (2010). An Industrial Survey on
[24] Toeppe, S., Ranville, S., Bostic, D., "Automating Software Specification, Design and
Synthesis for Computer Aided Control System Design Tools", 2000, Proceedings of the 19th
AIAA/IEEE/SAE Digital Avionics System Conf.
[25] The Mathworks, www.mathworks.com