BCA Project


Chapter 1

Introduction
1.1 Software Testing
Software testing is a process used to identify the correctness, completeness, and
quality of developed computer software. It includes a set of activities conducted
with the intent of finding errors in software so that it could be corrected before
the product is released to the end users.
In simple words, software testing is an activity to check whether the actual
results match the expected results and to ensure that the software system is
defect free.
Why is testing important?

1. On April 26, 1994, a China Airlines Airbus A300 crashed due to a software bug, killing 264 people.
2. Software bugs can potentially cause monetary and human loss; history is full of such examples.
3. In 1985, Canada's Therac-25 radiation therapy machine malfunctioned due to a software bug and delivered lethal radiation doses to patients, leaving 3 people dead and critically injuring 3 others.
4. In April 1999, a software bug caused the failure of a $1.2 billion military satellite launch, the costliest accident in history.
5. In May 1996, a software bug caused the bank accounts of 823 customers of a major U.S. bank to be credited with 920 million US dollars.
6. As these examples show, testing is important because software bugs can be expensive or even dangerous.
7. As Paul Ehrlich puts it: "To err is human, but to really foul things up you need a computer."

1.2 Manual Testing


Manual testing is the process of manually testing software for defects. It requires
a tester to play the role of an end user and use most, if not all, features of the
application to ensure correct behavior. To ensure completeness of testing, the
tester often follows a written test plan that leads them through a set of important
test cases.

1.3 Overview
A key step in the process is testing the software for correct behavior prior to
release to end users. For small-scale engineering efforts (including prototypes),

exploratory testing may be sufficient. With this informal approach, the tester
does not follow any rigorous testing procedure, but rather explores the user
interface of the application using as many of its features as possible, using
information gained in prior tests to intuitively derive additional tests. The
success of exploratory manual testing relies heavily on the domain expertise of
the tester, because a lack of knowledge will lead to incompleteness in testing.
One of the key advantages of an informal approach is to gain an intuitive insight
to how it feels to use the application.
Large scale engineering projects that rely on manual software testing follow a
more rigorous methodology in order to maximize the number of defects that can
be found. A systematic approach focuses on predetermined test cases and
generally involves the following steps.
1. Choose a high level test plan where a general methodology is chosen,
and resources such as people, computers, and software licenses are
identified and acquired.
2. Write detailed test cases, identifying clear and concise steps to be taken
by the tester, with expected outcomes.
3. Assign the test cases to testers, who manually follow the steps and record
the results.
4. Author a test report, detailing the findings of the testers. The report is
used by managers to determine whether the software can be released, and
if not, it is used by engineers to identify and correct the problems.
A rigorous test case based approach is often traditional for large software
engineering projects that follow a Waterfall model. However, at least one recent
study did not show a dramatic difference in defect detection efficiency between
exploratory testing and test case based testing.
Testing can be through black-, white- or grey-box testing. In white-box testing
the tester is concerned with the execution of the statements through the source
code. In black-box testing the software is run to check for the defects and is less
concerned with how the processing of the input is done. Black-box testers do not
have access to the source code. Grey-box testing is concerned with running the
software while having an understanding of the source code and algorithm.
Static and dynamic testing approaches may also be used. Dynamic testing involves
running the software. Static testing includes verifying requirements, syntax of
code and any other activities that do not include actually running the code of the
program.
Testing can be further divided into functional and non-functional testing. In
functional testing the tester checks calculations, links on a page, or any other
field where, for a given input, a specific output is expected. Non-functional
testing covers the performance, compatibility and fitness of the system under
test, as well as its security and usability, among other things.

1.4 Automation Testing

Automation Testing means using an automation tool to execute your test case
suite. The automation software can also enter test data into the System under
Test, compare expected and actual results and generate detailed test reports.
Test automation demands considerable investment of money and resources.
Successive development cycles require repeated execution of the same test suite.
Using a test automation tool, it is possible to record this test suite and
re-play it as required. Once the test suite is automated, no human intervention is
required, which improves the ROI of test automation.
The goal of automation is to reduce the number of test cases to be run manually,
not to eliminate manual testing altogether.
Test automation may be able to reduce or eliminate the cost of actual testing. A
computer can follow a rote sequence of steps more quickly than a person, and it
can run the tests overnight to present the results in the morning. However, the
labor that is saved in actual testing must be spent instead authoring the test
program. Depending on the type of application to be tested, and the automation
tools that are chosen, this may require more labor than a manual approach. In
addition, some testing tools present a very large amount of data, potentially
creating a time consuming task of interpreting the results.
Things such as device drivers and software libraries must be tested using test
programs. In addition, testing of large numbers of users (performance testing and
load testing) is typically simulated in software rather than performed in practice.
Conversely, graphical user interfaces whose layout changes frequently are very
difficult to test automatically. There are test frameworks that can be used for
regression testing of user interfaces. They rely on recording of sequences of
keystrokes and mouse gestures, then playing them back and observing that the
user interface responds in the same way every time. Unfortunately, these
recordings may not work properly when a button is moved or relabeled in a
subsequent release. An automatic regression test may also be fooled if the
program output varies significantly.
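One simple form of automated regression checking is to re-run the program under test and compare its output with a previously recorded "golden" result. A minimal Python sketch, with a hypothetical program and hand-recorded golden values:

```python
# Golden-output regression sketch: re-run the program under test and
# compare its output with results recorded from a known-good release.
def program_under_test(n):
    # Hypothetical program: sum of the first n squares.
    return sum(i * i for i in range(1, n + 1))

# Outputs recorded from an earlier, trusted release (hypothetical).
GOLDEN = {1: 1, 3: 14, 10: 385}

def regression_check():
    failures = [n for n, expected in GOLDEN.items()
                if program_under_test(n) != expected]
    return failures  # an empty list means no regression was detected

print("regressions:", regression_check())
```

As the paragraph above notes, such checks are brittle: if the output legitimately varies between runs (timestamps, moved or relabeled UI elements), the comparison reports spurious failures.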

1.5 Comparison between Manual testing and Automated Testing


Automation testing is the use of tools to execute test cases, whereas manual
testing requires human intervention for test execution.
Automation testing saves time, cost and manpower. Once recorded, an automated
test suite is easier to run than manual testing, which requires skilled labor.
Any type of application can be tested manually, but automated testing is
recommended only for stable systems and is mostly used for regression testing.
Also, certain testing types, such as ad-hoc and monkey testing, are better suited
to manual execution. Manual testing can become repetitive and boring; in
automation testing, the tedious part of executing the same test cases time and
again is handled by the automation software.

Manual testing: used when a test case only needs to run once or twice.
Automation testing: used when a set of test cases must be executed repeatedly.

Manual testing: very useful when executing test cases for the first time, but may not catch regression defects under frequently changing requirements.
Automation testing: very useful for catching regressions in a timely manner when the code changes frequently.

Manual testing: less reliable when executing the same test cases every time; the tester may not perform each execution with the same precision.
Automation testing: performs the same operations precisely each time.

Manual testing: simultaneous testing on different machines with different OS platform combinations is not possible; separate testers are required for such tasks.
Automation testing: can be carried out simultaneously on different machines with different OS platform combinations.

Manual testing: the tester requires the same amount of time for every execution of the test cases.
Automation testing: once the automation test suites are ready, fewer testers are required to execute the test cases.

Manual testing: no programming can be done to write sophisticated tests that fetch hidden information.
Automation testing: testers can program complicated tests to bring out hidden information.

Manual testing: slower than automation; running tests manually can be very time consuming.
Automation testing: runs test cases significantly faster than human testers.

Manual testing: requires less cost than automating the tests.
Automation testing: initial cost is higher than manual testing, but the scripts can be reused repeatedly.

Manual testing: preferable for executing UI test cases.
Automation testing: some UI test cases cannot be automated.

Manual testing: executing Build Verification Testing (BVT) manually is mundane and tiresome.
Automation testing: very useful for automating Build Verification Testing (BVT), where it is not mundane and tiresome.

Table 1: Comparison between manual and automation testing

1.6 MATLAB
MATLAB is a high-performance language for technical computing. It integrates
computation, visualization, and programming in an easy-to-use environment
where problems and solutions are expressed in familiar mathematical notation.
Typical uses include math and computation.

MATLAB (matrix laboratory) is a multi-paradigm numerical computing
environment and fourth-generation programming language. Developed by
MathWorks, MATLAB allows matrix manipulations, plotting of functions and data,
implementation of algorithms, creation of user interfaces, and interfacing with
programs written in other languages, including C, C++, Java, Fortran and
Python.
1.6.1 Syntax
The MATLAB application is built around the MATLAB language, and most use
of MATLAB involves typing MATLAB code into the Command Window (as an
interactive mathematical shell), or executing text files containing MATLAB
code, including scripts and/or functions.
1.6.2 Variables
Variables are defined using the assignment operator, =. MATLAB is a weakly
typed programming language because types are implicitly converted. It is an
inferred-typed language: variables can be assigned without declaring their type
(except when they are to be treated as symbolic objects), and their type can
change. Values can come from constants, from computation involving values
of other variables, or from the output of a function. For example:

>> x = 17
x =
    17
>> x = 'hat'
x =
hat
>> y = x + 0
y =
   104    97   116
>> x = [3*4, pi/2]
x =
   12.0000    1.5708
>> y = 3*sin(x)
y =
   -1.6097    3.0000
1.6.3 Vectors and matrices
A simple array is defined using the colon syntax: init:increment:terminator. For
instance:

>> array = 1:2:9
array =
     1     3     5     7     9

defines a variable named array (or assigns a new value to an existing variable
with the name array) which is an array consisting of the values 1, 3, 5, 7, and 9.
That is, the array starts at 1 (the init value), increments with each step from the
previous value by 2 (the increment value), and stops once it reaches (or to avoid
exceeding) 9 (the terminator value).
>> array = 1:3:9
array =
     1     4     7
The increment value can actually be left out of this syntax (along with one of the
colons), to use a default value of 1:

>> ari = 1:5
ari =
     1     2     3     4     5
assigns to the variable named ari an array with the values 1, 2, 3, 4, and 5, since
the default value of 1 is used as the increment.
Indexing is one-based, which is the usual convention for matrices in
mathematics, although not for some programming languages such as C, C++,
and Java.
Matrices can be defined by separating the elements of a row with blank space or
comma and using a semicolon to terminate each row. The list of elements should
be surrounded by square brackets: []. Parentheses: () are used to access elements
and sub arrays (they are also used to denote a function argument list).
>> A = [16 3 2 13; 5 10 11 8; 9 6 7 12; 4 15 14 1]
A=
16 3 2 13
5 10 11 8
9 6 7 12
4 15 14 1
>> A(2,3)
ans =
11
Sets of indices can be specified by expressions such as "2:4", which evaluates to
[2, 3, 4]. For example, a sub matrix taken from rows 2 through 4 and columns 3
through 4 can be written as:
>> A(2:4,3:4)
ans =
11 8
7 12
14 1

A square identity matrix of size n can be generated using the function eye, and
matrices of any size with zeros or ones can be generated with the functions zeros
and ones, respectively.
>> eye(3,3)
ans =
     1     0     0
     0     1     0
     0     0     1
>> zeros(2,3)
ans =
     0     0     0
     0     0     0
>> ones(2,3)
ans =
     1     1     1
     1     1     1
Most MATLAB functions can accept matrices and will apply themselves to each
element. For example, mod(2*J,n) will multiply every element in J by 2, and
then reduce each element modulo n. MATLAB does include standard "for" and
"while" loops, but (as in other similar applications such as R), using the
vectorized notation often produces code that is faster to execute. This code,
excerpted from the function magic.m, creates a magic square M for odd values of
n (the MATLAB function meshgrid is used here to generate square matrices I and J
containing 1:n).
[J,I] = meshgrid(1:n);
A = mod(I + J - (n + 3) / 2, n);
B = mod(I + 2 * J - 2, n);
M = n * A + B + 1;
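For comparison, the same vectorized construction can be sketched in Python with NumPy (assuming NumPy is available; `magic_odd` is our own illustrative name, not a library function):

```python
import numpy as np

def magic_odd(n):
    """Vectorized magic square for odd n, mirroring the MATLAB snippet."""
    # 1-based index grids, matching MATLAB's [J, I] = meshgrid(1:n):
    # J varies along columns, I along rows.
    J, I = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1))
    A = (I + J - (n + 3) // 2) % n
    B = (I + 2 * J - 2) % n
    return n * A + B + 1

# Rows, columns and both diagonals of magic_odd(3) all sum to 15.
print(magic_odd(3))
```

The whole square is produced by array arithmetic, with no explicit loop, which is exactly the point the paragraph above makes about vectorized notation.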
1.6.4 Structures
MATLAB has structure data types. Since all variables in MATLAB are arrays, a
more adequate name is "structure array", where each element of the array has the
same field names. In addition, MATLAB supports dynamic field names (field
look-ups by name, field manipulations, etc.). Unfortunately, the MATLAB JIT does
not support MATLAB structures, so even a simple bundling of various variables
into a structure comes at a cost.
1.6.5 Function handles
MATLAB supports elements of lambda calculus by introducing function
handles, or function references, which are implemented either in .m files or
anonymous/nested functions.

1.6.6 Classes
Although MATLAB has classes, the syntax and calling conventions are
significantly different from other languages. MATLAB has value classes and
reference classes, depending on whether the class has handle as a super-class
(for reference classes) or not (for value classes).
Method call behavior is different between value and reference classes. For
example, a call to a method
object.method();
can alter any member of object only if object is an instance of a reference class.
1.6.7 Graphics and graphical user interface programming
MATLAB supports developing applications with graphical user interface
features. MATLAB includes GUIDE (GUI development environment) for
graphically designing GUIs. It also has tightly integrated graph-plotting features.
For example the function plot can be used to produce a graph from two vectors x
and y. The code:
x = 0:pi/100:2*pi;
y = sin(x);
plot(x,y)

produces the following figure of the sine function:

Fig 1.1: Sine function


A MATLAB program can produce three-dimensional graphics using the
functions surf, plot3 or mesh.
[X,Y] = meshgrid(-10:0.25:10,-10:0.25:10);
f = sinc(sqrt((X/pi).^2+(Y/pi).^2));
mesh(X,Y,f);
axis([-10 10 -10 10 -0.3 1])
xlabel('{\bfx}')
ylabel('{\bfy}')
zlabel('{\bfsinc} ({\bfR})')
hidden off
This code produces a wireframe 3D plot of the two-dimensional unnormalized sinc function:

Fig 1.2: Wireframe 3D plot

[X,Y] = meshgrid(-10:0.25:10,-10:0.25:10);
f = sinc(sqrt((X/pi).^2+(Y/pi).^2));
surf(X,Y,f);
axis([-10 10 -10 10 -0.3 1])
xlabel('{\bfx}')
ylabel('{\bfy}')
zlabel('{\bfsinc} ({\bfR})')

This code produces a surface 3D plot of the two-dimensional unnormalized sinc function:

Fig 1.3: Surface 3D plot

Chapter 2
Objective & Methodology
2.1 Objective
The objective of unit testing is to isolate each part of the program and show that
the individual parts are correct. A unit test provides a strict, written contract that
the piece of code must satisfy. As a result, it affords several benefits.
2.1.1 Test-Driven Development
Unit testing finds problems early in the development cycle.
In test-driven development (TDD), which is frequently used in both Extreme
Programming and Scrum, unit tests are created before the code itself is written.
When the tests pass, that code is considered complete. The same unit tests are
run against that function frequently as the larger code base is developed either as
the code is changed or via an automated process with the build. If the unit tests
fail, it is considered to be a bug either in the changed code or the tests
themselves.
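The test-first cycle can be sketched with Python's built-in unittest module; the `add` function and test names here are invented for illustration:

```python
import unittest

# Step 1 (test first): specify the expected behaviour before the code exists.
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_identity_with_zero(self):
        self.assertEqual(add(7, 0), 7)

# Step 2 (make it pass): the simplest implementation satisfying the tests.
def add(a, b):
    return a + b

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
    unittest.TextTestRunner(verbosity=2).run(suite)
```

When all tests pass, the code is considered complete; the same suite is then re-run whenever `add` (or code around it) changes, and a failure signals a bug in either the change or the tests.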
2.1.2 Facilitates change
Readily available unit tests make it easy for the programmer to check whether a
piece of code is still working properly. In continuous unit testing environments,
through the inherent practice of sustained maintenance, unit tests will continue to
accurately reflect the intended use of the executable and code in the face of any
change. Depending upon established development practices and unit test
coverage, up-to-the-second accuracy can be maintained.
2.1.3 Simplifies integration
Unit testing may reduce uncertainty in the units themselves and can be used in a
bottom-up testing style approach. By testing the parts of a program first and then
testing the sum of its parts, integration testing becomes much easier. An
elaborate hierarchy of unit tests does not equal integration testing. Integration
with peripheral units should be included in integration tests, but not in unit tests.
2.1.4 Documentation
Unit testing provides a sort of living documentation of the system. Developers
looking to learn what functionality is provided by a unit, and how to use it, can
look at the unit tests to gain a basic understanding of the unit's interface (API).
2.1.5 Design
When software is developed using a test-driven approach, the combination of
writing the unit test to specify the interface plus the refactoring activities
performed after the test is passing, may take the place of formal design. Each

unit test can be seen as a design element specifying classes, methods, and
observable behaviour.

2.2 Methodology and Planning of work


The following steps are followed in an Automation Process:

Fig 2.1: Automation process


2.2.1 Test tool selection
Test tool selection largely depends on the technology the Application Under Test
(AUT) is built on. For instance, QTP does not support Informatica, so QTP cannot
be used for testing Informatica applications. It is a good idea to conduct a
Proof of Concept of the tool on the AUT.
2.2.2 Define the scope of Automation
The scope of automation is the area of your Application Under Test which will be
automated. The following points help determine the scope:

- Features that are important for the business
- Scenarios which have a large amount of data
- Common functionalities across applications
- Technical feasibility
- The extent to which business components are reused
- Complexity of test cases
- Ability to use the same test cases for cross-browser testing

2.2.3 Planning, Design and Development
During this phase you create the automation strategy and plan, which contains the
following details:

- Automation tools selected
- Framework design and its features
- In-scope and out-of-scope items of automation
- Automation test bed preparation
- Schedule and timeline of scripting and execution
- Deliverables of automation testing

2.2.4 Test Execution


Automation scripts are executed during this phase. The scripts need input test
data before they are set to run. Once executed, they provide detailed test
reports. Execution can be performed using the automation tool directly or
through a Test Management tool which will invoke the automation tool.
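Feeding input test data to a script and collecting a result report can be sketched as follows; the login function and the data rows are hypothetical:

```python
# Data-driven execution sketch: one script is fed rows of input test
# data and produces a small pass/fail report.
def login_allowed(username, password):
    # Hypothetical system under test.
    return username == "admin" and password == "secret"

TEST_DATA = [
    # (username, password, expected result)
    ("admin", "secret", True),
    ("admin", "wrong", False),
    ("guest", "secret", False),
]

def run_suite():
    report = []
    for username, password, expected in TEST_DATA:
        outcome = "PASS" if login_allowed(username, password) == expected else "FAIL"
        report.append((username, outcome))
    return report

for name, outcome in run_suite():
    print(name, outcome)
```

In a real setup the rows would come from an external data source and the report would be written out by the tool, but the execute-and-report structure is the same.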
2.2.5 Maintenance
As new functionalities are added to the System Under Test with successive
cycles, Automation Scripts need to be added, reviewed and maintained for each
release cycle. Maintenance becomes necessary to improve effectiveness of
Automation Scripts.

CHAPTER-3
LITERATURE REVIEW
3.1 HISTORICAL BACKGROUND
For the quality of the test, the design of test cases is important. A large
number of test methods have been developed to support the developer when
choosing appropriate test data. Some useful testing methods are structural testing
methods, functional testing methods and statistical testing methods [81]. It is very
difficult to develop correct, good and unique test cases manually. Therefore,
automation of test case generation is important. The success of a test data
generation method largely depends upon the efficiency of its search technique.
Different researchers have worked on test case automation from time to time with
the aim of increasing the quality of the tests and achieving substantial cost
savings in system development by means of a higher degree of automation.
One critical task in software testing is the creation of test data to satisfy a
given test-coverage criterion. This process is called Test Data Generation.
Developments in the field of automated test data generation were initiated in the
early 70s, when papers on testing large software with automated software
evaluation systems by Ramamoorthy, in 1976 [6], and Holland, in 1975 [8], and
Automatic Generation of Floating-Point Test Data by Miller and Spooner, in 1976
[10], were published. Nevertheless, work done by Clarke in 1976 [5] is considered
to be the first of its kind to produce a solid algorithm for Automatic Test Data
Generation (ATDG).

Various mechanisms exist to contextualize complex testing problems with
respect to existing literature. Problem classification is an important prerequisite
to the selection of a suitable solution strategy, since information regarding
problem complexity and existing algorithms provides useful points of departure
for new algorithm development. Automatic test data generation for software
testing with minimum time and cost is known to be NP-hard, and only exhaustive
search guarantees optimal solutions; but exhaustive search can become
prohibitively expensive to compute even for small problems. Several methods
have been used to solve this combinatorial optimization problem, but each of them
has its own limitations and advantages. Some useful existing optimization
techniques for solving software testing problems have been surveyed. Some of the
important literature on test case generation for software testing is presented
here, covering techniques ranging from traditional exact methods to modern
metaheuristic methods.
3.2 SOFTWARE COVERAGE ANALYSIS TECHNIQUES
A number of test-data generation techniques have been developed for
coverage of software under test. Each one of them uses different kinds or variations
of existing testing techniques [23, 34, 83, 84]. Test adequacy criterion usually
involves coverage analysis, which can be measured based on different aspects of
software like statements, branches, paths and all-uses [53].
In statement testing, each and every statement of the program under test has to
be executed at least once during testing. The main drawback of statement testing
is that even a very high level of statement coverage does not mean that the
program is error free [34].
Branch coverage is a stronger testing criterion than statement coverage [39]. For
branch coverage, each and every branch has to be executed at least once during
testing; all control transfers are exercised [39]. However, some errors can only
be detected if the statements and branches are executed in a certain order [34].
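The gap between statement coverage and branch coverage can be seen on a tiny function (a Python sketch; the function is invented for this purpose):

```python
def classify(x):
    result = "non-negative"
    if x < 0:
        result = "negative"
    return result

# The single test classify(-1) executes every statement (100% statement
# coverage), yet it only takes the true branch of the if. The false branch
# (the if being skipped) is never exercised, so branch coverage is not met
# until a second input such as classify(1) is added.
assert classify(-1) == "negative"
assert classify(1) == "non-negative"
```

This is why branch coverage is the stronger criterion: every test set achieving it also achieves statement coverage, but not vice versa.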

Path testing searches the program domain for suitable test cases that cover
every possible path in the Software Under Test (SUT) [53]. It is a stronger
criterion than statement and branch coverage [37]. It pushes code coverage
further and hence increases the chances of error detection [34]. However, it is
generally impossible to achieve this goal, for several reasons. First, a program
may contain an infinite number of paths when the program has loops [23, 34, 53,
85]. Second, the number of paths in a program is exponential in the number of
branches in it [83, 86], and many of those paths may be infeasible. Third, the
number of test cases is too large, since each path can be covered by several test
cases. For these reasons, path testing can become an NP-complete problem [83],
making the covering of all possible paths computationally impractical. Since it
is impossible to cover all paths in software, path testing selects a subset of
paths to execute and finds test data to cover it.

Frankl, in 1988 [113], uses the all-uses criterion in her paper An Applicable
Family of Data Flow Testing Criteria. This is a stronger criterion than the ones
already discussed. It focuses on all p-uses and c-uses of each and every
variable, hence covering each path and branch of the software under test.
Girgis [13] has proposed a technique that uses a GA guided by the data flow
dependencies in the program to search for test data fulfilling a data flow path
selection criterion, namely the all-uses criterion. Data-flow testing is important
because it augments control-flow testing criteria and concentrates on how a
variable is defined and used, which can lead to more efficient and targeted test
suites. Girgis used as the fitness function the ratio of the number of def-use
paths covered by a test case to the total number of def-use paths. This technique
cannot measure the closeness of the test cases, because the fitness function gives
the same value for all test cases that cover the same number of def-use paths and
0 for all test cases that do not cover any def-use path. This results in a loss of
valuable information (test data that contains good genes) when test cases that
cover only the use node are ignored [25].

3.3 TESTING TYPES AND APPROACHES


Various test data generation methods have been proposed in the literature.
These methods can be classified as Static methods, Dynamic methods, functional
methods, random test data generators, symbolic evaluators and function
minimization methods [87].
3.3.1 Static Testing
Static methods never require the execution of code on a computer but involve the
tester going through the code to find errors. The first automatic test generation
approach, proposed by Clarke in 1976, was static and based on symbolic execution
[5]. Symbolic execution methods are static in the sense that they analyze a
program to obtain a set of symbolic representations of each condition predicate
along a selected path. The expressions are obtained by attributing symbolic
values to the input variables. If the predicates are linear, then the solution
can be obtained using linear programming [88]. Symbolic test data generation
techniques assign symbolic values to variables to create algebraic expressions
for the constraints in the program, and use a constraint solver to find a
solution for these expressions that satisfies a test requirement [7, 11].
Symbolic execution cannot determine which of the potential symbolic values will
be used for an array or pointer, and it cannot find floating-point inputs when
the constraint solver in use cannot solve floating-point constraints.
Constraint-Based Testing builds up constraint systems which describe the given
test goal; the solution to this constraint system brings about satisfaction of
the goal. The original purpose of Constraint-Based Testing was to generate test
data for mutation testing. Reachability constraints within the constraint system
describe conditions under which a particular statement will be reached. Necessity
constraints describe the conditions under which a mutant will be killed. With
Constraint-Based Testing, constraints must be computed before they are analysed.
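As a rough illustration of the idea (not of any particular tool), the following Python sketch hand-collects the constraints for one target path of a hypothetical program and "solves" them by a naive search over a small input domain, standing in for a real constraint solver:

```python
# Constraint-based test data generation sketch for one target path.
# Reaching the "target" return requires the path condition
# (x > 5) and (x + y == 10).
def program(x, y):
    if x > 5:
        if x + y == 10:
            return "target"
    return "other"

def solve_path_constraints(domain=range(-20, 21)):
    """Naive 'constraint solving': search a small input domain for
    values satisfying the collected reachability constraints."""
    for x in domain:
        for y in domain:
            if x > 5 and x + y == 10:  # the collected path constraints
                return x, y
    return None

x, y = solve_path_constraints()
assert program(x, y) == "target"
print("test data:", x, y)
```

A real constraint solver reasons about the algebraic form of the predicates instead of enumerating the domain, which is what makes the approach workable on larger inputs.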
Another Constraint-Based Testing technique, invented by DeMillo and Offutt [2]
and based on symbolic execution, develops the constraints in terms of the input
variables and is called Domain Reduction. Domain Reduction is then used to
attempt a solution to the constraints. The first step of this technique is to
find the domain of each input variable, derived from type or specification
information or supplied by the tester. The domains are then reduced using
information in the constraints, beginning with those involving a relational
operator, a variable and a constant, followed by constraints involving a
relational operator and two variables. This helps in reducing the search space
(input domain) for solving a constraint system. Remaining constraints are then
simplified by back-substituting values. Although efforts were made to improve
the performance of algorithmic search methods by employing techniques such as
identification of undesirable variables, finding an optimum order of
consideration of input variables, use of a binary search algorithm and
expression handling, these required a great deal of manual and time-consuming
analysis. This makes algorithmic search methods very slow and ineffective. These
algorithms also lack global search capabilities, which are a necessary
requirement for software testing, where objective functions are very complex and
usually non-linear [82]. Since these constraints are derived using symbolic
execution, the method suffers from similar problems involving loops, procedure
calls and computed storage locations [88].
To overcome the limitations of the Domain Reduction method, another method
called Dynamic Domain Reduction was introduced by Offutt in 1997 [11]. Dynamic
Domain Reduction also starts with the domains of the input variables, like
Domain Reduction, but these domains are reduced dynamically during the symbolic
execution stage, using constraints composed from branch predicates encountered
as the path is followed. If a branch predicate involves a variable comparison,
the domains of the input variables responsible for the outcome at the decision
are split at some arbitrary split point rather than assigning random input
values. Dynamic Domain Reduction still suffers from difficulties due to computed
storage locations and loops. Furthermore, it is not clear how domain reduction
techniques handle non-ordinal variable types such as enumerations [88].
3.3.2 Dynamic Testing
Unlike static testing, dynamic methods require the execution of code: the test cases are run on the code of the software product under test with the help of a computer. Since array subscripts and pointer values are known at run time, many of the problems associated with symbolic execution can be avoided by dynamic methods, which is not possible with static testing. Dynamic test data generation techniques collect information during the execution of the program to determine which test cases come closest to satisfying the requirement; test inputs are then incrementally modified until one of them satisfies the requirement [3, 9].
Random test-data generation techniques select inputs at random until useful inputs are found [14]. In random testing, random values are generated from the domains of the inputs and the program is executed using these values; if the inputs satisfy the testing criterion, they form a test case [82]. This technique may fail to find test data satisfying the requirements because information about the test requirements is not incorporated into the generation process [25]. J. W. Duran and S. Ntafos in 1984 [64] reported random testing to be satisfactory for small as well as large programs. Thayer and others [89] used it to measure the reliability of a system, and DeMillo and others [90] used random testing for identifying seeded faults in programs.
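The random strategy described above can be sketched in a few lines. The toy program, criterion and input domain below are hypothetical illustrations, not taken from the surveyed work:

```python
import random

def random_test_generation(satisfies, domains, budget=1000):
    """Draw candidate inputs at random from the input domains until one
    satisfies the testing criterion or the budget is exhausted."""
    for _ in range(budget):
        candidate = {name: random.uniform(lo, hi)
                     for name, (lo, hi) in domains.items()}
        if satisfies(candidate):
            return candidate      # the candidate becomes a test case
    return None                   # criterion not covered within the budget

# Hypothetical criterion: cover the branch `if x > 100` of a toy program.
test_case = random_test_generation(lambda c: c["x"] > 100,
                                   {"x": (0.0, 200.0)})
```

For this easy criterion a random draw succeeds quickly; for a narrow branch condition (say `x == 137.25`) the same loop would almost certainly exhaust its budget, which is exactly the weakness noted above.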
Mayer and Schneckenburger [91] empirically investigated different flavors of
adaptive random testing, concluding that distance-based random testing and restricted random testing are the best methods in this class. The random approach is quick and simple, but it is a poor choice for complex programs and complex adequacy criteria, since the probability of selecting an adequate input by chance can be low. The biggest issue for the random approach is adequate test data selection; Myers [37] viewed random testing as the worst case of program testing.
The use of actual executions of the program with a search technique was first studied by Miller and Spooner [30]. Their technique was originally designed for the generation of floating-point test data, but the principles are more widely applicable. The tester selects a path through the program and then produces a straight-line version of it, containing only that path. Korel suggested a dynamic approach to
automatic test data generation using function minimization and directed search [32]. In
this work, the test data generation procedure worked on an instrumented version of the
original program without the need for a straight-line version to be produced. The search
targeted the satisfaction of each branch predicate along the path in turn, circumventing
issues encountered by the work of Miller and Spooner. Here an exploratory search is performed, in which the selected input variables are modified by a small amount and the result is submitted to the program. Korel [32] used the alternating variable method for his dynamic test data generator. The method works in two phases. First, an input variable is selected and its value is changed in small steps to find the direction in which the variable minimizes the branch function; this is called the exploratory search. Once the direction of search is known, a pattern search is taken in large steps to find the value of the variable under consideration that satisfies or minimizes the branch function. If the selected value of the variable fails to decrease the branch function, the steps of the pattern search are successively decreased before other variables are explored for minimization. Gallagher and Narasimhan [95] built on Korel's work for programs written in Ada. In particular, this was the first work to record support for the use of logical connectives within branch predicates. Dynamic techniques can stall when they encounter local minima because they depend on local search techniques such as gradient descent [82].
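Korel's two-phase search can be illustrated with a small sketch. This is a simplified reconstruction, not Korel's actual implementation; the branch function used in the example is the hypothetical branch distance |x0 - 37| for a predicate `x0 == 37`:

```python
def alternating_variable_method(branch_fn, x, max_iter=100):
    """Minimise a branch (distance) function one input variable at a time:
    exploratory moves of +/-1 find a promising direction, then pattern
    moves continue in that direction with a doubling step size."""
    x = list(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            if branch_fn(x) <= 0:            # branch predicate satisfied
                return x
            base = branch_fn(x)
            for d in (+1, -1):               # exploratory search
                trial = list(x)
                trial[i] += d
                if branch_fn(trial) < base:
                    step = d                 # pattern search: larger steps
                    while True:
                        nxt = list(x)
                        nxt[i] += step
                        if branch_fn(nxt) < branch_fn(x):
                            x, step = nxt, step * 2
                        else:
                            break
                    improved = True
                    break
        if not improved:                     # stuck at a local minimum
            break
    return x

# Hypothetical branch distance for the predicate `x0 == 37`:
result = alternating_variable_method(lambda v: abs(v[0] - 37), [0, 0])
```

On this convex branch distance the search reaches the satisfying value; on a multimodal branch function the same loop would terminate at a local minimum, the stalling problem noted above.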
Korel, in 1992 [95], was the first to use the concept of the goal-oriented approach. Goal-oriented techniques identify test data covering a selected goal, such as a statement or a branch, irrespective of the path taken [23]. This approach involves two basic steps: identify a set of statements (respectively, branches) whose coverage implies covering the criterion; then generate input test data that execute every selected statement (respectively, branch) [96]. Two typical approaches, the assertion-based and the chaining approach, are known as goal-oriented. In the first case, assertions are inserted and then solved. In the chaining approach, data dependence analysis is carried out. It
uses the concept of an event sequence as an intermediate means of deciding the type
of path required for execution up to the target node [97, 99]. An event sequence is
basically a succession of program nodes that are to be executed. The initial event
sequence consists of just the start node and target node. Extra nodes are then
inserted into this event sequence when the test data search encounters difficulties.
Generally the goal-oriented approach faces issues of goal selection and selection of
adequate test data [98].
3.3.3 Functional Testing
Functional testing is also called specification-based or Black Box Testing. Testers who want to test functional requirements may use the black-box technique; in contrast to dynamic function-minimization methods, which are based on program execution, black box testing needs no knowledge of how the software is programmed. It generates test data for software from its specification, without considering the behavior of the program under test. Testers inject test data to execute the program and then compare the actual result with the specified test oracle. Test engineers engaged in black box testing know only the sets of inputs and expected outputs, and are unaware of how those inputs are transformed into outputs by the software. Black box testing requires functional knowledge of the product to be tested [1, 9] and helps in verifying the overall functionality of the system under test.
Syntax-based testing involves boundary value analysis, partition analysis, domain testing, equivalence partitioning, domain partitioning and functional analysis [100, 101, 102, 87]. Hoffman in 1999 [100] presented a technique based on boundary value analysis in which the relationship between a generalized approach to boundary values and statement coverage is explored. Jeng in 1999 [101] presented a technique mainly related to domain testing, combining the static approach with a dynamic search method. In 1997, Gallagher and Lakshmi Narasimhan [102] proposed a method for locating intersections of input domain boundaries and generating ON/OFF test data points.
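Boundary value analysis, for instance, can be mechanized in a few lines. The seven-point scheme below (min-1, min, min+1, nominal, max-1, max, max+1) is one common textbook variant, used here purely as an illustration:

```python
def boundary_values(lo, hi):
    """Classic boundary value analysis for an integer input range [lo, hi]:
    test at, just inside, and just outside each boundary, plus a nominal
    mid-range value."""
    return sorted({lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1})

# For an input specified to lie in 1..100:
cases = boundary_values(1, 100)   # [0, 1, 2, 50, 99, 100, 101]
```

The out-of-range values 0 and 101 exercise the error-handling side of each boundary, which is where off-by-one defects typically hide.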

3.4 METAHEURISTICS
Metaheuristics are general heuristic methods that guide the search through the solution space, usually using some form of local search as a subordinate heuristic. Starting from an initial solution built by some heuristic, metaheuristics improve it iteratively until a stopping criterion is met. The stopping criterion can be elapsed time, number of iterations, number of evaluations of the objective function, and so on [41]. Voss in 1999 [105] described a metaheuristic as an "iterative master process that guides and modifies the operations of subordinate heuristics to efficiently produce high-quality solutions".
The most successful class of search algorithms is based on metaheuristic techniques like Hill Climbing (HC), Tabu Search (TS), Simulated Annealing (SA), Genetic Algorithm (GA), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), Cat Intelligence, etc. McMinn [88] has provided a detailed and up-to-date survey on the use of metaheuristic techniques for software testing. Several metaheuristics have been suggested for path coverage [83, 85], statement coverage [23] and branch coverage [23, 34]. In such cases the use of metaheuristics is very helpful in providing usable results in a reasonable time.
3.4.1 Hill Climbing (HC)
Hill Climbing is a local search algorithm. Starting from a solution created at random or by some problem-specific heuristic, standard local search tries to improve on it by iteratively deriving a similar solution in the neighbourhood of the so-far best solution. A move operator, which must be carefully defined according to the problem, is responsible for finding a neighbouring solution. This progressive improvement is likened to the climbing of hills in the landscape of a maximising objective function [88]. Restart hill climbing applies standard local search multiple times from different starting solutions and returns the best local optimum identified [41]. The major disadvantage of standard local search is its high probability of getting trapped at a poor local optimum.
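The procedure can be sketched as follows; the integer objective and neighbourhood used in the example are hypothetical:

```python
import random

def hill_climb(objective, start, neighbours):
    """Standard local search: repeatedly move to the best strictly
    improving neighbour; stop at a local optimum."""
    current = start
    while True:
        best = min(neighbours(current), key=objective)
        if objective(best) >= objective(current):
            return current            # local optimum reached
        current = best

def restart_hill_climb(objective, random_start, neighbours, restarts=20):
    """Run local search from several random starting solutions and keep
    the best local optimum found, reducing the risk of a poor optimum."""
    return min((hill_climb(objective, random_start(), neighbours)
                for _ in range(restarts)), key=objective)

# Hypothetical objective: minimise (x - 7)^2 over the integers.
best = restart_hill_climb(lambda x: (x - 7) ** 2,
                          lambda: random.randint(-100, 100),
                          lambda x: [x - 1, x + 1])
```

On this single-basin objective every restart reaches the same optimum; restarts only pay off when the landscape has several basins of attraction.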

3.4.2 Tabu Search (TS)


Diaz [104] developed a tabu search based test generator that used the program control flow graph for branch coverage. It maintains a search list, also called the tabu list. The strategy extends local search by the introduction of memory: stagnation at a local optimum is avoided by maintaining a data structure called the history, in which the last created solutions, or alternatively the last moves (i.e., changes from one candidate solution to the next), are stored. These solutions, respectively moves, are forbidden (tabu) in the next iterations, and the algorithm is forced to approach unexplored areas of the search space. It uses neighbourhood information and backtracking to escape local optima. Diaz defined two cost functions for intensifying and diversifying the search mechanism; these are similar to the functions used by Wegener in 2002 [81], in which individuals are penalised for taking the wrong path while executing the program. The penalty is fixed on the basis of the error value produced by an individual in the effort of satisfying a branch constraint.
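A minimal sketch of the tabu mechanism follows; the integer objective and neighbourhood are hypothetical stand-ins (Diaz's actual generator works on the control flow graph):

```python
from collections import deque

def tabu_search(objective, start, neighbours, tabu_size=10, max_iter=100):
    """Local search with short-term memory: recently visited solutions are
    tabu, so the search is forced into unexplored areas even when that
    means accepting a worse move."""
    current = best = start
    tabu = deque([start], maxlen=tabu_size)   # the 'history' structure
    for _ in range(max_iter):
        candidates = [n for n in neighbours(current) if n not in tabu]
        if not candidates:
            break                             # every move is tabu
        current = min(candidates, key=objective)  # may be worse than before
        tabu.append(current)
        if objective(current) < objective(best):
            best = current
    return best

# Hypothetical objective: minimise (x - 5)^2 over the integers, from 0.
best = tabu_search(lambda x: (x - 5) ** 2, 0, lambda x: [x - 1, x + 1])
```

Because the tabu list forbids immediately revisiting a point, the search keeps moving after reaching the optimum; the separately tracked `best` preserves the best solution seen.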
3.4.3 Simulated Annealing (SA)
Another way of enabling local search to escape from local optima and approach new areas of attraction in the search space is to sometimes accept worse neighbouring solutions as well. Simulated annealing does this in a probabilistic way.
Simulated Annealing (SA) algorithms, based on the analogy of the annealing process of metals, were proposed by Metropolis [106] in 1953 and were first applied to combinatorial optimization problems by Kirkpatrick in 1983 [107]. SA is considered an improvement heuristic, in which a given initial solution is iteratively improved upon. As a metaheuristic for test case generation, the cooling of a material simulates the change in energy level over time or iterations, and the steady state in energy symbolizes the convergence of the solution. At the beginning of the optimization, worse solutions are accepted with a relatively high probability, and this probability is reduced over time in order to achieve convergence. A number of researchers have applied SA to testing problems. Tracey [108, 109] constructed an SA-based test data generator for safety-critical systems, using a hybrid objective function that combines branch distance and the number of executed control-dependent nodes. N. Mansour [83] in 2004 reported that GA is faster than SA for generating test cases.
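The probabilistic acceptance rule is the heart of the method. A generic sketch follows, with a hypothetical objective; the exponential acceptance test and geometric cooling schedule are standard choices, not specific to the surveyed generators:

```python
import math
import random

def simulated_annealing(objective, start, neighbour,
                        t0=100.0, cooling=0.95, steps=2000):
    """Accept a worse neighbour with probability exp(-delta/T); the
    temperature T is lowered each step, so acceptance of worse moves
    becomes rarer as the search 'cools' towards convergence."""
    current = best = start
    t = t0
    for _ in range(steps):
        cand = neighbour(current)
        delta = objective(cand) - objective(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = cand                    # probabilistic acceptance
        if objective(current) < objective(best):
            best = current
        t *= cooling                          # geometric cooling schedule
    return best

# Hypothetical objective: minimise (x - 3)^2 with a +/-1 random walk.
best = simulated_annealing(lambda x: (x - 3) ** 2, 50,
                           lambda x: x + random.choice((-1, 1)))
```

At high temperature the loop behaves almost like a random walk; as `t` shrinks it degenerates into pure hill climbing, which is exactly the escape-then-converge behaviour described above.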
3.4.4 Genetic Algorithms (GAs)
GA is one of the most popular and intensively pursued techniques for software
testing. The GA is a global search metaheuristic proposed originally by Holland [8] in
1975. Extensive work has been done on the development of the original algorithm in the last 20 years, and it has been applied successfully in many fields of science and engineering [92, 93]. The GA is based on the principles of Darwin's theory of natural evolution and belongs to a more general category, the Evolutionary Algorithms (EAs).
Recently, test-data generation techniques based on genetic algorithms (GAs) have been developed [13, 24, 23, 81, 84, 103]. Whereas previous techniques may not be useful in practice, techniques based on GAs have the potential to be used for real systems. Xanthakis [103] first applied GAs to automatic test case generation. Pargas et al. [23] presented a genetic algorithm directed by the control-dependence graph of the program under test to search for test data satisfying the all-nodes and all-branches criteria. Wegener [18] logarithmized the objective function to provide better guidance for his GA-based test case generator, presenting a test environment for automatic generation of test data for statement and branch testing. These techniques evolve a set of test data using genetic operations (selection and recombination) to find the required test data. Michael et al. [24] used GAs for automatic test-data generation to satisfy the condition-decision test-coverage criterion, proposing a GA-based test generation system called the Genetic Algorithm Data Generation Tool (GADGET) to generate test cases for large C and C++ programs using condition-decision coverage metrics.
Watkins [84] and Roper [38] used coverage-based criteria for assessing the fitness of individuals in their GA-based test generators. Lin and Yeh [85] used a Hamming-distance-based metric in the objective function of their GA program to measure the similarity and distance between the actual path and the selected target path in dynamic testing. Bouchachia [116] incorporated immune operators into a genetic algorithm to generate software test data for condition coverage. GAs have started getting competition from other heuristic search techniques such as Particle Swarm Optimization: various works [16-20] show that particle swarm optimization is equally well suited or even better than genetic algorithms for solving a number of test problems [21].
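A minimal sketch of the GA machinery applied to branch coverage follows. The 8-bit toy program and its fitness function are hypothetical illustrations; real tools such as GADGET are far more elaborate:

```python
import random

def genetic_algorithm(fitness, genes=8, pop_size=40, generations=60):
    """Minimal generational GA: tournament selection, single-point
    crossover and bit-flip mutation over fixed-length bit strings."""
    pop = [[random.randint(0, 1) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1 = max(random.sample(pop, 3), key=fitness)  # tournament
            p2 = max(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, genes)              # 1-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.1:                     # bit-flip mutation
                j = random.randrange(genes)
                child[j] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Hypothetical objective: evolve an 8-bit input x that takes the branch
# `if x == 170`; fitness is the negated branch distance -|x - 170|.
def fitness(bits):
    x = int("".join(map(str, bits)), 2)
    return -abs(x - 170)

best = genetic_algorithm(fitness)
```

The branch-distance fitness gives the search a gradient towards the target value, which is what distinguishes GA-based generation from blind random testing.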
3.4.5 Particle Swarm Optimization (PSO)
PSO has been applied successfully to a wide variety of search and optimization problems [16-20, 110, 111]. It is motivated by the simulation of social behavior [114]. PSO, proposed by Kennedy and Eberhart in 1995 [16], is commonly used to solve nonlinear optimization problems through coordination between individuals to achieve population convergence. Windisch [17] reported the application of this swarm-intelligence-based technique to test data generation for dynamic testing, conducting experiments to demonstrate the usefulness of the search algorithm for test case generation. Compared with GA, PSO has some attractive characteristics. It has memory, so knowledge of good solutions is retained by all particles, whereas in GA previous knowledge of the problem is destroyed once the population changes. There is constructive cooperation between particles: particles in the swarm share information among themselves. The individuals in PSO update themselves using the best value of their own history and the best value of the whole population, and finally the entire population converges to the global optimum [115]. The research work of different researchers over time [16-20] shows that PSO is a better alternative compared to GAs for the generation of test cases.
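The update rules described above can be sketched as follows. This is standard inertia-weight PSO on a hypothetical sphere objective; the parameter values w, c1 and c2 are conventional choices, not taken from the surveyed work:

```python
import random

def pso(objective, dim=2, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Each particle keeps its personal best (pbest) and the swarm shares
    a global best (gbest); velocities are pulled towards both, so
    knowledge of good solutions is retained by the whole swarm."""
    pos = [[random.uniform(-10, 10) for _ in range(dim)]
           for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)[:]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

# Hypothetical objective: the sphere function; the swarm converges
# towards the origin.
best = pso(lambda v: sum(x * x for x in v))
```

The cognitive term (pbest) and the social term (gbest) embody the "memory" and "cooperation" properties contrasted with GA in the text above.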
3.4.6 Ant Colony Optimization (ACO)
ACO has been applied in the area of software testing since 2003 [80, 117].
Boerner and Gutjahr [80] described an approach involving ACO and a Markov
software usage model for deriving a set of test paths for a software system. McMinn
and Holcombe [117] presented ACO as a supplementary optimization stage for
finding sequences of transitional statements in generating test data for evolutionary
testing. H. Li and C. P. Lam [79, 118] proposed an ACO approach to test data
generation for the state-based software testing. Ayari et al. [119] proposed an
approach based on Ant Colony to reduce the cost of test data generation in the
context of mutation testing. Srivastava and Rai [120] proposed an ant colony
optimization based approach to test sequence generation for control-flow based
software testing. K. Li et al. [121] present a model for generating test data based on
an improved ant colony optimization and path coverage criteria. P. R. Srivastava et
al. [122] made an algorithm with the help of an ACO for the optimal path
identification by using the basic property and behavior of the ants. This ACO based
approach is enhanced by a probability density estimation technique in order to better
guide the search for continuous input parameters.

3.4.7 Hybrid Metaheuristics


Hybridization of evolutionary algorithms with local search has been investigated in many studies [124-126]. Such a hybrid is often referred to as a memetic algorithm [127-129]. Talbi [123] gave a classification framework and taxonomy of hybrid metaheuristics.
L. Wang and D. Z. Zheng [135] presented a hybrid approach that combined a Genetic Algorithm with a local optimization technique for simulation optimization problems. Combining genetic algorithms with a local optimization method makes maximal use of the good global properties of random search and the convergence rate of a local method. Their study considers the
sampling procedure based on orthogonal design and quantization technology, the use
of orthogonal Genetic Algorithm with quantization for the global exploration and the
application of local optimization technique for local exploitation. The final
experimental results demonstrated that the proposed approach can find optimal or
close-to-optimal solutions and is superior to other recent algorithms in simulation
optimization.
D. Kusum, D. K. Nath [133] presented a Hybrid Binary Coded Genetic
Algorithm (HBGA) for constrained optimization. They called it HBGA-C. It is based on
the features of Hybrid Binary Coded Genetic Algorithms. The aim was to implement
constraint handling technique to HBGA. It was easy to implement and it also provided
feasible and better solutions with a fewer number of function evaluations. It was
compared with Constrained Binary GA (BGA-C) by incorporating the constraint
handling technique on BGA that used Roulette wheel selection and single point
crossover. Their comparative performance was tested on a set of twenty-five constrained benchmark problems, and the results showed better performance.
Y. R. Ali, O. Nursel, K. Necmettin and O. Ferruh [131] describe a new hybrid
approach, which deals with the improvement of shape optimization process. The
objective is to contribute to the development of more efficient shape optimization
approaches in an integrated optimal topology and shape optimization area with the help
of GA and robustness issues. An improved GA is introduced to solve multi-objective
shape design optimization problems. The specific issue is to overcome the limitations
caused by a larger population of solutions in the pure multi-objective genetic algorithm. The combination of a genetic algorithm with robust parameter design through a smaller population of individuals results in a solution that leads to better
parameter values for design optimization problems. The effectiveness of the
proposed hybrid approach is illustrated and evaluated with test problems. It shows
that the proposed approach can be used as first stage in other multi-objective GA to
enhance the performance of GA. Finally, the shape optimization is applied for
solving multi-objective shape design optimization problems.
The social foraging behavior of bacteria has been used to solve optimization
problems [130]. V. K. D. Hwa, A. Ajith and C. J. Hoon proposed a hybrid approach
involving GA and Bacterial Foraging (BF) algorithms for function optimization
problems. The algorithm emphasises mutation, crossover, variation of step sizes, and the lifetime of the bacteria. The algorithm was then used to tune a PID controller of an Automatic Voltage Regulator (AVR). Simulation results show its efficiency, and it could easily be extended to other global optimization problems.
Devraj [144] presented a GA with adaptive mutation based on non-revisiting, in which duplicate individuals are removed. Instead of a simple GA, in which the same individuals are generated again and again, a clear waste of time and computational resources, an improved GA was suggested. The proposed GA is flexible, working with any function with any number of variables.
A hybrid algorithm based on Simulated Annealing and Genetic Algorithm was proposed by Wang [135-136] to improve the neighbourhood search ability of the heuristic algorithms. They divided the randomly generated initial population into subpopulations and applied multiple crossover operations to these subpopulations in order to improve the exploring potential of traditional GA-based approaches. Their analysis showed that this hybrid algorithm provides better results than the existing simple GA, but the hybrid heuristic is computationally more expensive. Using a hybrid of Ant System and Genetic Algorithm, Noorul Haq [137, 138] proposed new techniques that give better results compared to pure metaheuristic techniques; in this hybridization the output of the Ant System became the input of the GA. A hybrid algorithm based on Simulated Annealing, Genetic Algorithm and an iterative hill-climbing procedure, avoiding local minima at each step of the iteration, was proposed in 2004 by Nearchou [139].

A superior hybrid genetic algorithm, in which initial solutions are found by PSO, was proposed by Biswal [140] for multi-objective scheduling of flexible manufacturing systems. The outstanding performance of this algorithm overcomes the main limitation of the earlier work by Naderi in 2009 [143].

D. H. Kim [141] in 2007 proposed a hybrid approach combining a Euclidean distance (EU) based GA with the PSO method.
K. Li in 2010 [142] proposed a Genetic-Particle Swarm Mixed Algorithm (GPSMA) to breed software test data for path testing. On the basis of population division, drawing on the idea of niching, GPSMA generates test data in each subpopulation. A new strategy replaces the mutation operation of the traditional GA, and an excellent-rate-of-production mechanism implements the interaction between subpopulations. Theoretical analysis and practical testing results show that the approach is simpler, easier and more effective in generating test data automatically; comparison with ant colony optimization and the traditional genetic algorithm shows that GPSMA is a good alternative for test data generation problems.
3.5 LIMITATIONS / GAPS OF EXISTING RESEARCH
After a comprehensive study of the existing literature, a number of limitations/gaps have been found in the area of Software Testing:

- The majority of work reported on software testing problems deals with statement testing, branch testing, path testing and data flow testing, which have their own limitations. Hence more attention is required towards a new approach for testing.

- Automatic test data generation is a major issue in software testing. Many works report automatic test data generation, but a new approach is required that can generate unique test data and does not fall into local optima.

- Most work on hybridization combines local search with heuristic techniques; there is limited work on the hybridization of metaheuristic algorithms in software testing. Hence more emphasis is required towards it.

- From the survey of the literature, it is concluded that metaheuristic techniques, especially GA and PSO, have become an interesting preference for researchers to solve testing problems.

The development of heuristics and metaheuristics is still a major issue in software testing, including automatic test data generation that covers each and every statement. Therefore, in the present work, automatic test data generation problems with strong performance measures, including the generation of unique test data and coverage of each and every statement (100 percent statement coverage), have been considered. An attempt has been made to develop a hybrid algorithm, based on combining the powers of the two algorithms PSO and GA, for solving the test data generation problem of software testing, which must be effective in generating test cases.


Chapter 4
Problem formulation
4.1 Challenges in software testing
Software testing has a lot of challenges, both in manual and in automation testing. Generally, in a manual testing scenario, developers throw the build to the test team assuming that the responsible test team or tester will pick up the build and come to ask what the build is about. This is the case in organizations not following so-called processes. The tester is the middleman between the development team and the customers, handling the pressure from both sides. This is not always the case, though; sometimes testers may add complications to the testing process due to their unskilled way of working.
So here we go with the top challenges:
1) Testing the complete application: Is it possible? I think it is impossible. There are millions of test combinations. It's not possible to test each and every combination in either manual or automation testing. If you try all these combinations, you will never ship the product.
2) Misunderstanding of company processes: Sometimes you just don't pay proper attention to what the company-defined processes are and what purposes they serve. There is a myth among testers that they should always follow company processes, even when those processes are not applicable to their current testing scenario. This results in incomplete and inappropriate application testing.
3) Relationship with developers: A big challenge. It requires a very skilled tester to handle this relationship positively while still completing the work the tester's way. There are simply hundreds of excuses developers or testers can make when they do not agree on some point. For this the tester also requires good communication, troubleshooting and analytical skills.
4) Regression testing: As the project expands, the regression testing work simply becomes uncontrolled: there is pressure to handle current functionality changes, checks on previously working functionality, and bug tracking.
5) Lack of skilled testers: I would call this a wrong management decision while selecting or training testers for the project task at hand. Unskilled testers may add more chaos than they remove, resulting in incomplete, insufficient and ad-hoc testing throughout the testing life cycle.
6) Testing always under time constraint: "Hey tester, we want to ship this product by this weekend, are you ready?" When this order comes from the boss, the tester simply focuses on task completion rather than on test coverage and quality of work. There is a huge list of tasks to complete within the specified time, including writing, executing, automating and reviewing the test cases.

7) Which tests to execute first? If you face the challenge stated in point 6, how will you decide which test cases should be executed, and with what priority? Which tests are more important than others? This requires good experience of working under pressure.
8) Understanding the requirements: Sometimes testers are responsible for communicating with customers to understand the requirements. What if a tester fails to understand the requirements? Will he be able to test the application properly? Definitely not! Testers require good listening and understanding capabilities.
Automated Software Testing for MATLAB

Software testing can improve software quality. To test effectively, scientists and engineers should know how to write and run tests, define appropriate test cases, determine expected outputs, and correctly handle floating-point arithmetic. Using the mlUnit automated testing framework, scientists and engineers using MATLAB can make software testing an integrated part of their software development routine.

Write Unit Tests: assemble test methods into test-case classes.
Run Unit Tests: run test suites in the testing framework.
Analyze Test Results: use test results to identify failures.
4.2 Issues with xUnit Tool


As of R2013a (March 2013), MATLAB includes a unit test framework, and there are no plans to continue further development of MATLAB xUnit.
MATLAB xUnit Test Framework is a unit test framework for MATLAB code.
MATLAB xUnit is designed to be easy to use for MATLAB users with a wide range of
experience. Users can write tests using ordinary M-files that are very simple in structure.
MATLAB xUnit comes with extensive documentation that ranges in scope from a "Getting Started" section to advanced techniques and architectural notes.
Only the "xunit" directory is needed to use the framework. The "tests" directory contains the
framework's own test suite. The "architecture" directory contains architectural notes on the
framework's design and how it might be extended.

MATLAB xUnit can be used with MATLAB releases R2008a and later. MATLAB xUnit relies
heavily on object-oriented language features introduced in R2008a and will not work with earlier
releases.

4.3 MLUnit
mlUnit is a unit test framework for the MATLAB M language. It follows the patterns of the xUnit family, including assertions, test cases and suites, as well as the fixture.
In contrast to MATLAB's own unit test framework:
1. mlUnit outputs jUnit-compatible XML reports
2. mlUnit is compatible with your MATLAB (not just R2013b), down to R2006b
3. mlUnit offers specialised assert functions, e.g. assert_empty, assert_warning, and many more.
This software and all associated files are released under the GNU General Public License (GPL) as published by the Free Software Foundation.


Chapter 5
DESIGN & IMPLEMENTATION
5.1 User-Defined Functions
A user-defined function is a Matlab program that is created by the user, saved as a function file,
and then can be used like a built-in function. A function in general has input arguments (or
parameters) and output variables (or parameters) that can be scalars, vectors, or matrices of any
size. There can be any number of input and output parameters, including zero. Calculations
performed inside a function typically make use of the input parameters, and the results of the
calculations are transferred out of the function by the output parameters.
Writing a Function File
A function file can be written using any text editor (including the Matlab Editor). The file must
be in the Matlab Path in order for Matlab to be able to locate the file. The first executable line in
a function file must be the function definition line, which must begin with the keyword function.
The most general syntax for the function definition line is:
function [out1, out2, ...] = functionName(in1, in2, ...)
where functionName is the name of the user-defined function, in1, in2, ... are the input
parameters, and out1, out2, ... are the output parameters.
The parentheses are needed even if the function has no input parameters:
function [out1, out2, ...] = functionName( )
If there is only one output parameter, then the square brackets can be omitted:
function out = functionName(in1, in2, ...)
If there are no output parameters at all, the function is called a void function, and the function
definition line is written as:
function functionName(in1, in2, ...)
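As an illustrative sketch of these rules, a complete function file with two inputs and two outputs could look like the following (the function name sumAndProduct and its behavior are hypothetical examples, not taken from this report; save the file as sumAndProduct.m on the MATLAB path):

```matlab
function [s, p] = sumAndProduct(a, b)
% sumAndProduct  Return the sum and the elementwise product of two inputs.
% Hypothetical example of a user-defined function file.
s = a + b;    % first output parameter
p = a .* b;   % second output parameter; .* also works for vectors/matrices
end
```

Calling [s, p] = sumAndProduct(3, 4) returns s = 7 and p = 12.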

5.2 Inline Function



inline(expr) constructs an inline function object from the MATLAB expression contained in the
string expr. The input argument to the inline function is automatically determined by searching
expr for an isolated lower-case alphabetic character, other than i or j, that is not part of a word
formed from several alphabetic characters. If no such character exists, x is used. If the character
is not unique, the one closest to x is used. If two such characters are found, the one later in the
alphabet is chosen.
inline(expr,arg1,arg2,...) constructs an inline function whose input arguments are specified by the
strings arg1, arg2, .... Multi-character symbol names may be used.
inline(expr,n), where n is a scalar, constructs an inline function whose input arguments are x, P1,
P2, ... .
Example a:
This example creates a simple inline function to square a number.
g = inline('t^2')
g =
Inline function:
g(t) = t^2
You can convert the result to a string using the char function.
char(g)
ans =
t^2
Example b:
This example creates an inline function to represent the formula f = 3sin(2x^2). The resulting
inline function can be evaluated with the argnames and formula functions.
f = inline('3*sin(2*x.^2)')
f =
Inline function:
f(x) = 3*sin(2*x.^2)
argnames(f)
ans =
'x'
formula(f)
ans =
3*sin(2*x.^2)
Example c:
This call to inline defines the function f to be dependent on two variables, alpha and x:
f = inline('sin(alpha*x)')
f=
Inline function:
f(alpha,x) = sin(alpha*x)
If inline does not return the desired function variables or if the function variables are in the
wrong order, you can specify the desired variables explicitly with the inline argument list.
g = inline('sin(alpha*x)','x','alpha')
g=
Inline function:
g(x,alpha) = sin(alpha*x)
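Inline function objects such as the ones above are evaluated simply by calling them with argument values. As a side note, inline is deprecated in recent MATLAB releases, and anonymous function handles are the preferred equivalent; a small sketch of both forms:

```matlab
% Evaluate an inline object by calling it like a function.
f = inline('3*sin(2*x.^2)');
y1 = f(0);               % 3*sin(0) = 0

% Modern equivalent using an anonymous function handle:
g = @(x) 3*sin(2*x.^2);
y2 = g(0);               % also 0
```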

5.3 Write Unit Tests


The matlab.unittest package is an xUnit-style unit-testing framework for MATLAB. To test a
MATLAB program, write a test case using qualifications, which are methods for testing values
and responding to failures. The test case contains test functions and test fixtures (setup and
teardown code).

5.3.1 Script-Based Unit Tests


Write Script-Based Unit Tests

5.3.2 Function-Based Unit Tests


(a)Write Function-Based Unit Tests
(b)Write Simple Test Case Using Functions
(c)Write Test Using Setup and Teardown Functions
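A minimal function-based test file might look like the following sketch (the file name exampleTest.m and the tested expression are hypothetical illustrations):

```matlab
function tests = exampleTest
% exampleTest  Collect all local test functions into a Test array.
tests = functiontests(localfunctions);
end

function testAddition(testCase)
% Local functions whose names begin or end with "test" become tests.
verifyEqual(testCase, 2 + 3, 5)
end
```

Run it with results = run(exampleTest); or runtests('exampleTest').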


5.4 Classes in the MATLAB Language


5.4.1 Classes
In the MATLAB language, every value is assigned to a class. For example, creating a variable
with an assignment statement constructs a variable of the appropriate class:
a = 7;
b = 'some text';
s.Name = 'Nancy';
s.Age = 64;
whos
  Name      Size            Bytes  Class     Attributes
  a         1x1                 8  double
  b         1x9                18  char
  s         1x1               370  struct

Basic commands like whos display the class of each value in the workspace. This information
helps MATLAB users recognize that some values are characters and display as text, while other
values are double-precision numbers, and so on. Some variables, like structures, can contain
values of several different classes.
5.4.2 Predefined Classes
MATLAB defines fundamental classes that comprise the basic types used by the language.
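The class function reports the class of any value, which is a quick way to inspect these fundamental types:

```matlab
class(7)            % 'double'  (numeric literals default to double)
class('some text')  % 'char'
class(int8(1))      % 'int8'
class(struct())     % 'struct'
```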
5.4.3 User-Defined Classes
You can create your own MATLAB classes. For example, you could define a class to represent
polynomials. This class could define the operations typically associated with MATLAB classes,
like addition, subtraction, indexing, displaying in the command window, and so on. These
operations would need to perform the equivalent of polynomial addition, polynomial subtraction,
and so on. For example, when you add two polynomial objects:
p1 + p2
the plus operation must be able to add polynomial objects because the polynomial class defines
this operation.
When you define a class, you can overload special MATLAB functions (such as plus.m for the
addition operator). MATLAB calls these methods when users apply those operations to objects of
your class.


5.5 MATLAB Classes Key Terms


MATLAB classes use the following words to describe different parts of a class definition and
related concepts.
(a) Class definition: Description of what is common to every instance of a class.
(b) Properties: Data storage for class instances.
(c) Methods: Special functions that implement operations that are usually performed only on
instances of the class.
(d) Events: Messages that are defined by classes and broadcast by class instances when some
specific action occurs.
(e) Attributes: Values that modify the behavior of properties, methods, events, and classes.
(f) Listeners: Objects that respond to a specific event by executing a callback function when the
event notice is broadcast.
(g) Objects: Instances of classes, which contain actual data values stored in the objects'
properties.
(h) Subclasses: Classes that are derived from other classes and that inherit the methods,
properties, and events from those classes (subclasses facilitate the reuse of code defined in the
superclass from which they are derived).
(i) Superclasses: Classes that are used as a basis for the creation of more specifically defined
classes (i.e., subclasses).
(j) Packages: Folders that define a scope for class and function naming.
These are general descriptions of these components and concepts. This documentation describes
all of these components in detail.
Some Basic Relationships
Classes
A class is a definition that specifies certain characteristics that all instances of the class share.
These characteristics are determined by the properties, methods, and events that define the class
and the values of attributes that modify the behavior of each of these class components. Class
definitions describe how objects of the class are created and destroyed, what data the objects
contain, and how you can manipulate this data.

5.6 Class Hierarchies


It sometimes makes sense to define a new class in terms of existing classes. This enables you to
reuse the designs and techniques of an existing class in a new class that represents a similar
entity. You accomplish this reuse by creating a subclass. A subclass defines objects that are a
subset of those defined by the superclass. A subclass is more specific than its superclass and
might add new properties, methods, and events to those inherited from the superclass.
Mathematical sets can help illustrate the relationships among classes. In the following diagram,
the set of Positive Integers is a subset of the set of Integers and a subset of Positive numbers. All
three sets are subsets of Real numbers, which is a subset of All Numbers.
The definition of Positive Integers requires the additional specification that members of the set
be greater than zero. Positive Integers combine the definitions from both Integers and Positives.
The resulting subset is more specific, and therefore more narrowly defined, than the supersets,
but still shares all the characteristics that define the supersets.

Fig. Class Hierarchies


The "is a" relationship is a good way to determine whether it is appropriate to define a particular
subset in terms of existing supersets. For example, each of the following statements makes sense:
(a). A Positive Integer is an Integer.
(b). A Positive Integer is a Positive number.

If the "is a" relationship holds, then it is likely you can define a new class from a class or classes
that represent some more general case.
Reusing Solutions
Classes are usually organized into taxonomies to foster code reuse. For example, if you define a
class to implement an interface to the serial port of a computer, it would probably be very similar
to a class designed to implement an interface to the parallel port. To reuse code, you could define
a superclass that contains everything that is common to the two types of ports, and then derive
subclasses from the superclass in which you implement only what is unique to each specific
port. The subclasses would then inherit all of the common functionality from the superclass.
Objects
A class is like a template for the creation of a specific instance of the class. This instance or
object contains actual data for a particular entity that is represented by the class. For example, an
instance of a bank account class is an object that represents a specific bank account, with an
actual account number and an actual balance. This object has built into it the ability to perform
operations defined by the class, such as making deposits to and withdrawals from the account
balance.
Objects are not just passive data containers. Objects actively manage the data they contain by
allowing only certain operations to be performed, by hiding data that does not need to be public,
and by preventing external clients from misusing data by performing operations for which the
object was not designed. Objects even control what happens when they are destroyed.
Encapsulating Information
An important aspect of objects is that you can write software that accesses the information stored
in the object via its properties and methods without knowing anything about how that
information is stored, or even whether it is stored or calculated when queried. The object isolates
code that accesses the object from the internal implementation of methods and properties. You
can define classes that hide both data and operations from any methods that are not part of the
class. You can then implement whatever interface is most appropriate for the intended use.
Define a Simple Class
The basic purpose of a class is to define an object that encapsulates data and the operations
performed on that data. For example, the class clas1 defines a property and a method that
operates on the data in that property:

x: property that contains the data stored in an object of the class.
sq: method that squares the value of the property.

Here is the definition of clas1:


classdef clas1
    properties
        x
    end
    methods
        function p = sq(obj)
            p = obj.x*obj.x
        end
    end
end
To use the class, create an object of the class, assign the class data, and call operations on that
data. When we run the above code, the result is as follows.
Create an object of the class:
>> y = clas1
y =
clas1
Assign a value to the property:
>> y.x = 9
y =
clas1
Access the member function of the class, passing the object as a parameter:
>> sq(y)
p =
    81
Testing by passing a wrong expected value:
assert_equals(82, sq(y))
p =
    81
??? Error using ==> mlunit_fail at 34
Data not equal:
Expected : 82
Actual : 81
Error in ==> abstract_assert_equals at 115
mlunit_fail(msg);
Error in ==> assert_equals at 42
abstract_assert_equals(true, expected, actual, varargin{:});

Creating Test Case for MLUnit


test_cl1.m
function self = test_cl1(name)
%test_cl1 constructor.
%
% Class Info / Example
% ====================
% The class test_cl1 is the fixture for all tests of the test-driven
% cl1. The constructor shall not be called directly, but through
% a test runner.
tc = test_case(name);
self = class(struct([]), 'test_cl1', tc);

test_v1.m
function self = test_v1(self)
y = clas1;
y.x = 9;
assert_equals(81, sq(y))
assert_equals(80, sq(y))
Output :

5.7 Overloading Functions


Classes can implement existing functionality, such as addition, by defining a method with the
same name as the existing MATLAB function. For example, suppose you want to add two
BasicClass objects. It makes sense to add the values of the Value properties of each object.
Here is an overload of the MATLAB plus function. It defines addition for this class as adding the
property values:
methods
    function r = plus(o1,o2)
        r = [o1.Value] + [o2.Value];
    end
end
By implementing a method called plus, you can use the "+" operator with objects of BasicClass.
a = BasicClass(pi/3);
b = BasicClass(pi/4);
a+b
ans = 1.8326
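The fragment above assumes a class with a Value property and a constructor that accepts an initial value; a minimal sketch of such a class (reconstructed here to make the a+b example runnable, not copied from this report) is:

```matlab
classdef BasicClass
    % BasicClass  Minimal sketch supporting the overloaded plus example.
    properties
        Value
    end
    methods
        function obj = BasicClass(v)
            if nargin > 0
                obj.Value = v;   % store the initial value
            end
        end
        function r = plus(o1, o2)
            % Overloaded +: add the Value properties of the operands
            r = [o1.Value] + [o2.Value];
        end
    end
end
```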
Class-Based Unit Tests
(a) Author Class-Based Unit Tests in MATLAB
(b) Write Simple Test Case Using Classes
(c) Write Setup and Teardown Code Using Classes
(d) Tag Unit Tests
(e) Write Tests Using Shared Fixtures
(f) Create Basic Custom Fixture
(g) Create Advanced Custom Fixture
(h) Create Basic Parameterized Test
(i) Create Advanced Parameterized Test
Run Unit Tests
Run test suites in the testing framework.
To run a group of tests in matlab.unittest, create a test suite using the
matlab.unittest.TestSuite class. Test suites can contain:

All tests in a package
All tests in a class
All tests in a folder
A single test method
Arrays of test suites



Proposed implementation in Run Unit Tests


(a) Create simple test suites
(b) Run Tests for Various Workflows
(c) Add Plugin to Test Runner
(d) Write Plugin to Save Diagnostic Details
(e) Write plugin to extend test runners
(f) Create custom plugin
Analyze Test Results
Use test results to identify failures:
(a) Analyze Test Case Results
(b) Analyze Failed Test Results


Chapter 6
Implementation

6.1 Proposed implementation

Class-Based Unit Tests in MATLAB

To test a MATLAB program, write a unit test using qualifications, which are methods for testing
values and responding to failures. The table below lists the types of qualifications.

Verifications: Use this qualification to produce and record failures without throwing an
exception. The remaining tests run to completion. (matlab.unittest.qualifications.Verifiable)
Assumptions: Use this qualification to ensure that a test runs only when certain preconditions
are satisfied. However, running the test without satisfying the preconditions does not produce a
test failure. When an assumption failure occurs, the testing framework marks the test as
filtered. (matlab.unittest.qualifications.Assumable)
Assertions: Use this qualification to ensure that the preconditions of the current test are met.
(matlab.unittest.qualifications.Assertable)
Fatal assertions: Use this qualification when the failure at the assertion point renders the
remainder of the current test method invalid or the state is unrecoverable.
(matlab.unittest.qualifications.FatalAssertable)
Table No. 1

The MATLAB Unit Testing Framework provides approximately 25 qualification methods for
each type of qualification. For example, use verifyClass or assertClass to test that a value is of
an expected class, and use assumeTrue or fatalAssertTrue to test if the actual value is true. For
a summary of qualification methods, see Types of Qualifications.
Often, each unit test function obtains an actual value by exercising the code that you are testing
and defines the associated expected value. For example, if you are testing the plus function, the
actual value might be plus(2,3) and the expected value 5. Within the test function, you pass the
actual and expected values to a qualification method. For example:
testCase.verifyEqual(plus(2,3),5)
For an example of a basic unit test, see Write Simple Test Case Using Classes.
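Put together, a minimal class-based test for plus could look like this sketch (the class name PlusTest is a hypothetical illustration):

```matlab
classdef PlusTest < matlab.unittest.TestCase
    methods (Test)
        function testSmallIntegers(testCase)
            % Actual value plus(2,3), expected value 5
            testCase.verifyEqual(plus(2,3), 5)
        end
    end
end
```

Running results = run(PlusTest); executes the test and returns the results.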
Additional Features for Advanced Test Classes
The MATLAB Unit Testing Framework includes several features for authoring more advanced
test classes:

Setup and teardown method blocks to implicitly set up the pretest state of the system and
return it to the original state after running the tests. For an example of a test class with
setup and teardown code, see Write Setup and Teardown Code Using Classes.
Advanced qualification features, including actual value proxies, test diagnostics, and a
constraint interface. For more information, see matlab.unittest.constraints and
matlab.unittest.diagnostics.
Parameterized tests to combine and execute tests on the specified lists of parameters. For
more information, see Create Basic Parameterized Test and Create Advanced
Parameterized Test.
Ready-to-use fixtures for handling the setup and teardown of frequently used testing
actions and for sharing fixtures between classes. For more information, see
matlab.unittest.fixtures and Write Tests Using Shared Fixtures.
Ability to create custom test fixtures. For more information, see Create Basic Custom
Fixture and Create Advanced Custom Fixture.

Write Setup and Teardown Code Using Classes


Test Fixtures
Test fixtures are setup and teardown code that sets up the pretest state of the system and returns it
to the original state after running the test. Setup and teardown methods are defined in the
TestCase class by the following method attributes:

TestMethodSetup and TestMethodTeardown methods run before and after each test
method.
TestClassSetup and TestClassTeardown methods run before and after all test methods
in the test case.

The testing framework guarantees that TestMethodSetup and TestClassSetup methods of
superclasses are executed before those in subclasses.
It is good practice for test authors to perform all teardown activities from within the
TestMethodSetup and TestClassSetup blocks using the addTeardown method instead of
implementing corresponding teardown methods in the TestMethodTeardown and
TestClassTeardown blocks. This guarantees the teardown is executed in the reverse order of the
setup and also ensures that the test content is exception safe.
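Following that practice, the teardown can be registered during setup with addTeardown, so no separate teardown block is needed. A sketch of such a test class (the class name SafeFigureTest and its test method are hypothetical):

```matlab
classdef SafeFigureTest < matlab.unittest.TestCase
    properties
        TestFigure
    end
    methods (TestMethodSetup)
        function createFigure(testCase)
            testCase.TestFigure = figure;
            % Register cleanup at setup time: addTeardown runs in
            % reverse order of setup and is exception safe, so no
            % TestMethodTeardown block is required.
            testCase.addTeardown(@close, testCase.TestFigure)
        end
    end
    methods (Test)
        function figureExists(testCase)
            testCase.verifyTrue(isvalid(testCase.TestFigure))
        end
    end
end
```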

Test Case with Method-Level Setup Code

The following test case, FigurePropertiesTest, contains setup code at the method level. The
TestMethodSetup method creates a figure before running each test, and TestMethodTeardown
closes the figure afterwards. As discussed previously, you should try to define teardown activities
with the addTeardown method. However, for illustrative purposes, this example shows the
implementation of a TestMethodTeardown block.
classdef FigurePropertiesTest < matlab.unittest.TestCase
    properties
        TestFigure
    end
    methods(TestMethodSetup)
        function createFigure(testCase)
            % Create a fresh figure before each test
            testCase.TestFigure = figure;
        end
    end
    methods(TestMethodTeardown)
        function closeFigure(testCase)
            close(testCase.TestFigure)
        end
    end
    methods(Test)
        function defaultCurrentPoint(testCase)
            cp = testCase.TestFigure.CurrentPoint;
            testCase.verifyEqual(cp, [0 0], ...
                'Default current point is incorrect')
        end
        function defaultCurrentObject(testCase)
            import matlab.unittest.constraints.IsEmpty
            co = testCase.TestFigure.CurrentObject;
            testCase.verifyThat(co, IsEmpty, ...
                'Default current object should be empty')
        end
    end
end

6.1.1 Test Case with Class-Level Setup Code


The following test case, BankAccountTest, contains setup code at the class level.
To set up the BankAccountTest, which tests the BankAccount class example described in
Developing Classes Typical Workflow, add a TestClassSetup method,
addBankAccountClassToPath. This method adds the path to the BankAccount example file.
Typically, you set up the path using a PathFixture; this example performs the setup and teardown
activities manually for illustrative purposes.
classdef BankAccountTest < matlab.unittest.TestCase
    % Tests the BankAccount class.
    methods (TestClassSetup)
        function addBankAccountClassToPath(testCase)
            p = path;
            testCase.addTeardown(@path,p);
            addpath(fullfile(matlabroot,'help','techdoc','matlab_oop',...
                'examples'));
        end
    end
    methods (Test)
        function testConstructor(testCase)
            b = BankAccount(1234, 100);
            testCase.verifyEqual(b.AccountNumber, 1234, ...
                'Constructor failed to correctly set account number');
            testCase.verifyEqual(b.AccountBalance, 100, ...
                'Constructor failed to correctly set account balance');
        end
        function testConstructorNotEnoughInputs(testCase)
            import matlab.unittest.constraints.Throws;
            testCase.verifyThat(@()BankAccount, ...
                Throws('BankAccount:InvalidInitialization'));
        end
        function testDeposit(testCase)
            b = BankAccount(1234, 100);
            b.deposit(25);
            testCase.verifyEqual(b.AccountBalance, 125);
        end
        function testWithdraw(testCase)
            b = BankAccount(1234, 100);
            b.withdraw(25);
            testCase.verifyEqual(b.AccountBalance, 75);
        end
        function testNotifyInsufficientFunds(testCase)
            callbackExecuted = false;
            function testCallback(~,~)
                callbackExecuted = true;
            end
            b = BankAccount(1234, 100);
            b.addlistener('InsufficientFunds', @testCallback);
            b.withdraw(50);
            testCase.assertFalse(callbackExecuted, ...
                'The callback should not have executed yet');
            b.withdraw(60);
            testCase.verifyTrue(callbackExecuted, ...
                'The listener callback should have fired');
        end
    end
end

6.1.2 Run Tests for Various Workflows


Set Up Example Tests
To explore different ways to run tests, create a class-based test and a function-based test in your
current working folder. For the class-based test file, use the DocPolynomTest example presented
in the matlab.unittest.qualifications.Verifiable example. For the function-based test file, use the
axesPropertiesTest example presented in Write Test Using Setup and Teardown Functions.
6.1.3 Run All Tests in Class or Function
Use the run method of the TestCase class to directly run tests contained in a single test file.
When running tests directly, you do not need to explicitly create a Test array.
% Directly run a single file of class-based tests
results1 = run(DocPolynomTest);

% Directly run a single file of function-based tests
results2 = run(axesPropertiesTest);
You can also assign the test file output to a variable and run the tests using the functional form or
dot notation.
% Create Test or TestCase objects
t1 = DocPolynomTest;     % TestCase object from class-based test
t2 = axesPropertiesTest; % Test object from function-based test
% Run tests using functional form
results1 = run(t1);
results2 = run(t2);
% Run tests using dot notation
results1 = t1.run;
results2 = t2.run;
Alternatively, you can run tests contained in a single file by using runtests.
6.1.4 Run Single Test in Class or Function
Run a single test from within a class-based test file by specifying the test method as an input
argument to the run method. For example, run only the test testMultiplication from the
DocPolynomTest file.
results1 = run(DocPolynomTest,'testMultiplication');
Function-based test files return an array of Test objects instead of a single TestCase object. You
can run a particular test by indexing into the array. However, you must examine the Name field
in the test array to ensure you run the correct test. For example, run only the test
surfaceColorTest from the axesPropertiesTest file.
t2 = axesPropertiesTest; % Test object from function-based test
t2(:).Name
ans =
axesPropertiesTest/testDefaultXLim
ans =
axesPropertiesTest/surfaceColorTest
The surfaceColorTest test corresponds to the second element in the array.

Run only the surfaceColorTest test.
results2 = t2(2).run; % or results2 = run(t2(2));
6.1.5 Run Test Suites by Name
You can run a group, or suite, of tests together. To run a test suite using runtests, define the
suite as a cell array of strings representing a test file, a test class, a package that contains tests,
or a folder that contains tests.
suite = {'axesPropertiesTest','DocPolynomTest'};
runtests(suite);
Run all tests in the current folder by using pwd as input to the runtests function.
runtests(pwd);
Alternatively, you can explicitly create Test arrays and use the run method to run them.
6.1.6 Run Test Suites from Test Array
You can explicitly create Test arrays and use the run method of the TestSuite class to run them.
Using this approach, you explicitly define TestSuite objects and, therefore, can examine their
contents. The runtests function does not return the TestSuite object.
import matlab.unittest.TestSuite
s1 = TestSuite.fromClass(?DocPolynomTest);
s2 = TestSuite.fromFile('axesPropertiesTest.m');
% generate test suite and then run
fullSuite = [s1 s2];
result = run(fullSuite);
Since the suite is explicitly defined, it is easy for you to perform further analysis on the suite,
such as rerunning failed tests.
failedTests = fullSuite([result.Failed]);
result2 = run(failedTests);
6.1.7 Run Tests with Customized Test Runner
You can specialize the test run by defining a custom test runner and adding plugins. The run
method of the TestRunner class operates on a TestSuite object.
import matlab.unittest.TestRunner
import matlab.unittest.TestSuite
import matlab.unittest.plugins.TestRunProgressPlugin
% Generate TestSuite.
s1 = TestSuite.fromClass(?DocPolynomTest);
s2 = TestSuite.fromFile('axesPropertiesTest.m');
suite = [s1 s2];
% Create silent test runner.
runner = TestRunner.withNoPlugins;
% Add plugin to display test progress.
runner.addPlugin(TestRunProgressPlugin.withVerbosity(2))
% Run tests using customized runner.
result = run(runner,suite);

Setting up mlUnit

Step 1: Download mlUnit from

Step 2: Extract the mlunit folder and place it in the MATLAB directory in My Documents.

Fig.6.1 Download mlUnit


Fig.6.2 Extract mlunit folder


In the mlunit folder there is a test directory where we place all the tests.

Fig.6.3 Test directory


Step 3: Creating testing application

Step 3.1: Create a folder for the test, named with an @ prefix to match the test name, for
example @test_fib.
The code of fib.m is:
function y = fib(x)
% Simple queue implementation of the Fibonacci function.
if x < 0 || (int64(x) ~= x)
    error('invalid input: please input only non-negative integers.');
end
if x < 2, y = x; return; end
q = [0 1];
for k = 2:x
    q = [q sum(q)];
    q(1) = [];
end
y = q(2);
Step 3.2: Create the test_fib.m file in this folder.

Fig.6.4 Creation of fib.m file


Step 3.3: Now create another file, test_value.m.


Fig.6.5 Create another file test_value.m


Step 3.4: Create test_null.m

Fig.6.6 Create test_null.m


Step 3.5: Create test_value1.m.


Fig.6.7 Create test_value1.m


Step 3.6: After saving, the files appear in @test_fib as follows.

Fig.6.8 Files saved in @test_fib
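The contents of the test files appear only in the report's screenshots; a plausible sketch of test_value.m in mlUnit style (the exact assertions in the original files are not recoverable from the figures, so these checks are illustrative) is:

```matlab
function self = test_value(self)
% test_value  Check fib against known Fibonacci values.
assert_equals(0, fib(0));
assert_equals(1, fib(1));
assert_equals(55, fib(10));
```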


Step 4: Now set the path for mlUnit.

Fig.6.9 Set path for mlUnit

Step 5: Run mlunit_test to test the application.

Fig.6.10 Test mlUnit

Step 6: Check the TEST-@test_fib.xml file as the output file:
<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="@test_fib" errors="0" failures="1" tests="3" time="1.239"
hostname="unknown" timestamp="2015-05-18T08:05:08">
<properties/>
<testcase classname="@test_fib" name="test_null"/>
<testcase classname="@test_fib" name="test_value"/>
<testcase classname="@test_fib" name="test_value1">
<failure><![CDATA[Data not equal:
Expected : 0
Actual : 1
In
<a href="matlab:opentoline('C:\Users\aa\Documents\MATLAB\mlunit\test\@test_fib\test_value1.m',10)">
test_value1.m</a> at line 10]]></failure>
</testcase>
<system-out/>
<system-err/>
</testsuite>


Chapter 7
FUTURE SCOPE & CONCLUSION
7.1 Future scope

Manual testing of all workflows, all fields, and all negative scenarios is time- and cost-
consuming.
It is difficult to test multilingual sites manually.
Automation does not require human intervention.
You can run automated tests unattended (overnight).
Automation increases the speed of test execution.
Automation helps increase test coverage.
Manual testing can become boring and hence error-prone.

7.2 Conclusion
Automation testing is the use of tools to execute test cases, whereas manual testing requires
human intervention for test execution.
Within the automotive area, very little upfront testing has been done. With the introduction of
executable modeling tools such as MLUnit, this upfront testing is more feasible. It is the job of
the tool vendors to make this testing technology available and practical to the end user.
Automation testing saves time, cost, and manpower. Once recorded, it is easier to run an
automated test suite than to perform manual testing, which requires skilled labor.
Any type of application can be tested manually, but automated testing is recommended only for
stable systems and is mostly used for regression testing. Also, certain testing types, like ad-hoc
and monkey testing, are more suited to manual execution.
Manual testing can become repetitive and boring. On the contrary, the boring part of executing
the same test cases time and again is handled by automation software in automation testing.


REFERENCES
[1] Automated Testing tools
http://www.guru99.com/automation-testing.html
[2] Artem, M., Abrahamsson, P., & Ihme, T. (2009). Long-Term Effects of Test-Driven

Development A case study. In: Agile Processes in Software Engineering and Extreme
Programming,10th International Conference, XP 2009,. 31, pp. 13-22. Pula, Sardinia, Italy:
Springer.
[3] Bach, J. (2000, November). Session based test management. Software Testing and Quality
Engineering Magazine (11/2000), (http://www.satisfice.com/articles/sbtm.pdf).
[4] Bach, J. (2003). Exploratory Testing Explained, The Test Practitioner 2002,
(http://www.satisfice.com/articles/et-article.pdf).
[5] Bach, J. (2006). How to manage and measure exploratory testing. Quardev Inc.,
(http://www.quardev.com/content/whitepapers/how_measure_exploratory_testing.pdf).
[6] Basilli, V., & Selby, R. (1987). Comparing the effectiveness of software testing strategies.
IEEE Trans. Software Eng., 13(12), 1278-1296.
[7] Berg, B. L. (2009). Qualitative Research Methods for the Social Sciences (7th International
Edition) (7th ed.). Boston: Pearson Education.
[8] Bernat, G., Gaundel, M. C., & Merre, B. (2007). Software testing based on formal
specifications: a theory and tool. In:Testing Techniques in Software Engineering, Second
Pernambuco Summer School on Software Engineering. 6153, pp. 215-242. Recife:
Springer.
[9] Bertolino, A. (2007). Software Testing Research: Achievements Challenges Dreams. In:
International Conference on Software Engineering, ISCE 2007, (pp. 85-103). Minneapolis:
IEEE.
[10] Butts, K., et al., Automotive Powertrain Control Development Using CACSD,
Perspectives in Control: New Concepts and Applications, Tariq Samad (ed.), IEEE Press, 1999.
[11] Butts, K., Toeppe, S., Ranville, S., Specification and Testing of Automotive Powertrain
Control System Software using CACSD tools, 1998, Proceedings of the 17th AIAA/IEEE/SAE
Digital Avionics System Conference
[12] Beizer, B., Software Testing Techniques, Second Edition, International Thomson
Computer Press, 1990
[13] Causevic, A., Sundmark, D., & Punnekkat, S. (2010). An Industrial Survey on

Contemporary Aspects of Software Testing. In: Third International Conference on


Software Testing, Verification and Validation (pp. 393-401). Paris: IEEE Computer
Society.
[14] Chillarege, R. (1999). Software Testing Best Practices. Technical Report RC2145, IBM.
[15] Chilenski, J. J., Miller, S., Applicability of modified condition/decision coverage to
software testing, 1994, Software Engineering Journal
[16] Frankl, P. G., & Hamlet, R. G. (1998). Evaluating Testing Methods by Delivered
Reliability. IEEE Trans. Software Eng., 24(8), pp. 586-601.
[17] Galin, D. (2004). Software Quality Assurance: From theory to implementation. Pearson
Education Ltd.
[18] Matlab Documentation
http://in.mathworks.com/help/matlab/matlab_oop/getting-familiar-with-classes.html
[19] ML-Unit Matlab unit Test Framework
http://sourceforge.net/p/mlunit/mlunit/HEAD/tree/trunk/
[20] Object Oriented programming in Matlab
http://www.ce.berkeley.edu/~sanjay/e7/oop.pdf
[21] Object Oriented software testing by Devid C. Kung
http://www.ecs.csun.edu/~rlingard/COMP595VAV/OOSWTesting.pdf
[22] Patel, S., Smith, P., Sun, W., Ramanan, R., Donald, H., Toeppe, S., Ranville, S., Bostic, D.,
Butts, K., CACSD in Production Development: An Engine Control Case Study, 2000, Global
Powertrain Conference
[23] Toeppe, S., Ranville, S., Model Driven Automatic Unit Testing Technology: Tool
Architecture Introduction and Overview, 1999, Proceedings of the 18th AIAA/IEEE/SAE Digital
Avionics System Conference
[24] Toeppe, S., Ranville, S., An Automated Inspection Tool For a Graphical Specification and
Programming Language, 1999, Quality Week Conference
[25] Toeppe, S., Ranville, S., Bostic, D., Rzeimen, K., Automatic Code Generation
Requirements For Production Automotive Powertrain Applications, 1999, IEEE International
Symposium on Computer Aided Control System Design
[26] Toeppe, S., Ranville, S., Bostic, D., Wang, C., Practical Validation of Model Based Code
Generation for Automotive Applications, 1999, Proceedings of the 18th AIAA/IEEE/SAE
Digital Avionics System Conf.

[27] Toeppe, S., Ranville, S., Bostic, D., "Automating Software Specification, Design and
Synthesis for Computer Aided Control System Design Tools", 2000, Proceedings of the 19th
AIAA/IEEE/SAE Digital Avionics System Conf.
[28] The MathWorks, www.mathworks.com

