SE-Unit-4
Introduction
Software testing is the evaluation of software against the requirements gathered from users and the
system specifications. Testing is conducted at the phase level in the software development life cycle or
at the module level in program code. Software testing comprises validation and verification.
Verification: “Are we building the product right?”
Validation: “Are we building the right product?”
Software Validation
Validation is the process of examining whether or not the software satisfies the user
requirements. It is carried out at the end of the SDLC. If the software matches the requirements for
which it was made, it is validated.
Validation ensures the product under development meets the user requirements.
Validation answers the question – "Are we developing the product which addresses all that the
user needs from this software?"
Validation emphasizes user requirements.
Software Verification
Verification is the process of confirming that the software meets the business
requirements and is developed by adhering to the proper specifications and methodologies.
Verification ensures the product being developed is according to the design specifications.
Verification answers the question – "Are we developing this product by firmly following all
design specifications?"
Verification concentrates on the design and system specifications.
Targets of the test are -
Errors - These are actual coding mistakes made by developers. In addition, a difference
between the output of the software and the desired output is considered an error.
Fault - A fault occurs when an error exists. A fault, also known as a bug, is the result of an error
and can cause the system to fail.
Failure - Failure is the inability of the system to perform the desired task. A failure
occurs when a fault exists in the system.
Test Characteristics:
A good test has a high probability of finding an error
o The tester must understand the software and how it might fail
A good test is not redundant
o Testing time is limited; one test should not serve the same purpose as another test
A good test should be "best of breed"
o Tests that have the highest likelihood of uncovering a whole class of errors should be used
A good test should be neither too simple nor too complex
o Each test should be executed separately; combining a series of tests could cause side
effects and mask certain errors
Testing Guidelines
There are certain testing guidelines that should be followed while testing the software:
Development team should avoid testing the software: Testing should always be performed by
the testing team. The development team should never test the software themselves, because
after spending several hours building the software, they may unconsciously become too possessive
of it, and that might prevent them from seeing any flaws in the system. Testers should have a
destructive approach towards the product. Developers can perform unit testing and integration
testing, but software testing should be done by the testing team.
Software can never be 100% bug-free: Testing can never prove the software to be 100% bug-free. In
other words, there is no way to prove that the software is free of errors even after making a number
of test cases.
Start as early as possible: Testing should always start in parallel with the requirement
analysis process. This is crucial in order to avoid the problem of defect migration. It is important to
determine the test objects and scope as early as possible.
Prioritize sections: If there are certain critical sections, then it should be ensured that these
sections are tested with the highest priority and as early as possible.
The time available is limited: Testing time for software is limited. It must be kept in mind that the
time available for testing is not unlimited and that an effective test plan is very crucial before starting
the process of testing. There should be some criteria to decide when to terminate the process of
testing. This criterion needs to be decided beforehand. For instance, when the system is left with an
acceptable level of risk or according to timelines or budget constraints.
Testing must be done with unexpected and negative inputs: Testing should be done with
correct data and test cases as well as with flawed test cases, to make sure the system is leak-proof.
Test cases must be well documented to ensure future reuse for testing at later stages. This means
that the test cases must be enlisted with proper definitions and descriptions of inputs passed and
respective outputs expected. Testing should be done for functional as well as the non-functional
requirements of the software product.
Inspecting test results properly: Quantitative assessment of tests and their results must be done.
The documentation should be referred to properly while validating the results of the test cases to
ensure proper testing. Testing must be supported by automated tools and techniques as much as
possible. Besides ensuring that the system does what all it is supposed to do, testers also need to
ensure that the system does not perform operations which it isn’t supposed to do.
Validating assumptions: Test cases should never be developed on the basis of assumptions or
hypotheses; they must always be validated properly. For instance, assuming that the software
product is free from bugs while designing test cases may result in extremely weak test cases.
Testing Levels:
Testing itself may be defined at various levels of the SDLC. The testing process runs parallel to
software development. Before jumping to the next stage, a stage is tested, validated and verified.
Testing separately is done just to make sure that there are no hidden bugs or issues left in the
software. Software is tested at various levels -
Unit Testing
While coding, the programmer performs some tests on that unit of the program to know whether it is
error-free. Testing is performed under the white-box testing approach. Unit testing helps developers
decide whether individual units of the program are working as per requirement and are error-free.
Integration Testing
Even if the units of software are working fine individually, there is a need to find out whether the
units, if integrated together, would also work without errors, for example, argument passing and data
updating.
System Testing
The software is compiled as a product and then it is tested as a whole. This can be accomplished
using one or more of the following tests:
Functionality testing - Tests all functionalities of the software against the requirement.
Performance testing - This test proves how efficient the software is. It tests the effectiveness
and the average time taken by the software to do the desired task. Performance testing is done by
means of load testing and stress testing where the software is put under high user and data load
under various environment conditions.
Security & Portability - These tests are done when the software is meant to work on various
platforms and to be accessed by a number of persons.
Acceptance Testing
When the software is ready to be handed over to the customer, it has to go through the last phase of
testing, where it is tested for user interaction and response. This is important because even if the
software matches all user requirements, if the user does not like the way it appears or works, it may
be rejected.
Alpha testing - The team of developers themselves perform alpha testing by using the system as
if it is being used in a work environment. They try to find out how a user would react to some
action in the software and how the system should respond to inputs.
Beta testing - After the software is tested internally, it is handed over to the users to use under
their production environment only for testing purposes. This is not yet the delivered product.
Developers expect that users at this stage will surface minor problems that were previously
overlooked.
Regression Testing
Whenever a software product is updated with new code, features or functionality, it is tested
thoroughly to detect any negative impact of the added code. This is known as regression
testing.
Testing Documentation
Testing documents are prepared at different stages -
Before Testing
Testing starts with test case generation. The following documents are needed for reference –
SRS document - Functional Requirements document
Test Policy document - This describes how far testing should take place before releasing the
product.
Test Strategy document - This mentions detailed aspects of the test team, the responsibility
matrix and the rights/responsibilities of the test manager and test engineer.
Traceability Matrix document - This is an SDLC document related to the requirement
gathering process. As new requirements come, they are added to this matrix. These matrices
help testers know the source of a requirement. Requirements can be traced forward and backward.
While Being Tested
The following documents may be required once testing is started and while it is being done:
Test Case document - This document contains the list of tests required to be conducted. It includes
the unit test plan, integration test plan, system test plan and acceptance test plan.
Test description - This document is a detailed description of all test cases and procedures to
execute them.
Test case report - This document contains the test case report produced as a result of the test.
Test logs - This document contains test logs for every test case report.
After Testing
The following documents may be generated after testing:
Test summary - The test summary is a collective analysis of all test reports and logs. It
summarizes and concludes whether the software is ready to be launched. If it is ready to
launch, the software is released under a version control system.
In scientific experimental settings, researchers often manipulate a variable (the independent variable)
to see what effect it has on a second variable (the dependent variable). For example, a researcher might,
for different experimental groups, manipulate the dosage of a particular drug between groups to see what
effect it has on health. In this example, the researcher wants to make a causal inference, namely, that
different doses of the drug may be held responsible for observed changes or differences. When the
researcher may confidently attribute the observed changes or differences in the dependent variable to the
independent variable, and when he can rule out other explanations (or rival hypotheses), then his causal
inference is said to be internally valid.
In many cases, however, the magnitude of effects found in the dependent variable may not just
depend on
variations in the independent variable,
the power of the instruments and statistical procedures used to measure and detect the effects,
and
the choice of statistical methods (see: Statistical conclusion validity).
Rather, a number of variables or circumstances uncontrolled for (or uncontrollable) may lead to
additional or alternative explanations (a) for the effects found and/or (b) for the magnitude of the effects
found. Internal validity, therefore, is more a matter of degree than of either-or, and that is exactly why
research designs other than true experiments may also yield results with a high degree of internal
validity.
In order to allow for inferences with a high degree of internal validity, precautions may be taken
during the design of the scientific study. As a rule of thumb, conclusions based on correlations or
associations may only allow for lesser degrees of internal validity than conclusions drawn on the basis of
direct manipulation of the independent variable. And, when viewed only from the perspective of Internal
Validity, highly controlled true experimental designs (i.e. with random selection, random assignment to
either the control or experimental groups, reliable instruments, reliable manipulation processes, and
safeguards against confounding factors) may be the "gold standard" of scientific research. By contrast,
however, the very strategies employed to control these factors may also limit the generalizability
or External Validity of the findings.
External validity is the validity of generalized (causal) inferences in scientific research, usually based
on experiments, as experimental validity. In other words, it is the extent to which the results of a study can
be generalized to other situations and to other people. For example, inferences based on comparative
psychotherapy studies often employ specific samples (e.g. volunteers, highly depressed, no comorbidity).
If psychotherapy is found effective for these sample patients, will it also be effective for non-volunteers or
the mildly depressed or patients with concurrent other disorders?
Situation: All situational specifics (e.g. treatment conditions, time, location, lighting, noise,
treatment administration, investigator, timing, scope and extent of measurement, etc. etc.) of a
study potentially limit generalizability.
Pre-test effects: If cause-effect relationships can only be found when pre-tests are carried out,
then this also limits the generality of the findings.
Post-test effects: If cause-effect relationships can only be found when post-tests are carried out,
then this also limits the generality of the findings.
Reactivity (placebo, novelty, and Hawthorne effects): If cause-effect relationships are found they
might not be generalizable to other settings or situations if the effects found only occurred as an
effect of studying the situation.
Rosenthal effects: Inferences about cause-consequence relationships may not be generalizable
to other investigators or researchers.
Testing Approaches
Before going through types, there are broader testing approaches which are:
Blackbox (Functional): It is generally used when the tester has limited knowledge of the system
under test or when access to the source code is not available; it is mainly done through defined
interfaces.
Knowing the specified function that a product has been designed to perform,
test to see if that function is fully operational and error free
Includes tests that are conducted at the software interface
Not concerned with internal logical structure of the software
Whitebox (Structural): Also known as clear-box testing, glass-box testing, transparent-box testing,
and structural testing. It is a method of testing software that tests the internal structures or workings of
an application, as opposed to its functionality; it is mainly done on the source code itself.
Knowing the internal workings of a product, test that all internal operations are
performed according to specifications and all internal components have been
exercised
Involves tests that concentrate on close examination of procedural detail
Logical paths through the software are tested
Test cases exercise specific sets of conditions and loops
Graybox: It mainly combines the two other approaches: the tester knows about the code
but develops the test cases as in the black-box approach.
Testing techniques:
1. Statement coverage: In this technique, the aim is to traverse all statements at least once. Hence,
each line of code is tested. In the case of a flowchart, every node must be traversed at least once.
Since all lines of code are covered, this helps in pointing out faulty code.
2. Branch Coverage: In this technique, test cases are designed so that each branch from all
decision points is traversed at least once. In a flowchart, all edges must be traversed at least
once.
3. Condition Coverage: In this technique, all individual conditions must be covered as shown in the
following example:
1. READ X, Y
2. IF(X == 0 || Y == 0)
3. PRINT ‘0’
In this example, there are 2 conditions: X == 0 and Y == 0. Now, the test cases must make each
condition evaluate to both TRUE and FALSE. One possible example would be:
#TC1 – X = 0, Y = 55
#TC2 – X = 5, Y = 0
4. Multiple Condition Coverage: In this technique, all the possible combinations of the possible
outcomes of conditions are tested at least once. Let’s consider the following example:
1. READ X, Y
2. IF(X == 0 || Y == 0)
3. PRINT ‘0’
#TC1: X = 0, Y = 0
#TC2: X = 0, Y = 5
#TC3: X = 55, Y = 0
#TC4: X = 55, Y = 5
Hence, four test cases are required for two individual conditions.
Similarly, if there are n conditions then 2^n test cases would be required, as the sketch below illustrates.
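The following sketch (an assumed example, not from the notes; the function name is hypothetical) writes the four multiple-condition-coverage cases from above as executable Python checks:

# Hypothetical function mirroring the pseudocode: IF (X == 0 || Y == 0) PRINT '0'
def is_zero_product(x, y):
    return x == 0 or y == 0

# Multiple condition coverage: all 2^2 = 4 combinations of the two
# individual conditions (X == 0, Y == 0) are exercised at least once.
cases = [
    (0, 0, True),    # TC1: both conditions TRUE
    (0, 5, True),    # TC2: first TRUE, second FALSE
    (55, 0, True),   # TC3: first FALSE, second TRUE
    (55, 5, False),  # TC4: both conditions FALSE
]
for x, y, expected in cases:
    assert is_zero_product(x, y) == expected, (x, y)

Note that condition coverage alone would already be satisfied by TC2 and TC3, since each individual condition then takes both TRUE and FALSE values.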
5. Basis Path Testing
White-box testing technique proposed by Tom McCabe
Enables the test case designer to derive a logical complexity measure of a
procedural design
Uses this measure as a guide for defining a basis set of execution paths
Test cases derived to exercise the basis set are guaranteed to execute every
statement in the program at least one time during testing
In this technique, control flow graphs are made from code or flowchart and then
Cyclomatic complexity is calculated which defines the number of independent paths so that
the minimal number of test cases can be designed for each independent path.
Steps:
1. Make the corresponding control flow graph
2. Calculate the cyclomatic complexity
3. Find the independent paths
4. Design test cases corresponding to each independent path
Flow Graph Notation
• A circle in a graph represents a node, which stands for a sequence of one or more
procedural statements
• A node containing a simple conditional expression is referred to as a predicate node
– Each compound condition in a conditional expression containing one or more
Boolean operators (e.g., and, or) is represented by a separate predicate node
– A predicate node has two edges leading out from it (True and False)
• An edge, or a link, is an arrow representing flow of control in a specific direction
– An edge must start and terminate at a node
– An edge does not intersect or cross over another edge
• Areas bounded by a set of edges and nodes are called regions
• When counting regions, include the area outside the graph as a region, too
Cyclomatic Complexity
Cyclomatic complexity is a software metric that gives the quantitative measure of logical
complexity of the program. The Cyclomatic complexity defines the number of independent
paths in the basis set of the program that provides the upper bound for the number of tests
that must be conducted to ensure that all the statements have been executed at least once.
The cyclomatic complexity can be computed by one of the following ways.
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph G, is defined as V(G) = E - N + 2P, where
E is the number of flow graph edges, N is the number of flow graph nodes, and P is the
number of connected components (P = 1 for a single connected flow graph, giving
V(G) = E - N + 2).
3. V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.
Example:
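As a worked illustration (the routine below is an assumed example, since no specific program accompanies these notes here), consider:

# Assumed example routine (illustrative only)
def classify(a, b):
    if a > b:            # predicate node 1
        result = "first"
    else:
        result = "second"
    if a == b:           # predicate node 2
        result = "equal"
    return result

Its flow graph has N = 6 nodes, E = 7 edges, two predicate nodes and three regions (two bounded regions plus the outer region). All three methods agree: V(G) = E - N + 2 = 7 - 6 + 2 = 3; V(G) = P + 1 = 2 + 1 = 3; number of regions = 3. Hence the basis set contains three independent paths (a > b; a <= b and a == b; a <= b and a != b), and at least three test cases are needed.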
6. Loop Testing: Loops are widely used and are fundamental to many algorithms; hence,
their testing is very important. Errors often occur at the beginnings and ends of loops.
1. Simple loops: For simple loops of size n, test cases are designed that:
Skip the loop entirely
Only one pass through the loop
2 passes
m passes, where m < n
n-1 and n+1 passes
2. Nested loops: For nested loops, all the loops are set to their minimum count and we start
from the innermost loop. Simple loop tests are conducted for the innermost loop and this is
worked outwards till all the loops have been tested.
3. Concatenated loops: Independent loops, one after another. Simple loop tests are applied
for each.
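As a hedged sketch (the loop body and the data are assumed), the simple-loop guideline above translates into the following boundary-focused test values:

def sum_first(values, passes):
    total = 0
    for i in range(passes):      # simple loop: executes 'passes' times
        total += values[i]
    return total

data = list(range(1, 11))        # loop size n = 10 at most
n = 10
# Guideline: skip the loop (0 passes), 1 pass, 2 passes, m < n, n-1 and n passes.
for passes in (0, 1, 2, 5, n - 1, n):
    assert sum_first(data, passes) == sum(data[:passes])
# n+1 passes attempts to exceed the data; here it must fail loudly.
try:
    sum_first(data, n + 1)
except IndexError:
    pass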
Advantages:
1. White box testing is very thorough as the entire code and structures are tested.
2. It results in the optimization of code, removing errors, and helps in removing extra lines of
code.
3. It can start at an earlier stage, as it doesn't require any interface, unlike black-box
testing.
4. Easy to automate.
Disadvantages:
1. The main disadvantage is that it is very expensive.
2. Redesigning and rewriting code requires test cases to be written again.
3. Testers are required to have in-depth knowledge of the code and programming language, as
opposed to black-box testing.
4. Missing functionalities cannot be detected, as only the existing code is tested.
5. Very complex and at times not realistic.
Black-Box Testing
Black box testing is a type of software testing in which the internal structure of the software is not
known to the tester. The testing is done without internal knowledge of the product.
Black box testing can be done in following ways:
1. Syntax Driven Testing – This type of testing is applied to systems that can be syntactically
represented by some language, for example, compilers, or a language that can be represented by a
context-free grammar. Here, the test cases are generated so that each grammar rule is used at
least once.
2. Equivalence partitioning – It is often seen that many types of inputs work similarly, so instead of
giving all of them separately we can group them together and test only one input of each group,
as shown in the sketch that follows. The idea is to partition the input domain of the system into a
number of equivalence classes such that each member of a class works in a similar way, i.e., if a
test case in one class results in some error, other members of the class would result in the same error.
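A minimal sketch of equivalence partitioning (the grade() function and its classes are assumed for illustration): one representative input stands in for every member of its class.

def grade(score):
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Four equivalence classes; one representative value tested per class.
representatives = [
    ("invalid-low", -5, ValueError),
    ("fail-class", 25, "fail"),
    ("pass-class", 75, "pass"),
    ("invalid-high", 150, ValueError),
]
for name, value, expected in representatives:
    try:
        assert grade(value) == expected, name
    except ValueError:
        assert expected is ValueError, name

If grading 25 misbehaved, every other score in the 0-49 class would be expected to misbehave in the same way, so testing one member per class suffices.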
Testing techniques:
Graph-Based Testing:
• Black-box method based on the nature of the relationships (links) among the program
objects (nodes); test cases are designed to traverse the entire graph
• Transaction flow testing: nodes represent steps in some transaction and links
represent logical connections between steps that need to be validated
• Finite state modeling: nodes represent user observable states of the software and links
represent state transitions
• Data flow modeling: nodes are data objects and links are transformations of one data
object to another data object
• Timing modeling: nodes are program objects and links are sequential
connections between these objects; link weights are the required execution times
Equivalence Partitioning:
• Black-box technique that divides the input domain into classes of data from which test
cases can be derived
• An ideal test case uncovers a class of errors that might require many arbitrary test
cases to be executed before a general error is observed
In a decision table (the table itself is not reproduced in these notes), each column corresponds to a
rule, which becomes a test case for testing. With four rules there will therefore be 4 test cases.
Difference between testing approaches:
Black-box testing: The internal workings of an application need not be known. Also known as
closed-box testing, data-driven testing, or functional testing. Performed by end-users and also by
testers and developers. Testing is based on external expectations; the internal behavior of the
application is unknown. It is the least exhaustive and time-consuming. Not suited for algorithm
testing; data domains and internal boundaries can only be tested by trial and error.
Grey-box testing: The tester has limited knowledge of the internal workings of the application.
Also known as translucent testing, as the tester has limited knowledge of the insides of the
application. Performed by end-users and also by testers and developers. Testing is done on the
basis of high-level database diagrams and data flow diagrams. Partly exhaustive and
time-consuming. Not suited for algorithm testing; data domains and internal boundaries can be
tested, if known.
White-box testing: The tester has full knowledge of the internal workings of the application. Also
known as clear-box testing, structural testing, or code-based testing. Normally done by testers
and developers. Internal workings are fully known, and the tester can design test data
accordingly. The most exhaustive and time-consuming type of testing. Suited for algorithm
testing; data domains and internal boundaries can be better tested.
Regression Testing
Regression testing is a type of software testing that seeks to uncover new software bugs,
or regressions, in existing functional and non-functional areas of a system after changes, such as
enhancements, patches or configuration changes, have been made to them.
The intent of regression testing is to ensure that changes such as those mentioned above
have not introduced new faults. One of the main reasons for regression testing is to determine
whether a change in one part of the software affects other parts of the software.
Common methods of regression testing include rerunning previously completed tests and
checking whether program behavior has changed and whether previously fixed faults have re-
emerged. Regression testing can be performed to test a system efficiently by systematically selecting
the appropriate minimum set of tests needed to adequately cover a particular change.
Contrast with non-regression testing (usually validation-test for a new issue), which aims to
verify whether, after introducing or updating a given software application, the change has had the
intended effect.
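A minimal sketch of what a regression test looks like in practice (the function and the fault are assumed for illustration): a test written when a fault was fixed is kept in the suite and re-run after every later change, so the fault cannot silently re-emerge.

def parse_amount(text):
    # Earlier (hypothetical) versions crashed on surrounding whitespace;
    # the fix below is pinned down by the regression test.
    return float(text.strip())

def test_regression_whitespace_amount():
    assert parse_amount(" 12.50 ") == 12.50

test_regression_whitespace_amount()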
Unit Testing
In computer programming, unit testing is a software testing method by which individual units
of source code, sets of one or more computer program modules together with associated control data,
usage procedures, and operating procedures are tested to determine if they are fit for use. Intuitively,
one can view a unit as the smallest testable part of an application. In procedural programming, a unit
could be an entire module, but it is more commonly an individual function or procedure. In object-
oriented programming, a unit is often an entire interface, such as a class, but could be an individual
method. Unit tests are short code fragments created by programmers or occasionally by white box
testers during the development process.
Ideally, each test case is independent from the others. Substitutes such as method
stubs, mock objects, fakes, and test harnesses can be used to assist testing a module in isolation.
Unit tests are typically written and run by software developers to ensure that code meets its design
and behaves as intended.
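A minimal unit-test sketch using Python's standard unittest module and a mock object in place of a real collaborator (the notify() function and the mailer are assumed names):

import unittest
from unittest import mock

def notify(user, mailer):
    # Unit under test: delegates delivery to a collaborator.
    mailer.send(user, "Welcome!")
    return True

class NotifyTest(unittest.TestCase):
    def test_notify_sends_welcome_mail(self):
        fake_mailer = mock.Mock()   # the substitute isolates the unit
        self.assertTrue(notify("alice", fake_mailer))
        fake_mailer.send.assert_called_once_with("alice", "Welcome!")

if __name__ == "__main__":
    unittest.main()

Because the mailer is mocked, the test exercises only the unit's own logic and stays independent of every other test case.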
Unit Testing Limitations
Testing will not catch every error in the program, since it cannot evaluate every execution path
in any but the most trivial programs. The same is true for unit testing. Additionally, unit testing by
definition only tests the functionality of the units themselves. Therefore, it will not catch integration
errors or broader system-level errors (such as functions performed across multiple units, or non-
functional test areas such as performance). Unit testing should be done in conjunction with
other software testing activities, as they can only show the presence or absence of particular errors;
they cannot prove a complete absence of errors. In order to guarantee correct behavior for every
execution path and every possible input, and ensure the absence of errors, other techniques are
required, namely the application of formal methods to proving that a software component has no
unexpected behavior.
Software testing is a combinatorial problem. For example, every boolean decision statement
requires at least two tests: one with an outcome of "true" and one with an outcome of "false". As a
result, for every line of code written, programmers often need 3 to 5 lines of test code. This obviously
takes time and its investment may not be worth the effort. There are also many problems that cannot
easily be tested at all – for example those that are nondeterministic or involve multiple threads. In
addition, code for a unit test is likely to be at least as buggy as the code it is testing. Fred
Brooks in The Mythical Man-Month quotes: "Never go to sea with two chronometers; take one or
three." Meaning: if two chronometers contradict, how do you know which one is correct?
Another challenge related to writing the unit tests is the difficulty of setting up realistic and
useful tests. It is necessary to create relevant initial conditions so the part of the application being
tested behaves like part of the complete system. If these initial conditions are not set correctly, the
test will not be exercising the code in a realistic context, which diminishes the value and accuracy of
unit test results.
To obtain the intended benefits from unit testing, rigorous discipline is needed throughout the
software development process. It is essential to keep careful records not only of the tests that have
been performed, but also of all changes that have been made to the source code of this or any other
unit in the software. Use of a version control system is essential. If a later version of the unit fails a
particular test that it had previously passed, the version-control software can provide a list of the
source code changes (if any) that have been applied to the unit since that time.
It is also essential to implement a sustainable process for ensuring that test case failures are
reviewed daily and addressed immediately. If such a process is not implemented and ingrained into
the team's workflow, the application will evolve out of sync with the unit test suite, increasing false
positives and reducing the effectiveness of the test suite.
Unit testing embedded system software presents a unique challenge: Since the software is
being developed on a different platform than the one it will eventually run on, you cannot readily run a
test program in the actual deployment environment, as is possible with desktop programs.
Integration Testing
Integration testing (sometimes called integration and testing, abbreviated I&T) is the phase
in software testing in which individual software modules are combined and tested as a group. It
occurs after unit testing and before validation testing. Integration testing takes as its
input modules that have been unit tested, groups them in larger aggregates, applies tests defined in
an integration test plan to those aggregates, and delivers as its output the integrated system ready
for system testing.
The purpose of integration testing is to verify functional, performance, and
reliability requirements placed on major design items. These "design items", i.e. assemblages (or
groups of units), are exercised through their interfaces using black box testing, success and error
cases being simulated via appropriate parameter and data inputs. Simulated usage of shared data
areas and inter-process communication is tested and individual subsystems are exercised through
their input interface. Test cases are constructed to test whether all the components within
assemblages interact correctly, for example across procedure calls or process activations, and this is
done after testing individual modules, i.e. unit testing. The overall idea is a "building block" approach,
in which verified assemblages are added to a verified base which is then used to support the
integration testing of further assemblages.
Some different types of integration testing are big bang, top-down, and bottom-up. Other
Integration Patterns are: Collaboration Integration, Backbone Integration, Layer Integration,
Client/Server Integration, Distributed Services Integration and High-frequency Integration.
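A hedged sketch of the "building block" idea for top-down integration (module names are assumed): the top module is tested first against a stub, which is later replaced by the real lower-level module while the same tests re-run.

def tax_stub(amount):
    # Stub: fixed, known answer until the real tax module is integrated.
    return 0.0

def real_tax(amount):
    # Real lower-level module, integrated later (rate assumed).
    return amount * 0.18

def invoice_total(amount, tax_fn):
    # Top-level module under integration test.
    return amount + tax_fn(amount)

assert invoice_total(100.0, tax_stub) == 100.0   # aggregate with stub
assert invoice_total(100.0, real_tax) == 118.0   # stub swapped for real module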
Validation Testing
In software project management, software testing, and software engineering, verification and
validation (V&V) is the process of checking that a software system meets specifications and that it
fulfills its intended purpose. It may also be referred to as software quality control. It is normally the
responsibility of software testers as part of the software development lifecycle.
Validation checks that the product design satisfies or fits the intended use (high-level
checking), i.e., the software meets the user requirements. This is done through dynamic testing and
other forms of review.
Verification and validation are not the same thing, although they are often
confused. Boehm succinctly expressed the difference:
1. Validation: Are we building the right product? (This is a dynamic process for checking and
testing the real product. Software validation always involves executing the code.)
2. Verification: Are we building the product right? (This is a static method for verifying
design and code. Software verification is human-based checking of documents and files.)
According to the Capability Maturity Model:
1. Software Validation: The process of evaluating software during or at the end of the
development process to determine whether it satisfies specified requirements.
2. Software Verification: The process of evaluating software to determine whether the products
of a given development phase satisfy the conditions imposed at the start of that phase.
In other words, software validation ensures that the product actually meets the user's needs, and
that the specifications were correct in the first place, while software verification ensures that the
product has been built according to the requirements and design specifications. Software
validation ensures that "you built the right thing". Software verification ensures that "you built it
right". Software validation confirms that the product, as provided, will fulfill its intended use.
From a testing perspective:
1. Fault – wrong or missing function in the code.
2. Failure – the manifestation of a fault during execution.
3. Malfunction – the system does not meet its functionality as specified.
TECHNIQUES
1. Print debugging (or tracing) is the act of watching (live or recorded) trace statements, or
print statements, that indicate the flow of execution of a process. This is sometimes
called printf debugging, due to the use of the printf function in C. This kind of debugging was
turned on by the command TRON in the original versions of the novice-
oriented BASIC programming language. TRON stood for "Trace On"; it caused the line
numbers of each BASIC command line to be printed as the program ran.
2. Remote debugging is the process of debugging a program running on a system different from
the debugger. To start remote debugging, a debugger connects to a remote system over a
network. The debugger can then control the execution of the program on the remote system
and retrieve information about its state.
3. Post-mortem debugging is debugging of the program after it has already crashed.
Related techniques often include various tracing techniques and/or analysis
of a memory dump (or core dump) of the crashed process. The dump of the process could be
obtained automatically by the system (for example, when the process has terminated due to an
unhandled exception), by a programmer-inserted instruction, or manually by the interactive
user.
4. "Wolf fence" algorithm: Edward Gauss described this simple but very useful and now
famous algorithm in a 1982 article for communications of the ACM as follows: "There's one
wolf in Alaska; how do you find it? First build a fence down the middle of the state, wait for the
wolf to howl, determine which side of the fence it is on. Repeat process on that side only, until
you get to the point where you can see the wolf. This is implemented e.g. in the Git version
control system as the command git bisect, which uses the above algorithm to determine
which commit introduced a particular bug.
5. Delta Debugging – a technique of automating test case simplification.
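A minimal sketch of the wolf-fence idea applied to a version history, in the spirit of git bisect (the version list and the predicate are assumed): repeatedly fence off half of the remaining range until the first bad version is found.

def first_bad(versions, is_bad):
    lo, hi = 0, len(versions) - 1    # invariant: versions[hi] is bad
    while lo < hi:
        mid = (lo + hi) // 2         # build a fence down the middle
        if is_bad(versions[mid]):
            hi = mid                 # the wolf is on this side
        else:
            lo = mid + 1             # the wolf is on the other side
    return versions[lo]

versions = list(range(1, 101))       # assume the bug appeared at version 42
assert first_bad(versions, lambda v: v >= 42) == 42

With 100 versions, only about seven probes are needed instead of a linear search through all of them.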
Software Implementation
What are the different methods of programming in software implementation?
The software implementation is associated with the following programming models.
Structured Programming
As code multiplies, the software grows in size, and it becomes a difficult task to follow the
program flow. It becomes very hard for the program to be shared or modified, as the files,
programs, procedures and the manner in which the program is constructed are not easily
remembered. This drawback of unstructured coding is overcome by structured programming, in
which developers use structures such as subroutines and loops. These subroutines and loops
help improve efficiency, decrease coding time and keep the code organized.
Structured programming describes how a program should be coded. There are three
basic concepts upon which structured programming revolves. They are -
Top-down analysis – The most important part of any program is problem solving. A problem
can be easily solved only when it is understood. The problem is divided into
several parts; each sub-problem is solved individually, and hence problem solving is
simplified.
Modular Programming – Programming can also be done by organizing the code into
groups of instructions known as modules or sub-programs. Modular programming follows
the top-down analysis. Because jump instructions make program flow difficult to trace,
structured programming always prefers a modular format over jumps.
Structured Coding – Under this concept, the modules are broken down further, thus
simplifying execution. The flow of the program is controlled through control structures chosen
by structured programming. Structured coding enables the coding instructions to be
organized.
Functional Programming
It is programming that uses mathematical functions: whenever a mathematical function receives
a particular argument, the result it produces is the same. In some procedural languages,
procedures take control of the flow of the program, and there is a possibility of the state of the
program changing as control flows from one procedure to another.
When procedural code receives a particular argument, the result produced may differ, because
the state of the program keeps changing. This calls for more consideration of aspects of
programming such as the sequence of the program and the timing of the code.
Functional programming uses mathematical functions, which produce results without depending
on the state of the program. This makes it possible to predict the behaviour of the program.
There are different concepts upon which functional programming revolves. They are -
First-class and higher-order functions – Higher-order functions can accept some other
function as an argument, or produce some other function as their result.
Pure functions – These functions are not destructive, as they do not affect memory, input
or output. They can be easily deleted if not required, without hampering the rest of the
program.
Recursion – A function repeats its program code by calling itself; the repetition stops
when some pre-defined condition matches. This repetition of code results in loops. Here,
both the input and the output are based on functional programming.
Strict evaluation – The argument of a function is evaluated either by strict evaluation or
non-strict evaluation. With strict evaluation, the expression is evaluated before the function
is called; with non-strict evaluation, the expression is not evaluated unless required.
λ-calculus – Functional programming builds on the λ-calculus: expressions are evaluated
only when they occur.
Examples of functional programming languages are Common Lisp, Scala, Haskell, Erlang and F#;
the sketch below illustrates some of these concepts.
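A short sketch of these concepts in Python (illustrative only; Python permits but does not enforce a functional style):

from functools import reduce

def square(x):
    # Pure function: result depends only on the argument; no state touched.
    return x * x

# Higher-order functions: map and reduce accept functions as arguments.
squares = list(map(square, [1, 2, 3, 4]))          # [1, 4, 9, 16]
total = reduce(lambda acc, x: acc + x, squares)    # fold: 30

def factorial(n):
    # Recursion: the function calls itself until the base condition matches.
    return 1 if n <= 1 else n * factorial(n - 1)

assert total == 30 and factorial(5) == 120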
Programming style
Code is written in accordance with some predefined coding rules; programming style refers
to the set of such rules. The program code may be written by one developer and then worked
on by another developer, which can cause great confusion. This confusion is avoided by setting
and following standard programming styles when writing program code.
A good programming style uses relevant function and variable names for the task at hand,
well-placed indentation, comments placed at the reader's convenience, and clear presentation
of the code. Thus the program code is easily understood, errors are easier to resolve, and
documentation and updating of the program code are simplified.
Coding Guidelines
Different organizations have different coding languages and different coding styles. Each
organization's coding guidelines have to consider some coding elements in general, such as -
Naming conventions – These define the way in which functions, variables, constants and
global variables need to be named.
Indenting – Indenting is the space left blank at the beginning of a line; 2-8 whitespace
characters or a single tab is used for indenting.
Whitespace - It is generally omitted at the end of a line.
Operators – The different operators – assignment, mathematical and logical – are defined. For
example, a space should appear before and after the assignment operator '=', as in "x = 2".
Control Structures – Some of the clauses such as if-then-else, case-switch, while-until are
to be written by following some set of rules which are defined by Control Structures.
Line length and wrapping – The length of a line, in terms of number of characters, is
defined. Long statements should be wrapped as per the specified rules of
wrapping.
Functions – The declaration and the calling of the function is defined in two ways – with
parameters and without parameters.
Variables – The definitions and the declarations of different variables are mentioned.
Comments – Comments describe the functioning of the code. Predefined comment formats
help in creating documentation and the associated descriptions. The snippet after this list
illustrates several of these guidelines at once.
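A minimal sketch (names and limits are assumed) showing naming conventions, indenting, operator spacing and comments together:

MAX_RETRIES = 3                     # naming: constants in upper case

def try_fetch(report_id):
    """Hypothetical helper standing in for a real data source."""
    return {"id": report_id}        # always succeeds in this sketch

def fetch_report(report_id):
    """Fetch a report by id, retrying up to MAX_RETRIES times."""
    for _attempt in range(MAX_RETRIES):     # indenting: four spaces
        result = try_fetch(report_id)       # operators spaced: x = y
        if result is not None:
            return result
    return None

assert fetch_report(7) == {"id": 7}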
Software Maintenance
Software maintenance is a widely accepted part of the SDLC nowadays. It stands for all the
modifications and updates done after the delivery of a software product. There are a number of
reasons why modifications are required; some of them are briefly mentioned below:
Market Conditions - Policies that change over time, such as taxation, and newly
introduced constraints, such as how to maintain bookkeeping, may trigger the need for modification.
Client Requirements - Over time, the customer may ask for new features or functions in the
software.
Host Modifications - If any of the hardware and/or platform (such as the operating system) of the
target host changes, software changes are needed to maintain adaptability.
Organization Changes - If there is any business-level change at the client end, such as a
reduction in organization strength, acquiring another company, or the organization venturing into
new business, the need to modify the original software may arise.
Types of maintenance
In a software lifetime, the type of maintenance may vary based on its nature. It may be just a routine
maintenance task, such as a bug discovered by some user, or it may be a large event in itself, based
on maintenance size or nature. Following are some types of maintenance based on their
characteristics:
Corrective Maintenance - This includes modifications and updates done in order to correct
or fix problems that are either discovered by users or concluded from user error reports.
Adaptive Maintenance - This includes modifications and updates applied to keep the
software product up to date and tuned to the ever-changing world of technology and business
environment.
Perfective Maintenance - This includes modifications and updates done in order to keep the
software usable over a long period of time. It includes new features and new user requirements
for refining the software and improving its reliability and performance.
Preventive Maintenance - This includes modifications and updates to prevent future
problems with the software. It aims to attend to problems which are not significant at this moment
but may cause serious issues in the future.
Cost of Maintenance
Reports suggest that the cost of maintenance is high. A study on estimating software
maintenance found that the cost of maintenance can be as high as 67% of the cost of the entire
software process cycle.
On average, the cost of software maintenance is more than 50% of all SDLC phases. There are
various factors which drive maintenance cost up, such as:
Real-world factors affecting Maintenance Cost
The standard age of any software is considered to be up to 10 to 15 years.
Older software, which was meant to work on slow machines with less memory and storage
capacity, cannot remain competitive against newly built, enhanced software on modern
hardware.
As technology advances, it becomes costly to maintain old software.
Most maintenance engineers are newcomers and use trial-and-error methods to rectify problems.
Often, the changes made can easily hurt the original structure of the software, making it hard to
make any subsequent changes.
Changes are often left undocumented, which may cause more conflicts in the future.
Software-end factors affecting Maintenance Cost
Structure of Software Program
Programming Language
Dependence on external environment
Staff reliability and availability
Maintenance Activities
IEEE provides a framework for sequential maintenance process activities. It can be used in an
iterative manner and can be extended so that customized items and processes can be included.
Software Re-engineering
Software re-engineering is the examination and alteration of a system to reconstitute it in a
new form. The application of re-engineering principles to the software development process is
called software re-engineering. It positively affects software cost, quality, service to the customer
and speed of delivery. In software re-engineering, we improve the software to make it more
efficient and effective.
For example, Unix was initially developed in assembly language. When the C language came into
existence, Unix was re-engineered in C, because working in assembly language was difficult.
Apart from this, programmers sometimes notice that a few parts of the software need more
maintenance than others, and that these parts need re-engineering.
Objectives of Re-engineering:
To describe a cost-effective option for system evolution.
To describe the activities involved in the software maintenance process.
To distinguish between software and data re-engineering and to explain the problems of data re-
engineering.
Re-Engineering Process
Decide what to re-engineer. Is it the whole software or a part of it?
Perform Reverse Engineering, in order to obtain specifications of existing software.
Restructure Program if required. For example, changing function-oriented programs into
object-oriented programs.
Re-structure data as required.
Apply Forward engineering concepts in order to get re-engineered software.
Software Re-Engineering Activities:
1. Inventory Analysis:
Every software organisation should have an inventory of all its applications.
The inventory can be nothing more than a spreadsheet model containing information that provides a
detailed description of every active application.
By sorting this information according to business criticality, longevity, current maintainability and
other locally important criteria, candidates for re-engineering appear.
Resources can then be allocated to candidate applications for re-engineering work.
2. Document restructuring:
Documentation of a system explains either how it operates or how to use it.
Documentation must be updated, though it may not be necessary to fully document an application.
If the system is business-critical, it must be fully re-documented.
3. Reverse Engineering:
Reverse engineering is a process of design recovery. Reverse engineering tools extract data,
architectural and procedural design information from an existing program.
4. Code Restructuring:
To accomplish code restructuring, the source code is analysed using a restructuring tool.
Violations of structured programming constructs are noted, and the code is then restructured.
The resulting restructured code is reviewed and tested to ensure that no anomalies have been
introduced.
5. Data Restructuring:
Data restructuring begins with a reverse engineering activity.
The current data architecture is dissected, and the necessary data models are defined.
Data objects and attributes are identified, and the existing data structures are reviewed for quality.
6. Forward Engineering:
Forward engineering, also called renovation or reclamation, not only recovers design information
from existing software but uses this information to alter or reconstitute the existing system in an
effort to improve its overall quality.
Program Restructuring
It is a process to re-structure and re-construct the existing software. It is all about re-arranging
the source code, either in the same programming language or from one programming language to a
different one. Restructuring can involve source-code restructuring, data restructuring, or both.
Restructuring does not impact the functionality of the software but enhances reliability and
maintainability. Program components which cause errors very frequently can be changed or
updated with restructuring. The dependency of the software on an obsolete hardware platform can
be removed via restructuring.
Forward Engineering
Forward engineering is a process of obtaining desired software from the specifications in
hand, which were brought down by means of reverse engineering. It assumes that there was some
software engineering already done in the past. Forward engineering is the same as the software
engineering process, with only one difference – it is always carried out after reverse engineering.
Component reusability
A component is a part of software program code, which executes an independent task in the
system. It can be a small module or sub-system itself.
Example
The login procedures used on the web can be considered components; a printing system in
software can be seen as a component of the software. Components have high cohesion of
functionality and a low rate of coupling, i.e., they work independently and can perform tasks without
depending on other modules.
In OOP, objects are designed to be very specific to their concern and have fewer chances
of being used in some other software. In modular programming, modules are coded to perform
specific tasks and can be used across a number of other software programs. There is a whole new
vertical, based on the re-use of software components, known as Component Based
Software Engineering (CBSE).
Forward Engineering vs. Reverse Engineering:
Basic: Forward engineering is the development of an application from provided requirements; in
reverse engineering, the requirements are deduced from the given application.
Certainty: Forward engineering always produces an application implementing the requirements;
reverse engineering can yield several ideas about the requirements from one implementation.
Nature: Forward engineering is prescriptive; reverse engineering is adaptive.
Needed skills: Forward engineering requires high proficiency; reverse engineering requires
low-level expertise.
Time required: Forward engineering takes more time; reverse engineering takes less.
Accuracy: In forward engineering the model must be precise and complete; in reverse
engineering an inexact model can still provide partial information.
Possible Questions
Part – A
1. Define: Software Testing.
2. What is a test case?
3. Outline the need for system testing.
4. List the levels of testing.
5. What are the testing principles the software engineer must apply while performing the software
testing?
6. Define: Reverse Engineering.
7. Define: Refactoring.
8. Distinguish between verification and validation.
9. Write down generic characteristics of software testing.
10. What is the difference between alpha testing and beta testing?
11. Write the best practices for coding.
12. What is the need for regression testing?
13. In unit testing of a module, it is found for a set of test data at maximum of 90% of the code
alone were tested with the probability of success 0.9. What is the reliability of the module?
14. Why does software fail after it has passed acceptance testing?
15. What methods are used for breaking very long expression and statements?
16. What is boundary condition testing?
17. What are the various testing activities?
18. How can refactoring be made more effective?
19. Define Debugging. What are the common approaches in debugging?
20. Write about drivers and stubs.
21. What is smoke testing?
22. List out the activities of BPR model.
23. What is Forward Engineering?
24. Why debugging is difficult?
25. List out the types of system testing.
Part – B
1. Elaborate path testing and regression testing with an example.
2. Explain how Business Process Reengineering (BPR) helps to achieve a defined business
outcome.
3. Outline how the reverse engineering process helps to improve the legacy software. (5)
4. Compare White box testing and black box testing. (4)
5. List the phases in software reengineering process model and explain each phase.
6. Explain the various types of black box testing methods.
7. Explain the various types of White box testing methods.
8. Explain about various testing strategies.
9. Why does software testing need extensive planning? Explain. (Testing Principles)
10. Explain the validation testing in detail.
11. What is black box & white-box testing? Explain how basis path testing helps to derive test
cases to test every statement of a program.
12. Define: Regression testing. Distinguish: top-down and bottom-up integration. How is testing
different from debugging? Justify.
13. Write a note on equivalence partitioning & boundary value analysis of black box testing.
14. Explain the best coding practices and refactoring of software implementation in detail.
Part – C
1. Write the procedure for the following: Given three sides of a triangle, return the type of
triangle i.e., equilateral, isosceles and scalene triangle. Draw the control flow graph and
calculate cyclomatic complexity to calculate the minimum number of paths. Enumerate the
paths to be tested. (8)
2. Given a set of 'n' numbers, the function FindPrime(a[], n) prints a number if it is a prime
number. Draw a control flow graph, calculate the cyclomatic complexity and enumerate all
paths. State how many test cases are needed to adequately cover the code in terms of
branches, decisions and statements. Develop the necessary test cases using sample values
for 'a' and 'n'.
3. Consider the pseudocode for simple subtraction given below:
(1) Program 'Simple Subtraction'
(2) Input (x, y)
(3) Output (x)
(4) Output (y)
(5) If x > y then DO
(6) x - y = z
(7) Else y - x = z
(8) EndIf
(9) Output (z)
(10) Output "End Program"
Perform basis path testing and generate test cases.