
SE-Unit-4


UNIT – IV: TESTING AND MAINTENANCE

Software testing fundamentals - Internal and external views of testing - White box testing - Basis path testing - Control structure testing - Black box testing - Regression testing - Unit testing - Integration testing - Validation testing - System testing and debugging - Software implementation techniques: Coding practices - Refactoring - Maintenance and reengineering - BPR model - Reengineering process model - Reverse and forward engineering.

Introduction
Software testing is the evaluation of software against requirements gathered from users and
system specifications. Testing is conducted at the phase level in the software development life cycle or
at the module level in program code. Software testing comprises validation and verification.
Verification: “Are we building the product right?”
Validation: “Are we building the right product?”
Software Validation
Validation is the process of examining whether or not the software satisfies the user
requirements. It is carried out at the end of the SDLC. If the software matches the requirements for
which it was made, it is validated.
 Validation ensures the product under development meets the user requirements.
 Validation answers the question - "Are we developing the product which attempts all that the
user needs from this software?"
 Validation emphasizes user requirements.
Software Verification
Verification is the process of confirming that the software meets the business
requirements and is developed adhering to the proper specifications and methodologies.
 Verification ensures the product being developed is according to design specifications.
 Verification answers the question - "Are we developing this product by firmly following all
design specifications?"
 Verification concentrates on the design and system specifications.
Targets of the test are -
 Errors - These are actual coding mistakes made by developers. In addition, a difference
between the actual output of the software and the desired output is considered an error.
 Fault - A fault, also known as a bug, is the result of an error; when an error exists, a fault
occurs. A fault can cause the system to fail.
 Failure - A failure is the inability of the system to perform a desired task. A failure
occurs when a fault in the system is exercised.

What Has To Be Really Tested?


Characteristics of Testable Software:
 Operable
o The better it works (i.e., better quality), the easier it is to test
 Observable
o Incorrect output is easily identified; internal errors are automatically detected
 Controllable
o The states and variables of the software can be controlled directly by the tester
 Decomposable
o The software is built from independent modules that can be tested independently
 Simple
o The program should exhibit functional, structural, and code simplicity
 Stable
o Changes to the software during testing are infrequent and do not invalidate existing tests
 Understandable
o The architectural design is well understood; documentation is available and organized

Test Characteristics:
 A good test has a high probability of finding an error
o The tester must understand the software and how it might fail
 A good test is not redundant
o Testing time is limited; one test should not serve the same purpose as another test
 A good test should be "best of breed"
o Tests that have the highest likelihood of uncovering a whole class of errors should be used
 A good test should be neither too simple nor too complex
o Each test should be executed separately; combining a series of tests could cause side
effects and mask certain errors

Seven Principles of software testing


Software testing is a process of executing a program with the aim of finding errors. For
software to perform well, it should be as error-free as possible. Testing done well will uncover many of
the errors in the software, but it cannot remove all of them.
There are seven principles in software testing:
1. Testing shows presence of defects
2. Exhaustive testing is not possible
3. Early testing
4. Defect clustering
5. Pesticide paradox
6. Testing is context dependent
7. Absence of errors fallacy
 Testing shows presence of defects: The goal of software testing is to make the software fail.
Software testing can show that defects are present, but it cannot prove their absence; it can
never prove that the software is defect-free. Even repeated testing can never ensure that
software is 100% bug-free. Testing can reduce the number of defects but cannot remove all of them.
 Exhaustive testing is not possible: Testing the functionality of software with all
possible inputs (valid or invalid) and pre-conditions is known as exhaustive testing. Exhaustive
testing is impossible: the software can never be tested with every possible test case. Only some
test cases can be run, from which we assume the software is correct and will produce the correct
output for the remaining cases. Testing every possible case would take prohibitive cost and effort,
which is impractical.
 Early Testing: To find defects in the software, test activity shall be started early. A defect
detected in the early phases of the SDLC is far less expensive to fix. For better software quality,
testing should start at the initial phase, i.e., at the requirement analysis phase.
 Defect clustering: In a project, a small number of modules can contain most of the defects.
Applying the Pareto Principle to software testing: roughly 80% of software defects come from 20% of the modules.
 Pesticide paradox: Repeating the same test cases again and again will not find new bugs. So it is
necessary to review the test cases and add or update test cases to find new bugs.
 Testing is context dependent: The testing approach depends on the context of the software being
developed. Different types of software need different types of testing. For example, testing an
e-commerce site is different from testing an Android application.
 Absence of errors fallacy: If a built software is 99% bug-free but does not follow the user
requirements, then it is unusable. It is not only necessary that the software be as bug-free as
possible; it is also mandatory that it fulfill all the customer requirements.

Testing Guidelines
There are certain testing guidelines that should be followed while testing the software:
 Development team should avoid testing the software: Testing should always be performed by
the testing team. The development team should never test the software themselves, because
after spending several hours building it they may unconsciously become too proprietorial about
it, which can prevent them from seeing flaws in the system. Testers should take a destructive
approach towards the product. Developers can perform unit testing and integration testing, but
overall software testing should be done by the testing team.
 Software can never be 100% bug-free: Testing can never prove the software to be 100% bug-free. In
other words, there is no way to prove that the software is free of errors, no matter how many
test cases are run.
 Start as early as possible: Testing should always start in parallel with the requirement
analysis process. This is crucial in order to avoid the problem of defect migration. It is important to
determine the test objects and scope as early as possible.
 Prioritize sections: If there are certain critical sections, then it should be ensured that these
sections are tested with the highest priority and as early as possible.
 The time available is limited: Testing time for software is limited. It must be kept in mind that the
time available for testing is not unlimited and that an effective test plan is very crucial before starting
the process of testing. There should be some criteria to decide when to terminate the process of
testing. This criterion needs to be decided beforehand. For instance, when the system is left with an
acceptable level of risk or according to timelines or budget constraints.
 Testing must be done with unexpected and negative inputs: Testing should be done with
correct data and test cases as well as with flawed test cases to make sure the system is leak proof.
Test cases must be well documented to ensure future reuse for testing at later stages. This means
that the test cases must be enlisted with proper definitions and descriptions of inputs passed and
respective outputs expected. Testing should be done for functional as well as the non-functional
requirements of the software product.
 Inspecting test results properly: Quantitative assessment of tests and their results must be done.
The documentation should be referred to properly while validating the results of the test cases to
ensure proper testing. Testing must be supported by automated tools and techniques as much as
possible. Besides ensuring that the system does what all it is supposed to do, testers also need to
ensure that the system does not perform operations which it isn’t supposed to do.
 Validating assumptions: Test cases should never be developed on the basis of unvalidated
assumptions or hypotheses; they must always be validated properly. For instance, assuming that the
software product is free from bugs while designing test cases may result in extremely weak test cases.

Testability can be determined by:


 Controllability: Testing process can be optimized only if we can control it.
 Observability: What you see is what can be tested. Factors affecting the final outcome are
visible.
 Availability: In order to test a system we have to get at it.
 Simplicity: When the design is self-consistent, features are not very complex and coding
practices are simple then there is less to test. When the software is not simple anymore it
becomes difficult to test.
 Stability: If too many changes are made to the design every now and then, there will be a lot of
disruption in software testing.
 Information: Efficiency of testing greatly depends on how much information is available for the
software.
Every testing cycle has some common activities, which are:
 Requirements testing: mainly how to ensure that each requirement is testable.
 Test planning: It is about how to plan the testing activities, estimate the effort, the required team,
etc.
 Writing Test Cases: In this activity, the testers start to write the testing scenarios and scripts;
these scenarios should cover unit, integration, system testing, etc.
 Test execution: This is mainly about preparing the testing environment and starting test
execution.
 Testing feedback: After the execution, the testing results and defect reports should be reported
to the development team so that fixing can start.
 Defect Retesting: When the developer reports that a defect has been fixed, it should be tested
again by the testing team.
 User Acceptance Test: This should be the validation activity with the end users who will use the
system, to ensure that the features work correctly from the business perspective. This can be
iterative as well, after the customer reports further defects.
 Testing Closure: It is important to know when we should stop testing, to explore the testing
findings, and to learn from the cycle for future testing cycles.
Test Case Design
 Objective: to uncover errors
 Criteria: in a complete manner
 Constraint: with a minimum of effort and time

Manual Vs Automated Testing


Testing can either be done manually or using an automated testing tool:
 Manual - This testing is performed without taking help of automated testing tools. The software
tester prepares test cases for different sections and levels of the code, executes the tests and
reports the result to the manager.
Manual testing is time- and resource-consuming. The tester needs to confirm whether or not the right
test cases are used. A major portion of testing involves manual testing.
 Automated - This testing is a testing procedure done with the aid of automated testing tools. The
limitations of manual testing can be overcome using automated test tools, as the sketch below illustrates.
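To illustrate, here is a minimal sketch of an automated test written with Python's built-in unittest framework; the add function is a hypothetical unit under test, shown only for illustration.

import unittest

# Hypothetical unit under test: a simple addition function.
def add(x, y):
    return x + y

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()   # discovers and runs the tests, reporting results automatically

Once written, such tests can be re-run unattended after every change, which is exactly the limitation of manual testing that automation removes.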

Testing Levels:
Testing itself may be defined at various levels of SDLC. The testing process runs parallel to
software development. Before jumping on the next stage, a stage is tested, validated and verified.
Testing separately is done just to make sure that there are no hidden bugs or issues left in the
software. Software is tested on various levels -

Unit Testing
While coding, the programmer performs some tests on that unit of the program to check whether it is
error-free. Testing is performed under the white-box testing approach. Unit testing helps developers verify
that individual units of the program work as per requirements and are error-free.
Integration Testing
Even if the units of software are working fine individually, there is a need to find out whether the
units would also work without errors when integrated together, for example in argument passing and data
updating.
System Testing
The software is compiled as product and then it is tested as a whole. This can be accomplished
using one or more of the following tests:
 Functionality testing - Tests all functionalities of the software against the requirement.
 Performance testing - This test proves how efficient the software is. It tests the effectiveness
and average time taken by the software to do desired task. Performance testing is done by
means of load testing and stress testing where the software is put under high user and data load
under various environment conditions.
 Security & Portability - These tests are done when the software is meant to work on various
platforms and accessed by number of persons.
Acceptance Testing
When the software is ready to be handed over to the customer, it has to go through the last phase of testing,
where it is tested for user interaction and response. This is important because even if the software
matches all user requirements, if the user does not like the way it appears or works, it may be rejected.
 Alpha testing - The team of developers themselves performs alpha testing by using the system as
if it were being used in a work environment. They try to find out how a user would react to actions
in the software and how the system should respond to inputs.
 Beta testing - After the software is tested internally, it is handed over to users to use under
their production environment, only for testing purposes. This is not yet the delivered product.
Developers expect that users at this stage will report minute problems that were previously
overlooked.
Regression Testing
Whenever a software product is updated with new code, feature or functionality, it is tested
thoroughly to detect if there is any negative impact of the added code. This is known as regression
testing.

Testing Documentation
Testing documents are prepared at different stages -
Before Testing
Testing starts with test cases generation. Following documents are needed for reference –
 SRS document - Functional Requirements document
 Test Policy document - This describes how far testing should take place before releasing the
product.
 Test Strategy document - This mentions detailed aspects of the test team, the responsibility matrix
and the rights/responsibilities of the test manager and test engineers.
 Traceability Matrix document - This is SDLC document, which is related to requirement
gathering process. As new requirements come, they are added to this matrix. These matrices
help testers know the source of requirement. They can be traced forward and backward.
While Being Tested
The following documents may be required while testing is started and is being done:
 Test Case document - This document contains list of tests required to be conducted. It includes
Unit test plan, Integration test plan, System test plan and Acceptance test plan.
 Test description - This document is a detailed description of all test cases and procedures to
execute them.
 Test case report - This document contains test case report as a result of the test.
 Test logs - This document contains test logs for every test case report.
After Testing
The following documents may be generated after testing :
 Test summary - This test summary is collective analysis of all test reports and logs. It
summarizes and concludes if the software is ready to be launched. The software is released
under version control system if it is ready to launch.

Testing vs. Quality Control, Quality Assurance and Audit


We need to understand that software testing is different from software quality assurance,
software quality control and software auditing.
 Software quality assurance - This is a means of monitoring the software development process, by
which it is assured that all measures are taken as per the standards of the organization. This
monitoring is done to make sure that proper software development methods are followed.
 Software quality control - This is a system to maintain the quality of the software product. It may
cover functional and non-functional aspects of the software product, which enhance the goodwill of
the organization. This system makes sure that the customer is receiving a quality product for their
requirements and that the product is certified as 'fit for use'.
 Software audit - This is a review of the procedures used by the organization to develop the software.
A team of auditors, independent of the development team, examines the software process,
procedures, requirements and other aspects of the SDLC. The purpose of a software audit is to check
that the software and its development process both conform to standards, rules and regulations.

Internal and External Views of Testing


Any engineered product (and most other things) can be tested in one of two ways:
 Knowing the specified function that a product has been designed to perform, tests
can be conducted that demonstrate each function is fully operational while at the
same time searching for errors in each function;
 Knowing the internal workings of a product, tests can be conducted to ensure that "all
gears mesh," that is, internal operations are performed according to specifications
and all internal components have been adequately exercised.
Inferences are said to possess internal validity if a causal relation between two variables is properly
demonstrated. A causal inference may be based on a relation when three criteria are satisfied:
1. the "cause" precedes the "effect" in time (temporal precedence),
2. the "cause" and the "effect" are related (covariation), and
3. there are no plausible alternative explanations for the observed covariation (nonspuriousness)

In scientific experimental settings, researchers often manipulate a variable (the independent variable)
to see what effect it has on a second variable (the dependent variable). For example, a researcher might,
for different experimental groups, manipulate the dosage of a particular drug between groups to see what
effect it has on health. In this example, the researcher wants to make a causal inference, namely, that
different doses of the drug may be held responsible for observed changes or differences. When the
researcher can confidently attribute the observed changes or differences in the dependent variable to the
independent variable, and can rule out other explanations (or rival hypotheses), then the causal
inference is said to be internally valid.
In many cases, however, the magnitude of effects found in the dependent variable may depend not only
on
 variations in the independent variable, but also on
 the power of the instruments and statistical procedures used to measure and detect the effects,
and
 the choice of statistical methods (see: Statistical conclusion validity).
Rather, a number of variables or circumstances uncontrolled for (or uncontrollable) may lead to
additional or alternative explanations (a) for the effects found and/or (b) for the magnitude of the effects
found. Internal validity, therefore, is more a matter of degree than of either-or, and that is exactly why
research designs other than true experiments may also yield results with a high degree of internal
validity.
In order to allow for inferences with a high degree of internal validity, precautions may be taken
during the design of the scientific study. As a rule of thumb, conclusions based on correlations or
associations may only allow for lesser degrees of internal validity than conclusions drawn on the basis of
direct manipulation of the independent variable. And, when viewed only from the perspective of Internal
Validity, highly controlled true experimental designs (i.e. with random selection, random assignment to
either the control or experimental groups, reliable instruments, reliable manipulation processes, and
safeguards against confounding factors) may be the "gold standard" of scientific research. By contrast,
however, the very strategies employed to control these factors may also limit the generalizability
or External Validity of the findings.

External validity is the validity of generalized (causal) inferences in scientific research, usually based
on experiments (experimental validity). In other words, it is the extent to which the results of a study can
be generalized to other situations and to other people. For example, inferences based on comparative
psychotherapy studies often employ specific samples (e.g. volunteers, highly depressed, no comorbidity).
If psychotherapy is found effective for these sample patients, will it also be effective for non-volunteers,
or the mildly depressed, or patients with concurrent other disorders?
 Situation: All situational specifics (e.g. treatment conditions, time, location, lighting, noise,
treatment administration, investigator, timing, scope and extent of measurement, etc. etc.) of a
study potentially limit generalizability.
 Pre-test effects: If cause-effect relationships can only be found when pre-tests are carried out,
then this also limits the generality of the findings.
 Post-test effects: If cause-effect relationships can only be found when post-tests are carried out,
then this also limits the generality of the findings.
 Reactivity (placebo, novelty, and Hawthorne effects): If cause-effect relationships are found they
might not be generalizable to other settings or situations if the effects found only occurred as an
effect of studying the situation.
 Rosenthal effects: Inferences about cause-consequence relationships may not be generalizable
to other investigators or researchers.

Testing Approaches
Before going through types, there are broader testing approaches which are:
 Blackbox (Functional): It is generally used when the tester has limited knowledge of the system
under test or when access to source code is not available, it is mainly done through defined
interfaces.
 Knowing the specified function that a product has been designed to perform,
test to see if that function is fully operational and error free
 Includes tests that are conducted at the software interface
 Not concerned with internal logical structure of the software

 Whitebox (Structural): Known as clear box testing, glass box testing, transparent box testing,
and structural testing. It is a method of testing software that tests internal structures or workings of
an application, as opposed to its functionality, it is mainly done on the source code itself.
 Knowing the internal workings of a product, test that all internal operations are
performed according to specifications and all internal components have been
exercised
 Involves tests that concentrate on close examination of procedural detail
 Logical paths through the software are tested
 Test cases exercise specific sets of conditions and loops

 Graybox: This mainly combines the two other approaches: the tester knows about the code
but develops the test cases as in the Blackbox approach.

White Box Testing


White box testing is a testing method based on close examination of procedural
detail. Hence it is also called glass box testing. In white box testing, test cases are
derived for
1. Examining all the independent paths within a module.
2. Exercising all the logical paths with their true and false sides.
3. Executing all the loops within their boundaries and within operational bounds.
4. Exercising internal data structures to ensure their validity.

Why to perform white box testing?


There are three main reasons behind performing the white box testing.
1. Programmers may have some incorrect assumptions while designing or implementing
some functions. Due to this there are chances of having logical errors in the program. To
detect and correct such logical errors procedural details need to be examined.
2. Certain assumptions on the flow of control and data may lead the programmer to make design
errors. To uncover errors on logical paths, white box testing is a must.
3. There may be certain typographical errors that remain undetected even after syntax and
type checking mechanisms. Such errors can be uncovered during white box testing.
Working process of white box testing:
 Input: Requirements, Functional specifications, design documents, source code.
 Processing: Performing risk analysis for guiding through the entire process.
 Proper test planning: Designing test cases so as to cover entire code. Execute rinse-repeat until
error-free software is reached. Also, the results are communicated.
 Output: Preparing final report of the entire testing process.

Testing techniques:
1. Statement coverage: In this technique, the aim is to traverse all statements at least once, so
each line of code is tested. In the case of a flowchart, every node must be traversed at least once.
Since all lines of code are covered, this helps in pointing out faulty code.
2. Branch Coverage: In this technique, test cases are designed so that each branch from every
decision point is traversed at least once. In a flowchart, all edges must be traversed at least
once (a minimal sketch follows).
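As a small illustration (the classify function below is hypothetical), two test cases are enough for branch coverage of a single IF-ELSE, one per outgoing edge of the decision point:

def classify(n):
    if n >= 0:                           # decision point: True edge and False edge
        return "non-negative"
    else:
        return "negative"

assert classify(5) == "non-negative"     # TC1 traverses the True branch
assert classify(-1) == "negative"        # TC2 traverses the False branch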
3. Condition Coverage: In this technique, all individual conditions must be covered as shown in the
following example:
1. READ X, Y
2. IF(X == 0 || Y == 0)
3. PRINT ‘0’
In this example, there are 2 conditions: X == 0 and Y == 0. Now, design tests so that each
condition takes both TRUE and FALSE values. One possible set would be:
 #TC1 – X = 0, Y = 55
 #TC2 – X = 5, Y = 0
4. Multiple Condition Coverage: In this technique, all the possible combinations of the possible
outcomes of conditions are tested at least once. Let’s consider the following example:
1. READ X, Y
2. IF(X == 0 || Y == 0)
3. PRINT ‘0’
 #TC1: X = 0, Y = 0
 #TC2: X = 0, Y = 5
 #TC3: X = 55, Y = 0
 #TC4: X = 55, Y = 5
Hence, four test cases are required for two individual conditions.
Similarly, if there are n conditions then 2^n test cases would be required (the sketch below enumerates the four combinations).
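A minimal sketch of multiple condition coverage, reusing the READ X, Y example above; the concrete non-zero values (55 and 5) are taken from the test cases listed:

import itertools

def check(x, y):
    return x == 0 or y == 0              # the two individual conditions

# Enumerate all 2^2 = 4 combinations of the two condition outcomes.
for x_zero, y_zero in itertools.product([True, False], repeat=2):
    x = 0 if x_zero else 55
    y = 0 if y_zero else 5
    print("x =", x, ", y =", y, "->", check(x, y))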
5. Basis Path Testing
 White-box testing technique proposed by Tom McCabe
 Enables the test case designer to derive a logical complexity measure of a
procedural design
 Uses this measure as a guide for defining a basis set of execution paths
 Test cases derived to exercise the basis set are guaranteed to execute every
statement in the program at least one time during testing
In this technique, control flow graphs are made from code or flowchart and then
Cyclomatic complexity is calculated which defines the number of independent paths so that
the minimal number of test cases can be designed for each independent path.
Steps:
1. Make the corresponding control flow graph
2. Calculate the cyclomatic complexity
3. Find the independent paths
4. Design test cases corresponding to each independent path
Flow Graph Notation
• A circle in a graph represents a node, which stands for a sequence of one or more
procedural statements
• A node containing a simple conditional expression is referred to as a predicate node
– Each compound condition in a conditional expression containing one or more
Boolean operators (e.g., and, or) is represented by a separate predicate node
– A predicate node has two edges leading out from it (True and False)
• An edge, or a link, is an arrow representing flow of control in a specific direction
– An edge must start and terminate at a node
– An edge does not intersect or cross over another edge
• Areas bounded by a set of edges and nodes are called regions
• When counting regions, include the area outside the graph as a region, too

Cyclomatic Complexity
Cyclomatic complexity is a software metric that gives the quantitative measure of logical
complexity of the program. The Cyclomatic complexity defines the number of independent
paths in the basis set of the program that provides the upper bound for the number of tests
that must be conducted to ensure that all the statements have been executed at least once.
The cyclomatic complexity can be computed by one of the following ways.
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph G, is defined as V(G) = E - N + 2P, where
E is the number of flow graph edges, N is the number of flow graph nodes, and P is the
number of connected components. For a single connected flow graph, P = 1, giving V(G) = E - N + 2.
3. V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.
Example:

V(G) = 4 (Using any of the above formulae)


No of independent paths = 4
 #P1: 1 – 2 – 4 – 7 – 8
 #P2: 1 – 2 – 3 – 5 – 7 – 8
 #P3: 1 – 2 – 3 – 6 – 7 – 8
 #P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
Example:
IF A = 10 THEN
    IF B > C THEN
        A = B
    ELSE
        A = C
    ENDIF
ENDIF
Print A
Print B
Print C
The cyclomatic complexity is calculated from the corresponding control flow diagram (not reproduced
here), which has seven nodes (shapes) and eight edges (lines); hence the cyclomatic complexity is
8 - 7 + 2 = 3. The sketch below re-computes this from the edge list.
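The following sketch re-computes V(G) = E - N + 2P for this example; the node numbering is an assumption, since the diagram itself is not reproduced:

# Assumed numbering: 1: IF A = 10, 2: IF B > C, 3: A = B, 4: A = C,
# 5: inner ENDIF, 6: outer ENDIF, 7: the Print statements.
edges = [(1, 2), (1, 6), (2, 3), (2, 4), (3, 5), (4, 5), (5, 6), (6, 7)]
nodes = {n for edge in edges for n in edge}

P = 1                                    # a single connected flow graph
V = len(edges) - len(nodes) + 2 * P      # V(G) = E - N + 2P
print(V)                                 # 8 - 7 + 2 = 3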

6. Loop Testing: Loops are widely used and are fundamental to many algorithms; hence,
their testing is very important. Errors often occur at the beginnings and ends of loops.

1. Simple loops: For a simple loop of maximum size n, test cases are designed that:
 Skip the loop entirely
 Make only one pass through the loop
 Make 2 passes
 Make m passes, where m < n
 Make n-1, n and n+1 passes
(The sketch after this list enumerates these pass counts.)
2. Nested loops: For nested loops, all the loops are set to their minimum count and we start
from the innermost loop. Simple loop tests are conducted for the innermost loop and this is
worked outwards till all the loops have been tested.
3. Concatenated loops: Independent loops, one after another. Simple loop tests are applied
for each.
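As a sketch of the simple-loop guideline above (the helper below is illustrative, not a standard API), the pass counts worth exercising for a loop of maximum size n can be enumerated directly:

def simple_loop_test_passes(n, m=None):
    # Pass counts worth exercising for a simple loop of maximum size n.
    if m is None:
        m = n // 2                       # any m with 2 < m < n will do
    return [0,                           # skip the loop entirely
            1,                           # one pass
            2,                           # two passes
            m,                           # m passes, m < n
            n - 1, n, n + 1]             # boundary pass counts

print(simple_loop_test_passes(10))       # [0, 1, 2, 5, 9, 10, 11]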
Advantages:
1. White box testing is very thorough as the entire code and structures are tested.
2. It results in optimization of the code, removing errors and helping to remove extra lines of
code.
3. It can start at an earlier stage as it doesn’t require any interface as in case of black box
testing.
4. Easy to automate.
Disadvantages:
1. Main disadvantage is that it is very expensive.
2. Redesign of code and rewriting code needs test cases to be written again.
3. Testers are required to have in-depth knowledge of the code and programming language as
opposed to black box testing.
4. Missing functionalities cannot be detected as the code that exists is tested.
5. Very complex and at times not realistic.

Black-Box Testing
Black box testing is a type of software testing in which the internal structure of the software is not
known to the tester. The testing is done without internal knowledge of the product.
Black box testing can be done in following ways:
1. Syntax Driven Testing – This type of testing is applied to systems that can be syntactically
represented by some language, for example compilers, or languages that can be represented by a
context-free grammar. In this, the test cases are generated so that each grammar rule is used at
least once.
2. Equivalence partitioning – It is often seen that many types of inputs work similarly, so instead of
giving all of them separately we can group them together and test only one input of each group.
The idea is to partition the input domain of the system into a number of equivalence classes such
that each member of a class works in a similar way, i.e., if a test case for one member of a class
results in some error, the other members of the class would result in the same error. A minimal
sketch follows.
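A minimal sketch of equivalence partitioning, assuming a hypothetical function that accepts ages in the range 18 to 60; one representative input per class stands in for every member of that class:

def is_eligible(age):                    # hypothetical unit under test
    return 18 <= age <= 60

representatives = [
    (10, False),                         # class 1: age < 18
    (30, True),                          # class 2: 18 <= age <= 60
    (70, False),                         # class 3: age > 60
]
for age, expected in representatives:
    assert is_eligible(age) == expected
print("one test per class covers all three classes")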

Testing techniques:
Graph-Based Testing:
• Black-box method based on the nature of the relationships (links) among the program
objects (nodes); test cases are designed to traverse the entire graph
• Transaction flow testing: nodes represent steps in some transaction and links
represent logical connections between steps that need to be validated
• Finite state modeling: nodes represent user observable states of the software and links
represent state transitions
• Data flow modeling: nodes are data objects and links are transformations of one data
object to another data object
• Timing modeling: nodes are program objects and links are sequential
connections between these objects; link weights represent required execution times
Equivalence Partitioning:
• Black-box technique that divides the input domain into classes of data from which test
cases can be derived
• An ideal test case uncovers a class of errors that might require many arbitrary test
cases to be executed before a general error is observed

Equivalence Class Guidelines:


• If an input condition specifies a range, one valid and two invalid equivalence classes are defined
• If an input condition requires a specific value, one valid and two invalid equivalence classes are
defined
• If an input condition specifies a member of a set, one valid and one invalid equivalence class are
defined
• If an input condition is Boolean, one valid and one invalid equivalence class are defined
• Boundary Value Analysis
– Black-box technique that focuses on the boundaries of the input domain rather than its center
– Guidelines:
If an input condition specifies a range bounded by values a and b, test cases should
include a and b, and values just above and just below a and b
If an input condition specifies a number of values, test cases should exercise the
minimum and maximum numbers, as well as values just above and just below the
minimum and maximum values
For example:
Area code: input condition, Boolean – the area code may or may not be present;
input condition, range – value defined between 200 and 700.
Password: input condition, Boolean – a password may or may not be present;
input condition, value – seven character string.
Command: input condition, set – containing the commands noted before.

Boundary Value Analysis


A boundary value analysis is a testing technique in which the elements at the
edge of the domain are selected and tested.
Instead of focusing on input conditions only, the test cases from output domain
are also derived.
Boundary value analysis is a test case design technique that complements
equivalence partitioning technique.

Guidelines for boundary value analysis technique are


1. If the input condition specifies a range bounded by values x and y, then test cases
should be designed with values x and y, as well as with values just above and just
below x and y.
2. If the input condition specifies a number of values, then test cases should be designed
with the minimum and maximum values, as well as with values just above and just
below the maximum and minimum.
3. If the output condition specifies a range bounded by values x and y, then test cases
should be designed to produce values x and y, as well as values just above and just
below x and y.
4. If the output condition specifies a number of values, then test cases should be designed
to produce the minimum and maximum values, as well as values just above and just
below the maximum and minimum.
5. If the internal program data structures specify such boundaries then the test cases must
be designed such that the values at the boundaries of data structure can be tested.
6. Apply guidelines 1 and 2 to output conditions, test cases should be designed to produce
the minimum and maximum output reports
7. If internal program data structures have boundaries (e.g. size limitations), be certain to
test the boundaries
For example:
Integer D with input condition [-2, 10]; test values: -2, 10, 11, -1, 0.
If an input condition specifies a number of values, test cases should be developed to exercise
the minimum and maximum numbers; values just above and below this min and max should also
be tested.
Enumerated data E with input condition {2, 7, 100, 102}; test values: 2, 102, -1, 200, 7.
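A minimal sketch of guideline 1, generating boundary test values for a range bounded by a and b (the helper is illustrative only):

def boundary_values(a, b):
    # Values at, just below and just above each boundary of the range [a, b].
    return [a - 1, a, a + 1, b - 1, b, b + 1]

print(boundary_values(-2, 10))           # [-3, -2, -1, 9, 10, 11]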
Comparison Testing:
• Black-box testing for safety critical systems in which independently developed
implementations of redundant systems are tested for conformance to specifications
• Often equivalence class partitioning is used to develop a common set of test cases for each
implementation

Orthogonal Array Testing:


• Black-box technique that enables the design of a reasonably small set of test cases
that provide maximum test coverage
• Focus is on categories of faulty logic likely to be present in the software component (without
examining the code)
• Priorities for assessing tests using an orthogonal array
Detect and isolate all single mode faults
Detect all double mode faults
Multimode faults

Cause effect Graphing:


This technique establishes a relationship between logical inputs, called causes, and the corresponding
actions, called effects. The causes and effects are represented using Boolean graphs. The following
steps are followed:
steps are followed:
1. Identify inputs (causes) and outputs (effect).
2. Develop cause effect graph.
3. Transform the graph into decision table.
4. Convert decision table rules to test cases.
For example, consider a cause-effect graph (not reproduced here). It can be converted into a
decision table in which each column corresponds to a rule, and each rule becomes a test case for
testing. A table with four rules yields 4 test cases; a minimal sketch follows.
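A minimal sketch of step 4, assuming a hypothetical system in which the effect fires only when two causes C1 AND C2 both hold; each decision-table rule is executed as one test case:

rules = [                                # each dict is one column (rule) of the table
    {"C1": True,  "C2": True,  "effect": True},
    {"C1": True,  "C2": False, "effect": False},
    {"C1": False, "C2": True,  "effect": False},
    {"C1": False, "C2": False, "effect": False},
]

def system_under_test(c1, c2):           # hypothetical implementation
    return c1 and c2

for number, rule in enumerate(rules, start=1):
    actual = system_under_test(rule["C1"], rule["C2"])
    assert actual == rule["effect"], "rule %d failed" % number
print("all 4 decision-table rules pass as test cases")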
Difference between testing approaches:

Black-Box Testing vs. Grey-Box Testing vs. White-Box Testing:

Knowledge of internals - The internal workings of an application need not be known in black-box
testing; in grey-box testing the tester has limited knowledge of the internal workings; in white-box
testing the tester has full knowledge of the internal workings of the application.

Other names - Black-box testing is also known as closed-box testing, data-driven testing, or
functional testing; grey-box testing is also known as translucent testing, as the tester has limited
knowledge of the insides of the application; white-box testing is also known as clear-box testing,
structural testing, or code-based testing.

Who performs it - Black-box and grey-box testing are performed by end-users and also by testers
and developers; white-box testing is normally done by testers and developers.

Basis of testing - Black-box testing is based on external expectations, with the internal behavior
of the application unknown; grey-box testing is done on the basis of high-level database diagrams
and data flow diagrams; in white-box testing the internal workings are fully known and the tester
can design test data accordingly.

Thoroughness - Black-box testing is the least exhaustive and the least time-consuming; grey-box
testing is partly time-consuming and exhaustive; white-box testing is the most exhaustive and
time-consuming type of testing.

Algorithm testing - Black-box and grey-box testing are not suited for algorithm testing; white-box
testing is suited for algorithm testing.

Data domains - In black-box testing this can only be done by the trial-and-error method; in grey-box
testing data domains and internal boundaries can be tested, if known; in white-box testing data
domains and internal boundaries can be better tested.

Regression Testing
Regression testing is a type of software testing that seeks to uncover new software bugs,
or regressions, in existing functional and non-functional areas of a system after changes such as
enhancements, patches or configuration changes, have been made to them.
The intent of regression testing is to ensure that changes such as those mentioned above
have not introduced new faults. One of the main reasons for regression testing is to determine
whether a change in one part of the software affects other parts of the software
Common methods of regression testing include rerunning previously completed tests and
checking whether program behavior has changed and whether previously fixed faults have re-
emerged. Regression testing can be performed to test a system efficiently by systematically selecting
the appropriate minimum set of tests needed to adequately cover a particular change.
Contrast this with non-regression testing (usually a validation test for a new issue), which aims to
verify whether, after introducing or updating a given software application, the change has had the
intended effect.

Unit Testing
In computer programming, unit testing is a software testing method by which individual units
of source code, sets of one or more computer program modules together with associated control data,
usage procedures, and operating procedures are tested to determine if they are fit for use. Intuitively,
one can view a unit as the smallest testable part of an application. In procedural programming, a unit
could be an entire module, but it is more commonly an individual function or procedure. In object-
oriented programming, a unit is often an entire interface, such as a class, but could be an individual
method. Unit tests are short code fragments created by programmers or occasionally by white box
testers during the development process.
Ideally, each test case is independent from the others. Substitutes such as method
stubs, mock objects, fakes, and test harnesses can be used to assist testing a module in isolation.
Unit tests are typically written and run by software developers to ensure that code meets its design
and behaves as intended.
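To illustrate the substitutes mentioned above, here is a minimal sketch using Python's unittest.mock; the greet function and its repository collaborator are hypothetical:

import unittest
from unittest.mock import Mock

def greet(user_repository, user_id):     # hypothetical unit under test
    user = user_repository.find(user_id)
    return "Hello, %s!" % user["name"]

class TestGreet(unittest.TestCase):
    def test_greet_uses_repository(self):
        repo = Mock()                                # mock replaces the real repository
        repo.find.return_value = {"name": "Ada"}     # stubbed response, no database needed
        self.assertEqual(greet(repo, 42), "Hello, Ada!")
        repo.find.assert_called_once_with(42)        # verify the collaboration

if __name__ == "__main__":
    unittest.main()

Because the repository is mocked, the unit is tested in isolation, independently of any other test case.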
Unit Testing Limitations
Testing will not catch every error in the program, since it cannot evaluate every execution path
in any but the most trivial programs. The same is true for unit testing. Additionally, unit testing by
definition only tests the functionality of the units themselves. Therefore, it will not catch integration
errors or broader system-level errors (such as functions performed across multiple units, or non-
functional test areas such as performance). Unit testing should be done in conjunction with
other software testing activities, as they can only show the presence or absence of particular errors;
they cannot prove a complete absence of errors. In order to guarantee correct behavior for every
execution path and every possible input, and ensure the absence of errors, other techniques are
required, namely the application of formal methods to proving that a software component has no
unexpected behavior.
Software testing is a combinatorial problem. For example, every boolean decision statement
requires at least two tests: one with an outcome of "true" and one with an outcome of "false". As a
result, for every line of code written, programmers often need 3 to 5 lines of test code. This obviously
takes time and its investment may not be worth the effort. There are also many problems that cannot
easily be tested at all – for example those that are nondeterministic or involve multiple threads. In
addition, code for a unit test is likely to be at least as buggy as the code it is testing. Fred
Brooks in The Mythical Man-Month quotes: "Never go to sea with two chronometers; take one or
three." Meaning, if two chronometers contradict, how do you know which one is correct?
Another challenge related to writing the unit tests is the difficulty of setting up realistic and
useful tests. It is necessary to create relevant initial conditions so the part of the application being
tested behaves like part of the complete system. If these initial conditions are not set correctly, the
test will not be exercising the code in a realistic context, which diminishes the value and accuracy of
unit test results.
To obtain the intended benefits from unit testing, rigorous discipline is needed throughout the
software development process. It is essential to keep careful records not only of the tests that have
been performed, but also of all changes that have been made to the source code of this or any other
unit in the software. Use of a version control system is essential. If a later version of the unit fails a
particular test that it had previously passed, the version-control software can provide a list of the
source code changes (if any) that have been applied to the unit since that time.
It is also essential to implement a sustainable process for ensuring that test case failures are
reviewed daily and addressed immediately. If such a process is not implemented and ingrained into
the team's workflow, the application will evolve out of sync with the unit test suite, increasing false
positives and reducing the effectiveness of the test suite.
Unit testing embedded system software presents a unique challenge: since the software is
being developed on a different platform than the one it will eventually run on, you cannot readily run a
test program in the actual deployment environment, as is possible with desktop programs.

Integration Testing
Integration testing (sometimes called integration and testing, abbreviated I&T) is the phase
in software testing in which individual software modules are combined and tested as a group. It
occurs after unit testing and before validation testing. Integration testing takes as its
input modules that have been unit tested, groups them in larger aggregates, applies tests defined in
an integration test plan to those aggregates, and delivers as its output the integrated system ready
for system testing.
The purpose of integration testing is to verify functional, performance, and
reliability requirements placed on major design items. These "design items", i.e. assemblages (or
groups of units), are exercised through their interfaces using black box testing, success and error
cases being simulated via appropriate parameter and data inputs. Simulated usage of shared data
areas and inter-process communication is tested and individual subsystems are exercised through
their input interface. Test cases are constructed to test whether all the components within
assemblages interact correctly, for example across procedure calls or process activations, and this is
done after testing individual modules, i.e. unit testing. The overall idea is a "building block" approach,
in which verified assemblages are added to a verified base which is then used to support the
integration testing of further assemblages.
Some different types of integration testing are big bang, top-down, and bottom-up. Other
Integration Patterns are: Collaboration Integration, Backbone Integration, Layer Integration,
Client/Server Integration, Distributed Services Integration and High-frequency Integration.
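As a hedged sketch of the top-down style (the modules below are hypothetical), a stub stands in for a lower-level module that has not yet been integrated, so the upper module can be exercised through its interface:

def tax_service_stub(amount):
    return amount * 0.10                 # canned, predictable response

def checkout(amount, tax_service):       # top-level module under integration test
    return amount + tax_service(amount)

assert checkout(100, tax_service_stub) == 110.0
print("top module integrates correctly with the (stubbed) tax module")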
Validation Testing
In software project management, software testing, and software engineering, verification and
validation (V&V) is the process of checking that a software system meets specifications and that it
fulfills its intended purpose. It may also be referred to as software quality control. It is normally the
responsibility of software testers as part of the software development lifecycle.
Validation checks that the product design satisfies or fits the intended use (high-level
checking), i.e., the software meets the user requirements. This is done through dynamic testing and
other forms of review.
Verification and validation are not the same thing, although they are often
confused. Boehm succinctly expressed the difference between them:
1. Validation: Are we building the right product? (This is a dynamic process of checking and
testing the real product. Software validation always involves executing the code.)
2. Verification: Are we building the product right? (This is a static method of verifying
designs and code. Software verification is human-based checking of documents and files.)
According to the Capability Maturity Model:
1. Software Validation: The process of evaluating software during or at the end of the
development process to determine whether it satisfies specified requirements.
2. Software Verification: The process of evaluating software to determine whether the products
of a given development phase satisfy the conditions imposed at the start of that phase.
In other words, software validation ensures that the product actually meets the user's needs, and that
the specifications were correct in the first place, while software verification ensures that the
product has been built according to the requirements and design specifications. Software
validation ensures that "you built the right thing". Software verification ensures that "you built it
right". Software validation confirms that the product, as provided, will fulfill its intended use.
From testing perspective:
1. Fault – a wrong or missing function in the code.
2. Failure – the manifestation of a fault during execution.
3. Malfunction – the system does not meet its specified functionality.

System Testing And Debugging


Debugging is a methodical process of finding and reducing the number of bugs, or defects, in
a computer program or a piece of electronic hardware, thus making it behave as expected.
Debugging tends to be harder when various subsystems are tightly coupled, as changes in one may
cause bugs to emerge in another.
Numerous books have been written about debugging, as it involves numerous aspects, including
interactive debugging, control flow, integration testing, log files, monitoring (application, system),
memory dumps, profiling, statistical process control, and special design tactics to improve detection
while simplifying changes.
Normally the first step in debugging is to attempt to reproduce the problem. This can be a non-
trivial task, for example as with parallel processes or some unusual software bugs. Also, specific user
environment and usage history can make it difficult to reproduce the problem.
After the bug is reproduced, the input of the program may need to be simplified to make it
easier to debug. For example, a bug in a compiler can make it crash when parsing some large source
file; after simplification of the test case, only a few lines from the original source file may be
sufficient to reproduce the same crash. Such simplification can be done manually, using a divide-
and-conquer approach: the programmer tries to remove some parts of the original test case and checks
whether the problem still exists. When debugging a problem in a GUI, the programmer can try to skip some
user interaction from the original problem description and check whether the remaining actions are
sufficient for the bug to appear.
After the test case is sufficiently simplified, a programmer can use a debugger tool to examine
program states (values of variables, plus the call stack) and track down the origin of the problem(s).
Alternatively, tracing can be used. In simple cases, tracing is just a few print statements, which output
the values of variables at certain points of program execution.

TECHNIQUES
1. Print debugging (or tracing) is the act of watching (live or recorded) trace statements, or
print statements, that indicate the flow of execution of a process. This is sometimes
called printf debugging, due to the use of the printf function in C. This kind of debugging was
turned on by the command TRON in the original versions of the novice-
oriented BASIC programming language. TRON stood for "Trace On"; it caused the line
numbers of each BASIC command line to print as the program ran.
2. Remote debugging is the process of debugging a program running on a system different from
the debugger. To start remote debugging, a debugger connects to a remote system over a
network. The debugger can then control the execution of the program on the remote system
and retrieve information about its state.
3. Post-mortem debugging is debugging of the program after it has already crashed.
Related techniques often include various tracing techniques and/or analysis of the
memory dump (or core dump) of the crashed process. The dump of the process could be
obtained automatically by the system (for example, when the process has terminated due to an
unhandled exception), by a programmer-inserted instruction, or manually by the interactive
user.
4. "Wolf fence" algorithm: Edward Gauss described this simple but very useful and now
famous algorithm in a 1982 article for communications of the ACM as follows: "There's one
wolf in Alaska; how do you find it? First build a fence down the middle of the state, wait for the
wolf to howl, determine which side of the fence it is on. Repeat process on that side only, until
you get to the point where you can see the wolf. This is implemented e.g. in the Git version
control system as the command git bisect, which uses the above algorithm to determine
which commit introduced a particular bug.
5. Delta Debugging – a technique of automating test case simplification.
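A minimal sketch of the wolf-fence/bisection idea applied to revision history, as git bisect does; the revision numbers and the is_buggy predicate are hypothetical stand-ins for running the test suite at a given commit:

def is_buggy(revision):
    return revision >= 6                 # pretend the bug first appeared in revision 6

def wolf_fence(low, high):
    # Binary-search the first buggy revision in [low, high].
    while low < high:
        mid = (low + high) // 2          # build the fence down the middle
        if is_buggy(mid):
            high = mid                   # the wolf howled on this side
        else:
            low = mid + 1                # ...so search the other side
    return low

print(wolf_fence(1, 10))                 # 6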

Software Implementation
What are the different methods of programming in software implementation?
The software implementation is associated with the following programming models.
Structured Programming
As code multiplies, the size of the software grows, making it difficult to follow the program
flow. Such a program becomes very hard to share or modify, as it is difficult to keep track of the
files, programs, procedures and the manner in which the program is constructed. This drawback of
unstructured coding is overcome by structured programming, in which developers use structures
such as subroutines and loops. These subroutines and loops improve efficiency, decrease coding
time and keep the coding organized.
The coding of a particular program is described by structured programming. Structured programming
revolves around three basic concepts. They are -
 Top-down analysis – The most important part of any program is problem solving, and a
problem can be solved easily only when it is understood. The problem is divided into
several parts; each sub-problem is solved individually, and hence problem solving is
simplified.
 Modular Programming – Programming can also be simplified by organizing the code into
groups of instructions known as modules or sub-programs. Modular programming follows
top-down analysis. Because jump instructions make program flow difficult to trace,
structured programming always prefers a modular format to jumps.
 Structured Coding – Under this concept, the modules are broken down further, thus
simplifying execution. The flow of the program is controlled through control structures
chosen by structured programming. Structured coding enables the coding instructions
to be organized.
Functional Programming
Functional programming uses mathematical functions: when a mathematical function receives
a particular argument, it always produces the same result. In some procedural languages, by
contrast, procedures take control of the flow of the program, and there is a possibility of the
state of the program changing as control flows from one procedure to another.
When a procedural program receives a particular argument at different times, the results
produced can differ, because the state of the program keeps changing. This forces the programmer
to give more consideration to aspects of programming such as the sequence of the program and
the timing of the code.
Functional programming uses mathematical functions, which produce results without depending
on the state of the program. This makes program behaviour predictable. Functional programming
revolves around several concepts:
 First-class and higher-order functions – Higher-order functions accept other functions
as arguments or return functions as results.
 Pure functions – These functions have no side effects: they do not modify memory or
perform input/output. They can be safely removed if not required, without hampering
the rest of the program.
 Recursion – A function repeats its code by calling itself; the repetition stops when some
pre-defined condition matches. The repetition of code effectively forms loops. Here the
input and the output are expressed through functions.
 Strict evaluation – The arguments of a function are evaluated either by strict evaluation
or non-strict evaluation. Under strict evaluation, the expression is evaluated before the
function is called. Under non-strict evaluation, the expression is not evaluated unless it
is required.
 λ-calculus – Functional programming builds on λ-calculus; λ-expressions are evaluated
only when they occur.
Examples of functional programming languages are Common Lisp, Scala, Haskell, Erlang and F#.
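A minimal sketch of these concepts in Python (chosen here only for continuity with the other examples; the languages above express them more natively):

def square(x):                           # pure function: no side effects,
    return x * x                         # same argument -> same result

print(list(map(square, [1, 2, 3])))      # higher-order: map takes a function; [1, 4, 9]

def factorial(n):                        # recursion in place of a loop; stops
    return 1 if n == 0 else n * factorial(n - 1)   # when the condition n == 0 matches

print(factorial(5))                      # 120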
Programming style
Code is written in accordance with some predefined coding rules; programming style refers to the
set of such coding rules. Program code may be written by one developer and then worked on by
another developer, which can cause great confusion. This confusion is avoided by setting and
following a standard programming style when writing the program code.
A programming style prescribes relevant function and variable names for the task at hand,
well-placed indentation, code commented at the reader's convenience, and the overall presentation
of the code. This makes the program code easy to understand, makes errors easier to resolve, and
simplifies documentation and updating of the program code.

Coding Guidelines
Different organizations have different coding languages and different styles of coding. In general,
each organization's coding guidelines have to consider some common coding elements, such as the
following (a short sketch applying several of them appears after this list) -
 Naming conventions – Define the way in which functions, variables and constants need to
be named.
 Indenting – Indenting is the blank space left at the beginning of a line; 2–8 whitespaces or a
single tab is used for indenting.
 Whitespace – Trailing whitespace is generally omitted at the end of a line.
 Operators – The use of the different operators – assignment, mathematical and logical – is
defined. For example, a space should be placed before and after the assignment operator ‘=’,
as in “x = 2”.
 Control Structures – Clauses such as if-then-else, case-switch and while-until are to be
written following a defined set of rules.
 Line length and wrapping – The maximum length of a line in terms of number of characters
is defined. Long statements should be wrapped across lines as per the specified rules of
wrapping.
 Functions – The declaration and the calling of functions, with parameters and without
parameters, is defined.
 Variables – The definitions and the declarations of the different variables are mentioned.
 Comments – Comments describe what the code actually does. Predefined comment formats
also help in creating documentation and the associated descriptions.
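As a short, hypothetical Java sketch of several of these guidelines applied together – naming conventions, indenting, spaces around operators and an explanatory comment:

    public class InterestCalculator {

        // Naming convention: constants written in upper case.
        private static final double ANNUAL_RATE = 0.05;

        // Comment describing the functioning of the code:
        // returns the simple interest for the given principal and years.
        public static double simpleInterest(double principal, int years) {
            double interest = principal * ANNUAL_RATE * years; // spaces around '=' and '*'
            return interest;
        }
    }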

How is the documentation of the software done?


A software document is considered as a repository of information related to the complete
process of the software. The information about the usage of the product is also provided by the
software documentation. Well-structured and well-organized software documentation involves
maintenance of certain documents. They are -
Requirement documentation
As the name implies, all the functional and non-functional requirements and the description of
the desired software are documented in requirement documentation. Since this includes the collection
of requirements, it is considered as an important tool for the complete team of developers, designers
and testers.
This document is prepared on the basis of data from the software previously running at the client's
end, on research, on questionnaires submitted by the clients, and on interviews with the clients. The
software management team stores and maintains this document, in either a spreadsheet or a
word-processing document.
It is the basic foundation to develop any software. Verification and validation of the software is
also done on the basis of this document. Requirement documentation also facilitates in preparing the
test-cases.
Software Design documentation
The necessary information required for developing the software is provided by this software design
documentation. The document provides information related to high-level software architecture,
software design, data flow diagrams, and database design.
The software implementation is mainly based on this document. All the relevant and important
information regarding coding and implementation is provided, though it does not give the details of
exactly how the program is to be coded.
Technical documentation
The information about the code is documented by technical documentation, which can only be
prepared by the developers. The code is written along with some additional information such as the
code objective, writer of the code, the usage of the code, resources required by the code etc.
When the same code is worked on by different programmers, this documentation enables the
programmers to better understand and interact with each other. The ability of re-usability of the code
is improved by this documentation.
Technical documentation can be prepared using different available tools, and some such tools come
with the programming language itself. For example, Java comes with the JavaDoc tool, which
facilitates developing the technical documentation of the code.
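As a minimal illustration (hypothetical method), a JavaDoc comment records the objective and usage of the code directly alongside it, from which the JavaDoc tool can generate HTML documentation:

    /**
     * Returns the average of an array of marks.
     *
     * @param marks the marks to average; must not be empty
     * @return the arithmetic mean of the given marks
     */
    public static double average(int[] marks) {
        int sum = 0;
        for (int mark : marks) {
            sum += mark;
        }
        return (double) sum / marks.length;
    }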
User documentation
The explanation of the working of the software, and of its best usage for obtaining the desired
results, is provided by user documentation. User documentation differs from the other three types of
documentation in that those are oriented towards information for software development, while user
documents provide explanations for the end user.
Guidelines on how to install the software, user guides, methods of uninstallation, and information
about license updates are provided by this user documentation.

What are the different challenges faced by Software Implementation?


The implementation of the software does not leave the developers without challenges. Some of the
major challenges are -
 Code-reuse – Code that was created earlier for some other software is preferred by management
for re-use, in order to reduce the cost of the end product. This leaves the programmers with a
bunch of issues: they face a dilemma over how much of the code to re-use, and the compatibility
checks required before re-using the code are troublesome.
 Version Management – For every new release of the software, the version and the configuration
of the software have to be documented and maintained by the developers. This record should be
made easily available and handy whenever required.
 Target-Host – The software program needs to be designed with respect to the host machines at
the user's end. Sometimes it is feasible to do so, but at other instances it becomes very
difficult and challenging to design the software with respect to the host machines.

Software Implementation Techniques


Coding practices
Best coding practices are a set of informal rules that the software development community has
learned over time which can help improve the quality of software. Many computer programs remain in
use for far longer than the original authors ever envisaged (sometimes 40 years or more) so any rules
need to facilitate both initial development and subsequent maintenance and enhancement by people
other than the original authors.
Tim Cargill is credited with the ninety-ninety rule, an explanation of why programming
projects often run late: "The first 90% of the code accounts for the first 90% of the development time.
The remaining 10% of the code accounts for the other 90% of the development time." Any guidance
which can redress this lack of foresight is worth considering.
The size of a project or program has a significant effect on error rates, programmer
productivity, and the amount of management needed. Best coding practices aim at software
qualities such as:
 Maintainability.
 Dependability.
 Efficiency.
 Usability.
Refactoring
Refactoring is usually motivated by noticing a code smell. For example, the method at hand
may be very long, or it may be a near duplicate of another nearby method. Once recognized, such
problems can be addressed by refactoring the source code, or transforming it into a new form that
behaves the same as before but that no longer "smells". For a long routine, one or more smaller
subroutines can be extracted; or for duplicate routines, the duplication can be removed and replaced
with one shared function. Failure to perform refactoring can result in accumulating technical debt.
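As a minimal sketch of the Extract Method refactoring described above (the classic printOwing example, adapted from the refactoring literature, with hypothetical names), the behaviour stays the same while the long routine loses its smell:

    // Before: one long routine mixes two concerns.
    void printOwing(double amount) {
        System.out.println("*************");
        System.out.println("** Customer **");
        System.out.println("*************");
        System.out.println("amount: " + amount);
    }

    // After: the banner printing is extracted into a concise, well-named,
    // single-purpose method; the observable behaviour is unchanged.
    void printOwing(double amount) {
        printBanner();
        System.out.println("amount: " + amount);
    }

    void printBanner() {
        System.out.println("*************");
        System.out.println("** Customer **");
        System.out.println("*************");
    }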

There are two general categories of benefits to the activity of refactoring.


1. Maintainability. It is easier to fix bugs because the source code is easy to read and the intent
of its author is easy to grasp. This might be achieved by reducing large monolithic routines into
a set of individually concise, well-named, single-purpose methods. It might be achieved by
moving a method to a more appropriate class, or by removing misleading comments.
2. Extensibility. It is easier to extend the capabilities of the application if it uses
recognizable design patterns, and it provides some flexibility where none before may have
existed.
Before applying a refactoring to a section of code, a solid set of automatic unit tests is needed. The
tests are used to demonstrate that the behavior of the module is correct before the refactoring. If it
inadvertently turns out that a test fails, then it's generally best to fix the test first, because otherwise it
is hard to distinguish between failures introduced by refactoring and failures that were already there.
After the refactoring, the tests are run again to verify the refactoring didn't break the tests. Of course,
the tests can never prove that there are no bugs, but the important point is that this process can be
cost-effective: good unit tests can catch enough errors to make them worthwhile and to make
refactoring safe enough.
The process is then an iterative cycle of making a small program transformation, testing it to
ensure correctness, and making another small transformation. If at any point a test fails, the last small
change is undone and repeated in a different way. Through many small steps the program moves
from where it was to where you want it to be. For this very iterative process to be practical, the tests
must run very quickly, or the programmer would have to spend a large fraction of his or her time
waiting for the tests to finish. Proponents of extreme programming and other agile
methodologies describe this activity as an integral part of the software development cycle.
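A minimal sketch of such a guarding test, in JUnit style with a hypothetical TaxCalculator class: it must pass before the refactoring begins and is re-run after every small transformation.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class TaxCalculatorTest {

        // Pins down the current behaviour, so a transformation that
        // accidentally changes it makes the test fail immediately.
        @Test
        public void taxOnSmallIncomeIsTenPercent() {
            assertEquals(10.0, TaxCalculator.taxFor(100.0), 0.001);
        }

        @Test
        public void taxOnLargeIncomeIsTwentyPercent() {
            assertEquals(200.0, TaxCalculator.taxFor(1000.0), 0.001);
        }
    }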

 Techniques that allow for more abstraction


o Encapsulate Field – force code to access the field with getter and setter methods
(sketched after this list)
o Generalize Type – create more general types to allow for more code sharing
o Replace type-checking code with State/Strategy
o Replace conditional with polymorphism
 Techniques for breaking code apart into more logical pieces
o Componentization breaks code down into reusable semantic units that present clear, well-
defined, simple-to-use interfaces.
o Extract Class moves part of the code from an existing class into a new class.
o Extract Method, to turn part of a larger method into a new method. By breaking down code
in smaller pieces, it is more easily understandable. This is also applicable to functions.
 Techniques for improving names and location of code
o Move Method or Move Field – move to a more appropriate Class or source file
o Rename Method or Rename Field – changing the name into a new one that better reveals
its purpose
o Pull Up – in OOP, move to a superclass
o Push Down – in OOP, move to a subclass
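As a minimal sketch of the Encapsulate Field technique from the first group (hypothetical Account class), direct access to a field is replaced by getter and setter methods:

    // Before: the field is public, so any code may read or mutate it freely.
    public class Account {
        public double balance;
    }

    // After: the field is private and all access goes through methods,
    // so validation or logging can later be added in one place.
    public class Account {
        private double balance;

        public double getBalance() {
            return balance;
        }

        public void setBalance(double balance) {
            this.balance = balance;
        }
    }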

Software Maintenance
Software maintenance is a widely accepted part of the SDLC nowadays. It stands for all the
modifications and updates done after the delivery of the software product. There are a number of
reasons why modifications are required; some of them are briefly mentioned below:
 Market Conditions - Policies which change over time, such as taxation, and newly
introduced constraints, such as how to maintain bookkeeping, may trigger the need for modification.
 Client Requirements - Over time, the customer may ask for new features or functions in the
software.
 Host Modifications - If any of the hardware and/or platform (such as the operating system) of the
target host changes, software changes are needed to maintain adaptability.
 Organization Changes - If there is any business-level change at the client's end, such as a
reduction in organization strength, acquisition of another company, or the organization venturing
into new business, the need to modify the original software may arise.

Types of maintenance
In a software lifetime, the type of maintenance may vary based on its nature. It may be just a routine
maintenance task, such as a bug discovered by some user, or it may be a large event in itself, based
on the size or nature of the maintenance. Following are some types of maintenance based on their
characteristics:
 Corrective Maintenance - This includes modifications and updates done in order to correct
or fix problems which are either discovered by the user or concluded from user error reports.
 Adaptive Maintenance - This includes modifications and updates applied to keep the
software product up to date and tuned to the ever-changing world of technology and business
environment.
 Perfective Maintenance - This includes modifications and updates done in order to keep the
software usable over a long period of time. It includes new features and new user requirements
for refining the software and improving its reliability and performance.
 Preventive Maintenance - This includes modifications and updates to prevent future
problems of the software. It aims to attend to problems which are not significant at this moment
but may cause serious issues in the future.

Cost of Maintenance
Reports suggest that the cost of maintenance is high. A study on estimating software
maintenance found that the cost of maintenance can be as high as 67% of the cost of the entire
software process cycle.

On average, the cost of software maintenance is more than 50% of the cost of all SDLC phases.
There are various factors that drive the maintenance cost up, such as:
Real-world factors affecting Maintenance Cost
 The standard age of any software is considered to be up to 10 to 15 years.
 Older software, which was meant to work on slow machines with less memory and storage
capacity, cannot remain competitive against newly arriving enhanced software running on
modern hardware.
 As technology advances, it becomes costly to maintain old software.
 Most maintenance engineers are newcomers and use trial-and-error methods to rectify problems.
 Often, changes made can easily hurt the original structure of the software, making it hard for
any subsequent changes.
 Changes are often left undocumented, which may cause more conflicts in the future.
Software-end factors affecting Maintenance Cost
 Structure of Software Program
 Programming Language
 Dependence on external environment
 Staff reliability and availability
Maintenance Activities
IEEE provides a framework for sequential maintenance process activities. It can be used in an
iterative manner and can be extended so that customized items and processes can be included.

These activities go hand-in-hand with each of the following phases:

 Identification & Tracing - It involves activities pertaining to identification of the requirement
for modification or maintenance. The requirement is generated by the user, or the system itself
may report it via logs or error messages. The maintenance type is also classified here.
 Analysis - The modification is analyzed for its impact on the system, including safety and
security implications. If the probable impact is severe, an alternative solution is looked for. The
set of required modifications is then materialized into requirement specifications. The cost of
modification/maintenance is analyzed and an estimate is concluded.
 Design - New modules, which need to be replaced or modified, are designed against the
requirement specifications set in the previous stage. Test cases are created for validation and
verification.
 Implementation - The new modules are coded with the help of the structured design created in
the design step. Every programmer is expected to do unit testing in parallel.
 System Testing - Integration testing is done among the newly created modules. Integration
testing is also carried out between the new modules and the system. Finally, the system is tested
as a whole, following regression testing procedures.
 Acceptance Testing - After testing the system internally, it is tested for acceptance with the
help of users. If at this stage the user complains of some issues, they are addressed or noted to
be addressed in the next iteration.
 Delivery - After the acceptance test, the system is deployed all over the organization, either by
a small update package or by a fresh installation of the system. The final testing takes place at
the client end after the software is delivered.
Training is provided if required, in addition to the hard copy of the user manual.
 Maintenance management - Configuration management is an essential part of system
maintenance. It is aided by version control tools to manage versions, semi-versions and patches.

Software Re-engineering
Software re-engineering is the examination and alteration of a system to reconstitute it in a
new form. The application of the principles of re-engineering to the software development process is
called software re-engineering. It positively affects software cost, quality, service to the customer
and speed of delivery. In software re-engineering, we improve the software to make it more
efficient and effective.
For example, Unix was initially developed in assembly language. When the C language came into
existence, Unix was re-engineered in C, because working in assembly language was difficult.
Apart from this, programmers sometimes notice that a few parts of the software need more
maintenance than others, and these also need re-engineering.

Objectives of Re-engineering:
 To describe a cost-effective option for system evolution.
 To describe the activities involved in the software maintenance process.
 To distinguish between software and data re-engineering and to explain the problems of data re-
engineering.

Steps involved in Re-engineering:


1. Inventory Analysis
2. Document Restructuring
3. Reverse Engineering
4. Code Restructuring
5. Data Restructuring
6. Forward Engineering

Re-Engineering cost factors:


 The quality of the software to be re-engineered.
 The availability of tool support for re-engineering.
 The extent of the data conversion required.
 The availability of expert staff for re-engineering.

Re-Engineering Process
 Decide what to re-engineer. Is it whole software or a part of it?
 Perform Reverse Engineering, in order to obtain specifications of existing software.
 Restructure Program if required. For example, changing function-oriented programs into
object-oriented programs.
 Re-structure data as required.
 Apply Forward engineering concepts in order to get re-engineered software.

Software Re-Engineering Activities:
1. Inventory Analysis:
Every software organisation should have an inventory of all its applications.
 The inventory can be nothing more than a spreadsheet model containing information that provides a
detailed description of every active application.
 By sorting this information according to business criticality, longevity, current maintainability and
other locally important criteria, candidates for re-engineering appear.
 Resources can then be allocated to candidate applications for re-engineering work.
2. Document Restructuring:
Documentation of a system explains either how it operates or how to use it. Depending on the
situation, one of three options applies:
 The documentation may simply need to be updated.
 It may not be necessary to fully document an application.
 If the system is business critical, it must be fully re-documented.
3. Reverse Engineering:
Reverse engineering is a process of design recovery. Reverse engineering tools extract data,
architectural and procedural design information from an existing program.
4. Code Restructuring:
 To accomplish code restructuring, the source code is analysed using a restructuring tool.
Violations of structured programming constructs are noted, and the code is then restructured.
 The resultant restructured code is reviewed and tested to ensure that no anomalies have been
introduced.
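A minimal, hypothetical sketch of what such restructuring does: deeply nested conditionals that obscure the control flow are rewritten into an equivalent structured form without changing behaviour.

    // Before: deeply nested conditionals obscure the control flow.
    static String category(int age) {
        if (age >= 0) {
            if (age < 13) {
                return "child";
            } else {
                if (age < 20) {
                    return "teen";
                } else {
                    return "adult";
                }
            }
        }
        return "invalid";
    }

    // After: the same logic restructured with guard clauses;
    // behaviour is identical, readability is improved.
    static String category(int age) {
        if (age < 0)  return "invalid";
        if (age < 13) return "child";
        if (age < 20) return "teen";
        return "adult";
    }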
5. Data Restructuring:
 Data restructuring begins with the reverse engineering activity.
 The current data architecture is dissected, and the necessary data models are defined.
 Data objects and attributes are identified, and existing data structures are reviewed for quality.
6. Forward Engineering:
Forward engineering, also called renovation or reclamation, not only recovers design information
from existing software but also uses this information to alter or reconstitute the existing system in an
effort to improve its overall quality.

There are a few important terms used in software re-engineering


Reverse Engineering
It is a process to achieve system specification by thoroughly analyzing and understanding the
existing system. This process can be seen as a reverse SDLC model, i.e. we try to reach a higher
abstraction level by analyzing lower abstraction levels. An existing system is a previously
implemented design about which we know nothing. Designers do reverse engineering by looking at
the code and trying to recover the design; with the design in hand, they try to conclude the
specifications. Thus the process goes in reverse, from code to system specification.

Program Restructuring
It is a process to re-structure and re-construct the existing software. It is all about re-arranging
the source code, either in the same programming language or from one programming language to a
different one. Restructuring can involve source-code restructuring, data restructuring, or both.
Restructuring does not impact the functionality of the software but enhances reliability and
maintainability. Program components which cause errors very frequently can be changed or
updated with restructuring. The dependency of the software on an obsolete hardware platform can
also be removed via restructuring.

Forward Engineering
Forward engineering is a process of obtaining the desired software from the specifications in
hand, which were brought down by means of reverse engineering. It assumes that some software
engineering was already done in the past. Forward engineering is the same as the software
engineering process, with only one difference – it is always carried out after reverse engineering.
Component reusability
A component is a part of software program code, which executes an independent task in the
system. It can be a small module or sub-system itself.
Example
The login procedures used on the web can be considered components; the printing system in
software can be seen as a component of the software. Components have high cohesion of
functionality and a lower rate of coupling, i.e. they work independently and can perform tasks
without depending on other modules.
In OOP, objects are designed to be very specific to their concern and have fewer chances of
being used in some other software. In modular programming, the modules are coded to perform
specific tasks which can be used across a number of other software programs. There is a whole new
vertical, based on the re-use of software components, known as Component Based
Software Engineering (CBSE).

Re-use can be done at various levels


 Application level - Where an entire application is used as sub-system of new software.
 Component level - Where sub-system of an application is used.
 Modules level - Where functional modules are re-used.
Software components provide interfaces, which can be used to establish communication
among different components.
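A minimal Java sketch (hypothetical names) of such an interface: the rest of the system depends only on the interface, so any component implementing it can be reused or replaced independently.

    // The published, simple-to-use interface of a reusable component.
    public interface PrintingService {
        void print(String document);
    }

    // One interchangeable implementation; other software can reuse the
    // component purely through the PrintingService interface.
    public class ConsolePrinter implements PrintingService {
        @Override
        public void print(String document) {
            System.out.println(document);
        }
    }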
Reuse Process
Two kinds of methods can be adopted: either keep the requirements the same and adjust the
components, or keep the components the same and modify the requirements.

 Requirement Specification - The functional and non-functional requirements which the
software product must comply with are specified, with the help of the existing system, user
input or both.
 Design - This is also a standard SDLC process step, where the requirements are defined in
terms of software parlance. The basic architecture of the system as a whole and of its
sub-systems is created.
 Specify Components - By studying the software design, the designers segregate the entire
system into smaller components or sub-systems. One complete software design turns into a
collection of a huge set of components working together.
 Search Suitable Components - The software component repository is referred to by designers
to search for matching components, on the basis of functionality and intended software
requirements.
 Incorporate Components - All the matched components are packed together to shape them
into complete software.

Comparison between forward engineering and reverse engineering:
 Basic – Forward engineering is the development of an application from provided requirements;
reverse engineering deduces the requirements from a given application.
 Certainty – Forward engineering always produces an application implementing the requirements;
reverse engineering can yield several ideas about the requirements from one implementation.
 Nature – Forward engineering is prescriptive; reverse engineering is adaptive.
 Needed skills – Forward engineering demands high proficiency; reverse engineering needs
low-level expertise.
 Time required – Forward engineering takes more time; reverse engineering takes less.
 Accuracy – In forward engineering the model must be precise and complete; in reverse
engineering even an inexact model can provide partial information.

Business Process Re-engineering (BPR) model


Explain the BPR model. What are the steps involved in the BPR model?
BPR, or the business process re-engineering model, is the model used by many organizations
of today's software world for analysing and designing their processes and work flows. This section
discusses the business process re-engineering model in detail. The model works on the principle
that business processes are a set of logically related tasks that aim at working together towards a
defined business goal. This model has been deployed by many management systems like:
1. Information systems management
2. Supply chain management
3. Group ware systems
4. Human resource management systems
5. Enterprise resource planning etc.

Phases of BPR model


The business process re-engineering model involves a cyclic process consisting of four main phases:
1. Identification of the processes
2. Review, updating and analysis of the As-Is processes
3. Design of the To-Be processes
4. Testing and implementation of the To-Be processes
Implementation of business process re-engineering requires implementing an HPO, or
high-performing organization. It should be kept in mind that BPR is an organizational method that
redesigns an organization's processes in terms of effectiveness, efficiency and economy.

Few of the activities that are undertaken in BPR are:


1. Activity based costing analysis
2. Business case analysis
3. Industrial engineering techniques
4. Productivity assessment
5. Human capital tools
6. Baselining and benchmarking studies
7. Functionality assessment
8. Organization analysis
9. Work force analysis

Steps in Implementation of BPR model


Now we shall see the steps involved in the implementation of this business process re-engineering
model.
1. Establishment of the BPR and HPO project plan:
In this step the reason for the BPR nomination is justified, and the organization states the objectives
of the BPR. The employees and activities that will be affected are identified. The customers and the
stakeholders that will be affected by the impact of the BPR are also identified, along with the
available contractor support. The organization then needs to describe the expected outcomes in
terms of metrics, business and human capital.
2. Preliminary planning:
It involves identification and assignment of the MSO development team members (MSO stands for
most sustainable organization), development of the action plan with check points and milestones,
development of the communication plans, and establishment of the data analysis requirements and
collection methods.
3. Development of the business case:
This step involves the implementation of the As-Is analysis of the organization, development of the
To-Be MSO, measuring the gaps between the actual and the “to be” organizations, and the
development of the transition plans.
4. Implementation of the business case:
This step involves the identification and assignment of the most sustainable
organization implementation team members, establishment of the letter of obligation between MSO
activity manager and agency head and the initiation of the transition to the MSO.
5. Tracking and validation of the MSO performance:
It involves usage of the identified metrics in the measurement of:
(i) Closing skills and competency gaps
(ii) Closing performance gaps
(iii) Improving timeliness and quality
(iv) Achieving savings and so on.

This whole process is driven by two main teams, along with other supporting roles:


1.Development team:
It develops the MSO and includes functional experts, personnel specialists etc.
2. Implementation team:
It implements the MSO and includes development team members and employees who will
work in the MSO.
3. Other people involved in the business process re-engineering are:
- Acquisition officer: participates in the BPR efforts.
- MSO activity manager
- Human resource advisor
- Human capital officer
- CFO
- General officer
- Budget officer
- Support contractor and so on.

Possible Questions
Part – A
1. Define: Software Testing.
2. What is a test case?
3. Outline the need for system testing.
4. List the levels of testing.
5. What are the testing principles the software engineer must apply while performing the software
testing?
6. Define: Reverse Engineering.
7. Define: Refactoring.
8. Distinguish between verification and validation.
9. Write down generic characteristics of software testing.
10. What is the difference between alpha testing and beta testing?
11. Write the best practices for coding.
12. What is the need for regression testing?
13. In unit testing of a module, it is found that for a set of test data, at most 90% of the code
alone was tested with a probability of success 0.9. What is the reliability of the module?
14. Why does software fail after it has passed acceptance testing?
15. What methods are used for breaking very long expressions and statements?
16. What is boundary condition testing?
17. What are the various testing activities?
18. How can refactoring be made more effective?
19. Define Debugging. What are the common approaches in debugging?
20. Write about drivers and stubs.
21. What is smoke testing?
22. List out the activities of BPR model.
23. What is Forward Engineering?
24. Why is debugging difficult?
25. List out the types of system testing.

Part – B
1. Elaborate path testing and regression testing with an example.
2. Explain how Business Process Reengineering (BPR) helps to achieve a defined business
outcome.
3. Outline how the reverse engineering process helps to improve the legacy software. (5)
4. Compare White box testing and black box testing. (4)
5. List the phases in software reengineering process model and explain each phase.
6. Explain the various types of black box testing methods.
7. Explain the various types of White box testing methods.
8. Explain about various testing strategies.
9. Why does software testing need extensive planning? Explain. (Testing Principles)
10. Explain the validation testing in detail.
11. What is black box & white-box testing? Explain how basis path testing helps to derive test
cases to test every statement of a program.
12. Define: Regression testing. Distinguish: top-down and bottom-up integration. How is testing
different from debugging? Justify.
13. Write a note on equivalence partitioning & boundary value analysis of black box testing.
14. Explain the best coding practices and refactoring of software implementation in detail.

Part – C
1. Write the procedure for the following: Given three sides of a triangle, return the type of
triangle i.e., equilateral, isosceles and scalene triangle. Draw the control flow graph and
calculate cyclomatic complexity to calculate the minimum number of paths. Enumerate the
paths to be tested. (8)
2. Given a set of numbers 'n', the function FindPrime (a[], n) prints a number if it is a prime
number. Draw a control flow graph, calculate the cyclomatic complexity and enumerate all
paths. State how many test cases are needed to adequately cover the code in terms of
branches, decisions and statements. Develop the necessary test cases using sample values
for 'a' and 'n'.
3. Consider the pseudocode for simple subtraction given below :
(1) Program 'Simple Subtraction'
(2) Input (x,y)
(3) Output (x)
(4) Output (y)
(5) If x > y then DO
(6) x-y =z
(7) Else y-x = z
(8) EndIf
(9) Output (z)
(10) Output "End Program"
Perform basis path testing and generate test cases.
