Software Engineering Notes
DECOMPOSITION TECHNIQUES - SOFTWARE/ SYSTEM SIZING
The accuracy of a system (software) project estimate is predicated on a number of things:
a) Degree to which the planner has properly estimated the size of the product to be built.
b) Ability to translate the size estimate into human efforts, calendar time and money.
c) The degree to which project plan reflects the abilities of the system development team
d) The stability of product requirements and the environment that supports the system development
effort
A project estimate is only as good as the estimate of the size of the work to be accomplished.
Size is a quantifiable outcome of the system/software project.
"
Quality is hard to define, impossible to measure and easy to recognize.
Definition: Quality is continually satisfying customer requirements (Smith 1987).
The International Standards Organization (ISO) defines quality as the totality of features and characteristics of a
product or service that bear on its ability to satisfy specified or implied needs (ISO 1986).
Garvin's view of quality (Garvin 1984) identifies five views of quality:
a) The transcendent view - quality is immeasurable but can be seen, sensed or felt and
appreciated, e.g. in art or music
b) The product-based view - quality is measured by the attributes/ingredients in a product
c) The user-based view - quality is fitness for purpose, meeting needs as specified
d) The manufacturing-based view - quality is conformance to specification
e) The value-based view - the ability to provide the customer with the product/services they
want at a price they can afford.
SOFTWARE COST ESTIMATION
The dominant cost is the effort cost. This is the most difficult to estimate and control, and has the
most significant effect on overall costs. Software costing should be carried out objectively with
the aim of accurately predicting the cost to the contractor of developing the software. Software
cost estimation is a continuing activity which starts at the proposal stage and continues throughout
the lifetime of a project. Projects normally have a budget, and continual cost estimation is
necessary to ensure that spending is in line with the budget. Effort can be measured in staff-hours
or staff-months (formerly known as man-hours or man-months). Boehm (1981) discusses seven
techniques of software cost estimation:
(1) Algorithmic cost modeling - A model is developed using historical cost information which
relates some software metric (usually its size) to the project cost. An estimate is made of that
metric and the model predicts the effort required.
(2) Expert judgement - One or more experts on the software development techniques to be used
and on the application domain are consulted. They each estimate the project cost and the final
cost estimate is arrived at by consensus.
(3) Estimation by analogy - This technique is applicable when other projects in the same
application domain have been completed. The cost of a new project is estimated by analogy with
these completed projects.
(4) Parkinson's Law - Parkinson's Law states that work expands to fill the time available. In
software costing, this means that the cost is determined by available resources rather than by
objective assessment. If the software has to be delivered in 12 months and 5 people are available,
the effort required is estimated to be 60 person-months.
(5) Pricing to win - The software cost is estimated to be whatever the customer has available to
spend on the project. The estimated effort depends on the customer's budget and not on the
software functionality.
(6) Top-down estimation - A cost estimate is established by considering the overall
functionality of the product and how that functionality is provided by interacting sub-functions.
Cost estimates are made on the basis of the logical function rather than the components
implementing that function.
(7) Bottom-up estimation - The cost of each component is estimated. All these costs are added
to produce a final cost estimate.
ESTIMATION METHODS / TOOLS
Estimation may be the most difficult task in an entire software project. Almost all software cost
estimation is related to human resources. This is different from other engineering disciplines,
which have to deal with more physical resources.
Many estimation techniques use some estimate of size, such as
KLOC (Thousands of lines of code)
FP (Function points)
OP (Object points)
Estimation techniques for software projects are:
Using historical data
Decomposition techniques
Empirical models
A combination of any or all of the above
HISTORICAL DATA
Using historical data for estimation is largely self-explanatory: this method uses the track record of
previous projects to make estimates for the new project.
Main advantage: It is specific to that organization
Main disadvantages:
Continuous process improvement is sometimes hard to factor in
Some projects may be very different from previous ones
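As a rough illustration (not part of the original notes), the following Python sketch estimates effort for a new project from the average productivity of past projects; the project names and figures are invented:

# Illustrative sketch only: project names, sizes and efforts are invented.
# Estimate effort for a new project from productivity observed on past projects.

past_projects = [
    # (name, size in KLOC, effort in person-months)
    ("billing-v1", 24, 80),
    ("inventory", 31, 110),
    ("payroll", 18, 60),
]

# Average productivity in KLOC per person-month across the historical base.
total_size = sum(size for _, size, _ in past_projects)
total_effort = sum(effort for _, _, effort in past_projects)
productivity = total_size / total_effort   # KLOC per person-month

new_project_size = 40  # estimated KLOC for the new project (assumed)
estimated_effort = new_project_size / productivity

print(f"Historical productivity: {productivity:.2f} KLOC/person-month")
print(f"Estimated effort for {new_project_size} KLOC: {estimated_effort:.0f} person-months")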
ALGORITHMIC COST MODELING
Costs are analyzed using mathematical formulae linking costs with metrics. The most commonly
used metric for cost estimation is the number of lines of source code in the finished system
(which of course is not known). Size estimation may involve estimation by analogy with other
projects, estimation by ranking the sizes of system components and using a known reference
component to estimate the component size or may simply be a question of engineering
judgement.
Code size estimates are uncertain because they depend on hardware and software choices, the use of
a commercial database management system, etc. An alternative to using code size as the
estimated product attribute is the use of function points, which are related to the functionality
of the software rather than to its size. Function points are computed by counting the following
software characteristics:
External inputs and outputs.
User interactions.
External interfaces.
Files used by the system.
Each of these is then individually assessed for complexity and given a weighting value which
varies from 3 (for simple external inputs) to 15 (for complex internal files). The function point
count is computed by multiplying each raw count by the estimated weight and summing all
values; the total is then multiplied by project complexity factors which consider the overall complexity
of the project according to a range of factors such as the degree of distributed processing, the
amount of reuse, the performance, and so on.
Function point counts can be used in conjunction with lines of code estimation techniques. The
number of function points is used to estimate the final code size.
Based on historical data analysis, the average number of lines of code in a particular language
required to implement a function point can be estimated (AVC). The estimated code size for a
new application is computed as follows: Code size = AVC x Number of function points
The advantage of this approach is that the number of function points can often be estimated from
the requirements specification, so an early code size prediction can be made.
(Table: levels of selected software languages relative to Assembler language.)
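As a hedged illustration of the calculation described above, the Python sketch below combines raw counts, assumed weights (within the 3-15 range mentioned above), an assumed complexity factor and an assumed AVC figure into a code size estimate; only the structure of the calculation follows the notes, the numbers are illustrative:

# Illustrative function point calculation. The raw counts, weights and the
# complexity factor are assumptions; only the calculation structure follows the notes.

raw_counts = {          # counts of each characteristic in the specification (assumed)
    "external_inputs": 12,
    "external_outputs": 8,
    "user_interactions": 10,
    "external_interfaces": 3,
    "internal_files": 6,
}

weights = {             # assumed average weights within the 3..15 range
    "external_inputs": 4,
    "external_outputs": 5,
    "user_interactions": 4,
    "external_interfaces": 7,
    "internal_files": 10,
}

# Unadjusted function point count: weighted sum of the raw counts.
ufp = sum(raw_counts[k] * weights[k] for k in raw_counts)

# Overall project complexity factor (distribution, reuse, performance, ...); assumed.
complexity_factor = 1.1
fp = ufp * complexity_factor

# Code size estimate: AVC = average lines of code per function point for the
# chosen language (an assumed historical figure).
avc = 53
code_size_loc = avc * fp

print(f"Unadjusted FP: {ufp}, adjusted FP: {fp:.0f}, "
      f"estimated size: {code_size_loc:.0f} LOC")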
DECOMPOSITION TECHNIQUES
Decomposition techniques for estimation can be one of two types:
Problem-based - Use functional or object decomposition, estimate each object, and sum the
result. Estimates either use historical data or empirical techniques.
Process-based - Estimate effort and cost for each task of the process, and then sum the result.
Estimates use historical data
Main advantage: Easier to estimate smaller parts
Main disadvantage: More variables involved means more potential errors
EMPIRICAL MODELS
Empirical models for estimation use formulae of the form g = f(x)
where g is the value to be estimated (cost, effort or project duration) and x is a parameter such as
KLOC, FP or OP. Most formulae involving KLOC consistently show that there is an almost
linear relationship between KLOC and estimated total effort.
Main advantage: Easy to use
Main disadvantage: Models are usually derived from a limited number of projects, but are used
in a more generalized fashion
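As a sketch of how an empirical model g = f(x) might be derived, the Python code below fits a simple linear relationship between KLOC and effort to a small, invented set of past projects and then applies it to a new size estimate; real models are calibrated on much larger data sets:

# Minimal sketch: derive an empirical effort model g = f(KLOC) by
# least squares from invented historical data, then apply it.

history = [(10, 28), (25, 75), (40, 130), (60, 190)]  # (KLOC, person-months), assumed

n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n

# Slope and intercept of the least-squares line: effort = a * KLOC + b.
a = sum((x - mean_x) * (y - mean_y) for x, y in history) / \
    sum((x - mean_x) ** 2 for x, _ in history)
b = mean_y - a * mean_x

def estimate_effort(kloc: float) -> float:
    """Empirical model g = f(x): effort in person-months for a given KLOC."""
    return a * kloc + b

print(f"Model: effort = {a:.2f} * KLOC + {b:.1f}")
print(f"Estimated effort for 35 KLOC: {estimate_effort(35):.0f} person-months")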
SOFTWARE LIFE CYCLE MANAGEMENT (SLIM)
SLIM was developed by Putnam in 1978, from work for the US Army, to cover projects exceeding 70 KLOC.
Putnam's model assumes that the effort for a software development project is distributed similarly
to a collection of Rayleigh curves, one for each major development activity.
The model is based on empirical studies and relates the delivered size S, a technology factor C, the
total project effort K (in person-years) and the development time td (in years):

S = C K^(1/3) td^(4/3)

The equation allows one to assess the effect of varying the delivery date on the total effort needed to
complete the project. Thus, for a 10% decrease in elapsed time, with S and C fixed:

C K^(1/3) td^(4/3) = C K'^(1/3) (0.9 td)^(4/3)

i.e. K'/K = (1/0.9)^4 ≈ 1.52, a 52% increase in total life-cycle effort.
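The schedule-compression arithmetic above can be reproduced directly. The sketch below keeps S and C fixed, so that K is proportional to td^(-4), and shows how total effort grows as the development time is shortened; the 10% case reproduces the 52% figure:

# Sketch of the Putnam (SLIM) schedule/effort trade-off, keeping S and C fixed.
# From S = C * K**(1/3) * td**(4/3), K is proportional to td**(-4), so
# K'/K = (td/td')**4.

def effort_ratio(schedule_fraction: float) -> float:
    """Ratio K'/K when development time is scaled to schedule_fraction * td."""
    return schedule_fraction ** -4

for fraction in (1.0, 0.9, 0.8):
    ratio = effort_ratio(fraction)
    print(f"td scaled by {fraction:.1f}: effort multiplied by {ratio:.2f} "
          f"({(ratio - 1) * 100:+.0f}%)")
# td scaled by 0.9 gives a factor of about 1.52, i.e. a 52% increase in effort.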
Advantages of SLIM
Uses linear programming to consider development constraints on both cost and effort
Has fewer parameters needed to generate an estimate (over COCOMO)
Disadvantages of SLIM
The model has often been found to be insufficiently accurate in practice
Not suitable for small projects
Estimates are extremely sensitive to the technology factor
CONSTRUCTIVE COST MODEL
COCOMO is probably the most well-known and well-established empirical model. It was first
introduced by Barry Boehm in 1981. It has since evolved into a more comprehensive model
named COCOMO II. COCOMO II has a variety of formulae
For various stages of the project
For various size parameters (KLOC, FP, object points)
For various types of project teams
COCOMO is the most widely used model for effort and cost estimation and considers a wide variety
of factors. Projects fall into three categories, characterized by their size and environment: organic,
semidetached, and embedded. The basic model uses only source size: effort is estimated as
E = a (KLOC)^b person-months, where a and b depend on the project category (a sketch of this model
is given after the cost-driver list below). There is also an intermediate model which, as well as size,
uses 15 other cost drivers. The cost drivers for the intermediate COCOMO model are:
Software reliability
Size of application database
Complexity
Analyst capability
Software engineering capability
Applications experience
Virtual machine experience
Programming language expertise
Performance requirements
Memory constraints
Volatility of the virtual machine environment
Turnaround time
Use of software tools
Application of software engineering methods
Required development schedule
Values are assigned by the manager.
The intermediate model is more accurate than the basic model.
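The following Python sketch implements the basic COCOMO 81 model referred to above (effort E = a * KLOC^b person-months, schedule D = 2.5 * E^c months) using the published basic-model coefficients; the 32 KLOC size is an invented example, and the intermediate model would additionally multiply the effort by the product of the 15 cost-driver ratings:

# Basic COCOMO 81 sketch: effort and schedule from size and project category.
# Coefficients are the published basic-model values (Boehm 1981).

COEFFICIENTS = {
    # category: (a, b, c) for E = a * KLOC**b, D = 2.5 * E**c
    "organic":      (2.4, 1.05, 0.38),
    "semidetached": (3.0, 1.12, 0.35),
    "embedded":     (3.6, 1.20, 0.32),
}

def basic_cocomo(kloc: float, category: str) -> tuple[float, float]:
    """Return (effort in person-months, schedule in months)."""
    a, b, c = COEFFICIENTS[category]
    effort = a * kloc ** b
    schedule = 2.5 * effort ** c
    return effort, schedule

for category in COEFFICIENTS:
    effort, schedule = basic_cocomo(32, category)   # 32 KLOC: example size
    print(f"{category:12s}: {effort:6.1f} person-months, "
          f"{schedule:4.1f} months, average staff {effort / schedule:.1f}")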
7. SOFTWARE METRICS
From a survey of managers and technicians, the following factors were identified:
quality of external documentation
programming language
well-defined programming practices
availability of tools
programmer experience in data processing
programmer experience in the functional area
effect of project communication
independent modules for individual assignment
In an experiment, five programming teams were each given a different primary objective:
minimum internal memory
output clarity
program clarity
minimum source statements
minimum hours
When productivity was evaluated, each team ranked first in its primary objective. This shows that
programmers respond to the goal they are set.
Software metrics are defined as a set of measures that are considered, and are to be
incorporated, for a quality software product.
This set of software metrics includes:
Software Maintainability - This is the main programming cost in most installations, and is
affected by data structures, logical structure, documentation, diagnostic tools, and by personnel
attributes such as specialization, experience, training, intelligence and motivation. Maintenance
includes the cost of rewriting, testing, debugging and integrating new features. Methods for
improving maintainability are:
inspections
automated audits of comments
test path analysis programs
use of pseudo code documentation
dual maintenance of source code
modularity
Structured program logic flow.
Maintainability is "the ease with which changes can be made to satisfy new requirements or to
correct deficiencies" [Balci 1997]. Well designed software should be flexible enough to
accommodate future changes that will be needed as new requirements come to light. Since
maintenance accounts for nearly 70% of the cost of the software life cycle [Schach 1999], the
importance of this quality characteristic cannot be overemphasized. Quite often the programmer
responsible for writing a section of code is not the one who must maintain it. For this reason, the
quality of the software documentation significantly affects the maintainability of the software
product.
Software Correctness is "the degree with which software adheres to its specified requirements"
[Balci 1997]. At the start of the software life cycle, the requirements for the software are
determined and formalized in the requirements specification document. Well designed software
should meet all the stated requirements. While it might seem obvious that software should be
correct, the reality is that this characteristic is one of the hardest to assess. Because of the
tremendous complexity of software products, it is impossible to perform exhaustive execution-
based testing to ensure that no errors will occur when the software is run. Also, it is important to
remember that some products of the software life cycle such as the design specification cannot be
"executed" for testing. Instead, these products must be tested with various other techniques such
as formal proofs, inspections, and walkthroughs.
Software Reusability is "the ease with which software can be reused in developing other software"
[Balci 1997]. By reusing existing software, developers can create more complex software in a
shorter amount of time. Reuse is already a common technique employed in other engineering
disciplines. For example, when a house is constructed, the trusses which support the roof are
typically purchased preassembled. Unless a special design is needed, the architect will not bother
to design a new truss for the house. Instead, he or she will simply reuse an existing design that has
proven itself to be reliable. In much the same way, software can be designed to accommodate reuse
in many situations. A simple example of software reuse could be the development of an efficient
sorting routine that can be incorporated in many future applications.
Software Documentation - is one of the items which is said to lead to high maintenance costs. It
is not just the program listing with comments. A program librarian must be responsible for the
system documentation, but programmers are responsible for the technical writing. Other aids may
be text editors, and Source Code Control System (SCCS) tool for producing records. Some
companies insist that programmers dictate any test or changes onto a tape every day.
Software Complexity - is a measure of the resources which must be expended in developing,
maintaining, or using a software product. A large share of the resources is used to find errors,
debug, and retest; thus, an associated measure of complexity is the number of software errors.
Items to consider under resources are:
time and memory space;
man-hours to supervise, comprehend, design, code, test, maintain and change software;
number of interfaces;
scope of support software, e.g. assemblers, interpreters, compilers, operating systems,
maintenance program editors, debuggers, and other utilities;
the amount of reused code modules;
travel expenses;
secretarial and technical publications support;
Overheads relating to support of personnel and system hardware.
Consideration of storage, complexity, and processing time may or may not be considered in the
conceptualization stage. For example, storage of a large database might be considered, whereas if
one already existed, it could be left until system-specification.
Measures of complexity are useful to:
1. Rank competitive designs and give a "distance" between rankings.
2. Rank difficulty of various modules in order to assign personnel.
3. Judge whether subdivision of a module is necessary.
4. Measure progress and quality during development.
Software Reliability - Reliability refers to: issues related to the design of the product, so that it
will operate well for a substantial length of time; and a metric, which is the probability of operational
success of the software.
Probabilistic Models can refer to deterministic events (e.g. motor burns out) when it cannot be
predicted when they will occur; or to random events. The probability space (the space of all
possible occurrences) must first be defined, e.g. in a probability model for program errors it is all
possible paths in a program. Then the rules for selection are specified, e.g. for each path,
combinations of initial conditions and input values. A software failure occurs when an execution
sequence containing an error is processed.
Software Reliability Theory is the application of probability theory to the modeling of failures
and the prediction of success probability. Thus Software reliability is the probability that the
program performs successfully, according to specifications, for a given time period. This requires
precise statements/specifications of:
the host machine
the operating system and support software
the operating environment
the definition of success
details of hardware interfaces with the machine
details of ranges and rates of I/O data
The operational procedures.
Errors are found from a system failure, and may be: hardware, software, operator, or unresolved.
Time may be divided into:
operating time,
calendar time during operation,
calendar time during development,
man-hours of coding,
development, testing,
debugging,
Computer test times.
Software is repairable if it can be debugged and the errors corrected. This may not be possible
without inconveniencing the user, e.g. air-traffic control system.
Software Reliability is "the frequency and criticality of software failure, where failure is an
unacceptable effect or behavior occurring under permissible operating conditions" [Balci 1997].
The frequency of software failure is measured by the average time between failures. The
criticality of software failure is measured by the average time required for repair. Ideally,
software engineers want their products to fail as little as possible (i.e., demonstrate high
correctness) and be as easy as possible to fix (i.e., demonstrate good maintainability). For some
real-time systems such as air traffic control or heart monitors, reliability becomes the most
important software quality characteristic. However, it would be difficult to imagine a highly
reliable system that did not also demonstrate high correctness and good maintainability.
Software availability - is the probability that the program is performing successfully, according
to specifications, at a given point in time. Availability is defined as:
1. The ratio of systems up at some instant to the size of the population studied (no. of
systems).
2. The ratio of observed uptime to the sum of the uptime and downtime (for a single system):
A = T(up) / (T(up) + T(down))
These measurements are used to:
Quantify A and compare with other systems or goals.
Track A over time to see if it increases as errors are found.
Plan for repair personnel, facilities (e.g. test time) and alternative service.
If the system is still in the design and development phase, then a third definition is used:
3. The ratio of the mean time to failure (uptime) to the sum of the mean time to failure and
the mean time to repair (downtime): A = MTTF / (MTTF + MTTR)
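A small sketch of the two availability calculations defined above; the uptime, downtime, failure and repair figures are invented:

# Availability sketch. All times are invented and expressed in hours.

# (1) Operational system: observed uptime and downtime over a period.
uptime, downtime = 712.0, 8.0
availability_observed = uptime / (uptime + downtime)

# (2) Design/development phase: mean time to failure and mean time to repair,
#     here derived from assumed per-failure records.
times_to_failure = [150.0, 210.0, 180.0]   # hours of operation before each failure
repair_times = [2.0, 5.0, 3.0]             # hours to repair each failure

mttf = sum(times_to_failure) / len(times_to_failure)
mttr = sum(repair_times) / len(repair_times)
availability_predicted = mttf / (mttf + mttr)

print(f"Observed availability  A = {availability_observed:.4f}")
print(f"Predicted availability A = {availability_predicted:.4f} "
      f"(MTTF = {mttf:.0f} h, MTTR = {mttr:.1f} h)")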
Various hypotheses exist about program errors, and seem to be true, but no controlled tests have
been run to prove or disprove them:
1. Bugs per line constant. There are fewer errors per line in a high level language. Many
types of errors in machine code do not exist in HOL.
2. Memory shortage encourages bugs. Mainly due to programming "tricks" used to squeeze
code.
3. Heavy load causes errors to occur. Very difficult to document and test heavy loads.
4. Tuning reduces error occurrences rate. This involves removing errors for a class of input
data. If new inputs are needed, new errors could occur, and the system (hardware and
software) must be retuned.
Further hypotheses about errors:
1. The normalized number of errors is constant. Normalization is the total number of errors
divided by the number of machine language instructions.
2. The normalized error-removal rate is constant. These two hypotheses apply over similar
programs.
3. Bug characteristics remain unchanged as debugging proceeds. Those found in the first
few weeks are representative of the total bug population.
4. Independent debugging results in similar programs. When two independent debuggers
work on a large program, the evolution of the program is such that the difference between
their versions is negligible.
Many researchers have put forward models of reliability based on measures of the hardware, the
software, and the operator; and used them for prediction, comparative analysis, and development
control. Error reliability and availability models provide a quantitative measure of the goodness
of the software. There are still many unanswered questions.
Software Portability is "the ease with which software can be used on computer configurations
other than its current one" [Balci 1997]. Porting software to other computer configurations is
important for several reasons. First, "good software products can have a life of 15 years or more,
whereas hardware is frequently changed at least every 4 or 5 years. Thus good software can be
implemented, over its lifetime, on three or more different hardware configurations" [Schach 1999].
Second, porting software to a new computer configuration may be less expensive than developing
analogous software from scratch. Third, the sales of "shrink-wrapped software" can be increased
because a greater market for the software is available.
Software Efficiency is "the degree with which software fulfills its purpose without waste of
resources" [Balci 1997]. Efficiency is really a multifaceted quality characteristic and must be
assessed with respect to a particular resource such as execution time or storage space. One measure
of efficiency is the speed of a program's execution. Another measure is the amount of storage space
the program requires for execution. Often these two measures are inversely related, that is,
increasing the execution efficiency causes a decrease in the space efficiency. This relationship is
known as the space-time tradeoff. When it is not possible to design a software product with
efficiency in every aspect, the most important resources of the software are given priority.
Software Comprehensibility - This includes such issues as:
high and low-level comments
mnemonic variable names
complexity of control flow
general program type
Sheppard did experiments with professional programmers, types of program (engineering,
statistical, nonnumeric), levels of structure, and levels of mnemonic variable names. He found that
the least structured program was most difficult to reconstruct (after studying for 25 minutes) and
the partially structured one was easiest. No differences were found for mnemonic variable names,
or order of presentation of programs.
The software metrics and the aspects of quality assurance can be summarized as follows.
A hierarchical software characteristic tree is defined (in the original figure an arrow indicates logical
implication). The lowest-level characteristics are combined into medium-level characteristics, and the
lowest-level ones are recommended as quantitative metrics. Each characteristic is defined and then
evaluated by its correlation with program quality, its potential benefits in terms of insights and decision
inputs for the developer and user, whether it is quantifiable, and the feasibility of automating its
evaluation. The list is more useful as a check for programmers than as a guide to program construction.
SOFTWARE TESTING
Defect testing - Testing programs to establish the presence of system defects. The goal is to
discover defects in programs. A successful defect test is a test which causes a program to behave
in an anomalous way. Tests can show the presence, not the absence, of defects
Component testing - Testing of individual program components, usually the responsibility of
the component developer (except sometimes for critical systems). Tests are derived from the
developer's experience
Integration testing - Testing of groups of components integrated to create a system or sub-
system. This is the responsibility of an independent testing team. Tests are based on a system
specification
Black-box testing - An approach to testing where the program is considered as a black-box in
that the program test cases are based on the system specification
Test data - Inputs which have been devised to test the system
Test cases - Inputs to test the system and the predicted outputs from these inputs if the
system operates according to its specification
Testing guidelines (an illustrative test sketch follows this list):
Test software with sequences which have only a single value
Use sequences of different sizes in different tests
Derive tests so that all the elements of the sequence are accessed
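The test sketch below (using Python's unittest) illustrates the guidelines above against a hypothetical sequence-processing component, here a simple sorting routine; it is an illustration, not a prescribed test suite:

# Illustrative tests following the guidelines above, written with unittest.
# The component under test (a simple sort) is a stand-in for any
# sequence-processing routine.
import unittest

def sort_sequence(seq):
    """Component under test: returns a new, sorted list."""
    return sorted(seq)

class SequenceTests(unittest.TestCase):
    def test_single_value_sequence(self):
        # Guideline: test with sequences that have only a single value.
        self.assertEqual(sort_sequence([7]), [7])

    def test_different_sizes(self):
        # Guideline: use sequences of different sizes in different tests.
        for seq in ([], [3, 1], [5, 2, 9, 1], list(range(100, 0, -1))):
            self.assertEqual(sort_sequence(seq), sorted(seq))

    def test_all_elements_accessed(self):
        # Guideline: derive tests so every element of the sequence is used;
        # here, every input element must appear in the output.
        data = [4, 8, 15, 16, 23, 42]
        self.assertCountEqual(sort_sequence(data), data)

if __name__ == "__main__":
    unittest.main()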
White Box Testing - also referred to as structural testing; it is the derivation of test cases
according to program structure, where knowledge of the program is used to identify additional
test cases, for example to exercise particular conditions and array elements
Path testing - The objective of path testing is to ensure that the set of test cases is such that each
path through the program is executed at least once. The starting point for path testing is a
program flow graph that shows nodes representing program decisions and arcs representing the
flow of control
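As a small illustration of path testing, the function below contains two decisions, and the test cases are chosen so that each path through its flow graph is executed at least once; the function and the cases are invented:

# Path-testing sketch: choose test cases so that each path through the
# decisions below is executed at least once.

def classify(balance: float, overdraft_allowed: bool) -> str:
    """Toy component with two decisions (invented for illustration)."""
    if balance >= 0:
        return "in credit"
    if overdraft_allowed:           # only reached when balance < 0
        return "overdrawn (allowed)"
    return "overdrawn (blocked)"

# One test case per path through the flow graph.
path_cases = [
    ((100.0, False), "in credit"),            # path: first branch true
    ((-50.0, True),  "overdrawn (allowed)"),  # path: first false, second true
    ((-50.0, False), "overdrawn (blocked)"),  # path: both branches false
]

for (balance, allowed), expected in path_cases:
    actual = classify(balance, allowed)
    assert actual == expected, f"{(balance, allowed)} -> {actual}, expected {expected}"
print("All paths exercised")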
Top-down testing - Starts with the high-level system and integrates from the top downwards,
replacing individual components by stubs where appropriate
Bottom-up testing - Integrate individual components in the various levels until the complete
system is created.
NB. In practice, most testing approaches involve a combination of these strategies
Interface testing - Takes place when modules or sub-systems are integrated to create larger
systems. Objectives are to detect faults due to interface errors or invalid assumptions about
interfaces. Particularly important for object-oriented development as objects are defined by their
interfaces
Interfaces types
Parameter interfaces - Data passed from one procedure to another
Shared memory interfaces - Block of memory is shared between procedures
Procedural interfaces - Sub-system encapsulates set of procedures to be called by other systems
Message passing interfaces - Sub-systems request services from other sub-systems
Stress testing - Stress testing checks for unacceptable loss of service or data. Particularly
relevant to distributed systems which can exhibit severe degradation as a
network becomes overloaded. Stressing the system often causes defects to be revealed.
Object-oriented testing - The components to be tested are object classes that are instantiated as
objects; this is an extension of white-box testing.
Scenario-based testing - Identify scenarios from use-cases and supplement these with
interaction diagrams that show the objects involved in the scenario
SOFTWARE VERIFICATION AND VALIDATION
Verification and Validation are concerned with assuring that a software system meets a user's
needs
Validation: validation shows that the program meets the customer's needs. The software should
do what the user really requires. The designers are guided by the notion of whether they are
building the right product
Verification: Verification shows conformance with specification. The software should conform
to its functional specification. The designers are guided by the notion whether they are building
the product right
Static and dynamic verification
Static verification (software inspections) is concerned with the analysis of the static system
representation to discover problems within the software product, based on document and code
analysis
Dynamic verification (software testing) is concerned with exercising and observing product
behaviour: the system is executed with test data and its operational behaviour is observed
Program testing is done to reveal the presence of errors, NOT their absence. A successful test
is a test which discovers one or more errors. Testing is also the only validation technique for
non-functional requirements
Verification and validation should establish confidence that the software is fit for the purpose it is
designed for. This does NOT mean it is completely free of defects; rather, it must be good enough for
its intended use, which determines the degree of confidence needed. This depends on the system's
purpose, user expectations and the marketing environment:
Software function - the level of confidence needed depends on how critical the software is to
an organisation
User expectations - Users may have low expectations of certain kinds of software
Marketing environment - getting a product to the market early may be more important
than finding all the defects in the program
SOFTWARE TESTING
Definition 1 - Testing is classified as a dynamic verification and validation activity
Definition 2 - Testing involves actual execution of program code using representative test data
sets to exercise the program; outputs are examined to detect any deviation from the expected
output
Reviews can be applied to:
Requirement specifications
High level system designs
Detailed designs
Program code
User documentation
Operation of delivered system
Objectives of Testing
1. To demonstrate the operation of the software.
2. To detect errors in the software and therefore:
Obtain a level of confidence,
Produce measure of quality.
THE TESTING PROCESS
Except for small programs, systems should not be tested as a single unit; testing should proceed in
stages, carried out incrementally in conjunction with system implementation.
The most widely used testing process consists of 5 stages:
(a) Unit testing
(b) Module testing
(c) Sub-system testing
(d) System testing
(e) Acceptance (alpha) testing.
(A) UNIT TESTING
Unit testing is where individual components are tested independently to ensure they operate
correctly.
(B) MODULE TESTING
A module is a collection of dependent components e.g. an object class, an abstract data type or
collection of procedures and functions.
Module testing is where related components (modules) are tested without other system modules.
(C) SUB-SYSTEM TESTING
Collections of modules are integrated into sub-systems; sub-system testing concentrates on finding
errors in the interfaces between the modules.
(D) SYSTEM TESTING
Sub-systems are integrated to make up the system. System testing aims at finding errors resulting
from unanticipated interactions between sub-systems and system components. It also aims at
validating that the system meets its functional and non-functional requirements.
(E) ACCEPTANCE TESTING (ALPHA TESTING)
Acceptance testing is also known as alpha testing; it is the last stage in the testing process.
In this case the system is tested with real data (from client) and not simulated test data.
Acceptance testing:
Reveals errors and omissions in the system requirements definition.
Tests whether the system meets the users' needs and whether the system performance is acceptable.
Acceptance testing is carried out until users/clients agree that it is an acceptable implementation of the
system.
N/B 1:Beta testing
Beta testing approach is used for software to be marketed.
It involves delivering it to a number of potential customers who agree to use it and report problems
to the developers.
After this feedback, it is modified and released again for another beta testing or general use.
N/B 2:
The five stages of testing are based on incremental system integration, i.e.
(unit testing - module testing - sub-system testing - system testing - acceptance testing). Object-
oriented development is different because the levels are less clear/distinct:
operations and data are combined to form objects (the equivalent of units)
integrated objects form classes/clusters (the equivalent of modules)
therefore class testing is, in effect, cluster testing.
TEST PLANNING
Test planning is setting out standards for the testing process rather than describing product tests.
Test plans allow developers to get an overall picture of the system tests, as well as ensuring that the
required hardware, software and other resources are available to the testing team.
Components of a test plan:
Testing process
This is a description of the major phases of the testing process.
Requirement traceability
This is a plan to test all requirements individually.
Testing schedule
This includes the overall testing schedule and resource allocation.
Test recording procedures
This is the systematic recording of test results.
Hardware and software requirements.
Here you set out the software tools required and hardware utilization.
Constraints
This involves anticipating hardships/drawbacks affecting testing, e.g. staff shortages.
N/B - Test plan should be revised regularly.
TESTING STRATEGIES
This is the general approach to the testing process.
There are different strategies depending on the type of system to be tested and development process
used: -
Top-down testing
This involves testing from most abstract component downwards.
Bottom-up testing
This involves testing from fundamental components upwards.
Thread testing
This is testing for systems with multiple processes where the processing of transactions
threads through these processes.
Stress testing
This relies on stressing the system by going beyond its specified limits, therefore
testing how well it can cope with overload situations.
Back to back testing
It is used to test versions of a system and compare the outputs.
N/B- Large systems are usually tested using a mixture of strategies.
Top-down testing
Tests high levels of a system before testing its detailed components. The program is represented
as a single abstract component with sub-components represented by stubs.
Stubs have the same interface as the component, but limited functionality.
After the top-level component (the system program) is tested, its sub-components (sub-systems) are
implemented and tested in the same way, continuing down to the bottom-level components (units).
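A minimal sketch of a stub used in top-down testing: the high-level component is exercised while a lower-level component it depends on is replaced by a stub with the same interface but limited, canned behaviour; all names and values are invented:

# Top-down testing sketch: the report generator (high level) is tested
# before the real tax calculator (lower level) exists, using a stub.

def real_tax_calculator(gross: float) -> float:
    raise NotImplementedError("lower-level component not yet implemented")

def stub_tax_calculator(gross: float) -> float:
    # Stub: same interface as the real component, but limited functionality,
    # returning a fixed, easy-to-check value.
    return 100.0

def payslip_report(gross: float, tax_calculator) -> str:
    """High-level component under test; the calculator is passed in."""
    tax = tax_calculator(gross)
    return f"gross={gross:.2f} tax={tax:.2f} net={gross - tax:.2f}"

# Test the top-level component with the stub in place of the real calculator.
assert payslip_report(1000.0, stub_tax_calculator) == "gross=1000.00 tax=100.00 net=900.00"
print("top-level component behaves correctly with the stub")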
If top-down testing is used:
- Unnoticed structural/design errors may be detected early
- Validation is done early in the process.
Disadvantages of Using Top down Testing
1. It is difficult to implement because stubs are required to simulate the lower levels of the system;
for complex components it is impractical to produce a stub that behaves correctly (it may, for
example, require knowledge of internal pointer/data representations).
2. Test output is difficult to observe. Some higher-level components do not generate output and must
therefore be forced to do so (e.g. classes), which means creating an artificial environment to
generate test results.
N/B: top-down testing is therefore not always appropriate for object-oriented systems, although individual components may be tested in this way.
Bottom-up testing
This is the opposite of top-down testing. This is testing modules at lower levels in the hierarchy,
then working up to the final level.
The advantages of bottom-up testing are the disadvantages of top-down testing, and vice versa:
1. A drawback is that architectural faults are unlikely to be discovered until much of the system has been tested.
2. It is appropriate for object oriented systems because individual objects can be tested using their
own test drivers, then integrated and collectively tested.
Thread testing (transaction flow testing - Beizer 1990)
This is for testing real time systems.
It is an event-based approach where tests are based on the events, which trigger system actions.
It may be used after objects have been individually tested and integrated into sub-system.
-Processing of each external event threads its way through the system processes or
objects with processing carried out at each stage
It involves identifying and executing each possible processing thread.
The system should be analyzed to identify as many threads as possible.
After each thread has been tested with a single event, processing of multiple events of same type
should be tested without events of any other type (multiple-input thread testing).
After multiple-input thread testing, the system is tested for its reactions to more than one class of
simultaneous event i.e. multiple thread testing.
SOFTWARE MAINTENANCE
Definition 1 - Maintenance is the process of changing a system after it has been delivered and is
in use.
Simple maintenance - correcting coding errors
Extensive maintenance - correcting design errors
Enhancement - correcting specification errors or accommodating new requirements.
Definition 2 - Maintenance is the evolution i.e. process of changing a system to maintain its ability
to survive.
The maintenance stage of system development involves
a) correcting errors discovered after other stages of system development
b) improving implementation of the system units
c) enhancing system services as new requirements are perceived
Information is fed back to all previous development phases: errors and omissions in the original
software requirements are discovered, program and design errors are found, and the need for new
software functionality is identified.
TYPES OF SOFTWARE MAINTENANCE
The following are the different types of maintenance:
Corrective maintenance - This involves fixing discovered errors in software (coding errors,
design errors, requirement errors). Once the software is implemented and in full operation, it is
examined to see if it has met the objectives set out in the original specifications. Unforeseen
problems may need to be overcome, and this may involve returning to earlier stages in the system
development life cycle to take corrective action.
Adaptive maintenance - This is changing the software to operate in a different environment
(operating system, hardware); it does not radically change the software's functionality. After
running the software for some time, the original environment (e.g. the operating system and
peripherals) for which the software was developed may change.
At this stage the software will be modified to accommodate the changes that have occurred in
its external environment. This could even call for a repeat of the system development life cycle
yet again.
Perfective maintenance - Implementing new functional or non-functional system requirements,
generated by software customers as their organization or business changes. Also as the software is
used, the user will recognize additional functions that could provide benefits or enhance the
software if added to it
Preventive maintenance - Making changes on software to prevent possible problems or
difficulties (collapse, slow down, stalling, self-destructive e.g. Y2K).
Operation stage involves
use of documentation to train users of system and its resource
system configuration
repairs and maintenance
safety precautions
data control
Train user to get help on the system.
Maintenance costs (fixing bugs) are usually higher than the original development cost due to:
I. The program being maintained may be old and not consistent with modern software engineering
techniques. It may be unstructured and optimized for efficiency rather than
understandability.
II. Changes made may introduce new faults, which trigger further change requests. This is
mainly since complexity of the system may make it difficult to assess the effects of a change.
III. Changes made tend to degrade system structure, making it harder to understand and make
further changes (program becomes less cohesive.)
IV. Loss of the links between the program and its associated documentation, making the
documentation unreliable and creating the need for new documentation.
Factors affecting maintenance
Module independence - Use of design methods that allow easy change through concepts such as
functional independence or object classes (where one can be maintained independently)
Quality of documentation - A program is easier to understand when supported by clear and
concise documentation.
Programming language and style - Use of a high-level language and adoption of a consistent style
throughout the code.
Program validation and testing - Comprehensive validation of system design and program
testing will reduce corrective maintenance.
Configuration management - Ensure that all system documentation is kept consistent throughout
the various releases of the system (documentation of new editions).
Understanding of current system and staff availability - Original development staff may not
always be available. Undocumented code can be difficult to understand (team management).
Application domain - Clear and understood requirements.
Hardware stability concerns for the equipment to be fault tolerant
Dependence of program on external environment
SOFTWARE MAINTENANCE PROCESS
The maintenance process is triggered by:
(a) A set of change requests from users, management or customers.
(b) The cost and impact of the changes are assessed; if acceptable,
(c) a new release is planned involving the maintenance elements (adaptive, corrective, perfective...)
NB. Changes are implemented and validated and new versions of system released.
SOFTWARE CONFIGURATION MANAGEMENT
Software configuration is a collection of the items that comprise all information produced as part
of the software process. The output of the software process is information, and includes computer
programs, documentation and data (both internal to the programs and external to them).
Software Configuration Management
Definition 1
These are a set of activities developed to manage change throughout the life cycle of computer
software.
Changes are caused by the following:-
New customer needs
New business / market conditions and rules
Budgetary or scheduling constraints etc
Reorganization or restructuring of business for growth
Definition 2
The process which controls the changes made to a system, and manages the different versions of
the evolving software product. It involves development and application of procedures and
standards for managing an evolving system product. Procedures should be developed for building
systems and releasing them to customers.
Standards should be developed for recording and processing proposed system
changes and for identifying and storing different versions of the system.
Configuration managers (team) are responsible for controlling software changes. Controlled
systems are called baselines. They are the starting point for controlled evolution.
Software may exist in different configurations (versions).
produced for different computers (hardware)
produced for different operating system
produced for different client-specific functions etc
Configuration managers are responsible for keeping track of difference between software versions
and ensuring new versions are derived in a controlled way.
They are also responsible for ensuring that new versions are released to the correct customers at the
appropriate time.
Configuration management and its associated documentation should be based on a set of standards,
which should be published in a configuration management handbook (or quality handbook), e.g.
IEEE Standard 828-1983, the standard for software configuration management plans.
Main configuration managements activities:
1. Configuration management planning (planning for product evolution)
2. Managing changes to the systems
3. Controlling versions and releases (of systems)
4. Building systems from other components
Benefits of effective configuration management
Better communication among staff
Better communication with the customer
Better technical intelligence
Reduced confusion for changes
Screening of frivolous changes
Provides a paper trail
Configuration Management and Planning
Configuration management takes control of systems after they have been developed; therefore
planning for the process must start during development.
The plan should be developed as part of overall project planning process.
The plan should include:
(a) Definitions of what entities are to be managed and formal scheme for identifying these entities.
(b) Statement of configuration management team.
(c) Configuration management policies for change and version control / management.
(d) Description of the tools to be used in configuration management and the process to be used.
(e) Definition of the configuration database which will be used to record configuration
information. (Recording and retrieval of project information.)
(f) Description of management of external information
(g) Auditing procedures.
The configuration database is used to record all relevant information relating to configurations to:
a) assist with assessing the impact of system changes
b) provide management information about configuration management.
The configuration database defines/describes (a minimal sketch of such a database follows this list):
the customers who have taken delivery of a particular version
the hardware and operating system requirements needed to run a given version
the number of versions of the system made so far and when they were made, etc.
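A minimal sketch of what such a configuration database might record, using an in-memory SQLite database; the tables and fields are assumptions chosen to match the items listed above (versions, the customers who received them, and the hardware/OS needed to run each version):

# Sketch of a configuration database covering the items listed above.
# The schema and the sample data are illustrative assumptions.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE version (
    version_id   TEXT PRIMARY KEY,        -- e.g. '1.1'
    created_on   TEXT NOT NULL,           -- when the version was made
    min_os       TEXT NOT NULL,           -- operating system requirement
    min_hardware TEXT NOT NULL            -- hardware requirement
);
CREATE TABLE delivery (
    customer     TEXT NOT NULL,           -- who has taken delivery
    version_id   TEXT NOT NULL REFERENCES version(version_id),
    delivered_on TEXT NOT NULL
);
""")

db.execute("INSERT INTO version VALUES ('1.0', '2023-01-10', 'Linux 5.x', '4 GB RAM')")
db.execute("INSERT INTO version VALUES ('1.1', '2023-06-02', 'Linux 5.x', '4 GB RAM')")
db.execute("INSERT INTO delivery VALUES ('Acme Ltd', '1.1', '2023-06-15')")

# Which customers have taken delivery of version 1.1, and what does it need to run?
rows = db.execute("""
    SELECT d.customer, v.min_os, v.min_hardware
    FROM delivery d JOIN version v ON v.version_id = d.version_id
    WHERE v.version_id = '1.1'
""").fetchall()
print(rows)   # [('Acme Ltd', 'Linux 5.x', '4 GB RAM')]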
CONFIGURATION MANAGEMENT TOOLS
Configuration management (CM) is a procedural process, so it can be modelled and integrated
with a version management system
Configuration Management processes are standardised and involve applying pre-defined
procedures so as to manage large amounts of data
Configuration Management tools
Form editor - to support processing the change request forms
Workflow system - to define who does what and to automate information transfer
Change database - that manages change proposals and is linked to a VM system
Version and release identification - Systems assign identifiers automatically when a new
version is submitted to the system
Storage management - the system stores the differences between versions rather than all of the
version code (a sketch of this idea follows this list)
Change history recording - Record reasons for version creation
Independent development - only one version at a time may be checked out for change, or
parallel working on different versions is supported
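As an illustration of the storage management point above, the sketch below stores a newer version of a file as a delta (edit operations) against the previous version and rebuilds it on demand, using difflib from the Python standard library; real configuration management tools use more compact, file-format-aware delta encodings:

# Storage-management sketch: keep the oldest version in full and store newer
# versions as deltas (edit operations against the previous version).
import difflib

def make_delta(old, new):
    """Delta = list of edit operations turning `old` into `new`."""
    ops = []
    matcher = difflib.SequenceMatcher(a=old, b=new)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))          # reuse lines i1:i2 of the old version
        else:
            ops.append(("insert", new[j1:j2]))    # store only the new/changed lines
    return ops

def apply_delta(old, delta):
    """Rebuild the new version from the old version plus the delta."""
    out = []
    for op in delta:
        if op[0] == "copy":
            out.extend(old[op[1]:op[2]])
        else:
            out.extend(op[1])
    return out

version1 = ["def area(r):\n", "    return 3.14 * r * r\n"]
version2 = ["import math\n", "def area(r):\n", "    return math.pi * r * r\n"]

delta = make_delta(version1, version2)
assert apply_delta(version1, delta) == version2
print("version 2 rebuilt from version 1 + delta:", delta)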
CASE TOOLS FOR CONFIGURATION MANAGEMENT
Computer Aided Software Engineering (CASE) tools support for CM is therefore essential and
are available ranging from stand-alone tools to integrated CM workbenches
A CASE tool is a software package that supports the construction and maintenance of a logical
system specification model
Designed to support rules and interactions of models defined in a specific methodology
Also permit software prototyping and code generation
Aim to automate document production process by ensuring automation of analysis and design
operations
ADVANTAGES OF CASE TOOLS
Make construction of the various analysis and design logical elements easy e.g. DFD,
ERM etc
Integration of separate elements allowing software to do additional tasks e.g. rechecking
and notifying on defined data and programs
Streamline the development of the analysis documentation allowing for use of graphics
and manipulation of the data dictionaries
Allow for easy maintenance of specifications which in turn will be more reliably updated
Enforce rigorous standards for all developers and projects making communication more
efficient
Check specifications for errors, omissions and inconsistencies
Provide everyone on the project team with easy access to the latest updates and project
specifications
Encourage iterative refinements, resulting in a higher quality system that better meets the
needs of the users
DISADVANTAGES
CASE products can be expensive
CASE technology is not yet fully evolved so its software is often large and inflexible
Products may not provide a fully integrated development environment
There is usually a long learning period before the tools can be used effectively, i.e. benefits are
not realized immediately
Analysts must have a mastery of the structured analysis and design techniques if they are to
exploit CASE tools
Time and cost estimates may have to be inflated to allow for an extended learning period of
CASE tools
PROGRAM EVOLUTION DYNAMICS
Program evolution is the study of system change. Lehman's Laws (Lehman and Belady, 1985) describe
system change.
The Laws are:
a) Law of Continuing change - Program used in real world environment must change or become
progressively less useful in the environment.
b) Law of Increasing complexity - Program changes make its structure more complex therefore
extra resources must be devoted to preserving and simplifying the structure.
c) Law of Large program evolution - Program evolution is a self-regulating process.
d) Law of Organisational stability - Over a program's lifetime, its rate of development is approximately
constant and independent of the resources devoted to system development.
e) Law of Conservation of familiarity - Over the system's lifetime, the incremental change in each
release is approximately constant
SOFTWARE ENGINEERING DOCUMENTATION
It is a very important aid to maintenance engineers.
Definition - It includes all documents describing implementation of the system from requirements
specification to final test plan.
The documents include:
Requirement documents and an associated rationale
System architecture documents
Design description
Program source code
Validation documents on how validation is done
Maintenance guide for possible /known problems.
Documentation should be
Clear and non-ambiguous
Structured and directive
Readable and presentable
Tool-assisted (case tools) in production (automation).
SYSTEM DOCUMENTATION
Items for documentation to be produced for a software product include:-
System Request - this is a written request that identifies deficiencies in the current system
and requests a change
Feasibility Report - this indicates the economic, legal, technical and operational feasibility of
the proposed project
Preliminary Investigation Report - this is a report to the management clearly specifying the
identified problems within the system; further action to be taken is also recommended
System Requirements Report - this specifies all the end-user and management
requirements, all the alternative plans, their costs and the recommendations to the management
System Design Specification - it contains the designs for the inputs, outputs, program files and
procedures
User Manual - it guides the user in the implementation and installation of the information
system
Maintenance Report - a record of the maintenance tasks done
Software Code - this refers to the code written for the information system
Test Report - this should contain test details, e.g. sample test data and results etc.
Tutorials - a brief demonstration and exercise to introduce the user to the working of the
software product
SOFTWARE DOCUMENTATION
The typical items included in the software documentation are
Introduction - shows the organization's principles, abstracts for other sections and a notation
guide
Computer characteristics - a general description with particular attention to key attributes and
summarized features
Hardware interfaces - a concise description of information received or transmitted by the
computer
Software functions - shows what the software must do to meet requirements, in various
situations and in response to various events
Timing constraints - how often and how fast each function must be performed
Accuracy constraints - how close output values must be to ideal/expected values for them to be
acceptable
Response to undesired events - what the software must do in events such as a sensor going down,
invalid data, etc.
Program sub-sets - what the program should do if it cannot do everything
Fundamental assumptions - the characteristics of the program that will stay the same, no
matter what changes are made
Changes - the types of changes that have been made or are expected
Sources - an annotated list of documentation and personnel, indicating the types of questions each
can answer
Glossary - most documentation is fraught with acronyms and technical terms
9. SOFTWARE QUALITY EVALUATION
This concerns identifying key issues or measures that should show where a program is deficient.
Managers must decide on the relative importance of:
On-time delivery of the software product
Efficient use of resources e.g. processing units, memory, peripheral devices etc
Maintainable code issues e.g. comprehensibility, modifiability, portability etc
Problem areas cited in software production include:
1. User demands for enhancements, extensions
2. Quality of system documentation
3. Competing demands on maintenance personnel time
4. Quality of original programs
5. Meeting scheduled commitments
6. Lack of user understanding of system
7. Availability of maintenance program personnel
8. Adequacy of system design specifications
9. Turnover of maintenance personnel
10. Unrealistic user expectations
11. Processing time of system
12. Forecasting personnel requirements
13. Skills of maintenance personnel
14. Changes to hardware and software
15. Budgetary pressures
16. Adherence to programming standards in maintenance
17. Data integrity
18. Motivation of maintenance personnel
19. Application failures
20. Maintenance programming productivity
21. Hardware and software reliability
22. Storage requirements
23. Management support of system
24. Lack of user interest in system
Quality Assurance
A quality management system sets out:
- the relevant procedures and standards to be followed
- the Quality Assurance assessments to be carried out
Definition: Quality Assurance comprises controls to ensure that:
- relevant procedures and standards are followed
- relevant deliverables are produced
Software quality assurance techniques:
The quality and reliability of software can be improved by using
A standard development methodology
Software metrics
Thorough testing procedures
Allocating resources to put more emphasis on the analysis and design stages of systems
development.
Specifying the standards to be applied during development enforces the quality of products. The
specifications should include the Quality Assurance (QA) standards to be adopted, which should be
one of the recognized standards or client-specified ones, e.g.:
Correctness - ensures the system operates correctly, provides value to its user and performs
the required functions; defects must therefore be fixed/corrected
Maintainability - is the ease with which the system can be corrected if an error is encountered,
adapted if its environment changes, or enhanced if the user desires a change in requirements
Integrity - is the measure of the system's ability to withstand attacks (accidental or intentional) on
its security in terms of data processing, program performance and documentation
Usability - is the measure of the user-friendliness of a system, as measured in terms of the physical and
intellectual skills required to learn the system, the time required to become moderately efficient in
using it, the net increase in productivity when used by a moderately efficient user, and the general user
attitude towards the system.
System Quality can be looked at in two ways: -
Quality of design - the characteristics the designers specify for an item/product: the grade of
the materials, tolerances and performance specifications
Quality of conformance - the degree to which the design specifications are followed during
development and construction (implementation)
QUALITY ASSURANCE
Since quality should be measurable, then quality assurance needs to be put in place
Quality Assurance consists of the auditing and reporting functions of management
Quality Assurance must outline the standards to be adopted i.e. either International recognized
standards or client designed standards
Quality Assurance must lay down the working procedure to be adopted during project lifetime,
which includes: -
Design and Program reviews
Program monitoring and reporting
Quality Assurance related procedure
Test procedure and Fault reporting
Delivery and Liaison mechanisms
Safety aspects and Resource usage
The Quality Assurance system should be managed independently of the development and production
departments, and clients should have a right of access to the contractor's Quality Assurance system
and plan
Quality Assurance builds the client's confidence (increasing acceptability) as well as the contractor's own
confidence that they are building the right system and that it will be highly acceptable
Testing and error correction assures system will perform as expected without defects or collapse
and also ensures accuracy and reliability.
POOR QUALITY SYSTEM
High cost of maintenance and correcting errors (unnecessary maintenance)
Low productivity due to poor performance
Unreliability in terms of functionalities
Risk of injury from safety-critical systems (e.g. robots)
Loss of business due to errors
Lack of client confidence in the developers.
PROFESSIONAL ISSUES IN SYSTEMS DEVELOPMENT
System development is a profession and belongs to the engineering discipline that employs
scientific methods in solving problems and providing solutions to the society.
A profession is an employment (not mechanical) that requires some degree of learning; a calling or
habitual employment; also, the collective body of persons engaged in any such calling.
The main professional task in system development is the management of the tasks, with the aim of
producing a system that meets users' needs, on time and within budget.
Therefore the main concerns of management are: planning, progress monitoring and quality
control
There are a number of tasks carried out in an engineering organization, classified by their
function:
Production - activities that directly contribute to creating products and services the organization
sells
Quality management - activities necessary to ensure that the quality of products/services is maintained
at the agreed level
Research and development - ways of creating / improving products and production process
Sales and Marketing - selling products / services and involves activities such as advertising,
transporting, distribution etc
INDIVIDUAL PROFESSIONAL RESPONSIBILITIES
Do not harm others - ethical behaviour is concerned both with helping clients satisfy their needs
and with not hurting them
Be competent - IT professionals must master the complex body of knowledge in their profession; this is a
challenging issue because IT is a dynamic and rapidly evolving field, and wrong advice to the client
can be costly
Maintain independence and avoid conflicts of interest - in exercising their professional
duties, they should be free from the influence, guidance or control of other parties, e.g. vendors, thus
avoiding corruption and fraud
Match clients' expectations - it is unethical to misrepresent either your qualifications or your ability
to perform a certain job
Maintain fiduciary responsibility - IT professionals are to hold in trust information provided to them
Safeguard client and source privacy - ensure the privacy of all private and personal information and
do not leak it
Protect records - safeguard the records they generate and keep on business transactions with their
clients
Safeguard intellectual property - they are trustees of information and software and hence must
recognize that these are intellectual property that must be safeguarded
Provide quality information - the creator of information/products must disclose information
about the quality and even the source of information in a report or product record
Avoid selection bias - IT professionals routinely make selection decisions at various stages of
the information life cycle; they must avoid the bias of a prevailing point of view, since selection is
related to censorship
Be a steward of a client's assets, energy and attention - provide information at the right time, in the
right place and at the right cost
Manage gate-keeping and censorship, and obtain informed consent
Obtain confidential information and keep client confidentiality
Abide by laws, contracts, and license agreements; exercise professional judgement