Software Engineering - Full - Notes
SYLLABUS
Module I: Introduction: Evolution; Software life cycle models: A few basic concepts, Waterfall model
and its extension, Agile development models, Spiral model, Comparison of different life cycle models
Module II: Software Project Management, Project Planning, Metrics for project size estimations,
Project Estimation Techniques, Basic COCOMO model, Risk Management, Software Requirements
Analysis and Specification: Requirements gathering and analysis, Software Requirements
Specification
Module III: Software Design: overview of the design process, How to characterise a good software
design, Cohesion and Coupling, Approaches to software design, Function oriented design: Overview
of SA/SD Methodology, Structured analysis, Developing the DFD model of a system, Structured
Design, User Interface design: Characteristics of a good user interface, Basic concepts, Types of user
interfaces
Module IV: Coding and Testing: Coding, Code review, Software documentation, Testing, Unit testing,
Black box testing, white box testing: Basic concepts, Debugging Integration testing, system testing,
Software Reliability and quality management: Software reliability, Software quality, Software
maintenance: Characteristics of software maintenance, Software reverse engineering, Emerging
Trends: Client Server Software, Client Server architectures, CORBA, Service Oriented Architectures
(SOA), Software as a Service.
The need for software engineering arises because of the higher rate of change in user requirements and in the environment on which the software operates.
Large software - It is easier to build a wall than a house or building; likewise, as the size of software becomes large, engineering has to step in to give it a scientific process.
Scalability - If the software process were not based on scientific and engineering concepts, it would be easier to re-create new software than to scale an existing one.
Cost - The hardware industry has shown its skills, and huge manufacturing has lowered the price of computer and electronic hardware. But the cost of software remains high if a proper process is not adopted.
Dynamic Nature - The always growing and adapting nature of software hugely depends upon the environment in which the user works. If the nature of software is always changing, new enhancements need to be made to the existing one. This is where software engineering plays a good role.
Quality Management - A better process of software development provides a better and quality software product.
Well-engineered and crafted software is expected to have the following
characteristics:
Operational
This tells us how well software works in operations. It can be measured on:
Budget
Usability
Efficiency
Correctness
Functionality
Dependability
Security
Safety
Transitional
This aspect is important when the software is moved from one platform to another:
Portability
Interoperability
Reusability
Adaptability
Maintenance
This aspect briefs about how well the software has the capability to maintain itself in the ever-changing environment:
Modularity
Maintainability
Flexibility
Scalability
In short, software engineering is a branch of computer science which uses well-defined engineering concepts required to produce efficient, durable, scalable, in-budget and on-time software products.
What is SDLC?
SDLC (Software Development Life Cycle) is a systematic process that defines the phases and activities involved in planning, building, testing, delivering, and maintaining a software product.
Why SDLC?
Here are the prime reasons why SDLC is important for developing a software system.
It offers a basis for project planning, scheduling, and estimating
Provides a framework for a standard set of activities and deliverables
It is a mechanism for project tracking and control
Increases visibility of project planning to all involved stakeholders of the development process
Increases and enhances development speed
Improved client relations
Helps you to decrease project risk and project management plan overhead
SDLC Phases
Classical waterfall model is the basic software development life cycle model. It is very simple but idealistic. Earlier this model was very popular, but nowadays it is rarely used. However, it remains very important because all the other software development life cycle models are based on the classical waterfall model.
Classical waterfall model divides the life cycle into a set of phases. This model assumes that one phase can be started only after completion of the previous phase; that is, the output of one phase becomes the input to the next phase. Thus the development process can be considered as a sequential flow, as in a waterfall, and the phases do not overlap with each other. The different sequential phases of the classical waterfall model (feasibility study, requirements analysis and specification, design, coding and unit testing, integration and system testing, and maintenance) are shown in the figure below:
• In a practical software development project, the classical waterfall model is hard to use. So, the Iterative waterfall model can be thought of as incorporating the necessary changes to the classical waterfall model to make it usable in practical software development projects. It is almost the same as the classical waterfall model, except that some changes are made to increase the efficiency of the software development.
• The iterative waterfall model provides feedback paths from every phase to its
preceding phases, which is the main difference from the classical waterfall
model.
Feedback paths introduced by the iterative waterfall model are shown in the figure below. When errors are detected at some later phase, these feedback paths allow correcting errors committed by programmers during some earlier phase. The feedback paths allow the phase in which the errors were committed to be reworked, and these changes are reflected in the later phases. However, there is no feedback path to the feasibility study stage, because once a project has been taken up, it is not given up easily.
It is good to detect errors in the same phase in which they are committed. It reduces the effort
and time required to correct the errors.
Phase Containment of Errors: The principle of detecting errors as close to their points of commitment as possible is known as phase containment of errors.
Advantages of Iterative Waterfall Model
• Feedback Path: In the classical waterfall model, there are no feedback paths, so there is no mechanism for error correction. But in the iterative waterfall model, the feedback path from one phase to its preceding phase allows correcting the errors that are committed, and these changes are reflected in the later phases.
• Simple: Iterative waterfall model is very simple to understand and use. That’s why it is
one of the most widely used software development models.
Drawbacks of Iterative Waterfall Model
• Difficult to incorporate change requests: The major drawback of the iterative waterfall model is that all the requirements must be clearly stated before the development phase starts. The customer may change requirements after some time, but the iterative waterfall model does not leave any scope to incorporate change requests that are made after the development phase starts.
• Incremental delivery not supported: In the iterative waterfall model, the full software
is completely developed and tested before delivery to the customer. There is no scope
for any intermediate delivery. So, customers have to wait long for getting the software.
• Overlapping of phases not supported: Iterative waterfall model assumes that one phase can start only after completion of the previous phase. But in real projects, phases may overlap to reduce the effort and time needed to complete the project.
• Risk handling not supported: Projects may suffer from various types of risks. But,
Iterative waterfall model has no mechanism for risk handling.
• Limited customer interactions: Customer interaction occurs only at the start of the project (at the time of requirements gathering) and at project completion (at the time of software delivery). These few interactions with the customers may lead to many problems, as the finally developed software may differ from the customers' actual requirements.
Prototyping Model
The Prototyping model is also a popular software development life cycle model. The
prototyping model can be considered to be an extension of the Iterative Waterfall model. This
model suggests building a working Prototype of the system, before the development of the
actual software.
A prototype is a toy and crude implementation of a system. It has limited functional capabilities, low reliability, or inefficient performance as compared to the actual software. A prototype can be built very quickly by using several shortcuts, such as developing inefficient, inaccurate, or dummy functions.
Necessity of the Prototyping Model –
• It is advantageous to develop the Graphical User Interface (GUI) part of a software product using the Prototyping Model. Through the prototype, the user can experiment with a working user interface and can suggest any changes if needed.
• The prototyping model is especially useful when the exact technical solutions are
unclear to the development team. A prototype can help them to critically examine the
technical issues associated with the product development. The lack of familiarity with a required development technology is a technical risk. This can be resolved by developing a prototype to understand the issues and accommodate the changes in the next iteration.
Phases of Prototyping Model –
The Prototyping Model of software development is graphically shown in the figure below.
The software is developed through two major activities – one is prototype construction and
another is iterative waterfall based software development.
Prototype Development – Prototype development starts with an initial requirements gathering phase. A quick design is carried out and a prototype is built. The developed prototype is submitted to the customer for evaluation. Based on the customer feedback, the requirements are refined and the prototype is suitably modified. This cycle of obtaining customer feedback and modifying the prototype continues till the customer approves the prototype.
Iterative Development – Once the customer approves the prototype, the actual software is developed using the iterative waterfall approach. In spite of the availability of a working prototype, the SRS document usually still needs to be developed, since the SRS document is invaluable for carrying out traceability analysis, verification, and test case design during later phases.
The code for the prototype is usually thrown away. However, the experience gathered from
developing the prototype helps a great deal in developing the actual software. By constructing
the prototype and submitting it for user evaluation, many customer requirements get properly
defined and technical issues get resolved by experimenting with the prototype. This minimises
later change requests from the customer and the associated redesign costs.
Advantages of Prototyping Model – This model is most appropriate for the projects that
suffer from technical and requirements risks. A constructed prototype helps to overcome these
risks.
Spiral Model
Spiral model is one of the most important Software Development Life Cycle
models, which provides support for Risk Handling. In its diagrammatic
representation, it looks like a spiral with many loops. The exact number of loops of
the spiral is unknown and can vary from project to project. Each loop of the spiral
is called a Phase of the software development process. The exact number of
phases needed to develop the product can be varied by the project manager
depending upon the project risks. As the project manager dynamically determines the number of phases, the project manager has an important role in developing a product using the spiral model.
The radius of the spiral at any point represents the expenses (cost) of the project so far, and the angular dimension represents the progress made so far in the current phase.
Below diagram shows the different phases of the Spiral Model:
Each phase of the Spiral Model is divided into four quadrants, as shown in the above figure. The functions of these four quadrants are:
1. Objectives determination and identification of alternative solutions: Requirements are gathered from the customers and the objectives are identified, elaborated, and analysed at the start of every phase; alternative solutions for the phase are then proposed.
2. Identify and resolve risks: The proposed solutions are evaluated to select the best one, and the risks associated with it are identified and resolved, for example by building a prototype.
3. Develop the next version of the product: The identified features are developed and verified through testing.
4. Review and plan for the next phase: The customers evaluate the version of the software developed so far, and planning for the next phase is carried out.
Agile Model
In the Agile model, the requirements are decomposed into many small parts that can be developed incrementally, and each part is delivered in a short iteration. A typical iteration involves the following phases:
• Planning
• Requirements Analysis
• Design
• Coding
• Unit Testing and
• Acceptance Testing.
At the end of the iteration, a working product is displayed to the customer and
important stakeholders.
Following are the Agile Manifesto principles −
• Individuals and interactions − In Agile development, self-organization and
motivation are important, as are interactions like co-location and pair
programming.
• Working software − Demo working software is considered the best means
of communication with the customers to understand their requirements,
instead of just depending on documentation.
• Customer collaboration − As the requirements cannot be gathered
completely in the beginning of the project due to various factors, continuous
customer interaction is very important to get proper product requirements.
• Responding to change − Agile development is focused on quick responses to change and continuous development.
Agile Vs Traditional SDLC Models
Agile is based on adaptive software development methods, whereas traditional SDLC models like the waterfall model are based on a predictive approach.
Predictive teams in the traditional SDLC models usually work with detailed
planning and have a complete forecast of the exact tasks and features to be
delivered in the next few months or during the product life cycle.
Predictive methods entirely depend on the requirement analysis and
planning done in the beginning of cycle. Any changes to be incorporated go
through a strict change control management and prioritization.
Agile uses an adaptive approach where there is no detailed planning and there is
clarity on future tasks only in respect of what features need to be developed. There
is feature driven development and the team adapts to the changing product
requirements dynamically. The product is tested very frequently, through the
release iterations, minimizing the risk of any major failures in future.
Customer Interaction is the backbone of this Agile methodology, and open
communication with minimum documentation are the typical features of Agile
development environment. The agile teams work in close collaboration with each
other and are most often located in the same geographical location.
Agile Model - Pros and Cons
Agile methods are being widely accepted in the software world recently. However,
this method may not always be suitable for all products. Here are some pros and
cons of the Agile model.
The advantages of the Agile Model are as follows −
• Is a very realistic approach to software development.
• Promotes teamwork and cross training.
• Functionality can be developed rapidly and demonstrated.
• Resource requirements are minimum.
• Suitable for fixed or changing requirements
• Delivers early partial working solutions.
• Good model for environments that change steadily.
• Minimal rules, documentation easily employed.
• Enables concurrent development and delivery within an overall planned
context.
• Little or no planning required.
• Easy to manage.
• Gives flexibility to developers.
The disadvantages of the Agile Model are as follows −
• Not suitable for handling complex dependencies.
• More risk of sustainability, maintainability and extensibility.
• An overall plan, an agile leader, and agile PM practice is a must, without which it will not work.
• Strict delivery management dictates the scope and functionality to be delivered, and adjustments to meet the deadlines.
• Depends heavily on customer interaction, so if the customer is not clear, the team can be driven in the wrong direction.
• There is a very high individual dependency, since minimal documentation is generated.
• Transfer of technology to new team members may be quite challenging due to lack of documentation.
MODULE II
Project planning
Once a project is found to be feasible, software project managers undertake project planning. Project planning is undertaken and completed even before any development activity starts. Project planning consists of the following essential activities: estimating the cost, duration, and effort of the project; scheduling; staffing; risk identification, analysis, and abatement planning; and miscellaneous plans such as the quality assurance plan and the configuration management plan.
Project planning requires utmost care and attention since commitment to unrealistic time and
resource estimates result in schedule slippage. Schedule delays can cause customer
dissatisfaction and adversely affect team morale. It can even cause project failure. However,
project planning is a very challenging activity. Especially for large projects, it is very much
difficult to make accurate plans. A part of this difficulty is due to the fact that the proper
parameters, scope of the project, project staff, etc. may change during the span of the project.
In order to overcome this problem, sometimes project managers undertake project planning in
stages. Planning a project over a number of stages protects managers from making big
commitments too early. This technique of staggered planning is known as Sliding Window
Planning. In the sliding window technique, starting with an initial plan, the project is planned
more accurately in successive development stages. At the start of a project, project managers
have incomplete knowledge about the details of the project. Their information base gradually
improves as the project progresses through different phases. After the completion of every phase, the project managers can plan each subsequent phase more accurately and with increasing levels of confidence.
The Software Project Management Plan (SPMP) document usually contains the following sections:
1. Introduction
(a) Objectives
(b) Major Functions
(c) Performance Issues
(d) Management and Technical Constraints
2. Project Estimates
(a) Historical Data Used
(b) Estimation Techniques Used
(c) Effort, Resource, Cost, and Project Duration Estimates
3. Schedule
(a) Work Breakdown Structure
(b) Task Network Representation
(c) Gantt Chart Representation
(d) PERT Chart Representation
4. Project Resources
(a) People
(b) Hardware and Software
(c) Special Resources
5. Staff Organization
(a) Team Structure
(b) Management Reporting
6. Risk Management Plan
7. Project Tracking and Control Plan
8. Miscellaneous Plans
Currently, two metrics are popularly used to estimate size: lines of code (LOC) and function point (FP). The usage of each of these metrics in project size estimation has its own advantages and disadvantages.
LOC is the simplest among all metrics available to estimate project size. This metric is very
popular because it is the simplest to use. Using this metric, the project size is estimated by
counting the number of source instructions in the developed program. Obviously, while counting
the number of source instructions, lines used for commenting the code and the header lines should be ignored.
Determining the LOC count at the end of a project is a very simple job. However,
accurate estimation of the LOC count at the beginning of a project is very difficult. In order to
estimate the LOC count at the beginning of a project, project managers usually divide the
problem into modules, and each module into submodules and so on, until the sizes of the
different leaf-level modules can be approximately predicted. To be able to do this, past
experience in developing similar products is helpful. By using the estimation of the lowest level
modules, project managers arrive at the total size estimation.
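As a rough illustration of how such a LOC count could be automated, here is a minimal sketch in Python. The file name, the use of "//" as the comment marker, and the decision to skip blank and comment-only lines are assumptions for illustration, not part of any standard definition of LOC.

# Minimal sketch of a LOC counter for C-style source files.
# Blank lines and comment-only lines are ignored, as suggested above.
def count_loc(path):
    loc = 0
    with open(path) as source:
        for line in source:
            stripped = line.strip()
            if not stripped:                  # skip blank lines
                continue
            if stripped.startswith("//"):     # skip comment-only lines
                continue
            loc += 1
    return loc

if __name__ == "__main__":
    print(count_loc("example.c"))             # hypothetical file name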
Function point metric was proposed by Albrecht [1983]. This metric overcomes many of the
shortcomings of the LOC metric. Since its inception in late 1970s, function point metric has
been slowly gaining popularity. One of the important advantages of using the function point
metric is that it can be used to easily estimate the size of a software product directly from the
problem specification. This is in contrast to the LOC metric, where the size can be accurately
determined only after the product has fully been developed.
The conceptual idea behind the function point metric is that the size of a software product is
directly dependent on the number of different functions or features it supports. A software
product supporting many features would certainly be of larger size than a product with fewer features. Each function, when invoked, reads some input data and transforms it to the corresponding output data. For example, the issue-book feature of a Library Automation Software takes the name of the book as input and displays its location and the number of copies available. Thus, a computation of the number of input and output data values to a system gives some indication of the number of functions supported by the system. Albrecht postulated that in addition to the number of basic functions that a software performs, the size is also dependent on the number of files and the number of interfaces.
Besides using the number of input and output data values, the function point metric computes the size of a software product (in units of function points, or FPs) using three other characteristics of the product, as shown in the following expression. The size of a product in function points (FP) can be expressed as the weighted sum of these five problem characteristics. The weights associated with the five characteristics were proposed empirically and validated by observations over many projects. The function point is computed in two steps. The first step is to compute the unadjusted function point (UFP).
Number of inputs: Each data item input by the user is counted. Data inputs should be
distinguished from user inquiries. Inquiries are user commands such as print-account-balance.
Inquiries are counted separately. It must be noted that individual data items input by the user
are not considered in the calculation of the number of inputs, but a group of related inputs is considered as a single input.
For example, while entering the data concerning an employee into an employee payroll software, the data items name, age, sex, address, phone number, etc. are together considered as a single input. All these data items can be considered to be related, since they pertain to a single employee.
Number of outputs: The outputs considered refer to reports printed, screen outputs, error messages produced, etc. While counting the number of outputs, the individual data items within a report are not considered, but a set of related data items is counted as one output.
Number of inquiries: Number of inquiries is the number of distinct interactive queries which
can be made by the users. These inquiries are the user commands which require specific
action by the system.
Number of files: Each logical file is counted. A logical file means groups of logically related
data. Thus, logical files can be data structures or physical files.
Number of interfaces: Here the interfaces considered are the interfaces used to exchange information with other systems. Examples of such interfaces are data files on tapes, disks, communication links with other systems, etc.
Once the unadjusted function point (UFP) is computed, the technical complexity factor (TCF) is computed next. TCF refines the UFP measure by considering fourteen other factors such as high transaction rates, throughput, and response time requirements. Each of these 14 factors is assigned a value from 0 (not present or no influence) to 5 (strong influence). The resulting numbers are summed, yielding the total degree of influence (DI). Now, TCF is computed as (0.65 + 0.01 * DI). As DI can vary from 0 to 70, TCF can vary from 0.65 to 1.35.
Finally, FP = UFP * TCF.
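The two-step computation above can be sketched in a few lines of Python. The TCF formula (0.65 + 0.01 * DI) is taken directly from the text; the weights 4, 5, 4, 10, and 10 for inputs, outputs, inquiries, files, and interfaces are the commonly quoted average weights and should be treated as an assumption here, since the text only says "weighted sum" without listing them.

# Sketch of the function point computation outlined above.
def unadjusted_fp(inputs, outputs, inquiries, files, interfaces):
    # Assumed average weights: 4, 5, 4, 10, 10.
    return (inputs * 4 + outputs * 5 + inquiries * 4
            + files * 10 + interfaces * 10)

def function_points(inputs, outputs, inquiries, files, interfaces, ratings):
    # ratings: the 14 technical factor ratings, each between 0 and 5.
    ufp = unadjusted_fp(inputs, outputs, inquiries, files, interfaces)
    di = sum(ratings)                    # degree of influence, 0..70
    tcf = 0.65 + 0.01 * di               # technical complexity factor
    return ufp * tcf

# Hypothetical product: 30 inputs, 60 outputs, 23 inquiries, 8 files,
# 2 interfaces, all 14 factors rated 3 (moderate influence).
print(function_points(30, 60, 23, 8, 2, [3] * 14))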
• A good problem size measure should consider the overall complexity of the problem and the effort needed to solve it. That is, it should consider the effort needed to specify, design, code, test, etc., and not just the coding effort. LOC, however, focuses on the coding activity alone; it merely computes the number of source lines in the final program. We have already seen that coding is only a small part of the overall software development activities. It is also wrong to argue that the overall product development effort is proportional to the effort required in writing the program code. This is because even though the design might be very complex, the code might be straightforward and vice versa. In such cases, code size is a grossly improper indicator of the problem size.
• LOC measure correlates poorly with the quality and efficiency of the code. Larger
code size does not necessarily imply better quality or higher efficiency. Some
programmers produce lengthy and complicated code as they do not make effective
use of the available instruction set. In fact, it is very likely that a poor and sloppily
written piece of code might have larger number of source instructions than a piece
that is neat and efficient.
• It is very difficult to accurately estimate the LOC of the final product from the problem specification. The LOC count can be accurately computed only after the code has been fully developed. Therefore, the LOC metric is of little use to project managers during project planning, since project planning is carried out even before any development activity has started. This is possibly the biggest shortcoming of the LOC metric from the project manager's perspective.
Heuristic Techniques
Heuristic techniques assume that the relationships among the different project parameters can
be modeled using suitable mathematical expressions. Once the basic (independent)
parameters are known, the other (dependent) parameters can be easily determined by
substituting the value of the basic parameters in the
mathematical expression. Different heuristic estimation models can be divided into the following two classes: single variable models and multivariable models.
A single variable estimation model takes the form:
Estimated Parameter = c1 * e^d1
In the above expression, e is the characteristic of the software which has already been
estimated (independent variable). Estimated Parameter is the dependent parameter to be
estimated. The dependent parameter to be estimated could be effort, project duration, staff size,
etc. c1 and d1 are constants. The values of the constants c1 and d1 are usually determined using
data collected from past projects (historical data). The basic COCOMO model is an example of
single variable cost estimation model.
A multivariable estimation model takes the form:
Estimated Resource = c1*e1^d1 + c2*e2^d2 + ...
Where e1, e2, … are the basic (independent) characteristics of the software
already estimated, and c1, c2, d1, d2, … are constants. Multivariable estimation models are
expected to give more accurate estimates compared to the single variable models, since a
project parameter is typically influenced by several independent parameters. The independent
parameters influence the dependent parameter to different extents. This is modeled by the
constants c1, c2, d1, d2, … .Values of these constants are usually determined from historical
data. The intermediate COCOMO model can be considered to be an example of a multivariable
estimation model.
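A minimal sketch of such a multivariable model is shown below. The two characteristics used (estimated KLOC and the number of external interfaces) and all of the constants are made-up values chosen purely for illustration; in practice the constants would be calibrated from an organization's historical project data.

# Sketch of a multivariable estimation model of the form
#   Resource = c1*e1^d1 + c2*e2^d2 + ...
# All constants below are illustrative, not calibrated values.
def estimate_effort(kloc, num_interfaces,
                    c1=2.8, d1=1.05, c2=1.2, d2=0.9):
    # e1 = estimated size in KLOC, e2 = number of external interfaces
    return c1 * (kloc ** d1) + c2 * (num_interfaces ** d2)

print(estimate_effort(kloc=20, num_interfaces=4))   # effort in person-months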
Program Volume
The length of a program (i.e., the total number of operators and operands used in the code) depends on the choice of the operators and operands used. In other words, for the same programming problem, the length would depend on the programming style. This type of dependency would produce different measures of length for essentially the same problem when different programming languages are used. Thus, while expressing program size, the programming language used must be taken into consideration:
V = N log2 η
Here the program volume V is the minimum number of bits needed to encode the program. In fact, to represent η different identifiers uniquely, at least log2 η bits (where η is the program vocabulary) will be needed. In this scheme, N log2 η bits will be needed to store a program of length N. Therefore, the volume V represents the size of the program by approximately compensating for the effect of the programming language used.
Potential Minimum Volume
The potential minimum volume V* is defined as the volume of the most succinct program in which a problem can be coded. The minimum volume is obtained when the program can be expressed using a single source code instruction, say a function call like foo();. In other words, the volume is bound from below due to the fact that a program would have at least two operators and no less than the requisite number of operands.
Thus, if an algorithm operates on input and output data d1, d2, ..., dn, the most succinct program would be f(d1, d2, ..., dn), for which η1 = 2 and η2 = n. Therefore, V* = (2 + η2) log2(2 + η2).
The program level L is given by L = V*/V. The concept of program level L is introduced in an attempt to measure the level of abstraction provided by the programming language. Using this definition, languages can be ranked into levels that also appear intuitively correct. The effort required to develop a program can then be estimated as E = V/L = V^2/V*. This result implies that the higher the level of a language, the less effort it takes to develop a program using that language. This agrees with the intuitive notion that it takes more effort to develop a program in assembly language than to develop a program in a high-level language to solve the same problem.
Length Estimation
Even though the length of a program can be found by calculating the total number of operators and operands in a program, Halstead suggests a way to determine the length of a program using the number of unique operators and operands used in the program. Using this method, the program parameters such as length, volume, cost, effort, etc. can be determined even before the start of any programming activity. His method is summarized below.
Halstead assumed that it is quite unlikely that a program has several identical parts (in formal language terminology, identical substrings) of length greater than η (η being the program vocabulary). In fact, once a piece of code occurs identically at several places, it is made into a procedure or a function. Thus, it can be assumed that any program of length N consists of N/η unique strings of length η. Now, it is a standard combinatorial result that for any given alphabet of size K, there are exactly K^r different strings of length r.
Thus,
N/η ≤ η^η, or N ≤ η^(η+1)
Since operators and operands usually alternate in a program, the upper bound can be further refined into N ≤ η · η1^η1 · η2^η2. Also, N must include not only the ordered set of n elements, but it should also include all possible subsets of that ordered set, i.e., the power set of N strings (this particular reasoning of Halstead is not very convincing!).
Therefore,
2^N = η · η1^η1 · η2^η2
Or, taking logarithms on both sides,
N = log2 η + log2(η1^η1 · η2^η2)
≈ log2(η1^η1 · η2^η2)   (approximately, by ignoring log2 η)
So we get,
N = log2(η1^η1) + log2(η2^η2)
= η1 log2 η1 + η2 log2 η2
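The Halstead measures described above are easy to compute once the operator and operand counts are known. The following is a minimal sketch; the counts passed in the example call are made up for illustration, and io_params stands for the number of input/output data items used for the potential minimum volume.

import math

# Sketch of the Halstead measures discussed above.
# eta1, eta2 : number of unique operators and operands in the program
# N1, N2     : total occurrences of operators and operands
# io_params  : number of input/output data items (the eta2 of the most
#              succinct program f(d1, ..., dn) described above)
def halstead(eta1, eta2, N1, N2, io_params):
    eta = eta1 + eta2                               # program vocabulary
    N = N1 + N2                                     # program length
    est_N = eta1 * math.log2(eta1) + eta2 * math.log2(eta2)   # estimated length
    V = N * math.log2(eta)                          # program volume
    V_star = (2 + io_params) * math.log2(2 + io_params)       # potential minimum volume
    L = V_star / V                                  # program level
    E = V / L                                       # effort (equivalently V*V / V_star)
    return {"vocabulary": eta, "length": N, "estimated_length": est_N,
            "volume": V, "potential_volume": V_star, "level": L, "effort": E}

# Hypothetical counts for a small program:
print(halstead(eta1=10, eta2=15, N1=40, N2=35, io_params=5))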
COCOMO Model
The first level, Basic COCOMO can be used for quick and slightly rough calculations
of Software Costs. Its accuracy is somewhat restricted due to the absence of sufficient
factor considerations.
Intermediate COCOMO takes these cost drivers into account, and Detailed COCOMO additionally accounts for the influence of individual project phases; i.e., in the case of the Detailed model, it accounts for both these cost drivers and the calculations are performed phase-wise, hence producing a more accurate result. These two models are further discussed below.
Basic Model –
Effort = a(KLOC)^b
Time = c(Effort)^d
Persons required = Effort / Time
The above formulas are used for the cost estimation of the Basic COCOMO model and are also used in the subsequent models. The constant values a, b, c, and d of the Basic Model for the different categories of systems are:
SOFTWARE PROJECTS    a      b      c      d
Organic              2.4    1.05   2.5    0.38
Semi-detached        3.0    1.12   2.5    0.35
Embedded             3.6    1.20   2.5    0.32
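A minimal sketch of Basic COCOMO using the constants tabulated above; the 32 KLOC organic project in the example call is made up for illustration.

# Sketch of Basic COCOMO: Effort = a*(KLOC)^b, Time = c*(Effort)^d,
# Persons required = Effort / Time.
COCOMO_CONSTANTS = {
    # project type: (a, b, c, d)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, project_type="organic"):
    a, b, c, d = COCOMO_CONSTANTS[project_type]
    effort = a * (kloc ** b)          # person-months
    time = c * (effort ** d)          # months
    persons = effort / time
    return effort, time, persons

effort, time, persons = basic_cocomo(32, "organic")
print(f"Effort = {effort:.1f} PM, Tdev = {time:.1f} months, Staff = {persons:.1f}")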
Detailed Model –
Detailed COCOMO incorporates all characteristics of the intermediate version with
an assessment of the cost driver’s impact on each step of the software engineering
process. The detailed model uses different effort multipliers for each cost driver
attribute. In Detailed COCOMO, the whole software is divided into different modules, COCOMO is then applied to the different modules to estimate effort, and the efforts are summed.
The Six phases of detailed COCOMO are:
1. Planning and requirements
2. System design
3. Detailed design
4. Module code and test
5. Integration and test
6. Cost Constructive model
The effort is calculated as a function of program size, and a set of cost drivers is given according to each phase of the software life cycle.
What is Risk?
"Tomorrow problems are today's risk." Hence, a clear definition of a "risk" is a problem
that could cause some loss or threaten the progress of the project, but which has not
happened yet.
These potential issues might harm the cost, schedule or technical success of the project, the quality of our software product, or project team morale.
We need to differentiate risks, as potential issues, from the current problems of the
project.
For example, staff shortage, because we have not been able to select people with the right technical skills, is a current problem, but the threat of our technical people being hired away by the competition is a risk.
Risk Management
A software project can be affected by a large variety of risks. In order to be able to systematically identify the significant risks which might affect a software project, it is essential to classify risks into different classes. The project manager can then check which risks from each class are relevant to the project.
There are three main classifications of risks which can affect a software project:
1. Project risks
2. Technical risks
3. Business risks
1. Project risks: Project risks concern different forms of budgetary, schedule, personnel, resource, and customer-related problems. A vital project risk is schedule slippage. Since software is intangible, it is very tough to monitor and control a software project. It is very tough to control something which cannot be seen. For any manufacturing project, such as the manufacturing of cars, the project manager can see the product taking shape.
2. Technical risks: Technical risks concern potential design, implementation, interfacing, testing, and maintenance problems. They also include ambiguous specification, incomplete specification, changing specification, technical uncertainty, and technical obsolescence.
3. Business risks: This type of risk includes the risk of building an excellent product that no one wants, losing budgetary or personnel commitments, etc.
Risks can also be categorized as follows:
1. Known risks: Those risks that can be uncovered after careful assessment of the project plan, the business and technical environment in which the project is being developed, and other reliable data sources (e.g., an unrealistic delivery date).
2. Predictable risks: Those risks that are hypothesized from previous project experience (e.g., past staff turnover).
3. Unpredictable risks: Those risks that can and do occur, but are extremely tough to identify in advance.
Based on these two factors (the probability of the risk becoming real and the severity of the loss it would cause), the priority of each risk can be estimated:
p = r * s
where p is the priority with which the risk must be handled, r is the probability of the risk becoming real, and s is the severity of the loss caused if the risk does become real.
Once all identified risks are prioritized, the most likely and most damaging risks can be handled first, and more comprehensive risk abatement methods can be designed for these risks.
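A small sketch showing how the priority formula p = r * s could be used to rank identified risks; the risk names, probabilities, and severity scores below are made up for illustration.

# Sketch: rank risks by priority p = r * s, as defined above.
# r = probability of the risk becoming real (0..1),
# s = severity of the loss if it does (here on a 1..10 scale).
risks = [
    ("Key developer leaves",  0.3, 9),
    ("Schedule slippage",     0.6, 7),
    ("Unfamiliar technology", 0.4, 5),
]

prioritised = sorted(risks, key=lambda risk: risk[1] * risk[2], reverse=True)
for name, r, s in prioritised:
    print(f"{name}: priority = {r * s:.2f}")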
1. Risk Identification: The project organizer needs to anticipate the risk in the
project as early as possible so that the impact of risk can be reduced by making
effective risk management planning.
A project can be affected by a large variety of risks. In order to identify the significant risks that might affect a project, it is necessary to categorize risks into different classes.
There are different types of risks which can affect a software project:
1. Technology risks: Risks that arise from the software or hardware technologies that are used to develop the system.
2. People risks: Risks that are associated with the people in the development team.
3. Organizational risks: Risks that arise from the organizational environment where the software is being developed.
4. Tools risks: Risks that arise from the software tools and other support software used to create the system.
5. Requirement risks: Risks that arise from changes to the customer requirements and the process of managing the requirements change.
6. Estimation risks: Risks that arise from the management estimates of the resources required to build the system.
2. Risk Analysis: During the risk analysis process, you have to consider every identified risk and make a judgment about the probability and seriousness of that risk. There is no simple way to do this. You have to rely on your judgment and experience of previous projects and the problems that arose in them.
It is not possible to make an exact numerical estimate of the probability and seriousness of each risk. Instead, you should assign the risk to one of several bands:
1. The probability of the risk might be determined as very low (0-10%), low (10-
25%), moderate (25-50%), high (50-75%) or very high (+75%).
2. The effect of the risk might be assessed as catastrophic (threatens the survival of the project), serious (would cause significant delays), tolerable (delays are within the allowed contingency), or insignificant.
Risk Control
It is the process of managing risks to achieve the desired outcomes. After all the identified risks of a project are assessed, plans must be made to contain the most harmful and the most likely risks. Different risks need different containment methods. In fact, most risks need ingenuity on the part of the project manager in tackling the risk.
1. Avoid the risk: This may take several ways such as discussing with the client
to change the requirements to decrease the scope of the work, giving incentives
to the engineers to avoid the risk of human resources turnover, etc.
2. Transfer the risk: This method involves getting the risky element developed
by a third party, buying insurance cover, etc.
3. Risk reduction: This means planning ways to contain the damage due to a risk. For instance, if there is a risk that some key personnel might leave, new recruitment can be planned.
Risk Leverage: To choose between the various methods of handling a risk, the project manager must consider the cost of handling the risk and the corresponding reduction in risk. For this, the risk leverage of the various risks can be estimated.
Risk leverage is the difference in risk exposure divided by the cost of reducing the risk:
risk leverage = (risk exposure before risk reduction - risk exposure after risk reduction) / cost of risk reduction
1. Risk planning: The risk planning process considers each of the key risks that have been identified and develops ways to manage these risks.
For each of the risks, you have to think of the actions that you may take to minimize the disruption to the project if the problem identified in the risk occurs.
You should also think about the information that you might need to collect while monitoring the project so that problems can be anticipated.
Again, there is no easy process that can be followed for contingency planning. It relies on the judgment and experience of the project manager.
2. Risk Monitoring: Risk monitoring is the process of checking that your assumptions about the product, process, and business risks have not changed.
Software Requirement Specification document (SRS)
Functional requirements
Performance requirements
Design constraints
Conceptually, any SRS should have these components. Now we will discuss them one by one.
1. Functional Requirements
Functional requirements specify what output should be produced from the given inputs. So
they basically describe the relationship between the inputs and the outputs of the system. For each
functional requirement:
1. A detailed description of all the data inputs and their sources, the units of measure, and the range of valid inputs should be specified;
2. All the operations to be performed on the input data to obtain the output should be specified; and
3. Care must be taken not to specify any algorithms that are not part of the system but that may be needed to implement the system.
4. It must clearly state what the system should do if it behaves abnormally, for example when an invalid input is given or when an error occurs during computation. Specifically, it should specify the behaviour of the system for invalid inputs and invalid outputs.
2. Performance Requirements
This part of an SRS specifies the performance constraints on the software system. All the requirements related to the performance characteristics of the system must be clearly specified. Performance requirements are typically expressed as processed transactions per second, or response time from the system for a user event, or screen refresh time, or a
combination of these. It is a good idea to pin down performance requirements for the most
used or critical transactions, user events and screens.
3. Design Constraints
The client environment may restrict the designer to include some design constraints that must
be followed. The various design constraints are standard compliance, resource limits,
operating environment, reliability and security requirements and policies that may have an
impact on the design of the system. An SRS should identify and specify all such constraints.
Standard Compliance: It specifies the requirements for the standards the system must follow. The standards may include the report format and accounting procedures.
Fault Tolerance: Fault tolerance requirements can place a major constraint on how the
system is to be designed. Fault tolerance requirements often make the system more complex
and expensive, so they should be minimized.
Security: Currently, security requirements have become essential and major for all types of systems. Security requirements place restrictions on the use of certain commands, control access to data, provide different kinds of access requirements for different people, require the use of passwords and cryptography techniques, and maintain a log of activities in the system.
External Interface Requirements:
1. All the possible interactions of the software with people, hardware, and other software should be clearly specified;
2. The characteristics of each user interface of the software product should be specified; and
3. The SRS should specify the logical characteristics of each interface between the software product and the hardware components for hardware interfacing.
Concise: The SRS report should be concise and at the same time, unambiguous, consistent,
and complete. Verbose and irrelevant descriptions decrease readability and also increase error
possibilities.
Black-box view: It should only define what the system should do and refrain from stating how
to do these. This means that the SRS document should define the external behavior of the
system and not discuss the implementation issues. The SRS report should view the system to
be developed as a black box and should define the externally visible behavior of the system.
For this reason, the SRS report is also known as the black-box specification of a system.
Conceptual integrity: It should show conceptual integrity so that the reader can easily understand it.
Response to undesired events: It should characterize acceptable responses to unwanted events. These are called system responses to exceptional conditions.
Verifiable: All requirements of the system, as documented in the SRS document, should be
correct. This means that it should be possible to decide whether or not requirements have
been met in an implementation.
Structure of SRS
Decision tree
A decision tree is a map of the possible outcomes of a series of related choices. It allows an
individual or organization to weigh possible actions against one another based on their costs,
probabilities, and benefits.
As the name goes, it uses a tree-like model of decisions. They can be used either to drive
informal discussion or to map out an algorithm that predicts the best choice mathematically.
A decision tree typically starts with a single node, which branches into possible outcomes.
Each of those outcomes leads to additional nodes, which branch off into other possibilities.
This gives it a tree-like shape.
Decision table is a brief visual representation for specifying which actions to perform
depending on given conditions. The information represented in decision tables can also be
represented as decision trees or in a programming language using if-then-else and switch-
case statements.
A decision table is a good way to deal with different combinations of inputs and their corresponding outputs; it is also called a cause-effect table.
CONDITIONS STEP 1 STEP 2 STEP 3 STEP 4
Condition 1 Y Y N N
Condition 2 Y N Y N
Condition 3 Y N N Y
Condition 4 N Y Y N
MODULE 3
Software Design
Software design is a mechanism to transform user requirements into some suitable form, which helps the programmer in software coding and implementation. It deals with representing the client's requirements, as described in the SRS (Software Requirements Specification) document, in a form that is easily implementable using a programming language.
The software design phase is the first step in the SDLC (Software Development Life Cycle) that moves the concentration from the problem domain to the solution domain. In software design, we consider the system to be a set of components or modules with clearly defined behaviours and boundaries.
Problem Partitioning
For a small problem, we can handle the entire problem at once, but for a significant problem we divide and conquer: the problem is divided into smaller pieces so that each piece can be handled separately.
For software design, the goal is to divide the problem into manageable pieces.
Abstraction
An abstraction is a tool that enables a designer to consider a component at an abstract level without bothering about the internal details of the implementation. Abstraction can be used for existing elements as well as for the component being designed.
1. Functional Abstraction
2. Data Abstraction
Functional Abstraction
i. A module is specified by the function it performs.
ii. The details of the algorithm to accomplish the functions are not visible to the user of the
function.
Functional abstraction forms the basis for Function oriented design approaches.
Data Abstraction
Details of the data elements are not visible to the users of data. Data Abstraction forms the basis
for Object Oriented design approaches.
Modularity
Modularity refers to the division of software into separate modules which are differently named and addressed and are integrated later on to obtain the completely functional software. It is the only property that allows a program to be intellectually manageable. Single large programs are difficult to understand and read due to the large number of reference variables, control paths, global variables, etc.
o Each module is a well-defined system that can be used with other applications.
o Each module has single specified objectives.
o Modules can be separately compiled and saved in the library.
o Modules should be easier to use than to build.
o Modules are simpler from outside than inside.
Modular Design
Modular design reduces the design complexity and results in easier and faster implementation by
allowing parallel development of various parts of a system. We discuss a different section of
modular design in detail in this section:
Information hiding: The principle of information hiding suggests that modules should be characterized by the design decisions that each hides from all the others. In other words, modules should be specified so that data included within a module is inaccessible to other modules that do not need that information.
The use of information hiding as a design criterion for modular systems provides the greatest benefits when modifications are required during testing and later during software maintenance. This is because, as most data and procedures are hidden from other parts of the software, inadvertent errors introduced during modifications are less likely to propagate to different locations within the software.
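A small made-up Python sketch of information hiding: other modules use only the operations the module exposes, while its internal data representation stays hidden and can be changed without affecting them.

# Sketch of information hiding: callers use deposit()/balance() only;
# the internal ledger representation can change without affecting them.
class Account:
    def __init__(self):
        self._transactions = []          # hidden design decision

    def deposit(self, amount):
        self._transactions.append(amount)

    def balance(self):
        return sum(self._transactions)

acct = Account()
acct.deposit(100)
acct.deposit(50)
print(acct.balance())    # 150; callers never touch _transactions directly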
Strategy of Design
A good system design strategy is to organize the program modules in such a way that they are easy to develop and, later, to change. Structured design methods help developers to deal with the size and complexity of programs. Analysts generate instructions for the developers about how code should be composed and how pieces of code should fit together to form a program.
1. Top-down Approach
2. Bottom-up Approach
1. Top-down Approach: This approach starts with the identification of the main components
and then decomposing them into their more detailed sub-components.
2. Bottom-up Approach: A bottom-up approach begins with the lower-level details and moves up the hierarchy. This approach is suitable in the case of an existing system.
Coupling and Cohesion
Module Coupling
In software engineering, the coupling is the degree of interdependence between software
modules. Two modules that are tightly coupled are strongly dependent on each other. However,
two modules that are loosely coupled are not strongly dependent on each other. Uncoupled modules have no interdependence at all between them.
A good design is the one that has low coupling. Coupling is measured by the number of
relations between the modules. That is, the coupling increases as the number of calls
between modules increase or the amount of shared data is large. Thus, it can be said
that a design with high coupling will have more errors.
1. No Direct Coupling: In this case, the modules are subordinate to different modules. Therefore, there is no direct coupling between them.
2. Data Coupling: When data of one module is passed to another module, this is called data
coupling.
3. Stamp Coupling: Two modules are stamp coupled if they communicate using composite data
items such as structure, objects, etc. When the module passes non-global data structure or
entire structure to another module, they are said to be stamp coupled. For example, passing
structure variable in C or object in C++ language to a module.
4. Control Coupling: Control Coupling exists among two modules if data from one module is
used to direct the structure of instruction execution in another.
5. External Coupling: External Coupling arises when two modules share an externally imposed
data format, communication protocols, or device interface. This is related to communication to
external tools and devices.
6. Common Coupling: Two modules are common coupled if they share information through
some global data items.
7. Content Coupling: Content Coupling exists among two modules if they share code, e.g., a
branch from one module into another module.
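A short made-up sketch contrasting two of the coupling types above: common coupling through a shared global data item versus the preferable data coupling, where a module receives only the data it needs as parameters.

# Common coupling (undesirable): both functions depend on a shared global.
tax_rate = 0.18                          # global data item shared by modules

def price_with_tax_common(price):
    return price * (1 + tax_rate)        # reads the global directly

# Data coupling (preferable): the needed value is passed explicitly.
def price_with_tax_data(price, rate):
    return price * (1 + rate)            # depends only on its parameters

print(price_with_tax_common(100))        # 118.0
print(price_with_tax_data(100, 0.18))    # 118.0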
Module Cohesion
In computer programming, cohesion refers to the degree to which the elements of a module belong together. Thus, cohesion measures the strength of the relationships between pieces of functionality within a given module. For example, in highly cohesive systems, functionality is strongly related.
Coupling vs Cohesion:
• Coupling is also called inter-module binding; cohesion is also called intra-module binding.
• Coupling shows the relationships between modules; cohesion shows the relationships within a module.
• Coupling shows the relative independence between modules; cohesion shows a module's relative functional strength.
• While designing, you should aim for low coupling, i.e., dependency among modules should be less; you should aim for high cohesion, i.e., a cohesive component/module focuses on a single function (single-mindedness) with little interaction with other modules of the system.
• In coupling, modules are linked to other modules; in cohesion, the module focuses on a single thing.
Software Design Approaches
Here are two generic approaches for software designing:
Top Down Design
We know that a system is composed of more than one sub-system, and it contains a number of components. Further, these sub-systems and components may have their own sets of sub-systems and components, which creates a hierarchical structure in the system.
Top-down design takes the whole software system as one entity and then decomposes it to
achieve more than one sub-system or component based on some characteristics. Each sub-
system or component is then treated as a system and decomposed further. This process
keeps on running until the lowest level of system in the top-down hierarchy is achieved.
Top-down design starts with a generalized model of system and keeps on defining the more
specific part of it. When all components are composed the whole system comes into
existence.
Top-down design is more suitable when the software solution needs to be designed from
scratch and specific details are unknown.
Bottom-up Design
The bottom-up design model starts with the most specific and basic components. It proceeds with composing higher levels of components by using the basic or lower-level components. It keeps creating higher-level components until the desired system is evolved as one single component. With each higher level, the amount of abstraction is increased.
Bottom-up strategy is more suitable when a system needs to be created from some existing
system, where the basic primitives can be used in the newer system.
Both, top-down and bottom-up approaches are not practical individually. Instead, a good
combination of both is used.
Function Oriented Design
Function Oriented design is a method to software design where the model is decomposed into a set of
interacting units or modules where each unit or module has a clearly defined function. Thus, the system
is designed from a functional viewpoint.
Data-flow diagrams are a useful and intuitive way of describing a system. They are generally
understandable without specialized training, notably if control information is excluded. They show end-
to-end processing. That is, the flow of processing from when data enters the system to where it leaves the system can be traced.
Data-flow design is an integral part of several design methods, and most CASE tools support data-flow
diagram creation. Different ways may use different icons to represent data-flow diagram entities, but
their meanings are similar.
Decision Trees
Decision trees are a method for defining complex relationships by describing decisions and
avoiding the problems in communication. A decision tree is a diagram that shows alternative
actions and conditions within a horizontal tree framework. Thus, it depicts which conditions to
consider first, second, and so on.
Decision trees depict the relationship of each condition and their permissible actions. A square
node indicates an action and a circle indicates a condition. It forces analysts to consider the
sequence of decisions and identifies the actual decision that must be made.
The major limitation of a decision tree is that it lacks information in its format to describe what
other combinations of conditions you can take for testing. It is a single representation of the
relationships between conditions and actions.
For example, refer the following decision tree −
Decision Tables
Decision tables are a method of describing the complex logical relationship in a precise
manner which is easily understandable.
• It is useful in situations where the resulting actions depend on the occurrence of one or
several combinations of independent conditions.
• It is a matrix containing row or columns for defining a problem and the actions.
Components of a Decision Table
• Condition Stub − It is in the upper left quadrant which lists all the condition to be
checked.
• Action Stub − It is in the lower left quadrant which outlines all the action to be carried
out to meet such condition.
• Condition Entry − It is in upper right quadrant which provides answers to questions
asked in condition stub quadrant.
• Action Entry − It is in lower right quadrant which indicates the appropriate action
resulting from the answers to the conditions in the condition entry quadrant.
The entries in a decision table are given by decision rules, which define the relationships between combinations of conditions and courses of action. In the rules section, an X indicates the action to be taken for the corresponding combination of condition entries. For example:
CONDITIONS
Regular Customer     -   Y   N   -
ACTIONS
Give 5% discount     X   X   -   -
Give no discount     -   -   X   X
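As a small illustration of how such rules can be turned into code, here is a minimal sketch that assumes the only condition driving the two actions above is whether the buyer is a regular customer.

# Sketch of the discount rules above as simple conditional logic.
# Assumption: "regular customer" is the single deciding condition.
def discount(is_regular_customer):
    if is_regular_customer:
        return 0.05      # "Give 5% discount" action
    return 0.0           # "Give no discount" action

print(discount(True))    # 0.05
print(discount(False))   # 0.0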
Structured Charts
It partitions a system into black boxes. A black box is a system whose functionality is known to the user without knowledge of its internal design.
A structured chart is a graphical representation which shows how the system is partitioned into modules, the hierarchy and organization of those modules, and the communication (data and control) between the modules.
Text-Based User Interface: This method relies primarily on the keyboard. A typical example of this is UNIX.
Advantages
o Offers many options and easier customization.
o Typically capable of more important tasks.
Disadvantages
o Relies heavily on recall rather than recognition.
o Navigation is often more difficult.
Graphical User Interface (GUI): GUI relies much more heavily on the mouse. A typical example of this type of
interface is any versions of the Windows operating systems.
GUI Characteristics
Characteristics Descriptions
Icons Icons represent different types of information. On some systems, icons represent files; on other
systems, icons describe processes.
Menus Commands are selected from a menu rather than typed in a command language.
Pointing A pointing device such as a mouse is used for selecting choices from a menu or
indicating items of interests in a window.
Graphics Graphical elements can be mixed with text on the same display.
Advantages
o Less expert knowledge is required to use it.
o Easier to Navigate and can look through folders quickly in a guess and check manner.
o The user may switch quickly from one task to another and can interact with several different applications.
Disadvantages
o Typically offers fewer options.
o Usually less customizable; it is not easy to make one button serve many different variations of a task.
MODULE 4
Coding Standards and Guidelines
Different modules specified in the design document are coded in the Coding phase according
to the module specification. The main goal of the coding phase is to code from the design
document prepared after the design phase through a high-level language and then to unit test
this code.
Good software development organizations want their programmers to adhere to some well-
defined and standard style of coding called coding standards. They usually create their own
coding standards and guidelines depending on what suits their organization best and based
on the types of software they develop. It is very important for programmers to maintain the
coding standards; otherwise the code will be rejected during code review.
Purpose of Having Coding Standards:
• A coding standard gives a uniform appearance to the codes written by different
engineers.
• It improves the readability and maintainability of the code and also reduces complexity.
• It helps in code reuse and helps to detect errors easily.
• It promotes sound programming practices and increases efficiency of the programmers.
Some of the coding standards are given below:
1. Limited use of globals:
These rules specify which types of data can be declared global and which cannot.
On the other hand, coding guidelines give general suggestions regarding the coding style to
be followed for better understandability and readability of the code.
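As a small illustrative sketch of a few such conventions (limited use of globals, a descriptive name, a single-purpose function and explanatory comments; the function and constant names are assumptions, not an actual organizational standard):

MAX_RETRIES = 3          # module-level constant named in UPPER_CASE, not a mutable global

def read_report(report_path: str, retries: int = MAX_RETRIES) -> str:
    """Return the contents of the report file, retrying a few times on I/O errors."""
    for _ in range(retries):
        try:
            with open(report_path) as fh:     # resource is released automatically
                return fh.read()
        except IOError:
            continue
    raise RuntimeError("could not read " + report_path)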
Software Documentation
Any written text, illustrations or video that describes a software program to its users is
called a program or software document. A user can be anyone from a programmer, system
analyst or administrator to an end user. At various stages of development, multiple documents
may be created for different users. In fact, software documentation is a critical process in
the overall software development process.
In modular programming documentation becomes even more important because different
modules of the software are developed by different teams. If anyone other than the
development team wants to or needs to understand a module, good and detailed
documentation will make the task easier.
These are some guidelines for creating the documents −
• Documentation should be from the point of view of the reader
• Document should be unambiguous
• There should be no repetition
• Industry standards should be used
• Documents should always be updated
• Any outdated document should be phased out after due recording of the phase out
Advantages of Documentation
These are some of the advantages of providing program documentation −
• Keeps track of all parts of a software or program
• Maintenance is easier
• Programmers other than the developer can understand all aspects of software
• Improves overall quality of the software
• Assists in user training
• Ensures knowledge de-centralization, cutting costs and effort if people leave the
system abruptly
Example Documents
A software can have many types of documents associated with it. Some of the important ones
include −
• User manual − It describes instructions and procedures for end users to use the
different features of the software.
• Operational manual − It lists and describes all the operations being carried out and
their inter-dependencies.
• Design Document − It gives an overview of the software and describes design
elements in detail. It documents details like data flow diagrams, entity relationship
diagrams, etc.
• Requirements Document − It has a list of all the requirements of the system as well
as an analysis of the viability of the requirements. It can include use cases, real-life scenarios,
etc.
• Technical Documentation − It is a documentation of actual programming components
like algorithms, flowcharts, program codes, functional modules, etc.
• Testing Document − It records test plan, test cases, validation plan, verification plan,
test results, etc. Testing is one phase of software development that needs intensive
documentation.
• List of Known Bugs − Every software has bugs or errors that cannot be removed
because either they were discovered very late or are harmless or will take more effort
and time than necessary to rectify. These bugs are listed with program documentation
so that they may be removed at a later date. Also they help the users, implementers
and maintenance people if the bug is activated.
Software Testing
Software testing can be stated as the process of verifying and validating that a software or
application is bug free, meets the technical requirements as guided by its design and
development and meets the user requirements effectively and efficiently with handling all the
exceptional and boundary cases.
The process of software testing aims not only at finding faults in the existing software but
also at finding measures to improve the software in terms of efficiency, accuracy and
usability. It mainly aims at measuring specification, functionality and performance of a
software program or application.
Software testing can be divided into two steps:
1. Verification: it refers to the set of tasks that ensure that software correctly implements a
specific function.
2. Validation: it refers to a different set of tasks that ensure that the software that has been
built is traceable to customer requirements.
Verification: “Are we building the product right?”
Validation: “Are we building the right product?”
Verification is normally done by testers and developers, whereas validation may involve end
users as well as testers and developers.
1. Black Box Testing is a software testing method in which the internal structure/ design/
implementation of the item being tested is not known to the tester
2. White Box Testing is a software testing method in which the internal structure/ design/
implementation of the item being tested is known to the tester.
Further differences between the two:
• In black box testing, the internal structure or code of the program is hidden and no
knowledge of it is needed; in white box testing, the tester has knowledge of the internal
structure, code and programming of the software.
• Black box testing requires no knowledge of programming; white box testing requires it.
• In black box testing, internal behaviour can only be probed by trial and error; in white box
testing, data domains and internal boundaries can be tested better.
Types of black box testing include:
• A. Functional Testing
• B. Non-functional Testing
• C. Regression Testing
Types of white box testing include:
• A. Path Testing
• B. Loop Testing
• C. Condition Testing
2. Equivalence class partitioning – The input domain is divided into equivalence classes so
that one representative test case is taken from each class. For example, to calculate the
square root of a number, the equivalence classes will be:
(a) Valid inputs:
• Whole number which is a perfect square- output will be an integer.
• Whole number which is not a perfect square- output will be decimal number.
• Positive decimals
(b) Invalid inputs:
• Negative numbers(integer or decimal).
• Characters other than numbers, like “a”, “!”, “;”, etc.
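A sketch of one representative test case per class, using Python's math.sqrt as a stand-in for the square-root routine under test:

import math

# Valid classes: one representative value from each.
assert math.sqrt(25) == 5.0                      # perfect square -> integer-valued result
assert 2 < math.sqrt(7) < 3                      # non-perfect square -> decimal result
assert abs(math.sqrt(2.25) - 1.5) < 1e-9         # positive decimal input

# Invalid classes: the routine should reject these inputs.
try:
    math.sqrt(-4)                                # negative number
except ValueError:
    pass
try:
    math.sqrt("a")                               # non-numeric character input
except TypeError:
    pass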
3. Boundary value analysis – Boundaries are very good places for errors to occur. Hence, if
test cases are designed for the boundary values of the input domain, the efficiency of testing
improves and the probability of finding errors also increases. For example, if the valid range is
10 to 100, then the boundary values 10 and 100 should be tested in addition to typical valid
and invalid inputs.
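For the 10-to-100 range above, a hypothetical range check and its boundary-focused test cases might look like this:

def in_valid_range(x: int) -> bool:      # hypothetical function under test
    return 10 <= x <= 100

# Test at each boundary and just outside it.
for value, expected in [(9, False), (10, True), (11, True),
                        (99, True), (100, True), (101, False)]:
    assert in_valid_range(value) == expected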
• Branch Coverage: In this technique, test cases are designed so that each branch from
all decision points is traversed at least once; in a flowchart, every edge must be
traversed at least once. In the example originally used to illustrate this (figure not
included in these notes), 4 test cases are required so that all branches of all decisions
are covered, i.e., all edges are traversed.
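Independently of that example, a minimal sketch of the idea (the classify function and its two decision points are assumed purely for illustration):

def classify(x: int) -> str:
    label = "small"
    if x > 10:          # decision 1: true / false branches
        label = "large"
    if x % 2 == 0:      # decision 2: true / false branches
        label += "-even"
    return label

# Branch coverage: every true and false branch is taken at least once.
# Two cases suffice here; covering every branch combination would need four.
assert classify(12) == "large-even"   # decision 1 true,  decision 2 true
assert classify(3)  == "small"        # decision 1 false, decision 2 false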
Debugging
Introduction:
In the context of software engineering, debugging is the process of fixing a bug in the
software. In other words, it refers to identifying, analyzing and removing errors. This activity
begins after the software fails to execute properly and concludes by solving the problem and
successfully testing the software. It is considered to be an extremely complex and tedious
task because errors need to be resolved at all stages of debugging.
Difference Between Debugging and Testing:
Debugging is different from testing. Testing focuses on finding bugs, errors, etc., whereas
debugging starts after a bug has been identified in the software. Testing is used to ensure
that the program does what it is supposed to do, with at least a certain minimum success rate.
Testing can be manual or automated. There are several different types of testing like unit
testing, integration testing, alpha and beta testing, etc.
Debugging requires a lot of knowledge, skills, and expertise. It can be supported by some
automated tools available but is more of a manual process as every bug is different and
requires a different technique, unlike a pre-defined testing mechanism.
Integration Testing
Integration testing is the process of testing the interface between two software units or
modules. Its focus is on determining the correctness of the interface. The purpose of
integration testing is to expose faults in the interaction between integrated units. Once all the
modules have been unit tested, integration testing is performed.
Integration test approaches –
There are four types of integration testing approaches. Those approaches are the following:
1. Big-Bang Integration Testing –
It is the simplest integration testing approach: all the modules are combined and the
functionality is verified after the individual modules have been tested. In simple words, all
the modules of the system are simply put together and tested. This approach is practicable
only for very small systems. Once an error is found during integration testing, it is very
difficult to localize, as it may potentially belong to any of the modules being integrated. So,
debugging errors reported during big bang integration testing is very expensive.
Advantages:
• It is convenient for small systems.
Disadvantages:
• There will be quite a lot of delay because you would have to wait for all the modules to
be integrated.
• High risk critical modules are not isolated and tested on priority since all modules are
tested at once.
2. Bottom-Up Integration Testing –
In bottom-up testing, each module at the lower levels is tested with the higher-level modules
until all modules have been tested. The primary purpose of this integration testing is to test
the interfaces among the various modules that make up each subsystem. Test drivers are
used to drive and pass appropriate data to the lower-level modules.
Advantages:
• In bottom-up testing, no stubs are required.
• A principal advantage of this integration testing is that several disjoint subsystems can
be tested simultaneously.
Disadvantages:
• Driver modules must be produced.
• Testing becomes complex when the system is made up of a large number of small
subsystems.
3. Top-Down Integration Testing –
In top-down integration testing, testing takes place from top to bottom: high-level modules
are tested first, then low-level modules, and finally the low-level modules are integrated with
the high-level ones to ensure the system works as intended. Stubs are used to simulate the
behaviour of the lower-level modules that are not yet integrated.
Advantages:
• Each module is debugged separately.
• Few or no drivers needed.
• It is more stable and accurate at the aggregate level.
Disadvantages:
• Needs many Stubs.
• Modules at lower levels are tested inadequately.
4. Mixed Integration Testing –
Mixed integration testing is also called sandwiched integration testing. It follows a
combination of the top-down and bottom-up testing approaches. In the top-down approach,
testing can start only after the top-level modules have been coded and unit tested; in the
bottom-up approach, testing can start only after the bottom-level modules are ready. The
sandwich (mixed) approach overcomes these shortcomings of the top-down and bottom-up
approaches.
Advantages:
• Mixed approach is useful for very large projects having several sub projects.
• It combines the strengths of the top-down and bottom-up approaches while overcoming
their individual shortcomings.
Disadvantages:
• Mixed integration testing has a very high cost, because one part follows the top-down
approach while another part follows the bottom-up approach.
• It is not suitable for smaller systems with huge interdependence between the different
modules.
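A tiny sketch of the stubs and drivers mentioned above (the billing and tax-rate module names are hypothetical): in top-down testing a stub stands in for an unfinished lower-level module, while in bottom-up testing a throwaway driver exercises a lower-level module directly.

# Top-down: test the high-level billing module first; the lower-level
# tax-rate module is replaced by a stub that returns a canned value.
def tax_rate_stub(state: str) -> float:
    return 0.05                                   # no real logic, just enough to test billing

def billing(amount: float, tax_rate=tax_rate_stub) -> float:
    return round(amount * (1 + tax_rate("KL")), 2)

assert billing(100.0) == 105.0

# Bottom-up: test the real lower-level module first; a driver passes it
# representative data and checks the results.
def tax_rate(state: str) -> float:
    return 0.12 if state == "KL" else 0.05

def tax_rate_driver():
    assert tax_rate("KL") == 0.12
    assert tax_rate("TN") == 0.05

tax_rate_driver()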
System Testing
Software reliability is also defined as the probability that a software system fulfills its
assigned task in a given environment for a predefined number of input cases, assuming that
the hardware and the input are free of error.
The quality of a software product is defined in terms of its fitness of purpose. That is, a quality
product does precisely what the users want it to do. For software products, the fitness of use
is generally explained in terms of satisfaction of the requirements laid down in the SRS
document.
The modern view of quality associates several quality factors with a software product, such
as the following:
Usability: A software product has better usability if various categories of users can easily
invoke the functions of the product.
Reusability: A software product has excellent reusability if different modules of the product
can quickly be reused to develop new products.
Software Maintenance
Software Maintenance is the process of modifying a software product after it has been
delivered to the customer. The main purpose of software maintenance is to modify and
update the software application after delivery to correct faults and to improve performance.
Need for Maintenance –
Software Maintenance must be performed in order to:
• Correct faults.
• Improve the design.
• Implement enhancements.
• Interface with other systems.
• Accommodate programs so that different hardware, software, system features, and
telecommunications facilities can be used.
• Migrate legacy software.
• Retire software.
Categories of Software Maintenance –
Maintenance can be divided into the following:
1. Corrective maintenance:
Corrective maintenance of a software product may be essential either to rectify some
bugs observed while the system is in use, or to enhance the performance of the system.
2. Adaptive maintenance:
This includes modifications and updations when the customers need the product to run
on new platforms, on new operating systems, or when they need the product to
interface with new hardware and software.
3. Perfective maintenance:
A software product needs maintenance to support the new features that the users want
or to change different types of functionalities of the system according to the customer
demands.
4. Preventive maintenance:
This type of maintenance includes modifications and updations to prevent future
problems of the software. It aims to address problems which are not significant at this
moment but may cause serious issues in the future.
Reverse Engineering –
Reverse engineering is the process of extracting knowledge or design information from
anything man-made and reproducing it based on the extracted information. It is also called
back engineering.
Software Reverse Engineering –
Software reverse engineering is the process of recovering the design and the requirements
specification of a product from an analysis of its code. Reverse engineering is becoming
important, since several existing software products lack proper documentation, are highly
unstructured, or have a structure that has degraded through a series of maintenance efforts.