
SOFTWARE ENGINEERING

Syllabus
UNIT - I
Introduction - Evolution of Software Development Projects - Emergence of
Software Engineering - Software Life Cycle Models - Waterfall Model - Rapid
Application Development Model - Agile Model - Spiral Model
UNIT - II
Requirements Analysis and Specification: Requirements Gathering and
Analysis -Software Requirements Specification (SRS) - Formal System
Specifications.
UNIT - III
Software Design: Characteristics of a Good Software Design - Cohesion and
Coupling - Layered Design - Function-Oriented Software Design - Structured Analysis -
Data Flow Diagrams (DFDs) - Structured Design - Detailed Design.
UNIT - IV
Object Modeling Using UML: Overview of Object-Oriented Concepts - UML
Diagrams - Use Case Model - Class Diagrams - Interaction Diagrams - Activity
Diagrams - State Chart Diagram - Postscript
UNIT - V
Coding and Testing: Coding - Review - Documentation - Testing - Black-Box,
White-Box, Integration, OO Testing - Smoke Testing.
TEXT BOOK
1. Rajib Mall, "Fundamentals of Software Engineering", 3rd Edition, Prentice Hall of
India Private Limited, 2008.
REFERENCE BOOKS
1. Rajib Mall, "Fundamentals of Software Engineering", 4th Edition, Prentice Hall of
India Private Limited, 2014.
2. Richard Fairley, "Software Engineering Concepts", TMGH Publications, 2004.
UNIT – I
Introduction:
- Software engineering shares several fundamental similarities with other engineering
disciplines, such as civil engineering.
- “A systematic collection of good program development practices and techniques”.
- Software engineering discusses systematic and cost-effective techniques for software
development. These techniques help develop software using an engineering approach.
- These are characterized by the following aspects:
● Heavy use of past experience. Past experience is systematically arranged, and a
theoretical basis for it is provided wherever possible.
● While designing a system, several conflicting goals might have to be optimized. In
such situations, there may not be any unique solution and thus several alternative
solutions may be proposed.
● A pragmatic approach to cost-effectiveness is adopted and economic concerns are
addressed.
1 Software Engineering Discipline – Evolution and Impact:
1.1 Evolution of an Art to an Engineering Discipline:
- The early programmers used an exploratory programming style.
- In an exploratory programming style, each programmer evolves his own software
development techniques, guided solely by intuition, experience, whims, and fancies.
- We can consider the exploratory program development style as an art, since art is
mostly guided by intuition.
- There are many stories about programmers in the past who, like proficient artists,
could write good programs based on some esoteric knowledge.
- If we analyze the evolution of the software development style over the past fifty years,
we find that it has gradually changed from an art form to a craft, and finally to an
engineering discipline.
- A schematic representation of this technology development pattern is shown in the figure.
- As an example, consider the evolution of iron-making technology, which followed the
same pattern.
- Critics point out that many of the methodologies and guidelines provided by the
software engineering discipline lack scientific basis, are subjective, and are often
inadequate.
- There is no denying the fact that adopting software engineering techniques facilitates
development of high quality software in a cost-effective and efficient manner.
1.2 A Solution to the Software Crisis?
- To explain the present software crisis in simple words, consider the following.
- The expenses that organizations all around the world incur on software purchases,
compared to those on hardware purchases, have been showing a worrying trend over
the years.
- As can be seen in the figure, organisations are spending increasingly larger portions of
their budget on software as compared to hardware.

Programs Vs Software Products:


- Programs are developed by individuals for their personal use.
- Software products have multiple users and therefore have a good user interface, proper
user manuals, and good documentation support.
- A software product consists not only of the program code but also of all the associated
documents, such as the requirements specification document, the design document, and
the test document.
- A further difference is that the software products often are too large to be developed
by any individual.
- A software product is usually developed by a group of engineers working in a team.
- We distinguish between software engineers, who develop software products, and
programmers, who write programs.
- Since a group of software engineers usually works together in a team to develop a
software product, it is necessary for them to adopt a systematic development methodology.
- Although software engineering principles are intended primarily for use in the development
of software products, many results of software engineering are also appropriate for the
development of small programs.

Emergence of Software Engineering:


Software engineering discipline is the result of advancement in the field of technology.
In this section, we will discuss various innovations and technologies that led to the
emergence of software engineering discipline.
Early Computer Programming

In the early 1950s, computers were slow and expensive. Though the programs at that
time were very small in size, these computers took considerable time to process them. They
relied on assembly language, which was specific to the computer architecture. Thus,
developing a program required a lot of effort. Every programmer used his own style to
develop programs.

High level programming languages

With the introduction of semiconductor technology, computers became smaller,
faster, cheaper, and more reliable than their predecessors. One of the major developments
was the progress from assembly language to high-level languages. Early high-level
programming languages such as COBOL and FORTRAN came into existence. As a result,
programming became easier and the productivity of programmers increased. However,
programs were still limited in size, and programmers developed programs using their own
style and experience.
Control Flow Based Design

With the advent of powerful machines and high-level languages, the usage of
computers grew rapidly. In addition, the nature of programs changed from simple to
complex. The increased size and complexity could not be managed using individual styles. It
was realized that clarity of control flow (the sequence in which the program's instructions
are executed) is of great importance. To help the programmer design programs with a
good control flow structure, the flowcharting technique was developed. In the flowcharting
technique, the algorithm is represented using flowcharts. A flowchart is a graphical
representation that depicts the sequence of operations to be carried out to solve a given
problem.
Structured programming became a powerful tool that allowed programmers to write
moderately complex programs easily. It imposes a logical structure on the program so that it
is written in an efficient and understandable manner. The purpose of structured programming
is to make the software code easy to modify when required. Some languages, such as Ada,
Pascal, and dBase, are designed with features that implement the logical program structure in
the software code.
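To make the idea concrete, the following minimal sketch (in Python, with hypothetical names; an illustration, not from the text) uses only the three structured constructs - sequence, selection, and iteration - each with a single entry and a single exit:

    # A hypothetical sketch of structured control flow: the program is built
    # only from sequence, selection (if/else), and iteration (for/while),
    # with no arbitrary jumps.

    def classify_grades(scores):
        passed, failed = [], []          # sequence: one statement after another
        for score in scores:             # iteration: a bounded loop
            if score >= 50:              # selection: a two-way decision
                passed.append(score)
            else:
                failed.append(score)
        return passed, failed            # single exit point

    if __name__ == "__main__":
        print(classify_grades([72, 45, 88, 30]))   # -> ([72, 88], [45, 30])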
Data-Flow Oriented Design

With the introduction of Very Large Scale Integration (VLSI) circuits, computers
became more powerful and faster. As a result, various significant developments like
networking and GUIs came into being. Clearly, the growing complexity of software could no
longer be dealt with using control flow based design alone. Thus, a new technique, namely the
data-flow-oriented technique, came into existence. In this technique, the flow of data through
business functions or processes is represented using Data-flow Diagrams (DFDs). IEEE defines a
data-flow diagram (also known as a bubble chart or work-flow diagram) as 'a diagram
that depicts data sources, data sinks, data storage, and processes performed on data as nodes,
and logical flow of data as links between the nodes.'
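Although a DFD is a diagram, its content can be captured textually. The following minimal sketch (hypothetical node and flow names, illustrative only) encodes the nodes and the named data flows of a tiny DFD in the sense of the IEEE definition quoted above:

    # A hypothetical, minimal encoding of a DFD: nodes are data sources/sinks,
    # stores, and processes; each edge is a named data flow between two nodes.

    dfd_nodes = {
        "employee-file": "data store",
        "compute-pay":   "process",
        "print-slip":    "process",
        "employee":      "data sink",
    }

    dfd_flows = [
        ("employee-file", "compute-pay", "salary-record"),
        ("compute-pay",   "print-slip",  "net-pay"),
        ("print-slip",    "employee",    "pay-slip"),
    ]

    for src, dst, data in dfd_flows:
        print(f"{src} --{data}--> {dst}")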
Object Oriented Design

Object-oriented design technique has revolutionized the process of software
development. It not only includes the best features of structured programming but also some
new and powerful features such as encapsulation, abstraction, inheritance, and
polymorphism. These new features have tremendously helped in the development of well-
designed and high-quality software. Object-oriented techniques are widely used these days
as they allow reusability of code. They lead to faster software development and high-quality
programs. Moreover, they are easier to adapt and scale; that is, large systems can be created
by assembling reusable subsystems.
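A short sketch (hypothetical classes, illustrative only) of how the four features named above appear in code:

    # A hypothetical sketch of the four OO features named above.

    class Account:                       # abstraction: users see deposit/interest,
        def __init__(self, owner):       # not the internal representation
            self._balance = 0            # encapsulation: state kept behind methods
            self.owner = owner

        def deposit(self, amount):
            self._balance += amount

        def interest(self):              # to be specialized by subclasses
            return 0.0

    class SavingsAccount(Account):       # inheritance: reuses Account's code
        def interest(self):              # polymorphism: same call, new behaviour
            return self._balance * 0.04

    for acct in (Account("a"), SavingsAccount("b")):
        acct.deposit(100)
        print(acct.interest())           # 0.0, then 4.0 - dispatched by type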
Software Life cycle

SDLC stands for Software Development Life Cycle. SDLC is a process that consists
of a series of planned activities to develop or alter the Software Products. This tutorial will
give you an overview of the SDLC basics, SDLC models available and their application in
the industry. This tutorial also elaborates on other related methodologies like Agile, RAD and
Prototyping.
Why Learn SDLC?
Software Development Life Cycle (SDLC) is a process used by the software industry
to design, develop, and test high-quality software. The SDLC aims to produce high-quality
software that meets or exceeds customer expectations and reaches completion within time
and cost estimates.
 SDLC is the acronym of Software Development Life Cycle.
 It is also called the Software Development Process.
 SDLC is a framework defining tasks performed at each step in the software
development process.
 ISO/IEC 12207 is an international standard for software life-cycle processes. It aims to
be the standard that defines all the tasks required for developing and maintaining
software.

What is SDLC?
SDLC is a process followed for a software project, within a software organization. It
consists of a detailed plan describing how to develop, maintain, replace and alter or enhance
specific software. The life cycle defines a methodology for improving the quality of software
and the overall development process.
The following figure is a graphical representation of the various stages of a typical SDLC.
A typical Software Development Life Cycle consists of the following stages −
Stage 1: Planning and Requirement Analysis
Requirement analysis is the most important and fundamental stage in the SDLC. It is performed
by the senior members of the team with inputs from the customer, the sales department,
market surveys, and domain experts in the industry. This information is then used to plan the
basic project approach and to conduct a product feasibility study in the economic, operational,
and technical areas.
Planning for the quality assurance requirements and identification of the risks associated with
the project is also done in the planning stage. The outcome of the technical feasibility study is
to define the various technical approaches that can be followed to implement the project
successfully with minimum risks.
Stage 2: Defining Requirements
Once the requirement analysis is done, the next step is to clearly define and document the
product requirements and get them approved by the customer or the market analysts. This
is done through an SRS (Software Requirement Specification) document which consists of
all the product requirements to be designed and developed during the project life cycle.
Stage 3: Designing the Product Architecture
SRS is the reference for product architects to come out with the best architecture for the
product to be developed. Based on the requirements specified in SRS, usually more than one
design approach for the product architecture is proposed and documented in a DDS - Design
Document Specification.
This DDS is reviewed by all the important stakeholders, and based on various parameters such
as risk assessment, product robustness, design modularity, and budget and time constraints, the
best design approach is selected for the product.
A design approach clearly defines all the architectural modules of the product, along with their
communication and data-flow representation with external and third-party modules (if
any). The internal design of all the modules of the proposed architecture should be clearly
defined, down to the minutest detail, in the DDS.
Stage 4: Building or Developing the Product
In this stage of the SDLC, the actual development starts and the product is built. The
programming code is generated as per the DDS during this stage. If the design has been performed
in a detailed and organized manner, code generation can be accomplished without much hassle.
Developers must follow the coding guidelines defined by their organization and
programming tools like compilers, interpreters, debuggers, etc. are used to generate the code.
Different high level programming languages such as C, C++, Pascal, Java and PHP are used
for coding. The programming language is chosen with respect to the type of software being
developed.
Stage 5: Testing the Product
In modern SDLC models, testing activities are involved in all stages, so this stage is usually a
subset of all of them. However, this stage refers to the testing-only phase of the product, where
product defects are reported, tracked, fixed, and retested until the product reaches the quality
standards defined in the SRS.
Stage 6: Deployment in the Market and Maintenance
Once the product is tested and ready to be deployed it is released formally in the appropriate
market. Sometimes product deployment happens in stages as per the business strategy of that
organization. The product may first be released in a limited segment and tested in the real
business environment (UAT- User Acceptance testing).
Then, based on the feedback, the product may be released as it is or with the suggested
enhancements in the targeted market segment. After the product is released in the market, its
maintenance is done for the existing customer base.

SDLC Models
There are various software development life cycle models defined and designed, which are
followed during the software development process. These models are also referred to as
Software Development Process Models. Each process model follows a series of steps unique
to its type to ensure success in the process of software development.
Following are the most important and popular SDLC models followed in the industry −

 Waterfall Model
 Iterative Model
 Spiral Model
 V-Model
 Big Bang Model
Other related methodologies are the Agile Model, the Rapid Application Development (RAD)
Model, and Prototyping Models.
Waterfall Model:

The Waterfall approach was the first SDLC model to be used widely in software engineering to
ensure the success of a project. In the Waterfall approach, the whole process of software
development is divided into separate phases. In this model, typically, the outcome
of one phase acts as the input for the next phase, sequentially.
The following illustration is a representation of the different phases of the Waterfall Model.
The sequential phases in Waterfall model are −
 Requirement Gathering and analysis − All possible requirements of the system to be
developed are captured in this phase and documented in a requirement specification
document.
 System Design − The requirement specifications from the first phase are studied in this
phase and the system design is prepared. This system design helps in specifying
hardware and system requirements and helps in defining the overall system
architecture.
 Implementation − With inputs from the system design, the system is first developed
in small programs called units, which are integrated in the next phase. Each unit is
developed and tested for its functionality, which is referred to as Unit Testing.
 Integration and Testing − All the units developed in the implementation phase are
integrated into a system after testing of each unit. Post integration the entire system is
tested for any faults and failures.
 Deployment of system − Once the functional and non-functional testing is done, the
product is deployed in the customer environment or released into the market.
 Maintenance − Some issues come up in the client environment. To fix those issues,
patches are released. Also, to enhance the product, some better versions are released.
Maintenance is done to deliver these changes in the customer environment.
All these phases are cascaded, with progress seen as flowing steadily downwards (like a
waterfall) through the phases. The next phase is started only after the defined set of goals is
achieved for the previous phase and it is signed off; hence the name "Waterfall Model". In
this model, phases do not overlap.
Waterfall Model - Application
Every software product developed is different and requires a suitable SDLC approach to be
followed based on internal and external factors. Some situations where the use of the Waterfall
model is most appropriate are −
 Requirements are very well documented, clear and fixed.
 Product definition is stable.
 Technology is understood and is not dynamic.
 There are no ambiguous requirements.
 Ample resources with required expertise are available to support the product.
 The project is short.

Waterfall Model - Advantages


The advantages of waterfall development are that it allows for departmentalization and
control. A schedule can be set with deadlines for each stage of development and a product
can proceed through the development process model phases one by one.
Development moves from concept, through design, implementation, testing, installation,
troubleshooting, and ends up at operation and maintenance. Each phase of development
proceeds in strict order.
Some of the major advantages of the Waterfall Model are as follows −
 Simple and easy to understand and use
 Easy to manage due to the rigidity of the model. Each phase has specific deliverables
and a review process.
 Phases are processed and completed one at a time.
 Works well for smaller projects where requirements are very well understood.
 Clearly defined stages.
 Well understood milestones.
 Easy to arrange tasks.
 Process and results are well documented.

Waterfall Model - Disadvantages


The disadvantage of waterfall development is that it does not allow much reflection or
revision. Once an application is in the testing stage, it is very difficult to go back and change
something that was not well documented or thought through in the concept stage.
The major disadvantages of the Waterfall Model are as follows −
 No working software is produced until late during the life cycle.
 High amounts of risk and uncertainty.
 Not a good model for complex and object-oriented projects.
 Poor model for long and ongoing projects.
 Not suitable for projects where requirements are at a moderate to high risk of
changing; risk and uncertainty are high with this process model.
 It is difficult to measure progress within stages.
 Cannot accommodate changing requirements.
 Adjusting scope during the life cycle can end a project.
 Integration is done as a "big bang" at the very end, which does not allow identifying any
technological or business bottlenecks or challenges early.
Rapid Application Development (RAD) Model:
The rapid application development model emphasizes delivering projects in small pieces.
If the project is large, it is divided into a series of smaller projects. Each of these smaller
projects is planned and delivered individually. Thus, with a series of smaller projects, the
final project is delivered quickly and in a less structured manner. The major characteristic of
the RAD model is that it focuses on the reuse of code, processes, templates, and tools.

The phases of RAD model are listed below.

 Planning: In this phase, the tasks and activities are planned. The deliverables produced
from this phase are a project definition, project management procedures, and a work
plan. The project definition determines and describes the project to be developed. The
project management procedures describe processes for managing issues, scope, risk,
communication, quality, and so on. The work plan describes the activities required for
completing the project.

 Analysis: The requirements are gathered at a high level instead of as a precise set of
detailed requirements. In case the user changes the requirements, RAD allows
changing them over a period of time. This phase also determines plans for
testing, training, and implementation processes. Generally, RAD projects are small
in size, due to which high-level strategy documents are avoided.

 Prototyping: The requirements defined in the analysis phase are used to develop a
prototype of the application. A final system is then developed with the help of the
prototype. For this, it is essential to make decisions regarding technology and the tools
required to develop the final system.

 Repeat analysis and prototyping as necessary: When the prototype is developed, it is
sent to the user for evaluating its functioning. After the modified requirements are
available, the prototype is updated according to the new set of requirements and is again
sent to the user for analysis.

 Conclusion of prototyping: As prototyping is an iterative process, the project manager
and user agree on a fixed number of iterations. Ideally, three iterations are considered.
After the third iteration, additional tasks for developing the software are performed and
then tested. Last of all, the tested software is implemented.
 Implementation: The developed software, which is fully functioning, is deployed at
the user’s end.

Various advantages and disadvantages associated with the RAD model are listed in Table.
Table Advantages and Disadvantages of RAD Model

Advantages:
 Deliverables are easier to transfer, as high-level abstractions, scripts, and intermediate
codes are used.
 Provides greater flexibility, as redesign is done according to the developer.
 Results in a reduction of manual coding due to code generators and code reuse.
 Encourages user involvement.
 Possibility of fewer defects due to its prototyping nature.
Disadvantages:
 Useful only for larger projects.
 RAD projects fail if there is no commitment by the developers or the users to get the
software completed on time.
 Not appropriate when technical risks are high. This occurs when the new application
utilizes new technology or when the new software requires a high degree of
interoperability with existing systems.
 As the interests of users and developers can diverge from one iteration to the next,
requirements may not converge in the RAD model.

Agile Model

The Agile model assumes that every project needs to be handled differently and that existing
methods need to be tailored to best suit the project requirements. In Agile, the tasks are
divided into time boxes (small time frames) to deliver specific features for a release.
An iterative approach is taken, and a working software build is delivered after each iteration.
Each build is incremental in terms of features; the final build holds all the features required
by the customer.
Here is a graphical illustration of the Agile Model −
The Agile thought process started early in software development and became popular
with time due to its flexibility and adaptability.
The most popular Agile methods include Rational Unified Process (1994), Scrum (1995),
Crystal Clear, Extreme Programming (1996), Adaptive Software Development, Feature
Driven Development, and Dynamic Systems Development Method (DSDM) (1995). These
became collectively referred to as Agile Methodologies after the Agile Manifesto was
published in 2001.
Following are the Agile Manifesto principles −
 Individuals and interactions − In Agile development, self-organization and
motivation are important, as are interactions like co-location and pair programming.
 Working software − Demo working software is considered the best means of
communication with the customers to understand their requirements, instead of just
depending on documentation.
 Customer collaboration − As the requirements cannot be gathered completely in the
beginning of the project due to various factors, continuous customer interaction is very
important to get proper product requirements.
 Responding to change − Agile Development is focused on quick responses to change
and continuous development.
Agile Vs Traditional SDLC Models
Agile is based on adaptive software development methods, whereas traditional
SDLC models like the waterfall model are based on a predictive approach. Predictive teams in
the traditional SDLC models usually work with detailed planning and have a complete
forecast of the exact tasks and features to be delivered in the next few months or during the
product life cycle.
Predictive methods entirely depend on the requirement analysis and planning done in the
beginning of cycle. Any changes to be incorporated go through a strict change control
management and prioritization.
Agile uses an adaptive approach where there is no detailed planning and there is clarity on
future tasks only in respect of what features need to be developed. There is feature driven
development and the team adapts to the changing product requirements dynamically. The
product is tested very frequently, through the release iterations, minimizing the risk of any
major failures in future.
Customer interaction is the backbone of the Agile methodology, and open communication
with minimum documentation is a typical feature of an Agile development environment.
Agile teams work in close collaboration with each other and are most often located in the
same geographical location.

Agile Model - Pros and Cons


Agile methods have been widely accepted in the software world recently. However, they
may not always be suitable for all products. Here are some pros and cons of the Agile
model.
The advantages of the Agile Model are as follows −
 Is a very realistic approach to software development.
 Promotes teamwork and cross training.
 Functionality can be developed rapidly and demonstrated.
 Resource requirements are minimum.
 Suitable for fixed or changing requirements
 Delivers early partial working solutions.
 Good model for environments that change steadily.
 Minimal rules, documentation easily employed.
 Enables concurrent development and delivery within an overall planned context.
 Little or no planning required.
 Easy to manage.
 Gives flexibility to developers.
The disadvantages of the Agile Model are as follows −
 Not suitable for handling complex dependencies.
 More risk of sustainability, maintainability and extensibility.
 An overall plan, an agile leader, and agile PM practice are a must, without which it will
not work.
 Strict delivery management dictates the scope, functionality to be delivered, and
adjustments to meet the deadlines.
 Depends heavily on customer interaction, so if the customer is not clear, the team can be
driven in the wrong direction.
 There is a very high individual dependency, since there is minimum documentation
generated.
 Transfer of technology to new team members may be quite challenging due to lack of
documentation.
Spiral Model :
- The spiral model of software development is shown in the figure.

- The diagrammatic representation of this model appears like a spiral with many loops.
- The exact number of loops in the spiral is not fixed. Each loop of the spiral represents
a phase of the software process.
Risk handling in spiral model:
- A risk is essentially any adverse circumstance that might hamper the successful
completion of a software project.
- As an example, consider a project for which a risk can be that data access from a
remote database might be too slow to be acceptable by the customer.
- This risk can be resolved by building a prototype of the data access subsystem and
experimenting with the exact access rate.
- The spiral model supports coping with risks by providing the scope to build a
prototype at every phase of software development.

Phases of the Spiral Model:


- Each phase in this model is split into four sectors (or quadrants).
- In the first quadrant, a few features of the software are identified to be taken up for
immediate development, based on how crucial they are to the overall software
development.
Quadrant 1:
- 1st quadrant identifies the objectives of the phase and the alternative solutions possible
for the phase under consideration.
Quadrant 2:
- During the 2nd quadrant, the alternative solutions are evaluated to select the best
solution possible.
- For the chosen solution, the potential risks are identified and dealt with by developing
an appropriate prototype.
- A risk is essentially any adverse circumstance that might hamper the successful
completion of a software project.
Quadrant 3:
- The 3rd quadrant consists of developing and verifying the next level of the
product.
Quadrant 4:
- The 4th quadrant concerns reviewing the results of the stages traversed so far with the
customer and planning the next iteration of the spiral.
- In this model of development, the project team must decide how exactly to structure
the project into phases.
- The spiral model can be viewed as a meta model, since it subsumes all the discussed
models.
Advantages/pros and disadvantages/cons of the spiral model:
- There are a few disadvantages of the spiral model that restrict its use to only a few
types of projects.
- To the developers of a project, the spiral model usually appears as a complex model;
however, it is much more powerful than the prototyping model.
- The prototyping model can meaningfully be used when all the risks associated with a
project are known beforehand.
UNIT-II

1. Requirements Analysis and Specification:


Goal: The goal of the requirements analysis and specification phase is to clearly understand
the customer requirements and to systematically organize the requirements into a document
called the Software Requirements Specification (SRS) document.

- The requirements analysis and specification phase starts once the feasibility study
phase is complete and the project is found to be financially sound and technically
feasible.
- Two activities:
o Requirements gathering and analysis
o Requirements specification
- The engineers who gather and analyse customer requirements and write the
requirements specification document are known as system analysts in software
industry parlance.
- System analysts collect data pertaining to the product to be developed and analyse
these data to conceptualize what exactly needs to be done.
- Once the system analyst understands the precise user requirements, the requirements
are systematically organized and documented.
- The SRS document is the final output of the requirements analysis and specification
phase.
Requirement Gathering and Analysis
- The analyst starts the requirements gathering and analysis activity by collecting all
information from the customer which could be used to develop the requirements of
the system.
- It is very difficult to gather the necessary information and to form an unambiguous
understanding of a problem.
- This is especially so if there are no working models of the problem.
- If the product involves developing something new for which no working model exists,
then the requirements gathering and analysis activities become all the more difficult.
- We can conceptually divide the requirements gathering and analysis activity into two
separate tasks:
• Requirements gathering
• Requirements analysis
Requirements Gathering:
- Requirements gathering is also popularly known as requirements elicitation.
- The primary objective of the requirements gathering task is to collect the requirements
from the stakeholders.
- A stakeholder is a source of the requirements, and is usually a person or a group of
persons who are either directly or indirectly concerned with the software.
- Requirements gathering may sound like a simple task.
- However, in practice it is very difficult to gather all the necessary information from a
large number of stakeholders and from information scattered across several
documents.
- Given that many customers are not computer savvy, they describe their requirements
very vaguely.
- Good analysts share their experience and expertise with the customer and give
suggestions to define certain functionalities more comprehensively, and to make the
functionalities more general and complete.
- In the following, we briefly discuss the important ways in which an experienced
analyst gathers requirements:
1. Studying existing documentation: The analyst usually studies all the available documents
regarding the system to be developed before visiting the
customer site
2. Interview: Typically, there are many different categories of users of a software product.
Each category of users typically requires a different set of features from the software.
Therefore, it is important for the analyst to first identify the different categories of users and
then determine the requirements of each.
3. Task analysis: The users usually have a black-box view of a software and consider the
software as something that provides a set of services (functionalities). A service supported by
a software is also called a task. We can therefore say that the software performs various tasks
of the users.
4. Scenario analysis: A task can have many scenarios of operation. The different scenarios of a
task may take place when the task is invoked under different situations.
- For example, the possible scenarios for the book issue task of a library automation
software may be:
- Book is issued successfully to the member and the book issue slip is printed.
- The book is reserved, and hence cannot be issued to the member.
- The maximum number of books that can be issued to the member is already reached,
and no more books can be issued to the member.
5. Form analysis: Form analysis is an important and effective requirements gathering
activity that is undertaken by the analyst when the project involves automating an
existing manual system.
Requirements Analysis:
- The main purpose of the requirements analysis activity is to analyse the gathered
requirements to remove all ambiguities, incompleteness, and inconsistencies from the
gathered customer requirements and to obtain a clear understanding of the software to
be developed.
The following basic questions pertaining to the project should be clearly understood
by the analyst in order to obtain a good grasp of the problem:
● What is the problem?
● Why is it important to solve the problem?
● What are the possible solutions to the problem?
● What are the likely complexities that might arise while solving the problem?
- During requirements analysis, the analyst needs to identify and resolve three main
types of problems in the requirements:
• Anomaly
• Inconsistency
• Incompleteness
Anomaly:
An anomaly is an ambiguity in the requirement. When a requirement is anomalous,
several interpretations of the requirement are possible.
Inconsistency:
The requirements become inconsistent, if any one of the requirements contradicts
another.
Incompleteness:
- An incomplete requirement is one where some of the requirements have been
overlooked. Often, incompleteness is caused by the inability of the customer to
visualize and anticipate all the features that would be required in a system to be
developed.
- If the requirements are specified and studied using a formal method, then many of
these subtle anomalies and inconsistencies can be detected. Once a system has been
formally specified, it can be systematically analysed to remove all problems from the
specification.
Software Requirements Specification (SRS):
- After the analyst has gathered all the required information regarding the software to
be developed, and has removed all incompleteness, inconsistencies, and anomalies
from the specification, he starts to systematically organize the requirements in the
form of an SRS document.
- The SRS document usually contains all the user requirements in the informal form.
- Different people need the SRS document for very different purposes. Some of the
important categories of users of the SRS document and their needs are as follows:
Users, customers, and marketing personnel: The goal of this set of audience is to
ensure that the system as described in the SRS document will meet their needs.
Software Developers: The software developers refer to the SRS document to make
sure that they develop exactly what is required by the customer.
Test Engineers: Their goal is to ensure that the requirements are understandable
from a functionality point of view, so that they can test the software and validate its
working.
User Documentation Writers: Their goal in reading the SRS document is to ensure
that they understand the document well enough to be able to write the user manuals.
Project Managers: They want to ensure that they can estimate the cost easily by
referring to the SRS document and that it contains all the information required to plan
the project well.
Maintenance Engineers: The SRS document helps the maintenance engineers to
understand the functionality of the system. A clear knowledge of the functionality can
help them to understand the design and code.
- Many software engineers in a project consider the SRS document as a reference
document.
- The SRS document can even be used as a legal document to settle disputes between
the customers and the developers.
- Once the customer agrees to the SRS document, the development team proceeds with
the development work.
Why spend time and resource to develop an SRS document?
- A well-formulated SRS document finds a variety of usage other than the primary
intended usage as a basis for starting the software development work.
- In the following subsection, we identify the important uses of a well-formulated SRS
document:
- Forms an agreement between the customers and the developers: A good SRS
document sets the stage for the customers to form their expectation about the software
and the developers about what is expected from the software.
- Reduces future reworks: The process of preparation of the SRS document forces
the stakeholders to rigorously think about all of the requirements before design and
development get underway.
- Provides a basis for estimating costs and schedules: Project managers usually
estimate the size of the software from an analysis of the SRS document.
- Provides a baseline for validation and verification: The SRS document provides a
baseline against which compliance of the developed software can be checked. It is
also used by the test engineers to create the test plan.
- Facilitates future extensions: The SRS document usually serves as a basis for
planning future enhancements.
Characteristics of a Good SRS Document:
- Concise: The SRS document should be concise and at the same time unambiguous,
consistent, and complete. Verbose and irrelevant descriptions reduce readability and
also increase the possibilities of errors in the document.
- Implementation-independent: The SRS should be free of design and
implementation decisions unless those decisions reflect actual requirements.
- The SRS document should describe the system to be developed as a black box, and
should specify only the externally visible behaviour of the system. For this reason, the
SRS document is also called the black-box specification of the software being
developed.
- Traceable: It should be possible to trace a specific requirement to the design
elements that implement it and vice versa. Similarly, it should be possible to trace a
requirement to the code segments that implement it and the test cases that test this
requirement and vice versa.
- Modifiable: Customers frequently change the requirements during software
development due to a variety of reasons; the SRS document should therefore be easy
to modify.
- Identification of response to undesired events: The SRS document should discuss
the system responses to various undesired events and exceptional conditions that may
arise.
- Verifiable: All requirements of the system as documented in the SRS document
should be verifiable.
Attributes of bad SRS documents:
- SRS documents written by novices frequently suffer from a variety of problems.
- Over-specification: It occurs when the analyst tries to address the “how to” aspects
in the SRS document.
- Forward references: One should not refer to aspects that are discussed much later in
the SRS document. Forward referencing seriously reduces readability of the
specification.
- Wishful thinking: This type of problem concerns descriptions of aspects that would
be difficult to implement.
- Noise: The term noise refers to presence of material not directly relevant to the
software development process.
Important Categories of Customer Requirements:
- A good SRS document should properly categorize and organise the requirements into
different sections.
- An SRS document should clearly document the following aspects of a software:
• Functional requirements
• Non-functional requirements
— Design and implementation constraints
— External interfaces required
— Other non-functional requirements
• Goals of implementation.
Functional requirements:- The functional requirements part discusses the functionalities
required from the system.
The system is considered to perform a set of high-level functions {fi}. The
functional view of the system is shown in fig. 5.1. Each function fi of the system can be
considered as a transformation of a set of input data (ii) into the corresponding set of output
data (oi). The user can get some meaningful piece of work done using a high-level function.
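Viewed this way, each high-level function fi is simply a mapping from its input data to its output data. A minimal sketch with a hypothetical library function (illustrative only):

    # A hypothetical high-level function fi viewed as a transformation of
    # input data (ii) into output data (oi), as described above.

    def search_book(catalogue, title):
        # input set ii: the catalogue and a title entered by the user
        # output set oi: the list of matching records shown to the user
        return [rec for rec in catalogue if title.lower() in rec["title"].lower()]

    catalogue = [{"title": "Fundamentals of Software Engineering", "shelf": "S3"}]
    print(search_book(catalogue, "software"))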

Nonfunctional requirements:- Nonfunctional requirements deal with the characteristics of
the system which cannot be expressed as functions - such as the maintainability of the
system, portability of the system, usability of the system, etc.
Goals of implementation:- The goals of implementation part documents some general
suggestions regarding development. These suggestions guide trade-offs among design goals.
The goals of implementation section might document issues such as revisions to the system
functionalities that may be required in the future, new devices to be supported in the future,
reusability issues, etc.
Decision tree
- A decision tree gives a graphic view of the processing logic involved in decision
making and the corresponding actions taken.
- Decision trees specify which variables are to be tested, what actions are to be taken
depending upon the outcome of the decision-making logic, and the order in which the
decision making is performed.
- The edges of a decision tree represent conditions, and the leaf nodes represent the actions
to be performed depending on the outcome of testing the conditions.
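Coded directly, each edge of the tree becomes a condition and each leaf becomes an action. A sketch using a hypothetical membership example (illustrative only):

    # A hypothetical decision tree coded as nested if/else: edges are the
    # conditions tested, leaves are the actions performed.

    def membership_action(kind):
        if kind == "new":            # edge: new member?
            return "create record and issue card"          # leaf action
        elif kind == "renewal":      # edge: renewal?
            return "update record and extend validity"     # leaf action
        else:                        # remaining edge
            return "ask user to re-enter the request"      # leaf action

    print(membership_action("renewal"))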

Decision table
- A decision table is used to represent the complex processing logic in a tabular or a
matrix form.
- The upper rows of the table specify the variables or conditions to be evaluated. The
lower rows of the table specify the actions to be taken when the corresponding
conditions are satisfied.
- A column of the table is called a rule. A rule implies that if a condition is true, then the
corresponding action is to be executed.
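Such a table can also be represented directly as data: each rule pairs a condition with an action, and evaluation executes the action of the first rule whose condition holds. A minimal sketch with hypothetical conditions (illustrative only):

    # A hypothetical decision table: the upper-row conditions become
    # predicates, the lower-row actions become functions; each column
    # (condition, action) pair is one rule.

    rules = [
        (lambda req: req == "new",     lambda: "create member record"),
        (lambda req: req == "renewal", lambda: "update member record"),
        (lambda req: req == "cancel",  lambda: "delete member record"),
    ]

    def decide(request):
        for condition, action in rules:   # scan the rules (columns)
            if condition(request):        # condition true =>
                return action()           # execute the corresponding action
        return "invalid request"

    print(decide("cancel"))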
FORMAL SYSTEM DEVELOPMENT TECHNIQUES
- Formal techniques have emerged as a central issue in software engineering. This is not
accidental; the importance of precise specification, modeling, and verification is
recognized in most engineering disciplines.
- Formal methods provide us with tools to precisely describe a system and to show that a
system is correctly implemented.
- We will first highlight some important concepts in formal methods, and then examine
the merits and demerits of using formal techniques.
What is a formal technique?
- A formal technique is a mathematical method used to: specify a hardware and/or
software system, verify whether a specification is realizable, verify whether an
implementation satisfies its specification, prove properties of a system without
necessarily running the system, and so on.
- A formal specification language consists of two sets, syn and sem, and a relation sat
between them. The set syn is called the syntactic domain, the set sem is called the
semantic domain, and the relation sat is called the satisfaction relation.
Syntactic domains
- The syntactic domain of a formal specification language consists of an alphabet of
symbols and a set of formation rules to construct well-formed formulas from the
alphabet. The well-formed formulas are used to specify a system.
Semantic domains
- Formal techniques can have considerably different semantic domains. Abstract data
type specification languages are used to specify algebras, theories, and programs.
- Programming languages are used to specify functions from input to output values.
Concurrent and distributed system specification languages are used to specify state
sequences, event sequences, state transition sequences, synchronization trees, partial
orders, state machines, etc.

Model Vs Property oriented methods


- Formal methods are usually classified into two broad categories - the so-called model-
oriented and property-oriented approaches. In the model-oriented style, one defines a
system's behaviour directly by constructing a model of the system in terms of
mathematical structures such as tuples, relations, functions, sets, sequences, etc.
- In the property-oriented style, the system's behaviour is defined indirectly by stating its
properties, usually in the form of a set of axioms that the system must satisfy.
- It is alleged that property-oriented approaches are more suitable for requirements
specification, and that model-oriented approaches are more suited to system
specification.
- The reason for this distinction is that property-oriented approaches specify system
behaviour not by what they say of the system, but by what they do not say of the
system.
- Since initial customer requirements undergo several changes as development proceeds,
the property-oriented style is generally preferred for requirements specification.
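As a small illustration of the property-oriented style (the standard stack example from the formal-specification literature, not taken from this text), a stack can be specified indirectly by a set of axioms that any implementation must satisfy, without constructing any concrete model such as an array or a list:

    pop(push(s, x)) = s
    top(push(s, x)) = x
    empty(create) = true
    empty(push(s, x)) = false

Any data structure satisfying these axioms behaves as a stack; the axioms deliberately say nothing about how the stack is represented.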
Operational Semantics
- There are different types of operational semantics according to what is meant by a
single run of the system and how the runs are grouped together to describe the
behaviour of the system.
Linear semantics
- A run of a system is described by a sequence of events or states. The concurrent
activities of the system are represented by nondeterministic interleaving of the atomic
actions.
Branching semantics
- The behaviour of a system is represented by a directed graph. The nodes of the graph
represent the possible states in the evolution of the system.
Maximally parallel semantics
- This is again not a natural model of concurrency, since it implicitly assumes the
availability of all the required computational resources.
Partial order semantics:
- Under this view, the semantics ascribed to a system is a structure of states satisfying a
partial order relation among the states (events).
- The partial order represents a precedence ordering among events, and constrains some
events to occur only after some other events have occurred; while the occurrence of
other events (called concurrent events) is considered to be incomparable.
Merits and limitations of formal methods
- A formal specification encourages rigour. Often, the very process of constructing a
rigorous specification is more important than the formal specification itself.
- The construction of a rigorous specification clarifies several aspects of system
behaviour that are not obvious in an informal specification.
- Formal methods usually have a well-founded mathematical basis. Thus, formal
specifications are not only more precise, but also mathematically sound, and can be used
to reason about the properties of a specification and to rigorously prove that an
implementation satisfies its specification. Informal specifications may be useful in
understanding a system and its documentation, but they cannot serve as a basis for
verification.
UNIT-III
SOFTWARE DESIGN
- Software design deals with transforming the customer requirements, as described in the
SRS document, into a form that is implementable using a programming language.
- The objective of the design phase is to take the SRS document as the input and
produce the design documents before completion of the phase.
- The design activities can be broadly classified into two important parts:
▪ Preliminary design
▪ Detailed design
- The meaning and scope of these two activities tend to vary considerably from one
methodology to another.
- The outcome of high level design is called the program structure or software
architecture.
- A large number of software design techniques are available. We will study a few of
these techniques and discuss their important characteristics.

CHARACTERISTICS OF A GOOD SOFTWARE DESIGN


- The definition of a good software design can vary depending on the application for which
it is being designed.
- For example, the memory size used up by a program may be an important issue
characterizing a good solution for embedded software development, since embedded
applications are often required to be implemented using memory of limited size due to
cost, space, or power consumption constraints.
- Most researchers and software engineers agree on a few desirable characteristics that
every good software design for general applications must possess.
- The characteristics are listed below
~ Correctness: A good design should correctly implement all the
functionalities of the system.
~ Understandability: A good design should be easily understandable.
~ Efficiency: It should be efficient.
~ Maintainability: It should be easily amenable to change.
Understandability of a Design: A Major Concern:
- Given that we are choosing from only correct design solutions, understandability of a
design solution is possibly the most important issue to be considered while judging
the goodness of a design.
- A good design solution should be simple and easily understandable. A design that is
easy to understand is also easy to develop and maintain. A complex design would lead
to severely increased life cycle costs; unless a design is easily understandable, it would
be very difficult to maintain it.
- In order to facilitate understandability, a design should have the following features:
~ It should use consistent and meaningful names for various design components.
~ The design should be modular. The term modularity means that it should use a
cleanly decomposed set of modules.
~ It should neatly arrange the modules in a hierarchy, e.g., a tree-like diagram.
- We now elaborate the concepts of modularity, clean decomposition, and neat
arrangement of modules.
- A design solution should be modular and layered to be understandable
Modularity:
- A modular design is an effective decomposition of a problem.
- Decomposition of a problem into modules facilitates the design by taking advantage of
the divide-and-conquer principle.
- If different modules are independent of each other, then each module can be understood
separately.
- In Figure 5.2, the modules M1, M2, etc. have been drawn as rectangles.
- The invocation of a module by another module has been shown as an arrow. It can
easily be seen that the design solution of Figure 5.2(a) would be easier to understand,
since the interactions among the different modules are low.
- A design solution is said to be highly modular if the different modules in the solution
have high cohesion and their inter-module couplings are low.
Layered design
- A layered design is one in which when the call relations among different modules
are represented graphically, it would result in a tree-like diagram with clear
layering.
- In a layered design solution, the modules are arranged in a hierarchy of layers.
- A module can only invoke functions of the modules in the layer immediately
below it.
- The higher layer modules can be considered to be similar to managers that invoke
(order) the lower layer modules to get certain tasks done
- A layered design can make the design solution easily understandable.
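A minimal sketch of the idea (hypothetical modules, illustrative only): each function belongs to a layer and calls only into the layer immediately below it.

    # Hypothetical three-layer arrangement: ui -> logic -> data.
    # Each function invokes only the layer immediately below it.

    def read_record(member_id):           # data layer (lowest)
        return {"id": member_id, "books": 2}

    def can_issue_book(member_id):        # logic layer: calls only the data layer
        return read_record(member_id)["books"] < 3

    def handle_issue_request(member_id):  # UI layer: calls only the logic layer
        print("issue" if can_issue_book(member_id) else "refuse")

    handle_issue_request(42)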

COHESION AND COUPLING


- Cohesion is a measure of the functional strength of a module, whereas the coupling
between two modules is a measure of the degree of interdependence or interaction
between the two modules.
- A module having high cohesion and low coupling is said to be functionally
independent of other modules.
- By the term functional independence, we mean that a cohesive module performs a
single task or function.
- A functionally independent module has minimal interaction with other modules.
- Functional independence is a key to any good design, primarily for the following
reasons:
Error isolation
- Functional independence reduces error propagation. The reason is that if a module is
functionally independent, its degree of interaction with other modules is less.
- Therefore, any error existing in a module would not directly affect the other modules.
Scope for reuse
- Reuse of a module becomes possible because each module performs some well-defined
and precise function, and the interface of the module with other modules is simple and
minimal.
- Therefore, a cohesive module can easily be taken out and reused in a different
program.
Understandability
- The complexity of the design is reduced, because the different modules are more or less
independent of each other and can be understood in isolation.

Classification of Cohesiveness
Coincidental cohesion
- A module is said to have coincidental cohesion if it performs a set of tasks that
relate to each other very loosely.
- The module contains a random collection of functions.
- It is likely that the functions have been put in the module out of pure coincidence,
without any thought or design.
Logical cohesion
- A module is said to be logically cohesive if all elements of the module perform similar
operations.
Temporal cohesion
- When a module contains functions that are related by the fact that all the functions must
be executed in the same time span, the module is said to exhibit temporal cohesion.
- The set of functions responsible for initialization, start-up, shut-down of some process,
etc. exhibit temporal cohesion.

Procedural cohesion
- A module is said to possess procedural cohesion if the set of functions of the module are
all part of a procedure in which a certain sequence of steps has to be carried out for
achieving an objective.
Communicational cohesion
- A module is said to have communicational cohesion if all the functions of the module
refer to or update the same data structure.
Sequential cohesion
- A module is said to possess sequential cohesion if the elements of the module form
parts of a sequence, where the output from one element of the sequence is input to the
next.
Functional cohesion
- Functional cohesion is said to exist, if the different elements of a module cooperate to
achieve a single function.
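- To make the two extremes concrete, the following is a minimal C++ sketch (the module
and function names are hypothetical, not from the text). The first module groups
unrelated helpers (coincidental cohesion); in the second, every element cooperates to
achieve a single function (functional cohesion).

#include <cmath>
#include <string>

// Coincidental cohesion: a grab-bag of unrelated helpers grouped
// together without any design rationale.
namespace misc_utils {
    double squareRoot(double x) { return std::sqrt(x); }   // numeric helper
    std::string trim(const std::string& s) { return s; }   // string helper (stubbed)
    void printBanner() { }                                 // reporting helper (stubbed)
}

// Functional cohesion: every element cooperates to achieve one single
// function, namely computing the root mean square of three numbers.
namespace rms_module {
    double sumOfSquares(double a, double b, double c) {
        return a * a + b * b + c * c;
    }
    double rms(double a, double b, double c) {
        return std::sqrt(sumOfSquares(a, b, c) / 3.0);
    }
}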
Classification of coupling
- The coupling between two modules indicates the degree of interdependence between
them.
- Intuitively, if two modules interchange large amounts of data, then they are highly
interdependent.
- The degree of coupling between two modules depends on their interface complexity
- Five types of coupling can occur between any two modules:
Data coupling
- Two modules are data coupled, if they communicate using an elementary data item
that is passed as a parameter between the two.
Stamp coupling
- Two modules are stamp coupled, if they communicate using a composite data item such
as a record in PASCAL or a structure in C
Control coupling
- Control coupling exists between two modules, if data from one module is used to direct
the order of instruction execution in another
- An example of control coupling is a flag set in one module and tested in another
module.
Common coupling
- Two modules are common coupled, if they share some global data items.
Content coupling
- Content coupling exists between two modules, if their code is shared, e.g., a branch
from one module into another module.
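- As a small C++ sketch of two of these coupling types (the functions and values here are
hypothetical, not from the text):

#include <cstdio>

// Data coupling: the modules communicate using an elementary data item
// (a single double) passed as a parameter.
double computeTax(double grossPay) {
    return grossPay * 0.1;
}

// Control coupling: a flag set in one module directs the order of
// instruction execution in another.
void printAmount(double amount, bool asReport) {
    if (asReport)
        std::printf("REPORT: amount = %.2f\n", amount);
    else
        std::printf("%.2f\n", amount);
}

int main() {
    printAmount(computeTax(50000.0), true);   // data flows in; the flag directs behaviour
    return 0;
}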
NEAT ARRANGEMENT
- The control hierarchy represents the organization of the program components.
- The control hierarchy is also called program structure.
- Many different types of notations have been used to represent the control hierarchy.
Layering
- In a layered design solution, the modules are arranged in layers.
- The control relationship among modules in a layer is expressed in the following way:
a module that controls another module is said to be superordinate to it; conversely, a
module controlled by another module is said to be subordinate to the controller.
- Module hierarchies also represent a subtle characteristic of the software architecture.
Control abstraction
- A module should invoke the functions of the modules in the layer immediately below it.
- A module at a lower layer should not invoke the services of the modules at higher layers.
Depth and Width
- These provide an indication of the number of levels of controls and the overall span of
control respectively
Fan out
- It is a measure of the number of modules that are directly controlled by a given module
- A design having modules with high fan-out numbers is not a good design, as such
modules would lack cohesion.
Fan in
- It indicates the number of modules directly invoking a given module. High fan-in
represents code reuse and is, in general, encouraged.
SOFTWARE DESIGN APPROACHES
- There are fundamentally two different approaches to software design – function
oriented design, and object oriented design
- However, these two design approaches are radically different.
Function oriented design
1. Top-down decomposition:
A system is viewed as something that performs a set of functions. Starting at
this high level view of the system, each function is successively refined into more
detailed functions. For example, consider a function create-new-library-member which
essentially creates the record for a new member, assigns a unique membership
number to the new member, and prints a bill towards the membership charge. This
function may consist of the following sub functions:
~ Assign membership number
~ Create member record
~ Print bill
2. Centralized system state:
The system state is centralized and shared among different functions.
~ Create new member
~ Delete member
~ Update member record
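- A minimal C++ sketch of this centralized-state style (the names are hypothetical): every
function reads and updates the same shared member table.

#include <map>
#include <string>

// Centralized system state: one shared table of members that all the
// functions operate upon.
static std::map<int, std::string> memberTable;

void createNewMember(int number, const std::string& name) {
    memberTable[number] = name;        // shared state is updated directly
}

void deleteMember(int number) {
    memberTable.erase(number);
}

void updateMemberRecord(int number, const std::string& newName) {
    memberTable[number] = newName;
}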
Object oriented design
- In the object oriented design approach, the system is viewed as a collection of objects.
- The system state is decentralized among the objects, and each object manages its own
state information.
- Objects have their own internal data which define their state.
- Similar objects constitute a class.
- In other words, each object is a member of some class
- Classes may inherit features from a super class.
Data abstraction: The principle of data abstraction implies that how data is exactly stored is
abstracted away. This means that any entity external to the object (that is, an instance
of an ADT) would have no knowledge about how data is exactly stored, organised,
and manipulated inside the object
Data structure: A data structure is constructed from a collection of primitive data items. Just
as a civil engineer builds a large civil engineering structure using primitive building
materials such as bricks, iron rods, and cement, a programmer builds a data structure
from primitive data items such as integers, floats, and characters.
Data type: A type is a programming language terminology that refers to anything that can be
instantiated. For example, int, float, char etc., are the basic data types supported by C
programming language.
OBJECT ORIENTED VS FUNCTION ORIENTED DESIGN
- Unlike function oriented design methods, in OOD the basic abstractions are not real
world functions such as sort, display, track, etc.,
- but real world entities such as employee, picture, machine, radar system, etc.
- In OOD, state information is not represented in a centralized shared memory but is
distributed among the objects of the system.
- Function oriented techniques, such as SA/SD, group the functions together if as a group
they constitute a higher level function.
Fire alarm system
- The fire alarm system would monitor the status of these smoke detectors.
- Whenever a fire condition is reported by any of the smoke detectors, the fire alarm
system should determine the location at which the fire condition has occurred and then
sound the sirens only in the neighboring locations
- The fire alarm system should also flash an alarm message on the computer console.
Function oriented approach
The functions which operate on the system state are
interrogate_detectors ();
get_detector_location ();
determine_neighbour ();
ring_alarm ();
reset_alarm ();
report_fire_location ();
Object oriented approach
Class detector
Attribute: status, location, neighbors
Operations: create, sense-status, get-location, find-neighbors
Class alarm
Attributes: location, status
Operations: create, ring-alarm, get location, reset alarm
- In the object oriented program, an appropriate number of instances of the classes
detector and alarm should be created.
- We can now examine the function oriented and the object oriented programs, and see
that in the function oriented program the system state is centralized and several
functions operating on this central data are defined.
- In case of the object oriented program, the state information is distributed among
various objects.
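- The following is a minimal C++ sketch of the object oriented version outlined above;
the attribute types and method bodies are simplifying assumptions. Note that the state
(status, location) now lives inside each object instead of in a central store.

#include <string>
#include <vector>

class Detector {
    bool status = false;                  // state is private to each object
    std::string location;
    std::vector<Detector*> neighbors;
public:
    explicit Detector(const std::string& loc) : location(loc) {}
    bool senseStatus() const { return status; }
    const std::string& getLocation() const { return location; }
    const std::vector<Detector*>& findNeighbors() const { return neighbors; }
};

class Alarm {
    std::string location;
    bool status = false;
public:
    explicit Alarm(const std::string& loc) : location(loc) {}
    void ringAlarm() { status = true; }
    void resetAlarm() { status = false; }
    const std::string& getLocation() const { return location; }
};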
FUNCTION ORIENTED SOFTWARE DESIGN
OVERVIEW OF SA/SD METHODOLOGY
- The SA/SD methodology consists of two distinct activities
o Structured Analysis (SA)
o Structured Design (SD)
- The aim of the structured analysis activity is to transform a textual problem
description into a graphical model.
- During structured analysis, the SRS document is transformed into a data flow diagram
(DFD) model.
- During structured design, the DFD model is transformed into a structure chart
- It is important to understand that the purpose of structured analysis is to capture the
detailed structure of the system as perceived by the user
- The results of structured analysis can, therefore, be easily understood by the user,
since the user's terminology is used for naming the different functions and data.
- The structured analysis activity transforms the SRS document into a graphic model
called the DFD model.
STRUCTURED ANALYSIS
- During structured analysis, the major processing tasks of the system are analyzed, and
the data flows among these processing tasks are represented graphically.
⬧ Top down decomposition approach
⬧ Divide and conquer principle. Each function is decomposed
independently
⬧ Graphical representation of the analysis results using data flow
diagrams (DFDs)
- A DFD in simple words is a hierarchical graphical model of a system that shows the
different processing activities or functions that the system performs and the data
interchange among these functions
- In the DFD terminology it is useful to consider each function as a processing station
that consumes some input data and produces some output data.
DATA FLOW DIAGRAMS (DFDs)
- The DFD (also known as the bubble chart) is a simple graphical formalism that can be
used to represent a system in terms of the input data to the system, various processing
carried out on these data, and the output data generated by the system.
- The main reason why the DFD technique is so popular is probably that the DFD is a
very simple formalism: it is simple to understand and use.
- A DFD model uses a very limited number of primitive symbols to represent the
functions performed by a system and the data flow among these functions.
- Starting with a set of high level functions that a system performs, a DFD model
hierarchically represents the various sub functions.
Primitive Symbols used for Constructing DFDs
Function symbol
- A function is represented using a circle.
- This symbol is called a process or a bubble.
- Bubbles are annotated with the names of the corresponding functions
External entity symbol
- An external entity such as a librarian, a library member, etc. is represented by a
rectangle
- The external entities are essentially those physical entities external to the software
system which interact with the system by inputting data to the system or by consuming
the data produced by the system.
Data flow symbol
- A directed arc or an arrow is used as a data flow symbol.
- A data flow symbol represents the data flow occurring between two processes or
between an external entity and a process, in the direction of the data flow arrow.
- Data flow symbols are usually annotated with the corresponding data names.
Data store symbol
- A data store represents a logical file.
- It is represented using two parallel lines.
- A logical file can represent either a data structure or a physical file on disk.
- Each data store is connected to a process by means of a data flow symbol.
- The direction of the data flow arrow shows whether data is being read from or written
into a data store.
- An arrow flowing in or out of a data store implicitly represents the entire data of the
data store and hence arrows connecting to a data store need not be annotated with the
name of the corresponding data items.
Output symbol
- The output symbol is used when a hard copy is produced.
- The notations that we are following in this text are closer to the Yourdon's notations
than to the other notations.
Some important concepts associated with designing DFDs
Synchronous and asynchronous operations
- If two bubbles are directly connected by a data flow arrow, then they are synchronous.
- This means that they operate at the same speed
- However, if two bubbles are connected through a data store, as in Figure 6.3(b), then
the speeds of operation of the bubbles are independent.
Data dictionary
- DFD model of a system must be accompanied by a data dictionary.
- A data dictionary lists all data items appearing in the DFD model of a system. The data
items listed include all data flows and the contents of all data stores appearing in the
DFD model of a system.
- A data dictionary lists the purpose of all data items and the definition of all composite
data item in terms of their components data items.
grossPay = regularPay + overtimePay
- A data dictionary plays a very important role in any software development process
because of the following reasons.
~ A data dictionary provides a standard terminology for all relevant data
for use by all engineers working in the same project. A consistent
vocabulary for data items is very important, since in large projects
different engineers have a tendency to use different terms to refer to
the same data, which unnecessarily causes confusion
~ The data dictionary provides the analyst with a means to determine the
definition of different data structure in terms of their component
elements
- For large systems, the data dictionary can be extremely complex and voluminous.
- Even a moderate-sized project can have thousands of entries in the data dictionary.
Data definition
- Composite data items can be defined in terms of primitive data items using the
following data definition operators
+ - denotes composition of two data items (e.g., a+b)
[,] - represents selection, i.e., any one of the data items listed inside the brackets
can occur (e.g., [a,b])
() - the contents inside the brackets represent optional data which may or may not
appear
{} - represents iterative data definition
= - represents equivalence
/* */ - anything appearing within these is considered a comment
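- As a small worked example (the data items here are hypothetical, not from the text),
these operators can be combined to define composite items:

memberRecord = memberName + membershipNumber + address + (email) + {bookIssued}
memberName = firstName + (middleName) + lastName

Here (email) is optional data and {bookIssued} may occur zero or more times.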
Developing the DFD model of system
- A DFD model of a system graphically depicts the transformation of the data input to the
system to the final result through a hierarchy of levels.
- To develop the data flow model of a system, first the most abstract representation of the
problem is worked out.
Context diagram
- The context diagram is the most abstract data flow representation of a system.
- The data input to the system and the data output from the system are represented as
incoming and outgoing arrows.
- To develop the context diagram of the system, we have to analyze the SRS document to
identify the different types of user.
Level 1 DFD
- To develop the level 1 DFD, examine the high level functional requirements.
- If there are between three to seven high level functional requirements, then these can be
directly represented as bubbles in the level 1 DFD.
Decomposition
- Decomposition of a bubble is known as factoring or exploding a bubble.
- Each bubble at any level of DFD is usually decomposed to anything between three to
seven bubbles.
1. The SRS document is examined to determine the different high-level functions that the
system needs to perform, along with their input and output data.
2. The high level functions described in the SRS document are examined.
3. Each high level function is decomposed into its constituent subfunctions.
4. Step 3 is repeated recursively for each subfunction until a subfunction can be
represented by using a simple algorithm.
Numbering of bubbles
- The bubble at the context level is usually assigned the number 0 to indicate that it is the
0 level DFD.
- Bubbles at level 1 are numbered, 0.1, 0.2, 0.3 etc.
Commonly made errors while constructing a DFD model
- Although DFDs are simple to understand and draw, students and practitioners alike
encounter similar types of problems while modeling software problems using DFDs.
- It is therefore helpful to understand the different types of mistakes that beginners
usually make while constructing the DFD model of systems.
Example 1 – RMS calculating software
- A software system called RMS calculating software reads three integral numbers from
the user in the range between -1000 and +1000, determines the root mean square of
the three input numbers, and then displays it.
- In practical structured analysis of non-trivial systems, decomposition is never carried
out up to the basic instruction level as done in this example.
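- For reference, the following is a minimal C++ sketch of the whole RMS program (the
function names are hypothetical), including a check for the stated -1000 to +1000
input range:

#include <cmath>
#include <cstdio>

// Computes the root mean square of three integers.
static double computeRMS(int a, int b, int c) {
    double meanSquare = (double(a) * a + double(b) * b + double(c) * c) / 3.0;
    return std::sqrt(meanSquare);
}

int main() {
    int a, b, c;
    if (std::scanf("%d %d %d", &a, &b, &c) != 3) return 1;
    // Reject values outside the range stated in the problem description.
    if (a < -1000 || a > 1000 || b < -1000 || b > 1000 || c < -1000 || c > 1000)
        return 1;
    std::printf("RMS = %f\n", computeRMS(a, b, c));
    return 0;
}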
Shortcomings of the DFD model
- DFDs leave ample scope to be imprecise.
- In the DFD model we judge the function performed by a bubble from its label.
- However a short label may not capture the entire functionality of a bubble
- The method of carrying out decomposition to arrive at the successive levels and the
ultimate level to which decomposition is carried out are highly subjective and depend
on the choice and judgment of the analyst.
UNIT-IV
OBJECT MODELING USING UML
Overview of Object oriented concepts
Basic mechanisms:
The principles of object-orientation have been founded on a few simple concepts.
Objects
- Conceptually, an object is a thing you can interact with: you can send it various
messages and it will react.
- How it behaves depends on the current internal state of the object, which may change,
for example, as part of the object’s reaction to receiving a message.
- Each object in an object-oriented program usually represents a tangible real-world
entity such as a library member, a book, an issue register, etc.
- It matters which object you interact with, and you usually address an object by name;
that is, an object has an identity which distinguishes it from all other objects.
- So an object is a thing which has behavior, state and identity:
A library member can be an object of a library automation system. The private data of such a
library member object can be
⬧ Name of the member
⬧ Membership number
⬧ Address
⬧ Phone number
⬧ Email
⬧ Date when admitted as a member
⬧ Membership expiry date
⬧ Books outstanding etc.
- The data stored internally in an object are called its attributes, and the operations
supported by an object are called its methods.
Class
- Similar objects constitute a class. That is, all the objects constituting a class possess
similar attributes and methods.
- For example, the set of all library members would constitute the class LibraryMember
in a library automation application.
- In this case, each library member object has attributes such as member name,
membership number, member address, etc., and also has methods such as issue-book,
return-book, etc.
- Let us now examine whether a class supports the two mechanisms of an ADT.
- Abstract data: The data of an object can be accessed only through its methods. In other
words, the exact way data is stored internally (stack, array, queue, etc.) in the object is
abstracted out (not known to the other objects).
- Data type: In programming language terminology, a data type defines a collection of
data values and a set of predefined operations on those values.
Methods and messages
- The operations (such as create, issue, return, etc.) supported by an object are called its
methods
- Notice that we are distinguishing between the terms operation and method, though
the terms 'operation' and 'method' are sometimes used interchangeably.
- An operation is a specific responsibility of a class, and the responsibility is
implemented using a method.
- Methods are the only means available to other objects for accessing and manipulating
the data of another object. The methods of an object are invoked by sending messages
to it.
Class relationships:
- Classes in a programming solution can be related to each other in the following four
ways:
• Inheritance
• Association and link
• Aggregation and composition
• Dependency
Inheritance:
- The inheritance feature allows one to define a new class by incrementally extending the
features of an existing class.
- The original class is called the base class (also called superclass or parent class) and
the new class obtained through inheritance is called the derived class (also called a
subclass or a child class).
- The derived class is said to inherit the features of the base class.
- For example, the classes Faculty, Students, and Staff can be derived from the base class
LibraryMember.
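- A minimal C++ sketch of this hierarchy (the member data and methods are simplified
assumptions, not the text's actual design):

#include <string>

// Base class: features common to every library member.
class LibraryMember {
protected:
    std::string name;
    int membershipNumber = 0;
public:
    void issueBook(int bookId) { (void)bookId; /* common issue logic */ }
};

// Derived classes inherit the base class features and incrementally extend them.
class Faculty : public LibraryMember {
    int maxBooksAllowed = 10;      // faculty-specific attribute
};

class Student : public LibraryMember {
    std::string rollNumber;        // student-specific attribute
};

class Staff : public LibraryMember {
};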
Multiple inheritance
- Multiple inheritance is a mechanism by which a subclass can inherit attributes and
methods from more than one base class.
Association and link:
- Association is a common type of relation among classes. When two classes are
associated, they can take each other's help (i.e., invoke each other's methods) to serve
user requests.
- More technically, we can say that if one class is associated with another bidirectionally,
then the corresponding objects of the two classes know each other's ids (identities).
- A Student can register in one Elective subject. In this example, the class Student is
associated with the class ElectiveSubject.
- Therefore, an ElectiveSubject object (e.g. MachineLearning) would know the ids of all
Student objects.
- The association relationship can either be bidirectional or unidirectional. In a
bidirectional association, both the associated classes know each other (store each
other's ids).
N-Ary Association:
- Binary association between classes is very commonly encountered in design problems.
- However, there can be situations where three or more different classes can be involved
in an association.
- A class can have an association relationship with itself. This is called recursive
association or unary association.
Composition and Aggregation:
- Composition and aggregation represent part/whole relationships among objects. Objects
which contain other objects are called composite objects.
- As an example, consider the following: a Book object can have up to ten Chapters. In
this case, a Book object is said to be composed of up to ten Chapter objects.
Dependency:
- A dependency relation between two classes shows that any change made to the
independent class would require the corresponding change to be made to the dependent
class.
- Two important reasons for dependency to exist between two classes are the following:
● A method of a class takes an object of another class as an argument.
● A class implements an interface class. In this case, dependency arises due to
the following reason. If some properties of the interface class are changed,
then a change becomes necessary to the class implementing the interface
class as well.
Abstract class
- Classes that are not intended to produce instances of themselves are called abstract classes.
- By using abstract classes, code reuse can be enhanced and the effort required to develop
software brought down.
- For example, in a library automation software Issuable can be an abstract class (see
Figure 7.7) and the concrete classes Book, Journal, and CD are derived from the
Issuable class. The Issuable class may define several generic methods such as issue.
How to Identify Class Relationships?:
✔ Composition
❖B is a permanent part of A
❖A is made up of Bs
❖A is a permanent collection of Bs
✔ Aggregation
❖ B is a part of A
❖ A contains B
❖ A is a collection of Bs
✔ Inheritance
❖ A is a kind of B
❖ A is a specialisation of B
❖ A behaves like B
✔ Association
❖ A delegates to B
❖ A needs help from B
❖ A collaborates with B. Here 'collaborates with' can be any of a large
variety of collaborations that are possible among classes, such as
employs, credits, precedes, succeeds, etc.
Key concepts
Abstraction
- The abstraction mechanism allows us to represent a problem in a simpler way by
considering only those aspects that are relevant to some purpose and omitting all other
details that are irrelevant.
- Abstraction is supported in two different ways in object-oriented design (OOD).
- These are the following:
- Feature abstraction: A class hierarchy can be viewed as defining several levels
(hierarchy) of abstraction, where each class is an abstraction of its subclasses.
- Data abstraction: An object itself can be considered as a data abstraction entity,
because it abstracts out the exact way in which it stores its various private data items
and it merely provides a set of methods to other objects to access and manipulate these
data items.
Encapsulation:
- The data of an object is encapsulated within its methods. To access the data internal to
an object, other objects have to invoke its methods, and cannot directly access the data.
- Data hiding: Encapsulation implies that the internal structure and data of an object are
hidden, so that all interactions with the object are simple and standardised.
- Weak coupling: Since objects do not directly change each other's internal data, they are
weakly coupled.
Polymorphism
- Polymorphism literally means poly (many) morphism (forms). This term, derived from
Greek, has to do with having many shapes.
- In programming languages, it refers to a situation in which an entity could have any one
of several types.
- In an object-oriented situation, a polymorphic variable could refer (at different times) to
objects of several different classes.
- There are two main types of polymorphisms in object-orientation:
- Static polymorphism: Static polymorphism occurs when multiple methods implement
the same operation. In this type of polymorphism, when a method is called (same
method name but different parameter types), different behaviour (actions) would be
observed.
- Dynamic binding: In dynamic binding, the exact method that would be invoked
(bound) on a method call can only be known at run time (dynamically) and cannot
be determined at compile time.
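- A minimal C++ sketch of both kinds (the class and function names are hypothetical,
loosely reusing the Issuable example mentioned earlier):

#include <cstdio>

// Static polymorphism: two methods implement the same operation 'area'
// with different parameter types; the call is resolved at compile time.
double area(double radius) { return 3.14159 * radius * radius; }
double area(double length, double breadth) { return length * breadth; }

// Dynamic binding: the method bound to issue() is known only at run time.
class Issuable {
public:
    virtual void issue() { std::printf("issuing a generic item\n"); }
    virtual ~Issuable() = default;
};

class Book : public Issuable {
public:
    void issue() override { std::printf("issuing a book\n"); }
};

int main() {
    std::printf("%f %f\n", area(2.0), area(2.0, 3.0));  // resolved statically
    Issuable* item = new Book();
    item->issue();    // dynamically bound: prints "issuing a book"
    delete item;
    return 0;
}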
Advantages and Disadvantages of OOD:
Advantages of OOD:
✔ Code and design reuse
✔ Increased productivity
✔ Ease of testing and maintenance
✔ Better code and design understandability enabling development of large
programs
Disadvantages of OOD:
- The principles of abstraction, data hiding, inheritance, etc. do incur run time overhead
due to the additional code that gets generated on account of these features
- An important consequence of object-orientation is that the data that is centralised in a
procedural implementation, gets scattered across various objects in an object-oriented
implementation.
UML
- UML is a modeling language. It provides a set of notations to create models of systems;
models are very useful in documenting the design and analysis results.
- Models also facilitate the generation of analysis and design procedures themselves.
- UML can be used to document object-oriented analysis and design results that have
been obtained using any methodology
Origin
- UML (Unified Modeling Language) is a standard notation for the modeling of real-
world objects as a first step in developing an object-oriented design methodology. Its
notation is derived from and unifies the notations of three object-oriented design and
analysis methodologies:
✔ OMT [Rumbaugh 1991]
✔ Booch’s methodology [Booch 1991]
✔ OOSE [Jacobson 1992]
✔ Odell’s methodology [Odell 1992]
✔ Shlaer and Mellor methodology[Shlaer 1992]
UML DIAGRAMS:
- UML diagrams can capture the following views (models) of a system:
✔ User’s view
✔ Structural view
✔ Behavioural view
✔ Implementation view
✔ Environmental view
- The users’ view is shown as the central view. This is because based on the users’ view,
all other views are developed and all views need to conform to the user’s view.
Users’ view:
- The users’ view captures the view of the system in terms of the functionalities offered
by the system to its users.
- The users’ view can be considered as the central view and all other views are required
to conform to this view.
Structural view:
- The structural model is also called the static model, since the structure of a system
does not change with time.
- The structural view defines the structure of the problem (or the solution) in terms of the
kinds of objects (classes) important to the understanding of the working of a system and
to its implementation.
- It also captures the relationships among the classes (objects).
Behavioural view:
- The behavioural view captures how objects interact with each other in time to realise
the system behaviour. The system behaviour captures the time-dependent (dynamic)
behaviour of the system.
Implementation view:
- This view captures the important components of the system and their
interdependencies. For example, the implementation view might show the GUI part,
the middleware, and the database part as the different parts and would also capture
their interdependencies.
Environmental view:
- This view models how the different components are implemented on different pieces
of hardware.
USE CASE MODEL:
The use cases represent the different ways in which a system can be used by the users.
A Use Case Model describes the proposed functionality of a new system.
A Use Case represents a discrete unit of interaction between a user (human or machine) and
the system. This interaction is a single unit of meaningful work, such as Create Account or
View Account Details.
Each Use Case describes the functionality to be built in the proposed system, which can
include another Use Case's functionality or extend another Use Case with its own behavior.
Representation of Use Cases
- A use case model can be documented by drawing a use case diagram and writing an
accompanying text elaborating the drawing.
- In the use case diagram, each use case is represented by an ellipse with the name of
the use case written inside the ellipse.
- All the ellipses (i.e. use cases) of a system are enclosed within a rectangle which
represents the system boundary.
- The name of the system being modeled (e.g., library information system ) appears
inside the rectangle.
- Example 7.2: The use case model for the Tic-tac-toe game software is shown in
Figure 7.15.
- This software has only one use case, namely, “play move”. Note that we did not name
the use case “get-user-move”, as that would represent the developer's perspective of
the use case.
- The use cases should be named from the users' perspective.
Text description:
- Each ellipse in a use case diagram, by itself conveys very little information, other than
giving a hazy idea about the use case.
- Therefore, every use case diagram should be accompanied by a text description.
- The text description should define the details of the interaction between the user and
the computer as well as other relevant aspects of the use case.
- Contact persons: This section lists the personnel of the client organization with whom
the use case was discussed, the date and time of the meeting, etc.
- Actors: In addition to identifying the actors, some information about the actors using a
use case, which may help the implementation of the use case, may be recorded.
- Pre-condition: The preconditions would describe the state of the system before the
use case execution starts.
- Post-condition: This captures the state of the system after the use case has
successfully completed.
- Non-functional requirements : This could contain the important constraints for the
design and implementation, such as platform and environment conditions, qualitative
statements, response time requirements, etc.
- Exceptions, error situations: This contains only the domain-related errors such as
lack of user’s access rights, invalid entry in the input fields, etc. Obviously, errors that
are not domain related, such as software errors, need not be discussed here.
- Sample dialogs: These serve as examples illustrating the use case.
- Specific user interface requirements : These contain specific requirements for the
user interface of the use case. For example, it may contain forms to be used, screen
shots, interaction style, etc.
- Document references: This part contains references to specific domain-related
documents which may be useful to understand the system operation.
- Example 7.3 The use case diagram of the Super market prize scheme described in
example 6.3 is shown in Figure 7.16.
Why Develop the Use Case Diagram?:
- If you examine a use case diagram, the utility of the use cases represented by the
ellipses would become obvious.
- They along with the accompanying text description serve as a type of requirements
specification of the system and the model based on which all other models are
developed.
How to Identify the Use Cases of a System?:
- Identification of the use cases involves brain storming and reviewing the SRS
document.
- Typically, the high-level requirements specified in the SRS document correspond to the
use cases. In the absence of a well formulated SRS document, a popular method of
identifying the use cases is actor-based.
Essential Use Case versus Real Use Case:
- Essential use cases are created during early requirements elicitation. These are also
early problem analysis artifacts.
- They are independent of the design decisions and tend to be correct over long periods
of time.
- Real use cases describe the functionality of the system in terms of its actual current
design committed to specific input/output technologies.
- Therefore, the real use cases can be developed only after the design decisions have
been made. Real use cases are a design artifact.
Factoring of Commonality among Use Cases:
i). Generalisation:
- Use case generalisation can be used when you have one use case that is similar to
another, but does something slightly differently or something more.
- Generalisation works the same way with use cases as it does with classes.
- The child use case inherits the behavior and meaning of the parent use case.
ii) Includes:
- The includes relationship in the older versions of UML (prior to UML 1.1) was known
as the uses relationship.
- The includes relationship implies one use case includes the behaviour of another use
case in its sequence of events and actions.
- The includes relationship is appropriate when you have a chunk of behaviour that is
similar across a number of use cases.
- As shown in Figure 7.18, the includes relationship is represented using a predefined
stereotype <<include>>.
- In the includes relationship, a base use case compulsorily and automatically includes the
behaviour of the common use case.
- As shown in example Figure 7.19, the use cases issue-book and renew-book both
include check-reservation use case.
iii) Extends:
- The main idea behind the extends relationship among use cases is that it allows you to
show optional system behaviour.
- An optional system behaviour is executed only if certain conditions hold, otherwise the
optional behaviour is not executed.
iv) Organisation:
- When the use cases are factored, they are organised hierarchically. The high-level use
cases are refined into a set of smaller and more refined use cases as shown in Figure
7.21.
- Top-level use cases are super-ordinate to the refined use cases. The refined use cases
are sub-ordinate to the top-level use cases.
CLASS DIAGRAMS:
- A class diagram describes the static structure of a system. It shows how a system is
structured rather than how it behaves.
- The static structure of a system comprises a number of class diagrams and their
dependencies.
- The main constituents of a class diagram are classes and their relationships—
generalisation, aggregation, association, and various kinds of dependencies.
Classes : The classes represent entities with common features, i.e., attributes and
operations. Classes are represented as solid outline rectangles with compartments.
- The class name is usually written using the mixed case convention and begins with an
uppercase letter (e.g., LibraryMember).
- Object names, on the other hand, are also written using the mixed case convention, but
start with a lowercase letter (e.g., studentMember).
Attributes:
- An attribute is a named property of a class. It represents the kind of data that an
object might contain.
- Attributes are listed with their names, and may optionally contain specification of
their type (that is, their class, e.g., Int, Book, Employee, etc.), an initial value, and
constraints.
Association:
- Association between two classes is represented by drawing a straight line between the
concerned classes.
- Figure 7.24 illustrates the graphical representation of the association relation.

- The name of the association is written alongside the association line.
- An arrowhead may be placed on the association line to indicate the reading direction
of the association.
- An asterisk is used as a wild card and means many (zero or more).
- The association of Figure 7.24 should be read as “Many books may be borrowed by a
LibraryMember”.
Aggregation:
- Aggregation is a special type of association relation where the involved classes are
not only associated to each other , but a whole-part relationship exists between them.
- That is, the aggregate object not only knows the addresses of its parts and can therefore
invoke the methods of its parts, but also takes the responsibility of creating and
destroying its parts.
- As an example of aggregation, a book register is an aggregation of book objects. Books
can be added to the register and deleted as and when required.

Composition:
- Composition is a stricter form of aggregation, in which the parts are existence-
dependent on the whole.
- This means that the life of the parts cannot exist outside the whole. In other words, the
lifeline of the whole and the part are identical.

Differences between composition and aggregation
- Both aggregation and composition represent part/whole relationships.
- When the components can dynamically be added and removed from the aggregate, then
the relationship is aggregation.
- If the components cannot be dynamically added/deleted, then the components have
the same lifetime as the composite. In this case, the relationship is represented by
composition.
Inheritance :
- The inheritance relationship is represented by means of an empty arrow pointing from
the subclass to the superclass.
- The arrow may be directly drawn from the subclass to the superclass. Alternatively,
when there are many subclasses of a base class, the inheritance arrow from the
subclasses may be combined to a single line (see Figure 7.27) and is labelled with the
aspect of the class that is abstracted
Dependency:
- A dependency relationship is shown as a dotted arrow (see Figure 7.28) that is drawn
from the dependent class to the independent class.
Constraints :
- A constraint describes a condition or an integrity rule. Constraints are typically used
to describe the permissible set of values of an attribute, to specify the pre- and post-
conditions for operations, to define certain ordering of items, etc.
- For example, to denote that the books in a library are sorted on ISBN number we can
annotate the book class with the constraint {sorted}.
INTERACTION DIAGRAMS:
- Interaction diagrams, as their name itself implies, are models that describe how
groups of objects interact among themselves through message passing to realise some
behaviour.
- There are two kinds of interaction diagrams—sequence diagrams and collaboration
diagrams.
Sequence diagram:
A sequence diagram shows interaction among objects as a two dimensional chart. The chart
is read from top to bottom.
- The objects participating in the interaction are shown at the top of the chart as boxes
attached to a vertical dashed line.
- The vertical dashed line is called the object’s lifeline. Any point on the lifeline implies
that the object exists at that point.
- A rectangle called the activation symbol is drawn on the lifeline of an object to indicate
the points of time at which the object is active. Thus an activation symbol indicates that
an object is active as long as the symbol (rectangle) exists on the lifeline.
- Each message is labelled with the message name. Some control information can also be
included. Two important types of control information are:
- A condition (e.g., [invalid]) indicates that a message is sent, only if the condition is true.
- An iteration marker shows that the message is sent many times to multiple receiver
objects as would happen when you are iterating over a collection or the elements of an
array. You can also indicate the basis of the iteration, e.g., [for every book object].
Collaboration diagram:
- A collaboration diagram shows both structural and behavioural aspects explicitly.
- This is unlike a sequence diagram which shows only the behavioural aspects.
- The structural aspect of a collaboration diagram consists of objects and links among
them indicating association
- The link between objects is shown as a solid line and can be used to send messages
between two objects. The message is shown as a labelled arrow placed near the link.
ACTIVITY DIAGRAM:
- The activity diagram is possibly one modelling element which was not present in any
of the predecessors of UML.
- An activity is a state with an internal action and one or more outgoing transitions
which automatically follow the termination of the internal activity.
- Activity diagrams are similar to the procedural flow charts. The main difference is
that activity diagrams support description of parallel activities and synchronization
aspects involved in different activities.
- Parallel activities are represented on an activity diagram by using swim lanes.
- Swim lanes enable you to group activities based on who is performing them, e.g.,
academic department vs. hostel office.
- Thus swim lanes subdivide activities based on the responsibilities of some
components.
- The student admission process in IIT is shown as an activity diagram in Figure 7.32.
This shows the part played by different components of the Institute in the admission
procedure.
- After the fees are received at the account section, parallel activities start at the hostel
office, hospital, and the Department.
- After all these activities complete (this is a synchronisation issue and is represented as
a horizontal line), the identity card can be issued to a student by the Academic
section.
STATE CHART DIAGRAM:
- A state chart diagram is normally used to model how the state of an object changes in
its life time.
- State chart diagrams are good at describing how the behaviour of an object changes
across several use case executions.
- State chart diagrams are based on the finite state machine (FSM) formalism.
- An FSM consists of a finite number of states corresponding to those of the object
being modelled.
- Why state chart?
- A major disadvantage of the FSM formalism is the state explosion problem. The
number of states becomes too many and the model too complex when used to model
practical systems.
- This problem is overcome in UML by using state charts. The state chart formalism
was proposed by David Harel [1990].
- A state chart is a hierarchical model of a system and introduces the concept of a
composite state (also called nested state ).
Basic elements of a state chart:
- The basic elements of the state chart diagram are as follows:
- Initial state: This is represented as a filled circle.
- Final state: This is represented by a filled circle inside a larger circle.
- State: These are represented by rectangles with rounded corners.
- Transition: A transition is shown as an arrow between two states.
UNIT-V
Coding and Testing
● Coding is undertaken once the design phase is complete and the design documents
have been successfully reviewed
● After all the modules of a system have been coded and unit tested, the integration and
system testing phase is undertaken.
CODING:
● Coding is the process of transforming the design of a system into a computer
language format. The coding phase of software development is concerned with
translating the design specification into source code. Coding is done by coders or
programmers, who may be people other than the designers.
● The objective of the coding phase is to transform the design of a system into code in a
high-level language, and then to unit test this code.
● The input to the coding phase is the design document produced at the end of the
design phase.
● Please recollect that the design document contains not only the high-level design of
the system in the form of a module structure (e.g., a structure chart), but also the
detailed design.
● The main advantages of adhering to a standard style of coding are the following:
o A coding standard gives a uniform appearance to the codes written by
different engineers.
o It facilitates code understanding and code reuse.
o It promotes good programming practices.
● After a module has been coded, usually code review is carried out to ensure that the
coding standards are followed and also to detect as many errors as possible before
testing.
1. Coding Standards and Guidelines :
● Good software development organizations usually develop their own coding standards
and guidelines depending on what suits their organization best and based on the
specific types of software they develop.
● To give an idea about the types of coding standards that are being used, some
representative standards are listed below.
Representative coding standards:
o Rules for limiting the use of globals: These rules list what types of data can
be declared global and what cannot, with a view to limit the data that needs to
be defined with global scope.
o Standard headers for different modules: The header of different modules
should have a standard format and information for ease of understanding and
maintenance. The following is an example of a header format that is being used
in some companies (a sample header comment is sketched after this list of standards):
✔ Name of the module.
✔ Date on which the module was created.
✔ Author's name.
✔ Modification history.
✔ Synopsis of the module. This is a small write-up about what the module does.
✔ Different functions supported in the module, along with their input/output
parameters.
✔ Global variables accessed/modified by the module
o Naming conventions for global variables, local variables, and constant
identifiers: A popular naming convention is that variables are named using mixed case
lettering. Global variable names would always start with a capital letter (e.g., GlobalData)
and local variable names start with small letters (e.g., localData). Constant names should
be formed using capital letters only (e.g., CONSTDATA).
o Conventions regarding error return values and exception handling
mechanisms: The way error conditions are reported by different functions in a program
should be standard within an organisation. For example, all functions while encountering
an error condition should either return a 0 or 1 consistently
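o As an illustration, a module header following a format like the one above might look as
follows (a hypothetical sketch, not any organisation's actual standard):

/*
 * Module       : member_records
 * Created on   : 12 June 2014
 * Author       : A. Programmer
 * Modification : 20 June 2014 - update operation added
 * Synopsis     : Maintains the records of library members.
 * Functions    : createMember(number, name), deleteMember(number)
 * Globals      : MemberTable (read/write)
 */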
o Representative coding guidelines: The following are some representative coding
guidelines that are recommended by many software development organisations. Wherever
necessary, the rationale behind these guidelines is also mentioned.
o Do not use a coding style that is too clever or too difficult to understand:
Code should be easy to understand. Many inexperienced engineers actually take pride in
writing cryptic and incomprehensible code.
o Avoid obscure side effects: The side effects of a function call include
modifications to the parameters passed by reference, modification of global variables, and
I/O operations. An obscure side effect is one that is not obvious from a casual
examination of the code.
o Do not use an identifier for multiple purposes: Programmers often use the same
identifier to denote several temporary entities. For example, some programmers make use
of a temporary loop variable for also computing and storing the final result. The rationale
that they give for such multiple use of variables is memory efficiency, e.g., three variables
use up three memory locations, whereas when the same variable is used for three different
purposes, only one memory location is used.
o Some of the problems caused by the use of a variable for multiple purposes are as
follows:
✔ Each variable should be given a descriptive name indicating its purpose. This is
not possible if an identifier is used for multiple purposes. Use of a variable for
multiple purposes can lead to confusion and make it difficult for somebody
trying to read and understand the code.
✔ Use of variables for multiple purposes usually makes future enhancements
more difficult. For example, while changing the final computed result from
integer to float type, the programmer might subsequently notice that it has also
been used as a temporary loop variable that cannot be a float type.
o Code should be well-documented: As a rule of thumb, there should be at least one
comment line on the average for every three source lines of code.
o Length of any function should not exceed 10 source lines: A lengthy function is
usually very difficult to understand as it probably has a large number of variables and
carries out many different types of computations. For the same reason, lengthy
functions are likely to have a disproportionately larger number of bugs.
o Do not use GO TO statements: Use of GO TO statements makes a program
unstructured. This makes the program very difficult to understand, debug, and
maintain.
TESTING :
● The aim of program testing is to help identify all the defects in a program.
● However , in practice, even after satisfactory completion of the testing phase, it is not
possible to guarantee that a program is error free.
● This is because the input data domain of most programs is very large, and it is not
practical to test the program exhaustively with respect to each value that the input can
assume.
● Consider a function taking a floating point number as argument.
● If a tester takes 1 second to type in a value, then even a million testers would not be
able to exhaustively test it even after trying for a million years.
1. Basic Concepts and Terminologies:
i).How to test a program? :
- Testing a program involves executing the program with a set of test inputs and
observing if the program behaves as expected.
- If the program fails to behave as expected, then the input data and the conditions under
which it fails are noted for later debugging and error correction.
ii).Terminologies:
✔ A mistake is essentially any programmer action that later shows up as an incorrect
result during program execution. A programmer may commit a mistake in almost any
development activity.
✔ For example, a programmer may commit a mistake that causes division by zero in an
arithmetic operation. Such mistakes can lead to an incorrect result.
✔ An error is the result of a mistake committed by a developer in any of the
development activities. Among the extremely large variety of errors that can exist in a
program, one example of an error is a call made to a wrong function.
✔ A failure of a program essentially denotes an incorrect behaviour exhibited by the
program during its execution. An incorrect behaviour is observed either as an
incorrect result produced or as an inappropriate activity carried out by the program.
Every failure is caused by some bugs present in the program.
✔ In the following, we give three randomly selected examples of failures:
o – The result computed by a program is 0, when the correct result is 10.
o – A program crashes on an input.
o – A robot fails to avoid an obstacle and collides with it.
✔ A test case is a triplet [I , S, R], where I is the data input to the program under test, S
is the state of the program at which the data is to be input, and R is the result expected
to be produced by the program. The state of a program is also called its execution
mode.
✔ A test scenario is an abstract test case in the sense that it only identifies the aspects of
the program that are to be tested without identifying the input, state, or output. A test
case can be said to be an implementation of a test scenario. In the test case, the input,
output, and the state at which the input would be applied is designed such that the
scenario can be executed.
✔ A test script is an encoding of a test case as a short program. Test scripts are
developed for automated execution of the test cases.
✔ A test case is said to be a positive test case if it is designed to test whether the
software correctly performs a required functionality.
✔ A test case is said to be a negative test case, if it is designed to test whether the
software carries out something that is not required of the system.
✔ A test suite is the set of all the tests that have been designed by a tester to test a given
program.
✔ Testability of a requirement denotes the extent to which it is possible to determine
whether an implementation of the requirement conforms to it in both functionality and
performance.
✔ A failure mode of a software denotes an observable way in which it can fail. In other
words, all failures that have similar observable symptoms, constitute a failure mode.
As an example of the failure modes of a software, consider a railway ticket booking
software that has three failure modes—failing to book an available seat, incorrect seat
booking (e.g., booking an already booked seat), and system crash.
✔ Equivalent faults denote two or more bugs that result in the system failing in the
same failure mode. As an example of equivalent faults, consider the following two
faults in C language—division by zero and illegal memory access errors. These two
are equivalent faults, since each of these leads to a program crash.
Verification versus validation:
● The objectives of both verification and validation techniques are very similar since
both these techniques are designed to help remove errors in a software.
● Verification is the process of determining whether the output of one phase of
software development conforms to that of its previous phase; whereas validation is
the process of determining whether a fully developed software conforms to its
requirements specification.
● The primary techniques used for verification include review, simulation, formal
verification, and testing. Review, simulation, and testing are usually considered as
informal verification techniques. Formal verification usually involves use of theorem
proving techniques or use of automated tools such as a model checker . On the other
hand, validation techniques are primarily based on product testing.
● Verification does not require execution of the software, whereas validation requires
execution of the software.
● Verification is carried out during the development process to check if the
development activities are proceeding alright, whereas validation is carried out to
check whether the right software, as required by the customer, has been developed.
● Verification techniques can be viewed as an attempt to achieve phase containment of
errors. Phase containment of errors has been acknowledged to be a cost-effective way
to eliminate program bugs, and is an important software engineering principle.
● While verification is concerned with phase containment of errors, the aim of
validation is to check whether the deliverable software is error free.
2 . Testing Activities:
✔ Test suite design: The set of test cases using which a program is to be tested is
designed possibly using several test case design techniques.
✔ Running test cases and checking the results to detect failures: Each test case is run
and the results are compared with the expected results. A mismatch between the
actual result and expected results indicates a failure. The test cases for which the
system fails are noted down for later debugging.
✔ Locate error: In this activity, the failure symptoms are analysed to locate the errors.
For each failure observed during the previous activity, the statements that are in error
are identified.
✔ Error correction: After the error is located during debugging, the code is
appropriately changed to correct the error.
3. Why Design Test Cases?:
✔ When test cases are designed based on random input data, many of the test cases do
not contribute to the significance of the test suite. That is, they do not help detect any
additional defects not already being detected by other test cases in the suite.
✔ Testing a software using a large collection of randomly selected test cases does not
guarantee that all (or even most) of the errors in the system will be uncovered.
✔ Let us try to understand why the number of random test cases in a test suite, in
general, is not indicative of the effectiveness of testing. Consider the following
example code segment which determines the greater of two integer values x and y.
✔ This code segment has a simple programming error:
if (x>y) max = x;
else max = x;
✔ For the given code segment, the test suite {(x=3,y=2);(x=2,y=3)} can detect the error ,
whereas a larger test suite {(x=3,y=2);(x=4,y=3); (x=5,y=1)} does not detect the
error.
✔ To satisfactorily test a software with minimum cost, we must design a minimal test
suite that is of reasonable size and can uncover as many existing errors in the system
as possible.
✔ To reduce testing cost and at the same time to make testing more effective, systematic
approaches have been developed to design a small test suite that can detect most, if
not all failures.
✔ A minimal test suite is a carefully designed set of test cases such that each test case
helps detect different errors. This is in contrast to testing using some random input
values.
✔ There are essentially two main approaches to systematically design test cases:
▪ Black-box approach
▪ White-box (or glass-box) approach
✔ In the black-box approach, test cases are designed using only the functional
specification of the software. That is, test cases are designed solely based on an
analysis of the input/output behaviour (that is, functional behaviour) and do not
require any knowledge of the internal structure of a program.
✔ For this reason, black-box testing is also known as functional testing.
✔ Designing white-box test cases requires a thorough knowledge of the internal structure
of a program, and therefore white-box testing is also called structural testing.
✔ Black- box test cases are designed solely based on the input-output behaviour of a
program. In contrast, white-box test cases are based on an analysis of the code.
4 .Testing in the Large versus Testing in the Small:
⮚ A software product is normally tested in three levels or stages:
1. Unit testing
2. Integration testing
3. System testing
✔ During unit testing, the individual functions (or units) of a program are tested.
✔ Unit testing is referred to as testing in the small, whereas integration and system
testing are referred to as testing in the large.
✔ After testing all the units individually, the units are slowly integrated and tested after
each step of integration (integration testing). Finally, the fully integrated system is
tested (system testing). Integration and system testing are known as testing in the
large.
UNIT TESTING:
✔ Unit testing is undertaken after a module has been coded and reviewed.
✔ Unit testing is a level of software testing where the individual units/components of a
software product are tested.
1.Driver and stub modules:
✔ In order to test a single module, we need a complete environment to provide all
relevant code that is necessary for execution of the module. That is, besides the
module under test, the following are needed to test the module:
⮚ The procedures belonging to other modules that the module under test calls.
⮚ Non-local data structures that the module accesses.
⮚ A procedure to call the functions of the module under test with appropriate
parameters.
✔ Stub: The role of stub and driver modules is pictorially shown in Figure 10.3. A stub
procedure is a dummy procedure that has the same I/O parameters as the function
called by the unit under test, but has a highly simplified behaviour.

✔ Driver: A driver module should contain the non-local data structures accessed by the
module under test. Additionally, it should also have the code to call the different
functions of the unit under test with appropriate parameter values for testing.
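The following is a minimal C sketch of these two roles (the module names getRate and
computeInterest are hypothetical, chosen only for illustration). The stub stands in for a
routine of another, not-yet-integrated module; the driver sets up the data and calls the
unit under test with chosen parameter values:

#include <stdio.h>

/* Stub: dummy replacement for a routine belonging to another module.
   It has the same interface as the real getRate(), but a trivially
   simplified behaviour. */
double getRate(int accountType) {
    (void)accountType;   /* the input is simply ignored */
    return 0.05;         /* always return a fixed, known rate */
}

/* Unit under test: in the real system this would call the real getRate(). */
double computeInterest(double principal, int accountType) {
    return principal * getRate(accountType);
}

/* Driver: calls the unit under test with chosen parameter values and
   checks the result against the expected value. */
int main(void) {
    double result = computeInterest(1000.0, 1);
    double expected = 50.0;
    printf("interest = %.2f (expected %.2f)\n", result, expected);
    return (result > expected - 0.001 && result < expected + 0.001) ? 0 : 1;
}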
BLACK-BOX TESTING:
✔ In black-box testing, test cases are designed from an examination of the input/output
values only and no knowledge of design or code is required.
✔ Black-box testing, also known as behavioural testing, is a software testing method in
which the internal structure, design, and implementation of the item being tested are
not known to the tester.

This method attempts to find errors in the following categories:
● Incorrect or missing functions
● Interface errors
● Errors in data structures or external database access
● Behavior or performance errors
● Initialization and termination errors
✔ The following are the two main approaches available to design black-box test cases:
⮚ Equivalence class partitioning
⮚ Boundary value analysis
1. Equivalence Class Partitioning:
✔ In the equivalence class partitioning approach, the domain of input values to the
program under test is partitioned into a set of equivalence classes.
✔ The partitioning is done such that for every input data belonging to the same
equivalence class, the program behaves similarly.
✔ The main idea behind defining equivalence classes of input data is that testing the
code with any one value belonging to an equivalence class is as good as testing the
code with any other value belonging to the same equivalence class.
✔ The following are two general guidelines for designing the equivalence classes:
1. If the input data values to a system can be specified by a range of values, then
one valid and two invalid equivalence classes need to be defined. For example,
if the equivalence class is the set of integers in the range 1 to 10 (i.e., [1,10]),
then the invalid equivalence classes are [−∞,0], [11,+∞].
2. If the input data assumes values from a set of discrete members of some
domain, then one equivalence class for the valid input values and another
equivalence class for the invalid input values should be defined. For example,
if the valid equivalence class is {A,B,C}, then the invalid equivalence class
is U − {A,B,C}, where U is the universe of possible input values.
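As a minimal sketch of guideline 1 (the function name isValidCount is a hypothetical
unit assumed for illustration), consider a unit that accepts an integer in the range 1 to
10. One representative value is picked from the valid class and from each invalid class:

#include <stdio.h>

/* Hypothetical unit under test: accepts only integers in [1,10]. */
int isValidCount(int n) { return n >= 1 && n <= 10; }

int main(void) {
    /* One representative per equivalence class:
       valid class [1,10]        -> 5
       invalid class [-inf, 0]   -> -3
       invalid class [11, +inf]  -> 20 */
    int reps[]     = { 5, -3, 20 };
    int expected[] = { 1,  0,  0 };
    for (int i = 0; i < 3; i++)
        printf("input %3d: got %d, expected %d\n",
               reps[i], isValidCount(reps[i]), expected[i]);
    return 0;
}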
2. Boundary Value Analysis:
✔ A type of programming error that is frequently committed by programmers is missing
out on the special consideration that should be given to the values at the boundaries of
different equivalence classes of inputs.
✔ For example, programmers may improperly use < instead of <=, or conversely <= for
<, etc.
✔ Boundary value analysis-based test suite design involves designing test cases using
the values at the boundaries of different equivalence classes.
✔ To design boundary value test cases, it is required to examine the equivalence classes
to check if any of the equivalence classes contains a range of values.
✔ For example, if an equivalence class contains the integers in the range 1 to 10, then
the boundary value test suite is {0,1,10,11}.
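Continuing the hypothetical isValidCount sketch from the previous subsection, the
boundary value test suite for the class [1,10] exercises the values lying exactly on and
just beyond each boundary:

#include <stdio.h>

/* Same hypothetical unit as before: accepts only integers in [1,10]. */
int isValidCount(int n) { return n >= 1 && n <= 10; }

int main(void) {
    /* Boundary value test cases: 1 and 10 lie exactly on the
       boundaries; 0 and 11 lie just outside them. */
    int inputs[]   = { 0, 1, 10, 11 };
    int expected[] = { 0, 1,  1,  0 };
    for (int i = 0; i < 4; i++)
        printf("input %2d: got %d, expected %d\n",
               inputs[i], isValidCount(inputs[i]), expected[i]);
    return 0;
}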
3. Summary of the Black-box Test Suite Design Approach:
⮚ We now summarise the important steps in the black-box test suite design approach:
❖ Examine the input and output values of the program.
❖ Identify the equivalence classes.
❖ Design equivalence class test cases by picking one representative value from
each equivalence class.
❖ Design the boundary value test cases as follows. Examine if any equivalence
class is a range of values; if so, include the values at the boundaries of the
range (and the values just beyond them) in the test suite.
WHITE-BOX TESTING:
✔ White-box testing is an important type of unit testing. A large number of white-box
testing strategies exist.
✔ Each testing strategy essentially designs test cases based on analysis of some aspect of
source code and is based on some heuristic.
✔ White-box testing is defined as the testing of a software solution's internal structure,
design, and coding. In this type of testing, the code is visible to the tester.
1. Basic Concepts:
A white-box testing strategy can either be coverage-based or fault-based.
✔ Fault-based testing: A fault-based testing strategy aims to detect certain types of
faults. The faults that a test strategy focuses on constitute the fault model of the
strategy. An example of a fault-based strategy is mutation testing.
✔ Coverage-based testing: A coverage-based testing strategy attempts to execute (or
cover) certain elements of a program. Popular examples of coverage-based testing
strategies are statement coverage, branch coverage, multiple condition coverage, and
path coverage-based testing.
✔ Testing criterion for coverage-based testing: A coverage-based testing strategy
typically targets to execute (i.e., cover) certain program elements for discovering
failures.
✔ Stronger versus weaker testing:
- The concepts of stronger , weaker , and complementary testing are
schematically illustrated in Figure 10.6.
- Observe in Figure 10.6(a) that testing strategy A is stronger than B, since B
covers only a proper subset of the elements covered by A.
- On the other hand, Figure 10.6(b) shows A and B are complementary testing
strategies since some elements of A are not covered by B and vice versa.
- If a stronger testing has been performed, then a weaker testing need not be
carried out.
2. Statement Coverage:
✔ The principal idea governing the statement coverage strategy is that unless a statement
is executed, there is no way to determine whether an error exists in that statement.
✔ It is obvious that without executing a statement, it is difficult to determine whether it
causes a failure due to illegal memory access, wrong result computation due to
improper arithmetic operation, etc.
✔ It can however be pointed out that a weakness of the statement- coverage strategy is
that executing a statement once and observing that it behaves properly for one input
value is no guarantee that it will behave correctly for all input values.
✔ Example 10.11: Design a statement coverage-based test suite for the following
Euclid's GCD computation program:
int computeGCD(int x, int y)
{
1   while (x != y) {
2       if (x > y)
3           x = x - y;
4       else y = y - x;
5   }
6   return x;
}
✔ To design the test cases for the statement coverage, the conditional expression of the
while statement needs to be made true and the conditional expression of the if
statement needs to be made both true and false. By choosing the test set {(x = 3, y =
3), (x = 4, y = 3), (x = 3, y = 4)}, all statements of the program would be executed at
least once.
3. Branch Coverage:
✔ A test suite satisfies branch coverage, if it makes each branch condition in the
program to assume true and false values in turn.
✔ In other words, for branch coverage each branch in the CFG representation of the
program must be taken at least once, when the test suite is executed.
✔ Branch testing is also known as edge testing, since in this testing scheme, each edge
of a program’s control flow graph is traversed at least once.
Example 10.12 For the program of Example 10.11, determine a test suite to achieve
branch coverage.
Answer: The test suite {(x = 3, y = 3), (x = 3, y = 2), (x = 4, y = 3), (x = 3, y = 4)}
achieves branch coverage.
4. Multiple Condition Coverage:
✔ In the multiple condition (MC) coverage-based testing, test cases are designed to
make each component of a composite conditional expression to assume both true and
false values.
✔ For example, consider the composite conditional expression ((c1 .and.c2 ).or.c3).
✔ A test suite would achieve MC coverage, if all the component conditions c1, c2 and
c3 are each made to assume both true and false values.
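As a small sketch in C syntax (the particular truth-value assignments below are one
possible choice, not the only one), two test cases already achieve MC coverage for this
condition, since each component assumes both values across the suite:

#include <stdio.h>

int main(void) {
    /* Each row is one test case: truth values for (c1, c2, c3).
       Across the two cases, every component assumes both
       true (1) and false (0), achieving MC coverage. */
    int cases[][3] = { {1, 1, 0}, {0, 0, 1} };
    for (int i = 0; i < 2; i++) {
        int c1 = cases[i][0], c2 = cases[i][1], c3 = cases[i][2];
        printf("case %d: ((c1 && c2) || c3) = %d\n",
               i + 1, (c1 && c2) || c3);
    }
    return 0;
}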
5. Path Coverage:
✔ A test suite achieves path coverage if it executes each linearly independent path (or
basis path) at least once.
✔ A linearly independent path can be defined in terms of the control flow graph (CFG)
of a program.
Control flow graph (CFG):
- A control flow graph describes the sequence in which the different instructions of
a program get executed.
- In order to draw the control flow graph of a program, we need to first number all
the statements of a program. The different numbered statements serve as nodes of
the control flow graph (see Figure 10.5).

- We can define a CFG as follows. A CFG is a directed graph consisting of a set of
nodes and a set of edges (N, E), such that each node n ∈ N corresponds to a unique
program statement, and an edge exists between two nodes if control can transfer
from one node to the other.
Path:
- A path through a program is any node and edge sequence from the start node
to a terminal node of the control flow graph of a program.
- Please note that a program can have more than one terminal node when it
contains multiple exit or return type statements.
- Writing test cases to cover all paths of a typical program is impractical since there
can be an infinite number of paths through a program in presence of loops.

Linearly independent set of paths (or basis path set):
- A set of paths for a given program is called linearly independent set of paths (or
the set of basis paths or simply the basis set), if each path in the set introduces at
least one new edge that is not included in any other path in the set.
- If a set of paths is linearly independent of each other , then no path in the set can
be obtained through any linear operations (i.e., additions or subtractions) on the
other paths in the set.
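A standard supplementary result, not stated in the text above but widely used with it
(McCabe's cyclomatic complexity), gives the size of the basis set directly from the
CFG: V(G) = E − N + 2, where E is the number of edges and N is the number of nodes
of the CFG. For the computeGCD program above, N = 6 and E = 7 (two edges leave
each of the decision nodes 1 and 2), so V(G) = 7 − 6 + 2 = 3; that is, three linearly
independent paths are sufficient for path coverage.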
6. Data Flow-based Testing:
✔ The data flow-based testing method selects test paths of a program according to the
definitions and uses of the different variables in the program.
✔ Data flow testing is a family of test strategies based on selecting paths through the
program's control flow in order to explore sequences of events related to the status of
variables or data objects.
✔ Dataflow Testing focuses on the points at which variables receive values and the
points at which these values are used.
✔ Data Flow Testing uses the control flow graph to find the situations that can interrupt
the flow of the program.
✔ Define/reference anomalies in the flow of data are detected by examining the
associations between the points where variables receive values and the points where
those values are used.
✔ These anomalies are:
o A variable is defined but not used or referenced,
o A variable is used but never defined,
o A variable is defined twice before it is used
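These three anomalies can all be seen in the following small, deliberately faulty C
fragment (the variable names are our own, chosen for illustration):

void anomalies(void) {
    int a, b, c;
    a = 1;       /* 'a' is defined here ...                          */
    a = 2;       /* ... and defined again before its first value is
                    ever used: a define-define anomaly               */
    c = b + 5;   /* 'b' is used here but was never defined           */
    /* 'c' is defined but never used or referenced afterwards        */
}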
Advantages of Data Flow Testing:
Data Flow Testing is used to find the following issues-
✔ To find a variable that is used but never defined,
✔ To find a variable that is defined but never used,
✔ To find a variable that is defined multiple times before it is used,
✔ To find a variable that is deallocated before it is used.
Disadvantages of Data Flow Testing
✔ Time consuming and costly process
✔ Requires knowledge of programming languages
7. Mutation Testing:
✔ Mutation Testing is a type of software testing where we mutate (change) certain
statements in the source code and check if the test cases are able to find the errors.
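As an illustrative sketch reusing the earlier max computation (the names maxOriginal
and maxMutant are our own), a mutant can be produced by changing a single relational
operator; a test suite is considered adequate only if some test case 'kills' the mutant,
i.e., produces different results on the original and the mutant:

#include <stdio.h>

int maxOriginal(int x, int y) { return (x > y) ? x : y; }
int maxMutant(int x, int y)   { return (x < y) ? x : y; } /* '>' mutated to '<' */

int main(void) {
    /* (x=3, y=2) kills the mutant: the two versions disagree (3 vs. 2). */
    printf("original: %d, mutant: %d\n", maxOriginal(3, 2), maxMutant(3, 2));
    /* (x=3, y=3) does not kill it: both versions return 3.              */
    printf("original: %d, mutant: %d\n", maxOriginal(3, 3), maxMutant(3, 3));
    return 0;
}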

DEBUGGING:
✔ After a failure has been detected, it is necessary to first identify the program
statement(s) that are in error and are responsible for the failure; the error can then be
fixed.
1. Debugging Approaches:
I).Brute force method:
✔ This is the most common method of debugging but is the least efficient method.
✔ In this approach, print statements are inserted throughout the program to print the
intermediate values with the hope that some of the printed values will help to identify
the statement in error.
✔ This approach becomes more systematic with the use of a symbolic debugger (also
called a source code debugger), because the values of different variables can be easily
checked, and breakpoints and watchpoints can be set to test the values of variables
effortlessly.
II).Backtracking:
✔ This is also a fairly common approach. In this approach, starting from the
statement at which an error symptom has been observed, the source code is traced
backwards until the error is discovered.
✔ Unfortunately, as the number of source lines to be traced back increases, the
number of potential backward paths increases and may become unmanageably
large for complex programs, limiting the use of this approach.
III).Cause elimination method:
✔ In this approach, once a failure is observed, the symptoms of the failure (e.g., a
certain variable having a negative value though it should be positive) are noted.
✔ Based on the failure symptoms, a list of causes that could possibly have contributed
to the symptom is developed, and tests are conducted to eliminate each.
✔ A related technique of identification of the error from the error symptom is the
software fault tree analysis.
IV).Program slicing:
✔ This technique is similar to backtracking. In the backtracking approach, one often
has to examine a large number of statements.
✔ In program slicing, however, the search space is reduced by defining slices.
✔ A slice of a program for a particular variable at a particular statement is the set of
source lines preceding this statement that can influence the value of that variable.
✔ Program slicing makes use of the fact that an error in the value of a variable can
be caused by the statements on which it is data dependent.
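A tiny hypothetical C fragment illustrates the idea (statement numbers appear in
comments, in the style of the numbered GCD example earlier). To debug the value of s
printed at statement 6, only the statements in its slice need to be examined:

#include <stdio.h>

int main(void) {
/* 1 */ int a = 10;
/* 2 */ int s = 0;
/* 3 */ int p = 1;
/* 4 */ s = s + a;          /* influences the value of s          */
/* 5 */ p = p * a;          /* does not influence the value of s  */
/* 6 */ printf("%d\n", s);  /* slice for (s, statement 6) is the
                               set of statements {1, 2, 4, 6}     */
        (void)p;
        return 0;
}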
2. Debugging Guidelines:
❖ The following are some general guidelines for effective debugging:
✔ Many times debugging requires a thorough understanding of the program
design. Trying to debug based on a partial understanding of the program
design may require an inordinate amount of effort to be put into debugging
even for simple problems.
✔ Debugging may sometimes even require a full redesign of the system. In such
cases, a common mistake that novice programmers make is attempting to fix
not the error itself but only its symptoms.
✔ One must beware of the possibility that an error correction may introduce
new errors. Therefore, after every round of error fixing, regression testing
must be carried out.
INTEGRATION TESTING:
✔ Integration testing is carried out after all (or at least some of ) the modules have been
unit tested.
✔ Successful completion of unit testing, to a large extent, ensures that the unit (or
module) as a whole works satisfactorily.
✔ In this context, the objective of integration testing is to detect the errors at the module
interfaces (call parameters).
✔ For example, it is checked that no parameter mismatch occurs when one module
invokes the functionality of another module.
✔ The objective of integration testing is to check whether the different modules of a
program interface with each other properly.
✔ During integration testing, different modules of a system are integrated in a planned
manner using an integration plan.
✔ The integration plan specifies the steps and the order in which modules are combined
to realise the full system.
✔ After each integration step, the partially integrated system is tested.
✔ Any one (or a mixture) of the following approaches can be used to develop the test
plan:
▪ Big-bang approach to integration testing
▪ Top-down approach to integration testing
▪ Bottom-up approach to integration testing
▪ Mixed (also called sandwiched ) approach to integration testing
I).Big-bang approach to integration testing:
- Big-bang testing is the most obvious approach to integration testing.
- In this approach, all the modules making up a system are integrated in a single step.
- In simple words, all the unit tested modules of the system are simply linked together
and tested.
- However , this technique can meaningfully be used only for very small systems.
- The main problem with this approach is that once a failure has been detected during
integration testing, it is very difficult to localise the error as the error may potentially
lie in any of the modules.
II).Bottom-up approach to integration testing:
- Large software products are often made up of several subsystems.
- A subsystem might consist of many modules which communicate among each
other through well-defined interfaces.
- In bottom-up integration testing, first the modules for each subsystem are
integrated.
- The primary purpose of carrying out the integration testing of a subsystem is to
test whether the interfaces among the various modules making up the subsystem
work satisfactorily.
- The principal advantage of bottom-up integration testing is that several disjoint
subsystems can be tested simultaneously.
- Another advantage of bottom-up testing is that the low-level modules get tested
thoroughly, since they are exercised in each integration step.
- A disadvantage of bottom-up testing is the complexity that occurs when the
system is made up of a large number of small subsystems that are at the same
level.
III). Top-down approach to integration testing:
- Top-down integration testing starts with the root module in the structure chart and one
or two subordinate modules of the root module.
- After the top-level ‘skeleton’ has been tested, the modules that are at the immediately
lower layer of the ‘skeleton’ are combined with it and tested.
- Top-down integration testing approach requires the use of program stubs to simulate
the effect of lower-level routines that are called by the routines under test.
- An advantage of top-down integration testing is that it requires writing only stubs, and
stubs are simpler to write compared to drivers.
- A disadvantage of the top-down integration testing approach is that in the absence of
lower-level routines, it becomes difficult to exercise the top-level routines in the
desired manner since the lower level routines usually perform input/output (I/O)
operations.
IV). Mixed approach to integration testing:
- The mixed (also called sandwiched ) integration testing follows a combination of
top-down and bottom-up testing approaches.
- In the top-down approach, testing can start only after the top-level modules have
been coded and unit tested.
- Similarly, bottom-up testing can start only after the bottom level modules are
ready.
- The mixed approach overcomes this shortcoming of the top-down and bottom-up
approaches.
- In the mixed testing approach, testing can start as and when modules become
available after unit testing.
1. Phased versus Incremental Integration Testing:
- Big-bang integration testing is carried out in a single step of integration.
- In contrast, in the other strategies, integration is carried out over several steps.
- In these later strategies, modules can be integrated either in a phased or incremental
manner .
- A comparison of these two strategies is as follows:
✔ In incremental integration testing, only one new module is added to the
partially integrated system each time.
✔ In phased integration, a group of related modules are added to the partial
system each time.
SYSTEM TESTING:
✔ After all the units of a program have been integrated together and tested, system testing
is taken up.
✔ System tests are designed to validate a fully developed system to assure that it meets
its requirements. The test cases are therefore designed solely based on the SRS
document.
✔ There are essentially three main kinds of system testing depending on who carries out
testing:
1. Alpha Testing: Alpha testing refers to the system testing carried out by the test
team within the developing organisation.
2. Beta Testing: Beta testing is the system testing performed by a select group of
friendly customers.
3. Acceptance Testing: Acceptance testing is the system testing performed by the
customer to determine whether to accept the delivery of the system.
1. Smoke Testing:
- Smoke testing is carried out before initiating system testing, to check whether system
testing would be meaningful or whether many parts of the software would fail outright.
- The idea behind smoke testing is that if the integrated program cannot pass even these
basic tests, it is not ready for vigorous testing.
- For smoke testing, a few test cases are designed to check whether the basic
functionalities are working.
- For example, for a library automation system, the smoke tests may check whether
books can be created and deleted, whether member records can be created and
deleted, and whether books can be loaned and returned.
2. Performance Testing:
- Performance testing is carried out to check whether the system meets the nonfunctional
requirements identified in the SRS document.
- There are several types of performance testing, corresponding to the various types of
non-functional requirements.

I). Stress testing:
- Stress testing is also known as endurance testing.
- Stress testing evaluates system performance when it is stressed for short periods of
time.
- Stress tests are black-box tests which are designed to impose a range of abnormal
and even illegal input conditions so as to stress the capabilities of the software.
- Input data volume, input data rate, processing time, utilisation of memory, etc., are
tested beyond the designed capacity.
II). Volume testing:
- Volume testing checks whether the data structures (buffers, arrays, queues, stacks,
etc.) have been designed to successfully handle extraordinary situations.
- For example, the volume testing for a compiler might be to check whether the
symbol table overflows when a very large program is compiled.
III). Configuration testing:
- Configuration testing is used to test system behaviour in various hardware and
software configurations specified in the requirements.
- Sometimes systems are built to work in different configurations for different users.
- For instance, a minimal system might be required to serve a single user, and other
extended configurations may be required to serve additional users; during
configuration testing, the system behaviour is checked in each required configuration.
IV). Compatibility testing:
- This type of testing is required when the system interfaces with external systems (e.g.,
databases, servers, etc.).
- Compatibility testing aims to check whether the interfaces with the external systems
are performing as required.
- For instance, if the system needs to communicate with a large database system to
retrieve information, compatibility testing is required to test the speed and accuracy of
data retrieval.
V). Regression testing:
- This type of testing is required when a software is maintained to fix some bugs or
enhance functionality, performance, etc.
VI). Recovery testing:
- Recovery testing tests the response of the system to the presence of faults, or loss of
power , devices, services, data, etc.
- The system is subjected to the loss of the mentioned resources (as discussed in the
SRS document) and it is checked if the system recovers satisfactorily.
VII). Maintenance testing:
- This addresses testing the diagnostic programs, and other procedures that are required
to help maintenance of the system.
- It is verified that the artifacts exist and they perform properly.
VIII). Documentation testing:
- It is checked whether the required user manual, maintenance manuals, and technical
manuals exist and are consistent.
- If the requirements specify the types of audience for which a specific manual should be
designed, then the manual is checked for compliance of this requirement.
IX). Usability testing:
- Usability testing concerns checking the user interface to see if it meets all user
requirements concerning the user interface.
- During usability testing, the display screens, messages, report formats, and other
aspects relating to the user interface requirements are tested.
X). Security testing:
- Security testing is essential for software that handles or processes confidential data
that is to be guarded against pilfering.
- It needs to be tested whether the system is fool-proof from security attacks such as
intrusion by hackers.
- Over the last few years, a large number of security testing techniques have been
proposed, and these include password cracking, penetration testing, and attacks on
specific ports, etc.
3. Error Seeding:
- Sometimes customers specify the maximum number of residual errors that can be
present in the delivered software.
- These requirements are often expressed in terms of maximum number of allowable
errors per line of source code.
- The error seeding technique can be used to estimate the number of residual errors in a
software.
- Error seeding, as the name implies, involves seeding the code with some known
errors.
- In other words, some artificial errors are introduced (seeded) into the program.
- The number of these seeded errors that are detected in the course of standard testing is
determined.
- These values in conjunction with the number of unseeded errors detected during
testing can be used to predict the following aspects of a program:
1. The number of errors remaining in the product.
2. The effectiveness of the testing strategy.
- Let N be the total number of defects in the system, and let n of these defects be found
by testing.
- Let S be the total number of seeded defects, and let s of these defects be found during
testing. Therefore, we get:
n/N = s/S, that is, N = (S × n)/s
Defects still remaining in the program after testing can be given by:
N − n = n × (S − s)/s
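As an illustrative calculation (the numbers are assumed purely for the example): if
S = 100 errors are seeded, s = 50 of them are found during testing, and n = 40 unseeded
defects are also found, then the estimated total number of defects is
N = (100 × 40)/50 = 80, and the estimated number of defects still remaining after
testing is N − n = 80 − 40 = 40.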

- Error seeding works satisfactorily only if the kinds of seeded errors and their
frequency of occurrence match closely with the kinds of defects that actually exist.
- However, it is difficult to predict the types of errors that exist in a software.
- To some extent, the different categories of errors that are latent and their frequency of
occurrence can be estimated by analysing historical data collected from similar
projects.
