
Software Engineering Overview

Software engineering is an engineering branch concerned with the
development of software products using well-defined scientific principles,
methods, and procedures. The outcome of software engineering is an
efficient and reliable software product.

Software Evolution
The process of developing a software product using software engineering
principles and methods is referred to as software evolution. This
includes the initial development of the software and its subsequent
maintenance and updates, until the desired software product is developed,
one that satisfies the expected requirements.

Evolution starts with the requirement gathering process, after which
developers create a prototype of the intended software and show it to the
users to get feedback at an early stage of product development. The users
suggest changes, and several consecutive updates and maintenance cycles
modify the original software, until the desired software is accomplished.
Even after the user has the desired software in hand, advancing
technology and changing requirements force the software product to
change accordingly. Re-creating software from scratch for every new
requirement is not feasible. The only feasible and economical solution is
to update the existing software so that it matches the latest
requirements.

Software Evolution Laws


Lehman formulated laws for software evolution. He divided software
into three different categories:

 S-type (static-type) - This software works strictly according to
defined specifications and solutions. Both the solution and the
method to achieve it are understood immediately, before coding.
S-type software is the least subject to change and hence the
simplest of all. For example, a calculator program for
mathematical computation.
 P-type (practical-type) - This is software with a collection
of procedures, defined by exactly what those procedures can
do. In this software, the specifications can be described, but
the solution is not immediately obvious. For example, gaming
software.
 E-type (embedded-type) - This software works closely with
the requirements of the real-world environment. It has a high
degree of evolution, as laws, taxes, etc. change frequently in
real-world situations. For example, online trading software.

E-Type software evolution

Lehman formulated eight laws for E-type software evolution -

 Continuing change - An E-type software system must
continually adapt to real-world changes, else it becomes
progressively less useful.
 Increasing complexity - As an E-type software system
evolves, its complexity tends to increase unless work is done
to maintain or reduce it.
 Conservation of familiarity - The familiarity with the
software, or the knowledge about how and why it was
developed in a particular manner, must be retained in order to
implement changes in the system.
 Continuing growth - The functional content of an E-type
system intended to resolve some business problem must
continually grow to keep pace with changes in the business.
 Reducing quality - An E-type software system declines in
quality unless rigorously maintained and adapted to a
changing operational environment.
 Feedback systems- The E-type software systems constitute
multi-loop, multi-level feedback systems and must be treated
as such to be successfully modified or improved.
 Self-regulation - E-type system evolution processes are self-
regulating with the distribution of product and process
measures close to normal.
 Organizational stability - The average effective global
activity rate in an evolving E-type system is invariant over the
lifetime of the product.

Software Paradigms
Software paradigms refer to the methods and steps that are taken while
designing software. Many methods have been proposed and are in use
today, but we need to see where these paradigms stand in software
engineering. They can be grouped into categories, each contained within
another:

Programming paradigm is a subset of Software design paradigm, which is
in turn a subset of Software development paradigm.

Software Development Paradigm

This paradigm is known as the software engineering paradigm, where all
the engineering concepts pertaining to the development of software are
applied. It includes various research and requirement gathering
activities that help build the software product. It consists of -
 Requirement gathering
 Software design
 Programming
Software Design Paradigm
This paradigm is a part of Software Development and includes –

 Design
 Maintenance
 Programming
Programming Paradigm
This paradigm relates closely to the programming aspect of software
development. It includes -

 Coding
 Testing
 Integration

Need of Software Engineering

The need for software engineering arises because of the high rate of
change in user requirements and in the environment in which the software
operates.

 Large software - It is easier to build a wall than a house or a
building; likewise, as the size of software becomes large,
engineering has to step in to give it a scientific process.
 Scalability - If the software process were not based on
scientific and engineering concepts, it would be easier to re-
create new software than to scale an existing one.
 Cost - The hardware industry has shown its skills, and huge
manufacturing has lowered the price of computer and
electronic hardware. But the cost of software remains high if
a proper process is not adopted.
 Dynamic Nature - The always growing and adapting nature of
software depends hugely upon the environment in which the
user works. If the nature of software is always changing, new
enhancements need to be made to the existing one. This is
where software engineering plays a good role.
 Quality Management - A better process of software
development provides a better quality software product.
Characteristics of good software
A software product can be judged by what it offers and how well it can be
used. The software must satisfy on the following grounds:

 Operational
 Transitional
 Maintenance
Well-engineered and crafted software is expected to have the following
characteristics:

Operational
This tells us how well software works in operations. It can be measured
on:

 Budget
 Usability
 Efficiency
 Correctness

 Functionality
 Dependability
 Security
 Safety
Transitional
This aspect is important when the software is moved from one platform to
another:

 Portability
 Interoperability
 Reusability
 Adaptability
Maintenance
This aspect describes how well the software can maintain itself in the
ever-changing environment:

 Modularity
 Maintainability
 Flexibility
 Scalability
In short, software engineering is a branch of computer science that uses
well-defined engineering concepts to produce efficient, durable, scalable,
in-budget, and on-time software products.
Requirement Engineering
Requirements engineering (RE) refers to the process of defining,
documenting, and maintaining requirements in the engineering design
process. Requirement engineering provides the appropriate mechanism to
understand what the customer desires, analyze the need, assess
feasibility, negotiate a reasonable solution, specify the solution
clearly, validate the specification, and manage the requirements as
they are transformed into a working system. Thus, requirement
engineering is the disciplined application of proven principles, methods,
tools, and notations to describe a proposed system's intended behavior
and its associated constraints.

Requirement Engineering Process

It is a five-step process, which includes -

1. Feasibility Study
2. Requirement Elicitation and Analysis
3. Software Requirement Specification
4. Software Requirement Validation
5. Software Requirement Management

1. Feasibility Study:
The objective of the feasibility study is to establish the reasons for
developing software that is acceptable to users, flexible to change, and
conformant to established standards.

Types of Feasibility:

1. Technical Feasibility - Technical feasibility evaluates the current
technologies that are needed to accomplish customer
requirements within the time and budget.
2. Operational Feasibility - Operational feasibility assesses the
extent to which the required software can solve business problems
and meet customer requirements.
3. Economic Feasibility - Economic feasibility decides whether the
necessary software can generate financial profits for the
organization.
2. Requirement Elicitation and Analysis:
This is also known as the gathering of requirements. Here,
requirements are identified with the help of customers and existing
system processes, if available.
Analysis of requirements starts with requirement elicitation. The
requirements are analyzed to identify inconsistencies, defects, omissions,
etc. We describe requirements in terms of relationships and also resolve
conflicts, if any.
Problems of Elicitation and Analysis
o Getting all, and only, the right people involved.
o Stakeholders often don't know what they want.
o Stakeholders express requirements in their own terms.
o Stakeholders may have conflicting requirements.
o Requirements change during the analysis process.
o Organizational and political factors may influence system
requirements.

3. Software Requirement Specification:

A software requirement specification is a document created by a software
analyst after the requirements have been collected from various
sources - the requirements received from the customer are written in
ordinary language. It is the job of the analyst to write the requirements
in technical language so that they can be understood and used by the
development team.
The models used at this stage include ER diagrams, data flow diagrams
(DFDs), function decomposition diagrams (FDDs), data dictionaries, etc.
o Data Flow Diagrams: Data Flow Diagrams (DFDs) are used widely
for modeling the requirements. DFD shows the flow of data through
a system. The system may be a company, an organization, a set of
procedures, a computer hardware system, a software system, or
any combination of the preceding. The DFD is also known as a data
flow graph or bubble chart.
o Data Dictionaries: Data Dictionaries are simply repositories to
store information about all data items defined in DFDs. At the
requirements stage, the data dictionary should at least define
customer data items, to ensure that the customer and developers
use the same definition and terminologies.
o Entity-Relationship Diagrams: Another tool for requirement
specification is the entity-relationship diagram, often called an "E-R
diagram." It is a detailed logical representation of the data for the
organization and uses three main constructs i.e. data entities,
relationships, and their associated attributes.
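The data dictionary described above can be sketched as a simple lookup structure. This is a minimal illustration, not a real SRS artifact; the "customer" data items below are hypothetical examples.

```python
# A minimal data-dictionary sketch: each entry records the name, type,
# and agreed definition of a data item that appears in the DFDs.
# The data items below are hypothetical, invented for illustration.
data_dictionary = {
    "customer_id":   {"type": "integer", "description": "Unique identifier for a customer"},
    "customer_name": {"type": "string",  "description": "Full legal name of the customer"},
    "order_total":   {"type": "decimal", "description": "Total value of an order in USD"},
}

def lookup(item):
    """Return the agreed definition of a data item, so that customers
    and developers use the same terminology."""
    entry = data_dictionary.get(item)
    if entry is None:
        raise KeyError(f"'{item}' is not defined in the data dictionary")
    return f"{item} ({entry['type']}): {entry['description']}"

print(lookup("customer_id"))
```

Keeping definitions in one shared place is the whole point of a data dictionary: any disagreement over a term is resolved by the dictionary entry, not by each party's memory.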
4. Software Requirement Validation:
After the requirement specification is developed, the requirements
discussed in this document are validated. The user might demand an
illegal or impossible solution, or experts may misinterpret the needs.
Requirements can be checked against the following conditions -
o Whether they can be practically implemented
o Whether they are correct and match the intended functionality of the
software
o Whether there are any ambiguities
o Whether they are complete
o Whether they can be clearly described
Requirements Validation Techniques
o Requirements reviews/inspections: systematic manual analysis
of the requirements.
o Prototyping: Using an executable model of the system to check
requirements.
o Test-case generation: Developing tests for requirements to check
testability.
o Automated consistency analysis: checking for the consistency of
structured requirements descriptions.
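The test-case generation technique above can be sketched concretely: a requirement is testable if we can derive concrete checks from it. The requirement and the function below are hypothetical examples invented for illustration.

```python
# Hypothetical requirement: "Orders of $100 or more receive a 10% discount."
# A requirement is testable when concrete test cases can be derived from it.

def apply_discount(order_total):
    """Candidate implementation of the hypothetical discount requirement."""
    if order_total >= 100:
        return round(order_total * 0.9, 2)
    return order_total

# Test cases derived from the requirement, including the boundary at $100.
assert apply_discount(50) == 50        # below threshold: no discount
assert apply_discount(100) == 90.0     # boundary: discount applies
assert apply_discount(200) == 180.0    # above threshold: discount applies
print("all requirement-derived test cases pass")
```

If a requirement resists this exercise (no concrete inputs and expected outputs can be written down), that is a signal the requirement is ambiguous or incomplete.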
Software Requirement Management:
Requirement management is the process of managing changing
requirements during the requirements engineering process and system
development.
New requirements emerge during the process as business needs change
and a better understanding of the system is developed.
The priority of requirements from different viewpoints changes during the
development process.
The business and technical environment of the system changes during
development.
Prerequisite of Software requirements
The collection of software requirements is the basis of the entire software
development project. Hence, they should be clear, correct, and well-
defined.
A complete Software Requirement Specifications should be:
o Clear
o Correct
o Consistent
o Coherent
o Comprehensible
o Modifiable
o Verifiable
o Prioritized
o Unambiguous
o Traceable
o Credible source
Software Requirements: Broadly, software requirements must be
categorized into two categories:
1. Functional Requirements: Functional requirements define a
function that a system or system element must be able to
perform, and they can be documented in different forms. Functional
requirements describe the behavior of the system as it relates to
the system's functionality.
2. Non-functional Requirements: These are requirements that
specify criteria used to judge the operation of a system, rather
than specific behaviors.
Non-functional requirements are divided into two main categories:
o Execution qualities like security and usability, which are
observable at run time.
o Evolution qualities like testability, maintainability,
extensibility, and scalability, which are embodied in the static
structure of the software system.
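One way to make the functional/non-functional distinction concrete is to record each requirement with its category. This is a small sketch; the requirement texts and field names are hypothetical, not from any real project.

```python
from dataclasses import dataclass

# A sketch of recording requirements, separating functional requirements
# (system behavior) from non-functional ones (quality criteria).
# The example requirements themselves are hypothetical.

@dataclass
class Requirement:
    req_id: str
    text: str
    kind: str          # "functional" or "non-functional"
    quality: str = ""  # for non-functional: "execution" (run time)
                       # or "evolution" (static structure)

requirements = [
    Requirement("FR-1", "The system shall let a user reset their password.",
                "functional"),
    Requirement("NFR-1", "Pages shall load within 2 seconds.",
                "non-functional", "execution"),
    Requirement("NFR-2", "Modules shall be independently testable.",
                "non-functional", "evolution"),
]

functional = [r for r in requirements if r.kind == "functional"]
print(f"{len(functional)} functional requirement(s) in this sample")
```

Tagging each non-functional requirement as an execution or evolution quality mirrors the two categories above and makes it easy to check that both kinds have been considered.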

Software Development Life Cycle


Software Development Life Cycle, SDLC for short, is a well-defined,
structured sequence of stages in software engineering to develop the
intended software product.

SDLC Activities
SDLC provides a series of steps to be followed to design and develop a
software product efficiently. The SDLC framework includes the following steps:

Communication
This is the first step, where the user initiates the request for a desired
software product, contacts the service provider, and tries to negotiate
the terms. The user submits the request to the service-providing
organization in writing.

Requirement Gathering
From this step onwards, the software development team works to carry
the project forward. The team holds discussions with various stakeholders
from the problem domain and tries to bring out as much information as
possible on their requirements. The requirements are contemplated and
segregated into user requirements, system requirements, and functional
requirements. The requirements are collected using a number of practices,
such as -

 studying the existing or obsolete system and software,
 conducting interviews of users and developers,
 referring to the database, or
 collecting answers from questionnaires.
Feasibility Study
After requirement gathering, the team comes up with a rough plan of the
software process. At this step the team analyzes whether software can be
made to fulfill all the requirements of the user, and whether there is any
possibility of the software becoming obsolete. It is determined whether
the project is financially, practically, and technologically feasible for
the organization to take up. There are many algorithms available that help
the developers conclude the feasibility of a software project.

System Analysis
At this step the developers decide a roadmap for their plan and try to
bring up the best software model suitable for the project. System analysis
includes understanding software product limitations, learning about
system-related problems or changes to be made in existing systems
beforehand, identifying and addressing the impact of the project on the
organization and personnel, etc. The project team analyzes the scope of
the project and plans the schedule and resources accordingly.

Software Design
The next step is to bring the whole knowledge of requirements and
analysis together and design the software product. The inputs from users
and the information gathered in the requirement gathering phase are the
inputs of this step. The output of this step comes in the form of two
designs: logical design and physical design. Engineers produce meta-data
and data dictionaries, logical diagrams, data-flow diagrams, and in some
cases pseudo code.

Coding
This step is also known as the programming phase. The implementation of
the software design starts in terms of writing program code in a suitable
programming language and developing error-free executable programs
efficiently.

Testing
An estimate says that testing should account for 50% of the whole
software development process. Errors may ruin the software, ranging from
critical-level failures to its complete removal. Software testing is done
while coding by the developers, and thorough testing is conducted by
testing experts at various levels of code, such as module testing,
program testing, product testing, in-house testing, and testing the
product at the user's end. Early discovery of errors and their remedy is
the key to reliable software.
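The first level in that chain, module testing, is the kind of testing developers do while coding. A minimal sketch, using Python's standard unittest module on a hypothetical function invented for illustration:

```python
import unittest

# Hypothetical module under test: a tiny function a developer might test
# while coding, before the build reaches the testing experts.
def word_count(text):
    """Count whitespace-separated words in a string."""
    return len(text.split())

# Module-level tests: the smallest level in the chain
# (module testing -> program testing -> product testing -> in-house testing).
class WordCountTest(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("early discovery of errors"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

# Run the module tests programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("module tests passed:", result.wasSuccessful())
```

Catching a defect at this level is far cheaper than discovering it during product testing or at the user's end, which is the economic argument behind "early discovery of errors".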

Integration
Software may need to be integrated with libraries, databases, and other
programs. This stage of SDLC involves the integration of the software
with outer-world entities.

Implementation
This means installing the software on user machines. At times, software
needs post-installation configurations at user end. Software is tested for
portability and adaptability and integration related issues are solved
during implementation.
Operation and Maintenance
This phase confirms that the software operates with greater efficiency
and fewer errors. If required, the users are trained on, or aided with,
documentation on how to operate the software and how to keep it
operational. The software is maintained in a timely manner by updating
the code according to the changes taking place in the user-end
environment or technology. This phase may face challenges from hidden
bugs and unidentified real-world problems.

Disposition
As time elapses, the software may decline on the performance front. It
may become completely obsolete or may need intense upgrades. Hence a
pressing need to eliminate a major portion of the system arises. This
phase includes archiving data and required software components, closing
down the system, planning disposition activity, and terminating the
system at an appropriate end-of-system time.

Software Development Paradigm

The software development paradigm helps the developer select a strategy
to develop the software. A software development paradigm has its own
set of tools, methods, and procedures, which are expressed clearly and
define the software development life cycle. A few of the software
development paradigms, or process models, are defined as follows:

Waterfall Model
Winston Royce introduced the Waterfall Model in 1970. This model has five
phases: requirements analysis and specification; design; implementation
and unit testing; integration and system testing; and operation and
maintenance. The phases always follow this order and do not overlap.
The developer must complete every phase before the next phase begins.
This model is named the "Waterfall Model" because its diagrammatic
representation resembles a cascade of waterfalls.
1. Requirements analysis and specification phase: The aim
of this phase is to understand the exact requirements of the
customer and to document them properly. Both the customer
and the software developer work together to document
all the functions, performance, and interfacing requirements of
the software. It describes the "what" of the system to be
produced, not the "how." In this phase, a large document
called the Software Requirement Specification
(SRS) is created, which contains a detailed
description of what the system will do, in common
language.
2. Design Phase: This phase aims to transform the requirements
gathered in the SRS into a suitable form which permits further coding in a
programming language. It defines the overall software architecture
together with high level and detailed design. All this work is documented
as a Software Design Document (SDD).
3. Implementation and unit testing: During this phase, design is
implemented. If the SDD is complete, the implementation or coding phase
proceeds smoothly, because all the information needed by software
developers is contained in the SDD.
During testing, the code is thoroughly examined and modified. Small
modules are tested in isolation initially. After that these modules are
tested by writing some overhead code to check the interaction between
these modules and the flow of intermediate output.
4. Integration and System Testing: This phase is highly crucial, as the
quality of the end product is determined by the effectiveness of the
testing carried out. Better testing leads to satisfied customers,
lower maintenance costs, and accurate results. Unit testing determines
the correctness of individual modules. In this phase, however, the
modules are tested for their interactions with each other and with the
system.
5. Operation and maintenance phase: Maintenance is the activity that
begins once the software has been delivered to the customer, installed,
and made operational.
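The model's defining property, strictly sequential phases with no overlap, can be sketched in a few lines. The phase names follow the five phases above; the "work" of each phase is simulated, so this is an illustration of the sequencing rule, not a real process engine.

```python
# A sketch of the Waterfall model's defining property: phases run strictly
# in order, each completing before the next begins, with no overlap.

PHASES = [
    "Requirements analysis and specification",
    "Design",
    "Implementation and unit testing",
    "Integration and system testing",
    "Operation and maintenance",
]

def run_waterfall(phases):
    completed = []
    for phase in phases:
        # A phase may start only when every earlier phase is finished.
        assert len(completed) == phases.index(phase)
        completed.append(phase)  # simulate finishing the phase
    return completed

result = run_waterfall(PHASES)
print(" -> ".join(result))
```

The assertion inside the loop is the model in miniature: there is never a moment when a later phase is underway while an earlier one is unfinished, which is also why going back (as noted in the disadvantages) is so costly.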
When to use SDLC Waterfall Model?
Some circumstances where the use of the Waterfall model is most suited
are:
o When the requirements are constant and do not change regularly.
o When the project is short.
o When the situation is calm.
o When the tools and technology used are consistent and not changing.
o When resources are well prepared and available for use.
Advantages of Waterfall model
o This model is simple to implement, and the number of resources
required for it is minimal.
o The requirements are simple and explicitly declared; they remain
unchanged during the entire project development.
o The start and end points for each phase are fixed, which makes it
easy to track progress.
o The release date for the complete product, as well as its final cost,
can be determined before development.
o It offers easy control and clarity for the customer due to a strict
reporting system.
Disadvantages of Waterfall model
o In this model, the risk factor is higher, so it is not suitable
for large and complex projects.
o This model cannot accommodate changes in requirements during
development.
o It is tough to go back to a previous phase. For example, if the
application has moved on to the coding phase and there is a
change in requirements, it becomes tough to go back and change it.
o Since testing is done at a later stage, it does not allow challenges
and risks to be identified in earlier phases, so a risk reduction
strategy is difficult to prepare.

RAD (Rapid Application Development) Model

RAD is a linear sequential software development process model that
emphasizes a concise development cycle using a component-based
construction approach. If the requirements are well understood and
described, and the project scope is constrained, the RAD process enables
a development team to create a fully functional system within a concise
time period.
RAD (Rapid Application Development) is a concept holding that products
can be developed faster and at higher quality through:
o Gathering requirements using workshops or focus groups
o Prototyping and early, iterative user testing of designs
o The re-use of software components
o A rigidly paced schedule that defers design improvements to the
next product version
o Less formality in reviews and other team communication

The various phases of RAD are as follows:

1. Business Modelling: The information flow among business functions is
defined by answering questions such as: what data drives the business
process, what data is generated, who generates it, where does the
information go, and who processes it.
2. Data Modelling: The data collected from business modelling is refined
into a set of data objects (entities) that are needed to support the
business. The attributes (characteristics of each entity) are identified,
and the relations between these data objects (entities) are defined.
3. Process Modelling: The information objects defined in the data
modelling phase are transformed to achieve the data flow necessary to
implement a business function. Processing descriptions are created for
adding, modifying, deleting, or retrieving a data object.
4. Application Generation: Automated tools are used to facilitate the
construction of the software; these may even use 4GL techniques.
5. Testing & Turnover: Many of the programming components have
already been tested, since RAD emphasizes reuse. This reduces the overall
testing time. But the new parts must be tested, and all interfaces must be
fully exercised.
When to use RAD Model?
o When the system needs to be created as a project that can be
modularized within a short span of time (2-3 months).
o When the requirements are well known.
o When the technical risk is limited.
o When the budget allows the use of automated code generating
tools.
Advantage of RAD Model
o This model is flexible to change.
o In this model, changes are easily accommodated.
o Each phase in RAD delivers the highest-priority functionality to the
customer.
o It reduces development time.
o It increases the reusability of components.
Disadvantage of RAD Model
o It requires highly skilled designers.
o Not all applications are compatible with RAD.
o It cannot be used for smaller projects.
o It is not suitable where technical risk is high.
o It requires user involvement.

Spiral Model
The spiral model, initially proposed by Boehm, is an evolutionary software
process model that couples the iterative feature of prototyping with the
controlled and systematic aspects of the linear sequential model. It
provides the potential for rapid development of new versions of the
software. Using the spiral model, the software is developed in a series of
incremental releases. During the early iterations, the incremental release
may be a paper model or prototype. During later iterations, more and
more complete versions of the engineered system are produced.
Each cycle in the spiral is divided into four parts:
Objective setting: Each cycle in the spiral starts with the identification
of the purpose for that cycle, the various alternatives that are possible
for achieving the targets, and the constraints that exist.
Risk assessment and reduction: The next phase in the cycle is to
evaluate these various alternatives based on the goals and constraints.
The focus of evaluation in this stage is the perceived risk for
the project.
Development and validation: The next phase is to develop strategies
that resolve uncertainties and risks. This process may include activities
such as benchmarking, simulation, and prototyping.
Planning: Finally, the next step is planned. The project is reviewed, and a
choice is made whether to continue with a further cycle of the spiral. If
it is decided to continue, plans are drawn up for the next step of the
project.
The development phase depends on the remaining risks. For example, if
performance or user-interface risks are considered more significant than
the program development risks, the next phase may be an evolutionary
development that includes developing a more detailed prototype for
resolving the risks.
The risk-driven feature of the spiral model allows it to accommodate any
mixture of specification-oriented, prototype-oriented, simulation-
oriented, or other approaches. An essential element of the model
is that each cycle of the spiral is completed by a review that covers all
the products developed during that cycle, including plans for the next
cycle. The spiral model works for development as well as enhancement
projects.
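The four-part cycle above can be sketched as a risk-driven loop. This is a toy illustration only: the numeric risk values, the reduction per cycle, and the acceptable-risk threshold are hypothetical stand-ins for the qualitative risk judgments a real spiral project makes at each review.

```python
# A sketch of the spiral's four-part cycle as a risk-driven loop.
# All numbers here are hypothetical illustrations, not a real metric.

def spiral(initial_risk, reduction_per_cycle=0.3,
           acceptable_risk=0.1, max_cycles=10):
    risk = initial_risk
    for cycle in range(1, max_cycles + 1):
        # 1. Objective setting: identify targets, alternatives, constraints.
        # 2. Risk assessment and reduction: evaluate alternatives against
        #    the perceived risks.
        risk = max(0.0, risk - reduction_per_cycle)
        # 3. Development and validation: prototype/simulate to resolve risks.
        # 4. Planning: review the cycle and decide whether to continue.
        if risk <= acceptable_risk:
            return cycle, risk
    return max_cycles, risk

cycles, remaining = spiral(initial_risk=0.8)
print(f"finished after {cycles} cycles with remaining risk {remaining:.1f}")
```

The loop captures the model's key idea: each cycle ends with a review and a go/no-go decision, and the project keeps spiralling only while significant risk remains.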

When to use Spiral Model?

o When deliveries are required to be frequent.
o When the project is large.
o When requirements are unclear and complex.
o When changes may be required at any time.
o For large and high-budget projects.
Advantages
o High amount of risk analysis.
o Useful for large and mission-critical projects.
Disadvantages
o Can be a costly model to use.
o Risk analysis requires highly specific expertise.
o Doesn't work well for smaller projects.

V-Model
The V-Model is also referred to as the Verification and Validation Model.
In it, each phase of the SDLC must be completed before the next phase
starts. It follows a sequential design process, the same as the waterfall
model. Testing is planned in parallel with the corresponding stage of
development.

Verification: This involves a static analysis method (review) done
without executing code. It is the process of evaluating the product
development process to find whether the specified requirements are met.
Validation: This involves dynamic analysis methods (functional, non-
functional); testing is done by executing code. Validation is the process
of evaluating the software after the completion of the development
process to determine whether it meets the customer's expectations and
requirements.
So the V-Model contains Verification phases on one side and Validation
phases on the other. The Verification and Validation processes are joined
by the coding phase, in a V shape. Thus it is known as the V-Model.
The various phases of the Verification side of the V-Model are:
1. Business requirement analysis: This is the first step, where the
product requirements are understood from the customer's side. This
phase involves detailed communication to understand the customer's
expectations and exact requirements.
2. System Design: In this stage, system engineers analyze and
interpret the business logic of the proposed system by studying the
user requirements document.
3. Architecture Design: The baseline for selecting the architecture is
that it should cover all requirements; it typically consists of the list
of modules, brief functionality of each module, their interface
relationships, dependencies, database tables, architecture
diagrams, technology details, etc. The integration test plan is
prepared in this phase.
4. Module Design: In the module design phase, the system is broken
down into small modules. The detailed design of the modules is
specified, which is known as Low-Level Design.
5. Coding Phase: After designing, the coding phase starts. Based
on the requirements, a suitable programming language is decided.
There are guidelines and standards for coding. Before check-in to
the repository, the final build is optimized for better performance,
and the code goes through many code reviews to check
performance.
There are the various phases of Validation Phase of V-model:
1. Unit Testing: In the V-Model, Unit Test Plans (UTPs) are developed
during the module design phase. These UTPs are executed to
eliminate errors at code level or unit level. A unit is the smallest
entity which can independently exist, e.g., a program module. Unit
testing verifies that the smallest entity can function correctly when
isolated from the rest of the codes/ units.
2. Integration Testing: Integration Test Plans are developed during
the Architectural Design Phase. These tests verify that groups
created and tested independently can coexist and communicate
among themselves.
3. System Testing: System Tests Plans are developed during System
Design Phase. Unlike Unit and Integration Test Plans, System Tests
Plans are composed by the client?s business team. System Test
ensures that expectations from an application developer are met.
4. Acceptance Testing: Acceptance testing is related to the business
requirement analysis part. It includes testing the software product
in the user environment. Acceptance tests reveal compatibility
problems with the other systems available within the user
environment. They also discover non-functional problems, such as
load and performance defects, in the real user environment.
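The unit-testing level described above can be sketched in code. The example below is a hedged illustration, not from the source: `apply_discount` is a hypothetical unit, and the test case verifies it in isolation from the rest of the system, as a Unit Test Plan would prescribe.

```python
# Hypothetical example: a single unit (apply_discount) verified in
# isolation, the way the V-model's unit-testing level prescribes.
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Such unit tests are run with a test runner (e.g. `python -m unittest`) as soon as the unit is coded, so code-level errors are eliminated before integration.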
When to use V-Model?
o When the requirement is well defined and not ambiguous.
o The V-shaped model should be used for small to medium-sized
projects where requirements are clearly defined and fixed.
o The V-shaped model should be chosen when ample technical
resources are available with essential technical expertise.
Advantage (Pros) of V-Model:
1. Easy to understand.
2. Testing activities like planning and test design happen well before
coding. This saves a lot of time, hence a higher chance of success
over the waterfall model.
3. Avoids the downward flow of defects.
4. Works well for small projects where requirements are easily
understood.
Disadvantage (Cons) of V-Model:
1. Very rigid and the least flexible.
2. Not good for complex projects.
3. Software is developed during the implementation stage, so no early
prototypes of the software are produced.
4. If any changes happen midway, then the test documents, along with
the requirement documents, have to be updated.
Incremental Model
The Incremental Model is a process of software development where
requirements are divided into multiple standalone modules of the
software development cycle. In this model, each module goes through
the requirements, design, implementation and testing phases. Every
subsequent release of a module adds function to the previous release.
The process continues until the complete system is achieved.
The various phases of incremental model are as follows:
1. Requirement analysis: In the first phase of the incremental model,
product analysis experts identify the requirements, and the system's
functional requirements are understood by the requirement analysis
team. This phase plays a crucial role in developing software under
the incremental model.
2. Design & Development: In this phase of the incremental model of the
SDLC, the design of the system functionality and the development
method are completed successfully. Whenever new functionality is
added to the software, the incremental model goes through the design
and development phase again.
3. Testing: In the incremental model, the testing phase checks the
performance of each existing function as well as the additional
functionality. In the testing phase, various methods are used to test
the behavior of each task.
4. Implementation: The implementation phase enables the coding of the
development system. It involves the final coding of the design
produced in the design and development phase and testing of the
functionality from the testing phase. After completion of this phase,
the working product is enhanced and upgraded up to the final system
product.
When to use the Incremental Model?
o When the requirements are well understood up front.
o A project has a lengthy development schedule.
o When the software team is not very well skilled or trained.
o When the customer demands a quick release of the product.
o You can develop prioritized requirements first.
Advantage of Incremental Model
o Errors are easy to recognize.
o Easier to test and debug.
o More flexible.
o Simple to manage risk, because risks are handled during their
iteration.
o The client gets important functionality early.
Disadvantage of Incremental Model
o Needs good planning and design.
o Total cost is high.
o Well-defined module interfaces are needed.
Agile Model
The meaning of Agile is swift or versatile. "Agile process model"
refers to a software development approach based on iterative
development. Agile methods break tasks into smaller iterations, or
parts, that do not directly involve long-term planning. The project
scope and requirements are laid down at the beginning of the
development process. Plans regarding the number of iterations, and
the duration and scope of each iteration, are clearly defined in
advance.
Each iteration is considered as a short time "frame" in the Agile process
model, which typically lasts from one to four weeks. The division of the
entire project into smaller parts helps to minimize the project risk and to
reduce the overall project delivery time requirements. Each iteration
involves a team working through a full software development life cycle
including planning, requirements analysis, design, coding, and testing
before a working product is demonstrated to the client.
Phases of Agile Model:
The phases in the Agile model are as follows:
1. Requirements gathering
2. Design the requirements
3. Construction/ iteration
4. Testing/ Quality assurance
5. Deployment
6. Feedback
1. Requirements gathering: In this phase, you must define the
requirements. You should explain business opportunities and plan the
time and effort needed to build the project. Based on this information, you
can evaluate technical and economic feasibility.
2. Design the requirements: When you have identified the project,
work with stakeholders to define requirements. You can use the user flow
diagram or the high-level UML diagram to show the work of new features
and show how it will apply to your existing system.
3. Construction/ iteration: When the team defines the requirements,
the work begins. Designers and developers start working on their project,
which aims to deploy a working product. The product will undergo various
stages of improvement, so it includes simple, minimal functionality.
4. Testing: In this phase, the Quality Assurance team examines the
product's performance and looks for bugs.
5. Deployment: In this phase, the team issues a product for the user's
work environment.
6. Feedback: After releasing the product, the last step is feedback. In
this, the team receives feedback about the product and works through the
feedback.
Agile Testing Methods:
o Scrum
o Crystal
o Dynamic Software Development Method(DSDM)
o Feature Driven Development(FDD)
o Lean Software Development
o eXtreme Programming(XP)
Scrum
SCRUM is an agile development process focused primarily on ways to
manage tasks in team-based development conditions.
There are three roles in it, and their responsibilities are:
o Scrum Master: The Scrum Master sets up the team, arranges the
meetings and removes obstacles in the process.
o Product owner: The product owner creates the product backlog,
prioritizes the backlog and is responsible for the delivery of
functionality at each iteration.
o Scrum Team: The team manages its own work and organizes the work
to complete the sprint or cycle.
eXtreme Programming(XP)
This type of methodology is used when customers' demands or
requirements are constantly changing, or when they are not sure about
the system's performance.
Crystal:
There are three concepts of this method-
1. Chartering: Multi activities are involved in this phase such as making
a development team, performing feasibility analysis, developing
plans, etc.
2. Cyclic delivery: This consists of two more cycles:
o The team updates the release plan.
o The integrated product is delivered to the users.
3. Wrap up: According to the user environment, this phase performs
deployment and post-deployment tasks.
Dynamic Software Development Method(DSDM):
DSDM is a rapid application development strategy for software
development that provides an agile project delivery structure. The
essential features of DSDM are that users must be actively involved,
and teams are given the right to make decisions. The techniques used
in DSDM are:
1. Time Boxing
2. MoSCoW Rules
3. Prototyping
The DSDM project contains seven stages:
1. Pre-project
2. Feasibility Study
3. Business Study
4. Functional Model Iteration
5. Design and build Iteration
6. Implementation
7. Post-project
Feature Driven Development(FDD):
This method focuses on "Designing and Building" features. In contrast
to other agile methods, FDD describes small steps of the work that
should be achieved separately per feature.
Lean Software Development:
The lean software development methodology follows the principle of
"just in time production." The lean method aims at increasing the
speed of software development and reducing costs. Lean development
can be summarized in seven phases.
1. Eliminating Waste
2. Amplifying learning
3. Defer commitment (deciding as late as possible)
4. Early delivery
5. Empowering the team
6. Building Integrity
7. Optimize the whole
When to use the Agile Model?
o When frequent changes are required.
o When a highly qualified and experienced team is available.
o When a customer is ready to have a meeting with a software team
all the time.
o When project size is small.
Advantage(Pros) of Agile Method:
1. Frequent Delivery
2. Face-to-Face Communication with clients.
3. Efficient design that fulfils the business requirement.
4. Anytime changes are acceptable.
5. It reduces total development time.
Disadvantages(Cons) of Agile Model:
1. Due to the shortage of formal documents, it creates confusion, and
crucial decisions taken throughout various phases can be
misinterpreted at any time by different team members.
2. Due to the lack of proper documentation, once the project is
completed and the developers are allotted to another project,
maintenance of the finished project can become a difficulty.
Iterative Model
In this model, you can start with some of the software specifications
and develop the first version of the software. After the first
version, if there is a need to change the software, then a new
version of the software is created in a new iteration. Every release
of the iterative model finishes in an exact and fixed period, which
is called an iteration.
The iterative model allows accessing earlier phases, in which
variations are made accordingly. The final output of the project is
renewed at the end of the Software Development Life Cycle (SDLC)
process.
The various phases of Iterative model are as follows:
1. Requirement gathering & analysis: In this phase, requirements are
gathered from customers and checked by an analyst to see whether they
can be fulfilled or not. The analyst also checks whether the needs
can be achieved within budget. After all of this, the software team
moves to the next phase.
2. Design: In the design phase, the team designs the software using
different diagrams like the data flow diagram, activity diagram,
class diagram, state transition diagram, etc.
3. Implementation: In the implementation phase, the requirements are
written in a coding language and transformed into computer programs,
which are called software.
4. Testing: After completing the coding phase, software testing starts
using different test methods. There are many test methods, but the most
common are white box, black box, and grey box test methods.
5. Deployment: After completing all the phases, software is deployed to
its work environment.
6. Review: In this phase, after deployment of the product, the review
phase is performed to check the behavior and validity of the
developed product. If any errors are found, the process starts again
from requirement gathering.
7. Maintenance: In the maintenance phase, after deployment of the
software in the working environment, there may be some bugs or
errors, or new updates may be required. Maintenance involves
debugging and adding new options.
When to use the Iterative Model?
1. When requirements are defined clearly and easy to understand.
2. When the software application is large.
3. When changes are expected to be required in the future.
Advantage(Pros) of Iterative Model:
1. Testing and debugging during a smaller iteration is easy.
2. Parallel development can be planned.
3. It easily adapts to the ever-changing needs of the project.
4. Risks are identified and resolved during each iteration.
5. Limited time is spent on documentation and extra time on design.
Disadvantage(Cons) of Iterative Model:
1. It is not suitable for smaller projects.
2. More resources may be required.
3. The design can be changed again and again because of imperfect
requirements.
4. Requirement changes can cause the project to go over budget.
5. The project completion date is not confirmed because of changing
requirements.
Big Bang Model
In this model, developers do not follow any specific process. Development
begins with the necessary funds and efforts in the form of inputs. And the
result may or may not be as per the customer's requirement, because in
this model, even the customer requirements are not defined.
This model is ideal for small projects like academic projects or practical
projects. One or two developers can work together on this model.
When to use Big Bang Model?
As discussed above, this model is used when the project is small,
like an academic or practical project. This method is also used when
the developer team is small, when requirements are not defined, and
when the release date is not confirmed or given by the customer.
Advantage(Pros) of Big Bang Model:
1. There is no planning required.
2. Simple Model.
3. Few resources required.
4. Easy to manage.
5. Flexible for developers.
Disadvantage(Cons) of Big Bang Model:
1. There are high risk and uncertainty.
2. Not acceptable for a large project.
3. If requirements are not clear, it can turn out to be very expensive.
Prototype Model
The prototype model requires that before carrying out the development of
actual software, a working prototype of the system should be built. A
prototype is a toy implementation of the system. A prototype usually turns
out to be a very crude version of the actual system, possibly exhibiting
limited functional capabilities, low reliability, and inefficient performance
as compared to actual software. In many instances, the client only has a
general view of what is expected from the software product. In such a
scenario where there is an absence of detailed information regarding the
input to the system, the processing needs, and the output requirement,
the prototyping model may be employed.
Steps of Prototype Model
1. Requirement Gathering and Analysis
2. Quick Decision
3. Build a Prototype
4. Assessment or User Evaluation
5. Prototype Refinement
6. Engineer Product
Advantage of Prototype Model
1. Reduces the risk of incorrect user requirements
2. Good where requirements are changing/uncommitted
3. Regular visible process aids management
4. Support early product marketing
5. Reduce Maintenance cost.
6. Errors can be detected much earlier as the system is made side by
side.
Disadvantage of Prototype Model
1. An unstable/badly implemented prototype often becomes the final
product.
2. Require extensive customer collaboration
o Costs customer money
o Needs committed customer
o Difficult to finish if the customer withdraws
o May be too customer specific, no broad market
3. Difficult to know how long the project will last.
4. Easy to fall back into the code and fix without proper requirement
analysis, design, customer evaluation, and feedback.
5. Prototyping tools are expensive.
6. Special tools & techniques are required to build a prototype.
7. It is a time-consuming process.
Evolutionary Process Model
The evolutionary process model resembles the iterative enhancement
model. The same phases as defined for the waterfall model occur here
in a cyclical fashion. This model differs from the iterative
enhancement model in the sense that it does not require a useful
product at the end of each cycle. In evolutionary development,
requirements are implemented by category rather than by priority.
For example, in a simple database application, one cycle might implement
the graphical user Interface (GUI), another file manipulation, another
queries and another updates. All four cycles must complete before there
is a working product available. The GUI allows the users to interact
with the system, file manipulation allows the data to be saved and
retrieved, queries allow users to get data out of the system, and
updates allow users to put data into the system.
Benefits of Evolutionary Process Model
o Use of EVO brings a significant reduction in risk for software
projects.
o EVO can reduce costs by providing a structured, disciplined avenue
for experimentation.
o EVO allows the marketing department access to early deliveries,
facilitating the development of documentation and demonstrations.
o Better fit of the product to user needs and market requirements.
o Manage project risk with the definition of early cycle content.
o Uncover key issues early and focus attention appropriately.
o Increase the opportunity to hit market windows.
o Accelerate sales cycles with early customer exposure.
o Increase management visibility of project progress.
o Increase product team productivity and motivation.
Software Project Management
The job pattern of an IT company engaged in software development can
be split into two parts:
o Software Creation
o Software Project Management
A project is a well-defined task, which is a collection of several
operations done in order to achieve a goal (for example, software
development and delivery). A project can be characterized as:
o Every project has a unique and distinct goal.
o A project is not a routine activity or day-to-day operation.
o A project comes with a start time and an end time.
o A project ends when its goal is achieved; hence it is a temporary
phase in the lifetime of an organization.
o A project needs adequate resources in terms of time, manpower,
finance, material and knowledge-bank.
Software Project
A software project is the complete procedure of software development,
from requirement gathering to testing and maintenance, carried out
according to the execution methodologies in a specified period of
time to achieve the intended software product.

Need of software project management
Software is said to be an intangible product. Software development is
a relatively new stream in world business, and there is very little
experience in building software products. Most software products are
tailor-made to fit the client's requirements. Most importantly, the
underlying technology changes and advances so frequently and rapidly
that experience with one product may not be applied to another. All
such business and environmental constraints bring risk to software
development; hence it is essential to manage software projects
efficiently.
A software project is typically depicted as a triangle of triple
constraints: time, cost and quality. It is an essential part of a
software organization to deliver a quality product, keeping the cost
within the client's budget constraint and delivering the project as
per schedule. There are several factors, both internal and external,
which may impact this triple-constraint triangle. Any one of the
three factors can severely impact the other two.
Therefore, software project management is essential to incorporate user
requirements along with budget and time constraints.

Software Project Manager
A software project manager is a person who undertakes the
responsibility of executing the software project. The software
project manager is thoroughly aware of all the phases of the SDLC
that the software will go through. The project manager may never be
directly involved in producing the end product, but he controls and
manages the activities involved in production.
A project manager closely monitors the development process, prepares
and executes various plans, arranges necessary and adequate
resources, and maintains communication among all team members in
order to address issues of cost, budget, resources, time, quality and
customer satisfaction.
Let us see a few responsibilities that a project manager shoulders -
Managing People
o Act as project leader
o Liaison with stakeholders
o Managing human resources
o Setting up reporting hierarchy, etc.
Managing Project
o Defining and setting up project scope
o Managing project management activities
o Monitoring progress and performance
o Risk analysis at every phase
o Taking necessary steps to avoid or come out of problems
o Act as project spokesperson

Software Management Activities
Software project management comprises a number of activities, which
include planning of the project, deciding the scope of the software
product, estimation of cost in various terms, scheduling of tasks and
events, and resource management. Project management activities may
include:
o Project Planning
o Scope Management
o Project Estimation
Project Planning
Software project planning is a task performed before the production
of software actually starts. It is there for the software production
but involves no concrete activity that has any direct connection with
software production; rather, it is a set of multiple processes which
facilitate software production. Project planning may include the
following:
Scope Management
It defines the scope of the project; this includes all the activities
and processes that need to be done in order to make a deliverable
software product. Scope management is essential because it creates
the boundaries of the project by clearly defining what would be done
in the project and what would not be done. This makes the project
contain limited and quantifiable tasks, which can easily be
documented and in turn avoids cost and time overrun.
During project scope management, it is necessary to -
o Define the scope
o Decide its verification and control
o Divide the project into various smaller parts for ease of
management
o Verify the scope
o Control the scope by incorporating changes to the scope
Project Estimation
For effective management, accurate estimation of various measures is
a must. With correct estimation, managers can manage and control the
project more efficiently and effectively.
Project estimation may involve the following:
o Software size estimation
Software size may be estimated either in terms of KLOC (Kilo
Lines of Code) or by calculating the number of function points in
the software. Lines of code depend upon coding practices, and
function points vary according to the user or software
requirements.
o Effort estimation
The managers estimate effort in terms of personnel requirement and
man-hours required to produce the software. For effort estimation,
the software size should be known. This can either be derived from
the managers' experience or the organization's historical data, or
the software size can be converted into effort by using some
standard formulae.
o Time estimation
Once size and effort are estimated, the time required to produce
the software can be estimated. The effort required is segregated
into sub-categories as per the requirement specifications and the
interdependency of various components of the software. Software
tasks are divided into smaller tasks, activities or events by a
Work Breakdown Structure (WBS). The tasks are scheduled on a
day-to-day basis or in calendar months.
The sum of the time required to complete all tasks in hours or
days is the total time invested to complete the project.
o Cost estimation
This might be considered the most difficult of all because it
depends on more elements than any of the previous ones. For
estimating project cost, it is required to consider -
  o Size of software
  o Software quality
  o Hardware
  o Additional software or tools, licenses, etc.
  o Skilled personnel with task-specific skills
  o Travel involved
  o Communication
  o Training and support
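Size estimation in function points can be sketched as follows. This is a hedged illustration: the component counts are hypothetical, the weights are the standard "average complexity" function-point weights, and the value adjustment factor uses the usual formula over the 14 general system characteristics.

```python
# Hedged sketch of function-point size estimation. The component
# counts below are hypothetical; the weights are the standard
# "average complexity" weights, and the value adjustment factor
# (VAF) is 0.65 + 0.01 * sum(GSC ratings), each rating 0..5.
AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

def function_points(counts, gsc_ratings):
    """Adjusted FP = unadjusted FP count (UFP) * value adjustment factor."""
    ufp = sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())
    vaf = 0.65 + 0.01 * sum(gsc_ratings)
    return ufp * vaf

counts = {"external_inputs": 10, "external_outputs": 8,
          "external_inquiries": 6, "internal_files": 4,
          "external_interfaces": 2}
fp = function_points(counts, [3] * 14)  # every characteristic "average"
# UFP = 40 + 40 + 24 + 40 + 14 = 158; VAF = 0.65 + 0.42 = 1.07
# adjusted FP = 158 * 1.07 = 169.06
```

Because function points are independent of the programming language, the result can be compared across projects, unlike raw lines of code.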
Project Estimation Techniques
We discussed various parameters involving project estimation, such as
size, effort, time and cost.
The project manager can estimate the listed factors using two broadly
recognized techniques -
Decomposition Technique
This technique assumes the software to be a product of various
compositions.
There are two main models -
o Line of Code Estimation is done on the basis of the number of
lines of code in the software product.
o Function Points Estimation is done on the basis of the number of
function points in the software product.
Empirical Estimation Technique
This technique uses empirically derived formulae to make estimations.
These formulae are based on LOC or FPs.
o Putnam Model
This model was made by Lawrence H. Putnam and is based on Norden's
frequency distribution (Rayleigh curve). The Putnam model maps the
time and effort required with software size.
o COCOMO
COCOMO stands for COnstructive COst MOdel, developed by Barry W.
Boehm. It divides the software product into three categories of
software: organic, semi-detached and embedded.
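The Basic COCOMO equations can be sketched as follows. The coefficients are the published Basic COCOMO values for the three categories named above; the 32-KLOC input is a hypothetical example.

```python
# Basic COCOMO sketch: effort = a * KLOC^b (person-months) and
# development time = c * effort^d (months), using the published
# Basic COCOMO coefficients per project category.
COEFFICIENTS = {          # category: (a, b, c, d)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, category="organic"):
    a, b, c, d = COEFFICIENTS[category]
    effort = a * kloc ** b        # estimated effort in person-months
    time = c * effort ** d        # estimated development time in months
    staff = effort / time         # average staffing level
    return effort, time, staff

# Hypothetical 32 KLOC organic project: roughly 91 person-months
# spread over roughly 14 months.
effort, time, staff = basic_cocomo(32, "organic")
```

Note how the exponent b grows from organic to embedded: the same size costs disproportionately more effort in a tightly constrained (embedded) project.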
Project Scheduling
Project scheduling in a project refers to the roadmap of all
activities to be done, in a specified order and within the time slot
allotted to each activity. Project managers tend to define various
tasks and project milestones and arrange them keeping various factors
in mind. They look for tasks that lie in the critical path in the
schedule, which must be completed in a specific manner (because of
task interdependency) and strictly within the time allocated. Tasks
that lie outside the critical path are less likely to impact the
overall schedule of the project.
For scheduling a project, it is necessary to -
o Break down the project tasks into smaller, manageable form
o Find out various tasks and correlate them
o Estimate the time frame required for each task
o Divide time into work-units
o Assign an adequate number of work-units to each task
o Calculate the total time required for the project from start to
finish
Resource management
All elements used to develop a software product may be assumed to be
resources for that project. This may include human resources,
productive tools and software libraries.
The resources are available in limited quantity and stay in the
organization as a pool of assets. A shortage of resources hampers
development of the project, and it can lag behind schedule.
Allocating extra resources increases development cost in the end. It
is therefore necessary to estimate and allocate adequate resources
for the project.
Resource management includes -
o Defining proper organization of the project by creating a project
team and allocating responsibilities to each team member
o Determining the resources required at a particular stage and
their availability
o Managing resources by generating resource requests when they are
required and de-allocating them when they are no longer needed
Project Risk Management
Risk management involves all activities pertaining to identifying,
analyzing and making provision for predictable and non-predictable
risks in the project. Risks may include the following:
o Experienced staff leaving the project and new staff coming in.
o Change in organizational management.
o Requirement change or misinterpretation of requirements.
o Under-estimation of required time and resources.
o Technological changes, environmental changes, business
competition.
Risk Management Process
The following activities are involved in the risk management process:
o Identification - Make note of all possible risks which may occur
in the project.
o Categorize - Categorize known risks into high, medium and low
risk intensity as per their possible impact on the project.
o Manage - Analyze the probability of occurrence of risks at
various phases. Make a plan to avoid or face risks. Attempt to
minimize their side-effects.
o Monitor - Closely monitor the potential risks and their early
symptoms. Also monitor the effects of steps taken to mitigate or
avoid them.
Project Execution & Monitoring
In this phase, the tasks described in the project plans are executed
according to their schedules.
Execution needs monitoring in order to check whether everything is
going according to plan. Monitoring means observing to check the
probability of risk and taking measures to address the risk, or
reporting the status of various tasks.
These measures include -
o Activity Monitoring - All activities scheduled within a task can
be monitored on a day-to-day basis. When all activities in a task
are completed, the task is considered complete.
o Status Reports - The reports contain the status of activities and
tasks completed within a given time frame, generally a week.
Status can be marked as finished, pending or work-in-progress,
etc.
o Milestones Checklist - Every project is divided into multiple
phases where major tasks are performed (milestones) based on the
phases of the SDLC. This milestone checklist is prepared once
every few weeks and reports the status of milestones.
Project Communication Management
Effective communication plays a vital role in the success of a
project. It bridges gaps between the client and the organization,
among the team members, as well as with other stakeholders in the
project, such as hardware suppliers.
Communication can be oral or written. The communication management
process may have the following steps:
o Planning - This step includes the identification of all the
stakeholders in the project and the mode of communication among
them. It also considers whether any additional communication
facilities are required.
o Sharing - After determining various aspects of planning, the
manager focuses on sharing correct information with the correct
person at the correct time. This keeps everyone involved in the
project up to date with project progress and status.
o Feedback - Project managers use various measures and feedback
mechanisms and create status and performance reports. This
mechanism ensures that input from various stakeholders comes to
the project manager as their feedback.
o Closure - At the end of each major event, the end of a phase of
the SDLC or the end of the project itself, administrative closure
is formally announced to update every stakeholder by sending an
email, by distributing a hardcopy of the document or by other
means of effective communication.
After closure, the team moves to the next phase or project.
Configuration Management
Configuration management is a process of tracking and controlling the
changes in software in terms of the requirements, design, functions and
development of the product.
IEEE defines it as “the process of identifying and defining the items in the
system, controlling the change of these items throughout their life cycle,
recording and reporting the status of items and change requests, and
verifying the completeness and correctness of items”.
Generally, once the SRS is finalized, there is less chance of
requirement changes from the user. If they do occur, the changes are
addressed only with prior approval of higher management, as there is
a possibility of cost and time overrun.
Baseline
A phase of the SDLC is assumed to be over if it is baselined, i.e., a
baseline is a measurement that defines the completeness of a phase. A
phase is baselined when all activities pertaining to it are finished
and well documented. If it is not the final phase, its output is used
in the next immediate phase.
Configuration management is a discipline of organization
administration which takes care of the occurrence of any change
(process, requirement, technological, strategic, etc.) after a phase
is baselined. CM keeps a check on any changes done to the software.
Change Control
Change control is function of configuration management, which ensures
that all changes made to software system are consistent and made as per
organizational rules and regulations.
A change in the configuration of product goes through following steps -
 Identification - A change request arrives from either an internal
or an external source. When the change request is identified
formally, it is properly documented.
 Validation - The validity of the change request is checked and its
handling procedure is confirmed.
 Analysis - The impact of the change request is analyzed in terms
of schedule, cost and required effort. The overall impact of the
prospective change on the system is analyzed.
 Control - If the prospective change either impacts too many
entities in the system or is unavoidable, approval of higher
authorities is mandatory before the change is incorporated into
the system. It is decided whether or not the change is worth
incorporating; if it is not, the change request is refused formally.
 Execution - If the previous phase determines to execute the
change request, this phase takes the appropriate actions to
execute the change, doing a thorough revision if necessary.
 Close request - The change is verified for correct implementation
and merged with the rest of the system. The newly incorporated
change is documented properly and the request is formally closed.

Project Management Tools


The risk and uncertainty rise manifold with the size of the project, even
when the project is developed according to set methodologies.
There are tools available that aid effective project management. A few are
described below -

Gantt Chart
The Gantt chart was devised by Henry Gantt (1917). It represents a project
schedule with respect to time periods. It is a horizontal bar chart with bars
representing activities and the time scheduled for the project activities.
PERT Chart
A PERT (Program Evaluation & Review Technique) chart is a tool that depicts a
project as a network diagram. It is capable of graphically representing the
main events of a project in both a parallel and a consecutive way. Events
that occur one after another show the dependency of the later event on the
previous one.

Events are shown as numbered nodes, connected by labeled arrows depicting the
sequence of tasks in the project.

Resource Histogram
This is a graphical tool containing a bar chart representing the number of
resources (usually skilled staff) required over time for a project event (or
phase). The resource histogram is an effective tool for staff planning and
coordination.
Critical Path Analysis
This tool is useful in recognizing interdependent tasks in the project. It
also helps to find the critical path - the chain of dependent tasks that
determines the shortest possible time in which the project can be completed.
Like a PERT diagram, each event is allotted a specific time frame. This tool
shows the dependency of events, assuming an event can proceed to the next
only if the previous one is completed.
The events are arranged according to their earliest possible start time. The
path between the start and end node that cannot be further reduced is the
critical path, and all events on it must be executed in the same order.
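Critical path analysis can be sketched on a small hypothetical task network; the critical path is the longest-duration chain of dependent tasks, which fixes the minimum project duration (the tasks and durations below are invented for the example):

```python
# Critical path of a small task network (durations in days).
# Each task maps to (duration, list of predecessor tasks).
tasks = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

earliest = {}                       # memoized earliest finish time per task
def finish(t):
    if t not in earliest:
        dur, preds = tasks[t]
        earliest[t] = dur + max((finish(p) for p in preds), default=0)
    return earliest[t]

project_duration = max(finish(t) for t in tasks)

# Walk back from the latest-finishing task to recover the critical path.
path, cur = [], max(tasks, key=finish)
while cur:
    path.append(cur)
    preds = tasks[cur][1]
    cur = max(preds, key=finish) if preds else None
path.reverse()
print(project_duration, path)       # 8 ['A', 'C', 'D']
```

Here B finishes on day 5 but C on day 7, so D cannot start before day 7; the chain A → C → D is the critical path and the project cannot finish earlier than day 8.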

Software Requirements
Software requirements are descriptions of the features and functionalities of
the target system. Requirements convey the expectations of users from the
software product. The requirements can be obvious or hidden, known or
unknown, expected or unexpected from the client’s point of view.

Requirement Engineering
The process of gathering software requirements from the client, then
analyzing and documenting them, is known as requirement engineering.
The goal of requirement engineering is to develop and maintain a
sophisticated and descriptive ‘System Requirements Specification’
document.

Requirement Engineering Process


It is a four-step process, which includes -

 Feasibility Study
 Requirement Gathering
 Software Requirement Specification
 Software Requirement Validation
Let us see the process briefly -

Feasibility study
When the client approaches the organization to get the desired product
developed, they come up with a rough idea of what functions the software must
perform and which features are expected from it.
Referencing this information, the analysts do a detailed study of whether the
desired system and its functionality are feasible to develop.
This feasibility study is focused on the goals of the organization. It
analyzes whether the software product can be practically materialized in
terms of implementation, contribution of the project to the organization,
cost constraints, and the values and objectives of the organization. It also
explores technical aspects of the project and product, such as usability,
maintainability, productivity and integration ability.
The output of this phase should be a feasibility study report containing
adequate comments and recommendations for management about whether or not
the project should be undertaken.

Requirement Gathering
If the feasibility report is positive towards undertaking the project, the
next phase starts with gathering requirements from the user. Analysts and
engineers communicate with the client and end-users to learn their ideas on
what the software should provide and which features they want it to include.

Software Requirement Specification


The SRS is a document created by the system analyst after the requirements
are collected from various stakeholders.
The SRS defines how the intended software will interact with hardware and
external interfaces, along with the speed of operation, response time of the
system, portability across various platforms, maintainability, speed of
recovery after crashing, security, quality, limitations, etc.
The requirements received from the client are written in natural language. It
is the responsibility of the system analyst to document the requirements in
technical language so that they can be comprehended and used by the software
development team.
The SRS should come up with the following features:

 User requirements are expressed in natural language.
 Technical requirements are expressed in structured language,
which is used inside the organization.
 Design descriptions are written in pseudo code.
 Formats of forms and GUI screen prints are included.
 Conditional and mathematical notations are used for DFDs, etc.
Software Requirement Validation
After the requirement specifications are developed, the requirements
mentioned in the document are validated. The user might ask for an illegal or
impractical solution, or experts may interpret the requirements incorrectly.
This results in a huge increase in cost if not nipped in the bud.
Requirements can be checked against the following conditions -

 Whether they can be practically implemented
 Whether they are valid and as per the functionality and domain of
the software
 Whether there are any ambiguities
 Whether they are complete
 Whether they can be demonstrated

Requirement Elicitation Process


The requirement elicitation process can be depicted using the following
diagram:

 Requirements gathering - The developers discuss with the client
and end users to learn their expectations from the software.
 Organizing requirements - The developers prioritize and arrange
the requirements in order of importance, urgency and convenience.
 Negotiation & discussion - If requirements are ambiguous or
there are conflicts among the requirements of various
stakeholders, they are negotiated and discussed with the
stakeholders for clarity and correctness. Unrealistic requirements
are reasonably compromised and the rest may then be prioritized.
 Documentation - All formal and informal, functional and non-
functional requirements are documented and made available for
processing in the next phase.

Requirement Elicitation Techniques


Requirements elicitation is the process of finding out the requirements for
an intended software system by communicating with the client, end users,
system users and others who have a stake in the software system’s
development.
There are various ways to discover requirements.

Interviews
Interviews are a strong medium for collecting requirements. An organization
may conduct several types of interviews, such as:

 Structured (closed) interviews, where every single piece of
information to gather is decided in advance; they follow a pattern
and the matter of discussion firmly.
 Non-structured (open) interviews, where the information to gather
is not decided in advance; they are more flexible and less biased.
 Oral interviews
 Written interviews
 One-to-one interviews, which are held between two persons across
the table.
 Group interviews, which are held among groups of participants.
They help to uncover any missing requirements, as numerous people
are involved.
Surveys
An organization may conduct surveys among various stakeholders, querying them
about their expectations of and requirements for the upcoming system.

Questionnaires
A document with a pre-defined set of objective questions and respective
options is handed over to all stakeholders to answer; the answers are then
collected and compiled.
A shortcoming of this technique is that if an option for some issue is not
mentioned in the questionnaire, the issue might be left unattended.

Task analysis
A team of engineers and developers may analyze the operation for which the
new system is required. If the client already has some software to perform a
certain operation, it is studied and the requirements of the proposed system
are collected.

Domain Analysis
Every software product falls into some domain category. Experts in that
domain can be a great help in analyzing general and specific requirements.

Brainstorming
An informal debate is held among various stakeholders and all their inputs
are recorded for further requirements analysis.

Prototyping
Prototyping is building a user interface without adding detailed
functionality, so that the user can interpret the features of the intended
software product. It helps give a better idea of the requirements. If there
is no software installed at the client’s end for the developer’s reference,
and the client is not aware of its own requirements, the developer creates a
prototype based on the initially mentioned requirements. The prototype is
shown to the client and the feedback is noted. The client feedback then
serves as an input for requirement gathering.

Observation
A team of experts visits the client’s organization or workplace. They observe
the actual working of the existing installed systems, the workflow at the
client’s end, and how execution problems are dealt with. The team then draws
conclusions that aid in forming the requirements expected from the software.

Software Requirements Characteristics


Gathering software requirements is the foundation of the entire software
development project. Hence, they must be clear, correct and well-defined.
A complete Software Requirement Specification must be:

 Clear
 Correct
 Consistent
 Coherent
 Comprehensible
 Modifiable
 Verifiable
 Prioritized
 Unambiguous
 Traceable
 Credible source

Software Requirements
We should try to understand what sort of requirements may arise in the
requirement elicitation phase and what kinds of requirements are expected
from the software system.
Broadly, software requirements fall into two categories:

Functional Requirements
Requirements that are related to the functional aspect of the software fall
into this category.
They define functions and functionality within and from the software system.
Examples -
 A search option given to the user to search through various invoices.
 The user should be able to mail any report to management.
 Users can be divided into groups, and groups can be given
separate rights.
 The software should comply with business rules and administrative
functions.
 The software is developed keeping downward compatibility intact.
Non-Functional Requirements
Requirements that are not related to the functional aspect of the software
fall into this category. They are the implicit or expected characteristics of
software, which users take for granted.
Non-functional requirements include -

 Security
 Logging
 Storage
 Configuration
 Performance
 Cost
 Interoperability
 Flexibility
 Disaster recovery
 Accessibility
Requirements are categorized logically as:

 Must have - The software cannot be said to be operational without
them.
 Should have - These enhance the functionality of the software.
 Could have - The software can still function properly without
these requirements.
 Wish list - These requirements do not map to any objectives of
the software.
While developing software, ‘must have’ requirements must be implemented,
‘should have’ requirements are a matter of debate with stakeholders and
negotiation, whereas ‘could have’ and ‘wish list’ requirements can be kept
for software updates.
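This prioritization can be illustrated with a tiny sketch; the requirement names and category labels below are invented for the example:

```python
# Toy sketch of priority-based requirement grouping.
# Requirement names and labels are illustrative, not from any real project.
requirements = [
    ("User login", "must"),
    ("Export reports to PDF", "should"),
    ("Dark-mode theme", "could"),
    ("Voice control", "wish"),
]

def for_first_release(reqs):
    """The software is not operational without its 'must have' requirements."""
    return [name for name, prio in reqs if prio == "must"]

print(for_first_release(requirements))   # ['User login']
```

Everything outside the "must" bucket becomes material for negotiation with stakeholders or for later updates, as described above.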

User Interface requirements


The UI is an important part of any software, hardware or hybrid system.
Software is widely accepted if it is -

 easy to operate
 quick in response
 effective in handling operational errors
 providing a simple yet consistent user interface
User acceptance majorly depends upon how the user can use the software. The
UI is the only way for users to perceive the system. A well-performing
software system must also be equipped with an attractive, clear, consistent
and responsive user interface; otherwise the functionality of the software
system cannot be used in a convenient way. A system is said to be good if it
provides means to use it efficiently. User interface requirements are briefly
mentioned below -

 Content presentation
 Easy Navigation
 Simple interface
 Responsive
 Consistent UI elements
 Feedback mechanism
 Default settings
 Purposeful layout
 Strategic use of color and texture
 Provide help information
 User centric approach
 Group based view settings.

Software System Analyst


A system analyst in an IT organization is a person who analyzes the
requirements of the proposed system and ensures that the requirements are
conceived and documented properly and correctly. The role of the analyst
starts during the Software Analysis phase of the SDLC. It is the
responsibility of the analyst to make sure that the developed software meets
the requirements of the client.
System analysts have the following responsibilities:

 Analyzing and understanding the requirements of the intended
software
 Understanding how the project will contribute to the
organization’s objectives
 Identifying the sources of requirements
 Validating the requirements
 Developing and implementing a requirement management plan
 Documenting business, technical, process and product
requirements
 Coordinating with clients to prioritize requirements and remove
ambiguity
 Finalizing acceptance criteria with the client and other
stakeholders

Software Metrics and Measures


Software measures can be understood as a process of quantifying and
symbolizing various attributes and aspects of software.
Software metrics provide measures for various aspects of the software
process and the software product.
Software measures are a fundamental requirement of software engineering.
They not only help to control the software development process but also help
to keep the quality of the ultimate product excellent.
According to the software engineer Tom DeMarco, “You cannot control what you
cannot measure.” This saying makes it very clear how important software
measures are.
Let us see some software metrics:
 Size Metrics - LOC (Lines of Code), mostly calculated in
thousands of delivered source code lines and denoted as KLOC.
Function point count is a measure of the functionality provided
by the software; it defines the size of the functional aspect of
the software.
 Complexity Metrics - McCabe’s cyclomatic complexity quantifies
the upper bound of the number of independent paths in a program,
which is perceived as the complexity of the program or its
modules. It is represented in terms of graph-theory concepts,
using the control flow graph.
 Quality Metrics - Defects, their types and causes, consequences,
intensity of severity and their implications define the quality
of the product. The number of defects found in the development
process and the number of defects reported by the client after
the product is installed or delivered at the client’s end define
the quality of the product.
 Process Metrics - In the various phases of the SDLC, the methods
and tools used, the company standards and the performance of
development are software process metrics.
 Resource Metrics - The effort, time and various resources used
represent metrics for resource measurement.
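For the complexity metric above, McCabe's cyclomatic complexity of a control flow graph with E edges, N nodes and P connected components is V(G) = E - N + 2P. A small sketch (the example graph, for a single if/else, is hypothetical):

```python
# Cyclomatic complexity V(G) = E - N + 2P of a control flow graph.
def cyclomatic_complexity(edges, num_nodes, components=1):
    return len(edges) - num_nodes + 2 * components

# Control flow graph of a single if/else:
#   entry -> cond -> (then | else) -> exit
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(edges, num_nodes=5))   # 2: two independent paths
```

Here E = 5, N = 5 and P = 1, giving V(G) = 2, matching the two independent paths through the if/else.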
Software Design Basics
Software design is a process that transforms user requirements into some
suitable form, which helps the programmer in software coding and
implementation.
For assessing user requirements, an SRS (Software Requirement Specification)
document is created, whereas for coding and implementation there is a need
for more specific and detailed requirements in software terms. The output of
this process can be directly used in implementation in programming languages.
Software design is the first step in the SDLC (Software Development Life
Cycle) that moves the concentration from the problem domain to the solution
domain. It tries to specify how to fulfill the requirements mentioned in the
SRS.

Software Design Levels


Software design yields three levels of results:

 Architectural Design - The architectural design is the highest
abstract version of the system. It identifies the software as a
system with many components interacting with each other. At this
level, the designers get an idea of the proposed solution domain.
 High-level Design- The high-level design breaks the ‘single
entity-multiple component’ concept of architectural design into
less-abstracted view of sub-systems and modules and depicts
their interaction with each other. High-level design focuses on
how the system along with all of its components can be
implemented in forms of modules. It recognizes modular
structure of each sub-system and their relation and interaction
among each other.
 Detailed Design- Detailed design deals with the
implementation part of what is seen as a system and its sub-
systems in the previous two designs. It is more detailed
towards modules and their implementations. It defines logical
structure of each module and their interfaces to communicate
with other modules.

Modularization
Modularization is a technique to divide a software system into multiple
discrete and independent modules, each expected to be capable of carrying out
its task(s) independently. These modules may work as basic constructs for the
entire software. Designers tend to design modules such that they can be
executed and/or compiled separately and independently.
Modular design naturally follows the ‘divide and conquer’ problem-solving
strategy, and there are many other benefits attached to the modular design
of software as well.
Advantages of modularization:

 Smaller components are easier to maintain
 The program can be divided based on functional aspects
 The desired level of abstraction can be brought into the program
 Components with high cohesion can be re-used
 Concurrent execution can be made possible
 It is desired from the security aspect

Concurrency
Back in time, all software was meant to be executed sequentially. By
sequential execution we mean that the coded instructions are executed one
after another, implying that only one portion of the program is active at
any given time. If a software product has multiple modules, then only one of
all the modules is active at any time of execution.
In software design, concurrency is implemented by splitting the software
into multiple independent units of execution, like modules, and executing
them in parallel. In other words, concurrency provides the capability to the
software to execute more than one part of the code in parallel.
It is necessary for programmers and designers to recognize those modules
which can be executed in parallel.

Example
The spell-check feature in a word processor is a module of the software which
runs alongside the word processor itself.
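The spell-check example can be simulated with two concurrent units of execution. The sketch below is a toy simulation (the word list and queue protocol are invented), not a real word processor:

```python
# Toy simulation of concurrency: a background "spell check" thread runs
# alongside the main "editing" work, communicating through a queue.
import threading
import queue

typed_words = queue.Queue()
misspelled = []
DICTIONARY = {"software", "design", "is", "fun"}   # toy dictionary

def spell_checker():
    while True:
        word = typed_words.get()
        if word is None:               # sentinel: the editor has closed
            break
        if word not in DICTIONARY:
            misspelled.append(word)

checker = threading.Thread(target=spell_checker)
checker.start()
for word in ["software", "desgin", "is", "fun"]:   # main module keeps "typing"
    typed_words.put(word)
typed_words.put(None)
checker.join()
print(misspelled)                      # ['desgin']
```

The editing loop and the checker make progress independently; the queue is the only point of interaction between the two modules.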

Coupling and Cohesion


When a software program is modularized, its tasks are divided into several
modules based on some characteristics. As we know, modules are sets of
instructions put together in order to achieve some task. Though each module
is considered a single entity, modules may refer to each other in order to
work together. There are measures by which the quality of the design of
modules and the interaction among them can be judged. These measures are
called coupling and cohesion.

Cohesion
Cohesion is a measure that defines the degree of intra-dependability within
the elements of a module. The greater the cohesion, the better the program
design.
There are seven types of cohesion, namely –

 Co-incidental cohesion - It is unplanned and random cohesion,
which might be the result of breaking the program into smaller
modules for the sake of modularization. Because it is unplanned,
it may cause confusion for the programmers and is generally not
accepted.
 Logical cohesion - When logically categorized elements are
put together into a module, it is called logical cohesion.
 Temporal Cohesion - When elements of module are
organized such that they are processed at a similar point in
time, it is called temporal cohesion.
 Procedural cohesion - When elements of module are
grouped together, which are executed sequentially in order to
perform a task, it is called procedural cohesion.
 Communicational cohesion - When elements of module are
grouped together, which are executed sequentially and work
on same data (information), it is called communicational
cohesion.
 Sequential cohesion - When elements of module are
grouped because the output of one element serves as input to
another and so on, it is called sequential cohesion.
 Functional cohesion - It is considered to be the highest
degree of cohesion, and it is highly expected. Elements of
module in functional cohesion are grouped because they all
contribute to a single well-defined function. It can also be
reused.

Coupling
Coupling is a measure that defines the level of inter-dependability among
modules of a program. It tells at what level the modules interfere and
interact with each other. The lower the coupling, the better the program.
There are five levels of coupling, namely -

 Content coupling - When a module can directly access, modify or
refer to the content of another module, it is called content-level
coupling.
 Common coupling- When multiple modules have read and
write access to some global data, it is called common or global
coupling.
 Control coupling- Two modules are called control-coupled if
one of them decides the function of the other module or
changes its flow of execution.
 Stamp coupling- When multiple modules share common data
structure and work on different part of it, it is called stamp
coupling.
 Data coupling- Data coupling is when two modules interact
with each other by means of passing data (as parameter). If a
module passes data structure as parameter, then the receiving
module should use all its components.
Ideally, no coupling is considered to be the best.
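The contrast between common (global) coupling and data coupling can be shown in a few lines; the functions below are toy examples of the two styles:

```python
# Toy contrast between common coupling and data coupling.

# Common coupling: both callers share mutable global state.
total = 0
def add_item_global(price):
    global total
    total += price          # hidden dependency on the shared global

# Data coupling: modules interact only through parameters and return values.
def add_item(total, price):
    return total + price    # all communication is explicit

t = add_item(0, 10)
t = add_item(t, 5)
print(t)                    # 15
```

The data-coupled version is easier to test and reuse because every dependency is visible in the function signature, which is why data coupling is preferred over common coupling.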

Design Verification
The output of the software design process is design documentation, pseudo
code, detailed logic diagrams, process diagrams, and detailed descriptions
of all functional and non-functional requirements.
The next phase, the implementation of the software, depends on all the
outputs mentioned above.
It then becomes necessary to verify the output before proceeding to the next
phase; the earlier a mistake is detected the better, as otherwise it might
not be detected until testing of the product. If the outputs of the design
phase are in formal notation, the associated tools for verification should
be used; otherwise a thorough design review can be used for verification and
validation.
With a structured verification approach, reviewers can detect defects that
might be caused by overlooking some conditions. A good design review is
important for good software design, accuracy and quality.

Software Analysis & Design Tools


Software analysis and design includes all activities which help the
transformation of the requirement specification into implementation.
Requirement specifications specify all functional and non-functional
expectations from the software. These specifications come in the shape of
human-readable and understandable documents, with which a computer has
nothing to do.
Software analysis and design is the intermediate stage, which helps
human-readable requirements to be transformed into actual code.
Let us see a few analysis and design tools used by software designers:

Data Flow Diagram


A data flow diagram is a graphical representation of the flow of data in an
information system. It is capable of depicting incoming data flow, outgoing
data flow and stored data. The DFD does not describe how the data is
processed as it flows through the system.
There is a prominent difference between a DFD and a flowchart. The flowchart
depicts the flow of control in program modules, whereas DFDs depict the flow
of data in the system at various levels. A DFD does not contain any control
or branch elements.

Types of DFD
Data Flow Diagrams are either Logical or Physical.

 Logical DFD - This type of DFD concentrates on the system process
and the flow of data in the system. For example, in a banking
software system, it shows how data moves between different
entities.
 Physical DFD - This type of DFD shows how the data flow is
actually implemented in the system. It is more specific and
closer to the implementation.
DFD Components
A DFD can represent the source, destination, storage and flow of data using
the following set of components -

 Entities - Entities are the source and destination of information
data. Entities are represented by rectangles with their respective
names.
 Process - Activities and actions taken on the data are
represented by circles or round-edged rectangles.
 Data Storage - There are two variants of data storage: it can
either be represented as a rectangle with both smaller sides
absent, or as an open-sided rectangle with only one side missing.
 Data Flow - The movement of data is shown by pointed arrows. Data
movement is shown from the base of the arrow, as its source,
towards the head of the arrow, as its destination.

Levels of DFD
 Level 0 - Highest abstraction level DFD is known as Level 0
DFD, which depicts the entire information system as one
diagram concealing all the underlying details. Level 0 DFDs are
also known as context level DFDs.
 Level 1 - The Level 0 DFD is broken down into a more specific
Level 1 DFD. The Level 1 DFD depicts basic modules in the system
and the flow of data among the various modules. It also mentions
basic processes and sources of information.

 Level 2 - At this level, DFD shows how data flows inside the
modules mentioned in Level 1.
Higher-level DFDs can be transformed into more specific lower-level DFDs,
with a deeper level of understanding, until the desired level of
specification is achieved.

Structure Charts
A structure chart is a chart derived from the data flow diagram. It
represents the system in more detail than the DFD. It breaks down the entire
system into the lowest functional modules and describes the functions and
sub-functions of each module of the system in greater detail than the DFD.
A structure chart represents the hierarchical structure of modules. At each
layer a specific task is performed.
Here are the symbols used in construction of structure charts -

 Module - It represents a process, subroutine or task. A control
module branches to more than one sub-module. Library modules are
re-usable and can be invoked from any module.
 Condition - It is represented by a small diamond at the base of a
module. It depicts that the control module can select any of the
sub-routines based on some condition.
 Jump - An arrow is shown pointing inside a module to depict that
the control will jump into the middle of the sub-module.
 Loop - A curved arrow represents a loop in the module. All
sub-modules covered by the loop repeat execution.
 Data flow - A directed arrow with an empty circle at the end
represents data flow.
 Control flow - A directed arrow with a filled circle at the end
represents control flow.

HIPO Diagram
A HIPO (Hierarchical Input Process Output) diagram is a combination of two
organized methods to analyze the system and provide the means of
documentation. The HIPO model was developed by IBM in 1970.
A HIPO diagram represents the hierarchy of modules in the software system.
Analysts use HIPO diagrams to obtain a high-level view of system functions.
It decomposes functions into sub-functions in a hierarchical manner and
depicts the functions performed by the system.
HIPO diagrams are good for documentation purposes. Their graphical
representation makes it easier for designers and managers to get a pictorial
idea of the system structure.
In contrast to an IPO (Input Process Output) diagram, which depicts the flow
of control and data in a module, HIPO does not provide any information about
data flow or control flow.

Example
Both parts of the HIPO diagram, the hierarchical presentation and the IPO
chart, are used for the structured design of the software program as well as
its documentation.

Structured English
Most programmers are unaware of the large picture of the software, so they
rely only on what their managers tell them to do. It is the responsibility
of higher software management to provide accurate information to the
programmers so that they can develop accurate yet fast code.
Other methods, which use graphs or diagrams, are sometimes interpreted
differently by different people. Hence, analysts and designers of the
software came up with tools such as Structured English. Both Structured
English and pseudo code try to mitigate that understanding gap.
Structured English uses plain English words within the structured
programming paradigm. It is not the ultimate code, but a description of what
is required to code and how to code it, and it helps the programmer write
error-free code. The following are some tokens of structured programming:
IF-THEN-ELSE,
DO-WHILE-UNTIL
The analyst uses the same variable and data names that are stored in the
data dictionary, making it much simpler to write and understand the code.

Example
We take the same example of Customer Authentication in the online
shopping environment. This procedure to authenticate customer can be
written in Structured English as:
Enter Customer_Name
SEEK Customer_Name in Customer_Name_DB file
IF Customer_Name found THEN
Call procedure USER_PASSWORD_AUTHENTICATE()
ELSE
PRINT error message
Call procedure NEW_CUSTOMER_REQUEST()
ENDIF
The code written in Structured English is more like day-to-day spoken
English. It cannot be implemented directly as software code. Structured
English is independent of any programming language.

Pseudo-Code
Pseudo code is written closer to the programming language. It may be
considered an augmented programming language, full of comments and
descriptions.
Pseudo code avoids variable declarations, but it is written using the
constructs of some actual programming language, like C, Fortran, Pascal,
etc.
Pseudo code contains more programming details than Structured English. It
provides a method to perform the task, as if a computer were executing the
code.

Example
Program to print the Fibonacci series up to n numbers.

void function Fibonacci
    Get value of n;
    Set value of a to 0;
    Set value of b to 1;
    Initialize i to 0
    for (i = 0; i < n; i++)
    {
        Print a;
        Set temp to a + b;
        Set a to b;
        Set b to temp;
    }
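A runnable Python version of a Fibonacci printer, in the spirit of the pseudo code above (the function name is our own choice):

```python
# Runnable counterpart of the Fibonacci pseudo code: collect the first
# n Fibonacci numbers starting from 0.
def fibonacci(n):
    a, b = 0, 1
    result = []
    for _ in range(n):
        result.append(a)    # "Print a" in the pseudo code
        a, b = b, a + b     # advance the pair, as temp/a/b do above
    return result

print(fibonacci(8))         # [0, 1, 1, 2, 3, 5, 8, 13]
```

Unlike the pseudo code, this version can be executed and tested directly, which is exactly the gap pseudo code is meant to bridge.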

Decision Tables
A decision table represents conditions and the respective actions to be
taken to address them, in a structured tabular format.
It is a powerful tool to debug and prevent errors. It helps group similar
information into a single table, and then, by combining tables, it delivers
easy and convenient decision-making.

Creating Decision Table


To create a decision table, the developer must follow four basic steps:

 Identify all possible conditions to be addressed
 Determine actions for all identified conditions
 Create the maximum possible number of rules
 Define the action for each rule
Decision tables should be verified by end-users and can later be simplified
by eliminating duplicate rules and actions.

Example
Let us take a simple example of a day-to-day problem with our Internet
connectivity. We begin by identifying all problems that can arise while
starting the Internet and their respective possible solutions.
We list all possible problems under the column Conditions and the
prospective actions under the column Actions.

Conditions/Actions            Rules
Conditions:
  Shows Connected             N N N N Y Y Y Y
  Ping is Working             N N Y Y N N Y Y
  Opens Website               Y N Y N Y N Y N
Actions:
  Check network cable         X
  Check internet router       X X X X
  Restart Web Browser         X
  Contact Service provider    X X X X X X
  Do no action

Table: Decision Table - In-house Internet Troubleshooting
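A decision table translates naturally into a lookup keyed by condition values. In the sketch below, the condition-to-action assignments are illustrative only, since they depend on how the rules in the table above are filled in:

```python
# Sketch: a decision table as a dictionary keyed by condition tuples.
# Keys are (shows_connected, ping_working, opens_website).
# The action assignments are illustrative, not taken verbatim from the table.
decision_table = {
    (False, False, False): ["Check network cable", "Contact service provider"],
    (True,  False, False): ["Check internet router", "Contact service provider"],
    (True,  True,  False): ["Restart web browser"],
    (True,  True,  True):  ["Do no action"],
}

def troubleshoot(connected, ping, website):
    # Unlisted combinations fall through to a safe default action.
    return decision_table.get((connected, ping, website),
                              ["Contact service provider"])

print(troubleshoot(True, True, True))   # ['Do no action']
```

Because every rule is a key, adding or simplifying rules is a data change rather than a code change, which is the debugging advantage the text describes.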

Entity-Relationship Model
The entity-relationship model is a type of database model based on the
notion of real-world entities and the relationships among them. We can map a
real-world scenario onto the ER database model. The ER model creates a set
of entities with their attributes, a set of constraints and the relations
among them.
The ER model is best used for the conceptual design of a database. It can be
represented as follows:

 Entity - An entity in ER Model is a real world being, which has


some properties called attributes. Every attribute is defined
by its corresponding set of values, called domain.
For example, Consider a school database. Here, a student is an
entity. Student has various attributes like name, id, age and
class etc.
 Relationship - The logical association among entities is
called relationship. Relationships are mapped with entities in
various ways. Mapping cardinalities define the number of
associations between two entities.
Mapping cardinalities:
o one to one
o one to many
o many to one
o many to many
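The school-database example can be sketched in code. The class and attribute names below are illustrative, not part of any standard; the `students` list models a one-to-many relationship:

```python
from dataclasses import dataclass, field

@dataclass
class Student:                 # an entity with its attributes
    id: int                    # each attribute draws values from a domain
    name: str
    age: int

@dataclass
class SchoolClass:             # a second entity
    title: str
    students: list = field(default_factory=list)   # one-to-many relationship

grade6 = SchoolClass("Grade 6")
grade6.students.append(Student(1, "Asha", 11))
grade6.students.append(Student(2, "Ravi", 12))
```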

Data Dictionary
A data dictionary is the centralized collection of information about data.
It stores the meaning and origin of data, its relationship with other
data, its format for usage, etc. A data dictionary has rigorous
definitions of all names in order to facilitate users and software
designers.
Data dictionary is often referenced as meta-data (data about data)
repository. It is created along with DFD (Data Flow Diagram) model of
software program and is expected to be updated whenever DFD is
changed or updated.
Requirement of Data Dictionary
The data is referenced via the data dictionary while designing and
implementing software. The data dictionary removes any chance of
ambiguity. It helps keep the work of programmers and designers
synchronized by using the same object reference everywhere in the
program.
The data dictionary provides a way of documenting the complete database
system in one place. Validation of the DFD is carried out using the data
dictionary.

Contents
Data dictionary should contain information about the following

 Data Flow
 Data Structure
 Data Elements
 Data Stores
 Data Processing
Data flow is described by means of DFDs, as studied earlier, and
represented in algebraic form using the following notation:

=     Composed of
{}    Repetition
()    Optional
+     And
[/]   Or

Example
Address = House No + (Street / Area) + City + State
Course ID = Course Number + Course Name + Course Level + Course
Grades
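As a rough illustration of the notation, the Address definition above can be checked in code: parts joined by '+' are mandatory, while the parenthesized (Street / Area) part is optional and '/' offers a choice between the two. The field names in this Python sketch are hypothetical:

```python
def is_valid_address(record):
    """Check a record against: Address = House No + (Street / Area) + City + State."""
    required = {"house_no", "city", "state"}   # joined by '+', so mandatory
    # (street / area): optional as a whole; '/' means either one may appear
    return required <= set(record)

print(is_valid_address({"house_no": "12", "city": "Pune", "state": "MH"}))
print(is_valid_address({"house_no": "12", "street": "MG Road"}))
```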

Data Elements
Data elements consist of Name and descriptions of Data and Control
Items, Internal or External data stores etc. with the following details:

 Primary Name
 Secondary Name (Alias)
 Use-case (How and where to use)
 Content Description (Notation etc. )
 Supplementary Information (preset values, constraints etc.)
Data Store
It stores the information about where data enters the system and
exits out of the system. The Data Store may include -

 Files
o Internal to software.
o External to software but on the same machine.
o External to software and system, located on
different machine.
 Tables
o Naming convention
o Indexing property
Data Processing
There are two types of Data Processing:

• Logical: as the user sees it
• Physical: as the software sees it
Software Design Strategies
Software design is a process to conceptualize the software requirements
into a software implementation. Software design takes the user
requirements as challenges and tries to find an optimum solution. While
the software is being conceptualized, a plan is chalked out to find the
best possible design for implementing the intended solution.
There are multiple variants of software design. Let us study them briefly:

Structured Design
Structured design is a conceptualization of a problem into several well-
organized elements of solution. It is basically concerned with the
solution design. The benefit of structured design is that it gives a
better understanding of how the problem is being solved. Structured
design also makes it simpler for the designer to concentrate on the
problem more accurately.
Structured design is mostly based on the ‘divide and conquer’ strategy,
where a problem is broken into several small problems and each small
problem is individually solved until the whole problem is solved.
The small pieces of the problem are solved by means of solution modules.
Structured design emphasizes that these modules be well organized in
order to achieve a precise solution.
These modules are arranged in hierarchy. They communicate with each
other. A good structured design always follows some rules for
communication among multiple modules, namely -
Cohesion - grouping of all functionally related elements.
Coupling - communication between different modules.
A good structured design has high cohesion and low coupling
arrangements.

Function Oriented Design


In function-oriented design, the system consists of many smaller sub-
systems known as functions. These functions are capable of performing
significant tasks in the system. The system is considered as a top view
of all functions.
Function-oriented design inherits some properties of structured design,
where the divide and conquer methodology is used.
This design mechanism divides the whole system into smaller functions,
which provides means of abstraction by concealing the information and
its operation. These functional modules can share information among
themselves by means of information passing and by using information
available globally.
Another characteristic of functions is that when a program calls a
function, the function changes the state of the program, which sometimes
is not acceptable to other modules. Function-oriented design works well
where the system state does not matter and programs/functions work on
input rather than on a state.

Design Process

• The whole system is seen in terms of how data flows in the system by
means of a data flow diagram.
• The DFD depicts how functions change the data and state of the entire
system.
• The entire system is logically broken down into smaller units known
as functions on the basis of their operation in the system.
• Each function is then described in detail.

Object Oriented Design


Object-oriented design works around the entities and their
characteristics instead of the functions involved in the software system.
This design strategy focuses on entities and their characteristics. The
whole concept of the software solution revolves around the engaged
entities.
Let us see the important concepts of Object-Oriented Design:

• Objects - All entities involved in the solution design are
known as objects. For example, persons, banks, companies, and
customers are treated as objects. Every entity has some
attributes associated with it and has some methods to perform
on the attributes.
• Classes - A class is a generalized description of an object. An
object is an instance of a class. A class defines all the attributes
an object can have, and the methods that define the
functionality of the object.
In the solution design, attributes are stored as variables and
functionalities are defined by means of methods or
procedures.
• Encapsulation - In OOD, bundling the attributes (data variables)
and methods (operations on the data) together is called
encapsulation. Encapsulation not only bundles important
information of an object together, but also restricts access to
the data and methods from the outside world. This is called
information hiding.
• Inheritance - OOD allows similar classes to stack up in a
hierarchical manner, where the lower or sub-classes can
import, implement, and re-use allowed variables and methods
from their immediate super-classes. This property of OOD is
known as inheritance. This makes it easier to define a specific
class and to create generalized classes from specific ones.
• Polymorphism - OOD languages provide a mechanism where
methods performing similar tasks but varying in arguments can
be assigned the same name. This is called polymorphism, which
allows a single interface to perform tasks for different types.
Depending upon how the function is invoked, the respective
portion of the code gets executed.
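The four concepts above can be seen together in a few lines of code. The Python sketch below uses hypothetical bank-account classes for illustration:

```python
class Account:                         # a class: generalized description
    def __init__(self, owner, balance):
        self.owner = owner
        self._balance = balance        # encapsulation: data kept internal

    def deposit(self, amount):         # a method operating on the attributes
        self._balance += amount

    def describe(self):
        return f"{self.owner}: {self._balance}"

class SavingsAccount(Account):         # inheritance: re-uses Account's members
    def describe(self):                # polymorphism: same name, new behavior
        return "savings " + super().describe()

acct = SavingsAccount("Ravi", 100)     # an object: instance of a class
acct.deposit(50)
print(acct.describe())
```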
Design Process
The software design process can be perceived as a series of well-defined
steps. Though it varies according to the design approach (function
oriented or object oriented), it may have the following steps involved:

• A solution design is created from the requirements or a previously
used system and/or a system sequence diagram.
• Objects are identified and grouped into classes on the basis of
similarity in their attribute characteristics.
• The class hierarchy and the relations among the classes are defined.
• The application framework is defined.

Software Design Approaches


Here are two generic approaches for software designing:

Top Down Design


We know that a system is composed of more than one sub-system, and it
contains a number of components. Further, these sub-systems and
components may have their own sets of sub-systems and components, which
creates a hierarchical structure in the system.
Top-down design takes the whole software system as one entity and then
decomposes it to achieve more than one sub-system or component based
on some characteristics. Each sub-system or component is then treated as
a system and decomposed further. This process keeps on running until
the lowest level of system in the top-down hierarchy is achieved.
Top-down design starts with a generalized model of system and keeps on
defining the more specific part of it. When all components are composed
the whole system comes into existence.
Top-down design is more suitable when the software solution needs to be
designed from scratch and specific details are unknown.

Bottom-up Design
The bottom-up design model starts with the most specific and basic
components. It proceeds by composing higher levels of components using
basic or lower-level components. It keeps creating higher-level
components until the desired system is evolved as one single component.
With each higher level, the amount of abstraction increases.
Bottom-up strategy is more suitable when a system needs to be created
from some existing system, where the basic primitives can be used in the
newer system.
Both, top-down and bottom-up approaches are not practical individually.
Instead, a good combination of both is used.

Software User Interface Design


The user interface is the front-end application view with which the user
interacts in order to use the software. The user can manipulate and
control the software as well as hardware by means of the user interface.
Today, user interfaces are found at almost every place where digital
technology exists: computers, mobile phones, cars, music players,
airplanes, ships, etc.
The user interface is part of the software and is designed in such a way
that it is expected to provide the user insight into the software. The UI
provides the fundamental platform for human-computer interaction.
UI can be graphical, text-based, audio-video based, depending upon the
underlying hardware and software combination. UI can be hardware or
software or a combination of both.
The software becomes more popular if its user interface is:
 Attractive
 Simple to use
 Responsive in short time
 Clear to understand
 Consistent on all interfacing screens
UI is broadly divided into two categories:

 Command Line Interface


 Graphical User Interface

Command Line Interface (CLI)


The CLI was a great tool of interaction with computers before video
display monitors came into existence. The CLI is the first choice of many
technical users and programmers. The CLI is the minimum interface a
software can provide to its users.
The CLI provides a command prompt, where the user types a command and
feeds it to the system. The user needs to remember the syntax of each
command and its use. Earlier, CLIs were not programmed to handle user
errors effectively.
A command is a text-based reference to a set of instructions that is
expected to be executed by the system. There are methods like macros and
scripts that make it easy for the user to operate.
The CLI uses fewer computer resources as compared to the GUI.

CLI Elements

A text-based command line interface can have the following elements:

• Command Prompt - It is a text-based notifier that mostly shows
the context in which the user is working. It is generated by the
software system.
• Cursor - It is a small horizontal line, or a vertical bar of the
height of a line, that represents the position of the character
while typing. The cursor is mostly found in a blinking state. It
moves as the user writes or deletes something.
• Command - A command is an executable instruction. It may
have one or more parameters. The output of command execution
is shown inline on the screen. When output is produced, the
command prompt is displayed on the next line.

Graphical User Interface


A Graphical User Interface provides the user graphical means to interact
with the system. A GUI can be a combination of both hardware and software.
Using a GUI, the user interprets and interacts with the software.
Typically, a GUI is more resource-consuming than a CLI. With advancing
technology, programmers and designers create complex GUI designs that
work with more efficiency, accuracy, and speed.

GUI Elements
A GUI provides a set of components to interact with software or hardware.
Every graphical component provides a way to work with the system. A GUI
system has elements such as:

• Window - An area where the contents of an application are
displayed. Contents in a window can be displayed in the form
of icons or lists, if the window represents a file structure. It is
easier for a user to navigate in the file system in an exploring
window. Windows can be minimized, resized, or maximized to
the size of the screen. They can be moved anywhere on the
screen. A window may contain another window of the same
application, called a child window.
• Tabs - If an application allows executing multiple instances of
itself, they appear on the screen as separate windows. The
Tabbed Document Interface has come up to open multiple
documents in the same window. This interface also helps in
viewing the preference panel in an application. All modern
web browsers use this feature.
• Menu - A menu is an array of standard commands, grouped
together and placed at a visible place (usually the top) inside
the application window. The menu can be programmed to
appear or hide on mouse clicks.
• Icon - An icon is a small picture representing an associated
application. When these icons are clicked or double-clicked,
the application window is opened. Icons display the applications
and programs installed on a system in the form of small
pictures.
• Cursor - Interacting devices such as the mouse, touch pad, and
digital pen are represented in the GUI as cursors. The on-screen
cursor follows the instructions from the hardware in almost
real-time. Cursors are also named pointers in GUI systems. They
are used to select menus, windows, and other application features.
Application specific GUI components
A GUI of an application contains one or more of the listed GUI elements:

• Application Window - Most application windows use the
constructs supplied by operating systems, but many use their
own custom-created windows to contain the contents of the
application.
• Dialogue Box - It is a child window that contains a message for
the user and requests some action to be taken. For example, an
application generates a dialogue box to get confirmation from
the user before deleting a file.
• Text-Box - Provides an area for the user to type and enter text-
based data.
• Buttons - They imitate real-life buttons and are used to
submit inputs to the software.
• Radio-button - Displays the available options for selection. Only
one can be selected among all those offered.
• Check-box - Functions similarly to a list-box. When an option is
selected, the box is marked as checked. Multiple options
represented by check-boxes can be selected.
• List-box - Provides a list of available items for selection. More
than one item can be selected.

Other impressive GUI components are:

 Sliders
 Combo-box
 Data-grid
 Drop-down list

User Interface Design Activities


There are a number of activities performed for designing a user
interface. The process of GUI design and implementation is similar to the
SDLC. Any model among Waterfall, Iterative, or Spiral can be used for GUI
implementation.
A model used for GUI design and development should fulfill these GUI-
specific steps:

• GUI Requirement Gathering - The designers may like to have
a list of all functional and non-functional requirements of the
GUI. This can be taken from the users and their existing software
solution.
• User Analysis - The designer studies who is going to use the
software GUI. The target audience matters, as the design
details change according to the knowledge and competency
level of the user. If the user is tech-savvy, an advanced and
complex GUI can be incorporated. For a novice user, more
how-to information about the software is included.
• Task Analysis - Designers have to analyze what tasks are to be
done by the software solution. Here in GUI, it does not matter
how they will be done. Tasks can be represented in a hierarchical
manner, taking one major task and dividing it further into
smaller sub-tasks. Tasks provide goals for the GUI presentation.
The flow of information among sub-tasks determines the flow of
GUI contents in the software.
• GUI Design & Implementation - Designers, after having
information about the requirements, tasks, and user environment,
design the GUI, implement it into code, and embed the GUI
with working or dummy software in the background. It is then
self-tested by the developers.
• Testing - GUI testing can be done in various ways. In-house
inspection, direct involvement of users, and release of a beta
version are a few of them. Testing may include usability,
compatibility, user acceptance, etc.

GUI Implementation Tools


There are several tools available using which the designers can create
entire GUI on a mouse click. Some tools can be embedded into the
software environment (IDE).
GUI implementation tools provide powerful array of GUI controls. For
software customization, designers can change the code accordingly.
There are different segments of GUI tools according to their different use
and platform.

Example
Mobile GUI, Computer GUI, Touch-Screen GUI etc. Here is a list of few tools
which come handy to build GUI:

 FLUID
 AppInventor (Android)
 LucidChart
 Wavemaker
 Visual Studio

User Interface Golden rules


The following are considered the golden rules for GUI design, described
by Shneiderman and Plaisant in their book Designing the User Interface.
 Strive for consistency - Consistent sequences of actions
should be required in similar situations. Identical terminology
should be used in prompts, menus, and help screens.
Consistent commands should be employed throughout.
 Enable frequent users to use short-cuts - The user’s
desire to reduce the number of interactions increases with the
frequency of use. Abbreviations, function keys, hidden
commands, and macro facilities are very helpful to an expert
user.
 Offer informative feedback - For every operator action,
there should be some system feedback. For frequent and
minor actions, the response must be modest, while for
infrequent and major actions, the response must be more
substantial.
 Design dialog to yield closure - Sequences of actions
should be organized into groups with a beginning, middle, and
end. The informative feedback at the completion of a group of
actions gives the operators the satisfaction of
accomplishment, a sense of relief, the signal to drop
contingency plans and options from their minds, and this
indicates that the way ahead is clear to prepare for the next
group of actions.
 Offer simple error handling - As much as possible, design
the system so the user will not make a serious error. If an error
is made, the system should be able to detect it and offer
simple, comprehensible mechanisms for handling the error.
 Permit easy reversal of actions - This feature relieves
anxiety, since the user knows that errors can be undone. Easy
reversal of actions encourages exploration of unfamiliar
options. The units of reversibility may be a single action, a
data entry, or a complete group of actions.
 Support internal locus of control - Experienced operators
strongly desire the sense that they are in charge of the system
and that the system responds to their actions. Design the
system to make users the initiators of actions rather than the
responders.
 Reduce short-term memory load - The limitation of human
information processing in short-term memory requires the
displays to be kept simple, multiple page displays be
consolidated, window-motion frequency be reduced, and
sufficient training time be allotted for codes, mnemonics, and
sequences of actions.
Software Design Complexity
The term complexity stands for a state of events or things that have
multiple interconnected links and highly complicated structures. In
software programming, as the design of software is realized, the number
of elements and their interconnections gradually grows huge, and becomes
too difficult to understand at once.
Software design complexity is difficult to assess without using complexity
metrics and measures. Let us see three important software complexity
measures.

Halstead's Complexity Measures


In 1977, Maurice Howard Halstead introduced metrics to measure software
complexity. Halstead’s metrics depend upon the actual implementation of
the program, and its measures are computed directly from the operators
and operands in the source code, in a static manner. They allow
evaluation of testing time, vocabulary, size, difficulty, errors, and
effort for C/C++/Java source code.
According to Halstead, “A computer program is an implementation of an
algorithm considered to be a collection of tokens which can be classified
as either operators or operands”. Halstead’s metrics treat a program as a
sequence of operators and their associated operands.
He defines various indicators to check the complexity of a module.

Parameter   Meaning

n1          Number of unique operators
n2          Number of unique operands
N1          Total number of occurrences of operators
N2          Total number of occurrences of operands

When we select source file to view its complexity details in Metric Viewer,
the following result is seen in Metric Report:

Metric   Meaning        Mathematical Representation

n        Vocabulary     n1 + n2
N        Size           N1 + N2
V        Volume         N * log2(n)
D        Difficulty     (n1/2) * (N2/n2)
E        Effort         D * V
B        Errors         V / 3000
T        Testing time   T = E / S, where S = 18 seconds
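Given the four counts, every indicator in the table follows by direct arithmetic. A small Python sketch, using the standard Halstead formulas:

```python
import math

def halstead(n1, n2, N1, N2):
    """Halstead indicators from unique/total operator and operand counts."""
    n = n1 + n2                    # vocabulary
    N = N1 + N2                    # size (length)
    V = N * math.log2(n)           # volume
    D = (n1 / 2) * (N2 / n2)       # difficulty
    E = D * V                      # effort
    return {"vocabulary": n, "size": N, "volume": V,
            "difficulty": D, "effort": E,
            "errors": V / 3000,    # estimated delivered errors
            "time": E / 18}        # testing time in seconds
```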

Cyclomatic Complexity Measures


Every program encompasses statements to execute in order to perform
some task, and other decision-making statements that decide what
statements need to be executed. These decision-making constructs change
the flow of the program.
If we compare two programs of the same size, the one with more decision-
making statements will be more complex, as the control of the program
jumps frequently.
McCabe, in 1976, proposed the Cyclomatic Complexity Measure to quantify
the complexity of a given software. It is a graph-driven model that is
based on decision-making constructs of a program, such as if-else,
do-while, repeat-until, switch-case, and goto statements.
Process to make the flow control graph:

• Break the program into smaller blocks, delimited by decision-making
constructs.
• Create a node representing each of these blocks.
• Connect the nodes as follows:
o If control can branch from block i to block j, draw an arc.
o From the exit node to the entry node, draw an arc.
To calculate Cyclomatic complexity of a program module, we use the
formula –

V(G) = e – n + 2
Where
e is total number of edges
n is total number of nodes

The Cyclomatic complexity of the above module is:

e = 10
n = 8
Cyclomatic Complexity = 10 - 8 + 2 = 4
According to P. Jorgensen, Cyclomatic Complexity of a module should not
exceed 10.
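Once the flow graph has been counted, the formula is a one-liner. A quick Python sketch with the example's numbers:

```python
def cyclomatic_complexity(edges, nodes):
    """McCabe's V(G) = e - n + 2 for a single connected flow graph."""
    return edges - nodes + 2

# The module discussed above: 10 edges and 8 nodes.
print(cyclomatic_complexity(10, 8))
```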

Function Point
Function Point is widely used to measure the size of software. It
concentrates on the functionality provided by the system. Features and
functionality of the system are used to measure the software complexity.
Function Point counts five parameters: External Input, External Output,
Logical Internal Files, External Interface Files, and External Inquiry.
To account for the complexity of software, each parameter is further
categorized as simple, average, or complex.

Let us see parameters of function point:


External Input
Every unique input to the system, from outside, is considered as external
input. Uniqueness of input is measured, as no two inputs should have
same formats. These inputs can either be data or control parameters.
 Simple - if input count is low and affects less internal files
 Complex - if input count is high and affects more internal files
 Average - in-between simple and complex.
External Output
All output types provided by the system are counted in this category.
Output is considered unique if their output format and/or processing are
unique.
 Simple - if output count is low
 Complex - if output count is high
 Average - in between simple and complex.
Logical Internal Files
Every software system maintains internal files in order to maintain its
functional information and to function properly. These files hold logical
data of the system. This logical data may contain both functional data and
control data.
 Simple - if number of record types are low
 Complex - if number of record types are high
 Average - in between simple and complex.
External Interface Files
Software system may need to share its files with some external software
or it may need to pass the file for processing or as parameter to some
function. All these files are counted as external interface files.
 Simple - if number of record types in shared file are low
 Complex - if number of record types in shared file are high
 Average - in between simple and complex.
External Inquiry
An inquiry is a combination of input and output, where the user sends
some data to inquire about as input and the system responds to the user
with the output of the processed inquiry. The complexity of a query is
higher than that of External Input and External Output. A query is said
to be unique if its input and output are unique in terms of format and
data.
• Simple - if the query needs low processing and yields a small
amount of output data
• Complex - if the query needs high processing and yields a large
amount of output data
• Average - in between simple and complex.
Each of these parameters in the system is given weightage according to
their class and complexity. The table below mentions the weightage given
to each parameter:
Parameter     Simple   Average   Complex

Inputs           3        4         6
Outputs          4        5         7
Enquiry          3        4         6
Files            7       10        15
Interfaces       5        7        10

The table above yields raw Function Points. These function points are
adjusted according to the environment complexity. The system is described
using fourteen different characteristics:

• Data communications
• Distributed processing
• Performance objectives
• Operation configuration load
• Transaction rate
• Online data entry
• End user efficiency
• Online update
• Complex processing logic
• Re-usability
• Installation ease
• Operational ease
• Multiple sites
• Desire to facilitate changes
These characteristic factors are then rated from 0 to 5, as mentioned
below:

0 - No influence
1 - Incidental
2 - Moderate
3 - Average
4 - Significant
5 - Essential
All ratings are then summed up as N. The value of N ranges from 0 to 70
(14 types of characteristics x 5 types of ratings). It is used to
calculate the Complexity Adjustment Factor (CAF), using the following
formula:

CAF = 0.65 + 0.01N

Then,

Delivered Function Points (FP) = CAF x Raw FP
This FP can then be used in various metrics, such as:
Cost = $ / FP
Quality = Errors / FP
Productivity = FP / person-month
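The whole computation, from the weight table through CAF to the adjusted count, can be sketched in a few lines of Python (the dictionary keys and function names are illustrative):

```python
# Weight table for raw Function Points (simple / average / complex).
WEIGHTS = {
    "inputs":     {"simple": 3, "average": 4,  "complex": 6},
    "outputs":    {"simple": 4, "average": 5,  "complex": 7},
    "enquiry":    {"simple": 3, "average": 4,  "complex": 6},
    "files":      {"simple": 7, "average": 10, "complex": 15},
    "interfaces": {"simple": 5, "average": 7,  "complex": 10},
}

def delivered_fp(counts, n_ratings_sum):
    """counts: {parameter: {category: how_many}}; n_ratings_sum: N (0..70)."""
    raw = sum(WEIGHTS[param][cat] * qty
              for param, cats in counts.items()
              for cat, qty in cats.items())
    caf = 0.65 + 0.01 * n_ratings_sum     # Complexity Adjustment Factor
    return caf * raw

# Two simple inputs and one average file, with N = 35:
print(delivered_fp({"inputs": {"simple": 2}, "files": {"average": 1}}, 35))
```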

Software Implementation
In this chapter, we will study about programming methods,
documentation and challenges in software implementation.

Structured Programming
In the process of coding, the lines of code keep multiplying, and thus
the size of the software increases. Gradually, it becomes next to
impossible to remember the flow of the program. If one forgets how the
software and its underlying programs, files, and procedures are
constructed, it then becomes very difficult to share, debug, and modify
the program. The solution to this is structured programming. It
encourages the developer to use subroutines and loops instead of simple
jumps in the code, thereby bringing clarity to the code and improving its
efficiency. Structured programming also helps the programmer reduce
coding time and organize the code properly.
Structured programming states how the program shall be coded.
Structured programming uses three main concepts:

• Top-down analysis - A software is always made to perform
some rational work. This rational work is known as the problem
in software parlance. Thus it is very important that we
understand how to solve the problem. Under top-down
analysis, the problem is broken down into small pieces, each of
which has some significance. Each problem is individually
solved and steps are clearly stated about how to solve it.
• Modular Programming - While programming, the code is
broken down into smaller groups of instructions. These groups
are known as modules, subprograms, or subroutines. Modular
programming is based on the understanding of top-down
analysis. It discourages jumps using ‘goto’ statements in the
program, which often make the program flow non-traceable.
Jumps are prohibited and the modular format is encouraged in
structured programming.
• Structured Coding - In reference to top-down analysis,
structured coding sub-divides the modules into further smaller
units of code in the order of their execution. Structured
programming uses control structures, which control the flow of
the program, whereas structured coding uses control structures
to organize its instructions in definable patterns.
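As a toy illustration of these ideas (the problem and names are invented for the example), the Python sketch below decomposes one small task top-down into modules and uses a loop rather than jumps:

```python
def read_scores(raw):                  # sub-problem 1: parse the input
    return [int(x) for x in raw.split(",")]

def average(scores):                   # sub-problem 2: compute the result
    total = 0
    for s in scores:                   # a structured loop, no jump statements
        total += s
    return total / len(scores)

def report(raw):                       # the top level composes the modules
    return f"average = {average(read_scores(raw)):.1f}"

print(report("10,20,30"))
```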

Functional Programming
Functional programming is a style of programming language that uses the
concepts of mathematical functions. A function in mathematics should
always produce the same result on receiving the same argument. In
procedural languages, the flow of the program runs through procedures,
i.e., the control of the program is transferred to the called procedure.
While control flow is transferring from one procedure to another, the
program changes its state.
In procedural programming, it is possible for a procedure to produce
different results when it is called with the same argument, as the program
itself can be in different state while calling it. This is a property as well as
a drawback of procedural programming, in which the sequence or timing
of the procedure execution becomes important.
Functional programming provides means of computation as mathematical
functions, which produces results irrespective of program state. This
makes it possible to predict the behavior of the program.
Functional programming uses the following concepts:

• First-class and higher-order functions - These functions
have the capability to accept another function as an argument,
or they return other functions as results.
• Pure functions - These functions do not include destructive
updates, that is, they do not affect any I/O or memory, and if
they are not in use, they can easily be removed without
hampering the rest of the program.
• Recursion - Recursion is a programming technique where a
function calls itself and repeats the program code in it until
some pre-defined condition matches. Recursion is the way of
creating loops in functional programming.
• Strict evaluation - It is a method of evaluating the
expression passed to a function as an argument. Functional
programming has two types of evaluation methods, strict
(eager) and non-strict (lazy). Strict evaluation always evaluates
the expression before invoking the function. Non-strict
evaluation does not evaluate the expression unless it is
needed.
• λ-calculus - Most functional programming languages use λ-
calculus as their theoretical foundation. λ-expressions are
executed by evaluating them as they occur.
Common Lisp, Scala, Haskell, Erlang and F# are some examples of
functional programming languages.
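Although the listed languages are the natural home for this style, the core concepts can be illustrated even in Python; this is a sketch, not idiomatic functional code:

```python
from functools import reduce

def square(x):                 # a pure function: no state, no I/O
    return x * x

def compose(f, g):             # higher-order: takes and returns functions
    return lambda x: f(g(x))

def factorial(n):              # recursion in place of a loop
    return 1 if n == 0 else n * factorial(n - 1)

inc_then_square = compose(square, lambda x: x + 1)
total = reduce(lambda a, b: a + b, map(square, [1, 2, 3]))
```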
Programming style
Programming style is a set of coding rules followed by all the programmers
who write the code. When multiple programmers work on the same software
project, they frequently need to work with program code written by
another developer. This becomes tedious, or at times impossible, if the
developers do not all follow a standard programming style to code the
program.
An appropriate programming style includes using function and variable
names relevant to the intended task, using well-placed indentation,
commenting code for the convenience of the reader and a clean overall
presentation of code. This makes the program code readable and
understandable by all, which in turn makes debugging and error solving
easier. Proper coding style also eases documentation and updating.
Coding Guidelines
The practice of coding style varies with the organization, the operating
system and the language of coding itself.
The following coding elements may be defined under coding guidelines of
an organization:
 Naming conventions - This section defines how to name
functions, variables, constants and global variables.
 Indenting - This is the space left at the beginning of each line,
usually 2-8 whitespace characters or a single tab.
 Whitespace - It is generally omitted at the end of line.
 Operators - Defines the rules of writing mathematical,
assignment and logical operators. For example, assignment
operator ‘=’ should have space before and after it, as in “x =
2”.
 Control Structures - The rules of writing if-then-else, case-
switch, while-until and for control flow statements solely and in
nested fashion.
 Line length and wrapping - Defines how many characters
should be there in one line; mostly a line is 80 characters long.
Wrapping defines how a line should be wrapped if it is too long.
 Functions - This defines how functions should be declared
and invoked, with and without parameters.
 Variables - This mentions how variables of different data
types are declared and defined.
 Comments - This is one of the important coding components,
as the comments included in the code describe what the code
actually does, along with other associated descriptions. This
section also helps in creating help documentation for other
developers.
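As a sketch of how several of these elements combine in practice (Python is used only as an example; the names and the rule of returning 0.0 for an empty list are invented here):

```python
MAX_RETRIES = 3  # naming convention: constants in upper case


def compute_average(values):
    """Return the arithmetic mean of a list of numbers."""
    total = sum(values)    # operators: spaces around '='
    count = len(values)
    if count == 0:         # control structure: explicit guard clause
        return 0.0
    return total / count
```

A reader who has never seen this function can still follow it, because the naming, indentation and comments carry the intent.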
Software Documentation
Software documentation is an important part of the software process. A
well-written document serves as a repository of information necessary to
know about the software process. Software documentation also provides
information about how to use the product.
A well-maintained documentation should involve the following documents:
 Requirement documentation - This documentation works
as key tool for software designer, developer and the test team
to carry out their respective tasks. This document contains all
the functional, non-functional and behavioral description of the
intended software.
Sources for this document can be previously stored data about
the software, software already running at the client's end,
client interviews, questionnaires and research. Generally it is
stored in the form of a spreadsheet or word-processing
document with the high-end software management team.
This documentation works as foundation for the software to be
developed and is majorly used in verification and validation
phases. Most test-cases are built directly from requirement
documentation.
 Software Design documentation - This documentation
contains all the necessary information needed to
build the software. It contains: (a) High-level software
architecture, (b) Software design details, (c) Data flow
diagrams, (d) Database design
These documents work as a repository for developers to
implement the software. Though these documents do not give
any details on how to code the program, they give all
necessary information that is required for coding and
implementation.
 Technical documentation - This documentation is
maintained by the developers and actual coders. These
documents, as a whole, represent information about the code.
While writing the code, the programmers also mention the
objective of the code, who wrote it, where it will be required,
what it does and how it does it, and what other resources the
code uses.
The technical documentation increases the understanding
between various programmers working on the same code. It
enhances re-use capability of the code. It makes debugging
easy and traceable.
There are various automated tools available, and some come
with the programming language itself. For example, Java
comes with the JavaDoc tool to generate technical
documentation of code.
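Python offers an analogous facility: docstrings embedded in the code are extracted by tools such as pydoc. A minimal sketch (the `parse_config` function and its key=value file format are invented for this example):

```python
def parse_config(path):
    """Read a key=value configuration file and return it as a dict.

    Blank lines and lines starting with '#' are ignored.
    Uses only the standard library; no other resources.
    """
    result = {}
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                result[key.strip()] = value.strip()
    return result

# help(parse_config) prints the docstring above, and
# pydoc renders it as reference documentation.
```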
 User documentation - This documentation is different from
all the documentation explained above. All previous documentation
is maintained to provide information about the software and its
development process. But user documentation explains how
the software product should work and how it should be used to
get the desired results.
This documentation may include software installation
procedures, how-to guides, user guides, uninstallation methods
and special references for more information, such as license
updates.
Software Implementation Challenges
There are some challenges faced by the development team while
implementing the software. Some of them are mentioned below:
 Code-reuse - Programming interfaces of present-day
languages are very sophisticated and are equipped with huge
function libraries. Still, to bring down the cost of the end
product, organization management prefers to re-use code
which was created earlier for some other software. Programmers
face huge issues with compatibility checks and deciding
how much code to re-use.
 Version Management - Every time a new software is issued
to the customer, developers have to maintain version and
configuration related documentation. This documentation
needs to be highly accurate and available on time.
 Target-Host - The software program being developed
in the organization needs to be designed for host
machines at the customer's end. But at times, it is impossible
to design software that works on the target machines.
Software Testing Overview
Software Testing is the evaluation of the software against requirements
gathered from users and system specifications. Testing is conducted at
the phase level in the software development life cycle or at the module
level in program code. Software testing comprises Validation and
Verification.
Software Validation
Validation is the process of examining whether or not the software
satisfies the user requirements. It is carried out at the end of the SDLC.
If the software matches the requirements for which it was made, it is
validated.
 Validation ensures the product under development meets
the user requirements.
 Validation answers the question – "Are we developing the
product which does all that the user needs from this
software?"
 Validation emphasizes user requirements.
Software Verification
Verification is the process of confirming that the software meets the
business requirements and is developed adhering to the proper
specifications and methodologies.
 Verification ensures the product being developed is according
to design specifications.
 Verification answers the question – "Are we developing this
product by firmly following all design specifications?"
 Verification concentrates on the design and system
specifications.
Targets of the test are –
 Errors - These are actual coding mistakes made by
developers. In addition, a difference between the output of the
software and the desired output is considered an error.
 Fault - When an error exists, a fault occurs. A fault, also
known as a bug, is the result of an error and can cause the
system to fail.
 Failure - Failure is the inability of the system to
perform the desired task. Failure occurs when a fault exists in
the system.
Manual Vs Automated Testing
Testing can either be done manually or using an automated testing tool:
 Manual - This testing is performed without taking help of
automated testing tools. The software tester prepares test
cases for different sections and levels of the code, executes
the tests and reports the result to the manager.
Manual testing is time- and resource-consuming. The tester
needs to confirm whether or not the right test cases are used.
A major portion of testing involves manual testing.
 Automated - This testing is a procedure done with the aid
of automated testing tools. The limitations of manual testing
can be overcome using automated test tools.
Suppose a test needs to check whether a webpage can be opened in
Internet Explorer. This can easily be done with manual testing. But
checking whether the web server can take the load of 1 million users is
quite impossible to do manually.
There are software and hardware tools which help the tester in conducting
load testing, stress testing and regression testing.
Testing Approaches
Tests can be conducted based on two approaches –
 Functionality testing
 Implementation testing
When functionality is tested without taking the actual implementation
into account, it is known as black-box testing. The other side
is known as white-box testing, where not only the functionality is tested
but the way it is implemented is also analyzed.
Exhaustive testing is the best-desired method for perfect testing: every
single possible value in the range of the input and output values is
tested. However, it is not possible to test each and every value in a
real-world scenario if the range of values is large.
Black-box testing
It is carried out to test the functionality of the program, and is also
called ‘Behavioral’ testing. The tester in this case has a set of input
values and respective desired results. On providing input, if the output
matches the desired results, the program is tested ‘ok’, and problematic
otherwise.
In this testing method, the design and structure of the code are not known
to the tester, and testing engineers and end users conduct this test on the
software.
Black-box testing techniques:
 Equivalence class - The input is divided into similar classes.
If one element of a class passes the test, it is assumed that
the whole class passes.
 Boundary values - The input is divided into higher and lower
end values. If these values pass the test, it is assumed that all
values in between may pass too.
 Cause-effect graphing - In both previous methods, only one
input value at a time is tested. Cause (input) – Effect (output)
is a testing technique where combinations of input values are
tested in a systematic way.
 Pair-wise Testing - The behavior of software depends on
multiple parameters. In pairwise testing, the multiple
parameters are tested pair-wise for their different values.
 State-based testing - The system changes state on provision
of input. These systems are tested based on their states and
input.
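The first two techniques can be sketched as follows; the `is_valid_age` function and its 0–120 range are invented for this example:

```python
# Hypothetical unit under test: accepts ages in the range 0..120.
def is_valid_age(age):
    return 0 <= age <= 120

# Boundary values: test at and just outside each boundary.
assert is_valid_age(0)        # lower boundary
assert is_valid_age(120)      # upper boundary
assert not is_valid_age(-1)   # just below the lower boundary
assert not is_valid_age(121)  # just above the upper boundary

# Equivalence classes: one representative is assumed to stand
# for the whole class.
assert is_valid_age(50)       # valid class
assert not is_valid_age(500)  # invalid class
```

Note that the tester never looks inside `is_valid_age`; only inputs and outputs matter.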
White-box testing
It is conducted to test the program and its implementation, in order to
improve code efficiency or structure. It is also known as ‘Structural’
testing.
In this testing method, the design and structure of the code are known to
the tester. Programmers of the code conduct this test on the code.
Below are some white-box testing techniques:
 Control-flow testing - The purpose of control-flow testing is
to set up test cases which cover all statements and branch
conditions. The branch conditions are tested for being both
true and false, so that all statements can be covered.
 Data-flow testing - This testing technique emphasizes covering
all the data variables included in the program. It tests where
the variables were declared and defined and where they were
used or changed.
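Control-flow testing can be sketched with a single branch; the `absolute` function is invented for this example:

```python
# Unit under test: one branch condition.
def absolute(x):
    if x < 0:        # branch condition
        return -x    # reached only when the condition is true
    return x         # reached only when the condition is false

# Control-flow test cases: the branch is exercised both ways,
# so every statement is covered.
assert absolute(-5) == 5  # condition true
assert absolute(3) == 3   # condition false
assert absolute(0) == 0   # boundary between the two paths
```

Unlike the black-box example, these cases were chosen by reading the code and aiming at its branches.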
Testing Levels
Testing itself may be defined at various levels of the SDLC. The testing
process runs parallel to software development. Before jumping to the
next stage, a stage is tested, validated and verified.
Testing separately is done just to make sure that there are no hidden
bugs or issues left in the software. Software is tested at various levels -
Unit Testing
While coding, the programmer performs some tests on that unit of the
program to check whether it is error-free. Testing is performed under the
white-box testing approach. Unit testing helps developers verify that
individual units of the program work as per requirement and are error-free.
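A minimal unit test might look like this with Python's standard unittest module; the `add` function is a stand-in for any small unit of the program:

```python
import unittest

def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

# Run with: python -m unittest <module_name>
```

Each test method exercises the unit in isolation, which is what lets the developer decide that the individual unit works as per requirement.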
Integration Testing
Even if the units of software work fine individually, there is a need
to find out whether the units, when integrated together, would also work
without errors; for example, argument passing and data updating.
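An integration test checks exactly this kind of argument passing between units. Both functions below are invented for this sketch:

```python
def parse_amount(text):
    """Unit 1: turn user input into a number (tested on its own)."""
    return float(text.strip())

def apply_discount(amount, percent):
    """Unit 2: apply a percentage discount (tested on its own)."""
    return amount * (1 - percent / 100)

# Integration: the output of one unit feeds the input of the other.
def checkout(text, percent):
    return apply_discount(parse_amount(text), percent)

# The integration test exercises the combined flow, not the units alone.
assert checkout(" 200 ", 50) == 100.0
```

A unit test of either function alone would not catch, say, `checkout` passing its arguments in the wrong order.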
System Testing
The software is compiled as a product and then tested as a whole. This
can be accomplished using one or more of the following tests:
 Functionality testing - Tests all functionalities of the
software against the requirement.
 Performance testing - This test proves how efficient the
software is. It tests the effectiveness and the average time the
software takes to do a desired task. Performance testing is
done by means of load testing and stress testing, where the
software is put under high user and data load under various
environment conditions.
 Security & Portability - These tests are done when the
software is meant to work on various platforms and be accessed
by a number of persons.
Acceptance Testing
When the software is ready to be handed over to the customer, it has to
go through the last phase of testing, where it is tested for user
interaction and response. This is important because even if the software
matches all user requirements, it may be rejected if the user does not
like the way it appears or works.
 Alpha testing - The team of developers themselves perform
alpha testing by using the system as if it is being used in a
work environment. They try to find out how a user would react to
some action in the software and how the system should respond
to inputs.
 Beta testing - After the software is tested internally, it is
handed over to the users to use it in their production
environment, only for testing purposes. This is not yet the
delivered product. Developers expect that users at this stage
will surface minute problems that were previously overlooked.
Regression Testing
Whenever a software product is updated with new code, features or
functionality, it is tested thoroughly to detect any negative
impact of the added code. This is known as regression testing.
Testing Documentation
Testing documents are prepared at different stages -
Before Testing
Testing starts with test case generation. The following documents are
needed for reference –
 SRS document - Functional Requirements document
 Test Policy document - This describes how far testing should
take place before releasing the product.
 Test Strategy document - This mentions detailed aspects of
the test team, the responsibility matrix and the
rights/responsibilities of the test manager and test engineers.
 Traceability Matrix document - This is an SDLC document
related to the requirement gathering process. As new
requirements come, they are added to this matrix. These
matrices help testers know the source of a requirement;
requirements can be traced forward and backward.
While Being Tested
The following documents may be required while testing is underway:
 Test Case document - This document contains list of tests
required to be conducted. It includes Unit test plan, Integration
test plan, System test plan and Acceptance test plan.
 Test description - This document is a detailed description of
all test cases and procedures to execute them.
 Test case report - This document contains test case report
as a result of the test.
 Test logs - This document contains test logs for every test
case report.
After Testing
The following documents may be generated after testing:
 Test summary - The test summary is a collective analysis of all
test reports and logs. It summarizes and concludes whether the
software is ready to be launched. If ready, the software is
released under a version control system.
Testing vs. Quality Control, Quality Assurance and Audit
We need to understand that software testing is different from software
quality assurance, software quality control and software auditing.
 Software quality assurance - This is a means of monitoring
the software development process, by which it is ensured
that all measures are taken as per the standards of the
organization. This monitoring is done to make sure that proper
software development methods are followed.
 Software quality control - This is a system to maintain the
quality of the software product. It may include functional and non-
functional aspects of the software product, which enhance the
goodwill of the organization. This system makes sure that the
customer is receiving a quality product for their requirement and
that the product is certified as ‘fit for use’.
 Software audit - This is a review of the procedures used by the
organization to develop the software. A team of auditors,
independent of the development team, examines the software
process, procedures, requirements and other aspects of the SDLC.
The purpose of a software audit is to check that the software and
its development process both conform to standards, rules and
regulations.

Software Maintenance Overview
Software maintenance is a widely accepted part of the SDLC nowadays. It
stands for all the modifications and updates made after the delivery of
the software product. There are a number of reasons why modifications are
required; some of them are briefly mentioned below:
 Market Conditions - Policies which change over time, such
as taxation, and newly introduced constraints, such as how to
maintain bookkeeping, may trigger a need for modification.
 Client Requirements - Over time, the customer may ask for
new features or functions in the software.
 Host Modifications - If any of the hardware and/or platform
(such as the operating system) of the target host changes,
software changes are needed to maintain adaptability.
 Organization Changes - If there is any business-level change
at the client's end, such as reduction of organization strength,
acquisition of another company, or the organization venturing
into a new business, the need to modify the original software
may arise.
Types of maintenance
In a software's lifetime, the type of maintenance may vary based on its
nature. It may be just a routine maintenance task, as when some bug is
discovered by a user, or it may be a large event in itself, based on the
size or nature of the maintenance. Following are some types of
maintenance based on their characteristics:
 Corrective Maintenance - This includes modifications and
updates done in order to correct or fix problems, which are
either discovered by the user or concluded from user error
reports.
 Adaptive Maintenance - This includes modifications and
updates applied to keep the software product up to date and
tuned to the ever-changing world of technology and business
environment.
 Perfective Maintenance - This includes modifications and
updates done in order to keep the software usable over a long
period of time. It includes new features and new user
requirements for refining the software and improving its
reliability and performance.
 Preventive Maintenance - This includes modifications and
updates to prevent future problems of the software. It aims
to attend to problems which are not significant at this moment
but may cause serious issues in the future.
Cost of Maintenance
Reports suggest that the cost of maintenance is high. A study on
estimating software maintenance found that the cost of maintenance can
be as high as 67% of the cost of the entire software process cycle.
On average, the cost of software maintenance is more than 50% of all
SDLC phases. There are various factors which push the maintenance cost
up, such as:
Real-world factors affecting Maintenance Cost
 The standard age of any software is considered to be up to 10
to 15 years.
 Older software, which was meant to work on slow machines
with less memory and storage capacity, cannot keep up
against newly developed enhanced software on modern hardware.
 As technology advances, it becomes costly to maintain old
software.
 Many maintenance engineers are newcomers who use trial-and-
error methods to rectify problems.
 Often, changes made can easily hurt the original structure of
the software, making it hard for any subsequent changes.
 Changes are often left undocumented which may cause more
conflicts in future.
Software-end factors affecting Maintenance Cost
 Structure of Software Program
 Programming Language
 Dependence on external environment
 Staff reliability and availability
Maintenance Activities
IEEE provides a framework for sequential maintenance process activities.
It can be used in iterative manner and can be extended so that
customized items and processes can be included.
These activities go hand-in-hand with each of the following phases:
 Identification & Tracing - This involves activities pertaining
to identifying the requirement for modification or maintenance.
The requirement is generated by a user, or the system may itself
report it via logs or error messages. Here, the maintenance type
is also classified.
 Analysis - The modification is analyzed for its impact on the
system, including safety and security implications. If the
probable impact is severe, an alternative solution is looked for.
A set of required modifications is then materialized into
requirement specifications. The cost of modification/maintenance
is analyzed and an estimate is concluded.
 Design - New modules, which need to be replaced or
modified, are designed against requirement specifications set
in the previous stage. Test cases are created for validation and
verification.
 Implementation - The new modules are coded with the help
of the structured design created in the design step. Every
programmer is expected to do unit testing in parallel.
 System Testing - Integration testing is done among the newly
created modules. Integration testing is also carried out
between the new modules and the system. Finally, the system is
tested as a whole, following regression testing procedures.
 Acceptance Testing - After testing the system internally, it
is tested for acceptance with the help of users. If at this stage
the user complains of some issues, they are addressed or noted
to be addressed in the next iteration.
 Delivery - After the acceptance test, the system is deployed
all over the organization, either via a small update package or
a fresh installation of the system. The final testing takes place
at the client's end after the software is delivered.
A training facility is provided if required, in addition to the
hard copy of the user manual.
 Maintenance management - Configuration management is
an essential part of system maintenance. It is aided by
version control tools to manage versions, semi-versions or patch
releases.
Software Re-engineering
When we need to update the software to keep it current with the market
without impacting its functionality, it is called software re-engineering.
It is a thorough process in which the design of the software is changed
and programs are re-written.
Legacy software cannot keep pace with the latest technology available
in the market. As hardware becomes obsolete, updating the software
becomes a headache. Even if software grows old with time, its
functionality does not.
For example, Unix was initially developed in assembly language. When the
C language came into existence, Unix was re-engineered in C, because
working in assembly language was difficult.
Other than this, programmers sometimes notice that a few parts of the
software need more maintenance than others, and those parts too need
re-engineering.
Re-Engineering Process
 Decide what to re-engineer. Is it the whole software or a part
of it?
 Perform Reverse Engineering, in order to obtain specifications
of the existing software.
 Restructure the program if required, for example, changing
function-oriented programs into object-oriented programs.
 Re-structure data as required.
 Apply Forward Engineering concepts in order to get the re-
engineered software.
There are a few important terms used in software re-engineering:
Reverse Engineering
It is a process to recover system specification by thoroughly analyzing
and understanding the existing system. This process can be seen as a
reverse SDLC model, i.e. we try to reach a higher abstraction level by
analyzing lower abstraction levels.
An existing system is a previously implemented design about which we
know nothing. Designers then do reverse engineering by looking at the
code and trying to recover the design. With the design in hand, they try
to conclude the specifications, thus going in reverse from code to system
specification.
Program Restructuring
It is a process to re-structure and re-construct the existing software. It
is all about re-arranging the source code, either in the same programming
language or from one programming language to a different one.
Restructuring can involve source code restructuring, data restructuring,
or both.
Re-structuring does not impact the functionality of the software but
enhances reliability and maintainability. Program components which cause
errors very frequently can be changed or updated with re-structuring.
The dependence of the software on an obsolete hardware platform can be
removed via re-structuring.
Forward Engineering
Forward engineering is the process of obtaining the desired software from
the specifications in hand, which were derived by means of reverse
engineering. It assumes that some software engineering was already done
in the past.
Forward engineering is the same as the software engineering process, with
only one difference: it is always carried out after reverse engineering.
Component reusability
A component is a part of software program code which executes an
independent task in the system. It can be a small module or a sub-system
itself.
Example
The login procedures used on the web can be considered components; the
printing system in software can be seen as a component of the software.
Components have high cohesion of functionality and a lower rate of
coupling, i.e. they work independently and can perform tasks without
depending on other modules.
In OOP, objects are designed to be very specific to their concern and
have fewer chances of being used in some other software.
In modular programming, the modules are coded to perform specific tasks
which can be used across number of other software programs.
There is a whole new vertical, which is based on re-use of software
component, and is known as Component Based Software Engineering
(CBSE).
Re-use can be done at various levels:
 Application level - Where an entire application is used as
sub-system of new software.
 Component level - Where sub-system of an application is
used.
 Modules level - Where functional modules are re-used.
Software components provide interfaces, which can be used to
establish communication among different components.
Reuse Process
Two kinds of methods can be adopted: either keep the requirements the
same and adjust the components, or keep the components the same and
modify the requirements.
 Requirement Specification - The functional and non-
functional requirements which the software product must
comply with are specified, with the help of the existing system,
user input or both.
 Design - This is also a standard SDLC process step, where
requirements are defined in terms of software parlance. Basic
architecture of system as a whole and its sub-systems are
created.
 Specify Components - By studying the software design, the
designers segregate the entire system into smaller
components or sub-systems. One complete software design
turns into a collection of a huge set of components working
together.
 Search Suitable Components - The software component
repository is consulted by designers to search for matching
components, on the basis of functionality and intended
software requirements.
 Incorporate Components - All matched components are
packed together to shape them as complete software.
Software Case Tools Overview
CASE stands for Computer Aided Software Engineering. It means the
development and maintenance of software projects with the help of
various automated software tools.
CASE Tools
CASE tools are a set of software application programs which are used to
automate SDLC activities. CASE tools are used by software project
managers, analysts and engineers to develop software systems.
There are a number of CASE tools available to simplify various stages of
the Software Development Life Cycle: analysis tools, design tools,
project management tools, database management tools and documentation
tools, to name a few.
The use of CASE tools accelerates the development of the project to
produce the desired result, and helps to uncover flaws before moving
ahead to the next stage of software development.
Components of CASE Tools
CASE tools can be broadly divided into the following parts based on their
use at a particular SDLC stage:
 Central Repository - CASE tools require a central repository,
which can serve as a source of common, integrated and
consistent information. The central repository is a central
place of storage where product specifications, requirement
documents, related reports and diagrams, and other useful
information regarding management are stored. The central
repository also serves as a data dictionary.
 Upper CASE Tools - Upper CASE tools are used in the
planning, analysis and design stages of the SDLC.
 Lower CASE Tools - Lower CASE tools are used in
implementation, testing and maintenance.
 Integrated CASE Tools - Integrated CASE tools are helpful in
all the stages of the SDLC, from requirement gathering to
testing and documentation.
CASE tools can be grouped together if they have similar functionality,
process activities and capability of getting integrated with other tools.
Scope of CASE Tools
The scope of CASE tools extends throughout the SDLC.
CASE Tools Types
Now we briefly go through various CASE tools.
Diagram tools
These tools are used to represent system components, data and control
flow among various software components, and system structure in
graphical form. For example, the Flow Chart Maker tool for creating
state-of-the-art flowcharts.
Process Modeling Tools
Process modeling is a method to create the software process model, which
is used to develop the software. Process modeling tools help managers
choose a process model or modify it as per the requirements of the
software product. For example, EPF Composer.
Project Management Tools
These tools are used for project planning, cost and effort estimation,
project scheduling and resource planning. Managers have to ensure that
project execution strictly complies with every step mentioned in
software project management. Project management tools help in storing
and sharing project information in real time throughout the
organization. For example, Creative Pro Office, Trac Project, Basecamp.
Documentation Tools
Documentation in a software project starts prior to the software process,
goes throughout all phases of SDLC and after the completion of the
project.
Documentation tools generate documents for technical users and end
users. Technical users are mostly in-house professionals of the
development team, who refer to the system manual, reference manual,
training manual, installation manual, etc. The end-user documents
describe the functioning and how-tos of the system, such as the user
manual. For example, Doxygen, DrExplain, Adobe RoboHelp.
Analysis Tools
These tools help to gather requirements and automatically check for any
inconsistencies or inaccuracies in the diagrams, data redundancies or
erroneous omissions. For example, Accept 360, Accompa and CaseComplete
for requirement analysis, and Visible Analyst for total analysis.
Design Tools
These tools help software designers to design the block structure of the
software, which may further be broken down into smaller modules using
refinement techniques. These tools provide detailing of each module and
the interconnections among modules. For example, Animated Software
Design.
Configuration Management Tools


An instance of software is released under one version. Configuration
Management tools deal with –

 Version and revision management


 Baseline configuration management
 Change control management
CASE tools help in this by automatic tracking, version management and
release management. For example, Fossil, Git, AccuRev.

Change Control Tools


These tools are considered as a part of configuration management tools.
They deal with changes made to the software after its baseline is fixed or
when the software is first released. CASE tools automate change tracking,
file management, code management and more. They also help in enforcing
the change policy of the organization.

Programming Tools
These tools consist of programming environments like IDE (Integrated
Development Environment), in-built modules library and simulation tools.
These tools provide comprehensive aid in building software product and
include features for simulation and testing. For example, Cscope to search
code in C, Eclipse.

Prototyping Tools
A software prototype is a simulated version of the intended software product.
A prototype provides the initial look and feel of the product and simulates a
few aspects of the actual product.
Prototyping CASE tools essentially come with graphical libraries. They can
create hardware independent user interfaces and design. These tools help
us to build rapid prototypes based on existing information. In addition,
they provide simulation of software prototype. For example, Serena
prototype composer, Mockup Builder.

Web Development Tools


These tools assist in designing web pages with all allied elements like
forms, text, script, graphics and so on. Web tools also provide a live preview
of what is being developed and how it will look after completion. For
example, Fontello, Adobe Edge Inspect, Foundation 3, Brackets.

Quality Assurance Tools


Quality assurance in a software organization is monitoring the engineering
process and methods adopted to develop the software product in order to
ensure conformance of quality as per organization standards. QA tools
consist of configuration and change control tools and software testing
tools. For example, SoapTest, AppsWatch, JMeter.

Maintenance Tools
Software maintenance includes modifications in the software product after
it is delivered. Automatic logging and error reporting techniques,
automatic error ticket generation and root cause analysis are a few CASE
tools, which help software organizations in the maintenance phase of the SDLC. For
example, Bugzilla for defect tracking, HP Quality Center.

Debugging

Debugging is the process of identifying and resolving errors, or
bugs, in a software system. It is an important aspect of software
engineering because bugs can cause a software system to
malfunction, and can lead to poor performance or incorrect results.
Debugging can be a time-consuming and complex task, but it is
essential for ensuring that a software system is functioning
correctly.

There are several common methods and techniques used in
debugging, including:

1. Code Inspection: This involves manually reviewing the
source code of a software system to identify potential bugs
or errors.
2. Debugging Tools: There are various tools available for
debugging such as debuggers, trace tools, and profilers that
can be used to identify and resolve bugs.
3. Unit Testing: This involves testing individual units or
components of a software system to identify bugs or errors.
4. Integration Testing: This involves testing the interactions
between different components of a software system to
identify bugs or errors.
5. System Testing: This involves testing the entire software
system to identify bugs or errors.
6. Monitoring: This involves monitoring a software system for
unusual behavior or performance issues that can indicate
the presence of bugs or errors.
7. Logging: This involves recording events and messages
related to the software system, which can be used to
identify bugs or errors.
It is important to note that debugging is an iterative process, and it
may take multiple attempts to identify and resolve all bugs in a
software system. Additionally, it is important to have a well-defined
process in place for reporting and tracking bugs, so that they can be
effectively managed and resolved.
In summary, debugging is the process of identifying and resolving
errors, or bugs, in a software system. Common methods and
techniques include code inspection, debugging tools, unit testing,
integration testing, system testing, monitoring, and logging. It is an
iterative process that may take multiple attempts to identify and
resolve all bugs.
In the context of software engineering, debugging is the process of
fixing a bug in the software. In other words, it refers to identifying,
analyzing, and removing errors. This activity begins when the
software fails to execute properly and concludes by solving the
problem and successfully testing the software. It is considered to be
an extremely complex and tedious task because errors need to be
resolved at all stages of debugging.
Debugging Process: Steps involved in debugging are:
 Problem identification and report preparation.
 Assigning the report to a software engineer to verify that
the defect is genuine.
 Defect Analysis using modeling, documentation, finding and
testing candidate flaws, etc.
 Defect Resolution by making required changes to the
system.
 Validation of corrections.
The debugging process will always have one of two outcomes:
1. The cause will be found and corrected.
2. The cause will not be found.
Later, the person performing debugging may suspect a cause,
design a test case to help validate that suspicion and work toward
error correction in an iterative fashion.
During debugging, we encounter errors that range from mildly
annoying to catastrophic. As the consequences of an error increase,
the amount of pressure to find the cause also increases. Such
pressure often forces a software developer to fix one error while
introducing two more at the same time.
Debugging Approaches/Strategies:
1. Brute Force: Study the system for a long duration in
order to understand it. This helps the debugger
construct different representations of the system to be
debugged, depending on the need. The system is also
studied actively to find recent changes made to the
software.
2. Backtracking: Backward analysis of the problem which
involves tracing the program backward from the location of
the failure message in order to identify the region of faulty
code. A detailed study of the region is conducted to find the
cause of defects.
3. Forward analysis: Tracing the program forward
using breakpoints or print statements at
different points in the program and studying the results. The
region where the wrong outputs are obtained is the region
that needs to be focused on to find the defect.
4. Using past experience: Debug the software using past
experience with problems similar in nature. The success of this
approach depends on the expertise of the debugger.
5. Cause elimination: It introduces the concept of binary
partitioning. Data related to the error occurrence are
organized to isolate potential causes.
Debugging Tools:
A debugging tool is a computer program that is used to test and
debug other programs. Many public-domain tools like gdb and
dbx are available for debugging. They offer console-based
command-line interfaces. Examples of automated debugging tools
include code based tracers, profilers, interpreters, etc. Some of the
widely used debuggers are:
 Radare2
 WinDbg
 Valgrind
Difference Between Debugging and Testing:
Debugging is different from testing. Testing focuses on finding bugs,
errors, etc whereas debugging starts after a bug has been identified
in the software. Testing is used to ensure that the program does
what it is supposed to do with a certain minimum success rate.
Testing can be manual or automated. There are several different
types of testing like unit testing, integration testing, alpha and beta
testing, etc. Debugging requires a lot of knowledge, skills, and
expertise. It can be supported by some automated tools available
but is more of a manual process as every bug is different and
requires a different technique, unlike a pre-defined testing
mechanism.

Advantages of Debugging:

There are several advantages of debugging in software
engineering:

1. Improved system quality: By identifying and resolving
bugs, a software system can be made more reliable and
efficient, resulting in improved overall quality.
2. Reduced system downtime: By identifying and resolving
bugs, a software system can be made more stable and less
likely to experience downtime, which can result in improved
availability for users.
3. Increased user satisfaction: By identifying and resolving
bugs, a software system can be made more user-friendly
and better able to meet the needs of users, which can result
in increased satisfaction.
4. Reduced development costs: By identifying and
resolving bugs early in the development process, it can save
time and resources that would otherwise be spent on fixing
bugs later in the development process or after the system
has been deployed.
5. Increased security: By identifying and resolving bugs that
could be exploited by attackers, a software system can be
made more secure, reducing the risk of security breaches.
6. Facilitates change: With debugging, it becomes easy to
make changes to the software as it becomes easy to identify
and fix bugs that would have been caused by the changes.
7. Better understanding of the system: Debugging can
help developers gain a better understanding of how a
software system works, and how different components of
the system interact with one another.
8. Facilitates testing: By identifying and resolving bugs, it
makes it easier to test the software and ensure that it meets
the requirements and specifications.
In summary, debugging is an important aspect of software
engineering as it helps to improve system quality, reduce system
downtime, increase user satisfaction, reduce development costs,
increase security, facilitate change, build a better understanding of
the system, and facilitate testing.

Disadvantages of Debugging:

While debugging is an important aspect of software engineering,
there are also some disadvantages to consider:
1. Time-consuming: Debugging can be a time-consuming
process, especially if the bug is difficult to find or reproduce.
This can cause delays in the development process and add
to the overall cost of the project.
2. Requires specialized skills: Debugging can be a complex
task that requires specialized skills and knowledge. This can
be a challenge for developers who are not familiar with the
tools and techniques used in debugging.
3. Can be difficult to reproduce: Some bugs may be difficult
to reproduce, which can make it challenging to identify and
resolve them.
4. Can be difficult to diagnose: Some bugs may be caused
by interactions between different components of a software
system, which can make it challenging to identify the root
cause of the problem.
5. Can be difficult to fix: Some bugs may be caused by
fundamental design flaws or architecture issues, which can
be difficult or impossible to fix without significant changes
to the software system.
6. Limited insight: In some cases, debugging tools can only
provide a limited insight into the problem and may not
provide enough information to identify the root cause of the
problem.
7. Can be expensive: Debugging can be an expensive
process, especially if it requires additional resources such as
specialized debugging tools or additional development time.
In summary, although debugging is an important aspect of software
engineering, it also has some disadvantages: it can be time-consuming,
requires specialized skills, can be difficult to reproduce, diagnose,
and fix, may offer limited insight, and can be expensive.

Debugging
The goal of debugging is to catch and correct errors, especially at an early
stage, and to provide tools to support bug finding, should bugs happen later in
production. Debugging is primarily performed in relation to coding. For
example:

 assertion checks of pre- and post-conditions at the beginning or end of
functions (if they fail, the execution aborts),
 logging in order to be able to analyse cause of bugs if they happen,
 extensive testing to find potential bugs,
 overnight coffee-driven activity till exhaustion or the bug is
exterminated.
Except for some general-purpose code to support debugging (especially
logging), debugging wouldn't directly influence software structure IMO.

Antibugging
The goal of antibugging is to prevent bugs from happening. This activity is
performed throughout the whole development process. For example:

 design that prevents conditions of bugs
 defensive coding that prevents error propagation and ensures
that special cases are properly handled
 automatic error correction or recovery strategies (e.g. relaunching
a service that aborted but is needed)
This kind of prevention should be undertaken from the start of the
development and should be rooted in the design and the software structure
(e.g. API design, exception management). It could hence influence the
software architecture. It also includes traditional defensive programming,
to offer an alternate path of execution that gracefully handles error
conditions.

So debugging and antibugging have clear boundaries.


Programming Languages

A computer system is simply a machine and hence cannot perform any work on its own; therefore, in order
to make it functional, different languages have been developed, which are known as programming languages
or simply computer languages.

Over the last two decades, dozens of computer languages have been developed. Each of these
languages comes with its own set of vocabulary and rules, better known as syntax. Furthermore,
while writing the computer language, syntax has to be followed literally, as even a small mistake will
result in an error and not generate the required output.

Following are the major categories of Programming Languages −

 Machine Language

 Assembly Language

 High Level Language

 System Language

 Scripting Language

Let us discuss the programming languages in brief.

Machine Language or Code

This is the language that is written for the computer hardware. Such a language is executed directly by
the central processing unit (CPU) of a computer system.

Assembly Language

It is an encoding of machine code that makes it simpler and more readable.

High Level Language


A high-level language is simple and easy to understand, and it is similar to the English language. For
example, COBOL, FORTRAN, BASIC, C, C++, Python, etc.

High-level languages are very important, as they help in developing complex software and they have
the following advantages −

 Unlike assembly language or machine language, users do not need to know the details of the
machine architecture in order to work with a high-level language.

 High-level languages are similar to natural languages, therefore, easy to learn and
understand.

 High-level language is designed in such a way that it detects the errors immediately.

 High-level language is easy to maintain and it can be easily modified.

 High-level language makes development faster.

 High-level language is comparatively cheaper to develop.

 High-level language is easier to document.

Although a high-level language has many benefits, it also has a drawback: it offers poor control over
the machine/hardware.

The following table lists down the frequently used languages −

.NET Framework
.NET is a framework to develop software applications. It is designed and
developed by Microsoft, and its first beta version was released in 2000.
It is used to develop applications for the web, Windows, and phones. Moreover, it
provides a broad range of functionalities and support.

This framework contains a large number of class libraries known as the
Framework Class Library (FCL). The software programs written in .NET are
executed in an execution environment called the CLR (Common
Language Runtime). These are the core and essential parts of the .NET
framework.

This framework provides various services like memory management,
networking, security, and type-safety.

The .NET Framework supports more than 60 programming languages such
as C#, F#, VB.NET, J#, VC++, JScript.NET, APL, COBOL, Perl, Oberon, ML,
Pascal, Eiffel, Smalltalk, Python, Cobra, ADA, etc.

Following is the .NET framework Stack that shows the modules and
components of the Framework.

The .NET Framework is composed of four main components:

1. Common Language Runtime (CLR)


2. Framework Class Library (FCL),
3. Core Languages (WinForms, ASP.NET, and ADO.NET), and
4. Other Modules (WCF, WPF, WF, CardSpace, LINQ, Entity Framework,
Parallel LINQ, Task Parallel Library, etc.)
CLR (Common Language Runtime)
It is a program execution engine that loads and executes the program. It
converts the program into native code. It acts as an interface between the
framework and operating system. It does exception handling, memory
management, and garbage collection. Moreover, it provides security,
type-safety, interoperability, and portability. A list of CLR components is
given below:

FCL (Framework Class Library)

It is a standard library that is a collection of thousands of classes used
to build an application. The BCL (Base Class Library) is the core of the FCL
and provides basic functionalities.
WinForms
Windows Forms is a smart client technology for the .NET Framework, a set
of managed libraries that simplify common application tasks such as
reading and writing to the file system.

ASP.NET
ASP.NET is a web framework designed and developed by Microsoft. It is
used to develop websites, web applications, and web services. It provides
a fantastic integration of HTML, CSS, and JavaScript. It was first released
in January 2002.

ADO.NET
ADO.NET is a module of the .NET Framework, which is used to establish a
connection between an application and data sources, such as SQL Server
and XML. ADO.NET consists of classes that can be
used to connect, retrieve, insert, and delete data.

WPF (Windows Presentation Foundation)


Windows Presentation Foundation (WPF) is a graphical subsystem by
Microsoft for rendering user interfaces in Windows-based applications.
WPF, previously known as "Avalon", was initially released as part of .NET
Framework 3.0 in 2006. WPF uses DirectX.

WCF (Windows Communication Foundation)


It is a framework for building service-oriented applications. Using WCF,
you can send data as asynchronous messages from one service endpoint
to another.

WF (Workflow Foundation)
Windows Workflow Foundation (WF) is a Microsoft technology that
provides an API, an in-process workflow engine, and a rehostable designer
to implement long-running processes as workflows within .NET
applications.

LINQ (Language Integrated Query)


It is a query language, introduced in .NET 3.5 framework. It is used to
make queries for data sources with the C# or Visual Basic programming
languages.

Entity Framework
It is an ORM-based open-source framework which is used to work with a
database using .NET objects. It eliminates a lot of developer effort to
handle the database. It is Microsoft's recommended technology to deal
with the database.

Parallel LINQ
Parallel LINQ or PLINQ is a parallel implementation of LINQ to objects. It
combines the simplicity and readability of LINQ and provides the power of
parallel programming.

It can speed up the execution of LINQ queries by using
all available computing capabilities.

Apart from the above features and libraries, .NET includes other APIs and
Model to improve and enhance the .NET framework.

The Task Parallel Library and Parallel LINQ were added in .NET 4.0, and a
task-based asynchronous model was added in .NET 4.5.
Why use C?
C was initially used for system development work, particularly the
programs that make-up the operating system. C was adopted as a system
development language because it produces code that runs nearly as fast
as the code written in assembly language. Some examples of the use of C
might be −

 Operating Systems
 Language Compilers
 Assemblers
 Text Editors
 Print Spoolers
 Network Drivers
 Modern Programs
 Databases
 Language Interpreters
 Utilities

C Programs
A C program can vary from 3 lines to millions of lines and it should be
written into one or more text files with extension ".c"; for
example, hello.c. You can use "vi", "vim" or any other text editor to write
your C program into a file.
This tutorial assumes that you know how to edit a text file and how to
write source code inside a program file.

C - Environment Setup

Local Environment Setup


If you want to set up your environment for C programming language, you
need the following two software tools available on your computer, (a) Text
Editor and (b) The C Compiler.

Text Editor
This will be used to type your program. Examples of a few editors include
Windows Notepad, the OS Edit command, Brief, Epsilon, EMACS, and vim or vi.
The name and version of text editors can vary on different operating
systems. For example, Notepad will be used on Windows, and vim or vi
can be used on Windows as well as on Linux or UNIX.
The files you create with your editor are called the source files and they
contain the program source codes. The source files for C programs are
typically named with the extension ".c".
Before starting your programming, make sure you have one text editor in
place and you have enough experience to write a computer program, save
it in a file, compile it and finally execute it.

The C Compiler
The source code written in a source file is the human-readable source for
your program. It needs to be "compiled" into machine language so that
your CPU can actually execute the program as per the instructions given.
The compiler compiles the source codes into final executable programs.
The most frequently used and freely available compiler is the GNU C/C++
compiler; otherwise, you can have compilers either from HP or Solaris if
you have the respective operating systems.
The following section explains how to install GNU C/C++ compiler on
various OS. We keep mentioning C/C++ together because GNU gcc
compiler works for both C and C++ programming languages.

Installation on UNIX/Linux
If you are using Linux or UNIX, then check whether GCC is installed on
your system by entering the following command from the command line −
$ gcc -v
If you have GNU compiler installed on your machine, then it should print a
message as follows −
Using built-in specs.
Target: i386-redhat-linux
Configured with: ../configure --prefix=/usr .......
Thread model: posix
gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)
If GCC is not installed, then you will have to install it yourself using the
detailed instructions available at https://gcc.gnu.org/install/
This tutorial has been written based on Linux and all the given examples
have been compiled on the CentOS flavor of the Linux system.

Installation on Mac OS
If you use Mac OS X, the easiest way to obtain GCC is to download the
Xcode development environment from Apple's web site and follow the
simple installation instructions. Once you have Xcode setup, you will be
able to use GNU compiler for C/C++.
Xcode is currently available at developer.apple.com/technologies/tools/.

Installation on Windows
To install GCC on Windows, you need to install MinGW. To install MinGW,
go to the MinGW homepage, www.mingw.org, and follow the link to the
MinGW download page. Download the latest version of the MinGW
installation program, which should be named MinGW-<version>.exe.
While installing MinGW, at a minimum, you must install gcc-core, gcc-g++,
binutils, and the MinGW runtime, but you may wish to install more.
Add the bin subdirectory of your MinGW installation to
your PATH environment variable, so that you can specify these tools on
the command line by their simple names.
After the installation is complete, you will be able to run gcc, g++, ar,
ranlib, dlltool, and several other GNU tools from the Windows command
line.

Features of C Language

C is a widely used language. It provides many features that are given
below.
1. Simple
2. Machine Independent or Portable
3. Mid-level programming language
4. Structured programming language
5. Rich Library
6. Memory Management
7. Fast Speed
8. Pointers
9. Recursion
10. Extensible

1) Simple
C is a simple language in the sense that it provides a structured
approach (to break the problem into parts), a rich set of library
functions, data types, etc.

2) Machine Independent or Portable


Unlike assembly language, C programs can be executed on different
machines with some machine-specific changes. Therefore, C is a machine-
independent language.

3) Mid-level programming language


C is intended to do low-level programming. It is used to
develop system applications such as kernels, drivers, etc. It also supports
the features of a high-level language. That is why it is known as a mid-
level language.

4) Structured programming language


C is a structured programming language in the sense that we can break
the program into parts using functions. So, it is easy to understand
and modify. Functions also provide code reusability.

5) Rich Library
C provides a lot of inbuilt functions that make the development fast.
6) Memory Management
It supports the feature of dynamic memory allocation. In C language,
we can free the allocated memory at any time by calling
the free() function.

7) Speed
The compilation and execution time of the C language is fast since there are
fewer inbuilt functions and hence lower overhead.

8) Pointer
C provides the feature of pointers. We can directly interact with the
memory by using the pointers. We can use pointers for memory,
structures, functions, arrays, etc.

9) Recursion
In C, we can call a function within the same function. This provides code
reusability. Recursion enables us to use the approach of
backtracking.

10) Extensible
C language is extensible because it can easily adopt new features.

First C Program
Before starting with the basics of the C language, you need to learn how to write,
compile and run your first C program.

To write the first c program, open the C console and write the following
code:

#include <stdio.h>
int main() {
    printf("Hello C Language");
    return 0;
}

#include <stdio.h> includes the standard input/output library
functions. The printf() function is defined in stdio.h.

int main() The main() function is the entry point of every
program in the C language.

printf() The printf() function is used to print data on the console.

return 0 The return 0 statement returns the execution status to the OS. The
value 0 is used for successful execution and a non-zero value for
unsuccessful execution.

How to compile and run the C program

There are two ways to compile and run the C program: by menu and by
shortcut.

By menu
Click on the Compile menu, then the Compile submenu, to compile
the C program.

Then click on the Run menu, then the Run submenu, to run the C program.

By shortcut
Or, press the ctrl+f9 keys to compile and run the program directly.

You will see the output on the user screen.

You can view the user screen at any time by pressing the alt+f5 keys.

Now press Esc to return to the Turbo C++ console.

Compilation process in c
What is a compilation?
The compilation is a process of converting the source code into object
code. It is done with the help of the compiler. The compiler checks the
source code for syntactical or structural errors, and if the source code
is error-free, then it generates the object code.

The C compilation process converts the source code taken as input into
object code or machine code. The compilation process can be divided
into four steps, i.e., Preprocessing, Compiling, Assembling, and Linking.

The preprocessor takes the source code as an input, and it removes all
the comments from the source code. The preprocessor takes the
preprocessor directive and interprets it. For example, if the <stdio.h>
directive is present in the program, then the preprocessor interprets the
directive and replaces it with the content of the 'stdio.h' file.

The following are the phases through which our program passes before
being transformed into an executable form:

o Preprocessor
o Compiler
o Assembler
o Linker
Preprocessor
The source code is the code which is written in a text editor and the
source code file is given an extension ".c". This source code is first passed
to the preprocessor, and then the preprocessor expands this code. After
expanding the code, the expanded code is passed to the compiler.

Compiler
The code which is expanded by the preprocessor is passed to the
compiler. The compiler converts this code into assembly code. Or we can
say that the C compiler converts the pre-processed code into assembly
code.

Assembler
The assembly code is converted into object code by using an assembler.
The name of the object file generated by the assembler is the same as the
source file. The extension of the object file in DOS is '.obj', and in UNIX,
the extension is '.o'. If the name of the source file is 'hello.c', then the
name of the object file would be 'hello.obj'.

Linker
Mainly, all the programs written in C use library functions. These library
functions are pre-compiled, and the object code of these library files is
stored with '.lib' (or '.a') extension. The main working of the linker is to
combine the object code of library files with the object code of our
program. Sometimes the situation arises when our program refers to
functions defined in other files; the linker then plays a very important role in
this. It links the object code of these files to our program. Therefore, we
conclude that the job of the linker is to link the object code of our program
with the object code of the library files and other files. The output of the
linker is the executable file. The name of the executable file is the same
as the source file but differs only in their extensions. In DOS, the
extension of the executable file is '.exe', and in UNIX, the executable file
can be named as 'a.out'. For example, if we are using printf() function in a
program, then the linker adds its associated code in an output file.

Let's understand through an example.

hello.c

#include <stdio.h>
int main()
{
    printf("Hello javaTpoint");
    return 0;
}

Now, we will create a flow diagram of the above program:


In the above flow diagram, the following steps are taken to
execute a program:
o Firstly, the input file, i.e., hello.c, is passed to the preprocessor, and the
preprocessor converts the source code into expanded source code. The
extension of the expanded source code would be hello.i.
o The expanded source code is passed to the compiler, and the compiler
converts this expanded source code into assembly code. The extension of
the assembly code would be hello.s.
o This assembly code is then sent to the assembler, which converts the
assembly code into object code.
o After the creation of an object code, the linker creates the executable file.
The loader will then load the executable file for the execution.

Tokens in C
A C program consists of various tokens and a token is either a keyword,
an identifier, a constant, a string literal, or a symbol. For example, the
following C statement consists of five tokens −
printf("Hello, World! \n");
The individual tokens are −
printf
(
"Hello, World! \n"
)
;

Semicolons
In a C program, the semicolon is a statement terminator. That is, each
individual statement must be ended with a semicolon. It indicates the end
of one logical entity.
Given below are two different statements −
printf("Hello, World! \n");
return 0;

Comments
Comments are like helping text in your C program and they are ignored by
the compiler. They start with /* and terminate with the characters */ as
shown below −
/* my first program in C */
Comments cannot be nested, and comment markers have no special
meaning inside string or character literals.

Identifiers
A C identifier is a name used to identify a variable, function, or any other
user-defined item. An identifier starts with a letter A to Z, a to z, or an
underscore '_' followed by zero or more letters, underscores, and digits (0
to 9).
C does not allow punctuation characters such as @, $, and % within
identifiers. C is a case-sensitive programming language.
Thus, Manpower and manpower are two different identifiers in C. Here are
some examples of acceptable identifiers −
mohd zara abc move_name a_123
myname50 _temp j a23b9 retVal

Keywords
The following table shows the reserved words in C. These reserved words
may not be used as constant names, variable names, or any other identifier
names.

auto       double     int        struct
break      else       long       switch
case       enum       register   typedef
char       extern     return     union
const      float      short      unsigned
continue   for        signed     void
default    goto       sizeof     volatile
do         if         static     while

(_Packed also appears as a reserved word in some compilers, but it is a
non-standard extension.)

Whitespace in C
A line containing only whitespace, possibly with a comment, is known as a
blank line, and a C compiler totally ignores it.
Whitespace is the term used in C to describe blanks, tabs, newline
characters and comments. Whitespace separates one part of a statement
from another and enables the compiler to identify where one element in a
statement, such as int, ends and the next element begins. Therefore, in
the following statement −
int age;
there must be at least one whitespace character (usually a space)
between int and age for the compiler to be able to distinguish them. On
the other hand, in the following statement −
fruit = apples + oranges; // get the total fruit
no whitespace characters are necessary between fruit and =, or between
= and apples, although you are free to include some if you wish to
increase readability.

C - Data Types
Data types in C refer to an extensive system used for declaring variables
or functions of different types. The type of a variable determines how
much space it occupies in storage and how the bit pattern stored is
interpreted.
The types in C can be classified as follows –

Sr.No.  Types & Description

1       Basic Types
        They are arithmetic types and are further classified into: (a)
        integer types and (b) floating-point types.

2       Enumerated types
        They are again arithmetic types and they are used to define
        variables that can only be assigned certain discrete integer
        values throughout the program.

3       The type void
        The type specifier void indicates that no value is available.

4       Derived types
        They include (a) Pointer types, (b) Array types, (c) Structure
        types, (d) Union types and (e) Function types.

The array types and structure types are referred to collectively as the
aggregate types. The type of a function specifies the type of the
function's return value. We will see the basic types in the following
section, whereas the other types will be covered in the upcoming chapters.

Integer Types
The following table provides the details of standard integer types with
their storage sizes and value ranges −

Type             Storage size   Value range

char             1 byte         -128 to 127 or 0 to 255
unsigned char    1 byte         0 to 255
signed char      1 byte         -128 to 127
int              2 or 4 bytes   -32,768 to 32,767 or
                                -2,147,483,648 to 2,147,483,647
unsigned int     2 or 4 bytes   0 to 65,535 or 0 to 4,294,967,295
short            2 bytes        -32,768 to 32,767
unsigned short   2 bytes        0 to 65,535
long             8 bytes        -9,223,372,036,854,775,808 to
                                9,223,372,036,854,775,807
unsigned long    8 bytes        0 to 18,446,744,073,709,551,615

To get the exact size of a type or a variable on a particular platform, you
can use the sizeof operator. The expression sizeof(type) yields the
storage size of the object or type in bytes. Given below is an example that
prints the limits of various types on a machine, using the constants
defined in the limits.h header file −

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
#include <float.h>

int main(int argc, char** argv) {

   printf("CHAR_BIT  : %d\n", CHAR_BIT);
   printf("CHAR_MAX  : %d\n", CHAR_MAX);
   printf("CHAR_MIN  : %d\n", CHAR_MIN);
   printf("INT_MAX   : %d\n", INT_MAX);
   printf("INT_MIN   : %d\n", INT_MIN);
   printf("LONG_MAX  : %ld\n", (long) LONG_MAX);
   printf("LONG_MIN  : %ld\n", (long) LONG_MIN);
   printf("SCHAR_MAX : %d\n", SCHAR_MAX);
   printf("SCHAR_MIN : %d\n", SCHAR_MIN);
   printf("SHRT_MAX  : %d\n", SHRT_MAX);
   printf("SHRT_MIN  : %d\n", SHRT_MIN);
   printf("UCHAR_MAX : %d\n", UCHAR_MAX);
   printf("UINT_MAX  : %u\n", (unsigned int) UINT_MAX);
   printf("ULONG_MAX : %lu\n", (unsigned long) ULONG_MAX);
   printf("USHRT_MAX : %d\n", (unsigned short) USHRT_MAX);

   return 0;
}
When you compile and execute the above program, it produces the
following result on Linux −
CHAR_BIT : 8
CHAR_MAX : 127
CHAR_MIN : -128
INT_MAX : 2147483647
INT_MIN : -2147483648
LONG_MAX : 9223372036854775807
LONG_MIN : -9223372036854775808
SCHAR_MAX : 127
SCHAR_MIN : -128
SHRT_MAX : 32767
SHRT_MIN : -32768
UCHAR_MAX : 255
UINT_MAX : 4294967295
ULONG_MAX : 18446744073709551615
USHRT_MAX : 65535

What is Correctness?
Correctness, from a software engineering perspective, can be defined as
adherence to the specifications that determine how users can interact
with the software and how the software should behave when it is used
correctly.
If the software behaves incorrectly, it might take a considerable amount
of time to accomplish the task, or it may be impossible to accomplish at
all.

Important rules:
Below are some of the important rules for effective programming, which
follow from program correctness theory.
 Define the problem completely.
 Develop the algorithm before writing the program logic.
 Reuse proven models as much as possible.
 Prove the correctness of algorithms during the design phase.
 Pay attention to the clarity and simplicity of the program.
 Verify each part of a program as soon as it is developed.
