1908503-Software Engineering

Prepared by
V.Prema AP/CSE
UNIT I

SOFTWARE PROCESS AND AGILE DEVELOPMENT
What is Software?

 Software is a set of instructions to acquire inputs and to manipulate them to produce the desired output in terms of functions and performance, as determined by the user of the software.
 It also includes a set of documents, such as the software manual, meant for users to understand the software system.
Description of the Software

 A software product is described by its capabilities. The capabilities relate to the functions it executes, the features it provides and the facilities it offers.
EXAMPLE
 Software written for sales-order processing would have different functions to process different types of sales orders from different market segments.
 The features, for example, would be handling multi-currency computation and updating product, sales and tax status.
 The facilities could be printing of sales orders, email to customers, and reports to the store department to dispatch the goods.
Classes of Software

Software is classified into two classes:
 Generic Software: designed for a broad customer market whose requirements are very common, fairly stable and well understood by the software engineer.
 Customized Software: developed for a single customer whose domain, environment and requirements are unique to that customer and cannot be satisfied by generic products.
What is Good Software?

 Software has a number of attributes which decide whether it is good or bad.
 The definition of good software changes with the person who evaluates it.
 The software is required by the customer, used by the end users of an organization, and developed by a software engineer.
 Each one evaluates different attributes differently in order to decide whether the software is good.
What are the attributes of good software?

The software should deliver the required functionality and performance to the user and should be maintainable, dependable, efficient and usable.
 Maintainability - Software must evolve to meet changing needs
 Dependability - Software must be trustworthy
 Efficiency - Software should not make wasteful use of system resources
 Usability - Software must be usable by the users for whom it was designed
Software - Characteristics

 Software has a dual role. It is a product, but also a vehicle for delivering a product.
 Software is a logical rather than a physical system element.
 Software has characteristics that differ considerably from those of hardware:
 Software is developed or engineered; it is not manufactured in the classical sense.
 Software doesn’t “wear out”.
 Most software is custom-built, rather than being assembled from existing components.
Changing nature of software (Types)
 System Software - A collection of programs written to service other programs at the system level. For example, compilers and operating systems.
 Real-time Software - Programs that monitor/analyze/control real-world events as they occur.
 Business Software - Programs that access, analyze and process business information.
 Engineering and Scientific Software - Software using “number crunching” algorithms for various science and engineering applications, e.g., system simulation and computer-aided design.
Changing nature of software (Types)
 Embedded Software - Embedded software resides in read-only memory and is used to control products and systems for the consumer and industrial markets. It has very limited and esoteric functions and control capability.
 Artificial Intelligence (AI) Software - Programs that make use of AI techniques and methods to solve complex problems. Active areas are expert systems, pattern recognition and games.
 Internet Software - Programs that support internet access and applications. For example, search engines, browsers, e-commerce software and authoring tools.
Software Engineering

 “A systematic approach to the analysis, design, implementation and maintenance of software.”
(The Free On-Line Dictionary of Computing)
 “The systematic application of tools and techniques in the development of computer-based applications.”
(Sue Conger in The New Software Engineering)
 “Software Engineering is about designing and developing high-quality software.”
(Shari Lawrence Pfleeger in Software Engineering -- The Production of Quality Software)
What is Software Engineering?
 Engineering: The Application of Science to the Solution of Practical Problems
 Software Engineering: The Application of CS to Building Practical Software Systems
 Programming
 Individual Writes Complete Program
 One Person, One Computer
 Well-Defined Problem
 Programming-in-the-Small
 Software Engineering
 Individuals Write Program Components
 Team Assembles Complete Program
 Programming-in-the-Large
What is the difference between software engineering and computer science?

 Computer science is concerned with theory and fundamentals.
 Software engineering is concerned with the practicalities of developing and delivering useful software.

Computer science theories are currently insufficient to act as a complete underpinning for software engineering, BUT they are a foundation for the practical aspects of software engineering.
What is Software Engineering?

Although hundreds of authors have developed personal definitions of software engineering, a definition proposed by Fritz Bauer [NAU69] provides a basis:

 “[Software engineering is] the establishment and use of sound engineering principles in order to obtain economically software that is reliable and works efficiently on real machines.”

The IEEE [IEE93] has developed a more comprehensive definition when it states:

 “Software Engineering: (1) The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software. (2) The study of approaches as in (1).”
Software engineering is a layered technology -
Pressman’s view:

Software Engineering Layers


What is Software Engineering?

 Software methods:
 Software engineering methods provide the technical “how-to’s” for building software.
 Methods encompass a broad array of tasks: requirements analysis, design, coding, testing, and maintenance.
 Software engineering methods rely on a set of basic principles.
What is Software Engineering?

 Software process:
Software engineering process is the glue that holds the technology layers together and enables rational and timely development of computer software.
Software engineering process is a framework of a set of key process areas.
It forms a basis for:
- project management, budget and schedule control
- application of technical methods
- product quality control
What is Software Engineering?

 Software tools:
- programs provide automated or semi-automated support
for the process and methods.
- programs support engineers to perform their tasks in a
systematic and/or automatic manner.
Why Software Engineering?

 Objectives:
- Identify new problems and solutions in software production.
- Study new systematic methods, principles and approaches for system analysis, design, implementation, testing and maintenance.
- Provide new ways to control, manage and monitor the software process.
- Build new software tools and environments to support software engineering.
Why Software Engineering?

Major Goals:
- To increase software productivity and quality.
- To effectively control software schedule and planning.
- To reduce the cost of software development.
- To meet the customers’ needs and requirements.
- To enhance the conduct of the software engineering process.
- To improve current software engineering practice.
- To support the engineers’ activities in a systematic and efficient manner.
A Process Framework
Process Framework Activities

 Communication
 Planning
 Modeling
 Construction
 Deployment
Umbrella Activities

 Software project tracking & control
 Risk management
 Formal technical reviews
 Software configuration management
 Reusability management
PROCESS MODELS
SDLC Process Model

SDLC – Software Development Life Cycle

A process model is a framework that describes the activities, actions, tasks, milestones and work products performed at each stage of a software development project that leads to high-quality software.
Life cycle models

 Waterfall model
 Incremental process models
 Incremental model
 RAD model
 Evolutionary Process Models
 Prototyping model
 Spiral model
 Object oriented process model
WATERFALL MODEL
a.k.a. linear life cycle model or classic life cycle model

The phases flow one into the next:
 COMMUNICATION - project initiation, requirements gathering
 PLANNING - estimating, scheduling, tracking
 MODELLING - analysis, design
 CONSTRUCTION - code, test
 DEPLOYMENT - delivery, support, feedback
WATERFALL MODEL

 Project initiation & requirement gathering
 What is the Problem to Solve?
 What Does the Customer Need/Want?
 Interactions Between SE and Customer
 Identify and Document System Requirements
 Generate User Manuals and Test Plans
• Planning
 Prioritize the requirements
 Plan the process
WATERFALL MODEL

 Analysis and design
 How is the Problem to be Solved?
 High-Level Design
 Determine Components/Modules
 Transition to Detailed Design
 Detail Functionality of Components/Modules
 Coding and Testing
 Writing Code to Meet Component/Module Design Specifications
 Test Individual Modules in Isolation
 Integration of Components/Modules into Subsystems
 Integration of Subsystems into Final Program
WATERFALL MODEL

 Deployment
 System Delivered to Customer/Market
 Bug Fixes and Version Releases Over Time
Strengths
 Easy to understand, easy to use
 Provides structure to inexperienced staff
 Milestones are well understood
 Sets requirements stability
 Good for management control (plan, staff, track)
 Works well when quality is more important than cost or schedule
Waterfall Drawbacks

 Not all projects can follow a linear process
 All requirements must be known upfront
 Few business systems have stable requirements
 The customer must have patience; a working version of the program will not be available until late in the project time-span
 Leads to ‘blocking states’
 Accommodates change poorly
When to use the Waterfall Model
 Requirements are very well known
 Product definition is stable
 Technology is understood
 New version of an existing product
 Porting an existing product to a new platform.
Incremental SDLC Model

 Combines the elements of the waterfall model in an iterative fashion
 Construct a partial implementation of a total system
 Then slowly add increased functionality
 User requirements are prioritised and the highest-priority requirements are included in early increments
 Each subsequent release of the system adds function to the previous release, until all designed functionality has been implemented
Incremental Model
Weaknesses
 Requires good planning and design
 Requires early definition of a complete and fully
functional system to allow for the definition of
increments
 Well-defined module interfaces are required (some will
be developed long before others)
 Total cost of the complete system is not lower
When to use the Incremental Model
 When staffing is not available by the deadline
 Most of the requirements are known up-front but are expected to evolve over time
 When the software can be broken into increments and each increment represents a solution
 A need to get basic functionality to the market early
 On projects which have lengthy development schedules
 On a project with new technology
RAD MODEL
Rapid Application Development Model
 An incremental process model that emphasizes a short development cycle
 A “high-speed” adaptation of the waterfall model
 The RAD approach also maps into the generic framework activities
Drawbacks of RAD

 For large projects, RAD requires sufficient human resources to create the right number of RAD teams.
 If developers & customers are not committed to rapid-fire activities, RAD projects will fail.
 If the system cannot be properly modularized, building the components will be problematic.
 If high performance is an issue, RAD may not work.
 RAD may be inappropriate when technical risks are high.
When to use RAD

 Reasonably well-known requirements


 User involved throughout the life cycle
 Project can be time-boxed
 Functionality delivered in increments
 High performance not required
 Low technical risks
 System can be modularized
EVOLUTIONARY PROCESS MODEL
 These models produce an increasingly more complete version of the software with each iteration
 When to use
 Tight market deadlines
 System requirements are well understood in outline
 Details of the system definition are not yet known
SPIRAL MODEL
Spiral Model Strengths

• Focuses attention on reuse options.
• Focuses attention on early error elimination.
• Puts quality objectives up front.
• Integrates development and maintenance.
• Provides a framework for hardware/software development.
Drawbacks of the Spiral Model

 It may be difficult to convince customers that the evolutionary approach is controllable.
 It demands risk assessment expertise and relies on this expertise for success.
 If a major risk is not uncovered and managed, problems will occur.
When to use Spiral Model

 When creation of a prototype is appropriate


 When costs and risk evaluation is important
 For medium to high-risk projects
 Long-term project commitment unwise because of potential
changes to economic priorities
 Users are unsure of their needs
 Requirements are complex
 New product line
 Significant changes are expected
UNIT II

REQUIREMENT ANALYSIS AND SPECIFICATION
What is Software Requirement Specification [SRS]?

 A software requirements specification (SRS) is a document that captures a complete description of how the system is expected to perform. It is usually signed off at the end of the requirements engineering phase.
Qualities of SRS:

 Correct
 Unambiguous
 Complete
 Consistent
 Ranked for importance and/or stability
 Verifiable
 Modifiable
 Traceable
Types of Requirements:

 The various types of requirements that are captured during the SRS are described below.
Software Requirement
Software requirements are descriptions of the features and functionalities of the target system. Requirements convey the expectations of users from the software product. The requirements can be obvious or hidden, known or unknown, expected or unexpected from the client’s point of view.
Requirement Engineering
The process to gather the software requirements from the client, and to analyze and document them, is known as requirement engineering.
The goal of requirement engineering is to develop and maintain a sophisticated and descriptive ‘System Requirements Specification’ document.
Requirement Engineering Process
It is a four step process, which includes –
Feasibility Study
Requirement Gathering
Software Requirement Specification
Software Requirement Validation
Feasibility study
 When the client approaches the organization to get the desired product developed, they come up with a rough idea about what functions the software must perform and which features are expected from the software.
 With reference to this information, the analysts do a detailed study about whether the desired system and its functionality are feasible to develop.
 This feasibility study is focused on the goals of the organization. It analyzes whether the software product can be practically materialized in terms of implementation, contribution of the project to the organization, cost constraints, and the values and objectives of the organization. It explores technical aspects of the project and product such as usability, maintainability, productivity and integration ability.
 The output of this phase should be a feasibility study report that contains adequate comments and recommendations for management about whether or not the project should be undertaken.
Requirement Gathering

 If the feasibility report is positive towards undertaking the project, the next phase starts with gathering requirements from the user. Analysts and engineers communicate with the client and end-users to learn their ideas on what the software should provide and which features they want the software to include.
Software Requirement Specification
SRS is a document created by the system analyst after the requirements are collected from various stakeholders.
SRS defines how the intended software will interact with hardware and external interfaces, the speed of operation, the response time of the system, the portability of the software across various platforms, maintainability, speed of recovery after crashing, security, quality, limitations, etc.
The requirements received from the client are written in natural language. It is the responsibility of the system analyst to document the requirements in technical language so that they can be comprehended by, and be useful to, the software development team.
SRS should include the following features:
User requirements are expressed in natural language.
Technical requirements are expressed in structured language, which is used inside the organization.
Design description should be written in pseudo code.
Format of forms and GUI screen prints.
Conditional and mathematical notations for DFDs, etc.
Software Requirement Validation
After requirement specifications are developed, the requirements mentioned in this document are validated. The user might ask for an illegal or impractical solution, or experts may interpret the requirements incorrectly. This results in a huge increase in cost if not nipped in the bud.
Requirements can be checked against the following conditions -
If they can be practically implemented
If they are valid and as per the functionality and domain of the software
If there are any ambiguities
If they are complete
If they can be demonstrated
Requirement Elicitation Process
 Requirements gathering - The developers discuss with the client and end users and learn their expectations from the software.
 Organizing requirements - The developers prioritize and arrange the requirements in order of importance, urgency and convenience.
 Negotiation & discussion - If requirements are ambiguous or there are conflicts in the requirements of various stakeholders, they are negotiated and discussed with the stakeholders. Requirements may then be prioritized and reasonably compromised.
 The requirements come from various stakeholders. To remove ambiguity and conflicts, they are discussed for clarity and correctness. Unrealistic requirements are compromised reasonably.
 Documentation - All formal & informal, functional and non-functional requirements are documented and made available for next-phase processing.
Requirement Elicitation Techniques
Requirements elicitation is the process of finding out the requirements for an intended software system by communicating with the client, end users, system users and others who have a stake in the software system development.
There are various ways to discover requirements:
Interviews
Interviews are a strong medium to collect requirements. An organization may conduct several types of interviews, such as:
Structured (closed) interviews, where every piece of information to gather is decided in advance; these follow the pattern and matter of discussion firmly.
Non-structured (open) interviews, where the information to gather is not decided in advance; these are more flexible and less biased.
Oral interviews
Written interviews
One-to-one interviews, which are held between two persons across the table.
Group interviews, which are held between groups of participants. They help to uncover any missing requirements, as numerous people are involved.
Requirement Elicitation Techniques Cont…
Surveys
An organization may conduct surveys among various stakeholders by querying about their expectations of, and requirements from, the upcoming system.
Questionnaires
A document with a pre-defined set of objective questions and respective options is handed over to all stakeholders to answer; the answers are collected and compiled.
A shortcoming of this technique is that if an option for some issue is not mentioned in the questionnaire, the issue might be left unattended.
Task analysis
A team of engineers and developers may analyze the operation for which the new system is required. If the client already has some software to perform a certain operation, it is studied and the requirements of the proposed system are collected.
Requirement Elicitation Techniques Cont…
Domain Analysis
Every software product falls into some domain category. Expert people in the domain can be a great help in analyzing general and specific requirements.
Brainstorming
An informal debate is held among various stakeholders and all their inputs are recorded for further requirements analysis.
Prototyping
Prototyping is building a user interface without adding detailed functionality, for the user to interpret the features of the intended software product. It helps give a better idea of the requirements. If there is no software installed at the client’s end for the developer’s reference, and the client is not aware of its own requirements, the developer creates a prototype based on the initially mentioned requirements. The prototype is shown to the client and the feedback is noted. The client feedback serves as an input for requirement gathering.
Software Requirements Characteristics
Gathering software requirements is the foundation of the entire software development project. Hence they must be clear, correct and well-defined.
A complete Software Requirement Specification must be:
Clear

Correct

Consistent

Coherent

Comprehensible

Modifiable

Verifiable

Prioritized

Unambiguous

Traceable

Credible source
Software Requirements
We should try to understand what sort of requirements may arise in the requirement elicitation phase and what kinds of requirements are expected from the software system.
Broadly, software requirements are categorized into two categories:
Functional Requirements
Requirements which are related to the functional aspect of the software fall into this category.
They define functions and functionality within and from the software system.
Examples -
A search option given to the user to search from various invoices.
The user should be able to mail any report to management.
Users can be divided into groups and groups can be given separate rights.
Should comply with business rules and administrative functions.
Software is developed keeping downward compatibility intact.
Software Requirements
Non-Functional Requirements
Requirements which are not related to the functional aspect of the software fall into this category. They are implicit or expected characteristics of the software, which users make assumptions about.
Non-functional requirements include -
Security

Logging

Storage

Configuration

Performance

Cost

Interoperability

Flexibility

Disaster recovery
Accessibility
User Interface requirements
 UI is an important part of any software, hardware or hybrid system. A software product is widely accepted if it is -
 easy to operate
 quick in response
 effective in handling operational errors
 providing a simple yet consistent user interface
Software Metrics and Measures
Software Measures can be understood as a process of quantifying and symbolizing various attributes and aspects of software.
Software Metrics provide measures for various aspects of the software process and software product.
Let us see some software metrics:
Size Metrics - LOC (Lines of Code), mostly calculated in thousands of delivered source code lines, denoted as KLOC.
Function Point Count is a measure of the functionality provided by the software. Function point count defines the size of the functional aspect of the software.
Complexity Metrics - McCabe’s cyclomatic complexity quantifies the upper bound of the number of independent paths in a program, which is perceived as the complexity of the program or its modules. It is represented in terms of graph theory concepts by using the control flow graph.
Quality Metrics - Defects, their types and causes, consequences, intensity of severity and their implications define the quality of the product.
The number of defects found in the development process, and the number of defects reported by the client after the product is installed or delivered at the client end, define the quality of the product.
Process Metrics - In various phases of the SDLC, the methods and tools used, the company standards and the performance of development are software process metrics.
Resource Metrics - Effort, time and various resources used represent metrics for resource measurement.
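As a quick illustration of the complexity metric above: for a control flow graph with E edges, N nodes and P connected components, McCabe's cyclomatic complexity is V(G) = E - N + 2P, which for a single function equals the number of decision points plus one. A minimal sketch (the function classify is a made-up example, not from these slides):

def classify(value):
    # Decision point 1: the if/else branch
    if value < 0:
        sign = "negative"
    else:
        sign = "non-negative"
    # Decision point 2: the while loop condition
    while value > 10:
        value -= 10
    return sign, value

# V(G) = number of decision points + 1 = 2 + 1 = 3,
# so 3 test cases suffice to cover every independent path.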
Petri net
A Petri net, also known as a place/transition (PT) net, is one of several mathematical modeling languages for the description of distributed systems.
It is a class of discrete event dynamic system. A Petri net is a directed bipartite graph in which the nodes represent transitions (i.e. events that may occur, represented by bars) and places (i.e. conditions, represented by circles).
The directed arcs describe which places are pre- and/or postconditions for which transitions (signified by arrows). Some sources[1] state that Petri nets were invented in August 1939 by Carl Adam Petri—at the age of 13—for the purpose of describing chemical processes.
Formal definition and basic terminology
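The slides below refer to the place set P, the marking M and the set Z without restating the underlying definition. A commonly used formal definition from the standard Petri net literature (not quoted from these slides), in LaTeX notation:

A Petri net is a triple $N = (P, T, F)$, where $P$ and $T$ are disjoint finite
sets of \emph{places} and \emph{transitions}, and
$F \subseteq (P \times T) \cup (T \times P)$ is the \emph{flow relation} (the arcs).
A \emph{marking} is a mapping $M : P \to Z$ with $Z = \{0, 1, 2, \dots\}$,
where $M(p)$ counts the tokens on place $p$; for an elementary net,
$Z$ is restricted to $\{0, 1\}$.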

If a Petri net is equivalent to an elementary net, then Z can be the countable set {0,1} and those elements in P that map to 1 under M form a configuration. Similarly, if a Petri net is not an elementary net, then the multiset M can be interpreted as representing a non-singleton set of configurations. In this respect, M extends the concept of configuration for elementary nets to Petri nets.
In a Petri net diagram, places are conventionally depicted with circles, transitions with long narrow rectangles, and arcs as one-way arrows that show connections of places to transitions or transitions to places. If the diagram were of an elementary net, the places in a configuration would be conventionally depicted as circles where each circle encompasses a single dot called a token. In a diagram of a general Petri net, the place circles may encompass more than one token, to show the number of times a place appears in a configuration. The configuration of tokens distributed over an entire Petri net diagram is called a marking.
UNIT III

SOFTWARE DESIGN
Software Design Basics

 Software design is a process to transform user requirements into some suitable form, which helps the programmer in software coding and implementation.
 For assessing user requirements, an SRS (Software Requirement Specification) document is created, whereas for coding and implementation there is a need for more specific and detailed requirements in software terms. The output of this process can be used directly in implementation in programming languages.
 Software design is the first step in the SDLC (Software Development Life Cycle) that shifts the concentration from the problem domain to the solution domain. It tries to specify how to fulfill the requirements mentioned in the SRS.
Software Design Levels
Software design yields three levels of results:
Architectural Design - The architectural design is the highest abstract version of the system. It identifies the software as a system with many components interacting with each other. At this level, the designers get an idea of the proposed solution domain.
High-level Design - The high-level design breaks the ‘single entity, multiple components’ concept of architectural design into a less-abstracted view of sub-systems and modules, and depicts their interaction with each other. High-level design focuses on how the system, along with all of its components, can be implemented in the form of modules. It recognizes the modular structure of each sub-system and their relation and interaction with each other.
Detailed Design - Detailed design deals with the implementation part of what is seen as a system and its sub-systems in the previous two designs. It is more detailed towards modules and their implementations. It defines the logical structure of each module and their interfaces to communicate with other modules.
Modularization
Modularization is a technique to divide a software system into multiple discrete and independent modules, which are expected to be capable of carrying out task(s) independently. These modules may work as basic constructs for the entire software. Designers tend to design modules such that they can be executed and/or compiled separately and independently.
Advantages of modularization:
Smaller components are easier to maintain
The program can be divided based on functional aspects
The desired level of abstraction can be brought into the program
Components with high cohesion can be re-used
Concurrent execution can be made possible
Desirable from a security aspect
Concurrency
 In the past, all software was meant to be executed sequentially. By sequential execution we mean that the coded instructions are executed one after another, implying that only one portion of the program is active at any given time. If a software product has multiple modules, then only one of all the modules can be found active at any time of execution.
 In software design, concurrency is implemented by splitting the software into multiple independent units of execution, like modules, and executing them in parallel. In other words, concurrency provides the capability to the software to execute more than one part of the code in parallel.
 It is necessary for the programmers and designers to recognize those modules which can be executed in parallel.
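A minimal sketch of this idea (the module names are hypothetical, not from these slides): two independent units of execution run in parallel using Python threads.

import threading

def fetch_orders():          # hypothetical independent module
    print("fetching orders...")

def generate_reports():      # hypothetical independent module
    print("generating reports...")

# The two modules share no data, so they can safely run in parallel.
t1 = threading.Thread(target=fetch_orders)
t2 = threading.Thread(target=generate_reports)
t1.start(); t2.start()
t1.join(); t2.join()         # wait for both units to finish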
Coupling and Cohesion
 When a software program is modularized, its tasks are divided into several modules based on some characteristics. As we know, modules are sets of instructions put together in order to achieve some tasks. Though they are considered as single entities, they may refer to each other in order to work together. There are measures by which the quality of a design of modules, and of their interaction among themselves, can be measured. These measures are called coupling and cohesion.
Cohesion
 Cohesion is a measure that defines the degree of intra-dependability within elements of a module. The greater the cohesion, the better the program design.
 There are seven types of cohesion, namely –
 Co-incidental cohesion - Unplanned and random cohesion, which might be the result of breaking the program into smaller modules for the sake of modularization. Because it is unplanned, it may confuse the programmers and is generally not accepted.
 Logical cohesion - When logically categorized elements are put together into a module, it is called logical cohesion.
 Temporal cohesion - When elements of a module are organized such that they are processed at a similar point in time, it is called temporal cohesion.
 Procedural cohesion - When elements of a module are grouped together which are executed sequentially in order to perform a task, it is called procedural cohesion.
 Communicational cohesion - When elements of a module are grouped together which are executed sequentially and work on the same data (information), it is called communicational cohesion.
 Sequential cohesion - When elements of a module are grouped because the output of one element serves as input to another and so on, it is called sequential cohesion.
 Functional cohesion - Considered to be the highest degree of cohesion, and highly expected. Elements of a module in functional cohesion are grouped because they all contribute to a single well-defined function. Such a module can also be reused.
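A minimal sketch of the two extremes (module and function names are invented for illustration): the first module is functionally cohesive, since every element contributes to one well-defined task, while the second is coincidentally cohesive, grouping unrelated routines.

# Functional cohesion: everything serves one task - computing an invoice total.
def line_total(price, qty):
    return price * qty

def invoice_total(lines, tax_rate):
    subtotal = sum(line_total(p, q) for p, q in lines)
    return subtotal * (1 + tax_rate)

# Coincidental cohesion: unrelated routines dumped into one "utils" module.
def parse_date(text): ...
def send_email(to, body): ...
def compress_image(path): ...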
Coupling
Coupling is a measure that defines the level of inter-dependability among modules of a program. It tells at what level the modules interfere and interact with each other. The lower the coupling, the better the program.
There are five levels of coupling, namely -
Content coupling - When a module can directly access, modify or refer to the content of another module, it is called content-level coupling.
Common coupling - When multiple modules have read and write access to some global data, it is called common or global coupling.
Control coupling - Two modules are called control-coupled if one of them decides the function of the other module or changes its flow of execution.
Stamp coupling - When multiple modules share a common data structure and work on different parts of it, it is called stamp coupling.
Data coupling - Data coupling is when two modules interact with each other by means of passing data (as parameters). If a module passes a data structure as a parameter, then the receiving module should use all its components.
Ideally, no coupling is considered to be the best.
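A minimal sketch contrasting two of the levels above (names invented for illustration): the first pair of functions is data-coupled, communicating only through parameters, while the second pair is common-coupled through a shared global.

# Data coupling: modules interact only by passing data as parameters.
def net_price(gross, discount):
    return gross - discount

def checkout(gross, discount):
    return net_price(gross, discount)

# Common coupling: both functions read/write the same global variable.
CART_TOTAL = 0

def add_item(price):
    global CART_TOTAL
    CART_TOTAL += price

def apply_discount(amount):
    global CART_TOTAL
    CART_TOTAL -= amount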
Design Verification
The output of the design process (design documents, pseudo code and detailed logic diagrams) should be verified against the requirement specification before implementation begins. Catching design errors at this stage, typically through formal technical reviews, is far cheaper than correcting them after coding.
Design Heuristic 
A heuristic evaluation is a usability inspection method for
computer software that helps to identify usability problems
in the user interface (UI) design. It specifically involves
evaluators examining the interface and judging its
compliance with recognized usability principles (the
"heuristics"). These evaluation methods are now widely
taught and practiced in the new media sector, where UIs are
often designed in a short space of time on a budget that may
restrict the amount of money available to provide for other
types of interface testing.
Architectural Design
Architectural design is a process for identifying the sub-systems making up
a system and the framework for sub-system control and communication. The
output of this design process is a description of the software architecture.
Architectural design is an early stage of the system design process. It
represents the link between specification and design processes and is often
carried out in parallel with some specification activities. It involves
identifying major system components and their communications.
Software architectures can be designed at two levels of abstraction:
Architecture in the small is concerned with the architecture of individual
programs. At this level, we are concerned with the way that an individual
program is decomposed into components.
Architecture in the large is concerned with the architecture of complex
enterprise systems that include other systems, programs, and program
components. These enterprise systems are distributed over different
computers, which may be owned and managed by different companies.
Advantages
Three advantages of explicitly designing and
documenting software architecture:
Stakeholder communication: Architecture may
be used as a focus of discussion by system
stakeholders.
System analysis: Well-documented architecture
enables the analysis of whether the system can
meet its non-functional requirements.
Large-scale reuse: The architecture may be reusable across a range of systems or entire product lines.
Architectural design decisions
Architectural design is a creative process so the process differs
depending on the type of system being developed. However, a
number of common decisions span all design processes and these
decisions affect the non-functional characteristics of the system:
Is there a generic application architecture that can be used?
How will the system be distributed?
What architectural styles are appropriate?
What approach will be used to structure the system?
How will the system be decomposed into modules?
What control strategy should be used?
How will the architectural design be evaluated?
How should the architecture be documented?
Architectural views
Each architectural model shows only one view or perspective of the system. It is usually impossible to represent all relevant information about a system’s architecture in a single model, so several views are used, for example the 4+1 view model: a logical view, a process view, a development view and a physical view, related by use cases or scenarios.
Architectural patterns
Patterns are a means of representing, sharing and reusing knowledge. An architectural pattern is a stylized description of a good design practice, which has been tried and tested in different environments. Patterns should include information about when they are and when they are not useful. Patterns may be represented using tabular and graphical descriptions.
Model-View-Controller
 Serves as a basis of interaction management in many web-based systems.
 Decouples three major interconnected components:
 The model is the central component of the pattern that directly manages the
data, logic and rules of the application. It is the application's dynamic data
structure, independent of the user interface.
 A view can be any output representation of information, such as a chart or a
diagram. Multiple views of the same information are possible.
 The controller accepts input and converts it to commands for the model or
view.
 Supported by most language frameworks.
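A minimal sketch of the pattern as described above (class and method names are invented for illustration, not from any particular framework):

class Model:
    """Manages the application's data and rules, independent of the UI."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

class View:
    """One possible output representation of the model's information."""
    def render(self, items):
        print("Items:", ", ".join(items))

class Controller:
    """Accepts input and converts it to commands for the model or view."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle_add(self, item):
        self.model.add(item)                 # command for the model
        self.view.render(self.model.items)   # refresh the view

# Usage: the controller mediates between user input and model/view.
Controller(Model(), View()).handle_add("first entry")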
Architectural Mapping Using Data Flow
A mapping technique called structured design is often characterized as a data flow-oriented design method because it provides a convenient transition from a data flow diagram to software architecture.
The transition from information flow to program structure is accomplished as part of a six-step process:
 (1) The type of information flow is established,
 (2) Flow boundaries are indicated,
 (3) The DFD is mapped into the program structure,
 (4) The control hierarchy is defined,
 (5) The resultant structure is refined using design measures,
 (6) The architectural description is refined and elaborated.
User Interface Design

 The user interface is the front-end application view with which the user interacts in order to use the software. The user can manipulate and control the software as well as hardware by means of the user interface. Today, user interfaces are found at almost every place where digital technology exists: computers, mobile phones, cars, music players, airplanes, ships, etc.
 The user interface is part of the software and is designed in such a way that it is expected to provide the user insight into the software. UI provides the fundamental platform for human-computer interaction.
 UI can be graphical, text-based, or audio-video based, depending upon the underlying hardware and software combination. UI can be hardware or software or a combination of both.
User Interface Design

The software becomes more popular if its user interface is:
Attractive
Simple to use
Responsive in a short time
Clear to understand
Consistent on all interfacing screens

UI is broadly divided into two categories:
Command Line Interface
Graphical User Interface
Command Line Interface (CLI)

 CLI was a great tool of interaction with computers until video display monitors came into existence. CLI is the first choice of many technical users and programmers. CLI is the minimum interface a software product can provide to its users.
 CLI provides a command prompt, the place where the user types the command and feeds it to the system. The user needs to remember the syntax of each command and its use. Earlier CLIs were not programmed to handle user errors effectively.
Graphical User Interface
 A graphical user interface provides the user graphical means to interact with the system. GUI can be a combination of both hardware and software. Using GUI, the user interprets the software.
 Typically, GUI is more resource-consuming than CLI. With advancing technology, programmers and designers create complex GUI designs that work with more efficiency, accuracy and speed.
 GUI provides a set of components to interact with software or hardware.
 Every graphical component provides a way to work with the system. A GUI system has elements such as:
Graphical User Interface
 Window - An area where the contents of the application are displayed. Contents in a window can be displayed in the form of icons or lists, if the window represents a file structure.
 Tabs - If an application allows executing multiple instances of itself, they appear on the screen as separate windows.
 Menu - A menu is an array of standard commands, grouped together and placed at a visible place (usually the top) inside the application window. The menu can be programmed to appear or hide on mouse clicks.
 Icon - An icon is a small picture representing an associated application. When these icons are clicked or double-clicked, the application window is opened.
 Cursor - Interacting devices such as mouse, touch pad and digital pen are represented in GUI as cursors.
COMPONENT LEVEL DESIGN
 Component-level design is the definition and design of components and modules after the architectural design phase. Component-level design defines the data structures, algorithms, interface characteristics, and communication mechanisms allocated to each component for the system development.
 A complete set of software components is defined during architectural design, but the internal data structures and processing details of each component are not yet represented at a level of abstraction that is close to code. Component-level design defines the data structures, algorithms, interface characteristics, and communication mechanisms allocated to each component.
 According to the OMG UML specification, a component is “a modular, deployable, and replaceable part of a system that encapsulates implementation and exposes a set of interfaces.”
Component Views
• OO View – A component is a set of collaborating classes.
• Conventional View – A component is a functional element of a program that
incorporates processing logic, the internal data structures required to implement
the processing logic, and an interface that enables the component to be invoked
and data to be passed to it.
CLASS ELABORATION
 Class elaboration focuses on providing a detailed description of attributes, interfaces and methods before the development of the system activities. For example, an elaborated design class for “PrintJob” provides a detailed description of the attributes, interfaces and operations of the class.
Views of a Component
 A component can have three different views − object-oriented view, conventional view,
and process-related view.
Object-oriented view
 A component is viewed as a set of one or more cooperating classes. Each problem domain class (analysis) and infrastructure class (design) is elaborated to identify all attributes and operations that apply to its implementation. It also involves defining the interfaces that enable classes to communicate and cooperate.
Conventional view
 It is viewed as a functional element or a module of a program that integrates the
processing logic, the internal data structures that are required to implement the
processing logic and an interface that enables the component to be invoked and data to
be passed to it.
Process-related view
 In this view, instead of creating each component from scratch, the system is built from existing components maintained in a library. As the software architecture is formulated, components are selected from the library and used to populate the architecture.
 A user interface (UI) component includes grids and buttons, referred to as controls, and utility components expose a specific subset of functions used in other components.
Characteristics of Components
 Reusability − Components are usually designed to be reused in
different situations in different applications. However, some
components may be designed for a specific task.
 Replaceable − Components may be freely substituted with other
similar components.
 Not context specific − Components are designed to operate in
different environments and contexts.
 Extensible − A component can be extended from existing
components to provide new behavior.
 Encapsulated − A component depicts the interfaces, which allow the caller to use its functionality, and does not expose details of the internal processes or any internal variables or state.
 Independent − Components are designed to have minimal
dependencies on other components.
UNIT IV

TESTING AND MAINTENANCE


Software Testing
 Software testing is the evaluation of the software against requirements gathered from users and system specifications. Testing is conducted at the phase level in the software development life cycle or at the module level in program code. Software testing comprises validation and verification.

Software Validation
 Validation is the process of examining whether or not the software satisfies the user requirements. It is carried out at the end of the SDLC. If the software matches the requirements for which it was made, it is validated.
 Validation ensures the product under development meets the user requirements.
 Validation answers the question – "Are we developing the product which attempts all that the user needs from this software?"
 Validation emphasizes user requirements.

Software Verification
 Verification is the process of confirming that the software meets the business requirements and is developed adhering to the proper specifications and methodologies.
 Verification ensures the product being developed is according to design specifications.
 Verification answers the question – "Are we developing this product by firmly following all design specifications?"
 Verification concentrates on the design and system specifications.
Manual vs Automated Testing
Testing can either be done manually or using an automated testing tool:
Manual - This testing is performed without the help of automated testing tools. The software tester prepares test cases for different sections and levels of the code, executes the tests and reports the results to the manager. Manual testing is time- and resource-consuming. The tester needs to confirm whether or not the right test cases are used. A major portion of testing involves manual testing.
Automated - This testing is a testing procedure done with the aid of automated testing tools. The limitations of manual testing can be overcome using automated test tools.
For example, a test needs to check whether a webpage can be opened in Internet Explorer. This can easily be done with manual testing. But to check whether the web server can take the load of 1 million users, it is quite impossible to test manually. There are software and hardware tools which help testers conduct load testing, stress testing and regression testing.
Black-box testing
It is carried out to test the functionality of the program. It is also called ‘behavioral’ testing. The tester in this case has a set of input values and the respective desired results. On providing input, if the output matches the desired results, the program is tested ‘ok’, and problematic otherwise.
Black-box testing
In this testing method, the design and structure of the code are not known to the tester; testing engineers and end users conduct this test on the software.
Black-box testing techniques:
Equivalence class - The input is divided into similar classes. If one element of a class passes the test, it is assumed that all of the class passes.
Boundary values - The input is divided into higher- and lower-end values. If these values pass the test, it is assumed that all values in between may pass too.
Cause-effect graphing - In both previous methods, only one input value at a time is tested. Cause (input) – effect (output) graphing is a testing technique where combinations of input values are tested in a systematic way.
Pair-wise testing - The behavior of software depends on multiple parameters. In pairwise testing, the multiple parameters are tested pair-wise for their different values.
State-based testing - The system changes state on provision of input. These systems are tested based on their states and input.
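A minimal sketch of the first two techniques (the unit under test, validate_age, and its valid range 18-60 are invented for illustration):

import unittest

def validate_age(age):
    """Hypothetical unit under test: accepts ages 18..60 inclusive."""
    return 18 <= age <= 60

class BlackBoxTests(unittest.TestCase):
    def test_equivalence_classes(self):
        # One representative per class: too low, valid, too high.
        self.assertFalse(validate_age(5))
        self.assertTrue(validate_age(30))
        self.assertFalse(validate_age(80))

    def test_boundary_values(self):
        # Values at and just outside each boundary.
        self.assertFalse(validate_age(17))
        self.assertTrue(validate_age(18))
        self.assertTrue(validate_age(60))
        self.assertFalse(validate_age(61))

if __name__ == "__main__":
    unittest.main()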
White-box testing

It is conducted to test the program and its implementation, in order to improve code efficiency or structure. It is also known as ‘structural’ testing.
In this testing method, the design and structure of the code are known to the tester. Programmers of the code conduct this test on the code.
Below are some white-box testing techniques:
•Control-flow testing - The purpose of control-flow testing is to set up test cases which cover all statements and branch conditions. The branch conditions are tested for both being true and false, so that all statements can be covered.
•Data-flow testing - This testing technique emphasizes covering all the data variables included in the program. It tests where the variables were declared and defined and where they were used or changed.
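A minimal sketch of control-flow testing (the function and its threshold are invented): the two test cases drive the single branch condition both true and false, so every statement is executed.

import unittest

def shipping_cost(order_total):
    """Hypothetical unit under test: free shipping over 100."""
    if order_total > 100:   # the branch condition to cover
        return 0
    return 10

class ControlFlowTests(unittest.TestCase):
    def test_branch_true(self):
        self.assertEqual(shipping_cost(150), 0)   # condition true

    def test_branch_false(self):
        self.assertEqual(shipping_cost(50), 10)   # condition false

if __name__ == "__main__":
    unittest.main()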
Testing Levels
Testing itself may be performed at various levels of the SDLC. The testing process runs parallel to software development. Before jumping to the next stage, a stage is tested, validated and verified. Testing separately is done just to make sure that there are no hidden bugs or issues left in the software. Software is tested at various levels -

Unit Testing
While coding, the programmer performs some tests on that unit of the program to know whether it is error-free. Testing is performed under the white-box testing approach. Unit testing helps developers confirm that individual units of the program are working as per requirements and are error-free.

Integration Testing
Even if the units of software are working fine individually, there is a need to find out whether the units, if integrated together, would also work without errors. For example, argument passing and data updates.
System Testing

The software is compiled as a product and then it is tested as a whole. This can be accomplished using one or more of the following tests:

Functionality testing - Tests all functionalities of the software against the requirements.

Performance testing - This test proves how efficient the software is. It tests the effectiveness and average time taken by the software to do a desired task. Performance testing is done by means of load testing and stress testing, where the software is put under high user and data load under various environment conditions.

Security & portability testing - These tests are done when the software is meant to work on various platforms and be accessed by a number of persons.
Acceptance Testing

When the software is ready to hand over to the customer, it has to go through the last phase of testing, where it is tested for user interaction and response. This is important because even if the software matches all user requirements, if the user does not like the way it appears or works, it may be rejected.

Alpha testing - The team of developers themselves perform alpha testing by using the system as if it is being used in a work environment. They try to find out how a user would react to some action in the software and how the system should respond to inputs.

Beta testing - After the software is tested internally, it is handed over to the users to use it under their production environment, only for testing purposes. This is not yet the delivered product. Developers expect that users at this stage will surface minor problems that were previously overlooked.
Regression Testing

Whenever a software product is updated with new code, features or functionality, it is tested thoroughly to detect whether there is any negative impact of the added code. This is known as regression testing.
Software Implementation

Structured Programming

In the process of coding, the lines of code keep multiplying, and thus the size of the software increases. Gradually, it becomes next to impossible to remember the flow of the program. If one forgets how the software and its underlying programs, files and procedures are constructed, it then becomes very difficult to share, debug and modify the program.
Software Implementation

Structured programming states how the program shall be coded. Structured programming uses three main concepts:
Top-down analysis - A software product is always made to perform some rational work. This rational work is known as the problem in software parlance. Thus it is very important that we understand how to solve the problem. Under top-down analysis, the problem is broken down into small pieces, where each one has some significance. Each problem is individually solved and steps are clearly stated about how to solve the problem.
Modular programming - While programming, the code is broken down into smaller groups of instructions. These groups are known as modules, subprograms or subroutines. Modular programming is based on the understanding of top-down analysis. It discourages jumps using ‘goto’ statements in the program, which often make the program flow non-traceable. Jumps are prohibited and the modular format is encouraged in structured programming.
Structured coding - In reference to top-down analysis, structured coding sub-divides the modules into further smaller units of code in the order of their execution. Structured programming uses control structures, which control the flow of the program, whereas structured coding uses control structures to organize its statements in executable patterns.
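A minimal sketch of top-down decomposition into modules (the problem, printing a payslip, and all function names and the 20% deduction are invented for illustration):

# Top-down analysis: "produce a payslip" is broken into smaller pieces,
# each solved by its own module (function), with no 'goto'-style jumps.

def gross_pay(hours, rate):
    return hours * rate

def deductions(gross):
    return gross * 0.2          # assumed flat 20% deduction

def payslip(hours, rate):
    gross = gross_pay(hours, rate)
    net = gross - deductions(gross)
    print(f"gross={gross:.2f} net={net:.2f}")

payslip(40, 15.0)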
Functional Programming

Functional programming is style of programming language, which uses the


concepts of mathematical functions. A function in mathematics should always
produce the same result on receiving the same argument. In procedural
languages, the flow of the program runs through procedures, i.e. the control of
program is transferred to the called procedure. While control flow is transferring
from one procedure to another, the program changes its state.

In procedural programming, it is possible for a procedure to produce different
results when it is called with the same argument, as the program itself can be in
a different state while calling it. This is a property as well as a drawback of
procedural programming, in which the sequence or timing of the procedure
execution becomes important.

Functional programming provides a means of computation as mathematical
functions, which produce results irrespective of program state. This makes it
possible to predict the behavior of the program.
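
The contrast can be made concrete with a small sketch (names and numbers invented): the procedural version reads mutable program state, so the same argument can yield different results, while the functional version passes the state in explicitly.

rate = 0.25                       # mutable program state

def price_with_tax(p):
    # procedural style: result depends on hidden state
    return p * (1 + rate)

print(price_with_tax(100))        # 125.0
rate = 0.5
print(price_with_tax(100))        # 150.0 - same argument, different result

def price_with_tax_pure(p, r):
    # functional style: same arguments always produce the same result
    return p * (1 + r)

print(price_with_tax_pure(100, 0.25))   # 125.0, regardless of program state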
Functional Programming

Functional programming uses the following concepts:
First-class and higher-order functions - These functions have the capability to
accept another function as an argument, or they return other functions as results.
Pure functions - These functions do not include destructive updates, that is,
they do not affect any I/O or memory, and if they are not in use, they can easily
be removed without hampering the rest of the program.
Recursion - Recursion is a programming technique where a function calls itself
and repeats the program code in it until some pre-defined condition is met.
Recursion is the way of creating loops in functional programming.
Strict evaluation - It is a method of evaluating the expression passed to a
function as an argument. Functional programming has two types of evaluation
methods, strict (eager) or non-strict (lazy). Strict evaluation always evaluates
the expression before invoking the function. Non-strict evaluation does not
evaluate the expression unless it is needed.
λ-calculus - Most functional programming languages use λ-calculus as their
type systems. λ-expressions are executed by evaluating them as they occur.
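
A compact sketch of some of these concepts in Python (purely illustrative):

def apply_twice(f, x):
    # higher-order: accepts another function as an argument
    return f(f(x))

def square(n):
    # pure: no I/O, no shared state touched
    return n * n

print(apply_twice(square, 3))    # 81

def factorial(n):
    # recursion as the loop: repeats until the base case is met
    return 1 if n <= 1 else n * factorial(n - 1)

print(factorial(5))              # 120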
Programming style

Programming style is a set of coding rules followed by all the programmers to
write the code. When multiple programmers work on the same software project,
they frequently need to work with program code written by some other
developer. This becomes tedious, or at times impossible, if all developers do not
follow some standard programming style to code the program.

An appropriate programming style includes using function and variable names
relevant to the intended task, using well-placed indentation, commenting code
for the convenience of the reader, and overall presentation of code. This makes the
program code readable and understandable by all, which in turn makes
debugging and error solving easier. Proper coding style also eases
documentation and updating.
Coding Guidelines
Practice of coding style varies with organizations, operating systems and language of
coding itself.
The following coding elements may be defined under coding guidelines of an
organization:
Naming conventions - This section defines how to name functions, variables,
constants and global variables.
Indenting - This is the space left at the beginning of a line, usually 2-8 whitespace
characters or a single tab.
Whitespace - It is generally omitted at the end of a line.
Operators - Defines the rules of writing mathematical, assignment and logical operators.
For example, the assignment operator ‘=’ should have a space before and after it, as in
“x = 2”.
Control Structures - The rules of writing if-then-else, case-switch, while-until and for
control flow statements solely and in nested fashion.
Line length and wrapping - Defines how many characters should be there in one line;
mostly a line is 80 characters long. Wrapping defines how a line should be wrapped if it
is too long.
Functions - This defines how functions should be declared and invoked, with and
without parameters.
Variables - This mentions how variables of different data types are declared and
defined.
Comments - This is one of the important coding components, as the comments
included in the code describe what the code actually does, along with all other associated
descriptions. This section also helps in creating help documentation for other developers.
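
As an illustration, a fragment written under guidelines of this kind might look as follows (the specific conventions shown - snake_case names, four-space indents, spaced operators - are one common choice, not a universal standard):

MAX_RETRIES = 3                      # constant: upper-case name

users = {1: "Asha", 2: "Ravi"}       # sample data for the sketch

def fetch_user_name(user_id):
    """Return the display name for user_id, or None if absent."""
    # four-space indent, '=' surrounded by spaces, task-relevant names
    name = users.get(user_id)
    return name

print(fetch_user_name(1))            # Asha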
Code Refactoring
Definition
Refactoring consists of improving the internal structure of an existing program’s source
code, while preserving its external behavior.
The noun “refactoring” refers to one particular behavior-preserving transformation, such
as “Extract Method” or “Introduce Parameter.”
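
For instance, a minimal "Extract Method" might look like this sketch (the statistics example is invented; only the internal structure changes, the external behavior is preserved):

# Before: one function mixes calculation and presentation
def report(values):
    total = sum(values)
    print("mean =", total / len(values))

# After: the calculation is extracted into its own method
def mean(values):
    return sum(values) / len(values)

def report_refactored(values):
    print("mean =", mean(values))    # output identical to report()

report([1, 2, 3])
report_refactored([1, 2, 3])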

Common Pitfalls
Refactoring does not mean:
rewriting code
fixing bugs
improving observable aspects of software, such as its interface
Refactoring in the absence of safeguards against introducing defects (i.e. violating the
“behaviour preserving” condition) is risky. Safeguards include aids to regression testing
including automated unit tests or automated acceptance tests, and aids to formal
reasoning such as type systems.
Expected Benefits
The following are claimed benefits of refactoring:
refactoring improves objective attributes of code (length, duplication, coupling and
cohesion, cyclomatic complexity) that correlate with ease of maintenance
refactoring helps code understanding
refactoring encourages each developer to think about and understand design decisions,
in particular in the context of collective code ownership
refactoring favors the emergence of reusable design elements (such as design patterns)
and code modules
UNIT V

PROJECT MANAGEMENT
Project Management
The job pattern of an IT company engaged in software development
can be seen as split into two parts:
Software Creation
Software Project Management

A project is a well-defined task, which is a collection of several
operations done in order to achieve a goal (for example, software
development and delivery). A project can be characterized as:
•Every project has a unique and distinct goal.
•A project is not a routine activity or day-to-day operation.
•A project comes with a start time and an end time.
•A project ends when its goal is achieved, hence it is a temporary phase
in the lifetime of an organization.
•A project needs adequate resources in terms of time, manpower,
finance, material and knowledge-bank.
Need of software project management
Software is said to be an intangible product. Software development is a relatively new
stream in world business, and there is very little experience in building software products.
Most software products are tailor-made to fit the client’s requirements. Most importantly,
the underlying technology changes and advances so frequently and rapidly that
experience with one product may not apply to the other. All such business and
environmental constraints bring risk to software development; hence it is essential to
manage software projects efficiently.

Software projects are commonly described by a triple constraint triangle of
time, cost and quality. It is an essential part of a software organization to
deliver a quality product, keeping the cost within the client’s budget
constraint and delivering the project as per schedule. There are several factors,
both internal and external, which may impact this triple constraint triangle.
Any one of the three factors can severely impact the other two.
Therefore, software project management is essential to incorporate user
requirements along with budget and time constraints.
Estimation
Estimation is the process of finding an estimate, or approximation,
which is a value that can be used for some purpose even if input data
may be incomplete, uncertain, or unstable.
Estimation determines how much money, effort, resources, and time it
will take to build a specific system or product. Estimation is based on −
•Past Data/Past Experience
•Available Documents/Knowledge
•Assumptions
•Identified Risks
The four basic steps in Software Project Estimation are −
•Estimate the size of the development product.
•Estimate the effort in person-months or person-hours.
•Estimate the schedule in calendar months.
•Estimate the project cost in agreed currency.
General Project Estimation Approach
The Project Estimation Approach that is widely used is Decomposition Technique.
Decomposition techniques take a divide and conquer approach. Size, Effort and Cost
estimation are performed in a stepwise manner by breaking down a Project into major
Functions or related Software Engineering Activities.
Step 1 − Understand the scope of the software to be built.
Step 2 − Generate an estimate of the software size.
Start with the statement of scope.
Decompose the software into functions that can each be estimated individually.
Calculate the size of each function.
Derive effort and cost estimates by applying the size values to your baseline productivity
metrics.
Combine function estimates to produce an overall estimate for the entire project.
Step 3 − Generate an estimate of the effort and cost. You can arrive at the effort and
cost estimates by breaking down a project into related software engineering activities.
Identify the sequence of activities that need to be performed for the project to be
completed.
Divide activities into tasks that can be measured.
Estimate the effort (in person hours/days) required to complete each task.
Combine the effort estimates of the tasks of each activity to produce an estimate for the activity.
Obtain cost units (i.e., cost/unit effort) for each activity from the database.
Compute the total effort and cost for each activity.
Combine effort and cost estimates for each activity to produce an overall effort and cost
estimate for the entire project.
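
A toy numeric sketch of this stepwise approach (all sizes, productivity figures and cost rates below are invented for illustration):

# Step 2: size per decomposed function, summed
sizes = {"orders": 30, "billing": 25, "reports": 15}   # e.g., function points
total_size = sum(sizes.values())                       # 70

# Step 3: effort from baseline productivity, cost from cost units
productivity = 10            # size units per person-month (assumed baseline)
cost_per_pm = 5000           # cost units per person-month (assumed)

effort_pm = total_size / productivity   # 7.0 person-months
total_cost = effort_pm * cost_per_pm    # 35000.0

print(total_size, effort_pm, total_cost)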
General Project Estimation Approach
Cont…

Step 4 − Reconcile estimates: Compare the resulting values from Step 3 to those
obtained from Step 2. If both sets of estimates agree, then your numbers are highly
reliable. Otherwise, if widely divergent estimates occur, conduct further investigation
concerning whether −
The scope of the project is not adequately understood or has been misinterpreted.
The function and/or activity breakdown is not accurate.
Historical data used for the estimation techniques is inappropriate for the application, or
obsolete, or has been misapplied.
Step 5 − Determine the cause of divergence and then reconcile the estimates.
Estimation Techniques - Function Points
A Function Point (FP) is a unit of measurement to express the amount of
business functionality, an information system (as a product) provides to a user.
FPs measure software size. They are widely accepted as an industry standard
for functional sizing.
For sizing software based on FP, several recognized standards and/or public
specifications have come into existence. As of 2013, these are −
ISO Standards
COSMIC − ISO/IEC 19761:2011 Software engineering. A functional size
measurement method.
FiSMA − ISO/IEC 29881:2008 Information technology - Software and systems
engineering - FiSMA 1.1 functional size measurement method.
IFPUG − ISO/IEC 20926:2009 Software and systems engineering - Software
measurement - IFPUG functional size measurement method.
Mark-II − ISO/IEC 20968:2002 Software engineering - Mk II Function Point
Analysis - Counting Practices Manual.
NESMA − ISO/IEC 24570:2005 Software engineering - NESMA function size
measurement method version 2.1 - Definitions and counting guidelines for the
application of Function Point Analysis.
Object Management Group Specification
for Automated Function Point

Object Management Group (OMG), an open membership and not-for-profit
computer industry standards consortium, has adopted the Automated Function
Point (AFP) specification led by the Consortium for IT Software Quality. It
provides a standard for automating FP counting according to the guidelines of
the International Function Point User Group (IFPUG).
Function Point Analysis (FPA) technique quantifies the functions contained
within software in terms that are meaningful to the software users. FPs
consider the number of functions being developed based on the requirements
specification.
Function Points (FP) Counting is governed by a standard set of rules,
processes and guidelines as defined by the International Function Point Users
Group (IFPUG). These are published in Counting Practices Manual (CPM).
Estimation Techniques

Elementary Process (EP)

Elementary Process is the smallest unit of functional user requirement that −
Is meaningful to the user.
Constitutes a complete transaction.
Is self-contained and leaves the business of the application being counted in a
consistent state.
Functions
There are two types of functions −
Data Functions
Transaction Functions
Data Functions
There are two types of data functions −
Internal Logical Files
External Interface Files
Data Functions are made up of internal and external resources that affect the
system.
Estimation Techniques
Cont…
Internal Logical Files
Internal Logical File (ILF) is a user identifiable group of logically related data or
control information that resides entirely within the application boundary. The
primary intent of an ILF is to hold data maintained through one or more
elementary processes of the application being counted. An ILF has the
inherent meaning that it is internally maintained, it has some logical structure
and it is stored in a file.
External Interface Files
External Interface File (EIF) is a user identifiable group of logically related data
or control information that is used by the application for reference purposes
only. The data resides entirely outside the application boundary and is
maintained in an ILF by another application. An EIF has the inherent meaning
that it is externally maintained; an interface has to be developed to get the
data from the file.
Estimation Techniques
Cont…
Transaction Functions
There are three types of transaction functions.
•External Inputs
•External Outputs
•External Inquiries
Transaction functions are made up of the processes that are exchanged between the
user, the external applications and the application being measured.

External Inputs

External Input (EI) is a transaction function in which data goes “into” the application from
outside the boundary to inside. This data originates outside the application.
Data may come from a data input screen or another application.
An EI is how an application gets information.
Data can be either control information or business information.
Data may be used to maintain one or more Internal Logical Files.
If the data is control information, it does not have to update an Internal Logical File.
External Outputs
External Output (EO) is a transaction function in which data comes “out” of the system.
Additionally, an EO may update an ILF. The data creates reports or output files sent to
other applications.
External Inquiries
External Inquiry (EQ) is a transaction function with both input and output components
that result in data retrieval.
Definition of RETs, DETs, FTRs
Cont…
Record Element Type
A Record Element Type (RET) is the largest user identifiable subgroup of elements
within an ILF or an EIF. It is best to look at logical groupings of data to help identify
them.

Data Element Type

Data Element Type (DET) is the data subgroup within an FTR. They are unique and
user identifiable.

File Type Referenced

File Type Referenced (FTR) is the largest user identifiable subgroup within the EI, EO,
or EQ that is referenced.

The transaction functions EI, EO, EQ are measured by counting the FTRs and DETs that
they contain, following the counting rules. Likewise, the data functions ILF and EIF are
measured by counting the DETs and RETs that they contain, following the counting rules.
The measures of transaction functions and data functions are used in FP counting, which
results in the functional size or function points.
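
As a simplified worked sketch, the unadjusted FP count is obtained by weighting each identified function by its type and complexity and summing (the counts below are invented; the weights are the commonly published IFPUG values for average complexity):

# commonly published IFPUG weights for average complexity
weights = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

# hypothetical counts identified for an application
counts = {"EI": 6, "EO": 4, "EQ": 3, "ILF": 2, "EIF": 1}

ufp = sum(counts[t] * weights[t] for t in counts)
print(ufp)   # 24 + 20 + 12 + 20 + 7 = 83 unadjusted function points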
Software Engineering | COCOMO Model
Cont…
Cocomo (Constructive Cost Model) is a regression model based on
LOC, i.e. the number of Lines of Code. It is a procedural cost estimate
model for software projects and is often used as a process of reliably
predicting the various parameters associated with making a project,
such as size, effort, cost, time and quality. It was proposed by Barry
Boehm in 1981 and is based on the study of 63 projects, which makes
it one of the best-documented models.
The key parameters that define the quality of any software product,
which are also an outcome of COCOMO, are primarily effort and
schedule:
Effort: Amount of labor that will be required to complete a task. It is
measured in person-months units.
Schedule: Simply means the amount of time required for the
completion of the job, which is, of course, proportional to the effort put in.
It is measured in units of time such as weeks or months.
Boehm’s definition
Boehm’s definition of organic, semidetached, and embedded systems:
Organic – A software project is said to be an organic type if the team size
required is adequately small, the problem is well understood and has been
solved in the past and also the team members have a nominal experience
regarding the problem.
Semi-detached – A software project is said to be a Semi-detached type if the
vital characteristics such as team-size, experience, knowledge of the various
programming environment lie in between that of organic and Embedded. The
projects classified as Semi-Detached are comparatively less familiar and
difficult to develop compared to the organic ones and require more experience,
better guidance and creativity. E.g., compilers or different embedded
systems can be considered of the Semi-Detached type.
Embedded – A software project requiring the highest level of complexity,
creativity, and experience falls under this category. Such software
requires a larger team size than the other two models, and the developers
need to be sufficiently experienced and creative to develop such complex
models.
All the above system types utilize different values of the constants used in
Effort Calculations.
Boehm’s definition
Types of Models: COCOMO consists of a hierarchy of three increasingly
detailed and accurate forms. Any of the three forms can be adopted according
to our requirements. These are types of COCOMO model:
Basic COCOMO Model
Intermediate COCOMO Model
Detailed COCOMO Model
The first level, Basic COCOMO, can be used for quick and slightly rough
calculations of software costs. Its accuracy is somewhat restricted due to the
absence of sufficient factor considerations.

Intermediate COCOMO takes these cost drivers into account, while Detailed
COCOMO additionally accounts for the influence of individual project phases,
i.e. it considers both the cost drivers and performs the calculations phase-wise,
thereby producing a more accurate result.
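
The Basic model reduces to two equations, Effort = a * (KLOC)^b person-months and Time = c * (Effort)^d months, with published constants per project class. A small sketch (the 32 KLOC input is invented; the constants are the standard Basic COCOMO values):

# Basic COCOMO constants (a, b, c, d) per project class
constants = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = constants[mode]
    effort = a * kloc ** b      # person-months
    time = c * effort ** d      # months
    return effort, time

effort, time = basic_cocomo(32)
print(round(effort, 1), round(time, 1))   # about 91.3 person-months, 13.9 months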
Earned Value Analysis
Earned Value Analysis (EVA) is an industry standard method of measuring a
project's progress at any given point in time, forecasting its completion date
and final cost, and analyzing variances in the schedule and budget as the
project proceeds. It compares the planned amount of work with what has
actually been completed, to determine if the cost, schedule, and work
accomplished are progressing in accordance with the plan. As work is
completed, it is considered "earned".

Work Breakdown Structure (WBS)

EVA works most effectively when it is compartmentalized, i.e. when the project
is broken down into an organized Work Breakdown Structure, or WBS. The
WBS is used as the basic building block for the planning of the project. It is a
product-oriented division of project tasks that ensures the entire Scope of
Work is captured and allows for the integration of technical, schedule, and cost
information. It breaks down all the work scope into appropriate elements for
planning, budgeting, scheduling, cost accounting, work authorization, progress
measuring, and management control. 
Calculating Earned Value
Earned Value Management measures progress against a baseline. It involves
calculating three key values for each activity in the WBS:

The Planned Value (PV), (formerly known as the budgeted cost of work
scheduled or BCWS)—that portion of the approved cost estimate planned to
be spent on the given activity during a given period.

The Actual Cost (AC), (formerly known as the actual cost of work
performed or ACWP)—the total of the costs incurred in accomplishing work on
the activity in a given period. This Actual Cost must correspond to whatever
was budgeted for the Planned Value and the Earned Value (e.g. all labor,
material, equipment, and indirect costs).

The Earned Value (EV), (formerly known as the budget cost of work
performed or BCWP)—the value of the work actually completed.
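
From these three values the standard variances and indices follow: cost variance CV = EV - AC, schedule variance SV = EV - PV, cost performance index CPI = EV / AC and schedule performance index SPI = EV / PV. A toy calculation (figures invented):

pv, ev, ac = 10000, 8000, 9000   # planned value, earned value, actual cost

cv = ev - ac      # -1000: work earned cost more than budgeted
sv = ev - pv      # -2000: less value earned than planned, behind schedule
cpi = ev / ac     # 0.89: each unit spent earns about 0.89 units of value
spi = ev / pv     # 0.80: progressing at 80% of the planned rate

print(cv, sv, round(cpi, 2), round(spi, 2))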
Request for proposal (RFP)

A request for proposal (RFP) is a document that an organization, often a
government agency or large enterprise, posts to elicit a response -- a formal
bid -- from potential vendors for a desired IT solution. The RFP specifies what
the customer is looking for and describes each evaluation criterion on which a
vendor's proposal will be assessed.

What needs to be included in an RFP?

An RFP generally includes background on the issuing organization and its
lines of business (LOBs), a set of specifications that describe the sought-after
solution and evaluation criteria that disclose how proposals will be graded.

Why are RFPs important and who uses them?

An RFP may be issued for a number of reasons. In some cases, the
complexity of an IT project calls for a formal RFP. An organization can benefit
from multiple bidders and perspectives when seeking an integrated solution
calling for a mix of technologies, vendors and potential configurations.
Risk Management

Understanding Risk Management in Software Development

Software development is an activity that uses a variety of technological
advancements and requires high levels of knowledge. Because of these and
other factors, every software development project contains elements of
uncertainty. This is known as project risk. The success of a software
development project depends quite heavily on the amount of risk that
corresponds to each project activity. As a project manager, it is not enough
to merely be aware of the risks. To achieve a successful outcome, project
leadership must identify, assess, prioritize, and manage all of the major risks.

What Is Risk In Software Engineering?

Very simply, a risk is a potential problem. It’s an activity or event that may
compromise the success of a software development project. Risk is the
possibility of suffering loss, and total risk exposure to a specific project will
account for both the probability and the size of the potential loss.
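
One common way to quantify this is risk exposure, RE = probability of loss * size of loss. A small prioritization sketch (the risks, probabilities and loss figures are invented):

# risk: (probability of occurring, loss in person-days if it occurs)
risks = {
    "key developer leaves":  (0.10, 40),
    "requirements change":   (0.40, 25),
    "third-party API delay": (0.25, 20),
}

# exposure = probability * size of loss; sort to prioritize the biggest first
exposures = {name: p * loss for name, (p, loss) in risks.items()}
for name, re in sorted(exposures.items(), key=lambda kv: -kv[1]):
    print(name, "->", re, "person-days at risk")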
Risk Management

Risk management includes the following tasks:

Identify risks and their triggers
Classify and prioritize all risks
Craft a plan that links each risk to a mitigation
Monitor for risk triggers during the project
Implement the mitigating action if any risk materializes
Communicate risk status throughout the project
Risk Mitigation, Monitoring and
Management (RMMM)
RMMM Plan
•It is a part of the software development plan or a separate document.
•The RMMM plan documents all work executed as a part of risk analysis and is
used by the project manager as a part of the overall project plan.
•Risk mitigation and monitoring start after the project is started and the
documentation of the RMMM is completed.

There are three important issues considered in developing an effective
strategy:

Risk avoidance or mitigation - It is the primary strategy, which is fulfilled
through a plan.
Risk monitoring - The project manager monitors the factors and gives an
indication whether the risk is becoming more or less likely.
Risk management and planning - It assumes that the mitigation effort failed
and the risk is a reality.
