Software Engineering Unit
Notes
Software Engineering
[3CS4 - 07]
Prepared By:
Manju Vyas
Abhishek Jain
Geerija Lavania
VISION
To become a renowned centre of outcome-based learning and work towards the academic,
professional, cultural and social enrichment of the lives of individuals and communities.
MISSION
M1. Focus on the evaluation of learning outcomes and motivate students to inculcate a
research aptitude through project-based learning.
M2. Identify areas of focus and provide a platform to gain knowledge and solutions based on
an informed perception of Indian, regional and global needs.
M4. Develop human potential to its fullest extent so that intellectually capable and
imaginatively gifted leaders can emerge in a range of professions.
VISION
To become a renowned centre of excellence in computer science and engineering and to
produce competent engineers and professionals with high ethical values, prepared for lifelong learning.
MISSION
M1: To impart outcome-based education for emerging technologies in the field of computer
science and engineering.
M3: To provide a platform for lifelong learning by embracing changes in technology.
COURSE OUTCOMES
CO1) Understand the purpose of designing a system and evaluate the various models suitable
as per its requirement analysis.
CO2) Understand and apply software project management, effort estimation and project
scheduling.
CO4) Implement the concepts of object-oriented analysis modelling with reference to
UML and advanced SE tools.
1. To provide students with the fundamentals of engineering sciences, with more emphasis on
Computer Science & Engineering, by way of analyzing and exploiting engineering challenges.
5. To prepare students to excel in industry and higher education by educating them with
high moral values and knowledge.
MAPPING CO-PO
COs/POs  PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8  PO9  PO10  PO11  PO12
CO1       3    3    3    3    3    2    1    2    1    1     2     3
CO2       3    3    3    3    2    2    1    2    2    2     3     3
CO3       3    3    3    2    2    2    1    2    1    2     2     3
CO4       3    3    3    3    3    1    0    1    1    2     2     3
PSO
PSO1: Ability to interpret and analyze network-specific and cyber-security issues and
automation in a real-world environment.
PSO2: Ability to design and develop mobile and web-based applications under realistic
constraints.
SYLLABUS
UNIT 2: Software Project Management: Objectives, Resources and their estimation, LOC
and FP estimation, effort estimation, COCOMO estimation model, risk analysis, software
project scheduling.
UNIT 5: Object Oriented Analysis: Object oriented Analysis Modeling, Data modeling.
Object Oriented Design: OOD concepts, Class and object relationships, object
modularization, Introduction to Unified Modeling Language
The term software engineering is composed of two words, software and engineering.
Software is more than just a program code. A program is an executable code, which serves
some computational purpose. Software is considered to be a collection of executable
programming code, associated libraries and documentations. Software, when made for a
specific requirement is called software product.
Engineering on the other hand, is all about developing products, using well-defined,
scientific principles and methods. So, we can define software engineering as an engineering
branch associated with the development of software product using well-defined scientific
principles, methods and procedures. The outcome of software engineering is an efficient
and reliable software product.
[ Reference - R1 ]
The need for software engineering arises because of the high rate of change in user
requirements and in the environment in which the software operates.
Large software - Just as it is easier to build a wall than a house or building, as the size
of software becomes large, engineering has to step in to give its development a scientific process.
Scalability- If the software process were not based on scientific and engineering concepts, it
would be easier to re-create new software than to scale an existing one.
Cost- The hardware industry has shown its skill, and mass manufacturing has lowered the
price of computer and electronic hardware, but the cost of software remains high if a proper
process is not adopted.
Dynamic Nature- The ever-growing and adapting nature of software hugely depends upon
the environment in which the user works. As the nature of software is always changing, new
enhancements need to be made to the existing version. This is where software engineering plays
an important role.
Quality Management- Better process of software development provides better and quality
software product. [ Reference - R1 ]
A software product can be judged by what it offers and how well it can be used. The
software must be satisfactory on the following grounds:
• Operational
• Transitional
• Maintenance
Operational: This tells us how well software works in operations. It can be measured on:
• Budget
• Usability
• Efficiency
• Correctness
• Functionality
• Dependability
• Security
• Safety
Transitional: This aspect is important when the software is moved from one platform to
another:
• Portability
• Interoperability
• Reusability
• Adaptability
Maintenance: This aspect briefs about how well a software has the capabilities to maintain
itself in the ever-changing environment:
• Modularity
• Maintainability
• Flexibility
• Scalability
[ Reference - R1 ]
A Software engineering process is not a rigid prescription for how to build computer
software. Rather, it is an adaptable approach that enables the people doing the work (the
software team) to pick and choose the appropriate set of work actions and tasks. The intent is
always to deliver software in a timely manner and with sufficient quality to satisfy those who
have sponsored its creation and those who will use it.
Software engineering methods provide the technical how-to’s for building software.
Methods encompass a broad array of tasks that include communication, requirements
analysis, design modeling, program construction, testing, and support. Software engineering
methods rely on a set of basic principles that govern each area of the technology and include
modeling activities and other descriptive techniques.
Software engineering tools provide automated or semi-automated support for the process
and the methods. When tools are integrated so that information created by one tool can be
used by another, a system for the support of software development, called computer-aided
software engineering, is established.
[ Reference - R2 ]
A process is a collection of activities, actions, and tasks that are performed when some work
product is to be created.
An activity strives to achieve a broad objective (e.g., communication with stakeholders) and
is applied regardless of the application domain, size of the project, complexity of the effort,
or degree of rigor with which software engineering is to be applied.
An action (e.g., architectural design) encompasses a set of tasks that produce a major work
product (e.g., an architectural design model).
A task focuses on a small, but well-defined objective (e.g., conducting a unit test) that
produces a tangible outcome.
[ Reference - R3 ]
In addition, the process framework encompasses a set of umbrella activities that are
applicable across the entire software process.
Planning: Any complicated journey can be simplified if a map exists. A software project is a
complicated journey, and the planning activity creates a “map” that helps guide the team as it
makes the journey. The map—called a software project plan—defines the software
engineering work by describing the technical tasks to be conducted, the risks that are likely,
the resources that will be required, the work products to be produced, and a work schedule.
Construction: This activity combines code generation (either manual or automated) and the
testing that is required to uncover errors in the code.
These five generic framework activities (communication, planning, modeling, construction,
and deployment) can be used during the development of small, simple programs, the creation
of large Web applications, and the engineering of large, complex computer-based systems.
The details of the software process will be quite different in each case, but the framework
activities remain the same.
[ Reference - R3 & R4 ]
A software life cycle model (also called process model) is a descriptive and diagrammatic
representation of the software life cycle.
A life cycle model represents all the activities required to make a software product transit
through its life cycle phases. It also captures the order in which these activities are to be
undertaken. In other words, a life cycle model maps the different activities performed on a
software product from its inception to retirement.
Different life cycle models may map the basic development activities to phases in different
ways. Thus, no matter which life cycle model is followed, the basic activities are included in
all life cycle models though the activities may be carried out in different orders in different
life cycle models. During any life cycle phase, more than one activity may also be carried
out.
The development team must identify a suitable life cycle model for the particular project and
then adhere to it. Without a particular life cycle model, the development of a software
product would not proceed in a systematic and disciplined manner.
When a software product is being developed by a team there must be a clear understanding
among team members about when and what to do. Otherwise it would lead to chaos and
project failure.
A software life cycle model defines entry and exit criteria for every phase. A phase can start
only if its phase-entry criteria have been satisfied. Without a life cycle model, the entry and
exit criteria for a phase cannot be recognized, and it becomes difficult for software project
managers to monitor the progress of the project.
Many life cycle models have been proposed so far. Each of them has some advantages as
well as some disadvantages. A few important and commonly used life cycle models are as
follows:
▪ Waterfall Model
▪ The V – Model
▪ Incremental Model
▪ RAD Model
▪ Prototyping Model
▪ Spiral Model
[ Reference - R5 ]
WATERFALL MODEL
Phases of Waterfall model: All work flows from communication towards deployment in a
reasonably linear fashion.
• Planning: In planning, major activities such as scheduling, tracking the processes, and
estimation related to the project are carried out. Planning is also used to identify the types
of risks involved throughout the project. Planning describes how the technical tasks are
going to take place, what resources are needed, and how to use them.
• Modeling: This is one of the most important phases; the architecture of the system is
designed here. An analysis is carried out, and based on the analysis a software model is
designed. Different models for developing the software are created depending on the
requirements gathered in the first phase and the planning done in the second phase.
• Construction: The actual coding of the software is done in this phase, based on the model
designed in the modeling phase. So, in this phase the software is developed and tested.
• Deployment: In this last phase, the product is rolled out or delivered & installed at
the customer’s end and support is given if required. Feedback is taken from the
customer to ensure the quality of the product.
Advantages
1. Easy to explain to the users
2. Structured approach.
3. Stages and activities are well defined.
4. Helps to plan and schedule the project
5. Verification at each stage ensures early detection of errors/misunderstanding
6. Each phase has specific deliverables.
Disadvantages
1. Very difficult to go back to any stage after it is finished.
2. Costly and requires more time, in addition to the detailed plan.
3. Assumes that the requirements of a system can be frozen.
4. Offers little flexibility; adjusting scope is difficult and expensive.
To overcome the major shortcomings of the classical waterfall model, we come up with
the iterative waterfall model.
Here, we provide feedback paths for error correction as and when errors are detected in a
later phase. Although errors are inevitable, it is desirable to detect them in the same phase in
which they occur; doing so reduces the effort needed to correct the bug.
The advantage of this model is that there is a working model of the system at a very early
stage of development, which makes it easier to find functional or design flaws. Finding
issues at an early stage of development enables the team to take corrective measures within a
limited budget. The disadvantage of this SDLC model is that it is applicable only to large
and bulky software development projects, because it is hard to break a small software
system into further small serviceable increments/modules.
[ Reference - R7 ]
THE V-MODEL
The V-model is a type of SDLC model in which the process executes sequentially in a V-shape.
It is also known as the Verification and Validation model. It is based on the association of
a testing phase with each corresponding development stage: each development stage is directly
associated with a testing phase, and the next phase starts only after completion of the previous
phase, i.e., for each development activity there is a corresponding testing activity.
FIGURE: V- MODEL
So the V-model contains the Verification phases on one side and the Validation phases on the
other. The Verification and Validation phases are joined by the coding phase, forming a
V-shape; thus it is called the V-model.
Design Phase:
• Requirement Analysis: This phase contains detailed communication with the customer
to understand their requirements and expectations. This stage is known as Requirement
Gathering.
• System Design: This phase contains the system design and the complete hardware and
communication setup for developing product.
• Architectural Design: System design is broken down further into modules taking up
different functionalities. The data transfer and communication between the internal
modules and with the outside world (other systems) is clearly understood.
• Module Design: In this phase the system is broken down into small modules. The detailed
design of the modules is specified; this is also known as Low-Level Design (LLD).
Testing Phases:
• Unit Testing: Unit Test Plans are developed during module design phase. These Unit
Test Plans are executed to eliminate bugs at code or unit level.
• Integration Testing: After completion of unit testing, integration testing is performed. In
integration testing, the modules are integrated and the system is tested. Integration testing
corresponds to the architectural design phase. This test verifies the communication of the
modules among themselves.
• System Testing: System testing tests the complete application, including its functionality,
interdependency, and communication. It tests the functional and non-functional requirements
of the developed application.
• User Acceptance Testing (UAT): UAT is performed in a user environment that
resembles the production environment. UAT verifies that the delivered system meets the
user's requirements and that the system is ready for use in the real world.
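The unit-testing level described above can be sketched in a few lines of Python using the standard unittest module. The unit under test (the add function) and the test names are hypothetical examples, not part of any specific project; the sketch only shows how a unit test plan written during module design turns into executable checks at code level.

```python
import unittest

# Hypothetical unit under test: a small function belonging to one module.
def add(a, b):
    """Return the sum of two numbers."""
    return a + b

class TestAdd(unittest.TestCase):
    # Each test exercises one small, well-defined behaviour of the unit,
    # as a unit test plan written during module design would specify.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

# Run the unit-level tests for this module.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In the V-model, a suite like this is planned while the module is being designed and executed as soon as the code exists, so defects are eliminated at the unit level before integration begins.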
Advantages:
• This is a highly disciplined model, and phases are completed one at a time.
• V-Model is used for small projects where project requirements are clear.
• Simple and easy to understand and use.
• This model focuses on verification and validation activities early in the life cycle
thereby enhancing the probability of building an error-free and good quality product.
• It enables project management to track progress accurately.
Disadvantages:
• High risk and uncertainty.
• It is not good for complex and object-oriented projects.
• It is not suitable for projects where requirements are unclear and carry a high risk of
change.
[ Reference - R8 ]
INCREMENTAL MODEL
There are many situations in which initial software requirements are well defined, but it is not
possible to follow a purely linear process. In addition, there may be a need to provide a
limited set of software functionality to users quickly and then refine and expand on that
functionality in later software releases. In such cases, you can choose a process model that is
designed to produce the software in increments.
The incremental model combines elements of linear and parallel process flows, applying
linear sequences in a stepwise manner as calendar time progresses. Each linear sequence
produces deliverable "increments" of the software.
For example, word-processing software developed using the incremental model may deliver
basic file management, editing, and document production functions in the first increment,
and more sophisticated capabilities in later increments.
It should be noted that the process flow for any increment can incorporate the prototyping
methods.
When an incremental model is used, the first increment is often a core product. That is,
basic requirements are addressed but many supplementary features (some known, others
unknown) remain undelivered. The core product is used by the customer (or undergoes
detailed evaluation). As a result of use and/or evaluation, a plan is developed for the next
increment.
The plan addresses the modification of the core product to better meet the needs of the
customer and the delivery of additional features and functionality. This process is repeated
following the delivery of each increment, until the complete product is produced.
The incremental process model focuses on the delivery of an operational product with each
increment. Early increments are stripped-down versions of the final product, but they do
provide capability that serves the user and also provide a platform for evaluation by the user.
Early increments can be implemented with fewer people. If the core product is well received,
then additional staff (if required) can be added to implement the next increment. In addition,
increments can be planned to manage technical risks.
For example, a major system might require the availability of new hardware that is under
development and whose delivery date is uncertain. It might be possible to plan early
increments in a way that avoids the use of this hardware, thereby enabling partial
functionality to be delivered to end users without inordinate delay.
Advantages –
• Error Reduction (core modules are used by the customer from the beginning of the
phase and then these are tested thoroughly)
• Uses divide and conquer for breakdown of tasks.
• Lowers initial delivery cost.
• Incremental Resource Deployment.
Disadvantages –
• Requires good planning and design.
• Total cost is not lower.
• Well defined module interfaces are required.
RAD MODEL
(i) Communication: This step works to understand the business problems and the
information characteristics that the software must accommodate.
(ii) Planning: This is very important as multiple teams work on different systems.
(iii) Modeling: Modeling includes the major phases, like business, data, process modeling
and establishes design representation that serves as the basis for RAD’s construction
activity.
(iv) Construction: This includes the use of preexisting software components and the
application of automatic code generation.
Advantages –
• Use of reusable components helps to reduce the cycle time of the project.
• Feedback from the customer is available at initial stages.
• Reduced costs as fewer developers are required.
• Use of powerful development tools results in better quality products in comparatively
shorter time spans.
• The progress and development of the project can be measured through the various
stages.
• It is easier to accommodate changing requirements due to the short iteration time spans.
Disadvantages –
• The use of powerful and efficient tools requires highly skilled professionals.
• The absence of reusable components can lead to failure of the project.
• The team leader must work closely with the developers and customers to close the
project in time.
• The systems which cannot be modularized suitably cannot use this model.
• Customer involvement is required throughout the life cycle.
• It is not meant for small-scale projects, as in such cases the cost of using automated
tools and techniques may exceed the entire budget of the project.
PROTOTYPING MODEL
Prototyping: Often, a customer defines a set of general objectives for software, but does not
identify detailed requirements for functions and features. In other cases, the developer may be
unsure of the efficiency of an algorithm, the adaptability of an operating system, or the form
that human-machine interaction should take. In these, and many other situations, a
prototyping paradigm may offer the best approach.
Although prototyping can be used as a stand-alone process model, it is more commonly used
as a technique that can be implemented within the context of any one of the process models.
Figure: Prototyping
The prototyping paradigm begins with communication. You meet with other stakeholders to
define the overall objectives for the software, identify whatever requirements are known, and
outline areas where further definition is mandatory.
A prototyping iteration is planned quickly, and modeling (in the form of a “quick design”)
occurs. A quick design focuses on a representation of those aspects of the software that will
be visible to end users (e.g., human interface layout or output display formats).
The quick design leads to the construction of a prototype. The prototype is deployed and
evaluated by stakeholders, who provide feedback that is used to further refine requirements.
Iteration occurs as the prototype is tuned to satisfy the needs of various stakeholders, while at
the same time enabling you to better understand what needs to be done.
Advantages –
• The customers get to see the partial product early in the life cycle. This ensures a
greater level of customer satisfaction and comfort.
Disadvantages –
• Costly with respect to time as well as money.
• There may be too much variation in requirements each time the prototype is evaluated
by the customer.
• Poor Documentation due to continuously changing customer requirements.
• It is very difficult for the developers to accommodate all the changes demanded by the
customer.
• There is uncertainty in determining the number of iterations that would be required
before the prototype is finally accepted by the customer.
• After seeing an early prototype, the customers sometimes demand the actual product to
be delivered soon.
• Developers in a hurry to build prototypes may end up with sub-optimal solutions.
• The customer might lose interest in the product if he/she is not satisfied with the initial
prototype.
SPIRAL MODEL
Originally proposed by Barry Boehm, the spiral model is an evolutionary software process
model that couples the iterative nature of prototyping with the controlled and systematic
aspects of the waterfall model.
It provides the potential for rapid development of increasingly more complete versions of the
software. Boehm describes the model in the following manner:
"The spiral development model is a risk-driven process model generator that is used to guide
multi-stakeholder concurrent engineering of software intensive systems. It has two main
distinguishing features. One is a cyclic approach for incrementally growing a system’s degree
of definition and implementation while decreasing its degree of risk. The other is a set of
anchor point milestones for ensuring stakeholder commitment to feasible and mutually
satisfactory system solutions."
[NOTE: The arrows pointing inward along the axis separating the deployment region from
the communication region indicate a potential for local iteration along the same spiral path.]
Using the spiral model, software is developed in a series of evolutionary releases. During
early iterations, the release might be a model or prototype. During later iterations,
increasingly more complete versions of the engineered system are produced.
A spiral model is divided into a set of framework activities defined by the software
engineering team. Each of the framework activities represents one segment of the spiral path
illustrated in the figure. The spiral model can be adapted to apply throughout the life of the
computer software.
SEGMENTS:
As this evolutionary process begins, the software team performs activities that are implied by
a circuit around the spiral in a clockwise direction, beginning at the center.
Anchor point milestones, a combination of work products and conditions that are attained
along the path of the spiral, are noted for each evolutionary pass.
The first circuit around the spiral might result in the development of a product specification
and concept development of the project, starting at the core of the spiral and continuing for
multiple iterations until concept development is complete.
Subsequent passes around the spiral might be used to develop a prototype and then
progressively more sophisticated versions of the software.
Each pass through the planning region results in adjustments to the project plan.
Cost and schedule are adjusted based on feedback derived from the customer after delivery.
In addition, the project manager adjusts the planned number of iterations required to complete
the software.
The version, build, or deliverable produced at the end of the deployment phase of the last
circuit is the final software product.
Advantages of Spiral Model: Below are some of the advantages of the Spiral Model.
• Risk Handling: For projects with many unknown risks that surface as development
proceeds, the spiral model is the best development model to follow, owing to the
risk analysis and risk handling at every phase.
• Good for large projects: It is recommended to use the Spiral Model in large and
complex projects.
• Flexibility in Requirements: Change requests in the requirements at a later phase can be
incorporated accurately by using this model.
• Customer Satisfaction: Customers can see the development of the product at an early
phase of the software development and thus become habituated to the system by using it
before the total product is complete.
Disadvantages of Spiral Model: Below are some of the main disadvantages of the spiral
model.
• Complex: The Spiral Model is much more complex than other SDLC models.
• Expensive: Spiral Model is not suitable for small projects as it is expensive.
• Too dependent on risk analysis: The successful completion of the project is
very much dependent on risk analysis. Without highly experienced expertise, a project
developed using this model is likely to fail.
• Difficulty in time management: As the number of phases is unknown at the start of the
project, time estimation is very difficult.
CONCURRENT DEVELOPMENT MODEL
• It allows a software team to represent iterative and concurrent elements of any of the
process models.
• For example, the modeling activity defined for the spiral model is accomplished by
invoking one or more of the software engineering actions: prototyping, analysis, and
design.
• The activity—modeling—may be in any one of the states noted at any given time.
• Similarly, other activities, actions, or tasks (e.g., communication or construction) can be
represented in a similar manner.
• All software engineering activities exist concurrently but reside in different states.
• For example, early in a project the communication activity (not shown in the figure) has
completed its first iteration and exists in the awaiting changes state.
• The modeling activity (which existed in the inactive state while initial communication was
being completed) now makes a transition into the under development state. If, however, the
customer indicates that changes in requirements must be made, the modeling activity
moves from the under development state into the awaiting changes state.
• Concurrent modeling defines a series of events that will trigger transitions from state to
state for each of the software engineering activities, actions, or tasks.
SOFTWARE REQUIREMENT SPECIFICATION (SRS)
The SRS is developed based on the agreement between the customer and the contractors. It
may include use cases describing how the user will interact with the software system. The
software requirement specification document consists of all requirements necessary for
project development.
To develop the software system we should have a clear understanding of it. To achieve this,
we need continuous communication with customers to gather all requirements. A good SRS
defines how the software system will interact with all internal modules, hardware, and other
programs, and covers human user interactions in a wide range of real-life scenarios.
It is highly recommended to review or test SRS documents before starting to write test cases
and making any plan for testing.
Let us see how to test an SRS and the important points to keep in mind while testing it.
1. Correctness of SRS should be checked. Since the whole testing phase is dependent on
SRS, it is very important to check its correctness. There are some standards with which we
can compare and verify.
2. Ambiguity should be avoided. Sometimes in an SRS, some words have more than one
meaning, and this might confuse testers, making it difficult to get the exact reference. It is
advisable to check for such ambiguous words and make the meaning clear for better
understanding.
3. Requirements should be complete. When a tester writes test cases, the first thing that
needs to be clear is what exactly is required from the application. For example, if the
application needs to send data of a specific size, then the SRS should clearly state how much
data is to be sent and what the size limit is.
4. Consistent requirements. The SRS should be consistent within itself and consistent to its
reference documents. If you call an input “Start and Stop” in one place, don’t call it
“Start/Stop” in another. This sets the standard and should be followed throughout the testing
phase.
5. Verification of expected results: The SRS should not contain statements like "works as
expected"; it should clearly state what is expected, since different testers have different
perspectives and may draw different conclusions from such a statement.
6. Testing environment: Some applications need specific conditions and a particular
environment to be tested for accurate results. The SRS should clearly document what type of
environment needs to be set up.
7. Pre-conditions defined clearly: One of the most important parts of a test case is its pre-
conditions. If they are not met properly, the actual result will always differ from the expected
result. Verify that all the pre-conditions are mentioned clearly in the SRS.
8. Requirement IDs: These are the base of the test case template. Test case IDs are written
based on requirement IDs. Requirement IDs also make it easy to categorize modules, so just
by looking at them a tester will know which module to refer to. The SRS must include them
so that each ID identifies a particular module.
10. Assumptions should be avoided: Sometimes, when a requirement is not clear to the
tester, he tends to make assumptions about it, which is not the right way to test, as
assumptions could be wrong and test results may then vary. It is better to avoid assumptions
and ask the client about all the "missing requirements" to have a better understanding of the
expected results.
11. Deletion of irrelevant requirements: More than one team works on the SRS, so it is
possible that some irrelevant requirements are included. Based on their understanding of the
software, testers can identify these requirements and remove them to avoid confusion and
reduce workload.
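The requirement-ID point above can be made concrete with a small traceability check. The sketch below is a hypothetical Python example: all requirement and test-case IDs are invented for illustration, and the matrix simply records which test cases were derived from which SRS requirements.

```python
# Hypothetical traceability matrix: requirement IDs from an SRS mapped to
# the test-case IDs derived from them.  All IDs are invented for illustration.
traceability = {
    "REQ-LOGIN-01":  ["TC-LOGIN-01", "TC-LOGIN-02"],
    "REQ-LOGIN-02":  ["TC-LOGIN-03"],
    "REQ-REPORT-01": ["TC-REPORT-01"],
}

def uncovered(requirement_ids, matrix):
    # Requirements with no associated test case are gaps in test coverage.
    return [r for r in requirement_ids if not matrix.get(r)]

all_requirements = ["REQ-LOGIN-01", "REQ-LOGIN-02",
                    "REQ-REPORT-01", "REQ-REPORT-02"]
print(uncovered(all_requirements, traceability))  # -> ['REQ-REPORT-02']
```

A check like this makes it immediately visible when a requirement in the SRS has no test case written against it, which is exactly the kind of gap that incomplete requirements or missing IDs would otherwise hide.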
Most of the defects found during testing are caused either by incomplete requirements or by
ambiguity in the SRS. To avoid such defects, it is very important to test the software
requirements specification before writing the test cases. Keep the latest version of the SRS
with you for reference and stay updated on the latest changes made to it. The best practice is
to go through the document very carefully, note down all confusions, assumptions, and
incomplete requirements, and then hold a meeting with the client to clear them up before the
development phase starts, since it becomes costly to fix bugs after the software is developed.
Once all the requirements are clear to the tester, it becomes easy to write effective test cases
and accurate expected results.
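Several of the checks above (consistent wording, no “works as expected” statements) can be
partially automated before the review meeting. A minimal sketch; the vague-phrase list and
the sample requirement lines are illustrative, not from any real SRS:

```python
def find_vague_requirements(srs_lines, vague_phrases):
    """Return (line_number, phrase) pairs flagging wording that needs
    clarification with the client before test cases are written."""
    hits = []
    for lineno, line in enumerate(srs_lines, start=1):
        for phrase in vague_phrases:
            if phrase in line.lower():
                hits.append((lineno, phrase))
    return hits

# Illustrative phrase list and SRS excerpt (not from a real document).
VAGUE = ["as expected", "user-friendly", "etc.", "and so on"]
srs = [
    "REQ-101: The login screen shall respond within 2 seconds.",
    "REQ-102: The report module shall work as expected.",
]
print(find_vague_requirements(srs, VAGUE))  # → [(2, 'as expected')]
```

Each hit is a question to raise with the client, rather than an assumption to test against.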
[ Reference - R9 ]
• Formal specifications may be automatically processed. Software tools can be built to assist
with their development, understanding, and debugging.
• Depending on the formal specification language being used, it may be possible to animate a
formal system specification to provide a prototype system.
• Formal specifications are mathematical entities and may be studied and analyzed using
mathematical methods.
Relational notations are used based on the concept of entities and attributes.
Entities are elements in a system; the names are chosen to denote the nature of the elements
(e.g., stacks, queues).
Attributes are specified by applying functions and relations to the named entities.
Attributes specify permitted operations on entities, relationships among entities, and data
flow between entities.
Relational notations include implicit equations, recurrence relations, and algebraic axioms.
State-oriented specifications use the current state of the system and the current stimuli
presented to the system to show the next state of the system.
The execution history by which the current state was attained does not influence the next
state; it is dependent only on the current state and the current stimuli.
State-oriented notations include decision tables, event tables, transition tables, and finite-state
tables.
SPECIFICATION PRINCIPLES
Principle 3: The specification must provide the implementer all of the information
he/she needs to complete the program, and no more. In particular, no information about
the structure of the calling program should be conveyed.
Principle 5: The specification should discuss the program in terms normally used by the
user and implementer alike.
1. Implicit Equations
Specify computation of the square root of a number X, between 0 and some maximum value
Y, to a tolerance E:
(0 <= X <= Y) { ABS_VALUE[ (SQRT(X))^2 - X ] <= E }
2. Recurrence Relation
FI(0) = 0;
FI(1) = 1;
FI(N) = FI(N-1) + FI(N-2) for N >= 2;
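The recurrence above (assuming the standard Fibonacci completion FI(N) = FI(N-1) +
FI(N-2)) translates directly into code:

```python
def fi(n):
    """FI(0) = 0, FI(1) = 1, FI(n) = FI(n-1) + FI(n-2) for n >= 2,
    computed iteratively to avoid exponential recursion."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fi(n) for n in range(8)])  # → [0, 1, 1, 2, 3, 5, 8, 13]
```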
[Reference - R10]
We have seen the “V-Model”. In the V-Model software development life cycle, the
development and testing activities start from the requirement specification document.
A testing activity is performed in each phase of the software life cycle.
In the first half of the model, verification activities are integrated into each phase, such as
reviews of the user requirements and the system design document; in the second half, the
validation testing activities come into the picture.
Definition: The process of evaluating software to determine whether the products of a given
development phase satisfy the conditions imposed at the start of that phase.
Verification is a static practice of verifying documents, design, code and program. It includes
all the activities associated with producing high quality software: inspection, design analysis
and specification analysis. It is a relatively objective process.
Verification will help to determine whether the software is of high quality, but it will not
ensure that the system is useful. Verification is concerned with whether the system is well-
engineered and error-free.
• Walkthrough
• Inspection
• Review
Definition: The process of evaluating software during or at the end of the development
process to determine whether it satisfies specified requirements.
Validation is the process of evaluating the final product to check whether the software meets
the customer expectations and requirements. It is a dynamic mechanism of validating and
testing the actual product.
The distinction between the two terms is largely to do with the role of specifications.
Verification is the process of checking that the software meets the specification. “Did I build
what I said I would?”
Validation is the process of checking whether the specification captures the customer’s
needs. “Did I build what I need?”
Verification vs. Validation:
• Verification does not involve executing the code; validation always involves executing the
code.
• Verification uses methods like inspections, reviews, walkthroughs, and desk-checking;
validation uses methods like black box (functional) testing, gray box testing, and white box
(structural) testing.
• Verification can catch errors that validation cannot; it is a low-level exercise. Validation
can catch errors that verification cannot; it is a high-level exercise.
• Verification asks “Are we building the system right?”; validation asks “Are we building the
right system?”
• The verification process checks whether the outputs are according to the inputs; the
validation process checks whether the software is accepted by the user.
• Verification is carried out before validation; validation is carried out just after verification.
[Reference - R10 ]
OBJECTIVES
The objective of software project planning is to provide a framework that enables the
manager to make reasonable estimates of resources, cost, and schedule. These estimates are
made within a limited time frame at the beginning of a software project and should be
updated regularly as the project progresses. In addition, estimates should attempt to define
best case and worst case scenarios so that project outcomes can be bounded. The planning
objective is achieved through a process of information discovery that leads to reasonable
estimates. In the following sections, each of the activities associated with software project
planning is discussed.
RESOURCES
The second software planning task is estimation of the resources required to accomplish the
software development effort. Figure illustrates development resources as a pyramid. The
development environment—hardware and software tools—sits at the foundation of the
resources pyramid and provides the infrastructure to support the development effort. At a
higher level, we encounter reusable software components— software building blocks that can
dramatically reduce development costs and accelerate delivery. At the top of the pyramid is
the primary resource—people. Each resource is specified with four characteristics: a
description of the resource, a statement of availability, the time when the resource will be
required, and the duration for which the resource will be applied. The last two characteristics
can be viewed as a time window.
Availability of the resource for a specified window must be established at the earliest
practical time.
HUMAN RESOURCES
The planner begins by evaluating scope and selecting the skills required to complete
development. Both organizational position (e.g., manager, senior software engineer) and
specialty (e.g., telecommunications, database, client/server) are specified. For relatively small
projects (one person-year or less), a single individual may perform all software engineering
tasks, consulting with specialists as required. The number of people required for a software
project can be determined only after an estimate of development effort (e.g., person-months)
is made.
Bennatan [BEN92] suggests four software resource categories that should be considered as
planning proceeds:
Off-the-shelf components. Existing software that can be acquired from a third party or that
has been developed internally for a past project. COTS (commercial off-the-shelf)
components are purchased from a third party, are ready for use on the current project, and
have been fully validated.
Full-experience components.
Existing specifications, designs, code, or test data developed for past projects that are similar
to the software to be built for the current project. Members of the current software team have
had full experience in the application area represented by these components. Therefore,
modifications required for full-experience components will be relatively low-risk.
Partial-experience components.
Existing specifications, designs, code, or test data developed for past projects that are related
to the software to be built for the current project but will require substantial modification.
Members of the current software team have only limited experience in the application area
represented by these components. Therefore, modifications required for partial-experience
components have a fair degree of risk.
New components.
Software components that must be built by the software team specifically for the needs of the
current project. The following guidelines should be considered by the software planner when
reusable components are specified as a resource.
DECOMPOSITION TECHNIQUES
Software Sizing
“Fuzzy logic” sizing. This approach uses the approximate reasoning techniques that are the
cornerstone of fuzzy logic. To apply this approach, the planner must identify the type of
application, establish its magnitude on a qualitative scale, and then refine the magnitude
within the original range. Although personal experience can be used, the planner should also
have access to a historical database of projects so that estimates can be compared to actual
experience.
Function point sizing. The planner develops estimates of the information domain
characteristics (function points) and derives size from them.
Standard component sizing. For example, the standard components for an information
system are subsystems, modules, screens, reports, interactive programs, batch programs,
files, LOC, and object-level instructions. The project planner estimates the
number of occurrences of each standard component and then uses historical project data to
determine the delivered size per standard component. To illustrate, consider an information
systems application. The planner estimates that 18 reports will be generated. Historical data
indicates that 967 lines of COBOL are required per report. This enables the planner to
estimate that 17,000 LOC will be required for the reports component. Similar estimates and
computation are made for other standard components, and a combined size value (adjusted
statistically) results.
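The report calculation above reduces to multiplying estimated counts by historical delivered
size per component and summing. A sketch; the counts and historical figures other than the
18-report, 967-LOC example are illustrative:

```python
# Historical delivered size per standard component (LOC per occurrence);
# only the reports figure comes from the example above.
historical_loc_per_unit = {"reports": 967, "screens": 540, "batch programs": 1200}
estimated_counts = {"reports": 18, "screens": 12, "batch programs": 5}

size_by_component = {
    name: count * historical_loc_per_unit[name]
    for name, count in estimated_counts.items()
}
total_loc = sum(size_by_component.values())

print(size_by_component["reports"])  # → 17406 (the roughly 17,000 LOC cited)
print(total_loc)
```

In practice the per-component totals would then be adjusted statistically against past project
data, as the text notes.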
Change sizing. This approach is used when a project encompasses the use of existing
software that must be modified in some way as part of a project. The planner estimates the
number and type (e.g., reuse, adding code, changing code, deleting code) of modifications
that must be accomplished. Using an “effort ratio” for each type of change, the size of the
change may be estimated.
Problem-Based Estimation
LOC-Based Estimation
FP-Based Estimation
Process-Based Estimation
In his classic book on “software engineering economics,” Barry Boehm [BOE81] introduced
a hierarchy of software estimation models bearing the name COCOMO, for COnstructive
COst MOdel. The original COCOMO model became one of the most widely used and
discussed software cost estimation models in the industry. It has evolved into a more
comprehensive estimation model, called COCOMO II. Like its predecessor, COCOMO II is
actually a hierarchy of estimation models that address the following areas:
Application composition model. Used during the early stages of software engineering, when
prototyping of user interfaces, consideration of software and system interaction, assessment
of performance, and evaluation of technology maturity are paramount.
Early design stage model. Used once requirements have been stabilized and basic software
architecture has been established.
Post-architecture-stage model. Used during the construction of the software.
Like all estimation models for software, the COCOMO II models require sizing information.
Three different sizing options are available as part of the model hierarchy: object points,
function points, and lines of source code. The object point is an indirect software measure
that is computed using counts of the number of (1) screens (at the user interface), (2) reports,
and (3) components likely to be required to build the application. Each object instance (e.g., a
screen or report) is classified into one of three complexity levels (i.e., simple, medium, or
difficult) using criteria suggested by Boehm [BOE96]. In essence, complexity is a function of
the number and source of the client and server data tables that are required to generate the
screen or report and the number of views or sections presented as part of the screen or report.
The software equation is a dynamic multivariable model that assumes a specific distribution
of effort over the life of a software development project. The model has been derived from
productivity data collected for over 4000 contemporary software projects. Based on these
data, an estimation model of the form

E = [LOC × B^0.333 / P]^3 × (1 / t^4)

has been derived, where E is effort, B is a “special skills factor,” and P is a “productivity
parameter” that reflects the overall maturity of the development organization for different
classes of applications. The productivity parameter can be derived for local conditions using
historical data collected from past development efforts. It is important to note that the
software equation has two independent parameters: (1) an estimate of size (in LOC) and (2)
an indication of project duration in calendar months or years.
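A sketch of the software equation as usually stated, E = [LOC × B^0.333 / P]^3 × (1/t^4).
The parameter values below are illustrative, not calibrated for any real organization:

```python
def software_equation_effort(loc, b, p, t):
    """Putnam-Myers software equation: effort as a function of size (LOC),
    special skills factor b, productivity parameter p, and duration t.
    Units must be consistent (e.g., t in years gives effort in person-years)."""
    return (loc * b ** (1.0 / 3.0) / p) ** 3 * (1.0 / t ** 4)

# Illustrative inputs: 33,200 LOC, b = 0.28, p = 12,000.
e_short = software_equation_effort(33_200, 0.28, 12_000, 1.3)
e_long = software_equation_effort(33_200, 0.28, 12_000, 2.0)
print(round(e_short, 2), round(e_long, 2))
# Stretching the schedule sharply reduces required effort, since t enters as 1/t**4.
```

The strong t^4 dependence is the model's key property: small schedule extensions trade for
large effort reductions.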
RISK ANALYSIS
First, risk concerns future happenings. Today and yesterday are beyond active concern, as we
are already reaping what was previously sowed by our past actions. The question is, can we,
therefore, by changing our actions today, create an opportunity for a different and hopefully
better situation for ourselves tomorrow. This means second, that risk involves change, such as
in changes of mind, opinion, actions, or places . . . [Third,] risk involves choice, and the
uncertainty that choice itself entails.
What is it?
Risk analysis and management are a series of steps that help a software team to understand
and manage uncertainty. Many problems can plague a software project. A risk is a potential
problem—it might happen, it might not. But, regardless of the outcome, it’s a really good
idea to identify it, assess its probability of occurrence, estimate its impact, and establish a
contingency plan should the problem actually occur.
SOFTWARE RISKS
• Uncertainty—the risk may or may not happen; that is, there are no 100% probable risks.
• Loss—if the risk becomes a reality, unwanted consequences or losses will occur.
When risks are analyzed, it is important to quantify the level of uncertainty and the degree of
loss associated with each risk. To accomplish this, different categories of risks are
considered.
Project risks threaten the project plan. That is, if project risks become real, it is likely that
the project schedule will slip and that costs will increase. Project risks identify potential
budgetary, schedule, personnel (staffing and organization), resource, customer, and
requirements problems and their impact on a software project. Project complexity, size, and
the degree of structural uncertainty were also defined as project (and estimation) risk factors.
Technical risks threaten the quality and timeliness of the software to be produced. If a
technical risk becomes a reality, implementation may become difficult or impossible.
Technical risks identify potential design, implementation, interface, verification, and
maintenance problems. In addition, specification ambiguity, technical uncertainty, technical
obsolescence, and “leading-edge” technology are also risk factors. Technical risks occur
because the problem is harder to solve than we thought it would be.
Business risks threaten the viability of the software to be built. Business risks often
jeopardize the project or the product. Candidates for the top five business risks are
(1) building an excellent product or system that no one really wants (market risk),
(2) building a product that no longer fits into the overall business strategy for the company
(strategic risk), and (3) building a product that the sales force doesn’t understand how to sell.
Another general categorization of risks has been proposed by Charette [CHA89]. Known
risks are those that can be uncovered after careful evaluation of the project plan, the business
and technical environment in which the project is being developed, and other reliable
information sources (e.g., unrealistic delivery date, lack of documented requirements or
software scope, poor development environment).
Predictable risks are extrapolated from past project experience (e.g., staff turnover, poor
communication with the customer, dilution of staff effort as ongoing maintenance requests
are serviced).
Unpredictable risks are the joker in the deck. They can and do occur, but they are extremely
difficult to identify in advance.
RISK IDENTIFICATION
Risk identification is a systematic attempt to specify threats to the project plan (estimates,
schedule, resource loading, etc.). By identifying known and predictable risks, the project
manager takes a first step toward avoiding them when possible and controlling them when
necessary.
There are two distinct types of risks for each of the categories that have been presented
earlier: generic risks and product-specific risks. Generic risks are a potential threat to every
software project.
Product-specific risks can be identified only by those with a clear understanding of the
technology, the people, and the environment that is specific to the project at hand. To identify
product-specific risks, the project plan and the software statement of scope are examined and
an answer to the following question is developed: "What special characteristics of this
product may threaten our project plan?"
One method for identifying risks is to create a risk item checklist. The checklist can be used
for risk identification and focuses on some subset of known and predictable risks in the
following generic subcategories:
• Product size—risks associated with the overall size of the software to be built or modified.
• Customer characteristics—risks associated with the sophistication of the customer and the
developer's ability to communicate with the customer in a timely manner.
• Process definition—risks associated with the degree to which the software process has been
defined and is followed by the development organization.
• Development environment—risks associated with the availability and quality of the tools to
be used to build the product.
• Technology to be built—risks associated with the complexity of the system to be built and
the "newness" of the technology that is packaged by the system.
• Staff size and experience—risks associated with the overall technical and project experience
of the software engineers who will do the work.
The risk item checklist can be organized in different ways. Questions relevant to each of the
topics can be answered for each software project. The answers to these questions allow the
planner to estimate the impact of risk. A different risk item checklist format simply lists
characteristics that are relevant to each generic subcategory. Finally, a set of “risk
components and drivers” [AFC88] are listed along with their probability of occurrence.
Although generic risks are important to consider, usually the product-specific risks cause the
most headaches. Be certain to spend the time to identify as many product-specific risks as
possible.
• Performance risk—the degree of uncertainty that the product will meet its requirements and
be fit for its intended use.
• Cost risk—the degree of uncertainty that the project budget will be maintained.
• Support risk—the degree of uncertainty that the resultant software will be easy to correct,
adapt, and enhance.
• Schedule risk—the degree of uncertainty that the project schedule will be maintained and
that the product will be delivered on time.
All of the risk analysis activities presented to this point have a single goal—to assist the
project team in developing a strategy for dealing with risk. An effective strategy must
consider three issues:
• risk avoidance
• risk monitoring
• risk management and contingency planning
If a software team adopts a proactive approach to risk, avoidance is always the best strategy.
This is achieved by developing a plan for risk mitigation. For example, assume that high staff
turnover is noted as a project risk, r1. Based on past history and management intuition, the
likelihood, l1, of high turnover is estimated to be 0.70 (70 percent, rather high) and the
impact, x1, is projected at level 2. That is, high turnover will have a critical impact on project
cost and schedule.
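A common way to quantify such a risk is risk exposure, RE = P × C (probability of
occurrence times the cost incurred if the risk becomes real). The probability comes from the
turnover example above; the cost figure is a hypothetical assumption for illustration:

```python
def risk_exposure(probability, cost_if_real):
    """RE = P x C: the expected cost contribution of a single risk."""
    return probability * cost_if_real

turnover_probability = 0.70   # l1 from the example above
turnover_cost = 25_000        # hypothetical cost of replacing departing staff
print(risk_exposure(turnover_probability, turnover_cost))  # → 17500.0
```

Exposure values like this let the manager compare risks on a single scale when deciding
which mitigation steps are worth their cost.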
To mitigate this risk, project management must develop a strategy for reducing turnover.
Among the possible steps to be taken are
• Meet with current staff to determine causes for turnover (e.g., poor working conditions, low
pay, competitive job market).
• Mitigate those causes that are under our control before the project starts.
• Once the project commences, assume turnover will occur and develop techniques to ensure
continuity when people leave.
• Organize project teams so that information about each development activity is widely
dispersed.
• Define documentation standards and establish mechanisms to be sure that documents are
developed in a timely manner.
• Conduct peer reviews of all work (so that more than one person is "up to speed”).
• Assign a backup staff member for every critical technologist.
As the project proceeds, risk monitoring activities commence. The project manager monitors
factors that may provide an indication of whether the risk is becoming more or less likely. In
the case of high staff turnover, the following factors can be monitored:
• general attitude of team members based on project pressures
• the degree to which the team has jelled
• interpersonal relationships among team members
• potential problems with compensation and benefits
• the availability of jobs within the company and outside it
In addition to monitoring these factors, the project manager should monitor the effectiveness
of risk mitigation steps. For example, a risk mitigation step noted here called for the
definition of documentation standards and mechanisms to be sure that documents are
developed in a timely manner. This is one mechanism for ensuring continuity, should a
critical individual leave the project. The project manager should monitor documents carefully
to ensure that each can stand on its own and that each imparts information that would be
necessary if a newcomer were forced to join the software team somewhere in the middle of
the project.
Risk management and contingency planning assumes that mitigation efforts have failed and
that the risk has become a reality. Continuing the example, the project is well underway and a
number of people announce that they will be leaving. If the mitigation strategy has been
followed, backup is available, information is documented, and knowledge has been dispersed
across the team. In addition, the project manager may temporarily refocus resources (and
readjust the project schedule) to those functions that are fully staffed, enabling newcomers
who must be added to the team to “get up to speed.” Those individuals who are leaving are
asked to stop all work and spend their last weeks in “knowledge transfer mode.” This might
include video-based knowledge capture, the development of “commentary documents,”
and/or meeting with other team members who will remain on the project.
It is important to note that RMMM steps incur additional project cost. For example, spending
the time to “back up” every critical technologist costs money. Part of risk management,
therefore, is to evaluate when the benefits accrued by the RMMM steps are outweighed by
the costs associated with implementing them. In essence, the project planner performs a
classic cost/benefit analysis. If risk aversion steps for high turnover will increase both project
cost and duration by an estimated 15 percent, but the predominant cost factor is “backup,”
management may decide not to implement this step. On the other hand, if the risk aversion
steps are projected to increase costs by 5 percent and duration by only 3 percent,
management will likely put them all into place.
For a large project, 30 or 40 risks may be identified. If between three and seven risk
management steps are identified for each, risk management may become a project in itself!
For this reason, we adapt the Pareto 80–20 rule to software risk. Experience indicates that 80
percent of the overall project risk (i.e., 80 percent of the potential for project failure) can be
accounted for by only 20 percent of the identified risks. The work performed during earlier
risk analysis steps will help the planner to determine which of the risks reside in that 20
percent (e.g., risks that lead to the highest risk exposure). For this reason, some of the risks
identified, assessed, and projected may not make it into the RMMM plan—they don't fall into
the critical 20 percent (the risks with highest project priority).
Software project scheduling is an activity that distributes estimated effort across the planned
project duration by allocating the effort to specific software engineering tasks. During early
stages of project planning, a macroscopic schedule is developed. This type of schedule
identifies all major software engineering activities and the product functions to which they
are applied. As the project gets under way, each entry on the macroscopic schedule is refined
into a detailed schedule. Here, specific software tasks (required to accomplish an activity) are
identified and scheduled. Scheduling for software engineering projects can be viewed from
two rather different perspectives. In the first, an end-date for release of a computer-based
system has already (and irrevocably) been established. The software organization is
constrained to distribute effort within the prescribed time frame. The second view of software
scheduling assumes that rough chronological bounds have been discussed but that the end-
date is set by the software engineering organization. Effort is distributed to make best use of
resources and an end-date is defined after careful analysis of the software. Unfortunately, the
first situation is encountered far more frequently than the second. Like all other areas of
software engineering, a number of basic principles guide software project scheduling:
Time allocation. Each task to be scheduled must be allocated some number of work units
(e.g., person-days of effort). In addition, each task must be assigned a start date and a
completion date that are a function of the interdependencies and whether work will be
conducted on a full-time or part-time basis.
Effort validation. Every project has a defined number of staff members. As time allocation
occurs, the project manager must ensure that no more than the allocated number of people
have been scheduled at any given time. For example, consider a project that has three
assigned staff members (e.g., 3 person-days are available per day of assigned effort). On a
given day, seven concurrent tasks must be accomplished. Each task requires 0.50 person-days
of effort. More effort has been allocated than there are people to do the work.
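The over-allocation check in the example reduces to a simple comparison of required versus
available person-days:

```python
def day_is_overallocated(staff_count, task_efforts):
    """True when the person-days demanded by the day's tasks exceed the
    person-days available (one per assigned staff member)."""
    available = staff_count * 1.0
    required = sum(task_efforts)
    return required > available

# Three staff members, seven concurrent half-day tasks: 3.5 > 3.0
print(day_is_overallocated(3, [0.5] * 7))  # → True
```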
Defined responsibilities. Every task that is scheduled should be assigned to a specific team
member.
Defined outcomes. Every task that is scheduled should have a defined outcome. For software
projects, the outcome is normally a work product (e.g. the design of a module) or a part of a
work product. Work products are often combined in deliverables.
Defined milestones. Every task or group of tasks should be associated with a project
milestone. A milestone is accomplished when one or more work products has been reviewed
for quality and has been approved.
SCHEDULING
Scheduling of a software project does not differ greatly from scheduling of any multitask
engineering effort. Therefore, generalized project scheduling tools and techniques can be
applied with little modification to software projects.
Program evaluation and review technique (PERT) and critical path method (CPM)
[MOD83] are two project scheduling methods that can be applied to software development.
Both techniques are driven by information already developed in earlier project planning
activities:
• Estimates of effort
• Decomposition of tasks
Interdependencies among tasks may be defined using a task network. Tasks, sometimes
called the project work breakdown structure (WBS), are defined for the product as a whole or
for individual functions.
Both PERT and CPM provide quantitative tools that allow the software planner to
(1) determine the critical path—the chain of tasks that determines the duration of the project;
(2) establish “most likely” time estimates for individual tasks by applying statistical models;
(3) calculate “boundary times” that define a time "window" for a particular task.
Boundary time calculations can be very useful in software project scheduling. Slippage in the
design of one function, for example, can retard further development of other functions.
Riggs describes important boundary times that may be discerned from a PERT or CPM
network:
(1) the earliest time that a task can begin when all preceding tasks are completed in the
shortest possible time,
(2) the latest time for task initiation before the minimum project completion time is delayed,
(3) the earliest finish—the sum of the earliest start and the task duration,
(4) the latest finish—the latest start time added to the task duration, and
(5) the total float—the amount of surplus time or leeway allowed in scheduling tasks so that
the network critical path is maintained on schedule. Boundary time calculations lead to a
determination of critical path and provide the manager with a quantitative method for
evaluating progress as tasks are completed.
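The boundary times above can be computed with one forward pass (earliest start/finish) and
one backward pass (latest start/finish) over a task network; total float then falls out as the
difference, and zero-float tasks form the critical path. The tiny network below, with its task
names, durations, and dependencies, is illustrative:

```python
# tasks: name -> (duration, list of predecessors), listed in topological order.
tasks = {
    "spec":   (2, []),
    "design": (3, ["spec"]),
    "code":   (4, ["design"]),
    "test":   (2, ["code"]),
    "docs":   (3, ["design"]),
}

# Forward pass: earliest start = max earliest finish of predecessors.
earliest_start, earliest_finish = {}, {}
for name, (dur, preds) in tasks.items():
    earliest_start[name] = max((earliest_finish[p] for p in preds), default=0)
    earliest_finish[name] = earliest_start[name] + dur

project_end = max(earliest_finish.values())

# Backward pass: latest finish = min latest start of successors.
latest_finish, latest_start = {}, {}
for name in reversed(list(tasks)):
    dur, _ = tasks[name]
    successors = [s for s, (_, ps) in tasks.items() if name in ps]
    latest_finish[name] = min((latest_start[s] for s in successors),
                              default=project_end)
    latest_start[name] = latest_finish[name] - dur

# Total float and critical path (zero-float tasks).
total_float = {n: latest_start[n] - earliest_start[n] for n in tasks}
critical_path = [n for n in tasks if total_float[n] == 0]
print(project_end, critical_path)  # → 11 ['spec', 'design', 'code', 'test']
```

Here "docs" carries 3 units of float: it can slip that much without delaying the minimum
project completion time.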