UNIT 5
PART - A
1. Define Software measure.
A Software measure is a mapping from a set of objects in the software engineering world
into a set of mathematical constructs such as numbers or vector of numbers.
4. What is PSP?
The personal software process emphasizes personal measurement of both the work product that
is produced and the resultant quality of the work product.
8. What led to the transition from product oriented development to process oriented
development? May/June 2016
During the last decade, process-oriented software quality management was considered the successful paradigm for developing quality software.
However, the constant pace of change in the IT world brought new challenges that demanded a different model of software development.
Product orientation is described as an alternative to process orientation that is specifically adapted to some of those new challenges.
Both development models are suited to different environments and development tasks, and they are based on different concepts of product quality and quality management.
14. How are information domain values defined in function oriented metrics?
Number of user inputs
Number of user outputs
Number of user inquiries
Number of files
Number of external interfaces
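To see how these counts roll up into a function point value, the short Python sketch below applies the standard formula FP = count total x [0.65 + 0.01 x sum(Fi)]; the counts, complexity weights, and adjustment ratings used here are only illustrative assumptions, not values from the syllabus text.

# Hypothetical sketch: computing a function point (FP) value from the five
# information domain counts, using average complexity weights and the
# standard adjustment formula FP = count_total * (0.65 + 0.01 * sum(F_i)).

# (count, average weight) for each information domain value -- example numbers only
domain_counts = {
    "external inputs":     (24, 4),
    "external outputs":    (16, 5),
    "external inquiries":  (22, 4),
    "internal files":      (4, 10),
    "external interfaces": (2, 7),
}

count_total = sum(count * weight for count, weight in domain_counts.values())

# Fourteen value adjustment factors, each rated 0 (no influence) to 5 (essential).
value_adjustment_factors = [3] * 14          # assume "average" influence throughout

fp = count_total * (0.65 + 0.01 * sum(value_adjustment_factors))
print(f"Unadjusted count total: {count_total}, adjusted FP: {fp:.0f}")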
20. When does the software change? List the strategies for software changes.
Once the software is put into use, new requirements emerge and existing requirements change as the business running that software changes. The following are the strategies for software change:
Software maintenance
Architectural transformation
Software re-engineering
21. What is meant by program evolution dynamics? List all the five Lehman's laws
concerning system change.
Program evolution dynamics is the study of system change. A set of laws concerning system
change were proposed by Lehman. They are :
Continuing change
Increasing Complexity
Large program evolution
Organizational stability
Conservation of familiarity
22. Write notes on software maintenance.
Software maintenance is the general process of changing a system after it has been delivered.
The changes may be simple changes to correct coding errors, more extensive changes to correct
design errors, or significant enhancements to correct specification errors or accommodate new
requirements.
24. What are the key factors that distinguish development and maintenance?
The key factors that distinguish development and maintenance are:
Team stability
Contractual responsibility
Staff skills
Program age and structure
26. What are the key factors that influence a change in the architecture of legacy systems?
The factors that influence system distribution decisions are:
Business importance
System age
System Structure
Hardware procurement policies.
E = A + B × (ev)^C
where A, B, and C are empirically derived constants, E is effort in person-months, and ev is the
estimation variable (either LOC or FP).
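As a quick illustration (not part of the original answer), the sketch below evaluates E = A + B × (ev)^C for a LOC estimate; the constants A, B, and C are placeholders, since real values must be calibrated from an organization's historical project data.

# Illustrative only: evaluating an empirical effort model E = A + B * (ev)**C.
# A, B and C below are placeholder constants; in practice they are derived by
# regression over an organization's historical project data.

def estimated_effort(ev, A=2.5, B=0.0025, C=1.05):
    """Return effort in person-months for an estimation variable ev (LOC or FP)."""
    return A + B * (ev ** C)

if __name__ == "__main__":
    loc_estimate = 33_200   # e.g., the CAD system estimate used later in these notes
    print(f"Estimated effort: {estimated_effort(loc_estimate):.1f} person-months")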
28. What are the processes of risk management? NOV/DEC’10 APR/MAY 2021
Risk Identification
Risk analysis
Risk planning
Risk monitoring
(1) Will the delivery date of the software product be sooner than that for internally
developed software?
(2) Will the cost of acquisition plus the cost of customization be less than the cost
of developing the software internally?
The software equation is a dynamic multivariable model that assumes a specific distribution of
effort over the life of a software development project. The model has been derived from
productivity data collected for over 4000 contemporary software projects. Based on these data,
we derive an estimation model of the form
E = [LOC × B^0.333 / P]^3 × (1 / t^4)
where
E = effort in person-months or person-years
t = project duration in months or years
B = “special skills factor”
P = “productivity parameter”
35. Define project planning.
Planning requires you to make an initial commitment, even though it’s likely that this
“commitment” will be proven wrong. Whenever estimates are made, you look into the
future and accept some degree of uncertainty as a matter of course.
Estimation of resources, cost, and schedule for a software engineering effort requires
experience, access to good historical information (metrics), and the courage to commit to
quantitative predictions when qualitative information is all that exists. Estimation carries
inherent risk, and this risk leads to uncertainty.
RE = P × C
where P is the probability of occurrence for a risk, and C is the cost to the project should the
risk occur.
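A small illustrative sketch of the RE = P × C calculation for a few hypothetical risks follows; the risk names, probabilities, and costs are invented for the example.

# Risk exposure RE = P * C for each identified risk (all figures hypothetical).
risks = [
    # (description, probability of occurrence, cost to the project if it occurs)
    ("Key staff turnover",             0.30, 40_000),
    ("Reusable components not usable", 0.60, 25_000),
    ("Customer changes requirements",  0.50, 14_000),
]

for description, probability, cost in risks:
    exposure = probability * cost
    print(f"{description:<35} RE = {exposure:>10,.0f}")

total_exposure = sum(p * c for _, p, c in risks)
print(f"Total risk exposure (can feed the contingency budget): {total_exposure:,.0f}")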
42. What are the most commonly used Project Scheduling methods?
Program evaluation and review technique (PERT) and the critical path method (CPM)
are two project scheduling methods that can be applied to software development.
Both techniques are driven by information already developed in earlier project planning
activities: estimates of effort, a decomposition of the product function, the selection of
the appropriate process model and task set, and decomposition of the tasks that are
selected.
The curve indicates a minimum value t_o that indicates the least cost for delivery (i.e., the
delivery time that will result in the least effort expended). As we move to the left of t_o (i.e., as
we try to accelerate delivery), the curve rises nonlinearly.
In fact, a technique for performing quantitative analysis of progress does exist. It is called
Earned Value Analysis (EVA).
51. Will exhaustive testing guarantee that the program is 100% correct? (May/June 2016)
No, even exhaustive testing will not guarantee that the program is 100 percent correct.
There are too many variables to consider.
Installation testing - did the program install according to the instructions?
Integration testing - did the program work with all of the other programs on the system
without interference, and did the installed modules of the program integrate and work
with other installed modules?
Function testing - did each of the program functions work properly?
Unit testing - did the unit work as a standalone as designed, and did the unit work when
placed in the overall process?
User Acceptance Testing - did the program fulfill all of the user requirements and work
per the user design?
Performance testing - did the program perform to a level that was satisfactory?
53. How is productivity and cost related to function points? (Nov/Dec 2016)
Function Points are becoming widely accepted as the standard metric for measuring
software size.
Now that Function Points have made adequate sizing possible, it can now be anticipated
that the overall rate of progress in software productivity and software quality will
improve.
Understanding software size is the key to understanding both productivity and quality.
Without a reliable sizing metric relative changes in productivity (Function Points per
Work Month) or relative changes in quality (Defects per Function Point) cannot be
calculated.
If relative changes in productivity and quality can be calculated and plotted over time,
then focus can be put on an organization's strengths and weaknesses. Most important,
any attempt to correct weaknesses can be measured for effectiveness.
To plan and schedule project activities and tasks, the project manager needs to take four steps:
Set up activities.
Define relationships between activities.
Estimate resources required for performing activities.
Estimate durations for activities.
55. State the importance of scheduling activity in project management. (April/May 2015)
Scheduling of a software project does not differ greatly from scheduling of any multitask
engineering effort. Therefore, generalized project scheduling tools and techniques can be
applied with little modification for software projects.
Program evaluation and review technique (PERT) and the critical path method (CPM)
are two project scheduling methods that can be applied to software development.
Both techniques are driven by information already developed in earlier project planning
activities: estimates of effort, a decomposition of the product function, the selection of
the appropriate process model and task set, and decomposition of the tasks that are
selected.
A risk is a potential problem—it might happen, it might not. But, regardless of the outcome, it is a
good idea to identify it, assess its probability of occurrence, estimate its impact, and
establish a contingency plan should the problem actually occur.
Project risks threaten the project plan. Project risks identify potential budgetary,
schedule, personnel (staffing and organization), resource, customer, and
requirements problems and their impact on a software project. The project
complexity, size, and the degree of structural uncertainty were also defined as
project (and estimation) risk factors.
Technical risks threaten the quality and timeliness of the software to be produced.
If a technical risk becomes a reality, implementation may become difficult or
impossible. Business risks threaten the viability of the software to be built.
Business risks often jeopardize the project or the product.
57. Mr. Koushan is the project manager on a project to build a new cricket stadium in
Mumbai, India. After six months of work, the project is 27% complete. At the start of the
project, Koushan estimated that it would cost $50,000,000. What is the Earned Value?
Nov/Dec 2015
Earned Value is an approach where one monitors the project plan, actual work, and the value of
work completed to see whether a project is on track. Earned Value shows how much of the budget and
time should have been spent, considering the amount of work done so far.
1. Planned Value (PV) = The budgeted amount through the current reporting period.
2. Actual Cost (AC) = Actual costs to date.
3. Earned Value (EV) = Total project budget multiplied by the % complete of the project.
EV = (27/100) × $50,000,000 = $13,500,000
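The same arithmetic as a short Python sketch, using the figures from the question as restated above:

# Earned Value for the stadium example: EV = (% complete) * (budget at completion).
budget_at_completion = 50_000_000   # Koushan's original estimate
percent_complete = 0.27             # progress after six months

earned_value = percent_complete * budget_at_completion
print(f"Earned Value = {earned_value:,.0f}")   # 13,500,000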
58. What are the different types of productivity estimation measures? Apr/May 2017
Lines of code and function points were described as measures from which productivity metrics
can be computed. LOC and FP data are used in two ways during software project estimation:
(1) as estimation variables to “size” each element of the software and
(2) as baseline metrics collected from past projects and used in conjunction with
estimation variables to develop cost and effort projections.
59. List two customer related and technology related risks. Apr/May 2017
61. Write a note on Risk Information Sheet (RIS). Nov/Dec 2017, Nov/Dec 2018
What Is It?
A risk information sheet is a means of capturing information about a risk. Risk information
sheets are used to document new risks as they are identified. They are also used to modify
information as risks are managed.
It is a form that can be submitted to the appropriate person or included in a database with
other project risks. In the absence of a database, this becomes a primary means of
documenting and retaining information about a risk.
When To Use?
Benefits
The form
62. Explain the factors that cause difficulty in testing software. Nov/Dec 2018
While good project documentation is a positive factor, it’s also true that having to
produce detailed documentation, such as meticulously specified test cases, results in
delays. During test execution, having to maintain such detailed documentation requires
lots of effort, as does working with fragile test data that must be maintained or restored
frequently during testing.
Increasing the size of the product leads to increases in the size of the project and the
project team.
Time pressure is another factor to be considered. Pressure should not be an excuse to take
unwarranted risks.
People execute the process, and people factors are as important as or more important than
any other. Important people factors include the skills of the individuals and the team as a
whole, and the alignment of those skills with the project's needs. Even when many things
about a project are troubling, an excellent team can often make good things happen on the
project and in testing.
63. Discuss about various factors that affect a project plan. Nov/Dec 2018
Deadline:
Deadline is one of the key aspects that determine how a project is managed. Missing a
deadline creates a bad impression of your team.
Budget:
Budget is another critical factor that determines a project’s progress and management.
Stakeholders:
Techniques of managing projects will vary depending upon the kind of stakeholders for
the projects.
Project Members:
Project management techniques are also determined by the challenges faced by a project
manager which, in turn, depends on the kind of team he or she is handling.
Demand:
Demand is another key factor that influences project management techniques. Demand
itself depends on a few factors such as type of products or services, usability, etc.
Supply:
In order to meet the demand within a stipulated date and time (which we came across as
deadline), supply of resources is necessary. A project manager needs to ensure that
supply is adequate, so that deadline is not compromised for want of resources.
Price:
Price is an important aspect of project management. Price is determined by high level
managers in consultation with project sponsors after studying market trends.
64. Enumerate the factors that influence a project schedule. Nov/Dec 2018
- Resource availability: The fewer the resources allocated to the project, the longer the project
schedule will be.
- Project complexity: If the project involves complex tasks that have never been undertaken before,
the project manager must accommodate that fact in the project schedule.
- Task dependencies: There are many tasks that cannot be started before others are finished;
there are also some tasks that have to start together. Task dependencies dictate the sequence
of tasks and the project's need for resources at any one point.
- Team experience: If the project team is weak or has no experience on similar projects, then the
project manager must account for that in his project schedule and must pad his estimates.
- Deadline: Quite often, the stakeholders impose a specific deadline (for example, the project
must be finished in November so that it can launch by Christmas or the New Year). A "hard"
deadline will greatly affect the project schedule, as the project manager will probably over-
allocate resources and will also probably make the team work on weekends and holidays in order to
meet the deadline.
- Project priority: If the project has low priority in the company, then the project manager will
not be able to claim the resources really needed to finish the project in a condensed timeframe;
only a fraction of those needs will be met, as most resources will be assigned to high-priority
projects.
- Material availability: For example, in a construction project, you need sand, but the sand is
not available until the third month of the project, so the project manager needs to create the
project schedule accordingly and ensure that there are no idle resources waiting for the sand to
arrive in these first 3 months.
65. Identify the type of maintenance for each of the following: Apr/May 2018
a) Correcting the software faults: corrective maintenance.
b) Adapting to a change in the environment: adaptive maintenance.
69. Write any two differences between “Known risks” and “predictable risks”
Known risks:
1) These are risks that can be uncovered after careful evaluation of the project plan, the business
and technical environment in which the project is being developed, and other reliable information sources.
Predictable risks:
2) These are risks extrapolated from past project experience, e.g., staff turnover, poor communication
with the customer, and dilution of staff effort as ongoing maintenance requests are serviced.
70. List the tasks of Software Configuration Management (SCM).
1) Identification of SCI
2) Change control
3) Version control
4) Configuration auditing
5) Reporting
71. Write the importance of Software Configuration Management.
1. Effective Bug Tracking: Linking code modifications to issues that have been reported
makes bug tracking more effective.
2. Continuous Deployment and Integration: SCM combines with continuous processes to
automate deployment and testing, resulting in more dependable and timely software
delivery.
3. Risk management: SCM lowers the chance of introducing critical flaws by assisting in
the early detection and correction of problems.
4. Support for Big Projects: SCM offers an orderly method to handle code modifications for
big projects, fostering a well-organized development process.
5. Reproducibility: By recording the precise versions of code, libraries, and dependencies,
SCM makes builds reproducible.
6. Parallel Development: SCM facilitates parallel development by enabling several
developers to collaborate on various branches at once.
72. Why need for System configuration management?
1. Replicability: Software configuration management (SCM) ensures that a software system
can be replicated at any stage of its development. This is necessary for testing,
debugging, and maintaining consistent environments in production, testing, and
development.
2. Identification of Configuration: Source code, documentation, and executable files are
examples of configuration items that SCM helps to locate and label. Managing a system's
constituent parts and their interactions depends on this identification.
3. Effective Development Process: By automating repetitive tasks such as managing
dependencies, merging changes, and resolving conflicts, SCM simplifies the development
process. This automation reduces the risk of errors and increases efficiency.
PART - B
Configuration management:
The output of the software process is information that may be divided into three broad
categories: (1) computer programs (both source level and executable forms), (2) work
products that describe the computer programs (targeted at various stakeholders), and (3)
data or content (contained within the program or external to it).
The items that comprise all information produced as part of the software process are
collectively called a software configuration.
There are four fundamental sources of change:
New business or market conditions dictate changes in product requirements or business rules.
New stakeholder needs demand modification of data produced by information systems, functionality
delivered by products, or services delivered by a computer-based system.
Reorganization or business growth/downsizing causes changes in project priorities or software
engineering team structure.
Budgetary or scheduling constraints cause a redefinition of the system or product.
Software configuration management is a set of activities that have been developed to manage
change throughout the life cycle of computer software. SCM can be viewed as a software quality
assurance activity that is applied throughout the software process.
Four important elements that should exist when a configuration management system is
developed:
Component elements. A set of tools coupled within a file management system (e.g., a
database) that enables access to and management of each software configuration item.
Process elements. A collection of procedures and tasks that define an effective approach
to change management (and related activities) for all constituencies involved in the
management, engineering, and use of computer software.
Construction elements. A set of tools that automate the construction of software by
ensuring that the proper set of validated components (i.e., the correct version) has been
assembled.
Human elements. A set of tools and process features (encompassing other CM elements)
used by the software team to implement effective SCM.
The SCM repository is the set of mechanisms and data structures that allow a software
team to manage change in an effective manner. It provides the obvious functions of a
modern database management system by ensuring data integrity, sharing, and integration.
In addition, the SCM repository provides a hub for the integration of software tools, is
central to the flow of the software process, and can enforce uniform structure and format
for software engineering work products.
A repository (Refer figure 5.2) that serves a software engineering team should also
(1) integrate with or directly support process management functions,
(2) support specific rules that govern the SCM function and the data maintained within the
repository,
(3) provide an interface to other software engineering tools, and
(4) accommodate storage of sophisticated data objects (e.g., text, graphics, video, audio).
SCM Features
Versioning.
As a project progresses, many versions of individual work products will be created. The
repository must be able to save all these versions to enable effective management of
product releases and to permit developers to go back to previous versions during testing
and debugging.
Dependency Tracking and Change Management.
The repository manages a wide variety of relationships among the data elements stored in
it. These include relationships between enterprise entities and processes, among the parts
of an application design, between design components and the enterprise information
architecture, between design elements and deliverables, and so on. Some of these
relationships are merely associations, and some are dependencies or mandatory
relationships.
Requirements Tracing.
This special function depends on link management and provides the ability to track all the
design and construction components and deliverables that result from a specific
requirements specification (forward tracing).
Configuration Management.
A configuration management facility keeps track of a series of configurations
representing specific project milestones or production releases.
Audit Trails.
An audit trail establishes additional information about when, why, and by whom changes
are made. Information about the source of changes can be entered as attributes of specific
objects in the repository.
The Change Management Process
The software change management process defines a series of tasks that have four primary
objectives:
(1) to identify all items that collectively define the software configuration,
(2) to manage changes to one or more of these items,
(3) to facilitate the construction of different versions of an application, and
(4) to ensure that software quality is maintained as the configuration evolves over time.
Baselines
A baseline is a software configuration management concept that helps you to control
change without seriously impeding justifiable change.
A specification or product that has been formally reviewed and agreed upon, that
thereafter serves as the basis for further development, and that can be changed only
through formal change control procedures.
Before a software configuration item becomes a baseline, change may be made quickly
and informally. However, once a baseline is established, changes can be made, but a
specific, formal procedure must be applied to evaluate and verify each change.
In the context of software engineering, a baseline is a milestone in the development of
software.
A baseline is marked by the delivery of one or more software configuration items that
have been approved as a consequence of a technical review.
o For example, the elements of a design model have been documented and
reviewed.
o Errors are found and corrected.
o Once all parts of the model have been reviewed, corrected, and then approved, the
design model becomes a baseline.
Further changes to the program architecture (documented in the design model) can be
made only after each has been evaluated and approved.
Although baselines can be defined at any level of detail, the most common software
baselines are shown in Figure 5.4.
Software engineering tasks produce one or more SCIs. After SCIs are reviewed and
approved, they are placed in a project database (also called a project library or software
repository).
Be sure that the project database is maintained in a centralized, controlled location.
When a member of a software engineering team wants to make a modification to a
baselined SCI, it is copied from the project database into the engineer’s private
workspace. However, this extracted SCI can be modified only if SCM controls are
followed.
The arrows in Figure 5.4 illustrate the modification path for a baselined SCI.
Change Control
For a large software project, uncontrolled change rapidly leads to chaos. For such
projects, change control combines human procedures and automated tools to provide a
mechanism for the control of change.
A change request is submitted and evaluated to assess technical merit, potential side
effects, overall impact on other configuration objects and system functions, and the
projected cost of the change.
The results of the evaluation are presented as a change report, which is used by a change
control authority (CCA)—a person or group that makes a final decision on the status and
priority of the change.
An engineering change order (ECO) is generated for each approved change. The ECO
describes the change to be made, the constraints that must be respected, and the criteria
for review and audit.
The object(s) to be changed (Refer figure 5.5) can be placed in a directory that is
controlled solely by the software engineer making the change. A version control system
updates the original file once the change has been made.
As an alternative, the object(s) to be changed can be “checked out” of the project
database (repository), the change is made, and appropriate SQA activities are applied.
The object(s) is (are) then “checked in” to the database, and appropriate version control
mechanisms are used to create the next version of the software.
These version control mechanisms, integrated within the change control process,
implement two important elements of change management—access control and
synchronization control.
Access control governs which software engineers have the authority to access and modify a
particular configuration object.
Synchronization control helps to ensure that parallel changes, performed by two different
people, don’t overwrite one another.
Version control combines procedures and tools to manage different versions of configuration
objects that are created during the software process.
A version control system implements or is directly integrated with four major capabilities:
(1) a project database (repository) that stores all relevant configuration objects,
(2) a version management capability that stores all versions of a configuration object (or
enables any version to be constructed using differences from past versions),
(3) a make facility that enables you to collect all relevant configuration objects and
construct a specific version of the software. In addition, version control and change
control systems often implement
(4) an issues tracking (also called bug tracking) capability that enables the team to record
and track the status of all outstanding issues associated with each configuration object.
Configuration Audit
1)Technical reviews
The technical review focuses on the technical correctness of the configuration object that
has been modified. The reviewers assess the SCI to determine consistency with other
SCIs, omissions, or potential side effects.
A technical review should be conducted for all but the most trivial changes.
Status Reporting
Configuration status reporting (sometimes called status accounting) is an SCM task that
answers the following questions:
(1) What happened? (2) Who did it? (3) When did it happen? (4) What else will be affected?
At the very least, develop a “need to know” list for every configuration object and keep it
up to date. When a change is made, be sure that everyone on the list is notified. Each time
an SCI is assigned new or updated identification, a CSR entry is made. Each time a
change is approved by the CCA (i.e., an ECO is issued), a CSR entry is made.
Each time a configuration audit is conducted, the results are reported as part of the CSR
task.
Output from CSR may be placed in an online database or website, so that software
developers or support staff can access change information by keyword category.
In addition, a CSR report is generated on a regular basis and is intended to keep
management and practitioners apprised of important changes.
The People
The “people factor” is so important that the Software Engineering Institute has developed
a People Capability Maturity Model (People-CMM).
The people capability maturity model defines the following key practice areas for
software people: staffing, communication and coordination, work environment,
performance management, training, compensation, competency analysis and
development, career development, workgroup development, team/culture development,
and others.
The Product
Before a project can be planned, product objectives and scope should be established,
alternative solutions should be considered, and technical and management constraints
should be identified.
Without this information, it is impossible to define reasonable (and accurate) estimates of
the cost, an effective assessment of risk, a realistic breakdown of project tasks, or a
manageable project schedule that provides a meaningful indication of progress.
Once the product objectives and scope are understood, alternative solutions are considered.
Although very little detail is discussed, the alternatives enable managers and practitioners
to select a “best” approach, given the constraints imposed by delivery deadlines, budgetary
restrictions, personnel availability, technical interfaces, and myriad other factors.
The Process
A software process provides the framework from which a comprehensive plan for
software development can be established. A small number of framework activities are
applicable to all software projects, regardless of their size or complexity.
A number of different task sets—tasks, milestones, work products, and quality assurance
points—enable the framework activities to be adapted to the characteristics of the software
project and the requirements of the project team.
Finally, umbrella activities—such as software quality assurance, software configuration
management, and measurement—overlay the process model. Umbrella activities are
independent of any one framework activity and occur throughout the process.
The Project
We conduct planned and controlled software projects for one primary reason—it is the only
known way to manage complexity. And yet, software teams still struggle. In a study of 250
large software projects between 1998 and 2004, Capers Jones [Jon04] found that “about 25
were deemed successful in that they achieved their schedule, cost, and quality objectives.
About 50 had delays or overruns below 35 percent, while about 175 experienced major
delays and overruns, or were terminated without completion.” Although the success rate for
present-day software projects may have improved somewhat, our project failure rate remains
much higher than it should be.
To avoid project failure, a software project manager and the software engineers who build
the product must avoid a set of common warning signs, understand the critical success
factors that lead to good project management, and develop a commonsense approach for
planning, monitoring, and controlling the project.
People
The Stakeholders
The software process (and every software project) is populated by stakeholders who
can be categorized into one of five constituencies:
Senior managers who define the business issues that often have a significant influence
on the project.
Project (technical) managers who must plan, motivate, organize, and control the
practitioners who do software work.
Practitioners who deliver the technical skills that are necessary to engineer a product or
application.
Customers who specify the requirements for the software to be engineered and other
stakeholders who have a peripheral interest in the outcome.
End users who interact with the software once it is released for production use.
Team Leaders
In an excellent book on technical leadership, Jerry Weinberg [Wei86] suggests an MOI
model of leadership:
Motivation. The ability to encourage (by “push or pull”) technical people to produce to
their best ability.
Organization. The ability to mold existing processes (or invent new ones) that will
enable the initial concept to be translated into a final product.
Ideas or innovation. The ability to encourage people to create and feel creative even
when they must work within bounds established for a particular software product or
application.
Agile Teams
The small, highly motivated project team, also called an agile team, adopts many of the
characteristics of successful software project teams discussed in the preceding section and avoids
many of the toxins that create problems. However, the agile philosophy stresses individual (team
member) competency coupled with group collaboration as critical success factors for the team.
Boehm suggests an approach that addresses project objectives, milestones and schedules,
responsibilities, management and technical approaches, and required resources.
He calls it the W5HH Principle, after a series of questions that lead to a definition of key
project characteristics and the resultant project plan:
Why is the system being developed? All stakeholders should assess the validity of
business reasons for the software work. Does the business purpose justify the expenditure
of people, time, and money?
What will be done? The task set required for the project is defined.
When will it be done? The team establishes a project schedule by identifying when
project tasks are to be conducted and when milestones are to be reached.
Who is responsible for a function? The role and responsibility of each member of the
software team is defined.
Where are they located organizationally? Not all roles and responsibilities reside within
software practitioners. The customer, users, and other stakeholders also have
responsibilities.
How will the job be done technically and managerially? Once product scope is
established, a management and technical strategy for the project must be defined.
How much of each resource is needed? The answer to this question is derived by
developing estimates based on answers to earlier questions.
3) Briefly explain the Project Scheduling and Tracking methods. NOV/DEC’08 (Or)
Write short notes on the following. April /May 2015
(OR)
With appropriate time-line chart describe the scheduling of a software project. Also
construct the project table for the plan and task. (NOV/DEC 2021)
Project Scheduling
Scheduling of a software project does not differ greatly from scheduling of any multitask
engineering effort.
Therefore, generalized project scheduling tools and techniques can be applied with little
modification for software projects.
Program evaluation and review technique (PERT) and the critical path method (CPM)
are two project scheduling methods that can be applied to software development.
Both techniques are driven by information already developed in earlier project planning
activities: estimates of effort, a decomposition of the product function, the selection of
the appropriate process model and task set, and decomposition of the tasks that are
selected.
Interdependencies among tasks may be defined using a task network. Tasks, sometimes
called the project Work Breakdown Structure (WBS), are defined for the product as a
whole or for individual functions.
Both PERT and CPM provide quantitative tools that allow you to
(1) Determine the critical path—the chain of tasks that determines the duration of the
project,
(2) Establish “most likely” time estimates for individual tasks by applying statistical
models, and
(3) Calculate “boundary times” that define a time “window” for a particular task.
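To make the critical-path idea concrete, the following sketch computes the critical path of a small, invented task network with a CPM-style forward pass; the task names, durations, and dependencies are assumptions for illustration only.

# Sketch: finding the critical path of a small task network (CPM forward pass).
# Tasks, durations (in days) and dependencies are purely illustrative.
tasks = {
    "scoping":       (3, []),
    "planning":      (2, ["scoping"]),
    "risk analysis": (2, ["scoping"]),
    "design":        (6, ["planning", "risk analysis"]),
    "coding":        (8, ["design"]),
    "testing":       (5, ["coding"]),
}

earliest_finish = {}
critical_pred = {}
for name, (duration, deps) in tasks.items():   # dict order is already topological here
    start = max((earliest_finish[d] for d in deps), default=0)
    earliest_finish[name] = start + duration
    critical_pred[name] = max(deps, key=lambda d: earliest_finish[d]) if deps else None

# Walk back from the task that finishes last to recover the critical path.
end = max(earliest_finish, key=earliest_finish.get)
path, node = [], end
while node is not None:
    path.append(node)
    node = critical_pred[node]
print("Critical path:", " -> ".join(reversed(path)))
print("Project duration:", earliest_finish[end], "days")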
Time-Line Charts
When creating a software project schedule, you begin with a set of tasks (the work breakdown structure).
If automated tools are used, the work breakdown is input as a task network or task
outline. Effort, duration, and start date are then input for each task.
In addition, tasks may be assigned to specific individuals. As a consequence of this input,
a time-line chart, also called a Gantt chart, is generated.
A time-line chart can be developed for the entire project. Alternatively, separate charts
can be developed for each project function or for each individual working on the project.
Figure 5.7 illustrates the format of a time-line chart. It depicts a part of a software project
schedule that emphasizes the concept scoping task for a word-processing (WP) software
product.
All project tasks (for concept scoping) are listed in the left-hand column. The horizontal
bars indicate the duration of each task.
When multiple bars occur at the same time on the calendar, task concurrency is implied.
The diamonds indicate milestones.
Once the information necessary for the generation of a time-line chart has been input, the
majority of software project scheduling tools produce project tables—a tabular listing of
all project tasks, their planned and actual start and end dates, and a variety of related
information (Figure 5.8).
Used in conjunction with the time-line chart, project tables enable you to track progress.
Conducting periodic project status meetings in which each team member reports progress
and problems
Evaluating the results of all reviews conducted throughout the software engineering
process.
Determining whether formal project milestones (the diamonds shown in Figure 5.7)
have been accomplished by the scheduled date
Comparing the actual start date to the planned start date for each project task listed in the
resource table (Figure 5.8)
Meeting informally with practitioners to obtain their subjective assessment of progress to
date and problems on the horizon
Using earned value analysis to assess progress quantitatively.
In reality, all of these tracking techniques are used by experienced project managers.
Technical milestone: OO programming
Each new class has been implemented in code from the design model.
Extracted classes (from a reuse library) have been implemented.
Prototype or increment has been built.
Technical milestone: OO testing
The correctness and completeness of OO analysis and design models has been
reviewed.
A class-responsibility-collaboration network has been developed and reviewed.
Test cases are designed, and class-level tests have been conducted for each class.
Test cases are designed, and cluster testing is completed and the classes are
integrated.
System-level tests have been completed.
Recalling that the OO process model is iterative, each of these milestones may be revisited as
different increments are delivered to the customer.
4. Elaborate the relationship between people and effort, Task Set & Network (Or) Discuss
Putnam resources allocation model. Derive the time and effort equations. (May/June 2016)
Elaborate the relationship between people and effort, Task Set & Network. Write short
notes on the following. i)Task network. April/ May 2015
Basic Principles
Like all other areas of software engineering, a number of basic principles guide software project
scheduling:
Compartmentalization. The project must be compartmentalized into a number of
manageable activities and tasks.
Interdependency. The interdependency of each compartmentalized activity or task must
be determined. Some tasks must occur in sequence, while others can occur in parallel.
Time allocation. Each task to be scheduled must be allocated some number of work units
(e.g., person-days of effort). In addition, each task must be assigned a start date and a
completion date that are a function of the interdependencies
Effort validation. Every project has a defined number of people on the software team. As
time allocation occurs, you must ensure that no more than the allocated number of people
has been scheduled at any given time.
Defined responsibilities. Every task that is scheduled should be assigned to a specific
team member.
Defined outcomes. Every task that is scheduled should have a defined outcome. For
software projects, the outcome is normally a work product (e.g., the design of a
component) or a part of a work product. Work products are often combined in
deliverables.
Defined milestones. Every task or group of tasks should be associated with a project
milestone. A milestone is accomplished when one or more work products has been
reviewed for quality and has been approved. Each of these principles is applied as the
project schedule evolves.
There is a common myth that is still believed by many managers who are responsible for
software development projects: “If we fall behind schedule, we can always add more
programmers and catch up later in the project.” Unfortunately, adding people late in a
project often has a disruptive effect on the project, causing schedules to slip even further.
The people who are added must learn the system, and the people who teach them are the
same people who were doing the work. While teaching, no work is done, and the project
falls further behind.
The software equation introduced is derived from the PNR curve and demonstrates the
highly nonlinear relationship between chronological time to complete a project and
human effort applied to the project.
The number of delivered lines of code (source statements), L, is related to effort and
development time by the equation:
L = P × E^(1/3) × t^(4/3)
Rearranging this software equation, we can arrive at an expression for development effort E:
E = L^3 / (P^3 × t^4)
where E is the effort expended (in person-years) over the entire life cycle for software
development and maintenance and t is the development time in years. The equation for
development effort can be related to development cost by the inclusion of a burdened
labor rate factor ($/person-year).
This leads to some interesting results. Consider a complex, real-time software project
estimated at 33,000 LOC, 12 person-years of effort.
If eight people are assigned to the project team, the project can be completed in
approximately 1.3 years. If, however, we extend the end date to 1.75 years, the highly
nonlinear nature of the model described in Equation (5.1) yields:
E = L^3 / (P^3 × t^4) ≈ 3.8 person-years
That is, by extending the end date by about six months, the required effort drops dramatically,
so far fewer people are needed.
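The nonlinear trade-off can be sketched numerically as below; the productivity parameter P is an assumed value chosen so the output roughly reproduces the 12 person-year and 3.8 person-year figures quoted above, so treat the absolute numbers as indicative only.

# Sketch of the effort/duration trade-off implied by E = L**3 / (P**3 * t**4).
# P is an assumed productivity parameter chosen so that the output roughly
# reproduces the person-year figures quoted in the example above.
L = 33_000          # delivered lines of code
P = 10_000          # assumed productivity parameter

def effort_person_years(t_years):
    return L**3 / (P**3 * t_years**4)

for t in (1.3, 1.5, 1.75, 2.0):
    print(f"t = {t:.2f} years -> E = {effort_person_years(t):5.1f} person-years")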
Effort Distribution
Each of the software project estimation techniques discussed leads to estimates of work
units (e.g., person-months) required to complete software development.
A recommended effort distribution is often called the 40-20-40 rule: forty percent of all effort
is allocated to front-end analysis and design, twenty percent to coding, and forty percent to
back-end testing.
An effective software process should define a collection of task sets, each designed to
meet the needs of different types of projects.
A task set is a collection of software engineering work tasks, milestones, work products,
and quality assurance filters that must be accomplished to complete a particular project.
The task set must provide enough discipline to achieve high software quality.
There is no certainty that the technology will be applicable, but a customer (e.g.,
marketing) believes that potential benefit exists. Concept development projects are
approached by applying the following actions:
1.1 Concept scoping determines the overall scope of the project.
1.2 Preliminary concept planning establishes the organization’s ability to undertake the
work implied by the project scope.
1.3 Technology risk assessment evaluates the risk associated with the technology to be
implemented as part of the project scope.
1.4 Proof of concept demonstrates the viability of a new technology in the software
context.
1.5 Concept implementation implements the concept representation in a manner that
can be reviewed by a customer and is used for “marketing” purposes when a concept
must be sold to other customers or management.
1.6 Customer reaction to the concept solicits feedback on a new technology concept and
targets specific customer applications. A quick scan of these actions should yield few
surprises. In fact, the software engineering flow for concept development projects (and
for all other types of projects as well) is little more than common sense.
The software engineering actions described in the preceding section may be used to
define a macroscopic schedule for a project.
However, the macroscopic schedule must be refined to create a detailed project schedule.
Refinement begins by taking each action and decomposing it into a set of tasks (with
related work products and milestones).
In addition, when more than one person is involved in a software engineering project, it is
likely that development activities and tasks will be performed in parallel. When this
occurs, concurrent tasks must be coordinated so that they will be complete when later
tasks require their work product(s).
A task network, also called an activity network, is a graphic representation of the task
flow for a project. It is sometimes used as the mechanism through which task sequence
and dependencies are input to an automated project scheduling tool.
In addition, you should be aware of those tasks that lie on the critical path. That is, tasks
that must be completed on schedule if the project as a whole is to be completed on
schedule.
It is important to note that the task network shown in Figure 5.10 is macroscopic. In a
detailed task network (a precursor to a detailed schedule), each action shown in the figure
would be expanded. For example, Task 1.1 would be expanded to show all tasks detailed
in the refinement of Actions.
There exist numbers of qualitative approaches to project tracking. Each provides the
project manager with an indication of progress, but an assessment of the information
provided is somewhat subjective.
It is reasonable to ask whether there is a quantitative technique for assessing progress as
the software team progresses through the work tasks allocated to the project schedule.
In fact, a technique for performing quantitative analysis of progress does exist. It is called
Earned Value Analysis (EVA).
To determine the earned value, the following steps are performed:
• The Budgeted Cost Of Work Scheduled (BCWS) is determined for each work
task represented in the schedule. During estimation, the work (in person-hours or
person-days) of each software engineering task is planned.
• Hence, BCWSi is the effort planned for work task i. To determine progress at a
given point along the project schedule, the value of BCWS is the sum of the
BCWSi values for all work tasks that should have been completed by that point in
time on the project schedule.
• The BCWS values for all work tasks are summed to derive the budget at
completion (BAC). Hence,
BAC = Σ (BCWS_k) for all tasks k
• Next, the value for Budgeted Cost Of Work Performed (BCWP) is computed.
The value for BCWP is the sum of the BCWS values for all work tasks that have
actually been completed by a point in time on the project schedule.
• “The distinction between the BCWS and the BCWP is that the former represents
the budget of the activities that were planned to be completed and the latter
represents the budget of the activities that actually were completed.” Given values
for BCWS, BAC, and BCWP, important progress indicators can be computed:
• Schedule performance index, SPI = BCWP / BCWS. SPI is an indication of the efficiency with
which the project is utilizing scheduled resources. An SPI value close to 1.0 indicates
efficient execution of the project schedule.
• Schedule variance, SV = BCWP - BCWS. SV is simply an absolute indication of variance from the
planned schedule.
• Percent scheduled for completion = BCWS / BAC provides an indication of the percentage of work
that should have been completed by time t.
• Percent complete = BCWP / BAC provides a quantitative indication of the percent of completeness
of the project at a given point in time t.
It is also possible to compute the actual cost of work performed (ACWP). The value for
ACWP is the sum of the effort actually expended on work tasks that have been completed
by a point in time on the project schedule. It is then possible to compute
• Cost performance index, CPI = BCWP / ACWP
• Cost variance, CV = BCWP - ACWP
• A CPI value close to 1.0 provides a strong indication that the project is within its defined
budget. CV is an absolute indication of cost savings (against planned costs) or shortfall at
a particular stage of a project.
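Pulling the earned value quantities together, the sketch below computes SPI, SV, CPI, CV, and the two percentage indicators for a hypothetical status point; all task figures are invented.

# Earned value analysis for a hypothetical status point.
# Each tuple: (BCWS_i planned effort, completed?, ACWP_i actual effort if completed).
tasks = [
    (10, True, 12),
    (15, True, 14),
    (20, True, 25),
    (25, False, 0),
    (30, False, 0),
]

BAC  = sum(bcws for bcws, _, _ in tasks)              # budget at completion
BCWS = sum(bcws for bcws, _, _ in tasks[:4])          # tasks scheduled to be done by now (assume first four)
BCWP = sum(bcws for bcws, done, _ in tasks if done)   # tasks actually completed
ACWP = sum(acwp for _, done, acwp in tasks if done)   # actual cost of completed work

print(f"SPI = BCWP/BCWS = {BCWP/BCWS:.2f}")
print(f"SV  = BCWP-BCWS = {BCWP-BCWS}")
print(f"CPI = BCWP/ACWP = {BCWP/ACWP:.2f}")
print(f"CV  = BCWP-ACWP = {BCWP-ACWP}")
print(f"Percent scheduled for completion = BCWS/BAC = {BCWS/BAC:.0%}")
print(f"Percent complete                 = BCWP/BAC = {BCWP/BAC:.0%}")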
6. Write short notes on LOC based Estimation and FP Based Estimation. (Apr 2019)
Lines of code and function points were described as measures from which
productivity metrics can be computed.
LOC and FP data are used in two ways during software project estimation: (1) as estimation
variables to “size” each element of the software and (2) as baseline metrics collected from past
projects and used in conjunction with estimation variables to develop cost and effort projections.
An expected value for the estimation variable (size), S, can be computed as a weighted average of
the optimistic (s_opt), most likely (s_m), and pessimistic (s_pess) estimates:
S = (s_opt + 4 × s_m + s_pess) / 6     (Equation 5.2)
This weighting gives heaviest credence to the “most likely” estimate and follows a beta probability distribution.
The mechanical CAD software will accept two- and three-dimensional geometric data
from an engineer. The engineer will interact and control the CAD system through a user
interface that will exhibit characteristics of good human/machine interface design. All
geometric data and other supporting information will be maintained in a CAD database.
Design analysis modules will be developed to produce the required output, which will be
displayed on a variety of graphics devices.
The software will be designed to control and interact with peripheral devices that include
a mouse, digitizer, laser printer, and plotter.
Following the decomposition technique for LOC, an estimation table (Figure 5.11) is
developed. A range of LOC estimates is developed for each function.
For example, the range of LOC estimates for the 3D geometric analysis function is
optimistic, 4600 LOC; most likely, 6900 LOC; and pessimistic, 8600 LOC.
Applying Equation 5.2, the expected value for the 3D geometric analysis function is 6800
LOC. Other estimates are derived in a similar fashion.
By summing vertically in the estimated LOC column, an estimate of 33,200 lines of code
is established for the CAD system. A review of historical data indicates that the
organizational average productivity for systems of this type is 620 LOC/pm.
Based on a burdened labor rate of $8000 per month, the cost per line of code is
approximately $13. Based on the LOC estimate and the historical productivity data, the
total estimated project cost is $431,000 and the estimated effort is 54 person-months.
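The three-point expected value and the productivity-based cost projection described above can be sketched as follows; only the 3D geometric analysis function is listed, and the productivity (620 LOC/pm) and burdened labor rate ($8000/month) are the figures quoted in the text.

# LOC-based estimation sketch: expected size per function (beta-weighted
# three-point estimate), then effort and cost from historical productivity.
def expected_loc(optimistic, most_likely, pessimistic):
    return (optimistic + 4 * most_likely + pessimistic) / 6

# (optimistic, most likely, pessimistic) LOC per function -- example values
functions = {
    "3D geometric analysis": (4600, 6900, 8600),   # expected value works out to 6800 LOC
    # ... the remaining CAD functions would be listed the same way
}

total_loc = sum(expected_loc(*est) for est in functions.values())

productivity_loc_pm = 620        # organizational average, LOC per person-month
labor_rate_per_month = 8000      # burdened labor rate in dollars

effort_pm = total_loc / productivity_loc_pm
cost = effort_pm * labor_rate_per_month
print(f"Expected size: {total_loc:.0f} LOC, effort: {effort_pm:.1f} pm, cost: ${cost:,.0f}")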
Decomposition for FP-based estimation focuses on information domain values rather than
software functions.
Referring to the table presented in Figure 5.12, you would estimate inputs, outputs,
inquiries, files, and external interfaces for the CAD software.
For the purposes of this estimate, the complexity weighting factor is assumed to be
average. Figure 5.12 presents the results of this estimate.
The organizational average productivity for systems of this type is 6.5 FP/pm. Based on a
burdened labor rate of $8000 per month; the cost per FP is approximately $1230. Based
on the FP estimate and the historical productivity data, the total estimated project cost is
$461,000 and the estimated effort is 58 person-months.
7. Present neatly about Make/Buy Decision. (Or) Write short notes on the following.
In many software application areas, it is often more cost effective to acquire rather than
develop computer software.
Software engineering managers are faced with a make/ buy decision that can be further
complicated by a number of acquisition options:
(1) software may be purchased (or licensed) off-the-shelf,
(2) “full-experience” or “partial-experience” software components may be acquired and
then modified and integrated to meet specific needs, or
(3) software may be custom built by an outside contractor to meet the purchaser’s
specifications.
The make/buy decision is made based on the following conditions:
(1) Will the delivery date of the software product be sooner than that for internally
developed software?
(2) Will the cost of acquisition plus the cost of customization be less than the cost of
developing the software internally?
(3) Will the cost of outside support (e.g., a maintenance contract) be less than the cost of
internal support? These conditions apply for each of the acquisition options.
Creating a Decision Tree
The steps just described can be augmented using statistical techniques such as decision
tree analysis. For example, Figure 5.13 depicts a decision tree for a software based
system X. In this case, the software engineering organization can
(1) build system X from scratch,
(2) reuse existing partial-experience components to construct the system,
(3) buy an available software product and modify it to meet local needs, or
(4) contract the software development to an outside vendor.
If the system is to be built from scratch, there is a 70 percent probability that the job will
be difficult. The project planner estimates that a difficult development effort will cost
$450,000.
A “simple” development effort is estimated to cost $380,000. The expected value for
cost, computed along any branch of the decision tree, is
expected cost = Σ (path probability)_i × (estimated path cost)_i
For example, expected cost (build) = 0.30 × $380,000 + 0.70 × $450,000 = $429,000.
Following other paths of the decision tree, the projected costs for reuse, purchase, and
contract, under a variety of circumstances, are also shown in Figure 5.13, and the expected
costs for these paths are computed in the same way.
Based on the probability and projected costs that have been noted in Figure 5.13, the
lowest expected cost is the “buy” option. It is important to note, however, that many
criteria—not just cost— must be considered during the decision-making process.
Availability, experience of the developer/ vendor/contractor, conformance to
requirements, local “politics,” and the likelihood of change are but a few of the criteria
that may affect the ultimate decision to build, reuse, buy, or contract.
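The expected-cost computation along each branch can be sketched as below; the build-branch probabilities and costs follow the figures quoted above, while the reuse, buy, and contract branches use invented placeholder values.

# Expected cost along each decision-tree path: sum of (path probability * path cost).
# Build-branch figures come from the example above; the other branches are placeholders.
decision_tree = {
    "build":    [(0.30, 380_000), (0.70, 450_000)],
    "reuse":    [(0.40, 275_000), (0.60, 310_000)],   # illustrative
    "buy":      [(0.70, 210_000), (0.30, 400_000)],   # illustrative
    "contract": [(0.60, 350_000), (0.40, 500_000)],   # illustrative
}

expected_costs = {
    option: sum(prob * cost for prob, cost in branches)
    for option, branches in decision_tree.items()
}

for option, cost in sorted(expected_costs.items(), key=lambda kv: kv[1]):
    print(f"{option:<9} expected cost = ${cost:,.0f}")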
Outsourcing
In concept, outsourcing is extremely simple. Software engineering activities are
contracted to a third party who does the work at lower cost and, hopefully, higher quality.
Software work conducted within a company is reduced to a contract management
activity.
The COCOMO II application composition model uses object points and is illustrated in
the following paragraphs. It should be noted that other, more sophisticated estimation
models (using FP and KLOC) are also available as part of COCOMO II.
Like function points, the object point is an indirect software measure that is computed
using counts of the number of
(1) screens (at the user interface),
(2) reports, and
(3) components likely to be required to build the application.
Each object instance (e.g., a screen or report) is classified into one of three complexity
levels (i.e., simple, medium, or difficult) using criteria suggested by Boehm.
In essence, complexity is a function of the number and source of the client and server
data tables that are required to generate the screen or report and the number of views or
sections presented as part of the screen or report.
Once complexity is determined, the number of screens, reports, and components are
weighted according to the table illustrated in Figure 5.14.
The object point count is then determined by multiplying the original number of object
instances by the weighting factor in the figure and summing to obtain a total object point
count.
When component-based development or general software reuse is to be applied, the
percent of reuse (%reuse) is estimated and the object point count is adjusted to give the
new object point (NOP) count:
NOP = (object points) × [(100 - %reuse) / 100]
A productivity rate, PROD = NOP / person-month, is selected for different levels of developer
experience and development environment maturity. Once the productivity rate has been
determined, an estimate of project effort is computed using:
estimated effort = NOP / PROD
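A brief sketch of the object-point computation follows; the screen/report/component counts, complexity weights, reuse percentage, and productivity rate are hypothetical examples, not values from the text.

# COCOMO II application composition sketch: object points -> NOP -> effort.
# Counts, complexity weights and PROD below are hypothetical examples.
instances = [
    # (kind, count, complexity weight)
    ("screens",    12, 2),    # mostly "simple" screens
    ("reports",     5, 5),    # "medium" reports
    ("components",  3, 10),   # 3GL components
]

object_points = sum(count * weight for _, count, weight in instances)

percent_reuse = 20
NOP = object_points * (100 - percent_reuse) / 100      # new object points

PROD = 13            # NOP per person-month (assumed nominal experience/maturity)
effort_pm = NOP / PROD
print(f"Object points: {object_points}, NOP: {NOP:.0f}, estimated effort: {effort_pm:.1f} person-months")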
In more advanced COCOMO II models, a variety of scale factors, cost drivers, and
adjustment procedures are required.
Restated, the software equation is of the form
E = [LOC × B^0.333 / P]^3 × (1 / t^4)
where
E = effort in person-months or person-years
t = project duration in months or years
B = “special skills factor”
P = “productivity parameter”
The productivity parameter can be derived for local conditions using historical data
collected from past development efforts. Note that the software equation has two
independent parameters:
(1) an estimate of size (in LOC) and
(2) an indication of project duration in calendar months or years.
To simplify the estimation process and use a more common form for their estimation
model, Putnam and Myers [Put92] suggest a set of equations derived from the software
equation. Minimum development time is defined as
t_min = 8.14 × (LOC / P)^0.43   (t_min in months, for t_min > 6 months)     (5.4)
and the corresponding effort is
E = 180 × B × t^3   (E in person-months, for E >= 20 person-months)     (5.5)
Note that t in Equation (5.5) is represented in years. Using Equations (5.4) and (5.5) with
P = 12,000 (the recommended value for scientific software) for the CAD software,
t_min = 8.14 × (33,200 / 12,000)^0.43 ≈ 12.6 calendar months, and with B = 0.28 and
t ≈ 1.05 years, E = 180 × 0.28 × (1.05)^3 ≈ 58 person-months.
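These simplified equations can be checked numerically with a short sketch; LOC, P, and B follow the CAD example above, and the printed values match the figures derived in the text.

# Putnam-Myers simplified model: minimum development time and effort.
LOC = 33_200
P = 12_000           # productivity parameter (scientific software)
B = 0.28             # special skills factor assumed for a project of this size

t_min_months = 8.14 * (LOC / P) ** 0.43          # valid for t_min > 6 months
t_years = t_min_months / 12

effort_pm = 180 * B * t_years ** 3               # valid for E >= 20 person-months
print(f"t_min = {t_min_months:.1f} months, E = {effort_pm:.0f} person-months")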
Process and project metrics can provide historical perspective and powerful input for the
generation of quantitative estimates. Past experience (of all people involved) can aid
immeasurably as estimates are developed and reviewed.
Because estimation lays a foundation for all other project planning actions, and project
planning provides the road map for successful software engineering.
Estimation of resources, cost, and schedule for a software engineering effort requires
experience, access to good historical information (metrics), and the courage to commit to
quantitative predictions when qualitative information is all that exists. Estimation carries
inherent risk and this risk leads to uncertainty.
Project complexity has a strong effect on the uncertainty inherent in planning.
Complexity, however, is a relative measure that is affected by familiarity with past effort.
The first-time developer of a sophisticated e-commerce application might consider it to
be exceedingly complex. However, a Web engineering team developing its tenth e-
commerce WebApp would consider such work run-of-the-mill.
A number of quantitative software complexity measures have been proposed. Such
measures are applied at the design or code level and are therefore difficult to use during
software planning (before a design and code exist). However, other, more subjective
assessments of complexity can be established early in the planning process.
Planning Process
The objective of software project planning is to provide a framework that enables the
manager to make reasonable estimates of resources, cost, and schedule.
Although there is an inherent degree of uncertainty, the software team embarks on a plan
that has been established as a consequence of the planning tasks.
Therefore, the plan must be adapted and updated as the project proceeds.
Resources
The second planning task is estimation of the resources required to accomplish the
software development effort. Figure 5.16 depicts the three major categories of software
engineering resources—people, reusable software components, and the development
environment (hardware and software tools).
Each resource is specified with four characteristics: description of the resource, a
statement of availability, time when the resource will be required, and duration of time
that the resource will be applied. The last two characteristics can be viewed as a time
window.
Human Resources
The planner begins by evaluating software scope and selecting the skills required to
complete development. Both organizational position (e.g., manager, senior software
engineer) and specialty (e.g., telecommunications, database, client-server) are specified.
For relatively small projects (a few person-months), a single individual may perform all
software engineering tasks, consulting with specialists as required.
For larger projects, the software team may be geographically dispersed across a number
of different locations.
Hence, the location of each human resource is specified. The number of people required
for a software project can be determined only after an estimate of development effort
(e.g., person-months) is made.
Reusable Software Resources
Off-the-shelf components are existing software products that can be acquired from a third
party or reused from a past project; they are purchased from a third party, are ready for use
on the current project, and have been fully validated.
Environmental Resources
The environment that supports a software project, often called the software engineering
environment (SEE), incorporates hardware and software. Hardware provides a platform
that supports the tools (software) required to produce the work products that are an
outcome of good software engineering practice.
10. Write short notes on Risk Management. Nov/Dec’10. State the need for Risk
Management and explain the activities under risk Management. April/May 2015 & 2019
Explain in detail about the risk management in a software development life cycle. (16)
Nov/Dec 2015 & 2019, APR/MAY 2021
o Uncertainty—the risk may or may not happen; that is, there are no 100%
probable risks.
o Loss—if the risk becomes a reality, unwanted consequences or losses will occur.
When risks are analyzed, it is important to quantify the level of uncertainty and
the degree of loss associated with each risk.
One method for identifying risks is to create a risk item checklist. The checklist can be used for
risk identification and focuses on some subset of known and predictable risks in the following
generic subcategories:
Product size—risks associated with the overall size of the software to be built or
modified.
Business impact—risks associated with constraints imposed by management or the
marketplace.
Customer characteristics—risks associated with the sophistication of the customer and
the developer's ability to communicate with the customer in a timely manner.
Process definition—risks associated with the degree to which the software
process has been defined and is followed by the development organization.
Development environment—risks associated with the availability and quality of the tools
to be used to build the product.
Technology to be built—risks associated with the complexity of the system to be built
and the "newness" of the technology that is packaged by the system.
Staff size and experience—risks associated with the overall technical and project
experience of the software engineers who will do the work.
The overall risk exposure, RE, is determined using the following relationship
RE = P x C
where P is the probability of occurrence for a risk, and C is the cost to the project
should the risk occur.
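For example (the probability and cost figures below are assumed purely for illustration):

probability = 0.70          # P: 70% likelihood that the risk occurs (assumed)
cost_if_occurs = 25_000     # C: estimated cost to the project if it does occur (assumed)
risk_exposure = probability * cost_if_occurs    # RE = P x C
print(f"RE = {risk_exposure:,.0f}")             # RE = 17,500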
Risk Assessment
At this point in the risk management process, we have established a set of triplets of the
form:
[ri, li, xi]
Where ri is risk, li is the likelihood (probability) of the risk, and xi is the impact of the
risk.
To mitigate this risk, project management must develop a strategy for reducing turnover.
Among the possible steps to be taken are
Meet with current staff to determine causes for turnover (e.g., poor working
conditions, low pay, and competitive job market).
Mitigate those causes that are under our control before the project starts.
Once the project commences, assume turnover will occur and develop
techniques to ensure continuity when people leave.
Organize project teams so that information about each development activity is
widely dispersed.
Process metrics are collected across all projects and over long periods of time. Their
intent is to provide a set of process indicators that lead to long-term software process
improvement.
Project metrics enable a software project manager to
(1) assess the status of an ongoing project,
(2) track potential risks,
(3) uncover problem areas before they go “critical,”
(4) adjust work flow or tasks, and
(5) evaluate the project team’s ability to control quality of software work products.
Measures that are collected by a project team and converted into metrics for use during a
project can also be transmitted to those with responsibility for software process
improvement.
Process sits at the center of a triangle connecting three factors that have a profound
influence on software quality and organizational performance.
The skill and motivation of people has been shown to be the single most influential factor
in quality and performance.
The complexity of the product can have a substantial impact on quality and team
performance. The technology (i.e., the software engineering methods and tools) that
populates the process also has an impact.
In addition, the process triangle exists within a circle of environmental conditions that
include the development environment (e.g., integrated software tools), business
conditions (e.g., deadlines, business rules), and customer characteristics (e.g., ease of
communication and collaboration).
As an organization becomes more comfortable with the collection and use of process
metrics, the derivation of simple indicators gives way to a more rigorous approach called
Statistical Software Process Improvement (SSPI). In essence, SSPI uses software failure
analysis to collect information about all errors and defects encountered as an application,
system, or product is developed and used.
Project Metrics
Unlike software process metrics that are used for strategic purposes, software project
measures are tactical. That is, project metrics and the indicators derived from them are
used by a project manager and a software team to adapt project workflow and technical
activities.
The first application of project metrics on most software projects occurs during
estimation. Metrics collected from past projects are used as a basis from which effort and
time estimates are made for current software work. As a project proceeds, measures of
effort and calendar time expended are compared to original estimates (and the project
schedule).The project manager uses these data to monitor and control progress.
Software Measurement
Direct measures of the software process include cost and effort applied. Direct measures
of the product include lines of code (LOC) produced, execution speed, memory size, and
defects reported over some set period of time.
Indirect measures of the product include functionality, quality, complexity, efficiency,
reliability, maintainability, and many other “–abilities”.
The cost and effort required to build software, the number of lines of code produced, and
other direct measures are relatively easy to collect, as long as specific conventions for
measurement are established in advance. However, the quality and functionality of
software or its efficiency or maintainability are more difficult to assess and can be
measured only indirectly.
To illustrate, consider a simple example. Individuals on two different project teams
record and categorize all errors that they find during the software process. Individual
measures are then combined to develop team measures. Team A found 342 errors
during the software process prior to release. Team B found 184 errors.
All other things being equal, which team is more effective in uncovering errors
throughout the process? Because you do not know the size or complexity of the projects,
you cannot answer this question. However, if the measures are normalized, it is possible
to create software metrics that enable comparison to broader organizational averages.
Size-Oriented Metrics
Referring to the table entry (Figure 5.19) for project alpha: 12,100 lines of code were
developed with 24 person-months of effort at a cost of $168,000. It should be noted that
the effort and cost recorded in the table represent all software engineering activities
(analysis, design, code, and test), not just coding.
Further information for project alpha indicates that 365 pages of documentation were
developed, 134 errors were recorded before the software was released, and 29 defects
were encountered after release to the customer within the first year of operation.
Three people worked on the development of software for project alpha. In order to
develop metrics that can be assimilated with similar metrics from other projects, you can
choose lines of code as a normalization value. From the rudimentary data contained in the
table, a set of simple size-oriented metrics can be developed for each project:
Errors per KLOC (thousand lines of code)
Defects per KLOC
$ per KLOC
Pages of documentation per KLOC
In addition, other interesting metrics can be computed:
Errors per person-month
KLOC per person-month
$ per page of documentation
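For instance, using the project alpha data above, these size-oriented metrics can be computed with a few lines of Python (a sketch; the figures are the ones quoted in the text):

kloc = 12.1               # 12,100 lines of code developed
effort_pm = 24            # person-months
cost = 168_000            # dollars
pages = 365               # pages of documentation
errors, defects = 134, 29

print(f"errors/KLOC        = {errors / kloc:.1f}")      # ~11.1
print(f"defects/KLOC       = {defects / kloc:.1f}")     # ~2.4
print(f"$/KLOC             = {cost / kloc:,.0f}")       # ~13,884
print(f"pages/KLOC         = {pages / kloc:.1f}")       # ~30.2
print(f"KLOC/person-month  = {kloc / effort_pm:.2f}")   # ~0.50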
Function-Oriented Metrics
Object-Oriented Metrics
1. Conventional software project metrics (LOC or FP) can be used to estimate object-
oriented software projects.
2. However, these metrics do not provide enough granularity for the schedule and effort
adjustments that are required as you iterate through an evolutionary or incremental
process.
Number of support classes. Support classes are required to implement the system
but are not immediately related to the problem domain. Examples might be user
interface (GUI) classes, database access and manipulation classes, and computation
classes.
Average number of support classes per key class. In general, key classes are
known early in the project. Support classes are defined throughout. If the average
number of support classes per key class were known for a given problem domain,
estimating (based on total number of classes) would be greatly simplified.
Number of subsystems. A subsystem is an aggregation of classes that support a
function that is visible to the end user of a system. Once subsystems are identified, it
is easier to lay out a reasonable schedule in which work on subsystems is partitioned
among project staff.
Use-Case–Oriented Metrics
Use cases are used widely as a method for describing customer-level or business domain
requirements that imply software features and functions. It would seem reasonable to use
the use case as a normalization measure similar to LOC or FP.
Like FP, the use case is defined early in the software process, allowing it to be used for
estimation before significant modeling and construction activities are initiated. Use cases
describe (indirectly, at least) user-visible functions and features that are basic
requirements for a system. The use case is independent of programming language.
Researchers have suggested use-case points (UCPs) as a mechanism for estimating
project effort and other characteristics. The UCP is a function of the number of actors and
transactions implied by the use-case models and is analogous to the FP in some ways.
Number of dynamic Web pages:
• Web pages with dynamic content (i.e., end-user actions or other external factors
result in customized content displayed on the page) are essential in all e-
commerce applications, search engines, financial applications, and many other
WebApp categories.
• These pages represent higher relative complexity and require more effort to
construct than static pages.
• This measure provides an indication of the overall size of the application and the
effort required to develop it.
Number of internal page links:
• Internal page links are pointers that provide a hyperlink to some other Web page
within the WebApp.
Number of persistent data objects:
• One or more persistent data objects (e.g., a database or data file) may be accessed
by a WebApp.
• As the number of persistent data objects grows, the complexity of the WebApp
also grows and the effort to implement it increases proportionally.
Number of external systems interfaced.
• WebApps must often interface with “backroom” business applications. As the
requirement for interfacing grows, system complexity and development effort also
increase.
Number of static content objects:
• Static content objects encompass static text-based, graphical, video, animation,
and audio information that are incorporated within the WebApp.
• Multiple content objects may appear on a single Web page.
Number of dynamic content objects:
• Dynamic content objects are generated based on end-user actions and encompass
internally generated text-based, graphical, video, animation, and audio information
that are incorporated within the WebApp.
• Multiple content objects may appear on a single Web page.
Number of executable functions:
• An executable function (e.g., a script or applet) provides some computational
service to the end user. As the number of executable functions increases,
modeling and construction effort also increase.
Measuring Quality
Correctness.
A program must operate correctly or it provides little value to its users. Correctness is the
degree to which the software performs its required function.
Maintainability.
• A simple time-oriented metric is Mean-Time-To-Change (MTTC), the time it
takes to analyze the change request, design an appropriate modification,
implement the change, test it, and distribute the change to all users.
• On average, programs that are maintainable will have a lower MTTC (for
equivalent types of changes) than programs that are not maintainable.
Integrity.
• To measure integrity, two additional attributes must be defined: threat and
security.
• Threat is the probability (which can be estimated or derived from empirical
evidence) that an attack of a specific type will occur within a given time.
• Security is the probability (which can be estimated or derived from empirical
evidence) that the attack of a specific type will be repelled.
Usability.
Defect Removal Efficiency
A quality metric that provides benefit at both the project and process level is Defect
Removal Efficiency (DRE).
When considered for a project as a whole, DRE is defined in the following manner:
DRE = E / (E + D)
where E is the number of errors found before delivery of the software to the end user and
D is the number of defects found after delivery. The ideal value for DRE is 1. That is, no
defects are found in the software. Realistically, D will be greater than 0, but the value of
DRE can still approach 1 as E increases for a given value of D. In fact, as E increases, it
is likely that the final value of D will decrease (errors are filtered out before they become
defects).
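Using the project alpha figures quoted earlier (134 errors found before release, 29 defects found after release), a short Python check of this formula:

errors_before_delivery = 134     # E
defects_after_delivery = 29      # D
dre = errors_before_delivery / (errors_before_delivery + defects_after_delivery)
print(f"DRE = {dre:.2f}")        # ~0.82, i.e., about 82% of problems were filtered out before delivery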
12. List the Metrics For Small Organizations. (Or)Discuss about the metrics for small
organizations. (6) Nov /Dec 2015
The vast majority of software development organizations have fewer than 20 software
people. It is unreasonable, and in most cases unrealistic, to expect that such organizations
will develop comprehensive software metrics programs. However, it is reasonable to
suggest that software organizations of all sizes measure and then use the resultant metrics
to help improve their local software process and the quality and timeliness of the products
they produce.
A small organization might select the following set of easily collected measures:
• Time (hours or days) elapsed from the time a request is made until evaluation is
complete, tqueue.
• Effort (person-hours) to perform the evaluation, Weval.
• Time (hours or days) elapsed from completion of evaluation to assignment of
change order to personnel, teval.
• Effort (person-hours) required to make the change, Wchange.
• Time required (hours or days) to make the change, tchange.
• Errors uncovered during work to make change, Echange.
• Defects uncovered after change is released to the customer base, Dchange.
Once these measures have been collected for a number of change requests, it is possible
to compute the total elapsed time from change request to implementation of the change
and the percentage of elapsed time absorbed by initial queuing, evaluation and change
assignment, and change implementation. Similarly, the percentage of effort required for
evaluation and implementation can be determined. These metrics can be assessed in the
context of quality data, Echange and Dchange.
The percentages provide insight into where the change request process slows down and
may lead to process improvement steps to reduce tqueue, Weval, teval, Wchange, and/or
Echange. In addition, the defect removal efficiency can be computed as
DRE = Echange / (Echange + Dchange)
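A minimal sketch of how a small organization might derive these percentages and DRE from a handful of change requests (the sample values below are assumptions chosen only for illustration):

# Each record: tqueue, teval, tchange in days; Weval, Wchange in person-hours;
# Echange = errors found while making the change; Dchange = defects reported after release.
changes = [
    {"tqueue": 2, "teval": 1, "tchange": 4, "Weval": 6, "Wchange": 20, "Echange": 3, "Dchange": 1},
    {"tqueue": 5, "teval": 2, "tchange": 3, "Weval": 4, "Wchange": 12, "Echange": 2, "Dchange": 0},
]

total_elapsed = sum(c["tqueue"] + c["teval"] + c["tchange"] for c in changes)
queue_pct = 100 * sum(c["tqueue"] for c in changes) / total_elapsed
e_change = sum(c["Echange"] for c in changes)
d_change = sum(c["Dchange"] for c in changes)
dre = e_change / (e_change + d_change)
print(f"{queue_pct:.0f}% of elapsed time spent queuing; DRE = {dre:.2f}")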
13.(i) An application has the following: 10 low external inputs, 8 high external outputs, 13
low internal logical files, 17 high external interface files, 11 average external inquiries and
complexity adjustment factor of 1.10. What are the unadjusted and adjusted function point
counts?
SOLUTION
Component                         Count   Weight   Count x Weight
External interface files (EIF)      17      10          170
Internal logical files (ILF)        13      15          195
External inquiries (EQ)             11       6           66
External outputs (EO)                8       7           56
External inputs (EI)                10       6           60
Unadjusted function points (UFP) = 170 + 195 + 66 + 56 + 60 = 547
Adjusted function points = UFP x [0.65 + (0.01 x 1.10)] = 547 x 0.661 = 361.567 ≈ 362
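The same computation in a few lines of Python (using the counts, weights, and adjustment exactly as applied in the solution above):

components = {          # name: (count, weight), as used in the solution
    "EIF": (17, 10),
    "ILF": (13, 15),
    "EQ":  (11, 6),
    "EO":  (8, 7),
    "EI":  (10, 6),
}
ufp = sum(count * weight for count, weight in components.values())
vaf = 0.65 + 0.01 * 1.10          # adjustment factor as applied in the solution
print(ufp, round(ufp * vaf, 3))   # 547 361.567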
14.(i) Suppose you have a budgeted cost of a project as Rs. 9,00,000. The project is to be
completed in 9 months. After a month, you have completed 10 percent of the project at a
total expense of Rs. 1,00,000. The planned completion should have been 15 percent. You
need to determine whether the project is on-time and on-budget? Use Earned Value
analysis approach and interpret. [Nov/Dec 2016]
SOLUTION
EV = Earned Value
PV = Planned Value
BAC = Budget at Completion
AC = Actual Cost
Formulas
The following formulas will be used for the following examples.
PV = Planned Completion (%) * BAC
EV = Actual Completion (%) * BAC
CPI = EV/AC
SPI = EV/PV
The Planned Value (PV) and Earned Value (EV) can then be computed as follows:
Planned Value = Planned Completion (%) x BAC = 15% x Rs. 9,00,000 = Rs. 1,35,000
Earned Value = Actual Completion (%) x BAC = 10% x Rs. 9,00,000 = Rs. 90,000
With Actual Cost AC = Rs. 1,00,000:
CPI = EV/AC = 90,000/1,00,000 = 0.90
SPI = EV/PV = 90,000/1,35,000 ≈ 0.67
Since CPI < 1 the project is over budget, and since SPI < 1 it is behind schedule; the project
is neither on-time nor on-budget.
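The same earned value analysis as a short Python sketch:

bac = 900_000                    # budget at completion (Rs.)
planned_pct, actual_pct = 0.15, 0.10
ac = 100_000                     # actual cost to date (Rs.)

pv = planned_pct * bac           # planned value
ev = actual_pct * bac            # earned value
cpi, spi = ev / ac, ev / pv
print(f"PV={pv:,.0f}  EV={ev:,.0f}  CPI={cpi:.2f}  SPI={spi:.2f}")
# CPI < 1 -> over budget; SPI < 1 -> behind schedule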
15. Consider the following Function point components and their complexity. If the total
degree of influence is 52, find the estimated function points? [Nov/Dec 2016]
SOLUTION:
Component                         Count   Weight   Count x Weight
External interface files (EIF)       2       7           14
Internal logical files (ILF)         4      10           40
External inquiries (EQ)             22       4           88
External outputs (EO)               16       5           80
External inputs (EI)                24       4           96
Unadjusted function points (UFP) = 14 + 40 + 88 + 80 + 96 = 318
VAF = (52 x 0.01) + 0.65 = 1.17
FP = UFP x VAF = 318 x 1.17 = 372.06
16. Describe in detail COCOMO model for software cost estimation. Use it to estimate the
effort required to build software for a simple ATM that produces 12 screens, 10 reports
and has 80 software components. Assume average complexity and average developer
maturity. Use application composition model with object points.[Nov/Dec 2016]
SOLUTION
Screens = 12, Reports = 10, Software components = 80
Object type     Count   Weight (average complexity)   Count x Weight
Screen            12              2                        24
Report            10              5                        50
Component         80              -                         -
Object points = 24 + 50 = 74
With no reuse assumed, NOP = object points = 74.
For average developer experience and environment maturity, PROD = 13 NOP per person-month.
Estimated effort = NOP/PROD = 74/13 ≈ 5.7 person-months
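A compact Python rendering of the application composition estimate above (the weights and the PROD value of 13 are the ones used in the solution; the 80 components are excluded, mirroring the worked answer):

counts  = {"screen": 12, "report": 10}
weights = {"screen": 2,  "report": 5}     # average-complexity weights

object_points = sum(counts[k] * weights[k] for k in counts)    # 24 + 50 = 74
reuse_pct = 0                                                   # no reuse assumed
nop = object_points * (100 - reuse_pct) / 100                   # new object points
prod = 13                                                       # NOP per person-month, average case
print(f"object points = {object_points}, effort = {nop / prod:.1f} person-months")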
17. Explain about the factors that cause difficulty in testing a software. (5)
Nov/Dec 2017
18. Given the following project plan of tables table 1 and table 2 : Apr/May
2017
Table 1
ID | Task | Status | Actual start (days) | Actual duration (days) | Actual costs ($)
The Gantt chart can now be easily drawn, by taking into account the expected duration of each
activity. The result is shown in the following diagram 5.21(notice that we are assuming the
duration to be expressed in working days and that we are using a “standard” calendar, in which
Saturday and Sunday are non-working time):
As can be seen, the delay on activity B delays all other activities in the plan. The
activities marked in red (0%) are on the critical path.
AC is the sum of the actual costs incurred. It is computed by looking at the actual costs
when they took place. Similar to the previous case:
• For each activity, we look at its actual costs (second table of the question) and split them evenly
for the actual duration of the activity, up to the monitoring date (that is, the date in which the
analysis is performed)
EV is the sum of the planned costs on the actual schedule. There are different rules for
computing EV. We use 50%-50% (50% of planned costs when an activity starts, the remaining
50% when the activity ends).
The result is shown in the following table:
• PV > AC indicates that the project is under budget. However, it might be under budget because
of two reasons: it is, in fact, efficient or, alas, it is late (the expenditure has not yet occurred,
because activities did not start).
• EV < PV indicates that the project is late. At W13, in fact, the value we currently produced is
the one we should have had at W9.
For more precise analyses about the project efficiency, we can compute CPI and SPI,
which measure cost efficiency and schedule efficiency.
In more detail: CPI = EV/AC, that is, how many dollars of value we produce (EV) for each
dollar we spend (AC). Clearly CPI > 1 is a good sign, while CPI < 1 indicates that the
project is inefficient and will probably end over budget.
The following graphs (Refer figure 5.25) shows the behaviour of CPI over time. If we do not
consider some noise (due to the 50%-50% rule, which causes, for instance, the peak at W3), we
can see that CPI is getting close to 1, indicating that the project should end on budget, if the trend
is confirmed.
The SPI index measures schedule performance: SPI = EV/PV and indicates how much we produce
(EV) with respect to what we thought we would produce. Also in this case SPI > 1 is a
good sign (ahead of schedule), while SPI < 1 indicates that the project is late. In our
example we should expect SPI to be < 1, as it is, in fact, shown by the following diagram
5.26, which plots SPI over time:
DevOps is a set of practices intended to reduce the time between committing a change to
a system and the change being placed into normal production, while ensuring high
quality.
This allows a single team to handle the entire application lifecycle, from development
to testing, deployment, and operations. DevOps helps you to reduce the disconnection
between software developers, quality assurance (QA) engineers, and system
administrators.
DevOps Practices:
Between committing a change and placing it into production lie practices that cover team
practices, build processes, testing processes, and deployment processes.
1. Requirements: Professionals determine the commercial need and gather end-user opinions
throughout this level. In this step, they design a project plan to optimize business impact
and produce the intended result.
2. Development – During this point, the code is being developed. To simplify the design
process, the developer team employs lifecycle DevOps tools and extensions like Git that
assist them in preventing safety problems and bad coding standards.
3. Build – After programmers have completed their tasks, they use tools such as Maven and
Gradle to submit the code to the common code source.
4. Testing – To assure software integrity, the product is first delivered to the test platform to
execute various sorts of screening such as user acceptability testing, safety testing,
integration checking, speed testing, and so on, utilizing tools such as JUnit, Selenium, etc.
5. Release – At this point, the build is prepared to be deployed in the operational
environment. The DevOps department prepares updates or sends several versions to
production when the build satisfies all checks based on the organizational demands.
6. Deploy – At this point, Infrastructure-as-Code assists in creating the operational
infrastructure and subsequently publishes the build using various DevOps lifecycle tools.
7. Execution – The release is now available for users to utilize. With tools such as Chef,
the management department takes care of server configuration and deployment at this point.
7 Cs of DevOps
1. Continuous Development
2. Continuous Integration
3. Continuous Testing
4. Continuous Deployment/Continuous Delivery
5. Continuous Monitoring
6. Continuous Feedback
7. Continuous Operations
1. Continuous Development
In Continuous Development, code is written in small, continuous increments rather than all at
once. This is important in DevOps because it improves efficiency: every time a piece of code
is created, it is tested, built, and deployed into production. Continuous Development raises
the standard of the code and streamlines the process of repairing flaws, vulnerabilities, and
defects. It helps developers concentrate on creating high-quality code.
2. Continuous Integration
This stage is the heart of the entire DevOps lifecycle. It is a software development
practice in which developers are required to commit changes to the source code
frequently, perhaps on a daily or weekly basis. Every commit is then built, which allows
early detection of problems if they are present. Building the code involves not only
compilation but also unit testing, integration testing, code review, and packaging.
The code supporting new functionality is continuously integrated with the existing code.
Therefore, there is continuous development of software. The updated code needs to be
integrated continuously and smoothly with the systems to reflect changes to the end-
users.
3) Continuous Testing
In this phase, the developed software is continuously tested for bugs. For continuous
testing, automation testing tools such as TestNG, JUnit, Selenium, etc are used. These
tools allow QAs to test multiple code-bases thoroughly in parallel to ensure that there is
no flaw in the functionality. In this phase, Docker Containers can be used for simulating
the test environment.
Selenium does the automation testing, and TestNG generates the reports. This entire
testing phase can be automated with the help of a Continuous Integration tool called Jenkins.
Automation testing saves a lot of time and effort for executing the tests instead of doing
this manually. Apart from that, report generation is a big plus. The task of evaluating the
test cases that failed in a test suite gets simpler. Also, we can schedule the execution of
the test cases at predefined times. After testing, the code is continuously integrated with
the existing code.
4) Continuous Monitoring
Monitoring is a phase that involves all the operational factors of the entire DevOps
process, where important information about the use of the software is recorded and
carefully processed to find out trends and identify problem areas. Usually, the monitoring
is integrated within the operational capabilities of the software application.
It may occur in the form of documentation files or maybe produce large-scale data about
the application parameters when it is in a continuous use position. The system errors such
as server not reachable, low memory, etc are resolved in this phase. It maintains the
security and availability of the service.
5) Continuous Feedback
The application development is consistently improved by analyzing the results from the
operations of the software. This is carried out by placing the critical phase of constant
feedback between the operations and the development of the next version of the current
software application.
Continuity is the essential factor in DevOps, as it removes the unnecessary steps
required to take a software application from development, through discovering its issues,
to producing a better version. Breaking that continuity reduces the efficiency that is
possible with the app and reduces the number of interested customers.
7. Continuous Operations
20. Why do we need DevOps? Differentiate Agile and DevOps process. (or) Compare the
process of Agile vs DevOps with a neat diagram.
Why DevOps?
o Release Process
o Releasing a new system or version of an existing system to customers is one of
the most sensitive steps in the software development cycle.
1. Inception phase.
During the inception phase, release planning and initial requirements specification are done.
a. Considerations of Ops will add some requirements for the developers. We will see these in
more detail later in this book, but maintaining backward compatibility between releases and
having features be software switchable are two of these requirements. The form and content of
operational log messages impacts the ability of Ops to troubleshoot a problem.
b. Release planning includes feature prioritization but it also includes coordination with
operations personnel about the scheduling of the release and determining what training the
operations personnel require to support the new release. Release planning also includes ensuring
compatibility with other packages in the environment and a recovery plan if the release fails.
DevOps practices make incorporation of many of the coordination-related topics in release
planning unnecessary, whereas other aspects become highly automated.
2. Construction phase.
During the construction phase, key elements of the DevOps practices are the management
of the code branches, the use of continuous integration and continuous deployment, and
incorporation of test cases for automated testing. These are also agile practices but form
an important portion of the ability to automate the deployment pipeline. A new element is
the integrated and automated connection between construction and transition activities.
3. Transition phase.
In the transition phase, the solution is deployed and the development team is responsible
for the deployment, monitoring the process of the deployment, deciding whether to roll
back and when, and monitoring the execution after deployment. The development team
has a role of “reliability engineer,” who is responsible for monitoring and troubleshooting
problems during deployment and subsequent execution.
NIST also characterizes the various types of services (Refer table 5.2) available from cloud
providers
Software as a Service (SaaS). The consumer is provided the capability to use the provider’s
applications running on a
cloud infrastructure. The applications are accessible from various client devices through
either a thin client interface, such as a web browser (e.g., web-based email) or an
application interface.
The consumer does not manage or control the underlying cloud infrastructure including
networks, servers, operating systems, storage, or even individual application capabilities,
with the possible exception of limited user-specific application configuration settings.
Platform as a Service (PaaS). The consumer is provided the capability to deploy onto the
cloud infrastructure
consumer-created or acquired applications created using programming languages,
libraries, services, and tools supported by the provider.
The consumer does not manage or control the underlying cloud infrastructure including
networks, servers, operating systems, or storage, but has control over the deployed
applications and possibly configuration settings for the application-hosting environment.
Infrastructure as a Service (IaaS). The consumer is provided the capability to provision
processing, storage, networks, and
other fundamental computing resources where the consumer is able to deploy and run
arbitrary software, which can include operating systems and applications.
The consumer does not manage or control the underlying cloud infrastructure but has
control over operating systems, storage, and deployed applications; and possibly limited
control of select networking components (e.g., host firewalls).
Virtualization
The user issues a command to create a VM. Typically, the cloud provider has a utility
that enables the creation of the VM. This utility is told the resources required by the VM,
the account to which the charges accrued by the VM should be charged, the software to
be loaded (see below), and a set of configuration parameters specifying security and the
external connections for the VM.
The cloud infrastructure decides on which physical machine to create the VM instance.
The operating system for this physical machine is called a hypervisor, and it allocates
resources for the new VM and “wires” the new machine so that it can send and receive
messages.
The new VM is assigned an IP address that is used for sending and receiving messages.
We have described the situation where the hypervisor is running on bare metal. It is also
possible that there are additional layers of operating system–type software involved but
each layer introduces overhead and so the most common situation is the one we
described.
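As one concrete illustration (assuming AWS as the cloud provider, the boto3 Python SDK installed with credentials already configured, and a placeholder AMI ID), the provider utility that creates a VM might be invoked like this:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")      # region chosen for illustration
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder image: the software to be loaded
    InstanceType="t3.micro",            # the resources required by the VM
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])   # the new VM instance; an IP address is assigned to it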
Loading a Virtual Machine
Each VM needs to be loaded with a set of software in order to do meaningful work. The
software can be loaded partially as a VM and partially as a result of the activated VM
loading software after launching.
A VM image can be created by loading and configuring a machine with the desired
software and data, and then copying the memory contents (typically in the form of the
virtual hard disk) of the machine to a persistent file.
New VM instances from that VM image (software and data) can then be created at will.
The process of creating a VM image is called baking the image.
A heavily baked image contains all of the software required to run an application and a
lightly baked image contains only a portion of the software required, such as an operating
system and a middleware container.
Virtualization introduces several types of uncertainty that you should be aware of.
Because a VM shares resources with other VMs on a single physical machine, there may
be some performance interference among the VMs.
This situation may be particularly difficult for cloud consumers as they usually have no
visibility into the co-located VMs owned by other consumers.
There are also time and dependability uncertainties when loading a VM, depending on
the underlying physical infrastructure and the additional software that needs to be
dynamically loaded.
DevOps operations often create and destroy VMs frequently for setting up different
environments or deploying new versions of software. It is important that you are aware of
these uncertainties.
Three of the unique aspects of the cloud that impact DevOps are:
Environments:
Writing to or altering the state of these external entities should only be done by the
production environment, and separate external entities must be created (e.g., as dummies
or test clones) for all other environments.
One method of visualizing an environment is as a silo.
Figure 5.32 shows two variants of two different environments—a testing environment
and a production environment.
Each contains slightly different versions of the same system. The two load balancers,
responsible for their respective environments, have different IP addresses.
Testing can be done by forking the input stream to the production environment and
sending a copy to the testing environment as shown in Figure 5.32a. In this case, it is
important that the test database be isolated from the production database.
Figure 5.32b shows an alternative situation. In this case, some subset of actual
production messages is sent to the test environment that performs live testing.
FIGURE 5.32 (a) Using live data to test. (b) Live testing with a subset of users. [Notation:
Architecture]
Data Considerations
The economic viability of the cloud coincided with the advent of NoSQL database
systems. Many systems utilize multiple different database systems, both relational and
NoSQL. Furthermore, large amounts of data are being gathered from a variety of sources
for various business intelligence or operational purposes.
HDFS
A distributed file system such as HDFS has a manager that maintains the
name space of file names and allocates space when an application wishes to write a new
block.
This manager also provides information so that applications can perform direct access to
particular blocks. There also is a pool of storage nodes.
In HDFS the manager is called the NameNode, and each element of the storage pool is
called a DataNode. There is one NameNode with provision for a hot backup.
Each DataNode is a separate physical computer or VM. Applications are restricted to
writing fixed-size blocks—typically 64 MB.
Operational Considerations
The operational considerations associated with a shared file system such as HDFS are
twofold.
1. Who manages the HDFS installation? HDFS can be either a shared system
among multiple applications, or it can be instantiated for a single application. In case of a
single application, its management will be the responsibility of the development team for
that application. In the shared case, the management of the system must be assigned
somewhere within the organization.
2. How is the data stored within HDFS protected in the case of a disaster? HDFS itself
replicates data across multiple DataNodes, but a general failure of a datacenter may cause
HDFS to become unavailable or the data being managed by HDFS to become corrupted
or lost. Consequently, business continuity for those portions of the business dependent on
the continued execution of HDFS and access to the data stored within HDFS is an issue
that must be addressed.
Operations
Operations Services
Provisioning of Hardware
Provisioning of Software:
Capacity Planning
Ops is responsible for ensuring that adequate computational resources are available for
the organization.
In the case of capacity planning, the other stakeholders are business and marketing. With
cloud elasticity, the pay-as-you-go model, and the ease of provisioning new virtual
hardware, capacity planning is becoming more about runtime monitoring and auto scaling
rather than planning for purchasing hardware.
i) Recovery point objective (RPO). When a disaster occurs, what is the maximum period for
which data loss is tolerable? If backups are taken every hour, then the RPO would be 1 hour,
since the data that would be lost is that which has accumulated since the last backup.
ii)Recovery time objective (RTO). When a disaster occurs, what is the maximum tolerable
period for service to be unavailable? For instance, if a recovery solution takes 10 minutes to
access the backup in a separate datacenter and another 5 minutes to instantiate new servers using
the backed-up data, the RTO is 15 minutes.
Service Strategy
Developing a strategy is a matter of deciding where you would like your organization to
be in a particular area within a particular time frame, determining where you currently
are, and deciding on a path from the current state to the desired state.
The desired state is affected by both internal and external events. Internal events such as
personnel attrition, hardware failure, new software releases, marketing, and business
activities will all affect the desired state.
External events such as acquisitions, government policies, or consumer reaction will also
affect the desired state. The events that might occur all have some probability of
occurrence, thus, strategic planning shares some elements with fortune telling.
Service Design
Service Transition
Service transition subsumes all activities between service design and operation, namely,
all that is required to successfully get a new or changed service into operation.
Transition and planning support includes aspects of: resources, capacity, and change
planning; scoping and goals of the transition; documentation requirements; consideration
of applicable rules and regulations; financial planning; and milestones.
DevOps and continuous deployment require the delivery part of service transition to be
highly automated so it can deal with high-frequency transition and provide better quality
control.
Service Operation
Monitoring can be combined with some control. Control can be open-loop or closed-loop.
Open-loop control (i.e., monitoring feedback is not taken into account) can be used for
regular backups at predefined times. In closed-loop control, monitoring information is
taken into account when deciding on an action, such as in the auto scaling example.
Closed-loop feedback cycles can be nested into more complex control loops, where
lower-level control reacts to individual metrics and higher-level control considers a wider
range of information and trends developing over longer time spans.
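A minimal sketch of the closed-loop idea described above (the metric source, thresholds, and scaling actions are all assumptions chosen only for illustration):

import random, time

def read_average_cpu():
    # Stand-in for a real monitoring query against a metrics service
    return random.uniform(0.0, 1.0)

def scale_out():
    print("provisioning one more instance")

def scale_in():
    print("deprovisioning one instance")

HIGH, LOW = 0.80, 0.20        # assumed utilization thresholds

for _ in range(3):            # a few iterations of the control loop
    cpu = read_average_cpu()  # monitoring feedback...
    if cpu > HIGH:            # ...is taken into account when deciding on an action
        scale_out()
    elif cpu < LOW:
        scale_in()
    time.sleep(1)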
At the highest level, control loops can link the different life-cycle activities. Depending
on the measured deviations from the desired metrics, continual service improvement can
lead to alterations in service strategy, design, or transition—all of which eventually
comes back to changes in service operation.
This data-driven process starts off with an identification of the vision, strategy, and goals
that are driving the current improvement cycle.
Microservice architecture is an architectural style that satisfies the following requirements.
Deploying without the necessity of explicit coordination with other teams reduces the
time required to place a component into production.
Allowing for different versions of the same service to be simultaneously in production
leads to different team members deploying without coordination with other members of
their team.
Rolling back a deployment in the event of errors allows for various forms of live testing.
A microservice architecture consists of a collection of services where each service provides a
small amount of functionality and the total functionality of the system is derived from
composing multiple services
A user interacts with a single consumer-facing service. This service, in turn, utilizes a collection
of other services.
A single component can be a client in one interaction and a service in another. (Refer figure
5.35)
The three categories are: the coordination model, management of resources, and mapping among
architectural elements.
Coordination Model:
Figure 5.36 gives an overview of the interaction between a service and its client.
The service registers with a registry. The registration includes a name for the service as
well as information on how to invoke it, for example, an endpoint location as a URL or
an IP address.
A client can retrieve the information about the service from the registry and invoke the
service using this information.
If the registry provides IP addresses, it acts as a local DNS server—local, because
typically, the registry is not open to the general Internet but is within the environment of
the application.
Netflix Eureka is an example of a cloud service registry that acts as a DNS server. The
registry serves as a catalogue of available services, and can further be used to track
aspects such as versioning, ownership, service level agreements (SLAs), etc., for the set
of services in an organization.
The protocol used for communication between the client and the service can be any
remote communication protocol, for example, HTTP, RPC, SOAP, etc.
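A toy in-memory sketch of this register/retrieve interaction (a real registry such as Netflix Eureka is a separate networked service; the service name and endpoint below are illustrative only):

registry = {}     # service name -> how to invoke it

def register(name, endpoint, version):
    # A service registers itself with a name and invocation information
    registry[name] = {"endpoint": endpoint, "version": version}

def lookup(name):
    # A client retrieves the information needed to invoke the service
    return registry[name]

register("inventory", "http://10.0.0.5:8080/inventory", "1.4.2")
info = lookup("inventory")
print(f"calling {info['endpoint']} (version {info['version']})")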
Management of Resources:
Two types of resource management decisions can be made globally and incorporated in
the architecture—provisioning/deprovisioning VMs and managing variation in demand.
Determining which component controls the provisioning and deprovisioning of a new instance
for a service is another important aspect. Three possibilities exist for the controlling
component.
1. A service itself can be responsible for (de)provisioning additional instances.
2. A client or a component in the client chain can be responsible for (de)provisioning
instances of a service.
3. An external component monitors the performance of service instances (e.g., their CPU
load) and (de)provisions an instance when the load reaches a given threshold.
Managing Demand
The number of instances of an individual service that exist should reflect the demand on the
service from client requests.
Work assignments:
A single team may work on multiple modules, but having multiple development teams
work on the same module requires a great deal of coordination among those development
teams. Since coordination takes time, an easier structure is to package the work of a single
team into modules and develop interfaces among the modules to allow modules developed
by different teams to interoperate.
Allocation:
Each component (i.e., microservice) will exist as an independent deployable unit. This
allows each component to be allocated to a single (virtual) machine or container, or it
allows multiple components to be allocated to a single (virtual) machine. The redeployment
or upgrade of one microservice will not affect any other microservices.
Dependability:
Three sources for dependability problems are: the small amount of interteam coordination,
correctness of environment, and the possibility that an instance of a service can fail.
Modifiability:
Making a service modifiable comes down to making likely changes easy and reducing the
ripple effects of those changes. In both cases, a method for making the service more
modifiable is to encapsulate either the affected portions of a likely change or the interactions
that might cause ripple effects of a change.
Some likely changes that come from the development process, rather than the service being
provided, are:
A deployment pipeline, as shown in Figure 5.37, consists of the steps that are taken between a
developer committing code and the code actually being promoted into normal production, while
ensuring high quality.
The deployment pipeline begins when a developer commits code to a joint versioning
system.
Prior to doing this commit, the developer will have performed a series of pre-commit
tests on their local environment; failure of the pre-commit tests of course means that the
commit does not take place.
A commit then triggers an integration build of the service being developed. This build is
tested by integration tests.
If these tests are successful, the build is promoted to a quasi-production environment—
the staging environment—where it is tested once more.
Then, it is promoted to production under close supervision. After another period of close
supervision, it is promoted to normal production.
The specific tasks may vary a bit for different organizations. For example, a small
company may not have a staging environment or special supervision for a recently
deployed version.
One way to define continuous integration is to have automatic triggers between one phase
and the next, up to integration tests.
That is, if the build is successful then integration tests are triggered. If not, the developer
responsible for the failure is notified. Continuous delivery is defined as having automated
triggers as far as the staging system. This is the box labeled UAT (user acceptance
test)/staging/performance tests in Figure 5.37.
Continuous deployment means that the next to last step (i.e., deployment into the
production system) is automated as well.
Once a service is deployed into production it is closely monitored for a period and then it
is promoted into normal production. At this final stage, monitoring and testing still exist
but the service is no different from other services in this regard.
As the system moves through the deployment pipeline, these items work together to
generate the desired behavior or information.
Pre-commit.
The code is the module of the system on which the developer is working. Building this
code into something that can be tested requires access to the appropriate portions of the
version control repository that are being created by other developers.
The environment is usually a continuous integration server. The code is compiled, and
the component is built and baked into a VM image.
The image can be either heavily or lightly baked. This VM image does not change in
subsequent steps of the pipeline. During integration testing, a set of test data forms a test
database. This database is not the production database, rather, it consists of a sufficient
amount of data to perform the automated tests associated with integration. The
configuration parameters connect the built system with an integration testing
environment.
UAT/staging/performance testing.
Production.
The production environment should access the live database and have sufficient resources
to adequately handle its workload. Configuration parameters connect the system with the
production environment.
Build is the process of creating an executable artifact from input such as source code and
configuration.
As such, it primarily consists of compiling source code (if you are working with
compiled languages) and packaging all files that are required for execution (e.g., the
executables from the code, interpretable files like HTML, JavaScript, etc.).
Once the build is complete, a set of automated tests are executed that test whether the
integration with other parts of the system uncovers any errors.
The unit tests can be repeated here to generate a history available more broadly than to a
single developer.
Build Scripts
The build and integration tests are performed by a continuous integration (CI) server. The
input to this server should be scripts that can be invoked by a single command.
This practice ensures that the build is repeatable and traceable. Repeatability is achieved
because the scripts can be rerun, and traceability is achieved because the scripts can be
examined to determine the origin of the various pieces that were integrated together.
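A tiny sketch of such a script, invocable by a CI server with a single command (the Gradle tasks named here are assumptions; real projects would use whatever build tool they have, such as Maven or Gradle, as noted earlier):

import subprocess, sys

def run(step, cmd):
    # Run one build step; fail the whole build if the step fails
    print(f"== {step} ==")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"{step} failed")

run("compile and package", ["./gradlew", "assemble"])
run("unit tests", ["./gradlew", "test"])
run("integration tests", ["./gradlew", "integrationTest"])   # assumed custom task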
Packaging
The goal of building is to create something suitable for deployment. There are several
standard methods of packaging the elements of a system for deployment.
The appropriate method of packaging will depend on the production environment.
Some packaging options are:
Runtime-specific packages, such as Java archives (JAR), web application archives (WAR), and
enterprise application archives (EAR) in Java, or .NET assemblies.
Operating system packages. If the application is packaged into software packages of the target
OS (such as the Debian or Red Hat package system), a variety of well-proven tools can be used
for deployment.
VM images can be created from a template image, to include the changes from the latest
revision.
Lightweight containers are a new phenomenon. Like VM images, lightweight containers can
contain all libraries and other pieces of software necessary to run the application, while retaining
isolation of processes, rights, files, and so forth. In contrast to VM images, lightweight
containers do not require a hypervisor on the host machine, nor do they contain the whole
operating system, which reduces overhead, load, and size.
There are two dominant strategies for applying changes in an application when using VM
images or lightweight containers: heavily baked versus lightly baked images, with a
spectrum between the extreme ends.
Baking here refers to the creation of the image.
Heavily baked images cannot be changed at runtime. This concept is also termed
immutable servers: Once a VM has been started, no changes (other than configuration
values) are applied to it.
Lightly baked images are fairly similar to heavily baked images, with the exception that
certain changes to the instances are allowed at runtime.
Whatever packaging mechanism is used, the build step in the deployment pipeline should
consist of compiling, packaging or baking an image, and archiving the build in a build
repository.
Integration Testing
Integration testing is the step in which the built executable artifact is tested.
The environment includes connections to external services, such as a surrogate database.
Including other services requires mechanisms to distinguish between production and test
requests, so that running a test does not trigger any actual transactions, such as
production, shipment, or payment.
UAT/Staging/Performance Testing
Staging is the last step of the deployment pipeline prior to deploying the system into production.
The types of tests that occur at this step are the following:
User acceptance tests (UATs) are tests where prospective users work with a current
revision of the system through its UI and test it, either according to a test script or in an
exploratory fashion. This is done in the UAT environment, which closely mirrors
production but still uses test or mock versions of external services.
Automated acceptance tests are the automated version of repetitive UATs. Such tests
control the application through the UI, trying to closely mirror what a human user would
do. Automation takes some load off the UATs, while ensuring that the interaction is done
in exactly the same way each time.
Smoke tests are a subset of the automated acceptance tests that are used to quickly
analyze if a new commit breaks some of the core functions of the application.
Nonfunctional tests test aspects such as performance, security, capacity, and availability.
Proper performance testing requires a suitable setup, using resources comparable to
production and very similar every time the tests are run.
25. Discuss about the deployment process of DevOps and its issues.
Deployment
There are three reasons for changing a service—to fix an error, to improve some quality of
the service, or to add a new feature.
The goal of a deployment is to move from the current state that has N VMs of the old
version, A, of a service executing, to a new state where there are N VMs of the new
version, B, of the same service in execution.
1) Strategies for Managing a Deployment:
There are two popular strategies for managing a deployment—blue/green deployment and rolling
upgrade.
They differ in terms of costs and complexity. The cost may include both that of the VM and the
licensing of the software running inside the VM.
we need to make the following two assumptions:
1. Service to the clients should be maintained while the new version is being deployed.
2. Any development team should be able to deploy a new version of their service at any
time without coordinating with other teams
Blue/Green Deployment
Rolling Upgrade:
2) Logical Consistency
Two components are shown—the client and two versions (versions A and B) of a service.
The client sends a message that is routed to version B. Version B performs its actions and
returns some state to the client.
The client then includes that state in its next request to the service. The second request is
routed to version A, and this version does not know what to make of the state, because
the state assumes version B.
Therefore, an error occurs. This problem is called a mixed-version race condition.
(Refer figure 5.39)
There are three ways to avoid the mixed-version race condition:
Make the client version aware, so that it knows its initial request was serviced by a
version B VM and can require that its second request also be serviced by a version B
VM. The registration of each service instance (for example, with the load balancer or
a service registry) can contain the version number, so the client can request a
specific version of the service. Response messages from the service should contain a
tag so that the client knows which version of the service it has just interacted with.
(A sketch of such a version-aware client follows this list.)
Toggle the new features contained in version B and the client so that only one version is
offering the service at any given time. More details are given below.
Make the services forward and backward compatible, and enable the clients to recognize
when a particular request has not been satisfied. Again, more details are given below.
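A minimal sketch of the first option (a version-aware client) is shown below. It assumes a hypothetical X-Service-Version header that the routing layer honours and that the service adds to every response; the header name and URL scheme are illustrative assumptions, not part of the original notes.

import urllib.request

class VersionAwareClient:
    def __init__(self, base_url):
        self.base_url = base_url
        self.pinned_version = None          # set after the first response

    def request(self, path, state=b""):
        req = urllib.request.Request(self.base_url + path, data=state)
        if self.pinned_version:
            # Ask the routing layer for the same version that produced the state.
            req.add_header("X-Service-Version", self.pinned_version)
        with urllib.request.urlopen(req) as resp:
            # The service tags every response with the version that handled it.
            self.pinned_version = resp.headers.get("X-Service-Version")
            return resp.read()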
Feature Toggling
A feature toggle, to repeat, is a piece of code within an if statement where the if condition
is based on an externally settable feature variable.
Using this technique means that the problems associated with activating a feature are (a)
determining that all services involved in implementing a feature have been sufficiently
upgraded and (b) activating the feature in all of the VMs of these services at the same
time.
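A minimal feature-toggle sketch in Python follows. The feature variable is read from an environment variable here purely for illustration; a real deployment would more likely use a configuration service so the feature can be switched in all VMs of all involved services at the same time. The checkout functions are hypothetical.

import os

def feature_enabled(name: str) -> bool:
    # e.g. export FEATURE_NEW_CHECKOUT=on before starting the service
    return os.environ.get("FEATURE_" + name.upper(), "off") == "on"

def old_checkout_flow(cart):
    return {"total": sum(cart), "flow": "A"}            # version A behaviour

def new_checkout_flow(cart):
    return {"total": sum(cart), "flow": "B"}            # version B behaviour

def checkout(cart):
    # The toggle: an if statement driven by an externally settable variable.
    if feature_enabled("new_checkout"):
        return new_checkout_flow(cart)
    return old_checkout_flow(cart)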
A service is backward compatible if the new version of the service behaves as the old
version. For requests that are known to the old version of a service, the new version
provides the same behavior. In other words, the external interfaces provided by version B
of a service are a superset of the external interfaces provided by version A of that service.
Forward compatibility means that a client deals gracefully with error responses indicating
an incorrect method call. Suppose a client wishes to utilize a method that will be
available in version B of a service but the method is not present in version A. Then if the
service returns an error code indicating it does not recognize the method call, the client
can infer that it has reached version A of the service.
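The sketch below illustrates a forward-compatible client, under the assumption that a version A service answers an unrecognised method call with an HTTP 404 or 501 error; the URLs and endpoints are hypothetical.

import urllib.request, urllib.error

def get_recommendations(base_url, user_id):
    try:
        # Method that only exists in version B of the service.
        with urllib.request.urlopen(f"{base_url}/v2/recommendations/{user_id}") as r:
            return r.read()
    except urllib.error.HTTPError as err:
        if err.code in (404, 501):
            # Reached a version A VM: degrade gracefully to the old behaviour.
            with urllib.request.urlopen(f"{base_url}/v1/popular-items") as r:
                return r.read()
        raise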
In addition to maintaining compatibility among the various services, some services must
also be able to read and write to a database in a consistent fashion.
When a new version of a service requires a change to the database schema, the most basic
solution is not to modify existing fields but only to add new fields or tables, which can
be done without affecting existing code.
The use of the new fields or tables can be integrated into the application incrementally.
One method for accomplishing this is to treat new fields or tables as new features in a
release.
If the existing schema must actually be changed, there are two options:
1. Convert the persistent data from the old schema to the new one.
2. Convert data into the appropriate form during reads and writes. This could be done
either by the service or by the database management system.
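The additive "add, don't modify" approach can be illustrated with a small SQLite sketch; the table and column names are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

# Version B needs a currency per order. Instead of changing the meaning of an
# existing column, add a new column with a default, so version A code keeps
# working unchanged while version B starts writing the new field.
conn.execute("ALTER TABLE orders ADD COLUMN currency TEXT DEFAULT 'USD'")

# A version A insert (knows nothing about the new column) still succeeds:
conn.execute("INSERT INTO orders (amount) VALUES (42.0)")
# A version B insert uses the new column:
conn.execute("INSERT INTO orders (amount, currency) VALUES (10.0, 'EUR')")
print(conn.execute("SELECT id, amount, currency FROM orders").fetchall())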
Packaging:
Packaging components onto a VM image is called baking, and the options range from
lightly baked to heavily baked.
A VM image runs on top of a hypervisor, which enables a single bare-metal processor,
memory, and network to be shared among multiple tenants (VMs). The VM image is loaded
onto the hypervisor, which then schedules it.
Consider packaging one service per VM versus two services per VM. One difference between
these two options is the number of times a VM image must be baked. If there is one
service per VM, then that VM image is created whenever a change to its service is
committed. If there are two services per VM, then the VM image must be rebaked whenever
a change to either the first or the second service is committed. This difference is minor.
A more important difference occurs when service 1 sends a message to service 2. If the
two are in the same VM, then the message does not need to leave the VM to be delivered.
If they are in different VMs, then more handling and, potentially, network communication
are involved. Hence, message latency will be higher when each service is packaged into
its own VM.
We can deploy some of our services to one environment such as VMware and other
services to a different environment such as Amazon EC2.
There will also be a performance penalty for messages sent across environments. The
amount of this penalty needs to be determined experimentally so that the overall penalty
is within acceptable limits.
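Determining the penalty experimentally can be as simple as timing round trips to both deployments and comparing the averages, as in the sketch below; the two endpoints are hypothetical placeholders.

import time
import urllib.request

def round_trip_ms(url, samples=20):
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=5) as resp:
            resp.read()
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings)

local = round_trip_ms("http://same-vm.internal/ping")        # services co-located
remote = round_trip_ms("http://other-env.example.com/ping")  # across environments
print(f"average penalty: {remote - local:.1f} ms per message")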
Business continuity is the ability for a business to maintain service when facing a disaster
or serious outages. Fundamentally, it is achieved by deploying to sites that are physically
and logically separated from each other.
Partial Deployment:
Canary Testing:
A canary test is conceptually similar to a beta test in the shrink-wrapped software world:
the new version is deployed to a small number of VMs (the canaries) and exposed to a
small subset of real users or requests before the full rollout.
The mechanism for performing the canary tests depends on whether features are activated
with feature toggles or whether services are assumed to be forward or backward
compatible.
If the services rely on forward and backward compatibility, then the canary tests can be
performed once all of the services involved in a new feature have been upgraded to the
new version.
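One common way to select the small group of canary users is to hash a stable user identifier into a bucket, as sketched below; the 5% split is an illustrative assumption.

import hashlib

CANARY_PERCENT = 5   # send roughly 5% of users to the new version

def route(user_id: str) -> str:
    # Hashing the user id keeps the same user consistently on the same side.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary (version B)" if bucket < CANARY_PERCENT else "stable (version A)"

for uid in ["alice", "bob", "carol", "dave"]:
    print(uid, "->", route(uid))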
A/B Testing
The “A” and “B” refer to two different versions of a service that present either different
user interfaces or different behavior. In this case, it is the behavior of the user when
presented with these two different versions that is being tested.
If either A or B shows preferable behavior in terms of some business metric such as
orders placed, then that version becomes the production version and the other version is
retired.
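A sketch of deterministic A/B assignment and metric collection is given below; the 50/50 split and the "orders placed" business metric are illustrative assumptions.

import hashlib
from collections import defaultdict

def variant(user_id: str) -> str:
    # Deterministic 50/50 split so a user always sees the same variant.
    return "A" if int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 2 == 0 else "B"

orders = defaultdict(int)   # orders placed per variant
users = defaultdict(int)    # users exposed to each variant

def record_visit(user_id: str, placed_order: bool) -> None:
    v = variant(user_id)
    users[v] += 1
    if placed_order:
        orders[v] += 1

def better_variant() -> str:
    # After enough traffic, compare the business metric per variant.
    rate = {v: orders[v] / users[v] for v in users if users[v]}
    return max(rate, key=rate.get)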
26. What are DevOps Tools? Explain some of the tools used for deployment. Explain its
benefits.
DevOps tools make it easier for companies to reduce the probability of errors and
maintain continuous integration in operations, and they address key aspects of how a
company delivers software. DevOps tools automate the whole process, automatically
building, testing, and deploying features.
DevOps tools make the whole deployment process an easier one and can help with the
following aspects:
Increased development speed.
Improvement in operational efficiency.
Faster release.
Non-stop delivery.
Quicker rate of innovation.
Improvement in collaboration.
Seamless flow in the process chain.
Best Deployment Tools in DevOps
This list contains the most used, widely accepted DevOps tools in 2024 that give the
best results. (Refer figure 5.41)
Capistrano
It is an open-source tool, written in Ruby, that is best suited to running scripts and
arbitrary tasks on multiple servers.
Advantages:
Reliable deployment
Easy setup and automation.
Simplify general tasks in software teams.
Helps encapsulate and drive common infrastructure tasks.
Companies using Capistrano: Typeforum, New Relic, Tilt, Repro, Qiita.
Juju
It is an open-source management tool that decreases operational overhead by
performing various tasks in public and private clouds.
Advantages:
A fast way to deploy an OpenStack cloud.
Users get service configurations as per their needs.
No dependency problems.
Offers environment portability.
Offers both a GUI and a command-line interface.
Offers control on scalability.
Companies using Juju: SparkCognition, LabCorp, Lenovo, IBM, ARM, Veritas Technologies.
Travis CI
It is an open-source continuous integration tool that offers the advantage of building
and testing applications on GitHub.
Advantages:
Easy to build and maintain.
Effective integration with GitHub.
Offers a great UI and dashboard.
Provides built-in support for many languages and services.
Companies using Travis CI: Govini, Idea Evolver, Emogi, Baker Hughes, Cvent, Mendal.
GoCD
It is an open-source application for continuous delivery and automation across various
teams within an organization, covering the whole process from build to deployment.
Advantages:
Quick configuration with a sequential execution.
Easy extraction of templates.
Provides built-in test reporting.
Offers easy visualization and reliable UI.
Companies using GoCD: Hazeorid, OpenX, Xola, ThoughtWorks, Omnifone, Feedzai.
Jenkins
It is a well-known open-source automation server written in Java. It is a
server-based system that provides multiple dashboards.
Advantages:
Free and open source, with a very large plugin ecosystem.
Easy to install and configure.
Supports distributed builds across multiple machines.
Octopus Deploy
It is a management server for automated deployment and release management. It is also
compatible with database deployments.
Cost price: Free for small teams. Chargeable for large teams.
Advantages:
Deployments are repeatable and reliable.
Support changes.
Flexible integration abilities across platforms.
Provides a simple UI.
Companies using Octopus Deploy: GM Financial, CIMA, Parexel, AMETEK, ACA
Compliance Group.
AWS CodeDeploy
It is an automated deployment tool offered exclusively by Amazon as part of AWS. It is
used for releasing new features without any hassle.
Advantages:
Reliable and faster deployments.
Applications are easy to launch and track.
Decreases downtime and increases application availability.
Adapts easily to existing applications.
Companies using AWS CodeDeploy: Algorithmia, Indica, Hello Labs, Adsia, Zugata,
Eventtus.
DeployBot
It connects to any Git repository so that automatic or manual deployments can take
place, and deployments can also be triggered from Slack.
Advantages:
Allows compilation and execution of code during deployment.
Offers various update features.
Monitors the deployment process.
It can automatically search for keywords.
Companies using DeployBot: Sellsuki, Edify, Kiwi, Millstream, Kasper systems.
Shippable
It is a DevOps tool for continuous delivery that uses Docker-based pipelines for
continuous integration and quick delivery. It also supports multi-tier applications.
Advantages:
Covers the complete process, from build to deployment.
Easy configuration.
Separate pipelines for separate code repositories.
Multiple runtimes are allowed.
DevOps fits well with a rapidly changing technology landscape, and continuous-deployment
tools are worth adopting because they offer many advantages. DevOps code deployment
tools come with the following benefits:
Speed
Faster delivery
Continuous integration
Less scope of errors
Increased reliability
Improved collaboration
Increased security
Offers stability