SPM Unit-1 Material
UNIT - I
Conventional Software Management: The waterfall model, conventional software management performance.
1.IN THEORY
It provides an insightful and concise summary of conventional software management. Three primary points are:
1. There are two essential steps common to the development of computer programs: analysis and
coding.
Waterfall model part 1: the two basic steps common to the development of computer programs, i.e., analysis and coding.
2. In order to manage and control all of the intellectual freedom associated with software development, several other "overhead" steps must be introduced, including system requirements definition, software requirements definition, program design, and testing.
Fig: Large scale system approach
3. The basic framework described in the waterfall model is risky and invites failure. The testing phase that
occurs at the end of the development cycle is the first event for which timing, storage, input/output transfers,
etc., are experienced as distinguished from analyzed. The resulting design changes are likely to be so disruptive
that the software requirements upon which the design is based are likely violated. Either the requirements must
be modified or a substantial design change is warranted.
The five necessary improvements to the waterfall model are:
1. Program design comes first
2. Document the design
3. Do it twice
4. Plan, control, and monitor testing
5. Involve the customer
1. Program design comes first. Insert a preliminary program design phase between the software
requirements generation phase and the analysis phase. By this technique, the program designer
assures that the software will not fail because of storage, timing, and data flux (continuous
change). As analysis proceeds in the succeeding phase, the program designer must impose on the
analyst the storage, timing, and operational constraints in such a way that he senses the
consequences. If the total resources to be applied are insufficient or if the embryonic (in an early
stage of development) operational design is wrong, it will be recognized at this early stage and
the iteration with requirements and preliminary design can be redone before final design, coding,
and test commences. How is this program design procedure implemented?
2. Document the design. The amount of documentation required on most software programs is considerable,
certainly much more than most programmers, analysts, or program designers are willing to do if left to their
own devices. Why do we need so much documentation?
(1) Each designer must communicate with interfacing designers, managers, and possibly customers.
(3) The real monetary value of documentation is to support later modifications by a separate test team, a separate
maintenance team, and operations personnel who are not software literate.
3. Do it twice.
If a computer program is being developed for the first time, arrange matters so that the version finally
delivered to the customer for operational deployment is actually the second version insofar as critical
design/operations are concerned.
Note that this is simply the entire process done in miniature, to a time scale that is relatively small with
respect to the overall effort.
In the first version, the team must have a special broad competence where they can quickly sense trouble
spots in the design, model them, model alternatives, forget the straightforward aspects of the design that
aren't worth studying at this early point, and, finally, arrive at an error-free program.
4. Plan, control, and monitor testing. Without question, the biggest user of project resources (manpower, computer time, and/or management judgment) is the test phase.
This is the phase of greatest risk in terms of cost and schedule. It occurs at the latest point in the schedule,
when backup alternatives are least available, if at all.
The previous three recommendations were all aimed at uncovering and solving problems before entering
the test phase.
However, even after doing these things, there is still a test phase and there are still important things to be
done, including: (1) employ a team of test specialists who were not responsible for the
original design; (2) employ visual inspections to spot the obvious errors like dropped minus signs, missing
factors of two, jumps to wrong addresses (do not use the computer to detect this kind of thing, it is too
expensive); (3) test every logic path; (4) employ the final checkout on the target computer.
In practice, conventional waterfall projects frequently exhibited the following symptoms:
Early success via paper designs and thorough (often too thorough) briefings.
Commitment to code late in the life cycle.
Integration nightmares (unpleasant experience) due to unforeseen implementation issues and
interface ambiguities.
Heavy budget and schedule pressure to get the system working.
Late shoehorning of non-optimal fixes, with no time for redesign.
A very fragile, unmaintainable product delivered late.
In the conventional model, the entire system was designed on paper, then implemented all at once, then
integrated. Table 1-1 provides a typical profile of cost expenditures across the spectrum of software activities.
A serious issue associated with the waterfall lifecycle was the lack of early risk resolution. Figure 1.3
illustrates a typical risk profile for conventional waterfall model projects.
It includes four distinct periods of risk exposure, where risk is defined as the probability of missing a cost,
schedule, feature, or quality goal.
Early in the life cycle, as the requirements were being specified, the actual risk exposure was highly
unpredictable.
Fig: Risk profile of a conventional software project across its life cycle
3.Requirements-Driven Functional Decomposition:
This approach depends on specifying requirements completely and unambiguously before other
development activities begin.
It naively treats all requirements as equally important, and depends on those requirements remaining
constant over the software development life cycle.
These conditions rarely occur in the real world. Specification of requirements is a difficult and
important part of the software development process.
Another property of the conventional approach is that the requirements were typically specified in a
functional manner.
Built into the classic waterfall process was the fundamental assumption that the software itself was
decomposed into functions; requirements were then allocated to the resulting components.
This decomposition was often very different from a decomposition based on object-oriented design and
the use of existing components.
Figure 1-4 illustrates the result of requirements-driven approaches: a software structure that is
organized around the requirements specification structure.
The conventional process also tended to produce adversarial stakeholder relationships, in large part because of the difficulties of requirements specification and the exchange of information solely through paper documents that captured engineering information in ad hoc formats.
The following sequence of events was typical for most contractual software efforts:
1. The contractor prepared a draft contract-deliverable document that captured an intermediate
artifact and delivered it to the customer for approval.
2. The customer was expected to provide comments (typically within 15 to 30 days).
3. The contractor incorporated these comments and submitted (typically within 15 to 30 days) a
final version for approval.
This one-shot review process encouraged high levels of sensitivity on the part of customers and contractors.
8. Software systems and products typically cost 3 times as much per SLOC as individual software programs. Software-system products (i.e., systems of systems) cost 9 times as much.
9. Walkthroughs catch 60% of the errors.
10. 80% of the contribution comes from 20% of the contributors.
2.1.SOFTWARE ECONOMICS
Most software cost models can be abstracted into a function of five basic parameters: size, process, personnel, environment, and required quality (including reliability and adaptability). The relationships among these parameters and the estimated cost can be written as follows:

Effort = (Personnel)(Environment)(Quality)(Size^Process)
One important aspect of software economics (as represented within today's software cost models) is
that the relationship between effort and size exhibits a diseconomy of scale. The diseconomy of scale of
software development is a result of the process exponent being greater than 1.0. Contrary to most
manufacturing processes, the more software you build, the more expensive it is per unit item.
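To make the diseconomy of scale concrete, the following minimal Python sketch evaluates the abstract effort relationship above with purely hypothetical parameter values (the multipliers and the 1.2 process exponent are assumptions, not calibrated figures) and shows the cost per unit of size rising as size grows.

```python
# Minimal illustration of a diseconomy of scale in software cost estimation.
# Effort = (Personnel)(Environment)(Quality)(Size ** Process), with the
# process exponent > 1.0. All parameter values below are hypothetical.

def estimated_effort(size_ksloc, personnel=3.0, environment=1.0,
                     quality=1.1, process_exponent=1.2):
    """Return estimated effort (staff-months) for a given size in KSLOC."""
    return personnel * environment * quality * (size_ksloc ** process_exponent)

for size in (10, 100, 1000):
    effort = estimated_effort(size)
    # Unit cost rises with size because the exponent is greater than 1.0.
    print(f"{size:5d} KSLOC -> {effort:10.1f} staff-months, "
          f"{effort / size:6.2f} staff-months per KSLOC")
```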
Figure 2-1 shows three generations of basic technology advancement in tools, components, and
processes. The required levels of quality and personnel are assumed to be constant. The ordinate of the graph
refers to software unit costs (pick your favorite: per SLOC, per function point, per component) realized by
an organization.
The three generations of software development are defined as follows:
1) Conventional: 1960s and 1970s, craftsmanship. Organizations used custom tools, custom
processes, and virtually all custom components built in primitive languages. Project performance
was highly predictable in that cost, schedule, and quality objectives were almost always
underachieved.
2) Transition: 1980s and 1990s, software engineering. Organizations used more-repeatable processes and
off-the-shelf tools, and mostly (>70%) custom components built in higher level languages. Some of the
components (<30%) were available as commercial products, including the operating system, database
management system, networking, and graphical user interface.
3) Modern practices: 2000 and later, software production. This book's philosophy is rooted
in the use of managed and measured processes, integrated automation environments, and
mostly (70%) off-the-shelf components. Perhaps as few as 30% of the components need to
be custom built.
Technologies for environment automation, size reduction, and process improvement are not independent
of one another. In each new era, the key is complementary growth in all technologies. For example, the
process advances could not be used successfully without new component technologies and increased tool
automation.
Organizations are achieving better economies of scale in successive technology eras, with very large projects
(systems of systems), long-lived products, and lines of business comprising multiple similar projects. Figure
2-2 provides an overview of how a return on investment (ROI) profile can be achieved in subsequent efforts
across life cycles of various domains.
2.2.PRAGMATIC SOFTWARE COST ESTIMATION
One critical problem in software cost estimation is a lack of well-documented case studies of projects
that used an iterative development approach.
Because the software industry has inconsistently defined metrics and atomic units of measure, data from actual projects are highly suspect in terms of consistency and comparability.
It is hard enough to collect a homogeneous set of project data within one organization; it is extremely
difficult to homogenize data across different organizations with different processes, languages,
domains, and so on.
There have been many debates among developers and vendors of software cost estimation models and
tools. Three topics of these debates are of particular interest here:
There are several popular cost estimation models (such as COCOMO, CHECKPOINT, ESTIMACS,
Knowledge Plan, Price-S, ProQMS, SEER, SLIM, SOFTCOST, and SPQR/20). COCOMO is one of
the most open and well-documented cost estimation models. The general accuracy of conventional cost
models (such as COCOMO) has been described as "within 20% of actuals, 70% of the time."
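As a concrete illustration of what such a model computes, the sketch below implements the Basic COCOMO 81 effort and schedule equations. This is the simplest published form of COCOMO (the intermediate and detailed forms add cost-driver multipliers), and the 100 KLOC example input is arbitrary.

```python
# Basic COCOMO 81 effort and schedule equations (a deliberately simple form;
# intermediate/detailed COCOMO add effort-multiplier cost drivers).
# Effort (staff-months) = a * KLOC^b, Schedule (months) = c * Effort^d.

COCOMO_MODES = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = COCOMO_MODES[mode]
    effort = a * kloc ** b        # staff-months
    schedule = c * effort ** d    # calendar months
    return effort, schedule

# Arbitrary example: a 100 KLOC semidetached-mode project.
effort, schedule = basic_cocomo(100, "semidetached")
print(f"100 KLOC semidetached: {effort:.0f} staff-months over {schedule:.0f} months")
```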
1. Most real-world use of cost models is bottom-up (substantiating a target cost) rather than top-down
(estimating the "should" cost).
2. Figure 2-3 illustrates the predominant practice: the software project manager defines the target cost of
the software, and then manipulates the parameters and sizing until the target cost can be justified (the sketch after this list makes this back-solving concrete).
3. The rationale for the target cost may be to win a proposal, to solicit customer funding, to attain internal
corporate funding, or to achieve some other goal.
4. The process described in Figure 2-3 is not all bad. In fact, it is absolutely necessary to analyze the cost
risks and understand the sensitivities and trade-offs objectively.
5. It forces the software project manager to examine the risks associated with achieving the target costs
and to discuss this information with other stakeholders.
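The back-solving practice just described can be sketched as follows. The model form and every number here are hypothetical; the point is only that a target cost implicitly fixes the size and productivity assumptions that must then be defended.

```python
# Hypothetical illustration of "bottom-up" cost justification: given a target
# cost, back-solve the size that a simple effort model would have to assume.
# Model: effort = multiplier * size**exponent (all values are illustrative).

def size_for_target_effort(target_effort, multiplier=3.3, exponent=1.2):
    """Invert effort = multiplier * size**exponent for size (in KSLOC)."""
    return (target_effort / multiplier) ** (1.0 / exponent)

target = 500.0  # staff-months the manager needs to justify (illustrative)
print(f"A {target:.0f} staff-month target implies about "
      f"{size_for_target_effort(target):.0f} KSLOC under these assumptions")
```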
A good software cost estimate has the following attributes:
1. It is conceived and supported by the project manager, architecture team, development team,
and test team accountable for performing the work.
2. It is accepted by all stakeholders as ambitious but realizable.
3. It is based on a well-defined software cost model with a credible basis.
4. It is based on a database of relevant project experience that includes similar processes, similar
technologies, similar environments, similar quality requirements, and similar people.
5. It is defined in enough detail so that its key risk areas are understood and the probability of
success is objectively assessed.
Extrapolating from a good estimate, an ideal estimate would be derived from a mature cost model with an
experience base that reflects multiple similar projects done by the same team with the same mature processes
and tools.
3.Improving Software Economics: Reducing software product size, improving software processes,
improving team effectiveness, improving automation, achieving required quality, peer inspections.
Improvements can be made across the five basic parameters of the software cost model:
1. Reducing the size or complexity of what needs to be developed.
2. Improving the development process.
3. Using more-skilled personnel and better teams (not necessarily the same thing).
4. Using better environments (tools to automate the process).
5. Trading off or backing off on quality thresholds.
These parameters are given in priority order for most software domains.
Table 3-1 lists some of the technology developments, process improvement efforts, and management
approaches targeted at improving the economics of software development and integration.
3.1.REDUCING SOFTWARE PRODUCT SIZE
The most significant way to improve affordability and return on investment (ROI) is usually to produce a
product that achieves the design goals with the minimum amount of human-generated source material.
Component-based development is introduced as the general term for reducing the "source" language size to
achieve a software solution.
Reuse, object-oriented technology, automatic code production, and higher order programming languages are
all focused on achieving a given system with fewer lines of human-specified source directives (statements).
Size reduction is the primary motivation behind improvements in higher order languages (such as C++, Ada
95, Java, Visual Basic), automatic code generators (CASE tools, visual modeling tools, GUI builders), reuse
of commercial components (operating systems, windowing environments, database management systems,
middleware, networks), and object-oriented technologies (Unified Modeling Language, visual modeling
tools, architecture frameworks).
The reduction is defined in terms of human-generated source material. In general, when size-reducing
technologies are used, they reduce the number of human-generated source lines.
1. LANGUAGES
2.OBJECT-ORIENTED METHODS AND VISUAL MODELING
3.REUSE
4.COMMERCIAL COMPONENTS
1.LANGUAGES
Universal function points (UFPs) are useful estimators for language-independent, early life-cycle
estimates. The basic units of function points are external user inputs, external outputs, internal logical data
groups, external data interfaces, and external inquiries. SLOC metrics are useful estimators for software
after a candidate solution is formulated and an implementation language is known. Substantial data have
been documented relating SLOC to function points. Some of these results are shown in Table 3-2.
Table 3-2: Language expressiveness of some of today's popular languages
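Since Table 3-2 itself is not reproduced here, the sketch below only shows how such expressiveness ratios are applied: an early universal function point count is multiplied by an assumed SLOC-per-UFP ratio once the implementation language is chosen. The ratios in the dictionary are illustrative placeholders, not the published table values.

```python
# Illustration of converting an early function-point estimate into SLOC once
# an implementation language is chosen. The SLOC-per-UFP ratios below are
# placeholder assumptions, not the published Table 3-2 figures.

SLOC_PER_UFP = {
    "assembly": 320,   # assumed: low-expressiveness language
    "c":        128,   # assumed
    "c++":       55,   # assumed: higher order language needs fewer lines
    "java":      53,   # assumed
}

def estimate_sloc(ufp_count, language):
    """Rough SLOC estimate from universal function points for one language."""
    return ufp_count * SLOC_PER_UFP[language]

for lang in ("c", "c++"):
    print(f"{lang}: ~{estimate_sloc(1000, lang):,} SLOC for 1,000 UFPs")
```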
2.OBJECT-ORIENTED METHODS AND VISUAL MODELING
Object-oriented technology is not germane to most of the software management topics discussed
here, and books on object-oriented technology abound.
Object-oriented programming languages appear to benefit both software productivity and software
quality.
The fundamental impact of object-oriented technology is in reducing the overall size of what needs
to be developed.
People like drawing pictures to explain something to others or to themselves. When they do it for
software system design, they call these pictures diagrams or diagrammatic models and the very
notation for them a modeling language.
These are interesting examples of the interrelationships among the dimensions of improving software economics.
1. An object-oriented model of the problem and its solution encourages a common vocabulary
between the end users of a system and its developers, thus creating a shared understanding of the
problem being solved.
2. The use of continuous integration creates opportunities to recognize risk early and make incremental
corrections without destabilizing the entire development effort.
3. An object-oriented architecture provides a clear separation of concerns among disparate elements
of a system, creating firewalls that prevent a change in one part of the system from rending the
fabric of the entire architecture.
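As a small, hypothetical illustration of the separation-of-concerns point above, the sketch below shows a client written against an abstract interface, so that internal changes to one implementation do not ripple through the rest of the system. All class and method names are invented for the example.

```python
# Hypothetical illustration of separation of concerns in an object-oriented
# architecture: clients depend on an abstract interface, so internal changes
# to one implementation do not ripple through the rest of the system.
from abc import ABC, abstractmethod

class MessageStore(ABC):
    """Architectural 'firewall': the only contract clients may depend on."""

    @abstractmethod
    def save(self, message: str) -> None: ...

class InMemoryStore(MessageStore):
    def __init__(self) -> None:
        self._messages: list[str] = []

    def save(self, message: str) -> None:
        # The internal representation can change freely without affecting clients.
        self._messages.append(message)

def log_event(store: MessageStore, event: str) -> None:
    # The client is written against the interface, not a concrete class.
    store.save(event)

log_event(InMemoryStore(), "system started")
```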
Five characteristics of a successful object-oriented project include the following:
1. A ruthless focus on the development of a system that provides a well-understood collection of
essential minimal characteristics.
2. The existence of a culture that is centered on results, encourages communication, and yet is not
afraid to fail.
3. The effective use of object-oriented modeling.
4. The existence of a strong architectural vision.
5. The application of a well-managed iterative and incremental development life cycle.
3.REUSE
Reusing existing components and building reusable components have been natural software engineering
activities since the earliest improvements in programming languages. Software design methods have always
dealt implicitly with reuse in order to minimize development costs while achieving all the other required
attributes of performance, feature set, and quality. Reuse should be treated as a mundane part of achieving
a return on investment.
Most truly reusable components of value are transitioned to commercial products supported
by organizations that have an economic motivation for continued support.
4.COMMERCIAL COMPONENTS
A common approach being pursued today in many domains is to maximize integration of commercial
components and off-the-shelf products.
While the use of commercial components is certainly desirable as a means of reducing custom
development, it has not proven to be straightforward in practice.
Table 3-3 identifies some of the advantages and disadvantages of using commercial components.
3.2.IMPROVING SOFTWARE PROCESSES
In a perfect software engineering world with an immaculate problem description, an obvious solution space,
a development team of experienced geniuses, adequate resources, and stakeholders with common goals,
we could execute a software development process in one iteration with almost no scrap and rework. Because
we work in an imperfect world, however, we need to manage engineering activities so that scrap and rework
profiles do not have an impact on the win conditions of any stakeholder. This should be the underlying
premise for most process improvements.
3.3.IMPROVING TEAM EFFECTIVENESS
Software project managers need many leadership qualities in order to enhance team effectiveness.
The following are some crucial attributes of successful software project managers that deserve much more
attention:
1. Hiring skills. Few decisions are as important as hiring decisions. Placing the right person in the
right job seems obvious but is surprisingly hard to achieve.
2. Customer-interface skill. Avoiding adversarial relationships among stakeholders is a prerequisite
for success.
3. Decision-making skill. The jillion books written about management have failed to provide a clear
definition of this attribute. We all know a good leader when we run into one, and decision-making
skill seems obvious despite its intangible definition.
4. Team-building skill. Teamwork requires that a manager establish trust, motivate progress, exploit
eccentric prima donnas, transition average people into top performers, eliminate misfits, and
consolidate diverse opinions into a team direction.
5. Selling skill. Successful project managers must sell all stakeholders (including themselves) on
decisions and priorities, sell candidates on job positions, sell changes to the status quo in the face of
resistance, and sell achievements against objectives. In practice, selling requires continuous
negotiation, compromise, and empathy.
3.5.ACHIEVING REQUIRED QUALITY
Software best practices are derived from the development process and technologies. Table 3-5 summarizes
some dimensions of quality improvement.
Key practices that improve overall software quality include the following:
Focusing on driving requirements and critical use cases early in the life cycle, focusing on
requirements completeness and traceability late in the life cycle, and focusing throughout the life
cycle on a balance between requirements evolution, design evolution, and plan evolution
Using metrics and indicators to measure the progress and quality of an architecture as it evolves
from a high-level prototype into a fully compliant product (a simple indicator of this kind is sketched after this list)
Providing integrated life-cycle environments that support early and continuous configuration
control, change management, rigorous design methods, document automation, and regression test
automation
Using visual modeling and higher level languages that support architectural control, abstraction,
reliable programming, reuse, and self-documentation
Early and continuous insight into performance issues through demonstration-based evaluations
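As one possible, hedged example of the metrics-and-indicators practice referenced in the list above, the sketch below computes a simple rework ratio per iteration from hypothetical change records. The record fields and the 25% watch threshold are assumptions for illustration, not a prescribed metric set.

```python
# Hypothetical example of a simple quality indicator: the share of effort
# spent on rework per iteration, computed from change records.
# Field names and the 25% threshold are illustrative assumptions.

changes = [
    {"iteration": 1, "hours": 40,  "type": "rework"},
    {"iteration": 1, "hours": 160, "type": "new"},
    {"iteration": 2, "hours": 30,  "type": "rework"},
    {"iteration": 2, "hours": 90,  "type": "new"},
]

def rework_ratio(records, iteration):
    """Fraction of this iteration's recorded hours spent on rework."""
    total = sum(r["hours"] for r in records if r["iteration"] == iteration)
    rework = sum(r["hours"] for r in records
                 if r["iteration"] == iteration and r["type"] == "rework")
    return rework / total if total else 0.0

for it in (1, 2):
    ratio = rework_ratio(changes, it)
    flag = "watch" if ratio > 0.25 else "ok"
    print(f"iteration {it}: rework {ratio:.0%} ({flag})")
```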
Conventional development processes stressed early sizing and timing estimates of computer program
resource utilization. However, the typical chronology of events in performance assessment was as
follows:
Project inception. The proposed design was asserted to be low risk with adequate performance
margin.
Initial design review. Optimistic assessments of adequate design margin were based mostly on
paper analysis or rough simulation of the critical threads. In most cases, the actual application
algorithms and database sizes were fairly well understood.
Mid-life-cycle design review. The assessments started whittling away at the margin, as early
benchmarks and initial tests began exposing the optimism inherent in earlier estimates.
Integration and test. Serious performance problems were uncovered, necessitating fundamental
changes in the architecture. The underlying infrastructure was usually the scapegoat, but the real
culprit was immature use of the infrastructure, immature architectural solutions, or poorly
understood early design trade-offs.
3.6.PEER INSPECTIONS: A PRAGMATIC VIEW
Peer inspections are frequently overhyped as the key aspect of a quality system. In my experience, peer
reviews are valuable as secondary mechanisms, but they are rarely significant contributors to quality
compared with the following primary quality mechanisms and indicators, which should be emphasized in
the management process:
Transitioning engineering information from one artifact set to another, thereby assessing the
consistency, feasibility, understandability, and technology constraints inherent in the engineering
artifacts
Major milestone demonstrations that force the artifacts to be assessed against tangible
criteria in the context of relevant use cases
Environment tools (compilers, debuggers, analyzers, automated test suites) that ensure
representation rigor, consistency, completeness, and change control
Life-cycle testing for detailed insight into critical trade-offs, acceptance criteria, and
requirements compliance
Change management metrics for objective insight into multiple-perspective change trends
and convergence or divergence from quality and progress goals
Inspections are also a good vehicle for holding authors accountable for quality products. All authors of
software and documentation should have their products scrutinized as a natural by-product of the
process. Therefore, the coverage of inspections should be across all authors rather than across all
components.