SPM Unit I
Conventional Software Management: The waterfall model, conventional software Management performance.
Evolution of Software Economics: Software Economics, pragmatic software cost estimation.
Improving Software Economics: Reducing Software product size, improving software processes, improving team
effectiveness, improving automation, Achieving required quality, peer inspections.
Conventional and Modern Software Management: The principles of conventional software Engineering,
principles of modern software management, transitioning to an iterative process.
Figure: the waterfall model progresses through the phases of Requirements, Analysis, Design, Coding, Testing, and Operation.
3. The basic framework described in the waterfall model is risky and invites failure. The testing phase
that occurs at the end of the development cycle is the first event for which timing, storage,
input/output transfers, etc., are experienced as distinguished from analyzed. The resulting design
changes are likely to be so disruptive that the software requirements upon which the design is based
are likely violated. Either the requirements must be modified or a substantial design change is
warranted.
1. Program design comes first. Insert a preliminary program design phase between the software
requirements generation phase and the analysis phase. By this technique, the program designer
assures that the software will not fail because of storage, timing, and data flux (continuous
change). As analysis proceeds in the succeeding phase, the program designer must impose on the
analyst the storage, timing, and operational constraints in such a way that he senses the consequences.
If the total resources to be applied are insufficient or if the embryonic(in an early stage of
development) operational design is wrong, it will be recognized at this early stage and the iteration
with requirements and preliminary design can be redone before final design, coding, and test
commences. How is this program design procedure implemented?
2. Document the design. The amount of documentation required on most software programs is quite a lot,
certainly much more than most programmers, analysts, or program designers are willing to do if left to their
own devices. Why do we need so much documentation? (1) Each designer must communicate with interfacing
designers, managers, and possibly customers. (2) During early phases, the documentation is the design. (3) The
real monetary value of documentation is to support later modifications by a separate test team, a separate
maintenance team, and operations personnel who are not software literate.
3. Do it twice. If a computer program is being developed for the first time, arrange matters so that the version
finally delivered to the customer for operational deployment is actually the second version insofar as critical
design/operations are concerned. Note that this is simply the entire process done in miniature, to a time scale
that is relatively small with respect to the overall effort. In the first version, the team must have a special
broad competence where they can quickly sense trouble spots in the design, model them, model alternatives,
forget the straightforward aspects of the design that aren't worth studying at this early point, and, finally,
arrive at an error-free program.
4. Plan, control, and monitor testing. Without question, the biggest user of project resources (manpower,
computer time, and/or management judgment) is the test phase. This is the phase of greatest risk in terms of
cost and schedule. It occurs at the latest point in the schedule, when backup alternatives are least available, if
at all. The previous three recommendations were all aimed at uncovering and solving problems before
entering the test phase. However, even after doing these things, there is still a test phase and there are still
important things to be done, including: (1) employ a team of test specialists who were not responsible for the
original design; (2) employ visual inspections to spot the obvious errors like dropped minus signs, missing
factors of two, jumps to wrong addresses (do not use the computer to detect this kind of thing, it is too
expensive); (3) test every logic path; (4) employ the final checkout on the target computer.
5. Involve the customer. It is important to involve the customer in a formal way so that he has committed
himself at earlier points before final delivery. There are three points following requirements definition where
the insight, judgment, and commitment of the customer can bolster the development effort. These include a
"preliminary software review" following the preliminary program design step, a sequence of "critical software
design reviews" during program design, and a "final software acceptance review".
1.1.2 IN PRACTICE
Some software projects still practice the conventional software management approach.
It is useful to summarize the characteristics of the conventional process as it has typically been applied,
which is not necessarily as it was intended. Projects destined for trouble frequently exhibit the following
symptoms:
Early success via paper designs and thorough (often too thorough) briefings.
Commitment to code late in the life cycle.
Integration nightmares (unpleasant experience) due to unforeseen implementation issues and interface
ambiguities.
Heavy budget and schedule pressure to get the system working.
Late shoehorning of nonoptimal fixes, with no time for redesign.
A very fragile, unmaintainable product delivered late.
In the conventional model, the entire system was designed on paper, then implemented all at once, then
integrated. Table 1-1 provides a typical profile of cost expenditures across the spectrum of software activities.
Late risk resolution. A serious issue associated with the waterfall life cycle was the lack of early risk resolution.
Figure 1.3 illustrates a typical risk profile for conventional waterfall model projects. It includes four distinct
periods of risk exposure, where risk is defined as the probability of missing a cost, schedule, feature, or quality
goal. Early in the life cycle, as the requirements were being specified, the actual risk exposure was highly
unpredictable.
Requirements-Driven Functional Decomposition: This approach depends on specifying requirements
completely and unambiguously before other development activities begin. It naively treats all requirements as
equally important, and depends on those requirements remaining constant over the software development life
cycle. These conditions rarely occur in the real world. Specification of requirements is a difficult and important
part of the software development process.
Another property of the conventional approach is that the requirements were typically specified in a
functional manner. Built into the classic waterfall process was the fundamental assumption that the software
itself was decomposed into functions; requirements were then allocated to the resulting components. This
decomposition was often very different from a decomposition based on object-oriented design and the use of
existing components. Figure 1-4 illustrates the result of requirements-driven approaches: a software structure
that is organized around the requirements specification structure.
The following sequence of events was typical for most contractual software efforts:
1. The contractor prepared a draft contract-deliverable document that captured an intermediate artifact
and delivered it to the customer for approval.
2. The customer was expected to provide comments (typically within 15 to 30 days).
3. The contractor incorporated these comments and submitted (typically within 15 to 30 days) a final
version for approval.
This one-shot review process encouraged high levels of sensitivity on the part of customers and contractors.
2. Evolution of Software Economics
The relationships among these parameters and the estimated cost can be written as follows:

Effort = (Personnel)(Environment)(Quality)(Size^Process)
One important aspect of software economics (as represented within today's software cost models) is that
the relationship between effort and size exhibits a diseconomy of scale. The diseconomy of scale of software
development is a result of the process exponent being greater than 1.0. Contrary to most manufacturing
processes, the more software you build, the more expensive it is per unit item.
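To make the diseconomy of scale concrete, here is a minimal Python sketch of the effort relationship above; the multiplier and exponent values are illustrative assumptions, not calibrated model data:

    # Diseconomy of scale: unit cost rises with size when the process
    # exponent exceeds 1.0. The coefficients below are illustrative only.
    def estimated_effort(size_ksloc, multiplier=3.0, process_exponent=1.2):
        """Effort in person-months as multiplier * size ** exponent."""
        return multiplier * size_ksloc ** process_exponent

    for size in (10, 100, 1000):
        effort = estimated_effort(size)
        print(f"{size:>5} KSLOC -> {effort:8.0f} PM "
              f"({effort / size:.1f} PM per KSLOC)")

For these made-up values, the cost per KSLOC more than doubles as size grows from 10 KSLOC to 1,000 KSLOC, which is the opposite of a manufacturing learning curve.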
Figure 2-1 shows three generations of basic technology advancement in tools, components, and processes.
The required levels of quality and personnel are assumed to be constant. The ordinate of the graph refers to
software unit costs (pick your favorite: per SLOC, per function point, per component) realized by an
organization.
The three generations of software development are defined as follows:
1) Conventional: 1960s and 1970s, craftsmanship. Organizations used custom tools, custom processes,
and virtually all custom components built in primitive languages. Project performance was highly
predictable in that cost, schedule, and quality objectives were almost always underachieved.
2) Transition: 1980s and 1990s, software engineering. Organizations used more-repeatable processes and off-
the-shelf tools, and mostly (>70%) custom components built in higher level languages. Some of the
components (<30%) were available as commercial products, including the operating system, database
management system, networking, and graphical user interface.
3) Modern practices: 2000 and later, software production. This book's philosophy is rooted in the
use of managed and measured processes, integrated automation environments, and mostly
(70%) off-the-shelf components. Perhaps as few as 30% of the components need to be custom
built.
Technologies for environment automation, size reduction, and process improvement are not independent of
one another. In each new era, the key is complementary growth in all technologies. For example, the process
advances could not be used successfully without new component technologies and increased tool automation.
Organizations are achieving better economies of scale in successive technology eras, with very large projects
(systems of systems), long-lived products, and lines of business comprising multiple similar projects. Figure 2-2
provides an overview of how a return on investment (ROI) profile can be achieved in subsequent efforts across
life cycles of various domains.
2.2 PRAGMATIC SOFTWARE COST ESTIMATION
One critical problem in software cost estimation is the lack of well-documented case studies of projects that used
an iterative development approach. Because the software industry has inconsistently defined metrics and atomic
units of measure, the data from actual projects are highly suspect in terms of consistency and comparability. It is
hard enough to collect a homogeneous set of project data within one organization; it is extremely difficult to
homogenize data across different organizations with different processes, languages, domains, and so on.
There have been many debates among developers and vendors of software cost estimation models and tools.
Three topics of these debates are of particular interest here:
1. Which cost estimation model is the best?
2. Is it better to measure source lines of code or function points?
3. What constitutes a good estimate?

COCOMO is one of the most open and well-documented cost estimation models. The general accuracy of
conventional cost models (such as COCOMO) has been described as "within 20% of actuals, 70% of the time."
Most real-world use of cost models is bottom-up (substantiating a target cost) rather than top-down
(estimating the "should" cost). Figure 2-3 illustrates the predominant practice: The software project manager
defines the target cost of the software, and then manipulates the parameters and sizing until the target cost can
be justified. The rationale for the target cost may be to win a proposal, to solicit customer funding, to attain
internal corporate funding, or to achieve some other goal.
The process described in Figure 2-3 is not all bad. In fact, it is absolutely necessary to analyze the cost risks and
understand the sensitivities and trade-offs objectively. It forces the software project manager to examine the
risks associated with achieving the target costs and to discuss this information with other stakeholders.
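The bottom-up practice of Figure 2-3 can be sketched as simple algebra: fix the target effort, then back out the size assumption that justifies it. This toy inversion reuses the illustrative formula above; it is not any vendor's costing tool:

    # Invert effort = multiplier * size ** exponent to find the sizing
    # assumption that "justifies" a given target effort (person-months).
    def size_for_target_effort(target_pm, multiplier=3.0, process_exponent=1.2):
        return (target_pm / multiplier) ** (1.0 / process_exponent)

    # A manager with a 500 PM target can defend a sizing basis of ~71 KSLOC.
    print(f"{size_for_target_effort(500):.1f} KSLOC")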
A good software cost estimate has the following attributes:
It is conceived and supported by the project manager, architecture team, development team, and test
team accountable for performing the work.
It is accepted by all stakeholders as ambitious but realizable.
It is based on a well-defined software cost model with a credible basis.
It is based on a database of relevant project experience that includes similar processes, similar
technologies, similar environments, similar quality requirements, and similar people.
It is defined in enough detail so that its key risk areas are understood and the probability of success is
objectively assessed.
Extrapolating from a good estimate, an ideal estimate would be derived from a mature cost model with an
experience base that reflects multiple similar projects done by the same team with the same mature processes
and tools.
3. Improving Software Economics

Five basic parameters of the software cost model can be improved:
1. Reducing the size or complexity of what needs to be developed
2. Improving the development process
3. Using more-skilled personnel and better teams (not necessarily the same thing)
4. Using better environments (tools to automate the process)
5. Trading off or backing off on quality thresholds

These parameters are given in priority order for most software domains. Table 3-1 lists some of the
technology developments, process improvement efforts, and management approaches targeted at
improving the economics of software development and integration.
3.1.1 LANGUAGES
Universal function points (UFPs)¹ are useful estimators for language-independent, early life-cycle estimates.
The basic units of function points are external user inputs, external outputs, internal logical data groups,
external data interfaces, and external inquiries. SLOC metrics are useful estimators for software after a
candidate solution is formulated and an implementation language is known. Substantial data have been
documented relating SLOC to function points. Some of these results are shown in Table 3-2.
Table 3-2. Language expressiveness of some of today's popular languages

LANGUAGE           SLOC PER UFP
Assembly                    320
C                           128
FORTRAN 77                  105
COBOL 85                     91
Ada 83                       71
C++                          56
Ada 95                       55
Java                         55
Visual Basic                 35
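As a small sketch of how Table 3-2 supports early estimation: count the five basic function point units, weight them, and convert the UFP total to SLOC for candidate languages. The weights below are the commonly cited IFPUG "average" weights and the unit counts are invented for illustration; neither appears in the notes above:

    # Weighted UFP count from the five basic units (assumed IFPUG
    # average weights), then SLOC estimates using Table 3-2 ratios.
    AVERAGE_WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4,
                       "logical_files": 10, "interfaces": 7}
    SLOC_PER_UFP = {"C": 128, "C++": 56, "Java": 55}  # from Table 3-2

    def unadjusted_fp(counts):
        """Sum weight * count over the five basic function point units."""
        return sum(AVERAGE_WEIGHTS[unit] * n for unit, n in counts.items())

    counts = {"inputs": 20, "outputs": 15, "inquiries": 10,
              "logical_files": 8, "interfaces": 4}       # invented system
    ufp = unadjusted_fp(counts)                          # 303 UFPs
    for lang, ratio in SLOC_PER_UFP.items():
        print(f"{lang:>4}: ~{ufp * ratio:,} SLOC")

The spread of the resulting estimates (roughly 39,000 SLOC in C versus 17,000 in Java) is exactly the language expressiveness effect the table describes.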
3.1.2 OBJECT-ORIENTED METHODS AND VISUAL MODELING

Booch described three reasons that certain object-oriented projects succeed:

1. An object-oriented model of the problem and its solution encourages a common vocabulary between
the end users of a system and its developers, thus creating a shared understanding of the problem
being solved.
2. The use of continuous integration creates opportunities to recognize risk early and make incremental
corrections without destabilizing the entire development effort.
3. An object-oriented architecture provides a clear separation of concerns among disparate elements of a
system, creating firewalls that prevent a change in one part of the system from rending the fabric of
the entire architecture.
¹ Function point metrics provide a standardized method for measuring the various functions of a software application.
The basic units of function points are external user inputs, external outputs, internal logical data groups, external data interfaces, and
external inquiries.
Booch also summarized five characteristics of a successful object-oriented project.
1. A ruthless focus on the development of a system that provides a well understood collection of essential
minimal characteristics.
2. The existence of a culture that is centered on results, encourages communication, and yet is not afraid
to fail.
3. The effective use of object-oriented modeling.
4. The existence of a strong architectural vision.
5. The application of a well-managed iterative and incremental development life cycle.
3.1.3 REUSE
Reusing existing components and building reusable components have been natural software engineering
activities since the earliest improvements in programming languages. Reuse should be pursued in order to
minimize development costs while achieving all the other required attributes of performance, feature set, and
quality. Reuse is best treated as a mundane part of achieving a return on investment.
Most truly reusable components of value are transitioned to commercial products supported by
organizations with the following characteristics:

They have an economic motivation for continued support.
They take ownership of improving product quality, adding new features, and transitioning to new technologies.
They have a sufficiently broad customer base to be profitable.
3.1.4 COMMERCIAL COMPONENTS
A common approach being pursued today in many domains is to maximize integration of commercial
components and off-the-shelf products. While the use of commercial components is certainly desirable as a
means of reducing custom development, it has not proven to be straightforward in practice. Table 3-3 identifies
some of the advantages and disadvantages of using commercial components.
3.2 IMPROVING SOFTWARE PROCESSES

In a perfect software engineering world with an immaculate problem description, an obvious solution space, a
development team of experienced geniuses, adequate resources, and stakeholders with common goals, we
could execute a software development process in one iteration with almost no scrap and rework. Because we
work in an imperfect world, however, we need to manage engineering activities so that scrap and rework
profiles do not have an impact on the win conditions of any stakeholder. This should be the underlying
premise for most process improvements.
3.3 IMPROVING TEAM EFFECTIVENESS
Teamwork is much more important than the sum of the individuals. With software teams, a project manager
needs to configure a balance of solid talent with highly skilled people in the leverage positions. Some maxims
of team management include the following:
A well-managed project can succeed with a nominal engineering team.
A mismanaged project will almost never succeed, even with an expert team of engineers.
A well-architected system can be built by a nominal team of software builders.
A poorly architected system will flounder even with an expert team of builders.
Boehm's five staffing principles are:
1. The principle of top talent: Use better and fewer people.
2. The principle of job matching: Fit the tasks to the skills and motivation of the people available.
3. The principle of career progression: An organization does best in the long run by helping its people
to self-actualize.
4. The principle of team balance: Select people who will complement and harmonize with one another.
5. The principle of phase-out: Keeping a misfit on the team doesn't benefit anyone.
Software project managers need many leadership qualities in order to enhance team effectiveness. The
following are some crucial attributes of successful software project managers that deserve much more attention:
1. Hiring skills. Few decisions are as important as hiring decisions. Placing the right person in the right
job seems obvious but is surprisingly hard to achieve.
2. Customer-interface skill. Avoiding adversarial relationships among stakeholders is a prerequisite for
success.
3. Decision-making skill. The jillion books written about management have failed to provide a clear
definition of this attribute. We all know a good leader when we run into one, and decision-making
skill seems obvious despite its intangible definition.
4. Team-building skill. Teamwork requires that a manager establish trust, motivate progress, exploit
eccentric prima donnas, transition average people into top performers, eliminate misfits, and
consolidate diverse opinions into a team direction.
5. Selling skill. Successful project managers must sell all stakeholders (including themselves) on decisions
and priorities, sell candidates on job positions, sell changes to the status quo in the face of resistance, and
sell achievements against objectives. In practice, selling requires continuous negotiation, compromise,
and empathy.
3.4 IMPROVING AUTOMATION

The following observations suggest where automation can pay off across a project's resources:

Test activities can consume as much as 50% of a project's resources.
Configuration control and change management are critical activities that can consume as much as
25% of resources on a large-scale project.
Documentation activities can consume more than 30% of project engineering resources.
Project management, business administration, and progress assessment can consume as much as 30%
of project budgets.
3.5 ACHIEVING REQUIRED QUALITY

Key practices that improve overall software quality include the following:
Focusing on driving requirements and critical use cases early in the life cycle, focusing on
requirements completeness and traceability late in the life cycle, and focusing throughout the life cycle
on a balance between requirements evolution, design evolution, and plan evolution
Using metrics and indicators to measure the progress and quality of an architecture as it evolves from
a high-level prototype into a fully compliant product
Providing integrated life-cycle environments that support early and continuous configuration control,
change management, rigorous design methods, document automation, and regression test automation
Using visual modeling and higher level languages that support architectural control, abstraction,
reliable programming, reuse, and self-documentation
Early and continuous insight into performance issues through demonstration-based evaluations
Conventional development processes stressed early sizing and timing estimates of computer program
resource utilization. However, the typical chronology of events in performance assessment was as follows:
Project inception. The proposed design was asserted to be low risk with adequate performance margin.
Initial design review. Optimistic assessments of adequate design margin were based mostly on paper
analysis or rough simulation of the critical threads. In most cases, the actual application algorithms and
database sizes were fairly well understood.
Mid-life-cycle design review. The assessments started whittling away at the margin, as early
benchmarks and initial tests began exposing the optimism inherent in earlier estimates.
Integration and test. Serious performance problems were uncovered, necessitating fundamental changes
in the architecture. The underlying infrastructure was usually the scapegoat, but the real culprit was
immature use of the infrastructure, immature architectural solutions, or poorly understood early design
trade-offs.
4.1 THE PRINCIPLES OF CONVENTIONAL SOFTWARE ENGINEERING

Davis summarized the following 30 principles of conventional software engineering:

1. Make quality #1. Quality must be quantified and mechanisms put into place to motivate its achievement.
2. High-quality software is possible. Techniques that have been demonstrated to increase quality include
involving the customer, prototyping, simplifying design, conducting inspections, and hiring the best people
3. Give products to customers early. No matter how hard you try to learn users' needs during the requirements
phase, the most effective way to determine real needs is to give users a product and let them play with it
4. Determine the problem before writing the requirements. When faced with what they believe is a problem,
most engineers rush to offer a solution. Before you try to solve a problem, be sure to explore all the alternatives
and don't be blinded by the obvious solution.
5. Evaluate design alternatives. After the requirements are agreed upon, you must examine a variety of
architectures and algorithms. You certainly do not want to use an architecture simply because it was used in the
requirements specification.
6. Use an appropriate process model. Each project must select a process that makes the most sense for that
project on the basis of corporate culture, willingness to take risks, application area, volatility of requirements, and
the extent to which requirements are well understood.
7. Use different languages for different phases. Our industry's eternal thirst for simple solutions to complex
problems has driven many to declare that the best development method is one that uses the same notation
throughout the life cycle.
8. Minimize intellectual distance. To minimize intellectual distance, the software's structure should be as close as
possible to the real-world structure
9. Put techniques before tools. An undisciplined software engineer with a tool becomes a dangerous,
undisciplined software engineer
10. Get it right before you make it faster. It is far easier to make a working program run faster than it is to make
a fast program work. Don't worry about optimization during initial coding
11. Inspect code. Inspecting the detailed design and code is a much better way to find errors than testing
12. Good management is more important than good technology. Good management motivates people to do
their best, but there are no universal "right" styles of management.
13. People are the key to success. Highly skilled people with appropriate experience, talent, and training are key.
14. Follow with care. Just because everybody is doing something does not make it right for you. It may be right,
but you must carefully assess its applicability to your environment.
15. Take responsibility. When a bridge collapses we ask, "What did the engineers do wrong?" Even when
software fails, we rarely ask this. The fact is that in any engineering discipline, the best methods can be used to
produce awful designs, and the most antiquated methods to produce elegant designs.
16. Understand the customer's priorities. It is possible the customer would tolerate 90% of the functionality
delivered late if they could have 10% of it on time.
17. The more they see, the more they need. The more functionality (or performance) you provide a user, the
more functionality (or performance) the user wants.
18. Plan to throw one away. One of the most important critical success factors is whether or not a product is
entirely new. Such brand-new applications, architectures, interfaces, or algorithms rarely work the first time.
19. Design for change. The architectures, components, and specification techniques you use must accommodate
change.
20. Design without documentation is not design. I have often heard software engineers say, "I have finished the
design. All that is left is the documentation. "
21. Use tools, but be realistic. Software tools make their users more efficient.
22. Avoid tricks. Many programmers love to create programs with tricks: constructs that perform a function
correctly but in an obscure way. Show the world how smart you are by avoiding tricky code.
23. Encapsulate. Information-hiding is a simple, proven concept that results in software that is easier to test
and much easier to maintain.
24. Use coupling and cohesion. Coupling and cohesion are the best ways to measure software's inherent
maintainability and adaptability
25. Use the McCabe complexity measure. Although there are many metrics available to report the inherent
complexity of software, none is as intuitive and easy to use as Tom McCabe's (a small sketch follows this list).
26. Don't test your own software. Software developers should never be the primary testers of their own
software.
27. Analyze causes for errors. It is far more cost-effective to reduce the effect of an error by preventing it than it
is to find and fix it. One way to do this is to analyze the causes of errors as they are detected
28. Realize that software's entropy increases. Any software system that undergoes continuous change will grow
in complexity and will become more and more disorganized
29. People and time are not interchangeable. Measuring a project solely by person-months makes little sense.
30. Expect excellence. Your employees will do much better if you have high expectations for them.
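A minimal sketch of the McCabe (cyclomatic) complexity measure referenced in principle 25: for a single routine, complexity is the number of decision points plus one. The version below approximates this by counting branching nodes in a Python syntax tree rather than building a true control-flow graph:

    import ast
    import textwrap

    # Approximate cyclomatic complexity: branching constructs + 1. This
    # counts branch nodes rather than building a real control-flow graph.
    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                    ast.ExceptHandler, ast.IfExp)

    def cyclomatic_complexity(source):
        tree = ast.parse(textwrap.dedent(source))
        return 1 + sum(isinstance(node, BRANCH_NODES)
                       for node in ast.walk(tree))

    sample = """
    def classify(x):
        if x < 0:
            return "negative"
        elif x == 0:
            return "zero"
        return "positive"
    """
    print(cyclomatic_complexity(sample))  # 3: two decisions plus one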
4.2 THE PRINCIPLES OF MODERN SOFTWARE MANAGEMENT
The top 10 principles of modern software management are as follows. (The first five, which are the main themes of my
definition of an iterative process, are summarized in Figure 4-1.)
1. Base the process on an architecture-first approach. This requires that a demonstrable balance be
achieved among the driving requirements, the architecturally significant design decisions, and the life-
cycle plans before the resources are committed for full-scale development.
2. Establish an iterative life-cycle process that confronts risk early. With today's sophisticated software
systems, it is not possible to define the entire problem, design the entire solution, build the software, and
then test the end product in sequence. Instead, an iterative process that refines the problem understanding,
an effective solution, and an effective plan over several iterations encourages a balanced treatment of all
stakeholder objectives. Major risks must be addressed early to increase predictability and avoid expensive
downstream scrap and rework.
3. Transition design methods to emphasize component-based development. Moving from a line-of-
code mentality to a component-based mentality is necessary to reduce the amount of human-generated
source code and custom development.
4. Establish a change management environment. The dynamics of iterative development, including
concurrent workflows by different teams working on shared artifacts, necessitate objectively
controlled baselines.
5. Enhance change freedom through tools that support round-trip engineering. Round-trip
engineering is the environment support necessary to automate and synchronize
engineering information in different formats (such as requirements specifications, design models,
source code, executable code, test cases).
6. Capture design artifacts in rigorous, model-based notation. A model-based approach (such as UML)
supports the evolution of semantically rich graphical and textual design notations.
7. Instrument the process for objective quality control and progress assessment. Life-cycle assessment
of the progress and the quality of all intermediate products must be integrated into the process.
8. Use a demonstration-based approach to assess intermediate artifacts.
9. Plan intermediate releases in groups of usage scenarios with evolving levels of detail. It is
essential that the software management process drive toward early and continuous demonstrations
within the operational context of the system, namely its use cases.
10. Establish a configurable process that is economically scalable. No single process is suitable for
all software developments.
Table 4-1 maps the top 10 risks of the conventional process to the key attributes and principles of a modern
process.
4.3 TRANSITIONING TO AN ITERATIVE PROCESS
Modern software development processes have moved away from the conventional waterfall model, in which
each stage of the development process is dependent on completion of the previous stage.
The economic benefits inherent in transitioning from the conventional waterfall model to an iterative
development process are significant but difficult to quantify. As one benchmark of the expected economic
impact of process improvement, consider the process exponent parameters of the COCOMO II model.
(Appendix B provides more detail on the COCOMO model.) This exponent can range from 1.01 (virtually no
diseconomy of scale) to 1.26 (significant diseconomy of scale). The parameters that govern the value of the
process exponent are application precedentedness, process flexibility, architecture risk resolution, team
cohesion, and software process maturity.
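As a sketch of how those five parameters set the exponent, COCOMO II computes E = B + 0.01 * (sum of the five scale-factor ratings). The base constant and the sample ratings below are illustrative assumptions drawn from commonly published nominal values, not project data:

    # COCOMO II-style scale exponent: E = B + 0.01 * sum(scale factors).
    # B and the ratings are illustrative nominal values, not calibrated data.
    B = 0.91
    scale_factors = {
        "precedentedness": 3.72,
        "development_flexibility": 3.04,
        "architecture_risk_resolution": 4.24,
        "team_cohesion": 3.29,
        "process_maturity": 4.68,
    }
    E = B + 0.01 * sum(scale_factors.values())
    print(f"process exponent E = {E:.2f}")  # about 1.10 at nominal ratings

Better ratings on any of the five factors shrink the exponent toward 1.01 and reduce the diseconomy of scale; worse ratings push it toward 1.26.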
The following paragraphs map the process exponent parameters of COCOMO II to my top 10 principles of
a modern process.
Application precedentedness. Domain experience is a critical factor in understanding how to plan and
execute a software development project. For unprecedented systems, one of the key goals is to confront
risks and establish early precedents, even if they are incomplete or experimental. This is one of the primary
reasons that the software industry has moved to an iterative life-cycle process. Early iterations in the life
cycle establish precedents from which the product, the process, and the plans can be elaborated in evolving
levels of detail.
Process flexibility. Development of modern software is characterized by such a broad solution space and
so many interrelated concerns that there is a paramount need for continuous incorporation of changes.
These changes may be inherent in the problem understanding, the solution space, or the plans. Project
artifacts must be supported by efficient change management commensurate with project needs. A
configurable process that allows a common framework to be adapted across a range of projects is
necessary to achieve a software return on investment.
Architecture risk resolution. Architecture-first development is a crucial theme underlying a successful
iterative development process. A project team develops and stabilizes an architecture before developing all the
components that make up the entire suite of applications. An architecture-first and
component-based development approach forces the infrastructure, common mechanisms, and control
mechanisms to be elaborated early in the life cycle and drives all component make/buy decisions into the
architecture process.
Team cohesion. Successful teams are cohesive, and cohesive teams are successful. Successful teams and
cohesive teams share common objectives and priorities. Advances in technology (such as programming
languages, UML, and visual modeling) have enabled more rigorous and understandable notations for
communicating software engineering information, particularly in the requirements and design artifacts that
previously were ad hoc and based completely on paper exchange. These model-based formats have also
enabled the round-trip engineering support needed to establish change freedom sufficient for evolving
design representations.
Software process maturity. The Software Engineering Institute's Capability Maturity Model (CMM) is a
well-accepted benchmark for software process assessment. One of its key themes is that truly mature
processes are enabled through an integrated environment that provides the appropriate level of automation
to instrument the process for objective quality control.
Important questions
1. Explain briefly the waterfall model. Also explain conventional software management performance.
2. Define software economics. Also explain pragmatic software cost estimation.
4. Explain the five staffing principles offered by Boehm. Also explain peer inspections.