
Unit 5


Software Project Management Renaissance
Conventional Software Management
• Conventional software management practices are sound in theory but are still tied to archaic (outdated) technology and techniques.
• The best thing about software is its flexibility: it can be programmed to do almost anything.
• The worst thing about software is also its flexibility: the "almost anything" characteristic has made it difficult to plan, monitor, and control software development.
• Three important analyses of the state of the software engineering industry are:
• 1. Software development is still highly unpredictable. Only about 10% of software projects are delivered successfully within initial budget and schedule estimates.
• 2. Management discipline is more of a discriminator in success or failure than are technology advances.
• 3. The level of software scrap and rework is indicative of an immature process.
• All three analyses reached the same general conclusion: the success rate for software projects is very low. The three analyses provide a good introduction to the magnitude of the software problem and the current norms for conventional software management performance.
THE WATERFALL MODEL: in theory
• It provides an insightful and concise summary of conventional software management.
• Three primary points are:
• 1. There are two essential steps common to the development of computer programs: analysis and coding.
• 2. In order to manage and control all of the intellectual freedom associated with software development, one must introduce several other "overhead" steps, including system requirements definition, software requirements definition, program design, and testing. These steps supplement the analysis and coding steps. The given figure illustrates the resulting project profile and the basic steps in developing a large-scale program.
• 3. The basic framework described in the waterfall model is risky and invites failure. The testing phase that
occurs at the end of the development cycle is the first event for which timing, storage, input/output transfers,
etc., are experienced as distinguished from analyzed. The resulting design changes are likely to be so
disruptive that the software requirements upon which the design is based are likely violated. Either the
requirements must be modified or a substantial design change is warranted.
Improvements to the waterfall model
• 1. Program design comes first. Insert a preliminary program design phase
between the software requirements generation phase and the analysis phase. By
this technique, the program designer assures that the software will not fail
because of storage, timing, and data flux (continuous change). As analysis
proceeds in the succeeding phase, the program designer must impose on the
analyst the storage, timing, and operational constraints in such a way that he
senses the consequences. If the total resources to be applied are insufficient or if the embryonic (in an early stage of development) operational design is wrong, it will be recognized at this early stage and the iteration with requirements and preliminary design can be redone before final design, coding, and test commence.
• 2. Document the design. The amount of documentation required on most
software programs is quite a lot, certainly much more than most programmers,
analysts, or program designers are willing to do if left to their own devices. Why
do we need so much documentation? (1) Each designer must communicate with
interfacing designers, managers, and possibly customers. (2) During early phases,
the documentation is the design. (3) The real monetary value of documentation is
to support later modifications by a separate test team, a separate maintenance
team, and operations personnel who are not software literate.
• 3. Do it twice. If a computer program is being developed for the first time, arrange matters so that the version finally delivered to the customer for operational deployment is actually the second version insofar as critical design/operations areas are concerned. Note that this is simply the entire process done in miniature, to a time scale that is relatively small with respect to the overall effort. In the first version, the team must have a special broad competence where they can quickly sense trouble spots in the design, model them, model alternatives, forget the straightforward aspects of the design that aren't worth studying at this early point, and, finally, arrive at an error-free program.
• 4. Plan, control, and monitor testing. Without question, the biggest user of project resources (manpower, computer time, and/or management judgment) is the test phase. This is the phase of greatest risk in terms of cost and schedule. It occurs at the latest point in the schedule, when backup alternatives are least available, if at all. The previous three recommendations were all aimed at uncovering and solving problems before entering the test phase. However, even after doing these things, there is still a test phase and there are still important things to be done, including: (1) employ a team of test specialists who were not responsible for the original design; (2) employ visual inspections to spot the obvious errors like dropped minus signs, missing factors of two, and jumps to wrong addresses (do not use the computer to detect this kind of thing; it is too expensive); (3) test every logic path; and (4) employ the final checkout on the target computer.
• 5. Involve the customer. It is important to involve the customer in a formal way so that he has
committed himself at earlier points before final delivery. There are three points following requirements
definition where the insight, judgment, and commitment of the customer can bolster the development
effort. These include a "preliminary software review" following the preliminary program design step, a
sequence of "critical software design reviews" during program design, and a "final software acceptance
review".
THE WATERFALL MODEL: in practice
• Some software projects still practice the conventional software
management approach.
• Projects destined for trouble frequently exhibit the following
symptoms:
1. Protracted integration and late design breakage.
2. Late risk resolution.
3. Requirements-driven functional decomposition.
4. Adversarial (conflict or opposition) stakeholder relationships.
5. Focus on documents and review meetings.
Protracted Integration and Late Design Breakage
• For a typical development project that used a waterfall model management process, the following sequence was common:
• Early success via paper designs and thorough (often too thorough) briefings.
• Commitment to code late in the life cycle.
• Integration nightmares (unpleasant experiences) due to unforeseen implementation issues and interface ambiguities.
• Heavy budget and schedule pressure to get the system working.
• Late shoehorning of nonoptimal fixes, with no time for redesign.
• A very fragile, unmaintainable product delivered late.
• In the conventional model, the entire system was designed on paper, then implemented all at once, then integrated. Table 1-1 provides a typical profile of cost expenditures across the spectrum of software activities.
Late risk resolution
• A serious issue associated with the waterfall life cycle was the lack of early risk resolution. Figure 1-3 illustrates a typical risk profile for
conventional waterfall model projects. It includes four distinct periods
of risk exposure, where risk is defined as the probability of missing a
cost, schedule, feature, or quality goal. Early in the life cycle, as the
requirements were being specified, the actual risk exposure was
highly unpredictable.
Requirements-Driven Functional
Decomposition
• This approach depends on specifying requirements completely and
unambiguously before other development activities begin. It naively treats all
requirements as equally important, and depends on those requirements
remaining constant over the software development life cycle. These conditions
rarely occur in the real world. Specification of requirements is a difficult and
important part of the software development process.
• Another property of the conventional approach is that the requirements were
typically specified in a functional manner. Built into the classic waterfall process
was the fundamental assumption that the software itself was decomposed into
functions; requirements were then allocated to the resulting components. This
decomposition was often very different from a decomposition based on object-oriented design and the use of existing components. Figure 1-4 illustrates the
result of requirements-driven approaches: a software structure that is organized
around the requirements specification structure.
Adversarial Stakeholder Relationships:
• The following sequence of events was typical for most contractual
software efforts:
• 1. The contractor prepared a draft contract-deliverable document
that captured an intermediate artifact and delivered it to the
customer for approval.
• 2. The customer was expected to provide comments (typically within
15 to 30 days).
• 3. The contractor incorporated these comments and submitted
(typically within 15 to 30 days) a final version for approval.
Focus on Documents and Review Meetings:
• The conventional process focused on producing various documents that
attempted to describe the software product, with insufficient focus on
producing tangible increments of the products themselves. Contractors
were driven to produce literally tons of paper to meet milestones and
demonstrate progress to stakeholders, rather than spend their energy on
tasks that would reduce risk and produce quality software. Typically,
presenters and the audience reviewed the simple things that they
understood rather than the complex and important issues. Most design
reviews therefore resulted in low engineering value and high cost in terms
of the effort and schedule involved in their preparation and conduct. They
presented merely a facade of progress.
• Table 1-2 summarizes the results of a typical design review.
CONVENTIONAL SOFTWARE MANAGEMENT PERFORMANCE
• Barry Boehm's "Industrial Software Metrics Top 10 List" is a good, objective
characterization of the state of software development.
1. Finding and fixing a software problem after delivery costs 100 times more than finding
and fixing the problem in early design phases.
2. You can compress software development schedules 25% of nominal, but no more.
3. For every $1 you spend on development, you will spend $2 on maintenance.
4. Software development and maintenance costs are primarily a function of the number
of source lines of code.
5. Variations among people account for the biggest differences in software productivity.
6. The overall ratio of software to hardware costs is still growing. In 1955 it was 15:85; in
1985, 85:15.
7. Only about 15% of software development effort is devoted to programming.
8. Software systems and products typically cost 3 times as much per SLOC as individual
software programs. Software-system products (i.e., system of systems) cost 9 times as
much.
9. Walkthroughs catch 60% of the errors.
10. 80% of the contribution comes from 20% of the contributors.
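
As a rough illustration of how two of these rules of thumb combine, the following Python sketch applies metric 1 (the roughly 100:1 post-delivery fix cost) and metric 3 ($2 of maintenance per $1 of development) to a purely hypothetical project budget; the dollar figures are assumptions for illustration, not Boehm's data.

```python
# Illustrative arithmetic only: the budget figures below are hypothetical;
# only the 100:1 and 1:2 ratios come from Boehm's list.

development_cost = 1_000_000                    # assumed development spend ($)
maintenance_cost = 2 * development_cost         # metric 3: $2 maintained per $1 developed
total_lifecycle_cost = development_cost + maintenance_cost

early_fix_cost = 500                            # assumed cost to fix a defect in early design ($)
post_delivery_fix_cost = 100 * early_fix_cost   # metric 1: ~100x more after delivery

print(f"Total life-cycle cost: ${total_lifecycle_cost:,}")
print(f"Defect fix cost: ${early_fix_cost:,} early vs ${post_delivery_fix_cost:,} after delivery")
```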
Evolution of Software Economics
SOFTWARE ECONOMICS
• Most software cost models can be abstracted into a function of five
basic parameters: size, process, personnel, environment, and
required quality.
• The relationships among these parameters and the estimated cost can be written as follows:

Effort = (Personnel)(Environment)(Quality)(Size^Process)
• One important aspect of software economics (as represented within today's software cost models) is that the relationship between effort and size exhibits a diseconomy of scale. The diseconomy of scale of software development is a result of the process exponent being greater than 1.0. Contrary to most manufacturing processes, the more software you build, the more expensive it is per unit item.
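
To make the diseconomy of scale concrete, here is a minimal Python sketch of the cost-model relationship above; the exponent and multiplier values are illustrative assumptions, not calibrated model parameters.

```python
# Toy instantiation of Effort = (Personnel)(Environment)(Quality)(Size^Process).
# All coefficient values are assumptions for illustration, not calibrated data.

def estimated_effort(size_ksloc: float,
                     process_exponent: float = 1.2,   # > 1.0 => diseconomy of scale
                     personnel: float = 1.0,
                     environment: float = 1.0,
                     quality: float = 1.0) -> float:
    """Effort in notional person-months."""
    return personnel * environment * quality * (size_ksloc ** process_exponent)

# Unit cost grows with size whenever the process exponent exceeds 1.0:
for size in (10, 100, 1000):
    effort = estimated_effort(size)
    print(f"{size:>5} KSLOC -> effort {effort:10.1f}, effort per KSLOC {effort / size:6.2f}")
```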
• Figure 2-1 shows three generations of basic technology advancement in tools, components, and processes. The
required levels of quality and personnel are assumed to be constant. The ordinate of the graph refers to
software unit costs (pick your favorite: per SLOC, per function point, per component) realized by an
organization.
• The three generations of software development are defined as follows:
• 1) Conventional: 1960s and 1970s, craftsmanship. Organizations used custom tools, custom processes, and virtually all custom components built in primitive languages. Project performance was highly predictable in that cost, schedule, and quality objectives were almost always underachieved.
• 2) Transition: 1980s and 1990s, software engineering. Organizations used more-repeatable processes and off-the-shelf tools, and mostly (>70%) custom components built in higher level languages. Some of the components (<30%) were available as commercial products, including the operating system, database management system, networking, and graphical user interface.
• 3) Modern practices: 2000 and later, software production. This book's philosophy is rooted in the use of managed and measured processes, integrated automation environments, and mostly (70%) off-the-shelf components. Perhaps as few as 30% of the components need to be custom built.
• Technologies for environment automation, size reduction, and process improvement are not independent of
one another. In each new era, the key is complementary growth in all technologies. For example, the process
advances could not be used successfully without new component technologies and increased tool automation.
• Organizations are achieving better economies of scale in successive technology eras, with very large projects (systems of systems), long-lived products, and lines of business comprising multiple similar projects.
Improving Software Economics
Five basic parameters of the software cost model are:
1. Reducing the size or complexity of what needs to be developed.
2. Improving the development process.
3. Using more-skilled personnel and better teams (not necessarily the same thing).
4. Using better environments (tools to automate the process).
5. Trading off or backing off on quality thresholds.

These parameters are given in priority order for most software domains. Table 3-1 lists some of the technology developments, process improvement efforts, and management approaches targeted at improving the economics of software development and integration.
REDUCING SOFTWARE PRODUCT SIZE
• The most significant way to improve affordability and return on investment (ROI)
is usually to produce a product that achieves the design goals with the minimum
amount of human-generated source material. Component-based development is
introduced as the general term for reducing the "source" language size to achieve
a software solution.
• Reuse, object-oriented technology, automatic code production, and higher order
programming languages are all focused on achieving a given system with fewer
lines of human-specified source directives (statements).
• Size reduction is the primary motivation behind improvements in higher order languages (such as C++, Ada 95, Java, Visual Basic), automatic code generators (CASE tools, visual modeling tools, GUI builders), reuse of commercial components (operating systems, windowing environments, database management systems, middleware, networks), and object-oriented technologies (Unified Modeling Language, visual modeling tools, architecture frameworks).
• The reduction is defined in terms of human-generated source material. In
general, when size-reducing technologies are used, they reduce the number of
human-generated source lines.
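
The size argument can be made concrete with function points. The following Python sketch uses commonly cited approximate expansion ratios (SLOC per function point, after Capers Jones); the specific numbers are illustrative assumptions, not figures from this text.

```python
# Rough illustration of "fewer human-generated source lines" across languages.
# Expansion ratios are approximate, commonly cited industry figures (after
# Capers Jones); treat them as assumptions, not measurements.

SLOC_PER_FUNCTION_POINT = {
    "Assembly": 320,
    "C": 128,
    "C++": 53,
    "Java": 53,
    "Visual Basic": 32,
}

def estimated_sloc(function_points: int, language: str) -> int:
    """Human-generated SLOC needed to deliver the same functionality."""
    return function_points * SLOC_PER_FUNCTION_POINT[language]

for lang in SLOC_PER_FUNCTION_POINT:
    print(f"{lang:>12}: {estimated_sloc(1000, lang):,} SLOC for a 1,000-function-point system")
```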
IMPROVING SOFTWARE PROCESSES
• Process is an overloaded term. Three distinct process perspectives are.
• Metaprocess: an organization's policies, procedures, and practices for pursuing a software-intensive line of
business. The focus of this process is on organizational economics, long-term strategies, and software ROI.
• Macroprocess: a project's policies, procedures, and practices for producing a complete software product
within certain cost, schedule, and quality constraints. The focus of the macro process is on creating an
adequate instance of the Meta process for a specific set of constraints.
Microprocess: a project team's policies, procedures, and practices for achieving an artifact of the software
process. The focus of the micro process is on achieving an intermediate product baseline with adequate
quality and adequate functionality as economically and rapidly as practical.
• Although these three levels of process overlap somewhat, they have different objectives, audiences,
metrics, concerns, and time scales as shown in Table 3-4.
• In a perfect software engineering world with an immaculate problem description, an obvious solution
space, a development team of experienced geniuses, adequate resources, and stakeholders with common
goals, we could execute a software development process in one iteration with almost no scrap and rework.
Because we work in an imperfect world, however, we need to manage engineering activities so that scrap
and rework profiles do not have an impact on the win conditions of any stakeholder. This should be the
underlying premise for most process improvements.
IMPROVING TEAM EFFECTIVENESS
• Boehm's five staffing principles are:
1. The principle of top talent: Use better and fewer people.
2. The principle of job matching: Fit the tasks to the skills and motivation of the people available.
3. The principle of career progression: An organization does best in the long run by helping its people to self-actualize.
4. The principle of team balance: Select people who will complement and harmonize with one another.
5. The principle of phase-out: Keeping a misfit on the team doesn't benefit anyone.
• The following are some crucial attributes of successful software project managers that deserve much more attention: hiring skills, customer-interface skills, decision-making skills, team-building skills, and selling skills.
IMPROVING AUTOMATION THROUGH SOFTWARE
ENVIRONMENTS
• There are significant economic improvements associated with tools and environments. It is common for tool vendors to make relatively accurate individual assessments of life-cycle activities to support claims about the potential economic impact of their tools. For example, it is easy to find statements such as the following from companies marketing a particular class of tool:
1. Requirements analysis and evolution activities consume 40% of life-cycle costs.
2. Software design activities have an impact on more than 50% of the resources.
3. Coding and unit testing activities consume about 50% of software development
effort and schedule.
4. Test activities can consume as much as 50% of a project's resources.
5. Configuration control and change management are critical activities that can
consume as much as 25% of resources on a large-scale project.
6. Documentation activities can consume more than 30% of project engineering
resources.
7. Project management, business administration, and progress assessment can
consume as much as 30% of project budgets.
ACHIEVING REQUIRED QUALITY
• Software best practices are derived from the development process and technologies. Table 3-5
summarizes some dimensions of quality improvement.
• Key practices that improve overall software quality include the following:
• Focusing on driving requirements and critical use cases early in the life cycle, focusing on
requirements completeness and traceability late in the life cycle, and focusing throughout the life
cycle on a balance between requirements evolution, design evolution, and plan evolution
• Using metrics and indicators to measure the progress and quality of an architecture as it evolves
from a high-level prototype into a fully compliant product
• Providing integrated life-cycle environments that support early and continuous configuration
control, change management, rigorous design methods, document automation, and regression
test automation
• Using visual modeling and higher level languages that support architectural control, abstraction,
reliable programming, reuse, and self-documentation
• Early and continuous insight into performance issues through demonstration-based evaluations
Old & new way

Over the past two decades there has been a significant reengineering of the software development process. Many of the conventional management and technical practices have been replaced by new approaches that combine recurring themes of successful project experience with advances in software engineering technology.

The old way is described in terms of 'THE PRINCIPLES OF CONVENTIONAL SOFTWARE ENGINEERING' and the new way in terms of 'THE PRINCIPLES OF MODERN SOFTWARE MANAGEMENT'.
THE PRINCIPLES OF CONVENTIONAL SOFTWARE ENGINEERING
There are many descriptions of engineering software "the old way." After years of software development experience, the software industry has learned many lessons and formulated many principles. The benchmark chosen is a brief article titled "30 Principles of Software Engineering" [Davis, 1994, 1995].

1. Make quality #1. Quality must be quantified and mechanisms put into place to motivate its achievement.
2. High-quality software is possible. Techniques that have been demonstrated to increase quality include involving the customer, prototyping, simplifying design, conducting inspections, and hiring the best people.
3. Give products to customers early. No matter how hard you try to learn users' needs during the requirements phase, the most effective way to determine real needs is to give users a product and let them play with it.
4. Determine the problem before writing the requirements. When faced with what they believe is a problem, most engineers rush to offer a solution. Before you try to solve a problem, be sure to explore all the alternatives and don't be blinded by the obvious solution.
5. Evaluate design alternatives. After the requirements are agreed upon, you must examine a variety of architectures and algorithms. You certainly do not want to use an architecture simply because it was used in the requirements specification.
6. Use an appropriate process model. Each project must select a process that makes the most sense for that project on the basis of corporate culture, willingness to take risks, application area, volatility of requirements, and the extent to which requirements are well understood.
• 7. Use different languages for different phases. Our industry's eternal thirst for simple solutions to complex problems has driven many to declare that the best development method is one that uses the same notation throughout the life cycle.
• 8. Minimize intellectual distance. To minimize intellectual distance, the software's structure should be as close as possible to the real-world structure.
• 9. Put techniques before tools. An undisciplined software engineer with a tool becomes a dangerous, undisciplined software engineer.
• 10. Get it right before you make it faster. It is far easier to make a working program run faster than it is to make a fast program work. Don't worry about optimization during initial coding.
• 11. Inspect code. Inspecting the detailed design and code is a much better way to find errors than testing.
• 12. Good management is more important than good technology. Good management motivates people to do their best, but there are no universal "right" styles of management.
• 13. People are the key to success. Highly skilled people with appropriate experience, talent, and training are key.
• 14. Follow with care. Just because everybody is doing something does not make it right for you. It may be right, but you must carefully assess its applicability to your environment.
• 15. Take responsibility. When a bridge collapses we ask, "What did the engineers do wrong?" Even when software fails, we rarely ask this. The fact is that in any engineering discipline, the best methods can be used to produce awful designs, and the most antiquated methods to produce elegant designs.
• 16. Understand the customer's priorities. It is possible the customer would tolerate 90% of the functionality delivered late if they could have 10% of it on time.
• 17. The more they see, the more they need. The more functionality (or performance) you provide a user, the more functionality (or performance) the user wants.
• 18. Plan to throw one away. One of the most important critical success factors is
whether or not a product is entirely new. Such brand-new applications, architectures,
interfaces, or algorithms rarely work the first time.
• 19. Design for change. The architectures, components, and specification techniques you
use must accommodate change.
• 20. Design without documentation is not design. I have often heard software engineers say, "I have finished the design. All that is left is the documentation."
• 21. Use tools, but be realistic. Software tools make their users more efficient.
• 22. Avoid tricks. Many programmers love to create programs with tricky constructs that perform a function correctly but in an obscure way. Show the world how smart you are by avoiding tricky code.
• 23. Encapsulate. Information-hiding is a simple, proven concept that results in software that is easier to test and much easier to maintain.
• 24. Use coupling and cohesion. Coupling and cohesion are the best ways to measure software's inherent maintainability and adaptability.
• 25. Use the McCabe complexity measure. Although there are many metrics available to report the inherent complexity of software, none is as intuitive and easy to use as Tom McCabe's (a minimal sketch of the measure follows this list).
• 26. Don't test your own software. Software developers should never be the primary testers of their own software.
• 27. Analyze causes for errors. It is far more cost-effective to reduce the effect of an error by preventing it than it is to find and fix it. One way to do this is to analyze the causes of errors as they are detected.
• 28. Realize that software's entropy increases. Any software system that undergoes continuous change will grow in complexity and become more and more disorganized.
• 29. People and time are not interchangeable. Measuring a project solely by person-months makes little sense.
• 30. Expect excellence. Your employees will do much better if you have high expectations for them.
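
As noted under principle 25, here is a minimal Python sketch of McCabe's cyclomatic complexity, computed with the standard formula V(G) = E - N + 2P over a control-flow graph; the example graph is a made-up illustration, not from the text.

```python
# Cyclomatic complexity V(G) = E - N + 2P for a control-flow graph.
# The example graph below is hypothetical, purely to demonstrate the formula.

def cyclomatic_complexity(edges: list[tuple[str, str]],
                          nodes: set[str],
                          components: int = 1) -> int:
    """V(G) = E - N + 2P; for a single program or function, P is 1."""
    return len(edges) - len(nodes) + 2 * components

# Control-flow graph of a function containing one if/else branch:
nodes = {"entry", "cond", "then", "else", "exit"}
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]

print(cyclomatic_complexity(edges, nodes))  # 5 - 5 + 2 = 2 (one decision point)
```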
THE PRINCIPLES OF MODERN SOFTWARE
MANAGEMENT
• Top 10 principles of modern software management are:
• 1. Base the process on an architecture-first approach. This requires that a demonstrable balance
be achieved among the driving requirements, the architecturally significant design decisions, and
the life-cycle plans before the resources are committed for full-scale development.
• 2. Establish an iterative life-cycle process that confronts risk early. With today's sophisticated
software systems, it is not possible to define the entire problem, design the entire solution, build
the software, and then test the end product in sequence. Instead, an iterative process that refines
the problem understanding, an effective solution, and an effective plan over several iterations
encourages a balanced treatment of all stakeholder objectives. Major risks must be addressed
early to increase predictability and avoid expensive downstream scrap and rework.
• 3. Transition design methods to emphasize component-based development. Moving from a line-of-code mentality to a component-based mentality is necessary to reduce the amount of human-generated source code and custom development.
• 4. Establish a change management environment. The dynamics of iterative development, including concurrent workflows by different teams working on shared artifacts, necessitate objectively controlled baselines.
• 5. Enhance change freedom through tools that support round-trip engineering. Round-trip engineering is the environment support necessary to automate and synchronize engineering information in different formats (such as requirements specifications, design models, source code, executable code, test cases).
• 6. Capture design artifacts in rigorous, model-based notation. A model-based approach (such as UML) supports the evolution of semantically rich graphical and textual design notations.
• 7. Instrument the process for objective quality control and progress assessment.
Life-cycle assessment of the progress and the quality of all intermediate products
must be integrated into the process.
• 8. Use a demonstration-based approach to assess intermediate artifacts.
• 9. Plan intermediate releases in groups of usage scenarios with evolving levels
of detail. It is essential that the software management process drive toward early
and continuous demonstrations within the operational context of the system,
namely its use cases.
• 10. Establish a configurable process that is economically scalable. No single
process is suitable for all software developments.
