Software Project Management Unit - 2 Notes
A software product development process usually starts when a request for the product
is received from the customer.
• Starting from the inception stage:
o A product undergoes a series of transformations through a few identifiable stages
o Until it is fully developed and released to the customer.
• After release:
o the product is used by the customer, and during this time it needs to be
maintained to fix bugs and enhance functionality. This stage is
called the Maintenance stage.
• This set of identifiable stages through which a product transits from inception
to retirement forms the life cycle of the product.
• A life cycle model (also called a process model) is a graphical or textual
representation of a product's life cycle.
The inter-related activities needed to create a final product can be organized in different
ways, and each such organization is called a process model.
A software process model is a simplified representation of a software process. Each
model represents a process from a specific perspective. These generic models are abstractions
of the process that can be used to explain different approaches to the software development.
In a plan-driven process, all the activities are planned first, and progress is
measured against the plan. In an agile process, planning is incremental, and it is easier to
change the process to reflect requirement changes.
The waterfall model organizes development into the following sequential phases:
• Requirements
• Design
• Implementation
• Testing
• Maintenance
In principle, the result of each phase is one or more documents that should be approved and the
next phase shouldn’t be started until the previous phase has completely been finished.
In practice, however, these phases overlap and feed information to each other. For example,
during design, problems with requirements can be identified, and during coding, some of the
design problems can be found, etc.
The software process is therefore not a simple linear sequence but involves feedback from one phase to
another. So, documents produced in each phase may have to be modified to reflect the
changes made.
The spiral model is similar to the incremental model, with more emphasis placed on risk
analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and Evaluation.
A software project repeatedly passes through these phases in iterations (called Spirals in this
model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is
assessed. Each subsequent spiral builds on the baseline spiral. It is one of the software
development models, like Waterfall, Agile, and the V-Model.
Planning Phase: Requirements are gathered during the planning phase and captured in documents
such as the Business Requirement Specification (BRS) and the System Requirement
Specification (SRS).
Risk Analysis: In the risk analysis phase, a process is undertaken to identify risk and alternate
solutions. A prototype is produced at the end of the risk analysis phase. If any risk is found
during the risk analysis then alternate solutions are suggested and implemented.
Engineering Phase: In this phase the software is developed and then tested at the end of the
phase.
Evaluation phase: This phase allows the customer to evaluate the output of the project to date
before the project continues to the next spiral.
Advantages of Spiral model:
• High amount of risk analysis hence, avoidance of Risk is enhanced.
• Good for large and mission-critical projects.
• Strong approval and documentation control.
• Additional Functionality can be added at a later date.
• Software is produced early in the software life cycle.
Disadvantages of Spiral model:
• Can be a costly model to use.
• Risk analysis requires highly specific expertise.
• Project’s success is highly dependent on the risk analysis phase.
• Doesn’t work well for smaller projects.
Incremental software development is better than a waterfall approach for most business, e-
commerce, and personal systems.
By developing the software incrementally, it is cheaper and easier to make changes in the
software as it is being developed.
Compared to the waterfall model, incremental development has three important benefits:
1. The cost of accommodating changing customer requirements is reduced. The amount of
analysis and documentation that has to be redone is much less than that required with the
waterfall model.
2. It’s easier to get customer feedback on the work done during development than when the
system is fully developed, tested, and delivered.
3. More rapid delivery of useful software is possible even if all the functionality hasn’t been
included. Customers are able to use and gain value from the software earlier than it’s
possible with the waterfall model.
The Dynamic Systems Development Method (DSDM) is an agile software development
approach that provides a framework for building and maintaining systems. The DSDM philosophy is
borrowed from a modified version of the Pareto principle: 80% of an application can often be delivered
in 20% of the time it would take to deliver the entire (100%) application.
DSDM is an iterative software process in which each iteration follows the 80% rule: just enough
work is done in each increment to facilitate movement to the next increment. The remaining
detail can be completed later, once more business requirements are known or changes are requested and
accommodated.
The DSDM Consortium (www.dsdm.org) is a worldwide group of member companies that collectively
take on the role of "keeper" of the method. The consortium has defined an agile process model, known
as the DSDM life cycle, which defines three distinct iterative cycles, preceded by two additional life cycle
activities:
1. Feasibility Study:
It establishes the basic business requirements and constraints associated with the application to be
built and then assesses whether or not the application is a viable candidate for the DSDM
process.
2. Business Study:
It establishes the functional and information requirements that will allow the application to deliver
business value; additionally, it defines the basic application architecture and identifies the maintainability
requirements for the application.
3. Functional Model Iteration:
It produces a set of incremental prototypes that demonstrate functionality for the customer.
(Note: all DSDM prototypes are intended to evolve into the deliverable application.) The intent
during this iterative cycle is to gather additional requirements by eliciting feedback from users as
they exercise the prototype.
4. Design and Build Iteration:
It revisits prototypes built during functional model iteration to ensure that each has
been engineered in a manner that will enable it to provide operational business value for end
users. In some cases, functional model iteration and design and build iteration occur concurrently.
5. Implementation:
It places the latest software increment (an "operationalized" prototype) into the operational
environment. It should be noted that:
(a) the increment may not be 100% complete, or
(b) changes may be requested as the increment is put into place. In either case,
DSDM development work continues by returning to the functional model iteration activity.
The diagram below describes the DSDM life cycle:
2.5.2 Extreme Programming
One of the foremost Agile methodologies is called Extreme Programming (XP), which
involves a high degree of participation between two parties in the software exchange: customer
and developers. The customer drives further development by emphasizing the most useful features of
a given software product through continuous feedback. The developers, in turn, base each successive set of
software upgrades on this feedback while continuing to test new features every few weeks.
XP has its share of pros and cons. On the upside, this Agile methodology involves a high
level of collaboration and a minimum of up-front documentation. It’s an efficient and persistent
delivery model. However, the methodology also requires a great level of discipline, as well as
plenty of involvement from people beyond the world of information technology. Furthermore, to
get the best results, advanced XP proficiency is vital on the part of every team member.
Extreme programming (XP) is a software development methodology which is intended to
improve software quality and responsiveness to changing customer requirements. As a type of
agile software development, it advocates frequent "releases" in short development cycles, which
is intended to improve productivity and introduce checkpoints at which new customer
requirements can be adopted.
Other elements of extreme programming include: programming in pairs or doing extensive code
review, unit testing of all code, avoiding programming of features until they are actually needed,
a flat management structure, code simplicity and clarity, expecting changes in the customer's
requirements as time passes and the problem is better understood, and frequent communication
with the customer and among programmers. The methodology takes its name from the idea that
the beneficial elements of traditional software engineering practices are taken to "extreme"
levels. As an example, code reviews are considered a beneficial practice; taken to the extreme,
code can be reviewed continuously, i.e. the practice of pair programming.
Activities
XP describes four basic activities that are performed within the software development process:
coding, testing, listening, and designing. Each of those activities is described below.
Coding
The advocates of XP argue that the only truly important product of the system development
process is code – software instructions that a computer can interpret. Without code, there is no
working product.
Coding can also be used to figure out the most suitable solution. Coding can also help to
communicate thoughts about programming problems. A programmer dealing with a complex
programming problem, or finding it hard to explain the solution to fellow programmers, might
code it in a simplified manner and use the code to demonstrate what he or she means. Code, say
the proponents of this position, is always clear and concise and cannot be interpreted in more
than one way. Other programmers can give feedback on this code by also coding their thoughts.
Testing
Extreme programming's approach is that if a little testing can eliminate a few flaws, a lot of
testing can eliminate many more flaws.
• Unit tests determine whether a given feature works as intended. Programmers write as many
automated tests as they can think of that might "break" the code; if all tests run successfully,
then the coding is complete. Every piece of code that is written is tested before moving on to
the next feature.
• Acceptance tests verify that the requirements as understood by the programmers satisfy the
customer's actual requirements.
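The test-first style described above can be sketched with Python's standard unittest module; the `add` function and its tests are purely illustrative assumptions, not taken from these notes:

```python
import unittest

# Hypothetical feature under test; in XP the tests below would be
# written first, and this implementation added to make them pass.
def add(a, b):
    return a + b

class AddTests(unittest.TestCase):
    # Automated tests that try to "break" the code; when all of
    # them pass, the feature is considered complete.
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-2, -3), -5)

# Run the suite programmatically; a real project would use a test runner.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AddTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Only when every such test passes would the programmer move on to the next feature.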
System-wide integration testing was initially encouraged as a daily end-of-day activity, for
early detection of incompatible interfaces, so that separate sections could be reconnected before they
diverged widely from coherent functionality. However, system-wide integration testing has since been
reduced to weekly or less often, depending on the stability of the overall interfaces in the system.
Listening
Programmers must listen to what the customers need the system to do, what "business logic" is
needed. They must understand these needs well enough to give the customer feedback about the
technical aspects of how the problem might be solved, or cannot be solved. Communication
between the customer and programmer is further addressed in the planning game.
Designing
From the point of view of simplicity, of course one could say that system development doesn't need
more than coding, testing and listening. If those activities are performed well, the result should
always be a system that works. In practice, this will not work. One can come a long way without
designing but at a given time one will get stuck. The system becomes too complex and the
dependencies within the system cease to be clear. One can avoid this by creating a design structure
that organizes the logic in the system. Good design will avoid lots of dependencies within a system;
this means that changing one part of the system will not affect other parts of the system.
Advantages
• Robustness
• Resilience
• Cost savings
• Lesser risks
Disadvantages
• It assumes constant involvement of customers
• Code-centered approach rather than design-centered approach
• Lack of proper documentation
Lines of code (LOC) as a size measure has several drawbacks:
• No precise definition
• Difficult to estimate at start of a project
• Only a code measure
• Programmer-dependent
• Does not consider code complexity
Common effort-estimation techniques include:
o Analogy, where a similar, completed project is identified and its actual effort
is used as the basis of the estimate.
o Parkinson, where the staff effort available to do a project becomes the estimate.
o Price to win, where the estimate is a figure that seems sufficiently low to win
a contract.
o Top-down, where an overall estimate for the whole project is broken down
into the effort required for component tasks.
o Bottom-up, where component tasks are identified and sized and these
individual estimates are aggregated.
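The contrast between the last two approaches can be sketched briefly; all task names and figures below are illustrative assumptions:

```python
# Bottom-up: identify component tasks, estimate each, and aggregate.
task_estimates = {            # person-days (illustrative figures)
    "requirements": 10,
    "design": 15,
    "coding": 30,
    "testing": 20,
}
bottom_up_total = sum(task_estimates.values())

# Top-down: start from an overall project figure and apportion it
# to component tasks using assumed percentage splits.
overall_estimate = 75          # person-days
splits = {"requirements": 0.13, "design": 0.20, "coding": 0.40, "testing": 0.27}
top_down = {task: overall_estimate * share for task, share in splits.items()}

print(bottom_up_total)   # → 75
```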
Several related function-based sizing measures extend the basic function point idea:
• Bang measure – Defines a function metric based on twelve primitive (simple) counts that affect
or show Bang, defined as "the measure of true function to be delivered as perceived by the user."
Bang measure may be helpful in evaluating a software unit's value in terms of how much useful
function it provides, although there is little evidence in the literature of such application.
The use of Bang measure could apply when re-engineering (either complete or piecewise) is
being considered, as discussed in Maintenance of Operational Systems—An Overview.
• Feature points – Adds changes to improve applicability to systems with significant internal
processing (e.g., operating systems, communications systems). This allows accounting for
functions not readily perceivable by the user, but essential for proper operation.
• Weighted Micro Function Points – One of the newer models (2009) which adjusts function
points using weights derived from program flow complexity, operand and operator vocabulary,
object usage, and algorithm.
Benefits
The use of function points in favor of lines of code seeks to address several additional issues:
• The risk of "inflation" of the created lines of code, which reduces the value of the
measurement system if developers are incentivized to be more productive. FP advocates
refer to this as measuring the size of the solution instead of the size of the problem.
• Lines of Code (LOC) measures reward low level languages because more lines of code are
needed to deliver a similar amount of functionality to a higher level language. C. Jones offers
a method of correcting this in his work.
• LOC measures are not useful during early project phases where estimating the number of
lines of code that will be delivered is challenging. However, Function Points can be derived
from requirements and therefore are useful in methods such as estimation by proxy.
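As an illustration of sizing from requirements rather than code, the sketch below computes an unadjusted function point count from the five standard IFPUG function types using average complexity weights; the counts themselves are made-up figures from a hypothetical requirements document:

```python
# Average complexity weights for the five IFPUG function types:
# external inputs (EI), external outputs (EO), external inquiries (EQ),
# internal logical files (ILF), and external interface files (EIF).
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

# Illustrative counts derived from a hypothetical requirements document.
counts = {"EI": 12, "EO": 8, "EQ": 5, "ILF": 4, "EIF": 2}

# Unadjusted function points: the weighted sum of the counts.
ufp = sum(counts[t] * WEIGHTS[t] for t in WEIGHTS)
print(ufp)  # → 162
```

A full function point analysis would further adjust this figure with a value adjustment factor based on general system characteristics.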
Putnam was the first to study the problem of what should be a proper staffing pattern for
software projects. He extended the classical work of Norden who had earlier investigated the
staffing pattern of general research and development type of projects. In order to appreciate the
staffing pattern desirable for software projects, we must understand both Norden’s and Putnam’s
results.
Norden’s Work
• Norden studied the staffing patterns of several R&D projects.
• He found the staffing patterns of R&D projects to be very different from the
manufacturing or sales type of work.
• The staffing pattern of R&D types of projects changes dynamically over time for efficient
manpower utilization.
• He concluded that staffing pattern for any R&D project can be approximated by the
Rayleigh distribution curve.
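Norden's conclusion can be illustrated with a small sketch of the Rayleigh staffing curve, where K denotes the total project effort and td the time at which staffing peaks; the parameter values below are illustrative assumptions:

```python
import math

def rayleigh_staffing(t, K, td):
    """Instantaneous staffing level at time t for a project with
    total effort K and peak-staffing time td (Norden's Rayleigh curve)."""
    return (K / td ** 2) * t * math.exp(-t ** 2 / (2 * td ** 2))

# Staffing builds up from zero, peaks at t = td, then tails off.
K, td = 100.0, 10.0          # e.g. 100 person-months, peak at month 10
curve = [rayleigh_staffing(t, K, td) for t in range(0, 31, 5)]
```

Integrating the curve over the whole project duration recovers the total effort K, which is why the peak time and total effort together characterize the staffing profile.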
Putnam’s Work