Unit 2 Notes
Project planning: Project planning involves estimating several characteristics of a project and then
planning the project activities based on these estimates.
Project monitoring and control: The focus of project monitoring and control activities is to ensure
that the software development proceeds as per plan.
Project Planning
Project planning is undertaken and completed before any development activity starts.
During project planning, the project manager performs the following activities.
Estimation: The project attributes of cost, duration, and effort are estimated. The effectiveness of all
later planning activities such as scheduling and staffing depends on the accuracy with which these
three estimations have been made.
Scheduling: After all the necessary project parameters have been estimated, the schedules for
manpower and other resources are developed.
Staffing: Staff organization and staffing plans are made.
Risk management: This includes risk identification, analysis, and abatement planning.
Miscellaneous plans: This includes making several other plans such as the quality assurance plan,
the configuration management plan, etc.
Size is the most fundamental parameter based on which all other estimations and project plans are
made.
LOC is possibly the simplest among all metrics available to measure project size.
This metric measures the size of a project by counting the number of source instructions in the
developed program; comment lines and header lines are ignored while counting.
Determining the LOC count at the end of a project is very simple, whereas accurately estimating it
at the beginning of a project is very difficult.
At the start of a project, the LOC count can be estimated only by using some form of systematic
guesswork.
Systematic guessing typically involves the following.
The project manager divides the problem into modules, and each module into sub-modules and
so on, until the LOC of the leaf-level modules are small enough to be predicted.
By adding the estimates for all leaf level modules together, project managers arrive at the total
size estimation.
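The systematic guessing procedure above can be sketched as a recursive sum over a module hierarchy. The module breakdown and leaf-level LOC figures below are hypothetical, for illustration only.

```python
# Hedged sketch: hierarchical LOC estimation by summing leaf-level module guesses.
# The module breakdown and LOC figures are hypothetical.

def estimate_loc(module):
    """Recursively sum LOC estimates of leaf-level (sub-)modules."""
    if isinstance(module, int):          # leaf: LOC small enough to predict directly
        return module
    return sum(estimate_loc(sub) for sub in module.values())

# Hypothetical breakdown of a project into modules and sub-modules
project = {
    "user_interface": {"login_screen": 400, "report_screen": 600},
    "database":       {"schema": 300, "queries": 700},
    "business_logic": 1500,              # small enough to predict directly
}

total = estimate_loc(project)
print(total)   # total project size estimate in LOC
```

The project manager would repeat the subdivision until each leaf is small enough to predict with confidence; the sum is then the project size estimate.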
In spite of its conceptual simplicity, LOC metric has several shortcomings when used to
measure problem size.
Function Point (FP) Metric overcomes many of the shortcomings of the LOC metric.
In the function point metric, the size can easily be computed from the problem specification itself,
whereas in the LOC metric the size can accurately be determined only after the product has been
fully developed.
The conceptual idea behind the function point metric is the following.
The size of a software product is directly dependent on the number of different high-level
functions or features it supports.
A software product supporting many features would certainly have a larger size than a product
with fewer features.
The size of the program depends on the number of files and interfaces.
Interface is used for data transfer with other external systems
The size of a software product (in units of function points or FPs) is computed using different
characteristics of the product identified in its requirements specification.
Step 1: Compute the unadjusted function point (UFP) using a heuristic expression.
Step 2: Refine UFP to reflect the actual complexities of the different parameters used in UFP
computation.
Step 3: Compute FP by further refining UFP to account for the specific characteristics of the project
that can influence the entire development effort.
The unadjusted function points (UFP) is computed as the weighted sum of five characteristics
of a product:
UFP = (Number of inputs) W1 + (Number of outputs) W2 + (Number of inquiries) W3 +
(Number of files) W4 + (Number of interfaces) W5
The weights W1 to W5 associated with the five characteristics were determined empirically by Albrecht
through data gathered from many projects.
1. Number of inputs: each group of related data items input by the user is counted as one input.
2. Number of outputs: outputs include reports printed, screen outputs, error messages produced, etc.
3. Number of inquiries: an inquiry is a user command that requires a response from the system.
4. Number of files: each logical file, i.e., a group of logically related data, is counted.
5. Number of interfaces: the interfaces used for data transfer with other external systems are counted.
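Step 1 above can be sketched using Albrecht's average-complexity weights (inputs 4, outputs 5, inquiries 4, files 10, interfaces 7); the characteristic counts below are hypothetical.

```python
# Hedged sketch of Step 1 (UFP) using Albrecht's average-complexity weights.
# The characteristic counts are hypothetical.

WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}

def unadjusted_fp(counts):
    """UFP = weighted sum of the five product characteristics."""
    return sum(WEIGHTS[k] * counts.get(k, 0) for k in WEIGHTS)

counts = {"inputs": 20, "outputs": 15, "inquiries": 10, "files": 6, "interfaces": 2}
print(unadjusted_fp(counts))
```

Steps 2 and 3 would then scale each characteristic by its actual complexity and apply the project-level adjustment factors to refine UFP into FP.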
Expert Judgment
Delphi Cost Estimation
Expert Judgment
Expert judgement is a widely used size estimation technique. In this technique, an expert
makes an educated guess about the problem size after analysing the problem thoroughly.
Usually, the expert estimates the cost of the different components (i.e. modules or subsystems)
that would make up the system and then combines the estimates for the individual modules to
arrive at the overall estimate.
This technique suffers from several shortcomings. The outcome of the expert judgement
technique is subject to human errors and individual bias. Also, it is possible that an expert may
overlook some factors inadvertently.
Further, an expert making an estimate may not have relevant experience and knowledge of all
aspects of a project. For example, he may be conversant with the database and user interface
parts, but may not be very knowledgeable about the computer communication part.
Due to these factors, the size estimation arrived at by the judgement of a single expert may be
far from being accurate.
A more refined form of expert judgement is the estimation made by a group of experts.
Chances of errors arising out of issues such as individual oversight, lack of familiarity with a
particular aspect of a project, personal bias, and the desire to win contract through overly
optimistic estimates is minimised when the estimation is done by a group of experts. However,
the estimate made by a group of experts may still exhibit bias.
For example, on certain issues the entire group of experts may be biased due to reasons such as
those arising out of political or social considerations. Another important shortcoming of the
expert judgement technique is that the decision made by a group may be dominated by overly
assertive members.
Delphi cost estimation technique tries to overcome some of the shortcomings of the expert
judgement approach.
Delphi estimation is carried out by a team comprising a group of experts and a co-ordinator.
In this approach, the co-ordinator provides each estimator with a copy of the software
requirements specification (SRS) document and a form for recording his cost estimate.
Estimators complete their individual estimates anonymously and submit them to the co-
ordinator.
In their estimates, the estimators mention any unusual characteristic of the product which has
influenced their estimations.
The co-ordinator prepares the summary of the responses of all the estimators, and also includes
any unusual rationale noted by any of the estimators.
The prepared summary information is distributed to the estimators. Based on this summary, the
estimators re-estimate.
This process is iterated for several rounds. However, no discussions among the estimators is
allowed during the entire estimation process.
The purpose behind this restriction is that if any discussion is allowed among the estimators,
then many estimators may easily get influenced by the rationale of an estimator who may be
more experienced or senior.
After the completion of several iterations of estimations, the co-ordinator takes the
responsibility of compiling the results and preparing the final estimate.
The Delphi estimation, though it consumes more time and effort, overcomes an important
shortcoming of the expert judgement technique in that the results cannot unjustly be
influenced by overly assertive and senior members.
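The Delphi rounds above can be sketched as follows. The revision rule, where each estimator moves halfway toward the co-ordinator's summary (taken here as the median), is a simplifying assumption; real estimators revise based on judgement and the circulated rationale.

```python
# Hedged simulation of Delphi rounds: anonymous estimates, a circulated
# summary, and revision with no discussion among estimators.

import statistics

def delphi(initial_estimates, rounds=3):
    estimates = list(initial_estimates)
    for _ in range(rounds):
        summary = statistics.median(estimates)       # co-ordinator's summary
        # each estimator re-estimates in light of the summary (no discussion)
        estimates = [(e + summary) / 2 for e in estimates]
    return statistics.mean(estimates)                # final compiled estimate

# Hypothetical size estimates (in function points) from four experts
print(round(delphi([120, 150, 200, 90])))
```

Note how the anonymous rounds pull outliers toward the group consensus without any estimator being pressured by a senior member.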
Boehm provides different sets of expressions to predict the effort and development time from
the size estimation given in KLOC (Kilo Lines of Code). One person-month is the effort an
individual can typically put in a month.
The effort estimation is expressed in units of person-months (PM). The person-month unit
indicates the work done by one person working on the project for one month.
An effort estimation of 100 PM does not imply that 100 persons should work for one month,
nor does it imply that one person should be employed for 100 months.
The number of personnel working on the project usually increases and decreases over the duration
of the project, as shown in the person-month curve.
Example
Assume that the size of an organic type software product has been estimated to be 32,000 lines
of source code. Assume that the average salary of a software developer is Rs. 15,000 per month.
Determine the effort required to develop the software product, the nominal development time, and the
cost to develop the product.
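The example can be worked out with the standard basic COCOMO expressions for an organic product, Effort = 2.4 (KLOC)^1.05 PM and Tdev = 2.5 (Effort)^0.38 months:

```python
# Basic COCOMO, organic mode: Effort = 2.4*(KLOC)**1.05 person-months,
# Tdev = 2.5*(Effort)**0.38 months (Boehm's empirically derived constants).

kloc = 32            # estimated size: 32,000 lines of source code
salary = 15_000      # average developer salary, Rs. per month

effort = 2.4 * kloc ** 1.05          # person-months
tdev = 2.5 * effort ** 0.38          # nominal development time, months
cost = effort * salary               # Rs.

print(f"Effort = {effort:.0f} PM, Tdev = {tdev:.1f} months, Cost = Rs. {cost:,.0f}")
```

The estimate comes to roughly 91 PM, about 14 months of nominal development time, and a cost of about Rs. 13.7 lakh.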
The basic COCOMO model assumes that effort and development time are functions of the
product size alone.
The intermediate COCOMO model additionally considers 15 parameters (cost drivers) based on
various attributes of software development.
They are:
1. Product: the complexity of the product, reliability requirements of the product, etc.
2. Computer: the execution speed required, the storage space required, etc.
3. Personnel: the experience level of the personnel, programming capability, analysis
capability, etc.
4. Development environment: the development facilities available to the developers and the
sophistication of the software development tools (CASE) used.
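In intermediate COCOMO, the basic effort estimate is scaled by the product of the cost-driver multipliers, often called the effort adjustment factor (EAF). The multiplier values below are hypothetical ratings, not Boehm's published tables.

```python
# Hedged sketch of intermediate COCOMO: basic effort scaled by the product of
# cost-driver multipliers (EAF). Multiplier values are hypothetical.

import math

def intermediate_effort(kloc, multipliers, a=2.4, b=1.05):   # organic constants
    eaf = math.prod(multipliers.values())    # effort adjustment factor
    return a * kloc ** b * eaf               # person-months

drivers = {
    "product_reliability": 1.15,   # higher-than-nominal reliability required
    "personnel_capability": 0.90,  # experienced team
    "tool_sophistication": 0.95,   # good CASE tool support
}
print(round(intermediate_effort(32, drivers), 1))
```

Multipliers above 1 inflate the effort (demanding attributes), while multipliers below 1 reduce it (favourable attributes), so the same 32 KLOC product can cost noticeably more or less than the basic estimate.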
COCOMO 2
Present-day software projects are much larger in size and reuse existing software to develop new
products. This has given rise to component-based development. New life cycle models and
development paradigms are being deployed for web-based and component-based software.
Since most present-day products are highly interactive and support elaborate graphical user
interfaces, the COCOMO 2 model was introduced.
COCOMO 2 provides three increasingly detailed cost estimation models, which can be applied
at different stages as the project progresses. They are:
1. Application composition model: used to estimate the cost of prototyping, e.g., to resolve user
interface issues.
2. Early design model: supports estimation of cost at the architectural design stage.
3. Post-architecture model: provides cost estimation during the detailed design and coding stages.
The application composition model is based on counting the number of screens, reports, and
modules (components). Each of these components is considered to be an object (this has nothing to do
with the concept of objects in the object-oriented paradigm). These are used to compute the object
points of the application.
Effort is estimated in the application composition model as follows:
1. Estimate the number of screens, reports, and modules (components) from an analysis of the SRS
document.
2. Determine the complexity level of each screen and report, and rate these as either simple,
medium, or difficult. The complexity of a screen or a report is determined by the number of tables
and views it contains.
3. Use the weight values given in Tables 3.3 to 3.5. The weights have been designed to correspond
to the amount of effort required to implement an instance of an object at the assigned complexity
class.
4. Add the weighted instances of all the objects to obtain the total object-point count.
5. Estimate the percentage of reuse expected in the system. Note that reuse refers to the amount of
pre-developed software that will be used within the system. Then, evaluate the New Object-Point
count (NOP) as follows:
NOP = (object points) x (100 - %reuse) / 100
6. Determine the productivity (PROD) using Table 3.6. The productivity depends on the experience
of the developers as well as the maturity of the CASE environment used.
7. Finally, the estimated effort in person-months is computed as E = NOP/PROD.
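The steps above can be sketched end to end. The weights below follow the commonly cited object-point table (simple/medium/difficult screens 1/2/3, reports 2/5/8, 3GL modules 10) and a nominal productivity of 13 NOP per person-month; treat them as stand-ins for Tables 3.3 to 3.6.

```python
# Hedged sketch of the application composition model.
# Weights and productivity are stand-ins for Tables 3.3-3.6.

WEIGHT = {                      # object-point weight per (object, complexity)
    ("screen", "simple"): 1, ("screen", "medium"): 2, ("screen", "difficult"): 3,
    ("report", "simple"): 2, ("report", "medium"): 5, ("report", "difficult"): 8,
    ("module", None): 10,       # each 3GL module carries a fixed weight
}

def effort_pm(objects, reuse_pct, prod):
    points = sum(WEIGHT[o] * n for o, n in objects.items())   # total object points
    nop = points * (100 - reuse_pct) / 100                    # New Object-Point count
    return nop / prod                                          # E = NOP / PROD

objs = {("screen", "simple"): 4, ("report", "medium"): 2, ("module", None): 1}
print(round(effort_pm(objs, reuse_pct=20, prod=13), 2))
```

Here 4 simple screens, 2 medium reports, and 1 module give 24 object points; 20% reuse reduces this to 19.2 NOP, yielding roughly 1.5 PM at nominal productivity.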
Post-architecture model
Halstead's Software Science
Halstead's software science is a technique to measure the size, development effort, and
development cost of software products.
Halstead used a few primitive program parameters to develop the expressions for over all
program length, potential minimum volume, actual volume, language level, effort and
development time.
Example 3.3 Consider the expression a = &b; here a, b are the operands and =, & are the
operators.
Example 3.5 Consider the function call statement: func (a, b);. In this, func, ',' and ';' are considered
as operators and the variables a, b are treated as operands.
The length of a program as defined by Halstead, quantifies total usage of all operators and
operands in the program.
Thus, length N = N1 + N2. Halstead's definition of the length of the program as the total
number of operators and operands roughly agrees with the intuitive notion of the program
length as the total number of tokens used in the program.
The program vocabulary is the number of unique operators and operands used in the program.
Thus, program vocabulary h = h1 + h2.
Program Volume
The length of a program (i.e., the total number of operators and operands used in the code)
depends on the choice of the operators and operands used.
In other words, for the same programming problem, the length would depend on the
programming style.
This type of dependency would produce different measures of length for essentially the same
problem when different programming languages are used.
Thus, while expressing program size, the programming language used must be taken into
consideration: V = N log2 h
The potential minimum volume V* is defined as the volume of the most succinct program in
which a problem can be coded.
The minimum volume is obtained when the program can be expressed using a single source
code instruction, say a function call like foo();.
The volume is bound from below due to the fact that a program would have at least two
operators and no less than the requisite number of operands.
Operands are the input and output data items.
Thus, if an algorithm operates on input and output data d1, d2, ..., dn, the most succinct
program would be f(d1, d2, ..., dn); for which h1 = 2 and h2 = n.
Therefore, V* = (2 + h2) log2 (2 + h2).
The program level L is given by L = V*/V.
Length Estimation
Even though the length of a program can be found by calculating the total number of
operators and operands in a program, Halstead suggests a way to determine the length of a
program using only the number of unique operators and operands used in the program.
Using this method, the program parameters such as length, volume, cost, effort, etc., can be
determined even before the start of any programming activity.
Halstead's length estimation relies on the combinatorial result that for any given alphabet of
size K, there are exactly K^r different strings of length r. Thus, the estimated program length is
N = h1 log2 h1 + h2 log2 h2.
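The formulas above can be exercised on the single statement of Example 3.5. Tokenisation conventions (e.g., whether parentheses count as operators) vary between treatments, so the counts here follow that example's convention.

```python
# Hedged sketch: Halstead's metrics for Example 3.5's statement func(a, b);
# following that example's convention (operators: func, ',', ';').

import math

operators = ["func", ",", ";"]   # all operator usages: N1 = 3
operands = ["a", "b"]            # all operand usages:  N2 = 2

N1, N2 = len(operators), len(operands)
h1, h2 = len(set(operators)), len(set(operands))   # unique counts

N = N1 + N2                            # program length
h = h1 + h2                            # program vocabulary
V = N * math.log2(h)                   # program volume
V_star = (2 + h2) * math.log2(2 + h2)  # potential minimum volume
L = V_star / V                         # program level

# Halstead's length estimate from unique operator/operand counts alone:
N_hat = h1 * math.log2(h1) + h2 * math.log2(h2)

print(N, h, round(V, 2), round(V_star, 2), round(N_hat, 2))
```

For this one-liner the potential minimum volume equals 8 because the statement is already in its most succinct f(d1, d2); form.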
Norden’s Work
Putnam’s Work
Putnam's model expresses product size in terms of effort and schedule as L = Cte K^(1/3) td^(4/3),
where Cte is the effective technology constant, td is the time to develop the software, and K is the
effort needed to develop the software.
SCHEDULING
The scheduling problem, in essence, consists of deciding which tasks would be taken up when and by
whom.
In order to schedule the project activities, a software project manager needs to do the following:
1. Identify all the major activities that need to be carried out to complete the project.
2. Break down each activity into tasks.
3. Determine the dependency among different tasks.
4. Establish the estimates for the time durations necessary to complete the tasks.
5. Represent the information in the form of an activity network.
6. Determine task starting and ending dates from the information represented in the activity network.
7. Determine the critical path. A critical path is a chain of tasks that determines the duration of the
project.
8. Allocate resources to tasks.
Activity Networks
Example 3.9: Determine the Activity network representation for the MIS development project of
Example 3.7. Assume that the manager has determined the tasks to be represented from the work
breakdown structure of Figure 3.7, and has determined the durations and dependencies for each task as
shown in Table 3.7.
Answer: The activity network representation has been shown in Figure 3.8.
Example 3.10 Use the Activity network of Figure 3.8 to determine the ES and EF for every task for
the MIS problem of Example 3.7.
Answer: The activity network with computed ES and EF values has been shown in Figure 3.9.
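The ES/EF computation of Example 3.10 follows a simple forward pass over the activity network: the earliest start (ES) of a task is the maximum earliest finish (EF) of its predecessors, and EF = ES + duration. Since Figure 3.8 is not reproduced here, the task network below is hypothetical.

```python
# Hedged sketch of the forward pass over an activity network.
# The task list and durations are hypothetical, not the MIS example.

tasks = {                       # name: (duration in weeks, predecessors)
    "specification": (3, []),
    "design":        (4, ["specification"]),
    "code_part1":    (5, ["design"]),
    "code_part2":    (3, ["design"]),
    "integrate":     (2, ["code_part1", "code_part2"]),
}

es, ef = {}, {}
for name, (dur, preds) in tasks.items():   # insertion order is topological here
    es[name] = max((ef[p] for p in preds), default=0)   # ES = max EF of preds
    ef[name] = es[name] + dur                           # EF = ES + duration

print(ef["integrate"])   # project duration along the critical path
```

The chain specification, design, code_part1, integrate determines the finish time, i.e., it is the critical path of this network.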
Gantt Charts
A Gantt chart is a special type of bar chart where each bar represents an activity. The bars are drawn
along a time line. The length of each bar is proportional to the duration of time planned for the
corresponding activity.
The chief programmer team is probably the most efficient way of completing simple and small
projects since the chief programmer can quickly work out a satisfactory design and ask the
programmers to code different modules of his design solution.
Democratic team
The democratic team structure, as the name implies, does not enforce any formal team
hierarchy. Typically, a manager provides the administrative leadership. At different times,
different members of the group provide technical leadership.
The mixed control team organisation, as the name implies, draws upon the ideas from both the
democratic organisation and the chief-programmer organisation.
STAFFING
Software project managers usually take the responsibility of choosing their team. Therefore,
they need to identify good software developers for the success of the project. A common
misconception held by managers as evidenced in their staffing, planning and scheduling practices, is
the assumption that one software engineer is as productive as another.
Project risks: Project risks concern various forms of budgetary, schedule, personnel, resource,
and customer-related problems. An important project risk is schedule slippage. Since software
is intangible, it is very difficult to monitor and control a software project. It is very difficult to
control something which cannot be seen.
Technical risks: Technical risks concern potential design, implementation, interfacing, testing,
and maintenance problems. Technical risks also include ambiguous specification, incomplete
specification, changing specification, technical uncertainty, and technical obsolescence. Most
technical risks occur due to the development team's insufficient knowledge about the product.
Business risks: This type of risks includes the risk of building an excellent product that no one
wants, losing budgetary commitments, etc.
Risk Assessment
The objective of risk assessment is to rank the risks in terms of their damage causing potential. For
risk assessment, first each risk should be rated in two ways:
The likelihood of a risk becoming real (r).
The consequence of the problems associated with that risk (s).
Based on these two factors, the priority of each risk can be computed as p = r x s, where p is
the priority with which the risk must be handled.
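The two ratings can be combined into a priority, commonly the product of likelihood and consequence, and the risks ranked by it. The risk list and the 1-to-10 ratings below are hypothetical.

```python
# Hedged sketch of risk assessment: priority = likelihood (r) * consequence (s),
# risks ranked by this product. Ratings are hypothetical (scale of 1-10).

risks = {
    "schedule slippage":   (9, 6),   # (r, s)
    "ambiguous spec":      (5, 7),
    "key personnel leave": (3, 8),
}

ranked = sorted(risks, key=lambda k: risks[k][0] * risks[k][1], reverse=True)
print(ranked)   # most damaging-and-likely risk first
```

Containment plans would then be made for the risks at the top of this ranking first, as described under risk mitigation.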
Risk Mitigation
After all the identified risks of a project have been assessed, plans are made to contain the most
damaging and the most likely risks first. Different types of risks require different containment
procedures.
Avoid the risk: Risks can be avoided in several ways. Risks often arise due to project
constraints and can be avoided by suitably modifying the constraints. The different categories
of constraints that usually give rise to risks are:
Process-related risk: These risks arise due to aggressive work schedule, budget, and
resource utilisation.
Product-related risks: These risks arise due to commitment to challenging product
features (e.g. response time of one second, etc.), quality, reliability etc.
Technology-related risks: These risks arise due to commitment to use certain technology
(e.g., satellite communication).
The goal of the requirements analysis and specification phase is to clearly understand the
customer requirements and to systematically organize the requirements into a document called
the Software Requirements Specification (SRS) document.
The engineers who gather and analyze customer requirements and then write the requirements
specification document are known as system analysts in the software industry.
After understanding the precise user requirements, the analysts analyze the requirements to
weed out inconsistencies, anomalies and incompleteness.
They then proceed to write the software requirements specification (SRS) document. The SRS
document is the final outcome of the requirements analysis and specification phase.
Requirements analysis and specification phase mainly involves carrying out the following two
important activities:
o Requirements gathering and analysis
o Requirements specification
Interview:
Typically, there are many different categories of users of a software. Each category of users
typically requires a different set of features from the software.
It is important for the analyst to first identify the different categories of users and then
determine the requirements of each.
Scenario analysis
A task can have many scenarios of operation. The different scenarios of a task may take place
when the task is invoked under different situations.
For different types of scenarios of a task, the behaviour of the software can be different.
For example
The book is issued successfully to the member and the book issue slip is printed.
The book is reserved, and hence cannot be issued to the member.
The maximum number of books that can be issued to the member has already been reached, and
no more books can be issued.
Form analysis
Form analysis is an important and effective requirements gathering activity that is undertaken
by the analyst, when the project involves automating an existing manual system.
Requirements Analysis
The main purpose of the requirements analysis activity is to analyse the gathered requirements
to remove all ambiguities, incompleteness, and inconsistencies from the gathered customer
requirements and to obtain a clear understanding of the software to be developed.
During requirements analysis, the analyst needs to identify and resolve three main types of problems
in the requirements:
• Anomaly
• Inconsistency
• Incompleteness
Inconsistency: Two requirements are said to be inconsistent, if one of the requirements contradicts the
other.
Incompleteness: An incomplete set of requirements is one in which some requirements have been
overlooked. The lack of these features would be felt by the customer much later, possibly while using
the software. Often, incompleteness is caused by the inability of the customer to visualize the system
that is to be developed and to anticipate all the features that would be required. An experienced analyst
can detect most of these missing features and suggest them to the customer for his consideration and
approval for incorporation in the requirements.
After the analyst has gathered all the required information regarding the software to be
developed, and has removed all incompleteness, inconsistencies, and anomalies from the specification,
he starts to systematically organise the requirements in the form of an SRS document. The SRS
document usually contains all the user requirements in a structured, though informal, form.
Functional requirements
The functional requirements capture the functionalities required by the users from the system.
It is useful to consider a software as offering a set of functions {fi} to the user.
These functions can be considered similar to a mathematical function f : I → O, meaning that a
function transforms an element ii in the input domain (I) to a value oi in the output domain (O).
Each function fi of the system can be considered as reading certain input data ii, and then
transforming it to the corresponding output data oi.
The functional requirements of the system, should clearly describe each functionality that the
system would support along with the corresponding input and output data set.
The functional requirements thus form a crucial part of the SRS document.
Non-functional requirements
The non-functional requirements deal with characteristics of the system that cannot be expressed
as functions; they are non-negotiable obligations that must be supported by the software.
Goals of implementation
The 'goals of implementation' part of the SRS document offers some general suggestions
regarding the software to be developed.
These are not binding on the developers, and they may take these suggestions into account if
possible.
Functional Requirements
In order to document the functional requirements of a system, it is necessary to first learn to
identify the high-level functions of the systems by reading the informal documentation of the
gathered requirements.
The high-level functions would be split into smaller sub-requirements. Each high-level function
is an instance of use of the system (a use case) by the user in some way.
A high-level function is one using which the user can get some useful piece of work done.
Each high-level requirement characterizes a way of system usage (service invocation) by some user to
perform some meaningful piece of work.
For example, consider the withdraw-cash function of an automated teller machine (ATM). Since
during the course of execution of the withdraw-cash function the user has to input the type of
account and the amount to be withdrawn, it is very difficult to form a single high-level name that
accurately describes both input data items. However, the input data for the sub-functions can be
more accurately described.
The high-level functional requirements often need to be identified either from an informal problem
description document or from a conceptual understanding of the problem.
Once all the high-level functional requirements have been identified and the requirements problems
have been eliminated, these are documented. A function can be documented by identifying the state at
which the data is to be input to the system, its input data domain, the
output data domain, and the type of processing to be carried out on the input data to obtain the
output data.
R.1: Withdraw cash from ATM
Description: The withdraw-cash function first determines the type of account that the user has and
the account number from which the user wishes to withdraw cash. It checks the balance to
determine whether the requested amount is available in the account. If enough balance is available,
it outputs the required cash, otherwise it generates an error message.
R.1.1: Select withdraw amount option
Input: "Withdraw amount" option selected
Output: User prompted to enter the account type
R.1.2: Select account type
Input: User selects any one of the following options: savings/checking/deposit.
Output: Prompt to enter amount
R.1.3: Get required amount
Input: Amount to be withdrawn, in integer values greater than 100 and less than 10,000, in
multiples of 100.
Output: The requested cash and a printed transaction statement.
Processing: The amount is debited from the user's account if sufficient balance is available,
otherwise an error message is displayed.
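Requirement R.1.3's input constraint and processing rule can be sketched directly. The function name and message strings below are illustrative, not part of the SRS.

```python
# Hedged sketch of R.1.3: the amount must be a multiple of 100, greater than
# 100 and less than 10,000, and covered by the account balance.
# Function and message names are illustrative.

def withdraw(balance, amount):
    if not (100 < amount < 10_000 and amount % 100 == 0):
        return balance, "error: invalid amount"
    if amount > balance:
        return balance, "error: insufficient balance"
    return balance - amount, f"dispense {amount}, print transaction statement"

print(withdraw(5_000, 1_200))   # (3800, 'dispense 1200, print transaction statement')
print(withdraw(5_000, 250))     # (5000, 'error: invalid amount')
```

Writing the requirement at this level of precision (input domain, output, processing) is exactly what allows such a direct translation into code and test cases later.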
Decision tree
A decision tree gives a graphic view of the processing logic involved in decision making and
the corresponding actions taken.
Decision tables specify which variables are to be tested and, based on this, what actions are to
be taken depending upon the outcome of the decision-making logic, and the order in which the
decision making is performed.
The edges of a decision tree represent conditions and the leaf nodes represent the actions to be
performed depending on the outcome of testing the conditions.
Readability: Decision trees are easier to read and understand when the number of conditions are
small. On the other hand, a decision table causes the analyst to look at every possible combination of
conditions which he might otherwise omit.
Explicit representation of the order of decision making: In contrast to decision trees, the order
of decision making is abstracted out in decision tables. A situation where a decision tree is more
useful is when multilevel decision making is required. Decision trees can represent multilevel
decision making hierarchically and more intuitively, whereas decision tables can only represent a
single decision to select the appropriate action for execution.
Representing complex decision logic: Decision trees become very complex to understand when the
number of conditions and actions increases. It may even be impossible to draw the tree on a single
page. When a very large number of decisions is involved, the decision table representation may be
preferred.
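The contrast can be seen on the book-issue scenarios given earlier: a decision table maps each combination of condition outcomes directly to an action, whereas a decision tree would test the same conditions hierarchically. The condition names below are assumptions for illustration.

```python
# Hedged sketch of a decision table for the library book-issue logic.
# Conditions (book_reserved, limit_reached) and action strings are illustrative.

decision_table = {
    (False, False): "issue book and print issue slip",
    (True,  False): "refuse: book is reserved",
    (False, True):  "refuse: issue limit reached",
    (True,  True):  "refuse: book is reserved",
}

def issue_action(book_reserved, limit_reached):
    # table lookup forces every combination of conditions to be considered
    return decision_table[(book_reserved, limit_reached)]

print(issue_action(False, False))
```

Note how the table forces an entry for every combination, including (True, True), which an analyst drawing a tree might otherwise overlook.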
11 Mark Question: