
Software Engineering 1

Unit 5
Software Project Management
Introduction
Building computer software is a complex undertaking, particularly when it involves many people working over a
relatively long period. That is why software projects need to be managed. Software project management is the first layer of the software
engineering process. It starts before the technical work begins and continues as the software evolves from the conceptual stage to the implementation
stage. It is a crucial activity because the success or failure of the software depends directly on it.
Software project management is needed because professional software engineering is always subject to budget constraints,
schedule constraints and a quality-oriented focus.

Definition
Project management involves the planning, monitoring and control of the people, process and events that occur as software evolves
from a preliminary concept to an operational implementation. Software project management is an umbrella activity within software
engineering. It begins before any technical activity is initiated and continues throughout the definition, development and support of computer
software. The project management activity encompasses measurement and metrics, estimation, risk analysis, schedules, tracking and control.

Effective software project management focuses on the four P’s: people, product, process, and project. The order is not arbitrary. The
manager who forgets that software engineering work is an intensely human endeavor will never have success in project management. A
manager who fails to encourage comprehensive customer communication early in the evolution of a project risks building an elegant
solution for the wrong problem. The manager who pays little attention to the process runs the risk of inserting competent technical methods
and tools into a vacuum. The manager who embarks without a solid project plan jeopardizes the success of the product.

1. The People
a) The "people factor" is so important that the Software Engineering Institute has developed a people management capability maturity
model (PM-CMM).
b) Its aim is to enhance the readiness of software organizations to undertake increasingly complex applications by helping them attract,
grow, motivate, deploy, and retain the talent needed to improve their software development capability.
c) The PM-CMM is a companion to the software capability maturity model that guides organizations in the creation of a
mature software process.
d) The PM-CMM defines the following key practice areas for software people:
a. Recruiting
b. Selection
c. Performance Management
d. Training
e. Career Development
f. Team Culture Development
2. The Product
Before a project can be planned,
a) Product objectives and scope should be established.
b) Alternative solutions should be considered.
c) Technical and management constraints should be identified.
d) The software developer and customer must meet to define product objectives and scope.

e) Objectives identify the overall goals for the product (from the customer’s point of view) without considering how these goals will be
achieved.
f) Scope identifies the primary data, functions and behaviors that characterize the product and, more importantly, attempts to bound
these characteristics in a quantitative manner.
3. The Process
a) A software process provides the framework from which a comprehensive plan for software development can be established.
b) A small number of framework activities are applicable to all software projects, regardless of their size or complexity.
c) A number of different task sets—tasks, milestones, work products and quality assurance points—enable the framework activities
to be adapted to the characteristics of the software project and the requirements of the project team.
d) Umbrella activities—such as software quality assurance, software configuration management, and measurement—overlay
the process model.
4. The Project
a) We conduct planned and controlled software projects for one primary reason—it is the only known way to manage complexity.
b) The overall development cycle is called a project.
c) In order to avoid project failure, a software project manager and the software engineers who build the product must avoid a set of
common warning signs, understand the critical success factors that lead to good project management, and develop a commonsense
approach for planning, monitoring and controlling the project.

Software Project Planning – Following are the various project planning activities
1) Defining the problem –
a) Develop a definitive statement of the problem to be solved, including a description of the present situation, problem constraints and
a statement of the goals to be achieved. The problem statement should be formulated in the customer’s terminology.
b) Justify a computerized solution strategy for the problem.
c) Identify the various functions provided by the hardware subsystem, software subsystem and people subsystem. The constraints
thereof also should be identified.
d) Determine system-level goals and requirements for the development process and the work products.
e) Establish high-level acceptance criteria for the system.

2) Developing a solution strategy –


a) Create multiple solution strategies, without considering the constraints.
b) Conduct a feasibility study for each strategy.
c) Recommend a solution strategy, indicating why other strategies were rejected.
d) Develop a prioritized list of the characteristics of the product.

3) Planning the development process –


a) Define a life-cycle model and an organizational structure for the project.
b) Planning of –
I) Configuration management activities,
II) Quality assurance activities and
III) Validation activities.
c) Determine phase-dependent factors –
I) Tools,
II) Techniques, and
III) Notations.
d) Establish preliminary cost estimates for the system development.
e) Establish a preliminary development schedule.
f) Establish preliminary staffing estimates.
g) Establish preliminary estimates of the required computing resources and their maintenance.
h) Prepare a glossary of terms.
i) Identify information sources for reference during project development.

Detail Explanation of Software Project Planning activities

Problem Definition

There is a need to prepare, in the customer's terminology, a concise statement of the problem to be solved and the constraints that exist for its
solution. Problem definition requires understanding of the problem domain and the problem environment. Techniques for gaining this
knowledge include –
a) Customer interviews,
b) Observation of the problem tasks, and
c) Actual performance of the tasks by the planner. The planner should not be biased in any way and
should be technically experienced.
After preparing the solutions, the subsequent tasks include determining –
a) The appropriateness of a computerized solution,
b) Its cost-effectiveness, and
c) Whether it avoids displacing existing workers, since displacement may not be socially acceptable.

Requirement analysis and Goal determination:-


Goals:
1) Goals are targets for achievement and serve to establish the framework for a software development project.
2) Goals apply to both the development process and the work products.
3) Goals can be either qualitative or quantitative. They fall into the following categories –
a) Qualitative process goal :- the development process should adhere to the quality norms observed under quality assurance.
b) Quantitative process goal :- the system should be delivered within a fixed time.
c) Qualitative product goal :- the system should make the user's job easier and more interesting.
d) Quantitative product goal :- the system should reduce the cost of a transaction by about 25%.
4) Other common goals include a) transportability, b) early delivery, c) ease of use for nonprogrammers, etc.

Requirement:
1) They include –
a) Functional aspects,
b) Performance aspects,
c) Hardware aspects,
d) Software aspects,
e) User interface, etc.
2) They also specify development standards and quality assurance standards for both the project process and the product.
3) Special effort should be made to develop meaningful requirement statements and the methods that will be used to
verify those statements.
Planning the Software Development Processes
A Software process consists of two parts
1) Product Engineering Processes, which consists of
a) Development process: - This process consists of all kinds of activities which contribute towards the software development
process which are performed by the programmers, designers, testing personnel, librarians etc. The major goal here is to
improve quality of the software product.
b) Project Management process: - This process includes all the activities related to the efficient management of various
stages and resources in the development of the software. It includes scheduling, milestones, reviews, staffing etc. Here
the major goal is optimal usage of available resource in order to reduce the overall cost.
c) Software Configuration Management process: - This process identifies, organizes and controls changes to the software work
products (code, documents and data) throughout development, so that the product remains consistent as requirements and designs evolve.

2) Process Management Processes: - This process is responsible for monitoring every process that occurs during the software
development procedure. It ensures that high standards are followed in every process. It includes understanding the current
process, analyzing its properties, determining possible improvements and then implementing those improvements in the processes.

Fig Software Process

Software Cost Estimation


The major factors that influence software cost are
1. Programmer ability
2. Product complexity
3. Product size
4. Available time
5. Required reliability
6. Level of technology
Programmer Ability
It has been observed that on very large projects the differences in individual programmer ability tend to average out and do not
affect the project adversely. But on projects with five or fewer programmers, individual ability is significant and can affect the
project positively or negatively. Hence individual programmer ability governs programmer productivity, which affects the cost of the project.

Product Complexity
A software product can be classified on the basis of its complexity into three basic types –
a) Application programs,
b) Utility programs,
c) Systems programs.
Extensive studies have shown that utility programs are three times as difficult to write as application programs, and that systems
programs are three times as difficult to write as utility programs. So the levels of complexity can be stated as 1 : 3 : 9 for
application, utility and systems programs respectively. The following equations were derived from this research –

For calculating programmer-months (PM): -


Application programs: - PMap = 2.4 * (KDSI)^1.05

Utility programs: - PMup = 3.0 * (KDSI)^1.12

Systems programs: - PMsp = 3.6 * (KDSI)^1.20


For calculating the development time for a program (TDEV): -
Application programs: - TDEVap = 2.5 * (PMap)^0.38
Utility programs: - TDEVup = 2.5 * (PMup)^0.35
Systems programs: - TDEVsp = 2.5 * (PMsp)^0.32
For calculating the average staffing level (SL): -
Application programs: - SLap = PMap / TDEVap
Utility programs: - SLup = PMup / TDEVup
Systems programs: - SLsp = PMsp / TDEVsp
Where PM = programmer-months (person-months) and KDSI = thousands of delivered source instructions.
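The equations above translate directly into a short calculation. The following Python sketch is illustrative only; the coefficients are the standard published values quoted in this unit, not figures calibrated to any particular organization. It computes PM, TDEV and the average staffing level for each program class:

COEFFS = {
    # program class: (a, b, c, d) for PM = a * KDSI^b and TDEV = c * PM^d
    "application": (2.4, 1.05, 2.5, 0.38),
    "utility":     (3.0, 1.12, 2.5, 0.35),
    "systems":     (3.6, 1.20, 2.5, 0.32),
}

def estimate(kdsi, program_class):
    """Return (programmer-months, development time in months, average staffing level)."""
    a, b, c, d = COEFFS[program_class]
    pm = a * kdsi ** b        # effort in programmer-months
    tdev = c * pm ** d        # development time in months
    return pm, tdev, pm / tdev

for cls in COEFFS:
    pm, tdev, sl = estimate(10, cls)   # 10 KDSI is a purely illustrative size
    print(f"{cls:11s}  PM={pm:6.1f}  TDEV={tdev:5.1f}  SL={sl:4.1f}")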
A major consideration that needs to be taken into account is the extra coding required for housekeeping purposes. Housekeeping code is the
portion of the code that handles
a) Input/output,
b) Interactive user communication,
c) Human interface engineering and
d) Error checking & error handling.
So any cost estimate based on lines of code should include housekeeping code.

Product Size
A large software product is obviously more expensive to develop than a small one. The equations derived for programmer-months (PM) and
development time (TDEV) show that the required effort grows with the number of source instructions raised to an exponent slightly
greater than 1.
Available Time
Total project effort is sensitive to the calendar time available for project completion. Software projects require more effort if the
development time is compressed or expanded from the optimal time. Extensive research has shown that there is a limit beyond
which a software project cannot reduce its schedule, even by adding more personnel and equipment. This limit occurs at roughly 75% of the
nominal schedule.
Level of Technology
The level of technology provided by the latest software products helps reduce the overall cost. The type-checking and self-documentation aspects
of high-level languages improve the reliability and modifiability of the programs created with them. The programmer's familiarity with
the latest technology is also a contributing factor in effort and cost reduction.

Software Cost Estimation Techniques


Cost estimates can be made either top-down or bottom-up.
Top-down estimation first focuses on system level costs, such as the computing resources and personnel required to develop the system, as
well as the costs of
a) Configuration management,
b) Quality assurance,
c) System integration,
d) Training, and
e) Publications.
Personnel costs are estimated by examining the cost of similar past projects.
Bottom-up cost estimation first estimates the costs to develop each module or sub-system; those costs are then combined to arrive at an overall
estimate. Bottom-up estimation emphasizes the costs associated with developing individual system components, but may fail to account for
system level costs, such as configuration management and quality control.

1) Expert Judgment Technique


The most widely used cost estimation technique is expert judgment, which is inherently a top-down estimation technique. Expert judgment relies
on the experience, background and business sense of one or more key people in the organization.

Groups of experts sometimes prepare a consensus (collective) estimate; this tends to minimize individual oversights and lack of
familiarity with particular projects, and neutralizes personal biases and the desire to win the contract through an overly optimistic estimate, as
can happen with individual judgment.

The major disadvantage of group estimation is the effect that interpersonal group dynamics may have on individuals in the group. Group
members may not be candid enough due to political considerations, the presence of authority figures in the group, or the dominance of an
overly assertive group member.

2) Delphi cost estimation


This technique was developed to gain expert consensus without introducing the adverse side effects of group meetings. The Delphi technique can be
adapted to software cost estimation in the following manner:-
A coordinator provides each estimator with the system definition document and a form for recording the cost estimate.
Estimators study the definition and complete their estimates anonymously. They may ask questions of the coordinator, but they do not
discuss their estimates with one another.
The coordinator prepares and distributes a summary of the estimators' responses and any unusual rationales noted by the estimators.
Estimators complete another estimate, again anonymously, using the results from the previous round. Estimators whose estimates differ
sharply from the group may be asked, anonymously, to justify their estimates.
The process is iterated for as many rounds as required. No group discussion is allowed during the entire process.

3) The COCOMO Model


The Constructive Cost Model (COCOMO) is an empirical estimation model, i.e., the model uses formulas derived from historical project data to predict
cost-related factors. This model was created by Barry Boehm. The COCOMO model consists of three models:-
1. The Basic COCOMO model: - It computes the effort and related cost of the software development process as a function of
program size expressed in estimated lines of code (LOC or KLOC).
2. The Intermediate COCOMO model: - It computes the software development effort as a function of – a) program size, and b) a set
of cost drivers that includes subjective assessments of product, hardware, personnel, and project attributes.
3. The Advanced COCOMO model: - It incorporates all the characteristics of the intermediate version along with an assessment of
the cost drivers' impact on each step of the software engineering process.
The COCOMO models are defined for three classes of software projects, stated as follows:-
1. Organic projects: - These are relatively small and simple software projects which require small team structure having good
application experience. The project requirements are not rigid.
2. Semi-detached projects: - These are medium size projects with a mix of rigid and less than rigid requirements level.
3. Embedded projects: - These are large projects that have a very rigid requirements level. Here the hardware, software and
operational constraints are of prime importance.

In the following section we will discuss only the Basic and Intermediate COCOMO models.

Basic COCOMO Model


The effort and duration equations are as follows: -
E = a * (KLOC)^b
D = c * (E)^d
Where E = effort applied in person-months, D = development time in months, and KLOC = estimated thousands of lines of
code delivered for the project. The coefficients a, b and c, d are given in the table below.

Software Project    a      b      c      d
Organic             2.4    1.05   2.5    0.38
Semidetached        3.0    1.12   2.5    0.35
Embedded            3.6    1.20   2.5    0.32
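As a quick sanity check on the equations and table above, here is a minimal Python sketch of Basic COCOMO. The 10 KLOC organic call at the end reproduces the worked example later in this unit (roughly 26.9 person-months and 8.7 months); the function is only an illustration, not a calibrated estimator:

BASIC = {
    # mode: (a, b, c, d)
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a, b, c, d = BASIC[mode]
    effort = a * kloc ** b       # E, person-months
    duration = c * effort ** d   # D, months
    return effort, duration, effort / duration   # effort, duration, average staff

print(basic_cocomo(10, "organic"))   # approximately (26.9, 8.7, 3.1)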

Intermediate COCOMO Model


The effort equation is as follows: -
E = a * (KLOC)^b * EAF
Where E = effort applied in person-months, KLOC = estimated thousands of lines of code delivered for the project, and the
coefficient a and exponent b are given in the table below. EAF is the Effort Adjustment Factor, whose value typically ranges from 0.9 to 1.4; the
value of EAF is determined from the tables published by Barry Boehm for four major categories of cost drivers – product attributes, hardware
attributes, personnel attributes and project attributes.

Software Project    a      b
Organic             3.2    1.05
Semidetached        3.0    1.12
Embedded            2.8    1.20
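A sketch of the Intermediate COCOMO effort equation follows. The cost-driver multipliers passed in are placeholders; in practice they are read from Boehm's published rating tables for the product, hardware, personnel and project attribute categories:

INTERMEDIATE = {
    # mode: (a, b)
    "organic":      (3.2, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (2.8, 1.20),
}

def intermediate_effort(kloc, mode, cost_drivers):
    a, b = INTERMEDIATE[mode]
    eaf = 1.0
    for rating in cost_drivers:   # EAF is the product of all cost-driver multipliers
        eaf *= rating
    return a * kloc ** b * eaf    # effort E in person-months

# Hypothetical ratings (they happen to match the worked example near the end of this unit)
print(intermediate_effort(300, "organic", [0.95, 1.85, 0.8, 0.75]))
# roughly 1346 PM (about 1341 if EAF is first rounded to 1.05)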

Work Breakdown Structures (WBS)


The WBS method is a bottom-up estimation tool.
It is a hierarchical chart that accounts for the individual parts of the system.
There are two main classifications – a) product hierarchy, b) process hierarchy.
Product hierarchy: - It identifies the product components and indicates the manner in which the components are interconnected.
Process hierarchy: - It identifies the work activities and the relationships between them.

The primary advantages of the WBS technique are that it helps in identifying and accounting for the various process and product factors and in
identifying exactly which costs are included in the estimates.
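A bottom-up WBS estimate is simply a roll-up of leaf-level costs through the hierarchy. The sketch below uses a hypothetical product hierarchy with made-up component costs to show the mechanism:

wbs = {
    "full screen editor": {                 # hypothetical product hierarchy
        "screen edit": 40_000,
        "command language interpreter": 20_000,
        "file input/output": 10_000,
        "cursor movement": 20_000,
        "screen movement": 30_000,
    }
}

def roll_up(node):
    """Sum leaf costs of a WBS node; leaves are plain numbers."""
    if isinstance(node, dict):
        return sum(roll_up(child) for child in node.values())
    return node

print(roll_up(wbs))   # 120000 -- the bottom-up total for the product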

Fig Example of a Product Hierarchical Structure

Fig Example of a Process Hierarchical Structure

Consider a project to develop a full screen editor. The major components identified are screen edit, command language interpreter, file input
and output, cursor movement, screen movement. The sizes for these are estimated to be 4K, 2K, 1K, 2K and 3K delivered source lines. Use
COCOMO Model to determine:
a) Overall cost and schedule estimates.
b) Cost and schedules for different phases.
Sol:
a) Overall cost and schedule estimates using COCOMO Model:
For estimation we will go for Basic COCOMO model. The table for constants for Basic COCOMO Model is as follows:
Software Project    a      b      c      d
Organic             2.4    1.05   2.5    0.38
Semidetached        3.0    1.12   2.5    0.35
Embedded            3.6    1.20   2.5    0.32

The given project has the following phases/modules:


1. Screen edit = 4K
2. Command language interpreter = 2K
3. File input and output = 1K
4. Cursor movement = 2K
5. Screen movement = 3K
Total size = 12K

Now, we will go for overall estimation of project:


Analyzing our project, we find that a full screen editor is a semidetached project. So, for estimation purposes, we will make use
of the corresponding constants.
E = a * (KLOC)^b
D = c * (E)^d
Where, E = effort applied in person-months.

D = development time in chronological months.


E = 3.0 * (12)^1.12 = 48.50 person-months
D = 2.5 * (48.50)^0.35 = 9.72 months
Number of people N = E/D = 48.50 / 9.72 = 4.98 ≈ 5 persons (approximately)
For completion of this project we will require 5 people.
(b) Now we estimate each module separately:
All modules are treated as organic software projects because they are simple, and we use the corresponding constants.
i. Screen edit = 4K
E = 2.4 * (4)^1.05 = 10.28 person-months
D = 2.5 * (10.28)^0.38 = 6.06 months
N = E/D = 1.70 ≈ 2 persons (approximately)
ii. Command language interpreter = 2K
E = 2.4 * (2)^1.05 = 4.96 person-months
D = 2.5 * (4.96)^0.38 = 4.59 months
N = E/D = 1.08 ≈ 1 person (approximately)
iii. File input and output = 1K
E = 2.4 * (1)^1.05 = 2.4 person-months
D = 2.5 * (2.4)^0.38 = 3.48 months
N = E/D = 0.68 persons
No dedicated individual is required; a person engaged with another module can handle it simultaneously.
iv. Cursor movement = 2K
E = 2.4 * (2)^1.05 = 4.96 person-months
D = 2.5 * (4.96)^0.38 = 4.59 months
N = E/D = 1.08 ≈ 1 person (approximately)
v. Screen movement = 3K
E = 2.4 * (3)^1.05 = 7.6 person-months
D = 2.5 * (7.6)^0.38 = 5.4 months
N = E/D = 1.4 ≈ 1 person (approximately)
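The arithmetic in this example can be re-derived with a few lines of Python (a check on the figures above, not an estimation tool in its own right):

modules = {"screen edit": 4, "command language interpreter": 2,
           "file input and output": 1, "cursor movement": 2, "screen movement": 3}

def basic(kloc, a, b, c, d):
    e = a * kloc ** b        # effort, person-months
    t = c * e ** d           # development time, months
    return e, t, e / t       # effort, duration, average staff

# Overall project treated as semidetached (a=3.0, b=1.12, c=2.5, d=0.35)
print("overall:", basic(sum(modules.values()), 3.0, 1.12, 2.5, 0.35))   # ~ (48.5, 9.7, 5.0)

# Each module treated as organic (a=2.4, b=1.05, c=2.5, d=0.38)
for name, kloc in modules.items():
    print(name, basic(kloc, 2.4, 1.05, 2.5, 0.38))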

Compute the effort and duration of a project of organic mode with an estimated size of 10 KLOC using COCOMO.
Sol: For estimation we use the Basic COCOMO model. The table of constants for the Basic COCOMO model is as follows:

Software Project    a      b      c      d
Organic             2.4    1.05   2.5    0.38
Semidetached        3.0    1.12   2.5    0.35
Embedded            3.6    1.20   2.5    0.32

For our given project estimated LOC = 10KLOC


Given mode is organic, so a = 2.4, b = 1.05, c = 2.5 and d = 0.38
Effort E = a * (KLOC)^b
E = 2.4 * (10)^1.05 = 26.93 person-months
Duration D = c * (E)^d
D = 2.5 * (26.93)^0.38 = 8.74 months

Number of persons required N = E/D = 26.93/8.74 = 3.08 ≈ 3 persons (approximately)

Assume that the size of an organic type software product has been estimated to be 32,000 LOC and that the average salary of a software
engineer is Rs. 15,000 per month. Determine the effort required to develop the software product and the nominal development time.
Sol: For estimation we use the Basic COCOMO model. The table of constants for the Basic COCOMO model is as follows:

Software Project    a      b      c      d
Organic             2.4    1.05   2.5    0.38
Semidetached        3.0    1.12   2.5    0.35
Embedded            3.6    1.20   2.5    0.32

For our given project estimated LOC = 32KLOC


Given mode is organic, so a = 2.4, b = 1.05, c = 2.5 and d = 0.38
Effort E = a * (KLOC)^b
E = 2.4 * (32)^1.05 = 91 person-months
Duration D = c * (E)^d
D = 2.5 * (91)^0.38 = 14 months
Cost of developing the product = effort * monthly salary = 91 * 15,000 = Rs. 13,65,000

Suppose that we are faced with developing a system such that we expect to have about 1,00,000 LOC. Compute the effort and
development time for the organic and semidetached development mode.
Sol: For estimation we will go for Basic COCOMO model. The table for constants for Basic COCOMO Model is as follows:

Software Project    a      b      c      d
Organic             2.4    1.05   2.5    0.38
Semidetached        3.0    1.12   2.5    0.35
Embedded            3.6    1.20   2.5    0.32

For our given project estimated LOC = 100 KLOC


a) Organic Mode: For organic mode, a = 2.4, b = 1.05, c = 2.5 and d = 0.38
Effort E = a * (KLOC)^b
E = 2.4 * (100)^1.05 = 302.14 person-months
Duration D = c * (E)^d
D = 2.5 * (302.14)^0.38 = 21.89 months
b) Semidetached Mode: For semidetached mode, a = 3.0, b = 1.12, c = 2.5 and d = 0.35
Effort E = a * (KLOC)^b
E = 3.0 * (100)^1.12 = 521.34 person-months
Duration D = c * (E)^d
D = 2.5 * (521.34)^0.35 = 22.33 months

The size of a program in KLOC and the values of different cost drivers are given as: size 300 KLOC, complexity 0.95, analyst capability 1.85,
application of software engineering methods 0.8, performance requirement 0.75. Calculate the effort for the three types of projects, i.e., organic,
semidetached and embedded, using the COCOMO model.

Sol: For estimation we use the Intermediate COCOMO model. The table of constants for the Intermediate COCOMO model is as follows:

Software Project    a      b      c      d
Organic             3.2    1.05   2.5    0.38
Semidetached        3.0    1.12   2.5    0.35
Embedded            2.8    1.20   2.5    0.32

For our given project estimated LOC = 300 KLOC


Complexity = 0.95
Analyst capability = 1.85
Application of software engineering methods = 0.8
Performance requirement = 0.75
So, EAF = 0.95 * 1.85 * 0.8 * 0.75 = 1.05
a) Organic Mode: For organic mode, a = 3.2, b = 1.05, c = 2.5 and d = 0.38
Effort E = a * (KLOC)^b * EAF
E = 3.2 * (300)^1.05 * 1.05 = 1340.7 person-months
Duration D = c * (E)^d
D = 2.5 * (1340.7)^0.38 = 38.58 months
No. of persons required N = E/D = 1340.7/38.58 = 34.75 ≈ 35 persons (approximately)
b) Semidetached Mode: For semidetached mode, a = 3.0, b = 1.12, c = 2.5 and d = 0.35
Effort E = a * (KLOC)^b * EAF
E = 3.0 * (300)^1.12 * 1.05 = 1873.6 person-months
Duration D = c * (E)^d
D = 2.5 * (1873.6)^0.35 = 34.94 months
No. of persons required N = E/D = 1873.6/34.94 = 53.62 ≈ 54 persons (approximately)
c) Embedded Mode: For embedded mode, a = 2.8, b = 1.20, c = 2.5 and d = 0.32
Effort E = a * (KLOC)^b * EAF
E = 2.8 * (300)^1.20 * 1.05 = 2759.9 person-months
Duration D = c * (E)^d
D = 2.5 * (2759.9)^0.32 = 31.55 months
No. of persons required N = E/D = 2759.9/31.55 = 87.48 ≈ 87 persons (approximately)

Scheduling
Software project scheduling is an activity that distributes estimated effort across the planned project duration by allocating the effort to
specific software engineering tasks. It is important to note, however, that the schedule evolves over time. During early stages of project
planning, a macroscopic schedule is developed. This type of schedule identifies all major software engineering activities and the product
functions to which they are applied. As the project gets under way, each entry on the macroscopic schedule is refined into a detailed
schedule. Here, specific software tasks (required to accomplish an activity) are identified and scheduled.

Scheduling for software engineering projects can be viewed from two rather different perspectives. In the first, an end-date for release of
a computer-based system has already (and irrevocably) been established. The software organization is constrained to distribute effort
within the prescribed time frame. In the second view, rough chronological bounds have been discussed, but the end-date is set by the
software engineering organization. Effort is distributed to make best use of resources, and an end-date is defined
after careful analysis of the software. Unfortunately, the first situation is encountered far more frequently than the second.

Scheduling of a software project does not differ greatly from scheduling of any multitask engineering effort. Therefore, generalized project
scheduling tools and techniques can be applied with little modification to software projects.

Program evaluation and review technique (PERT) and critical path method (CPM) are two project scheduling methods that can be applied
to software development. Both techniques are driven by information already developed in earlier project planning activities:
• Estimates of effort
• A decomposition of the product function
• The selection of the appropriate process model and task set
• Decomposition of tasks
Interdependencies among tasks may be defined using a task network. Tasks, sometimes called the project work breakdown structure (WBS),
are defined for the product as a whole or for individual functions.
Both PERT and CPM provide quantitative tools that allow the software planner to
(1) Determine the critical path—the chain of tasks that determines the duration of the project;
(2) Establish “most likely” time estimates for individual tasks by applying statistical models; and
(3) Calculate “boundary times” that define a time "window" for a particular task.

Boundary time calculations can be very useful in software project scheduling. Slippage in the design of one function, for example, can retard
further development of other functions. Riggs describes important boundary times that may be discerned from a PERT or CPM network:
(1) The earliest time that a task can begin when all preceding tasks are completed in the shortest possible time,
(2) The latest time for task initiation before the minimum project completion time is delayed,
(3) The earliest finish—the sum of the earliest start and the task duration,
(4) The latest finish— the latest start time added to task duration, and
(5) The total float—the amount of surplus time or leeway allowed in scheduling tasks so that the network critical path is maintained
on schedule. Boundary time calculations lead to a determination of critical path and provide the manager with a quantitative method
for evaluating progress as tasks are completed.
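The boundary-time calculations above amount to a forward and a backward pass over the task network. Below is a small, self-contained Python sketch on a hypothetical five-task network (task names and durations are invented for illustration); tasks with zero total float lie on the critical path:

tasks = {                    # name: (duration in days, list of predecessors); listed in dependency order
    "spec":     (3, []),
    "design":   (5, ["spec"]),
    "code":     (7, ["design"]),
    "testplan": (4, ["spec"]),
    "test":     (4, ["code", "testplan"]),
}

def cpm(tasks):
    es, ef, ls, lf = {}, {}, {}, {}
    for name, (dur, preds) in tasks.items():                 # forward pass: earliest start/finish
        es[name] = max((ef[p] for p in preds), default=0)
        ef[name] = es[name] + dur
    project_end = max(ef.values())
    for name in reversed(list(tasks)):                       # backward pass: latest start/finish
        dur, _ = tasks[name]
        succs = [s for s, (_, preds) in tasks.items() if name in preds]
        lf[name] = min((ls[s] for s in succs), default=project_end)
        ls[name] = lf[name] - dur
    return {n: (es[n], ef[n], ls[n], lf[n], ls[n] - es[n]) for n in tasks}

for name, (e_s, e_f, l_s, l_f, total_float) in cpm(tasks).items():
    print(f"{name:8s} ES={e_s} EF={e_f} LS={l_s} LF={l_f} float={total_float}")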

Both PERT and CPM have been implemented in a wide variety of automated tools that are available for the personal computer. Such tools
are easy to use and make the scheduling methods described previously available to every software project manager.

Tracking the Schedule


Tracking can be accomplished in a number of different ways:
(1) Conducting periodic project status meetings in which each team member reports progress and problems.
(2) Evaluating the results of all reviews conducted throughout the software engineering process.
(3) Determining whether formal project milestones have been accomplished by the scheduled date.
(4) Comparing actual start-date to planned start-date for each project task listed in the resource table.
(5) Meeting informally with practitioners to obtain their subjective assessment of progress to date and problems on the horizon.
(6) Using earned value analysis to assess progress quantitatively.

Project Evaluation and Review Technique (PERT)

Project Evaluation and Review Technique (PERT) is a procedure through which the activities of a project are represented in their
appropriate sequence and timing. It is a scheduling technique used to schedule, organize and integrate tasks within a project. PERT is
basically a mechanism for management planning and control which provides a blueprint for a particular project. All of the primary elements
or events of a project are identified by PERT.
In this technique, a PERT chart is made which represents a schedule for all the specified tasks in the project. The reporting levels of the
tasks or events in the PERT chart are essentially the same as those defined in the work breakdown structure (WBS).
Characteristics of PERT:
The main characteristics of PERT are as follows:
1) It serves as a basis for obtaining the important facts needed for decision-making.
2) It forms the basis for all planning activities.
3) PERT helps management decide the best possible resource utilization method.
4) PERT takes advantage of the time network analysis technique.
5) PERT presents a structure for reporting information.
6) It helps management identify the essential elements for completing the project on time.
Advantages of PERT:
It has the following advantages:
1) PERT gives an estimate of the completion time of the project.
2) It supports the identification of activities with slack time.
3) The start and end dates of the activities of a specific project are determined.
4) It helps the project manager identify the critical path activities.
5) PERT provides a well-organized diagram for representing a large amount of data.
Disadvantages of PERT:
It has the following disadvantages:
1) The complexity of PERT is high, which leads to problems in implementation.
2) The estimates of activity times in PERT are subjective, which is a major disadvantage.
3) Maintenance of PERT is also expensive and complex.
4) The actual distribution of activity times may differ from the PERT beta distribution, which causes wrong assumptions.
5) It underestimates the expected project completion time, as there is a chance that other paths will become the critical path if their related
activities are delayed.

Gantt Chart

A Gantt chart is a type of chart in which a series of horizontal lines shows the amount of work done or production completed in a given
period of time in relation to the amount planned for those projects. It is a horizontal bar chart developed by Henry L. Gantt (American
engineer and social scientist) in 1917 as a production control tool. It is used for the graphical representation of a schedule that helps to plan,
coordinate, and track particular tasks in a project in an efficient way.
The purpose of a Gantt chart is to emphasize the scope of individual tasks; hence a set of tasks is given as input to the Gantt chart. The Gantt
chart is also known as a timeline chart. It can be developed for the entire project or for individual functions. In most projects, after
generation of the timeline chart, project tables are prepared. In project tables, all tasks are listed in proper order along with start date, end
date and related information.
A Gantt chart represents the following things:
 All the tasks are listed in the leftmost column.
 The horizontal bars indicate the time required by the corresponding task.
 When multiple horizontal bars occur at the same time on the calendar, the corresponding tasks can be performed concurrently.
 The diamonds indicate milestones.
Advantages : 
 Simplify Project – 
Gantt charts are generally used for simplifying complex projects. 
 
 Establish Schedule – 
It simply establishes initial project schedule in which it mentions who is going to do what, when, and how much time it will take to
complete it. 
 
 Provide Efficiency – 
It brings efficiency in planning and allows team to better coordinate project activities. 
 
 Emphasize on scope – 
It helps in emphasizing i.e., gives importance to scope of individual tasks. 
 
 Ease at understanding – 
It makes it easy for stakeholders to understand timeline and brings clarity of dates. 
 
 Visualize project – 
It helps in clearly visualizing project management, project tasks involved. 
 
 Organize thoughts and highly visible – 
It organizes your thoughts and is highly visible, so that everyone in the enterprise can have a basic level of understanding and
knowledge about what is happening in the project, even if they are not involved in the work. 
 
 Make practical and realistic planning – 
It makes project planning practical and realistic, and realistic planning generally helps avoid the delays and losses of money that can
otherwise arise. 
Disadvantages : 
 Sometimes, using a Gantt chart makes the project more complex.
 The size of a bar does not necessarily indicate the amount of work done in the project.
 Gantt charts and projects need to be updated on a regular basis.
 It is difficult or impossible to view the chart on one sheet of paper. The software products that produce Gantt charts need to be viewed
on a computer screen so that the whole project can be seen easily.
Applications:

There are several professions where the use of a Gantt chart is very beneficial. Some of them are given below: 
 Advertising Manager – 
Advertising Managers generally controls and supervises end result of advertising companies, scheduling advertisements in different
media, etc. 
 
 Operations Manager – 
Operations Managers generally control and handle resources that are essential for company operations . 
 
 Project Manager – 
Project Managers generally motivate project teams, collaborate with team members, schedule tasks and complete them on time, and report
to stakeholders, etc. 
 
Example : 
Nowadays, there are many companies and teams that use Gantt chart to plan, schedule, and execute their projects. Some of them are
consulting agencies, manufacturing companies, Marketing teams, Construction companies, etc. Below is an example of Gantt chart: 
Putnam Resource Allocation Model
The Lawrence Putnam model describes the time and effort required to finish a software project of a specified size. Putnam makes
use of the so-called Norden/Rayleigh curve to estimate project effort, schedule and defect rate, as shown in fig:

Putnam noticed that software staffing profiles followed the well-known Rayleigh distribution. Putnam used his observation about
productivity levels to derive the software equation:

L = Ck * K^(1/3) * td^(4/3)

The various terms of this expression are as follows:

K is the total effort expended (in PM) in product development, and L is the product size estimate in KLOC.

td corresponds to the time of system and integration testing. Therefore, td can be reasonably considered as the time required to
develop the product.

Ck is the state-of-technology constant and reflects the constraints that impede the progress of the program.

Typical values of Ck are:

Ck = 2 for a poor development environment,

Ck = 8 for a good software development environment, and

Ck = 11 for an excellent environment (in addition to following software engineering principles, automated tools and techniques are
used).

The exact value of Ck for a specific task can be computed from the historical data of the organization developing it.

Putnam proposed that the staff build-up on a project should follow the Rayleigh curve. Only a small number of engineers are
required at the beginning of a project to carry out planning and specification tasks. As the project progresses and more detailed work
becomes necessary, the number of engineers rises to a peak. After implementation and unit testing, the number of project staff falls.

Effect of a Schedule change on Cost


Putnam derived the following expression from the software equation:

K = L^3 / (Ck^3 * td^4)

Where, K is the total effort expended (in PM) in the product development,

L is the product size in KLOC,

td corresponds to the time of system and integration testing, and

Ck is the state-of-technology constant, reflecting the constraints that impede the progress of the program.

Now, using the above expression, it is obtained that

K = C / td^4

where, for the same product size, C = L^3 / Ck^3 is a constant.

(As project development effort is directly proportional to project development cost)

From the above expression, it can easily be observed that when the schedule of a project is compressed, the required development
effort, and hence the project development cost, increases in proportion to the fourth power of the degree of compression. This means that a
relatively small compression of the delivery schedule can result in a substantial penalty in human effort as well as development cost.

For example, if the estimated development time is 1 year, then to develop the product in 6 months, the total effort required to
develop the product (and hence the project cost) increases 16 times.
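This fourth-power effect is easy to verify numerically. In the sketch below the product size L and technology constant Ck are arbitrary placeholders; only the ratio between the nominal and the compressed schedule matters:

def putnam_effort(L_kloc, Ck, td):
    # K = L^3 / (Ck^3 * td^4), the expression derived above
    return L_kloc ** 3 / (Ck ** 3 * td ** 4)

nominal    = putnam_effort(100, 8, 1.0)   # deliver in the nominal time (1 year)
compressed = putnam_effort(100, 8, 0.5)   # compress the schedule to 6 months
print(compressed / nominal)               # -> 16.0, i.e. a 16-fold increase in effort (and cost)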

Software Quality
The quality of a software product is defined in terms of its fitness of purpose. That is, a quality product does precisely what the users want it to
do. For software products, fitness of purpose is generally interpreted in terms of satisfaction of the requirements laid down in the SRS
document. Although "fitness of purpose" is a satisfactory interpretation of quality for many devices such as a car, a table fan, a
grinding machine, etc., for software products "fitness of purpose" is not a wholly satisfactory definition of quality.

Example: Consider a functionally correct software product; that is, it performs all tasks as specified in the SRS document but has an
almost unusable user interface. Even though it may be functionally correct, we cannot consider it to be a quality product.

The modern view of quality associates several quality factors with a software product, such as the following:

Portability: A software product is said to be portable if it can easily be made to work in various operating system environments, on
multiple machines, with other software products, etc.

Usability: A software product has better usability if various categories of users can easily invoke the functions of the product.

Reusability: A software product has excellent reusability if different modules of the product can quickly be reused to develop new
products.

Correctness: A software product is correct if various requirements as specified in the SRS document have been correctly
implemented.

Maintainability: A software product is maintainable if bugs can be easily corrected as and when they show up, new tasks can be
easily added to the product, and the functionalities of the product can be easily modified, etc.

Software Quality Management System


A quality management system is the principal method used by organizations to ensure that the products they develop have the
desired quality.

A quality system consists of the following:

Managerial Structure and Individual Responsibilities: A quality system is the responsibility of the organization as a whole.
However, every organization has a separate quality department to perform various quality system activities. The quality system of an
organization should have the support of top management. Without support for the quality system at a high level in the company,
few members of staff will take the quality system seriously.

Quality System Activities: The quality system activities encompass the following:

Auditing of projects

Review of the quality system

Development of standards, methods, and guidelines, etc.

Production of documents for the top management summarizing the effectiveness of the quality system in the organization.

Evolution of Quality Management System


Quality systems have evolved rapidly over the last five decades. Before World War II, the usual method of producing quality
products was to inspect the finished products and remove the defective ones. Since that time, the quality systems of organizations have
undergone four stages of evolution, as shown in the fig. The initial product inspection method gave way to quality control (QC).

Quality control focuses not only on detecting defective products and removing them, but also on determining the causes behind the
defects. Thus, quality control aims at correcting the causes of defects and not just rejecting the defective products. The next breakthrough in
quality methods was the development of quality assurance methods.
The basic premise of modern quality assurance is that if an organization's processes are good and are followed rigorously, then
the products are bound to be of good quality. Modern quality assurance functions include guidance for recognizing, defining, analyzing, and
improving the production process.

Total quality management (TQM) advocates that the processes followed by an organization must be continuously improved through
process measurements. TQM goes a step further than quality assurance and aims at continuous process improvement. TQM goes
beyond documenting processes to optimizing them through redesign. A term related to TQM is Business Process Reengineering (BPR).

BPR aims at reengineering the way business is carried out in an organization. From the above discussion, it can be stated that
over the years the quality paradigm has shifted from product assurance to process assurance, as shown in fig.

Difference between ISO9000 and SEI-CMM

ISO 9000: 
It is a set of international standards on quality management and quality assurance developed to help companies effectively document the
quality system elements needed for an efficient quality system. 
SEI-CMM: 
The SEI (Software Engineering Institute) Capability Maturity Model (CMM) specifies an increasing series of maturity levels of a software development
organization. 
Difference between ISO9000 and SEI-CMM: 
 
ISO 9000: It is a set of international standards on quality management and quality assurance developed to help companies effectively document the quality system elements needed for an efficient quality system.
SEI-CMM: It specifies an increasing series of maturity levels of a software development organization.

ISO 9000: Its focus is the customer-supplier relationship, attempting to reduce the customer's risk in choosing a supplier.
SEI-CMM: Its focus is on the software supplier, improving its internal processes to achieve a higher quality product for the benefit of the customer.

ISO 9000: It was created for hard-goods manufacturing industries.
SEI-CMM: It was created for the software industry.

ISO 9000: It is recognized and accepted in most countries.
SEI-CMM: It is used mainly in the USA, less widely elsewhere.

ISO 9000: It specifies concepts, principles and safeguards that should be in place.
SEI-CMM: It provides detailed and specific definitions of what is required for given levels.

ISO 9000: It establishes one acceptance level.
SEI-CMM: It assesses on 5 levels.

ISO 9000: Its certification is valid for three years.
SEI-CMM: It has no limit on certification.

ISO 9000: It focuses inwardly on processes.
SEI-CMM: It focuses outwardly.

ISO 9000: It has no levels.
SEI-CMM: It has 5 levels: (a) Initial, (b) Repeatable, (c) Defined, (d) Managed, (e) Optimized.

ISO 9000: It is basically an audit.
SEI-CMM: It is basically an appraisal.

ISO 9000: It is open to multiple sectors.
SEI-CMM: It is aimed at IT/ITES.

ISO 9000: It follows a set of standards to make success repeatable.
SEI-CMM: It emphasizes a process of continuous improvement.

Personal Software Process (PSP)

The SEI CMM, which is a reference model for raising the maturity level of software organizations and for predicting the most likely outcome of the
next project undertaken by an organization, does not tell software developers how to analyze, design, code, test and document
software products; it expects that developers use effective practices. The Personal Software Process recognizes that the process used by an
individual is quite different from that required by a team.
The Personal Software Process (PSP) is the framework or structure that assists engineers in finding a way to measure and improve their
way of working to a great extent. It helps them develop their skills at a personal level and improve the way they plan and estimate
against those plans.
Objectives of PSP:
The aim of PSP is to provide software engineers with disciplined methods for improving their personal software development processes.
The PSP helps software engineers to:
 Improve their estimating and planning skills.
 Make commitments that they can keep.
 Manage the quality of their projects.
 Reduce the number of faults and defects in their work.
Time measurement:
The Personal Software Process recommends that developers structure the way they spend their time. Developers must measure and
record the time they spend on different activities during development.
PSP Planning:
Engineers should plan a project before developing it, because without planning a great deal of effort may be wasted on unimportant activities,
which may lead to poor and unsatisfactory quality in the result.
Levels of Personal Software Process :
Personal Software Process (PSP) has four levels-
 PSP 0 –
The first level of the Personal Software Process, PSP 0, includes personal measurement, basic size measures and coding standards.
 PSP 1 –
This level includes the planning of time and scheduling.
 PSP 2 –
This level introduces personal quality management, and design and code reviews.
 PSP 3 –
The last level of the Personal Software Process covers personal process evolution.

Six Sigma
Six Sigma is a process for improving the quality of output by identifying and eliminating the causes of defects and reducing
variability in manufacturing and business processes. The maturity of a manufacturing process can be described by a sigma rating
indicating the percentage of defect-free products it creates. A six sigma process is one in which 99.99966% of all the opportunities to
produce some feature of a part are statistically expected to be free of defects (3.4 defective features per million
opportunities).
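The 3.4 defects-per-million figure follows from the normal distribution with the conventional 1.5-sigma long-term shift, i.e. the upper-tail probability beyond 4.5 standard deviations. A small standard-library Python check:

from math import erfc, sqrt

def defects_per_million(sigma_level, shift=1.5):
    z = sigma_level - shift                # effective one-sided distance in standard deviations
    tail = 0.5 * erfc(z / sqrt(2))         # P(Z > z) for a standard normal variable
    return tail * 1_000_000

print(round(defects_per_million(6), 1))    # ~3.4 DPMO at the six sigma level
print(round(defects_per_million(3)))       # ~66807 DPMO for a three sigma process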
History of Six Sigma
Six Sigma is a set of methods and tools for process improvement. It was introduced by engineer Bill Smith while working
at Motorola in 1986. In the 1980s, Motorola was developing Quasar televisions, which were popular, but at the time there were many
defects due to picture quality and sound variations.

Using the same raw material, machinery and workforce, a Japanese firm took over Quasar television production, and within a few
months they produced Quasar TV sets with far fewer defects. This was achieved through improved management techniques.

Six Sigma was adopted by Bob Galvin, the CEO of Motorola, in 1986 and registered as a Motorola trademark on December 28, 1993;
Motorola then became a quality leader.

Characteristics of Six Sigma


The Characteristics of Six Sigma are as follows:

1. Statistical Quality Control: Six Sigma is named after the Greek letter σ (sigma), which is used to
denote standard deviation in statistics. Standard deviation is used to measure variance, which is an essential tool for
measuring non-conformance as far as the quality of output is concerned.
2. Methodical Approach: Six Sigma is not merely a quality improvement strategy in theory; it features a well-defined,
systematic approach of application in DMAIC and DMADV which can be used to improve the quality of production. DMAIC is
an acronym for Define-Measure-Analyze-Improve-Control. The alternative method, DMADV, stands for Define-Measure-
Analyze-Design-Verify.
3. Fact and Data-Based Approach: The statistical and methodical aspects of Six Sigma show the scientific basis of the
technique. This accentuates an essential element of Six Sigma: it is fact and data based.
4. Project and Objective-Based Focus: The Six Sigma process is implemented for an organization's projects, tailored to their
specifications and requirements. The process is flexed to suit the requirements and conditions under which the projects
operate, to get the best results.
5. Customer Focus: Customer focus is fundamental to the Six Sigma approach. The quality improvement and control
standards are based on specific customer requirements.
6. Teamwork Approach to Quality Management: The Six Sigma process requires organizations to get organized when it
comes to controlling and improving quality. Six Sigma involves a lot of training, depending on the role of an individual in the
quality management team.

Six Sigma Methodologies


Six Sigma projects follow two project methodologies:

1. DMAIC
2. DMADV

DMAIC

It specifies a data-driven quality strategy for improving processes. This methodology is used to enhance an existing business process.

The DMAIC project methodology has five phases:


1. Define: It covers the process mapping and flow-charting, project charter development, problem-solving tools, and so-called
7-M tools.
2. Measure: It includes the principles of measurement, continuous and discrete data, and scales of measurement, an overview
of the principle of variations and repeatability and reproducibility (RR) studies for continuous and discrete data.
3. Analyze: It covers establishing a process baseline, how to determine process improvement goals, knowledge discovery,
including descriptive and exploratory data analysis and data mining tools, the basic principle of Statistical Process Control
(SPC), specialized control charts, process capability analysis, correlation and regression analysis, analysis of categorical data,
and non-parametric statistical methods.
4. Improve: It covers project management, risk assessment, process simulation, and design of experiments (DOE), robust design
concepts, and process optimization.
5. Control: It covers process control planning, using SPC for operational control and PRE-Control.

DMADV

It specifies a data-driven quality strategy for designing products and processes. This method is used to create new product designs or
process designs in such a way that the result is more predictable, mature, and defect-free performance.

The DMADV project methodology has five phases:

1. Define: It defines the problem or project goal that needs to be addressed.


2. Measure: It measures and determines the customer's needs and specifications.
3. Analyze: It analyzes the process to meet customer needs.
4. Design: It designs a process that will meet the customer's needs.
5. Verify: It verifies the design performance and its ability to meet the customer's needs.

Computer Aided Software Engineering (CASE)


Computer-aided software engineering (CASE) is the implementation of computer-facilitated tools and methods in software
development. CASE is used to ensure high-quality and defect-free software. CASE ensures a check-pointed and disciplined approach
and helps designers, developers, testers, managers, and others to see the project milestones during development. 
CASE can also help as a warehouse for documents related to projects, like business plans, requirements, and design specifications.
One of the major advantages of using CASE is the delivery of the final product, which is more likely to meet real-world requirements
as it ensures that customers remain part of the process. 
CASE encompasses a wide set of labor-saving tools used in software development. It provides a framework for organizing
projects and helps enhance productivity. There was more interest in the concept of CASE tools years ago than there is
today, as the tools have morphed into different functions, often in reaction to software developer needs. The concept of CASE also
received a heavy dose of criticism after its release. 
CASE Tools: The essential idea of CASE tools is that built-in programs can help analyze developing systems in order to enhance
quality and provide better outcomes. Throughout the 1990s, CASE tools became part of the software lexicon, and big companies like
IBM used these kinds of tools to help create software. 
Various tools incorporated in CASE, called CASE tools, are used to support different stages and milestones in the
software development life cycle. 
Types of CASE Tools:  
1) Diagramming Tools: 
These help in the diagrammatic and graphical representation of data and system processes. They represent system elements,
control flow and data flow among different software components and system structures in pictorial form. For example, Flow
Chart Maker is a tool for making state-of-the-art flowcharts.  
2) Computer Display and Report Generators: These help in understanding the data requirements and the relationships
involved. 
3) Analysis Tools: These focus on inconsistent or incorrect specifications in the diagrams and data flow. They help in collecting
requirements and automatically checking for any irregularity or imprecision in the diagrams, data redundancies, or erroneous
omissions. 
For example:
(i) Accept 360, Accompa, CaseComplete for requirement analysis. 
(ii) Visible Analyst for total analysis. 
 
4) Central Repository: It provides a single point of storage for data diagrams, reports, and documents related to project
management. 
 
5) Documentation Generators: These help in generating user and technical documentation as per standards, creating documents
for technical users as well as end users. 
For example, Doxygen, DrExplain and Adobe RoboHelp for documentation.  
6) Code Generators: These aid in the auto-generation of code, including definitions, with the help of designs, documents, and diagrams. 
Advantages of the CASE approach: 
 As the special emphasis is placed on the redesign as well as testing, the servicing cost of a product over its expected lifetime
is considerably reduced. 
 The overall quality of the product is improved as an organized approach is undertaken during the process of development. 
 Chances to meet real-world requirements are more likely and easier with a computer-aided software engineering approach. 
 CASE indirectly provides an organization with a competitive advantage by helping ensure the development of high-quality
products. 
 It provides better documentation.
 It improves accuracy.
 It provides intangible benefits.
 It reduces lifetime maintenance.
 It provides an opportunity for non-programmers.
 It impacts the style of working of the company.
 It reduces the drudgery in a software engineer's work.
 It increases the speed of processing.
 It makes it easier to program software. 
Disadvantages of the CASE approach: 
 Cost: Using a CASE tool is very costly. Most firms engaged in software development on a small scale do not invest in CASE
tools because they think that the benefit of CASE is justifiable only in the development of large systems.
 Learning Curve: In most cases, programmers' productivity may fall in the initial phase of implementation, because users
need time to learn the technology. Many consultants offer training and on-site services that can be important for accelerating
the learning curve and supporting the development and use of the CASE tools.
 Tool Mix: It is important to build an appropriate tool mix to gain a cost advantage; CASE integration and data
integration across all platforms are extremely important.

Software Maintenance
Software maintenance is a part of the Software Development Life Cycle. Its primary goal is to modify and
update a software application after delivery to correct errors and to improve performance. Software is a model of
the real world; when the real world changes, the software requires alteration wherever possible.

Software Maintenance is an inclusive activity that includes error corrections, enhancement of capabilities,
deletion of obsolete capabilities, and optimization.

Need for Maintenance


Software Maintenance is needed:

o To correct errors
o To accommodate changes in user requirements over time
o To adapt to changing hardware/software environments
o To improve system efficiency
o To optimize the code to run faster
o To modify the components
o To reduce any unwanted side effects.

Thus, maintenance is required to ensure that the system continues to satisfy user requirements.
Types of Software Maintenance

1. Corrective Maintenance
Corrective maintenance aims to correct any remaining errors, regardless of whether they originate in the specifications,
design, coding, testing, or documentation (see the labelled example after this list of maintenance types).

2. Adaptive Maintenance
It involves modifying the software to match changes in the ever-changing environment.

3. Preventive Maintenance
It is the process by which we prevent our system from being obsolete. It involves the concept of reengineering
& reverse engineering in which an old system with old technology is re-engineered using new technology. This
maintenance prevents the system from dying out.

4. Perfective Maintenance
It means improving processing efficiency or performance, or restructuring the software to enhance changeability.
This may include enhancement of existing system functionality, improvement in computational efficiency, etc.
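The categories are easier to tell apart with a concrete illustration. The hypothetical Java snippet below (the class and the original defect are invented for this example) labels a corrective fix and a perfective improvement applied to the same component:

import java.util.Set;

public class DiscountService {

    // Corrective maintenance: the original check used ">" and silently skipped
    // orders of exactly 100 units, so the reported boundary defect is fixed here.
    public double discountFor(int units) {
        if (units >= 100) {          // was: units > 100
            return 0.10;
        }
        return 0.0;
    }

    // Perfective maintenance: the lookup was rewritten from a linear scan to a
    // constant-time set membership test purely to improve performance; the
    // externally visible behaviour is unchanged.
    private static final Set<String> PREMIUM_CUSTOMERS = Set.of("ACME", "GLOBEX");

    public boolean isPremium(String customerCode) {
        return PREMIUM_CUSTOMERS.contains(customerCode);
    }
}

An adaptive change, by contrast, would be triggered from outside the code, for example porting the same class to a new database driver or operating system.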

What is software reuse?


Software reuse is a term used for developing software by using existing software components. Some of
the components that can be reused are as follows:
 Source code
 Design and interfaces
 User manuals
 Software Documentation
 Software requirement specifications and many more.

What are the advantages of software reuse?


 Less effort: Software reuse requires less effort because many components used in the
system are ready-made components.
 Time-saving: Reusing ready-made components saves time for the software
team.
 Reduced cost: Less effort and time-saving lead to an overall cost reduction.
 Increased software productivity: When ready-made components are available, the team
can focus on the new components that have no ready-made
equivalent.
 Fewer resources used: Software reuse saves resources such as effort, time, and money.
 Better quality software: Software reuse saves time, so more time can be spent on
software quality and assurance.

What are Commercial-off-the-shelf software and Commercial-off-the-shelf components?


Commercial-off-the-shelf (COTS) software is ready-made software. Commercial-off-the-shelf software
components are ready-made components that can be reused in new software.
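As a small illustration of reusing an off-the-shelf component through its published interface, the sketch below relies on the JDK's standard java.time package for date arithmetic instead of hand-written (and separately tested) date logic; the WarrantyChecker class itself is invented for this example:

import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class WarrantyChecker {

    // Reuses the ready-made java.time component instead of re-implementing
    // leap years, month lengths, and other date rules from scratch.
    public boolean isUnderWarranty(LocalDate purchaseDate, int warrantyMonths) {
        LocalDate expiry = purchaseDate.plusMonths(warrantyMonths);
        return !LocalDate.now().isAfter(expiry);
    }

    public long daysRemaining(LocalDate purchaseDate, int warrantyMonths) {
        LocalDate expiry = purchaseDate.plusMonths(warrantyMonths);
        return ChronoUnit.DAYS.between(LocalDate.now(), expiry);
    }
}

Only the new warranty rules have to be written and tested; the date component arrives already validated.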
What is reuse software engineering?
Reuse software engineering is based on guidelines and principles for reusing the existing
software.
What are stages of reuse-oriented software engineering?
Requirement specification:
First of all, specify the requirements. This helps to decide whether existing software components are
available for developing the software.
Component analysis:
Helps to decide which component can be reused, and where.
Requirement updates / modifications:
If the requirements are changed by the customer, check whether the existing components are still
suitable for reuse.
System design with reuse:
If the requirements are changed by the customer, check whether the existing system designs are still
suitable for reuse.
Development:
Check whether the existing components match the new software.
Integration:
Can the new system be integrated with the existing components?
System validation:
Validate the system to confirm that it can be accepted by the customer.

Software reuse success factors


1. Capturing Domain Variations
2. Easing Integration
3. Understanding Design Context
4. Effective Teamwork
5. Managing Domain Complexity

How does inheritance promote software re-usability?


Inheritance helps with software reusability by using the existing components of the software
to create new components.
In object-oriented programming, protected data members are accessible in the child class, so we
can say that inheritance does promote software reusability.
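A minimal Java sketch of this idea (the class names are invented for illustration): the subclass reuses the protected field and the existing method instead of re-implementing them, and adds only the behaviour that is genuinely new.

// Existing, already-tested class.
class Report {
    protected String title;              // protected: visible to subclasses

    Report(String title) {
        this.title = title;
    }

    String header() {
        return "=== " + title + " ===";
    }
}

// New class: reuses header() and the title field through inheritance.
class CsvReport extends Report {
    CsvReport(String title) {
        super(title);
    }

    String asCsv(String[] rows) {
        StringBuilder sb = new StringBuilder(header()).append('\n');
        for (String row : rows) {
            sb.append(row).append('\n');
        }
        return sb.toString();
    }
}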
Does reuse of software reduce the need for testing?
This is true only for those components that are ready-made and ready to be reused.
New components must still be tested.

Component Based Software Development


Component-based architecture focuses on the decomposition of the design into individual functional or logical components that
represent well-defined communication interfaces containing methods, events, and properties. It provides a higher level of abstraction
and divides the problem into sub-problems, each associated with component partitions.
The primary objective of component-based architecture is to ensure component reusability. A component encapsulates
functionality and behaviors of a software element into a reusable and self-deployable binary unit. There are many standard
component frameworks such as COM/DCOM, JavaBean, EJB, CORBA, .NET, web services, and grid services. These technologies
are widely used in local desktop GUI application design, such as graphic JavaBean components, MS ActiveX components, and COM
components, which can be reused by a simple drag-and-drop operation.
Component-oriented software design has many advantages over the traditional object-oriented approaches such as −
 Reduced time to market and reduced development cost by reusing existing components.
 Increased reliability with the reuse of the existing components.
What is a Component?
A component is a modular, portable, replaceable, and reusable set of well-defined functionality that encapsulates its implementation
and exports it as a higher-level interface.
A component is a software object, intended to interact with other components, encapsulating certain functionality or a set of
functionalities. It has a clearly defined interface and conforms to a recommended behavior common to all components within an
architecture.
A software component can be defined as a unit of composition with a contractually specified interface and explicit context
dependencies only. That is, a software component can be deployed independently and is subject to composition by third parties.
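A minimal Java sketch of this definition (the names are illustrative, not from any particular framework): the interface is the contract that clients and third parties compose against, while the implementing class keeps its internal state and logic hidden and can be swapped for any other class that honours the same contract.

// Provided interface: the only contract clients depend on.
public interface PaymentComponent {
    boolean charge(String accountId, double amount);
}

// The component itself: implementation details stay encapsulated.
class SimplePaymentComponent implements PaymentComponent {
    private double dailyLimit = 1000.0;   // internal state, never exposed

    @Override
    public boolean charge(String accountId, double amount) {
        return amount > 0 && amount <= dailyLimit;
    }
}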

Views of a Component
A component can have three different views − object-oriented view, conventional view, and process-related view.
Object-oriented view
A component is viewed as a set of one or more cooperating classes. Each problem domain class (analysis) and infrastructure class
(design) is elaborated to identify all attributes and operations that apply to its implementation. This also involves defining the interfaces
that enable classes to communicate and cooperate.
Conventional view
It is viewed as a functional element or a module of a program that integrates the processing logic, the internal data structures that are
required to implement the processing logic and an interface that enables the component to be invoked and data to be passed to it.
Process-related view
In this view, instead of creating each component from scratch, the system is built from existing components maintained in a
library. As the software architecture is formulated, components are selected from the library and used to populate the architecture.
 A user interface (UI) component includes grids and buttons (referred to as controls), and utility components expose a specific subset
of functions used in other components.
 Other common types of components are those that are resource intensive, not frequently accessed, and must be activated
using the just-in-time (JIT) approach.
 Many components are invisible; they are distributed in enterprise business applications and internet web applications such
as Enterprise JavaBean (EJB), .NET components, and CORBA components.
Characteristics of Components
 Reusability − Components are usually designed to be reused in different situations in different applications. However, some
components may be designed for a specific task.
 Replaceable − Components may be freely substituted with other similar components.
 Not context specific − Components are designed to operate in different environments and contexts.
 Extensible − A component can be extended from existing components to provide new behavior.
 Encapsulated − A component exposes interfaces that allow the caller to use its functionality, and it does not expose
details of its internal processes or any internal variables or state.
 Independent − Components are designed to have minimal dependencies on other components.

Principles of Component−Based Design


A component-level design can be represented by using some intermediary representation (e.g. graphical, tabular, or text-based) that
can be translated into source code. The design of data structures, interfaces, and algorithms should conform to well-established
guidelines to help us avoid the introduction of errors.
 The software system is decomposed into reusable, cohesive, and encapsulated component units.
 Each component has its own interface that specifies required ports and provided ports; each component hides its detailed
implementation.
 A component should be extended without the need to make internal code or design modifications to the existing parts of the
component.
 Components should depend on abstractions and not on other concrete components; depending on concrete components increases the difficulty of extending them (see the sketch after this list).
 Connectors connect components, specifying and governing the interaction among components. The interaction type is
specified by the interfaces of the components.
 Components interaction can take the form of method invocations, asynchronous invocations, broadcasting, message driven
interactions, data stream communications, and other protocol specific interactions.
 For a server class, specialized interfaces should be created to serve major categories of clients. Only those operations that
are relevant to a particular category of clients should be specified in the interface.
 A component can extend other components and still offer its own extension points. This is the concept of plug-in based
architecture, which allows a plug-in to offer another plug-in API.
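The hedged Java sketch below (all names invented) illustrates two of these principles: the component declares its required port as an interface and receives a concrete implementation from outside, so it depends only on an abstraction, and a simple connector (constructor injection) wires the parts together.

// Required port: the abstraction this component depends on.
interface Logger {
    void log(String message);
}

// Provided port of the component.
interface OrderService {
    void placeOrder(String item);
}

// The component depends on the Logger abstraction, never on a concrete
// logger; the constructor acts as the connector that wires them up.
class OrderComponent implements OrderService {
    private final Logger logger;

    OrderComponent(Logger logger) {
        this.logger = logger;
    }

    @Override
    public void placeOrder(String item) {
        logger.log("order placed: " + item);
    }
}

Any Logger implementation (console, file, or remote) can now be connected without modifying OrderComponent.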

Component-Level Design Guidelines


Creates naming conventions for components that are specified as part of the architectural model and then refines or elaborates them as
part of the component-level model.
 Derives architectural component names from the problem domain and ensures that they have meaning to all stakeholders
who view the architectural model.
 Extracts the business process entities that can exist independently without any associated dependency on other entities.
 Recognizes and discovers these independent entities as new components.
 Uses infrastructure component names that reflect their implementation-specific meaning.
 Models any dependencies from left to right and inheritance from top (base class) to bottom (derived classes).
 Models any component dependencies as interfaces rather than representing them as a direct component-to-component
dependency.

Conducting Component-Level Design


Recognizes all design classes that correspond to the problem domain as defined in the analysis model and architectural model.
 Recognizes all design classes that correspond to the infrastructure domain.
 Describes all design classes that are not acquired as reusable components, and specifies message details.
 Identifies appropriate interfaces for each component and elaborates attributes and defines data types and data structures
required to implement them.
 Describes processing flow within each operation in detail by means of pseudo code or UML activity diagrams.
 Describes persistent data sources (databases and files) and identifies the classes required to manage them.
 Develops and elaborates behavioral representations for a class or component. This can be done by elaborating the UML state
diagrams created for the analysis model and by examining all use cases that are relevant to the design class.
 Elaborates deployment diagrams to provide additional implementation detail.
 Demonstrates the location of key packages or classes of components in a system by using class instances and designating
specific hardware and operating system environment.
 The final decision can be made by using established design principles and guidelines. Experienced designers consider all (or
most) of the alternative design solutions before settling on the final design model.
Advantages
 Ease of deployment − As new compatible versions become available, it is easier to replace existing versions with no impact
on the other components or the system as a whole.
 Reduced cost − The use of third-party components allows you to spread the cost of development and maintenance.
 Ease of development − Components implement well-known interfaces to provide defined functionality, allowing
development without impacting other parts of the system.
 Reusable − The use of reusable components means that they can be used to spread the development and maintenance
cost across several applications or systems.
 Modification of technical complexity − A component modifies the complexity through the use of a component container
and its services.
 Reliability − The overall system reliability increases since the reliability of each individual component enhances the reliability
of the whole system via reuse.
 System maintenance and evolution − Easy to change and update the implementation without affecting the rest of the
system.
 Independent − Components are independent of one another and can be connected flexibly. Different groups can develop
components independently and in parallel, which improves productivity for current and future software development.

Component-Based Software Engineering (CBSE) is a process that focuses on the design and development of
computer-based systems with the use of reusable software components. 
It not only identifies candidate components but also qualifies each component’s interface, adapts components to
remove architectural mismatches, assembles components into a selected architectural style, and updates components
as requirements for the system change.
The process model for component-based software engineering occurs concurrently with  component-based
development.
Component-based development:
Component-based development (CBD) is a CBSE activity that occurs in parallel with domain engineering. Using
analysis and architectural design methods, the software team refines an architectural style that is appropriate for the
analysis model created for the application to be built.
CBSE Framework Activities:
Framework activities of Component-Based Software Engineering are as follows:
1) Component Qualification: This activity ensures that the system architecture defines the requirements of the
components so that they can become reusable components. Reusable components are generally identified through the traits
in their interfaces. It means "the services that are provided, and the means by which consumers access
these services" are defined as part of the component interface.
2) Component Adaptation: This activity ensures that the architecture defines the design conditions for all components
and identifies their modes of connection. In some cases, existing reusable components may not fit the
architecture's design rules and conditions. Such components must be adapted to meet the
requirements of the architecture, or be rejected and replaced by other, more suitable components (a small adapter sketch follows this list).
3) Component Composition: This activity ensures that the architectural style of the system integrates the software
components and forms a working system. By identifying the connection and coordination mechanisms of the system,
the architecture describes the composition of the end product.
4) Component Update: This activity ensures that reusable components are kept up to date. Updates can be
complicated when a third party is involved (the organization that developed the reusable component may be
outside the immediate control of the software engineering organization currently using the component).
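Component adaptation is frequently achieved with a thin wrapper. The hedged Java sketch below (all names invented) adapts an existing reusable component with a mismatched interface to the interface the new architecture expects, without modifying the component itself:

// Interface required by the new architecture.
interface TemperatureSensor {
    double celsius();
}

// Existing reusable component with a mismatched interface (reports Fahrenheit).
class LegacyThermometer {
    double readFahrenheit() {
        return 98.6;   // stand-in reading for illustration
    }
}

// Adapter: lets the legacy component satisfy the architecture's design rules
// instead of being rejected and replaced.
class ThermometerAdapter implements TemperatureSensor {
    private final LegacyThermometer legacy = new LegacyThermometer();

    @Override
    public double celsius() {
        return (legacy.readFahrenheit() - 32) * 5.0 / 9.0;
    }
}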
