CASE STUDY REPORT ON PROJECT MANAGEMENT

INDEX

1. Conventional Software Management
   Conventional Software Management Performance
   Evolution of Software Economics
2. Principles of Modern Software Management
   Artifacts of the Process
3. Workflow of the Process
   Iterative Process Planning
4. Project Organizations & Responsibilities
   Line of Business Organizations
   Project Organizations
   Process Automation
5. Project Control & Process Instrumentation
   Seven Core Metrics of Software Projects
   Metrics Automation
   Tailoring the Process
6. Future Software Project Management
   Modern Project Profiles
   Next-Generation Software Economics

I. Conventional Software Management


Conventional software management practices are sound in theory, but practice is still tied to archaic technology and techniques. Conventional software economics provides a benchmark of performance for conventional software management principles. Three important analyses of the state of the software engineering industry are:

1. Software development is still highly unpredictable. Only about 10% of software projects are delivered successfully within initial budget and schedule estimates.
2. Management discipline is more of a discriminator in success or failure than are technology advances.
3. The level of software scrap and rework is indicative of an immature process.

THE WATERFALL MODEL

Most software engineering texts present the waterfall model as the source of the "conventional" software process.

Fig. Waterfall Model

Five necessary improvements for the waterfall model are:

1. Program design comes first: Insert a preliminary program design phase between the software requirements generation phase and the analysis phase. Begin the design process with program designers, not analysts or programmers.
2. Document the design: The amount of documentation required on most software programs is quite a lot, certainly much more than most programmers, analysts, or program designers are willing to do if left to their own devices.
3. Do it twice: If a computer program is being developed for the first time, arrange matters so that the version finally delivered to the customer for operational deployment is actually the second version insofar as critical design/operations are concerned.
4. Plan, control, and monitor testing: Without question, the biggest user of project resources (manpower, computer time, and/or management judgment) is the test phase. This is the phase of greatest risk in terms of cost and schedule.
5. Involve the customer: For some reason, what a software design is going to do is subject to wide interpretation, even after previous agreement. It is important to involve the customer in a formal way so that he has committed himself at earlier points before final delivery.

It is useful to summarize the characteristics of the conventional process as it has typically been applied, which is not necessarily as it was intended. Projects destined for trouble frequently exhibit the following symptoms:

 Protracted integration and late design breakage
 Late risk resolution
 Requirements-driven functional decomposition
 Adversarial (conflict or opposition) stakeholder relationships
 Focus on documents and review meetings

1.1 CONVENTIONAL SOFTWARE MANAGEMENT PERFORMANCE:

Barry Boehm's "Industrial Software Metrics Top 10 List" is a good, objective characterization of the state of software development.

1. Finding and fixing a software problem after delivery costs 100 times more than finding and fixing the problem in early design phases.
2. You can compress software development schedules 25% of nominal, but no more.
3. For every $1 you spend on development, you will spend $2 on maintenance.
4. Software development and maintenance costs are primarily a function of the number of source lines of code.
5. Variations among people account for the biggest differences in software productivity.
6. The overall ratio of software to hardware costs is still growing. In 1955 it was 15:85; in 1985, 85:15.
7. Only about 15% of software development effort is devoted to programming.
8. Software systems and products typically cost 3 times as much per SLOC as individual software programs. Software-system products (i.e., systems of systems) cost 9 times as much.
9. Walkthroughs catch 60% of the errors.
10. 80% of the contribution comes from 20% of the contributors.

1.2 EVOLUTION OF SOFTWARE ECONOMICS
1.2.1 SOFTWARE ECONOMICS:

Most software cost models can be abstracted into a function of five basic parameters: size, process, personnel, environment, and required quality.

1. The size of the end product (in human-generated components), which is typically quantified in terms of the number of source instructions or the number of function points required to develop the required functionality.
2. The process used to produce the end product, in particular the ability of the process to avoid non-value-adding activities (rework, bureaucratic delays, communications overhead).
3. The capabilities of software engineering personnel, and particularly their experience with the computer science issues and the applications domain issues of the project.
4. The environment, which is made up of the tools and techniques available to support efficient software development and to automate the process.
5. The required quality of the product, including its features, performance, reliability, and adaptability.

The relationships among these parameters and the estimated cost can be written as follows:

Effort = (Personnel)(Environment)(Quality)(Size^Process)
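The cost relationship above can be sketched as a small function. The parameter values in the example call are invented for illustration; a calibrated model such as COCOMO derives such values from historical project data.

```python
# Illustrative sketch of the five-parameter cost abstraction:
#   Effort = (Personnel)(Environment)(Quality)(Size^Process)
# All numeric values below are assumptions, not calibrated data.

def estimate_effort(size, process, personnel, environment, quality):
    """Return estimated effort in staff-months.

    size        -- human-generated size (e.g., KSLOC or function points)
    process     -- exponent reflecting process (dis)economies of scale
    personnel   -- multiplier for team capability and experience
    environment -- multiplier for tool and automation support
    quality     -- multiplier for required product quality
    """
    return personnel * environment * quality * (size ** process)

# A hypothetical 100-KSLOC project with a slightly diseconomic process:
effort = estimate_effort(size=100, process=1.1, personnel=2.5,
                         environment=1.0, quality=1.2)
print(round(effort, 1))
```

Note that the exponent (process) makes cost grow faster than linearly with size when it exceeds 1.0, which is why process improvement and size reduction are both economic levers.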

The three generations of software development are defined as follows:

1) Conventional: 1960s and 1970s, craftsmanship. Organizations used custom tools, custom processes, and virtually all custom components built in primitive languages. Project performance was highly predictable in that cost, schedule, and quality objectives were almost always underachieved.

2) Transition: 1980s and 1990s, software engineering. Organizations used more-repeatable processes and off-the-shelf tools, and mostly (>70%) custom components built in higher-level languages. Some of the components (<30%) were available as commercial products, including the operating system, database management system, networking, and graphical user interface.

3) Modern practices: 2000 and later, software production. This philosophy is rooted in the use of managed and measured processes, integrated automation environments, and mostly (70%) off-the-shelf components. Perhaps as few as 30% of the components need to be custom built. Technologies for environment automation, size reduction, and process improvement are not independent of one another.

1.2.2 PRAGMATIC SOFTWARE COST ESTIMATION:

One critical problem in software cost estimation is a lack of well-documented case studies of projects that used an iterative development approach. Because the software industry has inconsistently defined metrics or atomic units of measure, the data from actual projects are highly suspect in terms of consistency and comparability. There are several popular cost estimation models (such as COCOMO, CHECKPOINT, ESTIMACS, KnowledgePlan, Price-S, ProQMS, SEER, SLIM, SOFTCOST, and SPQR/20); COCOMO is also one of the most open and well-documented cost estimation models. The general accuracy of conventional cost models (such as COCOMO) has been described as "within 20% of actuals, 70% of the time."

A good software cost estimate has the following attributes:

 It is conceived and supported by the project manager, architecture team, development team, and test team accountable for performing the work.
 It is accepted by all stakeholders as ambitious but realizable.
 It is based on a well-defined software cost model with a credible basis.
 It is based on a database of relevant project experience that includes similar processes, similar technologies, similar environments, similar quality requirements, and similar people.

II. Principles of Modern Software Management:


Walker Royce describes ten principles of modern software management ([Royce, 1998]). The principles are in priority order:

1. Base the process on an architecture-first approach: This requires that a demonstrable balance be achieved among the driving requirements, the architecturally significant design decisions, and the life-cycle plans before the resources are committed for full-scale development.
2. Establish an iterative life-cycle process that confronts risk early: With today's sophisticated software systems, it is not possible to define the entire problem, design the entire solution, build the software, then test the end product in sequence.
3. Transition design methods to emphasize component-based development: Moving from a line-of-code mentality to a component-based mentality is necessary to reduce the amount of human-generated source code and custom development.
4. Establish a change management environment: The dynamics of iterative development, including concurrent workflows by different teams working on shared artifacts, necessitate objectively controlled baselines.
5. Enhance change freedom through tools that support round-trip engineering: Round-trip engineering is the environment support necessary to automate and synchronize engineering information in different formats (such as requirements specifications, design models, source code, executable code, test cases).
6. Capture design artifacts in rigorous, model-based notation: A model-based approach (such as UML) supports the evolution of semantically rich graphical and textual design notations.
7. Instrument the process for objective quality control and progress assessment: Life-cycle assessment of the progress and the quality of all intermediate products must be integrated into the process.
8. Use a demonstration-based approach to assess intermediate artifacts: Transitioning the current state-of-the-product artifacts (whether the artifact is an early prototype, a baseline architecture, or a beta capability) into an executable demonstration of relevant scenarios stimulates earlier convergence on integration, a more tangible understanding of design trade-offs, and earlier elimination of architectural defects.
9. Plan intermediate releases in groups of usage scenarios with evolving levels of detail: It is essential that the software management process drive toward early and continuous demonstrations within the operational context of the system, namely its use cases.
10. Establish a configurable process that is economically scalable: No single process is suitable for all software developments. A pragmatic process framework must be configurable to a broad spectrum of applications.

2.1 Artifacts of the Process

Each Rational Unified Process activity has associated artifacts, either required as an input or generated as an output. Some artifacts are used as direct input to subsequent activities, kept as reference resources on the project, or generated in a format as contractual deliverables.

Models: Models are the most important kind of artifact in the Rational Unified Process. A model is a simplification of reality, created to better understand the system being created. In the Rational Unified Process, a number of models collectively cover all the important decisions that go into visualizing, specifying, constructing, and documenting a software-intensive system:

1. Business use-case model: establishes an abstraction of the organization
2. Business analysis model: establishes the context of the system
3. Use-case model: establishes the system's functional requirements
4. Analysis model (optional): establishes a conceptual design
5. Design model: establishes the vocabulary of the problem and its solution
6. Data model (optional): establishes the representation of data for databases and other repositories
7. Deployment model: establishes the hardware topology on which the system is executed, as well as the system's concurrency and synchronization mechanisms
8. Implementation model: establishes the parts used to assemble and release the physical system

Other Artifacts: The Rational Unified Process's artifacts are categorized as either management artifacts or technical artifacts. The technical artifacts may be divided into five main sets:

1. Requirements set: describes what the system must do
2. Analysis and design set: describes how the system is to be constructed
3. Test set: describes the approach by which the system is validated and verified
4. Implementation set: describes the assembly of developed software components
5. Deployment set: provides all the data for the deliverable configuration

Requirements Set: This set groups all information describing what the system must do. This may comprise a use case model, a nonfunctional requirements model, a domain model, an analysis model, and other forms of expression of the user's needs, including but not limited to mock-ups, interface prototypes, regulatory constraints, and so on.

Design Set: This set groups information describing how the system is to be constructed, and captures decisions about how the system is to be built, taking into account all the constraints of time, budget, legacy, reuse, quality objectives, and so forth. This may comprise a design model, a test model, and other forms of expression of the system's nature, including but not limited to prototypes and executable architectures.

Test Set: This set groups information about testing the system, including scripts, test cases, defect-tracking metrics, and acceptance criteria.

Implementation Set: This set groups all information about the elements of the software that comprise the system, including but not limited to source code in various programming languages, configuration files, data files, software components, and so on, together with the information describing how to assemble the system.

Deployment Set: This set groups all information about the way the software is actually packaged, shipped, installed, and run on the target environment.

Fig. Major artifacts of the Rational Unified Process and the information flow between them.

III. Workflows of the Process


Main Workflow Techniques: Based on the method used for process modelling, Workflow Management Systems are divided into three main categories, as follows:

Communication-based techniques: These reduce every action in a workflow to four phases based on communication between a customer and a performer: preparation, negotiation, performance, and acceptance.

Activity-based techniques: These focus on modelling the tasks involved in a process and their dependencies.

Hybrid techniques: These can be considered a combination of the communication-based and the activity-based techniques.
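The communication-based technique can be sketched as a simple cyclic phase model; the names and function below are illustrative, not taken from any specific workflow product.

```python
# Minimal sketch of the communication-based technique: every action in a
# workflow cycles through four phases between a customer and a performer.

PHASES = ["preparation", "negotiation", "performance", "acceptance"]

def next_phase(current):
    """Advance an action to its next phase; after acceptance the cycle restarts."""
    i = PHASES.index(current)
    return PHASES[(i + 1) % len(PHASES)]

phase = "preparation"
for _ in range(3):
    phase = next_phase(phase)
print(phase)  # acceptance
```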

3.1 Iterative Process Planning:


Like software, a plan is an intangible piece of intellectual property to which all the same concepts must be applied. Plans have an engineering stage, during which the plan is developed, and a production stage, when the plan is executed.

3.1.1 WORK BREAKDOWN STRUCTURES:

A good work breakdown structure (WBS) and its synchronization with the process framework are critical factors in software project success. A WBS is simply a hierarchy of elements that decomposes the project plan into discrete work tasks. A WBS provides the following information structure:

 A delineation of all significant work.
 A clear task decomposition for assignment of responsibilities.
 A framework for scheduling, budgeting, and expenditure tracking.

3.1.2 PLANNING GUIDELINES:

Software projects span a broad range of application domains. It is valuable but risky to make specific planning recommendations independent of project context.

3.1.3 THE COST AND SCHEDULE ESTIMATING PROCESS:

It starts with an understanding of the general requirements and constraints, derives a macro-level budget and schedule, then decomposes these elements into lower-level budgets and intermediate milestones. From this perspective, the following planning sequence would occur:

1. The software project manager (and others) develops a characterization of the overall size, process, environment, people, and quality required for the project.
2. A macro-level estimate of the total effort and schedule is developed using a software cost estimation model.
3. The software project manager partitions the estimate for the effort into a top-level WBS using guidelines. The project manager also partitions the schedule into major milestone dates and partitions the effort into a staffing profile using guidelines.
4. At this point, subproject managers are given the responsibility for decomposing each of the WBS elements into lower levels using their top-level allocation, staffing profile, and major milestone dates as constraints.

3.1.4 THE ITERATION PLANNING PROCESS:

Planning the content and schedule of the major milestones and their intermediate iterations is probably the most tangible form of the overall risk management plan. Iteration is used to mean a complete synchronization across the project, with a well-orchestrated global assessment of the entire project baseline.

Inception iterations: The early prototyping activities integrate the foundation components of a candidate architecture and provide an executable framework for elaborating the critical use cases of the system.

Elaboration iterations: These iterations result in an architecture, including a complete framework and infrastructure for execution.

Construction iterations: Most projects require at least two major construction iterations: an alpha release and a beta release.

Transition iterations: Most projects use a single iteration to transition a beta release into the final product.
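Step 3 of the estimating sequence in 3.1.3 can be sketched as a top-down partition of the macro-level estimate into a top-level WBS. The percentage guidelines below are assumptions for illustration, not allocations prescribed by the report.

```python
# Top-down partitioning: a macro-level effort estimate (from a cost model)
# is split across top-level WBS elements by percentage guidelines.
# Both the total and the percentages are invented for this example.

macro_effort = 200.0  # staff-months, output of a cost estimation model

allocation_guidelines = {
    "management":     0.10,
    "requirements":   0.10,
    "design":         0.15,
    "implementation": 0.30,
    "assessment":     0.25,
    "deployment":     0.05,
    "environment":    0.05,
}

# Sanity check: the guidelines must cover exactly the whole estimate.
assert abs(sum(allocation_guidelines.values()) - 1.0) < 1e-9

top_level_wbs = {element: macro_effort * share
                 for element, share in allocation_guidelines.items()}

print(top_level_wbs["implementation"])
```

Subproject managers would then repeat the same decomposition one level down, with their top-level allocation acting as the constraint.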
3.1.5 PRAGMATIC PLANNING:

Even though good planning is more dynamic in an iterative process, doing it accurately is far easier. While executing iteration N of any phase, the software project manager must be monitoring and controlling against a plan that was initiated in iteration N-1 and must be planning iteration N+1. The art of good project management is to make trade-offs in the current iteration plan and the next iteration plan based on objective results in the current iteration and previous iterations.

IV. Project Organizations and Responsibilities


Software lines of business and project teams have different motivations. Software lines of business are motivated by return on investment, new business discriminators, market diversification, and profitability.

4.1 Line of Business Organizations:

The main features of the default organization are as follows:

 Responsibility for process definition and maintenance is specific to a cohesive line of business, where process commonality makes sense.
 Responsibility for process automation is an organizational role and is equal in importance to the process definition role.
 Organizational roles may be fulfilled by a single individual or several different teams, depending on the scale of the organization.

Business Organization: The Organization Manager has four managers reporting to them (SEPA, PRA, SEEA, and the Infrastructure Manager), along with some number of project managers.

1. Software Engineering Process Authority (SEPA): The SEPA is a necessary role in any organization. The SEPA could be a single individual, such as the general manager, or a team of representatives.
2. Project Review Authority (PRA): The PRA is the single individual responsible for ensuring that a software project complies with all organizational and business-unit software policies, practices, and standards.
3. Software Engineering Environment Authority (SEEA): The SEEA is the person or group responsible for automating the organization's process, maintaining the organization's standard environment, training project teams to use the environment, and maintaining organization-wide reusable assets.
4. Infrastructure: The components of the organizational infrastructure include project administration, engineering skill centers, and professional development. Project administration includes the time-accounting system, contracts, pricing, and terms and conditions.

4.2 PROJECT ORGANIZATIONS:

The main features of the default organization are:

 The project management team is an active participant, responsible for producing as well as managing the plan.
 The architecture team is responsible for real artifacts and for the integration of components.
 The development team is responsible for component construction and maintenance activities.
 The assessment team is separate from development.

4.3 PROCESS AUTOMATION:


Automation can mean the loss of many organizational jobs, and automation needs grow depending on the scale of the effort. Process automation is critical to an iterative process. There are many tools available to automate the software development process. The mapping of software development tools to the process workflows is shown below:

Workflow           Environment tools and process automation
1. Management      Workflow automation, metrics automation
2. Environment     Change management, document automation
3. Requirements    Requirements management
4. Design          Visual modelling
5. Implementation  Editor-compiler-debugger
6. Assessment      Test automation, defect tracking
7. Deployment      Defect tracking
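The workflow-to-tool mapping above can be captured as a simple lookup table; this dictionary form is just an illustrative way to make the mapping queryable.

```python
# The seven process workflows mapped to their environment tool categories,
# as listed in the table above.

process_automation = {
    "management":     ["workflow automation", "metrics automation"],
    "environment":    ["change management", "document automation"],
    "requirements":   ["requirements management"],
    "design":         ["visual modelling"],
    "implementation": ["editor-compiler-debugger"],
    "assessment":     ["test automation", "defect tracking"],
    "deployment":     ["defect tracking"],
}

for workflow, tools in process_automation.items():
    print(f"{workflow}: {', '.join(tools)}")
```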

V. Project Control & Process Instrumentation


Software Metrics: Software metrics instrument the activities and products of the software development process. Hence, the quality of the software products and the achievements in the development process can be determined using software metrics.

Need for Software Metrics:
1. Software metrics are needed for calculating the cost and schedule of a software product with great accuracy.
2. Software metrics are required for making an accurate estimation of progress.
3. Metrics are also required for understanding the quality of the software product.

Indicators: An indicator is a metric or a group of metrics that provides an understanding of the software process, software product, or software project. Two types of indicators are:
(i) Management indicators.
(ii) Quality indicators.

5.1 Seven Core Metrics of Software Projects:

Software metrics instrument the activities and products of the software development/integration process, and metrics values provide an important perspective for managing the process. The seven core metrics related to project control (the first three are management indicators, the remaining four quality indicators) are:

1. Work and progress
2. Budgeted cost and expenditures
3. Staffing and team dynamics
4. Change traffic and stability
5. Breakage and modularity
6. Rework and adaptability
7. Mean time between failures (MTBF) and maturity

1. Work and progress: This metric measures the work performed over time. Work is the effort to be accomplished to complete a certain set of tasks. The default perspectives of this metric are:
    Software architecture team: use cases demonstrated.
    Software development team: SLOC under baseline change management, SCOs closed.
    Software assessment team: SCOs opened, test hours executed, and evaluation criteria met.
    Software management team: milestones completed.
2. Budgeted cost and expenditures: This metric measures cost incurred over time. Budgeted cost is the planned expenditure profile over the life cycle of the project.
3. Staffing and team dynamics: This metric measures personnel changes over time, which involves staffing additions and reductions over time.
4. Change traffic and stability: This metric measures the change traffic over time. The number of software change orders opened and closed over the life cycle is called change traffic.
5. Breakage and modularity: This metric measures the average breakage per change over time. Breakage is defined as the average extent of change, which is the amount of software baseline that needs rework, measured in source lines of code, function points, components, subsystems, files, or other units.
6. Rework and adaptability: This metric measures the average rework per change over time. Rework is defined as the average cost of change, which is the effort to analyze, resolve, and retest all changes to software baselines.
7. MTBF and maturity: This metric measures defect rate over time. MTBF (Mean Time Between Failures) is the average usage time between software faults.
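As an illustration, two of the core metrics (breakage and rework) can be computed from a list of software change orders (SCOs). The records and figures below are invented for the example; a real project would extract them from its change management system.

```python
# Each software change order (SCO) is recorded here as a tuple of
# (SLOC of baseline broken by the change, rework effort in hours).
changes = [
    (120, 16),
    (40, 6),
    (200, 30),
]

# Breakage and modularity: average extent of change per SCO.
breakage = sum(sloc for sloc, _ in changes) / len(changes)

# Rework and adaptability: average cost (effort) of change per SCO.
rework = sum(hours for _, hours in changes) / len(changes)

print(breakage)  # 120.0 SLOC per change
print(round(rework, 1))
```

Tracked over time, a downward trend in both averages indicates improving modularity and adaptability of the baseline.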

5.2 METRICS AUTOMATION:


Many opportunities are available to automate the project control activities of a software project. A Software Project Control Panel (SPCP) is essential for managing against a plan. This panel integrates data from multiple sources to show the current status of some aspect of the project. The panel can support standard features and provide extensive capability for detailed situation analysis. The SPCP is one example of a metrics automation approach that collects, organizes, and reports values and trends extracted directly from the evolving engineering artifacts.

To implement a complete SPCP, the following are necessary:

 Metrics primitives: trends, comparisons, and progressions.
 A graphical user interface.
 Metrics collection agents.
 A metrics data management server.
 Metrics definitions: actual metrics presentations for requirements progress, implementation progress, assessment progress, design progress, and other progress dimensions.
 Actors: monitor and administrator.

5.3 TAILORING THE PROCESS:


In tailoring the management process to a specific domain or project, there are two dimensions of discriminating factors: technical complexity and management complexity. A process framework is not a project-specific process implementation with a well-defined recipe for success. The process discriminants are organized around six process parameters: scale, stakeholder cohesion, process flexibility, process maturity, architectural risk, and domain experience.

1. Scale: The scale of the project, measured primarily by team size, drives the process configuration more than any other factor. There are many other ways to measure scale, including number of source lines of code, number of function points, number of use cases, and number of dollars. Five people is an optimal size for an engineering team.
2. Stakeholder Cohesion or Contention: The degree of cooperation and coordination among stakeholders (buyers, developers, users, subcontractors, and maintainers) significantly drives the specifics of how a process is defined. This process parameter ranges from cohesive to adversarial. Cohesive teams have common goals, complementary skills, and close communications.
3. Process Flexibility or Rigor: The implementation of the project's process depends on the degree of rigor, formality, and change freedom evolved from the project's contract (vision document, business case, and development plan).
4. Process Maturity: The process maturity level of the development organization is a key driver of management complexity. Managing a mature process is much simpler than managing an immature process. Organizations with a mature process have a high level of precedent experience in developing software and a high level of existing process collateral that enables predictable planning and execution of the process.
5. Architectural Risk: The degree of technical feasibility is an important dimension of defining a specific project's process. There are many sources of architectural risk: (1) system performance, which includes resource utilization, response time, throughput, and accuracy; (2) robustness to change, which includes addition of new features and incorporation of new technology; and (3) system reliability, which includes predictable behaviour and fault tolerance.
6. Domain Experience: The development organization's domain experience governs its ability to converge on an acceptable architecture in a minimum number of iterations.

VI. Future Software Project Management:


6.1 Modern Project Profiles:

 Continuous Integration
 Early Risk Resolution
 Evolutionary Requirements
 Teamwork Among Stakeholders
 Top 10 Software Management Principles
 Software Management Best Practices

1. Continuous Integration: The continuous integration inherent in an iterative development process enables better insight into quality trade-offs. System characteristics that are largely inherent in the architecture (performance, fault tolerance, maintainability) are tangible earlier in the process, when issues are still correctable.
2. Early Risk Resolution: Conventional projects usually do the easy stuff first; a modern process attacks the important 20% of the requirements, use cases, components, and risks. The effect of the overall life-cycle philosophy on the 80/20 lessons provides a useful risk management perspective.
3. Evolutionary Requirements: Conventional approaches decomposed system requirements into subsystem requirements, subsystem requirements into component requirements, and component requirements into unit requirements.
4. Teamwork Among Stakeholders: Many aspects of the classic development process cause stakeholder relationships to degenerate into mutual distrust, making it difficult to balance requirements, product features, and plans.
5. Top 10 Software Management Principles:
   1. Base the process on an architecture-first approach so that rework rates remain stable over the project life cycle.
   2. Establish an iterative life-cycle process that confronts risk early.
   3. Transition design methods to emphasize component-based development.
   4. Establish a change management environment; the dynamics of iterative development, including concurrent workflows by different teams working on shared artifacts, necessitate highly controlled baselines.
   5. Enhance change freedom through tools that support round-trip engineering.
   6. Capture design artifacts in rigorous, model-based notation.
   7. Instrument the process for objective quality control and progress assessment.
   8. Use a demonstration-based approach to assess intermediate artifacts.
   9. Plan intermediate releases in groups of usage scenarios with evolving levels of detail.
   10. Establish a configurable process that is economically scalable.
6. Software Management Best Practices: There are nine best practices:
   1. Formal risk management
   2. Agreement on interfaces
   3. Formal inspections
   4. Metric-based scheduling and management
   5. Binary quality gates at the inch-pebble level
   6. Program-wide visibility of progress versus plan
   7. Defect tracking against quality targets
   8. Configuration management
   9. People-aware management accountability

6.2 Next-Generation Software Economics:

Software experts hold widely varying opinions about software economics and its manifestation in software cost estimation models. It will be difficult to improve empirical estimation models while the project data going into these models are noisy and highly uncorrelated, and are based on differing process and technology foundations. Two major improvements in next-generation software cost estimation models are:

 Separation of the engineering stage from the production stage will force estimators to differentiate between architectural scale and implementation size.
 Rigorous design notations such as UML will offer an opportunity to define units of measure for scale that are more standardized and can therefore be automated and tracked.
