
UNIT 1

THE PROCESS

1.1 Traits (Qualities) of Successful Projects

A successful software project is one whose deliverables satisfy and possibly exceed the customer's expectations, was developed in a timely and economical fashion, and is resilient to change and adaptation. Two traits are common to all of the successful object-oriented systems we have encountered:

1. The existence of a strong architectural vision
2. The application of a well-managed iterative and incremental development life cycle

1.1.1 Architectural Vision
A system that has a sound architecture is one that has conceptual integrity, and conceptual integrity is the most important consideration in system design. The architecture of an object-oriented software system encompasses its class and object structure, organized in terms of distinct layers and partitions. Good software architectures tend to have several attributes in common:

They are constructed in well-defined layers of abstraction, each layer representing a coherent abstraction, provided through a well-defined and controlled interface, and built upon equally well-defined and controlled facilities at lower levels of abstraction. There is a clear separation of concerns between the interface and implementation of each layer, making it possible to change the implementation of a layer without violating the assumptions made by its clients. The architecture is simple: common behavior is achieved through common abstractions and common mechanisms.
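To make the layering idea concrete, here is a minimal C++ sketch (all class names are illustrative assumptions, not from the source): a higher layer depends only on a well-defined interface, so the implementation beneath it can change without violating any client's assumptions.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical persistence layer: clients program against this
// well-defined interface only, so the representation below can change
// without violating any assumptions made by those clients.
class RecordStore {
public:
    virtual ~RecordStore() = default;
    virtual void save(const std::string& record) = 0;
    virtual std::size_t count() const = 0;
};

// One interchangeable implementation, built on lower-level facilities
// (here an in-memory vector stands in for a real storage service).
class InMemoryStore : public RecordStore {
public:
    void save(const std::string& record) override { records_.push_back(record); }
    std::size_t count() const override { return records_.size(); }
private:
    std::vector<std::string> records_;
};

// A higher layer of abstraction that depends only on the layer below
// through its controlled interface, never on a concrete store.
class AuditLog {
public:
    explicit AuditLog(RecordStore& store) : store_(store) {}
    void record(const std::string& event) { store_.save(event); }
private:
    RecordStore& store_;
};
```

Swapping InMemoryStore for, say, a file-backed store would leave AuditLog untouched, which is precisely the separation of interface and implementation described above.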

1.1.2 Iterative and Incremental Life Cycle
Consider two extremes: an organization that has no well-defined development life cycle, and one that has very rigid and strictly enforced policies that dictate every aspect of development. In the former case, we have anarchy: through the hard work and individual contributions of a few developers, the team may eventually produce something of value, but we can never reliably predict anything: not progress to date, not work remaining, and certainly not quality. In the latter case, we have a dictatorship, in which creativity is punished, experimentation that could yield a more elegant architecture is discouraged, and the customer's real expectations are never correctly communicated to the lowly developer hidden behind a veritable paper wall erected by the organization's bureaucracy. The successful object-oriented projects we have encountered follow neither anarchic nor draconian development life cycles.

A process that leads to the successful construction of object-oriented architectures tends to be both iterative and incremental. Such a process is iterative in the sense that it involves the successive refinement of an object-oriented architecture, from which we apply the experience and results of each release to the next iteration of analysis and design. The process is incremental in the sense that each pass through an analysis/design/evolution cycle leads us to gradually refine our strategic and tactical decisions, ultimately converging upon a solution that meets the end user's real (and usually unstated) requirements, and yet is simple, reliable, and adaptable. An iterative and incremental development life cycle is the antithesis of the traditional waterfall life cycle, and so represents neither a strictly top-down nor a bottom-up process.

1.2 Towards a Rational (Balanced) Design Process
Having a prescriptive process is fundamental to the maturity of a software organization. There are five distinct levels of process maturity defined by the CMM (Capability Maturity Model):

1. Initial: The development process is ad hoc and often chaotic. Organizations can progress by introducing basic project controls.
2. Repeatable: The organization has reasonable control over its plans and commitments. Organizations can progress by institutionalizing a well-defined process.
3. Defined: The development process is reasonably well-defined, understood, and practiced; it serves as a stable foundation for calibrating the team and predicting progress. Organizations can progress by instrumenting their development process.
4. Managed: The organization has quantitative measures of its process. Organizations can progress by lowering the cost of gathering this data, and by instituting practices that permit this data to influence the process.
5. Optimizing: The organization has in place a well-tuned process that consistently yields products of high quality in a predictable, timely, and cost-effective manner.

Unfortunately, we will never find a process that allows us to design software in a perfectly rational way, because of the need for creativity and innovation during the development process. However, we will come closer to a rational process if we try to follow the process rather than proceed on an ad hoc basis. As we move our development organizations to higher levels of maturity, how then do we reconcile the need for creativity and innovation with the requirement for more controlled management practices? The answer appears to lie in distinguishing the micro and macro elements of the development process. The micro process is more closely related to Boehm's spiral model of development, and serves as the framework for an iterative and incremental approach to development. The macro process is more closely related to the traditional waterfall life cycle, and serves as the controlling framework for the micro process.

1.3 The Micro Development Process

To a large extent, the micro process represents the daily activities of the individual developer or a small team of developers. The micro process of object-oriented development is largely driven by the stream of scenarios and architectural products that emerge from, and that are successively refined by, the macro process. The micro process applies equally to the software engineer and the software architect. From the perspective of the engineer, the micro process offers guidance in making the many tactical decisions that are part of the daily production and adaptation of the architecture. From the perspective of the architect, the micro process offers a framework for evolving the architecture and exploring alternative designs.

The different phases of a software project, such as design, programming, and testing, cannot be strictly separated. The micro process tends to track the following activities:

1. Identify the classes and objects at a given level of abstraction.
2. Identify the semantics of these classes and objects.
3. Identify the relationships among these classes and objects.
4. Specify the interface and then the implementation of these classes and objects.

1.3.1 Identifying Classes and Objects 1.3.1.1 Purpose


The purpose of identifying classes and objects is to establish the boundaries of the problem at hand; additionally, this activity is the first step in devising an object-oriented decomposition of the system under development. As part of analysis, we apply this step to discover those abstractions that form the vocabulary of the problem domain, and by so doing we begin to constrain our problem by deciding what is and what is not of interest. As part of design, we apply this step to invent new abstractions that form elements of the solution. As implementation proceeds, we apply this step to invent lower-level abstractions that we can use to construct higher-level ones, and to discover commonality among existing abstractions, which we can then exploit to simplify the system's architecture.

1.3.1.2 Products
The central product of this step is a data dictionary that is updated as development proceeds.

1.3.1.3 Activities
The identification of classes and objects involves two activities: discovery and invention. In each case, we carry out these activities by applying any of the various approaches to classification. A typical order of events might be the following:

1. Apply the classical approach to object-oriented analysis to generate a set of candidate classes and objects.
2. Apply the techniques of behavior analysis to identify abstractions that are directly related to system function points.
3. From the relevant scenarios generated as part of the macro process, apply the techniques of use-case analysis.
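As a simple illustration of the central product named above, a data dictionary can start as little more than a table of abstractions and their responsibilities. This C++ sketch is an assumption for illustration only; all names are hypothetical.

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Minimal sketch of a data dictionary: each candidate abstraction is
// recorded by name, with responsibilities attached and refined as
// development proceeds. All names here are illustrative assumptions.
class DataDictionary {
public:
    void addAbstraction(const std::string& name) { entries_[name]; }
    void addResponsibility(const std::string& name, const std::string& resp) {
        entries_[name].push_back(resp);
    }
    bool contains(const std::string& name) const {
        return entries_.find(name) != entries_.end();
    }
    std::size_t size() const { return entries_.size(); }
private:
    std::map<std::string, std::vector<std::string>> entries_;
};
```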

1.3.1.4 Milestones and Measures
We successfully complete this phase when we have a reasonably stable data dictionary. Because of the iterative and incremental nature of the micro process, we don't expect to complete or freeze this dictionary until very late in the development process. Rather, it is sufficient that we have a dictionary containing an ample set of abstractions, consistently named and with a sensible separation of responsibilities.

1.3.2 Identifying the Semantics of Classes and Objects

1.3.2.1 Purpose
The purpose of identifying the semantics of classes and objects is to establish the behavior and attributes of each abstraction identified in the previous phase. As part of analysis, we apply this step to allocate responsibilities for different system behaviors. As part of design, we apply this step to achieve a clear separation of concerns among the parts of our solution. As implementation proceeds, we move from free-form descriptions of roles and responsibilities to specifying a concrete protocol for each abstraction.

1.3.2.2 Products
Several products flow from this step. The first is a refinement of the data dictionary, whereby we initially attach responsibilities to each abstraction. As development proceeds, we may create specifications for each abstraction, stating the named operations that form the protocol of each class.
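Such a move from free-form responsibilities to a concrete protocol might look like this C++ sketch; the sensor class and its operations are assumed for illustration, not taken from the source.

```cpp
#include <cstddef>
#include <vector>

// Illustrative refinement from responsibility to protocol: the informal
// responsibility "keep track of readings and report the current value"
// becomes the named operations below, which together form the class's
// protocol. The class itself is a hypothetical example.
class TemperatureSensor {
public:
    void calibrate(double offset) { offset_ = offset; }
    void record(double reading) { readings_.push_back(reading + offset_); }
    double current() const { return readings_.empty() ? 0.0 : readings_.back(); }
    std::size_t historySize() const { return readings_.size(); }
private:
    double offset_ = 0.0;            // applied to every subsequent reading
    std::vector<double> readings_;   // calibrated readings, oldest first
};
```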

1.3.2.3 Activities There are three activities associated with this step: storyboarding, isolated class design, and pattern scavenging. The primary and peripheral scenarios generated by the macro process are the main drivers of storyboarding. This activity represents a top-down identification of semantics and, where it concerns system function points, addresses strategic issues. A typical order of events might be the following:
Select one scenario or a set of scenarios related to a single function point; from the previous step, identify those abstractions relevant to the scenario.

Walk through the activity of the scenario, assigning responsibilities to each abstraction sufficient to accomplish the desired behavior.

As the storyboarding proceeds, reallocate responsibilities so that there is a reasonably balanced distribution of behavior. Where possible, reuse or adapt existing responsibilities.

1.3.2.4 Milestones and Measures
We successfully complete this phase when we have a reasonably sufficient, primitive, and complete set of responsibilities and operations for each abstraction.

1.3.3 Identifying the Relationships among Classes and Objects


1.3.3.1 Purpose
The purpose of identifying the relationships among classes and objects is to set the boundaries of, and to recognize the collaborators with, each abstraction identified earlier in the micro process. This activity formalizes the conceptual as well as physical separations of concern among abstractions begun in the previous step. As part of analysis, we apply this step to specify the associations among classes and objects (including certain important inheritance and aggregation relationships). As part of design, we apply this step to specify the collaborations that form the mechanisms of our architecture, as well as the higher-level clustering of classes into categories and modules into subsystems. As implementation proceeds, we refine relationships such as associations into more implementation-oriented relationships, including instantiation and use.

1.3.3.2 Products
Class diagrams, object diagrams, and module diagrams are the primary products of this step. During analysis, we produce class diagrams that state the associations among abstractions, and add details from the previous step (the operations and attributes for certain abstractions) as needed to capture the important subtleties of our decisions. During design, we refine these diagrams to show the tactical decisions we have made about inheritance, aggregation, instantiation, and use. As implementation proceeds, we must make decisions about the physical packaging of our system into modules, and the allocation of processes to processors. These are both decisions about relationships, which we can express in module and process diagrams. Our data dictionary is updated as part of this step as well, to reflect the allocation of classes and objects to categories and of modules to subsystems.

1.3.3.3 Activities There are three activities associated with this step: the specification of associations, the identification of various collaborations, and the refinement of associations. A typical order of events for this activity might be the following:

1. Collect a set of classes at a given level of abstraction, or associated with a particular family of scenarios.
2. Consider the presence of a semantic dependency between any two classes, and establish an association if such a dependency exists.
3. For each association, specify the role of each participant, as well as any relevant cardinality or other kind of constraint.
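The third step above, specifying roles and cardinality, can be made concrete in code. This C++ sketch assumes hypothetical Customer and Order classes; none of these names come from the source.

```cpp
#include <cstddef>
#include <string>
#include <vector>

class Order;  // forward declaration for the association

// Hypothetical Customer/Order association: one Customer places many
// Orders (cardinality 1:N), and each Order refers back to exactly one
// Customer in the role of "placer". All names are illustrative.
class Customer {
public:
    explicit Customer(std::string name) : name_(std::move(name)) {}
    const std::string& name() const { return name_; }
    void addOrder(Order* order) { orders_.push_back(order); }
    std::size_t orderCount() const { return orders_.size(); }
private:
    std::string name_;
    std::vector<Order*> orders_;  // the "many" side of the association
};

class Order {
public:
    Order(int id, Customer& placer) : id_(id), placer_(placer) {
        placer_.addOrder(this);  // keep both ends of the link consistent
    }
    int id() const { return id_; }
    const Customer& placer() const { return placer_; }
private:
    int id_;
    Customer& placer_;  // the "one" side: every order has one placer
};
```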

1.3.3.4 Milestones and Measures

We successfully complete this phase when we have specified the semantics and relationships among certain interesting abstractions sufficiently to serve as a blueprint for their implementation. Measures of goodness include cohesion, coupling, and completeness.

1.3.4 Implementing Classes and Objects


1.3.4.1 Purpose During analysis, the purpose of implementing classes and objects is to provide a refinement of existing abstractions sufficient to unveil new classes and objects at the next level of abstraction, which we then feed into the following iteration of the micro process. During design, the purpose of this activity is to create tangible representations of our abstractions in support of the successive refinement of the executable releases in the macro process.

1.3.4.2 Products
Decisions about the representation of each abstraction and the mapping of these representations to the physical model drive the products of this step. Early in the development process, we may capture these tactical representation decisions in the form of refined class specifications. Where these decisions are of general interest or represent opportunities for reuse, we also document them in class diagrams (showing their static semantics) and finite state machines or interaction diagrams (showing their dynamic semantics). As development proceeds, and as we make further bindings to the given implementation language, we begin to deliver pseudocode or executable code; we may use method-specific tools that automatically forward-engineer code from these diagrams, or reverse-engineer diagrams from the implementation. As part of this step, we also update our data dictionary, including the new classes and objects that we discovered or invented in formulating the implementation of existing abstractions.

1.3.4.3 Activities
There is one primary activity associated with this step: the selection of the structures and algorithms that provide the semantics of the abstractions we identified earlier in the micro process. Whereas the first three phases of the micro process focus upon the outside view of our abstractions, this step focuses upon their inside view. A typical order of events for this activity might be the following:

1. For each class, identify the patterns of use among clients, in order to determine which operations are central, and hence should be optimized.
2. Before choosing a representation from scratch, consider the use of protected or private inheritance for implementation, or the use of parameterized classes.
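The suggestion of reusing an existing representation via protected or private inheritance, or a parameterized class, might look like the following C++ sketch; the Queue class is an assumption for illustration.

```cpp
#include <cstddef>
#include <deque>

// Sketch of reusing an existing representation rather than building one
// from scratch: a Queue implemented via private inheritance from
// std::deque, exposing only the operations of the queue protocol.
// The class itself is an illustrative example, not from the source.
template <typename T>
class Queue : private std::deque<T> {
    using Rep = std::deque<T>;  // the hidden representation
public:
    void enqueue(const T& item) { Rep::push_back(item); }
    T dequeue() {
        T front = Rep::front();
        Rep::pop_front();
        return front;
    }
    bool isEmpty() const { return Rep::empty(); }
    std::size_t size() const { return Rep::size(); }
};
```

Because the inheritance is private, clients cannot bypass the queue protocol and call deque operations directly; the representation remains an implementation detail.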

1.3.4.4 Milestones and Measures During analysis, we successfully complete this phase once we have identified all the interesting abstractions necessary to satisfy the responsibilities of higher-level abstractions identified during this pass through the micro process. During design, we successfully complete this phase when we have an executable or near-executable model of our abstractions. The primary measure of goodness for this phase is simplicity.

1.4 The Macro Development Process


Overview
The macro process serves as the controlling framework for the micro process. This broader procedure dictates a number of measurable products and activities that permit the development team to meaningfully assess risk and make early corrections to the micro process, so as to better focus the team's analysis and design activities. The macro process represents the activities of the entire development team on the scale of weeks to months at a time. The macro process is primarily the concern of the development team's technical management, whose focus is subtly different from that of the individual developer: the macro process focuses upon risk and architectural vision, the two manageable elements that have the greatest impact upon schedules, quality, and completeness.

The macro process tends to track the following activities:

1. Establish the core requirements for the software (conceptualization).
2. Develop a model of the system's desired behavior (analysis).
3. Create an architecture for the implementation (design).
4. Evolve the implementation through successive refinement (evolution).
5. Manage post-delivery evolution (maintenance).

1.4.1 Conceptualization
1.4.1.1 Purpose Conceptualization seeks to establish the core requirements for the system. For any truly new piece of software, or even for the novel adaptation of an existing system, there exists some moment in time where, in the mind of the developer, the architect, the analyst, or the end user, there springs forth an idea for some application.

1.4.1.2 Products Prototypes are the primary products of conceptualization. Specifically, for every significant new system, there should be some proof of concept, manifesting itself in the form of a quick-and-dirty prototype. Such prototypes are by their very nature incomplete and only marginally engineered.

1.4.1.3 Activities
A typical order of events is the following:

1. Establish a set of goals for the proof of concept, including criteria for when the effort is to be finished.
2. Assemble an appropriate team to develop the prototype. Often, this may be a team of one (who is usually the original visionary). The best thing the development organization can do to facilitate the team's efforts is to stay out of its way.
3. Evaluate the resulting prototype, and make an explicit decision for product development or further exploration. A decision to develop a product should be made with a reasonable assessment of the potential risks, which the proof of concept should uncover.

1.4.1.4 Milestones and Measures
It is important that explicit criteria be established for completion of a prototype. Proofs of concept are often schedule-driven (meaning that the prototype must be delivered on a certain date) rather than feature-driven. This is not necessarily bad, for it artificially limits the prototyping effort and discourages the tendency to deliver a production system prematurely. Upper management can often measure the health of the software development organization by measuring its response to new ideas. Any organization that is not itself producing new ideas is dead, or in a moribund business; the most prudent action is usually to diversify or abandon the business. In contrast, any organization that is overwhelmed with new ideas and yet is unable to make any intelligent prioritization of them is out of control. Such organizations often waste significant development resources by jumping to product development too early, without exploring the risks of the effort through a proof of concept.

1.4.2 Analysis

1.4.2.1 Purpose The purpose of analysis is to provide a description of a problem. The description must be complete, consistent, readable, and reviewable by diverse interested parties, and testable against reality. The purpose of analysis is to provide a model of the system's behavior. We must emphasize that analysis focuses upon behavior, not form. Analysis must yield a statement of what the system does, not how it does it.

1.4.2.2 Products The output of analysis is a description of the function of the system, along with statements about performance and resources required. In object-oriented development, we capture these descriptions through scenarios, where each scenario denotes some particular function point. We use primary scenarios to illustrate key behaviors, and secondary scenarios to show behavior under exceptional conditions. A secondary product of analysis is a risk assessment that identifies the known areas of technical risk that may impact the design process. Facing up to the presence of risks early in the development process makes it far easier to make pragmatic architectural trade-offs later in the development process.

1.4.2.3 Activities
Two primary activities are associated with analysis: domain analysis and scenario planning. Domain analysis seeks to identify the classes and objects that are common to a particular problem domain. Scenario planning is the central activity of analysis. A typical order of events for this activity follows:

1. Identify all the primary function points of the system and, if possible, group them into clusters of functionally related behaviors.
2. For each interesting set of function points, storyboard a scenario, using the techniques of use-case and behavior analysis. CRC card techniques are effective in brainstorming about each scenario. As needed, generate secondary scenarios that illustrate behavior under exceptional conditions.
3. Where the life cycle of certain objects is significant or essential to a scenario, develop a finite state machine for the class of objects.
4. Scavenge for patterns among scenarios, and express these patterns in terms of more abstract, generalized scenarios, or in terms of class diagrams showing the associations among key abstractions.
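A finite state machine for the life cycle of a class of objects, as suggested above, might be sketched as follows in C++; the Order states and events are purely illustrative assumptions.

```cpp
#include <stdexcept>

// Minimal finite state machine for the life cycle of a hypothetical
// Order object: Placed -> Shipped -> Delivered, with cancellation
// allowed only before shipment. States and events are illustrative.
class OrderLifecycle {
public:
    enum class State { Placed, Shipped, Delivered, Cancelled };

    State state() const { return state_; }

    void ship()    { require(state_ == State::Placed);  state_ = State::Shipped; }
    void deliver() { require(state_ == State::Shipped); state_ = State::Delivered; }
    void cancel()  { require(state_ == State::Placed);  state_ = State::Cancelled; }

private:
    static void require(bool ok) {
        if (!ok) throw std::logic_error("invalid state transition");
    }
    State state_ = State::Placed;
};
```

Encoding the transitions explicitly makes the exceptional conditions of the secondary scenarios (here, an attempt to cancel after shipment) testable.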

1.4.2.4 Milestones and Measures We successfully complete this phase when we have developed and signed off on scenarios for all fundamental system behaviors. By signed off we mean that the resulting analysis products have been validated by the domain expert, end user, analyst, and architect; by fundamental we refer to behaviors that are central to the application's purpose. Another important milestone of analysis is delivery of a risk assessment, which helps the team to manage future strategic and tactical tradeoffs.

1.4.3 Design

1.4.3.1 Purpose
The purpose of design is to create an architecture for the evolving implementation, and to establish the common tactical policies that must be used by disparate elements of the system. We begin the design process as soon as we have some reasonably complete model of the behavior of the system.

1.4.3.2 Products
There are two primary products of design: a description of the architecture, and descriptions of common tactical policies. We may describe an architecture through diagrams as well as architectural releases of the system. At the architectural level, it is most important to show the clustering of classes into class categories (for the logical architecture) and the clustering of modules into subsystems (for the physical architecture). We may deliver these diagrams as part of a formal architecture document, which should be reviewed with all interested parties.

1.4.3.3 Activities
There are three activities associated with design: architectural planning, tactical design, and release planning. Architectural planning involves devising the layers and partitions of the overall system. It encompasses a logical decomposition, representing a clustering of classes, as well as a physical decomposition, representing a clustering of modules and the allocation of functions to different processors. A typical order of events for this activity is as follows:

Consider the clustering of function points from the products of analysis, and allocate these to layers and partitions of the architecture. Functions that build upon one another should fall into different layers; functions that collaborate to yield behaviors at a similar level of abstraction should fall into partitions, which represent peer services.

Validate the architecture by creating an executable release that partially satisfies the semantics of a few interesting system scenarios as derived from analysis. Instrument that architecture and assess its weaknesses and strengths. Identify the risk of each key architectural interface so that resources can be meaningfully allocated as evolution commences.

The focus of architectural planning is to create very early in the life cycle a domain-specific application framework that we may successively refine. Tactical design involves making decisions about the myriad of common policies.

1. Relative to the given application domain, enumerate the common policies that must be addressed by disparate elements of the architecture. Some such policies are foundational, meaning that they address domain-independent issues such as memory management and error handling. Other policies are domain-specific, and include idioms and mechanisms that are germane to that domain, such as control policies in real-time systems, or transaction and database management in information systems.
2. For each common policy, develop a scenario that describes the semantics of that policy. Further capture its semantics in the form of an executable prototype that can be instrumented and refined.
3. Document each policy and carry out a peer walkthrough, so as to broadcast its architectural vision.

Release planning sets the stage for architectural evolution. Taking the required function points and risk assessment generated during analysis, release planning serves to identify a controlled series of architectural releases, each growing in its functionality, ultimately encompassing the requirements of the complete production system. A typical order of events for this activity is as follows:

1. Given the scenarios identified during analysis, organize them in order of foundational to peripheral behaviors. Prioritizing scenarios can best be accomplished with a team including a domain expert, analyst, architect, and quality-assurance personnel.
2. Allocate the related function points to a series of architectural releases whose final delivery represents the production system. Adjust the goals and schedules of this stream of releases so that delivery dates are sufficiently separated to allow adequate development time, and so that releases are synchronized with other development activities, such as documentation and field testing.
3. Begin task planning, wherein a work breakdown structure is identified, along with the development resources necessary to achieve each architectural release.

A natural by-product of release planning is a formal development plan, which identifies the stream of architectural releases, team tasks, and risk assessments.
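One of the common tactical policies named under tactical design, a uniform error-handling convention, might be sketched as follows; the Result type and parsePort function are hypothetical illustrations, not part of the source.

```cpp
#include <string>

// Sketch of one common tactical policy: a uniform result convention for
// error handling, adopted by disparate elements of the architecture so
// that failures are reported the same way everywhere. The Result type
// and parsePort function are hypothetical illustrations.
struct Result {
    bool ok;
    std::string error;  // empty when ok is true

    static Result success() { return {true, ""}; }
    static Result failure(const std::string& why) { return {false, why}; }
};

// Any subsystem following the policy returns a Result rather than
// using ad hoc error codes or unchecked exceptions.
Result parsePort(const std::string& text, int& port) {
    try {
        port = std::stoi(text);
    } catch (...) {
        return Result::failure("not a number: " + text);
    }
    if (port < 1 || port > 65535) {
        return Result::failure("port out of range: " + text);
    }
    return Result::success();
}
```

Once such a policy is documented and walked through, every element of the architecture reports failures the same way, which is exactly the architectural simplicity the milestone below measures.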

1.4.3.4 Milestones and Measures We successfully complete this phase when we have validated the architecture through a prototype and through formal review. In addition, we must have signoff on the design of all primary tactical policies, and a plan for successive releases. The primary measure of goodness is simplicity. A good architecture is one that embodies the characteristics of organized complex systems. The main benefits of this activity are the early identification of architectural flaws and the establishment of common policies that yield a simpler architecture.

1.4.4 Evolution
1.4.4.1 Purpose The purpose of the evolutionary phase is to grow and change the implementation through successive refinement, ultimately leading to the production system. The evolution of architecture is largely a matter of trying to satisfy a number of competing constraints, including functionality, time, and space: one is always limited by the largest constraint.

1.4.4.2 Products
The primary product of evolution is a stream of executable releases representing successive refinements to the initial architectural release. Secondary products include behavioral prototypes that are used to explore alternative designs or to further analyze the dark corners of the system's functionality. These executable releases follow the schedule established in the earlier activity of release planning. Between each successive external release, the development team may also produce behavioral prototypes.

1.4.4.3 Activities
Two activities are associated with evolution: application of the micro process, and change management. The work that is carried out between executable releases represents a compressed development process, and so is essentially one spin of the micro process. This activity begins with an analysis of the requirements for the next release, proceeds to the design of the architecture, and continues with the invention of the classes and objects necessary to implement this design. A typical order of events for this activity is as follows:

1. Identify the function points to be satisfied by this executable release, as well as the areas of highest risk, especially those identified through evaluation of the previous release.
2. Assign tasks to the team to carry out this release, and initiate one spin of the micro process.
3. As needed to understand the semantics of the system's desired behavior, assign developers to produce behavioral prototypes.
4. Force closure of the micro process by integrating and releasing the executable release.

Change management exists in recognition of the incremental and iterative nature of object-oriented systems. It is tempting to allow undisciplined change to class hierarchies, class protocols, or mechanisms, but unrestrained change tends to rot the strategic architecture and leads to thrashing of the development team. In practice, we find that the following kinds of changes are to be expected during the evolution of a system:
o Adding a new class or a new collaboration of classes
o Changing the implementation of a class
o Changing the representation of a class
o Reorganizing the class structure
o Changing the interface of a class

1.4.4.4 Milestones and Measures We successfully complete this phase when the functionality and quality of the releases are sufficient to ship the product. The releases of intermediate executable forms are the major milestones we use to manage the development of the final product. The primary measure of goodness is therefore to what degree we satisfy the function points allocated to each intermediate release, and how well we met the schedules established during release planning. Two other essential measures of goodness include tracking defect discovery rates, and measuring the rate of change of key architectural interfaces and tactical policies.

1.4.5 Maintenance

1.4.5.1 Purpose
Maintenance is the activity of managing post-delivery evolution. This phase is largely a continuation of the previous phase, except that architectural innovation is less of an issue. Instead, more localized changes are made to the system as new requirements are added and lingering bugs stamped out. A program that is used in a real-world environment must necessarily change or become less and less useful in that environment (the law of continuing change). As an evolving program changes, its structure becomes more complex unless active efforts are made to avoid this phenomenon.

1.4.5.2 Products
Since maintenance is in a sense the continued evolution of a system, its products are similar to those of the previous phase. In addition, maintenance involves managing a punch list of new tasks. As more users exercise the system, new bugs and patterns of use will be uncovered that quality assurance could not anticipate. A punch list serves as the vehicle for collecting bugs and enhancement requirements, so that they can be prioritized for future releases.

1.4.5.3 Activities
Maintenance involves activities that are little different from those required during the evolution of a system. Especially if we have done a good job on the original architecture, adding new functionality or modifying some existing behavior will come naturally. In addition to the usual activities of evolution, maintenance involves a planning activity that prioritizes tasks on the punch list. A typical order of events for this activity is as follows:

1. Prioritize requests for major enhancements or bug reports that denote systemic problems, and assess the cost of redevelopment.
2. Establish a meaningful collection of these changes and treat them as function points for the next evolution.
3. If resources allow, add less intense, more localized enhancements (the so-called low-hanging fruit) to the next release.
4. Manage the next evolutionary release.

1.4.5.4 Milestones and Measures The milestones of maintenance involve continued production releases, plus intermediate bug releases. We know that we are still maintaining a system if the architecture remains resilient to change; we know we have entered the stage of preservation when responding to new enhancements begins to require excessive development resources.
