
Thursday, February 2, 2012

The Agile System Development Life Cycle (SDLC)



1. The Scope of Life Cycles

As we described in the book The Enterprise Unified Process (EUP), the scope of life cycles can vary dramatically.  For example, Figure 1 depicts the Scrum construction life cycle, Figure 2 depicts an extended version of that diagram which covers the full system development life cycle (SDLC), and Figure 3 extends that further by addressing enterprise-level disciplines via the EUP life cycle.  The points that I'm trying to make are:
  • System development is complicated.  Although it's comforting to think that development is as simple as Figure 1 makes it out to be, the fact is that we know it's not.  If you adopt a development process that doesn't actually address the full development cycle, then you've adopted little more than consultantware.  My experience is that you need to go beyond the construction life cycle of Figure 1 to the full SDLC of Figure 2 (OK, Retirement may not be all that critical) if you're to be successful.
  • There's more to IT than development.  To be successful at IT you must take a multi-system, multi-life cycle stage view as depicted in Figure 3.  The reality is that organizations have many potential projects in the planning stage (which I'll call Iteration -1 in this article), many in development, and many in production.
Figure 1 uses the terminology of the Scrum methodology.  The rest of this article uses the terminology popularized in the mid-1990s by the Unified Process (Sprint = Iteration, Backlog = Stack, Daily Scrum Meeting = Daily Meeting).  Figure 1 shows how agilists treat requirements like a prioritized stack, pulling just enough work off the stack for the current iteration (in Scrum iterations/sprints are often 30-days long, although this can vary).  At the end of the iteration the system is demoed to the stakeholders to verify that the work that the team promised to do at the beginning of the iteration was in fact accomplished.
Figure 1. The Scrum construction life cycle.

The Scrum construction life cycle of Figure 1, although attractive, proves to be a bit naive in practice.  Where does the product backlog come from?  Does it get beamed down from the Starship Enterprise?  Of course not; it's actually the result of initial requirements envisioning early in the project.  You don't only implement requirements during an iteration: you also fix defects (disciplined agile teams have a parallel testing effort during construction iterations where these defects are found), go on holiday, support other teams (perhaps as reviewers of their work), and so on.  So you really need to expand the product backlog into a full work items list.  You also release your system into production, often a complex endeavor.
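The work items list described above behaves like a prioritized stack that the team pulls just enough work from each iteration. Here is a minimal sketch of that idea; the class name, priorities, and item descriptions are invented for illustration and are not part of Scrum or any particular tool:

```python
import heapq

# Sketch of a work items list as a priority queue. Lower number = higher
# priority; the counter breaks ties so equal-priority items keep their order.
class WorkItemList:
    def __init__(self):
        self._items = []
        self._counter = 0

    def add(self, priority, description):
        heapq.heappush(self._items, (priority, self._counter, description))
        self._counter += 1

    def pull_for_iteration(self, capacity):
        """Pull just enough work off the top of the stack for one iteration."""
        pulled = []
        while self._items and len(pulled) < capacity:
            _, _, description = heapq.heappop(self._items)
            pulled.append(description)
        return pulled

backlog = WorkItemList()
backlog.add(2, "Fix defect found by parallel testing")
backlog.add(1, "Implement highest-priority requirement")
backlog.add(3, "Review another team's work")
print(backlog.pull_for_iteration(2))  # the two highest-priority items
```

Note that unlike a pure product backlog, the list holds defects and support tasks alongside requirements, which is exactly the expansion argued for above.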
A more realistic life cycle is captured in Figure 2, overviewing the full agile SDLC.  This SDLC is composed of six phases: Iteration -1, Iteration 0/Warm Up, Construction, Release/End Game, Production, and Retirement.  Although many agile developers may balk at the idea of phases (perhaps Gary Evans' analogy of development seasons is a bit more palatable), the fact is that it's been recognized that processes such as Extreme Programming (XP) and the Agile Unified Process (AUP) do in fact have phases (for diagrams, see the XP life cycle and AUP life cycle respectively).  The Disciplined Agile Delivery (DAD) lifecycle also includes phases (granted, I led the development of DAD).  Furthermore, the Agile MSF calls its phases/seasons "tracks".
Figure 2. A detailed agile SDLC.

Figure 3. The Enterprise Unified Process (EUP) life cycle.

Figure 4. The Agile SDLC (high-level).

On the surface, the agile SDLC of Figure 4 looks very much like a traditional SDLC, but when you dive deeper you quickly discover that this isn't the case.  This is particularly true when you consider the detailed view of Figure 2.  Because the agile SDLC is highly collaborative, iterative, and incremental, the roles which people take are much more robust than on traditional projects.  In the traditional world a business analyst creates a requirements model that is handed off to an architect, who creates design models that are handed off to a coder, who writes programs which are handed off to a tester, and so on.  On an agile project, developers work closely with their stakeholders to understand their needs, they pair together to implement and test their solution, and the solution is shown to the stakeholders for quick feedback.  Instead of specialists handing artifacts to one another, and thereby injecting defects at every step along the way, agile developers are generalizing specialists with full life cycle skills.

2. Iteration -1: Pre-Project Planning

Iteration -1, the “pre-Inception phase” in the Enterprise Unified Process (EUP), covers the pre-project aspects of portfolio management.  During this phase you will:
  1. Define the business opportunity.  You must consider the bigger business picture and focus on market concerns.  This includes exploring how the new functionality will improve your organization’s presence in the market, how it will impact profitability, and how it will impact the people within your organization.  This exploration effort should be brief; not all projects will make the initial cut, so you only want to invest enough effort at this point to get a good “gut feel” for the business potential.  A good strategy is to follow Outside-In Development’s focus on identifying the potential stakeholders and their goals, key information that helps identify the scope of the effort.
  2. Identify a viable strategy for the project.  There are several issues to consider when identifying a potential strategy for the project.  For example, do you build a new system or buy an existing package and modify it?  If you decide to build, do you do so onshore or offshore?  Will the work be done solely by your own development team, by a team from a system integrator (SI), or in partnership with the SI?  What development paradigm – traditional/waterfall, iterative, or agile – will you follow?  Will the team be co-located, near-located within the same geographic region, or far-located around the world?  As you can see there are many combinations of strategy available to you; at this point in time you may only be able to narrow the range of the possibilities and be forced to leave the final decision to the project team in future iterations.
  3. Assess the feasibility. During Iteration -1 you will want to do just enough feasibility analysis to determine if it makes sense to invest in the potential project.  Depending on the situation you may choose to invest very little effort in considering feasibility, for many systems just considering these issues for a few minutes is sufficient for now, and for some systems you may choose to invest days if not weeks exploring feasibility.  Many organizations choose to do just a little bit of feasibility analysis during Iteration -1, and then if they decide to fund the project they will invest more effort during Iteration 0.  In my experience you need to consider four issues when exploring feasibility: economic feasibility, technical feasibility, operational feasibility, and political feasibility.   Your feasibility analysis efforts should also produce a list of potential risks and criteria against which to make go/no-go decisions at key milestone points during your project.  Remember that agile teams only have a success rate of 72%, compared to 63% for traditional projects, implying that almost 30% of agile projects are considered failures.  Therefore you should question the feasibility of the project throughout the life cycle to reduce overall project risk.
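The four feasibility issues above lend themselves to a quick screening calculation. The following is a minimal sketch with invented weights and an invented go/no-go threshold; nothing here comes from a published method, and a real screen would be tuned to your organization:

```python
# Rough go/no-go feasibility screen across the four issues named above.
# Weights and threshold are illustrative assumptions only.
def feasibility_score(economic, technical, operational, political,
                      weights=(0.4, 0.3, 0.2, 0.1)):
    """Each factor is rated 0.0 (infeasible) to 1.0 (clearly feasible)."""
    factors = (economic, technical, operational, political)
    return sum(f * w for f, w in zip(factors, weights))

def go_no_go(score, threshold=0.6):
    return "go" if score >= threshold else "no-go"

score = feasibility_score(economic=0.8, technical=0.7,
                          operational=0.6, political=0.5)
print(round(score, 2), go_no_go(score))
```

The point of keeping the calculation this crude is the one made above: during Iteration -1 you only want a "gut feel", and you should revisit the score at milestone points throughout the life cycle.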

Iteration -1 activities can and should be as agile as you can possibly make them – you should collaborate with stakeholders who are knowledgeable enough and motivated enough to consider this potential project, and invest just enough effort to decide whether to consider funding the effort further.

3. Iteration 0/Warm Up: Project Initiation

The first week or so of an agile project is often referred to as “Iteration 0” (or "Cycle 0") or in The Eclipse Way the "Warm Up" iteration.  Your goal during this period is to initiate the project by:
  1. Garnering initial support and funding for the project.  This may have already been achieved via your portfolio management efforts, but realistically at some point somebody is going to ask what they are going to get, how much it is going to cost, and how long it is going to take.  You need to be able to provide reasonable, although potentially evolving, answers to these questions if you're going to get permission to work on the project.  You may need to justify your project via a feasibility study.
  2. Actively working with stakeholders to initially model the scope of the system.  As you see in Figure 5, during Iteration 0 agilists will do some initial requirements modeling with their stakeholders to identify the initial, albeit high-level, requirements for the system.  To promote active stakeholder participation you should use inclusive tools, such as index cards and white boards to do this modeling – our goal is to understand the problem and solution domain, not to create mounds of documentation.  The details of these requirements are modeled on a just in time (JIT) basis in model storming sessions during the development cycles.
  3. Starting to build the team.  Although your team will evolve over time, at the beginning of a development project you will need to start identifying key team members and start bringing them onto the team.  At this point you will want to have at least one or two senior developers, the project coach/manager, and one or more stakeholder representatives.
  4. Modeling an initial architecture for the system.  Early in the project you need to have at least a general idea of how you're going to build the system.  Is it a mainframe COBOL application?  A .Net application?  J2EE?  Something else?  As you see in Figure 5, the developers on the project will get together in a room, often around a whiteboard, discuss and then sketch out a potential architecture for the system.  This architecture will likely evolve over time, it will not be very detailed yet (it just needs to be good enough for now), and very little documentation (if any) needs to be written.  The goal is to identify an architectural strategy, not write mounds of documentation.  You will work through the design details later during development cycles in model storming sessions and via TDD.
  5. Setting up the environment.  You need workstations, development tools, a work area, ... for the team.  You don't need access to all of these resources right away, although at the start of the project you will need most of them.
  6. Estimating the project.  You'll need to put together an initial estimate for your agile project based on the initial requirements, the initial architecture, and the skills of your team.  This estimate will evolve throughout the project.
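An initial estimate of the kind described in point 6 can be as simple as dividing the initial requirements stack by an assumed team velocity. A minimal sketch with illustrative numbers (point totals, velocity, and iteration length are all invented):

```python
import math

# First-cut schedule estimate from the initial requirements and an assumed
# velocity. The estimate is expected to evolve throughout the project.
def initial_estimate(total_points, velocity_per_iteration,
                     weeks_per_iteration=2):
    """Return (iterations, calendar weeks) needed at the assumed velocity."""
    iterations = math.ceil(total_points / velocity_per_iteration)
    return iterations, iterations * weeks_per_iteration

iterations, weeks = initial_estimate(total_points=120,
                                     velocity_per_iteration=20)
print(iterations, weeks)  # 6 iterations, 12 weeks
```

As the text notes, this number is a starting point: as the requirements, architecture, and team evolve, so does the estimate.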

Figure 5: The Agile Model Driven Development (AMDD) life cycle.
The 2009 Agile Project Initiation Survey found that the average time to initiate an agile project was 3.9 weeks.  Figure 6 depicts the range of initiation periods.  Differences result from the complexity of the domain/problem space, the technical complexity of what you're trying to accomplish, the availability of stakeholders, the ability of stakeholders to come to agreement as to the scope, and the ability of the team to form itself and to obtain necessary resources.
Figure 6. How long did it take to initiate an agile project?


4. Construction Iterations

During construction iterations agilists incrementally deliver high-quality working software which meets the changing needs of our stakeholders, as overviewed in Figure 7.
Figure 7. Agile software development process during a construction iteration.
We achieve this by:
  1. Collaborating closely with both our stakeholders and with other developers.  We do this to reduce risk by tightening the feedback cycle and by improving communication via closer collaboration.
  2. Implementing functionality in priority order.  We allow our stakeholders to change the requirements to meet their exact needs as they see fit.  The stakeholders are given complete control over the scope, budget, and schedule – they get what they want and spend as much as they want for as long as they’re willing to do so.
  3. Analyzing and designing. We analyze individual requirements by model storming on a just-in-time (JIT) basis for a few minutes before spending several hours or days implementing the requirement.  Guided by our architecture models, often hand-sketched diagrams, we take a highly-collaborative, test-driven design (TDD) approach to development (see Figure 8) where we iteratively write a test and then write just enough production code to fulfill that test.  Sometimes, particularly for complex requirements or for design issues requiring significant forethought, we will model just a bit ahead to ensure that the developers don't need to wait for information.
  4. Ensuring quality.  Agilists are firm believers in following guidance such as coding conventions and modeling style guidelines.  Furthermore, we refactor our application code and/or our database schema as required to ensure that we have the best design possible. 
  5. Regularly delivering working software.  At the end of each development cycle/iteration you should have a partial, working system to show people.  Better yet, you should be able to deploy this software into a pre-production testing/QA sandbox for system integration testing.  The sooner, and more often, you can do such testing the better.  See Agile Testing and Quality Strategies: Discipline Over Rhetoric for more thoughts.
  6. Testing, testing, and yes, testing.  As you can see in Figure 9 agilists do a significant amount of testing throughout construction.  As part of construction we do confirmatory testing, a combination of developer testing at the design level and agile acceptance testing at the requirements level.  In many ways confirmatory testing is the agile equivalent of "testing against the specification" because it confirms that the software which we've built to date works according to the intent of our stakeholders as we understand it today. This isn't the complete testing picture: Because we are producing working software on a regular basis, at least at the end of each iteration although ideally more often, we're in a position to deliver that working software to an independent test team for investigative testing.  Investigative testing is done by test professionals who are good at finding defects which the developers have missed.  These defects might pertain to usability or integration problems, sometimes they pertain to requirements which we missed or simply haven't implemented yet, and sometimes they pertain to things we simply didn't think to test for.
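The test-first rhythm described in point 3 can be sketched as follows. The "requirement" (an order total with a discount) is invented purely for illustration:

```python
# Step 1: write the test first, before the production code exists. Running
# it at this point would fail with a NameError -- that failure is the cue
# to write production code.
def test_order_total_applies_discount():
    assert order_total([10.0, 20.0], discount=0.25) == 22.5

# Step 2: write just enough production code to fulfill the test.
def order_total(prices, discount=0.0):
    return sum(prices) * (1 - discount)

# Step 3: run the test, see it pass, refactor if needed, then repeat the
# cycle with the next small slice of the requirement.
test_order_total_applies_discount()
```

Each pass through this loop is tiny, which is what keeps the design test-driven rather than test-after.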
Figure 8. Taking a "test first" approach to construction.

Figure 9. Testing during construction iterations.


I would rather fail three months into a two-year project than three years into a two-year project.

5. Release Iteration(s): The "End Game"

During the release iteration(s), also known as the "end game", we transition the system into production.  Note that for complex systems the end game may prove to be several iterations, although if you've done system and user testing during construction iterations (as indicated by Figure 6) this likely won't be the case.  As you can see in Figure 10, there are several important aspects to this effort:
  1. Final testing of the system.  Final system and acceptance testing should be performed at this point, although as I pointed out earlier the majority of testing should be done during construction iterations.  You may choose to pilot/beta test your system with a subset of the eventual end users.  See the Full Life Cycle Object-Oriented Testing (FLOOT) method for more thoughts on testing.
  2. Rework.  There is no value testing the system if you don't plan to act on the defects that you find.  You may not address all defects, but you should expect to fix some of them.
  3. Finalization of any system and user documentation.  Some documentation may have been written during construction iterations, but it typically isn't finalized until the system release itself has been finalized, to avoid unnecessary rework.  Note that documentation is treated like any other requirement: it should be costed, prioritized, and created only if stakeholders are willing to invest in it.  Agilists believe that if stakeholders are smart enough to earn the money then they must also be smart enough to spend it appropriately.
  4. Training.  We train end users, operations staff, and support staff to work effectively with our system.
  5. Deploy the system.  See my article entitled System Deployment Tips and Techniques.
Figure 10. The AUP Deployment discipline workflow.

As you can see in Figure 11, agile teams take on average 4.6 weeks to transition their system into production according to the November 2010 Agile State of the Art Survey.  The figure also shows a wide range of times taken.  I believe this variance is driven by the complexity of the solution, the amount of deployment automation (teams that have adopted a continuous deployment strategy automate many of the technical aspects of transition), the comprehensiveness of the agile testing effort during construction, the need for manual efforts such as training and educating end users (or support or operations staff), and the organizational complexity of your environment.
Figure 11. Amount of time experienced agile teams invested in releasing/transitioning their solution into production.


6. Production

The goal of the Production Phase is to keep systems useful and productive after they have been deployed to the user community. This process will differ from organization to organization and perhaps even from system to system, but the fundamental goal remains the same: keep the system running and help users to use it. Shrink-wrapped software, for example, will not require operational support but will typically require a help desk to assist users. Organizations that implement systems for internal use will usually require an operational staff to run and monitor systems.
This phase ends when the release of a system has been slated for retirement or when support for that release has ended. The latter may occur immediately upon the release of a newer version, some time after the release of a newer version, or simply on a date that the business has decided to end support.  This phase typically has one iteration because it applies to the operational lifetime of a single release of your software. There may be multiple iterations, however, if you defined multiple levels of support that your software will have over time.

7. Retirement

The goal of the Retirement Phase is the removal of a system release from production, and occasionally even the complete system itself, an activity also known as system decommissioning or system sunsetting.  Retirement of systems is a serious issue faced by many organizations today as legacy systems are removed and replaced by new systems.  You must strive to complete this effort with minimal impact to business operations.  If you have tried this in the past, you know how complex it can be to execute successfully.  System releases are removed from production for several reasons, including:
  1. The system is being completely replaced.  It is not uncommon to see homegrown systems for human resource functions being replaced by COTS systems such as SAP or Oracle Financials. 
  2. The release is no longer to be supported.  Sometimes organizations will have several releases in production at the same time, and over time older releases are dropped.
  3. The system is no longer needed to support the current business model.  An organization may explore a new business area by developing new systems only to discover that it is not cost effective.
  4. The system is redundant.  Organizations that grow by mergers and/or acquisitions often end up with redundant systems as they consolidate their operations.
  5. The system has become obsolete.
In most cases, the retirement of older releases is handled during the deployment of a newer version of the system and is a relatively simple exercise.  Typically, the deployment of the new release includes steps to remove the previous release.  There are times, however, when you do not retire a release simply because you deploy a newer version.  This may happen if you cannot require users to migrate to the new release or if you must maintain an older system for backward compatibility.

8. Recommended Reading



Monday, December 12, 2011

Chapter 31
Reengineering


CHAPTER OVERVIEW AND COMMENTS

Reengineering became ‘corporate chic’ during the 1990s. At the corporate level, entire business processes were reengineered to make them more efficient and more competitive (at least in theory!). These changes had a trickle down effect on information systems. As business processes changed, the information technology that supported them also had to change.
         The intent of this chapter is to introduce business process reengineering (BPR) and discuss the management and technical aspects of software reengineering. The basic principles of BPR are introduced and an overall model for reengineering the business is discussed briefly. In reality BPR is a topic that is beyond the scope of most courses that use SEPA. It is introduced in this chapter because it is the driving force behind most software reengineering activities.
         The discussion of software reengineering begins with the “maintenance iceberg.” Even after almost 40 years, this metaphor rings true, and yet, many students have virtually no appreciation of the burden that maintenance places on the software community. The steps of the software reengineering process model are considered in some detail. A model for assessing the economics of software reengineering is presented at the conclusion of the chapter.

Critical Points:  BPR extends beyond the scope of software to the entire business. Yet the results of BPR can have a profound impact on information systems that service a business. Software maintenance is a significant burden for all software organizations. The software reengineering process encompasses inventory analysis, restructuring, reverse engineering, and forward engineering.




31.1     Business Process Reengineering
             
This section presents an overview of BPR with an emphasis on its impact, basic BPR principles, and the tasks that define the BPR model. Be sure to spend a moment on Section 31.1.4. There is significant hype associated with BPR, and its ramifications should be discussed with your students.
If time permits you might have the students conduct a BPR exercise by reengineering some process (e.g., registration) at your university. Follow the process outlined in this section.

31.2     Software Reengineering

It’s worth spending substantial time discussing software maintenance (Section 31.2.1) and its impact on the software community. The basic activities that are performed when software is to be reengineered are discussed in Section 31.2.2. The discussion begins with software maintenance and then continues into an overview of the software reengineering process model. Each task performed as part of the model is discussed briefly.

31.3     Reverse Engineering

This section considers reverse engineering activities and identifies key concepts that must be understood as information is extracted from an existing program. Reverse engineering techniques for processing, data, and user interfaces are all discussed. If time permits, distribute two undocumented pieces of code, one well structured and designed, the other a kludge. Have your students attempt to reverse engineer each and then draw conclusions from their experience.

31.4     Restructuring

Section 31.4 presents a brief overview of restructuring techniques for code and data. If time permits have students research one or more restructuring tools (see the sidebar in this section) and then discuss how they work on existing code.

31.5     Forward Engineering

This section presents an overview of forward engineering approaches for client/server systems, OO systems, and user interfaces.

31.6     The Economics of Reengineering

This section presents a simple cost-benefit model for reengineering. Not every application should be reengineered. The model presented in this section enables your students to compute the projected cost benefit of a reengineering activity.
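A simplified cost-benefit comparison in the spirit of this section might look like the sketch below. This is not the book's exact formulation, and every figure is illustrative; the idea is only that reengineering pays off when the maintenance savings over the system's remaining life exceed the cost of reengineering:

```python
# Projected cost benefit of reengineering: maintenance savings over the
# remaining system life, minus the one-time cost of the reengineering work.
def reengineering_benefit(annual_maintenance_cost, annual_maintenance_after,
                          reengineering_cost, expected_life_years):
    savings = ((annual_maintenance_cost - annual_maintenance_after)
               * expected_life_years)
    return savings - reengineering_cost

benefit = reengineering_benefit(
    annual_maintenance_cost=500_000,   # current yearly maintenance spend
    annual_maintenance_after=200_000,  # projected spend after reengineering
    reengineering_cost=600_000,        # one-time reengineering investment
    expected_life_years=5,             # remaining life of the system
)
print(benefit)  # 900000: positive, so reengineering is projected to pay off
```

Students can vary the inputs to see the chapter's central point: not every application should be reengineered, since a short remaining life or small savings makes the benefit negative.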

Chapter 32
The Road Ahead

CHAPTER OVERVIEW AND COMMENTS

The intent of this chapter is to provide a peek into the future of software engineering practice. Like any attempt at crystal ball gazing, my projections may be off the mark, but the general trends outlined in this final chapter of SEPA are likely to emerge in the years ahead.
In addition to prognostication, this chapter presents a brief discussion of software engineering ethics. You may choose to cover this material very early in a software engineering course, although I think it’s best to wait until a student has substantial knowledge of just what software engineering is. In any event, this is a very important topic and should be covered, even if time is short.

32.1     The Importance of Software—Revisited


This section revisits the importance of computer software. The key aspect of this discussion is "software as differentiator." It is interesting to have your students come up with examples of software as a differentiator for products and services.

32.2     The Scope of Change

The technologies noted in the sidebar are worth discussing. You might also have your students visit Wired magazine’s on-line site (www.wired.com) for many interesting articles and projections about future technologies.

32.3     People and the Way They Build Systems

People and cultures change very slowly. In this section, I make the argument that an evolving software engineering environment may have as much or more to do with people issues in software engineering as the people themselves do. As tools, interaction mechanisms, and methodologies mature, the culture for building software may change accordingly.

32.4     The New Software Engineering Process

The agile, incremental process model is discussed. If you did not emphasize Chapter 4, you might assign it now. Otherwise, focus on the milieu that “forces” many software teams to adopt this process model.

32.5     New Modes of Representing Information

Students spend much time thinking about data and program architectures, algorithms and the like. They spend very little time considering the intent of the data that is processed. This section considers the relationship between data, information, knowledge and wisdom.
You might relate some of this discussion to data mining in general and to specific applications that span multiple databases.

32.6     Technology as a Driver
Review the technology trends noted in the sidebar in this section.

32.7     The Software Engineering Responsibility

Software engineers should abide by a code of ethics that guides the work that they do and the products that they produce. The Software Engineering Code of Ethics and Professional Practices is well worth discussing with your students. To make the discussion more meaningful, you should pose specific business or personal situations and have your students indicate how they would react to them.

Chapter 29
Cleanroom Software Engineering


CHAPTER OVERVIEW AND COMMENTS

The late Harlan Mills (one of the true giants of the first half century of computing) suggested that software could be constructed in a way that eliminated all (or at least most) errors before delivery to a customer. He argued that proper specification, correctness proofs, and formal review mechanisms could replace haphazard testing, and as a consequence, very high quality computer software could be built. His approach, called cleanroom software engineering, is the focus of this chapter.
The cleanroom software engineering strategy introduces a radically different paradigm for software work. It emphasizes a special specification approach, formal design, correctness verification,  “statistical” testing, and certification as the set of salient activities for software engineering. The intent of this chapter is to introduce the student to each of these activities.
The chapter begins by introducing box structure specification, a more rigorous approach for representing the analysis model.  Next, refinement of the box structure specification is presented, followed by the correctness proofs that can be applied to the specification to verify that it is correct. The cleanroom approach to testing is radically different from that of more conventional software engineering paradigms. The chapter culminates by emphasizing the cleanroom testing approach.
The key concepts for students to understand are boxes, formal verification, probability distributions, and usage based testing. The mathematics and statistical background needed to read the chapter is not overwhelming. However, if you want to have your students do some cleanroom software development, you may need to do some additional teaching on program verification and statistical sampling.
Note:  Cleanroom software engineering has not been widely adopted in the industry, and for that reason, a number of reviewers for SEPA 6/e suggested that the chapter could be deleted. I disagree. Although the complete cleanroom paradigm may not be widely used, the individual elements of the approach have significant merit and provide students with insight that they may not otherwise obtain. Even if they don’t use cleanroom, your students can certainly adopt and use things like statistical use testing or proofs of correctness.

29.1  The Cleanroom Approach

This section introduces the key concepts of cleanroom software engineering and discusses its strengths and weaknesses. An outline of the basic cleanroom strategy is presented. Students will need some additional information on the use of box specifications and probability distributions before they can apply this strategy for their own projects.

29.2  Functional Specification

Functional specification using boxes is the focus of this section. It is important for students to understand the differences between black boxes (specifications), state boxes (architectural designs), and clear boxes (component designs). Even if students have a weak understanding of program verification techniques, they should be able to write box specifications for their own projects using the notations shown in this section.
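The progression from black box to state box to clear box can be illustrated with a toy counter. Real box structures are specifications rather than code, so this sketch (the counter "system" and class names are invented) only shows how detail is added at each refinement level:

```python
# Black box: only the externally visible stimulus -> response mapping.
#   stimulus history "press, n times" -> response "count is n"
# No state or procedure is visible at this level.

# State box: the black box refined with encapsulated state data.
class CounterStateBox:
    def __init__(self):
        self._count = 0      # state data introduced at this level

    def press(self):         # stimulus
        self._count += 1
        return self._count   # response

# Clear box: the state box refined with procedural design, here written
# out as a sequence of structured-programming steps.
class CounterClearBox(CounterStateBox):
    def press(self):
        new_count = self._count + 1  # 1. compute the new state
        self._count = new_count      # 2. store it
        return new_count             # 3. emit the response
```

Each level must exhibit the same external behavior as the one above it, which is exactly what the correctness verification in the next section checks.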

29.3  Cleanroom Design

If you plan to have your students verify their box specifications formally, you may need to show them some examples of the techniques used (if you did not already do so when covering Chapter 28). The key to making verification accessible to students at this level is to have them write procedural designs using only structured programming constructs in their designs. This will reduce considerably the complexity of the logic required to complete the proof. It is important for students to have a chance to consider the advantages offered by formal verification over exhaustive unit testing to try to identify defects after the fact.

29.4  Cleanroom Testing

This section provides an overview of statistical use testing and increment certification. It is important for students to understand that some type of empirical data needs to be collected to determine the probability distribution for the software usage pattern. The set of test cases created should reflect this probability distribution, and then random samples of these test cases may be used as part of the testing process. Some additional review of probability and sampling may be required. Students would benefit from seeing the process of developing usage test cases for a real software product. Developing usage test cases for their own projects will be difficult unless they have some means of acquiring projected usage pattern data. Certification is an important concept. Students should understand the differences among the certification models presented in this section as well.
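The core mechanism of statistical use testing can be sketched as weighted random sampling from an assumed usage profile. The operations and probabilities below are illustrative inventions; in practice the profile must come from the empirical usage data discussed above:

```python
import random

# An assumed usage profile: operation -> probability of occurrence.
usage_profile = {
    "open_account":  0.10,
    "deposit":       0.45,
    "withdraw":      0.40,
    "close_account": 0.05,
}

def sample_test_cases(profile, n, seed=None):
    """Draw n test cases at random, weighted by the usage profile, so the
    suite exercises the software the way users are expected to use it."""
    rng = random.Random(seed)
    operations = list(profile)
    weights = [profile[op] for op in operations]
    return rng.choices(operations, weights=weights, k=n)

suite = sample_test_cases(usage_profile, n=10, seed=42)
print(suite)  # a 10-case suite weighted toward the most common operations
```

Because the suite mirrors the usage distribution, observed failure rates during testing support statistical claims about reliability in actual use, which is the basis for certification.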

Chapter 30

Chapter 30
Component-Based Software Engineering


CHAPTER OVERVIEW AND COMMENTS

This chapter describes component-based software engineering (CBSE) as a process that emphasizes the design and construction of systems with reusable software components. CBSE has two parallel engineering activities, domain engineering (discussed earlier in the text) and component-based software development. The important point to emphasize to students is that custom software components are only created when existing components cannot be modified for reuse in the system under development. It is also important to remind students that formal technical reviews and testing are still used to ensure the quality of the final product. The advantage, of course, is that less time is spent designing and coding, so software can be produced less expensively. Students would benefit from having access to a library of commercial off-the-shelf components or at least a locally developed component library.

30.1  Engineering of Component-Based Systems           

It is important to have students understand the differences between CBSE and object-oriented software engineering. The biggest difference is that in CBSE, after the architectural design is established, the software team examines the requirements to see which can be satisfied by reusing existing components rather than constructing everything from scratch. In object-oriented software engineering, developers begin detailed design immediately after establishing the architectural design. Students need to understand the software activities that take place once reusable components are identified (component qualification, component adaptation, component composition, and component update).  These activities will be described in more detail later in the chapter. It may be worthwhile for students to be given a set of requirements and an indexed collection of reusable components and to try to determine which requirements can be satisfied by the reusable components and which cannot.
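The suggested exercise can be prototyped in a few lines (the library and requirements below are invented): match each requirement against an indexed component collection and partition the list into reuse candidates and custom work.

```python
# Hypothetical component index: component name -> features it provides.
library = {
    "AuthLib":   {"login", "logout", "password-reset"},
    "ReportGen": {"pdf-export", "csv-export"},
}

requirements = ["login", "csv-export", "barcode-scan"]

def partition_requirements(reqs, lib):
    """Split requirements into those a library component covers and the rest."""
    reusable, custom = {}, []
    for r in reqs:
        match = next((name for name, feats in lib.items() if r in feats), None)
        if match:
            reusable[r] = match    # candidate for qualification/adaptation
        else:
            custom.append(r)       # must be engineered from scratch
    return reusable, custom

reusable, custom = partition_requirements(requirements, library)
```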



30.2  The CBSE Process

This is a short section that contains schematic diagrams (Figure 30.1) for the two CBSE engineering activities (domain engineering and component-based engineering). Be sure students understand the interrelationships between the domain engineering and component-based engineering activities. The activities are discussed in more detail in the subsequent chapter sections.

30.3  Domain Engineering

Students should make sure they understand the three major domain engineering activities (analysis, construction, and dissemination). It is important for students to remember that the purpose of conducting domain analysis is to identify reusable software components. Structural modeling is an important pattern-based domain engineering approach. Students may benefit from trying to conduct their own domain analysis. Alternatively, they may benefit from discussing a real-world case study that includes domain analysis.
A useful exercise is to have students identify domain classes (or functions) for a domain such as “retailing” or “health care.”

30.4  Component-Based Development

Component-based software development activities are discussed in detail in this section. If students have access to a library of reusable components (or COTS components) they should be encouraged to use the composition techniques presented to assemble a new software product. Students may benefit from using one of the free component libraries like JavaBeans. Another good exercise might be to have students try to design one of their own software components so that it can be added to an existing software reuse library.

30.5  Classifying and Retrieving Components

This section discusses the issues associated with indexing and retrieving software components from a reuse library.  It also describes the necessary features for a component reuse environment. The material is presented at a fairly general level. If your students are familiar with multimedia databases and client-server computing, you might explore some of the implementation concerns that need to be addressed to construct a reuse repository by examining a real life example of one.
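For students who want to experiment, a toy faceted-classification index can be sketched directly (component names and facet values below are invented); retrieval is then a conjunctive match over facet values.

```python
# Invented reuse library indexed by facets (function, medium, language).
components = [
    {"name": "SortUtil",  "function": "sort",  "medium": "array", "lang": "Java"},
    {"name": "ListSort",  "function": "sort",  "medium": "list",  "lang": "C"},
    {"name": "TreeStore", "function": "store", "medium": "tree",  "lang": "Java"},
]

def retrieve(query, index):
    """Return components whose facets match every facet in the query."""
    return [c["name"] for c in index
            if all(c.get(facet) == value for facet, value in query.items())]

hits = retrieve({"function": "sort", "lang": "Java"}, components)
```

A richer environment would add synonym handling and partial matches, which makes a good discussion point about why real reuse repositories are hard to build.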

30.6  Economics of CBSE

This section discusses the economics of software reuse by examining its impact on software quality, programmer productivity, and system development cost. The calculations are not hard to follow. Students might appreciate seeing the benefits gained from reuse by examining project data from a real world example. Some discussion of structure points during project estimation appears in this section. Students might appreciate seeing a complete example that uses structure points as part of the cost estimation process. Some reuse metrics are defined. Students might be encouraged to compute the reuse metrics for their own software projects.
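A back-of-the-envelope calculation like the following (my own simplified model, not the structure-point formulas from the text) can seed class discussion; the 20% integration-cost factor is an assumption.

```python
# Illustrative reuse economics: compare building everything from scratch
# against reusing part of the system, where reused LOC still carry an
# integration cost (the 0.2 factor is assumed for illustration).

def reuse_savings(total_loc, reused_loc, cost_per_loc, reuse_cost_factor=0.2):
    """Savings = scratch cost - (new-code cost + integration cost of reuse)."""
    scratch_cost = total_loc * cost_per_loc
    new_code_cost = (total_loc - reused_loc) * cost_per_loc
    integration_cost = reused_loc * cost_per_loc * reuse_cost_factor
    return scratch_cost - (new_code_cost + integration_cost)

savings = reuse_savings(total_loc=10_000, reused_loc=4_000, cost_per_loc=25)
```

Students can vary `reuse_cost_factor` to see how quickly savings evaporate when adaptation is expensive.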

Part 5     Advanced Topics in Software Engineering

Chapter 28
Formal Methods


CHAPTER OVERVIEW AND COMMENTS

This chapter presents an introduction to the use of formal methods in software engineering. The focus of the discussion is on why formal methods allow software engineers to write better specifications than can be done using natural language. Students without previous exposure to set theory, logic, and proof of correctness (found in a discrete mathematics course) will need more instruction on these topics than is contained in this chapter.  The chapter contains several examples of specifications that are written using various levels of rigor. However, there is not sufficient detail for a student to learn the language (supplementary materials will be required).
If your students are familiar with a specification language (like OCL or Z) they should be encouraged to use it to write formal specifications for at least one of their own functions. If they are not familiar with a specification language, it may be worthwhile to teach them one (if your course is two semesters long).  Having students write correctness proofs for their function specifications will be difficult and you may need to review proof techniques and heuristics if you wish students to write proofs.

28.1  Basic Concepts

This section discusses the benefits of using formal specification techniques and the weaknesses of informal specification techniques. Many of the concepts of formal specification are introduced (without mathematics) through the presentation of three examples showing how formal specifications would be written using natural language. It may be worthwhile to revisit these examples after students have completed the chapter and have them write these specifications using mathematical notation or a specification language (like OCL or Z).



28.2  Mathematical Preliminaries

A review of the mathematics needed for the remainder of the chapter appears in this section. If your students have completed a good course in discrete mathematics, this review should be adequate. If they have not, some additional (and more concrete) examples will need to be presented to your students. Constructive set specification writing is a very important concept for your students to understand, as is work with predicate calculus and quantified logic. Formal proofs of set theory axioms and logic expressions are not necessary, unless you plan to have your students do correctness proofs for their specifications. Work with sequences may be less familiar to your students, if they have not worked with files and lists at an abstract level.
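For students who know Python better than set theory, constructive set specifications and quantified predicates map directly onto comprehensions and the built-ins `all` and `any`; the examples below are my own illustrations.

```python
# The constructive (comprehension-style) set specification
#   { n : N | n < 20 and n mod 3 = 0 }
# becomes a Python set comprehension:
multiples_of_3 = {n for n in range(20) if n % 3 == 0}

# The universally quantified predicate "for all n in S, n is even":
all_even = all(n % 2 == 0 for n in {2, 4, 6})

# The existentially quantified predicate "there exists n in S with n > 5":
some_large = any(n > 5 for n in {2, 4, 6})
```

The analogy is imperfect (Python sets are finite and computed eagerly), but it gives students a concrete reading of the notation.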

28.3  Applying Mathematical Notation for Formal Specification

This section uses mathematical notation to refine the block handler specification from Section 28.1.3. It may be desirable to refine the other two specification examples from Section 28.1 using similar notation. If your students are comfortable with mathematical proofs, you may wish to present an informal correctness proof for these three specifications. Having students write specifications for some of their own functions, using notation similar to that used in this section may be desirable.

28.4  Formal Specification Languages

This section discusses the properties of formal specification languages from a theoretical perspective. The next two sections use OCL and the Z specification language to rewrite the block handler specification more formally. You might have students try writing the specifications for their own functions using a pseudocode type notation embellished with comments describing semantic information.

28.5  Object Constraint Language (OCL)

This section presents a brief overview of OCL syntax and semantics and then applies OCL to the block handler example. The intent is to give the student a feel for OCL without attempting to teach the language. If time and inclination permit, the material presented here can be supplemented with additional OCL information from the UML specification or other sources.

28.6  Using Z to Represent an Example Software Component

This section presents a brief overview of Z syntax and semantics and then applies Z to the block handler example. The intent is to give the student a feel for Z without attempting to teach the language. If time and inclination permit, the material presented here can be supplemented with additional Z information.

28.7  The Ten Commandments of Formal Methods

Class discussion of the 10 commandments listed in this section would be beneficial to students. They need to be able to articulate, in their own words, why these commandments are important when using formal methods in software engineering.

28.8  Formal Methods—The Road Ahead

This section mentions several obstacles to using formal methods on large real-world development projects. Students should acknowledge that formal methods require discipline and hard work. They should also recognize that there are times when the risks associated with using informal specification techniques make the use of formal methods necessary.


A Detailed Example of the Z Language

To illustrate the practical use of a specification language, Spivey [1] considers a real-time operating system kernel and represents some of its basic characteristics using the Z specification language [SPI88]. The remainder of this section has been adapted from his paper (with permission of the IEEE).

*************

Embedded systems are commonly built around a small operating-system kernel that provides process-scheduling and interrupt-handling facilities. This article reports on a case study made using Z notation, a mathematical specification language, to specify the kernel for a diagnostic X-ray machine.
        Beginning with the documentation and source code of an existing implementation, a mathematical model, expressed in Z, was constructed of the states that the kernel could  occupy and the events that could take it from one state to another. The goal was a precise specification that could be used as a basis for a new implementation on different hardware.
        This case study in specification had a surprising by-product. In studying one of the kernel's operations, the potential for deadlock was discovered: the kernel would disable interrupts and enter a tight loop, vainly searching for a process ready to run.
        This flaw in the kernel's design was reflected directly in a mathematical property of its specification, demonstrating how formal techniques can help avoid design errors. This help should be especially welcome in embedded systems, which are notoriously difficult to test effectively.
        A conversation with the kernel designer later revealed that, for two reasons, the design error did not in fact endanger patients using the X-ray machine. Nevertheless, the error seriously affected the X-ray machine's robustness and reliability because later enhancements to the controlling software might reveal the problem with deadlock that had been hidden before.
        The specification presented in this article has been simplified by making less use of the schema calculus, a way of structuring Z specifications. This has made the specification a little longer and more repetitive, but perhaps a little easier to follow without knowledge of Z.

About the Kernel
       
The kernel supports both background processes and interrupt handlers. There may be several background processes, and one may be marked as current. This process runs whenever no interrupts are active, and it remains current until it explicitly releases the processor; the kernel may then select another process to be current. Each background process has a ready flag, and the kernel chooses the new current process from among those with a ready flag set to true.
        When interrupts are active, the kernel chooses the most urgent according to a numerical priority, and the interrupt handler for that priority runs. An interrupt may become active if it has a higher priority than those already active, and it becomes inactive again when its handler signals that it has finished. A background process may become an interrupt handler by registering itself as the handler for a certain priority.

Documentation

Figures 9.15 and 9.16 are diagrams from the existing kernel documentation, typical of the ones used to describe kernels like this. Figure 9.15 shows the kernel data structures. Figure 9.16 shows the states that a single process may occupy and the possible transitions between them, caused either by a kernel call from the process itself or by some other event.


        In a way, Figure 9.16 is a partial specification of the kernel as a set of finite-state machines, one for each process. However, it gives no explicit information about the interactions between processes—the very thing the kernel is required to manage. Also, it fails to show several possible states of a process. For example, the current background process may not be ready if it has set its own ready flag to false, but the state "current but not ready" is not shown in the diagram. Correcting this defect would require adding two more states and seven more transitions. This highlights another deficiency of state diagrams like this: their size tends to grow exponentially as system complexity increases.




Kernel States

Like most Z specifications, the kernel model begins with a description of the state space, the collection of variables that determine what state the kernel is in and the invariant relationships that always hold between these variables' values.
        In this article, the kernel state space is described in several parts, corresponding to the background processes, the interrupt handlers, and the state of the processor on which the kernel runs. Each piece is described by a schema, the basic unit of specification in Z. Table 9.1 [Table 25.2 in SEPA, 4/e] describes the Z notation used in this case study.
        The state space of the whole kernel is obtained by putting together these pieces and adding more invariants, among them the static policy for allocating the processor to a background process or interrupt handler. Processes are named in the kernel by process identifiers. In the implemented kernel, these are the addresses of process-control blocks, but this detail is irrelevant to a programmer using the kernel, so they are introduced as a basic type PID:

        [PID]

        This declaration introduces PID as the name of a set, without giving any information about its members. From the specification's point of view, the members are simply atomic objects.
        For convenience, a fictitious process identifier, none, was introduced. none is not the name of any genuine process. When the processor is idle, the current process is none. The set PID1 contains all process identifiers except none:

        none: PID
        PID1: ℙ PID
————————————
        PID1 = PID ∖ {none}

        The part of the kernel state concerned with background processes is described by this schema:

————Scheduler——————————
        background: ℙ PID1
        ready: ℙ PID1
        current: PID
——————————————————
        ready ⊆ background
        current ∈ background ∪ {none}
——————————————————

Like all schemas, this one declares some typed variables and states a relationship between them. Above the horizontal dividing line, in the declaration part, the three variables are declared:

    background is the set of processes under control of the scheduler;
    ready is the set of processes that may be selected for execution when the processor becomes free;
     current is the process selected for execution in the background.

        Below the line, in the predicate part, the schema states two relationships that always hold. The set ready is always a subset of background, and current is either a member of background or the fictitious process none. This schema lets the current process be not ready, because no relationship between current and ready is specified.
        This schema does not reflect the full significance of the set ready, but its real meaning will be shown later in the Select operation, where it forms the pool from which a new current process is chosen.
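Instructors who want to animate the specification can mirror the Scheduler schema in Python (my own sketch, not part of Spivey's paper): the declaration part becomes a record of sets, and the predicate part becomes an executable invariant.

```python
NONE = "none"   # stands in for the fictitious process identifier

def scheduler_invariant(background, ready, current):
    """Predicate part of Scheduler: ready is a subset of background,
    and current is in background or is the fictitious process none."""
    return ready <= background and current in (background | {NONE})

state = {"background": {"p1", "p2"}, "ready": {"p2"}, "current": "p1"}
ok = scheduler_invariant(state["background"], state["ready"], state["current"])

# The schema permits a current process that is not ready:
ok_not_ready = scheduler_invariant({"p1"}, set(), "p1")
```

Checking such invariants after each simulated operation is a lightweight way for students to experience what the schema predicates guarantee.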
        Interrupts are identified by their priority levels, which are small positive integers. The finite set ILEVEL includes all the priorities:

        ILEVEL: 𝔽 ℕ
————————————
        0 ∉ ILEVEL

        Zero is not one of the interrupt priority levels, but it is the priority associated with background processes.
        The state space for the interrupt-handling part of the kernel is described like this:
       
——————IntHandler——————
        handler: ILEVEL ⤔ PID1
        enabled, active: ℙ ILEVEL
—————————————————
        enabled ∪ active ⊆ dom handler
—————————————————

This schema declares the three variables:

    handler is a function that associates certain priority levels with processes, the interrupt handlers for those priorities;
    enabled is the set of priority levels that are enabled, so an interrupt can happen at that priority;
    active is the set of priority levels for which an interrupt is being handled.

        The predicate part of this schema says that each priority level that is either enabled or active must be associated with a handler. An interrupt may be active without being enabled if, for example, it has been disabled by the handler itself since becoming active. The declaration

        handler: ILEVEL ⤔ PID1

declares handler to be a partial injection. It is partial in that not every priority level need be associated with an interrupt handler, and it is an injection in that no two distinct priority levels may share the same handler.
        Information like this—that interrupts may be active without being enabled and that different priority levels may not share a handler—is vital to understanding how the kernel works. It is especially valuable because it is static information about the states the kernel may occupy, rather than about what happens as the system moves from state to state. Such static information is often absent from the text of the program, which consists mostly of dynamic, executable code.
        The state space of the whole kernel combines the background and interrupt parts:


—————Kernel——————————
        Scheduler
        IntHandler
———————————
        background ∩ ran handler = ∅
——————————————————

        The declarations Scheduler and IntHandler in this schema implicitly include above the line all the declarations from those schemas and implicitly include below the line all their invariants. So the schema Kernel has six variables: background, ready, and current from Scheduler, and handler, enabled, and active from IntHandler.
        The additional invariant has been added so that no process can be both a background process and an interrupt handler at the same time.
        The main job of the kernel is to control which process the processor runs and at what priority. Therefore, the running process and the processor priority have been made part of the state of the system. Here is a schema that declares them:
       
—————CPU————————
        running: PID
        priority: ILEVEL ∪ {0}
———————————————

        This schema has an empty predicate part, so it places no restriction on the values of its variables, except that each must be a member of its type. The variable running takes the value none when no process is running.
        Of course, there are many other parts of the processor state, including the contents of registers and memory, condition codes, and so on, but they are irrelevant in this context.
        Because the kernel always uses the same scheduling policy to select the running process and the CPU priority, this policy is another invariant of the system. It is stated in the schema State, which combines the kernel and CPU parts of the system state:

————State—————————————
        Kernel
        CPU
———————————————————
        priority = max(active ∪ {0})
        priority = 0 ⇒ running = current
        priority > 0 ⇒ running = handler(priority)
———————————————————

        If any interrupts are active, the processor priority is the highest priority of an active interrupt, and the processor runs the interrupt handler for that priority. Otherwise, the processor runs the current background process at priority zero.
        The invariant part of this schema uniquely determines priority and running in terms of active, current, and handler, three variables of the schema Kernel. This fact will be exploited when the events that change the system state are described. With the description of the kernel's and processor's state space complete, the next step is to look at the operations and events that can change the state.
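The scheduling policy can likewise be animated (again my own sketch, with invented process names): because the State invariant determines priority and running uniquely, it translates into an ordinary function of active, current, and handler.

```python
NONE = "none"

def scheduling_policy(active, current, handler):
    """priority = max(active | {0}); running follows from the priority:
    the current background process at priority 0, else the handler."""
    priority = max(active | {0})
    running = current if priority == 0 else handler[priority]
    return priority, running

# No active interrupts: the current background process runs at priority 0.
p0 = scheduling_policy(set(), "procA", {})
# Interrupts at levels 2 and 5 active: the level-5 handler runs.
p5 = scheduling_policy({2, 5}, "procA", {2: "h2", 5: "h5"})
```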

Background Processing

        Some kernel operations affect only the background processing part of the state space. They start background processes, set and clear their ready flags, let them release the processor temporarily or permanently, and select a new process to run when the processor is idle.
        A process enters the control of the scheduler through the operation Start, described by this schema:
——————Start————————
        ΔState
        p?: PID1
————————————————
        p? ∉ ran handler
        background' = background ∪ {p?}
        ready' = ready ∪ {p?}
        current' = current
        IntHandler' = IntHandler
————————————————

        Like all schemas describing operations, this one includes the declaration ΔState, which implicitly declares two copies of each variable in the state space State, one with a prime (') and one without. Variables like background and ready without a prime refer to the system state before the operation has happened, and variables like background' and ready' refer to the state afterward.
        The declaration ΔState also implicitly constrains these variables to obey the invariant relationships that were documented in defining the schema State—including the scheduling policy—so they hold both before and after the operation.
        In addition to these state variables, the Start operation has an input p?, the identifier of the process to be started. By convention, inputs to operations are given names that end in a ?.
        The predicate part of an operation schema lists the precondition that must be true when the operation is invoked and the postcondition that must be true afterward. In this case, the precondition is explicitly stated: the process p? being started must not be an interrupt handler (because that would violate the invariant that background processes are disjoint from interrupt handlers).
        In general, an operation's precondition is that a final state exists that satisfies the predicates written in the schema. Part of the precondition may be implicit in the predicates that relate the initial and final states. If an operation is specified by the schema Op, its precondition can be calculated as

        ∃ State' • Op

        If the precondition is true, the specification requires that the operation should terminate in a state satisfying the postcondition. On the other hand, if the precondition is false when the operation is invoked, the specification says nothing about what happens. The operation may fail to terminate, or the kernel may stop working completely.
        For the Start operation, the postcondition says that the new process is added to the set of background processes and marked as ready to run. The new process does not start to run immediately, because current is unchanged; instead, the processor continues to run the same process as before.
        The final equation in the postcondition
       
        IntHandler' = IntHandler

means that the part of the state described by the schema IntHandler is the same after the operation as before it.
        The equations in this schema determine the final values of the six variables in the kernel state space in terms of their initial values and the input p?, but they say nothing about the final values of the CPU variables running and priority. These are determined by the requirement, implicit in the declaration ΔState, that the scheduling policy be obeyed after the operation has finished. Because the values of active, handler, and current do not change in the operation, neither does the CPU state.
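As a sketch (my own animation, not code from the paper), the Start schema can be modeled as a function that asserts the precondition, builds the primed state, and leaves the interrupt-handling part untouched.

```python
def start(state, p):
    """Start background process p. Precondition: p is not a handler."""
    assert p not in state["handler"].values()      # p? not in ran handler
    new = dict(state)
    new["background"] = state["background"] | {p}  # background' = background | {p}
    new["ready"] = state["ready"] | {p}            # ready' = ready | {p}
    # current, handler, enabled, active unchanged (IntHandler' = IntHandler)
    return new

before = {"background": set(), "ready": set(), "current": "none",
          "handler": {}, "enabled": set(), "active": set()}
after = start(before, "p1")
```

Note that `before` is not mutated; like the schema, the function relates an initial state to a final one.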
        The current background process may release the processor by calling the Detach operation, specified like this:

——————Detach————————
        ΔState
—————————————————
        running ∈ background
        background' = background
        ready' = ready
        current' = none
        IntHandler' = IntHandler
—————————————————
               
        Again, this operation is declared with ΔState, giving variables for the state before and after the operation has happened. The precondition is that the processor is running a background process. The only change specified in the postcondition is that the current process changes to none, meaning that the processor is now idle. The next event will be either an interrupt or the selection of a new background process to run.
        After a call to Detach—and after other operations described later—current has the value none, indicating that no background process has been selected for execution. If no interrupts are active, the processor is idle, and the Select operation may happen spontaneously. It is specified like this:

——————Select————————
        ΔState
—————————————————
        running = none
        background' = background
        ready' = ready
        current' ∈ ready
        IntHandler' = IntHandler
—————————————————

        Rather than a part of the interface between the kernel and an application, Select is an internal operation of the kernel that can happen whenever its precondition is true. The precondition is

        running = none ∧ ready ≠ ∅
       
The processor must be idle, and at least one background process must be ready to run. The first part of this precondition is stated explicitly, and the second part is implicit in the predicate

        current' ∈ ready

        The new value of current is selected from ready, but the specification does not say how the choice is made—it is nondeterministic. This nondeterminism lets the specification say exactly what programmers may rely on the kernel to do: there is no guarantee that processes will be scheduled in a particular order.
        In fact, the nondeterminism is a natural consequence of the abstract view taken in the specification. Although the program that implements this specification is deterministic—if started with the ring of processes in a certain state, it will always select the same process—it appears to be nondeterministic if only the set of ready processes is considered, as has been done in the specification.
        However the kernel selects the new current process, the specification says that it starts to run, because the static scheduling policy determines that after the operation, running is current and priority is zero.
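The nondeterminism of Select can be animated by drawing an arbitrary member of ready (a sketch with invented names); clients of such a model must not depend on which process comes back.

```python
import random

def select(state, rng=random):
    """Precondition: no current process and at least one ready process.
    Pick any member of ready; the choice is deliberately unspecified."""
    assert state["current"] == "none" and state["ready"]
    new = dict(state)
    new["current"] = rng.choice(sorted(state["ready"]))
    return new

s = {"background": {"p1", "p2"}, "ready": {"p1", "p2"}, "current": "none"}
chosen = select(s)["current"]
```

Any test of this model may only assert membership in ready, mirroring the point that the specification guarantees no particular scheduling order.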
        A background process may terminate itself using the Stop operation:
       
——————Stop————————
        ΔState
—————————————————
        running ∈ background
        background' = background ∖ {current}
        ready' = ready ∖ {current}
        current' = none
        IntHandler' = IntHandler
—————————————————

        For this operation to be permissible, the processor must be running a background process. This process is removed from background and ready, and the current process becomes none, so the next action will be to select another process.
        A final operation, SetReady, sets or clears a process's ready flag. It has two inputs, the process identifier and a flag, which takes one of the values set or clear:
       
        FLAG ::= set | clear

The SetReady operation is:

——————SetReady———————
        ΔState
        p?: PID
        flag?: FLAG
—————————————————
        p? ∈ background
        flag? = set ⇒ ready' = ready ∪ {p?}
        flag? = clear ⇒ ready' = ready ∖ {p?}
        background' = background
        current' = current
        IntHandler' = IntHandler
—————————————————

        The precondition is that p? is a background process; according to the value of flag?, it is either inserted in ready or removed from it. The scheduling parameters do not change, so there is no change in the running process.

Interrupt handling

                Other operations affect the kernel's interrupt-handling part. A background process may register itself as the handler for a certain priority level by calling the operation IEnter:

——————IEnter———————
        ΔState
        i?: ILEVEL
—————————————————
        running ∈ background
        background' = background ∖ {current}
        ready' = ready ∖ {current}
        current' = none
        handler' = handler ⊕ {i? ↦ current}
        enabled' = enabled ∪ {i?}
        active' = active
—————————————————

        This operation may be called only by a background process. The operation removes the calling process from background and ready, and the new value of current is none, just as in the Stop operation. Also, the calling process becomes an interrupt handler for the priority level i?, given as an input to the operation, and that priority level becomes enabled. The expression

        handler ⊕ {i? ↦ current}

denotes a function identical to handler, except that i? is mapped to current. This function is an injection, because current, a background process, cannot already be the handler for any other priority level. The new handler supersedes any existing handler for priority i?, which can never run again unless restarted with the Start operation.
        Once a process has registered itself as an interrupt handler, the scheduler chooses a new process to run in the background, and the new interrupt handler waits for an interrupt to happen:

——————Interrupt———————
        ΔState
        i?: ILEVEL
—————————————————
        i? ∈ enabled ∧ i? > priority
        Scheduler' = Scheduler
        handler' = handler
        enabled' = enabled
        active' = active ∪ {i?}
—————————————————
       
        The processor hardware ensures that interrupts happen only when they are enabled and have priority greater than the processor priority. If these conditions are satisfied, the interrupt can happen, and the kernel then adds the interrupt to active.
        The scheduling policy ensures that the associated interrupt handler starts to run. In the following calculation of the processor priority, each step is justified by the comment in brackets:

                priority'
        = [scheduling policy]
                max(active' ∪ {0})
        = [postcondition]
                max((active ∪ {i?}) ∪ {0})
        = [∪ assoc. and comm.]
                max((active ∪ {0}) ∪ {i?})
        = [max dist. over ∪]
                max{max(active ∪ {0}), i?}
        = [scheduling policy]
                max{priority, i?}
        = [i? > priority]
                i?

        So priority' = i? > 0, and the other part of the scheduling policy ensures that running' equals handler(i?).
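The calculation can be checked with a hypothetical Python sketch, in which priority is computed from active exactly as the scheduling policy prescribes; the function names are invented for illustration:

```python
def priority(active):
    # scheduling policy: processor priority = max(active U {0})
    return max(active | {0})

def interrupt(active, enabled, i):
    # hardware precondition: i is enabled and exceeds the current priority
    assert i in enabled and i > priority(active)
    return active | {i}            # active' = active U {i?}

active, enabled = {2}, {2, 5}
active = interrupt(active, enabled, 5)
print(priority(active))            # 5, i.e. priority' = i?
```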
        After the interrupt handler has finished the processing associated with the interrupt, it calls the kernel operation IWait and suspends itself until another interrupt arrives. IWait is specified as

——————IWait———————
        ΔState
—————————————————
        priority > 0
        θScheduler' = θScheduler
        handler' = handler
        enabled' = enabled
        active' = active ∖ {priority}
—————————————————

        The precondition priority > 0 means that the processor must be running an interrupt handler. The current priority level is removed from active, and as for the Interrupt operation, the scheduling policy determines what happens next. If any other interrupts are active, the processor returns to the interrupt handler with the next highest priority. Otherwise, it returns to the current background process.
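A minimal sketch of IWait's effect, assuming the same set-based model as before (only the active component is modelled; names are illustrative):

```python
def priority(active):
    return max(active | {0})      # scheduling policy

def i_wait(active):
    p = priority(active)
    assert p > 0                  # precondition: an interrupt handler is running
    return active - {p}           # active' = active \ {priority}

active = {2, 5}                   # handlers at levels 2 and 5 are active
active = i_wait(active)           # the level-5 handler suspends itself
print(priority(active))           # 2: control returns to the level-2 handler
```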
        Another kernel operation, IExit, lets an interrupt handler cancel its registration:

——————IExit———————
        ΔState
—————————————————
        priority > 0
        background' = background ∪ {handler(priority)}
        ready' = ready ∪ {handler(priority)}
        current' = current
        handler' = {priority} ⩤ handler
        enabled' = enabled ∖ {priority}
        active' = active ∖ {priority}
—————————————————

        Again, the processor must be running an interrupt handler. This handler leaves the interrupt-handling part of the kernel and becomes a background process again, marked as ready to run. As with IWait, the processor returns to the interrupt handler with the next highest priority or to the current background process. The process that called IExit is suspended until the scheduler selects it for execution. In this schema, the expression

        {priority} ⩤ handler

denotes a function identical to handler except that priority has been removed from its domain; it is an injection provided that handler is one.
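Domain anti-restriction has a direct analogue in a dictionary comprehension. The following hypothetical sketch models IExit on the dictionary-based state used earlier; all names and concrete values are assumptions for illustration:

```python
def priority(active):
    return max(active | {0})      # scheduling policy

def i_exit(state):
    p = priority(state["active"])
    assert p > 0                          # precondition: a handler is running
    h = state["handler"][p]
    state["background"].add(h)            # background' = background U {handler(priority)}
    state["ready"].add(h)                 # ready' = ready U {handler(priority)}
    # handler' = {priority} anti-restricted from handler's domain
    state["handler"] = {lvl: pid for lvl, pid in state["handler"].items() if lvl != p}
    state["enabled"].discard(p)           # enabled' = enabled \ {priority}
    state["active"].discard(p)            # active' = active \ {priority}

state = {"background": {2}, "ready": {2}, "current": 2,
         "handler": {5: 1}, "enabled": {5}, "active": {5}}
i_exit(state)
print(state["handler"], sorted(state["background"]))   # {} [1, 2]
```

The comprehension removes priority from the mapping's domain and nothing else, which is exactly why injectivity is preserved: deleting a maplet from an injection cannot introduce a duplicate target.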
        Two more kernel operations, Mask and Unmask, let interrupt priorities be selectively disabled and enabled. Their specifications are like that of SetReady, so they have been omitted. The kernel specification is now complete.





[1]     Spivey, J.M., "Specifying a Real-Time Kernel," IEEE Software, September 1990, pp. 21–28.