International Workshop on
Engineering Methods
to Support Information Systems Evolution
In conjunction with OOIS’03
9th International Conference on Object-Oriented Information Systems
Geneva, Switzerland, September 2-5, 2003
Workshop Co-Chairs
Jolita Ralyté, Colette Rolland
Organisers – Program Co-chairs
Jolita Ralyté, CUI, University of Geneva, Switzerland
Colette Rolland, CRI, University of Paris 1- Sorbonne, France
Program Committee
Sjaak Brinkkemper – Vrije University, The Netherlands
Rébecca Deneckère – University of Paris 1, France
Brian Henderson-Sellers – University of Technology, Sydney, Australia
Michel Léonard – University of Geneva, Switzerland
Kalle Lyytinen – Case Western Reserve University, USA
Graham McLeod – University of Cape Town, South Africa
Naveen Prakash – JIIT, Noida, India
Jolita Ralyté – University of Geneva, Switzerland
Colette Rolland – University of Paris 1, France
Motoshi Saeki – Tokyo Institute of Technology, Japan
Keng Siau – University of Nebraska-Lincoln, USA
Samira Si-Said Cherfi – CNAM, Paris, France
Janis Stirna – Royal Institute of Technology and Stockholm University, Sweden
Juha-Pekka Tolvanen – University of Jyväskylä, Finland
For further information, please contact
Jolita Ralyté
CUI, University of Geneva
24 rue du Général Dufour
CH-1211 Geneva 4, Switzerland
e-mail: jolita.ralyte@unige.ch
© Authors retain copyright ownership of their papers. No papers in these proceedings may be
reproduced or distributed in any form or by any means, or stored in a database or retrieval system,
without the prior approval of the authors.
Preface
Welcome to the International Workshop on Engineering Methods to Support Information Systems
Evolution (EMSISE’03) held in conjunction with OOIS’03. The objective of this workshop is to
provide a forum for the presentation and exchange of research results and practical experiences within
the field of Method Engineering.
Information systems evolution is the main theme of the workshop. We propose to discuss new analysis and design methods, models and tools that address the problem of information systems evolution, as well as new method engineering approaches and techniques for creating such methods.
Submissions to EMSISE’03 came from Australia, Canada, France, Germany, India, Japan, Switzerland and the UK. We take this opportunity to thank all the authors for their interest in the workshop. After the review process, we accepted 9 papers (7 regular and 2 position papers) from the 12 submissions.
We express our thanks to the international program committee, whose members are well-known and
highly qualified researchers. The EMSISE’03 workshop exists thanks to their generous contribution of
time and effort in the review process.
We would like to express our special thanks to the invited speaker Motoshi Saeki from Tokyo Institute
of Technology. His talk ‘Toward Automated Method Engineering: Supporting Method Assembly in
CAME’ will start the program of the workshop.
Finally, we thank all the workshop participants and hope that they will enjoy the workshop and have a
great time in Geneva, Switzerland!
August 2003
Jolita Ralyté, Colette Rolland
Organizing Co-chairs, EMSISE’03
Table of Contents
Method Engineering
Engineering Schema Transformation Methods …………………………………………………. 1
Naveen Prakash, Sangeeta Srivastava (India)
Extending Methods to Express Change Requirements………………………………………….. 15
Anne Etien, Rébecca Deneckère, Camille Salinesi (France)
Towards a Clear Definition of Patterns, Aspects and Views in MDA ………………………….. 29
Joël Champeau, François Mekerke, Emmanuel Rochefort (France)
Methods for Information Systems Development
ArgoUWE: A CASE Tool for Web Applications ………………………………………………. 37
Alexander Knapp, Nora Koch, Flavia Moser and Gefei Zhang (Germany)
Precise Graphical Representation of Roles in Requirements Engineering ……………………... 51
Pavel Balabko, Alain Wegmann (Switzerland)
Enterprise Knowledge and Information System Modelling in an Evolving Environment ……….. 61
Selmin Nurcan, Judith Barrios (France, Venezuela)
Requirement-Centric Method for Application Development …………………………………… 75
Smita Ghaisas, Ulka Shrotri and R. Venkatesh
Information Systems Evolution
Self-measurement, Self-monitoring, Self-learning, and Self-valuation as the Necessary
Conditions for Autonomous Evolution of Information Systems ………………………………... 87
Jingde Cheng (Japan)
A Component Framework for Description-Driven Systems ……………………………………. 99
F. Estrella, Z. Kovacs, R. McClatchey, N. Toth and T. Solomonides (Switzerland, UK)
Engineering Schema Transformation Methods
Naveen Prakash
JIIT, A-10, Sector 62, NOIDA 201307, India
Phone: 91-118-2490213
praknav@hotmail.com

Sangeeta Srivastava
BCAS, Dwarka, Phase-I, New Delhi, India
Phone: 91-011-23347369
sangeetasrivastava@hotmail.com
Abstract. Rather than produce schema transformation methods for every different pair of models, we formulate a generic way of performing schema evolution across models. We formalize the transformation technique by using the graph-theoretic notion of isomorphism. Method concepts are organized in an is built of method graph. The generation technique establishes isomorphism between the concepts of two such graphs by generating nodes and edges. This technique produces feasible mappings, which can be presented to the method engineer for subsequent fine-tuning. The technique is illustrated for the mapping between the ER and Relational models. It is a soft approach in that it allows the method engineer to accept the mappings or modify them to suit his or her requirements.
1. INTRODUCTION
Though IS evolution can occur in a number of situations, our interest is in that class of
evolution where schemata expressed in one model evolve to those in another model. Such
evolution occurs, for example, in:
i) moving from one stage in the system life cycle to the next,
ii) building a common global schema for several schemata in heterogeneous systems (Mcb02),
iii) reverse engineering, and
iv) transforming structured schemata to semi-structured HTML/XML schemata (Wil01).
A number of proposals exist for schema evolution across models. As examples, consider the techniques for converting a schema expressed in ER to one in the Relational model (Kor91), from the Relational model to XML (Wil01), and from OO to XML (Ren01). Each of these techniques works for its given pair of models only. They result from the experience of their designers and are, in this sense, ad hoc. We are looking for a generic way of performing schema evolution across models: the technique should be able to handle any pair of models, so that the prevalent experience-based, ad-hoc approach gives way to a systematic, computer-supported one.
As far as we are aware, there are two generic techniques available today. One approach
is to define a Common Data Model (Mcb98) and transformation rules between the CDM
and the models. Another is to define a low level hypergraph based data model and rules to
transform from the given models to this HDM (Mcb99). An attempt has also been made to
understand models in relationship to one another on the basis of specificity. Models are
placed in a hierarchy with the root being the most general model and the leaves being the
most specific. This approach, while comparing models, does not explicitly help in schema
transformation. In this paper, we explore the third way of inter model transformation by
abandoning the CDM/ HDM approach and attempting a direct transformation between any
given pair of models.
In an earlier paper (Pra02), we formulated the problem in three steps. Let there be a schema s expressed in model M, and let it be required to transform it to s′ in M′. Then our generic technique must:
a) generate a mapping between the concepts of M and M′;
b) define transformation rules that transform an instance of a concept of M to the mapping concept in M′;
c) perform the transformation process by applying the transformation rules to s so as to obtain s′.
The method engineer carries out steps (a) and (b) and gives the transformation rules to the application engineer, who carries out step (c).
Now, the discipline of method engineering enables method engineers to use CAME tools for building methods. Method engineering has been successful in building stand-alone methods like those based on ER, and even individual components of compound methods, like the OM, FM and DM of OMT. However, as far as we are aware, method engineering has not yet addressed the question of defining mappings between models; for example, the mapping between ER and Relational, or between OM and FM, is not engineered. In other words, the development of a generic technique will facilitate the development of CAME tools for steps (a) and (b) above.
In (Pra02) we had presented our initial ideas on how step (a) above could be performed.
These ideas were essentially intuitive and opened up the possibility of providing CAME
support to establish mapping between models. Now, we provide a formal basis for this. As
before, we will consider a method as a directed graph. Each method concept is a node in this graph. The edges of the graph are the is built of relationships between nodes: is built of says that concepts of a product model are built out of simpler ones. If si is built of sj, then we shall refer to si as the super-concept and to sj as the sub-concept. Is built of relationships cannot be cyclic.
The is built of relationship is an M: N relationship whose cardinality is expressed in two
attributes called min and max, respectively. These provide values for the minimum and
maximum number of sub-concepts needed by a super-concept. The source of the edge is the
node that is needed to build the node at the destination. For example, since an attribute is
needed to build an entity in the ER, there is a directed edge from attribute to entity in the
graph of the ER method.
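For concreteness, a method graph of this kind can be held in a small data structure. The sketch below is our own illustration (the class name and its API are our assumptions, not part of the formalism):

```python
# A minimal method-graph representation: nodes are concept names; each
# directed edge (sub_concept, super_concept, min, max) records that the
# super-concept "is built of" the sub-concept with the given cardinalities.

class MethodGraph:
    def __init__(self):
        self.nodes = set()
        self.edges = []  # list of (sub_concept, super_concept, min, max)

    def add_edge(self, sub, sup, mn, mx):
        self.nodes.update((sub, sup))
        self.edges.append((sub, sup, mn, mx))

# Fragment of the ER method graph: an entity is built of attributes,
# so the edge is directed from Attribute to Entity.
er = MethodGraph()
er.add_edge("Attribute", "Entity", 1, "n")
```

The same structure can hold any method graph; the later steps of the technique only inspect node levels, edge directions and cardinalities.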
Given a pair of graphs that represent two methods, referred to hereafter as method
graphs, we extend the graph-theoretic notion of isomorphism to establish the mapping. Two
graphs are considered equivalent if they are isomorphic, that is, if there is 1:1 node
correspondence and edge correspondence between them. Graph theory tells us the necessary
conditions for isomorphic graphs. However, the sufficiency conditions for isomorphism are
not given to us. Therefore, given two graphs we cannot syntactically determine whether they
are isomorphic or not. We use a generation technique to establish a feasible 1:1 mapping
between the two graphs to make them isomorphic.
The generation technique takes two method graphs, G1 and G2, and generates G1′ and G2′ by duplicating nodes and edges of the source graphs such that G1′ and G2′ are isomorphic. Since no new concepts are introduced in G1′ and G2′, this isomorphism establishes a feasible mapping between the concepts of G1 and G2, which is held out as a candidate for equivalence to the method engineer. The method engineer may either modify the mapping to make it semantically meaningful or accept it, in which case the generated node and edge mappings are carried into step (b) above.
Figure 1: The Generation Technique. G1 and G2 are duplicated, node by node and edge by edge, into G1′ and G2′, with a 1:1 mapping between them.
In the rest of this paper we formulate this generative technique and leave steps (b) and (c)
for later papers. Our approach to generating candidate isomorphic graphs is as follows:
1. Generate Level Mapping: Given a pair of directed graphs G1 and G2, we first determine the number of levels in each, assuming that nodes that have no edges entering them are at level 0. Following the edges from these nodes yields nodes at level 1. In general, following nodes at level (n-1) yields nodes at level n. The levels are numbered with the bottom level as zero and increase in the upward direction. Graphs that have different numbers of levels cannot be isomorphic. Therefore, the first step is the generation of graphs in a form that has the same number of levels. We refer to this step as the level-mapping step.
2. Generate Node Mapping: The second step of the generation technique establishes node mapping. We use topological similarity in the graphs for mapping nodes. There is a possibility that more than one node maps to a single node. We use operators, defined in Section 2, to generate nodes and edges.
3. Generate Edge Mapping: The third step of the generation technique generates one-to-one correspondences between pairs of edges of the two graphs. It checks for correspondence between the edges lying between two mapping nodes.
In the next section we introduce the application of isomorphism and some associated notions. The is built of method graph and the way it can be constructed are presented in Section 3. The establishment of mapping between the concepts of a pair of models is dealt with in Section 4. In Section 5, we show that the generic approach of Section 4 can be successfully applied to convert an ER schema to a Relational schema.
2. APPLYING ISOMORPHISM
Graphs are thought of as equivalent, and called isomorphic, if they have identical behaviour in terms of graph-theoretic properties. Two graphs are said to be isomorphic if there is a one-to-one mapping between their vertices and between their edges such that the incidence relationship is preserved.
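The necessary conditions mentioned below can be checked mechanically. As a sketch (our own illustration), the following test compares edge counts and sorted degree sequences; passing it can never prove isomorphism, only failing it can rule isomorphism out:

```python
# Necessary conditions for isomorphism: the same number of edges and the
# same sorted degree sequence. Passing this check does NOT establish
# isomorphism; it only filters out graphs that cannot be isomorphic.

def could_be_isomorphic(edges1, edges2):
    def degree_sequence(edges):
        deg = {}
        for a, b in edges:
            deg[a] = deg.get(a, 0) + 1
            deg[b] = deg.get(b, 0) + 1
        return sorted(deg.values())
    return (len(edges1) == len(edges2)
            and degree_sequence(edges1) == degree_sequence(edges2))

# Two paths of length 2 pass the test; a path and a triangle do not.
path1 = [("a", "b"), ("b", "c")]
path2 = [("x", "y"), ("y", "z")]
triangle = [("x", "y"), ("y", "z"), ("z", "x")]
```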
Two operations (Deo00) are of interest to us: Fusion and Deletion.
Fusion: A pair of vertices x, y in a graph is said to be fused if the two vertices are replaced by a single new vertex z such that every edge that was incident on x, on y, or on both is incident on z. Fusion does not alter the number of edges.
Deletion: If v′ is a vertex in graph G, then G − v′ denotes the subgraph of G obtained by deleting v′ from G. Deletion of a vertex always implies the deletion of all edges incident on the vertex. Similarly, for edge deletion, if e1 is an edge in G, then G − e1 is the subgraph of G obtained by deleting e1 from G.
For our purposes, we define four operations: upward vertical defusion, downward vertical defusion, horizontal defusion, and addition. These are as follows:
Upward Vertical Defusion: A vertex z and the edges incident on z are reproduced. The reproduced vertex z′ is at the next higher level. An edge is introduced between z and z′ to establish an is built of relationship between them.
Downward Vertical Defusion: A vertex z and the edges incident on z are reproduced. The reproduced vertex z′ is at the next lower level. An edge is introduced between z and z′ to establish an is built of relationship between them.
Horizontal Defusion: A vertex z and the edges incident on z are reproduced. Both z and the reproduced vertex z′ are at the same level.
Addition: The inverse of vertex deletion. It requires the addition of a node v and the edges incident on it.
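These operations can be sketched over a simple graph encoding (a node-to-level dict plus an edge list); the representation and function names below are our own assumptions, not prescribed by the paper:

```python
# Each defusion reproduces a vertex z (as z') together with copies of all
# edges incident on z; the vertical variants also add an "is built of"
# edge between z and z' and place z' one level up or down.

def _copy_incident_edges(edges, z, z2):
    # For every edge touching z, create the corresponding edge touching z'.
    edges.extend([(a, z2) if b == z else (z2, b)
                  for (a, b) in edges if z in (a, b)])

def upward_vertical_defusion(levels, edges, z):
    z2 = z + "'"
    levels[z2] = levels[z] + 1          # copy sits one level higher
    _copy_incident_edges(edges, z, z2)
    edges.append((z, z2))               # is built of edge from z to z'
    return z2

def horizontal_defusion(levels, edges, z):
    z2 = z + "'"
    levels[z2] = levels[z]              # copy stays at the same level
    _copy_incident_edges(edges, z, z2)
    return z2
```

Applying upward vertical defusion to the topmost node of a graph is exactly what creates the extra level needed when the other graph is taller, as in the level-mapping step of Section 4.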
We can apply these four operations to method graphs to make them isomorphic to one
another. If the number of levels L1 of graph G1 is different from the number of levels L2 of
the other graph G2 then
• If L2 > L1 (or L2 < L1) and the unmatched levels lie above the topmost level of G1 (or G2), then upward vertical defusion of nodes in G1 (or G2, respectively) must occur.
• If L2 = L1, nothing is to be done.
• If L2 > L1 (or L2 < L1) and the unmatched levels lie below the bottommost level of G1 (or G2), then downward vertical defusion of nodes in G1 (or G2, respectively) must occur.
If the number of nodes in the two graphs at a given level differs, or the number of edges between a given pair of levels differs, then horizontal defusion or addition of nodes will be required to make the graphs isomorphic.
3. CONSTRUCTING THE GRAPH
The is built of relationships between the concepts of a method allow us to build its is built of method graph. We take the is built of information in the same format as (Gup01), i.e. <super-concept, sub-concept, min, max>, and convert it into a method graph using the construction rules given below.
3.1 RULES FOR CONSTRUCTING THE IS BUILT OF METHOD GRAPH
Let (Y, X) denote that Y is built of X where Y and X are concepts of a method. The
following rules help us in constructing the method graph. We will use X, Y, Z and A to
denote method concepts.
1. If ¬∃A such that (X, A), then level(X) = 0.
As X has no sub-concepts from which it is built, it is atomic and is at the lowest level, i.e. level zero.
2. If (Y, X), then level(Y) = level(X) + 1.
Since Y is built of X, it is one level higher than X.
3. If level(Y) = p and level(Z) = q and (Y, X) ∧ (Z, X), then
level(X) = lowest[(p-1), (q-1)].
Proof:
Since (Y, X), level(X) = p-1 (Rule 2).
Again, since (Z, X), level(X) = q-1 (Rule 2).
Case 1: p = q.
If p = q, then level(X) = p-1 = q-1 = lowest[(p-1), (q-1)].
Case 2: p ≠ q.
Let p > q; then p-1 > q-1. Since X cannot be at two different levels simultaneously, it is at lowest[(p-1), (q-1)].
4. If level(Y) = p-1 and level(Z) = p-4, where both Y and Z have no sub-concepts, then both level(Y) and level(Z) are 0, i.e. p-1 = 0 and p-4 = 0. Different branches thus give different values of p; for the total graph, the highest value (here p = 4) is chosen.
5. If ¬∃A such that (X, A), and level(X) = lowest(p-3, q-2) = 0, then, if both p-3 and q-2 were used to determine level(X) along the same path, both p-3 = 0 and q-2 = 0.
6. If level(Y) = p and level(Z) = q and (X, Y) ∧ (X, Z), then
level(X) = highest[(p+1), (q+1)].
This can be shown by an argument similar to that used for Rule 3 above.
3.2 EXAMPLE OF CONSTRUCTING AN IS BUILT OF METHOD GRAPH
As an example of the construction of an is built of method graph, consider the following
list of method concepts C and the is built of relationships between them.
C=<Z, Y, X, A, B, M, N>
Is built of Relationships
[Z, A, 1, n]
[Z, B, 1, 1]
[Z, X, 1, n]
[Y, X, 1, n]
[X, A, 1, n]
[X, B, 1, n]
[A, B, 1, n]
[Y, M, 1, n]
Step 1: Let level(Z) = p. Then from
(Z, A): level(A) = p-1 (Rule 2)
(Z, B): level(B) = p-1 (Rule 2)
(Z, X): level(X) = p-1 (Rule 2)
Let level(Y) = q. Then from
(Y, X): level(X) = q-1 (Rule 2), but from
(Y, X) and (Z, X): level(X) = lowest(p-1, q-1) (Rule 3)
(X, A): level(A) = lowest(p-1, q-1) - 1 (Rule 2)
(X, B): level(B) = lowest(p-1, q-1) - 1 (Rule 2)
(A, B): level(B) = lowest(p-1, q-1) - 2 = lowest(p-3, q-3) (Rule 2)
(Y, M): level(M) = q-1 (Rule 2)
Now only B and M remain in the list with no sub-concepts, so from Rule 1, level(B) = level(M) = 0:
level(M) = q-1 = 0 …… (1)
level(B) = lowest(p-3, q-3) = 0 …… (2)
From Rule 4, when multiple values of q are available, we choose the lowest expression, so we choose q-3 instead of q-1. Also, from Rule 5, if level(B) is determined in terms of both p and q along the same path, then both values of level(B) are equal, in this case to zero, so we get
p-3 = q-3 = 0, or p = q = 3, which gives the following levels and the graph:
Level (Y) =3
Level (Z) =3
Level (X) =2
Level (A) =1
Level (B) =0
Level (M) =0
Level (N) =0
The graph is shown in Figure 2.
Figure 2: Is built of Method Graph (Y and Z at level 3, X at level 2, A at level 1, and B, M and N at level 0)
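The level assignment of this example can be reproduced mechanically. The sketch below is our own simplified reading of the rules: an atomic concept gets level 0, and every other concept sits one level above the highest of its sub-concepts. It reproduces the levels derived above.

```python
# Compute levels bottom-up: level 0 for atomic concepts (Rule 1), and one
# more than the highest sub-concept level otherwise (in the spirit of
# Rules 2 and 6). The "is built of" relationships must be acyclic for
# this recursion to terminate.

from functools import lru_cache

def compute_levels(concepts, is_built_of):
    # is_built_of entries have the form (super, sub, min, max) of Section 3.
    subs = {c: [] for c in concepts}
    for sup, sub, _mn, _mx in is_built_of:
        subs[sup].append(sub)

    @lru_cache(maxsize=None)
    def level(c):
        return 1 + max(level(s) for s in subs[c]) if subs[c] else 0

    return {c: level(c) for c in concepts}

rels = [("Z", "A", 1, "n"), ("Z", "B", 1, 1), ("Z", "X", 1, "n"),
        ("Y", "X", 1, "n"), ("X", "A", 1, "n"), ("X", "B", 1, "n"),
        ("A", "B", 1, "n"), ("Y", "M", 1, "n")]
levels = compute_levels(["Z", "Y", "X", "A", "B", "M", "N"], rels)
# levels == {"Z": 3, "Y": 3, "X": 2, "A": 1, "B": 0, "M": 0, "N": 0}
```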
4. THE GENERATION TECHNIQUE
Now that the two is built of graphs are available to us, we try to establish that the two graphs are candidates for isomorphism. In general, the number of levels in the two graphs may be different. Similarly, the total numbers of edges and nodes in the two graphs may be unequal. Further, these may be distributed across levels differently.
The generation technique establishes mapping between the two graphs by
1) Generating one to one mapping between the levels. It does so by using vertical defusion.
2) Generating one to one node mappings by horizontal defusion and addition
3) Generating one to one edge mappings.
It is organized in three steps corresponding to 1) through 3) above. Once the generation process starts, it produces the mappings with no further intervention from the method engineer. These are presented to the method engineer, who either accepts them or modifies them to meet his or her needs.
4.1 GENERATING LEVEL MAPPING
Input for this step may come in two forms. First, it is possible that no knowledge of
node mapping exists. In this case an empty input is supplied to this step. Second, it is
possible that the method engineer knows that one or more nodes of one graph map to some
nodes of the other. All such mappings are input to this step.
Case 1: Empty Input
Here the level mapping between the two graphs is completely unknown. We assume that the 0th level of G1 maps to the 0th level of G2, denoted ML(0, 0). Starting from ML(0, 0), we use level mapping Rule 1 (Section 4.1.1) to establish mapping between the levels above it until the levels of one graph, say G1, are exhausted. The remaining levels are mapped by using the operation of upward vertical defusion, as in Rule 2. This operation is applied to all the nodes at the topmost level of the exhausted graph G1 to produce additional levels having the same nodes as the topmost level of G1.
Case 2: Non-Empty Input
There are two sub-cases here.
a) Exactly one node mapping is given:
Let the given mapping nodes be nip and nlq, where nip is the ni-th node at level p and nlq is the nl-th node at level q. This is denoted MN(nip, nlq). From this we assume ML(p, q); for levels above and below (p, q) we establish level mapping using Rule 1 of Section 4.1.1. As in Case 1, there may be unmapped levels. Upward vertical defusion (Rule 2) is used for mapping the unmapped levels at the top end of the graphs, whereas downward vertical defusion (Rule 4) is used for the bottom end.
b) More than one node mapping is given:
1. Let the given mapping levels be ML1 (p, q), ML2 (p+3, q+2) and ML3 (p+5, q+4). Choose the lowest given pair of mapping levels in the graphs, ML1 and ML2. Level mapping is established as in Case 1 for the levels between ML1 and ML2, until the levels between the pair in one of the graphs are exhausted.
2. There is a possibility that one or more levels between ML1 and ML2 of, say, G1 still remain to be mapped. As in Case 1, upward vertical defusion is done to map the remaining levels between ML1 and ML2.
3. Repeat steps 1 and 2 for the next pair of mapping levels, ML2 and ML3, until all successive pairs of given mapping levels are exhausted.
4. Further, there may be additional levels in the graphs above the highest given mapping level, ML3. In that case, starting from this highest mapping level, the procedure of Case 1 is adopted.
5. Finally, there may be levels below the lowest given mapping level, ML1. As before, starting from this lowest mapping level we proceed downwards until all the levels of one graph are exhausted. Any remaining levels are mapped to the bottommost level of the exhausted graph, and downward vertical defusion is done (Rule 4).
4.1.1 Rules of Level Mapping
Let ML (p, q) be given. Further, let X1, Y1 be the concepts of method M1 and X2, Y2
be the concepts of method M2. Also, let level (X1) =p and level (X2) =q.
1. Given ∃ level (p+1) and ∃ level (q+1), then ML(p+1, q+1).
As (p+1) and (q+1) exist, ∃ (Y1, X1) and ∃ (Y2, X2) where level(Y1) = p+1 and level(Y2) = q+1. To preserve the is built of between X1, Y1 and between X2, Y2, there must be ML(p+1, q+1).
2. Given ∃ level (p+1) but no level (q+1), then ML(p+1, q+1).
Mapping is possible by introducing a level (q+1), obtained by applying upward vertical defusion to X2.
3. Given ∃ level (p-1) and ∃ level (q-1), then ML(p-1, q-1).
As (p-1) and (q-1) exist, ∃ (X1, Y1) and ∃ (X2, Y2) where level(Y1) = p-1 and level(Y2) = q-1. To preserve (X1, Y1) while mapping to (X2, Y2), we get ML(p-1, q-1).
4. Given ML(p, q) and ∃ level (p-1) but no level (q-1), then ML(p-1, q-1).
As in Rule 2 above, mapping is possible by introducing a level (q-1), obtained by applying downward vertical defusion to Y2.
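Case 1 of the level-mapping procedure can be sketched as follows (our own illustration; the function name and encoding are assumptions): levels are paired bottom-up, and once the shorter graph is exhausted, each remaining level of the taller graph is paired with a level that upward vertical defusion must create.

```python
# Pair levels bottom-up from ML(0, 0). When one graph runs out of levels,
# record that the missing partner level must be produced by upward
# vertical defusion of the exhausted graph's topmost nodes (Rule 2).

def level_mapping(num_levels_g1, num_levels_g2):
    ml, to_defuse = [], []
    for lv in range(max(num_levels_g1, num_levels_g2)):
        ml.append((lv, lv))
        if lv >= num_levels_g1:
            to_defuse.append(("G1", lv))   # level to create in G1
        if lv >= num_levels_g2:
            to_defuse.append(("G2", lv))   # level to create in G2
    return ml, to_defuse

# The situation of the example in Section 4.1.2: G1 has levels 0-4,
# G2 only levels 0-3, so level 4 of G2 must be created by defusion.
ml, to_defuse = level_mapping(5, 4)
```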
4.1.2 Example of Level Mapping
As an example of developing the level mapping, consider the method graphs shown in Figure 3. Let the given node mappings be (A, P) and (C, Q). These imply ML(1, 1) and ML(3, 3). Following procedure b of Case 2 (Section 4.1), we start with the two levels (1, 1) and (3, 3). This gives us ML(2, 2) from mapping Rule 1 above.
Figure 3: Level mapping between method graph G1 (five levels, 0 to 4, topmost node E) and method graph G2 (four levels, 0 to 3, topmost node R)
There are no more remaining levels between (1, 1) and (3, 3) in the two graphs, so we proceed with level mapping for the levels above (3, 3). As there are no more levels remaining in graph G2, upward vertical defusion is done as per Rule 2 and we get ML(4, 4). Upward vertical defusion introduces a new node R′ (R = R′) at level 4 in G2 and the edges J, K and L. Next, we map the levels below ML(1, 1). This results in ML(0, 0) from Rule 3 above. The resultant mapping levels are shown in Figure 4.
Figure 4: Level Mapping with Upward Vertical Defusion. G2 gains node R′ at level 4, connected to R by the is built of edge J, so that both graphs now have levels 0 to 4.
4.2 GENERATING NODE MAPPING
The second step of the generation technique establishes node mapping. We use similarity for node mapping. A similarity exists between two nodes provided:
1. The two candidate nodes are at mapping levels.
2. The levels of the super-concepts and sub-concepts of the edges in which the nodes participate are mapping levels.
3. The min, max cardinalities (Section 1) of the edges are the same.
4. The degrees of the two nodes are the same.
We assume that if two nodes are similar then they are mapping nodes.
It is possible that the generation technique discovers that one node maps to exactly one node, that one node maps to more than one node, or that many nodes map to many nodes. Further, there may be no node mapping to a node in the other graph. These possibilities are dealt with as follows:
i) 1:1 node mapping. No further action is required.
ii) 1:n node mapping. (n-1) nodes, and the edges incident on the node, are generated on the '1' side by horizontal defusion.
iii) m:n node mapping. Assume n > m; then (n-m) nodes, and the edges incident on them, are generated by horizontal defusion in the graph on the m side. An arbitrary node from among the m nodes is picked and horizontal defusion is applied to it (n-m) times.
iv) 0:n node mapping. n nodes are generated by addition on the 0 side.
The procedure adopted for node mapping is as follows: start from the lowest mapping level in the two method graphs. For each pair of nodes at this mapping level, check that conditions 1 to 4 are satisfied and take the appropriate action i) to iv) as required. Repeat until all the mapping levels are exhausted.
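Conditions 1 to 4 reduce to a small similarity predicate. The sketch below is our own illustration, using a level dict plus (source, destination, min, max) edges and, for brevity, inspecting only the edges leaving each node (its super-concepts):

```python
# Two nodes are similar when they sit at mapping levels, the super-concept
# levels of their outgoing "is built of" edges are mapping levels, the
# (min, max) cardinalities of those edges agree, and their degrees agree.

def similar(n1, n2, g1, g2, ml):
    if (g1["level"][n1], g2["level"][n2]) not in ml:
        return False                                   # condition 1
    e1 = sorted(((g1["level"][d], mn, mx)
                 for s, d, mn, mx in g1["edges"] if s == n1),
                key=lambda t: t[0])
    e2 = sorted(((g2["level"][d], mn, mx)
                 for s, d, mn, mx in g2["edges"] if s == n2),
                key=lambda t: t[0])
    if len(e1) != len(e2):
        return False                                   # condition 4: degree
    return all((l1, l2) in ml and (mn1, mx1) == (mn2, mx2)  # conditions 2, 3
               for (l1, mn1, mx1), (l2, mn2, mx2) in zip(e1, e2))

# A fragment of the Section 5 situation: Functionality matches across the
# graphs, while Cardinality of G1 (super-concept at level 4) does not
# match Functionality of G2 (super-concept at level 1).
g1 = {"level": {"Functionality": 0, "Cardinality": 0,
                "Attribute": 1, "Relationship": 4},
      "edges": [("Functionality", "Attribute", 1, "n"),
                ("Cardinality", "Relationship", 1, "n")]}
g2 = {"level": {"Functionality": 0, "Attribute": 1},
      "edges": [("Functionality", "Attribute", 1, "n")]}
ml = {(0, 0), (1, 1), (4, 4)}
```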
As an example for the cases i) to iv) above consider the two method graphs shown in
Figure 5. We will apply this algorithm and demonstrate the action taken for each of these
cases.
• 1:1 node mapping: consider the nodes nva and nja at ML(v, j). The two nodes are at mapping levels, the levels of their super-concepts are ML(s, g), the min, max cardinality of both edges is (1, n), and the degree of both nodes is one. They satisfy all the conditions for node mapping, so we get MN(nva, nja).
• m:n node mapping: consider the nodes nua, nub, nuc and nia, nib at ML(u, i). All the nodes are at mapping levels, their super-concepts are at ML(s, g), the min, max cardinality for all the edges is (1, 1), and the degree of all the nodes is one. As the number of mapping nodes is 3:2, horizontal defusion of either nia or nib at level i in G2 is done. Let us apply horizontal defusion to nia to get nia′. This results in MN(nua, nia), MN(nub, nib) and MN(nuc, nia′).
• 0:n node mapping: consider the node nra at level r in G1. There is no mapping node at ML(r, f) in G2. Addition of a mapping node nfa at level f in G2 is done. Thus we get MN(nra, nfa).
• 1:n node mapping: consider the nodes npa, npb and nda at ML(p, d). Two nodes map to one node in G2, so, as in the case of m:n mapping, horizontal defusion of nda at level d in G2 is done to get ndb. So we get MN(npa, nda) and MN(npb, ndb).
Figure 5: Node mapping between method graphs G1 and G2, with the (min, max) cardinality of each edge shown
Applying this algorithm repeatedly, the following node correspondences result:
(nva, nja), (nua, nia), (nub, nib), (nuc, nia′), (nta, nha), (nsa, nga), (nra, nfa), (npa, nda)
4.3 GENERATING EDGE MAPPING
The third step of the generation technique generates one-to-one correspondences between edges E1 of G1 and E2 of G2, denoted (E1, E2). The generation technique checks two conditions on E1 and E2:
1) the source and destination nodes of E1 and E2 are mapping nodes, and
2) the min, max cardinalities of the two edges are the same.
The procedure for establishing edge mapping is as follows: start from the lowest mapping level in the two graphs G1 and G2; for each pair of edges incident on mapping nodes at this level, check that conditions 1 and 2 above are satisfied; repeat for all pairs of edges incident on mapping nodes at this level, and then repeat until all mapping levels are exhausted.
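The two conditions reduce to a small predicate over edge pairs. The sketch below is our own illustration, using the same (source, destination, min, max) edge encoding as before:

```python
# An edge pair (E1, E2) maps when their sources map, their destinations
# map, and their (min, max) cardinalities are identical.

def edges_map(e1, e2, mn):
    # e1, e2: (source, destination, min, max); mn: set of node-mapping pairs
    (s1, d1, lo1, hi1), (s2, d2, lo2, hi2) = e1, e2
    return (s1, s2) in mn and (d1, d2) in mn and (lo1, hi1) == (lo2, hi2)

# The Figure 6 case: edge E runs from nva to nsa in G1, and edge L runs
# from nja to nga in G2; both carry cardinality (1, n).
mn = {("nva", "nja"), ("nsa", "nga"), ("nua", "nia"), ("nub", "nib")}
E = ("nva", "nsa", 1, "n")
L = ("nja", "nga", 1, "n")
```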
As an example of edge mapping, consider the two graphs shown in Figure 6. The horizontal lines show the mapping levels. The mapping nodes are MN(nra, nfa), MN(nsa, nga), MN(nua, nia), MN(nub, nib) and MN(nva, nja). Starting from the lowest ML(v, j), we select MN(nva, nja). The edges incident on the two nodes are E and L, respectively. The destination node of E is nsa and that of L is nga, and the two destination nodes are mapping nodes, MN(nsa, nga). The min, max cardinality of both E and L is (1, n). As the two conditions for edge mapping are satisfied, E and L are mapping edges. Similarly, mapping for the other edges is established. The mapping edges for the two graphs are as follows: (A, H), (E, L), (M, P) and (D, J).
Figure 6: Edge mapping between method graphs G1 and G2, with the (min, max) cardinality of each edge shown
Figure 6: Edge mapping
5. A COMPLETE EXAMPLE OF ER TO RELATIONAL MAPPING
As an example of developing the mapping using the generation technique (Section 4), consider the method graphs for the ER and Relational models. The generation technique establishes mapping between the two graphs by:
1. generating one-to-one mapping between the levels of the ER and Relational method graphs;
2. generating one-to-one node mappings by horizontal defusion and addition in the Relational method graph; and
3. generating one-to-one edge mappings in the ER and Relational graphs.
Step 1: Level Mapping
Let the node mapping given to us be (Attribute, Attribute) and (Entity, Relation). This implies ML(1, 1) and ML(3, 3), as shown in Figure 7.
Figure 7: Method Graphs of ER and Relational Models. The ER graph has Relationship at level 4, Entity at level 3, P_key at level 2, Attribute at level 1, and Functionality and Cardinality at level 0; the Relational graph has Relation at level 3, P_key at level 2, Attribute at level 1, and Functionality at level 0.
Figure 7: Method Graphs of ER and Relational Models
We use the level mapping rules and procedure b of Case 2 to establish level mapping. We start with the two levels (1, 1) and (3, 3). This gives us ML(2, 2) from Rule 1. As there are no more levels remaining between (3, 3) and (1, 1), we proceed with level mapping for the levels above (3, 3). As the levels of the Relational method graph are exhausted, upward vertical defusion is applied to the Relation node as per Rule 2. Thus we get ML(4, 4), a new node Relation′ at level 4 in G2 with an is built of edge J between Relation and Relation′, and the edges K and L. Next, we map the levels below the given ML(1, 1). This results in ML(0, 0) from Rule 3 above.
The level mapping thus achieved is as follows:
ML(0, 0), ML(1, 1), ML(2, 2), ML(3, 3), ML(4, 4)
The resultant mapping levels are shown below in Figure 8.
[Figure: the vertically defused graphs, with the new node Relation' at level 4 of the Relational method graph G2, connected to Relation by the is-built-of edge J and by the edges K and L]
Figure 8: Vertically Defused Graphs
Step 2: Node Mapping
Start from the lowest mapping level, ML (0, 0). We have the node Functionality of G1, which could map to either Functionality or Cardinality of G2.
Let us first consider Cardinality of G2 and Functionality of G1 for node mapping. The levels of their super concepts are 4 and 1 respectively, so they are not mapping nodes.
Next, consider Functionality of G1 and Functionality of G2. Both are at ML (0, 0); their super concepts, Attribute in G1 and Attribute in G2 respectively, are at ML (1, 1); the (min, max) cardinality of the edges D and S is (1, n); and the degree of both nodes is one. We therefore get MN (Functionality, Functionality).
There is no mapping node for Cardinality of G1 in G2 at ML (0,0), so we use addition to
add the node Cardinality and the edge H’ between Cardinality and Relation’ in the
Relational method graph. This results in MN (Cardinality, Cardinality) at this mapping
level. The final node mapping for this set of models is as follows:
(Functionality, Functionality)
(Cardinality, Cardinality)
(Attribute, Attribute)
(P_key, P_key)
(Entity, Relation)
(Relationship, Relation)
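The node-mapping conditions used above can be expressed as a small predicate. In this Python sketch, the record fields (`level`, `super_level`, `edge_card`, `degree`) are our own illustrative encoding, not the paper's notation:

```python
def nodes_map(n1, n2):
    """Two nodes at the same mapping level map when their super concepts sit
    at the same mapped level, the (min, max) cardinalities of the edges to
    those super concepts agree, and the two nodes have equal degree."""
    return (n1["level"] == n2["level"]
            and n1["super_level"] == n2["super_level"]
            and n1["edge_card"] == n2["edge_card"]
            and n1["degree"] == n2["degree"])

# Functionality in G1 and G2: super concept Attribute at level 1, edge (1, n).
func_g1 = {"level": 0, "super_level": 1, "edge_card": (1, "n"), "degree": 1}
func_g2 = {"level": 0, "super_level": 1, "edge_card": (1, "n"), "degree": 1}
# Cardinality in G2: its super concept sits at level 4.
card_g2 = {"level": 0, "super_level": 4, "edge_card": (1, "n"), "degree": 1}

assert nodes_map(func_g1, func_g2)      # MN(Functionality, Functionality)
assert not nodes_map(func_g1, card_g2)  # rejected: super concepts at 1 vs 4
```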
Step 3: Edge Mapping
Start from the lowest mapping level, ML (0, 0), and consider MN (Functionality, Functionality). The edges incident on these nodes are D and S. The destination nodes of the two edges form MN (Attribute, Attribute) and the (min, max) cardinality of both edges is (1, n), so they are mapping edges. The other mapping nodes at this level are (Cardinality, Cardinality); the edges H and H' incident on them also satisfy the conditions of edge mapping, so we get (H, H'). The complete set of mapping edges is:
(A, P), (C, Q), (B, R), (D, S), (G, J), (E, K), (F, L), (H, H').
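The edge-mapping test can be sketched the same way; again, the record fields and the `node_map` dictionary are our own illustrative encoding:

```python
def edges_map(e1, e2, node_map):
    """Edges incident on a pair of mapping nodes map when their destination
    nodes also form a mapping pair and their (min, max) cardinalities agree."""
    return (node_map.get(e1["src"]) == e2["src"]
            and node_map.get(e1["dst"]) == e2["dst"]
            and e1["card"] == e2["card"])

node_map = {"Functionality": "Functionality", "Attribute": "Attribute"}
# Edge D in G1 and edge S in G2, both from Functionality to Attribute.
D = {"src": "Functionality", "dst": "Attribute", "card": (1, "n")}
S = {"src": "Functionality", "dst": "Attribute", "card": (1, "n")}

assert edges_map(D, S, node_map)   # ME(D, S)
```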
Engineering Schema Transformation Methods
[Figure: the ER Method Graph G1 and the Relational Method Graph G2 with the final node and edge mappings, including the added Cardinality node and edge H' in G2]
Figure 9: Mapping graphs
Thus the final node mappings between the two graphs shown above are:
(Functionality, Functionality)
(Cardinality, Cardinality)
(Attribute, Attribute)
(P_key, P_key)
(Entity, Relation)
(Relationship, Relation)
CONCLUSION
There are two aspects of our generative technique that need justification:
a) Level mapping: We have used the heuristic that unmapped levels should be made to map to the highest level. This corresponds to a) in Figure 10. As shown in Figure 10 b), it is possible to use another heuristic that maps levels in a top-down manner, so that unmapped bottom levels eventually map to the bottom level of the other graph. Figure 10 c) shows yet another possibility.
[Figure: three alternative pairings of levels L11 to L14 with L21 and L22, labelled (a), (b) and (c)]
Figure 10: Level mapping
The is-built-of relationship says that a super concept is built on a sub concept. Therefore, the level of the super concept must normally be higher than that of its sub concept. This must be ensured by any mapping technique. Thus, the technique should be bottom-up, thereby ensuring that the lowest levels are mapped to one another, the next higher levels to each other, and so on. This means that any unmapped levels accumulate at the top and map to the topmost level of the other graph.
b) The application of horizontal defusion to an arbitrary node in the m:n case of node mapping: we have not found any suitable heuristic here and simply defuse the leftmost node. The method engineer is presented with all such arbitrary mappings for confirmation or modification.
Extending Methods to Express Change Requirements
Anne Etien, Rébecca Deneckère, Camille Salinesi
Centre de Recherche en Informatique
Université Paris 1 – Panthéon Sorbonne
90, rue de Tolbiac 75013 Paris – France
{aetien, denecker, camille}@univ-paris1.fr
Abstract. A large portion of Information System engineering effort involves evolution. As for IS development from scratch, dealing with requirements in an appropriate way is a crucial aspect of IS evolution. There are, however, very few Requirements Engineering methods that propose explicit concepts to handle the required changes. We developed a number of concepts to express change requirements in an industrial setting. These concepts were generalised and their use tried out in the context of several IS engineering methods. This paper proposes a systematic method engineering approach to introduce these concepts into any IS engineering method based on a meta-model. This SME approach is extension-based: it changes an origin method in order to obtain an extended method that meets the requirements. The changes between these two methods are expressed with the same concepts as for the IS itself.
1 Introduction
It is well known that methods are not always well suited to users' needs and that it is necessary to add or change concepts and/or the relationships between them [Lyytinen87]. Situational Method
Engineering (SME) aims at project-specific method construction. The need for a better productivity of
Information System (IS) engineering teams, as well as a better quality of products motivates the
development of solutions to adapt methods to the project situation at hand [Saeki93], [Harmsen97],
[Rolland98], [Ralyté99b]. SME favours the construction of modular methods that can be modified and
augmented to meet the requirements of given situations [Harmsen94], [Slooten93]. There are several
approaches to adapt methods: by instantiation of generic method components [Rolland93,94b,96], by
assembly of different method chunks [Ralyté01], by integration of overlapping fragments [Ralyté99a]
or by extension [Deneckère01]. This last approach aims at modifying an existing method (called the
origin method) by on-the-fly addition or modification of concepts in its meta-model (to obtain an
extended method).
There are a number of similarities between adapting methods by extension and what is done in
Requirements Engineering to deal with the particular issue of IS evolution. Indeed, when an IS
evolves, it moves from an initial situation to a future one. Traditionally, these situations are
respectively represented by an As-Is model and a To-Be model [Jackson95]. The problem of eliciting
requirements in the context of change can be seen as the one of adapting the As-Is model to reach the
To-Be model [Rolland03]. We believe that the same approach can be used when working with
methods. However, instead of models adaptation, the issue is the one of adapting meta-models.
Therefore, method adaptation deals with changing an “As-Is” meta-model into a “To-Be” meta-model.
Our proposal is thus to use the language we proposed to express change requirements for IS adaptation
at the meta-model level, and therefore to describe the requirements for method adaptation. The
adaptation consists in extending methods with our concepts for the expression of change requirements.
Our experience with several industrial projects [Salinesi02b], [Rolland03] showed us that practitioners often find it inefficient to completely describe the As-Is and To-Be models and then compare them. Instead of this complex way of working, they prefer to express the required changes in the form of gaps with the current situation. Therefore, we believe requirements elicitation for IS evolution shall be driven by gaps. Gaps identify what has to be changed or adapted to the new situation. In order to formalise the definition of change requirements with gaps, we proposed to use operators that express model transformations. These operators have been generalised so that they can be used with any meta-model [Rolland03]. This approach has been tried out with several meta-models [Salinesi02b], [Salinesi02a], [Rolland03]. We also found that it is more exhaustive than ad hoc proposals such as the typologies of gap operators defined by [Banerjee87] and by [Casati96].
EMSISE’03
This paper proposes a method engineering approach which aims at improving any method to allow the
expression of change requirements. The approach relies on a method extension process. This process
adapts our typology of gap operators to the specificities of the method to extend. It then introduces the
adapted typology into the method’s meta-model. The process also proposes a number of quality
criteria to evaluate the method with respect to the change requirements concern.
The next section presents our approach for eliciting change requirements in the IS Engineering domain
(i.e. at the model level), and adapts it to the Method Engineering domain (i.e. at the meta model level).
The principles of the approach we propose for extending any method with change requirements concepts are outlined in section 3. The method extension process is detailed in section 4 and illustrated
with an example in section 5.
2 Change requirements elicitation approach
According to [Rolland03] change requirements are expressed as gaps between the As-Is and the To-Be
situations. On the one hand, gaps are specified by instantiating gap operators. On the other hand, the
As-Is and To-Be situations are respectively specified with models that instantiate a unique meta-model. The issue is to find out which gap operators should be used with given meta-models. Rather than defining from scratch another collection of gap operators for each new meta-model, we proposed a generic typology of gap operators that can be systematically adapted to any specific meta-model.
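This systematic adaptation can be pictured as a cross product of the generic operators with the objects of the specific meta-model, filtered by the kind of object each operator applies to. The operator and object names in this Python sketch are illustrative and do not reproduce the paper's actual typologies:

```python
# Generic operators together with the kind of object each applies to.
GENERIC_TYPOLOGY = {"Add": "element", "Remove": "element",
                    "Merge": "element", "Withdraw": "property"}

# A toy specific meta-model (Use Case Diagrams), as object -> kind.
UCD_META_MODEL = {"UseCase": "element", "Actor": "element",
                  "UseCaseName": "property"}

def specific_typology(generic, meta_model):
    """Instantiate each generic operator for every compatible object."""
    return sorted(op + obj
                  for op, kind in generic.items()
                  for obj, obj_kind in meta_model.items()
                  if obj_kind == kind)

ops = specific_typology(GENERIC_TYPOLOGY, UCD_META_MODEL)
# ops contains e.g. 'AddUseCase', 'MergeActor', 'WithdrawUseCaseName'
```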
2.1 Presentation of the approach
As shown in figure 1, specific meta-models are re-defined using a generic meta-model. Besides, our
generic gap typology is instantiated by specific gap typologies. The purpose of the former instantiation
is to identify the key elements and structures of the specific meta-models (e.g. UML, OMT etc.).
The generic gap operators defined in the generic typology apply to the concepts of the generic meta-model. The production of specific gap typologies is made systematic by combining the generic gap typology with the specific meta-model expressed by instantiation of the generic meta-model.
At the model level, the current situation and the future situation are respectively defined by the As-Is
and the To-Be models. These are instances of the specific meta-model. The gaps between the As-Is
and the To-Be are represented in the figure by Δ. They are instances of operators contained in the
specific typology.
[Figure: three levels, namely the generic meta-model level (generic meta-model and generic typology), the specific meta-model level (meta-model and typology of operators), and the model level (As-Is and To-Be models related by gaps Δ), connected by "instance of" and "applied on elements of" links]
Figure 1: Framework for the change requirement elicitation approach
Examples of meta-models at the specific level are Entity-Relationship, Use Case Diagrams (UCD),
etc. Let us take the example of an IS for which requirements have been elicited using Use Case
Diagrams. If the IS has to evolve, then the elicitation of the change requirements shall be driven by
16
Extending Methods to Express Change Requirements
gaps. The typology of gap operators that can be used for this purpose will for instance include “Add a
Use Case”, “Change the Origin of a Use Case-Actor Association”, “Merge Actors”, etc. The complete
typology of Use Case gap operators to be used can be systematically designed by instantiating the
generic gap typology. The remainder of this section presents how to achieve this.
2.2 The generic meta-model and generic typology of gap operators
The generic meta-model aims at making explicit the elements and structures of any meta-model.
According to this generic meta-model, any given meta-model is composed of a collection of objects
that are either elements or properties of elements. As shown in Figure 2, Elements are classified into
two bundles. First, a distinction between Simple Elements and Compound Elements is made. Second,
elements can be classified into Link and Not Link.
Compound Elements can be decomposed into finer-grain elements, which can in their turn be simple or compound. In this view, any model is seen as a compound element.
Link Elements are connectors between pairs of elements. One of the connected elements plays the role
of the Source and the other is the Target.
For technical reasons, there is always an element classified as Root. This makes it possible to indicate what the minimal content of a model is: the “Object” class in a class hierarchy, the system boundary in a Use Case diagram, etc.
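The structure just described can be sketched as a small class hierarchy. This Python encoding is our own reading of Figure 2, not an implementation supplied with the paper:

```python
class Element:
    """An object of a meta-model; Root marks the minimal content of a model."""
    def __init__(self, name, properties=None, root=False):
        self.name = name
        self.properties = properties or []
        self.root = root

class SimpleElement(Element):
    pass

class CompoundElement(Element):
    """An element decomposed into finer-grain (simple or compound) elements."""
    def __init__(self, name, **kwargs):
        super().__init__(name, **kwargs)
        self.components = []

class Link(Element):
    """A connector between a Source element and a Target element."""
    def __init__(self, name, source, target, **kwargs):
        super().__init__(name, **kwargs)
        self.source, self.target = source, target

# A Use Case diagram viewed through the generic meta-model:
boundary = CompoundElement("SystemBoundary", root=True)
use_case = SimpleElement("UseCase")
actor = SimpleElement("Actor")
assoc = Link("UseCase-Actor", source=actor, target=use_case)
boundary.components += [use_case, actor, assoc]
```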
[Figure: the generic meta-model, in which an Object has Properties and specialises, via Is-a links, into Element kinds: Simple versus Compound, Link (with source and target) versus Not Link, plus Root]
Figure 2: Meta-model for gap typology definition
In our approach, change requirements are expressed as gaps, in the form of change operations made
on models. There are different kinds of such operations: adding or removing elements, changing,
replacing, etc. Fourteen operators have been identified and defined at the generic level, i.e. to apply to the generic meta-model [Rolland03]. As Figure 3 shows, gaps between Use Case models can for
instance be defined using these operators.
[Figure: As-Is and To-Be use case models with actors A1 to A4, use cases C1 to C7, «uses» and «extends» links, and extension conditions EC1 to EC3]
Figure 3: Example of Use Case models evolution.
In this example, there exist some gaps between the As-Is and the To-Be models. In particular, the
following operators have been used:
17
Anne Etien, Rébecca Deneckère and Camille Salinesi
• AddComponent: The Use Case C6 has been added to the Use Case Diagram (which is a compound element). This could for instance correspond to a new service expected from the system. Besides, the ‘Is-A’ link from C6 to C1 is added to make the definition of this new service more precise.
• Merge: The use cases C3 and C4 have been merged into C3. This could for instance indicate that from now on, the corresponding services shall be provided by the system within a single transaction.
• Split: The use case C2 has been split into C2 and C7. This can occur when the user requires to be able to use the service defined in C7 independently.
• ChangeOrigin: The link ‘uses’ from C3 to C2 has been changed. In the future situation, the C3 Use Case should include C7. From now on, the service defined in C7 shall be used in the context of C3.
• RemoveComponent: The actor A2 has been removed. This gap expresses that the system shall not be used by this actor in the future.
• Replace: The link ‘uses’ from C1 to C2 has been replaced by a link ‘extends’ from C2 to C1. This means that rather than being a sub-part of C1, the service provided by C2 shall now be a variation on a part of C1.
• Give: The extension condition EC3 has been added to the extend link between C2 and C1. This gap comes in complement to the former one. It indicates the new conditions under which C2 shall be used within the context of C1.
• Modify: The extension condition EC1 has been modified; in the future situation, EC2 shall replace it.
The three other operators of the generic gap typology are Retype, MoveComponent and Withdraw. Retype applies to any kind of element (Compound, Simple, Link, or Not Link). This operator can be used when an element shall have a different type in the future situation. MoveComponent only applies to Compound elements. It can be used to express the fact that the position of an element is changed with respect to a compound element. The Withdraw operator is only used for properties. Its consequence is the removal of an element’s property in the To-Be model.
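Expressing gaps as operator instances can be illustrated by replaying a subset of the Figure 3 gaps over a toy As-Is model. The dictionary encoding and the `apply_gap` helper are our own sketch, not the paper's machinery:

```python
as_is = {"use_cases": {"C1", "C2", "C3", "C4", "C5"},
         "actors": {"A1", "A2", "A3", "A4"}}

# Gaps as instances of generic operators (a subset of the Figure 3 example).
gaps = [("AddComponent", "use_cases", "C6"),
        ("Merge", "use_cases", ("C3", "C4"), "C3"),
        ("Split", "use_cases", "C2", ("C2", "C7")),
        ("RemoveComponent", "actors", "A2")]

def apply_gap(model, gap):
    """Return the To-Be model obtained by applying one gap to an As-Is model."""
    kind, slot = gap[0], gap[1]
    items = set(model[slot])
    if kind == "AddComponent":
        items.add(gap[2])
    elif kind == "RemoveComponent":
        items.discard(gap[2])
    elif kind == "Merge":                 # merged elements -> resulting one
        items.difference_update(gap[2])
        items.add(gap[3])
    elif kind == "Split":                 # split element -> resulting pair
        items.discard(gap[2])
        items.update(gap[3])
    return {**model, slot: items}

to_be = as_is
for gap in gaps:
    to_be = apply_gap(to_be, gap)
# to_be["use_cases"] -> {'C1', 'C2', 'C3', 'C5', 'C6', 'C7'}
```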
2.3 Extending the view provided by the generic meta-model with the concept of gap operator
In our approach the gap operators are the generic concepts that should be instantiated to specify
change requirements. Once introduced in a method, they belong to the meta-model, and thereafter they
instantiate the generic meta-model. Rather than systematically re-defining this instantiation in an ad
hoc way, we propose to introduce an extension to the generic meta-model.
The extended generic meta-model has two parts: the first part is the core meta-model (shown in figure
2), and the second part provides a pattern defining any gap operators in the terms of the core generic
meta-model. As shown in Figure 4, operators are seen as specialisations of the existing generic
concepts. Indeed, an operator is a compound element that includes one or several link elements. Each
of these links is set between the operator itself (as source) and any other kind of object (as target). The
links with the target elements have a particular property that tells to which time horizon these targets shall belong, e.g. As-Is or To-Be [Salinesi03a].
For example the AddComponent operator is defined with a link to the changed compound element
(identified in the As-Is model), and another link to the added element (that shall belong to the To-Be
model). Similarly, the Merge operator is composed of three links; two of these target the merged
elements from the As-Is model, the third link targets the resulting element to be included in the To-Be
model.
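This operator pattern can be sketched as follows; the tuple encoding of links with their Time Horizon property is our own illustration:

```python
def operator(name, links):
    """An operator as a compound element whose links carry a Time Horizon
    property; `links` is a list of (target object, horizon) pairs."""
    assert all(h in ("As-Is", "To-Be") for _, h in links)
    return {"name": name, "links": links}

# Merge: two links target the merged As-Is elements, one the To-Be result.
merge = operator("Merge", [("C3", "As-Is"), ("C4", "As-Is"), ("C3", "To-Be")])
# AddComponent: one link to the changed compound, one to the added element.
add = operator("AddComponent", [("UseCaseDiagram", "As-Is"), ("C6", "To-Be")])

as_is_targets = [t for t, h in merge["links"] if h == "As-Is"]
# as_is_targets -> ['C3', 'C4']
```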
18
Extending Methods to Express Change Requirements
[Figure: an Operator as a Compound element whose Links (with the Operator as source and any Object as target) each have a Time Horizon Property]
Figure 4: Extended generic meta-model
Let us take the example of Use Case Diagrams. Our Method Engineering requirement is to extend the
Use Case Diagrams with gap operators such as Use Case merging. This will make it possible to express change requirements with extended Use Case Diagrams. The meta-model of such an extended UCD shall now include the concept of MergeUseCase, which links three Use Cases from different time horizons. This concept instantiates the pattern provided by the extended generic meta-model. Besides, it reuses the
definition provided by the merge operator in the generic gap typology.
3 How to change a method to support change requirements elicitation
We propose to adopt a SME approach to guide the extension of methods with a gap typology. The
basis of SME is the issue of the adequacy between a project situation and the method used in the
project. If the situation is such that the method used is inadequate, then it should be changed. These changes should comply with change requirements which, in our view, should be expressed like any other kind of change requirements, i.e. with gaps.
3.1 Overview of the approach
The extended generic meta-model defines gap operators as particular kinds of concepts (Figure 4).
Gap operators are applicable to every concept of any meta-model. Therefore, they can be applied to themselves. It is thus correct to use a gap operator to indicate that another gap operator should be
introduced or modified in a method. Gap operators are used in our approach to define the changes
required on a typology of gap operators. This recursion allows us to specify the extension of any specific product meta-model with an adequate typology of gap operators.
The extension of product meta-models with a typology of gap operators is challenged by the
achievement of quality criteria. Indeed, it is often possible to extend a product meta-model, but the
important question is why the extension is needed, and what it aims for. Our view is that quality
criteria can help answer these questions. Based on a literature review, we selected a number of such
criteria and adapted them to the purpose of our approach:
• Completeness: the typology must be expressive enough to capture all ‘essential aspects’ of change requirements [Teeuw97]. In other terms, the set of operators must subsume every possible change [Banerjee87], [Kradolfer00].
• Consistency: gap operator definitions must not conflict with each other [Teeuw97]. For example, it should not be possible to define ambiguous gaps, i.e. gaps that could be interpreted as referring to different gap operators.
• Minimality refers to the achievement of completeness with a minimal set of operators [Casati96]. In other terms, a set of gap operators can be qualified as minimal if it does not contain any operators that can be obtained by composition of others.
• Exhaustiveness: a typology of gap operators can be considered exhaustive if any type of change can be expressed using only one of its gap operators.
• Fitness for use: as suggested by Juran [Juran88], the question raised by this criterion is the one of meeting the customer needs. This involves adequacy to the kind of requirements dealt with (e.g. NFRs, architectural requirements), to the techniques and tools used to express them (e.g. unstructured text), as well as to the purpose of the project (e.g. creation of system, configuration management).
• Correctness: gap operators must be defined so as to preserve invariants¹ that are associated with the meta-model.
It should be noticed that these criteria cannot always be simultaneously satisfied. For example, the criteria of minimality and exhaustiveness are clearly contradictory. Therefore, the choice of quality criteria should be carefully driven by the project situation coupled with the method extension objectives. One has to keep in mind that the non-satisfaction of one of these criteria can engender modifications in the meta-model.
For example, in the WIDE project [Casati96], the existing gap typology fulfils the minimality, completeness, consistency and correctness quality criteria. Some change requirements, like merging two tasks in the workflow, can only be expressed by combining several operators of the existing typology. It can thus be interesting to extend it in order to achieve the exhaustiveness quality criterion. The requirement is to AddOperators such as MergeTask, SplitTask, or ReplaceVariable in the gap typology. This method extension has, however, the consequence that the collection of gap operators is no longer minimal in the extended meta-model.
Our general framework (shown in Figure 1) was adapted to take into account the usage of the gap
typology to express method extension requirements. As shown in Figure 5, the view is no longer that of a single specific meta-model defining the method. Rather, the framework tells us that we are dealing with an evolving method represented by As-Is and To-Be extended specific meta-models. Each extended specific meta-model is composed of the core meta-model and a specific typology of gap operators. The former instantiates the core generic meta-model (Figure 2) whereas the latter instantiates the extension of the generic meta-model (Figure 4). The method extension requirements are expressed as instances of the generic gap typology (using the gap typology for the operator object, i.e. the generic gap typology instantiated only for the operator object) insofar as operators are themselves considered as elements. These modifications, represented with Δ in Figure 5, make it possible to obtain a To-Be extended specific meta-model.
[Figure: the extended generic meta-model and generic typology at the generic level; at the specific level, As-Is and To-Be extended meta-models related by gaps Δ that instantiate the gap typology for the operator object]
Figure 5: Schema of the extended eliciting change requirements approach
For example, the Orion meta-model, formalised with the extended generic meta-model and further
extended with our typology of gap operators has two different representations: one for the As-Is
(before extension) and one for the To-Be (after the extension). The required differences between these
two are represented by gaps which instantiate the gap typology for operator object.
3.2 Method extension process
Figure 6 presents our method extension process using the Map formalism [Rolland99], [Rolland01]. A map is a directed graph whose nodes are goals and whose edges are strategies, i.e. ways to achieve a goal. Maps organise goals and strategies to represent the flow of decisions made in the process. Contrary to what could be assumed from the directed nature of maps, the purpose is not to specify a sequence of goal achievements, but to represent a non-deterministic ordering of goal/strategy selections.
¹ Invariants are properties of the meta-model that hold at every quiescent state, that is, before and after any model modification [Banerjee87].
The overall objective of the map represented in Figure 6 is to extend the specific meta-model of a method. Two situations can occur: either the meta-model already includes a typology (it is already extended), or not. Formalising the extended specific meta-model is essential as it makes it possible to precisely specify the situation. This process aims at the two main goals identified in the map: “Formalise core meta-model” and “Define extension”. Several strategies are available to achieve these: the Generic meta-model driven strategy and the Meta-model knowledge driven strategy for the former goal, and the Extension formalisation strategy for the latter.
- The principle of the Generic meta-model driven strategy is to formalise the method’s core specific meta-model with the concepts defined in the generic core meta-model.
- The Meta-model knowledge driven strategy uses the conceptual knowledge of the method to refine the core specific meta-model that has already been formalised.
- The purpose of the Extension formalisation strategy is to define, using the extended generic meta-model formalism, the concepts that already exist in the method to specify change requirements.
Two remarks shall be made: (i) having formalised concepts in the core meta-model is a pre-requisite
for defining concepts in the extension part, hence the flow from the “Formalise core meta-model” goal
to the “Define extension” goal, and (ii) the method has so far not been modified, as no supplementary extension has been made; the only improvement lies in the reformulation of its As-Is extended meta-model.
At this point in time, it is possible to evaluate the As-Is extended meta-model using one of the six
“Stop” strategies. Each of them proposes to perform this evaluation according to one of the six quality
criteria described above. If the quality of the method is found sufficient, then no improvement is
needed and the process terminates. Its added value is then to demonstrate the qualities of the existing
method. In the opposite case, the quality criteria will provide input on the target of the method
extension process.
In addition to the Extension formalisation strategy, the map proposes to define the extended meta-model by the Extension typology-based strategy. This strategy uses the generic gap typology to generate “from scratch” a gap typology that is adapted to the core specific meta-model.
Once the collection of gap operators is defined in the meta-model, it should be worked on to reach the chosen quality criteria. The required method improvements can be specified using the gap typology for the operator object. This is the proposal made in the By application of meta-model modification operator strategy. This strategy is refined into eight sub-strategies, each corresponding to an operator.
The modifications on the extended meta-model can thus be made following the RenameOperator, the
MergeOperator, the SplitOperator, the ReplaceOperator, the ChangeOrigin, the AddOperator, the
RemoveOperator, or the MoveOperatorComponent strategy.
Each of the six quality criteria can be evaluated at any moment using the corresponding “Stop”
strategy. The modification process ends up when the quality of the To-Be extended meta-model meets
the initial expectations.
[Figure: map with Start, the goals “Formalise core meta-model” and “Define extension”, Stop, and the strategies named in the text, including the six quality-criterion Stop strategies]
Figure 6: Modification process of an extended specific meta-model
Although the purpose of this process is to extend methods so that they allow the specification of change requirements, we believe it can be reused to improve their core too. For example, the second goal “Define extension” could be replaced by the more general goal “Define modification”, and Meta-model modification operator strategies could additionally be proposed for the core meta-model, e.g. to AddLink, RemoveElement, etc. Let us for instance take the case where temporal concepts such as the Calendar class are needed in the Orion method. The AddElement and AddLink strategies of this adapted process can be used to specify the need to add the Calendar class in the To-Be core meta-model, then link it to the Object class.
4 Example
This section proposes to illustrate our approach by extending the Orion method with an exhaustive
typology of gap operators [Banerjee87].
The existing Orion meta-model proposes an already rich collection of gap operators. However, it can easily be shown that this collection is not exhaustive. For example, it does not make it possible to express change requirements such as mergeClass or splitClass with a single gap operator. The required extension should preserve the completeness, correctness and consistency qualities of the extended Orion meta-model. The To-Be meta-model extension should also be exhaustive.
4.1 Formalising the Orion core meta-model
First, the Orion core meta-model has to be formalised with the concepts defined in the generic meta-model. Figure 7 uses grey levels to emphasise which concept of the generic meta-model is instantiated. For example, a Class is a compound element that contains Methods and Instance Variables. These are simple elements with various properties. Two kinds of Links relate Classes: Composition Links and Is-A Links. The former are defined as link elements with Class elements as both source and target. The latter are defined as link elements with Sub-Classes and Super-Classes respectively as source and target. This makes it possible to associate the Order property with Super-Classes to define priorities in case of multiple inheritance.
[Figure 7: Instantiation of the generic meta-model for the Orion meta-model. The diagram shows a Class Hierarchy rooted in Object; Classes (compound elements) composed of Methods (with a Code property) and Instance Variables (with Domain, Default Value and Shared Value properties); Composite Links with Classes as source and target; Is-A Links with Sub-Classes as source and Super-Classes as target, the latter carrying the Order property; and Inheritance Links.]
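The instantiation described above can be sketched in code. This is an illustrative rendering under our own assumptions (the classes Person and Employee are invented examples, and the generic concepts are modelled as plain Python classes), not part of the paper.

```python
class Property:
    """A generic meta-model Property, e.g. Code, Domain, Order."""
    def __init__(self, name):
        self.name = name

class SimpleElement:
    """A generic simple element with a name and optional properties."""
    def __init__(self, name, properties=()):
        self.name = name
        self.properties = list(properties)

class CompoundElement(SimpleElement):
    """A generic compound element containing simple elements."""
    def __init__(self, name, components=()):
        super().__init__(name)
        self.components = list(components)

class Link:
    """A generic link element with a source and a target."""
    def __init__(self, name, source, target):
        self.name, self.source, self.target = name, source, target

# Orion instantiation: a Class is a compound element containing Methods
# and Instance Variables, which are simple elements with properties.
method = SimpleElement("Method", [Property("Code")])
ivar = SimpleElement("InstanceVariable",
                     [Property("Domain"), Property("DefaultValue"),
                      Property("SharedValue")])
person = CompoundElement("Person", [method, ivar])
employee = CompoundElement("Employee")

# An Is-A link has a Sub-Class as source and a Super-Class as target.
is_a = Link("Is-A", source=employee, target=person)
```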
4.2 Defining Orion's extension by the typology-based strategy
It has already been identified that the As-Is extended meta-model is not exhaustive. A complete typology is thus constructed from the Orion specific meta-model and the generic gap typology, as reported in [Etien03]. The collection of gap operators generated that way is listed in Table 1. The table shows the instantiated generic operator (in rows), and the object of the Orion meta-model on which the generated operator applies (in columns).
Four additional gap operators are generated (to add and remove an empty Class Hierarchy and its Object class). These are not presented in the table for the sake of space.
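The generation step can be sketched as a cross-product of the generic operators and the Orion objects each of them applies to. The applicability rules paraphrase the text (ChangeOrigin acts only on links; Give, Withdraw and Modify only on properties); the exact names are our own shorthand, not the paper's.

```python
# Orion meta-model objects, grouped by the generic concept they instantiate.
ELEMENTS = ["Class", "Method", "InstanceVar"]
LINKS = ["IsA", "Composite", "InheritanceLink"]
PROPERTIES = ["Code", "Domain", "DefaultValue", "SharedValue", "Order"]

# Each generic operator maps to the kinds of objects it applies to.
GENERIC_OPERATORS = {
    "Rename": ELEMENTS + LINKS,
    "Merge": ELEMENTS + LINKS,
    "Split": ELEMENTS + LINKS,
    "ChangeOrigin": LINKS,          # only applicable to link elements
    "Add": ELEMENTS + LINKS,
    "Remove": ELEMENTS + LINKS,
    "Retype": ELEMENTS + LINKS,
    "Give": PROPERTIES,             # only act on properties
    "Withdraw": PROPERTIES,
    "Modify": PROPERTIES,
}

# Instantiate every generic operator on every applicable object.
typology = [op + obj for op, objs in GENERIC_OPERATORS.items()
            for obj in objs]
# 6 operators x 6 objects + 3 link operators + 3 x 5 property operators
# = 54 operators, matching the body of Table 1.
```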
Extending Methods to Express Change Requirements
Generic Operator | Operator Name         | As-Is Object                     | To-Be Object
Rename           | RenameClass           | Class                            | Class
                 | RenameMethod          | Method                           | Method
                 | RenameInstanceVar     | InstanceVariable                 | InstanceVariable
                 | RenameIsA             | Is-A Link                        | Is-A Link
                 | RenameComposite       | CompositeLink                    | CompositeLink
                 | RenameInheritanceLink | InheritanceLink                  | InheritanceLink
Merge            | MergeClass            | Class, Class                     | Class
                 | MergeMethod           | Method, Method                   | Method
                 | MergeInstanceVar      | InstanceVar, InstanceVar         | InstanceVariable
                 | MergeIsA              | Is-A Link, Is-A Link             | Is-A Link
                 | MergeComposite        | CompositeLink, CompositeLink     | CompositeLink
                 | MergeInheritanceLink  | InheritanceLink, InheritanceLink | InheritanceLink
Split            | SplitClass            | Class                            | Class, Class
                 | SplitMethod           | Method                           | Method, Method
                 | SplitInstanceVar      | InstanceVariable                 | InstanceVar, InstanceVar
                 | SplitIsA              | Is-A Link                        | Is-A Link, Is-A Link
                 | SplitComposite        | CompositeLink                    | CompositeLink, CompositeLink
                 | SplitInheritanceLink  | InheritanceLink                  | InheritanceLink, InheritanceLink
ChangeOrigin     | ChangeOriginIsA       | Is-A Link                        | Is-A Link
                 | ChangeOriginComposite | CompositeLink                    | CompositeLink
                 | ChangeOriginInheritance | InheritanceLink                | InheritanceLink
AddComponent     | AddClass              | Class Hierarchy                  | Class
                 | AddMethod             | Class                            | Method
                 | AddInstanceVar        | Class                            | InstanceVariable
                 | AddIsA                | Class Hierarchy                  | Is-A Link
                 | AddComposite          | Class Hierarchy                  | CompositeLink
                 | AddInheritanceLink    | Class Hierarchy                  | InheritanceLink
RemoveComponent  | RemoveClass           | Class                            | Class Hierarchy
                 | RemoveMethod          | Method                           | Class
                 | RemoveInstanceVar     | InstanceVariable                 | Class
                 | RemoveIsA             | Is-A Link                        | Class Hierarchy
                 | RemoveComposite       | CompositeLink                    | Class Hierarchy
                 | RemoveInheritanceLink | InheritanceLink                  | Class Hierarchy
Retype           | RetypeClass           | Class                            | Method or InstanceVar
                 | RetypeMethod          | Method                           | Class or InstanceVar
                 | RetypeInstanceVar     | InstanceVariable                 | Class or Method
                 | RetypeIsA             | Is-A Link                        | Composite or Inheritance
                 | RetypeComposite       | CompositeLink                    | Is-A or Inheritance
                 | RetypeInheritanceLink | InheritanceLink                  | Is-A or Composite
Give             | GiveCode              | Method                           | Code
                 | GiveDomain            | InstanceVar                      | Domain
                 | GiveDefaultValue      | InstanceVar                      | DefaultValue
                 | GiveSharedValue       | InstanceVar                      | SharedValue
                 | GiveOrder             | Is-A Link                        | Order
Withdraw         | WithdrawCode          | Code                             | Method
                 | WithdrawDomain        | Domain                           | InstanceVar
                 | WithdrawDefaultValue  | DefaultValue                     | InstanceVar
                 | WithdrawSharedValue   | SharedValue                      | InstanceVar
                 | WithdrawOrder         | Order                            | Is-A Link
Modify           | ModifyCode            | Code                             | Code
                 | ModifyDomain          | Domain                           | Domain
                 | ModifyDefaultValue    | DefaultValue                     | DefaultValue
                 | ModifySharedValue     | SharedValue                      | SharedValue
                 | ModifyOrder           | Order                            | Order
Table 1: Complete typology by instantiation of the generic typology
It can be noticed that the operators ChangeSource and ChangeTarget are only applicable to Link elements. Similarly, the Give, Withdraw and Modify operators act only on properties and are thus not relevant for elements that have no property. These restrictions directly result from exploiting the structure of the Orion-specific meta-model, which instantiates the generic meta-model.
4.3 Defining Orion's extension using the extension formalisation strategy
Orion already proposes a number of operators to express change requirements, as shown in [Banerjee87]. The purpose of this section is to show how these operators are formalised with our meta-model. According to the meta-model, each operator has a name and is composed of links with objects in As-Is models and links with objects in To-Be models. Table 2 shows, for each of the Orion operators, the list of objects it is linked to in As-Is models and the list of objects it is linked to in To-Be models.
Orion Operator                                      | As-Is Concept    | To-Be Concept    | Corresponding Operator
Add a new instance variable to a class              | Class            | InstanceVariable | AddInstanceVariable
Drop an existing instance variable from a class     | InstanceVariable | Class            | RemoveInstanceVariable
Change the name of an instance variable of a class  | InstanceVariable | InstanceVariable | RenameInstanceVariable
Change the domain of an instance variable of a class| Domain           | Domain           | ModifyDomain
Change the default value of an instance variable    | DefaultValue     | DefaultValue     | ModifyDefaultValue
Add a shared value                                  | InstanceVariable | SharedValue      | GiveSharedValue
Change the shared value                             | SharedValue      | SharedValue      | ModifySharedValue
Drop the shared value                               | SharedValue      | InstanceVariable | WithdrawSharedValue
Drop the composite link of an instance variable     | CompositeLink    | Class Hierarchy  | RemoveComposite
Add a new method to a class                         | Class            | Method           | AddMethod
Drop an existing method from a class                | Method           | Class            | RemoveMethod
Change the name of a method of a class              | Method           | Method           | RenameMethod
Change the code of a method in a class              | Code             | Code             | ModifyCode
Change the inheritance of a method                  | InheritanceLink  | InheritanceLink  | ReplaceInheritance
Make a class S a superclass of a class C            | Class Hierarchy  | Is-A Link        | AddIsA
Remove a class S from the superclass list of a class| Is-A Link        | Class Hierarchy  | RemoveIsA
Change the order of superclasses of a class C       | Order            | Order            | ModifyOrder
Add a new class                                     | Class Hierarchy  | Class            | AddClass
Drop an existing class                              | Class            | Class Hierarchy  | RemoveClass
Change the name of a class                          | Class            | Class            | RenameClass
Table 2: Formalisation of the Orion existing operators
The last column of the table identifies the corresponding operators instantiated from the generic
typology. As shown by the table, each of the operators proposed by Orion has a unique equivalent
directly instantiated from the generic typology.
4.4 Application of the meta-model modification operator strategy
The Orion extended meta-model is now composed of a number of operators formalised from Orion, and of a number of operators generated from our generic typology. In order to achieve consistency, it is necessary to compare both lists of operators (as done in Table 2) and change the extended meta-model, as proposed by the meta-model modification operator strategy.
For example, the operator AddInstanceVariable obtained from the generic typology (Table 1) and Orion's operator "add a new instance variable to a class" (Table 2) are identical. An application of mergeOperator is thus required on them. This results in the AddInstanceVariable operator shown in Table 3. Besides, a number of the operators generated from the generic typology, like RenameIsALink or RetypeComposite, are meaningless with respect to Orion. An application of RemoveOperator is thus required, so these operators do not appear in the resulting list. The same reasoning is applied to the other Orion operators and the other operators generated from the generic typology. The resulting consolidated list of operators is shown in Table 3. The operators that directly result from our generic typology are emphasised in bold; all the other operators result from a merge with Orion's operators.
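The consolidation step just described can be sketched with sets. This is an illustrative excerpt under our own assumptions: the equivalence map and the list of meaningless operators below are small samples, where the complete contents would come from Tables 1 and 2.

```python
# Operators generated from the generic typology (excerpt).
generated = {"AddInstanceVar", "RenameIsA", "RetypeComposite", "MergeClass"}
# Operators already proposed by Orion (excerpt).
orion_ops = {"add a new instance variable to a class"}

# mergeOperator: Orion operator and generated operator with the same semantics
# are merged into a single operator (mapping taken from Table 2).
same_semantics = {"add a new instance variable to a class": "AddInstanceVar"}

# RemoveOperator: generated operators meaningless with respect to Orion.
meaningless = {"RenameIsA", "RetypeComposite"}

consolidated = (generated - meaningless) | {
    same_semantics.get(op, op) for op in orion_ops
}
# AddInstanceVar appears only once (merged); the meaningless ones are gone.
```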
Operator        | Class        | Method        | Instance Variable                                         | Is-A Link                  | Composite Link             | Inheritance Link
Rename          | RenameClass  | RenameMethod  | RenameInstanceVar                                         | N/A                        | N/A                        | N/A
Merge           | MergeClass   | MergeMethod   | MergeInstanceVar                                          | N/A                        | N/A                        | N/A
Split           | SplitClass   | SplitMethod   | SplitInstanceVar                                          | N/A                        | N/A                        | N/A
Replace         | ReplaceClass | ReplaceMethod | ReplaceInstanceVar                                        | N/A (not applicable)       | ReplaceComposite           | ReplaceInheritance
Change Origin   | N/A          | N/A           | N/A                                                       | ChangeSource, ChangeTarget | ChangeSource, ChangeTarget | ChangeSource, ChangeTarget
AddComponent    | AddClass     | AddMethod     | AddInstanceVar                                            | AddIsA                     | AddComposite               | N/A
RemoveComponent | RemoveClass  | RemoveMethod  | RemoveInstanceVar                                         | RemoveIsA                  | RemoveComposite            | N/A
Retype          | RetypeClass  | N/A           | RetypeInstanceVar                                         | N/A                        | N/A                        | N/A
Give            | N/A          | GiveCode      | GiveDomain, GiveDefaultValue, GiveSharedValue             | N/A                        | N/A                        | N/A
Withdraw        | N/A          | WithdrawCode  | WithdrawDomain, WithdrawDefaultValue, WithdrawSharedValue | N/A                        | N/A                        | N/A
Modify          | N/A          | ModifyCode    | ModifyDomain, ModifyDefaultValue, ModifySharedValue       | ModifyOrder                | N/A                        | N/A
Table 3: Typology of gaps for the Orion system
4.5 Verifying the quality criteria to stop the process
As shown in the previous sub-sections, an effort was made to achieve exhaustiveness (by exploiting the generic typology) and consistency (by cross-checking the operators generated from the typology against those initially proposed by Orion). Our process model proposes to evaluate the quality of the resulting extended meta-model to verify the quality criteria that were initially chosen.
4.5.1 Exhaustiveness
A typology can be considered exhaustive if any change requirement can be expressed without using a combination of gap operators. There is no formal way to demonstrate exhaustiveness. In this example, exhaustiveness was empirically evaluated using the examples of change requirements found in the Orion literature. The collection of operators shown in Table 3 was found exhaustive, as every change requirement could be formalised using a single operator. This confirms our findings reported in [Etien03] concerning the exhaustiveness of our approach at the generic level.
4.5.2 Completeness
On the one hand, [Banerjee87] demonstrates that Orion's initial collection of gap operators is complete (i.e. every change requirement can be expressed using it, if necessary by combining several operators). On the other hand, no operator from this collection has been removed; only merges have been achieved with the operators generated from the generic typology when they had the same semantics. This formally demonstrates that the resulting extended meta-model proposes a complete collection of gap operators to express any change requirement in Orion.
4.5.3 Correctness
The proposed collection of gap operators can be considered correct if and only if each operator definition is itself correct. As defined in [Rolland03], an operator is correct when it preserves the invariant properties of the model it applies to. In the case of Orion, Banerjee defines the following five invariant properties on the core meta-model:
• Class lattice invariant: "the lattice is a rooted and connected directed acyclic graph with named nodes and labelled edges. It has only one root and its nodes have different names".
• Distinct name invariant: "all instance variables and methods of a class have a distinct name".
• Distinct identity invariant: "all instance variables and methods of a class have distinct identity".
• Full inheritance invariant: "a class inherits all instance variables and methods from each of its superclasses, except when full inheritance causes a violation of the distinct name and distinct identity invariants".
• Domain compatibility invariant: "if an instance variable V2 of a class C is inherited from an instance variable V1 of a superclass of C, then the domain of V2 is either the same as that of V1 or a subclass of V1".
For the sake of space, we do not present the formal definitions of the 45 operators. These definitions are obtained by adapting those presented in [Rolland03]. For example, AddClass has the following formal definition:
AddClass: {Class} → {Is-A Link}, Class
AddClass({Ci}) = Ij.has-for-source(Ci) ∧ Ij.has-for-target(C) ∧ ∀k, C.name ≠ Ck.name ∧
[∀M1, M2, (C.composed-of(M1) ∧ C.composed-of(M2)) ⇒ M1.name ≠ M2.name] ∧
[∀IV1, IV2, (C.composed-of(IV1) ∧ C.composed-of(IV2)) ⇒ IV1.name ≠ IV2.name]
| Ij ∈ Is-A Link, (C, Ci, Ck) ∈ Class, (M1, M2) ∈ Method, (IV1, IV2) ∈ InstanceVariable.
This definition indicates that the result of adding a class C should be such that: (i) C has one or several existing super-class(es) Ci; (ii) C and Ci are associated through the Is-A link Ij; (iii) no other class has the same name as C; (iv) all methods in C have different names; and (v) all instance variables of C have different names. These last two properties are relevant because an added class implicitly inherits its super-classes' methods and instance variables. Cross-checking this definition with Orion's list of invariant properties shows that:
- the first invariant is preserved because the class is only added if its name differs from those of the other classes of the hierarchy, and because the class is a leaf in the class hierarchy (it cannot belong to a cycle as it is not the source of any Is-A link);
- the second, third and fourth invariants are preserved because the class can only be added if it is not composed of methods or instance variables with the same name; therefore, two methods or instance variables of the added class cannot have the same identity;
- the fifth invariant is preserved because, when an instance variable is inherited, it implicitly comes with its domain.
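The correctness conditions above can be sketched as a precondition check. This is our own illustrative rendering, not Orion's implementation: a hierarchy is assumed to be a mapping from class name to (superclass names, method names, instance variable names).

```python
def can_add_class(hierarchy, name, superclasses, methods, instance_vars):
    """Return True iff adding the class satisfies AddClass's conditions."""
    if name in hierarchy:                 # ∀k, C.name ≠ Ck.name
        return False
    # C must be linked to one or several existing super-classes Ci.
    if not superclasses or any(s not in hierarchy for s in superclasses):
        return False
    # The added class implicitly inherits its super-classes' methods and
    # instance variables, so name clashes with inherited members matter too.
    inherited_methods = [m for s in superclasses for m in hierarchy[s][1]]
    inherited_ivars = [v for s in superclasses for v in hierarchy[s][2]]
    all_methods = list(methods) + inherited_methods
    all_ivars = list(instance_vars) + inherited_ivars
    # M1.name ≠ M2.name and IV1.name ≠ IV2.name
    return (len(set(all_methods)) == len(all_methods)
            and len(set(all_ivars)) == len(all_ivars))

# A hierarchy with a single root class Object owning one method.
h = {"Object": ((), ("print",), ())}
can_add_class(h, "Calendar", ("Object",), ("addDay",), ("days",))  # allowed
can_add_class(h, "Object", ("Object",), (), ())  # rejected: duplicate name
```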
As a result, the AddClass operator is correct. A similar cross-checking of all operators with all Orion’s
invariants shows that the proposal is correct.
4.5.4 Consistency
Both Orion's collection of gap operators and the generated one are consistent. Each operator of the former collection has been merged with one of the latter when their semantics were the same. As a result, there is no conflict between the operators. The resulting collection of gap operators is thus consistent.
All the quality criteria are verified; the process to extend the gap typology for the Orion meta-model can thus be considered terminated.
5 Conclusion
This paper proposes an approach to extend any method with concepts to express change requirements. A generic typology of gap operators and a generic meta-model have already been proposed to specify the concepts necessary to express change requirements. The approach presented here combines both of them to extend methods until they satisfy some quality criteria. The main assumption underlying this approach is that any of our gap operators can be regarded: (i) at the model level (to define change requirements), (ii) at the meta-model level (to define changes needed on meta-models), and (iii) at the generic meta-model level (as part of a generic meta-model).
The proposed approach combines an extended version of the generic meta-model with a process to guide method extension. The process offers a number of strategies to achieve two main goals: to formalise the meta-model and define its extension, and to evaluate the quality of the resulting method with respect to a number of quality criteria.
So far, the main flaw of our approach is that it does not say what to do with models when the corresponding meta-models are extended. We believe this issue is similar to the one addressed when looking at what to do with the running instances of a system when the model that defines it evolves [Salinesi03b].
Addressing this issue is the main concern of the next stage in our research programme, together with
undertaking an empirical investigation of our approach, and with developing a prototype tool to guide
its application.
6 References
[Banerjee87] J. Banerjee, W. Kim, H.-J. Kim, H.F. Korth, Semantics and Implementation of Schema Evolution
in Object Oriented Databases, Proceedings of the ACM-SIGMOD Annual Conference, pp 311-322, San
Francisco, CA, May 1987.
[Casati96] F. Casati, S. Ceri, B. Pernici, G. Pozzi, Workflow Evolution, Proceedings of the 15th International
Conference On Conceptual Modeling (ER'96), Cottbus, Germany, pp. 438-455, 1996.
[Deneckère01] R. Deneckère, Approche d'extension de méthodes fondée sur l'utilisation de composants génériques, PhD Thesis, University of Paris 1 Panthéon-Sorbonne, January 2001.
[Etien03] A. Etien, C. Salinesi, Towards a Systematic Definition of Requirements for Software Evolution: A
Case-study Driven Investigation, Proceedings of EMMSAD’03, Klagenfurt/Velden, Austria, June, 2003.
[Harmsen94] A.F. Harmsen, S. Brinkkemper, H. Oei, Situational Method Engineering for Information System
Projects, In Olle T. W. and A. A. Verrijn Stuart (Eds.), Methods and Associated Tools for the Information
Systems Life Cycle, Proceedings of the IFIP WG8.1 Working Conference CRIS'94, pp. 169-194, North-Holland,
Amsterdam, 1994.
[Harmsen97] A.F. Harmsen, Situational Method Engineering, Moret Ernst & Young, 1997.
[Jackson95] M. Jackson, Software Requirements and Specifications, Addison-Wesley, 1995.
[Juran88] J. M. Juran, F. M. Gryna, Juran's Quality Control Handbook, McGraw-Hill, 1988.
[Kradolfer00] M. Kradolfer, A Workflow Metamodel Supporting Dynamic, Reuse-based Model Evolution, PhD
thesis, Department of Information Technology, University of Zurich, Switzerland, chap. 4, pp. 59-73, May 2000.
[Lyytinen87] K. Lyytinen, Different perspectives on information systems: problems and solutions, ACM Computing Surveys, Vol. 19, No. 1, 1987.
[Ralyté99a] J. Ralyté, C. Rolland, V. Plihon, Method enhancement With Scenario Based Techniques,
Proceedings of the 11th Conference on Advanced Information Systems Engineering (CAISE’99), Heidelberg,
Germany, June, 1999.
[Ralyté99b] J. Ralyté, Reusing Scenario Based Approaches in Requirement Engineering Methods: CREWS Method Base, Proceedings of the First International Workshop on the Requirements Engineering Process - Innovative Techniques, Models, Tools to support the RE Process, Florence, Italy, September 1999.
[Ralyté01] J. Ralyté, Ingénierie des méthodes à base de composants, PhD Thesis, University of Paris 1 Panthéon-Sorbonne, January 2001.
[Rolland01] C. Rolland, N. Prakash, Matching ERP System Functionality to Customer Requirements,
Proceedings of the 5th International Symposium on Requirements Engineering (RE'01), Toronto, Canada, pp.
66-75, 2001.
[Rolland93] C. Rolland, Modeling the requirements engineering Process, Information Modelling and
Knowledge Bases, IOS Press, 1993.
[Rolland94a] C. Rolland, Modeling the evolution of artifacts, Proceedings of the 1st IEEE International
Conference on Requirements Engineering, Colorado Springs, Colorado, 1994.
[Rolland94b] C. Rolland, A Contextual Approach to modeling the Requirements Engineering Process,
Proceedings of the 6th International Conference on Software Engineering and Knowledge Engineering
(SEKE'94), Vilnius, Lithuania, 1994.
[Rolland96] C. Rolland, V. Plihon, Using generic chunks to generate process model fragments, Proceedings of the 2nd IEEE International Conference on Requirements Engineering (ICRE'96), Colorado Springs, 1996.
[Rolland98] C. Rolland, V. Plihon, J. Ralyté, Specifying the reuse context of scenario method chunks,
Proceedings of the 10th Conference on Advanced Information Systems Engineering (CAiSE’98), Pisa Italy, June
1998.
[Rolland99] C. Rolland, N. Prakash, A. Benjamen, A Multi-Model View of process Modelling, Requirements
Engineering Journal, Vol 4, pp 169-187, 1999.
[Rolland03] C. Rolland, C. Salinesi, A. Etien, Eliciting Gaps in Requirements Change, To appear in
Requirement Engineering Journal, 2003.
[Saeki93] M. Saeki, K. Iguchi, K. Wen-yin, M. Shinohara, A meta-model for representing software specification & design methods, Proceedings of the IFIP WG8.1 Conference on Information Systems Development Process, Como, pp 149-166, 1993.
[Salinesi02a] C. Salinesi, M. J. Presso, A Method to Analyse Changes in the Realisation of Business Intentions
and Strategies for Information System Adaptation, Proceedings of the 6th IEEE International Enterprise
Distributed Object Computing Conference (EDOC’02), Lausanne, Switzerland, September 2002.
[Salinesi02b] C. Salinesi, J. Wäyrynen, A Methodological Framework for Understanding IS Adaptation through
Enterprise Change, Proceedings of the 8th International Conference on Object-Oriented Information Systems
(OOIS’02), Montpellier, France, September 2002.
[Salinesi03a] C. Salinesi, C. Rolland, Fitting Business Models to Systems Functionality Exploring the Fitness
Relationship, Proceedings of the 15th Conference on Advanced Information Systems Engineering (CAiSE’03),
Klagenfurt/Velden, Austria, June, 2003.
[Salinesi03b] C. Salinesi, A. Etien, Compliance Gaps: a Requirements Elicitation Approach in the Context of
System Evolution, To appear in the proceedings of the 9th International Conference on Object-Oriented
Information Systems (OOIS’03), Geneva, Switzerland, September 2003.
[Slooten93] K. van Slooten, S. Brinkkemper, A Method Engineering Approach to Information Systems
Development, In Information Systems Development process, N. Prakash, C. Rolland, B. Pernici (Eds.), Elsevier
Science Publishers B.V. (North-Holand), 1993.
[Teeuw97] W. B. Teeuw, H. van den Berg, On the Quality of Conceptual Models, Proceedings of the 16th
International Conference on Conceptual Modeling (ER'97), Los Angeles, CA, November 1997.
Towards a Clear Definition of
Patterns, Aspects and Views in MDA
Joël Champeau, François Mekerke, Emmanuel Rochefort
ENSIETA - Laboratoire DTN
29806 Brest Cedex, France
{joel.champeau, mekerkfr, rochefem}@ensieta.fr
Abstract
In order to improve the quality of software architecture for large-scale information systems, methods are
developed that intend to increase their flexibility. The modeling phase has become important and languages
such as the Unified Modeling Language (UML), primarily focused on objects, are now associated with the
concepts of Patterns, Aspects and Views due to the increasing complexity of software design. The application
of these concepts in an MDA (Model Driven Architecture) process requires that we define these concepts
more precisely, especially the relationships between them. We think that the confusion between these different paradigms comes from the mixed use of aspects, patterns and views (all related to separation of concerns), but not at the same level and with different use processes. Our purpose here is to initiate a reflection on precise definitions of these concepts and the relationships between them. We briefly recall some commonly accepted facts about these concepts, and present a few elements of our reflection in an attempt to link them with respect to the semantics of the entities. We hope these propositions will be discussed and improved so as to define a broad set of possible uses based on separation of concerns for building and observing information system models.
Introduction
In the perspective of improving the software development process, growing constraints such as time-to-market and flexibility for future additions of functionality must be taken into account, which demands high-quality software architectures for information systems. For this purpose, the attention dedicated to the modeling phase increases, and the spectrum of possibilities offered in the field widens. Languages such as the Unified Modeling Language (UML) were primarily focused on objects, but as applications get more and more complex, the "object" concept has been extended with the concepts of Patterns, Aspects and Views, which enhance the flexibility of software design.
The Model Driven Architecture (MDA) approach is tightly coupled with these concepts, and
associates them with refactoring/transformations from analysis to design and development, and
between every iteration of the development cycle. However, if we want to be able to use
transformations efficiently for architecture development through MDA, we must base our approach on stable concepts, which means we have to define these concepts more precisely, especially the relationships between them. Once precise definitions are set, model consistency as well as communication among architects will improve, and operational systems can only benefit from it.
If we have a look at what is being done in the domain of model transformations, we can easily understand that the spectra of uses of the three concepts we have chosen to study overlap. In the MDA approach, models are split into three categories: the Platform Independent Model (PIM), the Platform Specific Model (PSM), and the Platform Description Model (PDM).
Transformations can then be used to merge the PIM and the PDM into the PSM, in order to obtain the
implementation of a business model on a specific platform, but also to extract a PIM from a PSM,
knowing the PDM. To realize this processing, the concepts used to describe the different parts of the
system, as well as the methods, are obviously not imposed yet. To merge models, we can choose between different techniques: Patterns, as building blocks to assemble for our own purposes; Aspects, as localizations of initially crosscutting concerns to weave into our architecture; or Views, to visualize specific parts of the application.
Joël Champeau, François Mekerke and Emmanuel Rochefort
Patterns are used by composition to create and adapt a model, most often to refactor it, by replacing a
pattern by another for example. Different types of composition can be applied, among them stringing
and overlapping (see [6]), but the general idea is to access a pattern repository and compose its
patterns following a chosen technique so as to obtain the final product.
With aspects, the weaving process is in charge of the merging step between application entities and
crosscutting aspects. In this case, aspects provide dedicated infrastructures for the application.
Now, is there a real difference between composition of patterns and weaving of aspects? Could it be that different words describe the same concepts and processes, or does the difference lie more in the processes than in the concepts? In this context, views are either just an aspect of the application or a different process to build a visualization of a part of the application.
The confusion between these different paradigms comes from the mixed use of aspects, patterns and views, which are all related to separation of concerns in an application, but not at the same level and with different use processes.
Our purpose here is to initiate a reflection on precise definitions of these concepts and the relationships between them. We will therefore briefly recall some commonly accepted facts about the birth of these concepts, then try to extract a set of relationships among them, so as to finally propose a basis for discussion and exchange around this subject.
1. History and evolution
1.1. Aspects
When it comes to system/application requirements, the risk is always to have one, several, or even many of them that may have to evolve from time to time. The problem, common to many architects and developers, is to tackle these requirements with enough flexibility to allow potential changes in the future. The solution provided by the concept of "Aspect" is to deal with each requirement independently, so that if a change had to occur, it could be made easily and its consequences localized. Aspects are present at both the code and model levels: the idea started with Aspect-Oriented Programming (AOP) and later spawned Aspect-Oriented Software Development (AOSD).
1.1.1. Aspect-Oriented Programming
To our knowledge, the word "Aspect" was used in a non-casual manner for the first time in AOP (see [4]), even though similar ideas were already developed in other works (i.e. Adaptive Programming, Composition Filters or Subject-Oriented Programming).
In [3], Kiczales et al. describe an aspect as a "well-modularized crosscutting concern": they seem to imply that a problem can be divided into a set of concerns, which can be identified and then addressed through as many aspects, which will be woven together to build the application.
Aspects therefore seem to be a good means to tackle flexibility problems, at least at the code level.
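The idea of a modularised crosscutting concern woven into base code can be sketched very simply. This is our own minimal illustration (not taken from [3] or [4], and far simpler than a real AOP weaver): a logging concern kept in one place and applied as a Python decorator.

```python
def logging_aspect(func):
    """The crosscutting concern, localised in a single module."""
    def woven(*args, **kwargs):
        print(f"enter {func.__name__}")
        result = func(*args, **kwargs)
        print(f"exit {func.__name__}")
        return result
    return woven

@logging_aspect          # the "weaving" step
def transfer(amount):
    # The base concern: business logic unaware of the logging aspect.
    return amount * 2

transfer(21)  # business result, with entry/exit logging woven around it
```

If the logging requirement changes, only `logging_aspect` is edited; the business code is untouched, which is the flexibility argument made above.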
1.1.2. Aspect-Oriented Software Development
In [5], Suzuki and Yamamoto describe how the concept of "Separation of Concerns" used in AOP can be adapted to the model level. Aspects are described in relationship to abstract crosscutting/overlapping concerns, which is coherent with the AOP approach.
This process announces new methods for software architecture: specialists in various domains independently create pieces of architecture that are afterwards woven together, using a proper mapping.
This works basically the same way as AOP, but at the model level. It therefore provides the same kind of gain in flexibility: since requirements are localized, any change can be made easily in a separate model before being reflected in the architecture.
1.2. Patterns
The origin of patterns is commonly attributed to the works of Christopher Alexander. In [1], he writes: "Each pattern is a three-part rule, which expresses a relation between a certain context, a problem, and a solution". He identifies and analyses a set of patterns that he considers related to well-conceived architecture designs.
This work inspired software architects, and led to the release of [2], which is best described as the
"Holy Book" of Patterns.
The central element of this book is that it defines a structure to use when describing a pattern. In doing so, its authors provided the first common base for exchanging information on patterns, which really helped the concept spread. However, the form given to patterns also somewhat weakens their expressiveness. Describing patterns as three-part rules can be understood as saying that a pattern is simply the solution to a problem in a context. The reality is unfortunately a bit more complex: patterns are better viewed as abstract repositories of recurring solutions to recurring problems. Only solutions that have been identified several times in various systems, through cross-analysis, can be called patterns.
Patterns have several facets. Their prominent property is that they constitute a repository for context-related solutions, which makes them a means to increase reusability. Their second advantage, closely linked to the first, is that they enable an improvement in communication, by capturing experience and putting words on it.
1.3. Views
Views are familiar to each one of us: by changing our point of view on a system, we focus on different things. They help us zoom in and out of a system, and provide insights or details, depending on the set of criteria we choose. If we take the case of UML, an architecture is divided into several diagrams so as to lower the complexity; each diagram focuses on a few carefully chosen entities, mainly classes and associations for a class diagram, and all behavior-related entities for a sequence diagram.
However, views are not implemented as such in UML. We could find only two standards implementing views:
• OMG MOF 2.0 RFP on Query, Views and Transformations, which states that "A view reveals specific aspects of a modeled system. A view is a model that is derived from another model." (Note the word "aspect" used in this sentence.) In our opinion, this means that a view is, in this context, a subset of the elements of an original model.
• ANSI/IEEE Std 1471-2000 on "Architecture Description". This standard gathers concepts such as stakeholders, concerns and views, and introduces viewpoints, described as the way a stakeholder looks at the system, depending on his concerns (see Figure 1: IEEE Std P1471-2000 on "Architecture Description", taken from [7], for details). The strength of this standard is its generality: it provides a set of concepts and the relationships among them, without depending on any technical or technological solution. The stakeholders can be concerned by different aspects of the system, horizontally as well as vertically, by its final functionalities as well as by its development.
Joël Champeau, François Mekerke and Emmanuel Rochefort
Figure 1: IEEE Std P1471-2000 on "Architecture Description"
If we come back to the UML example, we can say that this vision of views is coherent, since (1) IEEE gives a framework for UML diagrams, by identifying them as viewpoints, which result in views when applied to models, and (2) OMG provides the way to build them (by picking the entities that interest us).
2. Discussion
If we try to summarize what we know about the contribution of aspects, patterns and views in the domain of software engineering, we obtain:
• Aspects: flexibility; localization of concerns in terms of requirements or platform constraints related to a concern.
• Patterns: reusability; localization of concerns in terms of technical/technological problems.
• Views: abstraction/details; visualization of concern-related entities.
While the motives for using each concept are quite clear, things become more difficult when they are used together in the modeling phase: even though their conceptual contributions are very different, they are expressed through the same diagrams, which can make them seem similar. For example, patterns, aspects and views all share the same form of description: a structural part including a class diagram, and a behavioral part including a sequence diagram.
The same diagram, and thus the same representation, can therefore have different semantics depending on the concept it represents.
2.1.
Aspects-Patterns
The most tempting move is to try to combine the flexibility of aspects with the reusability provided by patterns. Whether one decides to reuse aspects, or to gather patterns related to the same requirements, makes no real difference here, since both approaches lead to "mutant" concepts that betray their predecessors.
Towards a Clear Definition of Patterns, Aspects and Views in MDA
However, while the merge into one concept does not seem a bright idea to us, we think that the search for the relationships among them is essential, since the combination of flexibility and reusability has to be sought, not to mention the valuable insight provided by views.
Using aspects without patterns means dealing with a concern without the intention of reusing it. This seems reasonable when a problem is specific to the system, and thus a priori not reusable elsewhere.
2.2.
Patterns-Views
The most obvious interaction between patterns and views may be the "Observer-Observable" pattern (used in a slightly different manner by Smalltalk users as the Model-View-Controller), which provides a mechanism to update the view components when the model changes. This mechanism at code level can be abstracted so as to provide the same concepts at model level. In this case, the view is composed of a subset of model elements extracted from the complete model.
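The update mechanism just mentioned can be made concrete with a minimal sketch of the Observer pattern (an illustrative Python rendering; the class and method names are ours, not taken from any of the cited standards):

```python
class Observable:
    """Model side: notifies registered views when its state changes."""
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self):
        for observer in self._observers:
            observer.update(self)


class Counter(Observable):
    """A trivial model element whose state can change."""
    def __init__(self):
        super().__init__()
        self.value = 0

    def increment(self):
        self.value += 1
        self.notify()  # keep all attached views synchronized


class CounterView:
    """View side: holds the subset of model state it displays."""
    def __init__(self):
        self.displayed = None

    def update(self, model):
        self.displayed = model.value


model = Counter()
view = CounterView()
model.attach(view)
model.increment()
print(view.displayed)  # the view now reflects the model state: 1
```

Abstracted to model level, the "observable" is the complete model and each "view" is the derived subset of its elements.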
Views can be built out of patterns: if we take the case of a system composed of a set of patterns "glued" together by other entities, we can think of building a view mechanism that would allow us to see the system at different levels of abstraction. Such a mechanism is described in [6]: three logical views are built that correspond to different levels of pattern abstraction, namely Pattern Level, Pattern Level with Interfaces and Detailed-Pattern Level. Choosing patterns to encapsulate the set of entities to display provides a powerful communication medium.
However, we think that it is not possible to establish a logical link between patterns and views. There
can be patterns to implement views, and views that take advantage of patterns, but there is no
composition/inclusion relationship between them.
2.3.
Aspects-Views
Aspects and views share a common ground: both of them are concern-related concepts. But they appear for different reasons in the development process: aspects are used in the building phase, views afterwards, in the perspective of observing, checking or modifying the system. Aspects are used to weave concerns into the model during the engineering process, while views are used to extract them in a reverse-engineering process.
Moreover, they do not seem to occur at the same level, at least with models. Aspects are meant to be general, while views seem more specific to a given system. This is why the viewpoints defined in ANSI/IEEE Std 1471-2000 on "Architecture Description" seem interesting to us, since they are entities that can be placed at the same level as aspects (one could even think of an inheritance link between the two of them) to define the expression of views.
This provides us with four entities on two levels: aspects and viewpoints are abstract entities that describe the way "mapped aspects" and views must be built from the system parts. For example, a viewpoint could be as general as "display each class with its name and its attributes' names", while the resulting views would be the result of applying this one viewpoint to various different systems.
Figure 2: Views and Viewpoints (taken from [7]) displays the relationship between viewpoints and diagrams, which shows that we have all been using viewpoints for a long time, through the use of diagrams.
Figure 2: Views and Viewpoints
On this occasion, we can remark that aspects and viewpoints can be considered patterns for building "mapped" aspects and views.
3. Proposal
The schema of relationships that summarizes our observations is shown in Figure 3: Relationships between P, A and V.
Figure 3: Relationships between P, A and V
To realize this schema, we simply tried to link the different concepts with respect to the nature of the entities. We added what was necessary to fill conceptual gaps by taking entities from the various standards we discussed previously.
For the record, here are the sources from which we took our inspiration:
• ANSI/IEEE Std 1471-2000: Concerns, Viewpoints, Views
• ODP - UML Profile for EDOC: Viewpoints, Views
• OMG MOF 2.0 RFP Q/V/T: Views
Of course, since all these standards are linked to architecture description, we can take advantage of
them in the MDA process, using UML to model patterns and aspects. We are going to place ourselves
in this context from now on.
The main points of our proposal are:
• Aspects are tightly coupled with concerns: we believe this relationship must remain the main characteristic of the aspect concept.
• We consider aspects to be possible compositions of patterns, and not the contrary. For reusability purposes, and only for that, an aspect can be represented by a composition of patterns. In this case, the patterns are woven with the PIM (Platform-Independent Model) elements to obtain a new set of model elements.
• Viewpoints are related to a concern, which is the subject that we want to observe in the model. From this point of view, viewpoints are equivalent to aspects. A viewpoint can also be represented by patterns, for reusability purposes. So we have relationships between these two concepts, but building a view is done by extracting patterns from a model; this process is the contrary of the one involving aspects.
• The previous points imply a separation between platform-independent entities (aspects, viewpoints) and platform-dependent entities ("mapped" aspects and views). The platform-related entities are the definitions of aspects or viewpoints applied to the current system model.
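The opposite directions of the two processes, weaving pattern elements into a PIM to realize an aspect versus extracting elements from a model to build a view, can be sketched as follows (a deliberately simplified illustration with models as plain sets of named elements; none of these names come from the standards cited above):

```python
def weave(model, aspect_elements):
    """Aspect direction: add the elements contributed by an aspect's
    patterns to the platform-independent model (PIM)."""
    return model | aspect_elements

def extract(model, viewpoint):
    """View direction: select from the model only the elements that
    match the viewpoint's concern."""
    return {e for e in model if viewpoint(e)}

pim = {"Order", "Customer", "Invoice"}

# Weaving a (hypothetical) logging aspect adds new model elements...
woven = weave(pim, {"Logger", "LogEntry"})

# ...while a viewpoint extracts the concern-related subset.
logging_view = extract(woven, lambda e: e.startswith("Log"))
print(sorted(logging_view))  # ['LogEntry', 'Logger']
```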
An additional remark: since aspects and viewpoints have the same kind of utility in the same environment (platform-independent, concern-related entities), we think that viewpoints could perhaps be seen as a specialization of aspects.
Conclusion
In order to obtain flexible software architectures, we think that the community must find a way to use
transformations efficiently for building information system architectures. This is why we advocate an
approach founded on stable concepts, within a stable process. For this purpose, we would like to
initiate a discussion on the definitions and relationships of these concepts that will be intensively used
in the future for modeling information systems. This comes in two phases that can be conducted in
parallel: (1) Define clearly each concept, by agreeing on the core functionalities we want each of them
to offer, and (2) Examine their relationships, to be able to use them efficiently together.
The main advantages of a clear definition of concepts are obviously (1) an improved communication among the software engineering community over these concepts and therefore (2) the development of new software development and maintenance methods, based on their composition. Software development could then look something like this: after applying the key principle of Separation of Concerns, which would lead us to identify various requirements and platform constraints for our system, we could tackle each of them through as many aspects, each of them the product of a pattern composition. Building applications could then become an effort of mapping patterns onto one another inside aspects; the aspects themselves could then be mapped onto one another before being woven. Conversely, software development tools should offer the possibility of building views of a system or an application, depending on various criteria (themselves depending on the concerns of the user).
To achieve these goals, we have presented here a few elements of reflection. We hope these
propositions and definitions will be discussed and improved to define a broad set of possible uses
based on separation of concerns for building and observing information system models.
References
1 Christopher Alexander, Sara Ishikawa, and Murray Silverstein.
A Pattern Language: Towns, Buildings, Construction.
Oxford University Press, 1977.
ISBN: 0-195019-19-9.
2 Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides.
Design Patterns: Elements of Reusable Object-Oriented Software.
Addison-Wesley, 1994.
ISBN: 0-201633-61-2.
3 Gregor Kiczales, Erik Hilsdale, Jim Hugunin, Mik Kersten, Jeffrey Palm, and William G. Griswold.
An overview of AspectJ.
Lecture Notes in Computer Science, 2072:327-355, 2001.
4 Gregor Kiczales, John Lamping, Anurag Menhdhekar, Chris Maeda, Cristina Lopes, Jean-Marc Loingtier, and
John Irwin.
Aspect-oriented programming.
In Mehmet Aksit and Satoshi Matsuoka, editors, Proceedings European Conference on Object-Oriented
Programming, volume 1241, pages 220-242.
Springer-Verlag, Berlin, Heidelberg, and New York, 1997.
5 Junichi Suzuki and Yoshikazu Yamamoto.
Extending UML with aspects: Aspect support in the design phase.
In Proceedings of the third ECOOP Aspect-Oriented Programming Workshop, 1999.
6 Sherif Yacoub and Hany Ammar.
UML support for designing software systems as a composition of design patterns.
In UML 2001 - The Unified Modeling Language, Modeling Languages, Concepts, and Tools, 4th
International Conference, Toronto, Canada,
October 1-5, 2001, Proceedings.
7 Rich Hilliard.
Using the UML for Architecture Description.
In UML 1999 - The Unified Modeling Language, Beyond the Standard, 2nd International Conference, Fort
Collins, USA,
October 28-30, 1999, Proceedings.
ArgoUWE: A CASE Tool for Web Applications
Alexander Knapp, Nora Koch, Flavia Moser and Gefei Zhang1
Ludwig-Maximilians-Universität München, Germany
{knapp,kochn,moser,zhangg}@informatik.uni-muenchen.de
Abstract. The UWE methodology provides a systematic approach for the development of Web applications.
UWE is based on a conservative extension of the UML and comprises the separate modeling of the
conceptual, navigational and presentational aspects of Web applications. We present the CASE tool
ArgoUWE to support the design phase of the UWE development process. It is implemented as a plugin
module of the open source ArgoUML modeling tool. ArgoUWE fully integrates the UWE metamodel and
provides an XMI extension. The construction process of Web applications is supported by incorporating the
semi-automatic UWE development steps as well as the OCL well-formedness rules of the UWE metamodel
that allow the designer to check the consistency of the UWE models during editing. ArgoUWE is part of the
OpenUWE tool environment for model-driven generation of Web applications.
Keywords. CASE Tools for IS Design and Implementation, Web Design, Web Engineering, UML, OCL
1 Introduction
The Web Engineering field is rich in design methods supporting the complex task of designing Web applications. From our point of view, the usability requirements on such methods are the following: to be based on standards, to define a process for the systematic development of Web applications, and to provide tool support for the model-driven design and generation of Web applications. The well-known standard used for modeling is the Unified Modeling Language [UML 2003].
Most of the existing Web engineering methods fulfill some of these usability requirements, but not all of them. Interesting approaches for systematic development supported by CASE tools are those for the OO-H method [Gomez et al. 2001] and for the modeling language WebML [Ceri et al. 2002]. Conallen [2003] proposes an extension of UML for a more architecture-oriented and implementation-based approach.
The main focus of our UML-based Web Engineering (UWE) methodology is to stick to the use of standards in the systematic design, followed by a semi-automatic generation of Web applications, thereby fulfilling as closely as possible the usability requirements we enumerated above. First, as indicated by its name, UWE is UML compliant. Second, UWE defines a systematic development process that can be performed semi-automatically. Third, tool support is guaranteed by the OpenUWE model-driven development environment, which at the current implementation state comprises two CASE tools: ArgoUWE to aid the design and UWEXML to generate Web applications automatically.
The focus of this work is the presentation of the tool ArgoUWE2, describing the underlying UWE concepts, the functionality provided to its users and its architecture. The complete description of the UWE notation and the UWE process is not within the scope of this article, but can be found in [Koch & Kraus 2002]. The UWE methodology covers structure modeling as well as behavior modeling of Web applications. ArgoUWE, however, currently supports structural modeling only; we therefore limit ourselves to the presentation of these aspects.
1
This work has been partially supported by the European Union within the IST project AGILE (IST-2001-32747), the DFG
project InOpSys (WI 841/6-1) and the BMBF project MMISS (08NM070D).
2
http://www.pst.informatik.uni-muenchen.de/projekte/argouwe
ArgoUWE is built as a plugin module for ArgoUML3. The main advantage we see in an ArgoUML-based tool is the fact that it is an open source tool that provides a module plugin concept. Note that metamodeling plays a fundamental role in CASE tool construction and is also the core of the automatic generation. We have defined an easily extensible metamodel for the UWE methodology ([Zhang 2002], [Kraus & Koch 2003]) as a conservative extension of the UML metamodel (version 1.5). The goal of staying compatible with the MOF interchange metamodel is to take advantage of the corresponding XMI interchange format. The resulting UWE metamodel is profileable, which means that it is possible to map the metamodel to a UML profile. Moreover, it is easy to integrate into ArgoUML.
The remainder of this paper is structured as follows: Section 2 provides an overview of how the UWE methodology supports the development of Web applications, with a focus on systematic design. Section 3 describes the UWE metamodel, which is the basis of the ArgoUWE CASE tool. Sections 4 and 5, which are the core of this work, present the functionality and the architecture of ArgoUWE, respectively. Finally, in the last section some concluding remarks and future work are outlined.
2 Developing Web Applications with UWE
UML-based Web Engineering (UWE) is a software engineering approach for the development of Web applications that has been continuously extended since 1999 [Baumeister et al. 1999; Koch & Kraus 2003]. UWE supports Web application development with a special focus on systematization [Koch & Kraus 2002]. Being a software engineering approach, it rests on three pillars: a process, a notation and tool support. Since the focus of this article is the tool support, we restrict ourselves in this section to a brief overview of the notation and the process.
The UWE process is object-oriented, iterative and incremental. It is based on the Unified Software
Development Process [Jacobson et al. 1999] and covers the whole life-cycle of Web applications
focusing on design and automatic generation [Koch & Kraus 2003].
The UWE notation used for the analysis and design of Web applications is a lightweight UML profile [UML 2003] developed in various previous works. Such a profile is a UML extension based on the extension mechanisms defined by the UML itself, i.e. it only includes stereotypes, tagged values and constraints. These modeling elements are used in the design of the conceptual model, the navigation structure and the presentation aspects of Web applications, as shown in Section 2.1. The UWE methodology provides guidelines for the systematic and stepwise construction of models. The precision can be augmented by the definition of constraints in the Object Constraint Language (OCL [Warmer & Kleppe 1999]) of the UML. The core modeling activities are requirements analysis and conceptual, navigation and presentation design.
The goal of a systematic design is to support the modeler with a clear, step-by-step design process and with as many automatic generation steps as possible. Although some automation is possible, other steps cannot be automated because they are application specific or based on design decisions depending on the designer's experience and knowledge. In the following we describe the UWE steps for developing design models of Web applications. These main steps, outlined in Sections 2.1 to 2.3, are based on the clear separation of the concerns of Web applications: conceptual modeling, navigation modeling and presentation modeling.
The tool support that we achieve with OpenUWE – as already said in the introduction – is twofold. On
the one hand, a CASE tool to support the design of Web applications using the UWE notation and
methodology is realized by ArgoUWE. On the other hand, the semi-automatic generation of Web
applications from the models (built with ArgoUWE or any other CASE tool that provides an XMI
interface) is supported by UWEXML using a model-driven Code Generator for deployment to an
XML publishing framework. A detailed description of UWEXML is not within the scope of this
paper. For further details see Koch & Kraus [2003].
3
http://argouml.tigris.org
As a running example to illustrate how the main UWE design models are built, we use a Conference
Review Management application – conference example for short. This application offers conference
organizers, authors and reviewers information about the conference, submitted papers and
corresponding reviews. The conference example application allows authors to submit papers and
reviewers to provide an online evaluation of the papers.
2.1
Conceptual Modeling
UWE proposes the use of UML use cases and activity diagrams for capturing the requirements [Koch
& Kraus 2002]. A conceptual model includes those objects needed to support the functionality the
system will offer to the users. The conceptual design aims to build a conceptual model, which attempts
to ignore as many of the navigation paths, presentation and interaction aspects as possible. These
aspects are postponed to the steps of the navigation and presentation modeling. The main UML
modeling elements used in the conceptual model are: class, association and package. These are
represented graphically using the UML notation [UML 2003]. Figure 1 shows the conceptual model
for the Conference example (upper left).
2.2
Navigation Modeling
Navigation design activities comprise the specification of which objects can be visited by navigation
through the Web application and how these objects can be reached through access structures. UWE
proposes a set of guidelines and semi-automatic mechanisms for modeling the navigation of an
application [Koch & Kraus 2002]. Figure 1 (upper right) shows the result of the semi-automatic
generation of the navigation model for the conference example. The navigation-relevant classes of the conceptual model are transformed into navigation classes and the associations into navigation links, such as the navigation class Conference and the navigation link between the classes Conference and Paper. In contrast, the navigation links for accepted and rejected papers, as well as the navigation link between Review and Paper, have been added manually.
The main modeling elements are the stereotyped class «navigation class» and the stereotyped association «direct navigability». These are the counterparts of page (node) and link in Web terminology. The access elements defined by UWE are indexes, guided tours, queries and menus. The stereotyped classes for the access elements are «index», «guided tour», «query» and «menu». All modeling elements and their corresponding stereotypes and associated icons are defined in Baumeister et al. [1999]. Figure 1 (lower right) shows the navigation model after being enriched with the access structures. The second navigation model can be generated automatically based on the first one using some default decisions. Afterwards, the designer can perform as many changes as considered necessary.
Note that only those classes of the conceptual model that are relevant for navigation are included in the navigation model, as shown in Figure 1. Information of the omitted classes may, however, be kept as attributes of other navigation classes (e.g. the newly introduced attribute keyword of the navigation class Paper); OCL constraints are used to express the relationship between conceptual classes and navigation classes or attributes of navigation classes.
2.3
Presentation Modeling
The presentation model describes where and how navigation objects and access primitives will be presented to the user. Presentation design supports the transformation of the navigation structure model into a set of models that show the static location of the objects visible to the user, i.e. a schematic representation of these objects (sketches of the pages). The production of sketches of this kind is often helpful in early discussions with the customer.
UWE proposes a set of stereotyped modeling elements to describe the abstract user interface, such as
«text», «form», «button», «image», «audio», «anchor», «collection» and «anchored collection». The
classes «collection» and «anchored collection» provide a convenient representation of frequently used
composites. Anchor and form are the basic interactive elements. An anchor is always associated with a
link for navigation. Through a form a user interacts with the Web application supplying information
and triggering a submission event [Baumeister et al. 1999]. Figure 1 (lower left) depicts the
presentation sketch of a publication.
Figure 1 The UWE Design Process for the Conference Review Example
3 UWE Metamodel
The UWE metamodel is designed as a conservative extension of the UML metamodel (version 1.5).
Conservative means that the modeling elements of the UML metamodel are not modified. Instead, all
new modeling elements of the UWE metamodel are related by inheritance to at least one modeling
element of the UML metamodel. We define for the new elements additional features and relationships.
In addition, analogous to the well-formedness rules in the UML specification, we use OCL constraints
to specify the additional static semantics of these new elements. We present here only an overview; for further details see Koch & Kraus [2003].
By staying compatible with the MOF interchange metamodel, we can take advantage of metamodeling tools that are based on the corresponding XML interchange format XMI. The resulting UWE metamodel is profileable [Baresi et al. 2002], which means that it is possible to map the metamodel to a UML profile. Thus standard UML CASE tools with support for UML profiles or the UML extension mechanisms, i.e. stereotypes, tagged values and OCL constraints, can be used to create the UWE models of Web applications. If technically possible, these CASE tools can further be extended to support the UWE method. ArgoUWE presents an instance of such CASE tool support for UWE based on the UWE metamodel.
3.1
The UWE Package Structure
All UWE modeling elements are contained within one top-level package UWE, which is added to the three UML top-level packages. The structure of the packages inside the UWE package, depicted in Figure 2, is analogous to the UML top-level package structure (shown in grey). The package Foundation contains all basic static modeling elements; the package Behavioral Elements depends on it and contains all elements for behavioral modeling; finally, the package Model Management, which also depends on the Foundation package, contains all elements to describe the models themselves specific to UWE. These UWE packages depend on the corresponding UML top-level packages. Note that the separation of concerns of Web applications is represented by the package structure of the UWE metamodel.
[Package diagram omitted: the UWE top-level package with its sub-packages Foundation (Core: Conceptual, Navigation, Presentation; Context: User, Environment), Behavioral Elements (Process, Adaptation) and Model Management, embedded alongside the UML top-level packages Foundation, Behavioral Elements and Model Management.]
Figure 2 Embedding the UWE Metamodel into the UML Metamodel
The package Foundation contains all basic static modeling elements and is further structured into the Core and the Context packages (see Figure 2). The former contains packages for the core (static) modeling elements for the basic aspects of Web applications, i.e. the conceptual, navigation and presentation aspects. The latter depends on the Core package and contains further sub-packages for modeling the user and the environment context.
The package Behavioral Elements depends on the Foundation package and consists of the two subpackages Process and Adaptation that comprise modeling elements for the workflow and
personalization aspects of a Web application, respectively. Finally, the package Model Management
which also depends on the Foundation package contains all elements to describe the models specific
to UWE, such as conceptual, navigation and presentation models.
The basic elements in navigation models are nodes and links. The corresponding modeling elements in the UWE metamodel are NavigationNode and Link (not to be confused with Link of the UML package Common Behavior), which are derived from the UML Core elements Class and Association, respectively. These navigation modeling elements are shown in Figure 3. The NavigationNode metaclass is abstract, which means that only further specialized classes may be instantiated; furthermore, the isLandmark attribute designates a node as directly reachable from all other nodes of the application.
Figure 3 also shows the connection between navigation and conceptual objects. A NavigationClass is derived from the ConceptualClass at the association end with the role name derivedFrom; there can exist several navigation views on a conceptual class. A NavigationClass consists of NavigationAttributes (derived from the UML Core element Attribute) which are themselves derived from ConceptualAttributes. Figure 3 also illustrates how access primitive classes, such as Index, are aggregated to navigation links with an association of type composition. Note that Menu is a specialization of the class NavigationNode.
[Class diagram omitted: NavigationNode and Link are derived from the UML Foundation Core elements Class and Association; a Link has one source and one target NavigationNode (with derived outLinks/inLinks association ends and an isAutomatic attribute); NavigationClass, like Menu a specialization of NavigationNode, is derivedFrom a ConceptualClass; NavigationAttribute is derived from Attribute and derivedFromAttributes ConceptualAttribute; access primitives such as Index are composed into navigation links.]
Figure 3 Connection between Navigation and Conceptual Package
3.2
Well-formedness Rules
Just like in the UML our UWE metamodel is subject to some well-formedness rules that we have
formulated in OCL. The following are examples of the OCL constraints that are part of the UWE
metamodel.
The first constraint expresses that a conceptual class contains only conceptual operations and/or
conceptual attributes.
context ConceptualClass -- (1)
inv: self.feature->forAll(f |
       f.oclIsTypeOf(ConceptualOperation) or
       f.oclIsTypeOf(ConceptualAttribute))
The second constraint forbids the existence of “isolated” navigation nodes.
context NavigationNode -- (2)
inv: self.outLinks->size() + self.inLinks->size() > 0
The third constraint expresses that if a navigation node is landmarked, then it is directly reachable
from every other navigation node.
context NavigationModel -- (3)
def: ownedNavigationNode : Bag(NavigationNode) =
       self.ownedElement->
         select(n | n.oclIsTypeOf(NavigationNode))->
         collect(n | n.oclAsType(NavigationNode))
inv: self.ownedNavigationNode->
       forAll(n1 | n1.isLandmark implies
         self.ownedNavigationNode->
           forAll(n2 | n2.outLinks->exists(a | a.target = n1)))
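To make the intent of constraints (2) and (3) concrete, here is a small sketch that checks them over a toy navigation model; the Python representation is ours and merely stands in for a real OCL evaluator:

```python
class NavigationNode:
    def __init__(self, name, is_landmark=False):
        self.name, self.is_landmark = name, is_landmark
        self.out_links, self.in_links = [], []

class Link:
    """Registers itself with both ends on construction."""
    def __init__(self, source, target):
        self.source, self.target = source, target
        source.out_links.append(self)
        target.in_links.append(self)

def not_isolated(node):
    """Constraint (2): a navigation node must have at least one link."""
    return len(node.out_links) + len(node.in_links) > 0

def landmark_reachable(nodes):
    """Constraint (3): every landmark node is directly reachable
    from every other navigation node."""
    return all(
        all(any(l.target is n1 for l in n2.out_links)
            for n2 in nodes if n2 is not n1)
        for n1 in nodes if n1.is_landmark)

home = NavigationNode("Home", is_landmark=True)
papers = NavigationNode("Papers")
Link(home, papers)
Link(papers, home)

print(all(not_isolated(n) for n in [home, papers]))  # True
print(landmark_reachable([home, papers]))            # True
```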
4 Modeling with ArgoUWE
ArgoUWE is based on ArgoUML and makes use of the graphical user interface of ArgoUML. We
introduce new types of diagrams to represent the new, UWE specific models. In these diagrams, users
can add, remove, copy and paste model elements as well as replace their figures and edit their
properties just as they are used to in ArgoUML.
As shown in Figure 4, ArgoUWE inherits the four compartments of the ArgoUML Project Browser:
1. the navigator pane, in which all diagrams and model elements of the model are listed in a tree structure;
2. the multieditor pane, which is the main pane of ArgoUWE and where the diagrams are depicted and can be edited;
3. the critique pane, where a list of design critique issues is shown; and
4. the detail pane, where the attributes of the currently selected model element can be edited.
ArgoUWE exports user models using standard XMI. In the exported XMI, UWE-specific model elements are labeled with special tagged values.
In the following we illustrate how to use ArgoUWE to model Web applications by means of the
Conference Review example (see Section 2).
4.1
Starting with the Conceptual Model
In ArgoUWE a conceptual model is represented as a conventional UML class diagram. The standard
procedure to model class diagrams is not changed. For the UWE process, however, the modeler can
mark some conceptual classes as “navigation relevant”, i.e. these classes shall be connected to
navigation classes that represent the nodes of the Web application structure.
Figure 4 Conceptual Model for the Conference Example
Alexander Knapp, Nora Koch, Flavia Moser and Gefei Zhang
Figure 4 shows the conceptual model of our Conference Review example. The conceptual class Paper
has been identified as navigation relevant (see the encircled checkbox at the bottom of the screenshot).
4.2 Building the Navigation Model
In ArgoUWE a navigation model can be built semi-automatically based on the conceptual classes
marked as navigation relevant by the modeler. When the designer selects Create Diagram | Navigation
Diagram (in the menu line of the ArgoUML main window, see Figure 4), ArgoUWE copies all
navigation relevant conceptual classes and all associations between them from the conceptual model to
a new navigation model. This mechanism not only relieves the designer from copying conceptual
classes one by one manually but also keeps the model consistent.
After automatic creation of the navigation diagram, the modeler can add some additional associations
designating direct navigability from one navigation class to another. Figure 5 shows the result of this
editing process. Instead of a single association between Conference and Paper the user now has
separate links to an overview of all accepted papers and an overview of all rejected papers. The class
Keyword has not been selected as navigation relevant and therefore no navigation class is created for
it. Note that Figure 5 corresponds to Navigation Model I in Figure 1 (upper right).
Figure 5 Navigation Model for the Conference Example (1)
The next step is to add menus between navigation classes and access primitives. The user can add
indexes, menus, and associations to landmarked nodes automatically (encircled buttons from left to
right):
• Indexes are added by the tool between two classes related by an association whenever the multiplicity on the target end is greater than one.
• Menus are added by the tool to every class that has more than a single outgoing association. Menus are included by composition.
• Associations to classes selected as landmarks are added by the tool at every navigation node from which the landmarked node cannot already be directly reached.
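The first two of these rules can be phrased as simple predicates over the associations of the navigation model. The sketch below illustrates them on an invented miniature association model; all class and field names are ours, not the tool's:

```java
import java.util.*;

// Toy association model, invented only for this illustration.
class Assoc {
    final String source, target;
    final int targetMultiplicityUpper; // -1 stands for "*"

    Assoc(String source, String target, int upper) {
        this.source = source;
        this.target = target;
        this.targetMultiplicityUpper = upper;
    }
}

public class AutoEdit {
    // Rule 1: insert an index between two classes whenever the
    // multiplicity on the target end is greater than one.
    static boolean needsIndex(Assoc a) {
        return a.targetMultiplicityUpper > 1 || a.targetMultiplicityUpper == -1;
    }

    // Rule 2: attach a menu (by composition) to every class with
    // more than a single outgoing association.
    static Set<String> classesNeedingMenu(List<Assoc> assocs) {
        Map<String, Integer> outgoing = new HashMap<>();
        for (Assoc a : assocs)
            outgoing.merge(a.source, 1, Integer::sum);
        Set<String> result = new TreeSet<>();
        for (Map.Entry<String, Integer> e : outgoing.entrySet())
            if (e.getValue() > 1) result.add(e.getKey());
        return result;
    }

    public static void main(String[] args) {
        List<Assoc> model = Arrays.asList(
            new Assoc("Conference", "AcceptedPaper", -1), // multiplicity *
            new Assoc("Conference", "RejectedPaper", -1),
            new Assoc("Paper", "Review", 3));
        System.out.println(needsIndex(model.get(0)));  // true: index needed
        System.out.println(classesNeedingMenu(model)); // [Conference]
    }
}
```

In the Conference example, only Conference has two outgoing associations and therefore receives a menu, matching Figure 6.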
Additionally, queries for explicit searches can be added by the modeler at any desired place. The result of adding indexes, menus, queries and associations is shown in Figure 6, which corresponds to Figure 1 (lower right).
Figure 6 Navigation Model for the Conference Example (2)
4.3 Completing with the Presentation Model
The building process of a presentation model from the navigation model is similar to building a
navigation model from the conceptual model and is triggered by choosing the menu item Create
Diagram | Presentation Diagram. All navigation nodes are copied into the new class diagram for the
presentation. Moreover, for each attribute in a navigation class, ArgoUWE automatically creates a presentation element as well as a composition association to its owning presentation class.
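This derivation step can be sketched as a small transformation: one presentation element per navigation attribute, composed into its owning presentation class. The code below is an illustration under invented names (e.g. the "Presentation" and "Text" suffixes are ours), not ArgoUWE's actual generation code:

```java
import java.util.*;

public class PresentationDerivation {

    // Simple composition link: owner presentation class <>-- part element.
    static class Composition {
        final String owner, part;
        Composition(String owner, String part) { this.owner = owner; this.part = part; }
    }

    // For every attribute of a navigation class, create one presentation
    // element and compose it into the owning presentation class.
    static List<Composition> derive(String navClass, List<String> attributes) {
        List<Composition> comps = new ArrayList<>();
        String presClass = navClass + "Presentation"; // naming is our invention
        for (String attr : attributes)
            comps.add(new Composition(presClass, attr + "Text"));
        return comps;
    }

    public static void main(String[] args) {
        for (Composition c : derive("Paper", Arrays.asList("title", "status")))
            System.out.println(c.owner + " <>-- " + c.part);
        // PaperPresentation <>-- titleText
        // PaperPresentation <>-- statusText
    }
}
```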
Figure 7 Partial Presentation Model for the Conference Example
Additionally, for each menu a presentation class is created and the composition between the menu and
its owner navigation class is copied. A part of the created presentation diagram of our Conference
example is shown in Figure 7. Note that the nested representation of composition shown in Figure 1 (lower left) is currently not possible in ArgoUWE.
4.4 Consistency Checking
ArgoUWE helps the modeler keep the models consistent. Some of the constraints of the UWE
metamodel (see Section 3) are constantly enforced by ArgoUWE. For instance, in the conceptual
diagram, since only conceptual operations and conceptual attributes are available, the user cannot create any operations or attributes other than conceptual ones, so constraint (1) in Section 3.2 can never be violated. On the other hand, there are also constraints that simply cannot be enforced continually during modeling. For example, at the moment a new navigation node (e.g. a navigation class) is created, ArgoUWE must accept for a while that, if it is landmarked, it is not yet directly reachable from all other navigation nodes, and therefore constraint (3) in Section 3.2 is violated. However, as soon as the user triggers ArgoUWE's consistency check of the current models by pressing the "???"-button (encircled in Figure 8), the user is warned about this violation.
Figure 8 Constraint Violation
If we landmark the navigation class Paper, the navigation model shown in Figure 5 becomes inconsistent because Paper cannot be reached directly from the class User. A warning of this constraint violation is shown in Figure 8.
5 Architecture of ArgoUWE
The ArgoUWE tool is implemented as a plugin into the open-source UML modeling tool ArgoUML
(version 0.10), both written in Java. ArgoUML provides a suitable basis for an extension with UWE
tool support by being based on a flexible UML metamodel library (NSUML4, version 1.3/0.4.20) and a
general graph editing framework (GEF5, version 0.95), as well as featuring an extendable module
architecture. Not only these features but also the fact that ArgoUML is an open-source tool with an active developer community led us to favor ArgoUML as the development basis over other, commercial tools, although access to ArgoUML's source code sometimes has to make up for its rather poor documentation. However, tools like Rational Rose™ 6 or Gentleware's Poseidon™ 7 would also offer the necessary extension prerequisites, perhaps with the exception of metamodel extensions.
5.1 ArgoUWE Metamodel
The “Novosoft UML library” (NSUML), on which ArgoUML is based, not only provides a library for
working with UML 1.3 models in terms of Java objects, but also contains an XML-based generator for
arbitrarily changed and extended (UML) metamodels. As UWE uses additional modeling concepts
targeted onto Web applications, ArgoUWE uses NSUML to generate an extended UML/UWE
metamodel that again allows the programmer to handle UWE entities in a seamless and
straightforward manner. In particular, we chose a “heavyweight extension” for the physical metamodel
that is generated by NSUML. Alternatively, we could have employed the UWE lightweight UML profile directly. However, stereotyping and tagging are not compatible with the concept of overloading in object-oriented programming. For the current ArgoUML versions, the adaptations of the UML
metamodel merely consist of extending the NSUML generator resource files by the UWE metaclasses
ConceptualClass, NavigationClass, PresentationClass, etc.
However, the more recent versions of the Novosoft UML library, starting with 1.4/0.13, replace the
proprietary XML metamodel description by a standardized “Meta Object Facility” (MOF) input. The
libraries generated with NSUML 1.4/0.13 (for UML 1.4) are code-incompatible with the
NSUML 1.3/0.4.20 (for UML 1.3) libraries and thus cannot be used in ArgoUML directly yet.
5.2 Plugin Architecture
In ArgoUML as of version 0.10, the model is encapsulated in an instance of the (extended) NSUML
library class ru.novosoft.uml.model_management.MModel. Thus, manipulations of the model have a single
access point; all effects of model manipulations are disseminated to other components by a general observer mechanism following the Model-View-Controller paradigm. Figure 9 summarizes the
general structure of the ArgoUML model, view, and control devices as used in ArgoUWE.
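The single access point and the observer-based dissemination can be illustrated with a generic sketch of the pattern; NSUML's actual listener interface differs from this simplified version:

```java
import java.util.*;
import java.util.function.Consumer;

// Generic observer sketch of the Model-View-Controller dissemination
// described above; NSUML's real listener API differs.
class ObservedModel {
    private final List<Consumer<String>> observers = new ArrayList<>();
    private final List<String> elements = new ArrayList<>();

    void addObserver(Consumer<String> o) { observers.add(o); }

    // Single access point: every manipulation notifies all observers,
    // so views never have to poll the model for changes.
    void addElement(String name) {
        elements.add(name);
        for (Consumer<String> o : observers) o.accept("added " + name);
    }
}

public class MvcSketch {
    public static void main(String[] args) {
        ObservedModel model = new ObservedModel();
        model.addObserver(e -> System.out.println("diagram pane: " + e));
        model.addObserver(e -> System.out.println("navigator pane: " + e));
        // one model change, two registered views notified
        model.addElement("NavigationClass Paper");
    }
}
```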
The UWE diagram kinds (conceptual diagram, navigation diagram, and presentation diagram) are straightforwardly supported by introducing new subclasses of org.argouml.uml.diagram.ui.UMLDiagram, more specifically of the common superclass org.argouml.uml.diagram.ui.UWEDiagram. A UMLDiagram
captures the graphical presentation and manipulation of model elements. It is based on a bridge, the
4 http://nsuml.tigris.org
5 http://gef.tigris.org
6 http://www.rational.com
7 http://www.gentleware.com
graph model, between the model realized by NSUML library classes and the graphical presentation
using the “Graph Editing Framework” (GEF) library classes. The bridges for UWE are all derived
from org.argouml.uml.diagram.static_structure.ClassDiagramGraphModel. Each UWE model element is
linked to a figure node (org.tigris.gef.presentation.FigNode) or figure edge (org.tigris.gef.presentation.
FigEdge) of the GEF library. In addition to manipulating the model elements graphically, they can also
be changed and edited by using property panels that are implemented as subclasses of
org.argouml.uml.ui.PropPanel. ArgoUWE adds property panels for conceptual classes, navigation classes,
etc., which are installed on the ArgoUML platform automatically with the diagram counterparts by
reflection.
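Installation by reflection can be sketched as follows: a companion class is located through a naming convention and instantiated via Class.forName. The class names and the convention below are invented for this illustration; ArgoUWE's actual registration code differs:

```java
// Sketch of reflective installation of companion classes: for a given
// model-element kind, the matching property panel is located by a naming
// convention and instantiated via reflection. All names here are invented
// for illustration, not ArgoUWE's real classes.
public class ReflectiveInstall {

    public static class ConceptualClassPanel {
        @Override public String toString() { return "panel for conceptual classes"; }
    }

    // Resolve "<ElementKind>Panel" (nested in this class) by reflection;
    // returns null when no matching panel class exists.
    static Object installPanelFor(String elementKind) {
        try {
            String panelClass =
                ReflectiveInstall.class.getName() + "$" + elementKind + "Panel";
            return Class.forName(panelClass).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            return null; // no matching panel registered
        }
    }

    public static void main(String[] args) {
        System.out.println(installPanelFor("ConceptualClass")); // panel found
        System.out.println(installPanelFor("Unknown"));          // null
    }
}
```

The appeal of this approach is that adding a new diagram kind only requires adding a suitably named panel class; no central registry has to be edited.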
Figure 9 Overview of the ArgoUML plugin architecture
The UWE model consistency checks and the semi-automatic editing functionalities induced by the
UWE method (see Section 4) are triggered in a UWEDiagram (see Section 3). In particular, well-formedness of a UWE model is not enforced for every model change; during editing some
inconsistencies may be inevitable or at least tolerable.
The UWE extensions are packaged in a plugin module. The original ArgoUML user interface is
extended by a class implementing org.argouml.application.api.PluggableMenu, registering the new diagram
types and their support. This extension, when put into the extension directory (./ext) of ArgoUML, is
loaded automatically on ArgoUML start-up. However, it must be noted that the UWE extension of
ArgoUML is not completely orthogonal to ArgoUML as the underlying metamodel has been changed.
Nevertheless, packaging the UWE extensions into a plugin module paves the way for growing the
extensions towards the development of new ArgoUML versions.
6 Related Work
Many methods for the development of Web applications have been proposed since the middle of the
nineties. An excellent overview is presented in Schwabe [2001] where the most relevant methods,
such as OOHDM [Rossi et al. 2001], OO-H [Gomez et al. 2001], WSDM [De Troyer et al. 2001],
W2000 [Garzotto et al. 2001] and UWE [Koch et al. 2001] are described on the basis of the same case study. Only some of them are supported by a CASE tool for systematic development. The most advanced tool support is offered for the OO-H method and the modeling language WebML. VisualWADE is the tool supporting the OO-H method; it includes a set of model compilers to provide automatic code generation capabilities and rapid prototyping. In contrast to our ArgoUWE, it
uses the UML [UML 2003] only in the first phase of the development process. WebRatio is based on
the proprietary Web modeling language WebML and supports code generation technology built on
XSL [Ceri et al. 2002]. This approach differs from ours as it does not perform a clear separation of the
navigation and presentation aspects. A more architecture-oriented approach is proposed by Conallen
[2003]. It extends the UML to support the design of Web applications focusing on current
technological aspects of the implementation and is based on the generic RUP development process.
The notation is supported by the Rational Rose™ tool, but in contrast to ArgoUWE it neither supports the systematic development process nor guides the developer through the process.
7 Conclusions and Future Work
We have presented the CASE tool ArgoUWE that we have developed for the computer aided design of
Web applications using the UWE methodology. ArgoUWE is built as a flexible extension of
ArgoUML due to the plugin architecture facilities provided by the ArgoUML tool (version 0.10). We
stress that the core of the CASE tool is the underlying UWE metamodel defined as a conservative
extension of the UML metamodel.
We outlined in this work the basic ideas behind the UWE methodology and how ArgoUWE is integrated in the OpenUWE tool suite environment to achieve model-driven generation of Web applications. In addition, we presented a running example to show how the tool supports the design of the three main UWE models (conceptual, navigation, and presentation) in a semi-automatic process where user and tool activities are interleaved.
We are currently working on minor improvements of the usability of ArgoUWE and the migration to
the latest version of ArgoUML. We plan to include the UWE well-formedness rules into the design
critique mechanism provided by ArgoUML. This model checking mechanism allows the continuous
verification of the rules in contrast to the current checking process that is explicitly triggered by the
modeler. Further, we will include better support for iterative and incremental modeling. Finally, we
will improve ArgoUWE with the new modeling elements incorporated in UWE as well as additional
well-formedness rules needed in the design of personalized and business process guided Web
applications.
References
Luciano Baresi, Franca Garzotto, Monica Maritati (2002). W2000 as a MOF Metamodel. In: Proc. 6th World
Multiconf. Systemics, Cybernetics & Informatics, Web Engineering Track.
Hubert Baumeister, Nora Koch, Luis Mandel (1999). Towards a UML Extension for Hypermedia Design. In:
Robert France, Bernhard Rumpe (eds.), Proc. 2nd Int. Conf. UML (UML’99). Lect. Notes Comp. Sci. 1723.
Springer.
Stefano Ceri, Pietro Fraternali, Aldo Bongio, Marco Brambilla, Sara Comai, Maristella Matera (2002).
Designing Data-Intensive Web Applications. Morgan-Kaufmann.
Jim Conallen (2003). Building Web Applications with UML. Addison-Wesley. 2nd edition.
Jaime Gómez, Cristina Cachero, Oscar Pastor (2001). On Conceptual Modeling of Device-Independent Web
Applications: Towards a Web-Engineering Approach. IEEE Multimedia 8(2), pp. 26–39.
Ivar Jacobson, Grady Booch, James Rumbaugh (1999). The Unified Software Development Process. Addison-Wesley.
Nora Koch, Andreas Kraus (2002). The Expressive Power of UML-based Web Engineering. In: Daniel
Schwabe, Oscar Pastor, Gustavo Rossi, Luis Olsina (eds.), Proc. 2nd Int. Wsh. Web-Oriented Software
Technology (IWOOST’02), CYTED.
Nora Koch, Andreas Kraus (2003). Towards a Common Metamodel for Web Applications. In: Proc. 3rd Int.
Conf. Web Engineering (ICWE’03). Lect. Notes Comp. Sci. Springer. To appear.
Daniel Schwabe (2001). A Conference Review System. In: 1st Workshop on Web-oriented Software
Technology, Valencia, http://www.dsic.upv.es/~west2001.
UML (2003). OMG Unified Modeling Language Specification, version 1.5. Object Management Group.
http://www.omg.org/cgi-bin/doc?formal/03-03-01.
Jos Warmer, Anneke Kleppe (1999). The Object Constraint Language. Addison-Wesley.
Gefei Zhang (2002). CASE Support for Modeling Web Applications. Diploma thesis, Ludwig-Maximilians-Universität München.
Precise Graphical Representation of Roles in Requirements
Engineering
Pavel Balabko, Alain Wegmann
Laboratory of Systemic Modeling, Ecole Polytechnique Fédérale de Lausanne, EPFL-IC-LAMS, Lausanne,
Switzerland
Email: pavel.balabko@epfl.ch, alain.wegmann@epfl.ch
Abstract: Modeling complex systems cannot be done without considering a system from multiple views. Using multiple views improves model understandability. However, the analysis of models that integrate multiple views is difficult. In many cases a model can be evaluated only after its implementation. In our work
we describe a visual modeling framework that allows for the evaluation of multiview models. We give an
overview of our framework using a small case study of a Simple Music Management System. For each view
in our framework we specify a separate role of the system. The whole system is specified as a composition of
smaller roles. Each role, as well as the whole model of a system, can be evaluated by means of the analysis of
possible instance diagrams (examples) that can be generated automatically. Instance diagrams are generated
based on the formalization of visual models using the Alloy modeling language.
1 Introduction
Modeling of complex systems from multiple views is unavoidable. Multiple views help in solving
the scalability problem described in [Chang99] and improve model understandability: each view can
be analyzed independently of other views. However, the reasoning about the whole model as a
composition of multiple views is difficult. It is difficult because the same entity in the Universe of
Discourse (UoD or reality) can be modeled as several independent model elements. The role of the modeler in this case is to make one model from the set of models corresponding to different views. A modeler has to find identical model elements (elements that model the same entity in the UoD) in the models to be composed and make sure that the resulting model makes sense (it reflects "reality" or the
UoD in some meaningful way). Here we will not talk about how a modeler should search for the
identity of model elements. This is a separate research topic that is based on the Tarski declarative
semantics: the semantics that defines equivalence of an agreed conceptualization of the UoD to a
concrete concept in the model (see [Naumenko02] for details). We concentrate on the second part of
the problem: assuming that we agree what the identity of model elements means, we investigate how
to ensure that a composed model gives us some adequate reflection of reality.
The design of software systems should be based on adequate models (models that help to solve someone's problems). Only adequate models can ensure that a system evolves in correspondence with the system's users' needs. Therefore early system requirement specification is an important step in the
evolution of software systems. In our work we concentrate on the very early model validation of
system requirements that results in an adequate model. The understanding of model adequacy requires interaction between system developers and customers (or other system stakeholders). The latter are not experts in system modeling and may have problems in understanding the semantics of specification diagrams (like UML class or statechart diagrams). These diagrams are convenient for professionals
and represent the generalizations of many examples (or scenarios) from the problem domain. On the
contrary, for customers it is more convenient to talk in terms of particular examples (or instances) of
models. In our work we describe a visual modeling framework that allows for the composition of
models from different views and the analysis of composed models based on automatically generated
model instances.
Our framework supports modeling with multiple views. For each view we specify a separate role
for a system. The whole system is specified as a composition of smaller roles. Each role as well as the
whole model of a system can be evaluated by means of the analysis of the possible instance diagrams
(examples) that can be generated automatically. Instance diagrams are generated using the Alloy Constraint Analyzer1 [Jackson02]. It is based on the Alloy modeling language, a simple structural modeling language based on first-order logic. Alloy models are analyzed using the Alloy Constraint Analyzer. The analyzer attempts to find a solution based on a given specification. If a solution is
Analyzer. The analyzer attempts to find a solution based on a given specification. If a solution is
found, it displays it as a graph of nodes and edges (similar to an instance diagram).
We describe our framework using a small case study of the Simple Music Management System (a simplified version of www.mp3.com). We took mp3.com as the basis of the Simple Music Management System because its model is a complex one and cannot be modeled from a single point of view.
consider two major views for this system: User View and Artist View. Here we give a brief
description of these two views:
MP3.com users can:
• Find free artist albums and artist singles in places like MP3.com artist pages (pages containing artist albums, singles, etc.), and add them to My Music (the personal user music collection at http://my.mp3.com).
• Play music from My Music.
• Manage the personal user music collection (My Music):
  o Manage user play lists. A user play list can include single tracks (a user track that is not part of any other user album) or album tracks from the personal user music collection (My Music).
  o Delete user playlists, singles and albums from My Music.
MP3.com artists can:
• With a Standard Content Agreement:
  o Create one or more Artist Pages where an artist can post his materials.
  o Artist materials can include artist singles and artist albums with album tracks.
The structure of this paper is the following. In section 2 we give the minimum set of concepts
necessary for the understanding of our paper. In section 3 we show how a model of the Simple Music Management System can be built in our framework: in section 3.1 we begin with the
specification of base roles, in section 3.2 we specify a composition of base roles and in section 3.3 we
show how a composed model can be evaluated through the analysis of automatically generated
instance diagrams. Section 4 is a conclusion.
2 Definition of Main Concepts
In this section we present concepts that we use in our work. In order to give rigorous definitions for
these concepts, we have to choose a consistent semantic framework. We use the ISO/ITU standard
1 See http://sdg.lcs.mit.edu/alloy/
“Reference Model for Open Distributed Processing” – part 2 [ISO96] as a basis of our modeling
framework.
Based on RM-ODP, modeling consists of identifying entities in the universe of discourse and
representing them in a model. The universe of discourse (UoD) corresponds to what is perceived as
being reality by a developer or a customer; an entity is “any concrete or abstract thing of interest”
[ISO96] in the UoD. Identified entities are modeled as model elements in a model. Model elements are
different modeling concepts (object, action, behavior, etc.). We give definitions of some modeling concepts necessary for the understanding of our paper (for other definitions, see the RM-ODP). We begin with the definition of an object. If in the UoD we have entities that can be modeled with state
and behavior, we model these entities as objects:
Object: “An object is characterized by its behavior and dually by its state” [ISO96].
The duality of state and behavior means that the state of an object determines the subsequent behavior
of this object. To specify state and behavior of an object we will use Object Behavior and Object State
Structure:
Object Behavior Structure: A collection of actions and a set of (sequential) relations between
actions.
All possible states of a system are modeled as a state structure:
Object State Structure: A collection of attributes, attribute values and relations between attributes.
Attributes can change their values; relations between attributes can be created or deleted.
Based on the definition of behavior we define a role:
Role: “An abstraction of the behavior of an object” intended for achieving a certain common goal
in collaboration with other roles.
Just as the behavior of an object is defined dually with its state, a role is also defined together with its state. To specify the possible states of a role we use a Role State Structure:
“Role State Structure is a subset of the complete state of a system that is used for reasoning about
a given role”.
In this work we consider the modeling and analysis of a role state structure. Graphically, we represent a role state structure in the following way: each role is represented as a box with two panes (see figure 1). The upper pane indicates the name of the role. The lower pane contains the role state structure. To represent it we use a graphical notation inspired by the UML class diagram.
Figure 1. Graphical notation that shows a role state structure for the "Manage One User Music" role.
The main difference with a UML class diagram is that we make the belonging of attributes to roles explicit. We do this by means of a notation inspired by the Catalysis method [D'Souza98]. We connect attributes that are not part of any other attribute to the border of the lower pane box. In figure 1, for example, we explicitly connect the "User Music" attribute to the "Manage One User Music" role. This allows us to specify that this role always has one "User Music" attribute.
3 Model of the Simple Music Management System and its Analysis
This section is the main part of our paper where we describe our modeling framework based on the
example of a model of a Simple Music Management System. In section 3.1 we begin modeling of a
Simple Music Management System with two base roles: “Manage One User Music” and “Manage One
Artist Music” (roles in the lower part of the role hierarchy in figure 2). The first role is a role of the
system from the point of view of one user that manages his music (My Music). The second role is a
role of the system from the point of view of one artist that manages his music to be provided for users.
Figure 2 Hierarchy of roles for a model of a Simple Music Management System.
In section 3.1 we give models of the two higher-level roles in the role hierarchy: "Manage Multiple User Music" and "Manage Multiple Artist Music". These roles are composed of multiple lower-level roles: "Manage One User Music" and "Manage One Artist Music", respectively. The first
one specifies the system from the point of view where there are multiple users managing their music;
the second role specifies the system from the point of view where there are multiple artists, managing
their music. In section 3.2 we explain how a composition of the “Manage Multiple User Music” and
“Manage Multiple Artist Music” roles can be specified. In section 3.3 we explain how a composed
model can be evaluated based on the analysis of automatically generated instance diagrams.
3.1 Specification of Base Roles
We begin with modeling the state structure of the "Manage One User Music" role. For this role we give the Alloy code and explain how it corresponds to the model of this role (see table 1).
[State structure diagram of the "Manage One User Music" role]
1.  module Main/UserMusic/OneUserMusic          // module declaration
2.  sig UserTrack {}
3.  sig UserMusic { tracks: set UserTrack,
                    playlists: set PlayList,
                    albums: set UserAlbum }
4.  sig UserAlbum { tracks: set UserAlbumTrack }
5.  part sig UserSingle, UserAlbumTrack extends UserTrack {}
6.  sig PlayList { tracks: set UserTrack }
    // ------------------ Multiplicity Facts ------------------
7.  fact { // PlayList (1..*) -> (*) UserTrack
8.      all ut:UserTrack | some pl:PlayList | ut in pl.tracks }
9.  fact { // UserMusic (1) -> (1..n) PlayList
10.     (all um:UserMusic | some pl:PlayList | pl in um.playlists) &&
11.     (all pl:PlayList | one um:UserMusic | pl in um.playlists) }
12. fact { // UserMusic (1) -> (*) UserTrack
13.     all ut:UserTrack | one um:UserMusic | ut in um.tracks }
14. fact { // UserMusic (1) -> (*) UserAlbum
15.     all ua:UserAlbum | one um:UserMusic | ua in um.albums }
16. fact { // UserAlbum (1) -> (1..*) UserAlbumTrack
17.     (all ua:UserAlbum | some uat:UserAlbumTrack | uat in ua.tracks) &&
18.     (all uat:UserAlbumTrack | one ua:UserAlbum | uat in ua.tracks) }
    // ------------------ End of Multiplicity Facts ------------------
Table 1 State structure model for the “Manage One User Music” role and the Alloy code.
Let us look at the Alloy code in table 1. All rectangles (attributes) in the diagram from this table have corresponding type declarations in the code (lines 2-6). A user track can be either a user single (a track that is not part of any user's album) or a user album track. In the code this is expressed as the partitioning of all user tracks into two sets (line 5).
A type declaration may introduce relations. For example the “User Music” type (line 3) introduces
three relations (tracks, playlists and albums). The keyword set indicates multiplicity: it tells that a
relation points to a set of elements. For example, for each element of the “User Music” type there is a
set of elements of the "User Track" type. To make multiplicity stricter (like "1..*" instead of "*"), multiplicity facts are used (lines 7-18). For example, the fact in line 16 contains two constraints (lines 17 and 18). The constraint in line 17 states that for every element of the "User Album" type there is some (at least one) element of the "UserAlbumTrack" type in the "tracks" relation.
As we mentioned in the introduction, the simplest way to reach an agreement with non-professionals (such as customers) that a model adequately represents the UoD is to consider model instances. Model instances can be generated using the Alloy Constraint Analyzer. It checks the consistency of a formal Alloy model, randomly generates a solution (an instance of the model) and visualizes it. In order to
generate a solution, a scope of a model should be defined. A scope defines how many instances of
each type a solution may include.
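The analyzer's scope-bounded search can be imitated in miniature: enumerate every instance within the scope and keep those that satisfy the facts. The toy Java sketch below does this by brute force for the fact of lines 7-8 (every user track lies in some play list); it is our illustration and is far simpler than Alloy's SAT-based search:

```java
// Toy imitation of scope-bounded instance search: with a given number of
// tracks and playlists, enumerate every membership relation and count
// those satisfying the fact "every track is in some playlist"
// (lines 7-8 of table 1). Our illustration only; Alloy's analyzer
// performs a SAT-based search instead of naive enumeration.
public class ScopeSearch {

    static int countSolutions(int tracks, int playlists) {
        int pairs = tracks * playlists;
        int solutions = 0;
        // each bit of 'rel' decides whether track t is in playlist p
        for (int rel = 0; rel < (1 << pairs); rel++) {
            boolean ok = true;
            for (int t = 0; t < tracks && ok; t++) {
                boolean inSome = false;
                for (int p = 0; p < playlists; p++)
                    if ((rel & (1 << (t * playlists + p))) != 0) inSome = true;
                ok = inSome; // fact: all ut | some pl | ut in pl.tracks
            }
            if (ok) solutions++;
        }
        return solutions;
    }

    public static void main(String[] args) {
        // scope 2: 2 tracks, 2 playlists; per track the membership pair
        // may be anything except "in no playlist" (3 of 4 cases), so
        // 3 * 3 = 9 relations satisfy the fact
        System.out.println(countSolutions(2, 2)); // 9
    }
}
```

This also shows why a scope must be fixed: without a bound, the space of candidate instances to enumerate is infinite.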
The examples in figures 3 and 4 were built with the Alloy Constraint Analyzer.
Figure 3 Example with a scope 2 but 1 UserMusic
Figure 4 Example with a scope 2
The scope of the example in figure 3 was defined in the following way: 2 All, 1 UserMusic. This means that this example, generated by the Alloy Constraint Analyzer, could have at most two elements of each type (except the UserMusic type) and one element of the UserMusic type. Having only one element of the UserMusic type corresponds to the fact that the "Manage One User Music"
role includes one UserMusic attribute (see diagram in Table 1). The Alloy Constraint Analyzer allows
for the generation of several different examples for the same model and the same scope. All these examples can be used to reach an agreement with a customer on whether the state structure model of the "Manage One User Music" role corresponds to the common understanding of the UoD.
The next step is to make a state structure model for the "Manage Multiple User Music" role (see the figure in table 2). Compared with the diagram from table 1, we changed the multiplicity of the association that connects the "Manage Multiple User Music" role with the UserMusic attribute (see the figure in table 2). This allows for specifying that this role has multiple "User Music" attributes. All other elements of the diagrams from tables 1 and 2 are the same (for the moment we ignore inv1 and inv2 in the diagram from table 2). This allows us to reuse the Alloy code from table 1 and test it with a new scope. With the new scope we have to require that a solution includes several (at least two) instances of the UserMusic type. Therefore we choose a new scope equal to two. This means a solution may include at most two instances of each type.
A solution, found by the Alloy Constraint Analyzer for the Alloy code from table 2 with the scope equal to two, is shown in figure 4. We can see that this solution does not adequately reflect reality: "A user play list may include single tracks or album tracks from this user's music" (see the description of the Simple Music Management System in the introduction). The solution found by the Alloy Constraint Analyzer, however, shows that a user play list may include tracks of another user. Therefore we have to add constraints to make the state structure model for the "Manage Multiple User Music" role correct. We put these additional constraints in a separate Alloy module: MultipleUserMusic (see table 2).
[Diagram in Table 2: the “Manage Multiple User Music” role with multiple (*) User Music attributes; each User Music has a User (name, info, pictures), playlists (1..* Play List, each with * tracks), albums (* User Album, each with 1..* tracks) and tracks (* User Track, with User Album Track and User Single Track as special cases); invariants inv1 and inv2 annotate the playlists/tracks and albums/tracks cycles.]

module Main/UserMusic/MultipleUserMusic   //module declaration
open Main/UserMusic/OneUserMusic

// ------------------ Multiplicity Invariants -------------------------
fact inv1 {
  all um:UserMusic | all ut:UserTrack |
    (ut in um.playlists.tracks) => (ut in um.tracks) }
fact inv2 {
  all um:UserMusic | all uat:UserAlbumTrack |
    (uat in um.tracks) => (uat in um.albums.tracks) }
// ------------------ End of Multiplicity Invariants ------------------

Table 2 Specification of state for the “Manage Multiple User Music” role and the corresponding
Alloy code.
The additional constraints come directly from the diagram in table 2: for every cycle in this diagram we
add an invariant to the Alloy code. For example, inv1 requires, for every user, that the tracks of all of
that user’s playlists are tracks of the same user. The MultipleUserMusic Alloy module can be
analyzed by generating different examples. We do not give them here since they are similar
to the examples from figures 3 and 4.
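As a sketch (again in the syntax of more recent Alloy versions, which may differ from the 2002 tool), such examples could be requested with a command that forces two UserMusic instances into the generated solution:

```
// Generate instances of the extended model with up to two
// atoms per signature, requiring two distinct UserMusic atoms.
run { #UserMusic = 2 } for 2
```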
Based on the description of our small case study, we can build a state structure model of the
“Manage Multiple Artist Music” role (see figure 5).
[Figure 5 diagram: the “Manage Multiple Artist Music” role with multiple (*) Artist Music attributes; each Artist Music has an Artist (name, info, pictures), tracks (* Artist Track, with Album Track and Single as special cases) and albums (* Artist Album, each with 1..* tracks); an invariant inv1 marks the albums/tracks cycle.]
Figure 5 The state structure model for the “Manage Multiple Artist Music” role.
56
Precise Graphical Representation of Roles in Requirements Engineering
This model and the corresponding Alloy code are built in the same way as for the “Manage Multiple
User Music” role; therefore we do not show the Alloy code here.
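By analogy with the MultipleUserMusic module of Table 2, the omitted module might look like the following sketch. The module path and signature names (ArtistMusic, AlbumTrack) are our assumptions, inferred from the naming scheme of the user-side modules; figure 5 shows only one invariant, on the albums/tracks cycle:

```
module Main/ArtistMusic/MultipleArtistMusic   // hypothetical module path
open Main/ArtistMusic/OneArtistMusic

// Analogue of inv2 from Table 2: every album track held by an
// artist's music must belong to one of that artist's albums.
fact inv1 {
  all am:ArtistMusic | all at:AlbumTrack |
    (at in am.tracks) => (at in am.albums.tracks) }
```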
To conclude: up to this point we have the role state structure models of both the “Manage
Multiple User Music” and “Manage Multiple Artist Music” roles. In the next section we show how the
composition of these two models is done.
3.2 Composed view
As we explain in [Balabko03], the modeling process of a complex system consists in building
models in specific contexts and finding identical elements in these models. In this section we continue
the example of a model of a Simple Music Management System, which gives an intuition of the meaning
of identical elements (for more details about identical elements see [Balabko03]). Let’s imagine an
external observer who perceives the work of two users (see figure 6).
[Figure 6 diagram: an External Observer view showing two instances of the “Manage One User Music” role, each with its own (1) User Music and its own (*) User Tracks; a link labelled “identical?” connects User Tracks of the two users.]
Figure 6 Identical model elements.
[Figure 7 diagrams: (a) an Artist Track instance at:ArtistTrack linked as “identical” to several User Track instances t1:UserTrack; (b) an Artist Album instance aa:ArtistAlbum linked as “identical” to several User Album instances a1:UserAlbum.]
Figure 7. Modeling of identical model elements: (a) Identical User Tracks; (b) Identical User Albums
We suppose that this external observer cannot see all the details about user tracks: how they are created,
used and deleted. The only thing he can observe is the content of tracks. This defines an
External Observer view. From this point of view, the external observer may say that some user tracks
of different users are identical, i.e. the music of these tracks is the same. The question is whether we
have to model this identity or not. In our case study, this identity is important because it brings in
the Artist Track concept from the “Manage Multiple Artist Music” role: two tracks can
be considered identical if they correspond to the same artist track.
To model the identity of model elements in our framework, we define a new attribute (concept) that
is associated with all identical elements. In our case study, this new attribute is the Artist Track
attribute from the “Manage Multiple Artist Music” role (see figure 7.a). Figure 7.a shows that an Artist
Track has multiple identical User Tracks. In the same way as User Tracks can be identical because
they refer to the same artist track, user albums can also be identical because they refer to the same
artist album. Therefore, we model that an Artist Album has multiple identical User Albums (see figure
7.b).
To conclude: we modeled the state structure of two roles (“Manage Multiple User Music” and
“Manage Multiple Artist Music”). We also modeled how the state structures of these two roles are
composed (using the diagram from figure 7). Together, these models give us the state structure model of a
Simple Music Management System. At this point we can write Alloy code (see table 3) that
reflects the composition of the two roles and then analyze the complete model.
[Diagram in Table 3: the Simple Music Management System composed of the “Manage Multiple Artist Music” and “Manage One User Music” roles; each User Album refers (aalbum, 0..1) to one (1) Artist Album, each User Album Track refers (atrack) to one (1) Artist Track; albums hold 1..* tracks on both sides, and invariant inv1 marks the resulting cycle.]

1. module MusicManagementSystem //module declaration
2. open MusicManagementSystem/UserMusic/MultipleUserMusic
3. open MusicManagementSystem/UserMusic/OneUserMusic
4. open MusicManagementSystem/ArtistMusic/MultipleArtistMusic
5. open MusicManagementSystem/ArtistMusic/OneArtistMusic
7. sig UserTrack1 extends UserTrack {atrack: ArtistTrack}
8. sig UserAlbum1 extends UserAlbum {aalbum: Album}
9. fact { all ut:UserTrack | ut in UserTrack1 }
10. fact { all ua:UserAlbum | ua in UserAlbum1 }
// ------ Multiplicity invariant: Album (1) <- (0..1) UserAlbum ------
11. fact { all um:UserMusic | all a:Album |
      sole ua:um.albums | a = ua.aalbum }
// ------ Cycle invariant ------
12. fact inv1 { all um:UserMusic | all ua:um.albums |
      ua.aalbum.tracks = ua.tracks.atrack }
13. fact {
      // Definition of UserSingle: a user track
      // that is not a part of any user album
      all um:UserMusic | all us:UserSingle |
        (us in um.tracks) => (us.atrack not in um.albums.tracks.atrack) }

Table 3 State structure composition of the “Manage Multiple User Music” and “Manage Multiple
Artist Music” roles.
We have to reflect in the Alloy code the new relations between the ArtistTrack and UserTrack types
and between the Album and UserAlbum types. To make a new relation between two already
specified attributes in the current version of Alloy, we have to extend the existing types UserTrack and
UserAlbum (lines 7-8 in table 3). To ensure that the new attributes (UserTrack1 and UserAlbum1) and
their predecessors participate in the same relations, we created two facts (lines 9-10 in table 3). Line 11
in table 3 specifies the multiplicity invariant. Note that this multiplicity is specified in the context of
one user (as shown in the diagram from table 3): we require that, for any artist album, there is at most
one user album in the context of a given user music. When we create the Alloy code, we also have to take
into account possible “conceptual cycles”. As in the previous examples, we have to create invariants for
these cycles.
The last constraint2 that we add to the Alloy code is related to the definition of a user single track:
a user track that is not part of any user album (see introduction). The fact on line 13 specifies
this constraint. It states that, for any user single of a given user, the artist track of this user single is
not the artist track of any album track of the same user.
3.3 Analysis of the Composed Model
Based on the Alloy code from table 3, several instance diagrams can be generated. One of these
diagrams is shown in figure 8.
2 This constraint was discovered as a result of the analysis of instance diagrams generated by the
Alloy Constraint Analyzer.
[Figure 8 instance diagram: two user musics, UserMusic_0 and UserMusic_1, with their tracks and albums (UserTrack_1 and UserTrack_2 as UserAlbumTracks inside UserAlbum_1 and UserAlbum_2, UserTrack_3 as a UserSingle), composed with ArtistMusic_0 holding Album_0, Album_1 and Album_2 and the artist tracks ArtistTrack_1 (AlbumSingle), ArtistTrack_2 and ArtistTrack_3 (AlbumTracks); atrack and aalbum links connect the user-side instances to the artist-side instances.]
Figure 8 Instance diagram for the Simple Music Management System.
We used the filters provided by the Alloy Constraint Analyzer tool to hide the instances of the User Play List
attribute. This attribute is not involved in the composition of the base roles and is therefore not necessary
for the analysis of the composition.
To make the analysis easier, we colored the instances of attributes from the “Manage Multiple User
Music” role in black. All other instances of attributes are from the “Manage Multiple Artist Music”
role. The diagram from figure 8 can be used for the analysis of the model of a Simple Music
Management System. This analysis can be done in collaboration with the customer who asked for the
development of this system. The goal of the analysis is to show possible states of the system (instance
diagrams) and ask whether these states adequately represent the reality. For example, the following
question can be asked: can a user single track (see UserTrack_3 in figure 8) be related to an artist
album track (see ArtistTrack_2 in figure 8), i.e. can a user add only one track from an artist album
to his music collection instead of adding the whole album? Similar questions can be asked based on several
instance diagrams like the one in figure 8. This kind of model analysis helps to avoid mistakes that
would otherwise be discovered only in the implementation phase.
4 Conclusion
In this work we presented a small case study of a Simple Music Management System. This case
study shows how our approach can be used for the modeling of role state structures, the composition of
role state structures and the analysis of this composition. There are several approaches that allow for the
composition of roles.
Many of these approaches are very implementation-oriented. Michael VanHilst and David Notkin
describe a method for the implementation of roles in C++ [VanHilst96]. Several works consider role
composition using implementations with role wrappers. Wrappers can intercept incoming or outgoing
messages of different roles and impose rules on message passing between these roles (a
communication protocol). For example, M. Aksit specifies a communication protocol between roles
with Abstract Communication Types [Aksit94], L. Andrade and J. Fiadeiro specify it with
coordination contracts [Andrade01], and M. Kandé specifies it as a connector that defines a pattern of
interactions between two or more components [Kande00]. Achermann et al. [Achermann01]
propose a more general approach to composition that is based on a new composition language,
PICCOLA, which can be used as a unique language for component composition. This language can
provide support for key concepts from various existing composition languages: scripting
languages (Perl, Python), coordination languages (Linda, Manifold), architecture description
languages (Wright, Rapide), and glue languages (Smalltalk). However, none of these approaches can
be used by an analyst whose goal is to build system requirements without looking at
implementation details. All these composition approaches are quite difficult to use and have no
visual representation, which would be more convenient for a requirements engineer. In the field of requirements
engineering, the following approaches based on role modeling can be used: RIN – Role Interaction
Networks [Singh92], RAD – Role Activity Diagrams [Ould95] and OOram – the Object-Oriented
Role Analysis Method [Reenskaug96]. These three approaches are quite similar: roles are considered
as sets of sequentially ordered actions and/or interactions. The main drawback of these approaches is
that goals are difficult to model with these PMTs. Another problem with these PMTs is that states are
defined in such a way that it is difficult to split the state into subsets (for different contexts). These two
problems are related to the fact that the PMTs do not have a state structure (a state is considered as an
instant between consecutive actions). We have not seen many approaches that allow for the
composition and analysis of object state structures. One similar approach is the View based UML
(VUML) described in [Nassar03]. This approach provides the concept of a multiview class that
specifies the composition of models from the points of view of different system users. The
dependencies between views are specified using OCL. However, the VUML approach does not provide
means for the evaluation of a composition before its implementation.
The main contribution of our work is the possibility of a formal visual analysis of a composed
model. The analysis of a composed model allows for reaching an agreement with a customer that a
model is adequate before the system gets implemented.
5 References
[Achermann01] Achermann, F., et al., “Piccola - a Small Composition Language”, in Formal Methods for
Distributed Processing - A Survey of Object-Oriented Approaches, H. Bowman and J. Derrick,
Editors, 2001, Cambridge University Press, pp. 403-426.
[Aksit94] Aksit, M., et al., “Abstracting Object Interactions Using Composition Filters”, in Proceedings of
the ECOOP'93 Workshop on Object-Based Distributed Programming, R. Guerraoui, O.
Nierstrasz, and M. Riveill, Editors, 1994, Springer-Verlag, pp. 152-184.
[Andrade01] Andrade, L. and J. Fiadeiro, “Coordination Technologies for Managing Information System
Evolution”, in Proceedings of CAiSE'01, 2001, Interlaken, Switzerland, Springer-Verlag.
[Balabko03] Balabko, P. and A. Wegmann, “A Synthesis of Business Role Models”, in Proceedings of ICEIS
2003, ICEIS Press, Angers, France; download from
http://lamswww.epfl.ch/publication/lams_publication_selection.asp
[Chang99] Chang, S.K., et al., “The Future of Visual Languages”, in Proceedings of the IEEE Symposium on
Visual Languages, 1999, Tokyo, Japan, pp. 58-61.
[D'Souza98] D'Souza, D.F. and A.C. Wills, Objects, Components, and Frameworks With UML: The Catalysis
Approach, Addison-Wesley Object Technology Series, 1998, Addison-Wesley.
[ISO96] ISO/IEC, (1996), 10746-1, 3, 4 | ITU-T Recommendation X.902, Open Distributed Processing -
Basic Reference Model - Part 2: Foundations.
[Jackson02] Jackson, D., “Micromodels of Software: Lightweight Modelling and Analysis with Alloy”,
Software Design Group, MIT Lab for Computer Science, 2002; downloaded from
http://sdg.lcs.mit.edu/alloy/reference-manual.pdf
[Kande00] Kandé, M.M. and A. Strohmeier, “Towards a UML Profile for Software Architecture”, in
UML'2000 - The Unified Modeling Language: Advancing the Standard, 2000, York, UK,
Lecture Notes in Computer Science, Springer-Verlag.
[Nassar03] Nassar, M., et al., “Towards a View Based Unified Modeling Language”, in Proceedings of
ICEIS 2003, ICEIS Press, Angers, France.
[Naumenko02] Naumenko, A., Triune Continuum Paradigm: a paradigm for General System Modeling and its
applications for UML and RM-ODP, Ph.D. thesis number 2581, EPFL, June 2002.
[Ould95] Ould, M.A., Business Processes: Modelling and Analysis for Re-engineering and Improvement,
John Wiley & Sons, 1995.
[Reenskaug96] Reenskaug, T., et al., Working With Objects: The OOram Software Engineering Method, 1996,
Manning Publications Co.
[Singh92] Singh, B. and G.L. Rein, Role Interaction Nets (RINs): A Process Description Formalism, 1992,
MCC, Austin, TX, USA, Technical Report CT-083-92.
[VanHilst96] VanHilst, M. and D. Notkin, “Using Role Components to Implement Collaboration-Based
Designs”, in Proceedings of OOPSLA'96, 1996, ACM Press.
Enterprise Knowledge and Information System Modelling
in an Evolving Environment
Selmin Nurcan *+, Judith Barrios x

(+) IAE de Paris (Business School), Université Paris 1 - Panthéon - Sorbonne, 21, rue Broca, 75005 Paris, France
(*) Université Paris 1 - Panthéon - Sorbonne, Centre de Recherche en Informatique, 90, rue de Tolbiac, 75634 Paris cedex 13, France
(x) Computing Department, Faculty of Engineering, University of Los Andes, Merida, 5101, Venezuela

{ nurcan@univ-paris1.fr, ijudith@ula.ve }
Abstract
The assumption of the work presented in this paper is the situatedness of the enterprise modelling process in
a continuously evolving environment. The Enterprise Knowledge Development - Change Management
Method (EKD-CMM) provides multiple and dynamically constructed ways of working to organise and to
guide the enterprise knowledge modelling and organisational change processes. The method is built on the
notion of a labelled graph of intentions and strategies, called a map, and on the associated guidelines. This paper
presents the EKD-CMM map and highlights the relationships between the enterprise objectives, processes
and the supporting information and communication technology (ICT) systems.
1. Introduction and motivation
1.1. Evolution of enterprises’ requirements and of types of management
Before the seventies, companies used the principle of scientific management founded by Frederick W.
Taylor and were strongly production-oriented. The resulting organisation led to a vertical division of
the activities to be performed and to functional and extremely hierarchical structures.
Since the eighties, companies have been facing huge pressures to improve their competitiveness.
Responses to these pressures were restructuring, downsizing and reengineering, along with a strong commitment
to customer satisfaction. Organisational transformation then became a major issue. In this competitive
and evolving market, quality is fundamental to obtain and to keep market share. Stora and Montaigne
(Dumas and Charbonnel, 1990) define quality as "... the conformity of products or services with the
needs expressed by internal or external customers and undertaken by internal or external suppliers".
The concept of quality went through four successive stages. Around 1940, Taylor's theories about
work organisation came strongly into effect and caused a separation between producers and quality
controllers. Quality was obtained essentially by the final control of the products (inspection). Between
1950 and 1960, emphasis lay on the quality of the process and not only on the quality of the product.
Roles and responsibilities between production and quality changed. The production function was
responsible for the quality of its products, and controls were transferred to it. The quality function was
responsible for the quality procedures necessary to meet customer needs (quality assurance). In 1980,
experts acknowledged that the total management of quality is one of the factors in improved
competitiveness. Total Quality Management (TQM) was thus defined as a management method
which aims towards long-range success. It is based on collective participation of each member in the
improvement of processes, products, services and organisation of the company. Each business process
is (re)designed to contribute to the quality of the products and services. The last stage is around the
wave of the Business Process Reengineering (BPR), proposed by Hammer and Champy (Hammer and
Champy, 1993), which consists of a radical remodelling of the organisation around its processes1. The
difference between TQM and BPR is that the former deals with continuous change whereas the latter
1 A set of activities which produces, from one or several inputs, an output valuable for the customer.
EMSISE’03
deals with discontinuous, radical change.
Over the past decade, continuous challenges have been made to traditional business practices. Rapid
market changes such as electronic commerce, deregulation, globalisation and increased competition
have led to a business environment that is constantly evolving. Companies change to better satisfy
customer requirements, address increasingly tough competition, improve internal processes, and modify
the range of products and services they offer (Jacobson et al., 1994). At the same time, organisations
also experience the effects of the integration and evolution of information technology. While
information systems continue to serve traditional business needs such as co-ordination of production
and enhancement of the services offered, a new and important role has emerged, namely the potential for
such systems to adopt a supervisory or strategic support role. Information and Communication
Technologies (ICT) were thus positioned as a strategic resource that enables automation, monitoring,
analysis and coordination to support the transformation of business processes (Grover et al., 1994).
While ICT and information systems are becoming an integrated aspect of organisations, efficient
communication between stakeholders and requirements engineers has become more and more critical,
because systems should be continuously adapted to changing business practices and needs.
Stakeholders and requirements engineers have to understand each other well when eliciting and
understanding requirements and reconciling differences at the technical and social levels.
1.2. New requirements for information systems development
In such an unstable environment, information system developers were challenged to develop systems
that can meet the requirements of modern organisations. The paradigms of Business Process
Reengineering and Business Process Improvement contrast with traditional information system
development, which focused on automating and supporting existing business processes (Guha et al.,
1993). Now, enterprises should create entirely new ways of working to survive in a competitive
environment. As stated in (Barrett, 1994), organisational transformation depends on the creation of a
powerful vision of what the future should be like. We claim that an in-depth understanding of the current
functioning is also required. In this context, enterprise knowledge modelling can help in understanding
the current business situation (Jarzabek and Ling, 1996) and in establishing a vision of what the future
should be like. Therefore, the modelling of enterprise knowledge becomes a pre-requisite for system
requirements elicitation and system development.
Enterprise knowledge modelling refers to a collection of conceptual modelling techniques for
describing different facets of the organisational domain, including operational (information systems),
organisational (business processes, actors, roles, flow of information, etc.) and teleological (purposes)
considerations (Bubenko, 1994). Existing enterprise knowledge modelling frameworks (Dobson et al.,
1994), (van Lamsweerde et al., 1995), (Yu and Mylopoulos, 1996), (Loucopoulos et al., 1998),
(Nurcan et al., 1998), (Rolland et al., 1998b), (Bubenko, 1994), (Loucopoulos and Kavakli, 1995)
stress the need to represent and structure enterprise knowledge. However, very few approaches
investigate the dynamic aspect of knowledge modelling, i.e., how enterprise knowledge models are
generated and evolve and how reasoning about enterprise knowledge can guide the organisational
transformation. In this context, process guidance concerns the support provided to the enterprise modelling
and organisational transformation processes. Work in this area mainly focuses on prescriptive approaches.
However, due to its social and innovative nature, organisational change cannot be fully
prescribed. In fact, the enterprise modelling process in an evolving environment is a decision making
process, i.e. a non-deterministic process. Accordingly, process guidance should allow the next
modelling activity to be performed to be selected dynamically, depending on the situation at hand (Rolland
et al., 1996), (Rolland et al., 1997a), (Rolland et al., 1999), (Rolland et al., 2000). To this end, this
paper puts forward an intentional framework known as the EKD-CMM2 approach. EKD-CMM is the
confluence of two technologies: Enterprise Knowledge Modelling and Process Guidance.
This paper is organised as follows: Section 2 sets the terminology and background of our proposal
and presents a state-of-the-art in order to situate the EKD-CMM method among several others that
2 The term EKD-CMM stands for Enterprise Knowledge Development - Change Management Method.
have been published under similar research themes. Section 3 defines the EKD-CMM views of
enterprise knowledge modelling and proposes a way-of-working which can be used for several
organisational purposes from business process reengineering to information systems design.
2. State of the Art
For the purposes of information systems planning, enterprises first used functional, exhaustive and bottom-up
approaches like Business Systems Planning (IBM, 1984), whose core concept was the information
architecture of the analysed business, describing which data is handled by which business function. The
need to take into account higher level (strategic) objectives led afterwards to the use of non-exhaustive
and top-down approaches based, for example, on the study of the critical success factors of the enterprise
(Ward and Griffiths, 1996). But even by taking into account the critical success factors, the enterprise
cannot obtain the maximum benefits from ICT. In fact, in these classical approaches, the choices are
made first at the strategic and organisational levels before thinking about support systems. A more recent
approach suggests reversing the usual way of doing things by first studying the possibilities of ICT, then
identifying which innovative activities the enterprise can perform supported by them, and finally eliciting
the corresponding organisational objectives (Österle et al., 1993). Obviously, the new activities elicited in
this way should be compatible with the mission of the enterprise.
Most of the current approaches to modelling enterprise knowledge and organisational change view
change management as a top-down process. Such approaches (e.g., BPR) assume that the change
process starts with a high level description of the business goals for change. The initial goals are then
put into more concrete forms during the process, progressively arriving at the specification of the
future system requirements that satisfy these goals. Other approaches (e.g., TQM) advocate a bottom-up
orientation whereby the need for change is discovered through analysis of the current
organisational situation and reasoning about whether existing business structures satisfy the strategic
concerns of the stakeholders. In the first case, the goals for change are prescribed, in the sense that they
do not explicitly link the need for change to the existing organisational context; rather, they reflect how
change is perceived from the strategic management point of view or is codified in the organisation’s
policies and visions. Such goals do not always reflect reality (Anton, 1996). On the other hand, in
bottom-up approaches, goals for change are descriptive, i.e., they are discovered from an analysis of
actual processes. However, descriptive goals tend to be too constrained by current practice, which can
be a serious drawback when business innovation is sought (Pohl, 1996).
From the point of view of method engineering, a business model is a product (Odell, 1996),
(Brinkkemper, 1996). In fact, the product is the desired output of the design process, whereas the
process keeps track of how the product has been constructed in a descriptive manner. A process and its
related product are specific to an application. A Product Model defines the set of concepts and their
relationships that can be used to build a product, i.e., in our case, to build a model representing a given
business. Nevertheless, method engineering establishes that a well-defined method should also have a
Process Model that guides the construction of the product. The Process Model defines how to use the
concepts defined within a Product Model and may serve two distinct purposes: descriptive or
prescriptive (Curtis et al., 1992), (Lonchamp, 1993). A descriptive Process Model aims at recording
and providing a trace of what happens during the development process (Gotel and Finkelstein, 1996).
A prescriptive Process Model is used to describe "how things must/should/could be done".
Prescriptive Process Models are often referred to as ways-of-working (Seligmann et al., 1989). A
Process Model and its related Product Model 3 are specific to a method.
The study of the state-of-the-art on Product Models suggests that existing approaches to enterprise
knowledge modelling can be classified into two categories. In the first category, an organisation is
represented as a set of inter-related elements satisfying common objectives (Checkland and Scholes,
1990), (Flood and Jackson, 1991). For instance, VSM (Espejo and Harnden, 1989) allows us to model
an organisation as a set of viable sub-systems representing respectively the operation, co-ordination,
3 We use capitalised initials in order to differentiate the method-specific Models from the application-specific models (for instance a business
model) that compose the product.
control, intelligence (reasoning, analysis) and politics (strategy) aspects of an organisation.
In the second category, the focus is on developing different views of the organisation, dealing
respectively with actors, roles, resources, business processes, objectives, rules, etc. (Bubenko, 1994),
(Decker et al., 1997), (Jarzabek and Ling, 1996). Business process modelling usually employs and/or
combines three basic views: (i) the functional view, expressed with Data Flow Diagrams
(DeMarco, 1979), (Ross, 1985), (Marca and McGowan, 1993); (ii) the behavioural view, focused on
when and under which conditions activities are performed and based on state diagrams or interaction
diagrams (Jacobson et al., 1993), (Harel, 1990); and (iii) the structural view, focused on the static
aspect of the business process, capturing the objects that are manipulated by the business process and
their relationships (Rumbaugh et al., 1991). Existing Workflow Models also belong to the second
category (Ellis and Wainer, 1994), (Medina-Mora et al., 1992), (McCarthy and Sarin, 1993).
The study of the state-of-the-art suggests that existing Process Models can be classified into three
categories (Dowson, 1987) (Rolland et al., 1999): activity-oriented Models, product-oriented Models,
and decision-oriented Models. Each category has an underlying paradigm that we examined in terms
of its appropriateness to enterprise modelling in an evolving context.
Activity-oriented Models attempt to describe the development process as a set of activities with
conditions constraining the order of these activities (Emmerich et al., 1991), (Jacherri et al., 1992),
(Armenise et al., 1993), (Finkelstein et al., 1994). They were inspired by generic system development
approaches (e.g. the Waterfall model, the Spiral model, the Fountain model, etc.). The linear view of
activity decomposition promoted by this paradigm is inadequate in the context of enterprise modelling
and organisational change (Lehman, 1987), which requires creative activities essential to development,
for instance the use of heuristics, the choice between alternatives, backtracking on decisions, etc.
Product-oriented Models do not put forward the activities of a process but rather the result of these
activities (Finkelstein et al., 1990), (Humphrey, 1989), (Nadin and Novak, 1987). A positive aspect is
that they model the evolution of the product and couple the product state to the activities that generate
this state. They are useful for tracing the transformations performed and their resulting products.
However, as far as guidance is concerned, and considering the highly non-deterministic nature of the
enterprise knowledge development process, it is difficult to write down a realistic state-transition
diagram that adequately describes what has to happen during the entire process.
The most recent type of Process Models is based on the decision-oriented paradigm (Jarke et al.,
1992), (Potts, 1989), (Rolland and Grosz, 1994), (Rose et al., 1991), according to which the successive
transformations of the product are looked upon as consequences of decisions. Such models are
semantically more powerful than the other two because they explain not only how the process
proceeds but also why transformations happen (Lee, 1991), (Ramesh and Dhar, 1992). Their
enactment guides the decision-making process that shapes the development, helps in reasoning about
the rationale of decisions, and records the associated deliberation process. The decision-oriented modelling
paradigm seems to be the most appropriate for the enterprise modelling and organisational
transformation processes both for trace and guidance purposes. The addition of a capability to record
the design decisions facilitates understanding of the engineer's intention, and thus, leads to a better
reuse of the results and an easier introduction of change in systems requirements. However, enterprise
knowledge development processes, whether the enterprise is in a stable situation or in a transformation
process, are not adequately covered in existing decision-oriented models. Clearly, there is a strong
need for methods that offer process guidance to provide advice on which activities are appropriate
under which situations and how to perform them (Wynekoop and Russo, 1993), (Dowson and
Fernstrom, 1994), (Rolland, 1996), (Rolland et al., 1999) and to handle the modelling of evolving
organisations.
In the following section, we present a method, namely Enterprise Knowledge Development- Change
Management Method (EKD-CMM), which intends to provide such guidance.
3. Enterprise Knowledge Development – Change Management Method
64
Enterprise Knowledge and Information System Modelling in an Evolving Environment
Due to their social and innovative nature, enterprise knowledge modelling and organisational change
cannot be fully prescribed: they are, first of all, decision-making processes, and are therefore
non-deterministic in nature.
Sub-section 3.1 presents the EKD-CMM framework and, more precisely, the organisational situations
in which the EKD-CMM method can be applied. This sub-section also gives a short summary of the
EKD-CMM Product Models. Sub-section 3.2 presents the high-level vision of the EKD-CMM Process
Model. Sub-section 3.3 highlights the relationships between the modelling of the business processes
and of the ICT systems supporting them.
3.1. EKD-CMM Framework
Enterprises that can manage complexity and respond to rapid change in an informed manner can
gain a competitive advantage. EKD-CMM is a method for documenting an enterprise, its objectives,
business processes and support systems, helping enterprises to consciously develop schemes for
implementing changes. EKD-CMM satisfies two requirements: (i) assisting enterprise knowledge
modelling and (ii) guiding the enterprise modelling and organisational transformation processes.
The EKD-CMM enterprise knowledge modelling component (Nurcan et al., 1999), (Loucopoulos et
al., 1998), (Rolland et al., 1998c), (Bubenko, 1994), (Loucopoulos and Kavakli, 1995), (Nurcan and
Rolland, 2003) recognises that it is advantageous to examine an enterprise from multiple perspectives.
As shown in Figure 1, the inter-connected set of EKD-CMM models describing an enterprise are
structured in three levels of concern: Enterprise Goal Model, Enterprise Process Model and
Enterprise Information System Model. The first two levels focus on intentional and organisational
aspects of the enterprise, i.e. the organisational objectives and how these are achieved through the
co-operation of enterprise actors manipulating enterprise objects. The third level is useful when the
EKD-CMM approach is applied to define the requirements for an information system. In this case, the
focus is on system aspects, i.e. the computerised system that will support the enterprise, its processes
and actors in order to achieve the enterprise objectives.
Figure 1: The EKD-CMM view of enterprise modelling
[The figure relates enterprise objectives, enterprise objects, enterprise processes, actors/roles,
roles/activities and information systems across the three levels of concern.]
Therefore, within EKD-CMM, the product is a set of operational (information systems), organisational
(business processes) and intentional (business objectives) models describing the new system to be
constructed and the organisation in which it will operate. The Product Models used at the two higher
levels of abstraction have been presented previously in (Nurcan et al., 2002), (Barrios and Nurcan,
2002), (Nurcan and Rolland, 2003). We list them hereafter as a reminder of their purposes.
- The goal models represent the current or future enterprise objectives. Their purpose is to
  describe what the enterprise wants to achieve or to avoid.
- Enterprise business processes, motivated by enterprise objectives, are modelled at the second
  level according to several points of view. Consequently, enterprise process models resulting
  from these descriptions require different Product Models:
  o What happens in enterprise processes can be analysed in terms of the roles that
    individuals or groups play in order to meet their responsibilities. Roles correspond to
    sets of responsibilities and related activities. The actor/role model aims to describe
    how actors are related to each other and also to enterprise objectives.
  o People perform activities to achieve enterprise objectives. The role/activity model is
    used to define enterprise processes and the way they consume/produce resources to
    achieve enterprise objectives.
  o Activities carried out by different roles deal with business objects. The object model is
    used to define the enterprise entities, attributes and relationships.
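As a hedged illustration of these sub-models, the sketch below encodes them as minimal Python dataclasses; all class, attribute and example names are our own illustrative choices, not part of EKD-CMM:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of the EKD-CMM process-level sub-models.

@dataclass
class BusinessObject:          # object model: enterprise entities and attributes
    name: str
    attributes: List[str] = field(default_factory=list)

@dataclass
class Activity:                # role/activity model: consumes/produces resources
    name: str
    consumes: List[BusinessObject] = field(default_factory=list)
    produces: List[BusinessObject] = field(default_factory=list)

@dataclass
class Role:                    # a set of responsibilities and related activities
    name: str
    activities: List[Activity] = field(default_factory=list)

@dataclass
class Actor:                   # actor/role model: who plays which roles
    name: str
    plays: List[Role] = field(default_factory=list)

# Hypothetical example: an actor playing a billing role
order = BusinessObject("Order", ["id", "date"])
invoicing = Activity("Invoicing", consumes=[order])
clerk_role = Role("Billing clerk", activities=[invoicing])
actor = Actor("Alice", plays=[clerk_role])
```

Such a structure makes the relationships between the three sub-models (actors play roles, roles carry activities, activities manipulate objects) directly navigable.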
Using models to represent the enterprise allows a more coherent and complete description of
enterprise objectives, business processes, actors and enterprise objects than a textual description.
These models are useful because they make it possible (i) to improve the knowledge about the
enterprise, (ii) to reason about alternative solutions and diverging points of view, and (iii) to reach an
agreement. They have proved their efficiency both for improving communication and for facilitating
organisational learning.
The intention-based modelling used in EKD-CMM provides a basis for understanding and supporting
enterprise modelling and organisational change, and helps the development of the supporting
information systems. Process guidance in EKD-CMM is based on a map, a navigational structure in
the sense that it allows the requirements engineer to determine a path from the Start intention to the
Stop intention. The approach suggests a dynamic construction of the most appropriate path by
navigating in the map. Thus, EKD-CMM proposes several ways of working, and in this sense it is a
multi-method. In fact, using the EKD-CMM framework, one can start at any level and move on to
other levels, depending on the modelling and organisational situations. As illustrated in Figure 2, the
proposed method can be used for both business engineering and information system engineering
purposes, allowing:
(a) business process reengineering: from the business processes level to the business objectives for
change (Rolland et al., 1998b), (Nurcan et al., 1999), (Nurcan and Rolland, 1999), (Rolland et
al., 1999b) and then to the business process architecture for the future;
(b) reverse engineering: from legacy information systems to the information system level, which
may then be used to model the business processes level (Kavakli and Loucopoulos, 1998),
(Kardasis and Loucopoulos, 1998);
(c) forward engineering or information system design: from business objectives to business
process modelling, to the choice of the processes to be supported by information and
communication technologies (ICT), and then to IS modelling;
(d) business process improvement: by modelling and analysing the business processes in order to
enhance them by specific modifications such as role definition or activity flow;
(e) quality management: by defining the business processes and quality procedures and by
aligning them with one another.
Figure 2: Purposes for using EKD-CMM
[The figure shows the current and future states of the three levels: business objectives (goal model),
business processes (actor/role, role/activity, object and rule models) and information systems
(information system model), with links (a) to (e) denoting the five purposes listed above.]
In our previous work, we were particularly interested in the definition and modelling of
organisational change processes. To this end, we focused our attention on business processes to
understand the current way of working of the enterprise (second level in Figure 1) and reasoned on the
organisational change at the intentional level (Nurcan et al., 1999), (Nurcan and Rolland, 1999),
(Rolland et al., 1999b). The EKD-CMM approach has thus been successfully applied in a European
project (ELEKTRA) aiming to discover generic knowledge about change management in the
electricity supply sector in order to reuse it in similar settings. Two end-user applications were
considered within the project. The common theme underpinning their requirements was the need to
deal with change in a controlled way, leading to an evaluation of alternative options of possible means
to meet the objectives for change.
Our conclusion was that reasoning on the enterprise objectives facilitates the understanding of
problems and the communication on essential aspects (what and why instead of who, when, where and
how). This representation “by objectives” may (i) constitute a document for business analysts to
discuss the enterprise and its evolution, and (ii) help, in turn, analysts, designers and developers of
information systems.
Our current work focuses on the two lower levels shown in Figure 1, namely enterprise process
models and information systems, in order to highlight the relationships between the enterprise process
models and the specifications of the ICT systems.
3.2. EKD-CMM Process Model
This sub-section first presents the concepts used for defining the EKD-CMM Process Model, and then
introduces the EKD-CMM Map, which defines the multiple ways of working offered by the method.
3.2.1. The Map Meta-Model
A map (Rolland et al., 1999c) is a Process Model in which a non-deterministic ordering of intentions
and strategies has been included. It is a labelled directed graph with intentions as nodes and strategies
as edges between intentions. As shown in Figure 3 (4), a map consists of a number of sections, each of
which is a triplet <source intention Ii (5), target intention Ij, strategy Sij>. There are two distinct
intentions, called Start and Stop respectively, that represent the intentions to start navigating in the map
and to stop doing so. Thus, there are a number of paths in the graph from Start to Stop. The map is a
navigational structure that supports the dynamic selection of the intention to be achieved next and the
appropriate strategy to achieve it, whereas the associated guidelines help in the achievement of the
selected intention.
(4) We use an E/R-like notation. A box represents an Entity Type (ET), a labelled link represents a
Relationship Type (RT), and an embedded box refers to an objectified RT.
(5) Intentions are in italics (Ii, Ij).
A strategy is an approach, a manner to achieve an intention. The strategy, as part of the triplet
<Ii, Ij, Sij>, characterises the flow from Ii to Ij and the way Ij can be achieved. The specific manner in
which an intention can be achieved is captured in a section of the map whereas the various sections
having the same intention Ii as a source and Ij as target show the different strategies that can be
adopted for achieving Ij when coming from Ii. Similarly, there can be different sections having Ii as
source and Ij1, Ij2, ..., Ijn as targets. These show the different intentions that can be achieved after the
achievement of Ii.
There might be several flows from Ii to Ij, each corresponding to a specific strategy. In this sense the
map offers multi-thread flows. There might also be several strategies from different intentions to reach
an intention Ii. In this sense the map offers multi-flow paths to achieve an intention. The map contains
a finite number of paths, each of them prescribing a way to develop the product (for instance a
business process model), i.e. each of them is a Process Model. Therefore the map is a multi-model.
The approach suggests a dynamic construction of the actual path by navigating in the map. Because
the next intention and strategy to achieve it are selected dynamically, guidelines that make available
all choices open to handle a given situation are of great importance. A guideline is a set of indications
on how to proceed to achieve an intention. A guideline embodies method knowledge to guide the
designer in achieving an intention in a given situation. The execution of each map section is supported
by a guideline which can be atomic or compound. Some sections in a map can be defined as maps in a
lower level of abstraction.
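Under these definitions, a map can be sketched as a small data structure; the code below is an illustrative reading of the meta-model (the representation and the path-enumeration routine are ours), reusing the example intentions and strategies that appear in Figure 3:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# A map is a labelled directed graph: nodes are intentions, edges are
# sections <source intention, target intention, strategy>.
Section = Tuple[str, str, str]

def all_paths(sections: List[Section], start="Start", stop="Stop"):
    """Enumerate every path from Start to Stop; each path is one Process Model."""
    out: Dict[str, List[Section]] = defaultdict(list)
    for s in sections:
        out[s[0]].append(s)

    def walk(node, path):
        if node == stop:
            yield path
            return
        for sec in out[node]:
            if sec not in path:          # each section used at most once per path
                yield from walk(sec[1], path + [sec])

    yield from walk(start, [])

# Example map from Figure 3
sections = [
    ("Start", "Ii", "Si"),
    ("Ii", "Ij", "Sij1"), ("Ii", "Ij", "Sij2"),   # multi-thread: two strategies Ii -> Ij
    ("Ii", "Ik", "Sik"), ("Ij", "Ik", "Sjk"),     # multi-flow: Ik reachable from Ii and Ij
    ("Ik", "Stop", "Sk"), ("Ij", "Stop", "Se"),
]
paths = list(all_paths(sections))
```

The five paths found for this example reflect the multi-thread (Sij1/Sij2 between Ii and Ij) and multi-flow (Ik reachable from both Ii and Ij) character of the map, and the fact that the map as a whole is a multi-model.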
Figure 3: The Map Meta-Model
[The figure is an E/R diagram: a Map is composed of Sections; each Section is an objectified
relationship between a source Intention, a target Intention and a Strategy, and is supported by a
Guideline, which may be atomic or compound; Start and Stop are distinguished intentions. An
example map shows intentions Start, Ii, Ij, Ik and Stop connected by strategies Si, Sij1, Sij2, Sik, Sjk,
Sk and Se. Legend: entity type, relationship type, objectified relationship type.]
3.2.2. The EKD-CMM Map
We assume enterprise modelling and organisational transformation processes to be intention-oriented.
The EKD-CMM map is shown in Figure 4. As shown in this figure, there are three key intentions in
EKD-CMM, namely Elicit Enterprise Goal Structure, Conceptualise Enterprise Business Process
Model and Conceptualise Information Systems Model. We refer to them as ‘Process Intentions’.
Conceptualise Enterprise Business Process Model (BPM) refers to all activities required to construct a
business process model, whereas Elicit Enterprise Goal Structure refers to all those activities that are
needed to identify goals and to relate them to one another through AND, OR (exclusive OR) and
AND/OR (inclusive OR) relationships. Finally, Conceptualise Information System Model refers to all
activities required to construct the supporting software systems.
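As a hedged illustration of what Elicit Enterprise Goal Structure produces, a goal structure with AND/OR refinement links might be sketched as follows (the class names and example goals are hypothetical, not taken from EKD-CMM):

```python
from dataclasses import dataclass, field
from typing import List, Literal, Optional

@dataclass
class Goal:
    statement: str
    # How the subgoals jointly refine this goal: AND (all), OR (exactly one),
    # AND/OR (any non-empty subset); None for a leaf goal.
    refinement: Optional[Literal["AND", "OR", "AND/OR"]] = None
    subgoals: List["Goal"] = field(default_factory=list)

def leaves(g: Goal) -> List[str]:
    """Operational (leaf) goals reached by walking the refinement structure."""
    if not g.subgoals:
        return [g.statement]
    return [s for sub in g.subgoals for s in leaves(sub)]

# Hypothetical goal structure
root = Goal("Improve customer service", "AND", [
    Goal("Reduce response time"),
    Goal("Offer self-service", "OR", [
        Goal("Web portal"), Goal("Phone menu")]),
])
```

Walking such a structure down to its leaves yields the operational goals that motivate the business processes modelled at the second level.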
The EKD-CMM map contains a finite number of paths, each of which is an EKD-CMM Process Model.
Therefore the EKD-CMM map is a multi-model. None of the finite set of models included in the map
is recommended ‘a priori’. Instead, the approach suggests a dynamic construction of the actual path by
navigating in the map. In this sense, the approach is sensitive to the specific situations as they arise in
the process. The multiple purposes, listed in sub-section 3.1, for which EKD-CMM can be applied are
all included in the EKD-CMM map. The map also provides guidelines aiming to help EKD-CMM
users construct their path dynamically. These guidelines help users to choose among alternative
sections between a source process intention and a target process
intention (strategy selection guidelines) or to choose between possible target intentions when moving
from a source intention (intention selection guidelines).
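A minimal sketch of this guided navigation is given below; the chooser functions stand in for the intention-selection and strategy-selection guidelines, and the section labels are abbreviated, illustrative stand-ins rather than the full EKD-CMM map:

```python
# Each section is (source intention, target intention, strategy).
def navigate(sections, choose_intention, choose_strategy):
    """Build one path dynamically, selecting the next intention and then
    the strategy to reach it at every step."""
    current, path = "Start", []
    while current != "Stop":
        candidates = [s for s in sections if s[0] == current]
        targets = sorted({t for _, t, _ in candidates})
        target = choose_intention(current, targets)          # intention selection
        options = [s for s in candidates if s[1] == target]
        section = choose_strategy(current, target, options)  # strategy selection
        path.append(section)
        current = target
    return path

# Abbreviated, illustrative fragment of a map
sections = [
    ("Start", "Elicit goal structure", "participative modelling"),
    ("Elicit goal structure", "Conceptualise process model", "goal deployment"),
    ("Conceptualise process model", "Stop", "completeness"),
]
path = navigate(sections,
                choose_intention=lambda cur, ts: ts[0],
                choose_strategy=lambda cur, t, opts: opts[0])
```

Swapping in different chooser functions yields different paths, which is precisely the sense in which the map is a multi-model navigated dynamically rather than a single prescribed process.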
The experience gained during our previous work has shown that the path to be followed in the EKD-CMM
map during a particular enterprise modelling project is situation-dependent. For instance, the selection
of the bottom-up (6) path for one of the two end-users in the ELEKTRA project was influenced by the
uncertainty regarding both the current Electricity Distribution Business Unit situation and its possible
re-organisation alternatives. Application of the specific strategies forming this path was also affected
by a number of situational factors, including: (i) organisational culture (organisational actors that were
not used to working in groups in a participative way felt awkward in such a situation and found it
difficult to contribute as intended); (ii) ability to commit resources (the quality of the enterprise
models largely depended on the participation of the ‘right’ people, both in terms of business experts and
method experts); (iii) social skills and consensus attitudes of participating actors (conflicts between
individuals and groups within the project increased the complexity of the situation); (iv) use of
software tools to facilitate the process execution (the use of group support technologies in participative
sessions increased both productivity and the quality of results obtained); and (v) familiarity with
applied strategies and supporting technologies (understanding, among project participants, of the
capabilities and limitations of the strategies and tools applied was vital in order to make the best use of
them and to produce useful results).
In contrast, for the second application of the ELEKTRA project, a different path of the map, called
top-down, was used. The map sections composing this path mainly use the participative modelling
strategy. For this end-user, the future enterprise goal structure was first elicited and then the future
enterprise process models were conceptualised.
Figure 4: The EKD-CMM map
[The figure shows the intentions Start, Elicit enterprise goal structure, Conceptualise enterprise
process model, Conceptualise information system model and Stop, connected by sections whose
strategies include the participative modelling, analyst driven, evaluation, process clustering, goal
deployment, completeness, ‘inversion of the logics’, IS design, reverse engineering and ICT driven
strategies.]
All guidelines corresponding to the sections between the process intentions Elicit Enterprise Goal
Structure and Conceptualise Enterprise Business Process Model have been developed in (Nurcan and
Rolland, 2003) and (Barrios, 2001). Our current work consists in identifying and developing the
guidelines associated with the map sections having the process intention Conceptualise Information
System Model as source or as target.
3.3. The path of the EKD-CMM map for forward engineering
(6) So called because this path suggests first conceptualising the current enterprise process model,
then eliciting the current enterprise goal structure, and finally modelling alternative change scenarios.
As stated in (Eriksson and Penker, 2000), a business model can act as the basis for modelling and
designing the supporting software systems in an enterprise. Typically, business modelling and
software modelling use different languages and concepts, making the integration of the two models
difficult. The set of EKD-CMM Product Models aims to ease this integration by providing
methodological tools for using a business model (enterprise goal model and enterprise process models)
to define the supporting information systems’ architecture. Nevertheless, some parts of the business
processes that are performed manually cannot become part of the IS models.
Let us suppose that the future business processes have been modelled from different perspectives (see
(Nurcan and Rolland, 2003) and (Nurcan et al., 2002) for details), i.e. by modelling the actors that are
responsible for their execution and the set of activities that are under the responsibility of those actors,
as well as the resources involved in the execution of those activities. The resulting business process
models are instances of actor/role models, role/activity models and the business object model, with
their relationships as depicted in Figure 5.
Figure 5: The Integrated Business Process Model
[The figure is a class diagram relating the Business Process Model to its Actor/Role, Role/Activity
and Object sub-models and to the Business Goals Model: actors are responsible for roles, roles
accomplish activities, activities use/produce business objects, events trigger business processes, and
business processes are motivated by business goals.]
Figure 6: The methodological framework and the relationships between the three layers
[The figure shows how business rules regulate business processes, events trigger them, activities
modify the state of business objects, and business objects are implemented by ICT (hardware and
software) supporting the information systems.]
Figure 7: The Information System Model
[The figure shows an Information System Architecture composed of information systems and their
applications; information requirements and management indicators correspond to business objects and
are satisfied by the information systems, which are associated with strategic, development and
purchase plans.]
The methodological framework can then be used to define the most appropriate information system
architecture to support this business model. This is possible because the framework makes it possible
to establish a detailed view of the relationships between the execution of the new processes and the
future information systems. The objective of the business model is twofold: first, to help organisational
members to understand what they want to be as a service organisation, corresponding to the
identified enterprise goals, and consequently to (re)define business processes; second, to design the
information systems architecture that best fits their future needs.
The business object model constitutes the main link between the business processes and the
information systems that support them. It represents all business elements involved in business
processes execution as shown in Figures 6 and 7.
The Information System model should contain not only the set of information systems (IS) but also
the definition of the local and shared databases, as well as the information requirements and
management indicators that should be satisfied by the different applications.
Figure 7 shows the main concepts included as part of the Information Systems model. The object
model is a refinement of the business object model which is a sub-model of the second level. It must
be refined and expressed according to the adopted software engineering techniques.
The way this model has been structured ensures that business processes are at the origin of the
business objects as well as of the definitions of information requirements and management
performance indicators. Consequently, these will be taken into account for the design and distribution
of the software components.
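The traceability this structure ensures can be illustrated with a small sketch, assuming a simplified reading in which requirements and processes are linked through shared business objects (all names below are ours, not part of EKD-CMM):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InformationRequirement:
    text: str
    objects: List[str]                  # business objects the requirement refers to

@dataclass
class BusinessProcess:
    name: str
    objects: List[str] = field(default_factory=list)  # objects it uses/produces

def trace(req: InformationRequirement, processes: List[BusinessProcess]):
    """Business processes at the origin of a requirement, found through the
    business objects they share."""
    return [p.name for p in processes
            if set(req.objects) & set(p.objects)]

# Hypothetical example
billing = BusinessProcess("Billing", ["Invoice", "Customer"])
req = InformationRequirement("Monthly invoice totals per customer", ["Invoice"])
```

Tracing every requirement and indicator back to a process in this way is what allows the design and distribution of software components to stay aligned with the business processes.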
4. Conclusion
This paper reports on the use of an intentional framework for modelling enterprise knowledge using
business models and IS models. A major advantage of the proposed approach is the systematic way of
dealing with enterprise modelling and organisational transformation in terms of knowledge modelling
used with a process guidance framework.
The experience gained during our previous work has substantiated the view that the path of the
EKD-CMM map to be followed in a particular enterprise modelling project is very much dependent on the
enactment context of the project and on a number of situational factors, including the degree of formal
hierarchy (few vs. many formal levels), the decision structure (authoritative vs. management by objectives),
the company culture (collectivistic vs. individualistic), the degree of power distance (short vs. long), the type of
market (deregulated vs. regulated), etc. The implication of these empirical observations is that
enterprise modelling processes in an evolving environment cannot be fully prescribed. Even when one
follows a certain strategy, the situational factors dominating the project may cause a number of
adaptations to this strategy. This fact strengthens the position advocated by the EKD-CMM map that, in
order to support the execution of enterprise modelling processes in an evolving environment, flexible
guidelines are more relevant than rigid rules. Thus, the EKD-CMM framework provides a systematic,
nevertheless flexible, way to organise and to guide the enterprise knowledge development processes.
We also observed that the ability of the company to commit the right resources, as well as the
familiarity of the people involved with the EKD-CMM formalism and the supporting technologies,
have a major impact on the success or failure of an enterprise modelling project. Clearly, the
EKD-CMM experts need domain knowledge to fully understand the organisation. Rather than trying to
gain huge amounts of knowledge, a better solution seems to be to involve one or several employees of the
company in the project. These employees will provide organisational knowledge or will know where it
may be found. Simultaneously, they will become an important resource by gaining knowledge of
EKD-CMM, which will be useful if the organisation wishes to continue working with enterprise modelling and
analysis.
Our framework contributes to defining accurate and precise decision-making processes inside modern
organisations, which are highly dependent on information and communication technologies. It also
reinforces the ability of the companies that apply it to adopt a policy of knowledge management.
References
Anton, A. (1996) Goal-Based Requirements Analysis. ICRE '96, IEEE, Colorado Springs, Colorado, USA, 136-144.
Armenise, P., Bandinelli, S., Ghezzi, C. and Morzenti, A. (1993). A survey and assessment of software process
representation formalisms. International Journal of Software Engineering and Knowledge Engineering, 3(3).
Barrett, J.L. (1994) Process visualization, Getting the vision right is key. Information Systems Management,
Spring, 14-23.
Barrios, J. (2001) Une méthode pour la définition de l’impact organisationnel du changement. Thèse de Doctorat de
l’Université de Paris 1.
Barrios, J. and Nurcan, S. (2002) MeDIC: A Method Engineering Proposal for the Analysis and Representation
of the Organisational Impact of Change. The 2002 International Conference on Software Engineering
Research and Practice (SERP'02), June 24-27, Las Vegas, USA.
Brinkkemper, S. (1996) Method Engineering: Engineering of Information Systems, Methods and Tools.
Information and Software Technology, 38, 275-280.
Bubenko, J. (1994) Enterprise Modelling. Ingénierie des Systèmes d'Information, Vol. 2, N° 6.
Bubenko J.A., jr., Persson, A. and Stirna, J. (2001) http://www.dsv.su.se/~js/ekd_user_guide.html
Checkland, P. and Scholes, J. (1990) Soft Systems Methodology in Action, John Wiley and Sons.
Curtis, B., Kellner, M. and Over, J. (1992) Process Modeling. Communications of ACM, 35, 9, 75-90.
Decker, S., Daniel, M., Erdmann, M. and Studer, R. (1997) An enterprise reference scheme for integrating Model
based knowledge engineering and enterprise modeling. 10th European Workshop on Knowledge Acquisition,
Modeling and Management, EKAW’97, Lecture Notes in Artificial Intelligence, Springer-Verlag,
Heidelberg.
DeMarco, T. (1979) Structured Analysis and System Specification, New Jersey: Prentice-Hall.
Dobson, J.S., Blyth, A.J.C., Chudge, J. and Strens, R. (1994) The ORDIT Approach to Organisational
Requirements, in 'Requirements Engineering: Social and Technical Issues', Academic Press, London, 87-106.
Dowson, M. (1987) Iteration in the Software Process. The 9th International Conference on Software
Engineering.
Dowson, M. and Fernstrom, C. (1994) Towards requirements for enactment mechanisms. European Workshop
on Software Process Technology.
Dumas, P. and Charbonnel, G. (1990) La méthode OSSAD - Pour maîtriser les technologies de l'information - Tome 1 : Principes. Les Editions d'Organisation, Paris.
Ellis, C.A. and Wainer, J. (1994) Goal-based models of collaboration. Collaborative Computing, 1:1.
Emmerich, W., Junkermann, G. and Schafer, W. (1991) MERLIN: Knowledge-based process modelling, First
European Workshop on Software Process Modelling, Milan, Italy.
Eriksson, H.-E. and Penker, M. (2000), Business modeling with UML- Business patterns at work, J. Wiley.
Espejo, R. and Harnden R. (eds) (1989). The Viable System Model: Interpretations and Applications of Stafford
Beer’s VSM, Chichester: Wiley.
Finkelstein, A., Kramer, J. and Nuseibeh, B. (eds) (1994). Software Process Modeling and Technology, John
Wiley Pub.
Finkelstein, A., Kramer, J. and Goedicke, M. (1990) ViewPoint Oriented Software Development. Conférence Le
Génie Logiciel et ses Applications, Toulouse, 337-351.
Flood, R.L. and Jackson, M.C. (1991) Creative Problem Solving: Total System Intervention, John Wiley and
Sons Ltd.
Gotel, O. and Finkelstein, A. (1994) An Analysis of the Requirements Traceability Problem. First IEEE
International Conference on Requirements Engineering (ICRE'94), Colorado Springs, USA.
Grover, V., Fiedler, K.D. and Teng, J.T.C. (1994) Exploring the success of information technology enabled
business process reengineering. IEEE Transactions on Engineering Management, 41:3 (August), 276-283.
Guha, S., Kettinger, W.J. and Teng, J.T.C (1993) Business Process Reengineering, Building a comprehensive
methodology, Information System Management, Summer, 13-22.
Hammer M. and Champy J. (1993) Reengineering the Corporation: a Manifesto for Business Revolution,
Harper Collins Publishers, Inc., New York.
Harel, D. (1990) STATEMATE: A working environment for the development of complex reactive systems.
IEEE Transactions on Software Engineering, 16:4, (April), 403-414.
Humphrey, W.S. (1989) Managing the Software Process, Addison-Wesley.
IBM Corporation (1984) Business Systems Planning, IBM GE 20-0527-4, 4th edition.
Jacherri, L., Larseon, J.O. and Conradi, R. (1992) Software process modeling and evolution in EPOS. 4th
International Conference on Software Engineering and Knowledge Engineering (SEKE'92), Capri, Italy.
Jacobson I., Ericsson M. and Jacobson A. (1994) The object advantage - Business Process Reengineering with
object technology, Addison-Wesley.
Jacobson, I., Christerson, M., Jonsson, P. and Overgaard, G. (1993) Object Oriented Software Engineering – A
Use Case Driven Approach, Addison-Wesley.
Jarke, M., Mylopoulos, J., Schmidt, J.W. and Vassiliou, Y. (1992) DAIDA - An environment for evolving
information systems. ACM Transactions on Information Systems, 10, 1.
Jarzabek, S. and Ling, T.W. (1996) Model-based support for business reengineering, Information and Software
Technology, N° 38, 355-374.
Kardasis, P. and Loucopoulos, P. (1998) Aligning Legacy Information Systems to Business Processes. 10th Int.
Conf. on Advanced Information Systems Engineering (CAiSE'98), B. Pernici (ed.), Springer-Verlag, Pisa,
Italy, 25-39.
Kavakli, V. and Loucopoulos, P. (1998) Goal-Driven Business Process Analysis: Application in Electricity
Deregulation. 10th Int. Conf. on Advanced Information Systems Engineering, B. Pernici (ed.), Springer-Verlag,
Pisa, Italy, 305-324.
Lee, J. (1991) Extending the Potts and Bruns Model for Recording Design Rationale, IEEE 13th International
Conference on Software Engineering, Austin, Texas, May.
Lehman, M.M. (1987) Process Models, Process Programs, Programming Support, 9th International Conference
on Software Engineering.
Lonchamp, J. (1993) A structured conceptual and terminological framework for software process engineering.
International Conference on Software Process.
Loucopoulos, P., and Kavakli, V. (1995) Enterprise modeling and teleological approach to requirements
engineering. International Journal of Intelligent and Cooperative Information Systems, Vol. 4, N° 1, 44-79.
Loucopoulos, P., Kavakli, V., Prekas, N., Dimitromanolaki, I. Yilmazturk, N., Rolland, C., Grosz, G., Nurcan,
S., Beis, D., and Vgontzas, G. (1998) The ELEKTRA project: Enterprise Knowledge Modeling for change in
the distribution unit of Public Power Corporation. 2nd IMACS International Conference on Circuits,
Systems and Computers (IMACS-CSC’98), Athens, Greece, 352-357.
Marca, D.A. and McGowan, C.L. (1993) IDEF0/SADT: Business Process and Enterprise Modeling. San Diego:
Eclectic Solutions, Inc..
McCarthy, D.R. and Sarin, S.K. (1993) Workflow and transactions in InConcert. Bulletin of Technical
Committee on Data Engineering, 16:2, IEEE, Special Issue on Workflow and Extended Transactions Systems.
Medina-Mora, R., Winograd, T., Flores, R. and Flores, F. (1992) The Action Workflow approach to workflow
management technology. CSCW’92, ACM, Toronto, Canada.
Nadin, M. and Novak, M. (1987) MIND: A design machine, conceptual framework. Intelligent CAD Systems I,
Springer Verlag.
Nurcan, S., Grosz, G. and Souveyet, C. (1998) Describing business processes with a guided use case approach.
10th International Conference on Advanced Information Systems Engineering (CAiSE'98), B. Pernici (ed.),
Springer-Verlag, Pisa, Italy, 339-361.
Nurcan, S. and Rolland. C. (1999) Using EKD-CMM electronic guide book for managing change in
organisations. 9th European-Japanese Conference on Information Modelling and Knowledge Bases,
ECIS'99, Iwate, Japan, May 24-28, 105-123.
Nurcan, S., Barrios, J., Grosz, G. and Rolland, C. (1999) Change process modelling using the EKD - Change
Management Method. 7th European Conference on Information Systems, ECIS'99, Copenhagen, Denmark,
June 23-25, 513-529.
Nurcan, S., Barrios, J. and Rolland, C. (2002) Une méthode pour la définition de l'impact organisationnel du
changement. Numéro Spécial de la Revue Ingénierie des Systèmes d'Information "Connaissances Métier
dans l'Ingénierie des SI, 7:4, Hermès.
Nurcan, C. and Rolland, R. (2003) A multi-method for defining the organisational change. Journal of
Information and Software Technology, Elsevier. 45:2, 61-82.
Odell, J. (1996) A
primer to Method Engineering. INFOSYS. The Electronic Newsletter for Information
73
Selmin Nurcan and Judith Barrios
Systems. 3:19, Massey University, New Zealand.
Österle, H., Brenner, W., Hilbers, K. (1993) Total information systems management, J. Wiley and Sons, 1993.
Pohl, K. (1996) Process-Centered Requirements Engineering, Research Studies Press Ltd., Taunton, Somerset,
England.
Potts, C. (1989) A Generic Model for Representing Design Methods. 11th International Conference on Software
Engineering.
Ramesh B. and Dhar, V. (1992) Supporting Systems Development by Capturing Deliberations During
Requirements Engineering, IEEE Transactions on Software Engineering, 18:6.
Rolland, C. and Grosz, G. (1994) A general framework for describing the requirements engineering process.
IEEE Conference on Systems Man and Cybernetics, CSMC94, San Antonio, Texas.
Rolland, C., Grosz, G. and Nurcan, S. (1996) Guiding the EKD process. ELEKTRA project, Research Report,
December.
Rolland, C. (1996) Understanding and guiding requirements engineering processes. Invited talk. IFIP World
Congress, Camberra, Australia.
Rolland, C., Nurcan, S. and Grosz, G. (1997a) Guiding the participative design process. Association for
Information Systems, Americas Conference on Information Systems, Indianapolis, Indiana, USA, 15-17
August, 922-924.
Rolland, C., Loucopoulos, P., Grosz and G., Nurcan, S. (1998b) A framework for generic patterns dedicated to
the management of change in the electricity supply industry. 9th International DEXA Conference and
Workshop on Database and Expert Systems Applications (August 24-28), 907-911.
Rolland, C., Grosz, G., Nurcan, S., Yue, W. and Gnaho, C. (1998c) An electronic handbook for accessing
domain specific generic patterns, IFIP WG 8.1 Working Conference: Information Systems in the WWW
environment, July 15-17, Beijing, Chine, 89-111.
Rolland, C., Nurcan, S. and Grosz, G. (1999) Enterprise knowledge development: the process view. Information
and Management, 36:3, September.
Rolland, C., Loucopoulos, P., Kavakli, V. and Nurcan S. (1999b) Intention based modelling of organisational
change: an experience report. Fourth CAISE/IFIP 8.1 International Workshop on Evaluation of Modeling
Methods in Systems Analysis and Design (EMMSAD'99), Heidelberg, Germany, June 14-15.
Rolland, C., Prakash, N. and Benjamen A. (1999c) A Multi-Model View of Process Modelling, Requirements
Engineering Journal, 4:4, 169-187.
Rolland, C., Nurcan, S. and Grosz, G. (2000) A decision making pattern for guiding the enterprise knowledge
development process. Information and Software Technology, 42:5.
Rose, T., Jarke, M., Gocek, M., Maltzahn, C. and Nissen, H.W. (1991) A decision-based configuration process
environment. IEEE Software Engineering Journal, 6:3.
Ross, D.T. (1985) Douglas Ross talks about structured analysis. IEEE Computer (July), 80-88.
Rumbaugh, J., Blaha, M., Premerlani, W., Eddy, F. and Lorensen, W. (1991) Object Oriented Modeling and
Design, Prentice-Hall.
Seligmann, P.S., Wijers, G.M. and Sol, H.G. (1989) Analysing the structure of I. S. methodologies, an
alternative approach. First Conference on Information Systems, Amersfoort, The Netherlands.
Van Lamsweerde, A., Darimont, R. and Massonet, P. (1995) Goal-Directed Elaboration of Requirements for a
Meeting Scheduler: Problems and Lessons Learnt, RE'95, IEEE Computer Society Press, 194-203.
Ward, J. and Griffiths, P. (1996) Strategic planning for information systems, J. Wiley and Sons.
Wynekoop, J.D. and Russo, N.L. (1993) System Development Methodologies: unanswered questions and the
research-practice gap. 14th ICIS Conference (eds. J.I. DeGross, R.P. Bostrom, D. Robey), Orlando, USA,
181-190.
Yu, E.S.K. and Mylopoulos, J. (1996) Using Goals, Rules and Methods to Support Reasoning in Business
Process Reengineering, Intelligent Systems in Accounting, Finance and Management, Vol. 5, 1-13.
74
Requirement-Centric Method for Application Development
Smita Ghaisas, Ulka Shrotri and R. Venkatesh
Tata Consultancy Services
54 B, Hadapsar Industrial Estate
Pune 411 013, INDIA
Phone- 91-20-6871058
Email: {smitasg, ulkas, rvenky}@pune.tcs.co.in
Abstract
Poor requirement specification is a source of many defects in software application development. To address
this problem, we propose a requirement-driven method, the MasterCraft Agile Process (MAP). The proposed
method clearly separates the problem domain from the solution domain and identifies four distinct contexts for
capturing, analyzing, modelling and prototyping requirements. To our knowledge, this is the first time that
types of requirements have been explored as the basic distinguishing criteria for defining viewpoints. In this
paper, we focus on the analysis of functional requirement models to detect inconsistencies. We show how
model checking, simulation and prototyping of functional requirements can help consolidate requirements at an
early stage of software development. Once validated, the requirement models can be used to synthesise an
implementation using standard design patterns. Some of the proposed techniques are implemented in a case
tool, MasterCraft. The separation of functional and technical concerns prescribed in our approach and
supported in our tool-set empowers the application developer to adapt efficiently to changing requirements
and thus renders agility to application development.
1. Introduction
The primary test of the success of software is the extent to which it meets its intended purpose.
Requirement capture and analysis are processes that help in discovering that purpose by identifying
stakeholders and their expectations, and capturing these expectations in a form that is amenable to
analysis and implementation. It is a regular observation that software projects fail to meet
expectations [1] due to problems in the articulation of requirements, poor quality of analysis and,
quite often, a lack of sufficient focus on the business perspective. There is clearly a need to explore
approaches that empower application developers to manage requirements better.
To address this problem, we propose a requirement-centric method – MasterCraft Agile Process
(MAP) – as supported in MasterCraft – a Tata Consultancy Services (TCS) tool for model-driven
application development. We separate the problem domain and solution domain clearly and classify
requirements into four distinct contexts. We refer to these contexts as viewpoints. Each viewpoint
addresses requirements relevant to it and provides a solution that meets those requirements. Transition
from requirements to an implementation is automated through the use of tools that incorporate
well-tested design patterns, guidelines and strategies. A change in a requirement can be confined to the
viewpoint that addresses that type of requirement.
2. Related Approaches and Methods
The notion of viewpoints has been explored before. The Rational Unified Process (RUP) [2] prescribes a Use
Case-driven approach and defines views for application development. The Reference Model of Open
Distributed Processing (RM-ODP) [3] also uses the notion of viewpoints for this purpose. In the
MasterCraft Agile Process (MAP), we explicitly use the types of requirements as the basic
distinguishing criteria for defining viewpoints.
The KAOS [4] approach presents a goal-oriented requirement elaboration method. The method proceeds
through the identification, structuring, refinement and formalization of goals. KAOS focuses on the
formal refinement of high-level goals into system constraints meant as functional requirements.
Enterprise Knowledge Development (EKD) [5] is a refinement of Enterprise Modelling that
accommodates change management. The focus here is on alternate scenarios for change and the
selection of the most appropriate one.
These are goal-driven approaches, and they focus largely on the enterprise modelling part of
application development.
Our approach focuses on clearly separating and classifying requirements based on their types (i.e.,
viewpoints) and on structuring the solutions around these viewpoints. Tool-assisted transitioning of
requirement models to an implemented solution on a deployment platform is an important focus area
of MAP. A complete solution is synthesized from the individual solutions by applying design
strategies, patterns and guidelines implemented in our tool-set. MAP enables developers to manage
change locally within each viewpoint by confining the impact of changes to relevant viewpoints. The
agility in our approach comes from a tool-assisted transformation of requirement models into an
implementation. We emphasize consistency checks between the same information captured from
different sets of users and tool-assisted analysis. MAP ensures quality throughout the development
cycle by clearly outlining V&V in each viewpoint and by testing artefacts produced by each
viewpoint independently.
3. The requirement-centric Viewpoint model – an overview
Fig. 1 shows our viewpoint model schematically. The problem domain addresses business
requirements while the solution domain addresses the technical requirements. The requirement
models in the problem domain capture business objectives, rules, policies and processes that
implement the objectives, as well as the enterprise architecture that must support them. These are
essentially computation-independent in nature. These models contain sufficient detail and precision to
enable tool-assisted analysis and simulation. The Functional Requirements Viewpoint (FRV) and the
Functional Architecture Viewpoint (FAV) comprise the problem domain. The models in the solution
domain address non-functional requirements and leverage state-of-the-art technology to define an
implementation platform. The Technical Architecture Viewpoint (TAV) and Deployment
Architecture Viewpoint (DAV) comprise the solution domain. The models of the problem domain are
automatically transformed into an implementation on the deployment platform by applying design
patterns, strategies and guidelines. This approach makes it possible to confine changes in business
requirements to problem domain models without having to deal with their platform-specific impact. It
also lets a technical developer focus on exploring technical architectural solutions without having to
worry about the business functionality. This separation of functional and technical concerns
empowers the application developer to adapt to changing requirements efficiently and thus renders
agility to application development.
Each viewpoint addresses requirements relevant to that viewpoint. Development proceeds along each
viewpoint using the standard requirements capture, analysis, specification/coding and testing
cycle. The key point to note is that the artefacts produced by each viewpoint are tested independently
of other viewpoints.
3.1 Functional Requirements Viewpoint (FRV)
Functional Requirements Viewpoint addresses application functionality requirements from the
business user’s point of view. The business users can be of two types: managers / business process
owners who can give inputs on business rules, policies and processes and hands-on users who can
give inputs on tasks to be performed using the application, in order to implement the processes. The
business processes captured from managers/ process owners are elucidated by identifying process
steps. These may be manual or automated. Use Cases captured from hands-on users should be
consistent with automatable process steps. Also the business rules captured from managers/ process
owners should be consistent with validations specified by hands-on users. Detecting inconsistencies in
requirements captured thus from different users can help in consolidating the specification. We use
model checkers to automatically detect inconsistencies. Important FRV artefacts include a glossary of
business terms, objectives, rules, processes, policies, use cases and validations corresponding to use
cases. The artefacts also include business entity models. The prototypes are functional prototypes that
would give the users a feel for the realization of desired application functionality.
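The consistency check between automatable process steps and Use Cases described above can be sketched as a simple set comparison. The function and the example data below are illustrative assumptions, not MasterCraft's actual interface:

```python
# Sketch (not MasterCraft itself): cross-check the automatable process steps
# captured from managers against the Use Cases captured from hands-on users.

def check_step_usecase_consistency(automatable_steps, use_cases):
    """Report process steps lacking a Use Case, and vice versa."""
    steps = set(automatable_steps)
    cases = set(use_cases)
    return {
        "steps_without_use_case": sorted(steps - cases),
        "use_cases_without_step": sorted(cases - steps),
    }

# Hypothetical library-system inputs: 'Put Claim' was named by a manager
# but no hands-on user described a corresponding Use Case.
report = check_step_usecase_consistency(
    automatable_steps=["Issue Book", "Return Book", "Put Claim"],
    use_cases=["Issue Book", "Return Book"],
)
print(report)
```

A non-empty entry on either side of the report is exactly the kind of inconsistency that prompts another interview session with the relevant user group.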
3.2 Functional Architecture Viewpoint (FAV)
Functional Architecture Viewpoint addresses requirements pertinent to the enterprise architecture.
Identifying and defining components to be developed, making decisions about the reuse of existing
components, purchase of commercial components and determining the inter-component interactions
are important activities in this viewpoint.
3.3 Technical Architecture Viewpoint (TAV)
Technical Architecture Viewpoint addresses non-functional requirements necessary to implement the
business requirements defined in the problem domain viewpoints (FRV and FAV). Precise
quantification of non-functional requirements such as performance and usability, identification of
technical components, mapping of the enterprise architecture components (identified in FAV) onto the
technical architecture, implementing the solution and testing it are important activities in this
viewpoint. TAV artefacts include multiple prototypes to validate the technical architecture and
platform choices compliant with functional requirements.
MasterCraft uses platform-independent models (PIMs) of FRV as inputs to generate platform-specific
solutions. We thus combine the benefits of the OMG’s Model-Driven Architecture (MDA) and the
Agile Development (AD) approach.
The MasterCraft tool-set is parameterized by design patterns and well-tested strategies and guidelines
that are derived out of a vast consulting experience within Tata Consultancy Services.
Separation and classification of requirements has an important consequence. Changes in requirements
can be confined to respective viewpoints without impacting other viewpoints.
• If we identify a requirement such as ‘Add copies of a Title’ in FRV that maintains a One-to-Many
relationship (One Title has many Copies), TAV has a Graphical User Interface (GUI)
pattern for a Master-Detail type of screen and a guideline to associate the FRV requirement
with this pattern. A GUI corresponding to the FRV requirement is automatically generated.
Any changes in requirements pertinent to FRV can thus be specified in FRV without having
to change the corresponding GUI.
• Requirements relevant to TAV, such as performance improvements or look-and-feel
customizations, are confined to and handled in TAV alone without impacting other
viewpoints.
• Strategies for database management: these include modelling for the table design (modelling
tables, columns and keys to the tables along with their mapping to the object model),
implementation of the class hierarchies, and class-to-table mapping. Currently, our tool-set
supports one-to-one mapping between classes and tables and implements class hierarchies
through replication. It provides database management functions and an interface to the RDBMS.
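The class-to-table strategy described above (one table per class, hierarchies implemented through replication) can be sketched as follows. The class model and column types are illustrative assumptions, not the tool's actual metadata format:

```python
# Sketch of one-to-one class-to-table mapping with class hierarchies
# implemented through replication: each subclass table repeats the
# inherited columns. Class and attribute names are hypothetical.

CLASSES = {
    "Item":    {"parent": None,   "attrs": ["id", "title"]},
    "Book":    {"parent": "Item", "attrs": ["isbn"]},
    "Journal": {"parent": "Item", "attrs": ["issue_no"]},
}

def columns_for(cls):
    """Collect inherited plus own attributes (the replication strategy)."""
    cols, cur = [], cls
    while cur is not None:
        cols = CLASSES[cur]["attrs"] + cols
        cur = CLASSES[cur]["parent"]
    return cols

def ddl(cls):
    """Emit a table definition for one class (types simplified to VARCHAR)."""
    cols = ", ".join(f"{c} VARCHAR" for c in columns_for(cls))
    return f"CREATE TABLE {cls} ({cols});"

print(ddl("Book"))  # CREATE TABLE Book (id VARCHAR, title VARCHAR, isbn VARCHAR);
```

Replication trades storage for query simplicity: every concrete class can be read from a single table without joins against its superclasses.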
3.4 Deployment Architecture Viewpoint (DAV)
Deployment Architecture Viewpoint addresses requirements relevant to the post-delivery phase to
ensure smooth running of a deployed application. Identifying physical architecture, making a roll-out
and release plan, installation builds and scripts, training program, user documentation, helpdesk and
support mechanism are important activities in this viewpoint.
In this paper, we present details of our work on one of the viewpoints viz., the Functional
Requirements Viewpoint (FRV).
4. Functional Requirements Viewpoint (FRV) – Capturing, Modelling and Analyzing Requirements
FRV addresses application functionality requirements from the business user’s point of view.
Business processes achieve business objectives (through a sequence of process steps) while adhering
to a framework of business rules and policies. The diagram in Fig. 2 depicts their associations.
Fig. 2: FRV: Requirement Capture and Analysis
In a typical application development exercise, business processes are captured using text and business
entities are captured either using text or UML [6] class models. Formal specification of rules often
gets embedded as validations in code and is seldom captured explicitly. This approach causes the
domain knowledge to get locked in platform-specific code and permits re-use only at code level. It
prevents automated analysis of requirements, since information about business rules is not available in
the form of requirement models. A precise high-level and formal specification of rules would be
helpful in automated analysis and simulation of the requirement models. Our tool supports a visual
notation to capture these.
In this section, we present our work on requirements capture, modeling, analysis and simulation of
functional requirements with the help of a case study example.
We first outline the actors, activities, artefacts and Verification and Validation (V&V) in this
viewpoint. We demonstrate that requirement models can be used as inputs for analysis and detection
of inconsistencies. They can be prototyped using tool support. Such an automated analysis and
prototyping of functional requirements helps in an efficient detection of rule violations.
4.1 Actors
Requirement analyst, managers, business process owners, domain experts, hands-on users.
4.2 Activities
• Interface effectively with user groups for acquiring functional requirements.
o Functional requirements can be categorized as business requirements (to be captured from
managers, business process owners, or sponsors) and user requirements (to be collected
from hands-on users). Business requirements address the business objectives at a high
level, while user requirements represent the tasks to be performed by hands-on users in
order to accomplish those objectives.
o Identify and prepare a comprehensive list of business objectives, processes, policies, and
rules through interview sessions with managers.
• Use high-level business processes for grouping related requirements.
• Elucidate the business processes by identifying process steps. Each process step may be
performed manually or may be automated. Associate applicable rules and policies with each
process step.
• Capture user requirements as Use Cases through interview sessions with hands-on users. Use
templates to document detailed Use Cases, including pre- and post-conditions and validations.
Capture critical scenarios in detail.
• Illustrate the static structure of an application using a business entity model. The business entity
model also includes cardinality constraints.
• Consistency checks in MAP
o Analyzing for consistency between the (same) information captured from different sets of
users.
- Check that each automatable process step corresponds to a Use Case and vice
versa. An automatable process step such as ‘Issue Book’ in a library system may
be conveyed by hands-on users through a Use Case ‘Issue Book’ that includes
‘Check Availability’, ‘Check Claim’ and ‘Assign Book to Member’.
o Analyzing for consistency of invariants
- For each business rule, there must exist corresponding validations within the
processes where those rules apply. Also, for every validation, a business rule
should exist. A rule such as ‘An available book shall be held for the claimant, if
any’ needs to be enforced by a validation such as ‘Confirm no claims’ before
issuing a book to a member. However, if specified informally in a natural
language, it is not amenable to analysis. Our tool-set supports a visual notation to
specify rules formally and also automates the analysis. Inconsistencies thus
detected help in correcting specified rules as well as identifying additional rules.
- Violations of association cardinalities such as ‘A book may be held for at most
one claim’ are detected automatically.
- Non-conformance to a process, such as ‘Return Book’ should result in ‘Hold book
for claimant, if any’, is brought out in the automated analysis.
• Generate rapid functional requirement prototypes to demonstrate core business processes. The
focus of this prototype should be to outline the process flow by capturing all the process steps.
Automated rapid prototyping is helpful in taking the application closer to user expectations
iteratively and efficiently. GUI prototyping is currently supported. Design for a complete
functional prototype is in place and we are currently in the process of implementing it.
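The rule/validation cross-check — every rule enforced by some validation, every validation traced to some rule — can be sketched as two set differences. The traceability mapping below is an illustrative assumption, not the tool's visual notation:

```python
# Sketch: bidirectional consistency check between business rules and the
# validations meant to enforce them. Data is hypothetical library content.

rules_to_validations = {
    "An available book shall be held for the claimant, if any": ["Confirm no claims"],
    "A book shall be issued to only one person": [],  # no enforcing validation yet
}
all_validations = {"Confirm no claims", "Check member quota"}  # second one untraced

# Rules with no corresponding validation anywhere in the processes.
unenforced_rules = [r for r, vs in rules_to_validations.items() if not vs]

# Validations that do not trace back to any stated business rule.
traced = {v for vs in rules_to_validations.values() for v in vs}
untraced_validations = sorted(all_validations - traced)

print(unenforced_rules)
print(untraced_validations)
```

Each finding suggests a concrete fix: an unenforced rule calls for a new validation, while an untraced validation usually reveals a business rule the managers never stated explicitly.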
4.3 Artefacts and V&V
Important FRV artefacts include a glossary of business terms, objectives, rules, processes, policies,
Use Cases and validations* corresponding to Use Cases. The artefacts also include business entity
models that are a result of Use Case analysis. An important V&V step is to ensure consistency
between the same information captured from different sets of users, for example, (i) process steps
(automatable) outlined by managers and/or business process owners and the Use Cases specified by
hands-on users, and (ii) business rules specified by managers and/or business process owners and the
validations described by hands-on users.
4.4 Example
We have chosen a simple library system to demonstrate the results of our requirement-centric
approach to application development.
Fig. 3 represents a business entity diagram for the library system. A library maintains a collection of
books. Members of a library borrow and return books. On return of a book, if there are pending claims
for the title, the book is held for one of the claimants. Table 1 given below captures a simplistic partial
enterprise model for such a system. We consider a simple example that illustrates rules that apply to
the entity Book in the library system and processes that implement those rules.
Table 1: Partial Model for the Library System
Business entities (everything that is a part of the enterprise: persons, things, concepts):
Book, Member, Title, Loan, Claim
Rules (policies, objectives, laws of the land, domain; processes should not violate them):
Rule 1: ‘A book shall be issued to only one person’
Rule 2: ‘An available book shall be held for claimant, if any’
Processes (steps to achieve objectives):
Borrow: Available? Issue; else put claim and issue when available
Return: No claims? Make available; else hold for claimant

* Business Rule: An invariant directive of an organization
Business Policy: A guideline to business while conforming to rules
Business Process: A sequence of steps that implement a business rule or policy
Use Case: Task corresponding to a process step
Validation: A check that must be performed while executing Use Cases so that business rules are not violated
The ‘Borrow’ and the ‘Return’ processes should be checked for their conformance to Rule 1 and Rule
2.
We have used UML class models to capture rules and business entity models, and activity diagrams
to capture business processes. The UML notation is extended using stereotypes [8].
Invariant on the entity Book: a book in a library will be either loaned to a member, held for a
claimant, or available (if there are no claims) [Fig. 4].
Borrow: (1). A library member can borrow a book if it is available; i.e., it is not loaned or held for
another member. (2) If it is not available, a claim can be put against a title. (3) When available and
loaned to the claimant, the claim should automatically get cancelled. (4) A claim cannot be put against
a title that is already available and is not held for any claimant [Fig. 5]. The figure does not have
specifications corresponding to ‘Put Claim’ and ‘Issue’. The reader is referred to [8] for the same.
Return: (1) A library member may return a book issued to her. (2) On return, the book becomes
available to other members, if there are no claims against the title. (3) If there is a claim against it, the
book should be held for the claimant [Fig. 6]. As in Borrow, the figure does not show the
specifications corresponding to ‘Make Available’ and ‘Hold for Claimant’.
After translating the visual specifications into a formal notation, we ran the model checker on the
resulting specification. The specification language TLA [7] and its associated model checker TLC were
used to verify object models with assertions specifying pre- and post-conditions for operations and
invariants.
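The effect of running a model checker such as TLC can be illustrated, far more modestly, with a hand-rolled reachability search over the states of a single book. The operations below — including a deliberately buggy ‘Return’ that ignores pending claims — are didactic assumptions, not the paper's actual TLA specification:

```python
# Toy explicit-state search: enumerate the reachable states of one book
# (status, pending claims) and flag states that violate Rule 2. This is
# an illustration of the idea behind TLC, not TLC itself.

def borrow(status, claims):
    """Issue the book if available; otherwise put a claim against the title."""
    if status == "available":
        return ("loaned", claims)
    return (status, claims + 1)

def ret(status, claims):
    """Buggy 'Return': always makes the book available, ignoring claims."""
    return ("available", claims) if status == "loaned" else (status, claims)

def invariant(status, claims):
    # Rule 2: an available book shall be held for the claimant, if any.
    return not (status == "available" and claims > 0)

seen, frontier, violations = set(), [("available", 0)], []
while frontier:
    state = frontier.pop()
    if state in seen or state[1] > 2:  # bound claims to keep the search finite
        continue
    seen.add(state)
    if not invariant(*state):
        violations.append(state)
    for op in (borrow, ret):
        frontier.append(op(*state))

print(violations)  # every reachable state where Rule 2 is violated
```

The search finds states where the book is available while claims are pending — precisely the class of rule violation the buggy ‘Return’ introduces, and the kind of error trace a model checker reports.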
Several errors were detected in the original specification of the library system. By inspecting the error
trace generated, we were able to locate the source of the error. Several inconsistencies indicating rule
violations could be detected in the original informal specifications. Fig. 7 depicts the process of
automated requirements validation.
Fig. 7: Requirements validation (rules and process steps are translated into a formal specification,
which is automatically model-checked; detected rule violations feed back into error fixes)
The errors detected by the model checker were caught either as instances of invariant violation or
absence of necessary invariants in our original specifications.
For a small object model such as this, these are encouraging results. Having thus analysed the
requirements, a functional prototype complete with a first-cut UI can be generated and used to acquire
an early user feedback.
Actual implementation of the functional requirements can be done using the approach prescribed in
MAP and MasterCraft’s model-based generative support. Using tool-assisted support for formal
specification, analysis and rapid prototyping of requirements, a developer can change requirements
when necessary, specify them formally, analyze them and detect inconsistencies in them. Once
consolidated, their implementation can be done through automated transformation mechanisms.
Multiple prototypes and verification and validation (V&V) mechanisms in each viewpoint are
outlined to incorporate user feedback iteratively. In each viewpoint we clearly define artefacts,
prescribe rapid prototyping and state V&V to be done by users. We have discussed the Functional
Requirement Viewpoint (FRV) in detail here. However, a brief description of requirements addressed
in each viewpoint pertinent to the Library System is given below.
In FAV, the artefacts include the identification of components to be developed, purchased and
outsourced, the assignment of requirements to components and the inter-component relationships. The end users will
evaluate the functional architecture in terms of functionality offered by each component, their logical
coherence, modularity, and their potential for reuse. With the Library System discussed here, we can
identify Library Management as one of the functional architecture components. The services like
‘Borrow’, ‘Return’, ‘Cancel’, and ‘Reserve’ should be assigned to this component. The Library
System may have to interact with an existing Accounts Management component for processes
relevant to budgeting and purchase of new books. The TAV addresses precisely quantified technical
requirements such as performance and usability. The Library System under discussion may pose the
following kinds of requirements in the context of this viewpoint: (1) It should be possible for 5000
members to log on concurrently. (2) The response time should be no more than 4 seconds. The requirement
models of the problem domain can be translated into solutions by using design strategies, and patterns
supported in MasterCraft. TAV artefacts include multiple prototypes to validate the technical
architecture and platform choices compliant with functional requirements. The DAV caters largely to
post-delivery requirements like ensuring availability of an application, roll-out and release plans and
achieving user comfort. For example, addressing a requirement of 24/7 availability and remote access
prompts a developer to examine the need to replicate servers and databases at multiple locations and
to have an archival strategy with minimum down time.
Project planning can be done around baselines in each viewpoint, defined as follows. The FRV
baseline should comprise requirements that correspond to some of the core business processes and
critical scenarios. Correspondingly, the FAV baseline should include business components necessary
for incorporation of the business processes identified in the FRV baseline. The TAV baseline should
comprise technical architecture components necessary for implementation of the FRV and FAV
baselines. The DAV baseline should include partial physical architecture necessary for deployment of
the baselined solution.
5. Case Study
We applied our method to analyze the requirements of an Enterprise Resource Planning (ERP)
application in order to check its scalability. The application is for a chemical company with four
sites in India and over 500 products. Fig. 8 shows a partial business entity diagram for the
system.
Fig. 8: Partial Entity Diagram for the ERP Application
The size of the application can be deduced from the following data:
No. of Use Cases: 400
No. of Components: 50
No. of screens: 500
For want of space, we present details pertaining only to a process in FRV.
Here, we present the Purchase Process as an example. The activity diagram in Fig. 9 captures this process, which comprises the steps described below:
1. Raise indent
2. Check the required approval level
3. Route to approver
4. {If approved} Include for consolidating indents
5. PO activities:
• Invite quotes from vendors
• Compare quotes
• Select quote with lowest price
• Send letter of intent to vendor
• Prepare purchase order
Of the process steps outlined above, the automatable ones were used for consistency checks against
Use Cases captured from hands-on users.
Business users specified some of the rules that apply to this business process, as follows:
1. An item can be issued only to the department for which it is reserved.
2. The same item cannot appear twice on an indent.
3. ISO and non-ISO items cannot be present in the same indent.
4. Terms and conditions in a purchase order should be picked up from the contract (if any exists)
with the vendor.
Absence of several invariants (rules) was detected through our analysis. Some of them are:
1. Only indents of type ‘Firm’ have Purchase Orders (PO) associated with them.
2. Items in a PO detail should exist in an indent detail corresponding to the indent header.
3. Items should not be repeated in indent details for an indent.
4. The Sender and Receiver cannot be the same for an indent.
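The detected rules and missing invariants above can be expressed as executable checks, which is essentially what the consistency analysis automates. A minimal sketch in Python; the Indent and PurchaseOrder structures and the 'Firm'/'Tentative' type values are illustrative assumptions, not the actual MasterCraft models:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Indent:
    indent_type: str                # e.g. 'Firm' or 'Tentative' (assumed values)
    sender: str
    receiver: str
    items: List[str] = field(default_factory=list)  # item codes in the indent details

@dataclass
class PurchaseOrder:
    indent: Indent
    items: List[str] = field(default_factory=list)

def check_invariants(indent: Indent, po: Optional[PurchaseOrder] = None) -> List[str]:
    """Return descriptions of violated invariants (empty list = all hold)."""
    violations = []
    # Invariant 3: items should not be repeated in the indent details.
    if len(indent.items) != len(set(indent.items)):
        violations.append("duplicate items in indent details")
    # Invariant 4: the sender and receiver cannot be the same for an indent.
    if indent.sender == indent.receiver:
        violations.append("sender equals receiver")
    if po is not None:
        # Invariant 1: only indents of type 'Firm' have POs associated with them.
        if po.indent.indent_type != "Firm":
            violations.append("PO associated with a non-'Firm' indent")
        # Invariant 2: items in the PO details must exist in the indent details.
        missing = [i for i in po.items if i not in po.indent.items]
        if missing:
            violations.append("PO items not in indent: %s" % missing)
    return violations
```

For instance, a 'Tentative' indent with duplicate items, identical sender and receiver, and an associated PO containing an item absent from the indent would report all four violations.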
Our simulations generated a number of interesting scenarios that could be used for verification and
validation with the users.
For example:
1. An unauthorized employee attempts raising an indent for a department.
2. An indent detail is associated with more than one indent header.
Transformation of the FRV requirements to an implementation was done using the MAP approach
outlined above.
Fig. 9: Purchase Process
Smita Ghaisas, Ulka Shrotri and R. Venkatesh
6. Summary
In this paper, we have demonstrated use of our requirement-centric method for application
development. We separate the problem domain and solution domain clearly and classify requirements
into four distinct contexts. We refer to these contexts as viewpoints. Each viewpoint addresses
requirements relevant to it and provides a solution that meets those requirements. We have shown that
requirement models can be used as inputs for analysis and detection of inconsistencies and they can
be prototyped using tool support. Such automated analysis and prototyping of functional requirements enables efficient detection of rule violations and supports validation and consolidation of requirements at an early stage of software development. Once validated, the requirement models can
be used for actual implementation of a solution using the MAP approach and MasterCraft’s generative
support.
The integration of intuitive diagrammatic notations with formal tools has opened up the possibility of
early analysis of requirements resulting in early defect detection and correction. It is also possible to
generate early prototypes of the system being modelled.
References
1. D. Leffingwell. Calculating the return on investment from more effective requirements
management. American Programmer, 10(4),13-16, 1997.
2. Philippe Kruchten, The Rational Unified Process – An Introduction, Addison-Wesley, 1999.
3. Kerry Raymond, Reference Model of Open Distributed Processing: Introduction
http://magda.elibel.tm.fr/refs/UML/odp.pdf
4. Axel Van Lamsweerde, Goal-Oriented Requirement Engineering: A guided Tour, Proceedings
RE’01, 5th IEEE International Symposium on Requirements Engineering, Toronto, August 2001,
249-263.
5. Colette Rolland and Naveen Prakash, From Conceptual Modelling to Requirements Engineering,
Annals of Software Engineering, 10, 151-176, 2000.
6. G. Booch, J. Rumbaugh and I. Jacobson, The Unified Modeling Language User Guide, Addison-Wesley, 1998.
7. L. Lamport, Specifying Systems: The TLA+ Language and Tools for Hardware and Software Engineers, Addison-Wesley, 2002.
8. Ulka Shrotri, Purandar Bhaduri and R. Venkatesh, Model Checking Visual Specification of Requirements, International Conference on Software Engineering and Formal Methods, Brisbane, September 2003, IEEE Computer Society Press. To appear.
Self-measurement, Self-monitoring, Self-learning, and Self-valuation as the
Necessary Conditions for Autonomous Evolution of Information Systems 1
Jingde Cheng
Department of Information and Computer Sciences, Saitama University
Saitama, 338-8570, Japan
cheng@aise.ics.saitama-u.ac.jp
“As long as a branch of science offers an abundance of problems, so
long is it alive: a lack of problems foreshadows extinction or the
cessation of independent development.” - David Hilbert, 1900.
“The formulation of a problem is often more essential than its
solution, which may be merely a matter of mathematical or
experimental skill. To raise new questions, new possibilities, to
regard old problems from a new angle, requires creative imagination
and marks real advance in science.” - Albert Einstein, 1938.
Abstract. Traditional information systems are passive in the sense that data or knowledge is created,
retrieved, modified, updated, and deleted only in response to operations issued by users or application
programs, and the systems can only execute queries or transactions explicitly submitted by users or
application programs but have no ability to do something actively by themselves. Unlike a traditional
information system serving just as a storehouse of data or knowledge and working passively according to
queries or transactions explicitly issued by users and application programs, an autonomous evolutionary
information system serves as an autonomous and evolutionary partner of its users such that it discovers new
knowledge by automated reasoning techniques from its database or knowledge-base autonomously, communicates and cooperates with its users in solving problems actively by providing the users with advice and help, has a certain mechanism to improve its own state of ‘knowing’ and ability of ‘working’, and
ultimately ‘grows up’ following its users. However, to implement an actual autonomous evolutionary
information system useful in advanced applications in the real world is very difficult. The most important
and difficult issues in design and implementation of autonomous evolutionary information systems, the
author’s fundamental considerations on the issues, and the author’s approaches to the issues are presented,
and some challenging problems are shown for future research. The notion of an autonomous evolutionary
information system, its actual implementation, and its practical applications concern various areas including
logic, theory of computation, automated reasoning and proving, software engineering, knowledge
engineering, and information security engineering, and will raise many new research problems in these areas.
1. Introduction
Traditional information systems (without regard to either database systems or knowledge-base
systems) are passive in the sense that data or knowledge is created, retrieved, modified, updated, and
deleted only in response to operations issued by users or application programs, and the systems can only execute queries or transactions explicitly submitted by users or application programs but have no
ability to do something actively by themselves [29, 30]. Although active database systems allow users
to define reactive behavior by means of active rules resulting in a more flexible and powerful
formalism, the systems are still passive in the sense that they are not autonomous and evolutionary
[22, 32].
Future generation of information systems will be more intelligent than the traditional systems in
order to satisfy the requirements from advanced applications of information systems. An intelligent
1 This work was supported in part by The Ministry of Education, Culture, Sports, Science and Technology of Japan under Grant-in-Aid
for Scientific Research on Priority Areas No. 04229213 and No. 05213212, Grant-in-Aid for Exploratory Research No. 09878061, and
Grant-in-Aid for Scientific Research (B) No. 11480079, and a grant by Artificial Intelligence Research Promotion Foundation, Japan.
information system differs primarily from the traditional information systems in that it can provide its
users with not only data or knowledge stored in its database or knowledge-base by its developers and
users, but also new knowledge, which is discovered or reasoned out automatically by the system itself from its database or knowledge-base. This new knowledge may include new facts, new
propositions, new conditionals, new inference rules, and so on. Therefore, unlike a traditional
information system serving just as a storehouse of data or knowledge and working passively according
to queries or transactions explicitly issued by users and application programs, an intelligent
information system serves as an autonomous and evolutionary partner of its users such that it
discovers new knowledge by automated reasoning technique from its database or knowledge-base
autonomously, communicates and cooperates with its users in solving problems actively by providing
the users with advice and help, and has a certain mechanism to improve its own state of ‘knowing’
and ability of ‘working’, and ultimately ‘grows up’ following its users. The present author named this
type of information systems ‘Autonomous Evolutionary Information Systems’. The characteristics
of an autonomous evolutionary information system are as follows:
(1) An autonomous evolutionary information system is an information system that stores and
manages structured data and/or formally represented knowledge. In this aspect, there is no
intrinsic difference between an autonomous evolutionary information system and a usual database
or knowledge-base system.
(2) An autonomous evolutionary information system has the ability, to some degree, to reason out new knowledge from its database or knowledge-base autonomously. This
ability of reasoning autonomously is one of the most intrinsic characteristics of the system.
(3) An autonomous evolutionary information system communicates and cooperates with its users in
solving problems actively by providing the users with advice and help based on new
knowledge reasoned out by itself, and therefore, it acts as an assistant of its users.
(4) An autonomous evolutionary information system has the ability (mechanism) to improve its own
state of ‘knowing’ and ability of ‘working’ in the sense that it can add new facts and knowledge
into its database or knowledge-base automatically and add new inference rules into its reasoning
engine automatically. It is in this sense that we say the system is evolutionary.
(5) An autonomous evolutionary information system will ultimately ‘grow up’ following its users
such that two autonomous evolutionary information systems with the same primitive contents and
ability may act very differently if they are used by different users during a certain long period of
time.
An autonomous evolutionary information system is different from the so-called ‘software agent’
or ‘intelligent software agent’ [3, 4, 33] in the following aspects: (1) the former is a general-purpose information system without an explicitly defined specific task in a particular environment as
its goal to achieve, while the latter is usually defined as a specific program with an explicitly defined
specific task in a particular environment as its goal to achieve, (2) the former in general can serve for
the public users, or a specific group of users, or a specific individual user but does not act on behalf of
a specific user or a specific group of users in a particular environment, while the latter in general acts
on behalf of its principal in a particular environment, (3) the former communicates and cooperates
only with its users but does not communicate and cooperate with other autonomous evolutionary
information systems, while the latter communicates and cooperates with other agents, and (4) the former
is evolutionary in the sense that it has a certain mechanism to improve its own state of ‘knowing’ and
ability of ‘working’, and ultimately ‘grows up’ following its users, while the latter is a static program.
2. Autonomous Evolution: What Is It?
The term ‘evolution’ means a gradual process in which something changes into a different and
usually better, maturer, or more complete form. Note that in order to identify, observe, and then
ultimately control any gradual process, it is indispensable to measure and monitor the behavior of that
gradual process. The autonomous evolution of a system, which may be either natural or artificial,
should be a gradual process in which everything changes by conforming to the system’s own laws
only, and not subject to some higher ones.
Making a computing system autonomously evolutionary is very difficult, if not impossible,
because the behavior of any computing system itself is intrinsically deterministic (For example, it is
well known that any computing system itself cannot generate true random numbers). What we can do
is to design and implement some mechanism in a computing system such that it can react to stimuli from its outside environment and then improve its behavior.
A reactive system is a computing system that maintains an ongoing interaction with its
environment, as opposed to computing some final value on termination [23, 24]. An autonomous
evolutionary information system should be a reactive system with the capability of concurrently
maintaining its database or knowledge-base, discovering and providing new knowledge, interacting
with and learning from its users and environment, and improving its own state of ‘knowing’ and
ability of ‘working’.
Measuring the behavior of a computing system means capturing run-time information about the
system through detecting attributes of some specified objects in the system in some way and then
assigning numerical or symbolic values to the attributes in such a way as to describe the attributes
according to clearly defined rules.
Monitoring the behavior of a computing system means collecting and reporting run-time
information about the system, which is captured by measuring the system. Measuring and monitoring
mechanisms can be implemented in hardware, in software, or both.
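The two definitions can be sketched directly in software: a measurer captures attribute values of a watched component, and a monitor collects and reports the captured records. All class and attribute names below are illustrative assumptions; the paper prescribes no concrete interface:

```python
import time

class Measurer:
    """Captures run-time attributes of a watched component and assigns values
    to them according to a fixed rule (the 'measuring' of the text)."""
    def measure(self, component):
        return {
            "component": component.__class__.__name__,
            "timestamp": time.time(),
            "calls": getattr(component, "calls", 0),
        }

class Monitor:
    """Collects and reports the information captured by a Measurer
    (the 'monitoring' of the text)."""
    def __init__(self, measurer):
        self.measurer = measurer
        self.log = []
    def observe(self, component):
        record = self.measurer.measure(component)
        self.log.append(record)   # collect
        return record             # report

class Reasoner:
    """A stand-in functional component whose behaviour is being watched."""
    def __init__(self):
        self.calls = 0
    def infer(self):
        self.calls += 1

reasoner = Reasoner()
monitor = Monitor(Measurer())
reasoner.infer()
monitor.observe(reasoner)
```

Keeping measurement (value capture) and monitoring (collection and reporting) as separate objects mirrors the separation the text draws between the two mechanisms.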
For any computing system, we can identify and observe its evolution, i.e. a gradual change
process, only if we can reliably measure and monitor the system’s behavior. Hence, an autonomous
evolutionary computing system must have some way to measure and monitor its own behavior by
itself.
Learning is the action or process of gaining knowledge of a subject or skill in an art through
study, experience, teaching, or training. It usually leads to the modification of behavior of the learner
or the acquisition of new abilities of the learner, and results in the growth or maturation of the learner.
Self-learning is the action or process of gaining knowledge or skill through acquiring knowledge by
the learner himself (herself) rather than receiving instructions from others. It is obvious that a self-learning
valuate the effectiveness of every phase in the self-learning process.
Valuation is the action or process of assessing or estimating the value or price of a thing based
on some certain criterion about its worth, excellence, merit, or character. Self-valuation is the action
or process of assessing or estimating the value or price of a thing concerning the valuator himself
(herself) that is made by the valuator himself (herself).
For any computing system, its autonomous evolution is impossible if it has no self-learning and
self-valuation abilities. However, because the behavior of any computing system itself is intrinsically
deterministic and is a closed world, true or pure self-learning and self-valuation is meaningless, if not
impossible, to its autonomous evolution. Therefore, the autonomous evolution of a computing system
should be accomplished in a gradual process by interaction with its outside environment.
Thus, based on the above definitions of autonomous evolution, measuring, monitoring, learning, and valuation, we have some fundamental questions or open issues about design and
development of an autonomous evolutionary information system as follows: What architecture and/or
structure should the system have? What objects in the system should be taken as primary ones such
that they have to be measured and monitored in order to identify, observe, and then ultimately control
its gradual evolution process? How can we measure and monitor the gradual evolution process of the
system? What behavioral changes in the system are intrinsically important to its evolution? What
formal logic system can satisfactorily underlie various capabilities of the system? What formal logic
system can satisfactorily underlie reasoning about the reactive and gradual change process of the
system? How is a formal theory of dynamics of the system, which may be paraconsistent and infinite,
organized, represented, and constructed? How can we formally define that something discovered by
the system is new and interesting? How can we find new and interesting empirical theorems in a
formal theory automatically?
It is obvious that at present we cannot give answers to all the above problems in a single paper
because many studies in this new research direction are still in their early stages and some problems
are still completely open. In the rest of this paper, we will present our fundamental considerations on
the issues and show our approaches to the issues.
3. The Architecture of Autonomous Evolutionary Information Systems
As we have pointed out, an autonomous evolutionary information system should be a reactive
system with the capability of concurrently maintaining its database or knowledge-base, discovering
and providing new knowledge, interacting with and learning from its users, and improving its own
state of ‘knowing’ and ability of ‘working’.
According to the present author’s considerations, there are some fundamental principles in
concurrent-systems engineering as follows [9, 10]:
The dependence principle in measuring, monitoring, and controlling: “Any system cannot
control what it cannot measure and monitor.”
The wholeness principle of concurrent systems: “the behavior of a concurrent system is not
simply the mechanical putting together of its parts that act concurrently but a whole such that one
cannot find some way to resolve it into parts mechanically and then simply compose the sum of its
parts as the same as its original behavior.”
The uncertainty principle in measuring and monitoring concurrent systems: “the behavior of an
observer such as a run-time monitor cannot be separated from what is being observed.”
The self-measurement principle in designing, developing, and maintaining concurrent systems:
“a large-scale, long-lived, and highly reliable concurrent system should be constructed by some
function components and some (maybe only one) permanent self-measurement components that act
concurrently with the function components, measure and monitor the system itself according to some
requirements, and pass run-time information about the system’s behavior to the outside world of the
system.”
Fig. 1 shows a reconfigurable architecture of an autonomous evolutionary information system we
are designing and developing based on the above fundamental principles.
Fig. 1 A reconfigurable architecture of autonomous evolutionary information systems
The system is an automated theorem finding system acting as an assistant for mathematicians
and/or scientists. It is physically distributed but we rather conceptually regard it as a concurrent
system. Its central components include a central measurer (Me), a central recorder (R), a central
monitor (Mo), and a central controller / scheduler (C/S), all of which are permanent components of the
system. The functional components of the system are separated into two groups which are measured,
recorded, monitored, and controlled by the central components through two (internal and external)
instruction/data buses. As a result, the system can measure, record, monitor, and ultimately control
and schedule its own behavior by itself at any time, and therefore, it can ‘evolve’ according to the
autonomous evolution mechanism programmed previously in the system.
One group of the functional components of the system includes the following components, which
can only be invoked by the central components and communicate with users by internal
instruction/data bus and working space:
EP: An interpreter for epistemic programs. An epistemic program is a sequence of instructions
such that for a primary epistemic state given as the initial input, an execution of the instructions
produces an epistemic process where every primary epistemic operation corresponds to an instruction
whose execution results in a new epistemic state, in particular, the terminal epistemic state is also
called the result of the execution of the program [12].
TP: A theorem prover, which can be used to find a proof, based on the underlying logic system,
for a given formula from some given premises.
EG: An explanation generator, which can be used to generate explanations for given proofs.
EnCal: An automated forward deduction system for general-purpose entailment calculus, which
can be used as a domain-independent reasoning engine in various knowledge-based systems that
require an autonomous reasoning mechanism to get new entailments and empirical conditionals [7].
HF: A hypothesis formator, which autonomously suggests new hypothesis or hypotheses to
users.
CF: A concept formator, which autonomously suggests to users that it is rational to define a new concept based on some formula or formulas.
LTB: A logical-theorem-base, which includes all logical theorem schemata of the underlying
logic system.
IFB: An implicit-fact-base, which includes all facts that are consequences implicitly entailed,
based on the underlying logic system, by those facts explicitly provided by users.
IETB: An implicit-empirical-theorem-base, which includes all empirical theorems that are
consequences implicitly entailed, based on the underlying logic system, by those facts and empirical
theorems explicitly provided by users.
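The relation between facts explicitly provided by users (EFB) and the implicitly entailed consequences collected in the IFB can be illustrated with a naive forward-chaining closure. This toy sketch uses atomic facts and modus ponens only; EnCal itself performs entailment calculus over a relevant logic, which this does not attempt to reproduce:

```python
def implicit_facts(explicit_facts, rules):
    """Close a set of atomic facts under modus ponens for rules of the form
    (premises, conclusion); return only the newly derived facts."""
    known = set(explicit_facts)
    while True:
        new = {concl for premises, concl in rules
               if set(premises) <= known and concl not in known}
        if not new:
            break
        known |= new
    return known - set(explicit_facts)

# Explicit facts supplied by a user, plus two empirical rules.
efb = {"indent_raised", "indent_approved"}
rules = [
    (("indent_raised", "indent_approved"), "indent_firm"),
    (("indent_firm",), "po_allowed"),
]
ifb = implicit_facts(efb, rules)   # the set {'indent_firm', 'po_allowed'}
```

Here `ifb` contains exactly the facts entailed by, but not among, the explicit ones — the plain-text analogue of the EFB/IFB split.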
Another group of the functional components of the system includes the following components,
which can directly communicate with users by external instruction/data bus:
EETB: An explicit-empirical-theorem-base, which includes all empirical theorems explicitly
provided or known by users.
OPB: An open-problem-base, which includes all open problems explicitly provided or known by
users.
EFB: An explicit-fact-base, which includes all facts explicitly provided or known by users.
For: A formalizer, which is used to help users to formalize natural language descriptions into
formulas.
Int: An interpreter, which is used to interpret formulas into natural language descriptions for
users.
The above architecture of autonomous evolutionary information systems is reconfigurable
because either an internal functional component or an external functional component can be easily
added into or removed from the systems. Unlike those traditional information systems where function
components are often considered as the ‘heart’ of systems, the above architecture of autonomous
evolutionary information systems is bus-centralized. This architecture can also be used in the design and development of autonomous evolutionary information systems with other purposes [14].
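The bus-centralized, reconfigurable organisation can be sketched as a registry in which functional components attach to a named bus and every invocation passes through the permanent central components. The Python names below (CentralCore, System, and the toy 'For' component) are illustrative assumptions, not the actual implementation:

```python
class CentralCore:
    """Stand-in for the permanent Me/R/Mo/C-S components: every invocation
    is measured and recorded before being dispatched."""
    def __init__(self):
        self.trace = []                        # recorder (R)
    def dispatch(self, bus, name, component, *args):
        self.trace.append((bus, name))         # measure (Me) and record (R)
        return component(*args)                # control/schedule (C/S)

class System:
    def __init__(self):
        self.core = CentralCore()
        self.buses = {"internal": {}, "external": {}}
    def attach(self, bus, name, component):
        self.buses[bus][name] = component      # reconfigurable: add at run time
    def detach(self, bus, name):
        del self.buses[bus][name]              # reconfigurable: remove at run time
    def invoke(self, bus, name, *args):
        return self.core.dispatch(bus, name, self.buses[bus][name], *args)

system = System()
# A toy external formalizer ('For') attached to the external bus.
system.attach("external", "For", lambda text: f"formalized({text})")
result = system.invoke("external", "For", "every indent has a sender")
```

Because components are registered rather than wired in, attaching or detaching one at run time mirrors the claim that functional components can easily be added to or removed from the system, while the central core sees every call.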
4. Self-learning by Discovery based on Relevant and Ampliative Reasoning
The most intrinsic difference between an autonomous evolutionary information system and a
traditional information system is that the former can serve as an autonomous and evolutionary partner
of its users that discovers new knowledge by automated reasoning technique from its database or
knowledge-base, communicates and cooperates with its users in solving problems actively by
providing the users with advices and helps, and has a certain mechanism to improve its own state of
‘knowing’ and ability of ‘working’, but the latter cannot. Therefore, the most crucial issue in design
and development of an autonomous evolutionary information system is to choose the right logic
system (or systems) to underlie various capabilities of the system, because almost all the capabilities
need some logic system as the fundamental, domain-independent, task-independent, and formal
representation and reasoning tool. In particular, the capability of knowledge discovery is the most
intrinsic characteristic of an autonomous evolutionary information system and it is logic that serves as
the only criterion to be used to justify the validity of reasoning in knowledge discovery.
Reasoning is the process of drawing new conclusions from given premises, which are already
known facts or previously assumed hypotheses (Note that how to define the notion of ‘new’ formally
and satisfactorily is still a difficult open problem). Therefore, reasoning is intrinsically
ampliative, i.e. it has the function of enlarging or extending some things, or adding to what is already
known or assumed. In general, a reasoning consists of a number of arguments (or inferences) in some
order. An argument (or inference) is a set of declarative sentences consisting of one or more
sentences as its premises, which contain the evidence, and one sentence as its conclusion. In an
argument, a claim is being made that there is some sort of evidential relation between its premises and
its conclusion: the conclusion is supposed to follow from the premises, or equivalently, the premises
are supposed to entail the conclusion. Therefore, the correctness of an argument is a matter of the
connection between its premises and its conclusion, and concerns the strength of the relation between
them (Note that the correctness of an argument depends neither on whether the premises are really true
or not, nor on whether the conclusion is really true or not). Thus, there are some fundamental
questions: What is the criterion by which one can decide whether the conclusion of an argument or a
reasoning really does follow from its premises or not? Is there only one criterion, or are there
many criteria? If there are many criteria, what are the intrinsic differences between them? It is logic
that deals with the validity of argument and reasoning in general.
A logically valid reasoning is a reasoning such that its arguments are justified based on some
logical validity criterion provided by a logic system in order to obtain correct conclusions (Note that
here the term ‘correct’ does not necessarily mean ‘true’). Today, there are many different logic
systems motivated by various philosophical considerations. As a result, a reasoning may be valid on
one logical validity criterion but invalid on another. For example, the classical account of validity,
which is one of fundamental principles and assumptions underlying classical mathematical logic and
its various conservative extensions, is defined in terms of truth-preservation (in some certain sense of
truth) as: an argument is valid if and only if it is impossible for all its premises to be true while its
conclusion is false. Therefore, a classically valid reasoning must be truth-preserving. On the other
hand, for any correct argument in scientific reasoning as well as our everyday reasoning, its premises
must somehow be relevant to its conclusion, and vice versa. The relevant account of validity is
defined in terms of relevance as: for an argument to be valid there must be some connection of
meaning, i.e. some relevance, between its premises and its conclusion. Obviously, the relevance
between the premises and conclusion of an argument is not accounted for by the classical logical
validity criterion, and therefore, a classically valid reasoning is not necessarily relevant.
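The contrast between the two validity criteria can be made concrete with a brute-force truth-table check of the classical criterion. The sketch below (illustrative, in Python) shows that the argument from A and ¬A to an unrelated B is classically valid — truth-preservation holds vacuously, since no valuation makes both premises true — even though B shares no meaning with the premises, which is exactly what the relevant account rejects:

```python
from itertools import product

def classically_valid(premises, conclusion, atoms):
    """An argument is classically valid iff no valuation makes all
    premises true while the conclusion is false (truth-preservation)."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

A = lambda v: v["A"]
not_A = lambda v: not v["A"]
B = lambda v: v["B"]

# Ex contradictione quodlibet: from A and not-A, anything follows classically,
# even though B is irrelevant to the premises.
classically_valid([A, not_A], B, ["A", "B"])   # True
```

A relevance-logic validity check would additionally demand a connection of meaning between premises and conclusion, so this argument would fail there.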
Proving is the process of finding a justification for an explicitly specified statement from given
premises, which are already known facts or previously assumed hypotheses. A proof is a description
of a found justification. A logically valid proving is a proving such that it is justified based on some
logical validity criterion provided by a logic system in order to obtain a correct proof.
The most intrinsic difference between reasoning and proving is that the former is intrinsically
prescriptive and predictive while the latter is intrinsically descriptive and non-predictive. The purpose
of reasoning is to find some new conclusion previously unknown or unrecognized, while the purpose
of proving is to find a justification for some specified statement previously given. Proving has an
explicitly given target as its goal while reasoning does not. Unfortunately, until now, many studies in
Computer Science and Artificial Intelligence disciplines still confuse proving with reasoning.
Discovery is the process of finding out or bringing to light that which was previously unknown.
Therefore, in any discovery both the discovered thing and its truth must be previously unknown before
the completion of the discovery process. Reasoning is the only way to draw new conclusions from some premises that are known facts or assumed hypotheses. There is no discovery process that does not
invoke reasoning. Since any discovery process has no explicitly defined target, the only criterion the discovery process must act according to is to reason out correct conclusions from the premises.
Because the intrinsically characteristic task of an autonomous evolutionary information system is
discovering some new things rather than justifying a given statement, it is obvious that the system
must invoke reasoning rather than proving. Moreover, the reasoning performed by the system must be
relevant and ampliative. On the other hand, not every logic system can underlie relevant and ampliative
reasoning well. The question, “Which is the right logic?” invites the immediate counter-question
“Right for what?” Only if we know exactly what we want to obtain can we make a good choice.
Therefore, we have to specify the essential requirements for the fundamental formal logic system that
can satisfactorily underlie relevant and ampliative reasoning.
Logic is a special discipline which is considered to be the basis for all other sciences, and
therefore, it is a science prior to all others, which contains the ideas and principles underlying all
sciences [20, 28]. Logic deals with what entails what or what follows from what, and aims at
determining which are the correct conclusions of a given set of premises, i.e. to determine which
arguments are valid. Therefore, the most essential and central concept in logic is the logical
consequence relation that relates a given set of premises to those conclusions, which validly follow
from the premises.
In general, a formal logic system L consists of a formal language, called the object language and
denoted by F(L), which is the set of all well-formed formulas of L, and a logical consequence
relation, denoted by meta-linguistic symbol |−L, such that for P ⊆ F(L) and c ∈ F(L), P |−L c means
that within the framework of L, c is a valid conclusion of premises P, i.e. c validly follows from P.
For a formal logic system (F(L), |−L), a logical theorem t is a formula of L such that φ |−L t where φ is
the empty set. We use Th(L) to denote the set of all logical theorems of L. Th(L) is completely
determined by the logical consequence relation |−L. Depending on how the logical consequence
relation of a logic is represented, the logic can be formalized as a Hilbert-style formal system, a
Gentzen natural deduction system, a Gentzen sequent calculus system, or another type of formal system.
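As a concrete (and deliberately tiny) illustration of these definitions, the following Python sketch encodes formulas as nested tuples and computes a consequence relation |−L as the forward-deduction closure of the premises under a single inference rule, modus ponens. The encoding, the rule set, and all function names are assumptions of this sketch, not the machinery of any particular logic discussed here.

```python
def implies(a, b):
    """Build the conditional 'if a then b' as a tagged tuple (an assumed encoding)."""
    return ('->', a, b)

def closure(premises):
    """Forward-deduction closure: everything derivable from the premises
    by repeatedly applying modus ponens (from A and A -> B, infer B)."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for f in list(derived):
            # f is a conditional whose antecedent is already derived
            if isinstance(f, tuple) and f[0] == '->' and f[1] in derived \
                    and f[2] not in derived:
                derived.add(f[2])
                changed = True
    return derived

def entails(premises, c):
    """P |-L c in this toy system: c validly follows from P."""
    return c in closure(set(premises))

# In this toy system Th(L) = closure(set()) is empty, because no axiom
# schemata are supplied; a real logic L would contribute its theorems here.
```

For example, entails({'p', implies('p', 'q'), implies('q', 'r')}, 'r') holds, while entails({'p'}, 'q') does not.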
A formal logic system L is said to be explosive if and only if {A, ¬A} |−L B for any two different
formulas A and B; L is said to be paraconsistent if and only if it is not explosive.
Let (F(L), |−L) be a formal logic system and P ⊆ F(L) be a non-empty set of sentences (i.e.
closed well-formed formulas). A formal theory with premises P based on L, called an L-theory with
premises P and denoted by TL(P), is defined as TL(P) =df Th(L) ∪ ThLe(P), where ThLe(P) =df {et | P |−L et
and et ∉ Th(L)}; here Th(L) and ThLe(P) are called the logical part and the empirical part of the
formal theory, respectively, and any element of ThLe(P) is called an empirical theorem of the formal
theory. A formal theory TL(P) is said to be directly inconsistent if and only if there exists a formula A
of L such that both A ∈ P and ¬A ∈ P hold. A formal theory TL(P) is said to be indirectly
inconsistent if and only if it is not directly inconsistent but there exists a formula A of L such that both
A ∈ TL(P) and ¬A ∈ TL(P). A formal theory TL(P) is said to be consistent if and only if it is neither
directly inconsistent nor indirectly inconsistent. A formal theory TL(P) is said to be explosive if and
only if A ∈ TL(P) for arbitrary formula A of L; TL(P) is said to be paraconsistent if and only if it is
not explosive. An explosive formal theory is not useful at all. Therefore, any meaningful formal
theory should be paraconsistent. Note that if a formal logic system L is explosive, then any directly or
indirectly inconsistent L-theory TL(P) must be explosive.
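The inconsistency notions just defined can be illustrated with a minimal sketch: a premise set is directly inconsistent when it contains some formula together with its negation. The ('not', A) encoding and the function names are assumptions carried over for illustration only.

```python
def negate(a):
    """Encode the negation of formula a as a tagged tuple (assumed encoding)."""
    return ('not', a)

def directly_inconsistent(premises):
    """True iff some formula and its negation both occur among the premises,
    i.e. both A and (not A) are in P."""
    return any(negate(a) in premises for a in premises)
```

In an explosive logic such a premise set would make the whole theory trivial, which is why the text insists that a meaningful formal theory be paraconsistent.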
In the literature of mathematical, natural, social, and human sciences, it is probably difficult, if
not impossible, to find a sentence form that is more generally used for describing various definitions,
propositions, and theorems than the sentence form of ‘if ... then ...’. In logic, a sentence in the form of
‘if ... then ...’ is usually called a conditional proposition or simply conditional which states that there
exists a relation of sufficient condition between the ‘if’ part and the ‘then’ part of the sentence.
Scientists always use conditionals in their descriptions of various definitions, propositions, and
theorems to connect a concept, fact, situation or conclusion to its sufficient conditions. The major
work of almost all scientists is to discover some sufficient condition relations between various
phenomena, data, and laws in their research fields. Indeed, Russell said in 1903: “Pure Mathematics
is the class of all propositions of the form ‘p implies q,’ where p and q are propositions containing one
or more variables, the same in the two propositions, and neither p nor q contains any constants except
logical constants” [27].
In general, a conditional consists of two parts, connected by the connective “if ...
then ...” and called the antecedent and the consequent of that conditional, respectively. The truth of a
conditional depends not only on the truth of its antecedent and consequent but also, and more
essentially, on a necessarily relevant and conditional relation between them. The notion of conditional
plays the most essential role in reasoning because any reasoning form must invoke it; therefore, it
has historically always been the most important subject studied in logic and is regarded as the heart of logic
[1]. In fact, the notion of conditional has been discussed since the age of ancient Greece by the
ancient Greeks. For example, the extensional, truth-functional definition of the notion of material
implication was given by Philo of Megara in about 400 B.C. [21, 28].
When we study and use logic, the notion of conditional may appear in both the object logic (i.e.
the logic we are studying) and the meta-logic (i.e. the logic we are using to study the object logic). In
the object logic, there usually is a connective in its formal language to represent the notion of
conditional, and the notion of conditional, usually represented by a meta-linguistic symbol, is also
Self-measurement, Self-monitoring, Self-learning, and Self-valuation
used for representing a logical consequence relation in its proof theory or model theory. On the other
hand, in the meta-logic, the notion of conditional, usually in the form of natural language, is used for
defining various meta-notions and describing various meta-theorems about the object logic.
From the viewpoint of object logic, there are two classes of conditionals. One class is empirical
conditionals and the other class is logical conditionals. For a logic, a conditional is called an
empirical conditional of the logic if its truth-value, in the sense of that logic, depends on the contents
of its antecedent and consequent and therefore cannot be determined only by its abstract form (i.e.
from the viewpoint of that logic, the relevant relation between the antecedent and the consequent of
that conditional is regarded to be empirical); a conditional is called a logical conditional of the logic
if its truth-value, in the sense of that logic, depends only on its abstract form but not on the contents of
its antecedent and consequent, and therefore, it is considered to be universally true or false (i.e. from
the viewpoint of that logic, the relevant relation between the antecedent and the consequent of that
conditional is regarded to be logical). A logical conditional that is considered to be universally true, in
the sense of that logic, is also called an entailment of that logic. Indeed, the most intrinsic difference
between various logic systems lies in which class of conditionals each regards as entailments, as Diaz
pointed out: “The problem in modern logic can best be put as follows: can we give an explanation of
those conditionals that represent an entailment relation?” [15]
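For classical propositional logic, whether a conditional is a logical conditional, i.e. true by virtue of its form alone, can be checked by brute-force truth-table enumeration, as in the hedged sketch below. The tuple encoding and function names are assumptions of this illustration, and material implication stands in for the conditional connective.

```python
from itertools import product

def eval_formula(f, env):
    """Classically evaluate a formula under a truth assignment env."""
    if isinstance(f, str):          # propositional variable
        return env[f]
    op = f[0]
    if op == 'not':
        return not eval_formula(f[1], env)
    if op == '->':                  # material implication
        return (not eval_formula(f[1], env)) or eval_formula(f[2], env)

def variables(f, acc=None):
    """Collect the propositional variables occurring in f."""
    acc = set() if acc is None else acc
    if isinstance(f, str):
        acc.add(f)
    else:
        for sub in f[1:]:
            variables(sub, acc)
    return acc

def is_logical_conditional(f):
    """True iff f is true under every truth assignment (a classical tautology),
    so its truth depends only on its abstract form."""
    vs = sorted(variables(f))
    return all(eval_formula(f, dict(zip(vs, vals)))
               for vals in product([False, True], repeat=len(vs)))
```

Under this classical reading, ('->', 'p', 'p') is a logical conditional, while ('->', 'p', 'q') is an empirical one whose truth depends on the contents of p and q; a relevant logic would of course draw the entailment boundary differently.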
The fundamental formal logic system that can satisfactorily underlie autonomous evolutionary
information systems has to satisfy at least the following essential requirements.
First, as a general logical criterion for validity of reasoning rather than proving, the logic must be
able to underlie relevant reasoning as well as truth-preserving reasoning in the sense of conditional.
This requirement is crucial to the reasoning engine of an autonomous evolutionary information system
because without such a logical criterion for validity, a forward deduction performed by the reasoning
engine may produce a lot of useless (even burdensome!) conclusions which are completely irrelevant
to given premises or not really correct in the sense of conditional.
Second, the logic must be able to underlie ampliative reasoning, i.e. the truth of the conclusion
should be recognized only after the completion of the reasoning process, and must not be invoked in
deciding the truth of the premises. This requirement is obvious for an autonomous evolutionary
information system, because any evolution process itself should intend to predict some future event
whose truth is unknown at the time the reasoning is performed.
Third, the logic must be able to underlie paracomplete and paraconsistent reasoning, because in
any real-world application area completeness and consistency are not necessarily guaranteed. In
particular, the principle of Explosion, that everything follows from a contradiction, cannot be
accepted by the logic as a valid principle. In fact, any formal theory may be indirectly inconsistent,
regardless of whether it is constructed as a purely deductive science or based on some empirical or
experimental science. Therefore, paracomplete reasoning (reasoning with incomplete information)
and paraconsistent reasoning (reasoning in the presence of inconsistency) are indispensable to
scientific discovery in the real world.
Finally, the logic must be able to underlie temporal reasoning, because any evolution process is
intrinsically time-dependent. No account of reasoning can properly be called complete if it does not
say something about how we reason about change. This is particularly true of an autonomous
evolutionary information system, because its behavior changes dynamically and its autonomous
evolution is a gradual process. Without a logic to underlie temporal reasoning, we cannot reason
about the dynamics of an autonomous evolutionary information system.
A detailed discussion of which formal logic system can satisfy the above essential requirements,
and thus underlie autonomous evolutionary information systems, would require a lot of space. Here
we only summarize the current results of the present author’s investigations. Classical mathematical
logic and its various classical conservative extensions or non-classical alternatives cannot satisfy any
of the above four essential requirements [1, 2, 6, 12, 16, 25, 26]; classical temporal logic [5, 17, 18,
19, 23, 24, 31] can satisfy the last requirement but not the first three [11, 13]; traditional relevant
logics [1, 2, 16, 25, 26] can partly satisfy the first requirement and fully satisfy the second and third,
but cannot satisfy the fourth [6, 8, 11, 12, 13]; strong relevant logics [6, 12] can satisfy the first three
requirements well but cannot satisfy the fourth [8, 11, 13]. At present, the only hopeful candidates
that can satisfy all four essential requirements are temporal relevant logics [8, 11, 13], which are
obtained by introducing temporal operators and related axiom schemata and inference rules into
strong relevant logics.
5. Self-valuation: The Most Difficult Issue in Autonomous Evolution
Even if an information system can measure and monitor its own behavior by itself, and can learn
some things by discovery based on relevant and ampliative reasoning, it cannot necessarily evolve
from a lower form into a better, more mature, or more complete form autonomously. To be an
autonomous evolutionary information system, an information system must be able to valuate the data,
information, or knowledge obtained by measuring, monitoring, and learning, and then determine a
correct direction or way to improve its own abilities from a lower form into a better, more mature, or
more complete form. Since determining a correct direction or way depends on a correct valuation of
the current state of the system and the current situation of its outside environment, the fundamental
question here is: can the system make the valuation correctly by itself?
On the other hand, since valuation is the action or process of assessing or estimating the value or
price of a thing based on a certain criterion about its worth, excellence, merit, or character, any
valuator must have such a criterion before the valuation is made. In fact, the most difficult issue in
any valuation is how to establish such a criterion. In general, a human being has different criteria for
valuating different things, and these various criteria are established over a long period of his/her life
by experiencing and learning various things. Moreover, one of the most intrinsic characteristics of
human society is that, for the same thing, different individuals or peoples may make different
valuations based on different criteria. It is this characteristic that leads to the variety and dynamics of
human society.
The present author’s basic assertion on the autonomous evolution of a computing system, named
the ‘autonomous evolution test’, is as follows: only when one can program a mechanism into a
computing system such that, after a certain long period of running and use of its different copies by
different users, these copies may respond very differently to the same stimulus from the outside
world, can we say that the system has the ability to evolve autonomously; the more different the
behavior of the various copies of the system is, the more autonomous the evolution of the system is.
Obviously, this assertion requires that the mechanism programmed into the system can form different
criteria according to interactions with different users, such that for the same thing different valuations
can be made based on the different criteria.
Thus, the fundamental questions are: what is the mechanism that forms different criteria for
valuation, and how can we implement it? To the knowledge of the present author, no idea or
methodology has so far been proposed for addressing these difficult issues.
6. Concluding Remarks
We have proposed the notion of an autonomous evolutionary information system, presented a
reconfigurable architecture for such systems, shown that temporal relevant logics are hopeful
candidates for the fundamental logic to underlie autonomous evolutionary information systems, and
pointed out that the most difficult issue in the design and development of an autonomous
evolutionary information system is how to implement its self-valuation mechanism.
Future work in this new research direction should focus on designing and developing
autonomous evolutionary information systems that work for all sorts of people in various areas of the
real world. We are developing the following two types of autonomous evolutionary information
systems:
Automated Theorem Finding Systems: Systems of this type work for all sorts of scientists
in various areas. A scientist works with an automated theorem finding system as an assistant that
knows the background knowledge and the state of the art of the area the scientist is working in,
discovers new and interesting empirical theorems from its database or knowledge-base, provides the
scientist with advice, and improves its own state of ‘knowing’ and ability of ‘working’ through
interactions with the scientist.
Personal / Family Information Systems: Systems of this type work for ordinary people. An
autonomous evolutionary personal information system serves as a partner of its user: it creates,
retrieves, modifies, and updates all personal or private information and knowledge over the lifetime
of the user, discovers new and interesting things, not explicitly known by the user, from its database
or knowledge-base, provides the user with advice, improves its own state of ‘knowing’ and ability of
‘working’ through interactions with the user, and ultimately matures together with the user. An
autonomous evolutionary family information system is simply a systematic combination of the
autonomous evolutionary personal information systems of all members of a family. It evolves with
both changes in the family’s membership and the growth of every member of the family.
An autonomous evolution mechanism is intrinsically important and indispensable to both of the
above two types of information systems.
The notion of an autonomous evolutionary information system, its actual implementation, and its
practical applications concern various areas including logic, theory of computation, automated
reasoning and proving, software engineering, knowledge engineering, and information security
engineering, and will raise many new research problems in these areas.
References
[1] A. R. Anderson and N. D. Belnap Jr., “Entailment: The Logic of Relevance and Necessity,” Vol. I, Princeton
University Press, 1975.
[2] A. R. Anderson, N. D. Belnap Jr., and J. M. Dunn, “Entailment: The Logic of Relevance and Necessity,”
Vol. II, Princeton University Press, 1992.
[3] J. M. Bradshaw (Ed.), “Software Agents,” AAAI Press / The MIT Press, 1997.
[4] W. Brenner, R. Zarnekow, and H. Wittig, “Intelligent Software Agents: Foundations and Applications,”
Springer, 1998.
[5] J. P. Burgess, “Basic Tense Logic,” in: D. Gabbay and F. Guenthner (Eds.), “Handbook of Philosophical
Logic, 2nd Edition,” Vol. 7, pp. 1-42, Kluwer Academic, 2002.
[6] J. Cheng, “The Fundamental Role of Entailment in Knowledge Representation and Reasoning,” Journal of
Computing and Information, Vol. 2, No. 1, pp. 853-873, 1996.
[7] J. Cheng, “EnCal: An Automated Forward Deduction System for General-Purpose Entailment Calculus,” in
N. Terashima and E. Altman (Eds.), “Advanced IT Tools, IFIP World Conference on IT Tools, IFIP96 - 14th
World Computer Congress, September 1996, Canberra, Australia,” pp. 507-514, Chapman & Hall, September
1996.
[8] J. Cheng, “Temporal Relevant Logic as the Logic Basis for Reasoning about Dynamics of Concurrent
Systems,” Proc. 1998 IEEE-SMC Annual International Conference on Systems, Man, and Cybernetics, Vol. 1,
pp. 794-799, 1998.
[9] J. Cheng, “The Self-Measurement Principle: A Design Principle for Large-scale, Long-lived, and Highly
Reliable Concurrent Systems,” Proc. 1998 IEEE-SMC Annual International Conference on Systems, Man, and
Cybernetics, Vol. 4, pp. 4010-4015, 1998.
[10] J. Cheng, “Wholeness, Uncertainty, and Self-Measurement: Three Fundamental Principles in Concurrent
Systems Engineering,” Proc. 13th International Conference on Systems Engineering, CS7-CS12, 1999.
[11] J. Cheng, “Temporal Relevant Logic: What Is It and Why Study It?” Abstracts of the IUHPS/DLMPS
11th International Congress of Logic, Methodology and Philosophy of Science, p. 253, 1999.
[12] J. Cheng, “A Strong Relevant Logic Model of Epistemic Processes in Scientific Discovery,” in E.
Kawaguchi, H. Kangassalo, H. Jaakkola, and I. A. Hamid (Eds.), “Information Modelling and Knowledge Bases
XI,” pp. 136-159, IOS Press, 2000.
[13] J. Cheng, “Temporal Relevant Logic as the Logical Basis of Anticipatory Reasoning-Reacting Systems,”
Proc. 6th International Conference on Computing Anticipatory Systems, 2003.
[14] J. Cheng, N. Akimoto, Y. Goto, M. Koide, K. Nanashima, and S. Nara, “HILBERT: An Autonomous
Evolutionary Information System for Teaching and Learning Logic,” Proc. 6th International Conference on
Computer Based Learning in Science, 2003.
[15] M. R. Diaz, “Topics in the Logic of Relevance,” Philosophia Verlag, 1981.
[16] J. M. Dunn and G. Restall, “Relevance Logic,” in: D. Gabbay and F. Guenthner (Eds.), “Handbook of
Philosophical Logic, 2nd Edition,” Vol. 6, pp. 1-128, Kluwer Academic, 2002.
[17] D. M. Gabbay, I. Hodkinson, and M. Reynolds, “Temporal Logic: Mathematical Foundations and
Computational Aspects,” Vol. 1, Oxford: Oxford University Press, 1994.
[18] D. M. Gabbay, M. A. Reynolds, and M. Finger, “Temporal Logic: Mathematical Foundations and
Computational Aspects,” Vol. 2, Oxford: Oxford University Press, 2000.
[19] A. Galton (Ed.), “Temporal Logics and Their Applications,” Academic Press, 1987.
[20] K. Gödel, “Russell’s Mathematical Logic,” in Schilpp (Ed.), “The Philosophy of Bertrand Russell,” Open
Court Publishing Company, 1944.
[21] W. Kneale and M. Kneale, “The Development of Logic,” Oxford University Press, 1962.
[22] G. Lausen, B. Ludascher, and W. May, “On Logical Foundations of Active Databases,” in J. Chomicki
and G. Saake (Eds.), “Logics for Databases and Information Systems,” pp. 389-422, Kluwer Academic, 1998.
[23] Z. Manna and A. Pnueli, “The Temporal Logic of Reactive and Concurrent Systems: Specification,”
Springer, 1992.
[24] Z. Manna and A. Pnueli, “Temporal Verification of Reactive Systems: Safety,” Springer, 1995.
[25] E. D. Mares and R. K. Meyer, “Relevant Logics,” in L. Goble (Ed.), “The Blackwell Guide to
Philosophical Logic,” pp. 280-308, Blackwell, 2001.
[26] S. Read, “Relevant Logic: A Philosophical Examination of Inference,” Blackwell, 1988.
[27] B. Russell, “The Principles of Mathematics, 2nd edition,” Cambridge University Press, 1903, 1938,
Norton Paperback Edition, Norton, 1996.
[28] A. Tarski, “Introduction to Logic and to the Methodology of the Deductive Sciences, 4th Edition,
Revised,” Oxford University Press, 1941, 1946, 1965, 1994.
[29] J. D. Ullman, “Principles of Database and Knowledge-Base Systems,” Vol. 1, Computer Science Press,
1988.
[30] J. D. Ullman, “Principles of Database and Knowledge-Base Systems: The New Technologies,” Vol. 2,
Computer Science Press, 1989.
[31] Y. Venema, “Temporal Logic,” in L. Goble (Ed.), “The Blackwell Guide to Philosophical Logic,” pp.
203-223, Blackwell, 2001.
[32] J. Widom and S. Ceri, “Introduction to Active Database Systems,” in J. Widom and S. Ceri (Eds.),
“Active Database Systems – Triggers and Rules for Advanced Database Processing,” pp. 1-41, Morgan
Kaufmann, 1996.
[33] M. Wooldridge, “An Introduction to Multiagent Systems,” John Wiley & Sons, 2002.
A Component Framework for Description-Driven Systems
F. Estrella 1,2, Z. Kovacs 1, R. McClatchey 2, N. Toth* 1,2 & T. Solomonides 2
1 European Organization for Nuclear Research (CERN), 1211 Geneva 23, Switzerland
{Florida.Estrella, Zsolt.Kovacs, Norbert.Toth}@cern.ch
2 Centre for Complex Co-operative Systems, University of the West of England, Frenchay, Bristol BS16 1QY, UK
{Richard.McClatchey, Tony.Solomonides}@uwe.ac.uk
Abstract : Software systems are increasingly impinging on new areas in everyday life; competitive market
conditions mean that the requirements for such software are subject to frequent change. As a result, there is
an increasing need for fast and flexible software development. Meta-objects can be used in multi-layered
description-driven systems to represent various levels of information that endow flexibility both to the
managed data and to the system design. This position paper identifies those generic system components that
together provide a coherent architecture for maintaining system evolution and data complexity in a holistic
approach. We argue that these generic components can be used as building blocks of a system at multiple
abstraction layers, introducing reuse and extensibility in higher abstractions, and reducing the complexity of
the system architecture. To illustrate the judicious use of components in such development, this paper
overviews the CRISTAL system as one example of an existing implementation of such a system.
1. Introduction
Software systems are increasingly impinging on new areas in everyday life; competitive market
conditions mean that the requirements for such software are subject to frequent changes, even during
the lifecycle of software production. Consequently, there is a need for methods of rapid development
of new software products from scratch, as well as for the rapid and frequent modification of existing
systems. This approach is advocated by exponents of so-called Agile software development [1]. As a
complement, a number of works promoting runtime software development have been published,
including [2, 3, 4]. Developers using traditional software development techniques are usually forced to
terminate running applications, modify, compile and debug application code. Any new application
must then either support multiple versions of data or require data to be migrated to the new version.
Systems that support runtime software development would allow some of these changes to take place
during program execution, replacing static model structures with first-class objects.
Figure 1. Description-driven approach.
* N. Toth is partially supported by the National Scientific Research Fund (OTKA) through grant T029264.
EMSISE’03
Systems built on what we have termed a Description-Driven (DD) approach [2, 5] support run-time
software development and scalable data management through the use of meta-objects. Figure 1 shows
that DD system objects at meta-data and higher abstraction levels represent type- and object-property-specific information extracted from the lower abstraction level. Solid lines refer to the traditional
‘instance-of’ relationships while dashed lines represent ‘described-by’ relationships. The latter
describes and often constrains the creation and behaviour of objects on the lower abstraction level.
Instance objects can be simulated [6] by meta-objects, inducing a homomorphism between the two
abstractions. Meta-objects being first-class objects are the means for run-time system evolution, while
data values extracted to the meta-level provide scalable data management. Depending on the semantics
of instance objects being described, meta-objects may represent descriptions of process, workflow
activity, data, role as well as relationship type.
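A minimal sketch of the ‘described-by’ relationship might look as follows, with a first-class meta-object constraining the creation of the instance objects it describes. The class and attribute names here are hypothetical illustrations, not CRISTAL’s actual API.

```python
class MetaObject:
    """Meta-level description of a kind of instance object (e.g. a part type).
    Being a first-class object, it can be modified at run time."""
    def __init__(self, name, required_properties):
        self.name = name
        self.required_properties = set(required_properties)

    def create_instance(self, **properties):
        # The meta-object constrains instance creation: this is the
        # 'described-by' relationship in action.
        missing = self.required_properties - properties.keys()
        if missing:
            raise ValueError(f"missing properties: {sorted(missing)}")
        return InstanceObject(described_by=self, properties=properties)

class InstanceObject:
    """An ordinary instance-level object, linked back to its description."""
    def __init__(self, described_by, properties):
        self.described_by = described_by   # the dashed 'described-by' link
        self.properties = properties
```

Changing the MetaObject at run time (for example, adding a required property) immediately changes how future instances may be created, which is the essence of run-time evolution through meta-objects.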
Due to the multiplicity of information abstraction layers that DD systems operate on and describe,
such systems are more complex in their architecture than traditional systems, resulting in additional
development and maintenance time. However, DD systems are flexible and highly responsive to
changes in the environment of any given application instance, expressed as changes at the descriptive
level; the hoped-for flexibility across multiple applications appears to be a realistic expectation. The
generic component framework proposed here, supporting the DD approach, provides a solution with
two benefits: the framework supplies the basic system components that a run-time evolving
application requires, and, by providing only framework interface implementations and guidelines, it
leaves some application-specific decisions open to developers, so that applications can utilize the DD
approach flexibly. The remaining task of software developers is therefore to focus on implementing
narrow framework interfaces and (re-)configuring the meta-level(s) of the system.
Following the identification of the required framework components, we argue that by exploiting
these components, systems can manage multiple layers of information uniformly. This is in
contrast with other related works that provide different models for object instances and for their
runtime class representation. The benefits of our approach include reduced complexity of system
architecture, reduced system maintenance and the relatively simple extension of a system towards
higher abstraction layers.
2. Elements of the Description-Driven Component Framework
The DD approach targets both problems of system evolution management and data complexity
management through the judicious use of metadata. Information about the instance model, system
configuration decisions, instance object structures through simulation, and property values that are
common to a set of instance objects are abstracted and represented by meta-objects. In this section we
will identify those system entity parts that are required as generic building blocks of a DD system.
The generic system building block (entity) requires a set of properties, often given as name and
value pairs. Properties may reflect entity characteristics (such as unique identity) at instance and
meta-levels, while meta-level properties may additionally contain the types of instance properties and
meta-information. Various semantically different relationships, including grouping, aggregation,
description and generalization, represent specific kinds of object connections that are fundamental
parts of an OO system. The management of this semantic information is, however, left to the
software developer in most programming languages.
The use of a reified graph pattern [5] provides a generic solution to representing relationships
between entities in a reflective and managed form, by raising the semantics of the relationship often
from a simple object pointer to a first-class object. As a practical example, information held by these
objects may include the number of related objects, the distance between related objects or a mixing
ratio of related objects. Any change propagation upon request to a multiple number of related entities,
and the handling of relationship information by the reified graph pattern itself can be achieved in a
well-established, synchronised and reliable manner, taking away this task from system developers.
Objectified relationships at the meta-level simulate instance object structures and hold descriptive
information related to the instance object structures, such as reconfigurable structural constraints.
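The reified graph pattern can be sketched as follows: the relationship itself becomes a first-class object carrying its own information (such as a mixing ratio of related objects) and able to propagate changes to all related entities. All names and the example data are assumptions of this illustration.

```python
class Relationship:
    """A first-class, objectified link between a source entity and its
    targets, raised from a plain object pointer to an object of its own."""
    def __init__(self, semantics, source, targets, **info):
        self.semantics = semantics   # e.g. 'aggregation', 'description'
        self.source = source
        self.targets = list(targets)
        self.info = info             # relationship-level data (counts, ratios, ...)

    def propagate(self, change):
        """Apply a change to every related entity in one managed place,
        taking this task away from individual system developers."""
        for target in self.targets:
            change(target)

# usage sketch: an aggregation relationship carrying a mixing ratio
parts = [{'qty': 1}, {'qty': 1}]
rel = Relationship('aggregation', source={'name': 'assembly'},
                   targets=parts, mixing_ratio=0.5)
rel.propagate(lambda part: part.update(qty=part['qty'] + 1))
```

Because the relationship is an object, constraints on instance structures (the “reconfigurable structural constraints” mentioned above) could be stored in its info and checked centrally during propagation.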
100
A Component Framework for Description-Driven Systems
The dynamic maintenance of entity behaviour is one of the main requirements for run-time program
development. Workflow management systems provide one such solution by allowing the run-time
reconfiguration of activities (adding, removing, reordering, splitting, joining, repeating, etc.), where
each activity describes a certain task for execution. Each individual entity having its own workflow
management process is capable of evolving independently. Meta-entities hold two types of workflow
information: they manage their own workflows and they hold a workflow description for those
instance entities that meta-entities describe.
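The run-time reconfiguration of activities described above might be sketched as follows; the Workflow and MetaEntity names are hypothetical, chosen only to show a meta-entity managing its own workflow alongside a workflow description for the instances it describes.

```python
class Workflow:
    """An ordered list of activities that can be reconfigured at run time
    (adding, removing, reordering, etc.)."""
    def __init__(self, activities=()):
        self.activities = list(activities)

    def add(self, activity, position=None):
        # insert at a given position, or append at the end
        if position is None:
            self.activities.append(activity)
        else:
            self.activities.insert(position, activity)

    def remove(self, activity):
        self.activities.remove(activity)

class MetaEntity:
    """Holds the two kinds of workflow information named in the text:
    its own workflow, and a workflow description for its instances."""
    def __init__(self):
        self.own_workflow = Workflow()
        self.instance_workflow_description = Workflow()
```

Each entity owning its own Workflow instance is what lets entities evolve independently of one another.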
As the system is continually allowed to evolve through its description layer, the management of
coexisting versions of structure, workflow and data requires an audit-trail system that records the
events that trigger important design changes, thus setting the basis for version control. As events that
produce system changes (such as a new version of a script, or structural or behavioural changes at
any abstraction level) are triggered by the execution of tasks, these tasks may generate resultant data
specific to the executed task. The type of such data is specific to the executed script. Recording such
data, attached to the triggering event at each level of abstraction, allows the data to evolve, making
all intermediate states accessible at a later stage. Considering the frequent data type evolution of a
large number of scripts, we believe that an XML-like data format is best suited to the storage of such
information, all the more so since object-oriented and XML formats have well-established conversion
techniques, and rapid type evolution does not necessarily affect the data description of the storage
device. Another important feature is the XSD type description of XML document instances, where
the format of the document type specification conforms to the format of the document instance,
opening opportunities for extensibility towards more abstract layers.
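The audit-trail idea can be sketched as follows, with each design-changing event recorded together with its task-specific result data in a flexible serialized form (JSON here stands in for the XML-like format discussed above). The structure and all names are assumptions of this illustration.

```python
import json
import time

class AuditTrail:
    """Records the events that trigger design changes, together with the
    result data of the task that triggered them, as a basis for version
    control and later access to intermediate states."""
    def __init__(self):
        self.events = []

    def record(self, trigger, level, result_data):
        # result data is kept serialized so its type can evolve freely
        # per script version without changing the storage description
        self.events.append({
            'trigger': trigger,            # e.g. 'new script version'
            'level': level,                # abstraction level affected
            'data': json.dumps(result_data),
            'timestamp': time.time(),
        })

    def history(self, level):
        """All recorded events for one abstraction level, in order."""
        return [e for e in self.events if e['level'] == level]
```

Replaying or inspecting history(level) gives access to the intermediate states that the text argues must remain reachable as structure, workflow and data versions coexist.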
To produce ‘snapshots’ of an entity, to retrieve coherent versions of a number of entities or to search
through result data stored in the audit-trail, each entity element needs to obtain a viewpoint that acts as
a query executor. Its task involves query optimization techniques such as query rewriting [7]
exploiting the simulation, and query propagation to related entities. Viewpoints may become tools for
providing crosscutting concerns, as used in aspect-oriented software development.
[Figure 2 sketch: three abstraction layers (Objects/Data, Meta-Objects/Meta-Data, Meta-Meta-Objects/Meta-Meta-Data), each built on the same Component Framework, arranged along the Modeling Abstraction axis (Instance Model, Meta-Model) and the Data Abstraction axis.]
Figure 2. The Description-Driven approach based on a component framework.
In traditional OO programming public methods represent services to other objects, while methods of
other objects specify the sequence of methods to be called. According to our approach, tasks that are
specified by workflow activities of an entity need to be completed by the same or another entity.
Capabilities of an entity present a list of activity types that the entity offers to perform.
The above system components (highlighted in bold) have been identified in order to provide generic
building blocks to model traditional objects and their descriptions in a way that reflects DD systems.
Distributed computing and object persistence are additional issues that can be addressed at the level of
the entity, by supplying it with remote object manageability and persistence capability. Appropriate
information exchange between components and their appropriate organization within and across layers
of the multi-layered architecture, establish a holistic framework for the development of DD systems.
External to the component framework, the system needs to support the overall management of generic
entities. These tasks include database management, unique identification management, roles and
policies, and graphical user interfaces that provide run-time application development. The resulting
component framework can be reused for each abstraction layer, simplifying the system architecture and
allowing ease of extension towards higher abstraction layers, as shown in Figure 2.
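The layer-independence of the framework can be conveyed by a small sketch in which every entity is the same kind of object and its description is simply another such entity one abstraction layer up (the class name and layer labels are illustrative):

```python
# Sketch: one generic entity class reused at every abstraction layer;
# an entity's description is an entity on the next layer up.
class GenericEntity:
    def __init__(self, ident, layer, description=None):
        self.ident = ident
        self.layer = layer              # e.g. "data", "meta", "meta-meta"
        self.description = description  # entity on the next layer up, if any

def describe_chain(entity):
    """Walk from an instance up through its chain of descriptions."""
    chain = []
    while entity is not None:
        chain.append(entity.layer)
        entity = entity.description
    return chain

mm = GenericEntity("ProductTypeType", "meta-meta")
m = GenericEntity("CrystalType", "meta", description=mm)
d = GenericEntity("crystal-42", "data", description=m)
```

Because the same class serves every layer, extending the system upwards means adding another link in the chain rather than new machinery.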
3. A Practical Example – The CRISTAL System at CERN
It has been argued above that property, workflow, collection, audit-trail, viewpoint and capability
components form a set of generic system elements that can be managed collectively to facilitate the
building of DD systems relatively independently of the layer of abstraction. In order to build an
application, developers need to adopt these concepts and adapt them to their specific domain and
designs. We present the requirements and an overview of the CRISTAL software application as an
example to show how a particular DD system copes with the special production management
requirements of a large-scale experiment at CERN. Although our example is taken from the
application domain of engineering data management, other process-oriented areas such as software
development management, business process management and software integration tools are also
suitable domains.
3.1 Requirements
The CRISTAL software (Cooperative Repositories and Information System for Tracking Assembly
Lifecycles) [8, 9] has been developed to meet the production management requirements of one of the
physics experiments at CERN. The experiment involves the implementation of a one-of-a-kind
complex sub-detector by constructing a prototype and the final product at the same time. The
sub-detector consists of a large number of compound products. These products are categorized into a large
number of types, each described by different parameters and subject to a different set of workflow
activities. Physicists, being the main users of the system, often need to apply changes to the detector
design such as the specification of additional properties or new workflow activities for an existing
product type, the introduction of a new type, or the modification of the workflow for specific products
of a type. These changes may happen independently in geographically distributed production centres.
The physical characteristics of each product are measured and possible problems diagnosed before and
after assembly. All workflow activity results are recorded to provide full traceability. By the end of
production, the accumulated data are estimated to reach the order of one terabyte. This data serves as the
basis for maintaining quality control and, more importantly in this scientific setting, an iterative
refinement of the production process.
The type of measurements defined by a specified workflow activity type may evolve over time as
physicists study and learn from previous results. Although the production process of each product is
identified by the type of the product, there can be no accurate description of how an individual product
needs to be handled, as each product may be subject to an ad-hoc process modification due to the
research nature of the production. System evolution therefore occurs as new product types have to be
introduced and modified by physicists across the geographically distributed system. These system
requirements demand software solutions that show scalable and complex data management and
flexibility towards system evolution.
A Component Framework for Description-Driven Systems
CRISTAL is a distributed product data and workflow management system, which makes use of an OO
database for its repository, a multi-layered architecture for its component abstraction and dynamic
object modeling for the design of the objects and components of the system. CRISTAL is based on a
DDS architecture using meta-objects. The DDS approach has been followed to handle the complexity
of such a data-intensive system and to provide the flexibility to adapt to the changing scenarios found
at CERN that are typical of any research production system. In its two years of operation,
CRISTAL has gathered over 25 Gbytes of data and coped with more than 30 evolutions
of the underlying data schema without code or schema recompilations. These changes included
evolutions of type definitions, the additions of new managed entities and on-the-fly redefinitions of
workflow components. In addition CRISTAL offers domain-independence in that the underlying data
model is generic in concept; for example, the Agilium group [10] is currently adapting the kernel of
the CRISTAL system for the purposes of commercial Enterprise Application Integration (EAI).
3.2 Component Organization
To fit the production management environment optimally onto the above concepts, the entity acting
as the overall container for all components has been separated into two parts. Fig. 3 illustrates
that the workflow activities, which define and request task executions and are held by the Passive
Traceable Entity (PTE), are separated from the actual task execution performed by the Active Entity
(AE). The Manageable Entity represents common utilities such as object identity, distribution
capability and persistence capability.
Figure 3. Component framework organized for production management
PTEs in production management may include products, order forms, Bills-of-Material (BOM), etc.
Activities contained in the workflow of a PTE describe those tasks that need to be performed on the
PTE. However, the tasks themselves need to be executed by an AE. AEs may represent various
instruments, human workers and user codes that perform specific calculations.
3.3 Workflow Activities
CRISTAL differentiates between two types of workflow activities. Predefined activities support
fundamental system manipulation to cope with runtime evolution, while application specific
extensions can be added by software developers during runtime to meet application requirements.
Predefined activities include manipulating workflow activities and manipulating an entity in the
collection structure. Examples of application-specific activities are the conversion from inches to
metres and the transversal light transmission measurement. One task is assigned to each activity. The
runtime evolution of workflow activities is supported by CRISTAL in two ways: predefined steps
provide creation and reorganization of workflow activities, while Java beans and a generic
communication interface for data exchange provide support for runtime task definition. Workflow
activities are graphically described in a number of steps with pre- and post-conditions using the
standard CRISTAL object specification mechanisms.
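The two-level activity scheme can be sketched as a registry that holds predefined, system-level tasks and accepts application-specific ones at runtime (the registry class and activity names are our illustration, not the CRISTAL bean interface):

```python
# Sketch: predefined activities are registered at start-up, while
# application-specific activities can be added while the system runs.
class ActivityRegistry:
    def __init__(self):
        self.tasks = {}

    def register(self, activity_type, task):
        self.tasks[activity_type] = task

    def execute(self, activity_type, *args):
        return self.tasks[activity_type](*args)

registry = ActivityRegistry()
# A predefined, system-level activity: manipulate an entity record.
registry.register("rename-entity", lambda entity, name: {**entity, "name": name})
# An application-specific activity added at runtime.
registry.register("inches-to-metres", lambda inches: inches * 0.0254)
```

Because tasks are looked up by activity type, registering a new task is all that is needed to extend the system at runtime, without recompilation.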
3.4 Workflow Activity Execution
The execution of PTE workflow activities is carried out by the AE system participants. Providing
multiple AEs with the same capability in a distributed working environment allows the assignment of
the next executable activities in such a way that load balancing is supported.
The communication channel between PTEs and AEs in the case of CRISTAL is established through a
notification service, decoupling the two participants. Executable activity tasks are assigned an AE
using the publish/subscribe mechanism and an appropriate business rule. Following the execution of
the task, the resulting data (if any) is sent from the AE to the PTE through a direct method invocation.
The PTE registers this event along with the data in the audit-trail component.
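The decoupled exchange described above can be sketched as follows; the class names and the least-loaded business rule are assumptions for illustration, not the CRISTAL API:

```python
# Sketch: AEs subscribe to a notification service by capability; an
# executable activity is published, assigned to the least-loaded
# subscriber, and the result is recorded in the PTE's audit-trail.
class NotificationService:
    def __init__(self):
        self.subscribers = {}   # capability -> list of AEs

    def subscribe(self, ae, capability):
        self.subscribers.setdefault(capability, []).append(ae)

    def publish(self, activity_type, payload, pte_audit_trail):
        # Example business rule: pick the AE with the fewest assigned tasks.
        candidates = self.subscribers.get(activity_type, [])
        ae = min(candidates, key=lambda a: a.load)
        result = ae.execute(activity_type, payload)
        pte_audit_trail.append((activity_type, result))
        return ae.name

class ActiveEntity:
    def __init__(self, name):
        self.name, self.load = name, 0

    def execute(self, activity_type, payload):
        self.load += 1
        return f"{activity_type}:{payload}:ok"

bus = NotificationService()
a1, a2 = ActiveEntity("station-1"), ActiveEntity("station-2")
bus.subscribe(a1, "measure")
bus.subscribe(a2, "measure")
trail = []
```

Since the PTE only publishes and the AEs only subscribe, neither side needs to know the other, and successive tasks spread naturally across equally capable AEs.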
3.5 Workflow and Workflow Description
When instance-level entities are added to the system, their descriptions, i.e. meta-level entities, need to
be assigned to them. Each meta-level entity may hold multiple versions of workflow descriptions
acting as templates for instance-level workflows that can be later modified. The creation of a
workflow description version is achieved by executing the task of a predefined activity belonging to
the meta-level entity. The task provides a tool to construct a new workflow description. In line with
any workflow activity execution, the resulting new workflow version – being the resulting data of the
task – is recorded in the audit-trail data of the meta-level entity. Some of the activities recur across
different workflow versions; by reducing the granularity of the recorded workflow to the level of an
activity, CRISTAL increases the design reuse and establishes scalability to the system.
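The template mechanism can be illustrated as follows (the class and method names are ours; the real system records versions in the audit-trail component):

```python
# Sketch: a meta-level entity keeps versioned workflow descriptions;
# instantiation copies one version as a template that the instance may
# later modify without touching the description.
class MetaEntity:
    def __init__(self):
        self.workflow_versions = []     # successive description versions

    def add_version(self, activities):
        self.workflow_versions.append(list(activities))
        return len(self.workflow_versions) - 1

    def instantiate(self, version):
        return list(self.workflow_versions[version])  # independent copy

crystal_type = MetaEntity()
v0 = crystal_type.add_version(["measure-length", "record-quality"])
wf = crystal_type.instantiate(v0)
wf.append("ad-hoc-recheck")            # instance-level modification
```

The instance-level change leaves the stored description version intact, so other products of the same type are unaffected.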
3.6 Data and Data Description
Data organized in the audit-trail permits the exchange of information between workflow activity
tasks. Different types and versions of tasks manipulate previously recorded results through the
viewpoint and append them with their own results. These resulting data can be generated by user code
or an instrument, or can be manually input by a human worker. Generating input forms based on the
schema describing the type of the result data offers one way to produce valid, type-conformant data.
Instance data descriptions – or schemas – are kept as data of the associated description-level entity.
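As a stand-in for the XSD check, a result record can be validated against the schema held by the description-level entity along these lines (the field names and the dictionary-based schema are hypothetical simplifications):

```python
# Sketch: validate a result record against a minimal schema description,
# a stand-in for the XSD conformance check.
def validate_result(result, schema):
    """schema maps each required field name to its required Python type."""
    for field, ftype in schema.items():
        if field not in result:
            return False
        if not isinstance(result[field], ftype):
            return False
    return True

transmission_schema = {"wavelength_nm": int, "transmission": float}
```

Generating input forms from the same schema guarantees that manually entered results pass this check by construction.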
3.7 Collections
CRISTAL uses two types of information stored in the meta-level reified collections: semantic
information interpreted by the system, such as the allowed number of described instance-level entities,
and application-specific information that is propagated to the instance-level collection when a new
instance-level entity with its collections is created. This application-specific information can then be
interpreted by application-specific workflow activity tasks.
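The split between system-interpreted and application-specific collection information can be sketched as follows (class and attribute names are ours):

```python
# Sketch: a reified collection description. The system interprets the
# cardinality limit; the opaque application data is copied into each new
# instance-level collection.
class CollectionDescription:
    def __init__(self, max_members, app_info):
        self.max_members = max_members
        self.app_info = app_info          # opaque to the system itself

    def instantiate(self):
        return Collection(self)

class Collection:
    def __init__(self, description):
        self.description = description
        self.app_info = dict(description.app_info)  # propagated copy
        self.members = []

    def add(self, member):
        if len(self.members) >= self.description.max_members:
            raise ValueError("collection is full")
        self.members.append(member)

bom_desc = CollectionDescription(2, {"assembly-order": "front-first"})
bom = bom_desc.instantiate()
```

The system enforces the member limit itself, while the propagated `app_info` is left for application-specific activity tasks to interpret.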
3.8 Properties and Property Descriptions
Properties of an instance-level entity are created in an analogous way to that in which workflows are
instantiated from workflow descriptions. Property descriptions are stored in the audit-trail of a
dedicated property description entity attached to the meta-level entity. The semantic information of
property descriptions, such as optional properties and default values, is interpreted by a predefined
activity. A runtime-manageable XSD schema is attached to each type of property description. Based
on this schema information, graphical building tools may provide users with property editor forms
that produce XML-formatted documents. Valid XML documents can be turned into property objects
and attached to the created instance-level entity.
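The interpretation of property descriptions by the predefined activity can be sketched like this (the description format and product properties are invented for the example):

```python
# Sketch: instantiate the properties of a new entity from its property
# descriptions, filling in defaults and admitting omitted optional fields.
def instantiate_properties(descriptions, supplied):
    props = {}
    for name, desc in descriptions.items():
        if name in supplied:
            props[name] = supplied[name]
        elif "default" in desc:
            props[name] = desc["default"]
        elif not desc.get("optional", False):
            raise ValueError(f"missing required property {name!r}")
    return props

crystal_props = {
    "length": {},                        # required, no default
    "producer": {"default": "unknown"},
    "comment": {"optional": True},
}
```

A user need only supply the required values; defaults and optional fields are handled by the description, not by application code.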
3.9 Towards Higher Abstractions
Extending the system towards the meta-meta-level provides bootstrapping configuration options for
meta-level entities. Predefined activities, also available at the highest implemented abstraction layer,
provide initial functionality to configure that layer to application needs, from which lower abstraction
layers can be instantiated in a scalable manner. The definition of meta-meta schemas at the meta-meta
layer in CRISTAL puts constraints on meta-layer schemas, including, for example, the prohibition of
specific languages and certain property naming rules.
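A meta-meta-layer constraint of this kind can be sketched as a rule applied to every meta-layer schema; the lower-case naming rule below is a made-up example of such a constraint:

```python
# Sketch: a meta-meta-layer constraint checking a property naming rule
# (here: lower-case identifiers only) against a meta-layer schema.
import re

def conforms_to_naming_rule(meta_schema):
    """All property names must be lower-case words."""
    pattern = re.compile(r"^[a-z][a-z0-9_]*$")
    return all(pattern.match(name) for name in meta_schema)

good_schema = {"length": "int", "producer": "string"}
bad_schema = {"Length": "int"}
```

Constraints expressed once at the meta-meta layer automatically bound every schema defined below, which is what makes the bootstrapping configuration scalable.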
4. Conclusions
Parallel work on run-time system evolution management has shown the need to model different
abstraction layers with dedicated software architectures in multi-layered systems (e.g. [3, 4]). This
position paper argues that it is possible to represent multiple layers of abstractions by a set of
collectively managed reusable generic system components. We support our argument by introducing
an implementation of a large-scale production management system as an example [8, 9].
Benefits of the resulting component framework include system integration and interoperability, rapid
software development of a simplified system architecture, and runtime adaptability to system
evolution.
Acknowledgments
The authors take this opportunity to acknowledge the support of their home institutes and numerous
colleagues responsible for the CRISTAL software. In addition Professor Richard McClatchey
acknowledges the support of the Royal Society in the preparation of this paper.
References
[1] The Manifesto for Agile Software Development. See: http://agilemanifesto.org/
[2] Z. Kovacs. “The Integration of Product Data with Workflow Management Systems”. PhD Thesis, University of the West of England, Bristol, England, 1999.
[3] J. W. Yoder & R. Johnson. “The Adaptive Object-Model Architectural Style”. Working IEEE/IFIP Conference on Software Architecture (WICSA3), Montreal, Canada, August 2002.
[4] D. Riehle, S. Fraleigh, D. Bucka-Lassen, N. Omorogbe. “The Architecture of a UML Virtual Machine”. Proceedings of the Conference on Object-Oriented Programming, Systems, Languages and Applications (OOPSLA), Tampa Bay, USA, October 2001.
[5] F. Estrella. “Objects, Patterns and Descriptions in Data Management”. PhD Thesis, University of the West of England, Bristol, England, 2000.
[6] P. Buneman, S. B. Davidson, M. F. Fernandez, D. Suciu. “Adding Structure to Unstructured Data”. Proceedings of the International Conference on Database Theory (ICDT), Delphi, Greece, January 1997.
[7] C. Koch. “Optimizing Queries Using a Meta-level Database”. arXiv:cs.DB/0205060 v1, 2002.
[8] F. Estrella, Z. Kovacs, J-M. Le Goff, R. McClatchey, T. Solomonides & N. Toth. “Pattern Reification as the Basis for Description-Driven Systems”. In press, Vol. 2, Issue 2 of the Journal of Software and System Modeling, Springer-Verlag, ISSN 1619-1366, 2003.
[9] F. Estrella, J-M. Le Goff, Z. Kovacs, R. McClatchey & S. Gaspard. “Promoting Reuse Through the Capture of System Description”. Lecture Notes in Computer Science, Vol. 2426, pp. 101-111, ISBN 3-540-44088-7, Springer-Verlag, 2002 (presented at the OOIS 2002 Workshop on Reuse in Object-Oriented Information Systems Design, Montpellier, France, September 2002).
[10] S. Gaspard, F. Estrella, R. McClatchey & R. Dindeleux. “Managing Evolving Business Workflows through the Capture of Descriptive Information”. Accepted by eCOMO'2003, 4th Int. Workshop on Conceptual Modeling Approaches for e-Business, at the 22nd International Conference on Conceptual Modeling (ER 2003), Chicago, USA, October 2003.