Complex Systems Design & Management
Proceedings of the Ninth International Conference on Complex Systems Design & Management
CSD&M Paris 2018

Editors
Eric Bonjour, Université de Lorraine, Laxou, France
Daniel Krob
Luca Palladino, Safran, Magny-les-Hameaux, France

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Preface
Introduction
The CSD&M Paris 2018 edition received 52 submitted papers, out of which the Program Committee selected 19 regular papers to be published in the conference proceedings. The resulting 37% acceptance ratio guarantees the high quality of the presentations. The Program Committee also selected 16 papers for a collective presentation during the poster workshop of the conference.
Each submission was assigned to at least two Program Committee members,
who carefully reviewed the papers, in many cases with the help of external referees.
These reviews were discussed by the Program Committee Co-chairs during an online meeting held by 26 June 2018, and were managed via the EasyChair conference system.
We also chose several outstanding speakers with industrial and scientific expertise who gave a series of invited talks covering the whole spectrum of the conference during the two days of CSD&M Paris 2018. The conference was organized around a common topic: “Products and Services Development in a Digital World.” Each day offered several invited keynote presentations and an “à la carte” program consisting of accepted papers’ presentations grouped into different sessions (thematic tracks on Day 1 and sectoral tracks on Day 2).
Furthermore, we held a “poster workshop” to encourage presentation and discussion of interesting but “not-yet-polished” ideas. CSD&M Paris 2018 also offered booths presenting the latest engineering and technological news to participants.
Conference Chairs
General Chair
Program Committee
Academic Members
Co-chair
Members
Industrial Members
Co-chair
Members
Organizing Committee
Chair
Members
Invited Speakers
Plenary sessions
“Aeronautics” Track
“Energy” Track
We would like to thank all members of the Program and Organizing Committees for
their time, effort, and contributions to make CSD&M Paris 2018 a top-quality
conference. Special thanks go to the CESAM Community team who permanently
and efficiently managed all the administration, logistics, and communication of the
CSD&M Paris 2018 conference (see http://cesam.community/en).
The organizers of the conference are also grateful to the following partners
without whom the CSD&M Paris 2018 event would not exist:
• Founding partners
– CESAM Community managed by the Center of Excellence on Systems
Architecture, Management, Economy & Strategy (CESAMES),
– Association Française d’Ingénierie Système (AFIS),
– The Ecole Polytechnique – ENSTA ParisTech – Télécom ParisTech –
Dassault Aviation – Naval Group – DGA – Thales “Engineering of Complex
Systems” chair.
• Industrial and institutional partners
– Airbus Group,
– Alstom Transport,
– ArianeGroup,
– INCOSE,
– MEGA International,
– Renault,
– Thales.
• Participating engineering and software tools companies
– Airbus Apsys,
– Aras,
– Dassault Systèmes,
– Easis Consulting,
– Esteco,
– Hexagon PPM,
– Intempora,
– No Magic Europe,
– Obeo,
– Persistent Systems,
– SE Training,
– Siemens,
– Sodius.
Contents

Regular Papers

Formal Methods in Systems Integration: Deployment of Formal Techniques in INSPEX (p. 3)
Richard Banach, Joe Razavi, Suzanne Lesecq, Olivier Debicki, Nicolas Mareau, Julie Foucault, Marc Correvon, and Gabriela Dudnik

Ontology-Based Optimization for Systems Engineering (p. 16)
Dominique Ernadote

On-Time-Launch Capability for Ariane 6 Launch System (p. 33)
Stéphanie Bouffet-Bellaud, Vincent Coipeau-Maia, Ronald Cheve, and Thierry Garnier

Towards a Standards-Based Domain Specific Language for Industry 4.0 Architectures (p. 44)
Christoph Binder, Christian Neureiter, Goran Lastro, Mathias Uslar, and Peter Lieber

Assessing the Maturity of Interface Design (p. 56)
Alan Guegan and Aymeric Bonnaud

Tracking Dynamics in Concurrent Digital Twins (p. 67)
Michael Borth and Emile van Gerwen

How to Boost the Extended Enterprise Approach in Engineering Using MBSE – A Case Study from the Railway Business (p. 79)
Marco Ferrogalini, Thomas Linke, and Ulrich Schweiger

Model-Based System Reconfiguration: A Descriptive Study of Current Industrial Challenges (p. 97)
Lara Qasim, Marija Jankovic, Sorin Olaru, and Jean-Luc Garnier

Posters

The Systems Engineering Concept (p. 233)
Henrik Balslev
Formal Methods in Systems Integration: Deployment of Formal Techniques in INSPEX

Richard Banach, Joe Razavi, Suzanne Lesecq, Olivier Debicki, Nicolas Mareau, Julie Foucault, Marc Correvon, and Gabriela Dudnik

1 Introduction
The contemporary hardware scene is driven, to a large extent, by the desire
to make devices smaller and of lower power consumption. Not only does this
save materials and energy, but given the commercial pull to make mobile phones
increasingly capable, when small low power devices are incorporated into mobile
phones, it vastly increases the market for them. The smartphone of today is
unrecognisable (in terms of the facilities it offers) from phones even as little as
a decade old. This phenomenon results from ever greater advances in system
mobile, since along with the need to be more smart, they particularly need to
avoid harm to any humans who may be working nearby. Security surveillance
systems, traditionally relying on infra-red sensors, can also benefit from the extra
precision of INSPEX.
At the top of Fig. 1 we see some examples concerned with distance estimation.
Modern distance measuring tools typically make use of a single laser beam whose
reflection is processed to derive the numerical result. For surfaces other than
smooth hard ones, the measurement arrived at may be imprecise, for various
reasons. INSPEX can perform better in such situations by combining readings
from a number of sensors. A very familiar application area for such ideas is
autofocus in cameras. These days, camera systems (typically in leading edge
phones) employ increasingly sophisticated algorithms to distinguish foreground
from background, to make up for varying lighting conditions, and generally to
compensate for the user’s lack of expertise in photography. INSPEX can add to
the capabilities available to such systems.
Although a large number of use cases are envisaged for a system such as INSPEX,
the primary use case addressed within the INSPEX Project is the smart white
cane to assist visually impaired and blind persons. Figure 2 shows a schematic
of one possible configuration for the attachment of a smart addon to a standard
type of white cane. The white cane application needs other devices to support
the white cane addon, in order that a system usable by the VIB community
ensues. Figure 3 shows the overall system architecture.
As well as the Mobile Detection Device addon to the white cane, there is an Audio Headset containing extra-auricular binaural speakers and an inertial measurement unit (IMU)—the latter so that an audio image correctly oriented with respect to 3D space may be projected to the user, despite the user’s head movements (Fig. 3 shows the architecture of the INSPEX system). Another vital component of the system is a smartphone. This correlates the information
obtained by the mobile detection device with what is required by the headset.
It also is able, in smart city environments, to receive information from wireless
beacons which appropriately equipped users can access. This enables the whole
system to be even more informative for its users.
The white cane add-on contains the sensors that generate the data needed
to create the information that is needed by the user. Chief among these are a short range LiDAR, a long range LiDAR, a wideband RADAR, and a
MEMS ultrasound sensor. Besides these there are the support services that they
need, namely an Energy Source Unit, environmental sensors for ambient light,
temperature and humidity, another IMU and a Generic Embedded Platform
(GEP).
The main sensors are subject to significant development and miniaturisation
by a number of partners in the INSPEX project. The short range LiDAR is
developed by the Swiss Center for Electronics and Microtechnology (CSEM)
and the French Alternative Energies and Atomic Energy Commission (CEA).
The long range LiDAR is developed by the Tyndall National Institute Cork and
SensL Technologies, while the wideband RADAR is also developed by CEA. The
MEMS ultrasound sensor is from STMicroelectronics (STM). Cork Institute of Technology (CIT) designs the containing enclosure and support services, while the audio headset is designed by the French SME GoSense.
The GEP has a noteworthy challenge to confront. Data from the sensors
comes in at various times, and with varying reliability. Distance measurements
from the sensors are just that, merely distance data without any notion of direc-
tion, or orientation with respect to the user. The latter is elucidated by reference
to data from the IMU in the mobile detection device. Data from both the IMU
and directional sensors is timestamped, since freshness of data is crucial in pro-
viding information to the user that is not only accurate but timely. This enables
distance sensor data to be aggregated by time and IMU data.
Once the data has been correctly aggregated, it is passed to the module
in the GEP that computes the occupation grid. This is a partition of the 3D
space in front of the user into cells, each of which is assigned a probability of
its being occupied by some obstacle. The occupation grid idea is classical in the autonomous vehicle domain, but its standard implementation involves intensive floating point computation [28,35]. This is too onerous for the kind of lightweight applications envisaged by the INSPEX concept. Fortunately, INSPEX is able to benefit from a highly efficient implementation of the occupation grid, thanks to a careful analysis of the computations needed to derive a good occupation grid result [13]. The integration of all the hardware and software activities described constitutes a non-trivial complex systems undertaking.
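To make the idea concrete, the following minimal sketch shows the classical form of the occupancy grid update in the style of [35]; it is an illustration only, not the optimised formulation of [13], and the sensor-model constants are assumed values.

    import math

    # Assumed inverse sensor model: how strongly a single reading pushes a cell
    # towards "occupied" (hit) or "free" (miss).
    L_OCC = math.log(0.7 / 0.3)
    L_FREE = math.log(0.3 / 0.7)

    def update_cell(log_odds, hit):
        # Bayesian log-odds update of one grid cell from one range measurement.
        return log_odds + (L_OCC if hit else L_FREE)

    def occupancy_probability(log_odds):
        # Convert a cell's accumulated log-odds back to an occupancy probability.
        return 1.0 - 1.0 / (1.0 + math.exp(log_odds))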
The wide range of sensors and their concomitant capabilities in the INSPEX
white cane application is necessitated by the detailed needs of VIB persons nav-
igating around the outdoors environment (in particular). Although a standard
white cane can give good feedback to its user regarding the quality and charac-
teristics of the ground in front of them, especially when the ground texture in
the urban environment is deliberately engineered to exhibit a range of standard
Whereas all the preceding approaches relied on there being a model of the
system that was presented in a relatively abstract language, the growing power
and scalability of tools generated an interest in techniques that worked directly
on implementation level code. By now there are many well established tools
that input an implementation in a given language such as C or C++, and that
take this implementation and then analyse it directly for correctness properties
[37]. Very often these properties are predefined runtime correctness properties
concerning commonly introduced programmer errors, such as (the absence of)
division by zero or (the absence of) null pointer dereference. Some, however, e.g. [6,9], allow more application-specific properties to be checked.
While direct checking of implementations would appear to be a panacea for
verification, it nevertheless risks overemphasising low level system properties at
the expense of the higher level view. When we recognise that deciding what
the system should be is always a human level responsibility, and that formal
approaches can only police the consistency between different descriptions of the
same thing, abandoning the obligation to independently consider the abstract
high level view of the system risks abandoning a valuable source of corroboration
of the requirements that the system is intended to address. It is this kind of
‘stereoscopic vision’ on what the system ought to do and to be that constitutes
the most valuable contribution that a top-down formal approach makes to system
development, quite aside from the formal consistency checking.
In normal software developments, one starts the process with a good idea
of the capabilities of software in general, so in principle, it is feasible to use
a relatively pure top-down approach. Likewise in most hardware developments
that take place at the chip level, one starts the process with a good idea of the
capabilities of the technology platform that will be used, and working top-down
is perfectly feasible (and in fact is unavoidable given the scale of today’s chips).
In both of these cases deploying top-down formal techniques (if the choice is
made to do so) is feasible.
In INSPEX, however, the development of the devices at the physical level is a key element of ongoing project activity, and the low level properties of all the devices used in the INSPEX deliverable are contingent and emergent to a significant extent. This makes naive use of top-down approaches problematic: there is no guarantee that the low level model emerging from a top-down development process will be drafted in terms of low level properties that are actually reflected in the devices available, because the constraints on the system’s behaviour that are directly attributable to physics are simply incontestable. As a result, the approach to incorporating formal techniques in INSPEX was a hybrid one. Top-down and bottom-up approaches were pursued concurrently, with the aim of meeting in the middle.
The next sections cover how this hybrid approach was applied in two of the INSPEX Project’s activities, namely the design of the power management strategy for the mobile detection device module, and the verification of the data pathway from the sensors to the data fusion application.
(Fig. 4: simplified state transition diagram for the Bluetooth submodule, showing states LowPower, Dormant, and Active, with transitions labelled UART on, UART off, Reset, Cmd A, Cmd 0,0, LP-Advert, and Timeout.)
The most primitive properties include the state transition diagrams of the
various sensors and other components. Figure 4 gives an example of a transi-
tion diagram for the Bluetooth submodule, rather drastically simplified from
A formal model such as the fragment above relates to the low level real
time software and firmware as follows. Each event in the model corresponds to
a software or firmware command, or an interrupt routine. The guard portion,
expressed in the WHEN clause of the event, corresponds to the entry condition
code in the command, or scheduler code that checks the cause of the interrupt.
The event’s THEN clause corresponds to the software command body, or the
interrupt handler routine. As stated earlier, capturing all the commands and
sources of interrupt enables questions of overall consistency to be examined.
Once the low level integrity has been established, other considerations can be
brought to bear. A major element is the quantitative aspect. Event descriptions
as above are embellished with numerical data regarding the energetic conse-
quences of executing the event, enabling overall conclusions about energy con-
sumptions to be drawn. Finally, considerations of overall power management
policy can be layered onto the formal model and made to correspond with the
implementation code.
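As an illustration of this event structure (a sketch in Python rather than the project’s actual Event-B models; the state names follow the simplified Bluetooth diagram of Fig. 4, and the energy figure is an assumed placeholder), each event pairs a guard with an action and carries a quantitative annotation, so that the energy used along a trace of events can be accumulated:

    from dataclasses import dataclass

    @dataclass
    class Event:
        name: str
        guard: callable    # WHEN clause: entry condition / interrupt cause check
        action: callable   # THEN clause: command body / interrupt handler
        energy_mj: float   # quantitative annotation layered onto the model

    # Hypothetical event: switching the Bluetooth submodule's UART on.
    uart_on = Event(
        name="UART on",
        guard=lambda s: s["bt"] == "LowPower",
        action=lambda s: {**s, "bt": "Active"},
        energy_mj=0.8,  # assumed value, for illustration only
    )

    def run(trace, state):
        # Fire each event whose guard holds and accumulate the energy consumed.
        used = 0.0
        for ev in trace:
            if ev.guard(state):
                state = ev.action(state)
                used += ev.energy_mj
        return state, used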
As outlined earlier, in INSPEX there are several sensors, each working to different
characteristics, but all contributing to the resolution of the spatial orientation
challenge that is the raison d’être of INSPEX.
The various INSPEX sensors work at frequencies that individually can vary
by many orders of magnitude. For example, the LiDARs can produce data frames
with extreme rapidity, whereas the ultrasound sensor is limited by the propaga-
tion speed of pressure waves through the air, which is massively slower than the
propagation characteristics of electromagnetic radiation. The ultrasound sensor,
in turn, can produce data frames much faster than typical human users are able
to re-orient their white canes, let alone move themselves to a significant degree,
either of which requires a fresh occupation grid to be computed. This means
that the data integration performed by INSPEX has to be harmonised to the
pace of the human user.
The main vehicle for achieving this is the IMU. The IMU is configured to
supply readings about the orientation of the INSPEX mobile detection device
addon at a rate commensurate with the needs of human movement. This ‘pace-
maker rate’ is then used to solicit measurements from the other sensors in a
way that not only respects their response times but is also staggered sufficiently
within an individual IMU ‘window’ that the energy demands of the individual
measurements are not suboptimal with respect to the power management policy
currently in force.
The above indicates a complex set of information receipt tasks, made the
more challenging by the fact that all the sensors speak to the same receiving
hardware. The goal of the information receipt tasks is to harvest a collection
of data frames from the individual sensors, each timestamped by its time of measurement, and each related to a before-IMU data frame and an after-IMU data frame, each itself timestamped. The two successive IMU data frames, and the way their data might differ due to user movement, enable the orientation of the distances delivered at various times by the other sensors to be interpolated.
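The following small sketch (in Python, with hypothetical names and a single orientation component for simplicity; it is not INSPEX code) illustrates how a distance frame bracketed by two timestamped IMU frames can have its orientation interpolated at the instant of measurement:

    from dataclasses import dataclass

    @dataclass
    class ImuFrame:
        t: float         # timestamp of the IMU reading
        yaw_deg: float   # one orientation component, kept scalar for simplicity

    def orientation_at(t, before, after):
        # Linear interpolation of orientation between the before- and after-IMU frames.
        if after.t == before.t:
            return before.yaw_deg
        alpha = (t - before.t) / (after.t - before.t)
        return before.yaw_deg + alpha * (after.yaw_deg - before.yaw_deg)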
Timing is evidently of critical importance in the management of the incoming
data. This notwithstanding, all the tasks that handle these information manage-
ment duties are executed at the behest of the generic embedded device’s low
level scheduler. The scheduler used belongs to the real time operating system
employed in the GEP, which is a version of FreeRTOS [18].
Turning to the formal modelling of what has just been described, it may
well seem that the complexity of the situation might defeat efforts to add useful
verification to the design. The situation is helped considerably by the existence
of a formal model of the FreeRTOS scheduler [14]. This is in the kind of state
oriented model based form that can be made use of in the modelling and verifi-
cation of the data acquisition pathway in INSPEX. Accordingly, the properties
of the FreeRTOS scheduler derived in [14] can be translated into the Event-B
framework used for INSPEX and then fed in as axioms in the Event-B models
that contribute to the INSPEX data acquisition pathway verification.
Within this context, the rather delicate modelling of timing issues indicated
above can be based on a sensible foundation. The complexities of the behaviour
In the preceding sections, we introduced the INSPEX Project and its intended
use cases, before homing in on the VIB white cane add-on use case which forms
the focus of the project itself. The main objective of this paper was to describe
the use of formal techniques within INSPEX, to which we addressed ourselves
in Sect. 4. This contained a summary of the deployment of formal techniques in
the data acquisition pathway and the power management design.
Given the practical constraints of the project, it was impossible to follow a
pristine top-down formal development approach in combining formal and more
mainstream techniques. Given that the two approaches were being pursued con-
currently, one of the greatest challenges that arises is to keep both activities in
step. Little purpose is served by verifying the correctness of a design that has
been superseded and contradicted in some significant aspect. The formal activity
therefore paid significant attention to checking whether the growing implementa-
tion continued to remain in line with what had previously been formally modelled
and verified. This way of working contributed the greatest element of novelty to the combined use of formal and conventional techniques in the project, and constitutes a stimulus for finding further novel ways of reaping the benefits of fusing the two approaches.
References
1. Alloy. http://alloy.mit.edu/
2. Abrial, J.R.: The B-Book: Assigning Programs to Meanings. Cambridge University
Press (1996)
3. Abrial, J.R.: Formal Methods in Industry: Achievements, Problems, Future. In: Proceedings of ACM/IEEE ICSE 2006, pp. 761–768 (2006)
4. Abrial, J.R.: Modeling in Event-B: System and Software Engineering. CUP (2010)
5. Andronick, J., Jeffery, R., Klein, G., Kolanski, R., Staples, M., Zhang, H., Zhu, L.:
Large-scale formal verification in practice: a process perspective. In: Proceedings
of ACM/IEEE ICSE 2012, pp. 374–393 (2012)
6. Astrée Tool. http://www.astree.ens.fr/
14 R. Banach et al.
32. Spivey, J.: The Z Notation: A Reference Manual, 2nd edn. Prentice-Hall Interna-
tional (1992)
33. Stepney, S.: New Horizons in Formal Methods. The Computer Bulletin, pp. 24–26
(2001)
34. Stepney, S., Cooper, D.: Formal Methods for Industrial Products. In: Proceedings
of 1st Conference of B and Z Users. LNCS, vol. 1878, pp. 374–393. Springer (2000)
35. Thrun, S., Burgard, W., Fox, D.: Probabilistic Robotics. MIT Press (2005)
36. UPPAAL Tool. http://www.uppaal.org/
37. Wikipedia: List of tools for static code analysis. https://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis
38. Woodcock, J.: First steps in the verified software grand challenge. IEEE Computer 39(10), 57–64 (2006)
39. Woodcock, J., Banach, R.: The verification grand challenge. JUCS 13, 661–668
(2007)
Ontology-Based Optimization for Systems Engineering

Dominique Ernadote
1 Introduction
System engineering is a multi-domain process that encompasses the design, realisation, delivery, and management of complex systems or systems of systems. Best practices that ensure the quality of such processes have been documented in standards, such as ISO 15288 [15], and system engineering resources, such as the INCOSE handbook [2,12]. These standards assess and describe the different activities of the system engineering process, detail the involved stakeholders and their responsibilities with respect to these activities, and specify the required and the produced deliverables. These descriptions are highly useful, but they remain mainly document-centric. Meanwhile, a Model-Based System Engineering (MBSE) approach, which depends upon the creation of centralized models to produce the expected deliverables, is commonly accepted by the system engineering community [17]. Standard metamodels such as UML [21], SysML [20] or NMM/NAF [18] are typically used to describe the relevant concepts for these descriptive models.
A concrete implementation of an MBSE approach implies that system engi-
neers know the corresponding metamodels. Such metamodels are typically well
understood within technical domains, for example software development based
on UML, or database development using conceptual data models and schemas.
However, if one considers the stakeholders involved in the entire system engineering process, one has to include the V&V stakeholders, the stakeholders involved in
the system rollout, the end-users… each in a specific domain with its own vocabulary. Even if there are some commonalities, each specialist integrates specific terms due to the type of systems designed. With those sharing constraints, the usage of a predefined metamodel creates barriers between the modeling experts and the stakeholders responsible for other activities. Thus, efficient collaboration requires the adaptation of the model presentation to this latter audience via domain-specific languages (DSLs), also named ontologies (see [13] for a discussion of the differences between DSL, ontology, and metamodel).
The author proposed an approach in [6] to reconcile the usage of complex but necessary metamodels with dedicated ontologies that are friendlier for the end users. Briefly, the method is based on mapping the user-defined ontology onto a modeling tool metamodel. This added layer eases the creation of models by different communities of users whilst ensuring the storage mode is valid against a selected standard metamodel, which is of interest when modeling tool interoperability becomes a concern.
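A hedged sketch of this mapping idea is given below (illustrative Python; the concept and metamodel element names are assumptions chosen to match the example used later in the paper, not the actual MB2SE mapping): domain concepts manipulated by the engineers are stored as elements of the standard metamodel.

    # Hypothetical ontology-to-metamodel mapping: domain concepts on the left,
    # standard (SysML-like) storage elements on the right.
    ONTOLOGY_TO_METAMODEL = {
        "Application":        "SysML::Block",
        "ApplicationPackage": "SysML::Package",
        "Server":             "SysML::Block",
        "Rack":               "SysML::Block",
    }

    def storage_type(concept):
        # Return the standard metamodel element a domain concept is stored as.
        return ONTOLOGY_TO_METAMODEL[concept]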
However, all these mechanisms rely on common descriptive languages, which means the models are first depicted in the selected language by creating the modeling elements one by one, setting their property values and linking them to each other. The models are then assessed against engineering criteria, stakeholder needs, or system requirements. Next, the system engineer applies one analysis technique or another to two, three, or more model alternatives. The more models are studied, the more confident the engineer is regarding her/his final choice.
The main drawback of this way of working resides in the time-consuming effort needed to produce the models. According to [9], the current increase in complexity in system design poses dramatically increasing challenges for system engineering under time and resource constraints. System architecture design has been proved to be NP-hard [10], and the direct consequence is that system architecture design should itself be considered an optimization problem. This non-linear increase in complexity, accompanied by little change in resource availability, forces engineers to seek new means, and specifically machine assistance, during the search for an optimal system.
Still focusing on engineering productivity, this raises the following questions: Is there a way to handle the variability of a system in such a way that the modeling activities are effortless, or at least not linear in the number of system solutions to be compared? Raising expectations further, is it possible to declare the constraints from the stakeholders’ perspectives and then to automatically produce a model that fulfills all the constraints and is proved to be one of the best solutions? These are the questions addressed in this paper.
both the individual modeling elements and the sets of those elements. Sect. 3 addresses this topic.
Secondly, the constraint expressions of the model variability are used to find an optimal solution. An appropriate algorithm must be found to efficiently find the solution, and the impact of an existing model has to be considered so that the solution can be stored in the model repository. This topic is addressed in Sect. 4.
3 Model Variability
According to [3], there are two kinds of variability modeling approaches: feature modeling (FM) and decision modeling (DM). Feature modeling captures features, defined as capabilities of systems having a specific interest for the end users. On the other hand, a decision model focuses on decisions trading off the different branches of a variation model. [3] compares different approaches covering both modeling types and concludes that they are broadly similar. In our context, the models based on SysML or NAF do not integrate the notion of decision by themselves. This would have to be added, but for standard-compatibility reasons, which are often a customer requirement in the defense sector, we have to limit metamodel extensions. This is why we consider feature modeling in more detail.
Regardless of the modeling mode, several characteristics are compared in [3]. Orthogonality: the degree to which the variability is separated from the system model; this is an important concern for us, to ensure that the constraint elicitation can be listed by family and then applied to an independent, specific system model. Data types: the primitive and composite values which open up the solution space; still according to [3], DM and FM are similar in this respect. Modularity: the ability to reuse variability expressions in order to handle the complexity of the variability model.
One conclusion of this comparison paper is that the Common Variability Language (CVL) [11], which is a variability modeling language, presents most of the expected advantages, including orthogonality and a large expressiveness. Another advantage is the standardization of this language, since the Object Management Group (OMG) is in the process of integrating the language proposal. This should be made easier by the fact that it is a sublanguage of OCL (Object Constraint Language) [19,22,24], an already standardized constraint language for UML. Moreover, the compositional layer of CVL provides ways to encapsulate and reuse variability specifications through Configurable Units and Variability Interfaces [4, Sect. 2.5].
    RentalStation
    self.employee->forAll(p | p.income('98/03/01') > 2000)
includes a reference to the association end employee and the function income().
The dot notation can be used to concatenate association ends via the notion of
Set [19, Sect. 7.5.3, p. 18].
This kind of notation suits the MB2SE framework perfectly; since the modeling objectives are mapped to a conceptual data model (step MMP-2 of the MPP method), the notions of classes, attributes, and association ends are also present. The MB2SE foundation uses a very small subset of the class diagram capabilities (for example, no functions are described), but at least the important property notions are there and should be integrated into whichever constraint language is finally selected to support the design process. Using a model to describe the conceptual data model allows linking the corresponding modeling elements (class, attribute, and association ends) to the constraints. Thus syntax checks are possible by analyzing the data model, for example to check that a path is valid.
In Sect. 3.1, we showed that languages like OCL or CVL are convenient for expressing the constraints linked to the conceptual data model used in the MB2SE approach. Thus, one topic to be studied is the link between such a constraint language and the LSP language. A first approach consists of analyzing the OCL grammar and trying to convert all combinations into the equivalent LSP syntax. The main blocking point is language equivalence; the grammars are different and designed for specific purposes: generic constraint declarations based on a conceptual data model on the one hand, and the declaration of constraints for operations research treatments on the other. Thus, OCL usage would have to be limited to what the LSP language can address.
After a first attempt in that direction, we decided to implement the opposite: what about writing the constraints in the LSP language, but adding the dot notation to it? That way, there is no complex translator to implement apart from the one converting the dot-notation paths into something LSP understands: we chose to convert each path into an LSP list whose items refer to the different data model elements (class, attribute, association end). Sect. 3.1 demonstrates that the relevant benefit of the constraint language is its alignment with the data model, which is still fulfilled this way. Regarding the constraint capabilities, both implementations are ultimately restricted by the LSP language functionalities, so there is no technical loss in using this final approach.
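The path conversion can be pictured with the following hedged sketch (illustrative Python, not the actual converter; the tiny data model is hypothetical but reuses concept names from the example later in the paper): a dot-notation path is split into a list of data model elements, and each segment is validated against the conceptual data model before any LSP code is generated.

    # Hypothetical fragment of the conceptual data model: for each class, the
    # association ends / attributes it exposes and their target type.
    DATA_MODEL = {
        "CustomerNeeds": {"selectedPackage": "ApplicationPackage"},
        "ApplicationPackage": {"applications": "Application"},
        "Application": {"Cost": "number"},
    }

    def path_to_list(path):
        # Turn e.g. "CustomerNeeds.selectedPackage.applications.Cost" into a
        # validated list of model element names.
        cls, *segments = path.split(".")
        elements = [cls]
        for seg in segments:
            if seg not in DATA_MODEL.get(cls, {}):
                raise ValueError("unknown segment %r after %r" % (seg, cls))
            elements.append(seg)
            cls = DATA_MODEL[cls][seg]
        return elements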
Having resolved this language link, the engineering steps integrating the delegation of the optimization resolution to a local-search black box would be (see Fig. 2):
3. the black-box local search tool searches for a solution to the constraints, considering the current model instances,
4. the solution is pushed back into the modeling tool, with some facilities to help the system engineer understand the changes.
the racks costs, and ensure all the applications are deployed. On the other hand,
the solution must fulfill the following technical constraints:
– The weight of servers is compatible with the Rack specification,
– The size in U (Unit) of the server is compatible with the Rack specifications,
– The disk space of the server is compatible with the application specifications,
– Assuming the applications are run by the hosting server (for simplification purposes), the global processor usage shall be < 0.9 (90%).
This is more formally translated into the LSP-like language:
the Fig. 2. The HOPEX for NAF tool contains both the initial model and the LSP-like constraint declarations. Fake data have been created for the purposes of the test in this paper, but actual optimization has also been performed for a French MoD project on a more representative set of data. Each modeling concept (Application, ApplicationPackage…) involved in a constraint is converted from the HOPEX tool to a corresponding flat file (see Fig. 5).
Other CSV flat files are also generated for each relation between two concepts.
Refer to Fig. 6 for an example of the relation between the applications and the
functions displayed as a 2-D table.
Fig. 6. A 2-D table corresponding to the relation between applications and functions.
The usage of this modeling tool allows creating transparent links between the textual constraints and the referenced data model artefacts. These links are used to automate the export of the appropriate flat files according to the constraints’ contents. Once the computation is done, the results are transferred back into the flat files for difference visualization and re-import into the modeling tool.
From the local search toolbox perspective, the CSV files are the input data. What is missing to complete the solution search are the constraints. All the constraints expressed in the LSP-like language for friendliness purposes have to be converted into the actual LSP language. In this language, the constraints are declared and contribute to the building of a constraint tree which is solved by
the local search tool. Such conversion leads to the rewriting of the requirements
as follows:
The converter also adds curly braces around each constraint, to ensure that variable declarations are only specified in the context of the constraint.
Secondly, the space of the solution must be explicitly declared. For the first optimization problem, only the relations selectedPackage, applications, and functions are allowed to be updated. This must be explicitly stated, hence the added requirements RA and RB. For simplification reasons, the function mlDecideRelation() hides the LSP code which transforms the relation between a pair of modeling elements into an open boolean which can take either the false or the true value, the latter indicating that the relation exists. For all the variables not allowed to be changed, their values are initialized from the modeling repository data as fixed values.
This problem space definition mechanism is replicated for the concepts of the data model; each concept corresponds to a list of arrays matching the concept’s attributes. They are initialized according to the modeling repository data, and only those used in the left part of a constraint are included in the problem space. In this case, the user has nothing to do but declare the constraints.
Note also that requirement R03 is in fact already formalized by the association multiplicities set on the relation between CustomerNeeds and ApplicationPackage (Fig. 2). So, for the sake of simplicity, we can remove the explicit requirement and introduce an automated generation based on the multiplicities.
This requirement can be written using only the LSP native language as
follows:
    local sumCosts = {};
    for [ cn in CustomerNeeds ] {
        sumCosts[cn] <- sum[ ap in ApplicationPackage ](
            selectedPackage[cn][ap] ? (
                sum[ a in Application ](
                    applications[ap][a] ? Application[a].Cost : 0 ) ) : 0 );
        minimize sumCosts[cn];
    }
The main idea of the first formulation is: to sum over Application[a].Cost, we need to check whether there is a path from cn to a. This path is decomposed into two relations: CustomerNeeds – selectedPackage → ApplicationPackage, and ApplicationPackage – applications → Application. The sum checks whether there is a path between cn and ap; if so, it looks for a path between ap and a.
Another way of writing the same objective is the following:
    local sumCosts = {};
    for [ cn in CustomerNeeds ] {
        sumCosts[cn] <- sum[ ap in ApplicationPackage ][ a in Application ](
            selectedPackage[cn][ap] * applications[ap][a] * Application[a].Cost );
        minimize sumCosts[cn];
    }
The main idea here is that we sum over every 3-tuple (cn, ap, a); if no path exists between the three, the product selectedPackage[cn][ap] × applications[ap][a] × Application[a].Cost evaluates to 0 (taking into account that boolean values are 0 and 1 in the LSP language).
As far as the LocalSolver tool is concerned, the two formulations are not equivalent in terms of the performance with which results are obtained. We ran LocalSolver twice on a set of 5 customer needs (CN1…CN5), changing the formulation of the above constraint, leading to the different results shown in Table 1. Best results are displayed in bold.
In the same given time (10 s), the first formulation gives better results for every customer need. Even after 10 min, the solution of the second formulation doesn’t
reach the quality of the first formulation. In this example, time isn’t enough to compensate for a computationally expensive formulation. For an identical time (10 s), we can observe that the numbers of iterations and moves – which reflect the heuristics-based algorithm used to solve the problem – are higher for the first formulation, meaning LocalSolver could better explore the space of feasible solutions (i.e. could explore more solutions in the same time). So, the formulation of the constraint not only affects the complexity of the evaluation, but also the quality of the results given by LocalSolver. As LocalSolver’s time budget is controllable, the real issue is not the time that LocalSolver spends on solving the problem but the quality of the operations that the local search tool can perform in that given time.
One way to reduce the risk of bad solutions due to the writing of constraint expressions by non-LSP experts is to transform the constraint graph into an equivalent one that is proven to be optimal regarding search performance. Work is currently being done in that direction by the developers of the LocalSolver tool, so for us the issue of finding an optimal solution will continue to be delegated to the optimization tool as a black box, regardless of how the expressions are written. Another, more affordable, way is to wrap the constraint implementation in predefined functions embodying the best known implementation approach. This is what has been done by proposing the functions mlIsSubSetOf(), mlSum(), and mlCard(). These functions hide the complex implementation and give the end user a friendlier understanding of the constraints’ meanings. Regarding the mlIsSubSetOf() function, a naive implementation led to the resolution of an actual project in 37 min. By considering the function as a pure function (which, from a programmatic perspective, always returns the same result for a given input and has no side effects), it is possible to optimize the computation time of this recursive function by using a memoization mechanism [16]. This dramatically reduced the overall complexity, and the same computation is now done in 2 s while still keeping the same function declaration for the end users.
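The memoization mechanism itself can be sketched as follows (illustrative Python rather than the LSP implementation of mlIsSubSetOf(); the containment relation is hypothetical): because a pure function always returns the same result for a given input and has no side effects, its results can be cached so that repeated recursive calls become simple look-ups.

    from functools import lru_cache

    # Hypothetical containment relation between packages, kept at module level so
    # the memoized function only takes hashable arguments.
    SUB_PACKAGES = {
        "PK1": ("PK2", "PK3"),
        "PK2": (),
        "PK3": ("PK2",),
    }

    @lru_cache(maxsize=None)
    def is_sub_package_of(child, parent):
        # Pure, recursive test: is `child` (transitively) contained in `parent`?
        direct = SUB_PACKAGES.get(parent, ())
        return child in direct or any(is_sub_package_of(child, p) for p in direct)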
6 Future Work
This paper demonstrates how the declaration of constraints against a conceptual data model can be automatically converted into an LSP program, including the modeling data, which enables a computed search for optimal solutions satisfying the constraints. This mechanism allows the update of concept attributes and corresponding relations through the simple declaration of constraints based on the LSP language augmented with the dot notation for paths.
We also explained how the way the constraints are written can degrade or improve the ability to obtain a solution within a reasonable time. This is why a library of predefined functions hides this writing complexity. By applying this approach in an actual French MoD project, we learnt that additional functions still need to be developed to foster the usage of the constraint declarations, especially set functions diving into the model (union, intersection, different modes of property aggregation…).
The modeling principles established by the MB2SE framework [6] create new issues regarding the search for optimality. This framework relies on the declaration of a modeling data model mapped to a storage metamodel such as NAF. This mapping separates the business perspectives from the technical requirements related to modeling tool interoperability. As illustrated in [7], two system engineers can create distinct data models pointing to the same modeling elements. This raises the following questions to be studied and solved: How to solve system engineering constraints declared in two different perspectives? How to detect whether the system engineering constraints declared for one perspective impact the ones declared in another perspective? Are there similar issues to be solved in the context of a unique conceptual data model? For example, how to handle the inheritance between two concepts dealing with a common subset of modeling elements?
References
1. Benoist, T., Estellon, B., Gardi, F., Megel, R., Nouioua, K.: LocalSolver 1.x: a black-box local-search solver for 0-1 programming. 4OR Q. J. Oper. Res. 9(3), 299–316 (2011)
2. BKCASE: SEBoK, Guide to the Systems Engineering Body of Knowledge. http://sebokwiki.org
3. Czarnecki, K., Grünbacher, P., Rabiser, R., Schmid, K., Wasowski, A.: Cool features and tough decisions: a comparison of variability modeling approaches. In: Proceedings of the Sixth International Workshop on Variability Modeling of Software-Intensive Systems, pp. 173–182. ACM (2012)
4. Dumitrescu, C.: CO-OVM: a practical approach to systems engineering variability
modeling. Université Panthéon-Sorbonne - Paris I, Theses (2014)
5. Dupin, N.: Modélisation et résolution de grands problèmes stochastiques combina-
toires. Ph.D. thesis, Université Lille 1, Laboratoire Cristal (2015)
6. Ernadote, D.: An ontology mindset for system engineering. In: 2015 1st IEEE Inter-
national Symposium on Systems Engineering (ISSE), pp. 454–460. IEEE (2015)
7. Ernadote, D.: Ontology reconciliation for system engineering. In: 2016 IEEE Inter-
national Symposium on Systems Engineering (ISSE), pp. 1–8. IEEE (2016)
8. Estellon, B., Gardi, F., Nouioua, K.: Two local search approaches for solving real-
life car sequencing problems. Eur. J. Oper. Res. 191(3), 928–944 (2008)
9. Hammami, O.: Multiobjective optimization of collaborative process for modeling
and simulation-< q, r, t. In: 2015 IEEE International Symposium on Systems
Engineering (ISSE), pp. 446–453. IEEE (2015)
10. Hammami, O., Houllier, M.: Rationalizing approaches to multi-objective optimiza-
tion in systems architecture design. In: 2014 8th Annual IEEE Systems Conference
(SysCon), pp. 407–410. IEEE (2014)
11. Haugen, Ø., Wasowski, A., Czarnecki, K.: CVL: common variability language.
SPLC 2, 266–267 (2012)
On-Time-Launch Capability for Ariane 6 Launch System

Stéphanie Bouffet-Bellaud, Vincent Coipeau-Maia, Ronald Cheve, and Thierry Garnier

Abstract. For space transportation systems used to place satellites in space, launch rate fulfillment (turnover) and launch delay reduction (avoiding additional cost) are key parameters driving the launch cost. In addition, launching on time is beneficial to the payload operator’s business model.
In Europe, Ariane 5 has demonstrated its unmatched reliability with more
than 80 successful consecutive launches. Due to increasing competition
worldwide for space transportation systems, Ariane 6 will have to achieve the
same reliability but with twice the launch cadence.
This is the reason why, on Ariane 6, the On-Time-Launch capability has been taken into account since the upstream development phases. Firstly, the main drivers for this performance have been identified (lessons learnt) over the complete life cycle. Based on a cost approach, an allocation methodology has been defined, including risk severity and occurrence management. Then, mitigation actions and robustness to degraded cases have been deduced.
In the frame of CSD&M 2018, the On-Time-Launch capability for the Ariane 6 launch system and the associated methodology are proposed to be presented in a 30-minute talk.
Ariane 6 is the new European launcher, with two versions, A62 and A64. Its performance to geostationary transfer orbit will be 5 tonnes (A62, with a 530 t weight at lift-off) and more than 10.5 tonnes (A64, with an 860 t weight at lift-off) (Fig. 1).
Ariane 6 development is in progress. First flight is planned in 2020.
Ariane 6 launcher is composed of:
• a Central Core providing thrust with a Lower Liquid Propulsion Module (LLPM)
equipped with a Vulcain engine and an Upper Liquid Propulsion Module (ULPM)
equipped with a Vinci engine;
• two (A62) or four (A64) Equipped Solid Rockets (ESR) boosters depending on the
Ariane 6 Launcher configuration;
• an Upper Part including the fairing, the Launch Vehicle Adaptor, the Payload Adaptor Fitting (to fit the payload interface diameter) and any system allowing dual or multiple launches to be performed.
The fully integrated Ariane 6 launcher is built with the following process (Fig. 2).
The DOORS system engineering tool is used for requirements allocation and flow-down into the technical specifications (Fig. 4).
Requirements are based on either a probabilistic approach (such as MTTR (Mean Time To Repair) / MTBF (Mean Time Between Failures)) or a deterministic approach (see [2] for vocabulary definitions).
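As a minimal, illustrative sketch of the probabilistic side of such requirements (the figures below are placeholders, not Ariane 6 data), the steady-state availability of a ground means follows directly from its MTBF and MTTR:

    def availability(mtbf_h, mttr_h):
        # Steady-state availability: A = MTBF / (MTBF + MTTR).
        return mtbf_h / (mtbf_h + mttr_h)

    # Example with assumed values: MTBF = 2000 h, MTTR = 24 h -> A ~ 0.988
    print(round(availability(2000.0, 24.0), 4))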
The On-Time-Launch activity firstly analyzed the customer need on this topic and performed a lessons-learnt exercise on similar complex systems.
2.1 Lessons-Learnt
The purpose of the present section is to assess the risk of launch schedule slippage by performing a “lessons learnt” activity on similar projects (Ariane 5 and other space-systems projects).
The lessons-learnt activity was based on twenty years of experience in the space domain, on three major space programs, and on many databases of incidents.
This activity aims to identify the causes and consequences of disruptions in operations, and the stakeholders involved.
A classification and summary have been performed to identify the main unavailability contributors (launcher, ground, weather, checks, etc.).
A list of recommendations has been extracted from these lessons learnt.
Due to Ariane 6 launch rate specificities, some new unavailability contributors could appear. Indeed, the Ariane 6 launch rate objective (a customer requirement) is twice as high as that of Ariane 5. Operating to such a tight schedule will bring new contributors to light. Complementary unavailability will appear, and the lessons-learnt activity is not sufficient to address the global Ariane 6 topic. A global Ariane 6 approach is necessary and is proposed in the paragraphs below.
Once the risk identification has been performed, the next step is the classification and selection/prioritization of possible delay risks in each life phase before launch.
The ranking of delay duration severity is based on cost (acceptability of money lost). The scale of this occurrence acceptability uses the likelihood class table (Fig. 6) below:
Launch Pad destruction is treated at P4, which will be our limiting level for the On-Time-Launch severity table: the On-Time-Launch severity table will then use likelihood levels P1 to P3.
The severity table links cost and occurrence acceptability; it concerns the cost topic and is managed through qualitative criteria. Finally, the severity table makes a connection between the delay duration and the likelihood objective.
In order to derive the On-Time-Launch severity table, the objective is shared between the different life phases of the launcher preparation.
In this allocation, it was chosen to put more constraints on operations which are far from the expected launch slot, because it is easier (there is more time) to treat contingencies. Indeed, very close to the expected launch slot, very little time remains to treat contingencies without impacting the slot.
That means that:
• constraints are higher on the phase “Before integration process in Kourou”;
• constraints are minimized for launch pad activities.
The allocation is shared between Europe, Kourou CSG before launch pad arrival, and the launch pad. The severity table links, for each of these life phases, the delay duration and the likelihood objective.
The On-Time-Launch approach is based on the fact that mitigation actions are implemented to avoid delay risks (delay occurrence reduction). In a perfect allocation, none of the major delay risks identified would occur, because mitigation actions (occurrence) are implemented. In the real allocation, for some delay risks no adequate mitigation actions (occurrence) are found. In this case, the delay risk can materialise, and it corresponds to an On-Time-Launch degraded case. This degraded case shall be managed as quickly as possible (mitigation actions (severity)) in order to limit the launch delay duration as much as possible.
This mitigation action (barrier) logic is described in Fig. 7:
A few examples of delay risks and associated mitigations are detailed below:
1- Ground means and/or infrastructure failure leading to launch delay
Against this delay risk, the following mitigations are proposed:
• Total Corrective Maintenance (TCM) management through failure occurrence reduction (MTBF - Mean Time Between Failures) and corrective maintenance (MTTR - Mean Time To Repair)
• Recovery procedure (including hardware implementation) to mitigate the delay
2- Operation (assembly/integration/test/maintenance) scheduling leading to launch delay
Against this delay risk, the following mitigations are proposed:
• Operation durations specified for AIT and maintenance, learning curve follow-up and associated actions in case of drift
• Availability objectives specified in Maintenance and Exploitation Contracts.
3- Weather alerts leading to launch delay
Against this delay risk, the following mitigations are proposed:
• The launcher system shall be operated with a value of ground wind in Kourou compatible with the availability need, including sizing of the launch vehicle with respect to the level of ground wind. The ground wind value is based on a database (ground wind measurements at CSG over 20 years) associated with an annual probabilistic approach (a minimal sketch of this treatment follows this list).
4- Equipment/product alerts leading to launch delay
Against this delay risk, the following mitigations are proposed (Fig. 8):
• APQP+ (Advanced Product Quality Planning) method
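The probabilistic treatment of ground wind mentioned under risk 3 can be pictured with the following hedged sketch (illustrative Python with made-up samples, not the CSG database): the historical measurements give an empirical probability that the wind exceeds the value the launcher system is sized for, i.e. the weather contribution to unavailability.

    def exceedance_probability(measurements_ms, limit_ms):
        # Fraction of historical ground-wind samples above the operability limit.
        above = sum(1 for w in measurements_ms if w > limit_ms)
        return above / len(measurements_ms)

    # Example with assumed samples (m/s): P(wind > 15 m/s)
    print(exceedance_probability([6.2, 9.8, 14.1, 16.5, 7.3, 12.0], 15.0))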
Due to increasing worldwide competitiveness, launch rate fulfillment and launch delay reduction are key parameters for a launcher program.
For Ariane 6, the On-Time-Launch capability has been taken into account during the upstream phases.
Based on a cost approach, an allocation methodology has been defined, including risk severity and occurrence management. Then, mitigation actions and degraded cases have been deduced.
The way forward will consist of defining the requirement verification logic and verifying requirement compliance through the DOORS tool (Requirement Verification Plan, Verification Compliance Document). The On-Time-Launch performance will be monitored through the implementation of indicators and through continuous improvement in exploitation.
Achievement of the On-Time-Launch objectives is foreseen at Full Operational Capability after 2023, benefiting from the learning curve during the transition period following the first flight in 2020.
References
1. Walden, D., Roedler, G., Forsberg, K., Hamelin, D., Shortell, T.: Systems Engineering
Handbook
2. ECSS-Q-ST-30-09C - Availability analysis
Towards a Standards-Based Domain Specific Language for Industry 4.0 Architectures

Christoph Binder, Christian Neureiter, Goran Lastro, Mathias Uslar, and Peter Lieber
1 Introduction
Optimized management of available resources, in order to maximize profit and at the same time reduce costs and expenses, is the main goal of most manufacturing companies. Results from research and development in the area of information technology (IT) offer new possibilities to support this goal, which drive change in the present industrial landscape and lead the way to a new form of automation-driven industry, the so-called Industry 4.0. An example of a technology resulting from this change is the cyber-physical system (CPS). CPS are mainly intelligent components of a manufacturing process, where they take over a specific task. The main advantage regarding productivity is that CPS are able to make the economically most valuable decision on their own, based on information provided by other CPS taking part in the system [6]. As discussed before, the aim
2 Related Work
2.1 Domain Specific Architecture Framework
The Reference Architecture Model Industrie 4.0 (RAMI 4.0), depicted in Fig. 1,
has been developed by the Plattform Industrie 4.0, a union of leading German
To represent an asset over its whole life cycle, the horizontal axis has been introduced. It defines a product according to IEC 62890 [12] as type and instance: the type represents the asset during development and prototype creation, whereas the instance represents the product as an individual item together with all its administration. Furthermore, the second axis deals with the classification of an item within the factory. Based on IEC 62264 [11] and IEC 61512 [10], a single product can be located according to its scope, from the connected world through the enterprise down to a single device used in production [1]. The top-down arrangement of the layers enables the classification of subjects according to their task areas. This also enables the mapping of the individual system development processes to their respective areas. The system analysis takes place at the top layers, more precisely the Business Layer and the Function Layer. These describe the conditions and business processes the system has to follow and, in further consequence, the run-time environment of the system with all the functionalities of its services and applications. The Information Layer provides all kinds of data to its adjacent layers, and the Communication Layer takes care of the connections within the system. To deal with all the characteristics of CPS, the Integration Layer has been defined, leaving only the physical objects to be displayed on the Asset Layer.
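As a small illustration (hypothetical Python, not part of the RAMI 4.0 standard or of any toolbox described here), the layer axis can be captured as an enumeration against which design elements are later classified:

    from enum import Enum

    class RamiLayer(Enum):
        BUSINESS = "conditions and business processes"
        FUNCTION = "run-time environment, services and applications"
        INFORMATION = "data provided to adjacent layers"
        COMMUNICATION = "connections within the system"
        INTEGRATION = "digital representation of physical objects"
        ASSET = "physical objects"

    # Hypothetical classification of a few design elements by layer:
    ELEMENT_LAYER = {
        "BusinessCase": RamiLayer.BUSINESS,
        "HighLevelUseCase": RamiLayer.FUNCTION,
        "AssetAdministrationShell": RamiLayer.INTEGRATION,
        "PhysicalAsset": RamiLayer.ASSET,
    }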
• MDG Technology, which contains the specifications stated in the DSL and provides them for usage
• Model Templates, which support system engineers by providing a fully modelled example and giving information about specific problems
• Reference Data, which contain information about the matrix used in SGAM and ensure that this information is integrated into the model
With the help of the SGAM Toolbox, several international projects have already been realized. Through years of use it has established itself as a main technology driver for creating Smart Grid systems. Adapting these successful concepts to the industrial area can bring the approach of RAMI 4.0 a major step forward.
According to ADSRM, the first step is to draw up a suitable case study. In this case a typical use case concerning Industry 4.0 is presented. More precisely, this example makes use of a shoe manufacturing company that offers the creation of individual shoes to its customers. The manufacturer provides all the tools used for customer interaction as well as the factories where the shoes are produced. The goal is to optimize production processes; therefore raw materials and supplier goods need to be available at the time they are used in production. On the other hand, the customer wants to create an individual pair of shoes from his own imagination. The idea of Industry 4.0 is the fully automated processing of the order and the subsequent production of the shoes. Therefore, all machines should communicate with each other in order to find the optimal solution concerning resources. Firstly, to keep track of administrative and change efforts, the requirements the system is subject to are elaborated. Concerning the classification of non-functional requirements, the following five requirements have been specified:
model of the real world. The semantics and structure of this model help to define the abstractions of the Metamodel; dependencies between the physical and virtual world formulate the connections of its elements, according to [17]. The Metamodel representing RAMI 4.0 is composed of a conceptual architecture expressed in the Unified Modeling Language (UML). It describes the conceptual aspects a language needs to contain to model a system based on Industry 4.0. Accordingly, the Metamodel is structured into the six layers of RAMI 4.0. On each layer, design elements for describing a viewpoint on a system are provided. The Business Layer therefore consists of elements like business actors, business goals and business cases for representing the cooperation between two actors. With these elements, the desires of stakeholders can be formulated. High-level use cases are specified to realize business cases on the Function Layer in order to fulfill the defined requirements. Information objects, characterized by a specific data model standard, as well as the connection paths over which they are exchanged, are modeled in the lower layers. The Integration Layer offers a representation of the Asset Administration Shell (AAS), a model of the digital twin every physical asset has. The assets themselves are depicted in the correspondingly named Asset Layer.
As the Metamodel is a graphical representation of domain-specific elements and their interconnections, a language is designed for a detailed description of these. Similar to the concepts presented in [4], the conceptual architecture serves as a base to create a specific DSL. This language needs to be utilized throughout the whole development process, from designing the system through describing it up to modeling it. Consisting of a UML profile, the DSL itself can be designed using well-known methods provided by UML. The profile contains all elements previously elaborated from the physical world. As given by UML, each element consists of a stereotype and a metaclass. The metaclass represents the underlying model element, while the stereotype describes the element as it will be used in the DSL.
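The stereotype/metaclass pairing can be pictured with a small, purely illustrative registry; Python is used here only as neutral notation, since the actual DSL is realized as a UML profile inside the modeling tool, and all element names below are hypothetical rather than taken from RAMI 4.0 or the RAMI Toolbox.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DslElement:
    """One illustrative DSL element: a stereotype extending a UML metaclass on a RAMI layer."""
    stereotype: str   # name the modeler applies in diagrams
    metaclass: str    # underlying UML model element being extended
    layer: str        # RAMI 4.0 layer the element belongs to

# Hypothetical excerpt of a profile; names are examples only.
PROFILE = [
    DslElement("BusinessActor", "Actor", "Business Layer"),
    DslElement("HighLevelUseCase", "UseCase", "Function Layer"),
    DslElement("InformationObject", "Class", "Information Layer"),
    DslElement("AssetAdministrationShell", "Component", "Integration Layer"),
    DslElement("PhysicalAsset", "Node", "Asset Layer"),
]

def stereotypes_on(layer: str):
    """Stereotypes a modeler may apply on a given RAMI 4.0 layer."""
    return [e.stereotype for e in PROFILE if e.layer == layer]

print(stereotypes_on("Business Layer"))   # ['BusinessActor']
```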
There are several software applications on the market tailored to systems development. Owing to its extensibility, the modeling tool Enterprise Architect (EA) developed by Sparx Systems [22] is suitable for providing an environment in which to architect and design Industry 4.0 based systems. To achieve this, the already given general modeling functionalities need to be extended by implementing the DSL. The result is an Add-In called the RAMI Toolbox1. The main part of this toolbox is the DSL described in the previous section. It consists of the UML profile and two other profiles for the utilization of a tool-set as well as a suitable UML diagram to describe an industrial model. Adapted from the SGAM Toolbox, it also provides demonstration examples showing how to use RAMI 4.0. To make use of this DSL, EA needs to load it during its start-up process to provide a set of tools supporting the modeling of industrial systems.
1 The RAMI Toolbox is publicly available for download at http://www.rami-toolbox.org/download.
The Case Study2 itself is created by using the development process described in Sect. 3.2. According to these considerations, the Business Layer contains three major actors: the customer, the manufacturer and the supplier. In the system analysis, the goals of each actor are elaborated through requirements engineering. These goals specify the boundaries and rules the system should follow. To keep it simple, this scenario identifies one High Level Use Case (HLUC), "Create Custom Shoes", with the three previously mentioned actors. Out of the generic business model, a more specific functional viewpoint can be created. The HLUC is decomposed into more detailed Primary Use Cases (PUCs); "Order Processing" or "Factory Maintenance" could be representatives of this kind. In the Function Layer, every Use Case has actors interacting with it. Figure 4 depicts the most detailed functionalities in the development process, like forming supplier goods or assembling raw materials. In doing so, the single functions are represented as Use Cases with their related Actors. The resulting Logical Architecture builds the base for the real architecture of the system, including all components and parts. The architectural solution is built referring to the results of the system analysis. The modeled processes need to be represented by physical components. Technologically speaking, a model transformation as introduced by MDA takes place by mapping Logical Actors to their physical components. In the Information and Communication Layer of RAMI 4.0, the interaction of these components is modelled based on the specifications coming from OPC Unified Architecture (OPC UA). The first step of the system architect is to find out which kind of information is exchanged between the elements. This process is followed by designing and specifying the communication paths and interfaces over which the information is sent. During this phase, the components are seen as black boxes and only those needed for interaction are described.
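As a rough illustration of the kind of information exchange modeled on these layers, the sketch below reads a single value from an OPC UA server with the open-source python-opcua client; the endpoint URL and node identifier are hypothetical placeholders, not part of the case study.

```python
from opcua import Client  # python-opcua (FreeOpcUa); pip install opcua

# Hypothetical endpoint of a machine controller in the shoe factory example.
client = Client("opc.tcp://machine-01.example:4840")
client.connect()
try:
    # Hypothetical node: fill level of a raw-material supply station.
    node = client.get_node("ns=2;s=RawMaterial.FillLevel")
    print("Raw material fill level:", node.get_value())
finally:
    client.disconnect()
```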
The decomposition of the components themselves takes place on the Integration and Asset Layer. On these layers, the elements are described as physical units, as they exist in the real world. The Integration Layer generates a Digital Twin out of the physical units. This means that one AAS, containing all information and data as well as safety and security aspects, is created for each asset or group of assets working together. Furthermore, the Integration Layer has to deal with Human Machine Interfaces (HMIs) in order to access the needed data. Technologies like Near Field Communication (NFC), Bluetooth, barcodes and USB find their place on this layer.
4.2 Findings
2 A click-through model is available at http://www.rami-toolbox.org/UseCaseShoes.
of ADSRM needs to deal with more detailed problems. For example, it was shown that some specifications of DIN 91345 need to be refined or adapted to suit every domain included. Furthermore, an extension of the process model with familiar standards results in the definition of a more detailed development process. Hence, the standard ISO/IEC 42010 [13] provides a formalization of an architecture framework that may well fit RAMI 4.0, for example by deriving viewpoints and views for each layer. In the same step, the concepts of the Unified Architecture Framework (UAF) standard are evaluated for their suitability for the RAMI Toolbox. In the last step, the general approach and its enhancements need to be validated by an external domain stakeholder providing a more sophisticated case study.
Grid systems has all functionalities needed for system engineering. Due to the similarities between the energy and industry domains, the concepts of the SGAM Toolbox [20] may be applicable to Industry 4.0. In this paper, two major concepts have been tested for applicability in the RAMI Toolbox. First, the modeling of Use Cases on the basis of an existing reference architecture has been demonstrated with the shoe manufacturing example. Although modeling took place only at a superficial level, existing concepts and technologies seem to work in the industrial domain as well. The domain-specific representation and visualization of components, as an entry point for discussions or for building a common understanding, is realized by the DSL. The findings of this paper build a base for the future work of the authors. With the results mentioned above, an application of RAMI 4.0 has been developed for the first time. Now, the results need to be applied to a more sophisticated case study in order to adapt the concept to upcoming domain-specific requirements. In future work, the integration of well-known standards for architectures, processes or industrial specifications and the development of new features may lead the way towards establishing this approach as a widely used technology for building Industry 4.0-based architectures.
References
1. Bitkom, VDMA, ZVEI: Umsetzungsstrategie Industrie 4.0, Ergebnisbericht der
Plattform Industrie 4.0. ZVEI (2015)
2. Boardman, J., Sauser, B.: System of systems – the meaning of. In: IEEE/SMC International Conference on System of Systems Engineering 2006, 6 pp. IEEE (2006)
3. Conboy, K., Gleasure, R., Cullina, E.: Agile design science research. In: Interna-
tional Conference on Design Science Research in Information Systems, pp. 168–180.
Springer (2015)
4. Dänekas, C., Neureiter, C., Rohjans, S., Uslar, M., Engel, D.: Towards a model-
driven-architecture process for Smart Grid projects. In: Digital enterprise design
& management, pp. 47–58. Springer (2014)
5. DIN SPEC: 91345: 2016-04. Reference Architecture Model Industrie 4.0 (2016)
6. Drath, R., Horch, A.: Industrie 4.0: Hit or hype? IEEE Ind. Electron. Mag. 8(2),
56–58 (2014)
7. Hankel, M., Rexroth, B.: The Reference Architectural Model Industrie 4.0 (RAMI
4.0). ZVEI (2015)
8. Hermann, M., Pentek, T., Otto, B.: Design principles for Industrie 4.0 scenarios.
In: 49th Hawaii International Conference System Sciences (HICSS), pp. 3928–3937.
IEEE (2016)
9. Industrial Internet Consortium and Plattform Industrie 4.0: An Industrial Internet
Consortium and Plattform Industrie 4.0 Joint Whitepaper (2017)
10. International Electrotechnical Commission: IEC 61512: Batch control (2001)
11. International Electrotechnical Commission: IEC 62264: Enterprise-control system
integration (2016)
Assessing the Maturity of Interface Design
Abstract. It is widely accepted that the way the interfaces between subsystems
are designed is a major aspect of system architecture. The task of designing
interfaces is made difficult by the technical diversity of subsystems, of inter-
faces, of functional requirements and integration constraints. Change manage-
ment processes have long been implemented by the industry to monitor and
control interface design (see, e.g. Eckert 2009). In this paper, change request
data from several projects completed by Naval Group is analyzed. The Change
Generation Index is introduced and a heuristic formula is proposed to link the
maturity of interface design with change request generation. This approach
comes as a complement to existing results on change propagation patterns
within large systems. A promising parallel is established between the design
process of a large system and learning processes well known to the social
sciences community.
1 Introduction
The complexity of a system is higher when the number of interactions is large with
respect to the number of subsystems within the system. Often, when the design of a
subsystem is refined or redefined, the subsequent impacts extend well beyond the
subsystem itself. It is thus necessary to perform design iterations to take into account
unexpected impacts and relax towards a consistent system design.
Due to the large number and the diversity of interactions between subsystems,
information about the design is generated as the design is being worked out. The design
process is not linear; rather, the design follows a “maturation” process distributed over
the many subparts of the system. In these conditions it is difficult to assess the maturity
of the design. Any part in the system can call for re-work, at virtually any time, due to
unforeseen interactions between this part and the rest of the system.
In order to assess the maturity of the design, we propose to view the design process as a
learning experience in which the design teams accumulate over time a body of knowledge
about subsystems, the interactions between subsystems, and the system as a whole.
The maturity of each subsystem can be inferred from subsystem design artefacts such as plans, models, and technical reports. The maturity of the design of the system as a whole is assessed with the help of design reviews and technical assessments by experts. From our experience, it is more difficult to assess the maturity of the interfaces between subsystems, which make up most of the system's architecture.
In this paper we propose to use change requests as a marker for the maturity of the
design of interfaces within the system. The analysis of actual data from industrial
programmes involving hardware, software, or both shows a clear, reproducible trend
for the number of design changes identified over the course of a project. We propose a
theoretical explanation for this trend and discuss the agreement with the data. Finally, we discuss how KPIs can be derived to monitor interface design maturity during the development of complex systems.
Fig. 1. Number of change requests registered each week during the development process of six
hardware/software/mixed systems (solid lines). All of these curves exhibit a similar shape
(“bump” shape outlined by dashed lines), with project-specific deviations discussed in more
detail in Sect. 4.
addressed in Gallistel et al. (2004) smooth out individual differences in the learning
rates and performances. As a consequence, the unavoidable variability between projects has not proven to be a difficulty in the analysis of the CGI.
It is useful to place the present paper in the perspective of other change request management processes. Conventional change request management usually focuses on the absolute number of change requests that have been generated over a given period of time, the number of changes that have been implemented, or the average time needed to implement change requests. In the present approach, the focus is shifted to the rate at which change requests have been generated, regardless of whether the design changes have actually been implemented or not, and regardless of how many still need to be implemented or how many have been implemented in total.
This is not the first time that the importance of tracking the rate at which changes are generated has been acknowledged (see, e.g., Giffin (2007)). There, this rate was seen as a consequence of project staffing and project events rather than as an indicator of the learning process associated with design activities, as we propose here.
Some key performance indicators have been introduced in the past, for instance the
Change Rejection Index (Alabdulkareem et al. 2013) that reflects the rate at which
change requests are rejected by a subsystem (not implemented), or the Change Prop-
agation Index (Giffin et al. 2009) that reflects the likelihood of a subsystem generating
new changes after implementing a modification. These papers focus on change prop-
agation or rejection with a view to assigning change requests to those subsystems that are
more prone to evolution, thus increasing the efficiency of the change management
process. We expect that shifting the focus onto the overall pattern of change request generation will yield complementary and similarly useful information about the design process.
curve (Fig. 1 in the cited paper). We chose to use the following sigmoid function as the reference in our analyses:

S(t) = ½ (1 + erf(t)),

where erf is the error function and t is time. Function S has a number of characteristics that make it a convenient reference for analyzing the knowledge generated during design processes:
– S goes from 0 to 1, which is consistent with the knowledge about a system's interfaces increasing from 0% to 100% over the course of the design process,
– The derivative of S is a Gaussian function,
– S can be fitted to the characteristics of any individual project, by the affine change of variables T = a(t + b).
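As a minimal numerical sketch (not the authors' code, and assuming the reference form S(t) = (1 + erf(t))/2 given above), the learning curve, its Gaussian derivative and the heuristic Change Generation Index can be written as follows; a and b correspond to the affine change of variables T = a(t + b), and height is an overall scaling factor.

```python
# Minimal sketch, assuming S(t) = (1 + erf(t)) / 2; not the authors' implementation.
import numpy as np
from scipy.special import erf

def S(t, a=1.0, b=0.0):
    """Learning curve, rescaled through the affine change of variables u = a * (t + b)."""
    u = a * (np.asarray(t, dtype=float) + b)
    return 0.5 * (1.0 + erf(u))

def dS(t, a=1.0, b=0.0):
    """Derivative of S with respect to t (a Gaussian): intensity of knowledge acquisition."""
    u = a * (np.asarray(t, dtype=float) + b)
    return a * np.exp(-u ** 2) / np.sqrt(np.pi)

def cgi(t, a=1.0, b=0.0, height=1.0):
    """Heuristic Change Generation Index: scales like S'(t) * S(t)."""
    return height * dS(t, a, b) * S(t, a, b)
```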
Function S is displayed in Fig. 2 (solid line). Early in the design process, little is
known about the system’s interfaces and teams have few opportunities to assess how
the parts within the system work together: the learning process is slow. Halfway
through the design process, a lot more is known about the system, its subparts, and the
interactions between them. Many decisions made during this phase conflict with pre-
vious work and induce change requests: design teams generate a lot of knowledge
about the interfaces and the S-curve has a steeper slope. By the end of the design
process, little remains to be learnt about the interfaces within the system and the
S-curve levels off.
Fig. 2. Number of change requests registered each week for project 6, Fig. 1. The thick solid
line represents the learning curve (axis on the right), the dotted line is its derivative,
corresponding to the intensity of the learning (gaussian curve). We postulate that the Change
Generation Index scales like the product of these two quantities (dashed line, axis on the left).
The parameters for the CGI have been fitted on the actual data (thin, solid line). The learning
curve can then be used to infer a “maturity” indicator, here pictured at 50% and 95% maturity
(circles).
The learning process we describe here is actually stepwise at small scales and continuous (represented by function S) at a global scale. Each time a change request is issued, it is a sign that a piece of knowledge has been acquired that contributes to the global learning curve, very much like the stepwise learning of several individuals within a group contributes to the sigmoid-shaped knowledge accumulation by the group as a whole (Gallistel et al. 2004). The knowledge accumulated weekly is the derivative (slope) of the learning curve: it is the intensity of knowledge acquisition (dotted line in Fig. 2).
As stated in Sect. 3.1, change requests are generated when technical decisions being made conflict with technical decisions made earlier. Following this interpretation, it is reasonable to assume that the number of change requests being generated at any time in the project scales with both the amount of knowledge being acquired and the knowledge already accumulated:

CGI(t) ∝ Knowledge'(t) × Knowledge(t).

With the assumption that Knowledge(t) scales like S, the Change Generation Index scales like:

CGI(t) ∝ S'(t) × S(t).
The CGI as defined by the above formula is displayed in Fig. 2 (dashed line). The "heuristic" CGI curve resembles the actual data in Fig. 1. The section that follows investigates the relationship between the heuristic and the actual data and shows that the heuristic matches the actual data well.
along the y-axis, and it won’t change the shape of the curve that is given by the date
and width parameters.
In what follows, parameters 1, 2, and 3 will be called the "date", "width", and "height" parameters. Practically, the date and width parameters are implemented by replacing variable t by u = a(t + b), with a being the "width" parameter and b being the "date" parameter. The "height" parameter comes as a multiplying coefficient applied to S'(t) × S(t).
We propose to apply the following procedure to infer project maturity from the
measure of the Change Generation Index:
– 1/ plot the actual number of change requests generated each week versus time,
– 2/ plot the heuristic CGI curve, and match the “date” parameter with the date at
which the maximum number of change requests has been generated according to the
measure,
– 3/ tune the “height” parameter so as to match the maximum number of change
requests generated weekly,
– 4/ tune the “width” parameter to fit the measured data,
– 5/ adjust parameters so that the heuristic CGI curve superimposes the actual data in
the best possible way,
– 6/ plot function S with the "date" and "width" parameters found at 5/ and use it to assess the maturity of the design of the interfaces.
In this procedure, parameter identification is done "by hand", which leaves room for interpretation. However, we have found it to yield consistent results across projects, and to be more resilient to specific (meaningful) deviations in the data than an automated identification algorithm would be.
An example is shown in Fig. 2. The development process took approximately 10 months (between weeks 15 and 55); on average, up to 13 change requests were created each week, with a maximum around week 30. Interface design maturity reached 50% around week 25, and 95% at week 45.
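To make the fitting procedure concrete, the short usage example below continues the functions S and cgi from the earlier sketch; the parameter values are illustrative assumptions chosen to roughly reproduce the figures quoted for this project, and the last lines check numerically that the CGI peak falls at roughly 70% maturity.

```python
# Illustrative parameter values (assumptions, not fitted confidential data).
from scipy.special import erfinv
from scipy.optimize import minimize_scalar

a, b = 0.058, -25.0   # "width" and "date" parameters, in weeks

def maturity_week(m, a, b):
    """Week at which the learning curve S reaches maturity level m (0 < m < 1)."""
    return erfinv(2.0 * m - 1.0) / a - b

print("50% maturity around week", round(maturity_week(0.50, a, b)))   # ~25
print("95% maturity around week", round(maturity_week(0.95, a, b)))   # ~45

# "Peak change": the CGI maximum corresponds to a maturity of about 70%.
peak = minimize_scalar(lambda t: -cgi(t, a, b), bounds=(0, 100), method="bounded")
print("CGI peaks near week", round(peak.x), "at maturity", round(float(S(peak.x, a, b)), 2))
```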
Section 4 provides several examples of actual data and what conclusions can be
drawn by using this approach.
4 Examples
Fig. 3. Change Generation Index as measured for a naval system (thin, jagged line). Heuristic
parameters have been tuned to fit the measured data (fitting curve in dotted line), with rather good
agreement between weeks 100 and 400.
The project that is displayed in Fig. 3 was a success. Commissioning started around
week 280. The learning curve that we have derived from the interpolated CGI indicates
a 97% maturity by that time. The large majority of the projects that we have analyzed
show a similarly good agreement with the theory, with a maturity between 95% and 98% at commissioning. Some of them, however, depart from the theory quite
significantly. Sections 4.2 and 4.3 show two such examples of “unorthodox”
behaviours.
Fig. 4. Change Generation Index as measured for a naval system (thin, jagged line). Heuristic
parameters have been tuned to fit the measured data (fitting curve in dotted line), with rather good
agreement between weeks 50 and 180. The maturity of the design (thick, solid line), as inferred
from the parameters of the heuristic CGI, shall be reconsidered in view of the “technical debt”
that shows in the excessive number of change requests observed over the last 20 weeks of the
project.
Fig. 5. Change Generation Index as measured during the development of an embedded software
(thin, jagged line). A sliding average over 11 weeks is displayed as a thick dashed line, to
highlight the flatness of the CGI over time.
This project lasted much longer than originally planned, due to unceasing changes
in the specification from the client entity. Detailed software design was initiated early,
at a time when the system supporting the software had not yet been designed in enough
detail. This resulted in changes originating outside the software design process, making
it impossible to "drain" the subject of designing software that fits the customer's
needs. In this case, a flat CGI curve is the sign of a Sisyphean project constrained by an
endlessly moving environment.
5 Discussion
The present work suggests that change requests might be used as a marker for design
maturity. The heuristic that has been introduced in Sect. 3.2 reproduces the trends
observed in change request generation, and preliminary analysis shows that maturity as
inferred from parameter identification is in good agreement with project milestones (see
4.1: 97% maturity corresponds to system commissioning).
If confirmed, the approach would provide a way to assess the maturity of interface
design based on data, which would be an invaluable complement to expert-based
assessment. The scarce data in the first phases of the design does not allow for precise
maturity assessment, yet the “peak change” can be identified with little ambiguity –
indicating a 70% maturity milestone – and the remaining 30% maturation can be
evaluated and tracked with reasonable precision with the CGI.
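The 70% figure can be recovered analytically under the reference form assumed earlier for S; the derivation below is our own reconstruction, not taken from the paper.

```latex
% Maturity at the CGI peak, assuming S(u) = (1 + erf(u))/2 with u = a(t + b).
\[
\frac{d}{du}\bigl[S'(u)\,S(u)\bigr] = S''(u)\,S(u) + S'(u)^{2} = 0,
\qquad
S'(u) = \frac{e^{-u^{2}}}{\sqrt{\pi}}, \quad
S''(u) = \frac{-2u\,e^{-u^{2}}}{\sqrt{\pi}},
\]
\[
\Longrightarrow\; -2u\,S(u) + \frac{e^{-u^{2}}}{\sqrt{\pi}} = 0
\;\Longrightarrow\; u^{*} \approx 0.36, \qquad S(u^{*}) \approx 0.70 .
\]
```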
Further work is needed to assess the statistical robustness of the approach. Still, we
believe that this work is a new confirmation that change request management is an
essential part of the design processes that shall be applied to complex systems. Sec-
tion 2.2 raises the question of when during the development process design changes
shall be traced. Change management is often viewed as a costly process, at least in the
first stages of the design. It might be of interest to establish more informal change
management very early, to dispose of change request data as early as possible to the
benefit of design maturity assessment.
The heuristic we propose also offers an opportunity to diagnose project weaknesses.
In Sects. 4.2 and 4.3, the diagnosis is performed a posteriori to provide feedback on the development process. It is possible to identify, if not quantify, the
technical debt accumulated during the early stages of the design. A late surge in change
requests is the telltale sign of an incomplete system design. It is also possible to identify
projects that have suffered from inefficient learning.
The most promising aspect of the present work lies in the fact that we explicitly
viewed the design process as a learning process. This has implications for the metrics
associated with the design process - the CGI is one such metric. By pushing this
paradigm further, it could also have consequences on the way complex systems are
designed, or on the tools that are being used to develop complex systems. If the design
team is a group in the process of learning something by itself, then maybe education
sciences can bring some interesting methods, best practices and tools to improve and
organize the learning.
References
Alabdulkareem, A., Alfaris, A., Sakhrani, V., Alsaati, A., de Weck, O.: The multidimensional
hierarchically integrated framework for modeling complex engineering systems. In: CSD&M
(2013)
Eckert, C., de Weck, O., Keller, R., John Clarkson, P.: Engineering change: drivers, sources, and
approaches in industry. In: Proceedings of ICED 2009 (2009)
Gallistel, C.R., Fairhurst, S., Balsam, P.: The learning curve: Implications of a quantitative
analysis. PNAS 101(36), 13124–13131 (2004)
Giffin, M.L.: Change Propagation in Large Technical Systems. Ph.D. thesis, MIT (2007)
Giffin, M.L., de Weck, O., Bounova, G., Keller, R., Eckert, C., John Clarkson, P.: Change
propagation analysis in complex technical systems. J. Mech. Des. 131 (2009)
Han, J., Morag, C.: The influence of the sigmoid function parameters on the speed of
backpropagation learning. In: From Natural to Artificial Neural Computation, pp. 195–201
(1995)
Tracking Dynamics in Concurrent
Digital Twins
1 Introduction
Machine-generated data, i.e., data that was produced entirely by machines, e.g., from sensor readings [1], has become the backbone of many industrial and societal developments. It is mission-critical for Smart Buildings, Industry 4.0, the Internet-of-Things (IoT), and Autonomous Driving [2], but also enables system stakeholders to address business concerns like total cost of ownership with novel applications for, e.g., process control, diagnosis, and predictive maintenance that are driven by data analytics. These means are realized as digital twins in their most versatile and comprehensive form. Digital twins, which were identified by Gartner as one of the current top strategic technology trends [3], are digital replicas of assets or systems together with their processes. They are based on models of the knowledge of domain experts as well as on the real-time data collected from the systems and their environments. As such, they are subject to real-world dynamics that change what the twins are about. We address these dynamics and their consequences for the digital twin in this article and provide a novel approach to detect and cope with them.
Digital Twins, as originally defined by Grieves around 2001–2002 (see the newer [4]), provide a virtual representation of operational systems or other assets that aims at tightening the loop between design and execution. One of their strengths is the explicit use of expectations w.r.t. a system's behavior according to both domain engineering
knowledge and analysis models derived from data, in comparison with observations about the actual behavior (Fig. 1). The term 'behavior' has a wide interpretation in this context: for automated process control, it is typically understood according to the functional specifications of a system's performance, while diagnosis includes expectations about reliability, mean time between failures, etc., as does prognosis, especially for the purpose of predictive maintenance.
Simpler twins primarily offer insights into the operations of the systems as well as into the larger interconnected systems-of-systems, such as a manufacturing plant, that form the environment of the system. Oracle, listing those as 'virtual twins' in [5], points out that they go beyond simplistic documents enumerating observed and desired values, but reserves the inclusion of analytics models built using a variety of techniques for 'predictive twins'. The respective analysis models are typically based on data science,
even though domain application experts, system architects and engineers, and data
scientists often need to pool their expertise for success.
Once established, a digital twin typically serves various purposes. In cooperation
with our industrial partners, we normally aim for an operational use that has a direct
positive impact on total cost of ownership (TCO), e.g., energy savings with smart
lighting controls for buildings, in addition to one or two service purposes, especially
model-based diagnosis and prognosis. Furthermore, we strive to capture the causal
relationships between systems, their components and functionality, and their state
(especially failure modes and effects). As Pearl laid out in [6], this allows interventions, i.e., changes to the system, to be investigated in a systematic manner. We described our use of this approach in [7].
from data analysis over an initial time period, typically one that consists of smooth operations – a so-called happy flow – that illustrates which observations are expected if the system works well.
Seen this way, the digital twin is stationary: it mirrors the system in accordance with the knowledge and observations that were available at one point in time. Therefore, its status quo becomes a status quo ante, literally 'the state in which before', meaning here the state of affairs that existed previously – i.e., before the system and its environment changed.
Such change is ubiquitous. Even rather unassuming and mostly mechanical systems
like wind turbines, which are monitored to prevent prohibitively expensive damage in
case of a catastrophic failure [8], experience differences in the viscosity of lubricants or
the stiffness of welded joints based on temperature, and thus season. These differences
affect, e.g., the signatures and transfer of vibrations, a prime indicator of bearing issues
among other items. The experts who build the digital twin might take that into account
to a certain extent, but most likely there is no data and possibly even a lack of
understanding for extreme situations, which are thus not adequately covered by the
digital twin. The operation of a concurrent digital twin with this limitation will
therefore lead to warnings that the condition of the system is degrading or that there is a
failure, as the observations are abnormal. This, however, is a false positive, given that
the observations can be explained by the impact of the environment. More complex
examples illustrate that the dynamics systems face, and thus their digital twins as well,
cannot always be foreseen and thus ‘modeled in’, as one might argue in the example
above. Industrial production, realized with systems-of-systems typically orchestrated
by a production management system [9], becomes, e.g., more and more flexible, up to
the point of the so-called ‘lot-size one’. Here, the number of items manufactured in a
single production run is one, meaning that each item is made to order, following a
unique process. This trend, which is made possible by the concepts and technologies
covered under the term Industry 4.0 [10], leads to production processes, settings, and
parameters that were unknown in the conception phase of the factory as market
demands and insights into process improvements result in constant change.
Given our digital twin's purpose of diagnostics and prognostics, we chose Bayesian networks [11] for the underlying computational models, as they excel at these tasks. These graph-based representations of the joint probability distribution over all modeled variables offer causal probabilistic modeling [6], which is optimal for investigating cause-effect relationships, e.g., what impact a production setting has on product quality. Further-
more, Bayes nets allow sensitivity analyses to investigate the impact of factors [12],
e.g., to determine the dependency of a system’s performance on the environment. The
listed literature provides details on Bayesian networks, whereas the remaining article
only assumes familiarity with the core concepts: Bayes nets are graphs with (random)
variables as nodes. Directed edges between nodes show their relationships, i.e., probabilistic or causal dependencies, encoded as the conditional probability distribution of a variable given all its parents.
Several modeling techniques exist that enable the efficient use of Bayes nets for
system modeling, e.g., by supporting re-use with object-orientation [13], and semi-
automated construction of networks from knowledge bases [14] or system descriptions
[15]. Such techniques allow us to use consistently generated network building blocks,
called network fragments, as re-usable and maintainable parts of the underlying model
of digital twins. Being able to do so is critical for our work, as the networks will grow very large, which would otherwise result in disproportionate efforts to ensure their correctness.
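As a toy illustration of such a network fragment (variable names and probabilities are invented, and this is not the partners' model), the sketch below encodes one fragment, a production setting that causally influences a quality reading, and computes the probability of finding an observation under the model, the quantity used for the tracking described next.

```python
import numpy as np

# Toy fragment: Setting -> Quality, both binary; numbers are purely illustrative.
p_setting = np.array([0.7, 0.3])              # P(Setting = nominal, adjusted)
p_quality_given_setting = np.array([          # rows: Setting, columns: Quality = (ok, off-spec)
    [0.95, 0.05],                             # nominal setting
    [0.60, 0.40],                             # adjusted setting
])

def probability_of_finding(quality_obs: int) -> float:
    """P(observed quality) under the fragment, marginalizing the unobserved setting."""
    return float(p_setting @ p_quality_given_setting[:, quality_obs])

print(probability_of_finding(0))   # P(Quality = ok)       = 0.845
print(probability_of_finding(1))   # P(Quality = off-spec) = 0.155
```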
process instability and the presence of assignable causes. The idea is to measure the
mean and variance of a process when it is assumed to be stable and use a series of rules
to flag suspicious data. Within the rules, locations of observations relative to the control
chart control limits (typically at ±3 standard deviations for symmetrical processes) and
centerline defined by the mean indicate whether the process in question should be
investigated for assignable causes. The core concept here is that the occurrence of data
in certain locations, or of certain series of locations, is too unlikely to be ignored safely.
Conceptually, we use the major four Western Electric rules as described in [16]:
1. A single data point is more than 3 standard deviations (sigma) from the mean
2. Two out of three consecutive points are beyond 2 sigma on the same side of the mean
3. Four out of five consecutive points are beyond 1 sigma on the same side of the mean
4. Eight consecutive points are on the same side of the mean.
Originally, the WER operate on sensor measurements expressed in numbers. We look at probabilities and are only interested in values below the mean, as any value above the mean cannot indicate a decreasing fit between the Bayes net and the data. Given that probabilities form a ratio scale, the WER translate into the appropriate part of the scale from the mean towards 0, as shown in Fig. 4 (left) for a ¼ to ¾ ratio. The four major WER are interpreted for the probability of finding (Fig. 4 right), wherein the ratios were selected to capture the same likelihoods as the original WER.
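A minimal sketch of this one-sided use of the rules is given below. The exact probability ratios adopted in the paper are defined in Fig. 4 and are not reproduced here; the sketch therefore keeps the original sigma-based thresholds, applied to a probability-of-finding series and triggering only on values below the mean, as described above.

```python
import numpy as np

def wer_below_mean(pofs, mean, sigma):
    """One-sided Western Electric rules on a probability-of-finding series.

    Only deviations below the mean are considered, since values above the mean
    cannot indicate a decreasing fit between the Bayes net and the data.
    Returns the indices at which at least one of the four rules fires.
    """
    pofs = np.asarray(pofs, dtype=float)
    b1, b2, b3 = (pofs < mean - k * sigma for k in (1, 2, 3))
    below_mean = pofs < mean
    alarms = []
    for i in range(len(pofs)):
        r1 = b3[i]                                        # 1 point beyond 3 sigma
        r2 = i >= 2 and b2[i - 2:i + 1].sum() >= 2        # 2 of 3 beyond 2 sigma
        r3 = i >= 4 and b1[i - 4:i + 1].sum() >= 4        # 4 of 5 beyond 1 sigma
        r4 = i >= 7 and below_mean[i - 7:i + 1].all()     # 8 consecutive below the mean
        if r1 or r2 or r3 or r4:
            alarms.append(i)
    return alarms
```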
Above, we described our methods to detect and localize discrepancies between the
current, observable reality that a system operates in and the computational model that a
digital twin uses to monitor and analyze that system and its behavior, and, e.g., to
provide diagnosis and prognosis for predictive maintenance.
We developed these methods to maintain the computational model, i.e., to keep it and thus the concurrent digital twin up to date w.r.t. the dynamics of the real world. As we face large models that monitor potentially mission-critical systems, for which both down-time and non-optimal operations induce high costs, we require that the maintenance of the digital twin occurs in a timely manner and that its quality is assured. This leads to a set of requirements for its processes and methods:
• no significant delay in the detection of an operational space transgression (no false
negatives)
• no unwarranted maintenance (no false positives)
• efficient mechanisms to update the model:
localized maintenance without complete re-building or re-training
• low efforts to ensure the updated model’s quality:
only local effects of updates, no needs to re-evaluate the whole.
As presented in Sect. 3, we address the latter two requirements in our approach via
the fragmentation of the Bayesian networks. It allows us to adjust only the parts of the
model that are outdated. Given the advantages of causal modeling, quality assurance
techniques like testing and expert evaluations are also restricted to the respective
fragments and the envelope that consists of nodes connecting to them. Consequently,
we also require
• localization of change w.r.t. the parts of the model:
detection only triggers for relevant model fragments.
Figure 5 displays an overview of the resulting workflow for the maintenance of Bayes nets by local adaptations. While its technical details are out of scope for this article, we show that the methods defined here fulfill the requirements listed above via the experimental validation described in Sect. 6, after an introduction of the underlying industrial use case in Sect. 5.
We applied our methods to an industrial use case that we will describe in anonymized
terms because of confidentiality. As our work is set to improve complex manufacturing
processes that span factory equipment from multiple vendors, our goal is to safeguard a
digital twin that monitors production equipment against dynamics that stem either from
changes within the factory control, e.g., parameter settings for new products, or from
changes to the factory’s setup, e.g., updates to equipment and processes. For this
article, we consider a simplified production line in which two production steps P1 and
P2 precede our own production equipment (Fig. 6).
A digital twin’s purpose in this setup is twofold: it monitors the health of the
equipment and it provides information towards process step optimization. The latter
acts on partial information, as P1 and P2 act as so-called grey boxes, and data from the end of the production line arrives too late to provide feedback that impacts production runs directly. Based on expert knowledge on optimization schemes and fed
with historical data, we developed a Bayesian Network for the model core of the
envisioned digital twin’s second purpose. Figure 7 depicts a variant of this network
fitting to the simplified version of our use case.
6 Experimental Validation
We validated our methods in an experimental setting that simulates our industrial use case, but allows us to control the location and scope of change within the production line, thus providing the ground truth that the tracking needs to cope with.
For this, we generated time series of 4000 data points per observable and changed the setup with progressive changes: first altering the machine setup parameters, then replacing production machine P1 with another one having different characteristics, and last introducing an additional source of influence in the P1 production process that cannot be sensed directly but does affect the physical properties of the product leaving machine P1. The changes resulted in a significant decrease of the probability-of-findings (PoFs), i.e., the fit between the model and the data, as Fig. 8 shows.
The three changes that we introduced to the setup happened successively, after 1000 time-steps each. For each change, the out-of-scope detection triggered essentially immediately with at least one of its rules, as Fig. 9 shows. (There is an inherent delay of one time-step for the first rule, and of up to eight time-steps for rule 4.)
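A compressed re-enactment of this behaviour (with assumed numbers, not the confidential data) can be run on top of the rule sketch shown in Sect. 3: probability-of-findings fluctuate around a stable mean, the level drops at a known change point, and the rules fire within a few time-steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stable phase: PoFs around 0.85; after t = 1000 the model fit degrades (assumed values).
pofs = np.concatenate([rng.normal(0.85, 0.02, 1000),
                       rng.normal(0.70, 0.02, 200)])

mean, sigma = pofs[:1000].mean(), pofs[:1000].std()   # calibrated on the stable phase
alarms = wer_below_mean(pofs, mean, sigma)
print("change introduced at t = 1000; first alarm at or after it: t =",
      next(i for i in alarms if i >= 1000))
```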
In the picture, we see experimental results from the use of the detection rules
introduced in Sect. 3 working on three different models: the original one that was
generated for the digital twin and those generated from it in two successive mainte-
nance operations in which we updated the parts of the model for which the detection
localized the transgression of the models’ operational space (top left to top right to
lower right).
change. In the first section, e.g., it was possible to explain the change of the probability-of-findings within the production parameter settings, indicating a new production run, while the second section correctly identifies a change in the P1 step, with the data showing that P2 is still covered correctly by the model. The third section shows an interesting special case in this regard: the detection picks up a difference between the expected and the observed KPI data, but cannot localize a change that would cause the product quality to change. This is consistent with the change we introduced, as it was outside the scope of the digital twin.
All in all, we saw that our approach's ability to check both the whole model that underlies the digital twin and relevant fragments of the Bayes nets individually enabled an exact localization of the parts of the probabilistic model that required maintenance. This is a major asset to us, as it allows us to keep very large models on track efficiently, a requirement for working with digital twins of industrial scope in dynamic settings.
There are several areas in which we foresee sensible extensions of our work. First and
foremost is automation and the generation of a seamless workflow for the continuous
tracking of real-world dynamics in digital twins. This is primarily an industry issue, as
such a workflow must fit within the operational use of the digital twin and adhere to the
respective company’s quality assurance procedures.
References
1. Monash, C.: Examples and Definition of Machine-Generated Data. Monash Research
Publication (2010). www.dbms2.com/2010/12/30/examples-and-definition-of-machine-
generated-data. Accessed Apr 2018
2. Laney, D., Jain, A.: 100 Data and Analytics Predictions Through 2021. Gartner Report
G00332376 (2017)
3. Gartner Press Release: Gartner Identifies the Top 10 Strategic Technology Trends for 2017.
Gartner (2016). www.gartner.com/newsroom/id/3482617. Accessed Apr 2018
4. Grieves, M., Vickers, J.: Digital twin: mitigating unpredictable, undesirable emergent
behavior in complex systems. In: Transdisciplinary Perspectives on Complex Systems,
pp. 85–114. Springer International Publishing (2016)
5. Oracle: Digital twins for IoT applications. Oracle White Paper (2017). www.oracle.com/us/
solutions/internetofthings/digital-twins-for-iot-apps-wp-3491953.pdf. Accessed Apr 2018
6. Pearl, J.: Causality. Cambridge University Press, New York (2009)
7. Borth, M.: Probabilistic system summaries for behavior architecting. In: Proceedings of the
Complex Systems Design and Management 2014 CEUR Workshop, pp. 71–82 (2014)
8. Christensen, J.J., Andersson, C., Gutt, S.: Remote condition monitoring of Vestas turbines.
In: Proceedings European Wind Energy Conference, pp. 1–10 (2009)
9. Gupta, S., Starr, M.: Production and Operations Management Systems. CRC Press, Boca
Raton (2014)
10. Schwab, K.: The Fourth Industrial Revolution. Portfolio Penguin, London (2017)
11. Jensen, F.V.: Bayesian Networks and Decision Graphs. Springer, New York (2001)
12. Jensen, F.V., Aldenryd, S.H., Jensen, K.B.: Sensitivity analysis in Bayesian networks. In:
Carbonell, J.G., et al. (eds.) Symbolic and Quantitative Approaches to Reasoning and
Uncertainty. Springer Lecture Notes in CS, vol. 946, pp. 243–250 (1995)
13. Koller, D., Pfeffer, A.: Object-oriented Bayesian networks. In: Geiger, D., Shenoy, P.
P. (eds.) Proceedings of the 13th Conference on Uncertainty in Artificial Intelligence (UAI
1997), pp. 302–313. Morgan Kaufmann Publishers Inc. (1997)
14. Laskey, K.B., Mahoney, S.M.: Network fragments: representing knowledge for constructing
probabilistic models. In: Geiger, D., Shenoy, P.P. (eds.) Proceedings of the 13th Conference
on Uncertainty in Artificial Intelligence (UAI 1997), pp. 334–341. Morgan Kaufmann
Publishers Inc. (1997)
15. Borth, M., von Hasseln, H.: Systematic generation of Bayesian networks from systems
specifications. In: Musen, M.A., Neumann, B., Studer, R. (eds.) Intelligent Information
Processing, pp. 155–166. Kluwer (2002)
16. Western Electric Rules: From Wikipedia. en.wikipedia.org/wiki/Western_Electric_rules.
Accessed May 2018
17. Gama, J., Žliobaitė, I., Bifet, A., Pechenizkiy, M., Bouchachia, H.: A survey on concept drift
adaptation. ACM Comput. Surv. (CSUR) 46, 44 (2014)
18. Pimentel, M.A., Clifton, D.A., Clifton, L., Tarassenko, L.: A review of novelty detection.
Signal Process. 215–249 (2014)
19. Borth, M., van Gerwen, E.: Data-driven aspects of engineering. In: IEEE SoSE 2018, Paris
(2018, accepted)
How to Boost the Extended Enterprise
Approach in Engineering Using MBSE –
A Case Study from the Railway Business
1 Introduction
Looking at the technology evolution of the control systems for rolling stock, it is important to note that these systems started as fully mechanical systems (steam locomotives); once electronics appeared, all the «non-mechanical» functions to monitor and control the system were initially realized with wired logic (relay logic). It is only much more recently (in the last decade), with the introduction of the "train control and monitoring system", that the control started to be realized with a combination of software and hardwired logic. Typically, safety functions and reliability performance are guaranteed by hardwired implementation.
One of the most important factors in this complexity growth is the transformation that replaces more and more hardwired functions with software. Typically, around twenty to twenty-five different subsystems (traction, braking, doors, HVAC, passenger info and entertainment, CCTV, signaling, etc.) have to be functionally integrated by using several different types of communication buses to interconnect among themselves and with other legacy rolling stock.
As mentioned above, since the train shall provide all the typical services of a connected object (IoT), the subsystems have been forced to deeply modify their traditional data communication architectures. Indeed, the introduction of Ethernet technology in the rolling stock control domain has been required to support additional
This chapter provides insights into the engineering phases performed at BT to conceptualize a rolling stock system functional architecture, and also into how the concept phase has been implemented by using a model-based approach built on the SysML language [2, 4, 11, 17].
• Level 2 – Operational scenarios: each operational context is broken down into more granular ones. This breakdown is formalized through a set of activity diagrams, one per operational context. Operational scenarios are modelled as activities.
• Level 3 – Consist use cases: basic "bricks" of the train behaviour which are defined in the next process phase. As in the previous step, each consist use case is modelled as an activity (and as a use case), and each operational scenario is formalized through an activity diagram.
The key idea of shifting from the classical document-based engineering approach to the model-based engineering approach is not to discard the whole set of documents, but rather to keep all the development artefacts in a common and concise model. Out of this model, the typical systems engineering process documents of the railway industry can be generated. The model, being the single source of truth, is continuously enriched and detailed with the information created during each design step.
It helps the engineers maintain coherency and consistency and provides a “real-time” common view of the system design. The integration of the subsystems into the vehicle requires advanced engineering methods to achieve a seamless integration and an efficient on-site commissioning, testing and vehicle homologation. The main benefits are:
• Clearly defined and managed interfaces, covering not only static interface definitions but also behavioural descriptions, to improve the quality of the whole process.
• A modular approach with defined variation points that enables re-usable building blocks in a product line approach. This allows the system integrator a quick and easy adaptation of their vehicle platform to different operator requirements.
The following chapter deals with the SysML-based modelling process which has replaced the conventional document-driven approach.
The core concept is to feed and re-use a common asset base made of subsystem functions, of complex subsystem technical elements which realize system functions, and of system architectures which combine different technical elements into a train subsystem (Fig. 4).
The Function Carriers are technical products ready to be used in customer projects; they are located at the same level of the Technical View. Function Carriers and discipline-specific elements (on levels D…) can be grouped into System Families. On the left side of the matrix, the column Requirements shows the connection between the System Model and the textual requirements stored separately in the requirements management tool. The same levels used to structure the System Model are also used to structure the textual requirements, which may be linked to model elements of the same level to ensure traceability between requirements and solutions. The light-blue colour used for the Operational Analysis View in the figures below illustrates the special role of that view as a bridge between the textual requirements and the design models in the Functional and Technical View (orange columns). The Feature Model, containing information about the variability of the system, is orthogonal to both views and levels; feature constraints may be connected with elements of any view and any level. The Feature Model thus represents a third dimension of the Generic System Model and is not represented in the matrix.
The three views in the model are:
Operational Analysis View
The Operational Analysis View falls in two parts: the Context Model and the Use Case
Model.
Context Model
The context of a system is the sum of all human, natural or technical entities that
surround the system and are relevant for the correct operation of the system in its
environment. The context does not belong to the system; in a development project, it
must usually be accepted as is, because it may not be changed. Therefore, the context
has a strong impact on the requirements for the system. The purpose of the Context
Model is to analyze influences and constraints on the system originating in its envi-
ronment. Such influences may be of physical (e.g. climate conditions), technical (e.g.
communication protocols), social (e.g. ergonomics and usability), or legal nature (laws,
standards and other regulations). In Context Diagrams, the System of Interest (SoI) is
represented as a black box interacting with Context Elements. The Context Model
serves as an entry point to the Generic System Model. It includes separate Context
Diagrams for all Main Functions. In combination, these diagrams yield a Context
Diagram for the complete brake system, or, in a similar way, for any other train
subsystem included in the model.
Use Case Model
Use Case Modelling is a high-level method to characterize the functionality of a system
from the individual perspectives of the different users. Users in this sense may be
humans as well as technical systems that use services of the system of interest. In the
KB System Model, there are no special rules for Use Case Diagrams; they simply
follow the standard as defined in UML or SysML.
Functional View
The modelling approach at KB is function-based, meaning that modelling activities
typically start with a functional analysis that abstracts from technical details. The
function of a technical system is the action or purpose for which it is designed.
Analysing and decomposing the function of a system is a way to find abstract
descriptions of its actions and purposes. Functional descriptions are solution-neutral,
which means that they don’t anticipate design decisions or technical details.
Function Classification
A model-wide Function Classification is defined for the complete functionality of the
systems considered in the model. The Function Classification is a tree with Main
Functions as leaf nodes. It is represented as a structure of nested model packages and
used as a common structuring backbone in several model parts. The Function Clas-
sification is a classification scheme which is similar to, but not identical with, the system
levels (S0, S1, etc.). While the Function Classification is used to classify Main
Functions on the same level, and to group them into packages for easier orientation, the
system levels are used to model functions that combine several Main Functions in a
certain way to realize more comprehensive functionality (called System Functions).
Currently, the Function Classification comprises the functionality of the Brake System.
In the future, it may be extended to include other train subsystems, such as Propulsion,
Doors, Air Conditioning, etc.
The most important diagrams used in the Functional View are Functional Context
Diagrams (which define the link to the Context Model), Function Trees (decomposi-
tions of functions into Sub-Functions), and Function Networks (showing the interre-
lations of Main Functions within a System Function or Sub-Functions within a Main
Function).
Technical View
The Technical View describes how the System of Interest is composed of technical or physical components. Like the other views of the model, it is organized in multi-discipline system levels (S0, S1,…) and discipline-specific levels (D1, D2,…). Each
model element of the Technical View is classified into one of the levels, and for each
model element the following information is provided:
• its system boundary;
• its constituents (the model elements of lower levels of which it is composed);
• its interfaces at the system boundary;
• the interfaces of its constituents and how they are interconnected;
• the engineering discipline it belongs to (for discipline-specific elements).
The central model elements in the Technical View are the Function Carriers.
A Function Carrier is a standardized subsystem of system level S3 which implements a
defined set of standardized functions (Main and/or Sub-Functions), and is composed of
standardized units of one or more disciplines (mechanical, pneumatic, electrical, elec-
tronic (including software)) realized on physical devices in a defined arrangement. The
model elements of the Technical View are described by the following two diagram types:
• Decomposition Diagrams: Block Definition Diagrams (BDD) that define which
parts an element consists of;
• Architecture Diagrams: Internal Block Diagrams (IBD) that show how the parts
within an element are connected through their interfaces.
These diagram types occur on each of the model levels in a similar fashion:
On level S3, for example, they describe the interfaces and internal architecture of
Function Carriers; whereas, on level D1, they describe the interfaces and internal
architecture of Function Carrier Elements.
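Outside the SysML tooling, the information recorded for such an element can be pictured as a small record; the sketch below is purely illustrative (all names are invented) and simply mirrors the information listed above: level, discipline, interfaces at the boundary, and constituents.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ModelElement:
    """Illustrative record of what the Technical View captures per model element."""
    name: str
    level: str                        # e.g. "S3" (multi-discipline) or "D1" (discipline-specific)
    discipline: Optional[str] = None  # only set for discipline-specific elements
    interfaces: List[str] = field(default_factory=list)               # ports at the system boundary
    constituents: List["ModelElement"] = field(default_factory=list)  # lower-level parts

# Hypothetical Function Carrier (level S3) composed of discipline-specific parts (level D1).
brake_control_unit = ModelElement(
    name="BrakeControlUnit", level="S3",
    interfaces=["TrainBus", "PneumaticSupply"],
    constituents=[
        ModelElement("ControlBoard", "D1", discipline="electronic", interfaces=["TrainBus"]),
        ModelElement("ValveBlock", "D1", discipline="pneumatic", interfaces=["PneumaticSupply"]),
    ],
)
```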
Levels
To structure the model according to abstraction and granularity, the following levels are
defined. The levels starting with S… (for “system”) contain mixed-discipline items,
those starting with D… (for “discipline”) contain items that are discipline-specific, i.e.
purely mechanical, pneumatic, electrical, or electronic. For Requirements Management,
the same level numbering is used as in the System Model.
Variability
The Variability Model constitutes a cross-cutting aspect of the model. It is used to
model and manage the variation of the System Model and of other development
artefacts (e.g. documentation). The Variability Model contains features and their
interdependencies. According to ISO/IEC 26550 [8], features are abstract functional or
non-functional characteristics of a System of Interest for end-users and other stake-
holders. The Variability Model has relationships with all other views: artefact depen-
dencies can be used to describe variability anywhere in the System Model. Thus, the
Variability Model is orthogonal to the rest of the System Model and may be regarded as
a third dimension added to the Level/View Matrix (see Views and Levels).
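As a minimal sketch of what such a feature-based Variability Model amounts to (the feature names and interdependency rules below are purely illustrative and not taken from the actual model), features and their requires/excludes relationships can be checked against a selected configuration:

# Minimal sketch of a variability model: features plus "requires"/"excludes"
# interdependencies, and a check whether a selected feature configuration is valid.
requires = [("HighSpeedPackage", "WSP"), ("HighSpeedPackage", "MagneticTrackBrake")]
excludes = [("MagneticTrackBrake", "SandingSystem")]   # purely hypothetical rule

def valid(selection):
    """Return a list of violated interdependencies for a feature selection."""
    violations = []
    for a, b in requires:
        if a in selection and b not in selection:
            violations.append(f"{a} requires {b}")
    for a, b in excludes:
        if a in selection and b in selection:
            violations.append(f"{a} excludes {b}")
    return violations

print(valid({"HighSpeedPackage", "WSP"}))
# ['HighSpeedPackage requires MagneticTrackBrake']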
This chapter will provide insights on the main focus of this paper: the customer-
supplier relationship between a system integrator and a subsystem provider. This is a
key area where the extended enterprise approach can be successfully implemented,
especially when boosted by the application of MBSE on both sides.
has pushed smart organizations to rethink the way of delivering value to customers, for
instance by reshaping the relationship with suppliers. A way to achieve this is to improve
collaboration and communication so as to share the end goal and the related risks.
According to Jan Duffy and Mary Tod of IDC, the authors of the article "The
Extended Enterprise: Eliminating the Barriers" [3], the extended enterprise can only be
successful if all the component groups and individuals have the information they need
to do business effectively.
We now come to the core part of this paper, which describes how two companies in a customer-supplier relationship in the railway business, namely Bombardier Transportation and Knorr-Bremse, have defined a model-based methodology to support the extended enterprise approach for the functional integration of a brake subsystem in the rolling stock product.
This chapter will give insights on a model-based collaboration concept between Bom-
bardier Transportation (BT) and Knorr-Bremse (KB). The process is cyclic and is repeated as long as necessary to achieve a solution accepted by both partners. It is
built on system models as they are used in Model-Based Systems Engineering (MBSE).
Both partners possess system models that have been developed separately and in dif-
ferent tools, yet based on the same modelling language (SysML). The BT-KB MBSE
Cycle proposes to connect these models and to use them in a well-defined way to foster
model-based collaboration. The goals of the BT-KB MBSE Cycle are:
• Simplifying the definition of interfaces
• Generating interface control documents based on consistent models
• Traceability of functional requirements and design decisions across company
borders
• Facilitating iterative refinement and change management
• Effect analysis across company borders
• For subsystems (e.g. Brakes, Doors, etc.): Using standardized products taken from a
portfolio of standard system functions, function carriers and reference architectures
• On vehicle level: Integration of standardized subsystem products into standardized
functional building blocks.
To make the interface alignment traceable on both sides, the KB technical ports are
mapped to the BT functional signals, because the BT functional view has roughly the
same level of technical detail as the KB technical view. This mapping should be
supported by a tool linking the corresponding elements of the two models together.
Once the model elements are linked, this information can be used in further iterations
and will speed up the process of alignment in the BT-KB MBSE Cycle. There is a
special reason why a White box view of a Technical Solution Block is delivered by KB
in addition to the interface specification: Some of the connections between the Function
Carriers in the Technical Solution Block might be realized by BT as part of the train
infrastructure (bus systems, wiring, pipes). In the model, the connections for which this
is true may be marked with special attributes. By analysing these attributes of the
connections within the Technical Solution Blocks, requirements on the BT train
infrastructure (e.g. bandwidth and performance of bus systems) can be derived.
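The following sketch illustrates this port-to-signal mapping and the derivation of infrastructure requirements from connection attributes. All element names, attributes and values are hypothetical and only indicate the principle, not the actual BT or KB model content:

# Sketch of the KB-port-to-BT-signal mapping and of deriving train-infrastructure
# requirements from connection attributes within a Technical Solution Block.
port_to_signal = {                       # KB technical port -> BT functional signal
    "BCU.status_out": "BrakeStatus",
    "BCU.demand_in": "BrakeDemand",
}

connections = [   # connections inside a Technical Solution Block (illustrative)
    {"from": "BCU.status_out", "to": "Gateway.in1",
     "realized_by": "BT_infrastructure", "medium": "bus", "bandwidth_kbps": 50},
    {"from": "BCU.air_out", "to": "BrakeCylinder.air_in",
     "realized_by": "KB_internal", "medium": "pipe"},
]

def infrastructure_requirements(conns):
    """Collect requirements on the BT train infrastructure (e.g. bus bandwidth)."""
    reqs = []
    for c in conns:
        if c["realized_by"] == "BT_infrastructure" and c["medium"] == "bus":
            reqs.append((c["from"], c["to"], c.get("bandwidth_kbps", 0)))
    return reqs

print(infrastructure_requirements(connections))
# [('BCU.status_out', 'Gateway.in1', 50)]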
For simplicity, the technical solution exchanged between the partners in this step is
not instantiated yet. This means that groups of technical elements which need to be
repeated several times in the train architecture are only represented once in the models
exchanged. During the mapping process, discrepancies between both models become
evident and can be analysed. Because this might lead to changes of requirements or of
interface elements in the model, the process described here is a cyclic process com-
prising as many iterations as necessary to achieve a solution accepted by both partners.
Step 4 (BT and KB): Instantiate the Technical Solution
After BT has received a non-instantiated technical solution from KB, both sides
independently work on instantiating it. Instantiating means adding information on how
often the technical elements are repeated in the train architecture. The system of interest
for this design step is a consist. On the KB side, a portfolio of reference architectures
supports the instantiation. The results of the instantiation are project-specific technical
models on both sides. The KB model will contain information on the correct number of
Function Carriers needed to realize the technical solution for a complete consist,
together with the correct multiplicity of bus signals needed for the communication
between the Function Carriers. The BT model contains similar information about the
communication between the different subsystems (e.g. Brakes, Propulsion, etc.).
Step 5 (BT and KB): Align ICDs
After the instantiated models have been developed independently by both partners, the
results need to be aligned. To this end, corresponding model elements of the instan-
tiated models are mapped. This mapping will draw on the results of step 3, i.e. the
mapping of the non-instantiated models will be refined adding information about
multiplicity. During this process, inconsistencies in the interface definitions of the two
partners become evident and can be corrected. This might even cause another cycle of
the complete MBSE process if the inconsistencies can’t be solved by just correcting the
instantiations made in step 4.
Just as in step 3, the mapping results will be stored in a tool. This ensures trace-
ability between the two models, allowing further cyclic refinement and simplifying
change management. Due to slightly different modelling approaches on both sides,
technical signals on the BT side will be mapped to technical ports on the KB side.
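A minimal sketch of this alignment step (with hypothetical signal names and multiplicities) is to compare, per interface element, the multiplicities recorded in the two instantiated models and to flag any disagreement for the next iteration of the cycle:

# Sketch of step 5: aligning the independently instantiated models. Each side
# records the multiplicity of a signal per consist; mismatches are flagged.
bt_signals = {"BrakeDemand": 4, "BrakeStatus": 4, "WSP_Active": 8}   # illustrative
kb_ports = {"BrakeDemand": 4, "BrakeStatus": 2, "WSP_Active": 8}     # illustrative

def align(bt, kb):
    """Return interface elements whose multiplicities disagree between the models."""
    issues = []
    for name in sorted(set(bt) | set(kb)):
        if bt.get(name) != kb.get(name):
            issues.append((name, bt.get(name), kb.get(name)))
    return issues

print(align(bt_signals, kb_ports))   # [('BrakeStatus', 4, 2)]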
6 Conclusion
The evolution of the railway market is challenging the status quo of how the companies working in this business are organized and interact with each other. Improving collaboration and communication between the rolling stock integrator and the subsystem suppliers, in order to share the end goal and the related risks, is becoming a must to remain competitive.
The extended enterprise approach answers that need, but it requires enablers; for the functional content of the product, MBSE can definitely be that enabler.
Unfortunately, the only available and widely adopted standards in the MBSE domain are the generic system modelling language SysML, which provides basic elements and diagrams to depict system structure and behaviour, plus some generic modelling methodologies which remain quite abstract. No common industrial standard on modelling methodology for the railway sector has been developed yet.
In other engineering domains such as software, there are coding standards like MISRA which give guidelines on how to apply the language; in other industrial sectors such as automotive, standards like AUTOSAR (Automotive Open System Architecture) [1] have established an open and standardized software architecture for automotive electronic control units (ECUs), permitting scalability to different vehicle and platform variants, transferability of software, consideration of availability and safety requirements, as well as collaboration between various partners and maintainability throughout the whole product life cycle.
The lack of such standards requires companies that want to implement an MBSE approach (like Bombardier and Knorr-Bremse) to develop their own specific profiles of the SysML language and related modelling methodologies. As a major consequence, it is very difficult, on the one hand, to interlink models across companies, especially when different tool environments are used, and, on the other hand, to extend the use of the functional models by linking them down to behavioural simulation environments such as Modelica.
The case study presented in this paper is an exceptionally fortunate case in which the lack of those standards has not prevented Bombardier and Knorr-Bremse from exploring the feasibility of implementing a real MBSE-based extended enterprise approach and from appreciating all its possible benefits. Both companies strongly believe in MBSE and in its potential to enable a strong extended enterprise approach, and hope that an open modelling methodology standard will be developed, providing a common MBSE framework across the railway industry.
References
1. AUTOSAR: Standards. https://www.autosar.org/standards. Accessed 10 July 2018
2. Delligatti, L.: SysML Distilled. Addison-Wesley, Upper Saddle River (2015)
3. Duffy, J., Tod, M.: The Extended Enterprise: Eliminating the Barriers. IDC
4. Friedenthal, S., Moore, A., Steiner, R.: A Practical Guide to SysML, 3rd edn. Morgan
Kaufmann (2014)
5. Grady, J.O.: Universal architecture description framework. Syst. Eng. 12(2) (2009)
6. International Council on Systems Engineering (INCOSE): Systems Engineering Vision
2020, version 2.03. INCOSE-TP-2004-004-02, Seattle (2007)
7. International Council on Systems Engineering (INCOSE): Systems Engineering Handbook,
version 3.2.2. INCOSE-TP-2003-002-03.2. San Diego (2012)
8. International Organization for Standardization (ISO): Software and systems engineering –
Reference model for product line engineering and management. ISO/IEC 26550 (2013)
9. Kang, K.C., et al.: Feature-Oriented Domain Analysis (FODA) Feasibility Study. Carnegie
Mellon University, Technical report CMU/SEI-90-TR-2
10. Krob, D.: Eléments d’architecture des systèmes complexes. In: Gestion de la complexité et
de l’information dans les grands systèmes critiques, Alain Appriou editor, CNRS (2009)
11. Object Management Group: OMG Systems Modeling Language (SysML), version 1.3,
https://www.omg.org/spec/SysML/1.3. Accessed 10 July 2018
12. Pohl, K., Böckle, G., van der Linden, F.: Software Product Line Engineering. Springer,
Berlin (2005)
13. Pohl, K., Hönninger, H., Achatz, R., Broy, M. (eds.): Model-Based Engineering of
Embedded Systems. The SPES 2020 Methodology. Springer, Berlin (2012)
14. Rajan, A., Wahl, T. (eds.): CESAR - Cost-efficient Methods and Processes for Safety-
relevant Embedded Systems. Springer, Berlin (2013)
15. Ratiu, D., Schwitzer, W., Thyssen, J.: A System of Abstraction Layers for the Seamless
Development of Embedded Software Systems. SPES 2020 Deliverable D1.2.A-2. Technis-
che Universität München (2009)
16. Van Gaasbeek, J.R.: Model-Based System Engineering (MBSE), presented in the
INCOSE L.A. Mini-Conference. INCOSE (2010)
17. Weilkiens, T.: Systems Engineering mit SysML/UML, 2nd edn. dpunkt.verlag (2008)
18. Wymore, A.W.: Model-Based Systems Engineering. CRC Press, Boca Raton (1993)
Model-Based System Reconfiguration:
A Descriptive Study of Current Industrial
Challenges
1 Introduction
control activities must react by reconfiguring the system to cope with these abnormal
behaviors. The literature addresses these concerns as fault detection, isolation, and reconfiguration, or fault-tolerant control (FTC). The primary purpose of FTC functionalities is to overcome malfunctions while maintaining desirable stability and performance properties [4, 5].
Passive and active FTC functionalities exist, depending on their management of
detected faults. Passive FTC functionalities sustain robust control activities that handle
faults within a predefined quality of service. On the other hand, active FTC func-
tionalities allow reaction to a detected fault and perform reconfiguration so that the
stability and the performances can be maintained [6]. In active FTC functionalities, the
fact that the control activities are reconfigurable means that one can adaptively address
non-predefined faults.
A typical active FTC functionality relies on two fundamental mechanisms: fault detection and isolation (FDI), sometimes referred to as "fault diagnosis" [7], and reconfiguration control (RC) mechanisms [4]. The reconfiguration control aims at
masking the fault either by switching to a redundant system/component or by revising
the controller structure. In some cases, the available resources do not allow counter-
acting fault effects. In such cases, the best solution is to allow system degradation when
the performance is accepted to be out of the optimal area [5].
There are different techniques used in fault detection and isolation. They are
classified into model-based and data-based techniques [4]. Model-based techniques use
system models to estimate the system states and parameters. Data-driven techniques, on
the other hand, rely on classifiers and signal processing [4]. In this paper, the interest lies in changes and deviations in the system state addressed by model-based techniques, while data-driven techniques are out of scope.
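As a minimal sketch of the model-based idea (with a hypothetical first-order plant model and threshold, not taken from the paper), fault detection can be reduced to comparing a model prediction with the measurement and flagging residuals that exceed a threshold:

# Minimal sketch of model-based fault detection: a nominal model predicts the
# system output, the residual (measurement minus prediction) is compared with a
# threshold, and an exceedance is flagged as a fault. Numbers are illustrative.
def predict(u):
    """Nominal model of the plant output for input u (assumed static gain of 2.0)."""
    return 2.0 * u

def detect_fault(inputs, measurements, threshold=0.5):
    """Return the time steps (and residuals) at which the residual exceeds the threshold."""
    faults = []
    for k, (u, y) in enumerate(zip(inputs, measurements)):
        residual = abs(y - predict(u))
        if residual > threshold:
            faults.append((k, round(residual, 2)))
    return faults

inputs = [1.0, 1.0, 1.0, 1.0]
measurements = [2.1, 1.9, 0.8, 0.7]        # the output drops: a fault appears at step 2
print(detect_fault(inputs, measurements))  # [(2, 1.2), (3, 1.3)]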
Reiter [8], in his theory of diagnosis, proposes a method that requires a model
describing the system. Given observations of a system, diagnosis compares the observed
system with the expected behavior (modelled system) to determine the malfunctioning
components. Reiter’s theory has been extended to deal with the model-based diagnosis
of different kinds of systems in different domains of applications [9, 10]. Identifying
faults in malfunctioning systems is important but repairing these systems so that they
can continue their missions is an essential problem to be addressed. Reiter’s theory of
model-based diagnosis has been extended to a theory of reconfiguration [11]. Much
research has been conducted to use the model-based analysis concepts in the recon-
figuration control design and analysis algorithms [12–15].
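A small sketch of the flavour of Reiter's approach is shown below, with hypothetical components and conflict sets: diagnoses are computed as the minimal hitting sets of the conflict sets obtained by comparing observed and expected behaviour.

# Sketch of Reiter-style diagnosis: given conflict sets (sets of components that
# cannot all be healthy given the observations), diagnoses are the minimal
# hitting sets of those conflicts. Brute-force enumeration for a toy example.
from itertools import combinations

components = ["sensor", "controller", "valve"]
conflicts = [{"sensor", "controller"}, {"controller", "valve"}]  # illustrative

def minimal_hitting_sets(components, conflicts):
    hitting = []
    for r in range(1, len(components) + 1):
        for cand in combinations(components, r):
            s = set(cand)
            if all(s & c for c in conflicts):                 # hits every conflict
                if not any(h <= s for h in hitting):          # keep only minimal sets
                    hitting.append(s)
    return hitting

print(minimal_hitting_sets(components, conflicts))
# [{'controller'}, {'sensor', 'valve'}]  (set ordering may vary)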
3 Methodology
The research presented in this paper is action-based research [26], meaning that at least one of the researchers is also an engineer in industry. This paper addresses the
first stages of this research. The research methodology (Fig. 1) is based on the
exploration of current literature through the examination of papers supported by data
collection. According to Blessing and Chakrabarti [27], observation and data gathering
are essential to analyze and understand the industrial context and to propose a
descriptive study that covers both empirical studies and their analysis to form new
hypotheses.
In the first phase of this action-based research, the aim is to identify the current challenges in the management of system configurations and in the overall System Reconfiguration process with regard to existing literature, including norms and standards.
Experts interviewed during this research have emphasized the need to support
systems evolution during each lifecycle phase. According to them, “Reconfiguration is
an everyday question.” For instance, during dismantling one should think of recon-
figuration because dismantling a service or a product might have an impact on the
overall services provided by the system. However, for the perimeter considered in the enterprise, two phases of the system lifecycle seem to be critical: design and operations (Fig. 3).
This research seeks to build on previous work and aims at proposing an integrated
approach for model-based instantiation of system configuration and reconfiguration
addressing both design-time and run-time. In particular, the approach aims at using models to represent the system configuration and the health monitoring system in order to harness complexity. A system management framework will be studied in order to propose a
configuration manager (Fig. 5) including an engine and a knowledge base. This
knowledge base will contain models representing the system configurations and
reconfiguration rules. The configuration engine will be in charge of applying relevant
configurations.
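A minimal sketch of how such a configuration manager could operate is given below; the rule format, configurations and system states are purely illustrative assumptions, not the framework that will be studied:

# Sketch of the envisaged configuration manager: a knowledge base holds named
# system configurations and reconfiguration rules; the engine selects and
# applies the configuration whose rule matches the monitored system state.
knowledge_base = {
    "configurations": {
        "nominal":  {"pump_A": "on",  "pump_B": "standby"},
        "degraded": {"pump_A": "off", "pump_B": "on"},
    },
    "rules": [
        {"if": {"pump_A_health": "failed"}, "then": "degraded"},
        {"if": {},                          "then": "nominal"},   # default rule
    ],
}

def configuration_engine(state, kb):
    """Return the first configuration whose rule conditions all hold in `state`."""
    for rule in kb["rules"]:
        if all(state.get(k) == v for k, v in rule["if"].items()):
            return rule["then"], kb["configurations"][rule["then"]]
    return None, None

print(configuration_engine({"pump_A_health": "failed"}, knowledge_base))
# ('degraded', {'pump_A': 'off', 'pump_B': 'on'})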
6 Conclusion
System configuration and System Reconfiguration are essential as they support system evolutions to ensure the effectiveness and efficiency of systems through their life cycles. This
paper identifies current industrial challenges related to system configuration and Sys-
tem Reconfiguration at design-time (development) and run-time (in-service phase).
This paper discusses the identified challenges with regard to the literature review.
References
1. ISO/IEC/IEEE/15288: Systems and software engineering–system life cycle processes (2015)
2. INCOSE: Systems engineering handbook: a guide for system life cycle processes and
activities. In: Walden, D.D., Roedler, G.J., Forsberg, K., Hamelin, R.D., Shortell, T.M.,
(eds.) International Council on Systems Engineering, 4th edn. Wiley, San Diego (2015)
3. NASA: NASA Systems Engineering Handbook, vol. 6105 (2007)
4. Zhang, Y., Jiang, J.: Bibliographical review on reconfigurable fault-tolerant control systems.
Annu. Rev. Control 32(2), 229–252 (2008)
5. Stoican, F., Olaru, S.: Set-Theoretic Fault Detection and Control Design for Multisensor
Systems (2013)
6. Eterno, J., Weiss, J., Looze, D., Willsky, A.: Design issues for fault tolerant-restructurable
aircraft control. In: 24th IEEE Conference on Decision and Control, pp. 900–905 (1985)
7. Isermann, R.: Supervision, fault-detection and fault-diagnosis methods–an introduction.
Control Eng. Pract. 5(5), 639–652 (1997)
8. Reiter, R.: A theory of diagnosis from first principles. Artif. Intell. 32(1), 57–95 (1987)
9. Kuntz, F., Gaudan, S., Sannino, C., Laurent, É., Griffault, A., Point, G.: Model-based
diagnosis for avionics systems using minimal cuts. In: Sachenbacher, M., Dressler, O.,
Hofbaur, M., (eds.) DX 2011, Oct 2011, pp. 138–145. Murnau, Germany (2011)
10. Ng, H.T.: Model-based, multiple fault diagnosis of time-varying, continuous physical
devices. In: Proceedings 6th Conference on A. I. Applications, pp. 9–15 (1990)
11. Crow, J., Rushby, J.: Model-based reconfiguration: toward an integration with diagnosis. In:
Proceedings of AAAI 1991, pp. 836–841 (1991)
12. Provan, G., Chen, Y.-L.: Model-based diagnosis and control reconfiguration for discrete
event systems: an integrated approach. In: Proceedings of the 38th IEEE Conference on
Decision and Control, vol. 2, pp. 1762–1768 (1999)
13. Russell, K.J., Broadwater, R.P.: Model-based automated reconfiguration for fault isolation
and restoration. In: IEEE PES Innovative Smart Grid Technologies (ISGT), pp. 1–4 (2012)
14. Cui, Y., Shi, J., Wang, Z.: Backward reconfiguration management for modular avionic
reconfigurable systems. IEEE Syst. J. 12(1), 137–148 (2018)
15. Shan, S., Hou, Z.: Neural network NARMAX model based unmanned aircraft control
surface reconfiguration. In: 9th International Symposium on Computational Intelligence and
Design (ISCID), vol. 2, pp. 154–157 (2016)
16. Ludwig, M., Farcet, N.: Evaluating enterprise architectures through executable models. In:
Proceedings of the 15th International Command and Control Research and Technology
Symposium (2010)
17. Boardman, J., Sauser, B.: System of systems–the meaning of of. In: 2006 IEEE/SMC
International Conference on System of Systems Engineering, pp. 118–123 (2006)
18. Nilchiani, R., Hastings, D.E.: Measuring the value of flexibility in space systems: a six-
element framework. Syst. Eng. 10(1), 26–44 (2007)
19. Alsafi, Y., Vyatkin, V.: Ontology-based reconfiguration agent for intelligent mechatronic
systems in flexible manufacturing. Robot. Comput. Integr. Manuf. 26(4), 381–391 (2010)
20. Regulin, D., Schutz, D., Aicher, T., Vogel-Heuser, B.: Model based design of knowledge
bases in multi agent systems for enabling automatic reconfiguration capabilities of material
flow modules. In: IEEE International Conference on Automation Science and Engineering,
pp. 133–140 (2016)
21. Rodriguez, I.B., Drira, K., Chassot, C., Jmaiel, M.: A model-based multi-level architectural
reconfiguration applied to adaptability management in context-aware cooperative commu-
nication support systems. In: 2009 Joint Working IEEE/IFIP Conference on Software
Architecture and European Conference on Software Architecture, WICSA/ECSA, pp. 353–
356 (2009)
22. Otto, K., Wood, K.L.: Product design: techniques in reverse engineering and new product
development, September 2014 (2001)
23. Giffin, M., de Weck, O.L., Bounova, G., Keller, R., Eckert, C., Clarkson, P.J.: Change
propagation analysis in complex technical systems. ASME J. Mech. Des. 131, 1–14 (2009)
24. Clarkson, P.J., Simons, C., Eckert, C.: Predicting change propagation in complex design.
J. Mech. Des. 126(5), 788 (2004)
25. Schuh, G., Riesener, M., Breunig, S.: Design for changeability: incorporating change
propagation analysis in modular product platform design. Procedia CIRP 61, 63–68 (2017)
26. Ottosson, S.: Participation action research. Technovation 23(2), 87–94 (2003)
27. Blessing, L.T.M., Chakrabarti, A.: DRM, a Design Research Methodology, vol. 1 (2009)
28. Summers, J.D., Eckert, C.M.: Design research methods: interviewing. In: Workshop in
ASME Conference 2013, Portland, Oregon, USA (2013)
A Domain Model-Centric Approach
for the Development of Large-Scale Office
Lighting Systems
1 Introduction
of the specifications is very difficult to assess. Only in the later stages of development are system elements implemented and able to be checked and tested. However, in practice system-level testing is often immature and very limited due to time pressure.
In general, errors are introduced during the manual steps from specification to coding
and configuring high-tech systems, impacting the quality of the product. Especially in
testing software, finding errors (and their solutions) may prove time-consuming [3].
From a business point of view these delays in development lead to an increase in
time to market for products, which is a definite adverse effect. The perceived quality of products will be strongly influenced by the occurrence of failures in the field, which are
the result of missed faults in development, installation or commissioning phases. In
many cases, repair, reconfiguration, or even recall of products in the field has proven to
be extremely expensive [3]. Obviously, the resulting low overall development effi-
ciency will lead to a high development cost of the product.
Given the mentioned issues and challenges, we aim at devising an approach to improve
the development process in a fundamental way. The two main goals are: (1) reduce
development effort and (2) handle complexity of control.
The aim is to reduce the effort for the full lifecycle: specification, development,
validation, installation, commissioning, upgrade, etc. A specific goal is to replace the
costly on-site testing of a physical system by off-site analysis of virtual prototypes.
Our focus is on the improvement of the specification process and the subsequent
processes during development. This leads to several benefits, such as easy validation and
verification of specifications and proposed solutions, and alignment and harmonization of
operations along the development path (e.g. handovers between development phases).
Fig. 1. High-level diagram of the domain model-centric approach. DSLs form the central
domain model for system specification and information.
The basic idea behind the domain model-centric approach is the use of a set of
languages (capturing the stakeholders’ terminology) that allow specifying system
information. In essence these languages and the specifications expressed in these
languages replace the documents that currently capture the knowledge of experts and
system information, as depicted on the left side of Fig. 1.
3. Analysis: (a) Create specifications using the languages, typically starting with
simple ones and gradually increasing in complexity and size. (b) Validate the
specification of the system statically (performed by the language editor). (c) Create
and validate the transformations. (d) Perform the transformations to the various
analysis targets. (e) Validate system behavior dynamically in various ways: simu-
lation, model checking, model-based testing, etc.
4. Synthesis: When step 3 has proved successful one can safely generate system
configurations or code. (a) Create and validate the transformations. (b) Transform
specifications to the various synthesis targets. (c) Use the generated artefacts in the
final product environment. (d) As a last step usually acceptance tests of the gen-
erated system are performed.
5. Feedback and improvement: Wider introduction in the organization and further
usage of this approach leads to a secondary process of feedback and improvement of
the specification process, languages and transformations. It may become part of
existing improvement activities within the organization.
The introduction of the domain model-centric approach leads to new ways of
working and new functional roles. We separate language creation from language usage,
as there are clearly different capabilities involved. The language creators perform
‘language engineering’ and should be able to isolate domain concepts and capture them
into abstract language elements. The experts working in the company will be heavily
involved in this process to provide their domain knowledge, and direct the language
and its usage. The users of the languages, mainly the system specifiers, are typically
focusing on one aspect, or one domain.
The transformation experts create the bridge between the abstract specification and the target; they should therefore be able to span this gap. The users of the transfor-
mation results can be split into system analyzers (e.g. system architects) and system
implementers (e.g. engineers). For the latter, work may change significantly as many
implementation activities become automated.
Of course, the languages, specifications, and transformation code as well as tools,
need to be versioned and maintained [8, 9]. This may require a dedicated role: e.g.
modeling manager.
3 Office Lighting
A smart indoor lighting system (e.g. for an office building) is a prime example of a
complex system as described in the Introduction: it is a large-scale distributed system,
containing thousands of sensor, controller, and actuator components, connected via a
network. The system can have very many configurations, but comprises only a limited number of component types (see Fig. 2). Typical for modern lighting
systems is the complicating factor of having to cooperate with other systems (HVAC,
network, power, security, building and cloud services), often leading to conflicting
requirements.
To complicate things further, office lighting systems have a lifespan of several decades. Because market requirements and legislation change over time, lighting systems must be able to evolve throughout their operational life.
The development challenges of office lighting systems can be distinguished for the
various phases of the life cycle. Some typical questions during system development
are: How to specify system behavior in the language of the end user, and how to
translate it towards test cases? How to prove the correctness of the lighting behavior for
a given building before the actual installation in the building? How to determine the
system performance, e.g. energy usage, response times, before installation? Can we
perform design space exploration to optimize system performance, and find the best
system configuration for a specific customer?
For the operational phase there are different challenges: Can we add new features to
the system, when the system is already deployed and operational, and be sure the
addition will not break current behavior? How to easily reconfigure the system to
address changes in its usage or environment?
For office lighting systems, the domain model-centric approach is very beneficial since there is a vast number of system configurations, and these systems are built up from a small set of component types. It is clear that for lighting systems there is a significant
amount of inherent reuse.
In the application of our approach we have focused on the control behavior. We
have addressed four main outlets (see Fig. 3): (1) the static validation of specifications,
(2) the dynamic validation of system behavior, (3) analysis of system performance, and
(4) system configuration and code generation.
Fig. 3. A set of office lighting specific languages form the central domain knowledge.
We have performed the steps mentioned in Sect. 2.4, and we will focus in the
remainder of this section only on the notable results.
Fig. 4. The office lighting DSLs (Usage, Light Behavior, Control, Experiment, Requirement, and State Machine DSLs) and their "used by" and "generate" relationships.
The Light Behavior DSL describes the lighting behavior of the system, which can
be applied to rooms with any number of luminaires. The behavior is parameterized in
scenes; some examples: ‘concentration’, ‘presentation’, ‘relaxation’. The dynamic
behavior is expressed in terms of a set of lighting-specific state machines with sensor
and actuator groups.
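As an illustration of the kind of behaviour such a specification captures (the scene names follow the examples above, while the dim levels, hold time and event encoding are assumptions of this sketch), a scene-parameterized state machine might look as follows:

# Sketch of the behaviour a Light Behavior specification expresses: a
# scene-parameterized state machine that reacts to occupancy events and drives
# a whole group of luminaires. Scene dim levels and hold time are illustrative.
scenes = {"concentration": 1.0, "presentation": 0.3, "relaxation": 0.6}

class RoomBehavior:
    def __init__(self, scene, hold_time=3):
        self.level = scenes[scene]   # dim level applied to the actuator group
        self.hold_time = hold_time   # ticks to keep the light on after the last detection
        self.state, self.timer = "OFF", 0

    def on_tick(self, occupancy_detected):
        if occupancy_detected:
            self.state, self.timer = "ON", self.hold_time
        elif self.state == "ON":
            self.timer -= 1
            if self.timer <= 0:
                self.state = "OFF"
        return self.level if self.state == "ON" else 0.0

room = RoomBehavior("presentation")
events = [True, False, False, False, False]
print([room.on_tick(e) for e in events])   # [0.3, 0.3, 0.3, 0.0, 0.0]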
The building in which the lighting system is to be deployed is specified in the
Building DSL. This DSL describes the site, its topology (rooms, corridors, staircases,
etc.) and the locations of lighting equipment (luminaires, buttons); information typi-
cally available in building information models (BIMs) [11].
The Control DSL is a specification that binds the actual building model to the light
behavior specification. It maps the behavior specification onto rooms and corridors.
With this specification it is clear which sensors and actuators are involved in a given
control structure.
As mentioned in Sect. 2.1 the environment specification is needed for the analysis
activities. The Scenario DSL allows the specification of simple usage scenarios; it
describes sensor events occurring in specified sensors at given moments in time. For
more elaborate scenarios, we have defined an Occupancy DSL that describes the user’s
activities, and their mapping onto locations in the building. The interaction of the environment with the system (via sensors and buttons) is thereby fully specified.
One level of automation was added by the definition of a Usage DSL, which describes user profiles. The automatic transformation to Occupancy specifications makes it possible to generate many office user instances to be deployed in the virtual prototype.
The Requirement DSL describes system-level properties that have to be verified in
the analysis step. This language consists of a generic and an office lighting specific part
[12], allowing the generic part of the language to be reused in other business areas.
The Experiment DSL couples all the mentioned specifications needed for analysis.
It does not describe details about the analysis as such.
The State Machine DSL describes the state machines used to express the behavior
of sensors, controllers, actuators, and office users. This generic language is fully
lighting independent, and is a starting point for transformations towards analysis
models, see Fig. 1.
We have used Xtext in combination with Xtend in an Eclipse development envi-
ronment as language technology [13]. This choice is not critical, but in our experience
it is easy to use, has powerful features, and has a large community for development and
support.
Fig. 5. Interactive lighting system visualization showing the floor layout. The inset zooms in on
(orange) light points, (red, green when activated) sensors, and (purple) office users. On the right
energy usage graphs are shown.
floor of a real building, containing 367 luminaires and more than 1300 behavioral functions. Fig. 6 shows the resulting curves, created from 20 simulations.
Fig. 6. Simulation results of the energy usage of the entire floor for various hold times and
occupancy levels. Each experiment simulates one hour of typical behavior; the energy values are not calibrated.
The advantages of the domain model-centric approach occur in various areas: orga-
nizational, business, architectural, and technical. Having formal and unambiguous
system specifications leads to a more predictable development process. Note that the
behavior specification only has to be made once and can be deployed across multiple
product solutions. A shared ‘domain language’ connects engineers from different
domains and leads to less misunderstanding. In most cases it leads to a simpler, more
user-centric but formal specification. Another advantage is that the marketing depart-
ment is enabled to show product characteristics and behavior via virtual prototypes in
the customer’s environment.
On the technical side it is very important to have clear specifications in an
understandable, solution-independent language. The enabled automation leads to a
significant speed up, via automatic checks, code generation, and generation of virtual
prototypes. Design-time simulation and analysis provides a tremendous risk reduction
by more reuse of (proven) building blocks, and easy architectural exploration of novel
solutions. Less on-site commissioning work is needed as better off-site checks are
performed, leading to considerable cost savings.
Summarizing, the domain model-centric approach is a key step towards virtual-
ization of product development. It allows model-based architectural validation and
provides early insight in consequences of architectural decisions and configuration
settings. It provides better control over system complexity and significantly reduces cost and effort in system development in industrial practice.
Acknowledgement. The research is carried out as part of the Prisma programme and H2020
OpenAIS project under the responsibility of Embedded Systems Innovation (ESI) with Philips
Lighting as the carrying industrial partner. The Prisma programme is supported by the Nether-
lands Ministry of Economic Affairs, the OpenAIS project is co-funded by the Horizon 2020
Framework Programme of the European Union under grant agreement number 644332 and the
Netherlands Organisation for Applied Scientific Research TNO.
References
1. Akesson, B., Hooman, J., Dekker, R., Ekkelkamp, W., Stottelaar, B.: Pain-mitigation
techniques for model-based engineering using domain-specific languages. In: Proceedings of
MOMA3N 2018 (2018)
2. Hooman, J.: Industrial application of formal models generated from domain specific
Languages. In: Theory and Practice of Formal Methods, pp 277–293 (2016)
3. Westland, J.C.: The cost of errors in software development: evidence from industry. J. Syst.
and Softw. 62, 1–9 (2002)
4. Mooij, A.J., Hooman, J., Albers, R.: Gaining industrial confidence for the introduction of
domain-specific languages. In: 2013 IEEE 37th Annual Computer Software and Applica-
tions Conference Workshops (COMPSACW) (2013)
5. Schuts, M., Hooman, J.: Industrial Application of domain specific languages combined with
formal techniques. In: Proceedings of Workshop on Real World Domain Specific
Languages, The International Symposium on Code Generation and Optimization, pp. 2:1–
2:8 (2016)
6. Bettini, L.: Implementing Domain-Specific Languages with Xtext and Xtend. Packt
Publishing Ltd., Birmingham (2016)
7. Evans, E.: Domain-Driven Design: Tackling Complexity in the Heart of Software. Addison-
Wesley, Boston (2004)
8. INCOSE: Systems engineering handbook: a guide for system life cycle processes and
activities, version 3.2.1. International Council on Systems Engineering (INCOSE), INCOSE-
TP-2003-002-03.2.2, San Diego, CA, USA (2012)
9. Bernstein, P.A.: Applying model management to classical meta data problems. In:
Proceedings of the 2003 CIDR Conference (2003)
10. Stecklein, J.M., Dabney, J., Dick, B., Haskins, B., Lovell, R., Moroney, G.: Error cost
escalation through the project life cycle. In: Proceedings of the 14th INCOSE Annual
International Symposium, June 2014
11. Eastman, C., Teicholz, P., Sacks, R., Liston, K.: BIM Handbook: A Guide to Building
Information Modeling for Owners, Managers, Designers, Engineers and Contractors. Wiley
(2011)
12. Buit, L.J.: Developing an Easy-to-Use Query Language for Verification of Lighting Systems.
Master’s thesis (http://essay.utwente.nl/74020/), University of Twente (2017)
13. Mooij, A.J., Hooman, J.: Creating a Domain Specific Language (DSL) with Xtext. http://
www.cs.kun.nl/J.Hooman/DSL, ESI/Radboud University (2017)
14. Uppaal. http://www.uppaal.org/
Through a Glass, Darkly? Taking a Network
Perspective on System-of-Systems
Architectures
1 Introduction
System designers are increasingly faced with the challenge of architecting complex
systems that may have to operate within the context of a wider System of Systems
(SoS). It can be natural to think of an SoS architecture as a network of entities con-
nected together by relationships, and to regard this network as complex in the sense that
it exhibits interesting structure. Architects require tools that can support architecture
analysis. Network science provides a toolbox of techniques developed to shed light on
complex networks of all kinds. However, in applying network science analyses to
real-world architectures, several challenges and limitations become clear. The purpose
of this paper is to highlight these challenges and suggest ways to help bridge the gap
between academic theory and industrial practice.
The paper is structured as follows: first, a brief summary of prior work applying network science techniques to real-world SoS architectures is provided, before cautions surrounding the use of graph-theoretic approaches to assess such architectures are described. These address difficulties in correctly identifying important architectural entities, evaluating architecture structure, and anticipating the response of architectures to failure.
D = \frac{|E|}{|V|\,(|V|-1)} \quad (1)
The structure of the architecture was also evaluated in terms of strongly connected
components (groupings of entities within which a path exists linking each pair of
entities in both directions [3]) to reveal the presence of a core and periphery structure.
Finally, a community detection algorithm [4] was used to suggest an alternative
approach to partitioning a complex SoS architecture by grouping entities into com-
munities within which connectivity was relatively strong by comparison with con-
nectivity to entities outside the community.
Here we briefly report additional results drawn from the analysis of a second use
case, a tactical military communications enterprise architecture (MComms). The
architecture was created in accordance with the Ministry of Defence Architecture
Framework (MODAF) [5], to enable Thales and their customer to have a shared
understanding of the complex environment within which a tactical military commu-
nications solution would have to interoperate. The MComms use case describes the
challenge of enabling effective tactical communications between soldiers. Systems,
services, functions, artefacts (components), software and capability configurations were
modelled as vertices and the communications and dependency relationships, such as
systems fulfilling functions and hierarchical compositions, modelled as edges in a
directed graph.
In the following section we use these two real-world use cases to highlight the
challenges facing an architect trying to take advantage of a network perspective.
C_i = \frac{n-1}{\sum_{j} d(j,i)} \quad (2)
The Betweenness Centrality, B_i, of a vertex i is given by Eq. (4), where \sigma_{st} is the total number of shortest paths from vertex s to vertex t and \sigma_{st}(i) is the number of those paths that pass through i [13].
B_i = \sum_{s \neq i \neq t} \frac{\sigma_{st}(i)}{\sigma_{st}} \quad (4)
The Eigenvector Centrality, E_i, is given by Eq. (5) where a_{j,i} is the adjacency matrix of the graph (the adjacency matrix is a square matrix representing the graph, where a_{j,i} = 1 if vertex j is connected to vertex i and a_{j,i} = 0 otherwise) and \lambda \neq 0 is a constant [14].
E_i = \frac{1}{\lambda} \sum_{j \in G} a_{j,i} E_j \quad (5)
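As a brief illustration, these measures can be computed on a toy directed architecture graph; the graph, entity names and the use of the networkx library here are our own assumptions and not the tooling or data used for the SAR and MComms use cases. Note how entities with connectivity in only one direction receive zero scores, as discussed below.

# Sketch of computing density and two of the centrality measures above on a
# small directed graph representing an architecture (illustrative only).
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("SensorService", "FusionFunction"),
    ("FusionFunction", "C2System"),
    ("C2System", "FusionFunction"),        # one reciprocal relationship
    ("C2System", "RadioArtefact"),
])

print("density     ", round(nx.density(G), 3))
print("closeness   ", {n: round(v, 2) for n, v in nx.closeness_centrality(G).items()})
print("betweenness ", {n: round(v, 2) for n, v in nx.betweenness_centrality(G).items()})
# 'SensorService' (no incoming edges) scores zero on closeness, and
# 'RadioArtefact' (no outgoing edges) scores zero on betweenness, illustrating
# how one-directional entities are assigned zero importance by these metrics.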
While each of these measures can be applied to a network representing a system
architecture, there are three main concerns when using them to determine important
entities within it. First, there are practical limitations in their calculation. Second, they
do not agree on which entities are most important. Finally, and most importantly, they
neglect the rich context inherent to a complex SoS architecture, potentially allowing an
architect to be misled by a numerical analysis that ignores more significant determi-
nants of importance.
A limitation in calculating these measures is the treatment of disconnected entities
and entities with connectivity in only one direction (e.g., a node that has one incoming
edge from its sole neighbor but no out-going edges linking it to the network). Both the
SAR and MComms architectures have a large number of entities with relationships in
only one direction (e.g., node i depends on node j, but not vice versa) which results in
many entities being assigned an importance score of zero by some of the network
metrics, due to the effectively “infinite” distance between the node and parts of the
network that it cannot reach. Calculating the shortest path distances, d(j, i), takes account of directionality in a directed graph, noting that the distance is measured from i to all other n − 1 vertices. For example, 74% of the entities in the MComms architecture are given a Closeness Centrality of zero and 82% are given a Betweenness Centrality of zero; similarly, 27% of the SAR entities have an Eigenvector Centrality score of zero. It
may be the case that high scoring entities are the most important within an architecture,
but the inability of the measures to cope with the directed, tree-like structure of the
networks is not encouraging. Confidence in these results can only be obtained through
further analysis of what each centrality scores measure, and how this corresponds to a
notion of importance that makes sense to an architect. Rather than expecting a single
network science measure to identify key entities, a more useful approach for an
architect would be to iterate between two questions: ‘what makes an entity important in
this complex SoS architecture and how might that be reflected in the network repre-
sentation of it?’ and ‘what network properties are these measures capturing and what
are their implications for my understanding of the architecture itself?’. Note that rather
than conflate the network and the architecture together as though they are the same
thing, these questions explicitly separate the architecture (the thing we are interested in)
from the network (a particular abstraction of that thing that may help us understand it).
Unsurprisingly, there is little agreement between the centrality measures regarding
which entities they identify as most important (Fig. 1). This reinforces the point that
caution must be exercised in their use. Despite employing similar concepts such as
geometric distance or connectivity, there is little significant correlation between them.
Most importantly, however, it may be that network measures of the type described
above are not capable of tracking importance in a sense that is relevant to a system
architect’s needs. For example, in the case of the SAR and MComms architectures,
what makes an entity important may be less to do with its location in the network and
more to do with its contribution to mission effectiveness. Unless this contribution is
Fig. 1. Relationships between five importance metrics for the 41 entities represented in the SAR
architecture network (left), and the 235 entities represented as vertices in the MComms
architecture network (right).
either explicitly coded in the network as a node property, or implicitly reflected in some
aspect of network structure, this importance criterion will be invisible to graph-
theoretic analysis. The graph-theoretic signifiers of importance do not necessarily even have any meaning in the real-world system; an architect would need to determine what
terms like the ‘shortest path’ correspond to in a heterogeneous architecture abstracted
into a simple directed graph of vertices and edges.
reciprocity was greater than 99.6% of random architectures. However, the characteri-
zation of a SoS may need to go further than comparison with an entirely structureless
null model. In order to determine if an architecture’s reciprocity values are high for an
architecture of its type, it would need to be compared against a null model that respects
the constraints under which architectures are arrived at. Since these constraints, whether
fundamental or conventional, are to a large extent unknown, such a null model is hard to
arrive at. Making progress will require a closer appreciation of the role of reciprocity in
real-world architectures. Indeed, it might be argued that one positive associated with
taking a network perspective is pressure to formulate the thinking that is required by
such null models – thinking that has been painstakingly undertaken in other fields that
have benefitted from networks science approaches [28, 29].
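As a sketch of the comparison against an entirely structureless null model (the graph, sample size and use of networkx are illustrative assumptions, not the actual analysis), the reciprocity of an architecture graph can be compared with the reciprocity distribution over random directed graphs of the same size:

# Sketch of a structureless null-model comparison: the reciprocity of an
# architecture graph versus random directed graphs with the same |V| and |E|.
import random
import networkx as nx

def reciprocity_percentile(G, samples=200, seed=1):
    random.seed(seed)
    r = nx.reciprocity(G)
    n, m = G.number_of_nodes(), G.number_of_edges()
    null = [nx.reciprocity(nx.gnm_random_graph(n, m, directed=True,
                                               seed=random.randrange(10**6)))
            for _ in range(samples)]
    return r, sum(1 for x in null if x < r) / samples

G = nx.DiGraph([("A", "B"), ("B", "A"), ("B", "C"), ("C", "D"), ("D", "B")])
rec, pct = reciprocity_percentile(G)
print(f"reciprocity={rec:.2f}, higher than {pct:.0%} of random graphs")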
but could result in cascading failure. The fundamental limitation with trying to char-
acterise this vulnerability is that achieving sufficiently secure knowledge about failure
dynamics is likely to be extremely hard at the early stages of a system lifecycle.
Further, an architect has a significant challenge in conceptualizing how failures and
failure cascades could affect a SoS architecture that encompasses a diverse, potentially
autonomous and independent collection of systems [39–41]. Whilst the explicit cascade
dynamics may not be known at the early stages of a design lifecycle, it may be useful to
consider the possibility of local failures creating global, SoS wide failures. It then may
be possible to lean on network science approaches to explicitly design against cas-
cading failures [42] by introducing protection to the nodes with the highest degree or
implementing control mechanisms to distribute the functionality a failed node pro-
vided. However, their effectiveness hinges on understanding the cascade dynamics for
the system, for example a protection strategy that assumes cascades follow a perco-
lation model may not provide the same benefits for epidemic models of cascades.
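The following sketch illustrates the idea of designing against cascading failures under an assumed percolation-style model; the dependency graph, failure threshold and choice of protected node are all hypothetical and serve only to show the principle:

# Sketch of a percolation-style cascade: a node fails when at least a given
# fraction of its suppliers (the nodes it depends on) has failed, unless it is
# explicitly protected.
deps = {                       # node -> nodes it depends on (illustrative)
    "power": [], "bus": ["power"], "c2": ["bus"],
    "sensor": ["bus"], "effector": ["c2", "sensor"],
}

def cascade(initial_failure, protected=frozenset(), threshold=0.5):
    """Propagate failures until no further node fails."""
    failed = {initial_failure}
    changed = True
    while changed:
        changed = False
        for node, suppliers in deps.items():
            if node in failed or node in protected or not suppliers:
                continue
            if sum(s in failed for s in suppliers) / len(suppliers) >= threshold:
                failed.add(node)
                changed = True
    return failed

print(sorted(cascade("power")))                     # every dependent node eventually fails
print(sorted(cascade("power", protected={"bus"})))  # protecting the hub stops the cascade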
The next section highlights the challenges to making progress in using a network
perspective to evaluate complex SoS architectures and details the other open research
questions.
3 Challenges to Progress
4 Conclusion
Given the impact of network science on fields as diverse as neuroscience [45] and archaeology [46], it is highly likely that there are insights to be gained from taking a network perspective on SoS architectures: which entities in an architecture are most
important to one another, is one architecture likely to be more robust, efficient, or
manageable than another, to what degree and in what ways might an architecture be
vulnerable to failure? In seeking these insights, however, it becomes clear that the tools
from network science cannot straightforwardly be applied without developing a more
sophisticated understanding of how they map onto the diversity, richness and context
sensitivity characteristic of complex SoS architectures. The social sciences spent sev-
eral decades developing a set of interpretations and conceptualizations that allowed the
effective mobilization of networks concepts. Developing an equivalent set of concep-
tual tools for the analysis of complex SoS architectures remains an open research
challenge.
References
1. Potts, M., Sartor, P., Johnson, A., Bullock, S.: Hidden structures: using graph theory to
explore complex system of systems architectures. In: Paper presented at the International
Conference on Complex Systems Design & Management. CSD&M, Paris, France,
December 2017
2. North Atlantic Treaty Organization: NATO architecture framework v4.0 documentation
(draft) (2017). http://nafdocs.org/
3. Diestel, R.: Graph Theory, Electronic. In: Graduate Texts in Mathematics, vol. 173.
Springer, Berlin (2005)
4. Blondel, V.D., Guillaume, J.-L., Lambiotte, R., Lefebvre, E.: Fast unfolding of communities
in large networks. J. Stat. Mech. Theory Exp. 10, P10008 (2008)
5. Biggs, B.: Ministry of defence architectural framework (MODAF) (2005)
6. Newman, M.: Networks: an Introduction. Oxford University Press, Oxford (2010)
7. Okami, S., Kohtake, N.: Transitional complexity of health information system of systems:
managing by the engineering systems multiple-domain modeling approach. IEEE Syst. J.,
1–12 (2017)
8. Bartolomei, J.E., Hastings, D.E., de Neufville, R., Rhodes, D.H.: Engineering systems
multiple-domain matrix: an organizing framework for modeling large-scale complex
systems. Syst. Eng. 15(1), 41–61 (2012)
9. Santana, A., Kreimeyer, M., Clo, P., Fischbach, K., de Moura, H.: An empirical
investigation of enterprise architecture analysis based on network measures and expert
knowledge: a case from the automotive industry. In: Modern Project Management, pp. 46–
56 (2016)
10. Iyer, B., Dreyfus, D., Gyllstrom, P.: A network-based view of enterprise architecture. In:
Handbook of Enterprise Systems Architecture in Practice, p. 500. PFPC Worldwide Inc.,
USA (2007)
11. Freeman, L.C.: Centrality in social networks conceptual clarification. Soc. Netw. 1(3), 215–
239 (1978)
12. Boldi, P., Vigna, S.: Axioms for centrality. Internet Math. 10(3–4), 222–262 (2014)
13. Brandes, U.: A faster algorithm for betweenness centrality. J. Math. Sociol. 25(2), 163–177
(2001)
14. Newman, M.E.: The mathematics of networks. In: The New Palgrave Encyclopedia of
Economics, 2nd edn., pp 1–12 (2008)
15. IEEE/ISO/IEC Draft Standard for Systems and Software Engineering - Architecture
Evaluation, pp. 1–76 (2017). ISO/IEC/IEEE DIS P42030/D1, December 2017
16. Kossiakoff, A., Sweet, W.N., Seymour, S.J., Biemer, S.M.: Systems Engineering Principles
and Practice, vol. 83. Wiley, London (2011)
17. Buede, D.M., Miller, W.D.: The Engineering Design of Systems: Models and Methods.
Wiley, London (2016)
18. Bullock, S., Barnett, L., Di Paolo, E.A.: Spatial embedding and the structure of complex
networks. Complexity 16(2), 20–28 (2010)
19. Sinha, K., de Weck, O.L.: Structural complexity metric for engineered complex systems and
its application. In: Gain Competitive Advantage by Managing Complexity: Proceedings of
the 14th International DSM Conference Kyoto, Japan, pp. 181–194 (2012)
20. Lloyd, S.: Measures of complexity: a nonexhaustive list. IEEE Control Syst. Mag. 21(4), 7–8
(2001)
21. Sheard, S.A.: 5.2. 1 systems engineering complexity in context. In: INCOSE International
Symposium, vol. 1, pp. 1145–1158. Wiley Online Library (2013)
22. Fischi, J., Nilchiani, R., Wade, J.: Dynamic complexity measures for use in complexity-
based system design. IEEE Syst. J. 11(4), 2018–2027 (2015)
23. MacCormack, A.: The architecture of complex systems: do “core-periphery” structures
dominate? In: Academy of Management Proceedings, vol 1, pp. 1–6. Academy of
Management (2010)
24. Rechtin, E.: Systems architecting: Creating and building complex systems, vol. 1. Prentice
Hall, Englewood Cliffs (1991)
25. Sillitto, H.: Architecting Systems: Concepts, Principles and Practice. College Publications,
London (2014)
26. Newman, M.E.: Mixing patterns in networks. Phys. Rev. E 67(2), 026126 (2003)
27. ISO/IEC/IEEE International standard - systems and software engineering – system life cycle
processes, pp. 1–118 (2015). ISO/IEC/IEEE 15288 First edition 2015-05-15. https://doi.org/
10.1109/ieeestd.2015.7106435
28. Freeman, L.: The Development of Social Network Analysis. A Study in the Sociology of
Science 1. Empirical Press, Vancouver (2004)
29. Gilbert, N., Bullock, S.: Complexity at the social science interface. Complexity 19(6), 1–4
(2014)
30. Crawley, E., De Weck, O., Magee, C., Moses, J., Seeringk, W., Schindall, J., Wallace, D.,
Whitney, D.: The influence of architecture in engineering systems (monograph) (2004)
31. De Weck, O.L., Roos, D., Magee, C.L.: Engineering Systems: Meeting Human Needs in a
Complex Technological World. MIT Press, Cambridge (2011)
32. De Weck, O.L., Ross, A.M., Rhodes, D.H.: Investigating relationships and semantic sets
amongst system lifecycle properties (Ilities) (2012)
33. De Neufville, R., Scholtes, S.: Flexibility in Engineering Design. MIT Press, Cambridge
(2011)
34. Newman, M.E.: Complex systems: a survey (2011). arXiv preprint arXiv:11121440
35. Newman, M.E.: The structure and function of complex networks. SIAM Rev. 45(2), 167–
256 (2003)
36. Albert, R., Jeong, H., Barabási, A.-L.: Error and attack tolerance of complex networks
(2000). arXiv preprint cond-mat/0008064
37. Khoury, M., Bullock, S.: Multi-level resilience: reconciling robustness, recovery and
adaptability from a network science perspective. Int. J. Adapt. Resil. Auton. Syst. (IJARAS)
5(4), 34–45 (2014)
38. Khoury, M., Bullock, S., Fu, G., Dawson, R.: Improving measures of topological robustness
in networks of networks and suggestion of a novel way to counter both failure propagation
and isolation. Infrastruct. Complex. 2(1), 1 (2015)
39. Boardman, J., Sauser, B.: System of systems-the meaning of of. In: Proceedings of the 2006
IEEE/SMC International Conference on System of Systems Engineering Los Angeles, CA,
USA, pp. 118–126, April 2006
40. Maier, M.W.: Architecting principles for systems‐of‐systems. In: INCOSE International
Symposium, vol 1. Wiley Online Library, pp. 565–573 (1996)
41. ISO/IEC/IEEE Draft international standard - systems and software engineering - systems of
systems considerations in engineering of systems, pp. 1–43 (2017). ISO/IEC/IEEE P21839,
April 2017
42. Fu, G., Dawson, R., Khoury, M., Bullock, S.: Interdependent networks: vulnerability
analysis and strategies to limit cascading failure. Eur. Phys. J. B 87(7), 148 (2014)
43. Marvin, J.W., Garrett Jr., R.K.: Quantitative SoS architecture modeling. Procedia Comput.
Sci. 36, 41–48 (2014)
44. ISO/IEC/IEEE DIS 42020 Enterprise, systems and software - architecture processes (2017)
45. Barnett, L., Buckley, C.L., Bullock, S.: Neural complexity: a graph theoretic interpretation.
Phys. Rev. E 83(4), 041906 (2011)
46. Brughmans, T.: Connecting the dots: towards archaeological network analysis. Oxf.
J. Archaeol. 29(3), 277–303 (2010)
Generation and Visualization of Release Notes
for Systems Engineering Software
Malik Khalfallah
1 Introduction
Let’s consider multiple engineering teams working on the design of a car. We chose
a car example because it is simpler to illustrate our approach using a car rather than a satellite system. Figure 1 shows how the Car dataset and its branches and dependencies evolve. The dataset Car was branched at its first version and, starting from version 200.9, began to have dependencies on other subsystems' datasets.
At the end of the milestone, the architect wants to generate the release note of all
resolved issues between the last version 300.4 (depicted by a square) and 200.9 of the
trunk. The details of this history contain two important pieces of information:
• Information about the modifications of the dataset Car between the versions 300.4
and 200.9
• Information about the modifications made on the dependencies and branches of this
dataset during the interval [200.9, 300.4].
The release note is presented as an Excel document that shows the issue number, its
description and the version of the dataset impacted by the modification. An example of
such a table is given in the following figure.
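Purely as an illustration, here is a minimal sketch of emitting such a table programmatically; the column names and the ReleaseEntry fields are assumptions, not the exact RangeDB schema, and a CSV file is used instead of Excel for simplicity:

import csv
from dataclasses import dataclass

@dataclass
class ReleaseEntry:          # hypothetical structure of one release-note row
    issue_number: str
    description: str
    impacted_version: str    # version of the dataset impacted by the modification

def write_release_note(entries, path="release_note.csv"):
    """Write the release note as a table: issue number, description, impacted version."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Issue", "Description", "Impacted dataset version"])
        for e in entries:
            writer.writerow([e.issue_number, e.description, e.impacted_version])

write_release_note([ReleaseEntry("ISS-42", "Update harness routing", "300.4")])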
(Table, continued; the second column of the original table shows the graphical representation and the created user-defined object hierarchy for each pattern.)

Multiple Reintegrated Branches: this pattern concerns the reintegration into the source dataset of different branches created before or from the target dataset. The result returned for this pattern consists of the issues resolved in all reintegrated branches plus the issues resolved between versions 300.4 and 200.9 of the dataset Car.

Nested Branch Reintegration: following the same idea as dependencies having other dependencies (cf. pattern 3), we can consider the case where branches themselves have other branches. In this case, the final result consists of the issues resolved in all reintegrated branches (recursively) plus the issues resolved between versions 300.4 and 200.9 of the dataset Car.

Multi-Origin Dependency [Different Depth]: this pattern is a particular case of the previous one.
(Table excerpt) Project 1 — (i) 3, (ii) 525, (iii) 31, (iv) 15; mean number of resolved issues in an interval of two versions of a dataset: 3.34; mean release time: 21.61 days.
newly created release object under its parent is performed if and only if among all
release objects there is one that has a dependency between its source dataset and p1.dataset
and a dependency between its target dataset and p2.dataset.
When all token elements satisfying the r_comparable condition have been consumed,
the places p1 and p2 will contain only datasets that are not comparable (*), which should
therefore be treated according to the patterns identified above. These details are
implemented in the function r_not_comparable. Since each project may decide to create
different release objects, the implementation of r_not_comparable can be updated to
manage the project specificities.
To obtain the complete history of resolved issues, we also need to include the issues
resolved in the dependencies and in the branches. Accordingly, the CPN of Fig. 3
computes the dependencies and the reintegrated branches of the given couples
⟨dataset_source, dataset_target⟩ and then computes the resolved issues for them. To
compute the dependencies and the reintegrated branches of the initial datasets, however,
we need to use their associated tokens. Hence, to make the tokens of the initial datasets
available both for the history computation and for the computation of branches and
dependencies, we create duplicates of these tokens. This duplication also allows the
reintegrated branches, the dependencies, and the history of resolved issues to be
computed in parallel.
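The sketch below conveys this traversal in plain Python rather than as a CPN; the Dataset class and the use of a single version interval for all datasets are simplifying assumptions made only to keep the example self-contained:

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Dataset:                       # hypothetical, simplified model of a dataset
    name: str
    resolved_issues: Dict[Tuple[str, str], List[str]] = field(default_factory=dict)
    dependencies: List["Dataset"] = field(default_factory=list)
    reintegrated_branches: List["Dataset"] = field(default_factory=list)

def collect_resolved_issues(ds: Dataset, v_from: str, v_to: str, seen=None) -> List[str]:
    """Issues resolved in [v_from, v_to] of ds, plus those of its dependencies
    and reintegrated branches, collected recursively (terminates because the
    common dataset has no dependency and branch nesting is bounded)."""
    seen = set() if seen is None else seen
    if ds.name in seen:
        return []
    seen.add(ds.name)
    issues = list(ds.resolved_issues.get((v_from, v_to), []))
    for other in ds.dependencies + ds.reintegrated_branches:
        issues += collect_resolved_issues(other, v_from, v_to, seen)
    return issues

wheels = Dataset("Wheels", {("200.9", "300.4"): ["ISS-7"]})
car = Dataset("Car", {("200.9", "300.4"): ["ISS-1", "ISS-3"]}, dependencies=[wheels])
print(collect_resolved_issues(car, "200.9", "300.4"))   # ['ISS-1', 'ISS-3', 'ISS-7']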
Proof: Since this CPN contains cycles, we need to prove that these cycles terminate. In
other words, we need to prove that the generation of tokens representing branches and
dependencies ends at some point. More formally, we consider two functions: a function
that computes the reintegrated branches of a given dataset, and a function that computes
its dependencies.
To prove that the computation made by our CPN terminates, we make the following
two hypotheses:
1. The graph of dependencies always contains a last dependency, on which all datasets
depend directly or indirectly, called the common dataset, which satisfies the condition
dependency(common_dataset) = ∅.
2. The number of reintegrated branches of any dataset is bounded.
Using the first hypothesis, we conclude that when the dataset-source place contains
dataset_i = common_dataset, no further dependency is generated. Using the second
hypothesis, we conclude that we eventually reach a dataset that has no branch, so the
place containing branches becomes empty. Moreover, the common dataset is reached as
the last dependency. Accordingly, the computation terminates, since the dataset-source
and dataset-target places are no longer populated. ■
Proof: Let r1 and r2 be two release objects. The release r1 is a child of r2 if and only if
r1.datasetSource and r2.datasetSource have a dependency, r1.datasetTarget and
r2.datasetTarget have a dependency and are of the same nature, or they are branches.
Since a branch can have only one origin, there is exactly one parent.
The proof continues by enumeration over the different patterns. ∎
2.2.4 Implementation
We have developed a prototype of this approach. We built a CPN model in EMF and its
visualization in GMF. Architects can update the behavior of this CPN according to their
needs. Additionally, we developed the visual representation of the release note as an
Eclipse RCP plugin to RangeDB. Figure 4 depicts an excerpt of the release note
visualization corresponding to the example above (see also Fig. 5).
3 Related Work
Generating and visualizing release notes is an active research field. Klepper et al. [10]
developed a semi-automated approach for the generation of release notes. The principle
of their approach consists of identifying the changes in the commits between two
releases of a project. They summarize the code changes and link them to information
from commit notes and the issue tracking system. This approach provides relevant
information to interested people, but the authors do not elaborate on how they
automatically gather all relevant information. In our case, we provide a workflow,
designed as a CPN, that makes explicit how the release note is generated.
Cortés-Coy et al. [11] developed a tool called ChangeScribe. The purpose of
ChangeScribe is to assist users in writing the right message to associate with a commit.
Its principle consists of extracting and analyzing the differences between two versions of
the source code and generating a commit message. The analysis of the source code relies
on code summarization techniques, stereotype detection, and impact analysis. The
generated commit message provides an overview of the changes; basically, it describes
the why and what of a change in natural language. For simple source code files this
approach can work, as demonstrated by the authors. However, when many files are
committed the tool may fail. Moreover, the generated messages do not contain the issue
number, which is essential to show other developers which issue has been resolved by
the commit.
Le et al. [12] developed RCLinker, which aims at predicting whether a link exists
between a commit message and an issue defined in a system such as Redmine. It relies
on ChangeScribe to automatically generate commit messages, then extracts features
from the automatically generated commit messages and from the issue descriptions.
These features allow RCLinker to link the generated commit messages with their
corresponding issues.
RCLinker starts from the observation that, in many cases, software developers commit
without associating the commit with an issue [13–15]. In our case (system development),
it is mandatory that database architects associate an issue with their commits. This is due
to the difference between systems and software: in software, one can refactor the source
code and commit it without informing the final user, and in the end the behavior of the
software remains the same. In system development, however, the final users are system
architects, and every data update will probably impact their design. Therefore every data
update should be backed by an issue.
4 Conclusion
In this paper, we have presented the foundations of a release note generation framework
for systems engineering software. We first established the need for such a framework,
which stems from a business need expressed at Airbus DS. Second, we performed an
empirical study on a large sample of satellite project data in order to categorize the
entries of a release note. Third, we defined patterns corresponding to these categories
and developed a CPN-based discovery algorithm to generate the release note. We finally
proved important properties of that algorithm.
In the future we aim at enriching this framework by developing a generic interface
between systems engineering software and issue management systems.
References
1. Madni, A., et al.: Model-based systems engineering: motivation, current status, and needed
advances. In: Disciplinary Convergence in Systems Engineering Research (2018)
2. Lanubile, F., et al.: Collaboration tools for global software engineering. IEEE Softw. 27(2),
52–55 (2010)
3. Schindel, B., et al.: Introduction to the agile systems engineering life cycle MBSE pattern.
In: INCOSE International Symposium (2016)
4. Boehm, B.: Software defect reduction top 10 list. IEEE Comput. J. 34(1), 135–137 (2001)
5. Lotufo, R., et al.: Modelling the ‘Hurried’ bug report reading process to summarize bug
reports. J. Empir. Softw. Eng. 20(2), 516–548 (2015)
6. Eisenmann, H.: MBSE has a good start; requires more work for sufficient support of systems
engineering activities through models. INCOSE Insight 18(2), 14–18 (2015)
7. Eisenmann, H., et al.: RangeDB the product to meet the challenges of nowadays system
database. In: SESP-ESA (2015)
8. Eickhoff, J.: Onboard Computers, Onboard Software and Satellite Operations. Springer,
Berlin (2012)
9. Cazenave, C., et al.: Benefiting of digitalization for spacecraft engineering. In: SESP-ESA
(2017)
10. Klepper, S., et al.: Semi-automatic generation of audience-specific release notes. In:
IEEE/ACM CSED 2016 (2016)
11. Cortés-Coy, L., et al.: ChangeScribe: a tool for automatically generating commit messages.
In: IEEE SCAM 2014 (2014)
12. Le, T., et al.: RCLinker: automated linking of issue reports and commits leveraging rich
contextual information. In: IEEE ICPC 2015 (2015)
13. Bachmann, A., et al.: The missing links: bugs and bug-fix commits. In: ACM SIGSOFT FSE
(2010)
14. Bird, C., et al.: Fair and balanced?: bias in bug-fix datasets. In: ACM SIGSOFT FSE (2009)
15. Thanh, N., et al.: A case study of bias in bug-fix datasets. In: Working Conference on
Reverse Engineering (2010)
Safety Architecture Overview Framework
for the Prediction, Explanation and Control
of Risks of ERTMS
1 Introduction
• For the Dutch situation, the absence of a central designer [6] and of overarching safety
decision-making processes between the railway and train transportation parties lowers
the degree to which the parties succeed in harmonizing their processes.
• An increased number of actors has caused a lack of insight into cross-border
information.
These challenges require improvements in resilience, more awareness of and increased
sensitivity to interrelationships between hazards and risks, and, even more, joint
comprehension of the safety architecture and the creation of cross-discipline
understanding.
In this study, we create a safety architecture overview framework representing
structured scenarios, including hazards, consequences, RACs, risks, and decisions, in
various layers. We model interfaces between scenarios, risk analysis, and risk evaluation
so that stakeholders are able to verify data origin, argumentation route, and application.
We also argue that the proposed framework addresses the challenges above by
combining safety data generation, data processing and structuring, definition of
interactions, and, finally, the creation of customized representations in order to predict,
explain, and control risks.
Section 2 provides an overview of ERTMS and state of the art of safety models
aiming at modelling elements of the safety architecture. The methodology is discussed
in Sect. 3. Section 4 explains the creation of the safety architecture overview frame-
work and how this complies with the challenges described above. These results are
discussed in Sect. 5. Section 6 summarises the results, draws conclusions, and explains
future work in order to test the proposed framework.
2 Background
Stakeholders involved with the creation of the safety case are the Dutch Ministry of
Infrastructure and the Environment (I&W), the Dutch infrastructure provider (ProRail),
and train operating companies such as the Dutch railways (NS). Next, the safety case
must show that the correct management for controlling safety is in place. For the
Dutch ERTMS, this management system refers to the safety management systems
(SMS) of both ProRail and NS.
In short, ERTMS is subject to the influence of the Dutch House of Representatives,
the application of the CSM and EN5012x, the SMS of the infrastructure provider and
train operating companies, multiple technical components, trains operating on tracks,
and of course, the consideration that ERTMS is an important link in the ambition to
ensure the passengers and shippers view the railway system as an attractive mode of
transportation. This indicates active layers in the areas of government, regulations,
company management, technical and operational management, physical processes and
activities, and environment. These levels of decision making, which are involved in risk
management and the control of hazardous processes, are explained in [7].
underlying the safety case, and customisation of risk analysis and risk evaluation
representations for enabling communications between safety stakeholders.
3 Method
This study and the design of the Safety Architecture Overview Framework are carried
out in four successive steps, though the execution of steps three and four is iterative.
• Step 1. Translate need/requirements to a top-level use case diagram. The resulting
use cases can be considered as top-level functionalities of the framework and related
to system requirements.
• Step 2. Decompose to a set of functions. The functions explained in the use case
diagram are decomposed to a set of functions. Per top-level functionality, we define
input and output.
• Step 3. Finding solutions. For each functionality, literature review is combined with
ERTMS application in order to find suitable solutions.
• Step 4. Evaluation of functionality and compatibility between solutions. This step is
interrelated with steps 2 and 3, because this evaluation can suggest a change of flow
or reveal solutions that contradict one another.
The first two steps are executed following the Design Research Methodology
(DRM) described in [21]. Steps 3 and 4 are executed following the systematic search
with the help of classification schemes described in Engineering Design by [22].
This approach is shown in Fig. 1.
4 Results
Fig. 2. Use case diagram representing top functionalities of the safety architecture overview
framework.
• Abstract data. We translate informal raw data into a formal language that creates
common understanding. Next, we filter the information to prevent information
overload and to deal with safety complexity.
• Synthesize information. The fitting together of parts or elements to produce new
effects and to demonstrate that these effects create an overall order [22]. Grouping
indicates that elements belong together based on some common characteristic. In
this function, the filtered information is labelled (stereotyped) according to its type.
This processing from raw data to interpretive safety information is shown in the
activity diagram in Fig. 3.
Fig. 3. Activity diagram representing the processing of safety data to interpretive safety
information.
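A minimal sketch of this abstract-and-label step; the record fields, the stereotype set, and the relevance threshold are assumptions made for illustration, not part of the framework definition:

STEREOTYPES = {"hazard", "consequence", "risk", "decision", "rac"}  # assumed label set

def abstract_and_label(raw_records, relevance_threshold=0.5):
    """Translate informal raw safety data into filtered, labelled records."""
    labelled = []
    for rec in raw_records:
        if rec.get("relevance", 0.0) < relevance_threshold:
            continue                               # filter to prevent information overload
        label = rec.get("type", "").lower()
        if label in STEREOTYPES:
            labelled.append({"id": rec["id"], "stereotype": label, "text": rec["text"]})
    return labelled

print(abstract_and_label([{"id": 1, "type": "Hazard", "text": "Loss of braking", "relevance": 0.9}]))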
In PHA, high-level system hazards are identified inductively by asking “what if this
component fails?”, and hazards are also identified deductively by asking “how could this
happen?”. Scenario-guided hazard analysis is structured around the flows within a
system. For example, each HAZOP addresses complex chains of information flow, and
each flow can have hazardous effects. For the identification of hazards, their causes, and
their effects, the focus within this framework is on the properties and behaviours of
flows in the system.
A typical methodology for scenario identification is ETA; Cause-Consequence
Analysis in particular may also be applied to identify scenarios. Causal analysis aims to
identify the logical sequences of hazardous events that may lead to an undesirable
effect (EN50126). Typical causal analysis techniques are FTA and FMECA. The use of
inductive and deductive safety analyses results in downstream and upstream flows, see
Fig. 4.
Fig. 4. Activity diagram of the flows representing safety analyses performed in the safety
architecture overview framework.
For ERTMS and, more broadly, the Dutch railway industry, the safety case approach is
applied to construct an argument that the system is adequately safe for a given
application in a given environment. In accordance with the safety case, the structure
upon which the Safety Architecture Overview Framework is built consists of:
• Claims. A conclusion or premise to be demonstrated, for example that the system
is safe to operate.
• Evidence. References that can be the result of a safety analysis, for example FTAs
or FMEAs.
• Arguments. Sets of inferences between claims and evidence.
As for the example in Fig. 4, flow 1 includes some hazards (resulting, for example,
from a HAZOP) for which the mitigations M(f1, 1) and M(f1, 2) are applied in order to
reduce the risk. For this reason, one can claim that the execution of Flow 1 is acceptably
safe.
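A minimal data-structure sketch of this claims–arguments–evidence backbone, instantiated with the Flow 1 example from Fig. 4; the class and field names are illustrative assumptions:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    reference: str                 # e.g. an FTA or FMEA result

@dataclass
class Argument:
    inference: str                 # inference linking evidence to a claim
    evidence: List[Evidence] = field(default_factory=list)

@dataclass
class Claim:
    statement: str                 # conclusion or premise to be demonstrated
    arguments: List[Argument] = field(default_factory=list)

claim = Claim(
    statement="Execution of Flow 1 is acceptably safe",
    arguments=[Argument(
        inference="Hazards of Flow 1 are mitigated by M(f1,1) and M(f1,2)",
        evidence=[Evidence("HAZOP of Flow 1"), Evidence("FTA of mitigation M(f1,1)")])])
print(claim.statement)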
The definition of interactions includes the identification of all factors that contribute
to a failure. According to EN50126, the definition of the operational context is nec-
essary to evaluate the risks specific to a hazard within its accident scenario. Identifi-
cation of causal scenarios allows architects to discover interactions between various
flows and layers such as human, technological, organisational and external, that might
contribute to the failure at the system output. Each element of the scenario is allocated
5 Discussion
Some benefits of the Safety Architecture Overview Framework have been briefly
explained in earlier sections. There are, however, some specific added values and
challenges that require more explanation. First, the abstraction reduces complexity and
emphasizes the system under consideration. This can be useful in collaborative work
and should reduce ambiguity. In order to predict, explain, and control risks, it is of
primary importance to find a balance between concreteness and abstraction. The main
challenge is to extract data without losing information essential for defining the
architecture. Second, incorporating structure allows better partitioning. This modelling
of interfaces also allows parts to be produced independently. Structuring the safety
architecture eliminates vagueness in descriptions and clarifies tradeoffs among analyses.
Third, the risk decision-maker requires an understanding of social and political issues,
technical issues, management issues, and communication issues. An overview of risk
analysis and risk evaluation, together with a detailed view of layered scenarios, will
improve readability and comprehension [24].
As for compatibility between the top-level functionalities of the framework, hazard
identification should be systematic and structured, which means taking into account
factors such as system boundaries, interactions with the environment, modes of
operation, and environmental conditions. SysML incorporates the advantages of the
systematic structure of object- and process-oriented methods, which can easily describe
the connection and data exchange among systems [23]. There is evidence that SysML
has proved its value in other safety models (see Sect. 2 about background). Information
interpretation depends on information structure. Structuring information improves
readability and comprehension, contributes to the creation of representations, and is
essential for the quality of data generation. Also, the scenarios in the various layers
require a structure of causal relationships between the scenarios. Finally, the origin of a
failure can lie in decisions made earlier in the process. Complex system behaviour arises
from the interaction of components. Baxter explains that undesirable events are
simplistically seen as the result of organisational findings [25]. For these reasons, it is
important to define the interactions between the layers explained earlier.
6 Conclusion
The proposed framework combines safety data generation, data processing and struc-
turing, definition of interactions and finally the creation of customized representations
in order to predict, explain, and control risks by various safety experts, safety architects
and safety managers.
For safety data generation, the data will come from scenario-guided hazard analysis,
causal analysis of consequences, risk matrices including risk acceptance criteria,
ALARP evaluations, and decisions that influence the safety analyses. The focus of
the Safety Architecture Overview Framework is on the properties and behaviors of
functional flows and hazardous flows of the system under consideration. The structure
upon which the framework is built consists of claims, evidence and arguments. The
identification of causal scenarios allows safety experts, safety architects and safety
managers to discover interactions between various flows and layers. Graphic repre-
sentation exposes the interrelationships of events and their interdependence upon each
other. By visualization, we make boundaries of safety decisions explicit, and reveal
patterns such as links, inferences, and contextual relationships, that would be otherwise
hard to find. The views consist of: a risk analysis overview, a risk evaluation overview,
and a detailed view of scenario analyses. These views can illustrate the main inter-
actions between the various layers and system components. Also, it is possible to
illustrate the criticality of each layer and subcomponent. Explicit representation
delivers insight, stimulates striving for completeness, and leads to consistency of the
safety analysis.
In terms of acceptance, factors that would be of interest to the stakeholders for
adoption of the framework are described in [5]. These are, among other things, more
awareness of and sensitivity to interrelationships between hazards and risks, but even
more: comprehension of the safety architecture and the creation of cross-discipline
understanding. In response, we plan to test this framework in a real-life Dutch railway
case that is currently setting up its risk analyses and evaluations.
References
1. Alexandersson, G., Hultén, S.: The Swedish deregulation path. Rev. Netw. Econ. 7(1), 1–19
(2008)
2. European Union: Commission Decision of 25 January 2012 on the technical specification for
interoperability relating to the control-command and signaling subsystems of the trans-
European rail system. Off. J. Eur. Union 55, 1–51 (2012)
3. UNIFE: UNISIG, An industrial consortium to develop ERTMS/ETCS technical specifica-
tion. http://www.ertms.net. Accessed May 2018
4. Rajabalinejad, M., Martinetti, A., Dongen, L.A.M.: Operation, safety and human: critical
factors for the success of railway transportation. In: Systems of Systems Engineering
Conference, pp. 1–6 (2016)
5. Schuitemaker, K., Rajabalinejad, M.: ERTMS challenges for a safe and interoperable
European railway system. In: Proceedings of the Seventh International Conference on
Performance, Safety and Robustness in Complex Systems and Applications, pp. 17–22
(2017)
6. Stoop, J.A.A.M., Dekker, S.: The ERTMS railway signaling system: deals on wheels? An
inquiry into the safety architecture of high speed train safety. In: Proceedings of the Third
Resilience Engineering symposium, pp. 255–262 (2008)
7. Svedung, I., Rasmussen, J.: Graphic representation of accident scenarios: mapping system
structure and the causation of accidents. Saf. Sci. 40, 397–417 (2002)
8. Kelly, T.: Arguing safety a systematic approach to managing safety cases. PhD Thesis
(1998)
9. Arnold, A., Point, G., Griffault, A., Rauzy, A.: The AltaRica formalism for describing
concurrent systems. Fundam. Informatica 40(2), 109–124 (1999)
10. Cuenot, P., Chen, D.J., Gerard, S., Lönn, H., et al.: Towards improving dependability of
automotive systems by using the EAST-ADL architecture description language. In:
Architecting Dependable Systems IV. Lecture Notes in Computer Science, vol. 4615,
pp. 39–65 (2006)
11. Güdemann, M., Ortmeier, F.: A framework for qualitative and quantitative formal model-
based safety analysis. In: Proceedings of the 12th IEEE International Symposium on High-
Assurance Systems Engineering (HASE), pp. 132–141 (2010)
12. Cressent, R., David, P., Idasiak, V., Kratz, F.: Designing the database for reliability aware
model-based system engineering process. Reliab. Eng. Syst. Saf. 111, 171–182 (2013)
13. Falessi, D., Nejati, S., Sabetzadeh, M., Briand, L., Messina, A.: SafeSlide: a model slicing
and design safety inspection tool for SysML. In: Proceedings of SIGSOFT FSE, pp. 460–
463 (2011)
14. Sabetzadeh, M., Nejati, S., Briand, L., Evensen Mills, A.: Using SysML for modeling of
Safety-critical software-hardware interfaces: guidelines and industry experience. In: IEEE
13th International Symposium on High-Assurance Systems Engineering, pp. 193–201
(2011)
15. De la Vara, J.L., Panesar-Walawege, R.K.: SafetyMet: a metamodel for safety standards. In:
International Conference on Model Driven Engineering Languages and Systems, pp. 69–86
(2013)
16. Biggs, G., Sakamoto, T., Kotoku, T.: A profile and tool for modelling safety information
with design information in SysML. Softw. Syst. Model. 15(1), 147–178 (2016)
17. Mauborgne, P.: Operational and system hazard analysis in a safe systems requirement
engineering process – application to automotive industry. Saf. Sci. 87, 256–268 (2016)
18. Belmonte, F., Soubiran, E.: A model based approach for safety analysis. In: International
Conference on Computer Safety, Reliability, and Security, pp. 50–63 (2012)
19. Yakymets, N., Dhouib, S., Jaber, H., Lanusse, A.: Model-driven safety assessment of robotic
systems. In: Intelligent Robots and Systems, pp. 1137–1142 (2013)
20. Sharvia, S., Papadopoulos, Y.: Integrating model checking with HiP-HOPS in model-based
safety analysis. Reliab. Eng. Syst. Saf. 135, 64–80 (2015)
21. Blessing, L.T.M., Chakrabarti, A.: DRM, a Design Research Methodology. Springer,
London (2009)
22. Pahl, G., Beitz, W., Feldhusen, J., Grote, K.H.: Engineering Design, a Systematic Approach.
Springer, Berlin, Heidelberg (2003)
23. Wang, P.: Civil Aircraft Electrical Power System Safety Assessment: Issues and Practices.
Butterworth-Heinemann (2017)
24. Brussel, F.F., Bonnema, G.M.: Interactive A3 architecture overviews. Proc. Comput. Sci. 44,
204–213 (2015)
25. Baxter, G., Sommerville, I.: Socio-technical systems: from design methods to systems
engineering. Interact. Comput. 23, 4–17 (2011)
Formalization and Reuse of Collaboration
Experiences in Industrial Processes
1 Introduction
The overall aim of this paper is to propose a conceptual collaboration model that
allows the capitalization of how individuals collaborate in a process, so that these
collaboration experiences can be reused in the future.
This article is organized as follows: Sect. 2 describes the related works on col-
laboration characterization and Knowledge Management Systems applied to collabo-
ration in processes. In Sect. 3, the collaboration model and capitalization methodology
are presented. Finally, Sect. 4 presents the conclusions and discusses some limitations
of the proposed model and perspectives for future research.
2 Literature Review
Collaboration has been analyzed in several studies due to its impact on enterprise
success. This section presents current research in two key domains of our model:
collaboration characterization in industrial processes and Knowledge Management
Systems applied to collaboration in processes.
The teamwork scale measures the ease with which social vertices can pool resources; it
is calculated by aggregating two mathematical measures: the clustering coefficient and
the degree centrality of the collaboration graph. This indicator allows assessing the
activity of an actor and the interconnectedness within a cluster for teamwork. The
decision-making scale measures the ease with which social vertices can make choices; it
is calculated by aggregating the clustering coefficient and the closeness centrality. This
indicator assesses the ease with which an actor within the intra-organizational network
can make decisions based on the interconnectedness and connections for relationships.
Finally, the coordination scale measures the ease with which social vertices can
harmonize interactions; it is calculated by aggregating the closeness and the degree
centrality. These indicators make it possible to characterize the performance of
collaboration between actors performing activities.
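A minimal sketch of how these three scales could be computed on a collaboration graph with NetworkX; taking the mean of the two underlying measures is an assumption, since the exact aggregation is not detailed here:

import networkx as nx

def collaboration_scales(g: nx.Graph) -> dict:
    """Per-actor teamwork, decision-making and coordination scales (assumed mean aggregation)."""
    clustering = nx.clustering(g)
    centrality = nx.degree_centrality(g)
    closeness = nx.closeness_centrality(g)
    return {
        actor: {
            "teamwork": (clustering[actor] + centrality[actor]) / 2,
            "decision_making": (clustering[actor] + closeness[actor]) / 2,
            "coordination": (closeness[actor] + centrality[actor]) / 2,
        }
        for actor in g.nodes
    }

g = nx.Graph([("Alice", "Bob"), ("Bob", "Carol"), ("Alice", "Carol"), ("Carol", "Dave")])
print(collaboration_scales(g)["Carol"])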
[Figure: class diagram of the proposed collaboration model. Vertices: Organization (name, foundation date, type of organization), Contract (start date, due date, type of contract), Commitment (deadline, type of commitment), Requirement (acceptance, type of requirement), Activity (id, cost, duration, total effort, type of activity, competences) and Actor (name, last name, cost per hour, competences, department, years of service). Edges: employs, involves, includes, requires, contributes to, takes part in (effort hours, information access, role) and interacts with (duration, type of interaction, cost per hour). Experience capitalization feeds the knowledge base (KB) and the experience base (EB).]
an activity. Thus, an actor takes part in one or several activities and interacts with
one or several actors during the execution of an activity. Figure 3 shows the set of
elements of the proposed collaboration model.
An organization is a group of people structured in a specific way to achieve
shared commitments. For this element, we must identify the name, the foundation date,
and the type of organization. A contract represents one or several agreements whereby
an organization provides goods or services to another organization; these agreements
can also be verbal, which allows collaborative activities to start without a written
contract. For this element, we must identify the start date, the end date, and the type of
contract. A commitment in the proposed collaboration model represents the output of a
process activity. It is characterized by a type of commitment, classified for example as
product, report, service, or system. A requirement is a specific need that the
commitments have to meet. For this element, we must identify the type of requirement.
An activity of an industrial process describes the work that transforms one or several
inputs into intermediate or final outputs of the process. The following information must
be identified for each activity: cost, duration, total effort, and type of activity. The cost
attribute includes the cost of all actors who participate in the activity and other costs
such as material cost, transportation cost, etc. The duration attribute is the difference
between the start date and the due date. The total effort is the sum of all workloads, in
person-hours, needed to carry out the activity. An actor is a person who participates in
one or more activities of an industrial process. Actors are characterized by name, cost
per hour, department, and one or more competences.
For the relations between vertices, the main relations are: takes part in, interacts with,
includes, contributes to, involves, requires, and employs.
The relation “takes part in” is the relation between an actor and an activity. It
represents the contribution of the actor to a given activity and is characterized by the
total number of hours required by the actor to execute his or her contribution, in other
words the actor’s effort. Another characteristic is the information access. We propose to
measure this indicator with a number between 0 and 1: the value 1 indicates that the
information necessary to carry out the activity is easily obtainable, whereas the value 0
means that it is impossible to access the information. The relation “interacts with” is the
relation between an actor i and an actor j. The relation “contributes to” is the relation
between an activity and a commitment; it indicates which activity contributes to a
commitment. The relation “requires” is the relation between a commitment and a
requirement; it represents the requirements that must be met. The relation “involves”
represents the relation between an organization and a contract. It is characterized by the
duration and the organization’s role for a specific contract. The relation “includes” is the
link between a contract and a commitment; a contract may have one or several
commitments. The relation “employs” represents the working link between an actor and
an organization. An actor cannot have a direct link to two or more organizations.
The attributes of vertices and edges must be standardized in order to facilitate future
reuse. A taxonomy of concepts provides this standardization and ensures accurate
capitalization.
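A minimal sketch of how one such experience could be instantiated as an attributed graph (here with a NetworkX multigraph); the attribute values are taken from the example of Fig. 5 below, and the remaining names are illustrative assumptions:

import networkx as nx

exp = nx.MultiDiGraph()          # one collaboration experience as an attributed graph
exp.add_node("actor:Peter", kind="Actor", cost_per_hour=30, competences=["Quality control"])
exp.add_node("activity:1", kind="Activity", cost=60, duration_days=3,
             total_effort_h=8, type="Production")
exp.add_edge("actor:Peter", "activity:1", relation="takes part in",
             effort_hours=6, information_access=1, role="Monitor")
exp.add_node("commitment:1", kind="Commitment", type="Deliverable")
exp.add_edge("activity:1", "commitment:1", relation="contributes to")
print(exp.number_of_nodes(), exp.number_of_edges())   # 3 2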
[Figure: excerpt of the taxonomy of concepts (root “Universal”), including vertex types (Commitment, Requirement, Activity, Organization, Actor), edge types (requires, employs, contributes to, includes, involves, takes part in, interacts with), commitment types (request, delivery, analyze, design, produce, order), requirement types (functional, performance, usability, operational, physical, environmental, cost, schedule, technical, logistical), organization types (enterprise, teaming, consortium, joint venture, sponsored), contract types (verbal, fixed-price, cost-reimbursable, bond), actor types (customers, suppliers, sponsor, creditors, community, regulator), actor roles (regulator, leader, monitor, diffuser, contractor, negotiator, expert, operator) and interaction types (monitoring, management, cooperation, learning, assistance, planning, communication, coordination, validation).]
In our work, taxonomies are defined for some attributes in order to characterize the
collaboration experiences and facilitate their retrieval from the experience base where all
experiences will be stored. This is developed in Sect. 3.4. An example of taxonomy
for collaboration experiences is represented in Fig. 4. This taxonomy of concepts is
based on existing taxonomies proposed by (System Requirements - SEBoK, 2015) for
requirements, (Boucher et al. 2007) for actors’ roles, and (Mayer et al. 2012) for
contracts.
one vertex actor and one vertex activity. For each element, there are certain attributes
whose values are taken from the proposed taxonomy. In Fig. 5, for the vertex
activity 1, the value given for the attribute competence is “Quality control” and the
value given for the attribute type of activity is “Production”. Both values come
from the taxonomy of concepts.
[Figure content: vertex Actor (name Peter, last name Sullivan, role Monitor, cost per hour 30 $, competences Quality control, department Quality) linked by the edge “takes part in” (effort hours: 6, information access: 1) to vertex Activity 1 (id: activity 1, cost: 60 $, duration: 3 days, total effort: 8 hours, type of activity: Production, competences: Quality control).]
Identify the concepts and the relation types of each element of the collaboration experience.
Fig. 5. Example of instantiation and link with taxonomy for two elements
[Figure: example of a capitalized collaboration experience graph. A contract vertex includes a commitment (deadline 24/06/2017, type deliverable, acceptance yes) linked by “requires” to a performance requirement. Two activities contribute to the commitment: activity 1 (cost 2100 $, duration 3 days, total effort 63 hours, type delivery, competences quality, logistics, lean) and activity 2 (cost 420 $, duration 2 days, total effort 8 hours, type analyze, competences six-sigma). Several actors (e.g. actor 3, competences aircraft production, department quality; actor 4, competences know-how; each at 30 $ per hour) take part in these activities with their effort hours, information access and roles (monitor, executor, expert), and interact with one another through edges typed validation, communication, coordination and assistance, each with a duration of 5–6 hours.]
where the actors of the process who have previously participated in the problem
solving can be identified and filtered by more specific characteristics such as product
type, years of experience or competences.
4 Conclusion
From these quality indicators, it will be possible to help define efficient associations of
organizations based on past experiences with regard to collaboration.
Finally, the experience feedback process is still at a preliminary stage and requires
further development. The first axis of development is the definition of (i) a method to
reuse experiences and (ii) a mechanism to generalize several experiences into knowledge.
References
Bergmann, R.: Experience Management: Foundations, Development Methodology, and Internet-
Based Applications. Springer, Heidelberg (2002)
Boucher, X., Bonjour, E., Grabot, B.: Formalisation and use of competencies for industrial
performance optimisation: a survey. Comput. Ind. 58(2), 98–117 (2007)
Durugbo, C., Hutabarat, W., Tiwari, A., Alcock, J.: Modelling collaboration using complex
networks. Inf. Sci. 181(1), 3143–3161 (2011)
Gogan, L., Popescu, A., Duran, V.: Misunderstandings between cross-cultural members within
collaborative engineering teams. Procedia Soc. Behav. Sci. 109, 370–374 (2014)
Hermann, A., Scholta, H., Bräuer, S., Becker, J.: Collaborative business process management-a
literature-based analysis of methods for supporting model understandability. In: Proceedings
Internationale Tagung Wirtschaftsinformatik, vol. 13, pp. 286–300 (2017)
Jabrouni, H., Kamsu-Foguem, B., Geneste, L., Vaysse, C.: Continuous improvement through
knowledge-guided analysis in experience feedback. Eng. Appl. Artif. Intell. 24(8), 1419–
1431 (2011)
Kolodner, J.: Case-Based Reasoning. Morgan Kaufmann (1993)
Lambert, D., Emmelhainz, M., Gardner, J.: Building successful logistics partnerships. J. Bus.
Logistics 20(1), 165 (1999)
Ma, Y.-S., Chen, G., Thimm, G.: Paradigm shift: unified and associative feature-based
concurrent and collaborative engineering. J. Intell. Manuf. 19(6), 625–641 (2008)
Mas, F., Menéndez, J., Oliva, M., Ríos, J.: Collaborative engineering: an airbus case study.
Procedia Eng. 63, 336–345 (2013)
Mayer, D., Warner, D., Siedel, G., Lieberman, J.: The Law, Sales, and Marketing (2012)
De Mendonça Neto, M.G., Seaman, C., Basili, V.R., Kim, Y.M.: A prototype experience
management system for a software consulting organization. In: SEKE, pp. 29–36 (2001)
Rajsiri, V., Lorré, J.P., Benaben, F., Pingaud, H.: Knowledge-based system for collaborative
process specification. Comput. Ind. 61(2), 161–175 (2010)
Roa, J., Chiotti, O., Villarreal, P.: Detection of anti-patterns in the control flow of collaborative
business processes. In: Simposio Argentino de Ingeniería de Software - ASSE 44 (2015)
Segonds, F., Mantelet, F., Maranzana, N., Gaillard, S.: Early stages of apparel design: how to
define collaborative needs for PLM and fashion? Int. J. Fash. Des. Technol. Educ. 7(2), 105–
114 (2014)
Skyrme, D.: Knowledge Networking: Creating the Collaborative Enterprise. Routledge (2007)
Pyster, A., et al.: Guide to the systems engineering body of knowledge (SEBoK) v. 1.0. 1. Guide
to the Systems Engineering Body of Knowledge (SEBoK) (2012)
Van Rees, R.: Clarity in the Usage of the Terms Ontology, Taxonomy and Classification.
CIB REPORT 284. 432, pp. 1–8 (2003)
An MBSE Framework to Support Agile
Functional Definition of an Avionics System
Abstract. In the avionics domain, there have been many efforts in recent years to
build an MBSE methodology with tooling support. The main purpose is often to
improve the quality and efficiency of system definition, architecture, and integration.
Sometimes there is also an additional objective to ease system verification and
validation. This paper introduces an additional challenge: supporting an agile
development cycle to ease impact analysis and the incorporation of late and
changing requirements at different times. It presents key principles and
requirements of an agile MBSE approach and presents the associated modeling
activities with an illustration on an avionics case study.
1 Introduction
1
http://agilemanifesto.org/principles.html.
At the beginning, all functional needs or requirements are collected from the different
stakeholders identified for the System of Interest (SoI). They are imported into the model
so that they can be easily manipulated and related for traceability. The proposed MBSE
framework then consists of organized sets of model elements and four modeling
activities that populate and update those elements with traceability.
associated actors. The first message shows the triggering event of the scenario, while the
expected responses of the system are described with reflexive messages attached to the
SoI lifeline. Capturing those reflexive messages is useful to provide a first list of top-
level functions, but it is not the main priority at that stage because that list is likely to
change when other requirements are considered. The focus is rather put on the functional
interfaces, i.e. the incoming and outgoing functional messages, and on the causal
dependencies between those functional messages. The goal is to deduce consistent
functional scenarios that cross the SoI, i.e. interactions between the SoI and its
functional operating context represented by actors.
one can select a validated UC scenario and use it to simulate the functional architecture
model in its operational context.
When the simulation passes for all validated scenarios, the model contains a
consistent set of system interfaces and top-level functions that address all source
functional needs captured by the scenarios validated by the customer. Hence, using
model simulation helps in preparing V&V and the continuous integration of system
models in an agile context. Furthermore, ensuring consistency in the model is a key
factor to derive a
The proposed MBSE approach aims at supporting the late arrival or late analysis of
functional requirements, with immediate integration of new and updated functions into
the current definition. As we apply a scenario-based approach to functional
requirements, customers must be able to analyze and validate changes in use cases and
scenarios at any time. Indeed, in an agile context, the customer is involved in the
development process, especially to write and review user stories. Therefore, logic
constructs (if/else and loops) are prohibited in scenarios, as this behavior is likely to
change during functional refinement. This leads to a first requirement:
REQ 1: The framework shall enable customers to validate use cases and scenarios.
The scenarios shall remain conceptual and functional so that they can be easily
understood by the customer and validated at some point in time.
Changes in use cases and scenarios shall be immediately reflected in the current
functional definition. Such immediacy implies a second requirement:
REQ 2: The framework shall be able to identify new and updated functions from
scenarios and inject those changes into the functional definition model.
In addition, the signals identified from input and output messages shall also be
updated when new scenarios are created or modified. We call those signals “system
external functional flows” as they represent functional exchanges between the SoI and
its environment.
REQ 3: The framework shall be able to identify new and updated system external
functional flows from scenarios.
Analyzing changes in an agile context is a strong asset for performing risk analysis.
Hence, the definition model elements (signals and activities) shall be traced to the source
requirements, through the scenarios, to support change management easily thanks to the
traceability links. In addition, traceability is mandatory when developing safety-critical
systems.
REQ 4: The framework shall be able to provide complete traceability between
requirements, scenarios, and functional architecture.
To maintain the capability of validating the functional definition at any time, the
framework shall support the creation of a simulated functional behavior for the system
operating environment (represented by actors interacting with the SoI).
REQ 5: The framework shall be able to generate and update behavioral models to
stress the system with inputs coming from the behavior of the actors involved in the
context of a given UC scenario.
Teamwork is also an important part of agile methods. Each team member should be
able to deliver value on the model without overlaps:
REQ 6: The framework shall be able to assess the independence of use cases. If a
function is shared between several use cases, then the granularity of the use cases
should be modified to avoid the overlap.
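A minimal sketch of the check behind REQ 6, assuming a use case can be reduced to a named set of functions (the use case and function names are illustrative, not taken from the case study model):

from itertools import combinations

def shared_functions(use_cases: dict) -> dict:
    """Return, per pair of use cases, the functions they share (an empty result means independence)."""
    overlaps = {}
    for (name_a, funcs_a), (name_b, funcs_b) in combinations(use_cases.items(), 2):
        common = funcs_a & funcs_b
        if common:
            overlaps[(name_a, name_b)] = common
    return overlaps

ucs = {"Execute ground tests": {"run_test", "report_result"},
       "ACMF": {"monitor_equipment", "report_result"}}
print(shared_functions(ucs))   # {('Execute ground tests', 'ACMF'): {'report_result'}}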
To assess the relevance of those activities and to illustrate them, we used the Onboard
Maintenance System (OMS) as an industrial case study. The OMS is a software-
intensive avionics system that monitors the health of the aircraft during flight and
supports running tests on the ground. The main functions of the OMS are as follows:
• Continuously monitor aircraft avionics during flight
• Analyze faults, diagnose the root cause, and alert the crew
• Inform the maintenance crew of needed repairs
• Perform on-ground testing of aircraft avionics
In this experiment we focused on the operational use of the OMS without considering
the physical aspects (electrical, thermal, …) of the system. Since the OMS is a
software-intensive system, we mainly specified the discrete aspects of the system.
Experiments were conducted using the Cameo suite (Systems Modeler, Simulation
Toolkit and DataHub). Requirements were imported from a DOORS database.
The approach was applied in an engineering team comprising 7 systems engineers.
As depicted in Fig. 4, high-level functional needs were captured from the input
functional requirements and structured with use cases. Then, each use case was
assigned to one systems engineer whose goal was to provide black-box operational
scenarios. For each black-box scenario, the proposed approach was performed
iteratively: scenario writing, scenario combining (automated), and scenario simulation.
4.1 Results
During functional needs and requirements capture, 8 main use cases and 4 actors were
identified for the OMS. Among those use cases, 3 were considered representative
enough to be detailed with the MBSE approach:
• Execute ground tests: this use case involves many interactions with aircraft equipment
and with the maintenance technician.
• Aircraft Condition Monitoring Function (ACMF): it provides important functionality
with regard to failure messages and the good health of aircraft equipment.
• Upload data to target member system: it provides important functionality with
regard to aircraft equipment configuration.
From those 3 use cases, and with the help of the textual functional needs and requirements,
17 operational black-box scenarios were created, containing 108 functional messages
between the actors and the OMS. The scenarios made it possible to generate 71 signal
definitions, all traced to the initial messages for impact analysis. Then, the scenarios were
combined to create 18 executable top-level functions, 15 internal sub-functions, and 51
interface functions that define the first level of the functional architecture model.
Concerning the “agile performance” of the approach, here are our findings from the
experiments. First, it appears that the independence of use cases is a strong asset for
parallel teamwork. Indeed, we did not find any function shared between the use cases.
This is interesting because each team member can go from scenario down to functional
design without overlapping other team members’ work, at least at the functional level.
Therefore, everyone was able to contribute to the same model at the same time by
frequently delivering added value to the system models.
The second finding concerns traceability. In the proposed approach, interaction
messages are traced to input requirements (when directly captured), and each extracted
signal, interface function, and internal function is traced to the messages that generated
it. Therefore, when modifying a scenario or adding a new one, team members were able
to identify the impacts on the functional architecture under design.
Finally, the third finding concerns continuous validation. Indeed, the simulation
context representing the actors’ behaviors made it possible to validate the functional
architecture during the design phase.
4.2 Discussion
Let us now discuss the use case modelling approach. For use case identification, we
could have used the functions allocated from the upper engineering level (aircraft and
avionics system in our case). However, there is often overlap between those functions,
and it is not easy to detect the overlaps because of the limited function descriptions.
Hence, we suggest performing the use case technique without relying on the assumption
that the upper-level functions are the use cases. The purpose is to reach good use cases
with the “complete” property, i.e. a use case shall be executed up to its end to fully
address the functionality it describes and reach a new stable state or error state of the
system.
The second point of discussion concerns the use of actors instead of real external
systems in the black-box operational scenarios. This choice is motivated by two main
reasons. First, an actor is a role that can be played by any kind of real external system,
irrespective of the real interfaces: it leaves the problem space open, reusable, and
abstract, especially in the avionics domain where the operating context is often complex
(ARINC and AFDX networks with redundancy, for instance). However, this does not
mean that no operating context is needed. This is the point of the second reason: actors
can be allocated to real external systems present in the operating context. In that way,
the generated actors’ behaviors (UML Activities) can be automatically allocated to one
or several different real external systems for simulation.
The third point concerns the synthesis of activity diagrams from sequence diagrams.
Several MBSE approaches [9, 11–13] suggest creating definition diagrams, such as
SysML Activity diagrams or EFFBDs (Enhanced Function Flow Block Diagrams), to
combine the different UC behaviors. We consider that the sequence diagrams define
user stories that are also acceptance test cases to validate the system. Hence, sequence
diagrams and synthesized activity diagrams do not have the same value: the first is
validated by customers, who are not model experts, and is a usage diagram, while the
second is a definition diagram that enables refinement and design.
Finally, the end of Sect. 2 only touches upon the requirements derivation process. The
derivation process creates requirements from the top-level function definitions (inputs
and outputs, triggering event, activation conditions) all along the functional breakdown
hierarchy. As depicted in Fig. 5, one can derive a set of good-quality system
requirements from the validated functional definition, which is not guaranteed by
document-centric approaches. This could be done using generation patterns and
templates. The functional hierarchy leads to requirements decomposition, while the
different kinds of functions (interface, internal, top-level) lead to different requirements.
We have done a proof of concept, but this is mainly part of further work.
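As a rough illustration of such a generation template, assuming a function is described by its trigger, action, inputs, outputs, and activation condition (the sentence pattern is an assumption, not the one used in the experiment):

def derive_requirement(func: dict) -> str:
    """Generate a textual requirement from a top-level function definition."""
    return (f"When {func['trigger']}, the {func['system']} shall {func['action']} "
            f"using {', '.join(func['inputs'])} and provide {', '.join(func['outputs'])}"
            + (f" under condition {func['condition']}." if func.get("condition") else "."))

print(derive_requirement({
    "system": "OMS", "trigger": "a ground test is requested",
    "action": "execute the requested test", "inputs": ["test identifier"],
    "outputs": ["test report"], "condition": "aircraft on ground"}))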
5 Related Work
Use case driven development and the use of sequence diagrams are common practices.
Yue et al. [14–16] propose a set of rules and use case templates to reduce ambiguities
and generate Activity Diagrams and State Machines. The rules apply to the use of
natural language and enforce the use of specific keywords to document a use case, i.e.
its precondition, dependencies, basic flow steps, and alternative flows. Song et al. [17]
provide a method to create sequence diagrams and check their consistency with the
corresponding use case and class diagram. While those papers deal with scenario
consistency, they do not address the concept of function or the generation of functional
chains in a systems engineering context.
6 Conclusion
References
1. Cloutier, R., Bone, M.: MBSE survey: initial report results, INCOSE IW, Los Angeles,
USA, January 2015
2. Douglass, B.P.: Agile Systems Engineering, San Francisco, CA, USA (2015)
3. Rosser, L., Marbach, P., Lempia, D., Osvalds, G.: Systems engineering for software
intensive projects using agile methods. In: International Council on Systems Engineering
IS2014, Las Vegas, USA (2014)
4. Schindel, B., Dove, R.: Introduction to the agile systems engineering life cycle MBSE
pattern. In: INCOSE International Symposium, 2016
5. Society of Automotive Engineers (SAE): Guidelines for Development of Civil Aircraft and
Systems - ARP4754A (2010)
6. Friedenthal, S., Moore, A., Steiner, R.: A Practical Guide to SysML: The Systems Modeling
Language. Morgan Kaufmann (2014). ISBN 978-0-12-800202-5
7. ISO/IEC 15288: Systems Engineering-System Life Cycle Processes. ISO 2015
8. Adolph, S., Cockburn, A., Bramble, P.: Patterns for Effective Use Cases. Addison-Wesley
Longman Publishing Co., Inc., Boston (2002)
9. Weilkiens, T.: SYSMOD - The Systems Modeling Toolbox (2016)
10. OMG: Object Management Group Foundational Subset for Executable UML Models
Specification (2017). https://www.omg.org/spec/FUML/1.3/
11. Vitech: STRATA Methodology – One Model, Many Interest, Many Views. http://www.
vitechcorp.com/resources/white_papers/onemodel.pdf
12. Hoffmann, H.: Harmony SE A SysML Based Systems Engineering Process (2008)
13. Pearce, P., Hause, M.: OOSEM and model-based submarine design. In: SETE/APCOSE
(2012)
14. Yue, T., Briand, L., Labiche, Y.: A use case modeling approach to facilitate the transition
towards analysis models: concepts and empirical evaluation. In: Proceedings of the 12th
International Conference on Model Driven Engineering Languages and Systems.
(MODELS), Denver USA (2009)
15. Yue, T., Briand, L., Labiche, Y.: An automated approach to transform use cases into activity
diagrams. In: Proceedings of the European Conference on Modelling Foundations and
Applications, Paris, France (2010)
16. Yue, T., Ali, S., Briand, L.: Automated transition from use cases to UML state machines to
support state-based testing. In: Proceedings of the 7th European Conference on Modelling
Foundations and Application, Birmingham, UK (2011)
17. Song, I.Y., Khare, R., An, Y., Hilbos, M.: A multi-level methodology for developing UML
sequence diagram. In: Proceedings of the 27th International Conference on Conceptual
Modeling, Barcelona, Spain (2008)
18. Piques, J.D.: SysML for embedded automotive systems: SysCARS methodology. In:
Proceedings of the Embedded Real Time Software and Systems Conference, Toulouse,
France (2011)
19. Campean, F., Yildirim, U.: Enhanced sequence diagram for function modelling of complex
systems. In: Proceedings of the 27th Complex Systems Engineering and Development,
Cranfield, UK (2017)
20. Douglass, B.P.: Real-Time Agility: The Harmony/ESW Method for Real-Time and
Embedded Systems Development. Pearson Education (2009)
21. Mordecai, Y., Dori, D.: Agile modeling of an evolving ballistic missile defense system with
object-process methodology. In: IEEE Systems Conference Proceedings. Vancouver, BC
(2015)
Analyzing Awareness, Decision, and Outcome
Sequences of Project Design Groups:
A Platform for Instrumentation
of Workshop-Based Experiments
1 Introduction
complex projects [1]. The agent-based simulation software TeamPort forecasts a
project’s cost and duration, including the dynamics driven by coordination efforts [2].
A project’s team leaders come together in a workshop at the project’s front end to
design the project architecture and model its activity dependencies. They rely on
known theories and on their experience from previous projects. When modelling
dependencies, they face the Rumsfeld dilemma of known unknowns and unknown
knowns [3]. While known unknowns can be simulated with the help of probability
functions, unknown knowns need to be discovered by social interactions and
experiments. These so-called blind spots are often not visible at first but turn out to be
obvious when reflecting on the project after it has been completed [4]. Common
examples are large public projects, such as building a shopping mall or an airport,
where cost and duration suddenly explode due to unforeseen conditions. Afterwards,
the reasons for failure appear simple, and one wonders why they were not discovered
earlier.
To unveil these blind spots, a Project Design group needs to become aware of the
causes and effects of critical activity dependencies in order to establish interactions
between those teams who are assigned to the dependent activities [5]. When modelling
the dependencies in TeamPort correctly (with respect to environmental system
boundaries), the group can decrease both the cost and duration of a given project
model. This ability is defined as the subject’s Project Design performance (outcome).
In our research, we relate the three levels of awareness, decision, and outcome with
each other. The detailed research framework and methodology is explained in the
following.
2 Research Framework
apart and clearing up the structure. This structuring method is understood as treatment
in the following. A control group would not conduct the layout exercise but receive an
already structured project model from the beginning. Two research questions
emerge: (1) Does a structuring method increase a group’s awareness for activity
dependencies? (2) Does the awareness for activity dependencies increase a group’s
performance?
To answer these questions, we designed an experiment in a workshop setup for
Project Design groups. We used TeamPort as the platform and oriented the design of the
experiment on an approach which was previously defined by Moser et al. [1]. Later we
elaborated the approach by implementing the learnings of Chucholowski et al. [8].
In our workshop-based experiment, the participants were assigned to either a
control group or a treatment group. Each Project Design group consisted of four
participants. Social interactions between the group members fostered the participants’
learning. On the one hand, they could discuss and reflect on their modeling approach.
On the other hand, they could support each other on technical questions. After a
briefing and tutorial phase which explained the software controls, all groups worked on
the same predefined project model and tried to reduce its cost and duration. The
treatment groups began with the layout structuring exercise, while the control group
started with an already prepared clear structure. The exercise lasted 1.5 h. After
completing the exercise, the participants were debriefed. The groups compared their
modeling results. Each participant filled out a survey regarding their Project Design
approach. The feedback given by the participants in the surveys on the workshop-based
experiment indicates a high benefit from the interactive way of solving a problem
with the simulation software and its intuitive graphical interface, as well as from the social
experience of working in a group and developing solution strategies together. The
feedback was also used to collect improvement suggestions for further research.
3 Clustering Analysis
When analyzing the attention allocation data of a subject, not only the absolute number
of perceptions (mouse clicks) must be taken into account, but the relative order of
occurrences as well. We used bioinformatics algorithms to perform a sequence analysis.
In bioinformatics, sequences of genomes are aligned with each other. After counting
the number of matches between two sequences, their similarity is assessed. Comparing
all sequences from one data set allows one to draw a phylogenetic tree which shows the
relationship between the genomes. Looking at two genomes, the fewer branches lie
between them, the closer they are related. In our sociotechnical research, a sequence
alignment would not work, as it would imply that similarly behaving groups click
on the same elements at the same time. Instead, we use an alignment-free approach and
define features which are frequency-based. For each feature, we first generate a first
distance matrix for all groups (G) and a second distance matrix for each included
sequence (S). Next, we cluster the matrices hierarchically and compare the clusterings
of predictive and outcome features. After assessing the degree of similarity, we identify behavioral patterns across the groups.
[Figure: analysis workflow - 1. Conducting Experiment; 2. Compiling Data; 3.1 Computing Distances; 3.2 Process Changes; 3.3 Classify Groups; 4. Building Clusters; 6. Identifying Patterns. Features include Change Rate, Change Focus, Change Consistency, Change Distribution, Change Velocity, Return Time Distribution, Proximity Walk, Element Focus, and Performance Distribution; outputs include Attention and Performance Distances, Main Class, Class Consistency, Approach Class, and Approach Distribution.]
Next, we calculate the Return Time Distribution (RTD). This feature is deter-
mined by the frequency of elements which appear in sequence until the same element
appears again [9]. In our case, we take the number of clicks on other element types,
until an element of the same type is clicked again. The smaller this number (the Return
Time) is, the higher is a subject’s attention allocation towards that element. As a result,
we get the frequency of each Return Time for each element. The means and standard
deviations of these frequencies give the RTD. The pairwise distances of this feature
($D^{RTD} = [d_{ij}^{RTD}]$) are calculated with
$$d_{ij}^{RTD} = \sqrt{\sum_{r=A}^{X}\left(\mu_{ir}^{RTD} - \mu_{jr}^{RTD}\right)^{2} + \sum_{r=A}^{X}\left(\sigma_{ir}^{RTD} - \sigma_{jr}^{RTD}\right)^{2}} \quad (2)$$

where

$$\mu_{r}^{RTD} = \frac{\sum \left(\text{Return Time} \cdot \text{Frequency}\right)}{\sum \text{Frequency}} \quad (3)$$

$$\sigma_{r}^{RTD} = \sqrt{\frac{\sum \text{Frequency} \cdot \left(\text{Return Time} - \mu_{r}^{RTD}\right)^{2}}{\sum \text{Frequency}}} \quad (4)$$

$$\forall r \in \{A, B, C, D, E, F, G, H, X\} \quad (5)$$
The index r represents the element type. In TeamPort, we measure eight different
element clicks (A–H) plus the click for a simulation run (X). The element types are
separated into objects (products, teams, activities, and phases), relations between
objects (dependencies and contracts), as well as locations and projects.
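The RTD bookkeeping can be sketched in a few lines. The snippet below is a minimal illustration, assuming each subject's click sequence is available as a list of element-type labels; the function and variable names are ours, not TeamPort's.

```python
from collections import defaultdict
import numpy as np

ELEMENT_TYPES = list("ABCDEFGH") + ["X"]  # eight element types plus the simulation run

def rtd_features(click_sequence):
    """Mean and standard deviation of return times per element type (RTD feature)."""
    last_seen, return_times = {}, defaultdict(list)
    for position, element in enumerate(click_sequence):
        if element in last_seen:
            # return time: clicks on other element types until the same type reappears
            return_times[element].append(position - last_seen[element] - 1)
        last_seen[element] = position
    mu = {r: float(np.mean(return_times[r])) if return_times[r] else 0.0 for r in ELEMENT_TYPES}
    sigma = {r: float(np.std(return_times[r])) if return_times[r] else 0.0 for r in ELEMENT_TYPES}
    return mu, sigma

def rtd_distance(feat_i, feat_j):
    """Pairwise distance between two subjects' RTD features, following Eq. (2)."""
    (mu_i, sd_i), (mu_j, sd_j) = feat_i, feat_j
    return float(np.sqrt(sum((mu_i[r] - mu_j[r]) ** 2 for r in ELEMENT_TYPES)
                         + sum((sd_i[r] - sd_j[r]) ** 2 for r in ELEMENT_TYPES)))
```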
Further, we examine the groups’ Proximity Walk (PW). The PW is defined by the
structural distances a subject covers between two consecutive clicks. The structural
distance is the shortest path along connections between two objects. Each click is given a
Proximity distance with respect to the previous click. The frequency distribution of all
Proximity distances gives the PW. Two different frequency profiles exist: either a
subject clicks incrementally from element to element, which results in a short PW, or it
jumps back and forth between different segments of the project, which results in a long
PW. We use the means ($\mu^{PW}$) and standard deviations ($\sigma^{PW}$) to compute the distance
matrix $D^{PW} = [d_{ij}^{PW}]$. The pairwise distances are calculated with
$$d_{ij}^{PW} = \sqrt{\left(\mu_{i}^{PW} - \mu_{j}^{PW}\right)^{2} + \left(\sigma_{i}^{PW} - \sigma_{j}^{PW}\right)^{2}} \quad (6)$$
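As a companion sketch (again with illustrative names rather than TeamPort's actual API), the structural distances of a Proximity Walk could be derived from the project model's connection graph, for example with networkx:

```python
import networkx as nx
import numpy as np

def proximity_walk(click_sequence, connections):
    """mu_PW and sigma_PW from consecutive clicks (sketch of the PW feature).

    click_sequence: ordered list of clicked object identifiers
    connections: iterable of (object_a, object_b) edges in the project model
    """
    graph = nx.Graph(connections)
    distances = []
    for previous, current in zip(click_sequence, click_sequence[1:]):
        if previous in graph and current in graph and nx.has_path(graph, previous, current):
            # structural distance: shortest path on connections between the two objects
            distances.append(nx.shortest_path_length(graph, previous, current))
    if not distances:
        return 0.0, 0.0
    return float(np.mean(distances)), float(np.std(distances))
```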
Next, we determine the Element Focus (EF) of each subject. This feature is defined
by the ratio ($R_{ET}$) of the number of clicks on one element type (ET) to the number of clicks in
total.
$$R_{ET} = \frac{clicks_{ET}}{clicks_{total}} \quad (7)$$

We calculate the ratios for the eight element types (A–H). The pairwise distances
($D^{EF} = [d_{ij}^{EF}]$) are calculated with

$$d_{ij}^{EF} = \sqrt{\sum_{r=A}^{H}\left(R_{ir} - R_{jr}\right)^{2}} \quad (8)$$
$$\forall r \in \{A, B, C, D, E, F, G, H\} \quad (9)$$
Each distance matrix was normalized to its highest value. This allows them to be
compared directly with each other. Next, all distance matrices were clustered hierarchically
with the Neighbor Joining algorithm. The Neighbor Joining algorithm minimizes the
sum of branch lengths for each node and results in an unrooted tree. The method is
computationally more efficient than other clustering algorithms [10].
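A minimal sketch of this step, assuming the distance matrices are available as symmetric NumPy arrays, could use Biopython's tree construction utilities for the Neighbor Joining step; the group names and toy values below are purely illustrative.

```python
import numpy as np
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

def normalize(distance_matrix):
    """Normalize a symmetric distance matrix to its highest value."""
    return distance_matrix / distance_matrix.max()

def neighbor_joining_tree(distance_matrix, names):
    """Build an unrooted Neighbor Joining tree from a normalized distance matrix."""
    # Biopython expects a lower-triangular matrix including the zero diagonal
    lower = [[float(distance_matrix[i, j]) for j in range(i + 1)] for i in range(len(names))]
    return DistanceTreeConstructor().nj(DistanceMatrix(names, lower))

# toy example with three groups
D = normalize(np.array([[0.0, 0.4, 0.9],
                        [0.4, 0.0, 0.7],
                        [0.9, 0.7, 0.0]]))
tree = neighbor_joining_tree(D, ["G01", "G02", "G03"])
```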
In total, we examined the features of 38 groups (two had to be excluded as their data
was insufficiently recorded). We gained 303 sequences for our analysis. The PIs are
plotted in Fig. 3. The four quadrants of the scatter plot represent the four possible
outcomes of a sequence. If both PItot and PIinc are positive, the group is performing well.
If PItot is positive but PIinc is negative, the group’s performance is still better than the
initial status but is moving in the wrong direction. It should consider stepping back to its
last project model version. If PItot is negative but PIinc is positive, the group moves in the
right direction, but should consider loading a different project model version, as its total
performance is still negative. If both PItot and PIinc are negative, the group is completely
on the wrong track, as it is degrading project performance in both respects.
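The quadrant logic above can be made explicit in a few lines. The sketch below uses our own naming and treats values of exactly zero as non-negative, which the text does not specify.

```python
def classify_sequence(pi_tot, pi_inc):
    """Classify a sequence by the signs of its total and incremental performance impact."""
    if pi_tot >= 0 and pi_inc >= 0:
        return "performing well"
    if pi_tot >= 0:
        return "better than the initial status, but moving in the wrong direction"
    if pi_inc >= 0:
        return "moving in the right direction, but total performance still negative"
    return "declining project performance in both respects"
```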
In our data, we can see that most groups could make a positive incremental as well
as total impact on the project performance. However, the treatment groups (those who
used the structuring method) underperformed the control groups on average. Therefore,
the treatment was not effective.
Nevertheless, we applied the clustering algorithm and produced a PI clustering (not
shown here). The PI clustering is very distinct as it has multiple large and small
clusters. To draw a conclusion, we need to analyze the predictive features. For that
purpose, we show a cell plot of the attention allocation over the exercise time initiated
by mouse clicks (Fig. 4). The length of each bar in the cell plot indicates the relative
time spent in one view mode. This data is gained from the Fingerprint Report as
explained above. It shows the participants’ mouse clicks and the mode in which they
view the project model.
The Architecture view allows one to sketch the project structure. The Forecast window
is used to trace back the reasons for cost and duration effects. The Matrix view shows
relations between objects in a sorted table. Additionally, the project model can be
viewed in three different Breakdown Structures (BS). The Location Map gives an
overview of the project teams’ locations and the environmental boundaries. The
Meeting Coordination tab allows one to schedule meetings for the project teams. By stringing
the view modes of each group together, their action sequences are obtained.
The groups’ action sequences have been ranked for their maximum performance.
A qualitative analysis of the cell plot gives only little information. It seems that high
performing groups first plan their approach before starting to work on the model. Low
performing groups seem to use a trial-and-error approach instead. Additional analyses
have shown the effectiveness of the treatment. If many clicks occur in a short time
during the first part of an action sequence, the group clicked on many elements while it
was laying out the project model.
4 Results Interpretation
In the next step, we derive the predictive clusterings. Figure 5 shows the RTD clus-
tering which consists of a few large clusters and many small clusters. The other
clusterings are not shown here. The distance between two elements is calculated as
RTD (fingerprint based). Nodes bundle elements with short distance to each other.
Clusters can be defined by “cutting” the tree at a certain diameter. In other words, one
node can represent a cluster. The node itself has no meaning. There are large and small
clusters. The farther a node is from the middle, the smaller the cluster. Large
clusters are a sign of typical behavior shared by many subjects. Small clusters are
“exotic” behaviors that only a few subjects showed.
The idea of clustering visualization is to identify correlations between features by
layering their clusterings over each other. However, this qualitative comparison is only
practicable for a small sample, whereas a large sample size is needed for predictive
power. A quantitative comparison method is needed to analyze the correlation of
different clusterings. Therefore, we used the Fowlkes-Mallows-Index (FMI) to compare
the clusterings and determine their degree of similarity. The FMI takes values between
0 and 1. At 1, the examined clusterings are identical. At 0, they are not related at all.
Both clusterings must consist of the same number of groups (n). The number of clusters
(k) takes all integer values from 2 to n−1. Monte Carlo samplings show that the FMI
converges towards 0 for higher numbers of clusters, which differentiates it from other
similarity indices [11].
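The comparison can be reproduced approximately with standard libraries. The sketch below uses scikit-learn's Fowlkes-Mallows score and, as a stand-in for cutting the Neighbor Joining tree, an agglomerative linkage from SciPy cut into k clusters; this substitution is ours, since SciPy does not provide Neighbor Joining.

```python
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import fowlkes_mallows_score

def fmi_over_cluster_counts(dist_outcome, dist_predictive, n_groups):
    """FMI between two clusterings of the same groups for every k from 2 to n-1."""
    # linkage expects condensed distance vectors, hence squareform
    link_out = linkage(squareform(dist_outcome, checks=False), method="average")
    link_pred = linkage(squareform(dist_predictive, checks=False), method="average")
    scores = {}
    for k in range(2, n_groups):
        labels_out = fcluster(link_out, t=k, criterion="maxclust")
        labels_pred = fcluster(link_pred, t=k, criterion="maxclust")
        scores[k] = fowlkes_mallows_score(labels_out, labels_pred)
    return scores
```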
The FMI needs to be contrasted with its null hypothesis to determine the p-value of
the resulting values. In our case, we used a bivariate normal sample of 20 pairs with the
null hypothesis values

$$n = 303, \quad \mu = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad \sigma = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \quad (1)$$
The sample size equals the size of data set. Figure 6 shows the FMI Distribution of
three clustering comparisons (PI with RTD, PI with PW, and PI with EF) as well as the
null case.
If the line lies above the dashed null case line, the degree of similarity is higher than
random. The analysis shows that the FMI of PI with RTD is lower than null for cluster
numbers higher than 39. The same applies for the FMI of PI with PW for cluster
numbers higher than 26. Only the FMI of PI with EF stays over null for the entire
interval. Nevertheless, we need to add a confidence level to the null case (Table 2).
The calculated p-values do not satisfy a statistical threshold of 0.05. So far, we
could not find a feature which would be a good predictor in our clustering analysis.
5 Conclusion
the project architecture. After increasing their awareness for activity dependencies,
project design groups should use a corresponding approach for making decisions in the
project design phase. Decision-making data was collected throughout the experiments.
This data has not yet been analyzed. Next, we would like to add different clustering
techniques to our analysis methods to test the recorded data for more features.
The method proposed in this paper is a first step in various directions of our
research strategy. First, it proposes activity dependencies more richly characterized
than precedence as a relevant research topic. Complex dependencies are a crucial
aspect in complex programs and can have a big impact on the performance. By
modelling dependencies explicitly and being able to capture how teams are aware and
interact with them, we bring the topic closer to the practitioners and give them a way to
evaluate and steer the behavior of the team. Second, the research platform and approach
we propose opens possibilities of fast, global experiments in real time.
The instrumentation of the decision-making process, together with the algorithms
and analysis techniques proposed, enable teams to gain more insights on their behavior.
Teams can recognize best practices and adapt their collaboration accordingly. With a
globally deployable approach for the proposed workshop-based experiments, large
amounts of data would be gathered. Analysis techniques like machine learning could
be used to process the resulting data and build a prediction model for project design
practices. Further research should leverage the gained insights on how to steer projects
in the right direction for the design of new projects.
Acknowledgements. The authors would like to thank the contributions of the Global Team-
work Lab (GTL) at MIT and University of Tokyo, as well as Dr. Eric Rebentisch of CEPE at
MIT and Nepomuk Chucholowski of Technical University of Munich.
References
1. Moser, R., Grossmann, W., Starke, P.: Mechanism of dependence in engineering projects as
sociotechnical systems. Presented at the 22nd ISPE Concurrent Engineering Conference (CE
2015) (2015)
2. Moser, B.R., Wood, R.T.: Design of complex programs as sociotechnical systems. In:
Stjepandić, J. (ed.) Concurrent Engineering in the 21st Century, pp. 197–220. Springer
International Publishing, Switzerland (2015)
3. Rumsfeld, D.H.: DoD News Briefing. Federal News Service Inc., Washington, D.C. (2002)
4. Luft, J., Ingham, H.: The Johari Window: a graphic model of awareness in interpersonal
relations, Human relations training news, vol. 5, pp. 6–7 (1961)
5. Marle, F., Vidal, L.-A.: Managing Complex, High Risk Projects - A Guide to Basic and
Advanced Project Management. Springer-Verlag, London (2016)
6. Endsley, M.R.: Toward a theory of situational awareness in dynamic systems. Hum. Factors
37, 32–64 (1995)
7. Kahneman, D.: Thinking, Fast and Slow, 1st edn. Farrar, Straus and Giroux, New York
(2011)
8. Chucholowski, N., Starke, P., Moser, B.R., Rebentisch, E., Lindemann, U.: Characterizing
and measuring activity dependence in engineering projects. Presented at the Portland
International Conference on Management of Engineering and Technology 2016, Honolulu,
Hawaii, USA (2016)
9. Kolekar, P., Kale, M., Kulkarni-Kale, U.: Alignment-free distance measure based on return
time distribution for sequence analysis: applications to clustering, molecular phylogeny and
subtyping. Mol. Phylogenet. Evol. 65, 510–522 (2012)
10. Saitou, N., Nei, M.: The neighbor-joining method: a new method for reconstructing
phylogenetic trees. Mol. Biol. Evol. 4, 406–425 (1987)
11. Fowlkes, E.B., Mallows, C.L.: A method for comparing two hierarchical clusterings. J. Am.
Stat. Assoc. 78, 553–569 (1983)
Systemic Design Engineering
Curriculum and Instructional Results
1 Background
Fig. 1. Committed Life Cycle Cost against Time. 3.1. Fort Belvoir, VA: Defense Acquisition
University. (Source: DAU, 1993)
addressing design problems [2–10]. Even within the engineering community, designers
from different disciplines working on different types of problems have devised their
own frameworks to demystify the design process. As academia and industry are
moving towards becoming transdisciplinary, it is important to find ways to train stu-
dents with knowledge from various disciplines. Part of the challenge is to identify
where these disciplines overlap and find a common language to communicate among
practitioners with different training. It no longer makes sense to have separate design
processes for different disciplines that seek to solve technical problems, and there is a
need to create a codified standard to optimally equip students who will be designing the
products, services, and systems of the future [11].
The objectives for this interdisciplinary approach are [11]:
• To create a methodology that is logical, easy to follow, and can be taught as part of
any curriculum that teaches approaches to problem-solving, complex systems, and
product development.
• To create a process that promotes exploration of innovative ideas and a systematic
way to select the most promising option (engineering design).
• To promote a process that results in a low-risk, realizable, and profitable solution
(systems engineering).
• To create resulting products that satisfy a real user need (design thinking).
• To demonstrate that resulting products are sustainable in the changing and
increasingly connected marketplace (systems thinking).
• To verify that the approach can be used for systems which are human-centric,
including complex social systems (systemic design).
• To establish a framework that can be tailored to be used on systems and products of
all types (agile systems/software engineering).
Systemic Design Engineering (SDE) [11] was developed to satisfy the aforementioned
objectives. This approach combines the most advantageous elements of: (1) systems
thinking to ensure that the resulting design is appropriate to its context and is sus-
tainable, (2) design thinking to ensure that the design properly considers human cen-
tricity, and (3) systems and software engineering to ensure that the design can be
implemented, deployed and supported at the required performance, cost and quality.
One commercial example of success in each of these areas is the iPhone. In this
case, Apple has enabled an ecosystem of application developers such that they are not
direct employees, but rather are the emergent result of a system that was established by
Apple. This ecosystem is such a formidable barrier to entry that only two have been
successfully established, those for the IOS and Android. Other phone makers, such as
the pioneer Nokia, are no longer in the market as their businesses were not sustainable
without the existence of such an ecosystem. Hence, the critical importance of systems
thinking. Design thinking is a critical partner to systems thinking, as it ensures that the
product properly considers the human-centric nature of systems and their interactions
with people. In the smart phone example, the iPhone utilized a successful human
interface that has resulted in many consumers staying with the iPhone despite the
sometimes significant advantages of competitors’ hardware platforms. Learning a new
user interface is simply too great of a hurdle for many, even if the competing interfaces
apply many of the same interface principles (e.g., swipes and other gestural motions).
In addition, understanding the ethnographic and cultural contexts of the users can be
critical to the success of the products. Finally, systems and software engineering are
critical to enable successful engineering of the product such that it can be rapidly
ramped up to tens of millions of units per month all while achieving the necessary
levels of cost and quality.
The following is a brief description from [11] of how SDE is used which provides
context for the development of the course curricula that was developed for the
instruction of conceptual design. As noted earlier, it is in the front-end conceptual-
ization and design of the system where design and systems thinking can make the
greatest contributions. Using this approach, the Cynefin system taxonomy [12], shown
in Fig. 2, is used to first identify and classify different types of systems.
As noted in this diagram, simple systems can be addressed by best practices using
an approach where one senses the system, categorizes it, and then responds with the
best practice that is applicable for that situation. Those skilled in bureaucratic processes are most
comfortable with this approach. Complicated systems are ones in which cause and
effect are well-understood, and the most appropriate approach is to sense the system to
collect data, perform analysis, and then make decisions based on this analysis. This is
the system type with which engineers are most comfortable, with the result that all
systems are seen as complicated. Chaotic systems require immediate action, and as a
result, there is no time for data collection and analysis. Thus, the appropriate response
is one of act, sense the results of the action, and then base further responses on these
results. Authoritarians are most comfortable with these types of systems, with the result
that the firefighters might be the ones who start the fires through the lack of proactive
measures.
Simple systems are rare and generally do not require the attention of engineering.
Chaotic systems require rapid response and thus are not appropriate for engineering
responses. While complicated systems are already addressed well by existing analytic
engineering practices, this is not the case for complex systems. In addition, the
importance of complex systems is growing rapidly due to the increasing predominance
of humans and social elements in our socio-technical systems. In fact, with the
inclusion of humans in systems, anthropological approaches resulting in an under-
standing of the cultural norms and “interwoven meanings and practices” of the
stakeholders and system users becomes critical [11]. In complex systems, engineers
need to play the role of scientists, probing the system through experimentation
and then sensing the results to begin to understand the relationships between
cause and effect, which then enables an educated response. The design of experiments
and use of rapid iterative prototyping are essential skills in this area, for the goal is not
to ‘quickly fail’, but rather to quickly learn.
Thus, the combination of systems and design thinking is critical in addressing the
challenges of complex systems. Systems thinking approaches are necessary to ensure
that the overall context of the system is well understood so that resulting systems can
evolve in an acceptable fashion and be sustainable. Many different systems thinking
methods and tools can be deployed in this effort, but they will need to define the extent
of the systems, its critical interconnections and interactions, the overall dynamics of the
system, and the potential leverage points that might need to be used to influence the
system to have the desired behavior [13]. Through system and design thinking, critical
3 Course Curricula
Once this Systemic Design Engineering concept was created, it was tested in the
context of a course called the Conception of Cyber-Physical Systems, which is part of a
four course graduate series covering the lifecycle of conception, design, implementa-
tion and sustainment of cyber-physical systems. The core modules for the course are
based on the critical aspects of systems thinking, design thinking, and systems and
software engineering that are described below, along with the structure and learning
objectives for each module.
3.1.2 Relationships
Objectives:
• Understand how systemigrams are constructed and used
• Be able to transform a root definition into a systemigram mainstay
• Be able to construct a systemigram for a system of interest
• Be able to tell a story using a systemigram.
3.1.3 Dynamics
Objectives:
• Understand the elements and operation of causal loops
• Understand the difference between reinforcing and balancing loops
• Be familiar with some causal loop archetypes
• Be able to construct a causal loop
• Be able to tell a story with causal loops.
high-level business and conceptual plan for a program. The project presentations are
made after the modules have been presented, but before the students have completed
the detailed work which is contained in the final report. A rubric is presented at the
beginning of the course such that it can be reinforced in the subsequent instruction. The
outputs from this course have been used as the preliminary design concept for a
complete system development, which includes architecture and design, implementa-
tion, and sustainment in a four-course series that forms the core of a systems engi-
neering of cyber-physical systems program.
4 Pilot Results
While much of the lecture and course materials for this course existed in a series of
prior separate courses, they were first integrated into a single Systemic Design Engi-
neering graduate course targeted for introduction in the summer of 2017. The pilot
class was composed of eight professional master’s students in an aerospace corporation.
The greatest concern when assembling this course was not whether the materials were
useful, but rather if it would be possible to successfully cover this amount of material in
a single course. The prior version of the course had successfully combined elements of
design thinking with systems and software engineering. In addition, it was believed that
the course already was full of content even before adding the new major section on
systems thinking and its four modules. The problem was addressed by comparing the
learning objectives and modules from this course with those of a separate Systems
Thinking course and determining the relative importance of the learning objectives in
each as well as their overlap. The four essential modules from Systems Thinking were
identified, and through the removal of some overlapping material and less critical
material in the existing course, there was time to accommodate all of them.
Despite these challenges, the instruction of the material in the pilot course went
smoothly with a small number of changes noted for subsequent courses. Most critical,
however, was the assessment of the students’ results to determine how well they had
learned the concepts taught in the course. This was assessed subjectively by the instructor
in two different ways. The first assessment was in systems thinking concepts in which
the comparison was between the results obtained in an existing course devoted entirely
to systems thinking, and to this new course which included the core of systems
thinking. The results were surprising. The systems thinking modeling and analysis in
the Systemic Design Engineering course was judged to be the equal and, in some ways,
superior to what is produced by similar professional master’s students in a course
dedicated to systems thinking. While the analysis may not have been as deep as would
be seen in the dedicated systems thinking course, the quality of the analysis was at least
on par. The lessened depth of the analysis is easily explained by the fact that final
reports are limited in size and thus the systems thinking analysis is also limited.
The second area of comparison is in the design thinking, and systems and software
engineering portions of the course. These areas also were judged to be on par or
superior to the results seen in the prior conceptualization courses which did not contain
the systems thinking material. The inclusion of the systems thinking material provides
the basis for a more sustainable proposed system concept.
While the sample size is very small (the pilot contained eight students grouped into
two teams), the results are promising in that the learning objectives of all the material in
the Systemic Design Engineering course were mastered by most, if not all, of the students,
without any noticeable loss compared with the individual courses containing this material.
The only noted downside was that some of the students noted that this course required
more time than they had anticipated. However, this same comment was just as
prevalent on prior courses that did not have this additional material.
The results of creating a Systemic Design Engineering course for systems and software
engineering masters engineering students were quite positive. The students in the pilot
successfully mastered the systems thinking concepts that were added to the existing
course, with results that were judged to be at least equivalent to what students were
learning in a course dedicated to that subject. The additional material also provided an
increased richness in the resulting designs. The course will continue to be refined based
on feedback from the students and instructors, and a thorough analysis of the learning
results. The future plan is to update the Systemic Design Engineering course such that
it can be used as the design course for systems, software, cyber-physical and socio-
technical systems master’s degree programs, as the three elements of systems thinking,
design thinking, and systems and software engineering make it relevant to each of
these four system types in the Cynefin framework. It will be interesting to see how the
course will be optimized to support design efforts for each of these different system
types. It is believed that this understanding will enable the continued evolution of the
course and the concept of Systemic Design Engineering as a multi-disciplinary design
approach resulting in improvements that will provide benefits for all four system types.
References
1. Walden, D., Roedler, G., Forsberg, K., Hamelin, R., Shortell, T. (eds.): INCOSE Systems
Engineering Handbook – A Guide for System Life Cycle Processes and Activities, 4th edn.
Wiley Publishing (2015)
2. Boehm, B.W.: A spiral model of software development and enhancement. Computer 21(5),
61–72 (1988)
3. Brown, T.: Change by Design: How Design Thinking Transforms Organizations and
Inspires Innovation. Harper Collins, New York (2009)
4. Buede, D.M.: The Engineering Design of Systems: Models and Methods. Wiley, New York
(2000)
5. Chen, W., Lewis, K.E., Schmidt, L.: Decision-Based Design: An Emerging Design
Perspective, Engineering Valuation & Cost Analysis, special edition on “Decision-Based
Design: Status & Promise” 3(2/3), pp. 57–66 (2000)
6. Dym, C.L., Agogino, A.M., Eris, O., Frey, D.D., Leifer, L.J.: Engineering design thinking,
teaching, and learning. J. Eng. Educ. 94(1) (2005)
7. Giambalvo, J., Vance, J., Hoffenson, S.: Toward a decision support tool for selecting
engineering design methodologies. In: ASEE Annual Conference and Exposition, Colum-
bus, Ohio, 25–28 June 2017
8. Hildenbrand, T., Wiele, C.: The road to innovation: design thinking and lean development at
SAP (2013)
9. Jones, P.H.: Systemic design principles for complex social systems. In: Metcalf, G. (ed.)
Social Systems and Design, volume 1 of the Translational Systems Science Series, pp. 91–
128. Springer, Japan (2014)
10. Pahl, G., Beitz, W., Feldhusen, J., Grote, K.H.: Engineering Design: A Systematic
Approach, 3rd edn. Springer, London (2007)
11. Wade, J., Hoffenson, S., Gerardo, H.: Systemic design engineering. In: 27th
Annual INCOSE International Symposium. Adelaide, Australia, 15–20 July 2017
12. Kurtz, C.F., Snowden, D.J.: The new dynamics of strategy: sense-making in a complex and
complicated world. IBM Syst. J. 42(3), 462–483 (2003). https://doi.org/10.1147/sj.423.0462.
ISSN 0018-8670
13. Arnold, R.D., Wade, J.P.: A definition of systems thinking: a systems approach. Procedia
Comput. Sci. 44, 669–678 (2015)
14. Geertz, C.: The Interpretation of Cultures (1973)
Field Guide for Interpreting Engineering Team
Behavior with Sensor Data
1 Introduction
The study of engineering, like many fields, will be changed by pervasive sensors, data,
and analytics. In particular, manual observation of engineering teams conducted by
researchers may be supplemented or even replaced by “digital fingerprints” generated
by pervasive and non-intrusive instrumentation. There is an opportunity for the
research community to utilize sensors to increase the reproducibility, scalability and
efficiency of teamwork experiments. Before researchers embrace sensors, however, a
thoughtful understanding of the sensitivity and accuracy of these new data relative to
and in combination with existing experimental methods, including ethnographic and
survey based techniques, should be considered. This field guide introduces our recent
experience in interpreting a narrative derived from a team’s digital fingerprints.
Also, we explore how team narratives generated from sensor data can differ from direct
human observation.
The unit of analysis in this exploration is the team narrative, a record of an engi-
neering team’s collective behavior that includes moment-by-moment attention, ratio-
nale, decision making, solution generation, and solution performance. While an
engineering team may consist of multiple individuals, this research departs from prior
investigation of design teams of diverse individuals sharing common scope and
objectives, to a broader view of the “team of teams” with different (yet coupled) scope
and various agenda. A team narrative tells the story of a semi-structured design
experience, during which teams utilize their attention, tacit knowledge, dialogue, and
tools to explore solution(s) for a given challenge.
A narrative acts as a frame to gain a deeper understanding of the challenge, the teams,
and how these teams came together to interact, design, and implement solutions.
A useful narrative explains and predicts workshop performance outcomes. A researcher
may derive a teamwork narrative through primary observation of an engineering team
(i.e. watching a team in action) and by secondary observation of a team’s digital
fingerprints. We are interested in how narratives derived from digital fingerprints differ
or complement those generated by primary observation. Differences to consider are
specificity, accuracy, and teams’ affirmation of a narrative’s content.
In this paper, a framework to transform data from engineering design teamwork
into team narratives is proposed, and it is applied to some experiments. On the basis of
our experience in these experiments, we discuss what steps a researcher can take to
improve observations derived from pervasive and non-intrusive instrumentation in
complement to direct observation by humans.
The research contributions of this paper are:
1. a formalized concept of a team design walk narrative and a taxonomy of a design
experiment for understanding team dynamics;
2. a proposed experimental setup generating team narratives and a set of mapping
rules for transforming sensor data into a team narrative;
3. a demonstration of the aforementioned setup and rules, by showing some experi-
mental data and actual narratives of design.
2 Related Research
how should teams be organized and behave for the engineering of systems of systems?
To begin, related research in collective intelligence, learning, design studies, human-
computer interaction, and serious games are reviewed.
Collective intelligence (CI) is a group’s emergent capability, defined by Malone
et al. as “groups of individuals acting collectively in ways that seem intelligent.”
Several papers [2–4] report a psychometric methodology for quantifying CI, reflecting
how well groups perform on a diverse set of group problem-solving tasks. These
studies suggest that CI is not strongly correlated with the average or maximum indi-
vidual intelligence of group members, but is instead correlated with: the average social
sensitivity of group members; the equality in distribution of conversational turn-taking;
and the proportion of females in the group. Malone [2] also notes that some scholars
equate intelligence with learning.
Learning is associated with change in knowledge which may be gauged from
change in performance, but also change in group processes or repertoires. Sensing
knowledge changes in groups is difficult, as the knowledge may be either explicit (easily codified and
observable) or tacit (unarticulated and difficult to communicate). Moser
[5] reviews and categorizes models of learning: models based on experiential learning
[6, 7], reflective practice [8, 9], learning as knowledge conversion [10], and expansive
learning [11]. Valkenburg [9] discusses the so-called Element of Surprise introduced by
Schoen as initiator of the reflective practice process that ultimately underpins the results
of the design process. Valkenburg argues that, for design teams, surprises become
social events. Further, Stompff et al. [12] revise Valkenburg’s model of the mechanism
of reflective practice to include surprises, things that “fall outside of the current frame”,
and require reflection. They argue that surprises are a source for team learning and
innovation.
Studies of designers in practice—“design observatories”—are a strongly empirical
approach. For example, Carrizosa et al. [13] developed a special room for the design
observatory, installing four cameras to capture designers’ behavior and work. Milne
et al. [14] installed an interactive whiteboard and attempted to capture work processes.
Törlind et al. [15] showed two uses of the design observatory, to observe social aspects
of synchronous team-based design and to understand design activity through iterative
prototyping. Similarly, researchers in design thinking have conducted a series of exper-
iments, often in small teams and centered on iterative prototypes using various tools
and rules of interaction [16]. A recent application extended this approach to consid-
eration of teamwork in healthcare [17].
Human-Computer Interaction (HCI) research began grounded in the premise that
computing systems had become inherently characterized by human factors [18].
Contemporary HCI researchers have studied the interaction of teams through digital
means as computer supported cooperative work, or CSCW [19, 20]. A taxonomy called
the Groupware Matrix characterizes team interaction by two dimensions: location and
time [21]. Location indicates a team’s co-location (or not), while time refers to syn-
chronous (or asynchronous) interactions by individuals. A CSCW system’s position
within the Groupware Matrix has guided the classification of teamwork observations
according to this technological context.
Typically, HCI experiments had relied on small sample sizes (10–100 users),
constrained by the need to gather participants and record data [18]. Remote collabo-
ration tools have recently been able to achieve much larger sample sizes (10,000–
100,000 users), yet synchronous-collocated environments, where users must be in the
same place at the same time, are still difficult for researchers to scale. Additionally,
human cognitive phenomena previously studied in individuals have been extended to
the collective level, such as memory and learning for groups [22].
Simulations and Serious Games are similar to the complex systems architectural
decision models treated in this paper. Grogan and de Weck [23] developed an inter-
operable simulation game and applied it to infrastructure systems. Infrastructure design
involves cross-sector interactions amongst stakeholders - that is, highly collaborative
design. They conducted experiments to identify what features of a design tool lead to
more effective collaborative decisions, and they demonstrated technical data exchange
supported by integrated and synchronous tools is correlated with effective designs.
Their analysis is focused on technical aspects of the design process, and these insights
are limited in that the supporting data comes only from their own design tools.
In summary, related studies across several disciplines are relevant to the motivation
of this paper. Our examination of related research was limited given these broad
multiple sources. CI research provides us with a measure of group intelligence and
insight about what may be correlated with intelligence. Frame reflection and learning
guide us to develop sensors that can capture mental models and surprises during
teamwork. Design studies capture processes and the generation of ideas and solutions.
In contrast, HCI research teaches us to measure the means for interaction across
interfaces, described as channels of technology with timing, location, and purpose.
Our approach is similar to design observatories with three contributions: problems
framed by well-articulated systems models, increased interactive visualization for real-
time exploration of complex SoS, and new sensors for data capture. This experimental
framework characterizes activity across the problem space, the solution space and the
social space for engineering teamwork. The framework takes experiment repeatability
into consideration, in order to ease the implementation of quasi-experiments and,
through improvement, increase robustness of experimental design. Moreover, inter-
active visualization based data capture, shown as a “design walk”, is applied to observe
interaction between problem and solution spaces. These sensors allow the researchers
to see all events as the solution space is explored. Further, the design walk can also
visualize the relationship between the solution space and social space. With the
addition of audio, video, and ethnographic notes, it is possible to observe meaningful
interactions amongst the three spaces.
Our research seeks to detect how team attention and activities map to the problem
space (manifested as needs and values of stakeholders), solution space (as requirements,
function and form of the technical system) and the social space (through roles, capa-
bility, motivation and power of the agents). We assert that a sociotechnical event (e.g.
awareness of a need, exposure of an assumption, a decision about the technical solu-
tion) will have effects across all 3 spaces. This assertion follows a broad view of design
as a social activity, of teamwork during complex problem solving having stakeholder,
organization, process, and product considerations [24, 25].
Decades of research in the social sciences provide important input to our research
approach, including many case studies, meta-analysis and so on. However, a frustration
is that these studies are often difficult or infeasible to reproduce, and not scalable to
industrial teams of teams. For example, while it is widely accepted that overall effi-
ciency in organizations is achievable with high performing teams and learning orga-
nizations [26], questions remain of what defines a high performing team and what
metrics measure organization learning. Google’s Project Aristotle, which reviewed
academic studies on how teams work, did not establish strong patterns to define the
“ideal” makeup of the team that achieves best team effectiveness [27].
Ideally, a scientific body of knowledge should be scalable to industrial relevance
and produce reproducible insights, in order to increase collective cognitive capability
by teams of teams for complex systems problems. Moreover, a final objective should
be to reveal the mechanisms inside teams working in complex problems, a
sociotechnical physics. Previous studies, including inspired ethnography, great thinkers
and insightful writers are relevant guides, yet we cannot be sure without uncovering the
underlying phenomena with reproducible experiments. Indeed, insightful case studies
might be only shadows of the underlying phenomena. This work is an early attempt to
seek the underlying science of teamwork for complexity, and the first principles of
sociotechnical systems.
3 Methods
In this section, the following methods are explained: a simple taxonomy of the design
process, an overview of the quasi experiment, and then a method to instrument the
experiment, collect data, and generate design walk narratives.
Fig. 1. A nomenclature for the design process, which consists of a design walk and events
occurring simultaneously in the problem, solution, and social spaces - which forms the context
for problem, solution and teamwork.
A team’s behavior in the social space affects their awareness, exploration, and selection
of solutions, which is their behavior as seen in the solution space. The outcome of their
selection in the solution space is expressed in the problem space, and a decision on
whether they are satisfied with the result or not is also affected by the social space.
Thus, the underlying events that make up a design process interact simultaneously
across solution, problem and social spaces. By tracking these events and interactions,
instrumented teamwork experiments attempt to reveal teams’ dynamics moment by
moment. A design walk is the path taken by the team over time to consider, generate,
evaluate, and iterate design alternatives. Along the design walk, a tradespace is
explored and exposed. We characterize the happenings moment by moment during the
design walk as sociotechnical events existing simultaneously in all three spaces.
Solution and problem spaces are connected via an assessment or analysis method - here
simulation - while they are connected to the social space by some interface, here an
interactive visualization software. Teamwork of interest takes place in the context of
this design activity.
Fig. 2. A conceptual diagram of the experiment setup and research flow. During design
experiments, sensors consist of DSS logs and microphones and “direct feedback” by human
participants or observers. Sensor data is displayed, and both are interpreted into narratives.
solution space, and surprises. The “direct feedback” in particular may be used to
generate a holistic (primary) narrative of the design walk, for analysis, supported by
sensor data. However, it is preferable to create a narrative mainly based on inexpensive,
non-intrusive sensor data - a secondary narrative. Such sensors record: (A) design
decisions made (or attention allocation to inputs), (B) attention allocation to KPIs, and
(C) design walk in the problem space. By comparing the primary and secondary
narratives, we may assess the validity of the sensor interpretation. We seek to develop
an inexpensive, pervasive, non-intrusive sensor package that can make a secondary
narrative similar in quality to a primary.
Observations are made over the course of a design experiment, sensing how
attention is allocated to different parts of the system at hand. A graphical user interface
(GUI) enables users to deliberately choose to render or activate mutually exclusive
parameters of information. Attention is “allocated” as long as the user is viewing or
activating a certain parameter. For DSS inputs (A in Fig. 2), a unit of attention is
allocated if an input parameter is changed between discrete simulation events. A sim-
ulation event occurs when a team evaluates a selected architecture using the simulator.
In the case of inputs, attention is modeled as an instantaneous blip when the “Simulate”
button is pressed. For DSS output calculation (B), attention to a particular output
parameter (e.g. Fuel Cost) is allocated continuously as long as the output is selected by
the user for viewing on the tradespace plot (C). Since the tradespace plot only has two
axes, the GUI only allows a team to view two of the seven KPIs at a time. One view of
a team’s design walk can be visualized on such a plot by how two selected output
parameters’ values change from one architecture to the next. Outputs of consecutive
architectures are connected by a line, and the solution chosen by a team, represented by
a larger circle (see Fig. 4).
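To make this attention-allocation bookkeeping concrete, the sketch below processes a hypothetical, time-ordered event log; the event fields and kinds are illustrative assumptions, not the DSS's actual logging format.

```python
from collections import defaultdict

def attention_from_log(events):
    """Derive input and output attention from a DSS-style event log (sketch).

    events: time-ordered dicts, e.g.
      {"t": 12.3, "kind": "select_output", "axis": "x", "param": "Fuel Cost"}
      {"t": 15.0, "kind": "change_input", "param": "Fleet Size"}
      {"t": 20.0, "kind": "simulate"}
    """
    output_attention = defaultdict(float)  # seconds each KPI was shown on the tradespace plot
    input_attention = defaultdict(int)     # inputs changed between consecutive simulation events
    pending_inputs, axis_kpi, last_t = set(), {}, None
    for event in events:
        if last_t is not None:
            for kpi in axis_kpi.values():   # at most two KPIs (one per plot axis) at a time
                output_attention[kpi] += event["t"] - last_t
        last_t = event["t"]
        if event["kind"] == "select_output":
            axis_kpi[event["axis"]] = event["param"]
        elif event["kind"] == "change_input":
            pending_inputs.add(event["param"])
        elif event["kind"] == "simulate":
            for param in pending_inputs:    # input attention as a blip at the simulation event
                input_attention[param] += 1
            pending_inputs.clear()
    return input_attention, output_attention
```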
Fig. 3. Overview of the experimental framework and procedure. The current work is in grey,
concerned with integrating data to generate holistic design walk narratives.
any errors or omissions. These validated narratives are used to create tentative “map-
ping rules” from a sensor to a fragment of the narrative that it could inform. These rules
are then used with other design experiments’ sensor data only, to generate new sensor-
based design walk narratives. These are similarly shown to participants, and we seek
any difference in score or feedback between the primary and secondary narratives.
4 Results
The following design walk narrative was generated for a team from an experiment in
March 2018 with an emphasis on “direct feedback”: written rationale, post-surveys and
interviews - supported by observation and DSS data. Figure 4 is the sensor data cor-
responding to the narrative.
Primary Narrative: This team interprets the design task literally: to reduce emissions
at low cost, though are concerned at the lack of comparison of emissions to regulation.
They also realize that waiting time should be considered, but decide to neglect the
amount of cargo moved, because of unclear interpretation of this KPI. Thus they
consider mostly NOx, CAPEX and OPEX, checking other KPIs also. Their design walk
has 3 phases. Firstly, there is some initial exploration with multiple parameters.
Secondly, they focus on LNG options and investigate bunkering facilities, finding the
best combination of location and methods. A key surprise for them is that inexpensive
truck-to-ship bunkering seems sufficient, and located in Singapore might be best.
Thirdly, they try to change fleet composition. The 2 team members from the maritime
industry seem to be like-minded, possibly because of their pre-existing relationship and
shared (hierarchical) culture. On the other hand, the third designer may be less
influential because of a cultural barrier. When recording, they usually signal
“surprise” - options better or worse than expected - yet discussions suggest they don’t
Table 1. Selected mapping rules, from sensor data to design walk narrative

Narrative fragment | Source of narrative | Sensor proposed | Mapping rule proposed
Model confidence | Comments from scratch sheet, post-survey, observation | Surprise detector, input log | Frequent or early surprises may indicate a conflicting mental model; an input log showing OAT model factor testing indicates low confidence
Prioritized preferences for decision | Human: design rationale | Output log, attention log | Avoided or preferred areas of the problem space suggest key variables; expect attention on key variables
Phases of design walk | Time series of aggregated input results | Input log, output log | Look for patterns in the "macro" time series of input actions
Preferences: KPIs "satisfied" (e.g. "fuel type: LNG is good") | Human: scratch sheet comments | Input log, output log, attention log | Sequence: change input levers, achieve "good" output, then leave these levers unchanged and explore other levers; attention may change
Key surprises (learning) (e.g. "1 truck is enough!") | Human: scratch sheet comments (on surprises), post-survey | Surprise detector, input log, output log, attention log | After a surprise, a "path-dependent sequence": marked change in behavior, using different input levers, attention, and possibly output trends
Accidental result | Combined output and attention logs | Attention log, output log | Good result, but no attention paid to this KPI in the log
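As one example of turning such a rule into code, the sketch below implements a crude version of the "phases of design walk" rule, segmenting the walk where the rolling rate of input changes shifts markedly; the window size and threshold are arbitrary illustrative choices, not values used in the experiments.

```python
import numpy as np

def phase_boundaries(change_times, window=60.0, jump_ratio=0.5):
    """Candidate phase boundaries from the input-change time series (sketch)."""
    times = np.asarray(change_times, dtype=float)
    grid = np.arange(times.min(), times.max(), window)
    # input changes per rolling window of `window` seconds
    rate = np.array([np.sum((times >= t) & (times < t + window)) for t in grid])
    mean_rate = rate.mean() if rate.size else 0.0
    return [float(grid[i]) for i in range(1, len(rate))
            if abs(float(rate[i]) - float(rate[i - 1])) > jump_ratio * mean_rate]
```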
Utilizing 16 rules (of which 6 are in Table 1), a secondary narrative was generated
for another sensor footprint of the design walk, shown below:
Secondary Narrative: From their attention to KPIs, the team’s goal appears to be
interpreted literally: reduce emissions at low cost, but keeping cargo moved nominal.
However, emissions attention & outcomes are slightly less disciplined than for cargo
moved and OPEX & CAPEX - the team may have made some minor change in goal
emphasis (indeed, they often return to check NOx later). But we see no clear sign of
perceiving a goal or requirement to be ambiguous or unclear. We do not suspect low
model confidence, as no One-At-a-Time (OAT) model testing behavior was observed.
We may segment the design walk into two broad phases: early on, larger KPI fluc-
tuations occurred with fleet composition, and then later they focused on options around
LNG bunkering facilities. Approximately halfway through, there was also an intensive
period of considering Singapore options, particularly for OPEX, CAPEX and cargo
moved. A “satisficing” event occurred, for a half hybrid, half LNG fleet with non-shore
bunkers - a solution subspace which was never left. It followed two key surprises: a
positive one which preceded keeping the hybrid fleet composition, and a negative one
Fig. 5. Example Outputs over time (horizontal lines), Input Aggregated Changes over time,
Trade Variable Pairs consulted over time (empty dots), Surprises over time (vertical lines).
Fig. 6. Design Walk as interpreted. Blue circles and squares indicate key events.
when trying 3 shore bunkers, leading to the largest cost peak of the walk, after which
shore bunkers were always avoided. By
contrast, 9 “surprises” were detected in groups later in the walk, but they were
associated with small KPI changes, so we suspect they are considered minor. As for
degree of team consensus, a proposed audio analysis was unsuccessful, yet the walk
appears steady & unperturbed.
5 Discussion
The first, “primary” narrative exposes information from the path of exploration, which
we refer to as the design walk. Some of the path events are relevant to goals and their
attributes - interpretation and clarity, requirements and preferences - and to KPIs and their
prioritization or neglect. Other signals reveal phases of the design walk, charac-
terized by different primary activities, subspaces, or approaches. In turn, these phases
are marked at the boundary by surprises or learning which trigger a new phase; relative
influence, hierarchy & agreement in the social space; general approach to tool usage;
trust in the model, mental models, and their pre-existence or conflict. These terms begin
to populate a taxonomy of design walk characteristics that this experimental setup can
capture, refining the simple taxonomy illustrated in Fig. 1. Comparing the terms to the
framework of Fig. 1, the social space seems underrepresented - perhaps unsurprising
since few sensors were intended to capture it so far. However, it is somewhat
encouraging that in our early feedback responses, none of the participants identified
any large gaps in the narrative, and rated high fidelity. This suggests our initial
approach can capture meaningful characteristics of a design walk.
Some of these aspects are more easily captured by the sensor footprint than others.
At this early stage, across the limited datasets surveyed (10+ in this experiment), the
more easily capturable characteristics include KPI prioritization, phases of the design
walk, and the approach to tool usage, while information about goals (details or inter-
pretation) and social influence is among the most difficult. Model trust, mental models,
and surprises are somewhat intermediate - the sensor footprint provides some strong
hints, but not a full story. In fact, early evaluation suggests "secondary" narratives capture
many of the phenomena from the direct feedback sources, and we anticipate modest
differences in fidelity scoring between primary and secondary narratives (though data
analysis continues). However, as can be seen in the examples above, secondary nar-
ratives may only hint at many characteristics, whereas direct feedback data supports
stronger assertions. Between the various sensors, so far, no obvious data inconsistencies
have been found, nor any with the direct feedback data. Comparing the sensors' merits,
we find that, unsurprisingly, the DSS logs are most useful so far, perhaps particularly the
attention log - however, this suffers from the drawback of forcing a 2-dimensional
tradespace view. Used together with a time series of input and output data, a rough
picture of the design walk can be quickly created (see Fig. 4, though the output time
series is not shown). Desired improvements include an "automatic" surprise detector and
a "conversation detector" with voice recognition; these should provide more insight into
the social space.
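As a rough sketch of what the desired "automatic" surprise detector could look like, the Python fragment below flags time steps where a KPI moves disproportionately to the aggregated input change. The Step record, the thresholds, and the ratio heuristic are assumptions made for illustration, not the detector actually built for this experiment.

```python
# Hypothetical sketch of an "automatic" surprise detector over DSS logs:
# flag time steps where a KPI jumps far more than the inputs changed.
# The Step record, thresholds, and ratio heuristic are assumptions.

from dataclasses import dataclass

@dataclass
class Step:
    time: int
    input_change: float   # aggregated, normalized change of input levers
    kpi_change: float     # normalized change of the observed KPI (e.g. NOx)

def detect_surprises(steps, ratio=5.0, min_kpi_change=0.1):
    """Return the times of steps whose KPI change is disproportionate."""
    return [s.time for s in steps
            if s.kpi_change >= min_kpi_change
            and s.kpi_change > ratio * max(s.input_change, 1e-9)]

walk = [Step(0, 0.20, 0.30), Step(1, 0.10, 0.15), Step(2, 0.05, 0.90)]
print(detect_surprises(walk))  # -> [2]
```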
The mapping rules and field guide are meant to serve two overall purposes (Fig. 3): aid
data interpretation, and improve the experimental setup. Both should reduce dependence
on difficult-to-obtain, human-sourced direct feedback data, succeeding primary narra-
tives with high-fidelity secondary narratives (see Fig. 2). How well does our initial
attempt work towards this goal? Table 1 shows some sample rules to holistically
characterize a design walk using sensor data; they are among the more promising of 16
rules (and growing). We note that diverse and fundamental aspects of the walk can be
discussed using sensors - the most promising rules so far may yield insight on priori-
tization, model trust, phases/modes of activity, and depth of surprise/learning. Many of
these rules are low confidence, but more data, particularly validation from participant
ratings, should improve confidence. Interestingly, some sensor data may also be better
than direct feedback - akin to "revealed preferences". For now, human feedback remains
key to capturing intent and the social space, and to validating sensor-based narratives.
A proposed audio analysis of team conversation was unsuccessful: the recording
conditions (microphones distant from voices, and participants talking over one another)
did not allow for sufficient speech clarity.
6 Conclusion
In this paper, a framework to transform data from engineering teamwork into team
narratives is proposed, and it is applied to a quasi-experiment. On the basis of our
experience in these experiments, we discuss what steps a researcher can take to
improve observations derived from pervasive and non-intrusive instrumentation as a
complement to direct observation by humans.
The contributions of this paper are:
1. a formalized concept of a team design walk narrative and a taxonomy of a design
experiment for understanding team dynamics;
2. a proposed experimental setup generating team narratives and a set of mapping
rules for transforming sensor data into a team narrative;
3. a demonstration of the aforementioned setup and rules, by showing some experi-
mental data and actual narratives of design.
The design of complex systems is a challenge for which the recent generation of digital
tools for modeling, simulation, and interaction holds much promise. However, validating
the behavior and emergent outcomes that result from these much-needed new tools has
been difficult. The recent availability of pervasive sensors may allow the creation of
experiment platforms that increase the volume of empirical data from experiments, their
scalability, and the reproducibility of analyses. We find the approach promising for
the generation of sensor-derived stories and the potential for deeper and scalable
studies on engineering teamwork.
A Review of Know-How Reuse with Patterns
in Model-Based Systems Engineering
1 Introduction
1 http://www.omgwiki.org/MBSE/doku.php (visited on 31/05/2018).
continuing throughout development and later life cycle phases". However, adoption of
MBSE takes time, as many inhibitors remain, such as cultural and general resistance to
change, lack of related Knowledge and Know-How (K&KH), or the need for a higher
degree of guidance and reuse.
For a wider MBSE adoption, several advances seem necessary from organizational,
methodological, and tooling perspectives. In particular, from a methodological point of
view, reuse seems promising. Reusing engineers' Knowledge and Know-How is an act
of capitalizing on previous experiences or projects, whether on the System of Interest
(SOI) or on the Systems Engineering Activities (SEA). Often, however, this knowledge
remains in engineers' minds, and work has to be done to formalize it so that it can be
shared and reused. The expected benefits assume that reused modelling artefacts satisfy
maturity criteria guaranteeing a level of quality compatible with reuse objectives.
This article reviews engineering practices which intend to capitalize on K&KH and
to facilitate information sharing and reuse. A focus is placed on reusing K&KH through
the concept of "pattern". Accordingly, the second section presents reuse challenges in
engineering and related work, the third a short history of patterns, the fourth a literature
review of patterns for SE, and the last section discusses the interest of using patterns
in MBSE.
Most people in the pattern community credit Alexander et al. (1977) as the first
promoters of the value of "patterns", in a book on architecture, urban design, and
community liveability. They formalized a "pattern language", made of a myriad of
patterns that helped them to express design in terms of relationships between the parts
of a house, and the rules to transform those relationships (Coplien 1997). They began to
identify patterns with the idea that "Each pattern describes a problem which occurs
over and over again in our environment, and then describes the core of the solution to
that problem, in such a way that you can use this solution a million times over,
without ever doing it the same way twice" (Alexander et al. 1977). In the same way
that engineers reuse knowledge based on their previous experience, Cloutier (2006)
points out that Alexander and his co-authors "did not invent these patterns, they came
from observation and testing; and only then were they documented as patterns".
Since these pioneering works, the pattern approach has been introduced in various
engineering fields such as Software, Requirements, Telecommunications, and Control
Systems Engineering (Cloutier 2006). Beck and Cunningham (1987) were the first to
propose object-oriented patterns in the Software community; the goal was to improve
quality and to facilitate code writing by adopting good practices. Gamma et al. (1995),
also known as the "Gang of Four", wrote an authoritative book describing 23 Software
Design Patterns such as Composite, Iterator, and Command. A Design Pattern is a
general, reusable solution to a recurring problem in the design of object-oriented
applications; it describes a proven solution for solving software architecture problems.
As Design Patterns are not finished designs (concrete algorithms) but structured
descriptions of a design, they are independent of programming languages. Design
Patterns have been widely accepted, and have encouraged other domains to write
patterns to capture their experience.
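For readers unfamiliar with the GoF catalogue, the following minimal sketch illustrates one of the 23 patterns mentioned above, Composite, in Python; the class names and the cost computation are illustrative assumptions and are not taken from Gamma et al. (1995).

```python
# Minimal sketch of the GoF Composite pattern: clients treat single parts
# and assemblies of parts uniformly through one common interface.
# Class names and the cost example are illustrative, not from the GoF book.

from abc import ABC, abstractmethod

class Component(ABC):
    @abstractmethod
    def cost(self) -> float: ...

class Part(Component):            # leaf
    def __init__(self, cost: float):
        self._cost = cost
    def cost(self) -> float:
        return self._cost

class Assembly(Component):        # composite
    def __init__(self):
        self._children = []
    def add(self, child: Component) -> None:
        self._children.append(child)
    def cost(self) -> float:
        return sum(c.cost() for c in self._children)

engine = Assembly()
engine.add(Part(1200.0))
engine.add(Part(300.0))
print(engine.cost())  # -> 1500.0, same call as for a single Part
```

The pattern prescribes this structure (a shared interface for leaves and composites), not any particular code, which is why it remains independent of the programming language used.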
In the field of SE, the value of patterns emerges from the growing complexity of
systems and the difficulty of capturing a large body of knowledge. That is why Barter
(1998) proposes the creation of a Systems Engineering Pattern Language, a collection
of patterns that, when combined, address problems larger than those an individual
pattern can address. In the same way, Haskins (2003) proposes the use of SE patterns to
capture the information in the Systems Engineering Body of Knowledge (SEBOK).
Other works have used the concept of pattern in SE, especially in the Product
Information System field, where Cauvet et al. (1998) and Gzara (2000) propose a
methodological framework based on the reuse of patterns throughout the lifecycle, and
Conte et al. (2001) propose pattern libraries to support a methodological framework for
the design of product information systems.
After this short history of patterns, the next section aims to improve the compre-
hension of what a pattern is in SE.
It happens that similar designs are made independently by different engineers (Gaffar
and Moha 2005). This phenomenon acknowledges the fact that the same design ele-
ments exist in multiple designs, and that the study and documentation of such designs
foster reuse among projects. Indeed, it prevents "reinventing the wheel" and provides a
vocabulary for design concepts that projects can share. This is consistent with the
notion that patterns "are not created from a blank page; they are mined" (Hanmer and
Kocan 2004). It appears that SE patterns are embedded in existing designs, and that it is
necessary to find a mechanism to identify them. Such patterns are called "buried
patterns" by Pfister et al. (2012) and represent a scientific issue. As the process of
"mining" appears to be essential for creating Pattern Languages, various approaches
have been identified to write patterns from the elements extracted by pattern mining.
According to DeLano's (1998) classification, mining processes can be classified into
three categories: individual contributions, where the writers of the pattern use their own
experiences or those of their colleagues; second-hand contributions, where patterns are
written based on interviews with experts or by guiding another person in the writing of
patterns (patterns may also be borrowed from the literature or from companies in the
same domain); and workshop/meeting contributions, which consist of groups of around
ten people who brainstorm the elements of a pattern, along with a moderator and a
facilitator.
When mining a pattern, whatever the language used (textual or modelling), it
appears that a minimal set of information is always provided, as a pattern seems to
possess an inherent triptych composed of {Context, Problem, Solution}. Gaffar and
Moha (2005) define a "Minimal Triangle" that captures the core meaning of a pattern
(Fig. 1). It summarizes the idea that a pattern provides a solution to a recurring problem
in a particular context. However, a general consensus enlarges the minimal set of
elements needed in a pattern: Barter (1998) describes a generic pattern with the minimal
elements to be written (Fig. 2). Cloutier and Verma (2007) conducted a survey that
allowed them to list a recommended Systems Pattern Form. They also underline that
the concepts used in Systems Engineering represent higher levels of complexity and
abstraction than the prevailing notions of Alexander in architecture. For instance, the
architecture of the underlying concepts of control-command requires a more complex
notation than the sketches used in Alexander et al. (1977); thus, Pfister et al. (2012) used
the Enhanced Functional Flow Block Diagram (eFFBD) to represent their control-
command model and rely on formal conceptual foundations in the form of a meta-
model.
Fig. 1. The minimal triangle, extracted from Gaffar and Moha (2005). Fig. 2. Generic pattern, extracted from Barter (1998).
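To make the triptych tangible, the sketch below shows one hypothetical way to encode a pattern record around the minimal {Context, Problem, Solution} triangle in Python; the additional fields and the example pattern are assumptions for illustration, not the exact forms proposed by Barter (1998) or Cloutier and Verma (2007).

```python
# Hypothetical machine-readable pattern record built around the minimal
# {Context, Problem, Solution} triangle; the extra fields are inspired by,
# but not copied from, published pattern forms.

from dataclasses import dataclass, field

@dataclass
class SystemPattern:
    name: str
    context: str                 # situation in which the problem recurs
    problem: str                 # recurring problem to be solved
    solution: str                # core of the proven solution
    forces: list = field(default_factory=list)           # competing concerns
    related_patterns: list = field(default_factory=list)
    known_uses: list = field(default_factory=list)

example = SystemPattern(
    name="Redundant sensing",
    context="Safety-critical measurement chain",
    problem="A single sensor failure must not lose the measurement",
    solution="Duplicate the sensor and vote on the outputs",
    known_uses=["illustrative example only"],
)
print(example.name, "-", example.problem)
```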
Like models, patterns are abstractions, or sets of abstractions, of reality, and not a
magical solution. They allow people to solve complex problems by leveraging the
experience and K&KH of their predecessors. The results of a study conducted in the
Open Source Software community by Hahsler (2005) show that the larger the team, the
greater the use of patterns for documenting changes: from 11.4% for a single developer
to 82.2% in a team of ten or more developers. The capacity of patterns to deliver, at each
level of development, the correct amount of information for the stage at which they are
applied allows their quick adoption and, most importantly, their active use, as Hahsler
concludes in his study: "design patterns are adopted for documenting changes and thus
for communication in practice by many of the most active open source developers".
Patterns offer the possibility to create a common lexicon between systems architects
that fosters a common understanding of systems architecture, validated by experts. In
this way, the experience acquired by the software community on patterns will be
valuable, helping systems engineers to walk in their footsteps and develop patterns that
foster reuse and help control the complexity of a system.
As the interest in MBSE increases, it is important to also examine the work done on
integrating the concept of pattern into this framework. The integration of the OMG
Systems Modeling Language (OMG SysML) and its consequences on how to define
problems and describe solutions are particularly interesting and will be examined in the
next section.
Although research has been conducted to assess whether the concept of pattern can be
applied in the Systems Engineering field, such as Pfister et al. (2012), Cloutier (2006),
and Haskins (2005), the value of patterns in an MBSE framework has not been fully
explored. Yet it appears crucial to consider all the different needs, requirements, and
constraints of the different stakeholders in the early design stages. Although perceived
by many companies as a loss of time, introducing or reinforcing reuse capacity in
MBSE methodologies allows a new project to be designed with much less human effort,
benefiting from the reuse of already existing system models (Shani and Broodney
2015). In this way, the capitalization and reuse of system models through the concept
of pattern can be implemented in MBSE and thus favour its adoption at a larger scale.
Models are abstractions, or sets of abstractions, of reality (i.e. reality can be
represented under different consistent views), which means that it can be easy to reuse a
model in a new project, since no physical limitations get in the way. However,
depending on the type of reuse, the complexity of the system under design, and the
heterogeneity of methodologies and tools, the adoption of MBSE is penalized. Indeed,
reusing existing modelling artefacts (even if they have been designed to be reusable) is
harder than expected. As Korff (2013) stated, the "biggest problem is to transfer and
manage the knowledge [of] what is actually available for re-use". He emphasizes that
system engineers need to be aware of the system assets that can be defined and
propagated among teams designing complex systems. However, the creation of an asset
library is not sufficient, as the purpose is to allow engineers to reuse those assets in their
ongoing projects. Korff underlines that users should be able to search, publish, and
reuse assets in defined libraries and catalogues, without any specific technical
prerequisite.
Contrary to Korff (2013), Paydar and Kahani (2015) do not focus on the creation of
assets but propose an approach for adapting promising reusable assets during a model
reuse process, especially the adaptation of OMG Unified Modeling Language (OMG
UML) activity diagrams to new use cases in the context of web engineering. This work
proposes to semi-automatically create an activity diagram from existing activity dia-
grams according to the input use case diagram. Even though this approach is not
presented in an MBSE framework, the fact that use case diagrams are identical between
OMG UML and OMG SysML, and that activity diagrams serve the same purpose,
allows a transposition to the SE field to be considered.
In the case of variant modelling in MBSE, Oster et al. (2016) propose an approach
for building and exploiting composable architectures in the design and development of
a product line of complex systems in the aerospace and defence market. They chose
OMG SysML as the core language to define descriptive models of the composable
system reference architecture and extended it to define parametric models. This
methodology allowed the product line to evolve more readily, as the propagation of
information when adding, updating, or modifying components was automatic. Whereas
their work considers the physical layer, Di Maio et al. (2016) focus on the development
of functional architectures that can accommodate change due to decisions made in the
logical layer for Systems of Systems (SoS). The result of their study is an MBSE
process that consists in integrating a system model before considering the variants. It
requires that the system model contain both the original configuration and the variant
one; this separation is important when a new technology is introduced but the older one
is not yet abandoned. They also investigate aspects of including variant modelling in
OMG SysML, with a focus on extending an existing and operating model to support a
new variant in the case where a similar technology is used.
The introduction of a reuse capacity in MBSE frameworks has proven to improve
engineering efficiency. However, the steep learning curve that organizations face when
adopting MBSE methodologies results in the need to help engineers "quickly identify
not only valid architectural solutions, but optimal value solutions for the mission need"
(Oster et al. 2016). Thus, the concept of patterns could be an answer to this challenge.
Indeed, work has been done to introduce patterns during various phases of the engi-
neering cycle. Gasser (2012) described behavioural construct patterns (Fig. 3) to
facilitate and systematize the modelling of system behaviour. Instead of thinking at the
level of atomic graphical elements, he defined a structured way to represent elementary
behavioural constructs. In this way, he advocates the use of an "insert policy", as in the
construction of Functional Flow Block Diagrams (FFBD), where the diagram is resized
automatically when new elements are inserted. The proposed behavioural construct
patterns allow engineers to work with an algorithmic way of thinking, at a higher
modelling level that lets them focus more on the expected behaviour than on the
aesthetics of the diagrams.
In order to help engineers focus on what is important, patterns should guide
development and avoid deviation. For example, Barbieri et al. (2014) proposed a
process for the development of mechatronic systems based on a SysML design pattern.
Their intent is to demonstrate that adequate modelling guidelines can benefit the
development process by allowing efficient traceability of all information within the
system model, so that change influences can be traced more easily. This approach
proves particularly helpful for facilitating impact analysis in later lifecycle phases and
for reuse in future projects.
Pursuing the work of Haskins (2005) on patterns, Schindel (2005) proposed an
engineering paradigm in which patterns are re-usable models, enabling what he calls
Pattern-Based Systems Engineering (PBSE), where patterns can be configured or
specialized into product lines or product systems. With the advent of MBSE, this
modelling framework has led to the creation of an INCOSE working group called
MBSE Patterns2. In this context, Schindel and Peterson (2013) developed their
approach: they see "patterns as re-usable models" and apply them to requirements and
design. At a high level, they constitute a generic system pattern model that can be
customized according to enterprise needs, configurations, and uses, so that engineers
can benefit from the concepts of MBSE without being experts in modelling method-
ologies. Cook and Schindel (2017) apply it to the Verification and Validation processes,
and Bradley et al. (2010) to the pharmaceutical market.
2 http://www.omgwiki.org/MBSE/doku.php?id=mbse:patterns:patterns (visited on 31/05/2018).
6 Conclusion
engineering effectiveness concerns the development and the adoption of MBSE soft-
ware tools that integrate pattern libraries supporting their capitalization, selection,
reuse, and update.
References
Alexander, C., Ishikawa, S., Silverstein, M.: A Pattern Language. Ch. Alexander (1977). https://
doi.org/10.2307/1574526
Barbieri, G., Kernschmidt, K., Fantuzzi, C., Vogel-Heuser, B.: A SysML based design pattern for
the high-level development of mechatronic systems to enhance re-usability. In: IFAC
Proceedings Volumes (IFAC-PapersOnline), vol. 19, pp. 3431–3437. IFAC (2014). https://
doi.org/10.3182/20140824-6-za-1003.00615
Barter, R.H.: A systems engineering pattern language. In: INCOSE, pp. 350–353 (1998)
Beck, K., Cunningham, W.: Using pattern languages for object-oriented programs. In: OOPSLA-
87 Workshop on the Specification and Design for Object-Oriented Programming (1987)
Boehm, B., Abts, C.: COTS integration: plug and pray? Computer 32(1), 135–138 (1999).
https://doi.org/10.1109/2.738311
Bollinger, L.A., Evins, R.: Facilitating model reuse and integration in an urban energy simulation
platform. Procedia Comput. Sci. 51(1), 2127–2136 (2015). https://doi.org/10.1016/j.procs.
2015.05.484
Bradley, J.L., Hughes, M.T., Schindel, W.: Optimizing delivery of global pharmaceutical
packaging solutions, using systems engineering patterns. In: 20th Annual International
Symposium of the International Council on Systems Engineering, INCOSE 2010, vol. 3,
pp. 2441–2447 (2010). https://doi.org/10.1002/j.2334-5837.2010.tb01175.x
Cauvet, C., Rieu, D., Espinasse, B., Giraudin, J.-P., Tollenaere, M.: Ingénierie Des Systèmes
d’information Produit: Une Approche Méthodologique Centrée Réutilisation de Patrons. In:
Inforsid, pp. 71–90 (1998). http://dblp.uni-trier.de/db/conf/inforsid/inforsid1998.
html#CauvetREGT98
Cloutier, R.J.: Applicability of Patterns to Architecting Complex Systems, vol. 466. Stevens
Institute of Technology, Hoboken (2006)
Cloutier, R.J.: Model driven architecture for systems engineering. In: Language, no. September
(2008). http://personal.stevens.edu/~pkorfiat/CONOPS/Research/1_018.pdf
Cloutier, R.J., Verma, D.: Applying the concept of patterns to systems architecture. Syst. Eng. 10
(2), 138–154 (2007). https://doi.org/10.1002/sys.20066
Cochard, T.: Contribution à La Génération de Séquences Pour La Conduite de Systèmes
Complexes Critiques (2017)
Conte, A., Fredj, M., Giraudin, J.-P., Rieu, D.: P-Sigma: Un Formalisme Pour Une
Représentation Unifiée de Patrons. In: XIXème Congrès INFORSID, no. January, pp. 67–
86 (2001). http://liris.cnrs.fr/inforsid/sites/default/files/a366c1YfHw5cvgN2I.pdf
Cook, D., Schindel, W.: Utilizing MBSE patterns to accelerate system verification. Insight 20(1),
32–41 (2017). https://doi.org/10.1002/inst.12142
Coplien, J.O.: Idioms and patterns as architectural literature. IEEE Softw. 14(1), 36–42 (1997).
https://doi.org/10.1109/52.566426
Darimont, R., Zhao, W., Ponsard, C., Michot, A.: Deploying a template and pattern library for
improved reuse of requirements across projects. In: Proceedings—2017 IEEE 25th
International Requirements Engineering Conference, RE 2017, pp. 456–457 (2017). https://
doi.org/10.1109/re.2017.44
DeLano, D.E.: Patterns Mining. In: Rising, L. (ed.) The Pattern Handbook: Techniques,
Strategies, and Applications, pp. 87–96. Cambridge University Press, New York (1998)
Demian, P., Fruchter, R.: An ethnographic study of design knowledge reuse in the architecture,
engineering, and construction industry. Res. Eng. Des. 16(4), 184–195 (2006). https://doi.org/
10.1007/s00163-006-0010-x
Estefan, J.A.: Survey of model-based systems engineering (MBSE) methodologies. In:
INCOSE MBSE Initiative (2008). https://doi.org/10.1109/35.295942
Gaffar, A., Moha, N.: Semantics of a pattern system. In: Proceedings of the STEP International
Workshop on Design Pattern Theory and Practice IWDPTP05 (2005)
Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable Object-
Oriented Software. Addison-Wesley Longman Publishing Co. Inc., Boston (1995)
Gasser, L.: Structuring activity diagrams. In: 14th IFAC Symposium on Information Control
Problems in Manufacturing, Bucharest, Romania. IFAC (2012). https://doi.org/10.3182/
20120523-3-ro-2023.00153
Gautam, N., Chinnam, R.B., Singh, N.: Design reuse framework: a perspective for lean
development. Int. J. Prod. Dev. 4(5), 485–507 (2007). https://doi.org/10.1504/IJPD.2007.
013044
Gzara, L.: Résumé - Les Patterns Pour l’Ingénierie Des Systèmes d’Informations Produit (2000)
Gzara, L., Rieu, D., Tollenaere, M.: Product information systems engineering: an approach for
building product models by reuse of patterns. Robot. Comput. Integr. Manuf. 19(3), 239–261
(2003). https://doi.org/10.1016/S0736-5845(03)00028-0
Hahsler, M.: A quantitative study of the adoption of design patterns by open source software
developers. In: Free/Open Source Software Development, pp. 103–124. IGI Global (2005)
Hanmer, R.S., Kocan, K.F.: Documenting architectures with patterns. Bell Labs Tech. J. 9(1),
143–163 (2004). https://doi.org/10.1002/bltj.20010
Haskins, C.: 1.1.2 using patterns to share best results - a proposal to codify the SEBOK. In:
INCOSE International Symposium, vol. 13, no. 1, pp. 15–23 (2003). https://doi.org/10.1002/
j.2334-5837.2003.tb02596.x
Haskins, C.: Application of patterns and pattern languages to systems engineering. In: 15th
Annual International Symposium of the International Council on Systems Engineering,
INCOSE 2005, vol. 2, pp. 1619–1627 (2005). http://www.scopus.com/inward/record.url?eid=
2-s2.0-84883318751&partnerID=tZOtx3y1
Korff, A.: Re-using SysML system architectures. In: Complex Systems Design and Management
—Proceedings of the 4th International Conference on Complex Systems Design and
Management, pp. 257–266. Springer, Berlin (2013). https://doi.org/10.1007/978-3-319-
02812-5-19
Di Maio, M., Kapos, G.D., Klusmann, N., Allen, C.: Challenges in the modelling of SoS design
alternatives with MBSE. In: 2016 11th Systems of Systems Engineering Conference, SoSE
(2016). https://doi.org/10.1109/sysose.2016.7542937
Majchrzak, A., Cooper, L.P., Neece, O.E.: Knowledge reuse for innovation. Manage. Sci. 50(2),
174–188 (2004). https://doi.org/10.1287/mnsc.1030.0116
Manzoni, L.V., Price, R.T.: Identifying extensions required by RUP (Rational Unified Process) to
comply with CMM (Capability Maturity Model) levels 2 and 3. IEEE Trans. Softw. Eng. 29
(2), 181–192 (2003). https://doi.org/10.1109/TSE.2003.1178058
Miled, A.B.: Reusing knowledge based on ontology and organizational model. Procedia Comput.
Sci. 35, 766–775 (2014). https://doi.org/10.1016/j.procs.2014.08.159
Mourtzis, D., Doukas, M., Giannoulis, C.: An inference-based knowledge reuse framework for
historical product and production information retrieval. Procedia CIRP 41, 472–477 (2016).
https://doi.org/10.1016/j.procir.2015.12.026
Oster, C., Kaiser, M., Kruse, J., Wade, J., Cloutier, R.: Applying composable architectures to the
design and development of a product line of complex systems. Syst. Eng. 19(6), 522–534
(2016). https://doi.org/10.1002/sys.21373
Palomares, C., Quer, C., Franch, X.: Requirements reuse with the PABRE framework. Requir.
Eng. Mag. 2014, 1 (2014)
Paydar, S., Kahani, M.: A semi-automated approach to adapt activity diagrams for new use cases.
Inf. Softw. Technol. 57(1), 543–570 (2015). https://doi.org/10.1016/j.infsof.2014.06.007
Pfister, F., Chapurlat, V., Huchard, M., Nebut, C., Wippler, J.-L.: A proposed meta-model for
formalizing systems engineering knowledge, based on functional architecture patterns. Syst.
Eng. 15(3), 321–332 (2012). https://doi.org/10.1002/sys.21204
Schindel, W.: Requirements statements are transfer functions: an insight from model-based
systems engineering. In: INCOSE International Symposium, vol. 15, no. 1, pp. 1604–1618
(2005). https://doi.org/10.1002/j.2334-5837.2005.tb00775.x
Schindel, W., Peterson, T.: Introduction to pattern-based systems engineering (PBSE): leveraging
MBSE techniques. In: INCOSE International Symposium, vol. 23, no. 1, p. 1639 (2013).
https://doi.org/10.1002/j.2334-5837.2013.tb03127.x
Shani, U., Broodney, H.: Reuse in model-based systems engineering. In: 9th Annual IEEE
International Systems Conference, SysCon 2015 - Proceedings, pp. 77–83 (2015). https://doi.
org/10.1109/syscon.2015.7116732
Vogel-Heuser, B., Fischer, J., Neumann, E.-M., Diehm, S.: Key maturity indicators for module
libraries for PLC-based control software in the domain of automated production systems. In:
16th IFAC Symposium on Information Control Problems in Manufacturing (2018)
Wang, G., Valerdi, R., Fortune, J.: Reuse in systems engineering. IEEE Syst. J. 4(3), 376–384
(2010). https://doi.org/10.1109/JSYST.2010.2051748
Posters
The Systems Engineering Concept
A Practical Hands-on Approach to Systems Engineering
Henrik Balslev

David Schumacher

Abstract. The trend in today's market towards customer-specific products has pushed
industry to renew itself and drive value-creation initiatives. In fact, companies are
concerned not only with selling the product as a function, but also with selling the
value as a solution. It is reasonable to think that the creation of these new business
models involves building flexible manufacturing facilities and digitizing and integrating
inter- and intra-company systems into one intelligent data management structure, which
allows physical and software components to interact with each other in a myriad of
ways that change with context, in spite of the different spatial and temporal scales they
operate on. This synergistic interaction can be achieved through an Industry 4.0
environment that aims to transcend mechatronic systems and move to cyber-physical
systems (CPS). In this paper, we present our methodology to model CPS. The results
show promising research opportunities for implementing CPS in industry.
B: Balslev, Henrik, 233; Banach, Richard, 3; Binder, Christoph, 44; Boehm, Barry, 241; Bonjour, Eric, 235; Bonnaud, Aymeric, 56; Bonnema, G. Maarten, 145; Borth, Michael, 67; Boudau, Sophie, 219; Bouffet-Bellaud, Stéphanie, 33; Bullock, Seth, 121
C: Chavy-Macdonald, Marc-Andre, 203; Cheve, Ronald, 33; Coipeau-Maia, Vincent, 33; Corbier, Franck, 238; Correvon, Marc, 3; Coudert, Thierry, 157
D: De Valroger, Aymeric, 157; Debicki, Olivier, 3; Deshayes, Laurent, 237, 240; Doornbos, Richard, 109; Dudnik, Gabriela, 3; Dutré, Mathieu, 236
E: El-Alaoui, Ali, 237; Ernadote, Dominique, 16
F: Faudou, Raphaël, 168; Ferrogalini, Marco, 79; Foucault, Julie, 3; Fruehling, Carl, 179
G: Garnier, Jean-Luc, 97; Garnier, Thierry, 33; Gauthier, Jean-Marie, 168; Geneste, Laurent, 157; Gerado, Hortense, 192; Gouyon, David, 219; Guegan, Alan, 56; Guerroum, Mariya, 237
H: Hoffenson, Steven, 192; Huijbrechts, Bas, 109
J: Jankovic, Marija, 97; Johnson, Angus, 121
K: Kaiser, Dennis, 239; Kamach, Oualid, 240; Khalfallah, Malik, 133; Kouiss, Khalid, 240; Kuijsten, Marco, 145
L: Lastro, Goran, 44; Lesecq, Suzanne, 3; Levrat, Éric, 219; Lieber, Peter, 44; Linke, Thomas, 79
M: Maitre, Paul, 242; Mallah, Sara, 240; Mareau, Nicolas, 3; McDermott, Thomas, 241; Medromi, Hicham, 237; Meléndez, Diana, 157; Mevel, Eric, 238; Micaëlli, Jean-Pierre, 235; Monticolo, Davy, 235; Moser, Bryan R., 179; Moser, Bryan, 203
N: Neureiter, Christian, 44
O: Olaru, Sorin, 97
P: Pelegrin, Lorena, 203; Potts, Matthew, 121
Q: Qasim, Lara, 97
R: Raby-Lemoine, Jérôme, 242; Rajabalinejad, Mohammad, 145; Razavi, Joe, 3; Romero Bejarano, Juan C., 157; Rudolph, Stephan, 239
S: Saadi, Janah, 237; Sadvandi, Sara, 238; Sanchez, Felipe, 235; Sartor, Pia, 121; Schuitemaker, Katja, 145; Schumacher, David, 234; Schweiger, Ulrich, 79; Ševo, Kristina, 109; Silverans, Sam, 236; Sleuters, Jack, 109
T: Tang, Jian, 168
U: Uslar, Mathias, 44
V: van Gerwen, Emile, 67; Van Kelecom, Nick, 236; van Spaandonk, Heidi, 145; Verberkt, Mark, 109; Verma, Dinesh, 241; Verriet, Jacques, 109; Verstraete, Timothy, 236; Videau, François, 242
W: Wade, Jon, 192, 241; Walter, Benedikt, 239; Wanaka, Shinnosuke, 203; Winder, Ira, 203; Wu, Quentin, 219
Z: Zegrari, Mourad, 237; Zhu, Shaofan, 168