Composable Architectures Book
TBD
Gerrit Muller
University of South-Eastern Norway-NISE
Hasbergsvei 36 P.O. Box 235, NO-3603 Kongsberg Norway
gaudisite@gmail.com
Abstract
Composable architectures are used to create product families and individual
products of a product family. This book bundles articles addressing several
concerns and approaches with respect to composing products in a composable
architecture.
Distribution
This article or presentation is written as part of the Gaudí project. The Gaudí project philosophy is to improve by obtaining frequent feedback. Frequent feedback is pursued by an open creation process. This document is published as an intermediate or nearly mature version to get feedback. Further distribution is allowed as long as the document remains complete and unchanged.
[Figure: the question "How to manage platform architectures?" is answered via a market and case driven architecting process, along the time and platform dimensions, resulting in recommendations]
1.1 Introduction
[Figure: modalities — cardio/vascular X-ray, MR, CT, URF, and surgery — sharing common medical imaging components]
[Figure: customer key drivers — image quality (diagnosis), relaxed patient (patient accessibility, patient handling, patient entry and exit), department universality, and efficiency (integrated information flow, up time)]
1 A poor name for this collection; the main difference is the maturity of the modality: this group consists of relatively "young" modalities, 20 to 30 years old.
2 An equally important core business for Philips Medical Systems is the cardio imaging equipment in the catheterization rooms of the cardiology department, which is out of the Medical Imaging Workstation scope.
[Figure: timeline 1987-2000 of Medical Imaging products — the Basic Application Platform plus toolboxes, Easyvision RF, Easyvision RF R2, Easyvision CT/MR, Easyvision X-ray R1, Easyvision CT/MR R2, Easyvision RAD, and EasyReview]
Philips Medical Systems has been striving for re-usable viewing components since at least the late seventies. This quest is based on the assumption that the viewing functionality of all Medical Imaging products is so similar that a cost reduction should be possible when a common implementation is used. The lessons learned during this long struggle have been partially consolidated in [13].
The group that started the Common Viewing development applied a massive amount of technology innovation, see Figure 1.5.
[Figure: layers of the Basic Application — imaging (Image), graphics (Gfx), UI, and database (DB) toolboxes and the user interface on top of SunOS and SunView, running on a standard Sun workstation]
platform (workstation, the Sun version of UNIX, SunOS, and the Sun windowing environment SunView). The core of Common Viewing is the imaging and graphics toolbox, plus the UI gadgets and style.
The consequence of the literal screen copy was that a lot of redundant information was present on the film, such as patient name, birth date and acquisition settings. On top of that, the field of view was supposed to be square or circular, although the actual field of view is often smaller due to the shutters applied.
Figure 1.8: X-ray rooms from examination to reading, when Medical Imaging is applied as print server
Figure 1.9: Comparison of a conventional screen-copy based film and a film produced by Medical Imaging. This case is very favorable for the Medical Imaging approach; the typical gain is 20% to 50%.
Figure 1.10: Idealized layers of the Medical Imaging R/F software in September 1992
The definition of the Medical Imaging product was done by marketing, who described that job as a luxury problem. Normally heavy negotiations were required to get
Figure 1.13: Idealized layers of the Medical Imaging software in June 1994
All diagrams 1.6, 1.10 and 1.13 are labelled as idealized. This adjective is used
because the actual software structure was less well structured than presented by
Medical Imaging R/F and Medical Imaging CT/MR were positioned as modality enhancers: the use of these systems enhances the value of the modality. They are used in the immediate neighborhood of the modality, before the reporting is done. From a sales point of view these Medical Imaging workstations are additional options for a modality sale.
The radiology workflow is much more than the acquisition of the images. Digitization of the health-care information flow requires products which fit in the broader context of radiology and even the diagnostic workflow. Figures 1.14 and 1.15 show the increasing context where the workstation technology can be deployed.
[Figure: idealized layers of the Medical Imaging software with the IT infrastructure in the basement — applications (Compose, Print, Store, MPR, View, Export, Cluster, remote access) on toolboxes (customization, Spool, HCU, Store, Image, Gfx, UI, DB, PMS-net in/out, service, CDSpack), drivers (RC, HC, DOR, NIX), SW keys and configuration on Solaris, running on a standard SPARCstation 5 workstation]
[Figure: layering guidance — APIs, high level rules, and monitoring; layer n must not call layer n-k for k > 1, and must not call layer n+k for k > 0]
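The layering rule in the figure above (a layer may call its own layer or the layer directly below, but may neither skip layers downward nor call upward) can be checked mechanically. A minimal sketch in Python; the module names and layer numbers are hypothetical illustrations, not the actual system:

    # Sketch of a layering-rule check: layer n may call layer n or n-1,
    # but must not call n-k for k > 1 (skipping layers) nor n+k for k > 0
    # (calling upward). Module names and layer numbers are hypothetical.

    LAYER = {            # higher number = higher layer
        "application": 3,
        "toolbox": 2,
        "driver": 1,
        "os": 0,
    }

    def call_allowed(caller: str, callee: str) -> bool:
        """Return True if the call respects the layering guideline."""
        n, m = LAYER[caller], LAYER[callee]
        return n - 1 <= m <= n   # same layer or exactly one layer down

    def check_calls(calls):
        """Report all calls that violate the layering rule."""
        return [(c, d) for c, d in calls if not call_allowed(c, d)]

    if __name__ == "__main__":
        calls = [("application", "toolbox"),   # allowed: one layer down
                 ("application", "os"),        # violation: skips two layers
                 ("driver", "toolbox")]        # violation: upward call
        print(check_calls(calls))              # [('application', 'os'), ('driver', 'toolbox')]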
[Figure: architecture decomposition — functional decomposition (view, play, browse, storage, display, decompress, decoding, pipeline), allocation and infrastructure (tuner, frame buffer, MPEG, DSP, scheduler, OS, CPU, RAM, drivers), and the choice of integrating concepts such as performance, resource usage, and exception handling]
This figure shows that architecting involves many more aspects; especially the integrating concepts are crucial to get working products.
Architecting involves amongst others analyzing, assessing, balancing, making trade-offs and taking decisions. This is based on architecture information and facts, following the needs and addressing the expectations of the stakeholders. A lot of the architecting is performed by the architect, who frequently uses intuition. As part of architecting, vision, overview, insight and understanding are created and used.
[Figure: architecting — driven by stakeholders and expectations, the architect analyzes, assesses, balances, trades off and decides, building vision, overview, insight and understanding of facts, problems and legacy, resulting in architecture(s) with uncertainties and unknowns]
The strength of a good architect is to do this job in the real world situation, where the facts, expectations and intuition sometimes turn out to be false or changed! Figure 6.17 visualizes this art of architecting.
Figure 1.22: The architecture description is by definition a flattened and poor representation of an actual architecture.
IEEE 1471 makes another interesting step: it discusses the architecture description, not the architecture itself. The architecture is used here for the way the system is experienced and perceived by the stakeholders3.
This separation of architecture and architecture description provides an interesting insight. The architecture is infinite, rich and intangible, denoted by a cloud
3 Long philosophical discussions can be held about the definition of the architecture. These discussions tend to be more entertaining than effective. Many definitions and discussions about the definition can be found, for instance in [7], [5], or [9].
Figure 1.23 shows drivers for Generic Developments and the derived requirements for the Generic Something Creation Process. The first driver, Customer Value, is extrovert: does the product have value for the customer and is he willing to buy the product? The second driver, Internal Benefits, is introvert: it is the normal economic constraint for a company.
Today high tech companies are know-how and skill constrained, in a market which is changing extremely fast and which is rather turbulent. Cost considerations are degraded to an economic constraint, which is orders of magnitude less important than being capable of making valuable and sellable products.
The derivation of the requirements shows clearly that these requirements are not a goal in themselves. For instance, a shared architecture framework is required to enable features developed for one product to be used in other products as well, which in turn should have value for a customer. So the verification of this requirement is to propagate a new valuable feature from one product to the next, with small effort and lead time.
This derivation of drivers and requirements is emphasized, because many generic developments result in large monolithic general purpose things, fulfilling:
• availability of the accumulated feature set
• maturity
without bringing any customer value; "You cannot have this easy shortcut, because our architectural framework does not support it; changing the framework will cost us 100 man-years in 3 years elapsed time."
same deliverables with more detailed content. The message of this last figure is that much more is involved in platform development than a set of source code files.
The case, as shown in Section 1.2, used a platform approach to share common functions. In the table in Figure 1.27 the efficiency of this platform approach is evaluated. The basis for this evaluation is the number of different applications that have been realized and the required effort. This table shows that 13 persons were needed per application in 1993, while in 1996 only 3 persons per application were needed. The re-use of lower level functions facilitated a more efficient application development process. In practice the lead-time reduction of new applications was even more important: a rich and flexible platform is also a rapid prototyping vehicle. This last argument is far from trivial; many platforms are large and complex and do not facilitate rapid prototyping at all!
[Table of Figure 1.27: platform efficiency]
applications                          1     4     8    16    32
number of inputs (a.o. modalities)          1     5    10    15
number of people: platform                       35    37    38
number of people: applications                   27    35    41
number of people: total                    52    62    72    79
people per application (efficiency)        13     8     5     3
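As a small worked check of the efficiency metric in Figure 1.27 (people per application = total people / applications), assuming the column alignment reconstructed above; the table apparently rounds the results to 13, 8, 5 and 3:

    # Worked check of the "people per application" metric of Figure 1.27,
    # using the totals and application counts from the reconstructed table.
    applications = [4, 8, 16, 32]
    total_people = [52, 62, 72, 79]

    for apps, people in zip(applications, total_people):
        print(f"{apps:2d} applications, {people} people "
              f"-> {people / apps:4.1f} people per application")
    # prints 13.0, 7.8, 4.5 and 2.5 people per application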
[Figure: the software architecture embeds purchased software and a purchased OS next to proprietary software]
[Figure: problems of architecture reuse when merging software — architectural mismatch (wrappers, translators, conflicting controls), additional code and complexity without added value, poor performance and additional resource usage; duplication is a non-problem]
[Figure: platform as leading baseline for product releases R1, R2, R3 versus platform as consolidation of the product baselines]
• appropriate skill set (the so-called "100%" instead of "80/20" oriented people)
The first clause is nearly always false for our type of products; remember the dynamic market. The second clause is in practical cases not met (100+ manyear projects), although it might be validly pointed out that the size of the projects is the cause of many problems. The third clause is very difficult to meet: I know only a handful of people fitting this category, none of them making our type of products (for instance professors).
Figure 6.20 shows the relationship between team size and the chance of success-
fully following the first time right approach.
Understanding of the problem as well as the solution is key to being effective.
Learning via feedback is a quick way of building up this understanding. Waterfall
methods all suffer from late feedback, see figure 7.14 for a visualization of the
influence of feedback frequency on project elapsed time.
The evolution of a platform is illustrated in figure 6.22 by showing the change in the Easyvision [16] platform in the period 1991-1996. It is clearly visible that every generation doubles the amount of code, while at the same time half of the existing code base is touched by changes.
[Figure: timeline of minor and major SW releases around 1994]
Imaging and treatment functions are provided by modality systems, with the focus on the patient. Safety plays an important role, in view of all kinds of hazards such as radiation, RF power, mechanical movements et cetera. The variation between systems is mostly determined by:
• psycho-social factors
• political factors
• cultural factors
• language factors
These factors influence what information must be stored (liability), or must not
be stored (privacy), how information is to be presented and exchanged, who may
access that information, et cetera.
The archiving of images and information in a robust and reliable way is a highly specialized activity. The storage of information in such a way that it survives fires, floods, and earthquakes is not trivial4. Specialized service providers offer this kind of storage, where the service is location-independent thanks to high-bandwidth networks.
All of these application functions build on top of readily available IT components: the base technology. These IT components are innovated rapidly, resulting in short component life-cycles. Economic pressure from other domains stimulates the rapid innovation of these technologies. The amount of domain-specific technology that has to be developed is decreasing; it is replaced by base technology.
4 Today terrorist attacks need to be included in this list of disasters, and security needs to be added to the required qualities.
[Figure: value flows from the supplying business via the product creation process to the customer]
Product Creation Process This process feeds the Customer Oriented Process with new products. It ensures the continuity of the enterprise by creating products which enable the primary process to generate cash-flow tomorrow as well.
People and Technology Management Process Here the main assets of the company are managed: the know-how and skills residing in people.
Strategy Process This process is future oriented, not constrained by short term goals; it defines the future direction of the company by means of roadmaps. These roadmaps give direction to the Product Creation Process and the People and Technology Management Process. For the medium term these roadmaps are transformed into budgets and plans, which are commitments for all stakeholders.
The simplified process description given in figure 1.38 assumes that product
creation processes for multiple products are more or less independent. When
generic developments are factored out for strategic reasons an additional process is
required to visualize this. Figure 3.25 shows the modified process decomposition
[Figure: process decomposition of the Philips business — policy and planning, the customer oriented process (sales, service, production) generating cash flow, the product creation process (PCP) enabling tomorrow's cash flow, and a generic components creation process creating strategic assets]
Figure 3.26 shows these processes from the financial point of view. From this point of view the purpose of the additional process is the generation of strategic assets. These assets are used by the product generation process to enable tomorrow's cash-flow.
The consequence of this additional process is a lengthening of the value chain and consequently a longer feedback chain as well. This is shown in figure 3.27. The increased length of the feedback chain is a significant threat for generic developments. In products where integration plays a major role (which are nearly all products) the generic developments are pre-integrated into a platform or base
[Figure: the lengthened feedback loop — from the PCP via the policy and planning and customer oriented processes (sales, service, production) to the customer and back]
[Figure: the lead customer model — the lead customer gives direct feedback on a carrier product created by the Product Creation Process; risks are that the result is too customer specific or too product specific]
The lead customer as driving force guarantees a direct feedback path from an actual customer. Due to the importance of feedback this is a very significant advantage. The main disadvantage of this approach is that the outcome of such a development often needs a lot of work to make it reusable as a generic product. The focus is on functionality and performance, while many of the quality aspects are secondary in the beginning. Also the requirements of this lead customer can be rather customer specific, with low value for other customers.
1.6.3 Platform
In maturing product families the generic developments are often decoupled from
the product developments. In products where integration plays a major role (which
are nearly all products) the generic developments are pre-integrated into a platform
or base product, which is released to be used by the product developments.
The job of the architect is to integrate these views in a consistent and balanced way. Architects do this job by frequent viewpoint hopping: looking at the problem from many different viewpoints, sampling the problem and solution space in order to build up an understanding of the business. They work top down (objective driven, based on intention and context understanding) in combination with bottom up (constraint aware, identifying opportunities, know-how based), see figure 1.46.
In other words the views must be used concurrently, not top down like the waterfall model. However, at the end a consistent story must be available, where the justification and the needs are expressed at the customer side, while the technical solution side enables and supports the customer side.
The model will be used to provide a next level of reference models and methods. Although the 5 views are presented here as sharply disjunct views, many subsequent models and methods don't fit entirely in one single view. This is in itself not a problem: the model is a means to build up understanding, it is not a goal in itself.
One of the key success factors of platform development is scoping. The opposing forces are the efficiency drive by higher management teams, increasing the scope, and the need for customer specifics by project teams, minimizing the platform scope. Scope overstretching is one of the major platform pitfalls: in the best case the result is that the organization is very efficient, but customers are dissatisfied. In the worst case the entire organization drowns in the overwhelming complexity. Blindly
Figure 1.46: Five viewpoints for an architecture. The task of the architect is
to integrate all these viewpoints, in order to get a valuable, usable and feasible
product.
Figure 1.48: Example of the four key drivers in a motorway management system
Figure 1.49: Example Thread of Reasoning from the Medical Imaging Workstation
[Figure: market segments (1, 2, 3), customer key drivers (cost, power, traffic, volume, quality, taste), and products (P1800, P1900, P2200, P2600)]
2.1 Introduction
[Figure: a single market with the same or different customers and stakeholders, different applications and products, shared concepts, and shared technology. Examples: the electron microscope market (material sciences EM specialists, life sciences biologists, semiconductor process quality; specific handling, high throughput; shared technology such as e-beam sources, optics, vacuum, acquisition control) and the health care radiology market (radiography, gastrointestinal, orthopedics, neurology; x-ray diagnostics, MRI, CT scanners; shared concepts such as patient support, patient information, image information, viewing, storage & communication)]
Figure 2.2 shows the stepwise method to explore and analyze opportunities to
harvest synergy.
Explore markets, customers, products and technologies to create a shared under-
standing of the playing field.
Share market and customer insights by studying one customer and one product,
followed by a more extensive study of work flows.
Make maps where the views that resulted from the first steps are related.
Discuss value, synergy and (potential) conflicts to get the main issues on the table
in a factual way.
Create a long term and a short term plan to transform what can be done into something that (probably) will be done.
The exploration is performed by using fixed time boxes in which the exploration team discusses the following questions:
• What specific customers do we expect? What are the key concerns per
customer?
The purpose is to make a quick scan of the playing field so that a shared insight is
created between the members of the team. Figure 2.3 shows the typical result of
the exploration: a number of flip-charts with sticky notes. This first scan can be
done in a half day to a full day.
[Figure: the CAFCR views covered by submethods — a partially elaborated key driver graph (for example the environment key drivers "ensure system health and fault indication" and "reduce emissions") linked to application drivers and requirements, the commercial configuration, a functional model, and a physical model]
The challenge is to get more substance after the first quick scan. Figure 2.4 shows how the CAFCR model is followed to explore one product for one market in more depth. The idea is that one such depth probe helps the team to get a deeper understanding that can be extended to other products by variations on a theme. In this figure the CAF-views are covered by a key driver graph, the F-view focuses on the required commercial product structure, the Conceptual view is used for a functional model of the system internals, and the Realization view shows a block diagram. This is an example of a CAFCR analysis, but specific markets and products can benefit from other submethods in the CAFCR views.
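One way to capture such a depth probe is as a small graph that links customer key drivers to application drivers and on to requirements. A minimal sketch in Python; the entries are hypothetical illustrations loosely based on the drivers named in the figure above, not the actual graph of any case:

    # Minimal key-driver graph: customer key drivers link to application
    # drivers, which link to requirements. All entries are hypothetical.

    key_driver_graph = {
        "Reduce emissions": [
            "Monitor exhaust composition",
            "Optimize combustion settings",
        ],
        "Ensure system health and fault indication": [
            "Detect sensor failures",
            "Report faults to the operator",
        ],
    }

    requirements = {
        "Monitor exhaust composition": ["NOx sensor sampled at 10 Hz"],
        "Detect sensor failures": ["Self test at start-up",
                                   "Plausibility check per sample"],
    }

    def trace(key_driver: str):
        """Trace a key driver down to the requirements it justifies."""
        for app_driver in key_driver_graph.get(key_driver, []):
            for req in requirements.get(app_driver, ["(not yet elaborated)"]):
                print(f"{key_driver} -> {app_driver} -> {req}")

    trace("Reduce emissions")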
[Figure: work flow exploration of an MRI examination — Who (stakeholders, admin), Where (2D map of the examination room, control room, technical room, dressing room, rest room, waiting room, corridor; table, magnet, console, cabinets), When (functional procedure: get patient, patient on table, walk, sit, position, talk, coils in magnet, make plan, plan scan, scan; timeline around 14:15-14:20)]
The next step in digging deeper is to explore the work flow of different
Who is involved
[Figure 2.6: simplistic product map — products P1800, P1900, P2000, P2200 and P2600 positioned by sales price and functionality/performance, with regions labelled niche, mature, and many changes and variations]
At this moment the exploration team has insight into different customers. It helps the team and its stakeholders if the growing insight into these different customers and their needs for products can somehow be captured in a single map with a few main characteristics. Figure 2.6 shows a simplistic example. Often characteristics such as price and performance parameters are used for such a map.
software components or building blocks, and what are the dependencies between them? The main purpose of this step is to understand the potential commercial and technical modularity. From this modularity the synergy between products can emerge.
The first views have resulted in the identification of market segments, customer key drivers, features, products, and components. In this step the objective is to relate these views, e.g. market segments to customer key drivers, customer key drivers to features, features to products, and products to components, see Figure 2.8. Each mapping can be many to many; for example, different market segments can share the same key drivers, while every market segment has multiple key drivers.
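Such many-to-many mappings are easy to keep explicit in a few relation tables. A minimal sketch; all names are hypothetical examples, not the actual maps of the case:

    # Sketch of the mapping step: every relation is many-to-many and is kept
    # as an explicit set of pairs. All names are hypothetical examples.

    segment_to_driver = {("traffic", "cost"), ("traffic", "volume"),
                         ("consumer", "cost"), ("consumer", "taste")}
    driver_to_feature = {("cost", "low power feeder"), ("volume", "hf feeder"),
                         ("taste", "sunpower")}
    feature_to_product = {("low power feeder", "P1800"), ("hf feeder", "P2200"),
                          ("sunpower", "P2600")}

    def related(pairs, left):
        """All right-hand items related to one left-hand item."""
        return {r for l, r in pairs if l == left}

    # Walk from a market segment to the products that serve it.
    for driver in related(segment_to_driver, "traffic"):
        for feature in related(driver_to_feature, driver):
            print("traffic ->", driver, "->", feature, "->",
                  related(feature_to_product, feature))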
[Table of Figure 2.10: feature scores per product and criterion; the column grouping is reconstructed, and the scores for hf feeder were not recovered]
                 P1800            P1900            P2200
              ms   cs   sp     ms   cs   sp     ms   cs   sp
feeder         1    5    4      3    4    4      4    5    5
hf feeder      -    -    -      -    -    -      -    -    -
buffer         4    3    4      5    3    4      4    3    4
sunpower       2    2    1      2    2    1      2    2    4
(ms = market share, cs = customer satisfaction, sp = sales price)
Figure 2.10 shows the results of this selection process. Note that the discussion
provides most of the value to the exploration team. The need to characterize and
agree on the scoring forces the team to compare features and to articulate their
value.
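The scoring itself can be kept in a small matrix and aggregated per feature, for instance as a weighted sum over products and criteria. A sketch; the scores are taken from the flattened table above (column mapping assumed) and the weights are purely hypothetical — the point is the mechanism, not the numbers:

    # Sketch of aggregating the feature scores of Figure 2.10.
    criteria = ["market share", "customer satisfaction", "sales price"]

    scores = {               # per feature: 3 products x 3 criteria
        "feeder":   [1, 5, 4, 3, 4, 4, 4, 5, 5],
        "buffer":   [4, 3, 4, 5, 3, 4, 4, 3, 4],
        "sunpower": [2, 2, 1, 2, 2, 1, 2, 2, 4],
    }

    weights = {"market share": 2, "customer satisfaction": 1, "sales price": 1}

    def weighted_total(row):
        return sum(weights[criteria[i % 3]] * v for i, v in enumerate(row))

    for feature, row in sorted(scores.items(), key=lambda kv: -weighted_total(kv[1])):
        print(f"{feature:9s} weighted total = {weighted_total(row)}")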
[Figure: shared core technology across intelligent buildings, motorway management, and Closed Circuit TV; the concerns are openness, interoperability, reliability, and reuse]
3.1 Introduction
Many good reasons exist to deploy a reuse strategy for product creation, see figure 3.1. This list, the result of a brainstorm, can be extended with more objectives, but it is already sufficiently attractive to consider a reuse strategy.
Reuse is already deployed in many product development centers. Brainstorming with architects involved in such developments about their experiences gives a very mixed picture, see figure 3.2 for the bad versus the good experiences.
Analysis of the positive experiences shows that successful applications of a reuse strategy share one or more of the following characteristics: a homogeneous domain, hardware dominated, or limited scope. Figure 3.3 shows a number of examples.
Reuse strategies can work successfully for a long time and then suddenly run into problems. Figure 3.4 shows the limitations of successful reuse strategies. The main problem with successful reuse strategies is that they work efficiently as long as the external conditions evolve slowly. However, breakthrough events don't
+ reduced time to market
+ reduced cost per function
+ improved quality
+ improved reliability
+ easier diversity management
+ employees only have to understand one base system
+ improved predictability
+ larger purchasing power
+ means to consolidate knowledge
+ increase added value
+ enables parallel developments of multiple products
+ free feature propagation
[Figure 3.3: examples of successful reuse — homogeneous domain: cath lab, MRI, television, waferstepper; hardware dominated: car, airplane, shaver, television; limited scope: audio codec, compression library, streaming library]
[Figure 3.4: limitations of successful reuse — complicated supplier-customer relationships, overdesign or under-performance, conflicting interests, people, and implementation]
Figure 3.7 shows these trends in the market in the left hand column, where the length of the arrows indicates the relative increase or decrease.
The consequence of the market trends for product creation is that more and more features start to interact and that the complexity increases. This is reflected in a strong growth of the amount of software in products. The integration effort increases as well. The combination of these factors threatens the reliability: products which simply cease operating have become a fact of life.
To accommodate these trends multiple solutions need to be applied concurrently, as shown in the right hand column. New methods and tools are needed, which fit in this fast evolving, connected world. The fast development of the hardware (Moore's law) helps significantly in following the expectations in the market. New software technology, increasing the abstraction level used by programmers, increases the productivity and reduces complexity. New standards reduce the interoperability issues.
Reuse of software modules potentially decreases the creation effort, enables focus on the required features, and increases the quality if the modules have been proven.
[Figure 7.3: from an over-generic class (a generic design from scratch), via specific implementations without a priori re-use, to a shared generic part after refactoring]
Figure 7.3 shows an actual example of part of the Medical Imaging system [16], which used a platform based reuse strategy. The first implementation of a "Tool" class was over-generic. It contained lots of if-then-else constructs, configuration options, stubs for application specific extensions, and lots of best guess defaults. As a consequence the client code based on this generic class contained lots of configuration settings and overrides of predefined functions.
The programmers were challenged to write the same functionality as specific code, which resulted in significantly less code. In the 3 specific instances of this functionality the shared functionality became visible. This shared functionality was factored out, decreasing maintenance and supporting new applications.
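The pattern described here is easy to recognize in code. A deliberately simplified sketch (the class names and options are hypothetical, not the actual "Tool" class): the over-generic variant carries configuration flags and stubs for every conceivable client, while the refactored variant keeps only the genuinely shared part and lets each application stay specific.

    # Over-generic variant: one class tries to serve every client up front.
    class GenericTool:
        def __init__(self, mode="default", with_undo=True, max_items=100,
                     on_select=None, on_draw=None):          # best-guess defaults
            self.mode, self.with_undo, self.max_items = mode, with_undo, max_items
            self.on_select, self.on_draw = on_select, on_draw

        def handle(self, event):
            if self.mode == "print":        # if-then-else per anticipated client
                ...
            elif self.mode == "view":
                ...
            if self.on_select:              # stub for application extensions
                self.on_select(event)

    # Refactored variant: the shared part is small; each application is specific.
    class ToolCore:
        """Only the functionality that the specific tools actually shared."""
        def handle(self, event):
            self.dispatch(event)

        def dispatch(self, event):          # each subclass supplies its own behaviour
            raise NotImplementedError

    class PrintTool(ToolCore):
        def dispatch(self, event):
            print("queue for printing:", event)

    class ViewTool(ToolCore):
        def dispatch(self, event):
            print("update viewport for:", event)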
Bloating is one of the main causes of the software crisis. Bloating is the unnecessary growth of code. The amount of code really needed to solve a problem is often an order of magnitude less than what the actual solution uses. Figure 7.1 shows a number of causes for bloating.
One of the problems is that bloating causes more bloating, as shown in figure 7.6. Software engineering principles force us to decompose large modules into smaller modules. "Good" modules are somewhere between 100 and 1000 lines of code. So where unbloated functionality fits in one module, the bloated version is too large and needs to be decomposed into smaller modules. This decomposition adds some interfacing overhead. Unfortunately the same causes of overhead also
[Figure: causes of bloating around the core functionality — genericity, configurability (for instance fine grain COM interfaces), provisions for the future, poor specification ("what"), poor design ("how"), dogmatic rules, and support for unused legacy code; the overhead grows while the value resides in the core functionality; the solution is special measures to keep the core lean]
[Figure: operational organization of a product family — family operational manager, family marketing manager, and family architect; per subsystem a project leader and a subsystem architect; module developers]
The introduction of reuse has a big impact on this hierarchy in the operational organization of the PCP. Figure 3.13 shows the organization after the addition of a platform of shared components. The platform project leader reports directly to the operational manager of the product family. The other platform core team members also report directly to their family counterpart: the platform architect to the family architect, the platform marketing manager to the family marketing manager. The supplier relationship is that the platform delivers to the product; in other words, the product creation is the customer of the platform creation.
Figure 3.14 focuses on the tension created by the sharing of a single platform creation by multiple product creations. Conflicting interests with respect to platform functionality or performance cannot be solved by the individual product creation teams, but are propagated to the family level. At family level the policy is set, which is executed by the platform creation. The platform team has to disappoint one or more of its customers in favor of another customer.
The same problem happens with external suppliers, where the supplier has to satisfy multiple customers. The main difference is that in such a supplier-customer relationship economic rules apply, where a dissatisfied customer will change supplier. The threshold to change supplier in platform driven organizations is
[Figure 3.13: organization of a product family with a shared platform — family operational manager, family marketing manager, and family architect; a platform project leader, platform architect, and platform manager next to the single product project leaders, product architects, and product managers; subsystem and component project leaders, architects, and developers]
[Figure: the family level sets budgets, policies, constraints, and priorities; the platform creation delivers its deliverables to multiple product creations, which are its customers and have conflicting interests]
Figure 3.14: Conflicting interests of customers escalate to family level and have impact on the platform; product creation teams benefit or suffer from the top down induced policy
Projects run without (visible) problems during the decomposition phases. All component builders are happily designing, making and testing their components. When the integration begins, problems become visible. Figure 3.16 visualizes this process. The invisible problems cause a significant delay1.
Combining existing software packages is mostly difficult due to "architectural mismatches": different design approaches with respect to exception handling, resource management, control hierarchy, configuration management et cetera prohibit straightforward merging. The solution is adding lots of code, in the form of wrappers, translators and so on; this additional code adds complexity, but it does not add any end-user value.
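A typical symptom is adapter code that only bridges conventions. A minimal sketch, assuming one package reports failures through return codes while the surrounding system expects exceptions; the names are illustrative, not a real library. The wrapper adds lines and a little run-time cost without adding any end-user value.

    # Architectural mismatch in exception handling: the legacy package returns
    # error codes, while the surrounding system expects exceptions. The adapter
    # below is pure glue -- it adds code and overhead, but no user value.

    class DecoderError(Exception):
        pass

    class LegacyDecoder:                     # style A: integer return codes
        def decode(self, frame: bytes) -> int:
            return 0 if frame else -1        # 0 = ok, negative = error

    class DecoderAdapter:                    # style B: exceptions expected here
        def __init__(self, legacy: LegacyDecoder):
            self._legacy = legacy

        def decode(self, frame: bytes) -> None:
            status = self._legacy.decode(frame)
            if status < 0:
                raise DecoderError(f"legacy decoder failed with status {status}")

    DecoderAdapter(LegacyDecoder()).decode(b"\x00\x01")   # ok; an empty frame would raise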
Performance and resource usage are most often far from optimal after a merger. Amazingly, many people start worrying about duplication of functionality when merging, while this is the least of the problems in practice. This concern is the cause of reuse initiatives which address the wrong (non-existing) problem, duplication, while the serious architectural problems are not addressed.
Creating the solution is a collective effort of many designers and engineers. The architect is mostly guiding the implementation; the actual work is done by the
1 This is also known as the 95% ready syndrome: when the project members declare to be at 95%, actually more than half of the work still needs to be done.
Figure 3.16: Integration problems show up late during the project, as a complete surprise
[Figure: problems of architecture reuse — architectural mismatch (wrappers, translators, conflicting controls), additional code and complexity without added value, poor performance and additional resource usage; duplication is a non-problem]
[Figure: 1. functional decomposition (acquisition, compress, decompress, display, view, play, browse), 2. construction decomposition (encoding, decoding, storage, pipeline, security, persistence, abstraction), 3. allocation onto hardware (tuner, frame buffer, MPEG, DSP, CPU, RAM et cetera)]
[Figure: two platform structures A and B — product specifics for Product 1 to Product n on top of shared subsystem, support, and module layers]
[Figure 3.23: rate of change per view in a dynamic market — customer understanding, specification, design, and implementation change at increasing rates; reuse is "easy" where change is slow and costly where change is fast]
Figure 3.23 shows the CAFCR model at the top. Below it, the rate of change is shown for the different views. The rate of change in the implementation view is very high: all changes from the other views accumulate here, and on top of that the fast change of the technology is added.
Reusing an implementation is like shooting at a fast moving target. The actual benefits might never be harvested, due to obsolescence of the used implementation. The understanding of the customer is quite a valuable resource. Due to the conservative nature of most humans the half-life of this know-how is quite long.
The understanding of the customer is translated into specifications. These specifications have a shorter half-life, due to the competition and the technology developments. Nevertheless reuse of specifications, especially the generic parts, can be very rewarding.
The conceptual view contains the more stable insights of the design. The CAFCR model factors out the concepts on purpose, because concepts are reusable by nature.
[Figure 3.24: simplified process decomposition — multiple more or less independent Product Creation Processes deliver value to the Philips business]
The simplified process description given in figure 3.24 assumes that product
creation processes for multiple products are more or less independent. When
generic developments are factored out for strategic reasons an additional process is
required to visualize this. Figure 3.25 shows the modified process decomposition
(still simplified of course) including this additional process "Generic Something
Creation Process".
[Figure 3.25 and 3.26: the modified process decomposition — policy and planning, the customer oriented process (sales, service, production) generating cash flow for the customer, the PCP enabling tomorrow's cash flow, and a generic components creation process creating strategic assets]
Figure 3.26 shows these processes from the financial point of view. From this point of view the purpose of the additional process is the generation of strategic assets. These assets are used by the product generation process to enable tomorrow's cash-flow.
The consequence of this additional process is a lengthening of the value chain
[Figure 3.27: the lengthened feedback loop — from the PCP via the policy and planning and customer oriented processes (sales, service, production) back to the generic development]
and consequently a longer feedback chain as well. This is shown in figure 3.27.
The increased length of the feedback chain is a significant threat for generic devel-
opments.
Many different models for the development of generic things are in use. An
important differentiating characteristic is the driving force, which often directly
relates to the de facto organization structure. The main flavors of driving forces are
shown in figure 3.28.
[Figure 3.28: driving forces for generic developments — an advanced, demanding lead customer gives good direct feedback; innovate for the specific customer, then refactor to extract generics]
3.9.3 Platform
In maturing product families the generic developments are often decoupled from
the product developments. In products where integration plays a major role (which
are nearly all products) the generic developments are pre-integrated into a platform
or base product, which is released to be used by the product developments.
The benefit of this approach is separation of concerns and the decoupling of products and platform into smaller manageable units. Both benefits are also the main weakness of such a model: as a consequence the feedback loop is stretched to a dangerous length. At the same time the time from feature/technology to market increases, see figure 3.29.
[Figure 3.29: release timing in the platform model — features 1 and 2 pass the platform release, the product integration test, and finally the product release]
[Figure: evaluation questions — Does it satisfy the needs (functionality, performance, user interface)? Does it fit in the constraints (cost price, effort)? Is the quality sufficient (multiplication of problems or multiplication of benefits)?]
Aggregation Levels in
Composable Architectures
Figure 4.1: Venn diagram showing the overlap between viewpoints on aggregation levels
Figure 4.1 shows a Venn diagram with 5 viewpoints with respect to aggre-
gation levels, in the overall context of Product Creation. For every viewpoint the
dominating concerns are mentioned in table 4.1 and the related aggregation levels
or entities in table 4.2.
Viewpoint                Concerns
Documentation            Requirements, Specification, Design, Transfer, Test, Support
Source Code Management   Storage, Management, Generation
Composition              System, Subsystem, Function, Application
Deployment               Releasing, Distribution, Protection, Update, Installation, Configuration
Integration and Test     Confidence, Problem Tracking
4.3 Documentation
Many types of documentation are required when building Product Families by
means of Composable Architectures. The granularity issues with respect to documen-
tation are described in [12].
The aggregation levels for documentation are shown in table 4.2. Figure 4.2
visualizes the documentation concerns. For every level relevant documents should
be produced, with respect to the what (requirements, specifications), how (design),
transfer (to the Customer Oriented Process), verification (test) and how-to (support to use reusable assets in the creation of products).
[Figure 4.2: documentation flow — what is asked for (Requirements) drives what will be realized (Specifications), which drives how it will be realized (Design); the results are consolidated in a transfer to Product Creation and the Customer Oriented Process (Engineering, Support) and verified in a report (Test)]
In what and how documents a selected amount of why needs to be present.
amount of why need to be present.
The documentation structure will evolve over time. This evolution requires explicit refactoring steps in the product family lifecycle. The why, and to a lesser extent the what, will be factored out, because this information is more stable and therefore more re-usable than the how. Part of the information will move "upward" in the aggregation level stack: generic patterns become clear, which are consolidated as abstractions on a higher aggregation level.
[Figure 4.3: aggregation of source code — source files and classes, modules, packages, subsystems, services, and applications, related by "contains", "composed from", "generated from", and "using"]
Figure 4.3: The source code is stored in files in a repository. The unit of structuring
is called a package. These source code aggregation levels get a more semantic
meaning when being used.
The most widely used unit for management and storage of source data is the file. Source code in this context means all original formal descriptions, such as C, C++, include, text, data, and make files. Original means that generated C code does not belong to the source code; the data used for generating this code does belong to the source code.
The provides and requires interface descriptions belong to the source code according to this definition, as do IDL interface definitions; see for example the KOALA component model as described in [26]. Generic subsystem configuration data defining the composition also belongs to the source code.
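The composition data can itself be treated as source code and checked mechanically. A minimal sketch (not KOALA syntax, just an illustration of the idea) in which every component declares its provided and required interfaces and a configuration is verified before building; all component and interface names are hypothetical:

    # Sketch: components declare provides/requires interfaces; a composition
    # is valid when every required interface is provided by some component.

    from dataclasses import dataclass, field

    @dataclass
    class Component:
        name: str
        provides: set = field(default_factory=set)
        requires: set = field(default_factory=set)

    def check_composition(components):
        """Return the set of required interfaces that nobody provides."""
        provided = set().union(*(c.provides for c in components))
        required = set().union(*(c.requires for c in components))
        return required - provided

    config = [
        Component("viewer",  provides={"IView"},     requires={"IImage", "IDatabase"}),
        Component("imaging", provides={"IImage"},    requires=set()),
        Component("storage", provides={"IDatabase"}, requires=set()),
    ]
    print(check_composition(config))   # set() -> the composition is complete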
Most source code needs to be transformed into computer oriented intermediate formats before it can be used at run time. The build step (compilation, building et cetera) required for this transformation may influence the repository structure. A well defined compile time dependency structure is desirable to enable a predictable composition step.
Table 4.3 shows the typical sizes, anno 2000, of source code repositories. The size is expressed in lines of code (loc). Historical data, see the cost models in [3] and [1], show a remarkably constant relationship between lines of code and the manpower required to create and maintain the software. The observed productivity in the Medical Imaging case study was circa 10 kloc per manyear. Taking this number as a zero-order approximation, the size of entities can be transformed into effort.
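As a back-of-the-envelope version of that conversion (using the roughly 10 kloc per manyear observed in the case study; the repository sizes below are illustrative, not the values of table 4.3):

    # Zero-order effort estimate: effort ~ size / 10 kloc per manyear.
    KLOC_PER_MANYEAR = 10

    for name, kloc in [("module", 1), ("subsystem", 50), ("product family", 2000)]:
        print(f"{name:14s} {kloc:5d} kloc -> ~{kloc / KLOC_PER_MANYEAR:6.0f} manyears")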
This simple table illustrates a number of very essential design criteria, in relation
to granularity of management.
Rules of thumb for typical file sizes are:
Figure 4.4: Coarse versus fine grained with respect to the number of connections and relations; 9 large components with 18 connections versus 81 small components with 648 connections
Table 4.4: The relation between the number of components and the required number of architects, zero order model
Table 4.5: The relation between the number of components and the required number of architects, first order model
• lifecycle support
[Figure: cost of bottom up testing versus cost of integration, on arbitrary scales, as a function of building block size and elapsed time]
Integration is a non-exhaustive activity. In the best case the most relevant areas (from a usage and test risk perspective) are touched. This means that the level of confidence obtained by integration decreases with increasing size.
An acceptable level of confidence is only reached by a combination of bottom up testing, integration testing, and intermediate common sense verification steps in between.
4.8 Acknowledgements
This paper has been written as part of the ”composable project”. The project
members are: Pierre America, Hans Jonkers, Jürgen Müller, Henk Obbink, Rob
van Ommering, William van der Sterren, Jan Gerben Wijnstra and Gerrit Muller.
It has been discussed within the team, and the team contributed significantly to the
contents.
Jürgen Müller suggested several improvements with respect to flow, consis-
tency and balance. Wim Vree indicated multiple improvements, amongst others
”local terminology and acronyms”, which have either to be avoided or explained.
[Figure: platform structures of TV, Hybrid TV, and Digital TV — TV applications and 3rd party applications, MHP, Set Top Box functions, TV stack(s), computing infrastructure, TV domain platform, Set Top Box Platform, Digital Video Platform SW, TV domain HW and computing HW]
[Figure: an "All-in-one" combi device — transmitter, receiver, portable media screen, game console, storage, DVR, digital cable set top, PDP display — built from the Digital TV UI, TV and Set Top Box applications, 3rd party functions, stacks, computing infrastructure, TV domain platform, Set Top Box Platform, and storage services; surrounded by audio, CD, telecom, computer, and consumer appliances]
while the right hand side shows some of the environments. The number of useful combinations of functions, form factors and environments is nearly infinite!
In this presentation video entertainment will be used as the application area. Figure 5.4 shows a typical diagram of the setup of video products in our homes. We see products to connect with the outside world (set top box), storage products (the Video Cassette Recorder, abbreviated as VCR), and conventional TVs and remote controls, which are the de facto user interface.
This chain of video products is slowly evolving, as depicted in figure 5.5. In the past a straightforward analog chain was used. The elements in this chain are stepwise changed into digital elements. The introduction of the large flat TVs breaks open the old paradigm of an integrated TV, which integrates the tuner and related electronics with the monitor function.
In the near future many more changes can be expected, such as the introduction of the Digital Video Recorder (DVR), a gateway to alternative broadband solutions, wireless inputs and outputs, and home networks enabling multiple TVs.
The function allocation for this last stage of figure 5.5 and the network topology
can be solved in several ways. Figure 5.6 shows four alternatives, based on a client-
server idiom.
[Figure: ambient intelligence environments and appliances — car, pen, garment, communicator, living room, car navigation, computer games, flat display, DVD player, PC, video recorder]
The expectation is that all alternatives will materialize, where the consumer
chooses a solution which fits his needs and environment.
These alternatives require different packaging of functions into products, as
shown in figure 5.7.
[Figure 5.7: packaging of functions into products — receiver, game console, DVD, VCR, digital cable set top, DVR, PDP, ADSL gateway, DVD-RW, TV2]
10 or more for ad hoc products. Typical values for CE type products are 3 faults per 1000 lines of code. See figure 5.9 for typical values as a function of the year.
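Combined with the growing amount of software, this fault density gives a simple estimate of the number of faults shipped per product. A sketch with purely illustrative code sizes:

    # Rough estimate of residual faults: faults ~ code size x fault density.
    # 3 faults per kloc is the typical CE value quoted above.
    FAULTS_PER_KLOC = 3

    for product, kloc in [("small product", 100), ("large product", 2000)]:
        print(f"{product:13s} {kloc:5d} kloc -> roughly {kloc * FAULTS_PER_KLOC} faults")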
The increase of the amount of software causes many problems. Reuse is often presented as the solution for all these problems. Experience shows that quite the opposite happens in many cases, see [13]; the challenge of executing a successful reuse program is often severely underestimated. The most common root cause of reuse failure is the mistake of seeing reuse as a goal rather than a means.
[Figure 5.6: client-server alternatives for the home network — all-in-one, thin servers, server based, and networked configurations]
Figure 5.12 shows the rationale behind the reuse of existing software packages.
The cumulative effort of the software involved exceeds 500 manyears.
[Figure: "All-in-one" versus "Modular" packaging of transmitter, receiver, portable media screen, and Digital TV functions]
[Figure: Moore's law applied to the amount of software per product, growing from roughly 1 kB via 64 kB to 2 MB between 1965 and 2000 (from the COPA tutorial by Rob van Ommering); the typical number of errors per product and manyears per product grow along with it, with reality outpacing projections]
[Figure: reuse of existing software packages — glue between the Digital TV UI and two Digital Video Platform SW stacks; the problems are architectural mismatch (wrappers, translators, conflicting controls), additional code and complexity without added value, and poor performance with additional resource usage; duplication is a non-problem]
• How well will the architecture evolve to follow the market dynamics?
In every increment to the market both concerns should be addressed, which translates into clear business goals (product, functions, value proposition) and clear refactoring goals fitting in a limited investment. The refactoring goals should be based on a longer term architecture vision, see figure 5.14.
Examples of refactoring goals can be seen in figure 5.15. These refactoring goals should be sufficiently "SMART" to be used as feedback criteria.
Note: many refactoring projects spend lots of effort, while a critical review afterwards does not show any improvement. Often loss of goal or focus is the basis for such a disaster.
[Figure: refactoring goals need feedback on direction, a limited investment, and a basis in a long term architecture vision]
[Figure 5.15: examples of refactoring goals — decrease code size, improve quality, reduce power, memory, and silicon area; the improvement investment as a percentage of the total budget ranges from 0% to 20%; code level items such as new, remove(), raise(exception), and garbage collection]
Unfortunately most people believe in stability and are biased towards stabilizing architectures. Architectures and their implementations are sandwiched between the fast moving market on one side and technology improvements on the other side. Since both sides change quite rapidly, the architecture and its implementation will have to change in response, see figure 6.19.
The evolution of a platform is illustrated in figure 6.22 by showing the change
in the Easyvision [16] platform in the period 1991-1996. It is clearly visible that
every generation doubles the amount of code, while at the same time half of the
existing code base is touched by changes.
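A small arithmetic sketch of what "every generation doubles the amount of code, while half of the existing code base is touched" means for the maintenance load; the starting size is illustrative, not the actual Easyvision figure:

    # Each platform generation: code size doubles, and half of the previously
    # existing code is touched by changes.
    size = 100          # kloc in generation 0 (illustrative)
    for generation in range(1, 4):
        new_code = size              # doubling means adding as much as already existed
        touched = size // 2          # half of the existing code base is changed
        size += new_code
        print(f"gen {generation}: size {size} kloc, "
              f"newly written {new_code} kloc, existing code touched {touched} kloc")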
Long Term Vision
In order to set refactoring goals it is useful to have a long term vision on the
architecture. Such a long term vision may be quite ambitious. The ambition of the
vision will be balanced by the pragmatics of short term business goals and limited
investments in improvement.
Figure 5.20 shows an example of a long term vision, where a framework is
foreseen, which decouples 6 design and implementation concerns:
• applications
• services
• personalization
• configuration
• computing infrastructure
• domain infrastructure
Figure 5.17: Frequent feedback results in faster results and a shorter path to the result
The actual implementation will not have such a level of decoupling for a long time; the penalty in effort, resource usage and many other aspects will be prohibitive. Nevertheless the decoupling will become crucial if the variety of products is really very large and dynamic.
[Figure: reference architecture and framework (1994), with concerns such as configuration and personalization factored out]
[Figure: opportunistic legacy integration of digital cable set top, DVR, DVD(-RW), ADSL gateway, TV2, receiver, game console, and PDP products versus the proclaimed reuse platform — Digital TV UI, TV applications, 3rd party functions, MHP, Set Top Box functions, TV stack(s), computing infrastructure, TV domain platform, Set Top Box Platform, glue, and Digital Video Platform SW]
5.4 Acknowledgements
Lex Heerink patiently listened to the presentation and provided valuable feedback.
[Figure: overall effectiveness = flexibility * manageability]
6.1 Introduction
Architecture is the combination of the know-how of the solution (technology) with understanding of the problem (customer/application). The architect must play an independent role in considering all stakeholders' interests and searching for an effective solution. The fundamental architecting activities are depicted in figure 6.1.
[Figure: component and platform suppliers — Philips Components, Philips Semiconductors, ST, LG, Intel, Microsoft, TI, Micron, Samsung, Liberate]
One of the major trends in this industry is the magic buzzword convergence. Three more or less independent worlds of computers, consumer electronics and telecom are merging, see figure 6.4; functions from one domain can also be performed in the other domains.
The name convergence and the visualisation in figure 6.4 suggest a more uniform set of products, a simplification. However, the opposite is happening. The convergence enables integration of functions which were so far separate for technical reasons. The technical capabilities have increased to a level where the required functionality, performance, form factor and environment together determine the products to be made. Figure 6.5 shows many of today's appliances at the left hand side, many form factors in the middle, and some environments at the right hand side.
[Figure: convergence of the computer, consumer electronics, and telecom domains; ambient intelligence appliances, form factors, and environments — car, pen, garment, communicator, living room, car navigation, computer games, flat display]
Note that making all kinds of combination products, with many different form factors for different environments and different price-performance points, creates a very large diversity of products!
[Figure 6.6: Amazon.com stock price chart (source: BigChart.com, March 19, 2001)]
Another market factor to take into account is the uncertainty of all players in
the value chain. One of the symptoms of this uncertainty is the strong fluctuation
of the stock prices, see figure 6.6.
[Figure: Moore's law applied to the amount of software, growing from roughly 1 kB via 64 kB to 2 MB between 1965 and 2000 (from the COPA tutorial by Rob van Ommering)]
[Figure 6.9: GSM, TV, digital TV, and infrastructure characterized by time to market, volume, and effort, spanning several orders of magnitude]
From a business point of view the products and/or markets of the system integrators can be characterized by time to market, volume, and effort to create. In these 3 dimensions a huge dynamic range needs to be covered. Infrastructure (for instance the last mile to the home) takes a large amount of time to change, due to economic constraints, while new applications and functions need to be introduced quickly (to follow fashion or to respond to a new killer application from the competitor). The volume is preferably large from a manufacturing point of view (economy of scale), while the consumer wants to personalize, to express his identity or community (which means small scale). As mentioned before, the effort to create is increasing exponentially, which means that the effort changes orders of magnitude over decades. Figure 6.9 summarizes these characteristics.
[Figure: the problem space spans many orders of magnitude — volume (units), time to market (months), effort (manyears), performance (operations/s), power (Watt), and storage (Byte) differ strongly between GSM, digital TV, home server, and personalized applications (skins, themes)]
[Figure: the solution space — IP blocks such as 1394, MP3, pSOS, WAP, MIPS, Real, WinCE, Bluetooth, TriMedia, ARM, GPS, motion detector, MPEG decoder, GSM, 802.11, and RF power amplifier, positioned by performance and power]
The Philips product division Semiconductors has many hardware and software solutions available as IP blocks. For a single problem many solutions are available. These solutions differ in their characteristics, such as performance, power and storage. The choice of the solution is driven by the specific product requirements. Figure 6.11 shows a subset of the available solutions and, for 3 specific solutions, their performance and power characterization.
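A sketch of that selection step: given a catalogue of blocks characterized by performance and power (all names and numbers hypothetical, not actual Philips Semiconductors data), pick the lowest-power block that still meets the product's performance requirement.

    # Choosing an IP block by product requirements. Characteristics are
    # hypothetical illustrations.
    blocks = {                      # name: (performance in Mops/s, power in mW)
        "dsp_small":  (50,    30),
        "dsp_medium": (400,  150),
        "media_proc": (4000, 900),
    }

    def select_block(required_performance):
        """Lowest-power block that meets the performance requirement."""
        candidates = [(power, name) for name, (perf, power) in blocks.items()
                      if perf >= required_performance]
        return min(candidates)[1] if candidates else None

    print(select_block(300))    # -> 'dsp_medium'
    print(select_block(10000))  # -> None: no single block suffices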
[Figure: mapping of technologies (Bluetooth, TriMedia, MPEG decoder, TCP/IP, WinCE, pSOS, MIPS, GSM, ARM, 1394, GPS, Real, MP3, RF amplifier) onto systems (watch, communicator, digital TV, set top box, pda, camcorder), marking per system which technologies are required and which are optional; the systems again differ by orders of magnitude in volume, time to market, effort, performance, power, and storage]
[Figure: drivers for a composable architecture — competitive supplier content, performance / cost / power, programmability and configurability, increased flexibility, and solution ingredients, resulting in a family of products with required and optional components]
[Figure: the BAPO model — Business (why), Architecture (what), Process, Organization]
[Figure: architecture decomposition — decomposition (view, play, browse, storage, display, decompress, decoding, pipeline), allocation and infrastructure (tuner, frame buffer, MPEG, DSP, scheduler, OS, CPU, RAM, drivers), and the choice of integrating concepts such as performance, resource usage, and exception handling]
Figure 6.19 explains why this is a myth. A platform is built using technology that is itself changing very fast (Moore's law again). On the other hand a platform serves a dynamic, fast changing market, see section 6.2. In other words, it is a miracle if a platform is stable when neither the supplying side nor the consuming side is stable at all.
The more academically oriented methods propose a "first time right" approach. This sounds plausible: why waste time on wrong implementations first? The practical problem with this type of approach is that it only works in very specific circumstances:
• appropriate skill set (the so-called "100%" instead of "80/20" oriented people)
The first clause is nearly always false for our type of products; remember the dynamic market. The second clause is in practical cases not met (100+ manyear projects), although it might be validly pointed out that the size of the projects is the cause of many problems. The third clause is very difficult to meet: I know only a handful of people fitting this category, none of them making our type of products (for instance professors).
Figure 6.20 shows the relationship between team size and the chance of successfully
following the first time right approach.
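To make the team-size argument concrete, here is a small illustrative model; it is an assumption for illustration, not the model behind figure 6.20. If each of n team members independently gets their part first time right with probability p, the chance that the whole team is first time right is p to the power n, which collapses quickly as the team grows.

    # Illustrative model only (an assumption, not the data behind figure 6.20):
    # whole-team "first time right" chance if each of n members succeeds
    # independently with probability p.
    def first_time_right_chance(team_size, p_individual=0.95):
        return p_individual ** team_size

    for n in (1, 5, 20, 100):
        print(n, round(first_time_right_chance(n), 3))
    # prints: 1 0.95, 5 0.774, 20 0.358, 100 0.006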
Understanding of the problem as well as the solution is key to being effective.
Learning via feedback is a quick way of building up this understanding. Waterfall
methods all suffer from late feedback, see figure 7.14 for a visualization of the
influence of feedback frequency on project elapsed time.
Figure 6.21: Example with different feedback cycles (3, 2, and 1 months) showing
the decrease in time to market with shorter feedback cycles
[Figure 6.23: weight(architecture) = Σ over all rules of weight(rule); weight(rule) = f(level of enforcement, scope, size, level of coupling or number of dependencies); the level of enforcement ranges from guideline via conditional to mandatory]
Figure 6.23 gives a definition for the weight of an architecture. The simple
definition is that the overall weight of an architecture is the sum of the weights of
all rules which together form the architecture.
The weight of a single rule is determined by its level of enforcement, scope (impact),
size, and level of coupling or number of dependencies. Figure 6.23 gives for each of
these parameters a scale from low weight to high weight.
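The definition can be written down directly as a small sketch; the numeric scales below are illustrative assumptions, not values defined in the book.

    # Minimal sketch of the weight definition; the numeric scales are
    # illustrative assumptions, not values from the book.
    ENFORCEMENT = {"guideline": 1, "conditional": 2, "mandatory": 3}
    SCOPE = {"component": 1, "subsystem": 2, "product": 3, "family": 4, "portfolio": 5}

    def rule_weight(rule):
        # weight(rule) = f(level of enforcement, scope, size, coupling)
        return (ENFORCEMENT[rule["enforcement"]]
                * SCOPE[rule["scope"]]
                * rule["size"]
                * (1 + rule["dependencies"]))

    def architecture_weight(rules):
        # weight(architecture) = sum over all rules of weight(rule)
        return sum(rule_weight(r) for r in rules)

    rules = [
        {"enforcement": "guideline", "scope": "subsystem", "size": 1, "dependencies": 0},
        {"enforcement": "mandatory", "scope": "portfolio", "size": 4, "dependencies": 3},
    ]
    print(architecture_weight(rules))  # 2 + 240 = 242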
[Figure 6.24: the scope of a rule ranges from component (small scope, low impact, light-weight) via subsystem, product, and product family up to business or portfolio (large scope, high impact, heavy-weight)]
Figure 6.24 zooms in on the scope parameter to make clear the relation between
the scope of a rule and the consequence for its weight.
For instance a rule like: all the functions in all the products of the portfolio
must (mandatory) return a complete predefined status object as defined in the
[Figure: stakeholders and their concerns: the customer (feedback, being informed, functionality, performance, timely availability, acceptable cost), the business manager (responsiveness, bottom line, future growth), the engineers (guidance, implementation understandability, decoupling, accessibility, solution freedom, product feasibility), and the suppliers (solution freedom); the architecture must be communicable, and its qualities map onto flexibility (evolution, responsiveness, maintenance) and manageability (integration, interoperability, providing control)]
The next step in reaching a judgement is to look at the relation between effec-
tiveness and weight. Figure 6.26 shows this relationship for flexibility (evolution,
responsiveness, maintenance) and for manageability.
[Figure 6.26: overall effectiveness = flexibility * manageability, with flexibility, manageability, and overall effectiveness shown as a function of the weight of the architecture]
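A small sketch of this multiplicative relation; the curves below are assumptions for illustration only, not the curves of figure 6.26. If flexibility decreases with weight while manageability increases with it, the product of the two peaks at an intermediate weight.

    # Illustrative only: assumed monotone curves, not the data of figure 6.26.
    def flexibility(weight):
        return 1.0 / (1.0 + weight)       # heavier architectures restrict solution freedom

    def manageability(weight):
        return weight / (1.0 + weight)    # some rules are needed to keep things manageable

    def effectiveness(weight):
        return flexibility(weight) * manageability(weight)

    best = max((w / 10.0 for w in range(1, 101)), key=effectiveness)
    print(best, round(effectiveness(best), 3))  # optimum near weight 1.0 for these assumed curves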
[Figure: weight(architecture) = Σ over all rules of weight(rule); step 2: minimize the weight per rule; understand your customer, your customer's customer, etcetera]
The next step is to minimize the weight of every individual rule. Every parameter
influencing the weight of a rule must be minimized. The level of enforcement can
be minimized by making as few rules as possible mandatory and by working with
guidelines as much as possible. The scope can be minimized by empowering and
delegating as much as possible; in other words, let component or subsystem architects
make local rules (or, better, guidelines) for their specific scope. The size of a rule is
minimized by leaving out details in the rule itself; short conceptual rules are very
powerful. The level of coupling is minimized by ”designing” the architecture rules.
Especially multi-view architecting helps to cope with the highly complex reality.
One-dimensional decompositions result in highly coupled rules that have to capture
aspects from other dimensions.
Figure 6.29 visualizes the way to minimize the individual weight of a rule.
[Figure 6.29: minimize the size, the level of coupling, and the number of dependencies of a rule by applying design principles to the architecture itself and by multi-view architecting]
Change is normal, stability is the exception.
6.7 Acknowledgements
This presentation has been enabled by the inspiring and critical comments of:
• Jürgen Müller
• Lex Heerink
[Figure: bloating factors and countermeasures: poor specification ("what"), poor design ("how"), provisions for the future, dogmatic rules, and configurability, versus CAFCR, iteration, extensive regression tests, aggressive refactoring, a retirement policy, and the right technology]
7.1 Introduction
Bloating is one of the main causes of the software crisis. Bloating is the unnec-
essary growth of code: the amount of code really needed to solve a problem is
often an order of magnitude less than what the actual solution uses. In other words,
most software-based products contain an order of magnitude more software than is
required. The causes of this excessive amount of software are explored in sections 7.2 and 7.3.
The overall effects of bloating are devastating: increased development, test,
and maintenance costs, degraded performance, increased hardware costs, loss of
overview, et cetera.
[Figure 7.2: the regular function surrounded by boundary behavior (exceptional cases, error handling), diagnostics, tracing, asserts, testing, and instrumentation]
Figure 7.2: Necessary functionality is more than the intended regular function
Note that the core functionality in the center is all the functionality required
to obtain a well-behaved product. This means that it includes much more than the
intended regular function alone.
[Figure: specific implementations without a priori re-use, generic design from scratch, and re-use after refactoring, resulting in an over-generic class]
Altogether the new module is bloated much worse than the old module: shit
propagation and amplification.
Class Old:
    capacity = startCapacity
    values = int(capacity)
    size = 0

Class New:
    capacity = 1
    values = int(capacity)
    size = 0

Class DoubleNew:
    capacity = 1
    values = int(capacity)
    size = 0
[Figure: legend: core functionality is the value; poor specification ("what"), poor design ("how"), dogmatic rules, provisions for the future, and support for unused legacy code are overhead]
[Figure: the bloated functionality decomposed over several modules, each again carrying its own overhead of dogmatic rules, poor specification, poor design, provisions for the future, and unused legacy code]
One of the bloating problems is that bloating causes more bloating, as shown in
figure 7.6. Software engineering principles force us to decompose large modules
into smaller modules. ”Good” modules are somewhere between 100 and 1000 lines
of code. So where the non-bloated functionality fits in one module, the bloated version
is too large and needs to be decomposed into smaller modules. This decomposition
adds some interfacing overhead. Unfortunately the same causes of overhead also
apply to this decomposition overhead, which again means additional code.
All this additional code does not only cost additional development, test, and
maintenance effort, it also has run-time costs: CPU and memory usage. In other
words, the system performance degrades, in some cases also by an order of magnitude.
When the resulting system performance is unacceptable, repair actions are
needed. The most common repair actions involve the creation of even more code:
memory pools, caches, and shortcuts for critical functions. This is shown in figure 7.7.
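Such repair measures are themselves extra code to write, test, and maintain. Below is a minimal sketch of one of them, a cache bolted onto a slow lookup; the function names are hypothetical illustrations, not examples from the book.

    from functools import lru_cache

    def slow_bloated_lookup(key):
        # stands in for a call that crosses many bloated module interfaces
        return key.upper()   # placeholder for an expensive computation

    # Repair action: put a cache in front of the slow path. Note that the cache
    # itself is additional code and additional memory usage, i.e. more bloat
    # introduced to compensate for bloat.
    @lru_cache(maxsize=1024)
    def cached_lookup(key):
        return slow_bloated_lookup(key)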
[Figure 7.7: the special measures added as yet more code on top of the already bloated modules]
[Figure: the core function is only in the order of 20% of the code; the remainder is overhead such as legacy code and organization overhead]
The immediate consequence is that all parameters that are in first approx-
imation proportional to the code size will be reduced by the same factor.
Imagine the impact of having 5 times fewer faults on the reliability or on the time
needed for integration!
The creation crew and the maintenance crew also decrease proportionally, which
eases the communication tremendously. The organization also becomes much
simpler and more direct. The housing demands are smaller: the crew fits in a
smaller location. Figure 7.9 shows the relation between crew size, organization,
and housing.
[Figure 7.9: housing scales from a room via a floor and a building up to a campus as the crew grows]
[Figure 7.10: anti-bloating multiplier: better specification, better design, less code, more expression power, appropriate technology, refactoring, viable redesign, concern locality, and change locality reinforce each other]
If we are able to reverse the trend of bloating, an anti-bloating multiplier effect
will help us, as shown in figure 7.10. Less code helps in many ways to reduce
the code even more: less code enables faster prototyping, which helps to get
early feedback, which in turn improves the specification, and a better specification
reduces the amount of code! Similar circular effects are obtained via the use of
the right technology, via refactoring, and through improved overview.
The same multiplier effect is also present when we are able to reduce the crew
size. Fewer people means easier communication, less distance, less need for bureau-
cratic control, and less organizational overhead, all of which again reduce the number
of people needed!
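The communication argument can be made concrete with the familiar pairwise-channel count (a standard observation, not a figure from the book): a crew of n people has n(n-1)/2 potential communication paths, so halving the crew cuts the paths by roughly a factor of four.

    def communication_paths(crew_size):
        # number of distinct pairs of people that may need to communicate
        return crew_size * (crew_size - 1) // 2

    print(communication_paths(20))  # 190
    print(communication_paths(10))  # 45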
[Figure: countermeasures against bloating: CAFCR, iteration, extensive regression tests, aggressive refactoring, a retirement policy, and the right technology, set against poor specification ("what"), poor design ("how"), provisions for the future, and dogmatic rules]
[Figure: understanding ranges from problem to solution and from system level (usability, bottom line, safety, integration) to detail level (implementation decisions)]
Understanding of the problem as well as the solution is key to being effective. Learning via feedback
is a quick way of building up this understanding. Waterfall methods all suffer
from late feedback, see figure 7.14 for a visualization of the influence of feedback
frequency on project elapsed time.
A more practical way to obtain more powerful and generic solutions is to
start with learning. In practice, after 3 initial implementations (often with some
copy/paste-based reuse), sufficient know-how is available to factor out the generic
part, see figure 7.15.
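A minimal sketch of this ”factor out after three” pattern; the report functions are hypothetical examples, not from the book. Three copy/paste variants make the common structure visible, after which the generic part can be extracted.

    # Hypothetical example: after three near-copies it becomes clear
    # which part is generic and which part is the variation.
    def report_csv(rows):
        return "\n".join(",".join(map(str, row)) for row in rows)

    def report_tsv(rows):
        return "\n".join("\t".join(map(str, row)) for row in rows)

    def report_pipe(rows):
        return "\n".join("|".join(map(str, row)) for row in rows)

    # Factored-out generic part; the variation is reduced to one parameter.
    def report(rows, separator):
        return "\n".join(separator.join(map(str, row)) for row in rows)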
Late feedback also aggravates the bloating process, due to all the additional
measures needed to meet performance or timing needs.
Highly repeatable problems, with small variations, can be addressed by specialized
generators. Development of dedicated toolkits for this class of problems is often
highly efficient in terms of amount of code and cost.
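As a hedged illustration of such a specialized generator (a made-up example, not from the book): a few lines of generator can replace many near-identical, hand-written variants of the same code.

    # Hypothetical example of a tiny specialized generator: it emits
    # near-identical parameter-setter functions from a short description,
    # instead of hand-writing and maintaining each variant.
    FIELDS = [("brightness", 0, 255), ("contrast", 0, 100), ("gamma", 1, 30)]

    TEMPLATE = '''\
    def set_{name}(settings, value):
        if not {lo} <= value <= {hi}:
            raise ValueError("{name} out of range")
        settings["{name}"] = value
    '''

    generated = "\n".join(TEMPLATE.format(name=n, lo=lo, hi=hi) for n, lo, hi in FIELDS)
    exec(generated)                  # in practice the generated code would be written to a file
    settings = {}
    set_brightness(settings, 128)    # one of the generated functions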
[Figure: overall effectiveness = flexibility * manageability]
History
Version: 0.4, date: July 13, 2010 changed by: Gerrit Muller
• Added ”A Method to Explore Synergy between Products”
Version: 0.3, date: June 6, 2003 changed by: Gerrit Muller
• Added ”How to Create a Manageable Platform Architecture?”
Version: 0.2, date: June 6, 2003 changed by: Gerrit Muller
• Added ”Exploration of the bloating of software”
Version: 0.1, date: March 7, 2003 changed by: Gerrit Muller
• Added ”Software Reuse; Caught between strategic importance and practical feasibility” no changelog yet
Version: 0, date: June 13, 2002 changed by: Gerrit Muller
• Created very preliminary bookstructure, no changelog yet