Introduction

(A version of this chapter has been published as “A Brief Introduction to Distributed Systems,” Computing, vol. 98(10):967-1009, 2016.)
The pace at which computer systems change was, is, and continues to be
overwhelming. From 1945, when the modern computer era began, until about
1985, computers were large and expensive. Moreover, for lack of a way to
connect them, these computers operated independently from one another.
Starting in the mid-1980s, however, two advances in technology began to
change that situation. The first was the development of powerful microproces-
sors. Initially, these were 8-bit machines, but soon 16-, 32-, and 64-bit CPUs
became common. With multicore CPUs, we are now facing the challenge
of adapting and developing programs to exploit parallelism. In any case, the
current generation of machines has the computing power of the mainframes
deployed 30 or 40 years ago, but at 1/1000th of the price or less.
The second development was the invention of high-speed computer net-
works. Local-area networks or LANs allow thousands of machines within a
building to be connected in such a way that small amounts of information
can be transferred in a few microseconds or so. Larger amounts of data
can be moved between machines at rates of billions of bits per second (bps).
Wide-area networks or WANs allow hundreds of millions of machines all
over the earth to be connected at speeds varying from tens of thousands to
hundreds of millions of bps.
Parallel to the development of increasingly powerful and networked ma-
chines, we have also been able to witness miniaturization of computer systems
with perhaps the smartphone as the most impressive outcome. Packed with
sensors, lots of memory, and a powerful CPU, these devices are nothing less
than full-fledged computers. Of course, they also have networking capabilities.
Along the same lines, so-called plug computers are finding their way to the
market. These small computers, often the size of a power adapter, can be
plugged directly into an outlet and offer near-desktop performance.
The result of these technologies is that it is now not only feasible, but
easy, to put together a computing system composed of a large number of
networked computers, be they large or small. These computers are generally
geographically dispersed, for which reason they are usually said to form a
distributed system. The size of a distributed system may vary from a handful
of devices, to millions of computers. The interconnection network may be
wired, wireless, or a combination of both. Moreover, distributed systems are
often highly dynamic, in the sense that computers can join and leave, with the
topology and performance of the underlying network almost continuously
changing.
In this chapter, we provide an initial exploration of distributed systems
and their design goals, and follow that up by discussing some well-known
types of systems.
If nodes cannot communicate with each other, then there is no use in putting
them into the same distributed system. In practice, nodes are programmed to
achieve common goals, which are realized by exchanging messages with each
other. A node reacts to incoming messages, processes them, and, in turn,
initiates further communication through message passing.
An important observation is that, as a consequence of dealing with inde-
pendent nodes, each one will have its own notion of time. In other words, we
cannot always assume that there is something like a global clock. This lack
of a common reference of time leads to fundamental questions regarding the
synchronization and coordination within a distributed system, which we will
come to discuss extensively in Chapter 6. The fact that we are dealing with a
collection of nodes implies that we may also need to manage the membership
and organization of that collection. In other words, we may need to register
which nodes may or may not belong to the system, and also provide each
member with a list of nodes it can directly communicate with.
Managing group membership can be exceedingly difficult, if only for
reasons of admission control. To explain, we make a distinction between
open and closed groups. In an open group, any node is allowed to join the
distributed system, effectively meaning that it can send messages to any other
node in the system. In contrast, with a closed group, only the members of
that group can communicate with each other and a separate mechanism is
needed to let a node join or leave the group.
It is not difficult to see that admission control can be hard. First, a
mechanism is needed to authenticate a node, and as we shall see in Chap-
ter 9, if not properly designed, managing authentication can easily create
a scalability bottleneck. Second, each node must, in principle, check if it is
indeed communicating with another group member and not, for example,
with an intruder aiming to create havoc. Finally, considering that a member
can easily communicate with nonmembers, if confidentiality is an issue in the
communication within the distributed system, we may be facing trust issues.
Concerning the organization of the collection, practice shows that a dis-
tributed system is often organized as an overlay network [Tarkoma, 2010]. In
this case, a node is typically a software process equipped with a list of other
processes it can directly send messages to. It may also be the case that a
neighbor first needs to be looked up. Message passing is then done through TCP/IP
or UDP channels, but as we shall see in Chapter 4, higher-level facilities may
be available as well. There are roughly two types of overlay networks:
Structured overlay: In this case, each node has a well-defined set of neighbors
with whom it can communicate. For example, the nodes are organized
in a tree or logical ring.
Unstructured overlay: In these overlays, each node has a number of refer-
ences to randomly selected other nodes.
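To make the contrast concrete, the following is a minimal sketch (all names ours) of how neighbor lists might be built in each case: a logical ring for the structured overlay, and random references for the unstructured one.

```python
import random

def ring_overlay(node_ids):
    # Structured overlay: each node's neighbors are its predecessor
    # and successor on a logical ring.
    n = len(node_ids)
    return {node_ids[i]: [node_ids[(i - 1) % n], node_ids[(i + 1) % n]]
            for i in range(n)}

def random_overlay(node_ids, degree=3):
    # Unstructured overlay: each node keeps references to a few
    # randomly selected other nodes.
    return {v: random.sample([w for w in node_ids if w != v], degree)
            for v in node_ids}

nodes = list(range(8))
print(ring_overlay(nodes)[0])    # always [7, 1]
print(random_overlay(nodes)[0])  # e.g., [3, 6, 2]
```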
Transactions: Many applications make use of multiple services that are dis-
tributed among several computers. Middleware generally offers special
support for executing such services in an all-or-nothing fashion, com-
monly referred to as an atomic transaction. In this case, the application
developer need only specify the remote services involved, and by fol-
lowing a standardized protocol, the middleware makes sure that every
service is invoked, or none at all.
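As an illustration of the all-or-nothing idea, here is a minimal sketch (all names ours). It uses explicit compensation (undo) actions rather than the standardized distributed-commit protocol that transactional middleware actually follows:

```python
def run_all_or_nothing(calls):
    # calls: list of (invoke, compensate) pairs, one per remote service.
    undo_stack = []
    try:
        for invoke, compensate in calls:
            invoke()                      # call the remote service
            undo_stack.append(compensate)
    except Exception:
        # One service failed: undo everything already done, in reverse.
        for compensate in reversed(undo_stack):
            compensate()
        raise
```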
conference, edited by Peter Naur and Brian Randell in October 1968 [Naur and
Randell, 1968]. Indeed, middleware was placed precisely between applications
and service routines (the equivalent of operating systems).
Transparency   Description
Access         Hide differences in data representation and how an object is accessed
Location       Hide where an object is located
Relocation     Hide that an object may be moved to another location while in use
Migration      Hide that an object may move to another location
Replication    Hide that an object is replicated
Concurrency    Hide that an object may be shared by several independent users
Failure        Hide the failure and recovery of an object
that run different operating systems, each having their own file-naming con-
ventions. Differences in naming conventions, differences in file operations, or
differences in how low-level communication with other processes is to take
place, are examples of access issues that should preferably be hidden from
users and applications.
An important group of transparency types concerns the location of a pro-
cess or resource. Location transparency refers to the fact that users cannot
tell where an object is physically located in the system. Naming plays an
important role in achieving location transparency. In particular, location
transparency can often be achieved by assigning only logical names to re-
sources, that is, names in which the location of a resource is not secretly
encoded. An example of such a name is the uniform resource locator (URL)
http://www.prenhall.com/index.html, which gives no clue about the actual
location of Prentice Hall’s main Web server. The URL also gives no clue as
to whether the file index.html has always been at its current location or was
recently moved there. For example, the entire site may have been moved from
one data center to another, yet users should not notice. The latter is an exam-
ple of relocation transparency, which is becoming increasingly important in
the context of cloud computing to which we return later in this chapter.
Where relocation transparency refers to being moved by the distributed
system, migration transparency is offered by a distributed system when it
supports the mobility of processes and resources initiated by users, with-
out affecting ongoing communication and operations. A typical example
is communication between mobile phones: regardless of whether two people
are actually moving, mobile phones will allow them to continue their con-
versation. Other examples that come to mind include online tracking and
tracing of goods as they are being transported from one place to another,
and teleconferencing (partly) using devices that are equipped with mobile
Internet.
As we shall see, replication plays an important role in distributed systems.
For example, resources may be replicated to increase availability or to im-
prove performance by placing a copy close to the place where it is accessed.
Replication transparency deals with hiding the fact that several copies of a
resource exist, or that several processes are operating in some form of lockstep
mode so that one can take over when another fails. To hide replication from
users, it is necessary that all replicas have the same name. Consequently,
a system that supports replication transparency should generally support
location transparency as well, because it would otherwise be impossible to
refer to replicas at different locations.
We already mentioned that an important goal of distributed systems is
to allow sharing of resources. In many cases, sharing resources is done in a
cooperative way, as in the case of communication channels. However, there
are also many examples of competitive sharing of resources. For example,
two independent users may each have stored their files on the same file server
or may be accessing the same tables in a shared database. In such cases, it
is important that each user does not notice that the other is making use of
the same resource. This phenomenon is called concurrency transparency.
An important issue is that concurrent access to a shared resource leaves that
resource in a consistent state. Consistency can be achieved through locking
mechanisms, by which users are, in turn, given exclusive access to the desired
resource. A more refined mechanism is to make use of transactions, but these
may be difficult to implement in a distributed system, notably when scalability
is an issue.
Last, but certainly not least, it is important that a distributed system
provides failure transparency. This means that a user or application does not
notice that some piece of the system fails to work properly, and that the system
subsequently (and automatically) recovers from that failure. Masking failures
is one of the hardest issues in distributed systems and is even impossible
when certain apparently realistic assumptions are made, as we will discuss
in Chapter 8. The main difficulty in masking and transparently recovering
from failures lies in the inability to distinguish between a dead process and a
painfully slowly responding one. For example, when contacting a busy Web
server, a browser will eventually time out and report that the Web page is
unavailable. At that point, the user cannot tell whether the server is actually
down or that the network is badly congested.
system as a whole. In such a case, it may have been better to give up earlier,
or at least let the user cancel the attempts to make contact.
Another example is where we need to guarantee that several replicas,
located on different continents, must be consistent all the time. In other words,
if one copy is changed, that change should be propagated to all copies before
allowing any other operation. It is clear that a single update operation may
now even take seconds to complete, something that cannot be hidden from
users.
Finally, there are situations in which it is not at all obvious that hiding
distribution is a good idea. As distributed systems are expanding to devices
that people carry around and where the very notion of location and context
awareness is becoming increasingly important, it may be best to actually expose
distribution rather than trying to hide it. An obvious example is making use
of location-based services, which can often be found on mobile phones, such
as finding the nearest Chinese take-away or checking whether any of your
friends are nearby.
There are also other arguments against distribution transparency. Recog-
nizing that full distribution transparency is simply impossible, we should ask
ourselves whether it is even wise to pretend that we can achieve it. It may
be much better to make distribution explicit so that the user and applica-
tion developer are never tricked into believing that there is such a thing as
transparency. The result will be that users will much better understand the
(sometimes unexpected) behavior of a distributed system, and are thus much
better prepared to deal with this behavior.
process wanting that data. Moreover, a data item should not be modified in place.
Instead, it can only be updated to a new version. It is not difficult to imagine
that many other problems will surface. However, Wams shows that many existing
applications can be retrofitted to this alternative approach without sacrificing
functionality.
Being open
Another important goal of distributed systems is openness. An open dis-
tributed system is essentially a system that offers components that can easily
be used by, or integrated into other systems. At the same time, an open
distributed system itself will often consist of components that originate from
elsewhere.
As pointed out in Blair and Stefani [1998], completeness and neutrality are
important for interoperability and portability. Interoperability characterizes
the extent to which two implementations of systems or components from
different manufacturers can co-exist and work together by merely relying
on each other’s services as specified by a common standard. Portability
characterizes to what extent an application developed for a distributed system
A can be executed, without modification, on a different distributed system B
that implements the same interfaces as A.
Another important goal for an open distributed system is that it should
be easy to configure the system out of different components (possibly from
different developers). Also, it should be easy to add new components or
replace existing ones without affecting those components that stay in place.
In other words, an open distributed system should also be extensible. For
example, in an extensible system, it should be relatively easy to add parts that
run on a different operating system, or even to replace an entire file system.
Being scalable
For many of us, worldwide connectivity through the Internet is as common
as being able to send a postcard to anyone anywhere around the world.
Moreover, where until recently we were used to having relatively powerful
desktop computers for office applications and storage, we are now witnessing
that such applications and services are being placed in what has been coined
“the cloud,” in turn leading to an increase of much smaller networked devices
such as tablet computers. With this in mind, scalability has become one of the
most important design goals for developers of distributed systems.
Scalability dimensions
Scalability of a system can be measured along at least three different dimen-
sions (see [Neuman, 1994]):
Size scalability: A system can be scalable with respect to its size, meaning
that we can easily add more users and resources to the system without
any noticeable loss of performance.
Geographical scalability: A geographically scalable system is one in which
the users and resources may lie far apart, but the fact that communication
delays may be significant is hardly noticed.
Administrative scalability: An administratively scalable system is one that
can still be easily managed even if it spans many independent adminis-
trative organizations.
Size scalability. When a system needs to scale, very different types of prob-
lems need to be solved. Let us first consider scaling with respect to size.
If more users or resources need to be supported, we are often confronted
with the limitations of centralized services, although often for very different
reasons. For example, many services are centralized in the sense that they
are implemented by means of a single server running on a specific machine
in the distributed system. In a more modern setting, we may have a group
of collaborating servers co-located on a cluster of tightly coupled machines
physically placed at the same location. The problem with this scheme is
obvious: the server, or group of servers, can simply become a bottleneck when
it needs to process an increasing number of requests. To illustrate how this
can happen, let us assume that a service is implemented on a single machine.
In that case there are essentially three root causes for becoming a bottleneck:
the computational capacity, the storage capacity (including the I/O transfer
rate), and the network between the user and the service.
Let us first consider the computational capacity. Just imagine a service for
computing optimal routes taking real-time traffic information into account. It
is not difficult to imagine that this may be primarily a compute-bound service
requiring several (tens of) seconds to complete a request. If there is only a
single machine available, then even a modern high-end system will eventually
run into problems if the number of requests increases beyond a certain point.
Likewise, but for different reasons, we will run into problems when having
a service that is mainly I/O bound. A typical example is a poorly designed
centralized search engine. The problem with content-based search queries is
that we essentially need to match a query against an entire data set. Even
with advanced indexing techniques, we may still face the problem of having
to process a huge amount of data exceeding the main-memory capacity of
the machine running the service. As a consequence, much of the processing
time will be determined by the relatively slow disk accesses and transfer of
data between disk and main memory. Simply adding more or higher-speed
disks will prove not to be a sustainable solution as the number of requests
continues to increase.
Finally, the network between the user and the service may also be the cause
of poor scalability. Just imagine a video-on-demand service that needs to
stream high-quality video to multiple users. A video stream can easily require
a bandwidth of 8 to 10 Mbps, meaning that if a service sets up point-to-point
connections with its customers, it may soon hit the limits of the network
capacity of its own outgoing transmission lines.
There are several solutions to attack size scalability which we discuss
below after having looked into geographical and administrative scalability.
further processing. Strictly speaking, this means that the arrival rate of requests is
not influenced by what is currently in the queue or being processed. Assuming
that the arrival rate of requests is $\lambda$ requests per second, and that the processing
capacity of the service is $\mu$ requests per second, one can compute that the fraction
of time $p_k$ that there are $k$ requests in the system is equal to:
\[
p_k = \left(1 - \frac{\lambda}{\mu}\right)\left(\frac{\lambda}{\mu}\right)^k
\]
If we define the utilization $U$ of a service as the fraction of time that it is busy,
then clearly,
\[
U = \sum_{k>0} p_k = 1 - p_0 = \frac{\lambda}{\mu} \quad\Rightarrow\quad p_k = (1-U)U^k
\]
The average number $N$ of requests in the system is then:
\[
N = \sum_{k \geq 0} k \cdot p_k = \sum_{k \geq 0} k \cdot (1-U)U^k = (1-U)\sum_{k \geq 0} k \cdot U^k = \frac{(1-U)U}{(1-U)^2} = \frac{U}{1-U}.
\]
What we are really interested in is the response time $R$: how long it takes for
the service to process a request, including the time spent in the queue. To that
end, we need the average throughput $X$. Considering that the service is "busy"
when at least one request is being processed, that this then happens with a
throughput of $\mu$ requests per second, and that this occurs during a fraction $U$ of
the total time, we have:
\[
X = \underbrace{U \cdot \mu}_{\text{server at work}} + \underbrace{(1-U) \cdot 0}_{\text{server idle}} = \frac{\lambda}{\mu} \cdot \mu = \lambda
\]
Using Little's formula [Trivedi, 2002], we can then derive the response time as
\[
R = \frac{N}{X} = \frac{S}{1-U} \quad\Rightarrow\quad \frac{R}{S} = \frac{1}{1-U}
\]
where $S = \frac{1}{\mu}$ is the actual service time. Note that if $U$ is very small, the
response-to-service time ratio is close to 1, meaning that a request is virtually
instantly processed, and at the maximum speed possible. However, as soon as the
utilization comes closer to 1, the response-to-service time ratio quickly increases
to very high values, effectively meaning that the system is coming close to a
grinding halt. This is where we see scalability problems emerge. From this simple
model, we can see that the only solution is bringing down the service time $S$. We
leave it as an exercise to the reader to explore how $S$ may be decreased.
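The knee in this relation is easy to see numerically. Below is a small sketch (names ours) of the response-to-service time ratio $R/S = 1/(1-U)$ for increasing utilization:

```python
def response_to_service_ratio(arrival_rate, service_rate):
    # R/S = 1 / (1 - U) with utilization U = arrival_rate / service_rate.
    U = arrival_rate / service_rate
    if U >= 1:
        raise ValueError("utilization must stay below 1")
    return 1 / (1 - U)

for lam in (10, 50, 90, 99):
    print(lam, round(response_to_service_ratio(lam, 100), 1))
# U=0.10 -> 1.1, U=0.50 -> 2.0, U=0.90 -> 10.0, U=0.99 -> 100.0
```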
a single domain can often be trusted by users that operate within that same
domain. In such cases, system administration may have tested and certified
applications, and may have taken special measures to ensure that such com-
ponents cannot be tampered with. In essence, the users trust their system
administrators. However, this trust does not expand naturally across domain
boundaries.
what to expect from such foreign code. The problem, as we shall see in
Chapter 9, is how to enforce those limitations.
As a counterexample of distributed systems spanning multiple adminis-
trative domains that apparently do not suffer from administrative scalability
problems, consider modern file-sharing peer-to-peer networks. In these cases,
end users simply install a program implementing distributed search and
download functions and within minutes can start downloading files. Other ex-
amples include peer-to-peer applications for telephony over the Internet such
as Skype [Baset and Schulzrinne, 2006], and peer-assisted audio-streaming
applications such as Spotify [Kreitz and Niemelä, 2010]. What these dis-
tributed systems have in common is that end users, and not administrative
entities, collaborate to keep the system up and running. At best, underlying
administrative organizations such as Internet Service Providers (ISPs) can
police the network traffic that these peer-to-peer systems cause, but so far
such efforts have not been very effective.
Scaling techniques
Having discussed some of the scalability problems brings us to the question
of how those problems can generally be solved. In most cases, scalability
problems in distributed systems appear as performance problems caused by
limited capacity of servers and network. Simply improving their capacity (e.g.,
by increasing memory, upgrading CPUs, or replacing network modules) is
often a solution, referred to as scaling up. When it comes to scaling out, that
is, expanding the distributed system by essentially deploying more machines,
there are basically only three techniques we can apply: hiding communication
latencies, distribution of work, and replication (see also Neuman [1994]).
Figure 1.4: The difference between letting (a) a server or (b) a client check
forms as they are being filled.
host. Basically, resolving a name means returning the network address of the
associated host. Consider, for example, the name flits.cs.vu.nl. To resolve this
name, it is first passed to the server of zone Z1 (see Figure 1.5) which returns
the address of the server for zone Z2, to which the rest of the name, flits.cs.vu, can
be handed. The server for Z2 will return the address of the server for zone
Z3, which is capable of handling the last part of the name and will return the
address of the associated host.
Figure 1.5: An example of dividing the (original) DNS name space into zones.
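The iterative lookup just described can be mimicked in a few lines. The sketch below uses a toy zone layout loosely following Figure 1.5 (the host address is made up); each zone server returns either a referral to the next zone's server or the final address:

```python
def resolve(name, servers):
    # Process the name label by label, starting at the root zone server.
    labels = name.split(".")[::-1]   # ["nl", "vu", "cs", "flits"]
    server = "Z1"
    for label in labels:
        answer = servers[server][label]
        if isinstance(answer, str) and answer.startswith("Z"):
            server = answer          # referral to the next zone's server
        else:
            return answer            # address of the associated host

servers = {
    "Z1": {"nl": "Z2"},
    "Z2": {"vu": "Z3"},
    "Z3": {"cs": "Z3", "flits": "192.0.2.7"},  # hypothetical address
}
print(resolve("flits.cs.vu.nl", servers))      # 192.0.2.7
```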
Pitfalls
It should be clear by now that developing a distributed system is a formidable
task. As we will see many times throughout this book, there are so many
issues to consider at the same time that it seems that only complexity can
be the result. Nevertheless, by following a number of design principles,
distributed systems can be developed that strongly adhere to the goals we set
out in this chapter.
Distributed systems differ from traditional software because components
are dispersed across a network. Not taking this dispersion into account during
design time is what makes so many systems needlessly complex and results in
flaws that need to be patched later on. Peter Deutsch, at the time working at
Sun Microsystems, formulated these flaws as the following false assumptions
that everyone makes when developing a distributed application for the first
time:

• The network is reliable.
• The network is secure.
• The network is homogeneous.
• The topology does not change.
• Latency is zero.
• Bandwidth is infinite.
• Transport cost is zero.
• There is one administrator.
Note how these assumptions relate to properties that are unique to dis-
tributed systems: reliability, security, heterogeneity, and topology of the
network; latency and bandwidth; transport costs; and finally administrative
domains. When developing nondistributed applications, most of these issues
will most likely not show up.
Most of the principles we discuss in this book relate immediately to these
assumptions. In all cases, we will be discussing solutions to problems that
are caused by the fact that one or more assumptions are false. For example,
reliable networks simply do not exist and lead to the impossibility of achieving
failure transparency. We devote an entire chapter to deal with the fact that
networked communication is inherently insecure. We have already argued
that distributed systems need to be open and take heterogeneity into account.
Likewise, when discussing replication for solving scalability problems, we
are essentially tackling latency and bandwidth problems. We will also touch
upon management issues at various points throughout this book.
Figure 1.6: A comparison between (a) multiprocessor and (b) multicomputer
architectures.
To overcome the limitations of shared-memory systems, high-performance
computing moved to distributed-memory systems. This shift also meant that many
programs had to make use of message passing instead of modifying shared data as
a means of communication and synchronization between threads. Unfortunately,
message-passing models have proven to be much more difficult and error-prone
compared to the shared-memory programming models. For this reason, there
has been significant research in attempting to build so-called distributed shared-
memory multicomputers, or simply DSM systems [Amza et al., 1996].
In essence, a DSM system allows a processor to address a memory location
at another computer as if it were local memory. This can be achieved using
existing techniques available to the operating system, for example, by mapping all
main-memory pages of the various processors into a single virtual address space.
Whenever a processor A addresses a page located at another processor B, a page
fault occurs at A allowing the operating system at A to fetch the content of the
referenced page at B in the same way that it would normally fetch it locally from
disk. At the same time, processor B would be informed that the page is currently
not accessible.
This elegant idea of mimicking shared-memory systems using multicomputers
eventually had to be abandoned for the simple reason that performance could
never meet the expectations of programmers, who would rather resort to the far
more intricate, yet predictably better performing, message-passing programming
models.
An important side-effect of exploring the hardware-software boundaries of
parallel processing is a thorough understanding of consistency models, to which
we return extensively in Chapter 7.
Cluster computing
compute nodes with dedicated, lightweight operating systems will most likely
provide optimal performance for compute-intensive applications. Likewise,
storage functionality can most likely be optimally handled by other specially
configured nodes such as file and directory servers. The same holds for other
dedicated middleware services, including job management, database services,
and perhaps general Internet access to external services.
Grid computing
A characteristic feature of traditional cluster computing is its homogeneity.
In most cases, the computers in a cluster are largely the same, have the
same operating system, and are all connected through the same network.
However, as we just discussed, there has been a trend towards more hybrid
architectures in which nodes are specifically configured for certain tasks. This
diversity is even more prevalent in grid computing systems: no assumptions
are made concerning similarity of hardware, operating systems, networks,
administrative domains, security policies, etc.
A key issue in a grid-computing system is that resources from different
organizations are brought together to allow the collaboration of a group of
people from different institutions, indeed forming a federation of systems.
Such a collaboration is realized in the form of a virtual organization. The
processes belonging to the same virtual organization have access rights to the
resources that are provided to that organization. Typically, resources consist of
compute servers (including supercomputers, possibly implemented as cluster
computers), storage facilities, and databases. In addition, special networked
devices such as telescopes, sensors, etc., can be provided as well.
Given its nature, much of the software for realizing grid computing revolves
around providing access to resources from different administrative domains,
and to only those users and applications that belong to a specific virtual
organization. For this reason, focus is often on architectural issues. An
architecture initially proposed by Foster et al. [2001] is shown in Figure 1.8,
which still forms the basis for many grid computing systems.
The architecture consists of four layers. The lowest fabric layer provides
interfaces to local resources at a specific site. Note that these interfaces are
tailored to allow sharing of resources within a virtual organization. Typically,
they will provide functions for querying the state and capabilities of a resource,
along with functions for actual resource management (e.g., locking resources).
The connectivity layer consists of communication protocols for supporting
grid transactions that span the usage of multiple resources. For example,
protocols are needed to transfer data between resources, or to simply access
a resource from a remote location. In addition, the connectivity layer will
contain security protocols to authenticate users and resources. Note that in
many cases human users are not authenticated; instead, programs acting on
behalf of the users are authenticated. In this sense, delegating rights from
a user to programs is an important function that needs to be supported in
the connectivity layer. We return to delegation when discussing security in
distributed systems in Chapter 9.
The resource layer is responsible for managing a single resource. It uses the
functions provided by the connectivity layer and calls directly the interfaces
made available by the fabric layer. For example, this layer will offer functions
for obtaining configuration information on a specific resource, or, in general,
to perform specific operations such as creating a process or reading data. The
resource layer is thus seen to be responsible for access control, and hence will
rely on the authentication performed as part of the connectivity layer.
The next layer in the hierarchy is the collective layer. It deals with handling
access to multiple resources and typically consists of services for resource
discovery, allocation and scheduling of tasks onto multiple resources, data
replication, and so on. Unlike the connectivity and resource layer, each
consisting of a relatively small, standard collection of protocols, the collective
layer may consist of many different protocols reflecting the broad spectrum of
services it may offer to a virtual organization.
Finally, the application layer consists of the applications that operate within a
virtual organization and which make use of the grid computing environment.
Typically the collective, connectivity, and resource layers form the heart
what could be called a grid middleware layer. These layers jointly provide
access to and management of resources that are potentially dispersed across
multiple sites.
An important observation from a middleware perspective is that in grid
computing the notion of a site (or administrative unit) is common. This
prevalence is emphasized by the gradual shift toward a service-oriented ar-
chitecture in which sites offer access to the various layers through a collection
of Web services [Joseph et al., 2004]. This, by now, has led to the definition
of an alternative architecture known as the Open Grid Services Architecture
(OGSA) [Foster et al., 2006]. OGSA is based upon the original ideas as for-
mulated by Foster et al. [2001], yet has since gone through a standardization process.
Cloud computing
While researchers were pondering on how to organize computational grids
that were easily accessible, organizations in charge of running data centers
were facing the problem of opening up their resources to customers. Eventu-
ally, this led to the concept of utility computing by which a customer could
upload tasks to a data center and be charged on a per-resource basis. Utility
computing formed the basis for what is now called cloud computing.
Following Vaquero et al. [2008], cloud computing is characterized by an
easily usable and accessible pool of virtualized resources. Which and how
resources are used can be configured dynamically, providing the basis for
scalability: if more work needs to be done, a customer can simply acquire
more resources. The link to utility computing is formed by the fact that cloud
computing is generally based on a pay-per-use model in which guarantees
are offered by means of customized service level agreements (SLAs).
Figure 1.9: The organization of clouds (adapted from Zhang et al. [2010]).
In practice, clouds are organized into four layers, as shown in Figure 1.9
(see also Zhang et al. [2010]):
Hardware: The lowest layer is formed by the means to manage the necessary
hardware: processors, routers, but also power and cooling systems. It is
generally implemented at data centers and contains the resources that
customers normally never get to see directly.

Infrastructure: This layer provides customers with virtual storage and com-
puting resources, typically by deploying virtualization techniques on top
of the hardware layer.
Platform: One could argue that the platform layer provides to a cloud-
computing customer what an operating system provides to application
developers, namely the means to easily develop and deploy applications
that need to run in a cloud. In practice, an application developer is
offered a vendor-specific API, which includes calls for uploading and ex-
ecuting a program in that vendor’s cloud. In a sense, this is comparable
to the Unix exec family of system calls, which take an executable file as
parameter and pass it to the operating system to be executed.
Also like operating systems, the platform layer provides higher-level
abstractions for storage and such. For example, as we discuss in more
detail later, the Amazon S3 storage system [Murty, 2008] is offered to the
application developer in the form of an API allowing (locally created)
files to be organized and stored in buckets. A bucket is somewhat
comparable to a directory. By storing a file in a bucket, that file is
automatically uploaded to the Amazon cloud.
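A minimal sketch of this bucket model, assuming the boto3 library (a widely used Python client for S3) and made-up bucket and file names; credentials and region configuration are left out:

```python
import boto3

s3 = boto3.client("s3")

# A bucket is somewhat comparable to a directory; outside us-east-1 a
# CreateBucketConfiguration with a LocationConstraint is also required.
s3.create_bucket(Bucket="my-example-bucket")

# Storing a (locally created) file in the bucket uploads it to the cloud.
s3.upload_file("report.txt", "my-example-bucket", "reports/report.txt")
```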
Application: Actual applications run in this layer and are offered to users
for further customization. Well-known examples include those found
in office suites (text processors, spreadsheet applications, presentation
applications, and so on). It is important to realize that these applica-
tions are again executed in the vendor’s cloud. As before, they can be
compared to the traditional suite of applications that are shipped when
installing an operating system.
Let us now look into the benefits and Internet costs of a migration plan.
Benefits For each migration plan M, one can expect to have monetary savings
expressed as Benefits(M), because fewer machines or network connections need to
be maintained. In many organizations, such costs are known so that it may be
relatively simple to compute the savings. On the other hand, there are also costs
incurred in using the cloud. Hajjat et al. [2010] make a simplifying distinction
between the benefit $B_c$ of migrating a compute-intensive component, and the
benefit $B_s$ of migrating a storage-intensive component. If there are $M_c$ compute-intensive
and $M_s$ storage-intensive components, we have $\mathrm{Benefits}(M) = B_c \cdot M_c + B_s \cdot M_s$.
Obviously, much more sophisticated models can be deployed as well.
Internet costs To compute the increased communication costs because com-
ponents are spread across the cloud as well as the local infrastructure, we need
to take user-initiated requests into account. To simplify matters, we make no
distinction between internal users (i.e., members of the enterprise), and external
users (as one would see in the case of Web applications). Traffic from users before
migration can be expressed as:
\[
\mathrm{Tr}_{local,inet} = \sum_{C_i} \left(T_{user,i} \cdot S_{user,i} + T_{i,user} \cdot S_{i,user}\right)
\]
where $T_{user,i}$ denotes the number of transactions per time unit leading to data
flowing from users to $C_i$. We have analogous interpretations for $T_{i,user}$, $S_{user,i}$, and
$S_{i,user}$.
For each component $C_i$, let $C_{i,local}$ denote the servers that continue to operate
on the local infrastructure, and $C_{i,cloud}$ its servers that are placed in the cloud. Note
that $|C_{i,cloud}| = n_i$. For simplicity, assume that a server from $C_{i,local}$ distributes
traffic in the same proportions as a server from $C_{i,cloud}$. We are interested in
the rate of transactions between local servers, cloud servers, and between local
and cloud servers, after migration. Let $s_k$ be the server for component $C_k$ and
denote by $f_k$ the fraction $n_k/N_k$. We then have for the rate of transactions $T^{*}_{i,j}$ after
migration:
\[
T^{*}_{i,j} =
\begin{cases}
(1-f_i)\cdot(1-f_j)\cdot T_{i,j} & \text{when } s_i \in C_{i,local} \text{ and } s_j \in C_{j,local}\\
(1-f_i)\cdot f_j\cdot T_{i,j} & \text{when } s_i \in C_{i,local} \text{ and } s_j \in C_{j,cloud}\\
f_i\cdot(1-f_j)\cdot T_{i,j} & \text{when } s_i \in C_{i,cloud} \text{ and } s_j \in C_{j,local}\\
f_i\cdot f_j\cdot T_{i,j} & \text{when } s_i \in C_{i,cloud} \text{ and } s_j \in C_{j,cloud}
\end{cases}
\]
$S^{*}_{i,j}$ is the amount of data associated with $T^{*}_{i,j}$. Note that $f_k$ denotes the fraction of
servers of component $C_k$ that are moved to the cloud. In other words, $(1-f_k)$ is
the fraction that stays in the local infrastructure. We leave it to the reader to give
an expression for $T^{*}_{i,user}$.
Finally, let $\mathrm{cost}_{local,inet}$ and $\mathrm{cost}_{cloud,inet}$ denote the per-unit Internet costs for
traffic to and from the local infrastructure and cloud, respectively. Ignoring a few
subtleties explained in [Hajjat et al., 2010], we can then compute the local Internet
traffic after migration as:
\[
\mathrm{Tr}^{*}_{local,inet} = \sum_{C_{i,local},\,C_{j,local}} \left(T^{*}_{i,j} S^{*}_{i,j} + T^{*}_{j,i} S^{*}_{j,i}\right) + \sum_{C_{j,local}} \left(T^{*}_{user,j} S^{*}_{user,j} + T^{*}_{j,user} S^{*}_{j,user}\right)
\]
Together, this leads to a model for the increase in Internet communication costs:
\[
\mathrm{cost}_{local,inet} \cdot \left(\mathrm{Tr}^{*}_{local,inet} - \mathrm{Tr}_{local,inet}\right) + \mathrm{cost}_{cloud,inet} \cdot \mathrm{Tr}^{*}_{cloud,inet}
\]
Clearly, answering the question whether moving to the cloud is cheaper requires a
lot of detailed information and careful planning of exactly what to migrate. Hajjat
et al. [2010] provide a first step toward making an informed decision. Their model
is more detailed than we are willing to explain here. An important aspect that
we have not touched upon is that migrating components also means that special
attention will have to be paid to migrating security components. The interested
reader is referred to their paper.
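To make the four-case split above concrete, here is a small sketch (all names ours) that computes how a transaction rate $T_{i,j}$ is divided over local and cloud server pairs:

```python
def split_rate(T_ij, f_i, f_j):
    # f_i, f_j: fractions of the servers of components Ci and Cj that
    # have been moved to the cloud; traffic splits proportionally.
    return {
        ("local", "local"): (1 - f_i) * (1 - f_j) * T_ij,
        ("local", "cloud"): (1 - f_i) * f_j * T_ij,
        ("cloud", "local"): f_i * (1 - f_j) * T_ij,
        ("cloud", "cloud"): f_i * f_j * T_ij,
    }

# 1000 transactions/s, half of Ci and a quarter of Cj in the cloud:
print(split_rate(1000, 0.5, 0.25))
```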
Primitive Description
BEGIN_TRANSACTION Mark the start of a transaction
END_TRANSACTION Terminate the transaction and try to commit
ABORT_TRANSACTION Kill the transaction and restore the old values
READ Read data from a file, a table, or otherwise
WRITE Write data to a file, a table, or otherwise
commits, making its results visible to the parent transaction. After further
computation, the parent aborts, restoring the entire system to the state it
had before the top-level transaction started. Consequently, the results of
the subtransaction that committed must nevertheless be undone. Thus the
permanence referred to above applies only to top-level transactions.
Since transactions can be nested arbitrarily deep, considerable administra-
tion is needed to get everything right. The semantics are clear, however. When
any transaction or subtransaction starts, it is conceptually given a private copy
of all data in the entire system for it to manipulate as it wishes. If it aborts,
its private universe just vanishes, as if it had never existed. If it commits,
its private universe replaces the parent’s universe. Thus if a subtransaction
commits and then later a new subtransaction is started, the second one sees
the results produced by the first one. Likewise, if an enclosing (higher level)
transaction aborts, all its underlying subtransactions have to be aborted as
well. And if several transactions are started concurrently, the result is as if
they ran sequentially in some unspecified order.
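The private-universe semantics can be captured in a toy sketch (all names ours): each (sub)transaction works on a copy of its parent's data, and committing replaces the parent's universe:

```python
import copy

class Transaction:
    # Each (sub)transaction gets a private copy of its parent's data.
    def __init__(self, parent_data):
        self.parent_data = parent_data
        self.data = copy.deepcopy(parent_data)

    def commit(self):
        # The private universe replaces the parent's universe.
        self.parent_data.clear()
        self.parent_data.update(self.data)
    # Abort: simply discard self.data, as if it never existed.

db = {"seat": "free"}
t = Transaction(db)          # top-level transaction
sub = Transaction(t.data)    # subtransaction sees parent's universe
sub.data["seat"] = "booked"
sub.commit()                 # visible to the parent only
print(db)                    # {'seat': 'free'}: not yet permanent
t.commit()
print(db)                    # {'seat': 'booked'}
```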
Nested transactions are important in distributed systems, for they provide
a natural way of distributing a transaction across multiple machines. They
follow a logical division of the work of the original transaction. For example,
a transaction for planning a trip by which three different flights need to be
reserved can be logically split up into three subtransactions. Each of these
subtransactions can be managed separately and independently of the other
two.
In the early days of enterprise middleware systems, the component that
handled distributed (or nested) transactions formed the core for integrating
applications at the server or database level. This component was called a
transaction processing monitor or TP monitor for short. Its main task was
to allow an application to access multiple servers/databases by offering it a
transactional programming model, as shown in Figure 1.12. Essentially, the TP
monitor coordinated the commitment of subtransactions following a standard
protocol known as distributed commit, which we discuss in Section 8.5.
• File format and layout: text, binary, its structure, and so on. Nowadays,
XML has become popular as its files are, in principle, self-describing.
• File management: where are they stored, how are they named, who is
responsible for deleting files?
Shared database: Many of the problems associated with integration through files
are alleviated when using a shared database. All applications will have ac-
cess to the same data, and often through a high-level language such as SQL.
Also, it is easy to notify applications when changes occur, as triggers are
often part of modern databases. There are, however, two major drawbacks.
First, there is still a need to design a common data schema, which may be
far from trivial if the set of applications that need to be integrated is not
completely known in advance. Second, when there are many reads and
updates, a shared database can easily become a performance bottleneck.
Remote procedure call: Integration through files or a database implicitly as-
sumes that changes by one application can easily trigger other applications
to take action. However, practice shows that sometimes small changes
should actually trigger many applications to take actions. In such cases,
it is not really the change of data that is important, but the execution of a
series of actions.
Series of actions are best captured through the execution of a procedure
(which may, in turn, lead to all kinds of changes in shared data). To
avoid having every application know all the internals of those
actions (as implemented by another application), standard encapsulation
techniques should be used, as deployed with traditional procedure calls
or object invocations. For such situations, an application can best offer
a procedure to other applications in the form of a remote procedure call,
or RPC. In essence, an RPC allows an application A to make use of the
information available only to application B, without giving A direct access
to that information (a minimal sketch follows this list). There are many
advantages and disadvantages to remote procedure calls, which are discussed
in depth in Chapter 4.
Messaging: A main drawback of RPCs is that caller and callee need to be up
and running at the same time in order for the call to succeed. However, in
many scenarios this simultaneous activity is often difficult or impossible
to guarantee. In such cases, offering a messaging system carrying requests
from application A to perform an action at application B, is what is needed.
The messaging system ensures that eventually the request is delivered,
and if needed, that a response is eventually returned as well. Obviously,
messaging is not a panacea for application integration: it also introduces
problems concerning data formatting and layout, it requires an application
to know where to send a message to, there need to be scenarios for dealing
with lost messages, and so on. Like RPCs, we will be discussing these
issues extensively in Chapter 4.
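As promised in the RPC item above, here is a minimal sketch using Python's standard xmlrpc modules (service name, data, and port are made up). Application B exposes a procedure; application A invokes it without direct access to B's data:

```python
from xmlrpc.server import SimpleXMLRPCServer

def lookup_customer(customer_id):
    # Application B's internal data, not directly accessible to A.
    return {"id": customer_id, "status": "active"}

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(lookup_customer)
server.serve_forever()

# Application A, running elsewhere, would call:
#   from xmlrpc.client import ServerProxy
#   ServerProxy("http://localhost:8000").lookup_customer(42)
```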
What these four approaches tell us is that application integration will generally
not be simple. Middleware (in the form of a distributed system), however, can
significantly help in integration by providing the right facilities such as support
for RPCs or messaging. As said, enterprise application integration is an important
target field for many middleware products.
Pervasive systems
The distributed systems discussed so far are largely characterized by their
stability: nodes are fixed and have a more or less permanent and high-quality
connection to a network. To a certain extent, this stability is realized through
the various techniques for achieving distribution transparency. For example,
there are many ways in which we can create the illusion that only occasionally
components may fail. Likewise, there are all kinds of means to hide the actual
network location of a node, effectively allowing users and applications to
believe that nodes stay put.
However, matters have changed since the introduction of mobile and
embedded computing devices, leading to what are generally referred to as
pervasive systems. As the name suggests, pervasive systems are intended to
naturally blend into our environment. They are naturally also distributed
systems, and certainly meet the characterization we gave in Section 1.1.
What makes them unique in comparison to the computing and information
systems described so far, is that the separation between users and system
components is much more blurred. There is often no single dedicated interface,
such as a screen/keyboard combination. Instead, a pervasive system is often
equipped with many sensors that pick up various aspects of a user’s behavior.
Likewise, it may have a myriad of actuators to provide information and
feedback, often even purposefully aiming to steer behavior.
Many devices in pervasive systems are characterized by being small,
battery-powered, mobile, and having only a wireless connection, although
not all these characteristics apply to all devices. These are not necessarily
restrictive characteristics, as is illustrated by smartphones [Roussos et al., 2005]
and their role in what is now coined as the Internet of Things [Mattern and
Floerkemeier, 2010; Stankovic, 2014]. Nevertheless, notably the fact that we
often need to deal with the intricacies of wireless and mobile communication,
will require special solutions to make a pervasive system as transparent or
unobtrusive as possible.
In the following, we make a distinction between three different types of
pervasive systems, although there is considerable overlap between the three
types: ubiquitous computing systems, mobile systems, and sensor networks.
This distinction allows us to focus on different aspects of pervasive systems.
Ad. 3: Context awareness. Reacting to sensory input, but also to explicit
input from users, is more easily said than done. What a ubiquitous computing
system needs to do, is to take the context in which interactions take place
into account. Context awareness also differentiates ubiquitous computing
systems from the more traditional systems we have been discussing before,
and is described by Dey and Abowd [2000] as “any information that can be
used to characterize the situation of entities (i.e., whether a person, place or
object) that are considered relevant to the interaction between a user and an
application, including the user and the application themselves.” In practice,
context is often characterized by location, identity, time, and activity: the where,
who, when, and what. A system will need to have the necessary (sensory) input
to determine one or several of these context types.
What is important from a distributed-systems perspective, is that raw data
as collected by various sensors is lifted to a level of abstraction that can be
used by applications. A concrete example is detecting where a person is,
for example in terms of GPS coordinates, and subsequently mapping that
information to an actual location, such as the corner of a street, or a specific
shop or other known facility. The question is where this processing of sensory
input takes place: is all data collected at a central server connected to a
database with detailed information on a city, or is it the user’s smartphone
where the mapping is done? Clearly, there are trade-offs to be considered.
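A toy sketch (all names and coordinates ours) of this lifting step, snapping raw GPS coordinates to the nearest known place, could run either on a central server or on the smartphone itself:

```python
def nearest_place(lat, lon, places):
    # Lift raw coordinates to an application-level location by snapping
    # to the nearest known place (toy flat-earth distance metric).
    return min(places, key=lambda p: (p[1] - lat)**2 + (p[2] - lon)**2)[0]

places = [("corner of Main St", 52.334, 4.868),
          ("bookshop", 52.336, 4.865)]          # hypothetical map data
print(nearest_place(52.335, 4.866, places))     # bookshop
```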
Dey [2010] discusses more general approaches toward building context-
aware applications. When it comes to combining flexibility and potential
distribution, so-called shared data spaces in which processes are decoupled
in time and space are attractive, yet as we shall see in later chapters, suffer
from scalability problems. A survey on context-awareness and its relation to
middleware and distributed systems is provided by Baldauf et al. [2007].
Admittedly, these are very simple examples, but the picture should be clear
that manual intervention is to be kept to a minimum. We will be discussing
many techniques related to self-management in detail throughout the book.
to pervasive systems in general (see also Adelstein et al. [2005] and Tarkoma
and Kangasharju [2009]).
First, the devices that form part of a (distributed) mobile system may
vary widely. Typically, mobile computing is now done with devices such
as smartphones and tablet computers. However, completely different types
of devices are now using the Internet Protocol (IP) to communicate, placing
mobile computing in a different perspective. Such devices include remote
controls, pagers, active badges, car equipment, various GPS-enabled devices,
and so on. A characteristic feature of all these devices is that they use wireless
communication. Mobile implies wireless, or so it seems (although there are
exceptions to the rule).
Second, in mobile computing the location of a device is assumed to change
over time. A changing location has its effects on many issues. For example, if
the location of a device changes regularly, so will perhaps the services that
are locally available. As a consequence, we may need to pay special attention
to dynamically discovering services, but also letting services announce their
presence. In a similar vein, we often also want to know where a device actually
is. This may mean that we need to know the actual geographical coordinates
of a device such as in tracking and tracing applications, but it may also require
that we are able to simply detect its network position (as in mobile IP [Perkins,
2010; Perkins et al., 2011]).
Changing locations also has a profound effect on communication. To
illustrate, consider a (wireless) mobile ad hoc network, generally abbreviated
as a MANET. Suppose that two devices in a MANET have discovered each
other in the sense that they know each other’s network address. How do we
route messages between the two? Static routes are generally not sustainable
as nodes along the routing path can easily move out of their neighbor’s range,
invalidating the path. For large MANETs, using a priori set-up paths is not
a viable option. What we are dealing with here are so-called disruption-
tolerant networks: networks in which connectivity between two nodes can
simply not be guaranteed. Getting a message from one node to another may
then be problematic, to say the least.
The trick in such cases, is not to attempt to set up a communication path
from the source to the destination, but to rely on two principles. First, as we
will discuss in Section 4.4, using special flooding-based techniques will allow
a message to gradually spread through a part of the network, to eventually
reach the destination. Obviously, any type of flooding will impose redundant
communication, but this may be the price we have to pay. Second, in a
disruption-tolerant network, we let an intermediate node store a received
message until it encounters another node to which it can pass it on. In other
words, a node becomes a temporary carrier of a message, as sketched in
Figure 1.14. Eventually, the message should reach its destination.
It is not difficult to imagine that selectively passing messages to encoun-
tered nodes may help to ensure efficient delivery. For example, if nodes are
known to belong to a certain class, and the source and destination belong to
the same class, we may decide to pass messages only among nodes in that
class. Likewise, it may prove efficient to pass messages only to well-connected
nodes, that is, nodes that have been in range of many other nodes in the
recent past. An overview is provided by Spyropoulos et al. [2010].
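A minimal sketch (names ours) of this store-carry-forward behavior: every node that carries the message hands a copy to whichever node it encounters, until the destination is among the carriers:

```python
def spread(carriers, encounters):
    # carriers: nodes currently storing the message;
    # encounters: (a, b) pairs of nodes that meet, in chronological order.
    carriers = set(carriers)
    for a, b in encounters:
        if a in carriers or b in carriers:
            carriers |= {a, b}   # the message is passed on
    return carriers

# src meets n1, n1 later meets n2, n2 finally meets dst:
print(spread({"src"}, [("src", "n1"), ("n1", "n2"), ("n2", "dst")]))
```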
Sensor networks
Our last example of pervasive systems is sensor networks. These networks in
many cases form part of the enabling technology for pervasiveness and we
see that many solutions for sensor networks return in pervasive applications.
What makes sensor networks interesting from a distributed system’s perspec-
tive is that they are more than just a collection of input devices. Instead, as
we shall see, sensor nodes often collaborate to efficiently process the sensed
data in an application-specific manner, making them very different from, for
example, traditional computer networks. Akyildiz et al. [2002] and Akyildiz
et al. [2005] provide an overview from a networking perspective. A more
systems-oriented introduction to sensor networks is given by Zhao and Guibas
[2004], but Karl and Willig [2005] will also prove useful.
A sensor network generally consists of tens to hundreds or thousands of
relatively small nodes, each equipped with one or more sensing devices. In
addition, nodes can often act as actuators [Akyildiz and Kasimoglu, 2004],
a typical example being the automatic activation of sprinklers when a fire
has been detected. Many sensor networks use wireless communication, and
the nodes are often battery powered. Their limited resources, restricted
communication capabilities, and constrained power consumption demand
that efficiency is high on the list of design criteria.
When zooming into an individual node, we see that, conceptually, it does
not differ much from a “normal” computer: above the hardware there is a
software layer akin to what traditional operating systems offer, including low-
level network access.
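The walkthrough below refers to a four-line code fragment that is not reproduced here; the following reconstruction is pieced together from that walkthrough, with the abstract-regions primitives (k_nearest_region, putvar, reduce, OP_MAXID) assumed rather than defined, so treat it as pseudocode:

```python
region = k_nearest_region.create(8)            # line 1: eight nearest neighbors
reading = get_sensor_reading()                 # line 2: fetch local sensor value
region.putvar(reading_key, reading)            # line 3: share it within the region
max_id = region.reduce(OP_MAXID, reading_key)  # line 4: node with largest reading
```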
In line 1, a node first creates a region of its eight nearest neighbors, after which
it fetches a value from its sensor(s). This reading is subsequently written to
the previously defined region using the key reading_key. In
line 4, the node checks whose sensor reading in the defined region was the
largest, which is returned in the variable max_id.
As another related example, consider a sensor network as implementing a
distributed database, which is, according to Mottola and Picco [2011], one of
four possible ways of accessing data. This database view is quite common and
easy to understand when realizing that many sensor networks are deployed
for measurement and surveillance applications [Bonnet et al., 2002]. In these
cases, an operator would like to extract information from (a part of) the
network by simply issuing queries such as “What is the northbound traffic
load on highway 1 at Santa Cruz?” Such queries resemble those of traditional
databases. In this case, the answer will probably need to be provided through
collaboration of many sensors along highway 1, while leaving other sensors
untouched.
To organize a sensor network as a distributed database, there are essentially
two extremes, as shown in Figure 1.16. First, sensors do not cooperate but
simply send their data to a centralized database located at the operator’s site.
The other extreme is to forward queries to relevant sensors and to let each
compute an answer, requiring the operator to aggregate the responses.
Figure 1.16: Organizing a sensor network database, while storing and processing data (a) only at the operator’s site or (b) only at the sensors.
Neither of these solutions is very attractive. The first one requires that
sensors send all their measured data through the network, which may waste
network resources and energy. The second solution may also be wasteful, as
it ignores the aggregation capabilities of the sensors, which would allow much
less data to be returned to the operator. What is needed are facilities for in-
network data processing, similar to the previous example of abstract regions.
In-network processing can be done in numerous ways. One obvious one is
to forward a query to all sensor nodes along a tree encompassing all nodes
and to subsequently aggregate the results as they are propagated back to the
root, where the initiator is located. Aggregation will take place where two
or more branches of the tree come together. As simple as this scheme may
sound, it introduces difficult questions, such as how to dynamically set up an
efficient tree in the network and how aggregation of results should take place.
These questions have been addressed in systems such as TinyDB, in which an
intermediate node will collect and aggregate the results from its children,
along with its own findings, and send that toward the root. To make matters
efficient, queries span a period of time allowing for careful scheduling of
operations so that network resources and energy are optimally consumed.
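The core of such tree-based aggregation can be sketched in a few lines of Python (the Node structure and the use of max as the aggregation operator are assumptions chosen only for illustration):

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    reading: float                        # this node's own sensor value
    children: List["Node"] = field(default_factory=list)

def aggregate_max(node: Node) -> float:
    # Each intermediate node merges the partial results of its children
    # with its own finding, so only a single value travels up each edge.
    partials = [aggregate_max(child) for child in node.children]
    return max([node.reading] + partials)

# Aggregation takes place where branches meet; the root sees one value.
root = Node(21.0, [Node(23.5), Node(19.8, [Node(25.1)])])
print(aggregate_max(root))    # -> 25.1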
However, when queries can be initiated from different points in the net-
work, using single-rooted trees such as in TinyDB may not be efficient enough.
As an alternative, sensor networks may be equipped with special nodes to
which results are forwarded, as well as the queries related to those results. To give
a simple example, queries and results related to temperature readings may
be collected at a different location than those related to humidity measure-
ments. This approach corresponds directly to the notion of publish/subscribe
systems.
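A minimal Python sketch of this rendezvous idea, assuming a fixed (and purely illustrative) mapping of topics to special nodes:

from collections import defaultdict

# Purely illustrative mapping of result/query topics to special nodes.
RENDEZVOUS = {"temperature": "node-17", "humidity": "node-42"}

inbox = defaultdict(list)    # stands in for the per-node message queues

def publish(topic: str, reading: float) -> None:
    # Results are forwarded to the special node responsible for the topic.
    inbox[RENDEZVOUS[topic]].append((topic, reading))

def subscribe(topic: str) -> list:
    # A query for a topic is routed to that same node, where it meets
    # all results published under the topic.
    return [r for (t, r) in inbox[RENDEZVOUS[topic]] if t == topic]

publish("temperature", 21.3)
publish("humidity", 0.47)
print(subscribe("temperature"))    # -> [21.3]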
In duty-cycled networks, a node alternates between an active period of length
Tactive and a suspended period of length Tsuspended; the duty cycle t =
Tactive/(Tactive + Tsuspended) denotes the fraction of time a node is active.
Values for t are typically in the order of 10–30%, but when a network needs
to stay operational for periods exceeding many months, or even years, attaining
values as low as 1% becomes critical.
A problem with duty-cycled networks is that, in principle, nodes need to be
active at the same time, for otherwise communication would simply not be possible.
Considering that while a node is suspended, only its local clock continues ticking,
and that these clocks are subject to drifts, waking up at the same time may be
problematic. This is particularly true for networks with very low duty cycles.
When a group of nodes is active at the same time, the nodes are said to
form a synchronized group. There are essentially two problems that need to be
addressed. First, we need to make sure that the nodes in a synchronized group
remain active at the same time. In practice, this turns out to be relatively simple
if each node communicates information on its current local time. Then, simple
local clock adjustments will do the trick. The second problem is more difficult,
namely how two different synchronized groups can be merged into one in which
all nodes are synchronized. Let us take a closer look at what we are facing. Most
of the following discussion is based on material by Voulgaris et al. [2016].
In order to have two groups be merged, we need to first ensure that one group
detects the other. Indeed, if their respective active periods are completely disjoint,
there is no hope that any node in one group can pick up a message from a node
in the other group. In an active detection method, a node will send a join message
during its suspended period. In other words, while it is suspended, it temporarily
wakes up to elicit nodes in other groups to join. How big is the chance that
another node will pick up this message? Realize that we need to consider only the
case when t < 0.5, for otherwise two active periods will always overlap, meaning
that two groups can easily detect each other’s presence. The probability Pda that a
join message can be picked up during another node’s active period is equal to
\[
P_{da} = \frac{T_{\text{active}}}{T_{\text{suspended}}} = \frac{t}{1-t}
\]
This means that for low values of t, Pda is also very small.
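To get a feel for the numbers, a small Python computation of Pda for a few duty cycles (the values of t are chosen only for illustration):

# P_da = t / (1 - t), with t = Tactive / (Tactive + Tsuspended)
for t in (0.01, 0.10, 0.30):
    p_da = t / (1 - t)
    print(f"t = {t:.2f}  ->  P_da = {p_da:.3f}")
# Prints 0.010, 0.111, and 0.429; at a 1% duty cycle a join message is
# almost never heard, which is why detection is the hard part.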
In a passive detection method, a node skips the suspended state with (a very
low) probability Pdp, that is, it simply stays active during the Tsuspended time
units following its active period. During this time, it will be able to pick up any
messages sent by its neighbors, which are, by definition, members of a different
synchronized group. Experiments show that passive detection is inferior to active
detection.
Simply stating that two synchronized groups need to merge is not enough:
if A and B have discovered each other, which group will adopt the duty-cycle
settings of the other? A simple solution is to use a notion of cluster IDs. Each
node starts with a randomly chosen ID and effectively also a synchronized group
having only itself as member. After detecting another group B, all nodes in group
A join B if and only if the cluster ID of B is larger than that of A.
Synchronization can be improved considerably using so-called targeted join
messages. Whenever a node N receives a join message from a group A with a
lower cluster ID, it should obviously not join A. However, as N now knows when
the active period of A is, it can send a join message exactly during that period.
Obviously, the chance that a node from A will receive that message is very high,
allowing the nodes from A to join N’s group. In addition, when a node decides to
join another group, it can send a special message to its group members, giving them
the opportunity to quickly join as well.
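A minimal Python sketch of this merge rule under the stated assumptions (the field names are illustrative):

from dataclasses import dataclass

@dataclass
class GroupInfo:
    cluster_id: int       # randomly chosen at node startup
    active_start: float   # phase of this group's active period

def on_detect(mine: GroupInfo, other: GroupInfo) -> GroupInfo:
    if other.cluster_id > mine.cluster_id:
        # Join the other group: adopt its ID and duty-cycle phase.
        return GroupInfo(other.cluster_id, other.active_start)
    # Otherwise stay put; a targeted join message, sent exactly during the
    # other group's active period, will pull its nodes into ours instead.
    return mine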
1.4 Summary
Distributed systems consist of autonomous computers that work together to
give the appearance of a single coherent system. This combination of inde-
pendent, yet coherent collective behavior is achieved by collecting application-
independent protocols into what is known as middleware: a software layer
logically placed between operating systems and distributed applications. Pro-
tocols include those for communication, transactions, service composition,
and perhaps most important, reliability.
Design goals for distributed systems include sharing resources and ensur-
ing openness. In addition, designers aim at hiding many of the intricacies
related to the distribution of processes, data, and control. However, this
distribution transparency not only comes at a performance price; in practical
situations it can also never be fully achieved. The fact that trade-offs need to
be made between achieving various forms of distribution transparency is
inherent to the design of distributed systems, and can easily complicate their
understanding. One specific difficult design goal that does not always blend
well with achieving distribution transparency is scalability. This is particularly
true for geographical scalability, in which case hiding latencies and bandwidth
restrictions can turn out to be difficult. Likewise, administrative scalability,
by which a system is designed to span multiple administrative domains, may
easily conflict with goals for achieving distribution transparency.
Matters are further complicated by the fact that many developers initially
make assumptions about the underlying network that are fundamentally
wrong. Later, when these assumptions are dropped, it may turn out to be difficult
to mask unwanted behavior. A typical example is assuming that network
latency is not significant. Other pitfalls include assuming that the network is
reliable, static, secure, and homogeneous.
Different types of distributed systems exist, which can be classified as
being oriented toward supporting computations, information processing, and
pervasiveness. Distributed computing systems are typically deployed for
high-performance applications often originating from the field of parallel
computing. A field that emerged from parallel processing was grid computing,
initially with a strong focus on the worldwide sharing of resources, in turn
leading to what is now known as cloud computing. Cloud computing goes
beyond high-performance computing and also supports distributed systems
found in traditional office environments where we see databases playing an
important role. Typically, transaction processing systems are deployed in