
Introduction to Distributed Systems (DS)


DISTRIBUTED SYSTEMS

Principles and Paradigms


Second Edition
ANDREW S. TANENBAUM
MAARTEN VAN STEEN

Chapter 1
Introduction

Tanenbaum & Van Steen, Distributed Systems: Principles and Paradigms, 2e, (c) 2007
Prentice-Hall, Inc. All rights reserved. 0-13-239227-5
What is an Operating System?
An operating system is:

A collection of software components that

• Provide useful abstractions, and
• Manage resources, in order to
• Support application programs, and
• Provide an interface for users and programs
Operating System Functions
An operating system’s main functions are to:

• Schedule processes & multiplex CPU
• Provide mechanisms for IPC and synchronization
• Manage main memory
• Manage other resources
• Provide convenient persistent storage (files)
• Maintain system integrity, handle failures
• Enforce security policies (e.g., access control)
• Give users and processes an interface
Definition of a Distributed System (1)
A distributed system is (Tanenbaum):

A collection of independent computers that appears to its users as a single coherent system.

A distributed system is (Lamport):

One in which the failure of a computer you didn't even know existed can render your own computer unusable.
Properties of Distributed Systems
• Concurrency
– Multicore systems
– Multiple hosts
• No global clock
– Theoretical impossibility
– Expense of accurate clocks
• Independent view
– Message delay, failure
– Impossible to distinguish a slow node from a failed one (see the heartbeat sketch after this list)
• Independent failure
– Message delivery (loss, corruption)
– Nodes (fail-stop, Byzantine)
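Because a slow node and a failed node are indistinguishable from the outside, failure detection is necessarily based on suspicion after a timeout. A minimal heartbeat-detector sketch (class name, timeout values, and node IDs are illustrative, not from the slides):

```python
import time

class HeartbeatDetector:
    """Suspect a peer has failed if no heartbeat arrived within `timeout` seconds.

    A timeout can only yield suspicion, never certainty: the peer may just be
    slow, or its messages delayed (the 'independent view' problem above).
    """
    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.last_seen = {}                      # peer id -> time of last heartbeat

    def heartbeat(self, peer):
        self.last_seen[peer] = time.monotonic()  # record that the peer is alive

    def suspected(self, peer):
        last = self.last_seen.get(peer)
        return last is None or time.monotonic() - last > self.timeout

# Usage: call heartbeat() whenever a message from a peer arrives,
# and check suspected() before depending on that peer.
detector = HeartbeatDetector(timeout=2.0)
detector.heartbeat("node-17")
print(detector.suspected("node-17"))             # False right after a heartbeat
```

Shorter timeouts detect real failures faster but produce more false suspicions of nodes that are merely slow.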
Software Concepts
• DOS – Tightly-coupled operating system for multiprocessors and homogeneous multicomputers. Main goal: hide and manage hardware resources.
• NOS – Loosely-coupled operating system for heterogeneous multicomputers (LAN and WAN). Main goal: offer local services to remote clients.
• Middleware – Additional layer atop a NOS implementing general-purpose services. Main goal: provide distribution transparency.

An overview of
• NOS (Network Operating Systems) (80’s)
• DOS (Distributed Operating Systems) (90’s)
• Middleware (00’s)
Definition of a Distributed System (2)

Figure 1-1. A distributed system organized as middleware. The middleware layer extends over multiple machines, and offers each application the same interface.
• Heterogeneity – can the system handle a large variety of types of PCs and
devices?
• Robustness – is the system resilient to host crashes and failures, and to the
network dropping messages?
• Availability – are data+services always there for clients?
• Transparency – can the system hide its internal workings from the users?
• Concurrency – can the server handle multiple clients simultaneously?
• Efficiency – is the service fast enough? Does it utilize 100% of all resources?
• Scalability – can it handle 100 million nodes without degrading service?
(nodes=clients and/or servers)
• Security – can the system withstand hacker attacks?
• Openness – is the system extensible?
Transparency in a Distributed System

Figure 1-2. Different forms of transparency in a distributed system (ISO, 1995).
Other forms:
Parallelism – Hide the number of nodes working on a task
Size – Hide the number of components in the system
Revision – Hide changes in software/hardware versions
Challenges
• Performance
• Concurrency
• Failures
• Scalability
• System updates/growth
• Heterogeneity
• Openness
• Multiplicity of ownership, authority
• Security
• Quality of service/user experience
• Transparency
• Debugging
Approaches
• Virtual clocks (see the Lamport clock sketch after this list)
• Group communication
• Heartbeats/failure detection, group membership
• Distributed agreement, snapshots
• Leader election
• Transaction protocols
• Redundancy, replication, caching
• Indirection - naming
• Distributed mutual exclusion
• Middleware, modularization, layering
– Decomposition vs. integration
• Cryptographic protocols
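For the first item, "virtual clocks": since no global clock exists, logical clocks order events by causality rather than by wall time. A minimal Lamport-clock sketch (class and method names are illustrative):

```python
class LamportClock:
    """Logical clock: ticks on local events, jumps forward on message receipt."""
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self):
        # Tick, then attach the timestamp to the outgoing message.
        self.time += 1
        return self.time

    def receive(self, msg_time):
        # Take the max of local and message timestamps, then tick.
        self.time = max(self.time, msg_time) + 1
        return self.time

# Two processes exchanging a single message:
p, q = LamportClock(), LamportClock()
t = p.send()                 # p's clock becomes 1
print(q.receive(t))          # q's clock becomes max(0, 1) + 1 = 2
```

If event a causally precedes event b, then a's timestamp is smaller than b's; the converse does not hold, which is exactly the limitation later approaches (vector clocks, snapshots) address.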
Scalability Problems

Figure 1-3. Examples of scalability limitations.

Engineering = the art of compromise (making tradeoffs).
In distributed systems, many theoretical results on lower bounds of tradeoffs limit practical solutions.
Scalability Examples
Distributed systems are ubiquitous and necessary:
• Web search
• Financial transactions
• Multiplayer games
• DNS
• Travel reservation systems
• Utility infrastructure (e.g., power grid)
• Embedded systems (e.g., cars)
• Sensor networks
Failure to scale is fatal
• Instagram – share cellphone pix
• Facebook IPO
Web Search
• Google uses thousands of machines to
– Provide search results
– Run the PageRank algorithm
• Issues
– Connecting large number of machines
– Distributed file system (GFS)
– Indexing
– Programming model
– Scaling up when current system reaches limits
Financial Transactions
Volume is huge
• 4 million messages per second
• 50 million things you can trade
Requirements are stringent
• Low latency
• 24/7 operation (around the world)
• Failure “is not an option”
• Facebook NASDAQ Freeze
– Transaction system overwhelmed
– Hours to complete transactions in falling market
Multiplayer Games
Very popular – huge market
Characteristics
• May have millions of players
• Players operate in same “world”
• Players interact with world, each other
Issues
• Number of users
• Latency, consistency
• Coordination of multiple servers
• Architecture???
Scalability Problems
Characteristics of decentralized algorithms:
• No machine has complete information about the
system state.
• Machines make decisions based only on local
information.
• Failure of one machine does not ruin the
algorithm.
• There is no implicit assumption that a global
clock exists.
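A toy example of such a decentralized algorithm (push-pull averaging gossip, chosen here for illustration and not taken from the slides): every node holds only its own value and repeatedly averages it with a randomly chosen peer. No node ever sees global state, any single node can drop out, and yet all values converge to the global mean.

```python
import random

def gossip_average(values, rounds=200):
    """Each round a random pair of nodes averages their local values.
    No node has complete information; each step uses only local state."""
    nodes = list(values)
    for _ in range(rounds):
        i, j = random.sample(range(len(nodes)), 2)
        avg = (nodes[i] + nodes[j]) / 2
        nodes[i] = nodes[j] = avg
    return nodes

print(gossip_average([10, 0, 4, 2]))    # all four values end up near 4.0
```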
Scaling Techniques (1)

Figure 1-4. The difference between letting (a) a server or (b) a client check forms as they are being filled.
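A rough sketch of the idea behind Figure 1-4, with invented field names: if the client validates the form before submission, only complete, well-formed requests reach the server, which shifts work away from the shared resource.

```python
import re

def validate_form(form):
    """Client-side checks run before anything is sent to the server."""
    errors = []
    if not form.get("name"):
        errors.append("name is required")
    if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", form.get("email", "")):
        errors.append("email looks invalid")
    return errors

form = {"name": "Ada", "email": "ada@example.org"}
errors = validate_form(form)
if errors:
    print("fix locally:", errors)   # caught without any server round trip
else:
    print("submit to server")       # the server only ever sees valid forms
```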
Scaling Techniques (2)

Figure 1-5. An example of dividing the DNS name space into zones.
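A simplified sketch of the partitioning idea behind Figure 1-5 (the zone contents and the address below are made up for illustration): the name space is split into zones, each handled by its own server, so resolving a name means following delegations from zone to zone rather than asking one server that knows everything.

```python
# Hypothetical zone contents; each zone would be served by a different server.
ZONES = {
    "nl":       {"vu.nl": ("delegate", "vu.nl")},
    "vu.nl":    {"cs.vu.nl": ("delegate", "cs.vu.nl")},
    "cs.vu.nl": {"flits.cs.vu.nl": ("address", "130.37.16.112")},
}

def resolve(name, zone="nl"):
    """Start at the top-level zone and follow delegations downward."""
    while True:
        # Find the entry in this zone that covers the name being resolved.
        entry = next(k for k in ZONES[zone] if name == k or name.endswith("." + k))
        kind, value = ZONES[zone][entry]
        if kind == "address":
            return value
        zone = value               # follow the delegation into the child zone

print(resolve("flits.cs.vu.nl"))   # resolved after three separate zone lookups
```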
Pitfalls when Developing
Distributed Systems
False assumptions made by first-time developers:
• The network is reliable.
• The network is secure.
• The network is homogeneous.
• The topology does not change.
• Latency is zero.
• Bandwidth is infinite.
• Transport cost is zero.
• There is one administrator.
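The first fallacy, "the network is reliable", is why practical clients wrap every remote call in timeouts and retries. A generic sketch (send_request is a stand-in for any real RPC and simply fails at random here):

```python
import random, time

def send_request(msg):
    """Stand-in for a real remote call; here it simply fails at random."""
    if random.random() < 0.3:
        raise TimeoutError("no reply")
    return "reply to " + msg

def call_with_retry(msg, attempts=5, backoff=0.1):
    """Retry failed calls with exponential backoff between attempts."""
    for i in range(attempts):
        try:
            return send_request(msg)
        except TimeoutError:
            time.sleep(backoff * 2 ** i)     # wait longer after each failure
    raise RuntimeError("peer unreachable after %d attempts" % attempts)

print(call_with_retry("hello"))
```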
Multicore Systems
• Knights Corner: 64 cores on a chip
• Intel “Cloud in a Chip” – 48 cores / 256 GB @ $9K
  – http://www.intel.com/content/www/us/en/research/intel-labs-single-chip-cloud-computer.html

• Most hosts are 2, 4, or 8 cores now
• Fine-grained parallelism is hard
– Requires detailed algorithm knowledge / programmer involvement
– Very fancy compiler
– Scheduling is a challenge
• Virtualization
– Treat N cores as N hosts (with low latency comm)
– Do sequential programming
– Use a DS framework to integrate (see the sketch below)
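A small illustration of the "N cores as N hosts" approach (the word-count task is arbitrary): Python's multiprocessing pool runs ordinary sequential code in separate worker processes, one per core, communicating by message passing rather than shared memory.

```python
from multiprocessing import Pool

def word_count(chunk):
    """Ordinary sequential code, run independently on each 'host' (core)."""
    return len(chunk.split())

if __name__ == "__main__":
    chunks = ["one two three", "four five", "six seven eight nine"]
    with Pool() as pool:                        # one worker process per core
        counts = pool.map(word_count, chunks)   # scatter the work, gather results
    print(sum(counts))                          # -> 9
```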
Knights Corner (KC) Chip

[Die diagram: 10 rings (5 in each direction), tag directories (Tag Dir), memory controllers (Mem Ctl)]


Types of Distributed Systems
1. Distributed computing systems
   a. Cluster computing systems
   b. Grid computing systems
2. Distributed information systems
   a. Transaction processing systems
   b. Enterprise application integration
3. Distributed pervasive systems
   a. Home systems
   b. Electronic health care systems
   c. Sensor networks
Cluster Computing Systems

Figure 1-6. An example of a cluster computing system.


Cluster Computing Systems
• In cluster computing, the underlying hardware consists of a collection of similar workstations or PCs, closely connected by means of a high-speed LAN.
• In addition, each node runs the same operating system.
• Clusters became popular as the price/performance ratio of PCs and workstations improved.
• Each cluster consists of a collection of compute nodes that are controlled and accessed by means of a single master node. The master typically handles the allocation of nodes to a particular parallel program, maintains a batch queue of submitted jobs, and provides an interface for the users of the system. As such, the master actually runs the middleware needed for the execution of programs and the management of the cluster.
Grid/Cloud Computing Systems

Figure 1-7. A layered architecture for grid computing systems.


Common Distributed Systems

• Query Processing
• Transaction Processing
• Enterprise Applications
• Pervasive Systems
• Sensor Networks
Transaction Processing Systems (1)

Figure 1-8. Example primitives for transactions.


Transaction Processing Systems (2)
Characteristic properties of transactions:
• Atomic: To the outside world, the transaction
happens indivisibly.
• Consistent: The transaction does not violate
system invariants.
• Isolated: Concurrent transactions do not
interfere with each other.
• Durable: Once a transaction commits, the
changes are permanent.

Known as ACID properties
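A hedged sketch of how transaction primitives such as those in Figure 1-8 (BEGIN_TRANSACTION, READ, WRITE, END_TRANSACTION, ABORT_TRANSACTION) can yield atomicity: writes are buffered privately and applied to the shared store only at commit, so an abort leaves the store untouched. The class below is illustrative, not an API from the book.

```python
class SimpleTransaction:
    """Buffers writes until commit (END_TRANSACTION); abort discards them."""
    def __init__(self, store):
        self.store = store       # the shared key/value store
        self.writes = {}         # private buffer: key -> new value

    def read(self, key):
        # Reads see this transaction's own pending writes first.
        return self.writes.get(key, self.store.get(key))

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):            # END_TRANSACTION
        self.store.update(self.writes)   # all writes become visible at once

    def abort(self):             # ABORT_TRANSACTION
        self.writes.clear()              # the store never sees partial results

store = {"balance": 100}
t = SimpleTransaction(store)     # BEGIN_TRANSACTION
t.write("balance", t.read("balance") - 30)
t.commit()
print(store)                     # {'balance': 70}
```

Durability and isolation would additionally require logging to stable storage and a scheduler (locking or timestamping), which the following slides attribute to the TP monitor's components.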


Transaction Processing Systems (3)

Figure 1-9. A nested transaction.


Transaction Processing Systems (4)

Figure 1-10. The role of a TP monitor (a.k.a. coordinator) in distributed systems.
Transaction Processing Systems (4.5)
[Diagram: decomposition of the transaction monitor in a TPS — clients hold transaction (coordinator) objects; schedulers and object managers at each participant act on the participants' objects.]

TM – 2PC; SCH – serializability; OM – atomic update
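"TM – 2PC" refers to two-phase commit, which the transaction manager uses to make the commit decision atomic across all participants. A bare-bones sketch of the coordinator logic (the Participant class and its prepare/commit/abort methods are assumed for illustration):

```python
def two_phase_commit(participants):
    """Phase 1: collect votes. Phase 2: commit only if every vote was yes."""
    votes = [p.prepare() for p in participants]   # "are you able to commit?"
    if all(votes):
        for p in participants:
            p.commit()                            # global commit
        return "committed"
    for p in participants:
        p.abort()                                 # global abort
    return "aborted"

class Participant:
    """Votes yes or no in phase 1, then obeys the coordinator's decision."""
    def __init__(self, can_commit=True):
        self.can_commit = can_commit
    def prepare(self):
        return self.can_commit
    def commit(self):
        print("participant: commit")
    def abort(self):
        print("participant: abort")

print(two_phase_commit([Participant(), Participant(can_commit=False)]))  # aborted
```

A real protocol also logs each decision to stable storage and handles coordinator or participant crashes between the two phases; this sketch shows only the voting logic.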
Enterprise Application Integration

Figure 1-11. Middleware as a communication facilitator in enterprise application integration.
Distributed Pervasive Systems
Requirements for pervasive systems

• Embrace contextual changes.


• Encourage ad hoc composition.
• Recognize sharing as the default.
Electronic Health Care Systems (1)
Questions to be addressed for health care systems:
• Where and how should monitored data be
stored?
• How can we prevent loss of crucial data?
• What infrastructure is needed to generate and
propagate alerts?
• How can physicians provide online feedback?
• How can extreme robustness of the monitoring
system be realized?
• What are the security issues and how can the
proper policies be enforced?
Electronic Health Care Systems (2)

Figure 1-12. Monitoring a person in a pervasive electronic health care system, using (a) a local hub or (b) a continuous wireless connection.
Sensor Networks (1)

Questions concerning sensor networks:


• How do we (dynamically) set up an
efficient tree in a sensor network?
• How does aggregation of results take
place? Can it be controlled?
• What happens when network links fail?
Sensor Networks (2)

Figure 1-13. Organizing a sensor network database, while storing and processing data (a) only at the operator’s site or …
Sensor Networks (3)

Figure 1-13. Organizing a sensor network database, while storing and processing data … or (b) only at the sensors.

May also do data fusion/aggregation/processing at nodes along the path to the master node/operator (see the sketch below).
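A toy sketch of that in-network aggregation (the tree topology and readings are invented): each node forwards only a partial aggregate of its subtree toward the operator, so one value per link replaces a flood of raw readings.

```python
# Hypothetical aggregation tree (node -> children) and sensor readings.
TREE = {"sink": ["a", "b"], "a": ["a1", "a2"], "b": []}
READING = {"a1": 21.0, "a2": 23.5, "a": 22.0, "b": 19.0}

def aggregate_max(node):
    """Each node sends up only the maximum of its subtree, not raw readings."""
    values = [aggregate_max(child) for child in TREE.get(node, [])]
    if node in READING:
        values.append(READING[node])
    return max(values)

print(aggregate_max("sink"))   # -> 23.5, with one value crossing each link
```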
Some Fundamental Issues
• How do we decompose a complex
problem/task into logical/manageable
chunks?
• What is the physical architecture?
• How do we assign roles/responsibilities to
physical components?
• How do we find components (logical and
physical)?
• How do we define and maintain
consistency?
