Distributed Systems Lecture Notes
Himayatullah Sharief
UNIT-I
Definition: A distributed system is a collection of independent computers that appears to its users
as a single coherent system.
This definition has two important aspects. The first is that a distributed system consists of components (i.e., computers) that are autonomous. The second is that users (be they people or programs) think they are dealing with a single system.
The above diagram shows a distributed system organized as middleware. The middleware layer
extends over multiple machines, and offers each application the same interface.
3. Openness
An open distributed system is a system that offers services according to standard rules
that describe the syntax and semantics of those services. For example, in computer
networks, standard rules govern the format, contents, and meaning of messages sent
and received. Such rules are formalized in protocols. In distributed systems, services
are generally specified through interfaces.
4. Scalability
The scalability problems in DSs appear as performance problems caused by the limited capacity of servers and networks. There are basically three techniques for scaling (a small sketch of the first follows this list):
1. Hiding communication latencies: avoid waiting for responses to remote services as much as possible by using asynchronous communication.
2. Distribution: split a component into smaller parts and spread those parts across the system (as DNS does with its name space).
3. Replication: replicate components, including caching, across the system to increase availability and balance load.
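As an illustration of the first technique, the following sketch (Python; fetch_remote and its one-second delay are made-up stand-ins for a remote service, not anything from these notes) shows a client issuing a request asynchronously, doing useful local work, and blocking only when the reply is actually needed.

import time
from concurrent.futures import ThreadPoolExecutor

def fetch_remote(query):
    # stands in for a remote service call with network + server latency
    time.sleep(1.0)
    return f"result for {query}"

def local_work():
    # work the client can do that does not depend on the reply
    return sum(i * i for i in range(100_000))

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(fetch_remote, "scaling")   # request is in flight; this call returns at once
    partial = local_work()                          # client proceeds instead of waiting
    reply = future.result()                         # block only when the reply is actually needed
    print(partial, reply)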
Scaling Techniques -1
The difference between letting (a) a server or (b) a client check forms as they are being filled.
There are several ways to organize the hardware in multiple-CPU systems, especially in terms of how the CPUs are interconnected and how they communicate.
Multicomputers
Homogeneous multicomputers (usually used in parallel systems): a single interconnection network; all processors are the same and generally have access to the same amount of private memory.
Heterogeneous multicomputers (usually used in distributed systems): a variety of different, independent computers connected through different networks.
Due to the large scale, inherent heterogeneity, and lack of a global system view in heterogeneous multicomputers, sophisticated software is needed to build applications; building such software effectively amounts to developing a distributed system (DS). Thus DSs usually have a software layer (middleware) to provide transparency.
In a multiprocessor, the DOS (distributed operating system) supports multiple processors having access to a shared memory, and protects data against simultaneous access to the same shared memory locations.
The main goal of DOS is to hide the intricacies of managing the underlying hardware
such that it can be shared by multiple processes.
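A minimal sketch of this kind of protection, using plain Python threads and a lock rather than a real DOS; the variable counter stands in for a shared memory location.

import threading

counter = 0                      # the "shared memory location"
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:               # only one thread may update the shared data at a time
            counter += 1

t1 = threading.Thread(target=increment, args=(100_000,))
t2 = threading.Thread(target=increment, args=(100_000,))
t1.start(); t2.start(); t1.join(); t2.join()
print(counter)                   # always 200000; without the lock, updates could be lost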
In such a DS, the local OS on each computer manages its resources, while the middleware offers a more-or-less complete collection of services used by the
applications.
Most middleware is based on some model (or paradigm) for describing distribution
and communication, such as distributed file systems, remote procedure calls (RPC), distributed objects, distributed documents, etc.
1. Layered architectures
2. Object-based architectures
3. Data-centered architectures
4. Event-based architectures
Architectural Styles-2
Architectural Styles -3
Architectural Styles -4
Centralized Architectures
Application Layering -1
Recall the previously mentioned layers of the layered architectural style.
Application Layering -2
The simplified organization of an Internet search engine into three different layers
Multitier Architectures -1
The simplest organization is to have only two types of machines:
1. A client machine containing only the programs implementing (part of) the user-interface level
2. A server machine containing the rest, the programs implementing the processing and data level
UNIT-II
Communication
Inter-process communication is at the heart of all distributed systems. Communication
in distributed systems is always based on low-level message passing as offered by the
underlying network.
In the OSI model, communication is divided up into seven layers, as shown below.
Each layer deals with one specific aspect of the communication. In this way, the
problem can be divided up into manageable pieces, each of which can be solved independently of the others. Each layer provides an interface to the one above it. The
interface consists of a set of operations that together define the service the layer is
prepared to offer its users.
Layered Protocols -1
Working
When process-A on machine 1 wants to communicate with process-B on machine 2, it
builds a message and passes the message to the application layer on its machine.
The application layer software then adds a header to the front of the message and passes the resulting message across the layer 6/7 interface to the presentation layer. The
presentation layer in turn adds its own header and passes the result down to the session
layer, and so on. Some layers add not only a header to the front, but also a trailer to the
end. When it hits the bottom, the physical layer actually transmits the message, shown
below.
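The following sketch (Python; the layer names are the OSI layers, but the header format is made up) illustrates this encapsulation: on the way down each layer prepends its own header, and on the way up each layer strips it again. Trailers, which some layers also add, are omitted for brevity.

LAYERS = ["application", "presentation", "session", "transport", "network", "data link"]
# the physical layer just transmits the resulting bit string

def send_down(message: bytes) -> bytes:
    for layer in LAYERS:                      # top to bottom: each layer prepends a header
        message = f"[{layer} hdr]".encode() + message
    return message                            # handed to the physical layer for transmission

def receive_up(frame: bytes) -> bytes:
    for layer in reversed(LAYERS):            # bottom to top: each layer strips its header
        header = f"[{layer} hdr]".encode()
        assert frame.startswith(header)
        frame = frame[len(header):]
    return frame

wire = send_down(b"hello from process A")
print(wire)                                   # the message wrapped in six headers
print(receive_up(wire))                       # b'hello from process A'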
Middleware Protocols
Middleware is an application that logically lives (mostly) in the application layer, but
which contains many general-purpose protocols that warrant their own layers,
independent of other, more specific applications. A distinction can be made between
high-level communication protocols and protocols for establishing various middleware
services.
As an example, consider a distributed locking protocol by which a resource can be
protected against simultaneous access by a collection of processes that are distributed
across multiple machines.
Types of Communication
1. The sender may be blocked until the middleware notifies it that it will take over transmission of the request.
2. The sender may synchronize on the point at which its request has been delivered to the intended recipient.
3. Synchronization may take place by letting the sender wait until its request has been fully processed, that is, up to the time that the recipient returns a response.
(a) Parameter passing in a local procedure call: the stack before the call to read
Client and Server Stubs
When the message arrives at the server, the server's operating system passes it
up to a server stub. A server stub is the server-side equivalent of a client stub: it is a
piece of code that transforms requests coming in over the network into local procedure
calls. Typically the server stub will have called receive and be blocked waiting for
incoming messages. The server stub unpacks the parameters from the message and then
calls the server procedure in the usual way (i.e., as in Fig of Conventional RPC Call).
From the server's point of view, it is as though it is being called directly by the client.
Steps:
1. The client procedure calls the client stub in the normal way.
2. The client stub builds a message and calls the local operating system.
3. The client’s OS sends the message to the remote OS.
4. The remote OS gives the message to the server stub.
5. The server stub unpacks the parameters and calls the server.
6. The server does the work and returns the result to the stub.
7. The server stub packs it in a message and calls its local OS.
8. The server’s OS sends the message to the client’s OS.
9. The client’s OS gives the message to the client stub.
10. The stub unpacks the result and returns to the client.
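A minimal sketch of the ten steps above, using a TCP socket and pickle as the message format (this is not a real RPC framework; add is just an example procedure, and real stubs are normally generated from an interface definition).

import pickle, socket, threading

srv = socket.socket()
srv.bind(("127.0.0.1", 9000)); srv.listen(1)        # the server's end point

def server():
    conn, _ = srv.accept()                          # step 4: remote OS hands the message to the server stub
    request = pickle.loads(conn.recv(4096))         # step 5: server stub unpacks the parameters
    result = sum(request["args"]) if request["proc"] == "add" else None   # step 6: the server does the work
    conn.sendall(pickle.dumps(result))              # steps 7-8: pack the result and send it back
    conn.close()

def add_stub(a, b):                                 # client stub: looks like a local procedure
    msg = pickle.dumps({"proc": "add", "args": (a, b)})           # step 2: build a message
    with socket.create_connection(("127.0.0.1", 9000)) as sock:   # step 3: client's OS sends it
        sock.sendall(msg)
        return pickle.loads(sock.recv(4096))        # steps 9-10: unpack the result and return it

threading.Thread(target=server, daemon=True).start()
print(add_stub(2, 3))                               # step 1: the client calls the stub; prints 5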
Asynchronous RPC -1
As in conventional procedure calls, when a client calls a remote procedure, the
client will block until a reply is returned. This strict request-reply behavior is
unnecessary when there is no result to return, and only leads to blocking the client while it could have proceeded and done useful work just after requesting the remote procedure to be called. Examples of where there is often no need to wait for a reply
include: transferring money from one account to another, adding entries into a database,
starting remote services, batch processing, and so on. To support such situations, RPC
systems may provide facilities for what are called asynchronous RPCs, by which a client
immediately continues after issuing the RPC request. With asynchronous RPCs, the
server immediately sends a reply back to the client the moment the RPC request is
received, after which it calls the requested procedure. The reply acts as an
acknowledgment to the client that the server is going to process the RPC.
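The sketch below mimics this behavior with Python threads and a queue standing in for the network (the job string is made up): the server acknowledges the request immediately and only then carries out the work, so the client continues right after the acknowledgment.

import threading, time, queue

def do_work(job):
    time.sleep(1.0)                          # the actual processing, done after the acknowledgment
    print("server finished:", job)

def server(requests):
    while True:
        job, ack = requests.get()
        ack.set()                            # reply at once: "I will process this RPC"
        threading.Thread(target=do_work, args=(job,)).start()

requests = queue.Queue()                     # stands in for the network
threading.Thread(target=server, args=(requests,), daemon=True).start()

ack = threading.Event()
requests.put(("update record 42", ack))      # issue the asynchronous RPC
ack.wait()                                   # wait only for the acknowledgment
print("client continues without waiting for the result")
time.sleep(1.5)                              # keep the process alive long enough to see the server finish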
Asynchronous RPC -2
Asynchronous RPC -3
Writing a Client and a Server (1)
1. Registration of a server makes it possible for a client to locate the server and bind to it.
Binding a Client to a Server -2
MESSAGE-ORIENTED COMMUNICATION
Remote procedure calls and remote object invocations contribute to hiding
communication in distributed systems, that is, they enhance access transparency.
Unfortunately, neither mechanism is always appropriate. In particular, when it cannot
be assumed that the receiving side is executing at the time a request is issued, alternative
communication services are needed. Likewise, the inherent synchronous nature of
RPCs, by which a client is blocked until its request has been processed, sometimes
needs to be replaced by messaging.
STREAM-ORIENTED COMMUNICATION
There are also forms of communication in which timing plays a crucial role.
Consider, for example, an audio stream built up as a sequence of 16-bit samples, each
representing the amplitude of the sound wave as is done through Pulse Code Modulation
(PCM).
Enforcing QoS -1
Given that the underlying system offers only a best-effort delivery service, a distributed
system can try to conceal as much as possible of the lack of quality of service.
A distributed system should help in getting data across to receivers. Although there are
generally not many tools available, one that is particularly useful is to use buffers to
reduce jitter. The principle is simple, as shown in Figure below.
Assuming that packets are delayed with a certain variance when transmitted over the
network, the receiver simply stores them in a buffer for a maximum amount of time.
This will allow the receiver to pass packets to the application at a regular rate, knowing
that there will always be enough packets entering the buffer to be played back at that
rate.
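A small simulation of the principle (all timing numbers are made up): packets are sent every 20 ms, suffer a variable network delay, and the receiver delays playback by a fixed 100 ms so that most packets are already in the buffer when they are needed.

import random

PERIOD = 0.020          # a new packet is sent every 20 ms
BUFFER_DELAY = 0.100    # the receiver delays playback by 100 ms to absorb jitter

for seq in range(10):
    sent = seq * PERIOD
    arrival = sent + random.uniform(0.010, 0.130)    # variable network delay (jitter)
    playback = sent + BUFFER_DELAY                   # regular playback schedule
    status = "played on time" if arrival <= playback else "too late: gap in playback"
    print(f"packet {seq}: arrived {arrival:.3f}s, scheduled {playback:.3f}s -> {status}")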
One problem that may occur is that a single packet contains multiple audio and video frames, so that losing the packet loses several successive frames. To counter this, interleaving of frames, as shown in the Figure below, is used. In this way, when a packet is lost, the resulting gap in successive frames is distributed over time.
The effect of packet loss in (a) non interleaved transmission and (b) interleaved transmission
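The effect can be seen in the following sketch (the frame numbers and the packet size of four frames are made up): with consecutive packing, losing one packet removes four successive frames, whereas with interleaving the same loss produces four small, spread-out gaps.

frames = list(range(16))                            # frames 0..15, four frames per packet

non_interleaved = [frames[i:i + 4] for i in range(0, 16, 4)]
interleaved = [frames[i::4] for i in range(4)]      # packet i carries frames i, i+4, i+8, i+12

def surviving(packets, lost):                       # frames still available if packet `lost` is dropped
    return sorted(f for i, p in enumerate(packets) if i != lost for f in p)

print("non-interleaved, packet 1 lost:", surviving(non_interleaved, 1))  # one long gap: 4,5,6,7 missing
print("interleaved, packet 1 lost:   ", surviving(interleaved, 1))       # spread gaps: 1,5,9,13 missing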
Stream Synchronization
The simplest form of synchronization is that between a discrete data stream and
a continuous data stream. Consider, for example, a slide show on the Web that has been
enhanced with audio. Each slide is transferred from the server to the client in the form
of a discrete data stream. At the same time, the client should play out a specific (part of
an) audio stream that matches the current slide that is also fetched from the server. In
this case, the audio stream is to be synchronized with the presentation of slides.
For example, consider a movie that is presented as two input streams. The video
stream contains uncompressed low-quality images of 320x240 pixels, each encoded by
a single byte, leading to video data units of 76,800 bytes each. Assume that images are
to be displayed at 30 Hz, or one image every 33 msec. The audio stream is assumed to
contain audio samples grouped into units of 11760 bytes, each corresponding to 33 ms
of audio, as explained above. If the input process can handle 2.5 MB/sec, we can achieve
lip synchronization by simply alternating between reading an image and reading a block
of audio samples every 33 ms.
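A quick check of these numbers (the 2.5 MB/sec figure is quoted from the text above; the rest is simple arithmetic):

video_unit = 320 * 240 * 1            # 76,800 bytes per image
audio_unit = 11_760                   # bytes of audio per 33 ms
units_per_second = 30                 # one image plus one audio block every 33 ms

bytes_per_second = (video_unit + audio_unit) * units_per_second
print(video_unit, "bytes of video per image")
print(bytes_per_second, "bytes per second in total")       # 2,656,800 bytes/s
print(bytes_per_second / 1_000_000, "MB/s; compare with the 2.5 MB/sec figure quoted above")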
UNIT-III
Processes
Introduction to Threads
To execute a program, an operating system creates a number of virtual processors, each
one for running a different program. To keep track of these virtual processors, the
operating system has a process table, containing entries to store CPU register values, memory maps, open files, accounting information, privileges, etc.
A process is often defined as a program in execution, that is, a program that is currently
being executed on one of the operating system's virtual processors.
The major drawback of all IPC mechanisms is that communication often requires
extensive context switching, shown at three different points in Figure below
Like a process, a thread executes its own piece of code, independently from other
threads. However, in contrast to processes, no attempt is made to achieve a high degree
of concurrency transparency if this would result in performance degradation.
Thread Implementation
Threads are often provided in the form of a thread package. There are basically two
approaches to implement a thread package.
1. The first approach is to construct a thread library that is executed entirely in user
mode.
2. The second approach is to have the kernel be aware of threads and schedule
them.
A user-level thread library has a number of advantages. First, it is cheap to create
and destroy threads. Because all thread administration is kept in the user's address space,
the price of creating a thread is primarily determined by the cost for allocating memory
to set up a thread stack. Analogously, destroying a thread mainly involves freeing
memory for the stack, which is no longer used. Both operations are cheap.
The thread package has a single routine to schedule the next thread. When
creating an LWP (lightweight process), which is done by means of a system call, the LWP is given its own
stack, and is instructed to execute the scheduling routine in search of a thread to
execute. If there are several LWPs, then each of them executes the scheduler.
Multithreaded Servers
The single-threaded server retains the ease and simplicity of blocking system
calls, but gives up some amount of performance. The finite-state machine approach
achieves high performance through parallelism, but uses non-blocking calls and is thus hard to program. These models are summarized in the Figure below.
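A minimal sketch of the thread-per-request (dispatcher/worker) organization in Python (the upper-casing "work" and the port number are made up): the dispatcher blocks on accept() and hands each connection to a worker thread, so a blocking call stalls only that one worker.

import socket, threading

srv = socket.socket()
srv.bind(("127.0.0.1", 9100)); srv.listen(5)

def worker(conn):
    data = conn.recv(1024)                  # a blocking call stalls only this worker
    conn.sendall(data.upper())              # the "work": echo the request in upper case
    conn.close()

def dispatcher():
    while True:
        conn, _ = srv.accept()              # wait for the next incoming request
        threading.Thread(target=worker, args=(conn,), daemon=True).start()

threading.Thread(target=dispatcher, daemon=True).start()

with socket.create_connection(("127.0.0.1", 9100)) as c:   # a quick client to try it out
    c.sendall(b"hello"); print(c.recv(1024))                # b'HELLO'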
A major task of client machines is to provide the means for users to interact with remote
servers. There are roughly two ways of supporting this. In Figure (a), for each remote service the client machine has a separate counterpart that can contact the service over the network. A typical example is an agenda running on a user's PDA that needs to synchronize with a remote agenda.
A second solution, shown below in Figure (b) is to provide direct access to remote
services by only offering a convenient user interface. Effectively, this means that the
client machine is used only as a terminal with no need for local storage, leading to an
application-neutral solution.
(a) A networked application with its own protocol; (b) a general solution to allow access to remote applications
General Design Issues -1
A concurrent server does not handle the request itself, but passes it to a separate thread
or another process, after which it immediately waits for the next incoming request. A
multithreaded server is an example of a concurrent server.
Clients contact a server. In all cases, clients send requests to an end point, also called a
port, at the machine where the server is running. Each server listens to a specific end
point. How do clients know the end point of a service?
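One common answer is a daemon on the server machine that keeps a table of end points; the sketch below (kept in-process and with made-up service names, so only the idea is shown) lets servers register a port and clients look it up before contacting the service.

daemon_table = {}                    # service name -> port, kept by the daemon

def register(service, port):         # a server registers its end point when it starts
    daemon_table[service] = port

def lookup(service):                 # a client asks the daemon where the service listens
    return daemon_table.get(service)

register("time-service", 9321)
register("print-service", 9322)
print("time-service listens on port", lookup("time-service"))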
Distributed Servers
The basic idea behind a distributed server is that clients benefit from a robust, high-performance, stable server. These properties can often be provided by high-end mainframes.
The main idea is to make use of available networking services, notably mobility
support for IP version 6 (MIPv6). In MIPv6, a mobile node is assumed to have a home
network where it normally resides and for which it has an associated stable address,
known as its home address (HoA). This home network has a special router attached,
known as the home agent, which will take care of traffic to the mobile node when it is
away.
Naming
Names, Identifiers, And Addresses
Names: A name in a DS is a string of bits or characters that is used to refer to an entity. Examples of such entities are resources such as hosts, printers, and disks, as well as explicitly named resources such as processes, web pages, network connections, etc.
Address: To operate on an entity it must be accessed, which requires an access point. An access point is a special kind of entity in a DS, and the name of an access point is called an address.
Forwarding Pointers -1
Each forwarding pointer is implemented as a (client stub, server stub) pair as shown in
Figure below. A server stub contains either a local reference to the actual object or a
local reference to a remote client stub for that object.
The principle of forwarding pointers using (client stub, server stub) pairs
To short-cut a chain of (client stub, server stub) pairs, an object invocation carries the
identification of the client stub from where that invocation was initiated. A client-stub
identification consists of the client's transport-level address, combined with a locally
generated number to identify that stub. When the invocation reaches the object at its
current location, a response is sent back to the client stub where the invocation was
initiated. The current location is piggybacked with this response, and the client stub
adjusts its companion server stub to the one in the object's current location. This
principle is shown in Figure below
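The following small Python model (the Counter object and the stub classes are made up for illustration) captures the same idea: each server stub either holds the object or forwards along the chain, and after an invocation the client stub is adjusted to point directly at the object's current location.

class ServerStub:
    def __init__(self, obj=None, next_stub=None):
        self.obj, self.next_stub = obj, next_stub     # hold the object OR forward to the next stub

    def invoke(self, method):
        if self.obj is not None:
            return getattr(self.obj, method)(), self  # reply plus the object's current location
        return self.next_stub.invoke(method)          # follow the forwarding pointer

class ClientStub:
    def __init__(self, server_stub):
        self.server_stub = server_stub

    def invoke(self, method):
        result, current = self.server_stub.invoke(method)
        self.server_stub = current                    # short-cut: skip the old chain next time
        return result

class Counter:                                        # the actual (mobile) object
    def __init__(self):
        self.n = 0
    def inc(self):
        self.n += 1
        return self.n

home, middle, current = ServerStub(), ServerStub(), ServerStub(obj=Counter())
home.next_stub, middle.next_stub = middle, current    # the object has moved twice

client = ClientStub(home)
print(client.invoke("inc"))                           # resolved by following the chain: 1
print(client.invoke("inc"))                           # now goes straight to the current location: 2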
Home-Based Approaches
An approach which supports mobile entities in large-scale networks is to
introduce a home location, which keeps track of the current location of an entity. Special
techniques may be applied to safeguard against network or process failures. In practice,
the home location is often chosen to be the place where an entity was created. The home-
based approach is used as a fall-back mechanism for location services based on
forwarding pointers.
Another example where the home-based approach is followed is in Mobile IP.
Hierarchical Approach
In a hierarchical scheme, a network is divided into a collection of domains. There
is a single top-level domain that spans the entire network. Each domain can be
subdivided into multiple, smaller sub-domains. A lowest-level domain, called a leaf
domain, typically corresponds to a local-area network in a computer network or a cell
in a mobile telephone network. Each domain D has an associated directory node dir(D) that keeps track of the entities in that domain. This leads to a tree of directory nodes.
The directory node of the top-level domain, called the root (directory) node, knows
about all entities. This general organization of a network into domains and directory
nodes is illustrated in Figure below
Hierarchical organization of a location service into domains, each having an associated directory
node.
Name spaces
Names in distributed systems are organized into what is commonly referred to as a name space. A name space can be represented as a labeled directed graph with two types of nodes.
A leaf node represents a named entity and has the property that it has no outgoing edges. A leaf node generally stores information on the entity it is representing, for example its address; alternatively, it can also store the state of the entity.
A directory node has a number of outgoing edges, each labeled with a name. Each node in a naming graph is considered yet another entity in a DS and, in particular, has an associated identifier. A directory node stores a table in which each outgoing edge is represented as a pair (edge label, node identifier). Such a table is called a directory table.
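The sketch below builds a tiny naming graph in Python (node identifiers n0..n5 and the path names are made-up examples in the style of a UNIX file system): directory nodes hold a directory table of (edge label, node identifier) pairs, leaf nodes hold the entity's information, and resolving a path name follows one labeled edge per step.

nodes = {                                      # node identifier -> node contents
    "n0": {"home": "n1", "etc": "n2"},         # root directory node and its directory table
    "n1": {"steen": "n3"},                     # directory node
    "n2": {"passwd": "n4"},                    # directory node
    "n3": {"mbox": "n5"},                      # directory node
    "n4": "contents of /etc/passwd",           # leaf node: stores information on the entity
    "n5": "contents of steen's mailbox",       # leaf node
}

def resolve(start, path):
    node = start
    for label in path:                         # follow one labeled outgoing edge per step
        node = nodes[node][label]
    return nodes[node]                         # the information stored in the leaf node

print(resolve("n0", ["home", "steen", "mbox"]))   # resolution of n0:<home, steen, mbox>
print(resolve("n0", ["etc", "passwd"]))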
Name Spaces -1
Name Spaces -2
The general organization of the UNIX file system implementation on a logical disk of contiguous disk blocks
Name Resolution:
Name spaces offer a convenient mechanism for storing and retrieving information about entities by means of names. More generally, given a path name, it should be possible to look up any information stored in the node referred to by that name. The process of looking up a name is called name resolution. For example, for a path name N:<label1, label2, ..., labeln>, resolution of the name starts at node N.
Closure mechanism:
Name resolution can take place only if we know how and where to start. Knowing how and where to
start name resolution is referred to as the closure mechanism.
Implementation of Name space
A name space forms the heart of a naming service, i.e., a service that allows users and processes to add, remove, and look up names. A naming service is implemented by name servers. In a large-scale DS with many entities, possibly spread across a large geographical area, it is necessary to distribute the implementation of the name space over multiple name servers.
An example partitioning of the DNS name space, including Internet-accessible files, into three layers
A comparison between name servers for implementing nodes from a large-scale name space
partitioned into a global layer, an administrational layer, and a managerial layer.
Implementation of Name Resolution
Assume the (absolute) path name root:<nl, vu, cs, ftp, pub, globe, index.html> is to be resolved. Using a URL notation, this path name would correspond to ftp://ftp.cs.vu.nl/pub/globe/index.html. There are two ways to implement name resolution: iterative and recursive.
Drawback: recursive name resolution puts a higher performance demand on each name server.
Advantage:
Caching results is more effective than with iterative name resolution, and communication costs may be reduced.
Recursive name resolution of <nl, vu, cs, ftp>. Name servers cache intermediate results for subsequent lookups.
The benefit of this approach is that eventually lookup operations can be handled extremely efficiently. For example, suppose that another client later requests resolution of the path name root:<nl, vu, cs, flits>. This name is passed to the root, which can immediately forward it to the name server for the cs node and request it to resolve the remaining path name cs:<flits>.
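A heavily simplified sketch of that benefit (the server layout and addresses are assumed, and the nl and vu servers are collapsed into a single lookup table): the root caches which name server handles the <nl, vu, cs> prefix, so the second lookup is forwarded to it directly.

cs_server = {"ftp": "address of ftp.cs.vu.nl", "flits": "address of flits.cs.vu.nl"}
tree = {("nl", "vu", "cs"): cs_server}     # stands in for the recursion through nl and vu
root_cache = {}                            # prefix -> responsible name server, filled by lookups

def resolve(path):
    prefix, leaf = tuple(path[:-1]), path[-1]
    if prefix not in root_cache:
        print("root: recursing through nl and vu to locate the", prefix[-1], "server")
        root_cache[prefix] = tree[prefix]  # intermediate result cached at the root
    else:
        print("root: forwarding straight to the cached", prefix[-1], "server")
    return root_cache[prefix][leaf]

print(resolve(["nl", "vu", "cs", "ftp"]))
print(resolve(["nl", "vu", "cs", "flits"]))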
The comparison between recursive and iterative name resolution with respect to communication costs
The most important types of resource records forming the contents of nodes in the DNS name space
DNS Implementation
UNIT-IV
Distributed Object-Based Systems
Overview of CORBA
• CORBA: Common Object Request Broker Architecture
• Background:
Architecture of CORBA
– The Object Request Broker (ORB) forms the core of any CORBA
distributed system.
– Horizontal facilities consist of general-purpose high-level services, divided into:
• User interface
• Information management
• System management
• Task management
– Vertical facilities consist of high-level services that are targeted to a
specific application domain such as electronic commerce, banking, and
manufacturing.
Object Model
• CORBA follows an interface based approach to objects:
– Not the objects, but interfaces are the really important entities
– An object may implement one or more interfaces
– Interface descriptions can be stored in an interface repository, and looked
up at runtime
– Mappings from IDL to specific programming languages are part of the CORBA specification (languages include C, C++, Smalltalk, Cobol, Ada, and Java).
• In DCOM and Globe, interfaces can be specified at a lower level in the form of
tables, called binary interfaces.
• Object Request Broker (ORB): CORBA's object broker that connects clients,
objects, and services
• Object adapter: Server side code that handles incoming invocation requests.
• Interface repository:
– Database containing interface definitions and which can be queried at
runtime
– Whenever an interface definition is compiled, the IDL compiler assigns a
repository identifier to that interface.
• Implementation repository: database containing the information needed to activate and run registered object implementations (servers).
Table: request types supported by CORBA, with the failure semantics and a description of each.
Communication Models
• CORBA supports the message-queuing model through the messaging service.
• Internet Inter-ORB Protocol (IIOP) is the implementation of GIOP (General Inter-ORB Protocol) on top of TCP
Processes
• CORBA distinguishes two types of processes: clients and servers.
Naming
• In CORBA, it is essential to distinguish specification-level and implementation-level object references
• Conclusion: Object references in CORBA used to be highly implementation
dependent. Different implementations of CORBA could normally not exchange their
references.
Synchronization
The two most important services that facilitate synchronization in CORBA are its concurrency
control service and its transaction service.
The two services collaborate to implement distributed and nested transactions using two-
phase locking.
CASCADE is built to provide a generic, scalable mechanism that allows any kind of CORBA
object to be cached.
CASCADE offers a caching service implemented as a large collection of object servers referred
to as a Domain Caching Server (DCS).
Each DCS is an object server running on a CORBA ORB. The collection of DCSs may be
spread across the Internet.
Fault Tolerance
In CORBA version 3, fault tolerance is addressed. The basic approach for fault tolerance is to
replicate objects into object groups.
Masking failures is achieved through replication by putting objects into object groups. Object
groups are transparent to clients. They appear as normal objects.
This approach requires a separate type of object reference: Interoperable Object Group
Reference (IOGR).
IOGRs have the same structure as IORs; the main difference is in how they are used. In an IOR, an additional profile denotes an alternative means of reaching the same object, whereas in an IOGR it denotes another replica.
Security
The underlying idea is to allow the client and object to be mostly unaware of all the security
policies, except perhaps at binding time. The ORB does the rest.
Specific policies are passed to the ORB as (local) policy objects and are invoked when
necessary.
Distributed COM
DCOM: Distributed Component Object Model
DCOM Overview
DCOM is related to many things that have been introduced by Microsoft in the past couple of
years:
– Proxy marshaler: handles the way that object references are passed between
different machines.
Naming in DCOM
Observation: DCOM can handle only objects as temporary instances of a class. To
accommodate objects that can outlive their client, something else is needed.
Moniker: A name that uniquely identifies a Microsoft COM (persistent) object, similar to a directory path name.
Security in DCOM
Declarative security: Register per object what the system should enforce with respect to
authentication. Authentication is associated with users and user groups. There are different
authentication levels.
Delegation: A server can impersonate a client depending on a level.
Note: There is also support for programmatic security by which security levels can be set by
an application, as well as the required security services (see book).
Globe
•A Globe object is a physically distributed shared object: the object's state may be physically
distributed across several machines
•Local object: A non-distributed object residing in a single address space, often representing a distributed shared object
•Contact point: A point where clients can contact the distributed object; each contact point is
described through a contact address
•Observation: Globe attempts to separate functionality from distribution by distinguishing
different local sub-objects:
Control sub object: Connects the user defined interfaces of the semantics sub object to
the generic, predefined interfaces of the replication sub object
Communication in GLOBE
State-transition tables for the read method and the modify method (columns: State, Action to take, Method call, Next state)
State transitions and actions with primary-backup replication (read method)
Security
An additional security sub object checks for authorized communication, invocation, and
parameter values. Globe can be integrated with existing security services:
The position of a security sub object in a Globe local object
Comparison of CORBA, DCOM, and Globe:
Synchronous communication: CORBA: Yes; DCOM: Yes; Globe: Yes
Asynchronous communication: CORBA: Yes; DCOM: Yes; Globe: No
Object server: CORBA: Flexible (POA); DCOM: Hard-coded; Globe: Object dependent
UNIT-V
Distributed Multimedia Systems
Distributed multimedia involves large quantities of distributed data, typically streamed out to one or many receivers of the data, running over general-purpose infrastructure. The data is time-sensitive, but not necessarily real-time.
QoS:
QoS guarantees require that resources are allocated and scheduled for multimedia applications under real-time requirements. This creates a need for QoS-driven resource management when resources are shared between several applications and some of these have real-time deadlines.
Figure: a typical distributed multimedia system, in which PCs/workstations with cameras, microphones, screens, codecs, and window systems are linked by network connections to a mixer and a video file system/store. White boxes represent media-processing components, many of which are implemented in software, including: codec (coding/decoding filter) and mixer (sound-mixing component); arrows denote multimedia streams.
Figure: the QoS manager's task. Given a flow spec, the QoS manager evaluates the new requirements against the available resources. If they are sufficient, the requested resources are reserved, a resource contract is issued, and the application is allowed to proceed; the application then runs with resources as per the resource contract and notifies the QoS manager of increased resource requirements. If they are not sufficient, the QoS manager negotiates a reduced resource provision with the application; if agreement is reached, the reduced resources are reserved, otherwise the application is not allowed to proceed.
Admission control
QoS values must be mapped onto resource requirements; a small admission check is sketched after the list below.
1. Schedulability: can the CPU slots be assigned to tasks such that all tasks receive sufficient slots?
2. Buffer space: e.g., for encoding/decoding, jitter-removal buffers, etc.
3. Bandwidth: e.g., an MPEG-1 stream with VCR quality generates about 1.5 Mbps.
4. Availability/capabilities of devices.
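A minimal admission test along these lines (all numbers, names, and thresholds are made up) maps a new stream's QoS values onto the four resource questions above and admits the stream only if every resource still has enough headroom; the roughly 1.5 Mbps MPEG-1 figure is taken from item 3.

def admit(stream, free):
    checks = {
        "cpu":       stream["cpu_fraction"]   <= free["cpu_fraction"],    # schedulability
        "buffer":    stream["buffer_bytes"]   <= free["buffer_bytes"],    # codec/jitter buffers
        "bandwidth": stream["bandwidth_mbps"] <= free["bandwidth_mbps"],
        "device":    stream["device"]         in free["devices"],         # e.g. a free codec
    }
    return all(checks.values()), checks

free = {"cpu_fraction": 0.40, "buffer_bytes": 4_000_000,
        "bandwidth_mbps": 6.0, "devices": {"codec0", "camera0"}}
mpeg1_stream = {"cpu_fraction": 0.25, "buffer_bytes": 1_500_000,
                "bandwidth_mbps": 1.5, "device": "codec0"}

ok, detail = admit(mpeg1_stream, free)
print("admit" if ok else "reject", detail)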
Resource Management
• Resource Scheduling
1. Fair Scheduling
If several streams compete for the same resource, it is necessary to consider fairness and to prevent ill-behaved streams from taking too much bandwidth. A round-robin method applied on a bit-by-bit basis (fair queuing) provides more fairness with respect to varying packet sizes and arrival times.
2. Real-time Scheduling
Real-time scheduling algorithms assign CPU time slots to a set of processes in a manner that ensures that they complete their tasks on time. A common example is earliest-deadline-first (EDF) scheduling, sketched below.
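A minimal sketch of EDF (the task set and deadlines are made up): at every scheduling decision, the ready task with the closest deadline is run first.

import heapq

tasks = [("audio frame", 0.033), ("video frame", 0.040), ("file transfer", 1.000)]
ready = [(deadline, name) for name, deadline in tasks]
heapq.heapify(ready)                        # ready queue ordered by deadline

while ready:
    deadline, name = heapq.heappop(ready)   # always pick the most urgent task
    print(f"run {name} (deadline {deadline:.3f} s)")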