Lect1 - Introduction
Tightly coupled systems
[Figure: tightly coupled system: multiple CPUs sharing a system-wide memory through interconnection hardware]
Loosely coupled systems
The processors do not share memory, and
each processor has its own local memory.
These are also known as distributed
computing systems or simply distributed
systems.
[Figure: loosely coupled system: processors with local memories connected by a communication network]
Distributed Computing System
A DCS is a collection of independent computers that
appears to its users as a single coherent system,
or
A collection of processors interconnected by a
communication network in which
Each processor has its own local memory and other
peripherals, and
The communication between any two processors of
the system takes place by message passing.
For a particular processor, its own resources are
local, whereas the other processors and their
resources are remote.
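As a minimal sketch of the message passing described above (the loopback address, port numbers, and the node_a/node_b names are illustrative assumptions, not part of the lecture), two processors with only local memory can communicate over sockets:

```python
import socket
import threading
import time

def node_b(port=50017):
    """Receiving node: has only local memory and is reachable only by messages."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    msg = conn.recv(1024).decode()          # the request arrives as a message
    conn.sendall(("ack: " + msg).encode())  # the reply is also a message
    conn.close()
    srv.close()

def node_a(port=50017):
    """Sending node: no shared memory, so all state is exchanged by messages."""
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"hello")
    reply = cli.recv(1024).decode()
    cli.close()
    return reply

t = threading.Thread(target=node_b)
t.start()
time.sleep(0.2)            # give the "remote" node time to start listening
print(node_a())            # ack: hello
t.join()
```

The two nodes run here as threads of one program for convenience; in a real DCS each would run on a different machine of the network.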
Cont…
Together, a processor and its resources are
usually referred to as a node or site or machine
of the distributed computing system.
Minicomputer Model
[Figure: minicomputer model: several minicomputers, each with attached terminals, connected by a communication network]
Workstation Model
It consists of several workstations interconnected by a
communication network.
[Figure: workstation model: several workstations connected by a communication network]
Cont… : Issues
1. How does the system find an idle workstation?
2. How is a process transferred from one
workstation to get it executed on another
workstation?
3. What happens to a remote process if a user
logs onto a workstation that was idle until now
and was being used to execute a process of
another workstation?
Cont… : Issues
Three approaches to handle third issue:
Allow the remote process to share the resources
of the workstation along with the logged-on
user's own processes.
Workstation-Server Model
[Figure: workstation-server model: workstations and minicomputers (used as a file server, a print server, and a database server) connected by a communication network]
Cont…
Advantages of workstation server model over
workstation model:
It is much cheaper to use a few minicomputers equipped
with large, fast disks than many workstations with local disks.
System maintenance is easy.
Flexibility to use any workstation and access the files
because all files are managed by file servers.
Request-response protocol is mainly used to access the
services of the server machines.
Both client and server can run on the same computer.
A user gets guaranteed response time because
workstations are not used for executing remote
processes.
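The request-response interaction between a workstation and a server can be sketched with Python's standard xmlrpc module; the file server, its read_file service, and the port number are hypothetical, not from the lecture:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Hypothetical "file server" exposing one service via request-response RPC.
server = SimpleXMLRPCServer(("127.0.0.1", 8765), logRequests=False)
server.register_function(lambda name: "contents of " + name, "read_file")
threading.Thread(target=server.serve_forever, daemon=True).start()

# A workstation (the client) sends a request and blocks for the response.
proxy = ServerProxy("http://127.0.0.1:8765")
print(proxy.read_file("notes.txt"))   # contents of notes.txt
```

As the slide notes, client and server may run on the same computer; here both live in one process purely for demonstration.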
Processor Pool Model
Based on the notion that most of the time the CPU
computing power goes unused; users need large
computing power only for short periods.
The pool of processors consists of a large number of
microcomputers and minicomputers attached to the
network.
Each processor in the pool has its own memory to load
and run the program.
A user does not log onto a particular machine but to the
system as a whole.
Cont…
[Figure: processor-pool model: a pool of processors plus a run server and a file server connected by a communication network, accessed from terminals]
Cont…
Advantages over workstation server model:
Better utilization
The entire processing power of the system is
available for use by the currently logged-on users.
Greater flexibility
The system’s services can be easily expanded
without the need to install any more computers;
The processors in the pool can be allocated to act
as extra servers to carry any additional load arising
from an increased user population or to provide new
services.
Cont…
Disadvantage
Unsuitable for high-performance interactive
applications, because of the slow communication
between the computer on which a user's application
program executes and the terminal through which the
user interacts with the system.
Hybrid Model
The workstation-server model is most widely used for small
programs/applications.
The processor-pool model is ideal for massive computation jobs.
Privacy Constraints
Resource sharing
Such as software libraries, databases, and hardware resources.
Factors that led to the emergence of
distributed computing system
Better price-performance ratio
Higher reliability
Reliability refers to the degree of tolerance against errors
and component failures in a system.
Achieved by multiplicity of resources.
Cont…
Extensibility and incremental growth
By adding additional resources to the system as and when
the need arises.
Such systems are termed open distributed systems.
3. Fault Tolerance
Low in a network OS
High in a distributed OS
User Mobility
No matter which machine a user is logged onto, he
should be able to access a resource with the same
name.
Replication Transparency
Replicas are used for better performance and reliability.
No deadlock
It ensures that a situation will never occur in which
competing processes prevent each other's progress even
though no single one requests more resources than are
available in the system.
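One standard way to obtain the no-deadlock guarantee, not specific to this lecture, is to make every process acquire resources in a single global order; a minimal sketch in Python, where the global order is simply the locks' object ids:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def acquire_in_order(*locks):
    """Acquire locks in one fixed global order (here: by object id), so two
    competing processes can never hold resources in conflicting orders."""
    for lock in sorted(locks, key=id):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

def worker(first, second, results, i):
    acquire_in_order(first, second)   # same global order for every worker
    results[i] = "done"
    release_all(first, second)

results = {}
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, results, 1))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, results, 2))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(results.items()))   # [(1, 'done'), (2, 'done')]
```

Without the sorting step, the opposite argument orders of the two workers could deadlock; with it, both always finish.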
Performance transparency
The aim of performance transparency is to allow the
system to be automatically reconfigured to improve
performance, as loads vary dynamically in the system.
Monolithic kernel model
The large size of the kernel reduces the overall flexibility and
configurability of the resulting OS.
Microkernel model
Goal is to keep the kernel as small as possible.
Cont…
Being modular in nature, the OS is easy to design, implement, and
install.
For adding or changing a service, there is no need to stop
the system and boot a new kernel, as is the case with a
monolithic kernel.
Performance penalty:
Each server module runs in its own address space, so
some form of message-based IPC is required to get any
job done.
Message passing between server processes and the
microkernel requires context switches, resulting in additional
performance overhead.
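The message-based IPC described above can be modeled with separate OS processes (each with its own address space, as microkernel server modules have) exchanging request and reply messages through queues; the file_server module and its read operation are illustrative assumptions:

```python
import multiprocessing as mp

def file_server(requests, replies):
    """A user-space server module: it runs in its own address space and can
    be reached only through messages, as in a microkernel design."""
    op, arg = requests.get()            # wait for a request message
    if op == "read":
        replies.put("bytes of " + arg)  # send back the reply message

if __name__ == "__main__":
    requests, replies = mp.Queue(), mp.Queue()
    server = mp.Process(target=file_server, args=(requests, replies))
    server.start()
    requests.put(("read", "a.txt"))     # message-based IPC: the request...
    print(replies.get())                # ...and its reply: bytes of a.txt
    server.join()
```

Each queue operation here crosses an address-space boundary, which is exactly where the context-switch overhead the slide mentions is paid.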
Cont…
Advantages of microkernel model over monolithic kernel
model:
Flexibility in design, maintenance, and portability.
In practice the performance penalty is not severe, because
the overhead involved in exchanging messages is usually
small.
PERFORMANCE
Design principles for better performance are:
Batch if possible
Transferring data in large chunks is more efficient than
transferring individual pages.
Caching whenever possible
Saves a large amount of time and network bandwidth.
Centralized database
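The caching principle above can be sketched with a memoized "remote fetch"; the fetch_page function and its data are hypothetical, standing in for any operation that would otherwise cross the network:

```python
import functools

calls = {"count": 0}   # counts how many "network" fetches actually happen

@functools.lru_cache(maxsize=128)
def fetch_page(page_id):
    """Pretend remote fetch; in a real system each call crosses the network."""
    calls["count"] += 1
    return "data:" + str(page_id)

fetch_page(1)
fetch_page(1)          # served from the local cache: no network traffic
fetch_page(2)
print(calls["count"])  # 2
```

Only two of the three requests reach the "server"; the repeat is answered locally, which is where the time and bandwidth savings come from.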
What is a Cloud?
SLAs
Web Services
Virtualization
Why Cloud Computing?
Cloud computing is an important model for the
distribution of, and access to, computing resources.
As-needed availability:
Aligns resource expenditure with actual resource
usage, allowing the organization to pay only
for the resources required, when they are
required.
Fog, Edge & Cloud
Local data processing helps to mitigate the
weaknesses of cloud computing.
In addition, fog computing brings new
advantages, such as greater context-awareness,
real-time processing, lower bandwidth
requirements, etc.
Fog Computing
A standard that defines how edge computing
should work, and it facilitates the operation of
compute, storage and networking services
between end devices and cloud computing
data centers.
Additionally, many use fog as a jumping-off
point for edge computing.
Why Fog Computing?