


Multiprocessor Systems

Although single-processor systems are most common, multiprocessor systems
(also known as parallel systems or tightly coupled systems) are growing
in importance. Such systems have two or more processors in close communication,
sharing the computer bus and sometimes the clock, memory, and
peripheral devices.

Multiprocessor systems have three main advantages:

1. Increased throughput. By increasing the number of processors, we expect
to get more work done in less time. The speed-up ratio with N processors
is not N, however; rather, it is less than N (a worked speed-up example
follows this list). When multiple processors
cooperate on a task, a certain amount of overhead is incurred in keeping
all the parts working correctly. This overhead, plus contention for shared
resources, lowers the expected gain from additional processors. Similarly,
N programmers working closely together do not produce N times the
amount of work a single programmer would produce.

2. Economy of scale. Multiprocessor systems can cost less than equivalent
multiple single-processor systems, because they can share peripherals,
mass storage, and power supplies. If several programs operate on the
same set of data, it is cheaper to store those data on one disk and to have
all the processors share them than to have many computers with local
disks and many copies of the data.

3. Increased reliability. If functions can be distributed properly among
several processors, then the failure of one processor will not halt the
system, only slow it down. If we have ten processors and one fails, then
each of the remaining nine processors can pick up a share of the work of
the failed processor. Thus, the entire system runs only 10 percent slower,
rather than failing altogether.
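
To make the throughput claim in item 1 concrete, one standard way to estimate the
speed-up (Amdahl's law, which the text itself does not name) assumes a fraction p of
the work can run in parallel while the rest stays serial; the figure p = 0.9 below is
purely illustrative:

\[
  S(N) = \frac{1}{(1 - p) + p/N}, \qquad
  S(10) = \frac{1}{0.1 + 0.09} \approx 5.3
\]

So even with ten processors and 90 percent of the work parallelizable, the speed-up
is roughly five-fold rather than ten-fold, which is the "less than N" behavior
described above.
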
Increased reliability of a computer system is crucial in many applications.
The ability to continue providing service proportional to the level of surviving
hardware is called graceful degradation. Some systems go beyond graceful
degradation and are called fault tolerant, because they can suffer a failure of
any single component and still continue operation. Note that fault tolerance
requires a mechanism to allow the failure to be detected, diagnosed, and, if
possible, corrected. The HP NonStop (formerly Tandem) system uses
both hardware and software duplication to ensure continued operation despite
faults. The system consists of multiple pairs of CPUs, working in lockstep. Both
processors in the pair execute each instruction and compare the results. If the
results differ, then one CPU of the pair is at fault, and both are halted. The
process that was being executed is then moved to another pair of CPUs, and the
instruction that failed is restarted. This solution is expensive, since it involves
special hardware and considerable hardware duplication.
The multiple-processor systems in use today are of two types. Some
systems use asymmetric multiprocessing, in which each processor is assigned
a specific task. A master processor controls the system; the other processors
either look to the master for instruction or have predefined tasks. This scheme
defines a master-slave relationship. The master processor schedules and
allocates work to the slave processors.
The most common systems use symmetric multiprocessing (SMP), in
which each processor performs all tasks within the operating system. SMP
means that all processors are peers; no master-slave relationship exists
between processors. Figure 1.6 illustrates a typical SMP architecture. An
example of an SMP system is Solaris, a commercial version of UNIX designed
by Sun Microsystems. A Solaris system can be configured to employ dozens of
processors, all running Solaris. The benefit of this model is that many processes
can run simultaneously-N processes can run if there are N CPUs-without
causing a significant deterioration of performance. However, we must carefully
control I/O to ensure that the data reach the appropriate processor. Also, since
the CPUs are separate, one may be sitting idle while another is overloaded,
resulting in inefficiencies. These inefficiencies can be avoided if the processors
share certain data structures. A multiprocessor system of this form will allow
processes and resources-such as memory-to be shared dynamically among
the various processors and can lower the variance among the processors. Such
a system must be written carefully, as we shall see in Chapter 6. Virtually all
modern operating systems-including Windows, Windows XP, Mac OS X, and
Linux-now provide support for SMP.
The difference between symmetric and asymmetric multiprocessing may
result from either hardware or software. Special hardware can differentiate the
multiple processors, or the software can be written to allow only one master and
multiple slaves. For instance, Sun's operating system SunOS Version 4 provided
asymmetric multiprocessing, whereas Version 5 (Solaris) is symmetric on the
same hardware.
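
As a small illustration of the SMP discussion above, the C program below asks the
operating system how many processors are currently online. It is a minimal sketch,
not an example from the text, and it assumes a system such as Linux or Solaris where
the _SC_NPROCESSORS_ONLN query is available (a common extension rather than strict
POSIX):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* _SC_NPROCESSORS_ONLN reports how many processors are currently
     * online.  It is a widely supported extension (Linux, Solaris),
     * not part of strict POSIX. */
    long n = sysconf(_SC_NPROCESSORS_ONLN);
    if (n < 1)
        perror("sysconf");
    else
        printf("%ld processors online\n", n);
    return 0;
}

On an SMP machine, any one of the processors reported here may end up running this
program, since all of them are peers.
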
A recent trend in CPU design is to include multiple compute cores on
a single chip. In essence, these are multiprocessor chips. Two-way chips are
becoming mainstream, while N-way chips are going to be common in high-end
systems. Aside from architectural considerations such as cache, memory, and
bus contention, these multicore CPUs look to the operating system just like N
standard processors.
Lastly, blade servers are a recent development in which multiple processor
boards, I/O boards, and networking boards are placed in the same chassis.
The difference between these and traditional multiprocessor systems is that
each blade-processor board boots independently and runs its own operating
system. Some blade-server boards are multiprocessor as well, which blurs the
lines between types of computers. In essence, those servers consist of multiple
independent multiprocessor systems.

MULTIPROGRAMMING & TIME SHARING SYSTEM

One of the most important aspects of operating systems is the ability to
multiprogram. A single user cannot, in general, keep either the CPU or the
I/O devices busy at all times. Multiprogramming increases CPU utilization by
organizing jobs (code and data) so that the CPU always has one to execute.
The idea is as follows: The operating system keeps several jobs in memory
simultaneously (Figure 1.7). This set of jobs can be a subset of the jobs kept in
the job pool-which contains all jobs that enter the system-since the number
of jobs that can be kept simultaneously in memory is usually smaller than
the number of jobs that can be kept in the job pool. The operating system
picks and begins to execute one of the jobs in memory. Eventually, the job
may have to wait for some task, such as an I/O operation, to complete. In a

[Figure 1.7 Memory layout for a multiprogramming system: the operating system at
the bottom of memory (address 0), followed by job 1, job 2, job 3, and job 4, with
memory extending to 512M.]

non-multiprogrammed system, the CPU would sit idle. In a multiprogrammed
system, the operating system simply switches to, and executes, another job.
When that job needs to wait, the CPU is switched to another job, and so on.
Eventually, the first job finishes waiting and gets the CPU back. As long as at
least one job needs to execute, the CPU is never idle.
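
The following toy C program puts rough numbers on this idea. The burst lengths are
made up for illustration, and it assumes the two jobs' I/O operations can overlap
(say, on separate devices); it is a sketch of the reasoning, not anything from the
text:

#include <stdio.h>

int main(void)
{
    /* Two identical jobs, each needing 3 bursts of 1 time unit of CPU,
     * each burst followed by 2 time units of I/O waiting.  The two
     * jobs' I/O is assumed to overlap (separate devices). */
    int bursts = 3, cpu = 1, io = 2;

    /* Run one job to completion, then the other: the CPU idles during
     * every I/O wait. */
    int serial = 2 * bursts * (cpu + io);

    /* Multiprogrammed: while one job waits for I/O, the other gets the
     * CPU.  The second job simply trails the first by one CPU burst
     * (valid here because io >= cpu, so the jobs never contend). */
    int multiprogrammed = bursts * (cpu + io) + cpu;

    int cpu_busy = 2 * bursts * cpu;
    printf("one job at a time: %2d time units elapsed, CPU busy %d of them\n",
           serial, cpu_busy);
    printf("multiprogrammed:   %2d time units elapsed, CPU busy %d of them\n",
           multiprogrammed, cpu_busy);
    return 0;
}

With these figures, CPU utilization rises from 6 busy units out of 18 to 6 out of 10,
even though neither job individually runs any faster.
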

This idea is common in other life situations. A lawyer does not work for
only one client at a time, for example. While one case is waiting to go to trial
or have papers typed, the lawyer can work on another case. If he has enough
clients, the lawyer will never be idle for lack of work. (Idle lawyers tend to
become politicians, so there is a certain social value in keeping lawyers busy.)
Multiprogrammed systems provide an environment in which the various
system resources (for example, CPU, memory, and peripheral devices) are
utilized effectively, but they do not provide for user interaction with the
computer system. Time sharing (or multitasking) is a logical extension of
multiprogramming. In time-sharing systems, the CPU executes multiple jobs
by switching among them, but the switches occur so frequently that the users
can interact with each program while it is running.

Time sharing requires an interactive (or hands-on) computer system,
which provides direct communication between the user and the system. The
user gives instructions to the operating system or to a program directly, using an
input device such as a keyboard or a mouse, and waits for immediate results on
an output device. Accordingly, the response time should be short-typically
less than one second.

A time-shared operating system allows many users to share the computer
simultaneously. Since each action or command in a time-shared system tends
to be short, only a little CPU time is needed for each user. As the system switches
rapidly from one user to the next, each user is given the impression that the
entire computer system is dedicated to his use, even though it is being shared
among many users.

A time-shared operating system uses CPU scheduling and multiprogramming
to provide each user with a small portion of a time-shared computer.
Each user has at least one separate program in memory. A program loaded into
memory and executing is called a process. When a process executes, it typically
executes for only a short time before it either finishes or needs to perform I/O.
I/O may be interactive; that is, output goes to a display for the user, and input
comes from a user keyboard, mouse, or other device. Since interactive I/O
typically runs at "people speeds," it may take a long time to complete. Input,
for example, may be bounded by the user's typing speed; seven characters per
second is fast for people but incredibly slow for computers. Rather than let
the CPU sit idle as this interactive input takes place, the operating system will
rapidly switch the CPU to the program of some other user.
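
A rough calculation shows why such switching pays off. Assuming, purely for
illustration, a 1 GHz processor (the text gives no clock rate), input arriving at
seven characters per second leaves an enormous gap between keystrokes:

\[
  \frac{10^{9}\ \text{cycles/second}}{7\ \text{characters/second}}
  \approx 1.4 \times 10^{8}\ \text{cycles per keystroke}
\]

Letting another user's program run during those hundred-million-cycle gaps is
exactly what a time-shared system does.
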

Time-sharing and multiprogramming require several jobs to be kept
simultaneously in memory. Since in general main memory is too small to
accommodate all jobs, the jobs are kept initially on the disk in the job pool.
This pool consists of all processes residing on disk awaiting allocation of main
memory. If several jobs are ready to be brought into memory, and if there is
not enough room for all of them, then the system must choose among them.
Making this decision is job scheduling, which is discussed in Chapter 5. When
the operating system selects a job from the job pool, it loads that job into
memory for execution. Having several programs in memory at the same time
requires some form of memory management, which is covered in Chapters 8
and 9. In addition, if several jobs are ready to run at the same time, the system
must choose among them. Making this decision is CPU scheduling, which is
discussed in Chapter 5. Finally, running multiple jobs concurrently requires
that their ability to affect one another be limited in all phases of the operating
system, including process scheduling, disk storage, and memory management.
These considerations are discussed throughout the text.

In a time-sharing system, the operating system must ensure reasonable
response time, which is sometimes accomplished through swapping, where
processes are swapped in and out of main memory to the disk. A more common
method for achieving this goal is virtual memory, a technique that allows
the execution of a process that is not completely in memory (Chapter 9).
The main advantage of the virtual-memory scheme is that it enables users
to run programs that are larger than actual physical memory. Further, it
abstracts main memory into a large, uniform array of storage, separating logical
memory as viewed by the user from physical memory. This arrangement frees
programmers from concern over memory-storage limitations.
Time-sharing systems must also provide a file system (Chapters 10 and 11).
The file system resides on a collection of disks; hence, disk management must
be provided (Chapter 12). Also, time-sharing systems provide a mechanism for
protecting resources from inappropriate use (Chapter 14). To ensure orderly
execution, the system must provide mechanisms for job synchronization and
communication (Chapter 6), and it may ensure that jobs do not get stuck in a
deadlock, forever waiting for one another (Chapter 7).

Distributed Systems

A distributed system is a collection of physically separate, possibly heterogeneous
computer systems that are networked to provide the users with access
to the various resources that the system maintains. Access to a shared resource
increases computation speed, functionality, data availability, and reliability.
Some operating systems generalize network access as a form of file access, with
the details of networking contained in the network interface's device driver.
Others make users specifically invoke network functions. Generally, systems
contain a mix of the two modes-for example, FTP and NFS. The protocols
that create a distributed system can greatly affect that system's utility and
popularity.

A network, in the simplest terms, is a communication path between
two or more systems. Distributed systems depend on networking for their
functionality. Networks vary by the protocols used, the distances between
nodes, and the transport media. TCP/IP is the most common network protocol,
although ATM and other protocols are in widespread use. Likewise,
operating-system support of protocols varies. Most operating systems support
TCP/IP, including the Windows and UNIX operating systems. Some systems
support proprietary protocols to suit their needs. To an operating system, a network
protocol simply needs an interface device-a network adapter, for example-with
a device driver to manage it, as well as software to handle data. These
concepts are discussed throughout this book.
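
As a concrete example of the point above, the C sketch below reaches the operating
system's TCP/IP support through the BSD sockets interface, the usual
application-level route on UNIX-like systems (Windows offers a very similar Winsock
API). The host name and port are placeholders, and this is an illustrative sketch
rather than anything from the text:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;       /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;   /* TCP */

    /* "example.com" and port "80" are placeholders. */
    if (getaddrinfo("example.com", "80", &hints, &res) != 0) {
        fprintf(stderr, "name lookup failed\n");
        return 1;
    }

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        perror("socket/connect");
        return 1;
    }

    /* From here on, the kernel's protocol code and the network adapter's
     * device driver move the bytes; the application only writes to and
     * reads from the socket descriptor. */
    const char *request = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
    write(fd, request, strlen(request));

    char buf[512];
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s\n", buf);
    }

    close(fd);
    freeaddrinfo(res);
    return 0;
}

Everything below the socket descriptor (the protocol processing, the network
adapter, and its device driver) is managed by the operating system, which is the
division of labor described in the paragraph above.
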

Networks are characterized based on the distances between their nodes.
A local-area network (LAN) connects computers within a room, a floor,
or a building. A wide-area network (WAN) usually links buildings, cities,
or countries. A global company may have a WAN to connect its offices
worldwide. These networks may run one protocol or several protocols. The
continuing advent of new technologies brings about new forms of networks.
For example, a metropolitan-area network (MAN) could link buildings within
a city. Bluetooth and 802.11 devices use wireless technology to communicate
over a distance of several feet, in essence creating a small-area network such
as might be found in a home.

The media to carry networks are equally varied. They include copper wires,
fiber strands, and wireless transmissions between satellites, microwave dishes,
and radios. When computing devices are connected to cellular phones, they
create a network. Even very short-range infrared communication can be used
for networking. At a rudimentary level, whenever computers communicate,
they use or create a network. These networks also vary in their performance
and reliability.
Some operating systems have taken the concept of networks and distributed
systems further than the notion of providing network connectivity. A
network operating system is an operating system that provides features such
as file sharing across the network and that includes a communication scheme
that allows different processes on different computers to exchange messages.
A computer running a network operating system acts autonomously from all
other computers on the network, although it is aware of the network and is
able to communicate with other networked computers. A distributed operating
system provides a less autonomous environment: The different operating
systems communicate closely enough to provide the illusion that only a single
operating system controls the network.
1) Distributed operating systems are also referred to as loosely coupled systems,
whereas parallel processing systems are referred to as tightly coupled systems.

2) A loosely coupled system is one in which the processors do not share memory
and each processor has its own local memory, whereas in a tightly coupled system
there is a single system-wide primary memory shared by all the processors.

3) The processors of distributed operating systems can be placed far away from
each other to cover a wider geographic area, which is not the case with parallel
processing systems.

4) The number of processors that can be usefully deployed is very small in a parallel
processing operating system, whereas a larger number of processors can be usefully
deployed in a distributed operating system.

5) A global clock is used to control SIMD and MIMD execution in parallel processing
systems; in a distributed system no global clock is present, so synchronization
algorithms are used instead.

6) In a distributed operating system there are unpredictable communication delays
between processors, whereas the processors in a parallel processing system
communicate over an interconnection network.
