Ch:1 Introduction (12M) : Operating System
Introduction
An operating system is a program that acts as an intermediary between the
user of a computer and the computer hardware.
Advantages
Convenience: it makes the computer system easier to use.
Efficiency: it allows the computer hardware to be used in an efficient manner.
Mainframe Systems
Mainframe computer systems were the first computers used to tackle many commercial and scientific applications. In
this section, we trace the growth of mainframe systems from simple batch systems, where the computer runs one, and
only one, application, to time-shared systems, which allow for user interaction with the computer system.
Batch Systems
Early computers were physically enormous machines run from a console. The common input devices were card
readers and tape drives. The common output devices were line printers, tape drives, and card punches. The user did
not interact directly with the computer system. Rather, the user prepared a job, which consisted of the program, the
data, and some control information about the nature of the job (control cards), and submitted it to the computer
operator. The job was usually in the form of punch cards. At some later time (after minutes, hours, or days), the
output appeared. The output consisted of the result of the program.
The operating system in these early computers was fairly simple. Its major task was to transfer control
automatically from one job to the next. The operating system was always resident in memory.
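As a minimal sketch (not from the text; the job names and bodies are hypothetical, and a real monitor read jobs from punch cards or tape rather than an array), the resident monitor's job-to-job transfer looks like this in C:

#include <stdio.h>

typedef void (*job_fn)(void);

static void payroll(void)   { printf("running payroll job\n"); }
static void inventory(void) { printf("running inventory job\n"); }

int main(void) {
    job_fn job_queue[] = { payroll, inventory };  /* hypothetical job queue */
    int n = sizeof job_queue / sizeof job_queue[0];

    for (int i = 0; i < n; i++) {
        job_queue[i]();  /* give the CPU to the current job */
        /* Control returns here when the job finishes; the resident
           monitor immediately starts the next job, with no pause for
           operator intervention. */
    }
    return 0;
}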
Multiprogrammed Systems
The most important aspect of job scheduling is the ability to multiprogram. A single user cannot, in general,
keep either the CPU or the I/O devices busy at all times. Multiprogramming increases CPU utilization by organizing
jobs so that the CPU always has one to execute.
The idea is as follows: The operating system keeps several jobs in memory simultaneously (Figure 1.3). This
set of jobs is a subset of the jobs kept in the job pool, since the number of jobs that can be kept simultaneously in
memory is usually much smaller than the number of jobs that can be in the job pool. The operating system picks and
begins to execute one of the jobs in the memory. Eventually, the job may have to wait for some task, such as an I/O
operation, to complete. In a non-multiprogrammed system, the CPU would sit idle. In a multiprogramming system,
the operating system simply switches to, and executes, another job. When that job needs to wait, the CPU is switched
to another job, and so on. Eventually, the first job finishes waiting and gets the CPU back. As long as at least one job
needs to execute, the CPU is never idle.
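The following small simulation is an illustrative sketch (the two jobs, their three CPU bursts each, and the one-tick I/O delay are all assumed, not from the text) of that switching: whenever the running job blocks for I/O, the CPU is handed to the other ready job, so no tick is wasted.

#include <stdio.h>

typedef struct {
    const char *name;
    int cpu_left;  /* CPU ticks this job still needs     */
    int io_left;   /* I/O ticks remaining; 0 means ready */
} Job;

int main(void) {
    Job jobs[2] = { {"A", 3, 0}, {"B", 3, 0} };

    for (int tick = 0; tick < 6; tick++) {
        int cpu_busy = 0;
        for (int i = 0; i < 2; i++) {
            Job *j = &jobs[i];
            if (j->io_left > 0) { j->io_left--; continue; }  /* I/O proceeds in parallel */
            if (!cpu_busy && j->cpu_left > 0) {
                printf("tick %d: CPU runs job %s\n", tick, j->name);
                cpu_busy = 1;
                if (--j->cpu_left > 0)
                    j->io_left = 1;  /* job blocks for I/O; the OS switches away */
            }
        }
        if (!cpu_busy)
            printf("tick %d: CPU idle\n", tick);
    }
    return 0;
}

Running it prints jobs A and B alternating on every tick, with no idle line: as long as some job needs to execute, the CPU is never idle.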
Time-Sharing Systems
Multiprogrammed, batched systems provided an environment where the various system resources (for
example, CPU, memory, peripheral devices) were utilized effectively, but they did not provide for user interaction with
the computer system. Time sharing (or multitasking) is a logical extension of multiprogramming. The CPU executes
multiple jobs by switching among them, but the switches occur so frequently that the users can interact with each
program while it is running.
An interactive (or hands-on) computer system provides direct communication between the user and the
system. The user gives instructions to the operating system or to a program directly, using a keyboard or a mouse,
and waits for immediate results.
A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a
small portion of a time-shared computer. Each user has at least one separate program in memory. A program loaded
into memory and executing is commonly referred to as a process. When a process executes, it typically executes for
only a short time before it either finishes or needs to perform I/O. I/O may be interactive; that is, output goes to a
display for the user, and input comes from a user keyboard, mouse, or other device.
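The scheduling mechanism behind this is typically round-robin time slicing. A minimal sketch (the process names and the one-unit quantum are hypothetical): each process in turn receives a short quantum, so every user's program keeps making visible progress.

#include <stdio.h>

int main(void) {
    const char *proc[] = { "editor", "compiler", "shell" };  /* one per user */
    int remaining[]    = { 2, 4, 1 };  /* time units each still needs */
    int n = 3, quantum = 1, left = 2 + 4 + 1;

    for (int i = 0; left > 0; i = (i + 1) % n) {  /* round-robin scan */
        if (remaining[i] == 0) continue;          /* process finished */
        int slice = remaining[i] < quantum ? remaining[i] : quantum;
        remaining[i] -= slice;
        left -= slice;
        printf("dispatch %-8s for %d unit(s), %d left\n",
               proc[i], slice, remaining[i]);
    }
    return 0;
}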
Desktop Systems
Personal computers (PCs) appeared in the 1970s. During their first decade, the CPUs in PCs lacked the features needed
to protect an operating system from user programs. PC operating systems therefore were neither multiuser nor
multitasking. However, the goals of these operating systems have changed with time; instead of maximizing CPU
and peripheral utilization, the systems opt for maximizing user convenience and responsiveness. These systems
include PCs running Microsoft Windows and the Apple Macintosh.
Multiprocessor Systems
Most systems to date are single-processor systems; that is, they have only one main CPU. However,
multiprocessor systems (also known as parallel systems or tightly coupled systems) are growing in importance.
Such systems have more than one processor in close communication, sharing the computer bus, the clock, and
sometimes memory and peripheral devices.
Multiprocessor systems have three main advantages.
1. Increased throughput. By increasing the number of processors, we hope to get more work done in less time. The
speed-up ratio with N processors is not N, however; it is less than N. When multiple processors cooperate on
a task, a certain amount of overhead is incurred in keeping all the parts working correctly. This overhead, plus
contention for shared resources, lowers the expected gain from additional processors (a simple numeric model of
this effect appears after the list).
2. Economy of scale. Multiprocessor systems can save more money than multiple single-processor systems, because
they can share peripherals, mass storage, and power supplies.
3. Increased reliability. If functions can be distributed properly among several processors, then the failure of one
processor will not halt the system, only slow it down. If we have ten processors and one fails, then each of the
remaining nine processors must pick up a share of the work of the failed processor. Thus, the entire system runs only
10 percent slower, rather than failing altogether.
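The numeric model promised above is only a sketch: it assumes, in the spirit of Amdahl's law (which the text does not name), that a fixed fraction s of the work is inherently serial, giving speedup = 1 / (s + (1 - s)/N), which is always less than N. The value s = 0.05 is an arbitrary example.

#include <stdio.h>

int main(void) {
    double s = 0.05;  /* assumed serial fraction of the work */
    for (int n = 1; n <= 16; n *= 2) {
        double speedup = 1.0 / (s + (1.0 - s) / n);
        printf("N = %2d processors -> speedup %.2f (ideal %d)\n",
               n, speedup, n);
    }
    return 0;
}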
The most common multiple-processor systems now use symmetric multiprocessing (SMP), in which each processor
runs an identical copy of the operating system, and these copies communicate with one another as needed.
Some systems use asymmetric multiprocessing, in which each processor is assigned a specific task. A master
processor controls the system; the other processors either look to the master for instruction or have predefined tasks.
This scheme defines a master-slave relationship.
SMP means that all processors are peers; no master-slave relationship exists between processors. Each
processor concurrently runs a copy of the operating system. The benefit of this model is that many processes
can run simultaneously (N processes can run if there are N CPUs) without causing a significant deterioration of
performance. However, we must carefully control I/O to ensure that the data reach the appropriate processor. Also,
since the CPUs are separate, one may be sitting idle while another is overloaded, resulting in inefficiencies.
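As a concrete picture of the peer model, this sketch (POSIX threads assumed; compile with cc smp.c -lpthread) starts four identical workers; on an SMP machine the kernel is free to run them on any available CPU, with no master directing the others.

#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4

static void *worker(void *arg) {
    long id = (long)arg;
    /* Every worker runs the same code: no master, no slave. */
    printf("peer worker %ld running\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid[NWORKERS];
    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}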
Virtually all modern operating systems, including Windows NT, Solaris, Digital UNIX, OS/2, and Linux, now provide
support for SMP. Sun's operating system SunOS Version 4 provided asymmetric multiprocessing, whereas Version 5
(Solaris 2) is symmetric.
Distributed Systems
A network, in the simplest terms, is a communication path between two or more systems. Distributed
systems depend on networking for their functionality. By being able to communicate, distributed systems are able to
share computational tasks and provide a rich set of features to users.
Networks vary by the protocols used, the distances between nodes, and the transport media. TCP/IP is the
most common network protocol, although ATM and other protocols are in widespread use. Likewise, operating-
system support of protocols varies. Most operating systems support TCP/IP, including the Windows and UNIX
operating systems. Some systems support proprietary protocols to suit their needs. To an operating system, a network
protocol simply needs an interface device (a network adapter, for example) with a device driver to manage it.
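From a program's point of view, that operating-system support usually surfaces as the socket interface. A minimal sketch (POSIX sockets; the address 127.0.0.1 and echo port 7 are assumptions, so connect will fail unless a server is actually listening there):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);  /* ask the OS for a TCP endpoint */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(7);                /* assumed echo service */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0)
        write(fd, "hello", 5);  /* the kernel packages this as TCP/IP
                                   and drives the network adapter */
    else
        perror("connect");      /* no server at the assumed address */

    close(fd);
    return 0;
}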
Clustered Systems
Like parallel systems, clustered systems gather together multiple CPUs to accomplish computational work.
Clustered systems differ from parallel systems, however, in that they are composed of two or more individual
systems coupled together. The generally accepted definition is that clustered computers share storage and are closely
linked via LAN networking.
Clustering is usually performed to provide high availability. A layer of cluster software runs on the cluster
nodes. Each node can monitor one or more of the others (over the LAN). If the monitored machine fails, the
monitoring machine can take ownership of its storage, and restart the application(s) that were running on the failed
machine.
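A minimal sketch of that monitoring loop (the node name, probe function, and failover step are hypothetical stubs; a real cluster layer exchanges heartbeats over the LAN):

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical probe; a real cluster would send a network heartbeat.
   This stub pretends the monitored node dies after three probes so
   the sketch terminates. */
static bool node_alive(const char *node) {
    static int probes = 0;
    (void)node;
    return ++probes < 4;
}

int main(void) {
    int missed = 0;
    while (missed < 3) {          /* tolerate brief glitches */
        if (node_alive("node-b"))
            missed = 0;
        else
            missed++;
        sleep(1);                 /* heartbeat interval (shortened here) */
    }
    /* Failover: take ownership of the failed node's storage and
       restart its applications (details omitted in this sketch). */
    printf("node-b presumed failed: taking over its storage and applications\n");
    return 0;
}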
In asymmetric clustering, one machine is in hot standby mode while the other is running the applications.
The hot standby host (machine) does nothing but monitor the active server. If that server fails, the hot standby host
becomes the active server. In symmetric mode, two or more hosts are running applications, and they are monitoring
each other. This mode is obviously more efficient, as it uses all of the available hardware. It does require that more
than one application be available to run.
Real-Time Systems
Another form of a special-purpose operating system is the real-time system. A real-time system is used when rigid
time requirements have been placed on the operation of a processor or the flow of data; thus, it is often used as a
control device in a dedicated application. Systems that control scientific experiments, medical imaging systems,
and some industrial control systems are real-time systems.
A real-time system has well-defined, fixed time constraints. Processing must be done within the defined
constraints, or the system will fail.
Real-time systems come in two flavors: hard and soft. A hard real-time system guarantees that critical tasks
be completed on time.
A less restrictive type of real-time system is a soft real-time system, where a critical real-time task gets
priority over other tasks, and retains that priority until it completes.
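On a general-purpose kernel, this kind of soft real-time priority is usually requested through a scheduling class. A minimal sketch (Linux sched_setscheduler assumed; the priority value 50 is arbitrary, and the call normally requires root privileges):

#include <sched.h>
#include <stdio.h>

int main(void) {
    struct sched_param p = { .sched_priority = 50 };  /* assumed priority */

    /* SCHED_FIFO: run ahead of ordinary tasks until we block or yield,
       i.e., keep priority until the critical work completes. */
    if (sched_setscheduler(0, SCHED_FIFO, &p) != 0) {
        perror("sched_setscheduler");  /* typically needs privileges */
        return 1;
    }
    printf("critical task now has soft real-time priority\n");
    /* ... time-constrained work would go here ... */
    return 0;
}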