DCA6101-Unit 12-Operating System Concepts
12.1 Introduction
As we discussed in the previous unit, computer software has become a driving force. It is the engine that drives business decision making, and it serves as the basis for modern scientific investigation and engineering problem solving. In this unit, we will study operating system concepts such as the functions of an Operating System, the development of Operating Systems (early systems, simple batch systems, multiprogrammed batch systems, time-sharing systems, distributed systems, and real-time operating systems), Operating System components, Operating System services, and Operating Systems for different computers.
Objectives:
After studying this unit, you should be able to:
describe an Operating System
discuss the various functions of an Operating System
explain the evolution of Operating Systems
The most important components of the user interface are the command interpreter, the file system, on-line help, and application integration. The recent trend has been toward increasingly integrated graphical user interfaces that encompass the activities of multiple processes on networks of computers.
Daily uses of an Operating System
– Running application programs
– Formatting floppy diskettes
– Setting up directories to organize the files
– Displaying a list of files stored on a particular disk
– Verifying that there is enough room on a disk to save a file.
– Protecting and backing up your files by copying them to other disks for
safekeeping.
Self Assessment Questions
1. ________ is software that hides lower level details and provides a set of
higher-level functions.
2. ________ transforms the computer hardware into multiple virtual
computers, each belonging to a different program.
Even in the research lab, there were many researchers competing for limited computing time. The first solution was a reservation system, with researchers signing up for specific time slots.
The answer to this problem was to have programmers prepare their work off-line on some input medium (often punched cards, paper tape, or magnetic tape) and then hand the work to a computer operator. The computer operator would load up jobs in the order received (with priority overrides based on politics and other factors). Each job still ran one at a time with complete control of the computer, but as soon as a job finished, the operator would transfer the results to some output medium (punched cards, paper tape, magnetic tape, or printed paper) and deliver the results to the appropriate programmer. If the program ran to completion, the result would be some output data. If the program crashed, the contents of memory would be transferred to some output medium for the programmer to study (because some of the early business computing systems used magnetic core memory, these became known as "core dumps").
After the first successes with digital computer experiments, computers moved out of the lab and into practical use. The first practical application of these experimental digital computers was the generation of artillery tables for the British and American armies. Much of the early research in computers was paid for by the British and American militaries. Business and scientific applications followed. As computer use increased, programmers noticed that they were duplicating the same efforts.
Every single programmer was writing his or her own routines for I/O, such
as reading input from a magnetic tape or writing output to a line printer. It
made sense to write a common device driver for each input or output device
and then have every programmer share the same device drivers rather than
each programmer writing his or her own. Some programmers resisted the use of common device drivers in the belief that they could write "more competent" or faster or "better" device drivers of their own.
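The idea of a shared device driver can be pictured as a single common interface that every program calls instead of programming the device itself. The C sketch below is only an illustration of that idea; the names (char_device, tape_drive, line_printer, demo_read, demo_write) are hypothetical, and the "drivers" are stand-ins backed by standard input and output so the example can actually run.

#include <stdio.h>
#include <stddef.h>

/* A hypothetical common device-driver interface: every program calls
 * these entry points instead of programming the device directly.
 * The names here are illustrative, not from any real system. */
struct char_device {
    const char *name;
    long (*read)(void *buf, size_t n);
    long (*write)(const void *buf, size_t n);
};

/* Stand-in "drivers" backed by stdin/stdout so the sketch runs. */
static long demo_read(void *buf, size_t n)        { return (long)fread(buf, 1, n, stdin); }
static long demo_write(const void *buf, size_t n) { return (long)fwrite(buf, 1, n, stdout); }

static struct char_device tape_drive   = { "tape",    demo_read, NULL };
static struct char_device line_printer = { "printer", NULL,      demo_write };

int main(void)
{
    char buf[128];
    long got;
    /* Copy "records" from one device to the other without knowing
     * how either device actually works. */
    while ((got = tape_drive.read(buf, sizeof buf)) > 0)
        line_printer.write(buf, (size_t)got);
    return 0;
}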
12.4.2 Simple Batch Systems
When punched cards were used for user jobs, processing of a job involved
physical actions by the system operator, e.g., loading a deck of cards into
the card reader, pressing switches on the computer's console to initiate a
job, etc. These actions wasted a lot of central processing unit (CPU) time.
Figure 12.1: Memory layout of a batch processing system – the operating system (batch monitor) resident in one part of memory, the rest used as the user program area
To speed up processing, jobs with similar needs were batched together and
were run as a group. Batch processing (BP) was implemented by locating a
component of the BP system, called the batch monitor or supervisor,
permanently in one part of the computer's memory. The remaining memory was used to process a user job – the current job in the batch – as shown in figure 12.1 above.
The interval between job submission and completion was considerable in batch processed systems, as a number of programs were put in a batch and the entire batch had to be processed before the results were printed. Further, card reading and printing were slow, as they used mechanical units, whereas the CPU was electronic; the speed mismatch was of the order of 1000. To alleviate this problem, programs were spooled. Spool is an acronym for simultaneous peripheral operation on-line. In essence, the idea was to use a cheaper processor, known as a peripheral processing unit (PPU), to read programs and data from cards and store them on a disk. The faster CPU read programs and data from the disk, processed them, and wrote the results back on the disk. The cheaper processor then read the results from the disk and printed them.
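The spooling idea can be illustrated with a small simulation, shown below: one routine plays the role of the PPU, copying card images from a slow input device to a spool file on disk, and another plays the role of the CPU, processing the job from the much faster disk. This C sketch is illustrative only; the file name job1.spool and the routine names are made up for the example.

#include <stdio.h>

/* A toy illustration of spooling: a slow peripheral processor copies
 * card images to a disk file (the spool), and the fast CPU later reads
 * the whole job from disk. */

/* "PPU" side: read card images (lines) and append them to the spool. */
static void spool_job(FILE *cards, const char *spool_path)
{
    FILE *spool = fopen(spool_path, "w");
    char card[81];                       /* one 80-column card image */
    if (!spool) return;
    while (fgets(card, sizeof card, cards))
        fputs(card, spool);
    fclose(spool);
}

/* "CPU" side: process the job from the much faster disk. */
static void run_spooled_job(const char *spool_path)
{
    FILE *spool = fopen(spool_path, "r");
    char card[81];
    if (!spool) return;
    while (fgets(card, sizeof card, spool))
        printf("processing: %s", card);  /* stand-in for real work */
    fclose(spool);
}

int main(void)
{
    spool_job(stdin, "job1.spool");      /* slow device -> disk */
    run_spooled_job("job1.spool");       /* disk -> CPU */
    return 0;
}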
12.4.3 Multi Programmed Batch Systems
Even though disks are faster than card readers and printers, they are still two orders of magnitude slower than the CPU. It is thus useful to have several programs ready to run waiting in main memory. When one program needs input/output (I/O) from the disk, it is suspended and another program, whose data is already in main memory (as shown in figure 12.2 below), is taken up for execution. This is called multiprogramming.
Figure 12.2: Multi-programmed batch system – memory layout with the operating system and Programs 1 to 4 resident together in main memory
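A toy simulation of multiprogramming is sketched below in C: several jobs reside in memory, and whenever the running job issues an I/O request it is marked blocked and the monitor switches the CPU to another ready job. This is only a rough illustration of the idea; the job table and state names are invented for the example.

#include <stdio.h>

/* A toy simulation of multiprogramming: several jobs reside in memory,
 * and whenever the running job blocks for I/O, the monitor switches the
 * CPU to another job that is ready. */

enum state { READY, BLOCKED, DONE };

struct job {
    const char *name;
    int steps_left;     /* CPU work remaining */
    enum state st;
};

int main(void)
{
    struct job jobs[] = {
        { "Program 1", 3, READY },
        { "Program 2", 2, READY },
        { "Program 3", 4, READY },
    };
    int n = 3, finished = 0;

    while (finished < n) {
        for (int i = 0; i < n; i++) {
            if (jobs[i].st != READY)
                continue;
            printf("CPU runs %s\n", jobs[i].name);
            if (--jobs[i].steps_left == 0) {
                jobs[i].st = DONE;
                finished++;
            } else {
                /* Job issues an I/O request: it blocks, and the loop
                 * moves on to the next ready job instead of idling. */
                jobs[i].st = BLOCKED;
            }
        }
        /* Pretend all pending I/O completes, making jobs ready again. */
        for (int i = 0; i < n; i++)
            if (jobs[i].st == BLOCKED)
                jobs[i].st = READY;
    }
    return 0;
}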
The processors in a distributed system may vary in size and function, and are referred to by a number of different names, such as sites, nodes, or computers, depending on the context. The major reasons for building
distributed systems are:
Resource sharing: If a number of different sites are connected to one
another, then a user at one site may be able to use the resources available
at the other.
Computation speed up: If a particular computation can be partitioned into
a number of sub computations that can run concurrently, then a distributed
system may allow a user to distribute computation among the various sites
to run them concurrently.
Reliability: If one site fails in a distributed system, the remaining sites can
potentially continue operations.
Communication: There are many instances in which programs need to
exchange data with one another. A distributed database system is an example of this; a small sketch of such data exchange over a network is given below.
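The sketch below illustrates the communication point with a minimal TCP client written in C: one process sends a request to a process on another site and reads the reply. The address 192.0.2.10 and port 5000 are placeholders, and a matching server would have to be listening there; this is an illustration of the idea, not part of any particular distributed system.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Minimal TCP client: sends a message to a process on another site
 * and prints the reply. Address and port are placeholders. */
int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in peer;
    char reply[256];
    ssize_t got;

    memset(&peer, 0, sizeof peer);
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(5000);                    /* hypothetical port */
    inet_pton(AF_INET, "192.0.2.10", &peer.sin_addr); /* example address */

    if (fd < 0 || connect(fd, (struct sockaddr *)&peer, sizeof peer) < 0) {
        perror("connect");
        return 1;
    }
    send(fd, "query", 5, 0);                       /* data out */
    got = recv(fd, reply, sizeof reply - 1, 0);    /* data back */
    if (got > 0) {
        reply[got] = '\0';
        printf("reply from remote site: %s\n", reply);
    }
    close(fd);
    return 0;
}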
12.4.6 Real-time Operating System
The advent of timesharing provided good response times to computer users.
However, timesharing could not satisfy the requirements of some
applications. Real-time (RT) operating systems were developed to meet the
response requirements of such applications.
There are two types of real-time systems. A hard real-time system guarantees that critical tasks complete at a specified time. A less restrictive type of real-time system is a soft real-time system, where a critical real-time task gets priority over other tasks, and retains that priority until it completes. The areas in which this type is useful include multimedia, virtual reality, and advanced scientific projects such as undersea exploration and planetary rovers. Because of the expanded uses for soft real-time functionality, it is finding its way into most current operating systems, including major versions of UNIX and Windows NT.
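As a concrete illustration, the POSIX/Linux sketch below shows how a soft real-time task can ask a general-purpose operating system for real-time scheduling priority. It assumes a system that supports the SCHED_FIFO policy and normally requires elevated privileges; it is a minimal example, not a complete real-time application.

#include <stdio.h>
#include <sched.h>

/* Soft real-time in a general-purpose OS: a task asks the scheduler for
 * a real-time priority so it is preferred over ordinary tasks. */
int main(void)
{
    struct sched_param sp;

    /* Pick a priority in the valid range for the FIFO real-time class. */
    sp.sched_priority = sched_get_priority_min(SCHED_FIFO) + 1;

    /* pid 0 means "the calling process". */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");   /* e.g. not privileged */
        return 1;
    }

    printf("now running with soft real-time (SCHED_FIFO) priority\n");
    /* ... time-critical work, e.g. an audio or video loop, goes here ... */
    return 0;
}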
A real-time operating system is one that helps fulfill the worst-case
response time requirements of an application. An RT OS provides the
following facilities for this purpose:
1. Multitasking within an application.
Storage is organized as an array of locations, each holding a fixed number of bytes. Each location in storage has an address; the set of all addresses available to a program is called an address space.
The three major activities of an operating system in regard to secondary
storage management are:
1. Scheduling the requests for storage (disk) access.
2. Managing the free space available on the secondary-storage device.
3. Allocation of storage space when new files have to be written.
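As a rough illustration of the second activity, free-space management, the C sketch below keeps one bit per disk block to record whether the block is free. Real file systems maintain such bitmaps (or free lists) on the device itself; the block count and function names here are invented for the example.

#include <stdio.h>

/* Toy free-space management: one bit per disk block records whether
 * the block is free (0) or allocated (1). */

#define NBLOCKS 64
static unsigned char bitmap[NBLOCKS / 8];

static int alloc_block(void)
{
    for (int b = 0; b < NBLOCKS; b++)
        if (!(bitmap[b / 8] & (1u << (b % 8)))) {
            bitmap[b / 8] |= (unsigned char)(1u << (b % 8));
            return b;                        /* block number to use */
        }
    return -1;                               /* disk full */
}

static void free_block(int b)
{
    bitmap[b / 8] &= (unsigned char)~(1u << (b % 8));
}

int main(void)
{
    int a = alloc_block();                   /* allocate space for a file */
    int b = alloc_block();
    printf("allocated blocks %d and %d\n", a, b);
    free_block(a);                           /* file deleted: reclaim space */
    printf("block %d is free again: next alloc gives %d\n", a, alloc_block());
    return 0;
}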
6. Networking
A distributed system is a group of processors that do not share memory,
peripheral devices, or a clock. The processors communicate with one
another through communication lines, which together form a network. The communication-
network design must consider routing and connection strategies, and the
problems of contention and security.
7. Protection System
If a computer system has multiple users and allows the concurrent
execution of multiple processes, then various processes must be protected
from one another's activities. Protection refers to a mechanism for controlling
the access of programs, processes, or users to the resources defined by a
computer system.
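From a user program's point of view, protection appears as permission checks enforced by the operating system on every access. The POSIX sketch below assumes a hypothetical file named report.txt and simply asks the OS whether write access is allowed, then tightens the file's permission bits; it illustrates the mechanism rather than any specific protection policy.

#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>

/* Protection as seen from a user program: the OS checks every access
 * against the file's permission bits. The file name is just an example. */
int main(void)
{
    const char *path = "report.txt";        /* hypothetical file */

    /* Ask the OS whether this user may write the file. */
    if (access(path, W_OK) == 0)
        printf("write access to %s is allowed\n", path);
    else
        perror("access");                   /* e.g. EACCES: permission denied */

    /* The owner may tighten protection: read-only for owner, nothing
     * for everyone else (mode 0400). */
    if (chmod(path, S_IRUSR) != 0)
        perror("chmod");

    return 0;
}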
8. Command Interpreter System
A command interpreter is an interface of the operating system with the user.
The user gives commands, which are executed by the operating system (usually
by turning them into system calls). The main function of a command
interpreter is to get and execute the next user specified command.
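A minimal sketch of such a command interpreter is given below in C. It is not any real shell: it simply reads a command line, splits it on whitespace, and turns it into system calls – fork() to create a process, execvp() to run the command, and waitpid() to wait for it – with no support for quoting, pipes, or redirection.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* A minimal command interpreter: read a command, then use system calls
 * to run it. Arguments are split on whitespace only. */
int main(void)
{
    char line[256];
    char *argv[32];

    for (;;) {
        printf("mini$ ");
        if (!fgets(line, sizeof line, stdin))
            break;                                /* end of input */

        int argc = 0;
        for (char *tok = strtok(line, " \t\n");
             tok && argc < 31;
             tok = strtok(NULL, " \t\n"))
            argv[argc++] = tok;
        argv[argc] = NULL;
        if (argc == 0)
            continue;                             /* empty command line */
        if (strcmp(argv[0], "exit") == 0)
            break;

        pid_t pid = fork();                       /* create a new process */
        if (pid == 0) {
            execvp(argv[0], argv);                /* run the command */
            perror(argv[0]);                      /* only reached on failure */
            _exit(127);
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);                /* wait for it to finish */
        } else {
            perror("fork");
        }
    }
    return 0;
}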
The command interpreter is usually not part of the kernel, since multiple command interpreters (shells, in UNIX terminology) may be supported by an
operating system, and they do not really need to run in kernel mode. There
are two main advantages of separating the command interpreter from the
kernel.
1. If we want to change the way the command interpreter looks, i.e., change its interface, we can do so if the command interpreter is separate from the kernel. We cannot change the code of the kernel, so we could not modify an interface built into it.
2. If the command interpreter is a part of the kernel, it is possible for a malicious process to gain access to certain parts of the kernel that it should not be allowed to access.
Following are the five services provided by operating systems for the
convenience of the users.
1. Execution of Program
The purpose of a computer system is to allow the user to execute programs. So the operating system provides an environment where the user can conveniently run programs. The user does not have to worry about memory allocation or multitasking or anything of that sort; these things are taken care of by the operating system. Running a program involves allocating and de-allocating memory and, in the case of multiple processes, CPU scheduling. These functions cannot be given to user-level programs, so user-level programs cannot help the user run programs independently, without help from the operating system.
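The sketch below illustrates this from the user side: a POSIX program asks the operating system to run another program (here /bin/ls, chosen only as an example) with posix_spawn() and then waits for it. All of the loading, memory allocation, and CPU scheduling for the new program is done by the operating system, not by the requesting program.

#include <spawn.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

extern char **environ;

/* A user program does not allocate memory or schedule the CPU for the
 * new program itself; it simply asks the OS to run it. */
int main(void)
{
    pid_t child;
    char *argv[] = { "ls", "-l", NULL };

    if (posix_spawn(&child, "/bin/ls", NULL, NULL, argv, environ) != 0) {
        perror("posix_spawn");
        return 1;
    }
    waitpid(child, NULL, 0);    /* the OS handled loading and scheduling */
    return 0;
}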
2. I/O Operations
Every program needs input and produces output, and this involves the use of I/O. The operating system hides from the user the details of the underlying hardware for the I/O: all the user sees is that the I/O has been performed, without any of the details. So the operating system, by providing I/O services, makes it convenient for the user to run programs. For efficiency and protection, these privileges cannot be given directly
to the user programs. A user program, if given these privileges, can interfere with the correct (normal) operation of the operating system.
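The POSIX sketch below shows what this looks like to a user program: it asks for bytes with read() and writes them out with write(), never touching the device hardware. The input file name input.txt is just an example; the same calls would work unchanged for a terminal, a pipe, or a disk file.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* I/O through the operating system: the program requests bytes with
 * read() and write() and never touches the device hardware. */
int main(void)
{
    char buf[512];
    ssize_t got;
    int fd = open("input.txt", O_RDONLY);   /* hypothetical input file */

    if (fd < 0) {
        perror("open");
        return 1;
    }
    while ((got = read(fd, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)got);  /* copy to standard output */
    close(fd);
    return 0;
}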
Self Assessment Questions
8. An error in one part of the system may cause malfunctioning of the
complete system. (True/False)
9. ________involves the allocating and de-allocating memory, CPU
scheduling in case of multi-process.
12.8 Summary
Let’s recapitulate the important concepts discussed in this unit:
A compiler is used to translate a high level language program into
assembly language or machine code, and an assembler is used to
translate an assembly language program into machine code.
Resource Management – An operating system, as a resource manager,
controls how processes (the active agents) may access resources
(passive entities).
Multiprogramming (MP) increases CPU utilization by organizing jobs
such that the CPU always has a job to execute.
Multiprogramming features were overlaid on BP to ensure good utilization of the CPU, but from the point of view of a user the service was poor, as the response time, i.e., the time elapsed between submitting a job and getting the results, was unacceptably high.
A real-time operating system is one that helps fulfill the worst-case
response time requirements of an application.
A task is a sub-computation in an application program, which can be
executed concurrently with other sub-computations in the program,
except at specific places in its execution called synchronization points.
Primary memory, or main memory, is a large array of words or bytes.
A file is a collection of related information defined by its creator.
A distributed system is a group of processors that do not share memory,
peripheral devices, or a clock.
A command interpreter is an interface of the operating system with the
user.
A computer cluster is a group of computers that work together closely so
that in many respects they can be viewed as though they are a single
computer.
12.10 Answers
Self Assessment Questions
1. Abstraction
2. operating system
3. True
4. True
5. File
6. Distributed
7. True
8. True
9. Running a program
10. Servers
11. Workstations
12. Embedded
Terminal Questions
1. An operating system (OS) is a program that controls the execution of an
application program and acts as an interface between the user and
computer hardware. (Refer section 12.2)
2. Modern operating systems generally have the following three major goals.
(Refer section 12.3)
3. An operating system, as a resource manager, controls how processes (the
active agents) may access resources (passive entities). (Refer section
12.3)
4. Multiprogramming (MP) increases CPU utilization by organizing jobs
such that the CPU always has a job to execute. (Refer section 12.4.3)
5. A recent trend in computer systems is to distribute computation among
several processors. (Refer section 12.4.5)
6. Real-time (RT) operating systems were developed to meet the response
requirements of such applications. (Refer section 12.4.6)