OS Notes
2. Batch Processing: To reduce the time wasted between jobs, and the manual work of loading each job when a number of jobs are to be processed by a machine, a batch processing system is used. In this system, jobs are grouped into a batch and fed to the computer. The system loads the appropriate compiler for each job, compiles the program, keeps track of error messages and facilitates a smooth transition from job to job.
3. Multiprogramming: To keep all units of the computer busy for most of the time, it is desirable to process a number of programs concurrently. A multiprogramming operating system keeps the jobs of many different users in memory at a time, schedules and executes them, provides the I/O facilities requested by each user, and optimizes the use of computer resources.
4. Multitasking: Multitasking is the system's capability to work concurrently on more than one task. This means that whenever a task needs to perform I/O operations, the CPU can be used to execute some other task that is also residing in the system and is ready to use the CPU.
Multitasking is similar to multiprogramming. Conventionally, the term multiprogramming is used for multi-user systems and multitasking for single-user systems.
5. Multiprocessing: The term multiprocessing describes an interconnected computer configuration, or a computer with two or more CPUs, that has the ability to execute several programs simultaneously. In such a system, instructions from different and independent programs can be processed simultaneously by different CPUs, or the CPUs may simultaneously execute different instructions from the same program.
6. Time-sharing: Time-sharing is a mechanism that provides simultaneous interactive use of a computer system by many users, in such a way that each user is given the impression of having a dedicated computer. It uses multiprogramming with a special CPU scheduling algorithm to achieve this.
A time-sharing system has many user terminals simultaneously connected to the same computer. Using these terminals, many users can work on the system at the same time.
7. Real-Time Computer Systems: In some applications of computer systems, such as the automation of scientific experiments, the testing of complex products, and the monitoring and control of production processes, strict time limits are imposed on the speed with which input data is processed and results are issued. Such systems are said to operate in real time.
A real-time computer system is a system in which input data must be processed quickly enough for the results to be used as feedback. Real-time systems are usually considered to be those in which the response time is a fraction of a second. An example of a real-time system is an automatic process control system.
• Operating-System Structure
A system as large and complex as a modern operating system must be engineered carefully if it is to function properly and be modified easily. A common approach is to partition the task into small components, or modules, rather than have one monolithic system. Each of these modules should be a well-defined portion of the system, with carefully defined inputs, outputs, and functions. The common structures of an OS are the monolithic system, layered systems, the microkernel, and the client-server model.
1. Monolithic Systems
By far the most common organization, in the monolithic approach the entire operating system runs as a
single program in kernel mode. The operating system is written as a collection of procedures, linked
together into a single large executable binary program. When this technique is used, each procedure in
the system is free to call any other one, if the latter provides some useful computation that the former
needs. Being able to call any procedure you want is very efficient, but having thousands of procedures
that can call each other without restriction may also lead to a system that is unwieldy and difficult to
understand. Also, a crash in any of these procedures will take down the entire operating system.
2. Layered Approach
With proper hardware support, operating systems can be broken into pieces that are smaller and more
appropriate than those allowed by the monolithic systems. The operating system can then retain much
greater control over the computer and over the applications that make use of that computer. Implementers
have more freedom in changing the inner workings of the system and in creating modular operating
systems. Under a top-down approach, the overall functionality and features are determined and are
separated into components. Information hiding is also important, because it leaves programmers free to
implement the low-level routines as they see fit, provided that the external interface of the routine stays unchanged and that the routine itself performs the advertised task.
A system can be made modular in many ways. One method is the layered approach, in which the operating system is broken into a number of layers (levels). The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface. This layering structure is depicted in the following figure.
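A minimal sketch of the layered discipline (the layer names are invented for illustration): each layer calls only the layer directly below it, never the other way around.

    #include <stdio.h>

    /* Layer 0: hardware access (lowest layer) */
    static void layer0_hw_write(char c) { putchar(c); }

    /* Layer 1: device driver; may use only layer 0 */
    static void layer1_driver_write(const char *s) {
        while (*s) layer0_hw_write(*s++);
    }

    /* Layer 2: user-facing interface; may use only layer 1 */
    static void layer2_console_print(const char *s) {
        layer1_driver_write(s);
        layer1_driver_write("\n");
    }

    int main(void) {
        layer2_console_print("hello from the top layer");
        return 0;
    }

The benefit is that layer 2 can be rewritten without touching layer 0, as long as each layer's external interface stays the same.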
3. Microkernels
With the layered approach, the designers have a choice of where to draw the kernel-user boundary. Traditionally, all the layers went in the kernel, but that is not necessary. In fact, a strong case can be made for putting as little as possible in kernel mode, because bugs in the kernel can bring down the system instantly. In contrast, user processes can be set up to have less power so that a bug there may not be fatal.
The basic idea behind the microkernel design is to achieve high reliability by splitting the operating system up into small, well-defined modules, only one of which, the microkernel, runs in kernel mode; the rest run as relatively powerless ordinary user processes. In particular, by running each device driver and file system as a separate user process, a bug in one of these can crash that component, but cannot crash the entire system. Thus, a bug in the audio driver will cause the sound to be garbled or to stop, but will not crash the computer. In contrast, in a monolithic system with all the drivers in the kernel, a buggy audio driver can easily reference an invalid memory address and bring the system to a grinding halt instantly.
4. Client-Server Model
A slight variation of the microkernel idea is to distinguish two classes of processes, the servers, each of
which provides some service, and the clients, which use these services. This model is known as the client-server model. Often the lowest layer is a microkernel, but that is not required. The essence is the presence
of client processes and server processes.
Communication between clients and servers is often by message passing. To obtain a service, a client
process constructs a message saying what it wants and sends it to the appropriate server. The server then
does the work and sends back the answer. If the client and server happen to run on the same machine,
certain optimizations are possible, but conceptually, we are still talking about message passing here.
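The following self-contained POSIX C sketch imitates this exchange on a single machine: a forked child acts as the server, two pipes carry the request and reply messages, and the message format itself is invented for illustration.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int req[2], rep[2];               /* request and reply channels */
        pipe(req);
        pipe(rep);

        if (fork() == 0) {                /* child: the server process */
            char msg[64];
            ssize_t n = read(req[0], msg, sizeof(msg) - 1);
            if (n < 0) n = 0;
            msg[n] = '\0';
            char answer[128];
            snprintf(answer, sizeof(answer), "served: %s", msg);
            write(rep[1], answer, strlen(answer) + 1);
            _exit(0);
        }

        /* parent: the client constructs a message saying what it wants,
           sends it to the server, and waits for the answer */
        write(req[1], "read file notes.txt", 20);
        char answer[128];
        read(rep[0], answer, sizeof(answer));
        printf("client got: %s\n", answer);
        wait(NULL);
        return 0;
    }

On a real microkernel the pipes would be replaced by the kernel's message-passing primitives, but the request/reply pattern is the same.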
Process Management
Process
A process is a program in execution. A process is more than the program code,
which is sometimes known as the text section. It also includes the current
activity, as represented by the value of the program counter and the contents of
the processor’s registers. A process generally also includes the process stack,
which contains temporary data (such as function parameters, return addresses,
and local variables), and a data section, which contains global variables. A
process may also include a heap, which is memory that is dynamically allocated
during process run time. The structure of a process in memory is shown in the
following figure.
A program is a passive entity, such as a file containing a list of instructions
stored on disk (often called an executable file). In contrast, a process is an active
entity, with a program counter specifying the next instruction to execute and a
set of associated resources. A program becomes a process when an executable
file is loaded into memory. Two common techniques for loading executable files
are double-clicking an icon representing the executable file and entering the
name of the executable file on the command line.
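The regions described above can be observed from a running C program. This small demonstration prints one address from each region; the exact layout is platform-dependent.

    #include <stdio.h>
    #include <stdlib.h>

    int global_counter = 7;                   /* data section: global variable */

    void show_layout(void) {
        int local = 3;                        /* stack: local variable */
        int *dynamic = malloc(sizeof(int));   /* heap: allocated at run time */

        printf("text  (code)   : %p\n", (void *)show_layout);
        printf("data  (global) : %p\n", (void *)&global_counter);
        printf("heap  (malloc) : %p\n", (void *)dynamic);
        printf("stack (local)  : %p\n", (void *)&local);

        free(dynamic);
    }

    int main(void) {
        show_layout();
        return 0;
    }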
Process States
As a process executes, it changes state. The state of a process is defined by the current activity of that process.
A process may be in one of the following states:
• New: The process is being created.
• Running: Instructions are being executed.
• Waiting: The process is waiting for some event to occur (such as an I/O completion or reception of a
signal).
• Ready: The process is waiting to be assigned to a processor.
• Terminated: The process has finished execution.
These names are arbitrary, and they vary across operating systems. The states that they represent are found on
all systems, however. Certain operating systems also delineate process states more finely. It is important to realize that only one process can be running on any processor at any instant; many processes, however, may be ready and waiting. The state diagram corresponding to these states is presented in the following figure.
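As a sketch (the enum and transition rules below are illustrative, not taken from any particular kernel), the five states and the legal moves between them can be captured like this:

    #include <stdio.h>

    enum state { NEW, READY, RUNNING, WAITING, TERMINATED };

    /* Returns 1 if moving from 'from' to 'to' matches the state diagram. */
    int legal_transition(enum state from, enum state to) {
        switch (from) {
        case NEW:     return to == READY;          /* admitted */
        case READY:   return to == RUNNING;        /* dispatched */
        case RUNNING: return to == READY           /* interrupted */
                          || to == WAITING         /* I/O or event wait */
                          || to == TERMINATED;     /* exit */
        case WAITING: return to == READY;          /* I/O or event completed */
        default:      return 0;                    /* terminated: no way out */
        }
    }

    int main(void) {
        printf("running -> waiting: %d\n", legal_transition(RUNNING, WAITING)); /* 1 */
        printf("waiting -> running: %d\n", legal_transition(WAITING, RUNNING)); /* 0 */
        return 0;
    }

Note that a waiting process cannot go straight to running; it must first become ready and be dispatched again.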
Threads
The process model discussed so far implies that a process is a program that performs a single thread of execution.
For example, when a process is running a word-processor program, a single thread of instructions is being
executed. This single thread of control allows the process to perform only one task at a time. For example, the
user cannot simultaneously type in characters and run the spell checker within the same process. Most modern
operating systems have extended the process concept to allow a process to have multiple threads of execution
and thus to perform more than one task at a time. This feature is especially beneficial on multicore systems,
where multiple threads can run in parallel. On a system that supports threads, the PCB (process control block) is expanded to include information for each thread. Other changes throughout the system are also needed to support threads.
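A minimal POSIX threads example of a process doing two things at once (the "spell check" here is just a stand-in loop); compile with -lpthread.

    #include <pthread.h>
    #include <stdio.h>

    /* Stand-in for background work such as spell checking. */
    void *spell_check(void *arg) {
        (void)arg;
        for (int i = 0; i < 3; i++)
            printf("spell checker: pass %d\n", i);
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, spell_check, NULL);  /* second thread of execution */

        /* Meanwhile, the main thread keeps doing its own task. */
        for (int i = 0; i < 3; i++)
            printf("main thread: handling input %d\n", i);

        pthread_join(tid, NULL);                        /* wait for the worker */
        return 0;
    }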
Process Scheduling
The objective of multiprogramming is to have some process running at all times, so as to maximize CPU utilization. When a process enters the system, it is put in the job queue. Processes that are residing in main memory and are ready and waiting to execute are kept on a queue called the ready queue. The common representation of process scheduling is a queueing diagram.
Each rectangular box represents a queue. Two types of queues are present: the ready queue and a set of device
queues. The circles represent the resources that serve the queues, and the arrows indicate the flow of processes
in the system.
A new process is initially put in the ready queue. It waits there until it is selected for execution, or dispatched.
Once the process is allocated the CPU and is executing, one of several events could occur:
• The process could issue an I/O request and then be placed in an I/O queue.
• The process could create a new child process and wait for the child’s termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the
ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready state and is then put
back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all
queues and has its PCB and resources deallocated.
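A sketch of the ready queue as a linked list of PCBs; the PCB fields here are pared down to what the queueing itself needs.

    #include <stdio.h>
    #include <stdlib.h>

    struct pcb {                      /* pared-down process control block */
        int pid;
        struct pcb *next;
    };

    static struct pcb *ready_head = NULL, *ready_tail = NULL;

    /* Put a process at the tail of the ready queue. */
    void enqueue_ready(struct pcb *p) {
        p->next = NULL;
        if (ready_tail) ready_tail->next = p; else ready_head = p;
        ready_tail = p;
    }

    /* Dispatch: take the process at the head of the ready queue. */
    struct pcb *dequeue_ready(void) {
        struct pcb *p = ready_head;
        if (p) {
            ready_head = p->next;
            if (!ready_head) ready_tail = NULL;
        }
        return p;
    }

    int main(void) {
        for (int pid = 1; pid <= 3; pid++) {
            struct pcb *p = malloc(sizeof(*p));
            p->pid = pid;
            enqueue_ready(p);         /* new process enters the ready queue */
        }
        struct pcb *p;
        while ((p = dequeue_ready()) != NULL) {
            printf("dispatching pid %d\n", p->pid);
            free(p);                  /* process terminates; PCB deallocated */
        }
        return 0;
    }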
Schedulers
Schedulers are special system software that handles process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types:
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Long Term Scheduler
It is also called the job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the job queue and loads them into memory for execution, making them available for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and processor
bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the
average rate of process creation must be equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing operating systems, for example, often have no long-term scheduler. The long-term scheduler comes into play when a process changes state from new to ready.
Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It carries out the change of a process from the ready state to the running state: the CPU scheduler selects one process from among those that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-term schedulers run much more frequently than long-term schedulers and must therefore be faster.
Medium Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
Context Switch
A context switch is the mechanism for storing and restoring the state, or context, of a CPU in the process control block so that process execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored in its process control block. After this, the state of the process to run next is loaded from its own PCB and used to set the program counter, registers, and so on. At that point, the second process can start executing.
Context switches are computationally intensive, since register and memory state must be saved and restored. To reduce context-switching time, some hardware systems employ two or more sets of processor registers. When a process is switched out, the following information is stored for later use (a simulated switch is sketched after this list).
• Program Counter
• Scheduling information
• Base and limit register value
• Currently used registers
• Changed state
• I/O state information
• Accounting information
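The following sketch simulates a context switch in plain C: the "CPU registers" are just a struct, and switching means copying them out to one PCB and in from another. A real switch does this in assembly with the hardware register set; this is only a model of the bookkeeping.

    #include <stdio.h>
    #include <string.h>

    struct cpu_context {              /* simulated register set */
        long pc;                      /* program counter */
        long sp;                      /* stack pointer */
        long regs[4];                 /* general-purpose registers */
    };

    struct pcb {
        int pid;
        struct cpu_context context;   /* saved state lives in the PCB */
    };

    static struct cpu_context cpu;    /* the one "hardware" register set */

    void context_switch(struct pcb *from, struct pcb *to) {
        memcpy(&from->context, &cpu, sizeof(cpu));  /* save old state into its PCB */
        memcpy(&cpu, &to->context, sizeof(cpu));    /* load new state from its PCB */
    }

    int main(void) {
        struct pcb a = { .pid = 1 }, b = { .pid = 2, .context = { .pc = 500 } };
        cpu.pc = 100;                                /* process 1 is running at pc 100 */
        context_switch(&a, &b);
        printf("now running pid %d at pc %ld\n", b.pid, cpu.pc);  /* pc 500 */
        printf("pid %d saved at pc %ld\n", a.pid, a.context.pc);  /* pc 100 */
        return 0;
    }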