OS Notes
Uploaded by Pritam Kisku

Operating System

• What is Operating System?


An Operating System (OS) is an integrated set of programs that is used to manage the various resources like
CPU, memory, I/O devices etc. of a computer system and provide its user with an interface which is more
convenient to use than the bare machine. The operating system is responsible for the smooth and efficient
operation of the entire computer system. According to the definition, the two primary objectives of OS are –
1. Making a computer system convenient to use.
2. Managing the resources of a computer system.
The operating system is the most important program in a computer system. Without an operating system, every
computer program would have to contain instructions telling the hardware each step the hardware should take
to do its job, such as storing a file on a disk. Because the operating system contains these instructions, any
program can call on the operating system when a service is needed.

• Operating System as an Extended Machine


Operating Systems turn ugly hardware into beautiful abstractions. Just as the operating system shields the
programmer from the disk hardware and presents a simple file-oriented interface, it also conceals a lot of
unpleasant business concerning interrupts, timers, memory management, and other low-level features.
In each case, the abstraction offered by the operating system is simpler and easier to use than that offered by the
underlying hardware. Moreover, in this view, the function of the operating system is to present the user with the
equivalent of an extended machine or virtual machine that is easier to work with than the underlying hardware.
The operating system provides a variety of services that programs can obtain using special instructions called
system calls.
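As a concrete illustration of the extended-machine view, the sketch below uses Python's thin wrappers over the POSIX file system calls (a hypothetical file name is used; the point is that the program never touches disk hardware, only the OS's file abstraction):

```python
import os

# The kernel hides disk geometry behind open/write/read/close system
# calls; os.open etc. are thin wrappers over those calls.
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
os.write(fd, b"hello, kernel\n")
os.close(fd)

fd = os.open("demo.txt", os.O_RDONLY)
data = os.read(fd, 100)   # read back what the kernel stored for us
os.close(fd)
os.remove("demo.txt")     # clean up the demonstration file
print(data)               # b'hello, kernel\n'
```

Each of these calls traps into the kernel, which performs the actual device operations on the caller's behalf.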

• Operating system as a Resource Manager


Modern computers consist of processors, memories, timers, disks, I/O devices, network interfaces, printers, and
a wide variety of other devices. In the alternative view, the job of the operating system is to provide for an
orderly and controlled allocation of the processors, memories, and input/output devices among the various
programs competing for them. When a computer has multiple users, the need for managing and protecting the
memory, input/output devices, and other resources is even greater, since the users might otherwise interfere with
one another. In addition, users often need to share not only hardware, but information (files, databases, etc.) as
well. In short, this view of the operating system holds that its primary task is to keep track of which programs
are using which resources, to grant resource requests, to account for usage, and to mediate conflicting requests
from different programs and users.

• Operating System Concepts:


➢ Processes: A key concept in all operating systems is the process. A process is basically a program in
execution. Associated with each process is its address space, a list of memory locations from 0 to some
maximum, which the process can read and write. The address space contains the executable program, the
program’s data, and its stack. Also associated with each process is a set of resources, commonly including
registers (the program counter and stack pointer), a list of open files, lists of related processes, and all the
other information needed to run the program. A process is fundamentally a container that holds all the
information needed to run a program.
➢ Files: Another key concept supported by virtually all operating systems is the file system. As noted before, a
major function of the operating system is to hide the peculiarities of the disks and other I/O devices and
present the programmer with a nice, clean abstract model of device-independent files. System calls are
obviously needed to create files, remove files, read files, and write files. Before a file can be read, it must be
located on the disk and opened, and after being read it should be closed, so calls are provided to do these
things.
➢ Shell: The shell is the command interpreter. Editors, compilers, assemblers, linkers, utility programs,
and command interpreters are not part of the operating system, even though they are important and
useful. Although the shell is not part of the operating system, it makes heavy use of many operating
system features and thus serves as a good example of how system calls are used. The shell is also the main
interface between a user sitting at a terminal and the operating system, unless the user is using a graphical
user interface.
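The core of a shell is a read-parse-execute loop. The following is a minimal sketch of that loop's body, using Python's `subprocess` module as a stand-in for the fork/exec system calls a real shell would issue (the function name `run_command` is illustrative, and the test command `true` assumes a POSIX system):

```python
import shlex
import subprocess

def run_command(line):
    """Parse one command line and run it as a child process,
    returning its exit status (the body of a shell's main loop)."""
    args = shlex.split(line)
    if not args:
        return 0  # an empty line is a no-op, as in real shells
    # The shell is an ordinary user program: it asks the OS (via
    # fork/exec, wrapped here by subprocess.run) to do the real work.
    completed = subprocess.run(args)
    return completed.returncode

# A real shell would loop forever: print a prompt, read a line,
# call run_command(), and repeat until end-of-file.
```

This shows why the shell, while not part of the OS, is a heavy consumer of system calls: every command it runs requires process-creation and process-wait services from the kernel.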

• Functions of Operating System


An operating system performs many functions to operate the computer system efficiently.
1. Process management: Process management helps OS to create and delete processes. It also provides
mechanisms for synchronization and communication among processes.
2. Memory management: Memory management module performs the task of allocation and de-allocation of
memory space to programs in need of this resource.
3. File management: It manages all the file-related activities such as organize storage, retrieval, naming,
sharing, and protection of files.
4. Device Management: Device management keeps track of all devices. The module responsible for this
task is also known as the I/O controller. It also performs the task of allocation and de-allocation of the devices.
5. Security: The security module protects the data and information of a computer system against malware threats
and unauthorized access.
6. Command interpretation: This module interprets commands given by the user and directs the system
resources to handle the request. With this mode of interaction, the user is usually not too concerned with
the hardware details of the system.
7. Job accounting: Keeping track of the time and resources used by various jobs and users.

• Efficiency of Operating System


The efficiency of an operating system and the overall performance of a computer installation are judged by a
combination of factors. They are:
1. Throughput: It is the total volume of work performed by the system over a given period of time.
2. Turnaround time: It is the interval between the time a user submits a job to the system for processing
and the time the user receives the results.
3. Response time: It is the interval from the time a job is submitted to the system for processing to the time
the first response for that job is produced by the system.
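These metrics are easy to compute from a job trace. The sketch below works through a small hypothetical trace of (submit time, finish time) pairs, measured in seconds:

```python
# Hypothetical trace: each job recorded as (submit_time, finish_time).
jobs = [(0, 4), (1, 6), (2, 8)]

period = 10  # length of the observation window, in seconds
throughput = len(jobs) / period                          # jobs completed per second
turnarounds = [finish - submit for submit, finish in jobs]
avg_turnaround = sum(turnarounds) / len(turnarounds)     # seconds per job
print(throughput, avg_turnaround)  # 0.3 5.0
```

Note that throughput measures the system's total output over time, while turnaround time measures the delay experienced by an individual job; tuning for one often hurts the other.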

• Types of Operating System


1. Serial Processing: Before the 1950s, programmers interacted directly with the hardware; there was no operating
system at that time. A programmer needed to follow these steps to execute a program:
▪ Type the program on punched card.
▪ Insert the punch card into card reader.
▪ Connect the card reader to the computing machine.
▪ If there is any error, the error condition is indicated by lights.
▪ The programmer examines the registers and main memory to identify the cause of the error.
▪ If there is no error, the output is taken on the printer.

2. Batch Processing: To reduce time wasted between jobs when a number of jobs are to be processed by a
machine, and to reduce the manual operations of loading each job, a batch processing system is used. In this
system, jobs are made into a batch and fed to the computer. The system loads the appropriate compiler for
each job, compiles the program, keeps track of error messages and facilitates a smooth transition from job to
job.
3. Multiprogramming: To keep all units of computer simultaneously busy for most of the time it is desirable
to process a number of programs concurrently. A multiprogramming operating system keeps many jobs of
different users in the memory at a time, schedules and executes them and provides I/O facilities requested
by each user and optimizes the use of computer resources.
4. Multitasking: Multitasking is the system’s capability to concurrently work on more than one task. This
means that whenever a task needs to perform I/O operations, the CPU can be used for executing some other
task that is also residing in the system and ready to use the CPU.
Multitasking is similar to multiprogramming. The term multiprogramming is used for multi-user systems,
and multitasking is used for single-user systems.
5. Multiprocessing: The term multiprocessing is used to describe an interconnected computer configuration, or a
computer with two or more CPUs, which has the ability to execute several programs simultaneously. In
such a system, instructions from different and independent programs can be processed simultaneously by
different CPUs, or the CPU may simultaneously execute different instructions from the same program.
6. Time-sharing: Time-sharing is a mechanism to provide simultaneous interactive use of a computer
system by many users in such a way that each user is given the impression of having his own
computer. It uses multiprogramming with a special CPU scheduling algorithm to achieve this.
A time-sharing system has many user terminals simultaneously connected to the same computer. Using
these terminals, many users can simultaneously work on the system.
7. Real Time Computer Systems: In some applications of computer systems, such as automation of scientific
experiments, tests of complex products, monitoring and control of various production processes, processing
input data and issue of the result, strict time limitations are imposed on the speed of processing in a computer
system. Such systems are known as operating in real time.
The real time computer system is a system in which data input must be processed quickly enough to enable
the results to be used as feedback information. Real-time systems are usually considered to be those in which the
response time is several fractions of a second. An example of real time system is an automatic process
control system.

• Difference between Multiprogramming and Multiprocessing:


Multiprogramming:
1. It means the execution of two or more programs by a single-CPU computer system.
2. It involves executing a segment of one program, then a segment of another program, and so on, in brief
consecutive periods.

Multiprocessing:
1. It is the simultaneous execution of two or more programs by a computer system having more than one CPU.
2. In multiprocessing it is possible for the system to simultaneously work on several program segments of one
or more programs.

• Operating-System Structure
A system as large and complex as a modern operating system must be engineered carefully if it is to function
properly and be modified easily. A common approach is to partition the task into small components, or modules,
rather than have one system. Each of these modules should be a well-defined portion of the system, with carefully
defined inputs, outputs, and functions. The different structures of OS are monolithic system, layered systems,
microkernel and client server model.
1. Monolithic Systems
By far the most common organization, in the monolithic approach the entire operating system runs as a
single program in kernel mode. The operating system is written as a collection of procedures, linked
together into a single large executable binary program. When this technique is used, each procedure in
the system is free to call any other one, if the latter provides some useful computation that the former
needs. Being able to call any procedure you want is very efficient, but having thousands of procedures
that can call each other without restriction may also lead to a system that is unwieldy and difficult to
understand. Also, a crash in any of these procedures will take down the entire operating system.
2. Layered Approach
With proper hardware support, operating systems can be broken into pieces that are smaller and more
appropriate than those allowed by the monolithic systems. The operating system can then retain much
greater control over the computer and over the applications that make use of that computer. Implementers
have more freedom in changing the inner workings of the system and in creating modular operating
systems. Under a top-down approach, the overall functionality and features are determined and are
separated into components. Information hiding is also important, because it leaves programmers free to
implement the low-level routines as they see fit, provided that the
external interface of the routine stays unchanged and that the
routine itself performs the advertised task.
A system can be made modular in many ways. One method is the
layered approach, in which the operating system is broken into a
number of layers (levels). The bottom layer (layer 0) is the
hardware; the highest (layer N) is the user interface. This layering
structure is depicted in the following figure.
3. Microkernels
With the layered approach, the designers have a choice where to
draw the kernel-user boundary. Traditionally, all the layers went in
the kernel, but that is not necessary. In fact, a strong case can be
made for putting as little as possible in kernel mode because bugs in the kernel can bring down the system
instantly. In contrast, user processes can be set up to have less power so that a bug there may not be fatal.
The basic idea behind the microkernel design is to achieve high reliability by splitting the operating
system up into small, well-defined modules, only one of which, the microkernel, runs in kernel mode; the
rest run as relatively powerless ordinary user processes. In particular, by running each device driver
and file system as a separate user process, a bug in one of these can crash that component, but cannot
crash the entire system. Thus, a bug in the audio driver will cause the sound to be garbled or stop, but will
not crash the computer. In contrast, in a monolithic system with all the drivers in the kernel, a buggy
audio driver can easily reference an invalid memory address and bring the system to a grinding halt
instantly.
4. Client-Server Model
A slight variation of the microkernel idea is to distinguish two classes of processes, the servers, each of
which provides some service, and the clients, which use these services. This model is known as the client-
server model. Often the lowest layer is a microkernel, but that is not required. The essence is the presence
of client processes and server processes.
Communication between clients and servers is often by message passing. To obtain a service, a client
process constructs a message saying what it wants and sends it to the appropriate server. The server then
does the work and sends back the answer. If the client and server happen to run on the same machine,
certain optimizations are possible, but conceptually, we are still talking about message passing here.
Process Management
Process
A process is a program in execution. A process is more than the program code,
which is sometimes known as the text section. It also includes the current
activity, as represented by the value of the program counter and the contents of
the processor’s registers. A process generally also includes the process stack,
which contains temporary data (such as function parameters, return addresses,
and local variables), and a data section, which contains global variables. A
process may also include a heap, which is memory that is dynamically allocated
during process run time. The structure of a process in memory is shown in the
following figure.
A program is a passive entity, such as a file containing a list of instructions
stored on disk (often called an executable file). In contrast, a process is an active
entity, with a program counter specifying the next instruction to execute and a
set of associated resources. A program becomes a process when an executable
file is loaded into memory. Two common techniques for loading executable files
are double-clicking an icon representing the executable file and entering the
name of the executable file on the command line.

Process States
As a process executes, it changes state. The state of a process is defined by the current activity of that process.
A process may be in one of the following states:
• New: The process is being created.
• Running: Instructions are being executed.
• Waiting: The process is waiting for some event to occur (such as an I/O completion or reception of a
signal).
• Ready: The process is waiting to be assigned to a processor.
• Terminated: The process has finished execution.
These names are arbitrary, and they vary across operating systems. The states that they represent are found on
all systems, however. Certain operating systems also more finely delineate process states. It is important to
realize that only one process can be running on any processor at any instant; many processes, however, may be
ready and waiting. The state diagram corresponding to these states is presented in the following figure.
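The five states and their legal transitions form a small state machine, which can be sketched directly (the transition table below follows the model described above; the helper `move` is illustrative):

```python
# Legal transitions in the five-state process model.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running"},                       # dispatched by the scheduler
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready"},                       # the awaited event occurred
    "terminated": set(),                        # no way out
}

def move(state, new_state):
    """Return the new state if the transition is legal, else raise."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# One possible life of a process: created, scheduled, blocks on I/O,
# becomes ready again, is re-dispatched, and finishes.
state = "new"
for nxt in ("ready", "running", "waiting", "ready", "running", "terminated"):
    state = move(state, nxt)
print(state)  # terminated
```

Note that a waiting process cannot go straight to running; it must first become ready and be chosen by the scheduler.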

Process Control Block (PCB)


Each process is represented in the operating system by a process control block (PCB)—also called a task
control block. A PCB is shown in the following figure. It contains many pieces of information associated with
a specific process, including these:
• Process state. The state may be new, ready, running, waiting, halted, and so on.
• Program counter. The counter indicates the address of the next instruction to be executed for this
process.
• CPU registers. The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information. Along with the program
counter, this state information must be saved when an interrupt occurs, to allow the
process to be continued correctly afterward.
• CPU-scheduling information. This information includes a process priority, pointers
to scheduling queues, and any other scheduling parameters.
• Memory-management information. This information may include such items as the
value of the base and limit registers and the page tables, or the segment tables,
depending on the memory system used by the operating system.
• Accounting information. This information includes the amount of CPU and real time used, time limits,
account numbers, job or process numbers, and so on.
• I/O status information. This information includes the list of I/O devices allocated to the process, a list
of open files, and so on.
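A minimal sketch of a PCB, assuming illustrative field names (real kernels use C structures with many more fields), might look like:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A stripped-down process control block with the kinds of
    fields listed above; names here are illustrative."""
    pid: int
    state: str = "new"               # process state
    program_counter: int = 0         # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                # CPU-scheduling information
    open_files: list = field(default_factory=list)  # I/O status information
    cpu_time_used: float = 0.0       # accounting information

pcb = PCB(pid=42)
pcb.state = "ready"
pcb.open_files.append("stdin")
print(pcb.pid, pcb.state)
```

The OS keeps one such record per process; everything needed to suspend and later resume the process lives here.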

Threads
The process model generally implies that a process is a program that performs a single thread of execution.
For example, when a process is running a word-processor program, a single thread of instructions is being
executed. This single thread of control allows the process to perform only one task at a time. For example, the
user cannot simultaneously type in characters and run the spell checker within the same process. Most modern
operating systems have extended the process concept to allow a process to have multiple threads of execution
and thus to perform more than one task at a time. This feature is especially beneficial on multicore systems,
where multiple threads can run in parallel. On a system that supports threads, the PCB is expanded to include
information for each thread. Other changes throughout the system are also needed to support threads.
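The sketch below shows two threads of one process sharing that process's address space, which is exactly what distinguishes threads from separate processes (the lock is needed because the shared counter would otherwise be corrupted by interleaved updates):

```python
import threading

counter = 0                 # shared: lives in the process's address space
lock = threading.Lock()     # serializes access to the shared counter

def work(n):
    global counter
    for _ in range(n):
        with lock:          # without this, increments could interleave
            counter += 1

threads = [threading.Thread(target=work, args=(10000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000
```

Two separate processes running this code would each see their own private `counter`; two threads see one.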

Process Scheduling
The objective of multiprogramming is to have some process running at all times, so as to maximize CPU utilization.
When a process enters the system, it is put in the job queue. The processes that are residing in main memory,
ready and waiting to execute, are kept on a queue called the ready queue. A common representation of process
scheduling is a queueing diagram.

Each rectangular box represents a queue. Two types of queues are present: the ready queue and a set of device
queues. The circles represent the resources that serve the queues, and the arrows indicate the flow of processes
in the system.
A new process is initially put in the ready queue. It waits there until it is selected for execution, or dispatched.
Once the process is allocated the CPU and is executing, one of several events could occur:
• The process could issue an I/O request and then be placed in an I/O queue.
• The process could create a new child process and wait for the child’s termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the
ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready state and is then put
back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all
queues and has its PCB and resources deallocated.

Process Scheduling Queues


The OS maintains all PCBs in Process Scheduling
Queues. The OS maintains a separate queue for each of
the process states and PCBs of all processes in the same
execution state are placed in the same queue. When the
state of a process is changed, its PCB is unlinked from its
current queue and moved to its new state queue.
The Operating System maintains the following important
process scheduling queues −
• Job queue − This queue keeps all the processes in
the system.
• Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting to
execute. A new process is always put in this queue.
• Device queues − The processes which are blocked due to unavailability of an I/O device constitute this
queue.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS scheduler
determines how to move processes between the ready and running queues; the running queue can have only one
entry per processor core on the system. In the above diagram, it has been merged with the CPU.
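A FIFO ready queue is the simplest such policy; the sketch below models it with a double-ended queue of process IDs (the PIDs are made up for illustration):

```python
from collections import deque

# FIFO ready queue: processes are dispatched in arrival order.
ready_queue = deque()
for pid in (11, 12, 13):
    ready_queue.append(pid)        # new or unblocked processes join the tail

running = ready_queue.popleft()    # the dispatcher takes the head
print(running, list(ready_queue))  # 11 [12, 13]
```

Round Robin would re-append the running process to the tail when its time slice expires; a priority queue would order entries by priority instead of arrival time.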

Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their main task is to
select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types:
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Long Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system
for processing. It selects processes from the queue and loads them into memory for execution, where they
become eligible for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and processor
bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the
average rate of process creation must be equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal. Time-sharing operating systems
have no long-term scheduler. The long-term scheduler comes into use when a process changes state from
new to ready.
Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the
chosen set of criteria. It manages the change of a process from the ready state to the running state. The CPU
scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-
term schedulers are faster than long-term schedulers.
Medium Term Scheduler
Medium-term scheduling is a part of swapping. It removes the processes from the memory. It reduces the degree
of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot make any
progress towards completion. In this condition, to remove the process from memory and make space for other
processes, the suspended process is moved to the secondary storage. This process is called swapping, and the
process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.

Comparison among Scheduler


Long-Term Scheduler:
1. It is a job scheduler.
2. Its speed is less than that of the short-term scheduler.
3. It controls the degree of multiprogramming.
4. It is almost absent or minimal in time-sharing systems.
5. It selects processes from the pool and loads them into memory for execution.

Short-Term Scheduler:
1. It is a CPU scheduler.
2. Its speed is the fastest of the three.
3. It provides less control over the degree of multiprogramming.
4. It is also minimal in time-sharing systems.
5. It selects those processes which are ready to execute.

Medium-Term Scheduler:
1. It is a process swapping scheduler.
2. Its speed lies between those of the short-term and long-term schedulers.
3. It reduces the degree of multiprogramming.
4. It is a part of time-sharing systems.
5. It can re-introduce a process into memory, and its execution can be continued.

Context Switch
A context switch is the mechanism to store and restore the state or context
of a CPU in Process Control block so that a process execution can be
resumed from the same point at a later time. Using this technique, a context
switcher enables multiple processes to share a single CPU. Context
switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to
execute another, the state from the current running process is stored into
the process control block. After this, the state for the process to run next is
loaded from its own PCB and used to set the PC, registers, etc. At that point,
the second process can start executing.
Context switches are computationally intensive since register and memory
state must be saved and restored. To reduce context-switching
time, some hardware systems employ two or more sets of processor
registers. When the process is switched, the following information is stored
for later use.
• Program Counter
• Scheduling information
• Base and limit register value
• Currently used register
• Changed State
• I/O State information
• Accounting information
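The save-then-restore sequence can be sketched with a toy CPU whose state is a few named registers, and PCBs modeled as dictionaries (all names here are illustrative, not from any real kernel):

```python
# Toy CPU: a program counter and two general-purpose registers.
cpu = {"pc": 0, "r0": 0, "r1": 0}

def context_switch(old_pcb, new_pcb):
    """Save the running process's CPU state into its PCB,
    then load the next process's saved state onto the CPU."""
    old_pcb["context"] = dict(cpu)    # save current state for later resume
    cpu.update(new_pcb["context"])    # restore the next process's state

pcb_a = {"context": {"pc": 100, "r0": 7, "r1": 1}}
pcb_b = {"context": {"pc": 200, "r0": 9, "r1": 2}}

cpu.update(pcb_a["context"])   # process A is dispatched first
cpu["pc"] += 4                 # A executes one instruction
context_switch(pcb_a, pcb_b)   # timer interrupt: switch to process B
print(cpu["pc"])               # 200  (B resumes where its PCB says)
print(pcb_a["context"]["pc"])  # 104  (A's progress is preserved)
```

When A is later switched back in, the saved `pc` of 104 guarantees it resumes exactly where it left off, which is the whole point of the mechanism.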
