OS Lecture Prelim Reviewer


Introduction to Operating System

What is an Operating System?

• A program that acts as an intermediary between a user of a computer and the computer hardware.
• An operating system is a collection of system programs that together control the operation of a computer system.

Some examples of operating systems are:

• UNIX, Mach
• MS-DOS
• MS-Windows
• Windows/NT
• Chicago
• OS/2
• MacOS
• VMS
• MVS
• VM

Operating system goals:


• Execute user programs and make solving user problems easier.
• Make the computer system convenient to use.
• Use the computer hardware in an efficient manner.
Computer System Components

1. Hardware – provides basic computing resources (CPU, memory, I/O devices).
2. Operating system – controls and coordinates the use of the hardware among
the various application programs for the various users.
3. Applications programs – Define the ways in which the system resources are
used to solve the computing problems of the users (compilers, database
systems, video games, business programs).
4. Users (people, machines, other computers) – the people and systems that use the computer system.

Operating System Definitions


Resource Allocator – manages and allocates resources.
Control program – controls the execution of user programs and the operation of I/O devices.
Kernel – the one program running at all times (all else being application programs).
Components of Operating System

1. The kernel is the active part of an OS, i.e., the part of the OS running at all times. It is the program that interacts with the hardware.
2. The shell is the command interpreter. It is a set of programs used to interact with the application programs, and it is responsible for executing the instructions given to the OS.

Operating systems can be explored from two viewpoints:


User View:
From the user’s point of view, the OS is designed for one user to monopolize its resources, to maximize the work the user is performing, and for ease of use.
System View:
From the computer's point of view, an operating system is a control program that
manages the execution of user programs to prevent errors and improper use of the
computer. It is concerned with the operation and control of I/O devices.

Function of Operating System


Process Management
A process is a program in execution. A process needs certain resources, including CPU
time, memory, files, and I/O devices, to accomplish its task.
The operating system is responsible for the following activities in connection with
process management.

• Process creation and deletion.


• Process suspension and resumption.
• Provision of mechanisms for:
o process synchronization
o process communication

Main-Memory Management
Memory is a large array of words or bytes, each with its own address. It is a repository of
quickly accessible data shared by the CPU and I/O devices.
Main memory is a volatile storage device. It loses its contents in the case of system
failure.
The operating system is responsible for the following activities in connection with
memory management:
• Keep track of which parts of memory are currently being used and by whom.
• Decide which processes to load when memory space becomes available.
• Allocate and de-allocate memory space as needed.

File Management
A file is a collection of related information defined by its creator. Commonly, files
represent programs (both source and object forms) and data.
The operating system is responsible for the following activities in connection with file
management:

• File creation and deletion.


• Directory creation and deletion.
• Support of primitives for manipulating files and directories.
• Mapping files onto secondary storage.
• File backup on stable (nonvolatile) storage media.

I/O System Management


The I/O system consists of:

• A buffer-caching system
• A general device-driver interface
• Drivers for specific hardware devices

Secondary-Storage Management
Since main memory (primary storage) is volatile and too small to accommodate all data
and programs permanently, the computer system must provide secondary storage to
back up main memory. Most modern computer systems use disks as the principal on-line
storage medium, for both programs and data.
The operating system is responsible for the following activities in connection with disk
management:

• Free space management


• Storage allocation
• Disk scheduling

Networking (Distributed Systems)


A distributed system is a collection of processors that do not share memory or a clock. Each
processor has its own local memory.

• The processors in the system are connected through a communication


network.
• Communication takes place using a protocol.
• A distributed system provides user access to various system resources.
• Access to a shared resource allows:
o Computation speed-up
o Increased data availability
o Enhanced reliability

Protection System
Protection refers to a mechanism for controlling access by programs, processes, or users
to both system and user resources. The protection mechanism must:

• distinguish between authorized and unauthorized usage.


• specify the controls to be imposed.
• provide a means of enforcement.

Command-Interpreter System
Many commands are given to the operating system by control statements which deal
with:

• process creation and management


• I/O handling
• secondary-storage management
• main-memory management
• file-system access
• protection
• networking

Operating-System Structures

• System Components
• Operating System Services
• System Calls
• System Programs
• System Structure
• Virtual Machines
• System Design and Implementation
• System Generation

Common System Components

• Process Management
• Main Memory Management
• File Management
• I/O System Management
• Secondary-Storage Management
• Networking
• Protection System
• Command-Interpreter System

Evolution of Operating System


Mainframe Systems
Early mainframe systems reduced setup time by batching similar jobs. Automatic job
sequencing – automatically transferring control from one job to another – was the first
rudimentary operating system, implemented as a resident monitor:

• initial control is in the monitor
• control transfers to the job
• when the job completes, control transfers back to the monitor

Batch Processing Operating System

• This type of OS accepts more than one job, and these jobs are batched/
grouped together according to their similar requirements. This is done by the
computer operator. Whenever the computer becomes available, the batched
jobs are sent for execution and the output is gradually sent back to the user.
• It allowed only one program at a time.
• This OS is responsible for scheduling the jobs according to priority and the
resources required.

Multiprogramming Operating System

• This type of OS is used to execute more than one job simultaneously on a
single processor. It increases CPU utilization by organizing jobs so that the
CPU always has one job to execute.
• The concept of multiprogramming is described as follows:
o All the jobs that enter the system are stored in the job pool (on disk).
The operating system loads a set of jobs from the job pool into main
memory and begins to execute them.
o During execution, a job may have to wait for some task, such as
an I/O operation, to complete. In a multiprogramming system, the
operating system simply switches to another job and executes it.
When that job needs to wait, the CPU is switched to another job,
and so on.
o When the first job finishes waiting, it gets the CPU back.
o As long as at least one job needs to execute, the CPU is never idle.
• Multiprogramming operating systems use the mechanism of job scheduling
and CPU scheduling.

Time-Sharing/Multitasking Operating Systems


Time sharing (or multitasking) OS is a logical extension of multiprogramming. It provides
extra facilities such as:

• Faster switching between multiple jobs to make processing faster.
• Allowing multiple users to share the computer system simultaneously.
• The users can interact with each job while it is running.

These systems use the concept of virtual memory for effective utilization of memory
space. Hence, in this OS, no jobs are discarded; each one is executed using the virtual
memory concept. It uses CPU scheduling, memory management, disk management and
security management.
Multiprocessor Operating Systems
Multiprocessor operating systems are also known as parallel OS or tightly coupled OS.
Such operating systems have more than one processor in close communication, sharing
the computer bus, the clock, and sometimes memory and peripheral devices. They
execute multiple jobs at the same time, which makes processing faster.
Multiprocessor systems have three main advantages:

• Increased throughput: By increasing the number of processors, the system
performs more work in less time. The speed-up ratio with N processors is less
than N.
• Economy of scale: Multiprocessor systems can cost less than multiple
single-processor systems, because they can share peripherals, mass storage,
and power supplies.
• Increased reliability: If one processor fails, the remaining processors pick up
a share of the work of the failed processor. The failure of one processor does
not halt the system; it only slows it down.

The ability to continue providing service proportional to the level of surviving hardware
is called graceful degradation. Systems designed for graceful degradation are called fault
tolerant.
The multiprocessor operating systems are classified into two categories:

1. In a symmetric multiprocessing system, each processor runs an identical copy of
the operating system, and these copies communicate with one another as
needed.
2. In an asymmetric multiprocessing system, one processor, called the master
processor, controls the other processors, called slave processors, establishing a
master–slave relationship. The master processor schedules the jobs and
manages the memory for the entire system.

Distributed Operating Systems

• In a distributed system, the different machines are connected in a network,
and each machine has its own processor and its own local memory.
• In this system, the operating systems on all the machines work together to
manage the collective network resource.
• It can be classified into two categories:
o Client-Server systems
o Peer-to-Peer systems

Advantages of distributed systems

• Resources Sharing
• Computation speed up – load sharing
• Reliability
• Communications
• Requires networking infrastructure.
• Local area networks (LAN) or Wide area networks (WAN)

Desktop Systems/Personal Computer Systems

• The PC operating system is designed to maximize user convenience and
responsiveness. Such a system is neither multi-user nor multitasking.
• These systems include PCs running Microsoft Windows and the Apple
Macintosh. The MS-DOS operating system from Microsoft has been
superseded by multiple flavors of Microsoft Windows and IBM has upgraded
MS-DOS to the OS/2 multitasking system.
• The Apple Macintosh operating system has been ported to more advanced
hardware, and now includes new features such as virtual memory and
multitasking.

Real-Time Operating Systems (RTOS)

• A real-time operating system (RTOS) is a multitasking operating system


intended for applications with fixed deadlines (real-time computing). Such
applications include some small embedded systems, automobile engine
controllers, industrial robots, spacecraft, industrial control, and some large-
scale computing systems.
• The real time operating system can be classified into two categories:
1. A hard real-time system guarantees that critical tasks be completed
on time. This goal requires that all delays in the system be bounded,
from the retrieval of stored data to the time that it takes the
operating system to finish any request made of it. Such time
constraints dictate the facilities that are available in hard real-time
systems.
2. A soft real-time system is a less restrictive type of real-time system.
Here, a critical real-time task gets priority over other tasks and
retains that priority until it completes. Soft real time system can be
mixed with other types of systems. Due to less restriction, they are
risky to use for industrial control and robotics.

Operating System Services


Program Execution
The purpose of a computer system is to allow the user to execute programs, so the
operating system provides an environment where the user can conveniently run
programs. Running a program involves allocating and deallocating memory, and CPU
scheduling in the case of multiprogramming.
I/O Operations
Each program requires input and produces output. This involves the use of I/O, so the
operating system provides I/O services, making it convenient for users to run
programs.
File System Manipulation
The output of a program may need to be written into new files or input taken from some
files. The operating system provides this service.
Communications
Processes need to communicate with each other to exchange information during
execution. This may be between processes running on the same computer or on
different computers. Communication can occur in two ways: (i) shared memory
or (ii) message passing.
Error Detection
An error in one part of the system may cause malfunctioning of the complete system. To
avoid such a situation, the operating system constantly monitors the system to detect
errors. This relieves the user of the worry of errors propagating to various parts of the
system and causing malfunctions.
Following are the three services provided by operating systems for ensuring the efficient
operation of the system itself.

1. Resource allocation
2. Accounting
3. Protection
System Call:

• System calls provide an interface between the process and the operating
system.
• System calls allow user-level processes to request services from the
operating system that the process itself is not allowed to perform.
• For example, for I/O a process makes a system call telling the operating
system to read or write a particular area, and this request is satisfied by the
operating system.
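The idea above can be seen from a user program: the process never touches the device itself; it asks the kernel through a system call. A minimal Python sketch (the `os` module exposes thin wrappers over the underlying system calls; the pipe here merely stands in for any I/O channel):

```python
import os

# A user-level process cannot touch the hardware directly; it asks the
# kernel via system calls. os.pipe/os.write/os.read are thin wrappers
# around the corresponding kernel services.
r, w = os.pipe()              # pipe(): kernel creates a communication channel
os.write(w, b"hello kernel")  # write(): kernel copies the bytes in
data = os.read(r, 32)         # read(): kernel copies the bytes back out
os.close(r)
os.close(w)
print(data)                   # b'hello kernel'
```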

Virtual Machines

• A virtual machine takes the layered approach to its logical conclusion. It treats
hardware and the operating system kernel as though they were all hardware.
• A virtual machine provides an interface identical to the underlying bare
hardware.
• The operating system creates the illusion of multiple processes, each
executing on its own processor with its own (virtual) memory.
• The resources of the physical computer are shared to create the virtual
machines.
o CPU scheduling can create the appearance that users have their
own processor.
o Spooling and a file system can provide virtual card readers and
virtual line printers.
o A normal user time-sharing terminal serves as the virtual machine
operator’s console.
Advantages/Disadvantages of Virtual Machines

• The virtual-machine concept provides complete protection of system


resources since each virtual machine is isolated from all other virtual
machines. This isolation, however, permits no direct sharing of resources.
• A virtual-machine system is a perfect vehicle for operating-systems research
and development. System development is done on the virtual machine,
instead of on a physical machine and so does not disrupt normal system
operation.
• The virtual machine concept is difficult to implement due to the effort
required to provide an exact duplicate of the underlying machine.

System Generation (SYSGEN)


Operating systems are designed to run on any of a class of machines at a variety of sites
with a variety of peripheral configurations. The system must then be configured or
generated for each specific computer site, a process sometimes known as system
generation (SYSGEN).
To generate a system, we use a special program. The SYSGEN program obtains
information concerning the specific configuration of the hardware system: it reads from
a given file, asks the operator of the system for the information, or probes the hardware
directly to determine what components are there.
The following kinds of information must be determined.

• What CPU will be used?


• What options (extended instruction sets, floating point arithmetic, and so on) are
installed? For multiple-CPU systems, each CPU must be described.
• How much memory is available? Some systems will determine this value
themselves by referencing memory location after memory location until an
"illegal address" fault is generated. This procedure defines the final legal
address and hence the amount of available memory.
• What devices are available? The system will need to know how to address each
device (the device number), the device interrupt number, the device's type and
model, and any special device characteristics.
• What operating-system options are desired, or what parameter values are to be
used? These options or values might include how many buffers of which sizes
should be used, what type of CPU-scheduling algorithm is desired, what the
maximum number of processes to be supported is.

Booting – The procedure of starting a computer by loading the kernel is known as


booting the system. Most computer systems have a small piece of code, stored in ROM,
known as the bootstrap program or bootstrap loader. This code is able to locate the
kernel, load it into main memory, and start its execution. Some computer systems, such
as PCs, use a two-step process in which a simple bootstrap loader fetches a more
complex boot program from disk, which in turn loads the kernel.
COMPUTER SYSTEM ARCHITECTURE
Computer-System Operation

• I/O devices and the CPU can execute concurrently.


• Each device controller is in charge of a particular device type.
• Each device controller has a local buffer.
• The CPU moves data from/to main memory to/from local buffers.
• I/O is from the device to local buffer of controller.
• Device controller informs CPU that it has finished its operation by causing an
interrupt.

Common Functions of Interrupts

• Interrupt transfers control to the interrupt service routine generally, through


the interrupt vector, which contains the addresses of all the service routines.
• Interrupt architecture must save the address of the interrupted instruction.
• Incoming interrupts are disabled while another interrupt is being processed to
prevent a lost interrupt.
• A trap is a software-generated interrupt caused either by an error or a user
request.
• An operating system is interrupt driven.

Interrupt Handling

• The operating system preserves the state of the CPU by storing registers and
the program counter.
• Determines which type of interrupt has occurred: polling, vectored interrupt
system
• Separate segments of code determine what action should be taken for each
type of interrupt
• Interrupt Time Line for a Single Process Doing Output
Direct Memory Access Structure

• Used for high-speed I/O devices able to transmit information at close to


memory speeds.
• Device controller transfers blocks of data from buffer storage directly to main
memory without CPU intervention.
• Only one interrupt is generated per block, rather than one interrupt per
byte.

Storage Structure
Main memory – the only large storage medium that the CPU can access directly.
Secondary storage – extension of main memory that provides large nonvolatile storage
capacity.
Magnetic disks – rigid metal or glass platters covered with magnetic recording material

• Disk surface is logically divided into tracks, which are subdivided into sectors.
• The disk controller determines the logical interaction between the device and
the computer.

Storage Hierarchy
Storage systems organized in hierarchy.

• Speed
• Cost
• Volatility

Caching – copying information into a faster storage system; main memory can be viewed
as the last cache for secondary storage.

• Use of high-speed memory to hold recently-accessed data.


• Requires a cache management policy.
• Caching introduces another level in storage hierarchy.
• This requires data that is simultaneously stored in more than one level to be
consistent.
Storage-Device Hierarchy

Hardware Protection

• Dual-Mode Operation
• I/O Protection
• Memory Protection
• CPU Protection

Dual-Mode Operation

• Sharing system resources requires the operating system to ensure that an
incorrect program cannot cause other programs to execute incorrectly.
• Provide hardware support to differentiate between at least two modes of
operations.
• User mode – execution done on behalf of a user.
• Monitor mode (also kernel mode or system mode) – execution done on behalf
of operating system.
• A mode bit added to the computer hardware indicates the current mode:
monitor (0) or user (1).
• When an interrupt or fault occurs, the hardware switches to monitor mode.
Privileged instructions can be issued only in monitor mode.

I/O Protection

• All I/O instructions are privileged instructions.


• Must ensure that a user program can never gain control of the computer in
monitor mode (e.g., a user program that, as part of its execution, stores a new
address in the interrupt vector).

Memory Protection

• Must provide memory protection at least for the interrupt vector and the
interrupt service routines.
• In order to have memory protection, add two registers that determine the
range of legal addresses a program may access:
o Base register – holds the smallest legal physical memory address.
o Limit register – contains the size of the range
• Memory outside the defined range is protected.
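The base/limit check described above can be sketched in a few lines; the register values below are illustrative, not drawn from any particular machine:

```python
# Base/limit check as the hardware performs it on every memory reference
# in user mode: an address is legal iff base <= addr < base + limit.
def is_legal(addr, base, limit):
    return base <= addr < base + limit

BASE, LIMIT = 300040, 120900               # illustrative register contents
assert is_legal(300040, BASE, LIMIT)       # first legal address (= base)
assert is_legal(420939, BASE, LIMIT)       # last legal address
assert not is_legal(420940, BASE, LIMIT)   # one past the range -> trap
assert not is_legal(299999, BASE, LIMIT)   # below base -> trap
print("all checks pass")
```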

Hardware Protection

• When executing in monitor mode, the operating system has unrestricted


access to both monitor and user’s memory.
• The load instructions for the base and limit registers are privileged
instructions.

CPU Protection

• Timer – interrupts the computer after a specified period to ensure the
operating system maintains control.
o The timer is decremented every clock tick.
o When the timer reaches the value 0, an interrupt occurs.
• The timer is commonly used to implement time sharing.
• The timer is also used to compute the current time.
• Load-timer is a privileged instruction.
PROCESS CONCEPT
Process Concept
Informally, a process is a program in execution. A process is more than the program
code, which is sometimes known as the text section. It also includes the current activity,
as represented by the value of the program counter and the contents of the processor's
registers. In addition, a process generally includes the process stack, which contains
temporary data (such as method parameters, return addresses, and local variables), and a
data section, which contains global variables.
An operating system executes a variety of programs:

• Batch system – jobs


• Time-shared systems – user programs or tasks
o Process – a program in execution; process execution must progress
in sequential fashion.
o process includes: program counter, stack, data section

Process State
As a process executes, it changes state

• New State: The process is being created.
• Running State: A process is running if it has the CPU, that is, it is
actually using the CPU at that particular instant.
• Blocked (or waiting) State: A process is blocked if it is waiting for
some event to happen, such as an I/O completion, before it can proceed.
Note that a blocked process is unable to run until some external event happens.
• Ready State: A process is ready if it needs the CPU to execute. A
ready process is runnable but temporarily stopped to let another
process run.
• Terminated State: The process has finished execution.
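The states above can be sketched as a small table-driven check. The transition set is an assumption based on the standard five-state diagram (new → ready → running → waiting/ready/terminated, waiting → ready):

```python
# Allowed transitions of the five-state process model (assumed from the
# standard diagram; a real OS has more states and transitions).
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running"},
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready"},
    "terminated": set(),
}

def move(state, target):
    """Return the next state, refusing any illegal transition."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = move(s, nxt)
print(s)  # terminated
```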

The Difference between PROCESS and PROGRAM

1. Both are the same beast with different names: when the beast is sleeping (not
executing) it is called a program, and when it is executing it becomes a process.
2. A program is a static object whereas a process is a dynamic object.
3. A program resides in secondary storage whereas a process resides in main
memory.
4. The lifespan of a program is unlimited but the lifespan of a process is
limited.
5. A process is an 'active' entity whereas a program is a 'passive' entity.
6. A program is an algorithm expressed in a programming language whereas a
process is that algorithm expressed in assembly or machine language.
Diagram of Process State
PROCESS CONTROL BLOCK, SCHEDULING QUEUES AND SCHEDULERS
Process Control Block (PCB)
Information associated with each process.

Process state: The state may be new, ready, running, waiting, halted, and so on.
Program counter: The counter indicates the address of the next instruction to be
executed for this process.
CPU registers: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information. Along with the program counter,
this state information must be saved when an interrupt occurs, to allow the process to
be continued correctly afterward.
CPU-scheduling information: This information includes a process priority, pointers to
scheduling queues, and any other scheduling parameters.
Memory-management information: This information may include such information as the
value of the base and limit registers, the page tables, or the segment tables, depending
on the memory system used by the operating system.
Accounting information: This information includes the amount of CPU and real time used,
time limits, account numbers, job or process numbers, and so on.
Status information: The information includes the list of I/O devices allocated to this
process, a list of open files, and so on.
The PCB simply serves as the repository for any information that may vary from process
to process.
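As a rough sketch, the PCB fields listed above could be modeled like this; the field names are illustrative, since a real kernel's PCB is a large C structure with many more entries:

```python
from dataclasses import dataclass, field

# A toy Process Control Block carrying the categories of information
# described above (state, program counter, registers, scheduling,
# memory-management, accounting, and status information).
@dataclass
class PCB:
    pid: int
    state: str = "new"
    program_counter: int = 0
    registers: dict = field(default_factory=dict)   # CPU registers
    priority: int = 0                               # scheduling information
    base_register: int = 0                          # memory-management info
    limit_register: int = 0
    cpu_time_used: int = 0                          # accounting information
    open_files: list = field(default_factory=list)  # status information

p = PCB(pid=7, priority=3)
p.state = "ready"
print(p.pid, p.state)  # 7 ready
```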
Process Scheduling Queues
Job Queue: This queue consists of all processes in the system; those processes are
entered to the system as new processes.
Ready Queue: This queue consists of the processes that are residing in main memory and
are ready and waiting to be executed by the CPU. This queue is generally stored as a linked
list. A ready-queue header contains pointers to the first and final PCBs in the list. Each PCB
includes a pointer field that points to the next PCB in the ready queue.
Representation of Process Scheduling

Schedulers
A scheduler is a decision maker that moves processes from one scheduling queue to
another or allocates the CPU for execution. The operating system has three types of
schedulers:

1. Long-term scheduler or Job scheduler


2. Short-term scheduler or CPU scheduler
3. Medium-term scheduler

Long-term scheduler or Job scheduler

• The long-term scheduler or job scheduler selects processes from disk and
loads them into main memory for execution. It executes much less frequently.
• It controls the degree of multiprogramming (i.e., the number of processes in
memory).
• Because of the longer interval between executions, the long-term scheduler
can afford to take more time to select a process for execution.

Short-term scheduler or CPU scheduler

• The short-term scheduler or CPU scheduler selects a process from among the
processes that are ready to execute and allocates the CPU.
• The short-term scheduler must select a new process for the CPU frequently. A
process may execute for only a few milliseconds before waiting for an I/O
request.

Medium-term scheduler
The medium-term scheduler provides an intermediate level of scheduling; it swaps processes out of memory and later swaps them back in.
Processes can be described as either:

• I/O-bound process – spends more time doing I/O than computations, many
short CPU bursts.
• CPU-bound process – spends more time doing computations; few very long
CPU bursts.

Context Switch

• When CPU switches to another process, the system must save the state of the
old process and load the saved state for the new process.
• Context-switch time is overhead; the system does no useful work while
switching.
• Time dependent on hardware support

Deadlock is a situation where a set of processes is blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
Consider an example where two trains are coming toward each other on the same track:
once they are in front of each other, neither train can move. A similar situation occurs in
operating systems when two or more processes hold some resources and wait for
resources held by the other(s). For example, in the diagram below, Process 1 is holding
Resource 1 and waiting for Resource 2, which is acquired by Process 2, and Process 2 is
waiting for Resource 1.
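The two-process example forms a cycle in a "wait-for" graph (P1 → P2 and P2 → P1), and a cycle in that graph is exactly what deadlock detection looks for. A minimal depth-first cycle check, purely illustrative and not any OS's actual detector:

```python
# Detect a cycle in a wait-for graph: an edge A -> B means "process A is
# waiting for a resource held by process B". A cycle means deadlock.
def has_cycle(graph):
    visited, on_stack = set(), set()
    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True
        on_stack.discard(node)
        return False
    return any(dfs(n) for n in graph if n not in visited)

wait_for = {"P1": ["P2"], "P2": ["P1"]}     # the situation described above
print(has_cycle(wait_for))                  # True: deadlock
print(has_cycle({"P1": ["P2"], "P2": []}))  # False: P2 is not waiting
```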
Memory management is the functionality of an operating system which handles or
manages primary memory and moves processes back and forth between main memory
and disk during execution. Memory management keeps track of each and every memory
location, regardless of whether it is allocated to some process or free. It checks how
much memory is to be allocated to processes, decides which process will get memory at
what time, and tracks whenever memory gets freed or unallocated, updating the status
correspondingly.
CPU SCHEDULING

FIRST COME FIRST SERVE SCHEDULING ALGORITHM


By far the simplest CPU-scheduling algorithm is the first-come, first-served (FCFS)
scheduling algorithm. With this scheme, the process that requests the CPU first is
allocated the CPU first. The implementation of the FCFS policy is easily managed with a
FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of the
queue. When the CPU is free, it is allocated to the process at the head of the queue. The
running process is then removed from the queue. The code for FCFS scheduling is simple
to write and understand. The average waiting time under the FCFS policy, however, is
often quite long.
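FCFS waiting times are easy to compute: each process waits for the sum of the bursts ahead of it in the queue. The burst values below are illustrative (three processes arriving at time 0 in order):

```python
# FCFS: processes run in arrival order; each one waits for all earlier bursts.
def fcfs_waiting_times(bursts):
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)   # this process waited until the CPU freed up
        clock += b
    return waits

waits = fcfs_waiting_times([24, 3, 3])  # P1, P2, P3 (illustrative bursts)
print(waits)                            # [0, 24, 27]
print(sum(waits) / len(waits))          # 17.0 average waiting time
```

Note how the long first burst drags the average up; reordering the same jobs shortest-first would cut it sharply, which motivates SJF below.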
Shortest-Job-First Scheduling
A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling
algorithm. This algorithm associates with each process the length of the latter's next CPU
burst. When the CPU is available, it is assigned to the process that has the smallest next
CPU burst. If two processes have the same length next CPU burst, FCFS scheduling is used
to break the tie. Note that a more appropriate term would be the shortest next CPU
burst, because the scheduling is done by examining the length of the next CPU burst of a
process, rather than its total length. We use the term SJF because most people and
textbooks refer to this type of scheduling discipline as SJF.
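A sketch of SJF waiting times when all processes are available at time 0 (burst values are illustrative):

```python
# SJF with simultaneous arrivals: serve the shortest next CPU burst first.
def sjf_waiting_times(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock      # time this process spent waiting in the queue
        clock += bursts[i]
    return waits

bursts = [6, 8, 7, 3]              # P1..P4 (illustrative)
waits = sjf_waiting_times(bursts)
print(waits)                       # [3, 16, 9, 0]
print(sum(waits) / len(waits))     # 7.0 average waiting time
```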
Priority scheduling is a type of scheduling algorithm used by the operating system to
schedule the processes for execution. The priority scheduling has both the preemptive
mode of scheduling and the non-preemptive mode of scheduling. Here, we will discuss
the non-preemptive priority scheduling algorithm.
Non-Preemptive Priority
A scheduling discipline is non-preemptive if, once a process has been given the CPU, the
CPU cannot be taken away from that process.
As the name suggests, the scheduling depends upon the priority of the processes rather
than its burst time. So, the processes, in this case, must also have the priority number in
its details on the basis of which the OS will schedule it.
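A minimal sketch of non-preemptive priority order, assuming all processes arrive at time 0 and that a lower number means higher priority (a common but not universal convention):

```python
# Non-preemptive priority: with all processes ready at time 0, the execution
# order is simply the processes sorted by priority number.
def priority_schedule(procs):
    """procs: list of (name, burst, priority); returns execution order."""
    return [name for name, _, _ in sorted(procs, key=lambda p: p[2])]

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 5, 2)]
print(priority_schedule(procs))    # ['P2', 'P4', 'P1', 'P3']
```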
SRTF, which stands for Shortest Remaining Time First, is a scheduling algorithm used in
operating systems; it can also be called the preemptive version of the SJF scheduling
algorithm. The process which has the least processing time remaining is executed first. As
it is a preemptive type of scheduling, it is claimed to be better than the SJF scheduling
algorithm.
In priority scheduling, the processes are scheduled on the basis of their priority, not on the
basis of their burst time. If the preemptive mode of this scheduling is followed, then a
process with a higher priority than the currently executing process can replace the executing
process.
The Round Robin scheduling algorithm is a preemptive type of scheduling used by the
operating system for scheduling processes. In the Round Robin scheduling algorithm, a time
quantum is decided which remains constant throughout the execution of all processes. Each
process executes only for this much time. If within this time the process completes its execution,
it is terminated; else, it waits for its turn to get the processor again for the same time
quantum, and this cycle continues.
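The round-robin cycle described above can be sketched as a queue simulation (quantum and burst values are illustrative):

```python
from collections import deque

# Round Robin: each process runs for at most one time quantum, then goes
# to the back of the ready queue if it still has work left.
def round_robin(bursts, quantum):
    """bursts: {name: burst time}; returns completion time of each process."""
    queue = deque(bursts.items())
    clock, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining - run > 0:
            queue.append((name, remaining - run))  # back of the queue
        else:
            finish[name] = clock                   # process terminates
    return finish

print(round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4))
# {'P2': 7, 'P3': 10, 'P1': 30}
```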

Multilevel Queue Scheduling


Every scheduling algorithm supports a different class of processes, but in a general-purpose
system several classes coexist: some processes should be scheduled with a priority
algorithm, some are interactive processes that must remain responsive, and some are
background processes whose execution can be delayed.
A class of scheduling algorithms has been created for situations in which processes are
easily divided into different groups. A common division is between foreground
(interactive) processes and background (batch) processes. These two types of processes
have different response-time requirements and different resource requirements, so they
need different scheduling algorithms; foreground processes typically have (externally
defined) priority over background processes.
A multilevel queue scheduling algorithm partitions the ready queue into several separate
queues. Based on some property of the process, such as memory size, process priority, or
process type, each process is permanently assigned to one queue. Each queue has its own
scheduling algorithm; for example, some queues are used for foreground processes and
some for background processes. The foreground queue can be scheduled with a
round-robin algorithm while the background queue is scheduled with a first-come,
first-served algorithm.
In addition, there must be scheduling between the queues, which is commonly
implemented as fixed-priority preemptive scheduling. Let us take an example of a
multilevel queue scheduling algorithm with five queues:

1. System process
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Multi-Level Feedback Queue

In the multilevel queue scheduling algorithm, processes are permanently assigned to one
queue and do not move between queues. There are separate queues for foreground and
background processes, and processes do not change their foreground or background
nature. This type of arrangement has the advantage of low scheduling overhead, but it is
inflexible.
Multilevel feedback queue scheduling, in contrast, allows a process to move between
queues. The idea is to separate processes with different CPU-burst characteristics. If a
process uses too much CPU time, it is moved to a lower-priority queue. This leaves
I/O-bound and interactive processes in the higher-priority queues. Similarly, a process
which waits too long in a lower-priority queue may be moved to a higher-priority queue.
This form of aging prevents starvation.
The multilevel feedback queue scheduler has the following parameters:

• The number of queues in the system.
• The scheduling algorithm for each queue in the system.
• The method used to determine when to upgrade a process to a higher-priority
queue.
• The method used to determine when to demote a process to a lower-priority
queue.
• The method used to determine which queue a process will enter when that
process needs service.
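A highly simplified two-queue feedback sketch illustrates the demotion rule: a process that uses its whole quantum in the top queue drops to the lower queue. All parameters (two levels, quanta of 4 and 8) are illustrative choices, not a standard configuration:

```python
from collections import deque

# Two-level MLFQ sketch: queue 0 (high priority, quantum 4) and queue 1
# (low priority, quantum 8). Using a full quantum in queue 0 demotes the
# process to queue 1; queue 1 runs only when queue 0 is empty.
def mlfq(bursts, quanta=(4, 8)):
    queues = [deque(bursts.items()), deque()]
    order = []                     # (name, queue level, time run) per slice
    while any(queues):
        level = 0 if queues[0] else 1
        name, remaining = queues[level].popleft()
        run = min(quanta[level], remaining)
        order.append((name, level, run))
        if remaining - run > 0:
            # demote from queue 0; queue 1 processes stay in queue 1
            queues[min(level + 1, 1)].append((name, remaining - run))
    return order

print(mlfq({"A": 3, "B": 10}))
# [('A', 0, 3), ('B', 0, 4), ('B', 1, 6)]
```

Here the short, interactive-like process A finishes entirely in the high-priority queue, while the CPU-hungry process B is pushed down, which is exactly the behavior the section describes.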
