
LAB MANUAL

CST 206 OPERATING SYSTEMS LAB

B. Tech 4th Semester Computer Science & Engineering

2019 Scheme

(APJ Abdul Kalam Technological University)

Department of Computer Science and Engineering


LBS INSTITUTE OF TECHNOLOGY FOR WOMEN
POOJAPPURA, KERALA
Vision and Mission of the Institute

Vision

To be a centre of academic excellence empowering women in technical domain.

Mission

Imparting value based technical education for transforming young women to professionals
excelling globally in academics, research & development and industry meeting societal
challenges.

Department of Computer Science & Engineering

Vision

To be a renowned academic and research center in Computer Science and allied domains.

Mission

To impart quality technical education by providing a conducive learning and research ambience for molding ethically responsible engineers, academicians and researchers catering to the needs of industry and society.

Program Educational Objectives (PEOs)

Our graduates would

1. Excel as IT professionals in providing solutions to real world problems using contemporary technology with societal considerations and professional ethics.

2. Possess zeal to pursue higher education and research.

3. Demonstrate good leadership and interpersonal skills to work in diversified teams on multidisciplinary projects.

Program Specific Outcomes ( PSOs)

A graduate of the program will be able to:

1. Develop good problem-solving skills and inculcate "out-of-the-box thinking" to design quality software.
2. Design communication systems and computing models to provide IT-enabled solutions by considering societal needs.

Program Outcomes (POs)

1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering specialization to the solution of complex engineering problems.

2. Problem analysis: Identify, formulate, review research literature, and analyze complex engineering problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.

3. Design/development of solutions: Design solutions for complex engineering problems and design system components or processes that meet the specified needs with appropriate consideration for public health and safety, and the cultural, societal, and environmental considerations.

4. Conduct investigations of complex problems: Use research-based knowledge and research methods including design of experiments, analysis and interpretation of data, and synthesis of the information to provide valid conclusions.

5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools including prediction and modelling to complex engineering activities with an understanding of the limitations.

6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to
the professional engineering practice.

7. Environment and sustainability: Understand the impact of professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for, sustainable development.

8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or
leader in diverse teams, and in multidisciplinary settings.

10. Communication: Communicate effectively on complex engineering activities with the engineering community and with society at large, such as, being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions.

11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one's own work, as a member and
leader in a team, to manage projects and in multidisciplinary environments.

12. Life-long learning: Recognize the need for, and have the preparation and ability to engage
in independent and life-long learning in the broadest context of technological change.
CONTENTS

Slno: Experiment Name

1 FAMILIARISATION

2 CREATION OF PROCESS

3 IMPLEMENTING SYSTEM CALLS

4 INTERPROCESS COMMUNICATION USING SHARED MEMORY

5 IMPLEMENTING SEMAPHORES

6 IMPLEMENTATION OF CPU SCHEDULING ALGORITHM

7 IMPLEMENTATION OF MEMORY ALLOCATION METHODS FOR FIXED PARTITION

8 IMPLEMENTATION OF PAGE REPLACEMENT ALGORITHM

9 BANKERS ALGORITHM FOR DEADLOCK AVOIDANCE

10 SIMULATE DISK SCHEDULING ALGORITHM


FAMILIARISATION

Introduction
Operating System

An operating system is a type of system software: a collection (set) of programs that performs two specific functions. First, it provides a user interface so that a human user can interact with the machine. Second, it manages computer resources.

Computer Resources

Computer resources are the physical devices that an operating system accesses and manages. Printers, memories, input/output devices, files, etc., are examples of computer resources. That is why an operating system is called a resource manager. Some of the important tasks, which a typical modern operating system has to perform, are given below:
Processes Scheduling
Interprocess Communication
Synchronization
Memory Management (physical memory allocation, virtual memory etc.)
Resource Management
Directory and File Management
Communication

Linux

Linux is an operating system which is a flavour of UNIX. Linux is a multiuser and multitasking operating system. It is a leading operating system on servers and other big iron systems such as mainframe computers and supercomputers. More than 90% of today's 500 fastest supercomputers run some variant of Linux, including the 10 fastest. The Android system in wide use on mobile devices is built on the Linux kernel. Because of its likeness to UNIX, most programs written for UNIX can be compiled and run under Linux. The Linux operating system runs on a variety of machines such as 486/Pentium, Sun SPARC, PowerPC, etc.
Linus Torvalds, the principal author, wrote the Linux kernel at the University of Helsinki, Finland. UNIX programmers around the world assisted him in the development of Linux. Popular Linux distributions include Debian (Knoppix, Ubuntu), Fedora, Gentoo, openSUSE, etc.

Linux System

The Linux system can be split into two parts:

i) Shell
ii) Kernel

Formally, a shell is the interface between a user and the Linux operating system, i.e. the user interacts with the Linux operating system through the shell. A shell performs two tasks: first, it accepts commands from the user, and second, it interprets those commands.
Two commonly used shells are the Bourne shell and the C shell. Another shell, which is rather more complex, is the Korn shell.

The kernel is the core of the Linux operating system; it keeps running as long as the system is operational. The kernel consists of routines that interact with the underlying hardware, and routines for system call handling, process management, scheduling, pipes, signals, paging, swapping, the file system, and the high-level part of the I/O system. So the shell accepts commands from the user, interprets them, and delivers the interpreted commands to the kernel for execution. After execution, the shell displays the result of the executed commands.

Files
A file is a mechanism through which we store information. Normally, there are two modes of storing information:
i) Files
ii) Directories

i) File - A simple file stores some type of information. The information it holds may be in text format or in binary format.
ii) Directories - Directories are special types of files owned by the operating system, which contain information about files and may contain other directories (called subdirectories). So directories are also files, which contain some vital information about files and other directories. There is a file (management) system in the operating system, which manipulates files and directories. The major operations, which can be performed on files and directories, are given below:
Create
Delete
Open
Close
Read
Write
Append
Seek
Rename
Get Attributes
Set Attributes

Basic Linux Commands


1. man – used to display the documentation/user manual on just about any linux
command that can be executed on the terminal.
2. cd – used to change the current working directory to a specified folder inside the
terminal.
3. ls – used to display a directory's files and folders.
4. cat – used to read the contents of one or more files and display their contents inside
the terminal.
5. touch – used to create a new file inside your working directory directly from the
terminal.
6. mkdir – used to create new directories inside an existing working directory from the
terminal.
7. pwd – used to display the path of the current working directory inside the terminal.
8. echo – used to display a line of text or the value of a variable on the terminal; often used for debugging shell scripts.

System Calls

A system call is a mechanism that provides the interface between a process and the
operating system. It is a programmatic method in which a computer program
requests a service from the kernel of the OS. System calls offer the services of the
operating system to the user programs via an API (Application Programming
Interface). System calls are the only entry points into the kernel.

System calls are mainly divided into five categories:


● Process Control
● File Management
● Device Management
● Information Maintenance
● Communication

Process Control:
These system calls perform tasks such as process creation, process termination, etc.
The Linux system calls under this category are fork(), exit() and exec().
fork()
⮚ A new process is created by the fork() system call.
⮚ A new process may be created with fork() without a new program being run-
the new sub-process simply continues to execute exactly the same program that
the first (parent) process was running.
⮚ It is one of the most widely used system calls under process management.
exit()
⮚ The exit() system call is used by a program to terminate its execution.
⮚ The operating system reclaims resources that were used by the process after the
exit() system call.
exec()
⮚ A new program will start executing after a call to exec()
⮚ Running a new program does not require that a new process be created first:
any process may call exec() at any time. The currently running program is
immediately terminated, and the new program starts executing in the context of
the existing process.
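
As an illustration (not one of the lab exercises), a minimal sketch that combines fork(), exec() and exit() is given below; the program run in the child, /bin/ls, is only an example.

/* Sketch: the child replaces itself with /bin/ls, the parent waits for it */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>
int main()
{
pid_t pid = fork();                   /* create a child process */
if (pid == 0)
{
execl("/bin/ls", "ls", (char *)NULL); /* replace the child image with ls */
perror("execl");                      /* reached only if exec() fails */
exit(1);
}
wait(NULL);                           /* parent waits for the child to finish */
printf("child finished\n");
return 0;
}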

File Management :

File management system calls handle file manipulation jobs such as creating a file,
reading from and writing to it, etc. The Linux system calls under this category are
open(), read(), write() and close().

open():
⮚ It is the system call to open a file.
⮚ This system call just opens the file; to perform operations such as read and
write, we need to execute different system calls on the returned file descriptor.
read():
⮚ This system call reads data from a file that has been opened in read mode.
⮚ We cannot modify the file with this system call.
⮚ Multiple processes can execute the read() system call on the same file
simultaneously.
write():
⮚ This system call writes data to a file that has been opened in write mode.
⮚ We can modify the file with this system call.
⮚ Multiple processes should not execute the write() system call on the same file
simultaneously without synchronization.
close():
⮚ This system call closes the opened file.

Device Management :

Device management does the job of device manipulation like reading from device
buffers, writing into device buffers, etc. The Linux System calls under this is ioctl().

ioctl():
⮚ ioctl() is referred to as Input and Output Control.
⮚ ioctl is a system call for device-specific input/output operations and other
operations which cannot be expressed by regular system calls.
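
As a concrete example (for reference only), on Linux the TIOCGWINSZ request asks the terminal driver for the current window size; a minimal sketch, assuming the program is run on a terminal, is:

/* Sketch: query the terminal window size with the device-specific TIOCGWINSZ request */
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
int main()
{
struct winsize ws;
if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == -1)  /* device-specific I/O control */
{
perror("ioctl");
return 1;
}
printf("terminal is %d rows x %d columns\n", ws.ws_row, ws.ws_col);
return 0;
}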

Information Maintenance:
It handles information and its transfer between the OS and the user program. In
addition, the OS keeps information about all its processes, and system calls are used to
access this information. The system calls under this category are getpid(), alarm() and sleep().

getpid():
⮚ getpid stands for Get the Process ID.
⮚ The getpid() function shall return the process ID of the calling process.
⮚ The getpid() function shall always be successful and no return value is reserved
to indicate an error.

alarm():
⮚ This system call sets an alarm clock for the delivery of a signal after a specified
number of seconds.
⮚ It arranges for the SIGALRM signal to be delivered to the calling process.

sleep():
⮚ This system call suspends the execution of the currently running process for
some interval of time.
⮚ Meanwhile, during this interval, another process gets a chance to execute.
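
A minimal sketch (for reference, not a lab exercise) that uses getpid(), alarm() and sleep() together is shown below; the handler name on_alarm and the 2-second delay are arbitrary choices.

/* Sketch: SIGALRM is delivered after 2 seconds and interrupts the sleep() */
#include <stdio.h>
#include <unistd.h>
#include <signal.h>
void on_alarm(int sig)
{
write(1, "alarm fired\n", 12);   /* write() is safe inside a signal handler */
}
int main()
{
signal(SIGALRM, on_alarm);       /* install the handler for SIGALRM */
printf("process id is %d\n", getpid());
alarm(2);                        /* request SIGALRM after 2 seconds */
sleep(5);                        /* suspended; woken early by the signal */
return 0;
}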

Communication
These types of system calls are specially used for inter-process communication.
Two models are used for inter-process communication:
1. Message Passing (processes exchange messages with one another)
2. Shared Memory (processes share a memory region to communicate)
The system calls under this category are pipe(), shmget() and mmap().

pipe():
⮚ The pipe() system call is used to communicate between different Linux
processes.
⮚ It is mainly used for inter-process communication.
⮚ The pipe() system call creates a pair of file descriptors: one for reading and one for writing.

shmget():
⮚ shmget stands for shared memory get.
⮚ It is mainly used for shared memory communication.
⮚ This system call creates (or obtains) a shared memory segment and returns its
identifier, which co-operating processes use to attach the segment and exchange data.

mmap():
⮚ This function call is used to map or unmap files or devices into memory.
⮚ The mmap() system call is responsible for mapping the content of the file to the
virtual memory space of the process.
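
For reference, a minimal sketch of pipe()-based communication between a parent and its child is given below; the message text is arbitrary.

/* Sketch: the parent writes a message into the pipe, the child reads and prints it */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>
int main()
{
int fd[2], n;
char buf[32];
pipe(fd);                          /* fd[0] = read end, fd[1] = write end */
if (fork() == 0)
{
close(fd[1]);                      /* child only reads */
n = read(fd[0], buf, sizeof(buf));
write(1, buf, n);                  /* echo the message to the screen */
return 0;
}
close(fd[0]);                      /* parent only writes */
write(fd[1], "hello from parent\n", 18);
close(fd[1]);
wait(NULL);
return 0;
}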
INTRODUCTION TO OPERATING SYSTEMS

An Operating System is a program that manages the Computer hardware. It controls and
coordinates the use of the hardware among the various application programs for the various users.

A Process is a program in execution. As a process executes, it changes state


• New: The process is being created

• Running: Instructions are being executed

• Waiting: The process is waiting for some event to occur

• Ready: The process is waiting to be assigned to a processor

• Terminated : The process has finished execution

Apart from the program code, it includes the current activity represented by
• Program Counter,

• Contents of Processor registers,

• Process Stack which contains temporary data like function parameters, return
addresses and local variables
• Data section which contains global variables

• Heap for dynamic memory allocation

A multi-programmed system can have many processes running simultaneously with the CPU
multiplexed among them. By switching the CPU between the processes, the OS can make the computer
more productive. A process scheduler selects one process, from among the many processes that are
ready, for execution on the CPU. Switching the CPU to another process requires performing a
state save of the current process and a state restore of the new process; this is a context switch.

Scheduling Algorithms
CPU Scheduler can select processes from ready queue based on various scheduling algorithms.
Different scheduling algorithms have different properties, and the choice of a particular algorithm may
favour one class of processes over another. The scheduling criteria include
• CPU utilization: Keep the CPU as busy as possible.
• Throughput: The number of processes that are completed per unit time.

• Waiting time: The sum of periods spent waiting in ready queue.

• Turnaround time: The interval between the time of submission of process to the time
of completion.
• Response time: The time from submission of a request until the first response is produced.

The different scheduling algorithms are

1. FCFS: First Come First Serve Scheduling

• It is the simplest algorithm to implement.


• The process with the earliest arrival time gets the CPU first.

• The lesser the arrival time, the sooner the process gets the CPU.

• It is a non-pre-emptive type of scheduling.

• The Turnaround time and the waiting time are calculated by using the following formula.

Turn Around Time = Completion Time - Arrival Time


Waiting Time = Turnaround time - Burst Time
Process ID  Arrival Time  Burst Time  Completion Time  Turn Around Time  Waiting Time
0           0             2           2                2                 0
1           1             6           8                7                 1
2           2             4           12               10                6
3           3             9           21               18                9
4           4             12          33               29                17

Avg Waiting Time = (0 + 1 + 6 + 9 + 17)/5 = 33/5 = 6.6

2. SJF: Shortest Job First Scheduling

• The job with the shortest burst time will get the CPU first.

• The lesser the burst time, the sooner the process gets the CPU.

• It is the non-pre-emptive type of scheduling.

• However, it is very difficult to predict the burst time needed for a process hence this
algorithm is very difficult to implement in the system.
• In the following example, there are five jobs named as P1, P2, P3, P4 and P5. Their arrival
time and burst time are given in the table below.

Process ID  Arrival Time  Burst Time  Completion Time  Turn Around Time  Waiting Time
1           1             7           8                7                 0
2           3             3           13               10                7
3           6             2           10               4                 2
4           7             10          31               24                14
5           9             8           21               12                4

Since no process arrives at time 0, there will be an empty slot in the Gantt chart from
time 0 to 1 (the time at which the first process arrives).
• According to the algorithm, the OS schedules the process which has the lowest burst
time among the available processes in the ready queue.
• Till now, we have only one process (P1) in the ready queue, hence the scheduler will schedule it
on the processor no matter what its burst time is.
• It will be executed till 8 units of time.
• By then, three more processes have arrived in the ready queue, hence the scheduler will
choose the process with the lowest burst time.
• Among the processes given in the table, P3 will be executed next since it has the lowest
burst time among all the available processes.

Avg Waiting Time = 27/5

3. SRTF: Shortest Remaining Time First Scheduling

• It is the pre-emptive form of SJF. In this algorithm, the OS schedules the job according to
its remaining execution time, as sketched below.
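
Since no separate SRTF program appears later in this manual, a minimal sketch is given here for reference; the arrival and burst times are example data chosen only for illustration, and the simulation advances one time unit at a time.

/* Sketch: SRTF (pre-emptive SJF) simulated one time unit at a time */
#include <stdio.h>
int main()
{
int n = 4;
int at[] = {0, 1, 2, 3};          /* arrival times (example data) */
int bt[] = {8, 4, 9, 5};          /* burst times   (example data) */
int rt[4], done = 0, t = 0, i;
int wt = 0, tat = 0;
for (i = 0; i < n; i++) rt[i] = bt[i];
while (done < n)
{
int shortest = -1;
for (i = 0; i < n; i++)           /* pick the arrived process with least remaining time */
if (at[i] <= t && rt[i] > 0 && (shortest == -1 || rt[i] < rt[shortest]))
shortest = i;
if (shortest == -1) { t++; continue; }   /* no process has arrived yet: CPU idle */
rt[shortest]--;                   /* run it for one time unit */
t++;
if (rt[shortest] == 0)            /* process finished at time t */
{
done++;
tat += t - at[shortest];
wt  += t - at[shortest] - bt[shortest];
}
}
printf("Average waiting time    = %.2f\n", (float)wt / n);
printf("Average turnaround time = %.2f\n", (float)tat / n);
return 0;
}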

4. Priority Scheduling

• In this algorithm, a priority is assigned to each of the processes.

• The higher the priority, the sooner the process gets the CPU.
• If the priority of two processes is the same, then they are scheduled according to their
arrival time.

5. Round Robin Scheduling

• In the Round Robin scheduling algorithm, the OS defines a time quantum (slice).
• All the processes get executed in a cyclic way.
• Each process gets the CPU for a small amount of time (the time quantum) and then goes back
to the ready queue to wait for its next turn. It is a pre-emptive type of scheduling.
6. Multilevel Queue Scheduling

• A multi-level queue scheduling algorithm partitions the ready queue into several separate queues.
• The processes are permanently assigned to one queue, generally based on some property of the
process, such as memory size, process priority, or process type.
• Each queue has its own scheduling algorithm.

7. Multilevel Feedback Queue Scheduling

• Multilevel feedback queue scheduling, however, allows a process to move between queues.
• The idea is to separate processes with different CPU-burst characteristics.
• If a process uses too much CPU time, it will be moved to a lower-priority queue.
• Similarly, a process that waits too long in a lower-priority queue may be moved to a
higher-priority queue.
• This form of aging prevents starvation.

EXPERIMENT-2

2.A) Fork() System Call:

AIM: To write the program to implement fork () system call.

DESCRIPTION:
Used to create new processes. The new process consists of a copy of the address space of the
original process. The value of process id for the child process is zero, whereas the value of
process
id for the parent is an integer value greater than zero.
Syntax:
fork();

ALGORITHM:
1: Start the program.
2: Declare the variables pid and child id.
3: Get the child id value using system call fork().
4: If the child id value is greater than zero, we are in the parent process.
5: Using the getpid() system call, get the process id.
6: Print "i am in the parent process" and print the process id.
7: Using the getppid() system call, get the parent process id.
8: Print "i am in the parent process" and print the parent process id.
9: Else, if the child id value is equal to zero, we are in the child process.
10: Using the getpid() system call, get the process id.
11: Print "i am in the child process" and print the process id.
12: Using the getppid() system call, get the parent process id.
13: Print "i am in the child process" and print the parent process id.
14: Stop the program.

PROGRAM :
SOURCE CODE:
/* fork system call */
#include<stdio.h>
#include <unistd.h>
#include<sys/types.h>
int main()
{
int id,childid;
id=getpid();
if((childid=fork())>0)
{
printf("\n i am in the parent process %d",id);
printf("\n i am in the parent process %d",getpid());
printf("\n i am in the parent process %d\n",getppid());
}
else
{
printf("\n i am in child process %d",id);
printf("\n i am in the child process %d",getpid());
printf("\n i am in the child process %d",getppid());
}
}

OUTPUT:
$ vi fork.c
$ cc fork.c
$ ./a.out
i am in child process 3765
i am in the child process 3766
i am in the child process 3765
i am in the parent process 3765
i am in the parent process 3765
i am in the parent process 3680

RESULT:
Thus the program was executed and verified successfully
2.B) Wait () and Exit () System Calls:

AIM:To write the program to implement the system calls wait ( ) and exit ( ).

DESCRIPTION:
i. fork ( )
Used to create new process. The new process consists of a copy of the address space of the
original process. The value of process id for the child process is zero, whereas the value of
process
id for the parent is an integer value greater than zero.
Syntax: fork ( );
ii. wait ( )
The parent waits for the child process to complete using the wait system call. The wait
system
call returns the process identifier of a terminated child, so that the parent can tell which of its
possibly many children has terminated.
Syntax: wait (NULL);
iii. exit ( )
A process terminates when it finishes executing its final statement and asks the operating
system to delete it by using the exit system call. At that point, the process may return data
(output)
to its parent process (via the wait system call).
Syntax: exit (0);

ALGORITHM:
Step 1: Start the program.
Step 2: Declare the variables pid and i as integers.
Step 3: Get the child id value using the system call fork ().
Step 4: If child id value is less than zero then print “fork failed”.
Step 5: Else if child id value is equal to zero, it is the id value of the child and then start the
child
process to execute and perform Steps 7 & 8.
Step 6: Else perform Step 9.
Step 7: Use a for loop for almost five child processes to be called.
Step 8: After execution of the for loop then print “child process ends”.
Step 9: Execute the system call wait ( ) to make the parent to wait for the child process to get
over.
Step 10: Once the child processes are terminated, the parent terminates and hence prints
“Parent
process ends”.
Step 11: After both the parent and the child processes finish, they execute the exit ( ) system
call to get permanently removed from the OS.
Step 12: Stop the program.

PROGRAM:
2.B.1) SOURCE CODE:
#include<stdio.h>
#include<stdlib.h>   /* exit() */
#include<unistd.h>
#include<sys/wait.h> /* wait() */
int main( )
{
int i, pid;
pid=fork( );
if(pid== -1)
{
printf("fork failed");
exit(0);
}
else if(pid==0)
{
printf("\n Child process starts");
for(i=0; i<5; i++)
{
printf("\n Child process %d is called", i);
}
printf("\n Child process ends");
}
else
{
wait(0);
printf("\n Parent process ends");
}
exit(0);
}

OUTPUT:
$ vi waitexit.c
$ cc waitexit.c
$ ./a.out
Child process starts
Child process 0 is called
Child process 1 is called
Child process 2 is called
Child process 3 is called
Child process 4 is called
Child process ends
Parent process ends

RESULT:
Thus the program was executed and verified successfully
2.B.2 ) WAIT () AND EXIT () SYSTEM CALLS

PROGRAM:
SOURCE CODE:
/* wait system call */
#include <stdlib.h>
#include <errno.h>
#include<stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
int main()
{
pid_t pid;
int rv;
switch(pid=fork())
{
case -1:
perror("fork");
exit(1);
case 0:
printf("\n CHILD: This is the child process!\n");
fflush(stdout);
printf("\n CHILD: My PID is %d\n", getpid());
printf("\n CHILD: My parent's PID is %d\n",getppid());
printf("\n CHILD: Enter my exit status (make it small):\n ");
printf("\n CHILD: I'm outta here!\n");
scanf(" %d", &rv);
exit(rv);
default:
printf("\nPARENT: This is the parent process!\n");
printf("\nPARENT: My PID is %d\n", getpid());
fflush(stdout);
wait(&rv);
fflush(stdout);
printf("\nPARENT: My child's PID is %d\n", pid);

RESULT:
Thus the program was executed and verified successfully

EXPERIMENT-3
3.A) Program to Implement System Calls
DESCRIPTION: A system call is a method for a computer program to request a service from
the kernel of the operating system on which it is running. A system call is a method of
interacting with the operating system via programs. A system call is a request from computer
software to an operating system's kernel.

read():It is used to obtain data from a file on the file system.

write():It is used to write data from a user buffer to a device like a file. This system call is one
way for a program to generate data.

AIM: To write a program to read a maximum of 15 characters from the user and print them on
the screen
ALGORITHM:
1.Start the program
2.Declare the buffer size
3.Read user from standard input device to the buffer using read() system call.
4.Print on screen using write() system call.
5.Stop the program.
PROGRAM:
SOURCE CODE:
#include<unistd.h>
int main()
{
int n;
char buff[15];
n=read(0,buff,15);
write(1,buff,n);
}
OUTPUT:
HELLO
HELLO.

RESULT:
Thus the program was executed and verified successfully

3.B) Program to implement system calls


AIM: To write a program in C to read some content ( 20 characters) of file F1.txt into file
F2.txt. The content of file F2.txt should not get deleted or overwritten.
ALGORITHM:
1.Start the program
2.Declare buffer size
3.Create two files F1 and F2
4.Open text F1.txt in read only mode
5.Open F2.txt in write-only and append mode so that its existing content is not overwritten.
6.Read from F1.txt using the system call read()
7.Then write to F2.txt using the system call write()
8.Stop the program.
PROGRAM:
SOURCE CODE:
#include<unistd.h>
#include<sys/types.h>
#include<sys/stat.h>
#include<fcntl.h>
int main()
{
int fd1,fd2,n;
char buff[25];
fd1=open("F1.txt",O_RDONLY);
fd2=open("F2.txt",O_WRONLY|O_APPEND);
n=read(fd1,buff,20);
write(fd2,buff,n);
}
OUTPUT:
F1.txt:
123

F2.txt (before execution):
abcd

F2.txt (after execution):
abcd
123

RESULT:
Thus the program was executed and verified successfully

3.C) Program to implement System calls( OPENDIR and READDIR)


DESCRIPTION:
The opendir subroutine also returns a pointer to identify the directory stream in subsequent
operations. The null pointer is returned when the directory named by
the DirectoryName parameter cannot be accessed or when not enough memory is available to
hold the entire stream. A successful call to any of the exec functions closes any directory
streams opened in the calling process.
The readdir subroutine returns a pointer to the next directory entry. The readdir subroutine
returns entries for . (dot) and .. (dot dot), if present, but never returns an invalid entry
(with d_ino set to 0). When it reaches the end of the directory, or when it detects an
invalid seekdir operation, the readdir subroutine returns the null value. The returned pointer
designates data that may be overwritten by another call to the readdir subroutine on the same
directory stream. A call to the readdir subroutine on a different directory stream does not
overwrite this data. The readdir subroutine marks the st_atime field of the directory for
update each time the directory is actually read.
AIM: To write a program in C to display the files in the given directory
ALGORITHM:
1. Start the program
2. Declare the variables to the structure dirent ( defines the file system- independent
directory) and also for DIR.
3. Specify the directory path to be displayed using the opendir system call.
4. Check for the existence of the directory and read the contents of the directory using
readdir ()system call.
5. Repeat the above step until all the files in the directory are listed.
6. Stop the program
PROGRAM:
SOURCE CODE:
#include<stdio.h>
#include<dirent.h>
#include<stdlib.h>
int main()
{
DIR *p;
struct dirent *dp;
p=opendir("");
if(p==NULL)
{
perror("opendir");
exit(0);

}
dp=readdir(p);
while(dp!=NULL)
{
printf("%lu %s\n",(unsigned long)dp->d_ino,dp->d_name);
dp=readdir(p);
}
}
OUTPUT:
2753482.2753486A.OUT
236356..2753496PROCESS.C
2754390PROCESS2.C
2753489F1.txt
275349F2.txt
27533501PROCESS3.C
RESULT:
Thus the program was executed and verified successfully

EXPERIMENT-4
4) INTER PROCESS COMMUNICATION USING SHARED MEMORY
DESCRIPTION: Shared memory is a memory shared between two or more processes. Each
process has its own address space; if any process wants to communicate with some information
from its own address space to other processes, then it is only possible with IPC (inter-process
communication) techniques.
Shared memory is the fastest inter-process communication mechanism. The operating system
maps a memory segment in the address space of several processes to read and write in that
memory segment without calling operating system functions.

AIM: To write a program to implement Interprocess Communication using shared memory


ALGORITHM:( Sender Program)
1. Create the shared memory segment with key.
2. Attach the process to the shared memory
3. Get data from the user
4. Finally copy the data to the shared memory segment
5. Stop
ALGORITHM: ( Receiver Program)
1. Start
2. Get the shmid (an integer value) returned by shmget
3. Print the data read from the shared memory.
4. Stop

SENDER PROGRAM:
SOURCE CODE:

#include<stdio.h>
#include<stdlib.h>
#include<unistd.h>
#include<sys/shm.h>
#include<string.h>
int main()
{
void *shared_memory;
char buff[100];
int shmid;
shmid=shmget((key_t)1122,1024,0666|IPC_CREAT);
printf("Key of shared memory is %d",shmid);
shared_memory=shmat(shmid,NULL,0);
printf("Process attached at %p\n",shared_memory);
printf("Enter some data to write to shared memory");
read(0,buff,100);
strcpy(shared_memory,buff);
printf("You write :%s",(char *)shared_memory);
}

RECEIVER PROGRAM:
SOURCE CODE:
#include<stdio.h>
#include<stdlib.h>
#include<unistd.h>
#include<sys/shm.h>
#include<string.h>
int main()
{
void *shared_memory;
char buff[100];
int shmid;
shmid=shmget((key_t)1122,1024,0666);
printf("Key of shared memory is %d",shmid);
shared_memory=shmat(shmid,NULL,0);
printf("Process attached at %p\n",shared_memory);
printf("Data read from shared memory is :%s\n",(char *)shared_memory);
}
OUTPUT(SENDER):
Key of shared memory is : 33
Process attached at 0x7ff3eabf900
Enter source data to write to shared memory: You are home
You write: You are home
OUTPUT(RECEIVER):
Key of shared memory is : 33
Process attached at 0x7ff3eabf300
Data read from shared memory: You are home

RESULT:
Thus the program was executed and verified successfully

EXPERIMENT-5
5) IMPLEMENT SEMAPHORES
DESCRIPTION: Semaphore in OS is an integer value that indicates whether the resource
required by the process is available or not. The value of a semaphore is modified by wait() or
signal() operation where the wait() operation decrements the value of semaphore and the
signal() operation increments the value of the semaphore.
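
The lab program below only simulates entry to and exit from the critical section using plain variables. For reference, the wait() and signal() operations described above correspond to sem_wait() and sem_post() in the POSIX API; a minimal sketch (not part of the lab exercise) in which a binary semaphore protects a counter shared by two threads is given here. Compile with cc -pthread.

/* Sketch: a POSIX binary semaphore protecting a shared counter */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
sem_t mutex;                       /* binary semaphore, initial value 1 */
int counter = 0;
void *worker(void *arg)
{
int i;
for (i = 0; i < 100000; i++)
{
sem_wait(&mutex);                  /* wait(): decrement, block if value is 0 */
counter++;                         /* critical section */
sem_post(&mutex);                  /* signal(): increment, wake a waiter */
}
return NULL;
}
int main()
{
pthread_t t1, t2;
sem_init(&mutex, 0, 1);            /* 0 = shared between threads of this process */
pthread_create(&t1, NULL, worker, NULL);
pthread_create(&t2, NULL, worker, NULL);
pthread_join(t1, NULL);
pthread_join(t2, NULL);
printf("final counter = %d\n", counter);   /* expected 200000 */
sem_destroy(&mutex);
return 0;
}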
AIM: To write a C program to implement semaphores
ALGORITHM:
1. Start the program.
2. Get the no of jobs from the user.
3. When job 1 is processing, job 2 also starts processing.
4. When job 1 enters critical section, next job starts processing.
5. When job1 comes out of the critical section, the other job enters the critical section.
6. The above 3 steps are performed for various programs.
7. End the program.

PROGRAM:
SOURCE CODE:
#include<stdio.h>
main()
{
int i,a=1,h=2,n;
printf("\n Enter the no of jobs");
scanf("%d",&n);
for(i=0;i<n;i++)
{
if(a==1)
{
printf("processing %d......! \n", i+1);
a++;
}
if(h>1)
{
if(i+2<=n)
{
printf("\n processing %d.....! \n",i+2);
}
printf("\n Process %d Enters Critical section", i+1);
printf("\n Process %d Leaves Critical section", i+1);
}
h++;
}
}
OUTPUT:
ENTER THE NUMBER OF JOBS:4
Processing 1…….!
Processing 2……..!
Process 1 enters critical section.
Process 1 leaves critical section.
Processing 3……..!
Process 2 enters critical section.
Process 2 leaves critical section.
Processing 4……..!
Process 3 enters critical section.
Process 3 leaves critical section.
Process 4 enters critical section.
Process 4 leaves critical section.
RESULT:
Thus the program was executed and verified successfully

EXPERIMENT-6
6) Write a C program to simulate the following CPU scheduling algorithms and find the
turnaround time and waiting time for a given set of processes.
a) FCFS
b) SJF
c) ROUNDROBIN
d) PRIORITY

DESCRIPTION: Assume all the processes arrive at the same time.


FCFS CPU SCHEDULING ALGORITHM :
For the FCFS scheduling algorithm, read the number of processes/jobs in the system and their CPU
burst times. The scheduling is performed on the basis of the arrival time of the processes,
irrespective of their other parameters. Each process is executed according to its arrival
time. Calculate the waiting time and turnaround time of each of the processes accordingly.

SJF CPU SCHEDULING ALGORITHM :


For the SJF scheduling algorithm, read the number of processes/jobs in the system and their CPU
burst times. Arrange all the jobs in order with respect to their burst times. If two jobs in
the queue have the same execution time, then the FCFS approach is used. Each process is
executed according to the length of its burst time. Then calculate the waiting time and
turnaround time of each of the processes accordingly.

ROUND ROBIN CPU SCHEDULING ALGORITHM:


For round robin scheduling algorithm, read the number of processes/jobs in the system, their
CPU burst times, and the size of the time slice. Time slices are assigned to each process in
equal portions and in circular order, handling all processes execution. This allows every
process to get an equal chance. Calculate the waiting time and turnaround time of each of the
processes accordingly.

PRIORITY CPU SCHEDULING ALGORITHM:


For the priority scheduling algorithm, read the number of processes/jobs in the system, their CPU
burst times, and their priorities. Arrange all the jobs in order with respect to their priorities.
If two jobs in the queue have the same priority, then the FCFS approach is used. Each process
is executed according to its priority. Calculate the waiting time and turnaround time of each
of the processes accordingly.

PROGRAM 6.A:
SOURCE CODE:
#include <stdio.h>
int waitingtime(int proc[], int n,int burst_time[], int wait_time[]) {
wait_time[0] = 0;
for (int i = 1; i < n ; i++ )
wait_time[i] = burst_time[i-1] + wait_time[i-1] ;
return 0;
}
int turnaroundtime( int proc[], int n,
int burst_time[], int wait_time[], int tat[]) {
int i;
for ( i = 0; i < n ; i++)
tat[i] = burst_time[i] + wait_time[i];
return 0;
}
int avgtime( int proc[], int n, int burst_time[]) {
int wait_time[n], tat[n], total_wt = 0, total_tat = 0;
int i;
waitingtime(proc, n, burst_time, wait_time);
turnaroundtime(proc, n, burst_time, wait_time, tat);
printf("Processes Burst Waiting Turn around ");
for ( i=0; i<n; i++) {
total_wt = total_wt + wait_time[i];
total_tat = total_tat + tat[i];
printf(" %d\t %d\t\t %d \t%d
", i+1, burst_time[i], wait_time[i], tat[i]);
}
printf("Average waiting time = %f
", (float)total_wt / (float)n);
printf("Average turn around time = %f
", (float)total_tat / (float)n);
return 0;
}
int main() {
int proc[] = { 1, 2, 3};
int n = sizeof proc / sizeof proc[0];
int burst_time[] = {5, 8, 12};
avgtime(proc, n, burst_time);
return 0;
}
OUTPUT:
Processes Burst Waiting Turn around
1 5 0 5
2 8 5 13
3 12 13 25
Average Waiting time = 6.000000
Average turn around time = 14.333333
RESULT:
Thus the program was executed and verified successfully

PROGRAM 6.B:
SOURCE CODE:
#include<stdio.h>
#include<string.h>
int main()
{
int bt[20],at[10],n,i,j,temp,st[10],ft[10],wt[10],ta[10];
int totwt=0,totta=0;
double awt,ata;
char pn[10][10],t[10];
//clrscr();
printf("Enter the number of process:");
scanf("%d",&n);
for(i=0; i<n; i++)
{
printf("Enter process name, arrival time& burst time:");
scanf("%s%d%d",pn[i],&at[i],&bt[i]);
}
for(i=0; i<n; i++)
for(j=0; j<n; j++)
{
if(bt[i]<bt[j])
{
temp=at[i];
at[i]=at[j];
at[j]=temp;
temp=bt[i];
bt[i]=bt[j];
bt[j]=temp;
strcpy(t,pn[i]);
strcpy(pn[i],pn[j]);
strcpy(pn[j],t);
}
}
for(i=0; i<n; i++)
{
if(i==0)
st[i]=at[i];
else
st[i]=ft[i-1];
wt[i]=st[i]-at[i];
ft[i]=st[i]+bt[i];
ta[i]=ft[i]-at[i];
totwt+=wt[i];
totta+=ta[i];
}
awt=(double)totwt/n;
ata=(double)totta/n;
printf("\nProcessname\tarrivaltime\tbursttime\twaitingtime\tturnaroundtime");
for(i=0; i<n; i++)
{
printf("\n%s\t%5d\t\t%5d\t\t%5d\t\t%5d",pn[i],at[i],bt[i],wt[i],ta[i]);
}
printf("\nAverage waiting time: %f",awt);
printf("\nAverage turnaroundtime: %f",ata);
return 0;
}

OUTPUT:
Enter the number of process:4
Enter process name, arrival time& burst time:
1
1
3
Enter process name, arrival time& burst time:
2
2
4
Enter process name, arrival time& burst time:
3
1
2
Enter process name, arrival time& burst time:
4
4
4
RESULT:
Thus the program was executed and verified successfully

PROGRAM 6.C:
SOURCE CODE:
#include<stdio.h>
int main()
{
int cnt,j,n,t,remain,flag=0,tq;
int wt=0,tat=0,at[10],bt[10],rt[10];
printf("Enter Total Process:\t ");
scanf("%d",&n);
remain=n;
for(cnt=0;cnt<n;cnt++)
{
printf("Enter Arrival Time and Burst Time for Process Process Number %d :",cnt+1);
scanf("%d",&at[cnt]);
scanf("%d",&bt[cnt]);
rt[cnt]=bt[cnt];
}
printf("Enter Time Quantum:\t");
scanf("%d",&tq);
printf("\n\nProcess\t|Turnaround Time|Waiting Time\n\n");
for(t=0,cnt=0;remain!=0;)
{
if(rt[cnt]<=tq && rt[cnt]>0)
{
t+=rt[cnt];
rt[cnt]=0;
flag=1;
}
else if(rt[cnt]>0)
{
rt[cnt]-=tq;
t+=tq;
}
if(rt[cnt]==0 && flag==1)
{
remain--;
printf("P[%d]\t|\t%d\t|\t%d\n",cnt+1,t-at[cnt],t-at[cnt]-bt[cnt]);
wt+=t-at[cnt]-bt[cnt];
tat+=t-at[cnt];
flag=0;
}
if(cnt==n-1)
cnt=0;
else if(at[cnt+1]<=t)
cnt++;
else
cnt=0;
}
printf("\nAverage Waiting Time= %f\n",wt*1.0/n);
printf("Avg Turnaround Time = %f",tat*1.0/n);
return 0;
}
OUTPUT:
Enter Total Process: 4
Enter Arrival Time and Burst Time for Process Number 1 :0 5
Enter Arrival Time and Burst Time for Process Number 2 :1 4
Enter Arrival Time and Burst Time for Process Number 3 :2 2
Enter Arrival Time and Burst Time for Process Number 4 :4 1
Enter Time Quantum: 2
Process Turnaround Time Waiting Time

P[3] | 4 | 2
P[4] | 3 | 2
P[2] | 10 | 6
P[1] | 12 | 7

Average Waiting Time= 4.250000


Avg Turnaround Time = 7.250000
RESULT:
Thus the program was executed and verified successfully

PROGRAM 6.D:
SOURCE PROGRAM:
#include <stdio.h>
void swap(int *a,int *b)
{
int temp=*a;
*a=*b;
*b=temp;
}
int main()
{
int n;
printf("Enter Number of Processes: ");
scanf("%d",&n);
int b[n],p[n],index[n];
for(int i=0;i<n;i++)
{
printf("Enter Burst Time and Priority Value for Process %d: ",i+1);
scanf("%d %d",&b[i],&p[i]);
index[i]=i+1;
}
for(int i=0;i<n;i++)
{
int a=p[i],m=i;
for(int j=i;j<n;j++)
{
if(p[j] > a)
{
a=p[j];
m=j;
}
}
swap(&p[i], &p[m]);
swap(&b[i], &b[m]);
swap(&index[i],&index[m]);
}
int t=0;
printf("Order of process Execution is\n");
for(int i=0;i<n;i++)
{
printf("P%d is executed from %d to %d\n",index[i],t,t+b[i]);
t+=b[i];
}
printf("\n");
printf("Process Id Burst Time Wait Time TurnAround Time\n");
int wait_time=0;
for(int i=0;i<n;i++)
{
printf("P%d %d %d %d\n",index[i],b[i],wait_time,wait_time + b[i]);
wait_time += b[i];
}
return 0;
}
OUTPUT:
Enter Number of Processes: 3
Enter Burst Time and Priority Value for Process 1: 10 2
Enter Burst Time and Priority Value for Process 2: 5 0
Enter Burst Time and Priority Value for Process 3: 8 1
Order of process Execution is
P1 is executed from 0 to 10
P3 is executed from 10 to 18
P2 is executed from 18 to 23

Process Id Burst Time Wait Time TurnAround Time


P1 10 0 10
P3 8 10 18
P2 5 18 23
RESULT:
Thus the program was executed and verified successfully

EXPERIMENT-7
7.A) Implementation of the Memory Allocation Methods for Fixed Partition
First Fit
DESCRIPTION: In first fit, the first partition that is sufficient, searching from the top of
main memory, is allocated. Its advantage is that it is the fastest search, as it stops at the
first block that is large enough to hold the process. The first fit memory allocation scheme
checks the empty memory blocks in a sequential manner: the first block found empty is checked
for size, and if the size is not less than the required size, it is allocated. One of the
biggest issues with this scheme is that when a process is allocated a block considerably larger
than it needs, large unused chunks of memory are left behind.

AIM: To write a C program for implementation memory allocation methods for fixed
partition using first fit.

ALGORITHM:
1:Define the max as 25.
2: Declare the variable frag[max],b[max],f[max],i,j,nb,nf,temp, highest=0, bf[max],ff[max].
3: Get the number of blocks,files,size of the blocks using for loop.
4: In a for loop check bf[j]!=1; if so, temp=b[j]-f[i].
5: Check temp>=0; if so, assign ff[i]=j and break out of the inner loop.
6: Assign frag[i]=temp and bf[ff[i]]=1.
7: Repeat steps 4 to 6 for every file.
8: Print file no, size, block no, size and fragment.
9: Stop the program.

PROGRAM:
SOURCE CODE:

#include<stdio.h>
#define max 25
int main()
{
int frag[max],b[max],f[max],i,j,nb,nf,temp;
static int bf[max],ff[max];

printf("\n\tMemory Management Scheme - First Fit");


printf("\nEnter the number of blocks:");
scanf("%d",&nb);
printf("Enter the number of files:");
scanf("%d",&nf);
printf("\nEnter the size of the blocks:-\n");
for(i=1;i<=nb;i++)
{
printf("Block %d:",i);
scanf("%d",&b[i]);
}
printf("Enter the size of the files :-\n");
for(i=1;i<=nf;i++)
{
printf("File %d:",i);
scanf("%d",&f[i]);
}
for(i=1;i<=nf;i++)
{
for(j=1;j<=nb;j++)
{
if(bf[j]!=1)
{
temp=b[j]-f[i];
if(temp>=0)
{
ff[i]=j;
break;
}
}
}
frag[i]=temp;
bf[ff[i]]=1;
}
printf("\nFile_no:\tFile_size :\tBlock_no:\tBlock_size:\tFragement");
for(i=1;i<=nf;i++)
printf("\n%d\t\t%d\t\t%d\t\t%d\t\t%d",i,f[i],ff[i],b[ff[i]],frag[i]);
return 0;
}

INPUT
Enter the number of blocks: 3
Enter the number of files: 2

Enter the size of the blocks:-


Block 1: 5
Block 2: 2
Block 3: 7

Enter the size of the files:-


File 1: 1
File 2: 4

OUTPUT
File No File Size Block No Block Size Fragment
1 1 1 5 4
2 4 3 7 3

RESULT:
Thus the program was executed and verified successfully

7.B)MEMORY ALLOCATION METHODS FOR FIXED PARTITION


WORST FIT
DESCRIPTION: Worst fit allocates a process to the largest sufficient partition among the
freely available partitions in main memory. If a large process arrives at a later stage, the
memory may not have enough space left to accommodate it.

AIM: To write a C program for implementation of the memory allocation method for fixed partitions using worst fit.
ALGORITHM:
1:Define the max as 25.
2: Declare the variable frag[max],b[max],f[max],i,j,nb,nf,temp, highest=0, bf[max],ff[max].
3: Get the number of blocks,files,size of the blocks using for loop.
4: In for loop check bf[j]!=1, if so temp=b[j]-f[i]
5: Check temp>=0,if so assign ff[i]=j break the for loop.
6: Assign frag[i]=temp,bf[ff[i]]=1;
7: Repeat step 4 to step 6.
8: Print file no,size,block no,size and fragment.
9: Stop the program.

PROGRAM:
SOURCE CODE:
#include<stdio.h>
#define max 25
int main()
{
int frag[max],b[max],f[max],i,j,nb,nf,temp,highest=0;
static int bf[max],ff[max];
printf("\n\tMemory Management Scheme - Worst Fit");
printf("\nEnter the number of blocks:");
scanf("%d",&nb);
printf("Enter the number of files:");
scanf("%d",&nf);
printf("\nEnter the size of the blocks:-\n");
for(i=1;i<=nb;i++)
{
printf("Block %d:",i);
scanf("%d",&b[i]);
}
printf("Enter the size of the files :-\n");
for(i=1;i<=nf;i++)
{
printf("File %d:",i);
scanf("%d",&f[i]);
}
for(i=1;i<=nf;i++)
{

for(j=1;j<=nb;j++)
{
if(bf[j]!=1) //if bf[j] is not allocated
{
temp=b[j]-f[i];
if(temp>=0)
if(highest<temp)
{
ff[i]=j;
highest=temp;
}
}
}
frag[i]=highest;
bf[ff[i]]=1;
highest=0;
}
printf("\nFile_no:\tFile_size :\tBlock_no:\tBlock_size:\tFragement");
for(i=1;i<=nf;i++)
printf("\n%d\t\t%d\t\t%d\t\t%d\t\t%d",i,f[i],ff[i],b[ff[i]],frag[i]);
return 0;
}

INPUT:

Enter the number of blocks: 3


Enter the number of files: 2

Enter the size of the blocks:-


Block 1: 5
Block 2: 2
Block 3: 7

Enter the size of the files:-


File 1: 1
File 2: 4

OUTPUT:
File No File Size Block No Block Size Fragment
1 1 3 7 6
2 4 1 5 1

RESULT:
Thus the program was executed and verified successfully

7.C)MEMORY ALLOCATION METHODS FOR FIXED PARTITION


BEST FIT

DESCRIPTION: Best-Fit Allocation is a memory allocation technique used in operating systems
to allocate memory to a process. In Best-Fit, the operating system searches through the list
of free blocks of memory to find the block that is closest in size to the memory request from
the process. Once a suitable block is found, the operating system splits the block into two
parts: the portion that will be allocated to the process, and the remaining free block.

AIM: To write a C program for implementation of the memory allocation method for fixed partitions using best fit.

ALGORITHM:
1:Define the max as 25.
2: Declare the variable frag[max],b[max],f[max],i,j,nb,nf,temp, highest=0, bf[max],ff[max].
3: Get the number of blocks,files,size of the blocks using for loop.
4: In for loop check bf[j]!=1, if so temp=b[j]-f[i]
5: Check lowest>temp; if so, assign ff[i]=j and lowest=temp.
6: Assign frag[i]=lowest, bf[ff[i]]=1,lowest=10000
7: Repeat step 4 to step 6.
8: Print file no,size,block no,size and fragment.
9: Stop the program.

PROGRAM:
SOURCE CODE:
#include<stdio.h>
#define max 25
int main()
{
int frag[max],b[max],f[max],i,j,nb,nf,temp,lowest=10000;
static int bf[max],ff[max];

printf("\nEnter the number of blocks:");


scanf("%d",&nb);
printf("Enter the number of files:");
scanf("%d",&nf);
printf("\nEnter the size of the blocks:-\n");
for(i=1;i<=nb;i++)
{
printf("Block %d:",i);
scanf("%d",&b[i]);
}
printf("Enter the size of the files :-\n");
for(i=1;i<=nf;i++)
{
printf("File %d:",i);
scanf("%d",&f[i]);
}
for(i=1;i<=nf;i++)
{
for(j=1;j<=nb;j++)
{
if(bf[j]!=1)
{
temp=b[j]-f[i];
if(temp>=0)
if(lowest>temp)
{
ff[i]=j;

lowest=temp;
}
}
}
frag[i]=lowest;
bf[ff[i]]=1;
lowest=10000;
}
printf("\nFile No\tFile Size \tBlock No\tBlock Size\tFragment");
for(i=1;i<=nf && ff[i]!=0;i++)
printf("\n%d\t\t%d\t\t%d\t\t%d\t\t%d",i,f[i],ff[i],b[ff[i]],frag[i]);
return 0;
}
INPUT
Enter the number of blocks: 3
Enter the number of files: 2

Enter the size of the blocks:-


Block 1: 5
Block 2: 2
Block 3: 7

Enter the size of the files:-


File 1: 1
File 2: 4

OUTPUT
File No File Size Block No Block Size Fragment
1 1 2 2 1
2 4 1 5 1

RESULT:
Thus the program was executed and verified successfully

EXPERIMENT-8
8.A)IMPLEMENTATION OF FIFO PAGE REPLACEMENT ALGORITHM
DESCRIPTION: FIFO which is also known as First In First Out is one of the types of page
replacement algorithm. The FIFO algorithm is used in the paging method for memory
management in an operating system that decides which existing page needs to be replaced in
the queue. FIFO algorithm replaces the oldest (First) page which has been present for
the longest time in the main memory

AIM:To write a c program to implement FIFO page replacement algorithm

ALGORITHM:

1. Start the process

2. Declare the size with respect to page length

3. Check the need of replacement from the page to memory

4. Check the need of replacement from old page to new page in memory

5. Form a queue to hold all pages


6. Insert the page require memory into the queue

7. Check for bad replacement and page fault

8. Get the number of processes to be inserted

9. Display the values

10. Stop the process

PROGRAM:
SOURCE CODE:

#include<stdio.h>
int main()
{
int i,j,n,a[50],frame[10],no,k,avail,count=0;
printf("\n ENTER THE NUMBER OF PAGES:\n");
scanf("%d",&n);
printf("\n ENTER THE PAGE NUMBER :\n");
for(i=1;i<=n;i++)
scanf("%d",&a[i]);
printf("\n ENTER THE NUMBER OF FRAMES :");
scanf("%d",&no);
for(i=0;i<no;i++)
frame[i]= -1;
j=0;
printf("\tref string\t page frames\n");
for(i=1;i<=n;i++)
{
printf("%d\t\t",a[i]);
avail=0;
for(k=0;k<no;k++)
if(frame[k]==a[i])
avail=1;
if (avail==0)
{
frame[j]=a[i];
j=(j+1)%no;
count++;
for(k=0;k<no;k++)
printf("%d\t",frame[k]);
}
printf("\n");
}
printf("Page Fault Is %d",count);
return 0;
}
OUTPUT:

ENTER THE NUMBER OF PAGES: 20


ENTER THE PAGE NUMBER : 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
ENTER THE NUMBER OF FRAMES :3
ref string page frames
7 7 -1 -1
0 7 0 -1
1 7 0 1
2 2 0 1
0
3 2 3 1
0 2 3 0
4 4 3 0
2 4 2 0
3 4 2 3
0 0 2 3
3
2
1 0 1 3
2 0 1 2
0
1
7 7 1 2
0 7 0 2
1 7 0 1
Page Fault Is 15

RESULT:
Thus the program was executed and verified successfully

8.B)IMPLEMENTATION OF LRU PAGE REPLACEMENT ALGORITHM


DESCRIPTION: LRU stands for Least Recently Used. Whenever a page fault occurs, the least
recently used page is replaced with the new page. So, the page not utilized for the longest
time in main memory (compared to all the other pages in the frames) gets replaced.

AIM:To write a c program to implement LRU page replacement algorithm

ALGORITHM :

1. Start the process

2. Declare the size

3. Get the number of pages to be inserted

4. Get the value


5. Declare counter and stack

6. Select the least recently used page by counter value

7. Stack them according the selection.

8. Display the values

9. Stop the process

PROGRAM:
SOURCE CODE:
#include<stdio.h>
main()
{
int q[20],p[50],c=0,c1,d,f,i,j,k=0,n,r,t,b[20],c2[20];
printf("Enter no of pages:");
scanf("%d",&n);
printf("Enter the reference string:");
for(i=0;i<n;i++)
scanf("%d",&p[i]);
printf("Enter no of frames:");
scanf("%d",&f);
q[k]=p[k];
printf("\n\t%d\n",q[k]);
c++;
k++;
for(i=1;i<n;i++)
{
c1=0;
for(j=0;j<f;j++)
{
if(p[i]!=q[j])
c1++;
}
if(c1==f)
{
c++;
if(k<f)
{
q[k]=p[i];
k++;
for(j=0;j<k;j++)
printf("\t%d",q[j]);
printf("\n");
}
else
{
for(r=0;r<f;r++)
{
c2[r]=0;
for(j=i-1;j>=0;j--)
{
if(q[r]!=p[j])
c2[r]++;
else
break;
}
}
for(r=0;r<f;r++)
b[r]=c2[r];
for(r=0;r<f;r++)
{
for(j=r;j<f;j++)
{
if(b[r]<b[j])
{
t=b[r];
b[r]=b[j];
b[j]=t;
}
}
}
for(r=0;r<f;r++)
{
if(c2[r]==b[0])
q[r]=p[i];
printf("\t%d",q[r]);
}
printf("\n");
}
}
}
printf("\nThe no of page faults is %d",c);
}

OUTPUT:

Enter no of pages:10
Enter the reference string:7 5 9 4 3 7 9 6 2 1
Enter no of frames:3
7
7 5
7 5 9
4 5 9
4 3 9
4 3 7
9 3 7
9 6 7
9 6 2
1 6 2

The no of page faults is 10

RESULT:
Thus the program was executed and verified successfully

8.C)Program to implement LFU page replacement technique


DESCRIPTION: LFU stands for Least Frequently Used. A reference counter is kept for each frame
and incremented whenever the page held in that frame is referenced. Whenever a page fault
occurs, the page whose reference count is the smallest (the page used least often so far) is
replaced with the new page.
ALGORITHM:
1. Start
2. Read the number of pages and frames
3. Read each page value
4. Search for the page in the frames
5. If not available, allocate a free frame
6. If no frame is free, replace the page that is least frequently used
7. Print the number of page faults
8. Stop

PROGRAM:
SOURCE CODE:
#include<stdio.h>
int main()
{
int rs[50], i, j, k, m, f, fault, cntr[20], a[20], min, pf=0;
printf("\nEnter number of page references -- ");
scanf("%d",&m);
printf("\nEnter the reference string -- ");
for(i=0;i<m;i++)
scanf("%d",&rs[i]);
printf("\nEnter the available no. of frames -- ");
scanf("%d",&f);
for(i=0;i<f;i++)
{
cntr[i]=0; a[i]=-1;
}
printf("\nThe Page Replacement Process is -- \n");
for(i=0;i<m;i++)
{
fault=0; /* set to 1 when this reference causes a page fault */
for(j=0;j<f;j++)
if(rs[i]==a[j])
{
cntr[j]++; /* page already in a frame: count the reference */
break;
}
if(j==f) /* page not found in any frame: page fault */
{
min = 0;
for(k=1;k<f;k++)
if(cntr[k]<cntr[min])
min=k; /* frame holding the least frequently used page */
a[min]=rs[i]; cntr[min]=1;
pf++;
fault=1;
}
printf("\n");
for(j=0;j<f;j++)
printf("\t%d",a[j]);
if(fault==1)
printf("\tPF No. %d",pf);
}
printf("\n\n Total number of page faults -- %d",pf);
return 0;
}

INPUT
Enter number of page references -- 10
Enter the reference string -- 1 2 3 4 5 2 5 2 5 1 4 3
Enter the available no. of frames -- 3

OUTPUT
The Page Replacement Process is –

1 -1 -1 PF No. 1
1 2 -1 PF No. 2
1 2 3 PF No. 3
4 2 3 PF No. 4
5 2 3 PF No. 5
5 2 3
5 2 3
5 2 1 PF No. 6
5 2 4 PF No. 7
5 2 3 PF No. 8

Total number of page faults -- 8

RESULT:
Thus the program was executed and verified successfully

EXPERIMENT-9

9)Bankers Algorithm for Deadlock Avoidance


DESCRIPTION: The banker's algorithm is a resource allocation and deadlock avoidance
algorithm that tests for safety by simulating the allocation of the predetermined maximum
possible amounts of all resources, and then makes a "safe-state" check to test for possible
activities, before deciding whether the allocation should be allowed to continue.

AIM: To write a c program to implement Bankers Algorithm for Deadlock Avoidance

ALGORITHM:
1. Start the program
2. Declare the memory for the process
3. Read the no of processes, resources, allocation matrix and available matrix
4. Calculate the need matrix ;Need = Max- Allocation
5. Check each and every process using the banker's (safety) algorithm.
6. If the process is in the safe sequence, then it is not a deadlocked process; otherwise it is a
deadlocked process.
7. Print the resulting state of the processes.
8. Stop the program

SAFETY ALGORITHM:
1) Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively.
Initialize: Work = Available
Finish[i] = false; for i=1, 2, 3, 4….n
2) Find an i such that both
a) Finish[i] = false
b) Needi <= Work
if no such i exists goto step (4)
3) Work = Work + Allocation[i]
Finish[i] = true
goto step (2)
4) if Finish [i] = true for all i
then the system is in a safe state
RESOURCE REQUEST ALGORITHM:

1) If Requesti <= Needi


Goto step (2) ; otherwise, raise an error condition, since the process has exceeded its
maximum claim.
2) If Requesti <= Available
Goto step (3); otherwise, Pi must wait, since the resources are not available.
3) Have the system pretend to have allocated the requested resources to process Pi by
modifying the state as
follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi– Requesti
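
The main program below implements only the safety algorithm. For reference, a minimal sketch of the resource-request check described above is given here; the function name request_resources and the fixed column width of 20 are illustrative choices matching the arrays used in the program.

/* Sketch: resource-request algorithm for process p with m resource types.
   Returns 1 and provisionally allocates if the request can be granted, 0 otherwise.
   The caller must still run the safety algorithm on the new state. */
int request_resources(int p, int m, int request[],
int need[][20], int alloc[][20], int avail[])
{
int j;
for (j = 0; j < m; j++)
if (request[j] > need[p][j])       /* step 1: process exceeded its maximum claim */
return 0;
for (j = 0; j < m; j++)
if (request[j] > avail[j])         /* step 2: resources not available, Pi must wait */
return 0;
for (j = 0; j < m; j++)            /* step 3: pretend to allocate the resources */
{
avail[j] -= request[j];
alloc[p][j] += request[j];
need[p][j] -= request[j];
}
return 1;
}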

PROGRAM:
SOURCE CODE:

#include <stdio.h>
int main()
{
int n, m, i, j, k, y,alloc[20][20],max[20][20],avail[50],ind=0;
printf("Enter the no of Proceses:");
scanf("%d",&n);
printf("Enter the no of Resources:");
scanf("%d",&m);
printf("Enter the Allocation Matrix:");
for (i = 0; i < n; i++) {
for (j = 0; j < m; j++)
scanf("%d",&alloc[i][j]);
}
printf("Enter the Max Matrix:");
for (i = 0; i < n; i++) {
for (j = 0; j < m; j++)
scanf("%d",&max[i][j]);
}
printf("Enter the Available Matrix");
for(i=0;i<m;i++)
scanf("%d",&avail[i]);
int finish[n], safesequence[n],work[m],need[n][m];
//calculating NEED matrix
for (i = 0; i < n; i++) {
for (j = 0; j < m; j++)
need[i][j] = max[i][j] - alloc[i][j];
}
printf("NEED matrix is");
for (i = 0; i < n; i++)
{
printf("\n");
for (j = 0; j < m; j++)
printf(" %d ",need[i][j]);
}
for(i=0;i<m;i++)
{
work[i]=avail[i];
}
for (i = 0; i < n; i++) {
finish[i] = 0;
}
for (k = 0; k < n; k++) {
for (i = 0; i < n; i++)
{
if (finish[i] == 0)
{
int flag = 0;
for (j = 0; j < m; j++)
{
if (need[i][j] > work[j])
{
flag = 1;
break;
}
}
if (flag == 0) {
safesequence[ind++] = i;
for (y = 0; y < m; y++)
work[y] += alloc[i][y];
finish[i] = 1;
}
}
}
}
printf("\nFollowing is the SAFE Sequence\n");
for (i = 0; i <= n - 1; i++)
printf(" P%d ", safesequence[i]);
}

OUTPUT:

Enter the no of Processes:5


Enter the no of Resources:4
Enter the Allocation Matrix:
0 0 1 2
1 0 0 0
1 3 5 4
0 6 3 2
0 0 1 4
Enter the Max Matrix:
0 0 1 2
1 7 5 0
2 3 5 6
0 6 5 2
0 6 5 6
Enter the Available Matrix
1 5 2 0
NEED matrix is
0 0 0 0
0 7 5 0
1 0 0 2
0 0 2 0
0 6 4 2
Following is the SAFE Sequence
P0 P2 P3 P4 P1

RESULT:
Thus the program was executed and verified successfully

EXPERIMENT-10
10)SIMULATING DISK SCHEDULING ALGORITHM
10.A: FCFS

DESCRIPTION: First Come First Serve (FCFS) is the simplest scheduling algorithm: queued
requests are serviced automatically in the order of their arrival. Applied to disk scheduling,
the request that arrives first in the disk queue is serviced first, regardless of the current
position of the disk head.
AIM: To write a c program to implement FCFS disk scheduling algorithm.
ALGORITHM
1: Start the program
2: Let the request array represent an array storing indexes of tracks that have been requested in
ascending order of their time of arrival. 'initial' is the initial position of the disk head.
3: Take the tracks one by one in the given order and calculate the absolute distance of each track
from the current head position.
4: Increment the total head movement by this distance.
5: The currently serviced track position now becomes the new head position.
6: Go to step 3 until all tracks in the request array have been serviced.
7: Stop the program.
PROGRAM:
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
int main(){
int TotalHeadMoment=0,initial,RQ[100],n,i;
printf("Enter the number of requests:");
scanf("%d",&n);
printf("Enter the request sequence:");
for(i=0;i<n;i++)
scanf("%d",&RQ[i]);
printf("Enter the initial head position:");
scanf("%d",&initial);
//logic for fcfs disk scheduling
for(i=0;i<n;i++){
TotalHeadMoment=TotalHeadMoment+abs(RQ[i]-initial);
initial = RQ[i];
}
printf("Total head moment is %d",TotalHeadMoment);
return 0;
}

OUTPUT:
Enter the number of requests:8
Enter the request sequence:95
180
34
119
11
123
62
64
Enter the initial head position:50
Total head movement is 644

RESULT: The program was written, executed and output verified.

10.B:SCAN

DESCRIPTION: In the SCAN disk scheduling algorithm, the head starts from one end of the disk and
moves towards the other end, servicing the requests in between one by one until it reaches the other
end. Then the direction of the head is reversed and the process continues, the head continuously
scanning back and forth across the disk. This algorithm works like an elevator and hence is also
known as the elevator algorithm. As a result, requests in the mid-range of the disk are serviced
more frequently, while those arriving just behind the disk arm have to wait longer.
AIM: To write a C program to implement the SCAN disk scheduling algorithm.
ALGORITHM:
1: Start the program.
2: Let the request array represent an array storing indices of tracks that have been requested in
ascending order of their time of arrival.
3: Let the direction represent whether the head is moving towards larger or smaller track numbers.
4: In the direction in which the head is moving, service all tracks one by one.
5: Calculate the absolute distance of the track from the head.
6: Increment the total head movement by this distance.
7: The currently serviced track position now becomes the new head position.
8: Go to step 4 until we reach one of the ends of the disk.
9: If we reach the end of the disk, reverse the direction and go to step 3 until all tracks in the
request array have been serviced.
10: Stop the program.

PROGRAM:
SOURCE CODE:
#include<stdio.h>
#include<stdlib.h>
void main(){
int TotalHeadMoment=0,initial,index,move,rq[20],temp,n,size,i=0;
printf("Enter the number of requests:");
scanf("%d",&n);
printf("Enter the request sequence:");
for(i=0;i<n;i++)
scanf("%d",&rq[i]);
printf("Enter the initial head position:");
scanf("%d",&initial);
printf("Enter the disk size:");
scanf("%d",&size);
printf("Enter the head moment direction (for high:1 and for low:0) :");
scanf("%d",&move);

for(i=0;i<n;i++){
for(int j=0;j<n-i-1;j++){
if(rq[j+1]<rq[j]){
temp=rq[j];
rq[j]=rq[j+1];
rq[j+1]=temp;
}
}
}
for(i=0;i<n;i++){
if(initial<rq[i]){
index=i;
break;
}
}

// if movement is towards high value


if(move==1){
for(i=index;i<n;i++){
TotalHeadMoment=TotalHeadMoment+abs(rq[i]-initial);
initial=rq[i];
}
//last movement for max size
TotalHeadMoment=TotalHeadMoment+abs(size-rq[i-1]-1);
initial=size-1;
for(i=index-1;i>=0;i--){
TotalHeadMoment=TotalHeadMoment+abs(rq[i]-initial);
initial=rq[i];
}
}
// if movement is towards low value
else{
for(i=index-1;i>=0;i--){
TotalHeadMoment=TotalHeadMoment+abs(rq[i]-initial);
initial=rq[i];
}
//last movement for min size
TotalHeadMoment=TotalHeadMoment+abs(rq[i+1]-0);
initial=0;
for(i=index;i<n;i++){
TotalHeadMoment=TotalHeadMoment+abs(rq[i]-initial);
initial=rq[i];
}
}
printf("Total head movement=%d",TotalHeadMoment);
}

OUTPUT:
Enter the number of requests:8
Enter the request sequence:95
180
34
119
11
123
62
64
Enter the initial head position:50
Enter the disk size:200
Enter the head moment direction (for high:1 and for low:0) :1
Total head movement=337

RESULT: The program was written, executed and output verified.

10.C: C-SCAN

DESCRIPTION: The circular SCAN (C-SCAN) scheduling algorithm is a modified version of


the SCAN disk scheduling algorithm that deals with the inefficiency of the SCAN algorithm
by servicing the requests more uniformly. Like SCAN (Elevator Algorithm) C-SCAN moves
the head from one end servicing all the requests to the other end. However, as soon as the
head reaches the other end, it immediately returns to the beginning of the disk without
servicing any requests on the return trip, and starts servicing again once it reaches the
beginning. This is also known as the “Circular Elevator Algorithm” as it
essentially treats the cylinders as a circular list that wraps around from the final cylinder to
the first one.
AIM: To write a C program to implement C-SCAN scheduling algorithm.

ALGORITHM:
1: Start the program.
2: Let the request array represent an array, storing indices of tracks that have been requested in
ascending order of their time of arrival. ‘initial’ is the position of the disk head.
3: The head services requests only while moving in the right direction, from 0 to the size of the disk.
4: While moving in the left direction do not service any of the tracks.
5: When we reach the beginning (left end) reverse the direction.
6: While moving in the right direction it services all tracks one by one.
7: While moving in the right direction calculate the absolute distance of the tracks from the initial.
8: Increment the total head moment with this distance.
9: Currently serviced track position now becomes the new initial position.
10: Go to step 7 until we reach the right end of the disk.
11: If we reach the right end of the disk reverse the direction and go to step 4 until all tracks in the
request array have been serviced.
12: Stop the program.

PROGRAM:
SOURCE CODE:
#include<stdio.h>
#include<stdlib.h>
int main()
{
int i,ref[10],max,initial,n,TotalHeadMoment=0,mov,j,index;
printf("Enter the number of requests:");
scanf("%d",&n);
printf("Enter the request sequence:");
for(i=0;i<n;i++){
scanf("%d",&ref[i]);
}
printf("Enter the initial head position:");
scanf("%d",&initial);
printf("Enter the max range:");
scanf("%d",&max);
printf("Enter the head moment direction (for high:1 and for low:0) :");
scanf("%d",&mov);

//logic for C-SCAN disk scheduling


for(i=0;i<n;i++)
{
for( j=0;j<n-i-1;j++)
{
if(ref[j]>ref[j+1])
{
int temp;
temp=ref[j];
ref[j]=ref[j+1];
ref[j+1]=temp;
}
}
}
for(i=0;i<n;i++){
if(initial<ref[i]){
index=i;
break;
}
}
//if movement is towards high value
if(mov==1){
for(i=index;i<n;i++){
TotalHeadMoment=TotalHeadMoment+abs(ref[i]-initial);
initial=ref[i];
}
//last movement for max size
TotalHeadMoment=TotalHeadMoment+abs(max-ref[i-1]-1);
//movement max to min disk
TotalHeadMoment=TotalHeadMoment+abs(max-1-0);
initial=0;
for(i=0;i<index;i++){
TotalHeadMoment=TotalHeadMoment+abs(ref[i]-initial);
initial=ref[i];
}
}
//if movement is towards low value
else{
for(i=index-1;i>=0;i--){
TotalHeadMoment=TotalHeadMoment+abs(ref[i]-initial);
initial=ref[i];
}
//last movement for min size
TotalHeadMoment=TotalHeadMoment+abs(ref[i+1]-0);
//movement min to max disk
TotalHeadMoment=TotalHeadMoment+abs(max-1-0);
initial=max-1;
for(i=n-1;i>=index;i--){
TotalHeadMoment=TotalHeadMoment+abs(ref[i]-initial);
initial=ref[i];
}
}
printf("Total Head Moment:%d",TotalHeadMoment);
}

OUTPUT:
Enter the number of requests:8
Enter the request sequence:95
180
34
119
11
123
62
64
Enter the initial head position:50
Enter the max range:200
Enter the head moment direction (for high:1 and for low:0) :1
Total Head Movement:382

RESULT: The program was written, executed and output verified.

VIVA QUESTIONS:

1) Explain the main purpose of an operating system?
Ans: Operating systems exist for two main purposes. One is that it is designed to make sure a
computer system performs well by managing its computational activities. Another is that it
provides an environment for the development and execution of programs.
2) What is demand paging?
Ans: In demand paging, not all of a process’s pages need to be in RAM; the OS brings a
missing (and required) page from the disk into RAM only when it is referenced.
3) What are the advantages of a multiprocessor system?
Ans: With an increased number of processors, there is a considerable increase in throughput.
It can also save more money because they can share resources. Finally, overall reliability is
increased as well
4) What is kernel?
Ans: A kernel is the core of every operating system. It connects applications to the actual
processing of data. It also manages all communications between software and hardware
components to ensure usability and reliability
5) What are real-time systems?
Ans: Real-time systems are used when rigid time requirements have been placed on the
operation of a processor. It has well defined and fixed time constraints.
6) What is a virtual memory?
Ans: Virtual memory is a memory management technique that lets processes execute even when
they are not entirely in main memory. This is very useful, especially when an executing program
cannot fit in the physical memory.
7) Describe the objective of multiprogramming
Ans: The main objective of multiprogramming is to have a process running at all times. With
this design, CPU utilization is said to be maximized.
8) What is time- sharing system?
Ans: In a Time-sharing system, the CPU executes multiple jobs by switching among them, also
known as multitasking. This process happens so fast that users can interact with each program
while it is running.
9) What is SMP?
Ans: SMP is a short form of Symmetric Multi-Processing. It is the most common type of
multiple-processor systems. In this system, each processor runs an identical copy of the
operating system, and these copies communicate with one another as needed.
10) How are server systems classified?
Ans: Server systems can be classified as either computer-server systems or file server systems.
In the first case, an interface is made available for clients to send requests to perform an action.
In the second case, provisions are available for clients to create, access and update files.

11) What is asymmetric clustering?


Ans: In asymmetric clustering, a machine is in a state known as hot standby mode, where it
does nothing but monitor the active server. That machine takes over the active server’s role should
that server fail.
12) What is a thread?
Ans: A thread is a basic unit of CPU utilization. In general, a thread is composed of a thread
ID, program counter, register set, and the stack.
13) Give some benefits of multithreaded programming.
Ans:
● there is increased responsiveness to the user
● resource sharing within the process
● economy
● utilization of multiprocessing architecture
14) State the main difference between logical from physical address space.
Ans: Logical address refers to the address that is generated by the CPU. On the other hand,
physical address refers to the address that is seen by the memory unit.
15) How does dynamic loading aid in better memory space utilization?
Ans: With dynamic loading, a routine is not loaded until it is called. This method is especially
useful when large amounts of code are needed in order to handle infrequently occurring cases
such as error routines.
16) What are overlays?
Ans: Overlays are used to enable a process to be larger than the amount of memory allocated
to it. The basic idea of this is that only instructions and data that are needed at any given time
are kept in memory.
17) What is the basic function of paging?
Ans: Paging is a memory management scheme that permits the physical address space of a
process to be noncontiguous. It avoids the considerable problem of having to fit varied sized
memory chunks onto the backing store.
18) What is fragmentation?
Ans: Fragmentation is wasted memory. It can be internal if we are dealing with systems that
have fixed-sized allocation units, or external if we are dealing with systems that have variable-
sized allocation units.

19) How does swapping result in better memory management?

Ans: During regular intervals that are set by the operating system, processes can be copied from
main memory to a backing store, and then copied back later. Swapping allows more processes
to be run than can fit into memory at one time.

20) What is Direct Access Method?


Ans: Direct Access method is based on a disk model of a file, such that it is viewed as a
numbered sequence of blocks or records. It allows arbitrary blocks to be read or written. Direct
access is advantageous when accessing large amounts of information.

21) When does thrashing occur?


Ans: Thrashing refers to an instance of high paging activity. This happens when the system is
spending more time paging than executing.
22) What is the best page size when designing an operating system?
Ans: The best paging size varies from system to system, so there is no single best when it
comes to page size. There are different factors to consider in order to come up with a suitable
page size, such as page table, paging time, and its effect on the overall efficiency of the
operating system.
23) When designing the file structure for an operating system, what attributes are
considered?
Ans: Typically, the different attributes for a file structure are naming, identifier, supported file
types, and location for the files, size, and level of protection.
24) What is root partition?
Ans: Root partition is where the operating system kernel is located. It also contains other
potentially important system files that are mounted during boot time.
25) What are the different operating systems?
1. Batched operating systems
2. Multi-programmed operating systems
3. Time-sharing operating systems
4. Distributed operating systems
5. Real-time operating systems

26) What is a boot-strap program?


Bootstrapping is a technique by which a simple computer program activates a more
complicated system of programs.
27) What is BIOS?
A BIOS is software that is put on computers. This allows the user to configure the input and
output of a computer. A BIOS is also known as firmware.
28) Explain the concept of Real-time operating systems?

A real time operating system is used when rigid time requirement have been placed on the
operation of a processor or the flow of the data; thus, it is often used as a control device in a
dedicated application. Here the sensors bring data to the computer. The computer must analyze
the data and possibly adjust controls to modify the sensor input.
They are of two types:
1. Hard real time OS
2. Soft real time OS
Hard-real-time OS has well-defined fixed time constraints. But soft real time operating systems
have less stringent timing constraints.

29) Define MULTICS?


MULTICS (Multiplexed Information and Computing Service) operating system was developed
from 1965-1970 at the Massachusetts Institute of Technology as a computing utility.
30) What is SCSI?
Small computer systems interface.
31) What is a sector?
Smallest addressable portion of a disk.
32) What is cache-coherency?
In a multiprocessor system there exist several caches, each of which may contain a copy of the same
variable A. A change made in one cache should immediately be reflected in all the other caches;
this process of maintaining the same value of a data item in all the caches is called cache-coherency.
33) What are resident monitors?
Early operating systems were called resident monitors.
34) What is dual-mode operation?
In order to protect the operating system and the system programs from malfunctioning
programs, two modes of operation evolved:
1. System mode.
2. User mode.
Here the user programs cannot directly interact with the system resources; instead they request
the operating system, which checks the request and does the required task for the user programs.
DOS was written for the Intel 8088 and has no dual-mode; the Pentium provides dual-mode operation.
35) What are the operating system components?
1. Process management
2. Main memory management
3. File management
4. I/O system management
5. Secondary storage management
6. Networking
7. Protection system
8. Command interpreter system

36) What are operating system services?


1. Program execution
2. I/O operations
3. File system manipulation
4. Communication
5. Error detection
6. Resource allocation
7. Accounting
8. Protection

37) What are system calls?


System calls provide the interface between a process and the operating system. System calls
for modern Microsoft Windows platforms are part of the Win32 API, which is available for all
the compilers written for Microsoft Windows.

38) What is a process?


A program in execution is called a process, or it may also be called a unit of work. A process
needs some system resources such as CPU time, memory, files, and I/O devices to accomplish
its task. Each process is represented in the operating system by a process control block or task
control block (PCB). Processes are of two types:
1. Operating system processes
2. User processes

39) What are the states of a process?


1. New
2. Running
3. Waiting
4. Ready
5. Terminated

40) What are various scheduling queues?


1. Job queue
2. Ready queue
3. Device queue
41) What is a job queue?
When a process enters the system it is placed in the job queue.
42) What is a ready queue?
The processes that are residing in the main memory and are ready and waiting to execute are
kept on a list called the ready queue.
43) What is a device queue?
A list of processes waiting for a particular I/O device is called device queue.
44) What are long term and short term schedulers?
Long term schedulers are the job schedulers that select processes from the job queue and load
them into memory for execution. Short term schedulers are the CPU schedulers that select
a process from the ready queue and allocate the CPU to it.
45) What is context switching?
Transferring the control from one process to other process requires saving the state of the old
process and loading the saved state for new process. This task is known as context switching.
46) What are the disadvantages of context switching?
The time taken for switching from one process to another is pure overhead, because the system does
no useful work while switching. So one of the solutions is to use threading whenever
possible.
47) What are co-operating processes?
Co-operating processes are processes that share system resources such as data with each other.
The processes can also communicate with each other via the interprocess communication facility,
generally used in distributed systems. The best example is a chat program used on the WWW.
48) What is a thread?
A thread is a program line under execution. Thread sometimes called a light-weight process, is
a basic unit of CPU utilization; it comprises a thread id, a program counter, a register set, and
a stack.
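As a small illustration (a sketch using the POSIX threads library, compiled with -pthread; it is not part of the lab exercises), the program below creates one extra thread inside a process and waits for it to finish.

#include <stdio.h>
#include <pthread.h>

/* Function executed by the new thread */
void *worker(void *arg)
{
    printf("Hello from thread %s\n", (char *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    /* create a thread that shares the process's address space */
    pthread_create(&tid, NULL, worker, "T1");
    /* wait for the thread to terminate */
    pthread_join(tid, NULL);
    return 0;
}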
49) What are the benefits of multithreaded programming?
1. Responsiveness (needn’t to wait for a lengthy process)
2. Resources sharing
3. Economy (Context switching between threads is easy)
4. Utilization of multiprocessor architectures (perfect utilization of the multiple processors).
50) What are types of threads?
1. User thread
2. Kernel thread
User threads are easy to create and use, but the disadvantage is that if one of them performs a
blocking system call, the whole process is blocked along with all its other user threads.
They are created in user space.
Kernel threads are supported directly by the operating system. They are slower to create and
manage. Most of the OS like Windows NT, Windows 2000, Solaris2, BeOS, and Tru64 Unix
support kernel threading.
51) What is process synchronization?
A situation, where several processes access and manipulate the same data concurrently and the
outcome of the execution depends on the particular order in which the access takes place, is
called race condition. To guard against the race condition we need to ensure that only one
process at a time can be manipulating the same data. The technique we use for this is called
process synchronization.
52).What is critical section problem?
Critical section is the code segment of a process in which the process may be changing common
variables, updating tables, writing a file and so on. Only one process is allowed to be in its
critical section at any given time (mutual exclusion). The critical section problem is to design
a protocol that the processes can use to co-operate. The three basic requirements of a critical
section solution are:
1. Mutual exclusion
2. Progress
3. Bounded waiting
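As a hedged sketch (using POSIX threads, not part of the lab programs), a mutex lock can serve as the entry and exit section around a critical section, guaranteeing mutual exclusion on the shared counter.

#include <stdio.h>
#include <pthread.h>

int counter = 0;                                 /* shared data */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);               /* entry section */
        counter++;                               /* critical section */
        pthread_mutex_unlock(&lock);             /* exit section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);           /* always 200000 with the mutex */
    return 0;
}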
53).What is a semaphore?
It is a synchronization tool used to solve complex critical section problems. A semaphore is an
integer variable that, apart from initialization, is accessed only through two standard atomic
operations: Wait and Signal.
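The two operations are conventionally written as below (a textbook-style sketch using busy waiting; a real implementation such as the POSIX sem_wait()/sem_post() calls blocks the caller instead and performs the update atomically).

/* Conceptual definitions of the semaphore operations; both must execute atomically. */
void wait(int *S)
{
    while (*S <= 0)
        ;            /* busy wait until the semaphore becomes positive */
    (*S)--;
}

void signal(int *S)
{
    (*S)++;
}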
54).What is bounded-buffer problem?
Here we assume that a pool consists of n buffers, each capable of holding one item. The
semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the
value 1.The empty and full semaphores count the number of empty and full buffers,
respectively. Empty is initialized to n, and full is initialized to 0.
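A hedged sketch of this classical solution using POSIX semaphores and threads follows (compile with -pthread; the slot count of 5 and the item values are illustrative and not part of the syllabus programs).

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 5                       /* number of buffer slots */
int buffer[N], in = 0, out = 0;   /* circular buffer */
sem_t empty, full, mutex;         /* empty = N, full = 0, mutex = 1 */

void *producer(void *arg)
{
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty);         /* wait for an empty slot */
        sem_wait(&mutex);         /* enter critical section */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);         /* leave critical section */
        sem_post(&full);          /* one more full slot */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < 10; i++) {
        sem_wait(&full);          /* wait for a full slot */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);         /* one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}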
55) What is readers-writers problem?
Here we divide the processes into two types:
1. Readers (Who want to retrieve the data only)
2. Writers (Who want to retrieve as well as manipulate)
We can permit a number of readers to read the same data at the same time, but a writer
must be given exclusive access. There are two variations of this problem:
1. No reader will be kept waiting unless a writer has already obtained permission to use the
shared object. In other words, no reader should wait for other readers to complete simply
because a writer is waiting.
2. Once a writer is ready, that writer performs its write as soon as possible. In other words, if a
writer is waiting to access the object, no new readers may start reading.
56).What is dining philosophers’ problem?
Consider 5 philosophers who spend their lives thinking and eating. The philosophers share a
common circular table surrounded by 5 chairs, each belonging to one philosopher. In the center
of the table is a bowl of rice, and the table is laid with five single chopsticks. When a
philosopher thinks, she doesn’t interact with her colleagues.
From time to time, a philosopher gets hungry and tries to pick up the two chopsticks that are
closest to her. A philosopher may pick up only one chopstick at a time. Obviously she can’t
pick up a chopstick that is in someone else’s hand. When a hungry philosopher has both her
chopsticks at the same time, she eats without releasing them. When she is finished eating, she
puts down both of her chopsticks and starts thinking again.
57) What is a deadlock?
Suppose a process requests resources; if the resources are not available at that time, the process
enters a wait state. A waiting process may never again change state, because the resources
it has requested are held by other waiting processes. This situation is called deadlock.
58) What are necessary conditions for dead lock?
1. Mutual exclusion (where at least one resource is non-sharable)
2. Hold and wait (where a process hold one resource and waits for other resource)
3. No preemption (where the resources can’t be preempted)
4. Circular wait
59).What is resource allocation graph?
This is a graphical description of deadlocks. The graph consists of a set of edges E and a set
of vertices V. The set of vertices V is partitioned into two different types of nodes:
P = {P1, P2, ..., Pn}, the set consisting of all the processes in the system, and
R = {R1, R2, ..., Rm}, the set consisting of all the resource types in the system. A
directed edge Pi → Rj is called a request edge; a directed edge Rj → Pi is called an assignment
edge. Pictorially we represent a process Pi as a circle, and each resource type Rj as a square.
Since resource type Rj may have more than one instance, we represent each such instance as a
dot within the square. When a request is fulfilled the request edge is transformed into an
assignment edge. When a process releases a resource the assignment edge is deleted. If a cycle
involves a set of resource types, each of which has only a single instance, then a deadlock has
occurred, and each process involved in the cycle is deadlocked.
60) What are deadlock prevention techniques?
Mutual exclusion: Some resources, such as read-only files, need not be mutually exclusive;
they should be sharable. But some resources, such as printers, must be mutually exclusive.
Hold and wait: To avoid this condition we have to ensure that a process requesting a
resource does not hold any other resources.
No preemption: If a process that is holding some resources requests another resource that
cannot be immediately allocated to it (that is, the process must wait), then all the resources
it currently holds are preempted (implicitly released).
Circular wait: The way to ensure that this condition never holds is to impose a total ordering of
all the resource types, and to require that each process requests resources in an increasing order
of enumeration (a sketch follows below).
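As a hedged illustration of the circular-wait rule (a sketch with POSIX mutexes; the resource and function names are hypothetical), every thread acquires the locks in the same global order, so a cycle of waits cannot form.

#include <pthread.h>

/* Two resources with a fixed global ordering: lockA (order 1) before lockB (order 2). */
pthread_mutex_t lockA = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lockB = PTHREAD_MUTEX_INITIALIZER;

void *task1(void *arg)
{
    pthread_mutex_lock(&lockA);   /* always taken first  */
    pthread_mutex_lock(&lockB);   /* always taken second */
    /* ... use both resources ... */
    pthread_mutex_unlock(&lockB);
    pthread_mutex_unlock(&lockA);
    return NULL;
}

void *task2(void *arg)
{
    /* task2 also requests lockA before lockB, respecting the global order,
       so a circular wait between the two tasks can never arise */
    pthread_mutex_lock(&lockA);
    pthread_mutex_lock(&lockB);
    /* ... use both resources ... */
    pthread_mutex_unlock(&lockB);
    pthread_mutex_unlock(&lockA);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task1, NULL);
    pthread_create(&t2, NULL, task2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}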

61) What is a safe state and a safe sequence?


A system is in a safe state only if there exists a safe sequence. A sequence of processes
<P1, P2, ..., Pn> is a safe sequence for the current allocation state if, for each Pi, the resources
that Pi can still request can be satisfied by the currently available resources plus the resources
held by all the Pj, with j < i.
62) What are the deadlock avoidance algorithms?
A deadlock avoidance algorithm dynamically examines the resource-allocation state to ensure
that a circular wait condition can never exist. The resource allocation state is defined by the
number of available and allocated resources, and the maximum demand of the processes. There
are two algorithms:
1. Resource allocation graph algorithm
2. Banker’s algorithm
a. Safety algorithm
b. Resource request algorithm
63)What are the basic functions of an operating system?
The operating system controls and coordinates the use of the hardware among the various
application programs for various users. The operating system acts as a resource allocator and
manager. Since there are many possibly conflicting requests for resources, the operating system
must decide which requests are allocated resources so that the computer system operates
efficiently and fairly. The operating system is also a control program which controls the user
programs to prevent errors and improper use of the computer. It is especially concerned with
the operation and control of I/O devices.
64).Explain briefly about, processor, assembler, compiler, loader, linker and the
functions executed by them.
Processor: A processor is the part of a computer system that executes instructions. It is also called
a CPU.
Assembler: An assembler is a program that takes basic computer instructions and converts
them into a pattern of bits that the computer’s processor can use to perform its basic operations.
Some people call these instructions assembler language and others use the term assembly
language.
Compiler: A compiler is a special program that processes statements written in a particular
programming language and turns them into machine language or “code” that a computer’s
processor uses. Typically, a programmer writes language statements in a language such as
Pascal or C one line at a time using an editor. The file that is created contains what are called
the source statements. The programmer then runs the appropriate language compiler,
specifying the name of the file that contains the source statements.
Loader: In a computer operating system, a loader is a component that locates a given program
(which can be an application or, in some cases, part of the operating system itself) in offline
storage (such as a hard disk), loads it into main storage (in a personal computer, it’s called
random access memory), and gives that program control of the computer.
Linker: Linker performs the linking of libraries with the object code to make the object code
into an executable machine code.
65). What is a Real-Time System?
A real time process is a process that must respond to the events within a certain time period. A
real time operating system is an operating system that can run real time processes successfully
66) What is the difference between Hard and Soft real-time systems?
A hard real-time system guarantees that critical tasks complete on time. This goal requires that
all delays in the system be bounded from the retrieval of the stored data to the time that it takes
the operating system to finish any request made of it. A soft real-time system is one where a
critical real-time task gets priority over other tasks and retains that priority until it completes.
As in hard real-time systems, kernel delays need to be bounded.
67) What is virtual memory?
Virtual memory is a technique by which the system appears to have more memory than
it actually does. This is done by time-sharing the physical memory and keeping parts of
memory on disk when they are not actively being used.
68) Why paging is used?
Paging is a solution to the external fragmentation problem: it permits the logical address
space of a process to be noncontiguous, thus allowing a process to be allocated physical
memory wherever it is available.
69) What is Context Switch?
Switching the CPU to another process requires saving the state of the old process and loading
the saved state for the new process. This task is known as a context switch. Context-switch time
is pure overhead, because the system does no useful work while switching. Its speed varies
from machine to machine, depending on the memory speed, the number of registers which
must be copied, and the existence of special instructions (such as a single instruction to load or
store all registers).
70) What is CPU Scheduler?
Selects from among the processes in memory that are ready to execute, and allocates the CPU
to one of them.
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state.
2. Switches from running to ready state.
3. Switches from waiting to ready state.
4. Terminates.
Scheduling under 1 and 4 is nonpreemptive.
All other scheduling is preemptive.
71) . What do you mean by deadlock?
Deadlock is a situation where a group of processes are all blocked and none of them can become
unblocked until one of the others becomes unblocked. The simplest deadlock is two processes
each of which is waiting for a message from the other.
72)What is Dispatcher?
Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program
Dispatch latency – time it takes for the dispatcher to stop one process and start another running.
73) What is Throughput, Turnaround time, waiting time and Response time?
Throughput – number of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time it takes from when a request was submitted until the first
response is produced, not output (for time-sharing environment)
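For example (hypothetical figures), suppose three processes P1, P2 and P3 all arrive at time 0 and run under FCFS with CPU bursts of 5 ms, 3 ms and 2 ms. They complete at 5, 8 and 10 ms, so the turnaround times are 5, 8 and 10 ms (completion minus arrival), the waiting times are 0, 5 and 8 ms (turnaround minus burst), the response times equal the waiting times here because each process produces its first response when it first gets the CPU, and the throughput is 3 processes / 10 ms = 0.3 processes per ms.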
74) Explain the difference between microkernel and macro kernel?
Micro-Kernel: micro-kernel is a minimal operating system that performs only the essential
functions of an operating system. All other operating system functions are performed by system
processes
Monolithic: A monolithic operating system is one where all operating system code is in a single
executable image and all operating system code runs in system mode.
75) .What is multi tasking, multi programming, multi threading?
Multi programming: Multiprogramming is the technique of keeping several programs in memory
at the same time and sharing the CPU among them. It allows a computer to do several things
at the same time. Multiprogramming creates logical parallelism.
The concept of multiprogramming is that the operating system keeps several jobs in memory
simultaneously. The operating system selects a job from the job pool and starts executing a job,
when that job needs to wait for any i/o operations the CPU is switched to another job. So the
main idea here is that the CPU is never idle.
Multi tasking: Multitasking is the logical extension of multiprogramming .The concept of
multitasking is quite similar to multiprogramming but difference is that the switching between
jobs occurs so frequently that the users can interact with each program while it is running. This
concept is also known as time-sharing systems. A time-shared operating system uses CPU
scheduling and multiprogramming to provide each user with a small portion of time-shared
system.
Multi threading: An application typically is implemented as a separate process with several
threads of control. In some situations a single application may be required to perform several
similar tasks for example a web server accepts client requests for web pages, images, sound,
and so forth. A busy web server may have several clients concurrently accessing it. If the
web server ran as a traditional single-threaded process, it would be able to service only one
client at a time. The amount of time that a client might have to wait for its request to be serviced
could be enormous.
So it is efficient to have one process that contains multiple threads to serve the same purpose.
This approach multithreads the web-server process: the server creates a separate thread that
listens for client requests, and when a request is made, rather than creating another process it
creates another thread to service the request.
So to get the advantages like responsiveness, Resource sharing economy and utilization of
multiprocessor architectures multithreading concept can be used
76) Give a non-computer example of preemptive and non-preemptive scheduling?
Consider any system where people use some kind of resources and compete for them. A non-
computer example of preemptive scheduling is traffic on a single-lane road: if there is an
emergency, or an ambulance on the road, the other vehicles give way to the vehicle that is in
need. An example of non-preemptive scheduling is people standing in a queue for tickets.
77) What is starvation and aging?
Starvation: Starvation is a resource management problem where a process does not get the
resources it needs for a long time because the resources are being allocated to other processes.
Aging: Aging is a technique to avoid starvation in a scheduling system. It works by adding an
aging factor to the priority of each request. The aging factor must increase the request’s priority
as time passes and must ensure that a request will eventually be the highest priority request
(after it has waited long enough)
78) Different types of Real-Time Scheduling?
Hard real-time systems – required to complete a critical task within a guaranteed amount of
time.
Soft real-time computing – requires that critical processes receive priority over less fortunate
ones.
79) What are the Methods for Handling Deadlocks?
● Ensure that the system will never enter a deadlock state.
● Allow the system to enter a deadlock state and then recover.
● Ignore the problem and pretend that deadlocks never occur in the system; used by most
operating systems, including UNIX.
80) What is a Safe State and its’ use in deadlock avoidance?
When a process requests an available resource, system must decide if immediate allocation
leaves the system in a safe state
● System is in safe state if there exists a safe sequence of all processes.
● Sequence <P1, P2, ..., Pn> is safe if, for each Pi, the resources that Pi can still request can be
satisfied by currently available resources + resources held by all the Pj, with j < i.
● If Pi resource needs are not immediately available, then Pi can wait until all Pj have
finished.
● When Pj is finished, Pi can obtain needed resources, execute, return allocated resources,
and terminate.
● When Pi terminates, Pi+1 can obtain its needed resources, and so on.
● Deadlock avoidance ensures that a system will never enter an unsafe state.
