
Operating System Assignment

Q-1 What is an operating system? Explain different operating system architectures.
Ans. An operating system (OS) is software that acts as an interface between the end user and the computer hardware. Every computer must have at least one OS to run other programs. An application like Chrome, MS Word, or a game needs an environment in which to run and perform its task. The OS lets you communicate with the computer without knowing how to speak the computer's language. It is not possible for a user to use a computer or mobile device without an operating system.
Examples: Windows, Linux, macOS.

Different operating system architectures are: monolithic systems, layered systems, microkernels, the client-server model, virtual machines (and VMs rediscovered), and exokernels.
Q-2 Explain system call with example.
Ans. A system call is a mechanism that provides the interface between a process and the operating system. It is the programmatic method by which a computer program requests a service from the kernel of the OS.
System calls offer the services of the operating system to user programs via an API (Application Programming Interface). System calls are the only entry points into the kernel.
Example of a system call:
Suppose we need to write a program that reads data from one file and copies that data into another file. The first information the program requires is the names of the two files: the input file and the output file.
In an interactive system, this type of program execution requires several system calls:
First, a call to write a prompting message on the screen.
Second, a call to read from the keyboard the characters that name the two files.

How does a system call work?
Here are the steps for a system call:
Step 1) A process executes in user mode until a system call interrupts it.
Step 2) The system call is then executed in kernel mode on a priority basis.
Step 3) Once the system call execution is over, control returns to user mode.
Step 4) Execution of the user process resumes in user mode.
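The file-copy example above can be made concrete with a minimal sketch, assuming a POSIX system such as Linux: write() asks the kernel to copy bytes from a user buffer to a file descriptor, crossing from user mode into kernel mode and back, exactly as in the four steps above.

```cpp
// Minimal system-call sketch (assumes a POSIX system, e.g. Linux).
#include <unistd.h>   // write()
#include <cstring>    // strlen()

long say_hello() {
    const char* msg = "hello from a system call\n";
    // Control transfers to kernel mode, the kernel performs the I/O on
    // file descriptor 1 (standard output), then control returns to user
    // mode with the number of bytes written (or -1 on error).
    return write(1, msg, strlen(msg));
}
```

In practice, programs usually invoke system calls indirectly through a library API (here, the C library wrapper around the kernel's write entry point) rather than trapping into the kernel by hand.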

Q-3 Explain the difference between process and thread. Explain how context switching works between different processes.
Ans.
Definition of Process:
A process is the execution of a program; it performs the relevant actions specified in the program, or it is an execution unit in which a program runs. The operating system creates, schedules, and terminates processes for the use of the CPU. Processes created by the main process are known as child processes.

A process's operations are controlled with the help of the PCB (Process Control Block), which can be considered the brain of the process: it contains all the crucial information regarding the process, such as its process ID, priority, state, program status word (PSW), and CPU register contents.

The PCB is a kernel data structure that supports three kinds of functions: scheduling, dispatching, and context save.

Scheduling: the method of selecting the sequence of processes; in simple words, it chooses the process to be executed next on the CPU.
Dispatching: sets up the environment for the chosen process to be executed.
Context save: saves the information regarding a process when it is resumed or blocked.
There are certain states in a process's lifecycle, such as ready, running, blocked, and terminated. Process states are used to keep track of a process's activity at any instant.

From the programmer's point of view, processes are the medium for achieving concurrent execution of a program. The chief process of a concurrent program creates child processes. The main process and the child processes need to interact with each other to achieve a common goal.

Interleaving the operations of processes enhances computation speed when an I/O operation in one process overlaps with computational activity in another process.

Properties of a Process:
Creation of each process requires a separate system call for each process.
A process is an isolated execution entity and does not share data and information.
Processes use IPC (inter-process communication) mechanisms for communication, which significantly increases the number of system calls.
Process management requires more system calls.
Each process has its own stack, heap memory, instructions, data, and memory map.
Definition of Thread:
A thread is a unit of program execution that uses its process's resources to accomplish its task. All threads within a single program are logically contained within one process. The kernel allocates a stack and a thread control block (TCB) to each thread. When switching between threads of the same process, the operating system saves only the stack pointer and the CPU state.

Threads are implemented in three different ways: kernel-level threads, user-level threads, and hybrid threads. Threads can have three states: running, ready, and blocked. A thread's context includes only computational state, not resource-allocation and communication state, which reduces switching overhead. Threads enhance concurrency (parallelism), and hence speed also increases.
Multithreading also has demerits: multiple threads by themselves do not create complexity, but the interaction between them does.

A thread must have a priority property when multiple threads are active. The time a thread gets for execution relative to the other active threads in the same process is specified by its priority.

Properties of a Thread:
A single system call can create more than one thread (lightweight process).
Threads share data and information.
Threads share the instruction, global, and heap regions, but each has its own individual stack and registers.
Thread management requires few or no system calls, as communication between threads can be achieved using shared memory.
The isolation property of a process increases its overhead in terms of resource consumption.
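The sharing described above can be seen directly in code. A minimal sketch (the names are illustrative): two std::thread workers update a global counter that both can see, while each keeps a private variable on its own stack.

```cpp
#include <thread>
#include <atomic>

// Threads of one process share globals and the heap; each has its own stack.
std::atomic<int> shared_counter{0};  // visible to every thread

int run_two_threads() {
    shared_counter = 0;
    auto worker = [] {
        int local = 0;                 // lives on this thread's own stack
        for (int i = 0; i < 1000; ++i) {
            ++local;                   // private to the thread
            ++shared_counter;          // shared across both threads
        }
    };
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
    return shared_counter.load();      // both threads contributed
}
```

The counter is std::atomic because, unlike the stack-local variable, it is written by both threads concurrently; without atomicity the increments could race.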

Key Differences Between Process and Thread:
All threads of a program are logically contained within a process.
A process is heavyweight, but a thread is lightweight.
A process is an isolated execution unit, whereas a thread is not isolated and shares memory.
A thread cannot have an individual existence; it is attached to a process. A process, on the other hand, can exist individually.
When a thread terminates, its associated stack can be recovered, as every thread has its own stack. In contrast, if a process dies, all of its threads die with it.

Conclusion:
Processes are used to achieve the execution of programs in a concurrent or sequential manner, while a thread is a unit of program execution that uses the environment of its process. When many threads use the environment of the same process, they must share its code, data, and resources. The operating system uses this fact to reduce overhead and improve computation.
What is Context Switching?
Context switching is the procedure of switching the CPU from one process or task to another. In this phenomenon, the execution of the process in the running state is suspended by the kernel, and another process in the ready state is executed by the CPU.
It is one of the essential features of a multitasking operating system. Processes are switched so quickly that the user gets the illusion that all processes are executing at the same time.
But the context-switching procedure involves a number of steps that must be followed. You cannot directly switch a process from the running state to the ready state; you have to save the context of that process first. If the context of a process P is not saved, then the next time P gets the CPU it would start executing from the beginning, whereas in reality it should continue from the point where it left the CPU in its previous execution. So the context of a process must be saved before any other process is put into the running state.
A context is the contents of a CPU's registers and program counter at a point in time. Context switching can happen for the following reasons:
 When a process of higher priority enters the ready state. In this case, the execution of the running process should be stopped and the higher-priority process should be given the CPU.
 When an interrupt occurs. The process in the running state should be stopped and the CPU should handle the interrupt before doing anything else.
 When a transition between user mode and kernel mode is required.
Steps involved in Context Switching
The process of context switching involves a number of steps. Consider a context switch between two processes P1 and P2, where initially P1 is in the running state and P2 is in the ready state. When an interrupt occurs, P1 must be moved from the running state to the ready state after its context is saved, and P2 from the ready state to the running state. The following steps are performed:
1. First, the context of process P1 (the process in the running state) is saved in the Process Control Block of P1, i.e., PCB1.
2. PCB1 is then moved to the relevant queue: the ready queue, an I/O queue, a waiting queue, etc.
3. From the ready queue, the new process to be executed is selected, i.e., process P2.
4. The Process Control Block of P2, i.e., PCB2, is updated by setting the process state to running. If P2 was executed by the CPU earlier, the position of its last executed instruction is retrieved so that its execution can be resumed from that point.
5. Similarly, to execute process P1 again, the same steps (1 to 4) are followed.
In general, at least two processes are required for a context switch to happen, although in the case of the round-robin algorithm a context switch can occur even with a single process (the process is switched out and back in when its time quantum expires).
The time taken to switch from one process to another is called the context-switching time.
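The bookkeeping in steps 1 to 4 can be sketched as a data structure. The PCB below is purely illustrative; its fields are assumptions for the sketch, not any real kernel's layout.

```cpp
#include <cstdint>

// Illustrative Process Control Block: what a kernel must save so a
// process can later resume exactly where it left off.
enum class State { Ready, Running, Blocked, Terminated };

struct PCB {
    int      pid;               // process id
    int      priority;
    State    state;
    uint64_t program_counter;   // address of the next instruction
    uint64_t registers[16];     // general-purpose CPU registers
};

// Step 1: save the running process's context into its PCB,
// then (step 2) it goes back on the ready queue.
void save_context(PCB& pcb, uint64_t pc, const uint64_t regs[16]) {
    pcb.program_counter = pc;
    for (int i = 0; i < 16; ++i) pcb.registers[i] = regs[i];
    pcb.state = State::Ready;
}

// Step 4: load the selected process's saved context, mark it running,
// and return the program counter at which execution resumes.
uint64_t restore_context(PCB& pcb, uint64_t regs[16]) {
    for (int i = 0; i < 16; ++i) regs[i] = pcb.registers[i];
    pcb.state = State::Running;
    return pcb.program_counter;
}
```

A real switch also involves memory-management state (page tables) and happens in privileged code; this sketch shows only the register/PC round trip that makes resumption transparent to the process.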

Q-4 Explain different services provided by the operating system with example.
Ans.
The following are examples of services provided by an operating system:

Context switching and scheduling, which allocate a process CPU time to execute its instructions.
Memory management, which deals with allocating memory to processes.
Interprocess communication, which deals with facilities that allow concurrently running processes to communicate with each other.
File systems, which provide higher-level files out of low-level unstructured data on a disk.
High-level I/O facilities, which free a process from the low-level details of interrupt handling.

An operating system provides services to both users and programs:
It provides programs an environment in which to execute.
It provides users the services to execute programs in a convenient manner.
Following are a few common services provided by an operating system:

Program execution
I/O operations
File System manipulation
Communication
Error Detection
Resource Allocation
Protection
Program execution:
Operating systems handle many kinds of activities, from user programs to system programs such as printer spoolers, name servers, file servers, etc. Each of these activities is encapsulated as a process.
A process includes the complete execution context (code to execute, data to manipulate, registers, OS resources in use). Following are the major activities of an operating system with respect to program management:
Loads a program into memory.
Executes the program.
Handles the program's execution.
Provides a mechanism for process synchronization.
Provides a mechanism for process communication.
Provides a mechanism for deadlock handling.
I/O operation:
An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide the peculiarities of specific hardware devices from users.
The operating system manages the communication between the user and the device drivers.
An I/O operation means a read or write operation on a file or on a specific I/O device.
The operating system provides access to the required I/O device when needed.
File system manipulation:
A file represents a collection of related information. Computers store files on disk (secondary storage) for long-term storage. Examples of storage media include magnetic tape, magnetic disk, and optical disks such as CD and DVD. Each of these media has its own properties, such as speed, capacity, data transfer rate, and data access method.
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories. Following are the major activities of an operating system with respect to file management:
A program needs to read a file or write a file.
The operating system grants the program permission to operate on the file.
Permissions vary: read-only, read-write, denied, and so on.
The operating system provides an interface for the user to create/delete files.
The operating system provides an interface for the user to create/delete directories.
The operating system provides an interface for creating backups of the file system.
Communication:
In the case of distributed systems, which are collections of processors that do not share memory, peripheral devices, or a clock, the operating system manages communication among processes. Multiple processes communicate with one another through communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention and security. Following are the major activities of an operating system with respect to communication:
Two processes often require data to be transferred between them.
The processes can be on one computer or on different computers connected through a computer network.
Communication may be implemented by two methods: shared memory or message passing.
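Message passing can be sketched with a pipe, assuming a POSIX system: the kernel carries bytes from a writer to a reader with no memory shared between the two ends. For simplicity the sketch keeps both ends in one process; in a real program they would usually live in different processes (e.g. across a fork()).

```cpp
// Minimal message-passing sketch (assumes POSIX pipe(), read(), write()).
#include <unistd.h>   // pipe(), read(), write(), close()
#include <string>

std::string send_through_pipe(const std::string& msg) {
    int fds[2];                                 // fds[0] = read end, fds[1] = write end
    if (pipe(fds) != 0) return "";
    write(fds[1], msg.data(), msg.size());      // "sender" side
    close(fds[1]);
    char buf[256];
    ssize_t n = read(fds[0], buf, sizeof buf);  // "receiver" side
    close(fds[0]);
    return std::string(buf, n > 0 ? static_cast<size_t>(n) : 0);
}
```

The message travels through a kernel buffer, which is exactly the contrast with shared memory: with a pipe every byte crosses the user/kernel boundary twice, while shared memory avoids the kernel on each transfer once set up.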
Error handling:
Errors can occur anytime and anywhere: in the CPU, in I/O devices, or in the memory hardware. Following are the major activities of an operating system with respect to error handling:
The OS constantly checks for possible errors.
The OS takes appropriate action to ensure correct and consistent computing.
Resource management:
In a multi-user or multitasking environment, resources such as main memory, CPU cycles, and file storage must be allocated to each user or job. Following are the major activities of an operating system with respect to resource management:
The OS manages all kinds of resources using schedulers.
CPU scheduling algorithms are used for better utilization of the CPU.
Protection:
In a computer system with multiple users and concurrent execution of multiple processes, the various processes must be protected from one another's activities.
Protection refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system. Following are the major activities of an operating system with respect to protection:
The OS ensures that all access to system resources is controlled.
The OS ensures that external I/O devices are protected from invalid access attempts.
The OS provides authentication features for each user by means of passwords.

Q-5 Consider the two-dimensional array A:
int A[][] = new int[100][100];
where A[0][0] is at location 200 in a paged memory system with pages of size 200. A small process that manipulates the matrix resides in page 0 (locations 0 to 199), so every instruction fetch is from page 0. For three page frames, how many page faults are generated by the following array-initialization loop, using LRU replacement and assuming that page frame 1 contains the process and the other two are initially empty?
for (int i = 0; i < 100; i++)
    for (int j = 0; j < 100; j++)
        A[i][j] = 0;
Ans.
Some assumptions are needed, because the question does not state them: the array is stored in row-major order, i indexes rows and j indexes columns, and since the page size is 200 locations, each page holds 200 array elements, i.e., two complete rows of 100 elements each.
One of the three frames is always occupied by the process, so two frames are left for the array's pages.
With the given loop (i outer, j inner), the array is traversed in exactly the order it is laid out in memory. Each data page is therefore referenced 200 times in a row: the first reference faults and the next 199 hit. So the number of page faults is 10000/200 = 50.
Had the loops been interchanged (j outer, i inner), consecutive references would jump from row to row; with only two data frames under LRU, each page fault would serve only 2 references, giving 10000/2 = 5000 page faults.
This reasoning can be extended if the page size is given in kB and the array element size is taken as 4 B: just count the number of array elements in one page.
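Both counts can be checked with a small simulation (a sketch using the page-number arithmetic from the problem: element (i, j) lives at location 200 + i*100 + j, pages hold 200 locations, and two frames are free for data).

```cpp
#include <vector>
#include <algorithm>

// Counts LRU page faults for the matrix-initialization loop, with two
// data frames; row_major_loop selects i-outer (true) or j-outer (false).
int count_faults(bool row_major_loop) {
    std::vector<int> frames;   // resident pages, least recent at the front
    int faults = 0;
    auto touch = [&](int i, int j) {
        int page = (200 + i * 100 + j) / 200;            // page of A[i][j]
        auto it = std::find(frames.begin(), frames.end(), page);
        if (it != frames.end()) {                        // hit: mark most recent
            frames.erase(it);
            frames.push_back(page);
        } else {                                         // fault: evict LRU
            ++faults;
            if (frames.size() == 2) frames.erase(frames.begin());
            frames.push_back(page);
        }
    };
    if (row_major_loop)
        for (int i = 0; i < 100; ++i)
            for (int j = 0; j < 100; ++j) touch(i, j);
    else
        for (int j = 0; j < 100; ++j)
            for (int i = 0; i < 100; ++i) touch(i, j);
    return faults;
}
```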

Q-6 Assuming a 1-KB page size, what are the page numbers and offsets for the following address references (provided as decimal numbers)?
Ans. With a 1-KB page, page number = address / 1024 and page offset = address mod 1024 (so an offset is always less than 1024).
a. 2375 – Page number = 2, page offset = 327 (2375 = 2 × 1024 + 327)
b. 19366 – Page number = 18, page offset = 934 (19366 = 18 × 1024 + 934)
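The split can be computed mechanically: the page number is the address divided by the page size, and the offset is the remainder.

```cpp
#include <utility>

// Splits a decimal address into (page number, offset) for a given page
// size; with a 1-KB (1024-byte) page the offset is always below 1024.
std::pair<int, int> page_and_offset(int address, int page_size = 1024) {
    return { address / page_size, address % page_size };
}
```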
Q-7 Explain the Banker's algorithm with example.
Ans.
Banker's Algorithm in Operating Systems
The Banker's algorithm is a resource-allocation and deadlock-avoidance algorithm that tests for safety by simulating the allocation of the predetermined maximum possible amounts of all resources, then performs a safe-state check before deciding whether the allocation should be allowed to continue.

Why is the Banker's algorithm named so?
The Banker's algorithm is named after the way a bank decides whether a loan can be sanctioned. Suppose there are n account holders in a bank. When a person applies for a loan, the bank first subtracts the loan amount from the cash it has on hand and sanctions the loan only if it can still eventually satisfy every account holder's maximum possible withdrawal.
In other words, the bank would never allocate its money in such a way that it can no longer satisfy the needs of all its customers. The bank always tries to remain in a safe state.

The following data structures are used to implement the Banker's algorithm.
Let 'n' be the number of processes in the system and 'm' be the number of resource types.
Available:
 A 1-D array of size 'm' indicating the number of available resources of each type.
 Available[j] = k means there are 'k' instances of resource type Rj available.
Max:
 A 2-D array of size 'n*m' that defines the maximum demand of each process in the system.
 Max[i, j] = k means process Pi may request at most 'k' instances of resource type Rj.
Allocation:
 A 2-D array of size 'n*m' that defines the number of resources of each type currently allocated to each process.
 Allocation[i, j] = k means process Pi is currently allocated 'k' instances of resource type Rj.
Need:
 A 2-D array of size 'n*m' that indicates the remaining resource need of each process.
 Need[i, j] = k means process Pi currently needs 'k' more instances of resource type Rj for its execution.
 Need[i, j] = Max[i, j] – Allocation[i, j]

Allocation_i specifies the resources currently allocated to process Pi, and Need_i specifies the additional resources that Pi may still request to complete its task.
The Banker's algorithm consists of a Safety algorithm and a Resource-request algorithm.

Safety Algorithm:
The algorithm for finding out whether or not a system is in a safe state:
1) Let Work and Finish be vectors of length 'm' and 'n' respectively.
   Initialize: Work = Available
   Finish[i] = false for i = 1, 2, 3, ..., n
2) Find an i such that both
   a) Finish[i] = false
   b) Need_i <= Work
   If no such i exists, go to step (4).
3) Work = Work + Allocation_i
   Finish[i] = true
   Go to step (2).
4) If Finish[i] = true for all i, then the system is in a safe state.

Resource-Request Algorithm:
Let Request_i be the request array for process Pi. Request_i[j] = k means process Pi wants k instances of resource type Rj. When a request for resources is made by process Pi, the following actions are taken:
1) If Request_i <= Need_i, go to step (2); otherwise, raise an error condition, since the process has exceeded its maximum claim.
2) If Request_i <= Available, go to step (3); otherwise, Pi must wait, since the resources are not available.
3) Have the system pretend to have allocated the requested resources to process Pi by modifying the state as follows:
   Available = Available – Request_i
   Allocation_i = Allocation_i + Request_i
   Need_i = Need_i – Request_i
Example:
Consider a system with five processes P0 through P4 and three resource types A, B, and C. Resource type A has 10 instances, B has 5 instances, and C has 7 instances. Suppose at time t0 the following snapshot of the system has been taken (this is the same data used in the code below):

Process   Allocation (A B C)   Max (A B C)
P0        0 1 0                7 5 3
P1        2 0 0                3 2 2
P2        3 0 2                9 0 2
P3        2 1 1                2 2 2
P4        0 0 2                4 3 3
Available (A B C): 3 3 2

Question 1. What will be the content of the Need matrix?
Need[i, j] = Max[i, j] – Allocation[i, j], so the content of the Need matrix is:
P0: 7 4 3
P1: 1 2 2
P2: 6 0 0
P3: 0 1 1
P4: 4 3 1

Question 2. Is the system in a safe state? If yes, what is the safe sequence?
Applying the Safety algorithm to the given system: Need_1 = (1, 2, 2) <= Available = (3, 3, 2), so P1 can finish first; releasing its allocation gives Available = (5, 3, 2), after which P3, P4, P0, and P2 can each finish in turn. The system is therefore in a safe state, with safe sequence P1, P3, P4, P0, P2.

Question 3. What will happen if process P1 requests one additional instance of resource type A and two instances of resource type C?
Request_1 = (1, 0, 2). Since Request_1 <= Need_1 = (1, 2, 2) and Request_1 <= Available = (3, 3, 2), the system pretends to allocate the resources, giving Available = (2, 3, 0), Allocation_1 = (3, 0, 2), and Need_1 = (0, 2, 0). We must then determine whether this new state is safe by executing the Safety algorithm again on the updated data structures.
The new system state is safe (the sequence P1, P3, P4, P0, P2 still satisfies the safety criteria), so the request of process P1 can be granted immediately.

Code for Banker's Algorithm (C++):

// Banker's Algorithm (safety check for the snapshot above)
#include <iostream>
using namespace std;

int main()
{
    // P0, P1, P2, P3, P4 are the process names
    int n = 5; // Number of processes
    int m = 3; // Number of resource types

    // Allocation matrix
    int alloc[5][3] = { { 0, 1, 0 },   // P0
                        { 2, 0, 0 },   // P1
                        { 3, 0, 2 },   // P2
                        { 2, 1, 1 },   // P3
                        { 0, 0, 2 } }; // P4

    // Max matrix
    int max[5][3] = { { 7, 5, 3 },   // P0
                      { 3, 2, 2 },   // P1
                      { 9, 0, 2 },   // P2
                      { 2, 2, 2 },   // P3
                      { 4, 3, 3 } }; // P4

    int avail[3] = { 3, 3, 2 }; // Available resources

    int f[5], ans[5], ind = 0;
    for (int k = 0; k < n; k++)
        f[k] = 0; // no process has finished yet

    // Need[i][j] = Max[i][j] - Allocation[i][j]
    int need[5][3];
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            need[i][j] = max[i][j] - alloc[i][j];

    // Safety algorithm: make up to n passes, each time looking for an
    // unfinished process whose need can be met by the available resources.
    for (int k = 0; k < n; k++) {
        for (int i = 0; i < n; i++) {
            if (f[i] == 0) {
                int flag = 0;
                for (int j = 0; j < m; j++) {
                    if (need[i][j] > avail[j]) {
                        flag = 1;
                        break;
                    }
                }
                if (flag == 0) { // process i can finish: release its resources
                    ans[ind++] = i;
                    for (int y = 0; y < m; y++)
                        avail[y] += alloc[i][y];
                    f[i] = 1;
                }
            }
        }
    }

    cout << "Following is the SAFE Sequence" << endl;
    for (int i = 0; i < n - 1; i++)
        cout << " P" << ans[i] << " ->";
    cout << " P" << ans[n - 1] << endl;

    return 0;
}
Output:
Following is the SAFE Sequence
P1 -> P3 -> P4 -> P0 -> P2

Q-8 Explain monitors for synchronization.
Ans.
A monitor is one of the ways to achieve process synchronization. Monitors are supported by programming languages to achieve mutual exclusion between processes; for example, Java's synchronized methods, together with the wait() and notify() constructs Java provides.
1. It is a collection of condition variables and procedures combined together in a special kind of module or package.
2. Processes running outside the monitor cannot access the monitor's internal variables, but they can call the monitor's procedures.
3. Only one process at a time can execute code inside a monitor.
Condition Variables:
Two different operations are performed on the condition variables of a monitor:
wait
signal
Say we have two condition variables:
condition x, y; // declaring the variables

Wait operation
x.wait(): a process performing a wait operation on a condition variable is suspended. Suspended processes are placed in the blocked queue of that condition variable.
Note: each condition variable has its own blocked queue.
Signal operation
x.signal(): when a process performs a signal operation on a condition variable, one of the blocked processes is given a chance to run:
if (x's blocked queue is empty)
    // ignore the signal
else
    // resume a process from the blocked queue
Advantages of Monitors:
Monitors have the advantage of making parallel programming easier and less error-prone than techniques such as semaphores.
Disadvantages of Monitors:
Monitors have to be implemented as part of the programming language: the compiler must generate code for them. This gives the compiler the additional burden of having to know what operating system facilities are available to control access to critical sections in concurrent processes. Some languages that do support monitors are Java, C#, Visual Basic, Ada, and Concurrent Euclid.
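C++ has no built-in monitor construct, but the same discipline can be sketched with std::mutex (mutual exclusion over the monitor's procedures) and std::condition_variable (playing the role of x.wait()/x.signal()). The bounded-counter monitor below is purely illustrative.

```cpp
#include <mutex>
#include <condition_variable>

// Illustrative monitor: a counter bounded by a limit. The mutex ensures
// only one caller is inside a procedure at a time; the condition variable
// holds its own queue of suspended callers, like x's blocked queue.
class CounterMonitor {
    std::mutex m;
    std::condition_variable x;
    int value = 0;
    const int limit = 10;
public:
    void increment() {                        // a monitor procedure
        std::unique_lock<std::mutex> lock(m);
        while (value >= limit) x.wait(lock);  // x.wait(): suspend while full
        ++value;
    }
    int take_all() {                          // another monitor procedure
        std::unique_lock<std::mutex> lock(m);
        int taken = value;
        value = 0;
        x.notify_one();                       // x.signal(): wake one waiter
        return taken;
    }
};
```

The while loop around wait() mirrors the monitor semantics: a woken process must re-check its condition, since the state may have changed before it re-enters the monitor (and condition_variable wakeups can be spurious).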
Q-9 Show that, if the wait() and signal() semaphore operations are not executed atomically, then mutual exclusion may be violated.
Ans. A wait operation atomically decrements the value associated with a semaphore. If two wait operations are executed on a semaphore whose value is 1, and the two operations are not performed atomically, then it is possible for both operations to proceed to decrement the semaphore value, thereby violating mutual exclusion.
Suppose the value of semaphore S = 1 and processes P1 and P2 execute wait(S) concurrently according to the following timeline:
a. T0: P1 determines that the value of S = 1
b. T1: P2 determines that the value of S = 1
c. T2: P1 decrements S by 1 and enters the critical section
d. T3: P2 decrements S by 1 and enters the critical section
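The timeline can be replayed deterministically in code (a sketch: a real race is nondeterministic, so here the T0-T3 schedule is forced by ordinary sequential calls standing in for the two processes).

```cpp
// Non-atomic wait(S), split into its test half and its decrement half,
// interleaved exactly as at T0-T3 in the timeline above.
int S = 1;        // semaphore value
int entered = 0;  // how many "processes" got into the critical section

bool test_S()      { return S > 0; }
void decrement_S() { --S; ++entered; }  // "enter the critical section"

int replay_bad_interleaving() {
    S = 1; entered = 0;
    bool p1_ok = test_S();     // T0: P1 sees S = 1
    bool p2_ok = test_S();     // T1: P2 also sees S = 1
    if (p1_ok) decrement_S();  // T2: P1 enters, S becomes 0
    if (p2_ok) decrement_S();  // T3: P2 also enters, S becomes -1
    return entered;            // 2: mutual exclusion violated
}
```

With an atomic wait, the test and decrement would be a single indivisible step, so P2's test at T1 would already see S = 0 and P2 would block instead of entering.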
Q-10 Suppose that a disk drive has 5,000 cylinders, numbered 0 to 4999. The drive is currently serving a request at cylinder 143, and the previous request was at cylinder 125. The queue of pending requests, in FIFO order, is:
86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130
Starting from the current head position, what is the total distance (in cylinders) that the disk arm moves to satisfy all the pending requests under the SCAN disk-scheduling algorithm?
Ans. Since the head moved from 125 to 143, it is travelling toward higher cylinder numbers. Under SCAN it services 913, 948, 1022, 1470, 1509, 1750, and 1774 on the way up, continues to the last cylinder (4999), then reverses direction and services 130 and 86. The total seek distance is (4999 - 143) + (4999 - 86) = 4856 + 4913 = 9769 cylinders.
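The arithmetic can be checked with a small sketch of SCAN's arm movement: sweep up to the last cylinder (when any request lies below the head), then back down to the lowest pending request.

```cpp
#include <vector>
#include <algorithm>

// Total seek distance for SCAN, with the head initially moving toward
// higher cylinder numbers (as implied by the 125 -> 143 movement).
int scan_distance(int head, const std::vector<int>& requests, int max_cyl) {
    std::vector<int> below;
    for (int r : requests)
        if (r < head) below.push_back(r);
    if (below.empty()) {
        // Every request is at or above the head: no reversal needed.
        return *std::max_element(requests.begin(), requests.end()) - head;
    }
    // Up to the last cylinder, then back down to the lowest request.
    int lowest = *std::min_element(below.begin(), below.end());
    return (max_cyl - head) + (max_cyl - lowest);
}
```

Note that C-SCAN or the LOOK variant (which reverses at the last request, 1774, instead of the last cylinder) would give a different total.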

You might also like