Misc Topics in Computer Networks
Question 1
Which one of the following is not a client server application?
A Internet chat
B Web browsing
C E-mail
D Ping
GATE CS 2010 Misc Topics in Computer Networks
Question 1 Explanation:
Ping is not a client-server application. Ping is a computer network administration utility used to test the reachability of a host on an Internet Protocol (IP) network. In ping, there is no server that provides a service.
Question 2
Match the following:
A P2Q1R3S5
B P1Q4R2S3
C P1Q4R2S5
D P2Q4R1S3
Question 2 Explanation:
See question 4 of http://www.geeksforgeeks.org/computer-networks-set-10/
Question 3
In the following pairs of OSI protocol layer/sub-layer and its functionality, the INCORRECT pair is
Question 3 Explanation:
1) Yes, the Network layer does routing.
2) Yes, the Transport layer provides end-to-end communication.
3) Yes, the MAC sub-layer handles channel sharing.
Question 4
Choose the best matching between Group 1 and Group 2.
Group-1                Group-2
P. Data link layer     1. Ensures reliable transport of data over a physical point-to-point link
Q. Network layer       2. Encodes/decodes data for physical transmission
R. Transport layer     3. Allows end-to-end communication between two processes
                       4. Routes data from one network node to the next
Question 4 Explanation:
Data link layer is the second layer of the OSI Model. This layer is responsible for data transfer between
nodes on the network and providing a point to point local delivery framework. So, P matches with 1.
Network layer is the third layer of the OSI Model. This layer is responsible for forwarding of data packets
and routing through intermediate routers. So, Q matches with 4. Transport layer is the fourth layer of the
OSI Model. This layer is responsible for delivering data from process to process. So, R matches with 3.
Thus, A is the correct option.
Question 5
Which of the following is NOT true with respect to a transparent bridge and a router?
Question 6
Host A sends a UDP datagram containing 8880 bytes of user data to host B over an Ethernet LAN.
Ethernet frames may carry data up to 1500 bytes (i.e. MTU = 1500 bytes). Size of UDP header is 8 bytes
and size of IP header is 20 bytes. There is no option field in the IP header. How many IP fragments will be transmitted, and what will be the content of the offset field in the last fragment?
A 6 and 925
B 6 and 7400
C 7 and 1110
D 7 and 8880
Question 6 Explanation:
UDP data = 8880 bytes
UDP header = 8 bytes
IP header = 20 bytes
Total IP payload to fragment = 8880 + 8 = 8888 bytes.
Each fragment can carry at most MTU - IP header = 1500 - 20 = 1480 bytes of payload.
Number of fragments = ceil(8888 / 1480) = 7.
The fragment offset field counts 8-byte units, so the offset in the last fragment = (6 x 1480) / 8 = 1110. Hence, option (C).
Question 7
Two hosts are connected via a packet switch with 10^7 bits per second links. Each link has a propagation delay of 20 microseconds. The switch begins forwarding a packet 35 microseconds after it receives the same. If 10000 bits of data are to be transmitted between the two hosts using a packet size of 5000 bits, the time elapsed between the transmission of the first bit of data and the reception of the last bit of the data in microseconds is _________.
A 1075
B 1575
C 2220
D 2200
Misc Topics in Computer Networks GATE-CS-2015 (Set 3)
Question 7 Explanation:
The sender host transmits the first packet to the switch; the transmission time is 5000/10^7 s = 500 microseconds. After 500 microseconds, the second packet is transmitted. Since every packet goes through two links (source to switch and switch to destination), the first packet reaches the destination in 500 + 35 + 20 + 20 + 500 = 1075 microseconds. While the first packet is traveling to the destination, the second packet starts its journey after 500 microseconds, and the rest of the time taken by the second packet overlaps with the first. So the overall time is 1075 + 500 = 1575 microseconds.
Question 8
Which one of the following statements is FALSE?
TCP guarantees a minimum communication rate
TCP ensures in-order delivery
Question 9
Which one of the following statements is FALSE?
Question 9 Explanation:
HTML describes the structure of a page, not HTTP. HTTP is the set of rules for transferring files (text, graphic images, sound, video, and other multimedia files) on the World Wide Web.
Question 10
A serial transmission T1 uses 8 information bits, 2 start bits, 1 stop bit and 1 parity bit for each character. A synchronous transmission T2 uses 3 eight-bit sync characters followed by 30 eight-bit information characters. If the bit rate is 1200 bits/second in both cases, what are the transfer rates of T1 and T2?
Question 10 Explanation:
Serial (asynchronous) transmission:
Total number of bits transmitted per character = 8 + 2 + 1 + 1 = 12 bits. Bit rate = 1200 bits/second.
Transfer rate = 1200 x (8/12) = 800 bits/sec = 100 characters/sec.
Synchronous transmission:
Total number of characters transmitted per block = 3 + 30 = 33, of which 30 carry information.
Transfer rate = 1200 x (30/33) = 1091 bits/sec, i.e. about 136 characters/sec.
Thus, option (C) is correct.
Question 11
In a sliding window ARQ scheme, the transmitter's window size is N and the receiver's window size is M.
The minimum number of distinct sequence numbers required to ensure correct operation of the ARQ
scheme is
A min (M, N)
B max (M, N)
C M + N
D MN
Question 11 Explanation:
In a general sliding window ARQ scheme, the sending process sends a number of frames without waiting for an ACK (acknowledgement) from the receiver. The sending window size in general is N and the receiver window size is 1, which means the sender can transmit N frames to its peer before requiring an ACK. The receiver keeps track of the sequence number of the next frame it expects to receive and sends that number with every ACK it sends. In this question, however, the sender window size is N and the receiver window size is M, so the receiver will accept up to M frames instead of 1 and acknowledge accordingly. Hence, for such a scheme to work properly, we need a total of M + N distinct sequence numbers. This solution is contributed by Namita Singh.
Question 12
Which one of the following protocols is NOT used to resolve one form of address to another one?
A DNS
B ARP
C DHCP
D RARP
Misc Topics in Computer Networks GATE-CS-2016 (Set 1)
Question 12 Explanation:
DHCP is used to assign IP addresses dynamically. All the others are used to resolve one form of address to another.
Question 13
Identify the correct sequence in which the following packets are transmitted on the network by a host
when a browser requests a webpage from a remote server, assuming that the host has just been
restarted.
Question 13 Explanation:
Step 1: When the client requests a webpage, say www.geeksforgeeks.org, the domain name must first be resolved to an IP address. Since the host has just been restarted, nothing is cached, so the client's computer makes a DNS query to one of its Internet service provider's DNS servers. Step 2: As soon as the server's IP address is known, a TCP connection is established for further communication. The TCP protocol requests a connection by sending a TCP SYN message, which the server answers with a SYN-ACK, and the client then sends an ACK back to the server (3-way handshake). Step 3: Once the connection has been established, the HTTP protocol comes into the picture. It requests the webpage using its GET method, sending an HTTP GET request. Hence, the correct sequence for the transmission of packets is DNS query, TCP SYN, HTTP GET request. This explanation has been contributed by Namita Singh.
Question 14
Consider the following statements about the timeout value used in TCP.
i. The timeout value is set to the RTT (Round Trip Time) measured during TCP connection establishment for the entire duration of the connection.
ii. An appropriate RTT estimation algorithm is used to set the timeout value of a TCP connection.
iii. The timeout value is set to twice the propagation delay from the sender to the receiver.
Which of the following choices hold?
Question 14 Explanation:
Timeout timer in TCP: One can't use a static timer as in the data link layer (DLL), which handles a hop-to-hop connection, since nobody knows how many hops lie on the path from sender to receiver; TCP uses the IP service and the path may vary from time to time. So, dynamic timers are used in TCP. The timeout timer should increase or decrease depending on traffic, to avoid unnecessary congestion due to retransmissions. There are three algorithms for this purpose: 1. the basic algorithm, 2. Jacobson's algorithm, 3. Karn's modification. Solution:
1. The timeout value is set to the RTT (Round Trip Time) measured during TCP connection establishment for the entire duration of the connection. - FALSE. The timeout value can't be fixed for the entire duration, as that would turn the timer into a static timer; we need a dynamic timer for the timeout.
2. An appropriate RTT estimation algorithm is used to set the timeout value of a TCP connection. - TRUE. All three algorithms above are appropriate RTT estimation algorithms used to set the timeout value dynamically.
3. The timeout value is set to twice the propagation delay from the sender to the receiver. - FALSE. The timeout value is set to twice the propagation delay in the data link layer, where the hop-to-hop distance is known, not in TCP.
This solution is contributed by Sandeep Pandey.
Question 15
A firewall is to be configured to allow hosts in a private network to freely open TCP connections and send
packets on open connections. However, it will only allow external hosts to send packets on existing open
TCP connections or connections that are being opened (by internal hosts) but not allow them to open TCP
connections to hosts in the private network. To achieve this the minimum capability of the firewall should
be that of
A A combinational circuit
B A finite automaton
C A pushdown automaton with one stack
D A pushdown automaton with two stacks
Question 15 Explanation:
A) A combinational circuit => Not possible, because we need memory in the firewall, and a combinational circuit has none.
B) A finite automaton => We need unbounded memory; there is no upper limit on the number of TCP connections, so not this.
C) A pushdown automaton with one stack => The stack is unbounded, but suppose we have 2 connections and have pushed the details of both onto the stack: we cannot access the details of the connection that was pushed first without popping the other off. So, a big NO.
D) A pushdown automaton with two stacks => This is equivalent to a Turing machine; it can do everything our normal computer can do. So yes, the firewall can be realized with the power of a Turing machine.
Question 16
How many bytes of data can be sent in 15 seconds over a serial link with baud rate of 9600 in
asynchronous mode with odd parity and two stop bits in the frame?
A 10,000 bytes
B 12,000 bytes
C 15,000 bytes
D 27,000 bytes
Question 16 Explanation:
1 sec --------> 9600 bits
15 sec -------> 9600 * 15 bits
Each frame carries 8 data bits plus 1 start bit, 1 parity bit, and 2 stop bits
=> 12 bits per frame, carrying one data byte
=> 9600 * 15 / 12 = 12000 bytes
Question 17
Provide the best matching between the entries in the two columns given in the table below:
A I-a, II-d, III-c, IV-b
Question 17 Explanation:
DNS - Allows caching of entries at local server.
Question 18
Which protocol will be used to automate the IP configuration mechanism which includes IP address,
subnet mask, default gateway, and DNS information?
A SMTP
B DHCP
C ARP
D TCP/IP
Misc Topics in Computer Networks GATE 2017 Mock
Question 18 Explanation:
DHCP (Dynamic Host Configuration Protocol) is used to provide IP information to the hosts on the
network along with the information regarding IP address, subnet mask, default gateway and DNS
information.
Question 19
In a Go-Back-3 flow control protocol, every 6th packet transmitted is lost. If we have to send 11 packets, how many transmissions will be needed?
A 10
B 17
C 12
D 9
Question 20
What will be the total minimum bandwidth of the channel required for 7 channels of 400 kHz bandwidth
multiplexed together with each guard band of 20 kHz?
A 2800 kHz
B 2600 kHz
C 3600 kHz
D 2920 kHz
Misc Topics in Computer Networks GATE 2017 Mock
Question 20 Explanation:
(for 6 guard bands: 20 * 6 = 120) + (for 7 channels: 400 * 7 = 2800)
= 120 + 2800 = 2920 kHz
Process Management
Question 1
Consider the following code fragment:
if (fork() == 0)
{ a = a + 5; printf("%d,%d\n", a, &a); }
else { a = a - 5; printf("%d, %d\n", a, &a); }
Let u, v be the values printed by the parent process, and x, y be the values printed by the child process.
Which one of the following is TRUE?
A u = x + 10 and v = y
B u = x + 10 and v != y
C u + 10 = x and v = y
D u + 10 = x and v != y
Process Management
Question 1 Explanation:
fork() returns 0 in the child process and the process ID of the child process in the parent process. In the child, a = a + 5 (value x); in the parent, a = a - 5 (value u). Therefore x = u + 10. The physical addresses of a in the parent and child must be different, but our program accesses virtual addresses (assuming we are running on an OS that uses virtual memory). The child process gets an exact copy of the parent process, and the virtual address of a doesn't change in the child process. Therefore, we get the same address in both parent and child.
Question 2
The atomic fetch-and-set x, y instruction unconditionally sets the memory location x to 1 and fetches the old value of x in y, without allowing any intervening access to the memory location x. Consider the following implementation of the P and V functions on a binary semaphore.
void P (binary_semaphore *s) {
    unsigned y;
    unsigned *x = &(s->value);
    do {
        fetch-and-set x, y;
    } while (y);
}

void V (binary_semaphore *s) {
    s->value = 0;
}
Question 2 Explanation:
Let us talk about the operation P(). It stores the address of s->value in x; fetch-and-set then fetches the old value of *x into y and sets *x to 1. The while loop of a process will continue forever if some other process never executes V() to set the value of s back to 0. If context switching is disabled in P, the while loop will run forever, as no other process will be able to execute V().
Question 3
Three concurrent processes X, Y, and Z execute three different code segments that access and update
certain shared variables. Process X executes the P operation (i.e., wait) on semaphores a, b and c;
process Y executes the P operation on semaphores b, c and d; process Z executes the P operation on
semaphores c, d, and a before entering the respective code segments. After completing the execution of
its code segment, each process invokes the V operation (i.e., signal) on its three semaphores. All
semaphores are binary semaphores initialized to one. Which one of the following represents a
deadlock-free order of invoking the P operations by the processes? (GATE CS 2013)
A X: P(a)P(b)P(c) Y:P(b)P(c)P(d) Z:P(c)P(d)P(a)
B X: P(b)P(a)P(c) Y:P(b)P(c)P(d) Z:P(a)P(c)P(d)
Question 3 Explanation:
Option A can cause deadlock. Imagine a situation where process X has acquired a, process Y has acquired b, and process Z has acquired c and d. There is a circular wait now. Option C can also cause deadlock. Imagine a situation where process X has acquired b, process Y has acquired c, and process Z has acquired a. There is a circular wait now. Option D can also cause deadlock. Imagine a situation where process X has acquired a and b, and process Y has acquired c. X and Y are circularly waiting for each other.
See http://www.eee.metu.edu.tr/~halici/courses/442/Ch5%20Deadlocks.pdf Consider option A) for
example here all 3 processes are concurrent so X will get semaphore a, Y will get b and Z will get c, now
X is blocked for b, Y is blocked for c, Z gets d and blocked for a. Thus it will lead to deadlock. Similarly
one can figure out that for B) a completion order is Z, X, then Y. This question is a duplicate of http://geeksquiz.com/gate-gate-cs-2013-question-16/
Question 4
A shared variable x, initialized to zero, is operated on by four concurrent processes W, X, Y, Z as follows.
Each of the processes W and X reads x from memory, increments by one, stores it to memory, and then
terminates. Each of the processes Y and Z reads x from memory, decrements by two, stores it to memory,
and then terminates. Each process before reading x invokes the P operation (i.e., wait) on a counting
semaphore S and invokes the V operation (i.e., signal) on the semaphore S after storing x to memory.
Semaphore S is initialized to two. What is the maximum possible value of x after all processes complete
execution? (GATE CS 2013)
A -2
B -1
C 1
D 2
Process Management
Question 4 Explanation:
Processes can run in many ways; one case in which x attains its maximum value is the following. Semaphore S is initialized to 2, so two processes can be inside the critical section at once: one incrementing process reads x but delays its store while both decrementing processes run to completion; its stale store then wipes out their updates, and the remaining incrementing process raises x to 2. So the correct option is D.
Question 5
A shared variable x, initialized to zero, is operated on by four concurrent processes W, X, Y, Z as follows.
Each of the processes W and X reads x from memory, increments by one, stores it to memory, and then
terminates. Each of the processes Y and Z reads x from memory, decrements by two, stores it to memory,
and then terminates. Each process before reading x invokes the P operation (i.e., wait) on a counting
semaphore S and invokes the V operation (i.e., signal) on the semaphore S after storing x to memory.
Semaphore S is initialized to two. What is the maximum possible value of x after all processes complete
execution? (GATE CS 2013)
A -2
B -1
C 1
D 2
Process Management
Question 5 Explanation:
See http://geeksquiz.com/operating-systems-process-management-question-11/ for explanation.
Question 6
A certain computation generates two arrays a and b such that a[i]=f(i) for 0 ≤ i < n and b[i]=g(a[i]) for 0 ≤ i < n. Suppose this computation is decomposed into two concurrent processes X and Y such that X
computes the array a and Y computes the array b. The processes employ two binary semaphores R and
S, both initialized to zero. The array a is shared by the two processes. The structures of the processes are
shown below.
Process X: Process Y:
private i; private i;
for (i=0; i < n; i++) { for (i=0; i < n; i++) {
a[i] = f(i); EntryY(R, S);
ExitX(R, S); b[i]=g(a[i]);
} }
Which one of the following represents the CORRECT implementations of ExitX and EntryY?
(A)
ExitX(R, S) {
  P(R);
  V(S);
}
EntryY(R, S) {
  P(S);
  V(R);
}
(B)
ExitX(R, S) {
  V(R);
  V(S);
}
EntryY(R, S) {
  P(R);
  P(S);
}
(C)
ExitX(R, S) {
  P(S);
  V(R);
}
EntryY(R, S) {
  V(S);
  P(R);
}
(D)
ExitX(R, S) {
  V(R);
  P(S);
}
EntryY(R, S) {
  V(S);
  P(R);
}
A A
B B
C C
D D
Process Management
Question 6 Explanation:
The purpose here is that neither should deadlock occur, nor should the value of a binary semaphore exceed one. Option A leads to deadlock, while options B and D can make a semaphore value 2 in some cases. Hence, option C is correct.
Question 7
Three concurrent processes X, Y, and Z execute three different code segments that access and update
certain shared variables. Process X executes the P operation (i.e., wait) on semaphores a, b and c;
process Y executes the P operation on semaphores b, c and d; process Z executes the P operation on
semaphores c, d, and a before entering the respective code segments. After completing the execution of
its code segment, each process invokes the V operation (i.e., signal) on its three semaphores. All
semaphores are binary semaphores initialized to one. Which one of the following represents a deadlock-free order of invoking the P operations by the processes?
Question 7 Explanation:
Option A can cause deadlock. Imagine a situation where process X has acquired a, process Y has acquired b, and process Z has acquired c and d. There is a circular wait now. Option C can also cause deadlock. Imagine a situation where process X has acquired b, process Y has acquired c, and process Z has acquired a. There is a circular wait now. Option D can also cause deadlock. Imagine a situation where process X has acquired a and b, and process Y has acquired c. X and Y are circularly waiting for each other.
See http://www.eee.metu.edu.tr/~halici/courses/442/Ch5%20Deadlocks.pdf Consider option A) for
example here all 3 processes are concurrent so X will get semaphore a, Y will get b and Z will get c, now
X is blocked for b, Y is blocked for c, Z gets d and blocked for a. Thus it will lead to deadlock. Similarly
one can figure out that for B) a completion order is Z, X, then Y. This question is a duplicate of http://geeksquiz.com/operating-systems-process-management-question-8/
Question 8
A shared variable x, initialized to zero, is operated on by four concurrent processes W, X, Y, Z as follows.
Each of the processes W and X reads x from memory, increments by one, stores it to memory, and then
terminates. Each of the processes Y and Z reads x from memory, decrements by two, stores it to memory,
and then terminates. Each process before reading x invokes the P operation (i.e., wait) on a counting
semaphore S and invokes the V operation (i.e., signal) on the semaphore S after storing x to memory.
Semaphore S is initialized to two. What is the maximum possible value of x after all processes complete
execution?
A -2
B -1
1
2
Process Management GATE CS 2013
Discuss it
Question 8 Explanation:
Background explanation: A critical section is a code region in which a process may be changing common variables, updating a table, or writing a file. The important requirement is that if one process is executing in its critical section, no other process is allowed to execute in its critical section; each process must request permission to enter. A semaphore is a tool for synchronization used to solve the critical section problem: two atomic operations, wait (P) and signal (V), enforce the mutual exclusion. Decrementing the semaphore is called acquiring or locking it; incrementing is called releasing or unlocking.
Solution: Since the initial value of the semaphore is 2, two processes can enter the critical section at a time, and this is the flaw. Say X and Y enter together: X reads x and increments it by 1, while Y reads the same x and decrements it by 2. Y stores back first, and after this X stores back. So the final value of x is 1 and not -1, and the two signal operations make the semaphore value 2 again. Now W and Z can also execute like this, and the value of x can become 2, which is the maximum possible in any order of execution of the processes. (If the semaphore were initialized to 1, the processes would execute correctly and we would get the final value of x as -2.) Option (D) is the correct answer.
Another solution: Processes can run in many ways; below is one of the cases in which x attains its maximum value. Semaphore S is initialized to 2. Process W executes P(S), making S = 1, and computes x = 1, but it does not yet store back to the x variable. Then process Y executes P(S), making S = 0; it decrements x, so now x = -2, and signals the semaphore, S = 1. Now process Z executes: S = 0, x = -4, then it signals, S = 1. Now process W stores back its stale value, x = 1, and signals, S = 2. Then process X executes, making x = 2. So the correct option is D.
Another solution: S is a counting semaphore initialized to 2, i.e., two processes can be inside a critical section protected by S. W and X read the variable, increment it by 1, and write it back. Y and Z read the variable, decrement it by 2, and write it back. Whenever Y or Z runs on its own, the count decreases by 2. So, to get the maximum value, one of the incrementing processes should read the variable, the decrementing processes should run in parallel with it, and whatever they write back into memory is then overridden by the incrementing process. So, in effect, the decrements never take hold.
Question 9
A certain computation generates two arrays a and b such that a[i]=f(i) for 0 ≤ i < n and b[i]=g(a[i]) for 0 ≤ i < n. Suppose this computation is decomposed into two concurrent processes X and Y such that X
computes the array a and Y computes the array b. The processes employ two binary semaphores R and
S, both initialized to zero. The array a is shared by the two processes. The structures of the processes are
shown below.
Process X: Process Y:
private i; private i;
for (i=0; i < n; i++) { for (i=0; i < n; i++) {
a[i] = f(i); EntryY(R, S);
ExitX(R, S); b[i]=g(a[i]);
} }
Which one of the following represents the CORRECT implementations of ExitX and EntryY?
(A)
ExitX(R, S) {
  P(R);
  V(S);
}
EntryY(R, S) {
  P(S);
  V(R);
}
(B)
ExitX(R, S) {
  V(R);
  V(S);
}
EntryY(R, S) {
  P(R);
  P(S);
}
(C)
ExitX(R, S) {
  P(S);
  V(R);
}
EntryY(R, S) {
  V(S);
  P(R);
}
(D)
ExitX(R, S) {
  V(R);
  P(S);
}
EntryY(R, S) {
  V(S);
  P(R);
}
A A
B B
C C
D D
Question 9 Explanation:
The purpose here is that neither should deadlock occur, nor should the value of a binary semaphore exceed one. Option A leads to deadlock, while options B and D can make a semaphore value 2 in some cases. Hence, option C is correct.
See http://geeksquiz.com/operating-systems-process-management-question-13/
Question 10
A process executes the code
fork();
fork();
fork();
The total number of child processes created is
A 3
B 4
C 7
D 8
Question 10 Explanation:
Let us put label names for the three lines:
fork (); // Line 1
fork (); // Line 2
fork (); // Line 3
Line 1 creates one child, so 2 processes run Line 2; Line 2 doubles that to 4 processes, all of which run Line 3, doubling again to 8 processes in total, i.e. 7 child processes. We can also use the direct formula: with n fork statements, there are always 2^n - 1 child processes.
Question 11
Fetch_And_Add(X,i) is an atomic Read-Modify-Write instruction that reads the value of memory
location X, increments it by the value i, and returns the old value of X. It is used in the pseudocode
shown below to implement a busy-wait lock. L is an unsigned integer shared variable initialized to
0. The value of 0 corresponds to lock being available, while any non-zero value corresponds to the
lock being not available.
AcquireLock(L){
  while (Fetch_And_Add(L,1))
    L = 1;
}
ReleaseLock(L){
  L = 0;
}
This implementation
A fails as L can overflow
B fails as L can take on a non-zero value when the lock is actually available
C works correctly but may starve some processes
D works correctly without starvation
Question 11 Explanation:
Take a closer look at the while loop below.
while (Fetch_And_Add(L,1))
Consider a situation where a process has just released the lock, making L = 0. Let there be one more process waiting for the lock, i.e., executing the AcquireLock() function. Just after L was made 0, let the waiting process execute the line L = 1. Now the lock is available, yet L = 1. Since L is 1, the waiting process (and any future process) cannot come out of the while loop. The above problem can be resolved by changing AcquireLock() to the following.
AcquireLock(L){
  while (Fetch_And_Add(L,1))
  { // Do Nothing }
}
Source : http://www.geeksforgeeks.org/operating-systems-set-17/
Question 12
The time taken to switch between user and kernel modes of execution be t1 while the time taken to switch
between two processes be t2. Which of the following is TRUE?
A t1 > t2
B t1 = t2
C t1 < t2
D nothing can be said about the relation between t1 and t2
Process Management GATE CS 2011
Question 12 Explanation:
Process switches or context switches can occur only in kernel mode. So, for a process switch, first we have to move from user to kernel mode, then save the PCB of the process from which we are taking the CPU away, and then load the PCB of the required process. At the end, switching from kernel back to user mode is done. Switching between user and kernel modes by itself is a very fast operation (the OS just has to change a single bit at the hardware level). Thus t1 < t2. This explanation has been contributed by Abhishek Kumar.
Question 13
A thread is usually defined as a "light weight process" because an operating system (OS) maintains
smaller data structures for a thread than for a process. In relation to this, which of the following is TRUE?
Question 13 Explanation:
Threads share the address space of their process. Virtual memory is concerned with processes, not with threads. A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, a set of registers, and a thread ID. For a single thread of control, there is one program counter and one sequence of instructions that can be carried out at any given time; in a multi-threaded application, there are multiple threads within a single process, each having its own program counter, stack, and set of registers, but sharing common code, data, and certain structures such as open files.
Question 14
Consider the methods used by processes P1 and P2 for accessing their critical sections whenever
needed, as given below. The initial values of shared boolean variables S1 and S2 are randomly assigned.
Method Used by P1
while (S1 == S2);
Critical Section
S1 = S2;
Method Used by P2
while (S1 != S2);
Critical Section
S2 = not (S1);
Question 14 Explanation:
Mutual Exclusion: a way of making sure that if one process is using shared modifiable data, the other processes are excluded from doing the same thing. While one process operates on the shared variable, all other processes desiring to do so at the same moment are kept waiting; when that process has finished, one of the waiting processes is allowed to proceed. In this fashion, each process accessing the shared data excludes all others from doing so simultaneously.
Progress requirement: if no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter next cannot be postponed indefinitely.
Solution: It can easily be observed that the mutual exclusion requirement is satisfied by the above solution: P1 can enter the critical section only if S1 is not equal to S2, and P2 can enter the critical section only if S1 is equal to S2. But the progress requirement is not satisfied. Suppose S1 = 1 and S2 = 0, process P1 is not interested in entering its critical section, but P2 wants to enter. P2 cannot enter, because only when P1 finishes execution (making S1 = S2) can P2 enter. Progress is violated whenever a process that is not interested in the critical section can keep an interested process from entering.
Reference: http://www.personal.kent.edu/~rmuhamma/OpSystems/Myos/mutualExclu.htm
See http://www.geeksforgeeks.org/operating-systems-set-7/ This solution is contributed by Nitika Bansal
Question 15
The following program consists of 3 concurrent processes and 3 binary semaphores.The semaphores are
initialized as S0 = 1, S1 = 0, S2 = 0.
How many times will process P0 print '0'?
A At least twice
B Exactly twice
C Exactly thrice
D Exactly once
Question 15 Explanation:
Initially only P0 can go inside the while loop, as S0 = 1, S1 = 0, S2 = 0. P0 first prints '0'; then, after it releases S1 and S2, either P1 or P2 will execute and release S0. So '0' is printed again.
Question 16
The enter_CS() and leave_CS() functions to implement critical section of a process are realized using
test-and-set instruction as follows:
void enter_CS(X)
{
    while (test-and-set(X));
}
void leave_CS(X)
{
    X = 0;
}
In the above solution, X is a memory location associated with the CS and is initialized to 0. Now consider
the following statements: I. The above solution to CS problem is deadlock-free II. The solution is
starvation free. III. The processes enter CS in FIFO order. IV More than one process can enter CS at the
same time. Which of the above statements is TRUE?
A I only
B I and II
C II and III
D IV only
Process Management GATE-CS-2009
Question 16 Explanation:
The above solution is a simple test-and-set solution that makes sure that deadlock doesn't occur, but it doesn't use any queue to avoid starvation or to ensure FIFO order.
Question 17
The P and V operations on counting semaphores, where s is a counting semaphore, are defined as
follows:
P(s) : s = s - 1;
V(s) : s = s + 1;
Assume that Pb and Vb the wait and signal operations on binary semaphores are provided. Two binary
semaphores Xb and Yb are used to implement the semaphore operations P(s) and V(s) as follows:
P(s) : Pb(Xb);
s = s - 1;
if (s < 0) {
Vb(Xb) ;
Pb(Yb) ;
else Vb(Xb);
V(s) : Pb(Xb) ;
s = s + 1;
if (s <= 0) Vb(Yb) ;
Vb(Xb) ;
The initial values of Xb and Yb are respectively
A 0 and 0
B 0 and 1
C 1 and 0
D 1 and 1
Process Management GATE CS 2008
Question 17 Explanation:
Suppose Xb = 0. Then, because of the Pb(Xb) operation at the start of P(s), every process would be blocked in the waiting section and nothing could proceed. So Xb must initially be 1.
Suppose s = 2 (i.e., at most 2 processes may access the shared resource). Taking Xb as 1, the first P(s) makes Xb zero, s becomes 1, and then the Vb(Xb) operation raises Xb back to one. The same sequence repeats, making Xb one and s zero.
Now suppose one more process arrives. Xb will be 1, but s becomes -1, so the process enters the (s < 0) branch and calls Vb(Xb) and Pb(Yb). Vb(Xb) makes Xb one again, and Pb(Yb) decrements Yb.
Case 1: if Yb is 0, the process blocks on Pb(Yb). Two processes access the shared resource, and s = -1 says exactly one process is waiting, which is exactly the process that just blocked. Consistent.
Case 2: if Yb is 1, it becomes 0 and the process proceeds. Then three processes access the shared resource while s = -1 says one process should be waiting, but no process is waiting. Inconsistent.
Hence the initial values are Xb = 1 and Yb = 0, i.e., option (C).
See Question 2 of http://www.geeksforgeeks.org/operating-systems-set-10/ This solution is contributed by Nitika Bansal
Question 18
WRONG
A process executes the following code
for (i = 0; i < n; i++) fork();
A n
2^n - 1
2^n
D 2^(n+1) - 1;
Process Management GATE CS 2008
Discuss it
Question 18 Explanation:
Each fork() doubles the number of processes: the first fork creates 1 child, the next level of forks creates
2 more, then 4, and so on. If we sum over all levels for i = 0 to n-1, we get
2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1. So there will be 2^n - 1 child processes.
Also see this post for more details.
Question 19
WRONG
Consider the following statements about user level threads and kernel level threads. Which one of the
following statement is FALSE?
Context switch time is longer for kernel level threads than for user level threads.
C Related kernel level threads can be scheduled on different processors in a multi-processor system.
Question 19 Explanation:
Kernel level threads are managed by the OS, therefore, thread operations are implemented in the kernel
code. Kernel level threads can also utilize multiprocessor systems by splitting threads on different
processors. If one thread blocks it does not cause the entire process to block. Kernel level threads have
disadvantages as well. They are slower than user level threads due to the management overhead. Kernel
level context switch involves more steps than just saving some registers. Finally, they are not portable
because the implementation is operating system dependent. option (A): Context switch time is longer for
kernel level threads than for user level threads. True, As User level threads are managed by user and
Kernel level threads are managed by OS. There are many overheads involved in Kernel level thread
management, which are not present in User level thread management. So context switch time is longer
for kernel level threads than for user level threads. Option (B): User level threads do not need any
hardware support True, as User level threads are managed by user and implemented by Libraries, User
level threads do not need any hardware support. Option (C): Related kernel level threads can be
scheduled on different processors in a multi-processor system. This is true. Option (D): Blocking one
kernel level thread blocks all related threads. false, since kernel level threads are managed by operating
system, if one thread blocks, it does not cause all threads or entire process to block. See Question 4
of http://www.geeksforgeeks.org/operating-systems-set-13/ Reference
:http://www.personal.kent.edu/~rmuhamma/OpSystems/Myos/threads.htm http://quiz.geeksforgeeks.org/o
perating-system-user-level-thread-vs-kernel-level-thread/ This solution is contributed by Nitika Bansal
Question 20
WRONG
Two processes, P1 and P2, need to access a critical section of code. Consider the following
synchronization construct used by the processes. Here, wants1 and wants2 are shared variables, which
are initialized to false. Which one of the following statements is TRUE about the above construct?
/* P1 */
while (true) {
    wants1 = true;
    while (wants2 == true);
    /* Critical Section */
    wants1 = false;
    /* Remainder section */
}

/* P2 */
while (true) {
    wants2 = true;
    while (wants1 == true);
    /* Critical Section */
    wants2 = false;
    /* Remainder section */
}
Question 20 Explanation:
Bounded waiting :There exists a bound, or limit, on the number of times other processes are allowed to
enter their critical sections after a process has made request to enter its critical section and before that
request is granted. mutual exclusion prevents simultaneous access to a shared resource. This concept is
used in concurrent programming with a critical section, a piece of code in which processes or threads
access a shared resource. Solution: Two processes, P1 and P2, need to access a critical section of
code. Here, wants1 and wants2 are shared variables initialized to false. If both wants1 and wants2
become true, both P1 and P2 enter their while loops and wait for each other to finish. These loops run
indefinitely, which leads to deadlock. Now assume P1 is in the critical section (so wants1 = true, and
wants2 can be true or false). This ensures that P2 won't enter the critical section, and vice versa, so
mutual exclusion is satisfied. Bounded waiting is also satisfied, as there is a bound on the number of
times other processes can enter the critical section after a process has requested access.
See question 3 of http://www.geeksforgeeks.org/operating-systems-set-13/ This solution is contributed
by Nitika Bansal
Question 21
WRONG
Which one of the following is FALSE?
B When a user level thread is blocked, all other threads of its process are blocked.
Context switching between user level threads is faster than context switching between kernel
level threads.
Kernel level threads cannot share the code segment
Process Management GATE-CS-2014-(Set-1)
Discuss it
Question 21 Explanation:
operation then entire process will be blocked. then another thread can continue execution.
Example : Java thread, POSIX threads. Example : Window Solaris.
Source: http://geeksquiz.com/operating-system-user-level-thread-vs-kernel-level-thread/
Question 22
WRONG
Consider two processors P1 and P2 executing the same instruction set. Assume that under identical
conditions, for the same input, a program running on P2 takes 25% less time but incurs 20% more CPI
(clock cycles per instruction) as compared to the program running on P1. If the clock frequency of P1 is
1GHz, then the clock frequency of P2 (in GHz) is _________.
1.6
B 3.2
1.2
D 0.8
Question 22 Explanation:
For P1 the clock period is 1 ns. Assume the program costs 10 cycles on P1 (the exact number cancels
out), so it runs in 10 ns. On P2 it takes 25% less time but 20% more cycles:
7.5 ns = 12 * t, so t = 0.625 ns, and the clock frequency of P2 is 1/0.625 ns = 1.6 GHz.
Question 23
Consider the procedure below for the Producer-Consumer problem which uses semaphores:
Which one of the following is TRUE?
A The producer will be able to add an item to the buffer, but the consumer can never consume it.
B The consumer will remove no more than one item from the buffer.
C Deadlock occurs if the consumer succeeds in acquiring semaphore s when the buffer is empty.
D The starting value for the semaphore n must be 1 and not 0 for deadlock-free operation.
Question 24
WRONG
The atomic fetch-and-set x, y instruction unconditionally sets the memory location x to 1 and fetches the
old value of x in y without allowing any intervening access to the memory location x. Consider the following
implementation of P and V functions on a binary semaphore S.

void P (binary_semaphore *s) {
    unsigned y;
    unsigned *x = &(s->value);
    do {
        fetch-and-set x, y;
    } while (y);
}

void V (binary_semaphore *s) {
    s->value = 0;
}
Question 24 Explanation:
See Question 3 of http://www.geeksforgeeks.org/operating-systems-set-15/
Question 25
WRONG
Barrier is a synchronization construct where a set of processes synchronizes globally i.e. each process in
the set arrives at the barrier and waits for all others to arrive and then all processes leave the barrier. Let
the number of processes in the set be three and S be a binary semaphore with the usual P and V
functions. Consider the following C implementation of a barrier with line numbers shown on left.
void barrier (void) {
1: P(S);
2: process_arrived++;
3: V(S);
4: while (process_arrived !=3);
5: P(S);
6: process_left++;
7: if (process_left==3) {
8: process_arrived = 0;
9: process_left = 0;
10: }
11: V(S);
}
The variables process_arrived and process_left are shared among all processes and are initialized to
zero. In a concurrent program all the three processes call the barrier function when they need to
synchronize globally. The above implementation of barrier is incorrect. Which one of the following is true?
The barrier implementation may lead to a deadlock if two barrier invocations are used in
immediate succession.
Lines 6 to 10 need not be inside a critical section
D The barrier implementation is correct if there are only two processes instead of three.
Question 25 Explanation:
If two barrier invocations are used in immediate succession, a fast process can re-enter the barrier and
increment process_arrived again before the others have left, so process_arrived can exceed 3. After the
reset, process_arrived will then never equal 3 again, all processes spin forever at line 4, and the result
is deadlock.
Question 26
WRONG
Barrier is a synchronization construct where a set of processes synchronizes globally i.e. each process in
the set arrives at the barrier and waits for all others to arrive and then all processes leave the barrier. Let
the number of processes in the set be three and S be a binary semaphore with the usual P and V
functions. Consider the following C implementation of a barrier with line numbers shown on left.
void barrier (void) {
1: P(S);
2: process_arrived++;
3: V(S);
4: while (process_arrived !=3);
5: P(S);
6: process_left++;
7: if (process_left==3) {
8: process_arrived = 0;
9: process_left = 0;
10: }
11: V(S);
}
The variables process_arrived and process_left are shared among all processes and are initialized to
zero. In a concurrent program all the three processes call the barrier function when they need to
synchronize globally. Which one of the following rectifies the problem in the implementation?
Lines 6 to 10 are simply replaced by process_arrived--
At the beginning of the barrier the first process to enter the barrier waits until process_arrived
becomes zero before proceeding to execute P(S).
C Context switch is disabled at the beginning of the barrier and re-enabled at the end.
Question 26 Explanation:
Step 2 should not be executed when a process enters the barrier a second time until the other two
processes have completed step 7. This prevents the variable process_arrived from becoming greater
than 3. Once both process_arrived and process_left have been reset to zero, the deadlock problem
is resolved.
Thus, at the beginning of the barrier the first process to enter the barrier waits until process_arrived
becomes zero before proceeding to execute P(S).
Question 27
WRONG
Consider two processes P1 and P2 accessing the shared variables X and Y protected by two binary
semaphores SX and SY respectively, both initialized to 1. P and V denote the usual semaphore
operators, where P decrements the semaphore value and V increments it. The
pseudo-code of P1 and P2 is as follows : P1 :
While true do {
    L1 : ................
    L2 : ................
    X = X + 1;
    Y = Y - 1;
    V(SX);
    V(SY);
}
P2 :
While true do {
    L3 : ................
    L4 : ................
    Y = Y + 1;
    X = Y - 1;
    V(SY);
    V(SX);
}
In order to avoid deadlock, the correct operators at L1, L2, L3 and L4 are respectively
P(SY), P(SX); P(SX), P(SY)
Question 27 Explanation:
Option A: In line L1 ( p(Sy) ) i.e. process p1 wants lock on Sy that is
held by process p2 and line L3 (p(Sx)) p2 wants lock on Sx which held by p1.
by process p2 and line L3 (p(Sy)) p2 wants lock on Sx which held by p1. So here
Option C: In line L1 ( p(Sx) ) i.e. process p1 wants lock on Sx and line L3 (p(Sy))
p2 wants lock on Sx . But Sx and Sy cant be released by its processes p1 and p2.
Please read the following to learn more about process synchronization and semaphores: Process
Synchronization Set 1 This explanation has been contributed by Dheerendra Singh.
Question 28
WRONG
Suppose we want to synchronize two concurrent processes P and Q using binary semaphores S and T.
The code for the processes P and Q is shown below.
Process P:
while (1) {
W:
print '0';
print '0';
X:
}
Process Q:
while (1) {
Y:
print '1';
print '1';
Z:
}
Synchronization statements can be inserted only at points W, X, Y and Z. Which of the following will
always lead to an output starting with '001100110011' ?
P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S and T initially 1
P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S initially 1, and T initially 0
Question 28 Explanation:
P(S) means wait on semaphore S and V(S) means signal on semaphore S, defined as
Wait(S) { while (S <= 0); S--; } and Signal(S) { S++; }.
Initially, we assume S = 1 and T = 0 to support mutual exclusion in
process P and Q. Since S = 1, only process P will be executed and wait(S) will decrement the value of S.
Therefore, S = 0. At the same instant, in process Q, value of T = 0. Therefore, in process Q, control will be
stuck in while loop till the time process P prints 00 and increments the value of T by calling the function
V(T). While the control is in process Q, semaphore S = 0 and process P would be stuck in while loop and
would not execute till the time process Q prints 11 and makes the value of S = 1 by calling the function
V(S). This whole process will repeat to give the output 00 11 00 11 .
Question 29
WRONG
Suppose we want to synchronize two concurrent processes P and Q using binary semaphores S and T.
The code for the processes P and Q is shown below.
Process P:
while (1) {
W:
print '0';
print '0';
X:
}
Process Q:
while (1) {
Y:
print '1';
print '1';
Z:
}
Synchronization statements can be inserted only at points W, X, Y and Z. Which of the following will
ensure that the output string never contains a substring of the form 01^n 0 or 10^n 1, where n is odd?
P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S and T initially 1
Question 29 Explanation:
P(S) means wait on semaphore S and V(S) means signal on semaphore S. The definition of these
functions are :
Wait(S) {
    while (S <= 0) ;
    S-- ;
}

Signal(S) {
    S++ ;
}
Question 30
WRONG
Which of the following does not interrupt a running process?
A A device
B Timer
Scheduler process
Power failure
Process Management GATE-CS-2001
Discuss it
Question 30 Explanation:
The scheduler process doesn't interrupt any process; its job is to select processes for the following three
purposes. The long-term scheduler (or job scheduler) selects which processes should be brought into the
ready queue. The short-term scheduler (or CPU scheduler) selects which process should be executed next
and allocates the CPU. The mid-term scheduler (swapper), present in all systems with virtual memory,
temporarily removes processes from main memory and places them on secondary memory (such as a
disk drive) or vice versa. The mid-term scheduler may decide to swap out a process which has not been
active for some time, or a process which has a low priority, or a process which is page faulting frequently,
or a process which is taking up a large amount of memory in order to free up main memory for other
processes, swapping the process back in later when more memory is available, or when the process has
been unblocked and is no longer waiting for a resource. Source: http://www.geeksforgeeks.org/operating-
systems-set-3/
Question 31
CORRECT
Which of the following need not necessarily be saved on a context switch between processes?
C Program counter
Question 31 Explanation:
See question 2 of http://www.geeksforgeeks.org/operating-systems-set-3/
Question 32
CORRECT
The following two functions P1 and P2 that share a variable B with an initial value of 2 execute
concurrently.
P1()
{
    C = B - 1;
    B = 2 * C;
}
P2()
{
    D = 2 * B;
    B = D - 1;
}
The number of distinct values that B can possibly take after the execution is
3
B 2
C 5
D 4
Question 32 Explanation:
There are the following ways that the concurrent statements can interleave (P1's two statements stay in
order, as do P2's). Initially B = 2.

C = B - 1; // C = 1
B = 2*C;   // B = 2
D = 2 * B; // D = 4
B = D - 1; // B = 3

C = B - 1; // C = 1
D = 2 * B; // D = 4
B = D - 1; // B = 3
B = 2*C;   // B = 2

C = B - 1; // C = 1
D = 2 * B; // D = 4
B = 2*C;   // B = 2
B = D - 1; // B = 3

D = 2 * B; // D = 4
C = B - 1; // C = 1
B = 2*C;   // B = 2
B = D - 1; // B = 3

D = 2 * B; // D = 4
B = D - 1; // B = 3
C = B - 1; // C = 2
B = 2*C;   // B = 4

So B can finally be 2, 3 or 4: three distinct values.
Question 33
WRONG
Two processes X and Y need to access a critical section. Consider the following synchronization construct
used by both the processes.
B The proposed solution guarantees mutual exclusion but fails to prevent deadlock
C The proposed solution guarantees mutual exclusion and prevents deadlock
The proposed solution fails to prevent deadlock and fails to guarantee mutual exclusion
Process Management GATE-CS-2015 (Set 3)
Discuss it
Question 33 Explanation:
When both processes try to enter the critical section simultaneously, both are allowed to do so, since
both shared variables varP and varQ are true. So clearly there is no mutual exclusion. Also, deadlock is
prevented, because mutual exclusion is one of the four necessary conditions for deadlock.
Hence, the answer is (A).
Question 34
WRONG
In a certain operating system, deadlock prevention is attempted using the following scheme. Each
process is assigned a unique timestamp, and is restarted with the same timestamp if killed. Let Ph be the
process holding a resource R, Pr be a process requesting for the same resource R, and T(Ph) and T(Pr)
be their timestamps respectively. The decision to wait or preempt one of the processes is based on the
following algorithm.
if T(Pr) < T(Ph)
then kill Pr
else wait
Question 34 Explanation:
1. This scheme ensures that a requesting process whose timestamp is lower than the holder's is
killed rather than made to wait.
2. A process is restarted with the same timestamp if killed, so its timestamp never increases.
From 1 and 2 it is clear that any process with a LESSER timestamp than the holder's will be KILLED, so
no circular wait, and hence NO DEADLOCK, is possible. However, a process with a lower timestamp may
wait infinitely: it is killed and restarted with the same low timestamp each time, so STARVATION is
definitely POSSIBLE. So the answer is (A).
Question 35
WRONG
A process executes the following segment of code :
for (i = 1; i <= n; i++)
    fork();
B ((n(n + 1))/2)
2^n - 1
D 3^n - 1
Question 35 Explanation:
Each fork() doubles the number of processes, so after n fork() calls there are 2^n processes in total.
We can also use the direct formula: with n fork statements, there are always 2^n - 1 child processes.
Also see this post for more details.
Question 36
WRONG
The semaphore variables full, empty and mutex are initialized to 0, n and 1, respectively. Process
P1 repeatedly adds one item at a time to a buffer of size n, and process P2 repeatedly removes one item at
a time from the same buffer using the programs given below. In the programs, K, L, M and N are
unspecified statements.
P1:
while (1) { K; P(mutex); Add an item to the buffer; V(mutex); L; }

P2:
while (1) { M; P(mutex); Remove an item from the buffer; V(mutex); N; }

The statements K, L, M and N are respectively
P(full), V(empty), P(full), V(empty)
Question 36 Explanation:
P1 must wait for an empty slot before adding an item and signal a filled slot afterwards, so
K = P(empty) and L = V(full). P2 must wait for a filled slot before removing an item and signal an
empty slot afterwards, so M = P(full) and N = V(empty). The statements are therefore
P(empty), V(full), P(full), V(empty).
Question 37
WRONG
Consider the following two-process synchronization solution.
The shared variable turn is initialized to zero. Which one of the following is TRUE?
Question 37 Explanation:
It satisfies mutual exclusion:
Processes P0 and P1 could not have successfully executed their while
statements at the same time, as the value of turn can be either 0 or 1
but not both at the same time. Say process P0 is executing its while
statement with the condition turn == 1; this condition persists as long as
process P1 is executing its critical section. When P1 comes out of its
critical section it changes the value of turn to 0 in its exit section, and
only then does P0 come out of its while loop and enter its critical section.
Therefore only one process can execute its critical section at a time.
It also satisfies bounded waiting:
There is a limit on the number of times another process is allowed to enter its
critical section after a process has made a request to enter its critical
section and before that request is granted. Say P0 wishes to enter its
critical section; it will definitely get a chance after at most one entry
by P1, since after executing its critical section P1 sets turn to 0 (zero),
and vice versa (strict alternation).
Progress is not satisfied:
Because of strict alternation, a process can be prevented from entering its
critical section even when no other process is inside its critical section.
This explanation has been contributed by Dheerendra Singh.
Question 38
WRONG
Consider a non-negative counting semaphore S. The operation P(S) decrements S, and V(S) increments
S. During an execution, 20 P(S) operations and 12 V(S) operations are issued in some order. The largest
initial value of S for which at least one P(S) operation will remain blocked is ________.
7
B 8
C 9
10
Process Management GATE-CS-2016 (Set 2)
Discuss it
Question 38 Explanation:
With initial value S = 7, after the 20 P(S) and 12 V(S) operations the final value is 7 - 20 + 12 = -1, so at
least one P(S) remains blocked regardless of the order. With S = 8 the final value is 0 and, in a suitable
order, every P(S) can eventually complete. So the largest initial value is 7. (Here we assume, as usual,
that a process already in the critical section is not blocked by others.)
Question 39
WRONG
Which of the following DMA transfer modes and interrupt handling mechanisms will enable the highest I/O
bandwidth?
Question 40
WRONG
In the working-set strategy, which of the following is done by the operating system to prevent thrashing?
I. It initiates another process if there are enough extra frames.
II. It selects a process to suspend if the sum of the sizes of the working-sets exceeds the total
number of available frames.
A I only
B II only
Neither I nor II
Both I and II
Process Management GATE IT 2006
Discuss it
Question 40 Explanation:
According to concept of thrashing,
I is true because to prevent thrashing we must provide processes with as many frames as
they really need "right now".If there are enough extra frames, another process can be
initiated.
II is true because The total demand, D, is the sum of the sizes of the working sets for all
processes. If D exceeds the total number of available frames, then at least one process is
thrashing, because there are not enough frames available to satisfy its minimum working
set. If D is significantly less than the currently available frames, then additional processes
can be launched.
Question 41
WRONG
Processes P1 and P2 use critical_flag in the following routine to achieve mutual exclusion. Assume that
critical_flag is initialized to FALSE in the main program.
get_exclusive_access()
{
    if (critical_flag == FALSE) {
        critical_flag = TRUE;
        critical_region();
        critical_flag = FALSE;
    }
}
Consider the following statements.
i. It is possible for both P1 and P2 to access critical_region concurrently.
ii. This may lead to a deadlock.
Which of the following holds?
Question 41 Explanation:
Say P1 starts first and executes the if check; before it sets the flag, the system context switches to P2,
which also passes the if check, since the flag is still FALSE. Now both processes are in the critical
section, so (i) is true. (ii) is false: it can never happen that the flag stays TRUE while no process is inside
the if clause, because whichever process enters the critical section will eventually set the flag back to
FALSE. So there is no deadlock.
Question 42
WRONG
The following is a code with two threads, producer and consumer, that can run in parallel. Further, S and
Q are binary semaphores equipped with the standard P and V operations.

semaphore S = 1, Q = 0;
integer x;

producer:                consumer:
while (true) do          while (true) do
    P(S);                    P(Q);
    x = produce();           consume(x);
    V(Q);                    V(S);
done                     done

Which of the following is TRUE about the program above?
Question 43
CORRECT
An operating system implements a policy that requires a process to release all resources before making a
request for another resource. Select the TRUE statement from the following:
Question 43 Explanation:
Starvation may occur, as a process may want another resource in parallel with the resources it currently
holds; under the given policy it may never manage to collect all of them at the same time. But no
deadlock is possible, since a process can never hold one resource while waiting for another.
Question 44
CORRECT
If the time-slice used in the round-robin scheduling policy is more than the maximum time required to
execute any process, then the policy will
Question 44 Explanation:
RR executes processes in FCFS manner with a time slice. If this time slice is long enough that every
process finishes within it, the policy becomes FCFS.
Question 45
WRONG
Consider the following C code for process P1 and P2. a=4, b=0, c=0 (initialization)
P1:                  P2:
1: if (a < 0)        i:  b = 10;
2:     c = b - a;    ii: a = -3;
3: else
4:     c = b + a;
If the processes P1 and P2 executes concurrently (shared variables a, b and c), which of the following
cannot be the value of c after both processes complete?
4
B 7
10
D 13
Question 45 Explanation:
P1 : 1, 3, 4 (P1 runs alone first) -> c = b + a = 0 + 4 = 4 {hence option A}
P2 : i, ii and then P1 : 1, 2 -> c = b - a = 10 - (-3) = 13 {hence option D}
P1 : 1, then P2 : i, ii, then P1 : 3, 4 -> c = b + a = 10 + (-3) = 7 {hence option B}
So 10 cannot be the value of c.
CPU Scheduling
Question 1
WRONG
Consider three processes (process id 0, 1, 2 respectively) with compute time bursts 2, 4 and 8 time units.
All processes arrive at time zero. Consider the longest remaining time first (LRTF) scheduling algorithm.
In LRTF ties are broken by giving priority to the process with the lowest process id. The average turn
around time is:
13 units
14 units
C 15 units
D 16 units
CPU Scheduling
Discuss it
Question 1 Explanation:
Let the processes be p0, p1 and p2. These processes will be executed in following order.
| p2 | p1 | p2 | p1 | p2 | p0 | p1 | p2 | p0 | p1 | p2 |
0    4    5    6    7    8    9   10   11   12   13   14
Turn around time of a process is total time between submission of the process and its completion. Turn
around time of p0 = 12 (12-0) Turn around time of p1 = 13 (13-0) Turn around time of p2 = 14 (14-0)
Average turn around time is (12+13+14)/3 = 13.
Question 2
WRONG
Consider three processes, all arriving at time zero, with total execution time of 10, 20 and 30 units,
respectively. Each process spends the first 20% of execution time doing I/O, the next 70% of time doing
computation, and the last 10% of time doing I/O again. The operating system uses a shortest remaining
compute time first scheduling algorithm and schedules a new process either when the running process
gets blocked on I/O or when the running process finishes its compute burst. Assume that all I/O
operations can be overlapped as much as possible. For what percentage of time does the CPU remain
idle?
0%
10.6%
C 30.0%
D 89.4%
CPU Scheduling
Discuss it
Question 2 Explanation:
Let three processes be p0, p1 and p2. Their execution time is 10, 20 and 30 respectively. p0 spends first 2
time units in I/O, 7 units of CPU time and finally 1 unit in I/O. p1 spends first 4 units in I/O, 14 units of CPU
time and finally 2 units in I/O. p2 spends first 6 units in I/O, 21 units of CPU time and finally 3 units in I/O.
| idle | p0 | p1 | p2 | idle |
0      2    9   23   44    47
Total time spent = 47 Idle time = 2 + 3 = 5 Percentage of idle time = (5/47)*100 = 10.6 %
Question 3
WRONG
Consider three CPU-intensive processes, which require 10, 20 and 30 time units and arrive at times 0, 2
and 6, respectively. How many context switches are needed if the operating system implements a shortest
remaining time first scheduling algorithm? Do not count the context switches at time zero and at the end.
A 1
2
3
D 4
CPU Scheduling
Discuss it
Question 3 Explanation:
Let three process be P0, P1 and P2 with arrival times 0, 2 and 6 respectively and CPU burst times 10, 20
and 30 respectively. At time 0, P0 is the only available process so it runs. At time 2, P1 arrives, but P0 has
the shortest remaining time, so it continues. At time 6, P2 arrives, but P0 has the shortest remaining time,
so it continues. At time 10, P1 is scheduled as it is the shortest remaining time process. At time 30, P2 is
scheduled. Only two context switches are needed. P0 to P1 and P1 to P2.
Question 4
CORRECT
Which of the following process scheduling algorithm may lead to starvation
A FIFO
B Round Robin
CPU Scheduling
Discuss it
Question 4 Explanation:
Shortest job next may lead to process starvation for processes which will require a long time to complete if
short processes are continually added.
Question 5
WRONG
If the quantum time of round robin algorithm is very large, then it is equivalent to:
First in first out
Lottery scheduling
CPU Scheduling
Discuss it
Question 5 Explanation:
If time quantum is very large, then scheduling happens according to FCFS.
Question 6
CORRECT
A scheduling algorithm assigns priority proportional to the waiting time of a process. Every process starts
with priority zero (the lowest priority). The scheduler re-evaluates the process priorities every T time units
and decides the next process to schedule. Which one of the following is TRUE if the processes have no
I/O operations and all arrive at time zero?
Question 6 Explanation:
The scheduling algorithm works as round robin with quantum time equals to T. After a process's turn
comes and it has executed for T units, its waiting time becomes least and its turn comes again after every
other process has got the token for T units.
Question 7
WRONG
Consider the 3 processes, P1, P2 and P3 shown in the table.
Process Arrival time Time Units Required
P1 0 5
P2 1 7
P3 3 4
The completion order of the 3 processes under the policies FCFS and RR2 (round robin scheduling with
CPU quantum of 2 time units) are
FCFS: P1, P2, P3
Question 7 Explanation:
FCFS is clear: processes complete in arrival order P1, P2, P3.
RR2 involves the concept of the ready queue. At t = 2, P1's quantum expires and P2 starts, with P1 sent
to the ready queue; at t = 3, P3 arrives and is queued behind P1. So at t = 4, P1 runs again, and P3
executes for the first time at t = 6.
Question 8
WRONG
Consider the following table of arrival time and burst time for three processes P0, P1 and P2.
Process Arrival time Burst Time
P0 0 ms 9 ms
P1 1 ms 4 ms
P2 2 ms 9 ms
The pre-emptive shortest job first scheduling algorithm is used. Scheduling is carried out only at arrival or
completion of processes. What is the average waiting time for the three processes?
5.0 ms
4.33 ms
C 6.33
D 7.33
Question 8 Explanation:
The schedule is: P0 runs 0-1; P1 arrives at 1 with burst 4 < 8 remaining, so P1 runs 1-5; P0 then runs
5-13 (its remaining 8 ms < P2's 9 ms); P2 runs 13-22. Waiting times: P0 = 13 - 9 - 0 = 4 ms, P1 = 0 ms,
P2 = 13 - 2 = 11 ms. Average = (4 + 0 + 11)/3 = 5.0 ms.
See Question 4 of http://www.geeksforgeeks.org/operating-systems-set-6/
Question 9
WRONG
Which of the following statements are true?
I. Shortest remaining time first scheduling may cause starvation
A I only
Question 9 Explanation:
I) Shortest remaining time first scheduling is a pre-emptive version of shortest job scheduling. In SRTF,
job with the shortest CPU burst will be scheduled first. Because of this process, It may cause starvation
as shorter processes may keep coming and a long CPU burst process never gets CPU. II) Pre-emptive
just means a process before completing its execution is stopped and other process can start execution.
The stopped process can later come back and continue from where it was stopped. In pre-emptive
priority scheduling, suppose process P1 is executing on the CPU and after some time a process P2 with
higher priority than P1 arrives in the ready queue; then P1 is pre-empted and P2 is brought onto the
CPU. If processes of higher priority than P1 keep arriving, P1 is always pre-empted, and it may suffer
from starvation. III) Round robin gives better response time than FCFS: in FCFS a running process
executes up to its complete burst time, while in round robin it executes only up to the time quantum
before the CPU moves on. So round robin scheduling improves response time, as all processes get the
CPU after a specified time. Thus I, II and III are true, which is option (D).
Reference:https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/5_CPU_Scheduling.html http://w
ww.geeksforgeeks.org/operating-systems-set-7/ This solution is contributed by Nitika Bansal
Question 10
CORRECT
In the following process state transition diagram for a uniprocessor system, assume that there are always
some processes in the ready state: Now consider the following statements:
A I and II
B I and III
C II and III
D II and IV
Question 10 Explanation:
I is false. If a process makes a transition D, it would result in another process making transition B, not A. II
is true. A process can move to ready state when I/O completes irrespective of other process being in
running state or not. III is true because there is a transition from running to ready state. IV is false as the
OS uses preemptive scheduling.
Question 11
CORRECT
Group 1 contains some CPU scheduling algorithms and Group 2 contains some applications. Match
entries in Group 1 to entries in Group 2.
Group I Group II
A P3Q2R1
B P1Q2R3
C P2Q3R1
D P1Q3R2
Question 11 Explanation:
See question 2 of http://www.geeksforgeeks.org/operating-systems-set-12/
Question 12
WRONG
An operating system uses Shortest Remaining Time first (SRT) process scheduling algorithm. Consider
the arrival times and execution times for the following processes:
Process Execution time Arrival time
P1 20 0
P2 25 15
P3 10 30
P4 15 45
C 40
D 55
GATE-CS-2007 CPU Scheduling
Discuss it
Question 12 Explanation:
Shortest remaining time, also known as shortest remaining time first (SRTF), is a scheduling method that
is a pre-emptive version of shortest job next scheduling. In this scheduling algorithm, the process with the
smallest amount of time remaining until completion is selected to execute. Since the currently executing
process is the one with the shortest amount of time remaining by definition, and since that time should
only reduce as execution progresses, processes will always run until they complete or a new process is
added that requires a smaller amount of time. The Gantt chart of execution of the processes:
At time 0, P1 is the only process, so P1 runs for 15 time units. At time 15, P2 arrives, but P1 has the shortest remaining time, so P1 continues for 5 more time units. At time 20, P2 is the only ready process, so it runs for 10 time units. At time 30, P3 arrives and has the shortest remaining time, so it runs for 10 time units. At time 40, P2 runs as it is the only ready process; it runs for 5 time units. At time 45, P4 arrives, but P2 has the shortest remaining time, so P2 continues for 10 more time units and completes its execution at time 55.
Turnaround time is the total time between submission of a process and its completion, and waiting time is the time a process spends in the ready queue, i.e. the difference between turnaround time and burst time. Turnaround time for P2 = completion time - arrival time = 55 - 15 = 40. Waiting time for P2 = turnaround time - burst time = 40 - 25 = 15. See question 3 of http://www.geeksforgeeks.org/operating-systems-set-12/ This solution is contributed by Nitika Bansal
Question 13
WRONG
Consider three CPU-intensive processes, which require 10, 20 and 30 time units and arrive at times 0, 2
and 6, respectively. How many context switches are needed if the operating system implements a shortest
remaining time first scheduling algorithm? Do not count the context switches at time zero and at the end.
A 1
B 2
C 3
D 4
Question 13 Explanation:
Shortest remaining time, also known as shortest remaining time first (SRTF), is a scheduling method that
is a pre-emptive version of shortest job next scheduling. In this scheduling algorithm, the process with the
smallest amount of time remaining until completion is selected to execute. Since the currently executing
process is the one with the shortest amount of time remaining by definition, and since that time should
only reduce as execution progresses, processes will always run until they complete or a new process is
added that requires a smaller amount of time. Solution: Let the three processes be P0, P1 and P2 with arrival
times 0, 2 and 6 respectively and CPU burst times 10, 20 and 30 respectively. At time 0, P0 is the only
available process so it runs. At time 2, P1 arrives, but P0 has the shortest remaining time, so it continues.
At time 6, P2 also arrives, but P0 still has the shortest remaining time, so it continues. At time 10, P1 is
scheduled as it is the shortest remaining time process. At time 30, P2 is scheduled. Only two context
switches are needed. P0 to P1 and P1 to P2. See question 1 of http://www.geeksforgeeks.org/operating-
systems-set-14/ This solution is contributed by Nitika Bansal
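The count of two can be reproduced with a small simulation that tracks which process holds the CPU in each time unit (a sketch; switches at time zero and at the very end are excluded by construction):

```python
# (name, arrival time, burst time) for the three processes in the question
procs = [("P0", 0, 10), ("P1", 2, 20), ("P2", 6, 30)]
arrival = {name: arr for name, arr, burst in procs}
remaining = {name: burst for name, arr, burst in procs}
t, current, switches = 0, None, 0
while remaining:
    ready = [p for p in remaining if arrival[p] <= t]
    p = min(ready, key=lambda q: remaining[q])   # shortest remaining time
    if current is not None and p != current:
        switches += 1        # the CPU switched from one process to another
    current = p
    remaining[p] -= 1
    t += 1
    if remaining[p] == 0:
        del remaining[p]
print(switches)  # 2 (P0 -> P1 at t=10 and P1 -> P2 at t=30)
```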
Question 14
WRONG
Three processes A, B and C each execute a loop of 100 iterations. In each iteration of the loop, a process
performs a single computation that requires tc CPU milliseconds and then initiates a single I/O operation
that lasts for tio milliseconds. It is assumed that the computer where the processes execute has sufficient
number of I/O devices and the OS of the computer assigns different I/O devices to each process. Also,
the scheduling overhead of the OS is negligible. The processes have the following characteristics:
Process id tc tio
A 100 ms 500 ms
B 350 ms 500 ms
C 200 ms 500 ms
The processes A, B, and C are started at times 0, 5 and 10 milliseconds respectively, in a pure time
sharing system (round robin scheduling) that uses a time slice of 50 milliseconds. The time in
milliseconds at which process C would complete its first I/O operation is ___________.
A 500
B 1000
C 2000
D 10000
Question 14 Explanation:
With a 50 ms time slice, the CPU is shared as follows:
A, B, C, A
50 + 50 + 50 + 50 (200 ms passed; A completes its 100 ms CPU burst and starts I/O)
B, C, B, C, B, C
50 + 50 + 50 + 50 + 50 + 50 (500 ms passed; C completes its 200 ms CPU burst)
C then performs its 500 ms I/O operation, which completes at time 1000 ms.
Question 15
WRONG
An operating system uses shortest remaining time first scheduling algorithm for pre-emptive scheduling of
processes. Consider the following set of processes with their arrival times and CPU burst times (in
milliseconds):
Process Arrival Time Burst Time
P1 0 12
P2 2 4
P3 3 6
P4 8 5
A 4.5
B 5.0
C 5.5
D 6.5
GATE-CS-2014-(Set-3) CPU Scheduling
Discuss it
Question 15 Explanation:
Process Arrival Time Burst Time
P1 0 12
P2 2 4
P3 3 6
P4 8 5
Burst Time - The total time needed by a process from the CPU for its complete execution. Waiting Time -
How much time processes spend in the ready queue waiting their turn to get on the CPU Now, The Gantt
chart for the above processes is :
P1 - 0 to 2 milliseconds
P2 - 2 to 6 milliseconds
P3 - 6 to 12 milliseconds
P4 - 12 to 17 milliseconds
P1 - 17 to 27 milliseconds
Process P1 arrived at time 0, so the CPU started executing it. After 2 units of time P2 arrived; the burst time of P2 was 4 units while the remaining time of P1 was 10 units, so the CPU switched to executing P2, putting P1 in the waiting state (pre-emptive shortest remaining time first scheduling). Because P1 had the largest remaining time, it was executed by the CPU at the end.
Now calculating the waiting time of each process:
P1 -> 17 -2 = 15
P2 -> 0
P3 -> 6 - 3 = 3
P4 -> 12 - 8 = 4
Total waiting time = 15 + 0 + 3 + 4 = 22
Total no. of processes = 4
Average waiting time = 22/4 = 5.5 ms
Question 16
WRONG
Consider the following set of processes, with the arrival times and the CPU-burst times given in
milliseconds
Process Arrival Time Burst Time
P1 0 5
P2 1 3
P3 2 3
P4 4 1
What is the average turnaround time for these processes with the preemptive shortest remaining
processing time first (SRPT) algorithm ?
A 5.50
B 5.75
C 6.00
D 6.25
GATE-CS-2004 CPU Scheduling
Discuss it
Question 16 Explanation:
The following is the Gantt chart of execution:
P1: 0-1, P2: 1-4, P4: 4-5, P3: 5-8, P1: 8-12
Turn Around Time = Completion Time - Arrival Time. Avg Turn Around Time = (12 + 3 + 6 + 1)/4 = 5.50
Question 17
A uni-processor computer system only has two processes, both of which alternate 10ms CPU bursts with
90ms I/O bursts. Both the processes were created at nearly the same time. The I/O of both processes can
proceed in parallel. Which of the following scheduling strategies will result in the least CPU utilization
(over a long period of time) for this system ?
C Static priority scheduling with different priorities for the two processes
Question 18
CORRECT
Which of the following scheduling algorithms is non-preemptive?
A Round Robin
B First-In First-Out
Question 18 Explanation:
Round Robin: pre-emption takes place when the time quantum expires. First In First Out: no pre-emption; a process, once started, completes before any other process takes over. Multilevel Queue Scheduling: pre-emption takes place when a process of higher priority arrives. Multilevel Queue Scheduling with Feedback: pre-emption takes place when a process of higher priority arrives, or when the quantum of the high-priority queue expires and the process must be moved to a lower-priority queue. So, B is the correct choice.
Question 19
WRONG
Consider a set of n tasks with known runtimes r1, r2, .... rn to be run on a uniprocessor machine. Which of
the following processor scheduling algorithms will result in the maximum throughput?
A Round-Robin
B Shortest-Job-First
C Highest-Response-Ratio-Next
D First-Come-First-Served
Question 19 Explanation:
Throughput means the total number of tasks completed per unit time. Shortest job first scheduling selects the waiting process with the smallest execution time to execute next, so shorter jobs complete as early as possible and the largest number of tasks is finished per unit time. Hence Shortest-Job-First gives the maximum throughput.
Question 20
WRONG
Consider a uniprocessor system executing three tasks T1, T2 and T3, each of which is composed of an
infinite sequence of jobs (or instances) which arrive periodically at intervals of 3, 7 and 20 milliseconds,
respectively. The priority of each task is the inverse of its period and the available tasks are scheduled in
order of priority, with the highest priority task scheduled first. Each instance of T1, T2 and T3 requires an
execution time of 1, 2 and 4 milliseconds, respectively. Given that all tasks initially arrive at the beginning
of the 1st milliseconds and task preemptions are allowed, the first instance of T3 completes its execution
at the end of ______________ milliseconds.
A 5
B 10
C 12
D 15
Question 20 Explanation:
Periods of T1, T2 and T3 are 3 ms, 7 ms and 20 ms, so T1 has the highest priority and T3 the lowest. The second instances of T1, T2 and T3 arrive at 3, 7 and 20 ms respectively, and the third instances of T1 and T2 at 6 and 14 ms respectively.
Time-Interval Tasks
0-1 T1
1-3 T2
3-4 T1
4-6 T3
6-7 T1 [Therefore T3 is preempted]
7-9 T2
9-10 T1
10-12 T3
So the first instance of T3 completes its execution at the end of 12 ms.
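The schedule can be checked with a unit-step simulation of rate-monotonic priorities (a sketch; periods and execution times are taken from the question):

```python
# Unit-step simulation of rate-monotonic scheduling: shorter period means
# higher priority. (period, execution time) per task, from the question.
tasks = {"T1": (3, 1), "T2": (7, 2), "T3": (20, 4)}
pending = {name: 0 for name in tasks}    # unfinished work per task, in ms
t3_done = None
for t in range(40):
    for name, (period, exe) in tasks.items():
        if t % period == 0:
            pending[name] += exe         # a new instance is released at t
    runnable = [n for n in tasks if pending[n] > 0]
    if runnable:
        n = min(runnable, key=lambda q: tasks[q][0])  # shortest period first
        pending[n] -= 1
        if n == "T3" and pending["T3"] == 0 and t3_done is None:
            t3_done = t + 1              # first instance of T3 finishes here
print(t3_done)  # 12
```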
Question 21
CORRECT
The maximum number of processes that can be in Ready state for a computer system with n CPUs is
A n
B n2
C 2n
D Independent of n
GATE-CS-2015 (Set 3) CPU Scheduling
Discuss it
Question 21 Explanation:
The size of ready queue doesn't depend on number of processes. A single processor system may have a
large number of processes waiting in ready queue.
Question 22
WRONG
For the processes listed in the following table, which of the following scheduling schemes will give the
lowest average turnaround time?
Process Arrival Time Processing Time
A 0 3
B 1 6
C 4 4
D 6 2
Question 22 Explanation:
Turnaround time is the total time taken between the submission of a process for execution and the return of the complete output to the user. Turnaround Time = Completion Time - Arrival Time. The execution orders are: FCFS = First Come First Served (A, B, C, D); SJF = non-preemptive Shortest Job First (A, B, D, C); SRT = Shortest Remaining Time (A(3), B(1), C(4), D(2), B(5)); RR = Round Robin with quantum value 2 (A(2), B(2), A(1), C(2), B(2), D(2), C(2), B(2)). The resulting average turnaround times are 7.25 (FCFS), 6.75 (SJF), 6.25 (SRT) and 8.25 (RR), so SRT gives the lowest average turnaround time.
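The averages can be verified from the four execution orders listed in the explanation (a sketch; each schedule is transcribed as (process, segment length) pairs):

```python
# Gantt segments (process, run length) for each policy, transcribed from
# the execution orders in the explanation above.
schedules = {
    "FCFS": [("A", 3), ("B", 6), ("C", 4), ("D", 2)],
    "SJF":  [("A", 3), ("B", 6), ("D", 2), ("C", 4)],
    "SRT":  [("A", 3), ("B", 1), ("C", 4), ("D", 2), ("B", 5)],
    "RR":   [("A", 2), ("B", 2), ("A", 1), ("C", 2), ("B", 2),
             ("D", 2), ("C", 2), ("B", 2)],
}
arrival = {"A": 0, "B": 1, "C": 4, "D": 6}
avg_tat = {}
for policy, segments in schedules.items():
    t, completion = 0, {}
    for proc, length in segments:
        t += length
        completion[proc] = t   # the last segment of a process fixes its completion
    avg_tat[policy] = sum(completion[p] - arrival[p] for p in arrival) / len(arrival)
print(avg_tat)
```

SRT comes out lowest at 6.25 ms.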
Question 23
WRONG
Which of the following is FALSE about SJF (Shortest Job First Scheduling)?
S1: It causes minimum average waiting time
A Only S1
B Only S2
C Both S1 and S2
D Neither S1 nor S2
GATE-CS-2015 (Mock Test) CPU Scheduling
Discuss it
Question 23 Explanation:
1. Both SJF and Shortest Remaining time first algorithms may cause starvation. Consider a
situation when long process is there in ready queue and shorter processes keep coming.
2. SJF is optimal in terms of average waiting time for a given set of processes, but the problem
with SJF is how to know/predict the execution time of the next job.
Refer Process Scheduling for more details.
Question 24
WRONG
Two concurrent processes P1 and P2 use four shared resources R1, R2, R3 and R4, as shown below.
P1: Compute; Use R1; Use R2; Use R3; Use R4;
P2: Compute; Use R1; Use R2; Use R3; Use R4;
Both processes are started at the same time, and each resource can be accessed by only one process at
a time. The following scheduling constraints exist between the accesses of the resources by the processes:
P2 must complete use of R1 before P1 gets access to R1
P1 must complete use of R2 before P2 gets access to R2.
P2 must complete use of R3 before P1 gets access to R3.
P1 must complete use of R4 before P2 gets access to R4.
There are no other scheduling constraints between the processes. If only binary semaphores are used to
enforce the above scheduling constraints, what is the minimum number of binary semaphores needed?
A 1
B 2
C 3
D 4
Question 24 Explanation:
P1:
Compute;
Wait(A);
Use R1;
Use R2;
Signal(B);
Wait(A);
Use R3;
Use R4;
Signal(B);
P2:
Compute;
Wait(B);
Use R1;
Signal(A);
Wait(B);
Use R2;
Use R3;
Signal(A);
Wait(B);
Use R4;
Signal(B);
Two binary semaphores A and B suffice, with A initialized to 0 and B initialized to 1. In process P1, control is initially blocked at Wait(A) because A = 0. In process P2, Wait(B) decrements the value of B to 0. P2 then uses the resource R1 and increments the value of A to 1, so that process P1 can enter its critical section and use resource R1.
Thus, P2 will complete use of R1 before P1 gets access to R1.
Now, in P2 values of B = 0. So, P2 can not use resource R2 till P1 uses R2 and calls function Signal(B) to
increment the value of B to 1. Thus, P1 will complete use of R2 before P2 gets access to R2.
Now, semaphore A = 0. So, P1 can not execute further and gets stuck in while loop of function Wait(A).
Process P2 uses R3 and increments the value of semaphore A to 1.Now, P1 can enter its critical section
to use R3. Thus, P2 will complete use of R3 before P1 gets access to R3.
Now, P1 will use R4 and increment the value of B to 1 so that P2 can enter its critical section to use R4.
Thus, P1 will complete use of R4 before P2 gets access to R4.
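The argument can be sketched with Python's threading.Semaphore; the functions below are illustrative stand-ins for P1 and P2, with A initialized to 0 and B to 1 as in the pseudo-code above:

```python
import threading

# Sketch of the two-semaphore solution: A starts at 0, B starts at 1.
A, B = threading.Semaphore(0), threading.Semaphore(1)
log = []  # records the order in which the resources are used

def p1():
    A.acquire(); log.append("P1:R1"); log.append("P1:R2"); B.release()
    A.acquire(); log.append("P1:R3"); log.append("P1:R4"); B.release()

def p2():
    B.acquire(); log.append("P2:R1"); A.release()
    B.acquire(); log.append("P2:R2"); log.append("P2:R3"); A.release()
    B.acquire(); log.append("P2:R4")

t1, t2 = threading.Thread(target=p1), threading.Thread(target=p2)
t1.start(); t2.start(); t1.join(); t2.join()
print(log)
```

Each acquire blocks until the other thread's matching release, so the log always comes out in the order required by the four constraints: two binary semaphores are sufficient.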
Question 25
WRONG
We wish to schedule three processes P1, P2 and P3 on a uniprocessor system. The priorities, CPU time
requirements and arrival times of the processes are as shown below.
Process Priority CPU time required Arrival time (hh:mm:ss)
P1 10(highest) 20 sec 00:00:05
P2 9 10 sec 00:00:03
P3 8 (lowest) 15 sec 00:00:00
We have a choice of preemptive or non-preemptive scheduling. In preemptive scheduling, a late-arriving
higher priority process can preempt a currently running process with lower priority. In non-preemptive
scheduling, a late-arriving higher priority process must wait for the currently executing process to
complete before it can be scheduled on the processor. What are the turnaround times (time from arrival till
completion) of P2 using preemptive and non-preemptive scheduling respectively.
A 30 sec, 30 sec
B 30 sec, 10 sec
C 42 sec, 42 sec
D 30 sec, 42 sec
CPU Scheduling Gate IT 2005
Discuss it
Question 25 Explanation:
For preemptive scheduling:
P3: 0-3, P2: 3-5, P1: 5-25, P2: 25-33, P3: 33-45
Turnaround time of P2 = completion time - arrival time = 33 - 3 = 30 sec.
For non-preemptive scheduling, P3 (the only process at time 0) runs to completion from 0 to 15, then the highest-priority process P1 runs from 15 to 35, and P2 runs from 35 to 45, so the turnaround time of P2 = 45 - 3 = 42 sec.
Question 26
WRONG
Consider an arbitrary set of CPU-bound processes with unequal CPU burst lengths submitted at the same
time to a computer system. Which one of the following process scheduling algorithms would minimize the
average waiting time in the ready queue?
A Shortest remaining time first
B Round-robin with time quantum less than the shortest CPU burst
C Uniform random
Question 26 Explanation:
Turnaround time is the total time taken by a process between starting and completion, and waiting time is the time for which a process is ready to run but has not yet been executed by the CPU scheduler. Among CPU scheduling algorithms, shortest job first is optimal: it gives minimum turnaround time, minimum average waiting time and high throughput. Shortest remaining time first is the pre-emptive version of shortest job first; in general it may lead to starvation, because if short processes keep being added to the scheduler, a long currently waiting process keeps getting pre-empted and may never finish. Here, however, all the processes arrive at the same time, so starvation is not an issue. So, the answer is shortest remaining time first, which is answer (A).
Reference: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/5_CPU_Scheduling.html and http://geeksquiz.com/gate-notes-operating-system-process-scheduling/ This solution is contributed by Nitika Bansal
Question 27
WRONG
Consider the following processes, with the arrival time and the length of the CPU burst given in
milliseconds. The scheduling algorithm used is preemptive shortest remaining-time first.
The average turn around time of these processes is ___________ milliseconds. Note : This question
was asked as Numerical Answer Type.
A 8.25
B 10.25
C 6.35
D 4.25
Question 27 Explanation:
Pre-emptive shortest remaining time first scheduling means that the process with the least remaining burst time (required CPU time) is scheduled on the CPU. The processes are scheduled and executed as shown in the Gantt chart. Turn Around Time (TAT) = Completion Time (CT) - Arrival Time (AT). TAT for P1 = 20 - 0 = 20; TAT for P2 = 10 - 3 = 7; TAT for P3 = 8 - 7 = 1; TAT for P4 = 13 - 8 = 5. Hence, average TAT = total TAT of all the processes / number of processes = (20 + 7 + 1 + 5)/4 = 33/4 = 8.25. Thus, A is the correct choice.
Question 28
CORRECT
Consider n jobs J1, J2,......Jn such that job Ji has execution time ti and a non-negative integer weight wi. The
weighted mean completion time of the jobs is defined to be (w1T1 + w2T2 + ... + wnTn)/(w1 + w2 + ... + wn), where Ti is the completion time of
job Ji. Assuming that there is only one processor available, in what order must the jobs be executed in
order to minimize the weighted mean completion time of the jobs?
A Non-decreasing order of ti
B Non-increasing order of wi
Question 29
CORRECT
Assume every process requires 3 seconds of service time in a system with single processor. If new
processes are arriving at the rate of 10 processes per minute, then estimate the fraction of time CPU is
busy in system?
A 20%
B 30%
C 50%
D 60%
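There is no worked explanation here, so as a quick check: the CPU receives 10 × 3 = 30 seconds of work per 60-second minute, i.e. it is busy half the time:

```python
# Offered load = (arrival rate) x (service time) = fraction of time the
# single CPU is kept busy.
work_per_minute = 10 * 3        # 10 processes/minute, 3 s of CPU each = 30 s
utilization = work_per_minute / 60
print(f"{utilization:.0%}")     # 50%
```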
Memory Management
Question 1
WRONG
Which of the following page replacement algorithms suffers from Belady's anomaly?
FIFO
LRU
Memory Management
Discuss it
Question 1 Explanation:
Belady's anomaly proves that it is possible to have more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm. See the example given on the Wiki page.
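The anomaly is easy to reproduce in a few lines; this sketch uses the classic 12-reference string from the Wikipedia example, where going from 3 to 4 frames increases the fault count:

```python
def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with a given frame count."""
    mem, queue, faults = set(), [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(queue.pop(0))   # evict the oldest page
            mem.add(p)
            queue.append(p)
    return faults

# Classic reference string demonstrating Belady's anomaly
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 faults vs 10 faults
```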
Question 2
WRONG
What is the swap space in the disk used for?
Saving temporary html pages
Saving process data
Memory Management
Discuss it
Question 2 Explanation:
Swap space is typically used to store process data. See this for more details.
Question 3
WRONG
Increasing the RAM of a computer typically improves performance because:
Virtual memory increases
Memory Management
Discuss it
Question 3 Explanation:
When there is more RAM, there would be more mapped virtual pages in physical memory, hence
fewer page faults. A page fault causes performance degradation as the page has to be loaded from a secondary storage device.
Question 4
WRONG
A computer system supports 32-bit virtual addresses as well as 32-bit physical addresses. Since the
virtual address space is of the same size as the physical address space, the operating system designers
decide to get rid of the virtual memory entirely. Which one of the following is true?
Efficient implementation of multi-user support is no longer possible
Memory Management
Discuss it
Question 4 Explanation:
For supporting virtual memory, special hardware support is needed from the Memory Management Unit (MMU). Since the operating system designers decided to get rid of virtual memory entirely, hardware support for memory management is no longer needed.
Question 5
WRONG
A CPU generates 32-bit virtual addresses. The page size is 4 KB. The processor has a translation look-
aside buffer (TLB) which can hold a total of 128 page table entries and is 4-way set associative. The
minimum size of the TLB tag is:
A 11 bits
B 13 bits
C 15 bits
D 20 bits
Memory Management
Discuss it
Question 5 Explanation:
Size of a page = 4 KB = 2^12 bytes. Total number of bits needed to address a page frame = 32 - 12 = 20. If there are n cache lines in a set, the cache placement is called n-way set associative. Since the TLB is 4-way set associative and can hold a total of 128 (2^7) page table entries, the number of sets = 2^7/4 = 2^5. So 5 bits are needed to address a set, and 15 (= 20 - 5) bits are needed for the tag.
Question 6
WRONG
Virtual memory is
Large secondary memory
Memory Management
Discuss it
Question 6 Explanation:
Virtual memory is illusion of large main memory.
Question 7
WRONG
Page fault occurs when
When a requested page is in memory
When a requested page is not in memory
Memory Management
Discuss it
Question 7 Explanation:
Page fault occurs when a requested page is mapped in virtual address space but not present in memory.
Question 8
WRONG
Thrashing occurs when
When a page fault occurs
Processes on system frequently access pages not in memory
Memory Management
Discuss it
Question 8 Explanation:
Thrashing occurs when the processes on the system require more memory than it has. If processes do not have enough pages, the page-fault rate is very high. This leads to low CPU utilization, with the operating system spending most of its time swapping pages to and from disk. This situation is called thrashing.
Question 9
WRONG
A computer uses 46-bit virtual address, 32-bit physical address, and a three-level paged page table organization. The page table base register stores the base address of the first-level table (T1), which occupies exactly one page. Each entry of T1 stores the base address of a page of the second-level table (T2). Each entry of T2 stores the base address of a page of the third-level table (T3). Each entry of T3 stores a page table entry (PTE). The PTE is 32 bits in size. The processor used in the computer has a 1 MB 16-way set associative virtually indexed physically tagged cache. The cache block size is 64 bytes. What is the size of a page in KB in this computer? (GATE 2013)
A 2
B 4
D 16
Memory Management
Discuss it
Question 9 Explanation:
Let the page size be 2^x bytes.
Size of T1 = 2^x bytes (T1 occupies exactly one page, and each of its entries is 4 bytes in size), so T1 holds 2^(x-2) entries. Likewise, each page of T2 and of T3 holds 2^(x-2) entries, so the three levels of page tables can together map
2^(x-2) * 2^(x-2) * 2^(x-2) = 2^(3x - 6) pages.
The number of virtual pages is 2^46 / 2^x = 2^(46 - x). Equating the two:
2^(3x - 6) = 2^(46 - x)
3x - 6 = 46 - x
4x = 52
x = 13
So the page size is 2^13 bytes = 8 KB.
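The equation can also be solved mechanically (a sketch following the derivation above):

```python
# Each table level holds 2^(x-2) four-byte entries when the page size is
# 2^x bytes, so three levels map 2^(3x-6) pages of 2^x bytes each,
# i.e. 2^(4x-6) bytes of virtual space, which must equal 2^46 bytes.
x = next(x for x in range(8, 64) if 4 * x - 6 == 46)
page_kb = 2 ** x // 1024
print(x, page_kb)  # 13-bit page offset -> 8 KB pages
```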
Question 10
CORRECT
Consider data given in the above question. What is the minimum number of page colours needed to
guarantee that no two synonyms map to different sets in the processor cache of this computer? (GATE
CS 2013)
A 2
B 4
D 16
Memory Management
Discuss it
Question 10 Explanation:
1 MB 16-way set associative virtually indexed physically tagged (VIPT) cache, with a cache block size of 64 bytes, so the number of sets = 2^20/(16 × 2^6) = 2^10.
VA (46 bits):
+-------------------------------+
tag(30) , Set(10) , block offset(6)
+-------------------------------+
The set index plus block offset takes 10 + 6 = 16 bits, while the page offset is 13 bits, so the number of bits where the cache set index and the physical page number overlap is 16 - 13 = 3. Hence 2^3 = 8 page colours are required (option C).
Question 11
WRONG
Consider the virtual page reference string 1, 2, 3, 2, 4, 1, 3, 2, 4, 1 On a demand paged virtual memory
system running on a computer system that main memory size of 3 pages frames which are initially empty.
Let LRU, FIFO and OPTIMAL denote the number of page faults under the corresponding page
replacements policy. Then
D OPTIMAL = FIFO
GATE CS 2012 Memory Management
Discuss it
Question 11 Explanation:
First In First Out (FIFO) This is the simplest page replacement algorithm. In this algorithm, operating
system keeps track of all pages in the memory in a queue; oldest page is in the front of the queue. When
a page needs to be replaced page in the front of the queue is selected for removal. Optimal Page
replacement: in this algorithm, pages are replaced which are not used for the longest duration of time in
the future. Least Recently Used (LRU) In this algorithm page will be replaced which is least recently
used. Solution: the virtual page reference string is 1, 2, 3, 2, 4, 1, 3, 2, 4, 1 and the size of main memory is 3 page frames. The total number of page faults is 5 for OPTIMAL, 6 for FIFO and 9 for LRU. So OPTIMAL < FIFO < LRU, and option (B) is the correct answer.
See http://www.geeksforgeeks.org/operating-systems-set-5/ This solution is contributed by Nitika Bansal
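All three fault counts can be reproduced with one small helper (a sketch; the `evict` parameter and its values are our own naming):

```python
def count_faults(refs, frames, evict):
    """Count page faults for FIFO, LRU or OPT replacement."""
    mem, faults = [], 0            # mem keeps pages in arrival order
    for i, p in enumerate(refs):
        if p in mem:
            if evict == "LRU":     # refresh recency on a hit
                mem.remove(p); mem.append(p)
            continue
        faults += 1
        if len(mem) == frames:
            if evict == "OPT":     # evict the page used farthest in the future
                future = refs[i + 1:]
                victim = max(mem, key=lambda q: future.index(q)
                             if q in future else len(future))
            else:                  # FIFO and LRU both evict the list head
                victim = mem[0]
            mem.remove(victim)
        mem.append(p)
    return faults

refs = [1, 2, 3, 2, 4, 1, 3, 2, 4, 1]
print([count_faults(refs, 3, e) for e in ("OPT", "FIFO", "LRU")])  # [5, 6, 9]
```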
Question 12
CORRECT
Let the page fault service time be 10ms in a computer with average memory access time being 20ns. If
one page fault is generated for every 10^6 memory accesses, what is the effective access time for the
memory?
A 21ns
B 30ns
C 23ns
D 35ns
Question 12 Explanation:
Effective access time = (1/(10^6)) * 10 * (10^6) ns + (1 - 1/(10^6)) * 20 ns
= 10 ns + 20 ns (approx) = 30 ns
Question 13
CORRECT
A system uses FIFO policy for page replacement. It has 4 page frames with no pages loaded to begin
with. The system first accesses 100 distinct pages in some order and then accesses the same 100 pages
but now in the reverse order. How many page faults will occur?
A 196
B 192
C 197
D 195
Question 13 Explanation:
See http://www.geeksforgeeks.org/operating-systems-set-7/
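The figure of 196 can be reproduced with a short FIFO simulation (a sketch):

```python
from collections import deque

# 100 distinct pages accessed forward, then the same 100 in reverse order,
# on a 4-frame FIFO-replacement system that starts empty.
frames, mem, faults = 4, deque(), 0
refs = list(range(1, 101)) + list(range(100, 0, -1))
for p in refs:
    if p not in mem:
        faults += 1
        if len(mem) == frames:
            mem.popleft()      # FIFO: drop the oldest page
        mem.append(p)
print(faults)  # 196
```

The forward pass causes 100 faults; on the way back the last 4 pages are still resident, so only 96 of the reverse accesses fault, for 196 in total.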
Question 14
WRONG
In which one of the following page replacement policies, Belady's anomaly may occur?
A FIFO
B Optimal
C LRU
D MRU
Question 14 Explanation:
Belady's anomaly proves that it is possible to have more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm. See the wiki page for an example of increasing page faults with the number of page frames.
Question 15
WRONG
The essential content(s) in each entry of a page table is / are
Question 15 Explanation:
A page table entry must contain the page frame number. The virtual page number is typically used as an index into the page table to get the corresponding page frame number. See this for details.
Question 16
WRONG
A multilevel page table is preferred in comparison to a single level page table for translating virtual
address to physical address because
It helps to reduce the size of page table needed to implement the virtual address space of a
process.
It is required by the translation lookaside buffer.
Question 16 Explanation:
The size of page table may become too big (See this) to fit in contiguous space. That is why page tables
are typically divided in levels.
Question 17
CORRECT
A processor uses 36 bit physical addresses and 32 bit virtual addresses, with a page frame size of 4
Kbytes. Each page table entry is of size 4 bytes. A three level page table is used for virtual to physical
address translation, where the virtual address is used as follows Bits 30-31 are used to index into the
first level page table Bits 21-29 are used to index into the second level page table Bits 12-20 are used
to index into the third level page table, and Bits 0-11 are used as offset within the page The number of
bits required for addressing the next level page table (or page frame) in the page table entry of the first,
second and third level page tables are respectively.
A 20, 20 and 20
24, 24 and 24
C 24, 24 and 20
D 25, 25 and 24
Question 17 Explanation:
Virtual address size = 32 bits. Physical address size = 36 bits. Physical memory size = 2^36 bytes. Page frame size = 4 KB = 2^12 bytes. No. of bits for the offset (the bits required for accessing a location within a page frame) = 12. No. of bits required to address a physical memory frame = 36 - 12 = 24. So in a third level page table entry, 24 bits are required to address a page frame. 9 bits of the virtual address are used to index into a second (or third) level page table, and each entry is 4 bytes, so the size of such a table is (2^9) * 4 = 2^11 bytes. This means there are (2^36)/(2^11) = 2^25 possible locations at which such a table can be stored, so 25 bits are needed in a first level entry to address a second level table, and likewise 25 bits in a second level entry to address a third level table. Hence the answer is 25, 25 and 24.
Question 18
CORRECT
A virtual memory system uses First In First Out (FIFO) page replacement policy and allocates a fixed
number of frames to a process. Consider the following statements:
Question 18 Explanation:
First In First Out Page Replacement Algorithms: This is the simplest page replacement algorithm. In this
algorithm, operating system keeps track of all pages in the memory in a queue, oldest page is in the front
of the queue. When a page needs to be replaced page in the front of the queue is selected for removal.
The FIFO page replacement algorithm suffers from Belady's anomaly: Belady's anomaly states that it is possible to have more page faults when increasing the number of page frames. Solution: Statement P: Increasing the number of page frames allocated to a process sometimes increases the page fault rate. Correct, as the FIFO page replacement algorithm suffers from Belady's anomaly, which states exactly this. Statement Q: Some programs do not exhibit locality of reference. Correct; locality often occurs because code contains loops that tend to reference arrays or other data structures by indices, and we can write a program that contains no loops and does not exhibit locality of reference. So both statements P and Q are correct, but Q is not the reason for P, as Belady's anomaly occurs only for some specific patterns of page references. See Question 1 of http://www.geeksforgeeks.org/operating-systems-set-13/ Reference: http://quiz.geeksforgeeks.org/operating-system-page-replacement-algorithm/ This solution is contributed by Nitika Bansal
Question 19
WRONG
A process has been allocated 3 page frames. Assume that none of the pages of the process are available
in the memory initially. The process makes the following sequence of page references (reference string):
1, 2, 1, 3, 7, 4, 5, 6, 3, 1 If optimal page replacement policy is used, how many page faults occur for the
above reference string?
A 7
B 8
C 9
D 10
Question 19 Explanation:
Optimal replacement policy looks forward in time to see which frame to replace on a page fault. With the frames shown after each fault: 1, 2, 3 -> {1,2,3} (3 page faults); 7 replaces 2 -> {1,7,3}; 4 replaces 7 -> {1,4,3}; 5 replaces 4 -> {1,5,3}; 6 replaces 5 -> {1,6,3}; the remaining references 3 and 1 are hits. Total = 7, so the answer is A.
Question 20
WRONG
Consider the data given in above question. Least Recently Used (LRU) page replacement policy is a
practical approximation to optimal page replacement. For the above reference string, how many more
page faults occur with LRU than with the optimal page replacement policy?
A 0
B 1
D 3
Question 20 Explanation:
LRU replacement policy: the page that is least recently used is replaced. Given string: 1, 2, 1, 3, 7, 4, 5, 6, 3, 1. With the frames shown after each fault: 1, 2 -> {1,2}; 1 is a hit; 3 -> {1,2,3}; 7 replaces 2 -> {1,3,7}; 4 replaces 1 -> {3,7,4}; 5 replaces 3 -> {7,4,5}; 6 replaces 7 -> {4,5,6}; 3 replaces 4 -> {5,6,3}; 1 replaces 5 -> {6,3,1}. Total = 9 page faults. In http://geeksquiz.com/gate-gate-cs-2007-question-82/, optimal replacement gives 7 page faults in total, therefore LRU causes 2 more page faults and the answer is C.
Question 21
CORRECT
Assume that there are 3 page frames which are initially empty. If the page reference string is 1, 2, 3, 4, 2,
1, 5, 3, 2, 4, 6, the number of page faults using the optimal replacement policy is__________.
A 5
B 6
D 8
Memory Management GATE-CS-2014-(Set-1)
Discuss it
Question 21 Explanation:
In the optimal page replacement policy, we replace the page which is not used for the longest
duration in the future.
Given three page frames.
Reference string is 1, 2, 3, 4, 2, 1, 5, 3, 2, 4, 6
Question 22
WRONG
A computer has twenty physical page frames which contain pages numbered 101 through 120. Now a
program accesses the pages numbered 1, 2, ..., 100 in that order, and repeats the access sequence
THRICE. Which one of the following page replacement policies experiences the same number of page
faults as the optimal page replacement policy for this program?
A Least-recently-used
B First-in-first-out
C Last-in-first-out
D Most-recently-used
Memory Management GATE-CS-2014-(Set-2)
Discuss it
Question 22 Explanation:
The optimal page replacement algorithm swaps out the page whose next use will occur farthest in the
future. In the given question, the computer has 20 page frames and initially page frames are filled with
pages numbered from 101 to 120. Then the program accesses the pages numbered 1, 2, ..., 100 in that
order, and repeats the access sequence THRICE. The first 20 accesses to pages from 1 to 20 would
definitely cause page fault. When 21st is accessed, there is another page fault. The page swapped out
would be 20 because 20 is going to be accessed farthest in future. When 22nd is accessed, 21st is going
to go out as it is going to be the farthest in future. The above optimal page replacement algorithm actually
works as most recently used in this case. As a side note, the first 100 would cause 100 page faults, next
100 would cause 81 page faults (1 to 19 would never be removed), the last 100 would also cause 81 page
faults.
Question 23
WRONG
A system uses 3 page frames for storing process pages in main memory. It uses the Least Recently Used
(LRU) page replacement policy. Assume that all the page frames are initially empty. What is the total
number of page faults that will occur while processing the page reference string given below? 4, 7, 6, 1, 7,
6, 1, 2, 7, 2
A 4
B 5
C 6
D 7
Memory Management GATE-CS-2014-(Set-3)
Discuss it
Question 23 Explanation:
What is a Page fault ? An interrupt that occurs when a program requests data that is not currently in real
memory. The interrupt triggers the operating system to fetch the data from a virtual memory and load it
into RAM. Now, 4, 7, 6, 1, 7, 6, 1, 2, 7, 2 is the reference string, you can think of it as data requests made
by a program. Now the system uses 3 page frames for storing process pages in main memory. It uses the
Least Recently Used (LRU) page replacement policy.
Initially the page frames are empty. Page 4 is requested by the program but is not in main memory (in the form of page frames), which results in a page fault; the operating system then brings page 4 into a frame. Pages 7 and 6 likewise fault while filling the remaining frames, and 1 faults and evicts 4 (the least recently used page). The next references to 7, 6 and 1 are hits, so no replacements occur. Then 2 faults and evicts 7, 7 faults again and evicts 6, and the final 2 is a hit.
Hence, the total number of page faults is 6. Therefore, C is the answer.
Question 24
WRONG
Consider a paging hardware with a TLB. Assume that the entire page table and all the pages are in the
physical memory. It takes 10 milliseconds to search the TLB and 80 milliseconds to access the physical
memory. If the TLB hit ratio is 0.6, the effective memory access time (in milliseconds) is _________.
A 120
B 122
C 124
D 118
Question 24 Explanation:
TLB stands for Translation Lookaside Buffer. In Virtual memory systems, the cpu generates virtual
memory addresses. But, the data is stored in actual physical memory i.e. we need to place a physical
memory address on the memory bus to fetch the data from the memory circuitry. So, a special table is
maintained by the operating system called the Page table. This table contains a mapping between the
virtual addresses and physical addresses. So, every time a cpu generates a virtual address, the operating
system page table has to be looked up to find the corresponding physical address. To speed this up, there
is hardware support called the TLB. The TLB is a high speed cache of the page table i.e. contains
recently accessed virtual-to-physical translations. TLB hit ratio: a TLB hit means a virtual-to-physical address translation was found in the TLB, instead of going all the way to the page table located in slower physical memory; the TLB hit ratio is the number of TLB hits divided by the total number of TLB lookups. If the page is found in the TLB (TLB hit), the total time is the TLB search time plus one memory access: TLB_hit_time = TLB_search_time + memory_access_time. If the page is not found in the TLB (TLB miss), the total time is the TLB search time (you find nothing, but searched nonetheless) plus one memory access to read the page table, plus one memory access to get the data: TLB_miss_time = TLB_search_time + 2 * memory_access_time. These are individual cases; for an average measure of TLB performance we use the Effective Access Time, the weighted average of the two: EAT = TLB_hit_time * hit_ratio + TLB_miss_time * (1 - hit_ratio). As both the page table and the page are in physical memory: T(eff) = 0.6 * (10 + 80) + (1 - 0.6) * (10 + 2 * 80) = 0.6 * 90 + 0.4 * 170 = 122. This solution is contributed by Nitika Bansal
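The weighted-average formula above can be checked numerically with a short sketch (the function name and parameters are my own):

```python
def effective_access_time(tlb_time, mem_time, hit_ratio):
    """EAT = hit_ratio * (TLB + memory) + (1 - hit_ratio) * (TLB + 2 * memory).

    Assumes the page table and the page are both in physical memory,
    so a TLB miss costs exactly one extra memory access for the lookup.
    """
    hit_time = tlb_time + mem_time
    miss_time = tlb_time + 2 * mem_time
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

print(effective_access_time(10, 80, 0.6))  # 122.0 (milliseconds, per the question's units)
```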
Question 25
WRONG
The memory access time is 1 nanosecond for a read operation with a hit in cache, 5 nanoseconds for a
read operation with a miss in cache, 2 nanoseconds for a write operation with a hit in cache and 10
nanoseconds for a write operation with a miss in cache. Execution of a sequence of instructions involves
100 instruction fetch operations, 60 memory operand read operations and 40 memory operand write
operations. The cache hit-ratio is 0.9. The average memory access time (in nanoseconds) in executing
the sequence of instructions is __________.
A 1.26
B 1.68
C 2.46
D 4.52
Question 25 Explanation:
Average read time = 0.9 × 1 + 0.1 × 5 = 1.4 ns, and average write time = 0.9 × 2 + 0.1 × 10 = 2.8 ns (here 2 and 10 are the write times on a cache hit and a cache miss respectively).
Instruction fetches: 100 × 1.4 = 140 ns
Operand reads: 60 × 1.4 = 84 ns
Operand writes: 40 × 2.8 = 112 ns
Total = 140 + 84 + 112 = 336 ns for 200 memory accesses, so the average memory access time = 336 / 200 = 1.68 ns.
Question 26
CORRECT
A CPU generates 32-bit virtual addresses. The page size is 4 KB. The processor has a translation look-
aside buffer (TLB) which can hold a total of 128 page table entries and is 4-way set associative. The
minimum size of the TLB tag is:
A 11 bits
B 13 bits
C 15 bits
D 20 bits
Question 26 Explanation:
Virtual Memory would not be very effective if every memory address had to be translated by looking up
the associated physical page in memory. The solution is to cache the recent translations in a Translation
Lookaside Buffer (TLB). A TLB has a fixed number of slots that contain page table entries, which map
virtual addresses to physical addresses. Solution: size of a page = 4 KB = 2^12 bytes, which means 12 offset bits. The CPU generates 32-bit virtual addresses, so the total number of bits needed to address a page frame = 32 − 12 = 20. If there are n cache lines in a set, the cache placement is called n-way set associative. Since the TLB is 4-way set associative and can hold a total of 128 (2^7) page table entries, the number of sets in the cache = 2^7 / 4 = 2^5. So 5 bits are needed to address a set, and 20 − 5 = 15 bits are needed for the tag. Option (C) is the correct answer. See Question 3 of http://www.geeksforgeeks.org/operating-systems-set-14/ This solution is
contributed by Nitika Bansal
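The same bit arithmetic can be written out as a sketch (function and parameter names are my own); it also reproduces the answer to the similar Question 44 below:

```python
from math import log2

def tlb_tag_bits(va_bits, page_size, entries, ways):
    """Minimum tag size for a set-associative TLB."""
    offset_bits = int(log2(page_size))        # bits of the offset within a page
    vpn_bits = va_bits - offset_bits          # virtual page number bits
    set_bits = int(log2(entries // ways))     # bits selecting a TLB set
    return vpn_bits - set_bits

print(tlb_tag_bits(32, 4 * 1024, 128, 4))  # 15 bits (Question 26)
print(tlb_tag_bits(40, 8 * 1024, 128, 4))  # 22 bits (Question 44)
```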
Question 27
WRONG
A computer system supports 32-bit virtual addresses as well as 32-bit physical addresses. Since the
virtual address space is of the same size as the physical address space, the operating system designers
decide to get rid of the virtual memory entirely. Which one of the following is true?
Efficient implementation of multi-user support is no longer possible
Question 27 Explanation:
Same as http://geeksquiz.com/operating-systems-memory-management-question-4/
Question 28
WRONG
The minimum number of page frames that must be allocated to a running process in a virtual memory
environment is determined by
the instruction set architecture
page size
Question 28 Explanation:
There are two important tasks in virtual memory management: a page-replacement strategy and a frame-
allocation strategy. Frame allocation strategy says gives the idea of minimum number of frames which
should be allocated. The absolute minimum number of frames that a process must be allocated is
dependent on system architecture, and corresponds to the number of pages that could be touched by a
single (machine) instruction. So, it is instruction set architecture i.e. option (A) is correct answer. See
Question 3 of http://www.geeksforgeeks.org/operating-systems-set-
4/ Reference:https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/9_VirtualMemory.htmlThis
solution is contributed by Nitika Bansal
Question 29
WRONG
Consider a system with a two-level paging scheme in which a regular memory access takes 150
nanoseconds, and servicing a page fault takes 8 milliseconds. An average instruction takes 100
nanoseconds of CPU time, and two memory accesses. The TLB hit ratio is 90%, and the page fault rate is
one in every 10,000 instructions. What is the effective average instruction execution time?
A 645 nanoseconds
B 1050 nanoseconds
C 1215 nanoseconds
D 1230 nanoseconds
Memory Management GATE-CS-2004
Discuss it
Question 29 Explanation:
Let MEM be the address-translation overhead per memory access when the page is present in memory. MEM = 0.9 × (TLB access time) + 0.1 × (TLB access time + 2 × 150 ns). The TLB access time is not given, so assume it is 0: MEM = 0.9 × 0 + 0.1 × (300 ns) = 30 ns. Putting this value of MEM into the page-fault equation: M = (1 − 1/10^4) × 30 ns + (1/10^4) × 8 ms = 830 ns.
Question 30
CORRECT
In a system with 32 bit virtual addresses and 1 KB page size, use of one-level page tables for virtual to
physical address translation is not practical because of
Question 30 Explanation:
See question 4 of http://www.geeksforgeeks.org/operating-systems-set-4/
Question 31
CORRECT
Which of the following is NOT an advantage of using shared, dynamically linked libraries as opposed to
using statically linked libraries ?
D Existing programs need not be re-linked to take advantage of newer versions of libraries
Question 31 Explanation:
Refer Static and Dynamic Libraries. In non-shared (static) libraries, the library code is linked in at compile time, so the final executable has no dependency on the library at run time, i.e. no additional run-time loading cost. It means you don't need to carry along a copy of the library that is being used; you have everything under your control and there is no dependency.
Question 32
WRONG
A processor uses 2-level page tables for virtual to physical address translation. Page tables for both levels
are stored in the main memory. Virtual and physical addresses are both 32 bits wide. The memory is byte
addressable. For virtual to physical address translation, the 10 most significant bits of the virtual address
are used as index into the first level page table while the next 10 bits are used as index into the second
level page table. The 12 least significant bits of the virtual address are used as offset within the page.
Assume that the page table entries in both levels of page tables are 4 bytes wide. Further, the processor
has a translation look-aside buffer (TLB), with a hit rate of 96%. The TLB caches recently used virtual
page numbers and the corresponding physical page numbers. The processor also has a physically
addressed cache with a hit rate of 90%. Main memory access time is 10 ns, cache access time is 1 ns,
and TLB access time is also 1 ns. Assuming that no page faults occur, the average time taken to access a
virtual address is approximately (to the nearest 0.5 ns)
A 1.5 ns
B 2 ns
C 3 ns
D 4 ns
Memory Management GATE-CS-2003
Discuss it
Question 32 Explanation:
The possibilities are: TLB hit, cache hit: 1 + 1 = 2 ns; TLB hit, cache miss: 1 + 1 + 10 = 12 ns; TLB miss, cache hit: 1 + 2 × 10 + 1 = 22 ns; TLB miss, cache miss: 1 + 2 × 10 + 1 + 10 = 32 ns. Average access time = 0.96 × (0.9 × 2 + 0.1 × 12) + 0.04 × (0.9 × 22 + 0.1 × 32) = 0.96 × 3 + 0.04 × 23 = 3.8 ≈ 4 ns.
Why 22 and 32? When a TLB miss occurs it takes 1 ns to search the TLB; then the physical address must be found by walking the two levels of page tables, which are in main memory and take 2 memory accesses (20 ns); the data access then takes 1 ns if the data is found in cache (total 22 ns), or 1 + 10 = 11 ns on a cache miss (total 32 ns).
Question 33
WRONG
A processor uses 2-level page tables for virtual to physical address translation. Page tables for both levels
are stored in the main memory. Virtual and physical addresses are both 32 bits wide. The memory is byte
addressable. For virtual to physical address translation, the 10 most significant bits of the virtual address
are used as index into the first level page table while the next 10 bits are used as index into the second
level page table. The 12 least significant bits of the virtual address are used as offset within the page.
Assume that the page table entries in both levels of page tables are 4 bytes wide. Further, the processor
has a translation look-aside buffer (TLB), with a hit rate of 96%. The TLB caches recently used virtual
page numbers and the corresponding physical page numbers. The processor also has a physically
addressed cache with a hit rate of 90%. Main memory access time is 10 ns, cache access time is 1 ns,
and TLB access time is also 1 ns. Suppose a process has only the following pages in its virtual address
space: two contiguous code pages starting at virtual address 0x00000000, two contiguous data pages
starting at virtual address 0x00400000, and a stack page starting at virtual address 0xFFFFF000. The
amount of memory required for storing the page tables of this process is:
A 8 KB
12 KB
16 KB
D 20 KB
Question 33 Explanation:
Breakup of the given addresses into bit form: the 10 most significant bits index the first-level page table and the next 10 bits index a second-level page table. The three regions (code at 0x00000000, data at 0x00400000, stack at 0xFFFFF000) give 3 distinct values of the top 10 bits: 0000000000, 0000000001 and 1111111111. So the first-level page table (1 page) has 3 distinct entries in use, and each of these distinct entries needs its own second-level page table (1 page each).
Hence, we will have 4 pages in total and page size = 2^12 = 4 KB, giving 16 KB.
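As a cross-check, a short sketch (assuming the three regions start at 0x00000000, 0x00400000 and 0xFFFFF000 as in the question) counts the distinct first-level indices directly:

```python
# Virtual pages used by the process (start addresses; page size 4 KB = 0x1000).
pages = [0x00000000, 0x00001000,   # two contiguous code pages
         0x00400000, 0x00401000,   # two contiguous data pages
         0xFFFFF000]               # one stack page

# The top 10 bits of a 32-bit address (shift by 22) index the first-level
# table; each distinct value needs one second-level page table, plus the
# single first-level table itself.
first_level_indices = {addr >> 22 for addr in pages}
total_pages = 1 + len(first_level_indices)
print(total_pages * 4)  # 16 (KB of page tables)
```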
Question 34
WRONG
Which of the following is not a form of memory?
A instruction cache
B instruction register
C instruction opcode
D translation lookaside buffer
Question 34 Explanation:
Instruction Cache - Used for storing instructions that are frequently used Instruction Register - Part of
CPU's control unit that stores the instruction currently being executed Instruction Opcode - It is the portion
of a machine language instruction that specifies the operation to be performed Translation Lookaside
Buffer - It is a memory cache that stores recent translations of virtual memory to physical addresses for
faster access. So, all the above except Instruction Opcode are memories. Thus, C is the correct choice.
Please comment below if you find anything wrong in the above post.
Question 35
WRONG
The optimal page replacement algorithm will select the page that
A Has not been used for the longest time in the past.
B Will not be used for the longest time in the future.
Question 35 Explanation:
The optimal page replacement algorithm will select the page whose next occurrence will be after the
longest time in future. For example, if we need to swap a page and there are two options from which we
can swap, say one would be used after 10s and the other after 5s, then the algorithm will swap out the
page that would be required 10s later. Thus, B is the correct choice. Please comment below if you find
anything wrong in the above post.
Question 36
WRONG
Dynamic linking can cause security concerns because:
A Security is dynamic
B The path for searching dynamic libraries is not known till runtime
C Linking is insecure
Question 36 Explanation:
Static linking and static libraries: the linker makes a copy of all used library functions into the executable file. Static linking creates larger binary files and needs more space on disk and in main memory. Examples of static libraries (libraries which are statically linked) are .a files in Linux and .lib files in Windows. Dynamic linking and dynamic libraries: dynamic linking doesn't require the code to be copied; it is done by just placing the name of the library in the binary file. The actual linking happens when the program is run, when both the binary file and the library are in memory. Examples of dynamic libraries (libraries which are linked at run-time) are .so files in Linux and .dll files in Windows. In dynamic linking, the path for searching dynamic libraries is not known till runtime.
Question 37
WRONG
Which of the following statements is false?
A Virtual memory implements the translation of a program's address space into physical memory address space
B Virtual memory allows each program to exceed the size of the primary memory
Question 37 Explanation:
See question 4 of http://www.geeksforgeeks.org/operating-systems-set-2/
Question 38
The process of assigning load addresses to the various parts of the program and adjusting the code and
data in the program to reflect the assigned addresses is called
A Assembly
B Parsing
C Relocation
D Symbol resolution
Question 39
WRONG
Where does the swap space reside?
A RAM
B Disk
C ROM
D On-chip cache
Question 39 Explanation:
Swap space is an area on disk that temporarily holds a process's memory image. When memory is full and a process needs memory, inactive parts of the process are put in the swap space on disk.
Question 40
WRONG
Consider a virtual memory system with FIFO page replacement policy. For an arbitrary page access
pattern, increasing the number of page frames in main memory will
Question 40 Explanation:
See question 4 of http://www.geeksforgeeks.org/operating-systems-set-1/
Question 41
CORRECT
Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If the page size is
4KB, what is the approximate size of the page table?
A 16 MB
B 8 MB
C 2 MB
D 24 MB
Question 41 Explanation:
See question 1 of http://www.geeksforgeeks.org/operating-systems-set-2/
Question 42
WRONG
Suppose the time to service a page fault is on the average 10 milliseconds, while a memory access takes
1 microsecond. Then a 99.99% hit ratio results in average memory access time of (GATE CS 2000)
A 1.9999 milliseconds
B 1 millisecond
C 9.999 microseconds
D 1.9999 microseconds
Memory Management GATE-CS-2000
Discuss it
Question 42 Explanation:
If a page request comes, the page table is searched first; if the page is present, it is fetched directly from memory, so the time required is only the memory access time. But if the required page is not found, it must first be brought into memory and then accessed; this extra time is called the page fault service time. Let the hit ratio be p, the memory access time t1, and the page fault service time t2.
Hence, average memory access time = p*t1 + (1-p)*t2 = 0.9999 × 1 µs + 0.0001 × 10 ms = 1.9999 microseconds.
Question 43
WRONG
Consider a system with byte-addressable memory, 32 bit logical addresses, 4 kilobyte page size and
page table entries of 4 bytes each. The size of the page table in the system in megabytes is ___________
A 2
B 4
C 8
D 16
Question 43 Explanation:
Number of entries in the page table = 2^32 / 4 KB = 2^32 / 2^12 = 2^20.
Size of the page table = (number of page table entries) × (size of an entry) = 2^20 × 4 bytes = 4 Megabytes.
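This single-level page-table sizing can be sketched in a couple of lines (the function name is my own):

```python
def page_table_size_bytes(va_bits, page_size, pte_bytes):
    """Single-level page table: one entry per virtual page."""
    entries = 2 ** va_bits // page_size  # number of virtual pages
    return entries * pte_bytes

size = page_table_size_bytes(32, 4 * 1024, 4)
print(size // (1024 * 1024))  # 4 (megabytes)
```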
Question 44
WRONG
A computer system implements a 40 bit virtual address, page size of 8 kilobytes, and a 128-entry
translation look-aside buffer (TLB) organized into 32 sets each having four ways. Assume that the TLB tag
does not store any process id. The minimum length of the TLB tag in bits is _________
A 20
B 10
C 11
D 22
Memory Management GATE-CS-2015 (Set 2)
Discuss it
Question 44 Explanation:
Total virtual address size = 40 bits. Page size = 8 KB = 2^13, so 13 bits form the page offset and 40 − 13 = 27 bits form the virtual page number. The TLB has 32 = 2^5 sets, so 5 bits select the set, leaving 27 − 5 = 22 bits for the tag. Answer is D.
Question 45
CORRECT
Consider six memory partitions of size 200 KB, 400 KB, 600 KB, 500 KB, 300 KB, and 250 KB, where KB
refers to kilobyte. These partitions need to be allotted to four processes of sizes 357 KB, 210 KB, 468 KB
and 491 KB in that order. If the best fit algorithm is used, which partitions are NOT allotted to any
process?
200 KB and 300 KB
Question 45 Explanation:
Best fit allocates the smallest block among those that are large enough for the new process. So the memory blocks are allocated in the order: 357 → 400, 210 → 250, 468 → 500, 491 → 600, leaving the 200 KB and 300 KB partitions unallocated.
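Best fit is easy to sketch directly (the function name is my own); it reproduces the allocation order 357→400, 210→250, 468→500, 491→600:

```python
def best_fit(partitions, processes):
    """Allocate each process to the smallest free partition that fits it.

    Returns the list of partitions left unallocated, in original order.
    """
    free = list(partitions)
    for size in processes:
        candidates = [p for p in free if p >= size]
        if candidates:
            free.remove(min(candidates))  # take the smallest adequate partition
    return free

leftover = best_fit([200, 400, 600, 500, 300, 250], [357, 210, 468, 491])
print(sorted(leftover))  # [200, 300]
```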
Question 46
CORRECT
A Computer system implements 8 kilobyte pages and a 32-bit physical address space. Each page table
entry contains a valid bit, a dirty bit three permission bits, and the translation. If the maximum size of the
page table of a process is 24 megabytes, the length of the virtual address supported by the system is
_______________ bits
A 36
B 32
C 28
D 40
Question 46 Explanation:
The maximum size of the virtual address can be calculated from the size of a page table entry. An entry contains:
1 (valid bit) +
1 (dirty bit) +
3 (permission bits) +
x (frame number bits).
Page size is 8 KB = 2^13, so the offset within a page is 13 bits, and with 32-bit physical addresses the frame number x = 32 − 13 = 19 bits. Each entry is therefore 1 + 1 + 3 + 19 = 24 bits = 3 bytes. Number of page table entries = 24 MB / 3 bytes = 2^23, so the virtual page number is 23 bits and the length of the virtual address = 23 + 13 = 36 bits.
Question 47
WRONG
Which one of the following is NOT shared by the threads of the same process?
A Stack
B Address Space
D Message Queue
Question 47 Explanation:
Threads can not share stack (used for maintaining function calls) as they may have their individual
function call sequence.
Image source: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/4_Threads.html
Question 48
WRONG
Consider a fully associative cache with 8 cache blocks (numbered 0-7) and the following sequence of
memory block requests: 4, 3, 25, 8, 19, 6, 25, 8, 16, 35, 45, 22, 8, 3, 16, 25, 7 If LRU replacement policy
is used, which cache block will have memory block 7?
A 4
B 5
C 6
D 7
Memory Management GATE-IT-2004
Discuss it
Question 48 Explanation:
There are 8 cache blocks. Given: 4, 3, 25, 8, 19, 6, 25, 8, 16, 35, 45, 22, 8, 3, 16, 25, 7. So for blocks 0 to 7 we have:
4 3 25 8 19 6 16 35 //25,8 LRU so next 16,35 come in the block.
45 3 25 8 19 6 16 35
45 22 25 8 19 6 16 35
45 22 25 8 19 6 16 35
45 22 25 8 3 6 16 35 //16 and 25 already there
45 22 25 8 3 7 16 35 //7 in 5th block Therefore , answer is B
Question 49
WRONG
The storage area of a disk has innermost diameter of 10 cm and outermost diameter of 20 cm. The
maximum storage density of the disk is 1400bits/cm. The disk rotates at a speed of 4200 RPM. The main
memory of a computer has 64-bit word length and 1 µs cycle time. If cycle stealing is used for data transfer
from the disk, the percentage of memory cycles stolen for transferring one word is
A 0.5%
B 1%
C 5%
D 10%
Memory Management GATE-IT-2004
Discuss it
Question 49 Explanation:
Please comment below if you find anything wrong in the above post.
Question 50
WRONG
A disk has 200 tracks (numbered 0 through 199). At a given time, it was servicing the request of reading
data from track 120, and at the previous request, service was for track 90. The pending requests (in order
of their arrival) are for track numbers. 30 70 115 130 110 80 20 25. How many times will the head change
its direction for the disk scheduling policies SSTF(Shortest Seek Time First) and FCFS (First Come Fist
Serve)
A 2 and 3
B 3 and 3
C 3 and 4
D 4 and 4
Question 50 Explanation:
According to Shortest Seek Time First: 90 → 120 → 115 → 110 → 130 → 80 → 70 → 30 → 25 → 20. Changes of direction (3 in total): 120→115, 110→130, 130→80. According to First Come First Serve: 90 → 120 → 30 → 70 → 115 → 130 → 110 → 80 → 20 → 25. Changes of direction (4 in total): 120→30, 30→70, 130→110, 20→25. Therefore, the answer is C.
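Counting head reversals is mechanical; a small sketch (the function name is my own) checks both service orders:

```python
def direction_changes(track_sequence):
    """Count how many times the head reverses direction over a service order."""
    changes = 0
    prev_dir = 0  # 0 = no movement yet, +1 = moving up, -1 = moving down
    for a, b in zip(track_sequence, track_sequence[1:]):
        d = 1 if b > a else -1
        if prev_dir and d != prev_dir:
            changes += 1
        prev_dir = d
    return changes

sstf = [90, 120, 115, 110, 130, 80, 70, 30, 25, 20]
fcfs = [90, 120, 30, 70, 115, 130, 110, 80, 20, 25]
print(direction_changes(sstf), direction_changes(fcfs))  # 3 4
```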
Question 51
WRONG
In a virtual memory system, size of virtual address is 32-bit, size of physical address is 30-bit, page size is
4 Kbyte and size of each page table entry is 32-bit. The main memory is byte addressable. Which one of
the following is the maximum number of bits that can be used for storing protection and other information
in each page table entry?
A 2
B 10
C 12
D 14
Memory Management GATE-IT-2004
Discuss it
Question 51 Explanation:
Page size is 4 KB = 2^12 and the physical address is 30 bits, so the frame number needs 30 − 12 = 18 bits. Each page table entry is 32 bits, leaving 32 − 18 = 14 bits for protection and other information. Please comment below if you find anything wrong in the above post.
Question 52
WRONG
In a particular Unix OS, each data block is of size 1024 bytes, each node has 10 direct data block
addresses and three additional addresses: one for single indirect block, one for double indirect block and
one for triple indirect block. Also, each block can contain addresses for 128 blocks. Which one of the
following is approximately the maximum size of a file in the file system?
A 512 MB
B 2 GB
C 8 GB
D 16 GB
Question 52 Explanation:
Given: 10 direct data block addresses, one single, one double and one triple indirect block; each block can hold 128 addresses; block size = 1024 bytes.
Hence, maximum file size = (10 + 128 + 128×128 + 128×128×128) × 1024 bytes
= 2113674 × 1024 bytes
≈ 2.016 GB ~ 2 GB
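The inode block accounting can be sketched as follows (the function name is my own):

```python
def max_file_size(block_size, direct, addrs_per_block):
    """Unix-style inode with direct, single, double and triple indirect blocks."""
    blocks = (direct
              + addrs_per_block            # single indirect
              + addrs_per_block ** 2       # double indirect
              + addrs_per_block ** 3)      # triple indirect
    return blocks * block_size

size = max_file_size(1024, 10, 128)
print(size)  # 2164402176 bytes, i.e. about 2 GB
```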
Question 53
CORRECT
A two-way switch has three terminals a, b and c. In ON position (logic value 1), a is connected to b, and in
OFF position, a is connected to c. Two of these two-way switches S1 and S2 are connected to a bulb as
A S1.S2'
B S1+S2
C (S1 ⊕ S2)'
D S1S2
Question 53 Explanation:
If we draw the truth table of the above circuit: S1=0, S2=0 → On; S1=0, S2=1 → Off; S1=1, S2=0 → Off; S1=1, S2=1 → On. The bulb is on exactly when S1 and S2 are equal, i.e. Bulb = (S1 ⊕ S2)'. Therefore the answer is C.
Question 54
WRONG
Consider a 2-way set associative cache memory with 4 sets and total 8 cache blocks (0-7) and a main
memory with 128 blocks (0-127). What memory blocks will be present in the cache after the following
sequence of memory block references if LRU policy is used for cache block replacement. Assuming that
initially the cache did not have any memory block from the current job? 0 5 3 9 7 0 16 55
A 0 3 5 7 16 55
B 0 3 5 7 9 16 55
C 0 5 7 9 16 55
D 3 5 7 9 16 55
Question 54 Explanation:
2-way set associative cache memory, i.e. K = 2.
The number of blocks in the main memory is 128, i.e. M = 128 (numbered from 0 to 127).
The number of sets in the cache S = 8 / 2 = 4. A main memory block numbered X can be placed only in the set numbered (X mod S) of the cache memory. In that set, the block can be placed at any location, but if the set has already become full, then the currently referred block of the main memory replaces the least recently used block of the set.
X --> set no (X mod 4): 0 → 0, 5 → 1, 3 → 3,
9 → 1 (block 9 is placed in the one empty location of set 1; set 1 is now full, and block 5 is the least recently used block),
7 → 3, 0 → 0 (hit), 16 → 0,
55 → 3 (block 55 should be placed in set 3, but set 3 is full with blocks 3 and 7; as block 3 is the least recently used block in set 3, it is replaced with block 55).
Hence the main memory blocks present in the cache memory are : 0, 5, 7, 9, 16, 55 . (Note: block 3 is not
present in the cache memory, it was replaced with block 55 ) Read the following articles to learn more
related to the above question: Cache Memory Cache Organization | Introduction
Question 55
WRONG
A disk has 8 equidistant tracks. The diameters of the innermost and outermost tracks are 1
cm and 8 cm respectively. The innermost track has a storage capacity of 10 MB. What is the total amount
of data that can be stored on the disk if it is used with a drive that rotates it with (i) Constant Linear
Velocity (ii) Constant Angular Velocity?
(i) 80 MB (ii) 2040 MB
Question 55 Explanation:
Please comment below if you find anything wrong in the above post.
Question 56
CORRECT
Consider a computer system with 40-bit virtual addressing and page size of sixteen kilobytes. If the
computer system has a one-level page table per process and each page table entry requires 48 bits, then
the size of the per-process page table is _________megabytes. Note : This question was asked as
Numerical Answer Type.
A 384
B 48
C 192
D 96
Question 56 Explanation:
Size of virtual memory = 2^40. Page size = 16 KB = 2^14. Number of pages = (size of memory) / (page size) = 2^40 / 2^14 = 2^26. Size of page table = 2^26 × 48/8 bytes = 2^26 × 6 bytes = 384 MB. Thus, A is the correct choice.
Question 57
WRONG
Consider a computer system with ten physical page frames. The system is provided with an access
sequence (a1, a2, ..., a20, a1, a2, ..., a20), where each ai is a distinct virtual page number. The difference in the number of page
faults between the last-in-first-out page replacement policy and the optimal page replacement policy is
__________ [Note that this question was originally a Fill-in-the-Blanks question]
A 0
B 1
C 2
D 3
Question 57 Explanation:
LIFO stands for last in, first out. a1 to a10 cause page faults: 10 faults. Then a11 replaces a10 (the last one in), a12 replaces a11, and so on till a20: 10 more faults, after which a20 occupies the last-in frame and a1 to a9 remain as they were. In the second pass, a1 to a9 are already present: 0 faults. Then a10 replaces a20, a11 replaces a10, and so on: 11 faults from a10 to a20. Total = 10 + 10 + 11 = 31.
Optimal: a1 to a10 cause 10 faults. Then a11 replaces a10, because among a1 to a10 it is a10 that will be used farthest in the future; a12 replaces a11, and so on: 10 faults from a11 to a20, with a1 to a9 untouched. In the second pass, a1 to a9 are already present: 0 faults. a10 replaces a1, because a1 is not used afterwards, and so on: a10 to a19 give 10 faults; a20 is already there, so no fault. Total = 10 + 10 + 10 = 30.
Difference = 31 − 30 = 1.
Question 58
WRONG
In which one of the following page replacement algorithms it is possible for the page fault rate to increase
even when the number of allocated frames increases?
LRU (Least Recently Used)
Question 58 Explanation:
In some situations, FIFO page replacement gives more page faults when increasing the number of page frames. This situation is Belady's anomaly. Belady's anomaly shows that it is possible to have more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm. For example, for the reference string 3 2 1 0 3 2 4 3 2 1 0 4 with 3 frames we get 9 total page faults, but if we increase to 4 frames, we get 10 page faults.
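FIFO's anomalous behaviour on the quoted string can be demonstrated in a few lines (the function name is my own):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under First-In-First-Out replacement."""
    mem, queue, faults = set(), deque(), 0
    for page in refs:
        if page in mem:
            continue
        faults += 1
        if len(mem) == frames:
            mem.remove(queue.popleft())  # evict the oldest resident page
        mem.add(page)
        queue.append(page)
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10 -- more frames, more faults
```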
Question 59
CORRECT
The address sequence generated by tracing a particular program executing in a pure demand paging
system with 100 bytes per page is
0100, 0200, 0430, 0499, 0510, 0530, 0560, 0120, 0220, 0240, 0260, 0320, 0410.
Suppose that the memory can store only one page and if x is the address which causes a page fault then
the bytes from addresses x to x + 99 are loaded on to the memory.
How many page faults will occur ?
A 0
B 4
D 8
Question 59 Explanation:
With 100 bytes per page, the referenced pages are 1, 2, 4, 4, 5, 5, 5, 1, 2, 2, 2, 3, 4. Since memory holds only one page, a fault occurs whenever the referenced page differs from the currently loaded one: pages 1, 2, 4, 5, 1, 2, 3, 4 fault, giving 8 page faults. Answer is D.
Question 60
WRONG
A paging scheme uses a Translation Look-aside Buffer (TLB). A TLB-access takes 10 ns and a main
memory access takes 50 ns. What is the effective access time(in ns) if the TLB hit ratio is 90% and there
is no page-fault?
A 54
B 60
C 65
D 75
Question 60 Explanation:
Effective access time = hit ratio × (time on a hit) + miss ratio × (time on a miss). TLB time = 10 ns, memory time = 50 ns, hit ratio = 90%. A hit costs 10 + 50 = 60 ns; a miss costs 10 + 50 + 50 = 110 ns (TLB search, page table access, then the memory access itself). E.A.T. = 0.90 × 60 + 0.10 × 110 = 65 ns.
Question 61
WRONG
Assume that a main memory with only 4 pages, each of 16 bytes, is initially empty. The CPU generates
the following sequence of virtual addresses and uses the Least Recently Used (LRU) page replacement
policy.
0, 4, 8, 20, 24, 36, 44, 12, 68, 72, 80, 84, 28, 32, 88, 92
How many page faults does this sequence cause? What are the page numbers of the pages present in
the main memory at the end of the sequence?
A 6 and 1, 2, 3, 4
B 7 and 1, 2, 4, 5
C 8 and 1, 2, 4, 5
D 9 and 1, 2, 3, 5
Question 61 Explanation:
Page size = 16 bytes, so page number = address div 16. The address sequence maps to pages 0, 0, 0, 1, 1, 2, 2, 0, 4, 4, 5, 5, 1, 2, 5, 5. With 4 frames and LRU: 0, 1, 2 and 4 fault while filling; 5 faults and evicts 1; 1 faults and evicts 2; 2 faults and evicts 0. Total 7 faults, with pages 1, 2, 4, 5 in memory at the end. Answer is B.
Question 62
WRONG
Match the following flag bits used in the context of virtual memory management on the left side with the
different purposes on the right side of the table below.
Question 63
WRONG
Consider a computer with a 4-ways set-associative mapped cache of the following characteristics: a total
of 1 MB of main memory, a word size of 1 byte, a block size of 128 words and a cache size of 8 KB. The
number of bits in the TAG, SET and WORD fields, respectively are:
A 7, 6, 7
B 8, 5, 7
C 8, 6, 6
D 9, 4, 7
Memory Management Computer Organization and Architecture Gate IT 2008
Discuss it
Question 63 Explanation:
According to the question it is given that No. of bytes in a word= 1byte No. of words per
block of memory= 128 words Total size of the cache memory= 8 KB So the total number
of block can be calculated as under Cache size/(no. words per block* size of 1 word) =
8KB/( 128*1) =64 Since, it is given that the computer has a 4 way set associative
memory. Therefore, Total number of sets in the cache memory given = number of cache
blocks given/4 = 64/4 = 16 So, the number of SET bits required = 4 as 16= power(2, 4).
Thus, with 4 bits we will be able to get 16 possible output bits As per the question only
physical memory information is given we can assume that cache memory is physically
tagged. So, the memory can be divided into 16 regions or blocks. Size of the region a
single set can address = 1MB/ 16 = power(2, 16 )Bytes = power(2, 16) / 128 = power(2,
9) cache blocks Thus, to uniquely identify these power(2, 9) blocks we will need 9 bits to
tag these blocks. Thus, TAG= 9 Cache block is 128 words so for indicating any
particular block we will need 7 bits as 128=power(2,7). Thus, WORD = 7. Hence the
answer will be (TAG, SET, WORD) = (9,4,7). This solution is contributed by Namita Singh.
Question 64
CORRECT
Consider a computer with a 4-ways set-associative mapped cache of the following characteristics: a total
of 1 MB of main memory, a word size of 1 byte, a block size of 128 words and a cache size of 8 KB. While
accessing the memory location 0C795H by the CPU, the contents of the TAG field of the corresponding
cache line is
000011000
B 110001111
C 00011000
D 110010101
Question 64 Explanation:
From the previous question, TAG takes 9 bits, SET takes 4 bits and WORD takes 7 bits of the 20-bit address. The memory location 0C795H in binary is 0000 1100 0111 1001 0101. Thus TAG = first 9 bits = 0000 1100 0, SET = next 4 bits = 1111, WORD = last 7 bits = 001 0101. Therefore, the matching option is option A. This solution is contributed by Namita Singh.
Question 65
CORRECT
Linked Questions 65-66
Assume GeeksforGeeks implemented a new page replacement algorithm in virtual memory and named it Geek. Consider the working strategy of Geek as follows:
Each page in memory maintains a count which is incremented if the page is referred and no page
fault occurs.
If a page fault occurs, the physical page with zero count or smallest count is replaced by new
page and if more than one page with zero count or smallest count then it uses FIFO strategy to
replace the page.
Find the number of page faults using the Geek algorithm for the following reference string (assume three
physical frames are available which are initially free)
Reference String : A B C D A B E A B C D E B A D
A 7
B 9
11
D 13
Question 66
WRONG
If LRU and Geek page replacement are compared (in terms of page faults) only for above reference string
then find the correct statement from the following:
LRU and Geek are same
D None
Question 66 Explanation:
For this reference string with three frames, LRU incurs 13 page faults while Geek incurs 11, so Geek performs better than LRU here.
Question 1
WRONG
Which of the following is major part of time taken when accessing data on the disk?
A Settle time
Rotational latency
Seek time
D Waiting time
Question 1 Explanation:
Seek time is time taken by the head to travel to the track of the disk where the data to be accessed is
stored.
Question 2
WRONG
We describe a protocol of input device communication below.
a. Each device has a distinct address.
b. The bus controller scans each device in sequence of increasing address value to determine if the entity wishes to communicate.
c. The device ready to communicate leaves its data in the IO register.
d. The data is picked up and the controller moves to step a above.
Identify the form of communication that best describes the IO mode amongst the following: Source: nptel
DMA
C Interrupt mode
Polling
Input Output Systems
Discuss it
Question 2 Explanation:
See Polling
Question 3
From amongst the following given scenarios determine the right one to justify interrupt mode of data-
transfer: Source: nptel
Question 4
WRONG
Normally user programs are prevented from handling I/O directly by I/O instructions in them. For CPUs
having explicit I/O instructions, such I/O protection is ensured by having the I/O instructions privileged. In
a CPU with memory mapped I/O, there is no explicit I/O instruction. Which one of the following is true for
a CPU with memory mapped I/O? (GATE CS 2005)
I/O protection is ensured by operating system routine(s)
Question 4 Explanation:
See question 1 of http://www.geeksforgeeks.org/operating-systems-set-16/
Question 5
WRONG
Which of the following disk scheduling policies results in the minimum amount of head movement?
FCFS
Circular scan
C Elevator
Question 5 Explanation:
First Come First Serve (FCFS): All incoming requests are placed at the end of the queue. Whatever
number that is next in the queue will be the next number served. Using this algorithm doesn't provide the
best results. Elevator (SCAN): This approach works like an elevator does. It scans down towards the
nearest end and then when it hits the bottom it scans up servicing the requests that it didn't get going
down. If a request comes in after it has been scanned it will not be serviced until the process comes back
down or moves back up. Circular Scan (C-SCAN): Circular scanning works just like the elevator to some
extent. It begins its scan toward the nearest end and works its way all the way to the end of the system.
Once it hits the bottom or top it jumps to the other end and moves in the same direction. Keep in mind that
the huge jump doesn't count as a head movement.
Source: http://www.cs.iit.edu/~cs561/cs450/disksched/disksched.html
Question 6
WRONG
Consider a hard disk with 16 recording surfaces (0-15) having 16384 cylinders (0-16383) and each
cylinder contains 64 sectors (0-63). Data storage capacity in each sector is 512 bytes. Data are organized
cylinder-wise and the addressing format is <cylinder no., surface no., sector no.> . A file of size 42797 KB
is stored in the disk and the starting disk location of the file is <1200, 9, 40>. What is the cylinder number
of the last sector of the file, if it is stored in a contiguous manner?
1281
B 1282
C 1283
1284
Input Output Systems GATE CS 2013
Discuss it
Question 6 Explanation:
42797 KB = 42797 * 1024 / 512 = 85594 sectors. Each cylinder holds 16 * 64 = 1024 sectors. The file starts at <1200, 9, 40>, i.e. 9 * 64 + 40 = 616 sectors into cylinder 1200, so 1024 - 616 = 408 sectors of the file fit in cylinder 1200. The remaining 85594 - 408 = 85186 sectors need 85186 / 1024 = 83.19 cylinders: 83 full cylinders (1201-1283) plus 194 sectors that spill into the next one. Hence the last sector of the file lies in cylinder 1284.
Question 7
WRONG
A file system with 300 GByte disk uses a file descriptor with 8 direct block addresses, 1 indirect block
address and 1 doubly indirect block address. The size of each disk block is 128 Bytes and the size of
each disk block address is 8 Bytes. The maximum possible file size in this file system is
3 Kbytes
35 Kbytes
C 280 Bytes
D Dependent on the size of the disk
Question 7 Explanation:
See http://www.geeksforgeeks.org/operating-systems-set-5/
Question 8
WRONG
A computer handles several interrupt sources of which the following are relevant for this question.
Interrupt from CPU temperature sensor (raises interrupt if CPU temperature is too high)
Interrupt from Mouse (raises interrupt if the mouse is moved or a button is pressed)
Interrupt from Keyboard (raises interrupt when a key is pressed or released)
Interrupt from Hard Disk (raises interrupt when a disk read is completed)
Which one of these will be handled at the HIGHEST priority?
Question 8 Explanation:
Higher priority interrupt levels are assigned to requests which, if delayed or interrupted, could have
serious consequences. Devices with high speed transfer such as magnetic disks are given high priority,
and slow devices such as keyboard receive low priority (Source: Computer System Architecture by Morris
Mano) Interrupt from CPU temperature sensor would have serious consequences if ignored.
Question 9
CORRECT
An application loads 100 libraries at start-up. Loading each library requires exactly one disk access. The
seek time of the disk to a random location is given as 10 ms. Rotational speed of disk is 6000 rpm. If all
100 libraries are loaded from random locations on the disk, how long does it take to load all libraries?
(The time to transfer data from the disk block once the head has been positioned at the start of the block
may be neglected)
A 0.50 s
1.50 s
C 1.25 s
D 1.00 s
Question 9 Explanation:
See Question 3 of http://www.geeksforgeeks.org/operating-systems-set-6/
Question 10
CORRECT
A CPU generally handles an interrupt by executing an interrupt service routine
By checking the interrupt register after finishing the execution of the current instruction.
Question 10 Explanation:
Hardware detects interrupt immediately, but CPU acts only after its current instruction. This is followed to
ensure integrity of instructions.
Question 11
WRONG
A hard disk has 63 sectors per track, 10 platters each with 2 recording surfaces and 1000 cylinders. The
address of a sector is given as a triple (c, h, s), where c is the cylinder number, h is the surface number
and s is the sector number. Thus, the 0th sector is addressed as (0, 0, 0), the 1st sector as (0, 0, 1), and
so on The address <400,16,29> corresponds to sector number:
505035
B 505036
505037
D 505038
Question 11 Explanation:
Overview: The data in a hard disk is arranged in the shown manner. The smallest division is the sector. Sectors are combined to make a track. A cylinder is formed by combining the tracks which lie at the same position on all the platters. The read/write head has to reach a particular track and then wait for the rotation of the platter so that the required sector comes under it. Here, each platter has two recording surfaces, which means the r/w head can access the platter from two sides, upper and lower.
So <400, 16, 29> means 400 complete cylinders (0-399) have been passed; each cylinder has 20 surfaces (10 platters * 2 surfaces each) and each surface holds 63 sectors per cylinder. Sectors passed in cylinders 0-399 = 400 * 20 * 63. In cylinder 400 we have passed 16 surfaces (0-15), each of which again contains 63 sectors, i.e. 16 * 63 sectors. Now on surface 16 we are at sector 29. So sector no. = 400*20*63 + 16*63 + 29 = 504000 + 1008 + 29 = 505037. Reference :https://www.ilbe.com/1144674842 This solution is contributed by Shashank Shanker khare.
Question 12
WRONG
Consider the data given in previous question. The address of the 1039th sector is
(0, 15, 31)
Question 12 Explanation:
You can also see the image uploaded in the previous question.
(a) <0,15,31>: 0 cylinders passed gives 0*20*63 (each cylinder has 20 surfaces and each surface has 63 sectors), plus 15 surfaces passed (0-14) gives 15*63, plus sector 31. Sector no. = 0*20*63 + 15*63 + 31 = 976, which is not equal to 1039.
(b) <0,16,30>: 0*20*63 + 16*63 (surfaces 0-15, each with 63 sectors) + 30. Sector no. = 0*20*63 + 16*63 + 30 = 1038, which is not equal to 1039.
(c) <0,16,31>: 0*20*63 + 16*63 + 31 = 1039, which is equal to 1039. Hence, option (c) is correct.
(d) <0,17,31>: 0*20*63 + 17*63 (surfaces 0-16) + 31 = 1102, which is not equal to 1039.
This solution is contributed by Shashank Shanker khare.
Question 13
WRONG
The data blocks of a very large file in the Unix file system are allocated using
A contiguous allocation
B linked allocation
indexed allocation
an extension of indexed allocation
Input Output Systems GATE CS 2008
Discuss it
Question 13 Explanation:
The Unix file system uses an extension of indexed allocation. It uses direct blocks, single indirect blocks,
double indirect blocks and triple indirect blocks. Following diagram shows implementation of Unix file
system. The diagram is taken from Operating System Concept book.
Question 14
WRONG
For a magnetic disk with concentric circular tracks, the seek latency is not linearly proportional to the seek
distance due to
Question 14 Explanation:
Whenever the head moves from one track to another, its speed and direction change, which is nothing but a change in motion, i.e. the case of inertia. So the answer is B. This explanation has been contributed by Abhishek Kumar. See Disk drive performance characteristics_Seek_time
Question 15
WRONG
Which of the following statements about synchronous and asynchronous I/O is NOT true?
An ISR is invoked on completion of I/O in synchronous I/O but not in asynchronous I/O
In both synchronous and asynchronous I/O, an ISR (Interrupt Service Routine) is invoked after
B completion of the I/O
A process making a synchronous I/O call waits until I/O is complete, but a process making an
C asynchronous I/O call does not wait for completion of the I/O
In the case of synchronous I/O, the process waiting for the completion of I/O is woken up by the ISR that is invoked after the completion of I/O
Question 15 Explanation:
There are two types of input/output (I/O) synchronization: synchronous I/O and asynchronous I/O.
Asynchronous I/O is also referred to as overlapped I/O. In synchronous file I/O, a thread starts an I/O
operation and immediately enters a wait state until the I/O request has completed. An ISR will be invoked
after the completion of I/O operation and it will place process from block state to ready state. A thread
performing asynchronous file I/O sends an I/O request to the kernel by calling an appropriate function. If
the request is accepted by the kernel, the calling thread continues processing another job until the kernel
signals to the thread that the I/O operation is complete. It then interrupts its current job and processes the
data from the I/O operation as necessary. See Question 3 of http://www.geeksforgeeks.org/operating-
systems-set-10/ Reference:https://msdn.microsoft.com/en-
us/library/windows/desktop/aa365683%28v=vs.85%29.aspx This solution is contributed by Nitika Bansal
Question 16
WRONG
Consider a disk pack with 16 surfaces, 128 tracks per surface and 256 sectors per track. 512 bytes of
data are stored in a bit serial manner in a sector. The capacity of the disk pack and the number of bits
required to specify a particular sector in the disk are respectively:
256 Mbyte, 19 bits
64 Gbyte, 28 bit
Input Output Systems GATE-CS-2007
Discuss it
Question 16 Explanation:
See Question 1 of http://www.geeksforgeeks.org/operating-systems-set-12/
Question 17
CORRECT
Suppose a disk has 201 cylinders, numbered from 0 to 200. At some time the disk arm is at cylinder 100,
and there is a queue of disk access requests for cylinders 30, 85, 90, 100, 105, 110, 135 and 145. If
Shortest-Seek Time First (SSTF) is being used for scheduling the disk access, the request for cylinder 90
is serviced after servicing ____________ number of requests.
A 1
B 2
D 4
Question 17 Explanation:
In Shortest-Seek-First algorithm, request closest to the current position of the disk arm and head is
handled first. In this question, the arm is currently at cylinder number 100. Now the requests come in the
queue order for cylinder numbers 30, 85, 90, 100, 105, 110, 135 and 145. The disk will service that
request first whose cylinder number is closest to its arm. Hence 1st serviced request is for cylinder no 100
( as the arm is itself pointing to it ), then 105, then 110, and then the arm comes to service request for
cylinder 90. Hence before servicing the request for cylinder 90, the disk would have serviced 3 requests.
Hence option C.
Question 18
WRONG
A device with data transfer rate 10 KB/sec is connected to a CPU. Data is transferred byte-wise. Let the
interrupt overhead be 4 μsec. The byte transfer time between the device interface register and CPU or
memory is negligible. What is the minimum performance gain of operating the device under interrupt
mode over operating it under program controlled mode?
A 15
25
C 35
45
Input Output Systems GATE-CS-2005
Discuss it
Question 18 Explanation:
In programmed I/O, the CPU polls continuously and stays busy for the entire inter-byte gap of 1 byte / (10 KB/s) = 100 μs. In interrupt mode, the CPU is busy only for the interrupt overhead of 4 μs per byte. Minimum performance gain = 100 μs / 4 μs = 25.
Question 19
WRONG
Consider a disk drive with the following specifications: 16 surfaces, 512 tracks/surface, 512 sectors/track,
1 KB/sector, rotation speed 3000 rpm. The disk is operated in cycle stealing mode whereby whenever a
4-byte word is ready it is sent to memory; similarly, for writing, the disk interface reads a 4 byte word from
the memory in each DMA cycle. Memory cycle time is 40 nsec. The maximum percentage of time that the
CPU gets blocked during DMA operation is:
10
25
C 40
D 50
Question 19 Explanation:
Time taken for 1 rotation = 60/3000 s = 20 ms, during which a full track of 512*1024 bytes passes under the head. Time to read 4 bytes = 20 ms * 4 / (512*1024) ≈ 153 ns, which is approximately 4 memory cycles (160 ns). The DMA steals one 40 ns memory cycle per word, so the maximum percentage of time the CPU gets blocked = 40*100/160 = 25.
Question 20
CORRECT
Consider an operating system capable of loading and executing a single sequential user process at a
time. The disk head scheduling algorithm used is First Come First Served (FCFS). If FCFS is replaced by
Shortest Seek Time First (SSTF), claimed by the vendor to give 50% better benchmark results, what is
the expected improvement in the I/O performance of user programs?
A 50%
B 40%
C 25%
0%
Input Output Systems GATE-CS-2004
Discuss it
Question 20 Explanation:
Since Operating System can execute a single sequential user process at a time, the disk is accessed in
FCFS manner always. The OS never has a choice to pick an IO from multiple IOs as there is always one
IO at a time
Question 21
CORRECT
A Unix-style i-node has 10 direct pointers and one single, one double and one triple indirect pointers. Disk
block size is 1 Kbyte, disk block address is 32 bits, and 48-bit integers are used. What is the maximum
possible file size ?
A 2^24 bytes
B 2^32 bytes
2^34 bytes
D 2^48 bytes
Question 21 Explanation:
A disk block address is 32 bits = 4 bytes, so each 1 KB block holds 2^8 addresses.
Maximum file size = (10 + 2^8 + 2^16 + 2^24) blocks ≈ 2^24 blocks = 2^24 * 2^10 bytes = 2^34 bytes
Question 22
WRONG
A hard disk with a transfer rate of 10 Mbytes/ second is constantly transferring data to memory using
DMA. The processor runs at 600 MHz, and takes 300 and 900 clock cycles to initiate and complete DMA
transfer respectively. If the size of the transfer is 20 Kbytes, what is the percentage of processor time
consumed for the transfer operation ?
A 5.0%
B 1.0%
0.5%
0.1%
Input Output Systems GATE-CS-2004
Discuss it
Question 22 Explanation:
Transfer rate = 10 MB/s and data = 20 KB = 20 * 2^10 bytes, so transfer time = (20 * 2^10)/(10 * 2^20) s = 2 ms. The processor runs at 600 MHz = 600 * 10^6 cycles/s, and the cycles required by the CPU for the DMA transfer are 300 + 900 = 1200, so time = 1200/(600 * 10^6) s = 0.002 ms. Percentage of processor time = 0.002/2 * 100 = 0.1%. So (D) is the correct option.
Question 23
WRONG
Using a larger block size in a fixed block size file system leads to :
better disk throughput but poorer disk space utilization
Question 23 Explanation:
Using larger block size makes disk utilization poorer as more space would be wasted for small data in a
block. It may make throughput better as the number of blocks would decrease. A larger block size
guarantees that more data from a single file can be written or read at a time into a single block without
having to move the disk's head to another spot on the disk. The less time you spend moving your heads
across the disk, the more continuous reads/writes per second. The smaller the block size, the more
frequent it is required to move before a read/write can occur. Larger block size means less number of
blocks to fetch and hence better throughput. But larger block size also means space is wasted when only
small size is required and hence poor utilization.
This solution is contributed by Nitika Bansal
Question 24
WRONG
Which of the following requires a device driver?
Register
B Cache
C Main memory
Disk
Input Output Systems GATE-CS-2001
Discuss it
Question 24 Explanation:
A disk driver is software which enables communication between internal hard disk (or drive) and
computer.
It allows a specific disk drive to interact with the remainder of the computer.
Please comment below if you find anything wrong in the above post.
Question 25
WRONG
A graphics card has on board memory of 1 MB. Which of the following modes can the card not support?
1600 x 400 resolution with 256 colours on a 17-inch monitor
1600 x 400 resolution with 16 million colours on a 14-inch monitor
Question 25 Explanation:
See question 3 of http://www.geeksforgeeks.org/operating-systems-set-1/
Question 26
WRONG
Consider the situation in which the disk read/write head is currently located at track 45 (of tracks 0-255)
and moving in the positive direction. Assume that the following track requests have been made in this
order: 40, 67, 11, 240, 87. What is the order in which optimised C-SCAN would service these requests
and what is the total seek distance?
A 600
B 810
505
550
Input Output Systems GATE-CS-2015 (Mock Test)
Discuss it
Question 26 Explanation:
Circular scanning works just like the elevator to some extent. It begins its scan toward the nearest end and works its way all the way to the end of the system. Once it hits the bottom or top it jumps to the other end and moves in the same direction. Keep in mind that the huge jump doesn't count as a head movement. Solution: Disk queue: 40, 67, 11, 240, 87, and the disk head is currently located at track 45. The order in which optimised C-SCAN would service these requests is shown by the following diagram.
Question 27
WRONG
Suppose the following disk request sequence (track numbers) for a disk with 100 tracks is given: 45, 20,
90, 10, 50, 60, 80, 25, 70. Assume that the initial position of the R/W head is on track 50. The additional
distance that will be traversed by the R/W head when the Shortest Seek Time First (SSTF) algorithm is
used compared to the SCAN (Elevator) algorithm (assuming that SCAN algorithm moves towards 100
when it starts execution) is _________ tracks
8
B 9
10
D 11
Question 27 Explanation:
In Shortest Seek Time First (SSTF), the request closest to the current position of the head is serviced next. In the SCAN (or Elevator) algorithm, requests are serviced only in the current direction of arm
movement until the arm reaches the edge of the disk. When this happens, the direction of the arm
reverses, and the requests that were remaining in the opposite direction are serviced, and so on.
Given a disk with 100 tracks
And Sequence 45, 20, 90, 10, 50, 60, 80, 25, 70.
SCAN (moving towards 100) services the requests in this order, with head movement:
50 0
60 10
70 10
80 10
90 10
45 45
25 20
20 5
10 10
-----------------------------------
Total: 120
SSTF services 50, 45, 60, 70, 80, 90, 25, 20, 10 with movement 0 + 5 + 15 + 10 + 10 + 10 + 65 + 5 + 10 = 130.
Additional distance traversed by SSTF = 130 - 120 = 10 tracks.
Question 28
WRONG
Consider a disk pack with a seek time of 4 milliseconds and rotational speed of 10000 rotations per
minute (RPM). It has 600 sectors per track and each sector can store 512 bytes of data. Consider a file
stored in the disk. The file contains 2000 sectors. Assume that every sector access necessitates a seek,
and the average rotational latency for accessing each sector is half of the time for one complete rotation.
The total time (in milliseconds) needed to read the entire file is _________.
14020
B 14000
25030
D 15000
To access each sector, the head needs a seek plus the average rotational latency. Rotation time = 60/10000 s = 6 ms, so the average rotational latency = 3 ms, giving 4 ms + 3 ms = 7 ms per sector. Transfer time per sector = 6 ms / 600 = 0.01 ms. Total time = 2000 * (7 + 0.01) ms
= 14020 ms
Question 29
CORRECT
Consider a typical disk that rotates at 15000 rotations per minute (RPM) and has a transfer rate of 50 * 10^6 bytes/sec. If the average seek time of the disk is twice the average rotational delay and the controller's transfer time is 10 times the disk transfer time, the average time (in milliseconds) to read or write a 512-byte sector of the disk is _____________
6.1
Input Output Systems GATE-CS-2015 (Set 2)
Discuss it
Question 29 Explanation:
Disk latency = Seek Time + Rotation Time + Transfer Time + Controller Overhead
Seek Time depends on how many tracks the arm moves and the seek speed of the disk.
Rotation Time depends on the rotational speed and how far the sector is from the head.
Transfer Time depends on the data rate (bandwidth) of the disk (bit density) and the size of the request.
One rotation takes 60/15000 s = 4 ms, so the average rotational delay = 2 ms, and the average seek time = 2 * 2 = 4 ms. Disk transfer time = 512 / (50 * 10^6) s ≈ 0.01 ms; controller overhead = 10 * 0.01 = 0.1 ms. Average time = 4 + 2 + 0.01 + 0.1 ≈ 6.1 ms.
Question 30
CORRECT
Consider a disk queue with requests for I/O to blocks on cylinders 47, 38, 121, 191, 87, 11, 92, 10. The C-
LOOK scheduling algorithm is used. The head is initially at cylinder number 63, moving towards larger
cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head
movement (in number of cylinders) incurred while servicing these requests is: Note : This question was
asked as Numerical Answer Type.
A 346
165
C 154
D 173
Question 30 Explanation:
The head movement would be :
63 => 87 24 movements
87 => 92 5 movements
92 => 121 29 movements
121 => 191 70 movements
191 => 10 jump (not counted in C-LOOK)
10 => 11 1 movement
11 => 38 27 movements
38 => 47 9 movements
Total = 24 + 5 + 29 + 70 + 1 + 27 + 9 = 165
Question 31
WRONG
Which of the following DMA transfer modes and interrupt handling mechanisms will enable the highest I/O
band-width?
Transparent DMA and Polling interrupts
1) The data blocks of a very large file in the Unix file system are allocated using
(A) contiguous allocation
(B) linked allocation
(C) indexed allocation
(D) an extension of indexed allocation
Answer (D)
The Unix file system uses an extension of indexed allocation. It uses direct blocks, single indirect blocks,
double indirect blocks and triple indirect blocks. Following diagram shows implementation of Unix file
system. The diagram is taken from Operating System Concept book.
2) The P and V operations on counting semaphores, where s is a counting semaphore, are defined
as follows:
P(s) : s = s - 1;
V(s) : s = s + 1;
Assume that Pb and Vb are the wait and signal operations on binary semaphores. Two
binary semaphores Xb and Yb are used to implement the semaphore operations P(s) and V(s) as
follows:
P(s) : Pb(Xb);
       s = s - 1;
       if (s < 0) {
           Vb(Xb) ;
           Pb(Yb) ;
       }
       else Vb(Xb);

V(s) : Pb(Xb) ;
       s = s + 1;
       if (s <= 0) Vb(Yb) ;
       Vb(Xb) ;
The initial values of Xb and Yb are respectively:
Answer (C)
Both P(s) and V(s) perform Pb(Xb) as the first step. If Xb is 0, then all processes executing
these operations will be blocked. Therefore, Xb must be 1.
If Yb is 1, it may become possible that two processes execute P(s) one after the other (implying 2
processes in the critical section). Consider the case when s = 1, Yb = 1. So Yb must be 0.
3) Which of the following statements about synchronous and asynchronous I/O is NOT true?
(A) An ISR is invoked on completion of I/O in synchronous I/O but not in asynchronous I/O
(B) In both synchronous and asynchronous I/O, an ISR (Interrupt Service Routine) is invoked after
completion of the I/O
(C) A process making a synchronous I/O call waits until I/O is complete, but a process making an
asynchronous I/O call does not wait for completion of the I/O
(D) In the case of synchronous I/O, the process waiting for the completion of I/O is woken up by the ISR
that is invoked after the completion of I/O
Answer (A)
In both Synchronous and Asynchronous, an interrupt is generated on completion of I/O. In Synchronous,
interrupt is generated to wake up the process waiting for I/O. In Asynchronous, interrupt is generated to
inform the process that the I/O is complete and it can process the data from the I/O operation.
See this for more details.
Peterson's Algorithm for Mutual Exclusion | Set 1 (Basic C implementation)
Problem: Given 2 processes i and j, you need to write a program that can guarantee mutual exclusion
between the two without any additional hardware support.
Solution: There can be multiple ways to solve this problem, but most of them require additional hardware
support. The simplest and most popular way to do this is by using Peterson's Algorithm for mutual
exclusion. It was developed by Peterson in 1981, though the initial work in this direction was done by
Theodorus Jozef Dekker, who came up with Dekker's algorithm in 1960, which was later refined by
Peterson and came to be known as Peterson's Algorithm.
Basically, Peterson's algorithm provides guaranteed mutual exclusion by using only shared memory. It
uses two ideas in the algorithm: a per-thread flag that expresses the thread's desire to enter the critical section, and a turn variable that decides who waits when both threads want to enter.
Prerequisite : Multithreading in C
Explanation:
The idea is that first a thread expresses its desire to acquire the lock by setting flag[self] = 1, and then
gives the other thread a chance to acquire the lock. If the other thread desires to acquire the lock, it gets
the lock and then passes the chance back to the 1st thread. If it does not desire to get the lock, the while
loop breaks and the 1st thread gets the chance.
Implementation in C language
// Filename: peterson_spinlock.c
#include <stdio.h>
#include <pthread.h>
#include "mythreads.h"

#define MAX 1000000

int flag[2];    // flag[i] == 1 means thread i wants the lock
int turn;       // which thread backs off when both want the lock
int ans = 0;    // shared counter protected by the lock

void lock_init() {
    flag[0] = flag[1] = 0;
    turn = 0;
}

// acquire lock
void lock(int self) {
    flag[self] = 1;
    turn = 1 - self;
    while (flag[1 - self] == 1 && turn == 1 - self)
        ;  // spin until the other thread gives up the lock
}

// release the lock
void unlock(int self) {
    flag[self] = 0;
}

void* func(void *arg) {
    int self = (int)(long)arg;
    int i = 0;
    printf("Thread Entered: %d\n", self);
    for (i = 0; i < MAX; i++) {
        lock(self);
        ans++;   // critical section
        unlock(self);
    }
    return NULL;
}

// Driver code
int main() {
    pthread_t p1, p2;
    lock_init();
    Pthread_create(&p1, NULL, func, (void*)0);
    Pthread_create(&p2, NULL, func, (void*)1);
    Pthread_join(p1, NULL);
    Pthread_join(p2, NULL);
    printf("Actual Count: %d | Expected Count: %d\n", ans, MAX * 2);
    return 0;
}
// mythreads.h (A wrapper header file with assert
// statements)
#ifndef __MYTHREADS_h__
#define __MYTHREADS_h__
#include <pthread.h>
#include <assert.h>
#include <sched.h>

void Pthread_mutex_lock(pthread_mutex_t *m) {
    int rc = pthread_mutex_lock(m);
    assert(rc == 0);
}
void Pthread_mutex_unlock(pthread_mutex_t *m) {
    int rc = pthread_mutex_unlock(m);
    assert(rc == 0);
}
void Pthread_create(pthread_t *t, const pthread_attr_t *attr,
                    void *(*fn)(void *), void *arg) {
    int rc = pthread_create(t, attr, fn, arg);
    assert(rc == 0);
}
void Pthread_join(pthread_t thread, void **value_ptr) {
    int rc = pthread_join(thread, value_ptr);
    assert(rc == 0);
}
#endif // __MYTHREADS_h__
Output:
Thread Entered: 1
Thread Entered: 0
In layman terms, when a thread was waiting for its turn, it ended in a long while loop which tested the
condition millions of times per second thus doing unnecessary computation. There is a better way to wait,
and it is known as yield.
To understand what it does, we need to dig deep into how the Process scheduler works in Linux. The idea
mentioned here is a simplified version of the scheduler, the actual implementation has lots of
complications.
Suppose a waiting thread gets a time slice of, say, 100 CPU clock cycles and spends the whole slice spinning on the lock condition; this is a complete waste of the 100 CPU clock cycles. To avoid this, we mutually give up the CPU time
slice, i.e. yield, which essentially ends this time slice and the scheduler picks up the next process to run.
Now, we test our condition once, then we give up the CPU. Considering our test takes 25 clock cycles, we
save 75% of our computation in a time slice. To put this graphically,
Considering the processor clock speed as 1MHz this is a lot of saving!.
Different distributions provide different function to achieve this functionality. Linux provides sched_yield().
flag[self] = 1;
turn = 1 - self;
while (flag[1 - self] == 1 && turn == 1 - self)
    // give up the CPU instead of spinning: sched_yield() call
    sched_yield();
Memory fence.
The code in the earlier tutorial might have worked on most systems, but it was not 100% correct. The logic
was perfect, but most modern CPUs employ performance optimizations that can result in out-of-order
execution. This reordering of memory operations (loads and stores) normally goes unnoticed within a
single thread of execution, but can cause unpredictable behaviour in concurrent programs.
while (f == 0);
print x;
In the above example, the compiler considers the 2 statements as independent of each other and thus
tries to increase the code efficiency by re-ordering them, which can lead to problems for concurrent
programs. To avoid this we place a memory fence to give hint to the compiler about the possible
relationship between the statements across the barrier.
flag[self] = 1;
turn = 1 - self;
while (flag[1 - self] == 1 && turn == 1 - self)
    sched_yield();
The order of the first two statements has to be exactly the same in order for the lock to work, otherwise it will end up in a deadlock condition.
To ensure this, compilers provide an instruction that prevents reordering of statements across this barrier. In
case of gcc, it is __sync_synchronize().
// Filename: peterson_yieldlock_memoryfence.c
#include <stdio.h>
#include <pthread.h>
#include "mythreads.h"

#define MAX 1000000

int flag[2];
int turn;
int ans = 0;

void lock_init() {
    flag[0] = flag[1] = 0;
    turn = 0;
}

// to acquire lock
void lock(int self) {
    flag[self] = 1;
    turn = 1 - self;
    // memory fence: keep the two stores above from being reordered
    __sync_synchronize();
    while (flag[1 - self] == 1 && turn == 1 - self)
        sched_yield();   // give up the CPU while waiting
}

// release the lock
void unlock(int self) {
    flag[self] = 0;
}

void* func(void *arg) {
    int self = (int)(long)arg;
    int i = 0;
    printf("Thread Entered: %d\n", self);
    for (i = 0; i < MAX; i++) {
        lock(self);
        ans++;
        unlock(self);
    }
    return NULL;
}

// Driver code
int main() {
    pthread_t p1, p2;
    lock_init();
    Pthread_create(&p1, NULL, func, (void*)0);
    Pthread_create(&p2, NULL, func, (void*)1);
    Pthread_join(p1, NULL);
    Pthread_join(p2, NULL);
    printf("Actual Count: %d | Expected Count: %d\n", ans, MAX * 2);
    return 0;
}
// mythreads.h (A wrapper header file with assert
// statements)
#ifndef __MYTHREADS_h__
#define __MYTHREADS_h__
#include <pthread.h>
#include <assert.h>
#include <sched.h>

void Pthread_mutex_lock(pthread_mutex_t *m) {
    int rc = pthread_mutex_lock(m);
    assert(rc == 0);
}
void Pthread_mutex_unlock(pthread_mutex_t *m) {
    int rc = pthread_mutex_unlock(m);
    assert(rc == 0);
}
void Pthread_create(pthread_t *t, const pthread_attr_t *attr,
                    void *(*fn)(void *), void *arg) {
    int rc = pthread_create(t, attr, fn, arg);
    assert(rc == 0);
}
void Pthread_join(pthread_t thread, void **value_ptr) {
    int rc = pthread_join(thread, value_ptr);
    assert(rc == 0);
}
#endif // __MYTHREADS_h__
Output:
Thread Entered: 1
Thread Entered: 0
Types of OS:
Batch OS: A set of similar jobs are stored in the main memory for execution. A job
gets assigned to the CPU, only when the execution of the previous job completes.
Multiprogramming OS: The main memory consists of jobs waiting for CPU time.
The OS selects one of the processes and assigns it the CPU time. Whenever the
executing process needs to wait for any other operation (like I/O), the OS selects
another process from the job queue and assigns it the CPU. This way, the CPU is
never kept idle and the user gets the flavor of getting multiple tasks done at once.
Multitasking OS: Multitasking OS combines the benefits of Multiprogramming OS
and CPU scheduling to perform quick switches between jobs. The switch is so quick
that the user can interact with each program as it runs
Time Sharing OS: Time sharing systems require interaction with the user to instruct
the OS to perform various tasks. The OS responds with an output. The instructions are
usually given through an input device like the keyboard.
Real Time OS : Real Time OS are usually built for dedicated systems to accomplish
a specific set of tasks within deadlines.
Threads
A thread is a lightweight process and forms the basic unit of CPU utilization. A process can
perform more than one task at the same time by including multiple threads.
A thread has its own program counter, register set, and stack
A thread shares with other threads of the same process the code section, the
data section, files and signals.
A new child process of a given process can be created using the fork()
system call. A process with n fork() system calls generates 2^n - 1 child processes.
There are two types of threads:
User threads
Kernel threads
User threads are implemented by users; kernel threads are implemented by the OS.
User threads are easy to implement; kernel threads are complicated to implement.
If one user-level thread performs a blocking operation, the entire process is blocked; if one
kernel thread performs a blocking operation, other threads of the same process can continue.
Process:
A process is a program under execution. The value of program counter (PC) indicates the
address of the current instruction of the process being executed. Each process is
represented by a Process Control Block (PCB).
Process Scheduling:
Waiting Time(W.T): Time Difference between turn around time and burst time.
Waiting Time = Turn Around Time - Burst Time
A good scheduling algorithm aims for max throughput [number of processes that complete their execution per time unit].
Shortest Job First (SJF): The process with the shortest burst time is scheduled first.
Round Robin Scheduling: Each process is assigned a fixed time in cyclic way.
Highest Response Ratio Next (HRRN): In this scheduling, the process with the highest
response ratio is scheduled, where Response Ratio = (Waiting Time + Burst Time) / Burst Time.
This algorithm avoids starvation.
Multilevel Queue Scheduling: According to the priority of a process, processes are placed in
different queues. Generally, high-priority processes are placed in the top-level queue. Only
after completion of processes from the top-level queue are lower-level queued processes
scheduled.
Multi level Feedback Queue Scheduling: It allows the process to move in between
queues. The idea is to separate processes according to the characteristics of their CPU
bursts. If a process uses too much CPU time, it is moved to a lower-priority queue.
2) Both SJF and Shortest Remaining Time First algorithms may cause starvation. Consider a
situation where a long process is in the ready queue and shorter processes keep coming.
3) If the time quantum for Round Robin scheduling is very large, then it behaves the same as
FCFS scheduling.
4) SJF is optimal in terms of average waiting time for a given set of processes. SJF gives
minimum average waiting time, but the problem with SJF is how to know/predict the burst
time of the next job.
Critical Section: The portion of the code in the program where shared variables are
accessed and/or updated.
Remainder Section: The remaining portion of the program excluding the Critical Section.
Race Condition: The final output of the code depends on the order in which the shared
variables are accessed. This is termed a race condition.
A solution for the critical section problem must satisfy the following three conditions:
1. Mutual Exclusion: Only one process can be inside its critical section at a time.
2. Progress: If no process is executing in its critical section, a process that wishes to enter
must be able to do so without indefinite postponement.
3. Bounded Waiting: There is a bound on the number of times other processes may enter their
critical sections after a process has requested entry and before that request is granted.
Synchronization Tools
Semaphores: A semaphore is an integer variable that is accessed only through two atomic
operations, wait () and signal (). An atomic operation is executed in a single CPU time slice
without any pre-emption.
Deadlock
A situation where a set of processes are blocked because each process is holding a
resource and waiting for another resource acquired by some other process.
Methods of handling deadlock:
1) Deadlock prevention or avoidance: Do not allow the system to enter a deadlock state.
2) Deadlock detection and recovery: Let deadlock occur, then do preemption to handle it
once it has occurred.
3) Ignore the problem all together: If deadlock is very rare, then let it happen and reboot the
system. This is the approach that both Windows and UNIX take.
Banker's Algorithm:
This deadlock-avoidance algorithm handles multiple instances of the same resource type; a
request is granted only if it leaves the system in a safe state.
Memory Management:
Swapping: In a multiprogramming system, processes that have used up their time slice can be
swapped out of main memory (and later swapped back in).
1: Single Partition Allocation Schemes: The memory is divided into two parts. One part
is kept for use by the OS and the other for use by the users.
1. Paging: The physical memory is divided into equal-sized frames. The logical (virtual)
memory is divided into fixed-size pages. The size of a frame is equal to the size of a
page, so any page can be placed in any free frame.
Page Fault
A page fault is a type of interrupt, raised by the hardware when a running program accesses
a memory page that is mapped into the virtual address space, but not loaded in physical
memory.
First In First Out (FIFO): the oldest page in memory is replaced. Example: consider the page
reference string 1, 3, 0, 3, 5, 6 with 3 page frames.
Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots -> 3
page faults.
When 3 comes, it is already in memory, so -> 0 page faults.
Then 5 comes; it is not in memory, so it replaces the oldest page slot, i.e. 1 -> 1
page fault.
Finally 6 comes; it is also not in memory, so it replaces the oldest page slot, i.e. 3
-> 1 page fault. Total: 5 page faults.
Belady's anomaly
Belady's anomaly shows that it is possible to have more page faults when increasing the
number of page frames while using the First In First Out (FIFO) page replacement
algorithm. For example, for the reference string 3 2 1 0 3 2 4 3 2 1 0 4 and 3 slots,
we get 9 total page faults, but if we increase the number of slots to 4, we get
10 page faults.
Optimal Page replacement
In this algorithm, the page that will not be used for the longest duration of time in the
future is replaced. Example: consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2,
3, 0, 3, 2 with 4 page frames.
Initially all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots -> 4 page
faults.
0 is already in memory, so -> 0 page faults.
When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of time
in the future -> 1 page fault.
0 is already in memory, so -> 0 page faults.
4 takes the place of 1 -> 1 page fault.
For the rest of the reference string -> 0 page faults, because the pages are already
in memory. Total: 6 page faults.
Optimal page replacement is perfect but not possible in practice, as the operating system
cannot know future requests. Its use is to set up a benchmark against which other
replacement algorithms can be analyzed.
Least Recently Used (LRU) page replacement: the page that has not been used for the longest
time in the past is replaced. Say the page reference string is 7, 0, 1, 2, 0, 3, 0, 4, 2, 3,
0, 3, 2 and initially we have 4 page slots empty.
Initially all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots -> 4 page
faults.
0 is already in memory, so -> 0 page faults.
When 3 comes, it takes the place of 7 because 7 is the least recently used -> 1 page fault.
0 is already in memory, so -> 0 page faults.
4 takes the place of 1 -> 1 page fault.
For the rest of the reference string -> 0 page faults, because the pages are already
in memory. Total: 6 page faults.