Operating System
Q.2
Analysis which determines the meaning of a statement once its grammatical structure
becomes known is termed as
(A) Semantic analysis
(B) Syntax analysis
(C) Regular analysis
(D) General analysis
Ans: (A)
Q.3
Q.4
Q.5
Q.6
(B) DOS
(D) Application software
Ans: (A)
Q.7
(B) Instruction
(D) Function
DC14
Q.8
Q.9
Q.10
The scheduling in which CPU is allocated to the process with least CPU-burst time
is called
(A) Priority Scheduling
(B) Shortest job first Scheduling
(C) Round Robin Scheduling
(D) Multilevel Queue Scheduling
Ans: (B)
Q.11
Q.12
Q.13
Q.14
Q.15
Program preemption is
(A) forced de allocation of the CPU from a program which is executing on the
CPU.
(B) release of CPU by the program after completing its task.
(C) forced allotment of CPU by a program to itself.
(D) a program terminating itself due to detection of an error.
Ans: (A)
Q.16
An assembler is
(A) programming language dependent.
(B) syntax dependent.
(C) machine dependent.
(D) data dependent.
Ans: (C)
Q.17
Q.18
Ans: (C)
Q.19
Q.20
Which of the following approaches does not require knowledge of the system state?
(A) deadlock detection.
(B) deadlock prevention.
(C) deadlock avoidance.
(D) none of the above.
Ans: (D)
Q.21
Q.22
Q.23
An imperative statement
(A) Reserves areas of memory and associates names with them
(B) Indicates an action to be performed during execution of assembled program
(C) Indicates an action to be performed during optimization
(D) None of the above
Ans: (B)
Q.24
Ans: (C)
Q.25
Q.26
Throughput of a system is
(A) Number of programs processed by it per unit time
(B) Number of times the program is invoked by the system
(C) Number of requests made to a program by the system
(D) None of the above
Ans: (A)
Q.27
Q.28
Ans: (D)
Q.29
Which amongst the following is valid syntax of the Fork and Join Primitive?
(A) Fork <label>
    Join <var>
(B) Fork <label>
    Join <label>
(C) For <var>
    join <var>
(D) Fork <var>
    Join <var>
Ans: (A)
Q.30
Q.31
Q.32
Q.33.
Q.34
A linker program
(A) places the program in the memory for the purpose of execution.
(B) relocates the program to execute from the specific memory area
allocated to it.
(C) links the program with other programs needed for its execution.
(D) interfaces the program with the entities generating its input data.
Ans: (C)
Q.35
Q.36
Q.37
Q.38
Locality of reference implies that the page reference being made by a process
(A) will always be to the page used in the previous page reference.
(B) is likely to be the one of the pages used in the last few page references.
(C) will always be to one of the pages existing in memory.
(D)will always lead to a page fault.
Ans: (B)
Q.39
Q.40
Q.41
Q.42
Q.43
Q.44
Which of the following is not a key piece of information stored in a single page
table entry, assuming pure paging and virtual memory?
(A) Frame number
(B) A bit indicating whether the page is in physical memory or on the disk
(C) A reference for the disk block that stores the page
(D) None of the above
Ans: (C)
Q.45
Q.46
Q.47
Q.48
Consider a program with a linked origin of 5000. Let the memory area allocated to it
have the start address of 70000. Which amongst the following will be the value
to be loaded in relocation register?
(A) 20000
(B) 50000
(C) 70000
(D) 90000
Ans: None of the given choices is correct. The relocation register should hold the
difference between the memory start address and the linked origin: 70000 - 5000 = 65000.
Q.49
An assembly language is a
(A) low level programming language
Q.50
Q.51
Q.52
Q.53
Q.54
Q.55
Q.56
A set of techniques that allow the execution of a program which is not entirely in memory
is called
(A) demand paging
(B) virtual memory
(C) auxiliary memory
(D) secondary memory
Ans: (B)
Q. 57
Q.58
The requirement that each process must acquire all the resources it needs before
proceeding with its execution is called
(A) hold and wait
(B) No pre-emption
(C) circular wait
(D) starvation
Ans: (A)
Q.59
Virtual memory is
(A) simple to implement
(B) used in all major commercial operating systems
(C) less efficient in utilization of memory
(D) useful when fast I/O devices are not available
Ans: (B)
Q.60
Q.61
Q.62
Relocatable programs
(A) cannot be used with fixed partitions
(B) can be loaded almost anywhere in memory
(C) do not need a linker
(D) can be loaded only at one specific location
Ans: (B)
Q.63
Page stealing
(A) is a sign of efficient system
(B) is taking page frames from other working sets
(C) should be the tuning goal
(D) is taking larger disk spaces for pages paged out
Ans: (B)
Q.64
Q.65
Q.66
Q.67
Q.68
Q.69
Q.70
Q.71
Q.72
Q.73
Q.77
Q.78
PART II
DESCRIPTIVES
Q.1. Discuss table management techniques in detail.
(7)
Ans:
An assembler uses the following tables:
OPTAB (Operation Code Table): contains each mnemonic operation code and its
machine language equivalent.
SYMTAB (Symbol Table): maintains each symbolic label, operand, and its
corresponding address.
LITTAB (Literal Table): a table of the literals used in the program.
For efficiency reasons SYMTAB must remain in main memory throughout passes I
and II of the assembler. LITTAB is not accessed as frequently as SYMTAB; however,
it may be accessed frequently enough to justify its presence in memory. If memory is
at a premium, only a part of LITTAB can be kept in memory. OPTAB should be in
memory during pass I.
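The two passes over these tables can be sketched with ordinary Python dictionaries. The mnemonics, opcodes, one-word instruction size, and the sample program below are invented for illustration, not taken from any particular assembler.

```python
# Hypothetical sketch of the assembler tables as Python data structures.

OPTAB = {            # mnemonic -> machine opcode (invented values)
    "MOVER": "04",
    "ADD":   "01",
    "STOP":  "00",
}

SYMTAB = {}          # symbolic label -> address assigned in pass I
LITTAB = []          # literals encountered, pooled for later allocation

def pass_one(lines, start=100):
    """Pass I: assign addresses to labels and collect literals."""
    lc = start                       # location counter
    for label, mnemonic, operand in lines:
        if label:
            SYMTAB[label] = lc       # record the label's address
        if operand and operand.startswith("="):
            LITTAB.append(operand)   # defer literal placement
        if mnemonic in OPTAB:
            lc += 1                  # one word per instruction (assumed)
    return lc

program = [
    ("LOOP", "MOVER", "='5'"),
    (None,   "ADD",   "X"),
    ("X",    "STOP",  None),
]
pass_one(program)
print(SYMTAB)   # {'LOOP': 100, 'X': 102}
print(LITTAB)   # ["='5'"]
```

Pass II would then look up each mnemonic in OPTAB and each operand in SYMTAB/LITTAB to emit machine code.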
Q.2 Define the following:
(i) Formal language Grammars.
(ii) Terminal symbols.
(iii) Alphabet and String.
(9)
Ans:
(i) A formal language grammar is a set of formation rules that describe which strings
formed from the alphabet of a formal language are syntactically valid, within the
language. A grammar only addresses the location and manipulation of the strings of the
language. It does not describe anything else about a language, such as its semantics.
As proposed by Noam Chomsky, a grammar G consists of the following components:
A finite set N of non-terminal symbols.
A finite set Σ of terminal symbols that is disjoint from N.
A finite set P of production rules, each rule of the form
    (Σ ∪ N)* N (Σ ∪ N)* → (Σ ∪ N)*
where * is the Kleene star operator and ∪ denotes set union. That is, each production
rule maps from one string of symbols to another, where the first string contains at
least one non-terminal symbol.
A distinguished non-terminal symbol from set N that is the start symbol.
(ii)Terminal symbols are literal strings forming the input of a formal grammar and
cannot be broken down into smaller units without losing their literal meaning. In simple
words, terminal symbols cannot be changed using the rules of the grammar; that is,
they're the end of the line, or terminal. For example, if the grammar rules are that x can
become xa and x can become ax, then a is a terminal symbol because it cannot become
something else. These are the symbols which can appear as-is in the program.
(iii) A finite set of symbols is called alphabet. An alphabet is often denoted by sigma,
yet can be given any name.
B = {0, 1} says B is an alphabet of two symbols, 0 and 1.
C = {a, b, c} says C is an alphabet of three symbols, a, b and c.
Q.3. What is parsing? Write down the drawbacks of top-down parsing with backtracking. (7)
Ans:
Parsing is the process of analyzing a text, made of a sequence of tokens, to
determine its grammatical structure with respect to a given formal grammar. Parsing
is also known as syntactic analysis and parser is used for analyzing a text. The task
of the parser is essentially to determine if and how the input can be derived from the
start symbol of the grammar. The input is a valid input with respect to a given formal
grammar if it can be derived from the start symbol of the grammar.
The following are drawbacks of top-down parsing with backtracking:
(i) Semantic actions cannot be performed while making a prediction. The actions
must be delayed until the prediction is known to be a part of a successful parse.
(ii) Precise error reporting is not possible. A mismatch merely triggers
backtracking. A source string is known to be erroneous only after all predictions
have failed.
Q.4.
[Figure: program execution versus interpretation. A CPU with a program counter (PC)
executes a machine language program + data held in memory; an interpreter, with its
own PC, processes a source program + data and reports errors.]
The CPU uses a program counter (PC) to note the address of the next instruction to be
executed. This instruction is subjected to the instruction execution cycle consisting of
the following steps:
1. Fetch the instruction.
2. Decode the instruction to determine the operation to be performed, and also its
operands.
3. Execute the instruction.
At the end of the cycle, the instruction address in PC is updated and the cycle is
repeated for the next instruction. Program interpretation can proceed in a similar
manner. The PC can indicate which statement of the source program is to be
interpreted next. This statement would be subjected to the interpretation cycle, which
consists of the following steps:
1. Fetch the statement.
2. Analyse the statement and determine its meaning, viz. the computation to be
performed and its operands.
3. Execute the meaning of the statement.
Q.5. Give the difference between multiprogramming and multiprocessing.
(5)
Ans:
A multiprocessing system is a computer hardware configuration that includes more
than one independent processing unit. The term multiprocessing is generally used to
refer to large computer hardware complexes found in major scientific or commercial
applications. The multiprocessor system is characterized by-increased system
throughput and application speedup-parallel processing. The main feature of this
architecture is to provide high speed at low cost in comparison to a uniprocessor.
A multiprogramming operating system is a system that allows more than one active
user program (or part of user program) to be stored in main memory simultaneously.
Multi programmed operating systems are fairly sophisticated. All the jobs that enter
the system are kept in the job pool. This pool consists of all processes residing on mass
storage awaiting allocation of main memory. If several jobs are ready to be brought
into memory, and there is not enough room for all of them, then the system must
choose among them. A time-sharing system is a multiprogramming system.
Q.6. Write down different system calls for performing different kinds of tasks.
(4)
Ans:
A system call is a request made by any program to the operating system for
performing tasks -- picked from a predefined set -- which the said program does not
have required permissions to execute in its own flow of execution. System calls
provide the interface between a process and the operating system. Most operations
interacting with the system require permissions not available to a user level process,
e.g. I/O performed with a device present on the system or any form of communication
with other processes requires the use of system calls.
The main types of system calls are as follows:
Process Control: These types of system calls are used to control the processes.
Some examples are end, abort, load, execute, create process, terminate process etc.
File Management: These types of system calls are used to manage files. Some
examples are Create file, delete file, open, close, read, write etc.
Device Management: These types of system calls are used to manage devices.
Some examples are Request device, release device, read, write, get device attributes
etc.
Q.7. Differentiate between pre-emptive and non-pre-emptive scheduling.
(4)
Ans:
In a pre-emptive scheduling approach, the CPU can be taken away from a process
whenever there is a need, while in a non-pre-emptive approach, once a process has
been given the CPU, the CPU cannot be taken away from that process unless the
process completes or leaves the CPU to perform input/output.
Q.8.
CPU burst time indicates the time, the process needs the CPU. The following are
the set of processes with their respective CPU burst time (in milliseconds).
Processes     CPU-burst time
P1            10
P2            5
P3            5
Calculate the average waiting time if the process arrived in the following
order:
(i) P1, P2 & P3
(ii) P2, P3 & P1
(6)
Ans:
Considering FCFS scheduling:
Process     Burst Time
P1          10
P2          5
P3          5
(i) Suppose that the processes arrive in the order P1, P2, P3.
The Gantt chart for the schedule is:
| P1 | P2 | P3 |
0    10   15   20
Waiting times: P1 = 0, P2 = 10, P3 = 15.
Average waiting time = (0 + 10 + 15)/3 = 8.33 ms.
(ii) Suppose that the processes arrive in the order P2, P3, P1.
The Gantt chart for the schedule is:
| P2 | P3 | P1 |
0    5    10   20
Waiting times: P2 = 0, P3 = 5, P1 = 10.
Average waiting time = (0 + 5 + 10)/3 = 5 ms.
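The averages can be checked with a short sketch; the burst times are those given in the question.

```python
# Sketch: FCFS average waiting time for burst times P1=10, P2=5, P3=5.

def fcfs_waiting_times(bursts):
    """Under FCFS, each process waits for the total burst time of all processes before it."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return waits

# (i) arrival order P1, P2, P3
w1 = fcfs_waiting_times([10, 5, 5])
print(w1, sum(w1) / len(w1))        # [0, 10, 15] 8.33... ms

# (ii) arrival order P2, P3, P1
w2 = fcfs_waiting_times([5, 5, 10])
print(w2, sum(w2) / len(w2))        # [0, 5, 10] 5.0 ms
```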
(6)
Ans:
A semaphore is a protected variable or abstract data type which constitutes the
classic method for restricting access to shared resources such as shared memory in a
parallel programming environment.
(4)
Ans:
Deadlock can be prevented by denying any one of the four necessary conditions:
1. Removing the mutual exclusion condition means that no process may have
exclusive access to a resource. This proves impossible for resources that cannot be
spooled, and even with spooled resources deadlock could still occur. Algorithms that
avoid mutual exclusion are called non-blocking synchronization algorithms.
2. The "hold and wait" condition may be removed by requiring processes to request
all the resources they will need before starting up. Another way is to require processes
to release all their resources before requesting all the resources they will need.
3. A "no preemption" (lockout) condition may also be difficult or impossible to avoid
as a process has to be able to have a resource for a certain amount of time, or the
processing outcome may be inconsistent or thrashing may occur. However, inability
to enforce preemption may interfere with a priority algorithm. Algorithms that allow
preemption include lock-free and wait-free algorithms and optimistic concurrency
control.
4. The circular wait condition: Algorithms that avoid circular waits include "disable
interrupts during critical sections", and "use a hierarchy to determine a partial
ordering of resources" and Dijkstra's solution.
Q.11. Define the following:
(i) FIFO Page replacement algorithm.
(ii) LRU Page replacement algorithm.
(6)
Ans:
(i) FIFO policy: This policy simply removes pages in the order they arrived in the
main memory. Using this policy we simply remove a page based on the time of its
arrival in the memory. For example if we have the reference string: 1, 2, 3, 4, 1, 2, 5,
1, 2, 3, 4, 5 and 3 frames (3 pages can be in memory at a time per process) then we
have 9 page faults.
If frames are increased say to 4, then number of page faults also increases, to 10 in
this case.
(ii) LRU policy: LRU expands to least recently used. This policy suggests that we
remove the page whose last use is farthest from the current time. For the reference string: 1, 2,
3, 4, 1, 2, 5, 1, 2, 3, 4, 5, and 3 frames, we have 10 page faults.
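Both policies can be sketched as fault counters; the reference string below is the one used in this answer.

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    queue, faults = [], 0
    for p in refs:
        if p not in queue:
            faults += 1
            if len(queue) == frames:
                queue.pop(0)             # evict the earliest-loaded page
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    """Count page faults under LRU replacement (OrderedDict tracks recency)."""
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)           # page touched: now most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict the least recently used page
            mem[p] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10  (Belady's anomaly: more frames, more faults)
print(lru_faults(refs, 3))   # 10
```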
Q.12. List the properties which a hashing function should possess to ensure a good
search performance. What approaches are adopted to handle collision? (8)
Ans:
A hashing function h should possess the following properties to ensure good search
performance:
1. The hashing function should not be sensitive to the symbols used in a particular
source program; that is, it should perform equally well for different source programs.
2. The hashing function h should execute reasonably fast.
The following approaches are adopted to handle collisions:
Chaining: One simple scheme is to chain all collisions in lists attached to the
appropriate slot. This allows an unlimited number of collisions to be handled and
doesn't require a priori knowledge of how many elements are contained in the
collection. The tradeoff is the same as with linked lists versus array implementations
of collections: linked list overhead in space and, to a lesser extent, in time.
Rehashing: Re-hashing schemes use a second hashing operation when there is a
collision. If there is a further collision, we re-hash until an empty "slot" in the table is
found. The re-hashing function can either be a new function or a re-application of the
original one. As long as the functions are applied to a key in the same order, then a
sought key can always be located.
Overflow chaining: Another scheme will divide the pre-allocated table into two
sections: the primary area to which keys are mapped and an area for collisions,
normally termed the overflow area. When a collision occurs, a slot in the overflow
area is used for the new element and a link from the primary slot established as in a
chained system. This is essentially the same as chaining, except that the overflow area
is pre-allocated and thus possibly faster to access. As with re-hashing, the maximum
number of elements must be known in advance, but in this case, two parameters must
be estimated: the optimum size of the primary and overflow areas.
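Chaining, the first of these schemes, can be sketched as follows; the table size, hash choice, and keys are illustrative.

```python
# Minimal sketch of collision handling by chaining: one list ("chain") per slot.

class ChainedHash:
    def __init__(self, slots=8):
        self.slots = [[] for _ in range(slots)]

    def _index(self, key):
        return hash(key) % len(self.slots)

    def put(self, key, value):
        chain = self.slots[self._index(key)]
        for pair in chain:
            if pair[0] == key:            # key already present: update in place
                pair[1] = value
                return
        chain.append([key, value])        # empty slot or collision: append to chain

    def get(self, key):
        for k, v in self.slots[self._index(key)]:
            if k == key:
                return v
        return None                       # key absent

t = ChainedHash(slots=2)                  # tiny table forces collisions
for sym in ("ALPHA", "BETA", "GAMMA"):
    t.put(sym, len(sym))
print(t.get("BETA"))                      # 4
```

Lookups stay correct however many keys collide, at the cost of walking the chain.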
Q.13. What is assembly language? What kinds of statements are present in an assembly
language program? Discuss. Also highlight the advantages of assembly language.
(8)
Ans:
Assembly language is a family of low-level language for programming computers,
microprocessors, microcontrollers etc. They implement a symbolic representation of
the numeric machine codes and other constants needed to program a particular CPU
architecture. This representation is usually defined by the hardware manufacturer, and
is based on abbreviations (called mnemonics) that help the programmer remember
reduced errors
The terminal nodes (leaves) of an expression tree are the variables or constants in the
expression (a, b, c, d, and e). The non-terminal nodes of an expression tree are the
operators (+, -, *, and /).
The expression tree is evaluated using a post-order traversal of the expression tree as
follows:
1. If this node has no children, it should return the value of the node.
2. Evaluate the left-hand child.
3. Evaluate the right-hand child.
4. Then evaluate the operation indicated by the node and return this value.
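The post-order evaluation above can be sketched directly; the tree built here, for (a + b) * (c - d), and its operand values are illustrative.

```python
# Sketch: evaluating an expression tree with a post-order traversal.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def evaluate(node):
    if node.left is None and node.right is None:
        return node.value                 # step 1: a leaf returns its value
    lhs = evaluate(node.left)             # step 2: evaluate the left child
    rhs = evaluate(node.right)            # step 3: evaluate the right child
    return OPS[node.value](lhs, rhs)      # step 4: apply this node's operator

# (a + b) * (c - d) with a=2, b=3, c=7, d=4
tree = Node("*", Node("+", Node(2), Node(3)), Node("-", Node(7), Node(4)))
print(evaluate(tree))   # 15
```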
An expression tree is advantageous for:
The expression will be evaluated in the following order: register R1 first, then
register R2, and so on.
[Figure: expression tree for f + (x+y) * ((a+b) … (c-d)), with registers R1 to R6
assigned to the subexpressions.]
Q.16.
Explain the following:
(i) Elimination of common sub-expressions during code optimisation.
(ii) Pure and impure interpreters.
(iii) Lexical substitution during macro expansion.
(iv)Overlay structured program.
(v) Facilities of a debug monitor.
(vi) Actions of an interrupt processing routine.
(vii) Real time operating system.
(viii) Fork-join.
(16)
Q.17. List and explain the three events concerning resource allocation. Define the
following:
(i) Deadlock.
(ii) Resource request and allocation graph (RRAG)
(iii)Wait for graph (WFG)
(6)
Ans:
(i) Deadlock: Each process in a set of processes is waiting for an event that only
another process in the set can cause.
(ii) Deadlocks can be described by a directed bipartite graph called a
Resource-Request-Allocation graph (RRAG). A graph G = (V, E) is called bipartite
if V can be decomposed into two disjoint sets V1 and V2 such that every edge in E
joins a vertex in V1 to a vertex in V2. Let V1 be a set of processes and V2 be a set
of resources. Since the graph is directed we will consider:
Q.18.
Q.19. What is a race condition? Explain how a critical section avoids this condition.
What are the properties which a data item should possess to implement a critical
section?
(6)
Ans:
Race condition: The situation where several processes access and manipulate
shared data concurrently. The final value of the shared data depends upon which
process finishes last. To prevent race conditions, concurrent processes must be
synchronized.
Data consistency requires that only one process should update the value of a data
item at any time. This is ensured through the notion of a critical section. A critical
section for data item d is a section of code, which cannot be executed concurrently
with itself or with other critical sections for d. Consider a system of n processes (P0,
P1, …, Pn-1); each process has a segment of code called a critical section, in which
the process may be changing common variables, updating a table, writing a file, and
so on. The important feature of the system is that, when one process is executing in its
critical section, no other process is to be allowed to execute in its critical section.
Thus the execution of critical sections by the processes is mutually exclusive in time.
repeat
    entry section
    critical section
    exit section
    remainder section
until FALSE
Solution to the Critical Section Problem must satisfy the following three
conditions:
1. Mutual Exclusion. If process Pi is executing in its critical section, then no
other processes can be executing in their critical sections.
2. Progress. If no process is executing in its critical section and there exist some
processes that wish to enter their critical section, then the selection of the processes
that will enter the critical section next cannot be postponed indefinitely.
3. Bounded Waiting. A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a request
to enter its critical section and before that request is granted.
Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the n processes.
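A minimal sketch of a critical section, with Python's threading.Lock standing in for the entry and exit sections; the counter and thread counts are illustrative.

```python
# Sketch: mutual exclusion for a shared counter. Without the lock, concurrent
# increments race; with it, the final value is deterministic.
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:              # entry section: acquire the lock
            counter += 1        # critical section: update shared data
        # exit section: the lock is released on leaving the `with` block

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 40000 on every run
```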
Q.20. Describe a solution to the Dining philosopher problem so that no races arise.
Ans:
A solution to the dining philosopher problem:
monitor DP
{
(4)
Each philosopher i invokes the operations pickup() and putdown() in the following
sequence:
dp.pickup (i)
EAT
dp.putdown (i)
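The body of the monitor is cut off in the source. As an alternative sketch, the following avoids deadlock by imposing a global ordering on the forks: each philosopher picks up the lower-numbered fork first, which breaks the circular-wait condition; the round counts are illustrative.

```python
# Sketch: dining philosophers with ordered fork acquisition (no circular wait).
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds=100):
    first, second = sorted((i, (i + 1) % N))   # always lock the lower-numbered fork first
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                meals[i] += 1                  # eat

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)   # every philosopher finishes all rounds: [100, 100, 100, 100, 100]
```

Because no cycle of lock requests can form, every philosopher eventually eats and the program always terminates.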
Q.21. Discuss two main approaches to identify and reuse free memory area in a heap.
(6)
Ans:
Two popular techniques to identify free memory areas resulting from allocations and
de-allocations in a heap are:
1. Reference count: the system associates a reference count with each memory area to
indicate the number of its active users. This number is incremented when a user
accesses the area and decremented when the user stops using it. The area is free when
the reference count drops to zero. This scheme is very simple to implement; however,
it incurs incremental overheads.
2. Garbage collection: In this technique two passes are made over the memory to
identify unused areas. In the first pass it traverses all pointers pointing to allocated
areas and marks the memory areas that are in use. The second pass finds all unmarked
areas and declares them to be free. The garbage collection overheads are not
incremental. They are incurred every time the system runs out of free memory to
allocate to fresh requests.
Two main approaches to reuse free memory area in a heap are:
First-fit: Allocate the first hole that is big enough. Searching can start either at the
beginning of the set of holes or where the previous first-fit search ended, and stops as
soon as a free hole is found that is large enough.
Best-fit: Allocate the smallest hole that is big enough; Entire list is searched, unless
ordered by size. This strategy produces the smallest leftover hole.
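The two placement strategies can be sketched as follows; the hole sizes and request size are illustrative.

```python
# Sketch: choosing a free hole under first-fit versus best-fit.

def first_fit(holes, size):
    """Return the index of the first hole large enough, else None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole large enough, else None."""
    best = None
    for i, h in enumerate(holes):
        if h >= size and (best is None or h < holes[best]):
            best = i
    return best

holes = [100, 500, 200, 300, 600]   # free areas in the heap (sizes in bytes)
print(first_fit(holes, 212))        # 1 -> the 500-byte hole (leftover 288)
print(best_fit(holes, 212))         # 3 -> the 300-byte hole (leftover 88, the smallest)
```

First-fit stops early; best-fit scans the whole list but leaves the smallest leftover hole.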
Q.22.
Q.23.
Q.24.
Discuss the different techniques with which a file can be shared among different
users.
(8)
Ans:
Some popular techniques with which a file can be shared among different users
are:
1. Sequential sharing: In this sharing technique, a file can be shared by only one
program at a time, that is, file accesses by P1 and P2 are spaced out over time. A
lock field can be used to implement this. Setting and resetting of the lock at file open
and close ensures that only one program can use the file at any time.
2. Concurrent sharing: Here a number of programs may share a file
concurrently. When this is the case, it is essential to avoid mutual interference
between them. There are three categories of concurrent sharing:
a. Immutable files: If a file is shared in immutable mode, none of the sharing
programs can modify it. This mode has the advantage that sharing programs
are independent of one another.
b. Single image immutable files: Here the changes made by one program are
immediately visible to other programs. The Unix file system uses this file-sharing mode.
c. Multiple image mutable files: Here many programs can concurrently
update the shared file. Each updating program creates a new version of the
file, which is different from the version created by concurrent programs.
This sharing mode can only be used in applications where concurrent
updates and the existence of multiple versions are meaningful.
Q.25.
Differentiate between protection and security. Explain the techniques used for
protection of user files.
(8)
Ans:
An operating system consists of a collection of objects, hardware or software. Each
object has a unique name and can be accessed through a well-defined set of
operations. Protection problem - ensure that each object is accessed correctly and
only by those processes that are allowed to do so.
Security must consider external environment of the system, and protect it from:
(8)
Ans:
Parsing is the process of analyzing a text, made of a sequence of tokens, to
determine its grammatical structure with respect to a given formal grammar. Parsing
is also known as syntactic analysis and parser is used for analyzing a text. The task
of the parser is essentially to determine if and how the input can be derived from the
start symbol of the grammar.
Following are three parsing techniques:
Top-down parsing - Top-down parsing can be viewed as an attempt to find left-most
derivations of an input-stream by searching for parse trees using a top-down
expansion of the given formal grammar rules. Tokens are consumed from left to right.
Inclusive choice is used to accommodate ambiguity by expanding all alternative right-hand sides of grammar rules.
Bottom-up parsing - A parser can start with the input and attempt to rewrite it to the
start symbol. Intuitively, the parser attempts to locate the most basic elements, then
the elements containing these, and so on. LR parsers are examples of bottom-up
parsers. Another term used for this type of parser is Shift-Reduce parsing.
Recursive descent parsing - It is top-down parsing without backtracking. This parsing
technique uses a set of recursive procedures to perform parsing. Salient advantages of
recursive descent parsing are its simplicity and generality. It can be implemented in
any language supporting recursive procedures.
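A minimal recursive descent sketch, with one procedure per nonterminal, for an invented LL(1) grammar E -> T ('+' T)*, T -> digit:

```python
# Sketch: recursive descent parsing without backtracking for a toy grammar.

def parse_expression(tokens):
    """E -> T ('+' T)* : succeed only if the whole input derives from E."""
    pos = parse_term(tokens, 0)
    while pos < len(tokens) and tokens[pos] == "+":
        pos = parse_term(tokens, pos + 1)    # consume '+' then another term
    if pos != len(tokens):
        raise SyntaxError(f"unexpected token at position {pos}")
    return True

def parse_term(tokens, pos):
    """T -> digit : return the position after the consumed digit."""
    if pos < len(tokens) and tokens[pos].isdigit():
        return pos + 1
    raise SyntaxError(f"digit expected at position {pos}")

print(parse_expression(["1", "+", "2", "+", "3"]))   # True
```

Precise error reporting falls out naturally here: a mismatch raises immediately at the offending position, unlike the backtracking approach criticised in Q.3.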
Q.27. Draw the state diagram of a process from its creation to termination, including all
transitions, and briefly elaborate every state and every transition.
(8)
Ans:
As a process executes, it changes state
new: The process is being created.
running: Instructions are being executed.
waiting: The process is waiting for some event to occur.
ready: The process is waiting to be assigned to a processor.
terminated: The process has finished execution.
Q.28. Consider the following system snapshot using the data structures of the Banker's
algorithm, with resources A, B, C, and D, and processes P0 to P4:

Process    Max           Allocation    Need          Available
           A B C D       A B C D       A B C D       A B C D
P0         6 0 1 2       4 0 0 1
P1         1 7 5 0       1 1 0 0
P2         2 3 5 6       1 2 5 4
P3         1 6 5 3       0 6 3 3
P4         1 6 5 6       0 2 1 2

Ans:
Need = Max - Allocation:

Process    Max           Allocation    Need
           A B C D       A B C D       A B C D
P0         6 0 1 2       4 0 0 1       2 0 1 1
P1         1 7 5 0       1 1 0 0       0 6 5 0
P2         2 3 5 6       1 2 5 4       1 1 0 2
P3         1 6 5 3       0 6 3 3       1 0 2 0
P4         1 6 5 6       1 4 1 2       0 2 4 4

After P0 completes, P3 can be allocated: its need of 1 0 2 0 can be met from the
resources released by P0 together with those already available, and
<P0, P3, P4, P2, P1> is a safe sequence.
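The claimed safe sequence can be checked with the Banker's safety test. Note that the Available vector is illegible in the source; (2, 0, 2, 1) is an assumed value used here purely for illustration.

```python
# Sketch of the Banker's safety test for the tables above.
# ASSUMPTION: Available = (2, 0, 2, 1); the real value is lost in the source.

NEED = {   # Need = Max - Allocation, per the answer table
    "P0": (2, 0, 1, 1), "P1": (0, 6, 5, 0), "P2": (1, 1, 0, 2),
    "P3": (1, 0, 2, 0), "P4": (0, 2, 4, 4),
}
ALLOC = {
    "P0": (4, 0, 0, 1), "P1": (1, 1, 0, 0), "P2": (1, 2, 5, 4),
    "P3": (0, 6, 3, 3), "P4": (1, 4, 1, 2),
}

def is_safe_sequence(seq, available):
    """Each process in turn must have Need <= Work; on finishing it releases its Allocation."""
    work = list(available)
    for p in seq:
        if any(n > w for n, w in zip(NEED[p], work)):
            return False                 # p cannot run to completion yet: unsafe order
        work = [w + a for w, a in zip(work, ALLOC[p])]
    return True

print(is_safe_sequence(["P0", "P3", "P4", "P2", "P1"], (2, 0, 2, 1)))   # True
```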
Q.29. Define the following
(i) Process;
(ii) Process Control Block; (PCB)
(iii) Multi programming;
(iv)Time sharing.
(8)
Ans:
(i) Process: Process is a program in execution; process execution must progress in
sequential fashion. A process includes:
program counter
stack
data section
(ii) Process Control Block (PCB): Information associated with each process is stored
in its Process Control Block:
Process state
Program counter
CPU registers
CPU scheduling information
Memory-management information
Accounting information
I/O status information
(iii) Multiprogramming: A multiprogramming operating system is a system that
allows more than one active user program (or part of user program) to be stored in
main memory simultaneously. Multi programmed operating systems are fairly
sophisticated. All the jobs that enter the system are kept in the job pool. This pool
consists of all processes residing on mass storage awaiting allocation of main
memory. If several jobs are ready to be brought into memory, and there is not enough
room for all of them, then the system must choose among them. A time-sharing
system is a multiprogramming system.
(iv) Time Sharing: Sharing of a computing resource among many users by means of
multiprogramming and multi-tasking is known as timesharing. By allowing a large
number of users to interact concurrently with a single computer, time-sharing
dramatically lowered the cost of providing computing capability, made it possible for
individuals and organizations to use a computer without owning one, and promoted
the interactive use of computers and the development of new interactive applications.
Q.30. Why are Translation Look-aside Buffers (TLBs) important? In a simple paging
system, what information is stored in a typical TLB table entry?
(8)
Ans:
TLBs are important because, without them, every memory reference would require an
additional access to the page table held in main memory; the TLB's associative
registers resolve most translations in a single lookup.
A typical TLB table entry consists of page# and frame#, when a logical address is
generated by the CPU, its page number is presented to a set of associative registers
that contain page number along with their corresponding frame numbers. If the page
number is found in the associative registers, its frame number is available and is used
to access memory. If the page number is not in the associative registers, a memory
reference to the page table must be made. When the frame number is obtained, it can
be used to access memory, and the page number along with its frame number is added
to the associative registers.
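The lookup path described above can be sketched as follows; the page size, page table contents, and addresses are illustrative.

```python
# Sketch: TLB hit/miss handling during address translation.

tlb = {}                                  # page# -> frame# (the associative registers)
page_table = {0: 5, 1: 9, 2: 3, 3: 7}     # full page table held in memory

def translate(page, offset, page_size=1024):
    if page in tlb:                        # TLB hit: frame number available at once
        frame = tlb[page]
    else:                                  # TLB miss: consult the page table in memory...
        frame = page_table[page]
        tlb[page] = frame                  # ...and cache the (page#, frame#) pair
    return frame * page_size + offset      # physical address

print(translate(2, 100))   # miss: 3*1024 + 100 = 3172, entry cached
print(translate(2, 200))   # hit this time: 3272
```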
Q.31.
Why is segmented paging important (as compared to a paging system)? What are
the different pieces of the virtual address in a segmented paging?
(6)
Ans:
Paging can be superimposed on a segment oriented addressing mechanism to
obtain efficient utilization of the memory. This is a clever scheme with advantages
of both paging as well as segmentation. In such a scheme each segment would have
a descriptor with its pages identified. So we have to now use three sets of offsets.
First, a segment offset helps to identify the set of pages. Next, within the
corresponding page table (for the segment), we need to identify the exact page; this
is done using the page-number part of the virtual address. Once the exact page has
been identified, the offset is used to obtain the main memory address reference. The
final address resolution is exactly the same as in paging. The pieces of the virtual
address in segmented paging are thus the segment number, the page number within
the segment, and the offset within the page.
Q.32. Consider the situation in which the disk read/write head is currently located at track
45 (of tracks 0-255) and moving in the positive direction. Assume that the following
track requests have been made in this order: 40, 67, 11, 240, 87. What is the order in
which optimised C-SCAN would service these requests and what is the total seek
distance?
(6)
Ans:
Disk queue: 40, 67, 11, 240, 87; the head is currently at track 45, moving in the
positive direction. Optimised C-SCAN first services the pending requests above the
head in increasing order, then wraps around and services the remaining requests from
the lowest track upward, giving the service order 67, 87, 240, 11, 40. Counting the
wrap as a jump from 240 to 11, the total seek distance is
(240 - 45) + (240 - 11) + (40 - 11) = 453 tracks.
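A sketch of this optimised C-SCAN ordering: the head services upward from its position, then wraps to the lowest pending request. Whether the wrap movement counts toward seek distance varies by convention; here it is counted as a single jump from the highest request down to the lowest.

```python
# Sketch: optimised C-SCAN service order and seek distance.

def c_scan(head, requests):
    """Upward sweep from the head, then wrap to the lowest pending track."""
    upward = sorted(r for r in requests if r >= head)
    wrapped = sorted(r for r in requests if r < head)
    return upward + wrapped

order = c_scan(45, [40, 67, 11, 240, 87])
print(order)                                 # [67, 87, 240, 11, 40]

# Seek distance, counting the wrap as one jump from 240 down to 11:
seek = (240 - 45) + (240 - 11) + (40 - 11)
print(seek)                                  # 453
```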
Q.33. Explain any three policies for process scheduling that use resource consumption
information. What is response ratio?
(8)
Q.34. What is a semaphore? Explain a binary semaphore with the help of an example? (4)
Ans:
A semaphore is a synchronization tool that provides a general-purpose solution to
controlling access to critical sections.
A semaphore is an abstract data type (ADT) that defines a nonnegative integer
variable which, apart from initialization, is accessed only through two standard
operations: wait and signal. The classical definition of wait in pseudocode is
wait(S){
while(S<=0)
; // do nothing
S--;
}
The classical definition of signal in pseudocode is
signal(S){
S++;
}
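The wait/signal pair above maps onto Python's threading.Semaphore initialised to 1, used as a binary semaphore; the logging scheme below is illustrative.

```python
# Sketch: a binary semaphore guarding a one-at-a-time resource.
import threading

mutex = threading.Semaphore(1)   # S initialised to 1: resource free
log = []

def use_resource(name):
    mutex.acquire()              # wait(S): blocks while S == 0, then decrements
    log.append(f"{name} in")     # critical section: only one thread at a time,
    log.append(f"{name} out")    # so each "in" is immediately followed by its "out"
    mutex.release()              # signal(S): increments S, waking one waiter

threads = [threading.Thread(target=use_resource, args=(f"T{i}",)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(log)   # entries always come in matched in/out pairs, never interleaved
```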
Q.35.
Consider the following page reference and reference time strings for a program:
Page reference string: 5,4,3,2,1,4,3,5,4,3,2,1,5,..
Show how pages will be allocated using the FIFO page replacement policy. Also
calculate the total number of page faults when allocated page blocks are 3 and 4
respectively.
(8)
Ans:
Page reference string is: 5,4,3,2,1,4,3,5,4,3,2,1,5,..
For 3 allocated page blocks, we have the following FIFO allocation. Page references
marked with + cause a page fault, which is serviced by replacing the earliest-loaded
page in memory:

Reference: 5+  4+  3+  2+  1+  4+  3+  5+  4   3   2+  1+  5
Frame 1:   5   5   5   2   2   2   3   3   3   3   3   1   1
Frame 2:       4   4   4   1   1   1   5   5   5   5   5   5
Frame 3:           3   3   3   4   4   4   4   4   2   2   2

Total page faults = 10.
For 4 allocated page blocks, we have the following FIFO allocation. Page references
marked with + cause a page fault and result in page replacement:

Reference: 5+  4+  3+  2+  1+  4   3   5+  4+  3+  2+  1+  5+
Frame 1:   5   5   5   5   1   1   1   1   1   1   2   2   2
Frame 2:       4   4   4   4   4   4   5   5   5   5   1   1
Frame 3:           3   3   3   3   3   3   4   4   4   4   5
Frame 4:               2   2   2   2   2   2   3   3   3   3

Total page faults = 11. Although the allocation is increased from 3 to 4 page blocks,
the number of page faults increases from 10 to 11; this is an instance of Belady's anomaly.
Q.37. What is meant by inter process communication? Explain the two fundamental
models of inter process communication.
(8)
Ans:
Inter process Communication: The OS provides the means for cooperating
processes to communicate with each other via an inter process communication (IPC)
facility.
IPC provides a mechanism to allow processes to communicate and to synchronize
their actions without sharing the same address space. IPC is particularly useful in a
distributed environment where the communicating processes may reside on different
computers connected with a network.
IPC is best implemented by message passing system where communication among the
user processes is accomplished through the passing of messages. An IPC facility
provides at least the two operations:
send(message) and receive(message).
Two types of message passing system are as follows:
(a) Direct Communication: With direct communication, each process that wants to
communicate must explicitly name the recipient or sender of the communication. In
this scheme, the send and receive primitives are defined as:
send(P, message)- Send a message to process P.
receive(Q, message)- Receive a message from process Q.
A communication link in this scheme has the following properties:
A link is established automatically between every pair of processes that
want to communicate. The processes need to know only each other's
identity to communicate.
A link is associated with exactly two processes.
Exactly one link exists between each pair of processes.
(b) Indirect Communication: With indirect communication, the messages are sent to and
received from mailboxes, or ports. Each mailbox has a unique identification. In this scheme, a
process can communicate with some other process via a number of different
mailboxes. Two processes can communicate only if they share a mailbox. The send
and receive primitives are defined as follows:
send (A, message)- Send a message to mailbox A
receive (A, message)- Receive a message from mailbox A.
In this scheme, a communication link has the following properties:
A link is established between a pair of processes only if both members
of the pair have a shared mailbox.
A link may be associated with more than two processes.
A number of different links may exist between each pair of
communicating processes, with each link corresponding to one
mailbox.
Q.38. Differentiate between program translation and program interpretation.
(6)
Ans:
The program translation model bridges the execution gap by translating a program
written in a programming language, called the source program (SP), into an
equivalent program in the machine or assembly language of the computer system,
called the target program (TP). Following diagram depicts the program translation
model.
Source
program
Errors
Data
Translator
m/c language
program
Target
program
In a program interpretation process, the interpreter reads the source program and
stores it in its memory. It bridges an execution gap without generating a machine
language program so we can say that the interpreter is a language translator. However,
it takes one statement of higher-level language at a time, translates it into machine
language and executes it immediately. Translation and execution are carried out for
each statement. The absence of a target program implies the absence of an outer
interface of the interpreter. Thus language-processing activity of an interpreter cannot
be separated from its program execution activities. Hence we can say that interpreter
executes a program written in a programming language. In essence, the execution gap
vanishes. Following figure depicts the program interpretation model.
Memory
Interpreter
Source
Program
+
Data
PC
Errors
Q.39.
(4)
Ans:
Macros Vs Subroutines
(i) Macros are pre processor directives i.e. it is processed before the source
program is passed to the compiler.
Subroutines are blocks of codes with a specific task, to be performed and are
directly passed to the compiler.
(ii) In a macro call the pre processor replaces the macro template with its macro
expansion, in a literal way.
As against this, in a function call the control is passed to a function along with
certain arguments, some calculations are performed in the function and a useful
value is returned back from the function.
(iii) Macros increase the program size. For example, if we use a macro a hundred
times in a program, the macro expansion goes into our source code at a hundred
different places. Functions, by contrast, make the program smaller and compact:
even if a function is called from a hundred different places in the program, it
takes the same amount of space in the program.
(iv) Macros make the program run faster as they have already been expanded and
placed in the source code before compilation. Whereas, passing arguments to a
function and getting back the returned values does take time and would therefore
slow down the program.
(v) Example of macro
#define AREA(x) (3.14*(x)*(x)) /* macro definition */
main(){
float r1=6.25, r2=2.5, a;
a=AREA(r1); /* expanded to (3.14*(r1)*(r1)) */
printf("\n Area of circle = %f", a);
a=AREA(r2); /* expanded to (3.14*(r2)*(r2)) */
printf("\n Area of circle = %f", a);}
Example of subroutine
float AREA(float r); /* prototype */
main(){
float r1=6.25, r2=2.5, a;
a=AREA(r1); /* calls AREA() */
printf("\n Area of circle = %f", a);
a=AREA(r2); /* calls AREA() */
printf("\n Area of circle = %f", a);}
float AREA(float r){ /* subroutine */
return 3.14*r*r;}
Q.40.
(6)
Ans:
In stack-based allocation, objects are allocated in a last-in, first-out data
structure, a stack, e.g. recursive subroutine parameters. The stack storage
allocation model:
- Grows and shrinks on procedure calls and returns.
- Register allocation works best for stack-allocated objects.
- Memory allocation and freeing are partially predictable.
- Is restricted but simple and efficient.
- Allocation is hierarchical: memory is freed in the opposite order of allocation.
That is, if alloc(A), then alloc(B), then alloc(C), it must be free(C), then
free(B), then free(A).
Q.41.
(a)
(b)
2. These programs are translated into assembly programs.
void f(){
    int i;
    i = 1;       /* dead store */
    global = 1;  /* dead store */
    global = 2;
    return;
    global = 3;  /* unreachable */
}
Below is the code fragment after dead code elimination:
int global;
void f(){
    global = 2;
    return;
}
3. Loop-invariant code motion
If a quantity is computed inside a loop during every iteration, and its value is the
same for each iteration, it can vastly improve efficiency to hoist it outside the loop and
compute its value just once before the loop begins. This is particularly important with
the address-calculation expressions generated by loops over arrays. For correct
implementation, this technique must be used with loop inversion, because not all code
is safe to be hoisted outside the loop.
for example:
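As an illustrative sketch in C (the function and variable names are hypothetical, not from the original text), the invariant product x * y is hoisted out of the loop and computed once:

```c
/* Before: x * y is recomputed on every iteration,
   although its value never changes inside the loop. */
int sum_before(int a[], int n, int x, int y) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i] + x * y;
    return s;
}

/* After loop-invariant code motion: the product is
   computed once, before the loop begins. */
int sum_after(int a[], int n, int x, int y) {
    int s = 0;
    int t = x * y;   /* hoisted invariant computation */
    for (int i = 0; i < n; i++)
        s += a[i] + t;
    return s;
}
```

Both functions compute the same result; the second performs one multiplication instead of n.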
(8)
Ans:
(i) YACC stands for Yet Another Compiler-Compiler: Computer program input
generally has some structure; in fact, every computer program that does input can be
thought of as defining an input language which it accepts. An input language may be
as complex as a programming language, or as simple as a sequence of numbers.
Unfortunately, usual input facilities are limited, difficult to use, and often are lax about
checking their inputs for validity.
YACC provides a general tool for describing the input to a computer program. The
YACC user specifies the structures of his input, together with code to be invoked as
each such structure is recognized. YACC turns such a specification into a subroutine
that handles the input process; frequently, it is convenient and appropriate to have most
of the flow of control in the user's application handled by this subroutine. The output
from YACC is an LALR parser for the input programming language.
(ii) Debug monitors provide debugging support for a program. A debug monitor
executes the program being debugged under its own control, thereby providing
execution efficiency during debugging. There are debug monitors that are language
independent and can handle programs written in many languages, for example
DEC-10. Debug monitors provide the following facilities for dynamic debugging:
1. Setting breakpoints in the program
2. Initiating a debug conversation when control reaches a breakpoint.
3. Displaying values of variables
4. Assigning new values to variables.
5. Testing user defined assertions and predicates involving program variables.
Q.45. What is an operating system? List the typical functions of operating systems. (4)
Ans:
An operating system is system software that provides an interface between the user
and the hardware. The operating system provides the means for the proper use of resources
(CPU, memory, I/O devices, data and so on) in the operation of the computer system.
An operating system provides an environment within which other programs can do
useful work.
Typical functions of operating system are as follows:
(1) Process management: A process is a program in execution. It is the job which is
currently being executed by the processor. During its execution a process
requires certain system resources such as processor time, main memory, files etc.
The OS supports multiple processes simultaneously. The process management module of
Q.46. What are interrupts? How are they handled by the operating system? (5)
Ans:
Interrupt: An interrupt is a hardware mechanism that enables an external device,
typically an I/O device, to send a signal to the CPU. An interrupt signal requests the
CPU to interrupt its current activities and attend to the interrupting device's needs. A
CPU will check for interrupts only after it has completed the processing of one
instruction and before it fetches a subsequent one. The basic interrupt mechanism
works as follows:
The CPU hardware has a wire called the interrupt-request line that the CPU senses after
executing each instruction. The device controller raises an interrupt by asserting a signal on
the interrupt-request line. The CPU detects that a controller has asserted a signal on the
interrupt-request line.
Figure: Interrupt-driven I/O cycle: 1. Device driver initiates I/O; 2. I/O controller
initiates I/O; 3. Controller signals an interrupt on completion; 4. CPU, receiving the
interrupt, transfers control to the interrupt handler; 5. Interrupt handler processes
data and returns from the interrupt; 6. CPU resumes processing of the interrupted
task. (The CPU, while executing, checks for interrupts between instructions.)
The CPU saves a small amount of state, such as the current value of the instruction
pointer (i.e. the return address), and jumps to the interrupt-handler routine at a fixed
address in memory. The interrupt handler determines the cause of the interrupt, performs
the necessary processing, and executes a return-from-interrupt instruction, thereby
clearing the interrupt. The CPU then resumes the execution state prior to the interrupt.
Q.47. Define process. Describe the contents of a Process Control Block (PCB). (5)
Ans:
Process: A process is a program in execution.
A process is an active entity, represented by the value of the program counter and the
contents of the processor's registers. A process generally includes the process stack,
which contains temporary data (such as method parameters, return addresses, and
local variables). Two processes (which may or may not be associated with the same
program) are two separate execution sequences, each with its own text and data
sections. A process may spawn many processes as it runs.
Process Control Block (PCB): Each process is represented in the operating system by a
process control block, also called a task control block. It contains many pieces of
information associated with a specific process, such as:
(1) Process state: The state may be new, ready, running, waiting, halted, and so on.
Figure: Process Control Block (PCB)
(4) CPU-scheduling information: This information includes a process priority,
pointers to scheduling queues, and any other scheduling parameters.
(5) Memory-management information: This information may include such
information as the value of the base and limit registers, the page tables, or the segment
tables, depending on the memory system used by the OS.
(6) Accounting information: This information includes the amount of CPU and real
time used, time limits, account numbers, job or process numbers and so on.
(7) I/O status information: This information includes the list of I/O devices allocated
to this process, a list of open files, and so on.
The PCB simply serves as the repository for any information that may vary from
process to process.
Q48. What are interacting processes? Explain any two methods of implementing
interacting processes.
(8)
Ans:
Interacting processes: The concurrent processes executing in the operating system
are interacting or cooperating processes if they can affect or be affected by each other.
Any process that shares data with other processes is an interacting process.
Two methods of implementing interacting process are as follows:
(i) Shared memory solution: This scheme requires that these processes share a
common buffer pool and that the code for implementing the buffer be written by
the application programmer.
For example, a shared-memory solution can be provided to the bounded-buffer
problem. The producer and consumer processes share the following variables:
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
The shared buffer is implemented as a circular array with two logical
pointers: in and out. The variable in points to the next free position in the
buffer; out points to the first full position in the buffer. The buffer is empty
when in == out; the buffer is full when ((in + 1) % BUFFER_SIZE) == out.
The producer process has a local variable nextProduced in which the new
item to be produced is stored:
while (1) {
    /* produce an item in nextProduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}
The consumer process has a local variable nextConsumed in which the item
to be consumed is stored:
while (1) {
    while (in == out)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in nextConsumed */
}
(ii) Inter process Communication: The OS provides the means for cooperating
processes to communicate with each other via an interprocess
communication (IPC) facility. IPC provides a mechanism to allow processes
to communicate and to synchronize their actions without sharing the same
address space. IPC is particularly useful in a distributed environment where
the communicating processes may reside on different computers connected
with a network. IPC is best implemented by message passing system where
communication among the user processes is accomplished through the
passing of messages. An IPC facility provides at least the two operations:
send(message) and receive(message).
Some types of message passing system are as follows:
Direct or Indirect Communication: With direct communication, each process
that wants to communicate must explicitly name the recipient or sender of the
communication. In this scheme, the send and receive primitives are defined
as:
send(P, message)- Send a message to process P.
receive(Q, message)- Receive a message from process Q.
A communication link in this scheme has the following properties:
A link is established automatically between every pair of processes that
want to communicate. The processes need to know only each other's
identity to communicate.
A link is associated with exactly two processes.
Exactly one link exists between each pair of processes.
With indirect communication, the messages are sent to and received from
mailboxes, or ports. Each mailbox has a unique identification. In this scheme,
a process can communicate with some other process via a number of different mailboxes.
Q.49. Consider the following set of jobs with their arrival times, execution time (in
minutes), and deadlines.
Job Id   Arrival Time   Execution time   Deadline
1        0              5                5
2        1              15               25
3        3              12               10
4        7              25               50
5        10             5                12
Calculate the mean turn-around time, the mean weighted turn-around time and the
throughput for FCFS, SJN and deadline scheduling algorithms.
(6)
Ans:
Chart for First Come First Served scheduling:

| J1 | J2 | J3 | J4 | J5 |
0    5    20   32   57   62

Chart for Shortest Job Next scheduling:

| J1 | J3 | J5 | J2 | J4 |
0    5    17   22   37   62
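From the FCFS chart, the required metrics can be computed directly (turn-around time = completion time - arrival time; weighted turn-around time = turn-around time / execution time); the same procedure applies to the SJN and deadline charts:

```
Job  Completion  Turn-around time   Weighted turn-around time
1         5        5 - 0  = 5         5/5   = 1.00
2        20       20 - 1  = 19       19/15  = 1.27
3        32       32 - 3  = 29       29/12  = 2.42
4        57       57 - 7  = 50       50/25  = 2.00
5        62       62 - 10 = 52       52/5   = 10.40

Mean turn-around time          = 155/5   = 31 minutes
Mean weighted turn-around time = 17.09/5 = 3.42 (approx.)
Throughput                     = 5 jobs / 62 minutes = 0.081 jobs/minute (approx.)
```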
Q52. An operating system contains 3 resource classes. The number of resource units in
these classes is 7, 7 and 10. The current resource allocation state is shown below:

Process   Allocated resources   Maximum requirements
          R1   R2   R3          R1   R2   R3
P1        2    2    3           3    6    8
P2        2    0    3           4    3    3
P3        1    2    4           3    4    4

(i) Is the current allocation state safe?
(ii) Can the request made by process P1 (1, 1, 0) be granted?
(5)
Ans:
(i) In the given question,
Available resources [R1 R2 R3] = number of resource units - total allocation
= [7 7 10] - [5 4 10] = [2 3 0]
The Need matrix is defined as (Maximum - Allocation):

Process   Need of resources
          R1   R2   R3
P1        1    4    5
P2        2    3    0
P3        2    2    0
Using the Safety Algorithm, we get the sequence:

Process     Available resources after satisfying its need
            R1   R2   R3
(initial)   2    3    0
P2          4    3    3
P3          5    5    7
P1          7    7    10
The sequence <P2, P3, P1> satisfies the safety criteria. So current allocation
state is safe.
(ii) Request made by process P1: Request(P1) = [1 1 0]
Here, Request(P1) <= Need(P1) and Request(P1) <= Available,
i.e. [1 1 0] <= [1 4 5] and [1 1 0] <= [2 3 0]
Pretending that the request can be fulfilled, we get the new state:
Q.53. What are semaphores? How do they implement mutual exclusion? (6)
Ans:
Semaphore: A semaphore is a synchronization tool that provides a general-purpose
solution to controlling access to critical sections. A semaphore is an abstract data type
(ADT) that defines a nonnegative integer variable which, apart from initialization, is
accessed only through two standard operations: wait and signal. The classical definition
of wait in pseudo code is
wait(S){
while(S<=0)
; // do nothing
S--; }
The classical definitions of signal in pseudo code is
signal(S){
S++; }
When one process modifies the semaphore value, no other process can simultaneously
modify that same semaphore value. In addition, in the case of the wait(S), the testing
of the integer value of S(S<=0), and its possible modification(S--), must also be
executed without interruption.
Mutual-exclusion implementation with semaphores:
Let there be n processes that share a semaphore, mutex (standing for mutual
exclusion), initialized to 1. Each process Pi is organized as shown below:
do{
wait(mutex);
critical section
signal(mutex);
remainder section
}while(1);
Disadvantage: Mutual-exclusion solutions given by semaphores require busy waiting.
That is, while a process is in its critical section, any other process that tries to enter its
critical section must loop continuously in the entry code. Busy waiting wastes
CPU cycles that some other process might be able to use productively.
Advantage: This type of semaphore is also called spinlock because the process spins
while waiting for the lock. Spinlocks are useful in multiprocessor systems as no context
switch is required when a process must wait on a lock. Thus, when locks are expected
to be held for short times, spinlocks are useful.
Q.54. Give a solution for readers-writers problem using conditional critical regions. (8)
Ans:
Readers-writers problem: Let a data object (such as a file or record) be shared
among several concurrent processes. Readers are the processes that are interested in
only reading the content of the shared data object. Writers are the processes that may want
to update (that is, to read and write) the shared data object. If two readers access the
shared data object simultaneously, no adverse effects will result. However, if a writer
and some other process (either a reader or a writer) access the shared object
simultaneously, an anomaly may arise. To ensure that these difficulties do not arise,
writers are required to have exclusive access to the shared object. This synchronization
problem is referred to as the readers-writers problem.
Solution for the readers-writers problem using conditional critical regions: A conditional
critical region is a high-level synchronization construct. We assume that a process
consists of some local data, and a sequential program that can operate on the data. The
local data can be accessed only by the sequential program that is encapsulated within the
same process. One process cannot directly access the local data of another process.
Processes can, however, share global data.
Conditional critical region synchronization construct requires that a variable v of type
T, which is to be shared among many processes, be declared as
v: shared T;
The variable v can be accessed only inside a region statement of the following form:
region v when B do S;
This construct means that, while statement S is being executed, no other process can
access the variable v. When a process tries to enter the critical-section region, the
Boolean expression B is evaluated. If the expression is true, statement S is executed. If
it is false, the process releases the mutual exclusion and is delayed until B becomes
true and no other process is in the region associated with v.
Now, let A be the shared data object.
Let readcount be the variable that keeps track of how many processes are currently
reading the object A.
Let writecount be the variable that keeps track of how many processes are currently
writing the object A. Only one writer can update object A at a given time.
The variables readcount and writecount are initialized to 0.
A writer can update the shared object A when no reader is reading the object A:
region A when (readcount == 0 AND writecount == 0) {
    writing is performed
}
A reader can read the shared object A unless a writer has obtained permission to update
the object A:
region A when (readcount >= 0 AND writecount == 0) {
    reading is performed
}
Q.55. Given memory partitions of 100k, 500k, 200k, 300k, and 600k (in order), apply first
fit and best fit algorithms to place processes with the space requirement of 212k,
417k, 112k and 426k (in order)? Which algorithm makes the most effective use of
memory?
(3)
Ans:
Given memory partitions of 100k, 500k, 200k, 300k, and 600k (in order), applying
first fit algorithms to place processes with the space requirement of 212k, 417k, 112k
and 426k (in order), we have the following status:
Request   Memory status (partitions 100K, 500K, 200K, 300K, 600K)
212K      100K  288K  200K  300K  600K
417K      100K  288K  200K  300K  183K
112K      100K  176K  200K  300K  183K
426K      must wait (no free partition is large enough)

Applying the best fit algorithm to place processes with the space requirement of 212K,
417K, 112K and 426K (in order), we have the following status:

Request   Memory status (partitions 100K, 500K, 200K, 300K, 600K)
212K      100K  500K  200K   88K  600K
417K      100K   83K  200K   88K  600K
112K      100K   83K   88K   88K  600K
426K      100K   83K   88K   88K  174K

Best fit makes the most effective use of memory in this example: first fit cannot
satisfy the 426K request, while best fit places all four processes.
Q.56. Differentiate between
(i) Problem-oriented and procedure-oriented language
(ii) Dynamic and static binding
(iii) Scanning and parsing
(9)
Ans:
(i) Problem-oriented and procedure-oriented languages: The programming
languages that can be used for specific applications are called problem-oriented
Q.57.
Q.58.
Enumerate the data structures used during the first pass of the assembler.
Indicate the fields of these data structures and their purpose/usage. (8)
Ans:
Three major data structures are used during the first pass of the assembler:
- LOCCTR (location counter)
- OPTAB (operation code table)
- SYMTAB (symbol table)
LOCCTR: The location counter keeps track of the machine addresses of symbolic labels.
- It is initialized by START.
- It is incremented for each instruction:
  - Pseudo-instructions: BYTE, WORD, RESB, RESW.
  - Machine instructions: fixed length (3 bytes) for SIC; variable length for
    SIC/XE (found by looking up OPTAB).
- The value (address) of LOCCTR is assigned to the corresponding symbol table entry.
OPTAB: The operation code table contains the mnemonic operation code and its machine
language equivalent.
Q.61. What are threads? Why are they required? Differentiate between kernel-level
and user-level threads.
(8)
Ans:
A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU
utilization; it comprises a thread ID, a program counter, a register set and a stack.
A thread shares with other threads belonging to the same process its code section,
data section and other operating-system resources, such as open files and signals.
If the process has multiple threads of control, it can do more than one task at a time.
User Level Threads Vs Kernel Supported Threads
i. User threads are supported above the kernel and are implemented by a thread
library at the user level.
Whereas, kernel threads are supported directly by the operating system.
ii. For user threads, the thread library provides support for thread creation,
scheduling and management in user space with no support from the kernel as
the kernel is unaware of user-level threads.
In case of kernel threads, the kernel performs thread creation, scheduling and
management in kernel space.
iii. As there is no need for kernel intervention, user-level threads are generally fast
to create and manage.
As thread management is done by the operating system, kernel threads are
generally slower to create and manage than are user threads.
iv. If the kernel is single-threaded, then any user-level thread performing
blocking system call, will cause the entire process to block, even if other
threads are available to run within the application.
However, since the kernel is managing the kernel threads, if a thread performs
a blocking system call, the kernel can schedule another thread in the application
for execution.
v. User-thread libraries include POSIX Pthreads, Mach C-threads and Solaris 2
UI-threads.
Some of the contemporary operating systems that support kernel threads are
Windows NT, Windows 2000, Solaris 2, BeOS and Tru64 UNIX (formerly
Digital UNIX).
Q.62. What is swapping? Does swapping increase the Operating Systems overheads?
Justify your answer.
(8)
Ans:
A process can be swapped temporarily out of memory to a backing store, and
then brought back into memory for continued execution.
Major part of swap time is transfer time; total transfer time is directly proportional to
the amount of memory swapped. Modified versions of swapping are found on many
systems, i.e., UNIX, Linux, and Windows.
Q.63. Suppose there are 2 copies of resource A, 3 copies of resource B, and 3 copies of
resource C. Suppose further that process 1 holds one unit of resources B and C and
is waiting for a unit of A; that process 2 is holding a unit of A and waiting on a unit
of B; and that process 3 is holding one unit of A, two units of B, and one unit of C.
Draw the resource allocation graph. Is the system in a deadlocked state? Why or
why not?
(8)
Ans:
Resource allocation graph is given below:
Figure: Resource-allocation graph (processes P1, P2, P3; resources A, B, C;
request and assignment edges as described above).
There is a cycle in the resource-allocation graph (P1 -> A -> P2 -> B -> P1), so the
system is not in a safe state. However, with multiple instances per resource type, a
cycle is a necessary but not a sufficient condition for deadlock. Here P3 holds
resources but is not waiting for any, so it can run to completion and release one unit
of A, two units of B and one unit of C. P1 can then obtain A and finish, after which
P2 can obtain B and finish. Hence the system is not deadlocked.
Q.64 What is system programming? Explain the evolution of system software. (6)
Ans:
System software is a collection of system programs that perform a variety of
functions, viz. file editing, resource accounting, I/O management, storage management
etc.
System programming is the activity of designing and implementing system programs.
System programs are a standard component of the software of most computer systems.
The twofold motivation mentioned above arises out of a single primary goal, viz.
making the entire program execution process more effective.
Q.65 Give difference between assembler, compiler and interpreter. (6)
Ans:
An assembler is the translator for an assembly language of a computer. An assembly
language is a low-level programming language which is peculiar to a certain
computer.
A compiler is a translator for a machine-independent HLL such as, say, FORTRAN or
COBOL.
An interpreter analyses the source program statement by statement and itself carries
out the actions implied by each statement.
Q.66 Write down the general model for the translation process. (4)
Ans:
General model for the translation process can be represented as follows:
Q.67 Pass I of the assembler must also generate the intermediate code for the processed
statements. Justify your answer. (8)
Ans: Criteria for selection of an appropriate intermediate code form are;
(i) Ease of use: It should be easy to construct the intermediate code form and also easy
to analyze and interpret it during pass II, i.e. the amount of processing required to be
done during its construction and analysis should be minimal.
(ii) Economy of storage: It should be as compact as the target code itself. This will
reduce the overall storage requirements of the assembler.
Q.68 What are the advantages and disadvantages of a macro pre-processor? (8)
Ans: The advantage of macro pre-processor is that any existing conventional
assembler can be enhanced in this manner to incorporate macro processing. It would
reduce the programming cost involved in making a macro facility available.
The disadvantage is that this scheme is probably not very efficient because of the
time spent in generating assembly language statement and processing them again for
the purpose of translation to the target language.
Q.69. What is parsing? Give difference between top down parsing and bottom up parsing. (6)
Ans:
The goal of parsing is to determine the syntactic validity of a source string. If
the string is valid, a tree is built for use by subsequent phase of compiler.
Top-down parsing: Given an input string, top-down parsing attempts to derive a
string identical to it by successive application of grammar rules to the grammar's
distinguished symbol. When such a string is obtained, a tree representing its
derivation is the syntax tree for the input string. Thus, given an input string, a
top-down parse determines a derivation sequence for it.
Bottom-up parsing: A bottom-up parse attempts to develop a syntax tree for an input
string through a sequence of reductions. If the input string can be reduced to the
distinguished symbol, the string is valid. If not, an error is detected and indicated
during the process of reduction itself.
Q.70 How non-relocatable programs are different from relocatable programs? (4)
Ans: A non-relocatable program is one which cannot be made to execute in any area
of storage other than the one designated for it at the time of its coding or translation.
A relocatable program is one which consists of a program and relevant
information for its relocation. Using this information it is possible to relocate the
program to execute from a storage area other than the one designated for it at the time
of its coding or translation.
Q.71 What are the fundamental steps in program development? Discuss program testing and
debugging in detail. (6)
Ans:
The fundamental steps in program development are:
(i) Program design, coding and documentation.
(ii) Preparation of the program in machine-readable form, and initial editing to
adapt it to the required formats.
(iii) Program translation and linking/ loading
(iv) Program testing and debugging.
(v) Program modification for performance enhancement.
(vi) Reformatting the program's data and/or results to suit other programs which
process them.
In program testing and debugging important steps are as follows:
(i) Construction of test data for the program
(ii) Analysis of test results to detect program errors.
(iii) Localization of errors and modification of the program to eliminate them, i.e.
debugging.
Q.72
LOAD   A
ADD    B
STORE  TEMP1
LOAD   C
SUB    D
STORE  TEMP2
LOAD   TEMP1
DIV    TEMP2
Q.73
LOAD   C
SUB    D
STORE  TEMP1
LOAD   A
ADD    B
DIV    TEMP1
Figure: memory layout showing A(code), B(code), C(code), A(data), B(data) and the
free area pointer.
Q.74. Differentiate between synchronous and asynchronous input / output with the help of
an example.
(8)
Ans: An I/O operation is asynchronous when, after the start of the input/output,
control is returned to the user program without waiting for the input/output to
complete. The input/output continues and, on its completion, an interrupt is
generated by the controller to gain the CPU's attention.
In synchronous I/O, CPU execution waits while the I/O proceeds, so at most one I/O
request is outstanding at a time.
Q.75. List the major activities of an operating system with respect to memory management,
secondary storage management and process management.
(8)
Ans:
The operating system is responsible for the following activities in connection with
memory management:
(i) Keeping track of which parts of memory are in use and by which process.
(ii) Deciding which processes are to be loaded when memory space becomes available.
(iii) Allocation and deallocation of memory as and when needed.
In connection with secondary storage management:
(i) Free-space management.
(ii) Storage allocation.
(iii) Disk scheduling.
In connection with process management:
(i) Creation and deletion of both user and system processes.
(ii) Suspension and resumption of processes.
(iii) Provision of mechanisms for process synchronization, inter-process communication and deadlock handling.
Q.76. What are the disadvantages of FCFS scheduling algorithm as compared to shortest
job first (SJF) scheduling?
(8)
Ans:
Disadvantages:
(i) Waiting time can be large if short requests wait behind long ones.
(ii) It is not suitable for time-sharing systems, where it is important that each user
gets the CPU for an equal time interval.
(iii) A proper mix of jobs is needed to achieve good results from FCFS scheduling.
Q.77
Explain deadlock detection algorithm for single instance of each resource type. (8)
Ans:
(i) Maintain a wait-for graph: nodes in the graph represent processes. If process i is
waiting for a resource held by process j, then there is an edge from i to j.
(ii) Periodically invoke an algorithm that searches for cycles in the graph. If there is
a cycle in the wait-for graph, a deadlock is said to exist in the system.
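The wait-for-graph check can be sketched in Python; the graph representation and the process names are illustrative, not from the text:

```python
# Detect a cycle in a wait-for graph (single instance of each resource type).
# wait_for maps each process to the set of processes it is waiting on.

def has_deadlock(wait_for):
    """Return True if the wait-for graph contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2              # unvisited / on DFS stack / done
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:   # back edge -> cycle -> deadlock
                return True
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in wait_for)

print(has_deadlock({"P1": {"P2"}, "P2": {"P1"}}))   # True: P1 -> P2 -> P1
print(has_deadlock({"P1": {"P2"}, "P2": set()}))    # False: no cycle
```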
Q.78 Discuss the concept of segmentation? What is the main problem with segmentation?
(8)
Ans:
Segmentation is a technique for non-contiguous storage allocation. It differs from
paging in that it supports the user's view of his program.
Problem with segmentation:
(i) As with paging, the mapping requires two memory references per logical address,
which slows down the computer system by a factor of two. Caching is the method
used to solve this problem.
Q79. What is the difference between absolute and relative path name of a file? (8)
Ans:
Absolute path name:
It is the listing of the directories and files from the root directory to the intended file.
Relative path name:
A user can designate a particular directory as his current working directory; path
names are then specified relative to the working directory instead of being specified
from the root directory.
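The distinction can be illustrated with Python's `posixpath` module; the directory and file names here are invented for illustration:

```python
# Resolving a relative path name against a current working directory.
import posixpath

cwd = "/home/user/project"          # the process's current working directory
absolute = posixpath.normpath(posixpath.join(cwd, "docs/readme.txt"))
print(absolute)                      # /home/user/project/docs/readme.txt

# ".." in a relative path name walks up toward the root directory:
print(posixpath.normpath(posixpath.join(cwd, "../shared/lib.txt")))
# /home/user/shared/lib.txt
```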
Q.80.
(8)
Ans:
There are two different types of language processing activities:
1. Program generation activities
2. Program execution activities
Program generation activities: A program generation activity aims at the automatic
generation of a program. The source language is a specification language of an
application domain and the target language is typically a procedure-oriented
programming language. The following figure shows the program generation activity.
Interpretation (a program execution activity): The interpreter reads the source
program and stores it in its memory. During interpretation it takes a statement,
determines its meaning and performs the actions which implement it.
Q.81 Explain the criteria to classify data structures used for language processors?
(8)
Ans:
The data structures used in language processing can be classified on the basis of the
following criteria:
1. Nature of data structure (whether a linear or non-linear data structure)
2. Purpose of a data structure (whether a search data structure or an allocation data
structure)
3. Life time of a data structure (whether used during language processing or during
target program execution)
A linear data structure consists of a linear arrangement of elements in memory and
requires a contiguous area of memory for its elements. This poses a problem in
situations where the size of a data structure is difficult to predict. The elements of
non-linear data structures are accessed using pointers; hence the elements need not
occupy a contiguous area of memory.
Search data structures are used during language processing to maintain attribute
information concerning the different entities in the source program. An entry for an
entity is created only once, but may be searched for a large number of times.
Allocation data structures are characterized by the fact that the address of the
memory area allocated to an entity is known to its user, so no search operations are
conducted.
Q.82 Explain macro definition, macro call and macro expansion?
(6)
Ans:
A unit of specification for program generation is called a macro. It consists of a name,
a set of formal parameters and a body of code. When a macro name is used with a set
of actual parameters it is replaced by code generated from its body. This code is called
the macro expansion. There are two types of expansion:
1. Lexical expansion
2. Semantic expansion
Lexical expansion: the replacement of a character string by another string during
program generation. It is generally used to replace occurrences of formal parameters
by the corresponding actual parameters.
Semantic expansion: the generation of instructions tailored to the requirements of a
specific usage. It is characterized by the fact that different uses of a macro can lead
to codes which differ in the number, sequence and opcodes of their instructions.
The macro definition is located at the beginning of the program and is enclosed
between a macro header and a macro end statement.
A macro prototype statement contains the macro name and parameters:
< macro name > { < parameters > }
A macro call: a macro is called by writing the macro name in the mnemonic field of
an assembly statement.
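Lexical expansion can be demonstrated in a few lines of Python; the macro name INCR, its parameters and its body are invented for this sketch:

```python
# Toy lexical macro expansion: formal parameters in the macro body are
# replaced by the actual parameters supplied in the macro call.
MACROS = {
    "INCR": (["&ARG", "&AMT"],                          # formal parameters
             ["LOAD &ARG", "ADD &AMT", "STORE &ARG"]),  # macro body
}

def expand(name, actuals):
    formals, body = MACROS[name]
    mapping = dict(zip(formals, actuals))
    expansion = []
    for line in body:
        for formal, actual in mapping.items():
            line = line.replace(formal, actual)         # string replacement
        expansion.append(line)
    return expansion

print(expand("INCR", ["A", "ONE"]))
# ['LOAD A', 'ADD ONE', 'STORE A']
```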
Q.83.
Q.84
(6)
Ans:
(i) Translated address: address assigned by the translator.
(ii) Linked address: address assigned by the linker.
(iii) Load-time address: address assigned by the loader.
While compiling a program P, the translator is given an origin specification for P.
This is called the translated origin of P. The translator uses the value of the translated
origin to perform memory allocation for the symbols declared in P. This results in the
assignment of a translated address to each symbol in the program. The origin of a
program may have to be changed by the linker or loader for one of the following
reasons:
1. Object modules of library routines often have the same translated origin, so
memory allocation to such programs would conflict unless their origins are changed.
Q.85
What are the functions of passes used in two-pass assembler? Explain pass-1
algorithm?
(10)
Ans:
Two-pass translation of an assembly language program can handle forward references
easily.
The tasks performed by the passes of a two-pass assembler are as follows:
Pass I: (i) Separate the symbol, mnemonic opcode and operand fields.
(ii) Build the symbol table.
(iii) Perform LC processing.
(iv) Construct an intermediate representation.
Pass II: Synthesize the target program.
Pass I uses the following data structures:
OPTAB: a table of mnemonic opcodes and related information.
SYMTAB: the symbol table.
LITTAB: a table of the literals used in the program.
OPTAB contains the fields mnemonic opcode, class and mnemonic information. The
class field indicates whether the opcode corresponds to an imperative statement (IS),
a declaration statement (DL) or an assembler directive (AD).
A SYMTAB entry contains the fields address and length. A LITTAB entry contains
the fields literal and address.
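Pass I can be sketched in Python; the one-word-per-statement instruction format and the OPTAB contents are simplifying assumptions for illustration, not the full design described above:

```python
# Sketch of pass I: separate fields, maintain the location counter (LC),
# and build SYMTAB while emitting an intermediate representation.

OPTAB = {"LOAD": ("IS", 1), "ADD": ("IS", 1), "STORE": ("IS", 1),
         "DS":   ("DL", 1)}            # (class, length in words)

def pass_one(lines):
    lc = 0
    symtab = {}
    intermediate = []
    for line in lines:
        parts = line.split()
        label = parts.pop(0) if parts[0] not in OPTAB else None
        opcode = parts[0]
        if label:
            symtab[label] = lc          # record the symbol's address
        intermediate.append((lc, opcode, parts[1:]))
        lc += OPTAB[opcode][1]          # LC processing
    return symtab, intermediate

symtab, ir = pass_one(["LOAD X", "ADD ONE", "STORE X", "X DS 1", "ONE DS 1"])
print(symtab)                           # {'X': 3, 'ONE': 4}
```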
Q.86
Q.87. Categorize the CPU scheduling algorithms? Explain non-pre-emptive algorithms? (4)
Ans
The various CPU scheduling algorithms are classified as follows:
Non-preemptive algorithms: In this method a job is given the CPU and keeps it until
the job completes; the CPU cannot be given to other processes in between.
There are three types of non-preemptive algorithms:
(i) First-come-first-served (FCFS): This is the simplest CPU scheduling algorithm.
With this scheme, the process that requests the CPU first is allocated the CPU first.
The implementation of FCFS is easily managed with a FIFO queue.
(ii) Shortest-job-first (SJF): This is also called SPN (shortest process next). The burst
times of all the jobs waiting in the queue are compared, and the job having the least
CPU execution time is given the processor first. Turnaround time and waiting time
are minimal under SJF, but it suffers from starvation (indefinite waiting) of long
jobs, and it is more complex than FCFS.
(iii) Priority: In this algorithm every job is associated with a CPU execution time, an
arrival time and a priority. The job having the highest priority is executed first. This
also suffers from starvation, whose effect may be reduced by using the aging
technique.
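The benefit of SJF over FCFS is easy to show numerically; the burst times below are example values, not from the text:

```python
# Average waiting time for jobs that all arrive at time 0, served in the
# given order. FCFS uses arrival order; SJF sorts by burst time first.
def avg_waiting(bursts):
    total_wait, elapsed = 0, 0
    for burst in bursts:
        total_wait += elapsed        # this job waited for all jobs before it
        elapsed += burst
    return total_wait / len(bursts)

bursts = [24, 3, 3]                  # CPU burst times in arrival order
print(avg_waiting(bursts))           # FCFS: (0 + 24 + 27) / 3 = 17.0
print(avg_waiting(sorted(bursts)))   # SJF:  (0 + 3 + 6) / 3 = 3.0
```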
Q.88. Differentiate between Batch Operating System and Time Sharing Operating System?
(6)
Ans
Batch operating systems: A batch is a sequence of jobs. The batch is submitted to the
batch-processing operating system, and output appears at some later time in the form
of program output or program errors. To speed up processing, similar jobs are batched
together. The major task of a batch operating system is to transfer control automatically
from one job to the next. Here the operating system is always resident in memory.
(i) There is a lack of interaction between the user and the job while it executes.
(ii) Turnaround time is high.
(iii) The CPU is often idle, because I/O devices are very slow.
Time-sharing operating systems: Time sharing is a logical extension of
multiprogramming in which the CPU switches among jobs so frequently that each
user can interact with the program while it is running. Response time is short, and
many users share the computer simultaneously.
(8)
Ans.
Deadlock is a situation in which processes never finish executing and system
resources are tied up, preventing other jobs from starting.
A process requests resources; if the resources are not available at that time, the
process enters a wait state. Waiting processes may never again change state, because
the resources they have requested are held by other waiting processes, thereby causing
deadlock.
An algorithm for deadlock detection:
1. Let Work and Finish be vectors of length m and n, respectively.
Initialize:
(a) Work = Available
(b) For i = 1, 2, ..., n, if Allocation_i ≠ 0, then Finish[i] = false;
otherwise, Finish[i] = true.
2. Find an index i such that both:
(a) Finish[i] == false
(b) Request_i ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocation_i
Finish[i] = true
Go to step 2.
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a
deadlocked state. Moreover, if Finish[i] == false, then process P_i is
deadlocked.
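The algorithm can be rendered directly in Python; the Available, Allocation and Request data below are invented example values:

```python
# Deadlock detection with Work/Finish vectors (n processes, m resource types).
def deadlocked(available, allocation, request):
    n, m = len(allocation), len(available)
    work = list(available)                        # step 1(a): Work = Available
    finish = [all(a == 0 for a in allocation[i])  # step 1(b)
              for i in range(n)]
    progressed = True
    while progressed:                             # steps 2 and 3
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):                # reclaim P_i's allocation
                    work[j] += allocation[i][j]
                finish[i] = True
                progressed = True
    return [i for i in range(n) if not finish[i]] # step 4: deadlocked set

# P0 and P1 each hold one unit of a resource the other is requesting:
print(deadlocked([0, 0], [[1, 0], [0, 1]], [[0, 1], [1, 0]]))   # [0, 1]
```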
Q.90
Q.91. What is critical section problem? Give two solutions for critical section problem?
(8)
Ans:
A race condition on a data item arises when many processes concurrently update its
value. Data consistency requires that only one process should update the value of a
data item at any time; this is ensured through the notion of a critical section, a section
of code that cannot be executed concurrently with itself or with other critical sections
for the same data item.
Two solutions to the critical section problem are:
(i) Peterson's algorithm: a software solution for two processes that uses a flag array
and a turn variable to decide which process may enter its critical section.
(ii) Semaphores (or mutex locks): a process performs wait() before entering its
critical section and signal() after leaving it, so at most one process is inside at a time.
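The lock-based solution can be shown with Python threads; the shared counter workload is an invented example:

```python
# Mutual exclusion with a lock: without it, the two threads' read-modify-
# write sequences on `counter` could interleave and updates would be lost.
import threading

counter = 0
lock = threading.Lock()

def worker(times):
    global counter
    for _ in range(times):
        with lock:            # entry section: acquire the lock
            counter += 1      # critical section
        # exit section: the lock is released on leaving the `with` block

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                # 200000: no update is lost while the lock is held
```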
Q.92.
Q.93.
Explain with the help of examples FIFO and LRU page replacement algorithms? (10)
Ans :
FIFO policy: This policy simply removes pages in the order in which they arrived in
main memory; the page replaced is the one that has been in memory the longest.
If the number of frames is increased, say to 4, the number of page faults may also
increase (to 10 in this case); this is Belady's anomaly.
LRU policy: LRU stands for least recently used. This policy replaces the page whose
last use is farthest in the past from the current time.
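Both policies can be simulated in a few lines; the reference string below is the classic example in which FIFO shows the anomaly of more frames causing more faults:

```python
# Counting page faults for FIFO and LRU on the same reference string.
def page_faults(refs, frames, policy):
    memory, faults = [], 0
    for page in refs:
        if page in memory:
            if policy == "LRU":        # refresh recency on a hit
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1
        if len(memory) == frames:
            memory.pop(0)              # evict oldest (FIFO) / least recent (LRU)
        memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(page_faults(refs, 3, "FIFO"))    # 9
print(page_faults(refs, 4, "FIFO"))    # 10, despite one more frame
print(page_faults(refs, 3, "LRU"))     # 10
```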
Q.94.
(8)
Ans:
Necessary conditions for deadlock
1. Mutual exclusion
2. Hold and wait
3. No preemption
4. Circular wait
Mutual exclusion: The mutual-exclusion condition must hold for non-sharable
resources. For example, a printer cannot be simultaneously shared by several
processes. Sharable resources, on the other hand, do not require mutually exclusive
access and thus cannot be involved in a deadlock. In general, however, it is not
possible to prevent deadlocks by denying the mutual-exclusion condition.
Hold and wait: To ensure that the hold-and-wait condition never occurs in the
system, we must guarantee that, whenever a process requests a resource, it does not
hold any other resources. One protocol lets a process request some resources and use
them, but before it can request any further resources it must release all the resources
it currently holds.
No preemption: The third necessary condition is that there be no preemption of
resources that have already been allocated. To deny it: if a process that is holding
some resources requests another resource that cannot be immediately allocated to it,
then all resources it currently holds are preempted.
Circular Wait: One way to ensure that the circular wait condition never holds is to
impose a total ordering of all resources types, and to ensure that each process requests
resources in an increasing order of enumeration.
Q.95 What is Process Scheduling? Explain the different sub-functions of Process
Scheduling. (8)
Ans:
Scheduling is a key part of the workload-management software, which usually
performs some or all of the following:
Queuing
Scheduling
Monitoring
Resource management
Accounting
The difficult part of scheduling is to balance policy enforcement with resource
optimization in order to pick the best job to run. Essentially one can think of the
scheduler performing the following loop:
1. Select the best job to run, according to policy and available resources.
2. Start the job.
3. Stop the job and/or clean up after completion.
4. Repeat.
Process scheduling consists of the following sub-functions:
1. Scheduling: selects the process to be executed next on the CPU. The scheduling
function uses information from the PCBs and selects a process based on the
scheduling policy in force.
2. Dispatching: sets up execution of the selected process on the CPU. This function
involves setting up the execution environment of the selected process and loading
information from the PSR and register fields of the PCB into the CPU.
3. Context save: saves the status of a running process when its execution is to be
suspended. This function performs housekeeping whenever a process releases the
CPU or is pre-empted.
The following sequence illustrates the use of the scheduling sub-functions:
occurrence of an event invokes the context save function; the kernel then processes
the event that has occurred; the scheduling function is invoked to select a process for
execution on the CPU; and the dispatching function arranges execution of the
selected process on the CPU.
Event -> Context save -> Event processing -> Scheduling -> Dispatching
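The sub-functions can be sketched as skeleton code; the PCB representation (bare names) and the FIFO pick are drastic simplifications invented for this sketch, not a real kernel:

```python
# Scheduling sub-functions as a skeleton kernel loop.
from collections import deque

ready = deque(["P1", "P2", "P3"])   # ready queue of PCBs, reduced to names

def context_save(process):          # would save PSR and registers into the PCB
    return process

def dispatch(process):              # would load PSR and registers from the PCB
    return process

def handle_event(running):
    saved = context_save(running)   # 1. context save on event occurrence
    ready.append(saved)             # event processing (here: just requeue)
    chosen = ready.popleft()        # 2. scheduling: select the next process
    return dispatch(chosen)         # 3. dispatching: set up its execution

running = handle_event("P0")
print(running)                      # P1 is selected and dispatched
```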
Q.96. Describe the essential properties of the following operating systems:
Real Time and Distributed Operating System
(8)
Ans:
Real-time operating system: A real-time system is used when rigid time requirements
are placed on the operation of a processor or on the flow of data. It has well-defined,
fixed time constraints, and processing must be done within those constraints or the
system fails. In a hard real-time system, critical tasks are guaranteed to complete on
time; in a soft real-time system, a critical task merely gets priority over other tasks.
Distributed operating system: A distributed system is a collection of processors that
do not share memory or a clock; each processor has its own local memory, and the
processors communicate through communication lines such as a network. It provides
resource sharing, computation speed-up, reliability and communication among users.
Q.97.
Describe Data structures used during passes of assembler and their use.
(10)
Ans:
Data structures used during the passes of an assembler and their use:
Pass 1 data bases:
1. Input source program.
2. A location counter (LC).
3. A table, the machine-operation table (MOT), that indicates the symbolic
mnemonic for each instruction and its length.
4. Pseudo-operation table (POT).
5. Symbol table (ST).
6. Literal table (LT).
7. A copy of the input, to be used later by pass 2.
Pass 2 data bases:
1. Copy of the source program input to pass 1.
2. Location counter (LC).
3. MOT.
4. POT.
5. ST.
6. Base table that indicates which registers are currently specified as base registers.
7. A workspace, INST, that is used to hold the instruction as its various parts are
being assembled together.
8. A print-line workspace, used to produce a printed listing.
9. A punch-card workspace, for converting assembled instructions into the format
needed by the loader.
Q.98. What is parsing? Specify the goals of parsing.
(6)
Ans:
Source program statements are regarded as sequences of tokens, the building blocks
of the language. The task of scanning the source statement and recognizing and
classifying the various tokens is known as lexical analysis; the part of the compiler
that performs it is commonly called a scanner.
After the token scan, each statement in the program must be recognized as some
language construct, such as a declaration or an assignment statement, described by
the grammar. This task, performed by the parser, is known as syntactic analysis or
parsing. The goals of parsing are to check the syntactic validity of the source string
and to determine its syntactic structure (for example, by building a parse tree) for use
by the later phases of the compiler.
(8)
Ans:
Code optimization is an optional phase designed to improve the intermediate code so
that the ultimate object program runs faster or takes less space. Code optimization in
compilers aims at improving the execution efficiency of a program by eliminating
redundancies and by rearranging the computations in the program without affecting
its real meaning.
Scope: Optimization seeks to improve a program rather than the algorithm used in
the program. Thus replacement of an algorithm by a more efficient one is beyond the
scope of optimization; efficient code generation for a specific target machine also
lies outside its scope.
The structure of a program and the manner in which data is defined and used in it
provide vital clues for optimization.
Optimization transformations are classified into local and global transformations.
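A local transformation such as constant folding is easy to illustrate; the tuple-based intermediate representation here is invented for the sketch:

```python
# Constant folding: evaluate, at compile time, sub-expressions whose
# operands are all known constants.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def fold(expr):
    """expr is a constant, a variable name, or (op, left, right)."""
    if not isinstance(expr, tuple):
        return expr
    op, left, right = expr[0], fold(expr[1]), fold(expr[2])
    if isinstance(left, int) and isinstance(right, int):
        return OPS[op](left, right)        # computed once, at compile time
    return (op, left, right)

# x * (2 + 3) becomes x * 5: the addition is removed from the object program.
print(fold(("*", "x", ("+", 2, 3))))       # ('*', 'x', 5)
```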
Q.100. Explain the features of the major scheduling algorithms. (8)
Ans:
1. FCFS: first-come-first-served scheduling
2. Shortest-job-first scheduling
3. Priority scheduling
4. Round-robin scheduling
FCFS
1. The process that requests the CPU first is allocated the CPU first.
2. Managed with a FIFO queue.
3. Average waiting time is generally long.
4. Non-preemptive.
5. Troublesome for time-sharing systems.
Shortest job first
1. The process which has the smallest CPU burst time gets the CPU first.
2. It increases the waiting time of long processes.
3. Problem: it is difficult to predict the length of the next CPU request.
4. May be preemptive or non-preemptive.
Priority
1. The highest-priority process gets the CPU first.
2. The larger the CPU burst, the lower the priority.
3. Priority can be defined internally or externally.
4. Can be preemptive or non-preemptive.
5. Problem: starvation (blocking) of low-priority processes.
6. Solution: aging gradually increases the priority.
Round robin
1. Designed for time-sharing systems; each process gets a small unit of CPU time
(time quantum).
2. The ready queue is treated as a circular queue; when its quantum expires, a
process is preempted and added to the tail of the queue.
3. Preemptive.
4. Performance depends heavily on the size of the time quantum.
Q.101. List the criteria on the basis of which data structures used in language
processing can be classified.
(3)
Ans:
The data structures used in language processing can be classified on the following
criteria:
1. Nature of a data structure: whether a linear or non-linear data structure. A linear
data structure consists of a linear arrangement of elements in memory and its
elements require a contiguous area of memory. The elements of a non-linear data
structure are accessed using pointers and hence need not occupy contiguous areas of
memory.
2. Purpose of a data structure: whether a search data structure or an allocation data
structure. Search data structures are used during language processing to maintain
attribute information concerning different entities in the source program. Allocation
data structures are characterized by the fact that the user of an entity knows the
address of the memory area allocated to it, so no search operations are conducted on
them.
3. Lifetime of a data structure: whether used during language processing or during
target program execution.
Q.102. Differentiate between logical address and physical address.
(4)
Ans:
A logical address is the address of an instruction or data word as used by a program
(this includes the use of index, base or segment registers).
A physical address is the effective memory address of an instruction or data word.
The set of physical addresses generated during operation of the system constitutes
the physical address space of the system. The compile-time and load-time address-
binding schemes result in an environment where the logical and physical addresses
are the same, whereas under execution-time binding they differ.
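Execution-time binding can be illustrated with the usual relocation-register arithmetic; the base and limit values below are example numbers:

```python
# The MMU adds the relocation (base) register to every logical address;
# the limit register guards against addresses outside the program's space.
def translate(logical, base=14000, limit=3000):
    if logical >= limit:
        raise MemoryError("addressing error: beyond limit, trap to OS")
    return base + logical            # physical address

print(translate(346))                # logical 346 maps to physical 14346
```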
Q.103.
Q.105. Can the operand expression in an ORG statement contain forward references? If
so, outline how the statement can be processed in a two-pass assembly
scheme.
(9)
Ans:
ORG (origin) is an assembler directive that sets the location counter to the value
specified by its operand, so that subsequent statements are assembled starting at that
address.
On the first pass, the ORG statement sets the location counter to $800. Thus the label
N has the value $800, the label M has the value $801, the label COLUMN has the
value $802, and the label ODD has the value $834. The instruction CLR M will take
three bytes, the instruction LDAB N will take three bytes, and so forth. Similarly, we
see that the first byte of the next instruction will be at location $872; thus the
symbolic address LOOP has the value $872, and so on.
This program searches the array COLUMN looking for odd, negative, one-byte
numbers, which are then stored in array ODD. The length of COLUMN is N and the
length of ODD is M, which the program calculates. We do not know the second byte
of one instruction because we do not know the value of the address JUMP yet. (This
is called a forward reference: using a label whose value is not yet known.) However,
we can leave this second byte undetermined and proceed until we see that the
machine code for DBNE is put into location $87f, thus giving JUMP the value $87f.
As we continue our first pass downward, we allocate three bytes for DBNE B,LOOP.
Q.106. What criteria should be adopted for choosing type of file organization
(8)
Ans:
Choosing a file organization is a design decision; hence it must be made with a view
to achieving good performance with respect to the most likely usage of the file. The
criteria usually considered important are:
1. Fast access to a single record or a collection of related records.
2. Easy record adding/updating/removal, without disrupting (1).
3. Storage efficiency.
4. Redundancy as a warranty against data corruption.
Needless to say, these requirements conflict with each other in all but the most
trivial situations, and it is the designer's job to find a good compromise among them,
yielding an adequate solution to the problem at hand. For example, ease of adding,
updating and removing records is not an issue when defining the data organization
of a CD-ROM product, whereas fast access is, given the huge amount of data that
this medium can store. However, as will become apparent shortly, fast-access
techniques are based on the use of additional information about the records, which in
turn competes with the high volumes of data to be stored.
Logical data organization is indeed the subject of whole shelves of books in the
database section of your library. Here we will briefly address some of the simpler
techniques, mainly because of their relevance to data management from the
lower-level (with respect to a database's) point of view of an OS. Five organization
models will be considered:
(i) Pile.
(ii) Sequential.
(iii) Indexed-sequential.
(iv) Indexed.
(v) Hashed.
Q.107 Compare pre-emptive and non-preemptive scheduling policies.
(4)
Ans:
In preemptive scheduling, the CPU can be taken away from the currently executing
process, for example when its time quantum expires or a higher-priority process
arrives. In non-preemptive scheduling, the current process is allowed to finish its
CPU burst before the CPU is reassigned.
Q.108. Explain the differences between:
(i) Logical and physical address space.
(ii) Internal and external fragmentation.
(iii) Paging and segmentation.
(6)
Ans:
(i) A logical address is the address generated by the CPU; a physical address is the
address seen by the memory unit. With a relocation register in the MMU, the value
in the relocation register is added to every logical address at the time it is sent to
memory: for example, with a relocation register of 14000, logical address 346 maps
to physical address 14346.
(2) Internal fragmentation is found in multiple fixed partition schemes where all the
partitions are of the same size. That is, physical memory is broken into fixed-sized
blocks.
External fragmentation is found in multiple variable partition schemes. Instead of
dividing memory into a fixed set of partitions, an operating system can choose to
allocate to a process the exact amount of unused memory space it requires.
(3) In multiple fixed partition scheme, the partition table needs to store either the
starting address for each process or the number of the partition allocated to each
process.
In multiple variable partition scheme, the overhead of managing more data increases.
The partition table must store exact starting and ending location of each process and
data about which memory locations are free must be maintained.
(4) In multiple fixed partition schemes, size/limit register is set at boot time and
contains the partition size. Each time a process is allocated control of CPU, the
operating system only needs to reset the relocation register. In multiple variable
partition schemes, each time a different process is given control of the CPU, the
operating system must reset the size/limit register in addition to the relocation register.
The operating system must also make decisions on which partition it should allocate to
a process.
(5) Internal fragmentation can be reduced using multiple variable partition method.
However, this solution suffers from external fragmentation. External fragmentation can
be solved using compaction where the goal is to shuffle the memory contents to place
all free memory together in one large block. Another possible solution to the external
fragmentation problem is to permit the logical address space of a process to be non
contiguous. This solution is achieved by paging and segmentation.
(iii) Paging and segmentation:
Paging: Computer memory is divided into small fixed-size partitions called page
frames. When a process is loaded it is divided into pages of the same size as the
frames, and the pages are loaded into the frames. An address generated by the CPU
is divided into a page number (p), used as an index into a page table which contains
the base address of each page in physical memory, and a page offset (d), which is
combined with the base address to define the physical memory address that is sent to
the memory unit.
Segmentation: A memory-management scheme that supports the user's view of
memory. Computer memory is allocated in various sizes (segments) depending on
the need for address space by the process.
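The paged address translation can be written out directly; the page size and the page-table contents below are illustrative values:

```python
# Splitting a logical address into page number (p) and offset (d), then
# forming the physical address from the frame found in the page table.
PAGE_SIZE = 1024                       # bytes per page (and per frame)
page_table = {0: 5, 1: 2, 2: 7}        # page number -> frame number

def paged_translate(logical):
    p, d = divmod(logical, PAGE_SIZE)  # page number and page offset
    return page_table[p] * PAGE_SIZE + d

print(paged_translate(1034))           # page 1, offset 10 -> 2*1024 + 10 = 2058
```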
Q.109. Suppose that a process scheduling algorithm favors those processes that have used
the least processor time in the recent past. Why will this algorithm favor I/O-bound
processes, yet not starve CPU-bound processes?
(6)
Ans: It will favor the I/O-bound programs because of the relatively short CPU bursts
they request; however, the CPU-bound programs will not starve, because the
I/O-bound programs will relinquish the CPU relatively often to do their I/O.
Q.110 List one advantage and one disadvantage of having large block size.
(4)
Ans: Advantage: less overhead, since fewer blocks (and hence fewer I/O operations
and block pointers) are needed to hold a file of a given size. Disadvantage: greater
internal fragmentation, since on average half of the last block of each file is wasted,
so small files waste a large fraction of their allocated space.
Segment Table
[Hexadecimal segment-table entries, not reproduced.]
Page Tables
[Hexadecimal page-table entries, not reproduced.]
(i) How many bytes are contained within the physical memory?
(ii) How large is the virtual address?
(iii) What is the physical address that corresponds to virtual address 0x312?
(iv) What is the physical address that corresponds to virtual address 0x1E9? (8)
Ans:
(7)
Ans:
The analysis and synthesis phases of a compiler are:
Analysis Phase: Breaks the source program into constituent pieces and creates
intermediate representation. The analysis part can be divided along the following
phases:
1. Lexical Analysis: The program is considered as a unique sequence of characters.
The lexical analyzer reads the program from left to right, and the sequence of
characters is grouped into tokens: lexical units with a collective meaning.
2. Syntax Analysis: Syntactic analysis is also called parsing. Tokens are grouped
into grammatical phrases represented by a parse tree, which gives a hierarchical
structure to the source program.
3. Semantic Analysis: This phase checks the program for semantic errors (type
checking) and gathers type information for the successive phases. Type checking
checks the types of operands, e.g. that no real number is used as an index for an
array.
Synthesis Phase: Generates the target program from the intermediate representation.
The synthesis part can be divided along the following phases:
1. Intermediate Code Generator- An intermediate code is generated as a program for
an abstract machine. The intermediate code should be easy to translate into the target
program.
2. Code Optimizer- This phase attempts to improve the intermediate code so that
faster-running machine code can be obtained. Different compilers adopt different
optimization techniques.
3. Code Generator- This phase generates the target code consisting of assembly code.
Here
1. Memory locations are selected for each variable;
2. Instructions are translated into a sequence of assembly instructions;
3. Variables and intermediate results are assigned to memory registers.
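The first of the analysis phases can be sketched with a miniature scanner; the three token classes (NUM, ID, OP) are a simplification invented for this sketch:

```python
# A miniature lexical analyzer: characters are grouped into tokens.
import re

TOKENS = re.compile(r"(?P<NUM>\d+)|(?P<ID>[A-Za-z_]\w*)|(?P<OP>[+\-*/=])|\s+")

def scan(source):
    result = []
    for match in TOKENS.finditer(source):
        if match.lastgroup:                  # whitespace matches have no group
            result.append((match.lastgroup, match.group()))
    return result

print(scan("pos = init + 60"))
# [('ID', 'pos'), ('OP', '='), ('ID', 'init'), ('OP', '+'), ('NUM', '60')]
```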
Q.113. Explain the concept of variable-partition contiguous storage allocation. (6)
Ans:
In variable-partition contiguous allocation, the operating system does not divide
memory into fixed partitions in advance; instead each process is allocated a single
contiguous region of exactly the size it requires, taken from whatever free areas
(holes) are available. When a process terminates, its region becomes a hole, which
may be merged with adjacent holes; over time this leads to external fragmentation.
A hole of 64K is left after loading three processes: not enough room for another
process. Eventually each process is blocked, and the OS swaps out process 2 to bring
in process 4.
[Diagram of successive memory maps showing processes P1 and P2 being swapped
in and out, not reproduced.]
DC14
11
12
28
Between 2-3unit of time, p2 is in ready queue and P1 has gone for I /O. So P2 should be
executed as it is pre-emptive SJF algorithm.
Completion time of process p1 = 12 unit
Completion time of process p2 = 28 unit
Round- robin (R R) algorithm
P1
P2
P1
P2
P1
P2
P1
P2
12
14
18
20
28
S.No.   Statement                                  Offset
0001    DATA-HERE  SEGMENT
0002    ABC        DW      25                      0000
0003    B          DW      ?                       0002
.
.
.
0012    SAMPLE     SEGMENT
0013               ASSUME  CS: SAMPLE, DS: DATA-HERE
0014               MOV     AX, DATA-HERE          0000
0015               MOV     DS, AX                 0003
0016               JMP     A                      0005
0017               MOV     AL, B                  0008
.
.
0027    .
.
.
0043    A          MOV     AX, BX                 0196
0044    SAMPLE     ENDS
                   END