
Q1. Explain the concept of real-time operating systems.

Ans: A real-time operating system is used when rigid time requirements have been placed on
the operation of a processor or the flow of data; thus, it is often used as a control device in
a dedicated application. Here, sensors bring data to the computer. The computer must
analyse the data and possibly adjust controls to modify the sensor input.
They are of two types:
1. Hard real-time OS
2. Soft real-time OS
A hard real-time OS has well-defined, fixed time constraints, whereas a soft real-time
operating system has less stringent timing constraints.

Q2. What is the layered approach and what is its advantage?


Ans: The layered approach is a step towards modularizing the system, in which the operating
system is broken up into a number of layers (or levels), each built on top of the layer below it.
The bottom layer is the hardware and the topmost is the user interface. The main advantage of
the layered approach is modularity. The layers are selected such that each uses the functions
(operations) and services of only lower-level layers. This approach simplifies debugging and
system verification.

Q3. Explain the concept of multi-programmed operating systems.


Ans: A multi-programmed operating system can execute a number of programs
concurrently. The operating system fetches a group of programs from the job pool in
secondary storage, which contains all the programs to be executed, and places them in
main memory. This process is called job scheduling. The operating system then chooses a
program from the ready queue and dispatches it to the CPU for execution. When an executing
program needs an I/O operation, the operating system fetches another program and hands it
to the CPU, thus keeping the CPU busy all the time.

Q4. What are virtual machines, and what are their advantages?


Ans: A virtual machine is the concept by which an operating system creates the illusion that a
process has its own processor with its own (virtual) memory. The operating system implements
the virtual machine concept using CPU scheduling and virtual memory.
1. The basic advantage is that it provides a robust level of security, as each virtual machine is
isolated from all other VMs. Hence the system resources are completely protected.
2. Another advantage is that system development can be done without disrupting normal
operation. System programmers are given their own virtual machine, and system
development is done on the virtual machine instead of on the actual physical machine.
3. A further advantage of the virtual machine is that it solves compatibility problems.
Ex: Java, supplied by Sun Microsystems, provides a specification for the Java virtual machine.

Q5. Explain the concept of multi-processor (parallel) systems.


Ans: They contain a number of processors to increase the speed of execution, reliability,
and economy. They are of two types:
1. Symmetric multiprocessing
2. Asymmetric multiprocessing
In symmetric multiprocessing, each processor runs an identical copy of the OS, and these
copies communicate with each other as and when needed. In asymmetric
multiprocessing, each processor is assigned a specific task.
Q6. What is the difference between Hard and Soft real-time systems?
Ans: A hard real-time system guarantees that critical tasks complete on time. This goal
requires that all delays in the system be bounded, from the retrieval of stored data to the
time it takes the operating system to finish any request made of it. In a soft real-time
system, a critical real-time task gets priority over other tasks and retains that priority
until it completes. As in hard real-time systems, kernel delays need to be bounded.

Q7. What is dual-mode operation?


Ans: In order to protect the operating system and the system programs from
malfunctioning user programs, two modes of operation evolved:
1. System mode
2. User mode
Here the user programs cannot interact directly with the system resources; instead they
request the operating system, which checks the request and performs the required task for the
user programs. MS-DOS was written for the Intel 8088 and has no dual mode; the Pentium
provides dual-mode operation.

Q8. What are throughput, turnaround time, waiting time and response time?
Ans :
Throughput: the number of processes that complete their execution per time unit.
Turnaround time: the amount of time to execute a particular process.
Waiting time: the amount of time a process has been waiting in the ready queue.
Response time: the amount of time from when a request was submitted until the first
response is produced, not the final output (for time-sharing environments).
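
These metrics can be computed directly from a schedule. Below is a small sketch for a
hypothetical first-come-first-served workload (all arrival and burst times are invented
for illustration):

```python
processes = [(0, 5), (1, 3), (2, 8)]     # (arrival_time, burst_time), illustrative

time = 0
turnarounds, waits = [], []
for arrival, burst in processes:
    start = max(time, arrival)           # CPU may sit idle until the job arrives
    time = start + burst                 # job runs to completion (FCFS)
    turnarounds.append(time - arrival)   # turnaround = completion - arrival
    waits.append(start - arrival)        # waiting = time spent in the ready queue

throughput = len(processes) / time       # processes completed per time unit
print(turnarounds, waits, throughput)    # [5, 7, 14] [0, 4, 6] 0.1875
```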

Q9. What is the microkernel approach, and what are its advantages?


Ans: The microkernel approach is a step towards modularizing the operating system, in which
all nonessential components are removed from the kernel and implemented as system- and
user-level programs, making the kernel smaller. The benefits of the microkernel approach
include the ease of extending the operating system: new services are added in user space and
consequently do not require modification of the kernel. As the kernel is smaller, it is also
easier to upgrade. This approach provides more security and reliability as well, since most
services run as user processes rather than in the kernel; if a service fails, the rest of the
kernel remains intact.

Q10. What are long-term and short-term schedulers?


Ans: The long-term scheduler is the job scheduler: it selects processes from the job queue
and loads them into memory for execution. The short-term scheduler is the CPU scheduler:
it selects a process from the ready queue and allocates the CPU to it.

Q11. What are Dynamic Loading, Dynamic Linking and Overlays?


Ans :
Dynamic Loading:
1. A routine is not loaded until it is called.
2. Better memory-space utilization; an unused routine is never loaded.
3. Useful when large amounts of code are needed to handle infrequently occurring cases.
4. No special support from the operating system is required; it is implemented through
program design.
Dynamic Linking:
1. Linking is postponed until execution time.
2. A small piece of code, the stub, is used to locate the appropriate memory-resident
library routine.
3. The stub replaces itself with the address of the routine and executes the routine.
4. Operating system support is needed to check whether the routine is in the process's
memory address space.
5. Dynamic linking is particularly useful for libraries.
Overlays:
1. Keep in memory only those instructions and data that are needed at any given time.
2. Needed when a process is larger than the amount of memory allocated to it.
3. Implemented by the user; no special support is needed from the operating system, but
the programming design of the overlay structure is complex.
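
Python's import machinery makes the dynamic-loading idea easy to sketch: a module is not
loaded until the code that needs it actually runs. (The helper name below is our own, not a
standard API.)

```python
import importlib

def lazy_call(module_name, func_name, *args):
    """Load a routine only when it is first called (dynamic loading)."""
    mod = importlib.import_module(module_name)   # loaded on demand, not at start-up
    return getattr(mod, func_name)(*args)

# 'json' is imported here, at the point of the call, not at program start.
print(lazy_call("json", "dumps", {"a": 1}))      # {"a": 1}
```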

Q12.What is process synchronization?


Ans: A situation where several processes access and manipulate the same data concurrently,
and the outcome of the execution depends on the particular order in which the accesses take
place, is called a race condition. To guard against race conditions, we need to ensure that
only one process at a time can manipulate the shared data. The technique we use for this
is called process synchronization.
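
A minimal sketch of a race condition being prevented with a mutual-exclusion lock (the
shared-counter workload is illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:            # only one thread manipulates counter at a time
            counter += 1      # read-modify-write is now serialized

threads = [threading.Thread(target=add_many, args=(50_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 200000 -- without the lock, updates could be lost
```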

Q13. What is the critical section problem?


Ans: A critical section is the code segment of a process in which the process may be changing
common variables, updating tables, writing a file, and so on. Only one process is allowed to
be in its critical section at any given time (mutual exclusion). The critical section problem is
to design a protocol that the processes can use to cooperate. The three basic requirements of
a solution are:
1. Mutual exclusion
2. Progress
3. Bounded waiting
The bakery algorithm is one solution to the critical section problem.
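
Since the answer mentions the bakery algorithm, here is a compact two-thread sketch of its
ticket idea. This is illustrative only: it relies on CPython's interpreter behaviour, and a
real implementation must account for the hardware memory model.

```python
import sys
import threading

sys.setswitchinterval(0.0005)       # switch threads often so the busy-wait demo is quick

N = 2
choosing = [False] * N              # True while thread j is picking its ticket
number = [0] * N                    # ticket numbers; 0 means "not interested"
count = 0

def lock(i):
    choosing[i] = True
    number[i] = 1 + max(number)     # take a ticket larger than any seen so far
    choosing[i] = False
    for j in range(N):
        if j == i:
            continue
        while choosing[j]:          # wait while j is still choosing a ticket
            pass
        # lower ticket enters first; ties are broken by thread id
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass

def unlock(i):
    number[i] = 0

def worker(i):
    global count
    for _ in range(500):
        lock(i)
        count += 1                  # critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)   # 1000
```

Note how the `choosing` flags let the algorithm tolerate the non-atomic ticket-taking step:
a thread never overtakes a neighbour that is still in the middle of choosing.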

Q14. Under what circumstances do page faults occur? Describe the actions taken by the
operating system when a page fault occurs.
Ans: A page fault occurs when an access is made to a page that has not been brought into main
memory. The operating system verifies the memory access, aborting the program
if it is invalid. If it is valid, a free frame is located and I/O is requested to read the needed
page into the free frame. Upon completion of the I/O, the process table and page table are
updated and the instruction is restarted.
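
That sequence can be sketched as a toy model (the structures and page numbers below are
illustrative, not a real OS's):

```python
page_table = {}            # page number -> frame number, present pages only
free_frames = [0, 1, 2]    # frames not yet in use
faults = 0

def access(page):
    global faults
    if page not in page_table:        # page fault: page not in main memory
        faults += 1
        frame = free_frames.pop()     # locate a free frame
        # ...a real OS would now request I/O to read the page from disk...
        page_table[page] = frame      # update the page table
        # ...and restart the faulting instruction
    return page_table[page]

for p in [7, 0, 7, 1, 0]:
    access(p)
print(faults)   # 3 -- only the first touch of each distinct page faults
```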

Q15. What is fragmentation? What are the different types of fragmentation?


Ans: Fragmentation occurs in a dynamic memory allocation system when many of the free
blocks are too small to satisfy any request.

1. External fragmentation: External fragmentation happens when a dynamic memory
allocation algorithm allocates some memory and a small piece is left over that cannot
be effectively used. If too much external fragmentation occurs, the amount of usable
memory is drastically reduced: total memory space exists to satisfy a request, but it is
not contiguous.
2. Internal fragmentation: Internal fragmentation is the space wasted inside allocated
memory blocks because of restrictions on the allowed sizes of allocated blocks.
Allocated memory may be slightly larger than the requested memory; this size
difference is memory internal to a partition that is not being used.
External fragmentation can be reduced by compaction:
 Shuffle memory contents to place all free memory together in one large block.
 Compaction is possible only if relocation is dynamic and is done at execution
time.
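
Internal fragmentation is easy to quantify when an allocator hands out only fixed block
sizes, e.g. powers of two (the request sizes below are made up for illustration):

```python
def block_size(request):
    """Smallest power-of-two block that fits the request."""
    size = 1
    while size < request:
        size *= 2
    return size

requests = [3, 9, 17, 60]
wasted = sum(block_size(r) - r for r in requests)   # total internal fragmentation
print(wasted)   # (4-3) + (16-9) + (32-17) + (64-60) = 27
```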

Q16. What is the dining philosophers' problem?


Ans: Consider five philosophers who spend their lives thinking and eating. The philosophers
share a common circular table surrounded by five chairs, each belonging to one philosopher. In
the center of the table is a bowl of rice, and the table is laid with five single chopsticks.
When a philosopher thinks, she doesn't interact with her colleagues. From time to time, a
philosopher gets hungry and tries to pick up the two chopsticks that are closest to her. A
philosopher may pick up only one chopstick at a time, and obviously she can't pick up a
chopstick that is already in another's hand. When a hungry philosopher has both her
chopsticks at the same time, she eats without releasing them. When she is finished eating,
she puts down both of her chopsticks and starts thinking again.
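
The classic deadlock in this problem (every philosopher holds one chopstick and waits for
the other) can be avoided by imposing a total order on the chopsticks, which breaks the
circular wait. A threaded sketch:

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    # Always pick up the lower-numbered chopstick first: with a total order
    # on the resources, a circular wait (and thus deadlock) is impossible.
    first, second = (left, right) if left < right else (right, left)
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1          # eating with both chopsticks held

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)   # [100, 100, 100, 100, 100] -- everyone eats, no deadlock
```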

Q17. What are multitasking, multiprogramming, and multithreading?


Ans: Multiprogramming: Multiprogramming is the technique of running several programs
at a time using timesharing. It allows a computer to do several things at the same time,
creating logical parallelism. The concept of multiprogramming is that the operating system
keeps several jobs in memory simultaneously. The operating system selects a job from the
job pool and starts executing it; when that job needs to wait for an I/O operation, the CPU
is switched to another job. The main idea is that the CPU is never idle.
Multitasking: Multitasking is the logical extension of multiprogramming. The concept of
multitasking is quite similar, but the difference is that switching between jobs occurs so
frequently that users can interact with each program while it is running. This concept is
also known as time-sharing. A time-shared operating system uses CPU scheduling and
multiprogramming to provide each user with a small portion of the time-shared system.
Multithreading: An application is typically implemented as a separate process with several
threads of control. In some situations a single application may be required to perform several
similar tasks; for example, a web server accepts client requests for web pages, images, sound,
and so forth. A busy web server may have several clients concurrently accessing it. If the
web server ran as a traditional single-threaded process, it would be able to service only one
client at a time, and the amount of time a client might have to wait for its request to be
serviced could be enormous.
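
The web-server scenario can be sketched with a thread pool, where each request is handled
by its own worker thread (the handler below is a stand-in, not real server code):

```python
from concurrent.futures import ThreadPoolExecutor

def handle(request):
    # stand-in for fetching a page, image, or sound file for a client
    return f"served {request}"

# Up to four clients are serviced concurrently instead of one at a time.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle, ["page", "image", "sound"]))
print(results)   # ['served page', 'served image', 'served sound']
```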

Q18. What is the difference between physical and logical address space?


Ans :
 The concept of a logical address space that is bound to a separate physical address
space is central to proper memory management. A logical address is generated by the
CPU and is also referred to as a virtual address. A physical address is the address seen
by the memory unit.
 Logical and physical addresses are the same in compile-time and load-time address-
binding schemes; logical (virtual) and physical addresses differ in the execution-time
address-binding scheme.
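
Execution-time binding with a relocation (base) register and a limit register can be
sketched as follows (the register values are made up for illustration):

```python
BASE, LIMIT = 14000, 3000   # relocation and limit registers, illustrative values

def translate(logical):
    """Map a CPU-generated logical address to the physical address seen by memory."""
    if not 0 <= logical < LIMIT:
        raise MemoryError("trap: logical address out of range")
    return BASE + logical    # physical = base + logical

print(translate(346))   # 14346
```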
Q19. What are the necessary conditions for deadlock?
Ans:
1. Mutual exclusion (at least one resource is non-sharable)
2. Hold and wait (a process holds one resource and waits for another)
3. No preemption (resources cannot be preempted)
4. Circular wait (P[i] waits for P[j] to release a resource, where i = 1, 2, …, n and
j = i + 1 if i ≠ n, else j = 1)

Q20.What are the deadlock avoidance algorithms?


Ans: A deadlock avoidance algorithm dynamically examines the resource-allocation state to
ensure that a circular-wait condition can never exist. The resource-allocation state is defined
by the number of available and allocated resources and the maximum demands of the processes.
There are two algorithms:
1. Resource allocation graph algorithm
2. Banker’s algorithm
a. Safety algorithm
b. Resource request algorithm
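
The safety algorithm at the heart of the banker's algorithm can be sketched as follows
(the matrices are a made-up five-process, three-resource example):

```python
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
# need = maximum demand minus what is already allocated
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]

def is_safe():
    work = available[:]                  # resources currently free
    finish = [False] * len(allocation)
    order = []                           # a safe sequence, if one exists
    while len(order) < len(allocation):
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # pretend process i runs to completion and releases its resources
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                order.append(i)
                break
        else:
            return False, order          # no process can proceed: state is unsafe
    return True, order

print(is_safe())   # (True, [1, 3, 0, 2, 4]) -- the state is safe
```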

Q21. What are the Binding of Instructions and Data to Memory?


Ans : Address binding of instructions and data to memory addresses can happen at three
different stages
Compile time: If memory location known a priori, absolute code can be generated; must
recompile code if starting location changes.
Load time: Must generate relocatable code if memory location is not known at compile time.
Execution time: Binding delayed until run time if the process can be moved during its
execution from one memory segment to another. Need hardware support for address maps
(e.g., base and limit registers).

Q22. What are starvation and aging?


Ans: Starvation: Starvation is a resource management problem in which a process does not
get the resources it needs for a long time because the resources are being allocated to other
processes.
Aging: Aging is a technique to avoid starvation in a scheduling system. It works by adding
an aging factor to the priority of each request. The aging factor must increase the request’s
priority as time passes and must ensure that a request will eventually be the highest priority
request (after it has waited long enough).
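
A minimal sketch of aging (the priority scale and aging factor are illustrative; here a
higher number means higher priority):

```python
def effective_priority(base_priority, wait_time, aging_factor=1):
    """Priority grows with time spent waiting, so no request starves forever."""
    return base_priority + aging_factor * wait_time

# A low-priority request that has waited long enough overtakes a fresh
# high-priority one, guaranteeing it is eventually scheduled.
print(effective_priority(1, 30) > effective_priority(20, 0))   # True
```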

Q23. Define demand paging, page-fault interrupt, and thrashing.


Ans :
Demand paging: Demand paging is the paging policy under which a page is not read into
memory until it is requested, that is, until there is a page fault on the page.
Page-fault interrupt: A page-fault interrupt occurs when a memory reference is made to a
page that is not in memory. The present bit in the page table entry will be found to be off by
the virtual memory hardware, and it will signal an interrupt.
Thrashing: Thrashing is the problem of many page faults occurring in a short time.

Q24. Explain segmentation with paging.


Ans: Segments can be of different lengths, so it is harder to find a place for a segment in
memory than for a page. With segmented virtual memory we get the benefits of virtual memory,
but we still have to do dynamic storage allocation of physical memory. To avoid this,
it is possible to combine segmentation and paging into a two-level virtual memory system.
Each segment descriptor points to a page table for that segment. This gives some of the
advantages of paging (easy placement) with some of the advantages of segments (logical
division of the program).
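
The two-level translation can be sketched as follows (the page size and table contents are
invented for the example):

```python
PAGE_SIZE = 256

# Each segment descriptor carries a limit and points to that segment's
# own page table (page number -> frame number).
segment_table = {
    0: {"limit": 1000, "page_table": {0: 5, 1: 9, 2: 2, 3: 7}},
}

def translate(segment, offset):
    desc = segment_table[segment]
    if offset >= desc["limit"]:
        raise MemoryError("trap: segment limit exceeded")
    page, page_offset = divmod(offset, PAGE_SIZE)      # split offset into page + offset
    frame = desc["page_table"][page]                   # per-segment page table lookup
    return frame * PAGE_SIZE + page_offset             # physical address

print(translate(0, 300))   # page 1, offset 44 -> frame 9 -> 9*256 + 44 = 2348
```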

Q25. What is the cause of thrashing? How does the system detect thrashing? Once it
detects thrashing, what can the system do to eliminate this problem?
Ans: Thrashing is caused by under-allocation of the minimum number of pages required by a
process, forcing it to page-fault continuously. The system can detect thrashing by evaluating
the level of CPU utilization as compared to the level of multiprogramming. Thrashing can be
eliminated by reducing the level of multiprogramming.
