Operating System

(2½ Hours) [Total Marks: 75]

N. B.: (1) All questions are compulsory.


(2) Make suitable assumptions wherever necessary and state the assumptions made.
(3) Answers to the same question must be written together.
(4) Numbers to the right indicate marks.
(5) Draw neat labeled diagrams wherever necessary.
(6) Use of non-programmable calculators is allowed.

1. Attempt any three of the following: 15


a. What is an operating system? Explain its functions.
An operating system is the software that runs in kernel mode (though even that is not always true).
Its two main functions are:
• The Operating System as an Extended Machine: it presents programs and programmers with clean abstractions of the resources instead of the messy hardware.
• The Operating System as a Resource Manager: it provides an orderly and controlled allocation of the processors, memories, and I/O devices among the programs competing for them.

b. List and explain the system calls for file management.

c. With a suitable diagram, explain the structure of a disk drive.

Disk storage is two orders of magnitude cheaper than RAM per bit and often two orders of magnitude
larger as well. The only problem is that the time to randomly access data on it is close to three orders of
magnitude slower. The reason is that a disk is a mechanical device.
A disk consists of one or more metal platters that rotate at 5400, 7200, 10,800 RPM or more. A
mechanical arm pivots over the platters from the corner, similar to the pickup arm on an old 33-RPM
phonograph for playing vinyl records. Information is written onto the disk in a series of concentric
circles. At any given arm position, each of the heads can read an annular region called a track.
Together, all the tracks for a given arm position form a cylinder.
Each track is divided into some number of sectors, typically 512 bytes per sector. On modern disks, the
outer cylinders contain more sectors than the inner ones. Moving the arm from one cylinder to the next
takes about 1 msec. Moving it to a random cylinder typically takes 5 to 10 msec, depending on the
drive. Once the arm is on the correct track, the drive must wait for the needed sector to rotate under
the head, an additional delay of 5 msec to 10 msec, depending on the drive’s RPM. Once the sector is
under the head, reading or writing occurs at a rate of 50 MB/sec on low-end disks to 160 MB/sec on
faster ones.
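The delays above are dominated by the mechanical parts. A minimal sketch of the arithmetic (the seek time and transfer rate below are assumptions picked from the ranges quoted above, not measurements of a real drive):

/* Estimate the time to read one 512-byte sector from a 7200-RPM disk. */
#include <stdio.h>

int main(void)
{
    double seek_ms       = 7.5;               /* assumed average seek (5-10 msec)  */
    double rotation_ms   = 60000.0 / 7200.0;  /* one revolution: ~8.33 msec        */
    double rot_delay_ms  = rotation_ms / 2.0; /* on average, half a revolution     */
    double transfer_MBps = 100.0;             /* assumed sustained transfer rate   */
    double transfer_ms   = 512.0 / (transfer_MBps * 1e6) * 1000.0;

    printf("seek %.2f + rotation %.2f + transfer %.5f = %.2f msec\n",
           seek_ms, rot_delay_ms, transfer_ms,
           seek_ms + rot_delay_ms + transfer_ms);
    return 0;
}

The transfer itself takes about 0.005 msec; nearly all of the roughly 11.7 msec total is seek and rotational delay, which is why disk performance is dominated by arm and platter movement.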
d. List various states of processes. Explain with neat diagram.
Three states a process may be in:
1. Running (actually using the CPU at that instant).
2. Ready (runnable; temporarily stopped to let another process run).
3. Blocked (unable to run until some external event happens).
Logically, the first two states are similar. In both cases the process is willing to run, only in the second
one, there is temporarily no CPU available for it. The third state is fundamentally different from the
first two in that the process cannot run, even if the CPU is idle and has nothing else to do

Four transitions are possible among these three states, as shown. Transition 1 occurs when the
operating system discovers that a process cannot continue right now. In some systems the process can
execute a system call, such as pause, to get into blocked state. In other systems, including UNIX, when
a process reads from a pipe or special file (e.g., a terminal) and there is no input available, the process
is automatically blocked.
Transitions 2 and 3 are caused by the process scheduler, a part of the operating system, without the
process even knowing about them. Transition 2 occurs when the scheduler decides that the running
process has run long enough, and it is time to let another process have some CPU time. Transition 3
occurs when all the other processes have had their fair share and it is time for the first process to get the
CPU to run again.
Many algorithms have been devised to try to balance the competing demands of efficiency for the
system as a whole and fairness to individual processes.
Transition 4 occurs when the external event for which a process was waiting (such as the arrival of
some input) happens. If no other process is running at that instant, transition 3 will be triggered and the
process will start running. Otherwise it may have to wait in ready state for a little while until the CPU
is available and its turn comes.
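A minimal sketch of this state machine (the enum and the transition() helper are hypothetical illustration, not any real kernel's code); the four case labels correspond to the four numbered transitions above:

#include <stdio.h>

typedef enum { RUNNING, READY, BLOCKED } pstate;

/* Apply transition t; return the old state if t is not legal from s. */
static pstate transition(pstate s, int t)
{
    switch (t) {
    case 1: return (s == RUNNING) ? BLOCKED : s; /* blocks for input        */
    case 2: return (s == RUNNING) ? READY   : s; /* scheduler preempts      */
    case 3: return (s == READY)   ? RUNNING : s; /* scheduler dispatches    */
    case 4: return (s == BLOCKED) ? READY   : s; /* input becomes available */
    }
    return s;
}

int main(void)
{
    pstate s = RUNNING;
    s = transition(s, 1);             /* running -> blocked */
    s = transition(s, 4);             /* blocked -> ready   */
    s = transition(s, 3);             /* ready   -> running */
    printf("final state: %d (0 = RUNNING)\n", s);
    return 0;
}

Note that there is no direct blocked-to-running transition: a blocked process must first become ready and then wait for the scheduler to dispatch it.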
e. What is a race condition? How does mutual exclusion handle race conditions?
In some operating systems, processes that are working together may share some common storage that
each one can read and write. The shared storage may be in main memory (possibly in a kernel data
structure) or it may be a shared file; the location of the shared memory does not change the nature of
the communication or the problems that arise. To see how interprocess communication works in
practice, let us now consider a simple but common example: a print spooler. When a process
wants to print a file, it enters the file name in a special spooler directory. Another process, the printer
daemon, periodically checks to see if there are any files to be printed, and if there are, it prints them
and then removes their names from the directory.
When two processes check the spooler directory at nearly the same time, both may read the same free slot number and both may store their file names in it; one entry is silently overwritten and that file is never printed. Situations like this, where two or more processes are reading or writing shared data and the final result depends on the precise order in which they run, are called race conditions.
Mutual exclusion prevents race conditions by ensuring that only one process at a time can read or write the shared data. It can be achieved in several ways (a code sketch follows this list):
• Disabling Interrupts
• Lock Variables
• Strict Alternation
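As an illustration of both the problem and the usual fix, here is a minimal sketch using POSIX threads (the worker function and iteration count are invented for the demonstration): two threads increment a shared counter inside a critical region protected by a mutex.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    /* enter critical region             */
        counter++;                    /* shared data: one thread at a time */
        pthread_mutex_unlock(&lock);  /* leave critical region             */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);   /* always 2000000 with the lock */
    return 0;
}

Remove the lock/unlock pair and the final count usually falls short of 2000000, because the two load-increment-store sequences interleave: that shortfall is the race condition made visible.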

f. With suitable example explain the shortest job first scheduling algorithm.
Shortest job first is a nonpreemptive batch algorithm that assumes run times are known in advance.
In an insurance company, for example, people can predict quite accurately how long it will take to run a
batch of 1000 claims, since similar work is done every day. When several equally important jobs are
sitting in the input queue waiting to be started, the scheduler picks the shortest job first.
Consider four jobs A, B, C, and D with run times of 8, 4, 4,
and 4 minutes, respectively. By running them in that order, the turnaround time for A is 8 minutes, for
B is 12 minutes, for C is 16 minutes, and for D is 20 minutes for an average of 14 minutes.
Now let us consider running these four jobs using shortest job first, as shown in Fig. (b). The
turnaround times are now 4, 8, 12, and 20 minutes for an average of 11 minutes. Shortest job first is
provably optimal. Consider the case of four jobs, with execution times of a, b, c, and d, respectively.
The first job finishes at time a, the second at time a + b, and so on. The mean turnaround time is
(4a + 3b + 2c + d)/4. It is clear that a contributes more to the average than the other times, so it should
be the shortest job, with b next, then c, and finally d as the longest since it affects only its own
turnaround time. The same argument applies equally well to any number of jobs.
It is worth pointing out that shortest job first is optimal only when all the jobs are available
simultaneously. As a counterexample, consider five jobs, A through E, with run times of 2, 4, 1, 1, and
1, respectively. Their arrival times are 0, 0, 3, 3, and 3. Initially, only A or B can be chosen, since the
other three jobs have not arrived yet. Using shortest job first, we will run the jobs in the order A, B, C,
D, E, for an average wait of 4.6. However, running them in the order B, C, D, E, A has an
average wait of 4.4.
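A minimal sketch of the arithmetic in the first example (the run times are the A, B, C, D values above; qsort stands in for the scheduler's pick-the-shortest-job decision):

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *x, const void *y)
{
    return *(const int *)x - *(const int *)y;
}

int main(void)
{
    int runtime[] = { 8, 4, 4, 4 };            /* jobs A, B, C, D (minutes) */
    int n = 4, finish = 0, total = 0;

    qsort(runtime, n, sizeof runtime[0], cmp); /* shortest job first      */
    for (int i = 0; i < n; i++) {
        finish += runtime[i];                  /* when job i finishes     */
        total  += finish;                      /* its turnaround time     */
    }
    printf("average turnaround = %.1f minutes\n", (double)total / n);
    return 0;
}

With the sort the program prints 11.0; removing the qsort call (i.e., running in the original A, B, C, D order) reproduces the 14-minute average from the text.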

2. Attempt any three of the following: 15


a. Explain the concept of address space in memory management.
The concept of an address space is very general and occurs in many contexts. Consider telephone
numbers. In the United States and many other countries, a local telephone number is usually a 7-digit
number. The address space for telephone numbers thus runs from 0,000,000 to 9,999,999, although
some numbers, such as those beginning with 000 are not used. With the growth of smartphones,
modems, and fax machines, this space is becoming too small, in which case more digits have to be
used. The address space for I/O ports on the x86 runs from 0 to 16383. IPv4 addresses are 32-bit
numbers, so their address space runs from 0 to 2^32 − 1 (again, with some reserved numbers).
Address spaces do not have to be numeric. The set of .com Internet domains is also an address space.
This address space consists of all the strings of length 2 to 63 characters that can be made using letters,
numbers, and hyphens, followed by .com. By now you should get the idea. It is fairly simple.
Somewhat harder is how to give each program its own address space, so address 28 in one program
means a different physical location than address 28 in another program. Below we will discuss a simple
way that used to be common but has fallen into disuse due to the ability to put much more complicated
(and better) schemes on modern CPU chips.
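One classic scheme of this kind uses base and limit registers. A minimal sketch (the register pair and the translate() function are hypothetical, standing in for what the MMU does in hardware) showing how address 28 maps to different physical locations in two processes:

#include <stdio.h>

typedef struct { unsigned base, limit; } mmu_regs;

/* Relocate a virtual address, or return -1 on a protection fault. */
static long translate(mmu_regs r, unsigned vaddr)
{
    if (vaddr >= r.limit)
        return -1;                 /* outside the process's address space */
    return (long)r.base + vaddr;   /* add the base register               */
}

int main(void)
{
    mmu_regs proc_a = { 16384, 4096 };   /* loaded at 16 KB, 4 KB long */
    mmu_regs proc_b = { 32768, 4096 };   /* loaded at 32 KB, 4 KB long */
    printf("address 28 in A -> %ld, in B -> %ld\n",
           translate(proc_a, 28), translate(proc_b, 28));
    return 0;
}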

b. What is the purpose of swapping? Explain with example.


If the physical memory of the computer is large enough to hold all the processes, the schemes described
so far will more or less do. But in practice, the total amount of RAM needed by all the processes is
often much more than can fit in memory. On a typical Windows, OS X, or Linux system, something
like 50–100 processes or more may be started up as soon as the computer is booted. For example,
when a Windows application is installed, it often issues commands so that on subsequent system boots,
a process will be started that does nothing except check for updates to the application. Such a process
can easily occupy 5–10 MB of memory. Other background processes check for incoming mail,
incoming network connections, and many other things. And all this is before the first user program is
started. Serious user application programs nowadays, like Photoshop, can easily require 500 MB just to
boot and many gigabytes once they start processing data.
Consequently, keeping all processes in memory all the time requires a huge amount of memory and
cannot be done if there is insufficient RAM. The simplest strategy for dealing with this overload, called
swapping, consists of bringing in each process in its entirety, running it for a while, then putting it back
on the disk.
Idle processes are mostly stored on disk, so they do not take up any memory when they are not running
(although some of them wake up periodically to do their work, then go to sleep again)
The operation of a swapping system is illustrated in Fig. Initially, only process A is in memory. Then
processes B and C are created or swapped in from disk. In Fig. (d) A is swapped out to disk. Then D
comes in and B goes out. Finally A comes in again. Since A is now at a different location, addresses
contained in it must be relocated, either by software when it is swapped in or (more likely) by hardware
during program execution. For example, base and limit registers would work fine here.

When swapping creates multiple holes in memory, it is possible to combine them all into one big one
by moving all the processes downward as far as possible. This technique is known as memory
compaction.
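A minimal sketch of the compaction idea (the memory map is a toy character array; a real system must also relocate addresses inside the moved processes, e.g. by updating their base registers):

#include <stdio.h>
#include <string.h>

#define MEM_SIZE 64

int main(void)
{
    /* '.' = hole, letters = processes, as memory might look after swaps */
    char mem[MEM_SIZE + 1] = "AAAA....BBBBBB..CCCC........DDDD"
                             "................................";
    int dst = 0;
    for (int src = 0; src < MEM_SIZE; src++)
        if (mem[src] != '.')
            mem[dst++] = mem[src];           /* slide processes downward     */
    memset(mem + dst, '.', MEM_SIZE - dst);  /* one big hole remains on top  */
    printf("%s\n", mem);
    return 0;
}

The copying is the expensive part: moving gigabytes of process memory takes so much CPU time that compaction is usually not done.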

c. Explain the first in first out page replacement algorithm. Give example.
To understand FIFO, consider a supermarket that has enough shelves to display exactly k different products. One day, some
company introduces a new convenience food—instant, freeze-dried, organic yogurt that can be
reconstituted in a microwave oven. It is an immediate success, so our finite supermarket
has to get rid of one old product in order to stock it. One possibility is to find the product that the
supermarket has been stocking the longest (i.e., something it began selling 120 years ago) and get rid of
it on the grounds that no one is interested any more. In effect, the supermarket maintains a
linked list of all the products it currently sells in the order they were introduced. The new one goes on
the back of the list; the one at the front of the list is dropped.
As a page replacement algorithm, the same idea is applicable. The operating system maintains a list of
all pages currently in memory, with the most recent arrival at the tail and the least recent arrival at the
head. On a page fault, the page at the head is removed and the new page added to the tail of the list.
When applied to stores, FIFO might remove mustache wax, but it might also remove flour, salt, or
butter. When applied to computers the same problem arises: the oldest page may still be useful. For this
reason, FIFO in its pure form is rarely used.
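A minimal sketch of FIFO page replacement (the reference string and frame count are invented for the example; the frames array plays the role of the linked list, with head pointing at the oldest arrival):

#include <stdio.h>

#define NFRAMES 3

int main(void)
{
    int refs[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
    int nrefs = sizeof refs / sizeof refs[0];
    int frames[NFRAMES], head = 0, loaded = 0, faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int j = 0; j < loaded; j++)
            if (frames[j] == refs[i])
                hit = 1;                       /* page already in memory   */
        if (!hit) {
            faults++;
            if (loaded < NFRAMES)
                frames[loaded++] = refs[i];    /* free frame available     */
            else {
                frames[head] = refs[i];        /* evict the oldest arrival */
                head = (head + 1) % NFRAMES;   /* next-oldest becomes head */
            }
        }
    }
    printf("%d page faults for %d references\n", faults, nrefs);
    return 0;
}

This reference string produces 9 faults with 3 frames; it is the classic string for which adding a fourth frame makes FIFO worse (Belady's anomaly), another reason pure FIFO is rarely used.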
d. List and explain different file structures.
Three types of file structures
a) Byte sequence.
b) Record sequence.
c) Tree.
The file in Fig. (a) is an unstructured sequence of bytes. In effect, the operating system does not
know or care what is in the file. All it sees are bytes. Any meaning must be imposed by user-
level programs. Both UNIX and Windows use this approach.
Having the operating system regard files as nothing more than byte sequences provides the
maximum amount of flexibility. User programs can put anything they want in their files and
name them any way that they find convenient. The operating system does not help, but it also
does not get in the way. For users who want to do unusual things, the latter can be very
important. All versions of UNIX (including Linux and OS X) and Windows use this file
model.
The first step up in structure is illustrated in Fig. (b). In this model, a file is a sequence of fixed-
length records, each with some internal structure. Central to the idea of a file being a sequence
of records is the idea that the read operation returns one record and the write operation
overwrites or appends one record. As a historical note, in decades gone by, when the 80-
column punched card was king of the mountain, many (mainframe) operating systems based
their file systems on files consisting of 80-character records, in effect, card images. These
systems also supported files of 132-character records, which were intended for the line printer
(which in those days were big chain printers having 132 columns). Programs read input in
units of 80 characters and wrote it in units of 132 characters, although the final 52 could be
spaces, of course. No current general-purpose system uses this model as its primary file system
any more, but back in the days of 80-column punched cards and 132-character line printer
paper this was a common model on mainframe computers.
The third kind of file structure is shown in Fig. (c). In this organization, a file consists of a tree
of records, not necessarily all the same length, each containing a key field in a fixed position in
the record. The tree is sorted on the key field, to allow rapid searching for a particular key.
The basic operation here is not to get the ‘‘next’’ record, although that is also possible, but to
get the record with a specific key. For the zoo file of Fig. (c), one could ask the system to get
the record whose key is pony, for example, without worrying about its exact position in the
file. Furthermore, new records can be added to the file, with the operating system, and not the
user, deciding where to place them. This type of file is clearly quite different from the
unstructured byte streams used in UNIX and Windows and is used on some large mainframe
computers for commercial data processing.

e. List various operations on files and explain them in short.


Files exist to store information and allow it to be retrieved later. Different systems provide different
operations to allow storage and retrieval. Below are the most common system calls for managing directories and the files they contain.
1. Create. A directory is created. It is empty except for dot and dotdot, which are put there
automatically by the system (or in a few cases, by the mkdir program).
2. Delete. A directory is deleted. Only an empty directory can be deleted. A directory
containing only dot and dotdot is considered empty as these cannot be deleted.
3. Opendir. Directories can be read. For example, to list all the files in a directory, a listing
program opens the directory to read out the names of all the files it contains. Before a directory
can be read, it must be opened, analogous to opening and reading a file.
4. Closedir. When a directory has been read, it should be closed to free up internal table space.
5. Readdir. This call returns the next entry in an open directory. Formerly, it was possible to
read directories using the usual read system call, but that approach has the disadvantage of
forcing the programmer to know and deal with the internal structure of directories.
In contrast, readdir always returns one entry in a standard format, no matter which of the
possible directory structures is being used.
6. Rename. In many respects, directories are just like files and can be renamed the same way
files can be.
7. Link. Linking is a technique that allows a file to appear in more than one directory. This
system call specifies an existing file and a path name, and creates a link from the existing file
to the name specified by the path. In this way, the same file may appear in multiple directories.
A link of this kind, which increments the counter in the file’s i-node (to keep track of the
number of directory entries containing the file), is sometimes called a hard link.
8. Unlink. A directory entry is removed. If the file being unlinked is only present in one
directory (the normal case), it is removed from the file system. If it is present in multiple
directories, only the path name specified is removed. The others remain. In UNIX, the system
call for deleting files (discussed earlier) is, in fact, unlink.
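For the directory calls, the example below uses the real POSIX API (dirent.h), where Opendir, Readdir, and Closedir appear as opendir(), readdir(), and closedir(); it lists every entry of the current directory:

#include <dirent.h>
#include <stdio.h>

int main(void)
{
    DIR *d = opendir(".");               /* Opendir */
    if (d == NULL) {
        perror("opendir");
        return 1;
    }
    struct dirent *e;
    while ((e = readdir(d)) != NULL)     /* Readdir: one entry per call, */
        printf("%s\n", e->d_name);       /* in a standard format         */
    closedir(d);                         /* Closedir: free table space   */
    return 0;
}

The output includes the dot and dotdot entries mentioned under Create.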
f. Explain the linked list allocation method for storing the files.

3. Attempt any three of the following: 15


a. Write a short note on memory mapped IO.
b. What is RAID? Explain in short.
CPU performance has been increasing exponentially over the past decade, roughly doubling every 18
months. Not so with disk performance. In the 1970s, average seek times on minicomputer disks were
50 to 100 msec. Now seek times are still a few msec. In most technical industries (say, automobiles or
aviation), a factor of 5 to 10 performance improvement in two decades would be major news
(imagine 300-MPG cars), but in the computer industry it is an embarrassment. Thus the gap between
CPU performance and (hard) disk performance has become much larger over time. Can anything be
done to help?
It has occurred to various people over the years that parallel I/O might be a good idea, too. In their 1988
paper, Patterson et al. suggested six specific disk organizations that could be used to improve disk
performance, reliability, or both (Patterson et al., 1988). These ideas were quickly adopted by industry
and have led to a new class of I/O device called a RAID. RAID can be defined as Redundant Array
of Inexpensive Disks, but industry redefined the I to be ‘‘Independent’’ rather than ‘‘Inexpensive’’
(maybe so they could charge more?). Since a villain was also needed,
the bad guy here was the SLED (Single Large Expensive Disk). The fundamental idea behind a RAID
is to install a box full of disks next to the computer, typically a large server, replace the disk controller
card with a RAID controller, copy the data over to the RAID, and then continue normal operation. In
other words, a RAID should look like a SLED to the operating system but have better performance and
better reliability. In the past, RAIDs consisted almost exclusively of a RAID SCSI controller plus a box
of SCSI disks, because the performance was good and modern SCSI supports up to 15 disks on a single
controller.
Nowadays, many manufacturers also offer (less expensive) RAIDs based on SATA. In this way, no
software changes are required to use the RAID, a big selling point for many system administrators.
c. Write a short note on soft timers.
Most computers have a second programmable clock that can be set to cause timer interrupts at whatever
rate a program needs. This timer is in addition to the main system timer whose functions were
described above. As long as the interrupt frequency is low, there is no problem using this second timer
for application-specific purposes. The trouble arrives when the frequency of the application-specific
timer is very high. Below we will briefly describe a software-based timer scheme that works well under
many circumstances, even at fairly high frequencies. The idea is due to Aron and Druschel (1999). For
more details, please see their paper. Generally, there are two ways to manage I/O: interrupts and
polling. Interrupts have low latency, that is, they happen immediately after the event itself with little
or no delay. On the other hand, with modern CPUs, interrupts have a substantial overhead due to the
need for context switching and their influence on the pipeline, TLB, and cache. The alternative to
interrupts is to have the application itself poll for the expected event. Doing this avoids interrupts, but
there may be substantial latency because an event may happen directly after a poll, in which case it
waits almost a whole polling interval. On the average, the latency is half the polling interval.
Interrupt latency today is barely better than that of computers in the 1970s. On most minicomputers, for
example, an interrupt took four bus cycles: to stack the program counter and PSW and to load a new
program counter and PSW. Nowadays dealing with the pipeline, MMU, TLB, and cache adds a great
deal to the overhead. These effects are likely to get worse rather than better in time, thus canceling
out faster clock rates. Unfortunately, for certain applications, we want neither the overhead of interrupts
nor the latency of polling.
Soft timers avoid interrupts. Instead, whenever the kernel is running for some other reason, just before
it returns to user mode it checks the real-time clock to see if a soft timer has expired. If it has expired,
the scheduled event (e.g., packet transmission or checking for an incoming packet) is performed, with
no need to switch into kernel mode since the system is already there. After the work has been
performed, the soft timer is reset to go off again. All that has to be done is copy the current clock value
to the timer and add the timeout interval to it. Soft timers stand or fall with the rate at which kernel
entries are made for other reasons. These reasons include:
1. System calls.
2. TLB misses.
3. Page faults.
4. I/O interrupts.
5. The CPU going idle.
To see how often these events happen, Aron and Druschel made measurements with several CPU
loads, including a fully loaded Web server, a Web server with a compute-bound background job,
playing real-time audio from the Internet, and recompiling the UNIX kernel. The average entry rate
into the kernel varied from 2 to 18 μsec, with about half of these entries being system calls.
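A minimal sketch of the mechanism (every name here is hypothetical; a real implementation lives inside the kernel's return-to-user path): each simulated kernel entry piggybacks a check of the soft timer, so events fire without any timer interrupt.

#include <stdint.h>
#include <stdio.h>

static uint64_t clock_now;                  /* stand-in for the real-time clock */
static uint64_t soft_timer_expiry = 100;    /* absolute time of the next event  */
static uint64_t timeout_interval  = 100;    /* period between events            */

static void run_scheduled_event(void) { puts("soft timer fired"); }

/* Called just before every kernel-to-user transition. */
static void check_soft_timer(void)
{
    if (clock_now >= soft_timer_expiry) {
        run_scheduled_event();              /* no mode switch: already in kernel */
        soft_timer_expiry = clock_now + timeout_interval;   /* rearm the timer   */
    }
}

int main(void)
{
    /* Simulated kernel entries (system calls, page faults, ...). */
    uint64_t entries[] = { 30, 90, 140, 150, 260, 390 };
    for (int i = 0; i < 6; i++) {
        clock_now = entries[i];
        check_soft_timer();
    }
    return 0;
}

How promptly events fire depends entirely on how often the kernel is entered for other reasons, which is exactly the trade-off the measurements above quantify.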
d. List two types of resources. Explain with suitable example.
e. What is deadlock? Explain with suitable example.
Computer systems are full of resources that can be used only by one process at a time.
Common examples include printers, tape drives for backing up company data, and slots in the
system’s internal tables. Having two processes simultaneously writing to the printer leads to
gibberish. Having two processes using the same file-system table slot invariably will lead to a
corrupted file system. Consequently, all operating systems have the ability to (temporarily)
grant a process exclusive access to certain resources.
For many applications, a process needs exclusive access to not one resource, but several.
Suppose, for example, two processes each want to record a scanned document on a Blu-ray
disc. Process A requests permission to use the scanner and is granted it. Process B is
programmed differently and requests the Blu-ray recorder first and is also granted it. Now A
asks for the Blu-ray recorder, but the request is suspended until B releases it. Unfortunately,
instead of releasing the Blu-ray recorder, B asks for the scanner. At this point both processes
are blocked and will remain so forever. This situation is called a deadlock.
Deadlocks can also occur across machines. For example, many offices have a local area
network with many computers connected to it. Often devices such as scanners, Blu-ray/DVD
recorders, printers, and tape drives are connected to the network as shared resources, available
to any user on any machine. If these devices can be reserved remotely (i.e., from the user’s
home machine), deadlocks of the same kind can occur as described above. More complicated
situations can cause deadlocks involving three, four, or more devices and users.
Deadlocks can also occur in a variety of other situations. In a database system, for example, a
program may have to lock several records it is using, to avoid race conditions. If process A
locks record R1 and process B locks record R2, and then each process tries to lock the other
one’s record, we also have a deadlock. Thus, deadlocks can occur on hardware resources or on
software resources.
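The scanner/Blu-ray scenario can be reproduced in a few lines with POSIX threads (the sleeps are there only to widen the unlucky window; whether a given run hangs depends on the interleaving):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t scanner = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t bluray  = PTHREAD_MUTEX_INITIALIZER;

static void *process_a(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&scanner);   /* A is granted the scanner...    */
    sleep(1);
    pthread_mutex_lock(&bluray);    /* ...then waits for the recorder */
    puts("A got both devices");
    pthread_mutex_unlock(&bluray);
    pthread_mutex_unlock(&scanner);
    return NULL;
}

static void *process_b(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&bluray);    /* B is granted the recorder...   */
    sleep(1);
    pthread_mutex_lock(&scanner);   /* ...then waits for the scanner  */
    puts("B got both devices");
    pthread_mutex_unlock(&scanner);
    pthread_mutex_unlock(&bluray);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, process_a, NULL);
    pthread_create(&b, NULL, process_b, NULL);
    pthread_join(a, NULL);   /* with the sleeps, this join usually never returns */
    pthread_join(b, NULL);
    return 0;
}

Making both threads acquire the locks in the same order removes the circular wait and, with it, the deadlock.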
f. Explain any one way to avoid deadlock.

4. Attempt any three of the following: 15


a. What is the need of virtualization?
b. What do you mean by cloud? Write the essential characteristics of cloud.
Virtualization technology played a crucial role in the dizzying rise of cloud computing. There are many
clouds. Some clouds are public and available to anyone willing to pay for the use of resources, others
are private to an organization. Likewise, different clouds offer different things. Some give their users
access to physical hardware, but most virtualize their environments. Some offer the bare machines,
virtual or not, and nothing more, but others offer software that is ready to use and can be combined in
interesting ways, or platforms that make it easy for their users to develop new services. Cloud providers
typically offer different categories of resources, such as ‘‘big machines’’ versus ‘‘little machines,’’ etc.
For all the talk about clouds, few people seem really sure about what they are exactly. The National
Institute of Standards and Technology, always a good source to fall back on, lists five essential
characteristics:
1. On-demand self-service. Users should be able to provision resources automatically, without
requiring human interaction.
2. Broad network access. All these resources should be available over the network via standard
mechanisms so that heterogeneous devices can make use of them.
3. Resource pooling. The computing resources owned by the provider should be pooled to serve
multiple users, with the ability to assign and reassign resources dynamically. The users generally do
not even know the exact location of ‘‘their’’ resources or even which country they are located in.
4. Rapid elasticity. It should be possible to acquire and release resources elastically, perhaps even
automatically, to scale immediately with the users’ demands.
5. Measured service. The cloud provider meters the resources used in a way that matches the type of
service agreed upon.
c. Write a short note on I/O virtualization.
d. With the help of neat diagram explain the working of message passing multicomputer
system.
Each memory is local to a single CPU and can be accessed only by that CPU. The CPUs communicate
by sending multiword messages over the interconnect. With a good interconnect, a short message can
be sent in 10–50 sec, but still far longer than the memory access time of Fig. 8-1(a). There is no
shared global memory in this design. Multicomputers (i.e., message-passing systems) are much easier
to build than (shared-memory) multiprocessors, but they are harder to program. Thus each genre has its
fans.
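To show the programming style this implies, here is a short sketch using the standard MPI message-passing calls (MPI is an assumption here, not something the answer above names; compile with mpicc and run with mpirun -np 2):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 42;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* node 0 sends a multiword message over the interconnect... */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* ...node 1 receives it; no memory is shared */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("node 1 received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}

All communication is explicit send/receive, which is what makes multicomputers easy to build but harder to program than shared-memory machines.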

e. Explain UMA multiprocessor system using crossbar switch.


The simplest multiprocessors are based on a single bus, as illustrated in Fig. (a). Two or more
CPUs and one or more memory modules all use the same bus for communication. When a
CPU wants to read a memory word, it first checks to see if the bus is busy. If the bus is idle,
the CPU puts the address of the word it wants on the bus, asserts a few control signals, and
waits until the memory puts the desired word on the bus.
If the bus is busy when a CPU wants to read or write memory, the CPU just waits until the bus
becomes idle. Herein lies the problem with this design. With two or three CPUs, contention for
the bus will be manageable; with 32 or 64 it will be unbearable. The system will be totally
limited by the bandwidth of the bus, and most of the CPUs will be idle most of the time.
The solution to this problem is to add a cache to each CPU, as depicted in Fig. (b). The cache
can be inside the CPU chip, next to the CPU chip, on the processor board, or some
combination of all three. Since many reads can now be satisfied out of the local cache, there
will be much less bus traffic, and the system can support more CPUs. In general, caching is not
done on an individual word basis but on the basis of 32- or 64-byte blocks. When a word is
referenced, its entire block, called a cache line, is fetched into the cache of the CPU touching
it. Each cache block is marked as being either read only (in which case it can be present in
multiple caches at the same time) or read-write (in which case it may not be present in any
other caches). If a CPU attempts to write a word that is in one or more remote caches, the bus
hardware detects the write and puts a signal on the bus informing all other caches of the write.
If other caches have a ‘‘clean’’ copy, that is, an exact copy of what is in memory, they can just
discard their copies and let the writer fetch the cache block from memory before modifying it.
If some other cache has a ‘‘dirty’’ (i.e., modified) copy, it must either write it back to memory
before the write can proceed or transfer it directly to the writer over the bus.
This set of rules is called a cache-coherence protocol and is one of many. Yet another
possibility is the design of Fig. (c), in which each CPU has not only a cache, but also a local,
private memory which it accesses over a dedicated (private) bus. To use this configuration
optimally, the compiler should place all the program text, strings, constants and other read-
only data, stacks, and local variables in the private memories. The shared memory is then only
used for writable shared variables. In most cases, this careful placement will greatly reduce bus
traffic, but it does require active cooperation from the compiler.

f. Explain the working of master slave multiprocessor system.


Here, one copy of the operating system
and its tables is present on CPU 1 and not on any of the others. All system calls are redirected to CPU 1
for processing there. CPU 1 may also run user processes if there is CPU time left over. This model is
called master-slave since CPU 1 is the master and all the others are slaves.

The master-slave model solves many of the problems of sharing the machine. There is a single data
structure (e.g., one list or a set of prioritized lists) that keeps track of ready processes, so it can never
happen that one CPU is idle while another is overloaded. Similarly, pages can be allocated among all the processes dynamically
and there is only one buffer cache, so inconsistencies never occur.
The problem with this model is that with many CPUs, the master will become a bottleneck. After all, it
must handle all system calls from all CPUs. If, say, 10% of all time is spent handling system calls, then
10 CPUs will pretty much saturate the master, and with 20 CPUs it will be completely overloaded.
Thus this model is simple and workable for small multiprocessors, but for large ones it fails.
5. Attempt any three of the following: 15
a. List categories of Linux utility programs. Explain any two.
The command-line (shell) user interface to Linux consists of a large number of standard utility
programs. Roughly speaking, these programs can be divided into
six categories, as follows:
1. File and directory manipulation commands.
2. Filters.
3. Program development tools, such as editors and compilers.
4. Text processing.
5. System administration.
6. Miscellaneous
b. Explain various process management system calls in Linux.

c. Explain the booting process of Linux.


Details vary from platform to platform, but in general the following steps represent the boot
process. When the computer starts, the BIOS performs the Power-On Self-Test (POST) and initial
device discovery and initialization, since the OS's boot process may rely on access to disks,
screens, keyboards, and so on. Next, the first sector of the boot disk, the MBR (Master Boot
Record), is read into a fixed memory location and executed. This sector contains a small (512-
byte) program that loads a standalone program called boot from the boot device, such as a
SATA or SCSI disk. The boot program first copies itself to a fixed high-memory address to
free up low memory for the operating system.
Once moved, boot reads the root directory of the boot device. To do this, it must understand
the file system and directory format, which is the case with some bootloaders such as GRUB
(GRand Unified Bootloader). Other popular bootloaders, such as LILO, do not rely on
any specific file system. Instead, they need a block map and low-level addresses, which
describe physical sectors, heads, and cylinders, to find the relevant sectors to be loaded.
Then boot reads in the operating system kernel and jumps to it. At this point, it has finished its
job and the kernel is running.
The kernel start-up code is written in assembly language and is highly machine dependent.
Typical work includes setting up the kernel stack, identifying the CPU type, calculating the
amount of RAM present, disabling interrupts, enabling the MMU, and finally calling the C-
language main procedure to start the main part of the operating system.
d. Explain the concept of caching in Windows.
e. Write the fundamental concept of process in Windows.
f. Explain in short how memory is managed in Windows.
_____________________________
