Operating System Tutorial
STUDENT NAME – PALLAVI ARJUN
BHARTI
ROLL NUMBER - 18141112
BATCH – I1
Tutorial No – 1
Q.1) Define operating system. Describe OS components and services.
Answer - An operating system (OS) is system software that works as an interface between the user and the computer hardware. It performs tasks such as memory management, file management, input/output handling, security, process management, job accounting, error detection, monitoring system performance, and controlling peripheral devices such as printers and disk drives. Popular operating systems include Windows, Linux, AIX, VMS, and z/OS.
The components of an operating system play a key role in making the various parts of a computer system work together. The main components are discussed below.
Kernel
The kernel provides the basic level of control over all the computer's peripherals. It is an essential component of the operating system: it is loaded first and remains in main memory. The kernel manages memory access for the programs in RAM, mediates their access to hardware resources, and sets the operating state of the CPU for the best operation at all times.
Process Execution
The OS provides an interface between the hardware and an application program so that the program can interact with a hardware device by following the procedures and conventions configured into the OS. Program execution involves a process, created by the OS kernel, that uses memory space and various other resources.
Interrupt
Interrupts are essential because they give the OS a reliable way to communicate with and react to its surroundings. An interrupt is a signal between a device and the computer system, or from a program within the computer, that requires the OS to stop what it is doing and decide what to do next. Whenever an interrupt signal is received, the computer hardware automatically suspends whatever program is currently running, saves its status, and runs the program previously associated with that interrupt.
Memory Management
Memory management is the OS function that manages main memory and moves processes back and forth between disk and main memory during execution. It tracks every memory location, whether it is allocated to some process or free. It decides how much memory may be allocated to each process and which process gets memory at what time. Whenever memory is freed, it updates the status accordingly. Memory management work can be divided into three important groups: hardware memory management, OS memory management, and application memory management.
Multitasking
Multitasking describes the running of several independent computer programs on the same computer system. Multitasking allows an operator to run more than one task at a time. Since a single processor can only execute a limited number of tasks at once, this is usually done with time-sharing, in which each program in turn gets a slice of the computer's time to execute.
Networking
Networking refers to processors communicating with each other over communication lines. The design of a communication network must consider routing, connection methods, contention, and security.
Most current operating systems support a variety of networking techniques, hardware, and applications. As a result, computers running different operating systems can participate in a common network to share resources such as data, computing power, scanners, and printers, using either wired or wireless connections.
Security
If a computer allows numerous users to run various processes simultaneously, then these processes must be protected from one another's activities. System security depends on a variety of technologies working effectively. Current operating systems provide access, by means of the kernel, to a number of resources available to software running on the system and to external devices such as networks. The operating system must be able to distinguish between requests that should be allowed to proceed and those that should not be processed. Additionally, beyond simply permitting or prohibiting access, a computer system with a high level of protection provides auditing options, which allow the monitoring of requests for access to resources.
User Interface
The user interface (UI) is the part of an OS that permits an operator to interact with the machine. A text-based user interface displays text, and its commands are typed on a command line with the keyboard.
Applications running on the OS provide their own user interfaces for efficient communication. The main function of an application's user interface is to take input from the operator and provide output to the operator; the kinds of input accepted and the kinds of output produced vary from application to application. The UI of any application can be classified into two types: the GUI (graphical user interface) and the CLI (command-line interface).
Q.2) What is process scheduling? Explain its operation on processes with the
help of an example.
Answer - Process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU and the selection of another process on the basis
of a particular strategy.
Process scheduling is an essential part of a multiprogramming operating system. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
When the state of a process is changed, its PCB is unlinked from its current queue and moved to
its new state queue.
The Operating System maintains the following important process scheduling queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory, ready and
waiting to execute. A new process is always put in this queue.
Device queues − The processes that are blocked due to the unavailability of an I/O device constitute this queue; there is a separate queue for each device. The OS can use different policies to manage each queue (FIFO, round robin, priority, etc.). The OS scheduler determines how to move processes between the ready queue and the run queue, which can have only one entry per processor core on the system; in the above diagram, it has been merged with the CPU.
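As a minimal sketch of these queues (illustrative only; the process ids and device names are hypothetical), the ready queue and device queues can be modeled as FIFO lists:

```python
from collections import deque

# Sketch: PCBs move between a ready queue and per-device queues
# as their state changes (not an actual OS implementation).
ready_queue = deque()
device_queues = {"disk": deque(), "keyboard": deque()}

def admit(pid):
    """A new process is always placed in the ready queue."""
    ready_queue.append(pid)

def block_on(pid, device):
    """A process blocked on an unavailable I/O device joins that device's queue."""
    device_queues[device].append(pid)

def io_complete(device):
    """When the device finishes, the process at the head becomes ready again."""
    pid = device_queues[device].popleft()
    ready_queue.append(pid)
    return pid

admit(1); admit(2)
running = ready_queue.popleft()   # scheduler picks process 1 to run
block_on(running, "disk")         # it requests disk I/O and blocks
io_complete("disk")               # I/O done: process 1 rejoins the ready queue
```

After this sequence, the ready queue holds process 2 followed by process 1, mirroring how a PCB is unlinked from one queue and linked into another.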
Two-State Process Model
The two-state process model refers to the running and not-running states, which are described below −
Running
When a new process is created, it enters the system in the running state.
Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process, and the queue is implemented using a linked list. The dispatcher works as follows: when a process is interrupted, it is transferred to the waiting queue; if the process has completed or aborted, it is discarded. In either case, the dispatcher then selects a process from the queue to execute.
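The dispatcher's cycle described above can be sketched as follows (the process names are hypothetical):

```python
from collections import deque

# Sketch of the two-state model: one not-running queue plus a single
# running slot; the dispatcher picks the next process when the current
# one is interrupted (requeued) or exits (discarded).
queue = deque(["P1", "P2", "P3"])
running = None

def dispatch():
    global running
    running = queue.popleft() if queue else None
    return running

def interrupt():
    """Interrupted process goes back to the waiting queue; dispatch the next."""
    queue.append(running)
    return dispatch()

def exit_process():
    """Completed/aborted process is discarded; dispatch the next."""
    return dispatch()

dispatch()        # P1 runs
interrupt()       # P1 requeued, P2 runs
exit_process()    # P2 discarded, P3 runs
```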
Schedulers
Schedulers are special system software that handle process scheduling in various ways. Their main task is to select the jobs to be submitted to the system and to decide which process to run.
Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long Term Scheduler
It is also called the job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution, where they become candidates for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation must equal the average rate at which processes leave the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing operating systems, for example, have no long-term scheduler. The long-term scheduler acts when a process changes state from new to ready.
Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with a chosen set of criteria. It handles the change of a process from the ready state to the running state: the CPU scheduler selects one process from among those that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, decide which process to execute next. Short-term schedulers run far more often than long-term schedulers, and so must be faster.
Medium Term Scheduler
Medium-term scheduling is part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this situation, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
Q.3) Why to learn operating system? What are the different applications of
operating system?
Answer - An operating system is the most important software that runs on a computer. It
manages the computer's memory and processes, as well as all of its software and hardware. It
also allows you to communicate with the computer without knowing how to speak the
computer's language. Without an operating system, a computer is useless. Your computer's
operating system (OS) manages all of the software and hardware on the computer. Most of the
time, there are several different computer programs running at the same time, and they all need
to access your computer's central processing unit (CPU), memory, and storage. The operating
system coordinates all of this to make sure each program gets what it needs.
Types of operating systems
Operating systems usually come pre-loaded on any computer you buy. Most people use the
operating system that comes with their computer, but it's possible to upgrade or even change
operating systems. The three most common operating systems for personal computers are
Microsoft Windows, macOS, and Linux. Modern operating systems use a graphical user
interface, or GUI (pronounced gooey). A GUI lets you use your mouse to click icons, buttons,
and menus, and everything is clearly displayed on the screen using a combination of graphics
and text. Each operating system's GUI has a different look and feel, so if you switch to a
different operating system it may seem unfamiliar at first. However, modern operating systems
are designed to be easy to use, and most of the basic principles are the same.
Microsoft Windows
Microsoft created the Windows operating system in the mid-1980s. There have been many
different versions of Windows, but the most recent ones are Windows 10 (released in 2015),
Windows 8 (2012), Windows 7 (2009), and Windows Vista (2007). Windows comes pre-loaded
on most new PCs, which helps to make it the most popular operating system in the world.
macOS
macOS (previously called OS X) is a line of operating systems created by Apple. It comes
preloaded on all Macintosh computers, or Macs. Some of the specific versions include Mojave
(released in 2018), High Sierra (2017), and Sierra (2016). According to StatCounter Global Stats,
macOS users account for less than 10% of global operating systems—much lower than the
percentage of Windows users (more than 80%). One reason for this is that Apple computers tend
to be more expensive. However, many people do prefer the look and feel of macOS over
Windows.
Linux
Linux (pronounced LINN-ux) is a family of open-source operating systems, which means they
can be modified and distributed by anyone around the world. This is different from proprietary
software like Windows, which can only be modified by the company that owns it. The
advantages of Linux are that it is free, and there are many different distributions, or versions, you
can choose from. According to StatCounter Global Stats, Linux users account for less than 2% of
global operating systems. However, most servers run Linux because it's relatively easy to
customize.
Operating systems for mobile devices
The operating systems we've been talking about so far were designed to run on desktop and
laptop computers. Mobile devices such as phones, tablet computers, and MP3 players are
different from desktop and laptop computers, so they run operating systems that are designed
specifically for mobile devices. Examples of mobile operating systems include Apple iOS and
Google Android. In the screenshot below, you can see iOS running on an iPad. Operating
systems for mobile devices generally aren't as fully featured as those made for desktop and
laptop computers, and they aren't able to run all of the same software. However, you can still do
a lot of things with them, like watch movies, browse the Web, manage your calendar, and play
games.
Tutorial No – 2
Q.1) What is a process scheduler? State the characteristics of a good process
scheduler.
Answer - Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy. Process scheduling is an essential part of multiprogramming operating systems.
Characteristics of a good scheduling algorithm
The characteristics include CPU utilization, response time, throughput, turnaround time, waiting time, and fairness.
CPU utilization:
A good algorithm keeps the CPU as busy as possible.
Throughput:
Throughput is the number of jobs finished per unit of time; a good algorithm increases this number.
Response time:
The time taken to start responding to a request; the scheduler minimizes this time.
Turnaround time:
The time span between the submission of a job and its completion is the turnaround time; a good algorithm keeps it low.
Waiting time:
In a multiprogramming system, the time a task spends waiting for the allocation of resources is its waiting time; a good algorithm decreases it.
Fairness:
The scheduler ensures that each process receives its fair share of the CPU.
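As a worked example of turnaround and waiting time (the burst times are assumed; all jobs arrive at time 0 and are served first-come-first-served):

```python
# Hypothetical example: three jobs served in FCFS order, all arriving at
# time 0. Turnaround time = completion time - arrival time;
# waiting time = turnaround time - burst (service) time.
bursts = [24, 3, 3]          # assumed CPU bursts, in time units

completion, t = [], 0
for b in bursts:
    t += b                   # each job completes when its burst finishes
    completion.append(t)

turnaround = completion[:]   # arrival time is 0 for every job
waiting = [ta - b for ta, b in zip(turnaround, bursts)]

print(turnaround)            # [24, 27, 30]
print(waiting)               # [0, 24, 27]
print(sum(waiting) / 3)      # average waiting time: 17.0
```

Note how the long first job inflates the waiting time of the short ones; a scheduler that minimizes waiting time would serve the short bursts first.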
A) Process and thread
Answer -
1. A process is a program in execution, while a thread is a segment of a process.
2. A process takes more time to terminate, while a thread takes less time to terminate.
3. A process takes more time for creation, while a thread takes less time for creation.
4. A process also takes more time for context switching, while a thread takes less time for context switching.
5. A process does not share resources with other processes, while threads of the same process share its resources.
6. A blocked process does not stop other processes from executing, while one blocked thread can affect the other threads of the same task.
B) Long term scheduler, short term scheduler and medium term scheduler
Answer - Long Term Scheduler
It is also called the job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution, where they become candidates for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation must equal the average rate at which processes leave the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing operating systems, for example, have no long-term scheduler. The long-term scheduler acts when a process changes state from new to ready.
Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with a chosen set of criteria. It handles the change of a process from the ready state to the running state: the CPU scheduler selects one process from among those that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, decide which process to execute next. Short-term schedulers run far more often than long-term schedulers, and so must be faster.
Medium Term Scheduler
Medium-term scheduling is part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this situation, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
Q.3) Write short notes on:
Semaphores-
Semaphores are integer variables used to solve the critical-section problem by means of two atomic operations, wait and signal, which are used for process synchronization.
The definitions of wait and signal are as follows −
Wait
The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, the process loops (busy-waits) until S becomes positive, and only then decrements it.
wait(S)
{
while (S<=0);
S--;
}
Signal
The signal operation increments the value of its argument S.
signal(S)
{
S++;
}
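The versions above spin while S <= 0. The same counting-semaphore behavior can be sketched without spinning by using a condition variable (an illustrative sketch, not a kernel implementation):

```python
import threading

class CountingSemaphore:
    """Sketch of a counting semaphore; wait blocks (no busy loop) until S > 0."""
    def __init__(self, value=1):
        self.value = value
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            while self.value <= 0:      # same condition as the busy-wait version,
                self.cond.wait()        # but the process sleeps instead of spinning
            self.value -= 1

    def signal(self):
        with self.cond:
            self.value += 1
            self.cond.notify()          # wake one waiter, if any

s = CountingSemaphore(2)   # e.g. two instances of a resource
s.wait(); s.wait()         # both instances acquired; a third wait() would block
s.signal()                 # one instance released
```

Python's standard library also provides threading.Semaphore with the same acquire/release behavior; the class above only makes the wait/signal logic explicit.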
Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary semaphores.
Details about these are given as follows −
Counting Semaphores
These are integer-valued semaphores with an unrestricted value domain. They are used to coordinate resource access, where the semaphore count is the number of available resources. If resources are added, the semaphore count is automatically incremented, and if resources are removed, the count is decremented.
Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation succeeds only when the semaphore's value is 1, and the signal operation succeeds only when its value is 0. Binary semaphores are sometimes easier to implement than counting semaphores.
Advantages of Semaphores
Some of the advantages of semaphores are as follows −
Semaphores allow only one process into the critical section. They follow the mutual exclusion
principle strictly and are much more efficient than some other methods of synchronization.
When implemented with blocking rather than busy waiting, semaphores waste no processor time repeatedly checking whether a condition is fulfilled before allowing a process to access the critical section.
Semaphores are implemented in the machine independent code of the microkernel. So they are
machine independent.
Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows −
Semaphores are complicated, and the wait and signal operations must be implemented in the correct order to prevent deadlocks.
Semaphores are impractical for large-scale use, as their use leads to a loss of modularity. This happens because the wait and signal operations prevent the creation of a structured layout for the system.
Semaphores may lead to priority inversion, where low-priority processes access the critical section first and high-priority processes access it later.
Process states-
New (Create) – In this state the process is about to be created but not yet created; it is the program, present in secondary memory, that will be picked up by the OS to create the process.
Ready – New -> ready. After creation, the process enters the ready state, i.e. it is loaded into main memory. The process is now ready to run and is waiting to get CPU time for its execution. Processes that are ready for execution by the CPU are maintained in a queue of ready processes.
Run – The process has been chosen by the CPU scheduler for execution, and its instructions are executed by one of the available CPU cores.
Blocked or wait – Whenever the process requests I/O, needs input from the user, or needs access to a critical region (whose lock is already acquired), it enters the blocked or wait state. The process continues to wait in main memory and does not require the CPU. Once the I/O operation is completed, the process goes to the ready state.
Terminated or completed – The process is killed and its PCB is deleted.
Suspend ready – A process that was initially in the ready state but was swapped out of main memory (see the virtual memory topic) and placed onto external storage by the scheduler is said to be in the suspend-ready state. The process transitions back to the ready state whenever it is brought into main memory again.
Suspend wait or suspend blocked – Similar to suspend ready, but for a process that was performing an I/O operation when a lack of main memory caused it to be moved to secondary memory. When its work is finished, it may go to the suspend-ready state.
Tutorial No – 4
Q.1) Explain contiguous memory management techniques in detail (i.e. fixed size
and variable size).
Answer - Contiguous Memory Allocation
In contiguous memory allocation, each process is contained in a single contiguous block of memory. Memory is divided into several fixed-size partitions, and each partition contains exactly one process. When a partition is free, a process is selected from the input queue and loaded into it. The free blocks of memory are known as holes; the set of holes is searched to determine which hole is best to allocate.
Fixed Equal-size Partitions
It divides main memory into a number of equal, fixed-size partitions. The operating system occupies some fixed portion, and the remaining portion of main memory is available for user processes.
Advantages
o Any process whose size is less than or equal to the partition size can be loaded into
any available partition.
o It supports multiprogramming.
Disadvantages
o If a program is too big to fit into a partition, the overlay technique must be used.
o Memory use is inefficient: a block of data loaded into memory may be smaller than its partition, and the leftover space inside the partition is wasted. This is known as internal fragmentation.
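Internal fragmentation can be illustrated numerically (the partition size and process sizes below are assumed for the example):

```python
# Hypothetical fixed equal-size partitioning: four partitions of 100 KB each.
# A process smaller than its partition wastes the difference inside the
# partition -- this wasted space is the internal fragmentation.
PARTITION_KB = 100
processes_kb = [90, 50, 100, 70]   # assumed process sizes, one per partition

internal_frag = [PARTITION_KB - p for p in processes_kb]
print(internal_frag)        # [10, 50, 0, 30]
print(sum(internal_frag))   # 90 KB wasted in total
```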
Fixed Variable Size Partitions
By using fixed variable size partitions we can overcome the disadvantages present
in fixed equal size partitioning.
Segmentation:
In operating systems, segmentation is a memory management technique in which memory is divided into variable-size parts. Each part is known as a segment, which can be allocated to a process.
The details about each segment are stored in a table called the segment table, which is itself stored in one (or more) of the segments.
The segment table contains two main pieces of information about each segment:
Base: the base address of the segment
Limit: the length of the segment
Why is segmentation required?
Paging is closer to the operating system than to the user. It divides the process into pages regardless of the fact that related parts of the process, such as the code of one function, may need to be loaded in the same page.
The operating system does not care about the user's view of the process. It may divide the same function across different pages, and those pages may or may not be loaded into memory at the same time, which decreases the efficiency of the system.
It is better to use segmentation, which divides the process into segments. Each segment contains the same type of content: for example, the main function can be in one segment and the library functions in another.
Translation of Logical address into physical address by segment table
The CPU generates a logical address which contains two parts: a segment number and an offset. The segment number is used as an index into the segment table. The limit of the respective segment is compared with the offset: if the offset is less than the limit, the address is valid and the physical address is the segment's base plus the offset; otherwise an error is raised because the address is invalid.
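The translation rule can be sketched as follows (the base/limit values are assumed for illustration, not taken from any particular system):

```python
# Sketch of logical-to-physical translation with a segment table.
# Each entry: (base, limit); a logical address is (segment number, offset).
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]  # assumed values

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                 # offset must be below the segment limit
        raise ValueError("invalid address: offset exceeds segment limit")
    return base + offset                # valid: physical = base + offset

print(translate(2, 53))     # 4353
print(translate(0, 999))    # 2399
```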
Tutorial No – 5
b) Belady’s anomaly:
Belady’s anomaly shows that it is possible to incur more page faults after increasing the number of page frames when using the first-in-first-out (FIFO) page replacement algorithm. For example, for the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4: with 3 frames we get 9 total page faults, but if we increase to 4 frames, we get 10 page faults.
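The anomaly can be verified with a short FIFO simulation of the reference string above:

```python
from collections import deque

def fifo_page_faults(refs, frames):
    """Count page faults under FIFO replacement with the given frame count."""
    memory, order, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:          # memory full: evict oldest page
                memory.discard(order.popleft())
            memory.add(page)
            order.append(page)
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_page_faults(refs, 3))   # 9
print(fifo_page_faults(refs, 4))   # 10 -- more frames, yet more faults
```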
Optimal Page replacement –
In this algorithm, pages are replaced which would not be used for the longest duration of
time in the future.
Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.
Initially all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 page faults.
0 is already there —> 0 page faults.
When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of time in the future —> 1 page fault.
0 is already there —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the reference string —> 0 page faults, because the pages are already available in memory. Total: 6 page faults.
Optimal page replacement is perfect, but not possible in practice, as the operating system cannot know future requests. The use of optimal page replacement is to set up a benchmark against which other replacement algorithms can be analyzed.
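The example above can be checked with a small simulation of the optimal policy (a sketch; ties among pages never used again are broken arbitrarily, which does not change the fault count):

```python
def optimal_page_faults(refs, frames):
    """Replace the page whose next use lies farthest in the future."""
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue                      # hit: nothing to do
        faults += 1
        if len(memory) < frames:
            memory.append(page)           # free frame available
        else:
            # distance to next use; pages never used again rank highest
            def next_use(p):
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else len(rest)
            victim = max(memory, key=next_use)
            memory[memory.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(optimal_page_faults(refs, 4))   # 6, matching the worked example
```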
c) Inverted paging:
Inverted Page Table –
An alternative approach is to use the inverted page table structure, which has one page-table entry for every frame of main memory. The number of page-table entries is thus reduced to the number of frames in physical memory, and a single page table is used to represent the paging information of all the processes.
With the inverted page table, the overhead of storing an individual page table for every process is eliminated, and only a fixed portion of memory is required to store the paging information of all the processes together. The technique is called inverted paging because the indexing is done with respect to the frame number instead of the logical page number. Each entry in the page table contains the following fields.
Page number – It specifies the page number part of the logical address.
Process id – An inverted page table contains the address-space information of all the processes in execution. Since two different processes can have the same set of virtual addresses, the inverted page table must store the process id of each process to identify its address space uniquely. This is done using the combination of the process id and the page number, so the process id acts as an address-space identifier and ensures that a virtual page of a particular process is mapped correctly to its corresponding physical frame.
Control bits – These bits store extra paging-related information, including the valid bit, dirty bit, reference bits, and protection and locking information bits.
Chained pointer – Sometimes two or more processes share a part of main memory. In that case, two or more logical pages map to the same page-table entry, and a chaining pointer is used to map the details of these logical pages to the root page table.
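A minimal sketch of the lookup (the frame count and mappings are hypothetical; real implementations hash on the (process id, page number) pair rather than scanning):

```python
# Sketch: the inverted page table has one entry per physical frame, keyed
# by (process id, page number); lookup finds the frame holding that pair.
NUM_FRAMES = 4
inverted_table = [None] * NUM_FRAMES     # frame -> (pid, page) or None

def map_page(pid, page, frame):
    inverted_table[frame] = (pid, page)

def lookup(pid, page):
    """Return the frame holding (pid, page), or None on a page fault."""
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, page):
            return frame
    return None

map_page(pid=1, page=0, frame=2)
map_page(pid=2, page=0, frame=3)         # same page number, different process
print(lookup(1, 0))   # 2
print(lookup(2, 0))   # 3
print(lookup(1, 5))   # None -- page fault
```

The second mapping shows why the process id is needed: both processes use page number 0, and only the (pid, page) pair distinguishes their frames.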
Tutorial No – 6
b) Linked allocation:
Linked List Allocation
Linked list allocation solves the problems of contiguous allocation. In linked list allocation, each file is treated as a linked list of disk blocks, and the disk blocks allocated to a particular file need not be contiguous on the disk. Each disk block allocated to a file contains a pointer to the next disk block allocated to the same file.
Advantages
There is no external fragmentation with linked allocation.
Any free block can be utilized in order to satisfy the file block requests.
File can continue to grow as long as the free blocks are available.
Directory entry will only contain the starting block address.
Disadvantages
Random access is not provided.
Pointers require some space in the disk blocks.
If any pointer in the linked list is broken or lost, the rest of the file becomes inaccessible and the file is effectively corrupted.
Reaching a given block requires traversing every block before it.
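A sketch of how a file is read under linked allocation (the block numbers and contents are made up for the example):

```python
# Sketch of linked allocation: each disk block stores (data, next_block).
# The directory entry keeps only the starting block; reaching block k of
# the file requires following k pointers, which is why there is no
# random access.
disk = {9: ("A", 16), 16: ("B", 1), 1: ("C", None)}   # assumed block layout
start_block = 9                                        # from the directory entry

def read_file(start):
    data, block = [], start
    while block is not None:
        contents, nxt = disk[block]
        data.append(contents)
        block = nxt                  # follow the pointer to the next block
    return "".join(data)

print(read_file(start_block))   # "ABC"
```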
c) Indexed allocation:
Limitation in the existing technology causes the evolution of a new technology. Till now, we
have seen various allocation methods; each of them was carrying several advantages and
disadvantages.
The file allocation table (FAT) tries to solve as many of these problems as possible, but it has a drawback: the more blocks there are, the larger the FAT becomes. More space must therefore be allocated to the file allocation table, and since the FAT needs to be cached, it may be impossible to keep that much of it in the cache. A new technique is needed to solve such problems.
Indexed Allocation Scheme
Instead of maintaining a file allocation table of all the disk pointers, the indexed allocation scheme stores all the disk pointers for a file in one block called the index block. The index block doesn't hold file data; it holds the pointers to all the disk blocks allocated to that particular file. The directory entry contains only the index block's address.
Advantages
Supports direct access.
A bad data block causes the loss of only that block.
Disadvantages
A bad index block can cause the loss of the entire file.
The size of a file is limited by the number of pointers an index block can hold.
Having a whole index block for a small file is wasteful.
Tutorial No – 7
Q.1) Explain file system structure in detail.
Answer - File System Structure
A file system provides efficient access to the disk by allowing data to be stored, located, and retrieved in a convenient way. A file system must be able to store a file, locate it, and retrieve it.
Most operating systems use a layered approach for every task, including the file system, with every layer responsible for some of its activities. The image shown below elaborates how the file system is divided into different layers, and also the functionality of each layer.
2. Linked List
This is another approach to free-space management. It links together all the free blocks, keeping a pointer in the cache to the first free block. All the free blocks on the disk are thus linked together by pointers: whenever a block gets allocated, its previous free block is linked to its next free block.
In Figure-2, the free-space list head points to Block 5, which points to Block 6, the next free block, and so on. The last free block contains a null pointer indicating the end of the free list. A drawback of this method is the I/O required to traverse the free-space list.
Grouping –
This approach stores the addresses of free blocks in the first free block. The first free block stores the addresses of, say, n free blocks. Of these n blocks, the first n-1 are actually free, and the last one contains the addresses of the next n free blocks.
An advantage of this approach is that the addresses of a group of free disk blocks can be found
easily.
Counting –
This approach stores the address of the first free disk block together with a number n of contiguous free disk blocks that follow the first block.
Every entry in the list contains:
the address of the first free disk block, and
a number n.
For example, in Figure-1, the first entry of the free space list would be ([Address of Block 5], 2), because 2 contiguous free blocks follow block 5.
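The grouping of contiguous free blocks into (address, count) entries can be sketched as follows (the free-block numbers below are assumed for the example):

```python
# Sketch of the counting representation: group contiguous free blocks into
# (first block, run length) entries.
free_blocks = [5, 6, 10, 11, 12, 17]    # assumed sorted free list

def counting_entries(blocks):
    entries, start, count = [], blocks[0], 1
    for prev, cur in zip(blocks, blocks[1:]):
        if cur == prev + 1:
            count += 1                   # still contiguous: extend the run
        else:
            entries.append((start, count))
            start, count = cur, 1        # gap: start a new run
    entries.append((start, count))
    return entries

print(counting_entries(free_blocks))   # [(5, 2), (10, 3), (17, 1)]
```

Six free blocks collapse to three entries; the more contiguous the free space, the shorter the list.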
Tutorial No – 8
Q.1) Explain following disk scheduling algorithms in detail.
FCFS scheduling algorithm
SSTF (shortest seek time first) algorithm
SCAN scheduling
C-SCAN scheduling
LOOK Scheduling
C-LOOK scheduling
Answer –
The various disk scheduling algorithms are listed below. Each algorithm carries its own
advantages and disadvantages, and the limitations of one algorithm lead to the evolution
of the next.
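As a small illustration of the first two algorithms, the total seek distance can be computed for a hypothetical request queue. The cylinder numbers and starting head position below are assumptions for the example, not taken from the tutorial:

```python
# FCFS: serve requests strictly in arrival order.
def fcfs_seek(head, requests):
    total = 0
    for r in requests:
        total += abs(r - head)   # head moves to the next request in order
        head = r
    return total

# SSTF: always serve the pending request closest to the current head.
def sstf_seek(head, requests):
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # cylinder requests
print(fcfs_seek(53, queue))   # 640 cylinders of head movement
print(sstf_seek(53, queue))   # 236 cylinders of head movement
```

For this queue, SSTF reduces the total head movement from 640 to 236 cylinders, which is exactly the kind of improvement that motivated moving beyond FCFS.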
Assume that in the above figure, C3 is lost due to some disk failure. Then, we can recompute
the data bit stored in C3 by looking at the values of all the other columns and the parity bit.
This allows us to recover lost data.
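The recovery step relies on the XOR property of parity: the parity bit is the XOR of all data bits, so any single lost bit can be recomputed by XOR-ing the surviving bits with the parity. A sketch with hypothetical bit values:

```python
from functools import reduce

def parity(bits):
    """XOR all bits together to form the parity bit."""
    return reduce(lambda a, b: a ^ b, bits)

data = [1, 0, 1, 1]          # bits stored on disks C1..C4 (assumed values)
p = parity(data)             # parity bit on the dedicated parity disk

# Suppose the bit on C3 (data[2]) is lost due to a disk failure:
surviving = data[:2] + data[3:]
recovered = parity(surviving + [p])
print(recovered)             # 1, the original value of data[2]
```

Because XOR-ing a value with itself cancels out, XOR-ing the survivors with the parity leaves exactly the missing bit.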
Evaluation:
Reliability: 1
RAID-4 allows recovery of at most 1 disk failure (because of the way parity works). If
more than one disk fails, there is no way to recover the data.
Capacity: (N-1)*B
One disk in the system is reserved for storing the parity. Hence, (N-1) disks are made
available for data storage, each disk having B blocks.
RAID-5 (Block-Level Striping with Distributed Parity)
This is a slight modification of the RAID-4 system where the only difference is that the
parity rotates among the drives.
In the figure, we can notice how the parity bit “rotates”.
This was introduced to make the random write performance better.
Evaluation:
Reliability: 1
RAID-5 allows recovery of at most 1 disk failure (because of the way parity works). If
more than one disk fails, there is no way to recover the data. This is identical to RAID-4.
Capacity: (N-1)*B
Overall, space equivalent to one disk is utilized in storing the parity. Hence, (N-1) disks are
made available for data storage, each disk having B blocks.
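The "rotating" parity placement can be sketched with a tiny formula. The layout below is one common left-symmetric-style rotation, used here purely as an illustration; real arrays may choose a different rotation scheme:

```python
# For stripe number s on n_disks disks, place the parity block on a disk
# that shifts by one position each stripe, so parity is spread evenly.
def parity_disk(stripe, n_disks):
    return (n_disks - 1 - stripe) % n_disks

# With 4 disks, the parity moves one disk "left" on each successive stripe:
print([parity_disk(s, 4) for s in range(4)])   # [3, 2, 1, 0]
```

Spreading the parity this way means parity writes are not funneled through a single disk, which is what improves random write performance relative to RAID-4.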
What about the other RAID levels?
RAID-2 consists of bit-level striping using Hamming-code parity. RAID-3 consists of byte-level
striping with a dedicated parity disk. These two are less commonly used.
RAID-6 is a more recent level that uses distributed double parity: block-level striping with 2
parity blocks instead of just 1, distributed across all the disks. There are also hybrid RAIDs,
which nest more than one RAID level one after the other to fulfill specific requirements.
Tutorial No – 9
Q.1) Differentiate between the Windows and Linux operating systems.
Answer -
LINUX | WINDOWS
Linux is an open source operating system. | Windows is not an open source operating system.
The forward slash is used to separate directories. | The back slash is used to separate directories.
Linux provides more security than Windows. | Windows provides less security than Linux.
Basic Features
Following are some of the important features of Linux Operating System.
Portable − Portability means that software can work on different types of hardware in the same
way. The Linux kernel and application programs support installation on any kind of
hardware platform.
Open Source − Linux source code is freely available, and it is a community-based
development project. Multiple teams work in collaboration to enhance the capabilities of the
Linux operating system, and it is continuously evolving.
Multi-User − Linux is a multi-user system, meaning multiple users can access system
resources like memory, RAM, and application programs at the same time.
Multiprogramming − Linux is a multiprogramming system, meaning multiple applications
can run at the same time.
Hierarchical File System − Linux provides a standard file structure in which system files and
user files are arranged.
Shell − Linux provides a special interpreter program which can be used to execute
commands of the operating system. It can be used to perform various types of operations,
call application programs, etc.
Security − Linux provides user security using authentication features like password
protection, controlled access to specific files, and encryption of data.
Q.3) Write short note on:
a) Process management in linux :
A process is a program in execution. It generally takes an input, processes it, and produces the
appropriate output.
There are basically 2 types of processes.
Foreground processes: Such processes are also known as interactive processes. These are
the processes that are executed or initiated by the user or the programmer; they cannot
be initialized by system services. Such processes take input from the user and return output.
While such a process is running, we cannot directly initiate a new process from the same
terminal.
Background processes: Such processes are also known as non-interactive processes.
These are the processes that are executed or initiated by the system itself or by users,
though they can also be managed by users. Each such process has a unique PID, or process ID,
assigned to it, and we can initiate other processes within the same terminal from which they
were initiated.
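The distinction can be illustrated with Python's `subprocess` module (an illustration of the concept, not how Linux itself classifies processes): `run()` blocks the caller until the child finishes, like a foreground process occupying the terminal, while `Popen()` returns immediately and the child runs in the background, identified by its PID, until we collect its result later.

```python
import subprocess
import sys

# Foreground-style: the call does not return until the child process exits.
fg = subprocess.run(
    [sys.executable, "-c", "print('foreground done')"],
    capture_output=True, text=True,
)
print(fg.stdout.strip())

# Background-style: Popen returns immediately; the child runs concurrently
# and carries a unique PID, like a background process in a shell.
bg = subprocess.Popen(
    [sys.executable, "-c", "print('background done')"],
    stdout=subprocess.PIPE, text=True,
)
print("child PID:", bg.pid)
out, _ = bg.communicate()   # collect the background child's output later
print(out.strip())
```

In a shell, the same contrast is between running a command normally and appending `&` to launch it in the background.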
Tutorial No – 10
Q.1) Explain the design principles of the Linux system in detail.
Answer –
As a result of the development of PC technology, the Linux kernel has also become more
complete in implementing UNIX functions. Speed and efficiency are important design goals, but
lately Linux development has concentrated more on a third design goal:
standardization. The POSIX standard consists of a collection of specifications covering different
aspects of operating system behavior. There are POSIX documents for ordinary operating system
functions and for extensions such as POSIX threads and real-time operations. Linux is
designed to comply with the relevant POSIX documents; at least two Linux distributions have
received official POSIX certification.
Because Linux provides a standard interface to programmers and users, it holds few
surprises for anyone familiar with UNIX. However, the Linux programming interface
follows UNIX SVR4 semantics rather than BSD behavior. A separate collection of libraries
is available to implement BSD semantics in places where the two behaviors differ
significantly.
There are many other standards in the UNIX world, but Linux’s full certification against other
UNIX standards is sometimes slow, because certification is often available only at a price (not
freely), and there is a cost involved in certifying an operating system’s compliance or
compatibility with most standards. Supporting a broad range of applications is important for any
operating system, so implementing these standards is a main goal of Linux development even
where the implementation is not formally certified. In addition to the base POSIX standard,
Linux currently supports the POSIX thread extensions and a subset of the extensions for POSIX
real-time process control.