
OPERATING SYSTEM TUTORIAL
STUDENT NAME – PALLAVI ARJUN BHARTI
ROLL NUMBER - 18141112
BATCH – I1
Operating System Tutorial
Tutorial No – 1
Q.1) Define operating system. Describe OS components and services.
Answer - An operating system (OS) is system software that works as an interface between the
user and the computer to perform tasks such as memory management, file management, input
and output handling, security, process management, job accounting, error detection, system
performance monitoring, and control of peripheral devices such as printers and disk drives.
Popular operating systems include Windows, Linux, AIX, VMS, z/OS, etc.
The components of an operating system play a key role in making the various parts of a
computer system work together. These components are discussed below.
Kernel
The kernel provides the most basic level of control over all the computer's peripherals. It is an
essential component of the operating system: it is loaded first and remains in main memory. The
kernel manages memory access for programs in RAM, determines which programs get access to
which hardware resources, and sets up the operating states of the CPU for the best operation at
all times.
Process Execution
The OS provides an interface between the hardware and an application program so that the
program can interact with a hardware device simply by following the procedures and principles
configured into the OS. Program execution involves a process created by the OS kernel, which
uses memory space as well as various other resources.
Interrupt
Interrupts are essential in an operating system because they give it a reliable technique for
communicating with and reacting to its surroundings. An interrupt is a signal between a device
and the computer system, or from a program within the computer, that requires the OS to stop
and decide what to do next. Whenever an interrupt signal is received, the computer hardware
automatically puts on hold whatever program is currently running, saves its status, and runs the
program previously associated with that interrupt.
Memory Management
Memory management is the OS function that manages main memory and moves processes back
and forth between disk and main memory during execution. It tracks each and every memory
location, whether it is allocated to some process or free. It determines how much memory can be
allocated to processes and decides which process gets memory at what time. Whenever memory
is freed, it updates the status accordingly. Memory management work can be divided into three
important groups: hardware memory management, OS memory management, and application
memory management.
Multitasking
Multitasking is the running of several independent computer programs on the same computer
system. Multitasking in an OS allows an operator to perform more than one computer task at a
time. Since most computers can execute only one or two tasks at any instant, this is usually done
with the help of time-sharing, in which each program uses a share of the computer's time to
execute.
Networking
Networking occurs when processors interact with each other through communication lines. The
design of a communication network must consider routing, connection methods, contention, and
security.
Presently, most operating systems support a variety of networking techniques, hardware, and
applications. This means that computers running different operating systems can be included in
a common network to share resources such as data, computing power, scanners, and printers,
using either wired or wireless connections.
Security
If a computer allows numerous individuals to run various processes concurrently, those
processes must be protected from one another's activities. System security depends upon a
variety of technologies working effectively. Modern operating systems provide access to a
number of resources, which are available to the software running on the system and to external
devices such as networks, by means of the kernel. The operating system must be capable of
distinguishing between requests that should be allowed to proceed and others that should not be
processed. Additionally, beyond permitting or prohibiting access, a computer system with a high
level of protection also provides auditing options, which allow the monitoring of requests for
access to resources.
User Interface
A user interface (UI) is the part of an OS that permits an operator to interact with the machine.
A text-based user interface displays text, and its commands are typed on a command line with
the help of a keyboard.
OS-based applications mainly provide a specific user interface for efficient communication. The
main function of an application's user interface is to take inputs from the operator and to provide
outputs to the operator, but the sorts of inputs accepted and the types of outputs offered may
vary from application to application. The UI of any application can be classified into two types,
namely the GUI (graphical user interface) and the CLI (command-line user interface).
Q.2) What is process scheduling? Explain its operation on processes with the
help of an example.
Answer - Process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU and the selection of another process on the basis
of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into executable memory at a
time, and the loaded processes share the CPU using time multiplexing.
When the state of a process is changed, its PCB is unlinked from its current queue and moved to
its new state queue.
The Operating System maintains the following important process scheduling queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory, ready and
waiting to execute. A new process is always put in this queue.
Device queues − The processes that are blocked due to unavailability of an I/O device
constitute this queue. The OS can use different policies to manage each queue (FIFO, round
robin, priority, etc.). The OS scheduler determines how to move processes between the ready
queue and the run queue, which can have only one entry per processor core on the system.
Two-State Process Model
Two-state process model refers to running and non-running states which are described below −
Running
When a new process is created, it enters the system in the running state.
Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry
in the queue is a pointer to a particular process, and the queue is implemented using a linked
list. The dispatcher works as follows: when a process is interrupted, it is transferred to the
waiting queue; if the process has completed or aborted, it is discarded. In either case, the
dispatcher then selects a process from the queue to execute.
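To make the dispatcher's behaviour concrete, here is a minimal sketch in C of the two-state
model described above: not-running processes wait in a linked-list queue and a dispatcher
repeatedly selects the process at the head. All names (pcb, enqueue, dispatch) are illustrative
only, not taken from any real kernel.

#include <stdio.h>
#include <stdlib.h>

/* Minimal PCB for the two-state model: a process is either running
   (held by the dispatcher) or waiting in this linked-list queue. */
struct pcb {
    int pid;
    struct pcb *next;
};

static struct pcb *head = NULL, *tail = NULL;

/* Add a not-running process at the tail of the queue. */
void enqueue(struct pcb *p) {
    p->next = NULL;
    if (tail) tail->next = p; else head = p;
    tail = p;
}

/* Dispatcher: select the process at the head of the queue. */
struct pcb *dispatch(void) {
    struct pcb *p = head;
    if (p) { head = p->next; if (!head) tail = NULL; }
    return p;
}

int main(void) {
    for (int i = 1; i <= 3; i++) {
        struct pcb *p = malloc(sizeof *p);
        p->pid = i;
        enqueue(p);            /* new process enters the not-running queue */
    }
    struct pcb *running;
    while ((running = dispatch()) != NULL) {
        printf("running pid %d\n", running->pid);
        free(running);         /* process completed or aborted: discard it */
    }
    return 0;
}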
Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to run.
Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are admitted
to the system for processing. It selects processes from the queue and loads them into memory for
execution, where they become available for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O
bound and processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing operating
systems have no long-term scheduler. The long-term scheduler is used when a process changes
state from new to ready.
Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with a chosen set of criteria. It carries out the change of a process from the ready
state to the running state: the CPU scheduler selects one process from among those that are
ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to execute
next. Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler
Medium-term scheduling is a part of swapping. It removes the processes from the memory. It
reduces the degree of multiprogramming. The medium-term scheduler is in-charge of handling
the swapped out-processes.
A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress towards completion. In this condition, to remove the process from
memory and make space for other processes, the suspended process is moved to the secondary
storage. This process is called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix.
Q.3) Why to learn operating system? What are the different applications of
operating system?
Answer - An operating system is the most important software that runs on a computer. It
manages the computer's memory and processes, as well as all of its software and hardware. It
also allows you to communicate with the computer without knowing how to speak the
computer's language. Without an operating system, a computer is useless. Your computer's
operating system (OS) manages all of the software and hardware on the computer. Most of the
time, there are several different computer programs running at the same time, and they all need
to access your computer's central processing unit (CPU), memory, and storage. The operating
system coordinates all of this to make sure each program gets what it needs.
Types of operating systems
Operating systems usually come pre-loaded on any computer you buy. Most people use the
operating system that comes with their computer, but it's possible to upgrade or even change
operating systems. The three most common operating systems for personal computers are
Microsoft Windows, macOS, and Linux. Modern operating systems use a graphical user
interface, or GUI (pronounced gooey). A GUI lets you use your mouse to click icons, buttons,
and menus, and everything is clearly displayed on the screen using a combination of graphics
and text. Each operating system's GUI has a different look and feel, so if you switch to a
different operating system it may seem unfamiliar at first. However, modern operating systems
are designed to be easy to use, and most of the basic principles are the same.
Microsoft Windows
Microsoft created the Windows operating system in the mid-1980s. There have been many
different versions of Windows, but the most recent ones are Windows 10 (released in 2015),
Windows 8 (2012), Windows 7 (2009), and Windows Vista (2007). Windows comes pre-loaded
on most new PCs, which helps to make it the most popular operating system in the world.
macOS
macOS (previously called OS X) is a line of operating systems created by Apple. It comes
preloaded on all Macintosh computers, or Macs. Some of the specific versions include Mojave
(released in 2018), High Sierra (2017), and Sierra (2016). According to StatCounter Global Stats,
macOS users account for less than 10% of global operating systems—much lower than the
percentage of Windows users (more than 80%). One reason for this is that Apple computers tend
to be more expensive. However, many people do prefer the look and feel of macOS over
Windows.
Linux
Linux (pronounced LINN-ux) is a family of open-source operating systems, which means they
can be modified and distributed by anyone around the world. This is different from proprietary
software like Windows, which can only be modified by the company that owns it. The
advantages of Linux are that it is free, and there are many different distributions, or versions, you
can choose from. According to StatCounter Global Stats, Linux users account for less than 2% of
global operating systems. However, most servers run Linux because it's relatively easy to
customize.
Operating systems for mobile devices
The operating systems we've been talking about so far were designed to run on desktop and
laptop computers. Mobile devices such as phones, tablet computers, and MP3 players are
different from desktop and laptop computers, so they run operating systems that are designed
specifically for mobile devices. Examples of mobile operating systems include Apple iOS and
Google Android. Operating
systems for mobile devices generally aren't as fully featured as those made for desktop and
laptop computers, and they aren't able to run all of the same software. However, you can still do
a lot of things with them, like watch movies, browse the Web, manage your calendar, and play
games.

Tutorial No – 2
Q.1) What is a process scheduler? State the characteristics of a good process
scheduler.
Answer - Process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU and the selection of another process on the basis of a
particular strategy. Process scheduling is an essential part of multiprogramming operating
systems.
Characteristics of a good scheduling algorithm
Characteristics include CPU utilization, throughput, response time, turnaround time, waiting
time, and fairness.
Utilization of CPU
A good scheduling algorithm keeps the CPU busy by making maximum use of it.
Throughput
Throughput is the number of jobs finished per unit of time; a good algorithm maximizes it.
Response Time
Response time is the time taken to start responding to a request; the scheduler minimizes it.
Turnaround time
The time span between the submission of a job and its completion is the turnaround time.
Waiting time
In a multiprogramming system, the time a task spends waiting for the allocation of resources is
its waiting time, and a good algorithm minimizes it.
Fairness
The scheduler takes care that each process receives its fair share of the CPU.
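As a quick worked illustration of the last three metrics (the numbers are hypothetical; the
standard textbook formulas are turnaround = completion − arrival and waiting = turnaround −
burst):

#include <stdio.h>

/* Standard formulas: turnaround = completion - arrival,
   waiting = turnaround - burst. All numbers are hypothetical. */
int main(void) {
    int arrival[]    = {0, 1, 2};
    int burst[]      = {5, 3, 8};
    int completion[] = {5, 8, 16};   /* e.g. an FCFS schedule */
    for (int i = 0; i < 3; i++) {
        int turnaround = completion[i] - arrival[i];
        int waiting    = turnaround - burst[i];
        printf("P%d: turnaround=%d waiting=%d\n", i + 1, turnaround, waiting);
    }
    return 0;
}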

Q.2) Define process scheduling. Which are the types of scheduler?


Answer - Schedulers are special system software which handle process scheduling in various
ways. Their main task is to select the jobs to be submitted into the system and to decide which
process to run. There are three types of process scheduler.
Long Term or job scheduler :
It brings new processes to the ready state. It controls the degree of multiprogramming, i.e., the
number of processes present in the ready state at any point of time. It is important that the
long-term scheduler make a careful selection of both I/O-bound and CPU-bound processes:
I/O-bound tasks are those that spend much of their time on input and output operations, while
CPU-bound processes are those that spend their time on the CPU. The job scheduler increases
efficiency by maintaining a balance between the two.
Short term or CPU scheduler :
It is responsible for selecting one process from the ready state and scheduling it into the running
state. Note: the short-term scheduler only selects the process to schedule; it does not itself load
the process onto the CPU. This is where all the scheduling algorithms are used. The CPU
scheduler is responsible for ensuring there is no starvation owing to processes with high burst
times.
The dispatcher is responsible for loading the process selected by the short-term scheduler onto
the CPU (ready to running state); context switching is done by the dispatcher only. A dispatcher
does the following:
Switching context.
Switching to user mode.
Jumping to the proper location in the newly loaded program.
Medium-term scheduler :
It is responsible for suspending and resuming processes. It mainly does swapping (moving
processes from main memory to disk and vice versa). Swapping may be necessary to improve
the process mix, or because a change in memory requirements has overcommitted available
memory, requiring memory to be freed up. It is helpful in maintaining a balance between
I/O-bound and CPU-bound processes. It reduces the degree of multiprogramming.

Q.3) Describe operating system scheduling algorithm in detail.


Answer - A process scheduler schedules different processes to be assigned to the CPU based on
particular scheduling algorithms. There are six popular process scheduling algorithms, discussed
below −
- First-Come, First-Served (FCFS) Scheduling
- Shortest-Job-Next (SJN) Scheduling
- Priority Scheduling
- Shortest Remaining Time
- Round Robin (RR) Scheduling
- Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are
designed so that once a process enters the running state, it cannot be preempted until it completes
its allotted time, whereas preemptive scheduling is based on priority: a scheduler may
preempt a low-priority running process at any time when a high-priority process enters the
ready state.
First Come First Serve (FCFS)
Jobs are executed on a first come, first served basis.
It is a non-preemptive scheduling algorithm.
It is easy to understand and implement.
Its implementation is based on a FIFO queue.
It is poor in performance, as the average wait time is high.
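A minimal sketch of how FCFS waiting times follow from the FIFO order (burst times are
hypothetical): each process waits for the total burst time of everything ahead of it in the queue.

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};        /* hypothetical burst times */
    int wait = 0, total_wait = 0;
    for (int i = 0; i < 3; i++) {
        printf("P%d waits %d\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];            /* the next process also waits this long */
    }
    printf("average wait = %.2f\n", total_wait / 3.0);
    return 0;
}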
Shortest Job Next (SJN)
This is also known as shortest job first, or SJF.
This is a non-preemptive scheduling algorithm (its preemptive version is shortest remaining
time, described below).
It is the best approach to minimize waiting time.
It is easy to implement in batch systems where the required CPU time is known in advance.
It is impossible to implement in interactive systems where the required CPU time is not known;
the processor must know in advance how much time the process will take.
Priority Based Scheduling
Priority scheduling is a non-preemptive algorithm and one of the most common scheduling
algorithms in batch systems.
Each process is assigned a priority. The process with the highest priority is executed first, and so on.
Processes with same priority are executed on first come first served basis.
Priority can be decided based on memory requirements, time requirements or any other resource
requirement.
Shortest Remaining Time
Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
The processor is allocated to the job closest to completion but it can be preempted by a newer
ready job with shorter time to completion.
Impossible to implement in interactive systems where required CPU time is not known.
It is often used in batch environments where short jobs need to be given preference.
Round Robin Scheduling
Round robin is a preemptive process scheduling algorithm.
Each process is provided a fixed time to execute, called a quantum.
Once a process has executed for the given time period, it is preempted, and another process
executes for its time period.
Context switching is used to save the states of preempted processes.
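A minimal sketch of the quantum mechanism (hypothetical burst times and a quantum of 2):
each pass of the outer loop gives every unfinished process at most one quantum before
preempting it.

#include <stdio.h>

int main(void) {
    int remaining[] = {5, 3, 8};     /* hypothetical remaining burst times */
    int n = 3, quantum = 2, clock = 0, done = 0;
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            clock += slice;          /* the process runs for one time slice */
            remaining[i] -= slice;   /* then it is preempted (or finishes)  */
            if (remaining[i] == 0) {
                done++;
                printf("P%d finishes at t=%d\n", i + 1, clock);
            }
        }
    }
    return 0;
}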
Multiple-Level Queues Scheduling
Multiple-level queues are not an independent scheduling algorithm. They make use of other
existing algorithms to group and schedule jobs with common characteristics.
Multiple queues are maintained for processes with common characteristics.
Each queue can have its own scheduling algorithms.
Priorities are assigned to each queue.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another
queue. The Process Scheduler then alternately selects jobs from each queue and assigns them to
the CPU based on the algorithm assigned to the queue.
Tutorial No – 3
Q.1) Explain the critical section problem.
Answer - The critical section is the part of a program which accesses shared resources. The
resource may be any resource in the computer, such as a memory location, a data structure, the
CPU, or an I/O device. The critical section cannot be executed by more than one process at the
same time, so the operating system faces difficulty in allowing and disallowing processes to
enter the critical section. The critical section problem is to design a set of protocols which can
ensure that a race condition among the processes will never arise. In order to synchronize the
cooperating processes, our main task is to solve the critical section problem. We need to provide
a solution in such a way that the following conditions are satisfied.
Requirements of Synchronization mechanisms
Primary
Mutual Exclusion
Our solution must provide mutual exclusion. By mutual exclusion, we mean that if one process
is executing inside its critical section, then no other process may enter the critical section.
Progress
Progress means that a process which does not need to enter the critical section should not
prevent other processes from getting into the critical section.
Secondary
Bounded Waiting
We should be able to predict the waiting time for every process to get into the critical section;
a process must not wait endlessly to enter its critical section.
Architectural Neutrality
Our mechanism must be architecturally neutral: if our solution works on one architecture, it
should also run on the other ones as well.
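A classic two-process solution that satisfies these requirements is Peterson's algorithm. The
sketch below is a hedged illustration, not part of the tutorial text above; it uses C11
sequentially consistent atomics, which provide the memory ordering the algorithm assumes.

#include <stdatomic.h>
#include <stdio.h>
#include <pthread.h>

/* Peterson's algorithm for two threads: mutual exclusion,
   progress and bounded waiting for the shared counter. */
atomic_int flag[2];      /* flag[i] = 1: thread i wants to enter */
atomic_int turn;         /* whose turn it is to wait */
long counter = 0;        /* the shared resource */

void *worker(void *arg) {
    int i = (int)(long)arg, other = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], 1);           /* announce intent        */
        atomic_store(&turn, other);          /* let the other go first */
        while (atomic_load(&flag[other]) &&
               atomic_load(&turn) == other)
            ;                                /* bounded busy-wait      */
        counter++;                           /* critical section       */
        atomic_store(&flag[i], 0);           /* exit section           */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}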

Q.2) Compare the following.


A) Process and Thread
Answer –
PROCESS vs THREAD
1. A process is any program in execution; a thread is a segment of a process.
2. A process takes more time to terminate; a thread takes less time to terminate.
3. A process takes more time for creation; a thread takes less time for creation.
4. A process also takes more time for context switching; a thread takes less time for context
switching.
5. A process is less efficient in terms of communication; a thread is more efficient in terms of
communication.
6. A process consumes more resources; a thread consumes fewer resources.
7. Processes are isolated; threads share memory.
8. A process is called a heavyweight process; a thread is called a lightweight process.
9. Process switching uses an interface in the operating system; thread switching does not
require calling the operating system and does not cause an interrupt to the kernel.
10. If one server process is blocked, no other server process can execute until the first process
is unblocked; a second thread in the same task can run while one server thread is blocked.
11. A process has its own process control block, stack and address space; a thread has its
parent's PCB, its own thread control block and stack, and a common address space.

B) Long term scheduler, short term scheduler and medium term scheduler
Answer - Long Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are admitted
to the system for processing. It selects processes from the queue and loads them into memory for
execution, where they become available for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O
bound and processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing operating
systems have no long-term scheduler. The long-term scheduler is used when a process changes
state from new to ready.
Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with a chosen set of criteria. It carries out the change of a process from the ready
state to the running state: the CPU scheduler selects one process from among those that are
ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to execute
next. Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler
Medium-term scheduling is a part of swapping. It removes the processes from the memory. It
reduces the degree of multiprogramming. The medium-term scheduler is in-charge of handling
the swapped out-processes.
A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress towards completion. In this condition, to remove the process from
memory and make space for other processes, the suspended process is moved to the secondary
storage. This process is called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix.
Q.3) Write short notes on:
Semaphores-
Semaphores are integer variables that are used to solve the critical section problem by using two
atomic operations, wait and signal that are used for process synchronization.
The definitions of wait and signal are as follows −
Wait
The wait operation decrements the value of its argument S if it is positive. If S is zero or
negative, the process waits (spins) until S becomes positive.
wait(S)
{
    while (S <= 0)
        ;        // busy-wait until S becomes positive
    S--;
}
Signal
The signal operation increments the value of its argument S.

signal(S)
{
    S++;
}
Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary semaphores.
Details about these are given as follows −
Counting Semaphores
These are integer-valued semaphores with an unrestricted value domain. They are used to
coordinate resource access, where the semaphore count is the number of available resources. If
resources are added, the semaphore count is automatically incremented, and if resources are
removed, the count is decremented.
Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The
wait operation only proceeds when the semaphore is 1, and the signal operation succeeds when
the semaphore is 0. Binary semaphores are sometimes easier to implement than counting
semaphores.
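For a runnable counterpart to the pseudocode above, POSIX exposes counting semaphores
through sem_init, sem_wait and sem_post. A minimal sketch using a binary semaphore
(initial value 1) to protect a shared counter between two threads:

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t mutex;             /* binary semaphore used as a mutex */
long counter = 0;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);    /* wait: enter the critical section   */
        counter++;
        sem_post(&mutex);    /* signal: leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);  /* 0 = shared between threads, initial value 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    sem_destroy(&mutex);
    return 0;
}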
Advantages of Semaphores
Some of the advantages of semaphores are as follows −
Semaphores allow only one process into the critical section. They follow the mutual exclusion
principle strictly and are much more efficient than some other methods of synchronization.
There is no resource wastage because of busy waiting in semaphores as processor time is not
wasted unnecessarily to check if a condition is fulfilled to allow a process to access the critical
section.
Semaphores are implemented in the machine independent code of the microkernel. So they are
machine independent.
Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows −
Semaphores are complicated so the wait and signal operations must be implemented in the
correct order to prevent deadlocks.
Semaphores are impractical for large scale use, as their use leads to loss of modularity. This
happens because the wait and signal operations prevent the creation of a structured layout for the
system.
Semaphores may lead to a priority inversion where low priority processes may access the critical
section first and high priority processes later.

Process states-
New (Create) – In this step, the process is about to be created but not yet created; it is the
program, present in secondary memory, that will be picked up by the OS to create the
process.
Ready – New -> Ready to run. After the creation of a process, the process enters the ready state
i.e. the process is loaded into the main memory. The process here is ready to run and is waiting
to get the CPU time for its execution. Processes that are ready for execution by the CPU are
maintained in a queue for ready processes.
Run – The process is chosen by the CPU scheduler for execution, and the instructions within the
process are executed by one of the available CPU cores.
Blocked or wait – Whenever the process requests access to I/O, needs input from the user, or
needs access to a critical region (the lock for which is already acquired), it enters the blocked or
wait state. The process continues to wait in main memory and does not require the CPU. Once
the I/O operation is completed, the process goes to the ready state.
Terminated or completed – Process is killed as well as PCB is deleted.
Suspend ready – Processes that were initially in the ready state but were swapped out of main
memory (see the virtual memory topic) and placed in external storage by the scheduler are said
to be in the suspend-ready state. A process transitions back to the ready state whenever it is
brought back into main memory.
Suspend wait or suspend blocked – Similar to suspend ready, but applies to a process that was
performing an I/O operation when a lack of main memory caused it to be moved to secondary
memory. When its work is finished, it may go to the suspend-ready state.

Process control block-


The process control block stores many data items that are needed for efficient process
management. The main data items are explained below −
Process State
This specifies the process state i.e. new, ready, running, waiting or terminated.
Process Number
This shows the number of the particular process.
Program Counter
This contains the address of the next instruction that needs to be executed in the process.
Registers
This specifies the registers that are used by the process. They may include accumulators, index
registers, stack pointers, general purpose registers etc.
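A hypothetical C struct mirroring just these four fields (real kernels, such as Linux with its
task_struct, keep far more state per process):

#include <stdio.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Simplified PCB: one entry per process, holding the items above. */
struct pcb {
    enum proc_state state;    /* process state                       */
    int pid;                  /* process number                      */
    unsigned long pc;         /* program counter: next instruction   */
    unsigned long regs[16];   /* registers saved on a context switch */
};

int main(void) {
    struct pcb p = { NEW, 1, 0, {0} };
    p.state = READY;          /* admitted: new -> ready */
    printf("pid %d, state %d\n", p.pid, p.state);
    return 0;
}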

Tutorial No – 4
Q.1) Explain contiguous memory management techniques in detail (i.e. fixed size
and variable size).
Answer - Contiguous Memory Allocation
In contiguous memory allocation each process is contained in a single contiguous block of
memory. Memory is divided into several fixed size partitions. Each partition contains exactly
one process. When a partition is free, a process is selected from the input queue and loaded into
it. The free blocks of memory are known as holes. The set of holes is searched to determine
which hole is best to allocate.
Fixed Equal-size Partitions
This scheme divides main memory into an equal number of fixed-sized partitions; the operating
system occupies some fixed portion, and the remaining portion of main memory is available for
user processes.
Advantages
o Any process whose size is less than or equal to the partition size can be loaded into
any available partition.
o It supports multiprogramming.
Disadvantages
o If a program is too big to fit into a partition, the overlay technique must be used.
o Memory use is inefficient, i.e., a block of data loaded into memory may be smaller
than the partition; the wasted space is known as internal fragmentation.
Fixed Variable-Size Partitions
By using fixed partitions of variable sizes, we can overcome the disadvantages present
in fixed equal-size partitioning.

Q.2) Write short notes on:


1) Thrashing:
Thrashing is a condition or a situation when the system is spending a major portion of its time in
servicing the page faults, but the actual processing done is very negligible.
The basic concept involved is that if a process is allocated too few frames, then there will be too
many and too frequent page faults. As a result, no useful work would be done by the CPU and the
CPU utilisation would fall drastically. The long-term scheduler would then try to improve the CPU
utilisation by loading some more processes into the memory thereby increasing the degree of
multiprogramming. This would result in a further decrease in the CPU utilization triggering a
chained reaction of higher page faults followed by an increase in the degree of multiprogramming,
called Thrashing.
2) Virtual Memory Management:
A computer can address more memory than the amount physically installed on the system. This
extra memory is actually called virtual memory and it is a section of a hard disk that's set up to
emulate the computer's RAM.
The main visible advantage of this scheme is that programs can be larger than physical memory.
Virtual memory serves two purposes. First, it allows us to extend the use of physical memory by
using disk. Second, it allows us to have memory protection, because each virtual address is
translated to a physical address.
The following are situations in which the entire program does not need to be fully loaded in
main memory.
User-written error handling routines are used only when an error occurs in the data or
computation.
Certain options and features of a program may be used rarely.
Many tables are assigned a fixed amount of address space even though only a small amount of
the table is actually used.
The ability to execute a program that is only partially in memory confers many benefits.
Less I/O would be needed to load or swap each user program into memory.
A program would no longer be constrained by the amount of physical memory that is available.
Each user program could take less physical memory, so more programs could be run at the same
time, with a corresponding increase in CPU utilization and throughput.

3) Segmentation:
In operating systems, segmentation is a memory management technique in which the memory
is divided into variable-size parts. Each part is known as a segment, which can be allocated to a
process.
The details about each segment are stored in a table called the segment table. The segment table
is stored in one (or many) of the segments.
The segment table contains mainly two pieces of information about each segment:
Base: the base address of the segment.
Limit: the length of the segment.
Why is segmentation required?
Till now, we were using paging as our main memory management technique. Paging is closer to
the operating system than to the user. It divides all processes into pages, regardless of the fact
that a process may have related parts or functions which need to be loaded on the same page.
The operating system doesn't care about the user's view of the process. It may divide the same
function into different pages, and those pages may or may not be loaded into memory at the
same time; this decreases the efficiency of the system.
It is better to have segmentation, which divides the process into segments. Each segment
contains the same type of functions: for example, the main function can be included in one
segment and the library functions in another segment.
Translation of Logical address into physical address by segment table
The CPU generates a logical address which contains two parts: a segment number and an offset.
The segment number is used to index the segment table. The limit of the respective segment is
compared with the offset: if the offset is less than the limit, the address is valid; otherwise, an
error is raised because the address is invalid.
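A minimal sketch of this base/limit check (the segment table values are hypothetical):

#include <stdio.h>

/* Hypothetical segment table: each entry holds a base and a limit. */
struct segment { unsigned base, limit; };

struct segment table[] = {
    {1000, 400},   /* segment 0 */
    {5000, 100},   /* segment 1 */
};

/* Translate (segment, offset) to a physical address, or -1 if invalid. */
long translate(unsigned seg, unsigned offset) {
    if (offset >= table[seg].limit)
        return -1;                       /* offset beyond limit: trap */
    return (long)table[seg].base + offset;
}

int main(void) {
    printf("%ld\n", translate(0, 53));   /* 1053: valid */
    printf("%ld\n", translate(1, 200));  /* -1: invalid */
    return 0;
}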
Tutorial No – 5

Q.1) Explain page replacement policies in detail.


Answer – In an operating system that uses paging for memory management, a page replacement
algorithm is needed to decide which page should be replaced when a new page comes in.
Page Fault – A page fault happens when a running program accesses a memory page that is
mapped into the virtual address space but not loaded in physical memory.
Since actual physical memory is much smaller than virtual memory, page faults happen. In case
of a page fault, the operating system might have to replace one of the existing pages with the
newly needed page. Different page replacement algorithms suggest different ways to decide
which page to replace. The target for all algorithms is to reduce the number of page faults.
Page Replacement Algorithms:
First In First Out (FIFO) –
This is the simplest page replacement algorithm. In this algorithm, the operating system keeps
track of all pages in memory in a queue, with the oldest page at the front. When a page needs to
be replaced, the page at the front of the queue is selected for removal.
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the
number of page faults.
Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3
page faults.
When 3 comes, it is already in memory, so —> 0 page faults.
Then 5 comes; it is not available in memory, so it replaces the oldest page, 1 —> 1 page fault.
6 comes; it is also not available in memory, so it replaces the oldest page, 3 —> 1 page fault.
Finally, when 3 comes it is not available, so it replaces 0 —> 1 page fault.
That gives 6 page faults in total.
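The same count can be checked mechanically. A minimal sketch that treats the frames as a
circular queue and evicts the oldest resident page on each fault:

#include <stdio.h>

int main(void) {
    int ref[] = {1, 3, 0, 3, 5, 6, 3};   /* the reference string above */
    int frames[3] = {-1, -1, -1};
    int oldest = 0, faults = 0;

    for (int i = 0; i < 7; i++) {
        int hit = 0;
        for (int j = 0; j < 3; j++)
            if (frames[j] == ref[i]) { hit = 1; break; }
        if (!hit) {
            frames[oldest] = ref[i];     /* evict the oldest page */
            oldest = (oldest + 1) % 3;
            faults++;
        }
    }
    printf("page faults = %d\n", faults); /* prints 6 */
    return 0;
}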

Q.2) Write short note on:


a) Demand Paging:
According to the concept of virtual memory, in order to execute a process, only a part of the
process needs to be present in main memory, which means that only a few of its pages will be
present in main memory at any time.
However, deciding which pages need to be kept in main memory and which need to be kept in
secondary memory is difficult, because we cannot say in advance that a process will require a
particular page at a particular time.
Therefore, to overcome this problem, a concept called demand paging is introduced. It suggests
keeping all pages in secondary memory until they are required; in other words, do not load any
page into main memory until it is required.
Whenever a page is referenced for the first time, it will be found in secondary memory and
brought into main memory. After that, it may or may not be present in main memory, depending
upon the page replacement algorithm, which is covered later in this tutorial.
What is a Page Fault?
If the referenced page is not present in main memory, there is a miss, and the concept is called a
page miss or page fault. The CPU has to access the missed page from secondary memory. If the
number of page faults is very high, the effective access time of the system becomes very high.

b) Belady's anomaly:
Belady's anomaly proves that it is possible to have more page faults when increasing the
number of page frames while using the first in first out (FIFO) page replacement algorithm. For
example, if we consider the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 and 3 slots, we get 9
total page faults, but if we increase the slots to 4, we get 10 page faults.
Optimal page replacement –
In this algorithm, the page replaced is the one which will not be used for the longest duration of
time in the future.
Example 2: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames.
Find the number of page faults.
Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4
page faults.
0 is already there, so —> 0 page faults.
When 3 comes it takes the place of 7, because 7 is not used for the longest duration of time in
the future —> 1 page fault.
0 is already there, so —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the reference string —> 0 page faults, because the pages are already available in
memory.
Optimal page replacement is perfect, but not possible in practice, as the operating system
cannot know future requests. The use of optimal page replacement is to set up a benchmark
against which other replacement algorithms can be analyzed.
c) Inverted paging:
Inverted Page Table –
An alternative approach is to use the inverted page table structure, which consists of one page
table entry for every frame of main memory. The number of entries in the inverted page table is
thus reduced to the number of frames in physical memory, and a single page table is used to
represent the paging information of all the processes.
Through the inverted page table, the overhead of storing an individual page table for every
process is eliminated, and only a fixed portion of memory is required to store the paging
information of all processes together. This technique is called inverted paging because the
indexing is done with respect to the frame number instead of the logical page number. Each
entry in the page table contains the following fields.
- Page number – It specifies the page number range of the logical address.
- Process id – An inverted page table contains the address space information of all the
processes in execution. Since two different processes can have similar sets of virtual
addresses, it becomes necessary in an inverted page table to store the process id of each
process to identify its address space uniquely. This is done by using the combination of
process id and page number, so the process id acts as an address space identifier and
ensures that a virtual page of a particular process is mapped correctly to the corresponding
physical frame.
- Control bits – These bits are used to store extra paging-related information. These
include the valid bit, dirty bit, reference bits, and protection and locking information bits.
- Chained pointer – It may sometimes happen that two or more processes share a part
of main memory. In this case, two or more logical pages map to the same page table entry,
and a chaining pointer is used to map the details of these logical pages to the root page
table.
Tutorial No – 6

Q.1) What is a file? Explain various structures of files in detail.


Answer – The file system is the part of the operating system which is responsible for file
management. It provides a mechanism to store data and to access file contents, including data
and programs. Some operating systems treat everything as a file; Ubuntu, for example, does this.
The file system takes care of the following issues.
File Structure
We have seen various data structures in which the file can be stored. The task of the file system
is to maintain an optimal file structure.
Recovering Free Space
Whenever a file gets deleted from the hard disk, free space is created on the disk. There can be
many such spaces, which need to be recovered in order to reallocate them to other files.
Disk Space Assignment to Files
The major concern about a file is deciding where to store it on the hard disk. There are various
disk scheduling algorithms, which are covered later in this tutorial.
Tracking Data Location
A file may or may not be stored within only one block; it can be stored in non-contiguous
blocks on the disk. We need to keep track of all the blocks on which parts of the file reside.
The file system provides efficient access to the disk by allowing data to be stored, located and
retrieved in a convenient way. A file system must be able to store a file, locate it, and retrieve it.
Most operating systems use a layering approach for every task, including file systems. Every
layer of the file system is responsible for some activities. The description below elaborates how
the file system is divided into different layers, and also the functionality of each layer.
File System Structure
When an application program asks for a file, the first request is directed to the logical file
system. The logical file system contains the metadata of the file and the directory structure. If
the application program doesn't have the required permissions on the file, this layer throws an
error. The logical file system also verifies the path to the file.
Generally, files are divided into various logical blocks. Files are stored on the hard disk and
retrieved from the hard disk, and the hard disk is divided into various tracks and sectors.
Therefore, in order to store and retrieve the files, the logical blocks need to be mapped to
physical blocks. This mapping is done by the file organization module, which is also responsible
for free space management.
Once the file organization module has decided which physical block the application program
needs, it passes this information to the basic file system. The basic file system is responsible for
issuing the commands to the I/O control in order to fetch those blocks.
The I/O control contains the code by means of which it can access the hard disk. This code is
known as the device driver. The I/O control is also responsible for handling interrupts.

Q.2) Explain the following allocation methods in detail with advantages and


disadvantages.
a) Contiguous allocation:
If the blocks are allocated to a file in such a way that all the logical blocks of the file get
contiguous physical blocks on the hard disk, then such an allocation scheme is known as
contiguous allocation.
For example, consider a directory with three files, where the starting block and the length of
each file are recorded in a table; contiguous blocks are assigned to each file as per its need.
Advantages
It is simple to implement.
It gives excellent read performance.
It supports random access into files.
Disadvantages
The disk will become fragmented.
It may be difficult to have a file grow.

b) Linked allocation:
Linked List Allocation
Linked list allocation solves all problems of contiguous allocation. In linked list allocation, each
file is considered as a linked list of disk blocks. However, the disk blocks allocated to a
particular file need not be contiguous on the disk. Each disk block allocated to a file contains a
pointer which points to the next disk block allocated to the same file.
Advantages
There is no external fragmentation with linked allocation.
Any free block can be utilized in order to satisfy a file block request.
A file can continue to grow as long as free blocks are available.
The directory entry only contains the starting block address.
Disadvantages
Random access is not provided.
Pointers require some space in the disk blocks.
If any pointer in the linked list is broken, the file gets corrupted.
Each block must be traversed in sequence to reach a later block.

c) Indexed allocation:
Limitations in an existing technique cause the evolution of a new one. Till now, we have seen
various allocation methods; each of them carries several advantages and disadvantages.
The file allocation table (FAT) tries to solve as many problems as possible, but leads to a
drawback: the more blocks there are, the larger the size of the FAT. Therefore, more space must
be allocated to the file allocation table, and since the file allocation table needs to be cached, it
is impossible to keep that much space in the cache. Here we need a new technique that can solve
such problems.
Indexed Allocation Scheme
Instead of maintaining a file allocation table of all the disk pointers, the indexed allocation
scheme stores all the disk pointers for a file in one block, called the index block. The index
block doesn't hold the file data, but it holds the pointers to all the disk blocks allocated to that
particular file. The directory entry contains only the index block address.
Advantages
Supports direct access.
A bad data block causes the loss of only that block.
Disadvantages
A bad index block could cause the loss of the entire file.
The size of a file depends upon the number of pointers an index block can hold.
Having an index block for a small file is a total wastage.
Tutorial No – 7
Q.1) Explain file system structure in detail.
Answer - File System Structure

The file system provides efficient access to the disk by allowing data to be stored,
located and retrieved in a convenient way. A file system must be able to store
a file, locate it, and retrieve it.

Most operating systems use a layering approach for every task, including
file systems. Every layer of the file system is responsible for some activities.

The description below elaborates how the file system is divided into different
layers, and also the functionality of each layer.

o When an application program asks for a file, the first request is
directed to the logical file system. The logical file system contains
the metadata of the file and the directory structure. If the application
program doesn't have the required permissions on the file, this
layer throws an error. The logical file system also verifies the path
to the file.
o Generally, files are divided into various logical blocks. Files are
stored on the hard disk and retrieved from the hard disk, and the
hard disk is divided into various tracks and sectors. Therefore, in
order to store and retrieve the files, the logical blocks need to be
mapped to physical blocks. This mapping is done by the file organization
module, which is also responsible for free space management.
o Once the file organization module has decided which physical block the
application program needs, it passes this information to the basic file
system. The basic file system is responsible for issuing the
commands to the I/O control in order to fetch those blocks.
o The I/O control contains the code by means of which it can access the
hard disk. This code is known as the device driver. The I/O control is
also responsible for handling interrupts.
Q.2) Explain in detail free space management techniques with diagram.
Answer - A file system is responsible for allocating free blocks to files, so it has to keep track
of all the free blocks present on the disk. There are mainly two approaches by which the free
blocks on the disk are managed.
1. Bit Vector
In this approach, the free space list is implemented as a bitmap vector, containing one bit for
each block.
If the block is free, its bit is 1; otherwise it is 0. Initially all the blocks are free, so each bit in
the bitmap vector is 1.
As space allocation proceeds, the file system starts allocating blocks to files and setting the
respective bits to 0.
Advantages –
Simple to understand.
Finding the first free block is efficient. It requires scanning the words (a group of 8 bits) in the
bitmap for a non-zero word (a 0-valued word has all bits 0). The first free block is then found
by scanning for the first 1 bit in the non-zero word.
The block number can be calculated as:
(number of bits per word) × (number of 0-valued words) + (offset of the first 1 bit in the
non-zero word).

2. Linked List
It is another approach for free space management. This approach suggests linking together all the
free blocks and keeping a pointer in the cache which points to the first free block.
Therefore, all the free blocks on the disks will be linked together with a pointer. Whenever a
block gets allocated, its previous free block will be linked to its next free block.
For example, the free-space list head might point to block 5, which points to block 6, the next
free block, and so on; the last free block contains a null pointer indicating the end of the free
list. A drawback of this method is the I/O required to traverse the free space list.
Grouping –
This approach stores the addresses of free blocks in the first free block. The first free block
stores the addresses of some n free blocks. Of these n blocks, the first n-1 are actually free,
and the last block contains the addresses of the next n free blocks.
An advantage of this approach is that the addresses of a group of free disk blocks can be found
easily.
Counting –
This approach stores the address of the first free disk block and a number n of free contiguous
disk blocks that follow the first block.
Every entry in the list would contain:
Address of first free disk block
A number n
For example, if blocks 5, 6 and 7 are free and contiguous, the first entry of the free space list
would be: ([address of block 5], 2), because 2 contiguous free blocks follow block 5.

Tutorial No – 8
Q.1) Explain the following disk scheduling algorithms in detail.
FCFS scheduling algorithm
SSTF (shortest seek time first) algorithm
SCAN scheduling
C-SCAN scheduling
LOOK Scheduling
C-LOOK scheduling
Answer –
Each of these algorithms carries its own advantages and disadvantages, and the limitations of
one algorithm lead to the evolution of the next.

FCFS Scheduling Algorithm


It is the simplest disk scheduling algorithm. It services the I/O requests in the order in which
they arrive. There is no starvation in this algorithm; every request is serviced.
Disadvantages
The scheme does not optimize the seek time.
Requests may come from different processes, so there is the possibility of inappropriate
movement of the head.
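A minimal sketch of how FCFS seek cost is measured (the request queue and initial head
position are hypothetical): total head movement is the sum of the distances between
consecutively serviced cylinders.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int requests[] = {98, 183, 37, 122, 14, 124, 65, 67};
    int head = 53, total = 0;
    for (int i = 0; i < 8; i++) {
        total += abs(requests[i] - head);  /* seek distance for this request */
        head = requests[i];
    }
    printf("total head movement = %d cylinders\n", total);  /* 640 */
    return 0;
}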
SSTF Scheduling Algorithm
Shortest seek time first (SSTF) algorithm selects the disk I/O request which requires the least
disk arm movement from its current position regardless of the direction. It reduces the total seek
time as compared to FCFS.
It allows the head to move to the closest track in the service queue.
Disadvantages
It may cause starvation for some requests.
Switching direction frequently slows the working of the algorithm.
It is not the most optimal algorithm.
SCAN and C-SCAN algorithm
Scan Algorithm
It is also called the elevator algorithm. In this algorithm, the disk arm moves in a particular
direction till the end, satisfying all the requests coming in its path, and then it turns back and
moves in the reverse direction, satisfying the requests coming in its path.
It works the way an elevator works: the elevator moves in one direction completely till the last
floor of that direction and then turns back.
C-SCAN algorithm
In the C-SCAN algorithm, the arm of the disk moves in a particular direction servicing requests
until it reaches the last cylinder, then it jumps to the last cylinder of the opposite direction
without servicing any request, and then starts moving in the original direction again, servicing
the remaining requests.
Look Scheduling
It is like the SCAN scheduling algorithm to some extent, except that in this scheduling
algorithm, the arm of the disk stops moving inwards (or outwards) when no more requests exist
in that direction. This algorithm tries to overcome the overhead of the SCAN algorithm, which
forces the disk arm to move in one direction till the end regardless of whether any request exists
in that direction or not.
C-LOOK Scheduling
The C-LOOK algorithm is similar to the C-SCAN algorithm to some extent. In this algorithm,
the arm of the disk moves outwards servicing requests until it reaches the highest request
cylinder, then it jumps to the lowest request cylinder without servicing any request, and then it
again starts moving outwards servicing the remaining requests.
It differs from the C-SCAN algorithm in that C-SCAN forces the disk arm to move till the last
cylinder regardless of whether any request is to be serviced on that cylinder or not.
Q.2) What is RAID? Explain different levels of RAID in details.
Answer - RAID, or “Redundant Arrays of Independent Disks” is a technique which makes use of
a combination of multiple disks instead of using a single disk for increased performance, data
redundancy or both. The term was coined by David Patterson, Garth A. Gibson, and Randy Katz
at the University of California, Berkeley in 1987.
Why data redundancy?
Data redundancy, although it takes up extra space, adds to disk reliability. This means that in case of a disk failure, if the same data is also stored on another disk, we can retrieve the data and carry on with the operation. On the other hand, if the data is simply spread across multiple disks without the RAID technique, the loss of a single disk can affect the entire data set.
Key evaluation points for a RAID System
Reliability: How many disk faults can the system tolerate?
Availability: What fraction of the total session time is a system in uptime mode, i.e. how
available is the system for actual use?
Performance: How good is the response time? How high is the throughput (rate of processing
work)? Note that performance contains a lot of parameters and not just the two.
Capacity: Given a set of N disks each with B blocks, how much useful capacity is available to
the user?
RAID is transparent to the host system: it appears as a single big disk presenting itself as a linear array of blocks. This allows older technologies to be replaced by RAID without making too many changes to existing code.
RAID-0 (Striping)
 Blocks are “striped” across disks. In the figure, blocks “0,1,2,3” form a stripe.
 Instead of placing just one block into a disk at a time, we can work with two (or more) blocks placed into a disk before moving on to the next one.
Evaluation:
 Reliability: 0
There is no duplication of data. Hence, a block once lost cannot be recovered.
 Capacity: N*B
The entire space is being used to store data. Since there is no duplication, N disks each
having B blocks are fully utilized.
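As a small illustration of the round-robin layout described above (assuming exactly one block per disk per stripe, which is only one possible configuration), the disk and offset of a logical block can be computed like this:

    def raid0_locate(block, n_disks):
        """Map a logical block number to (disk index, block offset on that disk)."""
        return block % n_disks, block // n_disks

    # With 4 disks, blocks 0,1,2,3 form stripe 0 across disks 0..3:
    for b in range(8):
        print(b, raid0_locate(b, 4))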
RAID-1 (Mirroring)
 More than one copy of each block is stored on a separate disk, so every block has two (or more) copies lying on different disks.
The above figure shows a RAID-1 system with mirroring level 2.
 RAID-0 was unable to tolerate any disk failure; RAID-1, by contrast, achieves reliability through duplication.
Evaluation:
Assume a RAID system with mirroring level 2.
 Reliability: 1 to N/2
1 disk failure can be handled for certain, because blocks of that disk would have duplicates
on some other disk. If we are lucky enough and disks 0 and 2 fail, then again this can be
handled as the blocks of these disks have duplicates on disks 1 and 3. So, in the best case,
N/2 disk failures can be handled.
 Capacity: N*B/2
Only half the space is being used to store data. The other half is just a mirror to the already
stored data.
RAID-4 (Block-Level Striping with Dedicated Parity)
 Instead of duplicating data, this adopts a parity-based approach.
In the figure, we can observe one column (disk) dedicated to parity.
 Parity is calculated using a simple XOR function. If the data bits are 0,0,0,1 the parity bit is XOR(0,0,0,1) = 1. If the data bits are 0,1,1,0 the parity bit is XOR(0,1,1,0) = 0. Put simply, an even number of ones results in parity 0, and an odd number of ones results in parity 1.
Assume that in the above figure, C3 is lost due to a disk failure. We can then recompute the data bit stored in C3 by XOR-ing the values of all the other columns with the parity bit. This allows us to recover lost data.
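The recovery argument can be checked with a few lines of Python; the “blocks” here are toy 4-bit integers chosen purely for illustration.

    def parity(blocks):
        """XOR all blocks together to produce the parity block."""
        result = 0
        for b in blocks:
            result ^= b
        return result

    data = [0b1010, 0b0110, 0b1111]   # three data blocks on three disks
    p = parity(data)                  # stored on the dedicated parity disk

    # Simulate losing the middle disk: XOR-ing the survivors with the
    # parity reproduces the lost block, exactly as described for C3 above.
    recovered = parity([data[0], data[2], p])
    assert recovered == data[1]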
Evaluation:
 Reliability: 1
RAID-4 allows recovery of at most 1 disk failure (because of the way parity works). If
more than one disk fails, there is no way to recover the data.
 Capacity: (N-1)*B
One disk in the system is reserved for storing the parity. Hence, (N-1) disks are made
available for data storage, each disk having B blocks.
RAID-5 (Block-Level Striping with Distributed Parity)
 This is a slight modification of the RAID-4 system where the only difference is that the
parity rotates among the drives.
In the figure, we can notice how the parity bit “rotates”.
 This was introduced to make the random write performance better.
Evaluation:
 Reliability: 1
RAID-5 allows recovery of at most 1 disk failure (because of the way parity works). If
more than one disk fails, there is no way to recover the data. This is identical to RAID-4.
 Capacity: (N-1)*B
Overall, space equivalent to one disk is utilized in storing the parity. Hence, (N-1) disks are
made available for data storage, each disk having B blocks.
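Exact RAID-5 layouts vary between implementations, but one simple rotation scheme can be sketched as follows (the formula is illustrative, not a standard):

    def raid5_parity_disk(stripe, n_disks):
        """Parity on the last disk for stripe 0, then rotating backwards."""
        return (n_disks - 1 - stripe) % n_disks

    for stripe in range(5):
        print(stripe, raid5_parity_disk(stripe, 4))  # disks 3, 2, 1, 0, 3, ...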
What about the other RAID levels?
RAID-2 consists of bit-level striping using a Hamming-code parity, and RAID-3 consists of byte-level striping with a dedicated parity disk. These two are less commonly used.
RAID-6 is a more recent advancement featuring distributed double parity: block-level striping with two parity blocks per stripe instead of one, distributed across all the disks. There are also hybrid RAIDs, which nest more than one RAID level one after the other to fulfil specific requirements.
Tutorial No – 9
Q.1) Differentiate between windows and linux os.
Answer -
1. Linux is an open source operating system, while Windows is not an open source operating system.
2. Linux is free of cost, while Windows is costly.
3. Linux file names are case-sensitive, while Windows file names are case-insensitive.
4. Linux uses a monolithic kernel, while Windows uses a hybrid kernel built around a micro-kernel design.
5. Linux is more efficient in comparison with Windows.
6. In Linux, the forward slash is used for separating directories, while in Windows the backslash is used.
7. Linux provides more security than Windows.
8. Linux is widely used in hacking-oriented systems, while Windows does not provide much efficiency for hacking.
Q.2) Explain components of linux systems and features in detail.
Answer - Components of Linux System
The Linux operating system has primarily three components:
 Kernel − The kernel is the core part of Linux. It is responsible for all major activities of the operating system. It consists of various modules and interacts directly with the underlying hardware. The kernel provides the required abstraction to hide low-level hardware details from system or application programs.
 System Library − System libraries are special functions or programs through which application programs or system utilities access the kernel's features. These libraries implement most of the functionalities of the operating system and do not require the kernel module's code access rights.
 System Utility − System utility programs are responsible for doing specialized, individual-level tasks.
Kernel Mode vs User Mode

Kernel component code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require any context switch; hence it is very efficient and fast. The kernel runs each process and provides system services to processes, including protected access to hardware.
Support code that is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code. User programs and utilities use the system libraries to access kernel functions and get the system's low-level tasks done.
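As a rough illustration of this split, a user program never touches the hardware directly: it calls a library routine, which issues the trap that switches the CPU into kernel mode. The sketch below uses Python's os module, whose functions wrap the corresponding C library and system calls (the file path is a made-up example).

    import os

    # Each call below ends in a system call: the library executes a trap,
    # the CPU switches to kernel mode, the kernel does the privileged work,
    # and control returns to user mode with the result.
    print("pid:", os.getpid())                                # getpid()
    fd = os.open("/tmp/demo.txt", os.O_CREAT | os.O_WRONLY)   # open()
    os.write(fd, b"written via a kernel service\n")           # write()
    os.close(fd)                                              # close()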
Basic Features
Following are some of the important features of the Linux operating system.
 Portable − Portability means the software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on almost any kind of hardware platform.
 Open Source − Linux source code is freely available and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.
 Multi-User − Linux is a multi-user system, meaning multiple users can access system resources like memory, RAM, and application programs at the same time.
 Multiprogramming − Linux is a multiprogramming system, meaning multiple applications can run at the same time.
 Hierarchical File System − Linux provides a standard file structure in which system files and user files are arranged.
 Shell − Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used to perform various types of operations, call application programs, and so on.
 Security − Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.
Q.3) Write short note on:
a) Process management in linux :
A process means a program in execution. It generally takes an input, processes it, and gives the appropriate output. (See Introduction to Process Management for more details about a process.)
There are basically two types of processes.
Foreground processes: Such processes are also known as interactive processes. These are processes executed or initiated by the user or the programmer; they cannot be initiated by system services. Such processes take input from the user and return output. While a foreground process is running, we cannot directly initiate a new process from the same terminal.
Background processes: Such processes are also known as non-interactive processes. These are processes executed or initiated by the system itself or by users, though they can still be managed by users. Each has a unique PID (process ID) assigned to it, and we can initiate other processes from the same terminal from which they were initiated.
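The difference can be demonstrated with a short, hypothetical Python sketch (assuming a Unix-like system with the sleep command available): a foreground-style launch blocks the caller, while a background-style launch returns immediately and leaves a managed child with its own PID.

    import subprocess

    # Foreground-style: the caller blocks until the child finishes,
    # just as a shell waits for an interactive command.
    subprocess.run(["sleep", "2"])

    # Background-style: Popen returns immediately; the child keeps
    # running with its own PID while we continue with other work.
    child = subprocess.Popen(["sleep", "60"])
    print("background child PID:", child.pid)
    child.terminate()   # background processes can still be managed
    child.wait()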
b) Memory management in linux :
The Linux memory manager implements demand paging with a copy-on-write strategy, relying on the 386's paging support. A process acquires its page tables from its parent (during a fork()) with the entries marked as read-only or swapped. Then, if the process tries to write to that memory space and the page is a copy-on-write page, it is copied and the page is marked read-write. An exec() results in the reading in of a page or so from the executable; the process then faults in any other pages it needs.
Each process has a page directory, which can reference 1K page tables pointing to 1M pages of 4 KB each, that is, 4 GB of memory. A process' page directory is initialized during a fork by copy_page_tables(). The idle process has its page directory initialized during the initialization sequence.
Each user process has a local descriptor table that contains a code segment and a data-stack segment. These user segments extend from 0 to 3 GB (0xc0000000). In user space, linear addresses and logical addresses are identical.
On the 80386, linear addresses run from 0 GB to 4 GB. A linear address points to a particular memory location within this space. A linear address is not a physical address; it is a virtual address. A logical address consists of a selector and an offset: the selector points to a segment, and the offset tells how far into that segment the address is located.
The kernel code and data segments are privileged segments defined in the global descriptor table and extend from 3 GB to 4 GB. The swapper page directory (swapper_pg_dir) is set up so that logical addresses and physical addresses are identical in kernel space.
The space above 3 GB appears in a process' page directory as pointers to kernel page tables. This space is invisible to the process in user mode, but the mapping becomes relevant when privileged mode is entered, for example to handle a system call. Supervisor mode is entered within the context of the current process, so address translation occurs with respect to the process' page directory but using kernel segments. This is identically the mapping produced by using swapper_pg_dir and the kernel segments, as both page directories use the same page tables in this space. Only task[0] (the idle task, sometimes called the swapper task for historical reasons, even though it has nothing to do with swapping in the Linux implementation) uses swapper_pg_dir directly.
In summary: a user process runs with segment_base = 0x00 and a page directory private to the process. When the user process makes a system call, segment_base = 0xc0000000 and the same user page directory is used. swapper_pg_dir contains a mapping for all physical pages from 0xc0000000 to 0xc0000000 + end_mem, so the first 768 entries in swapper_pg_dir are zeros, followed by four or more entries that point to kernel page tables. The user page directories have the same entries as swapper_pg_dir above entry 768; the first 768 entries map user space. The upshot is that whenever a linear address is above 0xc0000000, everything uses the same kernel page tables.
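A minimal sketch of the fork-then-copy-on-write behaviour described above (assuming a Unix system where Python's os.fork() is available):

    import os

    buf = bytearray(b"A" * 4096)   # roughly one page of data in the parent

    pid = os.fork()                # child starts with copy-on-write page tables
    if pid == 0:
        # The child's first write faults; the kernel copies the page and
        # only then marks the child's copy read-write.
        buf[0:1] = b"B"
        os._exit(0)
    else:
        os.waitpid(pid, 0)
        print(buf[:1])             # parent still sees b'A'; the page was copied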
Tutorial No – 10
1 Explain design principles of the Linux system in detail.
Answer –
As a result of the development of PC technology, the Linux kernel has become more complete in implementing UNIX functions. Speed and efficiency are important design goals, but lately the concentration of Linux development has focused more on a third design goal: standardization. The POSIX standard consists of a collection of specifications covering different aspects of operating system behavior. There are POSIX documents for ordinary operating system functions and for extensions such as process threads and real-time operation. Linux is designed to comply with the relevant POSIX documents; at least two Linux distributions have received official POSIX certification.
Because Linux provides a standard interface to programmers and users, it holds few surprises for anyone familiar with UNIX. However, the Linux programming interface follows UNIX SVR4 semantics rather than BSD behavior. A separate collection of libraries is available to implement BSD semantics in the places where the two behaviors differ significantly.
There are many other standards in the UNIX world, but full certification of Linux against them is sometimes slow, because certification is usually available only at a cost, and there is a price to pay for certifying an operating system's conformance or compatibility with most standards. Supporting a broad base of applications is important for any operating system, so implementing these standards remains a major goal of Linux development even where the implementation is not formally certified. In addition to the basic POSIX standard, Linux currently supports the POSIX thread extensions and a subset of the POSIX extensions for real-time process control.
2 Explain network structure and security in Linux.
Answer - Discussing the network structure in a Linux operating system gets a bit complicated.
By itself, Linux does not address networking; it is, after all, a server operating system intended to
run applications, not networks. OpenStack, however, does provide a networking service that’s
meant to be used with Linux.
OpenStack is a combination of open source software tools for building and managing virtualized
cloud computing services, providing services including compute, storage and identity
management. There’s also a networking component, Neutron, which enables all the other
OpenStack components to communicate with one another. Given that OpenStack was designed
to run on a Linux kernel, it could be said that Neutron is a networking service for Linux – but
only when used in an OpenStack cloud environment.
Neutron enables network virtualization in an OpenStack environment, providing software-defined network services through a series of plug-ins. It is intended to let organizations spin up network services on demand, including virtual LANs (VLANs) and virtual private networks (VPNs), as well as services such as firewalls, intrusion detection, and load balancing.
In practice, the networking capabilities of Neutron are somewhat limited, with its main drawback
being a lack of scalability. While companies may use Neutron in a lab environment, when it
comes to production they typically look for other options.
A number of companies have developed SDN and network virtualization software that is more
enterprise-ready. Pica8, for example, offers PICOS, an open network operating system built on a
Debian Linux kernel. PICOS is a white box NOS intended to run on white box network switches
and be used in a virtualized, SDN environment. But it provides the scalability required to extend
to hundreds or thousands of white box switches, making it a viable option for enterprise use.