
Components of Operating System:

The operating system has two main components: the shell and the kernel.

 Shell
The shell handles user interaction. It is the outermost layer of the OS and
manages the interaction between the user and the operating system by:
 Prompting the user to give input
 Interpreting the input for the operating system
 Handling the output from the operating system
The shell provides a way to communicate with the OS by taking input
either directly from the user or from a shell script. A shell script is a
sequence of system commands stored in a file.
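The shell's prompt-interpret-output cycle can be sketched as a toy read-eval loop in Python (illustrative only; `run_command` and the use of `subprocess` are stand-ins for how a real shell executes commands, with no pipes, globbing, or job control):

```python
# Toy read-eval loop illustrating what a shell does: prompt, interpret, run.
# This is a sketch, not a real shell.
import shlex
import subprocess

def run_command(line):
    """Interpret one line of input and hand it to the OS for execution."""
    argv = shlex.split(line)          # interpret: split into program + args
    if not argv:
        return ""
    result = subprocess.run(argv, capture_output=True, text=True)
    return result.stdout              # handle the output for the user

print(run_command("echo hello from the shell"))
```

A real shell adds prompting, built-in commands, and job control around the same basic loop.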
What is Kernel?
The kernel is the core component of a computer's operating system (OS).
All other components of the OS rely on the kernel to supply them with
essential services. It serves as the primary interface between the OS and
the hardware and aids in the control of devices, networking, file systems,
and process and memory management.
Functions of kernel
The kernel acts as an interface between applications and the hardware
that processes their data.

When an OS is loaded into memory:

1. The kernel is loaded first and remains in memory until the OS
shuts down.
2. After that, the kernel provides and manages the computer's
resources and allows other programs to run and use these
resources.
3. The kernel also sets up the memory address space for
applications, loads files containing application code into memory,
and sets up the execution stack for programs.

The kernel is responsible for performing the following tasks:

1. Input-output management
2. Memory management
3. Process management for application execution
4. Device management
5. System call control
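User programs request these kernel services through system calls. Python's `os` module exposes thin wrappers over several POSIX calls, which makes the boundary easy to see (the file name is arbitrary):

```python
# User programs request kernel services through system calls.
# Python's os module exposes thin wrappers over several POSIX calls.
import os

def demo_syscalls(path):
    """Create, write, inspect, and delete a file via system-call wrappers."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)  # open(2): I/O
    written = os.write(fd, b"hello\n")                   # write(2): I/O
    os.close(fd)                                         # close(2)
    size = os.stat(path).st_size                         # stat(2): file system
    os.remove(path)                                      # unlink(2): file mgmt
    return os.getpid() > 0, written, size                # getpid(2): processes

print(demo_syscalls("syscall_demo.txt"))   # (True, 6, 6)
```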
Earlier, all the basic system services like process and memory
management, interrupt handling, etc., were packaged into a single
module in kernel space. This type of kernel was called a
monolithic kernel. The problem with this approach was that the
whole kernel had to be recompiled for even a small change.

In the modern approach, functionality such as device management and
file management is placed in separate modules that can be dynamically
loaded and unloaded. This reduces the kernel's code size while
increasing its stability.
Types of Kernel
1. Monolithic Kernel: As the name suggests, a monolithic kernel is a
single large program that contains all operating system components.

 The entire kernel executes in the processor's privileged
mode and provides full access to the system's hardware.
 Monolithic kernels are faster than microkernels because
they do not have the overhead of message passing.
 This type of kernel is generally used in embedded systems
and real-time operating systems.

2. Microkernel: A microkernel is a kernel that contains only the
essential components required for the basic functioning of the
operating system.

 All other components are removed from the kernel and
implemented as user-space processes.
 The microkernel approach provides better modularity, flexibility,
and extensibility.
 It is also more stable and secure than a monolithic kernel.

3. Hybrid Kernel: A hybrid kernel combines the best features of both
monolithic kernels and microkernels.

 It contains a small microkernel that provides the essential
components for the basic functioning of the OS.
 The remaining components are implemented as user-space
processes or as loadable kernel modules.
 This approach provides the best of both worlds, namely the
performance of monolithic kernels and the modularity of
microkernels.
4. Exokernel: An exokernel provides only the bare minimum
components required for the basic functioning of the operating
system.

 All other components are removed from the kernel and
implemented as user-space processes.
 The exokernel approach provides the best possible performance
because there is almost no kernel overhead.
 However, it is also the most difficult to implement and is not
widely used.

Types of Operating Systems

There are several different types of operating systems:

 Batch OS
 Distributed OS
 Multitasking OS
 Network OS
 Real-Time OS
 Mobile OS
Batch OS
 The batch OS was the first operating system, used with
second-generation computers.
 This OS does not interact with the computer directly.
 Instead, an operator groups similar jobs together into a batch,
and these batches are executed one by one on a first-come,
first-served basis.

Advantages of Batch OS

 Multiple users can share batch systems.
 Managing large jobs becomes easy in batch systems.
 The idle time for a single batch is very small.

Disadvantages of Batch OS

 Batch systems are hard to debug.
 If a job fails, the other jobs have to wait an unknown amount of
time until the issue is resolved.
 Batch systems are sometimes costly.

Examples of Batch OS: payroll systems, bank statement processing, data
entry, etc.
Distributed OS
 In a distributed OS, various computers are connected through a
single communication channel.
 These independent computers have their own memory units and
CPUs and are known as loosely coupled systems.
 The system's processes can be of different sizes and can perform
different functions.
 The major benefit of this type of operating system is that a
user can access files that are not present on their own system but
on another connected system. In addition, remote access is
available to the systems connected to this network.

Advantages of Distributed OS

 Failure of one system will not affect the other systems because
all the computers are independent of each other.
 The load on the host system is reduced.
 The size of the network is easily scalable, as many computers can
be added to the network.
 As the workload and resources are shared, calculations are
performed at higher speed.

Disadvantages of Distributed OS

 The setup cost is high.
 Software used for such systems is highly complex.
 Failure of the main network will lead to the failure of the whole
system.

Examples of Distributed OS: LOCUS, etc.


Multitasking OS
 The multitasking OS is also known as the time-sharing operating
system, as each task is given some time so that all tasks work
efficiently.
 This system provides access to a large number of users, and each
user gets a share of CPU time as if they were the only one using
the system.
 The tasks performed may be given by a single user or by different
users.
 The time allotted to execute one task is called a quantum; as
soon as the time to execute one task is completed, the system
switches over to another task.
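The quantum-based switching described above can be simulated in a few lines (a toy model; `round_robin` and the work units are illustrative, not an OS API):

```python
# Toy simulation of time-sharing: each task runs for a fixed quantum of
# "work units" before the CPU switches to the next task in the queue.
from collections import deque

def round_robin(tasks, quantum):
    """tasks: dict of name -> remaining work units. Returns completion order."""
    queue = deque(tasks.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum                  # run for one quantum (or less)
        if remaining > 0:
            queue.append((name, remaining))   # preempted: back of the queue
        else:
            finished.append(name)             # task is done
    return finished

print(round_robin({"A": 3, "B": 5, "C": 2}, quantum=2))   # ['C', 'A', 'B']
```

Note how the shortest task (C) finishes first even though it arrived last in the queue: each task only ever waits one quantum per competitor.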

Advantages of Multitasking OS

 Each task gets equal time for execution.
 The idle time of the CPU is the lowest.
 There are very few chances of duplication of software.

Disadvantages of Multitasking OS

 Processes with higher priority cannot be executed first, as equal
priority is given to each process or task.
 The data of various users needs to be protected from
unauthorized access.
 Sometimes there are data communication problems.

Examples of Multitasking OS: UNIX, etc.


Network OS
 Network operating systems run on a server and manage all
networking functions.
 They allow sharing of files, applications, printers, security,
and other networking functions over a small network of
computers such as a LAN or any other private network.
 In a network OS, all users are aware of the configurations
of every other user within the network, which is why network
operating systems are also known as tightly coupled systems.

Advantages of Network OS

 New technologies and hardware can easily upgrade the systems.
 Security of the system is managed from the servers.
 Servers can be accessed remotely from different locations and
systems.
 The centralized servers are stable.

Disadvantages of Network OS

 Server costs are high.
 Regular updates and maintenance are required.
 Users are dependent on the central location for most
operations.

Examples of Network OS: Microsoft Windows Server 2008, Linux,
etc.
Real-Time OS
 Real-time operating systems serve real-time systems.
 These operating systems are useful when many events occur in
a short time or within certain deadlines, such as in real-time
simulations.

Types of real-time OS:

1. Hard real-time OS
 The hard real-time OS is used mainly for applications in which
even the slightest delay is unacceptable.
 The time constraints of such applications are very strict.
 Such systems are built for life-saving equipment like parachutes
and airbags, which must act immediately if an accident happens.

2. Soft real-time OS
 The soft real-time OS is used for applications where the time
constraints are less strict.
 In a soft real-time system, an important task is prioritized over
less important tasks, and this priority remains active until the
completion of the task.
 Furthermore, a time limit is always set for a specific job; short
delays in subsequent tasks are acceptable.
 Examples: virtual reality, reservation systems, etc.

Advantages of Real-Time OS

 It gets more output from all the resources, as there is
maximum utilization of the system.
 It provides the best management of memory allocation.
 These systems are designed to be largely error-free.
 These operating systems focus more on running applications
than on those waiting in the queue.
 Shifting from one task to another takes very little time.

Disadvantages of Real-Time OS

 System resources are extremely expensive and not always of
high quality.
 The algorithms used are very complex.
 Only a limited number of tasks can run at the same time.
 In such systems, we cannot freely set thread priorities, as these
systems cannot switch tasks easily.

Examples of Real-Time OS: medical imaging systems, robots, etc.

Mobile OS
 A mobile OS is an operating system for smartphones, tablets,
and PDAs.
 It is a platform on which other applications can run on mobile
devices.

Advantages of Mobile OS

 It provides ease of use to users.

Disadvantages of Mobile OS

 Some mobile operating systems give poor battery life.
 Some mobile operating systems are not user-friendly.

Examples of Mobile OS: Android, iOS, Symbian OS, and Windows
Mobile.
Single-tasking vs. multi-tasking operating systems:
Single-tasking operating systems allow only one program to run at a
time, while multi-tasking operating systems allow multiple programs
to run simultaneously.

Desktop vs. mobile operating systems:

Desktop operating systems, such as Windows and macOS, are
designed for use on desktop and laptop computers, while mobile
operating systems, such as iOS and Android, are designed for use on
smartphones and tablets.

Open-source vs. proprietary operating systems:

Open-source operating systems are developed by a community of
developers and are available for free, while proprietary operating
systems are developed by a single company and must be purchased.
32-bit OS versus 64-bit OS

Data and Storage: The 32-bit OS can store and manage less data than
the 64-bit OS, as its name would imply; it addresses a maximum of
4,294,967,296 bytes (4 GB) of RAM. In contrast, the 64-bit OS has a
larger data-handling capacity: a total of 2^64 memory addresses, i.e.,
about 18 quintillion bytes of RAM, can be addressed.

System Compatibility: A 32-bit processor system will run only a 32-bit
OS, not a 64-bit OS. A 64-bit processor system can run either a 32-bit
or a 64-bit OS.

Application Support: The 32-bit OS supports 32-bit applications with
no hassle. The 64-bit OS can generally run both 32-bit and 64-bit
applications.

Performance: The performance of a 32-bit OS is less efficient. The
64-bit OS offers higher performance than the 32-bit one.

Systems Available: 32-bit versions are available for Windows 7,
Windows XP, Windows Vista, Windows 8, and Linux. 64-bit versions are
available for Windows XP Professional, Windows 7, Windows 8,
Windows 10, Windows Vista, Linux, and Mac OS X.
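The capacity figures in the comparison follow directly from the address width: an n-bit address can name 2^n distinct bytes. A quick check in Python:

```python
# An n-bit address can name 2**n distinct bytes, which is where the
# 4 GB (32-bit) and ~18 quintillion byte (64-bit) limits come from.
GIB = 2**30                        # one gibibyte

addr_32 = 2**32                    # number of 32-bit addresses
addr_64 = 2**64                    # number of 64-bit addresses

print(addr_32, addr_32 // GIB)     # 4294967296 bytes = 4 GiB
print(f"{addr_64:.3e}")            # about 1.845e+19 (18 quintillion) bytes
```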
Popular Operating Systems
Some of the most popular operating systems in use today include:

 Windows: Windows is the most popular desktop operating
system, used by over 1 billion users worldwide. It has a wide
range of features and applications, including the Office suite,
gaming, and productivity tools.

 macOS: macOS is the desktop operating system used by Apple
Mac computers. It is known for its clean, user-friendly interface
and is popular among creative professionals.

 Linux: Linux is an open-source operating system that is available
for free and can be customized to meet specific needs. It is used
by developers, businesses, and individuals who prefer an
open-source, customizable operating system.

 iOS: iOS is the mobile operating system used by Apple iPhones
and iPads. It is known for its user-friendly interface, tight
integration with Apple's hardware and software, and robust
security features.

 Android: Android is the most popular mobile operating system,
used by over 2 billion users worldwide. It is known for its
open-source nature, customization options, and compatibility
with a wide range of devices.
Difference between Process and Thread
Process:

Processes are programs that are dispatched from the ready state and
scheduled on the CPU for execution. A process is represented by a
PCB (Process Control Block). A process can create other processes,
which are known as child processes. A process takes more time to
terminate, and it is isolated, meaning it does not share memory with
any other process.

A process can be in the following states: new, ready, running,
waiting, terminated, and suspended.

Thread: A thread is a segment of a process, meaning a process can
have multiple threads, and these threads are contained within the
process.

A thread has three states: running, ready, and blocked.

Difference between Process and Thread:

1. Process: means any program in execution. Thread: means a segment
of a process.

2. Process: takes more time to terminate. Thread: takes less time to
terminate.

3. Process: takes more time to create. Thread: takes less time to
create.

4. Process: takes more time for context switching. Thread: takes less
time for context switching.

5. Process: is less efficient in terms of communication. Thread: is
more efficient in terms of communication.

6. Process: multiple processes require multiprogramming. Thread: we
do not need multiple programs in action for multiple threads, because
a single process consists of multiple threads.

7. Process: is isolated. Thread: threads share memory.

8. Process: is called a heavyweight process. Thread: is lightweight, as
each thread in a process shares code, data, and resources.

9. Process: switching uses an interface to the operating system.
Thread: switching does not require a call to the operating system or
an interrupt to the kernel.

10. Process: if one process is blocked, it does not affect the execution
of other processes. Thread: if a user-level thread is blocked, all other
user-level threads of its process are blocked.

11. Process: has its own Process Control Block, stack, and address
space. Thread: has its parent's PCB, its own Thread Control Block and
stack, and a shared address space.

12. Process: changes to the parent process do not affect child
processes. Thread: since all threads of the same process share the
address space and other resources, changes to the main thread may
affect the behavior of the other threads of the process.

13. Process: a system call is involved in its creation. Thread: no
system call is involved; it is created using APIs.

14. Process: processes do not share data with each other. Thread:
threads share data with each other.
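The contrast in items 7 and 14 (processes are isolated; threads share memory) can be observed directly. A Unix-only sketch (`os.fork()` is not available on Windows; the `counter` dict and `increment` helper are illustrative):

```python
# Threads share their process's memory; processes are isolated.
# Unix-only sketch: os.fork() creates the child process.
import os
import threading

counter = {"value": 0}

def increment():
    counter["value"] += 1

# A thread runs inside the same process and mutates the shared dict.
t = threading.Thread(target=increment)
t.start()
t.join()
print(counter["value"])    # 1: the thread's change is visible to us

# A forked child gets its own copy of memory; the parent's dict is untouched.
pid = os.fork()
if pid == 0:               # child process
    increment()            # mutates only the child's private copy
    os._exit(0)
os.waitpid(pid, 0)         # parent waits for the child
print(counter["value"])    # still 1: isolation between processes
```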

Context Switch in Operating System

An operating system is a program loaded into a computer that manages
all the other application programs running on it. In other words, the
OS is an interface between the user and the computer hardware.
Context Switch

Context switching in an operating system involves saving the context,
or state, of a running process so that it can be restored later, and
then loading the context of another process and running it.

Context switching refers to the method the system uses to switch the
CPU from one process to another, allowing the CPUs present in the
system to be shared among processes.

Example – Suppose there are N processes in the OS, each stored in a
Process Control Block (PCB). A process runs on the CPU to do its job.
While that process is running, other processes with higher priority
queue up to use the CPU to complete their own jobs.

Switching the CPU to another process requires performing a state save
of the current process and a state restore of a different process. This
task is known as a context switch. When a context switch occurs, the
kernel saves the context of the old process in its PCB and loads the
saved context of the new process scheduled to run. Context-switch
time is pure overhead, because the system does no useful work while
switching. Switching speed varies from machine to machine,
depending on the memory speed, the number of registers that must
be copied, and the existence of special instructions (such as a single
instruction to load or store all registers). A typical speed is a few
milliseconds. Context-switch times are highly dependent on hardware
support. For instance, some processors (such as the Sun UltraSPARC)
provide multiple sets of registers. A context switch here simply
requires changing the pointer to the current register set. Of course, if
there are more active processes than there are register sets, the
system resorts to copying register data to and from memory, as
before. Also, the more complex the operating system, the greater the
amount of work that must be done during a context switch.
The Need for Context Switching

Context switching enables all processes to share a single CPU to finish
their execution while the status of each of the system's tasks is
stored. When a process is reloaded into the system, its execution
resumes from the same point at which it was interrupted.

The operating system's need for context switching is explained by the
reasons listed below.

1. One process does not switch directly to another within the
system. Context switching makes it easier for the operating
system to use the CPU's resources to carry out its tasks and to
store a process's context while switching between multiple
processes.

2. Context switching enables all processes to share a single CPU to
finish their execution and store the status of the system's tasks.
When a process is reloaded, its execution resumes from the point
at which it was interrupted.

3. Context switching allows a single CPU to handle multiple process
requests concurrently without the need for any additional
processors.

Context Switching Triggers

The three categories of context-switching triggers are as follows.

1. Interrupts

2. Multitasking

3. User/Kernel switch

Interrupts: When the CPU requests that data be read from a disk, for
example, and an interrupt occurs, context switching transfers control
to the handler that can service the interrupt quickly.

Multitasking: The ability to switch a process off the CPU so that
another process can run is context switching. When a process is
switched out, its previous state is retained so that it can later
continue running from the same point.

Kernel/User switch: Operating systems use this trigger when they need
to switch between user mode and kernel mode.

Process Control Block

The Process Control Block (PCB) is also known as a Task Control Block;
it represents a process in the operating system. A PCB is a data
structure used by the computer to store all information about a
process, and it is also called a process descriptor. When a process is
created (started or installed), the operating system creates a
corresponding PCB.

State Diagram of Context Switching


Working of Process Context Switching

Context switching between two processes is driven by priority within
the ready queue of process control blocks. It involves the following
steps.

 The state of the current process must be saved for rescheduling.
 The process state contains records, credentials, and
operating-system-specific information stored in the PCB.
 The PCB can be stored in kernel memory or in a custom OS file.
 A handle is added to the PCB so the system knows the process is
ready to run.
 The operating system pauses the execution of the current process
and selects a process from the waiting list by loading its PCB.
 The PCB's program counter is loaded, and execution continues in
the selected process.
 Process and thread priority values can affect which process is
selected from the queue, which can be important.
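The save/restore steps above can be mimicked with a toy model in which each "PCB" is a dict holding the saved context (all names here, `context_switch`, `pcb_a`, and so on, are illustrative, not a real kernel API):

```python
# Toy model of a context switch: each "PCB" is a dict holding a saved
# program counter and register file. Switching = save old, load new.

def context_switch(current_pcb, cpu, next_pcb):
    """Save the CPU state into current_pcb, then restore next_pcb's state."""
    current_pcb["pc"] = cpu["pc"]                  # state save of old process
    current_pcb["registers"] = dict(cpu["registers"])
    cpu["pc"] = next_pcb["pc"]                     # state restore of new one
    cpu["registers"] = dict(next_pcb["registers"])

cpu = {"pc": 104, "registers": {"r0": 7}}
pcb_a = {"pc": 0, "registers": {}}                 # currently running process
pcb_b = {"pc": 532, "registers": {"r0": 99}}       # process selected to run

context_switch(pcb_a, cpu, pcb_b)
print(cpu["pc"], pcb_a["pc"])                      # 532 104
```

Everything spent inside `context_switch` is pure overhead, which is why the section stresses that no useful work happens during the switch.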

Process Schedulers in Operating System


Process scheduling is the activity of the process manager that handles
the removal of the running process from the CPU and the selection of
another process on the basis of a particular strategy.

Process scheduling is an essential part of a multiprogramming
operating system. Such operating systems allow more than one process
to be loaded into executable memory at a time, and the loaded
processes share the CPU using time multiplexing.
Categories in Scheduling

Scheduling falls into one of two categories:

 Non-preemptive: In this case, a process's resources cannot be
taken away before the process has finished running. Resources are
switched only when a running process finishes and transitions to
a waiting state.
 Preemptive: In this case, the OS assigns resources to a process
for a predetermined period of time. The process switches from the
running state to the ready state, or from the waiting state to
the ready state, during resource allocation. This switching
happens because the CPU may give other processes priority and
substitute the currently active process with a higher-priority
one.

There are three types of process schedulers.

Long Term or Job Scheduler

It brings new processes to the 'Ready' state. It controls the degree
of multiprogramming, i.e., the number of processes present in the
ready state at any point in time. It is important that the long-term
scheduler makes a careful selection of both I/O-bound and CPU-bound
processes: I/O-bound tasks spend much of their time on input and
output operations, while CPU-bound processes spend their time on the
CPU. The job scheduler increases efficiency by maintaining a balance
between the two. It operates at a high level and is typically used in
batch-processing systems.

Short-Term or CPU Scheduler

It is responsible for selecting one process from the ready state and
scheduling it into the running state. (Note: the short-term scheduler
only selects the process to schedule; it does not load the process
into the running state.) This is where all the scheduling algorithms
are used. The CPU scheduler is responsible for ensuring that there is
no starvation due to processes with long burst times. The dispatcher
is responsible for loading the process selected by the short-term
scheduler onto the CPU (ready to running state); context switching is
done by the dispatcher only. A dispatcher does the following:

1. Switching context.

2. Switching to user mode.

3. Jumping to the proper location in the newly loaded program.

Medium-Term Scheduler

It is responsible for suspending and resuming processes. It mainly
does swapping (moving processes from main memory to disk and vice
versa). Swapping may be necessary to improve the process mix, or
because a change in memory requirements has overcommitted available
memory, requiring memory to be freed up. It is helpful in maintaining
a balance between I/O-bound and CPU-bound processes. It reduces the
degree of multiprogramming.

Some Other Schedulers

 I/O schedulers: I/O schedulers manage the execution of I/O
operations such as reading from and writing to disks or networks.
They can use various algorithms to determine the order in which
I/O operations are executed, such as FCFS (First-Come,
First-Served) or RR (Round Robin).
 Real-time schedulers: In real-time systems, real-time schedulers
ensure that critical tasks are completed within a specified time
frame. They can prioritize and schedule tasks using algorithms
such as EDF (Earliest Deadline First) or RM (Rate Monotonic).
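FCFS, named above, is easy to illustrate: requests are served strictly in arrival order. A sketch computing per-job waiting times, assuming all jobs arrive at time 0 (the 24/3/3 burst times are a classic teaching example, not data from this document):

```python
# FCFS (First-Come, First-Served): jobs run strictly in arrival order,
# so each job waits for the total burst time of everything before it.

def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)      # this job waited for all earlier jobs
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])
print(waits, sum(waits) / len(waits))   # [0, 24, 27] average 17.0
```

Reordering the same jobs shortest-first would cut the average wait sharply, which is why FCFS alone can perform poorly when a long job arrives early.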
Comparison among Schedulers

Long-Term Scheduler:
 It is a job scheduler.
 Its speed is generally lower than that of the short-term
scheduler.
 It controls the degree of multiprogramming.
 It is barely present or nonexistent in a time-sharing system.
 It can re-enter a process into memory, allowing execution to
continue.

Short-Term Scheduler:
 It is a CPU scheduler.
 Its speed is the fastest of the three.
 It gives less control over how much multiprogramming is done.
 It plays a minimal role in a time-sharing system.
 It selects those processes which are ready to execute.

Medium-Term Scheduler:
 It is a process-swapping scheduler.
 Its speed lies between those of the short-term and long-term
schedulers.
 It reduces the degree of multiprogramming.
 It is a component of time-sharing systems.
 It can re-introduce a process into memory, and execution can be
continued.
Two-State Process Model

The terms "running" and "not running" describe the two-state process
model.

1. Running: A newly created process joins the system in the running
state.

2. Not running: Processes that are not currently running are kept in
a queue, awaiting execution. Each entry in the queue is a pointer to
a specific process, and the queue is implemented as a linked list.
This is how the dispatcher is used: when a running process is
interrupted, it is moved to the back of the waiting queue; if it has
completed or failed, it is discarded. In either case, the dispatcher
then chooses the next process to run from the queue.
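The not-running queue and the dispatcher described above can be sketched as follows (the `dispatch` helper and process names are illustrative):

```python
# Sketch of the two-state model: a FIFO queue of not-running processes
# and a dispatcher that picks the next one to run.
from collections import deque

def dispatch(queue, running, finished):
    """Re-queue `running` if it was interrupted, then pick the next process."""
    if running is not None and not finished:
        queue.append(running)              # interrupted: back of the queue
    return queue.popleft() if queue else None

not_running = deque(["P1", "P2", "P3"])
current = dispatch(not_running, None, finished=False)
print(current)                             # P1 starts running
current = dispatch(not_running, current, finished=False)
print(current)                             # P2 runs; P1 is re-queued
```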

Inter Process Communication (IPC)


A process can be of two types:

 Independent process.

 Co-operating process.
An independent process is not affected by the execution of other
processes, while a cooperating process can be affected by other
executing processes. Although one might think that processes running
independently execute very efficiently, in reality there are many
situations where cooperation can increase computational speed,
convenience, and modularity. Inter-process communication (IPC) is a
mechanism that allows processes to communicate with each other and
synchronize their actions. The communication between these processes
can be seen as a method of cooperation between them. Processes can
communicate with each other through both:

1. Shared Memory

2. Message passing

Figure 1 below shows the basic structure of communication between
processes via the shared memory method and via the message passing
method.
An operating system can implement both methods of
communication. First, we will discuss the shared memory methods of
communication and then message passing. Communication between
processes using shared memory requires processes to share some
variable, and it completely depends on how the programmer will
implement it. One way of communication using shared memory can
be imagined like this: Suppose process1 and process2 are executing
simultaneously, and they share some resources or use some
information from another process. Process1 generates information
about certain computations or resources being used and keeps it as a
record in shared memory. When process2 needs to use the shared
information, it will check in the record stored in shared memory and
take note of the information generated by process1 and act
accordingly. Processes can use shared memory for extracting
information as a record from another process as well as for delivering
any specific information to other processes.
Let’s discuss an example of communication between processes using
the shared memory method.

i) Shared Memory Method

Ex: Producer-Consumer problem

There are two processes: the Producer and the Consumer. The producer
produces some items, and the consumer consumes them. The two processes
share a common space or memory location, known as a buffer, where the
items produced by the producer are stored and from which the consumer
takes items as needed.

There are two versions of this problem. The first is the unbounded
buffer problem, in which the producer can keep producing items with no
limit on the size of the buffer. The second is the bounded buffer
problem, in which the producer can produce up to a certain number of
items before it starts waiting for the consumer to consume them.

We will discuss the bounded buffer problem. First, the producer and
the consumer share some common memory; then the producer starts
producing items. If the number of produced items equals the size of
the buffer, the producer waits for the consumer to consume some.
Similarly, the consumer first checks for the availability of an item.
If no item is available, the consumer waits for the producer to
produce one. If items are available, the consumer consumes them.
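A minimal sketch of the bounded-buffer version, using Python threads to stand in for the producer and consumer processes and a condition variable to guard the shared buffer (`BUFFER_SIZE` and the helper names are illustrative):

```python
# Bounded-buffer producer/consumer: the producer waits when the buffer
# is full, the consumer waits when it is empty.
import threading
from collections import deque

BUFFER_SIZE = 3
buffer = deque()                   # the shared bounded buffer
cond = threading.Condition()       # guards every access to the buffer
consumed = []

def producer(items):
    for item in items:
        with cond:
            while len(buffer) == BUFFER_SIZE:   # buffer full: wait
                cond.wait()
            buffer.append(item)
            cond.notify_all()

def consumer(count):
    for _ in range(count):
        with cond:
            while not buffer:                   # buffer empty: wait
                cond.wait()
            consumed.append(buffer.popleft())
            cond.notify_all()

p = threading.Thread(target=producer, args=(range(5),))
c = threading.Thread(target=consumer, args=(5,))
p.start(); c.start()
p.join(); c.join()
print(consumed)                                 # [0, 1, 2, 3, 4]
```

The `while` loops around `cond.wait()` are essential: a woken thread must re-check the buffer state before proceeding.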
ii) Message Passing Method

Now we will discuss communication between processes via message
passing. In this method, processes communicate with each other
without using any kind of shared memory. If two processes p1 and p2
want to communicate with each other, they proceed as follows:

 Establish a communication link (if a link already exists, there is
no need to establish it again).
 Start exchanging messages using basic primitives.

We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)

Messages can be of fixed or variable size. Fixed-size messages are
easy for the OS designer but complicated for the programmer, while
variable-size messages are easy for the programmer but complicated
for the OS designer. A standard message has two parts: a header and a
body.
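The send/receive primitives can be sketched with an ordinary OS pipe as the communication link (Unix-only: `os.fork()` creates the second process; a real system might use sockets or message queues instead):

```python
# Message passing between two processes over an OS pipe: no shared
# memory, only send (write) and receive (read) on the link.
import os

read_end, write_end = os.pipe()     # establish the communication link

pid = os.fork()
if pid == 0:                        # child = p1, the sender
    os.close(read_end)
    os.write(write_end, b"hello via message passing")   # send(message)
    os.close(write_end)
    os._exit(0)

# parent = p2, the receiver
os.close(write_end)
message = os.read(read_end, 1024)   # receive(message)
os.close(read_end)
os.waitpid(pid, 0)
print(message.decode())
```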
