
Operating system

An operating system (OS) is system software that manages computer
hardware and software resources and provides common services for
computer programs.

History
Early computers were built to perform a series of single tasks, like a
calculator. Basic operating system features were developed in the 1950s,
such as resident monitor functions that could automatically run different
programs in succession to speed up processing. Operating systems did not
exist in their modern and more complex forms until the early 1960s.[9]
Hardware features were added that enabled the use of runtime libraries,
interrupts, and parallel processing. When personal computers became
popular in the 1980s, operating systems were created for them that were
similar in concept to those used on larger computers.

In the 1940s, the earliest electronic digital systems had no operating
systems. Electronic systems of this time were programmed on rows of
mechanical switches or by jumper wires on plug boards. These were
special-purpose systems that, for example, generated ballistics tables for
the military or controlled the printing of payroll checks from data on
punched paper cards. After programmable general purpose computers
were invented, machine languages (consisting of strings of the binary digits
0 and 1 on punched paper tape) were introduced that sped up the
programming process (Stern, 1981).

In the early 1950s, a computer could execute only one program at a time.
Each user had sole use of the computer for a limited period of time and
would arrive at a scheduled time with program and data on punched paper
cards or punched tape. The program would be loaded into the machine,
and the machine would be set to work until the program completed or
crashed. Programs could generally be debugged via a front panel using
toggle switches and panel lights. It is said that Alan Turing was a master of
this on the early Manchester Mark 1 machine, and he was already deriving
the primitive conception of an operating system from the principles of
the universal Turing machine.

Types of operating systems

1. Batch operating system
2. Multiprogramming operating system
3. Time sharing operating system
4. Real time operating system
5. Distributed operating system

1. Batch operating system

Batch processing requires the program, data, and appropriate system
commands to be submitted together, as a job, to the computer operator,
usually in the form of punched cards. The major task of the operating
system in such computers is to transfer control from one job to the next
automatically, allowing little or no interaction between users and their
programs. Scheduling in a batch operating system is very simple: jobs are
processed in the order of their submission, i.e. first-come, first-served.

Advantage:
Batch processing is particularly useful for jobs that require the
computer or a peripheral device for an extended period of time with very
little interaction from the user.

Disadvantages:
a.) No interaction with the user while the program is being executed.
b.) The CPU sits idle during the transition from one job to the next.
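The first-come, first-served policy described above can be sketched in a few lines of Python. The job names and burst times below are invented for illustration, and all jobs are assumed to arrive at time zero:

```python
# A minimal sketch of first-come, first-served (FCFS) batch scheduling.
# Assumes all jobs are submitted at time 0, so a job's waiting time
# equals its start time.

def fcfs_schedule(jobs):
    """Process jobs strictly in submission order; return per-job
    (start, finish, waiting) times keyed by job name."""
    clock = 0
    timeline = {}
    for name, burst in jobs:
        start = clock                 # must wait for all earlier jobs
        finish = start + burst
        timeline[name] = (start, finish, start)
        clock = finish
    return timeline

# Three hypothetical jobs, with CPU bursts in abstract time units.
jobs = [("J1", 5), ("J2", 3), ("J3", 8)]
result = fcfs_schedule(jobs)
# J2 waits 5 units because it cannot start until J1 completes.
```

Note how a short job (J2) still waits behind any long job submitted before it; this is exactly the lack of interactivity the disadvantages above describe.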

2. Multiprogramming operating system

When a computer processes several programs concurrently, such execution
is called multiprogramming. Multiprogramming increases CPU utilisation
by organising jobs so that the CPU always has one to execute: the
operating system keeps several jobs in memory at one time, starts the
execution of one of them, and switches to another job in memory whenever
the running job has to wait.

Advantage:
High CPU utilisation.

Disadvantages:
a.) Jobs may have different sizes; therefore memory management is
needed to accommodate them in the memory.
b.) Many jobs may be ready to run on the CPU, which implies that we
require CPU scheduling.
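A common back-of-the-envelope model makes the utilisation gain concrete. It is a sketch under a strong assumption not stated in the text above: that each job independently spends the same fraction p of its time waiting on I/O, so the CPU is idle only when all resident jobs are waiting at once.

```python
# Rough model of why multiprogramming raises CPU utilisation.
# Assumption (for illustration only): each of n co-resident jobs
# independently waits on I/O for fraction p of its time.

def cpu_utilisation(p, n):
    """Approximate CPU utilisation with n jobs in memory: the CPU is
    idle only with probability p**n (all jobs waiting simultaneously)."""
    return 1 - p ** n

# With 80% I/O wait, a single job keeps the CPU only 20% busy,
# while five co-resident jobs push utilisation above 67%.
single = cpu_utilisation(0.8, 1)
five = cpu_utilisation(0.8, 5)
```

Real jobs are not independent, so this overstates the gain, but it captures the trend: the more jobs held in memory, the less often the CPU sits idle.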

3. Time sharing operating system


Time sharing is a logical extension of multiprogramming. The CPU executes
multiple jobs by switching among them, but the switches occur so
frequently that each user can interact with a program while it is
running. Memory management in time sharing systems provides for
isolation and protection of co-resident programs. Most time sharing
systems use round-robin scheduling: to prevent programs from
monopolising the processor, a program executing longer than the system-
defined time-slice is interrupted by the operating system and placed at
the end of the queue of waiting programs.
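The round-robin policy with a fixed time-slice can be sketched as follows. The job names, burst times, and quantum are hypothetical, and the model ignores context-switch overhead:

```python
from collections import deque

# Sketch of round-robin scheduling: a program that runs longer than the
# time-slice (quantum) is interrupted and requeued at the end.

def round_robin(jobs, quantum):
    """jobs: list of (name, remaining_time) pairs in arrival order.
    Returns the order in which jobs complete."""
    queue = deque(jobs)
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:
            finished.append(name)      # completes within its slice
        else:
            # preempted: go to the back of the queue with less work left
            queue.append((name, remaining - quantum))
    return finished

order = round_robin([("A", 4), ("B", 1), ("C", 3)], quantum=2)
# B finishes first (it needs only 1 unit); A and C alternate until done.
```

Unlike FCFS, a short job (B) is never stuck behind a long one for more than a single quantum, which is what makes interactive use possible.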

Advantages:
Each task is given a specific slice of time, and switching between tasks
is fast enough that applications are not noticeably interrupted by it.
Many applications can run at the same time. Time sharing can also be
used in batch systems where appropriate, which increases performance.

Disadvantages:
The main disadvantage of time sharing systems is that they consume
considerable resources and therefore need specialised operating systems.
Switching between tasks can become complicated when many users and
applications are running, which may hang up the system. Time sharing
systems therefore require high-specification hardware.

4. Real time operating system


Real time operating systems are used in environments where a large
number of events must be accepted and processed within short or fixed
time constraints. Example applications include industrial control, flight
control, and real-time simulations. Memory management in real-time
systems is comparatively less demanding than in other types of
multiprogramming systems.

One of the main features of a real-time system is time-critical device
management. Real time operating systems can be of two types:

a.) Hard real-time operating system:

This operating system guarantees that tasks are completed within the
given range of time.

b.) Soft real-time operating system:

This operating system allows some relaxation in the time allowed for the
completion of a task.
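The hard/soft distinction can be stated as a tiny decision rule. The task examples and time values below are hypothetical, chosen only to contrast the two behaviours:

```python
# Illustrative distinction between hard and soft real-time handling:
# a missed hard deadline is an outright failure (e.g. flight control),
# while a missed soft deadline merely degrades service (e.g. a video
# frame arriving late). Times are in abstract units.

def check_deadline(finish_time, deadline, hard):
    """Classify a task's outcome against its deadline."""
    if finish_time <= deadline:
        return "met"
    return "FAILURE" if hard else "degraded"

# Both tasks finish one unit past a 10-unit deadline.
hard_result = check_deadline(11, 10, hard=True)
soft_result = check_deadline(11, 10, hard=False)
```

The point is that the same one-unit overrun is catastrophic in one setting and merely inconvenient in the other; the scheduler, not the task, embodies that policy.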

5. Distributed operating system


A distributed operating system governs the operation of a distributed
computer system and provides a virtual machine abstraction to its users.
The main design objective of a distributed operating system is
transparency. Distributed operating systems provide the means for
system-wide sharing of resources such as computational capacity and I/O
devices.

Advantages:

a.) Better performance than a single system
b.) More resources can be added easily
c.) Resources such as printers can be shared across multiple PCs

Disadvantages:

a.) Security problems due to sharing
b.) Some messages can be lost in the network
c.) Overloading is another problem in distributed operating systems

Examples
Unix and Unix-like operating systems
Unix was originally written in assembly language.[13] Ken
Thompson wrote B, mainly based on BCPL, based on his experience in
the MULTICS project. B was replaced by C, and Unix, rewritten in C,
developed into a large, complex family of inter-related operating systems
which have been influential in every modern operating system
(see History).

The Unix-like family is a diverse group of operating systems, with several


major sub-categories including System V, BSD, and Linux. The name
"UNIX" is a trademark of The Open Group which licenses it for use with any
operating system that has been shown to conform to their definitions.
"UNIX-like" is commonly used to refer to the large set of operating systems
which resemble the original UNIX.

Unix-like systems run on a wide variety of computer architectures. They are
used heavily for servers in business, as well as workstations in academic
and engineering environments. Free UNIX variants, such
as Linux and BSD, are popular in these areas.

Linux
The Linux kernel originated in 1991, as a project of Linus Torvalds, while a
university student in Finland. He posted information about his project on a
newsgroup for computer students and programmers, and received support
and assistance from volunteers who succeeded in creating a complete and
functional kernel.
Linux is Unix-like, but was developed without any Unix code, unlike BSD
and its variants. Because of its open license model, the Linux kernel code
is available for study and modification, which resulted in its use on a wide
range of computing machinery from supercomputers to smart-watches.
Although estimates suggest that Linux is used on only 1.82% of all
"desktop" (or laptop) PCs,[15] it has been widely adopted for use in
servers[16] and embedded systems[17] such as cell phones. Linux has
superseded Unix on many platforms and is used on most supercomputers
including the top 385.[18] Many of the same computers are also
on Green500 (but in different order), and Linux runs on the top 10. Linux is
also commonly used on other small energy-efficient computers, such
as smartphones and smartwatches. The Linux kernel is used in some
popular distributions, such as Red Hat, Debian, Ubuntu, Linux
Mint and Google's Android, Chrome OS, and Chromium OS.

Components

The components of an operating system all exist in order to make the
different parts of a computer work together. All user software needs to go
through the operating system in order to use any of the hardware, whether
it be as simple as a mouse or keyboard or as complex as an Internet
component.

Kernel
With the aid of the firmware and device drivers, the kernel provides the
most basic level of control over all of the computer's hardware devices. It
manages memory access for programs in the RAM, it determines which
programs get access to which hardware resources, it sets up or resets the
CPU's operating states for optimal operation at all times, and it organizes
the data for long-term non-volatile storage with file systems on such media
as disks, tapes, flash memory, etc.

Memory management
Among other things, a multiprogramming operating system kernel must be
responsible for managing all system memory which is currently in use by
programs. This ensures that a program does not interfere with memory
already in use by another program. Since programs time share, each
program must have independent access to memory.
Cooperative memory management, used by many early operating systems,
assumes that all programs make voluntary use of the kernel's memory
manager, and do not exceed their allocated memory. This system of
memory management is almost never seen any more, since programs
often contain bugs which can cause them to exceed their allocated
memory. If a program fails, it may cause memory used by one or more
other programs to be affected or overwritten. Malicious programs or viruses
may purposefully alter another program's memory, or may affect the
operation of the operating system itself. With cooperative memory
management, it takes only one misbehaved program to crash the system.
Memory protection enables the kernel to limit a process' access to the
computer's memory. Various methods of memory protection exist,
including memory segmentation and paging. All methods require some
level of hardware support (such as the 80286 MMU), which doesn't exist in
all computers.
In both segmentation and paging, certain protected mode registers specify
to the CPU what memory address it should allow a running program to
access. Attempts to access other addresses trigger an interrupt, which
causes the CPU to re-enter supervisor mode, placing the kernel in charge.
This is called a segmentation violation, or Seg-V for short; since it is
both difficult to assign a meaningful result to such an operation and
usually a sign of a misbehaving program, the kernel generally resorts to
terminating the offending program and reporting the error.
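A toy model of this mechanism splits a virtual address into a page number and an offset, then looks the page up in a page table. The page size, table contents, and use of Python's `MemoryError` are illustrative choices, not how a real MMU is implemented:

```python
# Toy model of paged address translation. Accessing a page with no
# mapping raises an error, standing in for the interrupt that returns
# the CPU to supervisor mode so the kernel can act.

PAGE_SIZE = 4096                  # a common (but here assumed) page size

page_table = {0: 7, 1: 3}         # virtual page number -> physical frame

def translate(vaddr):
    """Map a virtual address to a physical address, or fault."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        raise MemoryError(f"segmentation violation at {vaddr:#x}")
    return page_table[page] * PAGE_SIZE + offset

phys = translate(4100)            # page 1, offset 4 -> frame 3
```

Any address inside pages 0 or 1 translates cleanly; an address in page 2 faults, which is precisely the hardware support (an MMU) the text says not all computers have.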
Windows versions 3.1 through ME had some level of memory protection,
but programs could easily circumvent the need to use it. A general
protection fault would be produced, indicating a segmentation violation had
occurred; however, the system would often crash anyway.
Virtual memory
Many operating systems can "trick" programs into using memory scattered
around the hard disk and RAM as if it were one continuous chunk of
memory, called virtual memory.
The use of virtual memory addressing (such as paging or segmentation)
means that the kernel can choose what memory each program may use at
any given time, allowing the operating system to use the same memory
locations for multiple tasks.
If a program tries to access memory that isn't in its current range of
accessible memory, but nonetheless has been allocated to it, the kernel is
interrupted in the same way as it would if the program were to exceed its
allocated memory. (See section on memory management.) Under UNIX
this kind of interrupt is referred to as a page fault.
When the kernel detects a page fault it generally adjusts the virtual memory
range of the program which triggered it, granting it access to the memory
requested. This gives the kernel discretionary power over where a
particular application's memory is stored, or even whether or not it has
actually been allocated yet.
In modern operating systems, memory which is accessed less frequently
can be temporarily stored on disk or other media to make that space
available for use by other programs. This is called swapping, as an area of
memory can be used by multiple programs, and what that memory area
contains can be swapped or exchanged on demand.
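The swapping behaviour can be sketched with a small demand-paging model. The three-frame capacity and the least-recently-used (LRU) replacement policy are illustrative assumptions; real kernels use more elaborate policies:

```python
from collections import OrderedDict

# Sketch of demand paging with swapping: "RAM" holds a few frames, and a
# page fault loads the requested page, evicting the least recently used
# page to "disk" when RAM is full.

class VirtualMemory:
    def __init__(self, frames=3):
        self.frames = frames
        self.ram = OrderedDict()   # page -> contents, ordered by recency
        self.disk = {}             # pages swapped out to backing store
        self.faults = 0

    def access(self, page):
        if page in self.ram:
            self.ram.move_to_end(page)     # hit: mark as recently used
            return
        self.faults += 1                   # miss: page fault
        if len(self.ram) >= self.frames:
            victim, data = self.ram.popitem(last=False)  # evict LRU page
            self.disk[victim] = data                     # swap it out
        self.ram[page] = self.disk.pop(page, None)       # swap/load in

vm = VirtualMemory(frames=3)
for p in [1, 2, 3, 1, 4]:          # four distinct pages, three frames
    vm.access(p)
# Accessing page 4 evicts page 2, the least recently used page.
```

Touching page 1 again just before the fault is what saves it from eviction: recency of use, not arrival order, decides which memory area gets swapped out.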
"Virtual memory" provides the programmer or the user with the perception
that there is a much larger amount of RAM in the computer than is really
there.

Multitasking
Multitasking refers to the running of multiple independent computer
programs on the same computer, giving the appearance that it is
performing the tasks at the same time. Since most computers can do at
most one or two things at one time, this is generally done via time-sharing,
which means that each program uses a share of the computer's time to
execute.
An operating system kernel contains a scheduling program which
determines how much time each process spends executing, and in which
order execution control should be passed to programs. Control is passed to
a process by the kernel, which allows the program access to the CPU and
memory. Later, control is returned to the kernel through some mechanism,
so that another program may be allowed to use the CPU. This passing of
control between the kernel and applications is called a context
switch.
An early model which governed the allocation of time to programs was
called cooperative multitasking. In this model, when control is passed to a
program by the kernel, it may execute for as long as it wants before
explicitly returning control to the kernel. This means that a malicious or
malfunctioning program may not only prevent any other programs from
using the CPU, but it can hang the entire system if it enters an infinite loop.
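Cooperative multitasking can be sketched with Python generators, where `yield` plays the role of the voluntary return of control to the kernel. The task names and step counts are invented for illustration:

```python
from collections import deque

# Sketch of cooperative multitasking: each task runs until it
# *voluntarily* yields control back to the scheduler. A task that never
# yielded would monopolise the CPU, exactly as described above.

def task(name, steps, trace):
    for i in range(steps):
        trace.append(f"{name}{i}")   # do one unit of "work"
        yield                        # voluntarily hand control back

def run_cooperative(tasks):
    queue = deque(tasks)
    while queue:
        t = queue.popleft()
        try:
            next(t)                  # resume the task to its next yield
            queue.append(t)          # it yielded; requeue it
        except StopIteration:
            pass                     # the task ran to completion

trace = []
run_cooperative([task("A", 2, trace), task("B", 3, trace)])
# trace interleaves the tasks: A0, B0, A1, B1, B2
```

Deleting the `yield` from one task makes its loop run to completion before any other task gets a turn, which is the hang the paragraph above warns about; preemptive multitasking removes that reliance on good behaviour.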
Modern operating systems extend the concepts of application preemption
to device drivers and kernel code, so that the operating system has
preemptive control over internal run-times as well.
The philosophy governing preemptive multitasking is that of ensuring that
all programs are given regular time on the CPU. This implies that all
programs must be limited in how much time they are allowed to spend on
the CPU without being interrupted. To accomplish this, modern operating
system kernels make use of a timed interrupt. A protected mode timer is set
by the kernel which triggers a return to supervisor mode after the specified
time has elapsed. (See above sections on Interrupts and Dual Mode
Operation.)
On many single user operating systems cooperative multitasking is
perfectly adequate, as home computers generally run a small number of
well tested programs. The AmigaOS is an exception, having preemptive
multitasking from its very first version. Windows NT was the first version
of Microsoft Windows which enforced preemptive multitasking, but it didn't
reach the home user market until Windows XP (since Windows NT was
targeted at professionals).
