Unit 1 - Os - PPT
UNIT – 1
INTRODUCTION TO OS
MS-DOS Drivers
Standard C Library Example
Three general methods are used to pass parameters
between a running program and the operating system.
– The simplest approach is to pass the parameters in registers.
– Store the parameters in a table in memory, and pass the table's address as a parameter in a register.
– The program pushes (stores) the parameters onto the stack, and the operating system pops them off.
Passing of parameters as a table
TYPES OF SYSTEM CALLS
• System calls can be grouped roughly into
five categories:
– Process control
– File management
– Device management
– Information maintenance
– Communications
Types of System Calls
• Process control
– end, abort
– load, execute
– create process, terminate process
– get process attributes, set process attributes
– wait for time
– wait event, signal event
– allocate and free memory
• File management
– create file, delete file
– open, close file
– read, write, reposition
– get and set file attributes
Types of System Calls (Cont.)
• Device management
– request device, release device
– read, write, reposition
– get device attributes, set device attributes
– logically attach or detach devices
• Information maintenance
– get time or date, set time or date
– get system data, set system data
– get and set process, file, or device attributes
• Communications
– create, delete communication connection
– send, receive messages
– transfer status information
– attach and detach remote devices
Examples of Windows and Unix System Calls
Processes
A process can be thought of as a program in execution. A process needs certain
resources, such as CPU time, memory, files, and I/O devices, to accomplish its task.
A process is an active entity, whereas a program is a passive entity.
A process is the unit of work in most systems.
Systems consist of a collection of processes:
Operating-system processes execute system code, and
user processes execute user code.
All these processes may execute concurrently.
Sections of a Process
• Code section
• Data Section
• Heap Section
• Stack Section
Process Control Block
Process Control Block (a data structure used by the operating system to store
all the information about a process)
The most common resource is a file. A process must request a file before it can read or
write it. Further, if the file is unavailable, the process must wait until it becomes
available. This abstract description of a resource is crucial to the way various
entities (such as files, memory, and devices) are managed.
Thread
A thread is a basic unit of CPU utilization. A thread is sometimes called a
lightweight process, whereas a process is a heavyweight process.
Thread comprises:
A thread ID
A program counter
A register set
A stack.
A process is a program that performs a single thread of execution, i.e., a
process is an executing program with a single thread of control. For example,
when a process is running a word-processor program, a single thread of
instructions is being executed.
Thread
The single thread of control allows the process to perform only one task at one
time. For example, the user cannot simultaneously type in characters and run the
spell checker within the same process.
Traditionally, a process contained only a single thread of control as it ran. Many
modern operating systems have extended the process concept to allow a process
to have multiple threads of execution and thus to perform more than one task at
a time.
For example, in a browser, multiple tabs can be different threads
Thread
The operating system is responsible for the following activities in connection with
process and thread management:
The creation and deletion of both user and system processes
The scheduling of processes
The provision of mechanisms for synchronization
Communication
Deadlock handling for processes.
Process Model
Thread Model
Creation time for 50,000 Process and Thread
Applications
Processes
Thread
Benefits
The benefits of multithreaded programming can be broken down into four major categories:
Responsiveness
Multithreading an interactive application may allow a program to continue running even if part of
it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user. A
multithreaded web browser could still allow user interaction in one thread while an image is
being loaded in another thread.
Resource sharing
By default, threads share the memory and the resources of the process to which they belong. The
benefit of sharing code and data is that it allows an application to have several different threads of
activity within the same address space.
Benefits
Economy
Allocating memory and resources for process creation is costly. Because threads share
resources of the process to which they belong, it is more economical to create and context-
switch threads.
It is much more time consuming to create and manage processes than threads. In Solaris, for
example, creating a process is about thirty times slower than creating a thread, and context
switching is about five times slower.
Utilization of multiprocessor architectures
The benefits of multithreading can be greatly increased in a multiprocessor architecture,
where threads may be running in parallel on different processors.
A single threaded process can only run on one CPU, no matter how many are available.
Multithreading on a multi-CPU machine increases concurrency.
User Level Thread
User threads are supported above the kernel and are managed without kernel support i.e., they are
implemented by thread library at the user level. These are the threads that application programmers use in
their programs.
The library provides support for thread creation, scheduling and management with no support from the
kernel.
Because the kernel is unaware of user-level threads, all thread creation and scheduling are done in user
space without the need for kernel intervention.
Therefore, user-level threads are generally fast to create and manage. User-thread libraries include
POSIX Pthreads, Mach C-threads, and Solaris 2 UI-threads.
User Level Thread
Advantages:
Fast to create and manage
Thread management is handled by the thread library
Drawbacks:
Lack of coordination between the kernel and threads
If one thread invokes a blocking system call, all threads in the process have to wait
Kernel Level Thread
Kernel threads are supported and managed directly by the operating system.
The kernel performs thread creation, scheduling and management in kernel space.
Since thread management is done by the operating system, kernel threads are generally slower to
create and manage than are user threads.
Also, in a multiprocessor environment, the kernel can schedule threads on different processors.
Most contemporary operating systems—including Windows NT, Windows 2000, Solaris 2,BeOS, and
Tru64 UNIX (formerly Digital UNIX)—support kernel threads
Kernel Level Thread
Advantages:
The scheduler can decide to give more time to a process having more threads
If a thread invokes a system call, the remaining threads can still run
Drawbacks:
Slow to create and manage
Overhead of managing both threads and processes
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level
thread facility, e.g., Solaris.
Three common ways of establishing this relationship are:
Many-to-One Model
One-to-one Model
Many-to-Many Model
Multithreading Models
Many-to-One Model
The many-to-one model maps many user-level threads to one kernel thread.
Thread management is done by the thread library in user space, so it is
efficient; but the entire process will block if a thread makes a blocking system
call.
Also, because only one thread can access the kernel at a time, multiple
threads are unable to run in parallel on multiprocessors. Green threads—a
thread library available for Solaris 2—uses this model.
Multithreading Models
One-to-One Model
The one-to-one model maps each user thread to a kernel thread. It provides
more concurrency than the many-to-one model by allowing another thread
to run when a thread makes a blocking system call.
It also allows multiple threads to run in parallel on multiprocessors.
The only drawback to this model is that creating a user thread requires
creating the corresponding kernel thread.
Linux and OS/2, along with the family of Windows operating systems—including
Windows 95, 98, NT, and 2000—implement the one-to-one model.
Multithreading Models
Many-to-Many Model
The many-to-many model multiplexes many user-level threads to a smaller
or equal number of kernel threads.
The one-to-one model allows for greater concurrency, but the developer has
to be careful not to create too many threads within an application.
The many-to-many model suffers from neither of these shortcomings:
Developers can create as many user threads as necessary, and the
corresponding kernel threads can run in parallel on a multiprocessor. Also,
when a thread performs a blocking system call, the kernel can schedule
another thread for execution. Solaris 2, IRIX, HP-UX and Tru64 UNIX support
this model.
Process Vs Thread
Difference between Process and Thread
Sl.No  Process                                                  Thread
1      An executing program with at least one thread of control; a heavyweight unit with its own address space and resources
       A basic unit of CPU utilization (thread ID, program counter, register set, stack) within a process; a lightweight unit that shares the memory and resources of its process
2      Slower to create and context-switch (in Solaris, creation is about thirty times slower than for a thread)
       Faster and more economical to create and context-switch
3      Blocking one single-threaded process blocks its only task
       Other threads of the same process can continue while one is blocked (in suitable threading models)
Device Management
It keeps track of all the devices. The program responsible for keeping track of
all the devices is known as the I/O controller.
It provides a uniform interface to access devices with different physical
characteristics.
It allocates the devices in an efficient manner.
It deallocates the devices after usage.
It decides which process gets the device and for how long it should be used.
It optimizes the performance of each individual device.
Approaches
Direct I/O
Memory Mapped I/O
Direct Memory Access
Device Management
Approaches
Direct I/O
Device addressing simplifies the interface
(the device is seen as a range of memory locations)
Common control signals for memory and I/O devices
Common address and data bus for I/O devices and memory
I/O System Organization
An application process uses a device by issuing commands and exchanging data
with the device manager (device driver).
Device driver responsibilities:
Implement communication APIs that abstract the functionality of the device
Provide device dependent operations to implement functions defined by the
API
APIs should be similar across different device drivers, reducing the amount of
information an application programmer needs to know to use the device
I/O System Organization
Since each device controller is specific to a particular device, the device driver
implementation will be device specific, to
Provide correct commands to the controller
Interpret the controller status register (CSR) correctly
Transfer data to and from device controller data registers as required for
correct device operation
I/O with Polling
Each I/O operation requires that the software and hardware coordinate their
operations to accomplish the desired effect
In direct I/O with polling, this coordination is done in the device driver;
while managing the I/O, the device manager polls the busy/done flags to
detect the operation's completion; thus, the CPU starts the device, then polls
the CSR to determine when the operation has completed
With this approach it is difficult to achieve high CPU utilization, since the CPU
must constantly check the controller status
I/O with polling-read
Application process requests a read operation
The device driver queries the CSR to determine whether the device is idle; if
device is busy, the driver waits for it to become idle
The driver stores an input command into the controller’s command register, thus
starting the device
The driver repeatedly reads the content of CSR to detect the completion of the
read operation
The driver copies the content of the controller's data register(s) into the
user process's space in main memory.
I/O with polling-write
The application process requests a write operation
The device driver queries the CSR to determine whether the device is idle; if
busy, it waits for the device to become idle
The device driver copies data from user space memory to the controller’s
data register(s)
The driver stores an output command into the command register, thus
starting the device
The driver repeatedly reads the CSR to determine when the device
completed its operation.
Interrupt driven I/O
In a multiprogramming system the CPU time wasted in polled I/O could be used by
another process, because the CPU is used by other processes in addition to the one
waiting for the I/O operation to complete.
This may be remedied by the use of interrupts
The reason for incorporating interrupts into computer hardware is to eliminate
the need for the device driver to constantly poll the CSR
Instead of being polled, the device controller "automatically" notifies the device driver
when the operation has completed.
Interrupt driven I/O
The application process requests a read operation
The device driver queries the CSR to find out if the device is idle; if busy, then it waits until
the device becomes idle
The driver stores an input command into the controller’s command register, thus starting
the device
When this part of the device driver completes its work, it saves information about the
operation it began in the device status table; this table contains an entry for each device in
the system, and the information written into it includes the return address of the original call
and any special parameters for the I/O operation. The CPU, after doing this, can be used by
other programs, so the device manager invokes the scheduler part of the process manager.
It then terminates
Interrupt driven I/O
The device completes the operation and interrupts the CPU, therefore causing an interrupt
handler to run
The interrupt handler determines which device caused the interrupt; it then branches to the
device handler for that device
The device driver retrieves the pending I/O status information from the device status table
The device driver copies the content of the controller's data register(s) into the user
process's space
The device handler returns control to the application process (knowing the return
address from the device status table). The same (or a similar) sequence of operations is
performed for an output operation.
Buffering
Buffering is a technique by which the device manager keeps slower I/O devices busy
while a process does not require I/O operations.
A buffer is a memory area that stores data being transferred between two devices or between a
device and an application.
Input buffering is the process of reading the data into the primary memory before the process
requests it.
Output buffering is the process of saving the data in the memory and then writing it to the
device while the process continues its execution.
Buffering
Hardware level buffering
Consider a simple character device controller that reads a single byte from a modem for each input
operation.
Normal operation: a read occurs, and the driver passes a read command to the controller; the
controller instructs the device to put the next character into the controller's one-byte data
register; the process calling for the byte waits for the operation to complete and then retrieves
the character.
Adding a hardware buffer to the controller decreases the amount of time the process has to wait
Buffered operation: the next character to be read by the process has already been
placed into the data register, even if the process has not yet called for the read operation
Buffering
Hardware level buffering
Buffering
Driver level buffering
This is generally called double buffering. One buffer is for the driver to store the data while waiting for the
higher layers to read it. The other buffer is to store data from the lower level module. This technique can be
used for the block-oriented devices (buffers must be large enough to accommodate a block of data).
Buffering
Driver level buffering
The number of buffers is extended from two to n. The data producer is writing into buffer i while the
data consumer is reading from buffer j.
In this configuration buffers j+1 to n-1 and 0 to i-1 are full. This is known as the circular
buffering technique.
Device Driver
It is a software program that controls a particular type of device attached to the computer. It
provides an interface to the hardware devices without requiring precise knowledge of the
hardware.
A device driver communicates with the device through a bus or communication sub
system.
Responsibilities
Initialize devices
Interpret the commands from the operating system
Manage data transfers
Accept and process interrupts
Maintain the integrity of driver and kernel data structures
Device Driver
Two ways of dealing with the device drivers
– Old way: the driver is part of the operating system; to add a new device driver, the whole OS had
to be recompiled
– Modern way: drivers can be installed without recompiling the OS by using
reconfigurable device drivers; the OS dynamically binds the OS code to the driver functions.