Module 2
Uploaded by Fazal

OPERATING SYSTEM

CONCEPTS

Module 2
PROCESS MANAGEMENT
CONCEPT OF A PROCESS
An operating system executes a variety of programs: in batch systems, time-shared systems, etc.
Process – a program in execution; process execution must progress in a sequential fashion.
A process includes:
 Process number / ID
 Program counter – address of the next instruction
 Data
PROCESS MANAGEMENT
Process is a program in execution.
Difference between a process and a program:
Process:
 A process is an active entity.
 Translated object code is stored in main memory.
 A process is a program in execution.
 The process is dynamic.
Program:
 A program is a passive entity.
 Source code is stored in secondary memory.
 A program is a set of instructions.
 The program is static.
Process State
As a process executes, it changes state:
 New: The process is being created.
 Ready: The process is waiting to be assigned to a processor.
 Active/Running: Instructions are being executed.
 Waiting: The process is waiting for some event to occur.
 Halted/Terminated: The process has finished execution.
PROCESS CONTROL BLOCK (PCB)
Information associated with each process
Process state – new, ready, active, waiting, halted,..
Program counter – address of the next instruction
CPU registers – accumulator, general purpose registers, etc
CPU scheduling information – scheduling queues
Memory-management information – page and segment tables
Accounting information – process number, time used, etc
I/O status information – I/O devices
[Figure: PCB diagram – process state (new, ready, running, waiting, halted), PID, and program counter (address of next instruction)]
CONTEXT SWITCHING

Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process.
SCHEDULERS
Long-term scheduler (job scheduler) – selects which processes should be brought into the ready queue.
Short-term scheduler (CPU scheduler) – selects which process should be executed next and allocates the CPU.
Medium-term scheduler – it is advantageous to remove a process from memory in multiprocessing systems.
 Swapping: the removed process is later reintroduced into memory.
I/O AND CPU BOUND PROCESSES
Processes can be described as either:
 I/O-bound process – spends more time doing I/O than computations; many short CPU bursts.
 CPU-bound process – spends more time doing computations; few very long CPU bursts.
CPU SCHEDULER
Selects from among the processes in ready queue, and
allocates the CPU to one of them
 Queue may be scheduled in various ways
CPU scheduling decisions may take place when a
process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Scheduling under 1, 3 and 4 is non-preemptive
Scheduling under 2 is preemptive
SCHEDULING CRITERIA
CPU utilization – keep the CPU as busy as possible
Throughput – number of processes that complete their execution per time unit
Turnaround time – amount of time taken to execute a particular process
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time from when a request is submitted until the first response is produced
SCHEDULING ALGORITHM OPTIMIZATION CRITERIA
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time
FIRST-COME, FIRST-SERVED (FCFS) SCHEDULING
Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive in the order: P1, P2, P3.
The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
Turnaround time: P1 = 24; P2 = 27; P3 = 30
Average turnaround time: (24 + 27 + 30)/3 = 27
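The FCFS arithmetic above can be reproduced with a short sketch (the `fcfs` helper is my own; it assumes all processes arrive at time 0, in list order):

```python
def fcfs(bursts):
    """FCFS waiting and turnaround times; all arrivals at time 0."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)      # time spent in the ready queue
        clock += burst
        turnaround.append(clock)   # completion time (arrival = 0)
    return waiting, turnaround

# The example above: P1 = 24, P2 = 3, P3 = 3
w, t = fcfs([24, 3, 3])
print(w, sum(w) / len(w))   # [0, 24, 27] 17.0
print(t, sum(t) / len(t))   # [24, 27, 30] 27.0
```

Reordering the input to `[3, 3, 24]` reproduces the next slide's result as well.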
FCFS SCHEDULING (CONT.)
Suppose that the processes arrive in the order: P2, P3, P1.
The Gantt chart for the schedule is:

| P2 | P3 | P1 |
0    3    6    30

Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than the previous case.
Convoy effect – short processes wait behind a long process.
SHORTEST-JOB-FIRST (SJF) SCHEDULING
Associate with each process the length of its next CPU burst.
 Use these lengths to schedule the process with the shortest time.
SJF is optimal – it gives the minimum average waiting time for a given set of processes.
 The difficulty is knowing the length of the next CPU request.
EXAMPLE OF SJF
Process Burst Time
P1 6
P2 8
P3 7
P4 3
SJF scheduling chart:

| P4 | P1 | P3 | P2 |
0    3    9    16   24

Average waiting time = (3 + 16 + 9 + 0)/4 = 7
Average turnaround time = (3 + 9 + 16 + 24)/4 = 13
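The non-preemptive SJF schedule above can be checked with a small sketch (the `sjf` helper is my own; it assumes all processes arrive at time 0):

```python
def sjf(bursts):
    """Non-preemptive SJF, all arrivals at time 0.
    Returns (average waiting time, average turnaround time)."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    n = len(bursts)
    clock, waiting, turnaround = 0, [0] * n, [0] * n
    for i in order:                 # run shortest burst first
        waiting[i] = clock
        clock += bursts[i]
        turnaround[i] = clock
    return sum(waiting) / n, sum(turnaround) / n

# P1 = 6, P2 = 8, P3 = 7, P4 = 3 from the slide
print(sjf([6, 8, 7, 3]))   # (7.0, 13.0)
```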
SHORTEST JOB FIRST WITH ARRIVAL TIME
Now we add the concepts of varying arrival times and preemption to the analysis.

Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       2             9
P4       3             5

Waiting time = Completion time – Burst time – Arrival time
Turnaround time = Completion time – Arrival time

Preemptive SJF Gantt chart:

| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26

Waiting times: P1 = 17 - 8 - 0 = 9; P2 = 5 - 4 - 1 = 0; P3 = 26 - 9 - 2 = 15; P4 = 10 - 5 - 3 = 2
Average waiting time = (9 + 0 + 15 + 2)/4 = 26/4 = 6.5 ms
Average turnaround time = [(17 - 0) + (5 - 1) + (26 - 2) + (10 - 3)]/4 = 52/4 = 13 ms
SJF PREEMPTIVE SCHEDULING
A second worked example, with arrival times 0, 1, 2, 3 ms and burst times P1 = 6, P2 = 4, P3 = 2, P4 = 3 ms:
At t = 0 ms, P1 arrives. It is the only process, so the CPU starts executing it.
At t = 1 ms, P2 arrives. P1 (remaining) = 5 ms and P2 = 4 ms; P2 is shorter, so P1 is preempted and P2 starts executing.
At t = 2 ms, P3 arrives. P1 (remaining) = 5 ms, P2 (remaining) = 3 ms, P3 = 2 ms. P3 has the least burst time, so P3 executes.
At t = 3 ms, P4 arrives. P1 = 5 ms, P2 = 3 ms, P3 = 1 ms, P4 = 3 ms. P4 does not have the shortest burst time, so P3 continues to execute.
At t = 4 ms, P3 finishes. Remaining: P1 = 5 ms, P2 = 3 ms, P4 = 3 ms. P2 and P4 are tied, so the one that arrived first runs: P2 executes.
At t = 7 ms, P4 executes for 3 ms.
At t = 10 ms, P1 executes until it finishes (at t = 15 ms).
Waiting time = Completion time – Burst time – Arrival time
P1 waiting time = 15 - 6 - 0 = 9 ms
P2 waiting time = 7 - 4 - 1 = 2 ms
P3 waiting time = 4 - 2 - 2 = 0 ms
P4 waiting time = 10 - 3 - 3 = 4 ms
The average waiting time is (9 + 2 + 0 + 4)/4 = 15/4 = 3.75 ms
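Both preemptive examples can be replayed with a small simulator (a sketch; the `srtf` name and the 1 ms tick loop are my own, and ties are broken by list order, i.e. arrival order, as in the walkthrough):

```python
def srtf(procs):
    """Preemptive SJF (shortest-remaining-time-first), 1 ms ticks.
    procs: list of (arrival, burst). Returns (avg_wait, avg_tat)."""
    n = len(procs)
    remaining = [burst for _, burst in procs]
    finish = [0] * n
    t = 0
    while any(remaining):
        ready = [i for i in range(n) if procs[i][0] <= t and remaining[i] > 0]
        if not ready:               # CPU idle until the next arrival
            t += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # shortest remaining burst
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
    wait = [finish[i] - procs[i][1] - procs[i][0] for i in range(n)]
    tat = [finish[i] - procs[i][0] for i in range(n)]
    return sum(wait) / n, sum(tat) / n

# First example: P1(0,8) P2(1,4) P3(2,9) P4(3,5)
print(srtf([(0, 8), (1, 4), (2, 9), (3, 5)]))   # (6.5, 13.0)
# Walkthrough example: P1(0,6) P2(1,4) P3(2,2) P4(3,3)
print(srtf([(0, 6), (1, 4), (2, 2), (3, 3)]))   # (3.75, 7.5)
```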
PRIORITY SCHEDULING
A priority number (integer) is associated with each process.
The CPU is allocated to the process with the highest priority (smallest integer ⇒ highest priority).
There are preemptive and non-preemptive scheduling variations.
SJF is priority scheduling where priority is the inverse of the predicted next CPU burst time.
Problem: Starvation – low-priority processes may never execute.
Solution: Aging – as time progresses, increase the priority of the process.
EXAMPLE OF PRIORITY SCHEDULING
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Priority scheduling Gantt chart:

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 41/5 = 8.2 ms
Average turnaround time = (16 + 1 + 18 + 19 + 6)/5 = 60/5 = 12 ms
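Non-preemptive priority scheduling with all arrivals at t = 0 reduces to sorting by priority; a short sketch (the `priority_schedule` helper is my own) confirms the averages:

```python
def priority_schedule(procs):
    """Non-preemptive priority scheduling, all arrivals at t = 0.
    procs: list of (burst, priority); smaller number = higher priority."""
    order = sorted(range(len(procs)), key=lambda i: procs[i][1])
    n = len(procs)
    clock, wait, tat = 0, [0] * n, [0] * n
    for i in order:                 # run highest priority first
        wait[i] = clock
        clock += procs[i][0]
        tat[i] = clock
    return sum(wait) / n, sum(tat) / n

# P1..P5 from the slide as (burst, priority)
print(priority_schedule([(10, 3), (1, 1), (2, 4), (1, 5), (5, 2)]))  # (8.2, 12.0)
```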
ROUND ROBIN (RR)
Each process gets a small unit of CPU time (time quantum q), usually 10-
100 milliseconds. After this time has elapsed, the process is preempted
and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
A timer interrupts every quantum to schedule the next process.
Performance:
 q large ⇒ behaves like FIFO
 q small ⇒ q must still be large with respect to the context-switch time, otherwise overhead is too high
EXAMPLE OF RR
Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
The Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Waiting time: P1 = 10 - 4 = 6; P2 = 4; P3 = 7
Average waiting time = (6 + 4 + 7)/3 = 17/3 ≈ 5.66
Average turnaround time = (30 + 7 + 10)/3 = 47/3 ≈ 15.66
Typically, q is 10 ms to 100 ms, and a context switch takes < 10 μs.
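The RR schedule above can be checked with a ready-queue simulation (a sketch; the `round_robin` helper is my own and assumes all processes arrive at t = 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round-robin with all arrivals at t = 0. Returns (avg_wait, avg_tat)."""
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    queue = deque(range(n))        # FIFO ready queue
    t = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)        # preempted: back of the queue
        else:
            finish[i] = t
    wait = [finish[i] - bursts[i] for i in range(n)]
    return sum(wait) / n, sum(finish) / n

print(round_robin([24, 3, 3], 4))   # ≈ (5.67, 15.67)
```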
MULTILEVEL QUEUE
The ready queue is partitioned into separate queues, e.g.:
 foreground (interactive)
 background (batch)
A process stays permanently in a given queue.
Each queue has its own scheduling algorithm, e.g.:
 foreground – RR
 background – FCFS
Scheduling must also be done between the queues:
 Fixed-priority scheduling (i.e., serve all from foreground, then from background); possibility of starvation.
 Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR and 20% to background in FCFS.
MULTILEVEL FEEDBACK QUEUE
A process can move between the various queues; aging can be implemented this way.
A multilevel-feedback-queue scheduler is defined by the following parameters:
 number of queues
 scheduling algorithm for each queue
 method used to determine when to upgrade a process
 method used to determine when to demote a process
 method used to determine which queue a process will enter when it needs service
MULTIPLE-PROCESSOR SCHEDULING
CPU scheduling is more complex when multiple CPUs are available.
Homogeneous processors within a multiprocessor.
Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing.
Symmetric multiprocessing (SMP) – each processor is self-scheduling; all processes are in a common ready queue, or each processor has its own private queue of ready processes.
 Currently the most common approach.
ALGORITHM EVALUATION
How do we select a CPU-scheduling algorithm for an OS?
Determine the criteria, then evaluate the algorithms.
Deterministic modeling:
 A type of analytic evaluation.
 Takes a particular predetermined workload and defines the performance of each algorithm for that workload.
QUEUING MODELS
Describes the arrival of processes and the CPU and I/O bursts probabilistically.
 Commonly exponential, described by the mean.
 Computes average throughput, utilization, waiting time, etc.
The computer system is described as a network of servers, each with a queue of waiting processes.
 Knowing arrival rates and service rates, we can compute utilization, average queue length, average wait time, etc.
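The slides give no formulas, but for the simplest such server, the standard M/M/1 queue (Poisson arrivals, exponential service), the usual closed-form results can be computed directly. A sketch (the `mm1` helper and the example rates are my own):

```python
def mm1(arrival_rate, service_rate):
    """Standard M/M/1 queue metrics; requires arrival_rate < service_rate."""
    rho = arrival_rate / service_rate          # utilization
    L = rho / (1 - rho)                        # average number in system
    W = 1 / (service_rate - arrival_rate)      # average time in system
    Wq = rho / (service_rate - arrival_rate)   # average wait in the queue
    return rho, L, W, Wq

# e.g. 8 processes/sec arriving at a CPU that serves 10/sec
print(mm1(8, 10))   # ≈ (0.8, 4.0, 0.5, 0.4)
```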
INTERPROCESS COMMUNICATION
Processes within a system may be independent or cooperating
Cooperating process can affect or be affected by other processes,
including sharing data
Reasons for cooperating processes:
 Information sharing
 Computation speedup
 Modularity
 Convenience
Cooperating processes need interprocess communication (IPC)
COMMUNICATIONS MODELS
Two models of IPC:
 a) Message passing
 b) Shared memory
IPC
Message Passing:
The function of a message system is to allow processes to communicate with one another without the need to share data. It is used as the method of communication in microkernels.
If processes P and Q want to communicate, they need a communication link to send and receive messages.
IPC
Shared Memory:
Shared memory is memory shared between two processes. Each process that wants to communicate with another process can do so only using interprocess communication techniques.
It is the fastest IPC mechanism.
The OS maps an address space that the processes read and write without calling operating system functions.
Steps:
1. Request a memory segment from the operating system that can be shared between processes.
2. Associate it with the address space of the calling process.
THREADS
A thread is a basic unit of CPU utilization.
It consists of a thread ID, a program counter, a stack, and a set of registers.
A traditional process has a single thread of control and is also called a heavyweight process. There is one program counter, and one sequence of instructions that can be carried out at any given time.
A multi-threaded application has multiple threads within a single process, each with its own program counter, stack, and set of registers, but sharing common code, data, and certain structures such as open files. Such processes are called lightweight processes.
ADVANTAGES
 Responsiveness – may allow continued execution if part of the process is blocked; especially important for user interfaces.
 Resource sharing – threads share the resources of the process; easier than shared memory or message passing.
 Economy – cheaper than process creation; thread switching has lower overhead than context switching.
 Scalability – a process can take advantage of multiprocessor architectures.
TYPES OF THREADS
1) User threads – live above the kernel and are managed without kernel support.
2) Kernel threads – managed by the operating system itself.

There are three types of relationships between user threads and kernel threads.
USER LEVEL THREADS
Each thread is represented by a PC, registers, a stack, and a small control block, all stored in the user process's address space.
Thread management is done by a user-level threads library.
Three primary thread libraries: POSIX Pthreads, Win32 threads, Java threads.
User-level threads are implemented in user-level libraries, rather than via system calls, so thread switching does not need to call the operating system or cause an interrupt to the kernel.
Fast and efficient: thread switching is not much more expensive than a procedure call.
Simple management: creating a thread, switching between threads, and synchronizing between threads can all be done without intervention of the kernel.
KERNEL LEVEL THREADS
Instead of a thread table in each process, the kernel has a thread table that keeps track of all threads in the system.
Kernel-level threads are slow and inefficient; for instance, thread operations are hundreds of times slower than user-level threads.
The kernel must manage and schedule threads as well as processes. It requires a full thread control block (TCB) for each thread to maintain information about threads.
As a result there is significant overhead and increased kernel complexity.
MANY-TO-ONE MODEL
Many user-level threads are mapped to a single kernel thread.
Multiple threads may not run in parallel on a multicore system because only one may be in the kernel at a time.
Thread management is done by the thread library in user space.
If one of the threads makes a blocking system call, the entire process will be blocked.
Few systems currently use this model.
Examples:
 Solaris Green Threads
 GNU Portable Threads
ONE-TO-ONE MODEL
Each user-level thread maps to one kernel thread.
Creating a user-level thread creates a kernel thread.
Allows multiple threads to run in parallel on multiprocessors.
The number of threads per process is sometimes restricted due to overhead.
Examples:
 Windows
 Linux
 Solaris 9 and later
MANY-TO-MANY MODEL
Allows many user-level threads to be mapped to many kernel threads.
Allows the operating system to create a sufficient number of kernel threads.
Examples:
 Solaris prior to version 9
 Windows with the ThreadFiber package
MULTICORE PROGRAMMING
A recent trend in computer architecture is to produce chips with multiple cores, or CPUs, on a single chip.
A multi-threaded application running on a traditional single-core chip would have to execute the threads one after another.
On a multi-core chip, the threads can be spread across the available cores, allowing true parallel processing.
THREAD LIBRARY
A thread library provides the programmer with an API for creating and managing threads.
There are two primary ways of implementing a thread library:
 The first approach is to provide a library entirely in user space with no kernel support. All code and data structures for the library exist in user space; invoking a function in the API results in a local function call, not a system call.
 The second approach is to implement a kernel-level library supported directly by the operating system. In this case the code and data structures for the library exist in kernel space, and invoking a function in the API typically results in a system call to the kernel.
The main thread libraries in use are:
 POSIX threads – Pthreads, the threads extension of the POSIX standard, may be provided as either a user-level or a kernel-level library.
 Win32 threads – the Windows thread library is a kernel-level library available on Windows systems.
 Java threads – the Java thread API allows threads to be created and managed directly in Java programs.
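As a minimal illustration of such an API, the sketch below uses Python's `threading` module (which wraps the platform's kernel-level threads; the worker function and the lock protecting the shared list are my own). Each thread gets its own stack and registers but shares the process's data, here `results`:

```python
import threading

lock = threading.Lock()
results = []                 # shared data, protected by the lock

def worker(tid):
    # Each thread appends its ID to the shared list.
    with lock:
        results.append(tid)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()                # create and run the threads
for t in threads:
    t.join()                 # wait for all of them to finish
print(sorted(results))       # [0, 1, 2, 3]
```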
