
Unit - 2

Dr.Srinath.S
Unit 2: Syllabus

• Process Management and Synchronization: Process concept; Process scheduling; Operations on processes; Inter-process communication.

• Multi-Threaded Programming: Overview; Multithreading models; Thread Libraries; Threading issues.

• Process Scheduling: Basic concepts; Scheduling criteria; Scheduling algorithms; Multiple-Processor scheduling; Thread scheduling.

• Synchronization: The Critical section problem; Peterson’s solution; Synchronization hardware; Semaphores; Classical problems of synchronization; Monitors.

Chapter – 3
Process Concept

Process – General view
• A process can be thought of as a program in execution.

• A process will need certain resources such as CPU time, memory, files and
I/O devices.

• These resources are allocated to the process either when it is created or while it is executing.

Process Concept
• An operating system executes programs in different environments:
• Batch systems
• Time-shared systems

• The terms job and process are used almost interchangeably.

• Process – a program in execution; process execution must progress in a sequential fashion.

The Process
Process includes:

• A process is more than the program code, which is sometimes known as the text section.
• It also includes the current activity, as represented by the value of the program counter.
• It includes the process stack, which contains temporary data
• Function parameters, return addresses, local variables
• A data section containing global variables
• A heap containing memory dynamically allocated during run time

• A program is a passive entity; a process is an active entity.

• A program becomes a process when an executable file is loaded into memory.
• Execution of a program is started via GUI mouse clicks, command-line entry of its name, etc.
• One program can have several processes
• Consider multiple users executing the same program
Process in Memory
Layout (from top to bottom): stack (temporary data); memory for shared libraries and unused space; heap (dynamic memory); data (global variables); text (program code).
Process State
• As a process executes, it changes state
• new: The process is being created
• running: Instructions are being executed
• waiting: The process is waiting for some event to occur
• ready: The process is waiting to be assigned to a processor
• terminated: The process has finished execution

Diagram of Process State

Process control Block (PCB)

• Each process is represented in the OS by a process control block.


• It is also called a task control block.
• PCB consists of many pieces of information as shown in next slide.
• Each process has its own PCB

Process Control Block (PCB)
Information associated with each process
• Process state – may be new, ready, running, waiting, halted, and so on.

• Program counter – the address of the next instruction to be executed.

• CPU registers – these vary in number and type depending on the computer architecture; they include accumulators, index registers, and so on.

• CPU-scheduling information – scheduling parameters such as the process priority
(covered in the process-scheduling chapter).

• Memory-management information – values of the base and limit registers, page tables, segment tables, and so on.

• Accounting information – amount of CPU time used, time limits, and so on.

• I/O status information – list of I/O devices allocated to the process.
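As a rough illustration, a PCB can be pictured as a C structure whose fields mirror the bullets above. This is only a sketch; the names and sizes are invented for this note and do not correspond to any real kernel (Linux's equivalent is task_struct).

    /* Illustrative sketch only; field names and sizes are assumptions. */
    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

    struct pcb {
        int            pid;              /* process identifier                          */
        proc_state_t   state;            /* process state                               */
        unsigned long  program_counter;  /* address of the next instruction             */
        unsigned long  registers[16];    /* saved CPU registers (architecture-specific) */
        int            priority;         /* CPU-scheduling information                  */
        unsigned long  base, limit;      /* memory-management information               */
        unsigned long  cpu_time_used;    /* accounting information                      */
        int            open_devices[8];  /* I/O status information                      */
        struct pcb    *next;             /* link to the next PCB in a scheduling queue  */
    };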
Process Control Block (PCB)

CPU Switch From Process to Process

Process Scheduling
• Maximize CPU use, quickly switch processes onto CPU for time sharing

• Process scheduler selects among available processes for next execution on CPU

• Maintains scheduling queues of processes

• Job queue – set of all processes in the system. As processes enter the system, they are put
into a job queue.

• Ready queue – set of all processes residing in main memory, ready and waiting to execute
(the ready-queue header holds pointers to the first and last PCBs in the list, and each
PCB contains a pointer to the next PCB in the queue – see the sketch after this list)

• Device queues – set of processes waiting for an I/O device
(each device has its own device queue)

• Processes migrate among the various queues
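The linked-list organisation of the ready queue described above can be sketched as follows, reusing the hypothetical struct pcb from the PCB slide. This is an illustration only, not the code of any particular OS.

    #include <stddef.h>

    /* Ready-queue header: pointers to the first and last PCBs in the list. */
    struct ready_queue {
        struct pcb *head;
        struct pcb *tail;
    };

    /* Link a PCB at the tail of the queue. */
    void enqueue(struct ready_queue *q, struct pcb *p) {
        p->next = NULL;
        if (q->tail)
            q->tail->next = p;   /* each PCB points to the next PCB in the queue */
        else
            q->head = p;         /* queue was empty */
        q->tail = p;
    }

    /* The short-term scheduler removes a PCB from the head of the queue. */
    struct pcb *dequeue(struct ready_queue *q) {
        struct pcb *p = q->head;
        if (p) {
            q->head = p->next;
            if (q->head == NULL)
                q->tail = NULL;
        }
        return p;
    }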


Ready Queue And Various
I/O Device Queues

Representation of Process Scheduling

Schedulers
• Long-term scheduler (or job scheduler) – selects which processes
should be brought into the ready queue

• Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU
• Sometimes the only scheduler in a system

Schedulers (Cont.)
• Short-term scheduler is invoked very frequently (milliseconds) → must be fast

• Long-term scheduler is invoked very infrequently (seconds, minutes) → may be slow

• The long-term scheduler controls the degree of multiprogramming

• Processes can be described as either:


• I/O-bound process – spends more time doing I/O than computations,
many short CPU bursts
• CPU-bound process – spends more time doing computations; few very
long CPU bursts

Medium term scheduler

• Some operating systems, such as time-sharing systems, may introduce an additional, intermediate level of scheduling.
• The key idea behind a medium term scheduler is that sometimes it
can be advantageous to remove processes from memory and thus
reduce the degree of multiprogramming.
• Later the process can be reintroduced into memory and its execution
can be continued where it left off.
• This scheme is called swapping.
• The process is swapped out and is later swapped in by the medium
term scheduler.

Addition of Medium Term Scheduling

Context Switch
• When CPU switches to another process, the system must save the state of the
old process and load the saved state for the new process via a context switch.

• Context of a process represented in the PCB

• Context-switch time is overhead; the system does no useful work while


switching
• The more complex the OS and the PCB, the longer the context switch

• Context-switch time is dependent on hardware support
• Some hardware provides multiple sets of registers per CPU → multiple contexts can be loaded at once

Process Creation
• A parent process creates child processes, which, in turn, create other processes, forming a tree of processes

• Generally, a process is identified and managed via a process identifier (pid)

• Resource sharing
• Parent and children share all resources
• Children share subset of parent’s resources

• Execution
• Parent and children execute concurrently
• Parent waits until children terminate

A Tree of Processes on Solaris

Process Creation (Cont.)

• There are also two possibilities in terms of the address space of the new process
• Address space
• Child is a duplicate of the parent
• Child has a new program loaded into it

• UNIX examples (see the sketch below)
• fork system call creates a new process
• exec system call used after a fork to replace the process’s memory space with a new program
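A minimal UNIX-style sketch of this fork/exec/wait pattern is given below. The program run by the child (ls -l) is only an example, and error handling is kept to a minimum.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                /* child is a duplicate of the parent */

        if (pid < 0) {
            perror("fork");                /* fork failed */
            return 1;
        } else if (pid == 0) {
            /* Child: replace its memory space with a new program. */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");              /* reached only if exec fails */
            exit(1);
        } else {
            /* Parent: wait until the child terminates, then collect its status. */
            int status;
            waitpid(pid, &status, 0);
            if (WIFEXITED(status))
                printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
        }
        return 0;
    }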

Process Termination
• Process executes last statement and asks the operating system to
delete it (exit)
• Output data from child to parent (via wait)
• Process’ resources are de-allocated by operating system

• Parent may terminate execution of child processes (abort) when:
• The child has exceeded its allocated resources
• The task assigned to the child is no longer required
• The parent itself is exiting
• Some operating systems do not allow a child to continue if its parent terminates
• All such children are terminated – cascading termination

Inter-process Communication
• Processes within a system may be independent or cooperating

• Process is independent if it cannot affect or be affected by the other processes executing in the system.

• Cooperating process can affect or be affected by other processes.

• There are several reasons for having cooperating processes


• Information sharing - several users are interested in the same piece of information.

• Computation speedup – if we want a particular task to run faster, we must break it into subtasks, each of which will execute in parallel with the others. This is possible only when the computer has multiple processing elements (CPUs or cores).

• Modularity – to construct the system in modular fashion.

• Convenience – user may be working on many tasks at the same time like editing, printing and compiling

• Cooperating processes need inter-process communication (IPC), which will allow them to
exchange data and information.

• Two models of IPC
• Shared memory (a short sketch follows)
• Message passing
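As an illustration of the shared-memory model, the following POSIX sketch creates and maps a shared-memory object. The object name /unit2_shm is made up for this example, and on some systems the program must be linked with -lrt.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const char *name = "/unit2_shm";   /* hypothetical shared-memory object name */
        const size_t size = 4096;

        /* Create (or open) the shared-memory object and set its size. */
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

        /* Map the object into this process's address space. */
        char *region = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (region == MAP_FAILED) { perror("mmap"); return 1; }

        /* Any cooperating process that maps the same name sees this data. */
        strcpy(region, "hello from the producer");

        munmap(region, size);
        close(fd);
        /* shm_unlink(name) would remove the object once it is no longer needed. */
        return 0;
    }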
Communications Models

Message-passing model and shared-memory model

Inter-process Communication – Message Passing

• Processes that want to communicate must have a way to refer to each other.
They can use either direct or indirect communication.

Direct communication: each process must explicitly name the recipient or sender.
• The IPC facility provides two operations:
• send(P, message) – send a message to process P
• receive(Q, message) – receive a message from process Q

Indirect communication: messages are sent to and received from mailboxes.

Indirect Communication
• Operations
• create a new mailbox
• send and receive messages through mailbox
• destroy a mailbox after the communication

• Primitives are defined as:


send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A
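One concrete way to realise such a mailbox (an assumption, since the slides do not name an API) is a POSIX message queue. The name /mailbox_A below is illustrative, and on Linux the program is linked with -lrt.

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Create mailbox A with room for 10 messages of up to 128 bytes each. */
        struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10, .mq_msgsize = 128, .mq_curmsgs = 0 };
        mqd_t mbox = mq_open("/mailbox_A", O_CREAT | O_RDWR, 0600, &attr);
        if (mbox == (mqd_t)-1) { perror("mq_open"); return 1; }

        /* send(A, message) */
        const char *msg = "hello";
        if (mq_send(mbox, msg, strlen(msg) + 1, 0) < 0) perror("mq_send");

        /* receive(A, message) – blocks until a message is available */
        char buf[128];
        if (mq_receive(mbox, buf, sizeof(buf), NULL) >= 0)
            printf("received: %s\n", buf);

        mq_close(mbox);
        mq_unlink("/mailbox_A");   /* destroy the mailbox after the communication */
        return 0;
    }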

Indirect Communication
• Mailbox sharing
• P1, P2, and P3 share mailbox A
• P1, sends; P2 and P3 receive
• Who gets the message?

• Solutions
• Allow a link to be associated with at most two processes
• Allow only one process at a time to execute a receive operation
• Allow the system to arbitrarily select the receiver. The sender is notified who the receiver was.

Synchronization
• Message passing may be either blocking or non-blocking

• Blocking is considered synchronous


• Blocking send has the sender block until the message is received
• Blocking receive has the receiver block until a message is available

• Non-blocking is considered asynchronous


• Non-blocking send has the sender send the message and continue
• Non-blocking receive has the receiver receive a valid message or null

Threads
Thread

• A thread is a lightweight process created by a process.

• Example: MS Word
• A thread for grammar checking
• A thread for spell checking
• A thread for animation
• A thread for saving
• A thread for editing, and many more

• Advantages of threads
• Multiple tasks can be performed from a single process
• Reduced context switching
• Improved CPU utilization

Difference between process vs. thread

A process is heavyweight; a thread is lightweight.

A process context switch requires OS intervention, whereas a thread context switch within a process does not; when threads switch, they interact with the process (its thread library), not with the OS.

In a multiprocessing environment, each process has its own independent memory and other resources, whereas threads share the resources of their process.

The advantage of threads is that, within a process, when one thread blocks for I/O, another thread can be selected for execution.
Two types of threads

• User-level threads: managed by a user-level thread library; the kernel does not know about the existence of these threads and does not manage them.

• (The thread library contains code for creating and destroying threads, passing messages between threads, and scheduling between the threads.)

• Kernel-level threads
• Are supported directly by the OS
• The kernel maintains context information for the process as a whole and also for the individual threads inside the process
• The advantage is that the kernel can schedule multiple threads in parallel
• But they are expensive and slower, as they require OS intervention

• Thread is created by a process
• Thread will contain
• Thread ID
• Program Counter
• Set of registers
• Stack

• Thread will not contain


• Its own resources

One or more threads of a process will share


• Code section
• Data section
Single and Multithreaded Processes

Benefits

• Responsiveness

• Resource Sharing

• Economy

• Utilization of MP Architectures

User Threads

• Thread management done by user-level threads library

• Examples in different OS
- POSIX Pthreads
- Mach C-threads
- Solaris threads
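A minimal Pthreads sketch of thread creation is shown below. The worker function and the two task strings are made up for illustration, and the program is compiled with the -pthread flag.

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread runs this function; the task names are illustrative. */
    static void *worker(void *arg) {
        const char *task = arg;
        printf("thread handling: %s\n", task);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;

        /* Create two threads within this single process. */
        pthread_create(&t1, NULL, worker, "spell checking");
        pthread_create(&t2, NULL, worker, "grammar checking");

        /* Wait for both threads to finish. */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }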

Kernel Threads

• Supported by the Kernel

• Examples
- Windows 95/98/NT/2000
- Solaris
- Tru64 UNIX
- BeOS
- Linux

Multithreading Models

• Many-to-One

• One-to-One

• Many-to-Many

Many-to-One

• Many user-level threads mapped to single kernel thread.

• Used on systems that do not support kernel threads.

Many-to-One Model

One-to-One

• Each user-level thread maps to a kernel thread.

• Examples
- Windows 95/98/NT/2000
- OS/2

One-to-one Model

Many-to-Many Model

• Allows many user-level threads to be mapped to many kernel threads.

• Allows the operating system to create a sufficient number of kernel threads.
• Solaris 2
• Windows NT/2000 with the ThreadFiber package

Many-to-Many Model

Threading Issues

• Semantics of fork() and exec() system calls.


• Thread cancellation.
• Signal handling
• Thread pools
• Thread specific data

Java Threads

• Java threads may be created by:

• Extending Thread class


• Implementing the Runnable interface

• Java threads are managed by the JVM.

Java Thread States

CPU Scheduling

CPU Scheduling : Basic Concepts

• Maximum CPU utilization obtained with multiprogramming


• CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU
execution and I/O wait.

Alternating Sequence of CPU And I/O Bursts

CPU Scheduler

• Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.
• CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready state
4. Terminates
• Scheduling under 1 and 4 is non-preemptive.
• All other scheduling is preemptive.

Dispatcher : Associated with Context Switch

• Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
• switching context
• switching to user mode
• jumping to the proper location in the user program to restart that program

• Dispatch latency – time it takes for the dispatcher to stop one process
and start another running.

Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – # of processes that complete their execution per
time unit
• Turnaround time – amount of time to execute a particular
process
• Waiting time – amount of time a process has been waiting in
the ready queue
• Response time – amount of time it takes from when a request
was submitted until the first response is produced, not output
(for time-sharing environment)

Optimization Criteria
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time

First-Come, First-Served (FCFS) Scheduling

Process   Burst Time
P1        24
P2        3
P3        3
• Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:
P1 P2 P3

0 24 27 30

• Waiting time for P1 = 0; P2 = 24; P3 = 27


• Average waiting time: (0 + 24 + 27)/3 = 17
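The waiting times above can be checked with a small sketch: under FCFS each process waits for the total burst time of everything ahead of it (all arrivals are at time 0 here).

    #include <stdio.h>

    int main(void) {
        /* Burst times from the example: P1 = 24, P2 = 3, P3 = 3; all arrive at time 0. */
        int burst[] = {24, 3, 3};
        int n = 3, clock = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            /* A process waits until every process ahead of it has finished. */
            printf("P%d waits %d\n", i + 1, clock);
            total_wait += clock;
            clock += burst[i];
        }
        printf("average waiting time = %.2f\n", (double)total_wait / n);   /* 17.00 */
        return 0;
    }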

FCFS Scheduling (Cont.)

• Suppose that the processes arrive in the order: P2, P3, P1.
• The Gantt chart for the schedule is:
P2 P3 P1

0 3 6 30

• Waiting time for P1 = 6; P2 = 0; P3 = 3


• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than previous case.
• Convoy effect – short processes get stuck behind a long process

Shortest-Job-First (SJF) Scheduling

• Associate with each process the length of its next CPU burst. Use these
lengths to schedule the process with the shortest time.
• Two schemes:
• Non-preemptive – once the CPU is given to the process, it cannot be preempted until it completes its CPU burst.
• Preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
• SJF is optimal – gives minimum average waiting time for a given set of
processes.

Example of Non-Preemptive SJF

Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4
• SJF (non-preemptive)

P1 P3 P2 P4

0 7 8 12 16

• Average waiting time = (0 + 6 + 3 + 7)/4 = 4


Example of Preemptive SJF

Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4
• SJF (preemptive)

P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16

• Average waiting time = (9 + 1 + 0 + 2)/4 = 3
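The preemptive (SRTF) schedule above can be reproduced with a small one-time-unit simulation. This is a sketch written for this example only; it breaks ties by picking the lower-numbered process.

    #include <stdio.h>

    int main(void) {
        /* Arrival and burst times from the example. */
        int arrival[]   = {0, 2, 4, 5};
        int burst[]     = {7, 4, 1, 4};
        int remaining[] = {7, 4, 1, 4};
        int n = 4, done = 0, total_wait = 0;

        for (int t = 0; done < n; t++) {
            /* Pick the arrived, unfinished process with the shortest remaining time. */
            int best = -1;
            for (int i = 0; i < n; i++)
                if (arrival[i] <= t && remaining[i] > 0 &&
                    (best < 0 || remaining[i] < remaining[best]))
                    best = i;
            if (best < 0) continue;               /* CPU idle */

            if (--remaining[best] == 0) {         /* run 'best' for one time unit */
                done++;
                /* waiting time = completion - arrival - burst */
                int wait = (t + 1) - arrival[best] - burst[best];
                printf("P%d waits %d\n", best + 1, wait);
                total_wait += wait;
            }
        }
        printf("average waiting time = %.2f\n", (double)total_wait / n);   /* 3.00 */
        return 0;
    }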


Priority Scheduling

• A priority number (integer) is associated with each process


• The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority).
• Preemptive
• Non-preemptive
• SJF is a priority-scheduling algorithm where the priority is the predicted next CPU burst time.
• Problem ≡ Starvation – low-priority processes may never execute.
• Solution ≡ Aging – as time progresses, increase the priority of the process.
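Aging can be pictured as a periodic pass over the ready list, as in the sketch below. It reuses the hypothetical struct pcb from the PCB slide, and the rule of improving the priority number by one per pass is an arbitrary illustrative choice.

    #include <stddef.h>

    #define MAX_PRIORITY 0    /* smallest integer = highest priority */

    /* Run periodically by the scheduler: every process still waiting gets a
     * numerically smaller (i.e., better) priority, so it cannot starve forever. */
    void age_waiting_processes(struct pcb *ready_list) {
        for (struct pcb *p = ready_list; p != NULL; p = p->next)
            if (p->priority > MAX_PRIORITY)
                p->priority--;
    }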

Round Robin (RR)

• Each process gets a small unit of CPU time (time quantum), usually 10-
100 milliseconds. After this time has elapsed, the process is
preempted and added to the end of the ready queue.
• If there are n processes in the ready queue and the time quantum is q,
then each process gets 1/n of the CPU time in chunks of at most q time
units at once. No process waits more than (n-1)q time units.
• Performance
• q large → behaves like FIFO
• q small → q must be large with respect to the context-switch time, otherwise overhead is too high

Example of RR with Time Quantum = 20

Process   Burst Time
P1        53
P2        17
P3        68
P4        24
• The Gantt chart is:
P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 37 57 77 97 117 121 134 154 162

• Typically, RR gives a higher average turnaround time than SJF, but better response time.
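The Gantt chart above can be reproduced with the short simulation below. Because all four processes arrive at time 0, a simple circular scan over the remaining processes behaves exactly like the FIFO ready queue.

    #include <stdio.h>

    int main(void) {
        /* Burst times from the example; quantum = 20; all processes arrive at time 0. */
        int burst[]     = {53, 17, 68, 24};
        int remaining[] = {53, 17, 68, 24};
        int n = 4, quantum = 20, clock = 0, left = n;

        while (left > 0) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0) continue;
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                printf("t=%3d  P%d runs for %d\n", clock, i + 1, slice);
                clock += slice;
                remaining[i] -= slice;
                if (remaining[i] == 0) left--;
            }
        }
        printf("all processes finish at t=%d\n", clock);   /* 162, as in the Gantt chart */
        return 0;
    }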
Time Quantum and Context Switch Time

Turnaround Time Varies With The Time Quantum

Multilevel Queue

• Ready queue is partitioned into separate queues:


foreground (interactive)
background (batch)
• Each queue has its own scheduling algorithm,
foreground – RR
background – FCFS
• Scheduling must be done between the queues:
• Fixed-priority scheduling (i.e., serve all from foreground, then from background) – possibility of starvation.
• Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR and 20% to background in FCFS.

Multilevel Queue Scheduling

Multilevel Feedback Queue

• A process can move between the various queues; aging can be implemented this way.
• Multilevel-feedback-queue scheduler defined by the following
parameters:
• number of queues
• scheduling algorithms for each queue
• method used to determine when to upgrade a process
• method used to determine when to demote a process
• method used to determine which queue a process will enter when that process
needs service

Example of Multilevel Feedback Queue

• Three queues:
• Q0 – time quantum 8 milliseconds
• Q1 – time quantum 16 milliseconds
• Q2 – FCFS
• Scheduling
• A new job enters queue Q0 which is served FCFS. When it gains CPU, job receives
8 milliseconds. If it does not finish in 8 milliseconds, job is moved to queue Q1.
• At Q1 job is again served FCFS and receives 16 additional milliseconds. If it still
does not complete, it is preempted and moved to queue Q2.
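The parameters of this three-queue example can be summarised in a small sketch; the constant names and the demotion helper below are illustrative only, not part of any real scheduler.

    /* Illustrative constants for the three-queue example above. */
    enum { Q0 = 0, Q1 = 1, Q2 = 2 };
    static const int quantum[] = { 8, 16, 0 };   /* 0: Q2 is FCFS (runs to completion) */

    /* Called when a job at 'level' has used its full quantum without finishing:
     * it is demoted one level, ending up in the FCFS queue at the bottom. */
    int demote(int level) {
        return (level < Q2) ? level + 1 : Q2;
    }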

Multilevel Feedback Queues

Multiple-Processor Scheduling

• CPU scheduling more complex when multiple CPUs are available.


• Homogeneous processors within a multiprocessor.
• Load sharing
• Asymmetric multiprocessing – only one processor accesses the system
data structures, alleviating the need for data sharing.

