
OS-PROCESS MANAGEMENT Module - 2.1


Operating Systems

PROCESS
MANAGEMENT
Module-2

Part -1
Process Management
• A process can be thought of as a program in execution.
• A process will need certain resources— such as CPU time,
memory, files, and I/O devices
– to accomplish its task. These resources are allocated to the process
either when it is created or while it is executing.

• A process is the unit of work in most systems.

• Systems consist of a collection of processes: operating-system processes execute system code, and user processes execute user code.
• All these processes may execute concurrently.

2
The Process
Process memory is divided into four sections:
• The stack is used to store temporary data such as local
variables, function parameters, function return values,
return address etc.
• The heap which is memory that is dynamically allocated
during process run time
• The data section stores global variables.
• The text section comprises the compiled program code.
• Note that there is a free space between the stack and the
heap. When the stack needs more room it grows downward
into this space, and when the heap needs more room it grows
upward.
3
Process in memory
4
The Process
• A program is a passive entity, such as a file
containing a list of instructions stored on disk
(often called an executable file).
• In contrast, a process is an active entity, with a
program counter specifying the next instruction
to execute and a set of associated resources.
• A program becomes a process when an
executable file is loaded into memory
5
The Process
• Two common techniques for loading executable files are
double-clicking an icon representing the executable file and
entering the name of the executable file on the command line
(as in prog.exe or a.out).
• Although two processes may be associated with the same
program, they are nevertheless considered two separate
execution sequences.
– For instance, several users may be running different copies of the mail
program, or the same user may invoke many copies of the web browser
program.
– Each of these is a separate process; and although the text sections are
equivalent, the data, heap, and stack sections vary
6
The Process
• A process itself can be an execution environment for other
code. The Java programming environment provides a good
example.
• An executable Java program is executed within the Java virtual
machine (JVM). The JVM executes as a process that interprets
the loaded Java code and takes actions (via native machine
instructions) on behalf of that code.
• For example, to run the compiled Java program Program.class,
we would enter
java Program
• The command java runs the JVM as an ordinary process, which
in turn executes the Java program Program in the virtual
machine.
7
Process State
• As a process executes, it changes state. The state of a process
is defined in part by the current activity of that process. A
process may be in one of the following states:
– New. The process is being created.
– Running. Instructions are being executed.
– Waiting. The process is waiting for some event to
occur (such as an I/O completion or reception of a
signal).
– Ready. The process is waiting to be assigned to a
processor.
– Terminated. The process has finished execution.
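The five states and the legal transitions between them can be sketched in C. This is a hypothetical encoding for illustration only; real kernels use their own state representations.

```c
#include <stdbool.h>

/* Hypothetical process-state encoding, for illustration only. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Returns true if the transition from 'from' to 'to' appears in the
 * classic five-state process diagram. */
bool valid_transition(enum proc_state from, enum proc_state to) {
    switch (from) {
    case NEW:     return to == READY;         /* admitted by the OS */
    case READY:   return to == RUNNING;       /* dispatched by the scheduler */
    case RUNNING: return to == READY          /* interrupted */
                      || to == WAITING        /* waits for I/O or an event */
                      || to == TERMINATED;    /* exits */
    case WAITING: return to == READY;         /* I/O or event completes */
    default:      return false;               /* terminated is final */
    }
}
```

Note that a waiting process cannot go straight back to running; it must first become ready and be dispatched again.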
8
• Only one process can be running on any processor at any
instant; many processes may be ready and waiting, however. The state
diagram corresponding to these states is presented in Figure.
Figure Diagram of process state.
9
Process Control Block
• Each process is represented in the operating system by a process
control block (PCB), also called a task control block. A PCB is
shown in Figure. It contains many pieces of information associated
with a specific process.
10
Process Control Block
• Process state. The state may be new, ready, running, waiting,
halted, and so on.
• Program counter. The counter indicates the address of the
next instruction to be executed for this process.
• CPU registers. The registers vary in number and type,
depending on the computer architecture. They include
accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information. Along
with the program counter, this state information must be saved
when an interrupt occurs, to allow the process to be continued
correctly afterward.
11
Process Control Block
• CPU-scheduling information. This information includes a process
priority, pointers to scheduling queues, and any other scheduling
parameters.
• Memory-management information. This information may include
such items as the value of the base and limit registers and the page
tables, or the segment tables, depending on the memory system used
by the operating system.
• Accounting information. This information includes the amount of
CPU and real time used, time limits, account numbers, job or process
numbers, and so on.
• I/O status information. This information includes the list of I/O
devices allocated to the process, a list of open files, and so on.
In brief, the PCB simply serves as the repository for any information
that may vary from process to process.
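The fields listed above can be pictured as a C struct. This is a simplified, hypothetical layout for illustration; real PCBs (for example, Linux's task_struct) are far larger.

```c
#define MAX_OPEN_FILES 16

/* Simplified, hypothetical PCB layout, for illustration only. */
struct pcb {
    int           pid;                        /* process identifier */
    int           state;                      /* new, ready, running, ... */
    unsigned long program_counter;            /* address of next instruction */
    unsigned long registers[16];              /* saved CPU-register contents */
    int           priority;                   /* CPU-scheduling information */
    unsigned long base, limit;                /* memory-management registers */
    unsigned long cpu_time_used;              /* accounting information */
    int           open_files[MAX_OPEN_FILES]; /* I/O status information */
    struct pcb   *next;                       /* link to next PCB in a queue */
};
```

The next pointer is what lets the ready queue be kept as a linked list of PCBs, as described later under scheduling queues.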
12
Process Control Block
Figure: Diagram showing CPU switch from process to process
13
Process Scheduling
• The objective of multiprogramming is to have some process
running at all times, to maximize CPU utilization.
• The objective of time sharing is to switch the CPU among
processes so frequently that users can interact with each
program while it is running.
• To meet these objectives, the process scheduler selects an
available process (possibly from a set of several available
processes) for program execution on the CPU.
• For a single-processor system, there will never be more than
one running process. If there are more processes, the rest will
have to wait until the CPU is free and can be rescheduled.
14
Scheduling Queues
• As processes enter the system, they are put into a job queue,
which consists of all processes in the system.
• The processes that are residing in main memory and are ready
and waiting to execute are kept on a list called the ready
queue.
• This queue is generally stored as a linked list. A ready-queue
header contains pointers to the first and final PCBs in the list.
Each PCB includes a pointer field that points to the next PCB
in the ready queue.
• The list of processes waiting for a particular I/O device is
called a device queue. Each device has its own device queue
15
The ready queue and various I/O
device queues
16
Scheduling Queues
• A common representation of process scheduling is a queueing
diagram. Each rectangular box represents a queue. Two types of
queues are present: the ready queue and a set of device queues. The
circles represent the resources that serve the queues, and the arrows
indicate the flow of processes in the system.
• A new process is initially put in the ready queue. It waits there until
it is selected for execution, or dispatched. Once the process is
allocated the CPU and is executing, one of several events could
occur:
– The process could issue an I/O request and then be placed in an I/O queue.
– The process could create a new child process and wait for the child’s
termination.
– The process could be removed forcibly from the CPU, as a result of an
interrupt, and be put back in the ready queue.
17
Scheduling Queues
• In the first two cases, the process eventually switches from the
waiting state to the ready state, and is then put back in the ready
queue. A process continues this cycle until it terminates, at which
time it is removed from all queues.
18
Schedulers
• A process migrates among the various scheduling queues throughout
its lifetime. The operating system must select, for scheduling
purposes, processes from these queues.
• Schedulers are operating-system components that select an
available process to be assigned to the CPU.
• The long-term scheduler, or job scheduler, selects processes from
the pool of jobs on disk and loads them into memory for execution.
• The short-term scheduler, or CPU scheduler, selects from among
the processes that are ready to execute and allocates the CPU to one
of them.
• The medium-term scheduler selects a swapped-out process and
reintroduces it into memory, where it rejoins the ready queue.
19
Schedulers
• The primary distinction between these two schedulers lies in
frequency of execution.
• The short-term scheduler must select a new process for the CPU
frequently.
• A process may execute for only a few milliseconds before waiting
for an I/O request. Often, the short-term scheduler executes at least
once every 100 milliseconds. Because of the short time between
executions, the short-term scheduler must be fast.
• If it takes 10 milliseconds to decide to execute a process for 100
milliseconds, then 10/(100 + 10) = 9 percent of the CPU is being
used (wasted) simply for scheduling the work.
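The 9 percent figure above is just the ratio of scheduling time to total time; a one-line helper makes the arithmetic explicit (function name is my own, for illustration):

```c
/* Fraction of CPU time spent on scheduling itself:
 * dispatch_ms / (quantum_ms + dispatch_ms).
 * E.g. 10 ms of scheduling per 100 ms of work gives 10/110, about 9%. */
double sched_overhead(double dispatch_ms, double quantum_ms) {
    return dispatch_ms / (quantum_ms + dispatch_ms);
}
```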
20
Schedulers
• The long-term scheduler executes much less frequently; minutes may
separate the creation of one new process and the next.
• The long-term scheduler controls the degree of multiprogramming
(the number of processes in memory). If the degree of
multiprogramming is stable, then the average rate of process creation
must be equal to the average departure rate of processes leaving the
system.
• Thus, the long-term scheduler may need to be invoked only when a
process leaves the system. Because of the longer interval between
executions, the long-term scheduler can afford to take more time to
decide which process should be selected for execution
21
Schedulers
• Processes can be described as either:
– I/O-bound process – spends more time doing I/O than computations,
– CPU-bound process – spends more time doing computations and few I/O
operations
• An efficient scheduling system will select a good mix of CPU-
bound processes and I/O bound processes.
– If the scheduler selects more I/O bound process, then I/O queue
will be full and ready queue will be empty.
– If the scheduler selects more CPU bound process, then ready
queue will be full and I/O queue will be empty.
22
Schedulers
• Time-sharing systems employ a medium-term scheduler. It swaps
processes out of the ready queue (and out of memory) and later
swaps them back into the ready queue.
• When system loads get high, this scheduler will swap one or more
processes out of the ready queue for a few seconds, in order to allow
smaller faster jobs to finish up quickly and clear the system.
• Advantages of medium-term scheduler –
– To remove process from memory and thus reduce the degree of
multiprogramming (number of processes in memory).
– To make a proper mix of processes (CPU bound and I/O bound)
23
Schedulers
24
Context Switch
• Switching the CPU to another process requires performing a state
save of the current process and a state restore of a different process.
This task is known as a context switch.
• When a context switch occurs, the kernel saves the context of the old
process in its PCB and loads the saved context of the new process
scheduled to run.
• Context-switch time is pure overhead, because the system does no
useful work while switching. Switching speed varies from machine
to machine, depending on the memory speed, the number of registers
that must be copied, and the existence of special instructions. A
typical speed is a few milliseconds.
• Context-switch times are highly dependent on hardware support.
25
Operations on Processes
• The processes in most systems can execute
concurrently, and they may be created and
deleted dynamically.
• The systems must provide a mechanism for
process creation and termination.
26
Process Creation
• A process may create several new processes during the
execution.
• The creating process is called a parent process, and the new
processes are called the children of that process.
• Each of these new processes may in turn create other
processes, forming a tree of processes.
• Most operating systems (including UNIX, Linux, and
Windows) identify processes according to a unique process
identifier (or pid), which is typically an integer number.
• The pid provides a unique value for each process in the
system, and it can be used as an index to access various
attributes of a process within the kernel.
27
Process Creation
• Figure illustrates a typical process tree for the Solaris operating
system, showing the name of each process and its pid. (Some
operating systems, such as Linux, prefer the term task instead of
process.)
28
Process Creation
• On typical Solaris systems, the process at the top of
the tree is the ‘sched’ process with PID of 0.
• The ‘sched’ process creates several children processes
– init, pageout and fsflush.
• Pageout and fsflush are responsible for managing
memory and file systems.
• The init process with a PID of 1, serves as a parent
process for all user processes
29
Process Creation
• A process will need certain resources (CPU time, memory,
files, I/O devices) to accomplish its task.
• When a process creates a subprocess, the subprocess may be
able to obtain its resources in two ways:
• directly from the operating system
• Subprocess may take the resources of the parent process.
• The resource can be taken from parent in two ways –
• The parent may have to partition its resources among its
children
• Share the resources among several children.
30
Process Creation
• There are two options for the parent process after creating the child:
– Wait for the child process to terminate and then continue
execution. The parent makes a wait() system call.
– Run concurrently with the child, continuing to execute without
waiting.
• Two possibilities for the address space of the child relative to the
parent:
– The child may be an exact duplicate of the parent, sharing the
same program and data segments in memory. Each will have their
own PCB, including program counter, registers, and PID. This is
the behaviour of the fork system call in UNIX.
– The child process may have a new program loaded into its
address space, with all new code and data segments. This is the
behaviour of the spawn system calls in Windows.
31
Process Creation
32
Process Creation
• In UNIX, a child process is created by the fork() system call.
• The fork() system call, if successful, returns the PID of the
child process to the parent and returns zero to the child process.
• On failure, it returns -1 to the parent. The process IDs of the
current process and of its direct parent can be obtained using the
getpid() and getppid() system calls, respectively.
• The parent waits for the child process to complete with the
wait() system call. When the child process completes, the
parent process resumes and completes its execution.
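The fork()/wait() behaviour described above can be sketched in a short POSIX C program. Error handling is abbreviated, and the child's exit status of 7 is an arbitrary value chosen for the demonstration.

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child, let it exit with a known status, and have the
 * parent wait for it.  Returns the child's exit status. */
int fork_demo(void) {
    pid_t pid = fork();
    if (pid < 0) {                 /* failure: fork() returns -1 to the parent */
        perror("fork");
        return -1;
    }
    if (pid == 0) {                /* child: fork() returned 0 */
        printf("child:  pid=%d ppid=%d\n", getpid(), getppid());
        _exit(7);                  /* child terminates with status 7 */
    }
    /* parent: fork() returned the child's PID */
    int status;
    waitpid(pid, &status, 0);      /* parent waits for the child to finish */
    printf("parent: child %d finished\n", pid);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The parent resumes only after waitpid() returns, which is exactly the wait()-then-continue pattern the slide describes.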
33
34
Process Creation
• In Windows, a child process is created using the CreateProcess()
function. CreateProcess() returns a nonzero value if the child is
created and zero if creation fails.
35
Process Termination
• A process terminates when it finishes executing its
last statement and asks the operating system to delete
it, by using the exit () system call.
• All of the resources assigned to the process like
memory, open files, and I/O buffers, are deallocated
by the operating system.
• A process can cause the termination of another
process by using appropriate system call. The parent
process can terminate its child processes by knowing
of the PID of the child.
36
Process Termination
• A parent may terminate the execution of children for a
variety of reasons, such as:
• The child has exceeded its usage of the resources it has
been allocated.
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system
terminates all the children. This is called cascading
termination
37
• A process that has terminated, but whose parent has not yet
called wait(), is known as a zombie process.
• All processes transition to this state when they terminate, but
generally they exist as zombies only briefly. Once the parent
calls wait(), the process identifier of the zombie process and its
entry in the process table are released.
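The zombie interval can be observed with a short POSIX sketch: between the child's _exit() and the parent's waitpid(), the child exists only as a process-table entry. The one-second delay is arbitrary, just to make the window visible.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Create a child that exits at once; the parent delays its wait(),
 * so the child sits in the zombie state until waitpid() reaps it.
 * Returns the reaped child's PID on success. */
pid_t zombie_demo(void) {
    pid_t pid = fork();
    if (pid == 0)
        _exit(0);              /* child terminates; it is now a zombie */
    sleep(1);                  /* while the parent sleeps, the child
                                * remains a zombie in the process table */
    return waitpid(pid, NULL, 0);  /* reap: the zombie's entry is released */
}
```

Running `ps` during the sleep would show the child marked `<defunct>` (zombie) until the parent calls waitpid().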
• Consider what would happen if a parent did not invoke wait() and
instead terminated, thereby leaving its child processes as
orphans. Linux and UNIX address this scenario by assigning
the init process as the new parent of orphan processes.
38
Interprocess Communication
• Processes executing concurrently may be either
cooperating or independent processes.
– Independent Processes – processes that cannot
affect other processes or be affected by other
processes executing in the system.
– Cooperating Processes – processes that can affect
other processes or be affected by other processes
executing in the system.
39
Interprocess Communication
Co-operation among processes is allowed for the following reasons
•Information Sharing - Several processes may need to access the
same file, so the information must be accessible at the same time
to all of them.
•Computation speedup - A solution to a problem can often be found
faster if the problem can be broken down into sub-tasks that are
solved simultaneously (particularly when multiple processors are
involved).
•Modularity - A system can be divided into cooperating modules
and executed by sending information among one another.
•Convenience - Even a single user can work on multiple tasks by
information sharing.
40
Interprocess Communication
• Cooperating processes require an interprocess
communication (IPC) mechanism that will allow them to
exchange data and information.
• There are two fundamental models of interprocess
communication: shared memory and message passing
• In the shared-memory model, a region of memory that is
shared by cooperating processes is established. Processes can
then exchange information by reading and writing data to the
shared region.
• In the message-passing model, communication takes place by
means of messages exchanged between the cooperating
processes.
41
Interprocess Communication
Figure Communications models. (a) Message passing. (b) Shared memory.
42
Interprocess Communication
• Both of the models just mentioned are common in operating
systems, and many systems implement both. Message passing
is useful for exchanging smaller amounts of data, because no
conflicts need be avoided. Message passing is also easier to
implement in a distributed system than shared memory.
Sl No  Shared Memory                              Message Passing

1      A region of memory is shared by the        Messages are exchanged among the
       communicating processes, into which        processes using message objects.
       information is written and read.

2      Useful for sending large blocks of data.   Useful for sending small amounts of data.

3      System calls are used only to create       System calls are used during every
       the shared memory.                         read and write operation.

4      Messages are transferred faster, as        Messages are communicated more slowly.
       there are no system calls.
43
Interprocess Communication
• Shared Memory is faster once it is set up, because no system
calls are required and access occurs at normal memory speeds.
Shared memory is generally preferable when large amounts of
information must be shared quickly on the same computer.
• Message Passing requires system calls for every message
transfer, and is therefore slower, but it is simpler to set up and
works well across multiple computers. Message passing is
generally preferable when the amount and/or frequency of data
transfers is small.
44
Shared-Memory Systems
• A region of shared memory is created within the address
space of a process that needs to communicate. Other
processes that need to communicate attach this shared-memory
region to their own address space.
• The form of data and position of creating shared memory
area is decided by the process. Generally, a few messages
must be passed back and forth between the cooperating
processes first in order to set up and coordinate the shared
memory access.
• The processes must take care that they do not write data to
the same shared-memory location at the same time.
45
Producer– Consumer Problem
• This is a classic example, in which one process is
producing data and another process is consuming the data.
• The data is passed via an intermediary buffer (shared
memory). The producer puts the data to the buffer and the
consumer takes out the data from the buffer.
• A producer can produce one item while the consumer is
consuming another item. The producer and consumer
must be synchronized, so that the consumer does not try to
consume an item that has not yet been produced.
• In this situation, the consumer must wait until an item is
produced
46
Producer– Consumer Problem
• A compiler may produce assembly code that is consumed by
an assembler. The assembler, in turn, may produce object
modules that are consumed by the loader.
• A web server produces (that is, provides) HTML files and
images, which are consumed (that is, read) by the client web
browser requesting the resource.
• One solution to the producer– consumer problem uses shared
memory. To allow producer and consumer processes to run
concurrently, we must have available a buffer of items that can
be filled by the producer and emptied by the consumer.
• This buffer will reside in a region of memory that is shared by
the producer and consumer processes
47
Producer– Consumer Problem
• There are two types of buffers into which information can be
put –
– Unbounded buffer
– Bounded buffer
• With an unbounded buffer, there is no limit on the size of the
buffer, and hence no limit on the data the producer can produce;
but the consumer may have to wait for new items.
• With a bounded buffer, the buffer size is fixed: the producer has
to wait if the buffer is full, and the consumer has to wait if the
buffer is empty.
48
Producer– Consumer Problem
• This example uses shared memory as a circular queue. The in
and out are two pointers to the array. Note in the code below
that only the producer changes "in", and only the consumer
changes "out".
• First the following data is set up in the shared memory area:
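The declarations were shown as a figure in the original slides; a sketch along the lines of the classic bounded-buffer setup, with the item type assumed here to be int, is:

```c
#define BUFFER_SIZE 10

typedef int item;            /* kind of data being passed; assumed int here */

/* These variables live in the shared-memory region. */
item buffer[BUFFER_SIZE];    /* circular queue of items */
int  in  = 0;                /* next free slot: written only by the producer */
int  out = 0;                /* first full slot: written only by the consumer */
```

Because in and out each have a single writer, the producer and consumer can run concurrently without corrupting the indices.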
49
Producer– Consumer Problem
• The producer process –
• Note that the buffer is full when [ (in+1) % BUFFER_SIZE ==
out]
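The producer code was also a figure in the slides; a self-contained sketch following the textbook's busy-waiting scheme (declarations repeated so the fragment stands alone) is:

```c
#define BUFFER_SIZE 10
typedef int item;

item buffer[BUFFER_SIZE];
int  in = 0, out = 0;        /* shared with the consumer */

/* Producer: busy-waits while the buffer is full, then appends an item. */
void produce(item next_produced) {
    while (((in + 1) % BUFFER_SIZE) == out)
        ;  /* do nothing -- buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;  /* only the producer changes 'in' */
}
```

Note that one slot is always left unused: the full test (in+1) % BUFFER_SIZE == out means the buffer holds at most BUFFER_SIZE - 1 items, which is how full and empty are distinguished.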
50
Producer– Consumer Problem
• The consumer process –
• Note that the buffer is empty when [ in == out]
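Likewise for the consumer, whose figure is missing; a matching self-contained sketch is:

```c
#define BUFFER_SIZE 10
typedef int item;

item buffer[BUFFER_SIZE];
int  in = 0, out = 0;        /* shared with the producer */

/* Consumer: busy-waits while the buffer is empty, then removes an item. */
item consume(void) {
    while (in == out)
        ;  /* do nothing -- buffer is empty */
    item next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;  /* only the consumer changes 'out' */
    return next_consumed;
}
```

Together with the producer, this gives a lock-free single-producer, single-consumer queue: correctness rests on each index having exactly one writer.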
51
Message-Passing Systems
A mechanism to allow process communication without sharing
address space. It is used in distributed systems.
•A communication link must be established between the
cooperating processes before messages can be sent.
•A message-passing facility provides at least two operations:
send(message) receive(message)
•Messages sent by a process can be either fixed or variable in size.
If only fixed-sized messages can be sent, the system-level
implementation is straightforward.
•Variable-sized messages require a more complex system-level
implementation, but the programming task becomes simpler. This
is a common kind of tradeoff seen throughout operating-system
design.
52
Message-Passing Systems
• There are three methods of creating the link between the
sender and the receiver-
1. Direct or indirect communication (naming)
2. Synchronous or asynchronous communication
(Synchronization)
3. Automatic or explicit buffering
53
Naming
• Processes that want to communicate must have a way to refer
to each other. They can use either direct or indirect
communication.
• Under direct communication, each process that wants to
communicate must explicitly name the recipient or sender of
the communication.
• In this scheme, the send() and receive() primitives are defined
as:
– send(P, message)—Send a message to process P.
– receive(Q, message)— Receive a message from process Q.
54
Naming
• A communication link in this scheme has the
following properties:
– A link is established automatically between every
pair of processes that want to communicate. The
processes need to know only each other’s identity
to communicate.
– A link is associated with exactly two processes.
– Between each pair of processes, there exists exactly
one link.
55
Naming
• This scheme exhibits symmetry in addressing; that is, both the
sender process and the receiver process must name the other to
communicate.
• A variant of this scheme employs asymmetry in addressing.
Here, only the sender names the recipient; the recipient is not
required to name the sender.
– send(P, message)—Send a message to process P.
– receive(id, message)— Receive a message from any process. The
variable id is set to the name of the process with which communication
has taken place.
• Disadvantage of direct communication – any change in the
identifier of a process may require changing that identifier
throughout the whole system, at every sender and receiver.
56
Naming
• Indirect communication uses shared mailboxes, or ports.
• A mailbox or port is used to send and receive messages.
Mailbox is an object into which messages can be sent and
received. It has a unique ID. Using this identifier messages are
sent and received.
• Two processes can communicate only if they have a shared
mailbox. The send and receive functions are –
– send (A, message) – send a message to mailbox A
– receive (A, message) – receive a message from mailbox A
57
Naming
• Properties of communication link:
• A link is established between a pair of processes only if they
have a shared mailbox.
• A link may be associated with more than two processes.
• Between each pair of communicating processes, there may be
any number of links; each link is associated with one mailbox.
• A mailbox can be owned by the operating system. The OS must
provide mechanisms to –
• create a new mailbox
• send and receive messages from mailbox
• delete mailboxes.
58
Naming
• Now suppose that processes P1, P2, and P3 all share mailbox A.
Process P1 sends a message to A, while both P2 and P3 execute
a receive() from A. Which process will receive the message
sent by P1? The answer depends on which of the following
methods we choose:
– Allow a link to be associated with two processes at most.
– Allow at most one process at a time to execute a receive()
operation.
– Allow the system to select arbitrarily which process will
receive the message (that is, either P2 or P3, but not both, will
receive the message). The system may define an algorithm for
selecting which process will receive the message (for example,
round robin, where processes take turns receiving messages).
The system may identify the receiver to the sender.
59
Synchronization
• Communication between processes takes place through calls to
send() and receive() primitives.
• There are different design options for implementing each
primitive. Message passing may be either blocking or
nonblocking — also known as synchronous and
asynchronous.
– Blocking send. The sending process is blocked until the
message is received by the receiving process or by the mailbox.
– Nonblocking send. The sending process sends the message
and resumes operation.
– Blocking receive. The receiver blocks until a message is
available.
– Nonblocking receive. The receiver retrieves either a valid
message or a null.
60
Synchronization
• Different combinations of send() and receive() are possible.
When both send() and receive() are blocking, we have a
rendezvous between the sender and the receiver.
• The solution to the producer–consumer problem becomes
trivial when we use blocking send() and receive() statements.
The producer merely invokes the blocking send() call and waits
until the message is delivered to either the receiver or the
mailbox. Likewise, when the consumer invokes receive(), it
blocks until a message is available.
61
Synchronization
message next_produced;
while (true) {
/* produce an item in next_produced */
send(next_produced);
}
The producer process using message passing
62
Synchronization
message next_consumed;
while (true) {
receive(next_consumed);
/* consume the item in next_consumed */
}

The consumer process using message passing
63
Buffering
• Whether communication is direct or indirect, messages exchanged by
communicating processes reside in a temporary queue. Basically,
such queues can be implemented in three ways:-
• Zero capacity. The queue has a maximum length of zero; thus, the
link cannot have any messages waiting in it. In this case, the sender
must block until the recipient receives the message.
• Bounded capacity. The queue has finite length n; thus, at most n
messages can reside in it. If the queue is not full when a new
message is sent, the message is placed in the queue (either the
message is copied or a pointer to the message is kept), and the
sender can continue execution without waiting. The link’s capacity
is finite, however. If the link is full, the sender must block until
space is available in the queue.
• Unbounded capacity. The queue’s length is potentially infinite;
thus, any number of messages can wait in it. The sender never
blocks.
64