Module3 Process Synchronization

CHAPTER 6

PROCESS SYNCHRONIZATION
Introduction
• Processes can execute concurrently.
– A process may be interrupted at any time, partially completing its execution.
• Concurrent access to shared data may result in data inconsistency.
• Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
Producer-Consumer problem
Suppose that we want to provide a solution to the producer-consumer problem that fills all the buffers.
We can do so by having an integer counter that keeps track of the number of full buffers. Initially, counter is set to 0.
counter is incremented by the producer after it produces a new item into a buffer and is decremented by the consumer after it consumes a buffer.
Producer:
while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ;   /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
Producer-Consumer problem (Cont.)
Consumer:
while (true) {
    while (counter == 0)
        ;   /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
Race Condition:
A race condition occurs when several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place.
To guard against race conditions, we require that the processes be synchronized in some way.
Critical Section Problem
• Consider a system of n processes {P0, P1, …, Pn-1}.
• Each process has a segment of code called the critical section, in which the process may be changing common variables, updating a table, writing a file, etc.
– When one process is in its critical section, no other process may be in its critical section.
• The critical-section problem is to design a protocol that the processes can use to cooperate.
• Each process must ask permission to enter its critical section in an entry section, may follow the critical section with an exit section, and then executes the remainder section.
Critical Section

• General structure of process Pi:
• Entry Section: a block of code executed in preparation for entering the critical section.
• Exit Section: the code executed upon leaving the critical section.
• Remainder Section: the rest of the code.
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress - When no process is executing in its critical section, any process that requests entry into its critical section must be permitted to enter without indefinite delay.
3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Critical-Section Handling in OS

• Two approaches, depending on whether the kernel is preemptive or non-preemptive:
– Preemptive – allows preemption of a process while it is running in kernel mode.
– Non-preemptive – a process runs until it exits kernel mode, blocks, or voluntarily yields the CPU.
• A non-preemptive kernel is essentially free of race conditions on kernel data structures.
Peterson’s Solution
• Good algorithmic description of solving the problem.

Two-process solution:

• Assume that the load and store machine-language instructions are atomic, i.e., cannot be interrupted.
• The two processes share two variables:
– int turn;
– boolean flag[2];
• The variable turn indicates whose turn it is to enter the critical section.
• The flag array is used to indicate if a process is ready to enter the critical section; flag[i] = true implies that process Pi is ready.
Algorithm for Process Pi

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);
Peterson’s Solution (Cont.)

• Provable that the three CS requirements are met:
1. Mutual exclusion is preserved: Pi enters its CS only if either flag[j] = false or turn = i.
2. The progress requirement is satisfied.
3. The bounded-waiting requirement is met.
Semaphores
• Synchronization tool that provides more sophisticated ways for processes to synchronize their activities.
• Semaphore S – integer variable.
• Can only be accessed via two indivisible (atomic) operations:
– wait() and signal(), originally called P() and V().
• Definition of the wait() operation:
wait(S) {
    while (S <= 0)
        ;   /* busy wait */
    S--;
}
• Definition of the signal() operation:
signal(S) {
    S++;
}
Semaphore Usage
Advantages of semaphores:
• Solves the critical-section problem.
• Decides the order of execution of processes.
• Supports resource management and can solve various synchronization problems.

Types of semaphores:
• Counting semaphore – the integer value can range over an unrestricted domain.
• Binary semaphore – the integer value can range only between 0 and 1.

Semaphore Implementation:
• In a multiprogramming system, busy waiting wastes CPU cycles that some other process might be able to use productively. This type of semaphore is called a spinlock, because the process spins while waiting for the lock.
Semaphore Implementation with no Busy Waiting
• With each semaphore there is an associated waiting queue.
• Each semaphore has two data items:
– value (of type integer)
– pointer to a list of processes waiting on the semaphore.

• Two operations:
– block – place the process invoking the operation on the appropriate waiting queue.
– wakeup – remove one of the processes in the waiting queue and place it in the ready queue.
typedef struct {
    int value;
    struct process *list;
} semaphore;
Implementation with no Busy Waiting (Cont.)

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
Monitors
• A high-level abstraction that provides a convenient and effective mechanism for process synchronization.
• An abstract data type; its internal variables are accessible only by code within the monitor’s procedures.
• Only one process may be active within the monitor at a time.

monitor monitor-name
{
    // shared variable declarations
    procedure P1 (…) { …. }
    procedure Pn (…) {……}
    Initialization code (…) { … }
}
Chapter 8
Memory management
Basic Hardware
• Main memory and the registers built into the processor itself are the only storage that the CPU can access directly.
• If the data are not in memory, they must be moved there before the CPU can operate on them.
• Make sure that each process has a separate memory space.
• To do this, we need the ability to determine the range of legal addresses that the process may access and to ensure that the process can access only these legal addresses.
• Legal addresses – protection is provided through two registers: the base register and the limit register.
Basic Hardware
• A pair of base and limit registers define the logical address space.
• Logical address: the address generated by the CPU.
• Base register: holds the smallest legal physical memory address.
• Limit register: specifies the size of the range.
• The CPU must check every memory access generated in user mode to be sure that it is between base and base + limit for that user.
Hardware Address Protection

Base – smallest legal physical address
Limit – size of the range

Eg: CPU address – 256002
Base – 256000 (256002 >= 256000)
Limit – (300040 - 256000) = 44040
Base + Limit = 300040, and 256002 < 300040, so the access is legal.
• The base and limit registers can be loaded only by the operating system, which uses a special privileged instruction.
• These privileged instructions can be executed only in kernel mode, and only the operating system executes in kernel mode.
• Only the operating system can load the base and limit registers.
• The OS can change the value of the registers, but user programs cannot change the register contents.
Binding of Instructions and Data to Memory
• Binding of instructions and data to memory addresses can happen at three different stages.
• Compile time: If it is known at compile time where the process will reside in memory, then absolute code can be generated. Eg: MS-DOS.
• Load time: If it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code.
• Execution time: If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time.
Logical versus Physical address space
• The address generated by the CPU is known as the logical address (virtual address).
• The address seen by the memory unit is known as the physical address.
• The compile-time and load-time address-binding methods generate identical logical and physical addresses.
• The set of all logical addresses generated by a program is a logical address space.
• The set of all physical addresses corresponding to these logical addresses is a physical address space.
Swapping
• A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
• Backing store: a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images.
• Round-robin scheduling: when a time quantum expires, the memory manager starts to swap out the process that just finished and swaps another process into the memory space that has been freed.
Contiguous Allocation
• Memory is usually divided into two partitions: one for the resident operating system and one for the user processes.
• Contiguous allocation is one of the early methods. The resident operating system is usually held in low memory; user processes reside in high memory.
• Consider how to allocate available memory to the processes that are in the input queue waiting to be brought into memory.
• In contiguous memory allocation, each process is contained in a single contiguous section of memory.
Memory Allocation
• The simplest method for allocating memory is to divide memory into several fixed-sized partitions.
• Each partition may contain exactly one process. The degree of multiprogramming is bound by the number of partitions.
• In the multiple-partition method, when a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.
• In the variable-partition scheme, the operating system keeps a table indicating which parts of memory are available and which are occupied.
• Initially, all memory is available for user processes and is considered one large block of available memory, called a hole.
Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes?
• First-fit: Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or at the location where the previous first-fit search ended.
• Best-fit: Allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size.
– Produces the smallest leftover hole.
• Worst-fit: Allocate the largest hole; must also search the entire list.
– Produces the largest leftover hole.
• First-fit and best-fit are better than worst-fit in terms of speed and storage utilization, but first-fit is generally faster.
Fragmentation
Both first fit and best fit suffer from external fragmentation.
• External Fragmentation – exists when enough total memory space exists to satisfy a request, but the space is not contiguous.
• Internal Fragmentation – allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.

Eg: Consider a multiple-partition scheme with a hole of 18,464 bytes. Suppose a process requests 18,462 bytes; if we allocate exactly the requested block, we are left with a hole of 2 bytes (unused memory).

• Statistical analysis of first fit reveals that, given N allocated blocks, another 0.5N blocks will be lost to fragmentation.
– That is, one-third of memory may be unusable: the 50-percent rule.
Fragmentation (Cont.)
• Reduce external fragmentation by compaction.
– Shuffle memory contents to place all free memory together in one large block.
– Compaction is possible only if relocation is dynamic and is done at execution time.
– Relocation then requires only moving the program and data and changing the base register to reflect the new base address.
– Another solution: permit the logical address space to be non-contiguous, thus allowing physical memory to be allocated to a process wherever such memory is available.
Segmentation
• Segmentation is a memory-management scheme that supports the user view of memory.
• A program is a collection of segments.
– A segment is a logical unit such as: main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays.

User’s view of a program
Segmentation Architecture
• A logical address consists of a two-tuple:
<segment-number, offset>
• Segment table – maps two-dimensional logical addresses to one-dimensional physical addresses; each table entry has:
– base – contains the starting physical address where the segment resides in memory
– limit – specifies the length of the segment
• Segment-table base register (STBR): points to the segment table’s location in memory.
• Segment-table length register (STLR): indicates the number of segments used by a program.
Segmentation Hardware
